F# Math Provider: wrappers over the native BLAS and LAPACK runtimes that let F# users perform
linear algebra easily.
Yin Zhu, firstname.lastname@example.org
The Math Provider was part of F# PowerPack, developed by an employee of Microsoft Research Cambridge. It uses the P/Invoke technique to wrap LAPACK functions and provides a simple F# interface. Although P/Invoking into native code creates problems (mostly around memory),
it does provide an easy, free, correct, and
efficient way to do linear algebra in F#.
However, this library was later removed from F# PowerPack, with no official explanation. I suspect one reason is that the original programmer left MSRC; another is that developing a math library for F# deviates from the main job of the F# designers and
implementers, who are primarily language people rather than numerical-computing specialists, and who remain a small team concentrating on the F# language itself.
Because the F# source code is now under the Apache 2.0 license, and the Math Provider project is effectively dead on the Microsoft side, I'd like to continue maintaining this small project and provide assistance to the community. If you'd like to contribute to this project,
you are welcome! Please email me.
Currently this library supports only the 32-bit Windows platform. It runs on 64-bit Windows, but in 32-bit mode. The main obstacle is that I don't have a 64-bit Fortran compiler on Windows, and I am not familiar with the differences between 32-bit and 64-bit LAPACK
(migrating from 32-bit to 64-bit requires considerable experience).
The Math Provider supports .NET 2.0 (Visual Studio 2008) and .NET 4.0 (Visual Studio 2010).
After downloading the release, you will find three DLLs: MathProvider.dll, blas.dll, and lapack.dll. In your Visual Studio project, add a reference to MathProvider.dll, and put blas.dll and lapack.dll into a folder that .NET can find at run time. If you
don't know what "searchable" or the .NET GAC means, just put the two DLLs in the folder where your .exe file is, e.g., bin\Debug or bin\Release.
If you are using F# Interactive, you can use
System.Environment.CurrentDirectory <- @"C:\netlibFolder"
to set the current directory to the folder that contains the LAPACK runtime.
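For example, a complete F# Interactive setup might look like the following sketch. The paths here are illustrative placeholders, not actual locations; substitute wherever you put the DLLs:

```fsharp
// Illustrative F# Interactive session (paths are placeholders):
#r @"C:\path\to\MathProvider.dll"                           // reference the managed wrapper
System.Environment.CurrentDirectory <- @"C:\netlibFolder"   // folder holding blas.dll & lapack.dll
```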
// reference F# PowerPack & MathProvider
// give the module a short alias
module L = MathProvider.LinearAlgebra
// set the directory of the Lapack runtime (blas.dll & lapack.dll)
System.Environment.CurrentDirectory <- @"D:\FSharp\math-provider\LapackRuntime\Netlib"
//System.Environment.CurrentDirectory <- @"D:\FSharp\math-provider\LapackRuntime\MKL"
// start the native provider; otherwise the managed F# fallback implementation is used
// create two 3x3 matrices
let A = matrix [ [12.; -51.; 4.; ]; [6.; 167.; -68.;]; [-4.; 24.; -41.; ] ]
let B = matrix [ [ 2.; 1.; 1.;] ; [ 1.; 2.; 1.;]; [ 1.; 1.; 2.;] ]
let det = L.det A
let inv = L.inv A
// qr decomposition
let q, r = L.qr A
// lu decomposition
let p, l, u = L.lu A
// cholesky decomposition
let ch = L.chol B
// svd decomposition
let v, s, ut = L.svd A
// eigen decomposition for symmetric matrix
let a, b = L.cov A A |> L.eigenSym
(* result *)
val A : matrix = matrix [[12.0; -51.0; 4.0]
[6.0; 167.0; -68.0]
[-4.0; 24.0; -41.0]]
val B : matrix = matrix [[2.0; 1.0; 1.0]
[1.0; 2.0; 1.0]
[1.0; 1.0; 2.0]]
val det : float = -85750.0
val inv : matrix = matrix [[0.06081632653; 0.02326530612; -0.03265306122]
[-0.006040816327; 0.005551020408; -0.009795918367]
[-0.009469387755; 0.0009795918367; -0.02693877551]]
val r : matrix = matrix [[-14.0; -21.0; 14.0]
[0.0; -175.0; 70.0]
[0.0; 0.0; -35.0]]
val q : matrix = matrix [[-0.8571428571; 0.3942857143; 0.3314285714]
[-0.4285714286; -0.9028571429; -0.03428571429]
[0.2857142857; -0.1714285714; 0.9428571429]]
val u : matrix = matrix [[12.0; -51.0; 4.0]
[0.0; 192.5; -70.0]
[0.0; 0.0; -37.12121212]]
val p : (int -> int)
val l : matrix = matrix [[1.0; 0.0; 0.0]
[0.5; 1.0; 0.0]
[-0.3333333333; 0.03636363636; 1.0]]
val ch : matrix = matrix [[1.414213562; 0.7071067812; 0.7071067812]
[0.0; 1.224744871; 0.4082482905]
[0.0; 0.0; 1.154700538]]
val v : matrix = matrix [[-0.2543778627; -0.5139835724; -0.81921474]
[0.9464104305; 0.0419982359; -0.3202240549]
[0.1989954776; -0.8567712854; 0.4757559925]]
val ut : matrix = matrix [[0.00960262784; 0.922507462; -0.385859783]
[-0.07574450365; 0.3854399898; 0.9196188256]
[-0.9970810196; -0.0203960004; -0.0735761066]]
val s : Vector<float> = vector [|190.5672437; 32.85688323; 13.69492038|]
val b : Matrix<float> = matrix [[0.9970810196; -0.07574450365; -0.00960262784]
[0.0203960004; 0.3854399898; -0.922507462]
[0.0735761066; 0.9196188256; 0.385859783]]
val a : vector = vector [|187.5508443; 1079.574776; 36315.87438|]
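As a quick sanity check, the factorizations above can be verified numerically by looking at residuals. This is a sketch that assumes the F# PowerPack Matrix module (Matrix.fold and the Transpose member) and the A, B, q, r, ch values from the session above; maxAbs is a hypothetical helper defined here, not part of the library:

```fsharp
// Largest absolute entry of a matrix -- used to measure residuals.
let maxAbs (X: matrix) = X |> Matrix.fold (fun acc v -> max acc (abs v)) 0.0

// QR: A should equal q * r, so the residual is near 0 (up to rounding).
maxAbs (q * r - A)

// Cholesky: with the upper-triangular factor shown above, B = ch' * ch.
maxAbs (ch.Transpose * ch - B)
```

A residual on the order of 1e-13 or smaller indicates the decomposition round-trips correctly for these small matrices.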
About Performance
The BLAS and LAPACK runtimes contained in the release are compiled from Netlib's reference implementation. I have already compiled blas.dll and lapack.dll for you, so you don't need to deal with Fortran compilers and related issues. However,
these two DLLs are not optimized.
You may care about the performance of a math library. But before rushing off to find other libraries or buying Intel MKL, you should know some facts about performance:
1) .NET code is not necessarily slower than the same algorithm compiled to native code. Array bounds checking adds some overhead, but not much. See
this. Typically, .NET is within a factor of 2 of C/C++ when arrays are used heavily; when there is no heavy array usage, numerical code runs at about the same speed.
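A crude way to see this for yourself is to time a tight indexed array loop in F# (bounds checks included) and compare it against the equivalent C loop. This is only an illustrative micro-benchmark sketch, not a rigorous measurement; timings will vary by machine and runtime:

```fsharp
open System.Diagnostics

// Sum a large float array with an explicit indexed loop and time it.
// Each xs.[i] access goes through the CLR's bounds check.
let n = 10000000
let xs = Array.init n float
let sw = Stopwatch.StartNew()
let mutable total = 0.0
for i in 0 .. n - 1 do
    total <- total + xs.[i]
sw.Stop()
printfn "sum = %g in %d ms" total sw.ElapsedMilliseconds
```

In practice the JIT hoists many bounds checks out of simple loops like this one, which is why the managed version usually lands within the factor-of-2 range mentioned above.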
2) Highly optimized native code is much faster than ordinary native code. An optimized matrix multiplication (say, the one in the Intel MKL library) can be 10 times faster than a non-optimized version. Most programmers don't know how to write such
optimized code, and the tricks usually apply only to simple operations, e.g., basic vector and matrix operations. This is why
optimized BLAS is much faster (say, 10 times) than non-optimized BLAS, whereas optimized LAPACK does not gain nearly as much over non-optimized LAPACK.
The popular statistical computing language R uses a small portion of LAPACK (a non-optimized version), and statisticians are happy with it. Matlab uses Intel MKL, which includes an optimized LAPACK. NumPy and Octave use ATLAS, a free, optimized
BLAS/LAPACK implementation. However, Intel MKL costs money, and ATLAS is hard to compile on Windows.
In the future, I will release optimized LAPACK runtimes compiled from ATLAS.
Source code & Compiling
If you are interested in the source code or in improving the current library, you may want to read the project's source code and several posts of mine.
The current focus is on stability. Although the LAPACK library itself is robust and reliable, the wrapper is buggy; I plan to add systematic unit tests for it.
Making more functionality available (e.g., some of the functions found in Matlab) is another direction.
Please email me (email@example.com) if you'd like to contribute to this project.