
Module kmath-noa

A Bayesian computation library built on top of NOA, together with relevant functionality from LibTorch.

Our aim is to cover a wide range of applications, from deep learning to particle physics simulations. In fact, we support any differentiable program written on top of AutoGrad & ATen.

Installation from source

Currently, we support only the GNU toolchain for the native artifacts. For GPU kernels, we require a compatible CUDA installation. If you are on Windows, we recommend setting up everything on WSL.

To install the library, you can simply publish to the local Maven repository:

./gradlew -q :kmath-noa:publishToMavenLocal

This will fetch and build the JNI wrapper jnoa.
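
To check that the native artifacts were built correctly, you can run the module's tests (assuming the standard Gradle test task is wired to the native build):

./gradlew :kmath-noa:test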

In your own application, add the local dependency:

repositories {
    mavenCentral()
    mavenLocal()
}

dependencies {
    implementation("space.kscience:kmath-noa:0.3.0-dev-14")
}

To load the native library, you will need to add the following to the VM options:

-Djava.library.path=${HOME}/.konan/third-party/noa-v0.0.1/cpp-build/kmath
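
If you prefer not to pass this flag by hand, a minimal sketch in the Gradle Kotlin DSL is to set it for the test JVM of your own project; the path below simply mirrors the option above and may differ on your machine:

tasks.withType<Test> {
    // Point the JVM at the directory containing the built native library
    // (the same path as in the -Djava.library.path option above).
    jvmArgs("-Djava.library.path=${System.getProperty("user.home")}/.konan/third-party/noa-v0.0.1/cpp-build/kmath")
}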

Usage

We implement the tensor algebra interfaces from kmath-tensors:

NoaFloat {
    val tensor = 
        randNormal(
            shape = intArrayOf(7, 5, 3), 
            device = Device.CPU) // or Device.CUDA(0) for GPU
    
    // Compute SVD
    val (tensorU, tensorS, tensorV) = tensor.svd()
    
    // Reconstruct tensor
    val tensorReg =
        tensorU dot (diagonalEmbedding(tensorS) dot tensorV.transpose(-2, -1))
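    // Sanity check (not from the original example): tensorReg should match
    // the original tensor up to numerical error, since the SVD factorises
    // tensor = U * diag(S) * V^T over the last two dimensions.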
}

The AutoGrad engine is exposed:

NoaFloat {
    // Create a quadratic function
    val dim = 3
    val tensorX = randNormal(shape = intArrayOf(dim))
    val randFeatures = randNormal(shape = intArrayOf(dim, dim))
    val tensorSigma = randFeatures + randFeatures.transpose(0, 1)
    val tensorMu = randNormal(shape = intArrayOf(dim))

    // Create a differentiable expression
    val expressionAtX = withGradAt(tensorX) { x ->
        0.5f * (x dot (tensorSigma dot x)) + (tensorMu dot x) + 25.9f
    }

    // Evaluate the gradient at tensorX
    // retaining the graph for the hessian computation
    val gradientAtX = expressionAtX.autoGradient(tensorX, retainGraph = true)
    
    // Compute the hessian at tensorX
    val hessianAtX = expressionAtX.autoHessian(tensorX)
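    // Sanity check (not from the original example): since tensorSigma is
    // symmetric, the closed-form gradient of this expression is
    // (tensorSigma dot tensorX) + tensorMu and its hessian is tensorSigma,
    // so gradientAtX and hessianAtX can be compared against those values.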
}

Native memory management relies on scoping with NoaScope, which is readily available within an algebra context. Manual management is also possible:

// Create a scope
val scope = NoaScope()

val tensor = NoaFloat(scope) {
    full(5f, intArrayOf(1))
}!! // the result might be null

// If the computation fails, resources will be freed automatically.
// Otherwise it's your responsibility:
scope.disposeAll()

// Any attempt to use tensor here is undefined behaviour
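
For manual management it can be convenient to tie the scope to a block so that native memory is always released. Below is a minimal sketch of such a helper; useNoaScope is a hypothetical name, not part of the library, and it assumes only the NoaScope and NoaFloat API shown above. Only plain Kotlin values should escape the block, since tensors become invalid once the scope is disposed.

// Hypothetical helper (not part of kmath-noa): run a block against a fresh
// scope and always free the associated native memory afterwards.
fun <T> useNoaScope(block: (NoaScope) -> T): T {
    val scope = NoaScope()
    try {
        return block(scope)
    } finally {
        scope.disposeAll()
    }
}

// Usage: everything allocated inside is freed when the block exits.
useNoaScope { localScope ->
    NoaFloat(localScope) {
        val ones = full(1f, intArrayOf(3))
        // work with ones here; it is released together with localScope
    }
}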