# Module kmath-noa

A general-purpose differentiable programming library over
[NOA](https://github.com/grinisrit/noa.git)
together with relevant functionality from
[LibTorch](https://pytorch.org/cppdocs).

Our aim is to cover a wide set of applications, from Bayesian computation and
deep learning to particle physics simulations. In fact, we support any
differentiable program written on top of `AutoGrad` & `ATen`.

## Installation from source

Currently, we support only the Linux platform for the native artifacts.
For `GPU` kernels, we require a compatible
[CUDA](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html)
installation. If you are on Windows, we recommend setting everything up
on [WSL](https://docs.nvidia.com/cuda/wsl-user-guide/index.html).

To install the library, you can simply publish `KMath` to the local
Maven repository:

```
$ ./gradlew -Dorg.gradle.java.home=/path/to/local/jdk -q publishToMavenLocal
```

This will fetch and build the `JNI` wrapper `jnoa`.

The library has been tested with
[graalvm-ce-java11-linux-amd64-22.0.0.2](https://github.com/graalvm/graalvm-ce-builds/releases/tag/vm-22.0.0.2).

In your own application, add the local dependency:

```kotlin
repositories {
    mavenCentral()
    mavenLocal()
}

dependencies {
    implementation("space.kscience:kmath-noa:0.3.0-dev-17")
}
```

To load the native library, you will need to add the following to the VM options:

```
-Djava.library.path=${HOME}/.kmath/third-party/noa-v0.0.1/cpp-build/jnoa
```
```
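
If you run your application or tests through Gradle, the same option can be
passed as a JVM argument instead of being set globally. A minimal sketch in the
Gradle Kotlin DSL, assuming the default `${HOME}/.kmath` location used above:

```kotlin
// build.gradle.kts — a sketch only; adjust the path if you installed NOA elsewhere
tasks.withType<Test> {
    jvmArgs("-Djava.library.path=${System.getProperty("user.home")}/.kmath/third-party/noa-v0.0.1/cpp-build/jnoa")
}
```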

## Usage

The library is under active development. Many more features will be available soon.

### Tensors and Linear Algebra

We implement the tensor algebra interfaces
from [kmath-tensors](../kmath-tensors):

```kotlin
NoaFloat {
    val tensor = randNormal(
        shape = intArrayOf(7, 5, 3),
        device = Device.CPU // or Device.CUDA(0) for GPU
    )

    // Compute the SVD
    val (tensorU, tensorS, tensorV) = tensor.svd()

    // Reconstruct the tensor
    val tensorReg =
        tensorU dot (diagonalEmbedding(tensorS) dot tensorV.transpose(-2, -1))

    // Serialise the tensor for later use
    tensorReg.save("tensorReg.pt")
}
```

The saved tensor can be loaded in `C++` or in `python`:

```python
import torch

tensor_reg = list(torch.jit.load('tensorReg.pt').parameters())[0]
```

The most efficient way to pass data between the `JVM` and the native backend
is to rely on primitive arrays:

```kotlin
val array = (1..8).map { 100f * it }.toFloatArray()
val updateArray = floatArrayOf(15f, 20f)
val resArray = NoaFloat {
    val tensor = copyFromArray(array, intArrayOf(2, 2, 2))
    NoaFloat {
        // The call `tensor[0]` creates a native tensor instance pointing to a slice of `tensor`
        // The second call `[1]` is a setter call and does not create any new instances
        tensor[0][1] = updateArray
        // The instance `tensor[0]` is destroyed as we move out of the scope
    }!! // if the computation fails the result will be null
    tensor.copyToArray()
    // the instance `tensor` is destroyed here
}!!
```
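
The `!!` calls above assume that the native computation succeeded. You can also
handle the nullable result explicitly; a minimal sketch using only the calls
shown above:

```kotlin
// Sketch: handle a failed native computation without `!!`
val maybeArray: FloatArray? = NoaFloat {
    copyFromArray(floatArrayOf(1f, 2f, 3f, 4f), intArrayOf(2, 2)).copyToArray()
}
val safeArray = maybeArray ?: FloatArray(4) // fall back to zeros if the native call failed
```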

### Automatic Differentiation

The [AutoGrad](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html)
engine is exposed:

```kotlin
NoaFloat {
    // Create a quadratic function
    val dim = 3
    val tensorX = randNormal(shape = intArrayOf(dim))
    val randFeatures = randNormal(shape = intArrayOf(dim, dim))
    val tensorSigma = randFeatures + randFeatures.transpose(0, 1)
    val tensorMu = randNormal(shape = intArrayOf(dim))

    // Create a differentiable expression
    val expressionAtX = withGradAt(tensorX) { x ->
        0.5f * (x dot (tensorSigma dot x)) + (tensorMu dot x) + 25.9f
    }

    // Evaluate the gradient at tensorX,
    // retaining the graph for the hessian computation
    val gradientAtX = expressionAtX.autoGradient(tensorX, retainGraph = true)

    // Compute the hessian at tensorX
    val hessianAtX = expressionAtX.autoHessian(tensorX)
}
```
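
For this particular quadratic form, the results can be checked against the
closed-form expressions (the matrix $\Sigma$ is symmetric by construction):

$$
f(x) = \tfrac{1}{2}\, x^\top \Sigma x + \mu^\top x + c,
\qquad
\nabla f(x) = \Sigma x + \mu,
\qquad
\nabla^2 f(x) = \Sigma .
$$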

### Deep Learning

You can train any [TorchScript](https://pytorch.org/docs/stable/jit.html) model.
For example, you can build the following neural network in `python`
and prepare the training data:

```python
import torch

n_tr = 7
n_val = 300
x_val = torch.linspace(-5, 5, n_val).view(-1, 1)
y_val = torch.sin(x_val)
x_train = torch.linspace(-3.14, 3.14, n_tr).view(-1, 1)
y_train = torch.sin(x_train) + torch.randn_like(x_train) * 0.1


class Data(torch.nn.Module):
    def __init__(self):
        super(Data, self).__init__()
        self.register_buffer('x_val', x_val)
        self.register_buffer('y_val', y_val)
        self.register_buffer('x_train', x_train)
        self.register_buffer('y_train', y_train)


class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.l1 = torch.nn.Linear(1, 10, bias=True)
        self.l2 = torch.nn.Linear(10, 10, bias=True)
        self.l3 = torch.nn.Linear(10, 1, bias=True)

    def forward(self, x):
        x = self.l1(x)
        x = torch.relu(x)
        x = self.l2(x)
        x = torch.relu(x)
        x = self.l3(x)
        return x


class Loss(torch.nn.Module):
    def __init__(self, target):
        super(Loss, self).__init__()
        self.register_buffer('target', target)
        self.loss = torch.nn.MSELoss()

    def forward(self, x):
        return self.loss(x, self.target)


# Generate TorchScript modules and serialise them
torch.jit.script(Data()).save('data.pt')
torch.jit.script(Net()).save('net.pt')
torch.jit.script(Loss(y_train)).save('loss.pt')
```

You can then load the modules into `kotlin` and train them:

```kotlin
NoaFloat {

    // Load the serialised JIT modules
    // The training data
    val dataModule = loadJitModule("data.pt")
    // The DL model
    val netModule = loadJitModule("net.pt")
    // The loss function
    val lossModule = loadJitModule("loss.pt")

    // Get the tensors from the data module
    val xTrain = dataModule.getBuffer("x_train")
    val yTrain = dataModule.getBuffer("y_train")
    val xVal = dataModule.getBuffer("x_val")
    val yVal = dataModule.getBuffer("y_val")

    // Set the model in training mode
    netModule.train(true)
    // Set the target for the training loss
    lossModule.setBuffer("target", yTrain)

    // Compute the predictions
    val yPred = netModule.forward(xTrain)
    // Compute the training loss
    val loss = lossModule.forward(yPred)
    println(loss)

    // Set up the Adam optimiser with learning rate 0.005
    val optimiser = netModule.adamOptimiser(0.005)

    // Train for 250 epochs
    repeat(250) {
        // Clean the gradients
        optimiser.zeroGrad()
        // Use forwardAssign for better memory management
        netModule.forwardAssign(xTrain, yPred)
        lossModule.forwardAssign(yPred, loss)
        // Backward pass
        loss.backward()
        // Update the model parameters
        optimiser.step()
        if (it % 50 == 0)
            println("Training loss: $loss")
    }

    // Finally, validate the model:
    // compute the predictions for the validation features
    netModule.forwardAssign(xVal, yPred)
    // Set the target for the validation loss
    lossModule.setBuffer("target", yVal)
    // Compute the loss on the validation dataset
    lossModule.forwardAssign(yPred, loss)
    println("Validation loss: $loss")

    // The model can be serialised in its current state
    netModule.save("trained_net.pt")
}
```
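
The serialised model can later be loaded back for inference. A minimal sketch
reusing only the calls shown above; the `train(false)` switch back to evaluation
mode mirrors the `train(true)` call from the training example:

```kotlin
NoaFloat {
    // Load the model trained and saved above
    val trainedNet = loadJitModule("trained_net.pt")
    // Switch the model back to evaluation mode
    trainedNet.train(false)
    // Reuse the validation features from the data module for prediction
    val xVal = loadJitModule("data.pt").getBuffer("x_val")
    val predictions = trainedNet.forward(xVal)
    println(predictions)
}
```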

### Custom memory management

Native memory management relies on scoping
with [NoaScope](src/main/kotlin/space/kscience/kmath/noa/memory/NoaScope.kt),
which is readily available within an algebra context.
Manual management is also possible:

```kotlin
// Create a scope
val scope = NoaScope()

val tensor = NoaFloat(scope) {
    full(5f, intArrayOf(1))
}!! // the result might be null

// If the computation fails, resources will be freed automatically.
// Otherwise it is your responsibility:
scope.disposeAll()

// Attempting to use `tensor` here is undefined behaviour
```
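
If the scope outlives a single block, a `try`/`finally` guard keeps the release
unconditional. A minimal sketch, assuming nothing beyond the `NoaScope` API
shown above:

```kotlin
val scope = NoaScope()
try {
    val tensor = NoaFloat(scope) {
        full(5f, intArrayOf(1))
    }!!
    // ... work with tensor ...
} finally {
    // Always release the native memory held by the scope
    scope.disposeAll()
}
```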

For more examples, have a look at the
[NOA](https://github.com/grinisrit/noa) docs.

Contributed by [Roland Grinis](https://github.com/grinisrit)