# LibTorch extension (`kmath-torch`)
This is a `Kotlin/Native` module; only the `linuxX64` target is supported so far. The library wraps parts of the [PyTorch C++ API](https://pytorch.org/cppdocs), focusing on integrating `Aten` & `Autograd` with `KMath`.
## Installation
To install the library, build and publish `kmath-core`, `kmath-memory`, and `kmath-torch` to the local Maven repository:
```bash
./gradlew -q :kmath-core:publishToMavenLocal :kmath-memory:publishToMavenLocal :kmath-torch:publishToMavenLocal
```
This builds `ctorch`, a C wrapper for `LibTorch`, placed inside:
`~/.konan/third-party/kmath-torch-0.2.0-dev-4/cpp-build`
You will have to link against it in your own project.
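As a rough illustration only (this snippet is not part of the project's documented configuration), linking from a consumer's `build.gradle.kts` might look like the sketch below. The library name `ctorch` and the `cpp-build` path are taken from the installation step above; depending on your setup you may also need to point the linker at your `LibTorch` installation.

```kotlin
// build.gradle.kts of a hypothetical consumer project -- an illustrative sketch only
kotlin {
    linuxX64 {
        binaries.executable {
            // Assumed location of the ctorch wrapper built during installation;
            // adjust the version segment to match your kmath-torch build
            val ctorchDir = "${System.getProperty("user.home")}" +
                "/.konan/third-party/kmath-torch-0.2.0-dev-4/cpp-build"
            linkerOpts("-L$ctorchDir", "-lctorch")
        }
    }
}
```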
## Usage
Tensors are implemented over `MutableNDStructure`. They can only be instantiated through the provided factory methods and require scoping:
```kotlin
TorchTensorRealAlgebra {
    // Tensor of shape [2, 5] filled from a Kotlin array, residing on the CPU
    val realTensor: TorchTensorReal = copyFromArray(
        array = (1..10).map { it + 50.0 }.toDoubleArray(),
        shape = intArrayOf(2, 5)
    )
    println(realTensor)

    // The same factory method can place a tensor directly on a CUDA device
    val gpuRealTensor: TorchTensorReal = copyFromArray(
        array = (1..8).map { it * 2.5 }.toDoubleArray(),
        shape = intArrayOf(2, 2, 2),
        device = Device.CUDA(0)
    )
    println(gpuRealTensor)
}
```
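The scoping requirement is a memory-management device: tensors hold references to native `LibTorch` memory, and the `TorchTensorRealAlgebra { ... }` block delimits the region in which that memory stays alive, so tensors should not be allowed to escape the block they were created in. (This reading of the scoping requirement is an inference from the API shape rather than a documented guarantee.)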
A high-performance automatic differentiation engine is available:
```kotlin
TorchTensorRealAlgebra {
    val dim = 10
    val device = Device.CPU // or Device.CUDA(0)
    val tensorX = randNormal(shape = intArrayOf(dim), device = device)

    // Draw a random matrix and symmetrise it
    val randFeatures = randNormal(shape = intArrayOf(dim, dim), device = device)
    val tensorSigma = randFeatures + randFeatures.transpose(0, 1)
    val tensorMu = randNormal(shape = intArrayOf(dim), device = device)

    // Expression to differentiate w.r.t. x, evaluated at x = tensorX
    val expressionAtX = withGradAt(tensorX) { x ->
        0.5 * (x dot (tensorSigma dot x)) + (tensorMu dot x) + 25.9
    }

    // Value of the gradient at x = tensorX; the graph is retained
    // so that the Hessian can still be computed from it afterwards
    val gradientAtX = expressionAtX.grad(tensorX, retainGraph = true)
    // Value of the Hessian at x = tensorX
    val hessianAtX = expressionAtX hess tensorX
}
```
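For this particular quadratic expression the results can be checked analytically. Since `tensorSigma` is symmetric by construction,

$$
f(x) = \frac{1}{2}\, x^\top \Sigma\, x + \mu^\top x + 25.9,
\qquad
\nabla f(x) = \Sigma x + \mu,
\qquad
\nabla^2 f(x) = \Sigma,
$$

so `gradientAtX` should agree with `(tensorSigma dot tensorX) + tensorMu`, and `hessianAtX` should reproduce `tensorSigma`.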