Compare commits

...

11 Commits

211 changed files with 1083 additions and 644 deletions

View File

@@ -34,7 +34,7 @@ job("Publish") {
     api.space().projects.automation.deployments.start(
         project = api.projectIdentifier(),
         targetIdentifier = TargetIdentifier.Key(projectName),
-        version = version+revisionSuffix,
+        version = version + revisionSuffix,
         // automatically update deployment status based on the status of a job
         syncWithAutomationJob = true
     )

View File

@@ -77,7 +77,8 @@
- Major refactor of tensors (only minor API changes)
- Kotlin 1.8.20
- `LazyStructure` `deffered` -> `async` to comply with coroutines code style
- Default `dot` operation in tensor algebra no longer supports broadcasting. Instead, a `matmul` operation is added to `DoubleTensorAlgebra`.
- Multik went MPP

### Removed

@@ -236,9 +237,11 @@
- MST to JVM bytecode translator (https://github.com/mipt-npm/kmath/pull/94)
- FloatBuffer (specialized MutableBuffer over FloatArray)
- FlaggedBuffer to associate primitive numbers buffer with flags (to mark values infinite or missing, etc.)
- Specialized builder functions for all primitive buffers like `IntBuffer(25) { it + 1 }` (https://github.com/mipt-npm/kmath/pull/125)
- Interface `NumericAlgebra` where `number` operation is available to convert numbers to algebraic elements
- Inverse trigonometric functions support in ExtendedField (`asin`, `acos`, `atan`) (https://github.com/mipt-npm/kmath/pull/114)
- New space extensions: `average` and `averageWith`
- Local coding conventions
- Geometric Domains API in `kmath-core`

@@ -251,7 +254,8 @@
- `readAsMemory` now has `throws IOException` in JVM signature.
- Several functions taking functional types were made `inline`.
- Several functions taking functional types now have `callsInPlace` contracts.
- BigInteger and BigDecimal algebra: JBigDecimalField has companion object with default math context; minor optimizations
- `power(T, Int)` extension function has preconditions and supports `Field<T>`
- Memory objects have more preconditions (overflow checking)
- `tg` function is renamed to `tan` (https://github.com/mipt-npm/kmath/pull/114)

131
README.md
View File

@@ -25,7 +25,8 @@ experience could be achieved with [kmath-for-real](/kmath-for-real) extension mo

# Goal

* Provide a flexible and powerful API to work with mathematics abstractions in Kotlin-multiplatform (JVM, JS, Native and Wasm).
* Provide basic multiplatform implementations for those abstractions (without significant performance optimization).
* Provide bindings and wrappers with those abstractions for popular optimized platform libraries.

@@ -55,150 +56,181 @@ module definitions below. The module stability could have the following levels:

## Modules

### [attributes-kt](attributes-kt)
> An API and basic implementation for arranging objects in a continuous memory block.
>
> **Maturity**: DEVELOPMENT

### [benchmarks](benchmarks)
>
> **Maturity**: EXPERIMENTAL

### [examples](examples)
>
> **Maturity**: EXPERIMENTAL

### [kmath-ast](kmath-ast)
>
> **Maturity**: EXPERIMENTAL
>
> **Features:**
> - [expression-language](kmath-ast/src/commonMain/kotlin/space/kscience/kmath/ast/parser.kt) : Expression language and its parser
> - [mst-jvm-codegen](kmath-ast/src/jvmMain/kotlin/space/kscience/kmath/asm/asm.kt) : Dynamic MST to JVM bytecode compiler
> - [mst-js-codegen](kmath-ast/src/jsMain/kotlin/space/kscience/kmath/estree/estree.kt) : Dynamic MST to JS compiler
> - [rendering](kmath-ast/src/commonMain/kotlin/space/kscience/kmath/ast/rendering/MathRenderer.kt) : Extendable MST rendering

### [kmath-commons](kmath-commons)
> Commons math binding for kmath
>
> **Maturity**: EXPERIMENTAL

### [kmath-complex](kmath-complex)
> Complex numbers and quaternions.
>
> **Maturity**: PROTOTYPE
>
> **Features:**
> - [complex](kmath-complex/src/commonMain/kotlin/space/kscience/kmath/complex/Complex.kt) : Complex numbers operations
> - [quaternion](kmath-complex/src/commonMain/kotlin/space/kscience/kmath/complex/Quaternion.kt) : Quaternions and their composition

### [kmath-core](kmath-core)
> Core classes, algebra definitions, basic linear algebra
>
> **Maturity**: DEVELOPMENT
>
> **Features:**
> - [algebras](kmath-core/src/commonMain/kotlin/space/kscience/kmath/operations/Algebra.kt) : Algebraic structures like rings, spaces and fields.
> - [nd](kmath-core/src/commonMain/kotlin/space/kscience/kmath/structures/StructureND.kt) : Many-dimensional structures and operations on them.
> - [linear](kmath-core/src/commonMain/kotlin/space/kscience/kmath/operations/Algebra.kt) : Basic linear algebra operations (sums, products, etc.), backed by the `Space` API. Advanced linear algebra operations like matrix inversion and LU decomposition.
> - [buffers](kmath-core/src/commonMain/kotlin/space/kscience/kmath/structures/Buffers.kt) : One-dimensional structure
> - [expressions](kmath-core/src/commonMain/kotlin/space/kscience/kmath/expressions) : By writing a single mathematical expression once, users will be able to apply different types of objects to the expression by providing a context. Expressions can be used for a wide variety of purposes from high performance calculations to code generation.
> - [domains](kmath-core/src/commonMain/kotlin/space/kscience/kmath/domains) : Domains
> - [autodiff](kmath-core/src/commonMain/kotlin/space/kscience/kmath/expressions/SimpleAutoDiff.kt) : Automatic differentiation

### [kmath-coroutines](kmath-coroutines)
>
> **Maturity**: EXPERIMENTAL

### [kmath-dimensions](kmath-dimensions)
> A proof of concept module for adding type-safe dimensions to structures
>
> **Maturity**: PROTOTYPE

### [kmath-ejml](kmath-ejml)
>
> **Maturity**: PROTOTYPE
>
> **Features:**
> - [ejml-vector](kmath-ejml/src/main/kotlin/space/kscience/kmath/ejml/EjmlVector.kt) : Point implementations.
> - [ejml-matrix](kmath-ejml/src/main/kotlin/space/kscience/kmath/ejml/EjmlMatrix.kt) : Matrix implementation.
> - [ejml-linear-space](kmath-ejml/src/main/kotlin/space/kscience/kmath/ejml/EjmlLinearSpace.kt) : LinearSpace implementations.

### [kmath-for-real](kmath-for-real)
> Extension module that should be used to achieve numpy-like behavior.
> All operations are specialized to work with `Double` numbers without declaring algebraic contexts.
> One can still use generic algebras though.
>
> **Maturity**: EXPERIMENTAL
>
> **Features:**
> - [DoubleVector](kmath-for-real/src/commonMain/kotlin/space/kscience/kmath/real/DoubleVector.kt) : Numpy-like operations for Buffers/Points
> - [DoubleMatrix](kmath-for-real/src/commonMain/kotlin/space/kscience/kmath/real/DoubleMatrix.kt) : Numpy-like operations for 2d real structures
> - [grids](kmath-for-real/src/commonMain/kotlin/space/kscience/kmath/structures/grids.kt) : Uniform grid generators

### [kmath-functions](kmath-functions)
> Functions, integration and interpolation
>
> **Maturity**: EXPERIMENTAL
>
> **Features:**
> - [piecewise](kmath-functions/src/commonMain/kotlin/space/kscience/kmath/functions/Piecewise.kt) : Piecewise functions.
> - [polynomials](kmath-functions/src/commonMain/kotlin/space/kscience/kmath/functions/Polynomial.kt) : Polynomial functions.
> - [linear interpolation](kmath-functions/src/commonMain/kotlin/space/kscience/kmath/interpolation/LinearInterpolator.kt) : Linear XY interpolator.
> - [spline interpolation](kmath-functions/src/commonMain/kotlin/space/kscience/kmath/interpolation/SplineInterpolator.kt) : Cubic spline XY interpolator.
> - [integration](kmath-functions/#) : Univariate and multivariate quadratures

### [kmath-geometry](kmath-geometry)
>
> **Maturity**: PROTOTYPE

### [kmath-histograms](kmath-histograms)
>
> **Maturity**: PROTOTYPE

### [kmath-jafama](kmath-jafama)
> Jafama integration module
>
> **Maturity**: DEPRECATED
>
> **Features:**
> - [jafama-double](kmath-jafama/src/main/kotlin/space/kscience/kmath/jafama/) : Double ExtendedField implementations based on Jafama

### [kmath-jupyter](kmath-jupyter)
>
> **Maturity**: PROTOTYPE

### [kmath-kotlingrad](kmath-kotlingrad)
> Kotlin∇ integration module
>
> **Maturity**: EXPERIMENTAL
>
> **Features:**
> - [differentiable-mst-expression](kmath-kotlingrad/src/main/kotlin/space/kscience/kmath/kotlingrad/KotlingradExpression.kt) : MST based DifferentiableExpression.
> - [scalars-adapters](kmath-kotlingrad/src/main/kotlin/space/kscience/kmath/kotlingrad/scalarsAdapters.kt) : Conversions between Kotlin∇'s SFun and MST

### [kmath-memory](kmath-memory)
> An API and basic implementation for arranging objects in a continuous memory block.
>
> **Maturity**: DEVELOPMENT

### [kmath-multik](kmath-multik)
> JetBrains Multik connector
>
> **Maturity**: PROTOTYPE

### [kmath-nd4j](kmath-nd4j)
> ND4J NDStructure implementation and according NDAlgebra classes
>
> **Maturity**: DEPRECATED

@@ -208,45 +240,52 @@ One can still use generic algebras though.
> - [nd4jarrayrings](kmath-nd4j/#) : Rings over Nd4jArrayStructure of Int and Long
> - [nd4jarrayfields](kmath-nd4j/#) : Fields over Nd4jArrayStructure of Float and Double

### [kmath-optimization](kmath-optimization)
>
> **Maturity**: EXPERIMENTAL

### [kmath-stat](kmath-stat)
>
> **Maturity**: EXPERIMENTAL

### [kmath-symja](kmath-symja)
> Symja integration module
>
> **Maturity**: PROTOTYPE

### [kmath-tensorflow](kmath-tensorflow)
> Google tensorflow connector
>
> **Maturity**: PROTOTYPE

### [kmath-tensors](kmath-tensors)
>
> **Maturity**: PROTOTYPE
>
> **Features:**
> - [tensor algebra](kmath-tensors/src/commonMain/kotlin/space/kscience/kmath/tensors/api/TensorAlgebra.kt) : Basic linear algebra operations on tensors (plus, dot, etc.)
> - [tensor algebra with broadcasting](kmath-tensors/src/commonMain/kotlin/space/kscience/kmath/tensors/core/BroadcastDoubleTensorAlgebra.kt) : Basic linear algebra operations implemented with broadcasting.
> - [linear algebra operations](kmath-tensors/src/commonMain/kotlin/space/kscience/kmath/tensors/api/LinearOpsTensorAlgebra.kt) : Advanced linear algebra operations like LU decomposition, SVD, etc.

### [kmath-viktor](kmath-viktor)
> Binding for https://github.com/JetBrains-Research/viktor
>
> **Maturity**: DEVELOPMENT

### [test-utils](test-utils)
>
> **Maturity**: EXPERIMENTAL

## Multi-platform support

KMath is developed as a multi-platform library, which means that most of the interfaces are declared in the

@@ -257,16 +296,19 @@ feedback are also welcome.

## Performance

Calculation performance is one of the major goals of KMath in the future, but in some cases it is impossible to achieve both performance and flexibility.

We expect to focus on creating a convenient universal API first and then work on increasing performance for specific cases. We expect the worst KMath benchmarks will perform better than native Python, but worse than optimized native/SciPy (mostly due to boxing operations on primitive numbers). The best performance of optimized parts could be better than SciPy.

## Requirements

KMath currently relies on JDK 11 for compilation and execution of the Kotlin-JVM part. We recommend using GraalVM-CE or Oracle GraalVM for execution to get better performance.

### Repositories

@@ -289,4 +331,7 @@ dependencies {

## Contributing

The project requires a lot of additional work. The most important thing we need is feedback about what features are required the most. Feel free to create feature requests. We also welcome code contributions, especially in issues marked with the [good first issue](https://github.com/SciProgCentre/kmath/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) label.

View File

@@ -3,7 +3,7 @@ plugins {
     `maven-publish`
 }

-version = "0.1.0"
+version = rootProject.extra.get("attributesVersion").toString()

 kscience {
     jvm()

View File

@@ -30,20 +30,28 @@ public interface Attributes {
     override fun hashCode(): Int

     public companion object {
-        public val EMPTY: Attributes = AttributesImpl(emptyMap())
+        public val EMPTY: Attributes = object : Attributes {
+            override val content: Map<out Attribute<*>, Any?> get() = emptyMap()
+            override fun toString(): String = "Attributes.EMPTY"
+            override fun equals(other: Any?): Boolean = (other as? Attributes)?.isEmpty() ?: false
+            override fun hashCode(): Int = Unit.hashCode()
+        }

         public fun equals(a1: Attributes, a2: Attributes): Boolean =
             a1.keys == a2.keys && a1.keys.all { a1[it] == a2[it] }
     }
 }

-internal class AttributesImpl(override val content: Map<out Attribute<*>, Any?>) : Attributes {
+internal class MapAttributes(override val content: Map<out Attribute<*>, Any?>) : Attributes {
     override fun toString(): String = "Attributes(value=${content.entries})"
     override fun equals(other: Any?): Boolean = other is Attributes && Attributes.equals(this, other)
     override fun hashCode(): Int = content.hashCode()
 }

-public fun Attributes.isEmpty(): Boolean = content.isEmpty()
+public fun Attributes.isEmpty(): Boolean = keys.isEmpty()

 /**
  * Get attribute value or default

@@ -75,7 +83,7 @@ public inline fun <reified A : FlagAttribute> Attributes.hasFlag(): Boolean =
 public fun <T, A : Attribute<T>> Attributes.withAttribute(
     attribute: A,
     attrValue: T,
-): Attributes = AttributesImpl(content + (attribute to attrValue))
+): Attributes = MapAttributes(content + (attribute to attrValue))

 public fun <A : Attribute<Unit>> Attributes.withAttribute(attribute: A): Attributes =
     withAttribute(attribute, Unit)

@@ -83,15 +91,15 @@ public fun <A : Attribute<Unit>> Attributes.withAttribute(attribute: A): Attribu
 /**
  * Create a new [Attributes] by modifying the current one
  */
-public fun <T> Attributes.modify(block: AttributesBuilder<T>.() -> Unit): Attributes = Attributes<T> {
-    from(this@modify)
+public fun <O> Attributes.modified(block: AttributesBuilder<O>.() -> Unit): Attributes = Attributes<O> {
+    putAll(this@modified)
     block()
 }

 /**
  * Create new [Attributes] by removing [attribute] key
  */
-public fun Attributes.withoutAttribute(attribute: Attribute<*>): Attributes = AttributesImpl(content.minus(attribute))
+public fun Attributes.withoutAttribute(attribute: Attribute<*>): Attributes = MapAttributes(content.minus(attribute))

 /**
  * Add an element to a [SetAttribute]

@@ -101,7 +109,7 @@ public fun <T, A : SetAttribute<T>> Attributes.withAttributeElement(
     attrValue: T,
 ): Attributes {
     val currentSet: Set<T> = get(attribute) ?: emptySet()
-    return AttributesImpl(
+    return MapAttributes(
         content + (attribute to (currentSet + attrValue))
     )
 }

@@ -114,7 +122,7 @@ public fun <T, A : SetAttribute<T>> Attributes.withoutAttributeElement(
     attrValue: T,
 ): Attributes {
     val currentSet: Set<T> = get(attribute) ?: emptySet()
-    return AttributesImpl(content + (attribute to (currentSet - attrValue)))
+    return MapAttributes(content + (attribute to (currentSet - attrValue)))
 }

 /**

@@ -123,13 +131,13 @@ public fun <T, A : SetAttribute<T>> Attributes.withoutAttributeElement(
 public fun <T, A : Attribute<T>> Attributes(
     attribute: A,
     attrValue: T,
-): Attributes = AttributesImpl(mapOf(attribute to attrValue))
+): Attributes = MapAttributes(mapOf(attribute to attrValue))

 /**
  * Create Attributes with a single [Unit] valued attribute
  */
 public fun <A : Attribute<Unit>> Attributes(
     attribute: A,
-): Attributes = AttributesImpl(mapOf(attribute to Unit))
+): Attributes = MapAttributes(mapOf(attribute to Unit))

-public operator fun Attributes.plus(other: Attributes): Attributes = AttributesImpl(content + other.content)
+public operator fun Attributes.plus(other: Attributes): Attributes = MapAttributes(content + other.content)
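To illustrate how the renamed `MapAttributes`-backed API is used, here is a minimal sketch based only on the public declarations visible in this diff; the `Tolerance` key is hypothetical.

```kotlin
import space.kscience.attributes.*

// Hypothetical attribute key, used only for illustration.
object Tolerance : Attribute<Double>

// The public factory is backed by MapAttributes after this change.
val attrs: Attributes = Attributes(Tolerance, 1e-6)

// EMPTY is now a dedicated object that equals any empty Attributes instance.
val merged: Attributes = attrs + Attributes.EMPTY

// isEmpty() now checks keys instead of the raw content map.
val tolerance: Double? = merged[Tolerance]
val nonEmpty: Boolean = !merged.isEmpty()
```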

View File

@@ -6,19 +6,18 @@
 package space.kscience.attributes

 /**
- * A safe builder for [Attributes]
+ * A builder for [Attributes].
+ * The builder is not thread safe
  *
  * @param O type marker of an owner object, for which these attributes are made
  */
-public class AttributesBuilder<out O> internal constructor(
-    private val map: MutableMap<Attribute<*>, Any?>,
-) : Attributes {
-
-    public constructor() : this(mutableMapOf())
+public class AttributesBuilder<out O> internal constructor() : Attributes {

-    override fun toString(): String = "Attributes(value=${content.entries})"
+    private val map = mutableMapOf<Attribute<*>, Any?>()
+
+    override fun toString(): String = "Attributes(value=${map.entries})"
     override fun equals(other: Any?): Boolean = other is Attributes && Attributes.equals(this, other)
-    override fun hashCode(): Int = content.hashCode()
+    override fun hashCode(): Int = map.hashCode()

     override val content: Map<out Attribute<*>, Any?> get() = map

@@ -34,13 +33,18 @@ public class AttributesBuilder<out O> internal constructor(
         set(this, value)
     }

-    public fun from(attributes: Attributes) {
+    public infix fun <V> Attribute<V>.put(value: V?) {
+        set(this, value)
+    }
+
+    /**
+     * Put all attributes for given [attributes]
+     */
+    public fun putAll(attributes: Attributes) {
         map.putAll(attributes.content)
     }

-    public fun <V> SetAttribute<V>.add(
-        attrValue: V,
-    ) {
+    public infix fun <V> SetAttribute<V>.add(attrValue: V) {
         val currentSet: Set<V> = get(this) ?: emptySet()
         map[this] = currentSet + attrValue
     }

@@ -48,15 +52,17 @@ public class AttributesBuilder<out O> internal constructor(
     /**
      * Remove an element from [SetAttribute]
      */
-    public fun <V> SetAttribute<V>.remove(
-        attrValue: V,
-    ) {
+    public infix fun <V> SetAttribute<V>.remove(attrValue: V) {
         val currentSet: Set<V> = get(this) ?: emptySet()
         map[this] = currentSet - attrValue
     }

-    public fun build(): Attributes = AttributesImpl(map)
+    public fun build(): Attributes = MapAttributes(map)
 }

-public inline fun <O> Attributes(builder: AttributesBuilder<O>.() -> Unit): Attributes =
+/**
+ * Create [Attributes] with a given [builder]
+ * @param O the type for which attributes are built. The type is used only during compilation phase for static extension dispatch
+ */
+public fun <O> Attributes(builder: AttributesBuilder<O>.() -> Unit): Attributes =
     AttributesBuilder<O>().apply(builder).build()
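A minimal sketch of how the reworked builder DSL reads with `putAll` and the new infix `put`/`add`/`remove` members; the attribute keys are hypothetical and only the declarations shown in this diff are assumed.

```kotlin
import space.kscience.attributes.*

// Hypothetical attribute keys, used only for illustration.
object Name : Attribute<String>
object Tags : SetAttribute<String>

val base: Attributes = Attributes(Name, "first")

val built: Attributes = Attributes<Any?> {
    putAll(base)       // replaces the old from(...)
    Name put "second"  // new infix put
    Tags add "fast"    // infix add to a SetAttribute
    Tags remove "slow" // infix remove from a SetAttribute
}
```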

View File

@@ -21,11 +21,14 @@ public abstract class PolymorphicAttribute<T>(public val type: SafeType<T>) : At
 /**
  * Get a polymorphic attribute using attribute factory
  */
-public operator fun <T> Attributes.get(attributeKeyBuilder: () -> PolymorphicAttribute<T>): T? = get(attributeKeyBuilder())
+@UnstableAttributesAPI
+public operator fun <T> Attributes.get(attributeKeyBuilder: () -> PolymorphicAttribute<T>): T? =
+    get(attributeKeyBuilder())

 /**
  * Set a polymorphic attribute using its factory
  */
+@UnstableAttributesAPI
 public operator fun <O, T> AttributesBuilder<O>.set(attributeKeyBuilder: () -> PolymorphicAttribute<T>, value: T) {
     set(attributeKeyBuilder(), value)
 }
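A hedged sketch of how the newly annotated accessors might be used; it assumes `UnstableAttributesAPI` is an opt-in marker and that a reified `safeTypeOf()` helper exists alongside `SafeType` in attributes-kt (check the actual API). The attribute class is hypothetical.

```kotlin
import space.kscience.attributes.*

// Hypothetical polymorphic attribute keyed by its value type.
class ResultAttribute<T>(type: SafeType<T>) : PolymorphicAttribute<T>(type)

@OptIn(UnstableAttributesAPI::class)
fun readDoubleResult(attributes: Attributes): Double? =
    attributes.get { ResultAttribute(safeTypeOf<Double>()) }
```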

View File

@@ -94,6 +94,7 @@ class ExpressionsInterpretersBenchmark {
     }

     private val mst = node.toExpression(Float64Field)
+
     @OptIn(UnstableKMathAPI::class)
     private val wasm = node.wasmCompileToExpression(Float64Field)
     private val estree = node.estreeCompileToExpression(Float64Field)

View File

@@ -67,7 +67,7 @@ internal class BigIntBenchmark {
     @Benchmark
     fun kmMultiplyLarge(blackhole: Blackhole) = BigIntField {
-        blackhole.consume(kmLargeNumber*kmLargeNumber)
+        blackhole.consume(kmLargeNumber * kmLargeNumber)
     }

     @Benchmark

@@ -77,7 +77,7 @@ internal class BigIntBenchmark {
     @Benchmark
     fun jvmMultiplyLarge(blackhole: Blackhole) = JBigIntegerField {
-        blackhole.consume(jvmLargeNumber*jvmLargeNumber)
+        blackhole.consume(jvmLargeNumber * jvmLargeNumber)
     }

     @Benchmark

View File

@@ -75,6 +75,6 @@ internal class BufferBenchmark {
     private companion object {
         private const val size = 100
-        private val reversedIndices = IntArray(size){it}.apply { reverse() }
+        private val reversedIndices = IntArray(size) { it }.apply { reverse() }
     }
 }

View File

@@ -24,7 +24,7 @@ internal class IntegrationBenchmark {
     fun doubleIntegration(blackhole: Blackhole) {
         val res = Double.algebra.gaussIntegrator.integrate(0.0..1.0, intervals = 1000) { x: Double ->
             //sin(1 / x)
-            1/x
+            1 / x
         }.value
         blackhole.consume(res)
     }

@@ -33,7 +33,7 @@ internal class IntegrationBenchmark {
     fun complexIntegration(blackhole: Blackhole) = with(Complex.algebra) {
         val res = gaussIntegrator.integrate(0.0..1.0, intervals = 1000) { x: Double ->
             // sin(1 / x) + i * cos(1 / x)
-            1/x - i/x
+            1 / x - i / x
         }.value
         blackhole.consume(res)
     }

View File

@@ -13,8 +13,6 @@ import space.kscience.kmath.jafama.JafamaDoubleField
 import space.kscience.kmath.jafama.StrictJafamaDoubleField
 import space.kscience.kmath.operations.Float64Field
 import space.kscience.kmath.operations.invoke
-import kotlin.contracts.InvocationKind
-import kotlin.contracts.contract
 import kotlin.random.Random

 @State(Scope.Benchmark)

@@ -36,7 +34,6 @@ internal class JafamaBenchmark {
     }

     private inline fun invokeBenchmarks(blackhole: Blackhole, expr: (Double) -> Double) {
-        contract { callsInPlace(expr, InvocationKind.AT_LEAST_ONCE) }
         val rng = Random(0)
         repeat(1000000) { blackhole.consume(expr(rng.nextDouble())) }
     }

View File

@@ -6,6 +6,8 @@ plugins {
     id("org.jetbrains.kotlinx.kover") version "0.7.6"
 }

+val attributesVersion by extra("0.1.0")
+
 allprojects {
     repositories {
         maven("https://repo.kotlin.link")

@@ -63,7 +65,7 @@ ksciencePublish {
         useApache2Licence()
         useSPCTeam()
     }
-    repository("spc","https://maven.sciprog.center/kscience")
+    repository("spc", "https://maven.sciprog.center/kscience")
     sonatype("https://oss.sonatype.org")
 }
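For context, this new extra property is what the updated `attributes-kt` build script (shown earlier in this compare) reads; the two sides of the wiring, both taken from this change set, are:

```kotlin
// root build.gradle.kts
val attributesVersion by extra("0.1.0")

// attributes-kt/build.gradle.kts
version = rootProject.extra.get("attributesVersion").toString()
```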

View File

@@ -24,8 +24,8 @@ dependencies {
     implementation("com.fasterxml.jackson.module:jackson-module-kotlin:2.14.+")
 }

-kotlin{
-    jvmToolchain{
+kotlin {
+    jvmToolchain {
         languageVersion.set(JavaLanguageVersion.of(11))
     }
     sourceSets.all {

View File

@@ -63,7 +63,8 @@ fun Project.addBenchmarkProperties() {
     if (resDirectory == null || !(resDirectory.resolve("jvm.json")).exists()) {
         "> **Can't find appropriate benchmark data. Try generating readme files after running benchmarks**."
     } else {
-        val reports: List<JmhReport> = jsonMapper.readValue<List<JmhReport>>(resDirectory.resolve("jvm.json"))
+        val reports: List<JmhReport> =
+            jsonMapper.readValue<List<JmhReport>>(resDirectory.resolve("jvm.json"))

         buildString {
             appendLine("<details>")

@@ -76,16 +77,20 @@
             appendLine("* Run on ${first.vmName} (build ${first.vmVersion}) with Java process:")
             appendLine()
             appendLine("```")
-            appendLine("${first.jvm} ${
-                first.jvmArgs.joinToString(" ")
-            }")
+            appendLine(
+                "${first.jvm} ${
+                    first.jvmArgs.joinToString(" ")
+                }"
+            )
             appendLine("```")
-            appendLine("* JMH ${first.jmhVersion} was used in `${first.mode}` mode with ${first.warmupIterations} warmup ${
-                noun(first.warmupIterations, "iteration", "iterations")
-            } by ${first.warmupTime} and ${first.measurementIterations} measurement ${
-                noun(first.measurementIterations, "iteration", "iterations")
-            } by ${first.measurementTime}.")
+            appendLine(
+                "* JMH ${first.jmhVersion} was used in `${first.mode}` mode with ${first.warmupIterations} warmup ${
+                    noun(first.warmupIterations, "iteration", "iterations")
+                } by ${first.warmupTime} and ${first.measurementIterations} measurement ${
+                    noun(first.measurementIterations, "iteration", "iterations")
+                } by ${first.measurementTime}."
+            )
             appendLine()

             appendLine("| Benchmark | Score |")

View File

@@ -17,4 +17,4 @@ own `MemoryBuffer.create()` factory).

## Buffer performance

One should avoid using the default boxing buffer wherever possible. Try to use primitive buffers or memory buffers instead.
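For instance, a minimal sketch of the difference, assuming the buffer factories from `kmath-core` (exact factory names should be checked against the current API):

```kotlin
import space.kscience.kmath.structures.Buffer
import space.kscience.kmath.structures.DoubleBuffer

// Specialized primitive buffer: stored as a DoubleArray, no boxing.
val primitive = DoubleBuffer(100) { it.toDouble() }

// Generic boxing buffer: each element is stored as a boxed Double.
val boxed = Buffer.boxing(100) { it.toDouble() }
```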

View File

@@ -1,27 +1,35 @@

# Coding Conventions

Generally, KMath code follows the general [Kotlin coding conventions](https://kotlinlang.org/docs/reference/coding-conventions.html), but with a number of small changes and clarifications.

## Utility Class Naming

A file name should coincide with the name of one of the classes contained in the file, or start with a lowercase letter and describe its contents.

The code convention [here](https://kotlinlang.org/docs/reference/coding-conventions.html#source-file-names) says that file names should start with a capital letter even if the file does not contain classes. Yet starting utility classes and aggregators with a lowercase letter seems to be a good way to visually separate those files.

This convention could be changed in future in a non-breaking way.
## Private Variable Naming

A private variable's name may start with an underscore `_` if the private mutable variable is shadowed by a public read-only value with the same meaning.

The general convention does not permit underscores in names, but it is sometimes useful to "underscore" the fact that the public and private versions represent the same entity. It is allowed only for private variables.

This convention could be changed in future in a non-breaking way.
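For example, the naming pattern described above looks like this:

```kotlin
// A private mutable backing variable shadowed by a public read-only property
// with the same meaning.
class Counter {
    private var _count: Int = 0
    val count: Int get() = _count

    fun increment() {
        _count++
    }
}
```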
## Functions and Properties One-liners

Use one-liners when they occupy a single code window line, both for functions and for properties with getters like `val b: String get() = "fff"`. The same should be done with multiline expressions when they can be cleanly separated.

There is no universal consensus on whether to use `fun a() = ...` or `fun a() { return ... }`. Yet from the reader's point of view, one-liners better show that the property or function is easily calculated.
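For example:

```kotlin
// Preferred one-liners when they fit on a single code window line:
val b: String get() = "fff"
fun square(x: Double): Double = x * x

// The block form is better reserved for bodies that do not fit cleanly on one line:
fun describe(x: Double): String {
    val sign = if (x >= 0) "non-negative" else "negative"
    return "$x is $sign"
}
```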

View File

@@ -1,21 +1,24 @@

# Expressions

Expressions is a feature that allows constructing lazily or immediately calculated parametric mathematical expressions.

The potential use cases for it (so far) are the following:

* lazy evaluation (in general a simple lambda is better, but there are some border cases);
* automatic differentiation in single and multiple dimensions;
* generation of mathematical syntax trees with subsequent code generation for other languages;
* symbolic computations, especially differentiation (and some other actions with `kmath-symja` integration with Symja's `IExpr`&mdash;integration, simplification, and more);
* visualization with `kmath-jupyter`.

The workhorse of this API is the `Expression` interface, which exposes a single `operator fun invoke(arguments: Map<Symbol, T>): T` method. `ExpressionAlgebra` is used to generate expressions and introduce variables.
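A minimal sketch of implementing and invoking an `Expression` by hand, assuming the functional `Expression` interface and the `symbol` delegate from `kmath-core` (exact package layout may differ between versions):

```kotlin
import space.kscience.kmath.expressions.Expression
import space.kscience.kmath.expressions.symbol

val x by symbol

// Expression is a single-method interface over a map of symbol bindings.
val square = Expression<Double> { arguments ->
    val value = arguments[x] ?: error("x is not bound")
    value * value
}

val nine = square(mapOf(x to 3.0)) // 9.0
```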
Currently there are two implementations:

* Generic `ExpressionField` in `kmath-core`, which allows construction of custom lazy expressions
* Auto-differentiation expression in the `kmath-commons` module, which allows using the full power of `DerivativeStructure` from commons-math. **TODO: add example**

View File

@@ -1,8 +1,12 @@

## Basic linear algebra layout

KMath support for linear algebra is organized in a context-oriented way, which means that operations are in most cases declared in context classes and are not members of the classes that store data. This allows a more flexible approach to maintaining multiple back-ends. New operations are added as extensions to contexts instead of being member functions of data structures.

The main context for linear algebra over matrices and vectors is `LinearSpace`, which defines addition and dot products of matrices and vectors:

```kotlin
import space.kscience.kmath.linear.*

@@ -28,4 +32,5 @@ LinearSpace.Companion.real {

## Backends overview

### EJML

### Commons Math
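For reference, a minimal sketch of the kind of code the elided block above contains, assuming the `Double`-specialized `LinearSpace` context from `kmath-core`; the builder names should be checked against the current API:

```kotlin
import space.kscience.kmath.linear.*

fun main() {
    LinearSpace.Companion.real {
        // Operations live on the context, not on the matrix/vector classes.
        val matrix = buildMatrix(2, 2) { i, j -> if (i == j) 2.0 else 0.0 }
        val vector = buildVector(2) { i -> i + 1.0 }
        val product = matrix dot vector // dot is a context extension
        println(product)
    }
}
```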

View File

@@ -8,6 +8,7 @@ One of the most sought after features of mathematical libraries is the high-perf
structures. In `kmath` performance depends on which particular context was used for operation.

Let us consider the following contexts:

```kotlin
// automatically build context most suited for given type.
val autoField = NDField.auto(DoubleField, dim, dim)

@@ -16,6 +17,7 @@ Let us consider following contexts:
//A generic boxing field. It should be used for objects, not primitives.
val genericField = NDField.buffered(DoubleField, dim, dim)
```

Now let us perform several tests and see which implementation is best suited for each case:

## Test case

@@ -24,7 +26,9 @@ To test performance we will take 2d-structures with `dim = 1000` and add a struc
to it `n = 1000` times.

## Specialized

The code to run this looks like:

```kotlin
specializedField.run {
    var res: NDBuffer<Double> = one

@@ -33,13 +37,16 @@ The code to run this looks like:
    }
}
```

The performance of this code is the best of all tests since it inlines all operations and is specialized for operation with doubles. We will measure everything else relative to this one, so the time for this test will be `1x` (real time on my computer is about 4.5 seconds). The only problem with this approach is that it requires specifying the type from the beginning. Everyone does so anyway, so it is the recommended approach.

## Automatic

Let's do the same with automatic field inference:

```kotlin
autoField.run {
    var res = one

@@ -48,13 +55,16 @@ Let's do the same with automatic field inference:
    }
}
```

The speed of this operation is approximately the same as for the specialized case since `NDField.auto` just returns the same `RealNDField` in this case. Of course, it is usually better to use the specialized method to be sure.

## Lazy

The lazy field does not produce a structure when asked; instead it generates an empty structure and fills it on demand, using coroutines to parallelize computations.

When one calls

```kotlin
lazyField.run {
    var res = one

@@ -63,12 +73,14 @@ When one calls
    }
}
```

The result will be calculated almost immediately, but it will be empty. To get the full result structure one needs to call all its elements. In this case the computation overhead will be huge, so this field should never be used if one expects to use the full result structure. Though if one wants only a small fraction, it could save a lot of time.

This field could still be used with reasonable performance if the calling code is changed:

```kotlin
lazyField.run {
    val res = one.map {

@@ -82,10 +94,13 @@ This field still could be used with reasonable performance if call code is chang
    res.elements().forEach { it.second }
}
```

In this case it completes in about `4x-5x` time due to boxing.

## Boxing

The boxing field produced by

```kotlin
genericField.run {
    var res: NDBuffer<Double> = one

@@ -94,18 +109,22 @@ The boxing field produced by
    }
}
```

is the slowest one, because it requires boxing and unboxing the `double` on each operation. It takes about `15x` time (**TODO: there seems to be a problem here, it should be slow, but not that slow**). This field should never be used for primitives.

## Element operation

Let us also check the speed for direct operations on elements:

```kotlin
var res = genericField.one
repeat(n) {
    res += 1.0
}
```

One would expect it to be at least as slow as the field operation, but in fact this one takes only `2x` time to complete. This happens because in this particular case it does not use the actual `NDField` but is instead calculated directly via an extension function.

@@ -114,6 +133,7 @@ via extension function.
Usually it is a bad idea to compare direct numerical operation performance in different languages, but it is hard to work completely without a frame of reference. In this case, simple numpy code:

```python
import numpy as np

@@ -121,7 +141,9 @@ res = np.ones((1000,1000))
for i in range(1000):
    res = res + 1.0
```

gives a completion time of about `1.1x`, which means that the specialized Kotlin code is in fact working faster (I think because of better memory management). Of course, if one writes `res += 1.0`, the performance will be different, but it would be a different case, because numpy overrides `+=` with in-place operations. In-place operations are available in `kmath` with `MutableNDStructure` but there is no field for it (one can still work with mapping

View File

@ -1,27 +1,54 @@
# Polynomials and Rational Functions # Polynomials and Rational Functions
KMath provides a way to work with uni- and multivariate polynomials and rational functions. It includes full support of arithmetic operations of integers, **constants** (elements of ring polynomials are build over), variables (for certain multivariate implementations), polynomials and rational functions encapsulated in so-called **polynomial space** and **rational function space** and some other utilities such as algebraic differentiation and substitution. KMath provides a way to work with uni- and multivariate polynomials and rational functions. It includes full support of
arithmetic operations of integers, **constants** (elements of ring polynomials are build over), variables (for certain
multivariate implementations), polynomials and rational functions encapsulated in so-called **polynomial space** and *
*rational function space** and some other utilities such as algebraic differentiation and substitution.
## Concrete realizations ## Concrete realizations
There are three approaches to representing polynomials:

1. For univariate polynomials one can represent and store a polynomial as a list of coefficients for each power of the variable, i.e. the polynomial $a_0 + \dots + a_n x^n $ can be represented as the finite sequence $(a_0; \dots; a_n)$. (Compare to the sequential definition of polynomials.)
2. For multivariate polynomials one can represent and store a polynomial as a matching (in programming it is called a "map" or "dictionary", in math a [functional relation](https://en.wikipedia.org/wiki/Binary_relation#Special_types_of_binary_relations)) of each "**term signature**" (which describes what variables appear in the term and in what powers) with the corresponding coefficient of the term. There are two possible ways to represent a term signature:
   1. One can number all the variables, so a term signature can be represented as a sequence describing the powers of the variables, i.e. the signature of the term $c \\; x_0^{d_0} \dots x_n^{d_n} $ (for natural or zero $d_i $) can be represented as the finite sequence $(d_0; \dots; d_n)$.
   2. One can represent variables as objects ("**labels**"), so a term signature can also be represented as a matching of each variable appearing in the term with its power, i.e. the signature of the term $c \\; x_0^{d_0} \dots x_n^{d_n} $ (for natural non-zero $d_i $) can be represented as the finite matching $(x_0 \to d_0; \dots; x_n \to d_n)$.

All three approaches are implemented by the "list", "numbered", and "labeled" versions of polynomials and polynomial spaces respectively. All rational functions are represented as fractions with a corresponding polynomial numerator and denominator, and the rational function spaces are implemented in the same way as the usual field of rational numbers (or, more precisely, as any field of fractions over an integral domain) would be implemented.
So, here are some details. Let `C` be the type of constants. Then:
1. `ListPolynomial`, `ListPolynomialSpace`, `ListRationalFunction` and `ListRationalFunctionSpace` implement the first scenario. `ListPolynomial` stores the polynomial $a_0 + \dots + a_n x^n $ as the coefficient list `listOf(a_0, ..., a_n)` (of type `List<C>`).

   They also have the variation `ScalableListPolynomialSpace`, which replaces the former polynomial space and implements `ScaleOperations`.
2. `NumberedPolynomial`, `NumberedPolynomialSpace`, `NumberedRationalFunction` and `NumberedRationalFunctionSpace` implement the second scenario. `NumberedPolynomial` stores polynomials as structures of type `Map<List<UInt>, C>`. Signatures are stored as `List<UInt>`. To prevent ambiguity, signatures should not end with zeros.
3. `LabeledPolynomial`, `LabeledPolynomialSpace`, `LabeledRationalFunction` and `LabeledRationalFunctionSpace` implement the third scenario using the common `Symbol` as the variable type. `LabeledPolynomial` stores polynomials as structures of type `Map<Map<Symbol, UInt>, C>`. Signatures are stored as `Map<Symbol, UInt>`. To prevent ambiguity, no signature should map any variable to zero.
### Example: `ListPolynomial`

For example, polynomial $2 - 3x + x^2 $ (with `Int` coefficients) is represented
```kotlin
val polynomial: ListPolynomial<Int> = ListPolynomial(listOf(2, -3, 1))
// or
val polynomial: ListPolynomial<Int> = ListPolynomial(2, -3, 1)
```
All algebraic operations can be used in the corresponding space:
```kotlin
val computationResult = Int.algebra.listPolynomialSpace {
    ListPolynomial(2, -3, 1) + ListPolynomial(0, 6) == ListPolynomial(2, 3, 1)
}
```
@ -41,7 +69,8 @@ For more see [examples](../examples/src/main/kotlin/space/kscience/kmath/functio
### Example: `NumberedPolynomial`

For example, polynomial $3 + 5 x_1 - 7 x_0^2 x_2 $ (with `Int` coefficients) is represented
```kotlin
val polynomial: NumberedPolynomial<Int> = NumberedPolynomial(
    mapOf(
        // ...
    )
)
```
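The map entries are elided above; based on the signature encoding described earlier (each key is a `List<UInt>` of powers of $x_0, x_1, x_2 $, without trailing zeros), a complete initializer could plausibly look as follows (a sketch, not taken verbatim from the source):
```kotlin
val polynomial: NumberedPolynomial<Int> = NumberedPolynomial(
    mapOf(
        listOf<UInt>() to 3,      // constant term 3
        listOf(0u, 1u) to 5,      // 5 * x_1
        listOf(2u, 0u, 1u) to -7, // -7 * x_0^2 * x_2
    )
)
```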
All algebraic operations can be used in the corresponding space:
```kotlin
val computationResult = Int.algebra.numberedPolynomialSpace {
    NumberedPolynomial(
        // ...
    )
}
```
@ -83,7 +113,8 @@ For more see [examples](../examples/src/main/kotlin/space/kscience/kmath/functio
### Example: `LabeledPolynomial`

For example, polynomial $3 + 5 y - 7 x^2 z $ (with `Int` coefficients) is represented
```kotlin
val polynomial: LabeledPolynomial<Int> = LabeledPolynomial(
    mapOf(
        // ...
    )
)
```
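Analogously, using the `Map<Symbol, UInt>` signature encoding described earlier, a complete initializer could plausibly look as follows (a sketch, not taken verbatim from the source; the `Symbol` type and the `symbol` delegate are assumed to come from kmath-core):
```kotlin
val x by symbol
val y by symbol
val z by symbol

val polynomial: LabeledPolynomial<Int> = LabeledPolynomial(
    mapOf(
        emptyMap<Symbol, UInt>() to 3,  // constant term 3
        mapOf(y to 1u) to 5,            // 5 * y
        mapOf(x to 2u, z to 1u) to -7,  // -7 * x^2 * z
    )
)
```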
All algebraic operations can be used in the corresponding space:
```kotlin
val computationResult = Int.algebra.labeledPolynomialSpace {
    LabeledPolynomial(
        // ...
    )
}
```
@ -150,23 +182,42 @@ classDiagram
    PolynomialSpaceOfFractions <|-- MultivariatePolynomialSpaceOfFractions
```
The `Polynomial` and `RationalFunction` interfaces are implemented as abstractions of polynomials and rational functions respectively (although there is not a lot of logic in them), along with `PolynomialSpace` and `RationalFunctionSpace` (which implement the `Ring` interface) as abstractions of polynomial and rational function spaces. More precisely, they allow declaring the common logic of interaction with such objects and spaces:
- `Polynomial` does not provide any logic. It is a marker interface.
- `RationalFunction` provides the numerator and denominator of a rational function and a destructuring declaration for them.
- `PolynomialSpace` provides all possible arithmetic interactions of integers, constants (of type `C`), and polynomials (of type `P`), such as addition, subtraction and multiplication, as well as common properties such as the degree of a polynomial.
- `RationalFunctionSpace` provides the same as `PolynomialSpace`, but also for rational functions: all possible arithmetic interactions of integers, constants (of type `C`), polynomials (of type `P`), and rational functions (of type `R`), such as addition, subtraction, multiplication and, in some cases, division, as well as common properties such as the degree of a polynomial.
Then, to add an abstraction of similar behaviour with variables (in the multivariate case), `MultivariatePolynomialSpace` and `MultivariateRationalFunctionSpace` are implemented. They just include variables (of type `V`) in the interactions of the entities.
Also, to remove boilerplate, helping subinterfaces and abstract subclasses are provided:

- `PolynomialSpaceOverRing` allows replacing the implementation of interactions of integers and constants with implementations from a provided ring over the constants (of type `A: Ring<C>`).
- `RationalFunctionSpaceOverRing` &mdash; the same but for `RationalFunctionSpace`.
- `RationalFunctionSpaceOverPolynomialSpace` &mdash; the same, but "the inheritance" also includes interactions with polynomials from a provided `PolynomialSpace`.
- `PolynomialSpaceOfFractions` is an abstract subclass of `RationalFunctionSpace` that implements all the fraction boilerplate with a provided (`protected`) constructor of rational functions from a polynomial numerator and denominator (illustrated by the sketch below).
- `MultivariateRationalFunctionSpaceOverMultivariatePolynomialSpace` and `MultivariatePolynomialSpaceOfFractions` &mdash; the same stories of operator inheritance and fraction boilerplate respectively, but in the multivariate case.
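To illustrate the kind of fraction boilerplate these abstractions factor out, here is a minimal, self-contained sketch; the names `Fraction`, `PolynomialRing` and `addFractions` are hypothetical illustrations, not the actual kmath-functions API:
```kotlin
// Hypothetical illustration of fraction boilerplate over a polynomial ring:
// p1/q1 + p2/q2 = (p1*q2 + p2*q1) / (q1*q2)
data class Fraction<P>(val numerator: P, val denominator: P)

interface PolynomialRing<P> {
    fun add(a: P, b: P): P
    fun multiply(a: P, b: P): P
}

fun <P> PolynomialRing<P>.addFractions(a: Fraction<P>, b: Fraction<P>): Fraction<P> =
    Fraction(
        add(multiply(a.numerator, b.denominator), multiply(b.numerator, a.denominator)),
        multiply(a.denominator, b.denominator)
    )
```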
## Utilities

For all kinds of polynomials the following common utilities are provided (implementation details depend on the kind of polynomial):

1. differentiation and anti-differentiation (see the sketch below),
2. substitution, invocation and functional representation.
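As a small illustration of the first item, here is a self-contained sketch of list-based differentiation in plain Kotlin (illustrative only, not the kmath-functions API):
```kotlin
// For coefficients (a_0, ..., a_n) of a_0 + a_1*x + ... + a_n*x^n,
// the derivative has coefficients (1*a_1, 2*a_2, ..., n*a_n).
fun derivativeCoefficients(coefficients: List<Int>): List<Int> =
    coefficients.drop(1).mapIndexed { index, c -> (index + 1) * c }

fun main() {
    // d/dx (2 - 3x + x^2) = -3 + 2x
    println(derivativeCoefficients(listOf(2, -3, 1))) // [-3, 2]
}
```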

View File

@ -3,6 +3,7 @@
The Maven coordinates of this project are `${group}:${name}:${version}`.

**Gradle:**
```kotlin
repositories {
    maven("https://repo.kotlin.link")
}
```

View File

@ -25,7 +25,8 @@ experience could be achieved with [kmath-for-real](/kmath-for-real) extension mo
# Goal

* Provide a flexible and powerful API to work with mathematics abstractions in Kotlin-multiplatform (JVM, JS, Native and Wasm).
* Provide basic multiplatform implementations for those abstractions (without significant performance optimization).
* Provide bindings and wrappers with those abstractions for popular optimized platform libraries.
@ -67,16 +68,19 @@ feedback are also welcome.
## Performance

Calculation performance is one of the major goals of KMath in the future, but in some cases it is impossible to achieve both performance and flexibility.

We expect to focus on creating a convenient universal API first and then work on increasing performance for specific cases. We expect the worst KMath benchmarks will perform better than native Python, but worse than optimized native/SciPy (mostly due to boxing operations on primitive numbers). The best performance of optimized parts could be better than SciPy.
## Requirements

KMath currently relies on JDK 11 for compilation and execution of the Kotlin-JVM part. We recommend using GraalVM-CE or Oracle GraalVM for execution to get better performance.
### Repositories
@ -99,4 +103,7 @@ dependencies {
## Contributing

The project requires a lot of additional work. The most important thing we need is feedback about what features are required the most. Feel free to create feature requests. Code contributions are also welcome, especially in issues marked with the [good first issue](https://github.com/SciProgCentre/kmath/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) label.

View File

@ -67,8 +67,8 @@ kotlin {
} }
tasks.withType<KotlinJvmCompile> { tasks.withType<KotlinJvmCompile> {
kotlinOptions { compilerOptions {
freeCompilerArgs = freeCompilerArgs + "-Xjvm-default=all" + "-Xopt-in=kotlin.RequiresOptIn" + "-Xlambdas=indy" freeCompilerArgs.addAll("-Xjvm-default=all", "-Xopt-in=kotlin.RequiresOptIn", "-Xlambdas=indy")
} }
} }

View File

@ -6,6 +6,7 @@
package space.kscience.kmath.expressions package space.kscience.kmath.expressions
import space.kscience.kmath.UnstableKMathAPI import space.kscience.kmath.UnstableKMathAPI
// Only kmath-core is needed. // Only kmath-core is needed.
// Let's declare some variables // Let's declare some variables
@ -51,7 +52,7 @@ fun main() {
// >>> 0.0 // >>> 0.0
// But in case you forgot to specify bound symbol's value, exception is thrown: // But in case you forgot to specify bound symbol's value, exception is thrown:
println( runCatching { someExpression(z to 4.0) } ) println(runCatching { someExpression(z to 4.0) })
// >>> Failure(java.lang.IllegalStateException: Symbol 'x' is not supported in ...) // >>> Failure(java.lang.IllegalStateException: Symbol 'x' is not supported in ...)
// The reason is that the expression is evaluated lazily, // The reason is that the expression is evaluated lazily,

View File

@ -77,7 +77,7 @@ suspend fun main() {
val result = chi2.optimizeWith( val result = chi2.optimizeWith(
CMOptimizer, CMOptimizer,
mapOf(a to 1.5, b to 0.9, c to 1.0), mapOf(a to 1.5, b to 0.9, c to 1.0),
){ ) {
FunctionOptimizationTarget(OptimizationDirection.MINIMIZE) FunctionOptimizationTarget(OptimizationDirection.MINIMIZE)
} }

View File

@ -8,7 +8,6 @@ package space.kscience.kmath.operations
import space.kscience.kmath.commons.linear.CMLinearSpace import space.kscience.kmath.commons.linear.CMLinearSpace
import space.kscience.kmath.linear.matrix import space.kscience.kmath.linear.matrix
import space.kscience.kmath.nd.Float64BufferND import space.kscience.kmath.nd.Float64BufferND
import space.kscience.kmath.nd.ShapeND
import space.kscience.kmath.nd.Structure2D import space.kscience.kmath.nd.Structure2D
import space.kscience.kmath.nd.mutableStructureND import space.kscience.kmath.nd.mutableStructureND
import space.kscience.kmath.nd.ndAlgebra import space.kscience.kmath.nd.ndAlgebra

View File

@ -44,10 +44,10 @@ fun main() = with(Double.seriesAlgebra()) {
Plotly.page { Plotly.page {
h1 { +"This is my plot" } h1 { +"This is my plot" }
p{ p {
+"Kolmogorov-smirnov test for s1 and s2: ${kmTest.value}" +"Kolmogorov-smirnov test for s1 and s2: ${kmTest.value}"
} }
plot{ plot {
plotSeries("s1", s1) plotSeries("s1", s1)
plotSeries("s2", s2) plotSeries("s2", s2)
plotSeries("s3", s3) plotSeries("s3", s3)

View File

@ -53,7 +53,10 @@ class StreamDoubleFieldND(override val shape: ShapeND) : FieldND<Double, Float64
return BufferND(strides, array.asBuffer()) return BufferND(strides, array.asBuffer())
} }
override fun mutableStructureND(shape: ShapeND, initializer: DoubleField.(IntArray) -> Double): MutableBufferND<Double> { override fun mutableStructureND(
shape: ShapeND,
initializer: DoubleField.(IntArray) -> Double,
): MutableBufferND<Double> {
val array = IntStream.range(0, strides.linearSize).parallel().mapToDouble { offset -> val array = IntStream.range(0, strides.linearSize).parallel().mapToDouble { offset ->
val index = strides.index(offset) val index = strides.index(offset)
DoubleField.initializer(index) DoubleField.initializer(index)

View File

@ -12,7 +12,7 @@ import space.kscience.kmath.operations.withSize
inline fun <reified R : Any> MutableBuffer.Companion.same( inline fun <reified R : Any> MutableBuffer.Companion.same(
n: Int, n: Int,
value: R value: R,
): MutableBuffer<R> = MutableBuffer(n) { value } ): MutableBuffer<R> = MutableBuffer(n) { value }

View File

@ -31,7 +31,7 @@ fun main() {
val exampleNumber = 1 val exampleNumber = 1
var y_hat = funcDifficultForLm(t_example, p_example, exampleNumber) var y_hat = funcDifficultForLm(t_example, p_example, exampleNumber)
var p_init = DoubleTensorAlgebra.zeros(ShapeND(intArrayOf(Nparams, 1))).as2D() var p_init = DoubleTensorAlgebra.zeros(ShapeND(intArrayOf(Nparams, 1))).as2D()
for (i in 0 until Nparams) { for (i in 0 until Nparams) {
@ -51,7 +51,8 @@ fun main() {
val opts = doubleArrayOf(3.0, 10000.0, 1e-6, 1e-6, 1e-6, 1e-6, 1e-2, 11.0, 9.0, 1.0) val opts = doubleArrayOf(3.0, 10000.0, 1e-6, 1e-6, 1e-6, 1e-6, 1e-2, 11.0, 9.0, 1.0)
// val opts = doubleArrayOf(3.0, 10000.0, 1e-6, 1e-6, 1e-6, 1e-6, 1e-3, 11.0, 9.0, 1.0) // val opts = doubleArrayOf(3.0, 10000.0, 1e-6, 1e-6, 1e-6, 1e-6, 1e-3, 11.0, 9.0, 1.0)
val inputData = LMInput(::funcDifficultForLm, val inputData = LMInput(
::funcDifficultForLm,
p_init.as2D(), p_init.as2D(),
t, t,
y_dat, y_dat,
@ -64,7 +65,8 @@ fun main() {
doubleArrayOf(opts[6], opts[7], opts[8]), doubleArrayOf(opts[6], opts[7], opts[8]),
opts[9].toInt(), opts[9].toInt(),
10, 10,
1) 1
)
val result = DoubleTensorAlgebra.levenbergMarquardt(inputData) val result = DoubleTensorAlgebra.levenbergMarquardt(inputData)
@ -76,7 +78,7 @@ fun main() {
println() println()
println("Y true and y received:") println("Y true and y received:")
var y_hat_after = funcDifficultForLm(t_example, result.resultParameters, exampleNumber) var y_hat_after = funcDifficultForLm(t_example, result.resultParameters, exampleNumber)
for (i in 0 until y_hat.shape.component1()) { for (i in 0 until y_hat.shape.component1()) {
val x = (y_hat[i, 0] * 10000).roundToInt() / 10000.0 val x = (y_hat[i, 0] * 10000).roundToInt() / 10000.0
val y = (y_hat_after[i, 0] * 10000).roundToInt() / 10000.0 val y = (y_hat_after[i, 0] * 10000).roundToInt() / 10000.0

View File

@ -18,7 +18,8 @@ import kotlin.math.roundToInt
fun main() { fun main() {
val startedData = getStartDataForFuncEasy() val startedData = getStartDataForFuncEasy()
val inputData = LMInput(::funcEasyForLm, val inputData = LMInput(
::funcEasyForLm,
DoubleTensorAlgebra.ones(ShapeND(intArrayOf(4, 1))).as2D(), DoubleTensorAlgebra.ones(ShapeND(intArrayOf(4, 1))).as2D(),
startedData.t, startedData.t,
startedData.y_dat, startedData.y_dat,
@ -31,7 +32,8 @@ fun main() {
doubleArrayOf(startedData.opts[6], startedData.opts[7], startedData.opts[8]), doubleArrayOf(startedData.opts[6], startedData.opts[7], startedData.opts[8]),
startedData.opts[9].toInt(), startedData.opts[9].toInt(),
10, 10,
startedData.example_number) startedData.example_number
)
val result = DoubleTensorAlgebra.levenbergMarquardt(inputData) val result = DoubleTensorAlgebra.levenbergMarquardt(inputData)
@ -43,7 +45,7 @@ fun main() {
println() println()
println("Y true and y received:") println("Y true and y received:")
var y_hat_after = funcDifficultForLm(startedData.t, result.resultParameters, startedData.example_number) var y_hat_after = funcDifficultForLm(startedData.t, result.resultParameters, startedData.example_number)
for (i in 0 until startedData.y_dat.shape.component1()) { for (i in 0 until startedData.y_dat.shape.component1()) {
val x = (startedData.y_dat[i, 0] * 10000).roundToInt() / 10000.0 val x = (startedData.y_dat[i, 0] * 10000).roundToInt() / 10000.0
val y = (y_hat_after[i, 0] * 10000).roundToInt() / 10000.0 val y = (y_hat_after[i, 0] * 10000).roundToInt() / 10000.0

View File

@ -15,6 +15,7 @@ import space.kscience.kmath.tensors.core.DoubleTensorAlgebra
import space.kscience.kmath.tensors.core.LMInput import space.kscience.kmath.tensors.core.LMInput
import space.kscience.kmath.tensors.core.levenbergMarquardt import space.kscience.kmath.tensors.core.levenbergMarquardt
import kotlin.math.roundToInt import kotlin.math.roundToInt
fun main() { fun main() {
val NData = 100 val NData = 100
var t_example = DoubleTensorAlgebra.ones(ShapeND(intArrayOf(NData, 1))).as2D() var t_example = DoubleTensorAlgebra.ones(ShapeND(intArrayOf(NData, 1))).as2D()
@ -30,7 +31,7 @@ fun main() {
val exampleNumber = 1 val exampleNumber = 1
var y_hat = funcMiddleForLm(t_example, p_example, exampleNumber) var y_hat = funcMiddleForLm(t_example, p_example, exampleNumber)
var p_init = DoubleTensorAlgebra.zeros(ShapeND(intArrayOf(Nparams, 1))).as2D() var p_init = DoubleTensorAlgebra.zeros(ShapeND(intArrayOf(Nparams, 1))).as2D()
for (i in 0 until Nparams) { for (i in 0 until Nparams) {
@ -49,7 +50,8 @@ fun main() {
p_min = p_min.div(1.0 / 50.0) p_min = p_min.div(1.0 / 50.0)
val opts = doubleArrayOf(3.0, 7000.0, 1e-5, 1e-5, 1e-5, 1e-5, 1e-5, 11.0, 9.0, 1.0) val opts = doubleArrayOf(3.0, 7000.0, 1e-5, 1e-5, 1e-5, 1e-5, 1e-5, 11.0, 9.0, 1.0)
val inputData = LMInput(::funcMiddleForLm, val inputData = LMInput(
::funcMiddleForLm,
p_init.as2D(), p_init.as2D(),
t, t,
y_dat, y_dat,
@ -62,7 +64,8 @@ fun main() {
doubleArrayOf(opts[6], opts[7], opts[8]), doubleArrayOf(opts[6], opts[7], opts[8]),
opts[9].toInt(), opts[9].toInt(),
10, 10,
1) 1
)
val result = DoubleTensorAlgebra.levenbergMarquardt(inputData) val result = DoubleTensorAlgebra.levenbergMarquardt(inputData)
@ -74,7 +77,7 @@ fun main() {
println() println()
var y_hat_after = funcMiddleForLm(t_example, result.resultParameters, exampleNumber) var y_hat_after = funcMiddleForLm(t_example, result.resultParameters, exampleNumber)
for (i in 0 until y_hat.shape.component1()) { for (i in 0 until y_hat.shape.component1()) {
val x = (y_hat[i, 0] * 10000).roundToInt() / 10000.0 val x = (y_hat[i, 0] * 10000).roundToInt() / 10000.0
val y = (y_hat_after[i, 0] * 10000).roundToInt() / 10000.0 val y = (y_hat_after[i, 0] * 10000).roundToInt() / 10000.0

View File

@ -6,18 +6,23 @@
package space.kscience.kmath.tensors.LevenbergMarquardt.StreamingLm package space.kscience.kmath.tensors.LevenbergMarquardt.StreamingLm
import kotlinx.coroutines.delay import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.* import kotlinx.coroutines.flow.Flow
import space.kscience.kmath.nd.* import kotlinx.coroutines.flow.flow
import space.kscience.kmath.nd.MutableStructure2D
import space.kscience.kmath.nd.ShapeND
import space.kscience.kmath.nd.as2D
import space.kscience.kmath.nd.component1
import space.kscience.kmath.tensors.LevenbergMarquardt.StartDataLm import space.kscience.kmath.tensors.LevenbergMarquardt.StartDataLm
import space.kscience.kmath.tensors.core.BroadcastDoubleTensorAlgebra.zeros import space.kscience.kmath.tensors.core.BroadcastDoubleTensorAlgebra.zeros
import space.kscience.kmath.tensors.core.DoubleTensorAlgebra import space.kscience.kmath.tensors.core.DoubleTensorAlgebra
import space.kscience.kmath.tensors.core.LMInput import space.kscience.kmath.tensors.core.LMInput
import space.kscience.kmath.tensors.core.levenbergMarquardt import space.kscience.kmath.tensors.core.levenbergMarquardt
import kotlin.random.Random import kotlin.random.Random
import kotlin.reflect.KFunction3
fun streamLm(lm_func: (MutableStructure2D<Double>, MutableStructure2D<Double>, Int) -> (MutableStructure2D<Double>), fun streamLm(
startData: StartDataLm, launchFrequencyInMs: Long, numberOfLaunches: Int): Flow<MutableStructure2D<Double>> = flow{ lm_func: (MutableStructure2D<Double>, MutableStructure2D<Double>, Int) -> (MutableStructure2D<Double>),
startData: StartDataLm, launchFrequencyInMs: Long, numberOfLaunches: Int,
): Flow<MutableStructure2D<Double>> = flow {
var example_number = startData.example_number var example_number = startData.example_number
var p_init = startData.p_init var p_init = startData.p_init
@ -32,7 +37,8 @@ fun streamLm(lm_func: (MutableStructure2D<Double>, MutableStructure2D<Double>, I
var steps = numberOfLaunches var steps = numberOfLaunches
val isEndless = (steps <= 0) val isEndless = (steps <= 0)
val inputData = LMInput(lm_func, val inputData = LMInput(
lm_func,
p_init, p_init,
t, t,
y_dat, y_dat,
@ -45,7 +51,8 @@ fun streamLm(lm_func: (MutableStructure2D<Double>, MutableStructure2D<Double>, I
doubleArrayOf(opts[6], opts[7], opts[8]), doubleArrayOf(opts[6], opts[7], opts[8]),
opts[9].toInt(), opts[9].toInt(),
10, 10,
example_number) example_number
)
while (isEndless || steps > 0) { while (isEndless || steps > 0) {
val result = DoubleTensorAlgebra.levenbergMarquardt(inputData) val result = DoubleTensorAlgebra.levenbergMarquardt(inputData)
@ -57,7 +64,7 @@ fun streamLm(lm_func: (MutableStructure2D<Double>, MutableStructure2D<Double>, I
} }
} }
fun generateNewYDat(y_dat: MutableStructure2D<Double>, delta: Double): MutableStructure2D<Double>{ fun generateNewYDat(y_dat: MutableStructure2D<Double>, delta: Double): MutableStructure2D<Double> {
val n = y_dat.shape.component1() val n = y_dat.shape.component1()
val y_dat_new = zeros(ShapeND(intArrayOf(n, 1))).as2D() val y_dat_new = zeros(ShapeND(intArrayOf(n, 1))).as2D()
for (i in 0 until n) { for (i in 0 until n) {

View File

@ -5,14 +5,15 @@
package space.kscience.kmath.tensors.LevenbergMarquardt.StreamingLm package space.kscience.kmath.tensors.LevenbergMarquardt.StreamingLm
import space.kscience.kmath.nd.* import space.kscience.kmath.nd.component1
import space.kscience.kmath.tensors.LevenbergMarquardt.* import space.kscience.kmath.tensors.LevenbergMarquardt.funcDifficultForLm
import space.kscience.kmath.tensors.LevenbergMarquardt.getStartDataForFuncDifficult
import kotlin.math.roundToInt import kotlin.math.roundToInt
suspend fun main(){ suspend fun main() {
val startData = getStartDataForFuncDifficult() val startData = getStartDataForFuncDifficult()
// Creating the flow:
val lmFlow = streamLm(::funcDifficultForLm, startData, 0, 100) val lmFlow = streamLm(::funcDifficultForLm, startData, 0, 100)
var initialTime = System.currentTimeMillis() var initialTime = System.currentTimeMillis()
var lastTime: Long var lastTime: Long
val launches = mutableListOf<Long>() val launches = mutableListOf<Long>()

View File

@ -18,7 +18,7 @@ import space.kscience.kmath.tensors.core.DoubleTensorAlgebra.Companion.pow
import space.kscience.kmath.tensors.core.DoubleTensorAlgebra.Companion.times import space.kscience.kmath.tensors.core.DoubleTensorAlgebra.Companion.times
import space.kscience.kmath.tensors.core.asDoubleTensor import space.kscience.kmath.tensors.core.asDoubleTensor
public data class StartDataLm ( public data class StartDataLm(
var lm_matx_y_dat: MutableStructure2D<Double>, var lm_matx_y_dat: MutableStructure2D<Double>,
var example_number: Int, var example_number: Int,
var p_init: MutableStructure2D<Double>, var p_init: MutableStructure2D<Double>,
@ -29,10 +29,14 @@ public data class StartDataLm (
var p_min: MutableStructure2D<Double>, var p_min: MutableStructure2D<Double>,
var p_max: MutableStructure2D<Double>, var p_max: MutableStructure2D<Double>,
var consts: MutableStructure2D<Double>, var consts: MutableStructure2D<Double>,
var opts: DoubleArray var opts: DoubleArray,
) )
fun funcEasyForLm(t: MutableStructure2D<Double>, p: MutableStructure2D<Double>, exampleNumber: Int): MutableStructure2D<Double> { fun funcEasyForLm(
t: MutableStructure2D<Double>,
p: MutableStructure2D<Double>,
exampleNumber: Int,
): MutableStructure2D<Double> {
val m = t.shape.component1() val m = t.shape.component1()
var y_hat = DoubleTensorAlgebra.zeros(ShapeND(intArrayOf(m, 1))) var y_hat = DoubleTensorAlgebra.zeros(ShapeND(intArrayOf(m, 1)))
@ -40,15 +44,13 @@ fun funcEasyForLm(t: MutableStructure2D<Double>, p: MutableStructure2D<Double>,
y_hat = DoubleTensorAlgebra.exp((t.times(-1.0 / p[1, 0]))).times(p[0, 0]) + t.times(p[2, 0]).times( y_hat = DoubleTensorAlgebra.exp((t.times(-1.0 / p[1, 0]))).times(p[0, 0]) + t.times(p[2, 0]).times(
DoubleTensorAlgebra.exp((t.times(-1.0 / p[3, 0]))) DoubleTensorAlgebra.exp((t.times(-1.0 / p[3, 0])))
) )
} } else if (exampleNumber == 2) {
else if (exampleNumber == 2) {
val mt = t.max() val mt = t.max()
y_hat = (t.times(1.0 / mt)).times(p[0, 0]) + y_hat = (t.times(1.0 / mt)).times(p[0, 0]) +
(t.times(1.0 / mt)).pow(2).times(p[1, 0]) + (t.times(1.0 / mt)).pow(2).times(p[1, 0]) +
(t.times(1.0 / mt)).pow(3).times(p[2, 0]) + (t.times(1.0 / mt)).pow(3).times(p[2, 0]) +
(t.times(1.0 / mt)).pow(4).times(p[3, 0]) (t.times(1.0 / mt)).pow(4).times(p[3, 0])
} } else if (exampleNumber == 3) {
else if (exampleNumber == 3) {
y_hat = DoubleTensorAlgebra.exp((t.times(-1.0 / p[1, 0]))) y_hat = DoubleTensorAlgebra.exp((t.times(-1.0 / p[1, 0])))
.times(p[0, 0]) + DoubleTensorAlgebra.sin((t.times(1.0 / p[3, 0]))).times(p[2, 0]) .times(p[0, 0]) + DoubleTensorAlgebra.sin((t.times(1.0 / p[3, 0]))).times(p[2, 0])
} }
@ -56,32 +58,40 @@ fun funcEasyForLm(t: MutableStructure2D<Double>, p: MutableStructure2D<Double>,
return y_hat.as2D() return y_hat.as2D()
} }
fun funcMiddleForLm(t: MutableStructure2D<Double>, p: MutableStructure2D<Double>, exampleNumber: Int): MutableStructure2D<Double> { fun funcMiddleForLm(
t: MutableStructure2D<Double>,
p: MutableStructure2D<Double>,
exampleNumber: Int,
): MutableStructure2D<Double> {
val m = t.shape.component1() val m = t.shape.component1()
var y_hat = DoubleTensorAlgebra.zeros(ShapeND(intArrayOf (m, 1))) var y_hat = DoubleTensorAlgebra.zeros(ShapeND(intArrayOf(m, 1)))
val mt = t.max() val mt = t.max()
for(i in 0 until p.shape.component1()){ for (i in 0 until p.shape.component1()) {
y_hat += (t.times(1.0 / mt)).times(p[i, 0]) y_hat += (t.times(1.0 / mt)).times(p[i, 0])
} }
for(i in 0 until 5){ for (i in 0 until 5) {
y_hat = funcEasyForLm(y_hat.as2D(), p, exampleNumber).asDoubleTensor() y_hat = funcEasyForLm(y_hat.as2D(), p, exampleNumber).asDoubleTensor()
} }
return y_hat.as2D() return y_hat.as2D()
} }
fun funcDifficultForLm(t: MutableStructure2D<Double>, p: MutableStructure2D<Double>, exampleNumber: Int): MutableStructure2D<Double> { fun funcDifficultForLm(
t: MutableStructure2D<Double>,
p: MutableStructure2D<Double>,
exampleNumber: Int,
): MutableStructure2D<Double> {
val m = t.shape.component1() val m = t.shape.component1()
var y_hat = DoubleTensorAlgebra.zeros(ShapeND(intArrayOf (m, 1))) var y_hat = DoubleTensorAlgebra.zeros(ShapeND(intArrayOf(m, 1)))
val mt = t.max() val mt = t.max()
for(i in 0 until p.shape.component1()){ for (i in 0 until p.shape.component1()) {
y_hat = y_hat.plus( (t.times(1.0 / mt)).times(p[i, 0]) ) y_hat = y_hat.plus((t.times(1.0 / mt)).times(p[i, 0]))
} }
for(i in 0 until 4){ for (i in 0 until 4) {
y_hat = funcEasyForLm((y_hat.as2D() + t).as2D(), p, exampleNumber).asDoubleTensor() y_hat = funcEasyForLm((y_hat.as2D() + t).as2D(), p, exampleNumber).asDoubleTensor()
} }
@ -89,7 +99,7 @@ fun funcDifficultForLm(t: MutableStructure2D<Double>, p: MutableStructure2D<Doub
} }
fun getStartDataForFuncDifficult(): StartDataLm { fun getStartDataForFuncDifficult(): StartDataLm {
val NData = 200 val NData = 200
var t_example = DoubleTensorAlgebra.ones(ShapeND(intArrayOf(NData, 1))).as2D() var t_example = DoubleTensorAlgebra.ones(ShapeND(intArrayOf(NData, 1))).as2D()
for (i in 0 until NData) { for (i in 0 until NData) {
@ -104,7 +114,7 @@ fun getStartDataForFuncDifficult(): StartDataLm {
val exampleNumber = 1 val exampleNumber = 1
var y_hat = funcDifficultForLm(t_example, p_example, exampleNumber) var y_hat = funcDifficultForLm(t_example, p_example, exampleNumber)
var p_init = DoubleTensorAlgebra.zeros(ShapeND(intArrayOf(Nparams, 1))).as2D() var p_init = DoubleTensorAlgebra.zeros(ShapeND(intArrayOf(Nparams, 1))).as2D()
for (i in 0 until Nparams) { for (i in 0 until Nparams) {
@ -129,7 +139,7 @@ fun getStartDataForFuncDifficult(): StartDataLm {
return StartDataLm(y_dat, 1, p_init, t, y_dat, weight, dp, p_min.as2D(), p_max.as2D(), consts, opts) return StartDataLm(y_dat, 1, p_init, t, y_dat, weight, dp, p_min.as2D(), p_max.as2D(), consts, opts)
} }
fun getStartDataForFuncMiddle(): StartDataLm { fun getStartDataForFuncMiddle(): StartDataLm {
val NData = 100 val NData = 100
var t_example = DoubleTensorAlgebra.ones(ShapeND(intArrayOf(NData, 1))).as2D() var t_example = DoubleTensorAlgebra.ones(ShapeND(intArrayOf(NData, 1))).as2D()
for (i in 0 until NData) { for (i in 0 until NData) {
@ -144,7 +154,7 @@ fun getStartDataForFuncMiddle(): StartDataLm {
val exampleNumber = 1 val exampleNumber = 1
var y_hat = funcMiddleForLm(t_example, p_example, exampleNumber) var y_hat = funcMiddleForLm(t_example, p_example, exampleNumber)
var p_init = DoubleTensorAlgebra.zeros(ShapeND(intArrayOf(Nparams, 1))).as2D() var p_init = DoubleTensorAlgebra.zeros(ShapeND(intArrayOf(Nparams, 1))).as2D()
for (i in 0 until Nparams) { for (i in 0 until Nparams) {

View File

@ -5,13 +5,10 @@
kotlin.code.style=official kotlin.code.style=official
kotlin.mpp.stability.nowarn=true kotlin.mpp.stability.nowarn=true
kotlin.native.ignoreDisabledTargets=true kotlin.native.ignoreDisabledTargets=true
org.gradle.configureondemand=true org.gradle.configureondemand=true
org.gradle.jvmargs=-Xmx4096m org.gradle.jvmargs=-Xmx4096m
org.gradle.parallel=true org.gradle.parallel=true
org.gradle.workers.max=4 org.gradle.workers.max=4
toolsVersion=0.15.2-kotlin-1.9.22 toolsVersion=0.15.2-kotlin-1.9.22
#kotlin.experimental.tryK2=true #kotlin.experimental.tryK2=true
#kscience.wasm.disabled=true #kscience.wasm.disabled=true

View File

@ -1,5 +1,5 @@
distributionBase=GRADLE_USER_HOME distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-8.6-bin.zip distributionUrl=https\://services.gradle.org/distributions/gradle-8.7-bin.zip
zipStoreBase=GRADLE_USER_HOME zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists zipStorePath=wrapper/dists

View File

@ -2,17 +2,17 @@
Extensions to MST API: transformations, dynamic compilation and visualization.

- [expression-language](src/commonMain/kotlin/space/kscience/kmath/ast/parser.kt) : Expression language and its parser
- [mst-jvm-codegen](src/jvmMain/kotlin/space/kscience/kmath/asm/asm.kt) : Dynamic MST to JVM bytecode compiler
- [mst-js-codegen](src/jsMain/kotlin/space/kscience/kmath/estree/estree.kt) : Dynamic MST to JS compiler
- [rendering](src/commonMain/kotlin/space/kscience/kmath/ast/rendering/MathRenderer.kt) : Extendable MST rendering

## Artifact:

The Maven coordinates of this project are `space.kscience:kmath-ast:0.4.0-dev-3`.

**Gradle Kotlin DSL:**
```kotlin
repositories {
    maven("https://repo.kotlin.link")
}
```
@ -26,21 +26,27 @@ dependencies {
## Parsing expressions

In this module there is a parser from human-readable strings like `"x^3-x+3"` (in the more specific [grammar](reference/ArithmeticsEvaluator.g4)) to MST instances; a short usage sketch follows the lists below.

Supported literals:

1. Constants and variables (consist of latin letters, digits and underscores, can't start with a digit): `x`, `_Abc2`.
2. Numbers: `123`, `1.02`, `1e10`, `1e-10`, `1.0e+3`&mdash;all parsed either as `kotlin.Long` or `kotlin.Double`.

Supported binary operators (from the highest precedence to the lowest one):

1. `^`
2. `*`, `/`
3. `+`, `-`

Supported unary operator:

1. `-`, e.&nbsp;g. `-x`

Arbitrary unary and binary functions are also supported: names consist of latin letters, digits and underscores, and can't start with a digit. Examples:

1. `sin(x)`
2. `add(x, y)`
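A short usage sketch (it assumes the `parseMath` extension exposed by this module; treat the exact import path as an assumption):
```kotlin
import space.kscience.kmath.ast.parseMath

// Parse a human-readable expression in the grammar above into an MST.
val mst = "x^3-x+3".parseMath()
```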
@ -105,12 +111,15 @@ public final class CompiledExpression_-386104628_0 implements DoubleExpression {
}
```

Setting the JVM system property `space.kscience.kmath.ast.dump.generated.classes` to `1` makes the translator dump class files to the program's working directory, so they can be reviewed manually.

#### Limitations

- The same classes may be generated and loaded twice, so it is recommended to cache compiled expressions to avoid class loading overhead.
- This API is not supported by non-dynamic JVM implementations like TeaVM or GraalVM Native Image because they may not support class loaders.

### On JS
@ -188,7 +197,8 @@ public fun main() {
Result LaTeX:

$$\operatorname{exp}\\,\left(\sqrt{x}\right)-\frac{\frac{\operatorname{arcsin}\\,\left(2\\,x\right)}{2\times10^{10}+x^{3}}}{12}+x^{2/3}$$

Result MathML (can be used with MathJax or other renderers):

View File

@ -2,7 +2,7 @@ plugins {
id("space.kscience.gradle.mpp") id("space.kscience.gradle.mpp")
} }
kscience{ kscience {
jvm() jvm()
js() js()
native() native()
@ -22,7 +22,7 @@ kscience{
implementation(npm("js-base64", "3.6.1")) implementation(npm("js-base64", "3.6.1"))
} }
dependencies(jvmMain){ dependencies(jvmMain) {
implementation("org.ow2.asm:asm-commons:9.2") implementation("org.ow2.asm:asm-commons:9.2")
} }
@ -31,7 +31,7 @@ kscience{
kotlin { kotlin {
js { js {
nodejs { nodejs {
testTask{ testTask {
useMocha().timeout = "0" useMocha().timeout = "0"
} }
} }

View File

@ -8,21 +8,27 @@ ${artifact}
## Parsing expressions

In this module there is a parser from human-readable strings like `"x^3-x+3"` (in the more specific [grammar](reference/ArithmeticsEvaluator.g4)) to MST instances.

Supported literals:

1. Constants and variables (consist of latin letters, digits and underscores, can't start with a digit): `x`, `_Abc2`.
2. Numbers: `123`, `1.02`, `1e10`, `1e-10`, `1.0e+3`&mdash;all parsed either as `kotlin.Long` or `kotlin.Double`.

Supported binary operators (from the highest precedence to the lowest one):

1. `^`
2. `*`, `/`
3. `+`, `-`

Supported unary operator:

1. `-`, e.&nbsp;g. `-x`

Arbitrary unary and binary functions are also supported: names consist of latin letters, digits and underscores, and can't start with a digit. Examples:

1. `sin(x)`
2. `add(x, y)`
@ -87,12 +93,15 @@ public final class CompiledExpression_-386104628_0 implements DoubleExpression {
}
```

Setting the JVM system property `space.kscience.kmath.ast.dump.generated.classes` to `1` makes the translator dump class files to the program's working directory, so they can be reviewed manually.

#### Limitations

- The same classes may be generated and loaded twice, so it is recommended to cache compiled expressions to avoid class loading overhead.
- This API is not supported by non-dynamic JVM implementations like TeaVM or GraalVM Native Image because they may not support class loaders.

### On JS
@ -170,7 +179,8 @@ public fun main() {
Result LaTeX:

$$\operatorname{exp}\\,\left(\sqrt{x}\right)-\frac{\frac{\operatorname{arcsin}\\,\left(2\\,x\right)}{2\times10^{10}+x^{3}}}{12}+x^{2/3}$$

Result MathML (can be used with MathJax or other renderers):

View File

@ -68,7 +68,7 @@ public sealed interface TypedMst<T> : WithType<T> {
) : TypedMst<T> { ) : TypedMst<T> {
init { init {
require(left.type==right.type){"Left and right expressions must be of the same type"} require(left.type == right.type) { "Left and right expressions must be of the same type" }
} }
override val type: SafeType<T> get() = left.type override val type: SafeType<T> get() = left.type

View File

@ -426,11 +426,13 @@ public class InverseTrigonometricOperations(operations: Collection<String>?) : U
* The default instance configured with [TrigonometricOperations.ACOS_OPERATION], * The default instance configured with [TrigonometricOperations.ACOS_OPERATION],
* [TrigonometricOperations.ASIN_OPERATION], [TrigonometricOperations.ATAN_OPERATION]. * [TrigonometricOperations.ASIN_OPERATION], [TrigonometricOperations.ATAN_OPERATION].
*/ */
public val Default: InverseTrigonometricOperations = InverseTrigonometricOperations(setOf( public val Default: InverseTrigonometricOperations = InverseTrigonometricOperations(
TrigonometricOperations.ACOS_OPERATION, setOf(
TrigonometricOperations.ASIN_OPERATION, TrigonometricOperations.ACOS_OPERATION,
TrigonometricOperations.ATAN_OPERATION, TrigonometricOperations.ASIN_OPERATION,
)) TrigonometricOperations.ATAN_OPERATION,
)
)
} }
} }
@ -452,10 +454,12 @@ public class InverseHyperbolicOperations(operations: Collection<String>?) : Unar
* The default instance configured with [ExponentialOperations.ACOSH_OPERATION], * The default instance configured with [ExponentialOperations.ACOSH_OPERATION],
* [ExponentialOperations.ASINH_OPERATION], and [ExponentialOperations.ATANH_OPERATION]. * [ExponentialOperations.ASINH_OPERATION], and [ExponentialOperations.ATANH_OPERATION].
*/ */
public val Default: InverseHyperbolicOperations = InverseHyperbolicOperations(setOf( public val Default: InverseHyperbolicOperations = InverseHyperbolicOperations(
ExponentialOperations.ACOSH_OPERATION, setOf(
ExponentialOperations.ASINH_OPERATION, ExponentialOperations.ACOSH_OPERATION,
ExponentialOperations.ATANH_OPERATION, ExponentialOperations.ASINH_OPERATION,
)) ExponentialOperations.ATANH_OPERATION,
)
)
} }
} }

View File

@ -17,7 +17,8 @@ internal class TestFeatures {
fun printNumeric() { fun printNumeric() {
val num = object : Number() { val num = object : Number() {
override fun toByte(): Byte = throw UnsupportedOperationException() override fun toByte(): Byte = throw UnsupportedOperationException()
// override fun toChar(): Char = throw UnsupportedOperationException()
// override fun toChar(): Char = throw UnsupportedOperationException()
override fun toDouble(): Double = throw UnsupportedOperationException() override fun toDouble(): Double = throw UnsupportedOperationException()
override fun toFloat(): Float = throw UnsupportedOperationException() override fun toFloat(): Float = throw UnsupportedOperationException()
override fun toInt(): Int = throw UnsupportedOperationException() override fun toInt(): Int = throw UnsupportedOperationException()

View File

@ -81,8 +81,10 @@ internal class TestMathML {
@Test @Test
fun radicalWithIndex() = fun radicalWithIndex() =
testMathML(RadicalWithIndexSyntax("", SymbolSyntax("x"), SymbolSyntax("y")), testMathML(
"<mroot><mrow><mi>y</mi></mrow><mrow><mi>x</mi></mrow></mroot>") RadicalWithIndexSyntax("", SymbolSyntax("x"), SymbolSyntax("y")),
"<mroot><mrow><mi>y</mi></mrow><mrow><mi>x</mi></mrow></mroot>"
)
@Test @Test
fun multiplication() { fun multiplication() {

View File

@ -52,7 +52,7 @@ internal external fun createType(types: Array<Type>): Type
internal external fun expandType(type: Type): Array<Type> internal external fun expandType(type: Type): Array<Type>
internal external enum class ExpressionIds { internal external enum class ExpressionIds {
Invalid, Invalid,
Block, Block,
If, If,
@ -1656,27 +1656,27 @@ internal open external class Module {
open fun `if`( open fun `if`(
condition: ExpressionRef, condition: ExpressionRef,
ifTrue: ExpressionRef, ifTrue: ExpressionRef,
ifFalse: ExpressionRef = definedExternally ifFalse: ExpressionRef = definedExternally,
): ExpressionRef ): ExpressionRef
open fun loop(label: String, body: ExpressionRef): ExpressionRef open fun loop(label: String, body: ExpressionRef): ExpressionRef
open fun br( open fun br(
label: String, label: String,
condition: ExpressionRef = definedExternally, condition: ExpressionRef = definedExternally,
value: ExpressionRef = definedExternally value: ExpressionRef = definedExternally,
): ExpressionRef ): ExpressionRef
open fun br_if( open fun br_if(
label: String, label: String,
condition: ExpressionRef = definedExternally, condition: ExpressionRef = definedExternally,
value: ExpressionRef = definedExternally value: ExpressionRef = definedExternally,
): ExpressionRef ): ExpressionRef
open fun switch( open fun switch(
labels: Array<String>, labels: Array<String>,
defaultLabel: String, defaultLabel: String,
condition: ExpressionRef, condition: ExpressionRef,
value: ExpressionRef = definedExternally value: ExpressionRef = definedExternally,
): ExpressionRef ): ExpressionRef
open fun call(name: String, operands: Array<ExpressionRef>, returnType: Type): ExpressionRef open fun call(name: String, operands: Array<ExpressionRef>, returnType: Type): ExpressionRef
@ -1685,14 +1685,14 @@ internal open external class Module {
target: ExpressionRef, target: ExpressionRef,
operands: Array<ExpressionRef>, operands: Array<ExpressionRef>,
params: Type, params: Type,
results: Type results: Type,
): ExpressionRef ): ExpressionRef
open fun return_call_indirect( open fun return_call_indirect(
target: ExpressionRef, target: ExpressionRef,
operands: Array<ExpressionRef>, operands: Array<ExpressionRef>,
params: Type, params: Type,
results: Type results: Type,
): ExpressionRef ): ExpressionRef
open var local: `T$2` open var local: `T$2`
@ -1730,7 +1730,7 @@ internal open external class Module {
condition: ExpressionRef, condition: ExpressionRef,
ifTrue: ExpressionRef, ifTrue: ExpressionRef,
ifFalse: ExpressionRef, ifFalse: ExpressionRef,
type: Type = definedExternally type: Type = definedExternally,
): ExpressionRef ): ExpressionRef
open fun drop(value: ExpressionRef): ExpressionRef open fun drop(value: ExpressionRef): ExpressionRef
@ -1754,7 +1754,7 @@ internal open external class Module {
externalModuleName: String, externalModuleName: String,
externalBaseName: String, externalBaseName: String,
params: Type, params: Type,
results: Type results: Type,
) )
open fun addTableImport(internalName: String, externalModuleName: String, externalBaseName: String) open fun addTableImport(internalName: String, externalModuleName: String, externalBaseName: String)
@ -1763,7 +1763,7 @@ internal open external class Module {
internalName: String, internalName: String,
externalModuleName: String, externalModuleName: String,
externalBaseName: String, externalBaseName: String,
globalType: Type globalType: Type,
) )
open fun addEventImport( open fun addEventImport(
@ -1772,7 +1772,7 @@ internal open external class Module {
externalBaseName: String, externalBaseName: String,
attribute: Number, attribute: Number,
params: Type, params: Type,
results: Type results: Type,
) )
open fun addFunctionExport(internalName: String, externalName: String): ExportRef open fun addFunctionExport(internalName: String, externalName: String): ExportRef
@ -1786,7 +1786,7 @@ internal open external class Module {
initial: Number, initial: Number,
maximum: Number, maximum: Number,
funcNames: Array<Number>, funcNames: Array<Number>,
offset: ExpressionRef = definedExternally offset: ExpressionRef = definedExternally,
) )
open fun getFunctionTable(): `T$26` open fun getFunctionTable(): `T$26`
@ -1796,7 +1796,7 @@ internal open external class Module {
exportName: String? = definedExternally, exportName: String? = definedExternally,
segments: Array<MemorySegment>? = definedExternally, segments: Array<MemorySegment>? = definedExternally,
flags: Array<Number>? = definedExternally, flags: Array<Number>? = definedExternally,
shared: Boolean = definedExternally shared: Boolean = definedExternally,
) )
open fun getNumMemorySegments(): Number open fun getNumMemorySegments(): Number
@ -1827,7 +1827,7 @@ internal open external class Module {
expr: ExpressionRef, expr: ExpressionRef,
fileIndex: Number, fileIndex: Number,
lineNumber: Number, lineNumber: Number,
columnNumber: Number columnNumber: Number,
) )
open fun copyExpression(expr: ExpressionRef): ExpressionRef open fun copyExpression(expr: ExpressionRef): ExpressionRef
@ -2231,7 +2231,7 @@ internal open external class Relooper(module: Module) {
from: RelooperBlockRef, from: RelooperBlockRef,
to: RelooperBlockRef, to: RelooperBlockRef,
indexes: Array<Number>, indexes: Array<Number>,
code: ExpressionRef code: ExpressionRef,
) )
open fun renderAndDispose(entry: RelooperBlockRef, labelHelper: Number): ExpressionRef open fun renderAndDispose(entry: RelooperBlockRef, labelHelper: Number): ExpressionRef

View File

@ -30,12 +30,13 @@ internal fun Identifier(name: String) = object : Identifier {
override var name = name override var name = name
} }
internal fun FunctionExpression(id: Identifier?, params: Array<dynamic>, body: BlockStatement) = object : FunctionExpression { internal fun FunctionExpression(id: Identifier?, params: Array<dynamic>, body: BlockStatement) =
override var params = params object : FunctionExpression {
override var type = "FunctionExpression" override var params = params
override var id: Identifier? = id override var type = "FunctionExpression"
override var body = body override var id: Identifier? = id
} override var body = body
}
internal fun BlockStatement(vararg body: dynamic) = object : BlockStatement { internal fun BlockStatement(vararg body: dynamic) = object : BlockStatement {
override var type = "BlockStatement" override var type = "BlockStatement"

View File

@ -91,6 +91,6 @@ internal typealias Extract<T, U> = Any
internal external interface PromiseLike<T> { internal external interface PromiseLike<T> {
fun then( fun then(
onfulfilled: ((value: T) -> Any?)? = definedExternally, onfulfilled: ((value: T) -> Any?)? = definedExternally,
onrejected: ((reason: Any) -> Any?)? = definedExternally onrejected: ((reason: Any) -> Any?)? = definedExternally,
): PromiseLike<dynamic /* TResult1 | TResult2 */> ): PromiseLike<dynamic /* TResult1 | TResult2 */>
} }

View File

@ -15,11 +15,11 @@
package space.kscience.kmath.internal.webassembly package space.kscience.kmath.internal.webassembly
import space.kscience.kmath.internal.tsstdlib.PromiseLike
import org.khronos.webgl.ArrayBuffer import org.khronos.webgl.ArrayBuffer
import org.khronos.webgl.ArrayBufferView import org.khronos.webgl.ArrayBufferView
import org.khronos.webgl.Uint8Array import org.khronos.webgl.Uint8Array
import org.w3c.fetch.Response import org.w3c.fetch.Response
import space.kscience.kmath.internal.tsstdlib.PromiseLike
import kotlin.js.Promise import kotlin.js.Promise
@Suppress("NESTED_CLASS_IN_EXTERNAL_INTERFACE") @Suppress("NESTED_CLASS_IN_EXTERNAL_INTERFACE")

View File

@ -91,7 +91,7 @@ public inline fun <reified T : Any> MST.compile(algebra: Algebra<T>, vararg argu
* @author Iaroslav Postovalov * @author Iaroslav Postovalov
*/ */
@UnstableKMathAPI @UnstableKMathAPI
public fun MST.compileToExpression(algebra: Int32Ring): IntExpression { public fun MST.compileToExpression(algebra: Int32Ring): IntExpression {
val typed = evaluateConstants(algebra) val typed = evaluateConstants(algebra)
return if (typed is TypedMst.Constant) object : IntExpression { return if (typed is TypedMst.Constant) object : IntExpression {

View File

@ -13,9 +13,6 @@ import space.kscience.kmath.expressions.*
import java.lang.invoke.MethodHandles import java.lang.invoke.MethodHandles
import java.lang.invoke.MethodType import java.lang.invoke.MethodType
import java.nio.file.Paths import java.nio.file.Paths
import java.util.stream.Collectors.toMap
import kotlin.contracts.InvocationKind
import kotlin.contracts.contract
import kotlin.io.path.writeBytes import kotlin.io.path.writeBytes
/** /**
@ -283,7 +280,6 @@ internal class GenericAsmBuilder<T>(
fun loadVariable(name: Symbol): Unit = invokeMethodVisitor.load(2 + argumentsLocals.indexOf(name), tType) fun loadVariable(name: Symbol): Unit = invokeMethodVisitor.load(2 + argumentsLocals.indexOf(name), tType)
inline fun buildCall(function: Function<T>, parameters: GenericAsmBuilder<T>.() -> Unit) { inline fun buildCall(function: Function<T>, parameters: GenericAsmBuilder<T>.() -> Unit) {
contract { callsInPlace(parameters, InvocationKind.EXACTLY_ONCE) }
val `interface` = function.javaClass.interfaces.first { Function::class.java in it.interfaces } val `interface` = function.javaClass.interfaces.first { Function::class.java in it.interfaces }
val arity = `interface`.methods.find { it.name == "invoke" }?.parameterCount val arity = `interface`.methods.find { it.name == "invoke" }?.parameterCount

View File

@ -332,7 +332,7 @@ internal sealed class PrimitiveAsmBuilder<T : Number, out E : Expression<T>>(
private fun visitVariables( private fun visitVariables(
node: TypedMst<T>, node: TypedMst<T>,
arrayMode: Boolean, arrayMode: Boolean,
alreadyLoaded: MutableList<Symbol> = mutableListOf() alreadyLoaded: MutableList<Symbol> = mutableListOf(),
): Unit = when (node) { ): Unit = when (node) {
is TypedMst.Variable -> if (node.symbol !in alreadyLoaded) { is TypedMst.Variable -> if (node.symbol !in alreadyLoaded) {
alreadyLoaded += node.symbol alreadyLoaded += node.symbol

View File

@ -8,7 +8,6 @@ package space.kscience.kmath.asm.internal
import org.objectweb.asm.* import org.objectweb.asm.*
import org.objectweb.asm.commons.InstructionAdapter import org.objectweb.asm.commons.InstructionAdapter
import space.kscience.kmath.expressions.Expression import space.kscience.kmath.expressions.Expression
import space.kscience.kmath.expressions.MST
import kotlin.contracts.InvocationKind import kotlin.contracts.InvocationKind
import kotlin.contracts.contract import kotlin.contracts.contract

View File

@ -9,6 +9,7 @@ Commons math binding for kmath
The Maven coordinates of this project are `space.kscience:kmath-commons:0.4.0-dev-3`. The Maven coordinates of this project are `space.kscience:kmath-commons:0.4.0-dev-3`.
**Gradle Kotlin DSL:** **Gradle Kotlin DSL:**
```kotlin ```kotlin
repositories { repositories {
maven("https://repo.kotlin.link") maven("https://repo.kotlin.link")

View File

@ -35,11 +35,13 @@ public class CMGaussRuleIntegrator(
range.start, range.start,
range.endInclusive range.endInclusive
) )
GaussRule.LEGENDREHP -> factory.legendreHighPrecision( GaussRule.LEGENDREHP -> factory.legendreHighPrecision(
numpoints, numpoints,
range.start, range.start,
range.endInclusive range.endInclusive
) )
GaussRule.UNIFORM -> GaussIntegrator( GaussRule.UNIFORM -> GaussIntegrator(
getUniformRule( getUniformRule(
range.start, range.start,
@ -80,7 +82,7 @@ public class CMGaussRuleIntegrator(
type: GaussRule = GaussRule.LEGENDRE, type: GaussRule = GaussRule.LEGENDRE,
function: (Double) -> Double, function: (Double) -> Double,
): Double = CMGaussRuleIntegrator(numPoints, type).integrate( ): Double = CMGaussRuleIntegrator(numPoints, type).integrate(
UnivariateIntegrand({IntegrationRange(range)},function) UnivariateIntegrand({ IntegrationRange(range) }, function)
).value ).value
} }
} }
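For context, a hedged sketch of calling the companion `integrate` helper whose tail is shown above. Only the `type` and `function` parameters are visible in this hunk, so the leading `range`/`numPoints` parameters, their defaults, and the package name are assumptions.

```kotlin
// Hypothetical call shape; the hidden leading parameters are assumed, not confirmed by this diff.
import space.kscience.kmath.commons.integration.CMGaussRuleIntegrator
import kotlin.math.PI
import kotlin.math.sin

fun main() {
    // Integrate sin over [0, pi] with the default LEGENDRE rule; the exact value is 2.
    val value = CMGaussRuleIntegrator.integrate(0.0..PI) { x -> sin(x) }
    println(value)
}
```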

View File

@ -48,9 +48,11 @@ public fun CMLinearSpace.inverse(
public fun CMLinearSpace.solver(decomposition: CMDecomposition): LinearSolver<Double> = object : LinearSolver<Double> { public fun CMLinearSpace.solver(decomposition: CMDecomposition): LinearSolver<Double> = object : LinearSolver<Double> {
override fun solve(a: Matrix<Double>, b: Matrix<Double>): Matrix<Double> = solver(a, decomposition).solve(b.toCM().origin).wrap() override fun solve(a: Matrix<Double>, b: Matrix<Double>): Matrix<Double> =
solver(a, decomposition).solve(b.toCM().origin).wrap()
override fun solve(a: Matrix<Double>, b: Point<Double>): Point<Double> = solver(a, decomposition).solve(b.toCM().origin).toPoint() override fun solve(a: Matrix<Double>, b: Point<Double>): Point<Double> =
solver(a, decomposition).solve(b.toCM().origin).toPoint()
override fun inverse(matrix: Matrix<Double>): Matrix<Double> = solver(matrix, decomposition).inverse.wrap() override fun inverse(matrix: Matrix<Double>): Matrix<Double> = solver(matrix, decomposition).inverse.wrap()
} }
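A minimal usage sketch for the solver factory above. The `matrix(rows, cols)(...)` builder is the same one used by the tests later in this diff; `CMDecomposition.LUP` as the variant name and the import paths are assumptions.

```kotlin
// Sketch under assumed imports; only solver(decomposition), solve and inverse come from the hunk above.
import space.kscience.kmath.commons.linear.CMDecomposition
import space.kscience.kmath.commons.linear.CMLinearSpace
import space.kscience.kmath.commons.linear.solver
import space.kscience.kmath.linear.*

fun main() = with(CMLinearSpace) {
    val a = matrix(2, 2)(
        4.0, 3.0,
        6.0, 3.0,
    )
    val b = matrix(2, 1)(
        1.0,
        2.0,
    )
    // Solve a * x = b with a Commons Math LUP-backed solver, then invert a with the same solver.
    val lupSolver = solver(CMDecomposition.LUP)
    println(lupSolver.solve(a, b))
    println(lupSolver.inverse(a))
}
```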

View File

@ -43,7 +43,7 @@ public object CMOptimizerData : SetAttribute<SymbolIndexer.() -> OptimizationDat
* Specify Commons Math optimization data. * Specify Commons Math optimization data.
*/ */
public fun AttributesBuilder<FunctionOptimization<Double>>.cmOptimizationData(data: SymbolIndexer.() -> OptimizationData) { public fun AttributesBuilder<FunctionOptimization<Double>>.cmOptimizationData(data: SymbolIndexer.() -> OptimizationData) {
CMOptimizerData.add(data) CMOptimizerData add data
} }
public fun AttributesBuilder<FunctionOptimization<Double>>.simplexSteps(vararg steps: Pair<Symbol, Double>) { public fun AttributesBuilder<FunctionOptimization<Double>>.simplexSteps(vararg steps: Pair<Symbol, Double>) {

View File

@ -73,7 +73,7 @@ internal class OptimizeTest {
val result: FunctionOptimization<Double> = chi2.optimizeWith( val result: FunctionOptimization<Double> = chi2.optimizeWith(
CMOptimizer, CMOptimizer,
mapOf(a to 1.5, b to 0.9, c to 1.0), mapOf(a to 1.5, b to 0.9, c to 1.0),
){ ) {
FunctionOptimizationTarget(OptimizationDirection.MINIMIZE) FunctionOptimizationTarget(OptimizationDirection.MINIMIZE)
} }
println(result) println(result)

View File

@ -2,15 +2,15 @@
Complex and hypercomplex number systems in KMath. Complex and hypercomplex number systems in KMath.
- [complex](src/commonMain/kotlin/space/kscience/kmath/complex/Complex.kt) : Complex numbers operations - [complex](src/commonMain/kotlin/space/kscience/kmath/complex/Complex.kt) : Complex numbers operations
- [quaternion](src/commonMain/kotlin/space/kscience/kmath/complex/Quaternion.kt) : Quaternions and their composition - [quaternion](src/commonMain/kotlin/space/kscience/kmath/complex/Quaternion.kt) : Quaternions and their composition
## Artifact: ## Artifact:
The Maven coordinates of this project are `space.kscience:kmath-complex:0.4.0-dev-3`. The Maven coordinates of this project are `space.kscience:kmath-complex:0.4.0-dev-3`.
**Gradle Kotlin DSL:** **Gradle Kotlin DSL:**
```kotlin ```kotlin
repositories { repositories {
maven("https://repo.kotlin.link") maven("https://repo.kotlin.link")

View File

@ -148,8 +148,8 @@ public object ComplexField :
exp(pow * ln(arg)) exp(pow * ln(arg))
} }
public fun power(arg: Complex, pow: Complex): Complex = if(arg == zero || arg == (-0.0).toComplex()){ public fun power(arg: Complex, pow: Complex): Complex = if (arg == zero || arg == (-0.0).toComplex()) {
if(pow == zero){ if (pow == zero) {
one one
} else { } else {
zero zero
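The special-casing above fixes the conventions `0^0 = 1` and `0^p = 0` for any other power, with the general case handled by `exp(pow * ln(arg))`. A small usage sketch:

```kotlin
import space.kscience.kmath.complex.Complex
import space.kscience.kmath.complex.ComplexField

fun main() = with(ComplexField) {
    println(power(zero, zero))                           // one, by the 0^0 = 1 convention above
    println(power(zero, Complex(2.0, 0.0)))              // zero for any other power of a zero base
    println(power(Complex(0.0, 1.0), Complex(2.0, 0.0))) // i^2 ≈ -1, via exp(pow * ln(arg))
}
```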

View File

@ -19,7 +19,8 @@ import kotlin.contracts.contract
*/ */
@OptIn(UnstableKMathAPI::class) @OptIn(UnstableKMathAPI::class)
public sealed class ComplexFieldOpsND : BufferedFieldOpsND<Complex, ComplexField>(ComplexField.bufferAlgebra), public sealed class ComplexFieldOpsND : BufferedFieldOpsND<Complex, ComplexField>(ComplexField.bufferAlgebra),
ScaleOperations<StructureND<Complex>>, ExtendedFieldOps<StructureND<Complex>>, PowerOperations<StructureND<Complex>> { ScaleOperations<StructureND<Complex>>, ExtendedFieldOps<StructureND<Complex>>,
PowerOperations<StructureND<Complex>> {
@OptIn(PerformancePitfall::class) @OptIn(PerformancePitfall::class)
override fun StructureND<Complex>.toBufferND(): BufferND<Complex> = when (this) { override fun StructureND<Complex>.toBufferND(): BufferND<Complex> = when (this) {
@ -53,7 +54,7 @@ public sealed class ComplexFieldOpsND : BufferedFieldOpsND<Complex, ComplexField
override fun atanh(arg: StructureND<Complex>): BufferND<Complex> = mapInline(arg.toBufferND()) { atanh(it) } override fun atanh(arg: StructureND<Complex>): BufferND<Complex> = mapInline(arg.toBufferND()) { atanh(it) }
override fun power(arg: StructureND<Complex>, pow: Number): StructureND<Complex> = override fun power(arg: StructureND<Complex>, pow: Number): StructureND<Complex> =
mapInline(arg.toBufferND()) { power(it,pow) } mapInline(arg.toBufferND()) { power(it, pow) }
public companion object : ComplexFieldOpsND() public companion object : ComplexFieldOpsND()
} }

View File

@ -2,23 +2,28 @@
The core interfaces of KMath. The core interfaces of KMath.
- [algebras](src/commonMain/kotlin/space/kscience/kmath/operations/Algebra.kt) : Algebraic structures like rings, spaces and fields. - [algebras](src/commonMain/kotlin/space/kscience/kmath/operations/Algebra.kt) : Algebraic structures like rings, spaces
- [nd](src/commonMain/kotlin/space/kscience/kmath/structures/StructureND.kt) : Many-dimensional structures and operations on them. and fields.
- [linear](src/commonMain/kotlin/space/kscience/kmath/operations/Algebra.kt) : Basic linear algebra operations (sums, products, etc.), backed by the `Space` API. Advanced linear algebra operations like matrix inversion and LU decomposition. - [nd](src/commonMain/kotlin/space/kscience/kmath/structures/StructureND.kt) : Many-dimensional structures and
- [buffers](src/commonMain/kotlin/space/kscience/kmath/structures/Buffers.kt) : One-dimensional structure operations on them.
- [expressions](src/commonMain/kotlin/space/kscience/kmath/expressions) : By writing a single mathematical expression once, users will be able to apply different types of - [linear](src/commonMain/kotlin/space/kscience/kmath/operations/Algebra.kt) : Basic linear algebra operations (sums,
objects to the expression by providing a context. Expressions can be used for a wide variety of purposes from high products, etc.), backed by the `Space` API. Advanced linear algebra operations like matrix inversion and LU
performance calculations to code generation. decomposition.
- [domains](src/commonMain/kotlin/space/kscience/kmath/domains) : Domains - [buffers](src/commonMain/kotlin/space/kscience/kmath/structures/Buffers.kt) : One-dimensional structure
- [autodiff](src/commonMain/kotlin/space/kscience/kmath/expressions/SimpleAutoDiff.kt) : Automatic differentiation - [expressions](src/commonMain/kotlin/space/kscience/kmath/expressions) : By writing a single mathematical expression
- [linear.parallel](#) : Parallel implementation for `LinearAlgebra` once, users will be able to apply different types of
objects to the expression by providing a context. Expressions can be used for a wide variety of purposes from high
performance calculations to code generation.
- [domains](src/commonMain/kotlin/space/kscience/kmath/domains) : Domains
- [autodiff](src/commonMain/kotlin/space/kscience/kmath/expressions/SimpleAutoDiff.kt) : Automatic differentiation
- [linear.parallel](#) : Parallel implementation for `LinearAlgebra`
## Artifact: ## Artifact:
The Maven coordinates of this project are `space.kscience:kmath-core:0.4.0-dev-3`. The Maven coordinates of this project are `space.kscience:kmath-core:0.4.0-dev-3`.
**Gradle Kotlin DSL:** **Gradle Kotlin DSL:**
```kotlin ```kotlin
repositories { repositories {
maven("https://repo.kotlin.link") maven("https://repo.kotlin.link")

View File

@ -2,7 +2,7 @@ plugins {
id("space.kscience.gradle.mpp") id("space.kscience.gradle.mpp")
} }
kscience{ kscience {
jvm() jvm()
js() js()
native() native()
@ -73,8 +73,8 @@ readme {
) { "Automatic differentiation" } ) { "Automatic differentiation" }
feature( feature(
id="Parallel linear algebra" id = "Parallel linear algebra"
){ ) {
""" """
Parallel implementation for `LinearAlgebra` Parallel implementation for `LinearAlgebra`
""".trimIndent() """.trimIndent()

View File

@ -27,7 +27,7 @@ public interface XYErrorColumnarData<T, out X : T, out Y : T> : XYColumnarData<T
public companion object { public companion object {
public fun <T, X : T, Y : T> of( public fun <T, X : T, Y : T> of(
x: Buffer<X>, y: Buffer<Y>, yErr: Buffer<Y> x: Buffer<X>, y: Buffer<Y>, yErr: Buffer<Y>,
): XYErrorColumnarData<T, X, Y> { ): XYErrorColumnarData<T, X, Y> {
require(x.size == y.size) { "Buffer size mismatch. x buffer size is ${x.size}, y buffer size is ${y.size}" } require(x.size == y.size) { "Buffer size mismatch. x buffer size is ${x.size}, y buffer size is ${y.size}" }
require(y.size == yErr.size) { "Buffer size mismatch. y buffer size is ${y.size}, yErr buffer size is ${yErr.size}" } require(y.size == yErr.size) { "Buffer size mismatch. y buffer size is ${y.size}, yErr buffer size is ${yErr.size}" }

View File

@ -58,6 +58,7 @@ public fun <T> MST.interpret(algebra: Algebra<T>, arguments: Map<Symbol, T>): T
this.operation, this.operation,
algebra.number(this.value.value), algebra.number(this.value.value),
) )
else -> algebra.unaryOperationFunction(this.operation)(this.value.interpret(algebra, arguments)) else -> algebra.unaryOperationFunction(this.operation)(this.value.interpret(algebra, arguments))
} }

View File

@ -224,11 +224,7 @@ public inline fun <T : Any, F : Field<T>> SimpleAutoDiffField<T, F>.const(block:
public fun <T : Any, F : Field<T>> F.simpleAutoDiff( public fun <T : Any, F : Field<T>> F.simpleAutoDiff(
bindings: Map<Symbol, T>, bindings: Map<Symbol, T>,
body: SimpleAutoDiffField<T, F>.() -> AutoDiffValue<T>, body: SimpleAutoDiffField<T, F>.() -> AutoDiffValue<T>,
): DerivationResult<T> { ): DerivationResult<T> = SimpleAutoDiffField(this, bindings).differentiate(body)
contract { callsInPlace(body, InvocationKind.EXACTLY_ONCE) }
return SimpleAutoDiffField(this, bindings).differentiate(body)
}
public fun <T : Any, F : Field<T>> F.simpleAutoDiff( public fun <T : Any, F : Field<T>> F.simpleAutoDiff(
vararg bindings: Pair<Symbol, T>, vararg bindings: Pair<Symbol, T>,
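With the contract removed, `simpleAutoDiff` is now a plain delegation to `SimpleAutoDiffField.differentiate`. A hedged usage sketch of the vararg overload: `bindSymbol` inside the `SimpleAutoDiffField` scope and `DerivationResult.derivative` are assumed to behave as in kmath-core, since only the entry points appear in this hunk.

```kotlin
// Sketch under assumptions noted above.
import space.kscience.kmath.expressions.*
import space.kscience.kmath.operations.Float64Field

fun main() {
    val x = Symbol.x
    // Differentiate f(x) = x^3 at x = 2.
    val result = Float64Field.simpleAutoDiff(x to 2.0) {
        val xv = bindSymbol(x)
        xv * xv * xv
    }
    println(result.value)         // 8.0
    println(result.derivative(x)) // 12.0
}
```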

View File

@ -182,7 +182,7 @@ public interface LinearSpace<T, out A : Ring<T>> : MatrixScope<T> {
* better use [StructureND.getOrComputeAttribute]. * better use [StructureND.getOrComputeAttribute].
*/ */
@UnstableKMathAPI @UnstableKMathAPI
public fun <V : Any, A : StructureAttribute<V>> Matrix<T>.compute( public fun <V : Any, A : StructureAttribute<V>> Matrix<T>.withComputedAttribute(
attribute: A, attribute: A,
): Matrix<T>? { ): Matrix<T>? {
return if (attributes[attribute] != null) { return if (attributes[attribute] != null) {

View File

@ -83,7 +83,7 @@ internal fun <T : Comparable<T>> LinearSpace<T, Ring<T>>.abs(value: T): T =
public fun <T : Comparable<T>> Field<T>.lup( public fun <T : Comparable<T>> Field<T>.lup(
matrix: Matrix<T>, matrix: Matrix<T>,
checkSingular: (T) -> Boolean, checkSingular: (T) -> Boolean,
): GenericLupDecomposition<T> { ): GenericLupDecomposition<T> {
require(matrix.rowNum == matrix.colNum) { "LU decomposition supports only square matrices" } require(matrix.rowNum == matrix.colNum) { "LU decomposition supports only square matrices" }
val m = matrix.colNum val m = matrix.colNum
val pivot = IntArray(matrix.rowNum) val pivot = IntArray(matrix.rowNum)
@ -110,7 +110,7 @@ public fun <T : Comparable<T>> Field<T>.lup(
// upper // upper
for (row in 0 until col) { for (row in 0 until col) {
var sum = lu[row, col] var sum = lu[row, col]
for (i in 0 until row){ for (i in 0 until row) {
sum -= lu[row, i] * lu[i, col] sum -= lu[row, i] * lu[i, col]
} }
lu[row, col] = sum lu[row, col] = sum
@ -122,7 +122,7 @@ public fun <T : Comparable<T>> Field<T>.lup(
for (row in col until m) { for (row in col until m) {
var sum = lu[row, col] var sum = lu[row, col]
for (i in 0 until col){ for (i in 0 until col) {
sum -= lu[row, i] * lu[i, col] sum -= lu[row, i] * lu[i, col]
} }
lu[row, col] = sum lu[row, col] = sum
@ -226,7 +226,7 @@ private fun <T> Field<T>.solve(
public fun <T : Comparable<T>> LinearSpace<T, Field<T>>.lupSolver( public fun <T : Comparable<T>> LinearSpace<T, Field<T>>.lupSolver(
singularityCheck: (T) -> Boolean, singularityCheck: (T) -> Boolean,
): LinearSolver<T> = object : LinearSolver<T> { ): LinearSolver<T> = object : LinearSolver<T> {
override fun solve(a: Matrix<T>, b: Matrix<T>): Matrix<T> = elementAlgebra{ override fun solve(a: Matrix<T>, b: Matrix<T>): Matrix<T> = elementAlgebra {
// Use existing decomposition if it is provided by matrix or linear space itself // Use existing decomposition if it is provided by matrix or linear space itself
val decomposition = a.getOrComputeAttribute(LUP) ?: lup(a, singularityCheck) val decomposition = a.getOrComputeAttribute(LUP) ?: lup(a, singularityCheck)
return solve(decomposition, b) return solve(decomposition, b)
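The solver above plugs into `LinearSpace`, reusing an existing LUP attribute when the matrix carries one and decomposing on the fly otherwise. The `DoubleLUSolverTest` changes later in this diff use the same call shape; a compact sketch:

```kotlin
import space.kscience.kmath.linear.*
import space.kscience.kmath.operations.algebra

fun main() = Double.algebra.linearSpace.run {
    val matrix = matrix(2, 2)(
        3.0, 1.0,
        1.0, 3.0,
    )
    // lupSolver() without arguments uses the default singularity check for Double matrices.
    val inverted = lupSolver().inverse(matrix)
    println(inverted)
}
```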

View File

@ -18,12 +18,11 @@ public sealed class Int16RingOpsND : BufferedRingOpsND<Short, Int16Ring>(Int16Ri
@OptIn(UnstableKMathAPI::class) @OptIn(UnstableKMathAPI::class)
public class Int16RingND( public class Int16RingND(
override val shape: ShapeND override val shape: ShapeND,
) : Int16RingOpsND(), RingND<Short, Int16Ring>, NumbersAddOps<StructureND<Short>> { ) : Int16RingOpsND(), RingND<Short, Int16Ring>, NumbersAddOps<StructureND<Short>> {
override fun number(value: Number): BufferND<Short> { override fun number(value: Number): BufferND<Short> {
val short val short = value.toShort() // minimize conversions
= value.toShort() // minimize conversions
return structureND(shape) { short } return structureND(shape) { short }
} }
} }

View File

@ -35,7 +35,7 @@ public sealed class IntRingOpsND : BufferedRingOpsND<Int, Int32Ring>(Int32Ring.b
@OptIn(UnstableKMathAPI::class) @OptIn(UnstableKMathAPI::class)
public class IntRingND( public class IntRingND(
override val shape: ShapeND override val shape: ShapeND,
) : IntRingOpsND(), RingND<Int, Int32Ring>, NumbersAddOps<StructureND<Int>> { ) : IntRingOpsND(), RingND<Int, Int32Ring>, NumbersAddOps<StructureND<Int>> {
override fun number(value: Number): BufferND<Int> { override fun number(value: Number): BufferND<Int> {

View File

@ -14,13 +14,13 @@ import kotlin.jvm.JvmName
public fun <T, A : Algebra<T>> AlgebraND<T, A>.structureND( public fun <T, A : Algebra<T>> AlgebraND<T, A>.structureND(
shapeFirst: Int, shapeFirst: Int,
vararg shapeRest: Int, vararg shapeRest: Int,
initializer: A.(IntArray) -> T initializer: A.(IntArray) -> T,
): StructureND<T> = structureND(ShapeND(shapeFirst, *shapeRest), initializer) ): StructureND<T> = structureND(ShapeND(shapeFirst, *shapeRest), initializer)
public fun <T, A : Algebra<T>> AlgebraND<T, A>.mutableStructureND( public fun <T, A : Algebra<T>> AlgebraND<T, A>.mutableStructureND(
shapeFirst: Int, shapeFirst: Int,
vararg shapeRest: Int, vararg shapeRest: Int,
initializer: A.(IntArray) -> T initializer: A.(IntArray) -> T,
): MutableStructureND<T> = mutableStructureND(ShapeND(shapeFirst, *shapeRest), initializer) ): MutableStructureND<T> = mutableStructureND(ShapeND(shapeFirst, *shapeRest), initializer)
public fun <T, A : Group<T>> AlgebraND<T, A>.zero(shape: ShapeND): StructureND<T> = structureND(shape) { zero } public fun <T, A : Group<T>> AlgebraND<T, A>.zero(shape: ShapeND): StructureND<T> = structureND(shape) { zero }

View File

@ -454,10 +454,12 @@ public fun String.parseBigInteger(): BigInt? {
sign = +1 sign = +1
1 1
} }
'-' -> { '-' -> {
sign = -1 sign = -1
1 1
} }
else -> { else -> {
sign = +1 sign = +1
0 0
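The sign-handling branches above feed `String.parseBigInteger()`, whose nullable return type signals unparsable input. For example:

```kotlin
import space.kscience.kmath.operations.parseBigInteger

fun main() {
    println("-123456789012345678901234567890".parseBigInteger()) // arbitrary-precision negative value
    println("+42".parseBigInteger())                             // explicit sign prefix is consumed by the branches above
    println("12 34".parseBigInteger())                           // malformed input; the BigInt? return type allows a null result here
}
```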

View File

@ -33,7 +33,7 @@ public class Float64BufferField(public val size: Int) : ExtendedField<Buffer<Dou
arg.map { it.pow(pow.toInt()) } arg.map { it.pow(pow.toInt()) }
} else { } else {
arg.map { arg.map {
if(it<0) throw IllegalArgumentException("Negative argument $it could not be raised to the fractional power") if (it < 0) throw IllegalArgumentException("Negative argument $it could not be raised to the fractional power")
it.pow(pow.toDouble()) it.pow(pow.toDouble())
} }
} }

View File

@ -84,7 +84,7 @@ public expect fun Number.isInteger(): Boolean
* *
* @param T the type of this structure element * @param T the type of this structure element
*/ */
public interface PowerOperations<T>: Algebra<T> { public interface PowerOperations<T> : Algebra<T> {
/** /**
* Raises [arg] to a power if possible (negative number could not be raised to a fractional power). * Raises [arg] to a power if possible (negative number could not be raised to a fractional power).

View File

@ -99,9 +99,10 @@ public fun <T> Iterable<T>.sumWith(group: Group<T>): T = group.sum(this)
* @param group the algebra that provides addition * @param group the algebra that provides addition
* @param extractor the (inline) lambda function to extract value * @param extractor the (inline) lambda function to extract value
*/ */
public inline fun <T, R> Iterable<T>.sumWithGroupOf(group: Group<R>, extractor: (T) -> R): R = this.fold(group.zero) { left: R, right: T -> public inline fun <T, R> Iterable<T>.sumWithGroupOf(group: Group<R>, extractor: (T) -> R): R =
group.add(left, extractor(right)) this.fold(group.zero) { left: R, right: T ->
} group.add(left, extractor(right))
}
/** /**
* Returns the sum of all elements in the sequence in provided space. * Returns the sum of all elements in the sequence in provided space.
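As a quick illustration of `sumWithGroupOf`, a small sketch assuming `Int32Ring` from kmath-core as the `Group<Int>` instance:

```kotlin
import space.kscience.kmath.operations.Int32Ring
import space.kscience.kmath.operations.sumWithGroupOf

fun main() {
    val words = listOf("kmath", "core", "api")
    // Extracted values are folded through the group's add, starting from its zero.
    println(words.sumWithGroupOf(Int32Ring) { it.length }) // 12
}
```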

View File

@ -34,7 +34,7 @@ public object Int16Field : Field<Int16>, Norm<Int16, Int16>, NumericAlgebra<Int1
override fun multiply(left: Int16, right: Int16): Int16 = (left * right).toShort() override fun multiply(left: Int16, right: Int16): Int16 = (left * right).toShort()
override fun norm(arg: Int16): Int16 = abs(arg) override fun norm(arg: Int16): Int16 = abs(arg)
override fun scale(a: Int16, value: Double): Int16 = (a*value).roundToInt().toShort() override fun scale(a: Int16, value: Double): Int16 = (a * value).roundToInt().toShort()
override fun divide(left: Int16, right: Int16): Int16 = (left / right).toShort() override fun divide(left: Int16, right: Int16): Int16 = (left / right).toShort()
@ -58,7 +58,7 @@ public object Int32Field : Field<Int32>, Norm<Int32, Int32>, NumericAlgebra<Int3
override fun multiply(left: Int, right: Int): Int = left * right override fun multiply(left: Int, right: Int): Int = left * right
override fun norm(arg: Int): Int = abs(arg) override fun norm(arg: Int): Int = abs(arg)
override fun scale(a: Int, value: Double): Int = (a*value).roundToInt() override fun scale(a: Int, value: Double): Int = (a * value).roundToInt()
override fun divide(left: Int, right: Int): Int = left / right override fun divide(left: Int, right: Int): Int = left / right
@ -81,7 +81,7 @@ public object Int64Field : Field<Int64>, Norm<Int64, Int64>, NumericAlgebra<Int6
override fun multiply(left: Int64, right: Int64): Int64 = left * right override fun multiply(left: Int64, right: Int64): Int64 = left * right
override fun norm(arg: Int64): Int64 = abs(arg) override fun norm(arg: Int64): Int64 = abs(arg)
override fun scale(a: Int64, value: Double): Int64 = (a*value).roundToLong() override fun scale(a: Int64, value: Double): Int64 = (a * value).roundToLong()
override fun divide(left: Int64, right: Int64): Int64 = left / right override fun divide(left: Int64, right: Int64): Int64 = left / right

View File

@ -32,4 +32,4 @@ public value class ArrayBuffer<T>(internal val array: Array<T>) : MutableBuffer<
/** /**
* Returns an [ArrayBuffer] that wraps the original array. * Returns an [ArrayBuffer] that wraps the original array.
*/ */
public fun <T> Array<T>.asBuffer(): ArrayBuffer<T> = ArrayBuffer( this) public fun <T> Array<T>.asBuffer(): ArrayBuffer<T> = ArrayBuffer(this)

View File

@ -55,7 +55,7 @@ public fun FlaggedBuffer<*>.isMissing(index: Int): Boolean = hasFlag(index, Valu
*/ */
public class FlaggedDoubleBuffer( public class FlaggedDoubleBuffer(
public val values: DoubleArray, public val values: DoubleArray,
public val flags: ByteArray public val flags: ByteArray,
) : FlaggedBuffer<Double?>, Buffer<Double?> { ) : FlaggedBuffer<Double?>, Buffer<Double?> {
init { init {

View File

@ -37,7 +37,8 @@ public typealias FloatBuffer = Float32Buffer
* The function [init] is called for each array element sequentially starting from the first one. * The function [init] is called for each array element sequentially starting from the first one.
* It should return the value for a buffer element given its index. * It should return the value for a buffer element given its index.
*/ */
public inline fun Float32Buffer(size: Int, init: (Int) -> Float): Float32Buffer = Float32Buffer(FloatArray(size) { init(it) }) public inline fun Float32Buffer(size: Int, init: (Int) -> Float): Float32Buffer =
Float32Buffer(FloatArray(size) { init(it) })
/** /**
* Returns a new [Float32Buffer] of given elements. * Returns a new [Float32Buffer] of given elements.

View File

@ -14,7 +14,7 @@ import kotlin.reflect.typeOf
* *
* @param T the type of elements contained in the buffer. * @param T the type of elements contained in the buffer.
*/ */
public interface MutableBuffer<T> : Buffer<T>{ public interface MutableBuffer<T> : Buffer<T> {
/** /**
* Sets the array element at the specified [index] to the specified [value]. * Sets the array element at the specified [index] to the specified [value].
@ -65,20 +65,21 @@ public interface MutableBuffer<T> : Buffer<T>{
/** /**
* Returns a shallow copy of the buffer. * Returns a shallow copy of the buffer.
*/ */
public fun <T> Buffer<T>.copy(bufferFactory: BufferFactory<T>): Buffer<T> =if(this is ArrayBuffer){ public fun <T> Buffer<T>.copy(bufferFactory: BufferFactory<T>): Buffer<T> = if (this is ArrayBuffer) {
ArrayBuffer(array.copyOf()) ArrayBuffer(array.copyOf())
}else{ } else {
bufferFactory(size,::get) bufferFactory(size, ::get)
} }
/** /**
* Returns a mutable shallow copy of the buffer. * Returns a mutable shallow copy of the buffer.
*/ */
public fun <T> Buffer<T>.mutableCopy(bufferFactory: MutableBufferFactory<T>): MutableBuffer<T> =if(this is ArrayBuffer){ public fun <T> Buffer<T>.mutableCopy(bufferFactory: MutableBufferFactory<T>): MutableBuffer<T> =
ArrayBuffer(array.copyOf()) if (this is ArrayBuffer) {
}else{ ArrayBuffer(array.copyOf())
bufferFactory(size,::get) } else {
} bufferFactory(size, ::get)
}
/** /**

View File

@ -21,14 +21,14 @@ fun <T : Any> assertMatrixEquals(expected: StructureND<T>, actual: StructureND<T
class DoubleLUSolverTest { class DoubleLUSolverTest {
@Test @Test
fun testInvertOne() = Double.algebra.linearSpace.run{ fun testInvertOne() = Double.algebra.linearSpace.run {
val matrix = one(2, 2) val matrix = one(2, 2)
val inverted = lupSolver().inverse(matrix) val inverted = lupSolver().inverse(matrix)
assertMatrixEquals(matrix, inverted) assertMatrixEquals(matrix, inverted)
} }
@Test @Test
fun testDecomposition() = with(Double.algebra.linearSpace){ fun testDecomposition() = with(Double.algebra.linearSpace) {
val matrix = matrix(2, 2)( val matrix = matrix(2, 2)(
3.0, 1.0, 3.0, 1.0,
2.0, 3.0 2.0, 3.0
@ -43,7 +43,7 @@ class DoubleLUSolverTest {
} }
@Test @Test
fun testInvert() = Double.algebra.linearSpace.run{ fun testInvert() = Double.algebra.linearSpace.run {
val matrix = matrix(2, 2)( val matrix = matrix(2, 2)(
3.0, 1.0, 3.0, 1.0,
1.0, 3.0 1.0, 3.0

View File

@ -50,7 +50,7 @@ class MatrixTest {
infix fun Matrix<Double>.pow(power: Int): Matrix<Double> { infix fun Matrix<Double>.pow(power: Int): Matrix<Double> {
var res = this var res = this
repeat(power - 1) { repeat(power - 1) {
res = res dot this@pow res = res dot this@pow
} }
return res return res
} }

View File

@ -29,7 +29,7 @@ class PermSortTest {
*/ */
@Test @Test
fun testOnEmptyBuffer() { fun testOnEmptyBuffer() {
val emptyBuffer = Int32Buffer(0) {it} val emptyBuffer = Int32Buffer(0) { it }
var permutations = emptyBuffer.indicesSorted() var permutations = emptyBuffer.indicesSorted()
assertTrue(permutations.isEmpty(), "permutation on an empty buffer should return an empty result") assertTrue(permutations.isEmpty(), "permutation on an empty buffer should return an empty result")
permutations = emptyBuffer.indicesSortedDescending() permutations = emptyBuffer.indicesSortedDescending()
@ -67,10 +67,14 @@ class PermSortTest {
assertContentEquals(expected, permutations.map { platforms[it] }, "PermSort using custom ascending comparator") assertContentEquals(expected, permutations.map { platforms[it] }, "PermSort using custom ascending comparator")
permutations = platforms.indicesSortedWith(compareByDescending { it.name.length }) permutations = platforms.indicesSortedWith(compareByDescending { it.name.length })
assertContentEquals(expected.reversed(), permutations.map { platforms[it] }, "PermSort using custom descending comparator") assertContentEquals(
expected.reversed(),
permutations.map { platforms[it] },
"PermSort using custom descending comparator"
)
} }
private fun testPermutation(bufferSize: Int) { private fun testPermutation(bufferSize: Int) {
val seed = Random.nextLong() val seed = Random.nextLong()
println("Test randomization seed: $seed") println("Test randomization seed: $seed")
@ -82,23 +86,23 @@ class PermSortTest {
// Ensure no duplicate is present in indices // Ensure no duplicate is present in indices
assertEquals(indices.toSet().size, indices.size) assertEquals(indices.toSet().size, indices.size)
for (i in 0 until (bufferSize-1)) { for (i in 0 until (bufferSize - 1)) {
val current = buffer[indices[i]] val current = buffer[indices[i]]
val next = buffer[indices[i+1]] val next = buffer[indices[i + 1]]
assertTrue(current <= next, "Permutation indices not properly sorted") assertTrue(current <= next, "Permutation indices not properly sorted")
} }
val descIndices = buffer.indicesSortedDescending() val descIndices = buffer.indicesSortedDescending()
assertEquals(bufferSize, descIndices.size) assertEquals(bufferSize, descIndices.size)
// Ensure no duplicate is present in indices // Ensure no duplicate is present in indices
assertEquals(descIndices.toSet().size, descIndices.size) assertEquals(descIndices.toSet().size, descIndices.size)
for (i in 0 until (bufferSize-1)) { for (i in 0 until (bufferSize - 1)) {
val current = buffer[descIndices[i]] val current = buffer[descIndices[i]]
val next = buffer[descIndices[i+1]] val next = buffer[descIndices[i + 1]]
assertTrue(current >= next, "Permutation indices not properly sorted in descending order") assertTrue(current >= next, "Permutation indices not properly sorted in descending order")
} }
} }
private fun Random.buffer(size : Int) = Int32Buffer(size) { nextInt() } private fun Random.buffer(size: Int) = Int32Buffer(size) { nextInt() }
} }

View File

@ -18,7 +18,7 @@ class NdOperationsTest {
println(StructureND.toString(structure)) println(StructureND.toString(structure))
val rolled = structure.roll(0,-1) val rolled = structure.roll(0, -1)
println(StructureND.toString(rolled)) println(StructureND.toString(rolled))

View File

@ -12,10 +12,10 @@ class StridesTest {
fun checkRowBasedStrides() { fun checkRowBasedStrides() {
val strides = RowStrides(ShapeND(3, 3)) val strides = RowStrides(ShapeND(3, 3))
var counter = 0 var counter = 0
for(i in 0..2){ for (i in 0..2) {
for(j in 0..2){ for (j in 0..2) {
// print(strides.offset(intArrayOf(i,j)).toString() + "\t") // print(strides.offset(intArrayOf(i,j)).toString() + "\t")
require(strides.offset(intArrayOf(i,j)) == counter) require(strides.offset(intArrayOf(i, j)) == counter)
counter++ counter++
} }
println() println()
@ -26,10 +26,10 @@ class StridesTest {
fun checkColumnBasedStrides() { fun checkColumnBasedStrides() {
val strides = ColumnStrides(ShapeND(3, 3)) val strides = ColumnStrides(ShapeND(3, 3))
var counter = 0 var counter = 0
for(i in 0..2){ for (i in 0..2) {
for(j in 0..2){ for (j in 0..2) {
// print(strides.offset(intArrayOf(i,j)).toString() + "\t") // print(strides.offset(intArrayOf(i,j)).toString() + "\t")
require(strides.offset(intArrayOf(j,i)) == counter) require(strides.offset(intArrayOf(j, i)) == counter)
counter++ counter++
} }
println() println()

View File

@ -13,7 +13,7 @@ internal class BufferExpandedTest {
private val buffer = (0..100).toList().asBuffer() private val buffer = (0..100).toList().asBuffer()
@Test @Test
fun shrink(){ fun shrink() {
val view = buffer.slice(20..30) val view = buffer.slice(20..30)
assertEquals(20, view[0]) assertEquals(20, view[0])
assertEquals(30, view[10]) assertEquals(30, view[10])
@ -21,10 +21,10 @@ internal class BufferExpandedTest {
} }
@Test @Test
fun expandNegative(){ fun expandNegative() {
val view: BufferView<Int> = buffer.expand(-20..113,0) val view: BufferView<Int> = buffer.expand(-20..113, 0)
assertEquals(0,view[4]) assertEquals(0, view[4])
assertEquals(0,view[123]) assertEquals(0, view[123])
assertEquals(100, view[120]) assertEquals(100, view[120])
assertFails { view[-2] } assertFails { view[-2] }
assertFails { view[134] } assertFails { view[134] }

View File

@ -41,7 +41,7 @@ public object Float64ParallelLinearSpace : LinearSpace<Double, Float64Field> {
} }
override fun buildVector(size: Int, initializer: Float64Field.(Int) -> Double): Float64Buffer = override fun buildVector(size: Int, initializer: Float64Field.(Int) -> Double): Float64Buffer =
IntStream.range(0, size).parallel().mapToDouble{ Float64Field.initializer(it) }.toArray().asBuffer() IntStream.range(0, size).parallel().mapToDouble { Float64Field.initializer(it) }.toArray().asBuffer()
override fun Matrix<Double>.unaryMinus(): Matrix<Double> = Floa64FieldOpsND { override fun Matrix<Double>.unaryMinus(): Matrix<Double> = Floa64FieldOpsND {
asND().map { -it }.as2D() asND().map { -it }.as2D()

View File

@ -8,4 +8,5 @@ package space.kscience.kmath.operations
/** /**
* Check if number is an integer * Check if number is an integer
*/ */
public actual fun Number.isInteger(): Boolean = (this is Int) || (this is Long) || (this is Short) || (this.toDouble() % 1 == 0.0) public actual fun Number.isInteger(): Boolean =
(this is Int) || (this is Long) || (this is Short) || (this.toDouble() % 1 == 0.0)
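The JVM `actual` above treats any `Int`, `Long`, `Short`, or a floating-point value with no fractional part as an integer:

```kotlin
import space.kscience.kmath.operations.isInteger

fun main() {
    println(3.isInteger())   // true
    println(4.0.isInteger()) // true, no fractional part
    println(2.5.isInteger()) // false
}
```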

View File

@ -33,7 +33,8 @@ public fun <T> MutableBuffer.Companion.parallel(
typeOf<Double>() -> IntStream.range(0, size).parallel().mapToDouble { initializer(it) as Float64 }.toArray() typeOf<Double>() -> IntStream.range(0, size).parallel().mapToDouble { initializer(it) as Float64 }.toArray()
.asBuffer() as MutableBuffer<T> .asBuffer() as MutableBuffer<T>
//TODO add unsigned types //TODO add unsigned types
else -> IntStream.range(0, size).parallel().mapToObj { initializer(it) }.collect(Collectors.toList<T>()).asMutableBuffer() else -> IntStream.range(0, size).parallel().mapToObj { initializer(it) }.collect(Collectors.toList<T>())
.asMutableBuffer()
} }
public class ParallelBufferFactory<T>(override val type: SafeType<T>) : MutableBufferFactory<T> { public class ParallelBufferFactory<T>(override val type: SafeType<T>) : MutableBufferFactory<T> {

View File

@ -19,14 +19,14 @@ import kotlin.test.assertTrue
class ParallelMatrixTest { class ParallelMatrixTest {
@Test @Test
fun testTranspose() = Float64Field.linearSpace.parallel{ fun testTranspose() = Float64Field.linearSpace.parallel {
val matrix = one(3, 3) val matrix = one(3, 3)
val transposed = matrix.transposed() val transposed = matrix.transposed()
assertTrue { StructureND.contentEquals(matrix, transposed) } assertTrue { StructureND.contentEquals(matrix, transposed) }
} }
@Test @Test
fun testBuilder() = Float64Field.linearSpace.parallel{ fun testBuilder() = Float64Field.linearSpace.parallel {
val matrix = matrix(2, 3)( val matrix = matrix(2, 3)(
1.0, 0.0, 0.0, 1.0, 0.0, 0.0,
0.0, 1.0, 2.0 0.0, 1.0, 2.0
@ -36,7 +36,7 @@ class ParallelMatrixTest {
} }
@Test @Test
fun testMatrixExtension() = Float64Field.linearSpace.parallel{ fun testMatrixExtension() = Float64Field.linearSpace.parallel {
val transitionMatrix: Matrix<Double> = VirtualMatrix(6, 6) { row, col -> val transitionMatrix: Matrix<Double> = VirtualMatrix(6, 6) { row, col ->
when { when {
col == 0 -> .50 col == 0 -> .50
@ -49,7 +49,7 @@ class ParallelMatrixTest {
infix fun Matrix<Double>.pow(power: Int): Matrix<Double> { infix fun Matrix<Double>.pow(power: Int): Matrix<Double> {
var res = this var res = this
repeat(power - 1) { repeat(power - 1) {
res = res dot this@pow res = res dot this@pow
} }
return res return res
} }

View File

@ -8,4 +8,5 @@ package space.kscience.kmath.operations
/** /**
* Check if number is an integer * Check if number is an integer
*/ */
public actual fun Number.isInteger(): Boolean = (this is Int) || (this is Long) || (this is Short) || (this.toDouble() % 1 == 0.0) public actual fun Number.isInteger(): Boolean =
(this is Int) || (this is Long) || (this is Short) || (this.toDouble() % 1 == 0.0)

View File

@ -1,7 +1,5 @@
# Module kmath-coroutines # Module kmath-coroutines
## Usage ## Usage
## Artifact: ## Artifact:
@ -9,6 +7,7 @@
The Maven coordinates of this project are `space.kscience:kmath-coroutines:0.4.0-dev-3`. The Maven coordinates of this project are `space.kscience:kmath-coroutines:0.4.0-dev-3`.
**Gradle Kotlin DSL:** **Gradle Kotlin DSL:**
```kotlin ```kotlin
repositories { repositories {
maven("https://repo.kotlin.link") maven("https://repo.kotlin.link")

View File

@ -25,6 +25,7 @@ public class LazyStructureND<out T>(
} }
public suspend fun await(index: IntArray): T = async(index).await() public suspend fun await(index: IntArray): T = async(index).await()
@PerformancePitfall @PerformancePitfall
override operator fun get(index: IntArray): T = runBlocking { async(index).await() } override operator fun get(index: IntArray): T = runBlocking { async(index).await() }
@ -48,13 +49,13 @@ public suspend fun <T> StructureND<T>.await(index: IntArray): T =
* PENDING would benefit from KEEP-176 * PENDING would benefit from KEEP-176
*/ */
@OptIn(PerformancePitfall::class) @OptIn(PerformancePitfall::class)
public inline fun <T, reified R> StructureND<T>.mapAsyncIndexed( public inline fun <T, R> StructureND<T>.mapAsyncIndexed(
scope: CoroutineScope, scope: CoroutineScope,
crossinline function: suspend (T, index: IntArray) -> R, crossinline function: suspend (T, index: IntArray) -> R,
): LazyStructureND<R> = LazyStructureND(scope, shape) { index -> function(get(index), index) } ): LazyStructureND<R> = LazyStructureND(scope, shape) { index -> function(get(index), index) }
@OptIn(PerformancePitfall::class) @OptIn(PerformancePitfall::class)
public inline fun <T, reified R> StructureND<T>.mapAsync( public inline fun <T, R> StructureND<T>.mapAsync(
scope: CoroutineScope, scope: CoroutineScope,
crossinline function: suspend (T) -> R, crossinline function: suspend (T) -> R,
): LazyStructureND<R> = LazyStructureND(scope, shape) { index -> function(get(index)) } ): LazyStructureND<R> = LazyStructureND(scope, shape) { index -> function(get(index)) }
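A hedged sketch of the lazy mapping API above: `IntRingND`, `ShapeND`, and the `structureND` builder appear elsewhere in this diff, while the package location of `mapAsync`/`LazyStructureND` inside kmath-coroutines is an assumption.

```kotlin
// Sketch; the nd-package location of mapAsync/LazyStructureND is assumed, not confirmed by this diff.
import kotlinx.coroutines.coroutineScope
import space.kscience.kmath.nd.*

suspend fun main() = coroutineScope {
    val source = IntRingND(ShapeND(2, 2)).structureND(2, 2) { index -> index[0] + index[1] }
    // Every element is computed lazily in its own coroutine and materialized on await().
    val doubled = source.mapAsync(this) { it * 2 }
    println(doubled.await(intArrayOf(1, 1))) // 4
}
```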

View File

@ -9,6 +9,7 @@ A proof of concept module for adding type-safe dimensions to structures
The Maven coordinates of this project are `space.kscience:kmath-dimensions:0.4.0-dev-3`. The Maven coordinates of this project are `space.kscience:kmath-dimensions:0.4.0-dev-3`.
**Gradle Kotlin DSL:** **Gradle Kotlin DSL:**
```kotlin ```kotlin
repositories { repositories {
maven("https://repo.kotlin.link") maven("https://repo.kotlin.link")

View File

@ -2,13 +2,13 @@ plugins {
id("space.kscience.gradle.mpp") id("space.kscience.gradle.mpp")
} }
kscience{ kscience {
jvm() jvm()
js() js()
native() native()
wasm() wasm()
dependencies{ dependencies {
api(projects.kmathCore) api(projects.kmathCore)
} }

View File

@ -2,16 +2,16 @@
EJML based linear algebra implementation. EJML based linear algebra implementation.
- [ejml-vector](src/main/kotlin/space/kscience/kmath/ejml/EjmlVector.kt) : Point implementations. - [ejml-vector](src/main/kotlin/space/kscience/kmath/ejml/EjmlVector.kt) : Point implementations.
- [ejml-matrix](src/main/kotlin/space/kscience/kmath/ejml/EjmlMatrix.kt) : Matrix implementation. - [ejml-matrix](src/main/kotlin/space/kscience/kmath/ejml/EjmlMatrix.kt) : Matrix implementation.
- [ejml-linear-space](src/main/kotlin/space/kscience/kmath/ejml/EjmlLinearSpace.kt) : LinearSpace implementations. - [ejml-linear-space](src/main/kotlin/space/kscience/kmath/ejml/EjmlLinearSpace.kt) : LinearSpace implementations.
## Artifact: ## Artifact:
The Maven coordinates of this project are `space.kscience:kmath-ejml:0.4.0-dev-3`. The Maven coordinates of this project are `space.kscience:kmath-ejml:0.4.0-dev-3`.
**Gradle Kotlin DSL:** **Gradle Kotlin DSL:**
```kotlin ```kotlin
repositories { repositories {
maven("https://repo.kotlin.link") maven("https://repo.kotlin.link")

View File

@ -674,15 +674,17 @@ public object EjmlLinearSpaceDSCC : EjmlLinearSpace<Double, Float64Field, DMatri
val raw: Any? = when (attribute) { val raw: Any? = when (attribute) {
Inverted -> { Inverted -> {
val res = DMatrixRMaj(origin.numRows,origin.numCols) val res = DMatrixRMaj(origin.numRows, origin.numCols)
CommonOps_DSCC.invert(origin,res) CommonOps_DSCC.invert(origin, res)
res.wrapMatrix() res.wrapMatrix()
} }
Determinant -> CommonOps_DSCC.det(origin) Determinant -> CommonOps_DSCC.det(origin)
QR -> object : QRDecomposition<Double> { QR -> object : QRDecomposition<Double> {
val ejmlQr by lazy { DecompositionFactory_DSCC.qr(FillReducing.NONE).apply { decompose(origin.copy()) } } val ejmlQr by lazy {
DecompositionFactory_DSCC.qr(FillReducing.NONE).apply { decompose(origin.copy()) }
}
override val q: Matrix<Double> get() = ejmlQr.getQ(null, false).wrapMatrix() override val q: Matrix<Double> get() = ejmlQr.getQ(null, false).wrapMatrix()
override val r: Matrix<Double> get() = ejmlQr.getR(null, false).wrapMatrix() override val r: Matrix<Double> get() = ejmlQr.getR(null, false).wrapMatrix()
} }
@ -895,15 +897,17 @@ public object EjmlLinearSpaceFSCC : EjmlLinearSpace<Float, Float32Field, FMatrix
val raw: Any? = when (attribute) { val raw: Any? = when (attribute) {
Inverted -> { Inverted -> {
val res = FMatrixRMaj(origin.numRows,origin.numCols) val res = FMatrixRMaj(origin.numRows, origin.numCols)
CommonOps_FSCC.invert(origin,res) CommonOps_FSCC.invert(origin, res)
res.wrapMatrix() res.wrapMatrix()
} }
Determinant -> CommonOps_FSCC.det(origin) Determinant -> CommonOps_FSCC.det(origin)
QR -> object : QRDecomposition<Float32> { QR -> object : QRDecomposition<Float32> {
val ejmlQr by lazy { DecompositionFactory_FSCC.qr(FillReducing.NONE).apply { decompose(origin.copy()) } } val ejmlQr by lazy {
DecompositionFactory_FSCC.qr(FillReducing.NONE).apply { decompose(origin.copy()) }
}
override val q: Matrix<Float32> get() = ejmlQr.getQ(null, false).wrapMatrix() override val q: Matrix<Float32> get() = ejmlQr.getQ(null, false).wrapMatrix()
override val r: Matrix<Float32> get() = ejmlQr.getR(null, false).wrapMatrix() override val r: Matrix<Float32> get() = ejmlQr.getR(null, false).wrapMatrix()
} }

View File

@ -2,16 +2,18 @@
Specialization of KMath APIs for Double numbers. Specialization of KMath APIs for Double numbers.
- [DoubleVector](src/commonMain/kotlin/space/kscience/kmath/real/DoubleVector.kt) : Numpy-like operations for Buffers/Points - [DoubleVector](src/commonMain/kotlin/space/kscience/kmath/real/DoubleVector.kt) : Numpy-like operations for
- [DoubleMatrix](src/commonMain/kotlin/space/kscience/kmath/real/DoubleMatrix.kt) : Numpy-like operations for 2d real structures Buffers/Points
- [grids](src/commonMain/kotlin/space/kscience/kmath/structures/grids.kt) : Uniform grid generators - [DoubleMatrix](src/commonMain/kotlin/space/kscience/kmath/real/DoubleMatrix.kt) : Numpy-like operations for 2d real
structures
- [grids](src/commonMain/kotlin/space/kscience/kmath/structures/grids.kt) : Uniform grid generators
## Artifact: ## Artifact:
The Maven coordinates of this project are `space.kscience:kmath-for-real:0.4.0-dev-3`. The Maven coordinates of this project are `space.kscience:kmath-for-real:0.4.0-dev-3`.
**Gradle Kotlin DSL:** **Gradle Kotlin DSL:**
```kotlin ```kotlin
repositories { repositories {
maven("https://repo.kotlin.link") maven("https://repo.kotlin.link")

View File

@ -48,7 +48,7 @@ public fun Sequence<DoubleArray>.toMatrix(): RealMatrix = toList().let {
} }
public fun RealMatrix.repeatStackVertical(n: Int): RealMatrix = public fun RealMatrix.repeatStackVertical(n: Int): RealMatrix =
VirtualMatrix( rowNum * n, colNum) { row, col -> VirtualMatrix(rowNum * n, colNum) { row, col ->
get(if (row == 0) 0 else row % rowNum, col) get(if (row == 0) 0 else row % rowNum, col)
} }

View File

@ -39,7 +39,7 @@ public fun Buffer.Companion.withFixedStep(range: ClosedFloatingPointRange<Double
else -> return Float64Buffer(range.start) else -> return Float64Buffer(range.start)
} }
val numberOfPoints = floor(normalizedRange.length / step).toInt() + 1 val numberOfPoints = floor(normalizedRange.length / step).toInt() + 1
return Float64Buffer(numberOfPoints) { normalizedRange.start + step * it } return Float64Buffer(numberOfPoints) { normalizedRange.start + step * it }
} }
/** /**

Some files were not shown because too many files have changed in this diff