Update documentation

This commit is contained in:
Alexander Nozik 2025-01-02 14:27:22 +03:00
parent 531f95d55f
commit c11007216c
17 changed files with 259 additions and 56 deletions
README.md
dataforge-context
dataforge-data
dataforge-io
README.md
build.gradle.kts
dataforge-io-proto
dataforge-io-yaml
src/commonMain/kotlin/space/kscience/dataforge/io
dataforge-meta
dataforge-scripting
dataforge-workspace
docs/templates
gradle.properties
gradle/wrapper

@@ -1,7 +1,70 @@
[![JetBrains Research](https://jb.gg/badges/research.svg)](https://confluence.jetbrains.com/display/ALL/JetBrains+on+GitHub)
[![DOI](https://zenodo.org/badge/148831678.svg)](https://zenodo.org/badge/latestdoi/148831678)
![Gradle build](https://github.com/mipt-npm/dataforge-core/workflows/Gradle%20build/badge.svg)
## Publications
* [A general overview](https://doi.org/10.1051/epjconf/201817705003)
* [An application in "Troitsk nu-mass" experiment](https://doi.org/10.1088/1742-6596/1525/1/012024)
## Video
* [A presentation on application of DataForge (legacy version) to Troitsk nu-mass analysis.](https://youtu.be/OpWzLXUZnLI?si=3qn7EMruOHMJX3Bc)
## Questions and Answers
In this section, we will try to cover DataForge's main ideas in the form of questions and answers.
### General
**Q**: I have a lot of data to analyze. The analysis process is complicated, requires a lot of stages, and data flow is not always obvious. Also, the data size is huge, so I don't want to perform operations I don't need (calculate something I won't use or calculate something twice). I need it to be performed in parallel and probably on a remote computer. By the way, I am sick and tired of scripts that modify other scripts that control scripts. Could you help me?
**A**: Yes, that is precisely the problem DataForge was made to solve. It allows performing automated data manipulations with optimization and parallelization. The important thing is that data processing recipes are written in a declarative way, so it is quite easy to perform computations on a remote station. Also, DataForge guarantees reproducibility of analysis results.
**Q**: How does it work?
**A**: At the core of DataForge lies the idea of a metadata processor. It utilizes the fact that to analyze something you need the data itself and some additional information about what that data represents and what the user wants as a result. This additional information is called metadata and can be organized in a regular structure (a tree of values similar to XML or JSON). The important thing is that this distinction leaves no place for user instructions (or scripts). Indeed, the idea behind DataForge logic is that one does not need imperative commands. The framework configures itself according to the input metadata and decides what operations should be performed in the most efficient way.
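To make the declarative idea concrete, here is a minimal sketch of such a metadata tree written with the dataforge-meta builder DSL (the keys are made up for the example; treat the snippet as an illustration, not a recipe from this repository):
```kotlin
import space.kscience.dataforge.meta.Meta

// A declarative "recipe": a tree of values with no imperative commands.
// All key names below are hypothetical.
val recipe = Meta {
    "task" put "fit"
    "data" put Meta {
        "path" put "spectra/run42.dat"
        "cache" put true
    }
}
```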
**Q**: But where does it get the algorithms to use?
**A**: Of course, algorithms must be written somewhere. No magic here. The logic is written in specialized modules. Some modules are provided out of the box at the system core, some need to be developed for a specific problem.
**Q**: So I still need to write the code? What is the difference then?
**A**: Yes, someone still needs to write the code. But not necessarily you. Simple operations can be performed using the provided core logic. Also, your group can have one programmer writing the logic and all the others using it without any real programming expertise. The framework is organized in such a way that when one writes some additional logic, they do not need to think about complicated things like parallel computing, resource handling, logging, caching, etc. Most of these things are done by DataForge.
### Platform
**Q**: Which platform does DataForge use? Which operating systems does it work on?
**A**: DataForge is written mostly in multiplatform Kotlin and can be used on JVM, JS, and native targets. Some modules and functions are supported only on the JVM.
**Q**: Can I use my C++/Fortran/Python code in DataForge?
**A**: Yes, as long as the code can be called from Java. Most common languages have a bridge for Java access. There are no problems at all with compiled C/Fortran libraries. Python code can be called via one of the existing Python-Java interfaces. It is also planned to implement remote method invocation for common languages, so your Python or, say, Julia code could run in its native environment. The metadata processor paradigm makes it much easier to do so.
### Features
**Q**: What other features does DataForge provide?
**A**: Alongside metadata processing (and a lot of tools for metadata manipulation and layering), DataForge has two additional important concepts:
* **Modularisation**. Unlike many other frameworks, DataForge is intrinsically modular. The mandatory part is a rather tiny core module; everything else can be customized.
* **Context encapsulation**. Every DataForge task is executed in some context. The context isolates the environment for the task, works as a dependency-injection base, and specifies how the task interacts with the external world (see the sketch below).
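As a rough sketch of context encapsulation (the builder and property names are indicative, not a verified API listing):
```kotlin
import space.kscience.dataforge.context.Context

// Each task runs inside a Context that isolates its environment and
// acts as a dependency-injection base. The property names are hypothetical.
val context = Context("analysis") {
    properties {
        "cache.enabled" put true
    }
    // plugin(SomePluginFactory) // hypothetical: plugins register services in the context
}
```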
### Misc
**Q**: So everything looks great, can I replace my ROOT / other data analysis framework with DataForge?
**A**: One must note that DataForge is made for analysis, not for visualization. The visualization and user interaction capabilities of DataForge are rather limited compared to frameworks like ROOT, JAS3 or DataMelt. The idea is to provide a reliable API and core functionality. The [VisionForge](https://git.sciprog.center/kscience/visionforge) project aims to provide tools for both 2D and 3D visualization, locally as well as remotely.
**Q**: How does DataForge compare to cluster computation frameworks like Apache Spark?
**A**: It is not the purpose of DataForge to replace cluster computing software. DataForge has some internal parallelism mechanics and implementations, but they are most certainly worse than specially developed programs. Still, DataForge is not fixed on one single implementation: your favourite parallel processing tool can still be used as a back-end for DataForge, with the full benefit of configuration tools and integrations and no performance overhead.
**Q**: Is it possible to use DataForge in notebook mode?
**A**: [Kotlin jupyter](https://github.com/Kotlin/kotlin-jupyter) allows using any JVM program in notebook mode. A dedicated module for DataForge is a work in progress.
### [dataforge-context](dataforge-context)
@@ -14,14 +77,28 @@
> **Maturity**: EXPERIMENTAL
### [dataforge-io](dataforge-io)
> IO module
> Serialization foundation for Meta objects and Envelope processing.
>
> **Maturity**: EXPERIMENTAL
>
> **Features:**
> - [IO format](dataforge-io/src/commonMain/kotlin/space/kscience/dataforge/io/IOFormat.kt) : A generic API for reading something from a binary representation and writing it to Binary.
> - [Binary](dataforge-io/src/commonMain/kotlin/space/kscience/dataforge/io/Binary.kt) : Multi-read random access binary.
> - [Envelope](dataforge-io/src/commonMain/kotlin/space/kscience/dataforge/io/Envelope.kt) : API and implementations for a combined data and metadata format (see the sketch below).
> - [Tagged envelope](dataforge-io/src/commonMain/kotlin/space/kscience/dataforge/io/TaggedEnvelope.kt) : Implementation of a binary-friendly envelope format with a machine-readable tag and forward size declaration.
> - [Tagless envelope](dataforge-io/src/commonMain/kotlin/space/kscience/dataforge/io/TaglessEnvelope.kt) : Implementation of a text-friendly envelope format with text separators for sections.
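>
> For orientation, the Envelope contract is tiny; it boils down to roughly the following (a simplified restatement, not a verbatim copy of Envelope.kt):
>
> ```kotlin
> import space.kscience.dataforge.io.Binary
> import space.kscience.dataforge.meta.Meta
>
> // An envelope couples a metadata tree with an optional binary payload.
> public interface Envelope {
>     public val meta: Meta
>     public val data: Binary?
> }
> ```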
### [dataforge-meta](dataforge-meta)
> Meta definition and basic operations on meta
> Core Meta and Name manipulation module
>
> **Maturity**: DEVELOPMENT
>
> **Features:**
> - [Meta](dataforge-meta/src/commonMain/kotlin/space/kscience/dataforge/meta/Meta.kt) : **Meta** is the representation of the basic DataForge concept: metadata. It could also be called a meta-value tree.
> - [Value](dataforge-meta/src/commonMain/kotlin/space/kscience/dataforge/meta/Value.kt) : **Value** is a sum type for different meta values.
> - [Name](dataforge-meta/src/commonMain/kotlin/space/kscience/dataforge/names/Name.kt) : **Name** is an identifier to access a tree-like structure (see the sketch below).
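>
> A minimal sketch of how the three concepts interact (accessor names follow the dataforge-meta DSL; the keys are illustrative):
>
> ```kotlin
> import space.kscience.dataforge.meta.Meta
> import space.kscience.dataforge.meta.get
> import space.kscience.dataforge.meta.string
> import space.kscience.dataforge.names.Name
>
> val meta = Meta {
>     "detector" put Meta {
>         "name" put "main" // a string Value stored in the tree
>     }
> }
>
> // A Name is a dot-separated path into the tree.
> val path = Name.parse("detector.name")
> val detectorName: String? = meta[path]?.string // "main"
> ```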
### [dataforge-scripting](dataforge-scripting)
>
@@ -31,6 +108,11 @@
>
> **Maturity**: EXPERIMENTAL
### [dataforge-io/dataforge-io-proto](dataforge-io/dataforge-io-proto)
> ProtoBuf Meta representation
>
> **Maturity**: PROTOTYPE
### [dataforge-io/dataforge-io-yaml](dataforge-io/dataforge-io-yaml)
> YAML meta converters and Front Matter envelope format
>

@@ -6,7 +6,7 @@ Context and provider definitions
## Artifact:
The Maven coordinates of this project are `space.kscience:dataforge-context:0.9.0-dev-1`.
The Maven coordinates of this project are `space.kscience:dataforge-context:0.10.0`.
**Gradle Kotlin DSL:**
```kotlin
@@ -16,6 +16,6 @@ repositories {
}
dependencies {
implementation("space.kscience:dataforge-context:0.9.0-dev-1")
implementation("space.kscience:dataforge-context:0.10.0")
}
```
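Pieced together, the artifact section these hunks touch reads as follows (shown for dataforge-context; the commit only bumps the version, the repository setup is unchanged):

```kotlin
repositories {
    maven("https://repo.kotlin.link")
    mavenCentral()
}

dependencies {
    implementation("space.kscience:dataforge-context:0.10.0")
}
```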

@@ -6,7 +6,7 @@
## Artifact:
The Maven coordinates of this project are `space.kscience:dataforge-data:0.9.0-dev-1`.
The Maven coordinates of this project are `space.kscience:dataforge-data:0.10.0`.
**Gradle Kotlin DSL:**
```kotlin
@@ -16,6 +16,6 @@ repositories {
}
dependencies {
implementation("space.kscience:dataforge-data:0.9.0-dev-1")
implementation("space.kscience:dataforge-data:0.10.0")
}
```

@@ -2,11 +2,20 @@
IO module
## Features
- [IO format](src/commonMain/kotlin/space/kscience/dataforge/io/IOFormat.kt) : A generic API for reading something from a binary representation and writing it to Binary.
- [Binary](src/commonMain/kotlin/space/kscience/dataforge/io/Binary.kt) : Multi-read random access binary.
- [Envelope](src/commonMain/kotlin/space/kscience/dataforge/io/Envelope.kt) : API and implementations for a combined data and metadata format.
- [Tagged envelope](src/commonMain/kotlin/space/kscience/dataforge/io/TaggedEnvelope.kt) : Implementation of a binary-friendly envelope format with a machine-readable tag and forward size declaration.
- [Tagless envelope](src/commonMain/kotlin/space/kscience/dataforge/io/TaglessEnvelope.kt) : Implementation of a text-friendly envelope format with text separators for sections.
## Usage
## Artifact:
The Maven coordinates of this project are `space.kscience:dataforge-io:0.9.0-dev-1`.
The Maven coordinates of this project are `space.kscience:dataforge-io:0.10.0`.
**Gradle Kotlin DSL:**
```kotlin
@@ -16,6 +25,6 @@ repositories {
}
dependencies {
implementation("space.kscience:dataforge-io:0.9.0-dev-1")
implementation("space.kscience:dataforge-io:0.10.0")
}
```

@@ -4,7 +4,7 @@ plugins {
description = "IO module"
val ioVersion = "0.4.0"
val ioVersion = "0.6.0"
kscience {
jvm()
@@ -22,6 +22,60 @@ kscience {
}
}
readme{
readme {
maturity = space.kscience.gradle.Maturity.EXPERIMENTAL
description = """
Serialization foundation for Meta objects and Envelope processing.
""".trimIndent()
feature(
"io-format",
ref = "src/commonMain/kotlin/space/kscience/dataforge/io/IOFormat.kt",
name = "IO format"
) {
"""
A generic API for reading something from a binary representation and writing it to Binary.
Similar to KSerializer, but without schema.
""".trimIndent()
}
feature(
"binary",
ref = "src/commonMain/kotlin/space/kscience/dataforge/io/Binary.kt",
name = "Binary"
) {
"Multi-read random access binary."
}
feature(
"envelope",
ref = "src/commonMain/kotlin/space/kscience/dataforge/io/Envelope.kt",
name = "Envelope"
) {
"""
API and implementations for a combined data and metadata format.
""".trimIndent()
}
feature(
"envelope.tagged",
ref = "src/commonMain/kotlin/space/kscience/dataforge/io/TaggedEnvelope.kt",
name = "Tagged envelope"
) {
"""
Implementation of a binary-friendly envelope format with a machine-readable tag and forward size declaration.
""".trimIndent()
}
feature(
"envelope.tagless",
ref = "src/commonMain/kotlin/space/kscience/dataforge/io/TaglessEnvelope.kt",
name = "Tagged envelope"
) {
"""
Implementation of a text-friendly envelope format with text separators for sections.
""".trimIndent()
}
}

@@ -0,0 +1,21 @@
# Module dataforge-io-proto
ProtoBuf meta IO
## Usage
## Artifact:
The Maven coordinates of this project are `space.kscience:dataforge-io-proto:0.10.0`.
**Gradle Kotlin DSL:**
```kotlin
repositories {
maven("https://repo.kotlin.link")
mavenCentral()
}
dependencies {
implementation("space.kscience:dataforge-io-proto:0.10.0")
}
```

@@ -6,7 +6,7 @@ YAML meta IO
## Artifact:
The Maven coordinates of this project are `space.kscience:dataforge-io-yaml:0.9.0-dev-1`.
The Maven coordinates of this project are `space.kscience:dataforge-io-yaml:0.10.0`.
**Gradle Kotlin DSL:**
```kotlin
@@ -16,6 +16,6 @@ repositories {
}
dependencies {
implementation("space.kscience:dataforge-io-yaml:0.9.0-dev-1")
implementation("space.kscience:dataforge-io-yaml:0.10.0")
}
```

@@ -11,14 +11,14 @@ kscience {
dependencies {
api(projects.dataforgeIo)
}
useSerialization{
useSerialization {
yamlKt()
}
}
readme{
readme {
maturity = space.kscience.gradle.Maturity.PROTOTYPE
description ="""
description = """
YAML meta converters and Front Matter envelope format
""".trimIndent()
}

@@ -1,12 +0,0 @@
package space.kscience.dataforge.io
/**
* An object that could respond to external messages asynchronously
*/
public interface Responder {
/**
* Send a request and wait for response for this specific request
*/
public suspend fun respond(request: Envelope): Envelope
}

@@ -2,11 +2,18 @@
Meta definition and basic operations on meta
## Features
- [Meta](src/commonMain/kotlin/space/kscience/dataforge/meta/Meta.kt) : **Meta** is the representation of the basic DataForge concept: metadata. It could also be called a meta-value tree.
- [Value](src/commonMain/kotlin/space/kscience/dataforge/meta/Value.kt) : **Value** is a sum type for different meta values.
- [Name](src/commonMain/kotlin/space/kscience/dataforge/names/Name.kt) : **Name** is an identifier to access a tree-like structure.
## Usage
## Artifact:
The Maven coordinates of this project are `space.kscience:dataforge-meta:0.9.0-dev-1`.
The Maven coordinates of this project are `space.kscience:dataforge-meta:0.10.0`.
**Gradle Kotlin DSL:**
```kotlin
@@ -16,6 +23,6 @@ repositories {
}
dependencies {
implementation("space.kscience:dataforge-meta:0.9.0-dev-1")
implementation("space.kscience:dataforge-meta:0.10.0")
}
```

@@ -7,19 +7,57 @@ kscience {
js()
native()
wasm()
useSerialization{
useSerialization {
json()
}
}
description = "Meta definition and basic operations on meta"
readme{
readme {
maturity = space.kscience.gradle.Maturity.DEVELOPMENT
feature("metadata"){
description = """
Core Meta and Name manipulation module
""".trimIndent()
feature(
"meta",
ref = "src/commonMain/kotlin/space/kscience/dataforge/meta/Meta.kt",
name = "Meta"
) {
"""
**Meta** is the representation of the basic DataForge concept: metadata. It could also be called a meta-value tree.
Each Meta node could have a Value as well as a map of named child items.
""".trimIndent()
}
feature(
"value",
ref = "src/commonMain/kotlin/space/kscience/dataforge/meta/Value.kt",
name = "Value"
) {
"""
**Value** is a sum type for different meta values.
The following types are implemented in core (custom ones are also available):
* null
* boolean
* number
* string
* list of values
""".trimIndent()
}
feature(
"name",
ref = "src/commonMain/kotlin/space/kscience/dataforge/names/Name.kt",
name = "Name"
) {
"""
**Name** is an identifier to access a tree-like structure.
""".trimIndent()
}
}

@@ -6,7 +6,7 @@
## Artifact:
The Maven coordinates of this project are `space.kscience:dataforge-scripting:0.9.0-dev-1`.
The Maven coordinates of this project are `space.kscience:dataforge-scripting:0.10.0`.
**Gradle Kotlin DSL:**
```kotlin
@@ -16,6 +16,6 @@ repositories {
}
dependencies {
implementation("space.kscience:dataforge-scripting:0.9.0-dev-1")
implementation("space.kscience:dataforge-scripting:0.10.0")
}
```

@@ -2,22 +2,24 @@ plugins {
id("space.kscience.gradle.mpp")
}
kscience{
description = "Scripting definition fow workspace generation"
kscience {
jvm()
commonMain {
api(projects.dataforgeWorkspace)
implementation(kotlin("scripting-common"))
}
jvmMain{
jvmMain {
implementation(kotlin("scripting-jvm-host"))
implementation(kotlin("scripting-jvm"))
}
jvmTest{
jvmTest {
implementation(spclibs.logback.classic)
}
}
readme{
readme {
maturity = space.kscience.gradle.Maturity.PROTOTYPE
}

@@ -6,7 +6,7 @@
## Artifact:
The Maven coordinates of this project are `space.kscience:dataforge-workspace:0.9.0-dev-1`.
The Maven coordinates of this project are `space.kscience:dataforge-workspace:0.10.0`.
**Gradle Kotlin DSL:**
```kotlin
@@ -16,6 +16,6 @@ repositories {
}
dependencies {
implementation("space.kscience:dataforge-workspace:0.9.0-dev-1")
implementation("space.kscience:dataforge-workspace:0.10.0")
}
```

@@ -1,8 +1,6 @@
[![JetBrains Research](https://jb.gg/badges/research.svg)](https://confluence.jetbrains.com/display/ALL/JetBrains+on+GitHub)
[![DOI](https://zenodo.org/badge/148831678.svg)](https://zenodo.org/badge/latestdoi/148831678)
![Gradle build](https://github.com/mipt-npm/dataforge-core/workflows/Gradle%20build/badge.svg)
## Publications
* [A general overview](https://doi.org/10.1051/epjconf/201817705003)
@@ -10,27 +8,29 @@
## Video
* [A presentation on application of (old version of) DataForge to Troitsk nu-mass analysis.] (https://youtu.be/OpWzLXUZnLI?si=3qn7EMruOHMJX3Bc)
* [A presentation on application of DataForge (legacy version) to Troitsk nu-mass analysis.](https://youtu.be/OpWzLXUZnLI?si=3qn7EMruOHMJX3Bc)
## Questions and Answers
In this section, we will try to cover DataForge's main ideas in the form of questions and answers.
### General
**Q**: I have a lot of data to analyze. The analysis process is complicated, requires a lot of stages and data flow is not always obvious. To top it the data size is huge, so I don't want to perform operation I don't need (calculate something I won't need or calculate something twice). And yes, I need it to be performed in parallel and probably on remote computer. By the way, I am sick and tired of scripts that modify other scripts that control scripts. Could you help me?
**A**: Yes, that is precisely the problem DataForge was made to solve. It allows to perform some automated data manipulations with automatic optimization and parallelization. The important thing that data processing recipes are made in the declarative way, so it is quite easy to perform computations on a remote station. Also, DataForge guarantees reproducibility of analysis results.
**Q**: I have a lot of data to analyze. The analysis process is complicated, requires a lot of stages, and data flow is not always obvious. Also, the data size is huge, so I don't want to perform operations I don't need (calculate something I won't use or calculate something twice). I need it to be performed in parallel and probably on a remote computer. By the way, I am sick and tired of scripts that modify other scripts that control scripts. Could you help me?
**A**: Yes, that is precisely the problem DataForge was made to solve. It allows performing automated data manipulations with optimization and parallelization. The important thing is that data processing recipes are written in a declarative way, so it is quite easy to perform computations on a remote station. Also, DataForge guarantees reproducibility of analysis results.
**Q**: How does it work?
**A**: At the core of DataForge lies the idea of metadata processor. It utilizes the fact that in order to analyze something you need data itself and some additional information about what does that data represent and what does user want as a result. This additional information is called metadata and could be organized in a regular structure (a tree of values not unlike XML or JSON). The important thing is that this distinction leaves no place for user instructions (or scripts). Indeed, the idea of DataForge logic is that one do not need imperative commands. The framework configures itself according to input meta-data and decides what operations should be performed in the most efficient way.
**A**: At the core of DataForge lies the idea of a metadata processor. It utilizes the fact that to analyze something you need the data itself and some additional information about what that data represents and what the user wants as a result. This additional information is called metadata and can be organized in a regular structure (a tree of values similar to XML or JSON). The important thing is that this distinction leaves no place for user instructions (or scripts). Indeed, the idea behind DataForge logic is that one does not need imperative commands. The framework configures itself according to the input metadata and decides what operations should be performed in the most efficient way.
**Q**: But where does it get the algorithms to use?
**A**: Of course algorithms must be written somewhere. No magic here. The logic is written in specialized modules. Some modules are provided out of the box at the system core, some need to be developed for specific problem.
**A**: Of course, algorithms must be written somewhere. No magic here. The logic is written in specialized modules. Some modules are provided out of the box at the system core, some need to be developed for a specific problem.
**Q**: So I still need to write the code? What is the difference then?
**A**: Yes, someone still needs to write the code. But not necessary you. Simple operations could be performed using provided core logic. Also, your group can have one programmer writing the logic and all other using it without any real programming expertise. The framework organized in a such way that one writes some additional logic, they do not need to think about complicated thing like parallel computing, resource handling, logging, caching etc. Most of the things are done by the DataForge.
**A**: Yes, someone still needs to write the code. But not necessarily you. Simple operations can be performed using the provided core logic. Also, your group can have one programmer writing the logic and all the others using it without any real programming expertise. The framework is organized in such a way that when one writes some additional logic, they do not need to think about complicated things like parallel computing, resource handling, logging, caching, etc. Most of these things are done by DataForge.
### Platform
@@ -40,9 +40,10 @@ In this section, we will try to cover DataForge main ideas in the form of questi
**Q**: Can I use my C++/Fortran/Python code in DataForge?
A: Yes, as long as the code could be called from Java. Most of common languages have a bridge for Java access. There are completely no problems with compiled C/Fortran libraries. Python code could be called via one of existing python-java interfaces. It is also planned to implement remote method invocation for common languages, so your Python, or, say, Julia, code could run in its native environment. The metadata processor paradigm makes it much easier to do so.
**A**: Yes, as long as the code can be called from Java. Most common languages have a bridge for Java access. There are no problems at all with compiled C/Fortran libraries. Python code can be called via one of the existing Python-Java interfaces. It is also planned to implement remote method invocation for common languages, so your Python or, say, Julia code could run in its native environment. The metadata processor paradigm makes it much easier to do so.
### Features
**Q**: What other features does DataForge provide?
**A**: Alongside metadata processing (and a lot of tools for metadata manipulation and layering), DataForge has two additional important concepts:
@@ -52,16 +53,17 @@ A: Yes, as long as the code could be called from Java. Most of common languages
* **Context encapsulation**. Every DataForge task is executed in some context. The context isolates environment for the task and also works as dependency injection base and specifies interaction of the task with the external world.
### Misc
**Q**: So everything looks great, can I replace my ROOT / other data analysis framework with DataForge?
**A**: One must note, that DataForge is made for analysis, not for visualisation. The visualisation and user interaction capabilities of DataForge are rather limited compared to frameworks like ROOT, JAS3 or DataMelt. The idea is to provide reliable API and core functionality. In fact JAS3 and DataMelt could be used as a frontend for DataForge mechanics.
**A**: One must note that DataForge is made for analysis, not for visualization. The visualization and user interaction capabilities of DataForge are rather limited compared to frameworks like ROOT, JAS3 or DataMelt. The idea is to provide a reliable API and core functionality. The [VisionForge](https://git.sciprog.center/kscience/visionforge) project aims to provide tools for both 2D and 3D visualization, locally as well as remotely.
**Q**: How does DataForge compare to cluster computation frameworks like Apache Spark?
**A**: Again, it is not the purpose of DataForge to replace cluster software. DataForge has some internal parallelism mechanics and implementations, but they are most certainly worse than specially developed programs. Still, DataForge is not fixed on one single implementation. Your favourite parallel processing tool could be still used as a back-end for the DataForge. With full benefit of configuration tools, integrations and no performance overhead.
**A**: It is not the purpose of DataForge to replace cluster computing software. DataForge has some internal parallelism mechanics and implementations, but they are most certainly worse than specially developed programs. Still, DataForge is not fixed on one single implementation: your favourite parallel processing tool can still be used as a back-end for DataForge, with the full benefit of configuration tools and integrations and no performance overhead.
**Q**: Is it possible to use DataForge in notebook mode?
**A**: [Kotlin jupyter](https://github.com/Kotlin/kotlin-jupyter) allows to use any JVM program in a notebook mode. The dedicated module for DataForge is work in progress.
**A**: [Kotlin jupyter](https://github.com/Kotlin/kotlin-jupyter) allows using any JVM program in notebook mode. A dedicated module for DataForge is a work in progress.
${modules}

@ -6,4 +6,4 @@ org.gradle.jvmargs=-Xmx4096m
kotlin.mpp.stability.nowarn=true
kotlin.native.ignoreDisabledTargets=true
toolsVersion=0.16.0-kotlin-2.1.0
toolsVersion=0.16.1-kotlin-2.1.0

@@ -1,5 +1,5 @@
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-8.6-bin.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-8.12-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists