Call remote tasks of service workspace #75
@@ -15,7 +15,8 @@ kotlin {
     jvmMain {
         dependencies {
             // TODO include fat jar of lambdarpc
-            api(files("lambdarpc-core-0.0.1.jar"))
+            val path = "../../../lambdarpc/LambdaRPC.kt/lambdarpc-core/build/libs"
+            api(files("$path/lambdarpc-core-0.0.1-SNAPSHOT.jar"))
             runtimeOnly("io.grpc:grpc-netty-shaded:1.44.0")
             api("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.6.0")
             api("io.grpc:grpc-protobuf:1.44.0")
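Regarding the TODO above: a minimal sketch of one way to get a fat jar, assuming the lambdarpc build applied the Shadow plugin. The plugin id and version are assumptions for illustration, not part of this PR.

```kotlin
// In the lambdarpc-core build (not this repository): bundle all runtime
// dependencies into a single self-contained jar. Assumes the Shadow plugin;
// the version shown is illustrative.
plugins {
    id("com.github.johnrengelman.shadow") version "7.1.2"
}

// This module could then reference the fat "-all" artifact instead of the
// plain snapshot jar, e.g.:
// api(files("$path/lambdarpc-core-0.0.1-SNAPSHOT-all.jar"))
```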
@@ -1,37 +0,0 @@
-package space.kscience.dataforge.distributed
-
-import io.lambdarpc.utils.Endpoint
-import space.kscience.dataforge.context.AbstractPlugin
-import space.kscience.dataforge.context.PluginTag
-import space.kscience.dataforge.names.Name
-import space.kscience.dataforge.workspace.Task
-import kotlin.reflect.KType
-
-/**
- * Plugin whose purpose is to communicate with remote plugins.
- * @param endpoint Endpoint of the remote plugin.
- */
-public abstract class ClientWorkspacePlugin(endpoint: Endpoint) : AbstractPlugin() {
-
-    /**
-     * Tag of the [ClientWorkspacePlugin] should be equal to the tag of the corresponding remote plugin.
-     */
-    abstract override val tag: PluginTag
-
-    /**
-     * Enumeration of names of remote tasks and their result types.
-     */
-    public abstract val tasks: Map<Name, KType>
-
-    private val _tasks: Map<Name, Task<*>> by lazy {
-        tasks.mapValues { (_, type) ->
-            RemoteTask<Any>(endpoint, type)
-        }
-    }
-
-    override fun content(target: String): Map<Name, Any> =
-        when (target) {
-            Task.TYPE -> _tasks
-            else -> emptyMap()
-        }
-}
@@ -0,0 +1,36 @@
+package space.kscience.dataforge.distributed
+
+import io.lambdarpc.utils.Endpoint
+import space.kscience.dataforge.context.AbstractPlugin
+import space.kscience.dataforge.context.Plugin
+import space.kscience.dataforge.context.PluginFactory
+import space.kscience.dataforge.context.PluginTag
+import space.kscience.dataforge.names.Name
+import space.kscience.dataforge.workspace.SerializableResultTask
+import space.kscience.dataforge.workspace.Task
+
+/**
+ * Plugin whose purpose is to communicate with remote plugins.
+ * @param plugin A remote plugin to be used.
+ * @param endpoint Endpoint of the remote plugin.
+ */
+public class RemotePlugin<P : Plugin>(private val plugin: P, private val endpoint: String) : AbstractPlugin() {
+
+    public constructor(factory: PluginFactory<P>, endpoint: String) : this(factory(), endpoint)
+
+    override val tag: PluginTag
+        get() = plugin.tag
+
+    private val tasks = plugin.content(Task.TYPE)
+        .filterValues { it is SerializableResultTask<*> }
+        .mapValues { (_, task) ->
+            require(task is SerializableResultTask<*>)
+            RemoteTask(Endpoint(endpoint), task.resultType, task.resultSerializer)
+        }
+
+    override fun content(target: String): Map<Name, Any> =
+        when (target) {
+            Task.TYPE -> tasks
+            else -> emptyMap()
+        }
+}

I still do not understand why we need wrappers for plugins. Plugins are only a way to load tasks into a workspace. There should be a wrapper for a task, not a plugin.

But where should the endpoints for the tasks be provided? The task declaration does not seem like a good enough place. Should I use Meta for this?
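To make the open question concrete, here is a hypothetical sketch of resolving the endpoint from task Meta instead of wrapping a whole plugin. The `"endpoint"` key and the `resolveTask` helper are assumptions for illustration, not part of this PR.

```kotlin
import io.lambdarpc.utils.Endpoint
import space.kscience.dataforge.meta.Meta
import space.kscience.dataforge.meta.string
import space.kscience.dataforge.workspace.SerializableResultTask
import space.kscience.dataforge.workspace.Task

// Hypothetical: pick a remote proxy when the task Meta carries an endpoint,
// and fall back to local execution otherwise.
internal fun <T : Any> resolveTask(task: SerializableResultTask<T>, taskMeta: Meta): Task<T> {
    val endpoint: String? = taskMeta["endpoint"]?.string // e.g. "localhost:8888"
    return if (endpoint != null) {
        RemoteTask(Endpoint(endpoint), task.resultType, task.resultSerializer)
    } else {
        task
    }
}
```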
@@ -1,13 +1,13 @@
 package space.kscience.dataforge.distributed
 
-import io.lambdarpc.dsl.ServiceDispatcher
+import io.lambdarpc.context.ServiceDispatcher
 import io.lambdarpc.utils.Endpoint
 import kotlinx.coroutines.withContext
-import space.kscience.dataforge.data.DataSet
+import kotlinx.serialization.KSerializer
 import space.kscience.dataforge.meta.Meta
+import space.kscience.dataforge.meta.descriptors.MetaDescriptor
 import space.kscience.dataforge.names.Name
-import space.kscience.dataforge.workspace.Task
+import space.kscience.dataforge.workspace.SerializableResultTask
 import space.kscience.dataforge.workspace.TaskResult
 import space.kscience.dataforge.workspace.Workspace
 import space.kscience.dataforge.workspace.wrapResult
@@ -17,20 +17,20 @@ import kotlin.reflect.KType
  * Proxy task that communicates with the corresponding remote task.
  */
 internal class RemoteTask<T : Any>(
-    endpoint: Endpoint,
+    internal val endpoint: Endpoint,
     override val resultType: KType,
-) : Task<T> {
+    override val resultSerializer: KSerializer<T>,
+    override val descriptor: MetaDescriptor? = null,
+    private val taskRegistry: TaskRegistry? = null,
+) : SerializableResultTask<T> {
     private val dispatcher = ServiceDispatcher(ServiceWorkspace.serviceId to endpoint)
 
-    @Suppress("UNCHECKED_CAST")
-    override suspend fun execute(
-        workspace: Workspace,
-        taskName: Name,
-        taskMeta: Meta,
-    ): TaskResult<T> = withContext(dispatcher) {
-        val dataset = ServiceWorkspace.execute(taskName)
-        dataset.finishDecoding(resultType)
-        workspace.wrapResult(dataset as DataSet<T>, taskName, taskMeta)
+    override suspend fun execute(workspace: Workspace, taskName: Name, taskMeta: Meta): TaskResult<T> {
+        val registry = taskRegistry ?: TaskRegistry(workspace.tasks)
+        val result = withContext(dispatcher) {
+            ServiceWorkspace.execute(taskName, taskMeta, registry)
+        }
+        val dataSet = result.toDataSet(resultType, resultSerializer)
+        return workspace.wrapResult(dataSet, taskName, taskMeta)
     }
 }
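A hypothetical client-side view of what this proxy enables. `MyPlugin` and `"myTask"` are illustrative names, and the builder DSL is assumed from dataforge-workspace; this is a sketch, not code from the PR.

```kotlin
import space.kscience.dataforge.meta.Meta
import space.kscience.dataforge.names.asName
import space.kscience.dataforge.workspace.Workspace

suspend fun main() {
    // Hypothetical: MyPlugin is some plugin whose tasks have serializable results.
    val workspace = Workspace {
        context {
            plugin(RemotePlugin(MyPlugin.Factory, "localhost:8888"))
        }
    }
    // Resolving "myTask" through the RemotePlugin yields a RemoteTask, so this
    // produce() call is transparently executed on the remote endpoint.
    val result = workspace.produce("myTask".asName(), Meta.EMPTY)
}
```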
@@ -3,21 +3,23 @@ package space.kscience.dataforge.distributed

DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now.

Need to think about better naming.

Why not suspended?

I thought about `WorkspaceNode` or `WorkerWorkspace`. There is also `DistributedWorkspace`, but it is not truly distributed itself. Also, `ServiceWorkspace` is good enough in my opinion: "service" here means that this workspace should be run on some endpoint to be available.

Because it is blocking in the Google gRPC implementation. It could be made suspend and block a separate thread, but I do not see any reason to do so.

I do not see why we need it in the API; on the server side the connection should be closeable, not the workspace. Maybe you should rename it to a connection and remove the inheritance from workspace?
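For reference, the suspend alternative discussed here is small; a sketch only, not part of the PR, assuming `ServiceWorkspace.close` stays blocking:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Sketch: move the blocking gRPC shutdown off the caller's dispatcher. The
// blocking call still occupies an IO thread; it just no longer blocks the
// calling coroutine's thread.
suspend fun ServiceWorkspace.closeSuspending(): Unit = withContext(Dispatchers.IO) {
    close()
}
```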
 import io.ktor.utils.io.core.*
 import io.lambdarpc.dsl.LibService
 import io.lambdarpc.dsl.def
-import io.lambdarpc.utils.Address
 import io.lambdarpc.utils.Port
-import io.lambdarpc.utils.toSid
+import io.lambdarpc.dsl.j
+import io.lambdarpc.utils.ServiceId
 import kotlinx.coroutines.runBlocking
+import kotlinx.serialization.KSerializer
 import space.kscience.dataforge.context.Context
 import space.kscience.dataforge.context.Global
 import space.kscience.dataforge.context.gather
 import space.kscience.dataforge.data.DataSet
 import space.kscience.dataforge.data.DataTree
 import space.kscience.dataforge.distributed.serialization.DataSetCoder
+import space.kscience.dataforge.distributed.serialization.DataSetPrototype
+import space.kscience.dataforge.distributed.serialization.MetaCoder
 import space.kscience.dataforge.distributed.serialization.NameCoder
-import space.kscience.dataforge.distributed.serialization.SerializableDataSetAdapter
+import space.kscience.dataforge.distributed.serialization.TaskRegistryCoder
 import space.kscience.dataforge.meta.Meta
 import space.kscience.dataforge.names.Name
 import space.kscience.dataforge.names.asName
+import space.kscience.dataforge.workspace.SerializableResultTask
 import space.kscience.dataforge.workspace.Task
 import space.kscience.dataforge.workspace.TaskResult
 import space.kscience.dataforge.workspace.Workspace
@ -25,37 +27,73 @@ import space.kscience.dataforge.workspace.wrapResult
|
||||
DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now. DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now.
DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now. DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now.
Need to think about better naming Need to think about better naming
Need to think about better naming Need to think about better naming
Why not suspended? Why not suspended?
Why not suspended? Why not suspended?
I thought about I thought about `WorkspaceNode` or `WorkerWorkspace`. There is also `DistributedWorkspace` but it is not truly distributed itself.
Also `ServiceWorkpace` is good enought to my opinion. "Service" here means that this workspace should be run on some endpoint to be available.
I thought about I thought about `WorkspaceNode` or `WorkerWorkspace`. There is also `DistributedWorkspace` but it is not truly distributed itself.
Also `ServiceWorkpace` is good enought to my opinion. "Service" here means that this workspace should be run on some endpoint to be available.
Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so. Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so.
Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so. Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so.
|
||||
|
||||
/**
|
||||
* Workspace that exposes its tasks for remote clients.
|
||||
* @param port Port to start service on. Will be random if null.
|
||||
DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now. DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now.
Need to think about better naming Need to think about better naming
Why not suspended? Why not suspended?
I thought about I thought about `WorkspaceNode` or `WorkerWorkspace`. There is also `DistributedWorkspace` but it is not truly distributed itself.
Also `ServiceWorkpace` is good enought to my opinion. "Service" here means that this workspace should be run on some endpoint to be available.
Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so. Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so.
|
||||
*/
|
||||
public class ServiceWorkspace(
|
||||
address: String = "localhost",
|
||||
DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now. DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now.
Need to think about better naming Need to think about better naming
Why not suspended? Why not suspended?
I thought about I thought about `WorkspaceNode` or `WorkerWorkspace`. There is also `DistributedWorkspace` but it is not truly distributed itself.
Also `ServiceWorkpace` is good enought to my opinion. "Service" here means that this workspace should be run on some endpoint to be available.
Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so. Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so.
|
||||
port: Int? = null,
|
||||
override val context: Context = Global.buildContext("workspace".asName()),
|
||||
private val dataSerializer: KSerializer<Any>? = null,
|
||||
DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now. DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now.
Need to think about better naming Need to think about better naming
Why not suspended? Why not suspended?
I thought about I thought about `WorkspaceNode` or `WorkerWorkspace`. There is also `DistributedWorkspace` but it is not truly distributed itself.
Also `ServiceWorkpace` is good enought to my opinion. "Service" here means that this workspace should be run on some endpoint to be available.
Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so. Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so.
|
||||
data: DataSet<*> = runBlocking { DataTree<Any> {} },
|
||||
override val targets: Map<String, Meta> = mapOf(),
|
||||
) : Workspace, Closeable {
|
||||
private val _port: Int? = port
|
||||
DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now. DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now.
Need to think about better naming Need to think about better naming
Why not suspended? Why not suspended?
I thought about I thought about `WorkspaceNode` or `WorkerWorkspace`. There is also `DistributedWorkspace` but it is not truly distributed itself.
Also `ServiceWorkpace` is good enought to my opinion. "Service" here means that this workspace should be run on some endpoint to be available.
Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so. Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so.
|
||||
|
||||
override val data: TaskResult<*> = wrapResult(data, Name.EMPTY, Meta.EMPTY)
|
||||
|
||||
override val tasks: Map<Name, Task<*>>
|
||||
get() = context.gather(Task.TYPE)
|
||||
|
||||
private val service = LibService(serviceId, address, port) {
|
||||
DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now. DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now.
Need to think about better naming Need to think about better naming
Why not suspended? Why not suspended?
I thought about I thought about `WorkspaceNode` or `WorkerWorkspace`. There is also `DistributedWorkspace` but it is not truly distributed itself.
Also `ServiceWorkpace` is good enought to my opinion. "Service" here means that this workspace should be run on some endpoint to be available.
Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so. Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so.
|
||||
execute of { name ->
|
||||
DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now. DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now.
Need to think about better naming Need to think about better naming
Why not suspended? Why not suspended?
I thought about I thought about `WorkspaceNode` or `WorkerWorkspace`. There is also `DistributedWorkspace` but it is not truly distributed itself.
Also `ServiceWorkpace` is good enought to my opinion. "Service" here means that this workspace should be run on some endpoint to be available.
Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so. Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so.
|
||||
val res = produce(name, Meta.EMPTY)
|
||||
DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now. DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now.
Need to think about better naming Need to think about better naming
Why not suspended? Why not suspended?
I thought about I thought about `WorkspaceNode` or `WorkerWorkspace`. There is also `DistributedWorkspace` but it is not truly distributed itself.
Also `ServiceWorkpace` is good enought to my opinion. "Service" here means that this workspace should be run on some endpoint to be available.
Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so. Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so.
|
||||
SerializableDataSetAdapter(res)
|
||||
DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now. DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now.
Need to think about better naming Need to think about better naming
Why not suspended? Why not suspended?
I thought about I thought about `WorkspaceNode` or `WorkerWorkspace`. There is also `DistributedWorkspace` but it is not truly distributed itself.
Also `ServiceWorkpace` is good enought to my opinion. "Service" here means that this workspace should be run on some endpoint to be available.
Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so. Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so.
|
||||
private val service = LibService(serviceId, port) {
|
||||
DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now. DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now.
Need to think about better naming Need to think about better naming
Why not suspended? Why not suspended?
I thought about I thought about `WorkspaceNode` or `WorkerWorkspace`. There is also `DistributedWorkspace` but it is not truly distributed itself.
Also `ServiceWorkpace` is good enought to my opinion. "Service" here means that this workspace should be run on some endpoint to be available.
Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so. Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so.
execute of { name, meta, taskRegistry ->
    if (name == Name.EMPTY) {
        requireNotNull(dataSerializer) { "Data serializer is not provided on $port" }
        DataSetPrototype.of(data, dataSerializer)
    } else {
        val task = tasks[name] ?: error("Task $name does not exist locally")
        require(task is SerializableResultTask) { "Result of $name cannot be serialized" }
        val workspace = ProxyWorkspace(taskRegistry)

        // Local function to capture generic parameter
        suspend fun <T : Any> execute(task: SerializableResultTask<T>): DataSetPrototype {
            val result = task.execute(workspace, name, meta)
            return DataSetPrototype.of(result, task.resultSerializer)
        }
        execute(task)
    }
}
}
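The local `execute` above is the usual trick for recovering a generic parameter from a star-projected value: `task` is only known as `SerializableResultTask<*>`, so its result and its serializer cannot be related to each other until a generic function reintroduces `T`. A self-contained sketch of the same idiom (all names here are illustrative, not from this codebase):

```kotlin
import kotlinx.serialization.KSerializer
import kotlinx.serialization.builtins.serializer
import kotlinx.serialization.json.Json

// A value paired with a serializer for its own type, analogous to
// SerializableResultTask<T> pairing a result with resultSerializer.
class Boxed<T : Any>(val value: T, val serializer: KSerializer<T>)

fun encode(box: Boxed<*>): String {
    // Through the star projection, `box.value` and `box.serializer`
    // no longer share a type parameter. A generic local function
    // captures the projection as T and relates them again.
    fun <T : Any> encodeTyped(b: Boxed<T>): String =
        Json.encodeToString(b.serializer, b.value)
    return encodeTyped(box)
}

fun main() {
    println(encode(Boxed(42, Int.serializer()))) // prints: 42
}
```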

/**
 * Address this workspace is available on.
 */
public val address: Address = Address(address)

/**
 * Proxies task calls to the right endpoints according to the [TaskRegistry].
 */
private inner class ProxyWorkspace(private val taskRegistry: TaskRegistry) : Workspace by this {
    override val tasks: Map<Name, Task<*>>
        get() = object : AbstractMap<Name, Task<*>>() {
            override val entries: Set<Map.Entry<Name, Task<*>>>
                get() = this@ServiceWorkspace.tasks.entries

            override fun get(key: Name): Task<*>? = remoteTask(key) ?: this@ServiceWorkspace.tasks[key]
        }

    /**
     * Calls the default implementation so that the virtual [tasks] defined above are used
     * instead of the ones stored in [ServiceWorkspace].
     */
    override suspend fun produce(taskName: Name, taskMeta: Meta): TaskResult<*> =
        super.produce(taskName, taskMeta)

    private fun remoteTask(name: Name): RemoteTask<*>? {
        val endpoint = taskRegistry.tasks[name] ?: return null
        val local = this@ServiceWorkspace.tasks[name] ?: error("No task with name $name locally on $port")
        require(local is SerializableResultTask) { "Task $name result is not serializable" }
        return RemoteTask(endpoint, local.resultType, local.resultSerializer, local.descriptor, taskRegistry)
    }
}
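A note on why `produce` is overridden at all: with `Workspace by this`, every member not overridden is routed to the delegate, so the delegate's `produce` would consult the delegate's `tasks` and never see the virtual map above. Overriding `produce` to call `super.produce` switches back to the interface's default body, which dispatches through the overridden `tasks`. A minimal standalone model of the same mechanics (all names made up):

```kotlin
interface Registry {
    val entries: Map<String, String>
    fun describe(key: String): String = entries[key] ?: "unknown"
}

class LocalRegistry : Registry {
    override val entries = mapOf("a" to "local-a", "b" to "local-b")
}

class ProxyRegistry(
    private val base: Registry,
    private val overrides: Map<String, String>,
) : Registry by base {
    // Virtual view: remote overrides win, then fall back to the delegate.
    override val entries: Map<String, String>
        get() = base.entries + overrides

    // Without this override, describe() would be delegated to `base` and
    // read base.entries; super.describe uses the interface default, which
    // goes through the overridden entries above.
    override fun describe(key: String): String = super.describe(key)
}

fun main() {
    val proxy = ProxyRegistry(LocalRegistry(), mapOf("b" to "remote-b"))
    println(proxy.describe("b")) // prints: remote-b
    println(proxy.describe("a")) // prints: local-a
}
```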

/**
 * Port this workspace is available on.
 */
public val port: Port
    get() = service.port
public val port: Int
    get() = _port ?: service.port.p

/**
 * Start [ServiceWorkspace] as a service.
@ -75,7 +113,7 @@ public class ServiceWorkspace(
override fun close(): Unit = service.shutdown()

public companion object {
    internal val serviceId = "d41b95b1-828b-4444-8ff0-6f9c92a79246".toSid()
    internal val execute by serviceId.def(NameCoder, DataSetCoder)
    internal val serviceId = ServiceId("d41b95b1-828b-4444-8ff0-6f9c92a79246")
    internal val execute by serviceId.def(NameCoder, MetaCoder, TaskRegistryCoder, j<DataSetPrototype>())
}
}
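For readers of this diff: judging from the `execute of { name, meta, taskRegistry -> ... }` handler above, the coder list passed to `def` appears to line up positionally with the handler's parameters plus its return type. This is an inference about the lambdarpc `def` contract, not something stated in the diff:

```kotlin
// Positional reading of the coder list (an assumption, see note above):
//   NameCoder              <- name: Name
//   MetaCoder              <- meta: Meta
//   TaskRegistryCoder      <- taskRegistry: TaskRegistry
//   j<DataSetPrototype>()  <- return value: DataSetPrototype
internal val execute by serviceId.def(NameCoder, MetaCoder, TaskRegistryCoder, j<DataSetPrototype>())
```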
|
||||
|
||||
DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now. DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now.
DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now. DataTree builder will be non-suspending in the next version, but it should be possible to create it without suspending now.
Need to think about better naming Need to think about better naming
Need to think about better naming Need to think about better naming
Why not suspended? Why not suspended?
Why not suspended? Why not suspended?
I thought about I thought about `WorkspaceNode` or `WorkerWorkspace`. There is also `DistributedWorkspace` but it is not truly distributed itself.
Also `ServiceWorkpace` is good enought to my opinion. "Service" here means that this workspace should be run on some endpoint to be available.
I thought about I thought about `WorkspaceNode` or `WorkerWorkspace`. There is also `DistributedWorkspace` but it is not truly distributed itself.
Also `ServiceWorkpace` is good enought to my opinion. "Service" here means that this workspace should be run on some endpoint to be available.
Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so. Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so.
Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so. Because it is blocking in the google gRPC implementation. It can be made suspend and block separate thread but I do not see any reasons to do so.
|
@ -0,0 +1,18 @@
package space.kscience.dataforge.distributed

import io.lambdarpc.utils.Endpoint
import kotlinx.serialization.Serializable
import space.kscience.dataforge.names.Name
import space.kscience.dataforge.workspace.Task

@Serializable
internal class TaskRegistry(val tasks: Map<Name, Endpoint>)

internal fun TaskRegistry(tasks: Map<Name, Task<*>>): TaskRegistry {
    val remotes = tasks.filterValues { it is RemoteTask<*> }
    val endpoints = remotes.mapValues { (_, task) ->
        require(task is RemoteTask)
        task.endpoint
    }
    return TaskRegistry(endpoints)
}
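Minor: the `filterValues` plus `require` pair re-checks the cast. The same registry can be built in one pass, for example:

```kotlin
internal fun TaskRegistry(tasks: Map<Name, Task<*>>): TaskRegistry {
    // Single pass: keep only remote tasks and record their endpoints.
    val endpoints = tasks.entries.mapNotNull { (name, task) ->
        (task as? RemoteTask<*>)?.let { name to it.endpoint }
    }.toMap()
    return TaskRegistry(endpoints)
}
```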
@ -1,23 +0,0 @@
package space.kscience.dataforge.distributed.serialization

import io.lambdarpc.coding.Coder
import io.lambdarpc.coding.CodingContext
import io.lambdarpc.transport.grpc.Entity
import io.lambdarpc.transport.serialization.Entity
import io.lambdarpc.transport.serialization.RawData
import kotlinx.coroutines.runBlocking
import java.nio.charset.Charset

internal object DataSetCoder : Coder<SerializableDataSet<Any>> {
    override fun decode(entity: Entity, context: CodingContext): SerializableDataSet<Any> {
        val string = entity.data.toString(Charset.defaultCharset())
        val prototype = DataSetPrototype.fromJson(string)
        return prototype.toDataSet()
    }

    override fun encode(value: SerializableDataSet<Any>, context: CodingContext): Entity {
        val prototype = runBlocking { DataSetPrototype.of(value) } // TODO update LambdaRPC and remove blocking
        val string = prototype.toJson()
        return Entity(RawData.copyFrom(string, Charset.defaultCharset()))
    }
}
@ -0,0 +1,23 @@

Is it possible to replace the manual `Coder`s with KSerializer or IOFormat? Or make converters?
Yes, it already exists, I just forgot about it.
package space.kscience.dataforge.distributed.serialization

import io.lambdarpc.coding.Coder
import io.lambdarpc.coding.CodingContext
import io.lambdarpc.transport.grpc.Entity
import io.lambdarpc.transport.serialization.Entity
import io.lambdarpc.transport.serialization.RawData
import kotlinx.serialization.json.Json
import space.kscience.dataforge.meta.Meta
import space.kscience.dataforge.meta.MetaSerializer
import java.nio.charset.Charset

internal object MetaCoder : Coder<Meta> {
    override suspend fun decode(entity: Entity, context: CodingContext): Meta {
        val string = entity.data.toString(Charset.defaultCharset())
        return Json.decodeFromString(MetaSerializer, string)
    }

    override suspend fun encode(value: Meta, context: CodingContext): Entity {
        val string = Json.encodeToString(MetaSerializer, value)
        return Entity(RawData.copyFrom(string, Charset.defaultCharset()))
    }
}
|
||||
Is it possible to replace manual 'Coder's with KSerializer or IOFormat? Or make converters? Is it possible to replace manual 'Coder's with KSerializer or IOFormat? Or make converters?
Yes, it already exist, I just forgot about it. Yes, it already exist, I just forgot about it.
|
@ -5,16 +5,18 @@ import io.lambdarpc.coding.CodingContext
import io.lambdarpc.transport.grpc.Entity
import io.lambdarpc.transport.serialization.Entity
import io.lambdarpc.transport.serialization.RawData
import kotlinx.serialization.json.Json
import space.kscience.dataforge.names.Name
import java.nio.charset.Charset

internal object NameCoder : Coder<Name> {
    override fun decode(entity: Entity, context: CodingContext): Name {
        require(entity.hasData()) { "Entity should contain data" }
    override suspend fun decode(entity: Entity, context: CodingContext): Name {
        val string = entity.data.toString(Charset.defaultCharset())
        return Name.parse(string)
        return Json.decodeFromString(Name.serializer(), string)
    }

    override fun encode(value: Name, context: CodingContext): Entity =
        Entity(RawData.copyFrom(value.toString(), Charset.defaultCharset()))
    override suspend fun encode(value: Name, context: CodingContext): Entity {
        val string = Json.encodeToString(Name.serializer(), value)
        return Entity(RawData.copyFrom(string, Charset.defaultCharset()))
    }
}
@ -1,16 +0,0 @@
package space.kscience.dataforge.distributed.serialization

import space.kscience.dataforge.data.DataSet
import kotlin.reflect.KType

/**
 * Represents [DataSet] that should be initialized before usage.
 */
internal interface SerializableDataSet<T : Any> : DataSet<T> {
    fun finishDecoding(type: KType)
}

internal class SerializableDataSetAdapter<T : Any>(dataSet: DataSet<T>) :
    SerializableDataSet<T>, DataSet<T> by dataSet {
    override fun finishDecoding(type: KType) = Unit
}
@ -0,0 +1,22 @@
package space.kscience.dataforge.distributed.serialization

import io.lambdarpc.coding.Coder
import io.lambdarpc.coding.CodingContext
import io.lambdarpc.transport.grpc.Entity
import io.lambdarpc.transport.serialization.Entity
import io.lambdarpc.transport.serialization.RawData
import kotlinx.serialization.json.Json
import space.kscience.dataforge.distributed.TaskRegistry
import java.nio.charset.Charset

internal object TaskRegistryCoder : Coder<TaskRegistry> {
    override suspend fun decode(entity: Entity, context: CodingContext): TaskRegistry {
        val string = entity.data.toString(Charset.defaultCharset())
        return Json.decodeFromString(TaskRegistry.serializer(), string)
    }

    override suspend fun encode(value: TaskRegistry, context: CodingContext): Entity {
        val string = Json.encodeToString(TaskRegistry.serializer(), value)
        return Entity(RawData.copyFrom(string, Charset.defaultCharset()))
    }
}
@ -21,11 +21,11 @@ internal data class DataPrototype(

There is an existing class for that.

    val meta: String,
    val data: String,
) {
    fun toData(type: KType): Data<Any> =
    fun <T : Any> toData(type: KType, serializer: KSerializer<T>): Data<T> =
        SimpleData(
            type = type,
            meta = Json.decodeFromString(MetaSerializer, meta),
            data = Json.decodeFromString(kotlinx.serialization.serializer(type), data)!!
            data = Json.decodeFromString(serializer, data)
        )

    companion object {
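Besides reusing an existing class, passing the serializer explicitly removes the unchecked reflective lookup: the top-level `kotlinx.serialization.serializer(type: KType)` returns a `KSerializer<Any?>`, which is why the old line needed `!!`, while a `KSerializer<T>` parameter keeps decoding typed. A standalone illustration (function names are made up for the example):

```kotlin
import kotlinx.serialization.KSerializer
import kotlinx.serialization.json.Json
import kotlinx.serialization.serializer
import kotlin.reflect.KType
import kotlin.reflect.typeOf

// Reflective lookup: serializer(type) is typed KSerializer<Any?>, so the decoded
// value is nullable and needs !! even when the payload cannot be null.
fun decodeReflectively(type: KType, json: String): Any =
    Json.decodeFromString(serializer(type), json)!!

// Explicit serializer: no reflection at decode time and no unchecked null.
fun <T : Any> decodeTyped(serializer: KSerializer<T>, json: String): T =
    Json.decodeFromString(serializer, json)

fun main() {
    println(decodeReflectively(typeOf<Int>(), "41")) // 41
    println(decodeTyped(serializer<Int>(), "42"))    // 42
}
```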
@ -7,9 +7,8 @@ import kotlinx.coroutines.flow.Flow

Again, you do not need a separate structure. All you need is a generic DataSet with a serializer.
When `ServiceWorkspace.execute` returns the result to the client, it does not yet know the serializer for `T`. Then `RemoteTask` uses the serializer to deserialize the DataSet from the prototype.

import kotlinx.coroutines.flow.asFlow
import kotlinx.coroutines.flow.map
import kotlinx.coroutines.flow.toList
import kotlinx.serialization.KSerializer
import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json
import kotlinx.serialization.serializer
import space.kscience.dataforge.data.Data
import space.kscience.dataforge.data.DataSet
import space.kscience.dataforge.data.NamedData

@ -23,51 +22,40 @@ import kotlin.reflect.KType
 */

Do we need that?

@Serializable
internal data class DataSetPrototype(val data: Map<String, DataPrototype>) {
    fun toDataSet(): SerializableDataSet<Any> =
        SerializableDataSetImpl(this)

    fun toJson(): String = Json.encodeToString(serializer(), this)
    fun <T : Any> toDataSet(type: KType, serializer: KSerializer<T>): DataSet<T> {
        val data = data
            .mapKeys { (name, _) -> Name.of(name) }
            .mapValues { (_, dataPrototype) -> dataPrototype.toData(type, serializer) }
        return SerializableDataSetImpl(type, data)
    }

    companion object {
        suspend fun <T : Any> of(dataSet: DataSet<T>): DataSetPrototype = coroutineScope {
            val serializer = serializer(dataSet.dataType)
        suspend fun <T : Any> of(dataSet: DataSet<T>, serializer: KSerializer<T>): DataSetPrototype = coroutineScope {
            val flow = mutableListOf<Pair<String, Deferred<DataPrototype>>>()
            dataSet.flowData().map { (name, data) ->
                name.toString() to async { DataPrototype.of(data, serializer) }
            }.toList(flow)
            DataSetPrototype(flow.associate { (name, deferred) -> name to deferred.await() })
        }

        fun fromJson(string: String): DataSetPrototype = Json.decodeFromString(serializer(), string)
    }
}

/**
 * Trivial [SerializableDataSet] implementation.
 * Trivial [DataSet] implementation.
 */
private class SerializableDataSetImpl(private val prototype: DataSetPrototype) : SerializableDataSet<Any> {
private class SerializableDataSetImpl<T : Any>(
    override val dataType: KType,
    private val data: Map<Name, Data<T>>,
) : DataSet<T> {

    private lateinit var type: KType
    private lateinit var data: Map<Name, Data<Any>>

    override fun finishDecoding(type: KType) {
        this.type = type
        this.data = prototype.data
            .mapKeys { (name, _) -> Name.of(name) }
            .mapValues { (_, dataPrototype) -> dataPrototype.toData(type) }
    }

    override val dataType: KType
        get() = type

    override fun flowData(): Flow<NamedData<Any>> =
    override fun flowData(): Flow<NamedData<T>> =
        data.map { (name, data) -> SimpleNamedData(name, data) }.asFlow()

    override suspend fun getData(name: Name): Data<Any>? = data[name]
    override suspend fun getData(name: Name): Data<T>? = data[name]

    /**
     * Trivial named data implementation.
     */
    private class SimpleNamedData(override val name: Name, override val data: Data<Any>) :
        NamedData<Any>, Data<Any> by data
    private class SimpleNamedData<T : Any>(override val name: Name, override val data: Data<T>) :
        NamedData<T>, Data<T> by data
}
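To make the serializer hand-off concrete, a small usage sketch (the function names are illustrative; the API is the one from the diff above): the worker serializes the result without knowing how the client will type it, and the client supplies the serializer once it knows `T`:

```kotlin
import kotlinx.serialization.serializer
import space.kscience.dataforge.data.DataSet
import kotlin.reflect.typeOf

// Worker side: `ServiceWorkspace.execute` has a concrete DataSet<Int>, so it can
// look up the serializer and ship the result as a JSON prototype.
suspend fun workerSide(result: DataSet<Int>): String =
    DataSetPrototype.of(result, serializer<Int>()).toJson()

// Client side: `RemoteTask` statically knows the expected result type, so it can
// rebuild a typed DataSet from the received prototype.
fun clientSide(json: String): DataSet<Int> =
    DataSetPrototype.fromJson(json).toDataSet(typeOf<Int>(), serializer<Int>())
```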
@ -0,0 +1,61 @@
package space.kscience.dataforge.distributed

import kotlinx.serialization.serializer
import space.kscience.dataforge.context.Context
import space.kscience.dataforge.context.PluginFactory
import space.kscience.dataforge.context.PluginTag
import space.kscience.dataforge.context.info
import space.kscience.dataforge.context.logger
import space.kscience.dataforge.data.map
import space.kscience.dataforge.data.select
import space.kscience.dataforge.meta.Meta
import space.kscience.dataforge.names.Name
import space.kscience.dataforge.names.asName
import space.kscience.dataforge.workspace.WorkspacePlugin
import space.kscience.dataforge.workspace.fromTask
import space.kscience.dataforge.workspace.task
import kotlin.reflect.KClass

internal class MyPlugin1 : WorkspacePlugin() {
    override val tag: PluginTag
        get() = Factory.tag

    val task by task<Int>(serializer()) {
        workspace.logger.info { "In ${tag.name}.task" }
        val myInt = workspace.data.select<Int>()
        val res = myInt.getData("int".asName())!!
        emit("result".asName(), res.map { it + 1 })
    }

    companion object Factory : PluginFactory<MyPlugin1> {
        override fun invoke(meta: Meta, context: Context): MyPlugin1 = MyPlugin1()

        override val tag: PluginTag
            get() = PluginTag("Plg1")

        override val type: KClass<out MyPlugin1>
            get() = MyPlugin1::class
    }
}

internal class MyPlugin2 : WorkspacePlugin() {
    override val tag: PluginTag
        get() = Factory.tag

    val task by task<Int>(serializer()) {
        workspace.logger.info { "In ${tag.name}.task" }
        val dataSet = fromTask<Int>(Name.of(MyPlugin1.tag.name, "task"))
        val data = dataSet.getData("result".asName())!!
        emit("result".asName(), data.map { it + 1 })
    }

    companion object Factory : PluginFactory<MyPlugin2> {
        override fun invoke(meta: Meta, context: Context): MyPlugin2 = MyPlugin2()

        override val tag: PluginTag
            get() = PluginTag("Plg2")

        override val type: KClass<out MyPlugin2>
            get() = MyPlugin2::class
    }
}
@ -0,0 +1,89 @@

Switch to runTest.
This function is not available in dataforge, shall I add a dependency to the project?

package space.kscience.dataforge.distributed

import kotlinx.coroutines.runBlocking
import org.junit.jupiter.api.AfterAll
import org.junit.jupiter.api.BeforeAll
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.TestInstance
import space.kscience.dataforge.context.Global
import space.kscience.dataforge.data.DataTree
import space.kscience.dataforge.data.await
import space.kscience.dataforge.data.getData
import space.kscience.dataforge.data.static
import space.kscience.dataforge.meta.Meta
import space.kscience.dataforge.names.Name
import space.kscience.dataforge.names.asName
import space.kscience.dataforge.workspace.Workspace
import kotlin.test.assertEquals

@TestInstance(TestInstance.Lifecycle.PER_CLASS)
internal class RemoteCallTest {

    private lateinit var worker1: ServiceWorkspace
    private lateinit var worker2: ServiceWorkspace
    private lateinit var workspace: Workspace

    @BeforeAll
    fun before() {
        worker1 = ServiceWorkspace(
            context = Global.buildContext("worker1".asName()) {

This is outdated, use the `Context()` factory function instead.

                plugin(MyPlugin1)
            },
            data = runBlocking {
                DataTree<Any> {
                    static("int", 42)
                }
            },
        )
        worker1.start()

        worker2 = ServiceWorkspace(
            context = Global.buildContext("worker2".asName()) {
                plugin(MyPlugin1)
                plugin(MyPlugin2)
            },
        )
        worker2.start()

        workspace = Workspace {
            context {
                plugin(RemotePlugin(MyPlugin1, "localhost:${worker1.port}"))
                plugin(RemotePlugin(MyPlugin2, "localhost:${worker2.port}"))
            }
        }
    }

    @AfterAll
    fun after() {
        worker1.shutdown()
        worker2.shutdown()
    }

    @Test
    fun `local execution`() = runBlocking {
        assertEquals(42, worker1.data.getData("int")!!.await())
        val res = worker1
            .produce(Name.of(MyPlugin1.tag.name, "task"), Meta.EMPTY)
            .getData("result".asName())!!
            .await()
        assertEquals(43, res)
    }

    @Test
    fun `remote execution`() = runBlocking {
        val remoteRes = workspace
            .produce(Name.of(MyPlugin1.tag.name, "task"), Meta.EMPTY)
            .getData("result".asName())!!
            .await()
        assertEquals(43, remoteRes)
    }

    @Test
    fun `transitive execution`() = runBlocking {
        val remoteRes = workspace
            .produce(Name.of(MyPlugin2.tag.name, "task"), Meta.EMPTY)
            .getData("result".asName())!!
            .await()
        assertEquals(44, remoteRes)
    }
}
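For reference, `runTest` lives in the separate `kotlinx-coroutines-test` artifact rather than in dataforge, so the switch would mean adding a test-scope dependency. A minimal sketch of the suggested change (dependency coordinates are an assumption, version matched to the coroutines version used in this project; the placeholder assertion just keeps the sketch self-contained):

```kotlin
// Assumed Gradle test dependency:
//   testImplementation("org.jetbrains.kotlinx:kotlinx-coroutines-test:1.6.0")

import kotlinx.coroutines.ExperimentalCoroutinesApi
import kotlinx.coroutines.test.runTest
import kotlin.test.Test
import kotlin.test.assertEquals

@OptIn(ExperimentalCoroutinesApi::class) // runTest is still experimental at 1.6.0
internal class RunTestSketch {
    @Test
    fun `remote execution`() = runTest {
        // The test body stays the same as with runBlocking; runTest virtualizes
        // delays and fails the test if coroutines leak past the block.
        assertEquals(43, 42 + 1) // placeholder assertion for a runnable sketch
    }
}
```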
@ -1,112 +0,0 @@
package space.kscience.dataforge.distributed

import io.lambdarpc.utils.Endpoint
import kotlinx.coroutines.runBlocking
import org.junit.jupiter.api.AfterAll
import org.junit.jupiter.api.BeforeAll
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.TestInstance
import space.kscience.dataforge.context.Context
import space.kscience.dataforge.context.Global
import space.kscience.dataforge.context.PluginFactory
import space.kscience.dataforge.context.PluginTag
import space.kscience.dataforge.data.DataTree
import space.kscience.dataforge.data.await
import space.kscience.dataforge.data.getData
import space.kscience.dataforge.data.map
import space.kscience.dataforge.data.select
import space.kscience.dataforge.data.static
import space.kscience.dataforge.meta.Meta
import space.kscience.dataforge.names.Name
import space.kscience.dataforge.names.asName
import space.kscience.dataforge.workspace.Workspace
import space.kscience.dataforge.workspace.WorkspacePlugin
import space.kscience.dataforge.workspace.task
import kotlin.reflect.KClass
import kotlin.reflect.KType
import kotlin.reflect.typeOf
import kotlin.test.assertEquals

private class MyPlugin : WorkspacePlugin() {
    override val tag: PluginTag
        get() = Factory.tag

    val task by task<Int> {
        val myInt = workspace.data.select<Int>()
        val res = myInt.getData("int".asName())!!
        emit("result".asName(), res.map { it + 1 })
    }

    companion object Factory : PluginFactory<MyPlugin> {
        override fun invoke(meta: Meta, context: Context): MyPlugin = MyPlugin()

        override val tag: PluginTag
            get() = PluginTag("Plg")

        override val type: KClass<out MyPlugin>
            get() = MyPlugin::class
    }
}

private class RemoteMyPlugin(endpoint: Endpoint) : ClientWorkspacePlugin(endpoint) {
    override val tag: PluginTag
        get() = MyPlugin.tag

    override val tasks: Map<Name, KType>
        get() = mapOf(
            "task".asName() to typeOf<Int>()
        )
}

@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class ServiceWorkspaceTest {

    private lateinit var worker1: ServiceWorkspace
    private lateinit var workspace: Workspace

    @BeforeAll
    fun before() {
        worker1 = ServiceWorkspace(
            context = Global.buildContext("worker1".asName()) {
                plugin(MyPlugin)
            },
            data = runBlocking {
                DataTree<Any> {
                    static("int", 0)
                }
            },
        )
        worker1.start()

        workspace = Workspace {
            context {
                val endpoint = Endpoint(worker1.address, worker1.port)
                plugin(RemoteMyPlugin(endpoint))
            }
        }
    }

    @AfterAll
    fun after() {
        worker1.shutdown()
    }

    @Test
    fun localExecution() = runBlocking {
        assertEquals(0, worker1.data.getData("int")!!.await())
        val res = worker1
            .produce(Name.of("Plg", "task"), Meta.EMPTY)
            .getData("result".asName())!!
            .await()
        assertEquals(1, res)
    }

    @Test
    fun remoteExecution() = runBlocking {
        val remoteRes = workspace
            .produce(Name.of("Plg", "task"), Meta.EMPTY)
            .getData("result".asName())!!
            .await()
        assertEquals(1, remoteRes)
    }
}
@ -1,6 +1,8 @@
package space.kscience.dataforge.workspace

import kotlinx.coroutines.withContext
import kotlinx.serialization.KSerializer
import kotlinx.serialization.serializer
import space.kscience.dataforge.data.DataSetBuilder
import space.kscience.dataforge.data.DataTree
import space.kscience.dataforge.data.GoalExecutionRestriction
@ -17,11 +19,6 @@ import kotlin.reflect.typeOf
@Type(TYPE)
public interface Task<out T : Any> : Described {

    /**
     * Type of the task result data.
     */
    public val resultType: KType

    /**
     * Compute a [TaskResult] using given meta. In general, the result is lazy and represents both computation model
     * and a handler for actual result
@ -37,6 +34,12 @@ public interface Task<out T : Any> : Described {
    }
}

@Type(TYPE)
public interface SerializableResultTask<T : Any> : Task<T> {
    public val resultType: KType
    public val resultSerializer: KSerializer<T>
}

public class TaskResultBuilder<T : Any>(
    public val workspace: Workspace,
    public val taskName: Name,
@ -60,9 +63,6 @@ public fun <T : Any> Task(
    builder: suspend TaskResultBuilder<T>.() -> Unit,
): Task<T> = object : Task<T> {

    override val resultType: KType
        get() = resultType

    override val descriptor: MetaDescriptor? = descriptor

    override suspend fun execute(
@ -78,9 +78,28 @@ public fun <T : Any> Task(
    }
}

/**
 * [Task] that has [resultSerializer] to be able to cache or send its results
 */
@DFInternal
public class SerializableResultTaskImpl<T : Any>(
    override val resultType: KType,
    override val resultSerializer: KSerializer<T>,
    descriptor: MetaDescriptor? = null,
    builder: suspend TaskResultBuilder<T>.() -> Unit,
) : SerializableResultTask<T>, Task<T> by Task(resultType, descriptor, builder)

@OptIn(DFInternal::class)
@Suppress("FunctionName")
public inline fun <reified T : Any> Task(
    descriptor: MetaDescriptor? = null,
    noinline builder: suspend TaskResultBuilder<T>.() -> Unit,
): Task<T> = Task(typeOf<T>(), descriptor, builder)

@OptIn(DFInternal::class)
@Suppress("FunctionName")
public inline fun <reified T : Any> SerializableResultTask(
    resultSerializer: KSerializer<T> = serializer(),
    descriptor: MetaDescriptor? = null,
    noinline builder: suspend TaskResultBuilder<T>.() -> Unit,
): Task<T> = SerializableResultTaskImpl(typeOf<T>(), resultSerializer, descriptor, builder)
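As a usage note for the new factory: a task created this way carries both its result KType and a KSerializer, which is what lets a remote caller decode the result. A minimal sketch, mirroring the MyPlugin task from the deleted test above; the task body is illustrative, not part of this change.

// Illustrative sketch; needs kotlinx.serialization.builtins.serializer in scope.
val task = SerializableResultTask(Int.serializer()) {
    // Same body as MyPlugin.task, now with a serializable Int result.
    val myInt = workspace.data.select<Int>()
    val res = myInt.getData("int".asName())!!
    emit("result".asName(), res.map { it + 1 })
}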
@ -1,5 +1,6 @@
package space.kscience.dataforge.workspace

import kotlinx.serialization.KSerializer
import space.kscience.dataforge.context.Context
import space.kscience.dataforge.context.ContextBuilder
import space.kscience.dataforge.context.Global
@ -37,25 +38,34 @@ public interface TaskContainer {
public inline fun <reified T : Any> TaskContainer.registerTask(
    name: String,
    resultSerializer: KSerializer<T>? = null,
    noinline descriptorBuilder: MetaDescriptorBuilder.() -> Unit = {},
    noinline builder: suspend TaskResultBuilder<T>.() -> Unit,
): Unit = registerTask(Name.parse(name), Task(MetaDescriptor(descriptorBuilder), builder))
) {
    val descriptor = MetaDescriptor(descriptorBuilder)
    val task = if (resultSerializer == null) Task(descriptor, builder) else
        SerializableResultTask(resultSerializer, descriptor, builder)
    registerTask(Name.parse(name), task)
}

public inline fun <reified T : Any> TaskContainer.task(
    descriptor: MetaDescriptor,
    resultSerializer: KSerializer<T>? = null,
    noinline builder: suspend TaskResultBuilder<T>.() -> Unit,
): PropertyDelegateProvider<Any?, ReadOnlyProperty<Any?, TaskReference<T>>> = PropertyDelegateProvider { _, property ->
    val taskName = Name.parse(property.name)
    val task = Task(descriptor, builder)
    val task = if (resultSerializer == null) Task(descriptor, builder) else
        SerializableResultTask(resultSerializer, descriptor, builder)
    registerTask(taskName, task)
    ReadOnlyProperty { _, _ -> TaskReference(taskName, task) }
}

public inline fun <reified T : Any> TaskContainer.task(
    resultSerializer: KSerializer<T>? = null,
    noinline descriptorBuilder: MetaDescriptorBuilder.() -> Unit = {},
    noinline builder: suspend TaskResultBuilder<T>.() -> Unit,
): PropertyDelegateProvider<Any?, ReadOnlyProperty<Any?, TaskReference<T>>> =
    task(MetaDescriptor(descriptorBuilder), builder)
    task(MetaDescriptor(descriptorBuilder), resultSerializer, builder)

public class WorkspaceBuilder(private val parentContext: Context = Global) : TaskContainer {
    private var context: Context? = null
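With the serializer threaded through these builders, a plugin can opt a task into serializable results at the declaration site. A sketch of what the MyPlugin declaration from the test could become; this reuses only calls shown in this diff, plus the kotlinx.serialization.builtins.serializer import, and is a sketch rather than part of the change.

// Sketch: the same task as in MyPlugin, now declared with a result
// serializer so a ServiceWorkspace can ship the Int result over the wire.
val task by task<Int>(resultSerializer = Int.serializer()) {
    val myInt = workspace.data.select<Int>()
    val res = myInt.getData("int".asName())!!
    emit("result".asName(), res.map { it + 1 })
}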