Model Runtimes
A model defines a `modelRuntimeRef` property, which is set to `kodexa/base-model-runtime`. This means that the Model Assistant will use the base model runtime to run the model. In fact, it will look up the model runtime with the ref `kodexa/base-model-runtime`, examine that model runtime to determine which action it uses for inference, and build a pipeline that includes that action. The platform will then schedule that pipeline, and the model runtime action will be called.
When it runs, the model runtime action will load the implementation package for the `Model` and then look for a function in that package called `infer`.
The model runtime will pass the document being processed to the model, and the model will return a document. The model runtime will then pass that document back to the platform for further processing.
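As a rough sketch of that contract (the import is the usual Kodexa Python SDK import; the body is illustrative rather than taken from this section), an `infer` function receives a document and returns one:

```python
from kodexa import Document  # assumed import from the Kodexa Python SDK


def infer(document: Document) -> Document:
    # The model runtime calls this function with the document being processed.
    # Apply the model's predictions to the document (for example, by tagging
    # content nodes), then return the document for further processing.
    return document
```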
Additional parameters can be passed to your functions by setting the `modelRuntimeParameters` property in the model.yml file. The model runtime also makes the following standard parameters available to your train and infer functions:
| Parameter Name | Train/Infer | Description |
|---|---|---|
| model_store | Both | An instance of the ModelStoreEndpoint for the model you are using |
| model_data | Both | A string representing the path where, during training, you can store data and, during inference, you can pick it up |
| pipeline_context | Both | The PipelineContext for the processing pipeline |
| training_id | Both | The ID of the ModelTraining that is in use |
| additional_training_document | Train | A Document representing the document being tested |
| training_options | Train | A dictionary of the training options that have been set for the model |
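To illustrate how these land in your code, here is a hedged sketch of a `train` function that declares three of the standard parameters above simply by naming them in its signature; the `epochs` option and the `model.pickle` filename are hypothetical choices, not part of the platform:

```python
import os
import pickle


def train(model_data, training_options, model_store):
    # model_data is the path where trained artifacts should be written so that
    # inference can pick them up later; training_options is a dictionary of the
    # options that were set for the model.
    num_epochs = training_options.get('epochs', 10)  # hypothetical option

    # Placeholder for real training logic.
    trained_model = {'epochs': num_epochs}

    # Persist the trained artifact under the model_data path.
    os.makedirs(model_data, exist_ok=True)
    with open(os.path.join(model_data, 'model.pickle'), 'wb') as fh:
        pickle.dump(trained_model, fh)
```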
For example, if you add a parameter called `my_option`, then you will get a parameter called `my_option` passed to your inference function.
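A minimal sketch of what that looks like, assuming the parameter was named `my_option` in `modelRuntimeParameters` (the print statement is only a placeholder for real logic):

```python
from kodexa import Document  # assumed import from the Kodexa Python SDK


def infer(document: Document, my_option=None):
    # my_option receives the value defined under modelRuntimeParameters in
    # model.yml; use it to adjust how the model is applied.
    print(f"Running inference with my_option={my_option}")
    return document
```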