How do Modules interact with the Module Runtime?
When you deploy a module into Kodexa, you include the module runtime that you want to use. Today, all Kodexa module runtimes have the same interface, but this may change in the future. There are two primary forms of interaction between the module runtime and the module: the first is inference, the second is training. How the module runtime calls your module is based on how you have declared your module in the module.yml file.
Inference
The most common starting point when working with a module is learning how inference works. Let's take a simple example of a module.yml, sketched below, where moduleRuntimeRef is set to kodexa/base-module-runtime. This means that the Robotic Assistant will use the base module runtime to run the module. In fact, it will look up the module runtime with the ref kodexa/base-module-runtime. It will then look at the module runtime to determine which action it uses for inference and build a pipeline that includes that action. The platform will then schedule that pipeline, and the module runtime action will be called.
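A minimal module.yml might look something like the sketch below. Only the moduleRuntimeRef property and its value come from the text above; the other fields are assumptions about what a typical manifest could contain.

```yaml
# Illustrative sketch only -- fields other than moduleRuntimeRef are assumptions
slug: my-example-module                        # hypothetical identifier for the module
name: My Example Module                        # hypothetical display name
moduleRuntimeRef: kodexa/base-module-runtime   # the module runtime that will run this module
```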

The base module runtime will load your module's package and then look for a function in that package called infer.
The module runtime will pass the document being processed to the module, and the module will return a document. The module runtime will then pass that document back to the platform for further processing.
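As a rough illustration, an infer entry point for the base module runtime could look like the sketch below. The function name infer and the document-in, document-out contract come from the text above; the package layout and the body of the function are assumptions.

```python
from kodexa import Document


def infer(document: Document) -> Document:
    # The module runtime hands us the document being processed.
    # Do some work on it -- here we simply tag the root node as an example.
    root = document.get_root()
    if root is not None:
        root.tag("processed-by-example-module")

    # Return the document so the runtime can pass it back to the platform.
    return document
```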
Inference with Options
In the previous example, we saw how the module runtime passes the document to the module. In this example, we will see how the module runtime passes options to the module. First, let's add some inference options to our module.yml file, as sketched below.
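The exact schema for declaring options in a module.yml is not shown in this guide, so the following is only an illustrative sketch; the inferenceOptions key and the fields under it are assumptions.

```yaml
# Illustrative sketch only -- the inferenceOptions key and its structure are assumptions
moduleRuntimeRef: kodexa/base-module-runtime
inferenceOptions:
  - name: my_option        # the option name becomes a parameter on the infer function
    type: string
    default: "some-value"
```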
Overriding Entry Points
If you want to override the entry point for your module, you can do so by specifying the moduleRuntimeParameters property in the module.yml file.
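As a rough sketch of what that override might look like: only the moduleRuntimeParameters property name comes from the text above, and the keys under it (such as entry_point) are assumptions.

```yaml
# Illustrative sketch only -- the keys under moduleRuntimeParameters are assumptions
moduleRuntimeRef: kodexa/base-module-runtime
moduleRuntimeParameters:
  entry_point: my_package.custom_infer   # hypothetical: point the runtime at a different function
```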
Magic Parameters for Training and Inference
When you are training or inferring with a module, the module runtime can pass some magic parameters to your module. If you include one of these parameters in either your train or infer function, it will be passed in by the module runtime.

| Parameter Name | Train/Infer | Description |
|---|---|---|
| module_store | Both | An instance of the ModuleStoreEndpoint for the module you are using |
| module_data | Both | A string representing a path where you can store data during training and pick it up again during inference |
| pipeline_context | Both | The PipelineContext for the processing pipeline |
| training_id | Both | The ID of the ModuleTraining that is in use |
| additional_training_document | Train | A Document representing the document being tested |
| training_options | Train | A dictionary of the training options that have been set for the module |
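As a hedged sketch, a train function that wants some of these magic parameters simply declares them by name and the module runtime supplies them. Whether train also receives other arguments is not covered here, so the signature below only uses parameters from the table, and the body is purely illustrative.

```python
import os

from kodexa import PipelineContext


def train(module_data: str,
          pipeline_context: PipelineContext,
          training_options: dict,
          training_id: str):
    # module_data is the path where training output can be stored so that
    # inference can pick it up later (see the table above).
    os.makedirs(module_data, exist_ok=True)
    with open(os.path.join(module_data, "training-info.txt"), "w") as handle:
        handle.write(f"training {training_id} ran with options: {training_options}")
```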
Inference Options
While in training we pass all the training options as a single dictionary, in inference we pass the options as individual parameters. This makes the options easy to use in your inference code: if you have an inference option called my_option, then a parameter called my_option will be passed to your inference function.
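Continuing the earlier sketch, an inference option named my_option arrives as an ordinary keyword argument on infer. The default value and the body below are illustrative assumptions.

```python
from kodexa import Document


def infer(document: Document, my_option: str = "some-value") -> Document:
    # my_option is passed as its own parameter rather than inside a dictionary,
    # so it can be used directly in the inference code.
    root = document.get_root()
    if root is not None:
        root.add_feature("example", "my_option", my_option)
    return document
```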
