What is an Execution?
An execution represents a single run of an assistant’s pipeline. It tracks the full lifecycle from the initial trigger event through each processing step to a final success or failure outcome. Every execution is associated with an assistant, scoped to an organization, and optionally linked to a specific document family. You can view executions for an assistant in the Kodexa UI, or query them via the API (see Monitoring Executions below).

How Executions are Created
Executions are created automatically when domain events match an assistant’s connections:

- An event occurs — for example, new content is uploaded to a document store, or a scheduled job triggers.
- The platform evaluates assistant connections — each active assistant has connections that define which events it listens to (e.g., a specific store, channel, or workspace).
- Subscription filtering — if the connection has a subscription expression, the event is evaluated against it. This can include checks like file extension, document labels, or metadata values.
- An execution is created in `PENDING` status with the assistant’s pipeline configuration and event context attached.
Subscription Expressions
Assistant connections can filter events using expressions. Simple subscriptions are comma-separated event types; expression-based subscriptions can use functions such as `hasLabel()`, `hasMixins()`, `hasExtensions()`, and `matchesPath()`.
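As a purely illustrative sketch (the document names these functions but does not show their argument or operator syntax, so the details below are assumptions), an expression-based subscription might combine them like this:

```
hasExtensions('pdf') and hasLabel('invoice')
```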
Pipeline Configuration
Each assistant has a pipeline defined in its options. The pipeline is an ordered list of steps, where each step references a module and can include options and conditionals:

| Field | Description |
|---|---|
| `ref` | Module reference in the format `module://orgSlug/moduleSlug` or `module://orgSlug/moduleSlug:version` |
| `name` | A human-readable name for the step |
| `stepType` | The type of step — typically `MODEL` for module execution |
| `options` | Key-value options passed to the module at runtime |
| `conditional` | An optional expression that determines whether the step should execute (see Data Flow Step Conditionals) |
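A hypothetical pipeline definition using these fields (the slugs, option names, and conditional expression syntax are invented for illustration):

```json
{
  "pipeline": {
    "steps": [
      {
        "ref": "module://acme/pdf-parser:1.0.0",
        "name": "Parse PDF",
        "stepType": "MODEL",
        "options": { "ocr": true }
      },
      {
        "ref": "module://acme/invoice-extractor",
        "name": "Extract invoice fields",
        "stepType": "MODEL",
        "conditional": "hasLabel('invoice')"
      }
    ]
  }
}
```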
Execution Lifecycle
An execution moves through the following statuses:

PENDING
The execution has been created and is waiting for the scheduler to pick it up. Executions are prioritized — higher-priority executions are scheduled first, and within the same priority level, older executions are processed first.
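The scheduling order can be pictured as a sort key: descending priority, then ascending creation time. This is a minimal illustration, not the scheduler's actual implementation:

```python
# Pending executions are ordered by priority (higher first),
# then by creation time (older first) within the same priority.
pending = [
    {"id": "e1", "priority": 1, "createdAt": 100},
    {"id": "e2", "priority": 5, "createdAt": 300},
    {"id": "e3", "priority": 5, "createdAt": 200},
]

queue = sorted(pending, key=lambda e: (-e["priority"], e["createdAt"]))
print([e["id"] for e in queue])  # → ['e3', 'e2', 'e1']
```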
RUNNING

The scheduler has planned the execution by creating slices — one per pipeline step. Each slice is dispatched to a module runtime (a Lambda function) for processing:

- The scheduler reads the pipeline steps and creates an execution slice for each step.
- Each slice is enqueued to an SQS queue keyed by the module runtime.
- The dispatcher polls the queue, reserves concurrency for the runtime, and invokes the Lambda function with the slice payload.
- The Lambda container downloads the module, sets up the environment, and calls the module’s entry point function.
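The planning and enqueueing steps above can be sketched as one slice per pipeline step, routed to a queue keyed by the module runtime. The data shapes and names here are illustrative:

```python
from collections import defaultdict

# Hypothetical pipeline steps, each resolved to a module runtime.
steps = [
    {"ref": "module://acme/pdf-parser", "runtime": "kodexa/base-module-runtime"},
    {"ref": "module://acme/extractor", "runtime": "kodexa/base-module-runtime"},
]

# One execution slice per pipeline step.
slices = [{"executionId": "e1", "stepIndex": i, "step": s}
          for i, s in enumerate(steps)]

# Enqueue each slice to an SQS-like queue keyed by the module runtime;
# the dispatcher would then poll these queues and invoke the Lambda.
queues: dict[str, list] = defaultdict(list)
for sl in slices:
    queues[sl["step"]["runtime"]].append(sl)
```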
SUCCEEDED
All slices completed successfully. The execution’s `endDate` is set and the status moves to `SUCCEEDED`.
FAILED
One or more slices failed or timed out. If a slice’s lease expires before completion, it is marked as `TIMED_OUT`. If the Lambda invocation itself fails, the slice is marked as `FAILED`. Either condition causes the overall execution to be marked `FAILED`.
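One way to picture how slice outcomes roll up into an execution status (a sketch of the rule described above, not the platform's code):

```python
def execution_status(slice_statuses: list[str]) -> str:
    # A slice whose lease expired is TIMED_OUT; a failed Lambda invocation
    # marks it FAILED. Either condition fails the whole execution.
    if any(s in ("FAILED", "TIMED_OUT") for s in slice_statuses):
        return "FAILED"
    if all(s == "SUCCEEDED" for s in slice_statuses):
        return "SUCCEEDED"
    return "RUNNING"  # some slices are still in flight
```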
CANCELLED
The execution was explicitly cancelled via the API before it completed.

Execution Context
Every execution carries a context — a JSON object that is passed to each module in the pipeline. The context is built from the triggering event and the assistant’s configuration:

| Key | Description |
|---|---|
| `eventType` | The type of event that triggered the execution (e.g., `CONTENT_CREATED`) |
| `documentFamilyId` | The ID of the document family being processed, if applicable |
| `contentObjectId` | The ID of the content object that triggered the event |
| `storeId` | The ID of the document store |
| `channelId` | The ID of the channel, for channel-triggered events |
| `taskId` | The ID of the task, for task-triggered events |
| `taxonomyRefs` | References to taxonomies configured on the assistant |
| `completeLabel` | A label to apply to the document family when processing completes |
Modules receive this context through the `pipeline_context` parameter. See Magic Parameter Injection for details.
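An illustrative context payload using the keys above (all IDs and values are hypothetical):

```json
{
  "eventType": "CONTENT_CREATED",
  "documentFamilyId": "df-123",
  "contentObjectId": "co-456",
  "storeId": "store-789",
  "taxonomyRefs": ["acme/invoice-taxonomy:1.0.0"],
  "completeLabel": "processed"
}
```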
MODEL Steps
When a pipeline step has `stepType: MODEL`, the platform:
- Resolves the module runtime — looks up the module runtime referenced by the module (e.g., `kodexa/base-module-runtime`) to determine which Lambda function to invoke.
- Downloads the module — the Lambda container downloads the module’s code and any module sidecars to a local directory.
- Calls the entry point — by default, the runtime looks for a package called `module` with a function called `infer`. The function receives the document and any configured options.
- Returns the result — the module returns the processed document, which is passed forward in the pipeline.
The slice payload includes several key fields:

- `model_store` identifies which module to download and run.
- `model_options` contains the inference options configured on the step.
- `assistant_id` links the execution back to the assistant for context.
- `runtime_parameters` can override module runtime behavior (e.g., custom entry points).
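Since the runtime looks for a package called `module` with an `infer` function that receives the document and any configured options, a minimal module skeleton might look like this (the exact signature and option names are assumptions):

```python
# module/__init__.py — a minimal sketch of a module entry point.
# The runtime passes the document plus the options configured on the step;
# the keyword names used here are illustrative.

def infer(document, **options):
    # Read an option configured on the pipeline step (name is hypothetical).
    threshold = options.get("threshold", 0.5)
    # ... process the document here ...
    # Return the processed document so it is passed forward in the pipeline.
    return document
```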
Deduplication
The platform prevents duplicate executions. If a `PENDING` execution already exists for the same assistant and document family within the last minute, a new event for the same combination is ignored. This prevents redundant processing when multiple events fire in quick succession.
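The deduplication rule can be sketched as a simple check (the one-minute window comes from the description above; the field names and implementation are illustrative):

```python
import time

def should_create(existing, assistant_id, family_id, now=None):
    """Skip creation if a PENDING execution for the same assistant and
    document family was created within the last minute."""
    now = now if now is not None else time.time()
    for ex in existing:
        if (ex["assistantId"] == assistant_id
                and ex["documentFamilyId"] == family_id
                and ex["status"] == "PENDING"
                and now - ex["createdAt"] < 60):
            return False  # duplicate within the window — ignore the event
    return True
```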
Monitoring Executions
Via the API
List recent executions for an assistant:
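A hypothetical sketch using Python's standard library; the endpoint path, query parameters, and auth header name are assumptions, not documented here:

```python
import json
import urllib.request

def build_executions_url(base_url: str, assistant_id: str) -> str:
    # Hypothetical endpoint and parameter names.
    return f"{base_url}/api/executions?assistantId={assistant_id}&sort=startDate:desc"

def list_executions(base_url: str, assistant_id: str, token: str) -> list:
    req = urllib.request.Request(
        build_executions_url(base_url, assistant_id),
        headers={"x-access-token": token},  # header name is an assumption
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("content", [])
```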
Via the UI

In the Kodexa UI, navigate to your project, select an assistant, and view the Executions tab. Each execution shows its status, duration, the pipeline steps that were run, and any errors that occurred.

Related Concepts
- Assistants — the entities that own pipelines and create executions
- Modules — the processing units that execute within pipeline steps
- Module Runtimes — the runtime environments that host module execution
- Event Handling with Modules — how modules can react to platform events
- Data Flow Step Conditionals — conditional logic for skipping pipeline steps
- Module Sidecars — shared code loaded alongside modules
