Documentation Index
Fetch the complete documentation index at: https://developer.kodexa.ai/llms.txt
Use this file to discover all available pages before exploring further.
LLM steps run a prompt or prompt template as a first-class step in an Activity. Use them when model reasoning should be explicit, bounded, observable, and connected to the workflow graph.
Use an LLM step for a single prompt. Use an AGENT step when the work requires tool use, investigation, or multi-step execution.
Some current runtime fields and stored item types still use the legacy name AI_PROMPT. In Activity Plan documentation, the concept is LLM: a prompt-backed step with inputs, model configuration, output mapping, and downstream actions.
When to Use LLM
Use LLM when the workflow needs model reasoning, such as:
- Summarizing an exception for a reviewer
- Explaining why a document failed validation
- Classifying a document against a bounded set of business categories
- Extracting a small structured decision from already-prepared context
- Drafting a recommendation that a human will approve
- Mapping messy text into a constrained JSON shape
Do not use LLM as an unbounded workflow controller. The Activity Plan should own the process. The LLM step should own one clear judgment or transformation.
Basic Shape
| Config key | Description |
|---|---|
| `promptBody` | Inline prompt text |
| `promptTemplateRef` | Reusable prompt template reference |
| `llmModelName` | Model to use when invoking the AI gateway |
| `promptVariables` | JSONata expressions used to build prompt variables |
| `includeDocument` | Include document content in the prompt context |
| `enrichment` | Optional Service Bridge enrichment before prompt rendering |
| `outputMapping` | JSONata mapping from model response into step output |
| `promptActions` | Allowed downstream actions emitted by the step |
| `perDocument` | Run the prompt separately for each document family |
Provide either `promptBody` or `promptTemplateRef`. Prompt templates are better when the prompt should be reviewed, versioned, and reused.
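A minimal sketch of what an LLM step could look like in an Activity Plan, using the config keys from the table above. The step wrapper fields shown here (`name`, `type`) and the input and response shapes are illustrative assumptions, not the authoritative schema.

```yaml
# Illustrative sketch only: wrapper fields and data shapes are assumed;
# the config keys come from the table above.
- name: summarize-exception
  type: LLM                                         # legacy runtimes may still report AI_PROMPT
  config:
    promptTemplateRef: prompts/exception-summary    # or promptBody for inline text
    llmModelName: gpt-4o                            # example model name passed to the AI gateway
    promptVariables:
      exceptionType: "exception.type"               # JSONata over step input (shape assumed)
      failedChecks: "validation.failures"
    outputMapping:
      summary: "response.summary"                   # map the model response into stable fields
      action: "response.recommendedAction"
    promptActions:
      - APPROVE
      - ESCALATE
```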
Prompt Variables
Prompt variables keep prompts readable and make input selection explicit.
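As a sketch, a prompt variable pairs a readable name with a JSONata expression over the step input, and the prompt body references that name. The `{{...}}` placeholder syntax and the input shape are assumptions for illustration.

```yaml
promptVariables:
  invoiceTotal: "document.metadata.invoiceTotal"   # JSONata over step input (shape assumed)
  poTotal: "purchaseOrder.total"
promptBody: |
  The invoice total is {{invoiceTotal}} and the purchase order total is {{poTotal}}.
  State whether they match and, if not, summarize the discrepancy in one sentence.
```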
Output Mapping
Output mapping turns the model response into stable workflow data. The mapped `action` value can drive downstream dependencies:
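A sketch of an output mapping that lifts fields from the model response and derives an action that downstream steps can depend on. The response field names are assumptions; the point is that downstream steps read the mapped fields rather than raw model text.

```yaml
outputMapping:
  category: "response.category"          # stable fields other steps can reference
  confidence: "response.confidence"
  action: "response.category = 'EXCEPTION' ? 'ESCALATE' : 'AUTO_APPROVE'"   # JSONata conditional
promptActions:
  - AUTO_APPROVE
  - ESCALATE                             # only declared actions may be emitted
```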
Including Documents
Set `includeDocument` when the prompt needs document content.
Prefer extracting the relevant content first with EXECUTION or SCRIPT, then prompting against the smaller extracted context. This makes model behavior easier to test and review.
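A sketch of the two patterns: prompting directly against document content, versus extracting a smaller context first and prompting against that. Step names, wiring, and field names are illustrative assumptions.

```yaml
# Pattern 1: prompt directly against document content
- name: explain-validation-failure
  type: LLM
  config:
    includeDocument: true
    promptTemplateRef: prompts/explain-failure

# Pattern 2 (often easier to test and review): extract first, then prompt
- name: extract-key-fields
  type: EXECUTION                        # or SCRIPT
- name: classify-from-fields
  type: LLM
  config:
    promptTemplateRef: prompts/classify
    promptVariables:
      fields: "extractedFields"          # JSONata over the prior step's output (shape assumed)
```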
Per-Document LLM Steps
Use `perDocument` when each document family needs its own model decision.
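A sketch of a per-document classification step: with `perDocument` set, the prompt runs once for each document family and yields one decision per family. The wrapper fields and response shape remain illustrative assumptions.

```yaml
- name: classify-each-document
  type: LLM
  config:
    perDocument: true                    # one model invocation per document family
    includeDocument: true
    promptTemplateRef: prompts/classify-document
    outputMapping:
      documentCategory: "response.category"
```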
Enrichment Before Prompting
An LLM step can enrich context before rendering the prompt. Use enrichment when the model needs reference data from a system of record, but the model should not call that system directly.
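A sketch of enrichment ahead of prompt rendering: a Service Bridge lookup supplies reference data that the prompt can then use, so the model never calls the system of record itself. Only the `enrichment` key comes from the table above; the fields inside it (`serviceBridgeRef`, `request`) and the result location are hypothetical, for illustration.

```yaml
config:
  enrichment:
    serviceBridgeRef: bridges/vendor-master                # hypothetical reference field
    request: "{ 'vendorId': document.metadata.vendorId }"  # JSONata building the request (assumed)
  promptVariables:
    vendorRecord: "enrichment.vendor"                       # assumed location of enrichment results
  promptTemplateRef: prompts/vendor-check
```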
LLM vs AGENT
| Need | Use |
|---|---|
| One prompt with known inputs and mapped output | LLM |
| A model-generated summary for a reviewer | LLM |
| Tool use across project resources | AGENT |
| Multi-step investigation with intermediate decisions | AGENT |
| Workflow routing with strict allowed actions | Usually SCRIPT or LLM |
Checklist
- The prompt has a bounded purpose.
- The prompt uses explicit variables instead of hidden context.
- Model output is mapped into stable fields.
- Downstream actions are declared and constrained.
- Human review is used where model output should not be final.
- The prompt is tested against real document examples and exception cases.
Agent Steps
Use agent runtimes for tool-using work.
Create Task Steps
Route model results into human review.
