
LLM Steps

LLM steps run a prompt or prompt template as a first-class step in an Activity. Use them when model reasoning should be explicit, bounded, observable, and connected to the workflow graph. Use an LLM step when the work is a single prompt; use an AGENT step when the work requires tool use, investigation, or multi-step execution.
Some current runtime fields and stored item types still use the legacy name AI_PROMPT. In Activity Plan documentation, the concept is LLM: a prompt-backed step with inputs, model configuration, output mapping, and downstream actions.

When to Use LLM

Use LLM when the workflow needs model reasoning such as:
  • Summarizing an exception for a reviewer
  • Explaining why a document failed validation
  • Classifying a document against a bounded set of business categories
  • Extracting a small structured decision from already-prepared context
  • Drafting a recommendation that a human will approve
  • Mapping messy text into a constrained JSON shape
Do not use LLM as an unbounded workflow controller. The Activity Plan should own the process. The LLM step should own one clear judgment or transformation.

Basic Shape

{
  "slug": "summarize-exception",
  "kind": "LLM",
  "dependsOn": ["validate:exception"],
  "config": {
    "promptTemplateRef": "invoice-exception-summary",
    "llmModelName": "gpt-4.1",
    "includeDocument": true,
    "promptVariables": {
      "vendorName": "$.task.data.vendorName",
      "exceptionCodes": "$.steps.validate.mapped_output.exceptionCodes"
    },
    "outputMapping": {
      "summary": "$.summary",
      "risk": "$.risk",
      "action": "$.recommendedAction"
    },
    "promptActions": [
      { "name": "review" },
      { "name": "auto-clear" }
    ]
  }
}
Config keys:
  • promptBody: Inline prompt text
  • promptTemplateRef: Reusable prompt template reference
  • llmModelName: Model to use when invoking the AI gateway
  • promptVariables: JSONata expressions used to build prompt variables
  • includeDocument: Include document content in the prompt context
  • enrichment: Optional Service Bridge enrichment before prompt rendering
  • outputMapping: JSONata mapping from the model response into step output
  • promptActions: Allowed downstream actions emitted by the step
  • perDocument: Run the prompt separately for each document family
Use either promptBody or promptTemplateRef. Prompt templates are better when the prompt should be reviewed, versioned, and reused.
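For quick, single-use prompts, the same step shape can carry the prompt inline with promptBody. A minimal sketch; the prompt text, the {{...}} interpolation syntax, and the step slugs here are illustrative assumptions, not values from the Kodexa docs:

```json
{
  "slug": "explain-validation-failure",
  "kind": "LLM",
  "dependsOn": ["validate:exception"],
  "config": {
    "promptBody": "Explain in two sentences why this document failed validation: {{validationErrors}}",
    "llmModelName": "gpt-4.1",
    "promptVariables": {
      "validationErrors": "$.steps.validate.mapped_output.errors"
    },
    "outputMapping": {
      "explanation": "$.explanation"
    }
  }
}
```

Inline prompts trade reviewability for convenience; once a prompt needs versioning or reuse, promote it to a template and switch to promptTemplateRef.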

Prompt Variables

Prompt variables keep prompts readable and make input selection explicit.
{
  "promptVariables": {
    "invoiceNumber": "$.task.data.invoiceNumber",
    "poNumber": "$.task.data.poNumber",
    "validationErrors": "$.steps.validate.mapped_output.errors"
  }
}
These expressions are evaluated against the Activity context, including organization, task, inputs, document data, and prior step results.
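As an illustration of what those JSONata paths resolve against, a hypothetical (and deliberately incomplete) Activity context might look like the fragment below; the exact shape is defined by the runtime, and the field values are invented:

```json
{
  "organization": { "slug": "acme" },
  "task": {
    "data": {
      "invoiceNumber": "INV-1042",
      "poNumber": "PO-7781"
    }
  },
  "steps": {
    "validate": {
      "mapped_output": {
        "errors": ["PO total does not match invoice total"]
      }
    }
  }
}
```

Against this context, "$.task.data.invoiceNumber" resolves to "INV-1042" and "$.steps.validate.mapped_output.errors" resolves to the errors array.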

Output Mapping

Output mapping turns the model response into stable workflow data.
{
  "outputMapping": {
    "summary": "$.summary",
    "riskLevel": "$.risk.level",
    "action": "$.decision"
  }
}
The mapped action can drive downstream dependencies:
[
  {
    "slug": "summarize-exception",
    "kind": "LLM",
    "config": {
      "promptActions": [
        { "name": "review" },
        { "name": "auto-clear" }
      ]
    }
  },
  {
    "slug": "analyst-review",
    "kind": "CREATE_TASK",
    "dependsOn": ["summarize-exception:review"]
  },
  {
    "slug": "clear-exception",
    "kind": "SCRIPT",
    "dependsOn": ["summarize-exception:auto-clear"]
  }
]
Keep actions constrained. The Activity Plan should list every action the LLM step is allowed to emit.

Including Documents

Set includeDocument when the prompt needs document content.
{
  "slug": "classify-letter",
  "kind": "LLM",
  "config": {
    "promptTemplateRef": "classify-correspondence",
    "includeDocument": true,
    "outputMapping": {
      "category": "$.category",
      "confidence": "$.confidence"
    }
  }
}
For large documents, prefer preparing the exact context first with EXECUTION or SCRIPT, then prompting against the smaller extracted context. This makes model behavior easier to test and review.
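A sketch of that pattern: a SCRIPT step extracts only the relevant fields, and the LLM step prompts against the smaller result. The slugs are illustrative, and the plain-slug dependency (without an action suffix) is an assumption:

```json
[
  {
    "slug": "extract-claim-context",
    "kind": "SCRIPT"
  },
  {
    "slug": "summarize-claim",
    "kind": "LLM",
    "dependsOn": ["extract-claim-context"],
    "config": {
      "promptTemplateRef": "claim-summary",
      "promptVariables": {
        "claimContext": "$.steps.extract-claim-context.mapped_output"
      },
      "outputMapping": {
        "summary": "$.summary"
      }
    }
  }
]
```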

Per-Document LLM Steps

Use perDocument when each document family needs its own model decision.
{
  "slug": "summarize-each-claim-document",
  "kind": "LLM",
  "config": {
    "promptTemplateRef": "claim-document-summary",
    "includeDocument": true,
    "perDocument": true
  }
}
Per-document steps produce document-level outputs that can be used by downstream routing and review tasks.
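Downstream steps can then depend on the per-document step as usual. A hedged sketch routing the summaries into human review; the CREATE_TASK wiring and the plain-slug dependency are assumptions for illustration:

```json
[
  {
    "slug": "summarize-each-claim-document",
    "kind": "LLM",
    "config": {
      "promptTemplateRef": "claim-document-summary",
      "includeDocument": true,
      "perDocument": true,
      "outputMapping": {
        "summary": "$.summary"
      }
    }
  },
  {
    "slug": "review-claim-summaries",
    "kind": "CREATE_TASK",
    "dependsOn": ["summarize-each-claim-document"]
  }
]
```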

Enrichment Before Prompting

An LLM step can enrich context before rendering the prompt. Use enrichment when the model needs reference data from a system of record, but the model should not call that system directly.
{
  "enrichment": [
    {
      "serviceBridgeRef": "vendor-master",
      "endpointName": "lookup-vendor",
      "requestBody": {
        "vendorId": "$.task.data.vendorId"
      },
      "outputKey": "vendor"
    }
  ]
}
The Activity Plan stays explicit: bridge enrichment happens first, prompt rendering happens second, output mapping happens third.
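Putting the three phases together, a sketch of a fully configured enriched LLM step. The "$.vendor.name" path assumes the enrichment result is exposed to prompt variables under its outputKey; the runtime defines the actual exposure, so treat that path as an assumption:

```json
{
  "slug": "summarize-vendor-exception",
  "kind": "LLM",
  "dependsOn": ["validate:exception"],
  "config": {
    "promptTemplateRef": "invoice-exception-summary",
    "enrichment": [
      {
        "serviceBridgeRef": "vendor-master",
        "endpointName": "lookup-vendor",
        "requestBody": {
          "vendorId": "$.task.data.vendorId"
        },
        "outputKey": "vendor"
      }
    ],
    "promptVariables": {
      "vendorName": "$.vendor.name"
    },
    "outputMapping": {
      "summary": "$.summary"
    }
  }
}
```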

LLM vs AGENT

Need and which step to use:
  • One prompt with known inputs and mapped output: LLM
  • A model-generated summary for a reviewer: LLM
  • Tool use across project resources: AGENT
  • Multi-step investigation with intermediate decisions: AGENT
  • Workflow routing with strict allowed actions: usually SCRIPT or LLM

Checklist

  • The prompt has a bounded purpose.
  • The prompt uses explicit variables instead of hidden context.
  • Model output is mapped into stable fields.
  • Downstream actions are declared and constrained.
  • Human review is used where model output should not be final.
  • The prompt is tested against real document examples and exception cases.

Agent Steps

Use agent runtimes for tool-using work.

Create Task Steps

Route model results into human review.