When an activity, task, or intake document is created without an explicit title, the platform can use an LLM to generate a contextual title and description automatically. This keeps dashboards, task lists, and audit trails readable without requiring users to write titles by hand.
## When AI Naming Runs

AI naming is evaluated at three trigger points:

| Trigger | Created Resource | Where to Configure |
|---|---|---|
| Activity creation | Activity title and description | Activity Plan metadata |
| Task creation (from a `CREATE_TASK` step) | Task title and description | Task Template metadata |
| Intake upload (from a task template) | Task title and description | Task Template metadata |
## Enabling AI Naming

Add an `aiNaming` block to the `metadata` of an Activity Plan or Task Template.
The block can be supplied as JSON (via the API) or as YAML (via `kdx sync`).
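A minimal sketch of such a block, assuming only the `enabled` and `prompt` fields referenced later on this page; the prompt wording itself is illustrative:

```json
{
  "metadata": {
    "aiNaming": {
      "enabled": true,
      "prompt": "Generate a concise title for a {templateName} activity covering {documentFamilyCount} document(s): {documentFamilyPaths}"
    }
  }
}
```

The equivalent YAML for `kdx sync`:

```yaml
metadata:
  aiNaming:
    enabled: true
    prompt: "Generate a concise title for a {templateName} activity covering {documentFamilyCount} document(s): {documentFamilyPaths}"
```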
The `aiNaming` block lives inside the resource's `metadata` object, not at the top level. This is the same `metadata` field used for other resource-level configuration.

## Prompt Placeholders
The prompt string supports placeholders that the platform resolves before sending the text to the LLM. Wrap each placeholder in curly braces.

| Placeholder | Description | Example Output |
|---|---|---|
| `{templateName}` | Activity plan or task template name | Invoice Review |
| `{activityPlanName}` | Alias for `{templateName}` | Invoice Review |
| `{documentFamilyPaths}` | Comma-separated file paths of attached documents | invoices/acme-001.pdf, invoices/acme-002.pdf |
| `{documentFamilyCount}` | Number of document families | 2 |
| `{knowledgeFeatures}` | Comma-separated knowledge feature names | Net 30 Terms, Auto-Renewal |
| `{metadata:key}` | Top-level metadata value from document families | Acme Corp |
| `{metadata:key.nested.path}` | Dot-path into metadata JSON (GJSON syntax) | Austin |
| `{externalData:key}` | Entire JSON blob for an external data key | `{"invoiceNumber":"INV-001"}` |
| `{externalData:key.path}` | Dot-path into external data JSON (the first segment is the key, the rest is a GJSON path) | INV-001 |
## Metadata and External Data Paths

The `{metadata:...}` and `{externalData:...}` placeholders use a dot-separated path to reach nested values.

For metadata, the path resolves directly against the document family metadata JSON.
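The platform resolves these paths with GJSON on the server side; the following is only a minimal Python analogue that illustrates the semantics, using hypothetical metadata:

```python
def resolve_path(data, path):
    """Walk a dot-separated path through nested dicts; None if any segment is absent."""
    current = data
    for segment in path.split("."):
        if not isinstance(current, dict) or segment not in current:
            return None
        current = current[segment]
    return current

# Hypothetical document-family metadata (keys are illustrative).
metadata = {
    "companyName": "Acme Corp",
    "office": {"address": {"city": "Austin"}},
}

resolve_path(metadata, "companyName")          # -> "Acme Corp"   ({metadata:companyName})
resolve_path(metadata, "office.address.city")  # -> "Austin"      ({metadata:office.address.city})
```

The same walk applies to `{externalData:key.path}`, except that the first segment selects the external data key and the remainder is resolved inside that key's JSON blob.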
## Multi-Document Behavior

When multiple document families are involved in a single activity or task, metadata and external data values are collected from all documents, deduplicated, and joined with `; `.

For example, if two documents have `companyName` metadata values of "Acme Corp" and "Beta Inc", the placeholder `{metadata:companyName}` resolves to `Acme Corp; Beta Inc`.
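The collect-deduplicate-join behavior described above can be sketched as follows (a Python illustration of the rule, not platform code):

```python
def resolve_multi(document_metadatas, key):
    """Collect a key's values across document families, dedupe in order, join with '; '."""
    seen = []
    for meta in document_metadatas:
        value = meta.get(key)
        if value is not None and value not in seen:
            seen.append(value)
    return "; ".join(seen)

docs = [
    {"companyName": "Acme Corp"},
    {"companyName": "Beta Inc"},
    {"companyName": "Acme Corp"},  # duplicate value is dropped
]
resolve_multi(docs, "companyName")  # -> "Acme Corp; Beta Inc"
```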
## Title Resolution Fallback Chain

The platform uses the first available title from this ordered chain:

1. Explicit title — If the caller provides a title directly, it is used as-is.
2. AI naming — If `aiNaming.enabled` is `true` and a prompt is configured, the LLM generates a title and description.
3. Template rendering — If `defaultTitleTemplate` or `defaultDescriptionTemplate` is set on the plan or template, it is rendered using Go template syntax with `{{ .inputs.field }}` placeholders.
4. Plan or template name — The name of the Activity Plan or Task Template is used as the title.
5. Generic fallback — `"Untitled"`.
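The chain above amounts to a first-non-empty selection; a compact Python sketch, where each argument is the result of the corresponding step (or `None` if that step did not apply):

```python
def resolve_title(explicit_title, ai_title, rendered_template, template_name):
    """Return the first available title from the documented fallback chain."""
    for candidate in (explicit_title, ai_title, rendered_template, template_name):
        if candidate:
            return candidate
    return "Untitled"

resolve_title(None, None, None, "Invoice Review")  # -> "Invoice Review"
resolve_title(None, None, None, None)              # -> "Untitled"
```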
## Example Prompts

Typical prompt scenarios include:

- Invoice processing with company context
- Task with external reference data
- Contract review with knowledge features
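Sketches of what such prompts could look like, using only the placeholders documented above (the wording and external data key names are illustrative assumptions, not taken from the platform):

```yaml
# Invoice processing with company context
prompt: "Title an invoice review for {metadata:companyName} covering {documentFamilyCount} document(s): {documentFamilyPaths}"
---
# Task with external reference data (assumes an external data key named "invoice")
prompt: "Name this task for invoice {externalData:invoice.invoiceNumber} from {metadata:companyName}"
---
# Contract review with knowledge features
prompt: "Title a contract review for {metadata:companyName}; key terms: {knowledgeFeatures}"
```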
## LLM Response Format

The LLM must respond with a JSON object containing `title` and `description` fields:
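For example (the field names come from this page; the values are illustrative):

```json
{
  "title": "Invoice Review - Acme Corp",
  "description": "Review of 2 invoice documents for Acme Corp: acme-001.pdf and acme-002.pdf."
}
```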
## Best Practices

- **Keep prompts concise.** The platform uses a small, fast LLM for naming to minimize latency. Long, detailed prompts do not improve results and slow down resource creation.
- **Include the most distinctive data points.** Company name, document type, and reference numbers produce the most useful titles. Avoid generic placeholders that add little differentiation.
- **Always set a `defaultTitleTemplate` as a fallback.** AI naming depends on an external LLM call. If the call fails, a well-crafted template ensures activities and tasks still get meaningful titles.
- **Test with representative data.** Use documents that reflect your production workload to verify that placeholders resolve to useful values and that the LLM produces titles at the right level of detail.
