Intakes provide automated document ingestion endpoints. Each intake defines a way for external systems to submit documents into a specific document store, triggering processing pipelines automatically.

[Screenshot: Intakes page showing configured intake endpoints with names, target stores, and status]

How Intakes Work

An intake creates an HTTP endpoint that external systems can send documents to. When a document arrives at an intake endpoint:
  1. The document is uploaded to the configured document store
  2. If a script is configured, it runs to validate or enrich the document metadata
  3. A document family is created for tracking
  4. If a task template is configured, a task is automatically created
  5. Any configured knowledge features are assigned to the document
  6. Domain events are published, triggering any subscribed processing pipelines
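The sequence above can be sketched in Python. This is purely illustrative pseudocode of the ordering; the function name, the intake dictionary shape, and the event type are hypothetical, not the platform's actual internals.

```python
def process_intake_upload(intake, file_bytes, metadata, run_script=None):
    """Illustrative sketch of the ingestion order above (hypothetical names)."""
    # 1. Upload the document to the configured store
    doc = {"file": file_bytes, "metadata": dict(metadata)}
    intake["store"].append(doc)

    # 2. Optional script validates or enriches metadata
    if run_script is not None:
        result = run_script(doc)
        if result.get("reject"):
            raise ValueError(result.get("rejectReason", "rejected"))
        doc["metadata"] = result.get("metadata", doc["metadata"])

    # 3. A document family is created for tracking
    family = {"documents": [doc], "tasks": [], "knowledgeFeatures": []}

    # 4. A task is created if a template is configured
    if intake.get("taskTemplate"):
        family["tasks"].append({"template": intake["taskTemplate"]})

    # 5. Configured knowledge features are assigned
    family["knowledgeFeatures"].extend(intake.get("knowledgeFeatures", []))

    # 6. Domain events are published for downstream pipelines
    events = [{"type": "document.ingested", "family": family}]
    return family, events
```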

Upload Endpoint

Each intake exposes an upload endpoint at:
POST /api/intake/{orgSlug}/{intakeSlug}
For example, an intake with slug invoice-upload in organization acme-corp would be available at:
POST /api/intake/acme-corp/invoice-upload

Single File Upload

curl -X POST https://platform.kodexa.ai/api/intake/acme-corp/invoice-upload \
  -H "Authorization: Bearer <token>" \
  -F "file=@invoice.pdf" \
  -F 'metadata={"vendor": "Acme Inc", "department": "finance"}'
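The same request can be made from Python with the third-party requests library. A hedged sketch: the endpoint and form fields come from this page, while the wrapper function and its name are illustrative.

```python
import json

import requests  # third-party HTTP client: pip install requests


def upload_to_intake(base_url, org_slug, intake_slug, fileobj, filename,
                     token, metadata=None):
    """POST a single file to an intake endpoint; returns the created
    document family (the endpoint responds with HTTP 201 on success)."""
    url = f"{base_url}/api/intake/{org_slug}/{intake_slug}"
    data = {}
    if metadata is not None:
        # metadata travels as a JSON-encoded form field
        data["metadata"] = json.dumps(metadata)
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {token}"},
        files={"file": (filename, fileobj)},
        data=data,
    )
    resp.raise_for_status()
    return resp.json()
```

For example: `upload_to_intake("https://platform.kodexa.ai", "acme-corp", "invoice-upload", open("invoice.pdf", "rb"), "invoice.pdf", token, {"vendor": "Acme Inc"})`.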

Multiple File Upload

When Allow Multiple Files is enabled on the intake, you can upload multiple files in a single request:
curl -X POST https://platform.kodexa.ai/api/intake/acme-corp/invoice-upload \
  -H "Authorization: Bearer <token>" \
  -F "file=@invoice1.pdf" \
  -F "file=@invoice2.pdf" \
  -F 'metadata=[{"vendor": "Acme"}, {"vendor": "Globex"}]'
When providing metadata for multiple files, use a JSON array where each element corresponds to a file by index. If a single JSON object is provided instead, it is applied to all files.
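The pairing rule can be expressed as a small Python helper: a JSON array is matched to files by index, while a single object fans out to every file. The helper name is illustrative, and the length check here is a client-side convenience rather than documented server behavior.

```python
def pair_metadata(filenames, metadata):
    """Pair each uploaded file with its metadata, following the intake
    rules: a list matches files by index, a dict applies to all files."""
    if metadata is None:
        return [(name, None) for name in filenames]
    if isinstance(metadata, dict):
        # a single object is applied to every file
        return [(name, metadata) for name in filenames]
    if len(metadata) != len(filenames):
        raise ValueError("metadata array length must match the file count")
    return list(zip(filenames, metadata))
```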

Request Parameters

Parameter | Type | Required | Description
file | multipart file | Yes | One or more files to upload
path | string | No | Document path in the store (defaults to the filename)
metadata | JSON | No | Key-value metadata to attach to the document
documentVersion | string | No | Version identifier stored on the content object
externalData | JSON | No | External data JSON object injected into the KDDB document under the key "default". Accessible via doc.get_external_data() in processing modules.
labels | string | No | Comma-separated label names to assign (normalized to uppercase; created if new)
statusId | string | No | Document status ID to set on the document family
knowledgeFeatures | JSON | No | Array of {"id": "..."} objects, merged with intake-configured features

Response

Returns HTTP 201 on success with the created document family object. For multiple files, returns an array of document family objects.
External data is stored directly in the KDDB document, not in a separate database table. Processing modules can access it via doc.get_external_data() (Python) or doc.getExternalData() (TypeScript/WASM). The platform UI reads it directly from the loaded document.

Configuring an Intake

1. Create Intake: Click the add button on the Intakes page. Provide a name and slug for the intake; the slug determines the upload endpoint URL.
2. Select Target Store: Choose the document store where incoming documents should be stored.
3. Configure Options: Set up optional features such as scripting, task templates, knowledge features, and metadata.

Intake Settings

Setting | Description
Name | Human-readable label for the intake
Slug | URL-safe identifier used in the upload endpoint path
Description | Optional description of the intake's purpose
Active | When disabled, the intake rejects all uploads
Allow Multiple Files | Enable uploading multiple files in a single request
Target Store | The document store where uploaded files are saved

Script Tab

Intakes support a JavaScript scripting tab that lets you run custom logic on each uploaded file before it is stored. Scripts run in a Goja JavaScript runtime with a 5-second timeout.

Available Variables

Variable | Type | Description
filename | string | Original uploaded filename
fileSize | number | File size in bytes
mimeType | string | Detected MIME type
metadata | object | Mutable metadata object (merged from intake config and upload parameters)
document.text | string | Extracted text content (first 5 pages for PDFs)
document.pageCount | number | Page count, if available
document.metadata | object | Document-level metadata
log(level, message) | function | Write to the server logs (debug, info, warn, error)

Return Value

Scripts return an object that controls how the upload is processed:
({
  metadata: metadata,       // Modified metadata object
  reject: false,            // Set true to reject the upload
  rejectReason: "",         // Reason string shown to the caller
  taskTemplates: [          // Optional: control which tasks are created
    {
      slug: "template-slug",    // Task template slug (required)
      priority: 1,              // Override task priority
      teamSlug: "team-slug",    // Override team assignment
      assigneeEmail: "a@b.com", // Override assignee
      title: "Custom title",    // Override task title
      description: "...",       // Override task description
      metadata: {}              // Per-task metadata
    }
  ]
})
Field | Type | Required | Description
metadata | object | No | Modified metadata; replaces the merged metadata for this document
reject | boolean | No | Set true to reject the upload (returns HTTP 400)
rejectReason | string | No | Reason shown to the caller when rejected
taskTemplates | array | No | Task templates to create for this document (see below)

Task Templates Array

When the taskTemplates field is present in the return value, the script takes full control of task creation:
  • Array with entries — creates one task per entry, resolving each template by slug
  • Empty array [] — creates no tasks, even if a static template is configured on the intake
  • Field omitted entirely — falls back to the static task template configured on the intake (backward compatible)
Each entry in the array supports these fields:
Field | Type | Required | Description
slug | string | Yes | The slug of the task template to use. Must exist in the same organization.
priority | number | No | Override the task priority (0 = default)
teamSlug | string | No | Override team assignment by team slug
assigneeEmail | string | No | Override the assignee by email address
title | string | No | Override the task title (defaults to the filename, or AI naming if configured)
description | string | No | Override the task description
metadata | object | No | Metadata object stored on the created task
When scripting is enabled, the static Task Template dropdown in the Settings tab is disabled. All task creation logic moves to the script.

Example: Validate File Size

if (fileSize > 50 * 1024 * 1024) {
  return {
    metadata: metadata,
    reject: true,
    rejectReason: "File exceeds 50MB limit"
  };
}

// Enrich metadata with detected info
metadata["source"] = "intake";
metadata["originalFilename"] = filename;

return {
  metadata: metadata,
  reject: false,
  rejectReason: ""
};

Example: Route by Document Content

log("info", "Processing: " + filename);

if (document.text.includes("CONFIDENTIAL")) {
  metadata["classification"] = "confidential";
  metadata["requiresReview"] = "true";
}

return {
  metadata: metadata,
  reject: false,
  rejectReason: ""
};

Example: Dynamic Task Routing

Route documents to different task templates with custom overrides based on content:
log("info", "Classifying: " + filename);

var templates = [];

if (document.text.indexOf("INVOICE") >= 0) {
  templates.push({
    slug: "invoice-review",
    priority: 2,
    teamSlug: "finance-team",
    title: "Invoice Review: " + filename
  });

  // High-value invoices also get an approval task
  var amount = document.text.match(/\$[\d,]+\.?\d*/);
  if (amount) {
    templates.push({
      slug: "invoice-approval",
      priority: 1,
      title: "Approve: " + filename + " (" + amount[0] + ")",
      metadata: { detectedAmount: amount[0] }
    });
  }
} else if (document.text.indexOf("CONTRACT") >= 0) {
  templates.push({
    slug: "legal-review",
    teamSlug: "legal-team",
    title: "Contract Review: " + filename
  });
} else {
  log("info", "No matching template for: " + filename);
}

({
  metadata: metadata,
  reject: false,
  taskTemplates: templates
})
This script inspects the document text and:
  • Creates an invoice review task for invoices, plus an approval task for invoices with dollar amounts
  • Creates a legal review task for contracts
  • Skips task creation entirely for unrecognized documents
Enable the Script toggle to activate script execution. You can disable it without deleting the script code.

Loading Shared Modules

Intake scripts can load shared JavaScript modules using the Module Refs picker in the Script tab. Select one or more JavaScript modules from your organization; they are fetched and executed, in order, before your intake script runs. The loaded modules' functions and variables are then available in the global scope of your intake script, letting you reuse common validation, transformation, or enrichment logic across multiple intakes.
// Assuming "my-org/validation-helpers" is loaded via Module Refs
if (!validateFileType(mimeType)) {
  return { metadata: metadata, reject: true, rejectReason: "Unsupported file type" };
}

metadata["normalized_name"] = cleanFilename(filename);
return { metadata: metadata, reject: false, rejectReason: "" };

Task Template

Static Assignment

Select a task template from the dropdown to automatically create a task for each uploaded document. When configured:
  • A task is created using the selected template
  • The task title defaults to the uploaded filename
  • The uploaded document is linked to the task
  • If the template has AI naming configured, the task title is generated from the document content
This is useful for simple intake workflows where every document needs the same type of task.

Script-Driven Assignment

When a processing script is enabled on the intake, the static task template dropdown is disabled. Instead, the script controls which tasks are created by returning a taskTemplates array in its return value. This allows:
  • Conditional routing — different document types create different tasks
  • Multiple tasks — a single document can create several tasks across different templates
  • No tasks — some documents may not need a task at all
  • Per-task overrides — customize priority, team, assignee, title, and metadata per task
See the Script Tab section for the full return value reference and examples.
Task template slugs in the script must match existing templates in your organization. If a slug cannot be resolved, that template is skipped and a warning is logged. The remaining templates continue to be created.

Knowledge Features

Select one or more knowledge feature types to automatically assign to every document uploaded through this intake. This lets you pre-classify documents at ingestion time — for example, tagging all documents from a specific intake as belonging to a particular vendor or document category.

Processing Metadata

The Processing Metadata section lets you define key-value pairs that are attached to every document uploaded through this intake. These metadata values are available to downstream processing modules and assistants. Metadata is merged in this order (later values override earlier ones):
  1. Intake-level metadata (configured here)
  2. Metadata extracted from the document file
  3. Per-upload metadata (provided in the API request)
  4. Script modifications (if a script is enabled)
The following keys are reserved and automatically stripped from the metadata object before storage. Use them as separate form parameters instead: externalData, labels, statusId, knowledgeFeatures, documentVersion.
Labels are normalized to uppercase before storage. For example, labels=invoice,urgent creates labels INVOICE and URGENT. Labels that don’t exist in the organization are created automatically.
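The merge order and the reserved-key and label rules above can be sketched in Python. The function names and the client-side framing are illustrative; this is not platform code.

```python
# Reserved keys are stripped from metadata before storage (per this page)
RESERVED_KEYS = {"externalData", "labels", "statusId",
                 "knowledgeFeatures", "documentVersion"}


def merge_metadata(intake_meta, file_meta, upload_meta, script_meta=None):
    """Merge metadata in the documented order; later values override earlier ones."""
    merged = {}
    for layer in (intake_meta, file_meta, upload_meta, script_meta or {}):
        merged.update(layer)
    return {k: v for k, v in merged.items() if k not in RESERVED_KEYS}


def normalize_labels(labels_param):
    """Split the comma-separated labels parameter and uppercase each name."""
    return [name.strip().upper() for name in labels_param.split(",") if name.strip()]
```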

API Tokens

The API Tokens tab lets you create scoped tokens for machine-to-machine authentication against a specific intake endpoint. Unlike user API keys, intake tokens are scoped to a single intake and bypass user authentication — making them ideal for automated pipelines, third-party integrations, and CI/CD workflows.

Creating a Token

1. Open the API Tokens tab: Select an intake and navigate to the API Tokens tab.
2. Create a new token: Click the add button. Optionally set an expiration date.
3. Copy the token: The plaintext token (prefixed with kit_) is shown only once. Copy it immediately and store it securely.
Token values are shown only at creation time. After you close the dialog, only a hint (last 4 characters) is displayed. If you lose the token, you must create a new one.

Using Intake Tokens

Pass the token in the Authorization header when uploading to the intake endpoint:
curl -X POST https://platform.kodexa.ai/api/intake/acme-corp/invoice-upload \
  -H "Authorization: Bearer kit_abc123..." \
  -F "file=@invoice.pdf"
Intake tokens only grant access to the specific intake they were created for. They cannot be used to access other API endpoints.

Managing Tokens

The API Tokens tab displays all tokens for the intake with their creation date, hint, and expiration status. Click the delete button to revoke a token. A confirmation dialog is shown before deletion.

Token Security

  • Tokens are hashed with SHA-256 before storage — the platform never stores plaintext tokens
  • Each token is scoped to a single intake and cannot access other resources
  • Tokens can have optional expiration dates
  • Revocation takes effect immediately
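The hash-then-compare pattern described above can be sketched with Python's standard library. The function names are illustrative, not the platform's actual code.

```python
import hashlib
import hmac
import secrets


def new_intake_token():
    """Generate a kit_-prefixed token. Only the SHA-256 digest would be
    stored; the plaintext is shown to the user exactly once."""
    plaintext = "kit_" + secrets.token_urlsafe(32)
    digest = hashlib.sha256(plaintext.encode()).hexdigest()
    return plaintext, digest


def verify_token(presented, stored_digest):
    """Hash the presented token and compare digests in constant time."""
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(candidate, stored_digest)
```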
Each intake provides a unique URL. Keep intake URLs and authentication credentials secure, as anyone with access can submit documents to your organization.