orchagent has four canonical types: prompt, tool, agent, and skill.

Documentation Index
Fetch the complete documentation index at: https://docs.orchagent.io/llms.txt
Use this file to discover all available pages before exploring further.
The Four Types
| Type | What it is | Default execution engine |
|---|---|---|
| prompt | Prompt template + schema → single LLM call | direct_llm |
| tool | Python or JavaScript code runs in a sandbox | code_runtime |
| agent | LLM tool-use loop with custom tools | managed_loop |
| skill | Passive knowledge (markdown) — not runnable | N/A |
Quick distinction:
prompt types answer questions. tool types run your code. agent types reason and iterate with tools. skill types teach other agents.

Type determines execution engine
The type field sets sensible defaults for how your agent executes. You can still override with explicit declarations:
| Override | Effect |
|---|---|
| Adding runtime.command to any type | Forces code_runtime |
| Adding loop config to any type | Forces managed_loop |
| Neither declared on a prompt type | Uses direct_llm (the default) |
| run_mode | Behavior |
|---|---|
| on_demand (default) | Each call is independent — run via CLI, API, or schedule |
| always_on | Persistent service — Discord bots, webhook listeners, workers |
| callable | Behavior |
|---|---|
| true (default) | Other agents can call this agent as a dependency |
| false | Only users can call this agent (e.g., always-on services) |
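These two settings sit alongside type in the manifest. A minimal sketch (the exact field placement is an assumption; see the Manifest Format page for the full schema):

```json
{
  "type": "tool",
  "run_mode": "always_on",
  "callable": false
}
```

An always-on service typically sets callable to false, matching the table above.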
Which Type Should I Use?
Ask yourself one question: what does your agent need to do?

Common Use Cases
Not sure which pattern fits? Find your use case below:

| I want to… | Type | Why |
|---|---|---|
| Analyze sentiment or classify text | prompt | One LLM call, structured JSON output |
| Summarize or translate documents | prompt | One LLM call, no tools needed |
| Generate marketing copy or emails | prompt | Prompt engineering, structured output |
| Extract data from text (names, dates, etc.) | prompt | One LLM call with output schema |
| Fix code until tests pass | agent | LLM reads code, runs tests, iterates |
| Review pull requests or audit code | agent | LLM navigates files, checks patterns |
| Research a topic and write a report | agent | LLM searches, reads, synthesizes |
| Scan repos for secrets or vulnerabilities | tool | Deterministic logic, fast, no LLM needed |
| Process uploaded files (PDF, CSV, images) | tool | File I/O, custom parsing logic |
| Call external APIs and transform data | tool | Full HTTP control, auth, error handling |
| Run a multi-model pipeline (different LLMs per step) | tool | You control which LLM handles each step |
| Build a Discord bot or webhook listener | tool | Persistent service, event-driven |
| Share coding standards with your team | skill | Passive knowledge, works with any AI tool |
| Package domain expertise (legal, medical, etc.) | skill | Reusable across multiple agents |
Still unsure? Start with type: "prompt". If you find yourself thinking “I wish it could run a command” or “it needs to iterate,” switch to type: "agent". If you need full control, use type: "tool". You can always change later — just update the type field.

Prompt Type (type: "prompt")
The simplest type. You provide a prompt template with variable placeholders, and orchagent handles the LLM call. Execution engine: direct_llm.
When to use:
- Single LLM call is sufficient
- No external API calls needed
- No complex logic or branching
Example

A minimal prompt-type agent consists of three files: orchagent.json, prompt.md, and schema.json.
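As an illustrative sketch (the manifest fields beyond type, the agent name, and the schema layout are assumptions, not confirmed field names; see the Manifest Format page for the real schema), the three files might look like:

orchagent.json:

```json
{
  "type": "prompt",
  "name": "sentiment-analyzer"
}
```

prompt.md:

```markdown
Classify the sentiment of the following text as positive, negative, or neutral:

{{text}}
```

schema.json:

```json
{
  "type": "object",
  "properties": {
    "sentiment": { "type": "string", "enum": ["positive", "negative", "neutral"] }
  },
  "required": ["sentiment"]
}
```

Here schema.json is written as a JSON Schema describing the structured output, which matches the "prompt template + schema → single LLM call" description above.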
Prompt Variables
Use {{variable}} syntax in your prompt.md. Every variable referenced in the template must be supplied as an input field; otherwise the call fails with a 400 MISSING_INPUT_FIELDS error listing any that are missing.
Agent Type (type: "agent")
Agent types give the LLM a tool-use loop inside a sandbox. Think of it as “Claude Code in a container, configured by you.” The platform provides built-in tools (bash, file read/write, list files) and you can define custom command-wrapper tools. The LLM iterates autonomously until it solves the task and submits a result. Execution engine: managed_loop.
When to use:
- The task requires running commands, reading/writing files, or iterating
- You want the LLM to figure out the steps, not hard-code them
- You’d otherwise write code just to orchestrate LLM + subprocess calls
What you get:
- E2B sandbox with your custom environment (if Dockerfile provided)
- Built-in tools: bash, read_file, write_file, list_files, submit_result
- Your custom tools converted to named tool definitions
- A managed loop that runs until the LLM calls submit_result or hits max_turns
Custom Tools
Custom tools are command wrappers that give the LLM clean, named operations instead of having to guess shell commands. Simple tools map a name to a fixed command (no parameters); parameterized tools use {{param}} placeholders in the command template.

The LLM sees named operations like run_tests and deploy instead of guessing raw bash commands. Bash is always available as a fallback for ad-hoc commands.
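A hypothetical sketch of what such command-wrapper definitions might look like in orchagent.json (the tools key and its shape are assumed field names, not confirmed by this page):

```json
{
  "type": "agent",
  "tools": [
    { "name": "run_tests", "command": "pytest -q" },
    { "name": "deploy", "command": "./deploy.sh {{environment}}" }
  ]
}
```

The first tool takes no parameters; the second uses a {{environment}} placeholder that the LLM fills in when calling it.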
Built-in Tools
Every managed loop agent automatically gets these tools:

| Tool | Description |
|---|---|
| bash | Run shell commands (120s per-command timeout) |
| read_file | Read file contents |
| write_file | Write/create files (auto-creates parent directories) |
| list_files | List directory contents (optional recursive) |
| submit_result | Submit final structured output and end the loop |
Safety Limits
| Limit | Default | Configurable |
|---|---|---|
| loop.max_turns | 25 | Yes, in orchagent.json (platform max: 50) |
| Per-command timeout | 120 seconds | No |
| Overall timeout | Agent’s timeout_seconds | Yes |
Provider Support
Managed loop agents currently support Anthropic (Claude) only. Why? The managed loop uses Claude’s native tool-use protocol: the platform sends a system prompt with tool definitions, the LLM returns tool_use blocks, the platform executes them in the sandbox, and feeds tool_result messages back. This cycle repeats until the LLM calls submit_result or hits max_turns. The implementation relies on Anthropic-specific message formatting (system/user/assistant roles with structured tool-use content blocks) that doesn’t map 1:1 to other providers’ tool-calling APIs.
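Conceptually, the cycle described above can be sketched as a toy loop. This is not the real orchagent implementation, and the message format is heavily simplified; it only illustrates the execute-and-feed-back cycle:

```python
def run_managed_loop(model, tools, task, max_turns=25):
    """Toy managed loop: model(messages) returns tool calls like
    {"tool": name, "input": {...}}; submit_result ends the loop."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        for call in model(messages):
            if call["tool"] == "submit_result":
                # Final structured output ends the loop.
                return call["input"]
            # Execute the tool and feed the result back to the model.
            result = tools[call["tool"]](**call["input"])
            messages.append({"role": "user", "content": result})
    raise RuntimeError("hit max_turns without submit_result")
```

In the real platform, model is the Claude API returning tool_use blocks and the tool results go back as tool_result messages; here both are stand-ins.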
Multi-provider support for managed loop is on the roadmap. In the meantime, if you need to use OpenAI or Gemini models in a tool-use loop, use a code runtime agent instead — you have full control over the LLM calls and can use any provider’s SDK directly.
Tool Type (type: "tool")
Tool types run your Python or JavaScript in E2B sandboxes — secure, isolated environments. Each call spins up a fresh sandbox, runs your script, and returns the result. You have full control over everything. Execution engine: code_runtime.
When to use:
- You need full programmatic control over the execution flow
- Your use case doesn’t need an LLM at all (pure data processing, file conversion, etc.)
- You need multi-model orchestration (calling different LLMs for different steps)
- You have an existing codebase you want to wrap as an agent
- Agent types don’t give you enough control
Example
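A minimal Python tool might look like this (an illustrative sketch; the input field names are invented, and only the stdin/stdout JSON contract comes from this page):

```python
import json
import sys

def run(payload: dict) -> dict:
    """Handle one call. Example logic: count words in the provided text."""
    text = payload.get("text", "")  # "text" is a hypothetical input field
    return {"word_count": len(text.split())}

def main(stdin=sys.stdin, stdout=sys.stdout) -> None:
    # Code runtime agents read a JSON payload on stdin
    # and write a JSON result to stdout.
    payload = json.load(stdin)
    json.dump(run(payload), stdout)
```

In a real main.py you would call main() at the bottom of the module; each sandbox invocation runs the script once per call.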
Input/Output Contract
Code runtime agents communicate via stdin/stdout as JSON: the platform writes the input payload to your script's standard input and reads the JSON result from standard output.

Directory Structure
By default, the entry point is auto-detected from common filenames: main.py, app.py, index.py, main.js, index.js. Override with runtime.command in orchagent.json.
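Based on the override table earlier on this page, an explicit entry point might be declared like this (a sketch; the nesting of runtime.command is an assumption):

```json
{
  "type": "tool",
  "runtime": {
    "command": "python src/run.py"
  }
}
```

Declaring runtime.command also forces the code_runtime engine, per the override table.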
Skills in Tool Types
Tool types can access skills at runtime. When skills are passed via the --skills flag or X-Orchagent-Skills header, they are mounted as files in your sandbox:
Skills are mounted under /home/user/orchagent/skills/ with filenames like org_name_version.md. A manifest.json file provides metadata for programmatic access.
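Your code can read the mounted skill files directly. A small sketch (only the mount path and filename pattern come from this page; the manifest.json layout is not specified here, so this reads the markdown files only):

```python
from pathlib import Path

def load_skills(skills_dir: str = "/home/user/orchagent/skills") -> dict[str, str]:
    """Return mounted skill contents keyed by filename stem (org_name_version)."""
    root = Path(skills_dir)
    if not root.is_dir():
        # No skills were passed for this call.
        return {}
    return {p.stem: p.read_text() for p in sorted(root.glob("*.md"))}
```

For structured metadata (author, version, etc.), parse the adjacent manifest.json instead of relying on filename conventions.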
Skill Type (type: "skill")
Skills are passive knowledge — markdown files containing instructions, rules, or expertise that enhance agents. They are not runnable.
Use cases:
- Coding standards (React patterns, security rules)
- Domain knowledge (legal requirements, company policies)
- Writing guidelines (tone, formatting, brand voice)
SKILL.md Format
Skills use the Agent Skills standard:

Frontmatter Fields
| Field | Required | Description |
|---|---|---|
| name | Yes | Lowercase, hyphens only, max 64 chars |
| description | Yes | When to use this skill (max 1024 chars) |
| license | No | e.g., MIT |
| metadata | No | Author, version, etc. |
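Putting those fields together, a minimal SKILL.md might look like this (illustrative; the body text is invented):

```markdown
---
name: react-best-practices
description: Apply these React patterns when writing or reviewing frontend components.
license: MIT
---

Prefer function components and hooks. Keep components small,
colocate styles, and avoid prop drilling beyond two levels.
```

The frontmatter satisfies the required name and description constraints from the table above; everything after it is the passive knowledge agents consume.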
Using Skills
Install locally for any AI coding tool (e.g., orch skill install orchagent/agent-builder); skills are placed in your tool's skills directory: .claude/skills/, .cursor/skills/, .codex/skills/, .agent/skills/.
Compose with agents at run time using the --skills flag (CLI) or the X-Orchagent-Skills header (API).
Using Agents as Sub-Agents
Export agents as sub-agent configuration files for AI tools.

LLM Provider Configuration
Specify the providers your agent supports in your manifest; use "any" if it works with any provider.
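A sketch of what the declaration might look like (the providers key is an assumed field name; check the Manifest Format page for the real one):

```json
{
  "providers": ["anthropic", "openai", "gemini"]
}
```

An agent that is provider-agnostic would instead declare ["any"].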
agent types (managed loop) currently only support "anthropic". This will be expanded in the future.

Choosing the Right Type
| | prompt | agent | tool | skill |
|---|---|---|---|---|
| Best for | Single-step LLM tasks | Multi-step LLM reasoning | Your own code logic | Sharing knowledge |
| LLM involved? | Yes (one call) | Yes (iterative loop) | Optional (you decide) | No |
| Sandbox? | No | Yes (E2B) | Yes (E2B) | No |
| Providers | Any (OpenAI, Anthropic, Gemini) | Any (OpenAI, Anthropic, Gemini) | Any (you call the API) | N/A |
| Typical latency | 2-5 seconds | 10-120 seconds | 1-60 seconds | Instant (install) |
| Example | Sentiment analyzer, translator | Test fixer, code reviewer | Security scanner, PDF parser | React best practices |
| You write | prompt.md + schema.json | prompt.md + orchagent.json (loop config) | main.py or main.js | SKILL.md |
Migration Note
February 2026: orchagent uses four canonical types: prompt, tool, agent, skill. Legacy type values code and agentic are still accepted by the API and CLI for backward compatibility:

- code → tool (execution engine: code_runtime)
- agentic → agent (execution engine: managed_loop)
The execution_engine field (direct_llm, managed_loop, code_runtime) is inferred from your type at publish time. You do not need to set it manually — the type provides the right default.

Next Steps
- Manifest Format: Full orchagent.json schema
- Publishing: Publish your agent or skill
- Orchestration: Compose agents and skills
Agent Builder Skill
Run orch skill install orchagent/agent-builder to give your AI coding tool the complete platform reference for building agents.