orchagent has four canonical types: `prompt`, `tool`, `agent`, and `skill`.
Building with an AI coding assistant? Install the agent-builder skill to give your AI the complete platform reference — sandbox contracts, boilerplate code, environment details, and debugging patterns:
```
orch skill install orchagent/agent-builder
```
This works with Claude Code, Cursor, Amp, and other AI tools. Your AI will have everything it needs to build agents on orchagent without trial and error.
The type field sets sensible defaults for how your agent executes. You can still override with explicit declarations:
| Override | Effect |
| --- | --- |
| Adding `runtime.command` to any type | Forces `code_runtime` |
| Adding `loop` config to any type | Forces `managed_loop` |
| Neither declared on a `prompt` type | Uses `direct_llm` (the default) |
You also control when the agent runs:
| `run_mode` | Behavior |
| --- | --- |
| `on_demand` (default) | Each call is independent; run via CLI, API, or schedule |
| `always_on` | Persistent service: Discord bots, webhook listeners, workers |
And whether the agent is callable by other agents:
| `callable` | Behavior |
| --- | --- |
| `true` (default) | Other agents can call this agent as a dependency |
| `false` | Only users can call this agent (e.g., always-on services) |
Start with the type that matches your use case. The type provides the right execution defaults automatically. You only need to override with runtime or loop declarations if you’re doing something non-standard.
Ask yourself one question: what does your agent need to do?
```
What are you building?
│
├─ "LLM answers a question / generates content"
│  └─ type: "prompt"
│     e.g. sentiment analyzer, summarizer, translator, code explainer
│
├─ "My code does the work (maybe calls an LLM inside)"
│  └─ type: "tool"
│     e.g. security scanner, data pipeline, file converter, API integration
│
├─ "LLM figures things out using tools"
│  └─ type: "agent"
│     e.g. test fixer, code reviewer, research agent, deploy assistant
│
└─ "I want to share knowledge with agents or AI tools"
   └─ type: "skill"
      e.g. coding standards, security rules, brand guidelines
```
Start with the simplest type that works. Most use cases need only `prompt` (a prompt template plus an output schema). If you need the LLM to iterate with tools, use `agent`. Only reach for `tool` when you need full programmatic control or don’t need an LLM at all.
Not sure which pattern fits? Find your use case below:
| I want to… | Type | Why |
| --- | --- | --- |
| Analyze sentiment or classify text | `prompt` | One LLM call, structured JSON output |
| Summarize or translate documents | `prompt` | One LLM call, no tools needed |
| Generate marketing copy or emails | `prompt` | Prompt engineering, structured output |
| Extract data from text (names, dates, etc.) | `prompt` | One LLM call with output schema |
| Fix code until tests pass | `agent` | LLM reads code, runs tests, iterates |
| Review pull requests or audit code | `agent` | LLM navigates files, checks patterns |
| Research a topic and write a report | `agent` | LLM searches, reads, synthesizes |
| Scan repos for secrets or vulnerabilities | `tool` | Deterministic logic, fast, no LLM needed |
| Process uploaded files (PDF, CSV, images) | `tool` | File I/O, custom parsing logic |
| Call external APIs and transform data | `tool` | Full HTTP control, auth, error handling |
| Run a multi-model pipeline (different LLMs per step) | `tool` | You control which LLM handles each step |
| Build a Discord bot or webhook listener | `tool` | Persistent service, event-driven |
| Share coding standards with your team | `skill` | Passive knowledge, works with any AI tool |
| Package domain expertise (legal, medical, etc.) | `skill` | Reusable across multiple agents |
Still unsure? Start with type: "prompt". If you find yourself thinking “I wish it could run a command” or “it needs to iterate,” switch to type: "agent". If you need full control, use type: "tool". You can always change later — just update the type field.
The simplest type. You provide a prompt template with variable placeholders, and orchagent handles the LLM call. Execution engine: `direct_llm`.

When to use:

- A single LLM call does the job (classification, summarization, extraction, generation)
- No tools or iteration needed; a structured output schema is enough

Example templates:
```
Analyze the sentiment of the following text and return a JSON object
with 'sentiment' (positive, negative, or neutral) and 'confidence' (0-1).

Text: {{text}}
```
```
Summarize the following {{document_type}} in {{language}}:

{{content}}

Focus on: {{focus_areas}}
```
Variables are replaced with input values at runtime. All template variables must be provided with non-empty values — the API returns a 400 MISSING_INPUT_FIELDS error listing any that are missing.
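The substitution and validation step can be sketched roughly like this. The function name `render_prompt` is hypothetical, for illustration only; it is not part of the orchagent API, but it mirrors the missing-field behavior described above.

```python
import re

def render_prompt(template: str, inputs: dict) -> str:
    """Illustrative sketch of template substitution with input validation."""
    # Find all {{variable}} placeholders in the template
    variables = set(re.findall(r"\{\{(\w+)\}\}", template))
    # Reject missing or empty values, mirroring the platform's
    # 400 MISSING_INPUT_FIELDS error described above
    missing = sorted(v for v in variables if not inputs.get(v))
    if missing:
        raise ValueError(f"MISSING_INPUT_FIELDS: {missing}")
    # Replace each placeholder with its input value
    for name in variables:
        template = template.replace("{{" + name + "}}", str(inputs[name]))
    return template
```

For example, `render_prompt("Text: {{text}}", {"text": "Great product!"})` fills the placeholder, while calling it with an empty `text` raises the validation error.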
Agent types give the LLM a tool-use loop inside a sandbox. Think of it as “Claude Code in a container, configured by you.” The platform provides built-in tools (bash, file read/write, list files), and you can define custom command-wrapper tools. The LLM iterates autonomously until it solves the task and submits a result. Execution engine: `managed_loop`.

When to use:
The task requires running commands, reading/writing files, or iterating
You want the LLM to figure out the steps, not hard-code them
You’d otherwise write code just to orchestrate LLM + subprocess calls
Managed loop agents currently support Anthropic (Claude) only.

Why? The managed loop uses Claude’s native tool-use protocol: the platform sends a system prompt with tool definitions, the LLM returns `tool_use` blocks, the platform executes them in the sandbox, and feeds `tool_result` messages back. This cycle repeats until the LLM calls `submit_result` or hits `max_turns`. The implementation relies on Anthropic-specific message formatting (system/user/assistant roles with structured tool-use content blocks) that doesn’t map 1:1 to other providers’ tool-calling APIs.

Multi-provider support for the managed loop is on the roadmap. In the meantime, if you need to use OpenAI or Gemini models in a tool-use loop, use a code runtime agent instead: you have full control over the LLM calls and can use any provider’s SDK directly.
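The cycle described above can be sketched as a plain Python loop. This is a structural illustration only, not the platform's implementation: `call_model` stands in for the real LLM API client, and the message shapes follow the Anthropic-style `tool_use`/`tool_result` protocol named above.

```python
def run_loop(call_model, tools, max_turns=10):
    """Sketch of a managed tool-use loop (illustrative, not the real engine)."""
    messages = [{"role": "user", "content": "Fix the failing test."}]
    for _ in range(max_turns):
        # Assistant turn: the model replies with content blocks
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply})
        tool_calls = [b for b in reply if b["type"] == "tool_use"]
        if not tool_calls:
            break  # model answered without requesting a tool
        results = []
        for call in tool_calls:
            if call["name"] == "submit_result":
                return call["input"]  # loop ends when the model submits
            # Execute the requested tool in the sandbox and record the result
            output = tools[call["name"]](**call["input"])
            results.append({"type": "tool_result",
                            "tool_use_id": call["id"],
                            "content": output})
        # Feed tool results back as the next user turn
        messages.append({"role": "user", "content": results})
    return None  # hit max_turns without a submitted result
```

The `max_turns` guard and the `submit_result` exit correspond directly to the loop termination conditions described above.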
Tool types run your Python or JavaScript in E2B sandboxes: secure, isolated environments. Each call spins up a fresh sandbox, runs your script, and returns the result. You have full control over everything. Execution engine: `code_runtime`.

When to use:
You need full programmatic control over the execution flow
Your use case doesn’t need an LLM at all (pure data processing, file conversion, etc.)
You need multi-model orchestration (calling different LLMs for different steps)
You have an existing codebase you want to wrap as an agent
Tool types can access skills at runtime. When skills are passed via the --skills flag or X-Orchagent-Skills header, they are mounted as files in your sandbox:
```python
import os
import json
from pathlib import Path

skills_dir = os.environ.get("ORCHAGENT_SKILLS_DIR")
if skills_dir:
    skills_path = Path(skills_dir)

    # Read all skill files
    for skill_file in skills_path.glob("*.md"):
        content = skill_file.read_text()
        # Use skill content in your prompts or logic

    # Or read the manifest for metadata
    manifest = json.loads((skills_path / "manifest.json").read_text())
    for skill in manifest:
        print(f"Skill: {skill['org']}/{skill['name']}@{skill['version']}")
```
Skills are written to `/home/user/orchagent/skills/` with filenames like `org_name_version.md`. A `manifest.json` file provides metadata for programmatic access.
```
---
name: react-best-practices
description: React optimization patterns for performance-critical apps
license: MIT
metadata:
  author: yourname
  version: "1.0"
---

## Rules

- Use functional components over class components
- Memoize expensive computations with useMemo
- Avoid inline function definitions in JSX
```
```
# Install to current project
orch skill install yourorg/react-best-practices

# Install globally (available in all projects)
orch skill install yourorg/react-best-practices --global

# Install to specific formats only
orch skill install yourorg/react-best-practices --format claude-code,cursor
```
Writes to `.claude/skills/`, `.cursor/skills/`, `.codex/skills/`, `.agent/skills/`.

Compose with agents at run time:
```
orch run yourorg/code-reviewer --skills yourorg/react-best-practices
```
February 2026: orchagent uses four canonical types: `prompt`, `tool`, `agent`, `skill`. Legacy type values `code` and `agentic` are still accepted by the API and CLI for backward compatibility:

- `code` → `tool` (execution engine: `code_runtime`)
- `agentic` → `agent` (execution engine: `managed_loop`)

The `execution_engine` field (`direct_llm`, `managed_loop`, `code_runtime`) is inferred from your type at publish time. You do not need to set it manually; the type provides the right default.
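The inference described above amounts to a simple mapping. This is an illustrative sketch, not the platform's actual code; it combines the type defaults, the legacy aliases, and the `runtime.command`/`loop` overrides documented earlier.

```python
# Legacy type values still accepted for backward compatibility
LEGACY_ALIASES = {"code": "tool", "agentic": "agent"}

# Default execution engine per canonical type
DEFAULT_ENGINE = {
    "prompt": "direct_llm",
    "tool": "code_runtime",
    "agent": "managed_loop",
    "skill": None,  # skills are passive knowledge, not executed
}

def infer_engine(agent_type, has_runtime_command=False, has_loop=False):
    """Sketch of engine inference at publish time (illustrative only)."""
    agent_type = LEGACY_ALIASES.get(agent_type, agent_type)
    # Explicit declarations override the type default
    if has_runtime_command:
        return "code_runtime"
    if has_loop:
        return "managed_loop"
    return DEFAULT_ENGINE[agent_type]
```

For example, a legacy `code` type resolves to `tool` and therefore `code_runtime`, while a `prompt` type with a `loop` declaration is forced to `managed_loop`.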