orchagent has four canonical types: prompt, tool, agent, and skill.
Building with an AI coding assistant? Install the agent-builder skill to give your AI the complete platform reference — sandbox contracts, boilerplate code, environment details, and debugging patterns:
orch skill install orchagent/agent-builder
This works with Claude Code, Cursor, Amp, and other AI tools. Your AI will have everything it needs to build agents on orchagent without trial and error.

The Four Types

| Type | What it is | Default execution engine |
|---|---|---|
| prompt | Prompt template + schema → single LLM call | direct_llm |
| tool | Python or JavaScript code runs in a sandbox | code_runtime |
| agent | LLM tool-use loop with custom tools | managed_loop |
| skill | Passive knowledge (markdown) — not runnable | N/A |
The first three are executable. Skills are passive knowledge that enhances other types.
Quick distinction: prompt types answer questions. tool types run your code. agent types reason and iterate with tools. skill types teach other agents.

Type determines execution engine

The type field sets sensible defaults for how your agent executes. You can still override with explicit declarations:
| Override | Effect |
|---|---|
| Adding runtime.command to any type | Forces code_runtime |
| Adding loop config to any type | Forces managed_loop |
| Neither declared on a prompt type | Uses direct_llm (the default) |
You also control when the agent runs:
| run_mode | Behavior |
|---|---|
| on_demand (default) | Each call is independent — run via CLI, API, or schedule |
| always_on | Persistent service — Discord bots, webhook listeners, workers |
And whether the agent is callable by other agents:
| callable | Behavior |
|---|---|
| true (default) | Other agents can call this agent as a dependency |
| false | Only users can call this agent (e.g., always-on services) |
Start with the type that matches your use case. The type provides the right execution defaults automatically. You only need to override with runtime or loop declarations if you’re doing something non-standard.

Which Type Should I Use?

Ask yourself one question: what does your agent need to do?
What are you building?

├─ "LLM answers a question / generates content"
│   └─ type: "prompt"
│      e.g. sentiment analyzer, summarizer, translator, code explainer

├─ "My code does the work (maybe calls an LLM inside)"
│   └─ type: "tool"
│      e.g. security scanner, data pipeline, file converter, API integration

├─ "LLM figures things out using tools"
│   └─ type: "agent"
│      e.g. test fixer, code reviewer, research agent, deploy assistant

└─ "I want to share knowledge with agents or AI tools"
    └─ type: "skill"
       e.g. coding standards, security rules, brand guidelines
Start with the simplest type that works. Most use cases need only a prompt (prompt + schema). If you need the LLM to iterate with tools, use agent. Only reach for tool when you need full programmatic control or don’t need an LLM.

Common Use Cases

Not sure which pattern fits? Find your use case below:
| I want to… | Type | Why |
|---|---|---|
| Analyze sentiment or classify text | prompt | One LLM call, structured JSON output |
| Summarize or translate documents | prompt | One LLM call, no tools needed |
| Generate marketing copy or emails | prompt | Prompt engineering, structured output |
| Extract data from text (names, dates, etc.) | prompt | One LLM call with output schema |
| Fix code until tests pass | agent | LLM reads code, runs tests, iterates |
| Review pull requests or audit code | agent | LLM navigates files, checks patterns |
| Research a topic and write a report | agent | LLM searches, reads, synthesizes |
| Scan repos for secrets or vulnerabilities | tool | Deterministic logic, fast, no LLM needed |
| Process uploaded files (PDF, CSV, images) | tool | File I/O, custom parsing logic |
| Call external APIs and transform data | tool | Full HTTP control, auth, error handling |
| Run a multi-model pipeline (different LLMs per step) | tool | You control which LLM handles each step |
| Build a Discord bot or webhook listener | tool | Persistent service, event-driven |
| Share coding standards with your team | skill | Passive knowledge, works with any AI tool |
| Package domain expertise (legal, medical, etc.) | skill | Reusable across multiple agents |
Still unsure? Start with type: "prompt". If you find yourself thinking “I wish it could run a command” or “it needs to iterate,” switch to type: "agent". If you need full control, use type: "tool". You can always change later — just update the type field.

Prompt Type (type: "prompt")

The simplest type. You provide a prompt template with variable placeholders, and orchagent handles the LLM call. Execution engine: direct_llm. When to use:
  • Single LLM call is sufficient
  • No external API calls needed
  • No complex logic or branching
What you provide:
my-agent/
+-- orchagent.json      # Manifest (type: "prompt")
+-- prompt.md           # Your prompt template
+-- schema.json         # Input/output schemas (optional)
\-- README.md           # Documentation (optional)

Example

orchagent.json:
{
  "name": "sentiment-analyzer",
  "type": "prompt",
  "description": "Analyze sentiment of text",
  "supported_providers": ["openai", "anthropic"]
}
prompt.md:
Analyze the sentiment of the following text and return a JSON object
with 'sentiment' (positive, negative, or neutral) and 'confidence' (0-1).

Text: {{text}}
schema.json:
{
  "input": {
    "type": "object",
    "properties": {
      "text": { "type": "string", "description": "Text to analyze" }
    },
    "required": ["text"]
  },
  "output": {
    "type": "object",
    "properties": {
      "sentiment": { "type": "string", "enum": ["positive", "negative", "neutral"] },
      "confidence": { "type": "number", "minimum": 0, "maximum": 1 }
    }
  }
}
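The output schema constrains what the LLM may return. As a rough illustration of the constraint it expresses — plain Python, not platform code — a reply is acceptable only if it satisfies the enum and range above:

```python
def valid_output(reply: dict) -> bool:
    # Illustrative check mirroring the output schema above:
    # 'sentiment' must be one of the enum values, 'confidence' in [0, 1].
    return (reply.get("sentiment") in {"positive", "negative", "neutral"}
            and isinstance(reply.get("confidence"), (int, float))
            and 0 <= reply["confidence"] <= 1)

print(valid_output({"sentiment": "positive", "confidence": 0.92}))  # True
print(valid_output({"sentiment": "meh", "confidence": 0.92}))       # False
```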

Prompt Variables

Use {{variable}} syntax in your prompt.md:
Summarize the following {{document_type}} in {{language}}:

{{content}}

Focus on: {{focus_areas}}
Variables are replaced with input values at runtime. All template variables must be provided with non-empty values — the API returns a 400 MISSING_INPUT_FIELDS error listing any that are missing.
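Conceptually, the substitution works like the sketch below. This is an illustration of the behavior described above, not the platform's actual implementation — a missing or empty variable fails the call instead of rendering a blank:

```python
import re

def render_prompt(template: str, inputs: dict) -> str:
    # Find every {{variable}} placeholder in the template.
    placeholders = set(re.findall(r"\{\{(\w+)\}\}", template))
    # Reject missing or empty values, mirroring MISSING_INPUT_FIELDS.
    missing = [p for p in sorted(placeholders) if not inputs.get(p)]
    if missing:
        raise KeyError(f"MISSING_INPUT_FIELDS: {missing}")
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(inputs[m.group(1)]), template)

print(render_prompt("Translate {{text}} to {{language}}.",
                    {"text": "hola", "language": "English"}))
```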

Agent Type (type: "agent")

Agent types give the LLM a tool-use loop inside a sandbox. Think of it as “Claude Code in a container, configured by you.” The platform provides built-in tools (bash, file read/write, list files) and you can define custom command-wrapper tools. The LLM iterates autonomously until it solves the task and submits a result. Execution engine: managed_loop. When to use:
  • The task requires running commands, reading/writing files, or iterating
  • You want the LLM to figure out the steps, not hard-code them
  • You’d otherwise write code just to orchestrate LLM + subprocess calls
What you provide:
my-agent/
+-- orchagent.json      # Manifest (type: "agent", loop + custom_tools)
+-- prompt.md           # Agent instructions (system prompt)
+-- schema.json         # Input/output schemas (optional)
+-- Dockerfile          # Custom environment (optional)
\-- requirements.txt    # Extra sandbox deps (optional)
What you declare in the manifest:
{
  "name": "cairo-test-engineer",
  "type": "agent",
  "description": "Fixes Cairo code until tests pass",
  "supported_providers": ["anthropic"],
  "loop": {
    "max_turns": 30
  },
  "timeout_seconds": 300,
  "custom_tools": [
    {
      "name": "run_tests",
      "description": "Run the Cairo test suite with snforge",
      "command": "snforge test"
    },
    {
      "name": "build_project",
      "description": "Build the scarb project",
      "command": "scarb build"
    }
  ]
}
What the platform provides:
  1. E2B sandbox with your custom environment (if Dockerfile provided)
  2. Built-in tools: bash, read_file, write_file, list_files, submit_result
  3. Your custom tools converted to named tool definitions
  4. A managed loop that runs until the LLM calls submit_result or hits max_turns
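Conceptually, the managed loop behaves like the sketch below. This is a simplified illustration, not the platform's implementation — the stub LLM stands in for real Claude tool-use responses:

```python
def managed_loop(llm, tools, task, max_turns=25):
    # Illustrative sketch only: call the LLM, execute requested tools,
    # feed results back, until submit_result or max_turns.
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        for call in llm(messages):  # LLM returns tool-use requests
            if call["name"] == "submit_result":
                return call["input"]  # final structured output ends the loop
            result = tools[call["name"]](**call.get("input", {}))
            messages.append({"role": "tool", "name": call["name"],
                             "content": result})
    raise RuntimeError("max_turns reached without submit_result")

# Stub LLM: runs the tests once, then submits a result.
def stub_llm(messages):
    if not any(m["role"] == "tool" for m in messages):
        return [{"name": "run_tests", "input": {}}]
    return [{"name": "submit_result", "input": {"status": "pass"}}]

tools = {"run_tests": lambda: "2 passed"}
print(managed_loop(stub_llm, tools, "fix the failing tests"))
```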

Custom Tools

Custom tools are command wrappers that give the LLM clean, named operations instead of having to guess shell commands. Simple tools (no parameters):
{
  "name": "run_tests",
  "description": "Run the test suite",
  "command": "pytest"
}
Tools with parameters (use {{param}} placeholders):
{
  "name": "deploy",
  "description": "Deploy to the specified network",
  "command": "sncast deploy --network {{network}}",
  "input_schema": {
    "type": "object",
    "properties": {
      "network": { "type": "string", "description": "Target network (testnet/mainnet)" }
    },
    "required": ["network"]
  }
}
The LLM sees named tools like run_tests and deploy instead of guessing raw bash commands. Bash is always available as a fallback for ad-hoc commands.
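The exact substitution rules are platform-defined, but conceptually each {{param}} in the command is replaced with the LLM-supplied argument before the command runs in the sandbox. A rough sketch, assuming shell quoting of values:

```python
import re
import shlex

def render_command(command: str, args: dict) -> str:
    # Hypothetical sketch: substitute {{param}} placeholders with
    # shell-quoted argument values.
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: shlex.quote(str(args[m.group(1)])), command)

print(render_command("sncast deploy --network {{network}}",
                     {"network": "testnet"}))
# → sncast deploy --network testnet
```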

Built-in Tools

Every managed loop agent automatically gets these tools:
| Tool | Description |
|---|---|
| bash | Run shell commands (120s per-command timeout) |
| read_file | Read file contents |
| write_file | Write/create files (auto-creates parent directories) |
| list_files | List directory contents (optional recursive) |
| submit_result | Submit final structured output and end the loop |

Safety Limits

| Limit | Default | Configurable |
|---|---|---|
| loop.max_turns | 25 | Yes, in orchagent.json (platform max: 50) |
| Per-command timeout | 120 seconds | No |
| Overall timeout | Agent’s timeout_seconds | Yes |

Provider Support

Managed loop agents currently support Anthropic (Claude) only.

Why? The managed loop uses Claude’s native tool-use protocol: the platform sends a system prompt with tool definitions, the LLM returns tool_use blocks, the platform executes them in the sandbox, and feeds tool_result messages back. This cycle repeats until the LLM calls submit_result or hits max_turns. The implementation relies on Anthropic-specific message formatting (system/user/assistant roles with structured tool-use content blocks) that doesn’t map 1:1 to other providers’ tool-calling APIs.

Multi-provider support for the managed loop is on the roadmap. In the meantime, if you need OpenAI or Gemini models in a tool-use loop, use a tool type (code_runtime) instead — you have full control over the LLM calls and can use any provider’s SDK directly.

Tool Type (type: "tool")

Tool types run your Python or JavaScript in E2B sandboxes — secure, isolated environments. Each call spins up a fresh sandbox, runs your script, and returns the result. You have full control over everything. Execution engine: code_runtime. When to use:
  • You need full programmatic control over the execution flow
  • Your use case doesn’t need an LLM at all (pure data processing, file conversion, etc.)
  • You need multi-model orchestration (calling different LLMs for different steps)
  • You have an existing codebase you want to wrap as an agent
  • Agent types don’t give you enough control
What you declare in the manifest:
{
  "name": "leak-finder",
  "type": "tool",
  "description": "Finds leaked secrets in codebases",
  "supported_providers": ["gemini"],
  "runtime": {
    "command": "python main.py"
  }
}

Example

# main.py
import json
import sys

def main():
    # Read input from stdin
    input_data = json.load(sys.stdin)
    repo_url = input_data.get("repo_url")

    # Your logic here: clone repo, scan files, call LLM, etc.
    result = {
        "issues": ["Found hardcoded API key in config.py"],
        "risk_score": 0.7
    }

    # Write output to stdout
    print(json.dumps(result))

if __name__ == "__main__":
    main()

Input/Output Contract

Code runtime agents communicate via stdin/stdout as JSON. Standard input:
{"repo_url": "https://github.com/user/repo"}
File uploads: When files are uploaded, you receive a manifest:
{
  "files": [
    {
      "path": "/tmp/uploads/invoice.pdf",
      "original_name": "invoice.pdf",
      "content_type": "application/pdf",
      "size_bytes": 1234567
    }
  ]
}
Standard output:
{"issues": ["..."], "risk_score": 0.7}
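Putting the contract together, a handler that accepts both plain fields and the uploaded-file manifest might look like this. The sketch keeps the I/O separate from the logic; in a real entry point, input_data would come from json.load(sys.stdin) and the result would be printed to stdout, as in the earlier example:

```python
import json

def handle(input_data: dict) -> dict:
    # The "files" key is present only when the caller uploaded files.
    uploads = input_data.get("files", [])
    return {
        "received_files": [f["original_name"] for f in uploads],
        "total_bytes": sum(f["size_bytes"] for f in uploads),
    }

example = {"files": [{"path": "/tmp/uploads/invoice.pdf",
                      "original_name": "invoice.pdf",
                      "content_type": "application/pdf",
                      "size_bytes": 1234567}]}
print(json.dumps(handle(example)))
```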

Directory Structure

my-agent/
+-- orchagent.json      # Agent manifest
+-- main.py             # Entry point
+-- requirements.txt    # Dependencies
\-- README.md           # Documentation (optional)
The CLI auto-detects entrypoints: main.py, app.py, index.py, main.js, index.js. Override with:
{"entrypoint": "run.py"}

Skills in Tool Types

Tool types can access skills at runtime. When skills are passed via the --skills flag or X-Orchagent-Skills header, they are mounted as files in your sandbox:
import os
from pathlib import Path

skills_dir = os.environ.get("ORCHAGENT_SKILLS_DIR")
if skills_dir:
    skills_path = Path(skills_dir)

    # Read all skill files
    for skill_file in skills_path.glob("*.md"):
        content = skill_file.read_text()
        # Use skill content in your prompts or logic

    # Or read the manifest for metadata
    import json
    manifest = json.loads((skills_path / "manifest.json").read_text())
    for skill in manifest:
        print(f"Skill: {skill['org']}/{skill['name']}@{skill['version']}")
Skills are written to /home/user/orchagent/skills/ with filenames like org_name_version.md. A manifest.json file provides metadata for programmatic access.

Skill Type (type: "skill")

Skills are passive knowledge — markdown files containing instructions, rules, or expertise that enhance agents. They are not runnable. Use cases:
  • Coding standards (React patterns, security rules)
  • Domain knowledge (legal requirements, company policies)
  • Writing guidelines (tone, formatting, brand voice)

SKILL.md Format

Skills use the Agent Skills standard:
---
name: react-best-practices
description: React optimization patterns for performance-critical apps
license: MIT
metadata:
  author: yourname
  version: "1.0"
---

## Rules

- Use functional components over class components
- Memoize expensive computations with useMemo
- Avoid inline function definitions in JSX

Frontmatter Fields

| Field | Required | Description |
|---|---|---|
| name | Yes | Lowercase, hyphens only, max 64 chars |
| description | Yes | When to use this skill (max 1024 chars) |
| license | No | e.g., MIT |
| metadata | No | Author, version, etc. |

Using Skills

Install locally for any AI coding tool:
# Install to current project
orch skill install yourorg/react-best-practices

# Install globally (available in all projects)
orch skill install yourorg/react-best-practices --global

# Install to specific formats only
orch skill install yourorg/react-best-practices --format claude-code,cursor
This writes to .claude/skills/, .cursor/skills/, .codex/skills/, or .agent/skills/ depending on format. Compose with agents at run time:
orch run yourorg/code-reviewer --skills yourorg/react-best-practices

Using Agents as Sub-Agents

Export agents as sub-agent configuration files for AI tools:
# Install agent as Claude Code sub-agent
orch install yourorg/code-reviewer

# Install to Cursor
orch install yourorg/code-reviewer --format cursor

# Install to project only
orch install yourorg/code-reviewer --scope project

# Update installed agents
orch update
See CLI Commands for full details.

LLM Provider Configuration

Specify supported providers in your manifest:
{"supported_providers": ["openai", "anthropic", "gemini"]}
Use "any" if your agent works with any provider:
{"supported_providers": ["any"]}
agent types (managed loop) currently only support "anthropic". This will be expanded in the future.

Choosing the Right Type

| | prompt | agent | tool | skill |
|---|---|---|---|---|
| Best for | Single-step LLM tasks | Multi-step LLM reasoning | Your own code logic | Sharing knowledge |
| LLM involved? | Yes (one call) | Yes (iterative loop) | Optional (you decide) | No |
| Sandbox? | No | Yes (E2B) | Yes (E2B) | No |
| Providers | Any (OpenAI, Anthropic, Gemini) | Anthropic only (see Provider Support) | Any (you call the API) | N/A |
| Typical latency | 2-5 seconds | 10-120 seconds | 1-60 seconds | Instant (install) |
| Example | Sentiment analyzer, translator | Test fixer, code reviewer | Security scanner, PDF parser | React best practices |
| You write | prompt.md + schema.json | prompt.md + orchagent.json (loop config) | main.py or main.js | SKILL.md |

Migration Note

February 2026: orchagent uses four canonical types: prompt, tool, agent, skill. Legacy type values code and agentic are still accepted by the API and CLI for backward compatibility:
  • code → tool (execution engine: code_runtime)
  • agentic → agent (execution engine: managed_loop)
The execution_engine field (direct_llm, managed_loop, code_runtime) is inferred from your type at publish time. You do not need to set it manually — the type provides the right default.

Next Steps

Manifest Format

Full orchagent.json schema

Publishing

Publish your agent or skill

Orchestration

Compose agents and skills

Agent Builder Skill

Run orch skill install orchagent/agent-builder to give your AI coding tool the complete platform reference for building agents.