Chapter 06 of 14 · Part 2: Build Your First Agent

Chapter 6: Defining Your Agent — Name, System Prompt, Model, Effort

By the end of this chapter, you will know every field in the agent configuration, how to write an effective system prompt, and how to version and update agents without breaking running sessions.


The Big Idea

An agent definition is a document. It says who this agent is, what it's supposed to do, which model powers it, and what tools it can use. Every session you run references this document.

Getting the agent definition right is the highest-leverage thing you'll do in Managed Agents. A well-crafted system prompt can take an agent from producing mediocre work to producing genuinely excellent work on the same task. A poorly defined agent will frustrate you with erratic behavior no matter how good the underlying model is.

According to the agent setup documentation, an agent is "a reusable, versioned configuration that defines persona and capabilities. It bundles the model, system prompt, tools, MCP servers, and skills that shape how Claude behaves during a session."

This chapter goes field by field through the agent configuration so you understand what every knob does.

Diagram: Exploded view of an agent configuration object. Central labeled box "Agent." Radiating from it: six labeled spokes — "name" (human-readable label), "model" (Claude version), "system" (prompt/persona), "tools" (capabilities), "skills" (domain expertise), "description" (for your tracking). Each spoke has a brief annotation. Small version badge in the corner: "v1 → v2 → v3 → versioned on every update."

The Analogy

Writing an agent definition is like writing a hire profile and an orientation manual in one document.

The hire profile covers: what's this person's specialty, what tools do they have access to, what model are we asking them to run on. That's the model, tools, and name fields.

The orientation manual covers: here's how we work, here's what matters, here's how to handle situations that come up. That's the system field — the system prompt.

Most teams spend ten minutes on the hire profile and rush the orientation manual. Then they wonder why their new hire keeps doing things their own way. A detailed, specific orientation manual produces reliably good work. A vague one produces erratic surprises.

Write the system prompt like you're writing an orientation manual for someone brilliant but new. Don't tell them what kind of person to be. Tell them specifically how to approach the work, what good output looks like, and what to do when they're uncertain.

Diagram: A folded "orientation manual" document with sections visible. Section headers labeled with agent config fields: "Who you are" (name, description), "How you think" (system prompt — the longest section), "What you can do" (tools), "Which brain you use" (model). Caption: "The system prompt is 80% of what determines quality."

How It Actually Works

The Full Configuration Fields

From the agent setup documentation:

Field            Required  Description
name             Yes       A human-readable name for the agent
model            Yes       The Claude model that powers the agent. All Claude 4.5 and later models are supported.
system           No        A system prompt defining the agent's behavior and persona
tools            No        The tools available to the agent
mcp_servers      No        MCP servers providing standardized third-party capabilities
skills           No        Skills supplying domain-specific context with progressive disclosure
callable_agents  No        Other agents this agent can invoke (Research Preview)
description      No        A description of what the agent does
metadata         No        Arbitrary key-value pairs for your own tracking

name — More Useful Than You Think

The name isn't just cosmetic. It appears in logs, in the Console, and in the response objects your code handles. Use a name that's meaningful at a glance: "Marketing Copy Agent" is useful; "Agent 1" is not.

If you build multiple agents for different use cases, consistent naming conventions make them manageable. A pattern like "{function}-{version}" or "{team}-{purpose}" works well at scale.
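If you want to enforce a convention mechanically, a tiny helper can normalize names into a "{team}-{purpose}" pattern. This helper is purely illustrative — it is not part of any SDK:

```python
import re

def agent_name(team: str, purpose: str) -> str:
    """Build a "{team}-{purpose}" agent name, normalized to lowercase kebab-case."""
    def slug(part: str) -> str:
        # Collapse any run of non-alphanumeric characters into a single hyphen
        return re.sub(r"[^a-z0-9]+", "-", part.lower()).strip("-")
    return f"{slug(team)}-{slug(purpose)}"

name = agent_name("Growth Marketing", "Copy Review")  # "growth-marketing-copy-review"
```

A deterministic naming function like this pays off once agent creation is scripted: the same inputs always produce the same name, so logs and Console listings stay predictable.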

model — Two Formats

The model field can be passed as a simple string ID or as an object:

  • Simple: 'claude-opus-4-7'
  • With speed mode: {"id": "claude-opus-4-6", "speed": "fast"}

The speed: fast option is a Research Preview for Opus 4.6 specifically. It uses dedicated rate limits separate from standard Opus rate limits. (Agent setup)

The default speed is "standard". The response object echoes the speed field so you can confirm what mode is active.
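To make the two formats concrete, here is a small sketch that normalizes either form into one dict shape, applying the documented default speed of "standard". The helper is illustrative — the service does this normalization server-side:

```python
def normalize_model(model):
    """Normalize the two accepted model formats into a single dict shape.

    A bare string becomes an object with the documented default speed
    of "standard"; an object form keeps whatever speed it specifies.
    """
    if isinstance(model, str):
        return {"id": model, "speed": "standard"}
    # Explicit fields in the object form win over the default
    return {"speed": "standard", **model}

simple = normalize_model("claude-opus-4-7")
fast = normalize_model({"id": "claude-opus-4-6", "speed": "fast"})
```

Because the response object echoes the speed field, comparing your normalized input against the response is a quick sanity check that the mode you intended is the mode that's active.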

system — Where Quality Is Made

Per the agent setup docs: "The system prompt is distinct from user messages, which should describe the work to be done."

This separation matters architecturally. The system prompt defines who the agent is and how it works. The user message describes the specific task for this session. Keep them separate in your thinking.

What to put in the system prompt:

  • The agent's role and specialty ("You are a financial analysis agent...")
  • Output format preferences ("Always structure reports with: Executive Summary, Key Findings, Recommendations")
  • Working style ("Before writing any code, outline your approach in plain English")
  • Error handling preferences ("If you're unsure about a data source, note the uncertainty explicitly")
  • Things to avoid ("Never delete files without confirming with the user first")

What NOT to put in the system prompt:

  • The specific task (that goes in the user message)
  • Credentials or secrets (those go in vaults)
  • Instructions so long they fill the entire context window

On sparse vs. detailed prompts:

The Quickstart example uses a deliberately minimal system prompt:

"You are a helpful coding assistant. Write clean, well-documented code."

The cookbook documentation explains: "The system prompt is deliberately sparse. We want the agent to figure out the iterate loop for itself rather than follow a step-by-step script, the test output makes the task obvious enough without further hand-holding."

This is good guidance for simple, well-defined tasks. For complex workflows with multiple sub-tasks or specific output requirements, a more detailed system prompt produces more consistent results. Use your prototype runs to calibrate how much guidance your agent needs.
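When you do need a detailed prompt, assembling it from the sections listed above keeps each concern separate and easy to iterate on. The sketch below is one way to do that — the function, section labels, and ordering are all illustrative choices, not a required format:

```python
def build_system_prompt(role: str, output_format: str, working_style: str,
                        uncertainty: str, avoid: str) -> str:
    """Assemble a system prompt from the section types discussed above:
    role, output format, working style, uncertainty handling, and
    things to avoid. Labels and ordering are illustrative."""
    sections = [
        role,
        "Output format: " + output_format,
        "Working style: " + working_style,
        "When uncertain: " + uncertainty,
        "Never: " + avoid,
    ]
    return "\n\n".join(sections)

prompt = build_system_prompt(
    role="You are a financial analysis agent.",
    output_format="Structure reports with: Executive Summary, Key Findings, Recommendations.",
    working_style="Before writing any code, outline your approach in plain English.",
    uncertainty="If you're unsure about a data source, note the uncertainty explicitly.",
    avoid="Delete files without confirming with the user first.",
)
```

Keeping the sections as separate parameters also makes diffs between prompt versions readable — you can see at a glance which behavioral rule changed between v1 and v2.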

tools — What the Agent Can Do

The tools field defines the agent's capabilities. The most important value here is the built-in toolset identifier agent_toolset_20260401, which enables all eight built-in tools by default. Chapter 7 covers each tool in detail.

tools:
  - type: agent_toolset_20260401

Custom tools and MCP tools also go here. They're covered in their respective chapters.

Versioning: How Updates Work

Every time you update an agent, a new version is created; the counter starts at 1 and increments on each update. The update command, from the agent setup documentation:

ant beta:agents update \
  --agent-id "$AGENT_ID" \
  --version "$AGENT_VERSION" \
  --system "You are a helpful coding agent. Always write tests."

Update semantics you need to know:

  • Omitted fields are preserved. You only need to include fields you're changing. To update just the system prompt, pass only --system.
  • Array fields (tools, mcp_servers, skills) are fully replaced. If you pass a new tools array, it replaces the old one entirely. To add a tool, include all existing tools plus the new one.
  • metadata is merged. Pass new keys to add them; pass existing keys to update them; omit keys to preserve them. Set a key to an empty string to delete it.
  • No-op detection. If the update produces no change, no new version is created. The existing version is returned.
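The four rules above are easy to misremember, so here is a local model of them in plain Python. This is a simulation of the documented semantics for reasoning about updates — not the service's actual code:

```python
def apply_update(current: dict, patch: dict) -> tuple[dict, bool]:
    """Simulate the documented update semantics:
    - omitted fields are preserved
    - array fields (and scalars) are replaced wholesale
    - metadata is merged; an empty-string value deletes that key
    - no-op detection: returns changed=False if nothing differed
    """
    updated = dict(current)
    for field, value in patch.items():
        if field == "metadata":
            merged = dict(current.get("metadata", {}))
            for key, val in value.items():
                if val == "":
                    merged.pop(key, None)   # empty string deletes the key
                else:
                    merged[key] = val       # add or overwrite
            updated["metadata"] = merged
        else:
            updated[field] = value          # tools, system, etc. replaced wholesale
    return updated, updated != current

agent = {
    "system": "v1 prompt",
    "tools": [{"type": "agent_toolset_20260401"}],
    "metadata": {"env": "dev", "team": "growth"},
}

# Metadata merges; tools are untouched because they were omitted from the patch.
new, changed = apply_update(agent, {"metadata": {"env": "prod", "team": ""}})
```

Running the no-op case — `apply_update(agent, {})` — returns the unchanged config with `changed` False, mirroring the service behavior of returning the existing version instead of creating a new one.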

Pinning Sessions to a Version

By default, client.beta.sessions.create(agent=agent.id, ...) uses the latest agent version. To pin to a specific version:

pinned_session = client.beta.sessions.create(
    agent={"type": "agent", "id": agent.id, "version": 1},
    environment_id=environment.id,
)

"This lets you control exactly which version runs and stage rollouts of new versions independently." (Sessions)

This is critical for production deployments: update the agent with a new system prompt, test on a few sessions pinned to the new version, then roll out broadly once you've confirmed it works.
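One way to stage that rollout is to route a deterministic fraction of sessions to the new version, keyed by user or session ID so the same caller always lands on the same version. The helper and its routing policy below are illustrative, not part of the API:

```python
import hashlib

def pick_agent_version(session_key: str, stable: int, canary: int,
                       canary_fraction: float = 0.1) -> int:
    """Deterministically route a fraction of sessions to the canary version.

    Hashing the key keeps a given user/session pinned to the same version
    across runs, so behavior doesn't flip between requests.
    """
    bucket = int(hashlib.sha256(session_key.encode()).hexdigest(), 16) % 1000
    return canary if bucket < canary_fraction * 1000 else stable

version = pick_agent_version("user-42", stable=3, canary=4, canary_fraction=0.1)
# Pass the chosen version when pinning the session, e.g.:
# client.beta.sessions.create(
#     agent={"type": "agent", "id": agent_id, "version": version}, ...)
```

Once the canary version looks healthy, raise the fraction toward 1.0 (or simply stop pinning and let sessions follow the latest version).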

Archiving Agents

When you archive an agent, it becomes read-only. New sessions cannot reference it, but existing sessions continue to run. (Agent setup)

ant beta:agents archive --agent-id "$AGENT_ID"

Use archiving rather than deletion when you're retiring an agent that has active sessions or historical data you want to preserve.

Agent Response Object — What You Get Back

When you create or retrieve an agent, the response includes these fields beyond your configuration:

  • id — e.g., "agent_01HqR2k7vXbZ9mNpL3wYcT8f"
  • type — always "agent"
  • version — starts at 1, increments on each update
  • created_at and updated_at — ISO timestamps
  • archived_at — null until archived

(Agent setup)
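A lightweight local type makes these fields easy to work with in client code. The field names come from the docs above; the dataclass itself and its `is_active` helper are illustrative, not SDK types:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentResponse:
    """Local model of the agent response fields listed above."""
    id: str
    type: str          # always "agent"
    version: int       # starts at 1, increments on each update
    created_at: str    # ISO timestamp
    updated_at: str    # ISO timestamp
    archived_at: Optional[str] = None  # null until archived

    @property
    def is_active(self) -> bool:
        # archived_at stays None until the agent is archived
        return self.archived_at is None

agent = AgentResponse(
    id="agent_01HqR2k7vXbZ9mNpL3wYcT8f", type="agent", version=1,
    created_at="2026-04-01T12:00:00Z", updated_at="2026-04-01T12:00:00Z",
)
```

Checking `is_active` before creating a session is a cheap guard against referencing a retired agent, which would fail at session creation time.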

Diagram: Agent response JSON object with callout boxes. The `id` field highlighted — "This is your reference. Use it everywhere." The `version` field highlighted — "Track this. Pass it to update calls to avoid race conditions." The `archived_at` field — "null means active. Non-null means retired." Annotated JSON shown in a code block format.


Try It Yourself

  1. Create an agent with a detailed system prompt:

    ant beta:agents create \
      --name "Research Agent v1" \
      --model '{"id": "claude-sonnet-4-6"}' \
      --system "You are a research agent specializing in market analysis. When given a topic, you: 1) Search for recent information using web_search. 2) Read the most relevant sources using web_fetch. 3) Synthesize findings into a structured report with: Executive Summary (3 sentences), Key Findings (bullet list), Sources Used (URLs). Always note the date you found information. Never claim certainty about information that might be outdated." \
      --tool '{"type": "agent_toolset_20260401"}'
    

    Note the returned id and version.

  2. Run a test session with this agent. Use the session creation and streaming pattern from Chapter 5. Give it a research task and watch how the system prompt shapes its behavior.

  3. Update the agent with a revised system prompt:

    ant beta:agents update \
      --agent-id "$AGENT_ID" \
      --version "1" \
      --system "You are a research agent specializing in market analysis. When given a topic, you: 1) Search for recent information using web_search. 2) Read the most relevant sources using web_fetch. 3) Synthesize findings into a structured report with: Executive Summary (3 sentences), Key Findings (bullet list), Sources Used (URLs). Always note the date you found information. Never claim certainty about information that might be outdated. If search results are sparse, try at least 3 different search queries before concluding there is insufficient information."
    

    Check that version incremented to 2.

  4. Run the same test with version 2. Compare the behavior. Did the addition to the prompt change how it handles sparse results?

  5. List your agent's version history:

    ant beta:agents:versions list --agent-id "$AGENT_ID"
    

    This returns all versions with timestamps — your audit trail.

Diagram: Before/after comparison of two system prompts. Left: sparse prompt (2 sentences). Right: detailed prompt (8 lines). Between them: a version badge showing v1 → v2 on update. Caption: "Your system prompt is the highest-leverage thing you control. Iterate it."


Common Pitfalls

  • Putting task-specific instructions in the system prompt. The system prompt is for permanent behavioral guidance. Specific task instructions belong in the user message for each session. If you put "Today, analyze Q1 earnings for Apple" in the system prompt, every session will try to do that, not just the one you intended.

  • Forgetting array replacement semantics. If you update tools and pass only the new tool you're adding, you'll wipe out all existing tools. Always include the full array when updating array fields.

  • Not versioning deliberately. Version increments happen automatically on every meaningful update. Use the --version flag on updates to pass the current version as a guard against accidentally updating from a stale state.

  • Ignoring the metadata field. This field is for your own organizational data — team name, project code, environment (dev/staging/prod), whatever helps you manage agents at scale. Set it up from the start and you'll thank yourself when you have twenty agents.

  • Writing the system prompt in one shot. Your first system prompt will not be your best one. Plan to iterate at least three to five times based on prototype session observations before calling it production-ready.



Toolkit

  • System Prompt Design Framework — A structured template with sections for: Role definition, Output format, Working style, Uncertainty handling, Things to avoid. Includes annotated examples for three agent types.

  • Agent Version Change Log Template — A lightweight Markdown template for tracking agent version changes: version number, date, what changed, why, and observed quality impact.


Chapter Recap

  • The agent definition bundles model, system prompt, tools, MCP servers, skills, and metadata into a reusable, versioned configuration. Create it once; reference it by ID for every session.
  • The system prompt is the highest-leverage element. Treat it like an orientation manual: specific about behavior and output quality, separate from task-specific instructions.
  • Updates create new versions automatically. Pass the current version number to update calls to avoid race conditions. Pin sessions to specific versions for controlled rollouts.