

Module 62 · AI Engineering

👔 Coordinator/Worker Pattern

The coordinator writes prompts, not code — it manages a team of worker agents


When a task is too complex for one agent — a multi-file refactor, a cross-module migration — a coordinator agent takes over. It doesn't write code itself. It writes prompts for worker agents, each with a restricted tool set. The coordinator plans, the workers execute, and results flow back for integration.

  • Coordinator gets management tools (Agent, SendMessage, Read) — no Bash, Edit, or Write
  • Workers get execution tools (Bash, Edit, Write) — no Agent tool, preventing recursion
  • Multiple workers run in parallel on independent subtasks — the coordinator aggregates results
🗺️

Architecture Overview

What you're seeing

The two-tier agent hierarchy. The coordinator holds only management tools (Agent, Read, Grep) and never writes code — its context stays clean for planning. Workers receive execution tools (Bash, Edit, Write) but no Agent tool, preventing uncontrolled recursion. Tasks flow down in parallel; results flow back up via tool_result.

What to notice

The “No Agent tool” constraint on workers is the key safety mechanism. Without it, Worker A could spawn Sub-Worker A1, creating an unbounded tree that exhausts context and API budget. Tool-level isolation is simpler and more reliable than depth limits.

[Diagram: Coordinator/Worker Pattern] The coordinator (Agent · Read · Grep — plans only, no Bash/Edit) decomposes the task and dispatches in parallel to Worker A (login.ts), Worker B (middleware.ts), Worker C (session.ts), and Worker D (__tests__/), each holding Bash · Edit · Write. Results return as tool_result for integration and verification. Workers have no Agent tool, preventing unbounded recursion. Legend: coordinator (management tools), worker (execution tools), task dispatch, result aggregation.
🎮

Coordinator/Worker Flow

What you're seeing

A coordinator receives a complex task (refactor auth module across 4 files), decomposes it into subtasks, spawns parallel workers, and integrates the results. Notice how the coordinator never touches code directly.

What to try

Trace which tools each agent has access to. Notice the coordinator uses Agent and Read, while workers use Bash and Edit. No worker can spawn another worker.

# Task: Refactor auth module across 4 files

COORDINATOR (tools: Agent, Read, Grep, Glob, SendMessage)

1. Read src/auth/ to understand current structure

2. Plan: split into 4 independent subtasks

3. Spawn workers (parallel):

Worker A (tools: Bash, Edit, Write, Read, Grep)

→ "Refactor auth/login.ts: extract validation logic"

Worker B (tools: Bash, Edit, Write, Read, Grep)

→ "Update auth/middleware.ts: new token format"

Worker C (tools: Bash, Edit, Write, Read, Grep)

→ "Migrate auth/session.ts to new store"

Worker D (tools: Bash, Edit, Write, Read, Grep)

→ "Update all tests in auth/__tests__/"

COORDINATOR (aggregation)

4. Gather results from all 4 workers

5. Read modified files to verify integration

6. Run tests via one more worker if needed

Result: 4 files refactored, tests passing

💡

The Intuition

Why Can't the Coordinator Code Too?

If it starts writing code, its context fills up with implementation details. By turn 10, it has lost track of the overall plan. Separation of concerns: the coordinator's context stays clean for planning, workers get fresh context for each subtask.

Same principle as a tech lead who reviews but does not write code in PRs — staying at the bird's-eye view is the job.

Coordinator Mode

A coordinator is a special agent that doesn't write code — it writes prompts. When COORDINATOR_MODE is active, the agent's tool set changes: it gets management tools (Agent, TeamCreate, TeamDelete, SendMessage) instead of execution tools (Bash, Edit, Write). The coordinator reads code to understand the task, then delegates execution entirely.

💡 Tip · The coordinator's context window stays clean for high-level decisions. It never fills up with implementation details, diffs, or compiler output — that stays in worker contexts.

Worker Restrictions

Workers get execution tools but critically no Agent tool. This prevents uncontrolled recursion: a worker cannot spawn sub-workers, creating an ever-deepening tree of agents. The hierarchy is strictly two levels — coordinator spawns workers, workers execute and return. Each worker also has restricted Bash patterns to prevent destructive operations.
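The Bash restriction mentioned above can be sketched as a simple allowlist/denylist filter over commands. This is a minimal illustration — the pattern lists and function name are hypothetical, not the actual restriction set:

```typescript
// Hypothetical Bash-command filter for workers: deny destructive patterns,
// then allow only explicitly listed command shapes.
const DENIED_BASH: RegExp[] = [
  /\brm\s+-rf\b/,          // destructive recursive deletes
  /\bgit\s+push\s+--force\b/,
];

const ALLOWED_BASH: RegExp[] = [
  /^git (status|diff|log)\b/,
  /^(npm|yarn) (test|run build)\b/,
  /^node \S+/,
];

function isBashAllowed(cmd: string): boolean {
  // deny rules win over allow rules
  if (DENIED_BASH.some((re) => re.test(cmd))) return false;
  return ALLOWED_BASH.some((re) => re.test(cmd));
}
```

Anything not matching an allow pattern is rejected by default, so the worker's blast radius is bounded even if its prompt is compromised.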

Environment Gating

The tool set is determined by an environment flag at startup. When COORDINATOR_MODE is set, the tool registry filters to coordinator tools only. Workers are spawned with their own restricted tool sets passed explicitly. This is enforced at the registry level — no prompt injection can grant a worker the Agent tool.

✨ Insight · This is the pattern behind complex multi-file refactors: the coordinator reads the codebase, plans the decomposition, spawns 2-5 workers for independent subtasks, then verifies integration. It's MapReduce for code changes.

Result Aggregation

Workers report back via tool_result — the coordinator sees what each worker did, checks for conflicts (two workers editing the same file), and decides next steps. If integration fails, the coordinator can spawn additional workers to fix specific issues.

DAG-Based Dependency Detection

Not all subtasks are independent. In a multi-file refactor, Worker B may need to import a type that Worker A is creating. The coordinator models subtasks as a directed acyclic graph (DAG) — nodes are subtasks, edges are "must complete before" dependencies. Tasks with no predecessors run in parallel (Promise.all); tasks with predecessors wait. This is the same approach used by LLM-Compiler (Kim et al., 2023) for parallel function calling. The coordinator extracts the DAG by analyzing file imports and exports: if subtask B reads a file that subtask A writes, B depends on A. A file write-set collision (two workers targeting the same output file) is caught at planning time and resolved by merging the subtasks rather than running them in parallel.
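The read/write dependency rule and wave-based scheduling can be sketched as follows. The `Subtask` shape and `runDag` are illustrative, assuming each subtask declares its read and write sets:

```typescript
// Hypothetical subtask shape: declared read/write file sets plus a runner
interface Subtask {
  id: string;
  writes: string[];           // files this subtask creates or modifies
  reads: string[];            // files it consumes
  run: () => Promise<string>; // worker execution (stubbed in tests)
}

// B depends on A if B reads a file that A writes
function buildDeps(tasks: Subtask[]): Map<string, Set<string>> {
  const writers = new Map<string, string>();
  for (const t of tasks) for (const f of t.writes) writers.set(f, t.id);
  const deps = new Map<string, Set<string>>();
  for (const t of tasks) {
    const d = new Set<string>();
    for (const f of t.reads) {
      const w = writers.get(f);
      if (w && w !== t.id) d.add(w);
    }
    deps.set(t.id, d);
  }
  return deps;
}

// Execute in waves: each wave is every task whose deps are all complete,
// run concurrently via Promise.all; returns completion order of ids
async function runDag(tasks: Subtask[]): Promise<string[]> {
  const deps = buildDeps(tasks);
  const done = new Set<string>();
  const order: string[] = [];
  let pending = [...tasks];
  while (pending.length > 0) {
    const ready = pending.filter((t) =>
      [...deps.get(t.id)!].every((d) => done.has(d))
    );
    if (ready.length === 0) throw new Error("cycle in subtask DAG");
    await Promise.all(ready.map((t) => t.run()));
    for (const t of ready) {
      done.add(t.id);
      order.push(t.id);
    }
    pending = pending.filter((t) => !done.has(t.id));
  }
  return order;
}
```

With this scheme, a worker that creates `types.ts` always completes before the worker that imports from it, while unrelated workers still run in the same wave.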

Context Isolation — Why Workers Get Fresh Contexts

Each worker starts with an empty message history and only the subtask description in its system prompt. This is deliberate: a worker with access to 50 turns of coordinator history would spend most of its context budget on irrelevant planning discussion. Fresh context means the worker's full context window is entirely available for the code it needs to read and modify. The tradeoff is that the coordinator must write self-contained prompts — each subtask description must include all the context the worker needs (relevant file paths, expected output format, constraints) without assuming the worker can see what other workers did.
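A self-contained subtask prompt of the kind described above might be assembled like this. The field names (`goal`, `files`, `constraints`, `outputFormat`) are illustrative, not an actual API:

```typescript
// Hypothetical spec for one subtask: everything the worker needs is spelled
// out, because the worker cannot see the coordinator's planning history
interface SubtaskSpec {
  goal: string;           // what to do
  files: string[];        // exact paths in scope
  constraints: string[];  // invariants other workers rely on
  outputFormat: string;   // how to report back via tool_result
}

function buildWorkerPrompt(spec: SubtaskSpec): string {
  return [
    `Task: ${spec.goal}`,
    `Files in scope: ${spec.files.join(", ")}`,
    `Constraints:`,
    ...spec.constraints.map((c) => `- ${c}`),
    `Report format: ${spec.outputFormat}`,
  ].join("\n");
}
```

Note that the constraints field is where cross-worker invariants live — for example, "do not change exported signatures" keeps Worker A's output compatible with Worker B's imports without the two workers ever seeing each other's context.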

Quick Check

Why can't the coordinator write code directly?

📐

Key Code Patterns

Coordinator Mode (TypeScript pseudocode)

typescript
// Coordinator manages workers — never writes code directly
class CoordinatorMode {
  private coordinatorTools: string[] = [
    "Agent",        // spawn workers
    "SendMessage",  // communicate with running workers
    "TeamCreate",   // create worker teams
    "Read", "Glob", "Grep",  // can READ code to plan
    // NO Bash, Edit, Write — coordinator doesn't code
  ];

  private workerTools: string[] = [
    "Bash", "Read", "Write", "Edit", "Glob", "Grep",
    // NO Agent — workers can't spawn sub-workers
  ];

  async coordinate(task: Task): Promise<IntegrationResult> {
    // 1. Analyze the task
    const plan = await this.planDecomposition(task);

    // 2. Spawn workers for each subtask
    const workers: Agent[] = [];
    for (const subtask of plan.subtasks) {
      const worker = await spawnAgent({
        prompt: subtask.description,
        tools: this.workerTools,
        background: true, // parallel execution
      });
      workers.push(worker);
    }

    // 3. Monitor and aggregate results
    const results = await gatherResults(workers);

    // 4. Verify and integrate
    return this.verifyIntegration(results);
  }
}

Environment Gating

typescript
const COORDINATOR_TOOLS: string[] = [
  "Agent", "SendMessage", "TeamCreate", "TeamDelete",
  "Read", "Glob", "Grep",  // read-only access
];

const WORKER_TOOLS: string[] = [
  "Bash", "Read", "Write", "Edit", "Glob", "Grep",
  // No Agent — strict two-level hierarchy
];

const ALL_TOOLS: string[] = [...COORDINATOR_TOOLS, ...WORKER_TOOLS];

// Tool set determined at startup, not by the agent
function getAvailableTools(): string[] {
  if (process.env.COORDINATOR_MODE) {
    return COORDINATOR_TOOLS; // management only
  }
  return ALL_TOOLS; // full access for single-agent mode
}

// Workers are spawned with explicit tool lists
async function spawnWorker(taskDescription: string): Promise<Agent> {
  return await Agent({
    prompt: taskDescription,
    tools: WORKER_TOOLS, // enforced at registry level
    // Agent tool NOT in WORKER_TOOLS — no recursion possible
  });
}

Parallel Worker Execution

typescript
// Spawn workers for independent subtasks concurrently
async function runParallelWorkers(
  subtasks: Subtask[],
  workerTools: string[]
): Promise<WorkerResult[]> {
  async function runOne(subtask: Subtask): Promise<WorkerResult> {
    const result = await spawnAgent({
      prompt: subtask.description,
      tools: workerTools,
      background: true,
    });
    return { subtask: subtask.id, result };
  }

  // All workers run simultaneously
  const results = await Promise.all(subtasks.map(runOne));

  // Check for conflicts
  const modifiedFiles = new Map<string, string>();
  for (const r of results) {
    for (const f of r.result.filesModified) {
      if (modifiedFiles.has(f)) {
        throw new ConflictError(
          `${f} modified by workers ${modifiedFiles.get(f)} and ${r.subtask}`
        );
      }
      modifiedFiles.set(f, r.subtask);
    }
  }

  return results;
}
🔧

Break It — See What Happens

  • No tool restriction for workers
  • Coordinator writes code too
📊

Real-World Numbers

Metric                              Value
Coordinator tools                   7 (management + read-only)
Worker tools                        6 (execution + read-only)
Max parallel workers                Depends on API rate limits
Typical workers per task            2-5
Hierarchy depth                     Strictly 2 levels
Coordination overhead (heuristic)   ~10-15% of total tokens
✨ Insight · The coordination overhead (10-15% of tokens spent on planning instead of executing) pays for itself when tasks have dependencies. For a 4-worker refactor, the alternative is 4 independent agents that each spend 30%+ of their context re-discovering the plan.
🧠

Key Takeaways

What to remember for interviews

  1. The coordinator never writes code — it only writes prompts. Its tool set is management-only (Agent, SendMessage, Read), keeping its context window clean for planning.
  2. Workers are denied the Agent tool at the registry level, enforcing a strict two-level hierarchy and preventing uncontrolled recursion regardless of prompt injection attempts.
  3. Subtasks are modeled as a DAG: independent tasks run in parallel via Promise.all, while dependent tasks (Worker B imports types Worker A creates) run sequentially.
  4. Workers receive fresh, empty context for each subtask — their full context window is available for the code they need, not filled with the coordinator's planning history.
  5. Coordination overhead is ~10-15% of total tokens, which pays for itself when subtasks interact — flat agent pools discover file conflicts at merge time, coordinators catch them at planning time.
📚

Further Reading

🎯

Interview Questions


Design a multi-agent system where a coordinator delegates to specialized workers.

★★★
Anthropic · OpenAI

How do you prevent uncontrolled recursion in a system where agents can spawn agents?

★★☆
Google

What's the tradeoff between a flat agent pool vs a hierarchical coordinator/worker pattern?

★★★
Meta · Anthropic

How would you implement work-stealing between coordinator-worker agents when one worker finishes early?

★★★
Google