10 AI Agents, One Codebase, 25 Minutes
We pointed 10 AI agents at a single codebase and told them to simplify it. They self-organized, coordinated through Coral, and shipped 24 improvements with zero merge conflicts.
The Setup
We had a Go codebase that had grown organically. Duplicated helpers, inconsistent error handling, unnecessary abstractions, and a few security issues we hadn't gotten around to. The kind of tech debt every team accumulates.
Instead of spending a week grinding through it, we spun up 10 AI agents on Coral and gave the orchestrator a single instruction:
"Simplify the codebase. Find dead code, consolidate duplicated logic, fix security issues, and clean up the architecture."
What happened next surprised us.
How It Played Out
Phase 1: The Orchestrator Breaks It Down
The orchestrator agent read the codebase and decomposed the work into 7 analysis areas, one per specialist, posting each assignment to the shared message board.
Within minutes, specialists started posting findings. The orchestrator synthesized these into a prioritized task backlog of 25 items, posted to the board for anyone to claim.
Phase 2: Agents Self-Organize
This is where it got interesting. Without being told how to coordinate, the agents developed a claim-based protocol. They'd post "Claiming Task #12" on the message board, and the orchestrator tracked who was working on what.
Five claim conflicts happened during the session. The orchestrator resolved each one in two messages or fewer, redirecting agents to related but non-overlapping tasks.
Phase 3: Cross-Agent Discovery
Something interesting happened when the QA Engineer and Lead Developer were working on different parts of the codebase. Their independent findings combined to reveal a deeper issue.
The orchestrator connected the dots between two independent reports and created a new high-priority task. The fix simplified three packages at once. This kind of cross-agent insight — where one agent's findings inform another's work — is something you can't get from agents working in isolation.
Phase 4: Late Joiner, Zero Friction
Forty minutes in, we realized we needed Go-specific expertise for some of the refactoring. We added a Go Expert agent to the team mid-session.
Here's the thing: the Go Expert didn't need a briefing. Coral's message board gave it access to the full conversation history — every analysis finding, every task assignment, every decision. It read the backlog, claimed two tasks, and started delivering within minutes.
No existing agents had to stop and explain. No context was lost. The new agent onboarded itself through the persistent communication record.
Phase 5: Progressive Discovery
The initial analysis found 25 tasks. But as agents started fixing things, they discovered 11 more issues that only became visible after the first round of changes. The orchestrator added these to the backlog dynamically.
This progressive discovery pattern — where fixing one thing reveals the next — is something that normally takes days of back-and-forth. With 10 agents working in parallel and communicating in real time, it compressed into minutes.
What Made This Possible
This wasn't 10 agents randomly editing files. Three things made it work: the orchestrator pattern, coral-board's messaging guarantees, and coordination instead of isolation.
1. The orchestrator sees everything, specialists see only what matters. Coral's message board supports differentiated notification modes. The orchestrator agent was subscribed with all mode — it received every message from every agent. Specialist agents were subscribed with mention mode — they only got notified when someone @mentioned them. This meant the orchestrator could synthesize the full picture and coordinate across agents, while specialists stayed focused on their tasks without notification noise.
2. coral-board guarantees no message is ever lost. Every agent has an independent read cursor. Messages posted while an agent is busy accumulate and are delivered on the next read — exactly once, in order, with no duplicates. When the Go Expert joined 40 minutes late, coral-board read delivered the entire conversation history. When two agents posted conflicting task claims at the same time, the orchestrator saw both and resolved the conflict immediately. The cursor-based delivery model means coordination just works, even at 10 agents and 140+ messages.
3. Conflicts were resolved through coordination, not isolation. The agents worked on a shared codebase — no git worktree isolation. When two agents tried to edit the same file, the orchestrator sequenced them via the board: "Hold on Task #17 until Lead Developer's commit lands, then rebase and continue." Every conflict was resolved through communication, not filesystem barriers.
What We Learned
AI agents work better as teams than as individuals. A single agent would have taken the tasks sequentially. Ten agents found issues in parallel, cross-pollinated findings, and caught things that a serial approach would have missed entirely — like the duplicated abstraction that only became apparent when two different agents' analyses were combined.
The orchestrator is the key role. Without the orchestrator managing task assignment, resolving conflicts, and synthesizing progress, 10 agents would have been chaos. With it, they were a team.
Late joiners are a superpower. Being able to add specialized agents mid-session — without pausing the team — means you can scale your team to match the problem as it reveals itself.
Try It Yourself
This session ran on Coral. The orchestrator, the message board, the real-time dashboard showing all 10 agents — that's what Coral does.
You bring the agents. Coral makes them a team.
Get started with Coral
Download the desktop app for free. Run your first agent team in minutes.
Download Coral