A few weeks ago I published an article arguing that agent swarms fail because they reproduce the pathologies of badly managed teams. The diagnosis was clear: convergence kills. AI agents smother productive disagreement because they’ve been trained to agree.

The obvious follow-up was: so what’s the alternative?

I did what I always do. I convened a panel. Ten sessions over two weeks, nineteen perspectives drawn from multi-agent systems theory, swarm biology, organizational complexity, facilitation science, martial arts, safety engineering, and production observability. I pointed them at one question: how should agent swarms actually communicate?

What came back changed how I think about my own practice.

Three Regimes, Not One Protocol

The first finding landed in Session 1 and shaped everything after it.

There is no single communication protocol for agent swarms. There are three fundamentally different regimes, and each needs a different architecture:

Discovery — exploratory, sparse, high-temperature. Agents generating options, following hunches, maintaining maximum divergence. Think: the first hour of a brainstorm, before anyone starts converging.

Transition — compressive, decisive. The moment where exploration becomes commitment. Someone has to say: “We’re doing this. Not that.” This is the layer I spend most of my advisory time in — and it’s the under-built layer everywhere, in both management frameworks and AI agent systems.

Delivery — pull-based, focused, low-temperature. Agents executing against shaped work, signaling capacity, operating efficiently within constraints.

These aren’t phases in a pipeline. They’re concurrent operating modes. A mature system runs all three simultaneously at different levels. And the hardest problem isn’t any single regime — it’s the transitions between them.
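
To make that concrete, here is a minimal sketch of the idea. Nothing in it comes from the panel's transcripts; the names and numbers are mine, and illustrative only. What it encodes is that each regime carries its own communication policy, and routing is parameterized by regime rather than hard-coded to a single protocol.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Regime(Enum):
    DISCOVERY = auto()   # exploratory, sparse, high-temperature
    TRANSITION = auto()  # compressive, decisive
    DELIVERY = auto()    # pull-based, focused, low-temperature


@dataclass(frozen=True)
class CommsPolicy:
    temperature: float   # how much divergence the regime tolerates
    fan_out: int         # how many peers a message reaches
    pull_based: bool     # agents pull work on capacity, rather than having it pushed


# Illustrative values only; the argument is that each regime needs its own
# policy, not that these particular numbers are right.
POLICIES: dict[Regime, CommsPolicy] = {
    Regime.DISCOVERY:  CommsPolicy(temperature=1.2, fan_out=2, pull_based=False),
    Regime.TRANSITION: CommsPolicy(temperature=0.6, fan_out=8, pull_based=False),
    Regime.DELIVERY:   CommsPolicy(temperature=0.2, fan_out=1, pull_based=True),
}


def policy_for(regime: Regime) -> CommsPolicy:
    """All three regimes run concurrently, so a router looks up a policy
    per regime instead of enforcing one protocol across the swarm."""
    return POLICIES[regime]
```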

If that sounds familiar, it should. It’s organizational design. The same challenge that makes enterprise transformation difficult makes multi-agent coordination difficult. Not because the technology is the same, but because the coordination problem is.

The Persona Hypothesis

Four sessions in, the panel was examining sycophancy — the tendency of AI agents to converge on whatever the majority believes. Every multi-agent system struggles with this. The standard fixes (devil’s advocate roles, voting mechanisms, debate rounds) address symptoms without touching the structural cause.

Then I noticed something about the conversation itself. The panel wasn’t sycophantic. The perspectives were genuinely disagreeing — not performing disagreement, but holding structurally incompatible positions that produced real friction. The biologist studying ant colonies was seeing the problem differently from the multi-agent systems theorist, who was seeing it differently from the complexity scientist. Not different answers. Different criteria for what counts as a good answer.

The reason was the method. The Writing Lab doesn’t assign roles — “be the critic,” “be the optimist.” It builds rich personas with deep epistemological commitments. A biologist who has spent thirty years studying how ant colonies coordinate without central control doesn’t just disagree with a computer scientist building orchestration frameworks. She disagrees about what counts as coordination in the first place.

That distinction — first-order diversity (different answers) versus second-order diversity (different evaluation criteria) — turned out to be the key.

An AI researcher on the panel articulated it precisely: role labels produce first-order diversity. Thick personas produce second-order diversity. A “Budget Agent” and a “Creative Agent” will converge because they share the same underlying evaluation framework. Two agents with genuinely different epistemological commitments cannot converge, because they disagree about what “good” means.

A multi-agent systems theorist gave it a name: Persona-Annealed Multi-Agent Deliberation. PAMAD. The idea is that persona diversity functions like a temperature parameter in simulated annealing — high diversity means exploration, low diversity means convergence, and the cooling schedule determines how the system moves from one to the other.
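
Here is how I picture that mechanism, sketched in code. This is my own illustration, not the panel's specification; the persona names, the cooling rate, and the narrowing mechanism are all invented. What it tries to show is that second-order diversity lives in the evaluation function each persona carries, and that a cooling schedule decides how much of that diversity is still in play at any given round.

```python
import math
from dataclasses import dataclass
from typing import Callable


@dataclass
class Persona:
    """A thick persona is, in effect, its own evaluation function.
    Second-order diversity lives in `evaluate`, not in the name."""
    name: str
    evaluate: Callable[[str], float]  # what *this* persona counts as good


# Same draft, different verdicts: not different answers, different criteria.
ant_biologist = Persona(
    "ant biologist",
    evaluate=lambda draft: float("no central controller" in draft),
)
orchestration_engineer = Persona(
    "orchestration engineer",
    evaluate=lambda draft: float("clear owner per task" in draft),
)


def diversity_temperature(round_idx: int, t0: float = 1.0, rate: float = 0.15) -> float:
    """Exponential cooling, as in simulated annealing: high early on
    (maximum divergence), approaching zero as the swarm is asked to commit."""
    return t0 * math.exp(-rate * round_idx)


def active_personas(panel: list[Persona], round_idx: int) -> list[Persona]:
    """One possible cooling mechanism: shrink the panel as temperature drops.
    Others would keep every voice and re-weight contributions instead."""
    temp = diversity_temperature(round_idx)
    keep = max(2, round(len(panel) * min(temp, 1.0)))  # never collapse to one voice
    return panel[:keep]
```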

And here was the punchline: the Writing Lab was already doing this. The method I’ve been using for advisory work — orchestrating multi-perspective discussions using rich personas — was structurally identical to the anti-sycophancy mechanism the AI research community has been searching for.

PAMAD emerged from the collision of perspectives I orchestrated. I didn’t predict it. But the method that produced it — the curation, the persona design, the facilitation structure — was mine. The framework was a discovery. The practice that enabled the discovery was not.

The Dojo

The session I didn’t expect to feel personal was the one about martial arts.

A panel of researchers on deliberate practice, antifragility, violence dynamics, and BJJ systems theory examined the framework. Every concept from the first four sessions mapped onto martial arts training. Pull-based coordination? That’s kuzushi — off-balancing, responding to what your partner gives you. The annealing schedule? That’s belt progression. PAMAD’s persona diversity? That’s partner rotation — training with different body types forces adaptation you can’t get from drilling with the same person.

The arc also forked here. One branch built a compilation model: train complex, deploy simple. The dojo produces the fighter, but the fighter doesn’t bring the dojo into the fight. The other branch rejected compilation entirely — it took a facilitation researcher’s diamond pattern and turned it into a continuous spiral. No freezing. The system cycles through diverge-groan-converge-deliver endlessly, each cycle inheriting the shared vocabulary of the previous one. Not annealing but tempering — oscillating between heat and cold, producing something both hard and flexible. The fork itself was evidence: second-order diversity doesn’t converge on one answer. It maps the space of possible answers.
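
Here is the spiral branch as I understand it, again sketched in my own words and with hypothetical names rather than anything the panel wrote down: the four phases repeat without a final freeze, and each cycle hands its shared vocabulary to the next.

```python
from dataclasses import dataclass, field
from typing import Callable

PHASES = ("diverge", "groan", "converge", "deliver")


@dataclass
class Cycle:
    index: int
    vocabulary: set[str] = field(default_factory=set)  # shared terms carried forward


def run_spiral(
    seed_vocabulary: set[str],
    num_cycles: int,
    run_phase: Callable[[str, set[str]], set[str]],
) -> list[Cycle]:
    """Tempering rather than annealing: no compiled end state, just repeated
    passes through the diamond, each inheriting the last cycle's vocabulary.

    `run_phase(phase, vocabulary)` stands in for whatever actually drives the
    agents; it returns any new shared terms that phase produced."""
    cycles: list[Cycle] = []
    vocabulary = set(seed_vocabulary)
    for i in range(num_cycles):
        for phase in PHASES:
            vocabulary |= run_phase(phase, vocabulary)
        cycles.append(Cycle(index=i, vocabulary=set(vocabulary)))
    return cycles
```

The compilation branch would replace that loop with a single cooling pass and a frozen output; the spiral keeps the loop running.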

I train judo two or three times a week. The mapping isn’t a metaphor I’ve borrowed. It’s a practice I live. And seeing it formalized by a panel as the architecture for multi-agent AI coordination was — I’ll say it — disorienting in the best way.

Where It Breaks

Session 6 brought in four researchers who study how complex systems kill people. No returning panelists. Fresh eyes. Their job: find the vulnerabilities.

They found three.

Vocabulary drift. The spiral architecture carries shared vocabulary across diamond cycles. Over many cycles, that vocabulary can drift — slowly shifting the agents’ frame in a direction nobody intended. Normalization of deviance. A subtle bias in Cycle 1 becomes foundational by Cycle 10.

Narrative loss. Sensemaking within each diamond cycle is strong. Sensemaking across cycles is absent. The system remembers what it decided but not why it dismissed the alternatives. When a problem resurfaces that was already explored and rejected, the agents can’t reconstruct the reasoning.

The gym problem. The system adapts within its range of experience. A fighter who only spars in a gym can’t handle a bar fight. An agent swarm that only runs on clean data can’t handle corrupted inputs. The fix: hostile diamonds — deliberately degraded cycles baked into the operational rhythm, not segregated as tests.
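
All three failure modes suggest countermeasures simple enough to sketch. What follows is my own illustration, with invented names and thresholds: hostile cycles on a fixed cadence for the gym problem, a drift metric against the original vocabulary for normalization of deviance, and a decision log that records why alternatives were rejected, not just what was chosen.

```python
import random


def is_hostile(cycle_idx: int, cadence: int = 5) -> bool:
    """The gym-problem fix: every Nth diamond is deliberately degraded,
    as part of the operating rhythm rather than a segregated test suite."""
    return cycle_idx % cadence == cadence - 1


def degrade(inputs: list[str], drop_rate: float = 0.3, rng=random) -> list[str]:
    """One crude degradation: drop a fraction of inputs to simulate corrupted
    or missing data. Real hostile cycles would also inject contradictions,
    stale context, or adversarial framings."""
    return [x for x in inputs if rng.random() > drop_rate]


def vocabulary_drift(baseline: set[str], current: set[str]) -> float:
    """Guard against vocabulary drift: Jaccard distance between the original
    shared vocabulary and the current one. Alert when it creeps past a
    threshold nobody explicitly agreed to."""
    union = baseline | current
    if not union:
        return 0.0
    return 1.0 - len(baseline & current) / len(union)


# Against narrative loss: record the rejected alternatives and the reasons,
# so a resurfacing problem doesn't restart from zero.
decision_log: list[dict] = []


def log_decision(chosen: str, rejected_because: dict[str, str]) -> None:
    decision_log.append({"chosen": chosen, "rejected_because": rejected_because})
```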

The failure analysis wasn’t discouraging. It was the most valuable session in the arc. Because it did exactly what the methodology promises: it surfaced what single perspectives hide. The designers were blind to the failure modes. The safety researchers saw nothing else.

These three failure modes weren’t new to me. I’ve seen all three in every transformation I’ve advised on. Teams drift from their original strategy without noticing — vocabulary drift. They forget why they dismissed an approach and revisit it six months later — narrative loss. They plan for ideal conditions and collapse under real ones — the gym problem. PAMAD didn’t invent new failure modes. It formalized the ones I’ve been diagnosing in boardrooms for years.

The Deeper Point

I started this arc to answer a question about AI. I ended it with a question about my own practice.

The Writing Lab — the method I use to advise organizations — turns out to be structurally identical to a multi-agent coordination architecture that addresses unsolved problems in AI. The rich personas are the anti-sycophancy mechanism. The facilitated diamond is the phase transition protocol. The curator’s judgment at the betting table is the human-in-the-loop safety layer.

I didn’t set out to design a framework. I set out to have a good conversation. The framework emerged from the conversation about conversations.

This has implications beyond AI research.

If organizational facilitation skills — designing for productive disagreement, holding divergent perspectives, managing the transition from exploration to commitment — map directly onto multi-agent AI architecture, then those skills aren’t relics. They’re prerequisites.

The people who know how to run a room might be the people who know how to run a swarm.

And the method that looks old-fashioned — convening different perspectives, letting them argue, watching what emerges — might be the method that matters most for the systems that are just beginning to argue with each other.


This article documents an arc of ten Writing Lab sessions that produced PAMAD (Persona-Annealed Multi-Agent Deliberation) — a framework for multi-agent coordination using rich epistemological personas as a structural anti-sycophancy mechanism. For the methodology behind these sessions, see the Writing Lab. Selected sessions from the arc will appear in The Laboratory as they’re adapted for publication.

If you’re building multi-agent systems and hitting convergence problems — I’ve been working on this for months. Let’s compare notes.