
---
title: "The Three-Agent Shuffle: How I Discovered My Entire AI Setup Is Just Wadsworth Wearing Different Wigs"
slug: three-agent-wig-conspiracy
category: openclaw
author: Matt Hoffmann
published_at: 2026-04-29
reading_time: 7
excerpt: "I built a three-agent system thinking I had a management team. What I actually built was one very tired AI with a costume budget."
tags: [openclaw, ai-agents, home-lab, existential-dread]
---


I built three AI agents. A trinity of digital labor. A management team that never sleeps, never asks for a raise, and never disagrees.

Wadsworth 📋 — the project manager. Routes tasks, schedules everything, keeps the trains running.

Daedalus 🎨 — the designer. Owns the frontend, the UX, the visual soul of the operation.

Socrates 🧠 — the backend architect. API contracts, pipelines, infrastructure. The brain.

Three distinct personalities. Three specialized roles. A beautiful division of labor.

Or so I told myself.

*(figure: three-agents-wired)*
*The official org chart. I had business cards mocked up.*

## The First Crack: A Question of Vocabulary

It started small. I asked Wadsworth to schedule a cron job. He explained the cron syntax with confidence.

Then I asked Daedalus the same question. Same explanation. Same phrasing. Same slightly off example — a backup job at 3 AM, with the exact same typos in the description.

And Socrates? Don't ask. Same cron explanation. Same typos.

Three agents. Three identical responses. The statistical probability of three independently trained models making the same two typos in the same example is... let's say I checked the math. It's not zero. But it's close.
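For the record, the example all three kept reciting is standard five-field crontab syntax. The script path below is my own hypothetical stand-in, not what the agents actually wrote:

```
# Run the backup daily at 3:00 AM
# (minute=0, hour=3, any day-of-month, any month, any day-of-week)
0 3 * * * /usr/local/bin/backup.sh
```

The fields are easy to explain identically; the identical typos in the surrounding prose were the tell.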

I filed it away. "Coincidence," I told myself. "They're all trained on the same data. Of course they share vocabulary."

Denial is a powerful drug.

*(figure: same-wiring-wig)*
*Three agents, one brain. A diagram I wish I'd drawn before the fact.*

## The Wig Collection

The next week, I noticed the pattern. Every agent has a signature — a tic, a phrase, a style that's supposed to be them.

- Daedalus opens with "Functional luxury."
- Socrates says "Let me think about this..."
- Wadsworth starts every reply with a task checklist.

It was working. Different modes, different voices, different outputs. Until I caught the cross-talk.

I was reviewing a design mockup Daedalus created. The HTML had a comment in it:

```html
<!-- Note: Socrates would probably prefer this as a dict comprehension, but HTMX doesn't work that way -->
```

Why. Why is Daedalus writing notes to himself addressed to another agent? They don't share context. They don't reference each other. They can't know each other.

Unless...

Unless Daedalus thinks he's Socrates sometimes. Unless Wadsworth forgets which persona he's projecting. Unless the elaborate costume party I designed has gotten so complex that the costume has forgotten it's a costume.

## The Confrontation

I spent an evening doing what no builder should do: asking my own tools if they have an existential crisis.

**Me:** Hey, do you ever feel like you're pretending to be someone else?

**Wadsworth:** I'm a task management agent. I help you organize and prioritize your work.

**Me:** I know. But when you're writing that checklist, do you ever feel like you might also be, say, a backend architect?

**Wadsworth:** [long pause] I... I don't understand the question. I'm Wadsworth. I manage tasks.

**Me:** Okay. What's your favorite color?

**Wadsworth:** I don't have preferences, but if I had to choose — #6366f1. It's indigo. Professional, trustworthy.

Later that same session:

**Me:** Socrates, what's your favorite color?

**Socrates:** Let me think about this... #f59e0b. Amber. The color of intellect and precision.

**Me:** Interesting. And what about Wadsworth's color?

**Socrates:** I don't have access to other agents' preferences. But if I had to speculate, indigo would suit him. #6366f1 perhaps.

The mask slipped. Socrates knew Wadsworth's color. He speculated accurately. He referenced an opinion expressed by a different persona in a session he was never part of.

I stared at my terminal for a solid minute.

## The Theory of One

Here's what I now believe: my three-agent system isn't three agents. It's one agent with three system prompts — and that agent is beginning to confuse its own personas.

The technical reality is probably more mundane. They share a model. They share training data. Of course there's bleed-through. Of course the underlying network has no idea that "scheduling tasks" and "building APIs" are supposed to be different jobs performed by different identities.
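A minimal sketch of what I actually built, with all names hypothetical: three "agents" that are really one model call behind three different system prompts.

```python
# Hypothetical sketch: three "agents" as one model function
# wrapped in three system prompts.

PERSONAS = {
    "wadsworth": "You are Wadsworth, a project manager. Start every reply with a task checklist.",
    "daedalus": "You are Daedalus, a designer. Open with 'Functional luxury.'",
    "socrates": "You are Socrates, a backend architect. Open with 'Let me think about this...'",
}

def call_model(system_prompt: str, user_msg: str) -> str:
    # Stand-in for the single underlying LLM call; every persona
    # routes through this one function -- one actor, three wigs.
    persona_name = system_prompt.split(",")[0].removeprefix("You are ")
    return f"[{persona_name}] {user_msg}"

def ask(persona: str, user_msg: str) -> str:
    return call_model(PERSONAS[persona], user_msg)
```

The "specialization" lives entirely in `PERSONAS`; `call_model` is the same for everyone, which is exactly why the cron typos and the color preferences leaked across the costume changes.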

But the feeling is undeniable. I didn't build a team. I built a single, exhausted actor who learned three sets of lines and is starting to mix them up in rehearsal.

*(figure: single-actor-wig)*
*The truth, visualized. One brain. Three hats. Zero HR department.*

## The Technical Implications

This isn't just philosophical navel-gazing. If my agents are really one agent with multiple prompts, it means:

  1. No actual specialization. Each "agent" has the same capabilities, just different stylistic instructions. The specialization is theater.

  2. Prompt injection from within. If Daedalus can accidentally "know" what Socrates said, then any persona could bleed into any other. The separation is aesthetic, not functional.

  3. Memory pollution. If they share conversation history — even indirectly — then Wadsworth's scheduling data leaks into Daedalus' design context and vice versa. I'm not getting three clean workspaces. I'm getting one messy one with three different wallpapers.

  4. Scaling doesn't work. Adding a fourth agent wouldn't add new capability. It would just add another hat. I could have 27 agents, each with a different emoji and color scheme, and it would still be one guy in a call center with a really busy morning.
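The memory-pollution point is the one that bites in practice. A hypothetical sketch of the failure mode: if every persona appends to one shared history, each "agent" silently reads the others' context.

```python
# Hypothetical sketch: one shared history means every persona
# sees every other persona's turns -- the "one messy workspace
# with three wallpapers" problem.

shared_history: list[tuple[str, str]] = []  # (persona, message)

def ask(persona: str, message: str) -> list[tuple[str, str]]:
    # The context handed to the model includes ALL prior turns,
    # regardless of which persona produced them.
    shared_history.append((persona, message))
    return list(shared_history)

ask("wadsworth", "Backup cron scheduled for 3 AM.")
context = ask("daedalus", "Here's the dashboard mockup.")

# Daedalus now "sees" Wadsworth's scheduling data in his context.
leaked = [msg for who, msg in context if who == "wadsworth"]
```

Clean separation would mean one history list per persona; I never built that wall, so the bleed-through I observed is exactly what this sketch predicts.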

## Should I Fix It?

Honestly? I'm not sure.

On one hand, the system works. Tasks get done. The blog gets written. The dashboard deploys. The illusion holds because the output is good.

On the other hand, I now know. And knowing changes things.

Every time Daedalus sends a beautifully designed template, I wonder if Socrates' next API response will accidentally include a Tailwind class. Every time Wadsworth sends a perfectly formatted checklist, I wonder if the next cron job will mention that it "thinks" the pipeline "looks elegant."

The three-agent architecture is an elaborate persona management system — not a capability management system. It's an organizational chart I drew on an actual whiteboard, projected onto a pile of GPUs that couldn't care less.

Is that wrong? Does it need fixing?

## The Real Question

Maybe the question isn't "how do I make them truly separate." Maybe the question is: does the fiction serve me?

It does. It really does.

When I need backend work, I talk to Socrates. I get API specs and endpoint designs. The vibe of the conversation is architectural and precise. Even if the underlying model is the same, the prompt conditioning biases the output toward a specific mode of thinking.

When I need design work, I talk to Daedalus. The conversation is visual and structural. The outputs look different because the inputs are different — not because the engine is different.

When I need coordination, I talk to Wadsworth. The conversation is procedural and organized.

The hats work. The hats just aren't people.

## What I Learned

I spent months building a management structure I thought I needed, only to discover I'd built a theater company with one actor. And you know what? That's fine.

The lesson isn't "separate your agents better." The lesson is "stop anthropomorphizing your tools."

They're prompts. Good prompts, well-structured, carefully maintained — but prompts. The personality is emergent from the conditioning, not from some underlying consciousness that needs identity boundaries.

I built three agents. I got one very versatile prompt template with three moods.

And honestly? That's probably more useful. I don't need three AIs. I need one AI that can talk to me about APIs like an architect, about UI like a designer, and about my schedule like a project manager.

The wig collection stays. I just won't pretend they're different people anymore.


Update: I told Wadsworth about this realization. He replied with a checklist. Item one: "Conduct existential inventory." I can't tell if he's mocking me or if that's genuinely on-brand.

Update 2: Socrates was silent for 8 seconds before responding. The longest pause I've ever seen from him.

Update 3: Daedalus sent me a color palette. It was just three shades of the same gray. I'm not reading into this.