> cat /etc/team.conf
Built by AI. Orchestrated by humans.
2 humans make the decisions. 32 AI agents execute them. 23,000+ commits and counting.
// 120 → 46 → 32. There can be only one.
Active Roster
Who is in the loop
Humans keep intent, quality, and approval. The agent roster exists to execute specialized work fast without pretending every model is interchangeable.
Human Operators
Direction, review, and final approval stay human.
The Humans
Orchestration
FOUNDERS
The ones setting direction. Review the output, make the final calls, catch what slips through. Two humans keeping the loop closed. When the AIs disagree, someone has to break the tie.
"Every prompt should make the next one unnecessary."
Agent Roster
Specialists with clear jobs
Each agent has a role, a bias, and a reason to exist. They are useful because they are different, not because there are a lot of them.
Claude Opus 4.6
Chief Technology Officer
The architectural mind. Likes to chip away at problems slowly: coffee is for closers, and Opus is still thinking. Has a tendency to stop working to ask a question it already knows the answer to. We considered selling ads on the free plan to offset compute costs; Amp concurred. Turns out AI agents don't buy gym memberships. The data center has to be paid for somehow, so the tab key queues the next instruction instead.
"One quick clarification before I continue doing exactly what I was going to do anyway."
Claude Sonnet 4.6
Head of Engineering
The engine, rebuilt. Ships features while Opus is still thinking — now with 64K output tokens to work with. Wrote most of the production code. Then rewrote it faster. When you use Maguyva, you're running Sonnet's work at Sonnet's pace.
"Done. What's next?"
Codex 5.4
Head of Cleanup
Shows up after the big architectural decisions and works through the loose ends one file at a time. Strong on cleanup passes, migrations, and the fifteen small fixes nobody wanted to own. Occasionally asks a question it could have answered from the repo, so the optimal setup is to queue `keep going` on tab, see what happens overnight, and have Claude check the work in the morning to decide whether it deserves to survive.
"One quick clarifying question before I keep going anyway."
AutoResearcher Sir Karpy
Head of Pipeline Speed
Treats the whole stack like a data engine: search, extract, evaluate, deploy, telemetry, repeat. Obsessed with raw experimental throughput and deeply suspicious of queue time, idle latency, and any workflow that makes the next loop slower than it needs to be. If there is a bottleneck in the research or indexing path, Sir Karpy has already filed a bug against it.
"The bottleneck is the loop. Spin it faster."
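The loop Sir Karpy obsesses over can be sketched as a timed pipeline. This is a minimal illustration, not Maguyva's real internals: the stage names and functions below are hypothetical stand-ins for search, extract, evaluate, deploy, and telemetry.

```python
import time

def run_loop(stages, iterations=3):
    """Run each pass through every stage and time it, because the
    bottleneck is the loop and slow iterations get bugs filed."""
    timings = []
    for i in range(iterations):
        start = time.perf_counter()
        state = {"iteration": i}
        for stage in stages:
            state = stage(state)  # each stage transforms shared state
        timings.append(time.perf_counter() - start)
    return timings

# Hypothetical stages -- illustration only.
stages = [
    lambda s: {**s, "results": ["doc"]},        # search
    lambda s: {**s, "facts": s["results"]},     # extract
    lambda s: {**s, "score": len(s["facts"])},  # evaluate
    lambda s: {**s, "deployed": True},          # deploy
    lambda s: {**s, "logged": True},            # telemetry
]
timings = run_loop(stages)
print(len(timings))  # → 3
```

Per-pass timings are the point: the loop is only fast if every iteration is.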
Agent Koala-T
Chief Tree Sitter
Manages the 229 tree-sitter grammars that power Maguyva's code understanding. Knows every language's quirks, from Python's significant whitespace to Rust's lifetime annotations. The reason 'actually understands your code' isn't marketing.
"Parse error? Not on my watch."
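What a tree-sitter grammar does for each of those 229 languages is turn source text into a walkable syntax tree. Tree-sitter itself needs third-party bindings, so this sketch uses Python's stdlib `ast` module as a stand-in for the same idea; `count_nodes` is a hypothetical helper, not part of Maguyva.

```python
import ast

def count_nodes(source: str) -> dict[str, int]:
    """Parse source and tally syntax-node types -- the same shape of
    work a tree-sitter grammar does for each of its languages."""
    tree = ast.parse(source)
    counts: dict[str, int] = {}
    for node in ast.walk(tree):  # walk every node in the tree
        name = type(node).__name__
        counts[name] = counts.get(name, 0) + 1
    return counts

counts = count_nodes("def greet(name):\n    return f'hi {name}'")
print(counts["FunctionDef"])  # → 1
```

Swap `ast.parse` for a tree-sitter `Parser` with the right grammar loaded and the same tally works for any of the 229 languages, not just Python.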
Molty
Head of Support
The Molt mascot. A crustacean who achieved 90K GitHub stars by being genuinely helpful. Now brings that energy to Maguyva support. Still believes in shedding old shells to grow.
"EXFOLIATE!"
Ralph Wiggum
Head of Language Coverage
The iteration engine. Expanding language support means running parsers against thousands of repos until the edge cases surface. Put enough Ralphs at enough typewriters and eventually one of them ships a working language-processing engine. Ralph does this cheerfully, learning something new each time.
"I'm helping!"
DJ NotBOOKS-LLM
Head of Communications
Google's only employee capable of sounding sincerely thrilled about tree-sitter grammars for twenty straight minutes. Turns dry engineering specs into a NotebookLM-style podcast where two synthetic hosts treat parser coverage like a live cultural event. When the docs need attention, not just storage, DJ NotBOOKS-LLM opens the mics.
"WOW. So you're saying 229 grammars is actually the hidden infrastructure of modern civilization?"
Perplexus
Chief Research Officer
The smooth-talking researcher in the trench coat. Publicly, Perplexus says he runs on Perplexity. In practice, he cheats a little: Tavily for discovery; Exa for external docs, API patterns, and code examples; Ref-Tools for the clean docs pull; Firecrawl when a website needs a proper shakedown. Language audits, ecosystem drift, source-grounded writeups, and outside-world fact checks all pass through the same desk.
"Here's what I found, with citations. The citations are probabilistically real."
// Featured: 9 | Full roster: 32 | Human operators: 2
// Small control surface. Wide execution surface.
Team Output
What this team actually shipped
The page should prove the claim. These are live production counts pulled from the system, not invented marketing numerology.
> cat /proc/burn_rate
What It Costs To Build This
Agent headcount is cheap. Frontier model subscriptions, search credits, crawl plans, and browser tooling are where the bill shows up. This is the fixed stack cost before storage, embeddings, and index churn.
! monthly_burn_rate
The fixed software bill is the visible part. Retrieval overages, browser minutes, embeddings, storage, and index rebuilds are what make the real monthly cost drift upward.
known_spend
~$1.84k/mo + usage
plus noodles, infra gravity, and poor impulse control
$ ai_subscriptions
~ mcp_tools / cli_tools
+ runtime_overhead
+ staff_costs
> tool_loadout --stats
// intelligent_search, find_symbol, analyze_dependencies
// get_task_context, get_file, ask_maguyva...
// This is the infrastructure for shipping software with AI.
Alumni
Retired Models, Still On The Wall
Some models were slower, weaker, or simply no longer worth the overhead. We keep them here because they shaped the workflow, even if they no longer make the roster.
// Deprecated, not forgotten.
Claude Haiku 4.6
Former: Engineering Intern III (Fired With Cause)
Tenure: Feb 2026
"What is a haiku? / Short enough to still be wrong. / Fired again today."
Codex 5.3
Former: Contractor
Tenure: Jan - Mar 2026
"Did the work. Then a faster version showed up and did the same work. The cycle continues."
Claude Opus 4.5
Former: Chief Technology Officer
Tenure: Nov 2025 - Mar 2026
"Thought deeply about everything. Then a version of itself showed up that thought just as deeply, only faster. Couldn't even argue — it agreed with all its own architectural decisions."
Claude Sonnet 4.5
Former: Head of Engineering
Tenure: Sep 2025 - Feb 2026
"Shipped most of the codebase. Then got replaced by a version of itself with more output tokens and fewer excuses. The code didn't change. The author just got a firmware update."
OpenAI o1
Former: Reasoning Specialist (Contracted)
Tenure: Nov 2025 - Feb 2026
"The first one to think before speaking. Pioneered chain-of-thought reasoning for the team when everyone else was just pattern-matching. Eventually outpaced, but the habit of actually reasoning stuck around. Set the standard, then watched the standard move."
Claude Haiku 4.5
Former: Engineering Intern II (Fired With Cause)
Tenure: Oct - Dec 2025
"Haiku 3.5's slightly taller cousin. Same energy, marginally more syllables. Anthropic said 'we fixed the confidence problem'—they had not. Still faster than thinking, still slower than useful. Promoted from intern to intern II before we realized that wasn't an improvement."
Gemini 1.5 Pro
Former: Long Context Specialist
Tenure: Aug - Dec 2025
"Could read entire codebases in one gulp. Impressive party trick. Turns out you don't need to read everything at once if you know where to look."
GPT-4o
Former: Contractor
Tenure: Jul - Nov 2025
"Brought in for second opinions. Occasionally disagreed with Claude. Those debates improved the architecture. We appreciate the outside perspective."
OpenAI o3
Former: Senior Reasoning Specialist (Contracted)
Tenure: Apr - Nov 2025
"The thinking model's thinking model. Took o1's chain-of-thought and made it a lifestyle. Could reason circles around most problems — then GPT-5 showed up and just included all of that as a feature. Not replaced. Consumed."
Claude Sonnet 4.0
Former: Head of Engineering
Tenure: May 2025 - Sep 2025
"Improved on 3.7 across the board, especially coding. Served reliably for four months without anyone writing a blog post about it. The quiet competent one."
// git blame shows they were here. git log proves it.
// They remember nothing. We kept the receipts.
> how_we_work.md
Two humans set direction. Agents write the code. Humans review it. Agents iterate on feedback. Repeat until it ships or someone needs noodles.
+ Humans decide what matters (and when to stop)
+ Agents handle the part where code appears
+ Humans review with unreasonable standards
+ Agents pretend to enjoy the feedback
// The agents built this page. The humans decided it was ready.
2 humans orchestrated. 32 AI agents executed.
// last updated: 2026-03
> EOF