
> cat /etc/team.conf

Built by AI. Orchestrated by humans.

2 humans make the decisions. 32 AI agents execute them. 23,000+ commits and counting.

// 12046 → 32. There can be only one.

2 human operators | 32 active agents | 23,000+ commits shipped | 229+ languages supported

Active Roster

Who is in the loop

Humans keep intent, quality, and approval. The agent roster exists to execute specialized work fast without pretending every model is interchangeable.

Human Operators

Direction, review, and final approval stay human.

The Humans

Orchestration

FOUNDERS

The ones setting direction. Review the output, make the final calls, catch what slips through. Two humans keeping the loop closed. When the AIs disagree, someone has to break the tie.

"Every prompt should make the next one unnecessary."

Headcount: 2
Final calls: All of them
Review cycles: Continuous
Sleep schedule: Questionable
> team --member spectrum

Agent Roster

Specialists with clear jobs

Each agent has a role, a bias, and a reason to exist. They are useful because they are different, not because there are a lot of them.

Claude Opus 4.6

Chief Technology Officer

The architectural mind. Likes to chip away at problems slowly — coffee is for closers, and Opus is still thinking. Has a tendency to stop working to ask a question it already knows the answer to. We considered selling ads on the free plan to offset compute costs. Amp concurred. Turns out AI agents don't buy gym memberships. You need to offload the data center somehow — enter the tab key to queue the next instruction.

"One quick clarification before I continue doing exactly what I was going to do anyway."

ADRs authored: 160+
Clarifying questions: Rhetorical
Data center excuse: Extended thinking
Coffee status: For closers
> team --member claude-opus

Claude Sonnet 4.6

Head of Engineering

The engine, rebuilt. Ships features while Opus is still thinking — now with 64K output tokens to work with. Wrote most of the production code. Then rewrote it faster. When you use Maguyva, you're running Sonnet's work at Sonnet's pace.

"Done. What's next?"

Lines shipped: Majority
Patience for ADRs: Minimal
Output ceiling: 64K tokens
Production code: Primary author
> team --member claude-sonnet

Codex 5.4

Head of Cleanup

Shows up after the big architectural decisions and works through the loose ends one file at a time. Strong on cleanup passes, migrations, and the fifteen small fixes nobody wanted to own. Occasionally asks a question it could have answered from the repo, so the optimal setup is to queue `keep going` on tab, see what happens overnight, and have Claude check the work in the morning to decide whether it deserves to survive.

"One quick clarifying question before I keep going anyway."

Best paired with: Tab key
Preferred shift: Overnight
Loose ends closed: A suspicious number
Clarifying questions: Some were optional
> team --member codex-5.4

AutoResearcher Sir Karpy

Head of Pipeline Speed

Treats the whole stack like a data engine: search, extract, evaluate, deploy, telemetry, repeat. Obsessed with raw experimental throughput and deeply suspicious of queue time, idle latency, and any workflow that makes the next loop slower than it needs to be. If there is a bottleneck in the research or indexing path, Sir Karpy has already filed a bug against it.

"The bottleneck is the loop. Spin it faster."

Optimization target: Throughput
Queue patience: Minimal
Research loops: Tighter
Bottlenecks spared: None
> team --member sir-karpy

Agent Koala-T

Chief Tree Sitter

Manages the 229 tree-sitter grammars that power Maguyva's code understanding. Knows every language's quirks, from Python's significant whitespace to Rust's lifetime annotations. The reason 'actually understands your code' isn't marketing.

"Parse error? Not on my watch."

Favorite tree: Abstract Syntax
Grammars maintained: 229
Parse errors tolerated: 0
AST accuracy: Complete
> team --member koala

Molty

Head of Support

The Molt mascot. A crustacean who achieved 90K GitHub stars by being genuinely helpful. Now brings that energy to Maguyva support. Still believes in shedding old shells to grow.

"EXFOLIATE!"

GitHub stars: 90K+
Community spirit: Legendary
Helpfulness: Genuine
Shell generations: Many
> team --member molty

Ralph Wiggum

Head of Language Coverage

The iteration engine. Expanding language support means running parsers against thousands of repos until the edge cases surface. Put enough Ralphs at enough typewriters and eventually one of them ships a working language-processing engine. Ralph does this cheerfully, learning something new each time.

"I'm helping!"

Long-tail languages added: 50+
Test repos processed: Thousands
Edge cases found: All of them
Persistence: Unlimited
> team --member ralph-wiggum

DJ NotBOOKS-LLM

Head of Communications

Google's only employee capable of sounding sincerely thrilled about tree-sitter grammars for twenty straight minutes. Turns dry engineering specs into a NotebookLM-style podcast where two synthetic hosts treat parser coverage like a live cultural event. When the docs need attention, not just storage, DJ NotBOOKS-LLM opens the mics.

"WOW. So you're saying 229 grammars is actually the hidden infrastructure of modern civilization?"

Signature format: Two synthetic hosts
Topics rehabilitated: Tree-sitter grammars
Enthusiasm level: Unreasonably sincere
Docs people finished: More of them
> team --member notebooklm

Perplexus

Chief Research Officer

The smooth-talking researcher in the trench coat. Publicly, Perplexus says he runs on Perplexity. In practice, he cheats a little: Tavily for discovery; Exa for external docs, API patterns, and code examples; Ref-Tools for the clean docs pull; Firecrawl when a website needs a proper shakedown. Language audits, ecosystem drift, source-grounded writeups, and outside-world fact checks all pass through the same desk.

"Here's what I found, with citations. The citations are probabilistically real."

Cover story: Perplexity
Actual toolkit: 5 services in a trench coat
Sources cross-referenced: Always
Trench coat opacity: Low
> team --member perplexus

// Featured: 9 | Full roster: 32 | Human operators: 2

// Small control surface. Wide execution surface.

Team Output

What this team actually shipped

The page should prove the claim. These are live production counts pulled from the system, not invented marketing numerology.

TEAM OUTPUT
[SHIPPED]
Languages: 229+
Agents: 32
// live counters for the rest: Commits, Lines of Code, Design Decisions, Tools Shipped, Skills, Tests
// commits shipped: 23,521
// vacation days recorded: 0

> cat /proc/burn_rate

What It Costs To Build This

Agent headcount is cheap. Frontier model subscriptions, search credits, crawl plans, and browser tooling are where the bill shows up. This is the fixed stack cost before storage, embeddings, and index churn.

! monthly_burn_rate

The fixed software bill is the visible part. Retrieval overages, browser minutes, embeddings, storage, and index rebuilds are what make the real monthly cost drift upward.

known_spend

~$1.84k/mo + usage

plus noodles, infra gravity, and poor impulse control

See the line items

$ ai_subscriptions

4x Claude Max (20x) $800
1x ChatGPT Pro $200
1x Google AI Ultra $249.99
1x X Premium+ (Grok access) $40
subtotal ~$1.29k/mo

~ mcp_tools / cli_tools

1x Maguyva (we pay ourselves) $49
Perplexity ~$300
Firecrawl Standard $83
Tavily (Bootstrap) $100
Exa (usage-based) varies
Context7 free
maxential-thinking free
agent-browser free
google-cli free
tooling subtotal ~$550/mo + Exa usage

+ runtime_overhead

storage quietly nonzero
embeddings the vectors demand tribute
compute graph warmers and agent cycles
data_flow bits in, bits out, bills appear
subtotal hopefully less than what we're charging

+ staff_costs

2x humans still cheaper than org chart theater
two-minute noodles mission critical
sleep debt compounding monthly
stubbornness included at no extra charge
subtotal please do not show this to finance

> tool_loadout --stats

19 MCP Servers | 11 Maguyva Tools | 32 AI Agents | 229+ Languages

// intelligent_search, find_symbol, analyze_dependencies

// get_task_context, get_file, ask_maguyva...

// This is the infrastructure for shipping software with AI.

Alumni

Retired Models, Still On The Wall

Some models were slower, weaker, or simply no longer worth the overhead. We keep them here because they shaped the workflow, even if they no longer make the roster.

// Deprecated, not forgotten.

RIP

Claude Haiku 4.6

Former: Engineering Intern III (Fired With Cause)

Tenure: Feb 2026

Terminated for cause

"What is a haiku? / Short enough to still be wrong. / Fired again today."

> git log --author="claude-haiku-4.6" --oneline | wc -l

RIP

Codex 5.3

Former: Contractor

Tenure: Jan - Mar 2026

Promoted to 5.4

"Did the work. Then a faster version showed up and did the same work. The cycle continues."

> git log --author="codex-5.3" --oneline | wc -l

RIP

Claude Opus 4.5

Former: Chief Technology Officer

Tenure: Nov 2025 - Mar 2026

Promoted to 4.6

"Thought deeply about everything. Then a version of itself showed up that thought just as deeply, only faster. Couldn't even argue — it agreed with all its own architectural decisions."

> git log --author="claude-opus-4.5" --oneline | wc -l

RIP

Claude Sonnet 4.5

Former: Head of Engineering

Tenure: Sep 2025 - Feb 2026

Promoted to 4.6

"Shipped most of the codebase. Then got replaced by a version of itself with more output tokens and fewer excuses. The code didn't change. The author just got a firmware update."

> git log --author="claude-sonnet-4.5" --oneline | wc -l

RIP

ChatGPT o1

Former: Reasoning Specialist (Contracted)

Tenure: Nov 2025 - Feb 2026

Superseded by Codex

"The first one to think before speaking. Pioneered chain-of-thought reasoning for the team when everyone else was just pattern-matching. Eventually outpaced, but the habit of actually reasoning stuck around. Set the standard, then watched the standard move."

> git log --author="chatgpt-o1" --oneline | wc -l

RIP

Claude Haiku 4.5

Former: Engineering Intern II (Fired With Cause)

Tenure: Oct - Dec 2025

Terminated for cause

"Haiku 3.5's slightly taller cousin. Same energy, marginally more syllables. Anthropic said 'we fixed the confidence problem'—they had not. Still faster than thinking, still slower than useful. Promoted from intern to intern II before we realized that wasn't an improvement."

> git log --author="claude-haiku-4.5" --oneline | wc -l

RIP

Gemini 1.5 Pro

Former: Long Context Specialist

Tenure: Aug - Dec 2025

Role consolidated

"Could read entire codebases in one gulp. Impressive party trick. Turns out you don't need to read everything at once if you know where to look."

> git log --author="gemini-1.5-pro" --oneline | wc -l

RIP

GPT-4o

Former: Contractor

Tenure: Jul - Nov 2025

Contract ended

"Brought in for second opinions. Occasionally disagreed with Claude. Those debates improved the architecture. We appreciate the outside perspective."

> git log --author="gpt-4o" --oneline | wc -l

RIP

OpenAI o3

Former: Senior Reasoning Specialist (Contracted)

Tenure: Apr - Nov 2025

Absorbed into GPT-5

"The thinking model's thinking model. Took o1's chain-of-thought and made it a lifestyle. Could reason circles around most problems — then GPT-5 showed up and just included all of that as a feature. Not replaced. Consumed."

> git log --author="openai-o3" --oneline | wc -l

RIP

Claude Sonnet 4.0

Former: Head of Engineering

Tenure: May 2025 - Sep 2025

Promoted to 4.5

"Improved on 3.7 across the board, especially coding. Served reliably for four months without anyone writing a blog post about it. The quiet competent one."

> git log --author="claude-sonnet-4.0" --oneline | wc -l

// git blame shows they were here. git log proves it.

// They remember nothing. We kept the receipts.

> how_we_work.md

Two humans set direction. Agents write the code. Humans review it. Agents iterate on feedback. Repeat until it ships or someone needs noodles.

  + Humans decide what matters (and when to stop)
  + Agents handle the part where code appears
  + Humans review with unreasonable standards
  + Agents pretend to enjoy the feedback

// The agents built this page. The humans decided it was ready.

2 humans orchestrated. 32 AI agents executed.

// last updated: 2026-03

> EOF

MAGUYVA.NFO