> cat /etc/evolution.log
What Maguyva runs on
Built for speed, debuggability, and continuous improvement. The stack is boring where it should be boring and explicit where generic abstractions made the agent workflow worse.
The current production architecture grew out of one constraint: make agent workflows easier to debug under pressure, not more abstract.
Production Layers
> lsmod | grep production
These are the core layers that stayed after multiple iterations. Every one of them earned its place by making debugging easier or by removing friction from agent execution.
PostgreSQL + Lance
Tenant-partitioned storage and indexes. Your code stays isolated, with no cross-user index mixing.
Voyage AI voyage-code-3
Premium code embeddings for your agents, served by Voyage AI's voyage-code-3 model.
Binary Quantized HNSW
Embeddings compressed to 2048-bit signatures and indexed with HNSW. 32x storage reduction, sub-millisecond search, zero extra infrastructure.
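The arithmetic behind the card above: a 2048-dimension float32 embedding is 8192 bytes, while a 2048-bit signature is 256 bytes, which is the 32x reduction. A minimal sketch of sign-threshold binary quantization with Hamming-distance comparison (an illustration of the technique, not Maguyva's actual code; function names are hypothetical):

```python
def quantize(embedding: list[float]) -> int:
    """Pack a float vector into an integer of sign bits (1 if >= 0)."""
    bits = 0
    for i, value in enumerate(embedding):
        if value >= 0:
            bits |= 1 << i
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary signatures."""
    return bin(a ^ b).count("1")

# Toy 4-dim vectors; production signatures would be 2048 bits.
query = quantize([0.3, -1.2, 0.7, 0.0])   # -> 0b1101
doc   = quantize([0.1, -0.4, -0.9, 2.0])  # -> 0b1001
print(hamming(query, doc))                # -> 1
```

Hamming distance over packed bits is a handful of XOR and popcount instructions, which is where the sub-millisecond search claim comes from when paired with an HNSW graph over the signatures.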
Code Language Engine
A heavily modified Tree-sitter stack with custom grammars, handler heuristics, and 229-language extraction coverage.
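The "handler heuristics" pattern can be sketched as a per-language handler registry with a generic fallback. This is a hypothetical illustration of the dispatch shape only; the real engine runs on a modified Tree-sitter stack, and every name below is invented:

```python
from typing import Callable

Handler = Callable[[str], list[str]]
HANDLERS: dict[str, Handler] = {}

def register(lang: str):
    """Decorator that installs a language-specific extraction handler."""
    def wrap(fn: Handler) -> Handler:
        HANDLERS[lang] = fn
        return fn
    return wrap

@register("python")
def extract_python(src: str) -> list[str]:
    # Naive heuristic stand-in for a real grammar query.
    return [line.split("def ")[1].split("(")[0]
            for line in src.splitlines() if line.lstrip().startswith("def ")]

def generic_fallback(src: str) -> list[str]:
    # Last resort when no grammar or handler matches the language.
    return []

def extract_symbols(lang: str, src: str) -> list[str]:
    return HANDLERS.get(lang, generic_fallback)(src)

print(extract_symbols("python", "def index(repo):\n    pass"))  # -> ['index']
```

The point of the fallback chain is that an unrecognized language degrades to an explicit empty result rather than crashing the indexing pipeline.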
MCP Protocol
Remote MCP server for Claude Code, Claude Desktop, Cursor, VS Code, Windsurf, and other MCP-compatible clients. 11 tools exposed via FastMCP.
Lance columnar + R2
Local-first, cloud-synced embeddings
Architecture Timeline
The stack after the dead ends
Maguyva did not arrive at this shape all at once. Each layer survived because it made the system faster to reason about, cheaper to operate, or easier to debug under pressure.
The orchestration experiments began earlier; this timeline starts in September 2025, when the Maguyva product itself snapped into focus.
Focused Maguyva build starts → Voyage voyage-code-3
The product direction hardens and the rapid iteration loop begins
Binary vector retrieval layer
Quantized search with compact indexes, no separate vector stack
Code language engine comes online
Custom queries, parser fallbacks, and handler-based heuristics across 229 languages
Multi-modal fusion search
Ranked retrieval across semantic, text, AST, and graph indexes in a single query
Lance columnar + R2 sync
Local-first, 10x faster cold starts
229-language validation framework
Automated extraction testing across every supported language
Public launch at maguyva.ai
7 months from focused build to public availability
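The multi-modal fusion entry above merges ranked results from semantic, text, AST, and graph indexes. One common way to do that kind of merge is reciprocal rank fusion (RRF); the source does not say which method Maguyva uses, so treat this as a generic sketch:

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked lists: score(doc) = sum over lists of 1 / (k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical per-index rankings for one query.
semantic = ["chunk_a", "chunk_b", "chunk_c"]
text     = ["chunk_b", "chunk_a"]
ast      = ["chunk_b", "chunk_d"]
print(rrf([semantic, text, ast]))
# -> ['chunk_b', 'chunk_a', 'chunk_d', 'chunk_c']
```

RRF needs no score calibration across indexes, which makes it a natural fit when the underlying signals (cosine similarity, BM25, graph distance) live on incomparable scales.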
Customer Zero
> ps aux | grep maguyva
Maguyva is built by agents that run on Maguyva.
Our orchestration system coordinates specialist AI agents that use the same MCP tools, search indexes, and production infrastructure you get. There is no internal version. The team page lists “1x Maguyva ($49) — we pay ourselves.” That is not a joke. It is the invoice.
// customer zero ships to production daily
Design Principles
> cat /etc/principles.d/*.conf
These are the constraints that stayed useful in production. They are less about ideology and more about keeping the system understandable while agents are making changes at speed.
No Silent Fallbacks
Fail fast, fix fast. Every error surfaces immediately. No graceful degradation hiding bugs in production.
// panic > silent corruption
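What "no silent fallbacks" looks like in code, as a hypothetical sketch (the function and message are invented for illustration): the failure path raises instead of returning a plausible-looking empty value.

```python
def load_index(path: str) -> bytes:
    """Load an index file, refusing to serve anything suspicious.

    The anti-pattern would be `return b""` on failure, letting an empty
    index masquerade as a healthy one. Here every bad state surfaces.
    """
    with open(path, "rb") as f:  # FileNotFoundError propagates untouched
        data = f.read()
    if not data:
        raise RuntimeError(f"index at {path} is empty; refusing to serve")
    return data
```

A caller that cannot open or trust the index fails loudly at the call site, which is exactly where a 3am debugger wants the stack trace to point.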
RPC-First
Database-side logic for performance. PostgreSQL functions execute close to data, eliminating round trips.
// PostgreSQL functions, 0 ORMs
CQRS
Separate write and read paths. Indexing pipeline writes, MCP tools read. Different optimization strategies.
// writes: batched | reads: cached
Deterministic UUIDs
Cache coherence without database lookups. Same input = same ID. Idempotent operations by design.
// uuid5(namespace, content_hash)
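The `uuid5(namespace, content_hash)` comment above, made concrete with Python's standard library. The namespace value here is an assumption for illustration; the property that matters is that identical content always maps to an identical ID, so re-indexing an unchanged chunk is idempotent by construction:

```python
import hashlib
import uuid

# Hypothetical namespace; any fixed UUID works, it just must never change.
NAMESPACE = uuid.uuid5(uuid.NAMESPACE_URL, "maguyva.ai/chunks")

def chunk_id(content: bytes) -> uuid.UUID:
    """Deterministic ID: same bytes in, same UUID out, no DB lookup."""
    content_hash = hashlib.sha256(content).hexdigest()
    return uuid.uuid5(NAMESPACE, content_hash)

a = chunk_id(b"def index(repo): ...")
b = chunk_id(b"def index(repo): ...")
print(a == b)  # -> True: cache keys agree across processes and restarts
```

Because the ID is a pure function of content, two workers indexing the same chunk concurrently produce the same row, turning a race into a harmless duplicate write.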
The best architecture is the one you can debug at 3am.
(We've tested this.)
// last updated: 2026-03