What if you could build a complex, production-ready application with over 100,000 lines of code—including a C++ DirectX overlay—in just 3 months, without writing a single line yourself? This isn't science fiction. It's how EF-Map was built: a sophisticated EVE Frontier mapping tool with real-time blockchain indexing, 3D WebGL rendering, and native desktop integration—all created by someone with zero coding knowledge (and absolutely no C++ experience) through a process we call "vibe coding."
## The Premise: Intent Over Implementation
Traditional software development requires deep technical knowledge: understanding syntax, design patterns, debugging techniques, and countless framework-specific details. "Vibe coding" flips this paradigm. Instead of writing code, you describe what you want in plain language. An LLM agent (like GitHub Copilot in agent mode) translates your intent into working code, following strict guardrails and documentation patterns you've established.
The result? EF-Map has grown to include:
- 124,000+ lines of custom code across TypeScript, JavaScript, Python, SQL, and C++
- Real-time blockchain indexing via Docker-orchestrated MUD indexer and Postgres
- 3D interactive starmap with Three.js custom shaders and WebGL rendering
- Advanced pathfinding algorithms (A*, Dijkstra) running in Web Workers
- Cloudflare Pages + Worker backend serving 10,000+ monthly users
- Native desktop overlay helper (C++/DirectX 12) integrating with game client—built entirely through vibe coding despite zero C++ knowledge
- End-to-end encrypted tribal bookmarks using wallet-derived keys
All of this was built by describing features, not implementing them.
## The Foundation: Documentation as Code
The secret isn't magic—it's structured documentation. Before writing any code, you establish three critical documents that form a contract between you (the human operator) and the LLM agent:
### 1. AGENTS.md – The Workflow Primer
This file defines how the LLM should operate. It establishes:
- Workflow expectations: Start with brief acknowledgement + plan, manage todo lists with exactly one item in-progress, report deltas instead of repeating full context
- Operating rules: Prefer smallest safe change, run CLI commands yourself (never ask user), preview-only deploys until approved
- Fast context loading: Read troubleshooting guide first (reduces orientation time 50%+), skim last 40 lines of decision log
- Quality gates: Typecheck passes, build succeeds, smoke tests for core interactions
"Workflow primer (GPT-5 Codex): Start every reply with a brief acknowledgement plus a high-level plan. Manage work through the todo list tool with exactly one item `in-progress`; update statuses as soon as tasks start or finish. Report status as deltas—highlight what changed since the last message instead of repeating full plans."
### 2. copilot-instructions.md – Coding Patterns & Guardrails
This file contains the technical contract. It defines:
- Architecture overview: Component relationships, data flows, tech stack decisions
- Vibe coding guidance: Restate goal as checklist, identify risk level, propose minimal patch, offer rationale for alternatives
- Risk classes: Low (docs/CSS), Medium (new worker file), High (core rendering, schema changes)
- Conventions: Where to emit metrics (only `usage.ts`), how to handle worker progress (throttle ≤5Hz), cache invalidation rules
- Common failure modes: Double metric counting, routing cache staleness, worker progress spam
It also prescribes a fixed response pattern for every change:
1. Intent Echo: Restate the user's goal as a bullet checklist
2. Assumptions: Call out ≤2 inferred assumptions
3. Risk Class: Label Low/Medium/High + required escalation tokens
4. Plan: Files to edit, diff size, verification steps
5. Patch: Apply the minimal diff
6. Verify: Typecheck + build + smoke steps
7. Summarize: What changed, gate status, follow-ups
8. Decision Log: Append an entry if non-trivial
### 3. decision-log.md – The Living History
Every non-trivial change gets logged with a standardized template:
```markdown
## YYYY-MM-DD – <Title>
- Goal: What problem this solves
- Files: Files modified (with line counts if significant)
- Diff: Added/removed lines of code
- Risk: low/medium/high
- Gates: typecheck ✅ build ✅ smoke ✅
- Follow-ups: (optional future work)
```
This creates a searchable audit trail. When the LLM starts a new session, it reads the last 40 lines of the decision log to understand recent context. When debugging, it can search for related past decisions.
## Cross-Referencing: The Web of Context
Documentation isn't isolated. Each file references others, creating a knowledge graph the LLM can traverse:
| Need | Start here | Notes |
|---|---|---|
| Daily workflow refresher | AGENTS.md → "Workflow primer" | Combined contract for expectations and guardrails |
| Build & quality gates | LLM_TROUBLESHOOTING_GUIDE.md → "Verification Matrix" | Commands to run per change type (UI, worker, exporter, docs) |
| Cloudflare CLI ops | docs/CLI_WORKFLOWS.md → "Wrangler sandbox" | Copy-pastable sequences for deployments, KV inspection |
| Data pipeline orientation | LLM_TROUBLESHOOTING_GUIDE.md → "Snapshot lifecycle" | Visualized flow: Postgres → exporter → KV → Worker |
| Decision history | docs/decision-log.md | Chronological log, newest first (read last 40 lines) |
The LLM_TROUBLESHOOTING_GUIDE.md deserves special mention. It's a comprehensive orientation document that reduces agent startup time by 50%+. Instead of the LLM asking "Where do I find X?" or "How does Y work?", it reads this guide first and gets:
- System architecture diagrams (chain indexer → Postgres → exporter → Cloudflare KV → frontend)
- Component inventory (what each file/folder does)
- Data flow visualizations (route rendering, usage metrics, Smart Gate links)
- Common troubleshooting scenarios with checklists
- Postgres schema reference with sample queries
## Iterating in Small Steps: The Anti-Refactor Philosophy
Vibe coding enforces incremental delivery. The copilot-instructions.md explicitly states:
"Prefer smallest safe change; don't refactor broadly without explicit approval."
This principle prevents the LLM from over-engineering solutions. Every change follows this pattern:
- Describe the goal in plain language (e.g., "Add a toggle to show visited systems on the map with orange star highlights")
- LLM proposes minimal patch: "I'll add a boolean state in App.tsx, pass it to the starfield renderer, and modify the shader to check a visited-systems Set" (sketched below)
- User approves scope (or requests adjustments)
- LLM implements, runs typecheck + build, reports verification status
- Decision logged with risk assessment and gates
Each iteration is self-contained and reversible. If something breaks, you revert a single commit. No massive refactors that touch 20 files and introduce cascading bugs.
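To make the shape of a "minimal patch" concrete, here is a rough sketch of what step 2 above might produce. The component shape and renderer API are illustrative assumptions, not EF-Map's actual code:

```tsx
// Hypothetical sketch of the "show visited systems" toggle; all names assumed.
import { useState } from "react";

interface StarfieldRenderer {
  // Assumed API: the real renderer/shader wiring is not shown in this article.
  setVisitedHighlight(enabled: boolean, visited: Set<string>): void;
}

function VisitedSystemsToggle(props: { renderer: StarfieldRenderer; visited: Set<string> }) {
  const [show, setShow] = useState(false);

  return (
    <label>
      <input
        type="checkbox"
        checked={show}
        onChange={(e) => {
          setShow(e.target.checked);
          // The shader pass checks membership in the visited-systems Set
          props.renderer.setVisitedHighlight(e.target.checked, props.visited);
        }}
      />
      Show visited systems
    </label>
  );
}
```

The point is scope: one state flag, one renderer call, no refactor.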
A representative example from the decision log:

**User request:** "Add moving chevrons on Smart Gate route segments to match in-game aesthetic"

**LLM plan:** "I'll modify RouteRibbon shader to add a time-based chevron pattern, gated by a Display Settings toggle. Risk: low (visual-only, no data changes). Files: RouteRibbon.ts, DisplaySettings.tsx, usage.ts (for toggle metric)"

**Outcome:** 87 lines added across 3 files, typecheck ✅ build ✅ smoke ✅. Logged in decision-log.md (2025-09-23 entry). Deployed to preview, validated, merged to production.
## Managing Complexity: High-Risk Surfaces
Not all changes are created equal. The documentation explicitly flags high-risk surfaces that require extra coordination:
- Core render loop & global state (`App.tsx`, `src/scene/*`) – Impacts cinematic mode, panel wiring, selection handling
- Cloudflare Worker entrypoints (`_worker.js`) – Affects persistence, auth, API invariants; requires preview deploy + decision log entry
- Snapshot/export pipelines (`tools/snapshot-exporter/*`) – Can corrupt production data; run with `DRY_RUN=1` first (sketched below)
- Usage telemetry helpers (`src/utils/usage.ts`) – Guard against double counting
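As an illustration of the DRY_RUN convention, the guard can be a few lines; this Node-style sketch uses hypothetical names (`kvPut`, the log prefix) rather than the exporter's real code:

```typescript
// Hypothetical DRY_RUN guard for a snapshot exporter; all names illustrative.
const DRY_RUN = process.env.DRY_RUN === "1";

// Stand-in for whatever storage client the exporter actually uses.
async function kvPut(key: string, value: string): Promise<void> {
  /* real KV write omitted */
}

async function writeSnapshot(key: string, payload: string): Promise<void> {
  if (DRY_RUN) {
    // Log what would happen; touch nothing in production
    console.log(`[Exporter] DRY_RUN: would write ${payload.length} bytes to ${key}`);
    return;
  }
  await kvPut(key, payload);
}
```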
When the LLM identifies a high-risk change, it:
- Explicitly states the risk class
- Requests an escalation token (e.g., "CORE CHANGE OK") from the human operator
- Proposes a safer alternative if available
- Documents extra verification steps in the decision log
## The CLI Mandate: Agents Do The Work
A critical rule in the copilot-instructions.md:
"The assistant MUST directly run every Cloudflare / Wrangler CLI command that does not require pasting or revealing a secret value. Do NOT ask the operator to run a command the assistant can execute."
This eliminates a huge source of friction. Instead of:
LLM: "Please run: wrangler pages deploy dist --project-name ef-map"
Human: *copies command, pastes in terminal, waits for output, pastes back*
LLM: "Great, now run: wrangler pages deployment list"
Human: *repeats process*
The LLM just does it:
LLM: *executes wrangler pages deploy*
LLM: "Deployment succeeded. ID: abc123. Preview URL: https://preview.ef-map.pages.dev"
LLM: *executes wrangler pages deployment list*
LLM: "Confirmed deployment is live. Proceeding to smoke test..."
The human only intervenes for:
- Secret inputs (the LLM starts the command, prompts the human to paste the secret locally)
- Explicit approvals (production deploys, schema migrations)
- Visual smoke tests (confirming UI behavior in browser)
## Proactive Tooling: VS Code Extensions
The documentation explicitly tells the LLM to use VS Code extensions as the first choice for inspection tasks:
| Extension | Use Case | Why Prefer It |
|---|---|---|
| PostgreSQL | Browse schemas, run queries | Avoid PowerShell docker exec quoting hell |
| Docker | Inspect containers, view logs | Faster than CLI for one-off checks |
| Chrome DevTools MCP | Direct browser testing | CRITICAL: Reduced debugging from hours to minutes |
| SQLite | Open map_data_v2.db | Verify schema after regeneration |
| REST Client | Test API endpoints | Quick validation without curl commands |
The Chrome DevTools MCP server deserves special attention. It's a Model Context Protocol server that lets the LLM:
- Navigate to URLs directly
- Click buttons and interact with UI
- Inspect console logs (filtered by type: error, warn, log)
- Examine network requests/responses (status, headers, bodies)
- Take screenshots for visual verification
This eliminated the "transcription bottleneck" where the human had to manually inspect DevTools and describe what they saw. Now the LLM just looks directly.
**User reported:** "Helper bridge connection failing with 405 errors"

**Traditional debugging:** Human opens DevTools, screenshots errors, transcribes to LLM, LLM guesses, repeat...

**With Chrome DevTools MCP:** LLM navigated to the preview URL, clicked the connection button, inspected console errors, examined the network tab, and discovered malformed URL construction (the browser interpreted `127.0.0.1:38765/endpoint` as a relative path). Total time: <10 minutes vs. hours of manual back-and-forth.
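The underlying bug class is worth spelling out: a URL without an explicit scheme is resolved against the page origin instead of being treated as a host.

```typescript
// The failure mode from the anecdote (endpoint path as quoted above):
fetch("127.0.0.1:38765/endpoint");        // no scheme: resolved relative to the page origin
fetch("http://127.0.0.1:38765/endpoint"); // explicit scheme: actually reaches the local helper
```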
## Cross-Repo Coordination: Overlay Helper
EF-Map has a sibling repository (ef-map-overlay) containing the native Windows helper (C++, DirectX 12) that renders an in-game overlay. The documentation enforces synchronization:
"Native helper and DirectX overlay work now lives in the sibling repository ef-map-overlay. Keep shared documentation (AGENTS.md, copilot-instructions.md, decision logs) synchronized across both repos. When a task touches both projects, include cross-repo notes in each decision log entry."
This prevents drift. When the LLM makes a change that affects both repos (e.g., defining a new telemetry data contract), it:
- Updates the contract documentation in both repos
- Logs the decision in both decision-log.md files with cross-references
- Verifies compatibility with a smoke test script that exercises the integration
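A shared contract can be as small as a versioned type definition that both repos agree on. The fields below are illustrative assumptions, not EF-Map's actual schema:

```typescript
// Hypothetical shared telemetry contract; field names are illustrative.
// Both repos would document and version this shape together.
interface MiningTelemetryFrame {
  sessionId: string;     // correlates frames across helper restarts
  timestampMs: number;   // helper-side clock, ms since epoch
  oreVolumeM3: number;   // cumulative volume mined this session
  dps: number;           // current damage-per-second reading
}
```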
## Building a C++ DirectX Overlay Without C++ Knowledge
The overlay helper represents perhaps the most striking example of vibe coding's potential. The entire DirectX 12 overlay—16,000 lines of C++—was built by someone with zero C++ experience.
Features delivered include:
- DirectX 12 hook injection: Intercepts the game's swap chain present call
- Real-time route overlay: Displays calculated routes from the web app directly in-game
- Mining telemetry widgets: Live DPS, efficiency metrics, session tracking
- 3D star map renderer: Native OpenGL starfield visualization (in progress)
- Inter-process communication: Shared memory channels, event queues, WebSocket bridge (client sketch below)
- Windows tray application: System service with protocol handlers and log file parsing
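On the web-app side, the WebSocket bridge from the list above can be exercised with a plain WebSocket client. The port matches the debugging anecdote earlier; the path and message shape are assumptions for illustration:

```typescript
// Hypothetical client for the helper's WebSocket bridge.
// Port 38765 appears in the anecdote above; "/bridge" and the message
// shape are assumed for illustration.
const bridge = new WebSocket("ws://127.0.0.1:38765/bridge");

bridge.addEventListener("open", () => {
  // e.g. push the currently calculated route to the in-game overlay
  bridge.send(JSON.stringify({ type: "route", hops: ["A1-X", "B2-Y"] }));
});

bridge.addEventListener("message", (event) => {
  const msg = JSON.parse(event.data as string);
  console.log("[OverlayBridge]", msg); // telemetry frames, acks, etc.
});
```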
How is this possible without C++ expertise? The same documentation patterns:
- Clear intent: "Add a route widget that shows next 5 hops with system names and distances"
- LLM proposes implementation: "I'll create an `OverlayWidget` base class with ImGui rendering, store route data in `overlay_schema.hpp`, and update the renderer to draw the widget when route data is present"
- Verification: Launch helper externally (VS Code terminals fail injection), inject via process name, validate in-game overlay appears
- Decision logged: Files changed, DLL size increase, smoke test results
The operator doesn't need to understand `ID3D12Device`, `IDXGISwapChain3`, or COM interfaces. They just describe what they want to see in-game, and the LLM translates it into working DirectX code.
**Traditional path:** Spend months learning C++, study DirectX documentation, understand graphics pipelines, debug memory leaks...

**Vibe coding path:** Describe desired overlay behavior, LLM generates code following established patterns, verify it works in-game. Shipped a working overlay in weeks, not months.
## The Numbers: What Vibe Coding Delivered
Let's look at concrete metrics from EF-Map's development:
- Development time: ~3 months from concept to production (solo, part-time)
- Lines of code: 124,000+ across TypeScript, JavaScript, Python, C++, SQL (excluding dependencies and auto-generated data)
- Decision log entries: 350+ major features/fixes documented (339 current + 11 archived)
- Production deployments: 766 commits to production (all via preview → approval → production pipeline)
- Zero production outages from code changes (defensive deployment strategy works)
- Monthly active users: 10,000+ using the web app
- Overlay helper downloads: ~50 Windows installs (released November 2025, growing rapidly)
Features delivered through vibe coding include:
- Real-time blockchain event indexing (Primordium MUD indexer → Postgres)
- 3D starmap with 24,000+ star systems (Three.js custom shaders)
- A*/Dijkstra pathfinding in Web Workers (handles 100k+ edges)
- Smart Gate integration (on-chain player structures enabling instant travel)
- Multi-waypoint route optimization (genetic algorithm in worker pool)
- End-to-end encrypted tribal bookmarks (AES-256-GCM, wallet-derived keys; see the sketch after this list)
- Native DirectX 12 overlay with live telemetry (mining rates, DPS, visited systems)
- Anonymous aggregate usage analytics (privacy-first, Cloudflare KV)
- Grafana observability dashboards (chain indexer health, API ingestion rates)
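As one concrete illustration from that list, browser-side AES-256-GCM via WebCrypto fits in a dozen lines. How EF-Map derives key material from the wallet is not covered in this article, so the sketch takes it as an input:

```typescript
// Hypothetical sketch of encrypted-bookmark storage with WebCrypto AES-GCM.
// keyMaterial must be 32 bytes for AES-256; wallet-based derivation not shown.
async function encryptBookmark(
  keyMaterial: ArrayBuffer,
  plaintext: string
): Promise<{ iv: Uint8Array; ciphertext: ArrayBuffer }> {
  const key = await crypto.subtle.importKey("raw", keyMaterial, "AES-GCM", false, ["encrypt"]);
  const iv = crypto.getRandomValues(new Uint8Array(12)); // 96-bit nonce, standard for GCM
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext)
  );
  return { iv, ciphertext }; // store both; the server only ever sees ciphertext
}
```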
## Common Failure Modes & How Documentation Prevents Them
The copilot-instructions.md includes a "Common Failure Modes & Preventers" section. Here are examples:
| Failure Mode | Preventer | How It's Enforced |
|---|---|---|
| Double metric counting | Centralize in `usage.ts` only | LLM rejects adding `track()` calls elsewhere; proposes moving to `usage.ts` |
| Routing cache stale after jump distance change | Ensure spatial cache invalidation | LLM checks for `spatialGrids.clear()` when `cellSize` changes |
| Route note pagination regressions | Keep segment-first pagination | LLM refers to `P2PRouting.tsx` pattern when adding pagination |
| Worker progress spam | Throttle ≥200ms between updates | LLM mirrors the existing throttle pattern in new workers (sketched below) |
Each preventer is encoded in documentation, not tribal knowledge. New LLM sessions automatically apply these patterns.
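That is what makes these preventers enforceable: each compresses to a few lines the LLM can mirror. The worker-progress throttle, for example, might look like this (a sketch, not EF-Map's actual worker code):

```typescript
// Minimal worker-progress throttle (≥200 ms between updates, i.e. ≤5 Hz).
let lastProgressAt = 0;

function reportProgress(done: number, total: number): void {
  const now = Date.now();
  if (now - lastProgressAt < 200) return; // drop updates that arrive too fast
  lastProgressAt = now;
  self.postMessage({ type: "progress", done, total }); // worker global scope
}
```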
## When Vibe Coding Shines (And When It Doesn't)
**Ideal Use Cases:**
- Well-defined problem domains (mapping, routing, data visualization)
- Projects with clear requirements but evolving implementation details
- Solo or small team projects where documentation overhead is manageable
- Domains where you understand what you want but not how to build it
- Rapid prototyping with production-quality code as the goal
**Challenging Use Cases:**
- Highly novel algorithms without prior art (LLM has fewer patterns to reference)
- Real-time systems requiring microsecond-level optimization (native code may need human expertise)
- Large teams (documentation synchronization becomes harder)
- Domains requiring deep mathematical proofs or formal verification
- Situations where the human doesn't understand the problem domain well enough to evaluate LLM output
Vibe coding works best when you can clearly describe desired behavior and acceptance criteria but don't know the technical implementation. The LLM translates intent into code; you validate the result matches your intent.
## Lessons Learned: What Makes Vibe Coding Successful
### 1. Front-Load Documentation
Spend time upfront defining AGENTS.md, copilot-instructions.md, and your architecture. This pays massive dividends. Every hour spent on documentation saves dozens of hours in miscommunication.
### 2. Maintain Decision Log Discipline
Log every non-trivial change. This creates searchable history and prevents "Why did we do it this way?" amnesia. The LLM uses this to avoid repeating past mistakes.
### 3. Embrace Preview Deployments
Never touch production without testing in a preview environment first. Cloudflare Pages makes this trivial: every branch gets its own URL. The copilot-instructions.md enforces: "Preview-only rule: Any website/Worker/API changes must be tested via Cloudflare Pages Preview deployments first."
### 4. Use Quality Gates as First-Class Citizens
Typecheck + build + smoke tests aren't optional. They're encoded in the workflow. The LLM runs them automatically and reports status. If a gate fails, the LLM proposes a fix before moving forward.
### 5. Treat the LLM as a Strict Follower of Rules
LLMs excel at following documented patterns. They struggle with implicit knowledge. Make everything explicit. Instead of "use common sense for error handling," write: "Return 4xx for client errors early, wrap external calls in try/catch, log errors to console with [ComponentName] prefix."
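Encoded as code, that explicit rule might look like this sketch of a Worker-style handler (endpoint and names invented for illustration):

```typescript
// Hypothetical handler following the explicit error-handling rule above.
async function handleRequest(request: Request): Promise<Response> {
  const id = new URL(request.url).searchParams.get("id");
  if (!id) {
    return new Response("Missing 'id' parameter", { status: 400 }); // 4xx early for client errors
  }
  try {
    // External call wrapped in try/catch, per the documented convention
    const upstream = await fetch(`https://api.example.com/items/${id}`);
    return new Response(await upstream.text(), { status: upstream.status });
  } catch (err) {
    console.error("[ItemsHandler]", err); // [ComponentName] prefix per the rule
    return new Response("Upstream failure", { status: 502 });
  }
}
```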
### 6. Iterate on Documentation Based on Failures
When the LLM makes a mistake, don't just fix the code—update the documentation to prevent the same mistake in future sessions. Treat docs as living, evolving contracts.
### 7. Leverage Cross-Referencing Heavily
Don't duplicate information. Instead, create a web of references. The LLM_TROUBLESHOOTING_GUIDE.md acts as a hub pointing to specialized docs. This prevents documentation drift: each fact lives in one authoritative place, and updating that source updates every path that references it.
## The Future: Scaling Vibe Coding
EF-Map's success with vibe coding raises interesting questions:
- **Can non-coders compete with traditional development teams?** In certain domains, yes. EF-Map delivers features faster than some funded startups with engineering teams.
- **What's the ceiling on project complexity?** Currently unknown. EF-Map, at well over 100k lines of custom code, is already pushing boundaries. Larger projects may need modular documentation strategies.
- **How does this change software economics?** Development costs drop dramatically. The limiting factor becomes understanding the problem domain, not technical implementation.
- **What skills matter in a vibe coding world?** System design, user-experience intuition, problem decomposition, and technical writing become more valuable than syntax mastery.
## Getting Started with Vibe Coding
If you want to try vibe coding for your own project:
1. Start small: Pick a well-defined feature (e.g., "add dark mode toggle")
2. Create minimal docs: Write a simple AGENTS.md with workflow rules and a PROJECT_REQUIREMENTS.md with feature specs
3. Use an LLM with agent mode: GitHub Copilot, Cursor, or similar tools that support extended context and tool use
4. Establish a decision log habit: Log every change with goal/files/risk/gates
5. Iterate on documentation: When the LLM misunderstands, clarify the docs
6. Add cross-references gradually: As your project grows, create troubleshooting guides and CLI workflow docs
The goal isn't perfection from day one. It's creating a feedback loop where documentation, code, and your understanding of the problem all improve together.
## Conclusion: Democratizing Software Development
Vibe coding isn't just a productivity hack for existing developers—it's a paradigm shift that makes software development accessible to anyone who can clearly describe a problem. You don't need to know the difference between a closure and a callback. You need to know what you want to build and be able to evaluate whether the result works.
EF-Map proves this works at scale. A single non-coder, using structured documentation and LLM agents, delivered a production application serving thousands of users with features that would typically require a small engineering team.
The code isn't magic. The LLM isn't sentient. The secret is structured communication: clear intent, documented patterns, cross-referenced knowledge, and disciplined iteration.
If you can describe what you want and verify it works, you can build software. That's the promise of vibe coding.