
Vibe Coding: Building a 124,000-Line Project in 3 Months Without Writing Code

What if you could build a complex, production-ready application with over 100,000 lines of code—including a C++ DirectX overlay—in just 3 months, without writing a single line yourself? This isn't science fiction. It's how EF-Map was built: a sophisticated EVE Frontier mapping tool with real-time blockchain indexing, 3D WebGL rendering, and native desktop integration—all created by someone with zero coding knowledge (and absolutely no C++ experience) through a process we call "vibe coding."

The Premise: Intent Over Implementation

Traditional software development requires deep technical knowledge: understanding syntax, design patterns, debugging techniques, and countless framework-specific details. "Vibe coding" flips this paradigm. Instead of writing code, you describe what you want in plain language. An LLM agent (like GitHub Copilot in agent mode) translates your intent into working code, following strict guardrails and documentation patterns you've established.

The result? EF-Map has grown to encompass a 3D WebGL map, real-time blockchain indexing, Cloudflare Workers APIs, and a native desktop helper with a C++ DirectX overlay. All of it was built by describing features, not implementing them.

The Foundation: Documentation as Code

The secret isn't magic—it's structured documentation. Before writing any code, you establish three critical documents that form a contract between you (the human operator) and the LLM agent:

1. AGENTS.md – The Workflow Primer

This file defines how the LLM should operate: how to acknowledge requests, plan work, track tasks through a todo list, and report status.

Example from EF-Map's AGENTS.md:

"Workflow primer (GPT-5 Codex): Start every reply with a brief acknowledgement plus a high-level plan. Manage work through the todo list tool with exactly one item `in-progress`; update statuses as soon as tasks start or finish. Report status as deltas—highlight what changed since the last message instead of repeating full plans."

2. copilot-instructions.md – Coding Patterns & Guardrails

This file contains the technical contract: coding patterns, guardrails, risk classes, and the step-by-step interaction protocol below.

Assistant Interaction Protocol (from copilot-instructions.md):
  1. Intent Echo: Restate user goal as bullet checklist
  2. Assumptions: Call out ≤2 inferred assumptions
  3. Risk Class: Label Low/Medium/High + required tokens
  4. Plan: Files to edit, diff size, verification steps
  5. Patch: Apply minimal diff
  6. Verify: Typecheck + build + smoke steps
  7. Summarize: What changed, gates status, follow-ups
  8. Decision Log: Append entry if non-trivial

3. decision-log.md – The Living History

Every non-trivial change gets logged with a standardized template:

## YYYY-MM-DD – <Title>
- Goal: What problem this solves
- Files: Files modified (with line counts if significant)
- Diff: Added/removed lines of code
- Risk: low/medium/high
- Gates: typecheck ✅ build ✅ smoke ✅
- Follow-ups: (optional future work)

This creates a searchable audit trail. When the LLM starts a new session, it reads the last 40 lines of the decision log to understand recent context. When debugging, it can search for related past decisions.
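
For illustration, here's how the Smart Gate chevrons change described later in this post would be logged under that template (reconstructed from the facts in that example, not copied verbatim from the real log):

## 2025-09-23 – Smart Gate route chevrons
- Goal: Moving chevrons on Smart Gate route segments to match in-game aesthetic
- Files: RouteRibbon.ts, DisplaySettings.tsx, usage.ts
- Diff: +87 lines across 3 files
- Risk: low (visual-only, no data changes)
- Gates: typecheck ✅ build ✅ smoke ✅
- Follow-ups: none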

Cross-Referencing: The Web of Context

Documentation isn't isolated. Each file references others, creating a knowledge graph the LLM can traverse:

- Daily workflow refresher: AGENTS.md → "Workflow primer" (combined contract for expectations and guardrails)
- Build & quality gates: LLM_TROUBLESHOOTING_GUIDE.md → "Verification Matrix" (commands to run per change type: UI, worker, exporter, docs)
- Cloudflare CLI ops: docs/CLI_WORKFLOWS.md → "Wrangler sandbox" (copy-pastable sequences for deployments, KV inspection)
- Data pipeline orientation: LLM_TROUBLESHOOTING_GUIDE.md → "Snapshot lifecycle" (visualized flow: Postgres → exporter → KV → Worker)
- Decision history: docs/decision-log.md (chronological log, newest first; read the last 40 lines)

The LLM_TROUBLESHOOTING_GUIDE.md deserves special mention. It's a comprehensive orientation document that reduces agent startup time by 50%+. Instead of the LLM asking "Where do I find X?" or "How does Y work?", it reads this guide first and gets its verification commands, data-pipeline orientation, and pointers into the specialized docs.

Iterating in Small Steps: The Anti-Refactor Philosophy

Vibe coding enforces incremental delivery. The copilot-instructions.md explicitly states:

"Prefer smallest safe change; don't refactor broadly without explicit approval."

This principle prevents the LLM from over-engineering solutions. Every change follows this pattern:

  1. Describe the goal in plain language (e.g., "Add a toggle to show visited systems on the map with orange star highlights")
  2. LLM proposes minimal patch: "I'll add a boolean state in App.tsx, pass it to the starfield renderer, and modify the shader to check a visited-systems Set"
  3. User approves scope (or requests adjustments)
  4. LLM implements, runs typecheck + build, reports verification status
  5. Decision logged with risk assessment and gates

Each iteration is self-contained and reversible. If something breaks, you revert a single commit. No massive refactors that touch 20 files and introduce cascading bugs.
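
To make step 2 concrete, here is a minimal TypeScript sketch of such a patch; the Starfield component and the system IDs are placeholders, not EF-Map's actual code:

  // Sketch only: a boolean toggle plus a Set of visited system IDs that the
  // renderer uses to decide which stars to highlight.
  import { useState } from "react";

  declare function Starfield(props: { highlight?: Set<string> }): JSX.Element;

  const visitedSystems = new Set<string>(["SYS-001", "SYS-042"]); // hypothetical IDs

  export function App() {
    const [showVisited, setShowVisited] = useState(false);
    return (
      <>
        <label>
          <input
            type="checkbox"
            checked={showVisited}
            onChange={(e) => setShowVisited(e.target.checked)}
          />
          Show visited systems
        </label>
        {/* The renderer highlights stars whose ID is in the Set */}
        <Starfield highlight={showVisited ? visitedSystems : undefined} />
      </>
    );
  }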

Real Example: Smart Gate Chevrons Feature

User request: "Add moving chevrons on Smart Gate route segments to match in-game aesthetic"

LLM plan: "I'll modify RouteRibbon shader to add a time-based chevron pattern, gated by a Display Settings toggle. Risk: low (visual-only, no data changes). Files: RouteRibbon.ts, DisplaySettings.tsx, usage.ts (for toggle metric)"

Outcome: 87 lines added across 3 files, typecheck ✅ build ✅ smoke ✅. Logged in decision-log.md (2025-09-23 entry). Deployed to preview, validated, merged to production.
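
The heart of such a patch really is small. Here is a hedged sketch of a time-based chevron mask as a GLSL chunk embedded in TypeScript; the uniform names and the exact pattern are illustrative, not RouteRibbon's actual shader:

  // Animated chevrons: a sawtooth in uTime scrolls V-shaped wedges along the
  // route segment's UV space.
  export const chevronChunk = /* glsl */ `
    uniform float uTime;          // seconds, advanced each frame
    uniform float uChevronCount;  // wedges per segment

    float chevronMask(vec2 uv) {
      float along = fract(uv.x * uChevronCount - uTime);  // scrolling sawtooth
      float across = abs(uv.y - 0.5) * 2.0;               // 0 at centerline, 1 at edges
      // Pixels pass inside a band that shifts with |y|, tracing a moving V.
      return step(across, along) * step(along, across + 0.25);
    }
  `;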

Managing Complexity: High-Risk Surfaces

Not all changes are created equal. The documentation explicitly flags high-risk surfaces that require extra coordination.

When the LLM identifies a high-risk change, it:

  1. Explicitly states the risk class
  2. Requests an escalation token (e.g., "CORE CHANGE OK") from the human operator
  3. Proposes a safer alternative if available
  4. Documents extra verification steps in the decision log
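
In practice the exchange might look like this (paraphrased for illustration, not a real transcript):

LLM: "This touches the routing core. Risk: High. I need the token CORE CHANGE OK to proceed, or I can scope this down to a read-only diagnostic first."
Human: "CORE CHANGE OK. Go ahead, but log the extra verification steps."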

The CLI Mandate: Agents Do The Work

A critical rule in the copilot-instructions.md:

"The assistant MUST directly run every Cloudflare / Wrangler CLI command that does not require pasting or revealing a secret value. Do NOT ask the operator to run a command the assistant can execute."

This eliminates a huge source of friction. Instead of:

LLM: "Please run: wrangler pages deploy dist --project-name ef-map"
Human: *copies command, pastes in terminal, waits for output, pastes back*
LLM: "Great, now run: wrangler pages deployment list"
Human: *repeats process*

The LLM just does it:

LLM: *executes wrangler pages deploy*
LLM: "Deployment succeeded. ID: abc123. Preview URL: https://preview.ef-map.pages.dev"
LLM: *executes wrangler pages deployment list*
LLM: "Confirmed deployment is live. Proceeding to smoke test..."

The human only intervenes to paste secret values or to grant escalation tokens.

Proactive Tooling: VS Code Extensions

The documentation explicitly tells the LLM to use VS Code extensions as the first choice for inspection tasks:

- PostgreSQL: browse schemas and run queries (avoids PowerShell docker exec quoting hell)
- Docker: inspect containers and view logs (faster than the CLI for one-off checks)
- Chrome DevTools MCP: direct browser testing (critical: reduced debugging from hours to minutes)
- SQLite: open map_data_v2.db (verify schema after regeneration)
- REST Client: test API endpoints (quick validation without curl commands)

The Chrome DevTools MCP server deserves special attention. It's a Model Context Protocol server that lets the LLM drive a real browser: navigate to pages, click elements, read console errors, and inspect network requests directly.

This eliminated the "transcription bottleneck" where the human had to manually inspect DevTools and describe what they saw. Now the LLM just looks directly.

Real Impact: HTTP 405 Bug Discovery (2025-10-27)

User reported: "Helper bridge connection failing with 405 errors"

Traditional debugging: Human opens DevTools, screenshots errors, transcribes to LLM, LLM guesses, repeat...

With Chrome DevTools MCP: LLM navigated to the preview URL, clicked the connection button, inspected console errors, examined the network tab, and discovered malformed URL construction (the URL lacked an http:// scheme, so the browser interpreted 127.0.0.1:38765/endpoint as a relative path). Total time: <10 minutes vs. hours of manual back-and-forth.
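
The bug class is easy to reproduce. A minimal TypeScript illustration (endpoint and hostnames invented, not the helper's actual code):

  // Without a scheme, fetch() resolves the string against the page's origin,
  // so the request hits the website instead of the local helper.
  const helper = "127.0.0.1:38765";

  await fetch(`${helper}/connect`);        // BAD: relative path, e.g.
                                           // https://ef-map.example/127.0.0.1:38765/connect → 405
  await fetch(`http://${helper}/connect`); // GOOD: absolute URL, reaches the helper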

Cross-Repo Coordination: Overlay Helper

EF-Map has a sibling repository (ef-map-overlay) containing the native Windows helper (C++, DirectX 12) that renders an in-game overlay. The documentation enforces synchronization:

"Native helper and DirectX overlay work now lives in the sibling repository ef-map-overlay. Keep shared documentation (AGENTS.md, copilot-instructions.md, decision logs) synchronized across both repos. When a task touches both projects, include cross-repo notes in each decision log entry."

This prevents drift. When the LLM makes a change that affects both repos (e.g., defining a new telemetry data contract), it:

  1. Updates the contract documentation in both repos
  2. Logs the decision in both decision-log.md files with cross-references
  3. Verifies compatibility with a smoke test script that exercises the integration
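
As a sketch, such a contract might be pinned down as a versioned TypeScript type; the field names here are hypothetical, while the real contract lives in both repos' documentation:

  // One shape, documented identically in EF-Map and ef-map-overlay, so the
  // web app and the C++ helper cannot silently diverge.
  export interface OverlayTelemetryV1 {
    schemaVersion: 1;   // bumped in both repos on breaking changes
    systemId: string;   // solar system the player currently occupies
    capturedAt: number; // Unix timestamp in milliseconds
  }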

Building a C++ DirectX Overlay Without C++ Knowledge

The overlay helper represents perhaps the most striking example of vibe coding's potential. The entire DirectX 12 overlay—16,000 lines of C++—was built by someone with zero C++ experience.

How is this possible without C++ expertise? The same documentation patterns apply: the overlay repo carries its own AGENTS.md, copilot-instructions.md, and decision log, and every change moves through the same plan, patch, and verify loop.

The operator doesn't need to understand ID3D12Device, IDXGISwapChain3, or COM interfaces. They just describe what they want to see in-game, and the LLM translates it into working DirectX code.

Learning Through Documentation, Not Tutorials

Traditional path: Spend months learning C++, study DirectX documentation, understand graphics pipelines, debug memory leaks...

Vibe coding path: Describe desired overlay behavior, LLM generates code following established patterns, verify it works in-game. Shipped a working overlay in weeks, not months.

The Numbers: What Vibe Coding Delivered

Let's look at concrete metrics from EF-Map's development: roughly 124,000 lines of code across two repositories, including a 16,000-line C++ DirectX overlay, shipped in three months by a single operator who wrote none of it by hand, and now serving thousands of users.

Common Failure Modes & How Documentation Prevents Them

The copilot-instructions.md includes a "Common Failure Modes & Preventers" section. Here are examples:

- Double metric counting → centralize metrics in usage.ts only. Enforced: the LLM rejects track() calls elsewhere and proposes moving them to usage.ts.
- Routing cache stale after jump-distance change → ensure spatial cache invalidation. Enforced: the LLM checks for spatialGrids.clear() when cellSize changes.
- Route note pagination regressions → keep segment-first pagination. Enforced: the LLM refers to the P2PRouting.tsx pattern when adding pagination.
- Worker progress spam → throttle updates to ≥200ms. Enforced: the LLM mirrors the existing throttle pattern in new workers.

Each preventer is encoded in documentation, not tribal knowledge. New LLM sessions automatically apply these patterns.
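
The first preventer, for instance, reduces to a single choke point. A hedged sketch of what a centralized usage.ts might export (the signature and endpoint are assumptions, not EF-Map's actual module):

  // usage.ts: the only module allowed to emit metrics. Every feature imports
  // track() instead of calling the analytics backend itself, so an event can
  // only ever be counted in one place.
  export function track(event: string, props?: Record<string, unknown>): void {
    navigator.sendBeacon("/api/metrics", JSON.stringify({ event, props }));
  }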

When Vibe Coding Shines (And When It Doesn't)

Ideal use cases are features whose behavior you can describe and verify from the outside, such as the UI toggles and visual effects above; the approach gets harder when you can't articulate acceptance criteria or can't check the result yourself.

The key insight: vibe coding works best when you can clearly describe desired behavior and acceptance criteria but don't know the technical implementation. The LLM translates intent into code; you validate the result matches your intent.

Lessons Learned: What Makes Vibe Coding Successful

1. Front-Load Documentation

Spend time upfront defining AGENTS.md, copilot-instructions.md, and your architecture. This pays massive dividends. Every hour spent on documentation saves dozens of hours in miscommunication.

2. Maintain Decision Log Discipline

Log every non-trivial change. This creates searchable history and prevents "Why did we do it this way?" amnesia. The LLM uses this to avoid repeating past mistakes.

3. Embrace Preview Deployments

Never touch production without testing in a preview environment first. Cloudflare Pages makes this trivial: every branch gets its own URL. The copilot-instructions.md enforces: "Preview-only rule: Any website/Worker/API changes must be tested via Cloudflare Pages Preview deployments first."
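
In practice the preview flow is one command per branch, something like this (standard Wrangler flags, assumed rather than copied from EF-Map's scripts):

  wrangler pages deploy dist --project-name ef-map --branch my-feature
  # Cloudflare returns a unique URL such as https://my-feature.ef-map.pages.dev;
  # production only changes when the main branch is deployed.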

4. Use Quality Gates as First-Class Citizens

Typecheck + build + smoke tests aren't optional. They're encoded in the workflow. The LLM runs them automatically and reports status. If a gate fails, the LLM proposes a fix before moving forward.

5. Treat the LLM as a Strict Follower of Rules

LLMs excel at following documented patterns. They struggle with implicit knowledge. Make everything explicit. Instead of "use common sense for error handling," write: "Return 4xx for client errors early, wrap external calls in try/catch, log errors to console with [ComponentName] prefix."
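
A rule that explicit is directly executable. A minimal sketch of a handler obeying it (hypothetical Worker code, not from EF-Map):

  // Follows the documented rule: reject client errors early, wrap external
  // input handling in try/catch, prefix console output with the component name.
  export async function handleMetrics(req: Request): Promise<Response> {
    if (req.method !== "POST") {
      return new Response("Method Not Allowed", { status: 405 }); // early 4xx
    }
    try {
      const body = await req.json();
      return Response.json({ ok: true, received: body });
    } catch (err) {
      console.error("[MetricsWorker]", err); // [ComponentName] prefix
      return new Response("Bad Request", { status: 400 });
    }
  }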

6. Iterate on Documentation Based on Failures

When the LLM makes a mistake, don't just fix the code—update the documentation to prevent the same mistake in future sessions. Treat docs as living, evolving contracts.

7. Leverage Cross-Referencing Heavily

Don't duplicate information. Instead, create a web of references. The LLM_TROUBLESHOOTING_GUIDE.md acts as a hub pointing to specialized docs. This prevents documentation drift: you update the authoritative source once, and every reference to it stays current.

The Future: Scaling Vibe Coding

EF-Map's success with vibe coding raises interesting questions about how far the approach can scale.

Getting Started with Vibe Coding

If you want to try vibe coding for your own project:

  1. Start small: Pick a well-defined feature (e.g., "add dark mode toggle")
  2. Create minimal docs: Write a simple AGENTS.md with workflow rules and a PROJECT_REQUIREMENTS.md with feature specs (a starter sketch follows this list)
  3. Use an LLM with agent mode: GitHub Copilot, Cursor, or similar tools that support extended context and tool use
  4. Establish a decision log habit: Log every change with goal/files/risk/gates
  5. Iterate on documentation: When the LLM misunderstands, clarify the docs
  6. Add cross-references gradually: As your project grows, create troubleshooting guides and CLI workflow docs
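
For step 2, a starter AGENTS.md can be just a few lines (illustrative, not EF-Map's actual file):

  # AGENTS.md
  - Start every reply with a brief plan; keep exactly one task in progress.
  - Prefer the smallest safe change; ask before refactoring broadly.
  - Run typecheck + build after every patch and report the results.
  - Log non-trivial changes in a decision log (goal/files/risk/gates).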

The goal isn't perfection from day one. It's creating a feedback loop where documentation, code, and your understanding of the problem all improve together.

Conclusion: Democratizing Software Development

Vibe coding isn't just a productivity hack for existing developers—it's a paradigm shift that makes software development accessible to anyone who can clearly describe a problem. You don't need to know the difference between a closure and a callback. You need to know what you want to build and be able to evaluate whether the result works.

EF-Map proves this works at scale. A single non-coder, using structured documentation and LLM agents, delivered a production application serving thousands of users with features that would typically require a small engineering team.

The code isn't magic. The LLM isn't sentient. The secret is structured communication: clear intent, documented patterns, cross-referenced knowledge, and disciplined iteration.

If you can describe what you want and verify it works, you can build software. That's the promise of vibe coding.

Tags: vibe coding · LLM development · AI coding · GitHub Copilot · agent mode · non-coder development · documentation-driven · iterative development · EVE Frontier tools