"Your map is using 28% of my CPU just sitting there idle. That's a lot for a map that isn't doing anything." This feedback from an EVE Frontier player kicked off a debugging session that revealed hidden performance costs, taught valuable lessons about how Chrome measures CPU, and resulted in a solution that lets users cut CPU consumption by roughly 75% while still capturing every universe event for later browsing.
The Complaint: Idle Map, Active CPU
A user reported that having EF-Map open in a Chrome tab was consuming 20-28% CPU according to Chrome's Task Manager—even when they weren't interacting with the map at all. No panning, no zooming, just the tab sitting there in the background while they played the game.
For context, EF-Map is a Three.js-based 3D star map displaying 24,000+ solar systems with interactive features like live universe events, animated halos, and a scrolling event ticker. It's not a simple static page—but 28% CPU for doing "nothing" seemed excessive.
The Vibe Coding Investigation
As a non-coder using the "vibe coding" methodology, I don't dive into code myself. Instead, I describe problems and goals to an LLM assistant (GitHub Copilot in agent mode), and we work through solutions together. This investigation was a perfect example of that collaborative debugging process.
Understanding the Rendering Architecture
First, the LLM explained how Three.js rendering works:
- Animation loop runs at 60fps: The map uses `requestAnimationFrame`, which fires ~60 times per second
- Each frame re-renders everything: WebGL doesn't support partial scene updates—every frame redraws all 24,000+ stars, rings, gates, and effects
- Continuous rendering by default: Even if nothing changed, we were still rendering
The initial hypothesis was simple: if we throttle rendering when idle, CPU usage should drop.
First Attempt: Idle Render Throttling
We implemented a "dirty flag" system:
- Only render if something changed (camera moved, new selection, etc.)
- When idle, drop to ~5 frames per second instead of 60
- Detect camera changes via OrbitControls event listeners
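The dirty-flag system above can be sketched as a small, framework-free scheduler (class and method names here are illustrative, not the actual EF-Map code). It renders immediately when something changed, and otherwise allows a keep-alive frame only every 200 ms (~5 fps):

```javascript
// Hypothetical dirty-flag throttle: render on change, else ~5 fps idle.
class RenderThrottle {
  constructor({ idleFps = 5 } = {}) {
    this.idleInterval = 1000 / idleFps; // ms between idle keep-alive frames
    this.dirty = true;                  // render at least once on startup
    this.lastRender = -Infinity;
  }
  markDirty() {                         // wired to OrbitControls' "change" event
    this.dirty = true;
  }
  shouldRender(nowMs) {
    if (this.dirty) {
      this.dirty = false;
      this.lastRender = nowMs;
      return true;                      // something changed: render now
    }
    if (nowMs - this.lastRender >= this.idleInterval) {
      this.lastRender = nowMs;
      return true;                      // idle keep-alive frame (~5 fps)
    }
    return false;                       // skip this frame entirely
  }
}
```

Inside the `requestAnimationFrame` loop, you would call `renderer.render(scene, camera)` only when `shouldRender(performance.now())` returns true.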
Testing showed this worked perfectly for background tabs—when the user switched to another tab, CPU dropped to near zero. But when the tab was visible but idle, CPU was still high.
The Culprit: Live Events
Through Chrome Task Manager monitoring, we identified the real problem: Live Events features force continuous rendering.
What Forces 60fps Rendering
- Event Halos: Visual rings on the map that persist 30 seconds (20s display + 10s fade). With ~1 event per second, there could be 30+ concurrent halos, each animating every frame
- Event Flashes: Quick ~500ms flash animations when events occur
- Christmas Lights: Seasonal decorative animation on the event ticker
- Ticker scrolling: CSS transform-based, actually lightweight (compositor-handled)
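That "30+ concurrent halos" figure follows from a back-of-envelope steady-state calculation (a hypothetical helper for illustration, not EF-Map code): concurrent animations ≈ arrival rate × lifetime.

```javascript
// Steady-state concurrency: arrivals per second × lifetime in seconds.
// With ~1 event/s and a 30 s halo lifetime (20 s display + 10 s fade),
// roughly 30 halos are animating at any given moment.
function concurrentHalos(eventsPerSecond, lifetimeSeconds) {
  return eventsPerSecond * lifetimeSeconds;
}
// concurrentHalos(1, 30) → 30
```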
When the user disabled Live Events via the toggle, CPU dropped from 33% to 7%. Mystery solved—but this revealed a new problem.
The Dilemma: Features vs Performance
The Live Events feature isn't just eye candy. It provides:
- Event history: Up to 24 hours of universe events stored in IndexedDB
- Search functionality: Find specific events by type, system, or player
- Replay mode: Watch historical events play back on the map
- Session statistics: Track events received during your session
If users disabled Live Events to save CPU, they'd lose all this history. The WebSocket connection was tied to the display toggle—when you disabled the display, you also stopped receiving events entirely.
The Solution: Separate Capture from Display
The insight was simple: keep the WebSocket connected and save events to IndexedDB regardless of whether we're displaying them.
Changes made:
- WebSocket stays connected: Changed `enabled: userToggle` to `enabled: true` so we always receive events
- Events always saved: IndexedDB capture continues regardless of display state
- Only suppress visuals: Halos, flashes, ticker scrolling, and Christmas lights are disabled when the toggle is off
- Session stats keep updating: Connection count, event counts, and session timer remain active
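The decoupling reduces to one event handler in which persistence and stats are unconditional, and only the visual side effects check the toggle. A minimal sketch (all names are illustrative, not the actual EF-Map code):

```javascript
// Hypothetical handler: capture always happens; display is optional.
function handleLiveEvent(event, { displayEnabled, store, visuals, stats }) {
  store.save(event);            // IndexedDB capture: always runs
  stats.eventCount += 1;        // session stats: always update
  if (!displayEnabled) return;  // toggle off => suppress visuals only
  visuals.spawnHalo(event);     // 30 s ring on the map
  visuals.flashSystem(event);   // ~500 ms flash at the event's system
}
```

The design choice is simply ordering: the early return sits after every data-capture line, so turning the display off can never lose history.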
The Behavior Change Table
This table captures the before/after behavior that makes this solution valuable:
| Feature | Display ON | Display OFF |
|---|---|---|
| WebSocket connection | ✅ Connected | ✅ Connected (for history + shoutbox) |
| Event halos on map | ✅ Shows | ❌ Hidden (saves CPU) |
| Event flashes | ✅ Shows | ❌ Hidden (saves CPU) |
| Ticker scrolling | ✅ Animates | ❌ Stopped (saves CPU) |
| Christmas lights | ✅ Animates | ❌ Hidden (saves CPU) |
| IndexedDB capture | ✅ Saves | ✅ Still saves |
| Session stats | ✅ Updates | ✅ Still updates |
| Maps connected count | ✅ Shows | ✅ Still tracks |
| Event history browsing | ✅ Available | ✅ Available (full 72h) |
Bonus: Extended History to 72 Hours
The same user who reported the CPU issue also mentioned that 24 hours of event history felt too short. Since we were already in the event handling code, we extended the retention period from 24 to 72 hours.
This also meant updating:
- The IndexedDB pruning threshold
- The in-memory event history retention
- All UI text references ("last 24 hours" → "last 72 hours")
- The replay time range presets (added 48h and 72h options)
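The retention change itself comes down to one constant plus a pruning predicate; a minimal sketch (illustrative names, timestamps assumed to be epoch milliseconds):

```javascript
// Extended retention window: 72 hours instead of 24, in milliseconds.
const RETENTION_MS = 72 * 60 * 60 * 1000;

// Keep only events younger than the retention window. The same predicate
// can drive both the IndexedDB pruning pass and the in-memory history.
function pruneEvents(events, nowMs) {
  return events.filter((e) => nowMs - e.timestamp <= RETENTION_MS);
}
```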
Technical Learning: Chrome Task Manager CPU %
This debugging session taught me something I didn't know: Chrome Task Manager and Windows Task Manager measure CPU differently.
| Metric | Chrome Task Manager | Windows Task Manager |
|---|---|---|
| What it shows | Per-process CPU usage | % of total system capacity |
| Baseline | Can exceed 100% | 100% = all cores combined |
| Example interpretation | 28% = using 28% of one core | 25% on 4-core = using 1 full core |
So when a user reports "28% CPU in Chrome Task Manager," that's 28% of a single logical core's capacity—not 28% of their entire system. But on a laptop with only 4 cores, that's still ~7% of total system resources for a "background" tab. Worth optimizing.
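The conversion between the two readings is just division by the logical-core count (a hypothetical helper for illustration):

```javascript
// Chrome Task Manager reports % of one logical core; divide by the
// machine's logical core count to get % of total system capacity.
function chromeCpuToSystemPercent(chromePercent, logicalCores) {
  return chromePercent / logicalCores;
}
// chromeCpuToSystemPercent(28, 4) → 7 (the ~7% figure above)
```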
Why This Matters for Vibe Coding
This debugging session exemplifies the vibe coding workflow:
- User reports issue: "28% CPU seems high"
- I describe to LLM: "User says high CPU when idle, can you investigate?"
- LLM explains architecture: Teaches me about animation loops, WebGL rendering, requestAnimationFrame
- We try solutions: Idle throttling (partially works)
- We debug together: Using Chrome DevTools, identify Live Events as culprit
- I provide insight: "But I want to keep the history feature!"
- LLM proposes solution: Separate capture from display
- Implementation + testing: Deploy preview, verify in Chrome Task Manager
- Bonus improvements: Extend to 72 hours while we're in the code
Without any coding knowledge, I was able to:
- Understand why the CPU was high (animation loops, WebGL rendering constraints)
- Make informed decisions about tradeoffs (feature preservation vs. performance)
- Guide the solution toward what users actually need (history capture without visual CPU cost)
- Learn new things (Chrome Task Manager vs. system Task Manager)
Outcome
Before: 28% CPU with Live Events enabled, 0% history when disabled
After: 7% CPU with Live Events display disabled, full 72-hour history still captured
Future Awareness
This experience created lasting awareness. Now when considering new animated features for EVE Frontier Map, I think about:
- Does this force continuous rendering? If yes, make it toggleable
- What's the CPU cost at scale? 30 concurrent halos × 60fps = significant
- Can we separate data capture from display? Always prefer this pattern
- What's "idle" to the user? They see a static map; we're doing 60 render calls per second
The complaining user got their low-CPU mode. Everyone else gets a performance option they didn't know they needed. And the event history feature continues to work exactly as expected—just without the visual overhead when they don't want it.
Related Posts
- Live Events: Persistence, Replay, and Optimization - The original implementation of the event history system
- Vibe Coding: Building a 124,000-Line Project Without Writing Code - The development methodology behind EF-Map
- Three.js Rendering: Building a 3D Starfield - How the 3D map rendering works
- Web Workers: Background Computation - Another performance optimization pattern we use