"The map is really slow on my phone." That simple user report started a journey that led us from mobile-specific fixes to building a comprehensive Performance Mode with automatic GPU detection—ensuring every EVE Frontier pilot can navigate the galaxy smoothly, regardless of their hardware.
This is the story of how we identified three distinct performance problem spaces, built WebGL-based GPU detection, and created a user experience that "just works" without requiring technical knowledge from our users.
The Problem Space: Three Types of "Slow"
When users report performance issues, they often say "it's slow"—but that single complaint can mask very different underlying causes. Through debugging sessions and community feedback, we identified three distinct problem types:
1. Mobile Devices
Phones and tablets have obvious constraints. Limited GPU power, thermal throttling, and the expectation of smooth 60fps touch interactions mean visual effects that look great on desktop become slideshow-inducing burdens on mobile. We'd already tackled this in an earlier update with mobile-specific display defaults.
2. Discrete GPU Issues
Sometimes a user with a powerful GPU experiences poor performance due to driver bugs, misconfigured settings, or additive blending edge cases. These are often one-off debugging sessions that reveal bugs in our own code or help users fix their local configuration.
3. Integrated Graphics on Desktop
This was the gap in our coverage. A user on a MacBook Air with Apple M1 integrated graphics, or a desktop user with Intel UHD 630, would visit EF-Map from a desktop browser—so they'd get full desktop visual effects. But their GPU couldn't handle those effects smoothly.
These users weren't on mobile devices, so our mobile detection didn't help them. They often didn't know they had "integrated graphics" or understand why a web-based map would care. They just saw stuttering and assumed the site was broken.
The Solution: Automatic GPU Detection
WebGL provides a mechanism to query the underlying GPU renderer string via the WEBGL_debug_renderer_info extension. This gives us the actual GPU name—not just "WebGL" but strings like "ANGLE (Intel(R) UHD Graphics 630 Direct3D11)" or "ANGLE (Apple M2 GPU)".
We built a detectGPUType() function that parses these renderer strings and classifies GPUs into three categories:
GPU Classification Logic
- Integrated: Intel HD/UHD/Iris, Apple M1-M4, AMD Vega/Ryzen APUs
- Discrete: NVIDIA GTX/RTX/Quadro, AMD RX/Radeon Pro series
- Unknown: Unrecognized strings (treated as potentially weak)
The detection runs on page load and checks the unmasked renderer string against known patterns:
```javascript
// Integrated GPU patterns
const integratedPatterns = [
  /intel.*(?:hd|uhd|iris)/i,                 // Intel HD/UHD/Iris Graphics
  /apple\s*m[1-4]/i,                         // Apple M1-M4 chips
  /amd.*(?:vega|ryzen|radeon\s*graphics)/i   // AMD APUs
];

// Discrete GPU patterns
const discretePatterns = [
  /nvidia.*(?:gtx|rtx|quadro|tesla)/i,       // NVIDIA discrete
  /amd.*(?:rx\s*\d|radeon\s*pro)/i           // AMD discrete
];
```
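Tying the two lists together is a small classification step. Here's a minimal sketch, assuming the renderer string has already been obtained (classifyRenderer is an illustrative name; the real detectGPUType() wraps this together with the WebGL query described under Technical Implementation Details below):

```javascript
// Classify a renderer string against the pattern lists above.
function classifyRenderer(renderer) {
  if (!renderer) return 'unknown'; // no string available: assume potentially weak
  if (integratedPatterns.some((p) => p.test(renderer))) return 'integrated';
  if (discretePatterns.some((p) => p.test(renderer))) return 'discrete';
  return 'unknown'; // unrecognized string: also treated as potentially weak
}
```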
When an integrated GPU is detected (or the GPU type can't be determined), we auto-enable Performance Mode and show a subtle notification explaining what happened.
What Performance Mode Actually Does
When Performance Mode is enabled, we zero out or sharply reduce the GPU-intensive visual effects that cause problems on weaker hardware:
| Setting | Normal Mode | Performance Mode |
|---|---|---|
| Star Glow Intensity | 30% | 0% |
| Star Flare Intensity | 20% | 0% |
| Star Glow Size | 1.0× | 0.5× |
| Backdrop Brightness | 100% | 0% |
These settings disable the depth effects and glow rendering that create the atmospheric feel of the starfield. The map remains fully functional—just without the visual polish that requires GPU blending operations.
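In code terms, Performance Mode amounts to a small settings override. A sketch with hypothetical setting keys; the values come straight from the table above:

```javascript
// Hypothetical setting keys; values mirror the table above.
const PERFORMANCE_MODE_OVERRIDES = {
  starGlowIntensity: 0,   // 30% -> 0%
  starFlareIntensity: 0,  // 20% -> 0%
  starGlowSize: 0.5,      // 1.0x -> 0.5x
  backdropBrightness: 0   // 100% -> 0%
};
```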
Settings Backup and Restore
A key design decision: when Performance Mode is enabled, we save the user's current settings to a backup. When they toggle it off, those settings are restored. This means a user on a gaming laptop who gets auto-enabled due to "Intel UHD" detection can turn off Performance Mode and immediately get their preferred visual settings back—no manual slider adjustment needed.
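A sketch of that toggle, reusing PERFORMANCE_MODE_OVERRIDES from the previous sketch; prefs.display is a hypothetical name for the stored visual settings:

```javascript
function togglePerformanceMode(prefs, enabled) {
  if (enabled) {
    // Snapshot the user's current visual settings before overriding them.
    prefs.performanceModeBackup = { ...prefs.display };
    prefs.display = { ...prefs.display, ...PERFORMANCE_MODE_OVERRIDES };
  } else if (prefs.performanceModeBackup) {
    // Restore the snapshot; no manual slider adjustment needed.
    prefs.display = prefs.performanceModeBackup;
    prefs.performanceModeBackup = null;
  }
  prefs.performanceMode = enabled;
}
```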
The UX: Invisible Unless You Care
We wanted Performance Mode to be invisible to users who don't need to think about it, while remaining discoverable for power users.
Auto-Enable with Notification
When Performance Mode is auto-enabled due to GPU detection, we show a small dismissible notification below the top command bar:
> Performance Mode enabled for Intel UHD Graphics 620. Toggle in Display Settings if you prefer full visual effects.
The notification includes the detected GPU name (so users can verify it's accurate) and a clear path to change the setting. It dismisses automatically after a few seconds or when clicked.
Manual Toggle in Display Settings
The Performance Mode checkbox appears at the top of the Display Settings panel with a brief description: "Reduces visual effects for smoother performance on integrated graphics or mobile devices."
Users with powerful discrete GPUs will never see the auto-enable notification and can ignore this checkbox entirely. Users on integrated graphics can experiment—turn Performance Mode off, see if their GPU handles the full effects, and make an informed choice.
Handling Existing Users
A challenge we faced: what about users who had already visited EF-Map before we added GPU detection? Their preferences were already saved with default visual effects enabled, even if they had integrated graphics.
We solved this with version tracking. The preferences system tracks a performanceModeOfferedVersion field. When we deploy a new version of the auto-enable logic, existing users are re-evaluated:
```javascript
// If the user hasn't been offered this version yet, check their GPU.
if (prefs.performanceModeOfferedVersion < PERFORMANCE_MODE_OFFER_VERSION) {
  const gpuType = detectGPUType();
  if (gpuType !== 'discrete') {
    setPerformanceMode(true);
    // Show the auto-enable notification
  }
  prefs.performanceModeOfferedVersion = PERFORMANCE_MODE_OFFER_VERSION;
}
```
This means when we deploy Performance Mode, existing users on integrated graphics get auto-enabled on their next visit—even if they've been using the site for months. Their previous settings are backed up for easy restoration.
Testing Challenge
How do you test GPU detection on a machine with a discrete GPU? We added a URL parameter: ?simulateGPU=integrated, which overrides the WebGL detection and forces integrated-GPU behavior. This was essential for verifying the notification UI and auto-enable logic during development.
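A sketch of how such an override can be read; only the integrated value is shown since that's the one the post uses, and checking it before any WebGL context is created is exactly the detection-order fix mentioned in the journey below:

```javascript
// Read the simulation override from the URL, e.g. ?simulateGPU=integrated.
// Must run before any WebGL query so the override wins.
function getSimulatedGPUType() {
  const simulated = new URLSearchParams(window.location.search).get('simulateGPU');
  return simulated === 'integrated' ? 'integrated' : null;
}
```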
The Journey: From Mobile Fix to Universal Solution
This feature didn't start as "build GPU detection." It evolved through conversations and debugging:
- Mobile users report slowness → We add mobile-specific display defaults using user-agent detection
- A desktop user on a MacBook Air mentions it's slow → We realize laptops with integrated graphics aren't covered
- Discussion: "Can we detect integrated GPUs?" → Research reveals WebGL renderer strings
- Build GPU classification logic → Pattern matching for Intel/Apple/AMD integrated vs NVIDIA/AMD discrete
- "But existing users won't get auto-enabled" → Add version tracking for re-evaluation
- Testing reveals URL param bug → Fix detection order so simulateGPU works before WebGL context creation
Each step built on the previous one. This is the vibe coding methodology in action—start with a specific problem, iterate toward a general solution, and ship incrementally.
Technical Implementation Details
For those interested in the implementation, here are the key technical decisions:
WebGL Extension Availability
Not all browsers expose WEBGL_debug_renderer_info. Firefox, in particular, has made this extension harder to access for privacy reasons. Our detection gracefully degrades: if we can't get the GPU name, we classify it as "unknown" and lean toward enabling Performance Mode (better safe than stuttering).
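A sketch of that degradation path, using only standard WebGL calls:

```javascript
// Query the unmasked GPU renderer string, or null if unavailable.
function getRendererString() {
  const gl = document.createElement('canvas').getContext('webgl');
  if (!gl) return null; // WebGL itself unavailable
  const ext = gl.getExtension('WEBGL_debug_renderer_info');
  if (!ext) return null; // extension blocked (e.g. Firefox privacy settings)
  return gl.getParameter(ext.UNMASKED_RENDERER_WEBGL); // e.g. "ANGLE (Apple M2 GPU)"
}
```

A null result flows into the 'unknown' classification, which leans toward enabling Performance Mode.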
Pattern Matching Approach
Rather than maintaining a database of every GPU ever made, we use regex patterns that match product naming conventions:
- Intel always includes "HD", "UHD", or "Iris" in integrated graphics names
- Apple Silicon is always "Apple M1/M2/M3/M4"
- AMD APUs include "Vega" or "Ryzen" or standalone "Radeon Graphics"
- NVIDIA discrete always has GTX/RTX/Quadro/Tesla prefixes
- AMD discrete has "RX" series numbers or "Radeon Pro" branding
This approach handles future GPU releases without code changes, as long as manufacturers maintain their naming conventions.
Preferences Schema Evolution
Performance Mode required adding four new fields to the preferences schema:
- performanceMode (boolean): is it currently enabled?
- performanceModeAutoEnabled (boolean): was it auto-enabled by detection?
- performanceModeBackup (object): saved settings for restore
- performanceModeOfferedVersion (number): for existing-user re-evaluation
Our localStorage preferences system handles schema migrations automatically, so existing users get these new fields with sensible defaults on their next visit.
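A sketch of what that migration amounts to; the storage key and merge strategy here are assumptions, and only the field names come from the schema above:

```javascript
const PREF_DEFAULTS = {
  performanceMode: false,
  performanceModeAutoEnabled: false,
  performanceModeBackup: null,
  performanceModeOfferedVersion: 0
};

// Hypothetical storage key; missing fields fall back to defaults on load.
function loadPrefs() {
  const stored = JSON.parse(localStorage.getItem('ef-map-prefs') || '{}');
  return { ...PREF_DEFAULTS, ...stored };
}
```

Defaulting performanceModeOfferedVersion to 0 is what lets the version comparison in the auto-enable snippet fire for long-time users who predate the feature.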
The Result
Users on integrated graphics now get a smooth experience by default. Users on discrete GPUs see no change. Everyone has a manual toggle if they want to experiment. The feature is invisible to those who don't need it and discoverable for those who do.
Lessons Learned
- User feedback reveals problem spaces you didn't know existed. We thought we'd covered mobile devices—we hadn't considered laptop integrated graphics as a distinct category.
- WebGL provides more hardware info than you might expect. The debug renderer extension gives direct access to GPU names for classification.
- Settings should be reversible. Backup/restore on toggle means users can experiment without fear of losing their preferences.
- Version tracking enables iterative rollout. We can improve detection logic and re-offer Performance Mode to existing users who might benefit.
- Testing escape hatches are essential. The simulateGPU URL parameter made development and QA possible on hardware that wouldn't normally trigger the feature.
Try It Yourself
Visit EF-Map and check Display Settings to see if Performance Mode is enabled for your hardware. If you're on a gaming PC with a discrete GPU, you'll see the checkbox but it won't be auto-enabled. If you're on a laptop with integrated graphics, you might already be in Performance Mode without realizing it—and that's exactly the point.
You can also test the detection by adding ?simulateGPU=integrated to the URL. You'll see the notification banner and Performance Mode will enable (in a temporary state—it won't persist to your real preferences during simulation).
Related Posts
- GPU Performance Debugging: From 11.5 FPS to Smooth Rendering — How we diagnosed a discrete GPU performance issue that informed our understanding of visual effect costs
- Starfield Depth Effects: Adding Subtle Immersion — The visual effects that Performance Mode disables, and why they're worth having when your GPU can handle them
- CPU Optimization: Reducing Idle Rendering — Our earlier performance work on CPU usage, complementary to this GPU-focused optimization
- Transparency Report: How Every Feature Works — Details on our localStorage preferences system and client-side architecture