"I'm only getting 11.5 frames per second on the map, even with GPU acceleration enabled." This user report kicked off a collaborative debugging session that revealed important lessons about WebGL performance, GPU blending modes, and the hidden costs of visual effects that seem innocuous during development.
The user was running an AMD Radeon RX 7600S—a capable mobile GPU released in 2023. It should easily handle a WebGL star map. Chrome's hardware acceleration was confirmed enabled. So what was causing the dramatic performance collapse?
The Initial Investigation
My first hypothesis was a driver or software rendering issue. The RX 7600S is an AMD RDNA3 GPU, and historically AMD drivers have occasionally had WebGL quirks. Perhaps Chrome was falling back to software rendering via SwiftShader?
To test this, I created a diagnostic script for the user to paste into Chrome DevTools console:
```javascript
(function () {
  const canvas = document.querySelector('canvas');
  const gl = canvas.getContext('webgl2') || canvas.getContext('webgl');
  const dbg = gl.getExtension('WEBGL_debug_renderer_info');
  console.log('=== EF-Map GPU Diagnostic ===');
  console.log('Canvas Size:', canvas.width, 'x', canvas.height);
  console.log('CSS Size:', canvas.clientWidth, 'x', canvas.clientHeight);
  console.log('Device Pixel Ratio:', window.devicePixelRatio);
  console.log('Renderer:', dbg ? gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL) : gl.getParameter(gl.RENDERER));
  console.log('Vendor:', dbg ? gl.getParameter(dbg.UNMASKED_VENDOR_WEBGL) : gl.getParameter(gl.VENDOR));
  console.log('WebGL Version:', gl.getParameter(gl.VERSION));
  console.log('Max Texture Size:', gl.getParameter(gl.MAX_TEXTURE_SIZE));
  console.log('==============================');
})();
```
If the renderer showed "SwiftShader" or "llvmpipe", we'd know the GPU wasn't being used. But the user came back quickly with results that eliminated that theory:
| Metric | Value |
|---|---|
| Renderer | ANGLE (AMD, AMD Radeon RX 7600S) Direct3D11 |
| Canvas Size | 1920 × 1167 |
| Device Pixel Ratio | 1.25 |
| Max Texture Size | 16384 |
Hardware acceleration was definitely working. The GPU was being used. So why only 11.5 FPS?
The Task Manager Revelation
The user also shared their Windows Task Manager performance view, and this is where the picture became clear:
- GPU 3D utilization: 100% — completely saturated
- GPU temperature: 60°C — normal under load
- GPU memory: ~8 GB — near maximum
This wasn't a driver bug or a software fallback. The GPU was genuinely maxed out trying to render the scene. But why? EF-Map renders around 24,500 star sprites (though glow/flare effects are range-limited to those near the camera)—with reasonable settings, this shouldn't overwhelm any modern GPU.
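As a rough illustration of that range limiting, a distance-based visibility check might look like the sketch below; the names and the 500-unit threshold are hypothetical, not EF-Map's actual implementation.

```javascript
// Hypothetical culling of glow/flare sprites beyond a fixed camera range.
const GLOW_RANGE = 500; // world units, assumed value

function updateGlowVisibility(camera, starGlowSprites) {
  for (const sprite of starGlowSprites) {
    // Only pay the blending cost for stars near enough to matter visually.
    sprite.visible = sprite.position.distanceTo(camera.position) < GLOW_RANGE;
  }
}
```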
Finding the Culprit: Display Settings
Looking at the user's Display Settings screenshot, I spotted the problem immediately:
| Setting | User's Value | Default | Impact |
|---|---|---|---|
| Star Glow | 100% | 15% | 6.7× more intense |
| Glow Size | 5.0× | 0.4× | 12.5× larger sprites |
The combined effect: each star's glow covered approximately 156× more pixels than at default settings. With up to 24,500 stars rendering glow sprites, flare sprites, and the stars themselves—each using custom WebGL blend modes—the pixel throughput was astronomical.
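The 156× figure falls out of simple arithmetic: a sprite's pixel coverage grows with the square of its linear size, so a 12.5× size increase means roughly 156× the pixels.

```javascript
// Back-of-the-envelope overdraw estimate for a single glow sprite.
const defaultGlowSize = 0.4;
const userGlowSize = 5.0;

const linearScale = userGlowSize / defaultGlowSize; // 12.5× larger sprite
const areaScale = linearScale ** 2;                 // 156.25× more pixels covered

console.log(`${linearScale}× size → ~${areaScale}× pixel coverage per sprite`);
```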
Understanding the Technical Cost
EF-Map's star glow and flare effects use THREE.CustomBlending with THREE.MaxEquation to prevent cluster washout. This "MAX blending" takes the maximum value of overlapping pixels rather than adding them together, which produces much better visual results in dense star clusters.
```javascript
// Star glow material configuration
blending: THREE.CustomBlending,
blendEquation: THREE.MaxEquation,
blendSrc: THREE.OneFactor,
blendDst: THREE.OneFactor,
```
However, this comes with a cost. Unlike simple additive blending, which GPUs can optimize heavily, MAX blending requires reading the existing framebuffer value for every pixel, comparing it to the new value, and writing the maximum. When you're doing this for millions of overlapping pixels per frame, even a powerful GPU can struggle.
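For comparison, a plain additive setup in Three.js looks like the sketch below; it's cheaper for the blend units but washes out dense clusters, which is exactly what the MAX approach avoids. This is a generic illustration, not EF-Map's actual material, and the glow.png texture path is made up.

```javascript
import * as THREE from 'three';

// Generic additive-blended glow sprite, for comparison purposes only.
const glowTexture = new THREE.TextureLoader().load('glow.png'); // hypothetical asset
const additiveGlowMaterial = new THREE.SpriteMaterial({
  map: glowTexture,
  transparent: true,
  depthWrite: false,
  blending: THREE.AdditiveBlending, // simple additive blending: out = src + dst
});
const glowSprite = new THREE.Sprite(additiveGlowMaterial);
```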
The RX 7600S Factor
The "S" suffix in AMD naming indicates a power-limited mobile variant, designed for thin laptops with 50-75W TDP versus 165W for the desktop RX 7600. This means roughly 40-50% less performance than the desktop equivalent—making extreme settings even more problematic.
Confirming the Root Cause
To verify, I asked the user to test with reduced settings. They reported back almost immediately:
"With Star Glow at 15% and Glow Size at 0.4×, I'm getting 60+ FPS. That's definitely what was causing it."
Root cause confirmed: the slider maximums allowed settings that were simply too demanding for anything less than a high-end desktop GPU.
The Fix: Safeguards for All Users
Rather than just telling this user to reduce their settings, we implemented permanent safeguards to prevent anyone from accidentally tanking their performance:
1. Reduced Maximum Slider Values
| Setting | Old Max | New Max | Rationale |
|---|---|---|---|
| Star Glow | 100% | 65% | Still visually impressive, GPU-safe |
| Star Flare | 100% | 65% | Same reasoning |
| Glow Size | 5.0× | 3.0× | Reduces pixel overdraw significantly |
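As a minimal sketch, clamping previously saved values against the new maximums could look like this; the constant names and the clampDisplaySettings helper are illustrative, not EF-Map's actual code:

```javascript
// New slider maximums (fractions / multipliers), per the table above.
const MAX_STAR_GLOW = 0.65;
const MAX_STAR_FLARE = 0.65;
const MAX_GLOW_SIZE = 3.0;

// Clamp any previously saved values so old extreme settings can't
// reintroduce the performance cliff.
function clampDisplaySettings(saved) {
  return {
    ...saved,
    starGlow: Math.min(saved.starGlow, MAX_STAR_GLOW),
    starFlare: Math.min(saved.starFlare, MAX_STAR_FLARE),
    glowSize: Math.min(saved.glowSize, MAX_GLOW_SIZE),
  };
}
```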
2. Performance Warning
We added a small disclaimer under the glow/flare settings:
⚠️ High glow/flare settings may impact performance on laptops and older GPUs.
3. Fixed Reset Defaults
During this investigation, we also discovered that the "Reset Display Defaults" button wasn't resetting all settings—it was missing the starfield and backdrop sliders entirely. Fixed that too.
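One way to keep a reset button from drifting out of sync with the settings it covers is to reset from a single canonical defaults object. A hypothetical sketch, with assumed setting names rather than EF-Map's actual store:

```javascript
// Hypothetical canonical defaults covering every display setting,
// including the starfield and backdrop sliders the old reset missed.
const DISPLAY_DEFAULTS = {
  starGlow: 0.15,
  starFlare: 0.15,          // flare default assumed for illustration
  glowSize: 0.4,
  starfieldDensity: 1.0,    // assumed setting name
  backdropBrightness: 1.0,  // assumed setting name
};

function resetDisplayDefaults(settings) {
  // Spreading the full defaults object last guarantees no slider is skipped.
  return { ...settings, ...DISPLAY_DEFAULTS };
}
```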
Lessons for WebGL Developers
This debugging session reinforced several important principles:
- Test on constrained hardware. I developed the glow/flare effects on a high-end desktop RTX GPU. What seemed fine at 100% was catastrophic on a mobile GPU. Always test your visual effects across a range of hardware.
- Sliders need sane maximums. Just because a value is technically valid doesn't mean users should be able to set it. Cap your sliders at values that won't break the experience.
- Diagnostic tools accelerate debugging. Having a simple console command that dumps GPU info, canvas size, and WebGL state dramatically sped up this investigation. The user could run it immediately and share results.
- MAX blending is expensive. While THREE.MaxEquation produces beautiful results for overlapping effects, it's significantly more expensive than additive blending. Use it judiciously.
- Responsive users make debugging possible. This entire investigation—from initial report to deployed fix—took under an hour because the user quickly ran diagnostics, shared screenshots, and tested hypotheses. Collaborative debugging at its best.
The Outcome
The fix was deployed the same day. Users with extreme saved settings will now be capped to the new maximums automatically. The visual appearance at default settings is unchanged, and the warning helps users understand the performance trade-off before cranking sliders to maximum.
Try It Yourself
If you're curious about your own GPU's performance with EF-Map's visual effects, you can experiment with the sliders in Display Settings → Starfield Settings. The glow and flare effects add atmospheric depth when zoomed in on star clusters, but as we learned, there's a real computational cost behind that visual polish.
For most users on laptops or integrated graphics, keeping Star Glow around 15-30% and Glow Size at 0.4-1.0× provides the best balance of visual quality and smooth performance.
Related Posts
- Performance Optimization Journey — Our earlier work reducing load times by 90% through spatial indexing and code splitting
- CPU Optimization: Reducing Idle Rendering — How we cut idle CPU usage from 28% to 4% while preserving live event history
- Starfield Depth Effects — The design thinking behind the depth brightness, desaturation, and blur effects
- Three.js 3D Starfield Rendering — How we built the core WebGL star rendering system