Hetzner VPS Migration: Moving 19 Docker Containers from Local to Cloud

For months, the entire backend infrastructure of EVE Frontier Map ran on a Windows workstation under my desk. Nineteen Docker containers—Postgres database, blockchain indexer, API services, Cloudflare tunnels, cron jobs, Grafana dashboards—all humming away on consumer hardware. It worked surprisingly well, but it was time to grow up.

This is the story of migrating everything to a Hetzner Cloud VPS in about 4 hours, with only 15-30 minutes of actual downtime. If you've ever wondered whether "just Dockerize everything" actually pays off when it's time to move, spoiler: it absolutely does.

TL;DR Results

  • Migration time: ~4 hours total, ~20 minutes downtime
  • Containers moved: 19 Docker services
  • Monthly cost: €17.49 (~$19 USD)
  • Uptime improvement: From "whenever my PC is on" to 24/7
  • Key enabler: Everything was already Dockerized with compose files

Why We Needed to Move

Running production infrastructure on a local workstation has some obvious problems:

1. Single point of failure. If my PC restarts for Windows updates, turns off during a power outage, or I need to reboot for any reason—the entire backend goes down. Users can't see killboard data, live events stop streaming, API endpoints return errors.

2. Resource contention. When I'm compiling code, gaming, or running heavy tasks, the Docker containers compete for CPU and RAM. The blockchain indexer alone was consuming 50-55% CPU continuously.

3. Network reliability. Residential internet isn't designed for hosting servers. ISP maintenance, DHCP lease changes, or random outages could take down the tunnels.

4. Psychological burden. There's something uncomfortable about knowing that production traffic depends on whether you remembered to leave your PC running before going on vacation.

The tipping point came when I realized I was avoiding Windows updates and even hesitating to shut down my PC for a simple restart. That's a sign your architecture needs to change.

The Starting Point: What We Were Running

Before we dive into the migration, let's understand what we were moving. The EF-Map backend had grown organically into a complex ecosystem:

Database Layer

Container | Purpose | Resources
pg-indexer-reader-postgres-1 | Central PostgreSQL database (chain data, World API, subscriptions) | ~1.7 GB RAM

Blockchain Indexer (Primordium)

Container | Purpose | Resources
pg-indexer-reader-postgres-index-write-1 | Writes blockchain events to Postgres | ~467 MB RAM, 50-55% CPU
pg-indexer-reader-postgres-query-read-1 | GraphQL read endpoint | Minimal
ef-head-poller | Tracks blockchain head position | ~28 MB RAM

API Services

Container | Purpose | Port
ef-assembly-api | Smart Assembly queries for Solar System View | 5001
ef-ssu-api | Ship/structure data | 5002
ef-subscription-api | Stripe integration + subscriber database | 5003
ef-event-emitter | Real-time blockchain events to Cloudflare Durable Objects | -

Cloudflare Tunnels

Container | Purpose | Public Hostname
ef-cloudflared-assemblies | Tunnel for Assembly API | assemblies.ef-map.com
ef-cloudflared-ssu-tunnel | Tunnel for SSU API | ssu.ef-map.com
cloudflared-subscriptions | Tunnel for Subscription API | subscriptions.ef-map.com

World API Cron Jobs

Container | Purpose | Schedule
worldapi-cron-killmails-1 | Killmail ingestion | Every minute
worldapi-cron-smartcharacters-1 | Character data sync | Every minute
worldapi-cron-tribes-1 | Tribe list refresh | Every 30 min
worldapi-cron-tribes-details-1 | Tribe details | Every 30 min
worldapi-cron-tribe-members-1 | Tribe membership | Every 30 min
worldapi-cron-daily-1 | Types/static data refresh | Daily
worldapi-cron-snapshot-exporter-1 | Publishes to Cloudflare KV | Continuous

Observability

Container | Purpose | Port
ef-grafana | Dashboards & monitoring | 3000
ef-adminer | Database admin UI | 8080

Total: 19 containers, ~2.8 GB RAM active, 50-70% CPU sustained.

All of this was defined in Docker Compose files scattered across tools/ subdirectories—a pattern I'd established early that would prove invaluable during migration.
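
For illustration, here's a trimmed sketch of what one of those compose files looks like. The network name matches the real stack; the image tag and command arguments are placeholders:

# tools/worldapi-cron/docker-compose.yml (illustrative sketch, not the exact file)
services:
  killmails:
    image: ef-worldapi-cron:local
    env_file: .env
    command: ["--job", "killmails", "--interval", "60"]   # hypothetical entrypoint args
    networks: [indexer-network]
    restart: unless-stopped

networks:
  indexer-network:
    external: true
    name: pg-indexer-reader_indexer-network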

Why Hetzner?

I evaluated several cloud providers. Here's the price comparison for equivalent specs (16 vCPU, 32 GB RAM, 320 GB NVMe):

Provider | Monthly Cost | Notes
Hetzner CX53 | €17.49 (~$19) | German company, excellent EU infrastructure
DigitalOcean | ~$336 | CPU-Optimized 16 vCPU/32 GB
AWS Lightsail | ~$164 | Max 8 vCPU for 32 GB (no 16 vCPU option!)
Linode (Akamai) | ~$288 | Dedicated 16 vCPU/32 GB

Hetzner is roughly 9-18x cheaper for equivalent specs. AWS Lightsail doesn't even offer 16 vCPU. The catch? The UI is less polished than the big US providers', and support is more limited. But for anyone comfortable with Linux administration, the savings are impossible to ignore.

I chose the CX53 plan: 16 vCPUs, 32 GB of RAM, and 320 GB of NVMe storage.

At €17.49/month (~$19 USD), this is cheaper than most coffee habits.

The Migration: Step by Step

The actual migration happened on December 4th, 2025. Here's the detailed timeline:

14:24 - VPS Provisioned

Using Hetzner's CLI tool, I created the server in about 30 seconds:

# Set API token
$env:HCLOUD_TOKEN = 'your-token-here'

# Create SSH key
hcloud ssh-key create --name ef-map-deploy --public-key-from-file ~/.ssh/id_ed25519.pub

# Create firewall (restrict SSH to my IP)
$myIP = (Invoke-RestMethod -Uri "https://api.ipify.org")
hcloud firewall create --name ef-map-fw
hcloud firewall add-rule ef-map-fw --direction in --protocol tcp --port 22 --source-ips "$myIP/32"
hcloud firewall add-rule ef-map-fw --direction in --protocol tcp --port 3000 --source-ips "$myIP/32"

# Create server
hcloud server create --name ef-map-prod --type cx53 --image ubuntu-24.04 --location fsn1 --ssh-key ef-map-deploy --firewall ef-map-fw

IP address assigned. Immediately added an SSH config alias:

# ~/.ssh/config
Host ef-map-vps
    HostName <server-ip>
    User root
    IdentityFile ~/.ssh/id_ed25519_efmap

Now I can just type ssh ef-map-vps instead of remembering the IP.

14:26 - Bootstrap the VPS

Standard Ubuntu 24.04 bootstrap:

ssh ef-map-vps

# Update system
apt update && apt upgrade -y

# Install Docker and essentials
apt install -y docker.io docker-compose-v2 git curl fail2ban ufw  # docker-compose-v2 provides the `docker compose` plugin in Ubuntu's repos

# Configure firewall
ufw allow 22/tcp
ufw allow 3000/tcp   # Grafana
ufw --force enable

# Create app directory
mkdir -p /opt/ef-map

14:30 - Export Postgres Dump

This is the critical step. I needed to move ~2 GB of indexed blockchain data, World API snapshots, and subscriber information:

# On local machine
docker exec pg-indexer-reader-postgres-1 pg_dump -U user -Fc postgres > C:\EF-Map-main\backups\migration\postgres_20251204.dump

The -Fc flag creates a "custom format" dump that's compressed and faster to restore. Result: 468 MB file.
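
A nice side effect of the custom format: the dump can be inspected and restored in parallel. A quick sketch with standard pg_restore flags (run wherever the dump file is directly visible, since -j can't read from stdin):

# Peek at the dump's table of contents without restoring anything
pg_restore -l postgres_20251204.dump | head

# Restore with 4 parallel jobs (custom-format dumps support this)
pg_restore -U user -d postgres --no-owner -j 4 postgres_20251204.dump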

14:35 - Transfer Data to VPS

# Transfer the dump
scp C:\EF-Map-main\backups\migration\postgres_20251204.dump ef-map-vps:/opt/ef-map/

# Transfer docker-compose files
scp -r tools/worldapi-cron ef-map-vps:/opt/ef-map/tools/
scp -r tools/assembly-api ef-map-vps:/opt/ef-map/tools/
scp -r tools/subscriptions ef-map-vps:/opt/ef-map/tools/
scp -r tools/ssu-api ef-map-vps:/opt/ef-map/tools/

# Transfer secrets (Cloudflare tunnel credentials, .env files)
scp tools/subscriptions/.env ef-map-vps:/opt/ef-map/tools/subscriptions/
scp tools/worldapi-cron/.env ef-map-vps:/opt/ef-map/tools/worldapi-cron/
ssh ef-map-vps "mkdir -p /root/.cloudflared"
scp ~/.cloudflared/*.json ef-map-vps:/root/.cloudflared/

14:40 - Start Postgres

# On VPS
docker network create pg-indexer-reader_indexer-network

docker run -d \
  --name pg-indexer-reader-postgres-1 \
  --network pg-indexer-reader_indexer-network \
  --network-alias postgres \
  -p 5432:5432 \
  -e POSTGRES_USER=user \
  -e POSTGRES_PASSWORD=password \
  -e POSTGRES_DB=postgres \
  -v /opt/ef-map/postgres-data:/var/lib/postgresql/data \
  postgres:16-alpine

14:45 - Restore Database

# Restore the dump
docker exec -i pg-indexer-reader-postgres-1 pg_restore -U user -d postgres --no-owner < /opt/ef-map/postgres_20251204.dump

# Verify data
docker exec pg-indexer-reader-postgres-1 psql -U user -d postgres -c "SELECT COUNT(*) FROM evefrontier__kill_mail;"
# Result: 4586 killmails ✓

docker exec pg-indexer-reader-postgres-1 psql -U user -d postgres -c "SELECT COUNT(*) FROM \"0x7085f3e652987f656fb8dee5aa6592197bb75de8\".smart_character;"
# Result: 9698 characters ✓

14:55 - The Docker Image Problem

Here's where I hit my first snag. My custom Docker images (worldapi-cron, snapshot-exporter, etc.) were built locally and pushed to GitHub Container Registry (GHCR). But GHCR authentication on the VPS was failing.

Rather than debug GHCR auth, I pivoted to the simpler approach: export images locally, transfer them, import on VPS.

# On local machine - export images
docker save ef-worldapi-cron:local | gzip > worldapi-cron.tar.gz
docker save ef-snapshot-exporter:local | gzip > snapshot-exporter.tar.gz
docker save ghcr.io/diabolacal/ef-map-subscription-api:latest | gzip > subscription-api.tar.gz
docker save ghcr.io/diabolacal/ef-map-assembly-api:latest | gzip > assembly-api.tar.gz
docker save ef-ssu-api:local | gzip > ssu-api.tar.gz
docker save event-emitter:latest | gzip > event-emitter.tar.gz

# Transfer to VPS (create the target directory first)
ssh ef-map-vps "mkdir -p /opt/ef-map/images"
scp *.tar.gz ef-map-vps:/opt/ef-map/images/

# On VPS - import images
cd /opt/ef-map/images
for f in *.tar.gz; do gunzip -c "$f" | docker load; done

Total image size: ~789 MB compressed. Transfer took about 3 minutes on my connection.
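
In hindsight, the save/transfer/load steps can be collapsed into one streaming pipeline per image, skipping the intermediate files entirely:

# Stream an image straight to the VPS without touching disk
docker save ef-worldapi-cron:local | gzip | ssh ef-map-vps "gunzip | docker load"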

15:05 - Start All Services

This is where the magic of Docker Compose really shines. Each service had a working compose file, so starting them was trivial:

# Start worldapi-cron stack (8 containers at once!)
cd /opt/ef-map/tools/worldapi-cron
docker compose up -d

# Start APIs with their tunnels
cd /opt/ef-map/tools/subscriptions && docker compose up -d
cd /opt/ef-map/tools/assembly-api && docker compose up -d
cd /opt/ef-map/tools/ssu-api && docker compose up -d

# Start event emitter
cd /opt/ef-map/tools/event-emitter && docker compose up -d

# Start Grafana
docker run -d --name ef-grafana \
  --network pg-indexer-reader_indexer-network \
  -p 3000:3000 \
  grafana/grafana:10.4.3

15:10 - All 16 Core Containers Running ✓

A quick docker ps confirmed all containers were healthy. I then verified each external endpoint:

# Test Cloudflare tunnel endpoints
curl -s https://subscriptions.ef-map.com/health  # 200 ✓
curl -s https://assemblies.ef-map.com/health     # 200 ✓
curl -s https://ssu.ef-map.com/health            # 200 ✓

The Cloudflare tunnels are particularly elegant here—they don't care where the backend is running. The tunnel containers connect outbound to Cloudflare's edge, so there's no firewall configuration or port forwarding needed. Traffic just flows.
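
For context, each tunnel's routing lives in a small config file (or the Cloudflare dashboard). A minimal, illustrative config.yml for the assemblies tunnel might look like this, assuming the tunnel container shares a Docker network with the API container (tunnel ID redacted):

# /root/.cloudflared/config.yml (illustrative)
tunnel: <tunnel-id>
credentials-file: /root/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: assemblies.ef-map.com
    service: http://ef-assembly-api:5001
  - service: http_status:404   # catch-all for unmatched hostnames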

16:00 - Start Primordium Indexer

The blockchain indexer required a bit more work because it uses multiple containers with specific networking:

# Start the MUD indexer stack
cd /opt/ef-map/primordium
docker compose up -d

# Verify it's syncing
docker logs pg-indexer-reader-postgres-index-write-1 --tail 20
# Output shows blocks being indexed ✓

16:30 - Start Remaining Services

The head poller and the Adminer database UI, bringing us to the full 19 containers:

docker run -d --name ef-head-poller \
  --network pg-indexer-reader_indexer-network \
  ef-head-poller:latest

✓ Migration Complete: 19 Containers Running

Total elapsed time: ~2 hours. Actual downtime: ~15-20 minutes (while Postgres was being restored and services started).

Security Hardening

With the core migration done, I spent another hour on security:

fail2ban

Within minutes of the VPS going live, I started seeing failed SSH login attempts in the logs. Welcome to the internet.

# fail2ban was installed during bootstrap
systemctl enable fail2ban
systemctl start fail2ban

# Check status after a few hours
fail2ban-client status sshd
# Result: 3 IPs banned from 56 failed attempts
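
Ubuntu's fail2ban package enables the sshd jail out of the box; tightening it only takes a small jail.local override. The values below are my suggestions, not the defaults:

# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 3     # ban after 3 failures (default is 5)
findtime = 10m   # ...within a 10-minute window
bantime  = 1h    # ban duration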

SSH Hardening

Disabled password authentication entirely:

# /etc/ssh/sshd_config.d/hardening.conf
PasswordAuthentication no
PermitRootLogin prohibit-password
PubkeyAuthentication yes
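
One habit worth keeping when editing sshd config over SSH: validate before reloading, so a typo can't lock you out:

# Reload only if the config parses cleanly
sshd -t && systemctl reload ssh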

UFW Firewall Rules

Only the necessary ports are open:

ufw status
# 22/tcp (SSH) - allowed from my IP only
# 80/tcp (HTTP) - for Cloudflare tunnels (though they connect outbound)
# 443/tcp (HTTPS) - same
# 3000/tcp (Grafana) - allowed from my IP only

Automated Backups to Cloudflare R2

The final piece: automated database backups. I chose Cloudflare R2 because the rest of the stack already lives on Cloudflare, egress is free, and daily dumps of this size sit comfortably inside the free tier.

Backup Script

#!/bin/bash
# /opt/ef-map/backup-postgres.sh

DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="/tmp/postgres_backup_${DATE}.dump"

# Create dump
docker exec pg-indexer-reader-postgres-1 pg_dump -U user -Fc postgres > "$BACKUP_FILE"

# Upload to R2 using wrangler
export CLOUDFLARE_API_TOKEN="your-token"
wrangler r2 object put "ef-map-backups/postgres_${DATE}.dump" --file="$BACKUP_FILE"

# Clean up local file
rm "$BACKUP_FILE"

# Delete backups older than 7 days
wrangler r2 object list ef-map-backups --prefix=postgres_ | \
  awk -F',' '{print $1}' | \
  while read -r key; do
    # Parse date from filename and delete if > 7 days old
    # (simplified - actual script has date comparison logic; the no-op keeps this stub valid bash)
    :
  done

echo "[$(date)] Backup completed: postgres_${DATE}.dump" >> /var/log/ef-map-backup.log
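
The pruning loop above is stubbed out. One concrete way to implement it is against R2's S3-compatible API with the aws CLI, comparing the YYYYMMDD stamp baked into each filename. The bucket name matches the script; the endpoint (your R2 account ID) and the aws CLI credentials are assumptions:

# Hypothetical retention pass via R2's S3-compatible endpoint (aws CLI v2)
ENDPOINT="https://<account-id>.r2.cloudflarestorage.com"
CUTOFF=$(date -d '7 days ago' +%Y%m%d)

aws s3api list-objects-v2 --bucket ef-map-backups --prefix postgres_ \
    --endpoint-url "$ENDPOINT" --query 'Contents[].Key' --output text | \
  tr '\t' '\n' | while read -r key; do
    # Keys look like postgres_YYYYMMDD_HHMMSS.dump; extract the date part
    stamp=${key#postgres_}; stamp=${stamp%%_*}
    if [[ "$stamp" < "$CUTOFF" ]]; then
      aws s3api delete-object --bucket ef-map-backups --key "$key" --endpoint-url "$ENDPOINT"
    fi
  done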

Cron Schedule

# /etc/cron.d/ef-map-backup
0 3 * * * root /opt/ef-map/backup-postgres.sh

Backups run daily at 3 AM UTC. Typical dump size: ~233 MB (compressed).

Cost Breakdown

Let's talk money. Here's the monthly cost comparison:

Item | Before (Local) | After (VPS)
Server hosting | $0 (existing hardware) | €17.49 (~$19)
Electricity | ~$15-20 (PC running 24/7) | $0
Cloudflare Pages/KV | ~$5 (KV operations) | ~$5 (unchanged)
R2 Backups | $0 | $0 (free tier)
Snapshots | N/A | ~€0.44
Total | ~$20-25 | ~$30

The VPS is slightly more expensive than "free" local hosting, but the operational benefits far outweigh the ~$10/month difference: 24/7 uptime, no resource contention with my desktop, and no residential ISP in the critical path.

Development Workflow Impact

The best part? My development workflow is virtually unchanged.

Before Migration

# Check container logs
docker logs worldapi-cron-killmails-1 --tail 50

# Query Postgres
docker exec -it pg-indexer-reader-postgres-1 psql -U user -d postgres

After Migration

# Check container logs
ssh ef-map-vps "docker logs worldapi-cron-killmails-1 --tail 50"

# Query Postgres
ssh ef-map-vps "docker exec pg-indexer-reader-postgres-1 psql -U user -d postgres -c 'SELECT COUNT(*) FROM evefrontier__kill_mail;'"

The only difference is prefixing commands with ssh ef-map-vps. For interactive work, I can SSH in and work directly.

For VS Code, I configured the Postgres extension to connect directly to the VPS IP. Works seamlessly.
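
And if you'd rather not expose Postgres publicly at all, an SSH tunnel gives local tools the same experience:

# Forward local port 5433 to the VPS's Postgres; point clients at localhost:5433
ssh -N -L 5433:localhost:5432 ef-map-vps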

What Made This Easy

Reflecting on the migration, several architectural decisions made this surprisingly smooth:

1. Everything Was Dockerized from Day One

This was the biggest enabler. Every service—APIs, cron jobs, databases, tunnels—ran in Docker containers with explicit configuration. No "oh, that depends on a random system library I installed six months ago."

Migration was literally: export images, transfer, import, docker compose up -d.

2. Docker Compose Files as Documentation

Each compose file served as living documentation of how services should be configured. Environment variables, network settings, volume mounts—all captured in version-controlled YAML.

3. Cloudflare Tunnels for Ingress

The cloudflared tunnels are brilliant for migrations. They connect outbound to Cloudflare, so there's no port forwarding, dynamic DNS, or firewall rules to reconfigure. The tunnel just works wherever you start it.

4. Stateless Services Where Possible

The API containers (assembly-api, ssu-api) are stateless—they query Postgres and return results. No local state to migrate.

The cron jobs read from external APIs and write to Postgres. Again, no local state.

The only stateful component was Postgres itself, and pg_dump/pg_restore are battle-tested.

5. Secrets in .env Files

API keys, tokens, and credentials lived in .env files (gitignored), not baked into images. Transferring them was just scp.
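
In compose terms that's a one-line env_file entry, which keeps secrets out of the version-controlled YAML (service shown for illustration):

services:
  ef-subscription-api:
    image: ghcr.io/diabolacal/ef-map-subscription-api:latest
    env_file: .env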

Lessons Learned

1. Docker image transfers beat registry debugging. When GHCR auth wasn't working, exporting/importing images took 5 minutes. Debugging OAuth could have taken hours.

2. SSH aliases are worth setting up immediately. Typing ssh ef-map-vps instead of ssh root@<server-ip> saves mental overhead on every single command.

3. Run fail2ban immediately. Brute-force attempts started within minutes of the server going live. The internet is a hostile place.

4. Test external endpoints first. The Cloudflare tunnels are the critical path for users. Verify those before worrying about internal tooling.

5. Keep the local stack around briefly. I didn't delete the local Docker containers for a few days, just stopped them. This was insurance in case I needed to quickly rollback.
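
The mechanics of that last point: docker compose stop halts containers but keeps them (and their volumes) intact, unlike down:

# Freeze the old stack without deleting anything; rollback is `docker compose start`
docker compose stop

# Only once the VPS proved stable:
docker compose down   # removes containers and networks (volumes survive unless -v is passed)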

What's Next

With stable cloud infrastructure in place, several improvements become easier to tackle.

Conclusion

Migrating 19 Docker containers from a Windows workstation to a Hetzner VPS took about 4 hours total, with only 15-30 minutes of actual downtime. The monthly cost is €17.49 (~$19 USD)—less than a mediocre dinner.

The key takeaway: containerization pays off at migration time. All those hours spent writing Docker Compose files, separating concerns, and keeping services stateless—they paid dividends when it was time to move.

If you're running production workloads on local hardware and wondering whether it's worth migrating to the cloud, I'd encourage you to try Hetzner. The pricing is incredibly competitive for European infrastructure, and the migration path (if you're already using Docker) is straightforward.

EF-Map's backend now runs 24/7 on proper server hardware in a professional data center. I can restart my PC, take vacations, and sleep soundly knowing the infrastructure will keep running.

That peace of mind is worth far more than €17.49/month.

Tags: hetzner, vps migration, docker, cloud infrastructure, postgresql, devops, cost optimization, eve frontier, self-hosting, cloudflare