Home lab — 6 Dell servers, one wildcard cert
2× Dell R620 (Proxmox + 8 VMs) and 4× Dell R710 (Ubuntu workers) running every product I ship, plus the CI/observability stack that keeps them honest.
Why bare metal in 2026
Most engineers I know default to a managed cloud: AWS, GCP, sometimes Vercel for the frontend. The math stops being obvious once the project runs longer than six months. An m5.2xlarge is ~$280/month before storage, before egress. A second-hand R620 with the same effective capacity is a one-time ~Rp 4-6 million plus a colocation fee that's a fraction of one cloud VM.
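A back-of-envelope version of that break-even, with the exchange rate and colo fee as stated assumptions rather than real invoices:

```sh
#!/usr/bin/env sh
# Break-even sketch. USD_IDR and COLO_MO are assumptions for illustration;
# the $280 and Rp 5 million figures come from the prices above.
USD_IDR=15500                        # assumed exchange rate
CLOUD_MO=$((280 * USD_IDR))          # m5.2xlarge, before storage and egress
SERVER_ONCE=5000000                  # midpoint of the Rp 4-6 million R620 price
COLO_MO=1000000                      # assumed colo fee, "a fraction of one cloud VM"
SAVED_MO=$((CLOUD_MO - COLO_MO))
echo "monthly saving: Rp ${SAVED_MO}"                     # ~Rp 3.3 million
echo "months to break even: $((SERVER_ONCE / SAVED_MO))"  # rounds down to 1; call it ~2
```

Even if you double the assumed colo fee, the box pays for itself inside a quarter; everything after that is margin.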
For ClipFlow specifically, the savings versus an equivalent AWS setup are in the range of Rp 4-7 million/month. That's the difference between this side project being a hobby and being a slow grind toward profitability.
What's in the rack
2× Dell R620 (E5-2680v2, 64 GB RAM each, dual SSD)
- Proxmox VE host
- 4 production VMs each → 8 total
- Run: ClipFlow API + web, MySQL, Redis, internal tooling, this portfolio site you're reading
- Wildcard *.cube-x.dev cert terminates all TLS at the Nginx layer (vhost sketch below)
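A minimal sketch of what one of those vhosts looks like; the file paths, subdomain, and upstream IP are placeholders, not my actual config:

```sh
# Minimal vhost sketch. Paths, subdomain, and upstream IP are placeholders.
cat > /etc/nginx/sites-available/clipflow-api <<'EOF'
server {
    listen 443 ssl;
    server_name api.cube-x.dev;                  # covered by the *.cube-x.dev wildcard
    ssl_certificate     /etc/ssl/cube-x.dev/fullchain.pem;
    ssl_certificate_key /etc/ssl/cube-x.dev/privkey.pem;

    location / {
        proxy_pass http://10.0.0.21:8080;        # VM on the private subnet, plain HTTP
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
ln -s /etc/nginx/sites-available/clipflow-api /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx
```

The wildcard is what keeps this cheap to repeat: every new service is the same handful of lines with a different server_name and upstream, and no new cert.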
4× Dell R710 (older gen, 32 GB RAM each, lots of disk)
- Ubuntu Server 22.04 directly (no virtualization layer needed)
- Roles: Python worker pool (Whisper transcription), R2 mirror / backup target, Prometheus + Grafana + Loki, off-site test instance
- Cheaper to keep running than to refresh; they earn their keep doing work the R620s shouldn't be saturated with
Network
- Single 1 Gbps fibre upstream
- Cloudflare in front (TLS, WAF, DDoS, caching)
- All inter-server traffic stays on a private subnet; nothing is exposed to the public except through the Nginx LB
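In firewall terms that posture is simple. A sketch with ufw, where the subnet stands in for my real one:

```sh
# Origin lockdown sketch; 10.0.0.0/24 is a placeholder for the private subnet.
ufw default deny incoming
ufw default allow outgoing
ufw allow from 10.0.0.0/24   # inter-server traffic on the private subnet
ufw allow 443/tcp            # the Nginx LB, fronted by Cloudflare
ufw enable
```

A stricter variant allows 443 only from Cloudflare's published IP ranges, so the origin can't be reached directly at all.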
Operational posture
I treat my lab like a production environment because it is one. The lessons that aren't taught in tutorials:
- Backups matter when you can't blame the cloud. Restic on a daily cron (sketched after this list), mirrored to R2, plus a weekly off-site dump to a NAS at a friend's place. Restores get tested once a quarter: the only backup that's real is one you've actually restored from.
- Boring stack wins. Docker compose on systemd, an Nginx vhost per service, GitLab CI that just SSHes in and runs docker compose up -d --build (deploy step sketched below). No Kubernetes, no service mesh, no Argo, no Helm. I could add them tomorrow if I needed them; I don't.
- One ops budget per week: Friday afternoons. OS patches, cert renewal verification, log review, capacity check. Without a fixed slot, drift compounds and Monday's outage is on me.
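The daily backup job is roughly this shape; the repository URL, bucket, keys, and retention numbers are illustrative, not my real values:

```sh
#!/usr/bin/env sh
# Daily backup job (cron). Repo URL, bucket, keys, and retention are illustrative.
set -eu
export RESTIC_REPOSITORY="s3:https://<account-id>.r2.cloudflarestorage.com/lab-backups"
export RESTIC_PASSWORD_FILE=/root/.restic-password
export AWS_ACCESS_KEY_ID="<r2-access-key>"
export AWS_SECRET_ACCESS_KEY="<r2-secret-key>"

restic backup /srv /etc --exclude /srv/tmp    # snapshot the bits that matter
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
restic check                                  # metadata sanity check on every run
```

The quarterly restore test from the bullet above is the part no script replaces.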
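The deploy side is equally unexciting. Roughly what the CI job runs, with host, user, and repo path as placeholders:

```sh
# The whole deploy step, more or less. Host, user, and path are placeholders.
ssh deploy@r620-a.lab \
  "cd /srv/clipflow && git pull --ff-only && docker compose up -d --build"
```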
What I get from running it
It's not just cost. Running infra you can pick up gives you debugging intuitions you don't get from cloud abstractions. When ClipFlow's worker froze under load, I knew which VM, which container, which process — because the chain from URL → Nginx → upstream → container ID → PID is something I built end to end. Cloud engineers I've met debug at the abstraction they were given; lab engineers debug at the abstraction the bug actually lives at.
It also means I can ship side projects with zero recurring cloud cost beyond Cloudflare R2. This portfolio site cost me literally nothing incremental to host. ClipFlow runs at the price of electricity. That asymmetry is what lets a Bandung programmer compete on product output with engineers earning four times my salary.
Topology diagram
The lab topology lives on the home page; see the SVG in the "The lab" section.
What's next
- WireGuard mesh so I can take a worker physically off-site without losing it from the cluster (peer-config sketch after this list)
- Move Prometheus to long-term storage (Mimir or VictoriaMetrics)
- Eventually replace one R710 with another R620 — newer power efficiency pays for itself in ~18 months at Indonesian electricity rates
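For the WireGuard item, the worker side would look something like this; keys, IPs, and the endpoint hostname are placeholders:

```sh
# Off-site worker's wg-quick config sketch. Keys, IPs, and endpoint are placeholders.
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address    = 10.10.0.5/24            # worker's address inside the mesh
PrivateKey = <worker-private-key>

[Peer]
# Hub back in the rack; workers reach the lab's private subnet through it.
PublicKey           = <hub-public-key>
Endpoint            = lab.cube-x.dev:51820
AllowedIPs          = 10.10.0.0/24, 10.0.0.0/24
PersistentKeepalive = 25             # keep the NAT mapping alive from off-site
EOF
systemctl enable --now wg-quick@wg0
```

With AllowedIPs covering the lab's private subnet, an off-site worker keeps reaching Prometheus and the job queue as if it were still in the rack.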