On November 13th, I posed the question “Am I back?”, referring only to whether I’d post more frequently than that gap from 2021 to 2024 would suggest.

I’m back to a lot more than just posting.

I previously mentioned my experimentation with OpenShift, then K3s, back at the end of November. That ended with me deploying my old virtualization server pattern, with the sole result being “I can deploy software now.”

I’m now running Radicale, Kavita, CommaFeed, Gitea, and Synapse. For the first time since April 2022 I have a VPS - two of them.

One of them, Sentinel, is an HAProxy host in front of my Gitea service - a chain worthy of its own post. The other is my Synapse server.

We are not in Kansas anymore

This is not my 2015-2021 pattern. Back then, my VMs were manually provisioned CentOS servers assigned core infrastructure duties like DHCP, OpenVPN, Squid, and FreeIPA. I’m not clicking through Anaconda anymore - a mirrors server with a kickstart file automates that - and most of my VMs now run Fedora CoreOS instead. My experimentation with Kubernetes taught me new tricks.

Those CoreOS VMs are extremely fast to deploy in my environment. The process goes like this:

1. Generate a MAC address
2. Provision the DHCP reservation in pfSense
3. Run one virt-install command to boot CoreOS based on a backing store

That’s it. No installation process; no questions; just a shared ignition file and hostnames set via DHCP.
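The whole flow fits in a few lines of shell. This is a sketch rather than my exact commands - the VM sizing, bridge name, image paths, and the `myvm` name are assumptions:

```shell
#!/bin/sh
# Sketch of the three-step CoreOS deploy. Sizing, bridge name, and paths
# below are assumptions, not the exact values from my environment.

# Step 1: generate a MAC in QEMU/KVM's locally administered 52:54:00 prefix.
gen_mac() {
  od -An -N3 -tx1 /dev/urandom |
    awk '{ printf "52:54:00:%s:%s:%s\n", $1, $2, $3 }'
}

# Step 2 is manual: add a DHCP reservation for that MAC in pfSense, so the
# VM gets its address and hostname without any per-VM ignition.

# Step 3: one virt-install invocation boots CoreOS from a qcow2 backing
# store and hands QEMU the shared ignition file via fw_cfg.
deploy_cmd() {
  name=$1 mac=$2
  printf '%s\n' "virt-install --connect qemu:///system \
--name $name --memory 4096 --vcpus 2 --import \
--os-variant fedora-coreos-stable \
--network bridge=br0,mac=$mac \
--disk size=20,backing_store=/var/lib/libvirt/images/fcos-base.qcow2 \
--qemu-commandline=\"-fw_cfg name=opt/com.coreos/config,file=/var/lib/libvirt/ignition/shared.ign\""
}

mac=$(gen_mac)
echo "reserve $mac in pfSense, then:"
deploy_cmd myvm "$mac"
```

The only per-VM state is the MAC address and its reservation; everything else is shared, which is why step 3 is a single command.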

Once a VM is deployed, the actual services are centralized in /root/docker. There’s no question of how to back up any service deployed this way: it’s the exact same backup of /root/docker every time.

I don’t have to be afraid of system updates like I was in the past. I don’t have to manually update these systems once every 2 weeks or so. They update themselves, and all I have to do is update the applications by running docker compose pull && docker compose up.
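Because the layout is uniform, updating every application is just a loop over that directory. A sketch, assuming one compose project per subdirectory of /root/docker as described above (the function itself is illustrative, not my actual script):

```shell
# Update every compose-managed service under one root directory.
# Assumes the /root/docker layout from the post: one subdirectory per
# service, each containing a docker-compose.yml.
update_all() {
  root=${1:-/root/docker}
  for dir in "$root"/*/; do
    [ -f "${dir}docker-compose.yml" ] || continue   # skip non-compose dirs
    echo "updating ${dir}"
    ( cd "$dir" && docker compose pull && docker compose up -d )
  done
}
```

Run with no argument for the real tree, or point it at another root to test.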

Backup Strategy

Backups in the old system consisted of snapshotting VM state, copying the disk image and definition to a share mounted via NFSoRDMA, and pivoting the overlay back into its backing store.
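For reference, that old flow looked roughly like this - the standard external-snapshot dance. Names and paths are placeholders, and the script is reconstructed from memory rather than preserved:

```shell
# Sketch of the old per-VM backup: external snapshot, copy, pivot back.
# VM names, disk targets, and paths are placeholders.
backup_vm() {
  vm=$1 disk=$2 img=$3 store=$4
  # New writes land in a temporary overlay; the base image goes quiescent.
  virsh snapshot-create-as "$vm" backup-tmp --disk-only --atomic --no-metadata
  cp "$img" "$store/"                     # copy the quiesced disk image
  virsh dumpxml "$vm" > "$store/$vm.xml"  # ...and the domain definition
  # Merge the overlay back into its backing store and pivot the VM onto it.
  virsh blockcommit "$vm" "$disk" --active --pivot
}
```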

Backups in the new system involve tarsnap firing from a systemd timer to back up the application state and docker-compose.yml of the individual applications. Each server has its own host key, which I back up locally. And if any of these VMs were to fail, I would simply look up the host key, set up tarsnap again, and restore /root/docker.
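The timer side is just two small units. This is a sketch of the shape, not my actual unit files - the names and schedule are assumptions:

```ini
# /etc/systemd/system/tarsnap-backup.service (name is an assumption)
[Unit]
Description=Tarsnap backup of /root/docker

[Service]
Type=oneshot
# Archive names must be unique, so date-stamp them.
# ($$ and %% are systemd escapes for literal $ and %.)
ExecStart=/bin/sh -c 'tarsnap -c -f "docker-$$(date +%%F)" /root/docker'

# /etc/systemd/system/tarsnap-backup.timer
[Unit]
Description=Daily tarsnap backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Restoring onto a replacement VM is the reverse: put the saved host key back in place, then extract the latest archive with tarsnap -x -f.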

Instead of tens of gigabytes of data, I’m backing up a few hundred megabytes. That makes cloud backups - like tarsnap - fiscally sensible. My backups seem to run only about $0.001/day.
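That figure is consistent with tarsnap’s published rate of 250 picodollars per byte-month, i.e. $0.25/GB-month; the stored size below is a rough assumption about what survives tarsnap’s deduplication, not a measurement:

```shell
# Rough cost check: $0.25/GB-month is tarsnap's published storage rate;
# ~120 MB of post-dedup data is an assumption.
cost=$(awk 'BEGIN { printf "%.4f", 0.12 * 0.25 / 30 }')
echo "\$${cost}/day"
```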

Progress

I may have set out to recreate the coziness of seven CentOS VMs on a CentOS host, next to two other CentOS hosts administered by two Fedora systems. But what I built is so much more powerful and reproducible - and this is only on one virtual host and a couple of VPSes. If I ever buy the SSDs, I have two more identical systems I can bring up - not necessarily for high-availability experiments, but for more raw capacity. And if I ever bring up all three? Then I can play with Pacemaker and Corosync like I previously wanted to.

But there’s no reason to bring up more capacity than I need. I’m already a quantum leap ahead of where I was this time last year, and firmly back into my homelab. What more could I really ask for?