Testing vs Prod (lemmy.world)
submitted 13 hours ago* (last edited 13 hours ago) by njordomir@lemmy.world to c/selfhosted@lemmy.world
 

I've been slowly moving along in this self-hosting journey and now have a number of services that I regularly use and depend on. Of course I'm backing things up, but I also still worry about screwing up my server and having to rollback/rebuild/fix whatever got messed up.

I'm just curious: for those of you with home labs, do you use a testing environment of some kind, or do you just push whatever you're working on straight to "production"?

  • edit: grammar
top 11 comments
[–] HumanPerson@sh.itjust.works 1 points 1 hour ago

Eh, I sometimes spin up a temporary docker container for some nonsense on a separate computer. I usually just go for it after checking no one is on and backing up necessary data.
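For that kind of throwaway testing, a temporary container can be a one-liner; `--rm` discards it on exit (the image and container name here are just examples):

```bash
# Throwaway container: removed automatically when you exit the shell.
docker run --rm -it --name scratch alpine:latest sh
```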

[–] avidamoeba@lemmy.ca 19 points 12 hours ago

Sir, every professional developer knows there's never the time or the people to maintain a testing environment, so testing is done in production! That testing environment you're dreaming of is missed shareholder value.

[–] ambitiousslab@lemmy.ml 2 points 8 hours ago* (last edited 8 hours ago)

For services that only I depend on, I have production-only, since I can only inflict damage on myself and can often work around problems.

For the XMPP server my friends and family also depend on, I have a dedicated nonprod VPS. My services are driven by ansible playbooks, so I'll tweak the playbook with whatever change I want to make, confirm it works in nonprod, and then run the same playbook against prod.
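A minimal sketch of that nonprod-then-prod flow, assuming a conventional layout with one playbook and two inventories (the `inventories/` paths and `site.yml` name are illustrative):

```bash
# Rehearse the change against the nonprod VPS first...
ansible-playbook -i inventories/nonprod site.yml

# ...smoke-test the XMPP server, then run the same playbook against prod.
ansible-playbook -i inventories/prod site.yml
```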

Whenever there's a new Debian Stable release, I'll rebuild the servers completely, to try and prevent "drift" between the nonprod and prod versions (not that I change things often enough for this to become a big problem). This is also the big test of my backups, which so far haven't been needed in a "real" emergency 🤞

[–] notabot@lemm.ee 2 points 9 hours ago

I manage all my homelab infra stuff via ansible and run services via Kubernetes. All the ansible playbooks are in git, so I can roll back if I screw something up, and I test changes on a sacrificial VM first when I can. Running services in Kubernetes means I can spin up new instances and test them before putting them live.

Working like that makes it all a lot more relaxing as I can be confident in my changes, and back them out if I still get it wrong.
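One way that "test it before it goes live" can look in practice, sketched with plain kubectl (the namespace, manifest, and deployment names are hypothetical):

```bash
# Stand up a throwaway copy of the service in its own namespace.
kubectl create namespace myapp-test
kubectl apply -n myapp-test -f myapp.yaml
kubectl rollout status -n myapp-test deployment/myapp

# Looks good? Apply the same manifest to the live namespace and clean up.
kubectl apply -n prod -f myapp.yaml
kubectl delete namespace myapp-test
```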

[–] JovialSodium 4 points 12 hours ago* (last edited 12 hours ago)

Nope. I fiddle until it does what I want. If the thing I'm working on is complex, or I'm struggling with it, I'll keep versions of configs. And I back up working configs via an rsync job. It isn't a particularly robust solution, but I'm content with it for my needs.
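That kind of config backup can be a single cron-able rsync invocation; a sketch with hypothetical paths and destination host:

```bash
# Mirror known-good configs to a backup box; -a preserves permissions and
# timestamps, --delete keeps the mirror from accumulating stale files.
rsync -a --delete /etc/myservices/ backupbox:/srv/backups/configs/
```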

[–] N0x0n@lemmy.ml 4 points 12 hours ago* (last edited 12 hours ago)

Production is my testing lab, but only in my homelab! I guess I don't care about perfectly securing my services (really dumb and easy passwords, no 2FA, passwords sitting in plain sight...) because I'm not directly exposing them to the web; I only access them externally via WireGuard! That's really bad practice, though, and I'll probably clean up that mess sometime soon, but right now I can't, I have to cook some eggs...

There are two things, though, where I actually have a more complex workflow:

  • A rather complex automated incremental backup script for my Docker container volumes, databases, config files, and compose files (a sketch of the idea follows this list).

  • Self-hosted mini-CA to access all my services via a nice .lab domain and get rid of that pesky warning on my devices.
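A sketch of the incremental idea behind that backup script, using rsync's `--link-dest` so unchanged files are hard-linked against the previous run (all paths are hypothetical, and the real script described above covers more):

```bash
#!/usr/bin/env bash
set -euo pipefail
SRC=/srv/docker                 # volumes, databases, configs, compose files
DEST=/mnt/backup/docker
STAMP=$(date +%F_%H%M%S)

# Unchanged files become hard links to the previous snapshot, so each
# run only costs the space of what actually changed.
rsync -a --delete --link-dest="$DEST/latest" "$SRC/" "$DEST/$STAMP/"
ln -sfn "$DEST/$STAMP" "$DEST/latest"
```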

I always test on a VM on my personal desktop computer whether my backups are actually working, because no backups would mean all those years of tinkering were for nothing... That would bring on some nasty depression...

Edit: I have a rather small homelab, everything on an old laptop, but I'm still quite happy with the result and it works as expected.

[–] Zwuzelmaus@feddit.org 2 points 12 hours ago

No testing environment in my home lab so far.

But on the other hand, no planned builds either. Just fiddling around til it works.

I am currently planning for new hardware, and there I'll do everything with build scripts, as fully automated as possible: the whole setup, from scratch. But for that I need to do some learning first.

So the new hardware is going to be its own test environment for a good while, until it turns into production.

[–] beerclue@lemmy.world 2 points 12 hours ago

I personally use my home lab to test and learn, and I try to mimic a corporate environment. I have multiple instances of DNS, proxy, etc., and I have a "prod" and a separate "staging" k8s environment. As much as possible, without going nuts about it, I try updates and potentially breaking changes in the staging cluster first.
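With separate kubeconfig contexts for the two clusters, promoting a change can be as simple as this sketch (the context names and manifest directory are illustrative):

```bash
# Apply the manifests to the staging cluster first and watch for breakage...
kubectl --context staging apply -f apps/

# ...then promote the exact same manifests to prod.
kubectl --context prod apply -f apps/
```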

[–] themoonisacheese@sh.itjust.works 1 points 11 hours ago

My latest project runs on a VM that I use VS Code's SSH editing feature on. I edit the only copy of the file in existence (I have made no backup and there is no version control), and then I restart the systemd service.

So what if I mess it up? Big deal. The discord bot goes down for a few minutes and I fix it.

Same goes for the machine configs. Ideally the machines are stable, the critical ones get backups, and if they aren't stable then I suppose the best way to fix them would be in prod (my VMs run Debian; they're stable).

[–] lorentz@feddit.it 1 points 11 hours ago

I don't have a testing environment, but essentially all my services run in Docker, saving their data in a directory mounted on the local filesystem. The compose file pins each image to a SHA digest read from an env file. I have a shell script (sketched just after the list below) which:

  1. Takes a new btrfs snapshot of the volume containing everything
  2. Pulls the new Docker images and stores their hashes in the env file
  3. Restarts all the containers
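A minimal sketch of a script along those lines, assuming a compose file that references `${APP_IMAGE}` from `.env` (the paths, subvolume names, and `myapp` image are illustrative):

```bash
#!/usr/bin/env bash
set -euo pipefail

# 1. Read-only btrfs snapshot of the subvolume holding all service data.
btrfs subvolume snapshot -r /srv/data "/srv/snapshots/$(date +%F_%H%M%S)"

# 2. Pull the newest image by tag and record its digest in the env file.
cd /srv/stack
docker pull myapp:latest
docker image inspect --format 'APP_IMAGE={{index .RepoDigests 0}}' myapp:latest > .env

# 3. Recreate the containers from the newly pinned digest.
docker compose up -d --force-recreate
```

Taking the snapshot before the pull means the data and the image pin can always be rolled back together.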

If a new image version is broken, rolling back is as simple as copying the old hash back into the env file and recreating the container. If data gets corrupted, I can just copy the last working state from an old snapshot.

The whole OS is on a btrfs volume which is snapshotted regularly, so ideally, if an update fucks it up beyond recovery, I can always boot from a rescue image and restore an old snapshot. But I honestly feel this is an extra precaution: in all the years I've run Debian on my computers, it has never reached the point of being unbootable.

I use testing, prod, and stale. Stale is simply one version behind prod, in case I see something in prod I need to roll back.