[-] rs5th@lemmy.scottlabs.io 11 points 1 year ago

You like deploying infrastructure, probably in a cloud environment, but you don’t want to push a bunch of buttons in their web interface, so you use Terraform to declaratively define the things you want, and it goes and builds them for you. Super useful when you need to build resources often, want to detect and correct config drift, or want to get started down the path of Infrastructure as Code.

[-] rs5th@lemmy.scottlabs.io 10 points 1 year ago

Hugo calls these sorts of things “frontends” and has a list here: https://gohugo.io/tools/frontends/

I haven’t had great luck with any of them personally.

[-] rs5th@lemmy.scottlabs.io 12 points 1 year ago

I wouldn't want to host anything on Windows unless you have to, or you want to learn more about Active Directory / Exchange / etc to help with a day job (assuming your day job is sysadmin / IT). Even then I'd do that inside Windows VMs on a Linux / ESXi host.

I personally wouldn't (and don't) expose my own authoritative DNS servers to the internet. I do split-horizon DNS: my internal BIND server handles my LAN, and outside DNS is handled by a provider that has an ACME (Let's Encrypt) module, so that I can do wildcard certs.

One thing to look into as you spin up services at home would be some sort of VPN like Tailscale, WireGuard, or even something like Cloudflare Tunnel so that you're not exposing services directly to the internet if you don't absolutely have to. I believe some of these projects/products let you specify DNS servers so that when your phone (for example) is connected to the VPN, it uses your home DNS servers instead of public ones.

Your very own self-hosting legend is about to unfold! A world of dreams and adventures with self-hosting awaits!

[-] rs5th@lemmy.scottlabs.io 8 points 1 year ago

Conservatives don't seem to have trouble with boob jobs, etc. I think this is an instance of using religion as an excuse when it's convenient.

[-] rs5th@lemmy.scottlabs.io 10 points 1 year ago

Here’s a cronjob to clean up the useless activity table every day:


apiVersion: batch/v1  # stable CronJob API (batch/v1beta1 was removed in Kubernetes 1.25)
kind: CronJob
metadata:
  name: postgresql-cleanup
  namespace: lemmy
spec:
  schedule: "0 0 * * *"  # every day at midnight
  jobTemplate:
    spec:
      backoffLimit: 0                # don't retry a failed run
      ttlSecondsAfterFinished: 3600  # clean up the finished Job after an hour
      template:
        spec:
          restartPolicy: Never       # Job pods must be Never or OnFailure
          containers:
          - name: postgres-cleanup
            image: postgres:alpine
            command: ["psql", "--host=postgresql", "--dbname=postgres", "--username=postgres", "--command=DELETE FROM activity WHERE published < NOW() - INTERVAL '1 day';"]
            env:
            - name: PGPASSWORD       # psql picks the password up from this env var
              valueFrom:
                secretKeyRef:
                  name: postgresql
                  key: postgres-password

[-] rs5th@lemmy.scottlabs.io 10 points 1 year ago* (last edited 1 year ago)

You did a Kubernete! Congrats!

Edit to add: one Kubernetes instance talking to another!

[-] rs5th@lemmy.scottlabs.io 11 points 1 year ago

I believe the activity table in Postgres is retained for 6 months (although I’m purging mine daily) and the pict-rs cache is 168 hours (1 week).

[-] rs5th@lemmy.scottlabs.io 11 points 1 year ago

I think the larger issue was users from those external instances interacting with posts / comments in Beehaw's communities. Since those instances have open registration, bad actors could just create new accounts after being banned from Beehaw.

[-] rs5th@lemmy.scottlabs.io 11 points 1 year ago

Better tools would give the admin team more options, like blocking lemmy.world users from interacting with Beehaw while Beehaw users can still interact with lemmy.world.

[-] rs5th@lemmy.scottlabs.io 9 points 1 year ago* (last edited 1 year ago)

I haven't used Docker Swarm (I have barely used Docker Compose), but I have run a couple of on-prem Kubernetes clusters (at my house and for clients at my day job) and cloud Kubernetes clusters, so I can speak to how complex it is to set up and run.

My background is systems administration, engineering, IT, and now DevOps. I've been using Linux since Ubuntu 6.06.

I set up my Kubernetes cluster with kubeadm because I wanted to learn, and it took me about a weekend to get my single-master, two-worker cluster up and running. I think you could probably do this using k3s much faster and with less of a learning curve (you don't have to care as much about Container Network Interfaces, for example, because k3s makes that decision for you).
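For what it's worth, kubeadm can take its settings from a YAML file instead of a pile of command-line flags. Something like this minimal sketch (the version, pod CIDR, and endpoint below are placeholders rather than my real values, and the podSubnet has to match whatever CNI you install):

# cluster.yaml -- minimal, hypothetical kubeadm config
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0                  # placeholder; use the release you're installing
controlPlaneEndpoint: k8s.example.lan:6443  # hypothetical; useful if you ever add more masters
networking:
  podSubnet: 10.244.0.0/16                  # must match the pod CIDR your CNI expects

Then it's kubeadm init --config cluster.yaml on the first control-plane node and kubeadm join on the workers with the token it prints.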

There is a lot of documentation out there on Kubernetes. Helm as a "package manager" (really a templating engine) can be nice if the software you want to deploy has a well-written Helm chart. Writing your own Helm charts can be a learning process; I've modified some but not written one from scratch yet.
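To make the "templating engine" part concrete, here's a rough, made-up example of how a chart template and its values fit together (the names and image are placeholders, not from a real chart):

# values.yaml -- the bits you override per install
image:
  repository: ghcr.io/example/whoami   # hypothetical image
  tag: "1.2.3"
replicaCount: 2

# templates/deployment.yaml -- lives inside the chart
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
      - name: app
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

helm install / helm upgrade fills the templates in with your values and sends the rendered manifests to the cluster.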

Kubernetes releases new versions about quarterly. I've done several upgrades on my primary home cluster over the course of the past 2 years and they've been pretty smooth, about an hour of time investment ~~total~~ each. And remember, I'm on the more nerdy and complex flavor of Kubernetes. I think with k3s these would be even smoother and quicker.

I feel like Kubernetes knowledge is probably more valuable out in the industry if that's a factor for you. I haven't come across any Docker Swarm clusters in my DevOps travels, just Kubernetes and some HashiCorp Nomad.

I'm curious to see what folks say about Docker Swarm. If you have any questions about Kubernetes or running your workload on it, I'd be happy to try to help!

[-] rs5th@lemmy.scottlabs.io 10 points 1 year ago

I went down this rabbit hole a couple months ago: birds are classified as dinosaurs. Not “descended from dinosaurs”, actual dinosaurs. Sauce

[-] rs5th@lemmy.scottlabs.io 8 points 1 year ago

I'm running a Kubernetes cluster on the Dell hardware, then another single-node k8s cluster on the Lenovo, mostly to run AdGuard Home / DNS in case the big cluster goes down for whatever reason.
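In case it helps anyone doing the same thing, here's roughly what pinning a DNS pod to a little single-node cluster can look like. This is a simplified sketch rather than my exact manifest (the namespace and tag are made up, and persistence is left out for brevity):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: adguard-home
  namespace: dns              # hypothetical namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: adguard-home
  template:
    metadata:
      labels:
        app: adguard-home
    spec:
      containers:
      - name: adguard-home
        image: adguard/adguardhome:latest
        ports:
        - containerPort: 53   # DNS over UDP, exposed on the node's own IP
          hostPort: 53
          protocol: UDP
        - containerPort: 53   # DNS over TCP
          hostPort: 53
          protocol: TCP
        - containerPort: 3000 # first-run setup wizard / web UI
          hostPort: 3000
# Persistent volumes for /opt/adguardhome/work and /opt/adguardhome/conf
# are omitted here to keep the example short.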

Hardware:

  • Two Dell r610s, each with 12 cores and 96 GB of RAM, running ESXi 6.7
  • Lenovo M900, 4 core, 16 GB RAM, Ubuntu and k3s
  • Synology 1515 with 12 TB usable
  • Synology 1517 with 32 TB usable
  • Juniper SRX 220H (Firewall)
  • Juniper EX 2200 48 port switch
  • UniFi in-wall WiFi APs

I run the following services, all in Kubernetes, with FluxCD doing GitOps from a repo in GitHub (for now, might move to Gitea later; there's a rough sketch of the Flux setup after the list):

  • Authentik
  • Bookstack
  • Calibre
  • Flame (Homepage)
  • Frigate NVR
  • Home Assistant
  • Memos
  • Monica
  • Plex
  • Prowlarr
  • Radarr
  • Rocket Chat
  • Sonarr
  • Tandoor
  • Tautulli
  • Unifi
  • UptimeKuma
  • VS Code
  • Zigbee2MQTT
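For anyone curious what the GitOps glue looks like, Flux mostly boils down to two objects: one pointing at the Git repo and one telling it which path in that repo to apply. Roughly like this (the repo URL and path are placeholders, not my actual repo):

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/homelab   # hypothetical repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: homelab
  path: ./apps        # directory of manifests in the repo
  prune: true         # delete things from the cluster when they're removed from Git

Flux watches the repo and reconciles whatever is under that path, so pushing a commit is the deploy.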