Lemmy Instance on Small Private Server?
(self.asklemmy)
I'm running my instance on 2 cores, 2GB of RAM. Of course I'm the only one on there at the moment, but it's running great, and I think it might even be fine with a single core.
As others have said, if you are planning to use it as your own "user's home instance," that should be fine. I've read that a few people are running their instances on Raspberry Pis, which is pretty neat. While I have one I could use, I opted to set up a new droplet on DigitalOcean instead (I also run my own servers, like you). A 2 core / 2GB RAM / 50GB SSD droplet on DigitalOcean is about $18 (USD) a month, while a single-core droplet is about $12 (USD) per month.
If you plan to run an instance for others to use, be aware that federation traffic is going to be chatty on your home network and could impact your other devices. Probably not ideal, which is another reason I opted for a DigitalOcean droplet.
It did cross my mind whether one of my Raspberry Pis could run it. Actually, if it's possible, I'd do it on an Odroid N2+. Hmm...
How much headroom do you have left on that? I'm considering starting up a public instance and would love to get an estimate for per-user workload on a federated instance.
With just me on the system, CPU is barely ever over 2-3%. Load average looks good, and memory usage looks fine. You know what? Let me post some graphs for the past 24 hours, during which I've pretty much been on here nonstop. Again, I'm the only user on my instance, and this is all running in docker containers.
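If you want to spot-check the same numbers without pulling up graphs, here's a minimal sketch that reads load average and memory use straight from procfs. This is Linux-only and assumes the standard `/proc/loadavg` and `/proc/meminfo` layout; it isn't tied to docker at all, so it reflects the whole box, containers included.

```python
def load_average():
    """Return the 1, 5, and 15 minute load averages from /proc/loadavg."""
    with open("/proc/loadavg") as f:
        one, five, fifteen = f.read().split()[:3]
    return float(one), float(five), float(fifteen)


def memory_usage_percent():
    """Return used-memory percentage computed from /proc/meminfo."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are reported in kB
    used = info["MemTotal"] - info["MemAvailable"]
    return 100.0 * used / info["MemTotal"]


if __name__ == "__main__":
    one, five, fifteen = load_average()
    print(f"load: {one} {five} {fifteen}  mem used: {memory_usage_percent():.1f}%")
```

`docker stats` gives you the per-container split, but for "is this 2-core box okay overall" the host numbers are usually what you care about.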
I've mentioned this in a few other threads, but I'm tempted to fire up jmeter and push some load through my instance just to see how it behaves if I slam the system via the API. I just don't feel like learning the internal API endpoints and all that right now, though.
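For a rough stand-in that skips the jmeter learning curve, something like this thread-pool sketch gets you p50/p95 latency numbers. Assumptions: the instance runs Lemmy 0.17+ so `/api/v3/post/list` is the public post-listing endpoint, and `lemmy.example.com` is a placeholder for your own domain. It's a sketch, not a proper load-testing rig.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def hit(url: str) -> float:
    """Fetch a URL once and return the elapsed time in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.monotonic() - start


def slam(url: str, total: int = 100, workers: int = 10, fetch=hit):
    """Fire `total` requests across `workers` threads; return all latencies."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, [url] * total))


if __name__ == "__main__":
    # Placeholder hostname -- point this at your own instance.
    times = sorted(slam("https://lemmy.example.com/api/v3/post/list?limit=20"))
    p50 = times[len(times) // 2]
    p95 = times[int(len(times) * 0.95)]
    print(f"p50={p50:.3f}s  p95={p95:.3f}s")
```

Obviously only aim this at your own instance, and start with small `total`/`workers` values so you're measuring rather than DoS-ing yourself.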
Super cool, thanks
Awesome, this is super helpful! I'd be using a very similar setup. It might be best to start small, invite a couple people on, and see how that memory scales. I'll be avoiding any auto-scaling unless it becomes a much bigger project.
Well, ideally each service would have its own dedicated resources to begin with. But given that all of the Lemmy services plus Postgres are running on 2 cores with 2GB of RAM, that's pretty impressive.
Anyway, autoscaling doesn't necessarily solve scaling issues without a lot of thought and planning. It's not always as simple as throwing more hardware at the problem, as I'm sure you already know.
Any recommended guides? I consider myself pretty savvy with tech as a software engineer, but I'd really like some sort of docker image to just spin up on my unraid server. I'm pretty lazy about playing the whole sysadmin role...
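There isn't a single all-in-one image, but the stack is small. A hedged sketch of the usual four-service compose file, loosely based on the upstream docker-compose from the Lemmy install docs; the image tags, the `lemmy.hjson` mount path, the domain, and the Postgres password are all placeholders you'd swap for your own, so double-check against the current docs before deploying:

```yaml
version: "3.7"
services:
  lemmy:
    image: dessalines/lemmy:latest   # pin a real version tag in practice
    restart: always
    volumes:
      - ./lemmy.hjson:/config/config.hjson  # Lemmy's main config file
    depends_on: [postgres, pictrs]

  lemmy-ui:
    image: dessalines/lemmy-ui:latest
    restart: always
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.example.com  # placeholder domain
    ports:
      - "1236:1234"
    depends_on: [lemmy]

  pictrs:                            # image hosting service Lemmy relies on
    image: asonix/pictrs:latest
    restart: always
    volumes:
      - ./volumes/pictrs:/mnt

  postgres:
    image: postgres:15-alpine
    restart: always
    environment:
      - POSTGRES_USER=lemmy
      - POSTGRES_PASSWORD=changeme   # placeholder -- set your own
      - POSTGRES_DB=lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data
```

On unraid you could either run this through the compose plugin or translate each service into a separate container template; you'd still want a reverse proxy with TLS in front for federation to work.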