Selfhosted

49249 readers
688 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or GitHub repo here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report them using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS
1

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

2

I think a lot of people have heard of OpenAI’s local-friendly Whisper model, but I don’t see enough self-hosters talking about WhisperX, so I’ll hop on the soapbox:

Whisper is extremely good when you have lots of audio with one person talking, but fails hard in a conversational setting with people talking over each other. It’s also hard to sync up transcripts with the original audio.

Enter WhisperX: WhisperX is an improved Whisper implementation that automatically tags who is talking (speaker diarization) and stamps each line of speech with a timestamp.

I’ve found it great for DMing TTRPGs — simply record your session with a conference mic, run a transcript with WhisperX, and pass the output to a long-context LLM for easy session summaries. It’s a great way to avoid slowing down the game by taking notes on minor events and NPCs.
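For anyone who wants to try it, the CLI flow is roughly this (a sketch based on the WhisperX README; speaker tagging needs a Hugging Face token for the pyannote diarization model):

# transcribe, align, and tag speakers in one pass
whisperx session.wav --model large-v2 --diarize --hf_token "$HF_TOKEN" --output_format vtt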

I’ve also used it in a hacky script pipeline to bulk download podcast episodes with yt-dlp, create searchable transcripts, and scrub ads by having an LLM sniff out timestamps to cut with ffmpeg.
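The rough shape of that pipeline, as a sketch (file names and the ad span are hypothetical; the LLM step is whatever model you point at the transcript):

yt-dlp -x --audio-format mp3 -o ep.mp3 "$EPISODE_URL"   # fetch audio
whisperx ep.mp3 --output_format json                    # searchable transcript
# ...LLM reads the JSON transcript and emits ad spans to cut, e.g. 842s-901s...
ffmpeg -i ep.mp3 -af "aselect='not(between(t,842,901))',asetpts=N/SR/TB" ep_clean.mp3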

Privacy-friendly, modest hardware requirements, and good at what it does. WhisperX, apply directly to the forehead.

3

I'm just using the Cosmic Terminal that's part of the Pop!_OS Cosmic Alpha, but I ran into similar issues with Gnome terminal and even with Termius.

Scenario: I'm currently working on using a VPS as the gateway to my homelab, so I have one SSH session to the Unraid server and one to the VPS, one in each tab. Each tab's title shows the username@servername of its session, but I keep getting tripped up and sometimes try to do something from the wrong machine. Once I even failed to notice that the SSH session to one of them had cut out and I was back on my desktop, and it took me an embarrassingly long time to realize why stuff was failing.

So what are y'all using to keep that organized in your workflow? Separate terminal windows instead of tabs? Some shell customizations to make them look different from one another? Or is it just so ingrained in your brain that you never have this problem?
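One idea I've been toying with, sketched for bash (hostnames and colors hypothetical): give each box a loud, distinct prompt in its own ~/.bashrc so the active tab is unmistakable.

# in ~/.bashrc on each machine
case "$(hostname)" in
  vps*)    PS1='\[\e[41m\]\u@\h\[\e[0m\]:\w\$ ' ;;  # red block = VPS
  unraid*) PS1='\[\e[44m\]\u@\h\[\e[0m\]:\w\$ ' ;;  # blue block = Unraid
  *)       PS1='\u@\h:\w\$ ' ;;                     # plain = local desktop
esac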

4

SOLVED: turns out mesa is not enough for this. i also had to install:

mesa-va-gallium mesa-dri-gallium

now we're good!
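for anyone else debugging this: a quick way to confirm libva works inside the container is vainfo, from Alpine's libva-utils package (a hypothetical session, run inside the container):

apk add --no-cache libva-utils
vainfo --display drm --device /dev/dri/renderD128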


hi all!

i've hit a wall here and could use your input if you have any thoughts!

i'm running Owncast latest via rootful Podman on a distro with SELinux.

i'm trying to implement hardware acceleration via the server's AMD GPU, but it is not working.

AMD Radeon RX 7600

Kernel: 6.15.4-1-default

i've turned VAAPI on in the web admin settings.

the container comes with ffmpeg 6 and libva.

For SELinux, i've run:

setsebool -P container_use_devices true

In my quadlet i've added:

[Container]

AddDevice=/dev/dri

Exec=apk add mesa

the devices appear rw in the container:

/app # ls -l /dev/dri

total 0

crw-rw---- 1 root 486 226, 0 Jul 9 15:58 card0

crw-rw---- 1 root 489 226, 128 Jul 9 15:58 renderD128

here is the error i'm getting:

time="2025-07-09T15:58:46Z" level=error msg="[AVHWDeviceContext @ 0x7f96891c7cc0] Failed to initialise VAAPI connection: -1 (unknown libva error)."

time="2025-07-09T15:58:46Z" level=error msg="Failed to set value '/dev/dri/renderD128' for option 'vaapi_device': I/O error"

time="2025-07-09T15:58:46Z" level=error msg="transcoding error. look at data/logs/transcoder.log to help debug. your copy of ffmpeg may not support your selected codec of h264_vaapi https://owncast.online/docs/codecs/"

time="2025-07-09T16:04:25Z" level=info msg="Inbound stream connected from 192.168.0.235:42698"

time="2025-07-09T16:04:25Z" level=info msg="Processing video using codec VA-API with 3 output qualities configured."

time="2025-07-09T16:04:25Z" level=error msg="[AVHWDeviceContext @ 0x7f8a2a047cc0] Failed to initialise VAAPI connection: -1 (unknown libva error)."

time="2025-07-09T16:04:25Z" level=error msg="Failed to set value '/dev/dri/renderD128' for option 'vaapi_device': I/O error"

time="2025-07-09T16:04:25Z" level=info msg="Inbound stream disconnected."

time="2025-07-09T16:04:25Z" level=error msg="unable to write rtmp packet io: read/write on closed pipe"

time="2025-07-09T16:04:25Z" level=error msg="transcoding error. look at data/logs/transcoder.log to help debug. your copy of ffmpeg may not support your selected codec of h264_vaapi https://owncast.online/docs/codecs/"

any help to troubleshoot this would be most appreciated! thank you!

5

Hey there!

i have an Owncast container that needs two extra files added to it every time it starts up because the base image doesn't include them. they can be downloaded from within the container. i just need a way to tell the container to always do that when it starts up.

i've tried adding this to my quadlet:

[Container]

Exec=apk update && apk add --no-cache mesa-va-gallium mesa-dri-gallium

but it doesn't work.
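my best guess at the wrinkle, noted as an unverified sketch: Exec= isn't run through a shell, so the && chain never executes, and it also replaces the image's default command. wrapping it in sh -c and chaining the original entrypoint (assuming it's /app/owncast; podman image inspect should confirm) would look like:

[Container]
Exec=sh -c "apk update && apk add --no-cache mesa-va-gallium mesa-dri-gallium && exec /app/owncast"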

does anyone know how to correctly automate this?

thanks!

6

Hey! I have been using Ansible to deploy Docker containers for a few services on my Raspberry Pis for a while now and it's working great, but I want to learn MOAR and I need help...

Recently, I've been considering migrating to bare metal K3S for a few reasons:

  • To learn and actually practice K8S.
  • To have redundancy and to try HA.
  • My RPis are all already running MicroOS, so it kind of makes sense to me to try other SUSE stuff (?)
  • Maybe eventually being able to manage my two separate server locations with a neat k3s + Tailscale setup!

Here is my problem: I don't understand how things are supposed to be done. All the examples I find feel wrong. More specifically:

  • Am I really supposed to have a collection of small yaml files for everything, that I apply with kubectl apply -f?? It feels wrong and way too "by hand"! Is there a more scripted way to do it? Should I keep everything in Ansible??
  • I see few to no examples of how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike the docker-compose examples that can be found everywhere. Am I looking for the wrong thing? (see the sketch after this list)
  • Even the official docs seem broken. Am I really supposed to run many helm commands (some of which just fail) and fiddle with SSL certs just to get Rancher and its dashboard?!
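To make the question concrete, here's my understanding of what a single service would look like as raw manifests (a minimal, unverified sketch with no storage or ingress; Navidrome's web port really is 4533):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: navidrome
spec:
  replicas: 1
  selector:
    matchLabels:
      app: navidrome
  template:
    metadata:
      labels:
        app: navidrome
    spec:
      containers:
        - name: navidrome
          image: deluan/navidrome:latest
          ports:
            - containerPort: 4533
---
apiVersion: v1
kind: Service
metadata:
  name: navidrome
spec:
  selector:
    app: navidrome
  ports:
    - port: 80
      targetPort: 4533

A single kubectl apply -f navidrome.yaml deploys both; tools like Helm and Kustomize mostly exist to template and bundle files like this. Is writing these by hand really the way?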

I feel that having K3S + Traefik + Longhorn + Rancher on MicroOS should be straightforward, but it's really not.

It's very much a noob question, but I really want to understand what I am doing wrong. I'm really looking for advice and especially configuration examples that I could try to copy, use and modify!

Thanks in advance,

Cheers!

7

My homelab currently consists of three mini PCs and will eventually go into a 10" rack.

They are all just plugged into the router my ISP provided. I'd like to get a new router that runs open-source software and build a new network around it, but I have no idea where to begin.

What hardware would you recommend?

Bonus: if possible, I'd like to eventually attach a SIM card to my network as a backup for the occasions when the ISP connection drops (just a nice-to-have).

8

Nice big old port scan. Brand new server, too: just a few days old, so there's nothing to find. Don't worry, I contacted AWS. Stay safe out there.

9

In a lot of movies, people say that if they don't type in a password, then "the" files will be emailed, messages sent to journalists, or the site will go online, etc., exposing whoever it is they're in danger from or blackmailing.

I always thought it was a cool concept, and it probably isn't hard for someone who knows what they're doing to set up a script for it. But I was wondering if there's a self-hosted app you could set up that lets you set timers, say for a day, and if a password isn't entered to cancel it, it sends files to an email address (maybe configured to hit all news outlets/journalists), deploys a website at a preconfigured domain, tweets something, etc.
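The core of it really is small; as a toy sketch of the timer part (paths, the 24-hour window, and the mutt mail step are hypothetical stand-ins), a cron job could check when a "check-in" file was last touched:

#!/bin/sh
# fire the payload if nobody has touched $CHECKIN in 24h (1440 minutes)
CHECKIN=/var/lib/dms/checkin
if [ -n "$(find "$CHECKIN" -mmin +1440)" ]; then
  mutt -s "If you are reading this..." -a /var/lib/dms/files.tar.gz \
    -- journalist@example.org < /var/lib/dms/letter.txt
fi

But a proper app would add the website/tweet actions, multiple timers, and a UI.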

10

Like how on Debian's website, you can find their ISOs and other related files in a very simple file-browser layout. It looks kind of old, but I want that. Know any projects or ways to set something like that up? The modern self-hosted stuff just doesn't seem simple enough, and both aesthetically and functionally I'd like something like what Debian does with their own files. I also want it to be reliable: with both Immich and Nextcloud, a relative of mine was unable to download a lot of photos (the download wouldn't even start on Nextcloud, and it stopped 30% of the way through on Immich). If reliable downloads necessitate a desktop app with its own file-exchange protocol, I'd be OK with that too (willing to compromise on the desired aesthetic and minimalist design).

The ideal thing is the thing here: https://cdimage.debian.org/debian-cd/
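From what I can tell, that exact look is just a web server's built-in directory index; a minimal nginx sketch (hypothetical paths and names):

server {
    listen 80;
    server_name files.example.org;
    root /srv/files;
    location / {
        autoindex on;              # the classic file-listing page
        autoindex_exact_size off;  # human-readable sizes
    }
}

Apache's mod_autoindex gives the same aesthetic, and is likely what Debian's own mirrors run.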

11

Background: I've been writing a new media server like Jellyfin or Plex, and I'm thinking about releasing it as an OSS project. It's working really well for me already, so I've started polishing up the install process, writing getting started docs, stuff like that.

I'm interested in how other folks have set up their media libraries. Especially the technical details around how files are encoded and organized.

My media library currently has about 1,100 movies and just shy of 200 TV shows. I've encoded everything as high quality AV1 video with Opus audio, in a WebM container. Subtitles and chapters are in a separate WebVTT file alongside the video. The whole thing is currently about 9TB. With few exceptions, I sourced everything directly from Blu-ray or DVD using MakeMKV. It's organized pretty close to how Jellyfin wants it.

What about you?

12

Your ML model cache volume is getting blown up during restart and the model is being re-downloaded during the first search post-restart. Either set it to a path somewhere on your storage, or ensure you're not blowing up the dynamic volume upon restart.

In my case I changed this:

  immich-machine-learning:
    ...
    volumes:
      - model-cache:/cache

To this:

  immich-machine-learning:
    ...
    volumes:
      - ./cache:/cache

I no longer have to wait uncomfortably long when I'm trying to show off Smart Search to a friend, or just need a meme pronto.

That'll be all.

13

I'm kind of surprised I've struggled with this so long. Right now I'm using Nextcloud camera upload, and mostly it works OK, but it shits the bed once in a while without me noticing, and then I need to spend time fixing it; it's never as simple as turning it off and on again.

I recently tried Syncthing, and while it works, it frequently crashes and gets stuck in a state where it says it's on and working but my destination folder shows as disconnected; the options to restart Syncthing from the side menu are greyed out, and the only way to make it work again is to force-close and reopen it.

I'm running vanilla stock Google Android and TrueNAS. Does anyone have a better solution?

14
15

Hello, I've been saying it to myself for a year now, but I'm on summer break rn and I really need to do something with my life. Here's some of the software I plan to host. Goal is to not spend more than $150-200, I do have some gift cards though.

Absolutely Will Run:

Nextcloud & Immich - I want to replace Google and OneDrive

Might do in the near future:

Jellyfin - my mom and I usually just bootleg by using Kodi on our FireTV, so not a major need rn, but might be nice for future purposes.

Pi-hole - better overall ad blocking, so I don't have to use NextDNS on all my devices, and maybe to help my mom out.

VPN - I currently pay for Proton, and we use it on the FireTV; the TV app sucks because it doesn't have a kill switch (PC and mobile do). I have several devices and profiles in use, so I was thinking an overall VPN might be nice.

Seeding - I think it would be nice to give back to the community, since I torrent every now and then.

OS Plan: I plan to use Proxmox, as I have a little experience with it, and others seem to like it a lot for managing multiple services.

I know I don't need to go full power mode rn, so I wanna stick with something low-end that I could maybe upgrade in the future. Should I just buy a used laptop/PC, or get something like an OptiPlex or ThinkServer? I don't wanna rack up my parents' electric bill. I already got some hard drives a year ago, but is using an external drive bad?

I know to use the Ethernet ports so my signal isn't shit, but I gotta work out the best spot to put my server. I know an okay amount about networking, and I'm a cyber student anyway, so this is a fun yet educational personal project for me.

When it comes to external access and security of these services, should I stick with Tailscale? Some people have concerns over the proprietary bits and are using Headscale instead, I guess.

Any guidance is much appreciated!

16

A comprehensive fitness coaching platform that lets you create workout plans, track progress, and access a vast exercise database with detailed instructions and video demonstrations.

17

I thought this video was rather interesting because, at 12:27, the presenter crunches the numbers to find out how many years it would take for a new computer purchase to become more environmentally friendly (in terms of total CO2 expended) than continuing to run a less efficient used model.

Depending on the specific use case, it could take as little as 3 years to break even on CO2 if both systems ran at max power draw forever, and as long as 30 years if the systems are mostly idle.
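(Back-of-the-envelope with made-up numbers: if a new machine embodies ~300 kg of CO2 in manufacturing and draws 25 W less than the old one at load, that's 0.025 kW x 8,766 h ≈ 219 kWh saved per year, or ~88 kg CO2/yr on a ~0.4 kg/kWh grid, so roughly 3.4 years to break even; shrink the delta to 2.5 W at idle and the same math stretches past 30 years.)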

18

Who benefits from this?

Even though Let’s Encrypt stresses that most site operators will do fine sticking with ordinary domain certificates, there are still scenarios where a numeric identifier is the only practical choice:

  • Infrastructure services such as DNS-over-HTTPS (DoH), where clients may pin a literal IP address for performance or censorship-evasion reasons.
  • IoT and home-lab devices: think network-attached storage boxes, for example, living behind static WAN addresses.
  • Ephemeral cloud workloads: short-lived back-end servers that spin up with public IPs faster than DNS records can propagate.
19

I don't know what to do. I'm experimenting with creating a Lemmy instance: it's listening on port 8536, but Cloudflare won't respond or connect, and while I've connected the tunnel to the instance, I can't figure out the error or how to make it connect to the server.

"Failed to connect to localhost port 8536 after 0 ms: Couldn't connect to server"

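My current suspicion (unverified) is that "localhost" inside the cloudflared container isn't the Lemmy container. A sketch of a config.yml with the ingress pointed at the container/service name instead (names hypothetical):

tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  - hostname: lemmy.example.org
    service: http://lemmy:8536   # container/service name, not localhost
  - service: http_status:404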
20

Tailscale recently announced our Series C fundraise, and while we were grateful for the support, the Internet, as it does, also raised a few eyebrows — some wondering whether this meant the dreaded “enshittification” was on the horizon for Tailscale.

Full Article -->

Tailscale recently announced our Series C fundraise. We were grateful for all the community support, but the Internet also raised a few of its collective eyebrows, wondering whether this meant the dreaded “enshittification” was coming next.

That word describes a very real pattern we’ve all seen before: products start great, grow fast, and then slowly become worse as the people running them trade user love for short-term revenue.

It’s a topic I find genuinely fascinating, and I've seen the downward spiral firsthand at companies I once admired. So I want to talk about why this happens, and more importantly, why it won't happen to us. That's big talk, I know. But it's a promise I'm happy for people to hold us to.

What is enshittification?

The term "enshittification" was first popularized in a blog post by Cory Doctorow, who put a catchy name to an effect we've all experienced. Software starts off good, then goes bad. How? Why?

Enshittification proposes not just a name, but a mechanism. First, a product is well loved and gains in popularity, market share, and revenue. In fact, it gets so popular that it starts to defeat competitors. Eventually, it's the primary product in the space: a monopoly, or as close as you can get. And then, suddenly, the owners, who are Capitalists, have their evil nature finally revealed and they exploit that monopoly to raise prices and make the product worse, so the captive customers all have to pay more. Quality doesn't matter anymore, only exploitation.

I agree with most of that thesis. I think Doctorow has that mechanism mostly right. But, there's one thing that doesn't add up for me: Enshittification is not a success mechanism.

I can't think of any examples of companies that, in real life, enshittified because they were successful. What I've seen is companies that made their product worse because they were… scared.

A company that's growing fast can afford to be optimistic. They create a positive feedback loop: more user love, more word of mouth, more users, more money, more product improvements, more user love, and so on. Everyone in the company can align around that positive feedback loop. It's a beautiful thing. It's also fragile: miss a step and it flattens out and soon it's a downward spiral instead of an upward one.

So, if I were, hypothetically, running a company, I think I would be pretty hesitant to deliberately sacrifice a step from that positive feedback loop, the loop I and the whole company spent so much time and energy building, to see if I can grow faster. User love? Nah, I'm sure we'll be fine, look how much money and how many users we have! Time to switch strategies!

Why would I do that? Whenever you switch strategies, there has to be a threshold moment, when something fundamental changes.

Threshold moments and control

In Saint John, New Brunswick, there's a river that flows one direction at high tide, and the other way at low tide. Four times a day, gravity equalizes, then crosses a threshold to gently start pulling the other way, then accelerates. What doesn't happen is a rapidly flowing river in one direction "suddenly" shifts to rapidly flowing the other way. You can see the threshold coming. It's predictable.

In my experience, for a company or a product, there are two kinds of thresholds like this, that when crossed, create a flow change.

The first one is control: if the visionaries in charge lose control, chances are their replacements won't "get it."

The new people didn't build the underlying feedback loop, and so they don't realize how fragile it is. There are lots of reasons for a change in control: financial mismanagement, boards of directors, hostile takeovers.

The worst one is temptation. Being a founder is, well, it actually sucks. It's oddly like being repeatedly punched in the face. When I look back at my career, I guess I'm surprised by how few times per day it feels like I was punched in the face. But, the constant face punching gets to you after a while. Once you've established a great product, and amazing customer love, and lots of money, and an upward spiral, isn't your creation strong enough yet? Can't you step back and let the professionals just run it, confident that they won't kill the golden goose?

Empirically, mostly no, you can't. Actually, the success rate of control changes, for well-loved products, is abysmal.

The saturation trap

The second trigger of a flow change comes from outside: saturation. Every successful product, at some point, reaches approximately all the users it's ever going to reach. Before that, you can watch its exponential growth rate slow down: the infamous S-curve of product adoption.

Saturation can lead us back to control change: the founders get frustrated and back out, or the board ousts them and puts in "real business people" who know how to get growth going again. Generally that doesn't work. Modern VCs consider founder replacement a truly desperate move, most of the time. Maybe a last-ditch effort to boost short term numbers in preparation for an acquisition, if we're lucky.

But sometimes the leaders stay on despite saturation, and they try on their own to make things better. Sometimes that does work. Actually, it's kind of amazing how often it seems to work. Among successful companies, it's rare to find one that sustained hypergrowth, nonstop, without suffering through one of these dangerous periods.

(That's called survivorship bias. All companies have dangerous periods. The successful ones survived them. But of those survivors, suspiciously few are ones that replaced their founders.)

If you saturate and can't recover - either by growing more in a big-enough current market, or by finding new markets to expand into - then the best you can hope for is for your upward spiral to mature gently into decelerating growth. If so, and you're a Buddhist, then you hire less, you optimize margins a bit, you resign yourself to being About This Rich And I Guess That's All But It's Not So Bad.

The devil’s bargain

Alas, very few people reach that state of zen. Especially the kind of ambitious people who were able to get that far in the first place. If you can't accept saturation and you can't beat saturation, then you're down to two choices: step away and let the new owners enshittify it, hopefully slowly. Or take the devil's bargain: enshittify it yourself.

I would not recommend the latter. If you're a founder and you find yourself in that position, honestly, you won't enjoy doing it and you probably aren't even good at it and it's getting enshittified either way. Let someone else do the job.

Defenses against enshittification

Okay, maybe that section was not as uplifting as we might have hoped. I've gotta be honest with you here. Doctorow is, after all, mostly right. This does happen all the time.

Most founders aren't perfect for every stage of growth. Most product owners stumble. Most markets saturate. Most VCs get board control pretty early on and want hypergrowth or bust. In tech, a lot of the time, if you're choosing a product or company to join, that kind of company is all you can get.

As a founder, maybe you're okay with growing slowly. Then some copycat shows up, steals your idea, grows super fast, squeezes you out along with your moral high ground, and then runs headlong into all the same saturation problems as everyone else. Tech incentives are awful.

But, it's not a lost cause. There are companies (and open source projects) that keep a good thing going, for decades or more. What do they have in common?

An expansive vision that's not about money, and which opens you up to lots and lots of users. A big addressable market means you don't have to worry about saturation for a long time, even at hypergrowth speeds. Google certainly never had an incentive to make Google Search worse.

(Update 2025-06-14: A few people disputed that last bit. Okay. Perhaps Google has occasionally responded to what they thought were incentives to make search worse -- I wasn't there, I don't know -- but it seems clear in retrospect that when search gets worse, Google does worse. So I'll stick to my claim that their true incentives are to keep improving.)
Keep control. It's easy to lose control of a project or company at any point. If you stumble, and you don't have a backup plan, and there's someone waiting to jump on your mistake, then it's over. Too many companies "bet it all" on nonstop hypergrowth and have no way back, no room in the budget, if results slow down even temporarily.

Stories abound of companies that scraped close to bankruptcy before finally pulling through. But far more companies scraped close to bankruptcy and then went bankrupt. Those companies are forgotten. Avoid it.
Track your data. Part of control is predictability. If you know how big your market is, and you monitor your growth carefully, you can detect incoming saturation years before it happens. Knowing the telltale shape of each part of that S-curve is a superpower. If you can see the future, you can prevent your own future mistakes.
Believe in competition. Google used to have this saying they lived by: "the competition is only a click away." That was excellent framing, because it was true, and it will remain true even if Google captures 99% of the search market. The key is to cultivate a healthy fear of competing products, not of your investors or the end of hypergrowth. Enshittification helps your competitors. That would be dumb.

(And don't cheat by using lock-in to make competitors not, anymore, "only a click away." That's missing the whole point!)
Inoculate yourself. If you have to, create your own competition. Linus Torvalds, the creator of the Linux kernel, famously also created Git, the greatest tool for forking (and maybe merging) open source projects that has ever existed. And then he said, this is my fork, the Linus fork; use it if you want; use someone else's if you want; and now if I want to win, I have to make mine the best. Git was created back in 2005, twenty years ago. To this day, Linus's fork is still the central one.

If you combine these defenses, you can be safe from the decline that others tell you is inevitable. If you look around for examples, you'll find that this does actually work. You won't be the first. You'll just be rare.

Side note: Things that aren't enshittification

I often see people worry about things that aren't enshittification. They might be good or bad, wise or unwise, but that's a different topic. Tools aren't inherently good or evil. They're just tools.

"Helpfulness." There's a fine line between "telling users about this cool new feature we built" in the spirit of helping them, and "pestering users about this cool new feature we built" (typically a misguided AI implementation) to improve some quarterly KPI. Sometimes it's hard to see where that line is. But when you've crossed it, you know.

Are you trying to help a user do what they want to do, or are you trying to get them to do what you want them to do?

Look into your heart. Avoid the second one. I know you know how. Or you knew how, once. Remember what that feels like.
Charging money for your product. Charging money is okay. Get serious. Companies have to stay in business.

That said, I personally really revile the "we'll make it free for now and we'll start charging for the exact same thing later" strategy. Keep your promises.

I'm pretty sure nobody but drug dealers breaks those promises on purpose. But, again, desperation is a powerful motivator. Growth slowing down? Costs way higher than expected? Time to capture some of that value we were giving away for free!

In retrospect, that's a bait-and-switch, but most founders never planned it that way. They just didn't do the math up front, or they were too naive to know they would have to. And then they had to.

Famously, Dropbox had a "free forever" plan that provided a certain amount of free storage. What they didn't count on was abandoned accounts, accumulating every year, with stored stuff they could never delete. Even if a good fixed fraction of users each year upgraded to a paid plan, all the ones that didn't kept piling up... year after year... after year... until they had to start deleting old free accounts and the data in them. A similar story happened with Docker, which used to host unlimited container downloads for free. In hindsight that was mathematically unsustainable. Success guaranteed failure.

Do the math up front. If you're not sure, find someone who can.
Value pricing. (i.e., charging different prices to different people.) It's okay to charge money. It's even okay to charge money to some kinds of people (say, corporate users) and not others. It's also okay to charge money for an almost-the-same-but-slightly-better product. It's okay to charge money for support for your open source tool (though I stay away from that; it incentivizes you to make the product worse).

It's even okay to charge immense amounts of money for a commercial product that's barely better than your open source one! Or for a part of your product that costs you almost nothing.

But, you have to do the rest of the work. Make sure the reason your users don't switch away is that you're the best, not that you have the best lock-in. Yeah, I'm talking to you, cloud egress fees.
Copying competitors. It's okay to copy features from competitors. It's okay to position yourself against competitors. It's okay to win customers away from competitors. But it's not okay to lie.
Bugs. It's okay to fix bugs. It's okay to decide not to fix bugs; you'll have to sometimes, anyway. It's okay to take out technical debt. It's okay to pay off technical debt. It's okay to let technical debt languish forever.
Backward incompatible changes. It's dumb to release a new version that breaks backward compatibility with your old version. It's tempting. It annoys your users. But it's not enshittification for the simple reason that it's phenomenally ineffective at maintaining or exploiting a monopoly, which is what enshittification is supposed to be about. You know who's good at monopolies? Intel and Microsoft. They don't break old versions.

Enshittification is a real, and tragic, phenomenon. But let's protect a useful term and its definition! Those things aren't it.

Epilogue: a special note to founders

If you're a founder or a product owner, I hope all this helps. I'm sad to say, you have a lot of potential pitfalls in your future. But, remember that they're only potential pitfalls. Not everyone falls into them.

Plan ahead. Remember where you came from. Keep your integrity. Do your best.

21

cross-posted from: https://lemmings.world/post/29678617

Thought I would share my simple docker/podman setup for torrenting over I2P. It's just 2 files, a compose file and a config file, along with an in-depth explanation, available at my repo https://codeberg.org/xabadak/podman-i2p-qbittorrent. And it comes with a built-in "kill-switch" to prevent traffic leaking out to the clearnet. But for the uninitiated, some may be wondering:

What is I2P and why should I care?

For a p2p system like bittorrent, for two peers to connect to each other, at least one side needs to have their ports open. If one side uses a VPN, their provider needs to support "port forwarding" in order for them to have their ports open (assuming everything else is configured properly). If you have ever tried to download a torrent with seeders available, yet failed to connect to any of them, your ports are probably not open. And with regulators cracking down on VPNs and forcing providers like Mullvad to shut down port forwarding, torrenting over the clearnet is becoming more and more difficult.

The I2P network doesn't have these issues. I2P is an alternative internet overlay network where all users are anonymous by default, so you don't need a VPN to hide your activity from your ISP. You don't need port forwarding either; all peers can reach each other. And if you do happen to run a VPN on your PC, that's fine too: I2P will work just the same. So if you're turning your VPN on and off all the time, you can keep I2P running throughout, and continue downloading/uploading.

I2P eliminates all the complications and worries about seeding, making it easy for beginners to contribute to the network. I2P also makes downloading easier, since all peers are always reachable. And it's more decentralized too, since users don't need to rely on VPN providers. And of course, it's free and open source!

A fair warning though: I2P is restricted in some countries. And in terms of torrenting specifically, torrents have to explicitly support I2P; you can't just take any clearnet torrent and expect it to work on I2P. The speeds are also generally lower, since there are fewer seeders and the built-in anonymity has a cost as well. However, I've been surprised at the amount of content on the I2P network, and I've been able to reach 1 MB/s download speeds. It's more than good enough for me, and it will only get better as more people join, so I hope this repo is enough for people to get started.

22

I have quite a few self-hosted services, both on machines at home and on a VPS. And there are even more odds and ends I've written that do things on my home network. A one-person maintenance team runs into serious memory limitations, particularly for the services that just run fine for years at a time.

After running into the frustration of forgetting how to run Nextcloud upgrades on the command line for the nth time, I realized it was time to write a tool.

The system wayfinder is what came out of that frustration. It lets you leave notes and commands in place around your infrastructure. After dogfooding it a bit, I was delighted when it saved me a ton of trouble dealing with one of my docker containers.

I took some time to work on it proper, wrote it up, and put it on GitHub, even though it is still a pre-release. Would you use a tool like this? What else would you want in it?

Edit: adding link to GitHub https://github.com/robbieh/way

23

Every time I check my nginx logs, it's more scrapers than I can count, and I couldn't find any good open-source solutions.
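The closest I've found to a low-effort mitigation is a static user-agent block in nginx (a sketch; the bot list is illustrative, not exhaustive, and the map block belongs in the http context):

map $http_user_agent $blocked_scraper {
    default 0;
    ~*(GPTBot|CCBot|Bytespider|ClaudeBot|Amazonbot) 1;
}
server {
    listen 80;
    server_name example.org;
    if ($blocked_scraper) { return 403; }
}

It only catches honest bots, though; anything spoofing a browser user agent slips through.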

24

After almost 15 years, my Plex server is no more. Jellyfin behind nginx with Authentik is running very nicely.

25

Hello Friendos

I'm a security / cloud engineer and I've had this lab for about 6 months now. In the last few weeks I've decided to start using it to self-host some "production" services for me and my loved ones (an extended family of 15), mainly a Nextcloud instance that serves as our "picture vault".

The hardware is a PowerEdge R430 with twin E5-2620s and 128 GB of RAM. It has 8x 1TB 2.5" HDDs.

This thing ended up being really overpowered for what I use it for, and I feel like by now I have explored everything I wanted to on this hardware. I was thinking about scaling laterally to R230s so I could play with load balancing and HA.

However these servers only have 2-4 drive bays, and I have no experience with DAS.

Can you guys help with some links? I'm researching DAS enclosures. I understand that any server with a PCI slot can take a SAS card, and any SAS enclosure is compatible.

Can you guys foresee any issue with a server as small as an R230 connecting to a SAS DAS?

I see that DAS enclosures have multiple connections per module; would I be able to connect multiple servers to the same module, or is it one server per connection that can't be shared?

If I have to share the connection, I'd have to host a NAS (I probably should anyway) and upgrade my switch from gigabit to 10G.

Would also appreciate some other recommendations for small form factor servers that can be bought for cheap. (18 inches or shorter)

Pic of current setup for attention... don't judge my PC case :) The 3U chassis for it is in the mail.
