submitted 7 months ago* (last edited 7 months ago) by aleq@lemmy.world to c/datahoarder@lemmy.ml

Not sure if this is a better fit for datahoarder or some selfhosting community, but I'm putting my money on this one.

The problem

I currently have a cute little server with two drives connected to it, running a few different services (mostly media serving and torrents). The key facts here are that 1) it's cute and little, 2) it's handling pretty bulky data. Cute and little doesn't go very well with big RAID setups and such, and apart from upgrading one of the drives I'm probably at my limit in terms of how much storage I can physically fit in the machine. Also, if I want to reinstall it or something, that's very difficult to do without downtime, since I'd have to move the drives and services off to a different machine (not a huge problem since I'm the only one using it, but I don't like it).

Solution

A distributed FS would definitely solve the issue of physically fitting more drives into the chassis, since I could basically just connect drives to a Raspberry Pi and have that Pi join the distributed FS. Great.

I think it could also solve the issue of potential downtime if I reinstall or do maintenance, since I can have multiple services read off the same distributed FS and reroute my reverse proxy to the new services while the old ones are taken offline. There will potentially be a brief disruption, but no downtime.

Candidates

I know there are many different solutions for distributed filesystems, such as Ceph, MooseFS, GlusterFS and MinIO. I'm kinda leaning towards Ceph because of its integration in Proxmox, but it also seems like the most complicated solution of the bunch. Is it worth it? What are your experiences with these, and given the above description of my use case, which do you think would be the best fit?

Since I already have a lot of data it's a bonus if it's easy to migrate from my current filesystem somehow.

My current setup uses a lot of hard links as well, so it's a big bonus if the solution has something similar (i.e. some easy way of referencing the same data from multiple paths without actually duplicating it).
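For anyone unfamiliar with what I mean by hard links, here's a minimal sketch of the behaviour on a plain local filesystem (the paths and file contents below are made up for the example): two directory entries end up pointing at the same inode, so the data only exists on disk once.

```python
import os

# Hypothetical paths, purely for illustration.
os.makedirs("downloads", exist_ok=True)
os.makedirs("library/movies", exist_ok=True)

original = "downloads/movie.mkv"
linked = "library/movies/movie.mkv"

# Stand-in for a file a torrent client would have written.
with open(original, "wb") as f:
    f.write(b"fake movie data")

# Create a hard link: a second name for the same data, no copy is made.
os.link(original, linked)

# Both names point at the same inode, so the data is stored only once.
assert os.stat(original).st_ino == os.stat(linked).st_ino
print(os.stat(original).st_nlink)  # -> 2: two directory entries, one file
```

This is what lets a torrent client keep seeding from one path while a media library uses the other, without paying for the storage twice. Whether a given distributed FS offers an equivalent is exactly the question.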

[-] snekmuffin@lemmy.dbzer0.com 1 points 7 months ago

If you're on Linux or BSD, look into ZFS. Insanely easy to set up and admin, fs-level volume management, compression and encryption, levels of RAID if you want them, and recently they even added the option to expand your data pools with new drives. All of that completely userspace, without having to fiddle with expensive RAID cards or motherboard firmware.

[-] krnl386@lemmy.ca 1 points 7 months ago

Huh? ZFS is not 100% userspace. You’re right that ZFS doesn’t need hardware RAID (in fact, it’s incompatible), but the standard OpenZFS implementation (unless you’re referring to the experimental FUSE-based one) does use kernelspace on both FreeBSD and Linux.

[-] snekmuffin@lemmy.dbzer0.com 1 points 7 months ago

Oh, I might be wrong on that part, sorry about that

[-] aleq@lemmy.world 1 points 7 months ago

Isn't it a local filesystem though, so I can't expand the filesystem with other drives on my network?

