submitted 25 Feb 2024 by hoxbug@lemmy.world to c/selfhosted@lemmy.world

Hello, I currently have a home server, mainly for media, with an SSD for the system and two 6TB hard drives set up in RAID 1 using mdadm; that's the most I can fit in the case. I've been getting interested in ZFS and want to expand my storage since it's getting pretty full. I have two 12TB external hard drives. My question is: can I create a pool (I think that's what they're called) using all four of these drives in a raidz configuration, or is this a bad idea?

(6TB + 6TB) + 12TB + 12TB should give me 24TB usable, and if I understand this correctly it should keep working even if one of the 6TB or 12TB drives fails.

How would one go about doing this? Would you mdadm the two 6TB drives into a RAID 0 and then create the pool on top of that?
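
From what I've read, the commands would be roughly along these lines; the device paths, array name and pool name are just placeholders and I haven't actually run this:

    # Stripe the two 6TB drives into a single 12TB md device (RAID 0)
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

    # Create a raidz1 pool over the md device and the two 12TB drives:
    # three 12TB members -> ~24TB usable, survives the loss of any one member
    zpool create tank raidz1 /dev/md0 /dev/sdd /dev/sde

Is that roughly right?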

I'm also just dipping my toes into NixOS, so a resource that covers that might be useful, since the home server is currently running Debian. The server will be left at my parents' house, and I'd like it to need minimal on-site support. My parents just need to be able to turn the screen on and use the browser.

Thank you

[-] Voroxpete@sh.itjust.works 2 points 8 months ago

I feel like what you're saying here, in effect, is "USB connected drives in a RAID are a bad idea, but if you're going to do it, ZFS is the way to go."

[-] avidamoeba@lemmy.ca 1 points 8 months ago* (last edited 8 months ago)

Hahaha. Good one!

Well, not quite. More like "USB-connected drives in RAID could be less reliable than internal ones, and software can deal with that. ZFS makes that easier than LVM+mdraid."

The downside of LVM+mdraid in my experience is that it needs more commands typed in to repair an array when something goes wrong. It probably doesn't break much more often than ZFS would under the same hardware conditions, and it can probably recover from the same conditions ZFS could. USB drives can present more failure modes than internal ones, but one of the points of RAID is to mitigate hardware failures. So I'm treating USB drives as just shittier drives whose shittiness the software should be able to hide. So far that has been borne out in practice in my anecdata: I've used both LVMRAID (LVM plus its built-in mdraid) and ZFS with questionable USB drives, and both have handled them with no data loss and rare downtime, less than once a year. ZFS requires less attention.

With all of that said, ZFS does of course provide data integrity checking and correction, which is a significant plus over LVM+mdraid. It has already saved me from data corruption due to RAM I had no idea had a problem; RAM that passed Memtest86+'s first pass. Little did I know that it fails on subsequent passes... Yes, the first and subsequent passes are different. So I'd use ZFS with USB or internal disks whenever I have the choice. 😂
