you want to use a hardware raid system with redundancy imo
it uses up drives, but your shit will be a lot safer until you back it up onto something you can keep offline
which mobo would that be? idk, I just work in data storage as a sysadmin and this is where I'd start
Why use a hardware RAID? If your controller dies, your data is inaccessible. Software RAID with something like ZFS or Btrfs is safer.
what? are you kidding or just being dumb? you can restore a raid on different hardware with the hardware configuration tools
as long as the drives and the raid are fine, it will rebuild
Yes, you have to put in different hardware. (Almost definitely the same brand, maybe even same model.) It’s inaccessible until you get that hardware and replace it. I didn’t mean permanently. A software RAID will work in any system.
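To make that portability point concrete with something like ZFS (mentioned above), here's a minimal sketch of bringing an existing pool up on replacement hardware; the pool name "tank" is just an example:

```python
# Minimal sketch: re-attaching a ZFS pool after moving its disks to a
# different machine. ZFS keeps its own labels on the disks, so no
# matching controller or vendor tooling is needed. Pool name is made up.
import subprocess

# List pools that ZFS can see on the attached disks (read-only scan).
subprocess.run(["zpool", "import"])

# Import the pool by name, then confirm it's healthy.
subprocess.run(["zpool", "import", "tank"], check=True)
subprocess.run(["zpool", "status", "tank"], check=True)
```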
imo, hardware RAID is irrelevant for most small-scale use-cases and can be a liability for homelabbers. In a professional context, I've had a RAID card shit itself, causing temporary data loss and downtime, because my idiot bosses didn't buy a spare card back when they set up their system. If you're doing hardware RAID, you must buy two cards, and they MUST be on the same firmware version. Software RAID is basically just as fast, is far more flexible, has one less SPOF, and is cheaper (a cheap HBA being all you need hardware-wise). About the only other thing some RAID cards offer is a battery backup unit to get around write-hole issues, but good filesystems can help with that too.
Hardware RAID isn't necessarily obsolete, but I'd say it's like mainframes—the applications for it are highly specialized.
As it is right now I have zero redundancy, just my media spread across two HDDs. A future plan is to have two NAS units: a primary unit that I access from my machine as normal, and a separate unit that runs rsync or borgbackup against the primary every night (rough sketch below).
At this moment I don't want perfect to be the enemy of good.
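For when that second unit exists, here's a rough sketch of the nightly pull, assuming rsync over SSH from a cron job on the backup box; the hostname and paths are made up:

```python
#!/usr/bin/env python3
# Rough sketch of a nightly pull run from cron on the backup NAS.
# Hostname and paths are hypothetical; assumes SSH key access to the
# primary unit.
import subprocess

SRC = "primary-nas:/srv/media/"   # hypothetical share on the primary
DEST = "/srv/backup/media/"       # hypothetical local destination

# -a preserves permissions/timestamps; --delete keeps the mirror exact
# (drop it if deleted files should survive on the backup).
subprocess.run(["rsync", "-a", "--delete", SRC, DEST], check=True)
```

Scheduled with a plain cron entry like 0 3 * * * on the backup unit; borgbackup would slot into the same job if versioned snapshots matter more than a plain mirror.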
i hear you, you can always just get an add-in card for more drives, buy some older-model but new platter drives, and make it a lot easier
Yeah, I have thought about getting a PCIe SATA controller card but have heard mixed opinions about those...
you get what you pay for generally
I run clusters of both LSI-based hwraid and zfs at work. I strongly recommend zfs over hwraid. The long and short of it is hwraid hasn't kept up with software solutions, and software solutions are often both more performant and more resilient (at the cost of CPU/memory).
For homelab scale, zfs is definitely the way to go, especially for archive data.
Wendell wrote up a pretty good guide for those looking to understand what makes zfs so good if you want to dive deeper. https://forum.level1techs.com/t/zfs-guide-for-starters-and-advanced-users-concepts-pool-config-tuning-troubleshooting/196035
Yeah, I am a bit wary of hwraid since I have no experience with it; I do have some experience with software raid.
My initial plan was to go with Linux, set up an mdadm RAID, and run LVM on top of it, though the more I think about it, it feels like more of a lab/experiment scenario (rough sketch below), and I may get another NAS build to lab with.
As it stands now, I'll probably go with TrueNAS and ZFS since it will be running in "prod" at home.
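For reference, a minimal sketch of that mdadm + LVM stack in case it ends up as the lab build; device and volume names are made up, and the commands are destructive, so spare disks only:

```python
# Hypothetical lab sketch: RAID1 via mdadm with LVM on top.
# /dev/sdb and /dev/sdc are example devices and will be wiped.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# Mirror two disks into /dev/md0.
run("mdadm", "--create", "/dev/md0", "--level=1",
    "--raid-devices=2", "/dev/sdb", "/dev/sdc")

# LVM stack on the array: physical volume -> volume group -> logical volume.
run("pvcreate", "/dev/md0")
run("vgcreate", "vg_media", "/dev/md0")
run("lvcreate", "-l", "100%FREE", "-n", "lv_media", "vg_media")

# Filesystem on the logical volume.
run("mkfs.ext4", "/dev/vg_media/lv_media")
```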
I'd recommend ZFS or Btrfs over mdadm. They both checksum your data and can repair it if something goes wrong; mdadm doesn't.
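To make the repair point concrete, here's a small sketch of what that looks like with ZFS: a periodic scrub re-reads every block, verifies checksums, and repairs bad copies from the pool's redundancy. The pool name is an example:

```python
# Minimal sketch: kick off a scrub and check the result later.
# Pool name "tank" is an example.
import subprocess

# Scrub runs in the background; ZFS repairs checksum failures from
# redundant copies as it goes.
subprocess.run(["zpool", "scrub", "tank"], check=True)

# Check progress and any errors found/repaired.
subprocess.run(["zpool", "status", "-v", "tank"], check=True)
```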