this post was submitted on 13 Apr 2025
19 points (91.3% liked)

Sysadmin

A community dedicated to the profession of IT Systems Administration


I posted last week about building a NAS, and on Friday I saw that the Jonsbo N4 case I had been eyeing for a while was in stock at a good price.

So now I am looking for a motherboard to base my system on, which seems to be a bit difficult.

I need an mATX or ITX board that can handle six SATA drives and also has an NVMe slot for a boot drive.

Performance-wise, I value power efficiency more than raw speed. I am on the fence between OpenMediaVault and TrueNAS: I like the familiarity of Linux, but I do value the features of ZFS.

If I end up on TrueNAS I may run a VM in the hypervisor from time to time, mostly just for testing.

The NAS will not be an HTPC, but will serve media through SMB and possibly NFS later.

Cooling could be a bit of an issue, as the case does not have a lot of space for a cooler.

[–] thisbenzingring 2 points 2 days ago* (last edited 2 days ago) (3 children)

You want to use a hardware RAID setup with redundancy, imo.

It uses up drives, but your shit will be a lot safer until you back it up onto something you can keep offline.

Which mobo would that be? idk, I just work in data storage as a sysadmin and this is where I'd start.

[–] hperrin@lemmy.ca 0 points 20 hours ago (1 children)

Why use a hardware RAID? If your controller dies, your data is inaccessible. Software RAID with something like ZFS or Btrfs is safer.

[–] thisbenzingring 1 points 15 hours ago (1 children)

What? Are you kidding or just being dumb? You can restore a RAID on different hardware with the hardware's configuration tools.

As long as the drives and the RAID are fine, it will rebuild.

[–] hperrin@lemmy.ca 1 points 11 hours ago* (last edited 8 hours ago)

Yes, you have to put in different hardware. (Almost definitely the same brand, maybe even same model.) It’s inaccessible until you get that hardware and replace it. I didn’t mean permanently. A software RAID will work in any system.

[–] Badabinski@kbin.earth 2 points 2 days ago

imo, hardware RAID is irrelevant for most small-scale use cases and can be a liability for homelabbers. In a professional context, I've had a RAID card shit itself, causing temporary data loss and downtime, because my idiot bosses didn't buy a spare card back when they set up their system. If you're doing hardware RAID, you must buy two cards, and they MUST be on the same firmware version. Software RAID is basically just as fast, is far more flexible, has one less SPOF, and is cheaper (a cheap HBA being all you need hardware-wise). About the only other thing some RAID cards have is a battery backup unit to get around write-hole issues, but good filesystems can help with that too.

Hardware RAID isn't necessarily obsolete, but I'd say it's like mainframes—the applications for it are highly specialized.
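To make the HBA-plus-software-RAID point concrete, here's a rough sketch of standing up a ZFS raidz2 pool on HBA-attached disks. The pool name and disk paths are placeholders, and this obviously destroys whatever is on the listed disks:

```python
#!/usr/bin/env python3
"""Rough sketch: ZFS raidz2 pool across six HBA-attached disks.

Pool name and /dev/disk/by-id paths are placeholders; requires the ZFS
userland tools (zpool/zfs), root privileges, and wipes the listed disks.
"""
import subprocess

POOL = "tank"  # hypothetical pool name
DISKS = [
    # stable by-id paths are preferable to /dev/sdX, which can reorder at boot
    f"/dev/disk/by-id/ata-EXAMPLE_DISK_{n}" for n in range(1, 7)
]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# raidz2 tolerates two simultaneous drive failures; ashift=12 suits 4K-sector drives
run(["zpool", "create", "-o", "ashift=12", POOL, "raidz2", *DISKS])
run(["zfs", "set", "compression=lz4", POOL])
run(["zpool", "status", POOL])
```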

[–] stoy@lemmy.zip 2 points 2 days ago (2 children)

As it is right now, I have zero redundancy, just my media spread across two HDDs. A future plan is to have two NAS units: a primary unit that I access from my machine as normal, and a separate unit that runs rsync or borgbackup against the primary every night.

At this moment I don't want perfect to be the enemy of good.
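For the nightly sync, something like this minimal sketch is what I have in mind. The host name and paths are made up, and it assumes rsync plus SSH keys between the two boxes; borgbackup would give versioned, deduplicated archives instead of a plain mirror:

```python
#!/usr/bin/env python3
"""Nightly primary -> backup mirror, meant to run from cron or a systemd timer.

Paths and host name are placeholders; assumes rsync is installed and SSH keys
are set up between the two machines.
"""
import subprocess
import sys

SRC = "/mnt/media/"                    # primary NAS share (trailing slash matters to rsync)
DST = "backup-nas:/mnt/media-mirror/"  # hypothetical second NAS, reachable over SSH

cmd = [
    "rsync",
    "-a",         # archive mode: recursion, permissions, times, symlinks
    "--delete",   # mirror deletions so the copy tracks the primary
    "--partial",  # keep partially transferred files so big copies can resume
    SRC,
    DST,
]

sys.exit(subprocess.run(cmd).returncode)
```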

[–] thisbenzingring 3 points 2 days ago (1 children)

I hear you. You can always just get an add-in card for more drives, buy some older-but-new platter drives, and make things a lot easier.

[–] stoy@lemmy.zip 1 points 2 days ago (1 children)

Yeah, I have thought about getting a PCIe SATA controller card, but have heard mixed opinions about those...

[–] thisbenzingring 2 points 2 days ago

you get what you pay for generally

[–] Dran_Arcana@lemmy.world 2 points 2 days ago (1 children)

I run clusters of both LSI-based hwraid and zfs at work. I strongly recommend zfs over hwraid. The long and short of it is hwraid hasn't kept up with software solutions, and software solutions are often both more performant and more resilient (at the cost of CPU/memory).

For homelab scale, zfs is definitely the way to go, especially for archive data.

Wendell wrote up a pretty good guide for those looking to understand what makes ZFS so good, if you want to dive deeper: https://forum.level1techs.com/t/zfs-guide-for-starters-and-advanced-users-concepts-pool-config-tuning-troubleshooting/196035

[–] stoy@lemmy.zip 1 points 2 days ago (1 children)

Yeah, I am a bit wary of hwraid since I have no experience with it; I do have some experience with software RAID.

My initial plan was to go with Linux, set up an mdadm RAID, and run LVM on top of it, though the more I think about it, the more it feels like a lab/experiment scenario, and I may get another NAS build to lab with.
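For reference, the mdadm + LVM layering would roughly look like the sketch below. The device names, RAID level, and volume names are placeholders, and mdadm --create wipes the member disks, so lab hardware only:

```python
#!/usr/bin/env python3
"""Order of operations for an mdadm RAID5 array with LVM on top.

Device names, RAID level, and volume names are placeholders; mdadm --create
destroys the member disks, so only run something like this on lab hardware.
"""
import subprocess

DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # hypothetical members

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. md array across the member disks
run(["mdadm", "--create", "/dev/md0", "--level=5",
     f"--raid-devices={len(DISKS)}", *DISKS])

# 2. LVM stacked on the array
run(["pvcreate", "/dev/md0"])
run(["vgcreate", "vg_nas", "/dev/md0"])
run(["lvcreate", "-l", "100%FREE", "-n", "lv_media", "vg_nas"])

# 3. Filesystem on the logical volume
run(["mkfs.ext4", "/dev/vg_nas/lv_media"])
```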

As it stands now, I'll probably go with TrueNAS and ZFS since it will be running in "prod" at home.

[–] hperrin@lemmy.ca 1 points 20 hours ago

I’d recommend ZFS or Btrfs over mdadm. They both have data repair (checksums plus redundancy) if something goes wrong, and mdadm doesn’t.
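The repair part mostly comes down to scheduled scrubs. A minimal sketch, with a made-up pool name and mountpoint:

```python
#!/usr/bin/env python3
"""Kick off a periodic scrub so checksum errors get found and repaired from
redundancy -- the self-healing that mdadm on its own doesn't do.

Pool name and mountpoint are placeholders; run from cron or a systemd timer.
"""
import subprocess

ZFS_POOL = "tank"          # hypothetical ZFS pool name
BTRFS_MOUNT = "/mnt/data"  # hypothetical Btrfs mountpoint

# ZFS: start a scrub, then show pool health
subprocess.run(["zpool", "scrub", ZFS_POOL], check=True)
subprocess.run(["zpool", "status", ZFS_POOL], check=True)

# Btrfs equivalent, if that's the filesystem in use:
# subprocess.run(["btrfs", "scrub", "start", BTRFS_MOUNT], check=True)
```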