submitted 1 year ago* (last edited 1 year ago) by SlovenianSocket@lemmy.ca to c/plex@lemmy.ca

Been building this server up for about 5 years, adding hard drives as needed.

Running unraid

E5-2698 v3
64GB DDR4 ECC
X99-E WS
P600 for transcoding
10gbit networking w/ 3gbit fibre WAN
15 HDDs of assorted sizes, totaling 148TB, 132TB usable

top 42 comments
[-] sup@lemmy.ca 6 points 1 year ago

Wow, that's massive! How's the backup situation? Do you back up the data?

[-] SlovenianSocket@lemmy.ca 8 points 1 year ago

No, it’s just Linux ISOs, easy to reacquire ;p. I back up my radarr and sonarr databases so I can easily recover if need be

[-] deelayman@lemmy.ca 6 points 1 year ago

Do you download the 4k Linux ISOs?

[-] SlovenianSocket@lemmy.ca 5 points 1 year ago

My automation is set to download the highest quality available, so yes lots of 4K remuxes

[-] n_emoo@lemmy.ca 6 points 1 year ago

Do you charge for.. Erm.. Api access?

[-] SlovenianSocket@lemmy.ca 2 points 1 year ago

By donation of cookies or other baked goods only

[-] RagingNerdoholic@lemmy.ca 4 points 1 year ago

Least insane porn connoisseur

[-] Jeef@sh.itjust.works 4 points 1 year ago

How many concurrent users do you get on average?

[-] SlovenianSocket@lemmy.ca 6 points 1 year ago
[-] Jeef@sh.itjust.works 7 points 1 year ago

That's enough lol. I think once I started having 2 streams I was like yep, need a 3-node R730 cluster for "plex"

[-] juusukun@lemmy.ca 3 points 1 year ago

Oh man this is nice. I've been juggling space with an 8TB drive for years (got one when it was the biggest you could get). Recently after deleting some old stuff to free up space I discovered all the newer stuff was fragmented to shit. I was able to squeeze as much as I could onto SSDs and SD cards, defragment the entire thing with Defraggler, and move everything back into defragmented free space. I've managed to pack it like a can of sardines - I've gotten it down to less than 2GB free with no more than 2 or 3 fragments for most files, 7 tops. Still transfers both read and write at around 80MB/s average.

I've experimented with Plex, but since I used to only use the one system with Kodi I never bothered. I recently got a small HTPC to use with an old 3D TV and it's super easy to move Kodi's db to an external MySQL server to sync paused playback and completed lists etc.
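For anyone curious about the shared-database setup mentioned above: Kodi does this through its advancedsettings.xml file. A minimal sketch might look like the following (host and credentials are placeholders, not from this post; the same pattern works for the music database, and the exact schema version handling is documented on the Kodi wiki):

```xml
<advancedsettings>
  <!-- Point Kodi's video library at an external MySQL/MariaDB server
       so multiple devices share watched status and resume points.
       Host, port, user and password below are example values. -->
  <videodatabase>
    <type>mysql</type>
    <host>192.168.1.50</host>
    <port>3306</port>
    <user>kodi</user>
    <pass>kodi</pass>
  </videodatabase>
</advancedsettings>
```

Every Kodi install pointed at the same server then shares one library, which is what makes the synced pause/completed state work.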

I can also just connect Kodi to NVIDIA Shield/Moonlight and watch my entire library on my handheld wherever I go, with no additional setup beyond Kodi on my desktop and Moonlight, which I already use for emulating games a bit too demanding for my handheld

[-] lightrush@lemmy.ca 3 points 1 year ago

Great hardware.

Unraid... shudders

[-] rbos@lemmy.ca 2 points 1 year ago

[googles] "Unraid is a proprietary" ... NOPE

[-] cheerytext1981@lemmy.ca 1 points 1 year ago

What would you pick over Unraid?

[-] lightrush@lemmy.ca 2 points 1 year ago

Linux software LVMRAID or better yet - ZFS.

[-] quafeinum@lemmy.ca 4 points 1 year ago

But I don't want to spend my free time managing yet another server. Slap Unraid on it and call it a day.

[-] lightrush@lemmy.ca 1 points 1 year ago* (last edited 1 year ago)

I was referring to the actual storage system: Unraid's funny JBOD vs some easy-to-use, industry-standard solutions. Not the overall OS with any dancing bears it displays, or doesn't. ☺️

If you're looking at the latter, I have no argument against installing something with an easy-to-use interface, like Unraid.

[-] SlovenianSocket@lemmy.ca 3 points 1 year ago

Unraid supports zfs pools as of the 6.12 update

[-] lightrush@lemmy.ca 1 points 1 year ago

Oh interesting. Nice.

[-] antony@lemmy.ca 1 points 1 year ago

Do you have any guides for setting this up and optimising it? I'd like my next build to use Debian (like my desktop and servers) instead of Unraid or Synology, both of which are lacking in different ways and ready for retirement.

[-] lightrush@lemmy.ca 2 points 1 year ago* (last edited 1 year ago)

Guides no, but there's good documentation. E.g. LVMRAID and ZFS. Here's some overview of ZFS.

For storage arrays, I would use ZFS over LVMRAID for a few reasons, the most important being data integrity.

For the system drive, i.e. where the OS is installed, LVMRAID might be simpler to use. There's probably a wiki somewhere for installing Debian on ZFS, but LVMRAID has been a Linux staple for a while and it's easy to install an OS onto, e.g. via the OS installers. You could install on LVM, and after you're up and running, convert it to an LVMRAID with a single command and a second SSD.

The simplest possible scheme I can think of from setup perspective is to use the Debian installer to put your OS on LVM. Once Debian is running, install a second SSD, the same size or larger, then use LVM's lvconvert to convert to a RAID1. See "linear to raid1" in the LVMRAID man page (doc). Then for storage, install ZFS and create a zpool of the desired type from the available disks and throw your data on it.
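The linear-to-RAID1 conversion described above can be sketched roughly like this. All device, volume group, and LV names are illustrative assumptions, not from the post; check the LVMRAID man page and your own device names before running anything:

```shell
# Assumes the Debian installer put the OS on LVM in volume group "vg0"
# with a logical volume "root", and the second SSD appears as /dev/sdb.

# Make the new SSD available to LVM and add it to the volume group
sudo pvcreate /dev/sdb
sudo vgextend vg0 /dev/sdb

# Convert the existing linear LV into a two-leg RAID1 mirror
sudo lvconvert --type raid1 -m 1 vg0/root

# Watch the initial synchronization progress
sudo lvs -a -o name,copy_percent,devices vg0
```

Once copy_percent reaches 100, the OS survives the loss of either SSD.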

Read the docs (RTFM), write down a planned list of steps, build the commands needed for each step from the docs (where commands are relevant), then try it on a machine without data.

Here's a sample command I've used to create one of my zpools:

sudo zpool create -f -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa -O sync=disabled -O mountpoint=/media/storage-volume1 -O encryption=on raidz /dev/disk/by-id/ata-W /dev/disk/by-id/ata-X /dev/disk/by-id/usb-Y /dev/disk/by-id/usb-Z

It looks complicated but it's rather straightforward when you read the doc.
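A few sanity checks after creating a pool are worth knowing too. Note that `zpool create` expects the pool name before the vdev type (`zpool create [options] <pool> raidz <disks...>`), so "mypool" below is a placeholder for whatever name you chose:

```shell
# Check vdev layout, resilver/scrub state, and per-device error counters
zpool status mypool

# Check datasets and used/available space
zfs list

# Run a scrub periodically (e.g. monthly) to surface silent corruption
sudo zpool scrub mypool
```

Scrubs are the mechanism that actually exercises ZFS's data-integrity checksums across the whole pool, so scheduling them is worth the small I/O cost.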

[-] antony@lemmy.ca 3 points 1 year ago

Sound advice. I tend to script everything via Ansible, and it sounds like beyond the initial OS install this is a good candidate for automation. I'm not sure I needed another excuse to go hardware shopping, yet here we are.

[-] lightrush@lemmy.ca 2 points 1 year ago* (last edited 1 year ago)

You're the Ansible now. [I'm the captain now.jpg]

This is all automatable, of course. I'm using SaltStack, but the storage setup is no longer part of it. It used to be, but then I migrated from LVMRAID mirrors to RAIDZ and I didn't update the code to match. ZFS setup is just too easy; it's one command, more or less. I just keep the exact command for each machine, with the exact drives in it, on file.

[-] joshhsoj1902@lemmy.ca 2 points 1 year ago

When it comes to a fileserver, I still prefer Truenas.

I've used FreeNAS/TrueNAS for 10 or so years now and Unraid for about 5. For the last year I've been working on migrating everything back to TrueNAS (Scale, in my case)

Some of my pain points with unraid:

  • Disk read speeds. Since a read only ever comes from a single disk, bottlenecks are much easier to notice.
  • Disk replacement. When a disk fails, I find the process of replacing it (or deciding not to replace it and scattering the data across the remaining disks) fairly tricky and honestly a little scary. I've had to do it twice now, and it's the biggest reason I now only use Unraid to run services but not to store any important data.
  • Cache disks are meh. Over the years I've had 3 or 4 times where the mover just stopped, which resulted in a cache disk filling up and not flushing to the HDDs, which then corrupted some database or file an application was using. On one hand you have to use SSD cache disks to run apps or VMs, since there's no way to speed up read speeds on HDDs, but on the other it just doesn't work well given enough time.

Some pros:

  • Application/service hosting is still great in Unraid. It's still a pain in the ass getting a VM running on TrueNAS Scale, but TrueNAS Scale lets you run Docker directly.

  • being able to just add single disks at a time in unraid is nice (until you need to replace one...)


Anyway, that's my off-the-top-of-my-head reasoning. TrueNAS is a little more work to use overall, but I've found it much more stable

[-] lightrush@lemmy.ca 1 points 1 year ago* (last edited 1 year ago)

Sounds a bit like a clown raid if you ask me. It's as if it wasn't designed to be robust under production loads. 🤔

[-] thekaufaz@toast.ooo 2 points 1 year ago

Crazy. I have about 50TB usable. Do you have it all backed up? Right now my backups fit on two 14TB drives.

[-] learning2Draw@lemmy.ca 2 points 1 year ago

Any recommendations on hdds?

[-] SlovenianSocket@lemmy.ca 7 points 1 year ago

Buy external HDDs and shuck the drives out of them. Usually 25% cheaper than internal drives

[-] someguy3@lemmy.ca 2 points 1 year ago

Really? Used to be the opposite.

[-] papertowels@lemmy.one 2 points 1 year ago
[-] sup@lemmy.ca 1 points 1 year ago

Thank you for sharing! Wish there was a Canadian version too

[-] lightrush@lemmy.ca 1 points 1 year ago

Well the external ones are often slower, but who gives a shit when you put so many of them into a single storage array.

I'm using 4 external drives in their external enclosures over USB and the storage array performs pretty great for larger files. It's even pretty good for random IO but ZFS contributes to that.

[-] Bardak@lemmy.ca 1 points 1 year ago

I've been thinking of getting a refurbished 1-litre office PC and a couple of external HDDs to build a basic starter NAS for cheap.

[-] bier@lemmy.blahaj.zone 1 points 1 year ago

Refurbished Seagate Exos drives go for around 13€/TB from a German site, for example.

[-] Goodtoknow@lemmy.ca 2 points 1 year ago

What does your parity arrangement look like?

[-] SlovenianSocket@lemmy.ca 2 points 1 year ago

Just a single 16TB drive for now. I need a bigger chassis to do dual parity

[-] MentallyExhausted@reddthat.com 1 points 1 year ago

Oof, definitely recommend dual parity for this many drives.

[-] kionite231@lemmy.ca 1 points 1 year ago

Do you host any public instance in it?

[-] SlovenianSocket@lemmy.ca 1 points 1 year ago

No. The only public-facing services are Ombi and game servers

this post was submitted on 13 Jun 2023
80 points (98.8% liked)

Plex

2401 readers

Welcome to Plex, a community dedicated to Plex, the media server/client solution for enjoying your media!

founded 1 year ago