TrueNAS virtualized under Proxmox with HBA card passed through. I don't run apps on my NAS, it's just for storage.
I just have a Synology with 4 drives. Super basic and was very easy to set up and takes up very little space in a closet. I mount it to my Ubuntu server using samba, and then any data processing that needs to be done on that data (e.g. plex, music server, etc.) is done on the server, which is much more powerful than the little Celeron CPU that the Synology has.
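For anyone wanting to replicate this kind of setup, a Samba/CIFS mount like that can be made permanent with an fstab entry along these lines (the hostname, share name, and paths here are placeholders, not the poster's actual config):

```
# /etc/fstab — mount the Synology share at boot
# //synology.local/media and /mnt/nas are example names
//synology.local/media  /mnt/nas  cifs  credentials=/etc/nas-creds,uid=1000,iocharset=utf8,vers=3.0  0  0
```

The credentials file just holds `username=` and `password=` lines and should be readable only by root (`chmod 600`). You also need the `cifs-utils` package installed for `mount.cifs` to be available.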
I have an Ubuntu VM running on my Proxmox server. It just exports some folders over NFS that I mount from my laptops and PC. Then I have Nextcloud running in a separate VM so my phone can upload photos. The NC storage is all the NFS mounted folders from the NAS. Simple and works.
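A minimal NFS export for a setup like this is a line in `/etc/exports` on the VM (the path and subnet are examples, not the poster's actual values):

```
# /etc/exports — share a folder read/write with the local subnet
/srv/shares  192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing, `sudo exportfs -ra` reloads the exports, and clients mount it with something like `mount -t nfs nas-vm:/srv/shares /mnt/shares`.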
I run everything on a lean Ubuntu server install. My Ansible playbooks then take over and set up ZFS and docker. All of my hosted services are in docker, and their data and configs are contained, regularly snapshotted, and backed up in ZFS.
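As a rough sketch of how ZFS snapshotting of Docker data can work (the pool and dataset names here are hypothetical, not the poster's layout):

```
# One dataset per service keeps snapshots and rollbacks granular
zfs create tank/docker/nextcloud

# Take a dated, read-only snapshot (e.g. nightly from cron or an Ansible task)
zfs snapshot tank/docker/nextcloud@2024-01-01

# Replicate the snapshot to a separate backup pool
zfs send tank/docker/nextcloud@2024-01-01 | zfs receive backup/nextcloud
```

Snapshots cost almost nothing until the data diverges, which is why snapshotting container configs and volumes this way is cheap enough to do regularly.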
I run basically all of the *arr stack, Plex (friendlier to my less tech-savvy family than my preferred solution, Jellyfin), Home Assistant, Frigate NVR, Obsidian LiveSync, a few Minecraft worlds, Docspell, Tandoor Recipes, Gitea, Nextcloud, FoundryVTT, an internet radio station, Syncthing, WireGuard, ntfy, Calibre, Wallabag, Navidrome, and a few pet projects.
I also store or back up all of the important family documents and photos, though I haven't implemented Immich just yet; I'm waiting for a few features and a little more development maturity.
About 30TB usable right now.
Docspell
Could you go into a bit more detail on this particular stack and how it's useful to you?
Certainly. Mostly it started as a way to keep tax documents and receipts safe and easily findable.
It's grown into a "huh, maybe this letter from <bank, school, insurance, charity, etc.> is important, but it clutters the house less as ones and zeros," so we scan it in.
Then when we need info, we can just search for the name of the sender, the date, account numbers, literally anything remotely legible in the document and get lightning fast results.
Using an old Netgear ReadyNAS R102 with 4.5 TB of usable storage in RAID 0.
I used to run all kinds of services on the NAS itself via SSH access, but I've since moved those to separate Raspberry Pis. The Pis use the NAS as networked storage.
I run a webserver, music server, matrix server and torrent client seeding ubuntu images.
I want to make a storage cluster using Ceph in the future, but I've not found any suitable small computers that I could use with that.
I'm using a Synology setup. I thought I'd grab an off-the-shelf option as I have a habit of going down rabbit holes with DIY projects. It's working well, doing a one-way mirror of my local storage with nightly backups from the NAS to a cloud server.
I use synology. I’ve done freenas, openfiler, even just straight zfs/Linux/smb/iscsi on Ubuntu and others. Synology works well and is quite easy to setup. I let the nas do file storage. And tie other computers to it (namely sff dell machines) to do the other stuff, like Pi-hole or plex. Storage is shared from the nas via cifs/smb or iscsi.
Synology also has one of the best backup solutions for home use IMHO with Active Backup for Business. It can do VMware, Windows, Mac, Linux, etc. I actually have an older second NAS for that alone, but you can do it all in one easily.
I built a massively overkill NAS with the intention of turning it into a full-blown home server. That fizzled out after a while (partially because the setup I went with didn't have GPU power options on the server PSUs, and finagling an ATX PSU in there was too sketchy for me), so now it's a power hog that just holds files. I turn it on to use the files, then flip it back off to save on its ridiculous idle power costs.
In hindsight I'd have gone with a lighter motherboard/CPU combo and kept the server-grade stuff for a separate unit. The NAS doesn't need more than a beefy NIC and a SAS drive controller, and those are only x8 PCIe slots at most.
Also, I use TrueNAS SCALE; it's more work to set up than Unraid, but the ZFS architecture seemed too good to ignore.
A GPU isn't really necessary for a home server unless you want to do lots of transcoding for clients. I have a power-hungry server that runs a VM offering Samba and NFS shares as well as a bunch of other VMs, LXC containers, and Docker containers, with a full *arr stack, Plex, Jellyfin, a JupyterLab instance, Pi-hole, and a bunch of other stuff.
I was trying to do some fancy stuff like GPU passthrough to make the ultimate all-in-one unit that I could have 2 or 3 GPUs in and have several VMs running games independently, or at least the option to spin one up for a friend if they came over. I'm probably not quite sophisticated enough to pull that off anyway, and the use case was too uncommon to bother with after unga-bungaing a power distribution board after a hard day of work.
Ah now I get it. You'll probably need an expensive PSU to make that work. I'm sure there would be some option though in the server segment for people building GPU clusters.
Yeah, I was trying to go all the way when I should have compartmentalized it a bit and just had two computers instead of one superbeast. The server PSUs aren't super expensive, relatively speaking; 1U hot-swap 1200W PSUs with 94% efficiency are like $100. The problem was that the power distribution board I had didn't have GPU power connectors, only CPU power connectors, and tired me wasn't going to accept no for an answer and thus let out the magic smoke. I got lucky: the distribution board seems to be the intended failure point in these things, so the expensive motherboard and components came through unscathed (I think; I never used the GPU, and it was just some cheap eBay thing). Still a fairly costly mistake that I should have avoided, but I was tired that night and wanted something to just work out.
That's quite interesting. I would have thought that they were more expensive than that. I've been there too. You're doing a bunch of stuff, tired and just want it to somehow work. What have you been doing with the build after that, if you don't mind me asking?
Was going to make it a sort of central computer that could centralize all the computing for several members of the family. Was hoping to get a basic laptop that could hook into the unit and play games/program on a virtual machine with graphics far above what the laptop could have handled, plus the aforementioned spin up of more machines for friends. Craft Computing had a lot of fun computing setups I wanted to learn and emulate. I would have also had the standard suite of video services and general tomfoolery. Maybe dip into crypto mining with idle time later on. Lots of ideas that somewhat fizzled out.
That sounds really interesting. I have some VMs set up in a similar way for family members, though they're very low power. They're mostly used to ease the transition from Windows to Linux. I hope you get to do it again sometime :)