traches

joined 2 years ago
[–] traches@sh.itjust.works 3 points 17 hours ago

Yeah, Syncthing can do all of that except public share links. Run an instance on your NAS so there is always a sync target online.

[–] traches@sh.itjust.works 4 points 1 day ago* (last edited 1 day ago) (2 children)

I strongly recommend ZFS as the filesystem for this, as it can handle your sync, backup, and quota needs very well. It also has data integrity guarantees that should frankly be table stakes in this application. TrueNAS is an easy way to accomplish this, and it can run Docker containers and VMs if you like.
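Per-person quotas, snapshots, and replication are all one-liners in ZFS. A rough sketch, assuming a pool named tank and hypothetical dataset names and hosts (adjust everything to your setup):

```shell
# One dataset per person, each with its own quota (names are examples):
zfs create -o quota=500G tank/users/alice
zfs create -o quota=500G tank/users/bob

# Snapshots are cheap and give you point-in-time rollback:
zfs snapshot tank/users/alice@2024-06-01

# Backups: ship only the delta between two snapshots to another box:
zfs send -i tank/users/alice@2024-05-01 tank/users/alice@2024-06-01 \
  | ssh backup-host zfs receive backuppool/alice
```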

Tailscale is a great way to connect them all, and to connect to your NAS when you aren’t home. You can share devices between tailnets, so you don’t all have to be on the same Tailscale account.

I’ll caution against Nextcloud: it has a zillion features, but in my experience it isn’t actually that good at syncing files. It’s complicated to set up, complicated to maintain, and there are frequent bugs. Consider just using SMB file sharing (built into TrueNAS), or an application that only syncs files without trying to be an entire office suite as well.

For your drive layouts, I’d go with big drives in a mirror. This keeps your power and physical space requirements low. If you want, ZFS can also transparently put metadata and small files on SSDs for better latency and less drive thrashing. (These should also be mirrored.) Do not add an L2ARC drive, it is rarely helpful.
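As a sketch of that layout (device paths and the 64K cutoff are examples — check everything against your actual hardware before running it, since zpool create is destructive):

```shell
# Two big drives mirrored for data, plus two SSDs mirrored as a
# "special" vdev that holds metadata (device names are hypothetical):
zpool create tank \
  mirror /dev/disk/by-id/ata-BIG-1 /dev/disk/by-id/ata-BIG-2 \
  special mirror /dev/disk/by-id/nvme-SSD-1 /dev/disk/by-id/nvme-SSD-2

# Also route files up to 64K to the SSDs, not just metadata:
zfs set special_small_blocks=64K tank
```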

The boxes are kinda up to you. Avoid USB enclosures if at all possible. TrueNAS can be installed on most prebuilt NAS boxes other than Synology, presuming they meet the requirements. You can also build your own. Hot swap is nice, and a must-have if you need normies to work on it. Label each drive’s serial number on the outside so you can tell them apart. Don’t go for fewer than 4 bays, and more is better even if you don’t need them yet. You want as much RAM as feasibly possible; ZFS uses it for caching, and it gives you room to run containers and VMs.
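For the labeling, something like this prints the serials to copy onto the stickers (the /dev/sda path is just an example):

```shell
# List each whole disk with its size, model, and serial number:
lsblk -d -o NAME,SIZE,MODEL,SERIAL

# Or query a single drive directly (requires smartmontools):
smartctl -i /dev/sda | grep -i 'serial'
```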

[–] traches@sh.itjust.works 130 points 1 day ago (6 children)

Damn, if they had PII in a public bucket like that it’s criminally negligent. Well, at least it should be, but I’m no lawyer.

[–] traches@sh.itjust.works 4 points 3 days ago

God damn I laughed till I cried when I first saw this… would have been like 2007?

[–] traches@sh.itjust.works 1 points 5 days ago

It would be defined as part of the law, hopefully with something reasonable and robust.

Take genocide advocacy - it pretty clearly leads to people getting hurt even if we don’t know exactly who.

[–] traches@sh.itjust.works 1 points 5 days ago

They’d be lying if they present an „expert” who isn’t.

It just rubs me the wrong way that the only party with a claim against Fox News over the big lie was the voting machine company, for lost profits. We can at least solve the standing issue.

[–] traches@sh.itjust.works 1 points 6 days ago (4 children)

In this context I pretty much mean advocating for genocide or fascism. That and I don’t think you should be able to lie out your ass and call it news.

[–] traches@sh.itjust.works 16 points 1 week ago (7 children)

I think it’s disingenuous to group freedom of thought with speech and expression. Limiting the first is impossible, while every country on earth limits the other two to some degree.

My personal opinion is that you shouldn’t be able to hurt people in stupid, hateful, predictable ways.

[–] traches@sh.itjust.works 223 points 1 week ago (9 children)

I fucking hate algorithm speak so much

[–] traches@sh.itjust.works 10 points 1 week ago* (last edited 1 week ago) (1 children)

Does google let you ban pinterest?

[–] traches@sh.itjust.works 14 points 1 week ago (5 children)

If you use kagi, the AI summary is opt-in. Trigger it with a question mark at the end of your query. I like kagi.

[–] traches@sh.itjust.works 18 points 2 weeks ago

I mean if it’s worked without modification for 6 years….

 

I'm working on a project to back up my family photos from TrueNAS to Blu-ray discs. I have other, more traditional backups based on restic and zfs send/receive, but I don't like the fact that I could delete every copy using only the mouse and keyboard from my main PC. I want something that can't be ransomwared and that I can't screw up once created.

The dataset is currently about 2TB, and we're adding about 200GB per year. It's a lot of discs, but manageably so. I've purchased good quality 50GB blank discs and a burner, as well as a nice box and some silica gel packs to keep them cool, dark, dry, and generally protected. I'll be making one big initial backup, and then I'll run incremental backups ~monthly to capture new photos and edits to existing ones, at which time I'll also spot-check a disc or two for read errors using DVDisaster. I'm hoping to get 10 years out of this arrangement, though longer is of course better.

I've got most of the pieces worked out, but the last big question I need to answer is which software I will actually use to create the archive files. I've narrowed it down to two options: dar and bog-standard GNU tar. Both can create multipart, incremental backups, which is the core capability I need.

Dar Advantages (that I care about):

  • This is exactly what it's designed to do.
  • It can detect and tolerate data corruption. (I'll be adding ECC data to the disks using DVDisaster, but defense in depth is nice.)
  • More robust file change detection - it appears to be hash-based?
  • It allows me to create a database I can use to locate and restore individual files without searching through many disks.

Dar disadvantages:

  • It appears to be a pretty obscure, generally inactive project. The documentation looks straight out of the early 2000s and the site doesn't even have HTTPS. I worry it will go offline, or that I'll run into some weird bug that ruins the show.
  • Doesn't detect renames. Will back up a whole new copy. (Problematic if I get to reorganizing)
  • I can't find a maintained GUI project for it, and my wife ain't about to learn a CLI. Would be nice if I'm not the only person in the world who could get photos off of these disks.

Tar Advantages (that I care about):

  • battle-tested, reliable, not going anywhere
  • It's already installed on every single Linux and Mac machine, and it's trivial to put on a Windows PC.
  • Correctly detects renames, does not create new copies.
  • There are maintained GUIs available; non-nerds may be able to access the backups.

Tar disadvantages:

  • I don't see an easy way to locate individual files, beyond grepping through snar metadata files (that aren't really meant for that).
  • The file change detection logic makes me nervous - it appears to be based on modification time and inode numbers. The photos are in a ZFS dataset on TrueNAS, mounted on my local machine via SMB. I don't even know what an inode number is; how can I be sure they won't change somehow? Am I stuck with this exact NAS setup until I'm ready to make a whole new base backup? This many Blu-rays isn't cheap, and burning them will take a while - I don't want to do it unnecessarily.
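The tar equivalent, as far as I can tell (names are placeholders; saving the verbose listing into an index file is my workaround for the locate-a-file problem, and split is one way to get disc-sized pieces, though the parts have to be concatenated before restore):

```shell
# Full backup; the .snar file records state for later incrementals.
# The verbose listing doubles as a searchable per-archive index.
tar --create --verbose \
    --file=photos_full.tar \
    --listed-incremental=photos.snar \
    /mnt/photos > photos_full.index.txt

# Keep a copy of the snar before each incremental, so the chain can
# be restarted from any earlier point:
cp photos.snar photos.snar.2024-07
tar --create --verbose \
    --file=photos_2024-07.tar \
    --listed-incremental=photos.snar \
    /mnt/photos > photos_2024-07.index.txt

# Split into disc-sized pieces; restore needs `cat parts* > whole.tar`:
split -b 45G photos_full.tar photos_full.tar.
```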

I'm genuinely conflicted, but I'm leaning towards dar. Does anyone else have any experience with this sort of thing? Is there another option I'm missing? Any input is greatly appreciated!

 

I have a load-bearing Raspberry Pi on my network - it runs a DNS server, zigbee2mqtt, a UniFi controller, and a restic REST server. This Raspberry Pi, as is tradition, boots from a microSD card. As we all know, microSD cards suck a little bit and die pretty often; I've personally had this happen not all that long ago.

I'd like to keep a reasonably up-to-date hot spare ready, so when it does give up the ghost I can just swap them out and move on with my life. I can think of a few ways to accomplish this, but I'm not really sure what's the best:

  • The simplest is probably cron + dd, but I'm worried about filesystem corruption from imaging a running system - and couldn't this also wear out the spare card?
  • recreate partition structure, create an fstab with new UUIDs, rsync everything else. Backups are incremental and we won't get filesystem corruption, but we still aren't taking a point-in-time backup which means data files could be inconsistent with each other. (honestly unlikely with the services I'm running.)
  • Migrate to BTRFS or ZFS and send/receive snapshots. This would be annoying to set up because I'd need to switch the Pi's filesystem, but once done I think this might be the best option? We get incremental updates, point-in-time backups, and even rollback on the original card if I want it.

I'm thinking out loud a little bit here, but do y'all have any thoughts? I think I'm leaning towards ZFS or BTRFS.
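If I go the BTRFS route, I think the loop ends up being something like this (assuming root is a btrfs subvolume and the spare card is formatted btrfs and mounted at /mnt/spare; dates are placeholders):

```shell
# On the live Pi: take a read-only snapshot of the root subvolume:
btrfs subvolume snapshot -r / /.snapshots/root-2024-06-01

# First sync to the spare card sends the whole snapshot:
btrfs send /.snapshots/root-2024-06-01 | btrfs receive /mnt/spare

# Every sync after that only ships the delta:
btrfs subvolume snapshot -r / /.snapshots/root-2024-07-01
btrfs send -p /.snapshots/root-2024-06-01 /.snapshots/root-2024-07-01 \
  | btrfs receive /mnt/spare
```

The spare would still need its own boot partition, and the received snapshot made the default subvolume (btrfs subvolume set-default) or referenced in fstab, so that's part of the setup annoyance.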

 

Not sure about the artist, sorry

 