this post was submitted on 11 Oct 2025
24 points (100.0% liked)

technology


So I'm helping a local tech non-profit refurbish some old Chromebooks for distribution to halfway houses and immigrants who need computer access for legal matters. The current need is basically a rock-solid platform for getting to websites, reading email, and editing mostly shared Google Docs.

Issue is that the hardware is no longer supported by Google.

We've gone ahead and flashed Coreboot on all 40 devices and have settled on Fedora Onyx (an atomic distro with the Budgie desktop).

We need to install some flatpaks on each machine and set up a base configuration. Easy enough with rpm-ostree and some manual configuration, but I was wondering if anyone here has had more experience with managing the atomic distros.

Basically I want it so the volunteers just need to plug in a USB installer stick and get a fully set-up instance. Is there an easy way to take a tree and transfer it to another machine that isn't using something like Clonezilla? I'm assuming we could just maintain an image and rebase to that after installing, but I'm not fully aware of the easiest way to accomplish that.
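The maintain-an-image-and-rebase idea above is roughly what Fedora's OSTree-native-container workflow does: build one OCI image with your packages layered in, push it to a registry, and rebase every machine to it. A minimal sketch, assuming the Fedora-maintained `quay.io/fedora-ostree-desktops/onyx` base image (worth verifying against the release you're targeting) and a placeholder package:

```dockerfile
# Sketch of a fleet image for rpm-ostree rebasing. The base image tag and
# the layered package are assumptions — swap in your own.
FROM quay.io/fedora-ostree-desktops/onyx:41

# Layer any RPMs the whole fleet needs, then tidy the container for OSTree.
RUN rpm-ostree install tlp && \
    ostree container commit
```

After building and pushing with podman, each freshly installed machine would rebase with something like `rpm-ostree rebase ostree-unverified-registry:registry.example.org/fleet/onyx:stable` (hypothetical registry path); subsequent image pushes then arrive as ordinary atomic updates.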

you are viewing a single comment's thread
view the rest of the comments
[–] stupid_asshole69@hexbear.net 2 points 4 months ago (1 children)

Atomic isn’t the direction I’d go with it, you’re likely better off with a dd script.
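The "dd script" approach the comment suggests can be sketched in a few lines: write a golden image to the target device, then read back and checksum to verify. This is a hedged sketch, not a tested deployment script — `clone_and_verify` and its arguments are made up for illustration, and in production the target would be something like `/dev/mmcblk0`:

```shell
# Clone a golden image onto a target block device, then verify it.
# Usage: clone_and_verify <image-file> <target-device>
clone_and_verify() {
    image=$1
    target=$2
    # 4 MiB blocks keep syscall overhead and eMMC write amplification down;
    # conv=fsync makes dd flush before reporting success.
    dd if="$image" of="$target" bs=4M conv=fsync status=none
    # Read back exactly the image's length and compare checksums.
    size=$(wc -c < "$image")
    src=$(sha256sum "$image" | cut -d' ' -f1)
    dst=$(head -c "$size" "$target" | sha256sum | cut -d' ' -f1)
    [ "$src" = "$dst" ]
}
```

A thin wrapper like this is easy for volunteers to run from a live USB: boot, plug in, run one command, wait for "verified".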

Some of your eMMC speed problems will go away after doing badblocks -wsv -b [your block size as reported by smartctl], or SpinRite level 3+ if you're able. The reason is that flash sucks and gets worse over time until it's rewritten, but everyone designed the controllers to be write-averse, so now they fiddlefuck around trying to read data that a sane controller/OS would rewrite in idle time.
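The badblocks step above can be wrapped so the block size is pulled automatically; `blockdev --getpbsz` reports the physical block size, which is an alternative to reading it out of smartctl. A sketch, assuming a hypothetical `rewrite_flash` helper — note the `-w` mode ERASES the device:

```shell
# Destructively rewrite a flash device to restore read performance.
# Usage: rewrite_flash <device>   (DESTROYS all data on <device>)
rewrite_flash() {
    dev=$1
    bs=$(blockdev --getpbsz "$dev")   # physical block size in bytes
    badblocks -wsv -b "$bs" "$dev"    # write-mode scan: pattern-writes every block
}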

Is your operational model “put all the old hardware back out into the world and then go away” or more like “provide minimal support to these devices once they’re issued”?

[–] invalidusernamelol@hexbear.net 1 points 4 months ago* (last edited 4 months ago) (1 children)

Providing minimal support is a huge aspect. It's a small shop, and large orders like this aren't uncommon. I'll definitely look into optimizing the eMMC, as that's a huge bottleneck. The primary goal is always to eke out as much life as possible from devices that were slated for the landfill and to provide minimal working solutions for free, or as close to free as possible.

These Chromebooks are veritable e-waste, and making it so we can get some last-mile usage out of them while having a system that's moderately fault-tolerant (Btrfs is good for the unreliability of the eMMC) is key. Plus the A/B updates mirror normal Chromebook functionality.

The atomic style with Flatpak also makes it really hard for an inexperienced end user to fully bork their system, since the base image and root are read-only. Having all the user files in a separate volume also means it's trivial to migrate them to a new machine or wipe an old one. This is essentially an experiment at this point, but we've had a ton of very positive feedback from people about Linux. All the elderly people find it easier to use since they aren't constantly being pushed notifications and spyware. Plus the atomic updates mean they don't have to worry about manually running apt/dnf upgrade to get updates; the whole process is just handled automatically in the background.
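The Flatpak base setup described above can be scripted as part of provisioning: add Flathub system-wide, then install a fixed app list. A minimal sketch — the app IDs are examples of what a web/email/docs machine might want, and the `FLATPAK` override is an invented hook so the script can be dry-run without touching a real system:

```shell
# Install the fleet's base Flatpaks system-wide from Flathub.
# Set FLATPAK=echo to dry-run and just print the commands.
install_base_flatpaks() {
    flatpak=${FLATPAK:-flatpak}
    "$flatpak" remote-add --if-not-exists --system flathub \
        https://dl.flathub.org/repo/flathub.flatpakrepo
    for app in org.mozilla.firefox org.mozilla.Thunderbird; do
        "$flatpak" install --system --noninteractive flathub "$app"
    done
}
```

Dropping a script like this into the installer image (or a one-shot first-boot systemd unit) keeps the volunteer workflow at "plug in the stick and wait".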

[–] stupid_asshole69@hexbear.net 2 points 4 months ago

Another way to completely rewrite the eMMC, which is often the solution to slow flash, is dd if=/dev/urandom of=/dev/[your device id].

The flash cell loses charge over time and takes more operations to read correctly, so you “see” slower reads and writes. It can happen even when reading newly written information, because the controller isn't just reading each bit, but whole blocks. So if the controller decided to shove your new 200-byte file into the same 2048-byte block as some old information whose pointer got dropped, then it's still gonna be slow to access that new file, because the integrity mechanisms are designed around the medium's physical block size. Rewriting all of it at once fixes the problem.

It's often worth losing one of your minimum ten thousand write cycles to get normal, predictable I/O performance back. Also, going over your rated write cycles just means the flash will lose data integrity faster when powered off. I have a scratch device on my server made from many old SSDs that is completely fine despite coming up on a million-plus write cycles, because I'm not relying on it to endure being turned off for a month.

It may be worthwhile to work out how to do a swap when someone brings you a non-working unit. Then you can hand over a new one, make sure they're able to use it and have their files or access to their passwords (Bitwarden?) or whatever, then handle the returned unit on your own time.