this post was submitted on 11 Oct 2025
24 points (100.0% liked)

technology

So I'm helping a local tech non-profit refurbish some old Chromebooks for distribution to halfway houses and to immigrants who need computer access for legal matters. The current need is basically a rock-solid platform for browsing websites, reading email, and editing mostly shared Google Docs.

Issue is that the hardware is no longer supported by Google.

We've gone ahead and flashed Coreboot on all 40 devices and have settled on Fedora Onyx (the Atomic spin with the Budgie desktop).

We need to install some flatpaks on each machine and set up a base configuration. Easy enough with rpm-ostree and some manual configuration, but I was wondering if anyone here has more experience managing the atomic distros.
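A sketch of what the per-machine flatpak step could look like, wrapped in a function so it can be dropped into a first-boot script. The app IDs are just examples, not an actual list:

```shell
# First-boot flatpak setup, sketched as a function.
# App IDs below are examples only -- swap in whatever the users need.
setup_flatpaks() {
  # add Flathub system-wide if the image doesn't already ship it
  flatpak remote-add --if-not-exists --system flathub \
    https://dl.flathub.org/repo/flathub.flatpakrepo
  # system-wide installs so every local account sees the apps
  flatpak install --system -y flathub \
    org.mozilla.firefox org.gnome.Evince
}
# run as root on each machine: setup_flatpaks
```

Because flatpaks install under /var, they survive rpm-ostree updates and rebases, so this only needs to run once per machine.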

Basically I want it so the volunteers just need to plug in a USB installer stick and get a fully set-up instance. Is there an easy way to take a tree and transfer it to another machine without using something like Clonezilla? I'm assuming we could just maintain an image and rebase to that after installing, but I'm not fully aware of the easiest way to accomplish that.
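For what it's worth, the "maintain an image and rebase" idea maps onto the OSTree native container workflow: build a derived image, push it to a registry, and rebase each machine to it. A rough sketch, where the base tag, registry, package, and copied files are all placeholders:

```dockerfile
# Hypothetical Containerfile -- base image tag and package are assumptions.
FROM quay.io/fedora/fedora-onyx:41

# bake shared configuration into the image (files layered under /etc)
COPY etc/ /etc/

# layer any extra RPMs every machine needs, then commit the layer
RUN rpm-ostree install cockpit && ostree container commit
```

After a `podman build` and `podman push` to whatever registry you use, each freshly installed machine would switch over with `rpm-ostree rebase ostree-unverified-registry:<registry>/<image>:<tag>` and a reboot; from then on, updating the fleet is just pushing a new image.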

you are viewing a single comment's thread
view the rest of the comments
[–] stupid_asshole69@hexbear.net 2 points 4 months ago

Another way to completely rewrite the eMMC, which is often the fix for slow flash, is `dd if=/dev/urandom of=/dev/[your device id]`.
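To make that harder to fat-finger, here's the same idea with a small guard, sketched under the assumption that the target is an unmounted eMMC node like /dev/mmcblk0. It still destroys everything on the device:

```shell
# Sketch of the full-device rewrite. The device path is a placeholder,
# and every byte on the target is destroyed.
rewrite_flash() {
  dev="$1"
  # refuse to touch anything that's currently mounted
  if grep -q "^$dev " /proc/mounts; then
    echo "refusing: $dev is mounted" >&2
    return 1
  fi
  # stream random data across the whole device; a large block size keeps
  # the controller rewriting full erase blocks, conv=fsync flushes at the end
  dd if=/dev/urandom of="$dev" bs=4M conv=fsync status=progress
}
# as root: rewrite_flash /dev/mmcblk0
```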

The flash cell loses charge over time and takes more operations to read correctly, so you "see" slower reads and writes. It can happen even when reading newly written information, because the controller isn't reading each bit individually, but whole blocks. So if the controller decides to shove your new 200-byte file into the same 2048-byte block as some old information whose pointer got dropped, then it's still going to be slow to access that new file, because the integrity mechanisms are designed around the medium's physical block size. Rewriting all of it at once fixes the problem.

It's often worth spending one of your minimum ten thousand write cycles to get normal, predictable I/O performance back. Also, going over your rated write cycles just means the flash will lose data integrity faster when powered off. I have a scratch device on my server built from many old SSDs that is completely fine despite coming up on a million-plus write cycles, because I'm not relying on it to endure being turned off for a month.

It may be worthwhile to work out how to do a swap when someone brings you a non-working unit. Then you can hand over a new one, make sure they're able to use it and have their files or access to their passwords (Bitwarden?) or whatever, then handle the returned unit on your own time.