[-] zarenki@lemmy.ml 3 points 1 week ago

The conditions that processors run under in military equipment and similar environments are drastically different from those of consumer devices. Consistency and stability matter more than performance in those contexts, so much so that real-time operating systems like VxWorks are popular in that space. Such systems would probably already have features like clock boosting disabled (or use processors that lack it entirely) in favor of a lower fixed clock speed, likely avoiding these issues altogether.

[-] zarenki@lemmy.ml 4 points 3 months ago

I tried to do this a while ago on a GNOME system, setting GDM to log me in automatically, but I always got prompted for my password by gnome-keyring shortly after logging in, which seemed to defeat the point. If you use GNOME, you might want to look at ArchWiki's gnome-keyring page, which describes a couple of solutions to this problem (under the PAM section) that should be applicable on any systemd distro.
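For reference, the PAM-based approach generally boils down to making sure pam_gnome_keyring.so gets invoked in the relevant /etc/pam.d/ file. This is only a sketch of the kind of lines involved; the exact file to edit, and whether it even helps in the auto-login case, is covered on that ArchWiki page:

# e.g. in /etc/pam.d/login or the display manager's PAM file (check the wiki first)
auth       optional     pam_gnome_keyring.so
session    optional     pam_gnome_keyring.so auto_start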

[-] zarenki@lemmy.ml 7 points 3 months ago

For years I've been using KeepassXC on desktop and Keepass2Android on mobile. Rather than syncing the kdbx file between my devices, I have each device access it over the network via sftp, smb, or nfs. Regardless of the protocol, I need to connect to my home's VPN to reach it when away from home, since I don't expose any of those services directly to the outside world.
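On the desktop side that can be as simple as mounting the remote share before opening the database. A rough sketch, with a made-up hostname and paths and assuming sshfs is installed:

# Mount the home server's share over SFTP, then open the database from the mount point
sshfs user@home-server:/srv/keepass ~/mnt/keepass
keepassxc ~/mnt/keepass/passwords.kdbx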

I also used to keep a second copy of the website-tied passwords in Firefox Sync, but recently tried migrating that to Proton Pass because I thought the PIN feature might help, then ultimately decided to move away from that too and start using the KeepassXC-Browser extension instead. I considered Bitwarden as well but haven't tried it yet; I was somewhat deterred by people saying its UI feels very outdated.

[-] zarenki@lemmy.ml 7 points 4 months ago

There's only one case I've found where Wi-Fi use seems acceptable in IoT: ESPHome. It's open-source firmware for microcontrollers that makes DIY IoT sensors and controls accessible over the LAN without phoning home to some remote server, without trying to make anything reachable from the Internet, and without breaking in any way if the device has no route to the Internet.

I still wouldn't call Wi-Fi ideal even there; mesh can help in larger homes, and Z-Wave/Zigbee radios tend to be more power efficient, though the ESP32 isn't exactly suited for a battery-powered device that's expected to run 24/7 regardless.
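For anyone curious what that looks like, here's a minimal sketch of an ESPHome config. The device name, board, and sensor wiring are made up; the api: block is what keeps it LAN-only (local Home Assistant API, no cloud service):

esphome:
  name: livingroom-sensor

esp32:
  board: esp32dev

wifi:
  ssid: "HomeNetwork"
  password: "changeme"

# Local API for Home Assistant over the LAN; nothing leaves the network
api:

sensor:
  - platform: dht
    pin: GPIO4
    model: DHT22
    temperature:
      name: "Living Room Temperature"
    humidity:
      name: "Living Room Humidity"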

[-] zarenki@lemmy.ml 4 points 4 months ago

> as soon as the BIOS loaded and showed the time, it was "wrong" because it was in UTC

Because you don't use Windows. Windows by default stores local time, not UTC, in the RTC. This behavior can be overridden with a registry tweak. Some Linux distro installers (at least Ubuntu and Fedora, maybe others) will try to detect whether the system has an existing Windows install and mimic this behavior if one exists (equivalent to timedatectl set-local-rtc 1), and otherwise default to storing UTC, which is the saner choice.
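For reference, the usual fix on each side looks roughly like this. The registry value is the commonly documented RealTimeIsUniversal tweak, but double-check it against current documentation for your Windows version before relying on it:

# Linux: store UTC in the RTC (the default on most distros)
timedatectl set-local-rtc 0

# Windows (admin prompt): treat the RTC as UTC instead of local time
reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1 /f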

Storing local time on a computer with more than one bootable OS becomes a particularly noticeable problem in regions that observe DST, because each OS will try to shift the RTC by one hour on its first boot after the time change.

[-] zarenki@lemmy.ml 6 points 4 months ago

They say the reason for needing their bridge is the encryption at rest, but I feel like the better way to push email privacy forward would be to publish (or better yet, coordinate with other groups on drafting) a public standard that both clients and competing email servers could adopt: an email syncing protocol for that sort of zero-access encryption, where users have to give their client a key file. A bridge would be easier to swallow as a fallback option until there's wider client support, rather than as the only way.

A similar standard for server-to-server communication, like automatic PGP key negotiation, would be nice too.

Still, Proton has an easy-to-access data export that doesn't require a bridge client or a subscription or anything. I think that's required by GDPR. It's too manual to be an effective way to keep up-to-date backups in case you ever abruptly lose access, but it's good enough for migrating to another provider.

[-] zarenki@lemmy.ml 3 points 4 months ago

Something I've noticed that is related but tangential to your problem: with compose files, containers and volumes get assigned names that share a common prefix by default. I don't use docker and instead prefer podman, but I would expect both to behave the same on this front. For example, when I have a file at nextcloud/compose.yml that looks like this:

volumes:
  nextcloud:
  db:

services:
  db:
    image: docker.io/mariadb:10.6
    ...
  app:
    image: docker.io/nextcloud
    ...

I end up with volumes named nextcloud_nextcloud and nextcloud_db, and containers named nextcloud_db and nextcloud_app, as long as neither of those services overrides this behavior by specifying a container_name. I believe the prefix comes from the file-level name: key if there is one, and from the parent directory's name otherwise.

The reasons I adjust my own compose files away from the image maintainer's recommendation include accommodating the differences between podman and docker, avoiding conflicts between published listen ports, setting whatever host filesystem paths I want to mount in the container, and my own preferences. The only conflict I've actually hit with other containers is the published port: zigbee2mqtt, nextcloud, and freshrss all suggest using port 8080, so I had to change at least two of them in order to run all three.
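As an illustration of the port part, remapping just means changing the host side of the ports: entry; the service name and image here are placeholders, and the container's internal port stays whatever the image expects:

services:
  app:
    image: docker.io/example/webapp
    ports:
      - "8081:8080"   # host port 8081 -> container port 8080, avoiding a clash with another service on 8080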

[-] zarenki@lemmy.ml 4 points 4 months ago

I recommend giving dnf the -C (--cacheonly) flag for most operations, particularly those that don't involve downloading packages. The default behavior is roughly equivalent to always passing pacman its -y flag, so the metadata sync ends up slowing everything down by orders of magnitude.
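For example (queries run from the existing local cache; anything that actually installs will still refresh metadata as needed):

# Search and inspect packages without hitting the network
dnf -C search firefox
dnf -C info firefox

# Leave -C off when you actually want fresh metadata
sudo dnf upgrade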

[-] zarenki@lemmy.ml 3 points 4 months ago

On Switch, no game cards support writing save data or anything else. That's a departure from the 3DS and every previous Nintendo cart format going back as far as games have supported saving at all.

That change was probably made to help tie saves to user accounts, enable cloud saves even when the card isn't inserted, accommodate variable-size user data features like level creation, and mitigate the risk of save-based exploits like the Twilight Hack spreading from user to user.

Unfortunately that (plus the inability to put saves on the SD card) means backing up your own save data requires either being able to run homebrew on the system that holds the save, or having another Switch that can and relying on Nintendo's servers to perform the transfer. Either way, having a Switch that runs homebrew means you don't need this dumper.

[-] zarenki@lemmy.ml 4 points 5 months ago

The main reason people use Fandom in the first place is the free hosting. Whether you use MediaWiki or any other wiki software, paying for the server resources to host your own instance and taking the time to manage it is still a tall hurdle for many communities. There are already plenty of MediaWiki instances for specific interests that aren't affected by Fandom's problems.

Even so, federation tends to foster a culture of more self-hosting and less centralization, encouraging more people who have the means to host to do so, though I'm not sure how applicable that effect would be to wikis.

[-] zarenki@lemmy.ml 5 points 6 months ago

I never liked playing DS games on a 3DS because of the blurry screen: DS games run at 256x192 while the 3DS stretches that out to 320x240, a non-integer 1.25x scale in each dimension. Scaling by a non-integer factor at such low resolutions is incredibly noticeable.

The DSi (and DSi XL) can similarly be softmodded with nothing but an SD card, though using a DS Lite with a flashcart instead can enable GBA-slot features in certain DS games, including Pokemon.

[-] zarenki@lemmy.ml 6 points 6 months ago

If you're planning to subscribe to Proton Unlimited or Proton Family regardless, you might as well try Proton Drive. It aims to be fairly privacy-focused, similar to Proton's other products.

Mega has a similar privacy-oriented design, such that the server side shouldn't have direct access to your unencrypted file data or its decryption keys.

Still, any web-based service necessitates trusting the JavaScript you receive not to leak your password or keys. Both Proton and Mega have a good track record so far in that regard, but the best practice for privacy with raw data storage is to encrypt your own data with local tools and treat any remote server as untrusted.
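One way to do that last part (just a sketch; the file names are placeholders, and any local encryption tool you trust works equally well):

# Bundle and encrypt locally, then upload only the encrypted file
tar -czf documents.tar.gz ~/Documents
gpg --symmetric --cipher-algo AES256 documents.tar.gz   # prompts for a passphrase, writes documents.tar.gz.gpg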
