[-] chameleon@fedia.io 3 points 5 days ago

My dotfiles aren't distro-specific because they're symlinks into a git repo (or tarball) + a homegrown shell script to make them, and that's about the end of it.
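The script really is nothing special. A minimal sketch of the idea (made-up paths, not my actual script):

```sh
#!/bin/sh
# Sketch only: symlink every file under ~/dotfiles into $HOME,
# mirroring the repo's directory structure.
set -eu
DOTFILES="$HOME/dotfiles"   # made-up repo location

cd "$DOTFILES"
find . -type f ! -path './.git/*' | while read -r f; do
    target="$HOME/${f#./}"
    mkdir -p "$(dirname "$target")"
    ln -sfn "$DOTFILES/${f#./}" "$target"
done
```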

My NixOS configuration is split between must-have CLI tools, nice-to-have CLI tools, hardware-related CLI tools, and GUI tools. It works as a suitable reference for non-Nix distros too, and even has a few comments noting what the package names are elsewhere, but installation is ultimately still manual.

[-] chameleon@fedia.io 20 points 6 days ago

It's absolutely not the case that nobody was thinking about computer power use. The Energy Star program had been around for roughly 15 years at that point and even had an EU-US agreement, and that was sitting alongside the EU's own energy program. Getting an 80Plus-certified power supply was already common advice for anyone custom-building a PC, and PC builders were by far the primary group doing Bitcoin mining before it got any kind of mainstream attention. And the original Bitcoin PDF includes the phrase "In our case, it is CPU time and electricity that is expended.", despite not going in-depth (it doesn't go in-depth on anything).

The late 00s weren't the late 90s, when the most common OS in use didn't support CPU idle without third-party tooling hacking it in.

[-] chameleon@fedia.io 19 points 1 week ago

Eh, no. "I'm going to make things annoying for you until you give up" is literally already happening; Titanfall and the like suffered from it hugely. "I'm going to steal your stuff and sell it" is a tale as old as time; warez CDs used to be commonplace, and it's generally avoided by giving people a way to buy your thing and giving people who bought it a way to access it. The situation where a third party profits off your game is more likely to happen if you don't release server binaries! For example, the WoW private/emulator server scene had a huge problem with people hoarding scripts, backend systems and bugfixes, which is one of the reasons hosted servers could get away with fairly extreme P2W.

And he seems to completely misunderstand what happens to IP when a studio shuts down. Whether it's a bankruptcy or a planned closure, the IP gets sold off just like a company-owned laptop would, and the new owner of the rights can enforce them if they think it's worthwhile. Orphan works/"abandonware" can happen, just like they can to non-GaaS games and movies, but that's a horrible failing on the part of the company.

[-] chameleon@fedia.io 12 points 2 weeks ago

Releasing server binaries (nobody in the context of this petition is asking for source code) is one option. A single-player mode is another. Everything you'd wanna know is on https://www.stopkillinggames.com/ . Exact wording of laws and the like comes in a later phase; as with every initiative ever, it will be up to the lawmaking body to write that.

[-] chameleon@fedia.io 14 points 2 weeks ago

Browsing through the PDF, I'm getting the vibe that their way of measuring "skill" is weird. They claim to use multiple measurement methods and list a few obvious ones they've found to be bad, but they don't say which ones they are actually using, because "we are constantly iterating on our performance metrics to optimize the player experience per game-mode".

Elo-like systems tend to adjust skill based on (chance of winning the current match) × (actual win/loss), but they're not (just) doing that. I wonder if they have a few weird metrics that look good on paper/in the lab but don't feel good in play.
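For reference, the textbook Elo update, which is not necessarily what they're doing:

```latex
% Expected score of player A against player B, then the post-match update:
E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}, \qquad
R_A' = R_A + K\,(S_A - E_A)  % S_A = 1 for a win, 0 for a loss
```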

[-] chameleon@fedia.io 52 points 3 weeks ago

Requiring agreement to some unspecified, ever-changing terms of service in order to use the product you just bought, especially when such products are required to function in the modern world. Google and Apple in particular can trivially deny any non-technical person access to smartphones and many of the things tied to them, like mobile banking. Microsoft is heading the same way with Windows requiring MS accounts, though they're not completely there yet.

[-] chameleon@fedia.io 17 points 3 weeks ago

Eh. I've been on the receiving end of one of those inboxes and the spam is absolutely, utterly unbearable. Coming up with a better system than a publicly listed email address is on Google at this point, because there is no reasonable way to provide support when you need a spam filter tuned up to such a level that all legitimate mail also ends up in spam.

[-] chameleon@fedia.io 12 points 3 weeks ago

My suggestion is to use system management tools like Foreman. It has a "content views" mechanism that can do more or less what you want; there's a bunch of other tools along the same lines, like Uyuni. Those tools have a lot of features, so they might be overkill for your case, but a lot of those features will probably end up useful anyway if you have that many hosts.
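If you go the Foreman/Katello route, the content-view flow looks roughly like this (going from memory here, so treat the exact hammer subcommands and flags as approximate):

```sh
# From memory; double-check flags against your Foreman/Katello version.
hammer content-view create --organization "MyOrg" --name os-updates
hammer content-view publish --organization "MyOrg" --name os-updates
hammer content-view version promote --organization "MyOrg" \
    --content-view os-updates --to-lifecycle-environment Test
```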

With the way Debian/Ubuntu APT repos are set up, if you take a copy of /dists/$DISTRO_VERSION as downloaded from a mirror at any given moment and serve it to a particular server, apt update && apt upgrade will install those exact versions, provided the actual package files in /pool are still available. You can set up caching proxies for that.
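Concretely, freezing a set of versions can be as simple as this (hypothetical paths and URLs):

```sh
# Grab the index metadata for bookworm as it exists right now
# and keep it under a dated path:
rsync -a rsync://deb.example.org/debian/dists/bookworm/ \
    /srv/apt-snap/2024-06-01/dists/bookworm/

# Point a host at the frozen indexes; /pool requests go through a
# caching proxy so the referenced .debs stay fetchable:
echo 'deb http://apt-snap.internal/2024-06-01 bookworm main' \
    > /etc/apt/sources.list.d/snapshot.list
```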

I remember my DIY hodgepodge a decade ago ultimately just being a daily cronjob. It pulled in the current distro (let's say bookworm) and its associated -updates and -security repos from an upstream rsync-capable mirror, then, after checking a killswitch and making sure things weren't currently on fire, it did rsync -rva tier2 tier3; rsync -rva tier1 tier2; rsync -rva upstream/bookworm tier1. Machines were configured to pull updates from tier1 (first 20%), tier2 (second 20%) or tier3 (the rest) on a regular basis. The files in /pool were served by apt-cacher-ng, but I don't know if that's still the cool option nowadays (you will need some kind of local caching for those, as old files may disappear without notice).
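The whole thing was roughly this shape (reconstructed from memory; the mirror URL and paths are made up, and the -updates/-security repos are elided):

```sh
#!/bin/sh
set -eu
cd /srv/mirror

# Killswitch: touch this file to pause the whole pipeline.
[ ! -e STOP ] || exit 0

# Pull a fresh copy of the release's index metadata from upstream.
rsync -a rsync://mirror.example.org/debian/dists/bookworm/ upstream/bookworm/

# Promote the oldest tier first, so each tier lags the one before it
# by a day. (Trailing slashes matter to rsync, and in practice you
# probably want --delete as well.)
rsync -rva tier2/ tier3/
rsync -rva tier1/ tier2/
rsync -rva upstream/bookworm/ tier1/
```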

[-] chameleon@fedia.io 33 points 3 weeks ago

Realistically, immutability wouldn't have made a difference. Definition updates like this are generally not considered part of the provisioned OS (since they change somewhere around hourly) and would go into /var or the like, which is mutable persistent state on nearly every otherwise-immutable OS. Snapshot tools like Timeshift are more likely to help.

[-] chameleon@fedia.io 14 points 4 weeks ago

A company offering new-age antivirus solutions, which is to say that instead of being mostly signature-based, it looks at application behavior instead. If some user opened not_a_virus_please_open.docx from their spam folder, Word might get exploited and end up running malware that tries to encrypt the entire drive. The product is supposed to sniff out that 1. Word normally opens and saves about one document at a time and 2. some unknown program is suddenly being overly active, and it should then stop that program and ring some very loud alarm bells at the IT department.

Basically, they doubled down on heuristics-based detection, and on that basis they claim to be able to recognize and stop all kinds of new malware they haven't seen yet. My experience is that they're always the outlier at the top end of false positives in business AV tests (e.g. AV-Comparatives Q2 2024), and their advantage has mostly disappeared now that every AV has implemented that kind of behavior-based detection.

[-] chameleon@fedia.io 18 points 4 weeks ago

All GPUs released since the RTX 2000 series are supported, and all new GPUs will most likely be supported as well, especially with this announcement saying they're committed to it. There's a support list on their GitHub, and it includes all the weird little things you'd be worried about; even silly little laptop chips like the new RTX 500 are on it.

I think the only reason they limited GPU support is that the older ones physically don't have the hardware for this approach; they switched to their newer RISC-V "GSP" processors with the RTX line. In the new open module, all of their proprietary "secret sauce" was shoved off to firmware running on that GSP. Previously, their proprietary kernel module loaded that same secret sauce as a gigantic obfuscated blob running on your normal CPU instead. The Windows side of their driver has also been moving towards using the GSP; they even advertised that it boosts performance, and I can believe it.

That said, with this new stuff, the official Nvidia userland portions providing Vulkan/OpenGL/CUDA support and the like are still proprietary, so it's still worse than AMD in that regard. But at least it's now possible to replace those bits, and Mesa/NVK are working on getting Vulkan up and running (NVK is supposedly getting pretty damn good, and Mesa's OpenGL-on-Vulkan is solid too, so OpenGL comes along for free).
