[-] BaumGeist@lemmy.ml 3 points 19 hours ago

Bitwarden. Hands down, the best decision I've made in regard to web safety was switching to a proper password manager.

Close second is uBlock Origin.

Also make sure to use Decentraleyes for an easy privacy boost.

I use NoScript, but that level of granular control isn't everyone's cup of tea

140
submitted 2 days ago by BaumGeist@lemmy.ml to c/linux@lemmy.ml

I occasionally see love for niche small distros, instead of the major ones...

And it just seems to me like there are more hurdles than help when it comes to adopting an OS whose users number in the hundreds or dozens. I can understand trying one for fun in a VM, but I prefer sticking to the bigger distros for my daily drivers, since they'll support more software and not be reliant on upstream sources, and any bugs or other issues are more likely to be documented and have workarounds/fixes.

So: What distro do you daily drive and why? What drove you to choose it?

[-] BaumGeist@lemmy.ml 1 points 2 days ago

My ideal form of government is one where everyone cooperates to build up society, and there may be leaders, but no one is owed obedience.

[-] BaumGeist@lemmy.ml 3 points 2 days ago

Nope. Especially now after having lived for a few years in a booming rehab community, seeing secondhand where addiction gets people.

I did coke a few times; it was okay. The high doesn't last long enough to justify the cost, and I was already jonesing for more the last time.

[-] BaumGeist@lemmy.ml 1 points 4 days ago* (last edited 4 days ago)

I have a tinkering laptop set up with Fedora; DNF is as simple as APT and friendlier, imo. I've switched to Nala (an APT wrapper that enables concurrent downloads) on my Debian PCs. YMMV.

Simply put: every distro needs its own package manager because the distros handle packages differently, from the way software is bundled and distributed, to where files reside in the filesystem.

E.g. APT is so friendly because of how rigid Debian is about the structure and info bundled within the .deb archive, which Pacman users tend to consider unnecessarily restrictive bloat that slows downloads and installs. Meanwhile, yay (and other AUR helpers) compiles packages from source.
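
(You can peek at that bundled metadata yourself; the filename here is just a placeholder:)

$ dpkg-deb --info ./some-package.deb      # control metadata: name, version, dependencies, maintainer scripts
$ dpkg-deb --contents ./some-package.deb  # every file in the package and where it lands in the filesystem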

There are some that work across distros, though, like Nix or Homebrew. Plus there's always Flatpak, AppImages, or (shudder) Snaps.

And of course, if you want people to think you're basically a programmer, there's always

$ git clone <git repository>
$ cd <git repository>
$ make
$ sudo make install

(for software that is packaged with a Makefile)

[-] BaumGeist@lemmy.ml 3 points 4 days ago

Looks like ext4 still reigns for raw speed, but that misses the point of ZFS, Btrfs, Bcachefs AND F2FS, which are all copy-on-write (or, in F2FS's case, log-structured) filesystems and aren't intended to outperform journaling filesystems on speed.

[-] BaumGeist@lemmy.ml 1 points 4 days ago

Another program that works on Windows: UNetbootin. I prefer it to Balena Etcher, but not as much as Rufus.

[-] BaumGeist@lemmy.ml 3 points 4 days ago

Even the manpage Telorand linked mentions it by name for non-interactive use.

Also, make sure you use the right program for the partition table: sgdisk is the right choice for GPT disks, sfdisk is for MBR.
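
E.g. something like this for scripted use (untested sketch; /dev/sdX is a placeholder, double-check the device first):

$ sudo sgdisk --zap-all --new=1:0:0 --typecode=1:8300 /dev/sdX   # GPT: wipe, then one Linux partition spanning the disk
$ echo 'type=83' | sudo sfdisk /dev/sdX                          # MBR: same idea, sfdisk reads its layout from stdin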

[-] BaumGeist@lemmy.ml 2 points 4 days ago* (last edited 4 days ago)

Use conv=fsync

This ensures the cache is flushed to the device before dd exits, but the writes still go through the page cache rather than directly to disk. That means, for small files, dd can finish and release its hold on the input file quicker.
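
E.g. (untested; /dev/sdX is a placeholder for the actual target device):

$ sudo dd if=image.iso of=/dev/sdX bs=4M conv=fsync status=progress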

[-] BaumGeist@lemmy.ml 2 points 4 days ago

iw dev <interface> station dump will show every metric about the connection, including the signal strength and average signal strength.

It won't show it as an ASCII graphic like nmcli does, but it shouldn't be hard to write a wrapper script that greps that info and converts it to a simplified output, if you're willing to put in the effort of understanding the dBm numbers.

E.g. -10 dBm is the maximum possible and -100 dBm is the minimum (for the 802.11 spec), but the scale is logarithmic so -90 dBm is 10x stronger than the absolute minimum needed for connectivity, and I can only get ~-20 dBm with my laptop touching the AP.

Basically my point is that the good ol' "bars" method of demonstrating connection strength was arbitrarily decided and isn't closely tied to connection quality. This way you get to decide what numbers you want to equate to a 100% connection.
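
If anyone wants a starting point, here's a rough sketch of that kind of wrapper (untested; assumes wlan0 and naively maps the -100..-10 dBm range onto a percentage):

#!/bin/sh
# grab the first 'signal:' line from iw's station dump and turn it into a rough percentage
dbm=$(iw dev wlan0 station dump | awk '/signal:/ {print $2; exit}')
[ -n "$dbm" ] || { echo "not connected"; exit 1; }
pct=$(( (dbm + 100) * 100 / 90 ))
echo "signal: ${dbm} dBm (~${pct}%)"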

[-] BaumGeist@lemmy.ml 5 points 4 days ago

I'm a big fan of the idea of efficient computing, and I think we'd see more power savings on the End User side through hardware. I don't need an intel i9-nteen50 and a Geforce 4090 to mindlessly ingest videos or browse lemmy. In fact, I could get away with doing that using less power than my phone uses; we really should move to the ARM model of low-power cores suitable for most tasks and performance cores that only turn on when necessary. Pair that with less bloatware and you're getting maximum performance per instruction run.

SoCs also have the benefit of power efficient GPU and memory, while standardizing hardware so programmers can optimize to the platform again instead of getting lost in APIs and driver bloat.

The only downside is the difficulty of upgrading hardware, but CPUs (and GPUs) are basically black boxes to the End User already, and no one complains about not being able to upgrade just the L1 cache (or VRAM).

Imagine a future where most end-user MOBOs are essentially just a socket for a socketed-SoC standard, some m.2 ports, and of course the PCI slots (with the usual hardwired ports for peripherals). Desktops/laptops would generate less waste heat, computers would use less electricity, graphical software development would be less of a fustercluck (imagine the man-hours saved), there'd be less e-waste (imagine not needing a new mobo for the new chipset if you want to upgrade your CPU after 5 years), and you'd be able to upgrade laptop processing units.

Of course the actual implementation of such a standard would necessarily get fuckered by competing interests and people who only want to see the numbers go up (both profit-wise and performance-wise) and we'd be back where we are now... But a gal can dream.

[-] BaumGeist@lemmy.ml 3 points 4 days ago

From an outsider's perspective (I haven't used Nix at all), the downsides I see are that it's extra software on top of the defaults for any given distro, it's not optimized for the distro (meaning it might pull in dependencies that already exist or not use distro-specific APIs/libs), and it doesn't adhere to the motivations of the distro (e.g. not adhering to the DFSG on Debian).

And of course, most of the packages are community maintained, and there's the immutability, which might be a hindrance for some use cases, but not for me.

All in all, not really the worst if you're not worried about space or squeezing out the absolute most performance, and not an ideologue, but it's enough to make me stick with APT. I chose Debian for its commitment to FOSS, not its stability or performance.

[-] BaumGeist@lemmy.ml 2 points 4 days ago

Currently virt-manager on top of qemu/kvm on Debian 12. It was the easiest way to get TPM emulation working on my ancient hardware (9ish years old, but still powerful).

I'm learning enough about the backend that I'm hoping to get off the Red Hat-maintained software and only use the qemu CLI, maybe write my own monitor with rust-vmm once I learn enough Rust to do so.
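
For the curious, the qemu-only version I'm working toward looks roughly like this (untested sketch; the TPM state dir, memory size, and disk image are placeholders):

$ mkdir -p /tmp/mytpm
$ swtpm socket --tpm2 --tpmstate dir=/tmp/mytpm --ctrl type=unixio,path=/tmp/mytpm/swtpm.sock &
$ qemu-system-x86_64 -enable-kvm -m 4G \
    -chardev socket,id=chrtpm,path=/tmp/mytpm/swtpm.sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -device tpm-tis,tpmdev=tpm0 \
    -drive file=disk.qcow2,if=virtio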

244
submitted 3 months ago by BaumGeist@lemmy.ml to c/196@lemmy.blahaj.zone
17
submitted 5 months ago by BaumGeist@lemmy.ml to c/videos@lemmy.world

It's the series finale for our friend Plague Roach. Big props to Drue for all the work he's put into this project

Here's the full series playlist on youtube

277
submitted 6 months ago by BaumGeist@lemmy.ml to c/196@lemmy.blahaj.zone
24
submitted 9 months ago by BaumGeist@lemmy.ml to c/linux@lemmy.ml

I've been using nala on my debian-based computers instead of apt, mostly for the parallel downloads, but also because the UI is nicer. I have one issue, and that's the slow completions; it's not wasting painful amounts of time, but it still takes a second or two each time I hit tab. I don't know if this is the same for all shells, but I'm using zsh.

I tried a workaround, but it seems prone to breaking something. So far it's working fine for my purposes, so I thought I'd share anyway:

  1. I backed up /usr/share/zsh/vendor-completions/_nala to my home directory
  2. I copied /usr/share/zsh/functions/Completion/Debian/_apt to /usr/share/zsh/vendor-completions/_nala
  3. I used vim to %s/apt/nala/g (replace every instance of 'apt' with 'nala') in the new /usr/share/zsh/vendor-completions/_nala
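
(Same three steps as one-liners, if that's easier to copy; paths as above:)

  $ cp /usr/share/zsh/vendor-completions/_nala ~/_nala.bak
  $ sudo cp /usr/share/zsh/functions/Completion/Debian/_apt /usr/share/zsh/vendor-completions/_nala
  $ sudo sed -i 's/apt/nala/g' /usr/share/zsh/vendor-completions/_nala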

Already that's sped the completions up to seemingly the same speed as any other command. And already I can see some jank peeking through: zsh now thinks nala has access to apt commands that it definitely doesn't (e.g. nala build-dep, nala changelog and nala full-upgrade), and it has lost autocompletions for nala fetch and nala history.

Once I understand the completion file syntax better, I'll fix it to only use the commands listed in nala's manpage and submit a PR to the git repo. In the meantime, if anyone has suggestions for how to correct the existing completions file, or more ways to make the _apt completions fit nala, it'd be much appreciated.

77
submitted 11 months ago by BaumGeist@lemmy.ml to c/linux@lemmy.ml

As a user, I find the best way to handle applications is a central repository where interoperability is guaranteed. Something like what Debian does with its base repos. I just run an install and it's all taken care of for me. What's more, I don't deal with unnecessary bloat from dozens of different versions of the same library according to the needs of each separate dev/team.

So the self-contained packages must be primarily of benefit to the devs, right? Except I was just reading through how flatpak handles dependencies: runtimes, base apps, and bundling. Runtimes and base apps supply dependencies to the whole system, so they only ever get installed once... but the documentation explicitly mentions that there are only a few of both, meaning that most devs will either have to do what repo devs do—ensure their app works with the standard libraries—or opt for bundling.
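
(If you want to see that split on your own system, something like this should show it; the app ID is just a placeholder:)

$ flatpak list --runtime                            # runtimes installed once and shared by every app that targets them
$ flatpak info --show-metadata org.example.SomeApp  # the metadata names the runtime (and extensions) the app depends on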

Devs being human—and humans being animals—this means the overall average tendency will be to bundle, because that's easier for them. Which means that I, the end user, now have more bloat, which incentivizes me to retreat to the disk-saving havens of repos, which incentivizes the devs to release on a repo anyway...

So again... who does this benefit? Or am I just completely misunderstanding the costs and benefits?

52
submitted 1 year ago by BaumGeist@lemmy.ml to c/fuck_cars@lemmy.ml

Most people are aware that gasoline sucks as a fuel and is responsible for a large portion of carbon emissions, but defenders love to trot out that "if every end consumer gave up their car, it would only remove like 10% of carbon emissions"

I can find tons of literature about the impact gasoline vehicles have, but are there any broader studies that consider other factors—like manufacture, maintenance, and city planning—while exploring the environmental and/or economic impact of cars and car culture?

I know there are great sources that have made these critiques, but I'm looking for scientific papers that present all the data in a single holistic analysis.

115
Shyness rule (lemmy.ml)

BaumGeist

joined 2 years ago