chkno@lemmy.ml 3 points 9 months ago

Nice.

Here's another worked example of a less adventurous Raspberry Pi Pico (W) project I did recently. It's C, built with Nix, and doesn't require setting up all the hardware-debugger stuff (it uses the much simpler hold-BOOTSEL-while-plugging-in and copy-the-.uf2-file mechanism to load code). The 5th commit is the simple blink example from the SDK with all the build mechanisms figured out.
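In case it's not obvious what that mechanism looks like in practice, it's just a file copy (the build-output name & mount point here are illustrative; RPI-RP2 is the volume name the Pico's bootloader presents):

    # Hold BOOTSEL while plugging the Pico in; it appears as a USB drive.
    # Copy the built .uf2 onto it & the Pico reboots into the new code.
    cp result/blink.uf2 /media/$USER/RPI-RP2/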

chkno@lemmy.ml 7 points 9 months ago* (last edited 9 months ago)

Bumping package versions usually isn't hard. Here, I'll do this one out loud, & maybe you can do it yourself next time you need to:

  1. Search https://github.com/NixOS/nixpkgs/pulls to see if someone else already has a PR open for a version bump for this package.
  2. Clone the nixpkgs repo if you haven't already: git clone https://github.com/NixOS/nixpkgs.git ~/devel/nixpkgs (or git pull if you have).
  3. Create a branch for this bump: git checkout -b stremio
  4. Find stremio: find pkgs -name stremio
  5. Edit it ($EDITOR pkgs/applications/video/stremio/default.nix). Looks like nixpkgs has version 4.4.142. If I go to https://www.stremio.com/ (the link in meta.homepage in this file) and click 'Download', everything says 4.4, which is not helpful. The 'source code' link goes to GitHub, and the 'tags' page there lists version v4.4.164, which is what we're looking for.
  6. In my editor, I change the version: 4.4.142 -> 4.4.164.
  7. In my editor, I mess up both the hashes: I just add a block of zeros somewhere in the middle: sha256-OyuTFmEIC8PH4PDzTMn8ibLUAzJoPA/fTILee0xpgQI= -> sha256-OyuTFmEIC80000000000000000000A/fTILee0xpgQI=.
  8. Leaving my editor, I build the updated package: nix-build . -A stremio
  9. It fails, because the hashes are wrong, obviously. But it tells me what hash it got, which I copy/paste back in, in the spirit of collective TOFU. I do this twice, once for each hash.
  10. It builds successfully. I test the result: ./result/bin/stremio. Looks like it works enough to prompt me to log in, at least. I don't know what stremio is or have an account, but it's probably fine.
  11. I commit my change: git commit -a -m 'stremio: 4.4.142 -> 4.4.164'
  12. I push my commit: git push github (If this is your first time, create a fork of nixpkgs in the github web UI & git remote add a remote for it first)
  13. I create a PR in the github web UI: https://github.com/NixOS/nixpkgs/pull/263387
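Condensed, the whole loop is just these commands from the steps above (the branch name & editor are whatever you like):

    git clone https://github.com/NixOS/nixpkgs.git ~/devel/nixpkgs
    cd ~/devel/nixpkgs
    git checkout -b stremio
    $EDITOR pkgs/applications/video/stremio/default.nix  # bump version, break hashes
    nix-build . -A stremio    # fails; paste the hash it reports back in (once per hash)
    ./result/bin/stremio      # smoke-test the build
    git commit -a -m 'stremio: 4.4.142 -> 4.4.164'
    git push github           # then open the PR in the web UI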
6

Or push-ups with someone sitting on your shoulders?

I'm interested in reading about this type of exercise — what forms people have come up with that are safe, effective, and fun.

What is it called, so that I can search for it?

18
submitted 11 months ago by chkno@lemmy.ml to c/books@lemmy.ml

I have a specific book I've been meaning to read for a while. I've heard that while it's a great journey, it's a dense / heavy / slow read along the way. It sounds like it'd be fun to read it together with a group of likewise interested folks.

Is there a service for pulling together reading groups around specific books, rather than the more common way of gathering a group of people and then selecting books? I'm imagining a website that has a sign-up page for ~every book, and when ~10 people sign up for a book they all get an email introducing them to each other. Like if there were a bus stop for every book: when enough people have gathered, a bus appears & they depart together.

Given a list of all the books, this seems like a pretty easy thing to make. Does it exist yet?

chkno@lemmy.ml 2 points 11 months ago

Regulation is slow, full of drama, scales poorly, & can result in a legal thicket that teams of lawyers can navigate better than the individuals it's intended to advocate for. Decriminalizing interoperability is faster & can handle most of the small/simple cases, freeing up our community/legislative resources to focus on the most important regulatory needs.

chkno@lemmy.ml 2 points 11 months ago

Yeah, that's normal. That's the seam -- where each layer starts/stops. Yours don't look any worse than mine.

Sometimes you can tweak settings to reduce them a bit, but the only way to avoid them completely is to print in spiral/vase mode (which is very limiting: 1 contiguous perimeter, no infill).

More importantly: you can control where they appear on the part! Your slicer may have settings like 'nearest', 'random', 'aligned', or 'rear', or may have a way to paint on the part in the UI where the seams should go. Seams are clearly visible when they're in the middle of an otherwise-smooth expanse like the side of your boat there, but are barely noticeable if you put them on a corner.

chkno@lemmy.ml 3 points 11 months ago* (last edited 11 months ago)

X11 for xdotool. ydotool doesn't support (& can't really support, with its current architecture) retrieving information like the current mouse location, the current window, or window dimensions & titles. Also, normal (unprivileged) use of ydotool requires udev rules or session scripts and/or running a ydotool daemon, & many distros don't yet ship with this Just Working.
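For example, these are the sort of queries that Just Work under X11 (real xdotool subcommands):

    xdotool getmouselocation                    # current pointer position
    xdotool getactivewindow getwindowname       # title of the focused window
    xdotool getactivewindow getwindowgeometry   # its position & dimensions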

X11 for Alt-F2 r to restart Gnome Shell without ending the whole session. This is a useful workaround for a variety of Gnome bugs.

406
submitted 11 months ago by chkno@lemmy.ml to c/privacy@lemmy.ml

Gmail prompt to provide phone number sounds like a threat

4
submitted 1 year ago* (last edited 1 year ago) by chkno@lemmy.ml to c/bikecommuting@lemmy.ml

When it's hot during the day and cold at night, I sometimes find myself under-dressed for late evening riding. I can pedal harder to generate body heat, but on flat ground that creates wind chill & doesn't help. Pedaling hard while lightly holding the brakes works really well to warm up!

But the downhill-biking folks warn about the hazards of overheating brakes (mostly disc brakes but also rim brakes / V-brakes). I have V-brakes.

I imagine just pedaling into the brakes transfers heat into them much more slowly than controlling a downhill descent does, since I can go down hills much faster than I can go up them (it takes much longer to transfer one hill's worth of energy from my muscles into climbing the hill than to transfer the same hill's worth of energy into the brakes/rims while descending it).
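Rough numbers, just to sanity-check that intuition (illustrative assumptions, not measurements): sustained pedaling might put ~150-200 W into the brakes, while merely holding 10 m/s on a 5% grade with 90 kg of rider + bike dissipates about m × g × v × grade = 90 × 9.8 × 10 × 0.05 ≈ 440 W, so even a moderate descent heats the brakes two to three times as fast as hard pedaling can.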

Do I need to worry about this at all?

chkno@lemmy.ml 4 points 1 year ago

I ran Gentoo for ~15 years and then switched to NixOS ~3 years ago. The last straw was Gentoo bug 676264, where I submitted version bump & build fix patches to fix security issues and was ignored for three months.

In Gentoo, glsa-check only tells you about security vulnerabilities after there's a portage update that would resolve them. I.e., for those three months, all Gentoo users had a ghostscript with widely-known vulnerabilities and glsa-check was silent about it. I'm not cherry-picking this example: this was one of my first attempts to be proactive about security updates, & I found that the process is not fit for purpose. And most fixed vulnerabilities don't even get GLSA advisories, since advisories have to be created manually. A while back, I had made a 'gentle update' script that just updated packages glsa-check complained about. It turns out that's not very useful.

Contrast this with vulnix, a tool in Nix/NixOS which directly fetches the vulnerability database from nvd.nist.gov (with appropriate polite local caching) and directly checks locally installed software against it. You don't need the Nix project to do anything for this to Just Work; it's always comprehensive. I made a NixOS upgrade script that uses vulnix to show me a diff of security issues as it does a channel update. Example output:

commit ...
Author: <me>
Date:   Sat Jun 17 2023

    New pins for security fixes

    -9.8    CVE-2023-34152  imagemagick
    -7.8    CVE-2023-34153  imagemagick
    -7.5    CVE-2023-32067  c-ares
    -7.5    CVE-2023-28319  curl
    -7.5    CVE-2023-2650   openssl
    -7.5    CVE-2023-2617   opencv
    -7.5    CVE-2023-0464   openssl
    -6.5    CVE-2023-31147  c-ares
    -6.5    CVE-2023-31124  c-ares
    -6.5    CVE-2023-1972   binutils
    -6.4    CVE-2023-31130  c-ares
    -5.9    CVE-2023-32570  dav1d
    -5.9    CVE-2023-28321  curl
    -5.9    CVE-2023-28320  curl
    -5.9    CVE-2023-1255   openssl
    -5.5    CVE-2023-34151  imagemagick
    -5.5    CVE-2023-32324  cups
    -5.3    CVE-2023-0466   openssl
    -5.3    CVE-2023-0465   openssl
    -3.7    CVE-2023-28322  curl

diff --git a/channels b/channels
--- a/channels
+++ b/channels
@@ -8,23 +8,23 @@
 [nixos]
 git_repo = https://github.com/NixOS/nixpkgs.git
 git_ref = release-23.05
-git_revision = 3a70dd92993182f8e514700ccf5b1ae9fc8a3b8d
-release_name = nixos-23.05.419.3a70dd92993
-tarball_url = https://releases.nixos.org/nixos/23.05/nixos-23.05.419.3a70dd92993/nixexprs.tar.xz
-tarball_sha256 = 1e3a214cb6b0a221b3fc0f0315bc5fcc981e69fec9cd5d8a9db847c2fae27907
+git_revision = c7ff1b9b95620ce8728c0d7bd501c458e6da9e04
+release_name = nixos-23.05.1092.c7ff1b9b956
+tarball_url = https://releases.nixos.org/nixos/23.05/nixos-23.05.1092.c7ff1b9b956/nixexprs.tar.xz
+tarball_sha256 = 8b32a316eb08c567aa93b6b0e1622b1cc29504bc068e5b1c3af8a9b81dafcd12
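(The security-issue diff above is just vulnix output captured before & after the update. If you want to poke at it directly, something like this checks the running system -- assuming the flag name hasn't changed; see vulnix --help:)

    nix-shell -p vulnix --run 'vulnix --system'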

chkno@lemmy.ml 3 points 1 year ago* (last edited 1 year ago)

Sounds fine?

Yes: Treat the two enclosures independently and symmetrically, such that you can fully restore from either one (the only difference would be that the one in the safe is slightly stale) and the ongoing upkeep is just:

  1. Think: "Oh, it's been a while since I did a swap" (or use a calendar or something)
  2. Unplug the drive at the computer.
  3. Carry it to the safe.
  4. Open the safe.
  5. Take the drive in the safe out.
  6. Put the other drive in the safe.
  7. Close the safe.
  8. Carry the other drive to the computer.
  9. Plug it in.
  10. (Maybe: authenticate for the drive encryption if you use normal full-disk encryption & don't cache the credential)

If I assume a normal incremental backup setup, both enclosures would have a full backup and a pile of incremental backups. For example, if swapped every three days:

Enclosure A        Enclosure B
-----------------  ---------------
a-full-2023-07-01
a-incr-2023-07-02
a-incr-2023-07-03
                   b-full-2023-07-04
                   b-incr-2023-07-05
                   b-incr-2023-07-06
a-incr-2023-07-07
a-incr-2023-07-08
a-incr-2023-07-09
                   b-incr-2023-07-10
                   b-incr-2023-07-11
                   b-incr-2023-07-12
a-incr-2023-07-13
....

The thing taking the backups need not even detect or care which enclosure is plugged in -- it just uses the last incremental on that enclosure to determine what's changed & needs to be included in the next incremental.

Nothing need care about the number or identity of enclosures: You could add a third if, for example, you found an offsite location you trust. Or when one of them eventually fails, you'd just start using a new one & everything would Just Work. Or, if you want to discard history (eg: to get back the storage space used by deleted files), you could just wipe one of them & let it automatically make a new full backup.

Are you asking for help with software? This could be as simple as dar and a shell script.
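A minimal sketch of that, assuming the enclosure auto-mounts at a fixed path (names & paths here are illustrative):

    #!/bin/sh
    # Back up /home to whichever enclosure is currently plugged in.
    # The newest archive already on the enclosure becomes the reference
    # for the next incremental, so the script never needs to know or
    # care which enclosure it's talking to.
    set -eu
    mount=/mnt/backup
    stamp=$(date +%F)
    # Most recent archive basename on this enclosure, if any:
    last=$(ls -t "$mount"/*.dar 2>/dev/null | head -n 1 | sed 's/\.[0-9]*\.dar$//')
    if [ -z "$last" ]; then
        dar -z -R /home -c "$mount/full-$stamp"             # fresh enclosure: full backup
    else
        dar -z -R /home -A "$last" -c "$mount/incr-$stamp"  # incremental against newest archive
    fi

Restoring is symmetric from either enclosure: extract the full archive, then each incremental on top of it in date order.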

My personal preference is to tell the enclosure to not try any fancy RAID stuff & just present all the drives directly to the host, and then let the host do the RAID stuff (with lvm or zfs or whatever), but I understand opinions differ. I like knowing I can always use any other enclosure or just plug the drives in directly if/when the enclosure dies.

I notice you didn't mention encryption, maybe because that's obvious these days? There's an interesting choice here, though: you can do normal full-disk encryption, or you can encrypt the archives individually.

Dar actually has an interesting feature here that I haven't seen in any other backup tool: if you keep a small --aux file with the metadata needed to determine what goes in the next incremental, dar can encrypt the backup archives asymmetrically to a GPG key. This separates the capability of writing backups from the capability of reading backups. It's neat, but mostly unimportant, because the backup is mostly just a copy of what's on the host. It comes into play only when accessing historical files that have been deleted on the host but are still recoverable by point-in-time restore from the incremental archives -- that becomes possible only with the private key, which is not used or needed by any of the backup automation, and so is not kept on the host.

You could also, of course, do both full-disk encryption and per-archive encryption, if you want the neat separate-credential-for-deleted-files trick and also don't want to leak metadata about when backups happen and how large the incremental archives are / how much changed. (If you don't full-disk-encrypt the enclosure & rely only on the per-archive encryption, you'd want to keep the small --aux files on the host, not on the enclosure. The automation would then need one --aux file per enclosure, &, for this narrow case only, it would need to identify the enclosures to make sure it uses that enclosure's --aux file when generating the incremental archive.)

chkno@lemmy.ml 3 points 1 year ago* (last edited 1 year ago)

Any sane compiler will simplify this into

    function cosmicRayDetector() {
      while(true) {
      }
    }

C++ may further 'simplify' this into the following (infinite loops with no observable side effects are undefined behavior, so the compiler is allowed to delete the loop entirely):

    function cosmicRayDetector() {
      return
    }