hallettj

joined 2 years ago
[–] hallettj@leminal.space 1 points 13 hours ago (1 children)

Not OP, but I've been using Niri as my daily driver for almost two years (since v0.1.2). The stability and polish have really impressed me. In addition to the scrolling workflow it has some especially nice features for screen sharing & capturing, like key binds to quickly switch which window you are sharing, and customizable rules to block certain windows when showing your whole desktop.

I do use a 40" ultrawide. Looking for options for getting the most out of an ultrawide was how I got into scrolling window managers.

I only occasionally use my 13" laptop display. I still like scrolling because I like spatial navigation. Even if windows end up mostly or entirely off the screen I still think about my windows in terms of whether they're left, right, up, or down from where I'm currently looking.

I don't like traditional tiling as much because I find it awkward to squish every window to be fully in view; and with e.g. i3-style WMs, if I want to stash a window out of view, like in a tab, that's a separate metaphor I have to keep track of, with another axis where windows might be. Scrolling consistently uses one spatial metaphor, placing all windows on one 2D plane with one coordinate system.

[–] hallettj@leminal.space 5 points 14 hours ago* (last edited 14 hours ago)

Home Manager is a Nix tool for managing configuration for a single user, usually on a Linux or macOS system, or possibly WSL. You configure installed programs, their configuration (such as dot files), and a number of other things, and you get a reproducible environment that's easy to apply to multiple machines, to roll back, etc. I find it helpful for having a clear record of how everything is set up. It's the sort of thing that people sometimes use GNU Stow or Ansible for, but it's much more powerful.
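As a rough sketch of what that looks like, here is a hypothetical minimal home.nix (the user name and packages are just examples):

```nix
{ pkgs, ... }:
{
  home.username = "jesse";
  home.homeDirectory = "/home/jesse";
  home.stateVersion = "24.05";

  # Install packages into the user profile
  home.packages = [ pkgs.ripgrep ];

  # Manage a program together with its dot files
  programs.git = {
    enable = true;
    userName = "Jesse";
    userEmail = "jesse@example.com";
  };
}
```

Running home-manager switch builds the configuration as a new generation that you can apply on another machine or roll back later.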

A Home Manager configuration is very similar to a NixOS configuration, except that NixOS configures the entire system instead of just user-level stuff. (The lines do blur in Nix because, unlike traditional package managers where packages are installed at the system level, Nix packages can be installed at the system, user, project, or shell-session level.) Home Manager is often paired with NixOS, or on Macs with nix-darwin. As I mentioned, the Home Manager portion of my config is portable to OSes other than NixOS. In my case I'm sharing it with another Linux distro, but you can also use Home Manager to share configurations between Linux, macOS, and WSL.

[–] hallettj@leminal.space 0 points 15 hours ago

When I'm pairing with someone who uses VSCode it's usually painful how slow they are about finding and opening files. And so much screen space is taken up by stuff that is not code. It's extra frustrating because even VSCode has built-in solutions for all of this, but lots of people don't seem to understand how to use it efficiently.

[–] hallettj@leminal.space 7 points 16 hours ago* (last edited 15 hours ago) (2 children)
  • NixOS + Home Manager
  • Niri
  • Kitty
  • Neovim, via Neovide

For work it's Fedora + Home Manager because the remote admin software doesn't support NixOS. Thankfully I've been able to define my dev environment almost fully in a Home Manager config that I can use at work and at home.

I use lots of Neovim plugins. Beyond the basic LSP and completion plugins, some of my indispensables are:

  • Leap for in-buffer navigation & remote text copying
  • Oil for file management
  • Fugitive + Git Signs + gv.vim + diffview.nvim for git integration
  • nvim-surround to add/change/remove delimiters
  • vim-auto-save
  • kitty-scrollback
[–] hallettj@leminal.space 2 points 3 days ago

I was reading recently about how Tailscale makes peer-to-peer connections work, which I thought was quite interesting. If we stop using NAT there is still an issue of getting traffic through stateful firewalls. That can be hard without a server because, for example, in some cases you need to coordinate two nodes sending each other messages on the same port nearly simultaneously to get all the intervening firewalls to interpret that as an "outbound" session from both sides to allow traffic through. https://tailscale.com/blog/how-nat-traversal-works

[–] hallettj@leminal.space 1 points 3 days ago

I like rofi for this use case, but it uses fuzzy search instead of labels. You might have to type more than one letter, depending on what windows you have open. OTOH if you know any part of the window title you can start typing immediately without having to scan a list for a label first.

Labels work well for jumping to something you can already see, because the label appears where you are already looking, so you see it immediately. I'm guessing the process of finding the label for a window that is not visible would be clunkier - you'd have to find the label in a possibly long window list.

[–] hallettj@leminal.space 4 points 1 week ago (1 children)

Very cool! A while ago I found that instead of using fractional scaling, things were smoother for me if I set Gnome's text scaling factor in accessibility settings. I think most GTK UI scales based off that value? It's pretty helpful for me, even though I'm not actually using Gnome anymore. But if fractional scaling support has gotten better, maybe I'll switch my approach.

[–] hallettj@leminal.space 3 points 1 week ago

Out of sheer curiosity I checked. 18 USC § 921(a)(16) defines "antique firearm" for purposes of crimes and criminal procedure. The term "firearm" is defined in 18 USC § 921(a)(3), which includes the text, "Such term does not include an antique firearm." (source)

It's perplexing because the "antique firearm" definition has numerous references to "firearm". The (A) and (B) parts include or reference the text, "any firearm (including any firearm with a matchlock, flintlock, percussion cap, or similar type of ignition system) ...".

So it looks like antique firearms are an instance of Russell's Paradox. I guess a flintlock is not not a firearm. Paradox resolving powers must be one of those things you need law school for.

[–] hallettj@leminal.space 6 points 1 week ago (1 children)

I had to look it up - apparently this guy's relation to Scotty is only explained in the director's cut of Wrath of Khan https://memory-alpha.fandom.com/wiki/Peter_Preston

[–] hallettj@leminal.space 11 points 1 week ago

Goddammit. I can see how this is convenient for Netflix subscribers in the short term. But this is yet another consolidation adding to the oppressive weight on consumers, and would-be competitors. For one thing, I'm sure this is going to lead to higher Netflix subscription prices than we would have seen otherwise.

[–] hallettj@leminal.space 3 points 2 weeks ago

This is what we gotta do - put social stigma on absurd wealth

[–] hallettj@leminal.space 7 points 2 weeks ago (2 children)

My recollection is in the mirror universe Worf has a huge battle cruiser, and keeps Garak on a chain. So that's inconclusive I guess.

 

cross-posted from: https://leminal.space/post/28955576

I learned how to do this recently, and I wanted to share. Once you know what to do VPN confinement is easy to set up on NixOS.

The scenario: you want selected processes to run through a VPN, but you want everything else to not run through the VPN. On Linux you can do this with a network namespace. That's a kernel feature that defines a network stack that is isolated from your default network stack. Processes can be configured to run in a new namespace, and when they do they cannot access the usual not-VPN-protected network interfaces.

Network namespaces work along with other types of namespaces, like process namespaces, to allow Docker containers to function almost as though they are separate machines from the host system. Actually Docker containers are regular processes that are carefully isolated using namespaces, cgroups, and private filesystems. Because of that isolation Docker containers are a popular choice for VPN confinement. But since all you really need is network isolation you can skip the middleman, and use network namespaces directly.
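To get a feel for that isolation, here's a minimal sketch using plain iproute2 (needs root; "demo" is just an example namespace name, and this is roughly the plumbing that the module described below automates):

```shell
# Create an isolated network namespace
ip netns add demo

# Inside it there is only a loopback interface, and no routes:
ip netns exec demo ip link show
ip netns exec demo ip route show

# So a process run in the namespace can't reach the network until an
# interface (e.g. a wireguard interface) is created or moved in there
ip netns exec demo ping -c1 1.1.1.1   # fails: Network is unreachable

# Clean up
ip netns delete demo
```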

There is a third-party NixOS module that automates this, VPN-Confinement. Here's an example that runs a Borg backup job through a VPN connection. (This example also uses the third-party sops-nix module to encrypt VPN credentials.)

{ config, ... }:

let
  vpnNamespace = "wg";
in
{
  # Define the network namespace for VPN confinement. Creates a VPN network
  # interface in the namespace; creates a bridge; sets up routing; creates
  # firewall rules to prevent DNS leaking. The VPN-Confinement module requires
  # using Wireguard as the VPN protocol.
  vpnNamespaces.${vpnNamespace} = {
    enable = true;
    wireguardConfigFile = config.sops.secrets.wireguard_config.path;
  };

  # Set up whatever service should run via VPN
  services.borgbackup.jobs.homelab = {
    paths = "/home/jesse";
    encryption.mode = "none";
    environment.BORG_RSH = "ssh -i /home/jesse/.ssh/id_ed25519";
    repo = "ssh://offsite.sitr.us/backups/homelab";
    compression = "auto,zstd";
    startAt = "daily";
  };

  # Modify the systemd unit for your service to run its processes in the VPN
  # namespace.
  #
  # - sets Service.NetworkNamespacePath in the systemd unit
  # - sets Service.InaccessiblePaths = [ "/run/nscd" "/run/resolvconf" ] to prevent DNS leaking
  # - adds a dependency to the unit that brings up the VPN network namespace
  #
  # I found the name of the systemd service that services.borgbackup.jobs
  # creates by looking at the Borg module source. You can find the source for
  # NixOS modules by searching for config options on https://search.nixos.org/options
  systemd.services.borgbackup-job-homelab = {
    vpnConfinement = {
      enable = true;
      inherit vpnNamespace;
      # `inherit vpnNamespace;` has the same effect as `vpnNamespace = vpnNamespace;`
      # I used a variable to be certain that the value here matches the name
      # I used to set up the namespace on line 11.
    };
  };

  # Load your wireguard config file however you want. Your VPN provider probably
  # supports wireguard, and will likely generate a config file for you.
  sops.secrets.wireguard_config = {
    sopsFile = ./secrets.yaml;
    owner = "root";
    group = "root";
  };
}

This setup assumes using the Wireguard VPN protocol, and assumes that the programs you want confined are run by systemd. Most VPN providers support Wireguard, and Tailscale is built on it too. But my understanding is that Tailscale's mesh routing requires additional setup beyond creating a Wireguard interface, so you'd likely want a different setup for confinement with Tailscale. You can run the Tailscale client in a network namespace (there is a start on such a setup here); or you might use Tailscale's subnet router feature to blend VPN and local network traffic instead of selective confinement.

Normally when you turn on a VPN your VPN client software creates a network interface that transparently sends traffic through an encrypted tunnel, and configures a default route to send network traffic through that interface. So traffic from all programs is routed through the tunnel. VPN-Confinement creates that network interface in the isolated namespace, and sets that default route in the namespace, so that only programs running in the namespace are affected. There is much more detail in this blog post. The VPN-Confinement module differs from the setup in that post in a couple of ways: it has some extra setup to block DNS requests that aren't properly tunneled; it creates a network bridge instead of a simple virtual ethernet cable for port forwarding; and it provides more options for firewall and routing configuration.

VPN-Confinement has an option to forward ports from the default network stack into the VPN namespace. This is useful if you want all outbound traffic to go through the VPN, but you want to accept inbound traffic from programs on the host, or from other machines on your local network, or anywhere else. This is handy if, for example, you're running a program on a headless server that provides a web UI for remote administration. Here's an expanded VPN namespace example:

vpnNamespaces.${vpnNamespace} = {
  enable = true;
  wireguardConfigFile = config.sops.secrets.wireguard_config.path;

  # Forward traffic to specified ports from the default network namespace to
  # the VPN namespace.
  portMappings = [{ from = 8080; to = 8080; }];
  accessibleFrom = [
    # Accept traffic from machines on the local network, and route through the
    # mapped ports.
    "192.168.1.0/24"
  ];
};

Requests to mapped ports from the host machine need to be addressed to the network bridge that VPN-Confinement sets up. You can configure its addresses using the bridgeAddress and bridgeAddressIPv6 options. By default the addresses are 192.168.15.5 and fd93:9701:1d00::1. If you're configuring addresses elsewhere in your NixOS config you can use an expression like this:

url = "http://${config.vpnNamespaces.${vpnNamespace}.bridgeAddress}:8080/";

If you look at the source for VPN-Confinement you'll see that namespace configuration and routing require a lot of stateful ip commands. I think it would be nice if there were an alternative, declarative interface to iproute2. But VPN-Confinement is able to encapsulate the stateful stuff in systemd ExecStart and ExecStopPost scripts.

I ran into an issue where mDNS stopped working while the VPN network namespace was active. I fixed that problem by configuring Avahi to ignore VPN-Confinement's network bridge:

services.avahi.denyInterfaces = [ "${vpnNamespace}-br" ];

Edit 2025-11-23: I deleted a comment that implied that if the VPN namespace string doesn't match in the two places where it is used, traffic won't be tunneled. I tested again: if the names don't match, the service that is supposed to be protected won't start. You'll see an error like, Failed to restart test-unit.service: Unit wrong-name.service not found. If you bypass VPN-Confinement by hand and set Service.NetworkNamespacePath to a path that doesn't exist, the unit will fail with an error like, test-unit.service: Failed to open network namespace path /run/netns/wrong-name: No such file or directory.

 


 

Ok I'll come clean - this is my car

271
submitted 6 months ago* (last edited 6 months ago) by hallettj@leminal.space to c/mycology@mander.xyz
 

Update: The first photo was day 3 of growing. We harvested on day 4, and got 255 grams of tasty snack!

 

I got into bullet journaling a few weeks ago. I looked at a bunch of resources that went into detail, but I felt like I didn't have the big picture. The Absolute Ultimate Guide covers the motivation, what bullet journaling is all about, and details for getting started quickly, all in one relatively short post.

 

I'm trying to write a Nix package for a closed-source, precompiled binary with an unusual twist. The binary is statically-linked, but it contains an embedded binary that is dynamically-linked. Is there some way I can use patchelf or another tool to path the interpreter path in the embedded binary?

The embedded binary does not have any runtime library dependencies, but it does need an interpreter which it expects at the hard-coded path /lib64/ld-linux-x86-64.so.2. It is embedded using the golang "embed" library.

I have a workaround that wraps the binary using buildFHSEnv. That works, but the resulting closure is about 300 MB bigger than it needs to be.

 

The situation: you're trying to build something, but one of your configured substituters (a.k.a. binary caches) is either offline, or having a moment of being very slow. Nix doesn't automatically time out and skip that cache. No, you just can't build. You want to disable the problem cache so you can get on with your life. But since you use NixOS you need to run nixos-rebuild to update your substituter settings. A rebuild means hitting the problem cache...

When I've run into this problem I've thought, "I really need a way to selectively disable a cache in the nix build command." Previously I've had a hard time searching for such an option. Today I found it! Here it is:

$ nix build --option substituters "https://cache.nixos.org/ https://nix-community.cachix.org/"

or

$ nixos-rebuild build --option substituters "https://cache.nixos.org/ https://nix-community.cachix.org/"

The flag --option overrides settings that are normally read from /etc/nix/nix.conf. The idea here is instead of specifying a cache to disable, you list all of the caches that you do want to use.

Unless you are running as a "trusted user" you can't use this method to add substituters that aren't already configured, because that would be a security problem. That means substituter URLs need to match exactly what's specified in /etc/nix/nix.conf, including query parameters like ?priority.
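For reference, the matching lines in a hypothetical /etc/nix/nix.conf might look like this (the garnix URL and user name are just examples):

```
substituters = https://cache.nixos.org/ https://nix-community.cachix.org/ https://cache.garnix.io?priority=50
trusted-users = root jesse
```

If a configured URL includes ?priority, the value you pass to --option has to include it too; otherwise Nix sees a different substituter, which non-trusted users aren't allowed to add.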

I run into the misbehaving cache problem in two situations:

  • From time to time I get an error from cachix. I think it might be something like the cache claims to have a store path, but then actually downloading it fails. I'm not sure. Anyway the cache error makes the whole build command fail.
  • Sometimes garnix, as helpful as it is for avoiding expensive rebuilds on my slow laptop, gets very slow serving large packages like slack and google-chrome. These are unfree so they aren't cached on cache.nixos.org which usually takes precedence over garnix for unmodified nixpkgs packages. But since I build my nixos config on garnix the unfree packages do get cached there. I could wait all day for my nixos rebuild, or I could bypass the cache, download binaries from their original URLs, and be done in seconds.
 

I'm a fan of gaming - my main game is Overwatch. Until this week I've been using xwayland or gamescope to run Wine games, which comes with downsides. Xwayland's window management can be buggy - in Gnome I can end up unable to switch back to a game window. Gamescope has some latency and visual artifact issues in my preferred window manager.

But now with the Wine 10 release candidates you can run Wine in native Wayland mode without any special registry settings or anything. And it works very well as far as I can tell! I went through the trouble of figuring out how to get Wine 10 set up on NixOS so I thought I would share.

Wine 10 is currently available in nixos-unstable. The simplest way I've found to get it working for games is to use Lutris, and to install both Lutris and Wine from unstable. To get a complete Wine setup for Lutris use wineWowPackages - for example wineWowPackages.stagingFull. The Full variant includes wine-mono which you'll probably want, and the staging package is the one that worked for me.

I have an overlay that lets me reference unstable packages via pkgs.unstable.${package-name}. With that in place I have this in my NixOS settings:

environment.systemPackages = [
  (pkgs.unstable.lutris.override {
    extraPkgs = pkgs: [
      # note: `pkgs` here is the function argument bound on this line,
      # which shadows the outer `pkgs`
      pkgs.wineWowPackages.stagingFull
      pkgs.winetricks
    ];
  })
];

Note that you'll want to use the shadowed pkgs variable introduced in the function given to extraPkgs to reference the wine packages. I think that package set has some extra FHS stuff done to it or something.
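The overlay I mentioned can be sketched like this (assuming a flake input named nixpkgs-unstable; the input name and plumbing are hypothetical):

```nix
nixpkgs.overlays = [
  (final: prev: {
    # exposes e.g. pkgs.unstable.lutris
    unstable = import nixpkgs-unstable {
      inherit (prev) system;
      # unfree is needed for some launchers and games
      config.allowUnfree = true;
    };
  })
];
```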

If you don't have it already the shortcut for enabling necessary system settings for running games with Vulkan is to enable steam:

programs.steam.enable = true;

You can presumably put the Lutris configuration in Home Manager instead of NixOS by setting home.packages instead of environment.systemPackages. The steam setting needs to be set in NixOS.

When you run Lutris change the Wine runner settings to use the "system default" Wine version, and check the "use system winetricks" toggle.

To make sure that Wine uses Wayland you can unset the DISPLAY environment variable, or set it to an empty string. To do that in Lutris go into the game configuration settings. Under the "System options" tab add an environment variable named DISPLAY, and leave its value empty.
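Outside of Lutris you can get the same effect from a shell by clearing DISPLAY before launching Wine. A quick way to confirm the variable really is gone from the child environment:

```shell
# Confirm DISPLAY is absent in a child process started this way
env -u DISPLAY sh -c 'echo "DISPLAY is ${DISPLAY-unset}"'
# prints: DISPLAY is unset
```

The same wrapper then works for launching, e.g. env -u DISPLAY wine Game.exe (the game path is hypothetical).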

And that's it!

The one issue I've run into is that the Battle.net launcher is a blank black rectangle. The workaround is to run the launcher in gamescope or xwayland, install the game you want, and then re-launch without gamescope in native Wayland. You can start the game you want using the menu from Battle.net's system tray icon so that you don't need to use the launcher UI.

Edit: Thanks @vividspecter@lemm.ee for the point about unsetting DISPLAY!

Edit: @BlastboomStrice@mander.xyz pointed out that all of the Wine packages on unstable are updated to v10 so I changed the instructions to use stableFull instead of stagingFull.

Edit: stableFull wasn't actually working for me so I switched the instructions back to stagingFull

 

Logan Smith's Rust videos are excellent - I'm happy to see a new one is up!

14
submitted 1 year ago* (last edited 1 year ago) by hallettj@leminal.space to c/linux@lemmy.ml
 

Some app launchers these days run each app in a new systemd scope, which puts the app process and any child processes into their own cgroup. For example I use rofi which does this, and I noticed that fuzzel does also. That is handy for tracking and cleaning up child processes!

You can see how processes are organized by running,

$ systemctl --user status

I think that's a quite useful way to see processes organized. Looking at it I noticed a couple of scopes that shouldn't still be running.

Just for fun I wanted to use this to try to script a better killall. For example if I run $ killscope slack I want the script to:

  1. find processes with the name "slack"
  2. find the names of the systemd scopes that own those processes (for example, app-niri-rofi-2594858.scope)
  3. kill processes in each scope with a command like, systemctl --user stop app-niri-rofi-2594858.scope

Step 2 turned out to be harder than I liked. Does anyone know of an easy way to do this? Ideally I'd like a list of all scopes with information for all child processes in JSON or another machine-readable format.

systemctl --user status gives me all of the information I want, listing each scope with the command for each process under it. But it is not structured in an easily machine-readable format. Adding --output json does nothing.

systemd-cgls shows the same cgroup information that is shown in systemctl --user status. But again, I don't see an option for machine-readable output.

systemd-cgtop is interesting, but not relevant.

Anyway, I got something working by falling back on the classic commands. ps can show the cgroup for each process:

$  ps x --format comm=,cgroup= | grep '^slack\b'
slack           0::/user.slice/user-1000.slice/user@1000.service/app.slice/app-niri-rofi-2594858.scope
slack           0::/user.slice/user-1000.slice/user@1000.service/app.slice/app-niri-rofi-2594858.scope
slack           0::/user.slice/user-1000.slice/user@1000.service/app.slice/app-niri-rofi-2594858.scope
...

The last path element of the cgroup happens to be the scope name. That can be extracted with awk -F/ '{print $NF}'. Then unique scope names can be fed to xargs. Here is a shell function that puts everything together:

function killscope() {
    local name="$1"
    ps x --format comm=,cgroup= \
        | grep "^$name\b" \
        | awk -F/ '{print $NF}' \
        | sort | uniq \
        | xargs -r systemctl --user stop
}

It could be better, and it might be a little dangerous. But it works!

 

A short post on how variable names can leak out of macros if there is a name collision with a constant. I thought this was a delightful read!

 

Difftastic is a diff tool that uses treesitter parsing to compare code AST nodes instead of comparing lines. After following the instructions for use with git I'm seeing some very nice diffs when I run git diff or run git show --ext-diff. I thought it would be nice to get the same output for hunk diffs in the fugitive status window, and in fugitive buffers in general (which use the git filetype). But I haven't seen any easy way to do it. Has anyone got a setup like this?

I can run a command in neovim like :Git show --ext-diff to get difftastic output in a buffer. I'm thinking maybe I can set up fugitive to use the --ext-diff flag by default, or set up some aliases. But there is no syntax highlighting for the difftastic outputs since the ANSI color codes that difftastic uses in interactive terminal output don't work in neovim, and the syntax highlighting for the git filetype assumes standard diff output which is not compatible with difftastic output. For me losing colors is not a worthwhile trade for the otherwise more readable diff output.

My best idea right now is to set up a new filetype called difftastic, and write a new treesitter grammar or syntax plugin for it. Then set up some kind of neovim configuration to feed output from difftastic into buffers with the new filetype.

There is an open neovim issue discussing adding syntax-aware diffs directly to neovim, but that doesn't seem to have gone anywhere.
