nous

joined 2 years ago
[–] nous@programming.dev 16 points 5 hours ago* (last edited 4 hours ago) (1 children)

Debian has two main versions: stable, which is released every two years and supported for a long time, and unstable, which is basically a rolling release that constantly changes, adopting new things to test them before the next stable release. There is also testing, but that is just a staging area for packages before they are promoted to stable, so it has the same release cadence as stable.

Two years of fixed versions on a desktop is a very long time to be stuck on some packages - especially ones you use regularly. Most people want things newer than that, whether newly released applications or new features added in the past two years to apps they already use.

Ubuntu also has two release versions (that's not really the right term, though). They have an LTS version which is released every two years, much like Debian. But they also have interim releases that come out every 6 months. These give users access to much newer versions of software that has been released more recently. Note that the LTS versions are just the same as the interim versions; it's just that LTS versions are supported for a longer period of time, so you can use them for longer.

For each Ubuntu release they basically take a snapshot of Debian unstable, and from that point on they maintain their own security patches for the versions they picked. They can share some of this work with Debian's patches and backports, but since Debian stable and Ubuntu are based off different versions, Ubuntu still needs to do a lot of work figuring out which patches apply to their stuff, as well as ensuring things work on the versions they picked. Both distros do a lot of work in this regard and do work with each other where it makes sense.

Ubuntu also adds a few things on top of Debian: some extra packages, a few changes that make the distro a bit more user friendly, etc.

Any other distro that wants to base off one of these has to make a choice:

  • Do they want a very slow release cadence matching Debian (or Ubuntu LTS)?
  • Or the faster release cadence of Ubuntu, without much extra work, since they can build off the work Ubuntu is doing on top of Debian?
  • Or do they want to take on all that extra work themselves and have more control over the versions included in their repos?

For a lot of distro maintainers, basing off Ubuntu gives them a newer set of packages to work with while doing far less of that work themselves. They can then focus on the value adds they want to layer on top of the distro rather than redoing the work Ubuntu already does or sticking with much older versions.

I don't think the value-add work that needs to be done on either base is hugely different. You can take the core packages you want and change a few settings, or remake a few meta packages you don't want from Ubuntu. That is stuff you will be doing whichever one you pick. Keeping up with security patching everything yourself is a lot more work.

[–] nous@programming.dev 3 points 3 days ago

The query language is deliberately less expressive than jq's. jsongrep is a search tool, not a transformation tool-- it finds values but doesn't compute new ones. There are no filters, no arithmetic, no string interpolation.

This does make it distinctly less useful. I find that quite a lot of the time I need filtering or transformations when doing complex stuff - which, when dealing with larger documents, is almost always. So its main benefit, speed, does not really matter if I cannot use it for a task. And if the only tasks I can use it on are simpler ones, then I don't need its speed and it is not worth the effort to learn or use.

TBH I don't use jq much these days either. I switched to nushell a while back and it has native support for everything jq (and so this tool) can do. But I find it far more intuitive to use. Every time I touch jq for anything more than just lookups I need to reread the docs to remember what the syntax is. In a much shorter time with nushell I don't need to do that anywhere near as often. Plus it works with yaml, toml and most formats.
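For a flavour of what I mean, here is a rough nushell sketch from memory (file name and fields are made up; check the nushell docs for exact syntax):

```nu
# open detects the format from the extension: json, yaml, toml, csv...
open users.json | where age > 30 | get name
# the same pipeline works unchanged on users.yaml or users.toml
```

The point is the query reads like a shell pipeline over structured data rather than a separate mini-language.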

[–] nous@programming.dev 2 points 5 days ago (1 children)

I am not sure that is fully true. Or at least not fully explained. The Steam Deck has a full KDE environment installed and it uses this when in desktop mode. But Steam is not running in Big Picture mode in front of it.

KDE is not running at all when the Steam Deck is in game mode. In that mode it uses a compositor written by Valve called gamescope. Switching between the two is effectively logging out and back in again to switch the compositor.

Also, it now has a way to run the desktop as a nested session inside game mode, but that is running KWin inside gamescope.

[–] nous@programming.dev 4 points 5 days ago (1 children)

You cannot eliminate X11/Wayland overhead. You need a display server of some sort, and I suspect most games/Proton will require X11, or at least XWayland on top of a Wayland compositor. You probably do want a window manager of some sort as well, or you lose out on a lot of controls like window placement and sizing. Some games might do weird things if they don't launch directly in full screen mode, and Steam itself would probably need to run in Big Picture mode to go full screen. If you want something designed for gaming, you might try gamescope, which is what the Steam Deck uses as its compositor in game mode.

There are probably other areas with a higher impact that you can optimize more before really worrying about a lack of window manager though.

[–] nous@programming.dev 2 points 6 days ago

That just makes your writes to the disk more efficient because of block alignment and caching nonsense.

This is not true. The reason to use dd is to be able to write a fixed amount of data from any location in the source to any location in the destination - you have lots of control over how this happens. But the way everyone uses it, writing one whole file to another whole file, it offers no benefit. If anything, you have to tune its params to get decent performance out of it. Any other copy tool uses a better block size by default, so the best you can do is match the performance of other copy methods like shell redirection and cp.
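To illustrate (a hypothetical demo with made-up file names, GNU dd assumed): for a plain whole-file copy, dd with a sane block size just matches cp and redirection - all three produce identical output.

```shell
# make a small test file
dd if=/dev/urandom of=input.img bs=1M count=8 status=none

dd if=input.img of=out1.img bs=4M status=none   # dd with a tuned block size
cp input.img out2.img                           # cp picks its own block size
cat input.img > out3.img                        # plain shell redirection

cmp out1.img out2.img && cmp out2.img out3.img && echo "identical"
```

Leave off the `bs=4M` and dd falls back to 512-byte blocks, which is where the "dd is slow" folklore comes from.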

[–] nous@programming.dev 10 points 1 week ago* (last edited 1 week ago) (2 children)

parse_oui_database takes in a file path as a &String that is used to open the file in a parsing function. IMO there are a number of problems here.

First, you should almost never take a &String as a function argument. That is a reference to an owned object, and it excludes callers that only have a &str, forcing them to convert to a full String - which involves an allocation - just to pass a reference to it. The function should just take a &str, as it is cheap to go from a String to a &str (and a &str is cheaper to use than a &String, which involves a double indirection).
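A minimal sketch of the difference (function names are made up, not from the post):

```rust
// Takes &String: a caller holding only a string literal or &str must
// allocate a String just to call this.
fn shout_bad(s: &String) -> String {
    s.to_uppercase()
}

// Takes &str: accepts String, &String and literals alike, no allocation.
fn shout_good(s: &str) -> String {
    s.to_uppercase()
}

fn main() {
    let owned = String::from("hello");
    assert_eq!(shout_bad(&owned), "HELLO");
    assert_eq!(shout_bad(&"hi".to_string()), "HI"); // forced allocation
    assert_eq!(shout_good(&owned), "HELLO");        // String derefs to &str
    assert_eq!(shout_good("hi"), "HI");             // literal works directly
}
```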

Sometimes it might be even better to take an impl AsRef<str>, which means the function can accept anything that can be converted into a &str without the caller needing to do it directly. Though on larger functions like this that might not always be the best idea, as it makes the function generic and so it will be monomorphised for every type you pass into it. This can bloat the binary if you do it on lots of large functions with lots of different input types. You can also get the best of both worlds with a generic wrapper around a concrete implementation - the large function takes the concrete type &str, and a wrapper that takes an impl AsRef<str> calls the inner function. Though in this case it is probably easier to just take a &str and convert manually at the one call site.
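The wrapper pattern looks like this (hypothetical names, not from the reviewed code):

```rust
// Thin generic wrapper: callers get the flexibility of impl AsRef<str>,
// but the big function body below is only compiled once, for &str.
fn normalize(input: impl AsRef<str>) -> String {
    normalize_inner(input.as_ref())
}

// Imagine this is the large concrete function.
fn normalize_inner(s: &str) -> String {
    s.trim().to_lowercase()
}

fn main() {
    assert_eq!(normalize("  MiXeD  "), "mixed");               // &str
    assert_eq!(normalize(String::from("  MiXeD  ")), "mixed"); // String
}
```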

Second: String/&str are not the right types for paths. Those would be PathBuf and &Path, which work like String and &str respectively (so all the above applies to them as well). These are generally better to use, as paths in most OSs don't have to be unicode, which means there are file names (though very rarely) that cannot be represented as a String. This is why File::open takes an impl AsRef<Path>, which your function could too.
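For example (illustrative function, not from the post), an impl AsRef<Path> signature accepts &str, String, &Path and PathBuf alike:

```rust
use std::path::{Path, PathBuf};

// file_stem returns an OsStr, which can hold non-unicode names;
// to_str only fails in that rare non-unicode case.
fn stem_of(path: impl AsRef<Path>) -> Option<String> {
    path.as_ref().file_stem()?.to_str().map(String::from)
}

fn main() {
    assert_eq!(stem_of("oui.txt").as_deref(), Some("oui"));
    assert_eq!(stem_of(PathBuf::from("/tmp/oui.txt")).as_deref(), Some("oui"));
}
```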

Lastly, I would not conflate opening a file with parsing it. These should be two different functions. This makes the code a bit more flexible - you can get the data to parse from other sources. One big advantage is testing, where you can just have the test data as strings in the test. It also makes the returned error type simpler, as one function can deal with IO errors and the other with parsing errors. And in this case you could even parse the data directly from the internet request rather than saving it to a file first (though there are other reasons you may or may not want to do that).

[–] nous@programming.dev 4 points 1 week ago (1 children)

You panic a lot on errors in functions where you return a Result. Almost all of these panics are due to input problems, not programming logic errors. They really should be returned from the functions as errors, so higher-up functions can handle them however they need to rather than immediately crashing the program. Panics should only be used when the situation should never occur, and if it does, that indicates a bug in the program - or when you are being lazy with small/test code/one-off scripts (but in that case, why return any errors at all? You might as well just .unwrap() everything instead of using ?).
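The difference in a tiny example (hypothetical function, not from the reviewed code):

```rust
use std::num::ParseIntError;

// Panics on bad input: the caller never gets a chance to recover.
fn parse_port_panicky(s: &str) -> u16 {
    s.parse().expect("invalid port")
}

// Returns the error instead, so callers decide what to do with it.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse()
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("not a port").is_err()); // no crash, just an Err
    assert_eq!(parse_port_panicky("8080"), 8080); // fine on good input only
}
```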

[–] nous@programming.dev 2 points 1 week ago (1 children)

This is a pet peeve of mine, but don't put main last. It's like opening a book and being greeted by a random chapter - probably one near the end - and having to hunt through it to find where the story actually starts.

This is IMO a horrible hangover from languages that require you to declare something before you can use it. You don't need to do that in Rust. So put your functions in the order that makes sense to read them from top to bottom. This typically means main should be one of the first functions you see, as it is the entry point to the code. In other files this might be the main functions a user is expected to call first. Sometimes you might want to see some data structures before that. But overall, things should be ordered in whatever way reads best from top to bottom, to make it easier to make sense of the program.

[–] nous@programming.dev 2 points 2 weeks ago (1 children)

You are not considered to be working somewhere until you have signed a contract and passed the start date on that contract. Accepting an offer is not signing a contract. You are not working at the new place yet, and you have no obligations to do anything at that point. You just need to have stopped working at your current employer before your start date. You definitely do not need to quit before accepting the offer. Nowhere I have worked requires that.

[–] nous@programming.dev 4 points 2 weeks ago (3 children)

You are right. You cannot onboard a new job before you leave your old one. Accepting an offer is not part of the onboarding process though. It happens before.

After an interview process the company makes an offer. The candidate can then accept or reject it. But that is really all informal. You can then negotiate with them for an official start date and contract. You just need to ensure you can hand in your notice and work the rest of your notice period before the start date of your new contract.

I don't know anyone that would hand in their notice before accepting the initial offer of a company. At least here in the UK.

[–] nous@programming.dev 3 points 2 weeks ago (1 children)

Probably not the only thing they are used for, considering its ties to the CIA.

[–] nous@programming.dev 205 points 2 weeks ago (12 children)

You assume they don't already have a job and were just looking for other opportunities. Not everyone is unemployed when they apply for other jobs. If anything, that is a good time to look, as it gives you a stronger position to negotiate from.
