nous

joined 2 years ago
[–] nous@programming.dev 10 points 8 hours ago (4 children)

Random programming certificates are generally worthless. The course to get them might teach you a lot and be worthwhile, but the certificate at the end is worthless. If it is free then it does not matter too much either way; it might be a good way to test yourself. But I would not rely on it to get you a job at all. For that you need other ways to prove you can do the job - typically the ability to talk about the subject and having written some real-world-like application, which a course might help you do.

[–] nous@programming.dev 2 points 8 hours ago (1 children)

The surge in cancer since the 1900s is also explainable by the surge in our ability to detect cancer and our overall understanding of it.

One big reason papers always find these links is that they are finding correlations, which are always there, even between unrelated things. When you are looking at loads of factors in an observational study you are almost bound to find some accidental correlation, and it is very hard to tell whether that is just random noise or whether there is a true cause behind it.

There are all sorts of spurious correlations if you look hard enough.
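To make that concrete, here is a toy sketch of the multiple-comparisons problem: correlate one fake "outcome" against many unrelated random "factors" and report the strongest hit. Everything here is simulated noise, and the names and sample sizes are made up, yet screening enough factors will always turn up a "link".

```python
# Simulated multiple-comparisons demo: every variable is pure random noise,
# so any correlation found below is accidental by construction.
import random
import statistics

random.seed(1)
n_people, n_factors = 500, 200

outcome = [random.gauss(0, 1) for _ in range(n_people)]
factors = {f"factor_{i}": [random.gauss(0, 1) for _ in range(n_people)]
           for i in range(n_factors)}

# Screen every factor and keep the strongest correlation, which is roughly
# what happens when an observational study tests many exposures at once.
name, values = max(factors.items(),
                   key=lambda kv: abs(statistics.correlation(outcome, kv[1])))
r = statistics.correlation(outcome, values)
print(f"{name} 'correlates' with the outcome: r = {r:.2f}")
```

Run it and you get a nonzero correlation that would look "statistically significant" in isolation, even though nothing causes anything here.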

[–] nous@programming.dev 9 points 20 hours ago (3 children)

The only things not linked to cancer are the things that have not yet been studied. It seems like everything at some point has been linked to cancer.

The data showed that people who ate as little as one hot dog a day when it comes to processed meats had an 11% greater risk of type 2 diabetes and a 7% increased risk of colorectal cancer than those who didn’t eat any. And drinking the equivalent of about a 12-ounce soda per day was associated with an 8% increase in type 2 diabetes risk and a 2% increased risk of ischemic heart disease.

Sounds like a correlation... someone who eats a hot dog and drinks a soda every day is probably doing a lot of other unhealthy things too.

It’s also important to note that the studies included in the analysis were observational, meaning that the data can only show an association between eating habits and disease –– not prove that what people ate caused the disease.

Yup, that is what it is: a correlation. So overall not really worth the effort involved, IMO. The issue is likely not eating any processed meat at all, but your overall diet and amount of exercise/lifestyle. I would highly suspect that even if you did eat one hot dog per day, but had an otherwise perfect diet for the rest of the day, did plenty of exercise, got good sleep and all the other things we know are good for you, then these negative effects would likely become negligible. But who the hell is going to do that? That is the problem with these observational studies - you cannot really tease the effect of one thing out of a whole bad lifestyle.

I hate headlines like this as they make it sound like you can just do this one simple thing and get massive beneficial effects. You cannot. You need to change a whole bunch of things to see the kinds of risk reduction they always talk about. Instead they always make it sound like if you eat even one hot dog YOU ARE GOING TO DIE.

[–] nous@programming.dev 14 points 1 day ago* (last edited 1 day ago) (1 children)

The indicator being stuck is a recently fixed issue:

Fixed a case where the battery level indicator could become stuck

https://store.steampowered.com/news/app/1675200?emclan=103582791470414830&emgid=529850584204838038

[–] nous@programming.dev 3 points 1 day ago

YAML is not a good format for this. But any line-based or streamable format would be good enough for log data like this: really easy to parse in any language, or even directly with shell scripts. No need to even know SQL; any text processing would work fine.

[–] nous@programming.dev 3 points 1 day ago

CSV would be fine. The big problem with the data as presented is that it is a YAML list, so the whole file needs to be read into memory and decoded before you get any values out of it. Any line-based encoding would be vastly better and allow line-based processing. CSV, JSON objects encoded one per line, some other streaming binary format - it does not make much difference overall as long as it is line based or at least streamable.
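As a rough sketch of the JSON-per-line option (file name and field names made up for illustration): each line decodes on its own, so nothing ever forces the whole file into memory the way a single YAML list does.

```python
# Line-based processing of a hypothetical JSON Lines export: one record per
# line, decoded independently, so memory use stays constant.
import json

with open("tracking.jsonl") as f:
    for line in f:
        event = json.loads(line)          # decode just this one record
        if event.get("kind") == "error":  # hypothetical field
            print(event["timestamp"], event["message"])
```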

[–] nous@programming.dev 1 points 1 day ago (1 children)

Never said it had to be a text file. There are many binary serialization formats that could be used. But in a lot of situations the overhead you save is not worth the debugging effort of working with binary data. For something like this, which is likely not going to be more than a GB or so (probably much less), it really does not matter that much whether you use a binary or a text format. This is an export format that will likely just have one batch-processing layer on top, and that type of thing is generally easiest for most people to work with in a plain text format. If you really need efficient querying of the data then it is trivial and quick to load it into a DB of your choice rather than being stuck with sqlite.
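To show how trivial that load step is, here is a minimal sketch pulling a hypothetical CSV export into a database and querying it with SQL. I use sqlite here purely because it ships with Python - the point is you pick the DB at analysis time; the file name and column names are assumptions.

```python
# Load a hypothetical text export into a DB of your choice, then query it.
import csv
import sqlite3

conn = sqlite3.connect("tracking.db")
conn.execute("CREATE TABLE IF NOT EXISTS events (ts TEXT, kind TEXT, value REAL)")

with open("tracking.csv", newline="") as f:
    # Stream rows from the CSV straight into the table without holding
    # the whole file in memory.
    rows = ((r["ts"], r["kind"], float(r["value"])) for r in csv.DictReader(f))
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
conn.commit()

for row in conn.execute("SELECT kind, COUNT(*) FROM events GROUP BY kind"):
    print(row)
```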

[–] nous@programming.dev 1 points 1 day ago

export tracking data to analyze later on

That is essentially log data, or equivalent to it. Log data does not have to be human readable; it is just a series of events that happen over time. Most log data, even what you would think of as traditional messages from a program, is not parsed by humans manually but analyzed by code later on. It is really not that hard or slow to process log data line by line. I have done this with TBs of data before, which does require a lot more effort; a simple file like this would take seconds to process at most, even if you were not very efficient about it. I also never said it needed to be stored as text, just that a simple file is enough - no need for a full database. That file could be binary if you really need it to be, but text serialization would also be good enough. Most of the web world runs on text serialization.

The biggest problem with YAML like in the OP is the need to decode the whole file at once, since it is a single list. Line-by-line processing would be a lot easier to work with. But even then, if it is only a few hundred MBs, loading it all into memory and analyzing it there would not take long at all - it just does not scale very well.
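Here is a sketch of the line-by-line version of that analysis - constant memory, so the same code works on megabytes or terabytes. The file name and line layout (an ISO timestamp followed by an event name) are assumptions for illustration.

```python
# Constant-memory aggregation over a line-based log: only the current line
# and a small counter dict are ever held in memory.
from collections import Counter

per_day = Counter()
with open("tracking.log") as f:
    for line in f:
        timestamp, _, _event = line.partition(" ")
        per_day[timestamp[:10]] += 1  # bucket by the YYYY-MM-DD prefix

for day, count in sorted(per_day.items()):
    print(day, count)
```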

[–] nous@programming.dev 12 points 1 day ago (16 children)

What is wrong with a file for this? It sounds more like a local log or debug output that a single thread in a single process would be creating. A file is fine for high-volume append-only data like this. The only big issue is the format of that data.

What benefit would a database bring here?

[–] nous@programming.dev 28 points 5 days ago* (last edited 5 days ago) (2 children)

There is in this case, which is why Linus did accept the patch in the end. Previous cases were less clear-cut though, which is why Linus is so pissed at this one.

The reason for this new feature is to help fix data loss on users' systems - which makes it a fine line between a bug fix and a new feature, really. There is precedent for this type of thing in RC releases from other filesystems as well. So the issue in this instance is a lot less black and white.

That doesn't excuse previous behaviour though.

[–] nous@programming.dev 33 points 5 days ago (2 children)

Only 40%? I would have thought it would be much higher. Don't more projects than that generally fail, even without a bubble?

[–] nous@programming.dev 4 points 6 days ago

The attack is known as the evil maid attack. It requires repeated access to the device. Basically, if you can compromise the bootloader, you can inject a keylogger to sniff out the encryption key the next time someone unlocks the device. This is what secure boot is meant to help protect against (though I believe that has been compromised as well).

But realistically very few people need to worry about that type of attack. Encryption is good enough for most people. And if you don't have your system encrypted then it does not matter what bootloader you use, as anyone can boot any live USB and read your data.
