[–] over_clox@lemmy.world 3 points 1 week ago (1 children)

No, I learned programming back when programmers actually worked in binary and sexadecimal (Ok IBM fanboys, they call that hexadecimal now, since IBM doesn't like sex).

I still use the old measurement system, save for the rare occasions I gotta convert into layman's terms for the average person.

It tells you a lot really quick when you're talking to someone and they don't understand why 2^10 (1024) is the underlying standard the CPU likes.

Oh wait, there's a 10 in (2^10)...

Wonder where that came from?.. 🤔

I dunno, but bit-shift binary multiplications and divisions are super fast in the integer realm, but get dogshit slow when performed in the decimal realm.
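
(A minimal sketch of that in C — the numbers are just for illustration: dividing by 1024 is a single 10-bit right shift, while dividing by 1000 takes a real integer division, though modern compilers do soften that with a multiply-by-reciprocal trick.)

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t bytes = 5u << 20;     /* 5 * 2^20 = 5 MiB: a single left shift       */
    uint32_t kib   = bytes >> 10;  /* divide by 1024: a single 10-bit right shift */
    uint32_t kb    = bytes / 1000; /* divide by 1000: a genuine integer division  */
    printf("%u bytes = %u KiB, or about %u kB\n", bytes, kib, kb);
    return 0;
}
```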

[–] LodeMike@lemmy.today 2 points 1 week ago (1 children)

I'm not denying any of that. You can just be precise, is all.

[–] over_clox@lemmy.world 3 points 1 week ago (1 children)

But if you fall into the folly of decimal on a device inherently meant to process binary, then you might allocate an array of 1000 items rather than the natural binary 1024, and the moment some other code assumes the full 1024, you've got a chance of a memory overflow...

Like, sell by the 1000, but program by the 1024.
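
(A hypothetical sketch of that mismatch in C — the names and sizes are made up for illustration: the buffer is sized on the decimal 1000 assumption, the writer assumes 1024-byte blocks, and the last 24 bytes land past the end of the array.)

```c
#include <string.h>

#define KB_DECIMAL 1000   /* "sell by the 1000"    */
#define KIB_BINARY 1024   /* "program by the 1024" */

/* Writer assumes the natural binary block size of 1024 bytes. */
static void copy_one_kib(char *dst, const char *src) {
    memcpy(dst, src, KIB_BINARY);
}

int main(void) {
    static char src[KIB_BINARY];   /* zero-filled source block                  */
    char buf[KB_DECIMAL];          /* sized on the decimal assumption           */
    copy_one_kib(buf, src);        /* writes 24 bytes past the end: overflow    */
    return 0;
}
```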

[–] LodeMike@lemmy.today 1 points 1 week ago

All the more reason to be precise