From a programmer's and optimizer's perspective, I always prefer the original binary definitions for memory sizes.
Like, I prefer the speed and convenience of bit shifts for quickly multiplying and dividing by powers of 2, without the headache of having to think in decimal.
The whole base-10 thing is meant for the average consumer and for marketing, not for those who actually understand the binary nature of the machine.
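Something like this, say in C (a toy sketch; the values are just for illustration):

```c
#include <stdio.h>

int main(void) {
    unsigned int n = 640;  /* arbitrary example value */

    /* For unsigned integers, n << k is n * 2^k and n >> k is n / 2^k,
       each a single cheap shift instruction. */
    printf("%u << 10 = %u (i.e. %u * 1024)\n", n, n << 10, n);
    printf("%u >> 2  = %u (i.e. %u / 4)\n", n, n >> 2, n);
    return 0;
}
```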
I still use powers of 2 for everything. Even though I can have any HDD size I want with virtualization, I still size them as powers of 2.
For consumer-facing marketing, sure, it's 1000. But it just makes sense to keep programming with 1024.
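Roughly what I mean, as a sketch in C (the constant names are my own convention, nothing standard):

```c
#include <stdio.h>

#define KB  1000ULL  /* decimal/SI kilobyte: what the sticker says   */
#define KIB 1024ULL  /* binary kibibyte, 2^10: what the OS counts in */

int main(void) {
    /* a drive sold as "500 GB" is 500 * 1000^3 bytes... */
    unsigned long long sold = 500 * KB * KB * KB;

    /* ...which shows up as ~465.7 GiB once you divide by 1024^3 */
    printf("500 GB = %.1f GiB\n", (double)sold / (KIB * KIB * KIB));
    return 0;
}
```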
KiB for FTW :-)
Been using it for years now when I need to be precise. Colloquially, everyone I know still understands that, contextually, K is 2^10.
I'm not about jamming a random i into terminology that was already well defined decades ago. But hey, you go for it if that's what you prefer.
By the way, 'for FTW' makes about as much sense as saying 'ATM machine'; it's redundant.
Smh my head
Yup! Serves me right for responding while rushing out the door. Gonna leave that here for posterity.
Edit: and... switching networks managed to triple-post this response. I think that's enough internet for today.
LOL, redundancy FTW 👍😂🤣
KiB was defined decades ago... way back in 1999. Before that it was not well defined: kB could mean binary or decimal depending on what, or who, was doing the measuring.
And? I started programming back in 1996, when most computer storage and memory measurements were already generally understood in terms of the base-2 binary system.
Floppy disks were about the only exception: '1.44MB' was base 10 stacked on top of base 2, i.e. 1440 × 1024 bytes. It was indeed a clusterfuck. That '1.44MB' works out to about 1.41MiB in modern terms.
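Spelling the math out (same numbers):

```c
#include <stdio.h>

int main(void) {
    /* "1.44 MB" = 1440 binary kilobytes: a decimal thousand
       multiplied by a binary 1024 -- the hybrid in question */
    unsigned long bytes = 1440UL * 1024UL;  /* 1,474,560 */

    printf("capacity: %lu bytes\n", bytes);
    printf("decimal:  %.2f MB  (10^6 bytes)\n", bytes / 1e6);
    printf("binary:   %.2f MiB (2^20 bytes)\n", bytes / 1048576.0);
    return 0;
}
```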
I do wonder sometimes how many buffer overflow errors are the result of 'programmers' declaring their arrays as a round decimal 1000 while the surrounding code assumes the binary 1024 (2^10)... 🤔
Okay, so you can take 0.1 seconds to write a lowercase "i".
No, I learned programming back when programmers actually worked in binary and sexadecimal (OK, IBM fanboys, they call that hexadecimal now, since IBM doesn't like sex).
I still use the old measurement system, save for the rare occasions I gotta convert to layman's terms.
It tells you a lot, really quickly, when you're talking to someone who doesn't understand why 2^10 (1024) is the underlying standard the CPU likes.
Oh wait, there's a 10 in (2^10)...
Wonder where that came from? 🤔
I dunno, but bit-shift multiplications and divisions are super fast in the integer realm, and get dogshit slow when performed in the decimal realm.
I'm not denying any of that. You can just be precise, is all.
But if you fall into the folly of decimal on a device inherently meant to process binary, you might allocate an array of 1000 items where the rest of the code expects the natural binary 1024, leaving a chance of a memory overflow...
Like, sell by the 1000, but program by the 1024.
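A contrived sketch of that failure mode (deliberately buggy, don't ship it):

```c
#include <string.h>

#define BUF_SIZE 1000  /* someone thought in round decimal... */

int main(void) {
    char buf[BUF_SIZE];

    /* ...while someone else assumed a "1K" buffer means 1024 bytes.
       The last 24 bytes land past the end of buf: a classic overflow. */
    memset(buf, 0, 1024);  /* undefined behavior past buf[999] */
    return 0;
}
```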
All the more reason to be precise
It's less for the consumer and more of an SI/IEC/BIPM thing. If the prefix k means about 1000 depending on the context, that can cause all sorts of problems. They maintain that k means strictly 1000, for good reason.
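Being unambiguous costs almost nothing in code, either; a small sketch:

```c
#include <stdio.h>

/* print a byte count under both conventions, so nobody has to guess */
static void print_size(unsigned long long bytes) {
    printf("%llu bytes = %.2f kB (SI, k = 1000) = %.2f KiB (IEC, Ki = 1024)\n",
           bytes, bytes / 1000.0, bytes / 1024.0);
}

int main(void) {
    print_size(65536);  /* prints 65.54 kB and 64.00 KiB */
    return 0;
}
```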