Serious answer: Posits seem cool, like they do most of what floats do, but better (in a given amount of space). I think supporting them in hardware would be awesome, but of course there's a chicken and egg problem there with supporting them in programming languages.
Posits aside, that page had one of the best, clearest explanations of how floating point works that I've ever read. The authors of my college textbooks could have learned a thing or two about clarity from this writer.
No real use you say? How would they engineer boats without floats?
Just invert a sink.
Just build submarines, smh my head.
Based and precision pilled.
I know this is in jest, but if 0.1 + 0.2 != 0.3 hasn't caught you out at least once, then you haven't really done any programming.
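For anyone who hasn't been bitten yet, here's a minimal Rust sketch of the gotcha, plus the usual tolerance-based comparison workaround:

```rust
fn main() {
    let sum: f64 = 0.1 + 0.2;
    println!("{}", sum);        // 0.30000000000000004
    println!("{}", sum == 0.3); // false: binary floats can't represent 0.1 or 0.2 exactly

    // The usual workaround: compare within a tolerance instead of exactly.
    println!("{}", (sum - 0.3).abs() < 1e-9); // true
}
```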
Me making my first calculator in C.
IMO they should just remove the equality operator on floats.
what if i add more =
That should really be written as the gamma function, because factorial is only defined for members of Z. /s
As a programmer who grew up without an FPU (Archimedes/Acorn), I have never liked floats. But I thought this war had been lost a long time ago. Floats are everywhere. I've not done graphics for a bit, but I never saw a graphics card that took any form of fixed point. All geometry you load in is in floats. The shaders all work in floats.
For a while, ARM MCU work was float-free, but loads of those chips have float support now.
I mean, you can tell good low-level programmers by how they feel about floats. But the battle does seem lost. There are lots of bits of technology that have taken turns I don't like. Sometimes the market/bazaar has spoken and it's wrong, but you still have to grudgingly go along with it or everything gets too difficult.
But if you throw an FPU in water, does it not sink?
It's all lies.
> all work in floats

We even have float16 / float8 now for low-accuracy, high-throughput work.
Even float4. You get +/- 0, 0.5, 1, 1.5, 2, 3, Inf, and two values for NaN.
Come to think of it, the idea of -NaN tickles me a bit. "It's not a number, but it's a negative not a number".
I think you got that wrong: you get +Inf, -Inf and two NaNs, but they're both just NaN. As you wrote, signed NaN makes no sense, though technically speaking they still have a sign bit.
Right, there's no -NaN. There are two different values of NaN. Which is why I tried to separate that clause, but maybe it wasn't clear enough.
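For the curious, here's a toy decoder for a hypothetical IEEE-style float4 layout (1 sign bit, 2 exponent bits with bias 1, 1 mantissa bit) that produces exactly the values listed above; note the two bit patterns that both decode to NaN:

```rust
// Toy decoder for a hypothetical 1-2-1 "float4": 1 sign bit, 2 exponent
// bits (bias 1), 1 mantissa bit, with IEEE-style subnormals, Inf and NaN.
fn decode(bits: u8) -> String {
    let sign = if bits & 0b1000 != 0 { -1.0 } else { 1.0 };
    let exp = (bits >> 1) & 0b11;
    let man = (bits & 1) as f64;
    match exp {
        0b00 => format!("{}", sign * man * 0.5), // subnormals: +/-0, +/-0.5
        0b11 if man == 0.0 => format!("{}Inf", if sign < 0.0 { "-" } else { "+" }),
        0b11 => "NaN".to_string(),               // two bit patterns land here
        e => format!("{}", sign * (1.0 + man * 0.5) * 2f64.powi(e as i32 - 1)),
    }
}

fn main() {
    for bits in 0..16u8 {
        println!("{bits:04b} -> {}", decode(bits));
    }
}
```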
Floats are only great if you deal with numbers that have no need for precision and accuracy. Want to calculate the F cost of an A* node? Floats are good enough.
But every time I need any kind of accuracy, I go straight for actual decimal numbers. Unless you are in extreme scenarios, you can afford the extra 64 to 256 bits in your memory.
I have been thinking that maybe modern programming languages should move away from supporting IEEE 754 all within one data type.
Like, we've figured out that having a `null` value for everything always is a terrible idea. Instead, we've started encoding potential absence into our type system with `Option` or `Result` types, which also encourages dealing with such absence at the edges of our program, where it should be done.
Well, `NaN` is `null` all over again. Instead, we could make the division operator an associated function which returns a `Result<f64>` and disallow `f64` from ever being `NaN`.
My main concern is interop with the outside world. So, I guess, there would still need to be an IEEE 754 compliant data type. But we could call it `ieee_754_f64` to really get on the nerves of anyone wanting to use it when it's not strictly necessary.
Well, and my secondary concern, which is that AI models would still want to just calculate with tons of floats without error-handling at every intermediate step, even if it sometimes means that the end result is a shitty vector of `NaN`s: that would be supported with that, too.
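A minimal sketch of what that could look like; `DivError` and `checked_div` are made-up names, and a real design would also need to cover the other NaN producers (0/0, Inf - Inf, sqrt of a negative, ...):

```rust
// Hypothetical: division as a fallible operation, so f64 can never be NaN.
#[derive(Debug, PartialEq)]
enum DivError {
    DivByZero,
}

fn checked_div(num: f64, den: f64) -> Result<f64, DivError> {
    if den == 0.0 {
        Err(DivError::DivByZero) // instead of silently producing Inf or NaN
    } else {
        Ok(num / den)
    }
}

fn main() {
    assert_eq!(checked_div(1.0, 2.0), Ok(0.5));
    assert_eq!(checked_div(1.0, 0.0), Err(DivError::DivByZero));
}
```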
I agree with moving away from `float`s, but I have a far simpler proposal... just use a struct of two integers: a value and an offset. If you want to make it an IEEE standard where the offset is a four-bit signed value and the value is just a 28- or 60-bit regular old integer, then sure. But I can count the number of times I used floats on one hand, and I can count the number of times I wouldn't have been better off just using two integers on -0 hands.
Floats specifically solve the issue of how to store an absurdly large range of values in an extremely modest amount of space - that's not a problem we need to generalize a solution for. In most cases, having values up to the millions in magnitude with three decimals of precision is good enough. Generally speaking, when you do float arithmetic your numbers will be within an order of magnitude or two of each other... most people aren't adding the length of the universe in seconds to the width of an atom in meters... and if they are, floats don't work anyway.
I think the concept of having a fractionally defined value with a magnitude offset was just deeply flawed from the get-go - we need some way to deal with decimal values on computers but expressing those values as fractions is needlessly imprecise.
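A minimal sketch of that two-integer idea, assuming the offset is a base-10 exponent (all names made up; a real version would need normalization, overflow handling, and the rest):

```rust
// Made-up representation: value * 10^offset, both plain integers.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Scaled {
    value: i64,
    offset: i8,
}

impl Scaled {
    // Align both operands to the smaller offset, then add the integers exactly.
    fn add(self, other: Scaled) -> Scaled {
        let offset = self.offset.min(other.offset);
        let rescale = |s: Scaled| s.value * 10i64.pow((s.offset - offset) as u32);
        Scaled { value: rescale(self) + rescale(other), offset }
    }
}

fn main() {
    let a = Scaled { value: 1, offset: -1 }; // 0.1
    let b = Scaled { value: 2, offset: -1 }; // 0.2
    assert_eq!(a.add(b), Scaled { value: 3, offset: -1 }); // exactly 0.3
}
```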
NaN isn't like null at all. It doesn't mean there isn't anything. It means the result of the operation is not a number that can be represented.
The only option is that operations that would result in NaN are errors. Which doesn't seem like a great solution.
Well, that is what I meant. That `NaN` is effectively an error state. It's only like `null` in that any float can be in this error state, because you can't rule out this error state via the type system.
Why do you feel like it's not a great solution to make `NaN` an explicit error?
While I get your proposal, I'd think this would make dealing with floats hell. Do you really want to `.unwrap()` every time you deal with it? Surely not.
One thing that would be great is if the `/` operator could work between `Result` and `f64`, as well as between `Result` and `Result`. It would be like doing a `.map(|left| left / right)` operation.
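Rust's orphan rule won't let you implement `Div` on `Result<f64, E>` directly, but a made-up newtype wrapper sketches the idea:

```rust
use std::ops::Div;

// Made-up newtype: a float that carries a possible division error with it.
#[derive(Debug, Clone, Copy)]
struct Checked(Result<f64, ()>);

// Checked / f64: divide unless we're already in the error state.
impl Div<f64> for Checked {
    type Output = Checked;
    fn div(self, rhs: f64) -> Checked {
        Checked(self.0.and_then(|left| {
            if rhs == 0.0 { Err(()) } else { Ok(left / rhs) }
        }))
    }
}

// Checked / Checked: propagate whichever error happened first.
impl Div for Checked {
    type Output = Checked;
    fn div(self, rhs: Checked) -> Checked {
        match rhs.0 {
            Ok(right) => self / right,
            Err(e) => Checked(Err(e)),
        }
    }
}

fn main() {
    let a = Checked(Ok(1.0));
    println!("{:?}", a / 4.0);       // Checked(Ok(0.25))
    println!("{:?}", a / 0.0);       // Checked(Err(()))
    println!("{:?}", a / 0.0 / 2.0); // the error sticks: Checked(Err(()))
}
```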
Float is bloat!
While we're at it, what the hell is -0 and how does it differ from 0?
It's the negative version
So it's just like 0 but with an evil goatee?
Look at the graph of y = tan(x + π/2).
-0 and +0 are completely different.
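A quick Rust sketch of how the two zeros behave; the sign only really shows up once you divide by it:

```rust
fn main() {
    let pos = 0.0_f64;
    let neg = -0.0_f64;
    println!("{}", pos == neg);             // true: they compare equal
    println!("{}", neg.is_sign_negative()); // true: the sign bit is still there
    println!("{}", 1.0 / pos);              // inf
    println!("{}", 1.0 / neg);              // -inf: where the sign finally matters
}
```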
Call me when you've found a way to encode transcendental numbers.
Perhaps you could encode them as a computation (i.e. a function of arbitrary precision).
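That's roughly the "computable reals" idea: represent the number as a function from requested precision to an approximation. A toy sketch, somewhat ironically using f64 for the partial sums:

```rust
// Toy "computable real": a function from number of terms to an approximation.
struct Computable(Box<dyn Fn(u32) -> f64>);

// e = sum over n of 1/n!, so more terms means a better approximation.
fn e() -> Computable {
    Computable(Box::new(|terms| {
        let mut sum = 0.0;
        let mut factorial = 1.0;
        for n in 0..terms {
            if n > 0 {
                factorial *= n as f64;
            }
            sum += 1.0 / factorial;
        }
        sum
    }))
}

fn main() {
    println!("{}", (e().0)(4));  // 2.6666...: crude
    println!("{}", (e().0)(20)); // 2.718281828459045: as good as f64 gets
}
```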
Do we even have a good way of encoding them in real life without computers?
May I propose a dedicated circuit (analog because you can only ever approximate their value) that stores and returns transcendental/irrational numbers exclusively? We can just assume they're going to be whatever value we need whenever we need them.
From time to time I see this pattern in memes, but what is the original meme / situation?
It's my favourite format. I think the original was 'stop doing math'
Thank you 😁
There are probably a lot of scientific applications (e.g. statistics, audio, 3D graphics) where exponential notation is the norm and there's an understanding about precision and significant digits/bits. It's a space where fixed point would absolutely destroy performance, because you'd need as many bits as required to store your largest terms. Yes, NaN and negative zero are utter disasters in the corners of the IEEE spec, but so is trying to do math with 256-bit integers.
For a practical demonstration of how stark a difference this is: the PlayStation (one) used an integer z-buffer ("fixed point"). This is responsible for the vertex popping/warping that the platform is known for. Floating-point z-buffers became the norm almost immediately after the console's launch, and we've used them ever since.
While it's true the PS1 couldn't do floating point math, it did NOT have a z-buffer at all.
Precision pilled.
The meme is right for once
I'm like, is that code on the right what I think it is? And it is! I'm so happy now.
Obviously floating point is of huge benefit for many audio DSP calculations, from my observations (non-programmer, just a long-time DAW user, from back in the day when fixed point with relatively low-resolution accumulators was often what we had to work with, versus now, when 64-bit floating point for processing is more the rule). E.g. fixed-point equalizers can potentially lead to DC offset in the results. I don't think people would be getting as close to modeling the non-linear behavior of analog processors with just fixed-point math either.
Audio, like a lot of physical systems, involves logarithmic scales, which is where floating point shines. Problem is, all the other physical systems, which are not logarithmic, only get to eat the scraps left over by IEEE 754. Floating point is a scam!
The only reason for floating point numbers is to use your laptop as a life buoy
I actually hate floats. Integers all the way (unless I have no other choice).
One of the most accurate ones in this format.
Floats are heresy
uses 64-bit double instead
Uhm, I haven't programmed in a low-level language in years. I use Python for my job now, and all I know are floats and ints. I don't know what this foreign language is you speak of.
The problem is that most languages have no native support for anything other than 32- or 64-bit floats, and some representations on the wire don't either. And most underlying processors don't have arbitrary-precision support either.
So either you choose speed and sacrifice precision, or you choose precision and sacrifice speed. The architecture might not support arbitrary precision, but most languages have a bignum/bigdecimal library that will do it more slowly. It might be necessary to marshal or store those values in databases or over the wire in whatever hacky way is necessary (e.g. encapsulating values in a string).
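For example, in Rust the `rust_decimal` crate is one such bigdecimal-style library (assumed as a dependency here); it gives exact decimal arithmetic, and the string round-trip for the wire comes for free:

```rust
use std::str::FromStr;
use rust_decimal::Decimal;

fn main() {
    // Exact decimal arithmetic, at the cost of speed compared to f64.
    let a = Decimal::from_str("0.1").unwrap();
    let b = Decimal::from_str("0.2").unwrap();
    let sum = a + b;
    assert_eq!(sum, Decimal::from_str("0.3").unwrap()); // exact, unlike f64

    // "Hacky" wire format: just ship it as a string.
    let wire = sum.to_string();
    println!("{wire}"); // 0.3
}
```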
Integers have fallen, billions must use long float.