Demystifying Floating Points
gridbugs.org
Floating points are a way of representing numbers based on the idea that the larger the magnitude of a number, the less we care about knowing its precise value. If you're anything like me (until recently), you use floating points regularly in your code with a rough understanding of what to expect, but without knowing the specifics of what floats can and can't represent. For example, what's the biggest floating point, or the smallest positive floating point? How many times can you add 1.0 to a floating point before something bad happens, and what is that something bad?
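These questions have concrete answers that are easy to poke at directly. Here's a minimal sketch in Rust (my choice of language here, assuming 32-bit `f32` floats; the constants `f32::MAX` and `f32::MIN_POSITIVE` are from the standard library) that answers all three:

```rust
fn main() {
    // The largest finite f32 value.
    println!("largest f32: {}", f32::MAX); // ~3.4028235e38

    // The smallest positive *normal* f32...
    println!("smallest positive normal f32: {}", f32::MIN_POSITIVE); // ~1.1754944e-38
    // ...and the smallest positive *subnormal* f32 (all exponent bits
    // zero, lowest mantissa bit set).
    println!("smallest positive subnormal f32: {}", f32::from_bits(1)); // ~1e-45

    // Repeatedly add 1.0 until doing so no longer changes the value.
    // For f32 this happens at 2^24 = 16777216: beyond that point the
    // gap between consecutive representable floats exceeds 1, so
    // x + 1.0 rounds back to x.
    let mut x = 0.0f32;
    while x + 1.0 != x {
        x += 1.0;
    }
    println!("adding 1.0 stops working at: {}", x); // 16777216
}
```

The "something bad" here is quiet: no panic, no error, the addition simply stops having any effect.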