As a dev I care about many of those things and feel like a voice in the wilderness.
So far as weird numerics are concerned, it's astonishing how complacent the microcomputer world is about that.
I think that 0.1+0.2 = 0.30000000000000004 (or whatever it is) is a distraction from "it just works", and if we want to offer more functionality to the "non-professional programmer" we should do something about it.
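To make that concrete, here's a minimal Python sketch (Python floats are the same IEEE 754 doubles most languages default to), contrasting binary floats with the standard library's Decimal:

```python
from decimal import Decimal

# Binary floating point: neither 0.1 nor 0.2 has an exact double representation,
# so the sum picks up a visible error.
print(0.1 + 0.2)                                           # 0.30000000000000004
print(0.1 + 0.2 == 0.3)                                    # False

# A decimal type keeps the answer a person would expect.
print(Decimal("0.1") + Decimal("0.2"))                     # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True
```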
Mainframes have had decimal numeric types from the very beginning, since the people involved realized that fuzzy math meant you cut checks for the wrong amount. In the micro world there was the time that Apple had an Integer BASIC which schools thought wasn't good enough, so they licensed a BASIC with (bad) floating point from Microsoft.
The people who care about numerics tend to be the people who care about performance; they can deal with the weirdness, but they don't want to give up any performance for it at all.
Floating point has a specific purpose in computing: dealing with measurements where you don't know the scale. Adding one nanometer to a light year (and getting back exactly a light year) is actually a useful feature in numerical physical computation.
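That absorption is easy to see with ordinary doubles; the light-year figure below is approximate and purely for illustration:

```python
# A light year vs. a nanometer, both in meters: the magnitudes differ by roughly
# 25 orders, far past the ~16 significant digits a double carries.
LIGHT_YEAR_M = 9.4607e15   # approximate
NANOMETER_M = 1e-9

print(LIGHT_YEAR_M + NANOMETER_M == LIGHT_YEAR_M)   # True -- the nanometer is absorbed
```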
Outside of scientific computation, machine learning, statistics, etc., there is seldom a legitimate use for floating point.
There's nothing to be done about performance, I think. Either float is exactly the right tool, or it's being abused and is entirely wrong.
Right now Javascript specifically is to blame, for its insane choice of making floats its only and default number type. This is a problem with Javascript only, though. If you use Java or C# or Go or pretty much any language except Javascript and type "float", the problem is the programmer, not the language.
> insane choice of making floats its only and default number type
Javascript math on integers is exact up to +/- 2^53 (about 9 * 10^15), produces integers, and works as expected. It's true that division produces a fraction, so if you're expecting 2's complement integer truncation (absolutely the kind of "funny math" that users in this context are expected to hate) you won't get it.
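That boundary is a property of IEEE 754 doubles, which is what JS numbers are; Python floats are the same format, so a rough sketch of where exactness ends:

```python
# JS numbers are IEEE 754 doubles; Python floats are the same format.
SAFE = 2 ** 53          # JS calls 2**53 - 1 Number.MAX_SAFE_INTEGER

print(float(SAFE - 1) + 1 == float(SAFE))   # True  -- still exact here
print(float(SAFE) + 1 == float(SAFE))       # True  -- 2**53 + 1 is not representable
print(float(SAFE) + 1)                      # 9007199254740992.0 -- the +1 silently vanished
```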
I don't see this as such a terrible design choice. The alternative is python's equally crazy menagerie of almost-but-not-quite-hidden numeric types waiting to bite you in unexpected ways.
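For what it's worth, here's a small Python sketch of the kind of bite being described, using the standard library's Decimal and Fraction:

```python
from decimal import Decimal
from fractions import Fraction

# Mixing Decimal with float refuses outright...
try:
    Decimal("0.10") + 0.20
except TypeError as exc:
    print(exc)          # unsupported operand type(s) for +: 'decimal.Decimal' and 'float'

# ...while mixing Fraction with float silently gives up exactness.
print(Fraction(1, 10) + 0.2)   # 0.30000000000000004 -- a plain float comes back
```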
> The alternative is python's equally crazy menagerie of almost-but-not-quite-hidden numeric types waiting to bite you in unexpected ways.
No, it's not. Python and JS aren't the only two approaches to numbers, even among dynamic languages (Scheme's numeric tower, which favors exact types, is another).
Nah. JS is particularly bad on this point, but the blame belongs to every language that isn't a special-purpose language for the narrow domains where float-first makes sense, yet has made the premature optimization of defaulting decimal literals with fractional parts to floats and/or not providing language-level arbitrary-precision decimal and rational types. That's a lot of languages, many more broadly influential than JS (C, for one).
Javascript is by no means uniquely bad. Far too many languages make float/double much easier to find and use than decimal types. If your decimal type is buried in the standard library, that's not really any better than it being in a third-party library. Java won't even let you use arithmetic operators on decimal types!
I think anything that wants to approximate physical reality finds floats useful. The real problem is when people use floats for human accounting and casual measurements like 0.1cm.
This happens because rational or fixed-point number types are not first-class number types alongside float and integer in popular programming languages.
I've seen large-company pricing teams do amateur stuff like using floats for their prices because rational number types are not first-class citizens. This probably comes from CPUs not having anything like them in their instruction sets: they all work with ints, floats, and vectors of the two, the assumption probably being that people could just use integers when they need something like a rational number type.
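A minimal sketch of that pricing failure mode and the two usual exact fixes (the figures are made up for illustration):

```python
from decimal import Decimal

# Binary floats drift on routine cent arithmetic.
line_items = [0.10, 0.10, 0.10]
print(sum(line_items))                  # 0.30000000000000004
print(sum(line_items) == 0.30)          # False -- an equality check in billing code misfires

# Two exact alternatives: integer cents, or a decimal type.
print(sum([10, 10, 10]))                            # 30 (cents)
print(sum([Decimal("0.10")] * 3, Decimal("0.00")))  # 0.30
```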
Rationals are computationally nasty. Even if you have hardware acceleration, they are going to be slow. Here's why:
An integer addition (or subtraction) can be done in 1 clock cycle; multiplies will take 3; and division takes far longer, think 20 cycles or more. Fixed point additions are the same as integer additions, while multiplies and divisions both take a multiply and a division (although you can convert one of them to a shift if it's binary fixed point).
Floating-point additions are more expensive than integer ones (due to the need for normalization), taking about 3 cycles. But multiplies also take about that amount of time, and you can do a fused multiply-add in the same amount of time, 3-4 cycles. Division, as with integers, is more painful, but it's actually faster than integer division: a floating-point division is going to be closer to 10 cycles, since you have less precision to worry about.
For rational arithmetic, you essentially have to run a GCD algorithm to do the normalization steps. Even if this were implemented with special hardware, it would be on the same order as an integer division operation in terms of latency. An addition or subtraction would cost a GCD, 4 multiplies, and an addition, while multiplication and division are cheaper at a GCD and two multiplications each.
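A rough Python sketch of where the cost goes in a single rational addition, following the textbook cross-multiply-then-reduce formula rather than any particular hardware design:

```python
from math import gcd

def rat_add(a_num, a_den, b_num, b_den):
    """Return a_num/a_den + b_num/b_den in lowest terms."""
    # Cross-multiply: three multiplies and one addition just to form the raw result.
    num = a_num * b_den + b_num * a_den
    den = a_den * b_den
    # Normalization is the expensive part: a GCD (division-like latency) plus two divisions.
    g = gcd(num, den)
    return num // g, den // g

print(rat_add(1, 10, 2, 10))   # (3, 10) -- exact, but every single add pays for a GCD
```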
In summary, if you're looking at the cost of doing a dot product, it's 4 cycles per element if integer or floating-point; 5-25-ish if fixed; and 50+ for rational (assuming hardware acceleration; pure software is an order of magnitude slower). It's telling that the recommendation for solving rational equations is to either convert to integers and do integer linear programming, or to do a floating-point approximation first and then refine in the rational domain.
A final note is that integer division is much more rarely vectorized than floating-point division, so fixed and rational arithmetic are not going to see benefits from vectorization on current hardware.
> Rationals are computationally nasty. Even if you have hardware acceleration, they are going to be slow
So what? If you have both rationals and floats (and maybe arbitrary-precision decimals in between) as first-class citizens, and give literals the most natural exact type unless a modifier to select floats is used, there are no unnecessary footguns.
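As a sketch of what "exact unless you opt into floats" feels like, Python's Fraction can stand in for what an exact-by-default literal would do (Python itself doesn't give literals this type):

```python
from fractions import Fraction

# Fraction standing in for exact-by-default literals.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True, exactly
print(Fraction(1, 3) * 3 == 1)                                # True -- no rounding anywhere

# Opting into floats only where their properties are actually wanted:
print(float(Fraction(1, 3)))                                  # 0.3333333333333333
```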
I entirely agree with you: the default numeric type in computing should be the numeric tower, with floats chosen on an opt-in basis where their properties are useful.
Also, zero shouldn't test as false in conditional statements, that's what we have booleans for (sorry, lisp friends nodding along until this moment, the empty list shouldn't be falsy either: just false, and None/nil).