I don't know about regime notation, but it is nice to see a new format with the most annoying ideas from IEEE754 removed:
- Two's complement instead of one's complement
- No infinities, no signed zeroes
- Only one exceptional value with the same encoding as the smallest signed integer: 10...0
The above means that comparisons work exactly like for signed integers (with the unique exceptional value behaving like -∞). Also, one can test whether x is invalid using `x == -x && x != 0` rather than the ugly `x != x`, i.e. no need to break reflexivity. Even if posits did nothing but brush the one's complement dust off IEEE754, that would be a positive change in my view.
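For concreteness, here's a small C sketch of both properties (the uint32_t carrier type and the function names are my own; only the NaR encoding 10...0 comes from the format):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch: 32-bit posits stored in a uint32_t, with the single
   exceptional value (NaR) encoded as 10...0, i.e. 0x80000000. */
#define POSIT32_NAR 0x80000000u

static inline bool posit32_is_nar(uint32_t p) {
    /* "x == -x && x != 0": only 0 and 0x80000000 equal their own
       two's-complement negation, so excluding 0 singles out NaR.
       Equivalent to p == POSIT32_NAR. */
    return p == (0u - p) && p != 0u;
}

static inline bool posit32_lt(uint32_t a, uint32_t b) {
    /* Comparison is plain signed-integer comparison (assuming the
       usual wrap-around conversion); NaR reinterprets as INT32_MIN
       and therefore sorts below everything, like -∞. */
    return (int32_t)a < (int32_t)b;
}
```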
As an aside, this is why I love HN (no /s). I have absolutely no idea what a posit is, and here is a community of experts nitpicking it, discussing it at length and in detail on this thread. No matter what the subject, SMEs always pop up out of nowhere and give the rest of us a crash-course education :)
IEEE754 uses sign-magnitude, not 1's-complement. It doesn't use any sort of complementation.
Edit: Also, it looks like posits also use sign-magnitude and not 2's-complement? I am confused as to where you are getting this from. [Edit again: I was wrong about this, see below.]
Yes, sign-magnitude is the right terminology, my mistake, but too late to edit. As for two's complement, look at slides 18-19 from the link somebody posted below:
Hm, they do indeed use 2's-complement... sort of. In case of a negative posit, they don't apply 2's-complement to the mantissa or anything like that, but rather to the entire posit without regard to the meaning of the bits. That's... kind of surprising, huh! Although it's not the first number format to do that. But it is a sort of 2's-complement, yes...
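A sketch of what that means in code (assuming a uint32_t carrier; the function name is mine):

```c
#include <stdint.h>

/* Negating a posit is two's-complementing the entire bit pattern,
   regime/exponent/fraction fields and all. */
static inline uint32_t posit32_neg(uint32_t p) {
    return 0u - p;  /* equivalently ~p + 1u */
}
/* Note the two fixed points: posit32_neg(0) == 0 (zero has no sign)
   and posit32_neg(0x80000000u) == 0x80000000u (NaR negates to itself). */
```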
The big problem with posits is that their relative error depends on their magnitude; this is terrible for a lot of engineering work and scientific simulations where you need to present an error estimate that includes computational error (for ML it's probably fine).
The relative error is bounded by 2^{-24} only in the interval [1.0e-6, 1.0e6] for the Posit32 format, whereas the corresponding interval is [1.17e-38, 3.4e38] for the IEEE binary32 format.
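The binary32 half of that claim is easy to check with nextafterf; the posit32 half would need a posit library, so it is only asserted in the comment below:

```c
#include <math.h>
#include <stdio.h>

/* Print the relative spacing ulp(x)/x of binary32 at several magnitudes.
   For normal binary32 values it stays between 2^-24 and 2^-23 across the
   whole normal range; posit32 only achieves that in roughly [1e-6, 1e6]
   and loses precision toward the extremes. */
int main(void) {
    const float xs[] = {1e-30f, 1e-6f, 1.0f, 1e6f, 1e30f};
    for (int i = 0; i < 5; i++) {
        float x = xs[i];
        float ulp = nextafterf(x, INFINITY) - x;
        printf("x = %.2e  ulp(x)/x = %.3e\n", x, ulp / x);
    }
    return 0;
}
```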
IMO, this isn't a big deal. If you want rigorous error estimates, you need to use some form of interval arithmetic (or ball arithmetic), as sketched below. Also, these types of engineering and scientific work are pretty much all 64-bit, while posits are mainly useful at <=32 bits. My ideal processor would have 64-bit floating point (with Inf/NaN behavior more like posits) and posits for 16 and 32 bit.
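For illustration, here's a toy directed-rounding interval addition in C; real libraries like MPFI (intervals) or Arb (balls) are far more complete:

```c
#include <fenv.h>
#include <stdio.h>

/* Toy interval addition: round the lower bound down and the upper bound
   up, so the true result is always enclosed. Compile in a way that
   prevents folding across fesetround, e.g. with -frounding-math. */
typedef struct { double lo, hi; } interval;

static interval iadd(interval a, interval b) {
    interval r;
    fesetround(FE_DOWNWARD);  r.lo = a.lo + b.lo;
    fesetround(FE_UPWARD);    r.hi = a.hi + b.hi;
    fesetround(FE_TONEAREST);
    return r;
}

int main(void) {
    interval x = {0.1, 0.1}, y = {0.2, 0.2};
    interval s = iadd(x, y);
    printf("0.3 in [%.17g, %.17g]\n", s.lo, s.hi);
    return 0;
}
```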
I think the problem is that you can't really restrict IEEE's values. I would like infinities to be an error unless explicitly constructed (so number / number = infinity would be an error, but number * infinity = infinity would not).
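On current hardware the closest approximation I know of is trapping the exceptions that manufacture infinities; a non-portable sketch using glibc's feenableexcept (explicitly constructed infinities don't trap, newly created ones do):

```c
#define _GNU_SOURCE   /* feenableexcept is a glibc extension, not ISO C */
#include <fenv.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Trap the operations that create infinities out of finite inputs. */
    feenableexcept(FE_DIVBYZERO | FE_OVERFLOW);

    double explicit_inf = INFINITY;      /* no arithmetic, no trap */
    printf("%g\n", explicit_inf * 2.0);  /* inf * finite is exact: no trap */

    volatile double zero = 0.0;          /* volatile to defeat constant folding */
    double d = 1.0 / zero;               /* raises SIGFPE here */
    printf("unreachable: %g\n", d);
    return 0;
}
```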
So I understand the use case, but they can't replace floats with those infinities removed, imho. They complement them. You can work around those things by wrapping posits, but sometimes you just need infinities (for example: I needed them yesterday). I don't always use them, but still somewhat frequently. They are not error codes; I work with the extended real number line [1], so infinity is a valid return value. What's the alternative... working with PositOrInfinity?
I would probably use an exceptional value and later treat that exceptional value as if I had encountered an infinity (usually they are later involved in calculations where infinities turn into 0). But that's just hard to understand for anyone but myself.
Maybe I don't work low-level enough, but I've never used `x != x` and wouldn't use `x == -x && x != 0` either, because that's just not readable. I want a function (isinf, isnumber etc) and I am happy.
> I want a function (isinf, isnumber etc) and I am happy.
Sure, isnan(x) is what one should use. The fact that it is `x != x` is an implementation detail. The problem is that it is also a hack that breaks the usual mathematical axioms for equality and for order relations. For instance, if you want to sort floating-point values, you have to write your own comparison predicate to handle NaN, because a NaN compares neither smaller than, greater than, nor equal to anything, including itself.
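For example, with qsort the comparator has to handle NaN explicitly; the placement policy below (NaNs last) is my own choice:

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Comparator restoring a total order by sorting NaNs last. Returning
   (a > b) - (a < b) alone answers 0 ("equal") for any comparison
   involving NaN, and such an inconsistent comparator makes qsort's
   behavior undefined. */
static int cmp_double(const void *pa, const void *pb) {
    double a = *(const double *)pa, b = *(const double *)pb;
    if (isnan(a)) return isnan(b) ? 0 : 1;
    if (isnan(b)) return -1;
    return (a > b) - (a < b);
}

int main(void) {
    double v[] = {3.0, NAN, -1.0, 2.0};
    qsort(v, 4, sizeof v[0], cmp_double);
    for (int i = 0; i < 4; i++) printf("%g ", v[i]);
    printf("\n");  /* -1 2 3 nan */
    return 0;
}
```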
As for infinities, they somewhat work for real numbers, but it gets more complicated for complex numbers. For instance, Annex G of the C standard stipulates that an infinite complex number multiplied by a nonzero finite complex number should yield an infinite complex number. Sounds reasonable, but consider:
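Multiplying the infinite value ∞ + 0i by the finite nonzero value 0 + 1i with the textbook formula (a + bi)(c + di) = (ac - bd) + (ad + bc)i (the specific operands here are my choice):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double a = INFINITY, b = 0.0;  /* infinite operand: inf + 0i */
    double c = 0.0,      d = 1.0;  /* finite nonzero operand: 0 + 1i */
    double re = a * c - b * d;     /* inf*0 - 0*1 = NaN */
    double im = a * d + b * c;     /* inf*1 + 0*0 = inf */
    printf("%g + %g*i\n", re, im); /* nan + inf*i */
    return 0;
}
```

The naive component-wise product thus comes out as NaN + ∞i, not the infinite complex number the rule demands.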
So Annex G recommends some complicated functions to be executed at each complex multiplication and complex division, which makes little sense for most applications, and I suspect few people do that. As an aside, Annex G breaks the whole point of NaNs, because it stipulates that numbers like (∞ + iNaN) should be considered infinities rather than NaNs, which means that NaNs are no longer necessarily viral.
All in all, what I find frustrating about these aspects of IEEE754 is that they complicate things under the hood, while the benefits seem to me limited to some specialized applications.
There are a whole bunch of people who insist that lots of the "baggage" of IEEE 754 is really important and must not be abandoned. For most applications it is just extra garbage, and for the few that need it there are other ways to do what they want.
That may be true, but you don't need a fundamentally different number representation in order to throw away most of it. Also, the die size cost is quite small.
The biggest saving you could make is probably forgoing exactly rounded results, stipulating only that the result has to be within ±1 lsb of the true value. That would save multipliers from computing a bunch of bits that don't end up in the result anyway, except in the rare cases where those bits decide the rounding. That would probably be a good trade-off for most AI chips. For general-purpose CPUs I don't think it is worth the breakage.
We literally had a post about "old not being broken" on the front page yesterday, with a significant discussion of developer churn, and now people on other pages are already clamoring to deprecate floating point. Let's have a little memory before we shake up our habits, shall we?