Not really. I don't think you've read his papers. The implementations in hardware lack range, but soft posits can expand to ridiculous ranges, and comparing the math, posits are superior (imo, granted, they're more or less similar) to BLC, depending on how they're implemented, of course.
> posits are superior (imo, granted, they're more or less similar) to BLC, depending on how they're implemented, of course
I'm very confused that you say posits are "more or less similar" to Binary Lambda Calculus. Posits are an inert data encoding: to interpret a posit as a number, we plug its parts (sign, regime, exponent and fraction) into a simple formula to get a numerical value. Those parts can have varying size (e.g. for soft posits), but the range of results is fundamentally limited by the formula's use of exponentials.
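For concreteness, here's a minimal sketch of that decoding in Haskell, assuming the standard posit layout (sign bit, regime run, `es` exponent bits, then fraction) and handling only non-negative inputs; `decodePosit` and the bit-list representation are my own choices, not any library's API:

```haskell
-- Minimal posit decoding sketch (non-negative posits only).
-- Bits are most-significant first; `es` is the exponent field width.
-- Value = 2^(k * 2^es + e) * (1 + fraction), per the posit formula.
decodePosit :: Int -> [Int] -> Double
decodePosit es (0 : rest) = 2 ^^ (k * 2 ^ es + e) * (1 + frac)
  where
    (run, after) = span (== head rest) rest   -- the regime is a run of equal bits
    k | head rest == 1 = length run - 1       -- run of m ones  -> k = m - 1
      | otherwise      = negate (length run)  -- run of m zeros -> k = -m
    body = drop 1 after                       -- skip the regime-terminating bit
    e    = foldl (\acc b -> 2 * acc + b) 0 (take es (body ++ repeat 0))
    frac = sum (zipWith (\i b -> fromIntegral b / 2 ^^ i) [1 ..] (drop es body))
decodePosit _ _ = error "sketch handles non-negative posits only"

-- e.g. the 8-bit, es=2 posit 01101011:
-- decodePosit 2 [0,1,1,0,1,0,1,1] == 112.0   (2^6 * 1.75)
```

Note there is no loop, recursion on the value, or anything open-ended here: it's a fixed arithmetic formula over the fields.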
In contrast, BLC is a Turing-complete, general-purpose programming language. To interpret a BLC value as a number, we (see the sketch after this list):
- Parse its binary representation into lambda abstractions (00), function calls (01) and de-Bruijn-indexed variables (encoded in unary)
- Apply the resulting term to the symbol `one-more-than` and the symbol `zero`
- Beta-reduce that expression until it reaches a normal form. There is no way, in general, to figure out if this step will finish or get stuck forever: even if it will eventually finish, that could take trillions of years or longer.
- Read the resulting symbols as a unary number, e.g. `one-more-than (one-more-than zero)` is the number two
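To make those steps concrete, here's a small Haskell sketch, assuming BLC's usual bit encoding; it evaluates with Haskell closures rather than explicit beta-reduction, and all the names (`Term`, `toNumber`, etc.) are mine, not from Tromp's tooling. The caveat from the reduction step applies in full: `toNumber` can loop forever.

```haskell
data Term = Var Int | Lam Term | App Term Term deriving Show

-- Step 1: parse bits (00 = abstraction, 01 = application, 1^n 0 = de
-- Bruijn variable n), returning the term plus any leftover bits.
parse :: String -> Maybe (Term, String)
parse ('0':'0':bs) = do { (t, r) <- parse bs; pure (Lam t, r) }
parse ('0':'1':bs) = do { (f, r) <- parse bs; (a, r') <- parse r; pure (App f a, r') }
parse ('1':bs)     = case span (== '1') bs of
                       (ones, '0':r) -> Just (Var (length ones + 1), r)
                       _             -> Nothing
parse _            = Nothing

-- Steps 2-3: evaluate, treating the two symbols as extra value forms.
data Value = Fun (Value -> Value)  -- an abstraction, as a Haskell closure
           | Succ Value            -- `one-more-than` applied to something
           | Zero                  -- `zero`

eval :: [Value] -> Term -> Value
eval env (Var i)   = env !! (i - 1)               -- indices are 1-based in BLC
eval env (Lam t)   = Fun (\v -> eval (v : env) t)
eval env (App f a) = apply (eval env f) (eval env a)

apply :: Value -> Value -> Value
apply (Fun f) v = f v
apply _       _ = error "tried to apply a symbol as a function"

-- Step 4: read `Succ (Succ ... Zero)` back as a unary count.
count :: Value -> Maybe Integer
count Zero     = Just 0
count (Succ v) = (1 +) <$> count v
count _        = Nothing

toNumber :: String -> Maybe Integer
toNumber bits = do
  (t, "") <- parse bits   -- BLC terms are self-delimiting
  count (apply (apply (eval [] t) (Fun Succ)) Zero)

-- e.g. the Church numeral two, \f.\x. f (f x):
-- toNumber "0000011100111010" == Just 2
```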
Posit-style numbers can certainly be represented in BLC, by writing a lambda abstraction that implements the posit formula; but BLC can implement any other computable function as well, including many that grow much faster than the exponentials in the posit formula (e.g. this thread has a nice explanation of many truly huge numbers: https://www.reddit.com/r/math/comments/283298/how_to_compute... )
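To get a feel for how fast "much faster than exponentials" can be, here's an illustrative sketch (my own, not from that thread) of Knuth's up-arrow hyperoperation, which BLC can express just as easily:

```haskell
-- Knuth's up-arrow: n arrows of base a applied to b.
-- One arrow is plain exponentiation; each extra arrow iterates the last.
up :: Integer -> Integer -> Integer -> Integer
up a 1 b = a ^ b
up a _ 1 = a
up a n b = up a (n - 1) (up a n (b - 1))

-- up 3 1 3 == 27
-- up 3 2 3 == 7625597484987           (3^3^3)
-- up 3 3 3 is far beyond any posit's range, yet still computable
```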
PS: I still prefer type 1 unums, due to their fallback to intervals :)