The article discusses different encodings of numbers to give non-dense representations of numbers exceeding (2^64)-1. (These representations are inherently non-dense by the pigeonhole principle.) But I feel like this is missing a key point. A 64-bit string that we agree represents only numbers can represent 18446744073709551616 different numbers. The choice of which numbers are represented is completely up to us. If we want certain properties (dense unsigned integers) we end up with the highest being 18446744073709551615. If we want other properties (nearly logarithmic distribution, signedness, and good mapping to hardware for arithmetic) we might end up with FP64 with a maximum value around 10^308. And if we want no interesting property constraints except being dual to a program on a Turing machine, we end up with a busy beaver number. But... remember, we can choose any 18446744073709551616 values we want to be representable. There's no restriction on the interpretation of these strings; or, equivalently, the amount of external information required to explain the interpretation of these strings is unbounded. As a result, we can choose any computable number to be encoded by the string 64'b1, or by any other string, and exceed any desired bounds.
Every finite integer is computable. We often represent non-integer numbers.
> your encodings will, unlike those in the lambda calculus, be completely arbitrary
Well, they /may/ be completely arbitrary. They may not be. The key is to choose encodings that are useful for the problem domain. Admittedly if the problem domain is "winning at the schoolyard game of saying a bigger number," those encodings may look arbitrary for most other purposes.
But there's actually an intent to my original comment besides being pedantic. The most general way to think of a 64-bit numeric representation is as a list of 2^64 numbers, feeding into a 2^64:1 mux, with the 64-bit string being the select bits of the mux. (Or, equivalently, a 2^64-entry, arbitrary-width ROM with one number per entry, with the 64-bit string being the address input of the ROM. Same thing.) The two questions you must answer, then, are (a) which 2^64 numbers are most useful in your problem domain; and (b) are there hardware optimizations to reduce the (ridiculous) scale of the mux/ROM model that are so valuable that you're willing to make compromises on which numbers you select?
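Here's a toy version of that ROM model in Python, shrunk to 3 select bits (the 2^64-entry table is the ridiculous part); the table contents below are arbitrary placeholders, which is exactly the point:

```python
# The ROM model: a bit string is nothing but an address into a table of
# arbitrary values; all of the meaning lives in the table.
ROM = {
    0b000: 0,            # dense unsigned integers would continue 0, 1, 2, ...
    0b001: 1,
    0b010: 10**308,      # ...but nothing stops us from storing huge values,
    0b011: -1,           # signed values,
    0b100: 0.5,          # or non-integers (remaining addresses left unused)
}

def interpret(select_bits: int):
    """Decode a bit string by looking it up in the arbitrary table."""
    return ROM[select_bits]

print(interpret(0b010))  # the bits 010 "mean" 10^308 only because we said so
```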
>First of all, every finite number is computable by definition.
You likely mean every integer or rational is computable (although not by definition). There are finite real numbers that are not computable; in fact, almost all of them are not, since the computable reals are only countable.
I believe you are correct; however, I am struggling to see how.
If you have the time to help me with my understanding then I would appreciate it.
I'm looking at Wikipedia's formal definition, which says that for x to be computable, you need to provide a computable function f from the naturals to the integers such that, if you pick a denominator of a fraction (n), this function can give a numerator such that (f(n)-1)/n and (f(n)+1)/n end up straddling x.
So, for an integer N, you make f(x) = xN; then (f(n)-1)/n = N - (1/n) and (f(n)+1)/n = N + (1/n).
Therefore, for any integer N, it is computable.
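Here's that argument as a quick sanity check in Python, with Fraction keeping the arithmetic exact and N = 42 as an arbitrary example:

```python
from fractions import Fraction

N = 42                      # any fixed integer

def f(n: int) -> int:
    return n * N            # the approximating function for N

# The definition's straddle condition: (f(n)-1)/n <= N <= (f(n)+1)/n
for n in range(1, 100):
    assert Fraction(f(n) - 1, n) <= N <= Fraction(f(n) + 1, n)
```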
Now, what is stopping me from doing something similar with a real?
If I say: f(x) = floor(xN)
Now (f(n)-1)/n = floor(n*N)/n - (1/n)
It is at this point where I realise I need to go to bed and sleep. If you see this and have the time to explain to me where it falls apart with reals, then I will be most happy. To be clear - I'm quite sure I am wrong, and this isn't me being passive aggressive about it.
You were looking at the definition of a computable real number [1].
The issue there is whether you can approximate the number arbitrarily well.
An issue that is nonexistent in integers, as you note in
> Therefore, for any integer N, it is computable.
> If I say: f(x) = floor(xN)
For many definitions of a real, it's not at all clear whether you can compute this f(x). The ability to compute f already amounts to being able to approximate the real arbitrarily well.
For example, for Chaitin's number, you cannot compute your f(n) except for a few small values of n.

[1] https://en.wikipedia.org/wiki/Computable_number
Thanks for your reply - and I probably would have missed it except by complete chance.
I still don't think it has properly clicked, because the function "f(x) = xN" still needs me to know what N is - despite it being an integer.
For example, let's suppose that you manage to have a conversation with God and you discover that BB(100) has the value of 42, and Chaitin's number is 1/2.
Does Chaitin's number suddenly become computable? My intuition is that it remains uncomputable, but maybe my intuition is wrong... I guess by the definition, it is computable, but you can't prove it.
I'm also struggling with the reciprocal of BB(100). This is rational, so maybe it too is computable.
I guess I am struggling with the lack of a constructive proof and what that means - it is like we are saying "there exists an algorithm to do X, but the algorithm can't be known". Maybe that is the core of me being wrong - we can prove that such an algorithm exists, and so 1/BB(100) is computable just like BB(100) is computable - but every time I write this down, I still can't see how this logic doesn't also break down with actually non-computable numbers, e.g. "There is a function f(x) which returns an integer; I can't tell you what the integer is, but it is the one that results in showing that Chaitin's number is computable".
Anyway, if you notice this and do reply, then very much thank you - and apologies if your reply goes unread
You wouldn't be able to express BB(100) explicitly. My article is about a lower bound on the much smaller BB(63), and it's already very hard to give a sense of how large that is. God would of course be able to give you the 100-bit program for it.
> Chaitin's number is 1/2
We know that Chaitin's number is not computable, so it cannot be 1/2 or any other rational. It has an infinite binary expansion, of which we can only ever compute finitely many bits.
> I guess by the definition, it is computable
It's not.
> I'm also struggling with the reciprocal of BB(100). This is rational, so maybe it too is computable.
A number x is computable if and only if its reciprocal is. Any fixed integer, like N = BB(100), is computable (with the trivial program "print N"), and therefore so is its reciprocal.
What is not computable is the function BB(·) itself.
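To illustrate the distinction in Python (the literals below are hypothetical stand-ins; nobody can identify the correct one):

```python
# BB(100) is a fixed integer, so *some* trivial program prints it:
def candidate_a():
    print(42)        # hypothetical literal; surely wrong

def candidate_b():
    print(10**100)   # another hypothetical literal; also surely wrong

# One of the programs "print N" (over all integers N) is correct, which
# is what makes the number BB(100) computable, nonconstructively. But no
# single program can compute BB(n) for *every* n: such a program would
# let you solve the halting problem.
```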
> there exists an algorithm to do X
What is X?
If you want to discuss this further, please just send your reply by email.
>For example, let's suppose that you manage to have a conversation with God and you discover that BB(100) has the value of 42, and Chaitin's number is 1/2.
An uncomputable number cannot be expressed as a finite sequence of digits in any computable base: a finite digit expansion would make it rational, and every rational is computable. So Chaitin's constant must consist of an infinite number of digits regardless of what base you choose, so long as the base is computable.
So "God" or an oracle can never actually produce Chaitin's constant in any finite representation, all an oracle can do is behave as a function where you give it an integer N, and it returns the N'th digit of Chaitin's constant.
It's interesting that you are counting how many lambda calculus terms can fit into 64 bits given a particular encoding, but you're not making room in the 64 bits for an explanation of how your encoding works (let alone how lambda calculus works!). Of course, any encoding you used to include an "explanation" in the 64 bits would itself require some explanation. I guess what I'm getting at is that I'm not sure that your approach is any less arbitrary than any other approach.
Before I can understand your comment, could you please add an explanation about how the English language works? And for whatever language you use to explain that (obviously not English), please provide an explanation for that language as well... sorry; couldn't resist /s
I don’t think it’s possible to provide an infinite chain of such explanations, nor am I attempting to define a problem like “find the largest number representable in English with n characters.”
A lower bound on the information content of a string is the length of the shortest program that can output the string (and halt); this is called its Kolmogorov complexity. Lambda calculus is (one of) the most compact ways to encode programs and requires the least context to interpret. Thus it's totally fair to say that the largest number encoded in LC in some number of bits is a much more fundamental and less arbitrary candidate than just randomly deciding that "1" now refers to some externally defined large number.
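As a rough sketch of how little context that encoding needs, here's my understanding of the binary lambda calculus encoding (terms in de Bruijn form) in Python; treat the details as a sketch rather than the author's exact spec:

```python
# Binary lambda calculus encoding (de Bruijn indices):
#   abstraction      -> 00 <body>
#   application      -> 01 <function> <argument>
#   variable index i -> i ones followed by a zero
def encode(term) -> str:
    kind = term[0]
    if kind == "lam":    # ("lam", body)
        return "00" + encode(term[1])
    if kind == "app":    # ("app", f, x)
        return "01" + encode(term[1]) + encode(term[2])
    if kind == "var":    # ("var", i), with i >= 1
        return "1" * term[1] + "0"
    raise ValueError(kind)

print(encode(("lam", ("var", 1))))           # identity \x.x  -> "0010"
print(encode(("lam", ("lam", ("var", 1)))))  # \x.\y.y        -> "000010"
```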
The author has also implemented Binary Combinatory Logic. I also find combinators more "natural" than lambda calculus (although the latter is certainly more useful day-to-day!); however, I still find lambda calculus a reasonable foundation since the Binary Lambda Calculus self-interpreter is shorter than the Binary Combinatory Logic self-interpreter (e.g. see the author's page at http://tromp.github.io/cl/cl.html).
Lambda calculus is present in most "modern" programming languages, i.e. those which use lexical scope rather than global or dynamic scope; and those which call functions to return results (especially first-class functions) rather than jumping to subroutines for their effects. It's why Python functions are written with the keyword `lambda`, and it's why Amazon's function-as-a-service product is called Lambda.
For example, say you're refactoring some code and come across something like this (a hypothetical Python snippet, just for illustration):
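```python
price, shipping, tax_rate = 100.0, 5.0, 0.2   # hypothetical values

# Before: the same sub-expression appears twice.
total = price * (1 + tax_rate) + shipping * (1 + tax_rate)

# After: abstract the repetition into a function and apply it twice.
# This rewrite is beta-abstraction from the lambda calculus, and it's
# safe precisely because the lambda closes over tax_rate lexically.
with_tax = lambda amount: amount * (1 + tax_rate)
total = with_tax(price) + with_tax(shipping)
```

With dynamic scope, the second version could silently change meaning depending on where `with_tax` gets called; lexical scope is what makes the rewrite a mechanical, always-valid transformation.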
To the sibling comment about arbitrariness, we could use a hybrid where we trade off some bits from IEEE FP to introduce far reaches and also some precision there.. so like, keep 32 bits or 54 bits for IEEE compatibility, then switch to "extended" ranges for e.g. BB numbers, higher alephs, etc..
There was this one system for calculation with infinities that avoided the Hilbert Hotel problem.. can't find it, but it was called something like Infinioid or some other play on the name. Would be neat to bolt those on too :)
> To the sibling comment about arbitrariness, we could use a hybrid where we trade off some bits from IEEE FP to introduce far reaches and also some precision there.. so like, keep 32 bits or 54 bits for IEEE compatibility, then switch to "extended" ranges for e.g. BB numbers, higher alephs, etc.
This sort of trick/hack is the reason why theorems in (algorithmic) information theory involve constant factors. For example, we can define an image compressor which outputs a single `1` when given the Lenna test image[0], and otherwise acts exactly like PNG except prefixing its output with a `0`. To decode: a `1` decodes to the Lenna image, and anything starting with `0` gets decoded as PNG without the leading `0`. This gives perfect compression with no loss of quality, when tested on that Lenna image ;)
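A sketch of that scheme, using a whole flag byte instead of a single bit to keep it simple, with lenna_bytes as a stand-in for the actual image file:

```python
# Stand-in for the raw bytes of the Lenna test image.
lenna_bytes = b"...the PNG bytes of the Lenna image..."

def compress(data: bytes) -> bytes:
    if data == lenna_bytes:
        return b"\x01"            # the special-cased image: one byte total
    return b"\x00" + data         # everything else pays one byte of overhead

def decompress(data: bytes) -> bytes:
    return lenna_bytes if data == b"\x01" else data[1:]

assert decompress(compress(lenna_bytes)) == lenna_bytes
assert decompress(compress(b"anything else")) == b"anything else"
```

The one byte of overhead on every other input is where the hidden cost goes, which is why such theorems only ever hold up to an additive constant.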
> People sometimes mistakenly think that numbers or data in computers exist in some meaningful way.
That's basically Platonism. I think it's a reasonable position for some things, e.g. Booleans (two-valued logic), natural/integer/rational numbers, tuples, lists, binary trees, etc. I think it's meaningful to talk about, say, the number 2, separately from the way it may be encoded in RAM as a model of e.g. the number of items in a user's shopping cart.
This position gets less reasonable/interesting/useful as we consider data whose properties are more arbitrary and less "natural"; e.g. there's not much point separating the "essence" of an IEEE754 double-precision float from its representation in RAM, or pontificating about the fundamental nature of an InternalFrameInternalFrameTitlePaneInternalFrameTitlePaneMaximizeButtonWindowNotFocusedState[0]
The question in the article is whether lambda calculus is "natural" enough to be usefully Platonic. It's certainly a better candidate than, say, Javascript; though I have a soft spot for combinatory logic (for which the author has also created a binary encoding, though its self-interpreter is slightly larger), and for alternatives like concatenative languages, linear combinators (which seem closer to physics), etc.