
The problem, as stated, provably has no answer. Assume such a number exists. Call it n. Now define a new 64-bit numbering scheme such that each number x is interpreted as n+x. n+x > n, which invalidates the hypothesis. There needs to be more constraints for this to be interesting. Like largest number representable where the representation is closed under addition and multiplication, or something like that.
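In Python-ish terms, the construction is roughly this (a sketch; the names are just illustrative):

    # Given any claimed maximum n, define a scheme that reads a 64-bit
    # pattern x as n + x; the largest value it covers then exceeds n.
    def decode_offset(x, n):
        assert 0 <= x < 2**64   # x is an ordinary 64-bit pattern
        return n + x            # reinterpret it as n + x

    # decode_offset(2**64 - 1, n) == n + 2**64 - 1 > n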


> There needs to be more constraints for this to be interesting.

Scott Aaronson's quote in the article provides this constraint:

> Precisely because the Turing machine model is so ancient and fixed, whatever emergent behavior we find in the Busy Beaver game, there can be no suspicion that we “cheated” by changing the model until we got the results we wanted.

Your "each number x is interpreted as n+x" is a clear example of the cheating that makes for an uninteresting scheme.


That doesn’t provide a constraint. The quote you mentioned is about why to model computation as a Turing machine vs. some other model, not about how to “find the largest number representable in 64-bits”, which doesn't have an answer.


I gave an answer for the largest number known to be representable in 64 bits with a scheme that is clearly not cheating.


Define “cheating”.


I believe tromp implies a constraint of “existed before we started answering the question”.




This is the kind of thing you learn in the first lecture of a discrete mathematics class that's required for every single CS program in the world.

It's very obvious if you think about it. If the encoding in memory is n, that doesn't mean it conceptually represents the value n. It can be n+x for any value of x. For example, you can declare that 0 represents $0, 1 represents $0.01, 2 represents $0.02, and so n represents n cents. This is pretty common in tax calculation applications. Then, you can also say n represents n+99^99 so that what GP said holds. They just provided a formal argument for it which proves it by induction, as we call it.
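A tiny illustration (the values are hypothetical) of the same bit pattern under two declared interpretations:

    # The stored 64-bit pattern is identical; only the declared meaning changes.
    raw = 2                      # bits in memory
    as_cents = raw               # "n means n cents": $0.02
    as_offset = raw + 99**99     # "n means n + 99**99": astronomically large
    print(as_cents, as_offset)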


> They just provided a formal argument for it which proves it by induction, as we call it.

This is a proof by contradiction, I think.

Statement 1: assume n exists and is the largest number representable in 64 bits.

Statement 2: the numbers in the n+x system are larger than n.

The two statements contradict each other, and since statement 2 is true, statement 1 must be false. The addition in n+x is not induction, just a way to define a system.


I think that answer is pretty obvious with some basic mathematical training.


The computer science version is: how much data can you compress into 64 bits? Well, you can come up with any payload, create a compression algorithm that returns 0 if the input matches that payload and 1 ++ input otherwise, then compress that payload. If the "number" is the binary number expressed by the payload, that is your maximum number. You can go as high as you like.
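As a sketch in Python (the names, and the use of bytes in place of bits, are just for illustration):

    # "Cheating" compressor: one chosen payload maps to the single symbol 0,
    # everything else gets a 1 prefixed so it can never collide with it.
    PAYLOAD = b"any enormous blob you like"

    def compress(data):
        return b"0" if data == PAYLOAD else b"1" + data

    def decompress(data):
        return PAYLOAD if data == b"0" else data[1:]

    assert decompress(compress(PAYLOAD)) == PAYLOAD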


That's why you should include the size of the decompressor (AKA using a self-extracting archive). That's what the Hutter Prize does, for example http://prize.hutter1.net

The encodings in this article are self-extracting archives, although they don't use a bloated, arbitrary format like x86_64 Linux ELF executables. Instead they use Binary Lambda Calculus, which has a variety of interpreters (the smallest being 206 bits, written in BLC itself).
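To make the accounting concrete, a rough sketch (not the prize's exact rules):

    # Score a submission by the total size of the self-extracting archive:
    # decompressor plus compressed data, so nothing can hide in the decoder.
    def score(decompressor: bytes, compressed: bytes) -> int:
        return len(decompressor) + len(compressed)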


this was my thought as well


Posits, formerly unums: these are amazing, the best solution I've seen. Look up John Gustafson; this article doesn't mention him or posits. They can trade precision for dynamic range, or dynamic range for precision, making the best use of the bits depending on the application.



