I still have difficulty understanding, at a high level, why lengths in triangles can produce irrational numbers.
I guess once you accept that area in two dimensions involves multiplication, it is a necessary consequence.
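The standard concrete example: a right triangle with two unit legs already forces an irrational length, since by the Pythagorean theorem

$$c^2 = 1^2 + 1^2 = 2, \qquad c = \sqrt{2},$$

and $\sqrt{2}$ is irrational (writing $c = p/q$ in lowest terms gives $p^2 = 2q^2$, which forces both $p$ and $q$ to be even, a contradiction).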
I wonder what it means for projects such as the Wolfram Physics Project, where space is discrete.
Do truly right angled triangles even exist in nature?
Is doing a refactoring ever the simplest thing that could have been done?
I think "do the simplest thing" should be "do the thing that increases complexity the least" (which might be difficult to do and require restructuring).
I wouldn't say "translating", but "finding/constructing a model that satisfies the business rules".
This can be quite hard in some cases, in particular if some business rules contradict each other or can be combined in surprisingly complex ways.
This reminds me of primitive words [1]: A primitive word is a word that is not the (2+ times) repetition of any other word.
This is slightly different from a non-pattern word in the article, which is a word that is not a 3+ times repetition of any other word.
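(As an aside, primitivity is cheap to test; here is a small sketch of the classic rotation trick, my own code, not from the article:)

```python
def is_primitive(w: str) -> bool:
    # w = u^k for some k >= 2 exactly when w occurs inside (w + w)
    # with its first and last characters chopped off
    return w not in (w + w)[1:-1]

assert is_primitive("010")          # not a repetition of anything
assert not is_primitive("0101")     # = "01" repeated twice
```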
The anti-pattern game is about extending words such that they do not contain a pattern word.
I wonder how the situation changes if 2-times repetitions counted as patterns (i.e. non-primitive words).
For primitive words, it is an open problem whether the language of primitive words (over any non-trivial finite alphabet) is context-free.
I wonder if the language of words that don't contain patterns (or the language of non-primitive words) is context-free.
Yes, it seems like there are only finitely many words over a binary alphabet that do not contain a non-primitive word (0, 01, 010 and 1, 10, 101).
How would it change if the alphabet has three symbols?
It certainly seems like you can get much longer words. I just had a quick go and came up with
0 1 0 2 0 1 2 0 1 0 2 0 2 1
but I stopped there because it gets tedious to check manually for repetition. Might be worth writing a little script to produce the word where each letter is the smallest possible number that doesn't create repetition.
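Here's a quick sketch of such a script (my own code; "repetition" below means a square, i.e. a factor of the form xx). Note that the pure greedy choice dead-ends at 0102010, where no third letter avoids a square, so a bit of backtracking is needed:

```python
def has_square_suffix(w):
    """True if w ends in a square xx (only the end can be new after an append)."""
    n = len(w)
    return any(w[n - 2 * k : n - k] == w[n - k :] for k in range(1, n // 2 + 1))

def smallest_squarefree_word(length, alphabet="012"):
    """Depth-first search for the lexicographically smallest square-free word."""
    word = []
    def extend():
        if len(word) == length:
            return True
        for a in alphabet:  # try letters in increasing order
            word.append(a)
            if not has_square_suffix(word) and extend():
                return True
            word.pop()
        return False
    return "".join(word) if extend() else None

print(smallest_squarefree_word(20))  # starts 0102012021...
```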
Just checked with AI: Thue showed in 1906 that there are infinitely many square-free words (:= words that don't contain a non-primitive word) over an alphabet with at least 3 symbols.
On p.2 they follow my idea of adding the lowest possible letter at the end, although they generalise it to adding the letter as close to the end as possible. They conjecture that this process does not stop. It always amazes me how quickly combinatorics arrives at questions that no one knows the answer to.
How much does this leak into typical math-related proofs?
If someone created LeanQ, where quotient types are built in nicely, how much work would it be to port the Fermat project from Lean to LeanQ?
AIUI, this cannot lead to inconsistency or a "wrong" proof. So if a proof checks out in Lean, that's good enough; you might not even need a separate LeanQ.
Does this mean that most of the proofs in Lean and LeanQ would look exactly the same, and it's just that the proofs of some technical low-level lemmas around quotient types (which I guess mathematicians are not really interested in anyway) look different?
For example, if I want to prove that a+b=b+a, I wouldn't care whether I'm working directly in Peano arithmetic or just have a construction of the Peano axioms in ZFC, as in both cases the proofs would be identical (some axioms in PA would be lemmas in ZFC).
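For what it's worth, here is that example as a minimal Lean 4 sketch (using core Nat lemmas); the induction looks the same whether the naturals are axiomatic or constructed inside a set theory:

```lean
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => rw [Nat.add_zero, Nat.zero_add]
  | succ n ih => rw [Nat.add_succ, ih, Nat.succ_add]
```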
If that's the case with quotients, I wonder why it's such a big deal for some.
AFAICT, this issue only comes up if you form the quotient of a proposition by a relation. But there is no point in doing that (all the proofs of a proposition are already equal!) so it's not an issue in practice and it wouldn't be difficult to fix.
However, Lean's subject reduction (SR) is broken in other ways which do show up in practice: definitional proof irrelevance is incompatible with Lean's computation rule for Acc. That one is much harder to fix.
The impossible chess board problem must have something to do with the idea of solving tree eval with little memory (https://youtu.be/wTJI_WuZSwE?si=lgTc65RhXQesKchR)!
When the chess board is random, it feels impossible to add any information. However, by choosing which cell to flip, and because your partner knows you followed an agreed-upon operation, you can encode the position information in the random data without needing more cells.
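A sketch of the standard XOR encoding for the 64-square version (my own illustration, not taken from the video): XOR together the indices of all heads-up squares; flipping a single square can move that XOR to any target, with square 0 acting as the "change nothing" flip.

```python
import random

def board_xor(board):
    """XOR of the indices of all heads-up (1) squares."""
    acc = 0
    for i, coin in enumerate(board):
        if coin:
            acc ^= i
    return acc

board = [random.randint(0, 1) for _ in range(64)]  # the jailer's random layout
target = random.randrange(64)                      # the square to communicate

flip = board_xor(board) ^ target  # the single coin to flip
board[flip] ^= 1                  # (flipping square 0 leaves the XOR unchanged)

assert board_xor(board) == target  # the partner recovers the square
```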
Doing the proof inside the algorithm (i.e. doing inline induction over the algorithm's recursion structure) has the advantage that the branching structure doesn't have to be duplicated, as it is in an external proof like the one I did.
In my proof I didn't struggle so much with the induction, but much more with basic Lean stuff, such as not getting lost in the number of variables, dealing with r.fst/r.snd vs r = (fst, snd), and the confusing difference between .get? k and [k]?.
Nice job. My attempt at the initial strong induction proof was a long time ago, so I don't remember the details. It definitely followed a similar structure to yours (but this was before `omega`, I'm pretty sure). Can't quite remember where I got stuck, but your proof is good. Thanks!
All I can say here is that I do a certain level of validation on the server side. As one of the goals of this project is to popularize the fun of mathematics among the general public, I think I would have to avoid an open network configuration in order to strictly conduct academic verification. The algorithm itself is publicly available, so anyone can verify whether the computation steps are correct.
https://github.com/nakatahr/gridbach-core
Never trust the client. You must do a full verification. IIUC from another comment, you only ask the client to return the interval they tested and some token to ensure the server sent them that interval.
The idea is that it's very hard to find the two primes and very hard to prove that they are actually prime. But if the client sends you both primes along with a primality certificate for each, then the verification is very fast. Also, you can store that info so people can see it.
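A minimal sketch of what that server-side check could look like (names and structure are my own, not from gridbach-core; sympy.isprime stands in for a real certificate check):

```python
from sympy import isprime  # deterministic, not probabilistic, at these input sizes

def verify_goldbach_claim(n: int, p: int, q: int) -> bool:
    """Accept the claim 'n = p + q with p, q prime' only after re-checking it."""
    return n % 2 == 0 and n == p + q and isprime(p) and isprime(q)

assert verify_goldbach_claim(98, 19, 79)      # valid decomposition
assert not verify_goldbach_claim(98, 21, 77)  # sums correctly, but 21 and 77 are composite
```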
It would still take a nontrivial amount of computation to do all the verification afterwards. Back-of-the-envelope calculations suggest it should take less than 100x longer to find the two primes than to verify them.
It'd be neat to do the verification in the same manner by redistributing one client's results to another, thereby obtaining a proof modulo client collusion.