> Turing proved that the halting problem cannot be solved on a Turing machine. This opens the possibility for the existence of programs that might be "productive" in your sense, but might also never stop, and it is not possible to know in advance.
Maybe. I struggle to imagine a practical case where we want to run a program that we didn't and couldn't know whether it worked though.
> I am not convinced of this at all. Perhaps true if you are talking about banking systems or web applications, but probably not true if you are talking about AI. Intermediary states of an endless computation might be interesting. Maybe this is a way to obtain unbounded creativity. Maybe this is the way to build minds. We don't know enough.
This is ridiculous reasoning. "We don't understand X, we don't understand Y, therefore X might be related to Y."
> A more general observation: I find that we live in an era that is too obsessed with productivity at the cost of fundamental research, dreaming and imagination. I am convinced that the latter mindset is the only one that can bring qualitative changes to our culture and civilisation, and I think that our long-term survival depends on such qualitative jumps.
> Of course, I also understand that someone has to take care of the plumbing...
Choosing to look at non-halting programs rather than halting programs is like choosing to look at crystal energy instead of nuclear fusion. A certain amount of willingness to question baseline assumptions is valuable, but I think our long-term survival depends far more on being willing to acknowledge the fundamental results of the field and put in the hard engineering work necessary to achieve things under the constraints of reality, rather than trying to wave them away.
> Maybe. I struggle to imagine a practical case where we want to run a program that we didn't and couldn't know whether it worked though.
The vast majority of the programs used in real life are not formally proven and are written in Turing complete languages. That is already the world you live in.
> This is ridiculous reasoning. "We don't understand X, we don't understand Y, therefore X might be related to Y."
I didn't say that.
Notice that a (very simplified) model of the human brain, the recurrent neural network, is already Turing complete. Notice also that humans (and Darwinian evolution, for that matter) display a capacity for creativity that has not been successfully replicated by AI efforts yet. Notice further that non-halting computations (e.g. infinitely zooming a Mandelbrot set) are the closest thing we have to unbounded creativity.
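To make the Mandelbrot example concrete, here is a rough sketch (plain Python, purely illustrative): a computation that never halts on its own, yet every intermediate state is something you can look at.

    # Illustrative sketch only: an endless Mandelbrot "zoom" as a generator.
    # It never terminates by itself, but every yielded frame is usable output.

    def mandelbrot_frame(cx, cy, half_width, size=40, max_iter=60):
        """Render one ASCII frame of the Mandelbrot set centred on (cx, cy)."""
        rows = []
        for j in range(size):
            row = []
            for i in range(size):
                # Map pixel (i, j) to a point c in the complex plane.
                c = complex(cx + (i / size - 0.5) * 2 * half_width,
                            cy + (j / size - 0.5) * 2 * half_width)
                z, n = 0j, 0
                while abs(z) <= 2 and n < max_iter:
                    z = z * z + c
                    n += 1
                row.append('#' if n == max_iter else ' ')
            rows.append(''.join(row))
        return '\n'.join(rows)

    def endless_zoom(cx=-0.743643887, cy=0.131825904, half_width=2.0, factor=0.9):
        """A non-halting but 'productive' computation: yields frames forever."""
        while True:                      # deliberately unbounded
            yield mandelbrot_frame(cx, cy, half_width)
            half_width *= factor         # zoom in a little for the next frame

    if __name__ == '__main__':
        zoom = endless_zoom()
        for _ in range(3):               # a caller can still choose to stop
            print(next(zoom))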
> The vast majority of the programs used in real life are not formally proven and are written in Turing complete languages. That is already the world you live in.
They're unproven, not believed to be unprovable. (Indeed, almost all practical programs make at least some effort to offer evidence and informal arguments for their correctness, via tests, comments on unsafe constructs, and so on).
> Notice also that humans (and Darwinian evolution, for that matter) display a capacity for creativity that has not been successfully replicated by AI efforts yet. Notice further that non-halting computations (e.g. infinitely zooming a Mandelbrot set) are the closest thing we have to unbounded creativity.
> They're unproven, not believed to be unprovable.
A decidable language (let's say Ld) is less powerful than a Turing-complete language (Lt). This means that there are some computations that can be expressed in Lt but not in Ld, and that there are some problems that can be solved in Lt but not in Ld. These are theorems of theoretical computer science. I don't believe anyone has a clear idea of the practical implications of this limitation, and how many of the algorithms in current use are only possible in Lt.
My guess is that the complexity of the software in use nowadays vastly exceeds our ability to do such an analysis, but maybe you know something I don't. Otherwise, while it is true that they are not believed to be unprovable, it is also true that they are not believed to be provable.
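To make the Ld/Lt distinction concrete, a toy sketch of my own (an illustration, nothing more): the same iteration written with an explicit bound, which could live in a decidable language, and with an unbounded loop, whose termination for every input is an open problem.

    # Toy illustration of the Ld vs Lt distinction (sketch only).

    def collatz_steps_bounded(n, max_steps=10_000):
        """Ld-style: the loop bound is explicit, so termination is obvious.
        Returns the step count, or None if the budget runs out."""
        for steps in range(max_steps):
            if n == 1:
                return steps
            n = 3 * n + 1 if n % 2 else n // 2
        return None  # "don't know" instead of "maybe never answer"

    def collatz_steps_unbounded(n):
        """Lt-style: an unbounded while-loop. Nobody has proved this halts
        for every n (the Collatz conjecture), yet the code is trivial to write."""
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    print(collatz_steps_bounded(27))    # 111
    print(collatz_steps_unbounded(27))  # 111, but only because 27 happens to reach 1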
> This is meaningless woo.
I'm not sure I should reply to this, because it is just name calling, but for other people reading this (and you, if you're still interested): people who study artificial creativity and related fields such as Artificial Life take these ideas seriously and have interesting philosophical definitions and mathematical formalisms to address them. I have been to conferences sponsored by serious universities and other organisations such as ACM and IEEE where ideas such as the generative power of the Mandelbrot set are seriously discussed. There are several attempts to quantify creativity and to connect the idea of creativity with computer science.
It is important not to have a mind so open that your brain falls out, but I suggest that you may be going too far in the opposite direction.
> I don't believe anyone has a clear idea of the practical implications of this limitation, and how many of the algorithms in current use are only possible in Lt.
> My guess is that the complexity of the software in use nowadays vastly exceeds our ability to do such an analysis, but maybe you know something I don't. Otherwise, while it is true that they are not believed to be unprovable, it is also true that they are not believed to be provable.
Mathematicians and computer scientists deliberately seek out problems that are not in Ld, and have only found constructed examples, mostly minor variations on the same "diagonalization" argument. Any algorithm that is known to work is necessarily in Ld, and those constitute the overwhelming majority of algorithms that are published or used, for obvious reasons. (And those that are merely believed to work are, in the overwhelming majority of cases, believed to work for reasons that translate directly into a belief that they could be proven to work).
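For anyone who hasn't seen it, the diagonalization argument is short enough to sketch in code. The halts() function below is hypothetical; the whole point of the argument is that it cannot exist.

    # Sketch of the diagonalization argument behind the halting problem.
    # `halts` is HYPOTHETICAL: the argument shows no such total function exists.

    def halts(func, arg) -> bool:
        """Assumed (for contradiction) to decide whether func(arg) terminates."""
        raise NotImplementedError("no general halting decider can exist")

    def contrary(func):
        """Do the opposite of whatever `halts` predicts about func(func)."""
        if halts(func, func):
            while True:          # predicted to halt, so loop forever
                pass
        else:
            return               # predicted to loop, so halt immediately

    # The contradiction: consider contrary(contrary).
    #   If halts(contrary, contrary) is True,  contrary(contrary) loops forever.
    #   If it is False,                        contrary(contrary) halts.
    # Either answer makes `halts` wrong, hence no general halting decider.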
> have only found constructed examples, mostly minor variations on the same "diagonalization" argument
Anything that is Turing-complete cannot be implemented in Ld (by definition). Off the top of my head, this includes: Recurrent Neural Networks, CSS, Minecraft, TrueType fonts, x86 emulators (MOV is Turing-complete) and Conway's Game of Life.
Of course you can argue that lots of things can be implemented in an Ld language. Sure, I have nothing against it, but it's not like desiring Turing-completeness is an absurd requirement.
> Any algorithm that is known to work is necessarily in Ld
No, most algorithms are known to work correctly for the common cases that are tested for + all the edge cases the developers can think of or encounter in real life. For non-trivial software, this is a minuscule subset of the possible states. Lots of things are surprisingly Turing-complete, and it is not trivial to prevent this for a sufficiently complex system.
> Anything that is Turing-complete cannot be implemented in Ld (by definition). Off the top of my head, this includes: Recurrent Neural Networks, CSS, Minecraft, TrueType fonts, x86 emulators (MOV is Turing-complete) and Conways' Game of Life.
You're begging the question - your Turing-complete algorithm is "evaluate an expression in a Turing-complete language". It's easy to make a language accidentally Turing-complete (especially when you're thinking in a Turing-complete language), but that completeness is undesirable, and in realistic use cases it's harmful rather than helpful. No-one wants to sit waiting indefinitely to see whether a web page or font is actually going to render or not (and indeed we often end up going to great lengths to make these things Turing-incomplete in practice with timeouts and the like).
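To illustrate the "timeouts and the like" point, a sketch of my own (not how any particular browser or font engine actually does it): wrap the untrusted evaluation in an explicit step budget, so the caller always gets an answer in bounded time.

    # Sketch: make an untrusted computation effectively non-Turing-complete
    # by giving it an explicit step budget (names and numbers are made up).

    class BudgetExceeded(Exception):
        pass

    def run_with_budget(step_fn, state, max_steps=100_000):
        """Advance `state` with step_fn until it signals completion (returns None)
        or the budget runs out. Either way the caller hears back in bounded time."""
        for _ in range(max_steps):
            nxt = step_fn(state)
            if nxt is None:          # computation finished
                return state
            state = nxt
        raise BudgetExceeded(f"gave up after {max_steps} steps")

    # Example: a well-behaved layout-ish computation that converges quickly.
    def shrink_until_it_fits(width):
        return None if width <= 80 else width * 0.9

    print(run_with_budget(shrink_until_it_fits, 1000))   # finishes well within budget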
> No, most algorithms are known to work correctly for the common cases that are tested for + all the edges cases the developers can think of or encounter in real life. For non-trivial software, this is a minuscule subset of the possible states.
You're not contradicting me, you're saying "most algorithms are not known to work". I don't think that's true in the sense of "algorithms" published in journals/textbooks. I would agree that most code isn't known to work, but that translates into reality: most code doesn't work; most programs have cases where they just break, and they crash every so often.
> Lots of things are surprisingly Turing-complete, and it is not trivial to prevent this for a sufficiently complex system.
For a complex system written from scratch, it's easy enough with the right tools. If you build it in a non-Turing-complete language it won't be Turing-complete, and if something is hard to do in a total way it's probably a bad idea.
Porting an existing system would be much harder, I'll agree, and porting the existing code/protocol ecosystem would be a huge ask. (I do think it's necessary though; the impact of malware attacks gets worse every day, and the level of bugginess we're used to seeing in software is rapidly ceasing to be good enough).
This entire discussion about Turing completeness is completely irrelevant. The problem that's relevant to software verification isn't the halting problem in its original formulation, but a simple corollary of it, often called "bounded halting" (which serves as the core of the time hierarchy theorem, possibly the most important theorem in computer science), which states that you cannot know whether a program halts within n steps in under n steps. The implication is that there can exist no general algorithm for checking a universal property (e.g. the program never crashes on any input) that is more efficient than running the program on every possible input until termination.
But bounded halting holds not only for Turing complete languages, but for total languages, too (using the same proof). In fact, it holds even for finite state machines (with a different proof). This is why even the verification of finite state machines is generally infeasible (or very expensive under restricted conditions) both in theory and in practice.
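A toy version of the blow-up, just to make the numbers concrete (my sketch, not the example from the blog post): explicit reachability over k independent two-state components already visits 2^k states, before the components interact at all.

    # Toy sketch: exhaustive reachability for k independent on/off components,
    # each state encoded as a bitmask. Any single component may toggle per step.

    from collections import deque

    def reachable_states(k):
        seen = {0}
        frontier = deque([0])
        while frontier:
            state = frontier.popleft()
            for i in range(k):                 # each component can toggle
                nxt = state ^ (1 << i)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    for k in (8, 12, 16):
        print(k, len(reachable_states(k)))     # 256, 4096, 65536: grows as 2**k

A few dozen components and the enumeration is already hopeless, which is the point.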
To my mind all that says is that even general total languages or finite state machines allow expressing too much. Surely one can solve this by working in a language where functions are not merely total but come with explicit bounds on their runtime. This seems like a good fit for a stratified language design like Noether - we might have to resort to including a few not-provably-bounded total functions (or even not-provably-total functions) in a practical program, but hopefully would be able to minimize this and make the parts where we were making unproven assumptions very explicit.
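Roughly what I have in mind, as a sketch (my own, not Noether's actual design): the step bound becomes part of the function's interface, so the one place where an unproven assumption enters is explicit.

    # Rough sketch of the "explicit bound" style (not Noether's actual design).
    # The step budget is part of the interface; running out of fuel is an
    # ordinary, visible result rather than a silent hang.

    from typing import Optional

    def gcd_with_fuel(a: int, b: int, fuel: int) -> Optional[int]:
        """Euclid's algorithm with an explicit step budget.
        Returns the gcd, or None if `fuel` steps were not enough."""
        while fuel > 0:
            if b == 0:
                return a
            a, b = b, a % b
            fuel -= 1
        return None   # the only place the "is the bound big enough?" assumption lives

    print(gcd_with_fuel(1071, 462, fuel=10))   # 21
    print(gcd_with_fuel(1071, 462, fuel=1))    # None: budget visibly too small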
If your program is a FSM, then your functions run in constant time, and still verification is infeasible. Search for "Just as a quick example of why verifying even FSMs is hard" in my blog post http://blog.paralleluniverse.co/2016/07/23/correctness-and-c...
Languages that are so limited as to be unusable are still PSPACE-complete to verify.
It is simply impossible to create a useful programming language where every program is easily verifiable. Propositional logic -- the simplest possible (and too constrained to be useful) "language" -- is already intractable. Feasible verification in the worst case and computation of even the most limited kind are simply incompatible. As anyone who has done any sort of formal verification knows -- it's hard. There are two general ways of getting around this difficulty. Either we verify only crude/local properties using type checking/static analysis (both are basically the same abstract interpretation algorithm), or we tailor a verification technique to a small subset of programs. The other option is, of course, to work hard.
You're assuming that we can only ever add to languages, not constrain them. Yes, if your program has 6 decision points and you model each possible combination explicitly then your model will have 2^6 possible states - but you don't have to model it that way. In previous discussions you've said that humans understand these programs by using symmetries to drastically reduce the size of the state space - let the tool do the same thing.
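Here's a toy version of the kind of reduction I mean (my own sketch): for k interchangeable components, a property that depends only on how many of them are in a given state can be checked on k+1 abstract states instead of 2^k concrete ones.

    # Toy sketch of a symmetry (counting) reduction.

    def check_concrete(k, prop):
        """Brute force over all 2**k concrete states (each bit = one component)."""
        return all(prop(bin(state).count('1')) for state in range(2 ** k))

    def check_abstract(k, prop):
        """Counting abstraction: only the number of components that are on matters,
        so k + 1 abstract states cover the same property."""
        return all(prop(count) for count in range(k + 1))

    k = 16
    never_all_but_one_on = lambda on_count: on_count != k - 1   # a property that fails
    print(check_concrete(k, never_all_but_one_on), "after checking", 2 ** k, "states")
    print(check_abstract(k, never_all_but_one_on), "after checking", k + 1, "states")

Both checks reach the same verdict; the abstract one looks at a tiny fraction of the states.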
Sure, and model checkers and static analyzers do use symmetries and much more sophisticated ideas (abstract interpretation), but the point remains that it is provably impossible to create a language where every program is feasibly verifiable. My example wasn't even the most restrictive (even though it was too restrictive to be generally useful): it didn't have loops or recursion or higher-order functions. But even a language that doesn't have functions or branching at all, and has only boolean variables is already NP-complete to verify. Computation -- even of the most restrictive kind -- and verification are essentially at odds. In a way, that is the defining feature of computation: a mathematical object constructed of very simple parts, whose composition quickly creates intractability.
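To spell out the boolean-variables-only point with a toy sketch (mine, purely illustrative): checking that a straight-line boolean program can never reach a "bad" state is exactly propositional satisfiability, and the obvious checker does 2^n work.

    # Toy sketch: verifying a straight-line, boolean-only "program" (no loops,
    # no branching, no functions) is already propositional satisfiability.

    from itertools import product

    def program(a, b, c, d):
        """A few straight-line boolean assignments; `bad` is the state to rule out."""
        x = a and not b
        y = (b or c) and not d
        bad = x and y and (c or d)
        return bad

    def verify_never_bad(prog, n_inputs):
        """Exhaustive check over all 2**n_inputs assignments, i.e. brute-force SAT."""
        for assignment in product([False, True], repeat=n_inputs):
            if prog(*assignment):
                return False, assignment      # counterexample found
        return True, None

    print(verify_never_bad(program, 4))   # a counterexample exists for this toy program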
On the other hand, specific programs, even in Turing-complete languages, can be feasibly verifiable. We can make our tools more sophisticated, but we can't make our languages restrictive enough that verification would always be possible in practice. There may be things languages can do to help, but changing the expressiveness of the computational model is simply not one of them.
I'm already used to working without unbounded loops and moving away from unrestricted recursion; I can appreciate that unrestricted boolean expressions are complex to verify but I'm just not convinced that day-to-day software development (whether we call it computation or something else) actually needs such things. To my mind verification goes hand-in-hand with implementation, provided the language gives you the tools to do that - maybe I'm arguing for a metalanguage rather than a language, but if I want to produce a program with particular properties that seems easy to achieve by constraining myself to a sublanguage where these properties are true by construction, and then writing my program.
> > the existence of programs that might be "productive" in your sense, but might also never stop, and it is not possible to know in advance
> I struggle to imagine a practical case where we want to run a program that we didn't and couldn't know whether it worked though.
First-order theorem proving is a very practical case. Provability in first-order logic is "recursively enumerable", which means that we can write provers that, given a formula F, produce a proof of F in finite time if such a proof exists. But in general we don't know whether a proof exists or how long it will take to find one. So if we start a prover, it will just sit there and not look "productive" until it either returns a proof or we kill it because we're tired of waiting.
But we wouldn't want to just leave such a theorem prover running indefinitely. Far more practical to have it check all possible proofs of length up to n (trivial to do in a total language), or at most run it for a particular number of days, months or years. At some point you do end the experiment.
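Roughly what I mean, as a sketch (the rewrite system below is a toy stand-in, not a real first-order prover): enumerate candidates up to a length bound, stop at a deadline either way, and report which of the two happened.

    # Sketch only: bounded "proof search" with a deadline, in a toy rewrite system.
    # The shape of the loop (try everything up to length n, then stop) is the point.

    import time
    from itertools import product

    AXIOM = "A"
    RULES = {
        "append_B":  lambda s: s + "B",
        "duplicate": lambda s: s + s,
    }

    def search_up_to(goal, max_len, deadline_seconds=5.0):
        """Try every rule sequence of length <= max_len, or stop at the deadline."""
        stop_at = time.monotonic() + deadline_seconds
        for length in range(max_len + 1):
            for seq in product(RULES, repeat=length):
                if time.monotonic() > stop_at:
                    return None, "timed out"
                term = AXIOM
                for rule in seq:
                    term = RULES[rule](term)
                if term == goal:
                    return list(seq), "proved"
        return None, f"no proof of length <= {max_len}"

    print(search_up_to("ABAB", max_len=4))   # (['append_B', 'duplicate'], 'proved')
    print(search_up_to("ABBA", max_len=4))   # (None, 'no proof of length <= 4')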
Sure. You usually run the prover with a timeout. The point is, this is a practical example of running a program where you cannot predict whether it will "work" or not. Additionally, when you reach the timeout, you will not have gained any information, i.e., the process is not "productive" in the sense discussed above.
Edit: Can't reply to your reply. Interesting. Yes, true, "there is no proof of length < N" is some information, but it's not information that tells you anything about the provability of your formula. It's not productive information. You can keep trying, but you won't be able to overturn the semidecidability of first-order logic in a Hacker News comment thread.
If the prover simply went into an infinite loop while checking the first proof candidate and never got as far as checking the second one, would it be working correctly? I would consider such a prover not useful, and a useful prover would be one that told me something when reaching the timeout, such as that there were no proofs for the given theorem shorter than a certain length.