
right "the best we can do" is actually pretty damned good, if anyone would bother doing it.


I bother doing it, but it still ends up as "the best they want to pay us for," which is usually less than the best.


That reminds me of the old G.K. Chesterton quote, which was originally about religion.

Provable correctness "...has not been tried and found wanting; it has been found difficult and not tried."


What is the largest program that has been proven correct?

We're writing programs that are tens of millions of lines of code (OSes, databases, air traffic control systems). My impression is that the largest program proven correct so far is in the area of 100,000 lines (feel free to correct me if I'm in error). We need to be able to prove programs two orders of magnitude bigger than that; provable correctness hasn't been tried for such programs because nobody wants to wait a generation or two for the results.


Sort of. We haven't proven a single thing beyond a certain size and complexity. That was true when it started back in the Orange Book days, even when Dijkstra came up with the stuff and the method of dealing with it: composition with preconditions, postconditions, and invariants.

You code in a functional style and/or with composed state machines. They composed things hierarchically and loop-free where possible to facilitate analysis. Each component is small enough to analyze for all success or failure states. The results of each become "verification conditions" for the things that build on it. With such methods, one can build things as large as you want, so long as each component (primitive or composite) can be analyzed.
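To make that concrete, here's a minimal toy sketch in Python (illustrative names, not from any real verified system): each function's postcondition establishes the next one's precondition, so the composite handle() can be analyzed piece by piece instead of all at once.

    def parse_frame(raw: bytes) -> list[int]:
        # Precondition: non-empty, length-prefixed frame.
        assert len(raw) >= 1 and raw[0] == len(raw) - 1
        payload = list(raw[1:])
        # Postcondition: payload length matches the declared length byte.
        assert len(payload) == raw[0]
        return payload

    def checksum(payload: list[int]) -> int:
        # Precondition: every element is a byte value.
        assert all(0 <= b < 256 for b in payload)
        total = sum(payload) % 256
        # Postcondition: result fits in one byte.
        assert 0 <= total < 256
        return total

    def handle(raw: bytes) -> int:
        # Composite component: parse_frame's postcondition establishes
        # checksum's precondition, so each piece is analyzed separately
        # and the composition needs no new global analysis.
        return checksum(parse_frame(raw))

    print(handle(bytes([3, 10, 20, 30])))  # -> 60

In a real effort the asserts would be discharged by proof or exhaustive analysis rather than checked at runtime, but the composition structure is the same.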

Not clear at what point this breaks down. I used it for drivers, protocols, algorithms, systems, networks, and so on. I brute-forced it with more manual analysis because I'm not mathematical or trained enough for proofs. However, what holds it back on large things is usually just lack of effort, as far as I can tell.


So maybe 1) we shouldn't write systems that are tens of millions of anarchically intercoupled lines of code, or 2) we should write a hundred 100,000-line subsystems and link them together in a rigorous way.

The key is then to simply avoid emergent interactions between modules/subsystems. That's pretty doable.
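As a toy illustration of what "linking in a rigorous way" could look like (made-up names, a sketch rather than anything from a real system): each subsystem only touches the others through small, immutable messages, so the only interactions you have to verify are the ones visible in the interface.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SensorReading:
        celsius: float

    @dataclass(frozen=True)
    class FanCommand:
        on: bool

    class Controller:
        # Input is a SensorReading, output is a FanCommand; nothing else is shared.
        def step(self, reading: SensorReading) -> FanCommand:
            return FanCommand(on=reading.celsius > 70.0)

    class FanDriver:
        # Consumes commands only; no back-channel into the controller.
        def apply(self, cmd: FanCommand) -> None:
            print("fan on" if cmd.on else "fan off")

    controller, driver = Controller(), FanDriver()
    driver.apply(controller.step(SensorReading(celsius=82.5)))  # -> fan on

Whether that discipline scales to a hundred subsystems is the open question, but it at least keeps the coupling explicit instead of emergent.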

The trouble is that people rarely reason about the cost of defects at all. Even when they do, they don't apply that reasoning to the other components of the system.


Exactly what I said above. Verifying individual components and their straightforward composition is how it was done in the past. People just ignore that.


I heard you. I just still fail to believe it. :)


Y'know, a lot of the best, most robust, and mature software out there is a generation old... Maybe that's just how long it takes to get large-scale software right.



