Irrelevant, really. Tests establish a minimum threshold of acceptability; they don't (and can't) guarantee anything like overall correctness.

Just checking off the list of things you've determined to be irrelevant. Compiler? Nope. Linter? Nope. Test suite? Nope. How about TLA+ specifications?

I truly don't know what you're trying to communicate with all of your recent comments about AI, LLMs, codegen, and so on; the only thing I can guess is that you're just cynically throwing sand into the wind. It's unfortunate: your username used to carry some clout and respect.

TLA+ specs don’t verify code; they verify design. The design can be expressed in whatever notation you like, including pseudocode (think of the algorithm notation in textbooks). Then you write the TLA+ spec and check whether its invariants actually hold. Once you’re confident in the design, you go and implement it, but there are no hard constraints tying the implementation to the spec the way a type system would.
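
As a toy sketch of what that looks like (hypothetical module, just to show the shape): the spec states an invariant over an abstract design-level model, and TLC checks it against that model, never against any implementation code.

    ---- MODULE Counter ----
    EXTENDS Naturals
    VARIABLE count

    Init == count = 0                \* design-level state, not program state
    Next == count' = count + 1       \* the abstract step the design allows
    Spec == Init /\ [][Next]_count

    NonNegative == count >= 0        \* invariant TLC checks over the model
    ====

Nothing in the module refers to the eventual implementation; carrying the verified design over into real code is still a manual, unchecked step.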

At what level of formal methods verification does the argument against AI-generated code fall apart? My expectation is that the answer is "never".

The subtext is pretty obvious, I think: the standards being set on message boards for LLM-generated code are ludicrously higher than any that would be set for people-generated code.
