
LLMs will never be better than humans, for the simple reason that they are just a shitty copy of human code.


I think they can be an excellent copy of human code. Are they great at novel, out-of-training-distribution tasks? Definitely not; they suck at them. Yet I'd argue that most problems aren't novel; at most they're some recombination of prior problems.



