
It would be awesome to develop some theory around what kinds of problems LLMs can and cannot solve. That should deter some leads from pushing to solve the unsolvable with the technology.

That being said, this isn't a knockout blow by any stretch. The strength of LLMs lies in the people who are excited about them. And there's a perfect reinforcing mechanism for the excitement: the chatbots that use the models.

Admit for a second that you’re a human with biases. If you see something more frequently, you’ll think it’s more important. If you feel good when doing something, you’ll feel good about that thing. If all your friends say something, you’re likely to adopt it as your own belief.

If you have a chatbot that can talk to you more coherently than anyone you’ve ever met, and implement these two nested loops that you’ve always struggled with, you’re poised to become a fan, an enthusiast. You start to believe.

And belief is power. Just as progress in neuroscience has not been able to retire the dualism of body and soul, no amount of testing LLMs will be able to retire the belief that AI is poised to dominate everything soon.



>It would be awesome to develop some theory around what kind of problems LLMs can and cannot solve. That should deter some leads pushing for solving the unsolvable with the technology.

That could have unfortunate consequences. Most people stopped looking at neural nets for years because they incorrectly took Minsky and Papert's 1969 proof that perceptrons (single-layer, linear neural nets) couldn't solve basic problems like XOR as applying to neural nets in general. So the field basically abandoned neural nets for a couple of decades, which were more or less wasted on "symbolic" approaches to AI that accomplished little.
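For anyone who hasn't seen why, here's a minimal sketch (my own code, not from the book): the perceptron learning rule converges on linearly separable data like AND, but on XOR it cycles forever, since no single line separates XOR's classes. That impossibility for single-layer nets is the heart of the Minsky-Papert result.

  # Rosenblatt's perceptron learning rule on 2-input boolean functions.
  def train_perceptron(data, epochs=100):
      w0, w1, b = 0.0, 0.0, 0.0
      for _ in range(epochs):
          mistakes = 0
          for x0, x1, target in data:
              pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
              err = target - pred
              if err != 0:
                  mistakes += 1
                  # Standard perceptron update: nudge the separating line.
                  w0 += err * x0
                  w1 += err * x1
                  b += err
          if mistakes == 0:
              return True   # converged: a separating line was found
      return False          # never converged within the epoch budget

  AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
  XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
  print(train_perceptron(AND))  # True: AND is linearly separable
  print(train_perceptron(XOR))  # False: no single line separates XOR

A single hidden layer fixes this, which is exactly the case the field wrongly wrote off for two decades.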



