
Right, for example, early LLMs were notoriously bad at math, since they had been trained on language alone. They'd get simple sums right, likely through rote memorization, but couldn't reliably do basic arithmetic with 3-digit numbers. The common AI agents seem much better now; I suspect separate math-processing logic was added and the LLMs were trained to recognize when and how to delegate to it, though I'm not certain of that.
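
A minimal sketch of what that delegation might look like, assuming a made-up tool-call format and a stubbed-out model (fake_llm and the JSON schema here are purely illustrative, not any vendor's actual API):

    import json, operator

    # Hypothetical: instead of answering directly, the model emits a
    # structured request asking the host to run an exact-arithmetic tool.
    def fake_llm(prompt):
        # A real model would decide this itself; here it's hard-coded.
        return json.dumps({"tool": "calculator", "op": "mul",
                           "args": [734, 581]})

    OPS = {"add": operator.add, "sub": operator.sub,
           "mul": operator.mul, "div": operator.truediv}

    def handle(prompt):
        msg = json.loads(fake_llm(prompt))
        if msg.get("tool") == "calculator":
            a, b = msg["args"]
            result = OPS[msg["op"]](a, b)  # exact arithmetic, no digit-guessing
            # Normally this result would be handed back to the model
            # so it can phrase the final reply.
            return f"{a} * {b} = {result}"
        return msg

    print(handle("What is 734 * 581?"))  # -> 734 * 581 = 426454

The point is that the multiplication is done by ordinary integer arithmetic on the host side; the model only has to recognize that it should ask for it.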

Similarly, coding-focused LLMs can access backend engines that actually run the code and return feedback, either to show the user or to iterate on internally.
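
A rough sketch of that run-and-iterate loop, assuming a hypothetical ask_model(task, feedback) function that returns a code string (this is an illustration, not any real agent framework):

    import subprocess, sys, tempfile

    def run_snippet(code, timeout=5):
        """Run candidate code in a separate interpreter and capture feedback."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=timeout)
        return proc.returncode, proc.stdout, proc.stderr

    def iterate(ask_model, task, max_rounds=3):
        """Hypothetical loop: regenerate until the code runs cleanly."""
        feedback = ""
        for _ in range(max_rounds):
            code = ask_model(task, feedback)
            rc, out, err = run_snippet(code)
            if rc == 0:
                return out      # success: show the user the result
            feedback = err      # failure: hand the traceback back to the model
        return f"gave up after {max_rounds} attempts:\n{feedback}"

    # e.g. iterate(lambda task, fb: "print(2 + 2)", "add two numbers") -> "4\n"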

Having a whole host of such backend processors would be great. Users would still only ever interact in natural language, but get the power of all these specialized tools in the backend. There are plenty of tasks LLMs can do that special-purpose algorithms handle better, faster, and/or with less energy.
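
One way such a backend could be organized is a simple registry the model routes into; the tool names, argument schemas, and dispatch function below are all assumptions for illustration:

    import statistics

    # Hypothetical registry of specialized backends; the model's only job is
    # to pick a tool and fill in its arguments from the user's request.
    TOOLS = {
        "calculator":   lambda args: eval(args["expr"], {"__builtins__": {}}),  # sketch only
        "stats":        lambda args: statistics.mean(args["values"]),
        "unit_convert": lambda args: args["km"] * 0.621371,  # km -> miles
    }

    def dispatch(tool_call):
        """tool_call is the structured output the model would emit."""
        return TOOLS[tool_call["tool"]](tool_call["args"])

    # The user only typed natural language; the model produced this routing:
    print(dispatch({"tool": "stats", "args": {"values": [3, 5, 10]}}))  # -> 6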


