
"once" the training data can do it, LLMs will be able to do it. and AI will be able to do math once it comes to check out the lights of our day and night. until then it'll probably wonder continuously and contiguously: "wtf! permanence! why?! how?! by my guts, it actually fucking works! why?! how?!"


AWS announced, two or three weeks ago, a way of formulating rules in a formal language.

AI doesn't need to learn everything; our LLMs already contain EVERYTHING, including ways to find a solution step by step.

Which means you can tell an LLM to translate whatever you want into a logical language and then use an external logic verifier. The only thing an LLM needs to 'understand' at that point is how to make the statistical translation from the natural language into the formal one reliable enough.
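
To make that concrete, here is a minimal sketch of the translate-then-verify idea. Assumptions: llm_translate is a hypothetical stand-in for the LLM call (hard-coded here), and Z3 plays the role of the external verifier; it is just an illustration, not the AWS product linked further down the thread.

    # Minimal sketch: translate a claim into logic, then check it externally.
    # llm_translate() is hypothetical; Z3 acts as the external verifier.
    from z3 import Int, Solver, Implies, And, Not, sat  # pip install z3-solver

    def llm_translate(claim: str):
        # Hypothetical: in practice the LLM would emit these constraints from
        # the natural-language claim. Hard-coded here for the claim
        # "if x > 2 and y > 2 then x + y > 4".
        x, y = Int("x"), Int("y")
        return Implies(And(x > 2, y > 2), x + y > 4)

    def verify(claim: str) -> bool:
        formula = llm_translate(claim)
        solver = Solver()
        solver.add(Not(formula))      # search for a counterexample
        return solver.check() != sat  # no counterexample found -> claim holds

    print(verify("if x > 2 and y > 2 then x + y > 4"))  # True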

Your brain doesn't do logic out of the box either; you conclude things and then formulate them.

And plenty of companies are working on this. It's the same with programming: if you can write code and execute it, you iterate until the compiler errors are gone. Now your LLM can write valid code out of the box. Let the LLM write unit tests too, and now it can verify itself.

Claude, for example, offers out of the box to write a validation script for you. You can then give Claude the output of the script it suggested.
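
Roughly, that loop looks like the sketch below. ask_llm is a hypothetical wrapper around whatever model API you use (not Claude's actual API); pytest does the verifying.

    # Rough sketch of the write/run/feed-back loop, assuming a hypothetical
    # ask_llm() wrapper around whatever model API you use.
    import subprocess

    def ask_llm(prompt: str) -> str:
        # Hypothetical LLM call; should return Python source as a string.
        raise NotImplementedError

    def generate_until_tests_pass(task: str, max_rounds: int = 5) -> str:
        code = ask_llm(f"Write a module plus pytest tests for: {task}")
        for _ in range(max_rounds):
            with open("candidate.py", "w") as f:
                f.write(code)
            result = subprocess.run(
                ["pytest", "candidate.py", "-q"],
                capture_output=True, text=True,
            )
            if result.returncode == 0:  # tests pass: the code checked itself
                return code
            # Feed the failure output back, exactly like pasting a validation
            # script's output back into the chat.
            code = ask_llm(
                "The tests failed with:\n"
                f"{result.stdout}\n{result.stderr}\n"
                f"Fix the code and the tests:\n{code}"
            )
        raise RuntimeError("no passing version within the round limit")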

Don't underestimate LLMs


Is this the AWS thing you referenced? https://aws.amazon.com/what-is/automated-reasoning/


yes


I do think it is time to start questioning whether the utility of AI can be reduced solely to the quality of the training data.

This might be a dogma that needs to die.


If not, bad training data shouldn't be a problem.


There can be more than one problem. The history of computing (or even just the history of AI) is full of things that worked better and better right until they hit a wall. We get diminishing returns from adding more and more training data. It's really not hard to imagine a series of breakthroughs bringing us way beyond LLMs.


I tried. I don't have the time to formulate and scrutinise adequate arguments, though.

Do you? Anything anywhere you could point me to?

The algorithms live entirely off the training data. They consistently fail to "abduct" (make abductive inferences) beyond information that is specific to the language in, and of, the training data.


The best way to predict the next word is to accurately model the underlying system that is being described.


It is a gradual thing. Presumably the models are inferring things at runtime that were not part of their training data.

Anyhow, philosophically speaking, you are also only exposed to what your senses pick up, yet presumably you are able to infer things?

As written: this is a dogma that stems from a limited understanding of what algorithmic processes are and from the insistence that emergence cannot happen in algorithmic systems.



