
I tried. I don't have the time to formulate and scrutinise adequate arguments, though.

Do you? Anything anywhere you could point me to?

The algorithms live entirely off the training data. They consistently fail to "abduct" (perform abductive inference) beyond the language-specific information contained in that training data.



The best way to predict the next word is to accurately model the underlying system that is being described.


It is a gradual thing. Presumably the models are inferring things at runtime that were not part of their training data.

Anyhow, philosophically speaking, you are also only exposed to what your senses pick up, but presumably you are able to infer things?

As written, this is a dogma that stems from a limited understanding of what algorithmic processes are, and from the insistence that emergence cannot arise from algorithmic systems.



