I tried. I don't have the time to formulate and scrutinise adequate arguments, though.
Do you? Anything anywhere you could point me to?
The algorithms live entirely off the training data. They consistently fail to "abduct" (infer) anything beyond the language-specific information contained in that training data.
It is a gradual thing. Presumably the models are inferring things at runtime that were not part of their training data.
Anyhow, philosophically speaking, you too are only exposed to what your senses pick up, and yet presumably you are able to infer things?
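For what it's worth, the runtime-inference claim is testable. Here is a minimal sketch, assuming the openai Python package, an API key in the environment, and a placeholder model name: it hands the model a rule invented for this one prompt (so the rule itself cannot appear verbatim in any training set) and checks whether the model applies it.

```python
# Minimal sketch of the runtime inference being discussed: present a made-up
# rule in the prompt and see whether the model combines and applies it.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set, and
# the model name below is a placeholder, not an endorsement of any system.
from openai import OpenAI

client = OpenAI()

prompt = (
    "In the made-up language Zelvish, 'mip' means 'two', 'florb' means "
    "'stone', plurals take the suffix '-ka', and number words come after "
    "the noun. How do you say 'two stones' in Zelvish? "
    "Answer with the Zelvish phrase only."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model would do here
    messages=[{"role": "user", "content": prompt}],
)

# If the reply is 'florbka mip', the model has applied rules it was only
# shown at runtime, which is the in-context inference referred to above.
print(response.choices[0].message.content)
```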
As written: this is a dogma that stems from a limited understanding of what algorithmic processes are and from the insistence that emergence cannot arise from algorithmic systems.