Hacker News

No prompt will cause an LLM to rapidly improve itself, much less into an AGI. Prompts don't cause permanent change in the LLM, only differences in output.


You're talking about how GPT functions in 2023. I'm discussing a point where LLM outputs become valuable modifications to the LLM itself.

AI recursing on itself progressing toward an AGI.


No matter how many times you feed the output of an LLM back to itself, the underlying model does not change. Online training (updating actual model weights, not just fine-tuning) would be hugely resource-intensive and isn't guaranteed to do any better than the initial training. Interference will happen, whether catastrophic forgetting or simple drift. We can fantasize about future architectures all day long, but that doesn't make them capable of AGI or even give us a path forward.
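A toy sketch of the point above, assuming a deliberately fake "model" (a hash of frozen parameters plus the prompt, not a real LLM): inference is a pure function of fixed weights, so looping the output back in as the next prompt changes only the text, never the parameters.

```python
import hashlib

# Hypothetical frozen parameters; a real LLM's weights are likewise
# read-only at inference time.
WEIGHTS = b"fixed-parameters-v1"

def generate(prompt: str) -> str:
    """Stand-in for inference: a deterministic function of weights + prompt."""
    return hashlib.sha256(WEIGHTS + prompt.encode()).hexdigest()[:16]

before = WEIGHTS
text = "improve yourself"
for _ in range(1000):   # feed each output back in as the next prompt
    text = generate(text)
after = WEIGHTS

print(before == after)  # the parameters never changed
```

However many iterations you run, `before == after`: the loop only moves text through the model, so nothing about the model accumulates.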


> AI recursing on itself progressing toward an AGI.

Inbreeding LLMs will result in an "AGI"?

In this universe, if you try to get something from nothing, you just end up with noise.
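A minimal simulation of that "inbreeding" intuition, under toy assumptions (a discrete distribution standing in for a model, re-estimated each generation purely from its own samples): once a symbol fails to appear in a generation's samples, it is gone for good, so diversity can only shrink.

```python
import random

random.seed(0)

def next_generation(dist, n):
    """Draw n samples from dist, return the empirical distribution."""
    symbols = list(dist)
    weights = [dist[s] for s in symbols]
    draws = random.choices(symbols, weights=weights, k=n)
    counts = {}
    for d in draws:
        counts[d] = counts.get(d, 0) + 1
    return {s: c / n for s, c in counts.items()}

k, n, generations = 20, 20, 200
dist = {i: 1 / k for i in range(k)}   # start uniform over k symbols
support_sizes = [len(dist)]
for _ in range(generations):
    dist = next_generation(dist, n)   # retrain on the model's own output
    support_sizes.append(len(dist))

print(support_sizes[0], "->", support_sizes[-1])
```

Each generation's support is a subset of the previous one, so the number of surviving symbols is monotonically non-increasing; no new information ever enters the loop, it only leaks out.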



