No prompt will cause an LLM to rapidly improve itself, much less bootstrap itself into an AGI. Prompts cause no permanent change in the model; they only change its output.
No matter how many times you feed an LLM's output back to itself, the underlying model does not change. Online training (updating the actual model weights, not just fine-tuning) would be hugely resource intensive and is not guaranteed to do any better than the initial training. Interference will happen, whether as catastrophic forgetting or as gradual drift. We can fantasize about future architectures all day long, but that doesn't make them capable of AGI or even give us a path forward.
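The point about self-feedback can be sketched in a few lines. This is a toy stand-in, not a real LLM: `WEIGHTS` and `generate` are hypothetical names, and the hash merely emulates a deterministic pure function of frozen parameters plus a prompt. Inference reads the weights but never writes them, so looping output back into input changes nothing.

```python
import hashlib

# Hypothetical stand-in for an LLM: fixed "weights" plus a pure
# generate() function that reads them but never modifies them.
WEIGHTS = "frozen-parameters-v1"  # stands in for trained model weights

def generate(prompt: str) -> str:
    # Output depends only on the frozen weights and the prompt.
    return hashlib.sha256((WEIGHTS + prompt).encode()).hexdigest()[:16]

weights_before = WEIGHTS
text = "seed prompt"
for _ in range(1000):  # feed the output back in as the next prompt
    text = generate(text)

# However many self-feedback rounds run, the weights are untouched.
assert WEIGHTS == weights_before
```

Each round produces a new string, but only the output varies; nothing persists in the "model" between calls.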