LLMs already ground their results in Google searches with citations, and have been doing so for over a year. It's an option with all the big models from OpenAI, Google, and xAI.
And yet they still hallucinate and offer dead links. I've gotten wrong answers to simple questions about historical events and people, with sources that were entirely fabricated or that pointed to dead links on irrelevant sites. Google results don't do that. This is why I use LLMs to help me come up with better searches, which I then run and tune myself. That's where they're valuable: the wordsmithing they can do, given their solid statistics over words and word parts.