This is why I initially held off on using ChatGPT: as a computer scientist, I approached it with a hacker mindset, focused on testing adversarial strategies. My takeaway? That isn't the best way to use LLMs. They're actually fantastic tools for quickly understanding a topic and then verifying the information. Not only can they reduce the need for extensive searches, they also help uncover concepts you didn't know existed, especially when you engage in dialogue.
I also find "hallucinations" when I search on Google: pages with false content that rank higher than the real information. With Google, the current issues are ranking quality and the forgetting of information that was previously available.