I know there are a lot of conversations going on about the dangerous elements of LLMs, but it’s nice to read a story like this alongside them — it’s a reminder of the remarkable potential of this new technology.
Could you have found out about the severity of the problem some other way? Maybe, sure! But the fact that you had this LLM to ask, and it so easily understood whatever info you gave it, asked clarifying follow-up questions, and gave you info/directions in the way you needed to hear them: it's such a new way of interacting with computers, almost like an API but made for humans to use, and cases like this are where that value shines most clearly IMO.
Thank you for sharing this feedback! Yes, the big unlock for me was using the LLM to explain things to me on my own terms, and that's what ultimately gave it more persuasive power than even a simple (non-personalized) Google search.