In practice, not so much. Not in my experience. I have a drive littered with failed AI projects.
And by that I mean projects where I have diligently tried to work with the AI (ChatGPT, mostly, in my case) to get something accomplished, and after hours of work spread over days, the projects don’t work. I shelve them and treat them like cryogenically frozen heads: “Sometime in the future I’ll try again.”
It’s most successful with “stuff I don’t want to RTFM over”. How to git. How to curl. A working example for a library more specific to my needs.
But for anything more ambitious than that, no, I’ve not had success with it.
It’s also nice as a general-purpose wizard-style code generator. But that’s just rote work.
It's true that once you have learned enough to tell the LLM exactly what answer you want, it can repeat it back to you verbatim. The question is how far short of that you should stop because the LLM is no longer an efficient way to make progress.
From a knowledge standpoint, an LLM can give you pointers at any stage.
There’s no way it will “fall short”.
You just have to improve your prompt. In the worst-case scenario you can say, “Please list out all the different research angles I should proceed from here, and which of these might most likely yield a useful result for me.”
My skepticism flares up with sentences like “There’s no way it will ‘fall short’.” Especially in the face of so many first-hand examples of LLMs being wrong, getting stuck, or falling short.
I feel actively annoyed by the amount of public gaslighting I see about AI. It may get there in the future, but there is nothing more frustrating than seeing utter bullshit being spouted as truth.
First, rote work is the kind I hate most and so having AI do it is a huge win. It’s also really good for finding bugs, albeit with guidance. It follows complicated logic like a boss.
Maybe you are running into the problem I did early on. I used to tell it what I wanted. Now I tell it what I want done. I use Claude Code and have it do things one at a time, and for each, I tell it the goal and then the steps I want it to take. I treat it as if it were a high-level programming language. Since I started being more procedural with it, I’ve gotten pretty good results.
They seem pretty good with human language learning. I used ChatGPT to practice reading and writing responses in French. After a few weeks I felt pretty comfortable reading a lot of common written French. My grammar is awful but that was never my goal.
YMMV