> No, not everyone can really use AI to deliver something that works
"That works" is doing a lot of heavy lifting here, and really depends more on the technical skills of the person. Because, shocker, AI doesn't magically make you good and isn't good itself.
Anyone can prompt an AI for answers, but it takes skill and knowledge to turn those answers into something that works. By prompting AI for simple questions, you don't train your own skill and knowledge to answer the question yourself. Put simply, using AI makes you worse at your job - precisely when you need to be better.
"Put simply, using AI makes you worse at your job - precisely when you need to be better."
I don't follow.
Usually jobs require delivering working things. The better the worker knows his tools (like AI), the more he will deliver -> the better he is at his job.
If he cannot deliver reliable, working things because he does not understand the LLM output, then he fails at delivering.
You cannot just reduce programming to "deliver working things", though. For some tasks, sure, "working" is all that matters. For many others, efficiency, maintainability, and other factors matter as well.
You also need to take into account how to judge if something is "working" or not — that's not necessarily a trivial task.
Completely agree. I'm judging the outputs of a process; I'm really only interested in the inputs to that process as a matter of curiosity.
If I can't tell the difference, or if the AI helps you write drastically better code, I see it as no more and no less than, for example, pair programming or using assistive devices.
I also happen to think that most people, right now, are not very good at using AI to get things done, but I expect those skills to improve with time.
Sure, but the output of your daily programming work isn't just the code you write for the company. It's also your own self-improvement, how you work with others, etc. For the record, I'm not just saying "AI bad"; I've come around to some use of AI being acceptable in an interview, provided it's properly assessed.
> Sure, but the output of your daily programming work isn't just the code you write for the company. It's also your own self-improvement, how you work with others, etc
Agreed, but I, as the "end user", care not at all whether you're running a local LLM that you fine-tune, storing it all in your eidetic memory, or writing it down on post-it notes that are all over your workspace[1]. Anything that works, works. I'm results-oriented, and I do care very much about the results, but the methods (within obvious ethical and legal constraints) are up to you.
[1] I've seen all three in action. The post-it notes guy was amazing, though. Apparently he had a head injury at one point and had almost no short-term memory, so he coated every surface in post-its to remind himself. You'd never know unless you saw them.
I think we're agreeing on the aim (good results) but disagreeing on what those results consist of. If I'm acting as a 'company', one that wants a beneficial long-term relationship with a productive programmer, I would rather have [a program that works 90%, a programmer who is 10% better at their job for having written it] as my outputs than a perfect program and a less-good programmer.
I take epistemological issue with that, basically, because I don't know how you measure those things. I believe fundamentally that the only way to measure things like that is to look at the outputs, and I can't tell whether it's the system or the person operating that system that is improving.
What is the difference between a "less good programmer" and a "more good programmer" if you can't tell via their work output? Are we doing telepathy or soul gazing here? If they produce good work they could be a team of raccoons in a trench coat as far as I'm aware, unless they start stealing snacks from the corner store.
There is also a skill in prompting the AI for the right things in the right way in the right situations. It's just like how everyone can use Google and read documentation, but some people are a lot better at it than others.
You absolutely can be a great developer who can't use AI effectively, or a mediocre developer who is very good with AI.
> not everyone can really use AI to deliver something that works.
That's not the assumption. The assumption is that if you prove you have a firm grip on delivering things that work without using AI, then you can also do it with AI.
And that it's easier to test you when you're working by yourself.
And ultimately, this is what this is about, right? Delivering working products.