Can't disagree more. Talent is built and perfected upon thousands of hours of practice; LLMs just make you lazy. One thing people with seniority in the field, as I guess you are, don't realize is that LLMs don't help develop "muscle memory" in young practitioners; they just make them miserable, often caged in an infinite feedback loop of bug fixing or trying to untangle a code mess. They may extract some value by using them for studying, but I doubt it, and it only goes so far. When I started, I remember being able to extract so much knowledge just by reading a book about algorithms, trying to reimplement things, breaking them, and so on. Today I can use an LLM because I'm wise enough to spot wrong answers, but I still feel I'm becoming a bit lazy.
I strongly agree with this comment. Anecdotal evidence time!
I'm an experienced dev (20 years of C++ and plenty of other stuff), and I frequently work with younger students in a mentor role, e.g. I've done Google Summer of Code three times as a mentor, and am also in KDE's own mentorship program.
In 2023/24, when ChatGPT was looming large, I took on a student who was of course attempting to use AI to learn and who was enjoying many of the obvious benefits - availability, tailoring information to his inquiry, etc. So we cut a deal: We'd use the same ChatGPT account and I could keep an eye on his interactions with the system, so I could help him when the AI went off the rails and was steering him in the wrong direction.
He initially made fast progress on the project I was helping him with, and was able to put more working code in place than others in the same phase. But then he hit a plateau really hard soon after, because he was running into bugs and issues he couldn't get solutions from the AI for and he just wasn't able to connect the dots himself.
He'd almost get there, but would sometimes forget to remove random single lines doing the wrong thing, etc. His mental map of the code was poor, because he hadn't written it himself in that oldschool "every line a hard-fought battle" style that really makes you understand why and how something works and how it connects to problems you're solving.
As a result he'd get frustrated and had bouts of absenteeism next, because there wasn't any string of rewards and little victories there but just listless poking in the mud.
To his credit, he eventually realized leaning on ChatGPT was holding him back mentally and he tried to take things slower and go back to API docs and slowly building up his codebase by himself.
It's like when you play World of Warcraft for the first time and you have this character boost to max level and you use it. You didn't go through the leveling phase and you do not understand the mechanics of your character, the behaviour of the mobs, or even how to get to another continent.
You are directly loaded with all the shiny tools and, while it does make it interesting and fun at first, the magic wears off rather quickly.
On the other hand, when you had to fight and learn your way up to level 80, you have this deeper and well-earned understanding of the game that makes for a fantastic experience.
This is fascinating. The idea of leveling off in the learning curve is one that I hadn't considered before, although with hindsight it seems obvious. Based on your recollection (and without revealing too many personal details), do you recall any specific areas that caused the struggle? For example, was it a lack of understanding of the program architecture? Was it an issue of not understanding data structures? (or whatever) Thanks for your comment, it opened up a new set of questions for me.
A big problem was that he couldn't attain a mental model of how the code was behaving at runtime, in particular the lifetimes of data and objects - what would get created or destroyed when, exist at what time, happen in what sequence, exist for the whole runtime of the program vs. what's a temporary resource, that kind of thing.
The overall "flow" of the code didn't exist in his head, because he was basically taking small chunks of code in and out of ChatGPT, iterating locally wherever he was and the project just sort of growing organically that way. This is likely also what make the ChatGPT outputs themselves less useful over time: He wasn't aware of enough context to prompt the model with it, so it didn't have much to work with. There wasn't a lot of emerging intelligence a la provide what the client needs not what they think they need.
These days tools like aider end up prompting the model with a repo map etc. in the background transparently, but in 2023/24 that infra didn't exist yet and the context window of the models at the time was also much smaller.
In other words, the evolving nature of these tools might lead to different results today. On the other hand, if it had back then, chances are he'd have become even more reliant on them. The open question is whether there's a threshold where it just stops mattering - if the results are always good, does it matter that the human doesn't understand them? Naturally I find that prospect a bit frightening and creepy, but I assume some slice of the work will start looking like that.
> "every line a hard-fought battle" style that really makes you understand why and how something works
Absolutely true. However:
The real value of AI will be to *be aware* when it is at that local optimum, and then - if unable to find a way forward - at least reliably notify the user that that is indeed the case.
Bottom line, the number of engineering "hard-fought battles" is finite, and they should be chosen very wisely.
The performance multiplier that LLM agents brought has changed the world, at least as much as the consumer web did in the '90s, and there will be no turning back.
This is like a computer company around 1980 hiring engineers but forbidding them access to computers for some numerical task.
Funny, it reminds me of the reason Konami MSX1 games look like they do compared to most of the competition: having access to superior development tools - their HP hardware emulator workstations.
If you are unable to come up with a filter for your applicants that is able to detect your own product, maybe you should evolve. What about asking an AI how to solve this? ;)
> As a result he'd get frustrated and had bouts of absenteeism next, because there wasn't any string of rewards and little victories there but just listless poking in the mud.
So as a mentor, you totally talked directly with them about what excites them, tied it to their work, encouraged them to talk about their frustrations openly, helped them develop resilience by showing them that setbacks are part of the process, and helped give them a sense of purpose and see how their work contributes to a bigger picture, to directly address the side effects of being a human with emotions, which could have happened regardless of the tool they used, and didn't just let them flounder because of your personal feelings about a particular tool they used, right? Or do you only mentor winners, and you've never had a mentee hit a wall before LLMs were invented, and never had to help anyone through the kind of emotional lows that an immature intern might need a mentor's help to work through?
So, so, so many people have learnt to code on their own without a mentor. It requires a strong desire to learn and perseverance but it’s absolutely possible.
That you can learn so much about programming from books and open source and trial and error has made it a refuge for people with extreme social anxiety, for whom "bothering" a mentor with their questions would be unthinkable.
Failing for a bit, thinking hard and then somehow getting to the answer - for me it was usually tutorials, asking on stackoverflow/forums, finding a random example on some webpage.
The fastest way for me to learn something new is to find working code, or code that I can kick for a bit until it compiles/runs. Often I'll comment out everything and make it print hello world, and then from there try to figure out which essential bits I need to bring back in, or simplify/mock, etc., until it works again.
I learn a lot more by forming a hypothesis "to make it do this, I need that bit of code, which needs that other bit that looks like it's just preparing this/that object" - and the hypothesis gets tested every time I try to compile/run.
Nowadays I might paste the error into chatgpt and it'll say something that will lead me a step or two closer to figuring out what's going on.
Why is modifying working code you didn't write better than having an AI help write code with you? Is it that the modified code doesn't run until you fix it? It still bypasses the 'hard-won effort' criterion, though?
I forgot to say, the aim is usually to integrate it into a bigger project that I'm writing by myself. The working code is usually for interfacing to libraries I didn't write - I could spend a year reading every line of code for a given library and understanding everything it does, and then realise it doesn't do what I want. The working code is to see what it can do first, or to kick it closer to what I want - only when I know it can do it will I spend the time to fully understand what's going on. Otherwise a hundred lifetimes wouldn't be enough to go through the amount of freely available crapware out there.
Not sure! My own path was very mentor-dependent. Participating in open source communities worked for me to find my original mentors as well. The other participants are incentivized to mentor/coach because the main thing you're bringing is time and motivation--and if they can teach you what you need to know to come back with better output while requiring less handholding down the road, their project wins.
It's not for everyone because open source tends to require you to have the personality to self-select goals. Outside of more explicit mentor relationships, the projects aren't set up to provide you with a structured curriculum or distribute tasks. But if you can think of something you want to get done or attempt in a project, chances are you'll get a lot of helping hands and eager teachers along the way.
Mostly by reading a good book to get the fundamentals down, then taking on a project to apply the knowledge and filling the gaps with online resources. There are good books and nice open source projects out there. You can get far with these just by studying them with determination. Later you can go on to the theoretical and philosophical parts of the field.
How do you know what a good book is? I've seen recommendations in fields I'm knowledgeable about that were hot garbage. Those were recommendations by reputed people for reputed authors. I don't know how a beginner is supposed to start without trying a few and learning some bad habits.
If you're a beginner, almost any book by a reputable publisher is good. The controversial ideas start at the upper intermediate or advanced level. No beginner knows enough to argue about Clean Code or the Gang of Four book.
There is no 'learning' in the abstract. You learn something. Doing tutorials teaches you how to do the things you do in them.
It all comes down to what you wanna learn. If you want to acquire skills doing the things you can ask AI to do, it's probably a bad idea to use it for them. If you want some pointers on a field where you don't even know which keywords are relevant to take to a library, LLMs can help a lot.
If you wanna learn complex context dependent professional skills, I don't think there's an alternative to an experienced mentor.
I have a feeling that "almost getting there" will simply become the norm. I have seen a lot of buggy and almost but not exactly right applications, processes and even laws that people simply have to live with.
If the US can be the world's biggest economy while having an opioid epidemic and writing paper cheques, and if Germany can be Europe's manufacturing hub while using faxes, then surely we as a society can live in the suboptimal state of everything digital being broken 10% of the time instead of half a percent.
This seems to be the way of things. Oral traditions were devastated by writing, but the benefit is another civilization can hold on to all your knowledge while you experience a long and chaotic dark age so you don't start from 0 when the Enlightenment happens.
Years back I worked somewhere where we had to convert documents to PDF to e-fax them to a supplier. We eventually found out that on their end it was just being received digitally and auto-converted to PDF.
It never became paper... So we asked if we could just email the PDF instead of paying for this fax service they wanted.
There was a comment here on HN, I think, that explained why enterprises spend so much money on garbage software. It turned out that the garbage software was a huge improvement on what they did before, so it was still a savings in time and money and easier than a total overhaul.
I wonder what horror of process and machinery the supplier used before the fax->PDF process.
I once worked on a janky, held-together-with-duct-tape-and-bubblegum distributed app written in Microsoft Access. Yes, Microsoft Access for everything, no central server, no Oracle, no Postgres. Data was shared between client and server by HTTP downloads of zipped-up Access .mdb files which got merged into the clients' main database.
The main architect of the app told me, "Before we came along, they were doing all this with Excel spreadsheets. This is a vast improvement!"
Use LLMs. But do not let them be the sole source of your information for any particular field. I think it's one of the most important disciplines the younger generation - to be honest, all generations - will have to learn.
I have a rule for myself as a non-native English speaker: Any day I ask LLMs to fix my English, I must read 10 pages from traditionally published books (preferably pre-2023). Just to prevent LLMs from dominating my language comprehension.
I use LLMs as a translation tool, and make sure to generate JSON flashcards.
Sometimes it is more important to get a point across in another language than it is to learn that language. Since computers can automate this, you can use them to create a backlog of what you skipped learning, so that you keep some control over your habit of not learning what you're saying.
You perfectly encapsulated my view on this. I'm utterly bewildered with people who take the opposing position that AI is essentially a complete replacement for the human mind and you'd be stupid not to fully embrace it as your thought process.
I drove cars before sat nav systems, and when I visited somewhere, I'd learn how to drive there. The second drive would be from memory. However, as soon as I started relying on sat navs, I became dependent on them. I cannot drive to a lot of places that I visited more than once without a sat nav these days (and I'm getting older, that's a part of it too).
I wonder if the same thing will happen with coding and LLMs.
On a roadtrip ten years back we chose to navigate by map and compass, and avoid highways.
With sat nav I don't even try to read the exit signs; I just follow the blue line. It takes me 10-20 drives somewhere before I have the muscle memory, and I never made an active mental effort.
Going somewhere by public transportation or on foot, e.g. through a large homogeneous parking lot complex, I consciously make an effort to take mental pictures so I can backtrack or traverse it perfectly the second time; in spite of that being mentally challenging, it's still the easiest way I have.
I cannot assemble the hardware that I write code for. This is in spite of having access to both the soldering equipment, the parts and the colleagues who are willing to help me.
At some point all skills become abstract; efficiency is traded for flexibility when you keep doing the same thing for a very long time.
I can still drive a stick shift, but maybe not in 20 years.
I can even feel it in my own coding. I've been coding almost my entire life, all the way back to C64 BASIC, and ever since I started relying on Copilot for most of my regular work I can feel my non-AI-assisted coding skills getting rusty.
Spot on. I'm not A Programmer(TM), but I have dabbled in a lot of languages doing a lot of random things.
Sometimes I have qwen2.5-coder:14b whip up a script to do some little thing where I don't want to spend a week doing remedial go/python just to get back to learning how to write boilerplate. All that experience means I can edit it easily enough because recognition kicks in and drags the memory kicking and screaming back into the front.
I quickly discovered it was essentially defaulting to "absolute novice." No error handlers, no file/folder existence checking, etc. I had to learn to put all that into the prompt.
>> "Write a python script to scrape all linked files of a certain file extension on a web page under the same domain as the page. Follow best practices. Handle errors, make strings OS-independent, etc. Be persnickety. Be pythonic."
I'm far from an expert and my memory might be foggy, but that looks like a solid script. I can see someone with less practice trying the first thing that comes out without all the extra prompting, hitting errors, doing battle with debuggers, and not having any clue.
For example: I wrote a thing that pulled a bunch of JSON blobs from an API. Fixing the "out of handles" error is how I learned about file system and network default limits on open files and connections, and buffering. Hitting stuff like that over and over was educational and instilled good habits.
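If you haven't hit that one: each response you leave open holds a socket/file descriptor, and the OS cap (often 1024 by default on Linux) is easy to blow through. A hypothetical sketch of the kind of fix that lesson leads to (the endpoint URL is made up):

    import requests

    BLOB_URL = "https://api.example.com/blob/{}"  # hypothetical endpoint

    # Reuse one session (connection pooling) and close each response promptly,
    # instead of accumulating thousands of open handles until the OS says
    # "Too many open files".
    with requests.Session() as session:
        for i in range(10_000):
            with session.get(BLOB_URL.format(i), timeout=30) as resp:
                resp.raise_for_status()
                blob = resp.json()  # buffered into memory; handle freed on exit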
I hear this argument all the time, and I think “this is exactly how people who coded in assembly back in the day thought about those using higher level programming languages.”
It is a paradigm shift, yes. And you will know less about the implementation at times, yes. But will you care when you can deploy things twice, three times, five times as fast as the person not using AI? No. And also, when you want to learn more about a specific bit of the AI written code, you can simply delve deep into it by asking the AI questions.
The AI right now may not be perfect, so yes you still need to know how to code. But in 5 years from now? Chances are you will go in your favorite app builder, state what you want, tweak what you get and you will get the product that you want, with maybe one dev making sure every once in a while that you’re not messing things up - maybe. So will new devs need to know high level programming languages? Possibly, but maybe not.
1. We still teach assembly to students. Having a mental model of what the computer is doing is incredibly helpful. Every good programmer has such a model in my experience. Some of them learned it by studying it explicitly, some picked it up more implicitly. But the former tends to be a whole lot faster without the stop on the way where you are floundering as a mid level with a horribly incorrect model for years (which I’ve seen many many times).
2. Compilers are deterministic. You can recompile the source code and get the same assembly a million times.
You can also take a bit of assembly then look at the source code of the compiler and tell exactly where that assembly came from. And you can change the compiler to change that output.
3. Source code is written in a formal unambiguous language.
I’m sure LLMs will be great at spitting out green field apps, but unless they evolve to honest to goodness AGI, this won’t get far beyond existing low code solutions.
No one has solved or even proposed a solution for any of these issues beyond “the AI will advance sufficiently that humans won’t need to look at the code ever. They’ll never need to interact with it in any way other than through the AI”.
But to get to that point will require AGI and the AI won’t need input from humans at all, it won’t need a manager telling it what to build.
The point of coding is not to tell a machine what to do.
The point of coding is to remove ambiguity from the specs.
"Code" is unambiguous, deterministic and testable language -- something no human language is (or wants to be).
LLMs today make many implementation mistakes where they confuse one system with another, assume some SQL commands are available in a given SQL engine when they aren't, etc. It's possible that these mistakes will be reduced to almost zero in the future.
But there is a whole other class of mistakes that cannot be solved by code generation -- even less so if there's nobody left capable of reading the generated code. It's when the LLM misunderstands the question, and/or when the requirements aren't even clear in the head of the person writing the question.
I sometimes try to use LLMs like this: I state a problem, a proposed approach, and ask the LLM to shoot holes in the solution. For now, they all fail miserably at this. They recite "corner cases" that don't have much or anything to do with the problem.
Only coding the happy path is a recipe for unsolvable bugs and eventually, catastrophe.
You seem very strongly opinionated and sure of what the future holds for us, but I must remind you that in your example of "from assembly to higher-level programming languages" the demand for programmers didn't go down, it went up, and as companies were able to develop more, more development and more investments were made, more challenges showed up, new jobs were invented, and so on... You get where I'm going... The thing I'm questioning is how lazy new technologies make you. Many programmers even before LLMs had no idea how a computer works and only programmed in higher-level languages; it was already a disaster, with many people claiming software was bad and the industry going down a road where software quality matters less and less. Well, that situation turbo-boosted by LLMs, because "doesn't matter, I can deploy 100 times a day", disrupting user experience, imo won't lead us far.
I think it's a lot more complicated than that. I think it can be used as a tool for people who already have knowledge and skills, but I do worry how it will affect people growing up with it.
Personally I see it more like going to someone who (claims) to know what they're doing and asking them to do it for me. I might be able to watch them at work and maybe get a very general idea of what they're doing but will I actually learn something? I don't think so.
Now, we may point to the fact that previous generations railed at the degeneration of youth through things like pocket calculators or mobile phones but I think there is a massive difference between these things and so-called AI. Where those things were tools obligatorily (if you give a calculator to someone who doesn't know any formulae it will be useless to them), I think so-called AI can just jump straight to giving you the answer.
I personally believe that there are necessary steps that must be passed through to really obtain knowledge and I don't think so-called AI takes you through those steps. I think it will result in a generation of people with markedly fewer and shallower skills than the generations that came before.
AI will let some people conquer skills otherwise out of their reach, with all the pros and cons of that. It is exactly like the example someone else brought up of not needing to know assembly anymore with higher level languages: true, but those who do know it and can internalize how the machines operate have an easier time when it comes to figuring out the real hard problems and bugs they might hit.
Which means that you only need to learn machine language and assembly superficially, and you have a good chance of being a very good programmer.
However, where I am unsure how the things will unfold is that humans are constantly coming up with different programming languages, frameworks, patterns, because none of the existing ones really fit their mental model or are too much to learn about. Which — to me at least — hints at what I've long claimed: programming is more art than science. With complex interactions between a gazillion of mildly incompatible systems, even more so.
As such, for someone with strong fundamentals, AI tools never provided much of a boon to me (yet). Incidentally, neither did StackOverflow ever help me: I never found a problem that I struggled with that wasn't easily solved with reading the upstream docs or upstream code, and when neither was available or good enough, SO was mostly crickets too.
These days, I rarely do "gruntwork" programming, and only get called in on really hard problems, so the question switches to: how will we train the next generation of software engineers who are going to be called in for those hard problems?
Because let's admit it, even today, not everybody can handle them.
It is if the way to learn is doing it without a tool. Imagine using a robot to lift weights if you want to grow your own muscle mass. "Robot is a tool"
"Growing your own muscle mass" is an artificial goal that exists because of tools. Our bodies evolved under the background assumption that daily back-breaking labor is necessary for survival, and rely on it to stay in good operating conditions. We've since all but eliminated most of that labor for most people - so now we're forced to engage in otherwise pointless activity called "exercise" that's physically hard on purpose, to synthesize physical exertion that no longer happens naturally.
So obviously, if your goal is strictly to exert your body, you have to... exert your body. However, if your goal is anything else, then physical effort is not strictly required, and for many people, for many reasons, it is often undesirable. Hence machines.
And guess what, people's overall health and fitness have declined. Obesity is at an all time high. If you're in the US, there is a 40% chance you are obese. Your body likely contains very little muscle mass, and you are extremely likely to die of side effects of metabolic syndrome.
People are seeing the advent of machines replace all physical labor and transportation, not gradually like in the 20th century, but within the span of a decade: going from the average physical exertion of 1900 to the average modern lack of physical exertion (take a car every day, do no manual labor, no movement).
They are saying that you need exercise to replace what you are losing, that you need to train your body to keep it healthy and can't just rely on machines/robots to do everything for you, because your body needs that exertion; and your answer is to say "now that we have robots there is no need to exercise even for exercise's sake". A point that's pretty much wrong, as modern-day physical health shows.
>And guess what, people's overall health and fitness have declined.
Have you seen what physical labor does to a man's body? Go to a developing country to see it. Their 60 year olds look like our 75 year olds.
Sure, we're not as healthy as we could be with proper exercise and diet. But in the long run, sitting on your butt all day is better for your body than hard physical labor.
You've completely twisted what the parent post was saying, and I can't help but laugh out loud at claims like:
> there is a 40% chance you are obese.
Obesity is not a random variable — "darn, so unlucky for me to have fallen in the 40% bucket of obese people on birth": you fully (except in rare cases) control the factors that lead to obesity.
A solution to obesity is not to exercise but a varied diet, and eating less of it to match your energy needs (or be under when you are trying to lose weight). While you can achieve that by increasing your energy needs (exercise) and maintain energy input, you don't strictly have to.
Your link is also filled with funny "science" like the following:
> Neck circumference of more than 40.25 cm (15.85 in) for men ... is considered high-risk for metabolic syndrome.
Darn, as a 195cm / 6'5" male and neck circumference of 41cm (had to measure since I suspected I am close), I am busted. Obviously it correlates, just like BMI does (which is actually "smarter" because it controls for height), but this is just silly.
Since you just argued a point someone was not making: I am not saying there are no benefits to physical activity, just that obesity and physical activity — while correlated, are not causally linked. And the problems when you are obese are not the same as those of being physically inactive.
Hate to disagree with you over GP, with whose comment I mostly disagree too, but:
> you fully (except in rare cases) control the factors that lead to obesity.
Not really, unless you're a homo economicus rationalus and are fully in control of yourself, independent of physical and social environment you're in. There are various hereditary factors that can help or hinder one in maintaining their weight in times of plenty, and some of the confounding problems are effectively psychological in nature, too.
> A solution to obesity is not to exercise but a varied diet, and eating less of it to match your energy needs
I've seen reported research bounce back and forth on this over the years. Most recent claim I recall is that neither actually does much directly, with exercise being more critical than diet because it helps compensate for the body oversupplying energy to e.g. the immune system.
I mean, obviously "calories in, calories out" is thermodynamically true, but then your body is a dynamic system that tries to maintain homeostasis, and will play all kinds of screwy games with you if you try to cut off its energy, or burn it off too quickly. Exercise more? You might start eating more. Eat less? You might start to move less, or think slower, or starve less essential (and less obvious) aspects of your body. Or induce some extreme psychological reactions like putting your brain in a loop of obsessive thinking about food, until you eat enough, at which point the effect just switches off.
Yes, most people have a degree of control over it. But that degree is not equally distributed - some people play in "easy mode", some people play in "god mode", helped by strong homeostasis maintaining healthy body weight, some people play in "hard mode"... and then some people play in "nightmare mode" - when body tries to force you to stay below healthy weight.
> I've seen reported research bounce back and forth on this over the years. Most recent claim I recall is that neither actually does much directly, with exercise being more critical than diet because it helps compensate for the body oversupplying energy to e.g. the immune system.
Hah, I understood what I think is the same study you're referring to as saying exactly that exercise does not help: people who regularly walked 60km a day did not get "sick" any less, because in people who did not "exercise" that much, the excess energy was instead used on the immune system responding too aggressively when it didn't need to — basically, you'll use the same energy, just for different purposes. Perhaps I am mixing up the studies or my interpretation is wrong.
And there are certainly confounding factors to one "controlling" their food intake, but my point is that it's not really random with a "40% chance" of you eating so much to become obese.
Also note that restoring the equilibrium (healthy weight, whatever that's defined to be) is more subject to the factors you bring up than maintaining it once there — as in, rarely do people become obese and keep becoming more and more obese; they reach a certain equilibrium, but then have a hard time going through a food/energy deficit due to all the heavy adaptations the body and mind put us through.
And yes, those in "nightmare mode" have their own struggles, and because of such focus on obesity, they are pretty much disregarded in any medical research.
My "adaptation" for keeping a pretty healthy weight is that I am lazy to prepare food for myself, and then it only comes down to not having too many snacks in the house — trickier with kids, esp if I am skipping a family meal (I'll prepare enough food for them, so again, need to try not to eat the left-overs :D). So I am fully cognizant that it's not the same for everyone, but it's still definitely not "40% chance" — it's a clear abuse of the statistical language.
It could be a simple lifestyle that makes you "fit" (lots of walking, working a not-too-demanding physical job, a physical hobby, biking around...).
The parent post is saying that technological advance has removed the need for physical activity to survive, but all of the gym rats have come out of the woodwork to complain how we are all going to die if we don't hit the gym, pronto.
- Physical back-breaking work has not been eliminated for most people.
- Physical exercise triggers biological reward mechanism which make exercise enjoyable and, er, rewarding for many people (arguable for most people as it is a mammalian trait) ergo it is not undesirable. UK NHS calls physical exercise essential.
> Physical back-breaking work has not been eliminated for most people.
I said most of it for most people specifically to avoid the quibble about mechanization in poorest countries and their relative population sizes.
> Physical exercise triggers biological reward mechanism which make exercise enjoyable and, er, rewarding for many people
I envy them. I'm not one of them.
> ergo it is not undesirable
Again, I specifically said "and for many people, for many reasons, is often undesirable" as to not have to spell out the obvious: you may like the exercise benefits of a physically hard work, but your boss probably doesn't - reducing the need for physical exertion reduces workplace injuries, allows worker to do more for longer, and opens up the labor pool to physically weaker people. So even if people only ever felt pleasure from physical exertion, the market would've been pushing to eliminate it anyway.
> UK NHS calls physical exercise essential.
They wouldn't have to if people actually liked doing it.
Equally, if you just point to your friend and say "that's Dave, he's gonna do it for me", they won't give you the job. They'll give it to Dave instead.
That much is true, but I've seen a forklift operator face a situation where pallet of products fell apart and heavy items ended up on the floor. Guess who was in charge of picking them up and manually shelving them?
The claim was that it's lazy to use a tool as a substitute for learning how to do something yourself. But when the tool entirely obviates the need for doing the task yourself, you don't need to be able to do it yourself to do the job. It doesn't matter if a forklift driver isn't strong enough to manually carry a load, similarly once AI is good enough it won't matter if a developer doesn't know how to write all the code an AI wrote for them, what matters is that they can produce code that fulfills requirements, regardless of how that code is produced.
> once AI is good enough it won't matter if a developer doesn't know how to write all the code an AI wrote for them, what matters is that they can produce code that fulfills requirements, regardless of how that code is produced.
Once AI is that good, the developer won't have a job any more.
The whole question is whether the AI will ever get that good.
All evidence so far points to no (just like with every tool — farmers are still usually strong men even if they've got tractors that are thousands of times stronger than any human), but that still leaves a bunch of non-great programmers out of a job.
Tool use is fine, when you have the education and experience to use the tools properly, and to troubleshoot and recover when things go wrong.
The use of AI is not just a labour saving device, it allows the user to bypass thinking and learning. It robs the user of an opportunity to grow. If you don't have the experience to know better it may be able to masquerade as a teacher and a problem solver, but beyond a trivial level relying on it is actively harmful to one's education. At some point the user will encounter a problem that has no existing answer in the AI's training dataset, and come to realise they have no real foundation to rely on.
Code generative AI, as it currently exists, is a poisoned chalice.
The point he's making is, we still have to learn to use tools, no? There still has to be some knowledge there, or else you're just sat sifting through all the crap the AI spits out endlessly for the rest of your life. The OP wrote his comment like it's a complete replacement rather than an enhancement.
Tools help us to put layers of abstraction between us and our goals. When things become too abstracted, we lose sight of what we're really doing or why. Tools allow us to feel smart and productive while acting stupidly, and against our best interests. So we get fascism and catastrophic climate change, stuff like that. Tools create dependencies. We can't imagine life without our tools.
"We shape our tools and our tools in turn shape us" said Marshall McLuhan.
For learning it can very well be. And it also really depends on the tool and the task. A calculator is a fine tool, but a symbolic solver might be a few steps too far if you don't already understand the process, and possibly the start and end points.
The problem with AI is that it is often a black-box tool, and not even a deterministic one.
AI as applied today is pretty deterministic. It does get retrained and tuned often in most common applications like ChatGPT, but without any changes, you should expect a deterministic answer.
Copying and pasting from stack overflow is a tool.
It’s fine to do in some cases, but it certainly gets abused by lazy incurious people.
Tool use in general certainly can be lazy. A car is a tool, but most people would call an able bodied person driving their car to the end of the driveway to get the mail lazy.
I think the same kind of critical thinking that was required to reimplement and break algorithms must now be used to untangle AIs answers. In that way, it's a new skill, with its own muscle memory. Previously learnt skills like debugging segfaults slowly become less relevant.
I assume you disagree with there being such a thing as "responsible AI use", because besides that, I completely agree with everything you write, including my own experience of "spot wrong answers, but feel becoming lazy".
So I suppose you think that becoming lazy is always irresponsible?
It seems to me, then, that either the Amish are right, or there is a gray zone.
Being a CS teacher, my use of "responsible AI use" probably comes from a place of need: If I can say there is responsible AI use, I can pull the brake maybe a little bit for learners. It seems like LLMs in all their versatility are a great disservice to students. I'm not convinced it's entirely bad, but it is overwhelmingly bad for weak learners.
Let me give you an example from yesterday. I was learning Tailwind and had a really long class attribute on a div which I didn't like. I wanted to split it and found a way to do it using my JavaScript framework (the new way to do it was suggested by DeepSeek). When I started writing the list of classes by hand in the new format, Copilot gave me an autocomplete suggestion after I wrote the first class. I pressed tab and it was done.
I showed this to my new colleague, who is a bit older than me and sort of had similar attitudes to yours. He told me he can do the same with some multi-cursor shenanigans, and I'll be honest, I wasn't interested in his approach. It seems like he would've taken more time to solve the same problem even though he had a superior technique to mine. He said sure, it takes longer, but I need to verify by reading the whole class list and that's a pain; but I just reloaded the page and it was fine. He still wasn't comfortable with me using Copilot.
So yes, it does make me lazier, but you could say the same about using Go instead of C or any higher-level abstraction. These tools will only get better and more correct. It's our job to figure out where it is appropriate to use them and where it isn't. Going to either extreme is where the issue is.
Remember though that laziness, as I learned it in computing, is kind of "doing something later": you might have pushed the change/fix faster than your senior fellow programmer, but you still need to review and test that change, right? Maybe the change you're talking about was really trivial and you just needed to refresh your browser to see it, but when it's not, being lazy about a change will only make you suffer more when reviewing the PR and testing that the non-trivial change works for thousands of customers with different devices.
The problem is he wasn't comfortable with my solution even though it was clearly faster and it could be tested instantly. It's a mental block for him and a lot of people in this industry.
I don't advocate blindly trusting LLMs. I don't either and of course test whatever it spits out.
Testing usually isn’t enough if you don’t understand the solution in the first place. Testing is a sanity check for a solution that you do understand. Testing can’t prove correctness, it can only find (some) errors.
LLMs are fine for inspiration in developing a solution.
I wouldn’t say it’s laziness. The thing is that every line of code is a burden as it’s written once, but will be read and edited many times. You should write the bare amount that makes the project work, then make it readable and then easily editable (for maintenance). There are many books written about the last part as it’s the hardest.
When you take all three into consideration, an LLM won’t really matter unless you don’t know much about the language or the libraries. When people go on about Vim or Emacs, it’s just that they make the whole thing go faster.
100%. Learning is effort. Exercising is effort. Getting better at anything is effort. You simply can't skip the practice; that's the reality. If you want to *learn* something from scratch, AI will only help with answers, but you still need to put the time in to understand it.
Learning comes from focus and repetition. Talent comes from knowing which skill to use. Using AI effectively is a talent. Some of us embrace learning new skills while others hold onto the past. AI is here to stay, sorry. You can either learn to adapt to it or you can slowly die.
The argument that AI is bad and anyone who uses it ends up in a tangled mess is only your perspective and your experience. I’m way more productive using AI to help me than I ever was before. Yes, I proofread the result. Yes, I can discern a good response from a bad one.
AI isn’t a replacement for knowing how to code, but it can be an extremely valuable teacher to those orgs that lack proper training.
Any company that has the position that AI is bad, and lacks proper training and incentives for those that want to learn new skills, isn’t a company I ever want to work for.
Yeah, and I'd like to emphasize that this is qualitatively different from older gripes such as "calculators make kids lazy in math."
This is because LLMs have an amazing ability to dream up responses stuffed with traditional signals of truthfulness, care, engagement, honesty, etc., but that ability is not matched by their chances of dreaming up answers and ideas that are logically true.
This gap is inevitable from their current design, and it means users are given signals that it's safe for their brains to think-less-hard (skepticism, critical analysis) about what's being returned at the same moments when they need to use their minds the most.
That's new. A calculator doesn't flatter you or pretend to be a wise professor with a big vocabulary listening very closely to your problems.
It's 2025, not 2015. 'google it and add the word reddit' is a thing. For now.
Google 'reflections on trusting trust'. Your level of trust in software that purports to think for you out of a multi-gig stew of word associations is pretty intense, but I wouldn't call it pretty sensible.
That's not what my comment implies. I'm just saying that relying solely on LLMs makes you lazy, like relying just on Google/StackOverflow or whatever; it doesn't shift you from a resource that can be laid off to a resource that can't. You must know your art, and use the tools wisely.
Feeling "Lazy" is just an emotion, which to me has nothing to do with how productive you are as a human. In fact the people not feeling lazy but hyped are probably more effective and productive. You're just doing this to yourself because you have assumptions on how a productive/effective human should function. You could call it "stuck in the past".
I have the idea that there are 2 kinds of people, those avidly against AI because it makes mistakes (it sure does) and makes one lazy and all other kinds of negative things, and those that experiment and find a place for it but aren't that vocal about it.
Sure you can go too far. I've heard someone in Quality Control proclaim "ChatGPT just knows everything, it saves me so much time!" To which I asked if they had heard about hallucinations, and they hadn't; they'd just been copying whatever it said into their reports. Which is certainly problematic.