If you're talking about all of AI with that statement, I think you may need to reconcile that opinion with the fact that ChatGPT alone has almost 1 billion daily users. Clearly lots of people derive enormous value from AI.
If there's something more specific and different you were referring to, I'd love to hear what it is.
I’m probably an outlier: I use ChatGPT/Gemini for specific purposes, but AI summaries on e.g. Google Search or YouTube give me negative value (I never read them and they take up space).
I can't say I find them 100% useless - though I'd rather they not pop up by default - and I understand why people like them so much. They let people type in a question and get a confident, definitive answer in natural language, which is what it seems like the average person has been trying to do all along with search engines. The issue is that many people take whatever it spits out as 100% true and factual, which is scary.
> Clearly lots of people derive enormous value from AI.
I don’t really see this. Lots of people like freebies, but the value aspect is less clear. AFAIK, none of these chatbots are profitable. You would not see nearly as many users if they had to actually pay for the thing.
> "It doesn't matter whether you want to be a teacher [or] a doctor. All those professions will be around, but the people who will do well in each of those professions are people who learn how to use these tools."
Bullshit. Citation very much needed. It's a shame--a shameful stain on the profession--that journalists don't respond critically to such absurd nonsense and ask the obvious question: are you fucking lying? It is absolutely not true that AI tools make doctors more effective, or teachers, or programmers. It would be very convenient for people like Pichai and Scam Altman, but that don't make it so.
And AI skeptics are waiting to see the proof in the pudding. If we have a new tool that makes hundreds of thousands of devs vastly more productive, I expect to see the results of that in new, improved software. So far, I'm just seeing more churn and more bugs. It may well be the case that in a couple years we'll see the fruits of AI productivity gains, but talk is cheap.
The proof is in the feature velocity of devs/teams that use it, and in the layoffs due to efficiency gains.
I think it's very hard to convince AI skeptics, since for some reason they feel more threatened by it than others. It's counterproductive and will hinder them professionally, but then it's their choice.
Without rigorous, controlled studies, I'm not ready to accept claims of velocity, efficiency, etc. I'm a professional software engineer, and I have tried various AI tools in the workplace, both for code review and development. I found personally that they were more harmful than helpful. But I don't think my personal experience is really important data here, just like I don't think yours is. What matters is whether these tools actually do something, or whether instead they just make some users feel something.
The studies I've seen--and there are very few--seem to indicate the effect is more placebo than pharmacological.
Regardless, breathless claims that I'm somehow damaging my career by wondering whether these tools actually work are going to do nothing to persuade me. I'm quite secure in my career prospects, thank you kindly.
I do admit I don't much like being labeled an "AI skeptic" either. I've been following developments in machine learning for like 2 decades and I'm familiar with results in the field going back to the 1950s. You have the opportunity here to convince me, I want to believe there is some merit to this latest AI summer. But I am not seeing the evidence for it.
You say you've used AI tools for code review and deploys, but do you ever just use ChatGPT as a faster version of Google for things like understanding a language you aren't familiar with, finding bugs in existing code, or generating boilerplate?
Really, I only use ChatGPT and sometimes Claude Code; I haven't used these purpose-built AI tools.
> You have the opportunity here to convince me, I want to believe there is some merit to this latest AI summer. But I am not seeing the evidence for it.
As I said, the evidence is in companies not hiring anymore, since they don't need as many developers as before. If you want rigorous controlled studies, you'll get them in due time. In the meantime, maybe just look into the workflows of how people are using these tools.
re AI skeptics: I started pushing AI in our company early this year, and one of the first questions I got was "are we doing it to reduce costs?" I fully understand and sympathize with the fact that many engineers feel threatened and feel they are being replaced. So I clarified that it's just to increase our feature velocity, which was my honest intention, since ofc I'm not a monster.
I then asked this engineer to develop a feature using bolt, and he partially managed to do it, but in the worst way possible. His approach was to spend no time on planning/architecture and to just ask the AI to do it in a few lines. When hit with bugs he would ask the AI "to fix the bug" without even describing the bug. His reasoning was that if he had to do this prep work anyway, why use AI at all? Nonetheless, he burned through an entire month's worth of credits in a single day.
I can't find the proper words, but there's a certain dishonesty in this attitude that really turns me off. Like TurboTax sabotaging tax reform so they can rent-seek.
> If you want rigorous controlled studies you'll get it in due time.
I hope so, because the alternative is grim. But to be quite honest, I don't expect it'll happen, based on what I've seen so far. Obviously your experience is different, and you probably don't agree--which is fine. That's the great thing about science: when done properly, it transcends personal experience, "common sense", faith, and other imprecise ways of thinking. It obviates the need to agree--you have a result, and if the methodology is sound then, in the famous words of Dr. Malcolm, "well, there it is." My reasons for thinking we won't get results showing AI tooling meaningfully impacts worker productivity are twofold:
(1) Early results indicate it doesn't. Experiences differ, of course, but overall the tools don't seem to be measurably moving the needle one way or the other. That could change over time.
(2) It would be very much in the interests of companies selling AI dev tools to show clearly and inarguably that the things they're selling actually do something. Quantifying this value would help them set prices. They must be analyzing this problem, but they're not publishing or otherwise communicating their findings. Why? I can only conclude it's because the findings aren't favorable.
So given these two indications, at this point in time a placebo-like effect seems most likely. That would not inspire me to sign a purchase agreement. This makes me afraid for the economy.
It's not really about optimism or pessimism; it's effect vs. no effect. Self-reported anecdotes like yours abound, but as far as I'm aware the effect isn't real. That is, it's not in fact true that if a business buys AI tools for its developers, their output will increase in some way that impacts the business meaningfully. So while you may feel more productive using AI tooling, you probably aren't.
No. If you're trying to make a causal link between some layoffs and AI tooling you need to bring the receipts. Show that the layoffs were caused by AI tooling, don't just assume it. I don't think you can, or that anyone has.
I am very much not an AI skeptic--I use AI every day for work--and it's quite clear to me that most of the layoffs of the past few years are correcting for the absurd over-hiring of the Covid era. Every software company convinced itself that it needed 2-3x the workforce it actually did because "the world changed". Then it became clear that the world in fact did not fundamentally change in the ways they thought.
ChatGPT just happened to come out around the same time, so we get all this misattribution.