I agree with you, and I don't want anything related to the current AI craze in my life, at all.
But when I come on HN and see people posting about AI IDEs and vibe coding and everything, I'm led to believe that there are developers who like this sort of thing.
I see using AI for coding as a little different. I'm producing something that is designed for a machine to consume and react to. Code is the means by which I express my aims to the machine; with AI there's an extra layer of machine that transforms my written aims into a language any machine can understand. I'm still ambivalent about it: I'm proud of my code, I like to know it inside out, and surrendering all that feels alien to me. But it's also undeniable that AI has sped up a bunch of the boring grunt work I have to do in projects. You can write, say, an OpenAPI spec and some tests, and tell the AI to do the rest. It's very, very far from perfect, but it remains very useful.
But the fact remains that I'm producing something for a machine to consume. When I see people using AI to e.g. write e-mails for them, that's where I object: that's communication intended for humans. When you fob that off onto a machine, something important is lost.
Partly it's these people all trying to make money selling AI tools to each other, and partly there are a lot of people who want to take shortcuts to learning and productivity without thinking or caring about long-term consequences, and AI offers that.
Even as a principal software developer who is skeptical of and exhausted by the AI hype, I find AI IDEs can be useful. The rule I give my coworkers is: use it where you know what to write but want to save time writing it. Unit tests are great for this. So are quick demos, test benches, and boilerplate and glue code. There are lots of places where trivial, mind-numbing work can be done quickly and effortlessly with an AI. These are cases where it's actually making life better for the developer, not replacing their expertise.
I've also had luck with it helping with debugging. It has the knowledge of the entire Internet, and it can quickly add tracing and run the debugger. It has helped me find some nasty interactions that I had no idea were a thing.
AI certainly has advantages in certain use cases; that's why we have been using AI/ML for decades. The latest wave of models brings even more possibilities. But of course, it also brings a lot of potential for abuse and a lot of hype. I, too, am quite sick of it all and can't wait for the bubble to burst so we can get back to building effective tools instead of making wild claims for investors.
I think you've captured how I feel about it too. If I try to go beyond the scopes you've described, with Cursor in my case and a variety of models, I often end up wasting time unless it's a purely exploratory request.
"This package has been removed, grep for string X and update every reference in the entire codebase" is a great conservative task; easy to review the results, and I basically know what it should be doing and definitely don't want to do it.
"Here's an ambiguous error, what could be the cause?" sometimes comes up with nonsense, but sometimes actually works.
This might be a less popular opinion on a site like HN, but I'm of the opinion that CEOs don't do a whole lot.
Maybe at small startups they are more involved, but the larger the company, the less I think that CEOs or other C-Suite types actually do.
While I also think ChatGPT is over-hyped and largely incorrect in what it says, I would answer your question with a "yes". ChatGPT is perfectly capable of writing/delivering speeches at MS Build or whatever.
> but I'm of the opinion that CEOs don't do a whole lot.
Rather, you just don't understand what CEOs in large public companies actually do. You're comparing them to earlier-stage CEOs, who can be more hands-on.
When running a public company of a quarter million people, the CEO's role starts to look more like that of an asset manager responsible for a $4 trillion book.
And no - nobody wants that role replaced by an LLM.
Just invest in a reasonably diverse index fund (or a few). This is actually the optimal, drama-free way to go for most.
In the long run nobody outperforms consistently anyway. We all get hit by market events.
You may be giving CEOs much more credit than is due. And for all that they actually do, the outperformer is a rarity, not the norm.
An LLM could certainly fit this role, particularly when trained on all the MBA nonsense education in the world. It wouldn’t be the end of the world, and it wouldn’t be substantially better or worse. But it would be cheaper.
I'm sorry, I must be missing something. Which companies make up the index funds if (most) CEOs liquidated their companies and invested in index funds? And how would they liquidate at anything close to their valuation without being priced based on their future expectations?
I don’t think they meant it literally. They were responding to the comment that their job was “like” managing a portfolio of investments. And in that respect the strategy of diversifying “like” with an index fund seemingly appealed to the commenter.
Jeff Bezos once talked about how his job depends on the quality of his decisions. He might not make very many decisions, but they are very high impact. Delegating this high-impact decision making to AI, which often makes random low-quality decisions, sounds like a bad idea.
The CEO also needs to sell those decisions to the organization to get buy-in on the vision and carry it out. How inspired will the organization be by the direction of an AI? No one in an organization will care about this stuff more than the CEO (if they are decent). An AI can’t care, so why would anyone else?
An AI may be making an ultimately random choice (prove the CEO isn't), but its actual options are weighted on statistical grounds from much wider sources of data than a human can knowingly handle.
I say knowingly because the sum total of information accumulated in just a month or two of human activity eclipses even what today's LLMs are trained on.
And CEO decisions are frequently flawed because the information coming up from below is heavily filtered.
Perhaps a crowd-sourced (employee-sourced) decision-making process would be best, drawing on the wisdom of crowds.
> his decision quality is why Amazon is full of fake and low quality garbage these days
Is that actually negatively impacting Amazon's bottom line? One tends to assume they'd do something about it if they viewed it as a serious threat to revenue.
Having been a CEO and around many CEOs... they of course do work. But they don't do 250x the work of another worker; it's just a different type of work. Yet all work from every employee is critical for the enterprise to function. And in my experience, they often do less work than many employees in terms of hours. If you're an on-call engineer, for example, your CEO doesn't get paged and have to wake up in the middle of the night. If you look at any enterprise, the CEO is likely not the one doing the most work. That's kind of the whole point, and the reason they want to become a CEO/founder (to capture a larger share of the wealth for the work they put in).
Capitalist enterprises (with owners vs non-owner workers) are fundamentally non-meritocratic and exploitative. Everyone works to generate value, and only a small class of workers captures the bulk of the value.
The main responsibility of the CEO of a large company is to set the company's culture and make top-level decisions about what the company does and does not choose to do. Depending on the company, they may also bring relationships with executives, investors, and experts inside and outside the company.
Everything else is delegated to lower level executives and staff.
This is better than most phones on the market, but I can't help but be turned off when I scroll down and start seeing the Google Play logo and mentions about AI and Google Gemini.
> If you have to click or browse several results forget it, makes no sense not to use an LLM that provides sources.
I just searched for "What is inherit_errexit?" on Perplexity. Eight sources were provided, and none of them was the most authoritative source, which is the page in the Bash manual that documents it.
Whereas when I searched for "inherit_errexit" using Google Search, that page was the sixth result. And when I searched for "inherit_errexit" using DuckDuckGo, it was the third result.
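For anyone wondering what the option actually does, here's a minimal sketch of the behaviour as I understand it (Bash 4.4+):

    #!/usr/bin/env bash
    set -e
    shopt -s inherit_errexit
    # Without inherit_errexit, command substitutions run with `set -e`
    # switched off, so the `false` below would be ignored and "survived"
    # captured. With it, the subshell exits at `false`, the assignment
    # fails, and the whole script aborts here.
    out=$(false; echo "survived")
    echo "got: $out"   # only reached when inherit_errexit is unset

Which is exactly the sort of subtlety where I'd rather read the manual's own wording than a paraphrase.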
I continue to believe that LLMs are favored by people who don't care about developing an understanding of subjects based on the most authoritative source material. These are people who don't read science journals, they don't read technical specifications, they don't read man pages, and they don't read a program's source code before installing the program. These are people who prioritize convenience above all else.
> I continue to believe that LLMs are favored by people who don't care about developing an understanding of subjects based on the most authoritative source material. These are people who don't read science journals, they don't read technical specifications, they don't read man pages, and they don't read a program's source code before installing the program. These are people who prioritize convenience above all else.
This makes a lot of sense to me. As a young guy in the 90s, I was told that some day "everyone will be fluent in computers", and 25 years later it's just not true. 95% of my peers never developed that fluency, and my kids even less so. The same will hold true for AI: it will be what smartphones were to PCs, a dumbed-down interface for people who want to USE tech, not understand it.
I've really wanted to write a clickbait blog post to submit to HN [0] with the title "Hackers don't use LLMs". You've pretty succinctly summarised how I feel about the subject with your last paragraph.
[0]: not that I write blog posts anyway; it's just a daydream that's been running through my head
Why would you even search for that outside the context of the IDE where you're coding or writing documentation? If you're writing bash, you'd have all those man pages loaded in context for it to answer questions and generate code properly.
Alt + Tab > Ctrl + T > Type > Enter > PgDn > Click > PgDn > Alt + Left > Click > PgDn > Alt + Left > Click > PgDn > Alt + Tab > [Another 45-60 minutes coding] > GOTO Start
With these keybinds (plus mouse clicks, yuck) I can read N sources of information around a topic.
I'm always looking to read around the topic. I don't stop at the first result. I always want to read multiple sources to (a) confirm that's the standard approach (b) if not, are there other approaches that might be suitable (c) is there anything else that I'm not aware of yet. I don't want the first answer. I want all the answers, then I want to make my own choices about what fits with the codebase that I am writing or the problem domain that I'm working in.
Due to muscle memory, I can do the first four or five steps in one or two seconds. Sometimes less.
Switching to the browser puts my brain into "absorb new information" mode, which is a different skill to "do what IDE tells me to do". Because, as a software engineer, my job is to learn about the problem domain and come up with appropriate solutions given known constraints -- not to blindly write whatever code I'm first exposed to by my IDE. I don't work in an "IDE context". I work in a "solving problems with software context".
==
So I agree with the GP. A lot of the posts I see from people saying "why not just use an LLM" seem to be driven by a desire for convenience. Or, more accurately, by unconsidered, blind laziness.
It's okay to be lazy. But be smart lazy. Think and work hard about how to be lazy effectively.
I like to see multiple ideas or opinions on a subject. LLMs seem to distill knowledge and opinions in ways that are more winner-take-all, or at most surface only the top few samples. Even if you prompt for a deeper sampling, it seems the quality drops (like the resolution of each one is reduced), and it's still based on popularity rather than merit for some types of data.
> But when I come on HN and see people posting about AI IDEs and vibe coding and everything, I'm led to believe that there are developers who like this sort of thing.
I cannot explain this.