Out of interest, have you built an agent? Experienced the power of 500 lines of code run in a loop with uncapped tool calls? It's a different experience and outcome from copy-pasting out of a chat interface.
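For anyone who hasn't: the core of such an agent really is just a loop. A minimal sketch in Python, with the model stubbed out (a real agent would call an LLM API where `stub_model` sits; the names here are all hypothetical):

```python
# Minimal sketch of the "model in a loop with tools" idea.
# The model decides each step; tool outputs are fed back into the transcript.

def run_agent(model, tools, task, max_steps=10):
    """Loop: ask the model for the next action, run any tool it
    requests, append the result, repeat until it answers."""
    transcript = [task]
    for _ in range(max_steps):
        action = model(transcript)
        if action["type"] == "answer":
            return action["text"]
        # Tool call: execute it and feed the output back to the model.
        output = tools[action["tool"]](*action["args"])
        transcript.append(f"{action['tool']} -> {output}")
    return None  # step budget exhausted

# Stub model: calls the tool once, then answers from the transcript.
def stub_model(transcript):
    if len(transcript) == 1:
        return {"type": "tool", "tool": "add", "args": (2, 3)}
    return {"type": "answer", "text": f"result: {transcript[-1]}"}

tools = {"add": lambda a, b: a + b}
print(run_agent(stub_model, tools, "what is 2 + 3?"))  # result: add -> 5
```

Everything else (prompt templates, retries, sandboxing) is layered on top of that loop.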
> You can optimize agents all the way down your conveyor belts, but none of it will replace the discretionary management of a human.
Concur! My personal goal is to reduce the number of times I need to get down into the weeds so I can spend more time with my kids.
> Having a thousand automated AI robots to maintain your code may be no more useful than a thousand chimps and typewriters to help you write Shakespeare.
Perhaps; time will tell. There are classes of activities I do every day that could, right now, be automated through agents running with unlimited tool calls. Think of something like Renovatebot: why don't we have a Renovatebot for that class of KTLO?
I wrote one with Google's BERT in 2020 because I was hair-on-fire ecstatic over the idea. Cargo-culted an inference library and hooked it into a Slack bot to post the changelogs. You can guess how that turned out, but yes, at one point I shared the dream. Nothing I've seen since, not even the pace of Claude and ChatGPT releases, has motivated me to try again.
My worry is that you're getting too hyped up when there isn't really any serious evidence the issues can be solved. It would be cool if they were, but again, refer to the flying car: great dream, but avgas isn't getting any cheaper. Nor is pilots' insurance.
> Like, why don't we have Renovatebot for that class of KTLO?
Liability? If you don't keep the lights on, the business is critically impaired. An AI agent doesn't use the right address when sending the power bill: a cute error in testing, a catastrophic error in real life. How do we, as engineers, realistically stop an AI from doing that? How can we introduce heuristic variability without opening avenues for catastrophic, unfixable failure? You might be throwing developers under the bus by advocating for them too strongly here. "Pushbutton idempotency" and "a robot that cleans my room" are a square peg in a round hole.
With respect, a lot has changed since 2020. I appreciate your replies, and your points are valid. There is a lot to be solved. There's some stuff that should not be automated, but there's definitely some stuff that should.
To my mind this is core to the whole thing. Yes, we could make 1000 robots that clean up codebases overnight, but until we can answer the above question, we should definitely, absolutely, not do that.
Absolutely! I kind of hate the term "vibe coding" because of its association with switching your brain off. It is so important for an engineer to take accountability for what they ship.
Now to your ponderoo about libraries: something I've found really fascinating is that I've mostly stopped using open source libraries unless there's a network ecosystem effect, like Tailwind. For everything else, it's much easier to code-generate it. And if there's something wrong with the implementation, I can take ownership and accountability for it and fix it with a couple more prompts. No more open source bullshit: a maintainer who's abandoned the project, waiting to get a pull request merged, supply chain attack vectors from project takeovers, all that noise. It just doesn't exist anymore. It's really changed how I do software development.
Exactly. One of my favorite things to do is to dump a code path into the context window and ask it to generate a mermaid sequence diagram or a class diagram explaining how everything is connected together.
I was at Penn's first AI conference last year and heard Dr Lilach Mollick's keynote, where she said this has been shown to be true over and over. She doesn't seem to publish often, but her husband Ethan always has a lot to say about AI.
No, it is actually a critical skill. Employers will be looking for software engineers that can orchestrate their job function and these are the two key primitives to do that.
The way it's written suggests this is an important interview question for any software engineering position, and I'm guessing you agree, given you say it's critical.
But by the same logic, should we be asking for the same knowledge of the Language Server Protocol and tools like Tree-sitter? They're integral right now in the same way these new tools are expected to become (and have become for many).
As I see it, knowing the internals of these tools might be the thing that makes the hire, but it's not something you'd screen every candidate who comes through the door with. It's worth asking, but not "critical." Usage of these tools? Sure. But knowing how they're implemented is just one indicator of whether a developer is curious and willing to learn about their tools, and you need many such indicators to get an accurate assessment.
Understanding how to build an agent and how Model Context Protocol works is going to be, by my best guess, the new "what is a linked list and how do you reverse a linked list" interview question in the future. Sure, new abstractions are going to come along, which means that you could perhaps be blissfully unaware about how to do that because there's a higher order function to achieve such things. But for now, we are at the level of C and, like C, it's essential to know what those are and how to work with them.
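For what it's worth, the classic linked-list question mentioned above fits in a few lines of Python (the helper functions are just for illustration):

```python
class Node:
    """Singly linked list node."""
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def reverse(head):
    """Reverse the list iteratively: O(n) time, O(1) extra space."""
    prev = None
    while head:
        nxt = head.next   # save the rest of the list
        head.next = prev  # point the current node backwards
        prev = head       # advance prev
        head = nxt        # advance head
    return prev           # prev is the new head

def from_list(values):
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

print(to_list(reverse(from_list([1, 2, 3]))))  # [3, 2, 1]
```

The point of the question was never the list itself; it's whether the candidate can reason about pointers and state, the same way agent and MCP questions probe whether they can reason about loops, tools, and context.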
See the tweet on workflow that I put in the post. No courseware, no bullshit, it's there. Have fun. The blog has plenty of guidance from using specs to creating standard libraries of prompts and how to clone a venture capital-backed company while you sleep.
FWIW, since I just realized my main comment was a criticism: I found your article very insightful. It baffles me how many people will disagree with the general premise or nit-pick one tiny detail. The only thing more surprising to me than the rate at which AI is developing is the number of developers jamming their heads into the sand over it.
Hey dude, it's been a couple weeks since we caught up for a zoom. Maybe it's three weeks now. Still keen to catch up again, dude. It's gonna be a little bit busy for the next couple of weeks. I've got two conference talks to do then I'm gonna be over in San Fran, but keen.
Forgive me if I don't consider your personal blog an authority of your honesty. "I'm not a liar, for real dude see it says right there"
Why are all your public projects "joke/toy projects" if AI is so awesome and production-ready? My experience suggests the same, and your work backs up my experience rather than your words.
To avoid being only snark: I think all software is about power/control, and software has allowed an unprecedented concentration of power, which is why it resists being formalized like other industries. No one with power wants restrictions on their power/software. Ultimately AI is good for small-ish projects and is a productivity multiplier (eventually). I think it will lead a new revolution in displacing the current incumbents in the business world that are stagnant on vast proprietary software systems that previously could not be unseated. Small players will be able to codify/automate their business to make it competitive with big players. So I'm not "anti-AI".
edit: AI will simultaneously rot the proprietary software advantage from the inside out, as companies become further convinced that AI can solve their problem of having to pay people to maintain their software.
I think it's pretty counter-productive to default to not trusting anyone under any circumstances.
Having a "disclosures" page on a personal website is a pretty strong quality signal for me - it's inspired me to set a task to add my own.
As with all of these things, the trick is to establish credibility over time. I've been following Geoff for a few years. Given his previous work I think he has integrity and I'm ready to believe his disclosures page.
However, we seem to live in a time where integrity is basically valued at zero, or more commonly treated as something to "bank" so you can cash it in for an enormous payoff when the time comes. I agree he seems authentic, and therefore valuable, which means an AI company can come and offer him 7-8 figures to build hype. I think it's hard for people to truly grasp just how much money is flying around in hype cycles. Those numbers are not unrealistic. That's set-for-life money; not many are in a position to refuse that kind of wealth. (He lives in a van, just saying.)
I hope he is one of the few authentic people left but the world has left me jaded.
Secretly offering someone 7-8 figures to hype for you is a big business risk to take on.
If details of that deal leak, it's a big embarrassment for the company.
In the USA it is also illegal. There are substantial FTC fines to worry about. If it affects the stock price it could be classified as securities fraud (Matt Levine will happily tell you that "everything is securities fraud").
>If details of that deal leak, it's a big embarrassment for the company.
Intermediaries.
Also, IMO the risk of someone who is not already rich turning down that kind of money is so close to zero that it is effectively zero. No risk.
If everything is securities fraud, then by that logic it isn't much of a deterrent when making sketchy deals. Also, as you note twice, it only matters if the company is public anyway. Hmmm, is OpenAI public? Are any of the AI players besides MS, Oracle, Google? Short answer: no.
I'm not sure why, with all the public unpunished criminal behavior we see nowadays, you have such trouble believing that there really are lots of paid shills for such a hyped product.
I'm not accusing you of being a paid shill. My core argument is that lots of paid shills for AI have been created over the last two years, and counting.
I of course will never have the hard evidence to prove it so inferring is all I can do or point people to.
HN seems to have a high tolerance for suspected/potential white-collar crimes, so I don't expect many allies on here. The mindset seems to be that the ends justify the means.