| AI is orders of magnitude more useful and transformative than Facebook was in 2005
It had better be; it's taken over 40,000x the funding.
The question is not whether AI is useful; it's whether AI is useful enough relative to the capital expectations surrounding it. And those expectations are higher than anything the world has ever seen.
Investors' problem, obviously. If you care about the ecological or moral aspects of meat consumption, we already have way more than enough affordable/healthy/tasty alternatives.
Several centuries ago the peasants got uppity. Thought they deserved to eat meat and other decent food. Thought they deserved to wear something other than rags. It wasn't easy to fix, but our masters have ground away this whole time to come up with a solution. You will eat maggots and you will be happy.
With regard to Etsy, hand-made crafts don't scale, so a VC-backed startup built around them was never going to be able to resist this. The only hope would be a highly moderated, curated Craigslist-style website that was happy to pay the bills, pay some salaries, and keep the lights on while maintaining integrity.
Craft fairs, though, have no such excuse or reason. There should be no profit-maximizing at local craft fairs. They're a bellwether for the degradation of culture.
I use AI as a rubber duck to research my options, sanity-check my code before a PR, and give me a heads up on potential pain points going forward.
But I still write my own code. If I'm going to be responsible for it, I'm going to be the one who writes it.
It's my belief that velocity up front always comes at a cost down the line. That's been true for abstractions, for frameworks, for all kinds of time-saving tools. Sometimes that cost is felt quickly, as we've seen with vibe coding.
So I'm more interested in using AI in the research phase and to increase the breadth of what I can work on than to save time.
Over the course of a project, all approaches, even total hand-coding with no LLMs whatsoever, likely regress to the mean when it comes to hours worked. So I'd rather go with an approach that keeps me fully in control.
Yeah, my guess is that it takes roughly the same amount of time regardless of whether it's AI agents or hand-coding; the time just gets spent in different ways (writing vs. reading, for example).
My question is why use AI to output javascript or python?
Why not output everything in C and ASM for 500x performance? Why use high level languages meant to be easier for humans? Why not go right to the metal?
If anyone's ever tried this, it's clear why: AI is terrible at C and ASM. But that cuts to what AI is at its core: it's not actual programming, it's mechanical reproduction.
Which means its weaknesses in C and ASM don't disappear when you use it for higher-level languages. They're still there, just temporarily smoothed over by larger training datasets.
My small-program success story with genAI coding is pretty much the opposite of your claim. I used to use a bash script with a few sox instances piped into each other to beat-match and mix a few tracks. Couldn't use a GUI... Then came gpt-5, and I wanted to test it anyway. So I had it write a single-file C++ program that does the track database, offline mixing, a limiter, and a small REPL-based "UI" to control the thing. I basically had results before my partner was finished preparing breakfast.

Then I had a lot of fun bikeshedding the resulting code until it felt like something I'd like to read. Some back and forth, pretending to have an intern and just reviewing/fixing their code. During the whole experience, it basically never generated code that wouldn't compile. There was a single segfault, due to an unclear interface to a C library; got that fixed quickly.
And now I have a tool that makes a (shuffled, if I want) beat-matched mix of all the tracks in my db that match a certain tag expression. Type "(dnb | jungle) & vocals", wait a few minutes, and play a 2-hour beat-matched mix, finally replacing mpd's "crossfade" feature. I get a lot of joy out of using that tool, and it was definitely fun having it made. clmix[1] is now something I use almost daily to generate club-style mixes to listen to at home.
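(For the curious: a tag filter like that is just a tiny recursive-descent parser over "&", "|", "!" and parentheses. A hypothetical TypeScript sketch of the idea, not clmix's actual code:)

    // Evaluate a tag expression such as "(dnb | jungle) & vocals" against one
    // track's tag set. Precedence: "!" over "&" over "|". No error handling;
    // purely illustrative.
    function evalTagExpr(expr: string, tags: Set<string>): boolean {
      let pos = 0;
      const peek = () => expr[pos];
      const skipWs = () => { while (expr[pos] === " ") pos++; };

      function parseOr(): boolean { // expr := term ("|" term)*
        let result = parseAnd();
        for (skipWs(); peek() === "|"; skipWs()) {
          pos++;
          result = parseAnd() || result; // parse first, so input is consumed
        }
        return result;
      }
      function parseAnd(): boolean { // term := atom ("&" atom)*
        let result = parseAtom();
        for (skipWs(); peek() === "&"; skipWs()) {
          pos++;
          result = parseAtom() && result;
        }
        return result;
      }
      function parseAtom(): boolean { // atom := "!"? (tag | "(" expr ")")
        skipWs();
        if (peek() === "!") { pos++; return !parseAtom(); }
        if (peek() === "(") {
          pos++;
          const inner = parseOr();
          skipWs();
          pos++; // consume ")"
          return inner;
        }
        let tag = "";
        while (pos < expr.length && /[\w-]/.test(expr[pos])) tag += expr[pos++];
        return tags.has(tag);
      }
      return parseOr();
    }

    // evalTagExpr("(dnb | jungle) & vocals", new Set(["jungle", "vocals"])) === true

Thirty-odd lines, which is exactly why the feature is cheap to have.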
One thing I have been doing is breaking out of my long-held default mode of spinning up a react/nextjs project whenever I need a frontend, and generating barebones HTML/CSS/JS for basic web apps. Much of the reason we went with the former was the easy access to packages and easy-to-understand state management, but now that a lot of the functionality packages used to provide can be just as easily generated, I can get a lot more functionality while keeping dependencies minimal.
I haven't tried C or ASM yet, but it has been doing very well with a C++ project I've been working on, and I'm sure it would handle bare-bones C reasonably well too.
I'd be willing to bet it would struggle more with a lower-level language initially, but give it a solid set of guardrails with testing/eval infrastructure and it'll work its way to what you want.
Pretty interesting take, this. I wonder if there's a minimal state management we could evolve that would be sufficient for LLMs to use while still making it possible for a human to reason about the abstraction. It probably wouldn't be as bloated as the existing ones we came up with organically, however.
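As a strawman for "minimal": a single store with one read, one write, and one subscribe is about as small as state management gets, and the whole contract fits in a screenful. A hypothetical sketch, not an existing library:

    // Minimal store: one way to read, one way to write, one way to observe.
    type Listener<S> = (state: S) => void;

    function createStore<S>(initial: S) {
      let state = initial;
      const listeners = new Set<Listener<S>>();
      return {
        get: () => state,
        // Every mutation goes through set(), so every change is observable.
        set(update: (prev: S) => S) {
          state = update(state);
          listeners.forEach((notify) => notify(state));
        },
        subscribe(listener: Listener<S>) {
          listeners.add(listener);
          return () => listeners.delete(listener); // returns an unsubscribe fn
        },
      };
    }

    // const store = createStore({ count: 0 });
    // store.subscribe((s) => console.log("render", s));
    // store.set((s) => ({ ...s, count: s.count + 1 }));

Whether that stays minimal once an LLM iterates on it is the open question.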
I mean, you're basically LLM-washing other people's code, then. All those UI components that other people wrote, and at least expected attribution for, may not be libraries anymore, sure. But you've basically just copied, and maybe lightly modified, that code into your project and then slapped a sticker on it saying "mine." If you did that manually with open-source code, you'd be in violation of the attribution terms almost all the licenses have in common. But somehow it's okay if the computer does it for you?
It is a gray area. What if you took Qt, removed the macros, replaced anchoring with CSS for alignment, pulled all the widget properties out into an entity component system, and called it ET? Could Trolltech complain? It would be an entirely new design, nothing like what they built. A ship of Theseus, if you will.
The Ship of Theseus has nothing to do with the identity of the parts. That is not in question at all; they are explicitly different parts. The thought experiment is the question of the identity of the whole.
Qt in your example is a part. Your application is the whole. If you replaced Qt with wxWidgets, is your application still the same application?
But to answer your question: replacing Qt with your own piecemeal code doesn't do anything more to Qt than replacing it with wxWidgets would: nothing. The Qt code is gone. The only way it would ship-of-Theseus itself into "still being Qt, despite not being the original Qt" would be if Qt required all modifications to be copyright-assigned and upstreamed. That would be absurd; I don't think I've ever seen a license that did anything like that.
Even though licenses like the GPL require reciprocal FOSS release in-kind, you still retain the rights to your code. If you were ever to remove the GPL'd library dependency, then you would no longer be required to reciprocate. Of course, that would be a new version of your software and the previous versions would still be available and still be FOSS. But neither are you required to continue to offer the original version to anyone new. You are only required to provide the source to people who have received your software. And technically, you only have to do it when they ask, but that's a different story.
We used higher-level programming languages because "developer time is more expensive than compute time," but if the AI techbros are right, we're approaching the point where that's no longer going to be true.
It's going to take the same amount of time to create a program in C as in Python.
The premise of your question is wrong. I would still write Python for most of my tasks even if I were just as fast at writing C or ASM.
Because the conciseness and readability of the code that I use is way more important than execution speed 99% of the time.
I assume that people who use AI tools still want to be able to make manual changes. There are hardly any all-or-nothing paradigms in the tech world; why do you assume AI is different?
I don't get this. AI coders keep saying they review all the code they push, and your suggestion is to use even harder languages that the average vibe coder is unable to understand, all in the name of "performance"? Faster code, maybe, at the cost of exponentially increasing tech debt and the number of bugs that slip through.
It wasn't even long ago that we thought developer experience and capacity for abstraction (which is easier to achieve in higher-level languages) were paramount.
> AI coders keep saying they review all the code they push
Those tides have shifted over the past 6 weeks. I'm increasingly seeing serious, experienced engineers who are using AI to write code and are not reviewing every line of code that they push, because they've developed a level of trust in the output of Opus 4.5 that line-by-line reviews no longer feel necessary.
(I'm hesitant to admit it but I'm starting to join their ranks.)
In the past week, I saw Opus 4.5 (being used by someone else) implement "JWT-based authentication" by appending the key to a (fake) header and body. When asked to fix this, it switched to hashing the key (and nothing else) and appending the hash instead. The "signature" still did not depend on the body, meaning any attacker could trivially forge an arbitrary body, allowing them to e.g. impersonate any user they wanted.
Do I think Opus 4.5 would always make that mistake? No. But it does indicate that the output of even SotA models needs careful review if the code actually matters.
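For contrast, a correct HS256 JWT signs over both the header and the payload, so neither can be altered without the key. A minimal Node/TypeScript sketch, for illustration only; use a vetted library in real code:

    import { createHmac } from "node:crypto";

    // base64url-encode a JSON value (URL-safe alphabet, no padding)
    const b64url = (obj: object) =>
      Buffer.from(JSON.stringify(obj)).toString("base64url");

    function signJwt(payload: object, key: string): string {
      const header = b64url({ alg: "HS256", typ: "JWT" });
      const body = b64url(payload);
      // The signature is an HMAC over header *and* payload, keyed by the
      // secret. Change a single byte of either and verification fails.
      const sig = createHmac("sha256", key)
        .update(`${header}.${body}`)
        .digest("base64url");
      return `${header}.${body}.${sig}`;
    }

    // e.g. signJwt({ sub: "user-123" }, "secret") => "xxx.yyy.zzz"

The failure described above drops the `.update(...)` over the body, which is exactly the part that makes forgery hard.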
Until your client tells you it doesn't work in Edge, and you find out it's because every browser styles native controls its own way and they're impossible to restyle enough to get the really long options to show up correctly.
Then you're stuck with a bugfix's allotment of time to implement the accessible, correctly themed combo box you should have reached for in the first place, just like you had to do last week with the native date pickers.
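For reference, the skeleton of such a combo box is the WAI-ARIA combobox pattern: role wiring, aria-expanded, and keyboard handling. A bare-bones hypothetical sketch in TypeScript, nowhere near production-ready:

    // Real combo boxes also need filtering, pointer support, blur handling,
    // and screen-reader testing. Assumes the listbox already has <li> options.
    function makeCombobox(input: HTMLInputElement, listbox: HTMLUListElement) {
      const options = Array.from(listbox.querySelectorAll<HTMLLIElement>("li"));
      let active = -1; // index of the highlighted option

      input.setAttribute("role", "combobox");
      input.setAttribute("aria-controls", listbox.id);
      listbox.setAttribute("role", "listbox");
      options.forEach((o, i) => {
        o.setAttribute("role", "option");
        o.id ||= `${listbox.id}-opt-${i}`; // aria-activedescendant needs ids
      });

      const toggle = (open: boolean) => {
        input.setAttribute("aria-expanded", String(open));
        listbox.hidden = !open;
        if (!open) active = -1;
      };
      toggle(false);

      input.addEventListener("keydown", (e) => {
        if (e.key === "ArrowDown" || e.key === "ArrowUp") {
          toggle(true);
          const step = e.key === "ArrowDown" ? 1 : -1;
          active = active === -1
            ? (e.key === "ArrowDown" ? 0 : options.length - 1)
            : (active + step + options.length) % options.length;
          input.setAttribute("aria-activedescendant", options[active].id);
          e.preventDefault(); // keep the caret from moving
        } else if (e.key === "Enter" && active >= 0) {
          input.value = options[active].textContent ?? "";
          toggle(false);
        } else if (e.key === "Escape") {
          toggle(false);
        }
      });
    }

Which is the point: this is not "a bugfix's allotment of time" worth of work.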
I'd argue that adding complexity from the get-go to ensure that all users have a pleasant experience is better than simplicity at the expense of some percentage of users.
I think it's important for web devs to spend more than two seconds thinking about whether the complexity is necessary from the get-go, though.