I think clean code is more important than ever. LLMs work better with good code (no surprise), and they are trained on so much shit code that what they produce is garbage by clean-code standards.
They also don't have good taste or a deeper architectural understanding of big codebases, where it matters even more.
What you learned over the years, you can just scale up with agents.
The huge advantage of SQLite is not that it's on the same machine, but that it's in-process, which makes deployment and everything else just simpler.
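A minimal Python sketch of what in-process means in practice (the filename and table are made up): there is no server to provision, configure, or connect to; the whole database is a library call away.

    import sqlite3

    # The "database server" is just this library call; the whole DB is one file.
    conn = sqlite3.connect("app.db")  # or ":memory:" for a throwaway DB
    conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
    conn.commit()
    print(conn.execute("SELECT name FROM users").fetchall())
    conn.close()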
I think it depends on the domain. For example, GPT-5 is better at frontend/React code but struggles with niche things like Nix. Claude's UI designs are not as pretty as GPT-5's.
This is also pretty subjective. I’m a power user of both and tend to prefer Claude’s UI about 70-80% of the time.
I often use Claude to do a “make it pretty” pass after implementing with GPT-5. I find Claude’s spatial and visual understanding of frontend work to be better.
I am sure others will have the exact opposite experience.
My experience is exactly the opposite: Claude excels at UI and React, while GPT-5 is better on really niche stuff. It might just be that I'm better at catching GPT-5's hallucinations than Claude 4's.
But after OpenAI started gatekeeping all their decent new models in the API, I will happily refuse to buy more credits and instead use FOSS models from other providers (I wish Claude had proper no-log policies).
What I did recently when developing a TUI: I put the state in a dict, ran the app in an infinite loop, and whenever it quit, reloaded the module and instantiated the class again with the kept state. Something like this:
    import importlib

    import tui

    # State lives outside the module, so it survives the reload.
    state = {"current_step_index": 0, "variables": None}

    while True:
        app = tui.App(state)
        app.run()                 # blocks until the app quits
        state = app.get_state()   # carry the session over
        importlib.reload(tui)     # pick up code changes for the next run
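This works because importlib.reload() re-executes the module in place, so the next tui.App is the freshly edited class, while the state dict lives in the outer loop and just gets handed back in.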
AGI is the biggest successful scam in human history, which Sam Altman came up with to get the insane investment and hype they are getting. They intentionally haven't defined what it is or when it will be achieved, making it a never-reachable goal that keeps the money flowing. "We will be there in a couple of years" and "this feels like AGI" get said at every fucking GPT release.
It's in the best interest of every AI lab to keep this lie going.
They are not stupid; they know it can't be reached with the current state-of-the-art techniques, transformers, even with recent breakthroughs like reasoning. I think we are not even close.
It's so much easier to build a mental model of a codebase with LLMs. You just ask specific questions about a subsystem and they show you the files and code snippets, point out the idea, etc.
I just recently took the time to understand how exactly the GIL works in CPython, because I asked a couple of questions about it and Claude showed me the relevant API and where I can find examples of it. I looked it up in the CPython codebase and all of a sudden it clicked.
The huge difference was that it cost me MINUTES. I hadn't even bothered to dig in before, because I can't read C fluently, the CPython codebase is huge, and it would have taken me a really long time to understand everything.
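You don't even need the C source to see the GIL's effect; a quick sketch in pure Python (the function name and loop count are illustrative) shows that two threads doing CPU-bound work take about as long as running it twice sequentially, because only one thread holds the GIL at a time:

    import threading
    import time

    def spin(n=20_000_000):
        # CPU-bound busy loop; only one thread executes Python bytecode at a time
        while n:
            n -= 1

    start = time.perf_counter()
    spin(); spin()
    print(f"sequential:  {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    threads = [threading.Thread(target=spin) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(f"two threads: {time.perf_counter() - start:.2f}s  (roughly the same)")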
Not even close. An agentic tool can be fully autonomous; an IDE like Cursor is, well, "just" an editor. Sure, it does some heavy lifting too, but the user still writes the code. They are starting to implement fully agentic tools and models, but those are nowhere near working as well as Claude Code does.