Something I quickly learned while retooling this past week is that it’s preferable not to add opinionated frameworks to the project, as they increase the amount of context the model has to be aware of, and that context is also unlikely to be well represented in the training data.
For example, rather than using Plasmo for its browser extension boilerplate and packaging utilities, I’ve chosen to ask the LLM to set all of that up for me, so it won’t have any blind spots when tasked with debugging.
Disagree. Some abstractions are still vital, for the same reason as always: they communicate purpose and complexity concisely rather than hiding it.
The best code is that which explains itself most efficiently and readably to Whoever Reads It Next. That's even more important with LLMs than with humans, because the LLMs probably have far less context than the humans do.
Developers often fall back on standard abstraction patterns that don't have good semantic fit with the real intent. Right now, LLMs are mostly copying those bad habits. But there's so much potential here for future AI to be great at creating and using the right abstractions as part of software that explains itself.
I’ve thought about your comment, and I think we’re both right.
Fundamentally, computers are a series of high and low voltages, and everything above that is a combination of abstraction and interpretation.
Fundamentally, there will always be some level of this; it’s not like an A(G)I will interface directly using electrical signals (though in some distant future it could).
However, I believe this current phase of AI (LLMs + Generators + Tools) is showing that computers do not need to solve problems the same way humans do, because computers face different constraints.
So the abstractions programmers use to manage complexity won’t be necessary (at some point in the future).
Future programming language designers are then answering questions like:
"How low-level can this language be while considering generally available models and hardware available can only generate so many tokens per second?",
"Do we have the language models generate binary code directly, or is it still more efficient time-wise to generate higher level code and use a compiler?"
"Do we ship this language with both a compiler and language model?"
"Do we forsake code readability to improve model efficiency?"
I think the dichotomy between how developers have reacted to LLMs (mass adoption) and how authors, illustrators, etc. have reacted (derision, avoidance) demonstrates that coding was never an art to begin with. Code is not an end in itself; it’s an obstacle in the way of an end.
There are people who enjoy code for the sake of it, but they're a very, very small group.
Surely no respectable professional would just ship code they don’t understand, right? So the LLM should probably spit out code in reasonably well known languages using reasonably well known libraries and other abstractions…
Right now, and perhaps in the immediate future, sure. But eventually I do think software that writes software will do it better than current programmers can.
Do you ever think twice about the Bayer filter applied to your CMOS image sensor?
It's not just frameworks. I noticed this recently when starting a new project and using EdgeDB. They have their own TypeScript query builder, and [insert LLM] cannot write correct constructions with that query builder to save its life.
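For context, the query builder is a generated, schema-aware TypeScript API rather than plain EdgeQL strings. Roughly the kind of construction I mean (a sketch from memory, against a hypothetical Movie type; check the EdgeDB docs for the exact generated API):

    import { createClient } from "edgedb";
    // generated by EdgeDB's codegen into dbschema/edgeql-js
    import e from "./dbschema/edgeql-js";

    const client = createClient();

    // select a single movie by title, with a nested shape for its actors
    const query = e.select(e.Movie, (movie) => ({
      title: true,
      actors: { name: true },
      filter_single: e.op(movie.title, "=", "Dune"),
    }));

    const result = await query.run(client);
    console.log(result?.title);

It's exactly this kind of generated, slightly unusual surface area that the models keep getting subtly wrong, presumably because there's so little of it in the training data.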