So using agents forces (or at least nudges) you to use Go and Tailwind, because they are simple enough (and abundant in the training data) for the AI to use correctly.
Does this mean that eventually in a world where we all use this stuff, no new language/framework/library will ever be able to emerge?
Competing with the existing alternatives will be too hard. You won't even be able to ask real humans for help on platforms like StackOverflow because they will be dead soon.
> Does this mean that eventually in a world where we all use this stuff, no new language/framework/library will ever be able to emerge?
I highly doubt it. These things excel at translation.
Even without training data, if you have an idiosyncratic-but-straightforward API or framework, they pick it up no problem just by looking at the codebase. I know this from experience with my own idiosyncratic C# framework, which no training data has ever seen and which the LLM is excellent at writing code against.
I think something like Rust lifetimes would have a harder time getting off the ground in a world where everyone expects LLM coding to work off the bat. But something like Go would have an easy time.
Even with the Rust example, though, maybe the developers of something that new would have to take LLMs into consideration in their design, tooling, or documentation choices, and it would be fine.
> Does this mean that eventually in a world where we all use this stuff, no new language/framework/library will ever be able to emerge?
That's a very good question.
Rephrased: as good training data will diminish exponentially with the Internet being inundated by LLM regurgitations, will "AI-savvy" coders prefer old, boring languages and tech because there's more low-radiation training data from the pre-LLM era?
The most popular language/framework combination of the early 2020s is JavaScript/React. It'll be the new COBOL, but you won't need an expensive consultant to maintain it in the 2100s, because LLMs can do it for you.
Corollary: to escape the AI craze, let's keep inventing new languages. Lisps with pervasive macro usage and custom DSLs will be safe until actual AGIs arrive that can macroexpand better than you.
> Rephrased: as good training data will diminish exponentially with the Internet being inundated by LLM regurgitations
I don't think the premise is accurate in this specific case.
First, if anything, training data for newer libs can only increase. Presumably code reaches GitHub in an "at least it compiles" state. So you have lots of people fighting the AIs and pushing code that at least compiles. You can then filter for the newer libs and train on that.
Second, pre-training is already mostly solved. The pudding now seems to be in post-training. And for coding, a lot of post-training is done with RL and other unsupervised techniques. You get enough signal from generate -> check loops that you can do that reliably.
The idea that "we're running out of data" is way overblown IMO, especially considering the advances of the last ~6-12 months. Keep in mind that the better your "generation" pipeline becomes, the better later models will be. And the current "agentic" loop-based systems are getting pretty darn good.
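The generate -> check loop described above can be sketched in a few lines. In this toy JavaScript illustration, the candidate strings stand in for model samples and plain JS parseability stands in for a real compile-and-test step (both are simplifying assumptions):

```javascript
// Toy generate -> check reward loop: the checker's verdict is the
// training signal, no human labels required.
function check(code) {
  try {
    new Function(code); // throws on a syntax error
    return true;
  } catch {
    return false;
  }
}

// Stand-ins for sampled model outputs:
const candidates = ["const x = 1 + 1;", "const y = ((;"];
const rewards = candidates.map((c) => (check(c) ? 1 : 0));
console.log(rewards); // [ 1, 0 ]
```

A real pipeline would run the compiler and the test suite instead of `new Function`, but the shape of the loop is the same.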
> First, if anything, training data for newer libs can only increase.
How?
Presumably, in the "every coder is using AI assistants" future, there will be an incredible amount of friction in getting people to adopt languages that AI assistants don't know anything about.
So how does the training data for a new language get made, if no programmers are using the language, because the AI tools that all programmers rely on aren't trained on the language?
You can code with new libs today; you just need to tell the model what to use. Things like context7 work, as does downloading docs, llms.txt, or any other thing that will pop up in the future. The idea that LLMs can only generate what they were trained on is like 3 years old. They can do pretty neat things with stuff in context today.
The context would have to be massive in order to ingest an entire new programming language and its associated design patterns, best practices, and such, wouldn't it?
I'm not an expert here by any means, but I'm not seeing how this makes much sense versus just using languages the LLM is already trained on.
1. The previous gen has become bloated and complex because it widened its scope to cover every possible niche scenario and got infiltrated by 'expert' language and framework specialists that went on an architecture binge.
2. As a result, a new stack is born: much simpler and back-to-basics compared to the poorly aged incumbent. It doesn't cover every niche, but it does a few newly popular things really easily and well, and rises on the coattails of this new thing as the default environment for it.
3. Over time the new stack ages just as poorly as the old stack, for all the same reasons. So the cycle repeats.
I do not see this changing with AI-assisted coding, as context enrichment is getting better, allowing a full stack specification in post-training.
> It doesn't cover every niche, but it does a few newly popular things really easily and well, and rises on the coattails of this new thing as the default environment for it
How will it ever rise on the coattails of anything if it isn't in the AI training data so no one is ever incentivized to use it to begin with?
AI-legible documentation. If you optimize for a "1-pager" doc that you can add to the context of an LLM, and that is all it needs to know to use your package or framework, people will use it if it has some kind of non-technical advantage. deepwiki.com is sorta an attempt to automate doing something like this.
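For concreteness, a "1-pager" in the spirit of llms.txt might look like the sketch below. Every name in it (the package, its functions, the conventions) is invented purely for illustration:

```
# acme-router (hypothetical package) — 1-pager for LLM context
Purpose: minimal HTTP routing for Node.
Install: npm install acme-router
Core API:
  route(method, path, handler)  — register a handler for a method/path pair
  listen(port)                  — start the server
Conventions:
  - handlers return a string or a Response object
  - throw HttpError(status) to signal errors
Example:
  route("GET", "/hello/:name", (req) => `Hello ${req.params.name}`);
  listen(3000);
```

The point is that a doc this small fits trivially in context, so the model never has to rely on training data for the library at all.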
> So using agents forces (or at least nudges) you to use Go and Tailwind
Not even close, and the article betrays the author's biases more than anything else. The fact that their Claude Code (with Sonnet) setup has issues with the `cargo test` CLI, for instance, is hardly a categorical issue with AIs or cargo, let alone Rust in general. Junie can't seem to use its built-in test runner tool on PHP tests either; that doesn't mean AI has a problem with PHP. I just wrote a `bin/test-php` script for it to use instead, and it figures out it has to use that (telling it so in the guidelines helps, but it still keeps trying its built-in tool first).
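A wrapper like that `bin/test-php` can be tiny. Here is a sketch of the idea, written as a shell function so it's self-contained; in practice it would be the body of the executable script. The `vendor/bin/phpunit` path and the `--colors=never` flag are assumptions about a typical Composer project, not details from the comment:

```shell
# Sketch of a stable test entry point the agent is told to use
# instead of its flaky built-in runner.
run_php_tests() {
  if [ -x vendor/bin/phpunit ]; then
    # Plain output is easier for the agent to parse than colored output.
    vendor/bin/phpunit --colors=never "$@"
  else
    echo "phpunit not found; run 'composer install' first" >&2
    return 1
  fi
}
```

Giving the agent one blessed command with predictable output sidesteps arguments with whatever runner it was trained to reach for.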
As for SO, my AI assistant doesn't close my questions as duplicates. I appreciate what SO is trying to do in terms of curation, but the approach to it has driven people away in droves.
JB's product strategy is baffling. The AI Assistant is way more featureful, but it's a lousy agent. Junie is pretty much only good as an agent, but it's hardwired to one model and doesn't support MCP, though it does have a whole lot of internal tools ... which it can't seem to use reliably. They really need to work on having just one good AI product that does it all.
I really liked Augment, except for its piggish UI. Then they revealed the price tag, and back to Junie I went.
Just yesterday I gave Claude (via Zed) a project brief and a fresh Elixir Phoenix project. It had zero problems. It did opt for Tailwind for the CSS, but Phoenix already sets that up when using `mix phx.new`, so that's probably why.
I don't buy that it pushes you into using Go at all. If anything, I'd say they push you towards Python a lot of the time when you ask random questions with no additional context.
The Elixir community is probably only a fraction of the size of Go's or Python's, but I've never had any issues getting it to use Elixir.
> Does this mean that eventually in a world where we all use this stuff, no new language/framework/library will ever be able to emerge?
If you truly believe in the potential of agentic AI, then the logical conclusion is that programming languages will become the assembly languages of the 21st century. This may or may not become the unfortunate reality.
I'd bet money that in less than six months, there'll be some buzz around a "programming language for agents".
Whether that's going to make sense, I have some doubts, but as you say: For an LLM optimist, it's the logical conclusion. Code wouldn't need to be optimised for humans to read or modify, but for models, and natural language is a bit of an unnecessary layer in that vision.
Personally I'm not an LLM optimist, so I think the popular stack will remain focused on humans. Perhaps tilting a bit more towards readability and less towards typing efficiency, but many existing programming languages, tools and frameworks already optimise for that.
My best results have been with Ruby/Rails and either vanilla Bootstrap or something like Tabler UI. Tailwind seems to be fine as well, but I'm still not a fan of the verbosity.
With a stable enough boilerplate you can come up with outstanding results in a few hours. Truly production-ready stuff for small apps.
How are you getting results when Ruby has no type system? That seems like where half the value of LLM coding agents is (dumping in type errors and having it solve them).
A bunch of unit, functional, and E2E tests, just like before LLMs :) I haven't tried with Ruby specifically, but it works well with JavaScript and other dynamic languages, so it should work fine with Ruby too.
I wonder if people who love TypeScript never wrote tests, and that is why they are so fascinated with types for dynamic languages. I guess they have never been really productive.
Or even better, what if you could automate writing half or more of your unit tests, and ensure they run not just out of band, but on every build?
And even better: rather than have them off in some faraway location, annotate the code itself, so the tests are updated along with the code.
That's pretty impressive, and someone would have to be short-sighted to prefer the false productivity of constantly implementing by hand what a computer can automatically do for them.
Not to mention how much better it is if you work on actual large-scale systems with true cross-team dependencies, and not trivial codebases that get thrown away every few years, where it almost doesn't matter how you write them.
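Concretely, the "tests annotated on the code itself" being described tongue-in-cheek above is static typing. A minimal TypeScript sketch (the function is an invented example):

```typescript
// The annotation is the "test": it runs on every build, right next to the code.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

const m = mean([1, 2, 3]); // OK
// mean(["1", "2"]);       // rejected at compile time; no separate test file needed
console.log(m); // 2
```

The type check catches the whole class of wrong-argument bugs for free; hand-written tests are still needed for the behavior the types can't express.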
No, I come from Ruby and Elixir and don't love TypeScript or C-style languages. But man oh man, LLMs are just so good with them - the feedback loop with a typed language is just so good.
With maturing synthetic data pipelines, can't they just take one base LLM, fine-tune it for 20 different niches, and let users access a niche with a string parameter in the API call? Even if a new version of a language was released only yesterday, they could quickly generate enough synthetic training data to bake the new syntax into that niche and roll it out.
If AI really takes over coding, programming languages will be handled the same way we currently handle assembly code.
Right now, languages are the interface between human and computer. If LLMs take over, their ideal programming language is probably less verbose than what we are currently using. Maybe keywords could become one token long, etc. Just some quick thoughts here :D.
> no new language/framework/library will ever be able to emerge?
Here is a YouTube video that makes the same argument: React is, or will be, the last JavaScript framework, because it is the dominant one right now. Even if people publish new frameworks, LLM coding assistants will not be able to assist coding using the new frameworks, so the new frameworks will not find users or popularity.
And even for React it will be difficult to add new features, because LLMs only assist with code that uses the features they already know about, which are the old, established ways to write React.
> LLM coding assistants will not be able to assist coding using the new frameworks
Why not? When my coding agent discovers that it used the wrong API, or used the right API wrong, it digs up the dependency source on disk (works at least with Rust and with JavaScript) and looks up the new details.
I also have it use my own private libraries the same way, and those are guaranteed not to be in any training data.
I guess if whatever platform/software you use doesn't have tool calling, you're kind of right, but you're also missing something that's commonplace today.
New frameworks can be created, but they will be different from before:
- AI-friendly syntax, AI-friendly error handling
- Before being released, we will have to spend hundreds of millions of tokens on agents reading the framework and writing documentation and working example code with it, basically creating the dataset that other AIs can reference when using the new framework.
- Create a way to make that documentation/example code easily available to AI agents (via MCP or a new paradigm)
Agents no, LLMs yes. Not for generating code per se, but for answering questions. Common Lisp doesn't seem to have a strong influx of n00bs like me, and even though there's pretty excellent documentation, I sometimes find it hard to know what I'm looking for. LLMs have definitely helped me a few times by answering n00b questions I would otherwise have had to ask online.
Vibe coding Common Lisp could probably work well with additional tool support. Even a good documentation lookup and search tool, exposed in an AGENTS.md file, could significantly mitigate the problem Joe ran into of having the code generate bogus symbols. If you provide a small MCP server or other tool to introspect a running image containing your application, it could be even better.
LLMs can handle the syntax of basically any language, but the library knowledge is significantly improved by having a larger corpus of code than Common Lisp tends to have publicly available.
Looking forward to progress on the memory control proposal(s).
Another reason to want more than 4GB of memory is to have more address space, assuming you have the ability to map it. With that capability, Wasm64 could be useful also for apps that don't actually plan to use a huge amount of memory.
Yes, this is my primary personal motivator as co-champion of the proposal. But, going “fully virtual” is hard because of the very wide array of use cases for wasm.
For example, there are embedded users of wasm whose devices don’t even have MMUs. And even when running on a device with an MMU, the host may not want to allow arbitrary mappings, e.g. if using wasm as a plugin system in a database or something.
It’s likely imo that any “fully virtual” features will be part of the wasm web API and not the core spec, if they happen at all.
It is a bit unfair to Wasmer, because it incurs the (presumed) overhead of `wasmer run ...`, but I could not figure out whether the actual clang binary is directly available after it is downloaded the first time.
The C function you linked calls _Fork, which calls __wasi_proc_fork, which calls __imported_wasix_32v1_proc_fork, which is the 'proc_fork' import in the 'wasix_32v1' module. The question is, what JavaScript function is provided for this proc_fork import when instantiating a WebAssembly module using WASIX? From what I understand about the WebAssembly JavaScript API, such a function is impossible to implement, which is why I'm curious how it was done here.
I am curious about the browser compatibility layer.
Does it support networking? (And if so, is it using a proxy to connect to the internet or is it using a virtualized network layer that stays in the browser?)
How is fork implemented? (Asyncify all the code?)
I was looking at the GitHub org page (https://github.com/wasix-org), but I can't figure out where this stuff is.
I don't necessarily disagree, but implementing these things in your custom runtime is easy. I am more interested in how they achieved them in the more constrained browser environment.
It's only nonsense from one perspective... from another perspective it makes a lot of sense. If one wants to create a fully sandboxed browser terminal (the benefits of which are huge), then compiling bash to WASM goes a long way, as who wants to rewrite all that terminal code anyway?
Now, I know it's hard to imagine someone other than your personal boogeyman contributing value to the open-source world, but there is a real need for me to keep my coding skills up to scratch, and it just so happens that `wasmer.io` are the coolest kids in WASM town right now, so I'll share my spare time with them, thanks.
Wasmer has a well-documented history of malicious behavior; they are very clearly not the coolest kids in WASM town, and very clearly not who you want to be aligning yourself with, but of course do whatever you like :shrug: