I'm not sure why people on HN (of all places) are so divided regarding the perception of AI/ML.
I have not seen anything like it before. We literally had no system or even a way of doing things like code generation based on text input.
Just last week I asked for a script to do image segmentation with a basic UI, and Claude just generated that for me in under a minute.
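For context, here is a minimal sketch of what such a script might look like (my own illustration, not Claude's actual output): OpenCV thresholding plus a small matplotlib slider as the "basic UI".

```python
# Minimal sketch of a threshold-based segmentation script with a tiny UI.
# My own example, not Claude's actual output; assumes an input.png exists.
import cv2
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

fig, ax = plt.subplots()
plt.subplots_adjust(bottom=0.2)
_, mask = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
shown = ax.imshow(mask, cmap="gray")

slider_ax = plt.axes([0.2, 0.05, 0.6, 0.05])
slider = Slider(slider_ax, "threshold", 0, 255, valinit=128)

def update(val):
    # Re-segment with the new threshold and refresh the display.
    _, new_mask = cv2.threshold(img, slider.val, 255, cv2.THRESH_BINARY)
    shown.set_data(new_mask)
    fig.canvas.draw_idle()

slider.on_changed(update)
plt.show()
```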
I could list tons of examples which are groundbreaking. The whole image generation stack is completely new.
That blog article is fair enough; there is certainly hype around this topic. But take just the researchers who need to write code for their work: AI can already make them a lot more efficient.
But I do believe that we have entered a new era: an era where we take data seriously again. A few years back, people said 'the internet doesn't forget'; then we realized that the internet does start to forget. Google deleted pages and removed the cache feature, and it felt like we stopped caring about data because we didn't know what to do with it.
Then AI came along. Not only is data king again, but we are now in the midst of a reinforcement era: we give feedback and the systems incorporate that feedback into their training/learning.
And every single aspect of AI/ML is being worked on: hardware, algorithms, use cases, data, tools, protocols, etc. We are in the middle of incorporating it and building for and on it. That takes a bit of time. Still, the pace of progress is exhausting to keep up with.
We will only see in a few years whether there is a real ceiling. We need more GPUs and bigger datacenters to run a lot more experiments on AI architectures and algorithms. We have a clear bottleneck: big companies train one big model for weeks or months.
> Just last week I asked for a script to do image segmentation with a basic UI, and Claude just generated that for me in under a minute.
The thing is, we can see that it's just copy-pasting Stack Overflow, but now in a fancy way. So this sounds like "I asked Google for a nearby restaurant and it found it in like 500ms, my C64 couldn't do that". It sounds impressive (and it is) because it sounds like "it learned about navigating in the real world and it can now solve everything related to that", but what it actually solved is "a fancy lookup in a GIS database". It's useful, damn sure it is, but once the novelty wears off you start seeing it for what it is instead of what you imagine it is.
Edit: to drive the point home.
> Claude just generated that
What you think happened is that the AI was "thinking", building an ontology over which it reasoned and came to the logical conclusion that this script was the right output. What actually happened is that your input correlates with this output according to the trillion examples it saw. There is no ontology. There is no reasoning. There is nothing. Of course this is still impressive and useful as hell, but the novelty will wear off in time. The limitations are obvious by this point.
I've been following LLMs and AI/ML for a few years now, and not just at a high level.
There is not a single system out there today which can do what Claude can do.
I still see it for what it is: a technology I can use and communicate with in natural language to get a very diverse set of tasks done, from writing/generating code, to SVGs, to emails, to translation, etc.
It's a paradigm shift for the whole world, literally.
We finally have a system which encodes not just basic things but high-level concepts. And we humans often do something very similar.
And what limitations are obvious? Tell me. We have not reached any real ceiling yet. We are limited by GPU capacity and by how many architectural experiments a researcher can run. We have plenty of work to do to clean up the datasets we use. We need to build more infrastructure, better software support, etc.
We have not even reached the phase where we all have local AI/ML chips built in.
We don't even know yet how systems will behave once every one of us has access to very fast inference like you already get with Groq.
> It's a paradigm shift for the whole world, literally.
That's hyperbolic. I use LLMs daily. They speed up tasks you'd normally use Google for and can extrapolate existing code into other languages. They boost productivity for professionals, but it's not like the discovery of the steam engine or electricity.
> And what limitations are obvious? Tell me. We have not reached any real ceiling yet.
Scaling parameters is the most obvious limitation of the current LLM architecture (transformers). That's why what should have been called GPT-5 is instead named GPT-4.5: it isn't significantly better than the previous model despite having far more parameters, much cleaner training data, and further optimizations.
The low-hanging fruit has already been picked, and the most obvious optimizations have been implemented. As a result, almost all leading LLM companies are now operating at a similar level. There hasn't been a real breakthrough in over two years. And the last huge architectural breakthrough was in 2017 (the "Attention Is All You Need" paper).
Scaling at this point yields only diminishing returns. So no, what you're saying isn't accurate; the ceiling is clearly visible now.
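To put numbers on "diminishing returns", here is a toy sketch of a Chinchilla-style scaling curve; the constants are illustrative assumptions on my part, not fitted values, but the shape is the point.

```python
# Toy illustration of diminishing returns from parameter scaling,
# using a Chinchilla-style loss curve L(N) = E + A / N**alpha.
# The constants below are illustrative assumptions, not fitted values.
E, A, alpha = 1.7, 400.0, 0.34

def loss(n_params_billion: float) -> float:
    n = n_params_billion * 1e9
    return E + A / n ** alpha

for n in [10, 100, 1000, 10000]:
    print(f"{n:>6}B params -> loss ~ {loss(n):.3f}")
# Each 10x jump in parameters buys a smaller and smaller improvement.
```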
> ... but it's not like the discovery of the steam engine or electricity.
Completely disagree. People might have googled before, but the human<>computer interface was never anywhere near as accessible as it is now for a normal human being. Can I use Photoshop? Yes, but I had to learn it. My sisters played around with DALL-E and are now able to do similar things.
It might feel boring to you that technological accessibility trickles down like this, but it changes a lot for a lot of people. The entry barrier to everything got a lot lower. It makes a huge difference to you as a human being whether you have rich parents and good teachers or not. You never had the chance to just get help like this before. Millions of kids struggle because they don't have parents they can ask the questions required for understanding topics in school.
Steam engine = fundamental to our scaling economy
Electricity = fundamental to liberating all of us from the constraints of daylight
Internet = interconnecting all of us
LLM/ML/AI = liberating knowledge through accessibility
> 'There hasn’t been a real breakthrough in over two years.'
DeepSeek alone was a real breakthrough.
But let me ask an LLM about this:
- Mixture of Experts (MoE) scaling (see the toy gating sketch after this list)
- Long-context handling
- Multimodal capabilities
- Tool use & agentic reasoning
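As a toy illustration of the first item (my own sketch, not any production system): MoE replaces one big feed-forward block with many experts plus a gate that activates only the top-k experts per token, which is what makes the scaling cheap.

```python
import numpy as np

# Toy sketch of Mixture-of-Experts routing (illustrative only): a gating
# network scores each expert, we keep the top-k, and the output is the
# softmax-weighted sum of just those experts.
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

W_gate = rng.normal(size=(d_model, n_experts))            # gating weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    scores = x @ W_gate                                   # one score per expert
    top = np.argsort(scores)[-top_k:]                     # indices of best experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)   # (16,) -- same shape as the input token
```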
Funnily enough, your comment comes just before the Claude 4.0 release (again an increase in performance, etc.) and Google I/O.
We don't know if we've found all the 'low-hanging fruit'. The Meta paper about thinking in latent space came out in February; I would definitely call that low-hanging fruit.
We are limited, very hard, by infrastructure. Every experiment you want to try consumes a lot of it. If you look at the top GPU AI clusters, we don't have that many on the planet: Google, Microsoft/Azure, Nvidia, Baidu, Tesla, xAI, and Cerebras. Not that many researchers are able to just work on this.
Google now has its first diffusion-based model live. In 2025! There is still so much room to test more approaches, architectures, etc. And we are optimizing on every front: cost, speed, precision, etc.
> My sisters played around with DALL-E and are now able to do similar things.
This is in no way, shape or form similar, in any actually productive sense, to being skilled at Photoshop. There is absolutely no way these people can mask, crop, or tweak color precisely, etc. There are hundreds of these sub-tasks. It's not just "making cool images". No amount of LLMing will make you skilled, and no amount of delegation will make you able to ask the LLM these specific questions in a skillful way.
There is a very real, fundamental problem here. To be able to ask the right questions you have to have a base of competence that y'all are so happy to throw to the wind. The next generation will not even know what a "mask" is, let alone ask an LLM for details about it. Education is declining worldwide and these things are not going to help. They are going to accelerate this bullshit.
> liberating knowledge through accessibility
Because the thing is, availability of knowledge was never the issue. The ridiculous amounts of copyright-free educational material and the hundreds of gigabytes of books on Project Gutenberg are testament to that.
Even in my youth (90s) there were plenty of books and easy to access resources to learn, say, calculus. Did I peruse them? Hell no. Did my friends? You bet your ass they were busy wasting time doing bullshit as well. Let's just be honest about this.
These problems are not technical and no amount of technology is going to solve them. If anything, it'll make things worse. Good education is everything, focus on that. Drop the AI bullshit, drop the tech bullshit. Read books, solve problems. Focus on good teachers.
I honestly think it's still way too early to say this either way. If your hypothesis that there are no breakthroughs left is right, then it's still a very big deal, but I'd agree with you that it's not steam engine level.
But I don't think "the transformer paper was eight years ago" is strong evidence for that argument at all. First of all, the incremental improvement and commercialization and scaling that has happened in that period of time is already incredibly fast. Faraday had most of the pieces in place for electricity in the 1830s and it took half a century to scale it, including periods where the state of the art began to stagnate before hitting a new breakthrough.
I see no reason to believe it's impossible that we'll see further step-change progressions in AI. Indeed, "Attention is All You Need" itself makes me think it's more likely than not. Out of the infinite space of things to try, they found a fairly simple tweak to apply to existing techniques, and it happened to work extremely well. Certainly a lot more of the solution space has been explored now, but there's still a huge space of things that haven't been tried yet.
LLMs are great at tasks that involve written language. If your task does not involve written language, they suck. That's the main limitation. No matter how hard you push, AI is not a 'do-everything machine', which is how it's being hyped.
Written language is apparently very powerful. After all, an LLM can generate SVG, Python code that drives Blender, etc.
One demo I saw of LLM tool use: the prompt was "generate a small snake game", and because the author still had the Blender MCP tool connected, the LLM decided to generate 3D assets for the game through Blender.
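For a sense of what "Python code to use Blender" looks like in practice, here is a minimal hand-written sketch against Blender's bpy API; the object names and sizes are my own illustration, not taken from that demo.

```python
# Minimal sketch of Blender's Python API (bpy), the kind of code an LLM
# emits when asked for simple 3D assets. Run inside Blender; the names
# and sizes here are illustrative, not from the demo mentioned above.
import bpy

# A cube for a snake body segment and a small sphere for the food pellet.
bpy.ops.mesh.primitive_cube_add(size=1.0, location=(0.0, 0.0, 0.5))
bpy.context.object.name = "snake_segment"

bpy.ops.mesh.primitive_uv_sphere_add(radius=0.3, location=(2.0, 0.0, 0.3))
bpy.context.object.name = "food_pellet"

# Export everything to a glTF file a game engine can load.
bpy.ops.export_scene.gltf(filepath="snake_assets.glb")
```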
> We finally have a system which encodes not just basic things but high-level concepts
That's the thing I'm trying to convey: it's in fact not encoding anything you'd recognize, and if it is, it's certainly not "concepts" as you understand them. I'm not saying it cannot correlate text that includes what you call "high-level concepts" or do what you imagine to be useful work in that general direction. Again, I'm not claiming it isn't useful, just that it becomes kind of meh once you factor in all the costs and not just the hypothetical, imaginary future productivity gains. AKA building literal nuclear reactors to do something that basically amounts to filling in React templates or whatever BS needs doing.
If it were reasoning, it could start with a small set of bootstrap data and infer/deduce the rest from experience. It cannot. We are not even close; there isn't even a theory to get us there, forget about the engineering. It's not a subtle issue: we need to throw literally all the data we have at it to get it to acceptable levels. At some point you have to retrace your steps and rethink some decisions, but I guess I'm a skeptic.
In short, it's a correlation engine which, again, is very useful and will go some way toward improving our lives - I hope - but I'm not holding my breath for anything more. A lot of correlation does not causation make. No reasoning can take place until you establish ontology, causality, and the whole shebang.
I do understand that, but I also think that current LLMs are the first step toward it.
GPT-3 kicked off proper investment in this topic; before that there was not enough research in this direction, and now there is. People like Yann LeCun are already exploring different approaches/architectures, but they still use the infrastructure of LLMs (ML/GPUs) and potentially the data.
I never said that LLMs are the breakthrough in consciousness.
But you can also ask an LLM for strategies for thinking; it can tell you a lot. We will see whether an LLM will be a fundamental part of AGI or not, but GPUs/ML probably will be.
I also think that the compression an LLM performs leads to concepts emerging through optimization. You can see from the Anthropic paper that an LLM doesn't work in normal language space but in a high-dimensional one, and then 'expresses' the output in whatever language you like.
We also see that truly multimodal models are better at a lot of tasks because a lot more context is available to them - for example, estimating what someone said from context.
The necessary infrastructure and power requirements are something I accept too. We can assume - I do - that further progress on a lot of topics will require this type of compute, and it also addresses our data bottleneck: normal CPU architectures are limited by the memory bus.
Also, if the richest companies in the world invest in nuclear, I think that is a lot better than other companies doing it. They have much higher margins and more knowledge, and CO2 is a market differentiator for them too.
I also expect this amount of compute to be the basis for tackling real issues we all face, like improving detection of cancer or any other sickness. We need to make medicine a lot cheaper, and if someone in Africa can get a cheap X-ray and send it to the cloud for feedback, that could help a lot of people.
Doing complex, massive protein analysis or mRNA research in virtual space also requires GPUs.
All of this happened in a timespan of only a few years. I have not seen anything progress as fast as AI/ML currently does, and as unfortunate as it is, this needs compute.
Even my small in-house image recognition fine-tuning explodes in compute when you try a handful of parameter optimizations, but the quality is a lot better than what we had before.
And giving people a real natural-language UI is HUGE. It makes so much more accessible, and not just for people with a disability.
Things like "do an ELI5 on topic x" or "explain this concept to me". I would have loved that when I was trying to get through the university math curriculum.
All of that is already crazy. In parallel, what Nvidia and others are currently doing with ML and robotics also requires all of that compute, and the progress there is again breathtaking. The current flood of basic robots standing up and walking around is due to ML.
I mean, you're not even wrong! Almost all of these large models are based on the idea that if you put every representation of the world we can gather into a big pile, you can tease out some kind of meaning. There's not really a cohesive theory for that, and surely no testable way to prove it's true. It certainly seems like you can make a system that behaves as if it were like that, and I think that's what you're picking up on. But it's probably something else, something that falls far short of that.
There is an interesting analogy my Analysis I professor once made: the intersection of all valid examples is also a definition of an object. In many ways this is, at least in my current understanding, how ML systems "think". So yes, it takes some superposition of examples and tries to interpolate between them. But fundamentally it is - at least so far - always interpolation, not extrapolation.
Whether we call that "just regurgitating Stack Overflow" or "it thought up the solution to my problem" mostly comes down to semantics.
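A toy numerical illustration of that interpolation-vs-extrapolation point (my own example, unrelated to any particular model): fit a function only on samples inside a range and watch the error explode outside it.

```python
import numpy as np

# Toy illustration of interpolation vs. extrapolation (my own example):
# fit a cubic polynomial to noisy samples of sin(x) on [0, pi] and compare
# errors inside that range (interpolation) vs. outside it (extrapolation).
rng = np.random.default_rng(0)
x_train = rng.uniform(0, np.pi, 200)
y_train = np.sin(x_train) + rng.normal(scale=0.05, size=x_train.shape)

coeffs = np.polyfit(x_train, y_train, deg=3)

def mean_abs_error(x):
    return float(np.abs(np.polyval(coeffs, x) - np.sin(x)).mean())

print("error inside the training range :", round(mean_abs_error(np.linspace(0, np.pi, 100)), 3))
print("error outside the training range:", round(mean_abs_error(np.linspace(2 * np.pi, 3 * np.pi, 100)), 3))
# The fit looks "smart" where training examples exist and falls apart beyond them.
```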
> There is not a single system out there today which can do what Claude can do.
Of course there is: it's called Gemini 2.5 Pro, and it's also the reason I cancelled my Claude (and earlier OpenAI) subscriptions (I had quite a few of them to get around the limits).
Yeah. It's just fancier techniques than linear regression. Just as the latter takes a set of numbers and produces another set, an LLM takes words and produces another set of words.
The actual techniques are the breakthrough. The results are fun to play with and may be useful on some occasions, but we don't have to put them on a pedestal.
You have the wrong idea of how an LLM works. It's more like a model that iteratively finds associated/relevant blocks; the "reasoning" is the iterative steps it takes.
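A crude way to picture that iterative process (a toy bigram lookup of my own, not how a transformer is actually built): at each step the model asks "given what is here so far, which token is most associated next?" and appends it.

```python
import random

# Toy picture of iterative next-token generation (a bigram lookup, not a
# transformer): at each step, pick a likely continuation of the last word
# and append it. The "reasoning" is nothing but this repeated lookup.
bigrams = {
    "the":       ["model", "script", "user"],
    "model":     ["predicts", "generates"],
    "predicts":  ["the"],
    "generates": ["the"],
    "script":    ["runs"],
    "user":      ["asks"],
}

def generate(prompt: str, steps: int = 6, seed: int = 0) -> str:
    random.seed(seed)
    tokens = prompt.split()
    for _ in range(steps):
        candidates = bigrams.get(tokens[-1])
        if not candidates:          # no known continuation: stop
            break
        tokens.append(random.choice(candidates))
    return " ".join(tokens)

print(generate("the"))   # e.g. "the model generates the script runs"
```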
> “I'm not sure why people on HN (of all places) are so divided regarding the perception of AI/ML.”
Everyone is a rational actor from their individual perspective. The people hyping AI and the people dismissing the hype both have good reasons.
There is justification for seeing this new tech as groundbreaking. There is justification for being wary of massive theft of data and dismissiveness toward privacy.
First, acknowledge and respect that there are so many opinions on any issue. Take yourself out of the equation for a minute. Understand the other side. Really understand it.
> But take just the researchers who need to write code for their work: AI can already make them a lot more efficient.
Scientists don't need to be efficient; they need to be correct. Software bugs were already a huge cause of scientific error and a contributor to the lack of reproducibility; see for example cases like this (https://www.vice.com/en/article/a-code-glitch-may-have-cause...).
Programming in research environments is notorious for its questionable and variable quality - as is the case in industry, to be fair - but in research, minor errors can ruin the results of entire studies. People are fed up and come to much harsher judgments about AI because in an environment like a lab you cannot write software with the attitude of an impressionist painter, or the AI equivalent; you need to actually know what you're typing.
AI can make you more efficient if you don't care if you're right, which is maybe cool if you're generating images for your summer beach volleyball event, but it's a disastrous idea if you're writing code in a scientific environment.
I basically agree, but I want to point out two major differences from other "hype-y" topics of the past that, in my opinion, make AI discussions on HN a bit more controversial than older hype discussions:
1. The total investment volume in AI (and thus the hopes and expectations) is much larger than for other hype topics.
2. Sam Altman, the CEO of OpenAI, was president of Y Combinator, the company behind Hacker News, from 2014 to 2019.
On (1): Investment volume relative to what? To me, it looks like a very similar pattern of investors crowding into the currently hot thing, trying to get a piece of the winners of the power law.
On (2): I'm honestly not sure this makes a big difference at all. Not much of the commentary here is driven by YC concerns, because most of the audience here has no direct involvement with YC.
>On (1): Investment volume relative to what? To me, it looks like a very similar pattern of investors crowding into the currently hot thing, trying to get a piece of the winners of the power law.
The profile of the investors (nearly all of the biggest tech companies, among others), as well as how much they are willing to put down and have already put down (billions), is larger than most.
OpenAI alone just started work on a $100B+ datacenter (Stargate).
Yeah maybe I buy it. But it reminds me of the investment in building out the infrastructure of the internet. That predates HN, but it's the kind of thing we would have debated here if we could have :)
The ultimate job of a programmer is to translate human language into computer language. Computers are extremely capable, but they speak a very cryptic, rigidly logical language.
LLMs are undeniably treading into that territory. Who knows how far in they will make it, but the wall has been breached, which is anywhere from unsettling to downright scary depending on your take. It is a real threat to a skill that many have honed for years and that is very lucrative to have. Programmers don't even need to be replaced; having to settle for $100k/yr in a senior role is almost as scary.
Google never gave a good reason for why they stopped making their cache public, but my theory is that it was because people were scraping it to train their LLMs.
> Just last week I asked for a script to do image segmentation with a basic UI, and Claude just generated that for me in under a minute.
I agree that this is useful! It will even take natural language and augment the script, and maybe get it right! Nice!
The AI is combing through scraped data with an LLM and conjuring some ImageMagick snippets into a shell script. This is very useful, and if you're like most people, who don't know ImageMagick intimately, it's going to save you tons of time.
Where it gets incredibly frustrating is tech leadership seeing these trivial examples and assuming they extrapolate to general software engineering at their companies. "Oh it writes code, or makes our engineers faster, or whatever. Get the managers mandating this, now! Also, we need to get started on the layoffs. Have them stack rank their reports by who uses AI the best, so that we are ready to pull the trigger."
But every real engineer who uses these tools on real (as in huge, poorly written) codebases, if they are being honest (they may not be, given the stack ranking), will tell you “on a good day it multiplies my productivity by, let’s say, 1.1-2x? On a bad day, I end up scrapping 10k lines of LLM code, reading some documentation on my own, and solving the problem with 5 lines of intentional code.”
Please, PLEASE pay attention to the detail I added: huge, poorly written codebases. This is just the reality at most software companies that have graduated past the Series A stage. What my colleagues and I are trying to tell you, leadership, is that these "it made a script" and "it made an HTML form with a backend" examples ARE NOT cleanly extrapolating to the flaming dumpster-fire codebases we actually work with. Sometimes the tools help! Sometimes, they don't.
It's as if the LLM is just another tool we use sometimes.
This is why I am annoyed. It’s incredibly frustrating to be told by your boss “use tool or get fired” when that tool doesn’t always fit the task at hand. It DOES NOT mean I see zero value in LLMs.
Most work in software jobs is not making one-off scripts like in your example. A lot of the job is modifying existing codebases, which involve in-house approaches to style and services, various third-party frameworks like annotation-driven Spring, and requirements around how to write tests and how many. AI is just not very helpful here; you spend more time spinning your wheels trying to craft the absolutely perfect script than you would just making the code changes directly.
There is no single reason. Nobody will deny that LLMs are already quite useful at some tasks if used properly.
As for the opposing view, there are so many reasons.
* Founders and other people who bet their money on AI try to pump up the hype in spite of problems with delivery
* We know some of them are plainly lying, but the general public doesn't
* They repeat their assumptions as facts ("AI will replace most X and Y jobs by year Z")
* We clearly see that the enormous pace of LLM development has plateaued, but they try to convince the general public of the contrary
* We see the difference between how a single individual (Aaron Swartz) was treated for a small copyright infringement and how the consequences for AI companies like OpenAI or Meta, who copied the entire contents of Libgen, are non-existent
* Some people like me just hate AI slop - in writing and in images. It just puts me off and I stop reading/watching, etc.