> ... but it's not like the discovery of the steam engine or electricity.
Completely disagree. People might have googled before, but the human<>computer interface was never as accessible as it is now for a normal human being. Can I use Photoshop? Yes, but I had to learn it. My sisters played around with DALL-E and are now able to do similar things.
It might feel boring to you that technology accessibility trickles down like this, but it changes a lot for a lot of people. The entry barrier to everything got a lot lower. It makes a huge difference as a human being whether you have rich parents and good teachers or not. You never had the chance to just get help like this before. Millions of kids struggle because they don't have parents they can ask the questions required for understanding topics in school.
Steam engine = fundamental for our scaling economy
Electricity = fundamental for liberating all of us from daylight hours
Internet = interconnecting all of us
LLM/ML/AI = liberating knowledge through accessibility
> 'There hasn’t been a real breakthrough in over two years.'
DeepSeek alone was a real breakthrough.
But let me ask an LLM about this:
- Mixture of Experts (MoE) scaling
- Long-context handling
- Multimodal capabilities
- Tool use & agentic reasoning
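Of the advances listed above, MoE is the most architectural. A minimal sketch of the core idea, top-k expert routing, is below; all names and sizes here are illustrative toys, not any specific model's implementation (real experts are feed-forward blocks and the router is trained):

```python
# Minimal sketch of Mixture-of-Experts (MoE) top-k routing.
# Illustrative only: experts are toy functions, router weights are random.
import math
import random

random.seed(0)

NUM_EXPERTS = 4  # total experts available
TOP_K = 2        # experts actually run per token (sparse activation)
DIM = 8          # toy hidden dimension

# Each "expert" is a stand-in for a feed-forward block.
experts = [lambda x, i=i: [v * (i + 1) for v in x] for i in range(NUM_EXPERTS)]

# Router: a linear layer producing one logit per expert.
router_weights = [[random.gauss(0, 1) for _ in range(DIM)]
                  for _ in range(NUM_EXPERTS)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x):
    # 1. Score every expert for this token.
    logits = [sum(w * v for w, v in zip(row, x)) for row in router_weights]
    # 2. Keep only the top-k experts; the rest are never evaluated,
    #    which is where the compute savings come from.
    top = sorted(range(NUM_EXPERTS), key=lambda i: logits[i], reverse=True)[:TOP_K]
    gates = softmax([logits[i] for i in top])
    # 3. Output is the gate-weighted sum of the chosen experts' outputs.
    out = [0.0] * DIM
    for g, i in zip(gates, top):
        for d, v in enumerate(experts[i](x)):
            out[d] += g * v
    return out, top

token = [random.gauss(0, 1) for _ in range(DIM)]
output, chosen = moe_forward(token)
```

The point of the design: total parameter count scales with `NUM_EXPERTS`, but per-token compute scales only with `TOP_K`.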
Funnily enough, your comment comes just before the Claude 4.0 release (again an increase in performance, etc.) and Google I/O.
We don't know if we have found all the 'low-hanging fruit'. The Meta paper about thinking in latent space only came out in February. I would definitely call that a low-hanging fruit.
We are hard-limited by infrastructure. Every experiment you want to try consumes a lot of it. If you look at the top GPU AI clusters, we don't have that many on the planet: Google, Microsoft/Azure, Nvidia, Baidu, Tesla, xAI, Cerebras. Not that many researchers are able to just work on this.
Google now has its first diffusion-based model live. In 2025! We are so far from having tested out all the possible approaches, architectures, etc., and we are optimizing on every front: cost, speed, precision.
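The core idea behind diffusion models for text can be sketched in a few lines: corrupt a token sequence by masking, then iteratively "denoise" it back. This is purely illustrative and not Google's method; the `toy_denoiser` here cheats by peeking at the original, where a real model predicts masked tokens from context in parallel:

```python
# Toy sketch of the discrete "mask-and-denoise" idea behind text diffusion.
# Illustrative only: the denoiser is a cheating stand-in for a learned model.
import random

random.seed(1)
MASK = "_"

def forward_noise(tokens, mask_frac):
    """Corrupt the sequence by masking a fraction of its tokens."""
    out = list(tokens)
    for i in random.sample(range(len(out)), k=int(len(out) * mask_frac)):
        out[i] = MASK
    return out

def toy_denoiser(noisy, original):
    """Stand-in for the learned model: recover one masked token per step.
    A real denoiser predicts tokens from context, many per step."""
    out = list(noisy)
    for i, t in enumerate(out):
        if t == MASK:
            out[i] = original[i]
            break
    return out

sentence = "diffusion models generate text by iterative denoising".split()
noisy = forward_noise(sentence, 0.5)
steps = 0
while MASK in noisy:
    noisy = toy_denoiser(noisy, sentence)
    steps += 1
```

The contrast with autoregressive LLMs is the generation order: instead of left-to-right one token at a time, the whole sequence is refined over a fixed number of denoising steps.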
> My sisters played around with Dall-E and are now able to do simiiliar things.
This is in no way, shape, or form productively similar to being skilled at Photoshop. There is absolutely no way these people can mask, crop, or tweak color precisely, etc. There are hundreds of these sub-tasks; it's not just "making cool images". No amount of LLMing will make you skilled, and no amount of delegation will make you able to ask the LLM these specific questions in a skillful way.
There is a very real, fundamental problem here. To be able to ask the right questions, you need a base of competence that y'all are so happy to throw to the wind. The next generation will not even know what a "mask" is, let alone ask an LLM for details about one. Education is dropping worldwide, and these things are not going to help; they are going to accelerate this bullshit.
> liberating knowledge through accessability
Because the thing is, availability of knowledge was never the issue. The ridiculous amount of copyright-free educational material and the hundreds of gigabytes of books on Project Gutenberg are testament to that.
Even in my youth (the 90s) there were plenty of books and easy-to-access resources to learn, say, calculus. Did I peruse them? Hell no. Did my friends? You bet your ass they were busy wasting time on bullshit as well. Let's just be honest about this.
These problems are not technical, and no amount of technology is going to solve them. If anything, it'll make things worse. Good education is everything; focus on that. Drop the AI bullshit, drop the tech bullshit. Read books, solve problems. Focus on good teachers.