
Man, remember when everyone was like 'AGI just around the corner!' Funny how well the Gartner hype cycle captures these sorts of things



To be fair, a technology sigmoid curve rises fastest right around its inflection point, so by its very nature it is hard to predict at what point innovation will slow down.

The first Boeing 747 was rolled out in 1968, only 65 years after the first successful heavier-than-air flight. If you had told people back then that not much would fundamentally change in civil aviation over the next 57 years, no one would have believed you.


And not just in aviation. Consider what aviation did to make the world smaller: huge second-order changes. The COVID-19 pandemic would not have happened the way it did if there were no Boeing or Airbus.

Big hard-to-predict changes ahead.


They're similar to self-driving vehicles. Both are around the corner, but neither can negotiate the turn.


I saw your comment and counted — in May I took a Waymo thirty times.


Waymo is a popular argument in self-driving discussions, and they do well.

However, Waymo is the Deep Blue of self-driving cars: doing very well in a closed space. As a result of this geofencing, they have effectively exhausted their search space, hence they work well as a consequence of lack of surprises.

AI works well when the search space is limited, but General AI in any category needs to handle a vastly larger search space, and that's where these systems fall flat.

At the end of the day, AI is informed search. It takes inputs and generates an output deemed suitable by its trainers.
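
For what it's worth, by "informed search" I mean heuristic-guided search in the classic sense: a heuristic ranks candidate states and the most promising one gets expanded next. A minimal sketch (purely illustrative, all names made up):

    import heapq, itertools

    def informed_search(start, is_goal, neighbors, heuristic):
        # Greedy best-first search: always expand the state the heuristic
        # currently rates as most promising. The heuristic is the "informed" part.
        counter = itertools.count()  # tie-breaker so heapq never compares raw states
        frontier = [(heuristic(start), next(counter), start, [start])]
        seen = {start}
        while frontier:
            _, _, state, path = heapq.heappop(frontier)
            if is_goal(state):
                return path
            for nxt in neighbors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(frontier, (heuristic(nxt), next(counter), nxt, path + [nxt]))
        return None  # frontier exhausted: the (finite) space held no solution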


This view of Waymo doesn't account for the fact that self-driving is about a lot more than just taking the right roads. It has to deal with other drivers, construction, road closures, pedestrians, bikes, etc.


What I wrote is exactly the opposite. Quoting myself:

> hence they work well as a consequence of lack of surprises

Emphasis mine.

In this context, "lack of surprises" covers exactly the rest of driving besides road choice. In the same space, the behaviors of other actors also form a finite set, or more precisely, can be predicted with much better accuracy.

I have driven the same commute route for ~20 years. The events which surprise me are few and far between, because other people's behavior in that environment is a finite set, and they all behave very predictably, including pedestrians, bikes, and other drivers.

Choosing roads is easy and handling surprises is hard, but if you have seen most potential surprises, you can drive without even thinking. While I'm not proud of it, my brain took over and drove me home a couple of times on that route when I was too tired to think.


Yeah, AI has been good for a long time in limited-search-space areas. So good that many of the things that were called AI in the past are not called AI now, but 'just' an 'algorithm'.


Everything is "just" an algorithm. An LLM is a weighted graph with some randomization, tuned with tons of data, with input and output encoders on top of it.

That’s all.
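
To make that concrete, the whole inference loop fits in a few lines. A toy sketch, where "model" and "tokenizer" are stand-ins for whatever implementation you like:

    import numpy as np

    def generate(model, tokenizer, prompt, max_new_tokens=50, temperature=0.8):
        # Toy decoding loop: "model" maps a token sequence to next-token logits
        # (the tuned weighted graph); "tokenizer" is the input/output encoder pair.
        tokens = tokenizer.encode(prompt)
        rng = np.random.default_rng()
        for _ in range(max_new_tokens):
            logits = model(tokens)                               # forward pass through the graph
            probs = np.exp((logits - logits.max()) / temperature)
            probs /= probs.sum()                                 # softmax with temperature
            tokens.append(int(rng.choice(len(probs), p=probs)))  # the "randomization"
        return tokenizer.decode(tokens)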


I suspect that Waymo cars could operate in a lot more areas than they do. The issue is that Waymo is trying to sell the service of safe travel, not a car with a paid add-on that doesn't actually work.

In other words, since they accept liability for their cars it's not in their interest to roll out the service too fast. It makes more sense to do it slow and steady.

It's not really a strong argument that their technology is incapable of working in general areas.


And commercially viable nuclear fusion


I harvest fusion energy every single day... It's just there in the sky, for free!


Waymo's pretty good at unprotected lefts


Waymo is pretty good at (a finite number of) unprotected lefts, and this doesn't count as "level 5 autonomous driving".


All that to keep the investment pyramid schemes going.


…but that was, like, two years ago? If we go from GPT2 to AGI in ten years that will still feel insanely fast.


We won’t


AGI has always been "just around the corner", ever since computers were invented.

Some problems have become more tractable (e.g. language translation), mostly by lowering our expectations of what constitutes a "solution", but AGI is no nearer. AGI is a secular millenarian religion.


Yeah it's already been 2½ years! How long does it take to develop artificial life anyway? Surely no more than 3 years? I demand my money back!


We will be treating LLMs “like a junior developer” forever.


Even if they never get better than they are today (unlikely), they are still the biggest change in software development and the software development industry in my 28-year career.


That's for sure; I said it already with the original ChatGPT: if this is the level it stays at, but it just becomes (much) faster and open, it's already a bizarre revolution. Something many old former (and current, as I see online, but I don't know any personally) AI students/researchers did not think possible in our lifetime, and there it was.


And I'm fine with that.


I think we're just at around 80% of progress

the easy part is done but the hard part is so hard it takes years to progress


> the easy part is done but the hard part is so hard it takes years to progress

There is also no guarantee of continued progress to a breakthrough.

We have been through several "AI Winters" before where promising new technology was discovered and people in the field were convinced that the breakthrough was just around the corner and it never came.

LLMs aren't quite the same situation, as they do have some undeniable utility to a wide variety of people even without AGI springing out of them, but the blind optimism that surely progress will continue at a rapid pace until the assumed breakthrough is realized feels a lot like the hype cycle preceding past AI "Winters".


> We have been through several "AI Winters" before

Yeah, remember when we spent 15 years (~2000 to ~2015) calling it “machine learning” because AI was a bad word?

We use so much AI in production every day but nobody notices, because as soon as a technology becomes useful, we stop calling it AI. Then it's suddenly "just face recognition" or "just product recommendations" or "just [plane] autopilot" or "just adaptive cruise control", etc.

You know a technology isn’t practical yet because it’s still being called AI.


I don’t think there’s any “AI” in aircraft autopilots.


AI encompasses a wide range of algorithms and techniques, not just LLMs or neural nets. Also, it is worth pointing out that the definition of AI has changed drastically over the last few years and narrowed pretty significantly. If you're viewing the definition from the '80s–'90s, most of what we call "automation" today would have been considered AI.


Autopilots were a thing before computers were a thing; you can implement one using mechanics and control theory. So no, traditional autopilots are not AI under any reasonable definition. Otherwise every single machine we build would be considered AI, since almost all machines have some form of control system in them. For example, is your microwave clock an AI?

So I'd argue any algorithm that comes from control theory is not AI; those are just basic old dumb machines. You can't make planes without control theory, and humans can't keep a plane steady without it, so the Wright brothers adding this to their plane is why they succeeded in making a flying machine.

So if autopilots are AI, then the Wright brothers developed an AI to control their plane. I don't think anyone sees that as AI, not even at the time of the first flight.
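
For a sense of scale, the kind of controller in question is tiny: no data, no training, just fixed gains chosen by an engineer. A textbook PID step looks roughly like this (illustrative sketch, gains made up):

    def pid_step(setpoint, measurement, state, kp=1.0, ki=0.1, kd=0.05, dt=0.02):
        # One step of a textbook PID controller: the correction is proportional
        # to the error, its accumulated integral, and its rate of change.
        error = setpoint - measurement
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    # e.g. holding altitude: start with state = {"integral": 0.0, "prev_error": 0.0}
    # and feed the output into the elevator trim every tick.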


Uh, the Bellman equation was first used for control theory and is the foundation of modern reinforcement learning... so wouldn't that imply LLMs "come from" control theory?


Is the training algorithm the AI or is the model that you get at the end the AI?


Ah yes the mythical strawman definition of AI that you can never seem to pin down, was never rigorous, and never enjoyed wide expert acceptance. It's on par with "well many people used to say, or at least so I've been told, that ...".


That’s the point: AI is a marketing term and always has been. The underlying tech changes with every hype wave.

One of the first humanoid robots was an 18th century clockwork mechanism inside a porcelain doll that autonomously wrote out “Cogito Ergo Sum” in cursive with a pen. It was considered thought provoking at the time because it implied that some day machines could think.

BBC video posted to reddit 10 years ago: https://www.reddit.com/r/history/s/d6xTeqfKCv


It certainly sees use as an ever-shifting marketing term. That does not exclude it from being a useful technical term. Indeed, if the misuse of a term by marketers were sufficient to rob a word of meaning, then I doubt we'd have any means of communication left.

> It was considered thought provoking at the time because it implied that some day machines could think.

What constitutes "thinking"? That's approximately the same question as what qualifies as AGI. LLMs and RL seem to be the first time humanity has achieved anything that begins to resemble that but clearly both of those come up short ... at least so far.

Meanwhile I'm quite certain that a glorified PID loop (i.e. an autopilot) does not qualify as machine learning (or AI, if you'd prefer). If someone wants to claim that it does, then he's going to need to explain how his definition excludes mechanical clockwork.


What do you think an executing LLM is? It’s basically a glorified PID loop. It isn’t learning anything new. It isn’t thinking about your conversation while you go take a poo.

And I think the point is that the definition doesn’t exclude pure mechanical devices since that’s exactly what a computer is.


To claim that an LLM is equivalent to a PID loop is utterly ridiculous. By that logic a 747 is "basically a glorified lawn mower".

> It isn’t thinking about your conversation while you go take a poo.

The commercial offerings for "reasoning" models can easily run for 10 to 15 minutes before spitting out an answer. As to whether or not what it's doing counts as "thinking" ...

> the definition doesn’t exclude pure mechanical devices since that’s exactly what a computer is.

By the same logic a songbird or even a human is also a mechanical device. What's your point?

I never said anything about excluding mechanical devices. I referred to "mechanical clockwork" meaning a mechanical pocket watch or similar. If the claim is that autopilot qualifies as AI then I want to know how that gets squared with a literal pocket watch not being AI.


> The commercial offerings for "reasoning" models can easily run for 10 to 15 minutes before spitting out an answer. As to whether or not what it's doing counts as "thinking" ...

Tell me you don't know how AI works without telling me you don't know how AI works. After it sends you an output, the AI stops doing anything. Your conversation sits resident in RAM for a bit, but there is no more processing happening.

It is waiting until you give it feedback... some might say it is a loop... a feedback loop ... that continues until the output has reached the desired state ... kinda sounds familiar ... like a PID loop where the human is the controller...

>To claim that an LLM is equivalent to a PID loop is utterly ridiculous.

Is it? It looks like one to me.

> By that logic a 747 is "basically a glorified lawn mower".

I don’t think a 747 can mow lawns, but I assume it has the horsepower to do it with some modifications.


AI is multiple things.

AI is a marketing term for various kinds of machine learning applications.

AI is an academic field within computer science.

AI is the computer-controlled enemies you face in (especially, but not solely, offline) games.

This has been the case for decades now—especially the latter two.

Trying to claim that AI either "has always been" one particular thing, or "has now become" one particular thing, is always going to run into trouble because of this multiplicity. The one thing that AI "has always been" is multiple things.


Interpreting "just around the corner" as "this year" sounds like your error. Most projections are years out, at least.


I remember "stochastic parrot" and people saying it's a fancy Markov chain, a dead end. You don't hear that much anymore, since roughly when agentic coding appeared.


Spicy autocomplete is still spicy autocomplete


I'm not sure a system capable of, e.g., reasoning over images deserves this label anymore?


The thing is, "spicy" or "glorified" autocomplete are not actually bad labels; they are autocomplete machines that are very good, to the point of convincing people that they think.


Many people are good at convincing other people that they think, but also many fail at thinking and many fail at convincing.


Yours seems like a circa-2023 perspective on coding assistants. These days it's well beyond autocomplete and "generate a function that returns the numbers from the Fibonacci sequence."

But I would think that would be well understood here.

How can you reduce what is currently possible to spicy autocomplete? That seems pretty dismissive, so much so that I wonder if it is motivated reasoning on your part.

I’m not saying it’s good or bad; I’m just saying the capability is well beyond auto complete.


What do you think has changed? The situation is still about as promising for AGI in a few years - if not more so. Papers like this are the academics mapping out where the engineering effort needs to be directed to get there, and it seems to be a relatively small number of challenges that are easier than the ones already overcome - we know machine learning can solve Towers of Hanoi, for example. It isn't fundamentally complicated like Baduk is. The next wall to overcome is more of a low fence.
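
(For context on why Hanoi isn't a deep challenge: the complete solution is a short recursion, which is why it makes a convenient benchmark rather than a hard problem.)

    def hanoi(n, source="A", target="C", spare="B"):
        # Classic Towers of Hanoi: move n disks from source to target.
        # The whole problem collapses into one recursive rule.
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)
        print(f"move disk {n}: {source} -> {target}")
        hanoi(n - 1, spare, target, source)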

Besides, AI already passes the Turing test (or at least, when it fails, it's usually because it is too articulate and reasonable). There is a pretty good argument we've already achieved AGI and now we're working on achieving human- and superhuman-level intelligence in AGI.


> What do you think has changed? The situation is still about as promising for AGI in a few years - if not more so

It's better today. Hoping that LLMs could get us to AGI in one hop was naive. Depending on the definition of AGI, we might already be there. But for superhuman level on all possible tasks there are many steps to be done. The obvious way is to find a solution for each type of task. We already have one for math calculations: using tools. Many other types can be solved the same way. After a while we'll gradually get to a well-rounded 'brain', or model(s) + support tools.
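
Concretely, the "using tools" part is usually just a dispatch loop around the model. A toy sketch, where the tool protocol, the "calculator" tool, and the "model" callable are all invented for illustration:

    import json

    # Hypothetical tool registry; a real system would register many of these.
    TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

    def run_with_tools(model, prompt, max_rounds=5):
        # Toy agent loop: if the model emits a JSON tool call, execute it and
        # feed the result back; otherwise treat the reply as the final answer.
        transcript = prompt
        for _ in range(max_rounds):
            reply = model(transcript)
            try:
                call = json.loads(reply)  # e.g. {"tool": "calculator", "input": "12*37"}
            except ValueError:
                return reply              # plain text: final answer
            result = TOOLS[call["tool"]](call["input"])
            transcript += f"\n[tool {call['tool']} returned: {result}]"
        return reply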

So far the future looks bright: there is progress, there are problems, but no deadlocks.

PS: The Turing test is a <beep> nobody seriously talks about today.



