
The goalposts on AGI would be superluminal and somewhere back in the 1400s if they were physical objects. I’ve never seen or heard of a field so deeply in denial about its progress.

For every major criterion trounced, we somehow invent 4 or 5 new requirements for it to be “real” AGI. Now, it seems, AGI must display human-level intelligence at superhuman speed (servicing thousands of conversations at once), be as knowledgeable as the most knowledgeable 0.1% of humans across every facet of human knowledge, be superhumanly accurate, perfectly honest, and never make anything up.

I remember when AGI meant being able to generalize knowledge to problems not specifically accounted for in the algorithm… the ability to exhibit “generalization” of knowledge, in contrast to algorithmic knowledge or expert systems. It was often referred to as “mouse-level” or sometimes “dog-level” intelligence. Now we expect something vastly more capable than any being that has ever existed, or it’s not “AGI” lmfao. “ASI” will probably have to solve all of the world’s problems and bring us all to the promised land before it will merit that moniker lol.

"I remember when AGI meant being able to generalize knowledge over problems not specifically accounted for in the algorithm… "

So do we have that? As far as I know, we just have very, very large algorithms (to use your terminology). Give it any problem not in the training data and it fails.


The same goes for most animals and humans, the vast majority of the time. We expect consistent savant-level performance or it’s not “AGI”. If humans were good at actual information synthesis, Einstein and Tom Robbins would be everyone’s next-door neighbors.

Partly true, but in my experience no LLM has shown a consistent understanding of the problem space.

If we applied the same standards to people…

As a sounding board and source of generally useful information, even my small locally hosted models generally outperform a substantial slice of the population.

We all know people we would not ask anything that mattered, because their ideas and opinions are typically not insightful or informative. Conversing with a 24b model is likely to have higher utility. Do these people then not exhibit “general intelligence”? I really think we take pattern matching, next-token rambling, hallucinations, and rampant failures of reasoning in stride from people, while applying a much, much higher bar to LLMs.

To me this makes no sense, because LLMs are compilations of human culture and their only functionality is to replicate human behavior. I think on average they do a pretty good job vs a random sampling of people, most of the time.

I guess we see this IRL when we internally label some people as “NPCs”.


"As a sounding board and source of generally useful information, even my small locally hosted models generally outperform a substantial slice of the population."

So does my local copy of Wikipedia.

But the lines do get blurry, and many real humans indeed seem to be no more than stochastic parrots feigning understanding.


> “ASI” will probably have to solve all of the world’s problems and bring us all to the promised land before it will merit that moniker lol.

People base their notions of AI on science fiction, and it usually goes one of two ways in fiction.

Either a) Skynet awakens and kills us all, or

b) the singularity happens, AIs get so far ahead that they become deities, and maybe the chosen transhumanist elect get swept up into some simulation that is basically a heavenly realm or something.

So yeah, bringing us to the promised land is an expectation of super AI that does seem to come out of certain types of science fiction.



