
I’ve been having a lot of laughs about all the things we call AI nowadays. Now it’s becoming less funny.

To me it’s just generative AI: LLMs, media generation. But I see the CNN folks suddenly getting “AI” attention. Anything deep learning, really. It’s pretty weird. Even our old batch-processing, SLURM-based clusters with GPU nodes are now “AI Factories”.




> To me it’s just generative AI, LLMs, media generation.

That's not what AI is.

Artificial Intelligence has decades of use in academia. Even a script which plays Tic Tac Toe is AI. LLMs have advanced the field profoundly and gained widespread use. But that doesn't mean that a Tic Tac Toe bot is no longer AI.
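
To make that concrete: a complete "Tic Tac Toe AI" can be a few dozen lines of plain minimax search over the game tree, with no learning anywhere. A rough sketch of my own (not from any particular textbook), in Python:

    # Classic game-tree search: no data, no training, just exhaustive lookahead.
    def winner(board):
        """Return 'X' or 'O' if that player has three in a row, else None."""
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
                 (0, 4, 8), (2, 4, 6)]              # diagonals
        for a, b, c in lines:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Return (score, move) for `player`; 'X' maximises, 'O' minimises."""
        w = winner(board)
        if w == 'X':
            return 1, None
        if w == 'O':
            return -1, None
        if ' ' not in board:
            return 0, None                      # draw
        results = []
        for i, cell in enumerate(board):
            if cell == ' ':
                child = board[:i] + player + board[i + 1:]
                score, _ = minimax(child, 'O' if player == 'X' else 'X')
                results.append((score, i))
        return max(results) if player == 'X' else min(results)

    # X to move on an empty board; optimal play is a draw, so the score is 0.
    print(minimax(' ' * 9, 'X'))

Game-tree search like this is exactly the kind of program the academic literature has filed under AI since the 1950s.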

When a term passes into the mainstream, people manufacture their own idea of what it means. This has happened to the term "hacker". But that doesn't mean decades of AI papers are wrong just because the public uses a different definition.

It's similar to the professional vs. the public understanding of the term "prop" in movie making. People criticized Alec Baldwin for using a real gun on the set of Rust instead of a "prop" gun. But as movie professionals explained, a real gun is a prop gun. "Prop" in theater and film just means property: anything used in the production. Prop guns can be plastic replicas, real guns that have been disabled, or fully functional guns that actually fire. Just because the public thinks "prop" means "fake" doesn't mean movie makers have to change their terms.


Even the A* search algorithm is technically AI.
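
A sketch of what that looks like, my own toy example rather than anything from the thread: best-first search on a grid, guided by a Manhattan-distance heuristic, in Python:

    import heapq

    def a_star(start, goal, walls, width, height):
        """Shortest 4-connected path on a grid; returns a list of cells or None."""
        def h(p):                                   # Manhattan-distance heuristic
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

        open_heap = [(h(start), 0, start)]          # entries are (f = g + h, g, cell)
        came_from = {start: None}
        best_g = {start: 0}
        while open_heap:
            _, g, cur = heapq.heappop(open_heap)
            if cur == goal:                         # reconstruct the path
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = came_from[cur]
                return path[::-1]
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dx, cur[1] + dy)
                if not (0 <= nxt[0] < width and 0 <= nxt[1] < height) or nxt in walls:
                    continue
                if g + 1 < best_g.get(nxt, float('inf')):
                    best_g[nxt] = g + 1
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
        return None

    # Route around a small wall on a 4x4 grid.
    print(a_star((0, 0), (3, 3), {(1, 1), (2, 1), (2, 2)}, 4, 4))

The "intelligence" here is nothing more than expanding the most promising node first.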


Oh man, I really want to watch CNN folks try to pronounce Dijkstra!


We could have it both ways with a Convolutional News Network


Let alone Edsger


That reminded me of this classic: https://www.youtube.com/watch?v=icoe0kK8btc


die-jick-stra!


Well, it used to be. But whenever we understand something, we move the goalposts of what counts as AI.

At least that's what we used to do.


It's not "moving the goalposts." It's realizing that the principles behind perceptrons / Lisp expert systems / AlphaGo / LLMs / etc. might be very useful and interesting from a software perspective, but they have nothing to do with "intelligence," and they aren't a viable path to machines that can actually think the way a chimpanzee can. At best they do a shallow imitation of certain kinds of formal human thinking. So the search continues.


No, it's still moving the goalposts. It's just that we move the goalposts for pretty good reasons. (I agree!)

Btw, you bring up the perspective of realising that our tools weren't adequate. But it's broader: completely ignoring the tools, we also realise that, eg, being able to play chess really, really well didn't actually capture what we wanted 'intelligence' to mean. The same goes for other outcomes.


Moving the goalposts and noticing that you mistook the streetlights for goalposts are not really the same thing.



