
I am a PhD biophysicist working in the field of biological imaging. Professionally, my team successfully uses deep learning and GANs for a variety of imaging tasks, such as segmentation, registration, and predictive protein/transcriptomics. It's good stuff, a game changer in many ways. In no way, however, does it represent generalized AI, and nobody in the field makes this claim, even though the output of these algorithms matches or outperforms humans in some cases.

LLMs are no different. Like DL models that are very good at outputting images that mimic biological signatures, LLMs are very good at outputting texts that eerily mimic human language.

However (and this is a point of which programmers are woefully, comically ignorant), human language and reason are two separate things. Tech bros wholly confuse the two, and thus make outlandish claims that we have achieved, or are on the brink of achieving, actual AI systems.

In other words, while LLMs and DL in general can perform specific tasks well, they do not represent a breakthrough in artificial intelligence, and thus will have a much narrower application space than actual AI.



If you've been in the field, you really should know that the term AI has been used to describe things like this for decades in the academic world. My degree was in AI back before RBMs and Hinton's big reveal about making things 100,000 times faster (do the main step just once, not 100 times, and take 17 years to figure that out).
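For the curious, that trick is contrastive divergence with a single Gibbs step (CD-1). Roughly, in numpy, something like the following; the shapes, learning rate, and sigmoid choice are my own illustrative assumptions, not Hinton's actual code:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0, W, b_vis, b_hid, lr=0.1, rng=None):
        rng = rng or np.random.default_rng()
        # Positive phase: hidden probabilities given the data vector v0.
        h0_prob = sigmoid(v0 @ W + b_hid)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
        # One reconstruction step instead of running the chain to equilibrium.
        v1_prob = sigmoid(h0 @ W.T + b_vis)
        h1_prob = sigmoid(v1_prob @ W + b_hid)
        # Update: difference between data statistics and reconstruction statistics.
        W += lr * (np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob))
        b_vis += lr * (v0 - v1_prob)
        b_hid += lr * (h0_prob - h1_prob)
        return W, b_vis, b_hid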

You're talking more about AGI.

We need "that's not AI" discussions like we need more "serverless? It's still on some server!!" discussions.


I think it's not even comparable to the server vs. serverless discussions.

It's about the meaning of intelligence. These people have no problem claiming that ants or dolphins are intelligent, but suddenly, for machines to be classified as artificial intelligence, they must be exactly on the same level as humans.

Intelligence is just the ability to solve problems. There's no implication that, in order for something to be intelligent, it has to perform at the level of the top people in that field in the world.

It just has to be beyond a simple algorithm and able to solve some sort of problem. You have AIs in video games that are just bare spaghetti logic with no neural networks.
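Something like this toy enemy logic, say (the behaviour and thresholds are made up purely for illustration):

    # A toy "game AI": pure conditional logic, no learning, no neural network.
    def enemy_action(distance_to_player, own_health):
        if own_health < 20:
            return "flee"      # self-preservation rule
        if distance_to_player < 2:
            return "attack"    # close enough to strike
        if distance_to_player < 10:
            return "chase"     # player spotted, close the gap
        return "patrol"        # default behaviour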


By your definition, a handheld calculator from the 1960s is ‘AI’.

In other words, you’ve lost the argument.


I said beyond a simple algorithm.


Or you're using AI as a term differently from the people in the field. SVMs are extremely simple, and two-layer perceptrons are things you can work out by hand!
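For example, here's a two-layer perceptron computing XOR with hand-picked weights and a unit step activation, small enough to check on paper (the weights are just one illustrative choice):

    def step(x):
        return 1 if x >= 0 else 0

    def xor_perceptron(x1, x2):
        h1 = step(x1 + x2 - 0.5)    # hidden unit acts like OR
        h2 = step(x1 + x2 - 1.5)    # hidden unit acts like AND
        return step(h1 - h2 - 0.5)  # OR and not AND = XOR

    assert [xor_perceptron(a, b) for a, b in [(0,0), (0,1), (1,0), (1,1)]] == [0, 1, 1, 0]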

Just stop trying to redefine AI as a term: you'll lose against the old hands, you'll lose against the marketing department, you'll lose against the tech bros, and nobody you actually need to explain it to will care. Use AGI or some other common term for what you're clearly talking about.


So, the ‘revolutionary’, ‘earth-shattering’, ‘soon-to-make-humans-obsolete’ talk about ChatGPT is all bullshit, and this is just another regular, run-of-the-mill development with the label of ‘AI’ slapped on somewhere, just like all the others from the last 40 years? What in the hell is even your point then? Is ChatGPT a revolutionary precursor to AGI, if not AGI already? I say it's not.


This is true, but only up to the point where mimicking, or more broadly speaking, statistically imitating data, is understood in a more generalized way.

LLMs statistically imitate real-world text. It turns out that, to reach a certain threshold of accuracy, they need to imitate the underlying Turing machine/program/logic that runs in our brains when we understand and react to text ourselves. That is no longer in the realm of old-school data-as-data statistics, I would say.


The problem with this kind of criticism of any AI-related technology is that it is an unfalsifiable argument akin to saying that it can't be "proper" intelligence unless God breathed a soul into the machine.

The method is irrelevant. The output is what matters.

This is like a bunch of intelligent robots arguing that "mere meat" cannot possibly be intelligent!

https://www.mit.edu/people/dpolicar/writing/prose/text/think...


> LLMs are very good at outputting texts that eerily mimic human language.

What a bizarre claim. If LLMs are not actually outputting language, why can I read what they output? Why can I converse with them?

It's one thing to claim LLMs aren't reasoning, which is what you later do, but you're disconnected from reality if you think they aren't actually outputting language.


You have missed the point entirely.


My point is that you made a bad point. Just be more precise with your phrasing if that wasn't your intended meaning.


Is there a block button? Or a filter setting? You are so unaware of, and uninquisitive about, actual human language that you cannot see the gross assumptions you are making.


> generalized AI

No one is talking about it being AGI. Everyone is talking about just AI specifically. I think your problem is thinking that AI = AGI.

For example, AI in video games is very specific and narrow to its domain.


Personally, I find it hard to believe a PhD would have trouble conjugating “drink”. :P


> human language and reason are two separate things

... in the human brain, which has evolved "cores" to handle each task optimally.

It's like the Turing Test. If it looks like it's reasoning, does it matter whether or not it's doing it the way a human brain does?




