
I do think intelligence is something more than data storage and retrieval. I believe it is adaptive behavior: thinking about what data I have, what I could obtain, and how to store and retrieve it. I could be wrong, but that's my hypothesis.

We humans don't simply use a fixed model; we're retraining ourselves thousands of times a day. On top of that, we seem to perceive the training, the input, and the responses as well. There is an awareness of what we're doing, saying, thinking, and reacting that differs from the way current AI produces an output. Whether that awareness is just a reasoning machine pretending to think based on predetermined actions from our lower brain activity, I don't know, but it definitely seems significantly more complex than what is happening in current "AI" research.

I think you're also onto something: there is a lot of passive data storage and retrieval happening in our perception, and a better understanding of it is worthwhile. However, I have also been informed by folks who are attempting to model and recreate the biological neurons we use for language processing. Their belief is that LLMs and ChatGPT are quite possibly not even headed in the right direction. Does this make LLMs viable long term? I don't know; time will tell. They already seem to be popping up everywhere, so the technology appears to have a business case even in its current state.

As for my father, I do not "put him down" as you say. I explained it to him, and I was completely respectful: I answered his questions, provided sources and research upon request, etc. I am not rude to my father; I deeply respect him. When I say "painfully," I mean it was quite painful seeing how effectively ChatGPT tricked him into thinking it was intelligent. I worry because these "tricks" will be used by bad people against all of us. There is even an article about an AI voice tool being used to trick a mother into thinking scammers had kidnapped her daughter (it was on Hacker News earlier today).

That is what I mean by painful. Seeing that your loved ones can be confused and misled. I take no joy in putting down my father and I do not actively look to do so. I merely worry that he will become another data point of the aging populace that is duped by phone call scams and other trickery.

Edit: Another thing about my father: he hates being misled or feeling ignorant. It was painful because he clearly was excited and hopeful that this was real AI. However, his desire to always understand how things work stripped away much of that science-fiction magic once he knew.

He's very grateful I explained how it works. For me, though, it's painful being the one he asks to find out about these things. Watching "oh my goodness, this is intelligent" fade into "oh, it's just predicting text responses". ChatGPT became a tool, not a revelation of computing. Because, as it is, it is merely a useful tool. It is not "alive", so to speak.



> Going from "oh my goodness, this is intelligent" fade to "oh, it's just predicting text responses"

Eventually your father will reach the third stage: "Uh, wait, that's all we do." You will then have to pry open the next niche in your god-of-the-gaps reasoning.

The advent of GPT has forced me to face an uncomfortable (yet somehow liberating) fact: we're just plain not that special.


Haha, I think he's already at that point with respect to humanity. All my childhood he impressed upon us that we're not special, that only hard work and dedication will get you somewhere in life.

It's a small leap to apply that to general intelligence, I would think.

You are right, though: we are coming closer and closer to deciphering the machinations of our psyches. One day we'll know fully what makes us tick. When we do, it will seem obvious and boring, just like all the other profound developments of our time.


We reflect, we change, we grow. We have so many other senses that contribute to our "humanness". If you listen to and enjoy music, tell me how those feelings are just "predictive text responses".

Communication is one part of being human. A big part for sure, but only one of many.


What is the qualitative difference between one type of perception and the other?

“Text” is just tokens. Tokens are abstract and can represent anything. Anything that has structure can be modeled, which is to say all of reality.

We have a lot of senses indeed. Multimodal I believe it’s called in ML jargon.

I don’t know where enjoyment itself comes from. I like to think it’s some system somewhere getting rewarded for predicting the next perception correctly.

Qualia are kind of hard to pin down as I’m sure you’ll know.


Yes, I wholly agree. The special part is in language. Both humans and AIs rely massively on language; no wonder AIs can spontaneously solve so many tasks. The secret is in those trillion training tokens, not in the neural architecture. Any neural net will work; even RNNs work (RWKV). People are still hung up on the "next token prediction" paradigm and completely forget the training corpus, which reflects a huge slice of our mental life.
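For what it's worth, the "next token prediction" loop itself is tiny; the substance lives in the model behind it. Here's a minimal sketch where a toy bigram count table stands in for the neural net (all names and the corpus are made up for illustration); the autoregressive sampling loop has the same shape real systems use.

```python
import random

# Toy stand-in for a language model: a bigram table mapping a token to
# possible next tokens with counts. A real LLM would output a probability
# distribution over a large vocabulary instead.
BIGRAMS = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "dog": {"ran": 2},
    "sat": {"down": 1},
    "ran": {"away": 1},
}

def predict_next(token, rng):
    """Sample the next token in proportion to observed counts."""
    dist = BIGRAMS.get(token)
    if not dist:
        return None  # no known continuation: stop generating
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(prompt, max_tokens=10, seed=0):
    """Autoregressive loop: feed each prediction back in as input."""
    rng = random.Random(seed)
    out = [prompt]
    for _ in range(max_tokens):
        nxt = predict_next(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

The loop never "plans" a sentence; coherence, such as it is, comes entirely from the statistics baked into the table, which is the point the parent makes about the training corpus.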

People and LLMs are just fertile land where language can make a home and multiply. But it comes from far away and travels far beyond us. It is a self replicator and an evolutionary process.


> I do think intelligence is something more than data storage and retrieval. I believe it is adaptive behavior: thinking about what data I have, what I could obtain, and how to store and retrieve it. I could be wrong, but that's my hypothesis.

Basing assertions of fact on a hypothesis while criticizing the thinking of other people seems off.


I understand better now, thanks for the explanation.

I have some experience in the other direction: everyone around me is hyperskeptical and throwing around the “stochastic parrot” label.

Meanwhile, they completely ignore how awesome this is and what the potential of the whole field is. As if it’s cool to be the “one who sees the truth”.

I see this like a ’70s computer. In and of itself not that earth-shattering, but man... the potential.

Just a short while ago nothing like this was even possible. Talking computers in scifi movies are now the easy part. Ridiculous.

Also keep in mind text is just one form of data. I don’t see why movement, audio and whatever other modality cannot be tokenized and learned from.

That’s also ignoring all the massive non-LLM progress that has been made in the last decades. LLMs could be the glue to something interesting.


Oh, yeah, I hear you on that as well. It's still a really cool tool! Probabilistic algorithms and other types of decision layering were mostly theory when I was in university. Seeing it go from a "niche class for smart math students" to making headlines all over the world is definitely pretty wild.

You are correct that nothing like this was even possible a couple decades ago. From a pure progress and innovation perspective, this is pretty incredible.

I can be skeptical; one of my favourite quotes is "they were so preoccupied with whether they could, they didn’t stop to think if they should". I just like to protect innovation from pitfalls. Maybe that makes me too skeptical; sorry if it affected my wording.


Oh yeah, the “should”. I agree on that one. One way or another, it’s going to be an interesting ride.



