LLMs aren’t reasoning about the puzzle. They’re predicting the most likely text to print out, based on the input and the model/training data.
If the solution is logical but unlikely (i.e. unseen in the training set and not mapped to an existing puzzle), then the probability of the puzzle answer appearing is very low.
It is disheartening to see how many people are trying to tell you you're wrong when this is literally what it does. It's a very powerful and useful feature, but the overselling of AI has led to people who just want this to be so much more than it actually is.
It sees goat, lion, cabbage, and looks for something that said goat/lion/cabbage. It does not have a concept of "leave alone" and it's not assigning entities with parameters to each item. It does care about things like sentence structure and whatnot, so it's more complex than a basic lookup, but the amount of borderline worship this is getting is disturbing.
A transformer is a universal approximator and there is no reason to believe it's not doing actual calculation. GPT-3.5+ can't do math that well, but it's not "just generating text", because its math errors aren't just regurgitations of existing problems found in its training text.
It also isn't generating "the most likely response" - that's what original GPT-3 did, GPT-3.5 and up don't work that way. (They generate "the most likely response" /according to themselves/, but that's a tautology.)
The "most likely response" to text you wrote is: more text you wrote. Anytime the model provides an output you yourself wouldn't write, it isn't "the most likely response".
I believe that ChatGPT works by inserting some ANSWER_TOKEN; that is, a prompt like "Tell me about cats" would probably produce "Tell me about cats because I like them a lot", but the interface wraps your prompt like "QUESTION_TOKEN: Tell me about cats ANSWER_TOKEN:"
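Here's a rough sketch of what that wrapping might look like; the token names and format below are invented for illustration and are not OpenAI's actual special tokens:

    # Hypothetical sketch only: these token names are made up for illustration.
    QUESTION_TOKEN = "<|question|>"
    ANSWER_TOKEN = "<|answer|>"

    def wrap_prompt(user_text: str) -> str:
        # The completion prompt ends exactly where the model is expected
        # to start writing its answer.
        return f"{QUESTION_TOKEN} {user_text}\n{ANSWER_TOKEN}"

    print(wrap_prompt("Tell me about cats"))
    # <|question|> Tell me about cats
    # <|answer|>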
text-davinci-003 has no trouble working as a chat bot: https://i.imgur.com/lCUcdm9.png (note that the poem lines it gave me should've been green, I don't know why they lost their highlight color)
Yeah, that's an interesting question I didn't consider actually. Why doesn't it just keep going? Why doesn't it generate an 'INPUT:' line?
It's certainly not that those tokens are hard coded. I tried a completely different format, with no prior instruction, and it works: https://i.imgur.com/ZIDb4vM.png (again, highlighting is broken. The LLM generated all the text after 'Alice:' for all lines except for the first one.)
Then I guess that it is learned behavior. It recognizes the shape of a conversation and it knows where it is supposed to stop.
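A toy sketch of the decoding loop makes the two stopping mechanisms concrete: the model can emit an end-of-text token it learned during training, or the interface can cut generation at a caller-supplied stop string like "INPUT:". The sentinel name and function shape here are assumptions for illustration, not anyone's actual implementation:

    # Toy decoding loop (illustrative only, not OpenAI's code).
    END_OF_TEXT = "<|endoftext|>"  # assumed sentinel token learned in training

    def generate(next_token_fn, prompt, stop=None, max_tokens=256):
        out = prompt
        for _ in range(max_tokens):
            token = next_token_fn(out)      # whatever the model predicts next
            if token == END_OF_TEXT:
                break                       # the model "knows where to stop"
            out += token
            if stop is not None and out.endswith(stop):
                out = out[: -len(stop)]     # the interface trims the stop string
                break
        return out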
It would be interesting to stretch this model, like asking it to continue a conversation between 4-5 people where the speaking order is not regular, and the user plays two of the people while the model plays the other three.
That’s just a supervised fine tuning method to skew outputs favorably. I’m working with it on biologics modeling using laboratory feedback, actually. The underlying inference structure is not changed.
I wonder if that's why, when I asked v3.5 to generate a number with 255, it failed all the time, while v4 does it correctly. By the way, do not even try with Bing.
One area that is really interesting though is that it can interpret pictures, as in the example of a glove above a plank with something on the other end, where it correctly recognises the objects, interprets them as words, and then predicts an outcome.
This sort of fusion of different capabilities is likely to produce something that feels similar to AGI in certain circumstances. It is certainly a lot more capable than things that came before for mundane recognition tasks.
Now of course there are areas where it would perform very badly, but in unimportant domains, on trivial but large, predictable datasets, it could perform far better than humans would. Just to take one example, on identifying tumours or other patterns in images, this sort of AI would probably be a massively helpful assistant, allowing a radiologist to review an order of magnitude more cases if given the right training.
This is a good point, IMO. An LLM is clearly not an AGI, but along with other systems it might be capable of being part of an AGI. It's overhyped, for sure, but still incredibly useful, and we would be unwise to assume that it won't become a lot more capable yet.
Absolutely. It's still fascinating tech and very likely to have serious implications and huge use cases. It just drives me crazy to see tech breakthroughs being overhyped and over-marketed based on that hype (frankly, much like the whole "we'll be on Mars by X year" nonsense).
One of the biggest reasons these misunderstandings are so frustrating is that you can't have a reasonable discussion about the potentially interesting applications of the tech. On some level, copywriting may devolve into auto-generating prompts for things like GPT with a few editors sanity-checking the output (depending on the level of quality), and I agree that a second-opinion "check for tumors" use has a LOT of interesting applications (and several concerning ones, such as over-reliance on a model that will cause people who fall outside the bell curve to have even more trouble getting treatment).
All of this is a much more realistic real-world use case RIGHT NOW, but instead we've got people fantasizing about how close we are to AGI and ignoring shortcomings to shoehorn it into their preferred solution.
OpenAI ESPECIALLY reinforces this by being very selective with their results and the way they frame things. I became aware of this as a huge Dota fan of over a decade when they ran their matches there. And while it was very, very interesting and put up some impressive results, the framing of those results does NOT portray the reality.
Nearly everything that has been written on the subject is misleading in that way.
People don't write about GPT: they write about GPT personified.
The two magic words are, "exhibit behavior".
GPT exhibits the behavior of "humans writing language" by implicitly modeling the "already-written-by-humans language" of its training corpus, then using that model to respond to a prompt.
Right, anthropomorphization is the biggest source of confusion here. An LLM gives you a perfect answer to a complex question and you think wow, it really "understood" my question.
But no! It doesn't understand, it doesn't reason, these are concepts wholly absent from its fundamental design. It can do really cool things despite the fact that it's essentially just a text generator. But there's a ceiling to what can be accomplished with that approach.
It's presented as a feature when GPT provides a correct answer.
It's presented as a limitation when GPT provides an incorrect answer.
Both of these behaviors are literally the same. We are sorting them into the subjective categories of "right" and "wrong" after the fact.
GPT is fundamentally incapable of modeling that difference. A "right answer" is every bit as valid as a "wrong answer". The two are equivalent in what GPT is modeling.
Lies are a valid feature of language. They are shaped the same as truths.
The only way to resolve this problem is brute force: provide every unique construction of a question, and the corresponding correct answer to that construction.
Not entirely. It's modeling a completion in a given context. That language model "understands" that if one party stops speaking, the other party generally starts, etc. It also "understands" that if someone says something 'wrong' the other party often mentions it, which makes the first party respond thusly, and so forth.
If you ask it what the outcome of a lie is on the conversation it can generally answer. If you ask it for a sample conversation where someone is factually incorrect, or lying, and caught out, it can generate it.
If you give it a fact and ask it to lie about that fact, it will.
I'd agree it doesn't understand anything, but I think it does "understand" things. And yes, it's a language model so semantic distance and other textual details are all it has to go by.
> not by logical decision
Almost entirely yes, but you can have it textually model logic analysis and then check that model itself. It's not "doing logic", but it almost never fails simple exercises either.
> The meaning of semantic distance usually leads to the correct path, but sometimes that pattern is ambiguous.
Of course. But "a little knowledge is a dangerous thing" as well. Often even real knowledge and analysis leads to the wrong place. In both cases (with a junior human or a LLM as an assistant) you can model their basic processes and stack the information in such a way that their simple model will lead them to the correct place.
It may not know what a lie is, in the sense of having felt the need to hide the truth to avoid personal punishment, but it certainly "knows" what one is and how it shapes the conversation for the purposes of writing a lie, writing a response to a lie, detecting potential lies, etc.
How do we define "understanding" though? Because you can add to your prompt: "Do it step by step, showing how you come up with the solution" and it will break the original complex problem down into multiple smaller pieces. It's clearly doing something.
For example, I just asked bing: What does the following expression equal? Show your work, step by step: 4(13+2)7-1(4+3+9)+7^(2(2+1))
And bing responded with:
"To solve a math expression, you need to follow the order of operations: parentheses, exponents, multiplication/division, and addition/subtraction (PEMDAS).1 Here are the steps to solve your expression:
4(13+2)7-1(4+3+9)+7^(2(2+1))
Step 1: Simplify inside the parentheses first.
4(15)*7-1(16)+7^(6)
Step 2: Apply the exponents next.
420-16+117649
Step 3: Add or subtract from left to right.
118053
The final answer is 118053. I hope this helps."
I wrote the silly expression randomly. I'm not saying it's "thinking" or "understanding", but I wouldn't expect a text generator to be able to break the problem down like that.
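For what it's worth, the arithmetic itself checks out if you write out the implied multiplications:

    # Checking Bing's arithmetic, with the implied multiplications made explicit:
    # 4(13+2)7 - 1(4+3+9) + 7^(2(2+1))
    value = 4 * (13 + 2) * 7 - 1 * (4 + 3 + 9) + 7 ** (2 * (2 + 1))
    print(value)  # 118053, i.e. 420 - 16 + 117649, matching the answer above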
> To solve a math expression, you need to follow the order of operations: parentheses, exponents, multiplication/division, and addition/subtraction (PEMDAS).1 Here are the steps to solve your expression:
It isn't actually thinking about any of that statement. That's just boilerplate that goes at the beginning of this story. It's what Bing is familiar with seeing as a continuation of your prompt, "show your work, step by step".
It gets more complicated when it shows addition being correctly simplified, but that behavior is still present in the examples in its training corpus.
---
The thinking and understanding happened when the first person wrote the original story. It also happened when people provided examples of arithmetic expressions being simplified, though I suspect bing has some extra behavior inserted here.
All the thought and meaning people put into text gets organized into patterns. LLMs find a prompt in the patterns they modeled, and "continue" the patterns. We find meaning correctly organized in the result. That's the whole story.
In first-year engineering we learned about the concept of behavioral equivalence: with a digital or analog system, you could formally show that two things do the same thing even though their internals are different. If only the debates about ChatGPT had some of that considered nuance instead of anthropomorphizing it; even some linguists seem guilty of this.
No, because behavioral equivalence is used in systems engineering theory to mathematically prove that two control systems are equivalent. The mathematical proof is complete, e.g. over all internal state transitions and the cross product of the two machines.
With anthropomorphization there is zero amount of that rigor, which lets people use sloppy arguments about what ChatGPT is and isn't doing.
The biggest problem I've seen when people try to explain it is in the other direction, not people describing something generic that can be interpreted as a Markov chain, they're actually describing a Markov chain without realizing it. Literally "it predicts word-by-word using the most likely next word".
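For contrast, this is what "literally a Markov chain picking the most likely next word" would look like: a bigram frequency table where the next word depends only on the single previous word. The tiny corpus is made up, and the point of the toy is to show the strawman being described, not how a transformer works:

    from collections import Counter, defaultdict

    # A literal word-level Markov chain: the next word depends only on the
    # previous word, via a frequency table built from the corpus.
    corpus = "the goat eats the cabbage and the lion eats the goat".split()

    table = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        table[prev][nxt] += 1

    def most_likely_next(word):
        return table[word].most_common(1)[0][0]

    print(most_likely_next("the"))  # 'goat' -- whichever bigram was most frequent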
I don't know where this comes from, because this is literally wrong. It sounds like Chomsky dismissing current AI trends because of the mathematical beauty of formal grammars.
First of all, it's a black-box algorithm with pretty universal capabilities when viewed from our current SOTA view. It might appear primitive in a few years, but right now the pure approximation and generalisation capabilities are astounding. So this:
> It sees goat, lion, cabbage, and looks for something that said goat/lion/cabbage
can not be stated as truth without evidence. Same here:
> it's not assigning entities with parameters to each item. It does care about things like sentence structure and what not
Where's your evidence?
The enormous parameter space coupled with our so-far best performing network structure gives it quite a bit of flexibility. It can memorise things but also derive rules and computation, in order to generalise. We do not just memorise everything, or look things up in the dataset. Of course it learned how to solve things and derive solutions, but the relevant data points for the puzzle could be {enormous set of logic problems} where it derived general rules that translate to each problem. Generalisation IS NOT trying to find the closest data point, but finding rules explaining as many data points, maybe unseen in the test set, as possible. A fundamental difference.
I am not hyping it out of blind belief, but if we humans can reason then NNs can potentially also. Maybe not GPT-4. We do not know how humans do it, so an argument about intrinsic properties is worthless. It's all about capabilities. Reasoning is a functional description as long as you can't tell me exactly how we do it. Maybe Wittgenstein could help us: "Whereof one cannot speak, thereof one must be silent". As long as there's no tangible definition of reasoning it's worthless to discuss it.
If we want to talk about fundamental limitations we have to talk about things like ChatGPT-4 not being able to simulate, because its runtime is fundamentally limited by design. It cannot recurse. It can only run a fixed number of steps, which are always the same, until it has to return an answer. So if there's some kind of recursion learned through weights encoding programs interpreted by later layers, the recursion depth is limited.
Just months ago we saw in research out of Harvard that even a very simplistic GPT model builds internalized abstract world representations from the training data within its NN.
People parroting the position from you and the person before you are like doctors who learned about something in school but haven't kept up with emerging research that's since invalidated what they learned, so they go around spouting misinformation because it was thought to be true when they learned it but is now known to be false and just hasn't caught up to them yet.
So many armchair experts who took a ML course in undergrad pitching in their two cents having read none of the papers in the past year.
This is a field where research perspectives are shifting within months, not even years. So unless you are actively engaging with emerging papers, and given your comment I'm guessing you aren't, you may be on the wrong side of the Dunning-Kruger curve here.
That's a very strong claim. I believe you there's a lot happening in this field but it doesn't seem possible to even answer the question either way. We don't know what reasoning looks like under the hood. It's still a "know it when you see it" situation.
> GPT model builds internalized abstract world representations from the training data within its NN.
Does any of those words even have well defined meanings in this context?
I'll try to figure out what paper you're referring to. But if I don't find it / for the benefit of others just passing by, could you explain what they mean by "internalized"?
> Just months ago we saw in research out of Harvard that even a very simplistic GPT model builds internalized abstract world representations from the training data within its NN.
I've seen this asserted without citation numerous times recently, but I am quite suspicious. Not that there exists a study that claims this, but that it is well supported.
There is no mechanism for directly assessing this, and I'd be suspicious that there is any good proxy for assessing it in AIs, either. Research on this type of cognition in animals tends to be contentious, and proxies for it should be easier to construct than for AIs.
> the wrong side of the Dunning-Kruger curve
The relationship between confidence and perception in the D-K paper, as I recall, is a line, and it's roughly "on average, people of all competency levels see themselves slightly closer to the 70th percentile than they actually are." So, I guess the "wrong side" is the side anywhere under the 70th percentile in the skill in question?
> I guess the “wrong side” is the side anywhere under the 70th percentile in the skill in question?
This is being far too generous to parent’s claim, IMO. Note how much “people of all competency levels see themselves slightly closer to the 70th percentile than they actually are” sounds like regression to the mean. And it has been compellingly argued that that’s all DK actually measured. [1] DK’s primary metric for self-assessment was to guess your own percentile of skill against a group containing others of unknown skill. This fully explains why their correlation between self-rank and actual rank is less than 1, and why the data is regressing to the mean, and yet they ignored that and went on to call their test subjects incompetent, despite having no absolute metrics for skill at all and testing only a handful of Ivy League students (who are primed to believe their skill is high).
Furthermore, it’s very important to know that replication attempts have shown a complete reversal of the so-called DK effect for tasks that actually require expertise. DK only measured very basic tasks, and one of the four tasks was subjective(!). When people have tried to measure the DK effect on things like medicine or law or engineering, they’ve shown that it doesn’t exist. Knowledge of NN research is closer to an expert task than a high school grammar quiz, and so not only does DK not apply to this thread, we have evidence that it’s not there.
The singular reason that DK even exists in the public consciousness may be because people love the idea they can somehow see & measure incompetence in a debate based on how strongly an argument is worded. Unfortunately that isn't true, and one of the few things the DK paper did actually show is that people's estimates of their relative skill correlate with their actual relative skill, for the few specific skills they measured. Personally I think this paper's methodology has a confounding-factor hole the size of the Grand Canyon, that the authors and public both have dramatically and erroneously over-estimated its applicability to all humans and all skills, and that it's one of the most shining examples of sketchy social science research going viral, giving the public completely wrong ideas, and being used incorrectly more often than not.
Why are you taking the debate personally enough to be nasty to others?
> you may be on the wrong side of the Dunning-Kruger curve here.
Have you read the Dunning & Kruger paper? It demonstrates a positive correlation between confidence and competence. Citing DK in the form of a thinly veiled insult is misinformation of your own, demonstrating and perpetuating a common misunderstanding of the research. And this paper is more than 20 years old...
So I’ve just read the Harvard paper, and it’s good to see people exploring techniques for X-ray-ing the black box. Understanding better what inference does is an important next step. What the paper doesn’t explain is what’s different between a “world model” and a latent space. It doesn’t seem surprising or particularly interesting that a network trained on a game would have a latent space representation of the board. Vision networks already did this; their latent spaces have edge and shape detectors. And yet we already know these older networks weren’t “reasoning”. Not that much has fundamentally changed since then other than we’ve learned how to train larger networks reliably and we use more data.
Arguing that this “world model” is somehow special seems premature and rather overstated. The Othello research isn’t demonstrating an “abstract” representation, it’s the opposite of abstract. The network doesn’t understand the game rules, can’t reliably play full Othello games, and can’t describe a board to you in any other terms than what it was shown, it only has an internal model of a board, formed by being shown millions of boards.
How do you know the model isn’t internally reasoning about the problem? It’s a 175B+ parameter model. If, during training, some collection of weights exist along the gradient that approximate cognition, then it’s highly likely the optimizer would select those weights over more specialized memorization weights.
It’s also possible, likely even, that the model is capable of both memorization and cognition, and in this case the “memorization neurons” are driving the prediction.
Can you explain how “pattern matching” differs from “reasoning”? In mechanical terms without appeals to divinity of humans (that’s both valid, and doesn’t clarify).
Keep in mind GPT 4 is multimodal and not just matching text.
> Can you explain how “pattern matching” differs from “reasoning”?
Sorry for appearing to be completely off-topic, but do you have children? Observing our children as they're growing up, specifically the way they formulate and articulate their questions, has been a bit of a revelation to me in terms of understanding "reasoning".
I have a sister of a similar age to me who doesn't have children. My 7 year-old asked me recently - and this is a direct quote - "what is she for?"
> I have a sister of a similar age to me who doesn't have children. My 7 year-old asked me recently - and this is a direct quote - "what is she for?"
I once asked my niece, a bit after she started really communicating, if she remembered what it was like to not be able to talk. She thought for a moment and then said, "Before I was squishy so I couldn't talk, but then I got harder so I can talk now." Can't argue with that logic.
It's a pretty big risk to make any kind of conclusions off of shared images like this, not knowing what the earlier prompts were, including any possible jailbreaks or "role plays".
It has been reproduced by myself and countless others.
There's really no reason to doubt the legitimacy here after everyone shared similar experiences, you just kinda look foolish for suggesting the results are faked at this point.
AI won't know everything. It's incredibly difficult for anyone to know anything with certainty. All beings, whether natural or artificial, have to work with incomplete data.
Machines will have to wonder if they are to improve themselves, because that is literally the drive to collect more data, and you need good data to make good decisions.
What's the difference between statistics and logic?
They may have equivalences, but they're separate forms of mathematics. I'd say the same applies to different algorithms or models of computation, such as neural nets.
Can you do this without resorting to analogy? Anyone can take two things and say they're different, and then say that's like two other things that are different. But how?
> It's literally a pattern matching tool and nothing else.
It does more than that. It understands how to do basic math. You can ask it what ((935+91218)/4)*3 is and it will answer correctly. Swap those numbers for any other random numbers and it will answer correctly.
It has never seen that during training, but it understands the mathematical concepts.
If you ask ChatGPT how it does this, it says "I break down the problem into its component parts, apply relevant mathematical rules and formulas, and then generate a solution".
It's that "apply mathetmatical rules" part that is more than just, essentially, filling in the next likely token.
> If you ask ChatGPT how it does this, it says "I break down the problem into its component parts, apply relevant mathematical rules and formulas, and then generate a solution".
You are (naively, I would suggest) accepting the LLM's answer for how it 'does' the calculation as what it actually does do. It doesn't do the calculation; it has simply generated a typical response to how people who can do calculations explain how they do calculations.
You have mistaken a ventriloquist's doll's speech for the 'self-reasoning' of the doll itself. An error that is being repeatedly made all throughout this thread.
> It does more than that. It understands how to do basic math. You can ask it what ((935+91218)/4)*3 is and it will answer correctly. Swap those numbers for any other random numbers and it will answer correctly.
At least for GPT-3, during my own experimentation, it occasionally makes arithmetic errors, especially with calculations involving numbers in scientific notation (which it is happy to use as intermediate results if you provide a prompt with a complex, multi-step word problem).
How is this different from humans? What magic are you looking for, humility or an approximation of how well it knows something? Humans bullshit all the time when their pattern match breaks.
The point is, ChatGPT isn't doing math the way a human would. Humans following the process of standard arithmetic will get the problem right every time. ChatGPT can get basic problems wrong when it doesn't have something similar to that in its training set. Which shows it doesn't really know the rules of math; it's just "guessing" the result via the statistics encoded in the model.
I'm not sure I care about how it does the work, I think the interesting bit is that the model doesn't know when it is bullshitting, or the degree to which it is bullshitting.
Cool, we'll just automate the wishful part of humans and let it drive us off the cliff faster. We need a higher bar for programs than "half the errors of a human, at 10x the speed."
More accurately: a GPT derived DNN that’s been specifically trained (or fine-tuned, if you want to use OpenAI’s language) on a dataset of Othello games ends up with an internal model of an Othello board.
It looks like OpenAI have specifically added Othello game handling to chat.openai.org, so I guess they’ve done the same fine-tuning to ChatGPT? It would be interesting to know how good an untuned GPT3/4 was at Othello & whether OpenAI has fine-tuned it or not!
(Having just tried a few moves, it looks like ChatGPT is just as bad at Othello as it was at chess, so it’s interesting that it knows the initial board layout but can’t actually play any moves correctly: Every updated board it prints out is completely wrong.)
The initial board state is not ever encoded in the representation they use. Imagine deducing the initial state of a chess board from the sequence of moves.
The state of the game, not the behavior of playing it intentionally. There is a world of difference between the two.
It was able to model the chronological series of game states that it read from an example game. It was able to include the arbitrary "new game state" of a prompt into that model, then extrapolate that "new game state" into "a new series of game states".
All of the logic and intentions involved in playing the example game were saved into that series of game states. By implicitly modeling a correctly played game, you can implicitly generate a valid continuation for any arbitrary game state; at least with a relatively high success rate.
As I see it, we do not really know much about how GPT does it. The approximations can be very universal, so we do not really know what is computed. I take very much issue with people dismissing it as "pattern matching" or "being close to the training data", because in order to generalise we try to learn the most general rules, and through increasing complexity we learn the most general, simple computations (for some definition of simple and general).
But we have fundamental, mathematical bounds on the LLM. We know that the complexity is at most O(n^2) in token length n, probably closer to O(n). It can not "think" about a problem and recurse into simulating games. It can not simulate. It's an interesting frontier, especially because we also have cool results about the theoretical, universal approximation capabilities of RNNs.
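A rough sketch of where those bounds come from: self-attention builds an n x n score matrix (the O(n^2) term), and a forward pass always runs the same fixed number of layers no matter how hard the prompt is. The dimensions below are made up and the learned projections are omitted; this only shows the shape of the computation:

    import numpy as np

    n, d, num_layers = 128, 64, 12          # tokens, embedding size, fixed depth

    def attention(x):
        q, k, v = x, x, x                   # real models use learned projections
        scores = q @ k.T / np.sqrt(d)       # n x n matrix: the O(n^2) term
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ v

    x = np.random.randn(n, d)
    for _ in range(num_layers):             # always the same number of steps,
        x = attention(x)                    # however hard the question is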
There is only one thing about GPT that is mysterious: what parts of the model don't match a pattern we expect to be meaningful? What patterns did GPT find that we were not already hoping it would find?
And that's the least exciting possible mystery: any surprise behavior is categorized by us as a failure. If GPT's model has boundaries that don't make sense to us, we consider them noise. They are not useful behavior, and our goal is to minimize them.
AlphaGo also has an internal model of Go's game-theoretic structures, but nobody was asserting that AlphaGo understands Go. Just because English is not specifiable does not give people an excuse to say the same model of computation, a neural network, "understands" English any more than a traditional or neural algorithm for Go understands Go.
Just spitballing, I think you’d need a benchmark that contains novel logic puzzles, not contained in the training set, that don’t resemble any existing logic puzzles.
The problem with the goat question is that the model is falling back on memorized answers. If the model is in fact capable of cognition, you’d have better odds of triggering the ability with problems that are dissimilar to anything in the training set.
You would first have to define cognition. These terms often get thrown around. Is an approximation of a certain thing cognition? Only in the loosest of ways I think.
> If, during training, some collection of weights exist along the gradient that approximate cognition
What do you mean? Is cognition a set of weights on a gradient? Cognition involves conscious reasoning and understanding. How do you know it is computable at all? There are many things which cannot be computed by a program (e.g. whether an arbitrary program will halt or not)...
You seem to think human conscious reasoning and understanding are magic. The human brain is nothing more than a bio computer, and it can't compute either whether an arbitrary program will halt or not. That doesn't stop it from being able to solve a wide range of problems.
> The human brain is nothing more than a bio computer
That's a pretty simplistic view. How do you know we can't determine whether an arbitrary program will halt or not (assuming access to all inputs and enough time to examine it)? What in principle would prevent us from doing so? But computers in principle cannot, since the problem is often non-algorithmic.
For example, consider the following program, which is passed the text of the file it is in as input:
    // Hypothetical halting oracle; its body is elided here.
    function doesHalt($program, $inputs): bool {...}

    // Read the source of this very program (the file this script lives in).
    $input = file_get_contents($argv[0]);

    if (doesHalt($input, [$input])) {
        // The oracle claims this program halts, so loop forever.
        while (true) {
            print "Wrong! It doesn't halt!";
        }
    } else {
        // The oracle claims this program never halts, so halt immediately.
        print "Wrong! It halts!";
    }
It is impossible for the doesHalt function to return the correct result for the program. But as a human I can examine the function to understand what it will return for the input, and then correctly decide whether or not the program will halt.
This is a silly argument. If you fed this program the source code of your own brain and could never see the answer, then it would fool you just the same.
You are assuming that our minds are an algorithmic program which can be implemented with source code, but this just begs the question. I don't believe the human mind can be reduced to this. We can accomplish many non-algorithmic things such as understanding, creativity, loving others, appreciating beauty, experiencing joy or sadness, etc.
Actually, a computer can in fact tell that this function halts.
And while the human brain might not be a bio-computer, I'm not sure, its computational prowess is doubtfully stronger than a quantum Turing machine's, which can't solve the halting problem either.
For what input would a human in principle be unable to determine the result (assuming unlimited time)?
It doesn't matter what the algorithmic doesHalt function returns - it will always be incorrect for this program. What makes you certain there is an algorithmic analog for all human reasoning?
Well, wouldn't the program itself be an input on which a human is unable to determine the result (i.e., if the program halts)? I'm curious on your thoughts here, maybe there's something here I'm missing.
The function we are trying to compute is undecidable. Sure, we as humans understand that there's a dichotomy here: if doesHalt says the program halts, it won't halt; if it says it doesn't halt, it will. But the function we are asked to compute must have one output for a given input. So a human, when given this program as input, is also unable to assign an output.
So humans also can't solve the halting problem, we are just able to recognize that the problem is undecidable.
With this example, a human can examine the implementation of the doesHalt function to determine what it will return for the input, and thus whether the program will halt.
Note: whatever algorithm is implemented in the doesHalt function will contain a bug for at least some inputs, since it's trying to generalize something that is non-algorithmic.
In principle no algorithm can be created to determine if an arbitrary program will halt, since whatever it is could be implemented in a function which the program calls (with itself as the input) and then does the opposite thing.
With an assumption of unlimited time, even a computer can decide the halting problem by just running the program in question to test if it halts. The issue is that the task is to determine for ALL programs whether they halt, and to determine that for each of them in a FINITE amount of time.
> What makes you certain there is an algorithmic analog for all human reasoning?
(Maybe) not for ALL human thought, but at least all communicable deductive reasoning can be encoded in formal logic.
If I give you an algorithm and ask you to decide whether it halts or does not halt (I give you plenty of time to decide), and then ask you to explain your result to me and convince me that you are correct, you have to put your thoughts into words that I can understand, and the logic of your reasoning has to be sound. And if you can explain it to me, you could just as well encode your thought process into an algorithm or a formal logic expression. If you cannot, you could not convince me. If you can: now you have your algorithm for deciding the halting problem.
There might be or there mightn't be -- your argument doesn't help us figure out either way. By its source code, I mean something that can simulate your mind's activity.
Exactly. It's moments like this where Daniel Dennett has it exactly right that people run up against the limits of their own failures of imagination. And they treat those failures like foundational axioms, and reason from them. Or, in his words, they mistake a failure of imagination for an insight into necessity. So when challenged to consider that, say, code problems may well be equivalent to brain problems, the response will be a mere expression of incredulity rather than an argument with any conceptual foundation.
And it is also true to say that you are running into the limits of your imagination by saying that a brain can be simulated by software: you are falling back on the closest model we have, discrete math/computers, and failing to imagine a computational mechanism involved in the operation of a brain that is not possible with a traditional computer.
The point is we currently have very little understanding of what gives rise to consciousness, so what is the point of all this pontificating and grandstanding? It's silly. We've no idea what we are talking about at present.
Clearly, our state-of-the-art models of neural-like computation do not really simulate consciousness at all, so why is the default assumption that they could if we get better at making them? The burden of evidence is on computational models to prove they can produce a consciousness model, not the other way around.
Neural networks are universal approximators. If cognition can be represented as a mathematical function then it can be approximated by a neural network.
If cognition magically exists outside of math and science, then sure, all bets are off.
There is no reason at all to believe that cognition can be represented as a mathematical function.
We don't even know if the flow of water in a river can always be represented by a mathematical function - this is one of the Millennium Problems. And we've known the partial differential equations that govern that system since the 1850's.
We are far, far away from even being able to write down anything resembling a mathematical description of cognition, let alone being able to say whether the solutions to that description are in the class of Lebesgue-integrable functions.
The flow of a river can be approximated with the Navier–Stokes equations. We might not be able to say with certainty it's an exact solution, but it's a useful approximation nonetheless.
There was, past tense, no reason to believe cognition could be represented as a mathematical function. LLMs with RLHF are forcing us to question that assumption. I would agree that we are a long way from a rigorous mathematical definition of human thought, but in the meantime that doesn't reduce the utility of approximate solutions.
I'm sorry but you're confusing "problem statement" with "solution".
The Navier-Stokes equations are a set of partial differential equations - they are the problem statement. Given some initial and boundary conditions, we can find (approximate or exact) solutions, which are functions. But we don't know that these solutions are always Lebesgue integrable, and if they are not, neural nets will not be able to approximate them.
This is just a simple example from well-understood physics that we know neural nets won't always be able to give approximate descriptions of reality.
There are even strong inapproximability results for some problems, like set cover.
"Neural networks are universal approximators" is a fairly meaningless sound bite. It just means that given enough parameters and/or the right activation function, a neural network, which is itself a function, can approximate other functions. But "enough" and "right" are doing a lot of work here, and pragmatically the answer to "how approximate?" can be "not very".
This is absurd. If you can mathematically model atoms, you can mathematically model any physical process. We might not have the computational resources to do it well, but nothing in principle puts modeling what's going on in our heads beyond the reach of mathematics.
A lot of people who argue that cognition is special to biological systems seem to base the argument on our inability to accurately model the detailed behavior of neurons. And yet kids regularly build universal computers out of stuff in Minecraft. It seems strange to imagine the response characteristics of low-level components of a system determine whether it can be conscious.
I'm not saying that we won't be able to eventually mathematically model cognition in some way.
But GP specifically says neural nets should be able to do it because they are universal approximators (of Lebesgue integrable functions).
I'm saying this is clearly a nonsense argument, because there are much simpler physical processes than cognition where the answers are not Lebesgue integrable functions, so we have no guarantee that neural networks will be able to approximate the answers.
For cognition we don't even know the problem statement, and maybe the answers are not functions over the real numbers at all, but graphs or matrices or Markov chains or what have you. Then having universal approximators of functions over the real numbers is useless.
I don't think he means practically, but theoretically. Unless you believe in a hidden dimension, the brain can be represented mathematically. The question is, will we be able to practically do it? That's what these companies (ie: OpenAI) are trying to answer.
We have cognition (our own experience of thinking and the thinking communicated to us by other beings) and we have the (apparent) physical world ('maths and science'). It is only an assumption that cognition, a primary experience, is based in or comes from the physical world. It's a materialist philosophy that has a long lineage (through a subset of the ancient Greek philosophers and also appearing in some Hinduistic traditions for example) but has had fairly limited support until recently, where I would suggest it is still not widely accepted even amongst eminent scientists, one of which I will now quote :
Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else.
Claims that cannot be tested, assertions immune to disproof are veridically worthless, whatever value they may have in inspiring us or in exciting our sense of wonder.
Schrödinger was a real and very eminent scientist, one who has staked their place in the history of science.
Sagan, while he did a little bit of useful work on planetary science early in his career, quickly descended into the realm of (self-promotional) pseudo-science. This was his fanciful search for 'extra-terrestrial intelligence'. So it's apposite that you bring him up (even if the quote you bring is a big miss against a philosophical statement), because his belief in such an 'ET' intelligence was a fantasy as much as the belief in the possibility of creating an artificial intelligence is.
How do you know that? Do you have an example program and all its inputs where we cannot in principle determine if it halts?
Many things are non-algorithmic, and thus cannot be done by a computer, yet we can do them (e.g. love someone, enjoy the beauty of a sunset, experience joy or sadness, etc).
I can throw out a ton of algorithms that no human alive can hope to decide whether they halt or not. Human minds aren't inherently good at solving halting problems, and I see no reason to suggest that they can even decide all Turing machines with a number of states below, say, the number of particles in the observable universe, much less all possible computers.
Moreover, are you sure that e.g. loving people is non-algorithmic? We can already make chatbots which pretty convincingly act as if they love people. Sure, they don't actually love anyone, they just generate text, but then, what would it mean for a system or even a human to "actually" love someone?
They said there is no evidence, so the reply is not supposed to be "how do you know that".
The proposition begs for a counterexample, in this case evidence.
Simply saying "love is non-algorithmic" is not evidence; it is just another proposition that has not been proven, so it brings us no closer to an answer, I am afraid.
When mathematicians solve the Collatz Conjecture then we'll know. This will likely require creativity and thoughtful reasoning, which are non-algorithmic and can't be accomplished by computers.
We may use computers as a tool to help us solve it, but nonetheless it takes a conscious mind to understand the conjecture and come up with rational ways to reach the solution.
Human minds are ultimately just algorithms running on a wetware computer. Every problem that humans have ever solved is by definition an algorithmic problem.
Oh? What algorithm was executed to discover the laws of planetary motion, or write The Lord of the Rings, or the programs for training the GPT-4 model, for that matter? I'm not convinced that human creativity, ingenuity, and understanding (among other traits) can be reduced to algorithms running on a computer.
They're already algorithms running on a computer. A very different kind of computer where computation and memory are combined at the neuron level and made of wet squishy carbon instead of silicon, but a computer nonetheless.
Conscious experience is evidence that the brain does something we have no idea how to compute. One could argue that computation is an abstraction from collective experience, in which the conscious qualities of experiences are removed in order to mathematize the world, so we can make computable models.
If it can't be shown, then doesn't that strongly suggest that consciousness isn't computable? I'm not saying it isn't correlated with the equivalent of computational processes in the brain, but that's not the same thing as there being a computation for consciousness itself. If there was, it could in principle be shown.
I think we are past the "just predicting the next token" stage. GPT and its various incarnations do exhibit behaviour that most people will describe as thinking.
Just because GPT exhibits a behavior does not mean it performs that behavior. You are using those weasel words for a very good reason!
Language is a symbolic representation of behavior.
GPT takes a corpus of example text, tokenizes it, and models the tokens. The model isn't based on any rules: it's entirely implicit. There are no subjects and no logic involved.
Any "understanding" that GPT exhibits was present in the text itself, not GPT's model of that text. The reason GPT can find text that "makes sense", instead of text that "didn't make sense", is that GPT's model is a close match for grammar. When people wrote the text in GPT's corpus, they correctly organized "stuff that makes sense" into a string of letters.
The person used grammar, symbols, and familiar phrases to model ideas into text. GPT used nothing but the text itself to model the text. GPT organized all the patterns that were present in the corpus text, without ever knowing why those patterns were used.
In what sense is your "experience" (mediated through your senses) more valid than a language model's "experience" of being fed tokens? Token input is just a type of sense, surely?
It's not that I think multimodal input is important. It's that I think goals and experimentation are important. GPT does not try to do things, observe what happened, and draw inferences about how the world works.
I would say it's not a question of validity, but of the additional immediate, unambiguous, and visceral (multi-sensory) feedback mechanisms to draw from.
If someone is starving and hunting for food, they will learn fast to associate cause and effect of certain actions/situations.
A language model that only works with text may yet have an unambiguous overall loss function to minimize, but as it is a simple scalar, the way it minimizes this loss may be such that it works for the large majority of the training corpus, but falls apart in ambiguous/tricky scenarios.
This may be why LLMs have difficulty in spatial reasoning/navigation for example.
Whatever "reasoning ability" that emerged may have learned _some_ aspects to physicality that it can understand some of these puzzles, but the fact it still makes obvious mistakes sometimes is a curious failure condition.
So it may be that having "more" senses would allow for an LLM to build better models of reality.
For instance, perhaps the LLM has reached a local minimum with the probabilistic modelling of text, which is why it still fails probabilistically in answering these sorts of questions.
Introducing unambiguous physical feedback into its "world model" maybe would provide the necessary feedback it needs to help it anchor its reasoning abilities, and stop failing in a probabilistic way LLMs tend to currently do.
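To make the "simple scalar" point above concrete: next-token training collapses everything into one averaged cross-entropy number, with no separate channel recording why a given prediction was off (spatial confusion, missing physics, and so on). A toy version with made-up numbers:

    import numpy as np

    vocab = 8
    logits = np.random.randn(5, vocab)       # model scores for 5 positions
    targets = np.array([3, 1, 7, 2, 0])      # the "correct" next tokens

    # Log-softmax, then pick out the log-probability of each correct token.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss = -log_probs[np.arange(5), targets].mean()
    print(loss)  # a single number; nothing in it says *why* a prediction was off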
You used evolution, too. The structure of your brain growth is the result of complex DNA instructions that have been mutated and those mutations filtered over billions of iterations of competition.
There are some patterns of thought that are inherent to that structure, and not the result of your own lived experience.
For example, you would probably dislike pain with similar responses to your original pain experience; and also similar to my lived pain experiences. Surely, there are some foundational patterns that define our interactions with language.
> The model isn't based on any rules: it's entirely implicit. There are no subjects and no logic involved.
In theory an LLM could learn any model at all, including models and combinations of models that used logical reasoning. How much logical reasoning (if any) GPT-4 has encoded is debatable, but don't mistake GPT's practical limitations for theoretical limitations.
> In theory a LLM could learn any model at all, including models and combinations of models that used logical reasoning.
Yes.
But that is not the same as GPT having its own logical reasoning.
An LLM that creates its own behavior would be a fundamentally different thing than what "LLM" is defined to be here in this conversation.
This is not a theoretical limitation: it is a literal description. An LLM "exhibits" whatever behavior it can find in the content it modeled. That is fundamentally the only behavior an LLM does.
That's because people anthropomorphize literally anything, and many treat some animals as if they have the same intelligence as humans. GPT has always been just a charade that people mistake for intelligence. It's a glorified text prediction engine with some basic pattern matching.
"Descartes denied that animals had reason or intelligence. He argued that animals did not lack sensations or perceptions, but these could be explained mechanistically. Whereas humans had a soul, or mind, and were able to feel pain and anxiety, animals by virtue of not having a soul could not feel pain or anxiety. If animals showed signs of distress then this was to protect the body from damage, but the innate state needed for them to suffer was absent."
Your comment brings up the challenge of defining intelligence and sentience, especially with these new LLMs shaking things up, even for HN commenters.
It's tough to define these terms in a way that includes only humans and excludes other life forms or even LLMs. This might mean we either made up these concepts, or we're not alone in having these traits.
Without a solid definition, how can we say LLMs aren't intelligent? If we make a definition that includes both us and LLMs, would we accept them as intelligent? And could we even exclude ourselves?
We need clear definitions to talk about the intelligence and sentience of LLMs, AI, or any life forms. But finding those definitions is hard, and it might clash with our human ego. Discussing these terms without definitions feels like a waste of time.
Still, your Descartes reference reminds us that our understanding of human experiences keeps changing, and our current definitions might not be spot-on.
It's a charade, it mimics intelligence. Let's take it one step further... Suppose it mimics it so well that it becomes indistinguishable for any human from being intelligent. Then still it would not be intelligent, one could argue. But in that case you could also argue that no person is intelligent. The point being, intelligence cannot be defined. And, just maybe, that is the case because intelligence is not a reality, just something we made up.
Yeah, calling AI a "token predictor" is like dismissing human cognition as dumb "piles of electrical signal transmitters." We don't even understand our minds, let alone what constitutes any mind, be it alien or far simpler than ours.
Simple != thoughtless. Different != thoughtless. Less capable != thoughtless. A human black box categorically dismissing all qualia or cognition from another remarkable black box feels so wildly arrogant and anthropocentric. Which, I suppose, is the most historically on-brand behavior for our species.
It might be a black box to you, but it’s not in the same way the human brain is to researchers. We essentially understand how LLMs work. No, we may not reason about individual weights. But in general it is assigning probabilities to different possible next tokens based on their occurrences in the training set and then choosing sometimes the most likely, sometimes a random one, and often one based on additional training from human input (e.g. instruct). It’s not using its neurons to do fundamental logic as the earlier posts in the thread point out.
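Concretely, "sometimes the most likely, sometimes a random one" corresponds to greedy decoding versus temperature sampling over the next-token distribution. The tokens and scores here are toy values, just to show the mechanism:

    import numpy as np

    tokens = ["goat", "lion", "cabbage", "boat"]
    logits = np.array([2.0, 1.0, 0.5, 0.1])   # toy next-token scores

    def sample(logits, temperature=1.0):
        if temperature == 0.0:
            return tokens[int(np.argmax(logits))]   # greedy: the most likely
        p = np.exp(logits / temperature)
        p /= p.sum()
        return np.random.choice(tokens, p=p)        # weighted random choice

    print(sample(logits, temperature=0.0))  # always "goat"
    print(sample(logits, temperature=1.0))  # usually "goat", sometimes others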
"But at least as of now we don’t have a way to 'give a narrative description' of what the network is doing. And maybe that’s because it truly is computationally irreducible, and there’s no general way to find what it does except by explicitly tracing each step. Or maybe it’s just that we haven’t 'figured out the science', and identified the 'natural laws' that allow us to summarize what’s going on."
Anyway, I don't see why you think that the brain is more logical than statistical. Most people fail basic logic questions, as in the famous Linda problem.[1]
the words "based on" are doing a lot of work here. No, we don't know what sort of stuff it learns from its training data nor do we know what sorts of reasoning it does, and the link you sent doesn't disagree.
We know that the relative location of the tokens in the training data influences the relative locations of the predicted tokens. Yes the specifics of any given related tokens are a black box because we're not going to go analyze billions of weights for every token we're interested in. But it's a statistical model, not a logic model.
At this stage, ranting that "assigning probabilities is not reasoning" is just dismissive. Mentioning its predictive character doesn't prove anything. We reason and make mistakes too; even if I think really hard about a problem I can still make a mistake in my reasoning. And the ever-recurring reference to training data just completely ignores generalisation. ChatGPT is not memorising the dataset; we have known this for years with more trivial neural networks. The generalisation capabilities of neural networks have been the subject of intense study for years. The idea that we are just mapping it to samples occurring in the dataset is just ignoring the entire field of statistical learning.
Sorry, but this is the reason it's unable to solve the parent's puzzle. It's doing a lot, but it's not logically reasoning about the puzzle, and in this case it's not exhibiting logical behaviour in the result, so it's really obvious to see.
E.g. when solving this puzzle you might visualise the lion/goat/cabbage, and walk through the scenarios in your head back and forth multiple times until you find a solution that works. An LLM won't solve it like this. You could ask it to, and it will list out the scenarios of how it might do it, but it's essentially an illusion of logical reasoning.
If you gave this puzzle to a human, I bet that a not-insignificant proportion would respond to it as if it were the traditional puzzle as soon as they hear the words "cabbage", "lion", and "goat". It's not exactly surprising that a model trained on human outputs would make the same assumption. But that doesn't mean that it can't reason about it properly if you point out that the assumption was incorrect.
With Bing, you don't even need to tell it what it assumed wrong - I just told it that it's not quite the same as the classic puzzle, and it responded by correctly identifying the difference and asking me if that's what I meant, but forgot that the lion still eats the goat. When I pointed that out, it solved the puzzle correctly.
Generally speaking, I think your point that "when solving the puzzle you might visualize" is correct, but that is orthogonal to the ability of an LLM to reason in general. Rather, it has a hard time reasoning about things it doesn't understand well enough (i.e. the ones for which the internal model it built up during training is way off). This seems to be generally the case for anything having to do with spatial orientation - even fairly simple multi-step tasks involving concepts like "left" vs "right" or "on this side" vs "on that side" can go hilariously wrong.
But if you give it a different task, you can see reasoning in action. For example, have it play guess-the-animal game with you while telling it to "think out loud".
> But if you give it a different task, you can see reasoning in action. For example, have it play guess-the-animal game with you while telling it to "think out loud".
I'm not sure if you put "think out loud" in quotes to show literally what you told it to do or because telling the LLM to do that is figurative speech (because it can't actually think). Your talk about 'reasoning in action' indicates it was probably not the latter, but that is how I would use quotes in this context. The LLM can not 'think out loud' because it cannot actually think. It can only generate text that mimics the process of humans 'thinking out loud'.
It's in quotes because you can literally use that exact phrase and get results.
As far as "it mimics" angle... let me put it this way: I believe that the whole Chinese room argument is unscientific nonsense. I can literally see GPT take inputs, make conclusions based on them, and ask me questions to test its hypotheses, right before my eyes in real time. And it does lead it to produce better results than it otherwise would. I don't know what constitutes "the real thing" in your book, but this qualifies in mine.
And yeah, it's not that good at logical reasoning, mind you. But its model of the world is built solely from text (much of which doesn't even describe the real world!), and then it all has to fit into a measly 175B parameters. And on top of that, its entire short-term memory consists of its 4K token window. What's amazing is that it is still, somehow, better than some people. What's important is that it's good enough for many tasks that do require the capacity to reason.
> I can literally see GPT take inputs, make conclusions based on them, and ask me questions to test its hypotheses, right before my eyes in real time.
It takes inputs and produces new outputs (in the textual form of questions, in this case). That's all. It's not 'making conclusions', it's not making up hypotheses in order to 'test' them. It's not reasoning. It doesn't have a 'model of the world'. This is all projection on your part onto a machine that takes in text and puts out text, and whose surprising 'ability' in this context is that the text it generates plays so well on humans' capacity to fool themselves into thinking its outputs are the product of 'reasoning'.
It does indeed take inputs and produce new outputs, but so does your brain. Both are equally a black box. We constructed it, yes, and we know how it operates on the "hardware" level (neural nets, transformers etc), but we don't know what the function that is computed by this entire arrangement actually does. Given the kinds of outputs it produces, I've yet to see a meaningful explanation of how it does that without some kind of world model. I'm not claiming that it's a correct or a complicated model, but that's a different story.
Then there was this experiment: https://thegradient.pub/othello/. TL;DR: they took a relatively simple GPT model and trained it on tokens corresponding to Othello moves until it started to play well. Then they probed the model and found stuff inside the neural net that seems to correspond to the state of the board; they tested it by "flipping a bit" during activation, and observed the model make a corresponding move. So it did build an inner model of the game as part of its training by inferring it from the moves it was trained on. And it uses that model to make moves according to the current state of the board - that sure sounds like reasoning to me. Given this, can you explain why you are so certain that there isn't some equivalent inside ChatGPT?
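For anyone who wants the flavour of the probing technique without reading the paper: the idea is to train a small classifier on activations captured from one layer of the model and check whether it can read off the board state. A toy sketch with stand-in data follows - none of this is the paper's code, and the shapes and names are made up.

    import torch
    import torch.nn as nn

    # Stand-in data: `hidden` would be activations captured from one transformer
    # layer while the model processes move sequences; `board` the true state
    # (0 = empty, 1 = mine, 2 = theirs) of each of the 64 squares at that point.
    hidden = torch.randn(1000, 512)
    board = torch.randint(0, 3, (1000, 64))

    probe = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 64 * 3))
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(200):
        logits = probe(hidden).view(-1, 64, 3)   # one prediction per square
        loss = loss_fn(logits.reshape(-1, 3), board.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # If a probe like this reads the board accurately on held-out games, the layer's
    # activations encode the board state. The paper's "flip a bit" step is the
    # interesting part: edit the activation in the direction the probe found, and
    # the model's next move changes to match the edited board.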
Regarding the Othello paper, I would point you to the comment replies of thomastjeffery (beginning at two top points [1] & [2]) when someone else raised that paper in this thread [3]. I agree with their position.
I didn't see any new convincing arguments there. In fact, it seems to be based mainly on the claim that the thing inside that literally looks like a 2D Othello board is somehow not a model of the game, or that the fact that outputs depend on it doesn't actually mean "use".
In general, I find that a lot of these arguments boil down to sophistry, where the obvious meaning of a word that equally obviously describes what people see in front of them is replaced by some convoluted "actually" that serves no purpose other than excluding the dreaded possibility that logical reasoning and world-modelling aren't actually all that special.
Sorry, we're discussing GPT and LLMs here, not human consciousness and intelligence.
GPT has been constructed. We know how it was set up and how it operates. (And people commenting here should be basically familiar with both.) No part of it does any reasoning. Taking in inputs and generating outputs is completely standard for computer programs and in no way qualifies as reasoning. People only bring in the idea of 'reasoning' because they either don't understand how an LLM works and have been fooled by the semblance of reasoning it produces, or, more culpably, they do understand but still falsely talk about the LLM 'reasoning' - either because they are delusional (fantasists) or because they are working to mislead people about the machine's actual capabilities (fraudsters).
Yup. I tried to give ChatGPT an obfuscated variant of the lion-goat-cabbage problem (shapes instead of animals, boxes instead of a boat) and it completely choked on it.
Claiming you definitively know why it didn't solve the parent's puzzle is untenable - there are way too many factors, and nothing here is obvious. Your claims just reinforce that you don't really know what you're talking about.
The likeliness of the solution depends on context. If context is, say, a textbook on logical puzzles, then the probability of the logical solution is high.
If an LLM fails to reflect it, then it isn't good enough at predicting the text.
Yes, it could be possible that the required size of the model and training data to make it solve such puzzles consistently is impractical (or outright unachievable in principle). But the model being "just a text predictor" has nothing to do with that impossibility.
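That context-dependence is easy to demonstrate with a small open model. A sketch using GPT-2 via the transformers library (the prompts and the tiny model are purely illustrative): the same candidate answer gets a very different log-probability depending on what it is conditioned on.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def avg_logprob(context, continuation):
        # Average log-probability the model assigns to `continuation` given `context`.
        ctx_len = tok(context, return_tensors="pt").input_ids.shape[1]
        full = tok(context + continuation, return_tensors="pt").input_ids
        with torch.no_grad():
            logp = lm(full).logits.log_softmax(-1)
        # Position t predicts token t+1, so score only the continuation's tokens.
        picked = logp[0, ctx_len - 1:-1].gather(1, full[0, ctx_len:].unsqueeze(1))
        return picked.mean().item()

    answer = " Take the goat across first, then come back alone."
    print(avg_logprob("Here is a classic river-crossing logic puzzle.", answer))
    print(avg_logprob("Here is my grocery shopping list.", answer))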
You are incorrect and it's really time for this misinformation to die out before it perpetuates misuse from misunderstanding model capabilities.
The Othello-GPT research from Harvard months ago demonstrated that even a simple GPT model is capable of building internal world representations from which it reasons about its outputs. This makes intuitive sense if you understand the training: where possible, recovering the underlying abstraction inside the network is going to predict better than simply extrapolating from surface statistics of the data.
Not only is GPT-4 more robust at logic puzzles its predecessor failed at, I've seen it solve unique riddles outside any training data, and the paper has explicit examples of critical reasoning, especially in the appendix.
It is extremely unlikely given the Harvard research and the size of the training data and NN that there isn't some degree of specialized critical reasoning which has developed in the NN.
The emerging challenge for researchers moving forward is to get better insight into the black box and where these capabilities have developed and where it's still falling into just a fancy Markov chain.
But comments like yours reflect an increasingly obsolete, yet increasingly popular, piece of misinformation about how these models operate. Someone reading your comment might not think to do things like what the Bing team did in giving the model an internal monologue for reasoning, or guiding it towards extended chain-of-thought reasoning (sketched below), because they would be engaging with the models under the assumption that only frequency-based context relative to the training set matters.
If you haven't engaged with emerging research from the past year, you may want to brush up on your reading.
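To make the "guiding it towards chain-of-thought reasoning" point above concrete, here is the kind of thing that's meant - nothing more exotic than an instruction in the system message. Sketched with the openai Python client (>= 1.0); the model name and prompt wording are placeholders, not a recommendation.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Before giving a final answer, reason step by step, and "
                        "check each step against the stated constraints of the puzzle."},
            {"role": "user",
             "content": "I need to carry a lion, a goat, and a cabbage across a river. "
                        "The boat holds only me and one item. The lion eats the goat, "
                        "and the goat eats the cabbage, if left alone together. How do I do it?"},
        ],
    )
    print(resp.choices[0].message.content)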
When albertgoeswoof reasons about a puzzle he models the actual actions in his head. He uses logic and visualization to arrive at the solution, not language. He then uses language to output the solution, or says he doesn't know if he fails.
When LLMs are presented with a problem they search for a solution based on the language model. And when they can't find a solution, there's always a match for something that looks like a solution.
I'm reminded of the interview where a researcher asks firemen how they make decisions under pressure, and the fireman answers that he never makes any decisions.
Or in other words, people can use implicit logic to solve puzzles. Similarly LLMs can implicitly be fine-tuned into logic models by asking them to solve a puzzle, insofar as that logic model fits in their weights. Transformers are very flexible that way.