Is it? AI is impressive and all, but i don't think any of them have pased the Turing test, as defined by Turing (pop culture conceptions of the Turing test are usually much weaker than what the paper actually proposes), although i'd be happy to be proven wrong.
> pop culture conceptions of the Turing test are usually much weaker than what the paper actually proposes
I've just read the 1950 paper "Computing Machinery and Intelligence" [1], in which Turing proposes his "Imitation Game" (what's now known as a "Turing Test"), and I think your claim is very misleading.
The "Imitation Game" proposed in the paper is a test that involves one human examiner and two examinees, one being a human and the other a computer, both of which are trying to persuade the examiner that they are the real human; the examiner is charged with deciding which is which. The popular understanding of "Turing Test" involves a human examiner and just one examinee, which is either a human or a computer, and the test is to see whether the examiner can tell.
These are not identical tests -- but if both the real human examinee and the human examiner in Turing's original test are rational (trying to maximise their success rate), and each have the same expectations for how real humans behave, then the examiner would give the same answer for both forms of the test.
Aside: The bulk of this 28-page paper anticipates possible objections to his "Imitation Game" as a worthwhile alternative to the original question "Can machines think?", including a theological argument and an argument based on the existence of extra-sensory perception (ESP), which he takes seriously as it was apparently strongly supported by experimental data at that time. It also cites Helen Keller as an example of how learning can be achieved through any mechanism that permits bidirectional communication between teacher and student, and on p. 457 anticipates reinforcement learning:
> We normally associate punishments and rewards with the teaching process. Some simple child-machines can be constructed or programmed on this sort of principle. The machine has to be so constructed that events which shortly preceded the occurrence of a punishment-signal are unlikely to be repeated, whereas a reward-signal increased the probability of repetition of the events which led up to it.
> These are not identical tests -- but if both the real human examinee and the human examiner in Turing's original test are rational (trying to maximise their success rate), and each have the same expectations for how real humans behave, then the examiner would give the same answer for both forms of the test.
I disagree. Having a control and not having a control is a huge difference when conducting an experiment.
I have a rather specialized interest in and obscure subject but one which has a physical aspect pretty much any person can relate to/reason about, and pretty much every time I try to "discuss" the specifics of it w/ an LLM, it tells me things which are blatantly false, or otherwise attempts to carry on a conversation in a way which no sane human being would.
The LLM is not designed to pass the turing test. An application that suitably prompts the LLM can. It's like asking why can't I drive the nail with the handle of the hammer. That's not what it's for.
Is it? AI is impressive and all, but i don't think any of them have pased the Turing test, as defined by Turing (pop culture conceptions of the Turing test are usually much weaker than what the paper actually proposes), although i'd be happy to be proven wrong.