
I have always considered the term AI inaccurate: it doesn't describe an actual form of intelligence, regardless of the problem being solved. It's great that we're now solving problems with computer algorithms that we used to think required actual intelligence. But that doesn't mean we're moving the goalposts on what intelligence is; it means we're admitting we were wrong about what can be achieved without it.

An "artificial intelligence" is no more intelligent than an "artificial flower" is a flower. Making it into a more convincing simulacrum, or expanding the range of roles where it can adequately substitute for the real thing (or even vastly outperform the real thing), is not reifying it. Thankfully, we don't make the same linguistic mistake with "artificial sweeteners"; they do in fact sweeten, but I would have the same complaint if we said "artificial sugars" instead.

The point of the Turing test and all the other contemporary discourse was never to establish a standard to determine whether a computing system could "think" or be "intelligent"; it was to establish that this is the wrong question. Intelligence is tightly coupled to volition and self-awareness (and expressing self-awareness in text does not demonstrate self-awareness; a book titled "I Am A Book" is not self-aware).

No, I cannot rigorously prove that humans (or other life) are intelligent by this standard. It's an axiom that emerges from my own experience of observing my thoughts. I think, therefore I think.



I'm generally in agreement with you (I upvoted your comment), but I think it's important not to distort--even unintentionally--the motivations, speech, or work of historical figures like Alan Turing.

"I propose to consider the question, 'Can machines think?'"

(Paraphrasing from the Wikipedia entry[1][2]) Turing reckons with the problem of formally defining "thought", and so chooses instead to reframe the question, inventing an experiment with more specific, well-defined conditions and thresholds, arguing that this would be more likely to produce a concrete answer. This experiment is, of course, what he called the "Imitation Game", and having laid out its rules and conditions, he posed, "Are there imaginable digital computers which would do well in the imitation game?"

He then spends a large portion of the rest of the paper addressing nine ostensible objections to the notion that machines could "think".

All that said, there's plenty of room to debate whether Turing's experiment is flawed, whether a subjective interpretation of a purely language-driven interaction is a valid model for "thinking" more broadly, and whether these ideas can or should be applied to contemporary technology. I personally think that Turing's experiment should simply be accepted for what it is, rather than being wielded dogmatically[3]--it's an exceptionally insightful contribution to a wider body of human knowledge and philosophy, but insufficient for establishing anything like "sentience" in machines.

1. It is insane that the original paper is behind a paywall.

2. https://en.wikipedia.org/wiki/Turing_test

3. I am not suggesting the parent comment is doing so; I have simply seen this done frequently elsewhere.



