"On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." - Charles Babbage
He is answering what a computation engine will do, but not what a "wise" person would do, which is _the_ test of AI.
The goal of wisdom (and good teaching) is first to correct the faulty assumptions that lead the questioner to ask a bad question, and then, once they ask a good question, then and _only_ then, to provide the answer.
I don't see how Babbage can be considered wrong here as such, when he's giving a simple statement of fact about himself. We don't know whether he attempted to elucidate the questioner's assumptions.
However, I always wonder whether the questioners suspected that the engine was a fake. The original Mechanical Turk was extant in around the same period, although I'm not sure when its trick was uncovered.
People outside AI research expect good answers no matter what input they give an AI.
Or to put it another way: wrong instructions are still a failure of natural intelligence, not artificial intelligence.