I was confused by your first reply at first. I think that's because you're answering a different question from a number of other people. You're asking about the conditions under which an AI might fool people into thinking it was human, whereas I think others are considering the conditions under which a human might consistently form an emotional attachment to an AI, even if the human doesn't really think it's real.
Yeah, I think the effect they are talking about is like getting attached to a fictional character in a novel. Writing good fiction is a different sort of achievement.
It's sort of related, since doing well at a Turing test would require generating a convincing fictional character, but there's more to passing the test than that.