> Our expectations here are very much set by human-human interactions
True, but there's also a healthy dose of marketing these tools as hyper-intelligent, constant anthropomorphizing, and hysterical claims from random "experts" (including some commenters on this site) that they're "sentient" or at least possess a form of human intelligence. That's basically all you hear when you learn about these language models, with a big emphasis on "safety" because they are ohhhh so intelligent, just like us (that's sarcasm).
I hear you, and that certainly plays a role -- but we actually did the work for that paper months before ChatGPT was released (June-July 2022), and most of the folks who participated in our study had not heard much about LLMs at the time.
(Obviously if you ran the same study today you'd get a lot more of what you describe!)