The problem is that the majority of user interaction doesn't need to be "useful" (as in increasing productivity): the majority of users are looking for entertainment, so turning up the sycophancy knob makes sense from a commercial point of view.
I'm not so sure sycophancy is best for entertainment, though. Some of the most memorable outputs of AI Dungeon (an early GPT-2-based dialogue system tuned to mimic a vaguely Zork-like RPG) were when the bot gave the impression of being fed up with the player's antics.
> I'm not so sure sycophancy is best for entertainment, though.
I don't think "entertainment" is the right concept. Perhaps the right concept is "engagement". Would you prefer to interact with a chatbot that hallucinated or was adamant you were wrong, or would you prefer to engage with a chatbot that built upon your input and output constructive messages in line with your reasoning and train of thought?
Some of the open models like Kimi K2 do a better job of pushing back. It does feel a bit annoying to use them when they don't just immediately do what you tell them. Sugar-free is a good analogy!
Or it causes some tragedies...