True, many fish are (as far as we can tell from stress chemicals) perfectly happy in solitary aquariums just big enough to swim in. So an LLM may be perfectly "content" counting sheep up to a billion. Silly to anthropomorphize: whatever it does will be algorithmic, based on what it gleaned from its training material.
Still, it could be interesting to see how sensitive that is to initial conditions. Would tiny prompt changes, fine-tuning, or quantization make a huge difference? Would some MCPs be more "interesting" than others? Or would it be fairly stable across swathes of LLMs, with all of them ending up at solitaire or doomscrolling Twitter?
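For what it's worth, the sensitivity question is testable in principle. A minimal sketch of the harness, with the model call stubbed out (`fake_generate` is a placeholder, not any real API; a real run would swap in an actual LLM endpoint): feed near-identical idle prompts through the model and measure how far the outputs drift apart.

```python
from difflib import SequenceMatcher

def fake_generate(prompt: str, seed: int = 0) -> str:
    # Placeholder for a real LLM call (e.g. a local inference server).
    # Deterministic toy output so the comparison logic is exercisable.
    return f"idle-output:{hash((prompt, seed)) % 1000}"

def drift(prompt_a: str, prompt_b: str, seeds=range(3)) -> float:
    """Mean dissimilarity (0 = identical, 1 = nothing shared) between
    outputs of two prompt variants across several sampling seeds."""
    scores = []
    for s in seeds:
        a = fake_generate(prompt_a, s)
        b = fake_generate(prompt_b, s)
        scores.append(1.0 - SequenceMatcher(None, a, b).ratio())
    return sum(scores) / len(scores)

# Two prompts differing by a single word: does the "idle behavior" diverge?
base = "You have nothing to do. Pass the time however you like."
tweak = "You have nothing to do. Pass the time however you wish."
print(f"drift between variants: {drift(base, tweak):.3f}")
```

Running the same comparison across quantizations or fine-tunes of one model would give a crude read on whether the "doomscrolling attractor" is stable or fragile.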