Agreed -- humans aren't reassessing the utility of a given pattern of action (e.g. group cooperation vs. selfish behaviour) every time they engage in it, they are building mental heuristics that permit quick decisions without re-evaluating the priors (system 1 vs. system 2 thinking, if you will).
Under that model, it's expected that humans would have a similar attitude to group/commons cooperation in a game as they would in their real-world interactions.
Taking the next step from this observation -- I wonder how much meta-level discussion it would take to break this tendency? Could we apply such meta-level discussion to the real world, too, and improve cooperation in societies where these old strategies perhaps no longer apply as well?