… and I tend to think of it as the safest route to doing OK at consequentialism too, myself. The point is still basically good outcomes, but it short-circuits the problems that tend to come up when one starts trying to maximize utility/good, by saying “that shit’s too complicated, just be a good person” (to oversimplify and omit the “draw the rest of the fucking owl” parts).
Like you’re probably not going to start with any halfway-mainstream virtue ethics text and find yourself pondering how much you’d have to be paid to donate enough to make it net-good to be a low-level worker at an extermination camp. No dude, don’t work at extermination camps, who cares how many mosquito nets you buy? Don’t do that.