>Suppose someone comes to me and says, "Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."
The thought experiment was introduced to point out a place where the currently best "decision theory" produces a seemingly wrong recommended action, in the hope that someone might either come up with a better decision theory or explain why the seemingly wrong recommendation is actually right. A "decision theory" is a recipe for acting in the face of uncertainty that is stated precisely enough that it could eventually serve as the basis of a computer program.
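For concreteness, here is a minimal sketch (Python, not from the original discussion) of the naive expected-utility calculation behind the seemingly wrong recommendation. All constants are illustrative assumptions: 3^^^^3 is far too large to represent, so `10**100` serves as a vastly smaller stand-in, and the tiny credence in the mugger is arbitrary.

```python
# Minimal sketch of why a naive expected-utility maximizer "pays the mugger".
# 3^^^^3 is far too large to represent, so 10**100 stands in for it here;
# the conclusion only gets stronger with the real number. All constants are
# illustrative assumptions, not figures from the thought experiment itself.

LIVES_AT_STAKE = 10**100      # stand-in for 3^^^^3
P_MUGGER_TRUTHFUL = 1e-50     # an absurdly small credence in the threat
COST_OF_PAYING = 5            # five dollars, measured in the same utility units

# Paying loses $5 with certainty; refusing risks the astronomical loss.
eu_pay = -COST_OF_PAYING
eu_refuse = -P_MUGGER_TRUTHFUL * LIVES_AT_STAKE

print("pay" if eu_pay > eu_refuse else "refuse")  # prints "pay"
```

However small the credence, the mugger can always quote a number of lives large enough to make `eu_refuse` worse than `eu_pay`, which is exactly the failure the thought experiment is probing.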
This also touches on the philosophy of mind. In particular, it relies on the belief that running a Turing machine that simulates and kills 3^^^^3 people is as bad as breeding and killing the same number of people in the physical world we live in.
A 1% chance of humanity going extinct is exactly the Pascal's-mugging form of doomerism. If someone thinks there's a >10% chance, they're probably actively working on AI safety or something else to stop it.
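For scale, here is a toy calculation (Python; the 8 billion population figure is a rough assumption) showing the shared shape of the two arguments: a small probability multiplied by an enormous stake still yields a huge expected loss, the same structure as the mugger's offer.

```python
# Toy illustration of the structural parallel: small probability x huge stake.
# The 8 billion world-population figure is a rough assumption.

P_DOOM = 0.01                     # the claimed 1% extinction risk
POPULATION = 8_000_000_000

expected_deaths = P_DOOM * POPULATION
print(f"{expected_deaths:,.0f}")  # 80,000,000 expected deaths
```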
I argue that it's not that hilarious, because the thinking is very tightly related: the very contemplations that lead to AI doomerism also lead to Pascal's mugging.
One of my main gripes with AI doomerism is that it is downstream of being Pascal's-mugged into being a doomer.