Exploring Pascal’s Wager

Blaise Pascal was a 17th-century mathematician/philosopher most famous for introducing a certain way to organize numbers, now known as Pascal’s Triangle:

Uh…seems…useful?

Pascal is also known for writing short thought experiments in philosophy and mathematics. One of these is an argument called Pascal’s Wager, which goes something like this:

“Look: if god exists, and you believe in him, you get eternal life…if you believe in god and it turns out that he doesn’t exist, what have you really lost? Not that much, you’re going to die anyway.

If you don’t believe in god, because of your human pride, and you shake your fists at him, but it turns out he does exist, INFINITE TORTURE. And if you don’t believe in god and he doesn’t exist, then cool.

So really, why not believe in god? Because, at worst, you will die forever, just like you would if you didn’t believe in him. At best, you’ll have eternal life.

But if you choose not to believe in god, you’re playing with infinite odds that you will actually be tortured and mutilated by demons that tear your flesh apart and pour lemon juice on it.”

– David Pizarro, Episode 113 of the Very Bad Wizards podcast

 

Before I explain why I’m writing about Pascal’s Wager, here’s a simpler version of the problem. Suppose you’re shown a large room of brightly colored balls:

Go on, but if you don’t get to the point soon I’m jumping in.

A person approaches you and tells you that one of the balls has the word “GOD” written on it, and all the others read “Just Earth, Stars, Universes, and Solar-systems” (J.E.S.U.S., for short). They then offer you the following game:

If you give me one dollar, that large crane arm over there will dive in and pick one ball. If it reads “GOD”, you get ONE MILLION DOLLARS.

 

THE CLAWWWW EXISTSSSS

The question: is it a good idea to play? There’s actually an equation for the amount of money you’d win on average per play, if you played the game a huge number of times. This quantity, called the “expected value”, looks like this:

$$E_{\text{PLAY}} = \frac{1}{\text{total }\#\text{ of balls}}\times \$999,999 + \frac{\# \text{ J.E.S.U.S. balls}}{\text{total }\#\text{ of balls}}\times (-\;\$1) $$

(Achievement unlocked: use of the phrase “jesus balls” in a math expression.)

The expression for $E_{\text{PLAY}}$ is simply net winnings weighted by probability of winning, plus net loss weighted by probability of loss. In gambling, if $E_{\text{PLAY}}$ is greater than zero, reason suggests you should go in. If the expected value is less than or equal to zero, however, you should pass.

So what happens in our ball game? Let’s say there are 10,000 total balls in the room. Even with those small odds of drawing the single GOD ball, the expected value calculation actually favors buying in:

$$E_{\text{PLAY}} = \frac{1}{10,000}\times \$999,999 + \frac{9,999}{10,000}\times (-\;\$1) \approx \$100$$

Intuitively, this works because the payout for winning is large enough to outweigh the long odds against drawing the GOD ball. However, you can imagine versions of the game where this is not the case: if there are significantly more balls, for instance, or if you are being offered significantly less money.
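To make the arithmetic concrete, here’s a minimal Python sketch of the same calculation; the function name and defaults are just my own illustration of the game described above.

```python
# Expected value of playing: win the prize (minus the $1 fee) with probability
# 1/total_balls, otherwise lose the $1 fee.
def expected_value_play(total_balls=10_000, prize=1_000_000, cost=1):
    p_win = 1 / total_balls
    p_lose = (total_balls - 1) / total_balls
    return p_win * (prize - cost) + p_lose * (-cost)

print(expected_value_play())                        # ~99, roughly the +$100 above
print(expected_value_play(total_balls=10_000_000))  # ~ -0.90: now playing loses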

Pascal’s argument is, essentially, that the decision whether or not to believe in god is a version of the ball game in which you’re being offered INFINITE MONIES. Looking at the expected value equation, we can see that increasing the winnings indefinitely makes playing (believing) a good idea, no matter how much the game costs.

Wait, what about hell? Right, good question. The ball game isn’t a perfect analogy to the original Wager, which also has the threat of eternal damnation if you mistakenly disbelieve. So, we have to add this rule to the game:

Even if you don’t play, I’ll still draw a ball. If it reads “GOD”, you owe me ONE MILLION DOLLARS.

 

Now there’s a potential (and enormous) cost to not playing! This just makes the decision easier. Here’s the overall expected value of not playing (“passing”):

$$E_{\text{PASS}} = \frac{1}{10,000}\times\left(-\;\$999,999\right) + \frac{9,999}{10,000}\times \$1 \approx -\;\$100$$

So, the larger the winnings and losses, the more obvious the choice becomes: believe in god (play for \$1) and expect basically infinite reward; disbelieve (pass and keep your \$1) and expect eternal punishment.
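Continuing that sketch, the new rule is easy to add. I follow the post’s framing here, where a GOD draw while passing costs \$999,999 net and keeping your unspent dollar counts as a \$1 gain; the numbers are just the ones from the example.

```python
# Expected value of passing once the penalty rule is added: a drawn GOD ball
# costs $999,999 net, and keeping your unspent dollar counts as +$1.
def expected_value_pass(total_balls=10_000, penalty=999_999, kept=1):
    p_god = 1 / total_balls
    p_jesus = (total_balls - 1) / total_balls
    return p_god * (-penalty) + p_jesus * kept

print(expected_value_pass())   # ~ -99, roughly the -$100 above
```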

I’ve never found the analogy between gambling and religious belief particularly compelling. First, there are too many concessions you have to make before it’s realistic: that lifetime belief in god costs as little as a lottery ticket; that there isn’t an endless roster of other potential gods offering different, incompatible bargains; that you can be hand-wavy with slippery concepts like infinity and human value.

Even if you grant those concessions, there’s still the problem that you can no more decide to believe in god than you can decide who you fall in love with. This basic fact severs any link between religious belief and gambling, and is the main reason I was never overly impressed with Pascal’s argument.

So, the face-palming was real when Pascal’s Wager popped up as the primary subject of the latest episode from my favorite podcast Very Bad Wizards. At first, I couldn’t decide if going through the episode was worth the risk that they’d mangle probability rules, or run through lots of trivial objections.

Eventually, however, my hunger for the infrequent VBW epistemology adventures got the better of me, and I wasn’t disappointed. The majority of the first half of the episode (the latter half introduced a variant on the Wager) focused on precisely the aforementioned problem of belief and choice, and many of the philosophical/psychological intricacies involved.

In the episode, the hosts eventually arrived at the same conclusion that has had me continually unimpressed with the Wager: the type of gods for which the Wager is relevant require belief to be a conscious, free act. Essentially, they said (heavily paraphrased): “Well, the argument sort of works if you’re Jewish, because then being religious is in fact mostly participation in activities. But Pascal wasn’t Jewish, so…yeah.”

This post picks up where the VBW discussion left off. The fact is, the majority of Christians actually do think that a believer is something you get up every day and decide to be. As David pointed out in the episode, this is arguably one of the most important structural supports of the Christian mindset. Like David, I was raised and confirmed Catholic, so I have some perspective here. From first-hand experience, I know that Pascal’s Wager remains pretty convincing to a lot of folks.

So, I don’t think David & Tamler satisfactorily closed the issue for, say, Jews who think keeping kosher their whole life is a good bargain for the chance at eternal life, or Christians who really think belief is a choice. The natural next question for me is: can we concede that belief is reducible to an opt-in checkbox, and still find an intuitive counter to Pascal?

I think yes, and it has to do with how we value time when life is limited.

And here I was just about to dive in and spend eternity not reading this blog post.


A key assumption of Pascal’s Wager is that a year of life has a constant value, regardless of whether or not heaven and hell exist. This is like saying “I could eat a hamburger every day, forever, and each day it will taste just as good as the very first one!”

For most people, this doesn’t work for hamburgers, and it doesn’t work for time, either. Suppose you knew you were going to die at age 80. How much would you pay for an extra year? What if you knew you were going to die at age 40? Surely, a year is worth more in the latter case. At the other extreme, if you know you’ll live for 1,000 years, a year should be worth far less.

Now, what if you knew you would never die? How much would you pay for an extra year? A quick answer is, of course, zero: you don’t need extra years if you’re immortal.

The point of these questions is to illustrate that, in some sense, an 80-year life might be at least as valuable as an endless life, all things considered. Compared with eternity, any finite amount of time is fleeting and therefore precious. This is felt deeply by many atheists:

“Death is a deadline…knowing life is temporary brings focus to our lives, inspires us to treasure the people and experiences we encounter, and motivates us to do something valuable with the short time we have.”

Greta Christina, from Comforting Thoughts About Death That Have Nothing To Do With God.

 

If years of a finite life are more valuable, that should affect the outcome of Pascal’s Wager. To see this work out, let’s define some terms:

  • Let $P_G$ be the probability that god exists. Referencing the ball game, $P_G$ is like 1 over the total number of balls.
  • Let $Y_E$ be the number of years living on earth, and $Y_H$ the number of years living in either heaven or hell. In what follows, I’ll assume these numbers are finite. The only way to assess what happens when heaven is “infinite” is to consider the resulting expected values when $Y_H$ is arbitrarily large.
  • Finally, we need some notion of a “flourishing value” as a believer, atheist, or resident of heaven. Let’s write these as $F_B$, $F_A$, and $F_H$ (respectively). These numbers can be positive or negative, and represent how much fun you’re having (on average, over long periods of time) in each state.

…for instance, $F_B < F_A$ means that on average, there’s a net cost to belief. The assumption that $F_B < F_A$ is part of the original Wager, which claims that even though belief may be a cost, the expected values work out in the end. However, I won’t assume this in any of the following calculations.

We can assume that $F_H$ is a positive number (heaven is fun guyz), and for simplicity, the level of flourishing in hell is just its negative version, $-F_H$. This reflects the symmetry inherent to standard conceptions of heaven and hell.

Let’s re-derive the conclusion of the original Wager. As a correct believer, I would get $Y_E$ years valued at $F_B$, and $Y_H$ years valued at $F_H$, resulting in a full life value of $Y_EF_B + Y_HF_H$. However, since I don’t know whether or not god exists, I have to factor this in to the overall expected value:

$$ E_B = P_G \cdot \left(Y_EF_B + Y_HF_H\right) + (1\,- P_G)\cdot Y_EF_B $$

The above equation is directly analogous to the one used in the ball game. To decide if belief is rational, we have to calculate the expected value of non-belief. As a correct non-believer, I would get just $Y_E$ years valued at $F_A$, giving a full life value of $Y_EF_A$. As an incorrect non-believer, I get a huge value loss from years in hell*. Factoring this in:

$$ E_A = P_G \cdot \left(Y_EF_A\,- Y_HF_H\right) + (1\,- P_G)\cdot Y_EF_A $$

Notice that in the first part of this equation, $Y_HF_H$ is subtracted, since if god exists, an atheist spends $Y_H$ years in hell, each valued at $-F_H$. Here’s the expression for the difference between $E_B$ and $E_A$:

$$ E_B \,- E_A = Y_E\cdot (F_B \,- F_A) + 2P_GY_HF_H$$

This equation is illustrative. If $F_B < F_A$, the first term adds a finite, negative value that does not depend on $P_G$. The second term is always positive, and gets larger as $Y_H$ grows. So, the trade-off in the original Wager is made clear: for any non-zero probability of god $P_G$, there is a length of time spent in heaven $Y_H$ that would offset any cost incurred by belief.
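As a sanity check on that algebra, here’s a quick numerical sketch. All of the parameter values are placeholders I made up (a tiny $P_G$, a small cost to belief, a pleasant-but-finite $F_H$); the point is only that the difference eventually goes positive as $Y_H$ grows, no matter how small $P_G$ is.

```python
# Original Wager: E_B - E_A should equal Y_E*(F_B - F_A) + 2*P_G*Y_H*F_H.
def wager_difference(p_god, y_earth, y_heaven, f_belief, f_atheist, f_heaven):
    e_belief = (p_god * (y_earth * f_belief + y_heaven * f_heaven)
                + (1 - p_god) * y_earth * f_belief)
    e_atheist = (p_god * (y_earth * f_atheist - y_heaven * f_heaven)
                 + (1 - p_god) * y_earth * f_atheist)
    return e_belief - e_atheist

# Belief costs a little (F_B < F_A) and god is very unlikely (P_G = 1e-6)...
for y_heaven in (10, 1_000, 100_000, 10_000_000):
    print(y_heaven, wager_difference(1e-6, 80, y_heaven,
                                     f_belief=0.9, f_atheist=1.0, f_heaven=10.0))
# ...yet the difference still turns positive once Y_H is large enough.
```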

Phew. Ok. Now let’s calculate $E_B - E_A$ if we value years realistically, that is, if years are valued inversely proportional to total years lived. For instance, as a correct believer, I would get $Y_E$ years valued at $F_B / (Y_E + Y_H)$, and $Y_H$ years valued at $F_H / (Y_E + Y_H)$. Factoring this in:

$$ E_B = P_G \cdot \left(Y_E\cdot\frac{F_B}{Y_E + Y_H} + Y_H\cdot\frac{F_H}{Y_E + Y_H}\right) + (1 \,- P_G)\cdot Y_E\cdot\frac{F_B}{Y_E} $$

For non-belief:

$$ E_A = P_G \cdot \left(Y_E\cdot\frac{F_A}{Y_E + Y_H}\,- Y_H\cdot\frac{F_H}{Y_E + Y_H}\right) + (1\,- P_G)\cdot Y_E\cdot\frac{F_A}{Y_E} $$

Deriving the new expected value difference takes some algebra, but here’s a simplified form:

$$ E_B\,- E_A = \left(P_G \cdot \frac{Y_E}{Y_E + Y_H} + 1 \,- P_G\right)\cdot (F_B \,- F_A) + 2P_G\cdot\frac{Y_H}{Y_E + Y_H}\cdot F_H$$

This equation is much less illustrative, so bear with me; a little bit of the mathematics of infinity is needed to see what’s going on. As $Y_H$ becomes very large, the fraction $\frac{Y_E}{Y_E+Y_H}$ in the first term approaches 0, while $\frac{Y_H}{Y_E + Y_H}$ in the second term approaches 1.

So how do we rule on this case when $Y_H$ “equals infinity”? We just have to consider $E_B \,- E_A$ when $Y_H$ is arbitrarily large. This means that we replace those fractions in $E_B \,- E_A$ that depend on $Y_H$ with their “limiting” values (0 and 1). When we do this, we get a much simpler expression:

$$ E_B\,- E_A =(1 \,- P_G)\cdot(F_B \,- F_A) + 2P_GF_H$$

This is even simpler than $E_B-E_A$ from the original Wager. Moreover, it has a strikingly different conclusion: a rational decision about belief now depends on all of $P_G$, $F_A$, $F_B$, and $F_H$. If $P_G$ is very small, then $F_H$ must be large enough to outweigh the (very likely) cost of belief.
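Here’s the same kind of check for the time-valued version, with the same placeholder numbers as before: instead of blowing up as $Y_H$ grows, the difference settles down to the limiting expression above.

```python
# Time-valued version: each year is worth F / (total years lived).
def timed_difference(p_god, y_earth, y_heaven, f_belief, f_atheist, f_heaven):
    total = y_earth + y_heaven
    e_belief = (p_god * (y_earth * f_belief + y_heaven * f_heaven) / total
                + (1 - p_god) * f_belief)
    e_atheist = (p_god * (y_earth * f_atheist - y_heaven * f_heaven) / total
                 + (1 - p_god) * f_atheist)
    return e_belief - e_atheist

p_god, f_b, f_a, f_h = 1e-6, 0.9, 1.0, 10.0
for y_heaven in (10, 1_000, 100_000, 10_000_000):
    print(y_heaven, timed_difference(p_god, 80, y_heaven, f_b, f_a, f_h))

# Limiting value as Y_H grows: (1 - P_G)*(F_B - F_A) + 2*P_G*F_H
print((1 - p_god) * (f_b - f_a) + 2 * p_god * f_h)   # ~ -0.1, and it stays negative
```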

Of course, the believer could simply respond to this new equation with some optimism: “$F_H$ is, of course, very large! How could it not be? And there are so many benefits to belief, maybe $F_B > F_A$! Then $E_B-E_A$ will always be positive, qed”. Such a response is valid, certainly. But the point of this analysis is that, with a simple, realistic assumption about how we value years in a finite life, we force Pascal adherents to argue for a priori assumptions on $P_G$, $F_A$, $F_B$, and $F_H$. In the original Wager, these values didn’t matter.

Let’s think for a second about what that means. For the record, I’m an atheist. It’ll be hard to convince me that $P_G$ shouldn’t be super, duper small, at least not without some seriously strong assumptions. Also, I don’t see $F_H$ as being too large. The god of the bible was generally a dick, Jesus wasn’t much better, and most religious conceptions of heaven sound pretty boring (or awful). Finally, $F_B$ is clearly lower than $F_A$, at least for me. I mean, I get 2 full 24-hour days back per year just by not going to church. Thus, it’s easy for me to plug in realistic numbers for these constants that justify non-belief.

There’s one more realistic assumption that makes things even worse for Pascal. What if, as the years go on, each year is just a little less fun than the last? This is certainly reasonable to imagine**, and approximately encapsulates what I meant by “heaven sounds pretty boring.”

Mathematically, let’s say that each subsequent year is $Q$ times as fun as the one before, where $Q$ is a number strictly between 0 and 1. Relegating the math to the end, this assumption results in a simple rule. For any fixed $Q$, $P_G$, $Y_E$, $F_H$, and flourishing values $F_B$ and $F_A$, there exists a $Y_H$ large enough that

$$ \text{$E_B - E_A < 0$ when $F_B - F_A < 0$, and $E_B - E_A \geq 0$ when $F_B - F_A \geq 0$.}$$

Essentially, this says that if heaven is infinitely long, the rationality of belief depends only on the per-year cost of belief; importantly, it doesn’t depend on the probability of god! We’ve assumed only that (I) years are valued inversely proportional to how long you’re alive, and (II) each year is a little less fun. Rewording again: no matter how likely ($P_G$) or fun ($F_H$) god & heaven are, the fact that eternity is a long, long time makes my enjoyment of life on earth pretty dang valuable.

These analyses help illustrate parts of the Pascal debate that I’ve read and heard over the years. I think many atheists pass off Pascal’s Wager as ludicrous because they’re already intuitively doing the sorts of calculations I’ve put forth. They instantly realize “well, I don’t think heaven’s that exciting”, or “finite life is pretty valuable”, and they sense how those feelings should play into Pascal’s Wager.

Relatedly, rigorously formulating the Pascal scenarios above can help to illuminate the differences between the deeper assumptions that believers and atheists bring to the table. For instance, I think assumptions (I) and (II) closely approximate the way people really value time. Believers may not concede that ground so easily.

Ideas similar to those in this post are, I suspect, covered in the references mentioned in objection #2 to premise 1 of the Wager, though I haven’t read them. I was surprised not to find anything like them in the RationalWiki article on the Wager, as I find much of what’s there less intuitive. Regardless, I wanted to make an accessible, fully explanatory post for this particular counter. I’d be interested in more recent related work: please send it along.

*Yes, I’m fully aware the current Pope said atheists aren’t necessarily going to hell. That actually makes things worse for Pascal adherents; it can be incorporated in the equations by setting the flourishing level of hell equal to zero.

**The result still holds as long as years eventually start becoming less fun. We can allow for usual up/down-ticks that happen over the course of a standard earthly, mortal life, but the math notation is far more complicated.


Math for the strong counter

If each year is a little “less fun” at rate $Q$, we have to break down the values of the years and sum them. This analysis requires some high-school level pre-calc, which I won’t explain fully (aside from a link or two). First, the values received by a correct believer (CB) and incorrect believer (IB) in this case are

$$ \text{CB} = \sum_{y = 1}^{Y_E} \frac{F_B}{Y_E + Y_H}Q^{y - 1} + \sum_{y = Y_E + 1}^{Y_E + Y_H}\frac{F_H}{Y_E + Y_H}Q^{y - 1},\;\;\;\;\text{IB} = \sum_{y = 1}^{Y_E} \frac{F_B}{Y_E}Q^{y - 1}$$

We can simplify these expressions using the formula for a geometric series:

$$ \text{CB} = \frac{(1 \,- Q^{Y_E})F_B + (Q^{Y_E} \,- Q^{Y_E + Y_H})F_H}{(1 \,- Q)(Y_E + Y_H)},\;\;\;\;\text{IB} = \frac{(1 \,- Q^{Y_E})F_B}{(1 \,- Q)Y_E}$$

With these shorthands, the expected value of belief is just $E_B = P_G\cdot \text{CB} + (1 \,- P_G)\cdot \text{IB}$. Similar algebra gives the expressions for an incorrect atheist (IA) and correct atheist (CA) as

$$ \text{IA} = \frac{(1 \,- Q^{Y_E})F_A \,- (Q^{Y_E} \,- Q^{Y_E + Y_H})F_H}{(1 \,- Q)(Y_E + Y_H)},\;\;\;\;\text{CA} = \frac{(1 \,- Q^{Y_E})F_A}{(1 \,- Q)Y_E}$$

…and similarly, $E_A = P_G\cdot\text{IA} + (1 \,- P_G)\cdot \text{CA}$. With some more algebra, we can obtain an expression for the expected value difference:

$$E_B \,- E_A = \frac{P_G}{1 \,- Q}\left[\frac{(F_B \,- F_A)(1 \,- Q^{Y_E}) + 2F_H(Q^{Y_E} \,- Q^{Y_E + Y_H})}{Y_E + Y_H}\right] + \frac{1 \,- P_G}{1 \,- Q}\left[\frac{(F_B \,- F_A)(1 \,- Q^{Y_E})}{Y_E}\right]$$

Now, when $Y_H$ increases without bound, $Q^{Y_E + Y_H}$ approaches 0, and $Y_E + Y_H$ grows toward infinity. This means that the limiting value of the big, messy first term is zero. Therefore, for fixed values of $P_G$, $Q$, $F_B$, $F_A$, $F_H$, and $Y_E$, we have

$$E_B \,- E_A \approx \frac{1 \,- P_G}{1 \,- Q}\left[\frac{(F_B \,- F_A)(1 \,-Q^{Y_E})}{Y_E}\right]$$

for large enough $Y_H$. The factors other than $F_B - F_A$ on the right-hand side are all positive. This implies that the sign of the expected value difference is determined by the sign of $F_B - F_A$.
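For anyone who’d rather check the algebra numerically than by hand, here’s a short Python sketch that computes the discounted values by direct summation and compares them with the closed form above. The parameter values are placeholders, including a deliberately generous $P_G = 0.5$; the sign still tracks $F_B - F_A$.

```python
# Discounted version (each year is Q times as fun as the last), computed two
# ways: by summing year-by-year values directly, and via the closed form above.
def direct_difference(p_god, q, y_e, y_h, f_b, f_a, f_h):
    total = y_e + y_h
    earth = sum(q ** (y - 1) for y in range(1, y_e + 1))          # earthly years
    after = sum(q ** (y - 1) for y in range(y_e + 1, total + 1))  # afterlife years
    cb = (f_b * earth + f_h * after) / total   # correct believer
    ia = (f_a * earth - f_h * after) / total   # incorrect atheist
    ib = f_b * earth / y_e                     # incorrect believer
    ca = f_a * earth / y_e                     # correct atheist
    return p_god * (cb - ia) + (1 - p_god) * (ib - ca)

def closed_form_difference(p_god, q, y_e, y_h, f_b, f_a, f_h):
    total = y_e + y_h
    term1 = (p_god / (1 - q)) * ((f_b - f_a) * (1 - q ** y_e)
                                 + 2 * f_h * (q ** y_e - q ** total)) / total
    term2 = ((1 - p_god) / (1 - q)) * (f_b - f_a) * (1 - q ** y_e) / y_e
    return term1 + term2

args = dict(p_god=0.5, q=0.99, y_e=80, y_h=200_000, f_b=0.9, f_a=1.0, f_h=10.0)
print(direct_difference(**args))       # the two computations agree...
print(closed_form_difference(**args))  # ...and both are negative, since F_B < F_A
```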

Computers can’t think, and they’re not conscious.

This post is just an extended link to John Searle’s talk at Google in 2015, which is an excellent response to the two most common opinions about artificial intelligence:

  1. A fancy enough computer program will rise up, make friends with all the other computer programs, and maliciously destroy humanity.
  2. A fancy enough computer program will rise up, make friends with all the other computer programs, and become a benevolent super-being.

Searle’s main point is that, though computers can replicate many feats of human thought, there is no reason to believe they are actually thinking or being creative, each of which would be necessary for the above scenarios. Nor will a program’s features, however numerous or powerful, necessarily add up to a consciousness. Instead, creating a true artificial intelligence will depend on complete knowledge of how the brain creates conscious thought, which we’re not even close to having.

So, don’t worry: self-driving cars will never spontaneously become sentient and decide to drive us all off cliffs. They will never experience you, or anything at all.

In this post, I’ll first try to give a clearer explanation of Searle’s opening point, which is a little technical; the remainder of Searle’s talk is accessible and important. Then I’ll give a few pull-quotes, and link to a bunch of related articles I’ve come across.


At the beginning of his talk, Searle makes some distinctions between usages of the words “objective” and “subjective”. Why? As he argues, the misuse of these words leads to confusion in the discussion of AI. Consciousness, many will say, isn’t objective: so it can’t be studied by science, and logic doesn’t apply to it.

But consciousness is only subjective as a matter of being, not as a matter of knowledge. When something is subjective as a matter of being, it is defined by its relationship to an observer. Our experiences are subjective as a matter of their being, but their contents and causes are objective as a matter of knowledge.

The word to describe anything that is a matter of being is “ontological”, and the word to describe anything that is a matter of knowledge is “epistemological”. Facts can be objective or subjective, in both senses. Consider your experience of color:

 

The flower exists (epistemologically objective) independently of our experience (ontologically objective). Our internal vision of the flower is a fact of nature (epistemologically objective), but is part of our experience (ontologically subjective). Our opinion of the flower’s look is also part of our experience (ontologically subjective), but is not a true fact of nature (epistemologically subjective).

Searle’s first main point is that the study of consciousness should not be dismissed as futile or wrong-headed simply because it deals in ontologically subjective material. Neuroscientists are already beginning to understand how to reconstruct our internal experiences:


Some gems from the talk:

“‘So, could a machine think?’ Well, human beings are machines. ‘Yes, but could you make an artificial machine that could think?’ Why not? It’s like an artificial heart. The question ‘can you build an artificial brain that can think?’ is like the question ‘can you build an artificial heart that pumps blood?’. We know how the heart does it, so we know how to do it artificially . . . we have no idea how to create a thinking machine because we don’t know how the brain does it . . . so we have two questions: ‘could a machine think?’ and ‘could an artificially made machine think?’ The answer to question one is obviously yes. The answer to question two is, we don’t know yet but there’s no obstacle in principle.”

“The sense in which *I* carried out the computation is absolutely intrinsic and observer-independent . . . when my pocket calculator does the same operation, the operation is entirely observer-relative. Intrinsically, all that goes on is a set of electronic state-transitions that we have designed so that we can interpret [the result] computationally. And again . . . for most purposes it doesn’t matter. When it matters is when people say ‘well, we’ve created this race of mechanical intelligences, and they might rise up and overthrow us’, or attribute some other equally implausible psychological interpretation to the machinery.”

“Turing machines are not to be found in nature. They are to be found in our interpretations of nature.”


Related articles/references, in order of relevance: