Thursday, February 20, 2014

Opposed Checks in D&D are the Same as Coin Flips

This is a lesson in statistics and probability, as applied to the popular Dungeons and Dragons role-playing game.  I'm having trouble lately with the LaTeX embedder: if you see a lot of dollar signs and slashes, check your plugins and permissions on your browser and allow MathJax to run, so you can see the equations properly.

At least since version 3.0, the Dungeons and Dragons rule book has featured a rule of opposed checks.  These are supposed to represent, using dice, the opposition of two separate skills: so, your ability to Hide versus the orc's ability to Spot; your ability to tie a rope versus the orc's ability to escape from bonds.  You roll your skill, the orc rolls his skill, you apply modifiers, and the higher outcome wins.

Even before this, rolling dice was a common way to set the difficulty of something in old versions and in other non-d20 games.  How hard is the door to force?  You didn't think of it, now you're on the spot, so you roll a die to figure out how hard it is.  Then you tell the PCs to beat that number on their own roll.  Makes sense.

Doing checks this way is, from a probabilistic point of view, about as good as flipping a coin. The probability of the PC winning is slightly more than 50-50.  I'll prove it.

When you roll a die, a number comes up.  A random number, hopefully.  If the die has $n$ sides, then this number is between $1$ and $n$.  It is customary to denote a random number with a capital letter: in this case, I'm going to call $X$ the result of rolling the die; $X=1, 2, 3,\ldots, n$, depending on what we roll.

If we consider some number between $1$ and $n$, say 6, then the probability that $X=6$ is, as we all know, $1/n.$  It is common to write this as $\Pr(X=6) = \frac{1}{n}.$  And, of course, it isn't just for $X=6$ that this is the case, but for any number $x$ between $1$ and $n$.  More generally, for any such $x$, we write $\Pr(X=x) = \frac{1}{n}.$

In our case, we are going to roll the same die twice.  This gives us two random numbers (the results of the two dice), which we will call $X_1$ and $X_2.$  We'll say that $X_1$ is the die the GM rolls, and $X_2$ is the die that the PC rolls.  We want to know $\Pr(X_2 \geq X_1)$, that is, the probability that the second result is at least as high as the first or, more concretely, the probability that the PC wins his contest against the orc.

This could go a number of ways.  The GM might roll a 1, in which case the PC is guaranteed to win, or the GM might roll a $n$, in which case the PC has to also roll $n$ or lose, with other possibilities in between.  But we don't want to consider the probability of the PC winning given some particular roll from the GM, because that's trivial.  So what we want to do instead is consider all of these possibilities.

We look at
$$\Pr(X_2\geq X_1) = \sum_{x=1}^n \Pr(X_2 \geq x) \cdot \Pr(X_1=x),$$
which means that we consider the probability of the GM rolling some number $X_1=x$, then multiply by the probability of the PC winning given this roll, then consider this for all the possible $x$ the GM might roll and add these together.  That gives us the probability of the PC winning his roll, regardless of what the GM rolls.
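For readers who like to check these things numerically, here is a small Python sketch (using a d6 as the example) that evaluates this sum term by term and compares it against a direct count of the winning pairs:

```python
from fractions import Fraction

n = 6  # a six-sided die as a concrete example

# Sum over each possible GM roll x: Pr(PC rolls >= x) * Pr(GM rolls x).
# Pr(PC rolls >= x) is counted directly as the number of faces from x to n.
decomposed = sum(Fraction(len(range(x, n + 1)), n) * Fraction(1, n)
                 for x in range(1, n + 1))

# Direct count: of the n*n equally likely pairs, how many have PC >= GM?
direct = Fraction(sum(1 for gm in range(1, n + 1)
                        for pc in range(1, n + 1) if pc >= gm), n * n)

assert decomposed == direct
print(decomposed)  # 7/12
```

Both ways of counting agree, which is exactly what the formula above claims.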

Breaking this down, we find
$$\Pr(X_2\geq X_1) = \sum_{x=1}^n \sum_{y=x}^n \frac{1}{n} \frac{1}{n} = \frac{1}{n^2}\sum_{x=1}^n\sum_{y=x}^n 1 = \frac{1}{n^2}\sum_{x=1}^n (n-x+1).$$
Stopping for a second, for people less familiar with this stuff, the $\sum_{y=x}^n$ term means that we add up contributions from every value of $X_2$, starting at $x$ and ending at $n$.  In a concrete example, if we're rolling a 6-sided die, and the first roll is $X_1=3$, then we add up contributions from $y=3,4,5,6$; that's $4 = 6-3+1$ terms we consider.  More generally, it is $n-x+1$ terms, which is why we wrote $(n-x+1)$ there.  Moving on,
$$\Pr(X_2\geq X_1) = \frac{1}{n^2} \left(\sum_{x=1}^n n - \sum_{x=1}^n x + \sum_{x=1}^n 1\right) = \frac{1}{n^2}\left(n^2 - \sum_{x=1}^n x + n\right).$$
The term $\sum_{x=1}^n x$ means the sum of the first $n$ numbers.  So,
$$\sum_{x=1}^n x = 1 + 2 + 3 + 4 + 5 + \cdots + n.$$
Those who've had Calculus will be familiar with this, but other people maybe not so much.  There's a beautiful formula, usually attributed to Gauss, whose proof is even more beautiful.  Consider the following image:

This shows a bunch of stacks of squares, increasing from 1 to 2 to 3, on up to $n$.  The area of these squares is $\sum_{x=1}^n x.$  Now consider a second one of exactly the same size: the two interlock, forming a rectangle:

The width of the rectangle is $n$ and the height is $n+1$, so its area is $n(n+1)$.  But the area of the rectangle is equal to twice the area of the stacked squares!  Therefore,
$$\sum_{x=1}^n x = \frac{n(n+1)}{2}.$$
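As a quick numerical sanity check (a Python sketch), the closed form agrees with adding the numbers up directly:

```python
# Compare Gauss's closed form n(n+1)/2 against the direct sum 1 + 2 + ... + n.
for n in (6, 20, 100):
    direct = sum(range(1, n + 1))
    gauss = n * (n + 1) // 2
    assert direct == gauss

print("Gauss's formula matches the direct sum")
```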

So then, carrying on with our equation, we now have
$$\Pr(X_2 \geq X_1)  = \frac{1}{n^2}\left(n^2 - \frac{n(n+1)}{2} + n\right) = 1 - \frac{n+1}{2n} + \frac{1}{n} = \frac{2n - n - 1 + 2}{2n} = \frac{n+1}{2n},$$
which, as I said, is slightly better than a 50% probability.  For a 6-sided die, it is $\frac{7}{12}.$  For a 20-sided die, it is $\frac{21}{40}$, which corresponds almost exactly to a DC of 10: because a roll equal to 10 still counts as a success, beating DC 10 on a d20 has probability $\frac{11}{20} = \frac{22}{40}$.  So rolling an opposed roll for the orc is very nearly the same as using the orc's passive check.
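If you'd rather trust a computer than algebra, here's a Python sketch that enumerates all $n^2$ equally likely pairs of rolls and confirms the $\frac{n+1}{2n}$ formula:

```python
from fractions import Fraction

def opposed_win_probability(n):
    """Brute-force Pr(PC's roll >= GM's roll) over all n*n equally likely
    pairs of rolls of an n-sided die (ties go to the PC)."""
    wins = sum(1 for gm in range(1, n + 1)
                 for pc in range(1, n + 1) if pc >= gm)
    return Fraction(wins, n * n)

# The closed form (n + 1) / (2n) matches exact enumeration for common die sizes.
for n in (4, 6, 8, 10, 12, 20):
    assert opposed_win_probability(n) == Fraction(n + 1, 2 * n)

print(opposed_win_probability(6), opposed_win_probability(20))  # 7/12 21/40
```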

For dice, the result is not quite $1/2$ because the two rolls can tie (both dice can land on the same number), and ties go to the PC.  In the general continuous case, where ties have probability zero, it is actually true that the probability of the second random number being at least as large as the first is exactly $1/2$: that is, $\Pr(X_2\geq X_1) = \frac{1}{2}.$  I'll prove it.

So, consider $X_1, X_2$, which are still random numbers, but not necessarily from a die.  For instance, we might push blocks on ice, and $X_1$ and $X_2$ give the distances the blocks travel before coming to rest.  Or throw darts at a wall, and $X_1, X_2$ are the distances from a bullseye.  Or something.  It's also not necessarily the case that every possible value is equally likely.  For a fair die, every number has probability $1/n$ of coming up; for throwing darts at a bullseye, if we're any good, then we will be more likely to be near the bullseye.  Let $\Pr(X=x) = p(x)$, where $p(x)$ is just some function: give it a value $x$ and it gives you a probability $p$.  Here $p(x)$ is called the "probability density function".  For simplicity, we also consider $\Pr(X\leq x) = F(x)$, called the "cumulative distribution function".  This is the probability of $X$ being less than or equal to some value $x$; as we'll see, a separate symbol for this is really useful.

As before, we have
$$\Pr(X_2\geq X_1) = \int \Pr (X_2\geq x)\cdot\Pr(X_1=x)dx = \int (1-F(x))p(x) dx = \int p(x)dx - \int F(x)p(x)dx.$$
You may be wondering what the weird S is, the $\int$ thing.  That's an integral sign, and it basically just means "add up contributions from all the possible values of $x$."  It's different from the $\sum$ symbol in that $\sum$ considers only discrete values while $\int$ considers a continuous spectrum of values.  We have used here the fact that $1 = \Pr(X\leq x) + \Pr(X\geq x) = F(x) + \Pr(X\geq x)$ (the overlap at $X=x$ has probability zero in the continuous case) to express this in terms of $F$.

If we add up all the probabilities of things happening, we should get 100%; that is, $\int p(x)dx = 1.$  This makes sense; the probability that we roll a 1 or a 2 or a 3, etc, is 1.  So
$$\Pr(X_2\geq X_1) = 1 - \int F(x)p(x)dx.$$
To fully evaluate this, we can write it another way.  Think what happens if, instead of rolling for the orc first then making the PC roll higher than that, we have the PC roll, then roll for the orc and make sure the orc rolls lower.  It's the same thing in the end, but can be written as:
$$\Pr(X_2\geq X_1) = \int \Pr(X_2 = x) \Pr(X_1 \leq x) dx = \int p(x) F(x) dx.$$
Comparing these two,
$$\int p(x) F(x)dx = \Pr(X_2\geq X_1) = 1 - \int p(x) F(x)dx,$$
which says the integral equals one minus itself; solving, $\Pr(X_2 \geq X_1) = \int p(x)F(x)dx = \frac{1}{2}.$
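A Monte Carlo sketch in Python illustrates the continuous case; the normal distribution here is just a stand-in for any way of generating two random numbers the same way:

```python
import random

# Draw two continuous random numbers the same way (normally distributed here,
# standing in for dart throws or blocks on ice).  Ties have probability zero,
# so Pr(X2 >= X1) should come out very close to 1/2.
random.seed(1)
trials = 200_000
wins = sum(random.gauss(0, 1) >= random.gauss(0, 1) for _ in range(trials))
print(wins / trials)  # approximately 0.5
```

Any other continuous distribution (uniform, exponential, whatever) gives the same answer, since the proof above never used the shape of $p(x)$.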

So, the long and short of it is, if we have two random numbers that we produce in the same way, one after the other, and we want to know the probability that the second is larger than the first, then this is 50%.  In terms of D&D, this means that if you generate the DC for a skill check by rolling a die, then have the PC roll to beat that die, then you may as well flip a coin to accomplish the same thing.  This also means you can fix the DC of the opposed roll at 10, and just add the orc's bonuses to 10 to increase the DC; it achieves essentially the same thing.

5 comments:

Anonymous said...

This is very interesting; I also never thought about the sum of the first $n$ positive integers like that. But I was wondering: instead of the two rolls trying to beat each other, what if they are instead trying to succeed at beating a target number (akin to Basic Roleplaying, where to make a successful attack you roll under your attacking skill, and to block a successful attack you can attempt to roll under your skill at blocking)? Is there a way to simplify it similar to this? Or is it just not really applicable?

Reece said...

Let me make sure I understand the question.

Your attacking skill is 19, so you need to roll 19 or under to hit? Or your block skill is 12, so you need to roll under 12 to block? Is that the gist?

If you are rolling a 20-sided die, then the probability of rolling less than $x$ is $\frac{x-1}{20}$ (I hope the math encodes correctly there). That's based on the PC, and doesn't change.

But, if the DM didn't figure out the Orc's block skill beforehand, then maybe he'll roll for it. Let's call that roll $X_1$. Then the DM rolls again (call it $X_2$) to see if the Orc blocks, and has to get less than $X_1$. So we need to know $\Pr(X_2<X_1),$ which is the probability of the Orc blocking. This would work in an analogous way to the above.

I think the answer that you get is 19/40, which is just a little bit less than 1/2.

Reece said...

Apparently the math does encode!

Let me take away all of the fun of doing the sum yourself:

$$\sum_{x=1}^n \Pr(X_2<x)\cdot\Pr(X_1=x) = \sum_{x=1}^n \left(\sum_{k=1}^{x-1} \frac{1}{n}\right)\cdot \frac{1}{n} = \frac{1}{n^2} \sum_{x=1}^n \sum_{k=1}^{x-1} 1 = \frac{1}{n^2} \sum_{x=1}^n (x-1) = \frac{1}{n^2} \frac{n(n-1)}{2} = \frac{1}{2} - \frac{1}{2n}.$$


For $n=20$, this becomes $\frac{1}{2}-\frac{1}{40}=\frac{19}{40}$.

Hope I did that right. It's late here. I'm so sorry if you received a dozen replies in your inbox, I had trouble getting the equation above to come out right and couldn't edit the comment.
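The $\frac{19}{40}$ figure can also be checked by brute force over all pairs of d20 rolls (a Python sketch):

```python
from fractions import Fraction

n = 20
# Count the pairs of d20 rolls where the second roll is strictly below the first.
below = sum(1 for x1 in range(1, n + 1)
              for x2 in range(1, n + 1) if x2 < x1)
print(Fraction(below, n * n))  # 19/40
```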

Anonymous said...

Cool, thanks for that. You got the question right, yeah. So if you are rolling to determine the difficulty of something, you might as well just use the median value of the die. That's pretty interesting.

Reece said...

Probabilistically, they're the same. That doesn't mean that your players will respond the same way, though :P