Consider a game in which one tosses three honest coins and receives a dollar for each head that occurs. How much money, on average, should one expect to get if one were permitted to play this game once? If $X$ denotes the amount that will be won, then $X$ assumes one of the values $0\,$ , $\, 1\,$ , $\, 2\,$ , or $3\,$ , with probabilities $\frac{1}{8}\,$ , $\, \frac{3}{8}\,$ , $\, \frac{3}{8}\,$ , and $\frac{1}{8}\,$ , respectively. Thus one would expect to get zero dollars $\frac{1}{8}$ of the time, one dollar $\frac{3}{8}$ of the time, two dollars $\frac{3}{8}$ of the time, and three dollars $\frac{1}{8}$ of the time. If the game were played a large number of times, one should expect the average pay-out per attempt to be $$ \$\left( 0\cdot\frac{1}{8} +1\cdot\frac{3}{8} +2\cdot\frac{3}{8} +3\cdot\frac{1}{8} \right) =\$\,1.50 \;\text{.} $$ This $\$1.50$ figure is what is routinely called the expected amount to be won if the game is played once.
We now generalize the above consideration.
Suppose that a discrete random variable $X$ necessarily assumes one of the values $\displaystyle{ x_1 }\,$ , $\,\displaystyle{ x_2 }\,$ , … , $\,\displaystyle{ x_n }\,$ , and that the probabilities associated with those values are $\displaystyle{ p\left( x_1\right) }\,$ , $\,\displaystyle{ p\left( x_2\right) }\,$ , … , $\,\displaystyle{ p\left( x_n\right) }\;$ . Then the expected value of $X$ is defined by the computation $$ E[X] =\sum_{k=1}^n\, x_k\, p\left( x_k\right) \;\text{.} $$ This is precisely the way we computed averages for data from relative frequency information.
In the preceding three-coin toss consideration, $\, X$ is the payout and assumes values $0\,$ , $\, 1\,$ , $\, 2\,$ , or $3\,$ , with corresponding probabilities $\frac{1}{8}\,$ , $\, \frac{3}{8}\,$ , $\, \frac{3}{8}\,$ , and $\frac{1}{8}\;$ .
For continuous random variables, the corresponding definition is $$ E[X] =\int_{-\infty}^{\infty}\, x\, p(x)\, {\rm d}x \;\text{,} $$ where $p$ is the density function of $X\;$. The expected value of a random variable is usually called the mean or average value of the random variable. As mentioned above, the way this mean/average is computed should be compared with how the mean/average of a data set is computed from relative frequency information.
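For instance (an illustration beyond the coin-game discussion), if $X$ is uniformly distributed on $[0,1]\,$ , then $p(x)=1$ on that interval and $0$ elsewhere, so $$ E[X] =\int_0^1 x\, {\rm d}x =\frac{1}{2} \;\text{.} $$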
Suppose now that the game is changed so that one wins amount $\displaystyle{ g\left( x_i \right) }$ rather than $\displaystyle{ x_i }$ when the value $\displaystyle{ x_i }$ occurs. Then the expected value in the discrete case would be $$ E[ g(X) ] =\sum_{i=1}^n \, g\left( x_i \right)\, p\left( x_i \right) \;\text{.} $$ Thus, in the three-coin toss game, if $\displaystyle{ g(x)=x^2 \, }$ , then $$ E\left[ X^2 \right] =0\cdot\frac{1}{8} +1\cdot\frac{3}{8} +4\cdot\frac{3}{8} +9\cdot\frac{1}{8} =3 \;\text{.} $$
In general, we say that the expected value of the function $g(X)$ of random variable $X\,$ , with density function $p\,$ , is given by $$ E[g(X)] =\left\{ \begin{array}{ccc} \sum_{i=1}^\infty\, g\left( x_i\right)\, p\left( x_i\right) & \qquad & X \; \text{discrete} \\ & & \\ \int_{-\infty}^{+\infty}\, g(x)\, p(x)\, {\rm d}x & \qquad & X\; \text{continuous} \end{array}\right. \;\text{.} $$ Of course, in the discrete case, $\,\displaystyle{ x_1 }\,$ , $\,\displaystyle{ x_2 }\,$ , … , are the possible values of $X\,$ , and the associated probabilities are $\displaystyle{ p\left( x_1\right) }\,$ , $\,\displaystyle{ p\left( x_2\right) }\,$ , … .
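The discrete branch of this definition is easy to evaluate directly. Here is a minimal sketch in Python (the function name `expected_value` and its arguments are illustrative choices, not notation from the text); applied to the three-coin payouts with $g(x)=x^2$ it reproduces the value $3$ found above.

```python
# Discrete expected value of g(X): sum of g(x_k) * p(x_k) over the
# possible values x_k and their probabilities p(x_k).
def expected_value(values, probs, g=lambda x: x):
    return sum(g(x) * p for x, p in zip(values, probs))

values = [0, 1, 2, 3]             # payouts in the three-coin game
probs = [1/8, 3/8, 3/8, 1/8]      # their probabilities

print(expected_value(values, probs))                    # 1.5 = E[X]
print(expected_value(values, probs, g=lambda x: x**2))  # 3.0 = E[X^2]
```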
Example: Chuck-a-Luck
Chuck-a-Luck, a well-known carnival game, offers an excellent illustration of expected value. One simple form of the game runs as follows. The player pays a dollar to roll three dice. If no two of the dice match, the player loses his dollar. If exactly two of the dice match, the player gets two dollars back (a net gain of $\$1\,$ ). If all three of the dice match, the player gets three dollars back (a net gain of $\$2\,$ ).
There are $6^3 =216$ equally likely ways the three dice can land. The number of ways that exactly two match is $90$ (six possible values for the pair, then five for the singleton, and three positions the singleton can occupy). There are also $6$ ways that all three of the dice can match. Thus the number of ways the player loses is $216 -(90 +6) =120\;$ . With these counts in hand, the expected value of the game to the player is $$ \frac{120}{216}\cdot (-1) +\frac{90}{216}\cdot 1 +\frac{6}{216}\cdot 2 =-\frac{1}{12}\;\text{.} $$
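These counts, and the $-\frac{1}{12}$ figure, can also be confirmed by brute force. The following Python sketch (an exhaustive enumeration offered as a check, not the counting argument above) lists all $216$ outcomes and tallies the player's net result for each.

```python
from fractions import Fraction
from itertools import product

# Enumerate all 6^3 = 216 equally likely outcomes of three dice and
# tally the player's net result for each; the counts should be 120/90/6.
net_total = Fraction(0)
counts = {"no match": 0, "pair": 0, "triple": 0}

for roll in product(range(1, 7), repeat=3):
    distinct = len(set(roll))
    if distinct == 3:        # no two dice match: player loses the dollar
        counts["no match"] += 1
        net_total += -1
    elif distinct == 2:      # exactly two match: net gain of $1
        counts["pair"] += 1
        net_total += 1
    else:                    # all three match: net gain of $2
        counts["triple"] += 1
        net_total += 2

print(counts)            # {'no match': 120, 'pair': 90, 'triple': 6}
print(net_total / 216)   # -1/12
```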
The following spreadsheet illustrates this result. In each row, the values in the first three columns represent one roll of the three dice, and column E records the net value of that roll to the player. This is repeated for $120$ rows, representing one hundred twenty plays of the game. Cell G2 holds the player’s net result after all the plays, and cell H2 holds the average value per play to the player ( “G2/120” ). Compare this with $-\frac{1}{12} \doteq -0.08333\;$ .
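For readers who would rather experiment in code than in a spreadsheet, here is a minimal Python sketch of the same experiment (the random seed and the choice of $120$ plays simply mirror the spreadsheet; they are otherwise arbitrary):

```python
import random

# Simulate 120 plays of Chuck-a-Luck, mirroring the spreadsheet:
# each play rolls three dice and records the player's net result.
random.seed(1)           # arbitrary seed, so a run is reproducible
plays = 120
net = 0

for _ in range(plays):
    roll = [random.randint(1, 6) for _ in range(3)]
    distinct = len(set(roll))
    if distinct == 3:
        net += -1        # no match: lose the dollar
    elif distinct == 2:
        net += 1         # one pair: net gain of $1
    else:
        net += 2         # three of a kind: net gain of $2

print(net, net / plays)  # compare net/plays with -1/12, about -0.0833
```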
Note: These sums and integrals do not always converge. Random variables, or functions of random variables, that do not have expected values can present serious difficulties. Here, unless otherwise stated, we will consider only random variables, and functions of random variables, for which the sums or integrals that arise converge absolutely.
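A standard example of a random variable with no expected value (offered here only to illustrate the difficulty) is one with the Cauchy density $$ p(x) =\frac{1}{\pi\left( 1 +x^2\right)} \;\text{,} $$ for which $\int_{-\infty}^{\infty}\, |x|\, p(x)\, {\rm d}x$ diverges, so the defining integral does not converge absolutely.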
Properties of $E$
Certain computational properties of $E$ make it particularly convenient. The following will be needed in later sections. Let $g\,$ , $\,\displaystyle{ g_1 }\,$ , and $\displaystyle{ g_2 }$ be functions of random variables for which the indicated expected values exist. We have the following properties.
$E[c\,g] =c\,E[g]$ for any constant $c$
$\displaystyle{ E\left[g_1 +g_2\right] =E\left[ g_1\right] +E\left[ g_2\right] }$
These two properties combined are called linearity, so that $E$ is linear. They can be verified as consequences of the linearity of summation and integration.
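For instance, in the discrete case the first property amounts to factoring the constant out of the sum: $$ E[c\,g] =\sum_{i=1}^\infty\, c\, g\left( x_i\right)\, p\left( x_i\right) =c\,\sum_{i=1}^\infty\, g\left( x_i\right)\, p\left( x_i\right) =c\, E[g] \;\text{,} $$ and the second follows in the same way from splitting the sum (or the integral) for $g_1 +g_2$ term by term.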
Note: The expected value of a sum of random variables is the sum of their expected values. But the expected value of a product may or may not be the product of the expected values. This will be discussed later.