The main idea is that $X$ is completely determined by knowing the entire history of coin flips. More precisely, let $f$ be a function whose input is an infinite history of coin tosses $y = (y_n)_{n\geq 1} \in \{H, T\}^{\mathbb{N}}$ and whose output is the first moment at which two $H$'s have appeared in a row. This can be handily written as
$$ f(y) = f(y_1, y_2, \cdots) = \min\{ n \geq 2 : (y_{n-1}, y_n) = (H, H) \} $$
with the convention that $\min \varnothing = \infty$. Up to this point, no probability theory is involved; we simply defined a function $f$ which reads some value off of its input. Mentally you may regard $f$ as a machine such that, when it is fed an infinite string of $H$'s and $T$'s, it detects the first position of the pattern $HH$ in the string.
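As a concrete illustration (my own sketch, not part of the original argument), this machine can be written in Python; it works on any iterable of `'H'`/`'T'` symbols, finite or infinite:

```python
def f(y):
    """Return the first index n >= 2 (1-based, as in the text) with
    (y[n-1], y[n]) == ('H', 'H'), i.e. the position where HH is completed.

    If the pattern never occurs in a finite input, return float('inf'),
    matching the convention min of the empty set = infinity.
    """
    prev = None
    for n, toss in enumerate(y, start=1):
        if prev == 'H' and toss == 'H':
            return n
        prev = toss
    return float('inf')

print(f("THTHH"))  # -> 5: the first HH ends at position 5
print(f("TTTT"))   # -> inf: no HH in this finite string
```

Because the loop returns as soon as the pattern is found, the same function also works on a lazy infinite stream of tosses.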
Now let us feed $f$ with random coin tosses. More precisely, let $Y = (Y_n)_{n\geq 1}$ be a sequence of i.i.d. RVs with $\mathbb{P}(Y_n = H) = \mathbb{P}(Y_n = T) = 1/2$, representing an infinitely long record of fair coin flips. Then we may realize $X$ as

$$ X = f(Y). $$
So far, it seems that we have concocted a very complicated and indirect way of describing $X$. But this formulation will be helpful for understanding what is going on in the proposed solution in OP.
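To make the identity $X = f(Y)$ tangible, here is a hedged Monte Carlo sketch (the helper names `sample_X` and `flips` are my own): we feed $f$ a lazy stream of fair coin flips and average many samples of $X$.

```python
import random

def f(tosses):
    """First 1-based index n >= 2 where two H's in a row are completed."""
    prev = None
    for n, t in enumerate(tosses, start=1):
        if prev == 'H' and t == 'H':
            return n
        prev = t

def sample_X(rng):
    """One realization of X = f(Y): f applied to an infinite stream of flips."""
    def flips():
        while True:
            yield rng.choice('HT')
    return f(flips())

rng = random.Random(0)
N = 100_000
est = sum(sample_X(rng) for _ in range(N)) / N
print(est)  # should land close to the known answer E[X] = 6
```

The generator terminates with probability one, since $HH$ eventually occurs almost surely.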
The sketched solution goes by decomposing $\mathbb{E}[X]$ according to the initial outcomes and examining the resulting terms individually:
\begin{align*} \mathbb{E}[X] &= \mathbb{E}[X \,|\, Y_1 = T] \, \mathbb{P}(Y_1 = T) \\ &\quad + \mathbb{E}[X \,|\, (Y_1, Y_2) = (H, T)] \, \mathbb{P}((Y_1, Y_2) = (H, T)) \\ &\quad + \mathbb{E}[X \,|\, (Y_1, Y_2) = (H, H)] \, \mathbb{P}((Y_1, Y_2) = (H, H)). \end{align*}
Let us focus on the first term. Let $y = (y_n)_{n\geq 1} \in \{H, T\}^{\mathbb{N}}$ be a sequence satisfying $y_1 = T$. Then
\begin{align*} f(y) = f(y_1, y_2, y_3, \cdots) = f(T, y_2, y_3, \cdots) = 1 + f(y_2, y_3, \cdots). \end{align*}
The last step holds because you have to restart building the pattern $HH$ as soon as you encounter $T$. Now, given $\{Y_1 = T\}$, plugging in $y = Y$ gives
\begin{align*} \mathbb{E}[X \,|\, Y_1 = T] &= \mathbb{E}[f(T, Y_2, Y_3, \cdots) \,|\, Y_1 = T] \\ &= \mathbb{E}[1 + f(Y_2, Y_3, \cdots) \,|\, Y_1 = T] \\ &= \mathbb{E}[1 + f(Y_2, Y_3, \cdots)] \\ &= 1 + \mathbb{E}[f(Y_2, Y_3, \cdots)] \\ &= 1 + \mathbb{E}[X]. \end{align*}
There are two steps that deserve explanation. In the third step, we dropped the conditioning because $Y_1$ and all the rest are independent. Intuitively, this is because knowing the first coin flip $Y_1$ should never affect anything about the remaining coin tosses $Y_2, Y_3, \cdots$. The other is the final step, and it is the crux of this argument. It follows because $X = f(Y_1, Y_2, \cdots)$ and $f(Y_2, Y_3, \cdots)$ have the same distribution, even though they are not the same random variable.
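Both steps can be sanity-checked numerically. Since the remaining tosses are independent of $Y_1$, conditioning on $\{Y_1 = T\}$ is the same as forcing the first toss to be $T$ and leaving the rest i.i.d. fair; the sketch below (helper names are my own) estimates $\mathbb{E}[X \,|\, Y_1 = T]$, which should come out near $1 + \mathbb{E}[X] = 7$.

```python
import random

def f(tosses):
    """First 1-based index n >= 2 where two H's in a row are completed."""
    prev = None
    for n, t in enumerate(tosses, start=1):
        if prev == 'H' and t == 'H':
            return n
        prev = t

def sample_X_given_first(first, rng):
    """Sample X with the first toss forced to `first`; the rest i.i.d. fair.

    Valid as a conditional sample precisely because Y_2, Y_3, ... are
    independent of Y_1 -- the independence step in the derivation.
    """
    def flips():
        yield first
        while True:
            yield rng.choice('HT')
    return f(flips())

rng = random.Random(1)
N = 100_000
cond_T = sum(sample_X_given_first('T', rng) for _ in range(N)) / N
print(cond_T)  # should land close to 1 + E[X] = 7
```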
For the other terms, the argument goes similarly:
\begin{align*} \mathbb{E}[X \,|\, (Y_1, Y_2) = (H, T)] &= \mathbb{E}[f(H, T, Y_3, Y_4, \cdots) \,|\, (Y_1, Y_2) = (H, T)] \\ &= \mathbb{E}[2 + f(Y_3, Y_4, \cdots) \,|\, (Y_1, Y_2) = (H, T)] \\ &= \mathbb{E}[2 + f(Y_3, Y_4, \cdots)] \\ &= 2 + \mathbb{E}[X], \end{align*}
and
\begin{align*} \mathbb{E}[X \,|\, (Y_1, Y_2) = (H, H)] &= \mathbb{E}[f(H, H, Y_3, Y_4, \cdots) \,|\, (Y_1, Y_2) = (H, H)] \\ &= \mathbb{E}[2 \,|\, (Y_1, Y_2) = (H, H)] \\ &= 2. \end{align*}
Combining all of these, we recover the equation
$$ \mathbb{E}[X] = \frac{1}{2}\left(\mathbb{E}[X] + 1\right) + \frac{1}{4}\left(\mathbb{E}[X] + 2\right) + \frac{1}{4}(2) $$
as in OP .
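For completeness, this linear equation is easily solved for $\mathbb{E}[X]$:

$$ \mathbb{E}[X] = \frac{3}{4}\mathbb{E}[X] + \frac{3}{2} \quad\Longrightarrow\quad \frac{1}{4}\mathbb{E}[X] = \frac{3}{2} \quad\Longrightarrow\quad \mathbb{E}[X] = 6, $$

the well-known expected number of fair coin flips to see $HH$.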