# What is the probability of getting TTHH before HHH in repeated fair coin toss?

I would have guessed that this question is basically a duplicate, but I wasn't able to find a question with an answer that applies to this particular problem.
We can reformulate this problem using a Markov chain: explicitly, we're looking to compute the probabilities $\Bbb P(HHH), \Bbb P(TTHH)$ that the Markov chain settles into absorbing states corresponding respectively to $HHH$ and $TTHH$ occurring first.
The (eight) states are the possible stages of progress toward one of the two goal sequences ($HHH$, $TTHH$).

The transient states are (reading them off the transition matrix below, this is the longest suffix of the tosses so far that is a prefix of one of the goal sequences):

$$\emptyset, \quad T, \quad H, \quad TT, \quad HH, \quad TTH.$$

The absorbing states are:

$$HHH, \quad TTHH.$$
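Before doing any linear algebra, one can sanity-check the answer empirically. The following is a hypothetical simulation sketch (not part of the argument): it races the two patterns in repeated fair coin tosses, assuming Python's standard `random` module.

```python
import random

def race(rng, pat_a="HHH", pat_b="TTHH"):
    """Flip a fair coin until one of the two patterns appears; return the winner."""
    tosses = ""
    while True:
        tosses += rng.choice("HT")
        if tosses.endswith(pat_a):
            return pat_a
        if tosses.endswith(pat_b):
            return pat_b

rng = random.Random(2024)  # fixed seed for reproducibility
trials = 100_000
wins_hhh = sum(race(rng) == "HHH" for _ in range(trials))
print(wins_hhh / trials)  # should be close to 5/12 ≈ 0.4167
```

With $10^5$ trials the empirical frequency should land within about $\pm 0.005$ of the true probability, which agrees with the exact value derived below.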

In each transient state there are two possible next states, corresponding to flipping $H$ and $T$ next, respectively, both of which thus have probability $\frac{1}{2}$. Explicitly, the transition matrix of the Markov chain, with respect to the above ordering of states, is

$$P=\left(\begin{array}{cccccc|cc} \cdot & \frac{1}{2} & \frac{1}{2} & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \frac{1}{2} & \frac{1}{2} & \cdot & \cdot & \cdot & \cdot \\ \cdot & \frac{1}{2} & \cdot & \cdot & \frac{1}{2} & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \frac{1}{2} & \cdot & \frac{1}{2} & \cdot & \cdot \\ \cdot & \frac{1}{2} & \cdot & \cdot & \cdot & \cdot & \frac{1}{2} & \cdot \\ \cdot & \frac{1}{2} & \cdot & \cdot & \cdot & \cdot & \cdot & \frac{1}{2} \\ \hline & & & & & & 1 & \cdot \\ & & & & & & \cdot & 1 \\ \end{array}\right).$$

Since the absorbing states are at the end of our list, the above matrix is in a convenient form: the upper-left block, call it $Q$, specifies the transitions between transient states, and the upper-right block, call it $R$, specifies those from transient states to absorbing states.
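The matrix $P$ can also be built programmatically, which makes its structure easy to verify. Here is a minimal sketch using exact rational arithmetic (the `next_state` table below encodes the transitions just described; indices $0$–$5$ are the transient states $\emptyset, T, H, TT, HH, TTH$ and $6, 7$ are $HHH, TTHH$):

```python
from fractions import Fraction as F

# next_state[s] = (state after flipping H, state after flipping T)
next_state = {0: (2, 1), 1: (2, 3), 2: (4, 1), 3: (5, 3), 4: (6, 1), 5: (7, 1)}

P = [[F(0)] * 8 for _ in range(8)]
for s, (h, t) in next_state.items():
    P[s][h] += F(1, 2)
    P[s][t] += F(1, 2)
P[6][6] = P[7][7] = F(1)  # absorbing states loop to themselves

Q = [row[:6] for row in P[:6]]  # transient -> transient block
R = [row[6:] for row in P[:6]]  # transient -> absorbing block

assert all(sum(row) == 1 for row in P)  # each row is a probability distribution
assert R[4][0] == F(1, 2)               # R_{51} = 1/2: HH reaches HHH in one flip
```

Using `Fraction` rather than floats keeps every entry exact, so the checks on row sums and on the single nonzero entry of $R$'s first column are exact equalities.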
By construction, if we start in the $i$th transient state the probability of settling in the $j$th absorbing state is the $(i, j)$-entry $B_{ij}$ of $B := NR$, where $N := (I - Q)^{-1}$ is the so-called fundamental matrix of the Markov chain. Since we start in the initial state, $\emptyset$, it's enough to compute $B_{11}$, i.e., the probability that the sequence $HHH$ occurs first, and $B_{12}$, i.e., the probability that $TTHH$ occurs first; since there are only $2$ absorbing states, $B_{12} = 1 - B_{11}$, and so it's enough to compute

$$\Bbb P(HHH) = B_{11} = \sum_{k=1}^6 N_{1k} R_{k1}.$$

The only nonzero entry in the first column of $R$ is $R_{51} = \frac{1}{2}$, so $\Bbb P(HHH) = N_{15} R_{51} = \frac{1}{2} N_{15}$, and using Cramer's rule we can compute $N_{15} = \frac{5}{6}$ without computing the other entries of $N$. Substituting yields the claimed probabilities:

$$\Bbb P(HHH) = \frac{5}{12} \qquad \textrm{and} \qquad \Bbb P(TTHH) = \frac{7}{12}.$$
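The whole computation $B = (I - Q)^{-1} R$ can be carried out exactly over the rationals. Below is a self-contained sketch (the state encoding is an assumption matching the ordering above: transient states $0$–$5$ are $\emptyset, T, H, TT, HH, TTH$, absorbing columns are $HHH, TTHH$) that solves $(I - Q)B = R$ by Gauss-Jordan elimination on `Fraction`s instead of inverting $N$ explicitly:

```python
from fractions import Fraction as F

half = F(1, 2)
# Build Q (transient->transient) and R (transient->absorbing) from the
# transitions: entry s maps to (state after H, state after T).
Q = [[F(0)] * 6 for _ in range(6)]
R = [[F(0)] * 2 for _ in range(6)]
for s, (h, t) in {0: (2, 1), 1: (2, 3), 2: (4, 1),
                  3: (5, 3), 4: (6, 1), 5: (7, 1)}.items():
    for nxt in (h, t):
        if nxt < 6:
            Q[s][nxt] += half
        else:
            R[s][nxt - 6] += half

# Augment (I - Q) with R and run Gauss-Jordan elimination, exactly.
M = [[F(i == j) - Q[i][j] for j in range(6)] + R[i] for i in range(6)]
for col in range(6):
    piv = next(r for r in range(col, 6) if M[r][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    M[col] = [x / M[col][col] for x in M[col]]
    for r in range(6):
        if r != col:
            M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]

B_row0 = M[0][6:]  # absorption probabilities starting from the empty state
print(B_row0)      # [Fraction(5, 12), Fraction(7, 12)]
```

Because the chain is absorbing, $I - Q$ is invertible, so a nonzero pivot always exists; the first row of the solution reproduces $\Bbb P(HHH) = \frac{5}{12}$ and $\Bbb P(TTHH) = \frac{7}{12}$ exactly.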
