
# CIS 2033 Lecture 5, Spring


Teacher: David Dobor. Updated January 8. Supplemental reading from Dekking's textbook: Chapter 3. In this lecture, we explore the idea of independence of events in some detail.

## Introduction

We now introduce and develop the concept of independence between events. The general idea is the following. Whenever we are told that a certain event A has occurred, this generally changes the probability of some other event B, and the unconditional probabilities – our prior beliefs about B – have to be replaced by conditional probabilities. But if the conditional probability turns out to be the same as the unconditional probability, then the occurrence of event A does not carry any useful information on whether event B will occur. In such a case, we say that events A and B are independent.

We will develop some intuition about the meaning of independence of two events and introduce, along the way, the concept of conditional independence. We will then proceed to define the independence of a collection of more than two events. If any two of the events in the collection are independent of each other, we say that we have pairwise independence. But we will see that independence of the entire collection is something different: it involves extra conditions. Finally, we will close with an application in reliability analysis and with a nice puzzle that will serve as a word of caution about putting together probabilistic models.

## Coin Tossing Example

As an introduction to the main subject of this lecture, let us go through a simple example and, along the way, review what we have learned so far. The example we are going to consider involves three tosses of a biased coin. It is a coin that results in heads with probability p. (We will make this a little more precise in a moment.) The coin may be biased in the sense that this number p is not necessarily the same as one half.
We represent this particular probabilistic experiment in terms of a tree that shows the different stages of the experiment. Each branch corresponds to a sequence of possible results in the different stages, and the leaves of the tree correspond to the possible outcomes. (Figure 1: a model for the experiment consisting of three coin tosses. The probability of an H or a T on any individual toss is not necessarily 1/2 here.) The branches of this tree are annotated by certain numbers, and these numbers are to be interpreted as probabilities or conditional probabilities.

Thus, for example, the number p that labels the branch emanating from the root of the tree is interpreted as the probability of heads in the first toss, an event that we denote by H1, whose probability is p = P(H1), shown in red on Figure 2. Similarly, the p in the blue circle on Figure 2 is to be interpreted as the conditional probability of obtaining heads in the second toss given that the first toss resulted in heads. And the "green p" is to be interpreted as the conditional probability of heads in the third toss, given that the first toss resulted in heads and the second toss also resulted in heads: P(H3 | H1 ∩ H2). We sometimes write P(H3 | H1, H2) instead of P(H3 | H1 ∩ H2) – same thing, just a slightly different notation.

Let us now continue with some calculations. First, we practice the multiplication rule, which allows us to calculate the probability of a certain outcome. In this case, the outcome of interest is tails followed by heads followed by tails, that is, the outcome THT. This outcome is the leaf circled in red on Figure 2.
According to the multiplication rule, to find the probability of a particular final outcome, we multiply the probabilities and conditional probabilities along the path that leads to this particular outcome. (Figure 2: same tree as in Figure 1, but annotated. For example, the "blue p" is interpreted as the probability of H on the second toss, given that the first toss resulted in H as well: P(H2 | H1).)

P(THT) = P(T1) P(H2 | T1) P(T3 | T1 ∩ H2) = (1 − p) · p · (1 − p)

Let us now calculate the probability of the event that we obtain exactly one head in the three tosses. This is an event that can happen in multiple ways. In fact, there are exactly three ways in which it can happen, and these are shown in Figure 2 as the leaves with blue rectangles around them. To find the total probability of this event, we add up the probabilities of the different outcomes that correspond to it – that is, the probabilities of the leaves in blue rectangles:

P(exactly one H in three tosses) = P(HTT) + P(THT) + P(TTH)
= p(1 − p)(1 − p) + (1 − p)p(1 − p) + (1 − p)(1 − p)p
= 3p(1 − p)²

Notice that each one of the three different ways that this event can

happen has the same probability. So these three outcomes are equally likely.

Finally, let us calculate a conditional probability. Suppose that we were told that there was exactly one head – so, in particular, one of the events in the blue rectangles has occurred. And we are interested in the probability that the first toss is heads, which corresponds to the three leaves circled in green in Figure 3. So we are after the following probability:

P(first toss is H | exactly 1 H occurred) = ?

Can you guess the answer? The answer should be 1/3. Why is that? Each of the outcomes in blue rectangles – HTT, THT, TTH – has the same probability. So when you condition on one of these three having happened, the conditional probability of each particular outcome of the three should be 1/3. Indeed, to repeat: given that one of these three happened, there is, in particular, probability 1/3 that the top one – the one that is also in the green circle, namely HTT – has happened. (Figure 3: same tree as in Figures 1 and 2. Here, the three leaves circled in green have all resulted in the first toss being an H.)

OK, so intuitively this probability should be 1/3. But let us see if we can derive this answer in a formal manner. Let's use the definition of conditional probabilities:

P(first toss is H | exactly 1 H occurred) = P(H1 ∩ exactly 1 H occurred) / P(exactly 1 H occurred) = …

Now, for the numerator: the probability that both events happen – i.e., that we have exactly one head and the first toss is heads – is the probability of the intersection of the event in the blue rectangles and the event in the green set, which can happen only if the sequence HTT occurs, the probability of which we calculated above. We also calculated the denominator above. So we continue:

… = p(1 − p)² / (3p(1 − p)²) = 1/3

OK, great!
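These calculations are easy to verify by brute-force enumeration. The sketch below (with an arbitrary illustrative bias p = 0.3, my choice, not a value from the lecture) lists all eight outcomes of three tosses, sums the probabilities of those with exactly one head, and checks both the formula 3p(1 − p)² and the conditional probability 1/3:

```python
from itertools import product

p = 0.3  # arbitrary illustrative bias; any 0 < p < 1 works

def prob(outcome: str) -> float:
    """Probability of one outcome, e.g. 'THT', under independent tosses."""
    result = 1.0
    for c in outcome:
        result *= p if c == "H" else (1 - p)
    return result

outcomes = ["".join(seq) for seq in product("HT", repeat=3)]

# Event: exactly one head in the three tosses
one_head = [o for o in outcomes if o.count("H") == 1]  # HTT, THT, TTH
p_one_head = sum(prob(o) for o in one_head)
assert abs(p_one_head - 3 * p * (1 - p) ** 2) < 1e-12

# Conditional probability that the first toss is H, given exactly one head
p_h1_and_one_head = sum(prob(o) for o in one_head if o[0] == "H")  # only HTT
p_cond = p_h1_and_one_head / p_one_head
assert abs(p_cond - 1 / 3) < 1e-12
```

Changing p changes the individual probabilities, but the conditional probability stays at 1/3, just as the cancellation in the derivation predicts.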
So we got the same answer formally as our intuition suggested – so far, all is well with the mathematics. And with our intuition.

Let me now make a few comments about this particular model. This example is pretty special in the following respect. We have that the probability of H2 (heads in the second toss), given H1 (i.e., that the first toss was heads), is equal to p. And the same is true for the conditional probability of heads in the second toss given that the first toss was tails:

P(H2 | H1) = p = P(H2 | T1)

In other words, knowing the result of the first toss doesn't change our beliefs about what may happen, and with what probability, in the second toss. Furthermore, if you calculate the unconditional probability of heads in the second toss, what you would get using the total probability theorem would be the following:

P(H2) = P(H1) P(H2 | H1) + P(T1) P(H2 | T1),   (1)

and if you plug in the values and do the algebra, this turns out to be equal to p again:

P(H2) = p · p + (1 − p) · p = p

So the unconditional probability of heads in the second toss turns out to be the same as the conditional probabilities in equation (1). Again, knowing what happened in the first toss doesn't change your beliefs about the second toss, which were associated with this particular probability, p. What we are going to do next is to generalize this special situation by giving a definition of independence of events, and then discuss various properties and concepts associated with independence.

## Independence of Two Events

In the previous example, we had a model where the result of the first coin toss did not affect the probabilities of what might happen in the second toss.
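The identity P(H2) = p can likewise be checked by enumerating the four outcomes of two tosses (again with an illustrative p = 0.3 of my own choosing):

```python
p = 0.3  # arbitrary illustrative bias; any value in (0, 1) works

# Total probability theorem: P(H2) = P(HH) + P(TH)
p_h2 = p * p + (1 - p) * p
assert abs(p_h2 - p) < 1e-12

# The conditional probabilities agree with the unconditional one:
p_h2_given_h1 = (p * p) / p               # P(HH) / P(H1)
p_h2_given_t1 = ((1 - p) * p) / (1 - p)   # P(TH) / P(T1)
assert abs(p_h2_given_h1 - p) < 1e-12
assert abs(p_h2_given_t1 - p) < 1e-12
```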
This is a phenomenon that we call independence, which we now proceed to define. Let us start with a first attempt at the definition. We have an event, B, that has a certain probability of occurring. We are then told that event A occurred, but suppose that this knowledge does not affect our beliefs about B, in the sense that the conditional probability remains the same as the original unconditional probability. Thus, the occurrence of A provides no new information about B. In such a case, we may say that event B is independent of event A. Thus, let's write down our first – the so-called "intuitive" – definition of independence:

P(B | A) = P(B)

If this is indeed the case, notice what the probability that both A and B happen is:

P(A ∩ B) = P(A) P(B | A)   (always true, by the multiplication rule)
= P(A) P(B)   (also true if B is independent of A)

So we can find the probability of both events happening by simply multiplying their individual probabilities. It turns out that this relation is a cleaner way of formally defining the notion of independence. So we will say that two events, A and B, are independent if this relation holds:

P(A ∩ B) = P(A) P(B)

Why do we use this definition rather than the original one? This formal definition has several advantages. First, it is consistent with the earlier definition: if this equality is true, then the conditional probability P(B | A) will be equal to P(B):

P(B | A) = P(A ∩ B) / P(A) = P(A) P(B) / P(A) = P(B).

A more important reason is that this formal definition is symmetric with respect to the roles of A and B. So instead of saying that B is independent of A, based on this definition we can now say that events A and B are independent of each other.
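As a quick illustration of the product definition, here is a small example of my own (not from the lecture): for one roll of a fair die, the events A = "even outcome" and B = "outcome at most 4" satisfy P(A ∩ B) = P(A) P(B), so they are independent even though they overlap:

```python
from fractions import Fraction

# One roll of a fair die; each face has probability 1/6.
omega = range(1, 7)
P = lambda event: Fraction(sum(1 for w in omega if w in event), 6)

A = {2, 4, 6}      # even outcome
B = {1, 2, 3, 4}   # outcome at most 4

# P(A) = 1/2, P(B) = 2/3, P(A ∩ B) = P({2, 4}) = 1/3 = P(A) * P(B)
assert P(A & B) == P(A) * P(B)
```

Exact rational arithmetic (`Fraction`) avoids any floating-point doubt when checking the product condition.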
In addition, since this definition is symmetric and since it implies the condition P(B | A) = P(B), it must also imply the symmetric relation, namely, that P(A | B) = P(A). Finally, on the technical side, conditional probabilities are only defined when the conditioning event has non-zero probability. So the original definition would only make sense in those cases where the probability of the event A is non-zero. In contrast, the new definition makes sense even when we are dealing with zero-probability events. So this definition is indeed more general, and this also makes it more elegant.

Let us now build some understanding of what independence actually is. Suppose that we have two events, A and B, both of which have positive probability. Furthermore, suppose that these two events are disjoint (i.e., they do not have any common elements). Are these two events independent? Let us check the definition. The probability that both A and B occur is zero, because the two events are disjoint: they cannot happen together. On the other hand, the probability of A times the probability of B is positive, since each of the two terms is positive. Therefore, these two events cannot be independent. In fact, intuitively, these two events are as dependent as Siamese twins: if you know that A occurred, then you are certain that B did not occur. So the occurrence of A tells you a lot about the occurrence or non-occurrence of B. (Figure 4: two events, A and B. Are they independent?)

So we see that being independent is something completely different from being disjoint. Independence is a relation about information. It is important to always keep in mind the intuitive meaning of independence: two events are independent if the occurrence of one event does not change our beliefs about the other – it does not affect the probability that the other event also occurs. When do we have independence in the real world?
The typical case is when the occurrence or non-occurrence of each of the two events A and B is determined by two physically distinct and non-interacting processes. For example, whether my coin results in heads and whether it will be snowing on New Year's Day are two events that should be modeled as independent. But I should also say that there are some cases where independence is less obvious and where it happens through a numerical accident.

Check Your Understanding: We have a peculiar coin. When tossed twice, the first toss results in heads with probability 1/2. However, the second toss always yields the same result as the first toss. Thus, the only possible outcomes for a sequence of two tosses are HH and TT, and both have equal probabilities. Are the two events A = {heads in the first toss} and B = {heads in the second toss} independent?

Intuitively, the occurrence of event A gives us information on whether event B will occur, and therefore the two events are dependent. Mathematically, P(A) = P(B) = P(A ∩ B) = 1/2, so that P(A ∩ B) ≠ P(A) P(B).

Check Your Understanding: Let A be an event, a subset of the sample space Ω. Are A and Ω independent?

Yes, because P(A ∩ Ω) = P(A) = P(A) · 1 = P(A) P(Ω). Intuitively, P(A) represents our beliefs about the likelihood that A will occur. If we are told that Ω occurred, this does not give us any new information; we already knew that Ω is certain to occur. For this reason, P(A | Ω) = P(A).

Check Your Understanding: When is an event A independent of itself? Choose one of the following possible answers:
(a) Always
(b) If and only if P(A) = 0
(c) If and only if P(A) = 1
(d) If and only if P(A) is either 0 or 1.

Using the definition, A is independent of itself if and only if P(A ∩ A) = P(A) P(A).
Since A ∩ A = A, we have P(A ∩ A) = P(A), and we obtain the equivalent condition P(A) = P(A) P(A), or P(A)(1 − P(A)) = 0, and this happens if and only if P(A) is either 0 or 1. So the answer is (d).

## Independence of Event Complements

Let us now discuss an interesting fact about independence that should enhance our understanding. Suppose that events A and B are independent. Intuitively, if I tell you that A occurred, this does not change your beliefs as to the likelihood that B will occur. But in that case, this should not change your beliefs as to the likelihood that B will *not* occur either. So A should be independent of B^c. In other words, the occurrence of A tells you nothing about B, and therefore tells you nothing about B^c either.

The previous paragraph gave an intuitive argument that if A and B are independent, then A and B^c are also independent. But let us now verify this intuition through a formal proof. The formal proof goes as follows. We have the two events, A and B. Event A can be broken down into two pieces. One piece is the intersection of A with B – that is the piece shaded in red in Figure 5. The other piece is the part of A which is outside B; that piece is the intersection of A with the complement of B – the blue piece in Figure 5. These red and blue pieces together make up event A. (Figure 5: A is decomposed into the red and blue parts: A = (A ∩ B) ∪ (A ∩ B^c).) Now, these two pieces are disjoint from each other, and therefore, by the additivity axiom,

P(A) = P(A ∩ B) + P(A ∩ B^c)

Using independence, the first term becomes P(A) P(B), and we leave the second term as is:

P(A) = P(A) P(B) + P(A ∩ B^c)

Now let us rearrange terms a little bit.
From the previous two equations it follows that

P(A ∩ B^c) = P(A) − P(A) P(B) = P(A)(1 − P(B)) = P(A) P(B^c)

So we proved that the probability of A and B^c occurring together is the product of their individual probabilities. And that is precisely the definition of A being independent of B^c. This concludes the formal proof.

Check Your Understanding: Suppose that A and B are independent events. Are A^c and B^c independent?

We saw in this section that for any two generic events E1 and E2, independence of E1 and E2 implies independence of E1 and E2^c. In this particular problem, we can apply this result with E1 = A and E2 = B to conclude that since A and B are assumed independent, A and B^c are also independent. Independence is symmetric, so A and B^c being independent is the same as B^c and A being independent. If we now reuse the generic result with E1 = B^c and E2 = A, we can conclude that B^c and A^c are also independent, which by symmetry is the same as A^c and B^c being independent. To summarize:

A and B independent ⇒ A and B^c independent ⇒ B^c and A independent ⇒ B^c and A^c independent ⇒ A^c and B^c independent

## Conditional Independence

Conditional probabilities are like ordinary probabilities, except that they apply to a new situation where some additional information is available. For this reason, any concept relevant to probability models has a counterpart that applies to conditional probability models. In this spirit, we can define a notion of conditional independence, which is nothing but the notion of independence applied to a conditional model. Let us be more specific. Suppose that we have a probability model and two events, A and B. We are then told that event C occurred, and we construct a conditional model.
Conditional independence is defined as ordinary independence, but with respect to the conditional probabilities. To be more precise, recall that independence is defined in terms of the relation

P(A ∩ B) = P(A) P(B)

(Figure 6: are A and B independent given C?) Now, in the conditional model we just use the same relation, but with conditional probabilities instead of ordinary probabilities:

P(A ∩ B | C) = P(A | C) P(B | C)

So this is the definition of conditional independence. We may now ask: is there a relation between independence and conditional independence? Does one imply the other? Let us look at an example. Suppose that we have two events and these two events are independent. We then condition on another event, C. And suppose that the picture is like the one shown in Figure 7. Are A and B conditionally independent? Well, in the new universe where C has happened, events A and B have no intersection. As we discussed earlier, this means that events A and B are "extremely dependent": within C, if A occurs, this tells us that B did not occur. (Figure 7: are A and B independent given C?) The conclusion from this example is that independence does not imply conditional independence: in this particular example, we saw that given C, A and B are not independent.

Check Your Understanding: Suppose that A and B are conditionally independent given C, and that P(C) > 0 and P(C^c) > 0.
1. Are A and B^c guaranteed to be conditionally independent given C?
2. Are A and B guaranteed to be conditionally independent given C^c?

1. We have seen that in any probability model, independence of A and B implies independence of A and B^c. The conditional model (given C) is just another probability model, so this property remains true.
2. This may be true in some special cases, for example, if A and B both have zero probability. However, it is not guaranteed in general.
Suppose, for example, that events A and B have nonempty intersection inside C and are conditionally independent given C, but have empty intersection inside C^c, which would make them dependent (given C^c).

## Independence vs. Conditional Independence

We have already seen an example in which two events that are independent become dependent in a conditional model. Thus, independence and conditional independence are not the same. We will now see another example in which a similar situation occurs. The setup is as follows. We have two possible coins, coin A and coin B. The top half of Figure 8 shows the model of the world given that coin A has been chosen. In this conditional model, the probability of heads is 0.9. Furthermore, the probability of heads is 0.9 in the second toss no matter what happened in the first toss, and so on as we continue:

P(H | coin A) = 0.9

A similar assumption is made in the other possible conditional universe. This is a universe in which we are dealing with coin B, the bottom half of Figure 8. This time, the probability of heads at each toss is 0.1. (Figure 8: a model for flipping a coin, either coin A or coin B.)

P(H | coin B) = 0.1

Thus, given a particular coin, we assume that we have independent tosses. This is another way of saying that we are assuming conditional independence: within this conditional model, that is, given that we have selected a particular coin, the tosses are independent. Suppose now that we choose one of the two coins, each coin being chosen with the same probability, 0.5. We then start flipping that chosen coin over and over.

The question we will try to answer is whether the coin tosses are independent.
And by this we mean a question that refers to the overall model: in the general model, where you do not know ahead of time which of the two coins, coin A or coin B, is going to be selected and then flipped, are the different coin tosses independent?

We can approach this question by trying to compare conditional and unconditional probabilities; that is what independence is about. Independence means that certain conditional probabilities are the same as the unconditional probabilities. So, for example, let us determine whether the 11th coin toss is dependent on, or independent of, what happened in the first 10 coin tosses. That is, we compare

P(toss 11 is H)

with

P(toss 11 is H | first 10 tosses are H's)

Let us calculate these probabilities. For the first one, we use the total probability theorem. There is a certain probability that we have coin A, times the probability of heads in the 11th toss given coin A; plus a certain probability that it is coin B, times the conditional probability of heads given coin B:

P(toss 11 is H) = P(A) P(H11 | A) + P(B) P(H11 | B)

We use the numbers given in this example: 0.5 probability of obtaining a particular coin, 0.9 probability of heads for coin A, 0.5 probability that it is coin B, and 0.1 probability of heads if it is indeed coin B. We do the arithmetic and find that the answer is 0.5:

P(toss 11 is H) = 0.5 · 0.9 + 0.5 · 0.1 = 0.5

which makes perfect sense: we have coins with different biases, but the average bias is 0.5. If we do not know which coin it is going to be, the average bias is 0.5, so the probability of heads in any particular toss is 0.5 when we do not know which coin it is.

Suppose now that someone told you that the first 10 tosses were heads. Will this affect your beliefs about what is going to happen in the 11th toss?
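Both quantities can be computed exactly by conditioning on which coin was chosen; a short sketch (the intuitive argument follows in the text):

```python
# Mixture of two coins: coin A with bias 0.9, coin B with bias 0.1,
# each chosen with probability 0.5; tosses independent given the coin.
p_a, p_b = 0.9, 0.1
prior = 0.5

# Unconditional probability of heads on any single toss (e.g. toss 11)
p_h11 = prior * p_a + prior * p_b
assert abs(p_h11 - 0.5) < 1e-12

# P(toss 11 is H | first 10 tosses are H), from the definition of
# conditional probability: P(11 heads in a row) / P(10 heads in a row)
p_10_heads = prior * p_a**10 + prior * p_b**10
p_11_heads = prior * p_a**11 + prior * p_b**11
p_cond = p_11_heads / p_10_heads
assert abs(p_cond - 0.9) < 1e-4  # very close to 0.9, far from 0.5
```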
We can calculate this quantity using the definition of conditional probabilities, or the Bayes rule, but let us instead think intuitively. If it is coin B, the event of 10 heads in a row is extremely unlikely. Thus, if

I see 10 heads in a row, then I should conclude that it is almost certain that I am dealing with coin A. The information I am given tells me that I am extremely likely to be dealing with coin A when I see 10 H's in a row. So we might as well condition on the essentially equivalent information that it is coin A that I am dealing with. But if it is coin A, then the probability of heads is going to be equal to 0.9:

P(toss 11 is H | first 10 tosses are H's) ≈ P(H11 | A) = 0.9

So the conditional probability is quite different from the unconditional probability. The information about the first 10 tosses affects my beliefs about what is going to happen in the 11th toss. Consequently, we do not have independence between the different tosses.

## Independence of a Collection of Events

Suppose I have a fair coin which I toss multiple times. I want to model a situation where the results of previous tosses do not affect my beliefs about the likelihood of heads in the next toss. And I would like to describe this situation by saying that the coin tosses are independent. You may say: we already defined the notion of independent events – doesn't that notion apply? Well, not quite. We defined independence of two events. But here we want to talk about independence of a collection of events. For example, we would like to say that the events "heads in the first toss", "heads in the second toss", "heads in the third toss", and so on, are all independent. What is the proper definition? Let us start with intuition. We will say that a family of events is independent if knowledge about some of the events doesn't change my beliefs, my probability model, for the remaining events.
For example, if I want to say that events A1, A2, and so on are independent, I would like relations such as the following to be true: the probability that event A3 happens and A4 does not happen remains the same even if I condition on some information about some other events, like the event that A1 happened, or that both A2 happened and A5 did not happen:

If A1, A2, … are independent, then P(A3 ∩ A4^c) = P(A3 ∩ A4^c | A1 ∪ (A2 ∩ A5^c))   (2)

The important thing to notice here is that the indices involved in the event of interest are distinct from the indices associated with the events on which I am given some information. I am given some information about the events A1, A2, and A5, and this information does not affect my beliefs about something that has to do with events A3 and A4. We would like all relations of this kind to be true. One possible definition would be to say that a family of events is independent if and only if any relation of this type is true. But such a definition would not be aesthetically pleasing. Instead, we introduce the following definition, which mimics our earlier definition of independence of two events.

Definition: We will say that a collection of events is independent if we can calculate probabilities of intersections of these events by multiplying individual probabilities – and this should be possible for all choices of indices involved and for any number of events involved:

P(Ai ∩ Aj ∩ … ∩ Aq) = P(Ai) P(Aj) … P(Aq)   for any distinct indices i, j, …, q

Let us translate this into something concrete. Consider the case of three events, A1, A2, and A3. Our definition requires that we can calculate the probability of the intersection of any two of the events by multiplying individual probabilities:

P(A1 ∩ A2) = P(A1) P(A2)
P(A1 ∩ A3) = P(A1) P(A3)
P(A2 ∩ A3) = P(A2) P(A3)

And we would like all three of these relations to be true, because this property should hold for any choice of the indices. What do we have here? The first relation tells us that A1 and A2 are independent.
The second one tells us that A1 and A3 are independent. The third, that A2 and A3 are independent. We call this situation pairwise independence. But the definition requires something more: it requires that the probability of the three-way intersection can also be calculated the same way, by multiplying individual probabilities:

P(A1 ∩ A2 ∩ A3) = P(A1) P(A2) P(A3)

And this extra condition does make a difference, as we are going to see in a later example. Is this the right definition? Yes. One can prove formally that if the conditions in this definition are satisfied, then any relation of the kind shown in equation (2) above is true. In particular, we also have relations such as the following:

P(A3) = P(A3 | A1 ∩ A2) = P(A3 | A1 ∩ A2^c) = P(A3 | A1^c ∩ A2)

Thus, any kind of information that I might give you about events A1 and A2 – which of them occurred and which didn't – is not going to affect my beliefs about the event A3: the conditional probabilities are going to be the same as the unconditional probabilities. I said earlier that this definition implies that all relations of this kind are true. This can be proved, but the proof is a bit tedious, and we will not go through it.

Check Your Understanding: Suppose that A, B, C, and D are independent. Use intuitive reasoning (not a mathematical proof) to answer the following.
1. Is it guaranteed that A ∩ C is independent of B^c ∩ D?
2. Is it guaranteed that A ∩ B^c ∩ D is independent of B^c ∩ D^c?

1. The occurrence of event A ∩ C contains information about A and C, but provides no information on the occurrence of B, D, or, for that matter, B^c ∩ D. Hence we have independence.
2. Event D influences both of the events A ∩ B^c ∩ D and B^c ∩ D^c, and therefore introduces a dependence between them. For a more concrete argument: if we are told that event A ∩ B^c ∩ D occurs, then we know that D occurred.
Consequently, D^c did not occur, and this reduces the probability of event B^c ∩ D^c (in fact, to zero).

## Independence versus Pairwise Independence

We will now consider an example that illustrates the difference between the notion of independence of a collection of events and the notion of pairwise independence within that collection. The setup is simple. We have a fair coin which we flip twice. At each flip, there is probability 1/2 of obtaining heads. Furthermore, we assume that the two flips are independent of each other. Let H1 be the event that the first coin toss resulted in heads, which corresponds to the upper two squares, labeled HH and HT, in the diagram. Let H2 be the event that the second toss resulted in heads, which corresponds to the top left and bottom left squares – the two ways that we can have the second toss being heads. (Figure 9: two tosses of a coin.) Here is our setup:

H1: first toss is H – top left and top right squares in Figure 9.
H2: second toss is H – top left and bottom left squares in Figure 9.
P(H1) = P(H2) = 1/2

Now, we are assuming that the tosses are independent. So the outcome HH has a probability equal to the probability that the first toss resulted in heads, which is 1/2, times the probability that the second toss resulted in heads, which is 1/2. So we have probability 1/4 for this outcome:

P(HH) = P(H) P(H) = 1/2 · 1/2 = 1/4

By a similar argument, the probabilities of the other three outcomes in Figure 9 are:

P(HT) = P(TH) = P(TT) = 1/4

Let us now introduce a new event, call it C, the event that the two tosses had the same result. So this is the event that we obtain either HH or TT:

C = {HH, TT}: two tosses had the same result – top left and bottom right squares in Figure 9.
Is this event C independent from the events H1 and H2? Let us first check for pairwise independence. Let's look at the probability that H1 occurs and C occurs as well:

P(H1 ∩ C) = P(HH) = 1/4 (3)

(This is true because "the first toss resulted in heads and the two tosses had the same result" is the same as "heads followed by heads".) How about the product of the probabilities of H1 and of C? Is it the same? Well, the probability of H1 is 1/2. And the probability of C, what is it? Event C consists of two outcomes, each of probability 1/4, so the total is, again, 1/2:

P(H1) P(C) = 1/2 · 1/2 = 1/4 (4)

So we notice that the probability of the two events happening together is the same as the product of their individual probabilities, and therefore H1 and C are independent events. By the same argument, H2 and C are going to be independent; it's a symmetrical situation. Similarly, you can check that H1 and H2 are also independent of each other. So we have all of the conditions for pairwise independence. Let us now check whether we have independence. To check for independence, we need to also look into the probability of all three events happening and see whether it is equal to the product of the individual probabilities. The probability of all three events happening is the probability that H1 occurs and H2 occurs and C occurs. What is this event? Heads in the first toss, heads in the second toss, and the two tosses are the same: this happens if and only if the outcome is heads followed by heads, and this has probability 1/4. On the other hand, the product of the individual probabilities is 1/8:

P(H1 ∩ H2 ∩ C) = P(HH) = 1/4
P(H1) P(H2) P(C) = 1/2 · 1/2 · 1/2 = 1/8

so P(H1 ∩ H2 ∩ C) ≠ P(H1) P(H2) P(C). Consequently, in this example, H1, H2, and C are pairwise independent, but they are not independent in the sense of an independent collection of events.
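This pairwise-but-not-jointly-independent behavior can be confirmed by enumerating the four equally likely outcomes. A minimal sketch in exact arithmetic (the helper `P` and the event encodings are our own):

```python
from fractions import Fraction

# Two independent tosses of a fair coin: four outcomes, each with probability 1/4.
space = {("H", "H"): Fraction(1, 4), ("H", "T"): Fraction(1, 4),
         ("T", "H"): Fraction(1, 4), ("T", "T"): Fraction(1, 4)}

def P(event):
    """Probability of the set of outcomes satisfying `event`."""
    return sum(prob for outcome, prob in space.items() if event(outcome))

H1 = lambda o: o[0] == "H"   # first toss is heads
H2 = lambda o: o[1] == "H"   # second toss is heads
C  = lambda o: o[0] == o[1]  # the two tosses agree

# Pairwise independence: every pair satisfies the product rule.
for X, Y in [(H1, C), (H2, C), (H1, H2)]:
    assert P(lambda o: X(o) and Y(o)) == P(X) * P(Y)

# But the triple intersection does not: 1/4 on the left, 1/8 on the right.
print(P(lambda o: H1(o) and H2(o) and C(o)), P(H1) * P(H2) * P(C))  # 1/4 1/8
```

Using `Fraction` keeps every probability exact, so the failed product rule for the triple intersection shows up as a clean 1/4 versus 1/8 rather than a floating-point near-miss.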
How do we understand this intuitively? If I tell you that event H1 occurred and I ask you for the conditional probability of C given that H1 occurred, what is it? The only way that the two tosses can have the same result is for the second toss to also come up heads:

P(C | H1) = P(H2 | H1).

Furthermore, since H2 and H1 are independent, this is just the probability of H in the second toss, and this number is 1/2:

P(H2 | H1) = P(H2) = 1/2.

And 1/2 is also the same as P(C). To recap, we have:

P(C | H1) = P(H2 | H1) = P(H2) = 1/2 = P(C)

This is another way of understanding the independence of H1 and C: being told that the first toss resulted in H does not help you in any way in guessing whether the two tosses will have the same result or not. The first one was H, but the second one could be either H or T with equal probability. Thus event H1 does not carry any useful information about the occurrence or non-occurrence of event C. On the other hand, if I were to tell you that both events H1 and H2 happened, what would the conditional probability of C be? If both H1 and H2 occurred, then the results of the two coin tosses were identical, so you know that C also occurred. So this probability must be equal to 1:

P(C | H1 ∩ H2) = 1

And this number, 1, is different from the unconditional probability of C, which is 1/2. So we have here a situation where knowledge of H1 having occurred does not help you in making a better guess on whether C is going to occur: H1 by itself does not carry any useful information. But the two events together, H1 and H2, do carry useful information about C. Once you know that H1 and H2 occurred, then C is certain to occur, so your original probability for C, which was 1/2, now gets revised to a value of 1. This means that H1 and H2 together do carry information relevant to C.
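The same enumeration replays this conditional reasoning, assuming the standard definition P(E | F) = P(E ∩ F) / P(F); the helper name `cond` is our own:

```python
from fractions import Fraction

# Same model: two independent fair tosses, four equally likely outcomes.
space = {out: Fraction(1, 4)
         for out in [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]}

def P(event):
    """Probability of the set of outcomes satisfying `event`."""
    return sum(prob for outcome, prob in space.items() if event(outcome))

def cond(event, given):
    """Conditional probability P(event | given) = P(event and given) / P(given)."""
    return P(lambda o: event(o) and given(o)) / P(given)

H1 = lambda o: o[0] == "H"   # first toss is heads
H2 = lambda o: o[1] == "H"   # second toss is heads
C  = lambda o: o[0] == o[1]  # the two tosses agree

print(cond(C, H1))                         # 1/2, equal to P(C): H1 alone tells us nothing
print(cond(C, lambda o: H1(o) and H2(o)))  # 1: with both tosses known, C is certain
```

The two printed values are exactly the revision described above: conditioning on H1 leaves the probability of C at 1/2, while conditioning on both H1 and H2 pushes it to 1.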
Therefore, C is not independent from these two events jointly. And we say that events H1, H2, and C are not independent.

Reliability

Independence is a very useful property. Whenever it holds, we can break up complicated situations into simple ones. In particular, we can do separate calculations for each piece of a given model and then combine the results. We're going to look at an application of this idea to the analysis of reliability of a system that consists of independent units. We have a system that consists of a number, let's say n, of units. Each one of the units can be "up" or "down", and it is going to be "up" with a certain probability p_i. Furthermore, we will assume that unit failures are independent. Intuitively, what we mean is that failure of some of the units does not affect the probability that some of the other units will fail. If we want to be more formal, we might proceed as follows. We could define an event U_i to be the event that the i-th unit is "up", and then make the assumption that the events U1, U2, ..., Un are independent. Alternatively, we could define events F_i, where F_i is the event that the i-th unit is down, or that it has failed. We could assume that the events F_i are independent, but we do not actually need a separate assumption: as a consequence of the assumption that the U_i's are independent, one can argue that the F_i's are also independent. How do we know that this is the case? If we were dealing with just two units, then this is a fact that we have already proved a little earlier: we did prove that if two events are independent, then their complements are also independent. Now that we're dealing with multiple events, a general number n, how do we argue? Here we use the intuitive understanding that we have of what independence means.
Independence in this context means that whether some units are "up" or down does not change the probabilities that some of the other units will be "up" or down. Under that interpretation, independence of the events that units are "up" is essentially the same as independence of the events that units have failed. So we take this meaning for granted, and now we move on to do some calculations for particular systems. Consider a particular system that consists of three components. We will say that the system is "up" if there exists a path from the left to the right that consists of units that are "up". So in this case, for the system to be "up", we need all three components to be "up", and we proceed as follows.

The probability that the system is "up" is the probability that the first unit is "up", and the second unit is "up", and the third unit is "up". And now we use independence to argue that this is equal to the probability that the first unit is "up" times the probability that the second unit is "up" times the probability that the third unit is "up". In the notation that we have introduced, this is just p1 times p2 times p3:

P(system up) = P(U1 ∩ U2 ∩ U3) = P(U1) P(U2) P(U3) = p1 p2 p3

Now, let us consider a different system. In this system, we will say that the system is "up", again, if there exists a path from the left to the right that consists of units that are "up". In this particular case the system will be "up" as long as at least one of the three components is "up". We would like again to calculate the probability that the system is "up". The system will be "up" as long as either unit 1 is "up", or unit 2 is "up", or unit 3 is "up". How do we continue from here? We cannot use independence readily, because independence refers to probabilities of intersections of events, whereas here we have a union. How do we turn a union into an intersection? This is what De Morgan's Laws allow us to do, and it involves taking complements. Instead of formally invoking De Morgan's Laws, let's just argue directly. Consider the event that unit 1 fails, and unit 2 fails, and unit 3 fails. What is the relation between this event and the event that the system is "up"? They are complements. Why is that? Either all units fail, which is the first event, or there exists at least one unit which is "up". So since one event is the complement of the other, their probabilities must add to 1, and consequently we have this relation:

P(system up) = 1 - P(F1 ∩ F2 ∩ F3)
And now we're in better shape, because we can use the independence of the events F_i to write this as 1 minus the product of the probabilities that each one of the units fails. With the notation that we have introduced using the p_i's, this is as follows. The probability that unit 1 fails is 1 minus the probability that it is "up"; similarly for the second unit, and the same for the third unit:

P(system up) = 1 - (1 - p1)(1 - p2)(1 - p3)

So we have derived a formula that gives the probability that a system of this kind is "up" in terms of the probabilities of its individual components.

Check Your Understanding: Suppose that each unit of a system is up with probability 2/3 and down with probability 1/3. Different units are independent. For each one of the systems shown below, calculate the probability that the whole system is up (that is, that there exists a path from the left end to the right end, consisting entirely of units that are up).
1. What is the probability that the following system is up?
2. What is the probability that the following system is up?
In the first diagram, the parallel connection of the two units (on the right) is down when both units fail, which happens with probability (1/3)(1/3) = 1/9. Therefore the parallel connection is up with probability 1 - 1/9 = 8/9. The overall system is up if the first unit is up (probability 2/3) and the parallel connection is also up (probability 8/9), which happens with probability (8/9)(2/3) = 16/27. In the second diagram, the top path is up when both of its units are up; this happens with probability (2/3)(2/3) = 4/9. Thus it fails with probability 1 - 4/9 = 5/9. The overall system fails when the top path fails (probability 5/9) and the bottom path also fails (probability 1/3). Thus the probability of failure is (5/9)(1/3) = 5/27.
It follows that the probability that the system is up (does not fail) is 1 - 5/27 = 22/27.
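The series and parallel rules derived above compose to give both answers. A sketch in exact arithmetic, assuming the diagrams are as the solution describes them (system 1: a single unit feeding a two-unit parallel block; system 2: a two-unit series path in parallel with a single unit); the helper names `series` and `parallel` are our own:

```python
from fractions import Fraction

def series(*ps):
    """A chain is up only if every unit in it is up: the product of the p's."""
    out = Fraction(1)
    for q in ps:
        out *= q
    return out

def parallel(*ps):
    """A parallel bank is up unless every branch fails: 1 - product of (1 - p)."""
    out = Fraction(1)
    for q in ps:
        out *= 1 - q
    return 1 - out

p = Fraction(2, 3)  # each unit is up with probability 2/3, as in the exercise

# Sanity check against the three-unit formulas from the text.
assert series(p, p, p) == p ** 3
assert parallel(p, p, p) == 1 - (1 - p) ** 3

# System 1: one unit in series with a two-unit parallel block.
print(series(p, parallel(p, p)))    # 16/27

# System 2: a two-unit series path in parallel with a single unit.
print(parallel(series(p, p), p))    # 22/27
```

Because the two helpers each return a probability, they nest freely, so any series-parallel reliability diagram of independent units can be evaluated by composing them the same way.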
