CIS 2033 Lecture 5, Spring


happen have the same probability. So these three outcomes are equally likely. Finally, let us calculate a conditional probability. Suppose that we were told that there was exactly one head, so, in particular, one of the events in the blue rectangles has occurred. And we are interested in the probability that the first toss is heads, which corresponds to the three leaves circled in green in Figure 3. So we are after the following probability:

P(first toss is H | exactly 1 H occurred) = ?

Can you guess the answer? The answer should be 1/3. Why is that? Each one of the outcomes in the blue rectangles, HTT, THT, TTH, has the same probability. So when you condition on one of these three having happened, the conditional probability of each particular outcome of the three should be 1/3. Indeed, to reiterate, given that one of these three happened, there is, in particular, probability 1/3 that the top one (the one that is also in the green circle, namely HTT) has happened.

Figure 3: Same tree as in Figures 1 and 2. Here, the three leaves that are circled in green have all resulted in the first toss being an H.

OK, so intuitively this probability should be 1/3. But let us see if we can derive this answer in a formal manner. Let us use the definition of conditional probabilities:

P(first toss is H | exactly 1 H occurred) = P(H1 and exactly 1 H occurred) / P(exactly 1 H occurred) = ...

Now, for the numerator, the probability of both events happening, i.e. that we have exactly one head and the first toss is heads, is the intersection of the event in the blue rectangle and the event in the green set, which can happen only if the sequence HTT occurs, the probability of which we calculated above. Above, we also calculated the denominator. So we continue:

... = p(1-p)^2 / (3 p(1-p)^2) = 1/3

OK, great!
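The conditional probability above can be checked numerically. Here is a minimal sketch (not part of the original lecture) that enumerates the eight outcomes of three independent tosses with an arbitrary bias p and verifies that the answer is 1/3 regardless of p:

```python
from itertools import product

def check_conditional(p):
    """Compute P(first toss is H | exactly one H occurred) for three
    independent tosses with bias p, by enumerating all 2**3 outcomes."""
    p_exactly_one = 0.0      # denominator: P(exactly one H)
    p_first_h_and_one = 0.0  # numerator: P(first is H and exactly one H)
    for outcome in product("HT", repeat=3):
        prob = 1.0
        for s in outcome:
            prob *= p if s == "H" else (1 - p)
        if outcome.count("H") == 1:
            p_exactly_one += prob
            if outcome[0] == "H":
                p_first_h_and_one += prob
    return p_first_h_and_one / p_exactly_one

# The ratio p(1-p)^2 / (3 p(1-p)^2) = 1/3 does not depend on p.
for p in (0.2, 0.5, 0.9):
    assert abs(check_conditional(p) - 1/3) < 1e-12
```

The assertions pass for every bias tried, matching the formal derivation.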
So we got the same answer formally as our intuition suggested; so far all is good with the mathematics. Or with our intuition. Let me now make a few comments about this particular model. This particular example is special in the following respect. We have that the probability of H2 (heads in the second toss), given H1 (i.e. that the first toss was heads), is equal to p. And the same is true for the conditional probability of heads in the second toss given that the first one was tails:

P(H2 | H1) = p = P(H2 | T1)

In other words, knowing the result of the first toss doesn't change our beliefs about what may happen, and with what probability, in the second toss. Furthermore, if you calculate the unconditional probability of heads in the second toss, what you would get using the total probability theorem would be the following:

P(H2) = P(H1) P(H2 | H1) + P(T1) P(H2 | T1),   (1)

and if you plug in the values and do the algebra, this turns out to be equal to p again:

P(H2) = p · p + (1 - p) · p = p

So the unconditional probability of heads in the second toss turns out to be the same as the conditional probabilities in equation (1). Again, knowing what happened in the first toss doesn't change your beliefs about the second toss, which were associated with this particular probability, p. What we are going to do next is to generalize this special situation by giving a definition of independence of events, and then discuss various properties and concepts associated with independence.

Independence of Two Events

In the previous example, we had a model where the result of the first coin flip did not affect the probabilities of what might happen in the second toss.
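The total probability calculation in equation (1) is a one-liner to verify. A small sketch, using an arbitrary bias for illustration:

```python
def p_heads_second_toss(p):
    """Total probability theorem for the second toss of two independent
    flips with bias p: P(H2) = P(H1)P(H2|H1) + P(T1)P(H2|T1)."""
    return p * p + (1 - p) * p

# The unconditional probability collapses back to p itself.
for p in (0.1, 0.5, 0.7):
    assert abs(p_heads_second_toss(p) - p) < 1e-12
```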
This is a phenomenon that we call independence, which we now proceed to define. Let us start with a first attempt at the definition. We have an event, B, that has a certain probability of occurring. We are then told that event A occurred, but suppose that this knowledge does not affect our beliefs about B, in the sense that the conditional probability remains the same as the original unconditional probability. Thus, the occurrence of A provides no new information about B. In such a case, we may say that event B is independent from event A. So let us write down our first, the so-called "intuitive", definition of independence:

P(B | A) = P(B)

If this is indeed the case, notice what the probability is that both A and B happen:

P(A ∩ B) = P(A) P(B | A)   (always true, by the multiplication rule)
         = P(A) P(B)       (also true, if B is independent of A)

So we can find the probability of both events happening by simply multiplying their individual probabilities. It turns out that this relation is a cleaner way of formally defining the notion of independence. So we will say that two events, A and B, are independent if this relation holds:

P(A ∩ B) = P(A) P(B)

Why do we use this definition rather than the original one? This formal definition has several advantages. First, it is consistent with the earlier definition. If this equality is true, then the conditional probability P(B | A) will be equal to P(B):

P(B | A) = P(A ∩ B) / P(A) = P(A) P(B) / P(A) = P(B).

A more important reason is that this formal definition is symmetric with respect to the roles of A and B. So instead of saying that B is independent from A, based on this definition we can now say that events A and B are independent of each other.
In addition, since this definition is symmetric and since it implies the condition that P(B | A) = P(B), it must also imply the symmetric relation, namely, that P(A | B) = P(A). Finally, on the technical side, conditional probabilities are only defined when the conditioning event has non-zero probability. So the original definition would only make sense in those cases where the probability of the event A is non-zero. In contrast, this new definition makes sense even when we are dealing with zero-probability events. So this definition is indeed more general, and this also makes it more elegant. Let us now build some understanding of what independence actually is. Suppose that we have two events, A and B, both of which have positive probability. Furthermore, suppose that these two events are disjoint (i.e. they do not have any common elements). Are these two events independent? Let us check the definition. The probability that both A and B occur is zero, because the two events are disjoint; they cannot happen together. On the other hand, the probability of A times the probability of B is positive, since each one of the two terms is positive. And therefore these two events cannot be independent. In fact, intuitively, these two events are as dependent as Siamese twins. If you know that A occurred, then you are certain that B did not occur. So the occurrence of A tells you a lot about the occurrence or non-occurrence of B. So we see that being independent is something completely different from being disjoint. Independence is a relation about information. It is important to always keep in mind the intuitive meaning of independence: two events are independent if the occurrence of one event does not change our beliefs about the other; it does not affect the probability that the other event also occurs. Figure 4: Two events, A and B. Are they independent? When do we have independence in the real world?
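The disjoint-versus-independent distinction can be made concrete with a tiny finite model. This sketch (the die example is my own, not from the lecture) uses two disjoint events on a fair six-sided die:

```python
# Uniform sample space: one roll of a fair six-sided die.
omega = {1, 2, 3, 4, 5, 6}
A = {1, 2}   # "roll is 1 or 2"
B = {3, 4}   # "roll is 3 or 4" (disjoint from A)

def prob(event):
    """Probability of an event under the uniform distribution on omega."""
    return len(event) / len(omega)

# Disjoint events with positive probability: P(A ∩ B) = 0,
# yet P(A)P(B) > 0, so independence fails.
assert prob(A & B) == 0
assert prob(A) * prob(B) > 0
assert prob(A & B) != prob(A) * prob(B)
```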
The typical case is when the occurrence or non-occurrence of each of the two events A and B is determined by two physically distinct and non-interacting processes. For example, whether my coin results in heads and whether it will be snowing on New Year's Day are two events that should be modeled as independent. But I should also say that there are some cases where independence is less obvious and where it happens through a numerical accident.

Check Your Understanding: We have a peculiar coin. When tossed twice, the first toss results in Heads with probability 1/2. However, the second toss always yields the same result as the first toss. Thus, the only possible outcomes for a sequence of 2 tosses are HH and TT, and both have equal probabilities. Are the two events A = {Heads in the first toss} and B = {Heads in the second toss} independent? Intuitively, the occurrence of event A gives us information on whether event B will occur, and therefore the two events are dependent. Mathematically, P(A) = P(B) = P(A ∩ B) = 1/2, so that P(A ∩ B) ≠ P(A) P(B).

Check Your Understanding: Let A be an event, a subset of the sample space Ω. Are A and Ω independent? Yes, because P(A ∩ Ω) = P(A) = P(A) · 1 = P(A) P(Ω). Intuitively, P(A) represents our beliefs about the likelihood that A will occur. If we are told that Ω occurred, this does not give us any new information; we already knew that Ω is certain to occur. For this reason P(A | Ω) = P(A).

Check Your Understanding: When is an event A independent of itself? Choose one of the following possible answers: (a) Always; (b) If and only if P(A) = 0; (c) If and only if P(A) = 1; (d) If and only if P(A) is either 0 or 1. Using the definition, A is independent of itself if and only if P(A ∩ A) = P(A) P(A).
Since A ∩ A = A, we have P(A ∩ A) = P(A), and we obtain the equivalent condition P(A) = P(A) P(A), or P(A)(1 - P(A)) = 0, and this happens if and only if P(A) is either 0 or 1, so the answer is (d).

Independence of Event Complements

Let us now discuss an interesting fact about independence that should enhance our understanding. Suppose that events A and B are independent. Intuitively, if I tell you that A occurred, this does not change your beliefs as to the likelihood that B will occur. But in that case, this should not change your beliefs as to the likelihood that B will not occur. So A should be independent of B^c. In other words, the occurrence of A tells you nothing about B, and therefore tells you nothing about B^c either. The previous paragraph gave an intuitive argument that if A and B are independent, then A and B^c are also independent. But let us now verify this intuition through a formal proof. The formal proof goes as follows. We have the two events, A and B. Event A can be broken down into two pieces. One piece is the intersection of A with B; that is the piece shaded in red in Figure 5. The second piece is the part of A which is outside B. And that piece is the intersection of A with the complement of B; that is the blue piece in Figure 5. These red and blue pieces together comprise event A. Now, these two pieces are disjoint from each other. And therefore, by the additivity axiom,

P(A) = P(A ∩ B) + P(A ∩ B^c)

Using independence, the first term becomes P(A) P(B), and we leave the second term as is:

P(A) = P(A) P(B) + P(A ∩ B^c)

Figure 5: A is decomposed into the red and blue parts: A = (A ∩ B) ∪ (A ∩ B^c).

Now let us rearrange terms a little bit.
From the previous two equations it follows that:

P(A ∩ B^c) = P(A) - P(A) P(B) = P(A)(1 - P(B)) = P(A) P(B^c)

So we proved that the probability of A and B^c occurring together is the product of their individual probabilities. And that is precisely the definition of A being independent from B^c. This concludes the formal proof.

Check Your Understanding: Suppose that A and B are independent events. Are A^c and B^c independent? We saw in this section that for any two generic events E1 and E2, independence of E1 and E2 implies independence of E1 and E2^c. In the case of this particular problem, we can apply this result with E1 = A and E2 = B to conclude that since A and B are assumed to be independent, then A and B^c are also independent. Independence is symmetric, so A and B^c being independent is the same as B^c and A being independent. If we now reuse the generic result with E1 = B^c and E2 = A, we can conclude that B^c and A^c are also independent, which by symmetry is the same as A^c and B^c being independent. To summarize:

A and B independent ⇒ A and B^c independent ⇒ B^c and A independent ⇒ B^c and A^c independent ⇒ A^c and B^c independent

Conditional Independence

Conditional probabilities are like ordinary probabilities, except that they apply to a new situation where some additional information is available. For this reason, any concept relevant to probability models has a counterpart that applies to conditional probability models. In this spirit, we can define a notion of conditional independence, which is nothing but the notion of independence applied to a conditional model. Let us be more specific. Suppose that we have a probability model and two events, A and B. We are then told that event C occurred, and we construct a conditional model.
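The complement result just proved can be sanity-checked on a concrete model. This sketch (the biases 0.9 and 0.4 are arbitrary choices for illustration) builds two independent biased tosses and confirms P(A ∩ B^c) = P(A) P(B^c):

```python
from itertools import product

# Two independent biased tosses; A = first toss is H, B = second toss is H.
pA, pB = 0.9, 0.4   # arbitrary biases
outcomes = {
    (a, b): (pA if a == "H" else 1 - pA) * (pB if b == "H" else 1 - pB)
    for a, b in product("HT", repeat=2)
}

def prob(pred):
    """Probability of the set of outcomes satisfying pred."""
    return sum(q for o, q in outcomes.items() if pred(o))

P_A    = prob(lambda o: o[0] == "H")          # P(A)
P_Bc   = prob(lambda o: o[1] != "H")          # P(B^c)
P_A_Bc = prob(lambda o: o[0] == "H" and o[1] != "H")  # P(A ∩ B^c)

# Independence of A and B carries over to A and B^c.
assert abs(P_A_Bc - P_A * P_Bc) < 1e-12
```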
Conditional independence is defined as ordinary independence, but with respect to the conditional probabilities. To be more precise, recall that independence is defined in terms of this relation:

P(A ∩ B) = P(A) P(B)

Figure 6: Are A and B independent given C?

Now, in the conditional model we just use the same relation, but with conditional probabilities instead of ordinary probabilities:

P(A ∩ B | C) = P(A | C) P(B | C)

So this is the definition of conditional independence. We may now ask, is there a relation between independence and conditional independence? Does one imply the other? Let us look at an example. Suppose that we have two events and these two events are independent. We then condition on another event, C. And suppose that the picture is like the one shown in Figure 7. Are A and B conditionally independent? Well, in the new universe where C has happened, events A and B have no intersection. As we discussed earlier, this means that events A and B are "extremely dependent": within C, if A occurs, this tells us that B did not occur. Figure 7: Are A and B independent given C? The conclusion from this example is that independence does not imply conditional independence: in this particular example, we saw that given C, A and B are not independent.

Check Your Understanding: Suppose that A and B are conditionally independent given C, and that P(C) > 0 and P(C^c) > 0. 1. Are A and B^c guaranteed to be conditionally independent given C? 2. Are A and B guaranteed to be conditionally independent given C^c? Answers: 1. We have seen that in any probability model, independence of A and B implies independence of A and B^c. The conditional model (given C) is just another probability model, so this property remains true. 2. This may be true in some special cases, for example, if A and B both have zero probability. However, it is not guaranteed in general.
Suppose, for example, that events A and B have nonempty intersection inside C, and are conditionally independent given C, but have empty intersection inside C^c, which would make them dependent (given C^c).

Independence vs. Conditional Independence

We have already seen an example in which we have two events that are independent but become dependent in a conditional model. Thus independence and conditional independence are not the same. We will now see another example in which a similar situation is obtained. The model is as follows. We have two possible coins, coin A and coin B. The top half of Figure 8 shows the model of the world given that coin A has been chosen. In this conditional model, the probability of heads is 0.9. And, furthermore, the probability of heads is 0.9 in the second toss no matter what happened in the first toss, and so on as we continue:

P(H | coin A) = 0.9

A similar assumption is made in the other possible conditional universe. This is a universe in which we are dealing with coin B, the bottom half of Figure 8. This time, the probability of heads at each toss is 0.1:

P(H | coin B) = 0.1

Figure 8: A model for flipping a coin, either coin A or coin B.

Thus, given a particular coin, we assume that we have independent tosses. This is another way of saying that we are assuming conditional independence: within each conditional model, that is, given that we have selected a particular coin, flips are independent. Suppose now that we choose one of the two coins. Each coin is chosen with the same probability, 0.5. We then start flipping that chosen coin over and over. The question we will try to answer is whether the coin tosses are independent.
And by this we mean a question that refers to the overall model: in the overall model, where you do not know ahead of time which of the two coins, coin A or coin B, is going to be selected and then flipped, are the different coin tosses independent? We can approach this question by trying to compare conditional and unconditional probabilities. That is what independence is about: independence means that certain conditional probabilities are the same as the unconditional probabilities. So, for example, let us check whether the 11th coin toss is dependent on or independent of what happened in the first 10 coin tosses. That is, we compare:

P(toss 11 is H)

with:

P(toss 11 is H | first 10 tosses are H's)

Let us calculate these probabilities. For the first one, we use the total probability theorem. There is a certain probability that we have coin A, and then we have the probability of heads in the 11th toss given that it was coin A. There is also a certain probability that it is coin B, and then a conditional probability that we obtain heads given that it was coin B:

P(toss 11 is H) = P(A) P(H11 | A) + P(B) P(H11 | B)

We use the numbers that are given in this example: 0.5 probability of obtaining a particular coin, 0.9 probability of heads for coin A, 0.5 probability that it is coin B, and 0.1 probability of heads if it is indeed coin B. We do the arithmetic, and we find that the answer is 0.5:

P(toss 11 is H) = 0.5 · 0.9 + 0.5 · 0.1 = 0.5

which makes perfect sense: we have coins with different biases, but the average bias is 0.5. If we do not know which coin it is going to be, the average bias is going to be 0.5. So the probability of heads in any particular toss is 0.5 when we do not know which coin it is. Suppose now that someone told you that the first 10 tosses were heads. Will this affect your beliefs about what is going to happen in the 11th toss?
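The unconditional calculation above is short enough to write out directly. A minimal sketch using the lecture's numbers:

```python
# Two-coin model from the lecture: each coin chosen with probability 0.5;
# coin A has bias 0.9 for heads, coin B has bias 0.1.
P_coinA, P_coinB = 0.5, 0.5
h_A, h_B = 0.9, 0.1

# Total probability theorem for any single toss in the overall model.
p_heads = P_coinA * h_A + P_coinB * h_B
assert abs(p_heads - 0.5) < 1e-12   # the average bias is 0.5
```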
We can calculate this quantity using the definition of conditional probabilities, or the Bayes rule, but let us instead think intuitively. If it is coin B, the event of 10 heads in a row is extremely unlikely. Thus, if we observe 10 heads in a row, we should be almost certain that we are dealing with coin A, and so the probability of heads in the 11th toss should be close to 0.9, which is different from 0.5. Therefore, in the overall model, the tosses are not independent.
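The intuitive argument can be made quantitative with the Bayes rule. This sketch computes the posterior probability of each coin after 10 heads and the resulting predictive probability for the 11th toss:

```python
# Bayes rule in the two-coin model after observing 10 heads in a row.
P_coinA, P_coinB = 0.5, 0.5
h_A, h_B = 0.9, 0.1
n = 10

like_A = h_A ** n   # P(10 heads | coin A), by conditional independence
like_B = h_B ** n   # P(10 heads | coin B)

post_A = P_coinA * like_A / (P_coinA * like_A + P_coinB * like_B)
post_B = 1 - post_A

# Predictive probability of heads on toss 11, by total probability.
p_h11 = post_A * h_A + post_B * h_B

assert post_A > 0.999          # almost certainly coin A
assert abs(p_h11 - 0.9) < 1e-3 # so P(H11 | 10 H) is essentially 0.9, not 0.5
```

Since 0.9 differs from the unconditional value 0.5, the tosses in the overall model are dependent, even though they are conditionally independent given the coin.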

The probability that the system is "up": this is the event that the first unit is "up", and the second unit is "up", and the third unit is "up". And now we use independence to argue that this is equal to the probability that the first unit is "up" times the probability that the second unit is "up" times the probability that the third unit is "up". And in the notation that we have introduced, this is just p1 times p2 times p3. Now, let us consider a different system. In this system, we will say that the system is "up", again, if there exists a path from the left to the right that consists of units that are "up". In this particular case the system will be "up" as long as at least one of those three components is "up". We would like again to calculate the probability that the system is "up". And the system will be "up" as long as either unit 1 is "up", or unit 2 is "up", or unit 3 is "up". How do we proceed from here? We cannot use independence readily, because independence refers to probabilities of intersections of events, whereas here we have a union. How do we turn a union into an intersection? This is what De Morgan's Laws allow us to do, and it involves taking complements. Instead of using De Morgan's Laws formally, let us just argue directly. Let us look at this event: unit 1 fails, and unit 2 fails, and unit 3 fails. What is the relation between this event and the event that the system is up? They are complements. Why is that? Either all units fail, which is this event, or there exists at least one unit which is "up". So since this event is the complement of that event, their probabilities must add to 1, and therefore we have this relation:

P(system up) = 1 - P(F1 ∩ F2 ∩ F3)
And now we are in better shape, because we can use the independence of the events Fi to write this as 1 minus the product of the probabilities that each one of the units fails. And with the notation that we have introduced, using the pi's, this is as follows. The probability that unit 1 fails is 1 minus the probability that it is "up"; similarly for the second unit, 1 minus the probability that it is "up"; and the same for the third unit:

P(system up) = 1 - (1 - p1)(1 - p2)(1 - p3)

So we have derived a formula that tells us the probability that a system of this kind is "up" in terms of the probabilities of its individual components.

Check Your Understanding: Suppose that each unit of a system is up with probability 2/3 and down with probability 1/3. Different units are independent. For each one of the systems shown below, calculate the probability that the whole system is up (that is, that there exists a path from the left end to the right end, consisting entirely of units that are up). 1. What is the probability that the first system is up? 2. What is the probability that the second system is up?

In the first diagram, the parallel connection of the two units (on the right) is down when both units fail, which happens with probability (1/3)(1/3) = 1/9. Therefore the parallel connection is up with probability 1 - 1/9 = 8/9. The overall system is up if the first unit is up (probability 2/3) and the parallel connection is also up (probability 8/9), which happens with probability (8/9)(2/3) = 16/27. In the second diagram, the top path is up when both of its units are up; this happens with probability (2/3)(2/3) = 4/9. Thus it fails with probability 1 - 4/9 = 5/9. The overall system fails when the top path fails (probability 5/9) and the bottom path also fails (probability 1/3). Thus the probability of failure is (5/9)(1/3) = 5/27.
It follows that the probability that the system is up (does not fail) is 1 - 5/27 = 22/27.
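The series and parallel formulas compose, so both diagrams reduce to two helper functions. A minimal sketch (the function names are mine) that reproduces the answers 16/27 and 22/27:

```python
def series(*ps):
    """A series connection is up only if every unit is up."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def parallel(*ps):
    """A parallel connection is up unless every unit fails."""
    fail = 1.0
    for p in ps:
        fail *= 1 - p
    return 1 - fail

p = 2 / 3  # each unit is up with probability 2/3, independently

# First diagram: one unit in series with a two-unit parallel block.
sys1 = series(p, parallel(p, p))
# Second diagram: a two-unit series path in parallel with a single unit.
sys2 = parallel(series(p, p), p)

assert abs(sys1 - 16 / 27) < 1e-12
assert abs(sys2 - 22 / 27) < 1e-12
```

Because the two helpers return plain probabilities, arbitrarily nested series-parallel diagrams can be evaluated by nesting the calls in the same way the diagram nests.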