RIT SIMG 713 - Repeated Trials

Repeated Trials
Lecture 6, Spring 2002

Bernoulli Trials

Repeated independent trials in which there can be only two outcomes are called Bernoulli trials in honor of James Bernoulli (1654-1705).

• Bernoulli trials lead to the binomial distribution.
• If the number of trials is large and the probability of success is small, then the probability of k successes in n trials can be approximated by the Poisson distribution.
• The binomial distribution and the Poisson distribution are closely approximated by the normal (Gaussian) distribution.
• These three distributions are the foundation of much of the analysis of physical systems for the detection, communication, and storage of information.

Bernoulli Trials

Consider an experiment E that has two outcomes, say a and b, with probabilities p and q = 1 - p, respectively. Let E^n be the experiment that consists of n independent repetitions of E. The outcomes of E^n are sequences of length n with components a and b.

The outcomes of E^2 are {aa}, {ab}, {ba}, {bb}, with probabilities p^2, pq, pq, and q^2, respectively.

Bernoulli Trials

Theorem: The outcomes of E^n are the 2^n sequences of length n. The number of outcomes of E^n that contain a exactly k times is given by the binomial coefficient

    \binom{n}{k} = \frac{n!}{k!\,(n-k)!}.

Proof: Assume that each of the terms in the expansion of (a + b)^n represents one of the possible outcomes of the experiment E^n. Multiplying by (a + b) to form (a + b)^{n+1} produces an expression in which each term of (a + b)^n appears twice: once with a appended and once with b appended. If the assumption is true for n, this construction therefore produces all possible distinct arrangements of n + 1 terms. The assumption is clearly true for n = 2, so the result follows by induction.

Bernoulli Trials

Theorem: The probability that the outcome of an experiment that consists of n Bernoulli trials has k successes and n - k failures is given by the binomial distribution

    b(n, k, p) = \binom{n}{k} p^k (1-p)^{n-k},

where the probability of success on an individual trial is given by p.

The peak value is near k = np, as was established in a homework problem. In an experiment with n trials one can expect about np successes and n(1 - p) failures.

Effect of changes in p

[Figure: the binomial distribution for n = 30 and p = 0.1, 0.3, and 0.5.]

Effect of changes in n

• The mean value of a binomial distribution is µ = np.
• The variance of a binomial distribution is np(1 - p).
• The standard deviation is σ = √(np(1 - p)).
• The standard deviation is a measure of the spread of a distribution about its mean value.
• Both the mean value and the standard deviation increase with the number of trials, but the mean value increases faster.
• The ratio σ/µ is a measure of the spread relative to the mean value:

    \frac{\sigma}{\mu} = \frac{\sqrt{np(1-p)}}{np} = \frac{1}{\sqrt{n}} \sqrt{\frac{1-p}{p}}.

Effect of changes in n

[Figure: the binomial distribution as a function of the fraction k/n for n = 30, 100, and 300. The vertical values are n·b(n, k, p), which compensates for the increasing density of points and gives the plots equal areas.]
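The quantities above are easy to tabulate. The following sketch (Python, standard library only; the function name binom_pmf is illustrative, not from the lecture) evaluates b(n, k, p), confirms that the peak lies near np, and shows the ratio σ/µ shrinking like 1/√n:

```python
import math

def binom_pmf(n: int, k: int, p: float) -> float:
    """b(n, k, p): probability of exactly k successes in n Bernoulli trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 30, 0.3
# The distribution peaks near the mean np.
k_peak = max(range(n + 1), key=lambda k: binom_pmf(n, k, p))
print(f"peak at k = {k_peak}, np = {n * p}")   # -> peak at k = 9, np = 9.0

# The relative spread sigma/mu falls off as 1/sqrt(n).
for m in (30, 100, 300):
    mu, sigma = m * p, math.sqrt(m * p * (1 - p))
    print(f"n = {m:3d}   sigma/mu = {sigma / mu:.4f}")
```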
Law of Large Numbers

Define S_n to be the number of successes in n trials. Then

    P[S_n = k] = b(n, k, p).

We would like to know the behavior of this function in the central region and on the tails. We can examine the tail on the right with the ratio of successive terms. (The tail on the left will follow by symmetry.)

    \frac{b(n, r, p)}{b(n, r-1, p)} = \frac{(n-r+1)p}{rq} = 1 + \frac{(n+1)p - r}{rq} = 1 - \frac{r - (n+1)p}{rq}

Example: n = 120, p = 0.1, r = 15. Then r - np = 15 - 12 = 3 while r - (n+1)p = 15 - 12.1 = 2.9. Hence

    1 - \frac{r - (n+1)p}{rq} > 1 - \frac{r - np}{rq},

so the ratio at r itself can slightly exceed 1 - (r - np)/(rq); every later ratio, however, falls below that value, as shown next.

Law of Large Numbers

The ratio of successive terms decreases as the index increases: for every k > r,

    \frac{b(n, k, p)}{b(n, k-1, p)} = \frac{(n-k+1)p}{kq} \le \frac{(n-r)p}{rq} = 1 - \frac{r - np}{rq} \equiv \rho,

and ρ < 1 when r > np. Therefore the tail sum is smaller than the sum over a geometric series, in which the ratio of terms is the constant ρ. A bound on the probability P[S_n ≥ r] is therefore given by the geometric sum:

    P[S_n \ge r] \le \sum_{k=0}^{\infty} b(n, r, p)\,\rho^k = b(n, r, p) \frac{1}{1 - \rho}.

Substitution of ρ = 1 - (r - np)/(rq) now leads to the upper bound

    P[S_n \ge r] \le b(n, r, p) \frac{rq}{r - np}.

Law of Large Numbers

We now need to replace b(n, r, p) with an upper bound that is easy to work with. We do this by noting that all of the terms between the center, m, and r are greater than b(n, r, p) and that the total of those terms must be less than 1. The number of such terms is at least r - np, so (r - np) b(n, r, p) < 1 and hence b(n, r, p) < 1/(r - np). Putting this into the bound above yields the simple upper bound

    P[S_n \ge r] \le b(n, r, p) \frac{rq}{r - np} \le \frac{1}{r - np} \cdot \frac{rq}{r - np} = \frac{rq}{(r - np)^2}   if r > np.

Law of Large Numbers

A similar analysis could be performed on the left tail. However, this can be avoided by observing that saying that there are at most r successes is the same as saying that there are at least n - r failures. Exchanging n - r for r and p for q on the right-hand side above then yields, after simplification,

    P[S_n \le r] \le \frac{(n-r)p}{(np - r)^2}   if r < np.

Law of Large Numbers

Let us now look at the probability that the number of successes is much different from np. We can address this with the above results. Let r = n(p + ε). Then

    P[S_n \ge n(p + \varepsilon)] \le \frac{n(p + \varepsilon)q}{(n(p + \varepsilon) - np)^2} = \frac{n(p + \varepsilon)q}{(n\varepsilon)^2} \to 0

because the denominator grows as n^2 while the numerator grows only as n. In the same way, the probability on the left tail also decreases with n, so that P[S_n ≤ n(p - ε)] → 0.

Almost all the probability is in the central region, which extends a width nε on either side of the center m = np. The ratio of that width to the center point is ε/p, which can be made as small as one wishes.

Law of Large Numbers

Theorem: The probability that the ratio S_n/n differs from p by less than ε in a set of n Bernoulli trials approaches unity as n increases:

    P\left[\left|\frac{S_n}{n} - p\right| < \varepsilon\right] \to 1.

Equivalently, as n increases, the probability that the fraction of successes differs from p by more than ε tends to zero.*

We find application of the law of large numbers in many areas of science and engineering. One prominent example is Shannon's development of the noisy-channel coding theorem.

*For further discussion of the law of large numbers see William Feller, An Introduction to Probability Theory and Its Applications, Vol. I, page 152.
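As a numerical sanity check on the tail bound, the following sketch (Python, standard library; binom_pmf repeats the illustrative helper defined earlier) compares the exact right-tail probability with rq/(r - np)^2 for the example values n = 120, p = 0.1:

```python
import math

def binom_pmf(n: int, k: int, p: float) -> float:
    """b(n, k, p): probability of exactly k successes in n Bernoulli trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 120, 0.1
q = 1 - p
for r in (15, 18, 24):  # each r > np = 12, as the bound requires
    exact = sum(binom_pmf(n, k, p) for k in range(r, n + 1))
    bound = r * q / (r - n * p) ** 2
    print(f"r = {r:2d}   P[Sn >= r] = {exact:.4f}   bound = {bound:.4f}")
```

The bound is loose near r = np (it can even exceed 1) but tightens as r moves into the tail; with r = n(p + ε) it decays like 1/n, which is all the law of large numbers needs.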
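A minimal Monte Carlo sketch of the theorem itself (Python, standard random module; the trial counts, ε, and run count are arbitrary illustrative choices) estimates P[|S_n/n - p| < ε] for increasing n and shows the probability approaching unity:

```python
import random

def success_fraction(n: int, p: float) -> float:
    """Run n Bernoulli trials; return the fraction of successes S_n / n."""
    return sum(random.random() < p for _ in range(n)) / n

p, eps, runs = 0.3, 0.05, 2000
for n in (30, 100, 300, 1000):
    hits = sum(abs(success_fraction(n, p) - p) < eps for _ in range(runs))
    print(f"n = {n:4d}   P[|Sn/n - p| < eps] ~ {hits / runs:.3f}")
```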

