Chapter 7: Rules for Means and Variances; Prediction

7.1 Rules for Means and Variances

The material in this section is very technical and algebraic. And dry. But it is useful for understanding many of the methods we will learn later in this course.

We have random variables $X_1, X_2, \ldots, X_n$. Throughout this section we will assume that these r.v.'s are independent. Sometimes they will also be identically distributed, but we don't need i.i.d. for our main result. (There is a similar result without independence too, but we won't need it.)

Let $\mu_i$ denote the mean of $X_i$ and $\sigma_i^2$ the variance of $X_i$. Let $b_1, b_2, \ldots, b_n$ denote $n$ numbers, and define

$$W = b_1 X_1 + b_2 X_2 + \cdots + b_n X_n.$$

$W$ is a linear combination of the $X_i$'s. The main result is:

1. The mean of $W$ is $\mu_W = \sum_{i=1}^n b_i \mu_i$.
2. The variance of $W$ is $\sigma_W^2 = \sum_{i=1}^n b_i^2 \sigma_i^2$.

Special cases:

1. The i.i.d. case. If the sequence is i.i.d., then we can write $\mu = \mu_i$ and $\sigma^2 = \sigma_i^2$. In this case the mean of $W$ is $\mu_W = \mu \sum_{i=1}^n b_i$ and the variance of $W$ is $\sigma_W^2 = \sigma^2 \sum_{i=1}^n b_i^2$.

2. Two independent r.v.'s. If $n = 2$, then we usually call them $X$ and $Y$ instead of $X_1$ and $X_2$. We get $W = b_1 X + b_2 Y$, which has mean $\mu_W = b_1 \mu_X + b_2 \mu_Y$ and variance $\sigma_W^2 = b_1^2 \sigma_X^2 + b_2^2 \sigma_Y^2$.

3. Two i.i.d. r.v.'s. Combining the notation of the previous two items, $W = b_1 X + b_2 Y$ has mean $\mu_W = (b_1 + b_2)\mu$ and variance $\sigma_W^2 = (b_1^2 + b_2^2)\sigma^2$.

Especially important is the case $W = X + Y$, which has mean $\mu_W = 2\mu$ and variance $\sigma_W^2 = 2\sigma^2$. Another important case is $W = X - Y$, which has mean $\mu_W = 0$ and variance $\sigma_W^2 = 2\sigma^2$.

7.2 Predicting for Bernoulli Trials

"Predictions are tough, especially about the future." (Yogi Berra)

We plan to observe $m$ Bernoulli trials (BT) and want to predict the total number of successes that we will get. Let $Y$ denote the random variable and $y$ the observed value of the total number of successes in the future $m$ trials. As in estimation, we will learn about point and interval predictions.

7.2.1 When p is Known

We begin with point prediction of $Y$. We adopt the criterion that we want the probability of being correct to be as large as possible. Below is the result.

Calculate the mean of $Y$, which is $mp$. If $mp$ is an integer, then it is the most probable value of $Y$ and our prediction is $\hat{y} = mp$. Here are some examples.

- Suppose that $m = 20$ and $p = 0.50$. Then $mp = 20(0.5) = 10$ is an integer, so 10 is our point prediction of $Y$. With the help of our website calculator (details not given), we find that $P(Y = 10) = 0.1762$.
- Suppose that $m = 200$ and $p = 0.50$. Then $mp = 200(0.5) = 100$ is an integer, so 100 is our point prediction of $Y$. With the help of our website calculator, we find that $P(Y = 100) = 0.0563$.
- Suppose that $m = 300$ and $p = 0.30$. Then $mp = 300(0.3) = 90$ is an integer, so 90 is our point prediction of $Y$. With the help of our website calculator, we find that $P(Y = 90) = 0.0502$.

If $mp$ is not an integer, then it can be shown that the most probable value of $Y$ is one of the two integers immediately on either side of $mp$; just check them both. Here are some examples.

- Suppose that $m = 20$ and $p = 0.42$. Then $mp = 20(0.42) = 8.4$ is not an integer, so the most likely value of $Y$ is either 8 or 9. With the help of our website calculator, we find that $P(Y = 8) = 0.1768$ and $P(Y = 9) = 0.1707$. Thus, $\hat{y} = 8$.
- Suppose that $m = 100$ and $p = 0.615$. Then $mp = 100(0.615) = 61.5$ is not an integer, so the most likely value of $Y$ is either 61 or 62. With the help of our website calculator, we find that $P(Y = 61) = 0.0811$ and $P(Y = 62) = 0.0815$. Thus, $\hat{y} = 62$.

In each of the above examples, the probability that the point prediction is correct is very small. As a result, scientists usually prefer a prediction interval. It is possible to create a one-sided prediction interval, but we will consider only two-sided prediction intervals. We have two choices: use a standard normal curve (snc) approximation or find an exact interval. Even if you choose the exact interval, it is useful to begin with the snc approximation.

The snc approximation says to use the interval

$$mp \pm z\sqrt{mpq},$$

where $q = 1 - p$ and $z$ is the same number we used for the two-sided confidence interval for $p$. Here is an example. Suppose that $m = 100$ and $p = 0.615$. The snc approximate 95% prediction interval is

$$61.5 \pm 1.96\sqrt{61.5(0.385)} = 61.5 \pm 9.54 = [52.96, 71.04].$$

Now, it makes no sense to predict a fractional number of successes, so we round off the endpoints to get $[53, 71]$. We can use our binomial calculator to see whether the snc approximation is any good. Doing this, we find that the exact probability that $Y$ will be in the interval $[53, 71]$ is 0.9483. If this answer had been importantly too small or too large, we could modify either or both endpoints to get a more desirable answer. The point is that the snc approximation will either give us a good answer, as in this case, or help us find a good answer.
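The probabilities and the interval check above were done with the course's website/binomial calculator, which is not included in these notes. As a rough stand-in (an assumption on my part, not the course's tool), the same numbers can be checked with Python's scipy.stats.binom; a minimal sketch:

```python
# A sketch reproducing the Section 7.2.1 checks with scipy.stats.binom,
# used here as a stand-in for the course's website calculator.
from math import sqrt
from scipy.stats import binom

# Point prediction when mp is not an integer: check both neighbors of mp.
m, p = 20, 0.42                          # mp = 8.4, so compare Y = 8 and Y = 9
print(binom.pmf(8, m, p))                # about 0.1768
print(binom.pmf(9, m, p))                # about 0.1707, so the prediction is y = 8

# snc approximate 95% prediction interval: mp +/- z * sqrt(mpq).
m, p, z = 100, 0.615, 1.96
q = 1 - p
half_width = z * sqrt(m * p * q)         # about 9.54
print(m * p - half_width, m * p + half_width)

# Exact coverage of the rounded integer interval [53, 71] from the example:
# P(53 <= Y <= 71) = F(71) - F(52); compare with the 0.9483 found above.
print(binom.cdf(71, m, p) - binom.cdf(52, m, p))
```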
7.2.2 When p is Unknown

We now consider the situation in which $p$ is unknown. We will begin with point prediction. The first problem is that we cannot achieve our criterion's goal: we cannot find the most probable value of $Y$. The most probable value, as we saw above, is at or near $mp$, but we don't know what $p$ is. Thus, we adopt an ad hoc approach: we simply decide to use $m\hat{p}$ as our point prediction, where $\hat{p}$ is our point estimate of $p$. Of course, if $m\hat{p}$ is not an integer, we need to round off to a nearby integer. Because this is ad hoc, I say just round to the nearest integer; if two integers are equally close, round to the one that is an even number.

But where did $\hat{p}$ come from? Well, we need to add another ingredient to our procedure. We assume that we have past data from the CM that will generate the $m$ future trials. We denote the past data as consisting of $n$ trials which yielded $x$ successes, giving $\hat{p} = x/n$ as our point estimate of the unknown $p$. As I said above, our point prediction is $m\hat{p} = mx/n$. This answer is ad hoc; we use it because it is sensible. This …
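As a small illustration of the rounding rule above (round $m\hat{p}$ to the nearest integer, with ties going to the even integer), here is a sketch in Python. The input numbers are hypothetical, chosen only for illustration; Python's built-in round() happens to use the same ties-to-even convention.

```python
# A sketch of the ad hoc point prediction when p is unknown.
# Notation follows the text: x successes in n past trials from the same
# process, predicting the number of successes in m future trials.

def point_prediction(m: int, x: int, n: int) -> int:
    p_hat = x / n        # point estimate of p from the past data
    y_hat = m * p_hat    # m * p_hat, usually not an integer
    # Python's round() sends ties to the nearest even integer,
    # matching the rounding convention stated above.
    return round(y_hat)

# Hypothetical example: 30 successes in 50 past trials, m = 100 future
# trials, so m * p_hat = 60.0 and the prediction is 60.
print(point_prediction(m=100, x=30, n=50))   # 60

# A tie: 17/40 = 0.425, so m * p_hat = 8.5, which rounds to the even neighbor.
print(point_prediction(m=20, x=17, n=40))    # 8
```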

