Statistical Techniques II (EXST 7015)
Post-ANOVA or Post-Hoc Tests

Overview of ANOVA

Recall that we are testing for differences among treatments represented by indicator variables. The treatments may be fixed or random.

H0: µ1 = µ2 = µ3 = ... = µk for fixed effects.
H0: σ²τ = 0 for random effects.

Assume the εij are NID(0, σ²) random variables. Remember that this covers three separate assumptions (normality, independence, and homogeneous variance). Also assume no block "interactions" for the RBD.

Every analysis can be expressed as a model with appropriate notation and subscripting. For the CRD:

Yij = µ + τi + εij

For the moment we are concerned only with examining for differences among the treatment levels. We will assume that a significant difference among treatment levels has already been detected with the ANOVA.

Treatment levels may be fixed or random, and determining the appropriate tests depends on recognizing correctly which case we have. With random effects we are probably not interested in individual treatment levels; we are more likely interested in the variability among the treatment levels and the distribution of the levels. With fixed effects we will probably want to compare individual levels.

Post-ANOVA Tests

Having rejected the null hypothesis, we wish to determine how the treatment levels interrelate. This is the "post-ANOVA" part of the analysis. These tests fall into two general categories:

Post hoc tests (LSD, Tukey, Scheffé, Duncan's, Dunnett's, etc.)
A priori tests or pre-planned comparisons (contrasts)

A priori tests are better. These are tests that the researcher plans before gathering the data, and if we dedicate 1 d.f. to each one we generally feel comfortable doing each at some specified level of alpha. However, since multiple tests entail the risk of a higher experimentwise error rate, it would not be unreasonable to apply some technique, such as Bonferroni's adjustment, to ensure an experimentwise error rate at the desired level of alpha (α).

So how might we do these "post hoc" tests? The simplest approach would be pairwise tests of the treatments using something like the two-sample t-test. This test examines the null hypothesis H0: µ1 = µ2 (equivalently H0: µ1 - µ2 = 0) against the two-tailed alternative Ha: µ1 - µ2 ≠ 0, or the one-tailed alternatives Ha: µ1 - µ2 > 0 or Ha: µ1 - µ2 < 0.

Recall two things about the two-sample t-test. First, we had to determine whether the variance was equal for the two populations tested. Second, the variance of the test (the variance of the difference Ȳ1 - Ȳ2) was σ²1/n1 + σ²2/n2. If the variances are equal (as they MUST be for ANOVA), this becomes σ²(1/n1 + 1/n2), and we estimate σ² with the MSE.

So we would test each pair of means using the two-sample t-test:

t = (Ȳ1 - Ȳ2) / √(MSE(1/n1 + 1/n2))

If the design is balanced, this simplifies to t = (Ȳ1 - Ȳ2) / √(2MSE/n).

Notice that if the value of t is greater than the tabular value of t we reject the null hypothesis; if it is less we fail to reject. Call the tabular value t*, and write the case for rejection of the null hypothesis (H0) as

t* ≤ (Ȳ1 - Ȳ2) / √(MSE(1/n1 + 1/n2)).
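As a quick illustration (not part of the original notes), here is a minimal Python sketch of this pairwise t-test using the ANOVA MSE as the pooled variance estimate. The means, MSE, sample sizes, and error degrees of freedom below are made-up numbers for demonstration.

```python
import math
from scipy import stats

def pairwise_t(ybar1, ybar2, mse, n1, n2, df_error, alpha=0.05):
    """Two-sample t-test of H0: mu1 = mu2, using MSE as the pooled variance."""
    se = math.sqrt(mse * (1.0 / n1 + 1.0 / n2))    # sqrt(MSE(1/n1 + 1/n2))
    t = (ybar1 - ybar2) / se
    t_star = stats.t.ppf(1 - alpha / 2, df_error)  # two-tailed tabular value t*
    return t, t_star, abs(t) >= t_star             # reject H0 if |t| >= t*

# Hypothetical balanced example: n = 5 per treatment, MSE on 12 error d.f.
t, t_star, reject = pairwise_t(ybar1=14.2, ybar2=11.7, mse=4.8,
                               n1=5, n2=5, df_error=12)
print(f"t = {t:.3f}, t* = {t_star:.3f}, reject H0: {reject}")
```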
So we would reject H0 if

t* ≤ (Ȳ1 - Ȳ2) / √(MSE(1/n1 + 1/n2))
t* √(MSE(1/n1 + 1/n2)) ≤ (Ȳ1 - Ȳ2)
(Ȳ1 - Ȳ2) ≥ t* √(MSE(1/n1 + 1/n2))

So for any difference (Ȳ1 - Ȳ2) greater than t* √(MSE(1/n1 + 1/n2)) we declare the difference statistically significant (reject H0), and for any smaller value we find the difference consistent with the null hypothesis. Right?

This value, t* √(MSE(1/n1 + 1/n2)), is what R. A. Fisher called the "Least Significant Difference", commonly called the LSD (not to be confused with the Latin Square Design, also abbreviated LSD). We calculate this value for each pair of means: if the observed difference is less, the treatments are "not significantly different"; if it is greater, they are "significantly different".

One last detail. If the design is balanced, t* √(MSE(1/n1 + 1/n2)) simplifies to t* √(2MSE/n). This is nice because all pairwise comparisons then use the same test value. Nice, but not necessary. This is the first of our post-ANOVA tests; it is called the "LSD".

But hey, wait a minute! Didn't Fisher invent the ANOVA in the first place to avoid doing a bunch of separate t-tests? And now we are doing a bunch of separate t-tests. What is wrong with this picture?

So Fisher came up with this: when we do a bunch of separate t-tests, we don't know whether there are any real differences at the α level. When we do the LSD as a post-ANOVA test, we SHOULD know that there are some differences. So we do the LSD only if the ANOVA says there are differences; otherwise we don't do the LSD. This is called "Fisher's Protected LSD". We can use the LSD ONLY if the ANOVA shows differences; otherwise we are NOT justified in using it. Makes sense. But there were still a lot of nervous statisticians looking for something a little better, and as a result there are MANY alternative calculations. We will discuss the "classic" solutions.

Basically, we calculate the LSD with our chosen value of α and then do our mean comparisons. Each test has a pairwise error rate of α. We have already seen one alternative, the Bonferroni adjustment. If we do 5 tests, or 10 tests, our error rate per tail is no more than 5(α/2) or 10(α/2); generally, for g tests it is no more than gα/2. To keep an experimentwise error rate of α, we simply do each comparison using a t value for an error rate of α/2g. For two-tailed tests (which the LSD almost always is) we do each test at α/2, and the Bonferroni test uses a t for an error rate of α/2g. One-tailed tests are possible. The Bonferroni adjustment is simple and widely applicable, but it is conservative: the actual experimentwise error rate is typically below α.
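Below is a minimal sketch of the protected-LSD logic and the Bonferroni adjustment just described, continuing the hypothetical numbers from the earlier example. The treatment means and the "ANOVA was significant" flag are assumptions made for illustration.

```python
import math
from itertools import combinations
from scipy import stats

means = {"A": 14.2, "B": 11.7, "C": 10.9}   # hypothetical treatment means
n, mse, df_error, alpha = 5, 4.8, 12, 0.05  # balanced design, made-up values
g = len(list(combinations(means, 2)))        # number of pairwise tests (here 3)

# "Protected": proceed only if the overall ANOVA F-test rejected H0.
anova_significant = True                     # assumed result of the ANOVA

if anova_significant:
    # Unadjusted LSD: t*(alpha/2) * sqrt(2 MSE / n)
    lsd = stats.t.ppf(1 - alpha / 2, df_error) * math.sqrt(2 * mse / n)
    # Bonferroni version: each test at alpha/(2g) keeps the
    # experimentwise error rate at no more than alpha.
    lsd_bonf = stats.t.ppf(1 - alpha / (2 * g), df_error) * math.sqrt(2 * mse / n)
    for a, b in combinations(means, 2):
        diff = abs(means[a] - means[b])
        print(f"{a}-{b}: |diff| = {diff:.2f}, "
              f"LSD = {lsd:.2f}, Bonferroni LSD = {lsd_bonf:.2f}")
```

Because the design is balanced, a single LSD value (and a single Bonferroni LSD value) serves for every pairwise comparison, exactly as noted above.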

