
Chapter Ten: Analysis of Variance

Lecture Notes
Randall Miller

1. Elements of a Designed Experiment

Definition 10.1
The response variable is the variable of interest to be measured in the experiment. We also refer to the response as the dependent variable.

Definition 10.2
Factors are those variables whose effect on the response is of interest to the experimenter. Quantitative factors are measured on a numerical scale, whereas qualitative factors are not (naturally) measured on a numerical scale.

Definition 10.3
Factor levels are the values of the factor utilized in the experiment.

Definition 10.4
The treatments of an experiment are the factor-level combinations utilized.

Definition 10.5
An experimental unit is the object on which the response and factors are observed or measured.

Definition 10.6
A designed experiment is an experiment in which the analyst controls the specification of the treatments and the method of assigning the experimental units to each treatment. An observational experiment is an experiment in which the analyst simply observes the treatments and the response on a sample of experimental units.

2. The Completely Randomized Design

Definition 10.7
The completely randomized design is a design in which treatments are randomly assigned to the experimental units or in which independent random samples of experimental units are selected for each treatment.

ANOVA F-Test to Compare k Treatment Means: Completely Randomized Design

H0: μ1 = μ2 = ... = μk
Ha: At least two treatment means differ.

Test statistic: F = MST/MSE

Rejection region: F > Fα, where Fα is based on ν1 = (k − 1) numerator degrees of freedom (associated with MST) and ν2 = (n − k) denominator degrees of freedom (associated with MSE).

Conditions Required for a Valid ANOVA F-Test: Completely Randomized Design

1. The samples are randomly selected in an independent manner from the k treatment populations.
(This can be accomplished by randomly assigning the experimental units to the treatments.)

2. All k sampled populations have distributions that are approximately normal.

3. The k population variances are equal (i.e., σ1² = σ2² = ... = σk²).

General ANOVA Summary Table for a Completely Randomized Design

Source       df       SS          MS                    F
Treatments   k − 1    SST         MST = SST/(k − 1)     MST/MSE
Error        n − k    SSE         MSE = SSE/(n − k)
Total        n − 1    SS(Total)

What Do You Do When the Assumptions Are Not Satisfied for the Analysis of Variance for a Completely Randomized Design?

Answer: Use a nonparametric statistical method such as the Kruskal-Wallis H-test of Section 14.5.

Steps for Conducting an ANOVA for a Completely Randomized Design

1. Make sure that the design is truly completely randomized, with independent random samples for each treatment.

2. Check the assumptions of normality and equal variances.

3. Create an ANOVA summary table that specifies the variability attributable to treatments and error, making sure that those variabilities lead to the calculation of the F-statistic for testing the null hypothesis that the treatment means are equal in the population. Use a statistical software package to obtain the numerical results. If no such package is available, use the calculation formulas in Appendix B.

4. If the F-test leads to the conclusion that the means differ,
   a. Conduct a multiple-comparisons procedure for as many of the pairs of means as you wish to compare. (See Section 10.3.) Use the results to summarize the statistically significant differences among the treatment means.
   b. If desired, form confidence intervals for one or more individual treatment means.

5. If the F-test leads to the nonrejection of the null hypothesis that the treatment means are equal, consider the following possibilities:
   a. The treatment means are equal; that is, the null hypothesis is true.
   b.
The treatment means really differ, but other important factors affecting the response are not accounted for by the completely randomized design. These factors inflate the sampling variability, as measured by MSE, resulting in smaller values of the F-statistic. Either increase the sample size for each treatment, or use a different experimental design (as in Section 10.4) that accounts for the other factors affecting the response. [Note: Be careful not to automatically conclude that the treatment means are equal, since the possibility of a Type II error must be considered if you accept H0.]

Formulas for the Calculations in the Completely Randomized Design

CM = Correction for mean
   = (Total of all observations)² / (Total number of observations)
   = (Σ yi)² / n

SS(Total) = Total sum of squares
          = (Sum of squares of all observations) − CM
          = Σ yi² − CM

SST = Sum of squares for treatments
    = (Sum of squares of treatment totals, with each square divided by the number of observations for that treatment) − CM
    = T1²/n1 + T2²/n2 + ... + Tk²/nk − CM

SSE = Sum of squares for error = SS(Total) − SST

MST = Mean square for treatments = SST/(k − 1)

MSE = Mean square for error = SSE/(n − k)

F = Test statistic = MST/MSE

where
n = total number of observations
k = number of treatments
Ti = total for treatment i (i = 1, 2, ..., k)

3. Multiple Comparisons of Means

Determining the Number of Pairwise Comparisons of Treatment Means

In general, if there are k treatment means, there are c = k(k − 1)/2 pairs of means that can be compared.

Guidelines for Selecting a Multiple-Comparison Method in ANOVA

Method       Treatment sample sizes   Types of comparisons
Tukey        Equal                    Pairwise
Bonferroni   Equal or unequal         Pairwise
Scheffé      Equal or unequal         General contrasts

4. The Randomized Block Design

Definition 10.8
The randomized block design consists of a two-step procedure:

1.
Matched sets of experimental units, called blocks, are formed, with each block consisting of k experimental units (where k is the number of treatments). The b blocks should consist of experimental units that are as similar as possible.

2. One experimental unit from each block is randomly assigned to each treatment, resulting in a total of n = bk responses.

ANOVA F-Test to Compare k Treatment Means: Randomized Block Design

H0: μ1 = μ2 = ... = μk
Ha: At least two treatment means differ.
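The completely randomized design formulas above (CM, SS(Total), SST, SSE, MST, MSE, and the F-statistic) can be carried out directly in code. Below is a minimal Python sketch; the three treatment samples are made-up illustrative data, not from the notes:

```python
# One-way ANOVA for a completely randomized design, computed from the
# calculation formulas in the notes. Sample data are hypothetical.

samples = {
    "T1": [10.0, 12.0, 11.0, 13.0],
    "T2": [14.0, 15.0, 13.0, 16.0],
    "T3": [9.0, 10.0, 8.0, 11.0],
}

k = len(samples)                                   # number of treatments
all_obs = [y for ys in samples.values() for y in ys]
n = len(all_obs)                                   # total number of observations

# CM = (total of all observations)^2 / n
cm = sum(all_obs) ** 2 / n

# SS(Total) = sum of squares of all observations - CM
ss_total = sum(y ** 2 for y in all_obs) - cm

# SST = sum of (treatment total)^2 / (treatment sample size), minus CM
sst = sum(sum(ys) ** 2 / len(ys) for ys in samples.values()) - cm

sse = ss_total - sst        # SSE = SS(Total) - SST
mst = sst / (k - 1)         # MST, on k - 1 degrees of freedom
mse = sse / (n - k)         # MSE, on n - k degrees of freedom
f_stat = mst / mse          # test statistic F = MST/MSE

print(f"SST = {sst:.4f}, SSE = {sse:.4f}, F = {f_stat:.4f}")

# Number of pairwise comparisons among the k treatment means
c = k * (k - 1) // 2
print(f"{c} pairwise comparisons of treatment means are possible")
```

To complete the test, compare F with Fα on k − 1 and n − k degrees of freedom from an F table. In practice a statistics package (e.g., `scipy.stats.f_oneway`) computes the same F-statistic along with a p-value.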