
One-way ANOVA, I
9.07   4/15/2004

Review
• Earlier in this class, we talked about two-sample z- and t-tests for the difference between two conditions of an independent variable
  – Does a trial drug work better than a placebo?
  – Drug vs. placebo are the two conditions of the independent variable, "treatment"
• Multiple comparisons: we often need a tool for comparing more than two sample means

What's coming up
• In the next two lectures, we'll talk about a new parametric statistical procedure to analyze experiments with two or more conditions of a single independent variable
• Then, in the two lectures after that, we'll generalize this new technique to apply to more than one independent variable

ANalysis Of Variance = ANOVA
• A very popular inferential statistical procedure
• It can be applied to many different experimental designs
  – Independent or related samples
  – An independent variable with any number of conditions, or levels
  – Any number of independent variables
• Arguably it is sometimes over-used. We'll talk more about this later.

An example
• Suppose we want to see whether people's performance on a task depends on how difficult they believe the task will be
• We give 15 easy math problems to 3 groups of 5 subjects
• Before we give them the test, we tell group 1 that the problems are easy, group 2 that the problems are of medium difficulty, and group 3 that the problems will be difficult
• Measure the number of correctly solved problems within an allotted time. How do we analyze our results?
We could do 3 t-tests
• H0: µ_easy = µ_medium,  H0: µ_medium = µ_difficult,  H0: µ_difficult = µ_easy
• But this is non-ideal
  – With α = 0.05, the probability of a Type I error in a single t-test is 0.05
  – Here, we can make a Type I error in any of the 3 tests, so our experiment-wise error rate is 1 − 0.95³ ≈ 0.14
  – This is much larger than our desired error rate
  – Furthermore, the 3 tests aren't really independent, which cranks up p even more
• We perform ANOVA because it keeps the experiment-wise error rate equal to α

ANOVA
• ANOVA is the general-purpose tool for determining whether there are any differences between means
• If there are only two conditions of the independent variable, doing ANOVA is the same as running a (two-tailed) two-sample t-test
  – Same conclusions
  – Same Type I and Type II error rates

Terminology
• Recall from our earlier lecture on experimental design:
  – A one-way ANOVA is performed when there is only one independent variable
  – When an independent variable is studied by having each subject exposed to only one condition, it is a between-subjects factor, and we will use a between-subjects ANOVA
  – When it is studied using related samples (e.g. each subject sees each condition), we have a within-subjects factor, and run a within-subjects ANOVA
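The experiment-wise error-rate arithmetic for k independent tests at level α, P(at least one Type I error) = 1 − (1 − α)^k, is easy to check numerically. A minimal sketch (the function name is ours, not from the lecture):

```python
def experimentwise_error_rate(alpha: float, k: int) -> float:
    """Probability of at least one Type I error across k independent tests,
    each run at significance level alpha."""
    return 1.0 - (1.0 - alpha) ** k

# Three pairwise t-tests at alpha = 0.05, as in the easy/medium/difficult example:
print(round(experimentwise_error_rate(0.05, 3), 4))  # → 0.1426
```

With α = 0.05 and k = 3 this gives 1 − 0.95³ ≈ 0.14, the inflated error rate that motivates using ANOVA instead of multiple t-tests.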
One-way, between-subjects ANOVA
• The concepts behind ANOVA are very much like what we have talked about in terms of the percent of the variance accounted for by a systematic effect
• One thing this means is that we will be looking for a significant difference in means, but we'll do it by looking at a ratio of variances
• We'll talk about this design first

Assumptions of the one-way, between-subjects ANOVA
• The dependent variable is quantitative
• The data was derived from a random sample
• The population represented in each condition is distributed according to a normal distribution
• The variances of all the populations are homogeneous (the homogeneity of variance assumption)
• It is not required that you have the same number of samples in each group, but ANOVA will be more robust to violations of some of its other assumptions if this is true

ANOVA's hypotheses
• ANOVA tests only two-tailed hypotheses
• H0: µ1 = µ2 = … = µk
• Ha: not all µ's are equal

Typical strategy
• Run ANOVA to see if there are any differences
• If there are, do some additional work to see which means are significantly different:
  – Post-hoc comparisons
• Note that you perform post-hoc comparisons only when ANOVA tells you there are significant differences between at least two of the means
• An exception: if there are only two means to begin with, and ANOVA tells you there is a difference in means, you already know that the two means must differ – no need to do any additional work

Analysis of variance
• ANOVA gets its name because it is a procedure for analyzing variance
• Though we are interested in testing for a difference in means, we can do so by analyzing variance
• This has to do with what we've talked about before: the proportion of the variance accounted for by an effect
  – How much do I reduce my uncertainty about the response, if I know the condition?
  – In other words, what proportion of the variance is accounted for by the systematic effect?
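The "proportion of variance accounted for" idea can be made concrete in a few lines of code. This sketch computes eta squared, η² = SS_between / SS_total, for a three-group design like the one in the example; the scores are invented for illustration and are not data from the lecture:

```python
def eta_squared(groups):
    """Proportion of total variance accounted for by group membership:
    SS_between / SS_total (eta squared)."""
    scores = [x for g in groups for x in g]
    grand_mean = sum(scores) / len(scores)
    ss_total = sum((x - grand_mean) ** 2 for x in scores)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    return ss_between / ss_total

# Hypothetical scores (number of problems solved), invented for this sketch:
easy      = [14, 13, 15, 13, 14]
medium    = [12, 11, 13, 12, 12]
difficult = [ 9, 10,  8, 10,  9]
print(round(eta_squared([easy, medium, difficult]), 3))  # → 0.876
```

For these made-up numbers, knowing the condition removes most of the uncertainty about a score: about 88% of the total variance is accounted for by the group differences.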
• "The effect": the means of the red, blue, and green groups are significantly different
  [Figure: total variance split into variance accounted for and variance unaccounted for, illustrated by three group distributions with different means]
• Keeping the variance within each group the same, the bigger the difference in means, the greater the proportion of the variance accounted for
• So, while we're interested in a difference in means, we can get at it by looking at a ratio of variances

Partitioning the variance
• Before, when we talked about proportion of the variance accounted for, we partitioned the variance in the data this way:
  – Total variance = (variance not accounted for) + (variance accounted for)
• As shown in the previous picture, the variance not accounted for is essentially the variance within groups
• So, the more traditional description of the partitioning of the variance is:
  – Total variance = (variance within groups) + (variance between groups)
  [Figure: group means m1, m2, m3 scattered about the grand mean M of the entire experiment; MSwn measures the spread within each group, MSbn the spread of the group means]

Within- and between-group variance
• Essentially, the total variance in the data comes from two sources:
  – Scores may differ from each other even when the participants are in the same condition. This is within-group variance. It is essentially a measure of the basic variation or noise in the system.
  – Scores may differ because they come from different conditions. This is the between-groups variance. This is essentially the signal in the system.
• From these two variances (we will call them mean squares) we will compute a signal-to-noise ratio
• The larger the signal relative to the noise, the stronger the evidence for rejecting H0 in favor of Ha
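Putting the pieces together, the partition SS_total = SS_within + SS_between and the mean-square ("signal-to-noise") ratio can be sketched from scratch. The scores are the same invented numbers as above, not data from the lecture:

```python
def one_way_anova(groups):
    """One-way, between-subjects ANOVA computed from scratch.
    Returns (SS_within, SS_between, F)."""
    scores = [x for g in groups for x in g]
    n, k = len(scores), len(groups)
    grand_mean = sum(scores) / n
    group_means = [sum(g) / len(g) for g in groups]
    # Within-groups SS: spread of scores around their own group mean ("noise")
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)
    # Between-groups SS: spread of group means around the grand mean ("signal")
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, group_means))
    ms_within = ss_within / (n - k)     # within-groups mean square
    ms_between = ss_between / (k - 1)   # between-groups mean square
    return ss_within, ss_between, ms_between / ms_within

# Hypothetical scores, invented for this sketch:
easy      = [14, 13, 15, 13, 14]
medium    = [12, 11, 13, 12, 12]
difficult = [ 9, 10,  8, 10,  9]
ss_wn, ss_bn, F = one_way_anova([easy, medium, difficult])

# The partition holds: total SS equals within-groups SS plus between-groups SS
all_scores = easy + medium + difficult
gm = sum(all_scores) / len(all_scores)
ss_total = sum((x - gm) ** 2 for x in all_scores)
print(round(ss_total, 3), round(ss_wn + ss_bn, 3))  # both 61.333
print(round(F, 2))                                  # → 42.42
```

A large F (signal much bigger than noise) is what will eventually let us reject H0; turning F into a p-value is the subject of the next steps in the lecture.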


MIT 9.07 - Lecture Notes
