Psychology 210 Statistical Methods
Statistics Lab #8

As always, reboot your computer and launch SPSS. This week, we'll be tackling the famous and/or infamous ANalysis Of VAriance, or ANOVA. ANOVA is probably the most widely used statistical method in all of psychology, primarily because of its flexibility. As a psychologist, you'll come to know, appreciate, and yes, even love all of its wonderful intricacies. To start, let's open a data file that contains information about the singers in the New York City chorale society (on the web page as "singers.sav"). http://people.whitman.edu/~herbrawt/classes/210/psych210.html

The essence of an ANOVA is the F-ratio, which can be defined as the variability between groups relative to the variability within groups. If this sounds familiar, that's because it's conceptually related to the t-ratio. If you ponder for a moment what the numerator and denominator of the t-ratio really mean, it should make sense:

F = (variance between groups) / (variance within groups)        t = (X̄ − μ) / s_X̄

Let's jump right in and start by running a quick two-group ANOVA. Select Analyze → Compare Means → One-Way ANOVA. The "One-Way" means simply that we've got one independent (grouping) variable. Later, we'll consider what to do with more than one independent variable. All that's left is to specify a pair of variables to analyze... say, Sex (male or female) and Height. Send them to the appropriate boxes in the ANOVA window. The only real constraint is that the Dependent variable should be an interval or ratio variable, and the Factor should be discrete. This should allow us to see whether male and female singers are, on average, different heights.

Remember that an ANOVA is a hypothesis-testing procedure, just like a t-test or chi-square. In this case, the hypotheses involve a comparison of means:

H0: µmale = µfemale
H1: µmale ≠ µfemale

Click OK before the anticipation overwhelms you. The output will be a simple-looking table.
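If you want to check this equivalence outside SPSS, here is a rough Python sketch of the same two-group analysis. The heights below are made-up illustrative numbers, not the actual singers.sav data; the point is only that with exactly two groups, the ANOVA's F is the square of the pooled t statistic and the p values agree.

```python
# Sketch (not part of the SPSS lab): a two-group one-way ANOVA in Python,
# showing that with exactly two groups, F = t^2 and the p values match.
# The height values are made-up illustrative data, NOT the singers.sav file.
from scipy import stats

male_heights = [70, 72, 69, 71, 73, 68, 74]
female_heights = [64, 65, 63, 66, 62, 65, 64]

# One-way ANOVA across the two groups
f_stat, f_p = stats.f_oneway(male_heights, female_heights)

# Pooled-variance independent-samples t-test (SPSS's "equal variances assumed")
t_stat, t_p = stats.ttest_ind(male_heights, female_heights)

df_within = len(male_heights) + len(female_heights) - 2
print(f"F(1, {df_within}) = {f_stat:.3f}, p = {f_p:.4f}")
print(f"t({df_within}) = {t_stat:.3f}, p = {t_p:.4f}")
print(f"t^2 = {t_stat**2:.3f}  (same as F)")
```

Because the two-sided pooled t-test and the two-group ANOVA are algebraically the same test, the two p values printed here are identical.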
The important values to consider are those listed under F and Sig. These are the F-ratio and significance level. Remember that F is the ratio of between-groups variance to within-groups variance, and the significance level is the probability of observing a difference this large by random chance alone, if there were no systematic effect. You can interpret them just like you would interpret a t-test: if the probability is less than .05, reject the null hypothesis that the means are really the same.

The traditional way to refer to the results of an ANOVA in a paper is something like the following: "There were statistically reliable differences between the heights of different sexes, F(1, 127) = 153.324, p < .001." Notice that this statement contains the F statistic, the p value ("Sig." on the SPSS printout), and the degrees of freedom between and within groups. Also notice that p < .001 implies that we've rejected the null hypothesis of equal means (again, .000 on the printout is a rounded figure and shouldn't be taken literally as zero). Technically, the F is an omnibus test and doesn't specify which groups are different from each other, only that there are some differences among the possibilities. In this case, however, there are only two groups (males and females) and one possible comparison, so we can conclude that male and female singers are different without needing to run a post-hoc test.

If you're wondering what the big deal is, because this doesn't tell us anything we couldn't find with a simple t-test, give yourself 1,000 imaginary points and a congratulatory handshake. What, then, is the advantage of running an ANOVA if the results are identical to a t-test? To answer this, consider what would happen if one wanted to compare several means: you might decide to run multiple t-tests (i.e., 3 comparisons for 3 means, 6 for 4 means, and so on). This has two unpleasant consequences:

1) Running t-tests over and over is not particularly fun, nor is it time-efficient.
2) The probability of making a Type I error increases with each statistical test.

ANOVA avoids both of these problems by incorporating all of the comparisons into a single statistical test (an "omnibus test," to throw around a little unwieldy jargon). In comparing two means, as we did above, there is no meaningful difference between this and a t-test. However, if we wished to compare more than two groups, we would be wise to use an ANOVA. Let's do this by comparing the heights of people who sing the various voice parts. Again, select Analyze → Compare Means → One-Way ANOVA. This time, select Voice as your factor, and recall that it divides the sample into 4 groups (soprano, alto, tenor, bass). The null and research hypotheses become slightly more complex:

H0: µsoprano = µalto = µtenor = µbass
H1: not all of the means are equal (at least one group differs from another)

Before continuing, click on the Post Hoc button and select the Tukey option. Now click Continue and OK to see the results.

The first table looks the same as it did before. However, we'll need to interpret its output a little bit differently this time. The probability labeled Sig. still belongs to the omnibus test: a significant result tells us that at least one reliable difference exists somewhere among the means included in the test. For our example, that means at least one of the 6 pairwise comparisons reflects a real difference. Unfortunately, we still don't know which, and the basic ANOVA doesn't tell us (though in the previous two-group example it would be logically obvious). To determine which groups are indeed different, we need to run a post-hoc, or a posteriori, test like Tukey's HSD (which we already asked for). The results of this analysis are displayed in the table marked Multiple Comparisons, below the ANOVA table. This table shows comparisons for every pair of means, along with the difference, standard error, and significance level.
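For readers following along outside SPSS, here is a rough Python equivalent of the omnibus test plus Tukey's HSD. The four groups of heights are made-up illustrative numbers, not the real singers.sav data, and scipy.stats.tukey_hsd requires SciPy 1.11 or newer.

```python
# Sketch: omnibus one-way ANOVA across four groups, then Tukey's HSD post-hoc.
# Heights are made-up illustrative numbers, NOT the real singers.sav data.
from scipy import stats

soprano = [64, 63, 65, 62, 64, 66]
alto    = [65, 66, 64, 67, 65, 66]
tenor   = [69, 70, 68, 71, 69, 70]
bass    = [71, 72, 70, 73, 72, 71]

# Omnibus test: is at least one mean different from the others?
f_stat, p_value = stats.f_oneway(soprano, alto, tenor, bass)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")

# Post-hoc test: WHICH pairs differ? (requires SciPy >= 1.11)
result = stats.tukey_hsd(soprano, alto, tenor, bass)
print(result)  # pairwise mean differences, adjusted p values, CIs
```

As in the SPSS Multiple Comparisons table, the Tukey output gives one adjusted p value per pair of groups, so you read off which specific voice parts differ only after the omnibus F is significant.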
Those comparisons with a significance level of less than .05 are statistically reliable and are marked with an asterisk for convenience. It is worth noting that the alpha level is adjusted to account for the added number of comparisons; it is lowered so that the total probability of a Type I error across all tests is .05 (as opposed to the probability of a Type I error for each individual test being .05). Notice that if the overall F-test did not yield a significance level of less than .05, there's no justification for interpreting the post-hoc comparisons.
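To see why this adjustment matters, here is a quick back-of-the-envelope sketch (not SPSS output): if each of k independent tests is run at alpha = .05 with no correction, the chance of at least one false alarm is 1 − (1 − .05)^k, which grows quickly with the number of pairwise comparisons.

```python
# Sketch: how the familywise Type I error rate inflates when every
# pairwise t-test is run at alpha = .05 with no correction
# (treating the tests as independent for simplicity).
alpha = 0.05

def n_comparisons(k_groups):
    """Number of distinct pairwise comparisons among k group means."""
    return k_groups * (k_groups - 1) // 2

for k_groups in (2, 3, 4):
    tests = n_comparisons(k_groups)
    familywise = 1 - (1 - alpha) ** tests
    print(f"{k_groups} groups -> {tests} t-tests -> "
          f"P(at least one Type I error) = {familywise:.3f}")
```

With 4 groups (our four voice parts) the 6 uncorrected t-tests carry roughly a 26% chance of at least one false positive, which is exactly the inflation that Tukey's adjusted alpha is designed to hold at .05.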

