UMass Amherst PSYCH 240 - Interpreting NHSTs

Psych 240 1st Edition Lecture 22

(These notes represent a detailed interpretation of the professor's lecture. GradeBuddy is best used as a supplement to your own notes, not as a substitute.)

NHSTs and Truth, Part 1
- The majority of the experimental results you will encounter in the social sciences and medicine are evaluated with NHSTs (unfortunately)
- We know that NHSTs don't tell you the probability that any hypothesis is true
- Bayes' theorem can help us draw the best conclusion possible given the limited information available from a NHST

NHST Possible Outcomes
- Type 1 (Alpha) Error: claiming to have evidence for the alternative hypothesis when it is false
- Type 2 (Beta) Error: failing to find evidence supporting the alternative hypothesis when it is true
  - This is a missed opportunity, but not actually a false conclusion (we don't conclude anything from a non-significant result)
- "No Conclusion": failing to find evidence for the alternative hypothesis when it is false
  - This is the best we can hope for when the null is true, because NHSTs can't find evidence for the null
- Detected Evidence: claiming to have evidence for the alternative hypothesis when it is actually true

NHSTs and Truth, Part 2
- We want to figure out the probability that each hypothesis (null and alternative) is true. What three things do we need to know to do that?
  a. The prior probability that the null versus the alternative hypothesis is true
  b. The probability of the test outcome if the null hypothesis is true
  c. The probability of the test outcome if the alternative hypothesis is true
  - We know only one of these from the NHST itself
- Power: the probability of observing a significant result if the alternative hypothesis is true
- To define power, we must define the results we expect if the alternative hypothesis is true
  - In Bayesian statistics, you don't just get to say "anything except this" for an alternative hypothesis
- Effect size: IF there is an effect, how big do we expect it to be?
  - Cohen's d: a standardized measure of effect size that can be used for many different variables
    - Ex.: in an independent-samples t test, it is the distance between the means of samples 1 and 2 divided by the pooled standard deviation of the scores in each sample
    - There are rough standards for small, medium, and large effect sizes in terms of Cohen's d:
      - Small = .2
      - Medium = .5
      - Large = .8
- Factors affecting power
  - Effect size: power is higher for variables that produce larger effects
  - Sample size: power is higher for larger sample sizes
  - Experimental design: power is higher for within-subjects designs than for between-subjects designs
  - Tails: 1-tailed tests are more powerful than 2-tailed tests, but only if the effect really goes in the direction you think it does
    - If you expect the wrong direction for the effect, 1-tailed tests are much less powerful than 2-tailed tests
  - Alpha: power is lower for lower alpha values
- Rough power guidelines
  - Statisticians have formal mathematical ways to define power that are beyond the scope of this course
  - Some researchers report their power, but this is rare
  - When you are making sense of results that you encounter, you can apply some very rough guidelines to make a decent estimate of power
  - A reasonable "middle of the road" power estimate is .6
    - This is the approximate power to detect a medium effect in a between-subjects design with N = 40 (per group) and alpha = .05, 2-tailed
  - Decrease your power estimate for smaller sample sizes; increase it for larger ones
  - Decrease it for experiments trying to detect smaller effects; increase it for larger
effects.
  - Decrease it for lower alpha values
  - Increase it for within-subjects designs
- After we define power, we can specify the probability of every possible outcome of a significance test
- If you can't think of any good reasons why one hypothesis is more plausible than the other, then you can start with even priors
  - p(Alt.) = .5 ; p(Null) = .5
- Researchers are often motivated to find surprising results. In this case, it is reasonable to set the prior probability that the alternative hypothesis is true below .5
  - p(Alt.) < .5 ; p(Null) > .5
- The more surprising the result, the farther below .5 p(Alt.) should be
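The Cohen's d recipe in the notes (for an independent-samples t test: the difference between the two sample means divided by the pooled standard deviation) can be sketched in a few lines of Python. This is a minimal illustration, not from the lecture; the function name and plain-list interface are my own choices:

```python
import math

def cohens_d(sample1, sample2):
    """Cohen's d for an independent-samples t test: the difference
    between the two sample means divided by the pooled standard
    deviation of the scores in each sample."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # Sample variances, each with n - 1 degrees of freedom.
    var1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    # Pooled SD: square root of the df-weighted average of the variances.
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

By the rough standards quoted in the lecture, |d| near .2 would count as small, .5 as medium, and .8 as large.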

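The notes claim that, once power is defined, we can specify the probability of every possible outcome of a significance test. That can be made concrete with a short sketch combining a prior p(Alt.) with alpha and power, under the four-outcome framing from Part 1 (the function name and dictionary layout are illustrative, not from the lecture):

```python
def outcome_probabilities(p_alt, power, alpha):
    """Probability of each of the four NHST outcomes, given the prior
    probability that the alternative hypothesis is true, the test's
    power, and its alpha level."""
    p_null = 1 - p_alt
    return {
        "Type 1 (Alpha) Error": alpha * p_null,      # significant result, null actually true
        "No Conclusion": (1 - alpha) * p_null,       # non-significant result, null true
        "Type 2 (Beta) Error": (1 - power) * p_alt,  # non-significant result, alternative true
        "Detected Evidence": power * p_alt,          # significant result, alternative true
    }

# Even priors, the .6 "middle of the road" power estimate, alpha = .05.
# The four outcomes cover every case, so the probabilities sum to 1.
probs = outcome_probabilities(p_alt=0.5, power=0.6, alpha=0.05)
```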

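Bayes' theorem then delivers what the lecture says a NHST alone cannot: the probability that the alternative hypothesis is true given the observed result. A hedged sketch for the significant-result case, using p(sig | Alt.) = power and p(sig | Null) = alpha (the function and variable names are my own):

```python
def p_alt_given_significant(p_alt, power, alpha):
    """Posterior probability that the alternative hypothesis is true
    after a significant result, via Bayes' theorem:
        p(Alt | sig) = p(sig | Alt) * p(Alt) / p(sig)
    where p(sig | Alt) = power and p(sig | Null) = alpha."""
    p_null = 1 - p_alt
    p_sig = power * p_alt + alpha * p_null  # total probability of a significant result
    return power * p_alt / p_sig

# Even priors with power = .6 and alpha = .05: posterior = .3 / .325, about .92.
even = p_alt_given_significant(p_alt=0.5, power=0.6, alpha=0.05)
# A skeptical prior for a surprising claim, p(Alt.) = .1: about .57.
skeptical = p_alt_given_significant(p_alt=0.1, power=0.6, alpha=0.05)
```

This illustrates the lecture's point about priors: the more surprising the claim (the lower p(Alt.)), the less convincing a single significant result is.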