
Jaymie Ticknor
Quantitative Methods 2317, Sect. 001
19 and 21 February 2014

Lecture 6 (Chapter 6): Statistical Significance, Effect Size, and Statistical Power

Decision Errors
- A decision error is an incorrect decision in hypothesis testing. There are two types: Type I and Type II.
- Type I Error: rejecting the null hypothesis when it is in fact true (we should not have rejected); concluding that the study supports the research hypothesis when it is false. The chance of making a Type I error equals the significance level: if we reject the null hypothesis at p < .05, the chance of a Type I error is 5%. This is also called the alpha level.
- Type II Error: failing to reject the null hypothesis when it is in fact false (we should have rejected); concluding that the results of the study are inconclusive when the research hypothesis is true. Its probability is also called the beta level.

Trade-off between Type I and Type II Errors
- A stricter significance level (e.g., p < .001) makes a Type I error less likely but a Type II error more likely.
- A more lenient significance level (e.g., p < .10) makes a Type II error less likely but a Type I error more likely.
- Researchers usually compromise with the standard significance levels, p < .05 or p < .01.

Effect Size
- Statistical significance tells us whether the result of our study is likely not due to chance, but it does not tell us much about how big the effect is.
- Effect size is a measure of the difference between population means: a larger Population 1 mean (relative to Population 2) means a larger effect size, as does a smaller standard deviation. In research it is important to calculate both significance and effect size.
- Cohen's d: d = (μ1 − μ2) / σ.
- Effect size conventions: Small: d under 0.20; Medium: d = 0.21 to 0.50; Large: d = 0.51 and over.

Meta-Analysis
- A review of research that combines effect sizes from different studies to come up with one overall effect size across multiple studies.

Statistical Power
- The probability that the study will produce a statistically significant result if the research hypothesis is true; if there is a true effect, the likelihood that your study will
detect that effect. The probability of detecting a true effect is the power; power = 1 − β, where β is the probability of a Type II error.
- Effect size, sample size, significance level, and one-tailed vs. two-tailed testing determine the power of a study.
- A larger effect size comes from a greater difference between the population means and a smaller standard deviation in the population, and gives more power.

Increasing Power with Effect Size
- Increase the predicted difference between the population means (μ1 − μ2): change the experimental procedure to get a bigger effect.
- Decrease the population standard deviation: study a population with less variance, or use more standardized testing conditions and more precise measures.

Sample Size
- A larger sample size means more power: as sample size increases, the standard deviation of the distribution of means (the standard error) decreases.
- Increasing power with sample size: get more participants, but this may be costly.

Significance Level
- More lenient significance levels (p < .10, p < .20) give more power; stricter significance levels (p < .01) give less power.
- Increasing power with the significance level: use a less extreme level of significance (p < .10 or p < .20). This is not recommended, because it increases the chance of a Type I error.

One- vs. Two-Tailed Tests
- A one-tailed test has more power than a two-tailed test.

Role of Power in Interpreting Results
- If the effect size and the sample size are large, the study is more likely to be statistically significant; a very small effect could still be statistically significant if the sample size is large.
- If the study is statistically significant: with a small sample size, the result is important (the effect must be large); with a large sample size, the result may or may not have practical importance.
- If the study is not statistically significant: with a small sample size, the study is inconclusive (it may be underpowered); with a large sample size, the research hypothesis is probably false.
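The Cohen's d formula from the notes, d = (μ1 − μ2) / σ, can be sketched in a few lines of Python. The population values below are hypothetical, chosen only to illustrate the arithmetic, and are not from the lecture:

```python
def cohens_d(mu1, mu2, sigma):
    """Cohen's d: the difference between population means, in standard-deviation units."""
    return (mu1 - mu2) / sigma

# Hypothetical populations: means 104 and 100, shared standard deviation 16.
d = cohens_d(104.0, 100.0, 16.0)
print(d)  # 0.25 -> a medium effect by the lecture's conventions (d = 0.21 to 0.50)
```

Note that d depends only on the populations, not on the sample size: doubling n changes the power of a study but leaves the effect size unchanged.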
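The relationships above (bigger effect, bigger sample, more lenient alpha, one-tailed test → more power) can be checked with a small Monte Carlo sketch. This is an illustration under stated assumptions, not the lecture's method: it assumes a one-tailed z test on a sample mean at p < .05, with hypothetical population values:

```python
import random

def simulated_power(mu1, mu0, sigma, n, trials=20_000, seed=1):
    """Estimate power: the fraction of simulated studies that reject H0,
    given that the research hypothesis (true mean = mu1, not mu0) is correct.
    Assumes a one-tailed z test on the sample mean at p < .05."""
    rng = random.Random(seed)
    crit = 1.645                     # z cutoff for p < .05, one-tailed
    se = sigma / n ** 0.5            # standard error shrinks as n grows
    rejections = 0
    for _ in range(trials):
        # One simulated study run in a world where H1 is true:
        sample_mean = rng.gauss(mu1, se)
        z = (sample_mean - mu0) / se
        if z > crit:
            rejections += 1
    return rejections / trials

# Same effect size (d = 0.25), different sample sizes: power rises with n.
print(simulated_power(104, 100, 16, n=25))    # roughly 0.35
print(simulated_power(104, 100, 16, n=100))   # roughly 0.80
```

Going from n = 25 to n = 100 halves the standard error, which is exactly the "larger sample size → more power" mechanism in the notes: the same population difference becomes easier to detect.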
