PSY 307 – Statistics for the Behavioral Sciences
Chapter 11-12 – Confidence Intervals, Effect Size, Power

Point Estimates
- The best estimate of a population mean is the sample mean.
- When we use a sample to estimate parameters of the population, it is called a point estimate.
- How accurate is our point estimate? The sampling distribution of the mean is used to evaluate this.

Confidence Interval
- The range around the sample mean within which the true population mean is likely to be found.
- It consists of a range of values; the upper and lower values are the confidence limits.
- The range is determined by how confident you wish to be that the true mean falls between those values.

What is a Confidence Interval?
A confidence interval for the mean is based on three elements:
- The value of the statistic (e.g., the sample mean, X̄).
- The standard error (SE) of the measure, σ_X̄.
- The desired width of the confidence interval (e.g., 95% or 99%; z = 1.96 for 95%).
To calculate with z: X̄ ± (z_conf)(σ_X̄)

Levels of Confidence
- A 95% confidence interval means that if a series of confidence intervals were constructed around different sample means, about 95% of them would include the true population mean.
- With a 99% confidence interval, 99% of them would include the true population mean.

Demos
http://www.stat.sc.edu/~west/javahtml/ConfidenceInterval.html
http://www.ruf.rice.edu/~lane/stat_sim/conf_interval/

Calculating Different Levels
- For 95%, use the critical z values that cut off 5% in the tails:
  533 ± (1.96)(11) = 554.56 and 511.44, where X̄ = 533 and σ_X̄ = 11
- For 99%, use the critical values that cut off 1% in the tails:
  533 ± (2.58)(11) = 561.38 and 504.62

Sample Size
- Increasing the sample size decreases the variability of the sampling distribution of the mean: σ_X̄ = σ/√n

Effect of Sample Size
- Because larger sample sizes produce a smaller standard error of the mean, the larger the sample size, the narrower and more precise the confidence interval will be.
- Sample size for a confidence interval, unlike a hypothesis test, can never be too large.

Other Confidence Intervals
- Confidence intervals can be calculated for a variety of statistics, including r and the variance.
- Later in the course we will calculate confidence intervals for t and for differences between means.
- Confidence intervals for percentages or proportions frequently appear as the margin of error of a poll.

Effect Size
- Effect size is a measure of the difference between two populations.
- One population is the null population assumed by the null hypothesis; the other is the population to which the sample belongs.
- For easy comparison, this difference is converted to a z-score by dividing it by the population standard deviation, σ.
[Figure: two overlapping distributions with means X̄1 and X̄2; the distance between them is the effect size]

A Significant Effect
[Figure: the same two distributions with the critical values marked; the effect is large enough that the sample mean falls beyond the critical value]

Calculating Effect Size
- Subtract the means and divide by the null population standard deviation: d = (μ1 − μ0) / σ
- Interpreting Cohen's d: small = .20, medium = .50, large = .80

Comparisons Across Studies
- The main value of calculating an effect size is when comparing across studies.
- Meta-analysis: a formal method for combining and analyzing the results of multiple studies.
- Sample sizes vary and affect significance in hypothesis tests, so test statistics (z, t, F) cannot be compared across studies; effect sizes can.

Probabilities of Error
- The probability of a Type I error is α; most of the time α = .05.
- A correct decision occurs .95 of the time (1 − .05 = .95).
- The probability of a Type II error is β.
- When there is a large effect, β is very small; when there is a small effect, β can be large, making a Type II error likely.

When there is no effect…
- The hypothesized and true distributions coincide.
[Figure: a single normal curve with the one-tailed critical value (z = 1.65, α = .05) marked; sample means beyond it produce a Type I error]

Effect Size and Distribution Overlap
- Cohen's d is a measure of effect size: the bigger the d, the bigger the difference between the means.
http://www.bolderstats.com/gallery/normal/cohenD.html

Power
- The probability of producing a statistically significant result if the alternative hypothesis (H1) is true.
- The ability to detect an effect.
- Power = 1 − β (where β is the probability of making a Type II error).

Small Effects Have Low Power
[Figure: distributions for X̄1 and X̄2 nearly overlapping; little of the H1 distribution lies beyond the critical value, so power is low]

Large Effects Have More Power
[Figure: well-separated distributions for X̄1 and X̄2; most of the H1 distribution lies beyond the critical value, so power is high]

Calculating Power
- Most researchers use special-purpose software or internet power calculators to determine power.
- This requires input of:
  - Population mean and sample mean
  - Population standard deviation
  - Sample size
  - Significance level, 1- or 2-tailed test
http://www.stat.ubc.ca/~rollin/stats/ssize/n2.html

Sample Power Graph 1
[Graph]

Sample Power Graph 2
[Graph]

How Power Changes with N
WISE Demo: http://wise.cgu.edu/powermod/exercise1b.asp

Effect of Larger Sample Size
- Smaller standard deviations mean less overlap between the two distributions.
- Larger samples produce smaller standard errors of the mean.
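The inputs listed under Calculating Power are enough to compute power for a one-sample z test directly, without a web calculator. A minimal sketch using only the Python standard library (the function name and the specific numbers are hypothetical, chosen only for illustration):

```python
import math
from statistics import NormalDist

def z_test_power(mu0, mu1, sigma, n, alpha=0.05, tails=1):
    """Power of a one-sample z test: P(reject H0 | true mean is mu1)."""
    se = sigma / math.sqrt(n)            # standard error of the mean
    shift = (mu1 - mu0) / se             # distance of H1 from H0, in SE units
    z = NormalDist()
    if tails == 1:
        z_crit = z.inv_cdf(1 - alpha)    # one-tailed critical value (~1.65)
        return 1 - z.cdf(z_crit - shift)
    z_crit = z.inv_cdf(1 - alpha / 2)    # two-tailed critical value (~1.96)
    return (1 - z.cdf(z_crit - shift)) + z.cdf(-z_crit - shift)

# Hypothetical example: mu0 = 500, true mean 508, sigma = 40, n = 100
print(round(z_test_power(500, 508, 40, 100), 3))
```

Rerunning with n = 400 instead of 100 gives a noticeably higher value, which is the "How Power Changes with N" point in numeric form.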
β Decreases with Larger N's
- Note: this is for an effect in the negative direction (H0 is the red curve on the right).
[Figure: overlapping H0 and H1 distributions; the β region shrinks as N increases]

Increasing Power
- Strengthen the effect by changing your manipulation (how the study is done).
- Decrease the population's standard deviation by decreasing noise and error (do the study well; use a within-subject design).
- Increase the sample size.
- Change the significance level (α).