EVAL 6970: Experimental and Quasi-Experimental Designs
Dr. Chris L. S. Coryn
Dr. Anne Cullen
Spring 2012

Agenda
• Statistical power/design sensitivity
• Statistical conclusion validity and internal validity

Statistical Power/Design Sensitivity

Types of Hypotheses
• General forms:
– Superiority
• Nondirectional or directional
– Equivalence and noninferiority
• Within a prespecified bound

Accept-Reject Dichotomy

                  H0 true             H0 false
Fail to reject    Correct decision    Type II error
                  1 – α               β
Reject            Type I error        Correct decision
                  α                   1 – β

Type I Error
• Type I error (sometimes referred to as a false positive) is the conditional prior probability of rejecting H0 when it is true, where this probability is typically expressed as alpha (α)
• Alpha is a prior probability because it is specified prior to data gathering, and it is a conditional probability because H0 is assumed to be true; it can be expressed as

α = p(Reject H0 | H0 true)

Type II Error
• Power is the conditional prior probability of making the correct decision to reject H0 when it is actually false, where

Power = p(Reject H0 | H0 false)

• Type II error (often referred to as a false negative) occurs when the sample result leads to the failure to reject H0 when it is actually false, and it also is a
conditional prior probability, where

β = p(Fail to reject H0 | H0 false)

Type II Error
• Because power and β are complementary,

Power + β = 1.00

• Whatever increases power decreases the probability of a Type II error, and vice versa
• Several factors affect statistical power, including α levels, sample size, score reliability, design elements (e.g., within-subject designs, covariates), and the magnitude of an effect in the population

Determinants of Power
• Four primary factors (there are others) affect design sensitivity/statistical power:
– Sample size
– Alpha level
– Statistical tests
– Effect size
• By lowering α, for example, the likelihood of a Type I error is reduced, but statistical power is lost, which simultaneously increases the probability of a Type II error

Sample Size
• Statistical significance testing is concerned with sampling error, the discrepancy between sample values and population parameters
• Sampling error is smaller for larger samples and is therefore less likely to obscure real differences, which increases statistical power

Alpha
• Alpha levels influence the likelihood of statistical significance
• Larger alpha levels make significance easier to attain than smaller levels
• When the null hypothesis is false, statistical power increases as alpha increases

Statistical Tests
• Tests of statistical significance are made within the framework of particular statistical tests
• The test itself is one of the factors affecting statistical power
• Some tests are more sensitive than others (e.g., analysis of covariance)

Effect Size
• The larger the true effect, the greater the probability of statistical significance and the greater the statistical power

Basic Approaches to Power
1. Power determination approach (post hoc)
– Begins with an assumption about an effect size
– The aim is to determine the power to detect that effect size with a given sample size
2.
Effect size approach (a priori)
– Begins with a desired level of power and estimates the minimum detectable effect size (MDES) at that prespecified level of power

Working with Power and Precision 2.0 and 3.0

Construct Validity & External Validity

Construct Validity
• The degree to which inferences are warranted from the observed persons, settings, treatments, and outcome (cause-effect) operations sampled within a study to the constructs that these samples represent

Construct Validity
• Most constructs of interest do not have natural units of measurement
• Nearly all empirical studies are studies of specific instances of persons, settings, treatments, and outcomes and require inferences to the higher-order constructs represented by the sampled instances

Why Construct Inferences Are a Problem
• Names reflect category memberships that have implications about relationships to other concepts, theories, and uses (i.e., a nomological network)
• In the social sciences it is nearly impossible to establish a one-to-one relationship between the operations of a study and the corresponding constructs

Why Construct Inferences Are a Problem
• Construct validity is fostered by:
1. Clear explication of the person, treatment, setting, and outcome constructs of interest
2. Careful selection of instances that match those constructs
3. Assessment of the match between instances and constructs
4. Revision of construct descriptions (if necessary)

Assessment of Sampling Particulars
• All sampled instances of persons, settings, treatments, and outcomes should be carefully assessed using whatever methods (i.e., quantitative, qualitative, etc.)
necessary to assure a match between higher-order constructs and sampled instances (i.e., careful explication)

A Note about “Operations”
• To operationalize is to define a concept or variable in such a way that it can be measured or defined (i.e., operated on)
• An operational definition is a description of the way a variable will be observed and measured
– It specifies the actions [operations] that will be taken to measure a variable

Threats to Construct Validity
1. Inadequate explication of constructs. Failure to adequately explicate a construct may lead to incorrect inferences about the relationship between operation and construct
2. Construct confounding. Operations usually involve more than one construct, and failure to describe all constructs may result in incomplete construct inferences
3. Mono-operation bias. Any one operationalization of a construct both underrepresents the construct of interest and measures irrelevant constructs,
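The power relationships described in the slides above (α as the conditional rejection rate under a true null, Power + β = 1.00, and the trade-off when α is lowered) can be illustrated with a small Monte Carlo sketch. This is not the Power and Precision software mentioned in the slides, just an illustrative simulation; the function name `simulated_power` and the specific values (d = 0.5, n = 64 per group) are assumptions chosen for the example, using SciPy's independent-samples t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def simulated_power(effect_size, n, alpha=0.05, reps=2000):
    """Monte Carlo estimate of p(Reject H0 | H0 false) for a two-sample t-test.

    effect_size is the true mean difference in standard-deviation units
    (Cohen's d); with effect_size=0 the null is true, so the rejection
    rate estimates the Type I error rate (alpha) instead of power.
    """
    rejections = 0
    for _ in range(reps):
        control = rng.normal(0.0, 1.0, n)
        treatment = rng.normal(effect_size, 1.0, n)
        _, p_value = stats.ttest_ind(treatment, control)
        if p_value < alpha:
            rejections += 1
    return rejections / reps

# Power and beta are complementary: Power + beta = 1.00
power = simulated_power(effect_size=0.5, n=64, alpha=0.05)
beta = 1.0 - power

# Lowering alpha reduces Type I error risk but also reduces power,
# which raises the probability of a Type II error
power_low_alpha = simulated_power(effect_size=0.5, n=64, alpha=0.01)

# With a true null (effect_size=0), the rejection rate should be
# close to the nominal alpha
type_i_rate = simulated_power(effect_size=0.0, n=64, alpha=0.05)
```

Varying `n` or `effect_size` in the same way reproduces the other determinants of power listed above: larger samples and larger true effects both push the simulated rejection rate toward 1.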

