MASON PSYC 612 - Lecture 4: Scale Development

PSYC 612, SPRING 2010
Lecture 4: Scale Development
Lecture Date: 9/22/2010

Contents

1 Preliminary Questions
2 Part I: Introduction and Cursory Review (45 minutes; 5 minute break)
  2.1 Measurement - defined, described, and defended
    2.1.1 Precision and accuracy or reliability and validity
    2.1.2 Measurement error and measurement bias
3 Part II: Advanced material - Validity (50 minutes; 5 min break)
  3.0.3 Convergent Validity
  3.0.4 Discriminant Validity
  3.0.5 Criterion Validity
  3.0.6 Content Validity
  3.0.7 Construct Validity
4 Part III: Optional Discussion of Advance Organizer (30 minutes)

1 Preliminary Questions

• Have you read all the assigned reading for today?
• Have you scheduled your module?
• Do you feel prepared for the module?
• Are you ready to begin the second module material today?

2 Part I: Introduction and Cursory Review (45 minutes; 5 minute break)

The authors do a fine job setting the stage for the discussion of social science measurement. It is important to realize that the authors' intent is not to teach you everything about measurement but rather to introduce you to the topic from the perspective of a social scientist. Physical scientists view measurement in a similar light but use different terms and generally different models to characterize measurement.

2.1 Measurement - defined, described, and defended

Measurement is often described as the cold method of assigning numbers to the phenomena we study. Those phenomena might include directly observable properties (e.g., length), tangible characteristics (e.g., softness), or even unobservable characteristics and properties. Most of what we are interested in in social science falls in the latter domain - the unobservable. Measurement is certainly not an easy task even with observable phenomena, but it gets far more difficult when we cannot directly verify what we are measuring because we have no way of observing the phenomenon. Consider the problem of measuring the extent to which a person feels betrayed in an unfaithful relationship, the enthusiasm an investor has about the market, or the confidence a pilot has in his aircraft. These phenomena are important in social science; these and similar "constructs" are what make psychology a fascinating scientific enterprise.

The purpose of covering measurement in a statistics course is so that you can understand the limits of your investigation and your audience can appreciate the theoretical depth of your inquiry. Better measurement leads to stronger findings. Recall the advanced organizer from the beginning of the semester (see Figure 1 below). Measurement limits the extent to which you can draw strong causal inferences. Furthermore, measurement dictates the clarity of our findings. Weak measurement leads to uncertain conclusions with unclear implications.

[Figure 1: Advanced Organizer]

One clear way that measurement influences your statistical models comes in the form of statistical power. Recall what I said last week about statistical power and the denominator. More importantly, measurement influences the magnitude of the effect.
Let me review the concept of statistical power again briefly and then show you how measurement relates to power through effect size estimation. First, statistical power is the probability of rejecting the null hypothesis given that the null is false:

    power = P(p < .05 | H0 false) = 1 − β

To put this into the context of hypothesis testing (our forthcoming topic), I want to introduce you to a simple table - one that should be easy for you to recreate after you see it several times. We discussed this before, but I want to make it a more formally available table instead of one drawn on the board. So here goes....

                                          Reality
                                    H0 = T          H0 = F
    Test      Reject (p < .05)      Type I (α)      Correct
    Result    Not Reject (p > .05)  Correct         Type II (β)

From the table above, I want everyone to see that the real issue in statistical power is the failure to reject the false null (Type II errors). Thus, power is nothing more than 1 − β. Let me explain the logic. Power is the probability that we will reject the null given that the null is false. If we know the probability of the inferential error where we fail to reject the null when it is false (β), then we can easily compute the probability that we will reject the null when it is indeed false; we simply take the difference between perfection (1.0) and β.

The previous paragraph lays out the formal definition of statistical power; note that, so defined, statistical power pertains only to the null hypothesis. One might suppose that statistical power could be adapted to non-null hypotheses; however, the tables and methods for computing statistical power tend to be restricted to the null. Second, statistical power is a function of several variables - sample size, alpha, and effect size - and all hold a direct relationship: as each of the three increases, statistical power increases. For example, as sample size increases, statistical power increases. That relationship holds due to the law of large numbers and the sensitivity of our tests to find significant results with larger samples. Truth be told, the law of large numbers is inherent in our statistical procedures, and therefore p-values are overly sensitive to sample size; that sensitivity is merely reflected in the statistical power calculations. Similar to sample size, alpha is directly related to statistical power. As alpha increases (i.e., your allowance for being wrong increases), statistical power increases because you lowered the threshold of significance. Finally, and most germane to our discussion today, is the relationship between effect size and statistical power. As effect sizes increase, statistical power increases. Understanding how effect sizes are computed is absolutely essential for your appreciation of this relationship, so I will go through a simple effect size computation for your benefit.

The basic equation for an effect size (Cohen's d) is:

    d = (X̄1 − X̄2) / s*

where s* may be either a pooled (mean) standard deviation or the standard deviation from the group that is more generalizable.
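To make these two relationships concrete, here is a minimal sketch in Python (not part of the original lecture notes; it assumes NumPy and SciPy are available, and the names cohens_d and empirical_power are hypothetical helpers of my own). It computes Cohen's d with a pooled standard deviation and estimates power by simulation, as the proportion of experiments with a truly false null in which p < .05:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(612)  # arbitrary seed for reproducibility

    def cohens_d(x1, x2):
        # Cohen's d with a pooled standard deviation: d = (mean1 - mean2) / s*
        n1, n2 = len(x1), len(x2)
        pooled_var = ((n1 - 1) * x1.var(ddof=1) +
                      (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
        return (x1.mean() - x2.mean()) / np.sqrt(pooled_var)

    def empirical_power(true_d, n_per_group, alpha=0.05, reps=2000):
        # power = P(p < alpha | H0 false): simulate experiments in which the
        # null really is false and count how often the t-test rejects it
        rejections = 0
        for _ in range(reps):
            g1 = rng.normal(true_d, 1.0, n_per_group)  # true difference of true_d SDs
            g2 = rng.normal(0.0, 1.0, n_per_group)
            if stats.ttest_ind(g1, g2).pvalue < alpha:
                rejections += 1
        return rejections / reps

    # one simulated experiment: the sample estimate of d
    g1, g2 = rng.normal(0.5, 1.0, 50), rng.normal(0.0, 1.0, 50)
    print(f"observed d = {cohens_d(g1, g2):.2f}")

    # power rises with the true effect size, as claimed above
    for d in (0.2, 0.5, 0.8):
        print(f"true d = {d}: power with n = 50 per group ~ {empirical_power(d, 50):.2f}")

Running the sketch shows power climbing steeply as the true effect grows, which is the lecture's point: better measurement yields a larger (less attenuated) effect size, and a larger effect size buys you power.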

