USC COMM 301L - lect4_1-Measurement validity and reliability

Outline
• Building good measures: Measurement validity and reliability
• Internal vs. External Validity of a Study
• Measurement validity & reliability
• Measurement Reliability
• Reliability – test-retest technique
• Reliability – parallel forms technique
• Reliability – split half technique
• Reliability – intercoder reliability
• Reliability – coefficient judgment
• What else affects measurement validity?
• Validity – content validity
• Validity – concurrent validity
• Validity – predictive validity
• Validity – construct validity (two slides)
• Validity – summary
• Reliability and validity: Exercise
• One more exercise for measurement validity and reliability
• Reliability and validity

Slide 1: COMM 301 – Empirical Research in Communication
Kwan M Lee
Lect4_1

Slide 2: Building good measures – Measurement validity and reliability
• Things to know by the end of the lecture:
– What is external validity vs. internal validity of a study?
– What is measurement validity and reliability?
– How do we assess them?
– What are the various aspects of measurement validity?
– What is the relationship between measurement validity and reliability?

Slide 3: Internal vs. External Validity of a Study
• We will talk more about this in Chap. 7
• External validity
– Is it generalizable?
– Sampling and replication
• Internal validity
– Is it accurate?
– Can be influenced by measurement validity, measurement reliability, and other factors (e.g., history, maturation, sensitization, experiment demand, …)
• Today’s lecture focuses on measurement validity and reliability.

Slide 4: Measurement validity & reliability
• Measurement validity:
– Does the measure give accurate results?
• Measurement validity is an important part of internal validity
– Internal validity: the accuracy of an investigation’s results as influenced by the planning, design, and conduct of the investigation.
• Measurement reliability:
– Does the measure give consistent/reliable results?

Slide 5: Measurement Reliability
• Reliability: the extent to which a measurement gives consistent results
– across time (test-retest)
– across items within a questionnaire (split half; alpha) or between questionnaires (parallel forms)
– across observers (intercoder reliability)
• Reliability is a must for measurement validity (and also for internal validity), but it does not guarantee validity.
• Tests for reliability
– test-retest (self-report); parallel forms (self-report); split half (self-report)
– You don’t need to pay much attention to the above three.
– Cronbach’s alpha (self-report) → this is what most researchers use!
– intercoder reliability (observational)

Slide 6: Reliability – test-retest technique
• Test-retest technique
– the measurement instrument is administered more than once to the same group of respondents
– the results from each administration are compared
– similarity is analyzed with a correlation coefficient (see the sketch after this slide)
• 0 = no similarity, no relationship
• 1 = perfect similarity, perfect relationship
• Potential problems
– test sensitization (e.g., remembering earlier answers)
– maturation (e.g., actual changes over time)
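The reliability coefficients named on the last two slides come down to simple correlation arithmetic. The following is a minimal Python sketch, not part of the original slides, that computes a test-retest correlation and Cronbach's alpha; the respondents, items, and scores are invented purely for illustration.

import numpy as np

# Hypothetical data: 5 respondents answer the same 4-item scale twice.
# Rows = respondents, columns = items (all values are invented).
time1 = np.array([[4, 5, 4, 3],
                  [2, 2, 3, 2],
                  [5, 4, 5, 5],
                  [3, 3, 2, 3],
                  [1, 2, 1, 2]], dtype=float)
time2 = np.array([[4, 4, 4, 4],
                  [2, 3, 2, 2],
                  [5, 5, 4, 5],
                  [3, 2, 3, 3],
                  [2, 1, 1, 2]], dtype=float)

# Test-retest reliability: correlate each respondent's total score
# at time 1 with their total score at time 2.
totals1, totals2 = time1.sum(axis=1), time2.sum(axis=1)
test_retest_r = np.corrcoef(totals1, totals2)[0, 1]

def cronbach_alpha(items):
    """Cronbach's alpha for a matrix of items (rows = respondents, cols = items)."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Test-retest r:    {test_retest_r:.2f}")
print(f"Cronbach's alpha: {cronbach_alpha(time1):.2f}")  # 0.8 or above is the usual threshold

Both statistics land on the same 0-to-1 scale discussed on the coefficient-judgment slide, which is why the 0.8 rule of thumb can be applied to either.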
Slide 7: Reliability – parallel forms technique
• Parallel forms technique
– addresses test sensitization
– uses two separate but parallel measures (e.g., two different questionnaires measuring the same construct) given to the same set of respondents
– no concern for time span
– Drawback: the effort needed to create a second instrument

Slide 8: Reliability – split half technique
• Split half technique
– a single instrument with multiple parallel measures for each construct
– similar to the parallel forms technique, except all measurement items are on the same instrument
– Advantage: efficient; reliability is assessed as we collect the data
– Disadvantage: the effort needed to create parallel measures

Slide 9: Reliability – intercoder reliability
• Intercoder reliability (observational)
– in observing and coding, we use more than one coder
– each coder’s observations and coding are compared
– if the coding is correlated, there is intercoder reliability (see the agreement sketch after the last slide)

Slide 10: Reliability – coefficient judgment
• In general, if the correlation coefficient or Cronbach’s alpha is…
– 0 = no correlation, no reliability
– 1 = perfect positive correlation
– -1 = perfect negative correlation
– 0.8 is the accepted threshold (though it can be somewhat lower than this)

Slide 11: What else affects measurement validity?
• Reliability by itself is not enough to ensure measurement validity; we also need to look at other validity issues:
– content (or face) validity
– predictive validity
– concurrent validity
– construct validity

Slide 12: Validity – content validity
• Content validity (or face validity)
– whether the measurement reflects the characteristics of the construct being measured
– does it appear to measure what it is designed to measure?
– How to assess content validity?
• Panel of experts

Slide 13: Validity – concurrent validity
• Concurrent validity
– how well a measurement instrument compares with a previously validated one
– How to assess concurrent validity?
• take the instrument in question and the already validated instrument
• administer both to the same group of respondents
• compare the two sets of results
• the results should be similar

Slide 14: Validity – predictive validity
• Predictive validity
– a measurement’s ability to predict the expected outcomes
• Example: SAT score (a predictor of success in higher education)
– How to assess predictive validity?
• Examine the relationship between the measurement and the later outcome
– e.g., SAT and college GPA
• The key is to determine the appropriate outcome measure

Slide 15: Validity – construct validity
• Construct validity
– the measurement is consistent with the theoretical framework it evolved from
– The theoretical relationships between the variable measured by the instrument under consideration and other variables should be observed.
– e.g., an aggression measure and its theoretical relationships to gender, pro-social activities, etc.

Slide 16: Validity – construct validity (cont.)
• To assess construct validity
– One solution is the “known groups” method
• theory is used to generate/discover two groups of subjects, one with a high level of the construct and one with a low level of the construct
• the measurement instrument in question is administered to both groups
• the results are compared
• if the instrument has construct validity, it should clearly tell the two groups apart
• Example
– Create low vs. high anxiety groups → Measure → Compare (see the known-groups sketch after the last slide)
– Check theoretical relationships with other variables

Slide 17: Validity – summary
• Judgment based – Face validity
• Criterion based – Predictive validity; Concurrent validity
• Theory based – Construct validity

Slide 18: Reliability and validity: Exercise
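To make the intercoder reliability and known-groups ideas concrete, here is a companion Python sketch, again with invented data and hypothetical variable names. It computes percent agreement between two coders plus Cohen's kappa (a chance-corrected agreement statistic that goes beyond the slides), and compares scale means for a high-anxiety and a low-anxiety group as a rough known-groups check.

import numpy as np

# Hypothetical intercoder reliability check: two coders classify the same
# 10 messages as aggressive (1) or not aggressive (0).
coder_a = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
coder_b = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 1])
percent_agreement = (coder_a == coder_b).mean()

# Cohen's kappa corrects that agreement for the agreement expected by chance.
p_o = percent_agreement
p_e = (coder_a.mean() * coder_b.mean()
       + (1 - coder_a.mean()) * (1 - coder_b.mean()))
kappa = (p_o - p_e) / (1 - p_e)

# Hypothetical known-groups check for construct validity: an anxiety scale
# should score a clinically anxious group higher than a relaxed control group.
high_anxiety_scores = np.array([38, 41, 35, 44, 40], dtype=float)
low_anxiety_scores = np.array([18, 22, 25, 20, 19], dtype=float)

print(f"Percent agreement: {percent_agreement:.2f}")
print(f"Cohen's kappa:     {kappa:.2f}")
print(f"Known-groups means: high = {high_anxiety_scores.mean():.1f}, "
      f"low = {low_anxiety_scores.mean():.1f} (a valid measure should separate them)")

In published content analyses, chance-corrected statistics such as Cohen's kappa or Krippendorff's alpha are typically reported rather than raw percent agreement.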

