UD PSYC 207 - Chapter 5: Measuring Variables and Sampling

- variable: a condition or characteristic that can take on different values or categories
- failing to measure your variable accurately can lead to flawed data
- measurement: the assignment of symbols or numbers to something according to a set of rules
- 4 "scales of measurement": nominal, ordinal, interval, ratio (the same labels are also applied to variables)
- nominal scale: the use of symbols, such as words or numbers, to classify or categorize measurement objects into groups or types
  o simplest, most basic scale
  o nonquantitative
  o identifies types rather than amounts
  o e.g. gender, personality type, country of birth
  o used to name, categorize, or classify
- ordinal scale: a rank-order measurement scale
  o allows you to determine which person is higher or lower on a variable of interest, but not exactly how much higher or lower
  o e.g. order of finish in a marathon, social class, rank ordering of job applicants
  o used to rank order objects or individuals
- interval scale: a scale of measurement with equal intervals of distance between numbers
  o e.g. Celsius temperature, calendar year, IQ scores
  o does not possess an absolute zero point
  o used to rank order, plus has equal intervals or distances between adjacent numbers
- ratio scale: a scale of measurement with rank ordering, equal intervals, and an absolute zero point
  o most quantitative level of measurement
  o e.g. weight, height, response time, Kelvin temperature, annual income
- the two major properties of good measurement are reliability and validity
- reliability: the consistency or stability of scores
- 4 primary types of reliability: test-retest, equivalent forms, internal consistency, interrater reliability
  o test-retest reliability: consistency of scores when the same test is given to the same people at two different times
- reliability coefficients are commonly obtained as quantitative indexes of reliability
- reliability coefficient: a type of correlation coefficient used as an index of reliability (see the correlation sketch after this list)
  o should be strong and positive (i.e., above .70) to indicate strong consistency
- equivalent-forms reliability: consistency of a group's individual scores across two or more versions of the same test
  o e.g. SAT, ACT
  o success of this method depends on the equivalence of the test forms
- internal consistency reliability: consistency with which the items on a test measure a single construct
  o affected by test length; longer tests tend to be more reliable
  o goal is to obtain high reliability with relatively few items for each construct
  o most commonly reported index of internal consistency is coefficient alpha (Cronbach's alpha); it should be .70 or higher, and a high value means the items are consistently measuring the same thing (see the alpha sketch after this list)
  o for multidimensional tests, coefficient alpha should be reported for each dimension separately
- interrater reliability: degree of consistency or agreement between two or more scorers, judges, observers, or raters
  o should be strong and positive
  o also measured by interobserver agreement: the percentage of observations on which different observers' ratings agree
- validity: accuracy of the inferences, interpretations, or actions made on the basis of test scores
- "test" is defined as any measurement procedure or device
- "validation is an inquiry into the soundness of the interpretations proposed for scores from a test"
- all validity types are part of construct validity
- operationalization: the way a construct is represented and measured in a particular research study
  o defines how you determine what belongs in the study
  o e.g. "depressed" meaning individuals scoring above 20 on the Beck Depression Inventory
  o key question: do the operations produce a correct or appropriate representation of the intended construct?
- validation: the gathering of evidence regarding the soundness of inferences made from test scores
  o a continual process
- 3 major ways to collect evidence of validity: based on content, based on internal structure, and based on relations to other variables
- validity evidence based on content (content-related evidence): judgment by experts of the degree to which the items, tasks, or questions on a test adequately represent the construct
  o 1. do the items appear to represent what the researcher is attempting to measure? (face validity)
  o 2. does the set of items underrepresent the construct's content? was anything important excluded?
  o 3. do any items represent something other than what the researcher is trying to measure?
- validity evidence based on internal structure: requires determining how many dimensions the test measures
  o multidimensional construct: a construct consisting of 2 or more dimensions; contrasted with a unidimensional construct
  o factor analysis: a statistical procedure used to determine the number of dimensions present in a set of items
  o the number of subsets of items indicates the number of dimensions
  o indexes are used to indicate the degree of homogeneity of each dimension or factor
  o homogeneity: the degree to which a set of items measures a single construct
    - the 2 primary indices are the item-to-total correlation and coefficient alpha (see the item-to-total sketch after this list)
- validity evidence based on relations to other variables: obtained by relating your test scores to one or more relevant and known criteria
  o a criterion is a standard that you want to correlate with or predict accurately on the basis of your test scores
  o validity coefficient: a type of correlation coefficient used in validation research
  o test scores should be related to the criterion in the predicted direction and magnitude
  o criterion-related validity: degree to which scores predict or relate to a known criterion, such as future performance or an already-established test
    - predictive validity: degree to which scores obtained at one time correctly predict scores on a criterion at a later time
    - concurrent validity: degree to which test scores obtained at one time correctly relate to scores on a known criterion obtained at approximately the same time
  o convergent validity evidence: based on the degree to which the focal test scores correlate with independent measures of the same construct
  o discriminant validity evidence: based on the degree to which the focal test scores do NOT correlate with measures of different constructs
  o known-groups validity evidence: degree to which groups that are known to differ on a construct actually differ according to the test used to measure the construct
- Using Reliability and Validity Information
- norming group: the reference group on which the reported reliability and validity evidence is based
  o if the people you intend to use a test with are very different from the people in the norming group, the reported reliability and validity evidence may not apply to your sample

