Unobtrusive Measures and Secondary Analysis
CCJS300 Study Guide

Validity and Reliability

Thinking about error in research: what can go wrong?
- Error may be seen as a kind of invalidity, and the sources of potential error or invalidity are always present.
- How can we deal with issues of validity and reliability?
- Validity: does my measuring instrument in fact measure what it claims to measure?
- Reliability: stability and consistency of measurement; if the study were repeated, would the instrument yield stable and uniform measures?

Research Mythbusters: how to spot invalid claims of scientific research (Robert Park, 2003)
- The discoverer pitches the claim directly to the media.
- The discoverer says that a powerful establishment is trying to suppress his or her work.
- The scientific effect involved is always at the very limit of detection; the effect size is small.
- Evidence for a discovery is anecdotal ("data" is not the plural of "anecdote").
- The discoverer claims that a belief is credible because it has existed for centuries.
- The discoverer has worked in isolation.
- The discoverer must propose new laws of nature to explain an observation (warp speed is science fiction, not science fact).

Types of Validity
Ways of determining validity, and types of validity:
- Face Validity: does the measuring instrument appear, at face value, to be measuring what I am attempting to measure? If it walks, quacks, and poops like a duck, it is probably a duck. (Rosenberg self-esteem test.)
- Content Validity: examine each item or question (the content of the instrument) to judge whether each item measures the concept in question. Item-by-item analysis.
- Construct (Concept) Validity: does the instrument measure what it has been designed to measure? A rather philosophical, theoretical kind of validity. Construct validity questions the fit between the theoretical and operational definitions of terms.
- Pragmatic (Criterion) Validity: does the instrument work?
  o Concurrent validity: does the measure enhance our ability to gauge present characteristics of the item?
  o Predictive validity: can we use the item to accurately forecast or predict future events or conditions?
- Convergent/Discriminant Validation: use multiple methods to measure multiple traits; you should find two things:
  o a convergence of similar concepts across the different measurement methods used
  o a discrimination of different concepts across the same measurement methods
  The use of multiple methods to measure the same phenomena is called triangulation.

Reliability
- Reliability: should have stable and consistent replication of findings upon repeated measurement.
- Stable: a respondent should give the same answer to the same question upon being retested.
- Consistent: are the items used to measure some phenomenon highly related or associated with each other?
  o Look at inter-item correlations.
  o Rosenberg Self-Esteem Scale.

Types of Reliability
- Test-Retest: the same instrument is administered twice to the same population. If the results are basically the same, we will assume stability of measurement. Potential issues:
  o Pretest bias present.
  o Could be a testing effect.
- Multiple Forms: administer alternate forms of the instrument to the same population. A disguised test-retest situation; should get the same results on each form.
- Split-Half Technique: each half of a set of questions or scale may be analyzed separately. Questions should all measure the same thing. No testing effects (only administered one time). Used for larger or longer scales.

Reliability: The Statistic
- Several different types are available with the SPSS procedure Scale (reliability analysis).
- Most commonly used is Cronbach's Alpha.
- Ranges from 0 to 1, similar to a correlation coefficient but with no negative values: 0 = no reliability, 1 = very reliable.
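Cronbach's alpha and the split-half idea can also be computed outside SPSS. Below is a minimal Python sketch, assuming NumPy and invented Likert-type responses; the split_half function applies the Spearman-Brown correction to step the half-scale correlation up to full length, a standard adjustment for this technique that the notes above do not mention.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    0 = no reliability, 1 = very reliable.
    """
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def split_half(items: np.ndarray) -> float:
    """Split-half reliability: correlate the two halves of the scale,
    then adjust upward with the Spearman-Brown correction."""
    half = items.shape[1] // 2
    first = items[:, :half].sum(axis=1)
    second = items[:, half:].sum(axis=1)
    r = np.corrcoef(first, second)[0, 1]        # correlation between the halves
    return 2 * r / (1 + r)                      # Spearman-Brown prophecy formula

# Invented responses: 5 respondents answering a 4-item Likert-type scale (1-4).
responses = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
    [1, 2, 1, 2],
    [3, 3, 4, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
print(f"Split-half reliability: {split_half(responses):.2f}")
```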
If validation and reliability are so important, why are there so few replication studies?
- Little professional esteem in replication studies.
- Exact replication may be hard to do.
- If design flaws exist in the original study, why replicate it? Move on.
- Issues of funding for replications.
- Lack of a tradition of validation studies, unlike the "harder" sciences.

Scaling and Index Construction

Levels of Measurement
All variables may be classified as belonging to a particular level of measurement, and the level determines which statistics are appropriate (a short example follows this list).
- Nominal: simplest level of measurement.
  o Cases are placed into mutually exclusive categories.
  o Ex: gender, political party, etc.
  o Appropriate statistics (assuming the variable is an IV): t-test, chi-square, cross-tabs (joint probability distribution).
- Ordinal: embodies all the properties of nominal variables.
  o Values may be rank-ordered (lower values to higher values).
  o The numbers imply some distance between the ranks (attitude questions: strongly agree to strongly disagree).
  o Appropriate statistics: ANOVA, chi-square, other measures of association; the Mann-Whitney U test (based on ordinal, but can compare both nominal and ordinal); the Spearman rank-order correlation coefficient.
- Interval: contains all properties of nominal and ordinal variables, but also:
  o Assumes equal and uniform distances between the values of the variable.
  o No true zero.
  o Ex: temperature scales, IQ test scores.
  o Appropriate statistics: Pearson correlation coefficient, regression, multiple regression, advanced techniques (factor analysis, canonical correlation).
- Ratio: contains the properties of all the other levels and has a fixed, meaningful zero point.
  o Only level of measurement where we can speak about "twice as many" or "half as many."
  o Ex: age, height, weight, number of felony arrests.
  o Appropriate statistics: same as interval.
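To make the "appropriate statistics" entries above concrete, here is a small hypothetical sketch using scipy.stats: a chi-square test on a cross-tab for nominal variables, a Spearman rank-order correlation for ordinal items, and a Pearson correlation for interval/ratio variables. All of the variable names, counts, and values are invented for illustration.

```python
import numpy as np
from scipy import stats

# Nominal IV vs. nominal DV (e.g., political party by some yes/no outcome):
# chi-square test on a cross-tab of hypothetical counts.
crosstab = np.array([[20, 35],
                     [30, 15]])
chi2, p, dof, expected = stats.chi2_contingency(crosstab)
print(f"Chi-square = {chi2:.2f}, p = {p:.3f}")

# Two ordinal items (strongly agree ... strongly disagree, coded 1-5):
# Spearman rank-order correlation coefficient.
item_a = [1, 2, 2, 3, 4, 4, 5]
item_b = [1, 1, 3, 3, 3, 5, 5]
rho, p = stats.spearmanr(item_a, item_b)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Interval/ratio variables (e.g., age and number of felony arrests):
# Pearson correlation coefficient.
age = [18, 22, 25, 30, 35, 40, 45]
arrests = [4, 3, 3, 2, 2, 1, 0]
r, p = stats.pearsonr(age, arrests)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```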
Scaling Procedures
- Using more than just one question to measure complex concepts; developing sets of questions.
- Why would we want to use composite (multi-question) measures?
  o Developing single-question indicators of complex concepts may be hard to do; a single question may not capture the variable we want to use.
  o Good for ordinal-level variables; provides a wider range of variation.
  o Indexes and scales are efficient for data analysis; several questions give us more comprehensive and accurate indicators.
  o The mean (x-bar) is a very efficient estimator.

Distinction Between Scales and Indexes
- Index: constructed through simple accumulation of scores assigned to individual attributes (a short sketch follows below).
- Scale: constructed through the assignment of scores to patterns of attributes (Rosenberg). Differs from an index by taking advantage of any intensity structure that may exist among those attributes.

General Instructions on How to Devise Multi-Question Scales
- All questions used must be relevant to the variable you are trying to measure.
- All questions should be equally weighted.
- Try to use variables measured at
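A minimal sketch of index construction as "simple accumulation of scores," using invented responses to a Rosenberg-style 10-item self-esteem measure scored 0-3. The reverse-coded item positions are assumed for illustration only and are not taken from the actual instrument.

```python
import numpy as np

# Invented responses to a Rosenberg-style 10-item self-esteem measure,
# each item scored 0-3 (strongly disagree ... strongly agree).
responses = np.array([
    [3, 0, 3, 2, 1, 0, 3, 1, 0, 2],
    [2, 1, 2, 2, 1, 1, 2, 1, 1, 2],
    [1, 3, 1, 0, 2, 3, 1, 2, 3, 1],
])

# Negatively worded items are reverse-coded before summing so that a higher
# score always means higher self-esteem. Which items are reversed depends on
# the instrument; these 0-based column positions are assumed for illustration.
reverse_items = [1, 4, 5, 7, 8]
responses[:, reverse_items] = 3 - responses[:, reverse_items]

# Index construction: simple accumulation of the item scores (range 0-30).
index_scores = responses.sum(axis=1)
print(index_scores)
```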
