UW-Madison SOC 357 - Evaluation Research

Class 18: Evaluation Research

Class Outline
• Evaluation Basics
• Approaches to Evaluation Research
• Types of Evaluation Research Designs
• Internal Validity in Evaluation Research

Evaluation Research
• Evaluation research, or program evaluation, refers to the kind of applied social research that attempts to evaluate the effectiveness of social programs.
• It is appropriate for any study of a planned or actual social intervention.
• The goal is to determine whether a social intervention has produced the intended result.
• Results are not always well received.

Stakeholders
• A stakeholder is someone who has sufficient program knowledge to contribute to the process in meaningful ways, and whose self-defined stake in the program is high (Greene, 1988).
• Types of stakeholders:
– Agents: those persons involved in producing, using, and implementing the program
– Beneficiaries: those persons who profit in some way from the use of the program
– Victims: those persons who are negatively affected by the program

Approaches to Evaluation Research
• Black-box evaluation or theory-driven evaluation
– Black-box evaluation involves determining whether a program has the intended effect.
– Theory-driven evaluation seeks to understand how the program operates and to identify the program elements that are operational.
• Researcher or stakeholder orientation
– Should the evaluators be responsive to program stakeholders, or should they emphasize research expertise and maintain some autonomy in order to develop an unbiased evaluation?
• Quantitative or qualitative methods
– Qualitative methods add depth, detail, and nuance to the evaluation of complex programs.
• Simple or complex outcomes
– Even single-purpose programs may turn out to have multiple outcomes.

A Model for Theory-Driven Evaluation
[Diagram: Treatment (cause) → Intervening Mechanism → Outcome (effect), situated within an Implementation Environment; Generalizability to Other Situations.]
Reference: Chen, Huey-Tsyu (1990). Theory-Driven Evaluations. Newbury Park, CA: Sage Publications, p. 50.

Questions to Be Asked in Theory-Driven Evaluation
• What is the goal of the program?
• What is the treatment?
• Under what circumstances is the program being implemented?
• Does it work?
• What is the effect?
• What other variables could have caused the effect?
• Can you say that this program will work in another place and time?

Internal Validity in Evaluation Research: The Naïve Estimator of Causal Effect
• The naïve way to estimate a treatment effect is to compare units of analysis affected by the program to those unaffected by it.
• Say that in a community, N1 children attended Head Start and N2 did not. Twenty-seven years later, measure the mean years of schooling of the two groups: y1 (those who attended Head Start) and y2 (those who did not).
• We compute y1 - y2 = 13 - 14 = -1.
• Should we conclude from this that Head Start has a negative effect on educational attainment?
• The Westinghouse report (1969).
• The appropriate research question is not a comparison of observed y1 and observed y2.

Internal Validity in Evaluation Research: The Naïve Estimator of Causal Effect
• Rather, we should ask the counter-factual question: for those who attended Head Start, what would have happened to them if they hadn't attended?
– Or, y1t - y1c (t denoting treatment; c denoting control)
– Note that y1t is observed, but y1c is not.
• This is a missing-data problem.
• y1t - y1c is the average treatment effect for the treated.

Causal Effect as a Counter-Factual Question
• We could also ask: for those who did not attend Head Start, what would have happened to them if they had attended?
– Or, y2t - y2c
– Note that y2c is observed, but y2t is not.
• y2t - y2c is the average treatment effect for the control group.

Causal Effect as a Counter-Factual Question

                       If received treatment   If received control   Treatment effect
Treatment group (N1)   y1t                     y1c                   y1t - y1c
Control group (N2)     y2t                     y2c                   y2t - y2c

Assumption for Simple Comparisons
• If the N1 children are comparable to the N2 children, we can assume
– y1c = y2c
– y1t = y2t
• In that case
– y1t - y1c = y2t - y2c = y1t - y2c
• That is, we can use the naïve method to estimate the treatment effect.
• In reality, we need to consider selectivity.

Selectivity Bias: Observed Selectivity
• If subjects who receive a social intervention and those who do not differ in observed characteristics, this type of selectivity is called observed selectivity.
• This problem can be handled by statistical controls in multivariate analyses, which make the two groups comparable.

                       Single-parent families   Intact families
Treatment group (N1)   y1t                      z1t
Control group (N2)     y2c                      z2c
Treatment effect       y1t - y2c                z1t - z2c

Selectivity Bias: Unobserved Selectivity
• The more difficult problem is dealing with selectivity in unmeasured characteristics.
• One type of unobserved selectivity is also called the "endogeneity problem."
– Some people participated in the program because they foresaw that they would benefit from it.
– Some people decided not to participate because they thought participation was not going to work for them.
• Statistical models that correct for unobserved selectivity require strong and implausible assumptions.

Evaluation Research Designs
• Experimental designs
– Example: the well-known High/Scope Perry Preschool study conducted in Ypsilanti, MI.
– Advantage: randomization
– Disadvantage: conclusions from experimental settings may not generalize to natural settings.
• Quasi-experimental designs
– Time-series designs
– Nonequivalent control groups
– Multiple time-series designs
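The naïve estimator and the counter-factual quantities discussed above can be made concrete in a short sketch. All numbers are hypothetical, chosen only to reproduce the 13 vs. 14 Head Start example: we assume a true effect of +1 year for everyone, with attendees starting from a lower baseline.

```python
# Sketch (hypothetical numbers): why the naive estimator can mislead when
# participation is self-selected. Notation follows the slides: y1t/y1c are
# the treated group's outcomes with and without treatment; y2c is the
# control group's observed outcome.

y1c = 12.0        # treated group's (unobserved) schooling had they not attended
y1t = y1c + 1.0   # = 13.0, observed for the treated group (assumed +1 effect)
y2c = 14.0        # control group's observed schooling
y2t = y2c + 1.0   # = 15.0, unobserved counterfactual for the control group

naive = y1t - y2c  # compares two different populations
att   = y1t - y1c  # average treatment effect for the treated

print(naive)  # -1.0: Head Start looks harmful
print(att)    #  1.0: under these assumptions, the program actually helps
```

Because y1c (12) differs from y2c (14), the two groups are not comparable, and the naïve difference of -1 says nothing about the true effect.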
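The observed-selectivity table above can likewise be illustrated with made-up data. In this sketch (all records and the +1 effect are hypothetical), treated children come disproportionately from single-parent families with lower baseline schooling, so the pooled naïve comparison understates the effect, while comparing within each family type, as statistical controls do, recovers it.

```python
# Sketch (made-up data): handling observed selectivity by statistical control.
# Hypothetical records: (family_type, treated, years_of_schooling).
# The true program effect is +1 year within each family type.
records = [
    ("single-parent", 1, 13.0),
    ("single-parent", 1, 13.4),
    ("single-parent", 0, 12.2),
    ("intact",        1, 15.2),
    ("intact",        0, 14.0),
    ("intact",        0, 14.4),
]

def mean(xs):
    return sum(xs) / len(xs)

# Naive pooled comparison ignores family type.
naive = (mean([y for _, t, y in records if t == 1])
         - mean([y for _, t, y in records if t == 0]))

# Statistical control: compare treated and control within each family type
# (as in the slide's table), then average the within-stratum effects.
within = []
for ft in ("single-parent", "intact"):
    treated = [y for f, t, y in records if f == ft and t == 1]
    control = [y for f, t, y in records if f == ft and t == 0]
    within.append(mean(treated) - mean(control))
adjusted = mean(within)

print(round(naive, 2))     # 0.33: pooled comparison understates the effect
print(round(adjusted, 2))  # 1.0: within-stratum comparison recovers it
```

Stratification only works for characteristics we can measure; as the slides note, selectivity on unmeasured characteristics (the endogeneity problem) cannot be fixed this way.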

