Berkeley A,RESEC C253 - Impact Evaluation
Impact Analysis Handout - 1 - 10/24/08
PP 253 / ARE 253, Sadoulet / de Janvry, Fall 2008
Handout #6: Impact Evaluation

1. Evaluation systems

Project sequence:
Inputs -> Activities -> Output -> Intermediate outcomes -> Final outcomes (goals)
(Inputs through output: implementation; outcomes: results)

Types of evaluation:
- Programmatic evaluation (logframe): from activities to outputs and outcomes (indicators). Evaluate achieved against planned outputs and outcomes at given times (intermediate and final).
- Comprehensive expenditure analysis: use of resources; observe and explain inconsistencies between actual and planned expenditures.
- Impact analysis: changes in selected indicators of outcomes that can be attributed to a specific intervention.

To do an impact analysis:
- We need to clearly identify a specific intervention (what program, what expected objectives, at what time, at what place, applied to what unit of analysis).
- We need to specify indicators of outcomes (endogenous variables) to be used to measure impact. Hence, the project objectives (goals, mandates) need to be clearly defined. These indicators must be observable before/after or with/without the intervention. They can be indicators of intermediate or final outcomes.
- We need to identify a counterfactual with no intervention against which the change with intervention can be measured: before/after, with/without.
- We need data on many units of observation to do statistical analysis.

Objectives of evaluation systems:
Evaluation is often required by law: yearly in Mexico, as required by Congress; in the U.S., by the 1993 Government Performance and Results Act, fully implemented starting in 1997. It allows managers to engage in results-based management. Use the results of evaluation to:
- Assess the value of the program (ex post).
- Adjust the program (feedback): minor adjustments, major adjustments, redesign, cancel.
- Link evaluation to resource allocation, budgeting, and personnel management.
- Treat evaluation as a learning process (hence the role of participation and ownership).
- Improve evaluation itself: learning to learn (start simple, use pilots, and improve over time).
- Create incentives to learn, use results, and change programs.

Impact evaluation challenge and techniques:
- Selection bias:
  - program placement
  - self-selection
- Techniques for impact evaluation:
  - Experimental design (randomization): treatment and control groups
  - Quasi-experimental design: treatment and comparison groups; matching methods, double-difference techniques
  - Non-experimental design: instrumental variables, statistical methods
  - Qualitative methods

2. Experimental design - Randomization

Randomization makes it possible to create identical treatment and control groups.

- Procedure and ethical issue: treatment group and control group. Example: the rural education program Progresa in Mexico.
- Program impact from the simple difference (average outcome in the treatment group minus average outcome in the control group):

  Impact = (1/N_T) * sum_{i in T} y_i  -  (1/N_C) * sum_{j in C} y_j

  This can be done on subgroups to evaluate heterogeneity of the program effect. Example: school subsidy in urban Pakistan (Quetta).

Need to check that the control and treatment groups have similar distributions of exogenous variables, of the outcome prior to the program (if available), and of behavior prior to the program (if available).

3. Matching method to construct comparison groups

Identify non-participants that are comparable in essential characteristics to the participants. This is possible for a program with partial coverage, i.e., when there exists a large population that, for exogenous reasons, has been excluded from the program.

Key assumption for the validity of the method - selection on observables: once you control for all the observable characteristics, participation in the program is not correlated with any other determinant of the outcome.

Examples: local programs. By contrast to:
- a credit program placed where economic opportunities are highest
- health clinics placed where most needed
- self-selection into program participation

Data needed: a sample of participants (usually from a special survey designed for the program evaluation) and a large sample of non-participants (usually from some other large existing survey, such as the LSMS for households) from which one can pick the comparison group. Both surveys must include variables X that are important determinants of program participation and of the outcome.

Propensity score matching (individual matching): the variables X help predict program participation. Instead of matching on all the X, one matches on the probability of participation:
a) Use both samples to estimate the probability of participation as a function of the variables X.
b) For each participant i in the program, find the closest matches m(i) among the non-participants, i.e., the non-participants with the closest predicted probability of participation. One can choose 1 to 5 closest matches.
c) For each participant i, compute the average outcome of the closest matches:

   y_m(i) = (1/n) * sum_{j in m(i)} y_j

d) The impact of the program is:

   Impact = (1/N_T) * sum_{i in T} (y_i - y_m(i))

There are many variations of this method (using just one match, a few matches, or many matches weighted according to how "close" they are to the treated person). Example: Argentina's workfare program Trabajar. Note: the income effect is less than the payment from Trabajar because of the foregone income. By subtracting the predicted income effect from each observed income, one can estimate the "without program" income.

Whatever the matching method, program impact can be computed for different subgroups of the population to assess heterogeneity in the impact of the program.

The matching method is similar in spirit to a simple regression model in which you "control for" the observable characteristics with a pre-defined function. Estimating

   y_i = a + b*X_i + gamma*T_i + epsilon_i

identifies the impact gamma of the treatment T if epsilon
is orthogonal to T, i.e., conditional on the observables X, participation T is not correlated with the other factors influencing the outcome. This regression is, however, more restrictive than the matching method, as it imposes a given functional form y(X).

4. Double difference method

Used when the control or comparison groups are not perfectly comparable:
- imperfect randomization, imperfect matching
- there is no
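The simple-difference estimator from section 2 and the matching estimator from steps b)-d) of section 3 can be sketched as follows. This is a minimal illustration with made-up numbers: the propensity scores are taken as given, whereas in practice they would first be estimated from the X variables (step a, e.g., with a logit).

```python
# Minimal sketch of the two impact estimators, in pure Python.
# All data below are hypothetical, not from the handout's examples.

def simple_difference(y_treat, y_control):
    """Section 2: average outcome in T minus average outcome in C."""
    return sum(y_treat) / len(y_treat) - sum(y_control) / len(y_control)

def match_impact(treated, controls, n_matches=1):
    """Section 3, steps b)-d): nearest-neighbor propensity score matching.

    treated, controls: lists of (propensity_score, outcome) pairs.
    """
    diffs = []
    for p_i, y_i in treated:
        # step b: the n_matches non-participants with the closest score
        nearest = sorted(controls, key=lambda c: abs(c[0] - p_i))[:n_matches]
        # step c: average outcome y_m(i) of the matches
        y_m = sum(y for _, y in nearest) / len(nearest)
        # step d: accumulate y_i - y_m(i)
        diffs.append(y_i - y_m)
    return sum(diffs) / len(diffs)

# Hypothetical data: 3 participants, 4 non-participants
treated = [(0.80, 12.0), (0.60, 10.0), (0.70, 11.5)]
controls = [(0.79, 9.0), (0.62, 8.5), (0.71, 9.5), (0.20, 5.0)]

print(simple_difference([y for _, y in treated], [y for _, y in controls]))
print(match_impact(treated, controls))  # mean of (12-9), (10-8.5), (11.5-9.5)
```

Note how matching discards the distant non-participant (score 0.20) that a raw difference in means would include, which is the point of constructing a comparison group.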