Principal-Agent Problem

[Figure: schematic of the model. The agent chooses one of n actions a_1, ..., a_n; action a_i produces outcome x_j (one of m outcomes x_1, ..., x_m) with probability P_ij; on observing x_j the principal pays the wage w_j = w(x_j). The agent's payoff is wage utility minus action cost, U(w_j) - e(a_i); the principal's payoff is output minus wage, x_j - w_j.]

The principal-agent problem
- If the agent does not accept the contract, his payoff is his reservation utility U̲.
- If the agent accepts the contract, he chooses among n possible actions: a_1, ..., a_n.
- These actions produce m possible outcomes: x_1, ..., x_m.
- There is a stochastic relationship between actions and outcomes (called the "technology"): when the action is a_i, the principal observes outcome x_j with probability P_ij.
- If the principal observes outcome x_j, she pays the agent w_j.
- The agent's payoff is U(w) - e(a_i), where U(w) is the utility of wage w to the agent and e(a_i) is the cost of action a_i to the agent. U is increasing, differentiable, and concave.
- Assuming the principal is risk-neutral, her payoff is x_j - w_j.

What if the agent's actions can be observed?
- The principal can design a contract in which the wages are conditioned on the actions, i.e., w(a_i):

    max_{i,w_i}  Σ_j P_ij x_j - w_i
    s.t. (participation constraint)  U(w_i) - e(a_i) ≥ U̲

- To induce the agent to choose action a_i (incentive constraint: U(w_i) - e(a_i) ≥ U(w_k) - e(a_k) for all k ≠ i), set w_i such that U(w_i) = U̲ + e(a_i), and set all other w_k sufficiently low.

Unobservable actions - the principal's problem: Step 1
- Given an action a_i, how should the wages be set so that the agent chooses a_i and the principal's payoff is maximized?

    Participation constraint:  Σ_j P_ij U(w_j) - e(a_i) ≥ U̲
    Incentive constraint:      Σ_j P_ij U(w_j) - e(a_i) ≥ Σ_j P_kj U(w_j) - e(a_k)   for all k ≠ i

- Principal's objective (minimize the expected cost of inducing the agent to choose a_i):

    max_w Σ_j P_ij (x_j - w_j)  =  Σ_j P_ij x_j - min_w Σ_j P_ij w_j,   where C(a_i) ≡ min_w Σ_j P_ij w_j

Principal's problem: Step 1
- C(a_i) is the minimal cost (to the principal) of inducing the agent to take action a_i.
- C(a_i) is convex, so the original maximization objective is concave.
- The result is a well-behaved mathematical program with a concave objective function (maximization) and linear constraints.
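When actions are observable, the optimal contract simply inverts the utility function at U̲ + e(a_i). A minimal sketch, assuming (as in the worked example later in these notes) U(w) = √w and action cost e², so that the binding participation constraint pins down w_i = (U̲ + e²)²; the function name is illustrative:

```python
def first_best_wage(u_bar, effort):
    """Observable actions: the wage solving U(w) = u_bar + e(a), assuming
    U(w) = sqrt(w) and action cost e(a) = effort**2. Inverting the square
    root gives w = (u_bar + effort**2)**2, the cheapest wage that satisfies
    the participation constraint with equality."""
    return (u_bar + effort ** 2) ** 2

# With the reservation utility U_bar = 114 from the numerical example below:
print(first_best_wage(114, 6))  # 22500
print(first_best_wage(114, 4))  # 16900
```

These match the symmetric-information wages computed in the example below.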
Lagrangian Relaxation Reminder
- Given the general form of the problem:

    max_x f(x)
    s.t. g_i(x) ≤ 0 for all i = 1..k
    s.t. h_j(x) ≥ 0 for all j = 1..m   (equivalently, s.t. -h_j(x) ≤ 0)

- L = f(x) - Σ_i λ_i g_i(x) - Σ_j μ_j (-h_j(x)), where λ_i ≥ 0 and μ_j ≥ 0.
- Given the function L, we can take derivatives with respect to the original variables.
- Also remember "complementary slackness":

    λ_i g_i(x) = 0 for all i
    μ_j (-h_j(x)) = 0 for all j

Principal's problem: Step 1
For a given action a_i:

    max_w Σ_j P_ij (x_j - w_j)
    s.t. (μ)    Σ_j P_ij U(w_j) - e(a_i) ≥ U̲
         (λ_k)  Σ_j P_ij U(w_j) - e(a_i) ≥ Σ_j P_kj U(w_j) - e(a_k)   for all k ≠ i

    L(w, μ, λ) = Σ_j P_ij (x_j - w_j) + μ (Σ_j P_ij U(w_j) - e(a_i) - U̲)
                 + Σ_{k≠i} λ_k (Σ_j P_ij U(w_j) - e(a_i) - Σ_j P_kj U(w_j) + e(a_k))

Taking the derivative with respect to each wage w_j:

    ∂L/∂w_j = -P_ij + μ P_ij U'(w_j) + Σ_{k≠i} λ_k (P_ij - P_kj) U'(w_j) = 0

    →  1/U'(w_j) = μ + Σ_{k≠i} λ_k (1 - P_kj / P_ij)

- Here the desired action is a_i, and x_j is the outcome we are analyzing.
- P_kj / P_ij is a likelihood ratio: the likelihood of observing x_j given that the agent chooses a_k, compared to the likelihood of observing x_j given that the agent chooses a_i (a smaller ratio indicates stronger precision of the signal).
- μ is the base payment for participation.
- As the right-hand side gets bigger, the wage gets bigger (U is concave, so 1/U'(w) is increasing in w). So wages will increase for outcomes where the likelihood ratio is smaller.
- Under some conditions, the likelihood ratio implies that a "monotone" scheme (better result → better wage) may be appropriate.

Two actions and two outcomes
- Suppose the agent has two possible actions, a and b, and there are two possible outcomes, x_1 and x_2.
- Suppose that action b is preferred by the principal. For action b:

    max_w Σ_{j=1,2} P_bj (x_j - w_j)
    s.t. (μ)  P_b1 U(w_1) + P_b2 U(w_2) - e(b) ≥ U̲
         (λ)  P_b1 U(w_1) + P_b2 U(w_2) - e(b) ≥ P_a1 U(w_1) + P_a2 U(w_2) - e(a)
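To see the first-order condition in action: with the square-root utility used later in these notes, U'(w) = 1/(2√w), so 1/U'(w_j) = μ + λ(1 - P_kj/P_ij) can be inverted for the wage directly. A sketch assuming U(w) = √w and a single binding incentive constraint; the multiplier values are illustrative (they happen to reproduce the w_6 = 28,900 and w_3 = 12,100 wages of the worked example below):

```python
def wage_from_foc(mu, lam, likelihood_ratio):
    """Invert the first-order condition 1/U'(w) = mu + lam*(1 - ratio)
    for U(w) = sqrt(w), where U'(w) = 1/(2*sqrt(w)):
        2*sqrt(w) = mu + lam*(1 - ratio)  =>  w = ((mu + lam*(1 - ratio)) / 2)**2
    """
    return ((mu + lam * (1 - likelihood_ratio)) / 2) ** 2

# Illustrative multipliers, chosen to be consistent with the two-outcome example:
mu, lam = 300, 80
print(wage_from_foc(mu, lam, 0.5))  # 28900.0 -- small ratio => high wage
print(wage_from_foc(mu, lam, 2.0))  # 12100.0 -- large ratio => low wage
```

Note that the wage is decreasing in the likelihood ratio, which is exactly the monotonicity observation above.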
Sensitivity analysis: changes in the agent's costs
- What is the impact of the agent's costs on the outcome?

    max L(w, μ, λ) = P_b1 (x_1 - w_1) + P_b2 (x_2 - w_2)
                     + μ (P_b1 U(w_1) + P_b2 U(w_2) - e(b) - U̲)
                     + λ (P_b1 U(w_1) + P_b2 U(w_2) - e(b) - P_a1 U(w_1) - P_a2 U(w_2) + e(a))

- How does the objective function of the principal change as the cost of the undesirable action a (or the desirable action b) increases?

    ∂L/∂e(b) = -(μ + λ)    (carrot: lowering the cost of the desirable action helps the principal)
    ∂L/∂e(a) = λ           (stick: raising the cost of the undesirable action helps the principal)

This Type of Game
- Who are the players?
- Simultaneous or multi-stage game?
- Perfect information or not?
- Complete information or not?
- What were the actions of the agent? Of the principal?
- What are the strategies?
- Have we found a Nash equilibrium?

Example 1
- The principal (Π(x, w) = x - w) contracts with an agent (U(w, e) = √w - e²) whose effort determines the results.
- The probability of each state is 1/3.
- The agent's reservation utility is U̲ = 114.

    OUTCOMES        o1        o2        o3
    e = 6       60,000    60,000    30,000
    e = 4       30,000    60,000    30,000

- What are effort and wage under symmetric information?
- What happens under asymmetric information?

Example: symmetric information
- Given an effort level e:

    max_w  E(x(e)) - w    s.t.  √w - e² ≥ U̲

- The objective decreases in the wage while the constraint becomes tighter, which implies the participation constraint binds: w = (U̲ + e²)².
- For e = 6: w = 22,500; Π = 1/3 (60K + 60K + 30K) - 22,500 = 27,500.
- For e = 4: w = 16,900; Π = 1/3 (30K + 60K + 30K) - 16,900 = 23,100.

Example: asymmetric information
- For e = 6 (paying w_6 after a 60,000 outcome and w_3 after a 30,000 outcome):

    max  1/3 (60K - w_6) + 1/3 (60K - w_6) + 1/3 (30K - w_3)
    s.t. 1/3 √w_6 + 1/3 √w_6 + 1/3 √w_3 - 36 ≥ 114
         1/3 √w_6 + 1/3 √w_6 + 1/3 √w_3 - 36 ≥ 1/3 √w_3 + 1/3 √w_6 + 1/3 √w_3 - 16

- How to solve?
    - Lagrangian.
    - (Note that in the previous slides, for two actions, both constraints were tight. If the profit had the same slope, the solution was at the intersection.)
- For e = 6: w_6 = 28,900 and w_3 = 12,100; Π = 26,700.
- For e = 4: w = 16,900; Π = 23,100.
- Which does the principal choose? (e = 6, since 26,700 > 23,100.)
- What is the loss due to asymmetric information? (27,500 - 26,700 = 800.)

Recap
- Principal-Agent problems with
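Since both constraints bind at the optimum (as noted above), the asymmetric-information wages for e = 6 solve a 2×2 linear system in s_j = √w_j. A sketch in exact rational arithmetic; the helper name is illustrative:

```python
from fractions import Fraction as F

def solve2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = c and d*x + e*y = f by Cramer's rule."""
    det = a * e - b * d
    return (c * e - b * f) / det, (a * f - c * d) / det

# Both constraints bind. With s6 = sqrt(w6) and s3 = sqrt(w3):
#   participation:  2/3*s6 + 1/3*s3 - 36 = 114            ->  2/3*s6 + 1/3*s3 = 150
#   incentive:      (2/3 - 1/3)*s6 + (1/3 - 2/3)*s3 = 36 - 16
#                                                          ->  1/3*s6 - 1/3*s3 = 20
s6, s3 = solve2x2(F(2, 3), F(1, 3), 150, F(1, 3), F(-1, 3), 20)
w6, w3 = s6 ** 2, s3 ** 2
profit = F(2, 3) * (60000 - w6) + F(1, 3) * (30000 - w3)
print(w6, w3, profit)  # 28900 12100 26700
```

This reproduces the w_6 = 28,900, w_3 = 12,100, and Π = 26,700 figures quoted above.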