Class on Olson

1. Why is this important?

• Central Thesis: the existence of a common interest among a set of people—an interest that could be advanced through collective action—does not by itself lead to voluntary collective action to advance that interest. More particularly, failures to advance common interests—to act in ways that make each person better off—are fully consistent with individually rational agency and good information.

• Microfoundational Thesis: ideas about individual rational action (and, relatedly, about incentives) that are understood to be relevant in thinking about economic activity are also relevant in thinking about collective action: more generally, the distinction between economics and sociology (including political sociology) is a difference in domain of inquiry (a difference in the phenomena being studied), not in the nature or “logic” of individual action. More particularly, a great deal of collective action (and failure of collective action) in society and in politics can be understood in terms of the rational pursuit of goals (utility maximizing)—with action based on an assessment of the costs and benefits of the action—and does not involve either norm-guided action (action explained by reference to standards of right/wrong or appropriate/inappropriate conduct, perhaps associated with roles in an institution) or action animated by some form of group identification. In both of the latter cases, the person decides what to do not simply by reference to the consequences of his/her conduct, but by reference to considerations that apply to the action itself: its rightness, for example.

• Political-Sociological Thesis: collective action problems are solved by hegemons (and other large players), who are sufficiently interested in a collective good to bear the costs of ensuring its provision; by smaller groups, whose members can identify their contribution to the provision of the good; and by the existence of (positive and negative) selective incentives, which lead people to contribute to the provision of a collective good in order to gain a private good or avoid a private bad.

• Political Thesis: against a certain picture of pluralism and group politics: even under fair conditions, with basic rights protected and a reasonable distribution of resources—even with a perfected political market—we can get a highly skewed political universe, with group organization and power unreflective of underlying levels of support, and political outcomes reflective of this skewed system of group bargaining.

2. Central Thesis: the existence of a common interest that could be advanced through collective action may well not lead to collective action, even among fully rational and informed actors.

• One way in which this is true is when actors are concerned about relative gains: (i) arguably true in the international case, where the concern about relative gains reflects the worry that an unequal distribution of the benefits of a cooperative activity may lead to threats to security; (ii) also true when we have envy and associated concerns about relative position.
But neither of these cases is really germane here, because in both cases the potential cooperators would not really be, all things considered, better off if they were to pursue common action: there would be some gain, but the agents themselves would not regard the gain as, all things considered, worth it, since it would be overridden by the resulting security threat (in case (i)) or by the resulting loss in relative position (in case (ii)), which may matter as much as absolute position. What we are looking for are cases in which acting collectively would make each person better off (by their own lights) than they will be in the absence of such action, but the agents nonetheless do not act collectively.

• The most familiar case is the one-shot prisoner’s dilemma: here we have a unique, dominant-strategy equilibrium (both agents confessing), and the equilibrium is suboptimal. An outcome is an equilibrium just in case neither person has any incentive to shift choice, given the choice of the other; it is a dominant-strategy equilibrium just in case each agent has a choice that is better no matter what the other person chooses (a dominant strategy); and the equilibrium is suboptimal in that each person could be better off (if both refused to confess). (The first sketch below makes these definitions concrete.)

• A second, distinct case in which rational agents may not be able to coordinate in a way that makes each better off is what is called an “assurance problem.” In this case, we do not have a dominant-strategy equilibrium: if I am confident that you will do your part, then my best response is to do mine (reciprocity). But in the circumstances, I cannot rationally be confident that you will do your part: say, because if you don’t, things will work out very badly for me. So I rationally choose not to cooperate, because the harm from being suckered is so much greater than the loss from not cooperating. (Maybe I do not cooperate because I simply think you may not; or because I think that you may not, since I think that you think that I am not going to cooperate, because you think I am going to protect myself against the possibility of your not cooperating.)

• Now both the one-shot PD and the one-shot assurance game are obviously highly artificial, because they are abstracted from a larger setting that is not itself modeled. So what happens when we think of the PD as being played repeatedly? Do we still get a failure to cooperate for mutual benefit among rational individuals? The standard answer is that repetition makes no difference if the game is played a definite number of times. The argument proceeds by backwards induction: each person sees that it is irrational to cooperate on round N, inasmuch as that last stage is equivalent to playing the one-shot game; but then it is not rational to cooperate at stage N-1, because that is in effect the final stage, and so on back to the initial stage (the second sketch below walks through this argument). Now notice that this backwards-induction argument for universal defection assumes that there is common knowledge of rationality: each of us knows that the other is going to defect at the last stage, and that the other knows that I know that, etc. So suppose instead that, when I am anticipating the last stage, I can see that it is rational for me to defect, but I may not know whether you are rational. If you are not, then whether you defect or not may depend on what I do.
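To make the two one-shot games concrete, here is a minimal Python sketch (my illustration, not part of the notes) that enumerates best responses. The payoff numbers are assumptions; only their ordering matters. It checks that in the PD defection is dominant and the unique equilibrium is suboptimal, while the assurance game has no dominant strategy and has two equilibria.

```python
# Illustrative payoff tables (assumed numbers; only the ordering matters).
# Each entry maps a profile (row action, column action) to (row payoff, column payoff).

ACTIONS = ("C", "D")  # C = cooperate (refuse to confess), D = defect (confess)

# Prisoner's dilemma: defecting is better no matter what the other does.
pd = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

# Assurance game: cooperation is best if reciprocated, but being
# "suckered" (cooperating alone) is worse than mutual defection.
assurance = {
    ("C", "C"): (4, 4), ("C", "D"): (0, 3),
    ("D", "C"): (3, 0), ("D", "D"): (2, 2),
}

def best_responses(game, player, other_action):
    """Actions maximizing `player`'s payoff, holding the other's action fixed."""
    def payoff(action):
        profile = (action, other_action) if player == 0 else (other_action, action)
        return game[profile][player]
    top = max(payoff(a) for a in ACTIONS)
    return {a for a in ACTIONS if payoff(a) == top}

def dominant_strategies(game, player):
    """Actions that are a best response to *every* action of the other player."""
    return set.intersection(*(best_responses(game, player, o) for o in ACTIONS))

def pure_equilibria(game):
    """Profiles where each player's action is a best response to the other's."""
    return [(a0, a1) for a0 in ACTIONS for a1 in ACTIONS
            if a0 in best_responses(game, 0, a1)
            and a1 in best_responses(game, 1, a0)]

print(dominant_strategies(pd, 0))         # {'D'}: defection is dominant
print(pure_equilibria(pd))                # [('D', 'D')]: unique, yet suboptimal,
                                          #   since (C, C) would pay (3, 3)
print(dominant_strategies(assurance, 0))  # set(): no dominant strategy
print(pure_equilibria(assurance))         # [('C', 'C'), ('D', 'D')]: cooperating
                                          #   is an equilibrium, but only given assurance
```

The assurance-game output shows why confidence matters: (C, C) is self-enforcing once each player expects it, but nothing in the game makes that the uniquely rational expectation.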
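And here is a sketch of the backwards-induction argument for the PD played a definite number of times, under the common-knowledge-of-rationality assumption flagged above. It reuses the `pd` table and `ACTIONS` from the previous sketch; the payoffs, again, are illustrative assumptions.

```python
# Backwards induction for a PD repeated a definite number of times,
# given common knowledge of rationality (reuses `pd` and ACTIONS above).

def continuation_value(rounds_left):
    """Per-player value of the remaining rounds, computed by backwards induction."""
    if rounds_left == 0:
        return 0  # nothing after the last round: no future to protect
    future = continuation_value(rounds_left - 1)
    # By the induction step, all later rounds are defect-defect regardless of
    # what happens now, so the current round is strategically equivalent to
    # the one-shot game, in which D is dominant; the rational opponent
    # likewise plays D.
    stage = {a: pd[(a, "D")][0] for a in ACTIONS}  # my payoff against a defector
    best = max(stage, key=stage.get)
    assert best == "D"  # cooperation never pays at any stage
    return stage[best] + future

print(continuation_value(10))  # 10: mutual defection in all 10 rounds (10 * 1)
```

The `assert` is where common knowledge does the work: if I cannot rely on your defecting in later rounds regardless of what I do now, the current round is no longer equivalent to the one-shot PD, and early cooperation may become rational; that is exactly the possibility the notes raise at the end.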

