UCSD PHIL 13 - UTILITARIANISM & CONSEQUENTIALISM


A NOTE ON UTILITARIANISM & CONSEQUENTIALISM FOR PHILOSOPHY 13
Richard Arneson
Fall 2008

Broadly speaking, utilitarianism holds that morality should guide conduct in such a way that the outcome is best for people on the whole. This might be interpreted as:

Act-utilitarianism = one ought always to do that act which, compared to the available alternatives, maximizes utility.

Act-utilitarianism so understood is a test or criterion of what one should do (a test of right action, one may say). It cannot directly serve as a guide to decision making when one does not know which of the acts one could perform will maximize utility. Associated with act-utilitarianism is an ancillary test (a guide for decision making) intended for use when one knows the value of each of the outcomes that could result from one's actions and can estimate the probability of any given outcome's occurring if one does an act that might lead to it:

Expected-utility maximization = one ought always to do the act which, compared to the available alternatives, maximizes expected utility.

Here the expected utility of an action is the sum, over its mutually exclusive possible outcomes, of the value of each outcome times the probability of its occurrence.

Example: Suppose I am on a rescue mission in the wilderness. At a stream juncture I must make one of two choices: go right or go left. If I go right, I know I will certainly save 30 people. If I go left, there is a 60% chance I will save zero people and a 40% chance I will save 100. There are no other significant consequences associated with either option, so let's identify utility here with lives saved. In this situation I maximize expected utility by going left (because .6 x 0 + .4 x 100 = 40, which is greater than 30). Notice that in this situation if I go left, I successfully maximize expected utility (this was the rational thing to do, I might say to myself) even if it turns out that I save nobody by doing so.
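The expected-utility arithmetic in the stream-juncture example can be checked with a short script. This is an illustrative sketch only; the function name and the representation of actions as (probability, utility) pairs are my own, not part of the handout.

```python
def expected_utility(outcomes):
    """Expected utility of an action, given its mutually exclusive
    possible outcomes as (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Arneson's example, with utility identified with lives saved:
go_right = [(1.0, 30)]             # certainly save 30 people
go_left = [(0.6, 0), (0.4, 100)]   # 60% chance save none, 40% chance save 100

print(expected_utility(go_right))  # 30.0
print(expected_utility(go_left))   # 40.0, so going left maximizes expected utility
```

Note that the calculation ranks going left higher even though that option may in fact save no one, which is exactly the point about the subjectively rational choice made in the text.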
In hindsight, with full knowledge, I might regret that I did not go right, since that proves to be the right act by the act-utilitarian standard. But in making decisions with limited information, acting so as to maximize expected utility has been held to be the best way to try to maximize utility. The rationale of expected-utility maximization is that if a decision problem were repeated many times, expected-utility maximization would produce more utility than following any other decision rule would.

(It is not uncontroversial, however. Imagine that a rescue mission must choose between action A, which would save 100 people with certainty, and action B, which with 99% probability would save zero lives and with 1% probability would save 10,001 lives. The expected-utility rule picks action B, on the assumption that "lives saved" is here a perfect proxy for "utility." But some might object on the ground that action B is very unlikely to yield a better result than A.)

In general, the act-utilitarian needs to distinguish the objectively right act (the act that, of the given alternatives, would actually maximize utility) from the subjectively right act (the act that, on the basis of the information available to the agent at the time of choice, is the most rational one to choose for someone whose goal is to maximize utility).

In order to follow the expected-utility rule one must be able (a) to identify all the possible outcomes of each action one might take, (b) to evaluate each possible outcome by attaching a utility number to it, and (c) to form an accurate estimate of the probability that any given outcome will occur if one performs the action that might lead to it. When the probabilities pertinent to decision making are known, one speaks of decision making under risk; when they are not known (i.e., when condition (c) is not met), one speaks of decision making under uncertainty.
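The controversial A-versus-B rescue case can be checked numerically the same way. Again a sketch with names of my own choosing; the point it verifies is just that the expected-utility rule favors B by a slim margin despite the 99% chance that B saves no one.

```python
def expected_utility(outcomes):
    """Sum of probability-weighted utilities over mutually exclusive outcomes."""
    return sum(p * u for p, u in outcomes)

action_a = [(1.0, 100)]              # save 100 people with certainty
action_b = [(0.99, 0), (0.01, 10001)]  # 99% save none, 1% save 10,001

print(expected_utility(action_a))  # 100.0
print(expected_utility(action_b))  # approximately 100.01, so the rule picks B
```

The margin (100.01 versus 100) is tiny, which is why the objection in the text has force: in any single run of this decision, B is very unlikely to beat A.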
If probabilities are unknown, one proposed rule says, one should pay heed only to the best and worst outcomes that a given option could reach and ignore all intermediate cases.

We can also distinguish average utilitarianism from total utilitarianism. The former says one ought always to act so as to maximize average utility (utility per person); the latter says one ought always to act so as to maximize total or aggregate utility. The two views come to the same thing unless one is making decisions that will affect the number of persons in the world, such as deciding whether or not to have a baby. Or think of population-control policies. If you are the ruler of a poor country, you may face a choice between a policy of promoting more births and a policy designed to lower the birth rate. If promoting births will lead to a larger population, total utilitarianism might endorse that policy even though average happiness will be lower than it would have been under a policy of discouraging births, just so long as the extra births increase the total of human happiness even while they lower the average.

Utilitarianism versus Common-sense Morality. Utilitarianism appears to conflict with what we might call "common-sense morality," the view that takes morality to be constituted not by any goal to be pursued but by rules to be followed. The rules of common-sense morality by and large do not posit goals that must be pursued but instead set side constraints on one's actions. "Don't commit murder," "Don't tell lies," and "Keep your promises" are examples of such rules.

An example that illustrates the conflict: Suppose one is a surgeon. Six patients enter one's office at once. Five of them, unfortunately, are gravely ill. Each of the five must receive an organ transplant very soon or he will die. One needs a heart, one needs a kidney, etc.: five different organs are needed. But fortunately, the sixth man, who wandered into the office, has all the healthy organs needed.
The surgeon faces a choice between killing the healthy patient in order to save the five, and refusing to kill the healthy patient, thereby letting the five die. (In the example these are the only possible choices; it won't work to wait until one of the diseased patients dies and then cut up his corpse and use his organs to save the four who remain threatened. By the time the first threatened person dies, his organs will be useless for transplant purposes.) What should the surgeon do? The common-sense moral rule "Don't murder" tells her that she should refrain from cutting up the one even in order to save the five. But it appears that by the act-utilitarian standard she ought to kill the one, since that act would produce the better outcome: five lives saved rather than one.