Utilitarianism

This is a quick overview of some aspects of Utilitarianism. For links to many excellent internet resources on utilitarianism, see the Utilitarianism section of Lawrence Hinman's Ethics Updates site.

Consequentialist Theories of Ethics

We can distinguish between two main varieties of modern ethics: consequentialist theories and deontological theories. A consequentialist moral theory is one which holds that the evaluation of outcomes or states of affairs is more fundamental than the evaluation of actions. (As it is sometimes put, the good is more fundamental than the right.) In particular, consequentialism holds that the rightness or wrongness of actions is definable in terms of the goodness or badness of states of affairs.

A deontological moral theory is one which denies this: which asserts, that is, that the notions of rightness and wrongness are just as basic as the notions of goodness and badness (or possibly even more basic), and cannot be defined in terms of them.

Thus, while a consequentialist would say that an action like stealing your neighbor's lawn mower is wrong because it has bad results (it makes your neighbor very unhappy), a deontologist would say that it is wrong because of intrinsic features of the action or of the general policy you are acting on, and that it would be wrong even if the results were not bad.

In attempting to explain the rightness of actions in terms of the goodness of their results, most consequentialists would employ something like the following definition:

An action is right if, and only if, it produces more intrinsic good than any alternative action.

This definition doesn't help much, though, until we know two further things.

Questions for Consequentialism; Utilitarianism Defined

Clearly, there are two very important questions that need to be answered before this definition can give us very specific advice. First, what is intrinsically good? Some things seem to be good only because they lead to other things that are good, not because they are intrinsically good. For instance, going to the dentist is a good thing, but only because it leads to healthier teeth, and therefore to less pain, better digestion, etc. For most people, going to the dentist would not be desirable if it did not have these consequences: in itself, it is not a very attractive prospect. So something that is intrinsically good is something that is good in itself, and not because it leads to something else that is good.

Consequentialists typically take one of three views about what is intrinsically good. Hedonistic consequentialists hold that the only thing that is intrinsically good is pleasure. Eudaimonistic consequentialists hold that the only thing that is intrinsically good is happiness, which on some views is a broader notion than pleasure. (On the other hand, John Stuart Mill uses the terms "happiness" and "pleasure" pretty much interchangeably.) Finally, preferential consequentialists hold that what is intrinsically good is desire satisfaction, or the satisfaction of preferences. If all we ever wanted was pleasure or happiness, this would reduce to one of the other views, but in fact most of us seem to have desires for things other than our own pleasure or happiness, for example the well-being of those close to us.

The second question we need to answer is: more intrinsic good for whom? There are many possible answers here: for me, for my friends and family, for members of my community or my nation, for all people, or all rational beings, or all sentient beings. For our purposes, let us distinguish between two main varieties of consequentialism: egoism, which holds that the right action is the one that produces the most intrinsic good for the agent, and utilitarianism, which holds that the right action is the one that produces the most intrinsic good for everyone affected. (We can leave it an open question for now whether "everyone affected" includes nonhumans or not. However, if utilitarians think that pleasure, or happiness, or satisfaction of preferences is intrinsically good, it is hard to avoid the conclusion that we should consider the well-being of any being capable of pleasure or happiness or preferences. Utilitarians thus have very powerful reasons to be concerned about the ethical treatment of nonhuman animals.) A quick summary:

[Figure: ethicsOverview.gif, a diagram summarizing the classification of ethical theories described above.]

The Utilitarian Decision Procedure

Utilitarianism attempts to provide something like an algorithm or decision procedure for moral problems. The steps to follow, for a utilitarian, are roughly these: (1) identify the alternative actions available to you; (2) determine who will be affected by each action; (3) determine how much intrinsic good (or bad) each action will produce for each person affected; (4) add these amounts to get the total utility of each action; and (5) choose the action that produces the greatest total utility.

(Actually, in most realistic cases, we won't be certain what the outcomes of the action will be. In that case we need to determine the factors that the outcome depends on, determine the outcome for each possible combination of these factors, multiply the value of each possible outcome by its probability of occurring, and add the resulting values for the possible outcomes of a given action. This gives us the "expected utility" of the action, as opposed to its actual utility.)
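In other words, the expected utility of an action is the sum, over its possible outcomes, of each outcome's value multiplied by its probability. As a minimal sketch of this calculation (in Python, with the actions, outcomes, probabilities, and utility values invented purely for illustration):

def expected_utility(outcomes):
    # Sum of probability * utility over an action's possible outcomes.
    return sum(prob * utility for prob, utility in outcomes)

# Each action maps to a list of (probability, utility) pairs,
# one pair for each possible outcome of that action.
actions = {
    "keep the promise":  [(0.9, 10), (0.1, -5)],
    "break the promise": [(0.5, 12), (0.5, -20)],
}

for name, outcomes in actions.items():
    print(name, "-> expected utility:", expected_utility(outcomes))

# The utilitarian decision procedure then selects the action
# whose expected utility is greatest.
best = max(actions, key=lambda name: expected_utility(actions[name]))
print("Right action, on this calculation:", best)

The numbers here are made up; the point is only that, once probabilities and values have been assigned, the utilitarian calculation is mechanical.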

Criticisms of Utilitarianism

A number of criticisms of utilitarianism have been offered. Among them are these:

1. Utilitarianism makes genuinely difficult choices, in which it is not obvious what the right thing to do is, seem too easy. Bernard Williams' example of Jim illustrates this: Jim must either shoot one man, in which case six others go free, or do nothing, in which case all seven will be killed. It doesn't seem obvious what the right thing to do is, but if utilitarianism were correct, it would be obvious. (A possible utilitarian response to this criticism is that it is obvious what the right thing to do is, namely to shoot the one man; Jim's anguish and difficulty in deciding what to do are due to his repulsion at the thought of killing someone, not to any genuine moral difficulty.) More generally, suppose something bad will happen whether you do it or not. Then it seems that if you are a utilitarian, you should be indifferent between doing it yourself and letting someone else do it; after all, the same consequences will come about either way.

2. Utilitarianism gives no special moral weight to things like promises and contracts. If the world would be a slightly better or happier place if I broke a promise, then, according to the utilitarian, I should break it. (This is true for "act utilitarianism"; in the case of a variant called "rule utilitarianism," which holds that we should use utilitarian criteria to evaluate rules rather than individual actions, the situation is more complicated.) A standard example to illustrate this is the desert island promise.

3. Utilitarianism seems to have no room for special moral obligations to one's family and close friends. (Elliott Sober calls this the problem of personal loyalties.) What matters is only utility; it doesn't matter who gets it. Thus if you can save just one person from a fire, either your spouse or a wealthy philanthropist, we would normally think that you should save your spouse even if the philanthropist would do more good (i.e. would contribute more to overall utility). But it seems that the utilitarian must deny this.

4. Utilitarianism gives no special moral weight to justice. Maybe just outcomes will often produce more overall happiness than unjust ones. But in those cases in which an unjust outcome would produce more happiness, a utilitarian will need to favor it: the mere fact that it is unjust does not matter, morally speaking.

5. Utilitarianism regards all happiness as equally good, regardless of who gets it.  Making an awful person happy, for the utilitarian, is just as valuable as making a splendid person happy. Kant finds this completely unacceptable, holding that happiness is of no value unless the happy person is morally good. (Is it better, other things being equal, for a torturer to be happy or unhappy?)



Last update: March 20, 2012.
Curtis Brown | Introduction to Philosophy | Philosophy Department | Trinity University
cbrown@trinity.edu