Wednesday, May 12, 2010

FORMAL METHODS IN POLITICAL PHILOSOPHY FOURTH INSTALLMENT

So much for the easy stuff. Now let's say a word or two about more complex issues that play a very important role in criticizing the application of Rational Choice Theory and Game Theory to military strategy and nuclear deterrence. We have been talking about maximizing expected utility, as though it were obvious that two alternative actions or strategies or choices with the same expected utility are equally worthy of being chosen. But a moment's thought shows that this assumption is, at the very least, questionable.

A simple example will make the point. Suppose I am presented with the opportunity to play either of two games. The first offers a coin toss, with heads winning me an amount of money for which I have very great utility, and tails losing me an amount of money whose loss has exactly the same magnitude of utility for me. [I have to define the game in this clumsy way, remember, because it is, by hypothesis, utility and not money that I seek to maximize. Given the shape of my utility function, it might be that for me, the utility of gaining a million dollars is equal in magnitude to the disutility of losing the one hundred thousand dollars I already have. So this might be a coin toss game that wins a million if heads comes up and loses a hundred thousand if tails comes up.] The expected utility of this game is, by construction, zero. [1/2 times the utility to me of winning a million dollars, plus 1/2 times the utility to me of losing a hundred thousand dollars, where, by hypothesis, the second is exactly the negative of the first.] The second game consists of the game master simply saying to me, "You neither win nor lose anything." The expected utility of this game is also zero.
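To make the arithmetic explicit, here is a minimal sketch in Python. The utility numbers are purely illustrative assumptions, not anything derived from an actual utility function:

```python
# Expected utility of the two games, with illustrative utility numbers.
# By hypothesis, the utility of winning $1,000,000 and the utility of
# losing $100,000 are equal in magnitude and opposite in sign.
u_win, u_lose = 1.0, -1.0

coin_toss = 0.5 * u_win + 0.5 * u_lose   # 0.5(1) + 0.5(-1) = 0.0
sure_nothing = 0.0                        # "You neither win nor lose anything"

# The theory declares the two games equally choiceworthy.
print(coin_toss == sure_nothing)          # True
```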

Now, the theory of rational choice says I should be indifferent between these two games. There is, according to my calculations of expected utility, no reason to prefer one to the other. But in fact, as I think is obvious, some people would clearly prefer to play the first game, while others [myself included] would prefer to play the second. This is not, let me emphasize, because I value the million I might win less highly than the one hundred thousand I already have [assuming that I have it, hem hem]. If that is true, then just adjust the amounts until the utilities are equal, wherever in dollar amounts that balance lies. [There must be such a pair of amounts, by the way. That is one of the implications of the assumption that my preference ordering is complete, transitive, and continuous.]

Intuitively [and correctly], the explanation for the varying ways in which different people would rank these two games is that people have different tastes for risk itself, independent of their calculation of expected value. Some people like to take risks, and others are risk averse. Take me, for example. I don't like risks. Suppose I decide [who knows how?] that fifty dollars is worth twice as much to me as twenty dollars [because I have declining marginal utility for money]. If you offer me a sure twenty dollars or a fifty percent chance of getting fifty dollars, I am as likely as not to take the sure twenty, because I just don't like risk. I know that the mathematical expectation of the risky alternative is (1/2 x 50) or 25 dollars. And since I have positive, albeit declining, marginal utility for money, I prefer $25 to $20. Even so, I will take the sure $20. I have better things to do with my life and I just don't like risk.
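The textbook way of representing distaste for risk is a concave utility function for money. Here is a minimal sketch, with a square-root function chosen purely as an illustrative assumption; the point of the paragraph above is that a taste for or against risk persists even after such curvature has been accounted for:

```python
from math import sqrt

# A concave utility function (sqrt is an illustrative assumption)
# represents declining marginal utility for money.
u = sqrt

# Sure $20 versus a 50/50 gamble on $50 or nothing.
ev_gamble = 0.5 * 50 + 0.5 * 0          # $25 in expected dollars
eu_gamble = 0.5 * u(50) + 0.5 * u(0)    # about 3.54 in utility
eu_sure   = u(20)                       # about 4.47 in utility

# The gamble wins in expected dollars but loses in expected utility:
# curvature alone can already produce risk-averse choices.
print(ev_gamble, eu_gamble, eu_sure)
```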

This problem was the subject of a fascinating debate fifty years ago or so between the French economist Maurice Allais and the émigré Ukrainian economist Jacob Marschak. Allais argued the point I have just been making. Marschak replied that the problem of attitudes toward risk itself could be got round by changing the nature of the set, S, of alternatives over which a subject is asked to express preferences. Instead of a set of outcomes, or payoffs as they are frequently referred to in the literature, you can present the subject with a set of what Marschak called prospects, which are total future states of affairs. Since a prospect includes the pattern of risk involved in the making of a choice, preference for risk itself can be built into the utility function, thus getting around the fact that people have different tastes for risk independently of their attitudes toward the various outcomes that may result from a gamble.
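Here is a toy illustration of the general idea, emphatically not Marschak's own formalism: once the objects of preference are whole prospects rather than bare outcomes, a taste for risk itself can be folded into the utility function. The risk_taste parameter and the variance penalty are made-up devices for the sketch:

```python
# Toy sketch: utility defined over whole prospects (payoff distributions),
# so that the riskiness of a prospect can itself carry utility or disutility.

def prospect_utility(outcomes, probs, risk_taste):
    """Expected payoff plus a term reflecting taste for risk itself.
    risk_taste < 0 models risk aversion; risk_taste > 0, risk seeking."""
    mean = sum(p * x for p, x in zip(probs, outcomes))
    variance = sum(p * (x - mean) ** 2 for p, x in zip(probs, outcomes))
    return mean + risk_taste * variance ** 0.5

# The two games from above, with outcomes measured in utility:
coin_toss = ([1.0, -1.0], [0.5, 0.5])
sure_thing = ([0.0], [1.0])

# Two choosers with identical valuations of the outcomes but different
# tastes for risk rank the same pair of prospects differently.
for taste in (-0.1, +0.1):
    prefers_toss = prospect_utility(*coin_toss, taste) > prospect_utility(*sure_thing, taste)
    print(f"risk_taste={taste:+.1f}: prefers the coin toss? {prefers_toss}")
```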

This response is correct, and can easily enough be handled mathematically, but it misses a deeper point that is, I believe, fundamental. The whole purpose of introducing the concept of a utility function and the associated process of maximizing expected utility is supposed to be to provide a chooser with a definite and calculable method for making a decision when confronted with alternatives, based only on the chooser's utility function. In effect, the theory says to someone making a choice, "If you know how you feel about the outcomes [your utility function] and if you know the probabilities [the premise of choice under risk], then this method will allow you to calculate what it is rational for you to do, even when it is unclear to you what that is." If this claim can be sustained, then the method of expected utility maximization is a very powerful aid to rational choice. But if it is necessary to shift to a utility function defined over total prospects, then all of the power and usefulness of the rule of expected utility maximization is lost. This may not be clear when one is awash in formalism and symbolism, but if you remind yourself what those symbols actually mean, and do not let yourself be beguiled by the spiffiness of the mathematics, then the force of Allais' objection is clear [in my opinion].

There are also a number of more subtle points relating to the construction of the utility function. In order for a cardinal utility function to be constructed from someone's preferences, it is necessary that all of the outcomes in the set S be commensurable with one another. That is, it must be possible to represent the subject's preferences by an assignment of finite cardinal numbers, so that for any three alternatives a, b, and c in S, with a preferred to b and b preferred to c, there is some probability p such that:

b = pa + (1-p)c

Since p + (1-p) = 1, the expression on the right of the equation says that it is certain that either a or c will happen. The equation says that there is some way of adjusting the probabilities so that the subject is indifferent between outcome b and the gamble of a or c with the probabilities p and (1-p).
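Given cardinal utility numbers for the three alternatives, that indifference probability can be computed directly. A minimal sketch, with purely illustrative utilities:

```python
# Solve the indifference condition u(b) = p*u(a) + (1 - p)*u(c) for p,
# assuming cardinal utilities with u(a) > u(b) > u(c).
def indifference_probability(u_a, u_b, u_c):
    return (u_b - u_c) / (u_a - u_c)

# Illustrative numbers: a is worth 10, b is worth 4, c is worth 0.
p = indifference_probability(10.0, 4.0, 0.0)
print(p)  # 0.4: indifferent between b for sure and a gamble giving
          # a with probability 0.4 and c with probability 0.6
```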

But it may be that one of the possible outcomes is, in the eyes of the subject, so much worse than any of the others [for example the subject's death] that there is no probability of that outcome, however small, that the subject is willing to risk. Alternatively, there might be one outcome so much better, in the subject's view, that there is nothing else you can offer the subject to compensate her for losing even the tiniest bit of her chance of getting it [for example, eternal salvation]. If either of these is the case, then the subject does not have a cardinal preference ordering, but instead has what is called a lexicographic preference ordering. Since this will come up later, a word about lexicographic preference orderings.

When we alphabetize a group of words [hence "lexicographic"], we put first all the words that begin with the letter "a," regardless of what the subsequent letters in the word are. We put "azure" before "bad," because it starts with the letter a, even though the z, the u, and the r in "azure" come relatively late in the alphabet, whereas the letters a and d in "bad" come early. The earliness of a and d does not, as it were, compensate for the fact that b comes after a, nor does the lateness of z, u, and r count against "azure" in the alphabetizing. In other words, we are not assigning numbers to the letters and then arranging the words in the order of the sums of those numbers [as medieval Hebrew scholars did in the Kabbalah]. Arranging a set of alternatives in this fashion, with one or more alternatives being, as we say, "lexicographically prior to" the others, yields a lexicographic ordering of the set.
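As it happens, string and tuple comparison in Python is lexicographic, which makes the idea easy to demonstrate. The "survival first, money second" pairing below is an illustrative assumption of my own, not anything in the text above:

```python
# String comparison is already lexicographic: the first letter settles it,
# and later letters are consulted only to break ties.
print("azure" < "bad")   # True: a beats b; z, u, and r are never weighed

# The same idea as a preference ordering. Each alternative is scored on a
# lexicographically prior dimension (here, survival) and then on money.
# No gain in the second slot can compensate for a loss in the first.
alternatives = {
    "stay alive, modest payoff": (1, 20),
    "risk death, huge payoff":   (0, 10**9),
}
best = max(alternatives, key=alternatives.get)  # tuples compare lexicographically
print(best)  # 'stay alive, modest payoff'
```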

Keep this in mind, along with everything else I am telling you. It will turn out to play a role in my criticism of the application of Game Theory to military strategy by deterrence theorists, and also will turn out to pose problems for Rawls.

Well, that was fun. Now let us discuss an even hairier problem that actually played a very important role in decisions made by the Defense Department in the 1960s about the construction of the command and control systems for America's nuclear weapons [we are talking serious stuff here, folks.]

As I explained in my blog, the enormous destructive power and revolutionary character of nuclear weapons forced America's military planners to turn for advice to economists, psychologists, mathematicians, and philosophers. Very quickly, a number of these think tank defense intellectuals began to worry about the following problem. If the Soviet Union should be so foolhardy as to launch a first strike nuclear attack on America, it might, as part of this attack, target Washington D.C. In an instant [quite literally, in an instant] every decision maker of any constitutional authority in Washington might go up in a mushroom cloud. At the same time, almost certainly communications among those remaining alive would be disrupted by the effects of the explosions occurring across the country. The nuclear submarines carrying missiles with multiple independently targetable warheads would still be functional, presumably, but they might be out of contact with whatever remained of the military or civilian high command.

It was clear to the defense intellectuals that two things needed to be planned for and implemented. First, a physical system of backup communications and control of warhead delivery systems had to be put in place now, so that even after the incineration of the president and his so-called black box, it would be physically possible to use the remaining missiles, if that was what it was decided to do. Second, a set of standing orders had to be promulgated now, directing officers [or even enlisted soldiers] still in possession of usable nuclear weapons to carry out whatever orders it was decided, ex ante, to give them. Because of the instantaneity and scope of nuclear destruction, it was clear that those responsible for making decisions about the use of nuclear weapons could not wait until after the attack to deliberate and decide. The relevant people might not survive the attack, and even if they did, they might not be in a position to issue orders that could be received. The response had to be planned for in advance, if there was to be a response at all.

To the defense intellectuals, who were accustomed to thinking and writing about matters of nuclear deterrence strategy in terms of Game Theory or Rational Choice Theory, this second desideratum was a matter of defining the nation's utility function in the face of a set of hypothetical choices. But at this point, some of those intellectuals realized that they faced a very puzzling problem. To put it simply, should they find ways to build into the physical system and set of standing orders the preference structure that the relevant decision makers have now, or the preference structure they might have after the attack? After all, contemplating these end-times scenarios quietly in a backroom of the Pentagon, the planners might conclude that should America suffer the sort of devastating attack that would effectively terminate the existence of the United States as a functioning political entity, it would make no sense at all to launch a counter-attack whose sole purpose was the vengeful killing of several hundred million Soviet citizens, none of whom had played any role in the launch of the attack. But the defense intellectuals could also see that after the attack, with America in ruins, those still in control of nuclear weapons might desperately want revenge simply for the sake of revenge. In short, the trauma of the attack might change the preference order, or utility function, of the surviving decision makers.

Since the planners could recognize this possibility in advance, in accordance with which utility function should the plans be made? The one the decision makers had now, or the one they thought they were likely to have then?

If we step back from the horror of these speculations, we can see that this dramatic example is an instance of a much larger theoretically intractable problem. Rational Choice Theory assumes that utility functions are both exogenously given and invariant. The utility functions are exogenously given in the sense that whatever determines them is outside of, or exogenous to, the system of decision being analyzed. The utility functions are invariant because, for purposes of the expected utility calculations, they are assumed to remain unchanged and are the foundation on which the calculations are based. So in situations in which the utility functions themselves change, the theory has nothing to say.

The same point can be made in another and more striking way. We have already seen that interpersonal comparisons of utility are not allowed in the theory of rational choice. The utility functions are cardinal, which is to say unique only up to positive affine transformations, which in turn means that neither the units nor the zero points of two distinct utility functions are comparable. All of modern economic theory is erected on this assumption, by the way. [See the classic work by Lionel Robbins, An Essay on the Nature and Significance of Economic Science.] Now, from the point of view of Rational Choice Theory, a person simply is an embodied utility function. If a person's utility function changes, then so far as the theory is concerned, that person is now a new person, no longer the old person, and there can be no useful comparison of that person's utility function before and after the change, because that is the same as trying to compare the utility functions of two different people. In other words, the question posed by the defense intellectuals has no answer.
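A minimal sketch of the point about affine rescaling, with made-up utility numbers: rescaling a utility function never changes which of two gambles it ranks higher, which is exactly why the scale and the zero point carry no information that could be compared across persons.

```python
# Cardinal utility is unique only up to a positive affine transformation
# v(x) = m*u(x) + b with m > 0. Such a rescaling preserves every
# expected-utility ranking, so units and zero point are meaningless
# in themselves and cannot be compared between two people.
u = {"peace": 10.0, "war": 0.0, "truce": 6.0}.get
v = lambda x: 3.0 * u(x) + 7.0          # arbitrary positive affine rescaling

def expected(util, probs, outcomes):
    return sum(p * util(x) for p, x in zip(probs, outcomes))

gamble = ([0.5, 0.5], ["peace", "war"])  # illustrative lottery
sure   = ([1.0], ["truce"])              # illustrative sure thing

print(expected(u, *gamble) < expected(u, *sure))  # True  (5.0 < 6.0)
print(expected(v, *gamble) < expected(v, *sure))  # True  (22.0 < 25.0): same ranking
```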
When I was a child, I spake as a child, I understood as a child, I thought as a child: but when I became a man, I put away childish things.

For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known. [1 Corinthians 13:11-12]

Now, if you think about it for even a moment, you will see that growing up, maturing, and aging is a process, common to all human beings, that among other things involves a change of one's utility function. Surely any useful theory of rational choice must allow for growth and change. But the Theory of Rational Choice does not, and cannot. That does, to put it mildly, seem to be a bit of a problem.

Well, so much for the Theory of Rational Choice, for the moment.

[end of fourth installment of Use and Abuse]

7 comments:

  1. When you say that utility functions are 'exogenously given' you gloss this as their being determined by something outside of the decision system under consideration. If we let the system be a person, then this (correct me if I'm wrong) means that a person's preferences are just given--at least from the perspective of rational choice theory. So we look at the rationality of the person given these preferences by bringing the formal theory to bear on her choices. (Again, please set me straight if I'm wrong about any of this.)

    I agree with you that this is problematic, and for the reasons you mention. But I wonder if someone (perhaps you) has said anything about different ways that a utility function can be exogenously given. Consider: (i) the person's preferences are simply given vs. (ii) the person sets her preferences in response to relevant considerations. I take it that (i) leads to your line of criticism. But I am not sure whether (ii) does.

    Supposing that it counts as an exogenously given utility function if someone establishes her preferences in response to reasons (considerations that favor adopting these preferences), then doesn't this provide for norms of preference-adoption? And this, in turn, provides for criticism of certain utility functions (i.e., on the grounds that their adoption violates these norms). Then the military wonks might decide the question whether to set the policy according to the utility function they currently have or the one the country might have. The revised function might be criticizable on the grounds that to adopt it is to fail to respond reasonably to the facts.

    This might be one way in which (something like) rational choice theory can respond to variance in utility functions. It seems, however, to require a particular understanding of what it is for a preference ordering to be exogenously given.

    I wonder what you think.

  2. Would you be willing to go over the mathematics of Marschak's reply to Allais? Or if one were interested in seeing the details of his reply, do you have a suggestion for where one might look for a summary of formal aspects of the debate?

    Thanks again for the lectures. I am really enjoying them.

    Ben, let me think about this for a bit. I am swamped trying to write for two blogs at the same time. Obviously, a very great deal can be said about the way in which preferences are determined. Once you introduce a notion of substantive rationality, not just formal rationality [which I would want to do], you can start criticizing someone's priorities, not just how he or she acts given those priorities.

    Nathana, have you googled Marschak and Allais to find the Marschak paper? I no longer have a copy, if I ever did.

  4. The best I could find by googling is that it started off with Allais questioning Marschak's "independence axiom" using empirical evidence, getting people including famous economists to violate it in the Allais paradox. Link to his paper:
    http://mikael.cozic.free.fr/allais53.pdf

    And this leads Marschak to make a normative defense of rational choice theory in this paper (contains link to PDF):
    http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.bsmsp/1200500250

  5. The last problem you discuss I find esp. fascinating. I wonder if you believe that rational choice theory's inability to handle dynamic change of preferences is an inherent flaw that cannot be fixed?

    Some philosophers like David Gauthier and Edward McClennen I think are willing to work within the rational choice framework, revising and extending it to handle such cases. E.g., McClennen allows for endogenous change of preferences, where one's previous choices modify one's preferences at a later point, so as to enable a utility-maximizing agent to choose in keeping with one's earlier plan or commitment. (Quite different from what Ben proposes above, more Humean perhaps, but making room for how commitments or long-term plans affect one's preferences over time.)

  6. Mainstream economists most often assume that utility preferences are exogenous, that is, determined outside the economic system. This leaves utility to be determined by non-economic variables such as social norms or psychological drives or philosophical debate. This gives the notion of "consumer sovereignty" some substance.

    Heterodox economists are most likely to analyze "endogenous" utility preferences, based on status, advertising, or class aspirations. Some even argue for "relative" utility, so that more stuff doesn't add to happiness if the entire society is also better off.

  7. Awesome Post. I have been reading your posts slowly but surely as I am sadly inept at mathematics. I am a student at a community college in Northern California. I sincerely appreciate this learning experience Professor!
