Once again, let us pause to catch our breath. We arrived at this magnificent theorem by making a series of very powerful constraining and simplifying assumptions. Let us just list some of them:

(0) We began by talking about games.

(1) We limited ourselves to two-person games.

(2) We limited ourselves to players whose preferences satisfy the six powerful Axioms from which we can deduce that their preferences can be represented by cardinal utility functions.

(3) We limited ourselves to players with strictly competitive preferences.

(4) We allowed for mixed strategies.

(5) We accepted mathematical expectation as a rational way of calculating the value of a strategy involving elements of risk.

(6) We adopted von Neumann's extremely conservative rule for the choice of strategies -- maximizing security levels.

(7) We assumed no pre-play communication between the players.

(8) We assumed perfect knowledge by both players of the information required to construct the payoff matrix or payoff space.

Every one of these assumptions can be altered or dropped. When that happens, a vast array of possibilities opens up. No really powerful theorems can be proved about any of those possibilities, but lots and lots can be said. Here is how I am going to proceed. First, I am going to discuss each of these assumptions briefly and sketch the sorts of possibilities that open up when we drop it or alter it. After that, I will gather up everything we have learned and apply it to a number of specific texts in which Game Theory concepts are used. I will offer a discussion of the so-called Prisoner's Dilemma, a full-scale analysis of John Rawls' central claim in *A Theory of Justice*, a critique of Robert Nozick's *Anarchy, State, and Utopia*, a detailed critique of a book by Jon Elster called *Making Sense of Marx*, a critique of the use made of Game Theory by nuclear deterrence strategists, and some remarks on the use of Game Theory concepts in writings by legal theorists. By then, you ought to be able to carry out this sort of critique yourselves whenever you encounter Game Theoretic or Rational Choice notions in your field of specialization.

Now let me say something about each of the nine assumptions listed above.

**(0) The Modeling of Real Situations as Games**

I identify this as assumption zero because it is so fundamental to the entire intellectual enterprise that it is easy to forget what a powerful simplification and idealization it is. Games are activities **defined by** rules. Imagine yourself watching two people playing chess, not knowing what chess is, but knowing only that a game is being played in the area. How would you describe what you are watching? Which of the things you see are appropriately included in the game and which are extraneous? Which characteristics of the various objects and people in the neighborhood are part of, or relevant to, the game? Is gender relevant? Is race relevant? Is the dog sitting by the table part of the game? Are the troubled sighs of one of the persons a part of the game? How do you know when the game begins and when it ends? Is the clothing of the persons in the area relevant? Are all of the people in the area part of the game, or only some of them? Indeed, are any of them part of the game? You cannot answer any of these questions easily without alluding to the rules of the game of chess. Once you acquaint yourself with the rules of chess, all of these questions have easy answers.

Now imagine yourself watching a war. Not one of the questions I raised in the previous paragraph has an obvious answer with regard to a war. When does a war start and when does it end? Are the economic activities taking place in the vicinity of the fighting part of the war or not? Who are the participants in a war? States that have formally declared war on one another, other nearby states, private individuals? And so forth. War is not a game. I don't mean that in the usual sense -- that it is serious, that people get killed, etc. I mean it in the Game Theory sense. War is not an activity defined by a set of rules with reference to which those questions can be answered. Neither is market exchange, contrary to what you might imagine, nor is love, nor indeed is politics. There are many **descriptive** generalizations you can make about war, market exchange, love, and politics, but no statements that are **determinative** or **definitive** of those human activities. When you apply the concepts of Game Theory to any one of them, you are covertly importing into your discussion all the powerful simplifications and rule-governed stipulations that permit us to identify an activity as a game. Whenever you read an author who uses the concepts of Game Theory [move, payoff, strategy, zero sum, Prisoners' Dilemma, etc] in talking about some political or military or legal or economic situation, think about that.

**(1) Games with more than two persons:**

As soon as we open things up to allow for more than two players in a game, everything gets very complicated. First of all, with three or more players, no meaning can be given to the concept of opposed preference orders. We can still make the assumption of cardinal utility functions if we wish, because that is an assumption about an individual player's preference structure, and has no reference to any particular game. With three or more players, it also becomes difficult to represent the game by means of a payoff matrix. Not impossible -- we can always define an n-dimensional matrix -- just very difficult either to visualize or to employ as a heuristic device for analyzing a game. That is why writers who invoke the concepts or the language of Game Theory will sometimes reduce a complex social situation to "a player and everyone else," in effect trying to turn a multi-player game into a two-player game. That is almost always a bad idea, because in order to treat a group of people as one player, you must abstract from precisely the intra-party dynamics that you usually want to analyze.
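To make the representational point concrete, here is a minimal sketch of what the payoff "matrix" of a three-player game looks like in practice: a table keyed by strategy profiles. The strategy names and payoff numbers are entirely invented for illustration.

```python
# A three-player game's payoffs no longer fit a flat two-dimensional matrix.
# One workable representation is a table keyed by strategy profiles, here
# with each player choosing "L" or "R". All numbers are hypothetical.
payoffs = {
    # (player 1's move, player 2's move, player 3's move): (u1, u2, u3)
    ("L", "L", "L"): (2, 2, 2),
    ("L", "L", "R"): (0, 0, 3),
    ("L", "R", "L"): (0, 3, 0),
    ("L", "R", "R"): (1, 1, 1),
    ("R", "L", "L"): (3, 0, 0),
    ("R", "L", "R"): (1, 1, 1),
    ("R", "R", "L"): (1, 1, 1),
    ("R", "R", "R"): (0, 0, 0),
}

# Looking up one profile is easy; surveying the whole table at a glance,
# the way a 2 x 2 matrix lets you do, is not.
print(payoffs[("R", "L", "L")])  # (3, 0, 0)
```

Even with only two moves per player, the table has eight cells; with more players and more strategies it grows multiplicatively, which is why the matrix loses its value as a heuristic device.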

Multi-player games also for the first time introduce the possibility of coalitions of players. Coalitions may either be overt and explicit, as when several players agree to work together, or they may be tacit, as when players who are not communicating overtly with one another begin to adjust their behavior to one another in reciprocal ways for cooperative ends. Once we allow for coalitions, we encounter the possibility of defections of one or more parties from a coalition, and that leads to the possibility that two players or groups of players will bid for the allegiance of a player by offering adjustments in the payoff schedule, or side payments.

All of this sounds very enticing and interesting, and I can just imagine some of you salivating and saying to yourselves, "Yeah, yeah, now he is getting to the good stuff." But I want to issue a caution. The appeal of Game Theory to social scientists, philosophers, and others, is that it offers a powerful analytical structure. That power is achieved, as I have labored to show you, by making a series of very precise, constraining simplifications and assumptions. As soon as you start relaxing those assumptions and simplifications, you rapidly lose the power of the analytical framework. **You cannot have your cake and eat it too.** By the time you have loosened things up enough so that you can fit your own concerns and problems into the Game Theory conceptual framework, you will almost certainly have lost the rigor and power you were lusting after, and you are probably better off using your ordinary powers of analysis and reason. Otherwise, you are just tricking your argument out in a costume, in effect wearing the garb of a Jedi knight and carrying a toy light saber to impress your children.

**(2) Abrogating one of the Six Axioms**

The six Axioms laid down by von Neumann conjointly permit us to represent a player's preferences by means of a cardinal utility function. There are various ways in which we might ease those axioms. One is to assume only an ordinal preference structure. As we have seen, that is sufficient for solving some two-person games, and it might be sufficient for usefully analyzing some multi-party games. We may need no more than the knowledge of the order in which individuals rank alternatives. All majority rule voting systems, for example, require only ordinal preference orders, a fact that is important when considering the so-called "paradox of majority rule."
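The "paradox of majority rule" mentioned above can be exhibited with nothing more than ordinal rankings. Here is a small sketch, with three hypothetical voters whose rankings are chosen to produce the classic cycle:

```python
# Three voters with purely ordinal rankings (no cardinal utilities needed)
# can produce an intransitive group preference -- the Condorcet cycle.
# The rankings are hypothetical, chosen to illustrate the paradox.
voters = [
    ["A", "B", "C"],  # voter 1 prefers A over B over C
    ["B", "C", "A"],  # voter 2 prefers B over C over A
    ["C", "A", "B"],  # voter 3 prefers C over A over B
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

print(majority_prefers("A", "B"))  # True: A beats B, two votes to one
print(majority_prefers("B", "C"))  # True: B beats C, two votes to one
print(majority_prefers("C", "A"))  # True: C beats A -- the cycle closes
```

Each individual voter's ranking is perfectly transitive; it is only the majority-rule aggregate that cycles, which is the whole point of the paradox.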

The assumption of completeness is very powerful and potentially covertly biased in favor of one or another ideological position, a fact that I will try to show you when we come to talk about Nozick's work. In effect, the assumption of completeness serves the purpose of transforming all relationships into market exchanges, with results that are very consequential and, at least for some of us, baleful.

Transitivity is also a powerful assumption, and some authors, most notably Rawls, have chosen to deny it in certain argumentative contexts. Recall my brief discussion of Lexicographic orders. When Rawls says that the First Principle of Justice is "lexically prior" to the Difference Principle, he is denying transitivity. He is also, as we shall see, making an extremely implausible claim. Whether he understood that is an interesting question.
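For readers who want to see what a lexicographic order looks like mechanically, here is a minimal sketch in the spirit of Rawls' "lexical priority," with outcomes represented as hypothetical (liberty, income) pairs: no gain in the second component, however large, can offset any loss in the first.

```python
# A lexicographic preference over hypothetical (liberty, income) pairs.
# The first component is compared first; the second matters only on ties.
# No amount of income can compensate for any loss of liberty.
def lex_prefers(a, b):
    """True if outcome a is strictly preferred to outcome b under the
    lexicographic order. Python compares tuples lexicographically,
    which is exactly the order we want."""
    return a > b

print(lex_prefers((5, 10), (4, 1_000_000)))  # True: liberty dominates income
print(lex_prefers((5, 10), (5, 9)))          # True: income breaks the tie
```

Notice that the order itself is perfectly well defined; the difficulties arise, as the text goes on to argue, when such an order is combined with the machinery of cardinal utility and expected-value calculation.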

One of the trickiest thickets to negotiate is the relationship between money and utility. Because the Axioms we must posit in order to represent a player's preferences by a cardinal utility function are so daunting, those who like to invoke the impressive-looking formalism of Game Theory almost always just give up and treat the money payoffs in a game [or a game-like situation] as equivalent to the players' utilities. This is wrong, and some folks seem to know that it is wrong, but they almost never get further than just making some casual assumption of declining marginal utility for money. The issue of aversion to risk is usually ignored, or botched.

To give you one quick example of the tendency of writers to ignore the complexity of the six Axioms, here is the entry in the end-of-volume Glossary for "von Neumann-Morgenstern Expected Utility Theory," in *Game Theory and the Law* by Douglas G. Baird, Robert H. Gertner, and Randal C. Picker:

"Von Neuman and Morgenstern proved that, when individuals make choices under uncertainty in a way that meets a few plausible consistency conditions, one can always assign a utility function to outcomes so that the decisions people make are the ones they would make if they were maximizing expected utility. This theory justifies our assumption throughout the text that we can establish *payoffs* for all *strategy* combinations, even when they are mixed, and that individuals will choose a *strategy* based on whether it will lead to the highest expected *payoff*."

Now that you have sweated through my **informal** explanation of each of the six Axioms, I leave it to you whether they are correctly characterized as "a few plausible consistency conditions."

**(3) Relaxing the Assumption of Strictly Competitive Preferences**

As I have already pointed out, there are a great many two-party situations [like two people negotiating over the price of a house] in which the parties do not have strictly opposed preference orders. This is manifestly true in nuclear deterrence strategy situations in which it is in the interest of both parties to avoid one outcome -- namely mutually destructive all-out war.

In addition to games that are partly competitive and partly cooperative, we can also consider totally cooperative games, sometimes called "coordination games." Here is one example. In his book, *The Strategy of Conflict*, Schelling cites a coordination game he invented to try out on his Harvard classes. He divided his class into pairs of students, and told them that without consultation, they were to try to coordinate on a time and place where they would meet. Each member of the pair was to write a time and place on a slip of paper, and then the two of them would read the slips together. "Winning" meant both students choosing the same time and place. An impressive proportion of the pairs, Schelling reported, won the game by coordinating on "Harvard Square at noon when classes let out." Obviously, their success in coordinating involved their bringing to the game all manner of information that would be considered extraneous in a competitive game, such as the fact that both players are Harvard students. Some time after reading this, I was chatting with a Harvard couple I knew, and I decided to try the game out on them. When I opened the first piece of paper, my heart sank. The young man had written, "4:30 p.m., The Coffee Connection." "Oh Lord," I thought, "he didn't understand the game at all." Then I looked at the young lady's piece of paper. It read, "4:30 p.m., The Coffee Connection." It seems that is where they met every day for coffee. Schelling wins again!

Not much in the way of theorems, but a great deal in the way of insight, can be gained from analyzing these situations, as Schelling has shown.

**(4) Mixed Strategies**

The subject of mixed strategies has an interesting history. During the Second World War, the Allies struggled with the problem of defending the huge trans-Atlantic convoys of military supply ships going from the United States to England against the terrible depredations of the Nazi wolf packs of U-boats. The best defense was Allied airplanes capable of spotting U-boats from the air and bombing them, but the question was: what routes should the planes fly? If the planes, day after day, flew the same routes, the U-boats learned their patterns and maneuvered to avoid them. There was also the constant threat of espionage, of the secret anti-U-boat routes being stolen. The Allied planners finally figured out that a mixed strategy of routes determined by a lottery rather than by decision of the High Command held out the most promising hope of success.
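The convoy problem can be sketched as a tiny zero-sum game. The payoff numbers below are entirely made up for illustration; what matters is that the matrix has no saddle point, so the optimal strategy is mixed -- a lottery over routes, exactly as the planners concluded.

```python
# A hypothetical 2 x 2 zero-sum game: the row player (the Allies) chooses
# which route to patrol, North or South; the column player (the wolf pack)
# chooses where to hunt. Entries are payoffs to the Allies, invented for
# illustration. Row minima are -1 and -2; column maxima are 3 and 2; since
# maximin (-1) != minimax (2), there is no saddle point in pure strategies.
M = [[3, -1],
     [-2, 2]]
a, b = M[0]
c, d = M[1]

# Closed-form solution for a 2 x 2 zero-sum game with no saddle point:
# the row player's probability on the first row, and the game's value.
denom = a - b - c + d
p = (d - c) / denom                 # probability of patrolling North
value = (a * d - b * c) / denom     # the security level the lottery guarantees

print(p)      # 0.5 -- flip a fair coin between the two routes
print(value)  # 0.5 -- better than the -1 guaranteed by any pure strategy
```

The moral of the story survives the toy numbers: randomizing by lottery raises the Allies' security level above what any fixed route, however cleverly chosen, could guarantee.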

Generally speaking, however, mixed strategies are a bit of *arcana* perfect for proving a powerful mathematical theorem but not much use in choosing a plan of action.

**(5)-(6) Calculation of Mathematical Expectation versus Maximization of Security Levels**

We have already discussed at some length the limitations of maximization of expected utility as a criterion of rational decision making. von Neumann and Morgenstern reject it in favor of the much more conservative rule of maximizing one's security level. We have also seen that this rule of decision making does not allow for risk aversion [or a taste for risk], unless we totally change the set over which preferences are expressed, so that they become compound lotteries over entire future prospects rather than Outcomes in any ordinary sense. As we have also seen, maximization of expected utility rules out lexicographic preference orders, and when I come to talk about the application of this methodology to nuclear strategy and deterrence policy, I will argue that the assumption of non-lexicographic preference orders covertly constitutes an argument for a nuclear strategy favoring the Air Force or the Army rather than the Navy in the inside-the-Beltway budget battles.
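The two rules can easily recommend different strategies on the very same payoffs. Here is a minimal sketch; the payoff numbers and the assumed probabilities of the opponent's moves are hypothetical, chosen only to make the divergence visible:

```python
# Two decision rules applied to the same hypothetical payoffs. Each
# strategy lists the player's payoff under each of the opponent's two
# possible moves; the probabilities of those moves are assumed.
strategies = {
    "cautious": [1, 1],    # safe: payoff 1 no matter what
    "gamble":   [0, 10],   # risky: 0 in the worst case, 10 in the best
}
probs = [0.5, 0.5]  # assumed chances of the opponent's two moves

# Rule (6): maximize the security level -- the best worst-case payoff.
maximin_choice = max(strategies, key=lambda s: min(strategies[s]))

# Rule (5): maximize mathematical expectation.
expected_choice = max(
    strategies,
    key=lambda s: sum(p * x for p, x in zip(probs, strategies[s])),
)

print(maximin_choice)   # cautious -- worst case 1 beats worst case 0
print(expected_choice)  # gamble -- expectation 5 beats expectation 1
```

The conservative rule picks the safe strategy; expectation-maximizing picks the gamble. Which rule is "rational" is precisely what is at issue, and nothing in the formalism settles it.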

**(7) Pre-Play Communication**

Once we permit pre-play communication, all manner of fascinating possibilities open up. As we might expect, situations with pre-play communication and non-strictly opposed preference orders are among the richest fields for discussion and at the same time allow for the least in the way of rigorous argument or proof. In the hands of an author with a good imagination and a sense of humor, this can be lots of fun, but virtually everything that can be said about such situations can be said without calling them games and drawing imposing-looking 2 x 2 payoff matrices. For example, as any hotshot deal maker in the business world knows, when you are engaged in a negotiation, it is sometimes very useful to make yourself deliberately unreachable as the clock ticks on toward the deadline for a deal. If a deal must be struck by noon on Tuesday, and if both parties want to reach agreement somewhere in the bargaining space defined by the largest amount of money the first party is willing to pay and the smallest amount the second party is willing to accept, it is tactically smart for the buyer to make a lowball offer within that space, and then be unavailable until noon Tuesday [somewhere without cell phone coverage, in the ICU of a hospital, on an airplane]. The seller must then accept the offer or lose the sale. Since by hypothesis the seller is willing, albeit reluctant, to sell at that price, she will accept rather than lose the sale. If the seller sees this coming, she can in turn give binding instructions to her agent to accept no offer unless there is the possibility of a counteroffer before the deadline. Then she can make *herself* unavailable. And so forth. This is the stuff of upscale yuppie prime-time TV shows. It just sounds more impressive when you call it Game Theory.

**(8) Perfect Information**

The general subject of perfect and imperfect information has been so much discussed in economics of late that I need not say anything here. Suffice it to note that formal Game Theory assumes perfect information of the payoff matrix, which embodies both the rules of the game and players' preference structures. Games do allow for imperfect information, of course. Poker players do not know one another's cards, for example. But that is a different matter, built into the rules of the game.

Nassim Taleb (author of *The Black Swan*) talks about assumption 0 a lot (though more in terms of the misuse of probabilities) and calls it the ludic fallacy.

Bacon, I was unfamiliar with Taleb so I read up quickly on the ludic fallacy on Wikipedia, and I agree completely. I must get hold of that when I get home. Think, for example, how much is assumed simply in the notion of moves in a game. In real life, people frequently make two moves before their opponents can make one, or they cheat. How would one analyse cheating in chess?! So much is built into the use of the game theoretic model that its users seem not to notice. Thanks for the reference.

No problem. I recommend his Fooled By Randomness; a truly paradigm-shattering book for me.
