Monday, May 17, 2010


Let us suppose A agrees to play the game with B and then is called away for an emergency. She asks the referee to play for her, following her instructions to the letter. [There has to be a referee to make sure no one cheats.] The referee agrees, but insists that A give him a complete set of instructions, so that no matter what B does, the referee will know how to play A's hand. A says: here is what I want you to do:


    Take 1 matchstick. If B takes 1, take 2. If B takes 2, take 1.


    This set of instructions is called a Strategy. It tells the referee what to do in every situation in which A has a choice. There is no need to specify what the referee is to do when A's move is forced by the rules. The referee is now totally prepared for all eventualities. How many strategies does A have, total, including the one she actually chose? Well, here they are:


    [A1]    Take 1. If B takes 1, take 2. If B takes 2, take 1.

    [A2]    Take 1. If B takes 1, take 1. If B takes 2, take 1.

    [A3]    Take 2.


    Notice that strategy A3 is complete because once A takes 2, the rest of the game, so far as she is concerned, is forced.


    Now let us suppose B says, "Well, if A isn't going to be there, I will just leave my strategy choice with the referee also." What are B's strategies?


    [B1]    If A takes 1, take 2. If A takes 2, take 2.

    [B2]    If A takes 1, take 2. If A takes 2, take 1.

    [B3]    If A takes 1, take 1. If A takes 2, take 1.

    [B4]    If A takes 1, take 1. If A takes 2, take 2.


    So A has three strategies and B has four. There are thus 3 x 4 = 12 possible pairs of strategy choices that A and B can leave with the referee. Notice [very important] that there is no communication between A and B. Each chooses a strategy by him or herself. Now, there are no chance elements in this game -- no rolls of the dice, no spins of a wheel. Game Theory allows for that, but this game doesn't happen to have any such "moves by Lady Luck." Therefore, once you know the strategy choices of A and B, you can calculate the outcome of the game. And we know what the payoffs are. In each possible outcome, either A wins a penny and B loses a penny, or B wins a penny and A loses a penny. Notice that there is no assumption that a penny yields the same amount of utility to A as to B. Indeed, any such statement is meaningless.
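The strategy counts above can be checked mechanically. The sketch below is mine, not the author's, and it assumes the rules as I reconstruct them from the play-throughs in this post: a pile of 4 matchsticks, each player in turn takes 1 or 2, and whoever takes the last stick loses a penny. The enumeration mirrors the reasoning in the text: a player needs an instruction only where she has a genuine choice.

```python
from itertools import product

def choices(pile):
    """Legal moves with `pile` sticks left: take 1, or take 2 if possible."""
    return [1] if pile == 1 else [1, 2]

# A's opening leaves either 3 sticks (she took 1) or 2 sticks (she took 2).
# If she opens with 1 and B replies with 1, two sticks remain and A has a
# real choice; every other continuation of hers is forced by the rules.
a_strategies = (
    [("open with 1", f"if B takes 1, take {r}") for r in choices(2)]
    + [("open with 2",)]          # the rest of the game is then forced
)

# B needs a reply to each of A's possible opening moves: 3 sticks remain
# after A takes 1, and 2 sticks remain after A takes 2.
b_strategies = list(product(choices(3), choices(2)))

print(len(a_strategies), len(b_strategies),
      len(a_strategies) * len(b_strategies))
```

Run, this prints the counts given in the text: 3 strategies for A, 4 for B, and 3 x 4 = 12 possible pairs.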


    We are now ready to construct what Game Theory calls the "payoff matrix," which in this case is a grid three by four, each box of which represents the payoffs to A and B of a pair of strategies that are played against one another. For example, what happens if A tells the referee to play her first strategy, [A1], and B tells the referee [without knowing what A is doing] to play his first strategy, [B1]? Well, A1 tells the referee to take 1 stick. Then it is B's turn, and B1 tells the referee that if A takes 1, the referee is to take 2. Now it is A's turn, and she has no choice but to take the last matchstick. B wins, and A pays B one cent. So the payoff for A is -1, and the payoff for B is +1. Here is the complete payoff matrix for this game:








              B1        B2        B3        B4

    A1      -1, +1    -1, +1    -1, +1    -1, +1

    A2      -1, +1    -1, +1    -1, +1    -1, +1

    A3      +1, -1    -1, +1    -1, +1    +1, -1
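Each cell can be checked by simulating a play of the game. The sketch below is mine; it assumes the rules reconstructed from the worked example above (a pile of 4 matchsticks, take 1 or 2 per turn, and whoever takes the last matchstick pays the other a penny), since the rules themselves were stated in an earlier post.

```python
# A's strategies: (first move, reply if B took 1, reply if B took 2);
# a reply is None where A's later move is forced by the rules.
A_STRATS = {"A1": (1, 2, 1), "A2": (1, 1, 1), "A3": (2, None, None)}
# B's strategies: (reply if A took 1, reply if A took 2).
B_STRATS = {"B1": (2, 2), "B2": (2, 1), "B3": (1, 1), "B4": (1, 2)}

def play(a, b, pile=4):
    """Play out one game; return (A's payoff, B's payoff) in pennies."""
    a_first, a_if_b1, a_if_b2 = A_STRATS[a]
    pile -= a_first
    last = "A"
    if pile > 0:                                  # B's reply
        b_take = min(B_STRATS[b][a_first - 1], pile)
        pile -= b_take
        last = "B"
        if pile > 0:                              # A's second move
            a_take = a_if_b1 if b_take == 1 else a_if_b2
            a_take = pile if a_take is None else min(a_take, pile)
            pile -= a_take
            last = "A"
            if pile > 0:                          # B is forced to finish
                pile = 0
                last = "B"
    # whoever takes the last matchstick pays the other one penny
    return (-1, +1) if last == "A" else (+1, -1)

for a in A_STRATS:
    print(a, [play(a, b) for b in B_STRATS])
```

Under these reconstructed rules the simulation reproduces the matrix above except in row A2 against B3 and B4, where B ends up forced to take the last stick; the final comment on this post flags the [A2, B3] cell for the same reason.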




    You should take a few minutes to look at this carefully and be sure that you see how I derived the figures in the boxes -- the payoffs. This matrix representation is called the normal form of the game. If you look at the payoff matrix just above, you will see that B wins in all but two cases: when A plays strategy A3 and B plays either B1 or B4. Now, Game Theory assumes that both players know everything we have just laid out about the game, so A and B both know the payoff matrix. B can see that if he chooses strategy B2 or B3, then he is guaranteed a win no matter what A does. Furthermore, B2 and B3 are equally good for B. We describe this by saying that B2 and B3 are dominant strategies for B. A is out of luck. Her only hope, and a pretty slim one at that, is to play A3 and hope against hope that B is a dope.
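The dominance reasoning can be checked mechanically against the matrix exactly as printed above. This is a sketch; the list layout and variable names are mine, with rows A1..A3 and columns B1..B4, and each cell holding (A's payoff, B's payoff).

```python
PAYOFFS = [
    [(-1, +1), (-1, +1), (-1, +1), (-1, +1)],   # A1
    [(-1, +1), (-1, +1), (-1, +1), (-1, +1)],   # A2
    [(+1, -1), (-1, +1), (-1, +1), (+1, -1)],   # A3
]

def b_weakly_dominant(col):
    """B's strategy `col` is dominant if it does at least as well as every
    other B strategy against every one of A's strategies."""
    return all(
        PAYOFFS[row][col][1] >= PAYOFFS[row][other][1]
        for row in range(len(PAYOFFS))
        for other in range(len(PAYOFFS[0]))
    )

dominant = [f"B{c + 1}" for c in range(4) if b_weakly_dominant(c)]
print(dominant)
```

On the printed matrix this singles out B2 and B3, matching the analysis in the text.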


    With this elementary example before us, let me now make several comments.


    [1] Legal theorists, political scientists, sociologists, philosophers all seem to think that there is something deep and profound about the Prisoner's Dilemma. Well, I invented the simplest game I could think of, and in that idiot game, there are three strategies for A and four for B. The Prisoner's Dilemma is a game with only two strategies for each player. How can something that much simpler than the idiot game I invented possibly tell us anything useful about the world? The truth is, it can't!


    [2] From the point of view of Game Theory, the entire game is represented by the payoff matrix. Any information not contained in the payoff matrix [like the fact that this game uses matchsticks, or that B has brown eyes] is irrelevant. All of the games with the same payoff matrix are, from the point of view of Game Theory, the same game. For a long time, until we get to something called Bargaining Theory, the little stories I tell about the games I am analyzing will serve simply to make the argument easier to follow. All the inferences will be based on the information in the payoff matrix. When we get to Bargaining Theory, which is tremendous fun but rather light on theorems, it will turn out that a great deal turns on what story you tell about the game. [For those of you who are interested, the classic work, which also won the author a Nobel Prize, is The Strategy of Conflict by Thomas Schelling.]


    [3] In the game above, the only information we actually use about the payoffs is A's ordinal preference for the possible outcomes of the game and B's ordinal preference for those outcomes. We make no use of the fact that the payoffs are money, nor do we use the fact that the amount of money won by one player happens to equal the amount of money lost by the other player. Even when we are talking about ordinal preference, I am going to use numbers, simply because they make it very easy to keep in mind the players' preference orders.


    [4] At a certain point, when we introduce moves by Lady Luck [roll of the dice, spin of the wheel, etc.], we will have to shift up to cardinal preference orders for A and B. At that point, we will need cardinal numbers for the entries in the payoff matrices. The numbers before the comma will be A's utility for a certain outcome, as determined by A's cardinal utility function, and the numbers after the comma will be those for B. NOTHING AT ALL CAN BE INFERRED FROM THE NUMERICAL RELATIONSHIP BETWEEN AN ENTRY IN FRONT OF A COMMA AND THE ENTRY AFTER A COMMA. This is because the utility indices indicated by the numbers before the comma are invariant up to a linear transformation [or an affine transformation, as it is apparently now called, but I am too old to learn anything], and the same is true for the utility indices after the comma. If I multiply all of B's utilities for payoffs by one million, no information has been added or lost.
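Point [4] can be illustrated with a small sketch. The utility numbers and lotteries below are made up for illustration; the point is that ranking lotteries by expected utility survives any positive affine rescaling u -> a*u + b (with a > 0), such as the author's "multiply all of B's utilities by one million."

```python
def expected_utility(lottery, u):
    """lottery: list of (probability, outcome) pairs; u: utility index."""
    return sum(p * u[outcome] for p, outcome in lottery)

u_B = {"X": 0.0, "Y": 1.0, "Z": 4.0}               # B's utility index (illustrative)
rescaled = {o: 1_000_000 * v + 7 for o, v in u_B.items()}  # positive affine rescale

L1 = [(0.5, "X"), (0.5, "Z")]   # even chance of X or Z
L2 = [(1.0, "Y")]               # Y for certain

# The comparison between the two lotteries comes out the same either way.
before = expected_utility(L1, u_B) > expected_utility(L2, u_B)
after = expected_utility(L1, rescaled) > expected_utility(L2, rescaled)
print(before, after)
```

No information has been added or lost by the rescaling: the ranking of the lotteries is identical before and after, which is exactly why nothing can be read into the raw numerical sizes of the entries.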


    [5] A is assumed to have a utility function that assigns ordinal [later cardinal] numbers to the outcomes. So is B. The outcomes are the terminations of the game as defined by the rules and diagrammed on the game tree. The rules may simply stipulate who is declared to have won and who has lost, or they may assign various payoffs, in money or anything else, to one or more of the players. No assumption is made about the attitudes of the players to these outcomes, save that their attitudes must generate consistent [ordinal or cardinal] preference orders of the outcomes. A can perfectly well prefer having B win the game over herself winning the game. Eventually, we will be assuming that both A and B are capable of carrying out expected utility calculations, and that each prefers an outcome with a greater expected utility to one with a lesser expected utility. But that assumption does not have built into it any hidden assumptions about what floats A's boat. It is utility, not money or anything else, that A and B are maximizing.


    [6] We have been talking thus far only about two person games. The mathematical theory developed by von Neumann is capable of proving powerful theorems only for two person games. A great deal can be said about multi-person games, especially those allowing for pre-play communication, which leads to coalitions, betrayals, and all manner of interesting stuff. But unfortunately not much that is rigorous and susceptible of proof can be said about such games.


    [7] Really important: Game Theory treats the extensive form of a game [game tree] and the normal form of the game [payoff matrix] as equivalent. As we have already seen in the case of planning for nuclear war, that assumed equivalence can be problematic, because in the playing out of the game in extensive form, the utility functions of the players may change. We will talk some more about this later, but for now, we are going to accept Game Theory's treatment of the two forms of a game as equivalent.




  1. In this game, there is no cell where both A and B can win.

    As I understand the Prisoner's Dilemma, the "tragedy" is that there IS such a cell where both can win, but the players won't choose it unless there is trust and communication between them...or repeat games.

    Is this correct?

  2. No, that is a confusion. There is nothing called "winning" or "losing" in the Prisoner's Dilemma game. The problem is that there is an outcome that is preferred by both of the players to the outcomes they end up with, so if they could coordinate on it and count on each other not to "defect" to another strategy, each would be better off. We shall discuss that when we get to it, but there is a great deal of work to be done first.

  3. Thank you for the clarification.

    To say that there is an outcome that is "preferred" then raises the question of utility and how we measure it again, as you have pointed out.

    But the "prisoner's dilemma" still holds considerable interest, in my view....not to be easily dismissed.

  4. Correct me if I am wrong, but you can surely have both players prefer a certain outcome when they both have rational preferences (e.g., by hypothesis, both prefer more utils to less utils, and there is a certain strategy that yields both more utils). I think the measurement point is that there is no way to do a direct numerical comparison of the utility FUNCTIONS between two people.

  5. What is the distinction between winning a game and utility maximization on the part of each player?
    In the case that B wins and that A would prefer that B win, can both players be said to win? Would that not mean that if B wins, then A wins as well, in that A gets more utility than would be possible otherwise?

  6. I'm curious about Nathana's second question as well. So far I can think of two possible responses:

    (1) Either A's utility gain from B's utility gain is already included in the payoffs listed, or not. If it is already included, then A should decide according to her payoffs. If it is not already included, then the game in question (where A gains utility from B's utility gain) is a different game from the one currently under consideration, because the outcomes have different payoffs for A.

    (2) It is a requirement that A's utility function be independent of B's. I don't think this requirement is based on the stricture against interpersonal comparison of utility, at least not in any obvious way?

  7. You are getting hung up on the term "win." Each possible route through the game tree has an outcome. The players have preferences over those outcomes. If the rules of the game stipulate that certain termini are labeled wins for A and others wins for B, those are just the rules of the game. Being pleased with an outcome is not at all the same thing as "winning." In chess, for example, a weaker player might be quite pleased with a draw against a stronger player, but if the rules label it a draw, it is a confusion to say that the player "won" because he was pleased with the outcome. This is exactly the sort of confusion that one gets into when one uses the concepts of Game Theory loosely and informally. That is what this entire effort is designed to teach.

  8. [A2, B3] ends in defeat for player B, not player A.