Sunday, May 30, 2010

SPRING BREAK

On Wednesday, I will post the last installment [big boffo ending] before going on a three week vacation to Paris. Before I go, I will post links to three things I have written -- one on Rawls, one on Nozick, and one on Elster -- that I would very much like you to read before I return. When I come back, I will start a lengthy discussion of the ways in which these formal materials are used in Political Theory, Deterrence Theory and Military Strategy, and in Legal Theory. That is the real payoff of everything we have been doing so far, and I hope you will stay with me as we go through that material.

Eleventh Installment

    Now let us introduce the concept of a Mixed Strategy. All along, we have been working with two-person games whose rules allow for only a finite number of moves [with cut-off points like the rules that limit a chess game]. The Game Tree for such a game, however complex, defines a finite number of strategies for each player, even though that number may, as we have seen, be large even for very simple games. With a finite number of strategies for each player, we can convert the extensive form of the game to the normal form by constructing a payoff matrix with a finite number of rows representing A's strategies and a finite number of columns representing B's strategies. [From a mathematician's point of view, it doesn't matter how big the number of rows and columns is, so long as they are finite in number]. But as we have seen, even with strictly opposed preferences, a game in which both players seek to maximize their security level may not have a stable equilibrium solution. Now von Neumann takes the final step by introducing the concept of a mixed strategy.

    When A gives her instructions to the Referee, she need not specify one of her pure strategies. Instead, she can define a probability distribution over her pure strategies and instruct the Referee to construct a Lottery that embodies that distribution. Just to be clear, this is a Lottery in which the "prizes" are strategies, not outcomes. Before leaving for her appointment, A tells the Referee to spin the wheel that has been constructed and play whichever strategy comes up. This is a real spin of the wheel. Neither A nor the referee can know which pure strategy will be played until the wheel has been spun. B can do the same thing, of course.

    Each Lottery, or probability distribution over the set of pure strategies, is a mixed strategy, and quite obviously there are an infinite number of them. With an infinite number of mixed strategies for A, and an infinite number for B, there are of course also an infinite number of mixed strategy pairs, which is to say pairs each of which consists of one mixed strategy for A and one mixed strategy for B. Notice that a pure strategy now becomes simply a mixed strategy in which all the probability weights but one are zero.

    For any mixed strategy pair, A can calculate the value to her of those mixed strategies being played against one another, although it is obviously tedious to do so. She says to herself: Suppose I play mixed strategy MA1 and B plays mixed strategy MB1. MA1 offers a .3 probability of my playing pure strategy A1, and MB1 offers B a .2 probability of his playing pure strategy B1, so I will calculate the payoff to me of A1 played against B1 and then discount that payoff, or multiply it, by (.3)(.2) = .06. Then I will do the same for A2 against B1, etc etc etc. Then I will add up all the bits of payoffs, and that is the value to me of the mixed strategy pair (MA1, MB1). I hope by now this is clear and reasonably obvious.
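    For the programmers among my readers, here is a minimal sketch of that calculation in Python, for a case in which each player has just two pure strategies. The weights and payoffs are invented for illustration; nothing said above fixes them.

        # A's tedious calculation, with invented numbers: the value of the
        # mixed pair (MA1, MB1) is the probability-weighted sum of the
        # payoffs of every pure strategy pair
        MA1 = [0.3, 0.7]            # weights A assigns to A1, A2
        MB1 = [0.2, 0.8]            # weights B assigns to B1, B2
        payoff_to_A = [[0.6, 0.1],
                       [0.3, 0.9]]  # payoff to A of Ai against Bj, normalized 0 to 1

        value = sum(MA1[i] * MB1[j] * payoff_to_A[i][j]
                    for i in range(2) for j in range(2))
        print(value)   # 0.036 + 0.024 + 0.042 + 0.504 = 0.606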

    However, we can no longer construct a payoff matrix, because there are infinitely many mixed strategies for each player. Instead, we need a space of points that represent the payoffs to A of each of the infinite pairs of mixed strategies. Since we are dealing now with strictly competitive zero-sum games, we do not need to represent the payoff to B. Under the zero-sum normalization we settled on in the last installment, B's payoff is simply the negative of A's.

    At this point I must do what mathematicians call "hand-waving." That is, instead of giving rigorous definitions and proofs [not that I have been doing that so far], I must simply wave my hands and describe informally what is going on, and hope that you have enough mathematical intuition to grasp what I am saying. To represent all of the mixed strategy pairs and their payoffs to A, we are going to need a space with (n-1) + (m-1) + 1 dimensions. The first (n-1) dimensions will represent the probability weights being given to A's n pure strategies. [(n-1) because the weights must add up to 1, so once you have specified (n-1) of them, the last one is implicit.] The next (m-1) dimensions represent the probability weights being given to B's m pure strategies. The last dimension, which you can think of intuitively as the height of a point off of a hyperplane, represents A's payoff. We only need to represent A's payoff because this is a zero-sum game, and B's payoff is just the negative of A's. Obviously, we are only interested in the part of the space that runs, on each axis, from 0 to 1, because both the probability weights and the payoffs all run from 0 to 1 inclusive.

    It is said that the great Russian-born English mathematician Besicovitch could visualize objects in n-dimensional vector space. If he wanted to know whether something was true, he would "look" at the object in the space and rotate it, examining it until it was obvious to him what its properties were. Then he would take out pen and paper and go through the tedium of proving what he already knew was true. I suspect the same must have been true of von Neumann. Well, God knows it isn't true of me, so I must just soldier on, trying to connect the dots.

    You and I can get some visual sense of what such a space would be like by thinking of the simplest case, in which A and B each have only two strategies. In that nice simple case, the number of dimensions we need is (2-1) + (2-1) + 1, or 3. And most of us can imagine a three-dimensional system of axes. Just think of a three dimensional graph with an x-axis and a y-axis forming a plane or bottom, and a z-axis sticking up. The infinity of A's mixed strategies can be represented as points along the x-axis running from the origin to plus 1. The origin represents the mixed strategy with zero weight given to A1, and therefore a weight of 1 given to A2. In other words, it represents the pure strategy A2. Any point along the line represents some mixture of A1 and A2. The point 1 on the x-axis represents the pure strategy A1. Same thing on the y-axis for B's strategies B1, B2, and the mixtures of them. We thus have a square bounded by the points (0,0), (1,0), (0,1), and (1,1). The z-axis measures the payoff to A for each point in that square; those payoff points, each between 0 and 1 in height, together form a surface over the square.
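    For those who would like to see that surface concretely, here is a minimal Python sketch. The four corner payoffs are invented; the function is nothing but the expected payoff calculation described a few paragraphs back, applied to a point (x, y) of the square.

        # the payoff surface over the unit square: x is the weight A puts on A1,
        # y is the weight B puts on B1; the corner payoffs a11..a22 are invented
        def surface(x, y, a11=0.2, a12=0.8, a21=0.7, a22=0.3):
            return (x * y * a11 + x * (1 - y) * a12 +
                    (1 - x) * y * a21 + (1 - x) * (1 - y) * a22)

        print(surface(1, 1))      # corner of the square: pure A1 against pure B1 -> 0.2
        print(surface(0, 0))      # corner: pure A2 against pure B2 -> 0.3
        print(surface(0.4, 0.5))  # interior point; for these numbers, the saddle, at height 0.5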

    Now [here goes the hand-waving], the function mapping mixed strategy pairs onto payoffs is a continuous one, because a tiny change in the assignment of probability weights results in a tiny change in the payoff. [I hope that is obvious. If it isn't, take my word for it -- which is a great thing for a philosopher to say who is trying to explain some mathematics, I know, but I have my limits!]

    OK. Got that in your mind's eye? Now, let us recall that von Neumann offered a "solution" of strictly competitive games in terms of something called security levels and equilibrium pairs of strategies. Suppose that somewhere in that space of payoffs, there is a point that represents the payoff to a pair of equilibrium mixed strategies. What would that mean and what would it look like?

    Well, what it would mean is this: First of all, if B holds to his mixed-strategy choice, any movement A makes back and forth along the x-axis is going to be worse for her. [That is what it means to maximize your security level]. Visually, that means that as she moves back and forth along the x-axis, the point in space representing the payoff to her goes down. What is more, because of continuity, it goes down smoothly. A little movement one way or the other produces a little move down of the payoff point. A bigger move one way or the other produces a bigger move down. For B, the whole situation is reversed, because B's payoffs are equal to the negative of A's payoff. [Zero-sum game]. So, if A holds to her mixed strategy choice, any movement B makes along the y-axis will push the payoff point up [up for B is bad, because the payoff point is A's payoff, and B's is the negative of that.]

    Now, what does this region of the payoff surface look like? Well, the point we are focusing on has the property that if you move back or forth in one dimension the surface falls off, and if you move back or forth in the other dimension, the surface climbs. Have you ever watched a Western movie? Have you ever ridden a horse? Can you see that the surface looks like a saddle? Especially a Western saddle, which has a pommel to grab onto and a nice high back to the seat. The point right under you on the saddle, the point you are sitting on, is a "saddle point." It has the property that if you run your finger from that point side to side [along the x-axis, A's axis], your finger goes down, and if you run your finger from that point front or back [along the y-axis, B's axis], your finger goes up.

    Now we know what an equilibrium point looks like, at least in three dimensional space, for the case in which A and B each have two pure strategies. Exactly the analogous thing would be true of a hyperplane in hyperspace [you can get your light sabers at the desk before we go into warp speed]. So, we can say that if there is a saddle point in the space representing a two person zero sum mixed strategy game, then that point will occur at the intersection of an equilibrium pair of mixed strategies, and in that sense will be a Solution to the game.

    So, are there such points? Now comes the boffo ending for which all of this has been a preparation. John von Neumann proved the following theorem: Every two person zero sum mixed strategy game has a solution. That solution is represented by a saddle point in the n-dimensional vector space representing the normal form of the game. I really think von Neumann must have seen this in one exquisite flash of mathematical intuition, and then just cranked out a proof of a proposition he could just see was true. I am not going to go through the proof [I am not completely crazy], but having brought you this far, I think I owe it to you to just tell you the idea underlying it.

    In a nutshell, here it is. von Neumann defines a continuous transformation or mapping of the strategy space onto itself. He then proves that the transformation has this neat property: a point is mapped by the transformation onto itself [i.e., it remains invariant under the transformation] if and only if that point is a saddle point, and is thus a solution to the game. He then appeals to a famous theorem proved by the great Dutch mathematician L. E. J. Brouwer, which states that every continuous transformation of a compact convex set into itself has at least one fixed point, which is to say a point that the transformation maps onto itself. [Hence, this is known as the Fixed Point Theorem.] Ta Da! [Can you believe that when I taught this stuff to my philosophy graduate students at UMass, I not only proved von Neumann's theorem, I even proved Brouwer's theorem? Ah, there were giants in the earth in those days, as the Good Book says.]

    And that is it, folks. That is the high point of formal Game Theory. There is a vast amount more to say, and I am going to say a good deal of it, but the subject never again rises to this level of formal elegance or power. Notice, before we move on, one important fact. von Neumann proves that every zero-sum two person mixed strategy game has a solution. But since Brouwer's theorem just tells you there exists a fixed point, and doesn't tell you how to find it, in general Game Theory cannot tell us how to solve even this limited category of games. [If there is anyone out there who has ever been involved with Linear Programming, every Linear Programming problem is equivalent to a zero-sum mixed strategy two person game, so that is why, in a certain sense of why, you also cannot be sure of solving a Linear Programming problem.] Oh yes, one final point, which we have already encountered in a simpler form. If there are two saddle points, they are equivalent, in the sense that they give the same payoffs to A and to B.
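    For the programmers among you, that Linear Programming connection can be made concrete. Here is a minimal sketch, assuming the numpy and scipy libraries are available, that solves the 4 x 4 vacation game of Installment 7.5 [below, given the reverse order of a blog] as a linear program. It recovers the saddle point we find there by inspection: all of A's weight on A3, and a value of 3.

        # a sketch, not the general theory: solve a zero-sum game as an LP
        import numpy as np
        from scipy.optimize import linprog

        M = np.array([[ 9, -4,  2, -1],     # payoffs to A; B's are the negatives
                      [ 1,  3, -1,  0],
                      [ 6,  4,  5,  3],
                      [-3,  5,  1, -2]])
        n, m = M.shape

        # variables: A's n probability weights plus the game value v;
        # linprog minimizes, so we minimize -v in order to maximize v
        c = np.zeros(n + 1)
        c[-1] = -1.0
        # for every column j of B:  v - sum_i x_i * M[i, j] <= 0
        A_ub = np.hstack([-M.T, np.ones((m, 1))])
        b_ub = np.zeros(m)
        # the weights must form a probability distribution: they sum to 1
        A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
        b_eq = np.array([1.0])
        bounds = [(0, 1)] * n + [(None, None)]   # v itself may be negative

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        print(res.x[:n])    # A's optimal mixture: all weight on A3
        print(res.x[-1])    # the value of the game to A: 3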

Friday, May 28, 2010

SHAMELESS COMMERCE DEPARTMENT

Since I am about to leave for Paris, I thought I would take the opportunity to do some advertising. Six years ago, Susie and I bought a little one room ground floor studio apartment on the Left Bank in the old part of Paris, half a block from the Seine, catty-corner from Notre Dame. We rent it out when we are not there, regularly advertising in the back pages of the NEW YORK REVIEW OF BOOKS. The ad reads "Paris - Blue State Special." We will be there in June, and it is rented for much of the summer, but starting at the beginning of September, it is mostly open. If anyone is interested, send me an email. Where else in Paris can you find the complete German language edition of the collected works of Marx and Engels, not to speak of a shelf of books on Kant and a copy of every version of every book I have ever published?

TENTH INSTALLMENT

I want now to take some time to make sure that everyone understands just how strong these assumptions are, and also exactly how to interpret them. The first point to understand is in a way the hardest. You might think that our subject, A, decides how she feels about all of these simple and compound lotteries by carrying out expected utility calculations and then saying to herself, "Well, since this one has a greater mathematical expectation than that one, I prefer this one to that one." You might think that, because, good heavens, how else could she possibly decide which she prefers to which? But if you thought that [which of course none of you does], you would be WRONG, WRONG, WRONG! TOTALLY WRONG, WRONG, WRONG! That would be, to use correctly a phrase that these days is almost always used incorrectly, begging the question. It would be assuming what is to be proved, and thus arguing in a circle. What von Neumann actually supposes is that our subject, A, looks at the outcomes O1, O2, etc., and decides how she feels about them. She ranks them in order of her preference. She then looks at the infinitude of simple lotteries and compound lotteries and decides how she feels about them as well. She merges this all in her mind into a single complete, transitive ordering of all of those outcomes and simple lotteries and compound lotteries. Then von Neumann posits that her preferences, thus arrived at, in fact obey the six Axioms. If that is so, then, von Neumann shows, her preferences can be represented AS THOUGH she were carrying out expected utility calculations in her head in accordance with the axioms.

    We are talking here about an enormously powerful set of idealizing and simplifying assumptions, as powerful in their way as the assumptions economists have to make before they can talk about continuously twice differentiable production functions [which they need in order to prove their nifty equilibrium theorems.] Let me draw on something I said earlier to show you just how powerful these Axioms are. Look at Axiom V, the transitivity axiom, and let us recall the eye doctor example. Suppose that the lotteries A is comparing are big Amusement Park wheels, on which are marked off different sized wedges [each defined by two radii], each one of which is associated with one of the outcomes in the set, O. It would be no problem at all to construct a whole series of wheels, each of which is such a tiny bit different from the one next to it that when A is shown the two wheels together, she looks at them and says, "I am indifferent between those two lotteries." But suitably arranged, the series of wheels might very slowly, indiscernibly, alter the size of the wedges associated with two prizes or outcomes, Oi and Oj, until, if we were to show A the first and the last in the series, she would look at them and say, flatly, "I prefer the one on the left to the one on the right." Whoops. No transitivity! Axiom V rules out any such state of affairs.
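    A toy simulation makes the point vivid. The discrimination threshold and the wedge sizes below are invented; all that matters is that each step in the series falls below the threshold while the two ends differ by far more than it.

        # each wheel shifts 0.005 of probability toward one prize, and A cannot
        # discriminate differences in wedge size smaller than 0.01 (both invented)
        threshold = 0.01
        wheels = [0.30 + 0.005 * k for k in range(21)]   # weight on Oi in each wheel

        pairwise_indifferent = all(abs(a - b) < threshold
                                   for a, b in zip(wheels, wheels[1:]))
        ends_discriminable = abs(wheels[-1] - wheels[0]) >= threshold
        print(pairwise_indifferent, ends_discriminable)  # True True -- no transitivity!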

    Well, you can think about each one of the Axioms and see whether you can imagine a situation in which the assumption of that Axiom clearly requires something very strong and even counterintuitive. But rather than go on about that, I am going to take the next step.

    We are now ready to extend our notion of strictly opposed preference orders. Recall that we describe the preference orders of A and B over a set of outcomes, O, as "strictly opposed" when A prefers Oi to Oj or is indifferent between them if and only if B prefers Oj to Oi or is indifferent between them. We will describe the preference orders of A and B over the infinite set of lotteries, simple and compound, over the set of outcomes, O, as "strictly competitive" when A prefers Lottery L1 to Lottery L2 or is indifferent between them if and only if B prefers L2 to L1 or is indifferent between them. This means that A and B not only rank all of the outcomes in exactly opposite ways. They also rank all of the lotteries, simple or compound, over those outcomes in exactly opposite ways.

    In this very specific set of circumstances [where all six axioms apply to both A's preferences and B's preferences, and A and B have strictly competitive preferences], we can normalize the utility functions of A and B so that for any lottery, L, simple or compound, over the set of outcomes, O, the sum of the utility index assigned to L by A's utility function and the utility index assigned to L by B's utility function is a constant. This is what is meant by saying that a game played by A and B is a constant sum game.

    Rather than grind out an algebraic proof, I will offer a simple, intuitive proof that should be easy to grasp. We shall use u(L) to mean the utility that A's utility function assigns to L, and u'(L) to mean the utility that B's utility function assigns to L. Now, we are permitted arbitrarily to let A's most preferred outcome, O1, have a utility of 1, and A's least preferred outcome have a utility of 0. Since A and B have strictly opposed preferences for outcomes, B's most preferred outcome is On and his least preferred outcome is O1. We are permitted to set B's utility for On equal to 1 and for O1 equal to 0. So the utility assignments of both A and B can be portrayed as lying along a line that runs between 1 and 0.

    No matter what lottery, L, we have chosen, we know from the Axioms that it is equivalent, for A, to some lottery over just O1 and On whose probability weights are u and (1-u) for some u. Think of that as a point somewhere on the line running between 1 and 0. [Remember that for the best and worst alternatives, O1 and On, the point is an endpoint of the line.] The same thing is true for B. We are now going to prove that the point on the line representing A's utility for L and the point on the line representing B's utility for L are the same point. To prove this, we will assume the contrary and derive a contradiction with our assumption that A and B have strictly opposed preferences. So, let us choose a point representing u(L) and a different point representing u'(L), and then choose some point that lies between those two points, which we shall call S. Here is a picture of the situation. The line runs from 1 to 0 for A, and from 0 to 1 for B:

         1                                                                        0
         |-----------------u(L)---------------S-------------u'(L)----------------|
         0                                                                        1

    The point S represents a lottery, Ls, with weights S for On and (1-S) for O1. Now, just from looking at the diagram, we can see the following:

    (i) A prefers L to Ls, because L puts greater weight on O1 than Ls does. [u(L) is closer to the 1 than S is].

    (ii) B prefers L to Ls, because L puts greater weight on On [his favorite] than Ls does. [u'(L) is closer to his 1 than S is.]

    But this means that A and B do not have strictly opposed preferences, since they both prefer L to Ls. And this contradicts the assumption. So no matter which lottery L we choose, there cannot be a point S between u(L) and u'(L), which means they are the same point.

    But if they are the same point, then A's utility is u and B's utility is u' = (1-u), regardless of which lottery, L, we choose. And:

    u + u' = u + (1-u) = 1    

    Now, B's utility function is invariant under an affine (linear) transformation. So let us introduce the following affine transformation:

    u'' = u' - 1    

    What this does is to re-label B's utility assignments so that instead of running from 1 to 0, they run from 0 to -1. This means that A's and B's utilities for any arbitrary lottery L are no longer u and (1-u). Instead, they are now u and -u. And the sum of u and -u is zero.

    THIS, AND ONLY THIS, IS WHAT IS MEANT BY SAYING THAT A GAME PLAYED BY A AND B IS A ZERO-SUM GAME.

Wednesday, May 26, 2010

Ninth Installment

At long last, we are ready to state the six assumptions about someone's preferences, or Axioms, as von Neumann and Morgenstern call them, the positing of which is sufficient to allow us to deduce that the person's preferences over a set of outcomes can be represented by a Cardinal Utility Function. There is a very great deal of hairy detail that I am going to skip over, for two reasons. The first is that I want there to be someone still reading this when I get done. The second is that it is just too much trouble to try to get all this symbolism onto my blog. You can find the detail in Luce and Raiffa. O.K., here we go.

    Assume there is a set of n outcomes, or prizes, O = (O1, O2, ...., On)

AXIOM I: The individual has a weak preference ordering over O, with O1 the most preferred and On the least preferred, and this ordering is complete and transitive. Thus, for any Oi and Oj, either Oi R Oj or Oj R Oi. Also, if Oi R Oj, and Oj R Ok, then Oi R Ok. [I told you we would use that stuff at the beginning.]

AXIOM II: [A biggie] The individual is indifferent between any Compound Lottery and the Simple Lottery over O derived from the Compound Lottery by the ordinary mathematical process of reducing a compound lottery to a simple lottery [as I did for the example].

    This is a very powerful axiom, and we have already met something like it in our discussion. In effect, it says that the individual has neither a taste for nor an aversion to any distribution of risk. The point is that the Compound Lotteries may exhibit a very broad spread of risk, whereas the Simple Lottery derived from them by the reduction process may have a very narrow spread of risk. Or vice versa. The individual doesn't care about that.

AXIOM III: For any prize or outcome Oi, there is some Lottery over just the most and least preferred outcomes such that the individual is indifferent between that Lottery and the outcome Oi. A Lottery over just the most and least preferred outcomes is a Lottery that assigns some probability p to the most preferred outcome, O1, and a probability (1-p) to the least preferred outcome, On, and zero probability to all the other outcomes. Think of this as a needle on a scale marked 0 to 1. You show the person the outcome Oi, and then you slide the needle back and forth between the 1, which is labeled O1 and the 0 [zero] which is labeled On. Somewhere between those two extremes, this Axiom says, there is a balancing point of probabilities that the person considers exactly as good as the certainty of Oi. Call that point Ui. It is the point that assigns a probability of Ui to O1 and a probability of (1 - Ui) to On.

    We are now going to give a name to the Lottery we are discussing, namely the Lottery [UiO1, (1- Ui)On]. We are going to call it Õi . Thus, according to this Axiom and our symbolism, the player A is indifferent between Oi and Õi.

    If you have good mathematical intuition and are following this closely, it may occur to you that this number between 1 and 0, Ui, is going to turn out to be the Utility Index assigned to Oi in A's cardinal utility function. You would be right.

    This Axiom is essentially a continuity axiom, and it is very, very powerful. It implies a number of important things. First, it implies that A does NOT have a lexicographic preference order. All of the outcomes are, in A's eyes, commensurable with one another, in the sense that for each of them, A is indifferent between it and some mix or other of the most and the least preferred outcomes. It also implies that we can, so far as A's preferences are concerned, reduce any Lottery, however complex, to some Simple Lottery over just O1 and On. The Axiom guarantees that there is such a Lottery. Notice also that this Axiom implies that A is capable of making infinitely fine discriminations of preference between Lotteries. In short, this is one of those idealizing or simplifying assumptions [like continuous production functions] that economists make so that they can use fancy math.
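    The sliding needle of Axiom III is, in effect, a binary search. In this sketch the number 0.37 is an invented stand-in for A's introspective judgment about Oi; the point on which the search converges is the Ui of the Axiom.

        # slide the needle between On (p = 0) and O1 (p = 1) until the lottery
        # (p O1, (1-p) On) balances the sure outcome Oi
        def balancing_point(worth_of_Oi=0.37, tolerance=1e-6):
            lo, hi = 0.0, 1.0
            while hi - lo > tolerance:
                p = (lo + hi) / 2
                if p < worth_of_Oi:    # A still prefers Oi: slide the needle up
                    lo = p
                else:                  # A now prefers the lottery: slide it down
                    hi = p
            return (lo + hi) / 2

        print(round(balancing_point(), 4))   # 0.37 -- this number is Ui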

AXIOM IV. In any lottery, Õi can be substituted for Oi. Remember, Axiom III says that A is indifferent between Õi and Oi. This axiom says that when you substitute Õi for Oi in a lottery, A is indifferent between the old lottery and the new one. In effect, this says that the surrounding context in which you carry out the substitution makes no difference to A. For example, the first lottery might assign a probability of .4 to the outcome Oi, while the new lottery assigns the same probability, .4, to Õi. [If you are starting to get lost, remember that Õi is the lottery over just O1 and On, such that A is indifferent between that lottery and the pure outcome Oi.]

AXIOM V. Preference and Indifference among lottery tickets are transitive relations. So if A prefers Lottery 1 to Lottery 2, and Lottery 2 to Lottery 3, then A will prefer Lottery 1 to Lottery 3. Also, if A is indifferent between Lottery 1 and Lottery 2, and is indifferent between Lottery 2 and Lottery 3, then A will be indifferent between Lottery 1 and Lottery 3. This is a much stronger Axiom than it looks, as we shall see presently.

    If you put Axioms I through V together, they imply something very powerful, namely that for any Lottery, L, there is a lottery over just O1 and On, such that A is indifferent between L and that lottery over O1 and On. We need to go through the proof of this in order to prepare for the last axiom, which wraps everything up.

    Let L be the lottery (p1O1, p2O2, ...., pnOn).

    Now, for each Oi in L, substitute Õi. Axioms III and IV say this can be done.

    So, using our previous notation, where xIy means A is indifferent between x and y,

    (p1O1, ..., pnOn) I (p1Õ1, ..., pnÕn) so, expanding the right hand side,

    (p1O1, ..., pnOn) I (p1[U1O1, (1-U1)On], ..., pn[UnO1, (1-Un)On]) or, multiplying through and gathering terms,

    (p1O1, ..., pnOn) I ([p1U1 + p2U2 + ... + pnUn]O1, [p1{1-U1} + ... + pn{1-Un}]On)

    if we let p = p1U1 + p2U2 + ... + pnUn then we have:

    (p1O1, ..., pnOn) I (pO1, (1-p)On). In other words, the lottery, L, with which we started is indifferent to a lottery just over the best and worst outcomes, O1 and On.
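    The algebra can be checked mechanically. In this little sketch the pi and the Ui are invented; the point is that the probability weight landing on O1 comes out to exactly p = p1U1 + p2U2 + ... + pnUn.

        # substitute each Oi for its equivalent lottery [Ui O1, (1-Ui) On]
        # and gather the probability weight that lands on O1 and on On
        p = [0.2, 0.5, 0.3]    # an invented lottery over O1, O2, O3
        U = [1.0, 0.6, 0.0]    # indifference weights; U1 = 1 and Un = 0
        weight_on_O1 = sum(pi * Ui for pi, Ui in zip(p, U))
        weight_on_On = sum(pi * (1 - Ui) for pi, Ui in zip(p, U))
        print(weight_on_O1, weight_on_On)   # 0.5 0.5 -- and they sum to 1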

AXIOM VI The last axiom says that if p and p' are two probabilities, i.e., two real numbers between 1 and 0, then: (pO1, [1-p]On) R (p'O1, [1-p']On) if and only if p ≥ p'

    This Axiom says that the individual [A in our little story] prefers one lottery over the best and the worst alternatives to another lottery over those same two alternatives [or is indifferent between the two] if and only if the probability assigned to O1 in the first lottery is equal to or greater than the probability assigned to O1 in the second lottery.

    Now, let us draw a deep breath, step out of the weeds, and remember what we have just done. First, we started with a finite set of outcomes, O = (O1, O2, ...., On). Then we defined a simple lottery over the set O as a probability distribution over the set O. Then we defined a compound lottery as a lottery whose prizes include tickets in simple lotteries. At this point, we introduced five AXIOMS or assumptions about the preferences that our sample individual A has over the set of outcomes and simple and compound lotteries of those outcomes. These are not deductions. They are assumptions. Then we showed that these five Axioms, taken together, imply a very powerful conclusion. Finally, we introduced a sixth Axiom or assumption about A's preferences.

    That is where we are now. von Neumann now takes the last step, and shows that if someone's preferences obey all six Axioms, then that person's preferences can be represented by a cardinal utility function over those outcomes that is invariant up to an affine (linear) transformation. I am not going to go through the proof, which consists mostly of substituting and multiplying through and gathering terms and all that good stuff. Suffice it to say that when von Neumann gets all done, he has shown that one way of assigning utility indices to the outcomes in O in conformity with the six Axioms is to assign to each outcome Oi the number Ui [as defined above]. This is then "the utility to A of Oi." Remember that this is just one way of assigning A's utility indices to the outcomes in the set O. Any affine transformation of those assignments will serve just as well.

    All of this has to be true about A's preferences in order for us to say that A's preferences can be represented by a cardinal utility function.

Tuesday, May 25, 2010

PROBLEM SOLVED

I have now found a way to post articles and chapters on a server that everyone can access. So, in a while, I will post some things I have written and ask people to read them as background for my commentary. Stay tuned.

Monday, May 24, 2010

AN APPEAL TO MY READERS

I have a problem. In another week or a bit more, I had planned to start a Game Theoretic analysis of Rawls' A THEORY OF JUSTICE. Now, I have published a book on this subject [UNDERSTANDING RAWLS] in which there is a 37 page long formal analysis of the Bargaining Game at the heart of Rawls' theory. It would be insane for me to try to type that into my blog! Can anyone make a suggestion? Is it online anywhere in the blogosphere? Would anyone be willing to read that much about Rawls? Or should I just make a reference to it and move on to other subjects? All help gratefully received.

Installment 7.5

I screwed up. This segment should have been posted BEFORE the previous one. Call it Installment 7.5. My apologies. If you are printing these out, insert this installment between Installments 7 and 8.


 

You might think that most of life is like this, and especially that all bargaining is, but a little reflection will convince you that that is not so. Think of the situation in which Jones has a house to sell and Smith wants to buy a house. They enter into a negotiation, which we can call a bargaining game. Suppose the lowest price for which Jones will sell the house is $350,000 and the highest price Smith will pay for the house is $375,000. Jones cannot get more than $375,000 for the house, and Smith cannot get the house for less than $350,000. Clearly, within that $25,000 spread, they have strictly opposed preferences. But both of them have an interest in concluding a sale, rather than in having the bargaining break down because they cannot come to an agreement in that "bargaining space." So, simplifying considerably, if we suppose there are three possible outcomes, namely (1) a sale price of 355,000, (2) a sale price of 370,000, and (3) no sale price, then clearly Jones prefers (2) to (1) and (1) to (3). Smith prefers (1) to (2) and (2) to (3). Jones' preference order is 213 and Smith's is 123. These are NOT strictly opposed preference orders [because in both orders alternative 3 is last]. Thus, many real world situations to which we might want to apply Game Theory are not cases of strictly opposed preference orders.

    Now consider a simple example of strictly opposed preference orders. Suppose a married couple, Harry and John, are trying to decide where they will go for their vacation, and suppose that all either of them cares about is the weather. For Harry, the warmer the better; for John, the cooler the better. [So why did they get married?, you ask.] They play a game in which the outcomes are the different places they could go for the vacation. Obviously, if Harry prefers destination 1 to destination 2, because 1 is warmer than 2, then we can be sure that John will prefer destination 2 to destination 1. You get the idea. Here is the payoff matrix.

               B1        B2        B3        B4
    A1        9, -9     -4, 4     2, -2     -1, 1
    A2        1, -1     3, -3     -1, 1      0, 0
    A3        6, -6     4, -4     5, -5     3, -3
    A4        -3, 3     5, -5     1, -1     -2, 2

    This is a more difficult game to analyze, and not merely because each player has four strategies rather than two. The problem is that neither player has a strictly dominating strategy. Consider each of the eight strategies in turn:

    A1 is not best for A if B should choose B2, B3, or B4

    [Because if B chooses B2, anything else is better; if B3, A3 is better; if B4, A2 or A3 is better]

    A2 is not best if B should choose B1, B2, B3, or B4

    A3 is not best if B should choose B1, or B2

    A4 is not best if B should choose B1, B3, or B4

    B1 is not best if A should choose A1, A2, or A3

    B2 is not best if A should choose A2, A3, or A4

    B3 is not best if A should choose A1, A3, or A4

    B4 is not best if A should choose A1, A2, or A4

    Before we go on, make sure you understand how I arrived at this series of conclusions. Look just for a moment at strategy B3. B says to himself: "If A should choose A1, I will get -2 with B3. But I would get -9 with B1, 4 with B2, and 1 with B4. So clearly B3 does not do best for me no matter what A does, and that is what 'dominant strategy' means. So B3 is not a dominant strategy." The same reasoning leads A and B to conclude that neither one has a dominant strategy.

    Now let us adopt von Neumann and Morgenstern's proposal that the players seek to maximize their security levels. By the same process we followed a short while ago, we find that the security levels for the strategies available to A and B are:

    A1: -4    A2: -1    A3: 3    A4: -3

    B1: -9    B2: -5    B3: -5    B4: -3

    So A3 and B4 are the strategies with the maximum security levels, and following von Neumann and Morgenstern's rule, the players choose the strategy pair (A3, B4) which, according to the payoff matrix, yields the payoffs (3, -3). If A holds to A3, B cannot do any better by switching strategies, because the other payoffs to B in that row are -6, -4, and -5. If B holds to strategy B4, A cannot do any better by switching strategies, because the other payoffs to A in that column are -1, 0, and -2. A pair of strategies with this property is called an equilibrium pair of strategies.
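    For those following along with a computer, the security level calculation is mechanical. Here is a minimal sketch, assuming numpy, for the matrix above:

        # worst case (security level) of each strategy: rows are A's strategies,
        # entries are payoffs to A, and B's payoffs are the negatives
        import numpy as np
        M = np.array([[ 9, -4,  2, -1],
                      [ 1,  3, -1,  0],
                      [ 6,  4,  5,  3],
                      [-3,  5,  1, -2]])
        security_A = M.min(axis=1)      # worst payoff to A in each row
        security_B = (-M).min(axis=0)   # worst payoff to B in each column
        print(security_A)               # [-4 -1  3 -3] -> A3 is A's maximin choice
        print(security_B)               # [-9 -5 -5 -3] -> B4 is B's maximin choice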

    The following fact is crucial: A pair of strategies (Ai, Bj) is in equilibrium if and only if the entry Aij is the minimum of its row, Ai, and the maximum of its column, Bj. Here is a proof of that important proposition:

    To say that Ai and Bj are in equilibrium is to say that neither player can improve his or her payoff by a strategy switch so long as the other player holds firm. This means that A's payoff, Aij, is at least as large as any other payoff in its column, these being the payoffs available to A when B is holding to the strategy Bj. By the same reasoning, Bij is the most preferred payoff to B in row Ai, since those are the payoffs available to B so long as A holds fast to Ai. But by hypothesis, A and B have strictly opposed preferences [this is where that crucial assumption comes in], so the outcome at Aij will be the least preferred of all the payoffs in row Ai from A's point of view. Thus, if Ai and Bj are in equilibrium, it follows that Aij will be the maximum of its column and the minimum of its row.

    Conversely, suppose that Aij is the maximum of its column and the minimum of its row. Since it is the maximum of its column, A can only do worse by switching to a different strategy so long as B holds fast. And since A and B have strictly opposed preferences, payoff Bij must be the most preferred of its row, for Aij is the least preferred of its row. So B can only lose by switching so long as A holds fast. But this is the definition of an equilibrium pair of strategies. Q. E. D.

    We have wandered pretty far into the weeds here, so you should take a moment to go over this argument and make sure you understand it. It is a typical Game Theory argument, and you need to become comfortable with that way of reasoning. Remember, we have already talked about whether identifying security levels and choosing a strategy to maximize your security level is a rational way of proceeding in a game that presents you with choice under uncertainty. It is interesting to note, as we will see much later, that Rawls adopts this notion of rationality in A Theory of Justice, where he dramatizes it by saying, in effect [not a quote], Design an institution as though your worst enemy were going to assign you a place in it. In such a case, pretty clearly, maximizing the payoff to the least favored role in the institution makes a good deal of sense.

    Thus far, we have looked at games in which each player's maximum security level shows up in only one of the available strategies, but obviously there might be several strategies with identical security levels, and that security level could perfectly well be the maximum one. In that case, the rule to choose the strategy with the maximum security level does not tell a player which strategy to choose. All of the strategies exhibiting the maximum security level are equally good, as far as the rule is concerned. But if we cannot specify which strategy a player will choose, following the rule, then how can we know what the outcome of the game will be? Fortunately, when players have strictly opposed preferences, it makes no difference. The following is a very important fact:

    If strategy pairs (Ai, Bj) and (Ap, Br) are both equilibrium pairs of strategies, then so too are the pairs (Ai, Br) and (Ap, Bj). What is more, in that case Aij = Air = Apj = Apr and Bij = Bir = Bpj = Bpr. So, no matter how A and B mix and match their strategies with the maximum security levels, the results will be the same. [Note: When I say this, I mean the payoffs to the players will be the same. The actual play of the game may differ according to which strategies A and B choose, but they don't care about that, by hypothesis. They only care about the payoffs. Keep that in mind, because down the line, it could be problematic.]

    Here is the proof. Let us suppose that we have a payoff matrix that is n rows by m columns, or n x m. I am going to show you a central part of the total matrix that is large enough to include all four payoff pairs: (Aij, Bij), (Air, Bir), (Apj, Bpj), and (Apr, Bpr).

            ...     Bj          ...     Br          ...
    ...
    Ai      ...     Aij, Bij    ...     Air, Bir    ...
    ...
    Ap      ...     Apj, Bpj    ...     Apr, Bpr    ...
    ...

(1) Apr ≥ Air because (Apr, Bpr) is an equilibrium point, and hence Apr is the maximum of its column. [Notice, the symbol ≥ means "equal to or greater than." That is the way my word processing program writes it. ]

(2) Air ≥ Aij because (Aij, Bij) is an equilibrium point, and hence Aij is a minimum of its row.

(3) Aij ≥ Apj same reasoning as (1)

(4) Apj ≥ Apr same reasoning as (2). Hence

(5) Apr ≥ Air ≥ Aij ≥ Apj ≥ Apr Therefore

(6) Apr = Air = Aij = Apj

    The same reasoning establishes that Bij = Bir = Bpr = Bpj and therefore obviously (Air, Bir) and (Apj, Bpj) are also equilibrium pairs.

    Just to review, the key to the proof is the fact that A and B have strictly opposed preference orders. If that is not the case, the argument clearly does not go through.
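    The proposition is also easy to test mechanically. The sketch below uses an invented matrix of A's payoffs containing several equilibrium entries [entries that are the minimum of their row and the maximum of their column] and confirms that they all carry the same payoff.

        # find every entry that is the minimum of its row and the maximum of
        # its column -- the equilibrium condition just proved
        import numpy as np
        M = np.array([[4, 4, 10],
                      [3, 1,  2],
                      [4, 4,  7]])   # payoffs to A; invented for illustration
        pairs = [(i, j) for i in range(M.shape[0]) for j in range(M.shape[1])
                 if M[i, j] == M[i, :].min() == M[:, j].max()]
        print(pairs)                              # [(0, 0), (0, 1), (2, 0), (2, 1)]
        print({int(M[i, j]) for i, j in pairs})   # {4}: all four give the same payoff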

Friday, May 21, 2010

EIGHTH INSTALLMENT

Everything we have said thus far assumes only ordinal preferences, but that is not going to be enough to allow us to analyze games that involve chance elements, or what I have somewhat facetiously been calling moves by Lady Luck [think Marlon Brando singing "Luck be a lady tonight" in the movie version of Guys and Dolls. ] Suppose that at some point in a game the rules call for a roll of the dice, a flip of a coin, or a spin of a wheel, with some player's options determined by the outcome of the chance event. That is going to create problems for our analysis.

    Here is the simplest game I could think of to illustrate this idea. A moves first, and she has a choice. She can choose not to toss a coin, in which case B has to choose between a move that has the payoff (2.4, -2.4) to A and B, and a move that has the payoff (2, -2) to A and B. Pretty obviously, B will choose the latter. OR A can opt to flip a coin. If the coin comes up heads, the game ends with the payoff (1, -1). If the coin comes up tails, the game ends with the payoff (6, -6). Not much of a game, but it will do.

    What should A do? If she opts not to toss the coin, she has a sure thing payoff [given that B is rational] of 2. If she opts to flip the coin, she has a one-half chance of a payoff of 1 and a one-half chance of a payoff of 6. Now, if we forget that the numbers in the example are ordinal labels, we might be tempted to suppose that A can solve her problem by engaging in an expected utility calculation. After all, (1/2 x 1) + (1/2 x 6) = .5 + 3 = 3.5, so A should apparently choose to flip the coin. But these are ordinals, not cardinals, and all we really know is that for A, the payoffs are ranked 6 first, 2.4 second, 2 third, and 1 fourth. This ranking is preserved if we re-label the 6 as a 2.5. That still makes it first, which is all the information we have. But now, when we carry out an expected utility calculation on the game, we have 2 versus (1/2 x 1) + (1/2 x 2.5) = 1.75. With these numbers, A should change her strategy choice.
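    The reversal fits in a few lines of Python. Both labelings below represent the very same ordinal ranking, and yet they recommend opposite choices:

        # the same ordinal ranking under two cardinal labelings gives opposite advice
        def expected_value(probs, payoffs):
            return sum(p * x for p, x in zip(probs, payoffs))

        sure_thing = 2
        print(expected_value([0.5, 0.5], [1, 6]))    # 3.5  > 2: flip the coin
        print(expected_value([0.5, 0.5], [1, 2.5]))  # 1.75 < 2: do not flip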

    Obviously, we cannot analyze games with chance elements unless we assume that the players have cardinal utility functions with utility assignments that are invariant under an affine [linear] transformation. Therefore, we need now to introduce the formal machinery required to allow us to talk about cardinal utility functions. This is going to get seriously gnarly, I am afraid. The faint of heart may wish to take a vacation for a day or two while I lay all of this out. I choose to go into this for two reasons: First, as my son Tobias, who is following this blog, pointed out to me at dinner several evenings ago, I am really a rather nerdy wonk when it comes to this stuff. I just plain like it. I hadn't realized that, but of course he is right. I think it is nifty. Second, one of the central ideological messages of this blog is that too many intellectuals and academics adopt the jargon and the style of argument of Game Theory without any real grasp of the assumptions that are embedded in what they are saying. They are like party crashers at a Mass who think that the Eucharist is just a light snack, oblivious to its theological meaning. For those who can handle it, I want to take you through the formal unfolding of the concept of a cardinal utility function. For those who cannot handle it, I want to shock and awe you so that in the future, when you idly assume that someone has a cardinal utility function, you will at least know what lies beneath the surface of that assumption. Here goes.

    We must begin with the notion of a probability distribution over a set of outcomes. Remember that all of this theory assumes that in a game there are a finite number of possible outcomes [maybe just win or lose, but maybe also lots of different money payouts, or even things like a trip to the zoo, a fur coat, a dinner date with Kevin Bacon, etc.] The convention in probability theory is that probabilities range from 0 to 1 inclusive. If a possible outcome has a probability of 1, that means it is certain to happen. If it has a probability of 0, that means it is certain not to happen. Thus, probabilities are expressed as real numbers between 1 and 0. Most of the time, they are expressed as a decimal point followed by some digits, like .4 or .125, and so forth.

    We also assume that the possible outcomes are mutually exclusive, not part of or nested inside one another. So we cannot have two outcomes one of which is "I lose the game" and another of which is "I lose the game and have to pay fifty dollars to the person who won." If all of this is so, then the probability that either outcome O will happen or outcome P will happen is equal to the probability that O will happen plus the probability that P will happen. So, if the probability of O is .3 and the probability of P is .6, then the probability of either O or P is (.3 + .6) = .9. A little reflection will tell you that if there are three possible outcomes, O, P, and Q, and if it is certain that one or another of them will happen, then the sum of their probabilities must be 1. In other words, "O or P or Q" is certain to happen.

    A probability distribution over the set of possible outcomes is a set of numbers, each of which is between 1 and 0 inclusive [some of the outcomes may be certain not to happen, in which case they have probability zero] and all of which add up to 1. Another way to express this [hang on, I am really reaching with my word processing program here] is this: If the probability of outcome i is pi, then summing over all n possible outcomes, ∑pi = 1. [Cripes, that wasn't worth the effort. Oh well.]

    Following Luce and Raiffa, I am going to use the term "Lottery" to refer to an experiment that has built into it a probability distribution over the set of n possible outcomes in a game. For example, if you want to construct a lottery that has built into it a .5 chance of outcome O, a .25 chance of outcome P, and a .25 chance of outcome Q, you can make a wooden wheel with an arrow fixed to its center. You can then draw radii on the wheel dividing it into a half segment and two quarter segments. Then you can spin the arrow in such a way that there is an equal chance of the point of the arrow coming to rest anywhere on the wheel [remember to oil the bearings]. That wheel would be a lottery with the desired probabilities.

    O.K. We already know that each player has a consistent ordinal preference over the set of possible outcomes of the game. But we also know that that information all by itself is not enough to authorize us to represent that preference order by cardinal numbers. We can certainly use cardinal numbers in payoff matrices to represent a player's preferences -- that is what I have been doing. But we cannot treat them as cardinal numbers. We can only treat them as ordinals. Obviously we need more information about the player's preference structure if we are to define a cardinal preference order for him or her over the possible outcomes.

    Before we state the six axioms that von Neumann and Morgenstern proved are sufficient to allow us to impute a cardinal utility function to a player, we need some more definitions and some more notation. [I warned you this would get gnarly.] First of all, we must extend our notion of a Lottery to something called a Compound Lottery. A Simple Lottery is a probability distribution over a set of outcomes, O, P, Q, etc. A Compound Lottery is a Lottery whose prizes are tickets in other lotteries over O, P, Q, etc. To make this as clear as I can, let me take a very simple case. Imagine a set of three outcomes (O, P, Q).

    O = +$5

    P = +$8

    Q = -$10 [i.e., the player has to pay ten dollars]

and a Lottery, L1, with three prizes, namely tickets in Lotteries L11, L12, and L13. L1 is set up so that there is:

    a .5 chance of winning a ticket in L11,

     a .25 chance of winning a ticket in L12, and

    a .25 chance of winning a ticket in L13.

    The prizes in the Lotteries L11, L12, and L13 are the outcomes (O, P, and Q)

    Now, L11, L12, and L13 are probability distributions over O, P, Q, etc. So, let us suppose that these three Lotteries are in fact:

    L11: .3 chance of O, .4 chance of P, and .3 chance of Q

    L12: .1 chance of O, .0 chance of P, .9 chance of Q

    L13: .8 chance of O, .1 chance of P, .1 chance of Q

    Notice that in each case the probabilities add up to 1.

    If our player, A, buys a ticket in L1, what is her chance of ending up with each of the outcomes, O, P, or Q? Well, she has a .5 chance of winning a ticket in L11, and L11 in turn offers a .3 chance of O, so that gives A so far a (.5)(.3) = .15 chance of O.

    She also has a .25 chance of a ticket in L12, and L12 offers a .1 chance of O, so that gives her a (.25)(.1) = .025 chance of O.

    Finally, she has a .25 chance of a ticket in L13, which offers a .8 chance of O, so that gives her a (.25)(.8) = .2 chance of O.

    Adding .15, .025, and .2, we get a .375 chance of O.

    If you carry out the same calculations for outcomes P and Q, you will find that A has a .225 chance of getting P and a .4 chance of getting Q, and sure enough, .375 + .225 + .4 = 1.

    Now, what is the money value to A of this gamble? It is the Mathematical Expectation, or:

    (.375)(5) + (.225)(8) + (.4)(-10) = 1.875 + 1.8 - 4 = -.325 or minus 32.5 cents.

    So, if all A cares about is making money, she ought not to buy a lottery ticket in L1 at any price. In fact, she should not even play if someone gives her a ticket.

    This is what is called reducing a Compound Lottery to a Simple Lottery, and it should be obvious that you can do this with any finite number of prizes and any number of levels of lotteries of lotteries of lotteries. Notice one small point that will be important later: If a Lottery offers a chance of p for one outcome, O, and chances of zero for all the other outcomes save a single one, Q, then the probability for Q will be (1-p).
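    The reduction is pure bookkeeping, as this little Python sketch of the example above shows [the printed values may carry a bit of floating point fuzz]:

        # reduce the compound lottery L1 to a simple lottery over O, P, Q,
        # then take the money expectation, reproducing the calculation above
        chances = {"L11": 0.5, "L12": 0.25, "L13": 0.25}
        tickets = {"L11": {"O": 0.3, "P": 0.4, "Q": 0.3},
                   "L12": {"O": 0.1, "P": 0.0, "Q": 0.9},
                   "L13": {"O": 0.8, "P": 0.1, "Q": 0.1}}
        simple = {o: sum(chances[t] * tickets[t][o] for t in chances)
                  for o in ("O", "P", "Q")}
        print(simple)   # approximately {'O': 0.375, 'P': 0.225, 'Q': 0.4}

        money = {"O": 5, "P": 8, "Q": -10}
        print(sum(simple[o] * money[o] for o in simple))   # approximately -0.325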

Wednesday, May 19, 2010

SEVENTH INSTALLMENT

Now we are ready to start. I will progress from game to game [i.e., from payoff matrix to payoff matrix], making things a little more complicated each time, until we get to the payoff for all of this [so to speak]: a two person mixed strategy zero sum game. I will then explain [but probably not drive you nuts by actually proving] The Fundamental Theorem of Game Theory, von Neumann's own [if I may make a play on a popular line of environmentally friendly spaghetti sauces and such], which says that every two person zero-sum mixed strategy game has a solution in the strong sense. But I get ahead of myself.


 

    Here is our first little game. Two businesses are competing to produce and sell children's toys, and each one must decide whether to make hula hoops or yoyos [but not both, for technical reasons]. Each player knows everything there is to know about the costs, the market, and such, except what the other player is going to choose to do. Although neither player knows what the other will do, each knows what effect the other's decision will have on the bottom line of each company. The following matrix shows the profit each will make in one of the four possible situations: A makes hula hoops and B makes hula hoops, A makes hula hoops and B makes yoyos, A makes yoyos and B makes hula hoops, or A makes yoyos and B makes yoyos. Now, I know that some of you are philosophers, but I really do not want you wasting your time wondering how they know these things, or what extraneous factors, cultural or otherwise, might affect their decisions. We will get nowhere if you do. Just go with the flow here. This is the payoff matrix for the game. We assume that both players prefer making more money to making less, and do not care about anything else.

                               B1: make hula hoops    B2: make yoyos
    A1: make hula hoops            10000, 1000           8000, 6000
    A2: make yoyos                  9000, 6000           6000, 5000


 

    A is in a position to make a rational decision, for consider: If A decides to make hula hoops, then no matter which choice B makes, A can be sure of doing better by sticking with hula hoops than by changing to yoyos. As the matrix shows, if B chooses B1, then A1 is better than A2 [because 10000 is bigger than 9000]. If B chooses B2, then A1 is better than A2 [because 8000 is bigger than 6000]. The strategy A1 is thus said to dominate the strategy A2.


 

    Let us from now on use the notation Pij to mean the payoff to A for the strategy pair [Ai, Bj], and the notation Qij to mean the payoff to B for the strategy pair [Ai, Bj]. We can see by looking at the matrix that P11 > P21 and P12 > P22.


 

    Notice several things about this game:


 

    [1] A does not need to know B's payoffs for the four possible combinations. In this very simple case, A's payoff schedule alone is enough to allow the calculation that A1 dominates A2.

    [2] As I have several times emphasized, we do not need to know the actual dollar or utility payoffs to carry out this line of reasoning. All we need to know is that P11 > P21 and P12 > P22.

    Now look at the situation from B's point of view. A quick look at the payoff matrix reveals that B cannot use the line of reasoning that solved the choice problem for A. B's preference pattern is (Q12 = Q21) > Q22 > Q11. Strategy B1 does not dominate strategy B2, because Q12 > Q11, and B2 does not dominate B1, because Q21 > Q22. Now B takes an important step forward. Since B knows that A is rational, and since B knows A's payoffs, B reasons that A will choose strategy A1, since it is A's dominant strategy. Knowing that, B now looks at the payoff matrix and sees that with A choosing strategy A1, B's best strategy is B2, because 6000 > 1000.


 

    Thus, assuming that the figures in the payoff matrix are correct, that A is perfectly rational, and that A cares only about maximizing her utility as represented by the figures in the matrix, B can now solve the choice problem by choosing strategy B2, even though neither of the B strategies dominates the other. With this tiny step forward, we begin to develop a theory that takes account of and adjusts for the rational calculations of the other player. In this way, we move beyond the opacity of the marketplace, which is the central point of Game Theory.
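    B's two-step reasoning can even be written out mechanically. This sketch uses the payoffs from the matrix above; the dictionary names are mine, not part of the game.

        # B's reasoning: check that A1 dominates A2, assume A therefore plays A1,
        # and choose B's best reply to it
        pay_A = {("A1", "B1"): 10000, ("A1", "B2"): 8000,
                 ("A2", "B1"): 9000,  ("A2", "B2"): 6000}
        pay_B = {("A1", "B1"): 1000,  ("A1", "B2"): 6000,
                 ("A2", "B1"): 6000,  ("A2", "B2"): 5000}

        assert all(pay_A[("A1", b)] > pay_A[("A2", b)] for b in ("B1", "B2"))
        best_reply = max(("B1", "B2"), key=lambda b: pay_B[("A1", b)])
        print(best_reply)   # B2, just as the text concludes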


 

    The games in which one or the other player has a strictly dominant strategy are relatively rare [and uninteresting]. When neither player has a strictly dominant strategy, we must extend our notion of what constitutes a "rational" choice. Consider the following game with a 3 x 3 payoff matrix. Once again, the preferences are ordinal, and I use numbers simply because it is easy to tell in what order A or B ranks two payoffs.

               B1       B2       B3
    A1        3, 9     5, 6     1, 4
    A2        0, 11    6, 2     2, 11
    A3        4, 5     5, 19    3, 8

    A little careful examination reveals that neither A nor B has a dominant strategy. For example, strategy A2 is not dominant [i.e., does best no matter what B does], because if B chooses B1, A would do better with either A1 or A3. Strategy B3 is not dominant for B because if A were to choose A1, B would be better off with B2 or B1. And so forth.


 

    What are A and B to do in this situation? This is, notice, exactly what we meant by "choice under uncertainty." A and B know everything about the payoff matrix, therefore everything about possible outcomes and consequences of pairs of strategic choices, but neither of them can be certain what the other will do. In this situation, there are obviously a great many different ways A and B might proceed [leaving to one side for the moment engaging in industrial espionage to find out what the other is thinking.] If they had cardinal utility functions, and hence could say not merely in which order they prefer the possible outcomes but also by how much, they might decide to avoid any strategies that have any possibility at all of outcomes they consider really terrible. That might narrow their choices to the point where dominance considerations could be invoked. By the same token, they might decide to give preference to strategies that have at least some possibility of outcomes they prize very, very highly. And of course one might do one and the other the other. With cardinal preference orders, they might even cast about for some way to engage in expected utility calculations, although without any knowledge of the probabilities, that would be very difficult.


 

    At this point, von Neumann and Morgenstern propose an extremely conservative rule for choosing from among the available strategies. They suggest that the players, in effect, try to minimize the damage they may suffer as a consequence of the choices of their opponents. Let us, they say, define something we will call the "security level" of a strategy. The security level is the worst outcome that the strategy can produce. For example, look at strategy A1. Depending on what B chooses, A will get 3, 5, or 1. So her security level for A1, the worst she can do if that is her choice, is 1. By the same reasoning, we can see that the security level of A2 is 0, and of A3 is 3. Correspondingly, B's security levels for his three strategies are 5 for B1, 2 for B2, and 4 for B3. If the players adopt von Neumann and Morgenstern's rule for rational choice, then A will choose A3, which has the maximum security level of her three strategies, and B will adopt B1 for the same reason. The outcome of the game will be the intersection of strategies A3 and B1, which, as we can see from the payoff matrix, gives A 4 and B 5.
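    The rule is easy to state in code. Here is a minimal sketch, using the same matrices as in the previous sketch: compute each strategy's security level, then pick the strategy whose worst case is best.

A_pay = [[3, 5, 1],
         [0, 6, 2],
         [4, 5, 3]]
B_pay = [[9, 6, 4],
         [11, 2, 11],
         [5, 19, 8]]

# A's security level for a row is the worst payoff in that row;
# B's security level for a column is the worst payoff in that column.
a_security = [min(row) for row in A_pay]          # [1, 0, 3]
b_security = [min(col) for col in zip(*B_pay)]    # [5, 2, 4]

a_choice = max(range(3), key=lambda i: a_security[i])   # 2, i.e. A3
b_choice = max(range(3), key=lambda j: b_security[j])   # 0, i.e. B1

print(A_pay[a_choice][b_choice], B_pay[a_choice][b_choice])   # 4 5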


 

    This pair of strategy choices guarantees at least 4 to A and at least 5 to B. But notice the following fact: there are two other pairs of strategy choices that are better for both A and B. (A1, B2) gives them 5 and 6, instead of 4 and 5, and (A3, B2) gives them 5 and 19 instead of 4 and 5. Now, both A and B can see this, of course, but without what is called pre-play communication or any way of making commitments to one another, they have no way of reaching either of those mutually better [i.e., Pareto Preferred] outcomes. The problem is that if B chooses B2, A may switch to A2 to get 6, leaving B with 2, which is worse than he can guarantee to himself by following the von Neumann-Morgenstern rule.
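    A few lines of Python will hunt down every strategy pair that is Pareto Preferred to the maximin outcome (A3, B1), i.e., every pair that makes both players strictly better off.

A_pay = [[3, 5, 1], [0, 6, 2], [4, 5, 3]]      # same matrices as above
B_pay = [[9, 6, 4], [11, 2, 11], [5, 19, 8]]

baseline = (A_pay[2][0], B_pay[2][0])          # the maximin outcome: (4, 5)
better = [(i + 1, j + 1, A_pay[i][j], B_pay[i][j])
          for i in range(3) for j in range(3)
          if A_pay[i][j] > baseline[0] and B_pay[i][j] > baseline[1]]
print(better)   # [(1, 2, 5, 6), (3, 2, 5, 19)]: the pairs (A1, B2) and (A3, B2)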


 

    There are lots of games like this -- or, to put it another way, lots of situations which, when analyzed as games, produce payoff matrices with these characteristics. We shall return to them later. [The Prisoner's Dilemma is one example].


 

    Instead, let us take the next step forward in the evolution of the theory. Very often, in a game, A and B have what are called "strictly opposed" preferences over the outcomes. What that means is simply that if you take all the possible outcomes of the game and rank them from best to worst from A's point of view, B will rank them in exactly the opposite order. Somewhat more formally, where Pij and Qij are the payoffs to A and B when A's strategy Ai is played against B's strategy Bj, and Pqr and Qqr are the payoffs to A and B when A's strategy Aq is played against B's strategy Br, then:


 

    (i) Pij > Pqr if and only if Qqr > Qij, and

    (ii) Pij = Pqr if and only if Qqr = Qij
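    Stated as a check we could actually run, the definition looks like this. The two little matrices at the bottom are hypothetical: B's payoff is simply the negative of A's, so the two rankings are exactly reversed.

def strictly_opposed(P, Q):
    """P[i][j] and Q[i][j] are the payoffs to A and B when Ai meets Bj."""
    cells = [(i, j) for i in range(len(P)) for j in range(len(P[0]))]
    return all(
        (P[i][j] > P[q][r]) == (Q[q][r] > Q[i][j]) and     # condition (i)
        (P[i][j] == P[q][r]) == (Q[i][j] == Q[q][r])       # condition (ii)
        for (i, j) in cells for (q, r) in cells
    )

P = [[2, -1],
     [0, 3]]
Q = [[-2, 1],
     [0, -3]]
print(strictly_opposed(P, Q))   # True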

 

 

Monday, May 17, 2010

CREDIT WHERE CREDIT IS DUE

Some of you may have seen the cry for help that appeared briefly on this blog. I removed it because the cry was answered, by Boram Lee. The installment today is brought to you courtesy of Boram Lee. As I have observed before, this is the first generation in the history of the human race in which the young have to explain to the old how things work. The deeper implications of that fact have not yet been explored.

FORMAL METHODS IN POLITICAL PHILOSOPHY SIXTH INSTALLMENT

Let us suppose A agrees to play the game with B and then is called away for an emergency. She asks the referee to play for her, following her instructions to the letter. [There has to be a referee to make sure no one cheats]. The referee agrees, but insists that A give him a complete set of instructions, so that no matter what B does, the referee will know how to play A's hand. A says: here is what I want you to do:


 

    Take 1 matchstick. If B takes 1, take 2. If B takes 2, take 1.


 

    This set of instructions is called a Strategy. It tells the referee what to do in every situation in which A has a choice. There is no need to specify what the referee is to do when A's move is forced by the rules. The referee is now totally prepared for all eventualities. How many strategies does A have, total, including the one she actually chose? Well, here they are:


 

    [A1]    Take 1. If B takes 1, take 2. If B takes 2, take 1.

    [A2]    Take 1. If B takes 1, take 1. If B takes 2, take 1.

    [A3]     Take 2.


 

    Notice that strategy A3 is complete because once A takes 2, the rest of the game so far as she is concerned is forced.


 

    Now let us suppose B says, "Well, if A isn't going to be there, I will just leave my strategy choice with the referee also." What are B's strategies?


 

    [B1]    If A takes 1, take 2. If A takes 2, take 2.

    [B2]    If A takes 1, take 2. If A takes 2, take 1.

    [B3]    If A takes 1, take 1. If A takes 2, take 1.

    [B4]    If A takes 1, take 1. If A takes 2, take 2.


 

    So A has three strategies and B has four. There are thus 3 x 4 = 12 possible pairs of strategy choices that A and B can leave with the referee. Notice [very important] that there is no communication between A and B. Each chooses a strategy by him or herself. Now, there are no chance elements in this game -- no rolls of the dice, no spins of a wheel. Game Theory allows for that, but this game doesn't happen to have any such "moves by Lady Luck." Therefore, once you know the strategy choices of A and B, you can calculate the outcome of the game. And we know what the payoffs are. In each possible outcome, either A wins a penny and B loses a penny, or B wins a penny and A loses a penny. Notice that there is no assumption that a penny yields the same amount of utility to A as to B. Indeed, any such statement is meaningless.


 

    We are now ready to construct what Game Theory calls the "payoff matrix," which in this case is a three-by-four grid, each box of which represents the payoffs to A and B of a pair of strategies that are played against one another. For example, what happens if A tells the referee to play her first strategy, [A1], and B tells the referee [without knowing what A is doing] to play his first strategy, B1? Well, A1 tells the referee to take 1 stick. Then it is B's turn, and B1 tells the referee that if A takes 1, the referee is to take 2. Now it is A's turn, and she has no choice but to take the last matchstick. B wins, and A pays B one cent. So the payoff for A is -1, and the payoff for B is +1. Here is the complete payoff matrix for this game:

          B1        B2        B3        B4
A1      -1, +1    -1, +1    -1, +1    -1, +1
A2      -1, +1    -1, +1    +1, -1    +1, -1
A3      +1, -1    -1, +1    -1, +1    +1, -1

    You should take a few minutes to look at this carefully and be sure that you see how I derived the figures in the boxes -- the payoffs. This is called the normal form of the game. If you look at the payoff matrix just above, you will see that B wins in eight of the twelve cases: A wins only when she plays A2 against B3 or B4, or A3 against B1 or B4. Now, Game Theory assumes that both players know everything we have just laid out about the game, so A and B both know the payoff matrix. B can see that if he chooses strategy B2, then he is guaranteed a win no matter what A does. We describe this by saying that B2 is a dominant strategy for B. A is out of luck. Her only hope, and a pretty slim one at that, is to play A2 or A3 and hope against hope that B is a dope.
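    If you would rather let a machine grind out the twelve cases, here is a sketch in Python that replays every strategy pair under the rules of our game [four matchsticks, each player takes one or two per turn, and whoever takes the last matchstick loses the penny]. The play() function and the strategy tables are simply my own way of encoding the strategies listed earlier; the sketch reproduces the payoff matrix above and confirms that B2 guarantees B a win.

# Replay all twelve strategy pairs of the matchstick game.
# A strategy is an opening move plus a reply to the opponent's first move;
# moves that the rules force need no instruction.
A_strategies = {
    "A1": (1, {1: 2, 2: 1}),   # take 1; if B takes 1, take 2; if B takes 2, take 1
    "A2": (1, {1: 1, 2: 1}),   # take 1; take 1 whatever B does
    "A3": (2, {1: 1, 2: 1}),   # take 2; everything afterwards is forced
}
B_strategies = {
    "B1": {1: 2, 2: 2},
    "B2": {1: 2, 2: 1},
    "B3": {1: 1, 2: 1},
    "B4": {1: 1, 2: 2},
}

def play(a_name, b_name, sticks=4):
    """Return A's payoff in pennies; B's payoff is the negative."""
    a_first, a_reply = A_strategies[a_name]
    b_first = B_strategies[b_name][a_first]
    taken = a_first + b_first
    turn = "A"
    last_taker = "B"
    while taken < sticks:
        take = a_reply[b_first] if turn == "A" else 1   # B's later moves are forced
        take = min(take, sticks - taken)                # cannot take more than remain
        taken += take
        last_taker = turn
        turn = "B" if turn == "A" else "A"
    return +1 if last_taker == "B" else -1              # taking the last stick loses

for a in A_strategies:
    print(a, [play(a, b) for b in B_strategies])
# A1 [-1, -1, -1, -1]
# A2 [-1, -1, 1, 1]
# A3 [1, -1, -1, 1]

print([b for b in B_strategies
       if all(play(a, b) == -1 for a in A_strategies)])  # ['B2']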


 

    With this elementary example before us, let me now make several comments.


 

    [1] Legal theorists, political scientists, sociologists, philosophers all seem to think that there is something deep and profound about the Prisoner's Dilemma. Well, I invented the simplest game I could think of, and in that idiot game, there are three strategies for A and four for B. The Prisoner's Dilemma is a game with only two strategies for each player. How can something that much simpler than the idiot game I invented possibly tell us anything useful about the world? The truth is, it can't!


 

    [2] From the point of view of Game Theory, the entire game is represented by the payoff matrix. Any information not contained in the payoff matrix [like the fact that this game uses matchsticks, or that B has brown eyes] is irrelevant. All of the games with the same payoff matrix are, from the point of view of Game Theory, the same game. For a long time, until we get to something called Bargaining Theory, the little stories I tell about the games I am analyzing will serve simply to make the argument easier to follow. All the inferences will be based on the information in the payoff matrix. When we get to Bargaining Theory, which is tremendous fun but rather light on theorems, it will turn out that a great deal turns on what story you tell about the game. [For those of you who are interested, the classic work, which also won the author a Nobel Prize, is The Strategy of Conflict by Thomas Schelling.]


 

    [3] In the game above, the only information we actually use about the payoffs is A's ordinal preference for the possible outcomes of the game and B's ordinal preference for those outcomes. We make no use of the fact that the payoffs are money, nor do we use the fact that the amount of money won by one player happens to equal the amount of money lost by the other player. Even when we are talking about ordinal preference, I am going to use numbers, simply because they make it very easy to keep in mind the players' preference orders.


 

    [4] At a certain point, when we introduce moves by Lady Luck [roll of the dice, spin of the wheel, etc.], we will have to shift up to cardinal preference orders for A and B. At that point, we will need cardinal numbers for the entries in the payoff matrices. The numbers before the comma will be A's utility for a certain outcome, as determined by A's cardinal utility function, and the numbers after the comma will be those for B. NOTHING AT ALL CAN BE INFERRED FROM THE NUMERICAL RELATIONSHIP BETWEEN AN ENTRY IN FRONT OF A COMMA AND THE ENTRY AFTER A COMMA. This is because the utility indices indicated by the numbers before the comma are invariant up to a positive linear transformation [or a positive affine transformation, as it is apparently now called, but I am too old to learn anything], and the same is true for the utility indices after the comma. If I multiply all of B's utilities for payoffs by one million, no information has been added or lost.
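    A quick sketch makes the point concrete. The utility numbers and lotteries below are entirely hypothetical; what matters is that a transformation of the form a*u + b with a > 0 leaves every expected-utility comparison unchanged.

utilities = [10.0, 4.0, 0.0]    # B's cardinal utilities for three outcomes (hypothetical)
lottery1 = [0.6, 0.1, 0.3]      # two lotteries over those outcomes
lottery2 = [0.2, 0.5, 0.3]

def expected(u, p):
    """Expected utility of lottery p under utility assignment u."""
    return sum(ui * pi for ui, pi in zip(u, p))

# Multiply by one million and add seven: a positive linear transformation.
transformed = [1_000_000 * u + 7 for u in utilities]

print(expected(utilities, lottery1) > expected(utilities, lottery2))      # True
print(expected(transformed, lottery1) > expected(transformed, lottery2))  # True, necessarily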


 

    [5] A is assumed to have a utility function that assigns ordinal [later cardinal] numbers to the outcomes. So is B. The outcomes are the terminations of the game as defined by the rules and diagrammed on the game tree. The rules may simply stipulate who is declared to have won and who has lost, or they may assign various payoffs, in money or anything else, to one or more of the players. No assumption is made about the attitudes of the players to these outcomes, save that their attitudes must generate consistent [ordinal or cardinal] preference orders of the outcomes. A can perfectly well prefer having B win the game over herself winning the game. Eventually, we will be assuming that both A and B are capable of carrying out expected utility calculations, and that each prefers an outcome with a greater expected utility to one with a lesser expected utility. But that assumption does not have built into it any hidden assumptions about what floats A's boat. It is utility, not money or anything else, that A and B are maximizing.


 

    [6] We have been talking thus far only about two-person games. The mathematical theory developed by von Neumann is capable of proving powerful theorems only for two-person games. A great deal can be said about multi-person games, especially those allowing for pre-play communication, which leads to coalitions, betrayals, and all manner of interesting stuff. But unfortunately not much that is rigorous and susceptible of proof can be said about such games.


 

    [7] Really important: Game Theory treats the extensive form of a game [game tree] and the normal form of the game [payoff matrix] as equivalent. As we have already seen in the case of planning for nuclear war, that assumed equivalence can be problematic, because in the playing out of the game in extensive form, the utility functions of the players may change. We will talk some more about this later, but for now, we are going to accept Game Theory's treatment of the two forms of a game as equivalent.