Wednesday, June 30, 2010


Part Four



    The time has come to put all of this formal stuff to use. In the second major part of this tutorial, I shall examine a number of attempts to apply the materials of Game Theory and Rational Choice Theory to substantive issues in political theory, economics, military strategy, and the law. My message will in the main be negative. I shall argue, again and again, that authors attempting to gain rigor or clarity or insight by the use of these methods actually misuse them, failing to understand them correctly or failing to understand the scope and nature of the simplifications and abstractions that are required before the materials of Game Theory and Rational Choice Theory can be properly applied.


    I have asked you to read two essays and a chapter of a book, all by me, and all available by clicking on the links provided in the blog post of June 2, 2010. In order to move things along and keep this tutorial to a manageable size, I am going to rely on you to do that reading, so that I can refer to it without summarizing it or repeating what I have said in those texts.


    My order of discussion will be as follows:


    1. A discussion of the Prisoner's Dilemma

    2. A discussion of the Free Rider Problem

    3. An extended and very detailed analysis of the central thesis of John Rawls' A Theory of Justice.

    4. A brief discussion of certain arguments in Robert Nozick's Anarchy, State, and Utopia.

    5. A discussion of some of the applications of Game Theory and Rational Choice Theory in Game Theory and the Law by Baird, Gertner, and Picker.

    6. A discussion of the role played by Game Theory in the debates about military strategy and deterrence policy in the United States in the first twenty years following World War II. In connection with this portion of the discussion, I will make available the text of a book I wrote in 1962 but was never able to get published.


    Assuming anyone is still with me after all of that, I will entertain suggestions of how we might usefully keep this tutorial going. Alternatively, I can go back to playing Spider Solitaire on my computer. :)


The Prisoner's Dilemma


    The Prisoner's Dilemma is a little story told about a 2 x 2 matrix. For those who are unfamiliar with the story [assuming someone fitting that description is reading these words], here is the statement of the "dilemma" on Wikipedia:


"Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated the prisoners, visit each of them to offer the same deal. If one testifies for the prosecution against the other (defects) and the other remains silent (cooperates), the defector goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must choose to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?"


    The following matrix is taken to represent the situation.



                      B1 cooperate            B2 defect

A1 cooperate          6 months, 6 months      10 years, Go free

A2 defect             Go free, 10 years       5 years, 5 years


    The problem supposedly posed by this little story is that when each player acts rationally, selecting a strategy solely by considerations of what we have called dominance [A2 dominates A1 as a strategy; B2 dominates B1 as a strategy], the result is an outcome that both players consider sub-optimal. The outcome of the strategy pair [A1,B1], namely six months for each, is preferred by both players to the outcome of the strategy pair [A2,B2], which results in each player serving five years, but the players fail to coordinate on this strategy pair
even though both players are aware of the contents of the matrix and can see that they would be mutually better off if only they would cooperate.
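    For readers who like to see the mechanics laid bare, here is a little Python sketch of the dominance argument just described. The ordinal numbers [4 for best down to 1 for worst] and all the names are mine, purely for illustration:

```python
# Strategies: 0 = cooperate, 1 = defect. Ranks: 4 = best, 1 = worst.
# payoffs[a][b] = (A's rank, B's rank) when A plays a and B plays b.
payoffs = [
    [(3, 3), (1, 4)],   # A cooperates: (6 months, 6 months), (10 years, free)
    [(4, 1), (2, 2)],   # A defects:    (free, 10 years),     (5 years, 5 years)
]

def dominant_for_A():
    """Return A's strategy if it does at least as well against every B choice."""
    for a in (0, 1):
        other = 1 - a
        if all(payoffs[a][b][0] >= payoffs[other][b][0] for b in (0, 1)):
            return a
    return None

def dominant_for_B():
    """Return B's strategy if it does at least as well against every A choice."""
    for b in (0, 1):
        other = 1 - b
        if all(payoffs[a][b][1] >= payoffs[a][other][1] for a in (0, 1)):
            return b
    return None

a, b = dominant_for_A(), dominant_for_B()
print(a, b)               # 1 1 -- both players defect
print(payoffs[a][b])      # (2, 2) -- each player's third-best outcome
print(payoffs[0][0])      # (3, 3) -- mutual cooperation, which both prefer
```

The point of the sketch is simply that dominance reasoning, applied by each player separately, lands the pair on the outcome both rank below mutual cooperation.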


    For reasons that are beyond me, this fact about the matrix, and the little story associated with it, is considered by many people to reveal some deep structural flaw in the theory of rational decision making, akin to the so-called "paradox of democracy" in Collective Choice Theory. Military strategists, legal theorists, political philosophers, and economists profess to find Prisoner's Dilemma type situations throughout the universe, and some, like Jon Elster [as we shall see when we come to the Free Rider Problem] believe that it calls into question the very possibility of collective action.


    There is a good deal to be said about the Prisoner's Dilemma, from a formal point of view, so let us get to it. [Inasmuch as there are two prisoners, it ought to be called The Prisoners' Dilemma, but never mind.] The first problem is that everyone who discusses the subject confuses an outcome matrix with a payoff matrix. In the game being discussed here, there are two players, each of whom has two pure strategies. There are no chance elements or "moves by nature" [such as tosses of a coin, spins of a wheel, or rolls of a pair of dice]. Let us use the notation O11 to denote the outcome that results when player A plays her strategy 1 and player B plays his strategy 1. O12 will mean the outcome when A plays her strategy 1 and B plays his strategy 2, and so forth. There are thus four possible outcomes: O11, O12, O21, O22.


    In this case, O11 is "A serves six months and B serves six months." O12 is "A serves 10 years and B goes free," and so forth. Thus, the Outcome Matrix for the game looks like this:





                      B1 cooperate                          B2 defect

A1 cooperate          A serves six months and               A serves ten years and
                      B serves six months                   B goes free

A2 defect             A goes free and                       A serves five years and
                      B serves ten years                    B serves five years

    Notice that instead of putting a comma between A's sentence and B's sentence, I put the word "and." That is a fact of the most profound importance, believe it or not. The totality of both sentences, and anything else that results from the playing of those two strategies, is the outcome. Once the outcome matrix is defined by the rules of the game, each player defines an ordinal preference ranking of the four outcomes. The players are assumed to be rational -- which in the context of Game Theory means two things: First, each has a complete, transitive preference order over the four outcomes; and Second, each makes choices on the basis of that ordering, always choosing the alternative ranked higher in the preference ordering over an alternative ranked lower.
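    To make the distinction concrete, here is a small Python sketch, entirely my own construction, of the two-step process just described: the rules of the game fix the outcome matrix, and each player's ordinal ranking of the four outcomes then generates the payoff matrix mechanically.

```python
# The outcome matrix is fixed by the rules of the game.
outcomes = {
    ("A1", "B1"): "A serves six months and B serves six months",
    ("A1", "B2"): "A serves ten years and B goes free",
    ("A2", "B1"): "A goes free and B serves ten years",
    ("A2", "B2"): "A serves five years and B serves five years",
}

# Each ranking lists strategy pairs from most to least preferred.
rank_A = [("A2", "B1"), ("A1", "B1"), ("A2", "B2"), ("A1", "B2")]  # O21 > O11 > O22 > O12
rank_B = [("A1", "B2"), ("A1", "B1"), ("A2", "B2"), ("A2", "B1")]  # O12 > O11 > O22 > O21

def ordinal(ranking, cell):
    """1 = first choice, 4 = last choice."""
    return ranking.index(cell) + 1

payoff = {cell: (ordinal(rank_A, cell), ordinal(rank_B, cell)) for cell in outcomes}
print(payoff[("A1", "B1")])   # (2, 2) -- second, second
print(payoff[("A2", "B2")])   # (3, 3) -- third, third
```

Notice that the jail terms appear only in the outcome matrix; the payoff matrix retains nothing but each player's ranks.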


    Nothing in Rational Choice Theory dictates in which order the two players in our little game will rank the alternatives. A might hate B's guts so much that she is willing to do some time herself if it will put B in jail. Alternatively, she might love him so much that she will do anything to see him go free. A and B might be sister and brother, or they might be co-religionists, or they might be sworn comrades in a struggle against tyranny. [They might even be fellow protesters arrested in an anti-apartheid demonstration at Harvard's Fogg Art Museum -- see my other blog for a story about how that turned out.]


    "But you are missing the whole point," someone might protest. "Game Theory allows us to analyze situations independently of all these considerations. That is its power." To which I reply, "No, you are missing the real point, which is that in order to apply the formal models of Game Theory, you must set aside virtually everything that might actually influence the outcome of a real world situation. How much insight into any legal, political, military, or economic situation can you hope to gain when you have set to one side everything that determines the outcome of such situations in real life?"


    In practice, of course, everyone assumes that A ranks the outcomes as follows: O21 > O11 > O22 > O12. B is assumed to rank the outcomes O12 > O11 > O22 > O21. With those assumptions, since only ordinal preference is assumed in this game, the payoff matrix of the game can then be constructed, and here it is:





                      B1                      B2

A1                    second, second          fourth, first

A2                    first, fourth           third, third


    [Notice, by the way, that this is not a game with strictly opposed preference orders, because both A and B prefer O11 to O22. With strictly opposed preference orders, you cannot get a Pareto sub-optimal outcome from a pair of dominant strategies -- for extra credit, prove that. :) ]


    That payoff matrix contains the totality of the information relevant to a game theoretic analysis. Nothing else. But what about those jail terms? Those are part of the outcome matrix, not the payoff matrix. The payoff matrix gives the utility of each outcome to each player, and with an ordinal ranking, the only utility information we have is that a player ranks one of the outcomes first, second, third, or fourth [or is indifferent between two or more of them, of course, but let us try to keep this simple.] But ten years versus going scot free, and all that? That is just part of the little story that is told to perk up the spirits of readers who are made nervous by mathematics. We all know that when you are introducing kindergarteners to geometry, it may help to color the triangles red and blue and put little happy faces on the circles and turn the squares into SpongeBob SquarePants. But eventually, the kids must learn that none of that has anything to do with the proofs of the theorems. The Pythagorean Theorem is just as valid for white triangles as for red ones.


    To see how beguiled we can be by irrelevant stories, consider the following outcome matrix, derived from a variant of the story we have been dealing with:





                      B1 cooperate                          B2 defect

A1 cooperate          A serves one day and                  A serves 40 years and a day
                      B serves one day                      and B goes free

A2 defect             A goes free and B serves              A serves 40 years and
                      40 years and a day                    B serves 40 years


    In this variant, if both criminals keep their mouths shut, they go free after only one night in jail. If they both rat, they spend forty years in jail. If one rats and the other doesn't, the squealer goes free today and the other serves 40 years and a day. Both criminals know this, of course, because the premise of the game is that this is Decision Under Uncertainty, meaning that they know the content of the outcome matrix and of the payoff matrix but not the choice made by the other player. The structure of the payoff matrix associated with this outcome matrix is supposed to be identical with that associated with the original story, namely: For A, O21 > O11 > O22 > O12, and for B, O12 > O11 > O22 > O21, because the premise of the little example is that each player rates the outcomes solely on the basis of the length of his or her sentence, regardless of how long or short that is. It is therefore still the case that O11 is preferred by both players to O22, and it is still the case that IF each player's preference order is determined solely by a consideration of that player's sentencing possibilities [and each player prefers less time in jail to more], and IF each player chooses a strategy solely by attending to considerations of dominance, THEN the two of them will end up with a Pareto sub-optimal result.

    But how likely is all of that to occur in the real world? I suggest the answer is, not likely at all. For the upshot of the game to remain the same, we must assume two things, neither of which is even remotely plausible in any but the most bizarre circumstances: First, that each player is perfectly prepared to condemn his or her partner in crime to a sentence of 40 years and a day just to have a chance at reducing a one day sentence to zero; and second, that the two of them, faced with this extraordinary outcome matrix, cannot coordinate on the Pareto Preferred Outcome without the benefit of communication.
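    Here, for those who want to check it mechanically, is a little Python sketch of my own, with sentence lengths converted to days, confirming that the two stories induce exactly the same ordinal payoff matrix so long as each player ranks outcomes solely by his or her own sentence:

```python
# Sentences in days: (A's sentence, B's sentence) for each strategy pair.
original = {
    ("A1", "B1"): (180, 180),       # six months each
    ("A1", "B2"): (3650, 0),        # ten years / goes free
    ("A2", "B1"): (0, 3650),
    ("A2", "B2"): (1825, 1825),     # five years each
}
variant = {
    ("A1", "B1"): (1, 1),           # one day each
    ("A1", "B2"): (14601, 0),       # 40 years and a day / goes free
    ("A2", "B1"): (0, 14601),
    ("A2", "B2"): (14600, 14600),   # 40 years each
}

def ordinal_payoffs(game):
    """Rank each cell 1..4 for each player; less jail time is preferred."""
    cells = list(game)
    ranks = {}
    for i in (0, 1):  # 0 = player A, 1 = player B
        order = sorted(cells, key=lambda c: game[c][i])
        for r, c in enumerate(order, start=1):
            ranks.setdefault(c, [0, 0])[i] = r
    return {c: tuple(v) for c, v in ranks.items()}

print(ordinal_payoffs(original) == ordinal_payoffs(variant))   # True
```

The ordinal payoff matrices are identical, which is precisely why the game theoretic analysis cannot register the enormous difference between the two stories.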

Monday, June 28, 2010


Welcome back from Spring Break. As you know, I was planning to resume my discussion of the use and abuse of formal models with a discussion of the Prisoner's Dilemma. However, yesterday, as I was writing the subsequent discussion, of the Free Rider Problem, I hit a random key and everything disappeared into cyberpurgatory, from which I have been unable to retrieve it. I tried descending into the cyber underworld, but despite the helpful suggestions of several readers, I simply could not find my lost file. Perhaps the cause was the 103 degree heat here in Chapel Hill, or maybe it was the fact that I had posted a comment on my other blog critical of America's Afghan policy.

At any rate, after a troubled night of sleep, I am refreshed, and ready to recreate the fourteen pages that I lost. On Wednesday, this tutorial will resume. As they say in airliners when you have been sitting on the tarmac for three hours, Thank you for your patience [as though you had a choice.]

Sunday, June 27, 2010


I have just lost the next two posts of this blog -- one on the Prisoner's Dilemma, the other on the Free Rider Problem. I was cruising along, writing in WORD [Windows Vista] when I hit a key [I don't know which one] and everything in the file disappeared. I have done everything I can think of -- I went online and found official Microsoft instructions for finding lost files. No luck. It simply does not seem to be anywhere, even though my version of WORD is set to make an automatic backup every three minutes.

If anyone has any ideas, please let me know. I am too old for this! I am going back to a pen and a pad with a carbon sheet under every page. I just do not know whether I can rewrite the thirteen pages or so that I have just lost.

Friday, June 25, 2010


Well, I am home again, after a totally successful Paris trip, to give my sister a smashing eightieth birthday. Twenty-two people gathered for a champagne reception followed by a dinner cruise on the Seine. I got to spend time with my sons, my grandchildren, my daughter in law, my sister, my nephew and niece and grandnephews and grandnieces, and my Parisian cousins.

Now, it is back to work. If you have not read the three essays I set as homework during the break, see the links below, posted on June 2nd. On Monday, I will resume three-times-a-week installments of this tutorial. I will start with a discussion of The Prisoners' Dilemma, then move on to The Free Rider Problem, as discussed in the essay about Jon Elster. After that will come an extended discussion of Rawls, then a brief discussion of Robert Nozick, then some examination of the use of Game Theory in legal theory, and after that perhaps a discussion of the use of Game Theory and Rational Choice Theory in nuclear deterrence and military strategy discussions.

If anyone is still with me after all of that, we shall see what else remains to be said.

Tuesday, June 15, 2010


I am still in Paris, preparing for my sister's eightieth birthday bash, but I shall return to this tutorial on June 25th with a discussion of The Prisoner's Dilemma. I hope you will stay with me, and will do the reading I suggested in my last post before the Spring Break.

Wednesday, June 2, 2010


I shall be in Paris until June 24th. While I am there, I would like you to read three selections from my writings. When I return, I will begin a lengthy discussion of these and other writings by various authors, applying all the technical materials we have been studying on this blog.

I would like you to read this essay about Jon Elster

Then, I would like you to read this selection from my book on Rawls

Finally, I would like you to read this essay about Robert Nozick's ANARCHY, STATE, AND UTOPIA.

All three authors make extended use of the Formal Methods we have been studying.

I hope I see you back here when I return.

Last Installment Before Spring Break

Once again, let us pause to catch our breath. We arrived at this magnificent theorem by making a series of very powerful constraining and simplifying assumptions. Let us just list some of them:


    (0) We began by talking about games.

    (1) We limited ourselves to two person games

    (2) We limited ourselves to players whose preferences satisfy the six powerful Axioms from which we can deduce that their preferences can be represented by cardinal utility functions.

    (3) We limited ourselves to players with strictly competitive preferences

    (4) We allowed for mixed strategies.

    (5) We accepted mathematical expectation as a rational way of calculating the value of a strategy involving elements of risk.

    (6) We adopted von Neumann's extremely conservative rule for choosing strategies -- maximizing one's security level.

    (7) We assumed no pre-play communication between the players.

    (8) We assumed perfect knowledge by both players of the information required to construct the payoff matrix or payoff space.


    Every one of these assumptions can be altered or dropped. When that happens, a vast array of possibilities opens up. No really powerful theorems can be proved about any of those possibilities, but lots and lots can be said. Here is how I am going to proceed. First, I am going to discuss each of these assumptions briefly and sketch the sorts of possibilities that open up when we drop it or alter it. After that, I will gather up everything we have learned and apply it to a number of specific texts in which Game Theory concepts are used. I will offer a discussion of the so-called Prisoner's Dilemma, a full scale analysis of John Rawls' central claim in A Theory of Justice, a critique of Robert Nozick's Anarchy, State, and Utopia, a detailed critique of a book by Jon Elster called Making Sense of Marx, a critique of the use made of Game Theory by nuclear deterrence strategists, and some remarks on the use of Game Theory concepts in writings by legal theorists. By then, you ought to be able to carry out this sort of critique yourselves whenever you encounter Game Theoretic or Rational Choice notions in your field of specialization.


    Now let me say something about each of the nine assumptions listed above.


(0) The Modeling of Real Situations as Games


    I identify this as assumption zero because it is so fundamental to the entire intellectual enterprise that it is easy to forget what a powerful simplification and idealization it is. Games are activities defined by rules. Imagine yourself watching two people playing chess, not knowing what chess is, but knowing only that a game is being played in the area. How would you describe what you are watching? Which of the things you see are appropriately included in the game and which are extraneous? Which characteristics of the various objects and people in the neighborhood are part of, or relevant to, the game? Is gender relevant? Is race relevant? Is the dog sitting by the table part of the game? Are the troubled sighs of one of the persons a part of the game? How do you know when the game begins and when it ends? Is the clothing of the persons in the area relevant? Are all of the people in the area part of the game, or only some of them? Indeed, are any of them part of the game? You cannot answer any of these questions easily without alluding to the rules of the game of chess. Once you acquaint yourself with the rules of chess, all of these questions have easy answers.

    Now imagine yourself watching a war. Not one of the questions I raised in the previous paragraph has an obvious answer with regard to a war. When does a war start and when does it end? Are the economic activities taking place in the vicinity of the fighting part of the war or not? Who are the participants in a war? States that have formally declared war on one another, other nearby states, private individuals? And so forth. War is not a game. I don't mean that in the usual sense -- that it is serious, that people get killed, etc. I mean it in the Game Theory sense. War is not an activity defined by a set of rules with reference to which those questions can be answered. Neither is market exchange, contrary to what you might imagine, nor is love, nor indeed is politics. There are many descriptive generalizations you can make about war, market exchange, love, and politics, but no statements that are determinative or definitive of those human activities. When you apply the concepts of Game Theory to any one of them, you are covertly importing into your discussion all the powerful simplifications and rule-governed stipulations that permit us to identify an activity as a game. Whenever you read an author who uses the concepts of Game Theory [move, payoff, strategy, zero sum, Prisoners' Dilemma, etc] in talking about some political or military or legal or economic situation, think about that.


(1) Games with more than two persons:


    As soon as we open things up to allow for more than two players in a game, everything gets very complicated. First of all, with three or more players, no meaning can be given to the concept of opposed preference orders. We can still make the assumption of cardinal utility functions if we wish, because that is an assumption about an individual player's preference structure, and has no reference to any particular game. With three or more players, it also becomes difficult to represent the game by means of a payoff matrix. Not impossible -- we can always define an n-dimensional matrix -- just very difficult either to visualize or to employ as a heuristic device for analyzing a game. That is why writers who invoke the concepts or the language of Game Theory will sometimes reduce a complex social situation to "a player and everyone else," in effect trying to turn a multi-player game into a two player game. That is almost always a bad idea, because in order to treat a group of people as one player, you must abstract from precisely the intra-party dynamics that you usually want to analyze.

    Multi-player games also for the first time introduce the possibility of coalitions of players. Coalitions may either be overt and explicit, as when several players agree to work together, or they may be tacit, as when players who are not communicating overtly with one another begin to adjust their behavior to one another in reciprocal ways for cooperative ends. Once we allow for coalitions, we encounter the possibility of defections of one or more parties from a coalition, and that leads to the possibility that two players or groups of players will bid for the allegiance of a player by offering adjustments in the payoff schedule, or side payments.

    All of this sounds very enticing and interesting, and I can just imagine some of you salivating and saying to yourselves, "Yeah, yeah, now he is getting to the good stuff." But I want to issue a caution. The appeal of Game Theory to social scientists, philosophers, and others, is that it offers a powerful analytical structure. That power is achieved, as I have labored to show you, by making a series of very precise, constraining simplifications and assumptions. As soon as you start relaxing those assumptions and simplifications, you rapidly lose the power of the analytical framework. You cannot have your cake and eat it too. By the time you have loosened things up enough so that you can fit your own concerns and problems into the Game Theory conceptual framework, you will almost certainly have lost the rigor and power you were lusting after, and you are probably better off using your ordinary powers of analysis and reason. Otherwise, you are just tricking your argument out in a costume, in effect wearing the garb of a Jedi knight and carrying a toy light saber to impress your children.


(2) Abrogating one of the Six Axioms


    The six Axioms laid down by von Neumann conjointly permit us to represent a player's preferences by means of a cardinal utility function. There are various ways in which we might ease those axioms. One is to assume only an ordinal preference structure. As we have seen, that is sufficient for solving some two-person games, and it might be sufficient for usefully analyzing some multi-party games. We may need no more than the knowledge of the order in which individuals rank alternatives. All majority rule voting systems, for example, require only ordinal preference orders, a fact that is important when considering the so-called "paradox of majority rule."
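    Here is a tiny Python sketch of that paradox, with three hypothetical voters of my own invention: each voter's individual ranking is perfectly complete and transitive, and yet the majority's preference cycles.

```python
# Each list is one voter's ordinal ranking, best alternative first.
voters = [
    ["a", "b", "c"],   # voter 1: a > b > c
    ["b", "c", "a"],   # voter 2: b > c > a
    ["c", "a", "b"],   # voter 3: c > a > b
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for v in voters if v.index(x) < v.index(y))
    return wins > len(voters) / 2

print(majority_prefers("a", "b"))   # True  (voters 1 and 3)
print(majority_prefers("b", "c"))   # True  (voters 1 and 2)
print(majority_prefers("c", "a"))   # True  (voters 2 and 3) -- a cycle
```

Only the voters' orderings enter the calculation; no cardinal utility information is needed, which is the point of the paragraph above.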


    The assumption of completeness is very powerful and potentially covertly biased in favor of one or another ideological position, a fact that I will try to show you when we come to talk about Nozick's work. In effect, the assumption of completeness serves the purpose of transforming all relationships into market exchanges, with results that are very consequential and, at least for some of us, baleful.


    Transitivity is also a powerful assumption, and some authors, most notably Rawls, have chosen to deny it in certain argumentative contexts. Recall my brief discussion of Lexicographic orders. When Rawls says that the First Principle of Justice is "lexically prior" to the Difference Principle, he is denying transitivity. He is also, as we shall see, making an extremely implausible claim. Whether he understood that is an interesting question.


    One of the trickiest thickets to negotiate is the relationship between money and utility. Because the Axioms we must posit in order to represent a player's preferences by a cardinal utility function are so daunting, those who like to invoke the impressive looking formalism of Game Theory almost always just give up and treat the money payoffs in a game [or a game like situation] as equivalent to the players' utilities. This is wrong, and some folks seem to know that it is wrong, but they almost never get further than just making some casual assumption of declining marginal utility for money. The issue of aversion to risk is usually ignored, or botched.
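    To illustrate the gap between money and utility, here is a small Python sketch, with the numbers and the logarithmic utility function chosen purely for illustration, of a risk-averse player who rejects a gamble whose expected money value exceeds a certain alternative:

```python
import math

wealth = 1000.0
certain_gain = 450.0
gamble = [(0.5, 1000.0), (0.5, 0.0)]   # expected money value: 500

def expected_money(lottery):
    """Mathematical expectation of the money payoffs alone."""
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, u=math.log):
    """Expectation of utility of final wealth, with a concave (risk-averse) u."""
    return sum(p * u(wealth + x) for p, x in lottery)

print(expected_money(gamble) > certain_gain)                       # True: 500 > 450
print(expected_utility(gamble) < math.log(wealth + certain_gain))  # True: gamble rejected
```

Treating the money payoffs as if they were the utilities would predict the opposite choice, which is exactly the error described above.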


    To give you one quick example of the tendency of writers to ignore the complexity of the six Axioms, here is the entry in the end-of-volume Glossary for "von Neumann-Morgenstern Expected Utility Theory," in Game Theory and the Law by Douglas G. Baird, Robert H. Gertner, and Randal C. Picker:


    "Von Neumann and Morgenstern proved that, when individuals make choices under uncertainty in a way that meets a few plausible consistency conditions, one can always assign a utility function to outcomes so that the decisions people make are the ones they would make if they were maximizing expected utility. This theory justifies our assumption throughout the text that we can establish payoffs for all strategy combinations, even when they are mixed, and that individuals will choose a strategy based on whether it will lead to the highest expected payoff."


     Now that you have sweated through my informal explanation of each of the six Axioms, I leave it to you whether they are correctly characterized as "a few plausible consistency conditions."


(3) Relaxing the Assumption of Strictly Competitive Preferences


    As I have already pointed out, there are a great many two-party situations [like two people negotiating over the price of a house] in which the parties do not have strictly opposed preference orders. This is manifestly true in nuclear deterrence strategy situations in which it is in the interest of both parties to avoid one outcome -- namely mutually destructive all out war.


    In addition to games that are partly competitive and partly cooperative, we can also consider totally cooperative games, sometimes called "coordination games." Here is one example. In his book, The Strategy of Conflict, Schelling cites a coordination game he invented to try out on his Harvard classes. He divided his class into pairs of students, and told them that without consultation, they were to try to coordinate on a time and place where they would meet. Each member of the pair was to write a time and place on a slip of paper, and then the two of them would read the slips together. "Winning" meant both students choosing the same time and place. An impressive proportion of the pairs, Schelling reported, won the game by coordinating on "Harvard Square at noon when classes let out." Obviously, their success in coordinating involved their bringing to the game all manner of information that would be considered extraneous in a competitive game, such as the fact that both players are Harvard students. Some time after reading this, I was chatting with a Harvard couple I knew, and I decided to try the game out on them. When I opened the first piece of paper, my heart sank. The young man had written, "4:30 p.m., The Coffee Connection." "Oh Lord," I thought, "he didn't understand the game at all." Then I looked at the young lady's piece of paper. It read, "4:30 p.m., The Coffee Connection." It seems that is where they met every day for coffee. Schelling wins again!

Not much in the way of theorems, but a great deal in the way of insight, can be gained from analyzing these situations, as Schelling has shown.


(4) Mixed Strategies


    The subject of mixed strategies has an interesting history. During the Second World War, the Allies struggled with the problem of defending the huge trans-Atlantic convoys of military supply ships going from the United States to England against the terrible depredations of the Nazi wolf packs of U-boats. The best defense was Allied airplanes capable of spotting U-boats from the air and bombing them, but the question was, What routes should the planes fly? If the planes, day after day, flew the same routes, the U-boats learned their patterns and maneuvered to avoid them. There was also the constant threat of espionage, of the secret anti-U-boat routes being stolen. The Allied planners finally figured out that a mixed strategy of routes determined by a lottery rather than by decision of the High Command held out the most promising hope of success.
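    Here is a toy Python version of the convoy problem, with spotting probabilities invented by me, showing how randomizing between two routes raises the patrol's security level above what either pure route can guarantee:

```python
# spot[(route, position)] = probability the patrol spots the U-boat
# when the patrol flies `route` and the U-boat lurks at `position`.
spot = {("N", "N"): 0.8, ("N", "S"): 0.1,
        ("S", "N"): 0.2, ("S", "S"): 0.6}

# If the patrol flies N with probability p, the U-boat's best reply leaves
# the patrol with min(0.8p + 0.2(1-p), 0.1p + 0.6(1-p)). The optimal mixed
# strategy equalizes the two: 0.6p + 0.2 = 0.6 - 0.5p, so p = 4/11.
p = 4 / 11
value_vs_N = 0.8 * p + 0.2 * (1 - p)
value_vs_S = 0.1 * p + 0.6 * (1 - p)

print(round(value_vs_N, 4), round(value_vs_S, 4))   # equal: the value of the game
# The mix guarantees more than either pure route's security level:
print(value_vs_N > min(0.8, 0.1), value_vs_N > min(0.2, 0.6))   # True True
```

Flying N every day guarantees only the worst case 0.1, and flying S every day only 0.2; the lottery guarantees roughly 0.42 no matter what the U-boats do.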


    Generally speaking, however, mixed strategies are a bit of arcana perfect for proving a powerful mathematical theorem but not much use in choosing a plan of action.


(5)-(6) Calculation of Mathematical Expectation versus Maximization of Security Levels


    We have already discussed at some length the limitations of maximization of expected utility as a criterion of rationality of decision making. Von Neumann and Morgenstern reject it in favor of the much more conservative rule of maximizing one's security level. We have also seen that this rule of decision making does not allow for risk aversion [or a taste for risk], unless we totally change the set over which preferences are expressed, so that they become compound lotteries over entire future prospects rather than Outcomes in any ordinary sense. As we have also seen, maximization of expected utility rules out lexicographic preference orders, and when I come to talk about the application of this methodology to nuclear strategy and deterrence policy, I will argue that the assumption of non-lexicographic preference orders covertly constitutes an argument for a nuclear strategy favoring the Air Force or the Army rather than the Navy in the inside-the-Beltway budget battles.
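    A minimal Python sketch, with made-up utilities, of how the two rules can part company:

```python
# Each strategy's utilities against the opponent's two possible choices
# (numbers invented purely for illustration).
payoffs = {
    "safe":  [3, 4],
    "risky": [0, 10],
}

# Security level: the worst that can happen if you play the strategy.
security = {s: min(v) for s, v in payoffs.items()}
# Expected utility under 50/50 beliefs about the opponent.
expected = {s: sum(v) / len(v) for s, v in payoffs.items()}

maximin_choice = max(security, key=security.get)
expected_choice = max(expected, key=expected.get)
print(maximin_choice, expected_choice)   # safe risky
```

The maximin rule picks "safe" because it guarantees 3 no matter what; the expected utility rule picks "risky" because 5 beats 3.5 on average. Which counts as the rational choice is precisely what is at issue.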


(7) Pre-Play Communication


    Once we permit pre-play communication, all manner of fascinating possibilities open up. As we might expect, situations with pre-play communication and non-strictly opposed preference orders are among the richest fields for discussion and at the same time allow for the least in the way of rigorous argument or proof. In the hands of an author with a good imagination and a sense of humor, this can be lots of fun, but virtually everything that can be said about such situations can be said without calling them games and drawing imposing looking 2 x 2 payoff matrices. For example, as any hotshot deal maker in the business world knows, when you are engaged in a negotiation, it is sometimes very useful to make yourself deliberately unreachable as the clock ticks on toward the deadline for a deal. If a deal must be struck by noon on Tuesday, and if both parties want to reach agreement somewhere in the bargaining space defined by the largest amount of money the first party is willing to pay and the smallest amount the second party is willing to accept, it is tactically smart for the buyer to make a lowball offer within that space, and then be unavailable until noon Tuesday [somewhere without cell phone coverage, in the ICU of a hospital, on an airplane.] The seller must then accept the offer or lose the sale. Since by hypothesis the seller is willing, albeit reluctant, to sell at that price, she will accept rather than lose the sale. If the seller sees this coming, she can in turn give binding instructions to her agent to accept no offer unless there is the possibility of a counteroffer before the deadline. Then she can make herself unavailable. And so forth. This is the stuff of upscale yuppie prime time tv shows. It just sounds more impressive when you call it Game Theory.
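    The arithmetic of the deadline tactic can be put in a few lines of Python, with all the numbers invented for illustration:

```python
# The bargaining space is every price between the seller's minimum and the
# buyer's maximum; any price inside it beats no deal for both parties.
seller_min = 200_000   # the least the seller will reluctantly accept
buyer_max = 260_000    # the most the buyer is willing to pay

# The buyer's take-it-or-leave-it lowball, just inside the bargaining space,
# followed by unavailability until the deadline.
lowball = seller_min + 1_000

in_space = seller_min <= lowball <= buyer_max
print(in_space)              # True: the seller prefers accepting to losing the sale
print(buyer_max - lowball)   # 59000 -- the surplus the deadline tactic captures
```

Nothing in this calculation required a payoff matrix; as the paragraph above says, it is ordinary bargaining reasoning dressed up.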


(8) Perfect Information


    The general subject of perfect and imperfect information has been so much discussed in economics of late that I need not say anything here. Suffice it to note that formal Game Theory assumes perfect information of the payoff matrix, which embodies both the rules of the game and players' preference structures. Games do allow for imperfect information, of course. Poker players do not know one another's cards, for example. But that is a different matter, built into the rules of the game.