Friday, May 14, 2010

Weeks 1 and 2


WEEK 1

Our term began on March 4th with an introduction to our class, Advanced Analytical Techniques. On the first day, Professor Kris Wheaton gave a brief overview of what to expect throughout the term and what was expected of us as students. He passed out a list of dozens of methods/modifiers that we could choose from to study for the term. As I perused the list, most of the methods/modifiers had a brief description underneath, and nothing caught my attention. That is, until I reached game theory (which, by the way, did not have a description).

All that I knew about game theory was what a zero-sum game is (to be detailed in a later post), or at least I had heard of it before. Actually, that's not entirely true; I also knew that it was mathematically complex, but I was confident in my ability to grasp the concepts by the end of the term. Since we did not have to decide what we wanted to study until the end of Week 2 (we only had one class the first week), I decided to do a little background research to make sure I actually wanted to study it, so I read the Wikipedia article.

As soon as I read it, I knew I had chosen the right subject. However, it was not because I was overly excited about game theory itself. On the contrary, I was excited about the possible range of topics that I could apply it to, which was the other requirement of the course. Specifically, I learned that game theory is used extensively in international relations, which just happens to be what my undergraduate background is in. Essentially, that's what happened during the first week; nothing out of the ordinary, thus, so far so good.

WEEK 2

This week turned out to be a wash, at least when it came to my research on game theory. You see, I'm also a member of the Competitive Intelligence club here in the department, and we were quite fortunate to receive an invitation from SCIP (the Society of Competitive Intelligence Professionals) to volunteer at their annual conference in Washington, DC. The conference lasted the entire second week of Spring Term (March 8-12). I had high hopes of getting some work done after the business day concluded, but, frankly, that was wishful thinking on my part. Let me just say that even though the conference was great, it was one of the most tiring weeks I have ever had to go through. And trust me, if you're enrolled in the Mercyhurst College Institute for Intelligence Studies, you experience many weeks of extreme fatigue! But at least I was able to lock down game theory as my subject for the term. Beginning in Week 3, the real fun started...





Weeks 3 and 4

[Image: cover of The Predictioneer's Game, via Amazon]

WEEK 3

During Week 3, I attempted to isolate a topic to which I could apply game theory, and I continued to build my knowledge of game theory itself. Admittedly, finding a topic that is both interesting and not previously covered by someone else proved quite challenging, so I focused most of my time on understanding what game theory is. To accomplish that goal, I purchased two texts (which I will discuss in a later post) that Professor Wheaton recommended in class. The first, Prisoner's Dilemma by William Poundstone, is an introductory work designed to give students a general understanding of game theory by discussing its origins and its use in the social sciences, mainly international relations (specifically foreign policy following the advent of the atomic bomb); I have read one-third of the book thus far. The second text is The Predictioneer's Game by Bruce Bueno de Mesquita; I did not begin reading it until about a week later. But, from thumbing through the pages of each book at the store, I realized that they would be more useful in helping me understand game theory than in helping me find a topic to apply it to.

In addition, I collected information on Bueno de Mesquita's work with game theory because of his status as the world's leading game theorist. Thus far, I have read his work pertaining to Iran's nuclear program, the Copenhagen climate summit, and several papers dealing with domestic politics and political economy. Unfortunately, as is to be expected with scholarly articles, much of his work assumes the reader already has an extensive background in game theory, which, of course, I didn't. Thus, I decided to change course; that is, I adopted a "bottom-up" strategy of learning the basics of game theory and gradually increasing the complexity of the works I read.

WEEK 4

I continued reading William Poundstone's book "Prisoner's Dilemma." Most of what I read dealt with the Soviet development of the atomic bomb and the reaction across the American political spectrum. There was much talk of conducting a preventive war before the Soviets could develop an atomic capability that could adequately serve as a deterrent. Even though the Soviets possessed the bomb by 1949, some U.S. policymakers thought it was possible to launch a preventive war that would cripple the Soviets' ability to respond, and thereby help create a world government of sorts. This history of nuclear weapons served as a good example of a real-world situation where one could apply game theory. But, more importantly for the study of game theory, Poundstone detailed the Ohio State studies on game theory commissioned by the U.S. Air Force in the late 1950s and early 1960s. From these studies, I learned (although I believed this to be true anyway) that people feel greater psychological satisfaction in knowing they "beat" their adversaries, even when cooperating would have led to a higher payoff for both players. In fact, in one experiment, only two out of twenty-two pairs of players consistently cooperated with one another to increase their mutual payoffs. In addition to further reading, I attempted to create a model of the U.S. political system using financial reform legislation as the application topic.

I also began reading The Predictioneer's Game during Week 4. If you've read Freakonomics by Levitt and Dubner, this book is the game theory version of it: it tries to make game theory accessible to the masses. Bueno de Mesquita (BDM) opens the book by talking about how to purchase a car, using the example to sum up game theory's main argument: "that people do what they believe is their best interest." Here's how the example goes. People buy cars in one of two ways. Most go to a dealership, test drive a car, and negotiate a price, which can be an annoying process, because who likes to argue over money? The rest research the car they want, get quotes from a group of dealers, and pick the best one. BDM says this is better, but not by much. Back to the first way, though: BDM argues that by simply going to the dealer, the buyer is sending a "costly signal." That is, the fact that you went to the dealer signals that you want to buy a car, because otherwise going there would have been a waste of time and energy. Thus, the dealer has the advantage, because he possesses information about you but not vice versa, so you probably will not end up with a good deal. BDM goes on to list other things buyers often do that put them at a disadvantage, but I'd be rewriting the chapter if I listed them here. Anyway, here's how BDM suggests one should buy a car. Obviously, we all want the best deal possible in any purchasing situation, but how does one gain the advantage? By controlling the agenda in the negotiation; in this case, that means forcing each seller to put forward his best price knowing that the buyer won't accept on the spot no matter how good it is. First, determine exactly what you want in a car; it does not matter what that is (performance over safety, a luxury interior, etc.).
Then, locate every dealer within a fixed radius of the buyer's home that has the particular vehicle in stock (this can probably be done online). Call every dealer that has the car and say that you plan to buy a car today (unlikely for most people, but that's how his example goes) and that the dealer who offers the lowest out-the-door price gets the sale, because a check for that amount will be written and the buyer won't have another check with him. Make sure to tell each dealer that you plan to quote the lowest price to the next dealer, and so on. By doing this, each dealer knows what price he needs to beat if he wants the sale. If a dealer tries to haggle with you, it's to his own detriment, because you can move on to the next dealer. BDM argues that you can never gain this advantage by physically going to the dealership, because they already know you're interested in a car and probably need to buy one as soon as possible. Furthermore, if a dealer says you cannot purchase a car over the phone, the buyer simply replies that he cannot purchase a car from THAT dealership over the phone and moves on to the next dealer. The dealer might even try to guilt the buyer by saying that the next dealer will only drop the price by as little as $50 and that the buyer should just purchase the car from him right now. Another easy response, according to BDM: "fine, then I'll buy the car for $50 less at the other dealership, but if you quote that price now, I'll buy the car from you" (that's my paraphrasing of BDM's actual line). A desperate dealer will even go so far as to insult your intelligence by assuring you that he has the "best prices in town" rather than quoting you a price on the phone. If the conversation continues, though, the dealer will run out of sales gimmicks and will be forced to quote his absolute lowest price.
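Just for fun, here's a rough sketch of BDM's phone-auction procedure in Python. The dealer names, opening quotes, price floors, and undercutting behavior are entirely hypothetical placeholders; the point is only to show the structure of the negotiation, where each dealer learns the price he has to beat.

```python
# Illustrative sketch of the phone-auction procedure for buying a car.
# All dealer behavior below is hypothetical.

def phone_auction(dealer_quotes):
    """Call dealers in turn, quoting each one the best price so far.

    dealer_quotes maps a dealer name to a function that takes the current
    best out-the-door price (None on the first call) and returns that
    dealer's quote. Returns the winning dealer and the final price.
    """
    best_price = None
    best_dealer = None
    for dealer, quote_fn in dealer_quotes.items():
        quote = quote_fn(best_price)
        if best_price is None or quote < best_price:
            best_price, best_dealer = quote, dealer
    return best_dealer, best_price

def make_dealer(floor, opening):
    """A hypothetical dealer who undercuts the quoted best price by $100,
    but never goes below his own floor."""
    def quote(best_so_far):
        if best_so_far is None:
            return opening
        return max(floor, best_so_far - 100)
    return quote

dealers = {
    "Dealer A": make_dealer(24_500, 26_000),
    "Dealer B": make_dealer(24_200, 25_800),
    "Dealer C": make_dealer(24_000, 25_500),
}

winner, price = phone_auction(dealers)
```

With these made-up numbers, each successive dealer undercuts the standing quote, so the last dealer called ends up with both the sale and the lowest price.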

That's how to buy a car according to Bruce Bueno de Mesquita. It seems like a good idea, but I'm willing to bet he has the financial resources to buy a car with a single check. When I can afford to buy a new car, I am definitely going to try his method with a twist: I'll say exactly the same things he does, except I'll only write a check for the down payment and get financing through a bank, not the dealer's financing department. When that day comes, I'll be sure to write a post about my experience.



Weeks 5 and 6

[Image: Boston GS Protest, by americans4financialreform, via Flickr]

WEEK 5

Up until now, I had been having a great deal of difficulty finding a topic to apply game theory to that another student or researcher had not covered before. Almost every international relations issue I found interesting enough had at least one paper written on it already. And since I am a novice in game theory, there was really nothing I could do to build upon that existing research.

But, while I was reading BDM's book, I came across his argument that only four questions need to be answered in order to make a forecast using game theory. They are as follows:

1. Identify every person or group with an interest in trying to influence the outcome, but that does not necessarily mean only the key decision-makers.
2. Estimate as accurately as possible what policy each player would like to see prevail, in other words, what do they want?
3. Determine how important the issue is for each player/group involved. Is it so important that the player/group will focus solely on this situation in order to achieve a specific outcome or do they have other pressing matters that preclude them from participating as much as they'd like?
4. Determine how influential each player/group is relative to the other players involved. How persuasive is each player?

When I read that excerpt, only one subject occurred to me: U.S. domestic politics. Specifically, I thought I could apply game theory to the financial reform legislation that was being debated at the time. This worked for two reasons: I could answer BDM's four questions relatively easily, and there was not likely to be any existing research on this particular topic. Here are the players I believed would have a meaningful interest in influencing the outcome (there are certainly more than the group I picked, but the list would have grown quite large if I identified every person interested in the outcome): President Obama, Speaker of the House Nancy Pelosi, House Minority Leader John Boehner, Senate Majority Leader Harry Reid, Senate Minority Leader Mitch McConnell, House Financial Services Committee Chairman Barney Frank, House Financial Services Committee Ranking Republican Spencer Bachus, Senate Banking Committee Chairman Christopher Dodd, Senate Banking Committee Ranking Republican Richard Shelby, and Republican Senator Bob Corker. I was not able to answer the other three questions during Week 5, so I decided to pose the question to the class during Week 6.
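Although I don't have access to BDM's actual algorithm, a heavily simplified stand-in for it is an influence-and-salience-weighted mean of the players' policy positions. The code below sketches that idea; every number in it is a hypothetical placeholder for a few of the players listed above, not a real estimate.

```python
# A much-simplified stand-in for BDM's forecasting model. Each player gets
# a policy position (0 = no reform, 10 = complete overhaul), a salience
# (how much the player cares about this issue), and an influence score.
# The forecast is the salience-and-influence-weighted mean position.
# All numbers here are hypothetical illustrations.

players = {
    "Obama":     {"position": 8, "salience": 0.9, "influence": 1.0},
    "Pelosi":    {"position": 9, "salience": 0.8, "influence": 0.6},
    "Reid":      {"position": 8, "salience": 0.8, "influence": 0.6},
    "Boehner":   {"position": 3, "salience": 0.7, "influence": 0.4},
    "McConnell": {"position": 2, "salience": 0.8, "influence": 0.5},
}

def weighted_forecast(players):
    """Forecast the outcome as a weighted mean of the players' positions."""
    total_weight = sum(p["salience"] * p["influence"] for p in players.values())
    weighted_sum = sum(p["position"] * p["salience"] * p["influence"]
                       for p in players.values())
    return weighted_sum / total_weight

forecast = weighted_forecast(players)
```

With these made-up inputs, the heavily weighted pro-reform players pull the forecast position above the midpoint of the scale, i.e., toward some meaningful reform.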

WEEK 6

Following the classroom discussion on Tuesday, I decided to reorganize the important players in the financial reform debate into groups. Although there are differences in the specific policy positions of the individual players, each group is organized around the broad position its members advocate. Therefore, for simplicity's sake, I organized the individual players into one group that advocates strong financial reform and another that opposes strong financial reform (although its members are not opposed to some kind of reform). The first group, advocating strong reform, includes President Obama, House Speaker Pelosi, Senate Majority Leader Reid, House Financial Services Committee Chairman Frank, Senate Banking Committee Chairman Dodd, White House Chief of Staff Rahm Emanuel, and White House advisers David Axelrod and Valerie Jarrett. The second group, opposing strong reform, includes House Minority Leader Boehner, Senate Minority Leader McConnell, House Financial Services Committee Ranking Republican Bachus, Senate Banking Committee Ranking Republican Shelby, and banking lobbyists. I am still researching who the most powerful lobbyists are; although the entire banking industry would likely be affected by the legislation, it would be impractical to list the "banking industry" writ large as a player in this game.

In addition to continued work on the model, I visited the website www.gametheory.net and found it to be a very helpful tool. The site is aimed at a variety of people, from students and researchers to business people trying to apply game theory to situations their firms are facing. I downloaded several sets of lecture notes from introductory game theory courses. Specifically, lecturers from the Massachusetts Institute of Technology and the University of Pittsburgh provided very accessible material that even casual students of game theory can understand. Since these are introductory course notes, robust mathematical models of game theory were not included.

Finally, Professor Wheaton suggested that we all start to think about how we wanted to publish the results of our individual studies. Obviously, I'm writing a blog now, but at the time I was considering building a Google Site since I had experience using it for my Competitive Intelligence class during the winter term.




Weeks 7 and 8


WEEK 7

As I continued working on the model using BDM's four questions, it became increasingly apparent that without access to his algorithm, it would be immensely difficult to recreate his work, which was essentially what I was trying to do. But since I had already grouped the players into two teams, basically Democrats and Republicans, I could develop a 2x2 game with four possible outcomes. By this time, I had finished reading Prisoner's Dilemma, which mainly discusses two-player games.

I created a scale from 0 to 10, where 0 means one side completely fails to meet any of its intended objectives and 10 means one side completely succeeds in meeting all of them. The numbers will seem counter-intuitive when taken at face value. For example, if Democrats cooperate and Republicans do not, Democrats will adopt at least a few Republican ideas in the final bill; therefore the Republicans will have a lower number, because they would not be able to argue that Democrats acted in a completely partisan manner. The opposite would happen if Democrats do not cooperate and Republicans do (i.e., Dems completely ignore sound Republican ideas, so Republicans can argue the Dems acted in a partisan way). If both sides cooperate, the final bill will not be ideal for either side, but the political benefits of cooperation exceed the perceived weaknesses. If neither side cooperates, the Dems will push through a bill that has no Republican ideas, but since the Republicans did not cooperate either, they could not credibly argue that the Dems alone acted in a partisan manner; thus, even though the bill would encompass everything the Dems wanted, public opinion would likely not favor their position.

Based on this model, I would forecast that both Democrats and Republicans will choose to cooperate with each other and each receive a payoff of 5. This is the optimal outcome because cooperation yields the highest possible payoffs (5 or 7) regardless of what the other player chooses. The matrix I created is at the top of the page.
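In case the matrix image doesn't come through, here is the Week 7 game in code. The mutual-cooperation payoff of 5 and the cooperator's possible payoffs of 5 or 7 come from my description; the defection payoffs (3 and 4) are hypothetical fill-in values chosen only so that cooperation dominates as described.

```python
# The Week 7 game as a 2x2 payoff table.
# payoffs[(dem_move, rep_move)] = (dem_payoff, rep_payoff); symmetric game.

C, D = "cooperate", "defect"

payoffs = {
    (C, C): (5, 5),   # from the post: mutual cooperation pays 5 each
    (C, D): (7, 3),   # hypothetical split, consistent with the description
    (D, C): (3, 7),
    (D, D): (4, 4),   # hypothetical
}

def dem_payoff(dem, rep):
    return payoffs[(dem, rep)][0]

def rep_payoff(dem, rep):
    return payoffs[(dem, rep)][1]

# "Cooperation yields the highest possible payoffs regardless of what the
# other player chooses" -- i.e., cooperating strictly dominates for both.
dem_coop_dominates = all(dem_payoff(C, r) > dem_payoff(D, r) for r in (C, D))
rep_coop_dominates = all(rep_payoff(d, C) > rep_payoff(d, D) for d in (C, D))
```

Under these assumed numbers, cooperation is each side's best move whatever the other side does, which is exactly why the forecast lands on mutual cooperation at (5, 5).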

WEEK 8

This week, I quickly realized that I would need to update the matrix due to new polling data and the announcement of a Securities and Exchange Commission investigation into Goldman Sachs. Financial reform legislation clearly had the support of the general public, and President Obama enjoyed more trust on the issue than Congressional Republicans. Both of these indicated that Democrats had the advantage in the debate; therefore, the matrix needed modification.

I re-operationalized the scale. Back in Week 7, I created a scale from 0 to 10, where 0 meant one side completely failing to meet any of its intended objectives and 10 meant one side completely succeeding in meeting all of them. I changed that so that 0 now means the player prefers no reform whatsoever and wants to maintain the status quo, while 10 means the player wants a complete overhaul of the financial system that completely alters the status quo. Generally, the more liberal the player (at least on this issue), the higher the score, and the more conservative the player, the lower the score. Also, I began to feel a bit nervous about financial reform itself. Senators Dodd and Shelby were negotiating in the Senate Banking Committee, and it seemed like a deal to move debate on the legislation to the floor was imminent. Once that happened, passage of the bill could either occur very swiftly (which, for the purposes of this study, I did not want) or drag out. Luckily, the bill had not yet passed, and my study was not yet nullified.







Weeks 9 and 10


WEEKS 9 and 10

This was the last week of the term, at least as far as the Advanced Analytical Techniques class was concerned. Professor Wheaton gave the class the final week of the term to complete their individual articles and to prepare their results for publication. By the way, when I say "publication," that does not necessarily mean having the results published in a journal or other media; it means disseminating the results to the outside world. That could mean a classroom briefing, a simulation, a website, a blog, or actually getting the study published in a journal. Anyway, during Week 9, I abandoned my idea of a Google Site and decided on this blog instead. With the Google Site, it would have been difficult to avoid creating a text-heavy website, whereas the blog format is better suited to that kind of content. Also, Professor Wheaton recommended that I use a blogging tool known as Zemanta. Zemanta makes a blog more interactive by analyzing the text and automatically suggesting images, related articles, tags, and links that embed in the blog with no effort from the user. Since I am a novice blogger, this tool came in very handy. In fact, with the exception of the matrix and a few links I inserted myself, all of the tags, links to other articles, videos, etc. were embedded in my blog by Zemanta.

Anyway, back to the project. I created a new matrix (at the top of the page) which clearly indicates that Democrats have the advantage across the board. But not so fast: the Republicans can still achieve a respectable payoff if they choose the right strategy. The highlighted box is the optimal outcome in this game; both sides move simultaneously, with no other rules in effect. Based on this matrix, both sides would prefer to cooperate with each other. However, given public opinion regarding financial reform, Democrats could decide not to cooperate and still achieve a higher payoff relative to the Republicans, but not vice versa. This goes to the notion of the irrational player, which game theory cannot account for. That is, suppose Democrats, having repeatedly tried and failed to court Republican support for health care reform, decided that they were not going to cooperate with Republicans on financial reform under any circumstances. As is evident in the matrix, the Democrats' potential payoffs under "don't cooperate" drop significantly, but compared to the Republicans' potential payoffs (regardless of whether they cooperate or not) they still prevail, even though choosing "don't cooperate" is not rational.

However, it is important to keep in mind that the highlighted cell is the optimal outcome of this game because neither player can improve his individual payoff by unilaterally changing strategies. That is, by cooperating, Democrats guarantee themselves a payoff of either 8 or 9 depending on what the Republicans choose to do, whereas if they choose not to cooperate they can only achieve a 6½ or a 3. Likewise, by cooperating, Republicans guarantee themselves either a 4 or a 5 depending on what the Democrats do, whereas if they choose not to cooperate they can only achieve a 2 or a 1. Obviously, this payoff matrix favors Democrats, but each side maximizes its payoff by cooperating regardless of what the other chooses to do.
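Here is the Weeks 9-10 matrix in code, in case the image doesn't display. The payoff pairs (Dems: 8 or 9 for cooperating, 6.5 or 3 for not; Reps: 4 or 5 for cooperating, 2 or 1 for not) come straight from my description; which payoff lands in which cell is my assumption, chosen so that each side's better number arrives when the opponent cooperates.

```python
# The Weeks 9-10 game, with a brute-force search for cells where neither
# player can improve his payoff by unilaterally changing strategies.

C, D = "cooperate", "defect"

# payoffs[(dem_move, rep_move)] = (dem_payoff, rep_payoff)
payoffs = {
    (C, C): (9, 5),    # the highlighted cell: mutual cooperation
    (C, D): (8, 2),
    (D, C): (6.5, 4),
    (D, D): (3, 1),
}

def pure_nash_equilibria(payoffs):
    """Return every cell from which no unilateral deviation pays."""
    moves = (C, D)
    equilibria = []
    for dem in moves:
        for rep in moves:
            dem_ok = all(payoffs[(dem, rep)][0] >= payoffs[(alt, rep)][0]
                         for alt in moves)
            rep_ok = all(payoffs[(dem, rep)][1] >= payoffs[(dem, alt)][1]
                         for alt in moves)
            if dem_ok and rep_ok:
                equilibria.append((dem, rep))
    return equilibria

equilibria = pure_nash_equilibria(payoffs)
```

Under this cell assignment, mutual cooperation is the only pure-strategy equilibrium, which matches the highlighted box: cooperating is each side's best reply no matter what the other does.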

So, with all of my research and tweaking of the model, I forecast that, in the end, Democrats and Republicans will cooperate with each other on financial reform. The outcome was the same under the first matrix I developed, but the payoffs changed drastically in favor of the Democrats. This points to one of the main issues with game theory: an analyst needs the most current and most accurate information, and has to account for all relevant variables before developing a matrix; otherwise the forecast will be less accurate than it could be, or simply wrong. Either way, it is no help to the decision-maker.


Zero-Sum and Non-Zero Sum Games

[Image: Harry Truman's poker chips, via Wikipedia]

A zero-sum game is a game in which the total payoff is fixed, so one player's gain is another player's loss. An excellent example is a poker game, where the players contribute money to a pot and someone "wins" it after all the bets are tallied and the winning hand is revealed. Strictly speaking, nobody actually "won" new money; the other players simply lost money that one player gained. The total amount of money in the game never changes. The simplest form of zero-sum game consists of two players and two strategies, because a one-player game is not a game and having only one strategy is not really a choice. The only way for a player to win is for the other player to lose; no cooperation is possible. That is, in order to be a true zero-sum game, the expected payoff for one player must equal the expected cost for the other (if I gain $1, you must lose $1), for a sum of zero.
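To make the definition concrete in code, here is a quick check of the zero-sum property using matching pennies, a standard textbook example (my addition, not from the reading): each cell's two payoffs must sum to zero.

```python
# Matching pennies: each player picks heads or tails; player 1 wins $1 if
# the picks match, player 2 wins $1 if they differ. Every cell sums to 0.

matching_pennies = {
    ("heads", "heads"): (1, -1),
    ("heads", "tails"): (-1, 1),
    ("tails", "heads"): (-1, 1),
    ("tails", "tails"): (1, -1),
}

def is_zero_sum(game):
    """True if, in every cell, one player's gain is the other's loss."""
    return all(a + b == 0 for a, b in game.values())
```

A cell like (3, 3), where both players gain, would immediately fail the check, which is exactly what separates zero-sum from non-zero-sum games.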

NON-ZERO SUM GAMES

A non-zero-sum game is one where one player's gain does not necessarily mean the other player's loss; these games are actually more complex because there is usually more than one rational strategy. They are called "non-zero-sum" because the sum of the two players' payoffs does not always equal zero. Furthermore, non-zero-sum games are not forced to be non-cooperative; sometimes cooperation between the players leads to the optimal solution. The best-known example of a non-zero-sum game is the prisoner's dilemma (I'll explain it in a later post). Essentially, each player acts in his own self-interest, but that does not necessarily mean that one player's gain is the other player's loss. How much the prisoners cooperate with each other determines each player's individual strategy. Finally, examples of non-zero-sum games are more prevalent in real-world situations, which makes them more useful to game theorists.


John von Neumann and The Minimax Principle

[Image: John von Neumann, via Wikipedia]

JOHN VON NEUMANN

John von Neumann was a Hungarian-American mathematician who is widely regarded as the father of game theory (although a Frenchman named Émile Borel published on the subject seven years before von Neumann). He was born in Budapest, Hungary in 1903 and possessed an eidetic memory, which allowed him to excel in his studies. Von Neumann's inspiration for developing game theory came from poker, which he played rather unsuccessfully. He quickly realized that poker was not guided by probability alone and that one needed to play against the players, not against the cards. Furthermore, he wanted to formalize the notion of deceiving the other players in the game. It was in his 1928 paper "Theory of Parlor Games" that he first broached the subject of game theory and proved the minimax theorem. In fact, von Neumann is quoted as saying, "As far as I can see, there could be no theory of games on these bases without that theorem...throughout the period in question I thought there was nothing worth publishing until the 'minimax theorem' was proved." Once he proved the theorem, he collaborated on game theory with Oskar Morgenstern, an Austrian economist. In 1944, they published their seminal work, "Theory of Games and Economic Behavior," which is widely considered one of the most important texts of modern economic theory. To illustrate the point: the book's intended audience was originally only economists, but it came to be applied to other subjects such as politics, sociology, and psychology. From that point on, John von Neumann focused much of his work on war and politics. He would go on to serve the United States government in several capacities, as a consultant to the RAND Corporation and as a commissioner of the Atomic Energy Commission under the Eisenhower administration.

John von Neumann would have been considered an extreme hawk by today's standards; he openly advocated preventive war against the Soviet Union. Another famous quote is attributed to him: "If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o'clock, I say why not one o'clock?" Of course, by the mid-1950s, the USSR had amassed a nuclear arsenal large enough to sustain a more than credible deterrent against a first strike by the United States.

Unfortunately, von Neumann was diagnosed with bone cancer in 1955. Amazingly, he continued his work as a consultant even while receiving debilitating chemotherapy. In fact, he moved his office to Walter Reed Army Medical Center and received frequent visits from the Secretary of Defense and his colleagues in the U.S. Air Force. John von Neumann succumbed to his cancer on February 8, 1957 and would be remembered as one of the greatest minds of the twentieth century.

His other significant accomplishments include his contributions to the development of the digital computer, basing computer calculations on binary numbers, and having computers store programs in memory in coded form instead of on punch cards.

THE MINIMAX PRINCIPLE

To quote from William Poundstone, "the minimax theorem proves that every finite, two-person, zero-sum game has a rational solution in the form of a pure or mixed strategy." In other words, when there is a precisely defined conflict between two people whose interests are completely opposite from one another, there is always a rational solution. Essentially, a player is trying to minimize his potential loss while maximizing his potential gain. The solution is rational because each player cannot expect to do any better given the nature of the conflict. The principle is explained using an example of two kids and a cake.

The first kid cuts the cake into two slices and the second kid decides which slice he wants. The cutter expects to get the smaller piece because the chooser will select the larger piece. By cutting the cake as evenly as possible, the cutter guarantees himself almost half the cake. But, if he cuts the cake unevenly, he knows he will get the much smaller piece. Therefore, in order for the cutter to minimize his opponent’s maximum payoff, he will cut the cake as evenly as possible. This is a very basic example of the minimax principle, however, its proof demonstrated that two rational players, whose interests are completely opposed, can agree on a rational course of action confident that the opponent will follow suit (by cutting the cake as evenly as possible, the cutter can be sure that the opponent will leave about half the cake).
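The cake example can be written as a small matrix game. The entries below are the cutter's share of the cake given each cut and each choice by the chooser (the chooser's share is one minus that, so their interests are fully opposed); the 0.49/0.51 split is my stand-in for cutting "as evenly as possible," and the 0.40/0.60 split stands in for an uneven cut.

```python
# The cutter's share of the cake under each combination of cut and pick.
# Rows: the cutter's strategies. Columns: which piece the chooser takes.

cutter_share = {
    "even cut":   {"chooser takes piece 1": 0.49, "chooser takes piece 2": 0.51},
    "uneven cut": {"chooser takes piece 1": 0.40, "chooser takes piece 2": 0.60},
}

def maximin_row(game):
    """The row maximizing the row player's worst-case payoff: the cutter
    assumes the chooser will always take the bigger piece."""
    return max(game, key=lambda row: min(game[row].values()))
```

The even cut's worst case (0.49) beats the uneven cut's worst case (0.40), so minimizing the opponent's maximum is the same as cutting evenly, just as the principle says.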

Von Neumann thought that the minimax principle could be applied to n-person (two or more) games as well. Take a three-person game, for example. The preferences of Player 1 and Player 2 are completely opposed to each other, but Player 1 and Player 3 share similar (or the same) preferences. In that case, Players 1 and 3 could form a coalition and defeat Player 2. By allying with each other, Players 1 and 3 essentially constitute one player, and Player 2 is the other. Now you've got two players with completely opposing preferences (sound familiar?). It doesn't have to stop there; using the minimax principle, you could develop n-person games ad infinitum, discover all the possible winning coalitions, and reduce them to zero-sum games. However, one problem with this line of reasoning is that it assumes rational actors would determine the results of every possible coalition and join the one with the maximum payoff. What about games where cooperation is outlawed? As I'll discuss in a later post, John Nash discovered a way to arrive at an equilibrium even when players cannot cooperate with each other.

I researched what the actual minimax proof looks like and found one from Brigham Young University to be the least challenging (I still have trouble understanding it, though). If you're mathematically gifted, I suggest you read it, because the theorem is essentially the foundational principle of game theory.









The Nash Equilibrium


From gametheory.net, the definition of a Nash equilibrium: "A Nash equilibrium, named after John Nash, is a set of strategies, one for each player, such that no player has incentive to unilaterally change her action. Players are in equilibrium if a change in strategies by any one of them would lead that player to earn less than if she remained with her current strategy. For games in which players randomize (mixed strategies), the expected or average payoff must be at least as large as that obtainable by any other strategy."

Whereas John von Neumann focused on cooperative games, John Nash studied noncooperative games. The definition above is more easily understood this way: Player 1 is satisfied with his decision given that he knows what Player 2's decision is; neither player has any regrets. However, that does not necessarily mean that each player earned the maximum possible payoff; it just means that each player is willing to live with the outcome that was achieved. Actually, in most cases, if one player realizes his maximum payoff, the outcome probably is not the rational one (the prisoner's dilemma is an excellent example of this). The reason, Nash argued, is that if either player has a reason to change strategy (and would if given the chance), then that outcome is unstable and irrational. And it makes sense: why would you let your opponent reach his maximum payoff while you don't? Obviously, you wouldn't, and Nash proved it. This built upon von Neumann's minimax principle, under which the solution to a zero-sum game is its equilibrium point; Nash proved that non-zero-sum games have equilibrium points as well.

Prisoner's Dilemma


The prisoner's dilemma is a central problem in game theory that shows how two people can fail to cooperate with each other even when it is in each person's best interest to do so. Suppose the police arrest two individuals suspected of committing a crime. The police do not have enough evidence to convict both criminals, so they separate the prisoners into two rooms and offer each the same deal. If one confesses while the other remains silent, the confessor goes free and the other receives a ten-year jail sentence. If both remain silent, they each go to jail for one year on a lesser charge. If both confess, they each go to jail for five years. In this situation, it is in both prisoners' interests to cooperate with each other (remain silent), and both want to avoid the ten-year sentence. However, since neither prisoner knows what the other will choose, it would be irrational to cooperate unilaterally (remain silent and possibly go to jail for ten years) because of the chance that the other will defect (confess and go free). Since the goal is to avoid the maximum sentence, each prisoner should choose to confess, because he will then either go free or go to jail for five years, each of which is preferable to going to jail for ten years. The prisoner's dilemma is important because it shows that perfectly rational actors cannot trust the other players in the game, regardless of the greater good they could achieve by cooperating.

In this case, the Nash equilibrium is mutual defection because neither prisoner can improve his position by cooperating with the police. In fact, in a one-shot version of the prisoner's dilemma, you should always defect. But, what if you played an iterated (many times) version of the game?
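The "always defect" logic is easy to verify mechanically. Below is a sketch that encodes the jail terms from the scenario above as negative payoffs (less jail is better) and confirms that confessing strictly dominates staying silent; this is just the textbook argument in code, not anyone's official model.

```python
# Prisoner's dilemma payoffs from the scenario above, as negative
# years in jail (higher is better). Tuples are
# (payoff to prisoner 1, payoff to prisoner 2);
# the first move in each key is prisoner 1's.
SILENT, CONFESS = "silent", "confess"
years = {
    (SILENT, SILENT): (-1, -1),    # both silent: one year each
    (SILENT, CONFESS): (-10, 0),   # 1 silent, 2 confesses: ten years vs. free
    (CONFESS, SILENT): (0, -10),   # 1 confesses, 2 silent: free vs. ten years
    (CONFESS, CONFESS): (-5, -5),  # both confess: five years each
}

# Confessing strictly dominates staying silent: whatever the other
# prisoner does, prisoner 1 serves less time by confessing.
for other in (SILENT, CONFESS):
    assert years[(CONFESS, other)][0] > years[(SILENT, other)][0]
print("confessing is a dominant strategy for prisoner 1")
```

By symmetry the same holds for prisoner 2, which is why mutual confession (defection) is the Nash equilibrium even though both players would serve less time under mutual silence.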

Two researchers, RAND's Merrill Flood and Melvin Dresher, used two friends as guinea pigs and had them play the game one hundred times. Each player was shown the payoff table and had no advance knowledge of what his opponent would do, but they gained information as each round was played (each player's payoff told him what the other had chosen in the previous iteration). As stated earlier, mutual defection is the Nash equilibrium in a prisoner's dilemma. However, when Flood and Dresher ran the experiment, player 1 chose the non-equilibrium strategy (cooperation) 68 times and player 2 chose it 78 times! Both players kept a log of comments after each round, and the logs show a struggle to cooperate. The payoffs were skewed in favor of player 1, which meant player 2 stood to gain more by unilateral defection. However, each time he defected, player 1 retaliated by defecting on the next round. Surprisingly, these "punishments" occurred infrequently, and both players returned to mutual cooperation. Flood and Dresher presented the results to Nash, who dismissed the experiment because there was too much interaction between the players.

Let me go back to the difference between a one-shot version and an iterated version of this game. As I said earlier, it's best to defect if you're only playing the game one time, but over the long run both players gain even more if they cooperate. There is a problem with that reasoning, too, however: a concept known as backward induction. Say you and a friend were to play this game one hundred times, like Flood and Dresher's guinea pigs. You would probably quickly understand that mutual cooperation is best over the long run. But the hundredth game is effectively a one-shot prisoner's dilemma, and at the risk of being redundant, in a one-shot version you should always defect. It is safe to assume your friend realizes that as well and says to himself, "Why should I be the only one to get the punishment payoff in the last game? I'm going to start defecting in game 99." Wait a minute...you're smart too, and you realize he could do exactly that, so you start defecting in game 98 because you don't want to get the punishment payoff either. See the point? Both players can logically deduce that the other will defect at some point because the game cannot go on indefinitely. Therefore, both players defect from the first game, even though they could maximize their individual payoffs by mutually cooperating.
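To see the long-run value of cooperation in numbers (backward induction aside), here is a small simulation of the iterated game. The payoff values and the two strategies are standard illustrative choices on my part, not the skewed payoffs Flood and Dresher actually used.

```python
# Iterated prisoner's dilemma with common illustrative payoffs:
# both cooperate -> 3 each, both defect -> 1 each,
# lone defector -> 5, lone cooperator -> 0.
PAYOFF = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move -- the
    # retaliate-then-forgive pattern Flood and Dresher's player 1 showed.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    # The Nash-equilibrium strategy, played blindly every round.
    return "D"

def play(strategy1, strategy2, rounds=100):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1 = strategy1(h1, h2)
        m2 = strategy2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

print(play(tit_for_tat, tit_for_tat))      # mutual cooperation: (300, 300)
print(play(always_defect, always_defect))  # mutual defection: (100, 100)
```

Two cooperators earn triple what two defectors do over a hundred rounds, which is exactly why the players in the RAND experiment kept drifting back to cooperation despite the equilibrium logic.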

This leads to two different paradoxes for the prisoner's dilemma. First, in a one-shot version, the rational choice is for both players to defect, even though their individual payoffs would be higher if they cooperated. Second, in an iterated version, the backward induction paradox comes into play: the rational player gets stuck with the punishment payoff every time, while the irrational player ends up significantly ahead.

Nobody has been able to solve the prisoner's dilemma and I doubt anyone will be able to in the near future.

Chicken


We have all heard of the game of chicken; who would've thought that game theory was at play?! Here's the scenario: two drivers are speeding toward each other on a collision course and will probably die if they crash, so one of them must swerve to avoid that outcome, but by doing so he risks being seen as a coward by his peers. Essentially, both drivers want to avoid the "coward" label, but they arrive at the worst possible outcome (death) if neither swerves. This is different from the prisoner's dilemma, where you should (theoretically) defect every time, regardless of what the other player does. It's not that simple in chicken: what if both players decide to defect (not swerve)? They both die, and that's the worst possible outcome.

In this game, each player has a great interest in knowing what his opponent plans to do; each wants to do the opposite of what the other player does. If you knew with absolute certainty that the other driver was going to drive straight, you would swerve (come on, we all would) because you would rather live to play other games than die to prove a point. The opposite is true as well; if you knew the other driver was going to swerve no matter what, you would drive straight and be the hero (again, we all would). Therefore, there are two Nash equilibria in this game [see the above payoffs (1,5) and (5,1)]. This is not an ideal situation, because each player is hoping the other swerves so he can drive straight. But you cannot argue that each driver acted irrationally if neither of them swerved; sure, they achieved the worst possible outcome, but they did not know in advance what the other driver was going to do. If you swerve and the other driver goes straight, you at least stay alive and achieve some payoff rather than none. The same holds true for your opponent, so why don't both of you take a chance and drive straight?

This is an example of where Nash's equilibrium theory falls short. But if you were forced to adopt a pure strategy, you should always swerve. Not only would you stay alive, but swerving has the maximum minimum payoff; that is, the worst you could do by swerving is a score of 1, whereas the worst you could do by driving straight is a score of 0.
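Both claims, the two equilibria and the maximin argument for swerving, can be checked with the same unilateral-deviation test used for any game. The (1,5), (5,1), and (0,0) payoffs come from the discussion above; the (3,3) payoff for mutual swerving is an assumed value I chose to be consistent with that ordering, since the post doesn't state it.

```python
# Chicken payoffs: (1,5) and (5,1) are the mismatched outcomes from
# the post, mutual "straight" is (0,0). The mutual-swerve payoff is
# not given, so (3,3) is an ASSUMED value consistent with the ordering.
SWERVE, STRAIGHT = "swerve", "straight"
payoffs = {
    (SWERVE, SWERVE): (3, 3),
    (SWERVE, STRAIGHT): (1, 5),
    (STRAIGHT, SWERVE): (5, 1),
    (STRAIGHT, STRAIGHT): (0, 0),
}
moves = [SWERVE, STRAIGHT]

# Pure-strategy Nash equilibria: outcomes where neither driver can
# improve by unilaterally changing his move.
equilibria = [
    (r, c) for r in moves for c in moves
    if all(payoffs[(alt, c)][0] <= payoffs[(r, c)][0] for alt in moves)
    and all(payoffs[(r, alt)][1] <= payoffs[(r, c)][1] for alt in moves)
]
print(equilibria)  # the two mismatched outcomes

# Maximin: swerving guarantees at least 1, driving straight at least 0,
# so the cautious pure strategy is to swerve.
maximin = max(moves, key=lambda m: min(payoffs[(m, c)][0] for c in moves))
print(maximin)
```

Unlike the prisoner's dilemma, neither move dominates here: the best reply to "straight" is "swerve" and vice versa, which is exactly why the two equilibria sit on the mismatched outcomes.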

An excellent real-world example of chicken is the Cuban Missile Crisis of October 1962. If you're reading this blog, I assume you already know what that was, so I won't detail it here. Suffice it to say, the Soviet Union swerved while the United States kept driving straight. Although the Soviets lost face following the incident, they averted the worst possible outcome (war, possibly nuclear). The United States came quite close to swerving as well: the Soviets offered a deal whereby the U.S. would dismantle its missiles in Turkey in exchange for the Soviets doing the same in Cuba (this did eventually happen, albeit six months later). Had the U.S. accepted the quid pro quo, that would have constituted mutual swerving, with neither side emerging "victorious."


The Predictioneer's Game and Prisoner's Dilemma (the book)

This post is dedicated to reviewing my two main sources of information for my study of game theory: Prisoner's Dilemma by William Poundstone and The Predictioneer's Game by Bruce Bueno de Mesquita (BDM).

PRISONER'S DILEMMA

Let me begin by saying that if you're even slightly interested in learning about game theory, you need to begin by reading this book. Game theory is not a subject one can just delve into without any prior knowledge, because it is quite mathematically complex. I learned this lesson the difficult way: when I first started studying game theory, I used the Internet, just like most of us would. Unfortunately, most of the information online falls into two extremes: either it is too basic to bring someone to even a novice level of understanding, or it is so complex that it assumes you already know a significant amount of game theory.

Prisoner's Dilemma is great for beginners because it isn't just another mathematical book about game theory. Sure, some math is involved; it has to be to illustrate the concepts. But at least the examples are all user-friendly. It also serves as an excellent biography of John von Neumann and a brief history of nuclear weapons and the Cold War; in fact, most of the applications Poundstone uses come from the Cold War. For a book discussing one of the most complex theories in mathematics, Prisoner's Dilemma is surprisingly easy to read and, dare I say, a page-turner!

In addition to what you'd expect from a work of this caliber, Prisoner's Dilemma also boosted my confidence that I could understand game theory, even if only at an introductory level. When I first started this process, I really underestimated the complexity of the subject and seriously doubted whether I could handle it. I'm pretty ambitious, and I thought I could teach myself the mathematical intricacies of game theory; when I (quickly) realized that wasn't realistic, Professor Wheaton recommended this book in class, and reading it allowed me to refocus my goal. It helped me recognize that most people probably don't know much about game theory beyond the fact that it exists, and maybe what a zero-sum game or the prisoner's dilemma is. I can say with absolute certainty that without this book, none of what you're reading on this blog would have been possible for me to write. Prisoner's Dilemma discusses most of the main concepts of game theory in a way that makes the reader want to build his or her knowledge to the point of understanding complex proofs, at least in my opinion. Bottom line: if you want to learn about game theory, you MUST read this book.

THE PREDICTIONEER'S GAME

Bruce Bueno de Mesquita is one of the world's leading game theorists, if not THE leading game theorist today. Although he's written several texts, this one is aimed at the average person who probably has not had extensive exposure to game theory. I like to compare it to Freakonomics, though it's not quite as good. Unlike Prisoner's Dilemma, BDM's book is more about the application of game theory to real-world situations than an introduction to the theory itself illustrated with historical examples. In addition, he acts like a salesman on behalf of game theory; that is, he argues that game theory can be used not only to forecast the future but, if used properly, to actually shape future events. In fact, I developed the model for my personal application using BDM's recommendations. Unfortunately, his constant references to his model never lead to him revealing what that model actually is; without his algorithms, it would be nearly impossible for a game theory novice like myself to recreate it.

Although the book is quite an interesting read, and accessible to the average reader, BDM's ego shines brightly. He never misses a chance to congratulate himself on his successful predictions, as is clearly evident in his discussion of the Israeli-Palestinian conflict (he predicted the 1993 accord in 1991). Moreover, even though he devotes an entire chapter to his failure to predict the outcome of the 1994 health care debate, he blames it on not having the most accurate information. He even claims he would have been right if Illinois Rep. Dan Rostenkowski had never been indicted on federal corruption charges! I'm not sure how he can say that, since that's not what happened. That's one of the biggest criticisms of game theory, by the way: the notion that if the expected outcome never comes to fruition, the flaw lies in how the model was used, not in the model itself (although I'll give BDM credit for admitting he needed to change his model to account for unpredictable events).

Up until now I've been pretty harsh in my critique of The Predictioneer's Game, but in reality, for all the issues I had with it, the book came in pretty handy. Like I said earlier, without BDM's recommendations I would not have known where to begin building my model (see Week 5's post for the recommendations). And even though I could not replicate his model, because I did not have access to his algorithm, his recommendations showed me that I did not need a complex equation to build one. That is the main lesson I learned from reading this book: to apply game theory, you do not necessarily need a complex algorithm to arrive at the rational solution. Of course, having those complex algorithms would allow an analyst to express more confidence in a forecast, but a simple model is better than nothing. Although The Predictioneer's Game is not quite the necessity Prisoner's Dilemma is, I would recommend it to anyone with an interest in game theory. It is an easy and interesting read and helps lessen the intimidation factor of game theory.