Game Theory Notes
These are also my CE reading notes
Chapter 1: Nomenclature and Notations
Needs Refinement
Probability
Probability Space
A probability space is a mathematical construct that formalizes the concept of a random experiment. It consists of three elements:
Sample Space ($\Omega$): This is the set of all possible outcomes of the random experiment. For example, if you're rolling a fair six-sided die, the sample space would be $\Omega = \{1, 2, 3, 4, 5, 6\}$.
Event Space ($\mathcal{F}$): This is a collection of subsets of the sample space $\Omega$, called events, such that certain rules are satisfied. These rules usually include that the sample space and the empty set are in $\mathcal{F}$, and that the event space is closed under complements (if $E$ is in $\mathcal{F}$, then so is its complement $E^c$) and countable unions (the union of countably many events in $\mathcal{F}$ is also in $\mathcal{F}$).
Probability Measure ($P$): This is a function $P : \mathcal{F} \to [0, 1]$ that assigns a probability to each event in the event space. The probability measure satisfies certain properties, such as:
- Non-negativity: $P(E) \ge 0$ for all events $E \in \mathcal{F}$.
- Normalization: $P(\Omega) = 1$, indicating that the probability of the entire sample space is 1.
- Countable additivity: If $E_1, E_2, \dots$ are pairwise disjoint events (i.e., they have no outcomes in common), then $P\left(\bigcup_{i=1}^{\infty} E_i\right) = \sum_{i=1}^{\infty} P(E_i)$.
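As a sanity check, these axioms can be verified numerically on a finite probability space. The snippet below is my own illustration (not notation from the text), modeling a fair six-sided die with the uniform measure:

```python
from fractions import Fraction

# Finite probability space: a fair six-sided die with the uniform measure.
sample_space = frozenset(range(1, 7))

def P(event):
    """Uniform probability measure P(E) = |E| / |Omega|."""
    return Fraction(len(event), len(sample_space))

# Non-negativity and normalization.
assert P(frozenset()) == 0
assert P(sample_space) == 1

# Additivity on disjoint events (the finite case of countable additivity).
evens, odds = frozenset({2, 4, 6}), frozenset({1, 3, 5})
assert evens & odds == frozenset()
assert P(evens | odds) == P(evens) + P(odds)

print(P(evens))  # → 1/2
```

Using `Fraction` keeps the probabilities exact, so the additivity check is an equality rather than a floating-point approximation.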
Set Theory
- $\times_{j \in S : P(j)} A_j$ is the Cartesian product taken over all elements $j$ that satisfy the statement $P(j)$ and lie in the set $S$
- $A \times B$ is the Cartesian product of the sets $A$ and $B$
- $\mathbb{N}$ is the set of natural numbers
Game Theory Notation
- Player $-i$ is the opponent of player $i$ (with more than two players, $-i$ denotes all players other than $i$)
Chapter 2: Nash Equilibrium
P28-
2.1 Strategic Games
Definition 1.0
A Strategic Game is a "model of interactive decision making in which each decision-maker chooses his plan of action once and for all, and these choices are made simultaneously." It consists of
- A finite set $N$ (the set of players)
- a nonempty set $A_i$ for each player $i \in N$ (the set of actions available to player $i$)
- a preference relation $\succsim_i$ on $A = \times_{j \in N} A_j$ for each player $i \in N$ (the preference relation. This just compares two action profiles for player $i$ and tells whether the player prefers one over the other.)
This model's abstraction makes it possible to apply it to many situations. For example, a player may be an "individual human being or any other decision making entity like a government." The model places absolutely no restriction on what the players prefer or avoid.
This model is the basis for models of situations ranging from market dynamics to political simulation.
Sometimes the players' preferences are over the consequences rather than the actions. In this case, we extend our definition:
Definition 1.1
A strategic game consists of
- ...
- ...
- ...
- A nonempty set $C$ (the set of consequences)
- A function $g : A \to C$ (maps action profiles to consequences)
- a preference relation $\succsim_i$ on $A$ such that $a \succsim_i b$ if and only if $g(a) \succsim_i^* g(b)$ (where $(\succsim_i^*)$ is a profile of preference relations over $C$)
In other cases, the game may be affected by an exogenous random variable whose realization is not known to the players before an action is taken. In this case, an additional modification can be made:
Definition 1.2
A strategic game consists of
...
...
...
...
A probability space $\Omega$ (the random variable)
a preference relation $\succsim_i$ on $A$ such that $a \succsim_i b$ if and only if the lottery over $C$ induced by $g(a, \cdot)$ is at least as good according to $\succsim_i^*$ as the lottery induced by $g(b, \cdot)$ (where $(\succsim_i^*)$ is a profile of preference relations over lotteries on $C$)
A function $g : A \times \Omega \to C$ (maps action profiles and realizations of the random variable to consequences)
2.2 Nash Equilibrium
A commonly used solution concept in game theory is the Nash Equilibrium.
Definition 2.0
A Nash Equilibrium of a strategic game $\langle N, (A_i), (\succsim_i) \rangle$ is a profile $a^* \in A$ of actions such that for every player $i \in N$ we have
$(a^*_{-i}, a^*_i) \succsim_i (a^*_{-i}, a_i)$ for all $a_i \in A_i$
Which just basically means that, given the other players' actions $a^*_{-i}$, the action $a^*_i$ is at least as good as any alternative for player $i$.
Therefore, in a Nash equilibrium all players are in a situation where no unilateral deviation can make any one of them better off.
There is also another, equivalent definition:
Definition 2.1
For a strategic game $\langle N, (A_i), (\succsim_i) \rangle$, define $B_i(a_{-i})$ to be the set of player $i$'s best actions given $a_{-i}$:
$B_i(a_{-i}) = \{a_i \in A_i : (a_{-i}, a_i) \succsim_i (a_{-i}, a'_i) \text{ for all } a'_i \in A_i\}$
We call the set-valued function $B_i$ the best-response function of player $i$.
Then, a Nash equilibrium is a profile $a^*$ of actions for which
$a^*_i \in B_i(a^*_{-i})$ for all $i \in N$
This definition just says that at equilibrium every player's action is a best response, as defined by $B_i$, to the other players' actions.
2.3 Examples
Warning
All cases below contain only two players with the same action sets.
These following classical games present a variety of strategic situations. Many of these situations, for simplicity, are symmetric games.
Definition 3.0
A Symmetric Game is a strategic game with player set $N = \{1, 2\}$ such that $A_1 = A_2$ and it holds that $(a_1, a_2) \succsim_1 (b_1, b_2)$ if and only if $(a_2, a_1) \succsim_2 (b_2, b_1)$. It just basically means that both players have the same set of actions and the same preferences over outcomes, with the roles swapped.
Bach or Stravinsky? (BoS)
Scenario
Two people wish to go out together to a concert of music by either Bach or Stravinsky. Their main concern is to go out together, but one person prefers Bach and the other person prefers Stravinsky.
We can rephrase this situation into:
- Player I wants to go to a concert of music by Bach
- Player II wants to go to a concert of music by Stravinsky
- Player I and II wants to go together
There is a common way to express player I's and II's preferences by laying out a table like this, where $u_1$ and $u_2$ are payoff functions representing the preferences:
I / II | $b_1$ | $b_2$ | ... | $b_n$ |
---|---|---|---|---|
$a_1$ | $(u_1(a_1, b_1), u_2(a_1, b_1))$ | $(u_1(a_1, b_2), u_2(a_1, b_2))$ | ... | $(u_1(a_1, b_n), u_2(a_1, b_n))$ |
$a_2$ | $(u_1(a_2, b_1), u_2(a_2, b_1))$ | $(u_1(a_2, b_2), u_2(a_2, b_2))$ | ... | $(u_1(a_2, b_n), u_2(a_2, b_n))$ |
... | ... | ... | ... | ... |
$a_m$ | $(u_1(a_m, b_1), u_2(a_m, b_1))$ | $(u_1(a_m, b_2), u_2(a_m, b_2))$ | ... | $(u_1(a_m, b_n), u_2(a_m, b_n))$ |
We can try to define payoff functions $u_1$ and $u_2$ representing the two players' preferences and construct a table like the following:
I / II | Bach | Stravinsky |
---|---|---|
Bach | (2, 1) | (0, 0) |
Stravinsky | (0, 0) | (1, 2) |
It's clear now that two Nash equilibria exist, at (Bach, Bach) and (Stravinsky, Stravinsky).
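These equilibria can be confirmed by brute force. The sketch below is my own (the function name `pure_nash` and the dictionary encoding of the table are assumptions, not notation from the text): it checks every action profile for profitable unilateral deviations.

```python
def pure_nash(actions1, actions2, u1, u2):
    """Return all pure-strategy Nash equilibria of a two-player game:
    profiles where neither player gains by deviating unilaterally."""
    equilibria = []
    for a in actions1:
        for b in actions2:
            best1 = all(u1[(a, b)] >= u1[(x, b)] for x in actions1)
            best2 = all(u2[(a, b)] >= u2[(a, y)] for y in actions2)
            if best1 and best2:
                equilibria.append((a, b))
    return equilibria

# Payoffs from the BoS table above.
acts = ["Bach", "Stravinsky"]
u1 = {("Bach", "Bach"): 2, ("Bach", "Stravinsky"): 0,
      ("Stravinsky", "Bach"): 0, ("Stravinsky", "Stravinsky"): 1}
u2 = {("Bach", "Bach"): 1, ("Bach", "Stravinsky"): 0,
      ("Stravinsky", "Bach"): 0, ("Stravinsky", "Stravinsky"): 2}

print(pure_nash(acts, acts, u1, u2))
# → [('Bach', 'Bach'), ('Stravinsky', 'Stravinsky')]
```

The same helper works for any finite two-player game once the payoffs are written as dictionaries keyed by action profiles.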
Mozart or Mahler? (MoM)
This is a type of situation where a Nash equilibrium doesn't yield what we want.
We could have a situation where two concerts, of music by Mozart and by Mahler, are available, with both players wanting to go to the concert by Mozart. By constructing the same table we get:
I / II | Mozart | Mahler |
---|---|---|
Mozart | (2, 2) | (0, 0) |
Mahler | (0, 0) | (1, 1) |
In this case the two equilibria are (Mozart, Mozart) and (Mahler, Mahler). But the payoffs of the inferior equilibrium (Mahler, Mahler) are definitely smaller than those of (Mozart, Mozart), so we expect both players to go to the Mozart concert.
The Prisoner's Dilemma
A very interesting and classical example.
Scenario
Two suspects in a crime are put into separate cells. If they both confess, each will be sentenced to three years in prison. If only one of them confesses, he will be freed and used as a witness against the other, who will receive a sentence of four years. If neither confesses, they will both be convicted of a minor offense and spend one year in prison.
It's pretty easy to construct the payoffs. We can directly use the length of the sentence, and it's still a valid representation, only now each player wants his number to be as low as possible.
I/II | Confess | Don't Confess |
---|---|---|
Confess | (3, 3) | (0, 4) |
Don't Confess | (4, 0) | (1, 1) |
It's pretty obvious that for either player, confessing is better no matter what the other does: it yields 3 years rather than 4 if the other confesses, and 0 years rather than 1 if the other doesn't. Because in a Nash equilibrium each player optimizes only against the other's action, they both tend to confess. This gives the game the unique equilibrium (Confess, Confess).
Warning
Note that in a Nash equilibrium, we don't care about the players' combined payoff. It's only when there are multiple equilibria that we consider other factors such as the common profit in a game. This is why the prisoners don't both choose not to confess even though it gives the best result for the pair of them.
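The dominance argument above can be checked directly. The encoding below is my own ("C" for Confess, "D" for Don't Confess); the dictionary holds the sentence lengths from the table:

```python
# Sentence lengths (player I, player II); lower is better for each player.
sentence = {("C", "C"): (3, 3), ("C", "D"): (0, 4),
            ("D", "C"): (4, 0), ("D", "D"): (1, 1)}

# Confessing strictly dominates for player I: whatever player II does,
# player I's sentence is strictly shorter when he confesses.
for other in ("C", "D"):
    assert sentence[("C", other)][0] < sentence[("D", other)][0]

# By symmetry the same holds for player II, so (C, C) is the unique
# Nash equilibrium even though (D, D) is better for both players.
print("confess strictly dominates for player I")
```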
Dove or Hawk? (DoH)
There is this sort of dilemma in socializing, too.
Scenario
Two animals are fighting over some prey. Each can behave like a dove or like a hawk. The best outcome for each animal is that in which it acts like a hawk while the other acts like a dove; the worst outcome is that in which both animals act like hawks.
I/II | Dove | Hawk |
---|---|---|
Dove | (3, 3) | (1, 4) |
Hawk | (4, 1) | (0, 0) |
So the two equilibria are (Dove, Hawk) and (Hawk, Dove), when the two animals act differently.
Matching Pennies
This is a situation where a pure-strategy Nash equilibrium doesn't exist.
Scenario
Each of two people chooses either Head or Tail. If the choices differ, person 1 pays person 2 a dollar; if they are the same, person 2 pays person 1 a dollar. Each person cares only about the amount of money that he receives.
In this situation, the two players' interests are diametrically opposed, and there is no hope of collaboration. Such a game is called strictly competitive.
I/II | Head | Tail |
---|---|---|
Head | (1, -1) | (-1, 1) |
Tail | (-1, 1) | (1, -1) |
There is no pure-strategy Nash equilibrium here.
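A brute-force check over all action profiles confirms this. The sketch below uses my own encoding; the payoffs follow the scenario, where matching choices pay person 1 a dollar:

```python
def pure_nash(u1, u2, acts):
    """All pure-strategy profiles where neither player gains by deviating."""
    return [(a, b) for a in acts for b in acts
            if all(u1[(a, b)] >= u1[(x, b)] for x in acts)
            and all(u2[(a, b)] >= u2[(a, y)] for y in acts)]

acts = ["Head", "Tail"]
# Matching choices pay person 1 a dollar; differing choices pay person 2.
u1 = {("Head", "Head"): 1, ("Head", "Tail"): -1,
      ("Tail", "Head"): -1, ("Tail", "Tail"): 1}
u2 = {k: -v for k, v in u1.items()}  # strictly competitive: u2 = -u1

print(pure_nash(u1, u2, acts))  # → []
```

At every profile the losing player can flip his coin and win instead, so no profile survives the deviation check.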
2.4 The Existence of a Nash Equilibrium
We can prove the existence of a Nash equilibrium by using Kakutani's fixed point theorem.
Theorem 4.0
Let $X$ be a compact convex subset of $\mathbb{R}^n$ and let $f : X \to 2^X$ be a set-valued function for which
- for all $x \in X$ the set $f(x)$ is nonempty and convex and
- the graph of $f$ is closed (i.e. for all sequences $x_n \to x$ and $y_n \to y$ with $y_n \in f(x_n)$ for all $n$, we have $y \in f(x)$)
Then there exists $x^* \in X$ with $x^* \in f(x^*)$. Which basically means that some point $x^*$ gets mapped to a set containing itself after the correspondence $f$ is applied.
Theorem 4.1
Brouwer's fixed point theorem states that for any convex and compact $X$ and a continuous function $f : X \to X$ there exists a fixed point $x^*$ so that $f(x^*) = x^*$. (Brouwer's theorem is about ordinary functions; Kakutani's theorem generalizes it to correspondences.)
And we can define a preference relation $\succsim_i$ as quasi-concave on $A_i$ if for every $a^* \in A$
the set $\{a_i \in A_i : (a^*_{-i}, a_i) \succsim_i a^*\}$ is convex.
Proposition 4.2
The strategic game $\langle N, (A_i), (\succsim_i) \rangle$ has a Nash equilibrium if
- for all $i \in N$, the set $A_i$ is a nonempty, convex, and compact subset of a Euclidean space
- for all $i \in N$, the relation $\succsim_i$ is
- Continuous
- Quasi-concave on $A_i$
Proof. Define $B : A \to 2^A$ by $B(a) = \times_{i \in N} B_i(a_{-i})$.
Then the map $B$
- is a set-valued function whose values are convex, since each $\succsim_i$ is quasi-concave on $A_i$
- has nonempty values, since each $\succsim_i$ is continuous and each $A_i$ is compact
$B$ also has a closed graph since each $\succsim_i$ is continuous. Then by Kakutani's theorem $B$ has a fixed point $a^*$ with $a^* \in B(a^*)$. By definition, this map takes in an action profile and gives the profile of best responses to it. Therefore, a fixed point is a profile in which every player's action is already a best response to the others' actions, which, by Definition 2.1, is a Nash equilibrium of the game.
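The fixed-point idea can be watched numerically in a game with continuous action sets. The example below is an assumption of mine, not from the text: a Cournot duopoly in which firm $i$'s payoff is $q_i (1 - q_i - q_j)$, so its best response to the rival's quantity is $(1 - q_j)/2$. Iterating the joint best-response map converges to its fixed point $q_1 = q_2 = 1/3$, which is the Nash equilibrium:

```python
# Best-response iteration in a Cournot duopoly (an assumed example, not
# from the text): firm i's payoff is q_i * (1 - q_i - q_j), whose best
# response is BR(q_j) = (1 - q_j) / 2.  A fixed point of the joint
# best-response map is a Nash equilibrium, here q1 = q2 = 1/3.
def best_response(q_other):
    return (1.0 - q_other) / 2.0

q1, q2 = 0.9, 0.1
for _ in range(60):
    q1, q2 = best_response(q2), best_response(q1)

print(round(q1, 6), round(q2, 6))  # → 0.333333 0.333333
```

Convergence here relies on the best-response map being a contraction, which is a stronger property than Kakutani's theorem needs; the iteration is just a way to see the fixed point, not part of the existence proof.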
The proposition is more commonly known as Nash's theorem.
Theorem 4.4
Nash's theorem: for any game $\langle N, (A_i), (\succsim_i) \rangle$, if each action set $A_i$ is a nonempty, convex, and compact subset of a Euclidean space and each relation $\succsim_i$ is continuous and quasi-concave on $A_i$, then there exists a Nash equilibrium for the game.
Example
Consider a two-person strategic game that satisfies the conditions of Proposition 4.2. Let $N = \{1, 2\}$ and assume that the game is symmetric: $A_1 = A_2$ and $(a_1, a_2) \succsim_1 (b_1, b_2)$ if and only if $(a_2, a_1) \succsim_2 (b_2, b_1)$ for all $a \in A$ and $b \in A$. Then prove, using Kakutani's theorem, that there is an action $a^*_1$ such that $(a^*_1, a^*_1)$ is a Nash equilibrium of the game.
Proof. Now consider the map $f : A_1 \to 2^{A_1}$ defined by $f(a) = B_1(a)$, the set of player 1's best responses when player 2 plays $a$. Because the game satisfies the conditions of Proposition 4.2, $f(a)$ is nonempty and convex for every $a$ and $f$ has a closed graph, so by Kakutani's theorem $f$ has a fixed point $a^*_1$ with $a^*_1 \in B_1(a^*_1)$. By the symmetry condition, $a^*_1 \in B_2(a^*_1)$ as well, so $(a^*_1, a^*_1)$ is a Nash equilibrium, and it is symmetric.
2.5 Strictly Competitive Games
While it is pretty hard to find Nash equilibria of an arbitrary strategic game, there are a few exceptions.
Definition 5.0
A strategic game $\langle \{1, 2\}, (A_i), (\succsim_i) \rangle$ is strictly competitive when we have $a \succsim_1 b$ if and only if $b \succsim_2 a$. This just means that for two players competing, an action profile is only better than another for player 1 when it is worse for player 2. Strictly competitive games are also called zero-sum games, because payoff functions can be chosen so that the two players' payoffs add up to $0$.
When player $i$ maxminimizes, he chooses the action that is best for him on the assumption that whatever he does, his opponent will try her best to hurt him as much as possible. This is only reasonable behavior in a strictly competitive game. In a sense, maxminimization plays the role of the best-response function in a strictly competitive game.
Now we want to show that in a strictly competitive game, the pairs of maxminimizers for the two players are exactly the Nash equilibria.
Definition 5.1
Let $\langle \{1, 2\}, (A_i), (u_i) \rangle$ be a strictly competitive game. Then the action $x^* \in A_1$ is a maxminimizer for player 1 if
$\min_{y \in A_2} u_1(x^*, y) \ge \min_{y \in A_2} u_1(x, y)$ for all $x \in A_1$
This basically means choosing $x^*$ to maximize the payoff player 1 can guarantee, making it at least as good as that of any other $x$.
Here, the operator $\min_{y \in A_2}$ means finding the smallest value of the expression it acts on as $y$ ranges over $A_2$. E.g., if $f(y) = y^2$ and $A_2 = [-1, 2]$, then $\min_{y \in A_2} f(y) = 0$.
Similarly, the action $y^* \in A_2$ is a maxminimizer for player 2 if
$\min_{x \in A_1} u_2(x, y^*) \ge \min_{x \in A_1} u_2(x, y)$ for all $y \in A_2$
In words, a maxminimizer for player $i$ is an action that maximizes the payoff that player $i$ can guarantee. A maxminimizer for player 1 solves the problem $\max_x \min_y u_1(x, y)$ and a maxminimizer for player 2 solves the problem $\max_y \min_x u_2(x, y)$.
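For a finite game a maxminimizer can be computed by brute force. The sketch below uses my own encoding of Matching Pennies (matching choices pay player 1 a dollar, so $u_1$ is $1$ on the diagonal):

```python
# Maxminimizers by brute force over pure strategies in Matching Pennies.
acts = ["Head", "Tail"]
u1 = {("Head", "Head"): 1, ("Head", "Tail"): -1,
      ("Tail", "Head"): -1, ("Tail", "Tail"): 1}

# Player 1 solves max_x min_y u1(x, y): first compute the payoff each
# action guarantees against a worst-case opponent, then take the best.
guarantee = {x: min(u1[(x, y)] for y in acts) for x in acts}
maxmin = max(guarantee.values())
maxminimizers = [x for x in acts if guarantee[x] == maxmin]

print(maxmin, maxminimizers)  # → -1 ['Head', 'Tail']
```

Every pure action of player 1 guarantees only $-1$ here, which foreshadows the discussion below: in Matching Pennies the maxmin value $-1$ falls short of the minmax value $1$.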
We will assume for convenience that $u_i$ denotes a payoff function representing player $i$'s preferences, with $u_2 = -u_1$.
Lemma 5.2
Let $\langle \{1, 2\}, (A_i), (u_i) \rangle$ be a strictly competitive game with $u_2 = -u_1$. Then
$\max_{y} \min_{x} u_2(x, y) = -\min_{y} \max_{x} u_1(x, y)$
Proof. We know that for any function $f$ we have $\min_z (-f(z)) = -\max_z f(z)$ and also $\max_z (-f(z)) = -\min_z f(z)$. It follows that $\max_y \min_x u_2(x, y) = \max_y \min_x (-u_1(x, y)) = \max_y \left( -\max_x u_1(x, y) \right)$, which is also $-\min_y \max_x u_1(x, y)$.
Then the following results give the connection between the Nash equilibria of a strictly competitive game and the set of pairs of maxminimizers.
Proposition 5.3
Let $G = \langle \{1, 2\}, (A_i), (u_i) \rangle$ be a strictly competitive game.
- (a) If $(x^*, y^*)$ is a Nash equilibrium of $G$ then $x^*$ is a maxminimizer for player 1 and $y^*$ is a maxminimizer for player 2.
- (b) If $(x^*, y^*)$ is a Nash equilibrium of $G$ then $\max_x \min_y u_1(x, y) = \min_y \max_x u_1(x, y) = u_1(x^*, y^*)$. Thus all the Nash equilibria of $G$ yield the same payoffs.
- (c) Prove the converse statement of (a): if $\max_x \min_y u_1(x, y) = \min_y \max_x u_1(x, y)$, $x^*$ is a maxminimizer for player 1, and $y^*$ is a maxminimizer for player 2, then $(x^*, y^*)$ is a Nash equilibrium of $G$.
Proof. Let's first prove (a) and (b), then (c).
- Let $(x^*, y^*)$ be a Nash equilibrium for the game $G$. Then $u_1(x^*, y^*) \ge u_1(x, y^*)$ for all $x \in A_1$, and since $u_2 = -u_1$, player 2's condition gives $u_1(x^*, y^*) \le u_1(x^*, y)$ for all $y \in A_2$. Hence $u_1(x^*, y^*) = \min_y u_1(x^*, y) \ge u_1(x, y^*) \ge \min_y u_1(x, y)$ for every $x$, and it follows that $x^*$ is a maxminimizer for player 1. It can be proved the same way that $y^*$ is the maxminimizer for player 2. (I don't like the proof from the book. It's too lengthy)
- From the previous bullet, $u_1(x^*, y^*) = \min_y u_1(x^*, y) = \max_x \min_y u_1(x, y)$. Applying the same argument to player 2 gives $u_2(x^*, y^*) = \max_y \min_x u_2(x, y)$, and by Lemma 5.2 this equals $-\min_y \max_x u_1(x, y)$; since $u_2(x^*, y^*) = -u_1(x^*, y^*)$, we get $u_1(x^*, y^*) = \min_y \max_x u_1(x, y)$ as well.
- Let $v = \max_x \min_y u_1(x, y) = \min_y \max_x u_1(x, y)$. By Lemma 5.2, we have $\max_y \min_x u_2(x, y) = -v$. Then let $x^*$ be a maxminimizer for player 1; then $u_1(x^*, y) \ge v$ for all $y \in A_2$. In a similar method it is easy to obtain that $u_2(x, y^*) \ge -v$, i.e. $u_1(x, y^*) \le v$, for all $x \in A_1$. By combining these two inequalities at $(x^*, y^*)$, we get $u_1(x^*, y^*) = v$ and $u_1(x, y^*) \le u_1(x^*, y^*) \le u_1(x^*, y)$ for all $x$ and $y$. This is exactly the condition for $(x^*, y^*)$ to be a Nash equilibrium.
Now some useful facts. By part (c), the players' Nash equilibrium strategies can be found by solving two problems:
- Finding the maxima with respect to $x$ of the minima with respect to $y$ of $u_1$. This is basically $\max_x \min_y u_1(x, y)$.
- Finding the maxima with respect to $y$ of the minima with respect to $x$ of $u_2$. This is basically $\max_y \min_x u_2(x, y)$.
Note also that parts (a) and (c) imply that the Nash equilibria of a strictly competitive game are interchangeable.
Definition 5.4
For a game's equilibria to be interchangeable means that if $(x, y)$ and $(x', y')$ are Nash equilibria of the strategic game, then so are $(x, y')$ and $(x', y)$.
Part (b) of Proposition 5.3 also implies that $\max_x \min_y u_1(x, y) = \min_y \max_x u_1(x, y)$ for every strictly competitive game that has a Nash equilibrium. In general we only have an inequality: for any $x$ and $y$, we have $\min_{y'} u_1(x, y') \le u_1(x, y) \le \max_{x'} u_1(x', y)$, so taking the maximum over $x$ on the left and the minimum over $y$ on the right gives $\max_x \min_y u_1(x, y) \le \min_y \max_x u_1(x, y)$.
Thus in any game (whether or not it is strictly competitive) the payoff that player 1 can guarantee herself is at most the amount that player 2 can hold her down to.
The hypothesis that the game has a Nash equilibrium is essential in establishing the opposite inequality.
For example, in the game Matching Pennies, $\max_x \min_y u_1(x, y) = -1$ but $\min_y \max_x u_1(x, y) = 1$. To think about it, this game is sort of completely random. The order of the $\max$ and the $\min$ directly affects the outcome, since whoever reacts last in the nested optimization (i.e. chooses which coin side to play knowing the other's choice) has enough information to guarantee a success in the game.
Definition 5.5
If $\max_x \min_y u_1(x, y) = \min_y \max_x u_1(x, y)$, that is, if $(x^*, y^*)$ is a Nash equilibrium of the game, then the common value $v^* = u_1(x^*, y^*)$ is called the value of the game.
It follows that if $v^*$, as defined in Proposition 5.3, is the value of a game, then any equilibrium strategy of player 1 guarantees that her payoff is at least her equilibrium payoff $v^*$, and any equilibrium strategy of player 2 guarantees that his payoff is at least $-v^*$.
2.6 Bayesian Games: Strategic Games with Imperfect Information
Sometimes, we wish to model situations in which some of the players are not certain of the characteristics of the other players. The model of a Bayesian game, which is closely related to that of a strategic game, is designed for this purpose. Definition 1.2 is sort of like the standard model.
A Bayesian Game consists of a set of players $N$ and a profile of action sets. There is also a set $\Omega$ of possible "states of nature", each of which is a description of all the players' relevant characteristics. For convenience we assume that $\Omega$ is finite. Each player $i$ has a prior belief about the state of nature given by a probability measure $p_i$ on $\Omega$; $p_i(\omega)$ basically just measures how strongly player $i$ believes the state $\omega$ is the true one. In any given play of the game some state of nature $\omega \in \Omega$ is realized.

We model the players' information about the state of nature by a profile $(\tau_i)$ of signal functions, $\tau_i(\omega)$ being the signal perceived by player $i$ under the realization $\omega$. Let $T_i$ be the set of all possible values of $\tau_i$, and we refer to $T_i$ as the set of types of player $i$ (this is just the collection of all possible informational states player $i$ can be in). We assume $p_i(\tau_i^{-1}(t_i)) > 0$ for all $t_i \in T_i$: the player isn't absolutely negative about anything. If player $i$ receives the signal $t_i \in T_i$ then he deduces that the state is in the set $\tau_i^{-1}(t_i)$; the posterior belief about the state that has been realized assigns to each state $\omega$ the probability $p_i(\omega) / p_i(\tau_i^{-1}(t_i))$ if $\omega \in \tau_i^{-1}(t_i)$ and the probability zero otherwise (i.e. the probability of $\omega$ conditional on $\tau_i^{-1}(t_i)$).

As an example, if $\tau_i(\omega) = \omega$ for all $\omega \in \Omega$ then player $i$ has full information about the state of nature. Alternatively, if $\Omega = \times_{j \in N} T_j$ and for each player $i$ the probability measure $p_i$ is a product measure on $\Omega$ and $\tau_i(\omega) = \omega_i$, then the players' signals are independent and player $i$ does not learn from his signal anything about the other players' information.
Definition 6.0
A Bayesian game consists of
- a finite set $N$ (the set of players)
- a finite set $\Omega$ (the set of states)
and for each player $i \in N$
- a set $A_i$ (the set of actions available to player $i$)
- a finite set $T_i$ (the set of signals that may be observed by player $i$)
- a function $\tau_i : \Omega \to T_i$ (the signal function, which is how player $i$ observes a state $\omega$)
- a probability measure $p_i$ on $\Omega$ (the prior belief of player $i$) for which $p_i(\tau_i^{-1}(t_i)) > 0$ for all $t_i \in T_i$
- a preference relation $\succsim_i$ on the set of probability measures over $A \times \Omega$ (the preference relation of player $i$), where $A = \times_{j \in N} A_j$
This definition allows players to have different prior beliefs. It can be subjective, coincident with "an 'objective' measure", or identical.
This model is often used in cases where the state of nature is a profile of parameters of the players' preferences (for example, profiles of their valuations of an object). However, the model is much more general than this. For example, we can consider its use to capture situations where players are uncertain about the information others know.
Note that sometimes (like in previous chapters) a Bayesian game is described not in terms of an underlying state space $\Omega$, but in a reduced form where the basic primitives relating to the players' information are the profile $(T_i)$ of the sets of possible types.
Now let's turn to the definition of equilibrium for a Bayesian game. In our situation, each player knows his own type; he does not need to plan the steps he would take were he of another type. One might think that the definition of a Nash equilibrium should therefore be given for each state of nature in isolation, but "however, in any given state a player who wishes to determine his best action may need to hold a belief about what the other players would do in other states, since he may be imperfectly informed about the state. Further, the formation of such a belief may depend on the action that the player himself would choose in other states, since the other players may also be imperfectly informed."
Now we are led to define a Nash equilibrium of a Bayesian game to be a Nash equilibrium of the strategic game $G^*$ in which for each $i \in N$ and each possible signal $t_i \in T_i$ there is a player, whom we refer to as $(i, t_i)$ ("type $t_i$ of player $i$"). The set of actions of $(i, t_i)$ is $A_i$, and thus the set of action profiles in $G^*$ is $\times_{j \in N} \times_{t_j \in T_j} A_j$. The preferences of each player $(i, t_i)$ are defined as follows:
- The posterior belief of player $i$, together with an action profile $a^*$ in $G^*$, generates a lottery $L_i(a^*, t_i)$ over $A \times \Omega$.
- The probability assigned by $L_i(a^*, t_i)$ to $((a^*(j, \tau_j(\omega)))_{j \in N}, \omega)$ is the player's posterior belief that the state is $\omega$ when he receives the signal $t_i$, $a^*(j, \tau_j(\omega))$ being the action of player $(j, \tau_j(\omega))$ in the profile $a^*$.
- Player $(i, t_i)$ in $G^*$ prefers the action profile $a^*$ to the action profile $b^*$ if and only if player $i$ in the Bayesian game prefers the lottery $L_i(a^*, t_i)$ to the lottery $L_i(b^*, t_i)$.
To put it together, we have
Definition 6.1
A Nash equilibrium of a Bayesian game $\langle N, \Omega, (A_i), (T_i), (\tau_i), (p_i), (\succsim_i) \rangle$ is a Nash equilibrium of the strategic game defined as follows:
- The set of players is the set of all pairs $(i, t_i)$ for $i \in N$ and $t_i \in T_i$.
- The set of actions of each player $(i, t_i)$ is $A_i$.
- The preference relation $\succsim^*_{(i, t_i)}$ of each player $(i, t_i)$ is defined by
$a^* \succsim^*_{(i, t_i)} b^*$ if and only if $L_i(a^*, t_i) \succsim_i L_i(b^*, t_i)$
where $L_i(a^*, t_i)$ is the lottery over $A \times \Omega$ that assigns the posterior probability $p_i(\omega) / p_i(\tau_i^{-1}(t_i))$ to $((a^*(j, \tau_j(\omega)))_{j \in N}, \omega)$ if $\omega \in \tau_i^{-1}(t_i)$, zero otherwise.
2.6.2 Examples
Second-price auction
Scenario
An object is to be assigned to a player in the set $\{1, \dots, n\}$ in exchange for a payment. Player $i$'s valuation of the object is $v_i$, and $v_1 > v_2 > \dots > v_n > 0$. The mechanism used to assign the object is a (sealed-bid) auction: the players simultaneously submit bids (nonnegative numbers), and the object is given to the player with the lowest index among those who submit the highest bid, in exchange for a payment.
In a first price auction the payment that the winner makes is the price that he bids.
In a second price auction the payment that the winner makes is the highest bid among those submitted by the players who do not win (so that if only one player submits the highest bid then the price paid is the second highest bid).
Now consider a variant of the second-price sealed-bid auction described above in which each player knows his own valuation but is uncertain of the other players' valuations. Specifically, suppose that the set of possible valuations is the finite set $V$ and each player believes that every other player's valuation is drawn independently from the same distribution over $V$.
Then this situation, as a Bayesian game, is
- the set of players, $N = \{1, \dots, n\}$
- the set of states, $\Omega = V^n$ (profiles of valuations)
- the set of actions of each player $i$, $A_i = \mathbb{R}_+$
- the set of signals $i$ can receive, $T_i = V$
- the signal function of $i$, $\tau_i(v_1, \dots, v_n) = v_i$
- the prior belief of $i$ is given by $p_i(v_1, \dots, v_n) = \prod_{j=1}^{n} \pi(v_j)$ for some distribution $\pi$ over $V$
- player $i$'s preference relation is represented by the expectation of the random variable whose value in state $(v_1, \dots, v_n)$ is $v_i - \max_{j \neq i} a_j$ if $i$ is the player with the lowest index for whom $a_i \ge a_j$ for all $j$, and $0$ otherwise
This game has a Nash equilibrium, namely the one in which $a_i(v_i) = v_i$ for every player $i$ and every valuation $v_i$ (bidding one's valuation).
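The optimality of bidding one's valuation can be spot-checked numerically. The helper names below are my own, and the exhaustive check over a small grid of valuations and bids is an illustration of the weak-dominance property, not a proof:

```python
import itertools

def second_price_winner(bids):
    """Winner is the lowest-indexed highest bidder; he pays the highest
    bid among the other players."""
    high = max(bids)
    winner = bids.index(high)          # index() returns the lowest index
    price = max(b for i, b in enumerate(bids) if i != winner)
    return winner, price

def payoff(valuation, bids, player):
    winner, price = second_price_winner(bids)
    return valuation - price if winner == player else 0

# Check on a small grid that bidding one's valuation is weakly dominant
# for player 0: no deviation ever does strictly better, whatever the
# other two players bid.
grid = [0, 1, 2, 3]
for v in grid:
    for other in itertools.product(grid, repeat=2):
        truthful = payoff(v, [v, *other], 0)
        for dev in grid:
            assert truthful >= payoff(v, [dev, *other], 0)

print("truthful bidding is weakly dominant on the grid")
```

The key feature the check exercises is that the winner's payment does not depend on his own bid, so changing the bid can only change whether he wins, never the price he pays when he does.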
2.6.3 Comments on the model
The model captures the players' uncertainty by a probability measure over some set of states, as proposed by Harsanyi. He assumed that the prior belief of every player is the same, and that all differences in the players' knowledge should be derived from an objective mechanism that assigns information to each player.
In previous sections we have shown that the assumption of a common prior belief "has strong implications for the relationship between the players' posterior beliefs." For example, "after a pair of players receive their signals it cannot be "common knowledge" between them that player 1 believes the probability that the state of nature is in some given set to be $\alpha$ and that player 2 believes this probability to be $\beta \neq \alpha$, though it is possible that player 1 believes the probability to be $\alpha$, player 2 believes it to be $\beta$, and one of them is unsure about the other's belief."
A Bayesian game can also be used to model situations where players are uncertain about others' knowledge of a state.
For example, consider a Bayesian game in which the set of players is $N = \{1, 2\}$ and the set of states is $\Omega = \{\omega_1, \omega_2, \omega_3\}$, each player assigns positive probability to each state, and the signal functions for players 1 and 2 satisfy $\tau_1(\omega_1) \neq \tau_1(\omega_2) = \tau_1(\omega_3)$ and $\tau_2(\omega_1) = \tau_2(\omega_2) \neq \tau_2(\omega_3)$. Now we assume that player 2's preferences satisfy $(a, \omega_j) \succ_2 (b, \omega_j)$ for $j = 1, 2$ and $(b, \omega_3) \succ_2 (a, \omega_3)$ for some action profiles $a$ and $b$, while player 1 is indifferent between all the pairs (action profile, state). In state $\omega_1$ player 1 knows that player 2 prefers $a$ over $b$, but in $\omega_2$ he does not, since his signal there is consistent with both $\omega_2$ and $\omega_3$. Since in $\omega_1$ and $\omega_2$ player 2 doesn't know which of the two states it is, she doesn't know whether
- Player 1 knows that she prefers $a$ to $b$, or
- Player 1 is not sure about her preference between $a$ and $b$.
It's tempting to expand the range of situations we can model, to those where all players are uncertain about each other's knowledge of the others, using a Bayesian game. We assume that all players' payoffs depend only on a parameter $\theta \in \Theta$. Denoting the set of possible beliefs of each player $i$ by $X_i$, a belief of any player $i$ is a probability distribution over $\Theta \times X_{-i}$. That implies that the belief of one player depends on the others' beliefs. Therefore, the answer to the question we posed is not trivial, and it is equivalent to the question of finding a collection $(X_i)_{i \in N}$ of sets so that each $X_i$ is isomorphic to the set of probability distributions over $\Theta \times X_{-i}$. If so, then it is possible to let $\Omega = \Theta \times (\times_{i \in N} X_i)$ be the state space and use this model to capture the players' uncertainty not only about the others' payoffs but also about the others' beliefs.
Chapter 3: Mixed, Correlated, and Evolutionary Equilibrium
In this chapter two concepts of equilibrium are discussed: mixed strategy Nash equilibrium and correlated equilibrium. There will also be a discussion of a variant of Nash equilibrium designed to model the outcome of an evolutionary process.
3.1 Mixed Strategy Nash Equilibrium
This is designed to model a steady state of the game when the participants' choices are not deterministic but are regulated by probabilistic rules. In previous chapters we had defined a strategic game to be a triple $\langle N, (A_i), (\succsim_i) \rangle$, where the relation $\succsim_i$ of each player $i$ is defined over the set $A$ of action profiles. In this chapter we allow the players' choices to be nondeterministic. This leads to the need to add to the primitives of the model a specification of each player's preference relation over lotteries on $A$. Therefore, we assume that the preference relation of each player $i$ satisfies the assumptions of von Neumann and Morgenstern, so that it can be represented by the expected value of some function $u_i : A \to \mathbb{R}$. Therefore, we can update our model to $\langle N, (A_i), (u_i) \rangle$. This function $u_i$ "represents player i's preferences over the set of lotteries on A". Nevertheless, this is still a strategic game.
Now let $G = \langle N, (A_i), (u_i) \rangle$ be such a strategic game. We denote by $\Delta(A_i)$ the set of probability distributions over $A_i$ and refer to a member of $\Delta(A_i)$ as a mixed strategy of player $i$; we assume that the players' mixed strategies are independent randomizations. For clarity, we sometimes refer to a member of $A_i$ as a pure strategy. For any finite set $X$ and $\delta \in \Delta(X)$ we denote by $\delta(x)$ the probability that $\delta$ assigns to $x \in X$ and define the support of $\delta$ to be the set of elements $x \in X$ for which $\delta(x) > 0$.
A profile $(\alpha_j)_{j \in N}$ of mixed strategies induces a probability distribution over the set $A$. If, for example, each $A_j$ is finite, then given that the randomizations are independent of each other, the probability of the action profile $a = (a_j)_{j \in N}$ is $\prod_{j \in N} \alpha_j(a_j)$, so player $i$'s evaluation of $(\alpha_j)_{j \in N}$ is $\sum_{a \in A} \left( \prod_{j \in N} \alpha_j(a_j) \right) u_i(a)$.
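This expected-payoff formula can be implemented directly for finite games. In the sketch below (the function name and the particular mixed profile are my own choices), player 1's expected payoff in BoS is computed under the profile in which player 1 plays Bach with probability 2/3 and player 2 plays Bach with probability 1/3:

```python
from itertools import product

def expected_payoff(u, strategies):
    """Sum over profiles a of prod_j alpha_j(a_j) * u(a), assuming the
    players' mixed strategies are independent randomizations."""
    total = 0.0
    for profile in product(*[s.keys() for s in strategies]):
        prob = 1.0
        for s, a in zip(strategies, profile):
            prob *= s[a]
        total += prob * u[profile]
    return total

# BoS payoffs for player 1, with "B" for Bach and "S" for Stravinsky.
u1 = {("B", "B"): 2, ("B", "S"): 0, ("S", "B"): 0, ("S", "S"): 1}
alpha = [{"B": 2 / 3, "S": 1 / 3}, {"B": 1 / 3, "S": 2 / 3}]

print(expected_payoff(u1, alpha))  # → approximately 0.6667 (= 2/3)
```

Only the diagonal profiles contribute here: $(2/3)(1/3) \cdot 2 + (1/3)(2/3) \cdot 1 = 4/9 + 2/9 = 2/3$.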
We can now derive from $G$ another strategic game, called the "mixed extension" of $G$, in which the set of actions of each player $i$ is the set of his mixed strategies in $G$.
Definition 7.0
The mixed extension of a strategic game $\langle N, (A_i), (u_i) \rangle$ is the strategic game $\langle N, (\Delta(A_i)), (U_i) \rangle$ in which $\Delta(A_i)$ is the set of probability distributions over $A_i$ and $U_i : \times_{j \in N} \Delta(A_j) \to \mathbb{R}$ assigns to each $\alpha \in \times_{j \in N} \Delta(A_j)$ the expected value under $u_i$ of the lottery over $A$ that is induced by $\alpha$ (so that $U_i(\alpha) = \sum_{a \in A} \left( \prod_{j \in N} \alpha_j(a_j) \right) u_i(a)$ if $A$ is finite)