Games:
Games in which neither side knows the probabilities of the other side’s choices are often referred to as “games with uncertainty” or “games with ambiguous information.” These games are characterized by the lack of precise knowledge about the probabilities of the outcomes or the actions of other players.
Games in which all sides know the probability of the choices of the other side are typically referred to as “games of complete and perfect information” when players have full knowledge of all aspects of the game, including the payoffs and strategies of others. However, if the focus is specifically on the known probabilities of choices, these games can also be categorized as “Bayesian games” or “games of risk”.
Games in which the players can communicate before choosing their actions are referred to as “games with pre-play communication” or “cheap talk games.” In these games, players engage in communication to share information, intentions, or strategies before choosing their actions. The communication is typically non-binding, meaning it does not directly affect the payoffs or the actual choices made during the game.
Games in which neither side knows the probabilities of the other side’s choices but all sides know the other side’s preferences can be described as “games with symmetric uncertainty”. In these games, players are uncertain about the probabilities of the other players’ actions but have full knowledge of the payoff structures and preferences of all players involved.
MiniMax: minimize the possible loss in the worst-case (maximum-loss) scenario. For every choice, find its maximum possible loss considering the other actors, then choose the alternative with the minimum of these maximum losses.
Maximin: maximize the minimum possible gain. For every choice, find its minimum possible gain considering the other actors, then choose the alternative with the maximum of these minimum gains.
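The two rules can be sketched in a few lines of Python. The loss matrix below is a hypothetical example: rows are our alternatives, columns the other actor’s possible actions.

```python
# MiniMax vs. Maximin on a small loss matrix (hypothetical numbers).
# Rows are our alternatives; columns are the other actor's possible actions.
loss = [
    [3, 6, 1],  # alternative 0
    [2, 4, 5],  # alternative 1
    [7, 2, 3],  # alternative 2
]
gain = [[-x for x in row] for row in loss]  # the same game viewed as gains

# MiniMax: for each alternative, take its worst case (maximum loss),
# then pick the alternative whose worst case is smallest.
minimax_choice = min(range(len(loss)), key=lambda i: max(loss[i]))

# Maximin: for each alternative, take its minimum possible gain,
# then pick the alternative whose minimum gain is largest.
maximin_choice = max(range(len(gain)), key=lambda i: min(gain[i]))

print(minimax_choice, maximin_choice)  # both pick alternative 1 here
```

Since the gains here are just negated losses, the two rules agree; they differ when the gain and loss structures are not mirror images of each other.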
In game theory and decision-making, several strategies and concepts are employed beyond the well-known minimax and maximin. Here are some of the key strategies and approaches:
- Nash Equilibrium: A situation where no player can benefit by changing their strategy while the other players keep their strategies unchanged. It represents a stable state where players are making the best decisions they can, given the decisions of the others.
Knowledge of the preferences (or payoffs) of the other side is crucial to finding the Nash equilibrium in a game. The concept of Nash equilibrium inherently relies on players knowing the payoff structure of the game, including how their own payoffs are affected by their own actions and the actions of other players.
A Nash equilibrium is not necessarily Pareto efficient. This means that there can be other outcomes where all players could be better off, but these outcomes are not stable because they are not equilibria. The classic example is the Prisoner’s Dilemma, where the Nash equilibrium leads to a suboptimal outcome for both players.
In sequential games, players make decisions at different points in time. These games are often represented in extensive form (game trees) where the order of moves is explicitly shown. The concept of Nash Equilibrium is still applicable but often analyzed using the subgame perfect equilibrium, a refinement of Nash Equilibrium specific to sequential games.
Key Concepts:
- Subgame Perfect Equilibrium (SPE):
- SPE is a refinement of Nash Equilibrium used for sequential games.
- It ensures that players’ strategies constitute a Nash Equilibrium in every subgame of the original game, ensuring credible threats and promises.
- It eliminates non-credible threats by requiring equilibrium strategies in every subgame.
- Backward Induction:
- A common method to find the SPE by analyzing the game from the end (the final moves) and making decisions that are optimal at every stage.
Example: Stackelberg Competition
- Two firms choose quantities sequentially.
- The leader firm moves first, and the follower firm moves second, knowing the leader’s choice.
- The Nash Equilibrium here involves the follower choosing the best response to the leader’s choice, and the leader anticipating this and choosing its strategy accordingly.
- The resulting equilibrium is subgame perfect.
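The Stackelberg logic above can be sketched numerically. This is a minimal backward-induction example under assumed conditions: linear inverse demand P = a − q1 − q2 with a = 12 and zero marginal cost (all values are illustrative, not from the text).

```python
# Backward induction in a Stackelberg quantity game (a sketch assuming
# linear inverse demand P = a - q1 - q2 and zero marginal cost).
a = 12.0

def follower_best_response(q1):
    # Solve the last stage first: the follower maximizes (a - q1 - q2) * q2
    # taking the leader's quantity q1 as given; calculus gives q2 = (a - q1) / 2.
    return (a - q1) / 2

# The leader anticipates this response and maximizes its own profit,
# (a - q1 - follower_best_response(q1)) * q1, over a grid of candidates.
candidates = [i * 0.01 for i in range(1201)]
q1_star = max(candidates,
              key=lambda q1: (a - q1 - follower_best_response(q1)) * q1)
q2_star = follower_best_response(q1_star)
print(q1_star, q2_star)  # analytic optimum: q1 = a/2 = 6, q2 = a/4 = 3
```

The leader produces twice as much as the follower: moving first is an advantage here, which is why the order of moves matters in sequential games.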
- Pareto Efficiency: A state where it is impossible to make any player better off without making at least one player worse off. This concept is used to evaluate the allocation of resources and the outcomes of strategic interactions.
- Dominant Strategy: A strategy that is the best for a player, no matter what the strategies of other players are. If a player has a dominant strategy, they will always prefer it over any other strategy.
- Mixed Strategy: Instead of choosing a single strategy, players randomize over possible strategies according to specific probabilities. This is particularly useful in games where no pure-strategy Nash equilibrium exists.
- Evolutionarily Stable Strategy (ESS): In evolutionary game theory, an ESS is a strategy that, if adopted by a population, cannot be invaded by any alternative strategy. It is a refinement of the Nash equilibrium concept.
- Bayesian Nash Equilibrium: This extends the Nash equilibrium concept to games with incomplete information, where players have beliefs about the types or payoffs of other players, represented by a probability distribution.
- Stackelberg Competition: In games involving a leader and followers, the leader commits to a strategy first, and the followers then choose their strategies. The leader’s strategy takes into account the followers’ best responses.
- Repeated Games and the Folk Theorem: Repeated interaction among players can lead to cooperation and other outcomes that might not be possible in a single-shot game. The Folk Theorem states that a wide variety of outcomes can be sustained as equilibria in infinitely repeated games.
- Correlated Equilibrium: Players choose their strategies based on signals received from a correlating device. The signals help coordinate their strategies in a way that can lead to higher payoffs than in a Nash equilibrium.
- Coalition Formation and Cooperative Game Theory: This involves players forming coalitions and sharing payoffs. The core, Shapley value, and bargaining solutions are concepts used to study the distribution of payoffs among coalition members.
- Regret Minimization and Learning Algorithms: Players adapt their strategies based on past performance, seeking to minimize regret over time. This approach is often used in online learning and adaptive algorithms.
- Sequential Equilibrium: This is used in extensive-form games, where players make decisions at different points in time. It refines the Nash equilibrium concept by incorporating beliefs about what has happened in the game so far.
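For the mixed-strategy idea, a standard worked case is matching pennies (chosen here as an assumed example because it has no pure-strategy Nash equilibrium). Each player mixes so that the opponent is indifferent between their two options:

```python
# Mixed-strategy equilibrium of a 2x2 game via the indifference condition.
# Matching pennies payoffs: the row player wins on a match, loses otherwise.
A = [[1, -1], [-1, 1]]   # row player's payoffs
B = [[-1, 1], [1, -1]]   # column player's payoffs

# Row player plays the first row with probability p, chosen so that the
# column player gets the same expected payoff from either column.
p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
# Column player plays the first column with probability q, chosen so that
# the row player is indifferent between the two rows.
q = (A[1][1] - A[1][0]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
print(p, q)  # 0.5 0.5 for matching pennies
```

The indifference condition is the key idea: in a mixed equilibrium, each player’s randomization makes the other player’s options equally attractive, so neither has an incentive to deviate.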
Real life games
Real-life games are usually either cooperative and simultaneous, or non-cooperative and sequential. Even when a situation calls for simultaneous action, it soon leads to a sequential response.
For example:
A classic real-life example of a non-cooperative, simultaneous game is competitive pricing in business, specifically in an oligopolistic market. Here’s how it works:
Competitive Pricing in an Oligopoly
Description
In an oligopoly, a few firms dominate the market. These firms must decide simultaneously on their pricing strategies without knowing the decisions of their competitors. Each firm aims to set a price that maximizes its profit, taking into account the expected pricing decisions of its rivals.
Key Characteristics
- Non-Cooperative: Firms do not collaborate or communicate their pricing strategies to each other.
- Simultaneous: Pricing decisions are made at the same time, without knowing the competitors’ choices.
- Strategic Interdependence: The profit of each firm depends not only on its own pricing but also on the pricing strategies of its competitors.
Example: Airline Industry
In the airline industry, companies often face simultaneous decisions regarding ticket pricing. Suppose two airlines, Airline A and Airline B, operate in the same market and need to decide on the price of a specific route.
Payoff Matrix:
| | Airline B: Low Price | Airline B: High Price |
|---|---|---|
| Airline A: Low Price | (10, 10) | (20, 5) |
| Airline A: High Price | (5, 20) | (15, 15) |
- Low Price: Each airline sets a lower price to attract more customers, potentially leading to a price war.
- High Price: Each airline sets a higher price, potentially leading to higher individual profits if the other airline also sets a high price.
- The numbers in the cells represent the payoffs (profits) for Airline A and Airline B, respectively.
Real-World Dynamics
- Price Wars: If both airlines set low prices, they may attract more customers but at the cost of reduced profit margins. This outcome is the Nash equilibrium: neither airline can benefit by unilaterally changing its price given the other’s low price.
- High Price Agreement: If both airlines set high prices (without colluding, as collusion is illegal in most jurisdictions), they can achieve higher profits. However, there is always a temptation for one to deviate and lower prices to capture a larger market share, potentially destabilizing this equilibrium.
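The equilibrium claim can be checked by brute force. The sketch below takes the payoff numbers from the table above and tests every price pair for unilateral deviations:

```python
# Brute-force search for pure-strategy Nash equilibria in the airline
# pricing game; payoffs are (Airline A's profit, Airline B's profit).
payoffs = {
    ("Low", "Low"): (10, 10), ("Low", "High"): (20, 5),
    ("High", "Low"): (5, 20), ("High", "High"): (15, 15),
}
prices = ["Low", "High"]

def is_nash(a, b):
    ua, ub = payoffs[(a, b)]
    # Neither airline can gain by unilaterally changing its own price.
    no_dev_a = all(payoffs[(a2, b)][0] <= ua for a2 in prices)
    no_dev_b = all(payoffs[(a, b2)][1] <= ub for b2 in prices)
    return no_dev_a and no_dev_b

equilibria = [(a, b) for a in prices for b in prices if is_nash(a, b)]
print(equilibria)  # [('Low', 'Low')] — the price war is the only equilibrium
```

Note that (High, High) gives both airlines 15, strictly better than the equilibrium payoff of 10: the unique Nash equilibrium is not Pareto efficient, exactly as in the Prisoner’s Dilemma.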
=-=-=-=-=-=-==-=-
In a game of two players with two choices each, if there is only one Nash equilibrium, then at least one of the two players has a dominant strategy.
| | Player 2: X | Player 2: Y |
|---|---|---|
| Player 1: A | (2, 1) | (0, 0) |
| Player 1: B | (1, 2) | (3, 1) |
In the example above, player 2 has a dominant strategy (playing X).
| | Player 2: X | Player 2: Y |
|---|---|---|
| Player 1: A | (2, 1) | (4, 0) |
| Player 1: B | (1, 2) | (3, 3) |
In the example above, player 1 has a dominant strategy (playing A).
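Both claims can be verified mechanically. This sketch checks each player of each matrix for a strictly dominant strategy (payoff cells are copied from the two tables above):

```python
# Strict-dominance check for the two example matrices above.
# Each cell is (player 1's payoff, player 2's payoff); rows are player 1's
# strategies (A, B), columns are player 2's strategies (X, Y).
def has_strictly_dominant(game, player):
    # A strategy is strictly dominant if it beats the alternative
    # against every possible choice of the opponent.
    if player == 1:
        return any(all(game[s][c][0] > game[o][c][0]
                       for c in range(2) for o in range(2) if o != s)
                   for s in range(2))
    return any(all(game[r][s][1] > game[r][o][1]
                   for r in range(2) for o in range(2) if o != s)
               for s in range(2))

game1 = [[(2, 1), (0, 0)], [(1, 2), (3, 1)]]
game2 = [[(2, 1), (4, 0)], [(1, 2), (3, 3)]]
print(has_strictly_dominant(game1, 1), has_strictly_dominant(game1, 2))  # False True
print(has_strictly_dominant(game2, 1), has_strictly_dominant(game2, 2))  # True False
```

In each matrix the dominant player’s choice pins down the other player’s best response, which is why the Nash equilibrium is unique.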
=-=-=-=-=-=-=-=-=-=-
Tit for tat
Tit for tat is both the simplest strategy and the most successful in direct competition in the iterated prisoner’s dilemma.
Tit for tat is largely cooperative, even though its name suggests an adversarial nature.
Research has indicated that when individuals who have been in competition for a period of time no longer trust one another, the most effective competition reverser is the use of the tit-for-tat strategy. Individuals commonly engage in behavioral assimilation, a process in which they tend to match their own behaviors to those displayed by cooperating or competing group members. Therefore, if the tit-for-tat strategy begins with cooperation, then cooperation ensues. On the other hand, if the other party competes, then the tit-for-tat strategy will lead the alternate party to compete as well. Ultimately, each action by the other member is countered with a matching response, competition with competition and cooperation with cooperation.
In the case of conflict resolution, the tit-for-tat strategy is effective for several reasons: the technique is recognized as clear, nice, provocable, and forgiving. Firstly, it is a clear and recognizable strategy. Those using it quickly recognize its contingencies and adjust their behavior accordingly. Moreover, it is considered to be nice as it begins with cooperation and only defects in response to competition. The strategy is also provocable because it provides immediate retaliation for those who compete. Finally, it is forgiving as it immediately produces cooperation should the competitor make a cooperative move.
Two agents playing tit for tat remain vulnerable: a one-time, single-bit error in either player’s interpretation of events can lead to an unending “death spiral.” If one agent defects while the opponent cooperates, the two agents end up alternating cooperation and defection, yielding a lower payoff than if both were to cooperate continually. Tit for two tats can be used to mitigate this problem.
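The death spiral is easy to reproduce. The sketch below plays two tit-for-tat agents with one injected misread (standard prisoner’s dilemma payoffs T=5, R=3, P=1, S=0 are assumed):

```python
# Two tit-for-tat agents with a single misread move: after the error,
# play locks into alternating cooperate/defect (the "death spiral").
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(rounds, error_round=None):
    a_move, b_move = "C", "C"            # tit for tat starts by cooperating
    score_a = score_b = 0
    history = []
    for r in range(rounds):
        if r == error_round:             # agent A misreads B and defects once
            a_move = "D"
        history.append(a_move + b_move)
        pa, pb = PAYOFF[(a_move, b_move)]
        score_a += pa; score_b += pb
        a_move, b_move = b_move, a_move  # each copies the other's last move
    return history, score_a, score_b

h, sa, sb = play(6, error_round=2)
print(h, sa, sb)  # ['CC', 'CC', 'DC', 'CD', 'DC', 'CD'] 16 16
```

Both agents score 16 over six rounds, versus 18 each under uninterrupted mutual cooperation: one misread permanently lowers both payoffs.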
The tit-for-tat strategy has not been proved optimal in situations short of total competition. For example, when the parties are friends, it may be best for the friendship if a player cooperates at every step despite occasional deviations by the other player. Most real-world situations are less competitive than the total competition in which the tit-for-tat strategy won its tournaments.
Under tit for tat, the inability of either side to back away from conflict, for fear of being perceived as weak or as cooperating with the enemy, has been the cause of many prolonged conflicts throughout history.
https://en.wikipedia.org/wiki/Tit_for_tat
https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation
Some other variations:
- Generous Tit for Tat: Similar to Tit for Tat, but occasionally forgives the opponent’s defection and returns to cooperation even if the opponent has defected in the previous round. This generosity can promote cooperation and prevent a cycle of retaliation.
- Pavlov (Win-Stay, Lose-Shift): This strategy involves cooperating as long as the outcome is favorable (winning or mutual cooperation) and defecting if the outcome is unfavorable (losing or mutual defection). It’s based on the idea of reinforcing successful strategies.
- Grim Trigger: Cooperate initially, and as soon as the opponent defects, switch to a permanent strategy of always defecting. This strategy is designed to punish defection severely and discourage the opponent from betraying trust.
- Random: Randomly choose between cooperation and defection in each round. This strategy can be effective in unpredictable environments and may prevent opponents from exploiting a predictable pattern.
- Forgiving Tit for Tat: Similar to Generous Tit for Tat, but forgives after a certain number of rounds even if the opponent continues to defect. This forgiveness can prevent a prolonged cycle of retaliation and encourage cooperation.
- TFT-2 (Tit for Tat with Two Turns Memory): Takes into account the opponent’s last two moves instead of just the previous one. This can provide a more nuanced response, considering a slightly longer history of interactions.
- Soft Tit for Tat: Similar to Tit for Tat but responds with cooperation if the opponent defects only once in a while. This strategy allows for occasional forgiveness and may avoid unnecessary retaliation.
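A few of these variants can be pitted against each other in a minimal round-robin sketch (the payoff values T=5, R=3, P=1, S=0 and the 20-round match length are assumptions, not from the text):

```python
# Minimal iterated-prisoner's-dilemma matches between three strategies.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    return opp_hist[-1] if opp_hist else "C"   # copy the opponent's last move

def grim_trigger(my_hist, opp_hist):
    return "D" if "D" in opp_hist else "C"     # never forgive a defection

def pavlov(my_hist, opp_hist):
    if not my_hist:
        return "C"
    # Win-stay, lose-shift: repeat the last move after a good outcome
    # (payoff R or T), switch after a bad one (payoff P or S).
    good = PAYOFF[(my_hist[-1], opp_hist[-1])][0] >= 3
    return my_hist[-1] if good else ("D" if my_hist[-1] == "C" else "C")

def match(s1, s2, rounds=20):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

print(match(tit_for_tat, grim_trigger))  # (60, 60): mutual cooperation
print(match(pavlov, tit_for_tat))        # (60, 60): also settles into cooperation
```

All three strategies are "nice" (they never defect first), so against each other they sustain full cooperation; their differences only show up against defectors or under noise.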
=-=-=-=-=-=-=-==-
Set
Group
Ring
Field
Ideal
Principal Ideal
Galois Field
Lattice
Algebra
Mapping
Other links
https://www.csee.umbc.edu/portal/help/theory/group_def.shtml
List of games in game theory – Wikipedia
GitHub – medvedev1088/game-theory-cheat-sheet: Game Theory Cheat Sheet
RunTheModel – Game Theory Simulation: Repeated and Evolutionary Games – Marketplace & Competition
https://en.wikipedia.org/wiki/List_of_games_in_game_theory
1. Cooperative and Non-Cooperative Games:
2. Normal Form and Extensive Form Games:
3. Simultaneous Move Games and Sequential Move Games:
4. Constant Sum, Zero Sum, and Non-Zero Sum Games:
5. Symmetric and Asymmetric Games:
Repeated Games and Trust
Game Theory and Trust: Untangling the Role of Repeated Interactions in Trust Building
Game theory and its sociological aspect https://blog.ipleaders.in/game-theory-and-its-sociological-aspect/
Sociology and Game Theory: Contemporary and Historical Perspectives
https://www.jstor.org/stable/657964
What Is Game Theory? An Overview of the Sociological Concept
https://www.thoughtco.com/game-theory-3026626
=-=-=-=-=-=-=-=-=-=-=-=-
Ashley Hodgson
https://www.youtube.com/@AshleyHodgson/playlists
- “An Introduction to Game Theory” by Martin J. Osborne: This book provides a comprehensive introduction to game theory, including the importance of payoff knowledge in determining Nash equilibrium.
- “Game Theory for Applied Economists” by Robert Gibbons: Offers practical insights into the role of payoff information in game-theoretic analysis.
- Stanford Encyclopedia of Philosophy – Game Theory: Provides detailed explanations of Nash equilibrium and the necessity of payoff knowledge in strategic decision-making Stanford Encyclopedia of Philosophy.
- “Game Theory: Analysis of Conflict” by Roger B. Myerson: Offers in-depth coverage of game theory concepts, including mixed strategies.
- “The Strategy of Conflict” by Thomas C. Schelling: Discusses focal points and coordination problems in games with multiple equilibria.
- “Games of Strategy” by Avinash K. Dixit, Susan Skeath, and David H. Reiley: This text covers various game theory concepts, focusing on strategic decision-making under different types of uncertainty.