GAME THEORY BEYOND ECONOMICS

IEEE_CAS_VIT
6 min read · May 31, 2021

It’s the Friday night poker game, it’s your turn, the stakes are high, and you have no clue what to do. Should you play tight while your opponent plays loose? Should you bluff? Raise or fold?

Fortunately, theories have been written for your problem.

Game theory as we know it today came about partly because of one man’s interest in poker. John Von Neumann was a mathematician, physicist, computer scientist, and an overall genius. His inspiration for game theory was poker, a game he played occasionally and not terribly well. Von Neumann realized that poker was not guided by probability theory alone and wanted to formalize the idea of “bluffing,” a strategy that is meant to deceive the other players and hide information from them.

He soon realized that game theory would prove invaluable to economists. He teamed up with Oskar Morgenstern, an Austrian economist at Princeton, to develop the theory, and together they wrote the book Theory of Games and Economic Behavior (1944).

Blockchain technology is one of the best examples of game theory applied in modern technology. It incentivizes miners (the players) to make decisions that are best for them and also for the block of transactions they are building.

“Eventually, every game theory textbook will have a chapter on public blockchains.”

— Naval

Game theory is the study of how and why people make decisions within a competitive situation while keeping in mind what actions their competitors will take. You can think of it as the study of strategic decision-making.

Game theory can be used for any situation where two or more people have to make decisions with rewards and consequences. The ultimate goal is to find whether an “optimum” strategy for a given game exists.

This optimum strategy for a game is called the Nash equilibrium, named after Nobel laureate John Nash, who became a celebrity after Russell Crowe played him in the biopic A Beautiful Mind.

Nash equilibrium by definition is the stable state of a system involving the interaction of different participants, in which no participant can gain by a unilateral change of strategy if the strategies of the others remain unchanged.

Can’t follow? Let’s take an example:

Imagine a game between Joey and Chandler. In this simple game, both players can choose strategy A, to receive one pizza slice, or strategy B, to lose one pizza slice. Logically, both players choose strategy A and receive a payoff of one pizza slice.

If you revealed Joey’s strategy to Chandler and vice versa, you would see that neither player deviates from their original choice. Knowing the other player’s move doesn’t change either player’s behavior. Outcome (A, A) represents a Nash equilibrium: it’s the optimum strategy keeping both players in mind.

(One catch: the Nash equilibrium, i.e. the stable strategy, is not always the best possible outcome for the players, as in the prisoner’s dilemma.)
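To make this concrete, here is a minimal Python sketch with an illustrative helper, nash_equilibria, that brute-force checks every cell of a two-player payoff table for a pure-strategy Nash equilibrium. The pizza payoffs follow the Joey-and-Chandler example above; the prisoner’s dilemma numbers are standard textbook values chosen only for illustration.

```python
from itertools import product

def nash_equilibria(payoffs):
    """Return all pure-strategy Nash equilibria of a two-player game.

    payoffs maps (row_strategy, col_strategy) -> (row_payoff, col_payoff).
    A cell is an equilibrium if neither player can do better by
    unilaterally switching to another of their own strategies.
    """
    rows = {r for r, _ in payoffs}
    cols = {c for _, c in payoffs}
    equilibria = []
    for r, c in product(rows, cols):
        row_pay, col_pay = payoffs[(r, c)]
        row_best = all(row_pay >= payoffs[(r2, c)][0] for r2 in rows)
        col_best = all(col_pay >= payoffs[(r, c2)][1] for c2 in cols)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

# Joey vs. Chandler: strategy A wins a pizza slice, B loses one.
pizza_game = {
    ("A", "A"): (1, 1), ("A", "B"): (1, -1),
    ("B", "A"): (-1, 1), ("B", "B"): (-1, -1),
}
print(nash_equilibria(pizza_game))   # [('A', 'A')]

# Prisoner's dilemma with textbook payoffs (years lost, so negative):
# mutual defection is the equilibrium even though mutual cooperation pays more.
dilemma = {
    ("cooperate", "cooperate"): (-1, -1), ("cooperate", "defect"): (-3, 0),
    ("defect", "cooperate"): (0, -3),     ("defect", "defect"): (-2, -2),
}
print(nash_equilibria(dilemma))      # [('defect', 'defect')]
```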

Let’s switch on geek mode. There are four main ways of classifying games:

1. Zero-sum and non-zero-sum games

Zero-sum games are the kind von Neumann originally built his theory around: whatever one player gains, the other players lose, as in chess. In non-zero-sum games, one player’s gain does not have to come at another player’s expense, as in Monopoly.

2. Cooperative and non-cooperative games

In cooperative games, players can form alliances, such as negotiations that profit both parties, to maximize their chances of winning. In non-cooperative games, players cannot form alliances, as in war.

3. Perfect and imperfect information games

Games in which all players can see the other players’ moves and the payoffs are common knowledge are called perfect information games, like chess. In imperfect information games, some of the other players’ moves are hidden, as in card games.

4. Simultaneous and sequential games

In simultaneous games, players act at the same time, as in rock-paper-scissors. In sequential games, players take turns and each player is aware of the other players’ previous actions, as in most board games.

Turns out a lot of problems beyond economics and leisure games can be tackled using game theory. Artificial intelligence is one of them.

GAN: Generative Adversarial Networks

How does a machine learn to draw a human face if it has never seen one? A computer can store petabytes of photos, but when it comes to giving a bunch of pixels meaning and relating them to someone’s appearance, it has no idea. This problem has been tackled by various generative models.

A GAN, or Generative Adversarial Network, is a machine learning framework in which two neural networks compete with each other in a game. Their aim is to generate realistic data, most famously images.

The two networks that compete with each other are the Generative network that generates candidates and the Discriminative network that evaluates them. The contest operates in terms of data distributions.

Generative models take some features as inputs, examine their distributions, and try to understand how they were produced. Examples include Hidden Markov Models (HMMs) and Restricted Boltzmann Machines (RBMs).

Discriminative models instead take the input features and predict which class the sample belongs to. Support Vector Machines (SVMs) are an example.

In the game, our players (the two models) challenge each other. The first player creates fake samples to confuse the second. The second player tries to get better and better at telling real samples from fake ones. The game is repeated iteratively, and in each iteration the learning parameters are updated to reduce the overall loss.

This process continues until a Nash equilibrium is reached: the two players (models) have become so proficient at their tasks that neither can improve any further, which gives the required result.
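Here is a minimal sketch of that two-player training loop, assuming PyTorch. Instead of images it fits a toy one-dimensional distribution so it stays short; the network sizes, learning rates, and step count are illustrative choices, not tuned values.

```python
import torch
import torch.nn as nn

latent_dim, batch = 8, 64

# Player 1: maps random noise to a candidate sample.
generator = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
# Player 2: outputs the probability that a sample is real.
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 4 + 1.25 * torch.randn(batch, 1)           # samples from the "true" distribution N(4, 1.25)
    fake = generator(torch.randn(batch, latent_dim))  # the generator's candidates

    # Discriminator turn: label real samples 1 and fake samples 0.
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator turn: try to make the discriminator call the fakes real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Mean of generated samples; ideally it drifts toward 4 as the game approaches equilibrium.
print(generator(torch.randn(1000, latent_dim)).mean().item())
```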

A popular application of GANs is generating realistic images: the generator produces fake images while the discriminator learns to distinguish them from real ones.

MARL: Multi-Agent Reinforcement Learning

Reinforcement learning (RL) aims to make an agent (a model) learn through interaction with an environment, real or virtual. Classically, this is done by placing the agent in a stationary environment and having it learn a policy through a reward-punishment mechanism.

However, when multiple agents are placed in the same environment, this no longer works. Earlier, learning depended only on the interaction between the agent and the environment; now it also depends on the interactions with the other agents.

(Figure: multi-agent reinforcement learning agents playing tennis)

Let’s imagine a scenario.

There are AI-powered self-driving cars in a city. On its own, each car interacts with the environment perfectly. But when we want the cars to think as a group, in order to control the traffic flow, things get complicated. One car gets into conflict with another because the same route is the most convenient one for both of them.

This situation can be handled with game theory if we think of each car as a player and of a Nash equilibrium as the point of collaboration between the cars.
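As a toy illustration, the sketch below models the two cars as independent Q-learners (one simple multi-agent technique, not necessarily what a real traffic system would use) repeatedly choosing between a fast but congestible route and a slower one. The payoff numbers are made up; the point is that the learners tend to settle into one of the pure Nash equilibria, in which the cars split across the two routes.

```python
import random

# payoffs[(a, b)] = (reward for car 1, reward for car 2); illustrative values.
payoffs = {
    (0, 0): (-1, -1),                 # both pick the fast route and jam it
    (0, 1): (2, 1), (1, 0): (1, 2),   # they split: everyone keeps moving
    (1, 1): (0, 0),                   # both take the slow route
}

q1 = [0.0, 0.0]   # car 1's value estimate for each route
q2 = [0.0, 0.0]   # car 2's value estimate for each route
alpha, epsilon = 0.1, 0.2

def pick(q):
    """Epsilon-greedy route choice."""
    if random.random() < epsilon:
        return random.randrange(2)
    return 0 if q[0] >= q[1] else 1

for step in range(5000):
    a, b = pick(q1), pick(q2)
    r1, r2 = payoffs[(a, b)]
    q1[a] += alpha * (r1 - q1[a])          # each agent updates only its own estimate
    q2[b] += alpha * (r2 - q2[b])
    epsilon = max(0.01, epsilon * 0.999)   # explore less over time

print("car 1 prefers route", 0 if q1[0] >= q1[1] else 1)
print("car 2 prefers route", 0 if q2[0] >= q2[1] else 1)
```

Run a few times and the two learners usually end up preferring different routes, which is exactly one of the equilibrium outcomes described above.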

So even the multi-agent problem in RL can be tackled with game theory. Yay!

As you can see, game theory has a wide range of applications. Any problem involving two or more participants, where rewards are present and strategies can be analyzed, can be approached with game theory. It can be applied to a relationship dispute between a couple, to competition between companies in the same industry, to building a smart electric grid that schedules the use of appliances, to designing blockchains whose incentives keep participants honest, and, of course, to winning the Friday night poker game.

Written by Hritika Rathi, Editorial head of IEEE Circuits and Systems, VIT
