Abstract
Game theory is a field of applied mathematics that studies the strategic behavior of rational actors. In other words, game theory is a collection of analytical tools that can be used to make optimal choices in interactive decision-making problems. In mathematics and computer science, optimization is the selection of the best member of a set of alternatives for a specific purpose. Several optimization methods have been applied to many problems to minimize costs or maximize profits. From a particular point of view, game theory can in fact be regarded as a kind of optimization. In this paper, the combined use of game theory and optimization algorithms is reviewed and a new categorization is presented for the research that has been conducted in this area. In some of these combinations, game theory is used to improve the performance of optimization algorithms; in others, optimization methods help to solve game theory problems. Game theory and optimization algorithms are also used together to solve other problems.
1 Introduction
Game theory can be defined as the science of modeling and evaluating the behavior of decision-making systems. Game theory attempts to capture the mathematical behavior of a system in a strategic, game-based setting in which each individual's success depends on the choices of the others. In other words, one of the goals of game theory is to predict the possible outcomes of decision-making games [119]. Therefore, the concepts of game theory can be used in any environment where the roles and actions of multiple agents affect each other. The ultimate goal of game theory is to find the optimal solution(s) for the players. This theory, founded more than half a century ago, has been used to investigate problems in various sciences, including computer science [151]. Computer networks [3, 97, 102, 140], distributed computing [1, 2, 52, 57], data warehousing and mining [11, 66, 74, 154,155,156, 158,159,160, 180], cloud computing [7, 25, 46, 127, 169, 183], and decision support systems [8, 69, 157] are some of the most popular topics in computer science that can be modeled as game theory problems.
Optimization means searching for the values of the parameters of a function that minimize or maximize that function. The set of all admissible values of these parameters is called the set of feasible solutions, and the best member of the set of feasible solutions is called the optimal solution. Considering the importance of optimization in different research domains, several optimization methods have been presented so far. Optimization algorithms, which cover both maximization and minimization problems, have many applications in resource allocation, scheduling, decision making, etc. [135].
Game theory and optimization have a conceptual overlap. In other words, game theory can be seen as a kind of approximate optimization. In this paper, we review the combined use of game theory and optimization methods in optimization problems. This combination is possible in three different ways. In the first case, game theory is used to improve optimization algorithms [29, 84, 138]; in the second, optimization methods are used to solve game theory problems [96, 121, 179]; and finally, in the third case, game theory and optimization algorithms are used together to solve a problem [26, 82, 148].
The remaining parts of the paper are organized as follows: Sect. 2 describes the basic concepts of game theory and the existing types of games. In Sect. 3, optimization is defined and its algorithms are introduced. The classification of the combined use of game theory and optimization algorithms is presented in Sect. 4, where the related research of its different categories is reviewed and evaluated. Future directions for related research are presented in Sect. 5, and finally, the conclusion of the review is given in Sect. 6.
2 Concepts of Game Theory
In this section, we introduce the most important concepts of game theory and the main types of games. The basic concepts of game theory are as follows:
Game A game is a model of an interactive situation between entities or groups of entities.
Player Players are the basic entities of a game. A player is a decision maker of the game and can be a person, a group, a concept, and so on.
State States are the possible situations of the game that the players can be in.
Action The set of all possible moves that the players of the game can perform in different states is called the action set.
Payoff The score that is awarded to the action of a player in a game (or in one step of the game) is called the payoff, which can be positive or negative.
Strategy A player's strategy is the complete plan of actions that the player can take in each state of the game. Each player has a number of strategies that can be selected based on the conditions and the objective(s).
Equilibrium An equilibrium is a point of the game at which no player tends to change its strategy, since any unilateral change would worsen that player's payoff.
2.1 Types of Strategies
Strategies in game theory are divided into two categories: pure strategies and mixed strategies. A pure strategy completely determines how a player plays: it defines the action that the player must take in any state. A player's strategy set is the set of pure strategies available to that player. Each player has an optimal strategy (or strategies), and the value of the game is the same for all players [119]. Several studies have used pure strategies in their game theory-based methods [20, 42, 47, 70, 76, 77, 130, 173, 181, 184].
A mixed strategy is an assignment of a probability to each pure strategy. This strategy allows a player to randomly choose a pure strategy. Because probabilities are continuous, there are infinitely many mixed strategies for a player, even if the set of pure strategies is finite. Obviously, a pure strategy can be considered a special case of a mixed strategy in which one specific pure strategy is selected with probability 1 and the others with probability 0. Each player can choose a pure or a mixed strategy for decision making: pure strategies are selected for deterministic actions and mixed strategies for probabilistic actions [119]. Mixed strategies have been used in different game theory-based methods in the literature [33, 34, 39, 64, 92, 129, 132, 145].
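These ideas can be sketched in a few lines of Python: a mixed strategy is just a probability vector over pure strategies, and a pure strategy is the degenerate case with probability 1. The payoff matrix below is hypothetical.

```python
import random

# Hypothetical 2x2 game: payoffs[i][j] is the row player's payoff when the
# row player picks pure strategy i and the column player picks j.
payoffs = [[3, 0],
           [1, 2]]

def expected_payoff(row_mix, col_mix, matrix):
    """Expected payoff of the row player when both players use mixed strategies."""
    return sum(row_mix[i] * col_mix[j] * matrix[i][j]
               for i in range(len(matrix))
               for j in range(len(matrix[0])))

def sample_pure_strategy(mix):
    """Randomly draw one pure strategy according to a mixed strategy."""
    return random.choices(range(len(mix)), weights=mix, k=1)[0]

# A pure strategy is the special case where one probability equals 1.
print(expected_payoff([1.0, 0.0], [1.0, 0.0], payoffs))  # -> 3.0
print(expected_payoff([0.5, 0.5], [0.5, 0.5], payoffs))  # -> 1.5
```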
2.2 Nash Equilibrium
The point at which none of the players of the game tends to change is the equilibrium point. A Nash equilibrium is an action profile in which, assuming the actions of the other players are held fixed, any change makes the deviating player's condition worse. In other words, it is an action profile that gives no player a motivation to change, assuming constant actions for the other players [55, 168]. The Nash equilibrium concept is used to analyze the results of the strategic interaction of several decision makers. In other words, the Nash equilibrium is a way to predict the outcome of interdependent decisions made simultaneously by several players.
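The definition can be checked mechanically: a profile is a Nash equilibrium if no unilateral deviation improves the deviating player's payoff. A minimal sketch for a two-player game, using the well-known Prisoner's Dilemma payoffs:

```python
from itertools import product

# Prisoner's Dilemma: payoffs[(a1, a2)] = (u1, u2) for actions C(ooperate), D(efect).
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(profile):
    """Nash equilibrium: no player gains by a unilateral deviation,
    holding the other player's action fixed."""
    a1, a2 = profile
    u1, u2 = payoffs[profile]
    if any(payoffs[(d, a2)][0] > u1 for d in actions):   # player 1 deviates
        return False
    if any(payoffs[(a1, d)][1] > u2 for d in actions):   # player 2 deviates
        return False
    return True

print([p for p in product(actions, repeat=2) if is_nash(p)])  # -> [('D', 'D')]
```

As expected, mutual defection is the unique Nash equilibrium, even though mutual cooperation would give both players a higher payoff.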
2.3 Types of Games
Games can be categorized based on various features in game theory. To solve a problem using game theory, the appropriate type of game should be selected depending on the nature of the problem. In this subsection, the most important categories of game types are presented.
2.3.1 Static Games Versus Dynamic Games
A game in which the players make their decisions simultaneously is a static game. In this case, no player knows anything about the others' decisions when making a decision [182]. Static games have been used in some studies in this area [40, 54, 59, 60, 86, 103, 123]. A dynamic game is a game in which players do not necessarily perform their actions simultaneously. Here, players take the decisions already made by other players into account in their next decisions. A static game is a special case of a dynamic game: in a dynamic game, some actions can be performed simultaneously and others at different times [182]. Various studies have used dynamic games for optimization [5, 12, 22, 24, 61, 98, 99, 107, 171, 196].
2.3.2 Zero-Sum Games Versus Non Zero-Sum Games
Zero-sum games are games whose total value remains constant during the game and neither decreases nor increases. In these games, one player's profit is associated with an equal loss of another player. A zero-sum game is a win-lose game, and there is always a loser for each winner. As an example, in a two-player game, the zero-sum condition is defined as Eq. (1):

$$u_{1} \left( {s_{1} ,s_{2} } \right) + u_{2} \left( {s_{1} ,s_{2} } \right) = 0$$
(1)

in which ui and si are the utility and strategy of player i, respectively. Obviously, Eq. (1) means that \(u_{1} = - u_{2}\) [182]. There are some examples of using zero-sum games for different applications in the literature [27, 31, 41, 50, 89, 108, 113, 126, 139].
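The zero-sum condition of Eq. (1) translates directly into a check over the two players' payoff matrices (the values below are hypothetical):

```python
# Hypothetical payoff matrices of a two-player game: A[i][j] is player 1's
# payoff and B[i][j] is player 2's payoff for the strategy pair (i, j).
A = [[1, -2], [-3, 4]]
B = [[-1, 2], [3, -4]]

def is_zero_sum(A, B):
    """Eq. (1): the game is zero-sum iff u1 + u2 = 0 for every strategy pair."""
    return all(a + b == 0 for ra, rb in zip(A, B) for a, b in zip(ra, rb))

print(is_zero_sum(A, B))  # -> True
```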
In contrast, in non-zero-sum games there are strategies that are beneficial to all players. In other words, the total sum of the profits and losses of the involved players can be less than or more than zero [182]. Non-zero-sum games have also been used for various applications [15, 81, 89, 90, 131, 137, 149, 150, 194].
2.3.3 Cooperative Games Versus Non Cooperative Games
If players compete individually in a game and each tries to raise its own profit alone, then it is a non-cooperative game. Non-cooperative game theory focuses on the strategies of individual players and on predicting the strategies they will select [18, 19, 35, 44, 88, 105, 115, 162, 178, 187, 189, 193].
On the other hand, if different groups of players form coalitions and try to take advantage of their coalition, then the game is cooperative. For example, if the set of players is N = {1, 2, 3, 4, 5, 6}, one possible coalition structure is {{1, 2, 3}, {4, 5}, {6}} [18, 28, 36, 51, 59, 63, 95, 146, 174, 176].
2.4 Evolutionary Game Theory
Evolutionary game theory is the application of game theory to evolving populations in biology. In this theory, instead of directly analyzing the properties of a game, a population of players with different strategies is simulated and natural-selection methods are used for its evolution. Evolutionary game theory provides a framework of concepts, strategies, and analyses that can model Darwinian competition [152]. There are two different approaches in evolutionary game theory. The first uses the concept of the evolutionarily stable strategy as the basic tool of analysis. The second builds an explicit model of the process by which strategies change in the population and investigates the characteristics of its evolutionary process. One of the most important differences between evolutionary game theory and classic game theory is the assumption about the rationality of players. In classic game theory, it is always assumed that players are rational and intelligent, whereas evolutionary game theory allows non-intelligent players to enter the game. Furthermore, when classic game theory is not able to predict the equilibrium (e.g., in situations with multiple equilibria), evolutionary game theory can determine the equilibrium point or points based on its own equilibrium-solving process [152]. Recently, evolutionary game theory has been used in several applications [9, 71, 100, 112, 144, 172, 197].
3 Optimization
Optimization is the ability to find the best solution among existing solutions. Optimization techniques are exploited in the design and maintenance of many engineering, economic, and even social systems to minimize costs or maximize profits. Due to the widespread use of optimization in different sciences, this topic has grown considerably; it is studied in mathematics, management, industry, computer science, and many other branches of science, and different titles, such as mathematical programming, are used to refer to it. To solve an optimization problem, it must first be modeled. Modeling means describing the problem with variables and mathematical relations so that it simulates the optimization problem. To develop the mathematical model of an optimization problem, the following four components should be fully characterized:
- 1.
The set of optimization variables x1, x2, …, xn.
- 2.
The objective function f(x), which applies to the optimization variables and returns a real value. This objective function should be minimized or maximized (optimized) during the optimization process.
- 3.
A set of equality or inequality constraints that must hold on the optimization variables.
- 4.
The domain sets D1, D2, …, Dn as the domains of the optimization variables x1, x2, …, xn.
Most optimization problems can be described completely by specifying the four components mentioned above. It is also possible that an optimization problem has no constraints or that the domains of its optimization variables are the entire space [135].
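The four components can be made concrete with a toy model (all names and values below are hypothetical): discrete domains, one objective function, one inequality constraint, and a brute-force search over the feasible set.

```python
from itertools import product

# The four components of a toy optimization model:
domains = [range(0, 6), range(0, 6)]      # 4. domains D1, D2 of x1, x2

def objective(x1, x2):                    # 2. objective function f(x)
    return (x1 - 2) ** 2 + (x2 - 3) ** 2

def feasible(x1, x2):                     # 3. a single inequality constraint
    return x1 + x2 <= 5

# 1. the optimization variables x1, x2 range over the domains; a brute-force
#    search over the feasible set yields the optimal solution.
best = min((x for x in product(*domains) if feasible(*x)),
           key=lambda x: objective(*x))
print(best)  # -> (2, 3)
```

Brute force only works for tiny discrete domains, of course; the point is merely to show how the four components fit together in one model.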
There is no comprehensive categorization of optimization methods; they can be categorized from different perspectives. For example, optimization methods can be categorized based on the number of their objective functions [104], on whether they are probabilistic or deterministic [124], on whether they are nature-inspired [16], etc.
In general, optimization methods can be divided into two groups: deterministic (exact, classic) methods [109] and approximate (random, modern) methods [122]. Deterministic methods are capable of finding the optimal solution precisely. Since a large amount of computation is required to optimize NP-hard problems, deterministic methods are not appropriate for such problems, and their execution times grow exponentially with the problem dimensions. Random methods are able to find proper (close-to-optimal) solutions for NP-hard problems in a short time [122]. Each of these categories can also be subdivided into other sub-categories. The most popular classic and modern optimization methods are listed in Table 1.
There are also other algorithms that can be grouped into classic and modern algorithms. Only the most important algorithms are outlined here.
3.1 Multi-Objective Optimization
Multi-objective optimization is one of the most common and widely used research areas among optimization topics. In multi-objective optimization, we deal with problems that have two or more objective functions, which are often in conflict. In this type of problem, all objective functions must be optimized simultaneously. Optimizing these functions can mean minimizing all of them, maximizing all of them, or a combination of minimization and maximization [161]. Such problems are increasingly being discussed in various branches of basic science, engineering, and economics, and thus appropriate methods for solving them are required. For modeling multi-objective optimization problems, we have the same components (optimization variables, constraints, and domains) as in single-objective problems, together with k objective functions f1(x), …, fk(x), which form the objective vector defined as Eq. (2):

$$F\left( x \right) = \left( {f_{1} \left( x \right),f_{2} \left( x \right), \ldots ,f_{k} \left( x \right)} \right)$$
(2)
By having several objective functions instead of one, the concept of optimality changes. This is because, in multi-objective optimization problems, instead of finding a globally optimal solution, it is important to find an appropriate compromise. For this purpose, Pareto presented another concept of optimality. A solution X ∈ Ω is Pareto optimal (with respect to the space Ω) if and only if there is no X′ ∈ Ω such that \(v = F\left( {X^{\prime } } \right) = \left( {f_{1} \left( {X^{\prime } } \right), \ldots ,f_{k} \left( {X^{\prime } } \right)} \right)\) dominates \(u = F\left( X \right) = \left( {f_{1} \left( X \right), \ldots ,f_{k} \left( X \right)} \right)\) [32].
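Pareto dominance and the resulting Pareto-optimal set translate directly into code; the sketch below filters a hypothetical set of objective vectors (all objectives minimized) down to its non-dominated points:

```python
def dominates(v, u):
    """v dominates u (for minimization): v is no worse in every objective
    and strictly better in at least one."""
    return (all(vi <= ui for vi, ui in zip(v, u))
            and any(vi < ui for vi, ui in zip(v, u)))

def pareto_front(points):
    """Keep the points that are not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical objective vectors (f1, f2), both to be minimized.
points = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(pareto_front(points))  # -> [(1, 5), (2, 3), (4, 1)]
```

Here (3, 4) is dominated by (2, 3) and (5, 5) by (1, 5), so neither is Pareto optimal; the remaining three points form the compromise set among which a decision maker must choose.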
4 Combined Use of Game Theory and Optimization
Game theory and optimization are conceptually close to each other: game theory can be considered a kind of random optimization. In many studies, game theory and optimization have been used together, or one of them has been used in the solution of the other. In this section, we present a new general categorization of the research in this area, depicted in Fig. 1. We use four factors to categorize the combination of optimization methods and game theory: whether a classic or a modern optimization method is used, the type of optimization method, whether the optimization is single-objective or multi-objective, and the type of game theory. In the following, we examine these four factors and present the levels of our classification.
First level: classic or modern optimization
In the first level, the use of a classic or a modern optimization method is considered. Accordingly, this level is divided into two categories: classic and modern.
Second level: Type of optimization method
The second level of classification is based on the type of optimization method used. The most popular types of classic and modern optimization methods were introduced in Table 1. Based on the research in this area, classic methods at this level are divided into seven categories: linear programming, non-linear programming, quadratic programming, integer programming, mixed integer linear programming, stochastic programming, and convex optimization. Likewise, modern algorithms can be divided into nine categories: genetic algorithm, imperialist competitive algorithm, ant colony optimization, particle swarm optimization, simulated annealing, shuffled frog leaping, differential evolution, tabu search, and coral reefs optimization.
Third level: Single-objective or multi-objective
In the third level, whether the optimization problem is single-objective or multi-objective is considered.
Fourth level: Type of game theory
At this level, the type of game theory is considered. Game theory types include classic game theory and evolutionary game theory, and classic game theory can be further divided into non-cooperative and cooperative game theory. At this level, research is thus divided into at most two main categories: classic and evolutionary game theory.
4.1 Classic Optimization Methods
In this section, we review the combined use of game theory and classic optimization methods. Since classic methods provide precise optimization and game theory is a kind of random optimization, classic methods have usually been used to solve game theory problems. In some cases, they have been used together to solve a specific problem.
4.1.1 Linear Programming
In an optimization problem, if the objective function(s) and all constraint functions are linear, the problem is a linear programming problem. In fact, linear programming is a method that finds the minimum or maximum value of a linear function over a convex polytope. This polytope is the graphical representation of the inequality constraints on the variables. Under these conditions, linear programming achieves a provably optimal result [30]. There are many algorithms for solving linear programming problems, including the simplex algorithm, the criss-cross algorithm, Karmarkar's algorithm, affine scaling, etc.
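As a toy illustration, a two-variable linear program can even be solved by enumerating the vertices of its feasible polygon, since an LP optimum always lies at a vertex of the polytope. The sketch below (hypothetical objective and constraints) does exactly that; real problems would of course use a solver such as the simplex method.

```python
from itertools import combinations

# maximize 3*x + 4*y  s.t.  x + 2*y <= 14,  x - y <= 2,  x >= 0,  y >= 0
# Each constraint is stored as ((a1, a2), b), meaning a1*x + a2*y <= b.
constraints = [((1, 2), 14), ((1, -1), 2), ((-1, 0), 0), ((0, -1), 0)]

def intersect(c1, c2):
    """Intersection point of the two boundary lines, or None if parallel."""
    (a1, b1), (a2, b2) = c1, c2
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if det == 0:
        return None
    x = (b1 * a2[1] - b2 * a1[1]) / det
    y = (a1[0] * b2 - a2[0] * b1) / det
    return (x, y)

def feasible(p):
    return all(a[0] * p[0] + a[1] * p[1] <= b + 1e-9 for a, b in constraints)

# Candidate vertices: feasible intersections of constraint boundaries.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 4 * p[1])
print(best)  # -> (6.0, 4.0)
```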
A lot of research has been carried out on the combined use of game theory and linear programming [48, 96, 106]. In most of these studies, a game has been solved using linear programming; that is, the game's Nash equilibrium is calculated. Both single-objective and multi-objective optimization have been considered in these studies.
A matrix game has at least one solution. In general, a matrix game with a positive value can be modeled as a pair of dual linear programming problems. Hence, the simplex algorithm can find the solution(s) of a matrix game. The steps to solve a matrix game as a linear programming problem using the simplex algorithm are as follows [48].
- 1.
Create the matrix of the game.
- 2.
If the solution is completely obvious, take it as the solution of the problem.
- 3.
Check the dominance of the rows and columns. All dominated rows and columns should be eliminated in this step.
- 4.
Make sure the value of the game is positive. To do this, if the matrix has a negative entry, add to all matrix entries the value obtained by subtracting the largest negative entry from 1. Denote this added value (if any) by k.
- 5.
Suppose G is the m × n matrix of the game. Let e be the n-component row vector whose entries are all 1, and f the m-component column vector whose entries are all 1. Also consider the n-component row vector z and the m-component column vector x. Given these assumptions, create the following block matrix:

$$\left( {\begin{array}{*{20}c} G & f \\ e & 0 \\ \end{array} } \right)$$

- 6.
Run the simplex algorithm until all indicators are non-positive. Calculate the optimal solutions of the dual pair of linear programming problems as z0 and x0, and let t = z0f = ex0. We know that t > 0.
- 7.
The solution of the original matrix game can then be calculated using the following equations:

$$p^{0} = \frac{{z^{0} }}{t},\quad q^{0} = \frac{{x^{0} }}{t},\quad {\text{and}}\quad v = \frac{1}{t} - k$$

Eliminated rows and columns are restored by inserting zero components at the corresponding positions of the strategy vectors. The type of game considered in this method is non-cooperative.
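The steps above can be sketched with an off-the-shelf LP solver standing in for the hand-run simplex tableau. Assuming SciPy and NumPy are available, the following solves the dual LP pair for a small matrix game (matching pennies); the shift k and the normalization by t follow steps 4 and 7.

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(G):
    """Return (row strategy p, column strategy q, game value v)."""
    G = np.asarray(G, dtype=float)
    k = 1 - G.min() if G.min() <= 0 else 0.0   # step 4: make the value positive
    A = G + k
    m, n = A.shape
    # Row player: minimize sum(x) subject to A^T x >= 1, x >= 0.
    row = linprog(c=np.ones(m), A_ub=-A.T, b_ub=-np.ones(n))
    # Column player (dual): maximize sum(z) subject to A z <= 1, z >= 0.
    col = linprog(c=-np.ones(n), A_ub=A, b_ub=np.ones(m))
    t = row.x.sum()                            # step 6: t = z0 f = e x0 > 0
    p, q = row.x / t, col.x / t                # step 7: normalize to strategies
    return p, q, 1 / t - k                     # step 7: v = 1/t - k

p, q, v = solve_matrix_game([[1, -1], [-1, 1]])   # matching pennies
print(p, q, v)  # strategies ~ [0.5, 0.5] each, value ~ 0
```

For matching pennies the method recovers the familiar answer: each player mixes uniformly and the value of the game is zero.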
In [96], an approach for solving cooperative games is presented. In this approach, a real-valued variable is considered for each coalition, whose value equals the total benefit of the members of the coalition. The appropriate coalitions are selected in the game based on this value. In this way, the game is modeled as a dual linear programming problem.
In [106], an iterative approach is presented to solve the multi-objective linear programming problem based on the principles of game theory. This method is provided for optimizing the utilities of several decision makers at the same time and under the same constraints. The decision makers are the players of the game and form coalitions to solve the problem, so the game is a cooperative one. The proposed method consists of a basic phase that is repeated until a satisfactory level is reached. This phase involves solving a single linear programming problem, which is easy to accomplish. The unique solutions generated in different iterations serve as objective indicators, which are used as guidelines for determining the strategy of the next iteration; this continues until the game reaches equilibrium. The main advantage of this method is that it can be easily extended to multi-objective non-linear programming problems. It is therefore appropriate for solving a range of optimization problems. Lucchetti [96] and Matejas and Peric [106] show that linear programming can easily solve a game and find its point(s) of equilibrium.
4.1.2 Non-Linear Programming
If at least one of the objective functions or constraint functions of an optimization problem is non-linear, the problem is called a non-linear programming problem. Some of the most popular algorithms for solving non-linear programming problems are the Lagrangian and gradient descent methods [14].
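As a sketch of the gradient descent method mentioned above, the following minimizes a simple unconstrained non-linear function; the function and the step size are illustrative choices.

```python
# Minimize f(x, y) = (x - 1)^2 + 2*(y + 2)^2 by gradient descent.
def grad(x, y):
    """Gradient of f: (df/dx, df/dy)."""
    return (2 * (x - 1), 4 * (y + 2))

x, y, lr = 0.0, 0.0, 0.1   # starting point and (fixed) learning rate
for _ in range(200):
    gx, gy = grad(x, y)
    x, y = x - lr * gx, y - lr * gy   # step against the gradient

print(round(x, 4), round(y, 4))  # -> 1.0 -2.0
```

With a fixed step size this converges geometrically here because the function is a simple quadratic; general non-linear problems require line searches or adaptive step sizes.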
In [72], a multi-frequency offset estimation method was proposed by combining multi-objective optimization and evolutionary game theory. Multi-frequency offset estimation is a multi-objective search, which is presented as a non-linear programming problem with several objective functions. The different frequency offsets were considered as players in the evolutionary game so as to turn the problem into finding an equilibrium point of the game. Thus, evolutionary game theory can solve a multi-objective non-linear programming problem. The main advantage of the mentioned method is that it avoids the situation in which an objective function has several optimal solutions.
4.1.3 Quadratic Programming
If the objective function is quadratic and all of the constraint functions are linear, the problem is a quadratic programming problem. The quadratic programming problem is a special case of the non-linear programming problem that is closest to linear programming. Traditional quadratic programming models require certain parameters with constant values. This model is widely used to solve real-world problems [120]. Research on the combined use of game theory and quadratic programming has not made much progress, and very few studies have been presented in this regard. One study on the combination of game theory and quadratic programming is presented in [101]. In this research, a method is proposed to find the Nash equilibrium of a game using quadratic programming. The game considered in this study had two players and was a non-zero-sum game. This research proved that a point is a Nash equilibrium point of the game if and only if it is the solution of a quadratic programming problem. For zero-sum games, the quadratic programming problem reduces to the game-related dual linear programming problem. The main drawback of this method is that it only considers two-player games with pure strategies and is thus not suitable for a wide range of games. Furthermore, the strategy sets must be finite in this method.
4.1.4 Integer Programming
Integer programming is an optimization problem in which the domains of the optimization variables (Di) are the set of integers (Z). This type of problem has many applications in real problems, since many variables are integers in the real world [166]. Given the type of objective functions in optimization problems, integer programming can be combined with other classic optimization methods, creating new optimization categories that can be effective in various applications.
In [190], the basis of the optimization is integer programming. In this study, a framework and an algorithm for supply chain design and operations were presented. This framework is based on mixed integer bilevel programming and is modeled as a Stackelberg game with one leader and one follower; the game is non-cooperative. The followers can make discrete decisions in this game, which is the main advantage of this framework. It is also flexible and can be extended with different types of optimization and different types of objective functions.
4.1.5 Mixed Integer Linear Programming
Mixed integer linear programming is a special type of optimization similar to integer linear programming (with a linear objective function), except that the domains of some of the variables may be continuous (non-integer). Since most integer programming problems are linear, all methods and tools available in this area can be used. Moreover, the use of integer variables adds many modeling capabilities that are not available in linear programming [175].
In [198], a scheduling mechanism for managing home and neighborhood area demands in smart grids is provided based on mixed integer linear programming and game theory. This mechanism is presented in two modes, centralized and distributed. In the centralized mode, the scheduling mechanism is first described as a linear programming problem; then, given the specific constraints of the problem, mixed integer linear programming is used to formulate it, and the formulated problem is solved with the branch and bound method. In the distributed mode, the problem is modeled as a distributed mixed integer linear programming problem and then solved with the help of a cooperative game. In this game, the players are the users connected to the neighborhood/local area network, and the set of strategies consists of daily consumption schedules. In this study, mixed integer linear programming has been well utilized, but game theory has been used in only one mode.
In [191], a method for designing a supply chain with high tactical decision-making power was provided through different (multi-objective) optimization metrics. In the proposed method, the optimization problem was formulated as a mixed integer linear programming problem and integrated with game theory. The optimization problem in this research is solved by the ε-constraint method. The approach is intended for both cooperative and competitive environments. Game theory is used here as a decision-support tool to deal with uncertainty in the competitive scenario. In this method, the proposed game is non-zero-sum and the players are the supply chains. The experimental results of this research showed that the proposed method could make stronger decisions.
4.1.6 Stochastic Programming
Stochastic programming is a kind of mathematical programming in which the data include random elements. In fact, in stochastic programming, some of the optimization variables or parameters are random. In the real world, the parameters of optimization problems are generally uncertain, and stochastic programming can serve as a framework for modeling uncertain optimization problems. Stochastic programming uses the probability distributions of the uncertain parameters for modeling and solving optimization problems [147].
In [29], a model has been proposed to create optimal bidding strategies for trading wind power in a competitive electricity market. The purpose of this model is to maximize the total profit of the producers in both the energy market and the bilateral reserve market. The proposed model was created using stochastic programming and game theory. The optimization variables were wind power output, hourly locational marginal price, real-time price, and bilateral market clearing price, which were uncertain (random) variables. The reserve price was determined using game theory; in fact, game theory was used to address the uncertainty in the behavior of other market participants. In the game, the energy producers were considered as players. The proposed model was an appropriate practical model and could reduce the risk of losing money.
4.1.7 Convex Optimization
Convex optimization is the problem of minimizing a convex function over a convex set. A convex function is a continuous function such that, for any two points on its graph, the line segment connecting them never lies below the graph. In an optimization problem, when the objective function(s), the constraint functions, and the domains of the optimization variables are all convex, we have a convex optimization problem. The main advantage of this type of problem is that every local optimum is also a global optimum, so any optimization algorithm that finds a local optimum actually finds a global one [17].
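The local-equals-global property can be exploited directly: for a one-dimensional convex function, a simple ternary search over an interval is guaranteed to close in on the global minimum. A minimal sketch with an illustrative convex function:

```python
def ternary_min(f, lo, hi, iters=100):
    """Minimize a convex function f on [lo, hi] by ternary search:
    each iteration discards a third of the interval that cannot
    contain the minimum, which is valid precisely because f is convex."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

x_star = ternary_min(lambda x: (x - 3) ** 2 + 1, -10, 10)
print(round(x_star, 6))  # -> 3.0
```

On a non-convex function the same procedure could converge to a merely local minimum, which is exactly the distinction the text draws.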
No significant research has been carried out on combining convex optimization and game theory. In [143], the combined use of variational inequalities, convex optimization, and game theory has been investigated. This research consists of two parts. In the first part, the basic concepts of variational inequalities, convex optimization, and game theory were presented for use in the communication and signal processing communities. In the second part, the results of the first part were used to solve some problems of wireless ad hoc networks. The main emphasis of this research is on variational inequalities and convex optimization; game theory is used only marginally.
A summary of the research combining game theory and classic optimization methods can be seen in Table 2. The number of citations of each study in Table 2 is counted up to November 2018.
4.2 Modern Optimization Methods
In this section, we review the combined use of game theory and modern optimization methods. Given that modern optimization methods are approximate (randomized) and game theory can itself be viewed as a form of approximate optimization, the combined use of the two is expected to have many applications. It is also expected that each approach can improve the problem-solving capability of the other.
4.2.1 Genetic Algorithm
Genetic algorithm is a search technique for finding approximate solutions to optimization and search problems. It is a special type of evolutionary algorithm that uses biology-inspired techniques such as inheritance and mutation. It is an iterative algorithm in which most operations are performed as random processes [67]. In the genetic algorithm, several candidate solutions are first generated for the problem, randomly or algorithmically. This set of solutions is called the initial population, and each solution is called a chromosome. Then, using the genetic operators, the best chromosomes are selected, recombined with each other (crossover), and mutated to produce offspring. The current population is then merged with the new population resulting from crossover and mutation. This process is repeated until the termination condition of the algorithm is met and the final result is obtained.
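The loop described above can be sketched as follows. This is a minimal, hypothetical real-valued variant with tournament selection, arithmetic crossover, and Gaussian mutation; the operators and parameter values are illustrative, not the exact operators of any cited study.

```python
import random

def genetic_algorithm(fitness, bounds, pop_size=30, gens=100, mut_rate=0.2):
    """Minimize `fitness` over a real interval with a toy genetic algorithm."""
    lo, hi = bounds
    # initial population: random candidate solutions (chromosomes)
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        def select():
            # binary tournament selection: the fitter of two random candidates
            a, b = random.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            alpha = random.random()
            child = alpha * p1 + (1 - alpha) * p2   # arithmetic crossover
            if random.random() < mut_rate:
                child += random.gauss(0, 0.5)       # Gaussian mutation
            children.append(min(max(child, lo), hi))
        pop = children                               # next generation
    return min(pop, key=fitness)
```

Calling `genetic_algorithm(lambda x: (x - 5) ** 2, (0, 10))` drives the population toward the minimizer x = 5.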
One of the modern optimization algorithms that has been used in many applications in combination with game theory is the genetic algorithm. These applications cover both single-objective and multi-objective optimization, and both classic and evolutionary game theory [78, 93, 118, 125, 138, 148].
In [138], genetic algorithm learning is modeled as an evolutionary game. The study showed that genetic algorithm learning is an evolutionary game in addition to being an evolutionary process. This view of genetic algorithm learning is very suitable for economic applications. The research states three propositions: each genetic algorithm is a dynamic game; each genetic algorithm is an evolutionary game; and, in the genetic algorithm learning process, the population tends to move toward a Nash equilibrium point. Based on these three propositions, the main result was presented.
In [148], a combined approach was introduced to solve generation expansion planning in the pool market. This approach combines game theory and the genetic algorithm. The proposed algorithm is divided into a master level and a slave level. At the master level, a modified game theory was suggested to evaluate the conflicts among generation companies from the viewpoint of the independent system operator. At the slave level, an improved genetic algorithm was used to find the best solution for each manufacturing company's investment decision. In this application, game theory and the genetic algorithm were used separately in two phases to solve a particular problem. Although the two are not modeled together, the optimization results of this method are still satisfactory.
Konak et al. [78] proposed a new solution for reliable server assignment under attacks. The problem was modeled as a bi-level optimization problem in which the network designer acts as the leader and the attacker acts as the follower. A game theory-based genetic algorithm was proposed to solve this bi-level optimization problem. The network designer and the attacker are the two players of the game, interacting through a payoff matrix. The genetic algorithm maintains two populations, each associated with one player. Since the problem is NP-hard, the use of modern optimization algorithms seems necessary. The performance of the proposed combined approach is much better than that of the pure genetic algorithm. Another advantage of the proposed method is that it can easily be adapted to other two-level optimization problems with a similar structure.
Several multi-objective problems have also been solved with the help of the genetic algorithm and game theory. For example, in [125], a game theory-based genetic algorithm was presented for solving the multi-objective DDM-nozzle optimization problem in distributed environments. In this method, the string representing the solution is divided into two parts, each part being treated as a player. Each player is responsible for optimizing its own part; that is, each player is associated with one objective function, and a separate population is maintained for each player. The game is non-cooperative. This method performed better than the cooperative-game approaches proposed in this area, and the use of game theory made parallelization easier. In [118], a method for designing water distribution networks was proposed using game theory and the genetic algorithm. This is a multi-objective problem: one objective function aims to minimize cost and the other to maximize pressure. The method exploits the features of the genetic algorithm well; in each step of the algorithm, a large number of solutions are updated and improved, and the best solution is selected by game theory. The method performed well compared with other available methods. In [93], a model was proposed to solve the land-use spatial optimization problem, a complex decision-making problem involving several conflicting objective functions. In that research, suitability and compactness of land use were considered as the objectives. The proposed model combines game theory and the genetic algorithm: the genetic algorithm is iterated to optimize the spatial layout of each land-use type. A grid structure was used to represent the chromosomes, since it can easily represent the area under study (ground units).
4.2.2 Imperialist Competitive Algorithm
The imperialist competitive algorithm is a method in the field of evolutionary computing that seeks optimal solutions to various problems. It solves optimization problems by mathematically modeling the process of socio-political evolution. Each member of the population of this algorithm is called a country. Countries are divided into colonies and imperialists. Each imperialist, depending on its power, controls a number of colonial countries and forms an empire; in the division of colonies among the imperialists, each imperialist receives a number of colonies proportional to its strength. The policies of assimilation, imperialistic competition, and revolution form the core of this algorithm. The algorithm continues until a convergence condition is fulfilled or the total number of iterations is completed. Finally, all empires but one collapse, and the remaining countries come under the control of this single empire [10].
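A heavily simplified sketch of the assimilation step is shown below. It keeps the strongest countries as imperialists and moves each colony toward its imperialist, omitting the revolution and imperialistic-competition policies for brevity; the colony assignment and all parameter values are illustrative assumptions.

```python
import random

def ica(cost, bounds, n_countries=40, n_imperialists=4, iters=200, beta=1.6):
    """Toy imperialist competitive algorithm: assimilation step only."""
    lo, hi = bounds
    countries = [random.uniform(lo, hi) for _ in range(n_countries)]
    for _ in range(iters):
        countries.sort(key=cost)
        imperialists = countries[:n_imperialists]   # strongest countries
        colonies = countries[n_imperialists:]
        new_colonies = []
        for i, col in enumerate(colonies):
            # crude round-robin colony division (real ICA divides by power)
            imp = imperialists[i % n_imperialists]
            # assimilation: colony moves a random step toward its imperialist
            step = beta * random.random() * (imp - col)
            new_colonies.append(min(max(col + step, lo), hi))
        countries = imperialists + new_colonies
    return min(countries, key=cost)
```

Because the imperialists are re-selected from the whole population each iteration, the best solution found is never lost.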
Given the structure of the imperialist competitive algorithm, it can be seen as a good option for use in game theory. In [133], the imperialist competitive algorithm was used as a tool for finding the equilibrium point of game-theoretic formulations of multi-objective problems. The game is considered non-cooperative. In this method, each objective function is treated as a player, and a separate imperialist competitive algorithm is executed for each player to optimize its solution. The initial choices of the players in the first iteration are random. The game continues until all objective functions are optimized and a Nash equilibrium point is reached. The proposed method performs better than the genetic algorithm and is stronger in terms of convergence. Its main advantage, however, is that it can be used to solve various games and even many multi-objective optimization problems.
4.2.3 Ant Colony Optimization
Ant colony optimization is inspired by the behavior of natural ants, which live together in large colonies, and is one of the most efficient algorithms for solving optimization problems. The ant colony algorithm is a clear example of swarm intelligence, in which agents with limited individual abilities cooperate to achieve remarkable collective results. This algorithm is used to solve a wide range of optimization problems [37].
One of the most important and most interesting behaviors of ants is their foraging behavior, and in particular how they find the shortest route between a food source and the colony; this behavior exhibits a kind of swarm intelligence. In the real world, ants first wander randomly in search of food. When an ant finds food, it returns to the colony while laying down pheromone. When other ants encounter this trail, they sometimes stop wandering and follow it. If they then reach the food, they return home depositing pheromone of their own, in other words reinforcing the existing trail. Pheromone evaporates over time. When an ant finds a good path from the colony to the food, other ants are more likely to follow that path, and through continual reinforcement of that path and evaporation of the others, eventually all ants follow the same route. The goal of the ant colony algorithm is to imitate this behavior with artificial ants.
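The pheromone feedback loop described above can be illustrated with a toy two-path model; this is an illustrative sketch of the deposit-and-evaporate mechanism, not the full ant colony optimization algorithm of [37], and all parameter values are assumptions.

```python
import random

def two_path_aco(lengths, ants=20, iters=100, rho=0.1):
    """Toy ant system: ants repeatedly choose one of two paths in proportion
    to pheromone; pheromone evaporates at rate rho and each ant deposits
    1/length on the path it took, so shorter paths get reinforced faster."""
    tau = [1.0, 1.0]                      # initial pheromone on each path
    for _ in range(iters):
        deposits = [0.0, 0.0]
        for _ in range(ants):
            p0 = tau[0] / (tau[0] + tau[1])   # probability of taking path 0
            path = 0 if random.random() < p0 else 1
            deposits[path] += 1.0 / lengths[path]
        # evaporation plus this iteration's deposits
        tau = [(1 - rho) * t + d for t, d in zip(tau, deposits)]
    return tau
```

With path lengths 1 and 2, the positive feedback concentrates pheromone on the shorter path, mirroring how real ant trails converge.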
In [128], a method to optimize social information in opportunistic networks was proposed based on the ant colony algorithm and game theory. Data loss and disconnection are common problems in opportunistic networks, and social information plays a large role in mitigating them. In this research, a new routing method was proposed that determines the relative importance of network nodes to address this problem. Game theory is used to compute the Shapley value of each node, which quantifies the node's contribution to the network. The shortest paths are calculated with Dijkstra's algorithm. This information, together with a random path, forms the input of the ant colony optimization algorithm, which deposits and updates pheromone to track the deviation of social information between nodes. The advantage of the proposed method is an increased delivery probability and reduced delay compared with other similar methods in this field.
In [82], a method for rapidly broadcasting essential messages and increasing network lifetime in wireless body area networks was presented using the ant colony algorithm and game theory. In this method, the ant colony algorithm is used to find the shortest path for sending the required message through the sensor nodes, while game theory is used to increase network lifetime. The game is modeled as a static game with mixed strategies. In this application, game theory and the ant colony algorithm operate completely separately in two different domains, so their features and advantages are not used concurrently. In the real world, this approach can serve as a tool for diagnosing a disease and remotely treating a patient using telecommunication technology.
4.2.4 Particle Swarm Optimization
Particle swarm optimization is a population-based global optimization technique inspired by the social behavior of birds searching for food. Owing to its simple search mechanism, computational efficiency, and easy implementation, it has been widely used in many optimization problems. Each particle (a candidate solution) is defined by a velocity vector and a position vector in the search space. At each iteration, the position of each particle is updated according to its current velocity vector, the best position found by that particle so far, and the best position found by the best particle in the swarm. In other words, each particle continues to search the problem space guided by the currently optimal particle. The algorithm thus optimizes the problem by updating the particles over iterations [75].
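The update rule described above can be sketched as follows for a one-dimensional problem; the inertia weight and acceleration coefficients are typical illustrative values, not prescribed by the cited works.

```python
import random

def pso(f, bounds, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Toy one-dimensional particle swarm optimization minimizing f."""
    lo, hi = bounds
    x = [random.uniform(lo, hi) for _ in range(n)]   # positions
    v = [0.0] * n                                    # velocities
    pbest = x[:]                  # best position found by each particle
    gbest = min(x, key=f)         # best position found by the swarm
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            # velocity: inertia + pull toward personal best + pull toward
            # the swarm's global best
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] = min(max(x[i] + v[i], lo), hi)
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
        gbest = min(pbest, key=f)
    return gbest
```

The two attraction terms are exactly the "own memory" and "best neighbor" influences that several of the game-theoretic PSO variants discussed below treat as player strategies.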
The particle swarm optimization algorithm has been used alongside game theory in many studies, covering the optimization of both single-objective and multi-objective problems. Both classic and evolutionary game theory have been used in conjunction with the particle swarm optimization algorithm [26, 84, 91, 114, 116, 179].
An optimization algorithm is presented in [91] that combines particle swarm optimization and evolutionary game theory. In this algorithm, the particles are considered players in an evolutionary game who seek the highest profit by choosing the right strategies. The strategy space is the same as the search space of the particle swarm algorithm, and the fitness function serves as the payoff function. A multi-start technique is also introduced to overcome premature convergence. The most important advantage of the proposed algorithm is its good convergence; it also performs better than the standard particle swarm algorithm.
A new hybrid optimization algorithm has also been proposed in [84] using an enhanced particle swarm algorithm and evolutionary game theory. In this algorithm, the swarm of particles constitutes the population. Each particle is considered a player with three strategies: tracking its own memory, tracking the best neighbor, or moving by inertia alone. The payoff is the average performance obtained by following a strategy. The study also proposes a second algorithm to overcome potential premature convergence; it combines the first algorithm with the standard particle swarm algorithm and consists of two types of particles, one belonging to the first algorithm and the other to the standard particle swarm algorithm. The main advantage of this research is that, in contrast to previous studies whose results had only been investigated empirically, the efficiency of the presented methods was proved theoretically.
Wang et al. [179] investigated the effect of using the particle swarm algorithm to update strategies for the evolution of cooperation in prisoner’s dilemma and Snowdrift games on a square lattice. In this research, a model has been introduced for updating strategies in these games in an evolutionary manner, using the particle swarm algorithm. Experimental results showed that using the strategy update mechanism based on the particle swarm algorithm could promote collaboration in both games. This research provides an overview of the evolution of collaboration in these two games, which can be useful in understanding collaboration in natural and social systems.
In the previous studies mentioned, evolutionary game theory was used along with the particle swarm algorithm. In [114], a method for solving generation expansion planning in the pool market was presented using classic game theory and the particle swarm algorithm. The proposed algorithm is divided into a master level and a slave level. At the master level, a modified game theory was suggested to evaluate the conflicts among generating companies from the viewpoint of the independent system operator. At the slave level, the particle swarm algorithm was used to find the best solution for each manufacturing company's investment decision. In this method, game theory and the particle swarm algorithm are used separately to solve the problem. The authors of this study also proposed a combined method of game theory and the genetic algorithm, which was examined in the genetic algorithm section. Comparing the results of the two methods shows that combining game theory with the particle swarm algorithm yields more benefit than combining it with the genetic algorithm.
In [116], a combined framework of game theory and the particle swarm algorithm has been proposed to solve the generation expansion planning problem in the power market. This framework consists of three levels. The first level is a PSO-based optimizer that produces optimal generation expansion plans. The second level is a coordinator that is responsible for clearing market prices, maintaining open dynamicity, and maintaining system security. The third level is a demand block that considers the interactions of the demand entities. Game theory has been used as a decision support tool in competitive situations. For this purpose, a Stackelberg game has been introduced with three types of players: the leader, suppliers, and demand entities. In this research, game theory is used to model the problem, and the particle swarm algorithm is the part of the framework responsible for optimization.
In [26], a new method for collaborative content publishing in multimedia clouds has been presented. This method is based on the particle swarm optimization algorithm, game theory, and evolutionary game theory. The optimal numbers of service users and non-service users in each location-based multimedia user group are determined by an improved particle swarm algorithm, and a game-based matching technique is proposed to match service users with non-service users. To avoid blind and selfish selection of service users by non-service users, an evolutionary game is used. A Stackelberg game is also used to obtain optimal pricing policies and bandwidth allocations for service users. The proposed method reduces the average content publishing time and the total cost of multimedia cloud users. The particle swarm algorithm, game theory, and evolutionary game theory are used separately and for different purposes in this research.
4.2.5 Simulated Annealing
The simulated annealing algorithm is a simple and effective algorithm for solving optimization problems. Owing to its ability to handle different problems and, especially, to approach the global minimum, it is a very important modern optimization algorithm. Simulated annealing starts the optimization process with an initial solution and then moves to neighboring solutions in an iterative loop. If the neighboring solution is better than the current one, the algorithm accepts it as the current solution; otherwise, it accepts it as the current solution with probability exp(-ΔE/T), where ΔE is the difference between the objective values of the neighboring and current solutions, and T is a parameter called the temperature. At each temperature, several iterations are executed, and then the temperature is slowly reduced. In the initial steps the temperature is very high, so worse solutions are more likely to be accepted. With the gradual reduction of temperature, worse solutions become less likely to be accepted in the final steps, and the algorithm converges to a good solution [164].
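The acceptance rule exp(-ΔE/T) and the cooling schedule can be sketched as follows; the neighborhood move, initial temperature, and cooling rate are illustrative assumptions rather than values from any cited study.

```python
import math
import random

def simulated_annealing(f, x0, t0=10.0, cooling=0.95, iters_per_t=20, t_min=1e-3):
    """Toy simulated annealing minimizing f from starting point x0."""
    x = x0
    t = t0
    while t > t_min:
        for _ in range(iters_per_t):
            neighbor = x + random.uniform(-1, 1)   # random neighboring move
            delta = f(neighbor) - f(x)
            # better moves are always accepted; worse moves are accepted
            # with probability exp(-delta / t), which shrinks as t cools
            if delta < 0 or random.random() < math.exp(-delta / t):
                x = neighbor
        t *= cooling   # slow geometric cooling schedule
    return x
```

Early on, the high temperature lets the search escape local minima; as t falls, the algorithm behaves more and more like greedy hill-climbing.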
In [121], researchers used the simulated annealing algorithm to find the equilibrium points of a game, presenting a method for solving games with simulated annealing. The study introduces a special version of the algorithm called fuzzy adaptive simulated annealing. Since simulated annealing is not population-based, a multi-start version is used, which includes a pre-processing step; this step lets the algorithm find promising starting points that help it avoid being trapped in local minima. Unlike many modern algorithms, which may not find all the equilibrium points of a problem that has more than one, the proposed method finds all of them. Thus, a remarkable advantage of combining game theory with simulated annealing is that it can find all the equilibrium points (if any) of a game. Another advantage is that the method does not assume a particular type of game; the simplest form of game is considered, so the method is applicable to any type of game.
4.2.6 Shuffled Frog Leaping Algorithm
The shuffled frog leaping algorithm is an evolutionary, population-based algorithm. It is fast and has strong global search capability, combining deterministic and random methods: the deterministic part allows the algorithm to exchange information efficiently, while the random part guarantees its flexibility and robustness. The algorithm starts by randomly generating a group of frogs (a set of solutions), which is then divided into several subgroups. Each subgroup performs a local search independently and in its own way; the algorithm uses a memetic method for this local search. Frogs within a subgroup can influence the other frogs of the same subgroup, and thus the frogs of the subgroup evolve; to move toward a good objective, the influence of good frogs is increased and that of bad frogs is decreased. After a number of memetic evolution steps, the subgroups are combined (shuffled). This shuffling allows the memes to be optimized globally and creates new subgroups of frogs. Local and global search alternate until the convergence condition is satisfied. Rapid convergence is one of the most prominent advantages of the shuffled frog leaping algorithm [43].
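A simplified one-dimensional sketch of the memeplex mechanics is given below. For brevity it lets the worst frog's jump overshoot the memeplex best slightly (canonical SFLA caps the step at the distance to the best and falls back to the global best before a random reset), and all parameters are illustrative assumptions.

```python
import random

def sfla(f, bounds, n_frogs=30, n_memeplexes=3, local_steps=10, shuffles=20):
    """Toy shuffled frog leaping algorithm minimizing f."""
    lo, hi = bounds
    frogs = [random.uniform(lo, hi) for _ in range(n_frogs)]
    for _ in range(shuffles):
        frogs.sort(key=f)
        # deal the sorted frogs into memeplexes round-robin
        memeplexes = [frogs[i::n_memeplexes] for i in range(n_memeplexes)]
        for m in memeplexes:
            for _ in range(local_steps):
                m.sort(key=f)
                best, worst = m[0], m[-1]
                # worst frog jumps toward (and possibly slightly past)
                # the memeplex best
                candidate = worst + random.uniform(0, 1.5) * (best - worst)
                candidate = min(max(candidate, lo), hi)
                if f(candidate) < f(worst):
                    m[-1] = candidate
                else:
                    m[-1] = random.uniform(lo, hi)   # random reset
        # shuffle: merge the memeplexes back into one population
        frogs = [frog for m in memeplexes for frog in m]
    return min(frogs, key=f)
```

The shuffle step is what spreads information found in one memeplex to the others, combining local and global search as described above.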
A combined method of game theory and the shuffled frog leaping algorithm for bandwidth scheduling in networked learning control systems has been presented in [188]. In the study, the resource allocation problem is modeled as a non-cooperative, fair game, and it is proved that the game has a unique equilibrium point. An improved shuffled frog leaping algorithm is proposed that combines local search with information exchange between network groups. The advantage of the proposed method is its very high convergence rate.
4.2.7 Differential Evolution
Differential evolution is a stochastic, population-based optimization algorithm. It was introduced to overcome the main drawback of the genetic algorithm, namely the lack of local search. The main difference between the genetic algorithm and differential evolution is the selection operator: in the genetic algorithm, the chance of a solution being chosen as a parent depends on its fitness, whereas in differential evolution all solutions have an equal chance of being chosen, regardless of fitness. When a new solution is generated using a self-adjusting mutation operator and the crossover operator, it is compared with the previous one and replaces it if it is better. In this algorithm, unlike others, the mutation operator is applied before the crossover operator when creating the new generation. Differential evolution does not use a particular probability distribution for mutation; instead, the length of the mutation step is derived from the distance between current members. The initial population is usually created with a uniform distribution. At each step of the algorithm the members approach each other, and this convergence leads to an optimal solution [163].
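The difference-based mutation and greedy one-to-one selection described above can be sketched as follows (a DE/rand/1-style scheme in one dimension; the F and CR values are common illustrative defaults, not taken from any cited study).

```python
import random

def differential_evolution(f, bounds, pop_size=20, gens=100, F=0.8, CR=0.9):
    """Toy one-dimensional differential evolution minimizing f."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: the step length comes from the distance between
            # current members, with no fixed probability distribution
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = a + F * (b - c)
            # crossover (degenerate one-dimensional case: take the mutant
            # with probability CR, otherwise keep the current member)
            trial = mutant if random.random() < CR else pop[i]
            trial = min(max(trial, lo), hi)
            # greedy selection: the trial replaces the member only if better
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)
```

Note that every member gets the same chance to be a "parent" regardless of fitness, which is exactly the selection difference from the genetic algorithm noted above.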
In [49], a method is proposed for designing a nano-CMOS voltage-controlled oscillator, a multi-objective problem. The proposed method combines three algorithms: particle swarm optimization, the differential evolution algorithm, and an improved differential evolution algorithm developed with evolutionary game theory. In the third algorithm, evolutionary game theory serves as a self-adjusting mechanism inside the differential evolution algorithm. In terms of solution quality, the proposed method outperformed the standard differential evolution algorithm. The cost of finding these better solutions was execution time: the proposed method requires about twice the execution time of the standard differential evolution algorithm.
A combined method of game theory and the differential evolution algorithm has been presented in [21] to solve the multi-objective trajectory optimization problem of space maneuver vehicles. A modified game theory-based approach built on an adaptive differential evolution algorithm is proposed to solve the problem. The differential evolution algorithm uses two types of crossover, binomial and exponential, together with an adaptive strategy for updating the mutation operator. To increase the quality of the solutions, control logic is also included in the proposed framework. The method has better convergence speed than other methods in this area, and the quality of its solutions is appropriate.
4.2.8 Tabu Search
Tabu search is an optimization algorithm that can solve various optimization problems, especially graph-based and combinatorial problems. It works much like local search algorithms, except that it uses a tabu list to avoid getting stuck at a local optimum. Tabu search starts from an initial solution, and in each step the algorithm chooses the best solution among the neighbors. If this solution is not in the tabu list, the algorithm moves to it; otherwise, the algorithm checks the aspiration criterion: if the neighboring solution is the best solution found so far, the algorithm moves to it even though it is in the tabu list. After the move, the tabu list is updated, and the move that led to the neighboring solution is placed in the tabu list to prevent the algorithm from returning to the previous solution and creating a cycle. When a new move is added to the tabu list, moves that have been on the list long enough are removed. The number of iterations a move remains on the tabu list is determined by a parameter called the tabu tenure. Moving from the current solution to a neighboring solution continues until the termination condition is met [53].
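A minimal sketch of this loop is given below. For simplicity, the tabu list stores recently visited solutions rather than moves (attribute-based variants store moves or attributes instead), and the aspiration criterion overrides tabu status whenever a neighbor beats the best solution found so far; the function names and parameters are illustrative.

```python
from collections import deque

def tabu_search(f, x0, neighbors, iters=100, tenure=5):
    """Toy tabu search minimizing f; `neighbors(x)` lists x's neighbors."""
    current = best = x0
    tabu = deque(maxlen=tenure)   # fixed-tenure tabu list: oldest entry
                                  # is evicted automatically when full
    for _ in range(iters):
        def allowed(x):
            # aspiration criterion: a tabu solution is still allowed if it
            # would improve on the best solution found so far
            return x not in tabu or f(x) < f(best)
        feasible = [x for x in neighbors(current) if allowed(x)]
        if not feasible:
            break
        current = min(feasible, key=f)   # best admissible neighbor,
                                         # even if worse than `current`
        tabu.append(current)
        if f(current) < f(best):
            best = current
    return best
```

For example, minimizing f(x) = (x - 6)^2 over the integers with neighbors x - 1 and x + 1 walks steadily to 6, and the tabu list then prevents the search from oscillating back and forth around it.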
A method for finding Nash equilibria in games has been presented in [165], using tabu search and best-response dynamics. The games considered have n players and pure strategies. Solving games with these features normally requires building a payoff matrix, which becomes difficult and computationally expensive when the number of players is large. The proposed method does not require the full payoff matrix and only computes the payoffs relevant to the search. Tabu search helps the method avoid cycling. With explicit memory, a single global tabu list is used; with attribute-based memory, each player has an individual tabu list. The method stops either when a Nash equilibrium is found or when the allowed number of iterations is exceeded. The proposed method is efficient in execution time because it does not require the full payoff matrix. Its main drawback is that it only considers games with pure strategies and ignores games with mixed strategies.
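To make the cost of a full payoff matrix concrete, the sketch below enumerates every pure strategy profile of a two-player game and keeps those in which each player's strategy is a best response to the other's. This exhaustive enumeration over the whole matrix is precisely what the method of [165] avoids for games with many players; the function name and game are illustrative.

```python
import itertools

def best_response_nash(payoffs):
    """Exhaustively find all pure Nash equilibria of a 2-player game.
    payoffs[r][c] = (row player's payoff, column player's payoff)."""
    n_rows, n_cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for r, c in itertools.product(range(n_rows), range(n_cols)):
        u1, u2 = payoffs[r][c]
        # the row strategy is a best response if no row deviation pays more
        row_best = all(payoffs[rr][c][0] <= u1 for rr in range(n_rows))
        # likewise for the column player's deviations
        col_best = all(payoffs[r][cc][1] <= u2 for cc in range(n_cols))
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria
```

For the classic prisoner's dilemma payoff matrix [[(3, 3), (0, 5)], [(5, 0), (1, 1)]], mutual defection (1, 1) is the unique pure Nash equilibrium found.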
4.2.9 Coral Reefs Optimization Algorithm
The coral reefs algorithm is a nature-inspired evolutionary optimization method based on the simulation of coral reef processes. It begins with an initial population of corals placed in a square grid (the reef), whose cells are initially empty. The reproduction of corals is simulated using sexual and asexual reproduction operators. To join the reef, a coral larva tries to find a place on it; settlement depends on the strength of the larva (how good the solution is) or on its luck in finding an empty cell. At the end of each step of the algorithm, a depredation process removes inappropriate solutions, ensuring that empty space remains on the reef for subsequent steps. The steps continue until the termination condition is met [142].
In [45], a new method has been presented for managing elastic resource allocation in cloud environments using coral reefs optimization and game theory. This is a multi-objective optimization that maximizes demand satisfaction while minimizing cost and resource consumption. The coral reefs algorithm is used to model cloud elasticity in a cloud data center, while game theory optimizes the resource reallocation scheme with respect to the cloud provider's goals and actually provides the solution of the problem. The players of this game are virtual machines. The advantage of the proposed method is its convergence speed in very large clouds; it also scales well and can handle a large number of virtual machines.
A summary of the studies that combine game theory and modern optimization methods can be seen in Table 3. The citation count of each study in Table 3 is current as of November 2018.
4.3 Hybrid Methods
In addition to the studies discussed in the previous sub-sections, there are other works that combine game theory with optimization methods but cannot be placed in the proposed structure, as they do not directly use a single classic or modern optimization algorithm. Some researchers have combined game theory with several classic or modern optimization methods at once. The most important studies in this extra category are [4, 65, 83, 87, 134, 185, 186, 195]; we review them briefly in this sub-section.
The research presented in [134] is foundational work in the field of multi-objective optimization using game theory, on which many of the classic methods are based. In [87], a combined game-based algorithm is proposed to optimize the integrated process planning and scheduling (IPPS) problem in production systems. The problem was first modeled using game theory and then solved using a hybrid algorithm that combines the genetic algorithm and tabu search. In [83], a new multi-objective optimization method based on evolutionary game theory was proposed to solve the dynamic weapon target assignment problem. In this method, a solution is first obtained for each objective function; each solution is then treated as a player in a game, and the winner of the game becomes the final solution to the problem. In [185], a dynamic method was developed for determining the placement of virtual machines in data centers using evolutionary game theory, and the authors proved that the proposed method can provide optimal solutions. Xiao et al. [186] presented a new method, based on gene expression programming, for multi-disciplinary design optimization in non-cooperative environments. The authors of [4] reviewed recent game-theoretic methods for optimizing energy consumption and increasing network lifetime in wireless sensor networks, providing an overview of the applications of game theory to the various optimization problems of these networks. Zhao et al. [195] proposed a new approach, based on an algebraic formulation and optimal control, for optimizing a class of dynamic games with random inputs; the games are treated as networked evolutionary games formulated as a Markov process. In [65], a sequential game was developed to solve the multi-objective clustering problem. The main objective of this method is to optimize inherently conflicting clustering goals, and it is well suited to medium-sized datasets.
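One recurring idea in the game-based multi-objective methods reviewed above is to treat each objective as a player that controls part of the decision variables and minimizes its own objective while the other players' variables are held fixed; iterating best responses then drives the profile toward a Nash equilibrium that serves as a compromise solution. The following is a minimal sketch of this pattern on a hypothetical two-objective problem, using a crude grid search as the inner optimizer; it is an illustration of the general scheme, not the algorithm of any specific cited study.

```python
# Illustrative sketch (hypothetical objectives, not from any cited paper):
# a Nash-game decomposition of a two-objective problem. Player 1 controls x
# and minimizes f1; player 2 controls y and minimizes f2. Iterated best
# responses converge to a Nash equilibrium.

def f1(x, y):  # player 1's objective
    return (x - 1) ** 2 + 0.5 * (x - y) ** 2

def f2(x, y):  # player 2's objective
    return (y + 1) ** 2 + 0.5 * (x - y) ** 2

GRID = [i / 100.0 for i in range(-300, 301)]  # shared search grid on [-3, 3]

def nash_by_best_response(iters=50):
    x, y = 0.0, 0.0  # initial strategies
    for _ in range(iters):
        x = min(GRID, key=lambda v: f1(v, y))  # player 1's best response to y
        y = min(GRID, key=lambda v: f2(x, v))  # player 2's best response to x
    return x, y

x, y = nash_by_best_response()  # converges to the equilibrium (0.5, -0.5)
```

For these quadratic objectives the best-response map is a contraction, so the iteration settles on the unique Nash equilibrium after a handful of rounds; in the reviewed methods the grid search would be replaced by a genetic algorithm, tabu search, or another metaheuristic as the inner optimizer.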
A summary of the studies that combine game theory and hybrid optimization methods can be seen in Table 4. The citation counts in Table 4 are current as of November 2018.
5 Directions for Future Research
Considering the works reviewed in the previous section, it is clear that the combination of game theory and optimization methods is a common and popular research area. By combining game theory with various optimization algorithms, new optimization methods can be presented and existing ones can be improved. This hybrid use can also be exploited in various applications. Below are some promising research directions for the combined use of game theory and optimization methods.
5.1 Presenting New Methods Based on Proposed Categorization
The proposed categorization shows that there are many categories in which no research has yet been conducted. For example, there are no game theory-based multi-objective integer programming approaches; evolutionary games could be used to improve the efficiency of this category of optimization. Similar gaps exist for ant colony algorithms in the modern optimization branch: no combined method for multi-objective optimization using ant colony algorithms and classic or evolutionary game theory has been presented in the literature. Furthermore, the proposed categorization can be completed by introducing combined game theory-based optimizations that use other optimization methods, such as the Grey Wolf Optimizer [111], the Moth-Flame Optimization algorithm [110], and the Artificial Bee Colony algorithm [73].
5.2 More Attention to Single-Solution-Based Modern Algorithms
Considering the modern methods reviewed here, it can be concluded that most research in this field is based on population-based algorithms such as the genetic algorithm, ant colony optimization, particle swarm optimization, and the imperialist competitive algorithm. Work on single-solution-based algorithms is limited to only two: simulated annealing and tabu search. Other such algorithms, including GRASP [153], variable neighborhood search [62], guided local search [177], and iterated local search [94], could also be combined with game theory.
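To illustrate how a single-solution metaheuristic can be combined with game theory, the sketch below uses simulated annealing to search for a pure-strategy Nash equilibrium of a small bimatrix game by minimizing total regret (the payoff the players could jointly gain by deviating; zero regret characterizes a pure-strategy equilibrium). The payoff matrices and annealing schedule are hypothetical, and the sketch is in the spirit of, but not identical to, the annealing- and tabu-based equilibrium-search methods in the literature.

```python
# Illustrative sketch (hypothetical game and schedule): simulated annealing
# over pure strategy profiles (i, j) of a 2x2 bimatrix game, minimizing the
# total regret. Zero regret means (i, j) is a pure-strategy Nash equilibrium.
import math
import random

A = [[3, 0], [5, 1]]  # row player's payoffs (a Prisoner's Dilemma)
B = [[3, 5], [0, 1]]  # column player's payoffs

def regret(i, j):
    r1 = max(A[k][j] for k in range(2)) - A[i][j]  # row player's regret
    r2 = max(B[i][k] for k in range(2)) - B[i][j]  # column player's regret
    return r1 + r2

def anneal(steps=200, temp=2.0, cooling=0.97, seed=0):
    rng = random.Random(seed)
    i, j = rng.randrange(2), rng.randrange(2)      # random initial profile
    for _ in range(steps):
        ni, nj = rng.randrange(2), rng.randrange(2)  # random neighbor profile
        delta = regret(ni, nj) - regret(i, j)
        # accept improvements always; accept worse moves with Boltzmann prob.
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            i, j = ni, nj
        temp *= cooling
    return (i, j), regret(i, j)

profile, r = anneal()  # settles on the mutual-defection equilibrium (1, 1)
```

As the temperature falls, the search locks onto the zero-regret profile; the same objective could equally be driven by tabu search or another single-solution method, and the formulation extends directly to larger strategy spaces.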
5.3 Use of Combined Methods in Other Applications
In many of the reviewed studies, combined methods based on game theory and optimization algorithms are presented to solve one particular problem. Beyond the problems already addressed, combined methods can be applied in various other areas, such as data mining, decision support systems, and computer networks.
6 Conclusion
Game theory is a set of analytical tools that can be used to model strategic situations, and its main objective is to find an optimal solution for the players. Optimization is the process of finding the best (optimal) solution among existing solutions; in the simplest case, the goal of an optimization problem is to maximize or minimize a real function. Optimization methods can be deterministic or stochastic, and from this perspective game theory can act as a form of stochastic optimization. Due to the conceptual proximity of game theory and optimization, many studies have combined them to solve various problems, and in some studies one has been used to improve the other. In this article, we have reviewed the studies that combine game theory and optimization algorithms and have presented a new categorization for them. The proposed categorization is based on four factors: classic or modern optimization, the type of optimization method, single- or multi-objective optimization, and the type of game theory. Using this categorization, methods that combine game theory and optimization algorithms can be better studied, and thus new research can be conducted in this area.
References
Abraham I, Alvisi L, Halpern JY (2011) Distributed computing meets game theory: combining insights from two fields. Acm Sigact News 42(2):69–76
Abraham I, Dolev D, Gonen R, Halpern J (2006) Distributed computing meets game theory: robust mechanisms for rational secret sharing and multiparty computation. In Proceedings of the twenty-fifth annual ACM symposium on principles of distributed computing. ACM, pp 53–62
Agah A, Das SK, Basu K (2004) A game theory based approach for security in wireless sensor networks. In 2004 IEEE international conference on performance, computing, and communications. IEEE, pp 259–263
AlSkaif T, Zapata MG, Bellalta B (2015) Game theory for energy efficiency in wireless sensor networks: latest trends. J Netw Comput Appl 54:33–61
Anand V, Gupta V (2016) Markov pricing equilibrium in a prosumer-aggregator dynamic game. In: American control conference (ACC), 2016. IEEE, pp 4120–4125
Annamdas KK, Rao SS (2009) Multi-objective optimization of engineering systems using game theory and particle swarm optimization. Eng Optim 41(8):737–752
Armbrust M, Fox A, Griffith R, Joseph AD, Katz R, Konwinski A, Zaharia M (2010) A view of cloud computing. Commun ACM 53(4):50–58
Arnott D, Pervan G (2005) A critical analysis of decision support systems research. J Inf Technol 20(2):67–87
Asako Y, Okuda T (2017) Guiding the economy toward the target inflation rate: an evolutionary game theory approach (No. 17-E-03). In: Institute for Monetary and Economic Studies, Bank of Japan
Atashpaz-Gargari E, Lucas C (2007) Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In: IEEE congress on evolutionary computation, 2007. CEC 2007. IEEE, pp 4661–4667
Azgomi H, Sohrabi MK (2018) A game theory based framework for materialized view selection in data warehouses. Eng Appl Artif Intell 71:126–137
Babonneau FLF, Haurie A, Vielle M (2014) A stochastic dynamic game of the 2050 European energy roadmap with CCS. In: 14th IAEE European energy conference (No. EPFL-TALK-202722)
Baykasoğlu A, Özbel BK (2016) Cooperative interval game theory and the grey Shapley value approach for solving the maximum flow problem. In: LM-SCM 2016 XIV. International logistics and supply chain congress, p 53
Bazaraa MS, Sherali HD, Shetty CM (2013) Nonlinear programming: theory and algorithms. Wiley, Hoboken
Bensoussan A, Siu CC, Yam SCP, Yang H (2014) A class of non-zero-sum stochastic differential investment and reinsurance games. Automatica 50(8):2025–2037
Binitha S, Sathya SS (2012) A survey of bio inspired optimization algorithms. Int J Soft Comput Eng 2(2):137–151
Boyd S, Vandenberghe L (2004) Convex optimization. Cambridge University Press, Cambridge
Brandenburger A (2007) Cooperative game theory. Teaching Materials at. Teaching Materials at New York University, New York
Cano L, Capone A, Carello G, Cesana M, Passacantando M (2016) A non-cooperative game approach for RAN and spectrum sharing in mobile radio networks. In Proceedings of 22th European wireless conference European wireless 2016. VDE, pp 1–6
Cardaliaguet P, Jimenez C, Quincampoix M (2014) Pure and random strategies in differential game with incomplete informations. J Dyn Games 1(3):363–375
Chai R, Savvaris A, Tsourdos A, Chai S (2017) Multi-objective trajectory optimization of space manoeuvre vehicle using adaptive differential evolution and modified game theory. Acta Astronaut 136:273–280
Chander P (2017) Subgame-perfect cooperative agreements in a dynamic game of climate change. J Environ Econ Manag 84:173–188
Chen H, Li Y, Jiang Y, Ma Y, Vucetic B (2015) Distributed power splitting for SWIPT in relay interference channels using game theory. IEEE Trans Wireless Commun 14(1):410–420
Chen H, Liu X, Xu H, Wang C (2016) A cloud service broker based on dynamic game theory for bilateral SLA negotiation in cloud environment. Int J Grid Distrib Comput 9(9):251–268
Chen X (2015) Decentralized computation offloading game for mobile cloud computing. IEEE Trans Parallel Distrib Syst 26(4):974–983
Chunlin L, Yanpei L, Youlong L, Min Z (2017) Collaborative content dissemination based on game theory in multimedia cloud. Knowl Based Syst 124:1–15
Czumaj A, Fasoulakis M, Jurdziński M (2017) Zero-sum game techniques for approximate nash equilibria. In: Proceedings of the 16th conference on autonomous agents and multiagent systems. International Foundation for Autonomous Agents and Multiagent Systems, pp 1514–1516.
Dabbagh SR, Sheikh-El-Eslami MK (2015) Risk-based profit allocation to DERs integrated with a virtual power plant using cooperative Game theory. Electr Power Syst Res 121:368–378
Dai T, Qiao W (2013) Trading wind power in a competitive electricity market using stochastic programing and game theory. IEEE Trans Sustain Energy 4(3):805–815
Dantzig G (2016) Linear programming and extensions. Princeton University Press, Princeton
Daskalakis C, Deckelbaum A, Kim A (2015) Near-optimal no-regret algorithms for zero-sum games. Games Econ Behav 92:327–348
Deb K (2014) Multi-objective optimization. In: Burke E, Kendall G (eds) Search methodologies. Springer, Boston, pp 403–449
Demartsev V, Ilany A, Barocas A, Ziv EB, Schnitzer I, Koren L, Geffen E (2016) A mixed strategy of counter-singing behavior in male rock hyrax vocal competitions. Behav Ecol Sociobiol 70(12):2185–2193
Ding H, Zhang Y, Ren Y, Shi B, Choo KKR (2016) Heterogeneous investment in spatial public goods game with mixed strategy. Soft Comput 22(4):1287–1294
Dong J, Smith DB, Hanlen LW (2016) Socially optimal coexistence of wireless body area networks enabled by a non-cooperative game. ACM Trans Sensor Netw (TOSN) 12(4):26
Dong M, Lan T, Zhong L (2014) Rethink energy accounting with cooperative game theory. In: Proceedings of the 20th annual international conference on Mobile computing and networking. ACM, pp 531–542
Dorigo M, Birattari M, Stutzle T (2006) Ant colony optimization. IEEE Comput Intell Mag 1(4):28–39
Doudou M, Barcelo-Ordinas JM, Djenouri D, Garcia-Vidal J, Bouabdallah A, Badache N (2016) Game theory framework for MAC parameter optimization in energy-delay constrained sensor networks. ACM Trans Sensor Netw (TOSN) 12(2):10
Du L, Han L, Li XY (2014) Distributed coordinated in-vehicle online routing using mixed-strategy congestion game. Transp Res Part B Methodol 67:1–17
Duan DW, Liu JY (2003) Buyer’s bidding strategies based on the static game theory with incomplete information. Electr Power Autom Equip 7(23):10–14
El Chamie M, Başar T (2016) A zero-sum game between the network designer and an adversary in consensus protocols. In: Thuijsman F, Wagener F (eds) Advances in dynamic and evolutionary games, vol 14. Springer, Berlin, pp 117–137
Enomoto H, Hachimori M, Nakamura S, Shigeno M, Tanaka Y, Tsugami M (2015) Pure-strategy Nash equilibria on competitive diffusion games (No. 1333). Discussion paper
Eusuff M, Lansey K, Pasha F (2006) Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization. Eng Optim 38(2):129–154
Fahimi M, Ghasemi A (2016) Joint spectrum load balancing and handoff management in cognitive radio networks: a non-cooperative game approach. Wireless Netw 22(4):1161–1180
Ficco M, Esposito C, Palmieri F, Castiglione A (2016) A coral-reefs and game theory-based approach for optimizing elastic cloud resource allocation. Future Gener Comput Syst 78:343
Furuncu E, Sogukpinar I (2015) Scalable risk assessment method for cloud computing using game theory (CCRAM). Comput Stand Interfaces 38:44–50
Gairing M, Kotsialou G, Skopalik A (2014) Approximate pure nash equilibria in social context congestion games. In: International conference on web and internet economics. Springer, Cham, pp 480–485
Gale D, Kuhn HW, Tucker AW (1951) Linear programming and the theory of games. Act Anal Prod Alloc 13:317–335
Ganesan T, Elamvazuthi I, Vasant P (2015) Multiobjective design optimization of a nano-CMOS voltage-controlled oscillator using game theoretic-differential evolution. Appl Soft Comput 32:293–299
Ganikhodjaev NN, Ganikhodjaev RN, Jamilov UU (2015) Quadratic stochastic operators and zero-sum game dynamics. Ergod Theory Dyn Syst 35(5):1443–1473
Gharehshiran ON, Attar A, Krishnamurthy V (2013) Collaborative sub-channel allocation in cognitive LTE femto-cells: a cooperative game-theoretic approach. IEEE Trans Commun 61(1):325–334
Ghosh P, Basu K, Das SK (2007) A game theory-based pricing strategy to support single/multiclass job allocation schemes for bandwidth-constrained distributed computing systems. IEEE Trans Parallel Distrib Syst 18(3):289
Glover F, Laguna M (2013) Tabu Search∗. In: Du DZ, Pardalos PM (eds) Handbook of combinatorial optimization. Springer, New York, pp 3261–3362
Gorbaneva OI, Ougolnitsky GA (2017) Static game theoretic models of coordination of private and public interests in economic systems. Contrib Game Theory Manag 10:79–93
Greiner D, Periaux J, Emperador JM, Galván B, Winter G (2017) Game theory based evolutionary algorithms: a review with nash applications in structural engineering optimization problems. Arch Comput Methods Eng 24:703–750
Grinold RC (1969) Continuous programming part one: linear objectives. J Math Anal Appl 28(1):32–51
Grosu D, Chronopoulos AT (2005) Noncooperative load balancing in distributed systems. J Parallel Distrib Comput 65(9):1022–1034
Guo J, Liu F, Lui J, Jin H (2016) Fair network bandwidth allocation in IaaS datacenters via a cooperative game approach. IEEE/ACM Trans Netw (ToN) 24(2):873–886
Guo J, Liu F, Zeng D, Lui JC, Jin H (2013) A cooperative game based allocation for sharing data center networks. In 2013 Proceedings IEEE INFOCOM. IEEE, pp 2139–2147
Guo L, Huang S, Zhuang J, Sadek AW (2013) Modeling parking behavior under uncertainty: a static game theoretic versus a sequential neo-additive capacity modeling approach. Netw Spat Econ 13(3):327–350
Han Z, Ma J, Si F, Ren W (2016) Entropy complexity and stability of a nonlinear dynamic game model with two delays. Entropy 18(9):317
Hansen P, Mladenović N (2014) Variable neighborhood search. In: Burke EK, Kendall G (eds) Search methodologies. Springer, Boston, pp 313–337
Hassan MM, Hossain MS, Sarkar AJ, Huh EN (2014) Cooperative game-based distributed resource allocation in horizontal dynamic cloud federation platform. Inf Syst Front 16(4):523–542
Healey D, Gore J (2014) Phenotypic heterogeneity implements a game theoretic mixed strategy in a clonal microbial population. bioRxiv, 011049
Heloulou I, Radjef MS, Kechadi MT (2017) Automatic multi-objective clustering based on game theory. Expert Syst Appl 67:32–48
Hemmatian F, Sohrabi MK (2018) A survey on classification techniques for opinion mining and sentiment analysis. Artif Intell Rev. https://doi.org/10.1007/s10462-017-9599-6
Holland JH (1992) Genetic algorithms. Sci Am 267(1):66–73
Horst R, Tuy H (2013) Global optimization: deterministic approaches. Springer, Berlin
Huang SY (1992) Intelligent decision support: handbook of applications and advances of the rough sets theory, vol 11. Springer, Berlin
Iimura T, Watanabe T (2016) Pure strategy equilibrium in finite weakly unilaterally competitive games. Int J Game Theory 45(3):719–729
Jiang C, Chen Y, Liu KR (2014) Graphical evolutionary game for information diffusion over social networks. IEEE J Sel Top Signal Process 8(4):524–536
Jin M, Lei X, Du J (2010) Evolutionary game theory in multi-objective optimization problem. Int J Comput Intell Syst 3(sup01):74–87
Karaboga D, Akay B (2009) A comparative study of artificial bee colony algorithm. Appl Math Comput 214(1):108–132
Kargupta H, Das K, Liu K (2007) Multi-party, privacy-preserving distributed data mining using a game theoretic framework. In: PKDD, vol 4702, pp 523–531
Kennedy J (2011) Particle swarm optimization. In: Sammut C, Webb GI (eds) Encyclopedia of machine learning. Springer, New York, pp 760–766
Khan MA, Zhang Y (2018) On pure-strategy equilibria in games with correlated information. Game Econ Behav 111:289–304
Khan MA, Zhang Y (2017) Existence of pure-strategy equilibria in Bayesian games: a sharpened necessity result. Int J Game Theory 46(1):167–183
Konak A, Kulturel-Konak S, Snyder LV (2015) A game-theoretic genetic algorithm for the reliable server assignment problem under attacks. Comput Ind Eng 85:73–85
Kontogiorgos P, Sarri E, Vrahatis MN, Papavassilopoulos GP (2014) An energy market stackelberg game solved with particle swarm optimization. In: NumAN2014 conference, p 2–5
Kusyk J, Urrea E, Sahin CS, Uyar MU, Bertoli G, Pizzo C (2010) Resilient node self-positioning methods for manets based on game theory and genetic algorithms. In: Military communications conference, 2010-MILCOM 2010. IEEE, pp 1399–1404
Laraki R, Solan E (2013) Equilibrium in two-player non-zero-sum Dynkin games in continuous time. Stoch Int J Prob Stoch Process 85(6):997–1014
Latha R, Vetrivelan P, Jagannath M (2017) Balancing emergency message dissemination and network lifetime in wireless body area network using ant colony optimization and Bayesian game formulation. Inf Med Unlocked 8:60
Leboucher C, Shin HS, Le Ménec S, Tsourdos A, Kotenkoff A, Siarry P, Chelouah R (2014) Novel evolutionary game based multi-objective optimisation for dynamic weapon target assignment. IFAC Proc Vol 47(3):3936–3941
Leboucher C, Shin HS, Siarry P, Le Ménec S, Chelouah R, Tsourdos A (2016) Convergence proof of an enhanced particle swarm optimisation method integrated with evolutionary game theory. Inf Sci 346:389–411
Leonard D, Van Long N (1992) Optimal control theory and static optimization in economics. Cambridge University Press, Cambridge
Li S, Liu Y, Deininger K (2013) How important are endogenous peer effects in group lending? Estimating a static game of incomplete information. J Appl Econom 28(5):864–882
Li X, Gao L, Li W (2012) Application of game theory based hybrid algorithm for multi-objective integrated process planning and scheduling. Expert Syst Appl 39(1):288–297
Lin CH, Chen SJ, Kuo CL, Chen JL (2014) Non-cooperative game model applied to an advanced metering infrastructure for non-technical loss screening in micro-distribution systems. IEEE Trans Smart Grid 5(5):2468–2469
Liu D, Li H, Wang D (2013) Neural-network-based zero-sum game for discrete-time nonlinear systems via iterative adaptive dynamic programming algorithm. Neurocomputing 110:92–100
Liu D, Li H, Wang D (2014) Online synchronous approximate optimal learning algorithm for multi-player non-zero-sum games with unknown dynamics. IEEE Trans Syst Man Cybern Syst 44(8):1015–1027
Liu WB, Wang XJ (2008) An evolutionary game based particle swarm optimization algorithm. J Comput Appl Math 214(1):30–35
Liu X, Zhang J, Zhu P (2017) Modeling cyber-physical attacks based on probabilistic colored Petri nets and mixed-strategy game theory. Int J Crit Infrastruct Prot 16:13–25
Liu Y, Tang W, He J, Liu Y, Ai T, Liu D (2015) A land-use spatial optimization model based on genetic optimization and game theory. Comput Environ Urban Syst 49:1–14
Lourenço HR, Martin OC, Stützle T (2003) Iterated local search. In: International series in operations research and management science, pp 321–354
Lozano S, Moreno P, Adenso-Díaz B, Algaba E (2013) Cooperative game theory approach to allocating benefits of horizontal cooperation. Eur J Oper Res 229(2):444–452
Lucchetti R (2006) Linear programming and game theory. In: Lucchetti R (ed) Convexity and Well-Posed Problems. Springer, New York, pp 117–137
Lye KW, Wing JM (2005) Game strategies in network security. Int J Inf Secur 4(1–2):71–86
Ma J, Guo Z (2014) The parameter basin and complex of dynamic game with estimation and two-stage consideration. Appl Math Comput 248:131–142
Ma J, Liu Y, Song L, Han Z (2015) Multiact dynamic game strategy for jamming attack in electricity market. IEEE Trans Smart Grid 6(5):2273–2282
Maciejewski W, Fu F, Hauert C (2014) Evolutionary game dynamics in populations with heterogenous structures. PLoS Comput Biol 10(4):e1003567
Mangasarian OL, Stone H (1964) Two-person nonzero-sum games and quadratic programming. J Math Anal Appl 9(3):348–355
Manshaei MH, Zhu Q, Alpcan T, Bacşar T, Hubaux JP (2013) Game theory meets network security and privacy. ACM Comput Surv (CSUR) 45(3):25
Marinatto L, Weber T (2000) A quantum approach to static games of complete information. Phys Lett A 272(5):291–303
Marler RT, Arora JS (2004) Survey of multi-objective optimization methods for engineering. Struct Multidiscip Optim 26(6):369–395
Marzband M, Javadi M, Domínguez-García JL, Moghaddam MM (2016) Non-cooperative game theory based energy management systems for energy district in the retail market considering DER uncertainties. IET Gener Transm Distrib 10(12):2999–3009
Matejaš J, Perić T (2014) A new iterative method for solving multi objective linear programming problem. Appl Math Comput 243:746–754
Mediwaththe CP, Stephens ER, Smith DB, Mahanti A (2016) A dynamic game for electricity load management in neighborhood area networks. IEEE Trans Smart Grid 7(3):1329–1336
Mehraeen S, Dierks T, Jagannathan S, Crow ML (2013) Zero-sum two-player game theoretic formulation of affine nonlinear discrete-time systems using neural networks. IEEE Trans Cybern 43(6):1641–1655
Minoux M (1986) Mathematical programming: theory and algorithms. Wiley, Hoboken
Mirjalili S (2015) Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl Based Syst 89:228–249
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
Misra S, Sarkar S (2015) Priority-based time-slot allocation in wireless body area networks during medical emergency situations: an evolutionary game-theoretic perspective. IEEE J Biomed Health Inf 19(2):541–548
Modares H, Lewis FL, Sistani MBN (2014) Online solution of nonquadratic two-player zero-sum games arising in the H∞ control of constrained input systems. Int J Adapt Control Signal Process 28(3–5):232–254
Moghddas-Tafreshi SM, Shayanfar HA, Lahiji AS, Rabiee A, Aghaei J (2011) Generation expansion planning in Pool market: a hybrid modified game theory and particle swarm optimization. Energy Convers Manag 52(2):1512–1519
Motalleb M, Ghorbani R (2017) Non-cooperative game-theoretic model of demand response aggregator competition for selling stored energy in storage devices. Appl Energy 202:581–596
Neshat N, Amin-Naseri MR (2015) Cleaner power generation through market-driven generation expansion planning: an agent-based hybrid framework of game theory and particle swarm optimization. J Clean Prod 105:206–217
Nguyen TT, Yang S, Branke J (2012) Evolutionary dynamic optimization: a survey of the state of the art. Swarm Evolut Comput 6:1–24
Nikjoofar A, Zarghami M (2013) Water distribution networks designing by the multiobjective genetic algorithm and game theory. In: Yang XS, Gandomi AH, Talatahari S, Alavi AH (eds) Metaheuristics in water, geotechnical and transport engineering. Elsevier, pp 99–119. https://www.sciencedirect.com/science/article/pii/B9780123982964000052
Nisan N, Roughgarden T, Tardos E, Vazirani VV (eds) (2007) Algorithmic game theory, vol 1. Cambridge University Press, Cambridge
Nocedal J, Wright SJ (2006) Sequential quadratic programming. Springer, New York, pp 529–562
Oliveira H, Petraglia A (2014) Establishing nash equilibria of strategic games: a multistart fuzzy adaptive simulated annealing approach. Appl Soft Comput 19:188–197
Parejo JA, Ruiz-Cortés A, Lozano S, Fernandez P (2012) Metaheuristic optimization frameworks: a survey and benchmarking. Soft Comput 16(3):527–561
Park J, Kwon S, Law KH (2013) Wind farm power maximization based on a cooperative static game approach. In: Proceeding SPIE, vol 8688, p 86880R
Pelikan M, Goldberg DE, Lobo FG (2002) A survey of optimization by building and using probabilistic models. Comput Optim Appl 21(1):5–20
Périaux J, Chen HQ, Mantel B, Sefrioui M, Sui HT (2001) Combining game theory and genetic algorithms with application to DDM-nozzle optimization problems. Finite Elem Anal Des 37(5):417–429
Pham T, Zhang J (2014) Two person zero-sum game in weak formulation and path dependent Bellman-Isaacs equation. SIAM J Control Optim 52(4):2090–2121
Pillai PS, Rao S (2016) Resource allocation in cloud computing using the uncertainty principle of game theory. IEEE Syst J 10(2):637–648
Prabha C, Khanna R, Kumar S (2016) Optimising social information by game theory and ant colony method to enhance routing protocol in opportunistic networks. Perspect Sci 8:658–660
Prokopovych P, Yannelis NC (2014) On the existence of mixed strategy Nash equilibria. J Math Econ 52:87–97
Pu Y, Zhu L (2016) A pure strategy Nash equilibrium Bertrand game with strictly positive profits. Open J Soc Sci 4(3):54
Pun CS, Wong HY (2016) Robust non-zero-sum stochastic differential investment-reinsurance game. Insur Math Econ 68:169–177
Raj A, Vyavahare SS, Chheda UB (2014) Nash equilibrium in mixed strategies: penalty kicks in football. Thesis, Indian Institute of Management Ahmedabad
Rajabioun R, Atashpaz-Gargari E, Lucas C (2008) Colonial competitive algorithm as a tool for Nash equilibrium point achievement. In: Computational science and its applications–iccsa 2008, pp 680–695
Rao SS (1987) Game theory approach for multiobjective structural optimization. Comput Struct 25(1):119–127
Rao SS (2009) Engineering optimization: theory and practice. Wiley, Hoboken
Rao SS, Freiheit TI (1991) A modified game theory approach to multiobjective optimization. ASME J Mech Des 113(3):286–291
Rath HK, Revoori V, Nadaf SM, Simha A (2014) Optimal controller placement in Software Defined Networks (SDN) using a non-zero-sum game. In: 2014 IEEE 15th international symposium on a world of wireless, mobile and multimedia networks (WoWMoM). IEEE, pp 1–6
Riechmann T (2001) Genetic algorithm learning and evolutionary games. J Econ Dyn Control 25(6):1019–1037
Rieti BE, Rieti BJ, Bbl S, Nominal IS, Rates REE, Series S, Column V (2017) World economy at risk of turning into a zero-sum game: innovation and differentiation are essential. In: World Economy, 2017
Roy S, Ellis C, Shiva S, Dasgupta D, Shandilya V, Wu Q (2010) A survey of game theory as applied to network security. In: 2010 43rd Hawaii international conference on system sciences (HICSS). IEEE, pp 1–10
Saguan M, Plumel S, Dessante P, Glachant JM, Bastard P (2004) Genetic algorithm associated to game theory in congestion management. In: 2004 International conference on probabilistic methods applied to power systems. IEEE, pp 415–420
Salcedo-Sanz S, Pastor-Sánchez A, Gallo-Marazuela D, Portilla-Figueras A (2013) A novel coral reefs optimization algorithm for multi-objective problems. In: International conference on intelligent data engineering and automated learning. Springer, Berlin, pp 326–333
Scutari G, Palomar DP, Facchinei F, Pang JS (2010) Convex optimization, game theory, and variational inequality theory. IEEE Signal Process Mag 27(3):35–49
Semasinghe P, Hossain E, Zhu K (2015) An evolutionary game for distributed resource allocation in self-organizing small cells. IEEE Trans Mob Comput 14(2):274–287
Shachat J, Swarthout JT, Wei L (2015) A hidden Markov model for the detection of pure and mixed strategy play in games. Econom Theory 31(4):729–752
Shamshirband S, Patel A, Anuar NB, Kiah MLM, Abraham A (2014) Cooperative game theoretic approach using fuzzy Q-learning for detecting and preventing intrusions in wireless sensor networks. Eng Appl Artif Intell 32:228–241
Shapiro A, Dentcheva D, Ruszczyński A (2009) Lectures on stochastic programming: modeling and theory. MPS-SIAM series on optimization, vol 9. SIAM
Shayanfar HA, Lahiji AS, Aghaei J, Rabiee A (2009) Generation Expansion Planning in pool market: a hybrid modified game theory and improved genetic algorithm. Energy Convers Manag 50(5):1149–1156
Shen S, Ma H, Fan E, Hu K, Yu S, Liu J, Cao Q (2017) A non-cooperative non-zero-sum game-based dependability assessment of heterogeneous WSNs with malware diffusion. J Netw Comput Appl 91:26–35
Shi J, Wang G (2016) A nonzero sum differential game of BSDE with time-delayed generator and applications. IEEE Trans Autom Control 61(7):1959–1964
Shoham Y (2008) Computer science and game theory. Commun ACM 51(8):74–79
Sigmund K, Nowak MA (1999) Evolutionary game theory. Curr Biol 9(14):R503–R505
Silva JPM, Sakallah KA (1997) GRASP—a new search algorithm for satisfiability. In: Proceedings of the 1996 IEEE/ACM international conference on computer-aided design. IEEE Computer Society, pp 220–227
Sohrabi MK (2018) A gossip-based information fusion protocol for distributed frequent itemset mining. Enterp Inf Syst. https://doi.org/10.1080/17517575.2017.1405286
Sohrabi MK, Akbari S (2016) A comprehensive study on the effects of using data mining techniques to predict tie strength. Comput Hum Behav 60:534–541
Sohrabi MK, Azgomi H (2017) Parallel set similarity join on big data based on Locality-Sensitive Hashing. Sci Comput Program 145:1–12
Sohrabi MK, Azgomi H (2017) TSGV: a table-like structure-based greedy method for materialized view selection in data warehouses. Turk J Electr Eng Comput Sci 25(4):3175–3187
Sohrabi MK, Barforoush AA (2012) Efficient colossal pattern mining in high dimensional datasets. Knowl Based Syst 33:41–52
Sohrabi MK, Barforoush AA (2013) Parallel frequent itemset mining using systolic arrays. Knowl Based Syst 37:462–471
Sohrabi MK, Roshani R (2017) Frequent itemset mining using cellular learning automata. Comput Hum Behav 68:244–253
Sohrabi MK, Tajik A (2017) Multi-objective feature selection for warfarin dose prediction. Comput Biol Chem 69:126–133
Song J, Wen J (2015) A non-cooperative game with incomplete information to improve patient hospital choice. Int J Prod Res 53(24):7360–7375
Storn R, Price K (1997) Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359
Suman B, Kumar P (2006) A survey of simulated annealing as a tool for single and multiobjective optimization. J Oper Res Soc 57(10):1143–1160
Sureka A, Wurman PR (2005) Using tabu best-response search to find pure strategy Nash equilibria in normal form games. In: Proceedings of the fourth international joint conference on autonomous agents and multiagent systems. ACM, pp 1023–1029
Taha HA (2014) Integer programming: theory, applications, and computations. Academic Press, Cambridge
Tang Z, Chen Y, Zhang L, Périaux J (2018) Solving the two objective evolutionary shape optimization of a natural laminar airfoil and shock control bump with game strategies. Arch Comput Methods Eng. https://doi.org/10.1007/s11831-017-9231-6
Tang Z, Zhang L (2016) Nash equilibrium and multi criterion aerodynamic optimization. J Comput Phys 314:107–126
Teng F, Magoulès F (2010) A new game theoretical resource allocation algorithm for cloud computing. In: International conference on grid and pervasive computing. Springer, Berlin, pp 321–330
Theunissen TJJM (1985) Binary programming and test design. Psychometrika 50(4):411–420
Tian R, Zhang Q, Wang G, Li H, Chen S, Li Y, Tian Y (2017) Study on the promotion of natural gas-fired electricity with energy market reform in China using a dynamic game-theoretic model. Appl Energy 185:1832–1839
Tian Y, Govindan K, Zhu Q (2014) A system dynamics model based on evolutionary game theory for green supply chain management diffusion among Chinese manufacturers. J Clean Prod 80:96–105
Bouyer P, Brenguier R, Markey N, Ummels M (2015) Pure Nash equilibria in concurrent deterministic games. Logical Methods Comput Sci 11(2):1–72
Velez JA, Greitemeyer T, Whitaker JL, Ewoldsen DR, Bushman BJ (2016) Violent video games and reciprocity: the attenuating effects of cooperative game play on subsequent aggression. Commun Res 43(4):447–467
Vielma JP (2015) Mixed integer linear programming formulation techniques. SIAM Rev 57(1):3–57
Vinyals M, Robu V, Rogers A, Jennings NR (2014) Prediction-of-use games: a cooperative game theory approach to sustainable energy tariffs. In: Proceedings of the 2014 international conference on autonomous agents and multi-agent systems. International Foundation for Autonomous Agents and Multiagent Systems, pp 829–836
Voudouris C, Tsang EPK (2010) Guided local search. In: Glover F, Kochenberger GA (eds) Handbook of metaheuristics. International series in operations research & management science, vol 57. Springer, Boston, pp 321–361
Wang G, Zhang Q, Li H, McLellan BC, Chen S, Li Y, Tian Y (2017) Study on the promotion impact of demand response on distributed PV penetration by using non-cooperative game theoretical analysis. Appl Energy 185:1869–1878
Wang X, Lv S, Quan J (2017) The evolution of cooperation in the Prisoner’s Dilemma and the Snowdrift game based on particle swarm optimization. Phys A 482:286–295
Wang Y (2007) Combining data mining and game theory in manufacturing strategy analysis. J Intell Manuf 18(4):505–511
Watanabe T, Yamashita H (2017) Existence of a pure strategy equilibrium in Markov games with strategic complementarities for finite actions and finite states. J Oper Res Soc Jpn 60(2):201–214
Webb JN (2007) Game theory: decisions, interaction and evolution. Springer, New York
Wei G, Vasilakos AV, Zheng Y, Xiong N (2010) A game-theoretic method of fair resource allocation for cloud computing services. J Supercomput 54(2):252–269
Wu Z, Dang C, Zhu C (2014) Searching one pure-strategy Nash equilibrium using a distributed computation approach. JCP 9(4):859–866
Xiao Z, Jiang J, Zhu Y, Ming Z, Zhong S, Cai S (2015) A solution of dynamic VMs placement problem for energy consumption optimization based on evolutionary game theory. J Syst Softw 101:260–272
Xiao M, Shao X, Gao L, Luo Z (2015) A new methodology for multi-objective multidisciplinary design optimization problems based on game theory. Expert Syst Appl 42(3):1602–1612
Xiao Z, Tong Z, Li K, Li K (2017) Learning non-cooperative game for load balancing under self-interested distributed environment. Appl Soft Comput 52:376–386
Xu L, Fei M, Jia T, Yang TC (2012) Bandwidth scheduling and optimization using non-cooperative game model-based shuffled frog leaping algorithm in a networked learning control system. Neural Comput Appl 21(6):1117–1128
Xu M, Yang Q, Kwak KS (2016) Distributed topology control with lifetime extension based on non-cooperative game for wireless sensor networks. IEEE Sens J 16(9):3332–3342
Yue D, You F (2017) Stackelberg-game-based modeling and optimization for supply chain design and operations: a mixed integer bilevel programming framework. Comput Chem Eng 102:81–95
Zamarripa MA, Aguirre AM, Méndez CA, Espuña A (2013) Mathematical programming and game theory optimization-based tool for supply chain planning in cooperative/competitive environments. Chem Eng Res Des 91(8):1588–1600
Zappone A, Jorswieck E (2015) Energy efficiency in wireless networks via fractional programming theory. Found Trends Commun Inf Theory 11(3–4):185–396
Zhang CT, Liu LP (2013) Research on coordination mechanism in three-level green supply chain under non-cooperative game. Appl Math Model 37(5):3369–3379
Zhao D, Zhang Q, Wang D, Zhu Y (2016) Experience replay for optimal control of nonzero-sum game systems with unknown dynamics. IEEE Trans Cybern 46(3):854–865
Zhao G, Wang Y, Li H (2016) A matrix approach to modeling and optimization for dynamic games with random entrance. Appl Math Comput 290:9–20
Zhu K, Hossain E, Niyato D (2014) Pricing, spectrum sharing, and service selection in two-tier small cell networks: a hierarchical dynamic game approach. IEEE Trans Mob Comput 13(8):1843–1856
Zhu X, Jiang L, Ye M, Sun L, Gragnoli C, Wu R (2016) Integrating evolutionary game theory into mechanistic genotype–phenotype mapping. Trends Genet 32(5):256–268
Zhu Z, Tang J, Lambotharan S, Chin WH, Fan Z (2011) An integer linear programming and game theory based optimization for demand-side management in smart grid. In: 2011 IEEE GLOBECOM Workshops (GC Wkshps). IEEE, pp 1205–1210
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Sohrabi, M.K., Azgomi, H. A Survey on the Combined Use of Optimization Methods and Game Theory. Arch Computat Methods Eng 27, 59–80 (2020). https://doi.org/10.1007/s11831-018-9300-5