1 INTRODUCTION

Operations research, in which an operation is considered as a set of purposeful actions, includes the mathematical theory of decision-making, which is the subject of this paper. The majority of decisions in real-life systems have multiple goals. Some of these goals can be ranked, and then the most important of them can be included in the constraints; other goals can be scalarized, i.e., aggregated into more general goals using weighting coefficients [1]. However, most often the problem cannot be reduced to a single-criterion problem for the following reason.

Decision-making in complex systems is usually cooperative: different participants may have different goals, and it is not known in advance what the tradeoff will be [2]. Even if an organization has a hierarchical structure and there is a person who makes the decision (called the decision maker, or DM), she can hardly formulate her goal to a systems analyst. In operations research theory and in management practice, the DM and the operations researcher (OR) are intentionally separated because the DM must make the ultimate choice and be responsible for it [1].

However, the range of operation goals can usually be restricted. The task of the OR, as the developer of a decision-making support system, is then not to select a decision by optimizing given criteria but to reject nonoptimal decisions and to obtain a representative set of estimates of cooperatively formed possible decisions, generally under conditions in which not everything depends on the parties to the operation. Because uncontrollable factors must be taken into account while estimating the consequences of various decisions, this task is difficult even to formulate.

In simple cases, the uncontrollable factors in operations research are treated as uncertain factors, and the proposed decisions are estimated at their worst values. However, such estimates are often uninformative: they are trivial, too pessimistic, etc. In this case, one should build a model of the uncontrollable factors; in particular, one should try to separate them, classify the causes of their occurrence, and refine their parameters. In this paper, we pay considerable attention to such modeling. We show that different models of uncertainty and uncontrollability give different results; therefore, model validity is the primary condition for successful application.

The situation is much simpler in single-criterion problems if the random nature of the uncontrollable factors can be substantiated and information about the form (and, even better, about the parameters) of their distribution function can be obtained. In this case, the criteria (but not the constraints, which must be satisfied with a given probability, in particular, with probability 1, for all values of the random factors) are typically averaged, which reduces the analysis to more or less standard optimization statements [3]. In multicriteria (MC) problems, the issue of taking random factors into account remains open. It cannot be resolved by averaging, as in the scalar case, because the mean value of each component of the vector payoff function is attained at a different realization of the random factor, and it is not clear which realization should be used for the optimization of the vector as a whole. The issues of how random factors can be taken into account in MC models were considered in [4, 5]. We also discuss them in Section 4 in the context of estimating the use of mixed strategies by the DM.
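The obstacle to averaging can be seen in a minimal numerical sketch (in Python, with hypothetical data): for a two-criterion payoff depending on an equiprobable random factor \(y \in \{0, 1\}\), the vector of component-wise means is not attained at any single realization.

```python
import numpy as np

# Hypothetical two-criterion payoff depending on a random factor y in {0, 1},
# each realization occurring with probability 1/2 (the decision x is fixed).
phi = {0: np.array([4.0, 0.0]),   # Phi(x, y = 0)
       1: np.array([0.0, 4.0])}   # Phi(x, y = 1)

# Component-wise expectation of the vector payoff
mean_vector = (phi[0] + phi[1]) / 2
print(mean_vector)  # [2. 2.]

# The averaged vector is not attained at any single realization of y:
attained = any(np.array_equal(mean_vector, v) for v in phi.values())
print(attained)  # False
```

Each component's mean thus refers to a different realization, so no single realization supports the averaged vector.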

Another model of nonrandom uncontrollable factors used in operations research is their interpretation as purposeful actions of an opponent. In scalar problems, such a game-theoretic interpretation gives the same result as the model with uncertainty; however, the pessimistic value then receives a meaningful explanation. In MC problems, the proposed interpretation changes the result (the estimates of the feasible values of the vector criterion). In Sections 2 and 3, we analyze and formally compare the two models in terms of MC games with a fixed order of moves in order to distinguish the statements in which the maximum guaranteed payoff of the maximizing player automatically coincides with the minimum guaranteed loss of the minimizing player from the statements in which this occurs only in singular cases. In Sections 4–6, we consider the concepts of decision making and of the MC game value primarily for the second type of statement and analyze the properties of these concepts. A parameterization of the solution and the value of the MC game using Germeier's scalarization [6, 1] is proposed. This scalarization is compared with the linear scalarization used in Shapley's generalization [7] of the Nash equilibrium to the multicriteria case.

2 MC PROBLEM WITH UNCERTAIN FACTORS

Suppose that the goal of making a decision (the operation goal [1, 3]) is formulated as the MC problem [1, 2, 8] of maximizing a vector function \(\Phi (x,y)\) = \(({{\varphi }_{1}}(x,y), \ldots ,{{\varphi }_{n}}(x,y))\) by choosing the control parameter \(x \in X\) taking into account the uncontrollable factor \(y \in Y.\) Note that, strictly speaking, the presence of a vector criterion indicates that the goal is not clearly defined and, according to [1], should be made clearer. However, as mentioned above, goal refinement is the responsibility of the DM, who sometimes defines the goal only when announcing the decision. From the OR's point of view (and the authors of this paper agree with this viewpoint), there is an implicit goal specified by the set \({{\varphi }_{i}}\), \(i = \overline {1,n} \).

Without loss of generality, we assume that \(\Phi (x,y) \geqslant 0\). Here and below, the standard inequality signs for vectors are interpreted as component-wise inequalities; the crossed sign denotes the negation of the corresponding vector inequality. The sets \(X\) and \(Y\) are assumed to be finite or compact in Euclidean space; in the latter case, the functions \({{\varphi }_{i}}\) are assumed to be continuous. In this section, we consider an operation in which the parameter \(y\) models an uncertain factor that takes arbitrary values in the set \(Y\), and no probability characteristics for it are given.

If the OR expects to have information about the particular realized value \({{y}^{1}}\) of the parameter \(y\) before the decision to select \(x \in X\) is made, then he rejects nonoptimal decisions and recommends that the DM select \(x\) from the set of unimprovable strategies with respect to the vector criterion; i.e., he recommends selecting an \(x\) at which \(\mathop {{\text{Max}}}\limits_{x \in X} \,\Phi (x,{{y}^{1}})\) is attained. (Everywhere in this paper, the minimum and maximum written with a capital letter denote the set of maximal (minimal) elements in the sense of the relation \( < \) between vectors, so that we usually deal with the Slater set [8].) However, the OR cannot guarantee the vectors \(\varphi {\text{*}} \in \mathop {{\text{Max}}}\limits_{x \in X} \,\Phi (x,{{y}^{1}})\) to the DM in advance because the realized value \({{y}^{1}}\) is not yet known, and the OR should estimate the possible values of \(\varphi {\text{*}}\) assuming that any \({{y}^{1}} \in Y\) is possible. The guaranteed MC estimates are introduced as follows.

If the OR can guarantee a value \(\varphi \) of the vector criterion \(\Phi (\, \cdot \,)\), then, taking into account the DM’s desire to maximize \(\Phi \), we consider every MC estimate of the DM’s result not exceeding \(\varphi \) in every component as a guaranteed one. Thus, we assume that the ability to obtain the result \(\varphi _{i}^{*}\) with respect to a partial criterion, e.g., \({{\varphi }_{i}}\), automatically means for the DM the feasibility of all estimates \({{\psi }_{i}} < \varphi _{i}^{*}\) of the criterion \({{\varphi }_{i}}\) by realizing the stronger estimate \(\varphi _{i}^{*}\). Note the difference between the feasible value of the vector criterion and the feasible estimate of the resulting value (for a particular \({{y}^{1}} \in Y\)). More precisely, the value \(\varphi \) of the vector criterion is feasible if there exists an \(x \in X\): \(\Phi (x,{{y}^{1}}) = \varphi \); and the estimate \(\psi \) of the vector criterion for the maximizing DM is feasible if there exists \(x \in X\): \(\Phi (x,{{y}^{1}}) \geqslant \psi \). (Figure 1a illustrates this difference.)
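The distinction between a feasible value and a feasible estimate amounts to two different membership tests; the following Python sketch (with a hypothetical payoff matrix for a fixed \({{y}^{1}}\)) illustrates it.

```python
import numpy as np

# Hypothetical payoffs Phi(x, y1) for a fixed realization y1 and x in {0, 1};
# rows correspond to strategies, columns to the two criteria.
Phi_y1 = np.array([[3.0, 1.0],
                   [1.0, 3.0]])

def is_feasible_value(phi):
    """phi is a feasible value if some x yields exactly Phi(x, y1) = phi."""
    return any(np.array_equal(row, phi) for row in Phi_y1)

def is_feasible_estimate(psi):
    """psi is a feasible estimate if some x yields Phi(x, y1) >= psi."""
    return any(bool(np.all(row >= psi)) for row in Phi_y1)

print(is_feasible_value(np.array([2.0, 1.0])))     # False: not attained exactly
print(is_feasible_estimate(np.array([2.0, 1.0])))  # True: dominated by (3, 1)
```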

Fig. 1.

Guaranteed estimates ((a) for a single point and (b) for an interval) and the BGR of an informed DM (c) in the MC maximization problem.

Now, we write the set

$${{\Psi }_{1}}(y)\;\mathop = \limits^{{\text{def}}} \;\bigcup\limits_{x \in X} \,\{ \psi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\psi \leqslant \Phi (x,y)\} $$

of feasible MC estimates of the vector criterion \(\Phi \) for the given \(y\). If any \(y \in Y\) is a priori possible, then the OR can guarantee to the DM only the estimates belonging to the intersection \(\bigcap\nolimits_{y \in Y} {{{\Psi }_{1}}(y)} \) of these sets. In this case, the best guaranteed result (BGR) is determined by the unimprovable guaranteed MC estimates; it is given by the formula

$${{\mathcal{F}}_{ \leqslant }}\;\mathop = \limits^{{\text{def}}} \;{\text{Max}}\bigcap\limits_{y \in Y} \,\bigcup\limits_{x \in X} \,\{ \psi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\psi \leqslant \Phi (x,y)\} .$$
(2.1)

Following the reasoning in [8–13], set (2.1) can be considered as the BGR of the informed DM (i.e., the player who moves second) in the antagonistic MC game (which we rigorously define in Section 3). The occurrence of a set as the value of the maximum in (2.1) is characteristic of MC problems due to the partial order formed among the vector estimates \(\psi \in {{\mathbb{R}}^{n}}\) by the relation \( < \) (see Figs. 1b and 1c).

If the OR is not sure that the particular realized value of the parameter \(y\) will become known before the decision to select \(x \in X\) is made, then, whatever decision \({{x}^{0}}\) is chosen by the DM, the OR cannot guarantee any estimates of the criterion vector that do not lie in the set

$${{\Psi }_{2}}({{x}^{0}})\;\mathop = \limits^{{\text{def}}} \;\bigcap\limits_{y \in Y} \,\{ \psi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\psi \leqslant \Phi ({{x}^{0}},y)\} .$$

The entire set of guaranteed estimates of the vector criterion, i.e., the estimates that can be relied on in such a situation, is given by the formula \(\bigcup\nolimits_{x \in X} {{{\Psi }_{2}}(x)} \). Since the BGR, similarly to the preceding case, is determined by the unimprovable guaranteed MC estimates, we derive the following formula for the BGR of the uninformed DM:

$${{f}_{ \leqslant }}\;\mathop = \limits^{{\text{def}}} \;{\text{Max}}\bigcup\limits_{x \in X} \,\bigcap\limits_{y \in Y} \,\{ \psi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\psi \leqslant \Phi (x,y)\} .$$
(2.2)

Set (2.2) is the BGR of the player who moves first in the antagonistic MC game [8–13]. The pair of sets (2.1), (2.2) is an MC analog of the concepts of minimax and maximin for the maximizing DM (in Section 3, we show that, in contrast to the scalar case, these concepts differ for the minimizing DM).
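For finite games, the regions of guaranteed estimates behind (2.1) and (2.2) admit direct membership tests: \(\psi\) is guaranteed for the uninformed DM if \(\exists x\,\forall y{:}\;\psi \leqslant \Phi(x,y)\) and for the informed DM if \(\forall y\,\exists x{:}\;\psi \leqslant \Phi(x,y)\). A Python sketch with hypothetical payoffs checks that information cannot deteriorate the result:

```python
import itertools
import numpy as np

# Hypothetical finite MC game: Phi[x, y] is a two-criterion payoff vector,
# X = Y = {0, 1}.
Phi = np.array([[[3, 3], [1, 1]],    # x = 0: Phi(0, 0), Phi(0, 1)
                [[1, 1], [3, 3]]])   # x = 1: Phi(1, 0), Phi(1, 1)
X, Y = range(Phi.shape[0]), range(Phi.shape[1])

def uninformed_ok(psi):
    # psi belongs to the region behind (2.2): exists x, for all y, psi <= Phi(x, y)
    return any(all(np.all(psi <= Phi[x, y]) for y in Y) for x in X)

def informed_ok(psi):
    # psi belongs to the region behind (2.1): for all y, exists x, psi <= Phi(x, y)
    return all(any(np.all(psi <= Phi[x, y]) for x in X) for y in Y)

print(informed_ok(np.array([3, 3])))    # True: the informed DM matches x to y
print(uninformed_ok(np.array([3, 3])))  # False: no single x works for all y

# Information cannot deteriorate the result: every uninformed guarantee
# is also an informed one.
for p in itertools.product(range(4), repeat=2):
    psi = np.array(p)
    assert not uninformed_ok(psi) or informed_ok(psi)
```

In this example the informed BGR is strictly richer than the uninformed one, so knowing \(y\) before choosing \(x\) is genuinely valuable.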

Formalization of the MC maximin and MC minimax and the derivation of MC guaranteed estimates in the presence of uncertain factors are considered in many works (in addition to the ones mentioned above), including dynamic problem statements. Among the works devoted to dynamic statements, we single out [14], which is based on the approach proposed in [15]. The connection of (2.1) and (2.2) with the constructions in [15] was thoroughly discussed in [16]; we do not dwell on the details of their differences in this paper and instead take as the basis the most widespread definitions from [8]. Furthermore, for these statements (in contrast to [14]), the generalization, proposed in [16], of the scalar relation “the maximin is not greater than the minimax” holds, which corresponds to a basic principle of operations research that information cannot deteriorate the result [1]. (All the available and expected information must be used in the model when the BGR is calculated.)

To take into account, in an operation with a vector criterion, that an uncontrollable factor is caused by the actions of another DM who pursues her own goals, we study in Sections 3 and 4 a zero-sum MC game defined in [13] and its difference from the antagonistic game within the general logic of opposite interests. Independently of the order of moves, we call the player that maximizes the vector criterion the first player (DM-1), and the opponent is called the second player (DM-2).

3 MODELS OF UNCONTROLLABLE FACTORS CAUSED BY DM’S ACTIONS

Consider a generalized operation involving two DMs (DM-1 and DM-2) with their teams; they are also called players. Let the vector efficiency criterion of each of them also depend on the opponent's actions. This operation will be called a two-person MC game. We could deal with two operations with two DMs and two ORs; however, for the analysis of their relationships, it is more convenient to combine them and to interpret the uncontrollable factor as the result of a partial operation (purposeful actions, albeit with an implicit goal).

In a competitive MC game, it is assumed that the choice of the uncontrollable factor values \(y \in Y\) is made by DM-2 on the basis of the desire to minimize the vector function \(\Phi (x,y)\), which is maximized by DM-1. Formally, the competitive MC game is written as \(\Gamma = \left\langle {X,Y,\Phi } \right\rangle \), where \(X\) is the set of strategies of the first player, \(Y\) is the set of strategies of the second player, and as the players select the pair \((x,y) \in X \times Y,\) the first player gets the payoff vector \(\Phi (x,y) = ({{\varphi }_{1}}(x,y), \ldots ,{{\varphi }_{n}}(x,y))\), and the second one gets the same vector \(\Phi (x,y)\) as her loss vector. Unfortunately, this is not sufficient for describing the result of this MC game because it is not specified how the DMs interpret their vector criterion.

According to [1], the nonuniqueness of the function to be optimized corresponds to subjective uncertainty in the optimization goal. The players' attitudes to this internal uncertainty can generally differ. In conventional vector optimization (e.g., see [8]), the presence of multiple criteria assumes the desire to optimize all of them. A less popular interpretation of the multiplicity of criteria, first proposed in [10], reduces to the desire of a player to optimize some one criterion in the given set. In games with a vector payoff function, both interpretations may be used [17]. To reveal the influence of these interpretations on the game result, we introduced in [13] two types of competitive MC games \(\Gamma \): antagonistic (\({{\Gamma }^{A}}\)) and zero-sum (\({{\Gamma }^{0}}\)) games.

In antagonistic games, one player is an antagonist of the other; i.e., her purpose is to prevent the other player from achieving her goal. For example, assuming that the goal of the first player in \(\Gamma \) is to maximize the payoffs \(({{\varphi }_{1}}, \ldots ,{{\varphi }_{n}})\), we conclude that the goal of the second player in \({{\Gamma }^{A}}\) is the minimization of at least one (arbitrary) criterion among \({{\varphi }_{1}}, \ldots ,{{\varphi }_{n}}\). It is clear that in this case the first player is also the antagonist of the second player. In the antagonistic statement of a competitive game, each DM embodies for her opponent a source of uncertain factors (natural uncertainty), which was touched upon in Section 2.

In the zero-sum game, the goal of one player coincides with the goal of the other taken with the negative sign. More precisely, if the goal of the first player is to maximize the payoff vector \(({{\varphi }_{1}}, \ldots ,{{\varphi }_{n}})\), then the goal of the second player in the zero-sum game \({{\Gamma }^{0}}\) is to maximize the vector \(( - {{\varphi }_{1}}, \ldots , - {{\varphi }_{n}})\), i.e., to minimize the loss vector \(({{\varphi }_{1}}, \ldots ,{{\varphi }_{n}})\). If the goal of the first player is to maximize at least one (arbitrary) criterion among \({{\varphi }_{1}}, \ldots ,{{\varphi }_{n}}\), then the goal of the second player in the game \({{\Gamma }^{0}}\) is to minimize at least one (arbitrary) of them. (The players may select different partial criteria.) In this case, the name “zero-sum game” is rather tentative; it is used to stress that, in contrast with the antagonistic game, in which everything, including the interpretation of multicriteriality, is opposite, this game contains no other aspect of competitiveness.

Rigorous definitions of the games \({{\Gamma }^{A}}\) and \({{\Gamma }^{0}}\) follow from the formal construction below [13, 8], based on the two main interpretations of vector optimization mentioned above. Note that intermediate interpretations of MC problems are possible, in which a group of partial criteria is optimized as a whole while for the other group it is sufficient to optimize an arbitrary criterion; however, we do not consider such general statements in this paper and restrict ourselves to the extreme cases. It is essential that, even if such extended interpretations of vector optimization are considered, only two types of competitive MC games are distinguished according to Definition 2 below.

Definition 1. In an MC maximization problem, for any \(z \in {{\mathbb{R}}^{n}}\), the guaranteed estimates of the value \(z\) are defined as the elements of the set \(\{ \psi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\psi \leqslant z\} \), and the weak estimates of the value z are defined as the elements of the set \(\{ \xi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\xi \not { > }z\} \). For estimates in an MC minimization problem, the signs in the inequalities are reversed (see the example in Fig. 2).

Fig. 2.

Weak estimates ((a) for a single point and (b) for an interval) and the BGR of an unaware second player in the antagonistic MC game \(\Gamma _{{21}}^{A}\) (c).

The guaranteed estimates were considered in Section 2. The purpose of using weak estimates is similar. Let \(z\) be the vector payoff of the first player. Then the components \({{z}_{i}}\) are the values achieved by this player for each partial criterion \({{\varphi }_{i}}\) (simultaneously). Since the first player wants to maximize the vector with the components \({{\varphi }_{i}}\), all vector estimates that are “less” than \(z\) are already achieved. The set of weak estimates characterizes the values that the maximizing player considers achieved, given the payoff \(z\), when she uses the binary relation \(\not { > }\) for comparing payoff vectors (in distinction from the relation \( \leqslant \) used in the case of guaranteed estimates). The situation for the second player in the minimization problem is similar. Definition 1 implies that any guaranteed estimate \(\psi \) for \(z\) is not better than the vector result \(z\) with respect to all criteria, and any weak estimate \(\xi \) is not better than \(z\) with respect to at least one criterion (this criterion is not indicated, which is precisely the idea of multicriteriality).
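Definition 1 reduces to two simple predicates; the Python sketch below (with hypothetical data) checks them and confirms that every guaranteed estimate is also a weak one.

```python
import numpy as np

z = np.array([2.0, 3.0])  # vector payoff of the maximizing player (hypothetical)

def guaranteed(psi, z):
    """Guaranteed estimate: psi <= z in every component."""
    return bool(np.all(psi <= z))

def weak(xi, z):
    """Weak estimate: xi is not strictly greater than z,
    i.e., xi_i <= z_i for at least one i."""
    return not np.all(xi > z)

print(guaranteed(np.array([1.0, 3.0]), z))  # True
print(weak(np.array([5.0, 1.0]), z))        # True: not better on criterion 2
print(weak(np.array([5.0, 4.0]), z))        # False: strictly better on all criteria

# Every guaranteed estimate is also a weak one.
rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.uniform(0.0, 5.0, size=2)
    assert not guaranteed(v, z) or weak(v, z)
```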

Definition 2. The MC zero-sum game \({{\Gamma }^{0}}\) is defined as a competitive game (\(\Gamma \)) in which both players use the same type of estimates of the vector criterion. The game \(\Gamma \) in which the players use complementary estimates is called antagonistic (\({{\Gamma }^{A}}\)).

These definitions of the zero-sum and antagonistic games cover the cases when the first (maximizing) player uses not only guaranteed estimates, as in the examples above, but also weak ones. All main properties of the classes of games defined above are independent of the estimates by which the players are guided; it is only important whether these estimates are identical or different. Following the original statement in Section 2, we assume that the first player uses guaranteed estimates (the results obtained in [13, 16] imply that most of the propositions below also hold in the other case). We have also taken into account that the term MC zero-sum game is most often used in the literature for guaranteed estimates.

Remark 1. For an antagonistic game in which the first player uses guaranteed estimates, we can give the following simple illustration. DM-1 has two uncontrollable factors, \(y\) and \(i\), and she adheres to the guaranteed result principle with respect to both. For DM-2, the index \(i\) is a control; therefore, she can use weak estimates. This behavior corresponds to the defense-against-attack model [1]. The zero-sum game, however, requires another example, in which the second player also interprets the criterion index as an uncontrollable factor and uses guaranteed estimates. Here, we deal with an indirect conflict, and we may imagine a competition game with respect to a number of indicators specified by the index \(i\) (in the simplest case, \({{\varphi }_{i}} = {{x}_{i}} - {{y}_{i}} + C\)) or the model game described in [4, 5], in which each player cares for her own interests but the players' payoff vectors turn out to be oppositely directed. Therefore, the formal separation of \(\Gamma \) into the game \({{\Gamma }^{A}}\), in which the players' preferences are opposite, and the game \({{\Gamma }^{0}}\), in which the vector payoff functions are opposite, is justified. Note that [7] considers the second version of the MC game, with guaranteed estimates (and optimization of all criteria); [10–12] consider the first version (in relation to modeling coalitions in multi-person scalar games without side payments); and [8, 9, 13] admit both interpretations.

As with the first player in Section 2, the BGR of the second player, playing the role of an independent DM, depends on her information. Consider two versions of information in the game \(\Gamma \): \({{\Gamma }_{{12}}}\) and \({{\Gamma }_{{21}}}\). In \({{\Gamma }_{{12}}}\), the first player moves first, and the second move is made by the second player when she already knows the first player's move. The value of such an MC maximin for the second (minimizing) player [16] differs from the similar value (2.2) constructed for the first player, and it is given by

$${{f}_{ \geqslant }}\;\mathop = \limits^{{\text{def}}} \;{\text{Min}}\bigcap\limits_{x \in X} \,\bigcup\limits_{y \in Y} \,\{ \psi \in {{\mathbb{R}}^{n}}\,{\text{|}}\,\psi \geqslant \Phi (x,y)\} $$
(3.1)

or by

$${{f}_{{\nless}}}\;\mathop = \limits^{{\text{def}}} \;{\text{Min}}\bigcap\limits_{x \in X} \,\bigcup\limits_{y \in Y} \,\{ \xi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\xi \nless\Phi (x,y)\} ,$$
(3.2)

depending on the type of estimate of the vector criterion values used by the second player. Set (3.1) is considered as the least guaranteed loss, or the BGR, of the second player in the competitive MC game with the order of moves first player–second player in the zero-sum statement (\(\Gamma _{{12}}^{0}\)); set (3.2), in the antagonistic statement (\(\Gamma _{{12}}^{A}\)).

Similarly, in the MC game \({{\Gamma }_{{21}}}\) with the order of moves second player–first player, the BGR of the second player is given by the values of her vector minimaxes determined by

$${{\mathcal{F}}_{ \geqslant }}\;\mathop = \limits^{{\text{def}}} \;{\text{Min}}\bigcup\limits_{y \in Y} \,\bigcap\limits_{x \in X} \,\{ \psi \in {{\mathbb{R}}^{n}}\,{\text{|}}\,\psi \geqslant \Phi (x,y)\} $$
(3.3)

for the zero-sum game \(\Gamma _{{21}}^{0}\) or (see Fig. 2c) by

$${{\mathcal{F}}_{{\nless}}}\;\mathop = \limits^{{\text{def}}} \;{\text{Min}}\bigcup\limits_{y \in Y} \,\bigcap\limits_{x \in X} \,\{ \xi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\xi \nless \Phi (x,y)\} $$

in the antagonistic statement \(\Gamma _{{21}}^{A}\). By comparing the BGRs (2.1) and (2.2) of the first player with the BGRs (3.1)–(3.3) of the second player, we can formulate the following rule for determining the BGR. Each player takes the union of the sets of the corresponding estimates of the vector criterion over her own strategies and the intersection over the opponent's strategies; the order of these operations is determined by the order of moves. The first player then selects the Slater-maximal estimates, and the second player selects the Slater-minimal ones.
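This rule translates directly into membership tests for the second player's guaranteed-loss regions behind (3.1) and (3.3); the following Python sketch (with hypothetical payoffs) also verifies that every loss estimate guaranteed by the uninformed second player is guaranteed by the informed one as well.

```python
import itertools
import numpy as np

# Hypothetical finite game: Phi[x, y] is the two-criterion loss vector of DM-2.
Phi = np.array([[[3, 1], [1, 3]],
                [[2, 2], [2, 2]]])
X, Y = range(Phi.shape[0]), range(Phi.shape[1])

def loss_guaranteed_12(psi):
    # Region behind (3.1), order first player-second player:
    # for all x there exists y with psi >= Phi(x, y).
    return all(any(np.all(psi >= Phi[x, y]) for y in Y) for x in X)

def loss_guaranteed_21(psi):
    # Region behind (3.3), order second player-first player:
    # there exists y such that psi >= Phi(x, y) for all x.
    return any(all(np.all(psi >= Phi[x, y]) for x in X) for y in Y)

print(loss_guaranteed_12(np.array([3, 3])))  # True
print(loss_guaranteed_21(np.array([2, 2])))  # False

# Every loss estimate guaranteed by the uninformed second player (order 21)
# is also guaranteed by the informed one (order 12).
for p in itertools.product(range(5), repeat=2):
    psi = np.array(p)
    assert not loss_guaranteed_21(psi) or loss_guaranteed_12(psi)
```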

In [16], the identities (for the illustration, cf. Figs. 1c and 2c)

$${{f}_{{\nless}}} = {{f}_{ \leqslant }},\quad {{\mathcal{F}}_{{\nless}}} = {{\mathcal{F}}_{ \leqslant }},$$
(3.4)

were proved; they imply that in an antagonistic MC game with a fixed order of moves, the greatest guaranteed payoff of the first player equals the least guaranteed loss of the second player; i.e., there is no need for them to negotiate because the game has a value equal to the BGR. As a result, we conclude that, in the antagonistic MC game, as in the ordinary single-criterion game, the second player acts as an equivalent of natural uncertainty and exhibits no specific features of a reasonable opponent. This case has already been described in Section 2. For this reason, below we study MC games primarily in the zero-sum statement, since it is in these games that the difference between an opponent DM and an uncertain factor manifests itself.

For zero-sum MC games, identities similar to (3.4) do not hold, and even their relaxation to Pareto values is valid only for games that are actually equivalent to an independent set of single-criterion games. More precisely, denote by \({{P}_{{\max}}}(W)\) and \({{P}_{{\min}}}(W)\) the Pareto bounds of the set \(W \subset \mathbb{R}_{ + }^{n}\) in MC maximization or minimization problems. Then \({{\mathcal{F}}_{ \leqslant }} \cap {{\mathcal{F}}_{ \geqslant }} \ne \emptyset \Leftrightarrow {{P}_{{\max}}}({{\mathcal{F}}_{ \leqslant }}) = {{P}_{{\min}}}({{\mathcal{F}}_{ \geqslant }})( = {{\psi }^{{21}}})\) [18] and \({{f}_{ \leqslant }} \cap {{f}_{ \geqslant }} \ne \emptyset \Leftrightarrow {{P}_{{\max}}}({{f}_{ \leqslant }}) = {{P}_{{\min}}}({{f}_{ \geqslant }})( = {{\psi }^{{12}}})\) [13], where

$$\psi _{i}^{{12}}\;\mathop = \limits^{{\text{def}}} \;\mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{y \in Y} {{\varphi }_{i}}(x,y),\quad \psi _{i}^{{21}}\;\mathop = \limits^{{\text{def}}} \;\mathop {\min}\limits_{y \in Y} \mathop {\max}\limits_{x \in X} {{\varphi }_{i}}(x,y)\quad \forall i = \overline {1,n} .$$

Thus, in the zero-sum MC game with a fixed order of players' moves, the greatest guaranteed payoff of DM-1 intersects with the least guaranteed loss of DM-2 if and only if the Pareto value of the BGR of each player consists of a single vector. We conclude that Germeier's theorem on the existence of a solution in games with perfect information [1], which guarantees the equality of the maximin and minimax in scalar games (in mapping strategies), cannot be extended to the multicriteria case \({{\Gamma }^{0}}\).
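The component-wise maximin and minimax vectors \({{\psi }^{{12}}}\) and \({{\psi }^{{21}}}\) are straightforward to compute for a finite game given as a payoff tensor; a Python sketch (with random hypothetical payoffs) confirms the scalar relation \({{\psi }^{{12}}} \leqslant {{\psi }^{{21}}}\) in every component.

```python
import numpy as np

# Hypothetical 3x3 game with two criteria: Phi[x, y, i] >= 0.
rng = np.random.default_rng(1)
Phi = rng.integers(0, 10, size=(3, 3, 2)).astype(float)

# psi12_i = max_x min_y phi_i(x, y),  psi21_i = min_y max_x phi_i(x, y)
psi_12 = Phi.min(axis=1).max(axis=0)   # component-wise maximin
psi_21 = Phi.max(axis=0).min(axis=0)   # component-wise minimax

print(psi_12, psi_21)
# The scalar relation "maximin <= minimax" holds in every component.
assert np.all(psi_12 <= psi_21)
```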

The condition of uniqueness of the Pareto optimum in the MC optimization problem is equivalent to the absence of competition between the partial criteria, i.e., to the condition that each of them can be optimized independently of the others. In this case, the MC game \(\Gamma \) also decomposes into \(n\) scalar games. In the other cases, the zero-sum MC game with a fixed order of moves has no solution in the sense conventional for scalar games. The greatest guaranteed payoff of the first player in \({{\Gamma }^{0}}\) is strictly less with respect to a certain component than the least guaranteed loss of the second player because, for at least one side, the guaranteed estimate will be worse (with respect to at least one criterion and not better with respect to the others) than the value of the vector criterion that the player can actually obtain. The guaranteed estimates in MC games in which multicriteriality is essential are pessimistic in the sense that there are no opponent's actions that would result in such estimates (independently of the order of moves); see the example in Fig. 3. This suggests that there is a potential opportunity for finding a tradeoff in the zero-sum MC game. We consider this possibility in Sections 5 and 6, while the next section is devoted to the statements of the MC game in normal form and various definitions of its solution.

Fig. 3.

Typical relation between the BGR’s of the first and second players in the zero-sum MC game (\(\Gamma _{{21}}^{0}\)).

4 SPECIFIC FEATURES OF MAKING DECISIONS IN MC GAMES IN NORMAL FORM

In games in normal form, making decisions is complicated by the fact that the players select their strategies simultaneously, without knowing the opponent's move [19]. In the scalar case, the zero-sum game in normal form (by contrast to the games with a fixed order of moves) does not always have a solution; more precisely, it has a solution only if the maximin of the payoff function equals the minimax, i.e., if the BGR of each player is independent of the order of moves [1]. In this case, the players can simultaneously select their guaranteeing strategies and obtain a solution and the game value, which equals the BGR and is the same for both players.

In zero-sum MC games, the BGRs of the players usually do not intersect (see Section 3); however, we may speak of the conditions under which the BGRs are independent of the order of moves. This means the equality \({{f}_{ \leqslant }} = {{\mathcal{F}}_{ \leqslant }}\) for the first player and (or) \({{\mathcal{F}}_{ \geqslant }} = {{f}_{ \geqslant }}\) for the second player. In this situation, we say that there is a one-sided solution. It is one-sided because the equality may hold for only one of the players. Moreover, even if it holds for both players, the one-sided value of the game, i.e., the BGR (the estimate of the DM's payoff), is different for the two players. The term solution is used because this estimate and the optimal player strategies do not depend on her information about the opponent's actions. A good property of MC games \({{\Gamma }^{0}}\) with a one-sided solution is the absence of a struggle for the order of moves (as in the scalar case [19]).

For the antagonistic game in normal form, the existence of a one-sided solution for one of the players is equivalent to its existence for the other one due to identities (3.4). Therefore, the existence of a one-sided solution in \({{\Gamma }^{A}}\) implies the existence of a classical solution in the sense conventional for scalar games, i.e., the equality of all the BGRs. (For zero-sum MC games, this is not the case.) By the definition of the antagonistic game, one of the players in it is guided by guaranteed estimates. For her, the existence of a one-sided solution is independent of which game she takes part in. Therefore, this property is characteristic of \({{\Gamma }^{A}}\). It is also important for \({{\Gamma }^{0}}\) and for making decisions in MC problems with uncertainty. Let us examine the conditions for the existence of a one-sided solution for the maximizing and minimizing players in an arbitrary MC statement. (The question about the class of competitive games for which \({{f}_{ \leqslant }} = {{\mathcal{F}}_{ \leqslant }}\) was first considered in [10] in the context of the \({{\Gamma }^{A}}\) interpretation in the analysis of cooperative games.)
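For finite games, the existence of a one-sided solution can be probed numerically via the coincidence criteria presented next, which compare the scalar maximin and minimax of the Germeier-scalarized criteria (condition (4.2) below). A minimal Python sketch with hypothetical payoffs:

```python
import numpy as np

# Hypothetical finite game with positive payoffs: Phi[x, y, i].
Phi = np.array([[[3.0, 3.0], [1.0, 1.0]],
                [[1.0, 1.0], [3.0, 3.0]]])

def germeier(mu):
    """Germeier convolution min_{i: mu_i > 0} phi_i / mu_i as an (x, y) matrix."""
    I = mu > 0
    return (Phi[..., I] / mu[I]).min(axis=2)

def theta_minus(mu):
    return germeier(mu).min(axis=1).max()   # scalar maximin of the convolution

def theta_plus(mu):
    return germeier(mu).max(axis=0).min()   # scalar minimax of the convolution

mu = np.array([0.5, 0.5])
print(theta_minus(mu), theta_plus(mu))  # 2.0 6.0
```

Since \({{\theta }^{ - }}[\mu ] = 2 < 6 = {{\theta }^{ + }}[\mu ]\) for \(\mu = (1{\text{/}}2,\,1{\text{/}}2)\), the required equality fails for this weight vector, and in this example \({{f}_{ \leqslant }} \ne {{\mathcal{F}}_{ \leqslant }}\): the first player has no one-sided solution.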

The criteria of coincidence of \({{f}_{ \leqslant }}\) and \({{\mathcal{F}}_{ \leqslant }}\) or \({{\mathcal{F}}_{ \geqslant }}\) and \({{f}_{ \geqslant }}\) (MC maximin and MC minimax for any player) were obtained in [13, 20] in the form corresponding to the use of Germeier’s scalarization [1, 6] by both DMs. In particular, for the equality of two sets \({{f}_{ \leqslant }} = {{\mathcal{F}}_{ \leqslant }}\) or

$${\text{Max}}\bigcup\limits_{x \in X} \,\bigcap\limits_{y \in Y} \,\{ \psi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\psi \leqslant \Phi (x,y)\} = {\text{Max}}\bigcap\limits_{y \in Y} \,\bigcup\limits_{x \in X} \,\{ \psi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\psi \leqslant \Phi (x,y)\} $$
(4.1)

of the MC maximin and the MC minimax of the first player, it is necessary and sufficient that \(\forall \mu \in M\;\mathop = \limits^{{\text{def}}} \;\left\{ {\mu \geqslant 0\,{\text{|}}\,\sum\nolimits_{i = 1}^n {{{\mu }_{i}} = 1} } \right\}\) the following numerical equalities hold

$${{\theta }^{{ - \,}}}[\mu ]\;\mathop = \limits^{{\text{def}}} \;\,\mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{y \in Y} \left\{ {\mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x,y){\text{/}}{{\mu }_{i}}} \right\} = {{\theta }^{{ + \,}}}[\mu ]\;\mathop = \limits^{{\text{def}}} \;\mathop {\min}\limits_{y \in Y} \mathop {\max}\limits_{x \in X} \left\{ {\mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x,y){\text{/}}{{\mu }_{i}}} \right\},$$
(4.2)

where \(I(\mu )\;\mathop = \limits^{{\text{def}}} \;\{ i = \overline {1,n} \,{\text{|}}\,{{\mu }_{i}} > 0\} \). Conditions (4.2) are based on the representation of the sets of estimates to be maximized on the left- and right-hand sides of (4.1) using Germeier’s scalarization [20]

$$\begin{gathered} \bigcup\limits_{x \in X} \,\bigcap\limits_{y \in Y} \,\{ \psi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\psi \leqslant \Phi (x,y)\} = \bigcup\limits_{\mu \in M} \,\prod\limits_{i = 1}^n \,[0,{{\theta }^{{ - \,}}}[\mu ]{{\mu }_{i}}], \\ \bigcap\limits_{y \in Y} \,\bigcup\limits_{x \in X} \,\{ \psi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\psi \leqslant \Phi (x,y)\} = \bigcup\limits_{\mu \in M} \,\prod\limits_{i = 1}^n \,[0,{{\theta }^{{ + \,}}}[\mu ]{{\mu }_{i}}], \\ \end{gathered} $$
(4.3)

where \({{\theta }^{ - }}\), \({{\theta }^{ + }}\) are defined in (4.2). The same applies to the minimizing DM [13]: the MC maximin coincides with the MC minimax in the guaranteed estimates of the second player, i.e., \({{f}_{ \geqslant }} = {{\mathcal{F}}_{ \geqslant }}\) or

$${\text{Min}}\bigcap\limits_{x \in X} \,\bigcup\limits_{y \in Y} \,\{ \psi \in {{\mathbb{R}}^{n}}\,{\text{|}}\,\psi \geqslant \Phi (x,y)\} = {\text{Min}}\bigcup\limits_{y \in Y} \,\bigcap\limits_{x \in X} \,\{ \psi \in {{\mathbb{R}}^{n}}\,{\text{|}}\,\psi \geqslant \Phi (x,y)\} ,$$
(4.4)

if and only if \(\forall \mu \in M\) it holds that

$$\mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{y \in Y} \left\{ {\mathop {\max}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x,y){\text{/}}{{\mu }_{i}}} \right\} = \mathop {\min}\limits_{y \in Y} \mathop {\max}\limits_{x \in X} \left\{ {\mathop {\max}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x,y){\text{/}}{{\mu }_{i}}} \right\}.$$
(4.5)

Note the difference in the signs of the minimum and maximum in the definitions of Germeier’s scalarizing function (the GSF) for the first and the second players (the GSF values are shown in braces in (4.2) and (4.5)).
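When \(X\) and \(Y\) are small finite sets, conditions (4.2) can be checked by brute force. The following minimal sketch (in Python; the two-criteria \(2 \times 2\) payoffs are made up for illustration and are not taken from the paper's examples) computes \({{\theta }^{ - }}[\mu ]\) and \({{\theta }^{ + }}[\mu ]\) on a sample of weights:

```python
# Brute-force check of condition (4.2) on a hypothetical finite game.
# phi[i][x][y]: value of partial criterion i at the outcome (x, y).
phi = [
    [[3.0, 1.0],
     [0.0, 2.0]],   # criterion 1
    [[1.0, 2.0],
     [3.0, 0.0]],   # criterion 2
]
n, X, Y = 2, range(2), range(2)

def gsf(x, y, mu):
    """Germeier scalarizing function of the first player:
    min over criteria with positive weight of phi_i(x, y) / mu_i."""
    return min(phi[i][x][y] / mu[i] for i in range(n) if mu[i] > 0)

def theta_minus(mu):   # MC maximin of the GSF, cf. (4.2)
    return max(min(gsf(x, y, mu) for y in Y) for x in X)

def theta_plus(mu):    # MC minimax of the GSF, cf. (4.2)
    return min(max(gsf(x, y, mu) for x in X) for y in Y)

# Sample the weight simplex M; (4.2) requires equality for every mu.
for m1 in [0.0, 0.25, 0.5, 0.75, 1.0]:
    mu = (m1, 1.0 - m1)
    print(mu, theta_minus(mu), theta_plus(mu))
```

For this hypothetical game the two values coincide for interior weights such as \(\mu = (0.5,\,0.5)\) but differ at the vertex \(\mu = (1,\,0)\); since (4.2) must hold for every \(\mu \in M\), the game has no one-sided solution for the first player in pure strategies.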

We are not aware of other general conditions for (4.1) and (4.4). In particular, the use of linear scalarization [1, 8] does not give an adequate parameterization of the BGRs [4] and does not allow one to derive criteria for their equality. The satisfaction of (4.2) or (4.5) for all \(\mu \in M\) implies the equalities

$$\mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{y \in Y} {{\varphi }_{i}}(x,y) = \mathop {\min}\limits_{y \in Y} \mathop {\max}\limits_{x \in X} {{\varphi }_{i}}(x,y)\quad \forall i = \overline {1,n} ,$$

i.e., the existence of solutions of the scalar games with each individual partial criterion. However, the examples in [10] show that this condition is insufficient for (4.1) to hold; it is also insufficient for (4.4). Therefore, the use of mixed strategies is insufficient for the existence of a one-sided solution in a finite MC game in normal form (see [10, 11, 13, 21]), even though mixed strategies ensure the equality of the maximin and minimax with respect to each individual averaged \({{\varphi }_{i}}\). The application of mixed strategies by the DM extends the set of estimates that are guaranteed for her (i.e., the left-hand sides of (4.3) if we consider the first player); the optimal estimates among them determine her BGR. However, this does not necessarily give equalities (4.2) or (4.5). The point is that the inner minimum or maximum over \(i \in I(\mu )\) with \({\text{|}}I(\mu ){\text{|}} > 1\) destroys the customary bilinearity of the convolution of mathematical expectations with respect to mixed strategies. After the partial criteria are averaged, the function in braces in (4.2) or (4.5) is merely concave or convex with respect to the strategies of both players rather than concave–convex.

Thus, the standard technique of the mixed extension of the MC game does not help derive conditions (4.2) and (4.5) for the existence of solutions of the corresponding scalar games with the payoff function equal to the minimum or maximum over \(i \in I(\mu )\) for each \(\mu \in M;\) i.e., it does not yield relations of type (4.1) and (4.4). A different situation occurs when the corresponding DM averages the GSF of the partial criteria rather than the partial criteria themselves. In this case, the bilinearity of the GSF expectation ensures the equality of the maximin and minimax for all \(\mu \in M\). The averaged GSF also remains concave–convex when, as proposed in [5], the partial criteria are averaged over the player’s own mixed strategy while the GSF itself is averaged with respect to the opponent’s strategy. Thus, we can obtain a one-sided solution of the game if we reconsider the BGR concept for this case (taking into account the modification of the mixed extension) rather than immediately replace the partial criteria by their mean values.

Let us formalize this idea for the finite MC zero-sum game in which the first player is guided by the GSF in her decision making. Since the existence of a one-sided solution for one player is independent of the other player, it is not necessary to refine the optimality principle for the minimizing player; however, assume for definiteness that it is the same as for the first player. Assume that \(X\) and \(Y\) are finite sets and \(p\) and \(q\) are the players’ mixed strategies on these sets. Denote by \(P\) and \(Q\) the sets of corresponding mixed strategies. Let DM-1 and DM-2 estimate their MC mean payoffs and losses using the following mean GSFs with the parameters \(\mu \) and \(\nu \in M{\text{:}}\)

$${{G}_{1}}(p,q)[\mu ]\;\mathop = \limits^{{\text{def}}} \;\sum\limits_{y \in Y} \,q(y)\mathop {\min}\limits_{i \in I(\mu )} \sum\limits_{x \in X} \,p(x){{\varphi }_{i}}(x,y){\text{/}}{{\mu }_{i}},\quad {{G}_{2}}(p,q)[\nu ]\;\mathop = \limits^{{\text{def}}} \;\sum\limits_{x \in X} \,p(x)\mathop {\max}\limits_{i \in I(\nu )} \sum\limits_{y \in Y} \,q(y){{\varphi }_{i}}(x,y){\text{/}}{{\nu }_{i}}.$$

The objective vector function of each player is replaced by a parametric family of scalar functions: the functions in the first family are maximized by the first player, and the functions in the second family are minimized by the second player. A justification of this rule of averaging MC games under Germeier’s scalarization can be found in [5] (it allows the DM to avoid the overly optimistic estimates that occur when the players use linear scalarization following [7]).
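For finite games, the mean GSFs are cheap to evaluate directly. The sketch below (Python; the payoffs and the uniform mixed strategies are made up for illustration) computes \({{G}_{1}}(p,q)[\mu ]\) and \({{G}_{2}}(p,q)[\nu ]\); note that each player's own strategy is averaged inside the min or max over the criteria, while the opponent's strategy is averaged outside it.

```python
# Evaluating the mean GSFs G1 and G2 at given mixed strategies.
# The payoffs phi[i][x][y] below are hypothetical.
phi = [
    [[3.0, 1.0],
     [0.0, 2.0]],   # criterion 1
    [[1.0, 2.0],
     [3.0, 0.0]],   # criterion 2
]
n = 2

def G1(p, q, mu):
    """Mean GSF of the maximizing player: expectation over the opponent's
    strategy q of the min over criteria of the p-averaged criteria."""
    return sum(
        q[y] * min(
            sum(p[x] * phi[i][x][y] for x in range(len(p))) / mu[i]
            for i in range(n) if mu[i] > 0
        )
        for y in range(len(q))
    )

def G2(p, q, nu):
    """Mean GSF of the minimizing player: expectation over the opponent's
    strategy p of the max over criteria of the q-averaged criteria."""
    return sum(
        p[x] * max(
            sum(q[y] * phi[i][x][y] for y in range(len(q))) / nu[i]
            for i in range(n) if nu[i] > 0
        )
        for x in range(len(p))
    )

p = q = (0.5, 0.5)           # uniform mixed strategies
print(G1(p, q, (0.5, 0.5)))  # averaged guaranteed-type estimate for DM-1
print(G2(p, q, (0.5, 0.5)))  # averaged weak-type estimate for DM-2
```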

Define the mixed extension of the original MC game \({{\Gamma }^{0}}\) consistent with Germeier’s scalarization as the parametric family of scalar games in normal form

$$\overline \Gamma _{G}^{0}\;\mathop = \limits^{{\text{def}}} \;\left\{ {\left\langle {P,Q,{{G}_{1}}(p,q)[\mu ], - {{G}_{2}}(p,q)[\nu ]} \right\rangle \,{\text{|}}\,\mu ,\nu \in M} \right\}.$$

Hence, we determine the MC mean maximin and the MC mean minimax for DM-1 by parameterizing, via Germeier’s scalarization, the maximin and minimax of the mean GSFs

$$\begin{gathered} {{{\bar {f}}}_{ \leqslant }}\;\mathop = \limits^{{\text{def}}} \;\bigcup\limits_{\mu \in M} \,\mu \mathop {\max}\limits_{(p(x)|x \in X) \in P} \mathop {\min}\limits_{(q(y)|y \in Y) \in Q} {{G}_{1}}(p,q)[\mu ], \\ {{{\bar {\mathcal{F}}}}_{ \leqslant }}\;\mathop = \limits^{{\text{def}}} \;\bigcup\limits_{\mu \in M} \,\mu \mathop {\min}\limits_{(q(y)|y \in Y) \in Q} \mathop {\max}\limits_{(p(x)|x \in X) \in P} {{G}_{1}}(p,q)[\mu ] \\ \end{gathered} $$
(4.6)

rather than after averaging the original partial criteria \({{\varphi }_{i}}(x,y)\) with respect to \(p\) and \(q\) [7].

Since the function \({{G}_{1}}(p,q)[\mu ]\) is concave in \(p\) and linear in \(q\), it holds that

$$\mathop {\max}\limits_{(p(x)|x \in X) \in P} \mathop {\min}\limits_{(q(y)|y \in Y) \in Q} {{G}_{1}}(p,q)[\mu ] = \mathop {\min}\limits_{(q(y)|y \in Y) \in Q} \mathop {\max}\limits_{(p(x)|x \in X) \in P} {{G}_{1}}(p,q)[\mu ]\quad \forall \mu \in M.$$
(4.7)

Equalities (4.7) imply the equality \({{\bar {f}}_{ \leqslant }} = {{\bar {\mathcal{F}}}_{ \leqslant }}\), which corresponds to the existence of a one-sided solution in mixed strategies when DM-1 uses Germeier’s scalarization and averages the GSF with respect to the opponent’s mixed strategy for estimating her own MC mean BGR. Since this result is extremely important, we state it as the following proposition.

Proposition 1. The MC game \(\overline \Gamma _{G}^{0}\) has a one-sided solution in mixed strategies for the first player.
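The one-sided mixed value can be computed explicitly in small cases. Since \({{G}_{1}}\) is linear in \(q\), the inner minimum over \(q\) is attained at a pure \(y\), so the maximin of \({{G}_{1}}(p,q)[\mu ]\) is the value of an auxiliary scalar matrix game whose columns are the pairs \((y,i)\), \(i \in I(\mu )\), with entries \({{\varphi }_{i}}(x,y){\text{/}}{{\mu }_{i}}\); by (4.7) this value is also the minimax. The following sketch (Python, hypothetical payoffs; the first player has only two pure strategies, so the matrix game can be solved exactly without an LP solver) illustrates this:

```python
# One-sided mixed value for the first player via the reduction
# max_p min_q G1(p,q)[mu] = max_p min_{(y,i)} sum_x p(x) phi_i(x,y)/mu_i.
# Payoffs are hypothetical; with p = (t, 1 - t), the lower envelope of the
# column lines is concave piecewise linear, so its maximum is attained at
# t = 0, t = 1, or an intersection of two column lines.
phi = [
    [[3.0, 1.0],
     [0.0, 2.0]],   # criterion 1
    [[1.0, 2.0],
     [3.0, 0.0]],   # criterion 2
]

def one_sided_value(mu):
    # columns of the auxiliary scalar matrix game: one per pair (y, i)
    cols = [
        (phi[i][0][y] / mu[i], phi[i][1][y] / mu[i])
        for y in (0, 1)
        for i in range(2) if mu[i] > 0
    ]

    def low(t):  # lower envelope of the column payoffs at p = (t, 1 - t)
        return min(t * a + (1 - t) * b for a, b in cols)

    cand = [0.0, 1.0]
    for j in range(len(cols)):
        for k in range(j + 1, len(cols)):
            a0, b0 = cols[j]
            a1, b1 = cols[k]
            d = (a0 - b0) - (a1 - b1)
            if abs(d) > 1e-12:
                t = (b1 - b0) / d
                if 0.0 <= t <= 1.0:
                    cand.append(t)
    return max(low(t) for t in cand)

print(one_sided_value((1.0, 0.0)))
print(one_sided_value((0.5, 0.5)))
```

For \(\mu = (1,\,0)\), the pure-strategy maximin and minimax of the GSF in this game are 1 and 2, while the one-sided mixed value 1.5 lies strictly between them: the GSF-consistent mixed extension closes the gap, as Proposition 1 asserts.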

The result for the second player is the same, and we do not discuss it here in more detail. We only note that the scalar games in the family \(\overline \Gamma _{G}^{0}\) are noncompetitive even for \(\mu = \nu \) due to the differences in the GSFs. Antagonism of interests in the case of identical scalarization coefficients does occur in the games of the similar extension \(\overline \Gamma _{G}^{A}\) of the antagonistic MC game: the GSF of the minimizing player, who makes decisions on the basis of weak estimates, is specified using the minimum of the partial criteria (reduced by the scalarizing function coefficients), and this GSF coincides with the GSF of the maximizing player, who is guided by guaranteed estimates. Let

$$\overline \Gamma _{G}^{A}\;\mathop = \limits^{{\text{def}}} \;\left\{ {\left\langle {P,Q,{{G}_{1}}(p,q)[\mu ], - G_{2}^{A}(p,q)[\nu ]} \right\rangle \,{\text{|}}\,\mu ,\nu \in M} \right\},$$

where

$$G_{2}^{A}(p,q)[\nu ]\;\mathop = \limits^{{\text{def}}} \;\sum\limits_{x \in X} \,p(x)\mathop {\min}\limits_{i \in I(\nu )} \sum\limits_{y \in Y} \,q(y){{\varphi }_{i}}(x,y){\text{/}}{{\nu }_{i}}.$$

Like the game \(\overline \Gamma _{G}^{0}\), the game \(\overline \Gamma _{G}^{A}\) has a one-sided solution not only for the first but also for the second player if the second player uses the MC mean BGR by analogy with (4.6). Furthermore, for the MC averaging under examination, equalities (3.4) can be generalized. However, to avoid cumbersome manipulations here, we simply formulate the ultimate result for the game \(\overline \Gamma _{G}^{A}\), taking into account Section 3 and the existence of a one-sided solution for the minimizing DM.

Proposition 2. The game \(\overline \Gamma _{G}^{A}\) has a classical solution of the MC game in normal form.

Here the classical solution of the MC game is understood, as above, in complete accordance with the solution of the scalar game in normal form but taking into account the MC mixed extension introduced above. This implies that the MC mean least guaranteed loss of the second player in weak estimates (\(\nless\)) coincides with the MC mean BGR of the first player (4.6) independently of the order of moves (because \({{\bar {f}}_{ \leqslant }} = {{\bar {\mathcal{F}}}_{ \leqslant }}\) both for the game \(\overline \Gamma _{G}^{A}\) and for the game \(\overline \Gamma _{G}^{0}\)). Recall that without the GSF averaging, the MC game may have no one-sided solution in mixed strategies [10, 11]. Therefore, Proposition 2 is not valid for arbitrary antagonistic games in normal form.

For the zero-sum MC game, even if one-sided solutions exist, a common game value usually does not exist. Therefore, the opponents have a nonempty bargaining set even though their payoffs are opposite. We have already mentioned that the use of several criteria by the DMs corresponds to uncertainty. In games with uncertain factors, the opposition of the objective functions turns out to be ambiguous [22] (also see Remark 1 in Section 3). If the attitudes of the DMs to the subjective uncertainty are identical (rather than opposite, as their payoffs are in zero-sum games), then solutions can exist that are better than the BGRs of both players (e.g., see Fig. 3). Informally, this can be explained using the operations research methodology.

The two players are actually two teams, each with its own DM and its own OR who supports decision making in the team. From the viewpoint of the ORs, every joint solution (game outcome) that ensures to each team a result not worse than its BGR and is a nondominated response to the opponent’s strategy can provide a basis for a compromise. This approach (we consider it in Section 5 in more detail) elaborates the concept of Nash equilibrium proposed in [7] for zero-sum MC games and generalizes the concept of imputation in the scalar two-person game [19]. An imputation in a two-person game is defined as a Pareto optimal outcome that is not worse for both players than their BGRs in the absence of information. In the context of the further comparison with imputations, we note that, as in the scalar case, any outcome of the zero-sum MC game is Pareto optimal with respect to the system of all partial criteria of both players. For this reason, we do not introduce this condition as an additional constraint.

5 APPLICATION OF GERMEIER’S SCALARIZATION FOR THE ANALYSIS OF EQUILIBRIUM IN THE MC GAME

According to [7], the equilibrium situation for the zero-sum MC game \({{\Gamma }^{0}}\) is any pair of strategies (\({{x}^{0}},{{y}^{0}}\)) for which

$${{x}^{0}} \in {\text{Arg}}\,\mathop {{\text{Max}}}\limits_{x \in X} \,\Phi (x,{{y}^{0}}),\quad {{y}^{0}} \in {\text{Arg}}\,\mathop {{\text{Min}}}\limits_{y \in Y} \,\Phi ({{x}^{0}},y).$$
(5.1)

Such pairs are called Shapley equilibria, or simply equilibria (in the MC case). Depending on the scalarization method for the Slater MC maximum and MC minimum, the set of points (5.1) reduces to the corresponding set of equilibria in two-person scalar games whose payoff functions are scalarizations of the partial criteria, taken over all choices of the scalarization coefficients for each player. Thus, the selection of the scalarizing function specifies a parameterization of set (5.1). Let us analyze the possibility of adding the comparison with the BGR to (5.1) to obtain an equilibrium imputation as a solution of the game \({{\Gamma }^{0}}\).
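For finite games, the set of points (5.1) can be enumerated directly: \(({{x}^{0}},{{y}^{0}})\) satisfies (5.1) iff \(\Phi ({{x}^{0}},{{y}^{0}})\) is Slater-maximal over \(x\) against \({{y}^{0}}\) and Slater-minimal over \(y\) against \({{x}^{0}}\). A brute-force sketch (Python, hypothetical payoffs):

```python
# Brute-force enumeration of the Shapley equilibria (5.1) in a small finite
# MC game with hypothetical payoffs phi[i][x][y].
phi = [
    [[3.0, 1.0],
     [0.0, 2.0]],   # criterion 1
    [[1.0, 2.0],
     [3.0, 0.0]],   # criterion 2
]
n, X, Y = 2, range(2), range(2)

def Phi(x, y):
    return tuple(phi[i][x][y] for i in range(n))

def slater_max_in_x(x0, y0):
    # no x strictly improves every criterion against y0
    return not any(
        all(phi[i][x][y0] > phi[i][x0][y0] for i in range(n)) for x in X
    )

def slater_min_in_y(x0, y0):
    # no y strictly decreases every criterion against x0
    return not any(
        all(phi[i][x0][y] < phi[i][x0][y0] for i in range(n)) for y in Y
    )

R0 = [
    (x0, y0)
    for x0 in X for y0 in Y
    if slater_max_in_x(x0, y0) and slater_min_in_y(x0, y0)
]
Z0 = [Phi(x0, y0) for (x0, y0) in R0]
print(R0)
print(Z0)
```

For this made-up game, all four outcomes turn out to be Shapley equilibria, which illustrates how weak (Slater) optimality makes the equilibrium set large in MC games.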

For the description of the BGR of the first player, we use representation (4.3) of the sets of guaranteed estimates, and assume that the standard regularity conditions

$$\begin{gathered} \bigcup\limits_{x \in X} \,\bigcap\limits_{y \in Y} \,\{ \psi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\psi \leqslant \Phi (x,y)\} = {\text{cl}}\left\{ {\bigcup\limits_{x \in X} \,\bigcap\limits_{y \in Y} \,\{ \psi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,0 < \psi \leqslant \Phi (x,y)\} } \right\}, \\ \bigcap\limits_{y \in Y} \,\bigcup\limits_{x \in X} \,\{ \psi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\psi \leqslant \Phi (x,y)\} = {\text{cl}}\left\{ {\bigcap\limits_{y \in Y} \,\bigcup\limits_{x \in X} \,\{ \psi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,0 < \psi \leqslant \Phi (x,y)\} } \right\} \\ \end{gathered} $$
(5.2)

hold, where the symbol \({\text{cl}}\) denotes the closure in \({{\mathbb{R}}^{n}}\). Then, according to [20], we have

$${{f}_{ \leqslant }} = \bigcup\limits_{\mu \in M} \,{{\theta }^{ - }}[\mu ]\mu ,\quad {{\mathcal{F}}_{ \leqslant }} = \bigcup\limits_{\mu \in M} \,{{\theta }^{ + }}[\mu ]\mu ,$$
(5.3)

where \({{\theta }^{ - }}\) and \({{\theta }^{ + }}\) are defined in (4.2). (In the absence of regularity, the right-hand sides of (5.3) lack a number of insignificant elements of the sets on the left-hand sides; more precisely, they do not contain some Slater points that are not Pareto points [18, 20].)

To determine the second player’s BGR, we prove a representation similar to (4.3). Define the \(n\)-dimensional vector \(\bar {c}\) with identical components equal to a constant \(C\) satisfying

$$C \geqslant \mathop {\max}\limits_{(x,y) \in X \times Y} \mathop {\max}\limits_{i = \overline {1,n} } {{\varphi }_{i}}(x,y),$$

and ignore, for the second player, the estimates exceeding \(\bar {c}\) because they are not informative. Then, rewrite (3.1) and (3.3) as

$$\begin{gathered} {{f}_{ \geqslant }}\;\mathop = \limits^{{\text{def}}} \;{\text{Min}}\bigcap\limits_{x \in X} \,\bigcup\limits_{y \in Y} \,\{ \psi \in {{\mathbb{R}}^{n}}\,{\text{|}}\,\bar {c} \geqslant \psi \geqslant \Phi (x,y)\} , \\ {{\mathcal{F}}_{ \geqslant }}\;\mathop = \limits^{{\text{def}}} \;{\text{Min}}\bigcup\limits_{y \in Y} \,\bigcap\limits_{x \in X} \,\{ \psi \in {{\mathbb{R}}^{n}}\,{\text{|}}\,\bar {c} \geqslant \psi \geqslant \Phi (x,y)\} . \\ \end{gathered} $$

Proposition 3. The BGR of the second player can be represented using the GSF as

$${{f}_{ \geqslant }} = {\text{Min}}\bigcup\limits_{\nu \in M} \,\left\{ {\prod\limits_{i \in I(\nu )} \,[{{\nu }_{i}}\mathop {\max }\limits_{x \in X} \mathop {\min }\limits_{y \in Y} \mathop {\max }\limits_{j \in I(\nu )} {{\varphi }_{j}}(x,y){\text{/}}{{\nu }_{j}},C]\prod\limits_{i\not \in I(\nu )} \,\{ C\} } \right\},$$
(5.4)
$${{\mathcal{F}}_{ \geqslant }} = {\text{Min}}\bigcup\limits_{\nu \in M} \,\left\{ {\prod\limits_{i \in I(\nu )} \,[{{\nu }_{i}}\mathop {\min}\limits_{y \in Y} \mathop {\max}\limits_{x \in X} \mathop {\max}\limits_{j \in I(\nu )} {{\varphi }_{j}}(x,y){\text{/}}{{\nu }_{j}},C]\prod\limits_{i\not \in I(\nu )} \{ C\} } \right\}.$$
(5.5)

Proof. We give an outline of the proof. By directly repeating the manipulations in [1, 6], we verify the validity of the parameterization using Germeier’s scalarization in MC problems for the maximizing player and a vector criterion with all nonpositive components. The results obtained in [20] for nonnegative partial criteria can also be extended to this case. Now, the validity of (5.4) and (5.5) follows from (4.3) if we assume that the second player maximizes the vector function \( - \Phi (x,y)\) with respect to \(y \in Y\) and the first player minimizes it with respect to \(x \in X.\)

Indeed, by exchanging the maximizing and minimizing players in (4.3) and using the second equality in (4.3), we write for the criterion \( - \Phi (x,y)\)

$$\begin{gathered} {\text{Max}}\bigcap\limits_{x \in X} \,\bigcup\limits_{y \in Y} \,\{ \chi \in {{\mathbb{R}}^{n}}\,{\text{|}}\, - {\kern 1pt} \bar {c} \leqslant \chi \leqslant - \Phi (x,y)\} \\ = {\text{Max}}\bigcup\limits_{\nu \in M} \,\left\{ {\prod\limits_{i \in I(\nu )} \,[ - C,{{\nu }_{i}}\mathop {\min}\limits_{x \in X} \mathop {\max}\limits_{y \in Y} \mathop {\min}\limits_{j \in I(\nu )} ( - {{\varphi }_{j}}(x,y){\text{/}}{{\nu }_{j}})]\prod\limits_{i\not \in I(\nu )} \,[ - C, - C]} \right\}, \\ \end{gathered} $$

where the intersection with the nonnegative orthant is replaced by the condition of being not less than \( - \bar {c}\) due to the nonpositivity of the payoff function \( - \Phi (x,y)\). For the same reason, for the components not included in the GSF with a given \(\nu \), we substituted \( - C\) for the zero values (for which purpose we had to separate the components \(i\not \in I(\nu )\) from the total Cartesian product). For the other components on the right-hand side, we use the transformation of the equality

$${{\chi }_{i}} \leqslant {{\nu }_{i}}\mathop {\min}\limits_{x \in X} \mathop {\max}\limits_{y \in Y} \mathop {\min}\limits_{j \in I(\nu )} ( - {{\varphi }_{j}}(x,y){\text{/}}{{\nu }_{j}}) = - {{\nu }_{i}}\mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{y \in Y} \mathop {\max}\limits_{j \in I(\nu )} {{\varphi }_{j}}(x,y){\text{/}}{{\nu }_{j}}.$$

Next, change the variables \(\chi \) for \(\psi \;\mathop = \limits^{{\text{def}}} \; - \chi \) on the left- and right-hand sides to pass to nonnegative values in the problem for the second player, who minimizes \(\Phi (x,y)\), to obtain (5.4). (Here, the degenerate interval \([C,C]\) is denoted by the single number \(\{ C\} \).) Formula (5.5) is derived from the first equality in (4.3) in a similar fashion (for an example illustrating formula (5.5), see Fig. 4).

Fig. 4. Representation of the second player’s BGR in \(\Gamma _{{21}}^{0}\) using Germeier’s scalarizing function.

In the case of regularity, an analog of (5.3) can be derived for representing the second player’s BGR using Germeier’s scalarization. However, we do not need such a concretization for the further reasoning. Note that the regularity conditions (5.2) can be weakened to take into account the cases when some of the sets of estimates on the left-hand side of (4.3) have an empty interior. This case is thoroughly considered in [20], where the transition to spaces of lower dimension is justified. Therefore, the regularity conditions are not too restrictive.

In the regular case, we can use the parametrization with the help of the GSF as suggested in [23] for describing the set of points (5.1) because condition (5.1) imposed on the equilibrium outcomes can be represented in the form \(({{x}^{0}},{{y}^{0}}) \in {{R}^{0}}\) (see [1, 6]), where

$$\begin{gathered} {{R}^{0}}\;\mathop = \limits^{{\text{def}}} \;\left\{ {({{x}^{0}},{{y}^{0}}) \in X \times Y\,{\text{|}}\,\exists \mu ,\;\nu \in M\,:{{x}^{0}} \in {\text{Arg}}\mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x,{{y}^{0}}){\text{/}}{{\mu }_{i}}} \right., \\ \left. {{{y}^{0}} \in {\text{Arg}}\mathop {\min}\limits_{y \in Y} \mathop {\max}\limits_{j \in I(\nu )} {{\varphi }_{j}}({{x}^{0}},y){\text{/}}{{\nu }_{j}}} \right\} \\ \end{gathered} $$
(5.6)

(in the irregular case, the set \({{R}^{0}}\) defined by (5.6) is narrower than the set defined by (5.1) due to insignificant Slater points that are not Pareto points). In this case, the outcomes in \({{R}^{0}}\) satisfy the relation

$$\mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}({{x}^{0}},{{y}^{0}}){\text{/}}{{\mu }_{i}} = \mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x,{{y}^{0}}){\text{/}}{{\mu }_{i}} \geqslant \mathop {\min}\limits_{y \in Y} \mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x,y){\text{/}}{{\mu }_{i}} = {{\theta }^{ + }}[\mu ] \geqslant {{\theta }^{ - }}[\mu ],$$

and for DM-2 they satisfy the relation

$$\mathop {\max}\limits_{j \in I(\nu )} {{\varphi }_{j}}({{x}^{0}},{{y}^{0}}){\text{/}}{{\nu }_{j}} \leqslant \mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{y \in Y} \mathop {\max}\limits_{j \in I(\nu )} {{\varphi }_{j}}(x,y){\text{/}}{{\nu }_{j}} \leqslant \mathop {\min}\limits_{y \in Y} \mathop {\max}\limits_{x \in X} \mathop {\max}\limits_{j \in I(\nu )} {{\varphi }_{j}}(x,y){\text{/}}{{\nu }_{j}}.$$

Thus, we conclude that any pair of strategies in \({{R}^{0}}\) (that is a Shapley equilibrium in the MC game) gives to each player a result that is not worse than the estimate that is guaranteed for the corresponding player with the same parameters of the GSF of her partial criteria. Now, we give a formal definition that allows the players that use guaranteed estimates to compare sets in the criterion space.

Definition 3 (see [2]). The Edgeworth–Pareto hull of an arbitrary set \(Z \subset \mathbb{R}_{ + }^{n}\) is defined as the set \(\{ \psi \in \mathbb{R}_{ + }^{n}\,{\text{|}}\,\exists z \in Z{\kern 1pt} :\;\psi \leqslant z\} \) in the maximization problem or \(\{ \psi \in {{\mathbb{R}}^{n}}\,{\text{|}}\,\exists z \in Z{\kern 1pt} :\;\psi \geqslant z\} \) in the minimization problem. These sets will be denoted by \(\mathop {{\text{eph}}}\nolimits_ \leqslant (Z)\) and \(\mathop {{\text{eph}}}\nolimits_ \geqslant (Z)\), respectively.

Now, we define the equilibrium value of the MC game \({{\Gamma }^{0}}\) as the set \({{Z}^{0}}\;\mathop = \limits^{{\text{def}}} \;\{ \Phi ({{x}^{0}},{{y}^{0}})\,|\,({{x}^{0}},{{y}^{0}}) \in {{R}^{0}}\} \). Then, the inequalities above imply the following result.

Proposition 4. In the case \({{Z}^{0}} \ne \emptyset \), it holds that

$${{\mathcal{F}}_{ \leqslant }} \subseteq \mathop {{\text{eph}}}\nolimits_ \leqslant ({{Z}^{0}}),\quad {{f}_{ \geqslant }} \subseteq \mathop {{\text{eph}}}\nolimits_ \geqslant ({{Z}^{0}}).$$
(5.7)

Proof. For every \(\psi \) on the left-hand side of (4.3) and, therefore, for \(\psi \in {{\mathcal{F}}_{ \leqslant }}\), (4.3) implies that \(\exists \mu \in M\): \(\forall i = \overline {1,n} \) \({{\psi }_{i}} \leqslant {{\theta }^{ + }}[\mu ]{{\mu }_{i}}\). Thus, we have

$$\forall i = \overline {1,n} \quad {{\psi }_{i}} \leqslant {{\mu }_{i}}\mathop {\min}\limits_{j \in I(\mu )} {{\varphi }_{j}}({{x}^{0}},{{y}^{0}}){\text{/}}{{\mu }_{j}},$$

as has been shown above. Therefore, \(\forall i \in I(\mu )\): \({{\psi }_{i}} \leqslant {{\mu }_{i}}{{\varphi }_{i}}({{x}^{0}},{{y}^{0}}){\text{/}}{{\mu }_{i}}\) = \({{\varphi }_{i}}({{x}^{0}},{{y}^{0}})\), and for \(k\not \in I(\mu )\) we have \({{\psi }_{k}} = 0\); hence, \(\psi \leqslant \Phi ({{x}^{0}},{{y}^{0}})\) (because \(\Phi \) is nonnegative); i.e., \(\psi \in \mathop {{\text{eph}}}\nolimits_ \leqslant ({{Z}^{0}})\). Similarly, for \(\psi \in {{f}_{ \geqslant }}\), we conclude from (5.4) that \({{\psi }_{k}} = C \geqslant {{\varphi }_{k}}({{x}^{0}},{{y}^{0}})\) for \({{\nu }_{k}} = 0\) and

$${{\psi }_{i}} \geqslant {{\nu }_{i}}\mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{y \in Y} \mathop {\max}\limits_{j \in I(\nu )} {{\varphi }_{j}}(x,y){\text{/}}{{\nu }_{j}} \geqslant {{\nu }_{i}}\mathop {\max}\limits_{j \in I(\nu )} {{\varphi }_{j}}({{x}^{0}},{{y}^{0}}){\text{/}}{{\nu }_{j}} \geqslant {{\nu }_{i}}{{\varphi }_{i}}({{x}^{0}},{{y}^{0}}){\text{/}}{{\nu }_{i}} = {{\varphi }_{i}}({{x}^{0}},{{y}^{0}})$$

for the other criteria indices \(i \in I(\nu )\), which gives the second inclusion in (5.7).
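The first inclusion in (5.7) can also be spot-checked numerically: by (5.3), in the regular case every point of \({{\mathcal{F}}_{ \leqslant }}\) has the form \({{\theta }^{ + }}[\mu ]\mu \), so it suffices to check that each such point is componentwise dominated by some outcome in \({{Z}^{0}}\). A sketch (Python, hypothetical payoffs; for this particular game every outcome satisfies (5.1), so \({{Z}^{0}}\) contains all four outcomes):

```python
# Numerical spot check of the first inclusion in (5.7): every guaranteed
# estimate theta_plus[mu] * mu of the first player must be componentwise
# dominated by some equilibrium outcome in Z0.  Payoffs are hypothetical;
# for this game all four outcomes satisfy (5.1).
phi = [
    [[3.0, 1.0],
     [0.0, 2.0]],   # criterion 1
    [[1.0, 2.0],
     [3.0, 0.0]],   # criterion 2
]
n, X, Y = 2, range(2), range(2)
Z0 = [tuple(phi[i][x][y] for i in range(n)) for x in X for y in Y]

def theta_plus(mu):   # MC minimax of the GSF, cf. (4.2)
    return min(
        max(
            min(phi[i][x][y] / mu[i] for i in range(n) if mu[i] > 0)
            for x in X
        )
        for y in Y
    )

ok = True
for k in range(101):                     # sample the weight simplex M
    mu = (k / 100.0, 1.0 - k / 100.0)
    point = tuple(theta_plus(mu) * m for m in mu)
    dominated = any(
        all(point[i] <= z[i] + 1e-12 for i in range(n)) for z in Z0
    )
    ok = ok and dominated
print(ok)
```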

By adding to (5.7) the relations \({{f}_{ \leqslant }} \subseteq \mathop {{\text{eph}}}\nolimits_ \leqslant ({{\mathcal{F}}_{ \leqslant }})\) and \({{\mathcal{F}}_{ \geqslant }} \subseteq \mathop {{\text{eph}}}\nolimits_ \geqslant ({{f}_{ \geqslant }})\) obtained in [13] (which once more emphasize the principal difference of the interpretation of the MC maximin and MC minimax used in this paper, both for the maximizing and the minimizing players, from their analogs used in [14]), we can summarize the specific features of multicriteriality in the zero-sum game. By the classification of scalar two-person games used in [19], the outcomes that are imputations (i.e., the outcomes that give both uninformed players results not worse than their BGRs) constitute the \(\alpha \)-core of the game, and the \(\beta \)-core is defined as the set of outcomes in the \(\alpha \)-core that give both players results not worse than their BGRs under the conditions when the players know the opponent’s move. The outcomes in \({{R}^{0}}\) (and, in the regular case, all Shapley equilibrium outcomes) in the MC game can be assigned both to the \(\alpha \)- and the \(\beta \)-core if these concepts are generalized to the multicriteria case by replacing the inequalities by the inclusion in the Edgeworth–Pareto hull. On the one hand, this corresponds to the value of the scalar zero-sum game, which coincides with the maximin and minimax of the objective function under the assumption that they are equal; i.e., the solution of such a game is included both in the \(\alpha \)- and in the \(\beta \)-core. On the other hand, (5.7) also holds when the MC maximin is not equal to the MC minimax for either player, while the equality of the MC BGRs of the players is possible only in the degenerate case.
In Section 6, we show that the nonemptiness of \({{R}^{0}}\) is not a rare case, i.e., good properties of the set \({{R}^{0}}\) as the solution of the MC game \({{\Gamma }^{0}}\) (and of the set \({{Z}^{0}}\) as its equilibrium value) are to a large extent caused by the fact that the MC BGRs are pessimistic.

In the antagonistic MC game, the estimate of the result for the informed second player can be represented (again ignoring the estimates exceeding \(\bar {c}\)) in the form

$${{f}_{{\not < }}} = {\text{Min}}\bigcup\limits_{\nu \in M} \,\left\{ {\prod\limits_{i \in I(\nu )} \,[{{\nu }_{i}}\mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{y \in Y} \mathop {\min}\limits_{j \in I(\nu )} {{\varphi }_{j}}(x,y){\text{/}}{{\nu }_{j}},C]\prod\limits_{i\not \in I(\nu )} \,\{ C\} } \right\},$$
(5.8)

where, by contrast to (5.4), the inner maximum over the indices of the partial criteria is replaced by the minimum in the lower bounds of the intervals to be multiplied. Hence, using (4.3) and the definition of \({{\theta }^{ - }}[\mu ]\), we can once more derive the equality \({{f}_{{\nless}}} = {{f}_{ \leqslant }}\), i.e., the existence of a solution of the antagonistic MC game \(\Gamma _{{12}}^{A}\) (and also of \(\Gamma _{{21}}^{A}\)). To find an analog of the MC Shapley equilibrium for the antagonistic MC game in normal form, we can replace the Slater MC minimum in the second membership relation in (5.1) by weak estimates (see Definition 1). However, we do not dwell on generalizing the standard definition (which is not used below); rather, we use Germeier’s scalarization and define the set

$$\begin{gathered} {{R}^{A}}\;\mathop = \limits^{{\text{def}}} \;\left\{ {({{x}^{A}},{{y}^{A}}) \in X \times Y\,{\text{|}}\,\exists \mu ,\;\nu \in M\,:{{x}^{A}} \in {\text{Arg}}\mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x,{{y}^{A}}){\text{/}}{{\mu }_{i}}} \right., \\ \left. {{{y}^{A}} \in {\text{Arg}}\mathop {\min}\limits_{y \in Y} \mathop {\min}\limits_{j \in I(\nu )} {{\varphi }_{j}}({{x}^{A}},y){\text{/}}{{\nu }_{j}}} \right\} \\ \end{gathered} $$
(5.9)

as the equilibrium solution of the MC game \({{\Gamma }^{A}}\). For its equilibrium value \({{Z}^{A}}\;\mathop = \limits^{{\text{def}}} \;\left\{ {\Phi ({{x}^{A}},{{y}^{A}})\,{\text{|}}\,({{x}^{A}},{{y}^{A}}) \in {{R}^{A}}} \right\}\), we prove the following expected result.

Proposition 5. In the case \({{Z}^{A}} \ne \emptyset \), it holds that \({{\mathcal{F}}_{ \leqslant }} \subseteq \mathop {{\text{eph}}}\nolimits_ \leqslant ({{Z}^{A}})\) and \({{f}_{{\nless}}} \subseteq \mathop {{\text{eph}}}\nolimits_{\nless} ({{Z}^{A}})\), where \(\mathop {{\text{eph}}}\nolimits_{\nless} (Z)\;\mathop = \limits^{{\text{def}}} \;\left\{ {\psi \in {{\mathbb{R}}^{n}}\,{\text{|}}\,\exists z \in Z\,:\psi \nless z} \right\}\).

Proof. The first inclusion is proved in the same way as the first inclusion in Proposition 4, with the superscripts 0 replaced by \(A\). To prove the second inclusion, we take \(\psi \in {{f}_{{\nless }}}\) and obtain from (5.8) that \({{\psi }_{k}} = C \geqslant {{\varphi }_{k}}({{x}^{A}},{{y}^{A}})\) for \({{\nu }_{k}} = 0\) and

$$\begin{gathered} {{\psi }_{{{{i}^{0}}}}} \geqslant {{\nu }_{{{{i}^{0}}}}}\mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{y \in Y} \mathop {\min}\limits_{j \in I(\nu )} {{\varphi }_{j}}(x,y){\text{/}}{{\nu }_{j}} \geqslant {{\nu }_{{{{i}^{0}}}}}\mathop {\min}\limits_{y \in Y} \mathop {\min}\limits_{j \in I(\nu )} {{\varphi }_{j}}({{x}^{A}},y){\text{/}}{{\nu }_{j}} = {{\nu }_{{{{i}^{0}}}}}\mathop {\min}\limits_{j \in I(\nu )} {{\varphi }_{j}}({{x}^{A}},{{y}^{A}}){\text{/}}{{\nu }_{j}} = \\ \, = {{\nu }_{{{{i}^{0}}}}}{{\varphi }_{{{{i}^{0}}}}}({{x}^{A}},{{y}^{A}}){\text{/}}{{\nu }_{{{{i}^{0}}}}} = {{\varphi }_{{{{i}^{0}}}}}({{x}^{A}},{{y}^{A}}) \\ \end{gathered} $$

for the index \({{i}^{0}} \in I(\nu )\) that realizes the minimum of \({{\varphi }_{j}}({{x}^{A}},{{y}^{A}}){\text{/}}{{\nu }_{j}}\). The first equality follows from (5.9). The inequality for \({{i}^{0}}\) proves the second inclusion in Proposition 5.

Thus, for the antagonistic MC game in normal form, defining the MC equilibrium by analogy with (5.6), we have obtained the same property as (5.7); this property shows that the equilibrium outcomes give both players a result not worse than their BGRs. The existence of equilibrium in the game \({{\Gamma }^{A}}\) imposes certain constraints on its parameters because the equality \({{f}_{{\nless }}} = {{f}_{ \leqslant }}\), along with the inclusion \({{f}_{ \leqslant }} \subseteq \mathop {{\text{eph}}}\nolimits_ \leqslant ({{\mathcal{F}}_{ \leqslant }})\), implies the following corollary to Proposition 5: \({{f}_{ \leqslant }} \subseteq \mathop {{\text{eph}}}\nolimits_ \leqslant ({{Z}^{A}})\) and \({{f}_{ \leqslant }} \subseteq \mathop {{\text{eph}}}\nolimits_{\nless } ({{Z}^{A}})\). It follows that the intersection \(\mathop {{\text{eph}}}\nolimits_ \leqslant ({{Z}^{A}})\bigcap {{\text{ep}}{{{\text{h}}}_{{\nless }}}({{Z}^{A}})} \) is nonempty whenever \({{Z}^{A}} \ne \emptyset \). (For the example in Figs. 2 and 3, this is seen in Fig. 5.)

Fig. 5. Relation between the MC equilibrium and BGR in the MC game \({{\Gamma }^{A}}\).
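For finite sets of outcome vectors, membership in the hulls \(\mathop {{\text{eph}}}\nolimits_ \leqslant \) and \(\mathop {{\text{eph}}}\nolimits_{\nless} \) used in Proposition 5 can be checked by direct enumeration. The following sketch (with a hypothetical two-criterion set \(Z\), not taken from the paper) implements the two definitions: \(\psi \leqslant z\) componentwise for some \(z \in Z\), and \(\psi \nless z\) (i.e., \({{\psi }_{i}} \geqslant {{z}_{i}}\) for at least one coordinate) for some \(z \in Z\):

```python
# Membership tests for the Edgeworth-Pareto-type hulls eph_<= and eph_not<.
# Z is a finite set of outcome vectors (hypothetical values, for illustration).

def in_eph_le(psi, Z):
    """exists z in Z with psi <= z in every coordinate."""
    return any(all(p <= zi for p, zi in zip(psi, z)) for z in Z)

def in_eph_nless(psi, Z):
    """exists z in Z with psi not< z, i.e., psi_i >= z_i for at least one i."""
    return any(any(p >= zi for p, zi in zip(psi, z)) for z in Z)

Z = [(2.0, 3.0), (4.0, 1.0)]  # hypothetical set of equilibrium values
```

Any vector in both hulls witnesses the nonemptiness of the intersection discussed above; in particular, every \(z \in Z\) itself belongs to both.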

Let us investigate the conditions for the existence of equilibrium, paying particular attention to the zero-sum MC game because we are mainly interested in the existence of a solution in this game. By the solution of the games \({{\Gamma }^{0}}\) and \({{\Gamma }^{A}}\), we understand in Section 6 the equilibria \({{R}^{0}}\) and \({{R}^{A}}\), owing to the proven property (5.7) and Proposition 5 for the equilibrium values \({{Z}^{0}}\) and \({{Z}^{A}}\).

6 THE EXISTENCE OF EQUILIBRIUM IN MC GAMES

In Section 5, we reduced the question of the existence of an equilibrium imputation solution of the zero-sum MC game to the nonemptiness of \({{R}^{0}}\) (5.6). As long as mixed strategies are not involved, it does not matter which scalarizing function the DMs use; in the regular case, the parameterization of the set of outcomes (5.1) in form (5.6) using Germeier’s scalarization remains the same. This parameterization shows that condition (5.6), by contrast to conditions (4.2) and (4.5), which characterize the existence of one-sided solutions, follows from the existence of a solution of the scalar game with at least one partial criterion. Indeed, to ensure that \({{R}^{0}}\) includes the solution of this scalar game, it is sufficient to choose in (5.6) as \(\mu \) and \(\nu \) the vector with the unit at the place of this partial criterion. The same solution also belongs to \({{R}^{A}}\) by definition (5.9). Let us show formally that the condition in (5.6) is weaker than the existence of a one-sided solution.
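The remark about unit vectors can be made concrete: with \(\mu \) equal to a unit vector \({{e}_{k}}\), the support \(I(\mu ) = \{ k\} \), and the GSF collapses to the single partial criterion \({{\varphi }_{k}}\). A minimal sketch (with a hypothetical payoff vector, not from the paper):

```python
# With mu equal to a unit vector e_k, the support I(mu) = {k}, so the GSF
# min_{i in I(mu)} phi_i / mu_i reduces to the single partial criterion phi_k.

def gsf(payoffs, mu):
    """Germeier scalarizing function of the maximizing player over I(mu)."""
    return min(p / m for p, m in zip(payoffs, mu) if m > 0)

payoffs = (2.0, 5.0, 3.0)   # hypothetical values phi_i(x, y)
e2 = (0.0, 1.0, 0.0)        # unit vector at the second partial criterion
value = gsf(payoffs, e2)    # equals payoffs[1], the second criterion alone
```

With a general weight vector, the GSF genuinely mixes criteria; e.g., `gsf(payoffs, (0.5, 0.5, 0.0))` already differs from any single criterion value.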

Proposition 6. If, for a certain vector \(\mu \in M\), the situation \((x{\text{*}},y{\text{*}})\) is a solution of the scalar zero-sum game on \(X \times Y\) with the payoff function in the form of the GSF of the maximizing player

$$\mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x,y){\text{/}}{{\mu }_{i}},$$
(6.1)

then \((x{\text{*}},y{\text{*}}) \in {{R}^{0}}\), i.e., it belongs to the solution of the MC game \({{\Gamma }^{0}}\) in the sense of (5.6) and (5.1).

Proof. By the definition of the solution of the scalar game with (6.1), we have

$$\forall y \in Y\mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x{\text{*}},y){\text{/}}{{\mu }_{i}} \geqslant \mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x{\text{*}},y{\text{*}}){\text{/}}{{\mu }_{i}} \geqslant \mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x,y{\text{*}}){\text{/}}{{\mu }_{i}}\quad \forall x \in X.$$

Let

$$\mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x{\text{*}},y{\text{*}}){\text{/}}{{\mu }_{i}} = {{\varphi }_{{{{i}^{0}}}}}(x{\text{*}},y{\text{*}}){\text{/}}{{\mu }_{{{{i}^{0}}}}},\quad {{i}^{0}} \in I(\mu ).$$

Then

$${{\varphi }_{{{{i}^{0}}}}}(x{\text{*}},y{\text{*}}){\text{/}}{{\mu }_{{{{i}^{0}}}}} \leqslant \mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x{\text{*}},y){\text{/}}{{\mu }_{i}} \leqslant {{\varphi }_{{{{i}^{0}}}}}(x{\text{*}},y){\text{/}}{{\mu }_{{{{i}^{0}}}}}\quad \forall y \in Y,$$

i.e., for \(\nu = {{\nu }^{0}}\) with \(\nu _{{{{i}^{0}}}}^{0} = 1\), we have

$$y{\text{*}} \in {\text{Arg}}\mathop {\min}\limits_{y \in Y} {{\varphi }_{{{{i}^{0}}}}}(x{\text{*}},y) = {\text{Arg}}\mathop {\min}\limits_{y \in Y} \mathop {\max}\limits_{j \in I({{\nu }^{0}})} {{\varphi }_{j}}(x{\text{*}},y){\text{/}}\nu _{j}^{0}.$$

Taking into account

$$x{\text{*}} \in {\text{Arg}}\mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x,y{\text{*}}){\text{/}}{{\mu }_{i}},$$

we obtain for \((x{\text{*}},y{\text{*}})\) the fulfillment of (5.6) with the corresponding \(\mu \) and \(\nu = {{\nu }^{0}}\).
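Proposition 6 can be checked numerically on a small finite example. The sketch below uses hypothetical 3×2 payoffs with two criteria (not from the paper): it finds maximin/minimax strategies of the scalar game with payoff (6.1), confirms they form a saddle point, extracts the active index \({{i}^{0}}\), and verifies that \(y{\text{*}}\) minimizes \({{\varphi }_{{{{i}^{0}}}}}(x{\text{*}}, \cdot )\), as the proof requires:

```python
# Numerical check of Proposition 6 (hypothetical payoffs, two criteria).
X, Y = [0, 1, 2], [0, 1]
phi = {(0, 0): (4.0, 1.0), (0, 1): (1.0, 4.0),
       (1, 0): (3.0, 3.0), (1, 1): (2.0, 2.0),
       (2, 0): (1.0, 1.0), (2, 1): (5.0, 1.0)}
mu = (0.5, 0.5)

def gsf(x, y):
    """GSF of the maximizing player, formula (6.1)."""
    return min(p / m for p, m in zip(phi[(x, y)], mu))

x_star = max(X, key=lambda x: min(gsf(x, y) for y in Y))  # maximin strategy
y_star = min(Y, key=lambda y: max(gsf(x, y) for x in X))  # minimax strategy

# Saddle point of the scalar game: maximin value == minimax value.
saddle = min(gsf(x_star, y) for y in Y) == max(gsf(x, y_star) for x in X)

# Active index i0 realizing the minimum in gsf(x_star, y_star).
i0 = min(range(2), key=lambda i: phi[(x_star, y_star)][i] / mu[i])
```

With these numbers, \((x{\text{*}},y{\text{*}}) = (1,1)\) is a saddle point of the scalar GSF game, and \(y{\text{*}}\) indeed minimizes the active criterion \({{\varphi }_{{{{i}^{0}}}}}(x{\text{*}}, \cdot )\) over \(Y\).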

Note that the GSF (6.1) in Proposition 6 for the maximizing player in the game \({{\Gamma }^{0}}\) coincides with the GSF of the minimizing player in the game \({{\Gamma }^{A}}\). Therefore, the situation \((x{\text{*}},y{\text{*}})\) is also an equilibrium outcome for the antagonistic MC game due to (5.9).

Similarly to Proposition 6, we can prove the symmetric proposition for the GSF of the minimizing player in the game \({{\Gamma }^{0}}\). It gives one more sufficient condition for the existence of an MC equilibrium obtained using the GSF and a set of equilibrium outcomes in \({{R}^{0}}\).

Proposition 7. If, for a certain vector of parameters \(\nu \in M\), the situation \((x{\text{**}},y{\text{**}})\) is a solution of the scalar zero-sum game on \(X \times Y\) with the payoff function in the form of the GSF of the minimizing player in the MC game \({{\Gamma }^{0}}\)

$$\mathop {\max}\limits_{i \in I(\nu )} {{\varphi }_{i}}(x,y){\text{/}}{{\nu }_{i}},$$

then \((x{\text{**}},y{\text{**}}) \in {{R}^{0}}\), i.e., it belongs to the solution of the MC game \({{\Gamma }^{0}}\) in the sense of (5.6) and (5.1).

Note that the assumptions of Propositions 6 and 7 are insufficient for the existence of a one-sided solution of the MC game because the existence of solutions of the scalar games for all coefficients of the GSF is not required. On the other hand, conditions (5.6) (and (5.1)) give a set that is clearly too wide. The attempt in Section 5 to introduce additional informal requirements for constructing a solution as a proper subset of the Shapley equilibrium strategies failed to reduce the set \({{R}^{0}}\) due to (5.7). For the example considered in [5], the set of outcomes (5.1) turned out to be nonselective. For an arbitrary zero-sum MC game, \({{R}^{0}}\) can easily be made nonempty by using mixed strategies because, as has been shown above, it is sufficient that the scalar game with one partial criterion have a solution. These properties are independent of the form of the scalarizing function used to describe the Slater sets in (5.1). The standard approach is to use the linear scalarizing function [7]; however, according to [24], set (5.1) parameterized using the linear scalarizing function is the largest one and, therefore, not smaller than (5.6). In fact, in the regular and convex case, these sets are identical because, for any \(({{x}^{0}},{{y}^{0}})\) satisfying (5.1), there exist parameters \(\mu \), \(\nu \in M\) with which the conditions in (5.6) are satisfied. The number of coefficients of the scalarizing function in the parameterization based on the GSF can even be reduced. Let (for simplicity) \(\Phi (x,y) > 0\) (in this case, the regularity conditions are automatically satisfied).

Proposition 8: \({{R}^{0}} = \{ ({{x}^{0}},{{y}^{0}}) \in X \times Y\,{\text{|}}\,\exists \mu \in M\):

$${{x}^{0}} \in {\text{Arg}}\mathop {\max}\limits_{x \in X} \mathop {\min}\limits_{i \in I(\mu )} {{\varphi }_{i}}(x,{{y}^{0}}){\text{/}}{{\mu }_{i}},{{y}^{0}} \in {\text{Arg}}\mathop {\min}\limits_{y \in Y} \mathop {\max}\limits_{i \in I(\mu )} {{\varphi }_{i}}({{x}^{0}},y){\text{/}}{{\mu }_{i}}\} .$$

Proof. For any pair \(({{x}^{0}},{{y}^{0}})\) satisfying (5.1), select

$$\mu _{i}^{0}\;\mathop = \limits^{{\text{def}}} \;{{\varphi }_{i}}({{x}^{0}},{{y}^{0}}){{\left( {\sum\limits_{k = 1}^n \,{{\varphi }_{k}}({{x}^{0}},{{y}^{0}})} \right)}^{{ - 1}}}\quad \forall i = \overline {1,n} .$$

From (5.1), we conclude that \(\forall x \in X\) \(\exists {{i}^{0}} \in I({{\mu }^{0}})\) (= \(\{ 1, \ldots ,n\} \)): \({{\varphi }_{{{{i}^{0}}}}}(x,{{y}^{0}}) \leqslant {{\varphi }_{{{{i}^{0}}}}}({{x}^{0}},{{y}^{0}})\). Therefore,

$$\mathop {\min}\limits_{i \in I({{\mu }^{0}})} {{\varphi }_{i}}(x,{{y}^{0}}){\text{/}}\mu _{i}^{0} \leqslant {{\varphi }_{{{{i}^{0}}}}}(x,{{y}^{0}}){\text{/}}\mu _{{{{i}^{0}}}}^{0} \leqslant {{\varphi }_{{{{i}^{0}}}}}({{x}^{0}},{{y}^{0}}){\text{/}}\mu _{{{{i}^{0}}}}^{0} = \sum\limits_{k = 1}^n \,{{\varphi }_{k}}({{x}^{0}},{{y}^{0}}).$$

The last quantity equals \({{\varphi }_{i}}({{x}^{0}},{{y}^{0}}){\text{/}}\mu _{i}^{0}\) \(\forall i \in I({{\mu }^{0}})\); hence, it also equals the minimum over \(i \in I({{\mu }^{0}})\). Hence, \({{x}^{0}}\) maximizes the GSF.

On the other hand, \(\forall y \in Y\) \(\exists {{i}^{1}} \in I({{\mu }^{0}})\): \({{\varphi }_{{{{i}^{1}}}}}({{x}^{0}},y) \geqslant {{\varphi }_{{{{i}^{1}}}}}({{x}^{0}},{{y}^{0}})\) and, therefore,

$$\mathop {\max}\limits_{i \in I({{\mu }^{0}})} {{\varphi }_{i}}({{x}^{0}},y){\text{/}}\mu _{i}^{0} \geqslant {{\varphi }_{{{{i}^{1}}}}}({{x}^{0}},y){\text{/}}\mu _{{{{i}^{1}}}}^{0} \geqslant {{\varphi }_{{{{i}^{1}}}}}({{x}^{0}},{{y}^{0}}){\text{/}}\mu _{{{{i}^{1}}}}^{0} = \sum\limits_{k = 1}^n \,{{\varphi }_{k}}({{x}^{0}},{{y}^{0}}) = {{\varphi }_{i}}({{x}^{0}},{{y}^{0}}){\text{/}}\mu _{i}^{0}$$

\(\forall i \in I({{\mu }^{0}})\); i.e., since this common value is attained for every \(i\), it also equals the maximum over \(i \in I({{\mu }^{0}})\). Hence, \({{y}^{0}}\) is a minimizer of Germeier’s scalarizing function of the second player with \({{\mu }^{0}}\) for this \({{x}^{0}}\). Due to (5.6), this completes the proof.
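The weight choice in the proof of Proposition 8 can be sketched numerically: for a hypothetical positive payoff vector at a pair \(({{x}^{0}},{{y}^{0}})\), the weights \(\mu _{i}^{0} = {{\varphi }_{i}}{\text{/}}\sum\nolimits_k {{\varphi }_{k}}\) make every ratio \({{\varphi }_{i}}{\text{/}}\mu _{i}^{0}\) equal to the common value \(\sum\nolimits_k {{\varphi }_{k}}\), so the minimum and maximum of the GSF over \(i\) coincide at that pair:

```python
# Sketch of the weight construction in the proof of Proposition 8.
# phi_at_pair holds hypothetical values phi_i(x0, y0), all positive.
phi_at_pair = (2.0, 3.0, 5.0)
total = sum(phi_at_pair)                     # sum_k phi_k(x0, y0)
mu0 = tuple(p / total for p in phi_at_pair)  # mu0_i > 0, sums to 1

ratios = [p / m for p, m in zip(phi_at_pair, mu0)]
# Every ratio phi_i / mu0_i equals `total`, so the min and max of the
# GSF over i coincide at (x0, y0): the equality exploited in the proof.
```

This is why a single parameter vector \({{\mu }^{0}}\) suffices on both sides of (5.6) in Proposition 8.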

For the antagonistic MC game, the nonemptiness of (5.9) is also ensured by passing to mixed strategies (by contrast to the existence of a one-sided solution). For the one-sided solution, it is required that the scalar games (6.1) have solutions for all \(\mu \in M\). However, this cannot be achieved by using mixed strategies without the MC mixed extension with averaging over the GSF considered in Section 4. In turn, the proposed concept of MC averaging of the scalarizing function for obtaining a solution of a finite MC game in mixed strategies allows us to use the mathematical techniques of Germeier’s scalarization for parameterizing and analyzing the structure of the MC equilibrium while retaining the good properties of linear scalarization. Thus, the competitiveness of Germeier’s scalarization, which is widely used in operations research, is enhanced.

7 CONCLUSIONS

In spite of the variety of approaches to formalizing the solution of MC games described above, the results obtained suggest that this concept still needs elaboration. This also follows from the difference between the concepts of the one-sided solution and the equilibrium, the former of which seems too narrow and the latter too wide. The main reason why refinement is needed, however, is that no solution concept acceptable from the viewpoint of operations research has been obtained that captures all the possibilities of compromise caused exclusively by multicriteriality, i.e., by subjective uncertainty in goal setting. An intuitively clear definition of solution is available only for antagonistic MC games with a fixed order of moves in which, incidentally, there is no room for compromise. For zero-sum MC games, in which it is conventionally recommended to use equilibrium, its nonselectivity is suspicious (e.g., in Fig. 5, \({{Z}^{0}} = {{Z}^{A}} = \{ \Phi (x,y)\,{\text{|}}\,x \in X,y \in Y\} \)).

Certainly, the set of scalarizing function coefficients (\(\mu \), \(\nu \in M\) for GSF) can be a priori reduced, e.g., by establishing the relative importance of partial criteria for the DM [25]. Rather than selecting all nondominated points in the original problems (2.1), (2.2), (3.1)–(3.3), this approach leads to the selection of points conforming to the system of preferences specified by a lexicographic order on the set of criteria. This additional information was not taken into account in the analysis of competitive MC games, and it is of independent interest.

An interesting method for reducing set (5.1) by imposing additional constraints follows from the constructions proposed in [26] (Definition 2.13 and Theorem 2.14) for mixed extensions of finite MC games. This method is based on the concept of set optimization (partial order relations between the sets of reachable estimates similar to inclusion relations of their Edgeworth–Pareto hulls are introduced). It would be interesting to estimate how much the constraints proposed in [26] can reduce the set of alternatives for the OR.

In addition to specifying stronger constraints on the equilibria in order to obtain a more adequate solution of the MC competitive game, it is possible to try to generalize the concepts of \(\gamma \)-core and \(g\)-core described in [19] to MC games using the results obtained in [27]. The idea is that the concepts of solution cores of a scalar game are based on its information extensions, i.e., on taking into account information exchange between the players in the process of the game (another term used in the Russian literature is hierarchical games [22]). More specifically, the \(\gamma \)- (or \(g\)-) core consists of imputations that are Stackelberg (or Germeier) equilibria. The construction of these equilibria requires the development of strategies for the players to a priori inform each other; these strategies should include threat, warning [19], and punishment [22] scenarios, which improve the BGR of the player who applies such strategies. As a result, the condition not worse than the BGR, which is used in all concepts of the core, becomes more significant. Such extensions are also promising for dynamic games; e.g., they can model negotiation processes.

However, the issues of MC formalization of the bargaining process between two DMs who represent the interests of their teams presently remain out of the scope of studies. In part, this is because the strategies mentioned above require assumptions about the information available to each DM about the other’s scalarizing parameters and about the opponent’s attitude to uncertainty. For this reason, the further development of the theory of hierarchical games with uncontrollable factors in the direction indicated in the title does not fit within the framework of this paper. Note that Propositions 4 and 5 make it possible to use the idea of the bargaining set from this theory, which is the basic concept for scalar noncompetitive games [22], whose properties manifest themselves in zero-sum MC games. More precisely, we can introduce the concepts of bargaining estimates—in the MC game \({{\Gamma }^{0}}\) as \({{Z}^{0}}{\backslash }\left\{ {\mathop {{\text{eph}}}\nolimits_ \leqslant ({{\mathcal{F}}_{ \leqslant }})\bigcup {{\text{ep}}{{{\text{h}}}_{ \geqslant }}({{f}_{ \geqslant }})} } \right\}\) and in the game \({{\Gamma }^{A}}\) as \({{Z}^{A}}{\backslash }\left\{ {\mathop {{\text{eph}}}\nolimits_ \leqslant ({{\mathcal{F}}_{ \leqslant }})\bigcup {{\text{ep}}{{{\text{h}}}_{{\nless }}}({{f}_{{\nless }}})} } \right\}\). For a finite MC game in which the players use Germeier’s scalarization for the MC mixed extension \(\overline \Gamma _{G}^{0}\), there is a chance to reduce the set of equilibrium outcomes even further, down to candidates for the game solution, by selecting in the corresponding set of bargaining estimates the vectors that are nondominated in the criterion space for either player.

The issue of the nonemptiness of the set of bargaining estimates when \({{Z}^{0}}\) and \({{Z}^{A}}\) are nonempty, and the issue of selecting (if these sets of estimates are not empty) a set of bargaining solutions in \({{R}^{0}}\) and \({{R}^{A}}\), are the subject of study in the near future. We are going to elaborate this idea and examine (in particular, on examples) what the prospects are if the use of Germeier’s scalarization by the DMs can be counted on in advance. Then the parameterization of the BGR and equilibrium values using the GSF will give an illustrative description of them (with expansion in the scalarizing function coefficients), recommend potential tradeoff options to the players, i.e., give the players more possibilities for coming to an agreement, and allow ORs to slightly reduce the set (in the criterion space) that provides an a priori estimate of the DM’s result in the MC competitive game.