1 Introduction and problem statement

A Nash equilibrium problem (NEP) is a noncooperative game in which each player’s objective function depends on the other players’ strategies. More precisely, assume that there are p players and that each player \( \nu \) controls a vector of real variables \( x^\nu \in \mathbb {R}^{n_\nu } \). We say that \( x^\nu \) is the strategy of player \(\nu \). Let us denote by x the whole vector of strategies

$$\begin{aligned} x {:}= (x^1,\ldots ,x^p)\quad \text{ and } \quad n := n_1 + n_2 + \cdots + n_p. \end{aligned}$$

Denote by \( x^{-\nu }\) the vector formed by all players' decision variables except those of player \( \nu \). So we can also write \( x = ( x^\nu , x^{-\nu } )\), which is a shortcut [classically used in many papers on the subject, see e.g. Facchinei et al. (2007), Pang and Fukushima (2005)] for the vector \(x=(x^1,\dots ,x^{\nu -1},x^\nu ,x^{\nu +1}, \dots , x^p)\). The set of possible strategies for player \( \nu \) is a subset \(K_\nu \) of \(\mathbb {R}^{n_\nu }\), and his aim is to minimize his cost function \(\theta _\nu {:}\mathbb {R}^{n_\nu }\times \mathbb {R}^{n-n_{\nu }}\rightarrow \mathbb {R}\) which, as said above, depends on the decision variables of the other players and describes the loss player \( \nu \) suffers when the rival players have chosen the strategy \( x^{-\nu } \). Thus the aim of any player \(\nu \), given the strategies \( x^{-\nu } \), is to choose a strategy \(x^\nu \) that solves the following optimization problem

$$\begin{aligned} \min _{x^\nu } \; \theta _\nu ( x^\nu , x^{-\nu }), \quad \text{ subject } \text{ to }\quad x^\nu \in K_\nu . \end{aligned}$$

This Nash game, as well as the solution set of this game, will be later on denoted by NEP\((\theta , K)\) where \(K:=\prod _{\nu =1}^p K_\nu \).

Now if, additionally, the strategy set \(K_\nu \) of some of the players also depends on the strategies of the other players, then the game is called a generalized Nash equilibrium problem (GNEP). In this case the strategy set of player \(\nu \) is given by a set-valued map \(K_\nu {:}\mathbb {R}^{n-n_{\nu }}\rightrightarrows \mathbb {R}^{n_\nu }\) which describes, given the vector \(x^{-\nu }\) of strategies of the other players, the set \(K_\nu (x^{-\nu })\) of possible strategies of player \(\nu \). In such a GNEP each player aims to solve the following parametrized minimization problem

$$\begin{aligned} \min _{x^\nu } \; \theta _\nu ( x^\nu , x^{-\nu }), \quad \text{ subject } \text{ to }\quad x^\nu \in K_\nu (x^{-\nu }). \end{aligned}$$

This generalized Nash game, as well as the solution set of this game, will be later on denoted by GNEP\((\theta , K)\) where K is the set-valued map defined by \(K(x):=\prod _{\nu =1}^p K_\nu (x^{-\nu })\).

GNEPs are widely used modelling tools which have attracted much attention in the last decades, see e.g. Facchinei and Kanzow (2007), Pang and Fukushima (2005). Recently, numerical methods have been proposed for their solution, see Dreves et al. (2011), Dreves et al. (2012), Facchinei and Sagratella (2011).

It is well known that when the loss functions \(\theta _\nu \) are convex and continuously differentiable and the strategy sets \(K_\nu \) are closed and convex, the Nash equilibrium problem NEP\((\theta ,K)\) can be equivalently reformulated as the variational inequality VI(F, K), see e.g. Facchinei and Pang (2003):

$$\begin{aligned} \begin{array}{ll} \text{ VI }(F,K)&\quad \text{ Find } \bar{x}\in K \text{ such } \text{ that } \langle F(\bar{x}), y-\bar{x}\rangle \ge 0,~~ \forall \,y\in K, \end{array} \end{aligned}$$

where \(F(x) := \left( \nabla _{x^\nu } \theta _\nu (x^\nu , x^{-\nu }) \right) _{\nu =1}^p \).
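For concreteness, this reformulation can be checked numerically on a small instance. The following Python sketch uses a two-player toy game of our own (quadratic losses, \(K_1=K_2=[0,1]\); none of the data below comes from the paper) and verifies, on a grid, that the unique Nash equilibrium satisfies the VI condition with F defined as above.

```python
# Toy two-player NEP (our own example) illustrating the VI reformulation.
# Player nu minimizes theta_nu over its own strategy in [0, 1].

theta = [
    lambda x1, x2: (x1 - 0.5 * x2 - 0.25) ** 2,   # theta_1(x^1, x^{-1})
    lambda x2, x1: (x2 - 0.5 * x1 - 0.25) ** 2,   # theta_2(x^2, x^{-2})
]

def F(x):
    """F(x) = (grad_{x^nu} theta_nu(x^nu, x^{-nu}))_{nu=1..p} for the game above."""
    x1, x2 = x
    return (2 * (x1 - 0.5 * x2 - 0.25), 2 * (x2 - 0.5 * x1 - 0.25))

grid = [i / 100 for i in range(101)]              # discretization of K_nu = [0, 1]
xbar = (0.5, 0.5)                                 # the unique Nash equilibrium here

# Nash check: no player can improve by a unilateral deviation on the grid.
assert all(theta[0](xbar[0], xbar[1]) <= theta[0](y, xbar[1]) + 1e-12 for y in grid)
assert all(theta[1](xbar[1], xbar[0]) <= theta[1](y, xbar[0]) + 1e-12 for y in grid)

# VI check: <F(xbar), y - xbar> >= 0 for all y in K = [0,1]^2 (sampled on the grid).
Fx = F(xbar)
vi_ok = all(Fx[0] * (y1 - xbar[0]) + Fx[1] * (y2 - xbar[1]) >= -1e-12
            for y1 in grid for y2 in grid)
assert vi_ok
print("VI(F, K) holds at the Nash equilibrium", xbar)
```

Here the equilibrium is interior, so \(F(\bar{x})=0\) and the VI condition holds trivially; the grid test is nevertheless the generic check one would run for boundary equilibria as well.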

And, under similar assumptions, that is, if for any \(\nu \) the above loss function \(\theta _\nu \) is differentiable and convex with respect to \(x^\nu \) and the map \(K_\nu \) is closed and convex valued, then GNEP\((\theta , K)\) can be equivalently reformulated as the quasivariational inequality

$$\begin{aligned} \begin{array}{ll} \text{ QVI }(F,K)&\quad \text{ Find } \bar{x}\in K(\bar{x}) \text{ such } \text{ that } \langle F(\bar{x}),y- \bar{x}\rangle \ge 0,~~ \forall \,y\in K(\bar{x}), \end{array} \end{aligned}$$

where of course K stands, in this case, for the above defined set-valued strategy map \(K(x) := \prod _{\nu =1}^p K_\nu (x^{-\nu })\), see e.g. Facchinei et al. (2014), Pang and Fukushima (2005). This has been extended in Aussel and Dutta (2008), thanks to the concept of normal operator, to the case where the cost functions \(\theta _\nu \) are semistrictly quasiconvex and possibly nonsmooth.

The main advantage of reformulating a NEP as a VI is that it allows the use of the machinery of theoretical and computational tools developed for variational inequalities, including in particular many efficient algorithms. Nevertheless this advantage is lost as soon as one considers quasivariational inequalities instead of variational inequalities, since this more general problem is known to be computationally harder to solve (Facchinei et al. 2015, 2014; Latorre and Sagratella 2015, 2016).

An important subclass of these GNEPs consists of those for which the constraint set-valued maps \(K_\nu \) are defined by the so-called “Rosen’s law”, see Rosen (1965): that is, there exists a nonempty subset X of \(\mathbb {R}^n\) such that, for all \(\nu \), the set-valued map \(K_\nu \) is given by

$$\begin{aligned} K_\nu (x^{-\nu }) := \{ x^\nu \in \mathbb {R}^{n_\nu } : (x^\nu , x^{-\nu }) \in X \}. \end{aligned}$$
(1)

If, moreover, the subset X is assumed to be closed and convex, then the corresponding GNEP\((\theta ,K)\) is usually called a “jointly convex GNEP”. In this case, and again if, for any \(\nu \), the loss function \(\theta _\nu \) is differentiable and convex with respect to \(x^\nu \), any solution of the variational inequality VI(F, X) is a solution of GNEP\((\theta ,K)\). Such solutions are called “variational solutions” of GNEP\((\theta ,K)\) and it is well known that, even in the jointly convex case, this set of variational solutions is (generally strictly) included in the solution set of GNEP\((\theta ,K)\), see Facchinei et al. (2007), Facchinei and Sagratella (2011). Indeed, as shown in Aussel and Dutta (2014), there is no hope, in general, of obtaining a full characterization of the solutions of a GNEP as solutions of a VI. A counterexample is described in Aussel and Dutta (2014), and this fact was already observed in Outrata et al. (1998). This incomplete characterization also occurs with VIs when the GNEP has functions \(\theta _\nu \) that are (player) semistrictly quasiconvex, see Theorem 3.1 in Aussel and Dutta (2008).

Our aim in this work is therefore to describe a subclass of generalized Nash equilibrium problems for which it is possible to obtain a full characterization of the solutions of GNEP\((\theta ,K)\) as solutions of some variational inequalities. Such GNEPs will be called T-pseudo-variational, where, in practice, T stands for a set-valued map corresponding to the generalized derivatives of the loss functions. Here and later on, \(\text{ FP }(K)\) stands for the set of fixed points of the set-valued map \(K{:}\mathbb {R}^n\rightrightarrows \mathbb {R}^n\), that is

$$\begin{aligned} \text{ FP }(K):=\left\{ x\in \mathbb {R}^n~{:}~x\in K(x)\right\} . \end{aligned}$$

Definition 1

Let \(T{:}\,\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) be any set-valued map. The generalized Nash equilibrium problem GNEP\((\theta ,K)\) is said to be T-pseudo-variational if and only if

$$\begin{aligned} \text{ GNEP }(\theta ,K) \equiv \cup _{z\in \text {FP}(K)} VI(T,K(z)), \end{aligned}$$

that is, any solution of GNEP\((\theta ,K)\) is a solution of the variational inequality VI(T, K(z)) for some fixed point z of the map K, and any solution of such a VI(T, K(z)) is a solution of GNEP\((\theta ,K)\).

Note that the description of set-valued variational inequalities VI(T, K) will be given in the forthcoming Sect. 3.

The paper is organized as follows. First in Sect. 2, we define and study the concept of reproducible set-valued maps. Based on this notion, we then investigate, in Sect. 3, sufficient conditions to obtain a characterization of the solution set of a quasivariational inequality in terms of the union of solution sets of some variational inequalities. In Sect. 4, assuming that the strategy map K is reproducible and convex valued, we prove some sufficient conditions for a GNEP\((\theta ,K)\) to be pseudo-variational. Both cases of convex and quasiconvex loss functions \(\theta _\nu \) are considered. Finally, in Sect. 5 we define the concept of pseudo-NEP generalized Nash equilibrium problems and show that this property can be reached even if the considered GNEP does not satisfy any convexity assumptions.

2 Reproducible set-valued maps

All the developments of this paper are based on the concept of reproducible set-valued maps. As shown in Sect. 2.1, this class covers a variety of set-valued maps.

Definition 2

The set-valued map \(K{:}\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) is said to be reproducible on \(C \subseteq \mathbb {R}^n\) if, for any \(x \in C \cap \text{ FP }(K)\) and any \(z\in K(x)\), it holds that \(K(x) \equiv K(z)\).

Let us first describe some immediate properties of reproducible set-valued maps. Some general classes of reproducible set-valued maps will then be described in Sect. 2.1.

Proposition 1

The following statements hold.

  1. (i)

    Let the map \(K{:}\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) be reproducible on \(C \subseteq \mathbb {R}^n\). Then, for any \(x \in C \cap \text{ FP }(K)\), one has \(K(x)\subseteq \text{ FP }(K)\).

  2. (ii)

    Let the maps \(K{:}\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) and \(K':\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) be reproducible on \(C \subseteq \mathbb {R}^n\). Then the map \(K\cap K':\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) defined by \(K\cap K'(x):=K(x)\cap K'(x)\) is reproducible on C.

  3. (iii)

    Let the maps \(K_1:\mathbb {R}^{n_1}\rightrightarrows \mathbb {R}^{n_1}\) and \(K_2:\mathbb {R}^{n_2}\rightrightarrows \mathbb {R}^{n_2}\) be reproducible respectively on \(C_1 \subseteq \mathbb {R}^{n_1}\) and \(C_2 \subseteq \mathbb {R}^{n_2}\). Then the map \(K{:}\mathbb {R}^{n_1+n_2}\rightrightarrows \mathbb {R}^{n_1+n_2}\) defined by

    $$\begin{aligned} K\left( \begin{array}{c} x^1 \\ x^2 \end{array}\right) := \left( \begin{array}{c} K_1(x^1) \\ K_2(x^2) \end{array}\right) \end{aligned}$$

    is reproducible on \(C_1\times C_2\).

  4. (iv)

    Let the map \(K{:}\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) be single-valued on \(C \subseteq \mathbb {R}^n\), then it is reproducible on C.

Proof

Cases (i), (ii) and (iv) follow from the definition of reproducible maps and from the fact that \(\text{ FP }(K\cap K')=\text{ FP }(K)\cap \text{ FP }(K')\). Case (iii) can be deduced easily. \(\square \)

2.1 Sufficient conditions of reproducibility of a set-valued map

Our aim in this subsection is to illustrate that the subclass of reproducible maps covers a variety of set-valued maps. Thus in the forthcoming propositions, we provide sufficient conditions for a set-valued map to be reproducible on \(\mathbb {R}^n\).

Proposition 2

If the set-valued map \(K{:}\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) is defined by one of the following expressions, then K is reproducible on \(\mathbb {R}^n\):

  1. (i)

    suppose that

    $$\begin{aligned} K(x) := \bigg \{ y \in \mathbb {R}^n : A\big (\gamma (y)\big ) + B \big (\gamma (x)\big ) = c\bigg \}, \end{aligned}$$
    (2)

    where \(\gamma : \mathbb {R}^n \rightarrow \mathbb {R}^m\) and \(B : \mathbb {R}^m \rightarrow \mathbb {R}^p\) are functions, \(A : \mathbb {R}^m \rightarrow \mathbb {R}^p\) is an injective function and \(c \in \mathbb {R}^p\);

  2. (ii)

    let \({{\mathcal {P}}}\) be a partition of \(\{1,\ldots ,n\}\) and let

    $$\begin{aligned} \displaystyle K(x) := \bigg \{ y \in \mathbb {R}^n : A_I(y_I) + \sum _{\mathcal{P} \ni J \ne I} A_J(x_J) = b, \; \forall \, I \in \mathcal{P} \bigg \}, \end{aligned}$$
    (3)

    where \(A_I : \mathbb {R}^{|I|} \rightarrow \mathbb {R}^p\) is a function for all \(I \in \mathcal{P}\) and \(b \in \mathbb {R}^p\).

Proof

(i) Let \({\hat{x}}\in K({\hat{x}})\), that is

$$\begin{aligned} A \big (\gamma ({\hat{x}})\big ) = c - B \big (\gamma ({\hat{x}})\big ). \end{aligned}$$
(4)

Let \({\bar{x}}\in \mathbb {R}^n\) be any point such that \({\bar{x}} \in K(\hat{x})\), that is

$$\begin{aligned} A \big (\gamma ({\bar{x}})\big ) = c - B \big (\gamma ({\hat{x}})\big ). \end{aligned}$$
(5)

By (4) and (5) we obtain \(A \big (\gamma ({\bar{x}})\big ) = A \big (\gamma ({\hat{x}})\big )\), and A being an injective map, we immediately have \(\gamma ({\bar{x}}) = \gamma ({\hat{x}})\) and thus \(c - B \big (\gamma ({\bar{x}})\big ) = c - B \big (\gamma ({\hat{x}})\big )\) showing that \(K({\hat{x}}) \equiv K({\bar{x}})\).

(ii) Let \({\hat{x}}\in K({\hat{x}})\), that is

$$\begin{aligned} \sum _{J \in \mathcal{P}} A_J({\hat{x}}_J) = b. \end{aligned}$$
(6)

Let \({\bar{x}}\in \mathbb {R}^n\) be any point such that \({\bar{x}} \in K(\hat{x})\), that is

$$\begin{aligned} A_I({\bar{x}}_I) = b - \sum _{\mathcal{P} \ni J \ne I} A_J({\hat{x}}_J) = A_I({\hat{x}}_I), \quad \forall \, I \in \mathcal{P}, \end{aligned}$$
(7)

where the last equality is due to (6). By (7) we obtain

$$\begin{aligned} b - \sum _{\mathcal{P} \ni J \ne I} A_J({\hat{x}}_J) = b - \sum _{\mathcal{P} \ni J \ne I} A_J({\bar{x}}_J), \quad \forall \, I \in \mathcal{P}, \end{aligned}$$

therefore \(K({\hat{x}}) \equiv K({\bar{x}})\). \(\square \)

The subclasses of reproducible set-valued maps defined by the expressions (2) and (3) are rather general. Let us thus give some simple illustrative examples.

Example 1

The set-valued map K defined on \(\mathbb {R}^n\) by

$$\begin{aligned} K(x)=\left\{ y\in \mathbb {R}^n~:~ Ay+Bx+c=0 \right\} \end{aligned}$$

is reproducible on \(\mathbb {R}^n\) provided that the matrix A is an element of GL\(_n(\mathbb {R})\) while \(B\in \mathcal{M}_n(\mathbb {R})\) and \(c\in \mathbb {R}^n\). Note nevertheless that the concept of reproducible map is not restricted to linearly defined set-valued maps, as can be seen in the following toy examples:

  • the set-valued map given by

    $$\begin{aligned} \displaystyle K(x) = \left\{ y \in \mathbb {R}^n: e^{\left( \sum _{i=1}^n y_i\right) ^2} - 5 \left( \sum _{i=1}^n x_i\right) ^4 = 10 \right\} \end{aligned}$$

    satisfies definition (2);

  • let \(\mathcal{P} = \{ (1,2), (3,4), \ldots , (n-1,n) \}\), then the set-valued map

    $$\begin{aligned} \displaystyle K(x) = \left\{ y \in \mathbb {R}^n: \Vert y_I\Vert ^2_2 + \sum _{\mathcal{P} \ni J \ne I} \Vert x_J\Vert ^2_2 = 10, \; \forall \, I \in \mathcal{P} \right\} \end{aligned}$$

    is reproducible thanks to Proposition 2.
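The reproducibility predicted by Proposition 2 can also be checked numerically. The sketch below uses an instance of form (3) of our own (not one of the examples above): \(n=2\), \(\mathcal{P}=\{\{1\},\{2\}\}\), \(A_I(t)=t^2\) and \(b=1\), for which K(x) is a genuinely set-valued map with up to four elements.

```python
import math

# Our own instance of form (3): K(x) = { y in R^2 : y1^2 + x2^2 = 1,
#                                                    y2^2 + x1^2 = 1 }.
# Proposition 2(ii) predicts that K is reproducible; we verify that
# K(z) == K(xhat) for every z in K(xhat), at a fixed point xhat of K.

def K(x):
    """K(x) as a frozenset of points, coordinates rounded to absorb
    floating-point noise (empty set if no real solution exists)."""
    x1, x2 = x
    if x1 ** 2 > 1 or x2 ** 2 > 1:
        return frozenset()
    r1 = round(math.sqrt(1 - x2 ** 2), 9)
    r2 = round(math.sqrt(1 - x1 ** 2), 9)
    return frozenset((s1 * r1, s2 * r2) for s1 in (1, -1) for s2 in (1, -1))

xhat = (0.6, 0.8)                       # a fixed point: 0.6^2 + 0.8^2 = 1
assert xhat in K(xhat)

for z in K(xhat):                       # reproducibility at this fixed point
    assert K(z) == K(xhat)
print("K(xhat) =", sorted(K(xhat)))
```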

Let us now consider the case of set-valued maps defined by products.

Proposition 3

If the set-valued map \(K{:}\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) is defined by one of the following equalities, then K is reproducible on \(\mathbb {R}^n\):

  1. (i)

    suppose that

    $$\begin{aligned} K(x) := \bigg \{ y \in \mathbb {R}^{n}: a\big (\gamma (y)\big ) b\big (\gamma (x)\big ) = c \bigg \}, \end{aligned}$$
    (8)

    where \(\gamma : \mathbb {R}^n \rightarrow \mathbb {R}^m\) and \(b : \mathbb {R}^m \rightarrow \mathbb {R}\) are functions, \(a : \mathbb {R}^m \rightarrow \mathbb {R}\) is an injective function and \(0 \ne c \in \mathbb {R}\);

  2. (ii)

    let \(\mathcal{P}\) be a partition of \(\{1,\ldots ,n\}\) and let

    $$\begin{aligned} \displaystyle K(x) := \left\{ y \in \mathbb {R}^n : a_I(y_I) \left( \prod _{\mathcal{P} \ni J \ne I} a_J(x_J) \right) = b, \; \forall \, I \in \mathcal{P} \right\} , \end{aligned}$$
    (9)

    where \(a_I : \mathbb {R}^{|I|} \rightarrow \mathbb {R}\) is a function for all \(I \in \mathcal{P}\) and \(0 \ne b \in \mathbb {R}\).

Proof

The proof of both cases can be easily deduced by adapting, to the case of product based maps, the arguments used in the proof of Proposition 2. \(\square \)

The following set-valued maps correspond to simple examples of reproducible maps defined by products and which satisfy assumptions of Proposition 3:

  • \(\displaystyle K(x) = \left\{ y \in \mathbb {R}^n: e^{\left( \sum _{i=1}^n y_i\right) } \left( \sum _{i=1}^n x_i\right) ^2 = 10 \right\} ;\)

  • let \(\mathcal{P} = \{ (1,2), (3,4), \ldots , (n-1,n) \}\) and

    $$\begin{aligned} \displaystyle K(x) = \left\{ y \in \mathbb {R}^n: \Vert y_I\Vert ^2_2 \left( \prod _{\mathcal{P} \ni J \ne I} \Vert x_J\Vert ^2_2\right) = 10, \; \forall \, I \in \mathcal{P} \right\} . \end{aligned}$$

Finally, note that by combining items (ii) and (iii) of Proposition 1 with Propositions 2 and 3, we can obtain many different reproducible maps.

3 Reproducible set-valued maps and quasivariational inequalities

Quasivariational inequalities are known to be difficult problems for which few efficient computational procedures exist in the literature. But for variational inequalities, a full machinery of algorithms is available. Our aim in this section is to describe a particular class of quasivariational inequalities for which the complete set of solutions can be evaluated simply through the resolution of various variational inequalities. In the forthcoming proposition we establish some strong interrelations between the solution set of the considered quasivariational inequality and the solution set of associated variational inequalities.

Let us first recall that, given two set-valued maps \(T{:}\,\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) and \(K{:}\,\mathbb {R}^n\rightrightarrows \mathbb {R}^n\), the quasivariational inequality defined by T and K consists of finding an element \(\bar{x}\) such that

$$\begin{aligned} \bar{x}\in K(\bar{x})\quad \text{ and } \quad \exists \, \bar{x}^*\in T(\bar{x})~:~\langle \bar{x}^*,y-\bar{x}\rangle \ge 0,\quad \forall \,y\in K(\bar{x}). \end{aligned}$$

This quasivariational inequality, as well as its solution set, will be denoted by QVI(T, K). When the map K is constant, with value \(K\subset \mathbb {R}^n\), the above definition corresponds to a classical Stampacchia variational inequality that will be denoted, again as well as its solution set, by VI(T, K).

It is well known that \({\textit{QVI}}(T,K)={\textit{FP}}(S(T,K(\cdot )))\) where \({\textit{FP}}(S(T,K(\cdot )))\) denotes the set of fixed points of the solution map \(x\mapsto S(T,K(x))\) of the perturbed Stampacchia variational inequality. The approach developed below is clearly different.

Proposition 4

Let \(T:\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) and \(K{:}\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) be two set-valued maps. Suppose that K is reproducible on \(C \subseteq \mathbb {R}^n\).

  1. (i)

Let \(z \in C \cap \text{ FP }(K)\). If \(\bar{x}\) is a solution of VI(T, K(z)), then \(\bar{x}\) is a solution of QVI(T, K).

  2. (ii)

If \(\bar{x}\in C\) is a solution of QVI(T, K), then, for all \(z \in K(\bar{x})\), \(\bar{x}\) is a solution of VI(T, K(z)).

Proof

(i) Let \(\bar{x}\in K(z)\) and \(\bar{x}^* \in T(\bar{x})\) be such that \(\langle \bar{x}^*,y-\bar{x}\rangle \ge 0\), for all \(y \in K(z)\). Since K is reproducible on C and \(\bar{x}\in K(z)\), we obtain \(K(\bar{x}) \equiv K(z)\). Therefore, \(\bar{x}\in \text{ FP }(K)\) and \(\langle \bar{x}^*,y-\bar{x}\rangle \ge 0\), for all \(y \in K(\bar{x})\).

(ii) It is a direct consequence of the definition of reproducible map since, for any such \(\bar{x}\), one has \(K(\bar{x}) \equiv K(z)\), for any \(z\in K(\bar{x})\). \(\square \)

Note that the reproducibility of K is only a sufficient, not a necessary, condition for these useful implications to hold. Nevertheless, if K is not reproducible, it is easy to give examples for which both (i) and (ii) of Proposition 4 fail.

Example 2

Let us consider \(T(x) = \left\{ t \in \mathbb {R}: x - \frac{5}{6} \le t \le x - \frac{4}{6}\right\} \) and \(K(x) =\{y \in \mathbb {R}: 0 \le y \le 1-x\}\). Note that K is not reproducible on \(\mathbb {R}\) and that \(\text{ FP }(K) = \left\{ y \in \mathbb {R}: 0 \le y \le \frac{1}{2}\right\} \). Now we show that both (i) and (ii) of Proposition 4 fail for QVI(T, K).

  1. (i)

Let \(z=0 \in \text{ FP }(K)\). It is easy to see that any \({\bar{x}} \in \left\{ y \in \mathbb {R}: \frac{4}{6} \le y \le \frac{5}{6}\right\} \) is a solution of VI(T, K(z)). However \(\bar{x}\notin \text{ FP }(K)\), and hence it cannot be a solution of QVI(T, K).

  2. (ii)

\(\bar{x}= \frac{1}{2}\) is the unique solution of QVI(T, K) and \(z = 0 \in K(\bar{x})\), but \(\bar{x}\) is not a solution of VI(T, K(z)).
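This counterexample is easy to verify computationally. The following sketch is our own routine (not part of the original example); it tests, in exact arithmetic up to a tolerance, whether a point solves the VI of Example 2 on a given interval, and confirms both failures.

```python
# Verification of Example 2: T(x) is the interval [x - 5/6, x - 4/6]
# and K(x) = [0, 1 - x] on the real line.

def solves_vi(xbar, lo, hi, tol=1e-12):
    """Does some single t in T(xbar) satisfy t*(y - xbar) >= 0 for every
    y in [lo, hi]?  t must be >= 0 if deviations y > xbar exist, and
    <= 0 if deviations y < xbar exist."""
    a, b = xbar - 5 / 6, xbar - 4 / 6      # T(xbar) = [a, b]
    if not (lo - tol <= xbar <= hi + tol):
        return False
    t_lo, t_hi = a, b                      # admissible t's, to be shrunk
    if xbar < hi - tol:                    # some y > xbar: need t >= 0
        t_lo = max(t_lo, 0.0)
    if xbar > lo + tol:                    # some y < xbar: need t <= 0
        t_hi = min(t_hi, 0.0)
    return t_lo <= t_hi + tol

# (i) with z = 0: K(z) = [0, 1] and e.g. xbar = 0.75 solves VI(T, K(z)),
#     yet 0.75 is not in K(0.75) = [0, 0.25], so it is not a fixed point.
assert solves_vi(0.75, 0.0, 1.0) and not (0.75 <= 1 - 0.75)

# (ii) xbar = 1/2 belongs to K(1/2) = [0, 1/2] and solves the VI there,
#      but it does not solve VI(T, K(0)) = VI(T, [0, 1]).
assert solves_vi(0.5, 0.0, 0.5) and not solves_vi(0.5, 0.0, 1.0)
print("Example 2 verified")
```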

Definition 3

Let \(T{:}\,\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) and \(K{:}\,\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) be two set-valued maps. The quasivariational inequality QVI(T, K) is said to be pseudo-variational if and only if

$$\begin{aligned} \text{ QVI }(T,K) \equiv \cup _{z\in \text {FP}(K)} \text{ VI }(T,K(z)), \end{aligned}$$

that is, any solution of QVI(T, K) is a solution of the variational inequality VI(T, K(z)) for some fixed point z of the map K, and any solution of such a VI(T, K(z)) is a solution of QVI(T, K).

Combining Propositions 4 and 1, we immediately obtain the following sufficient condition for a quasivariational inequality to be pseudo-variational.

Corollary 1

Let \(T{:}\,\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) and \(K{:}\,\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) be two set-valued maps. Suppose that K is reproducible on \(\text{ FP }(K)\). Then the quasivariational inequality QVI(T, K) is pseudo-variational.

We are now ready to define a simple procedure to compute a number of different solutions of the quasivariational inequality QVI(T, K), under the assumption that K is reproducible on a certain set C.

Procedure 5

(Computing solutions of QVI(T, K) via VIs)

  1. Step 0:

Set \(R:=C \cap \text{ FP }(K)\) (assumed nonempty), \(S:=\emptyset \) and choose \(k > 0\).

  2. Step 1:

Pick a \(z \in R\) and compute a set \({\bar{S}}(z)\) of solutions of VI(T, K(z)).

  3. Step 2:

    Set \(R := R \setminus K(z)\) and \(S := S \cup {\bar{S}}(z)\).

  4. Step 3:

    If either \(R = \emptyset \) or \(|S| \ge k\), then return S, else go to Step 1.
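Procedure 5 translates almost verbatim into code. Below is a minimal Python sketch of our own, in which `R` must be a finite sample of \(C \cap \text{FP}(K)\) so that the set operations make sense, `K` returns (a sample of) the set K(z), and `solve_vi` stands for any available VI solver; the toy run uses a constant, hence reproducible, strategy map.

```python
# Our own rendering of Procedure 5; the paper states it abstractly.

def procedure5(R, K, solve_vi, k):
    R, S = set(R), set()
    while R and len(S) < k:                    # Step 3: stop tests
        z = next(iter(R))                      # Step 1: pick z in R
        S |= solve_vi(z)                       #         solutions of VI(T, K(z))
        R -= K(z)                              # Step 2: R := R \ K(z)
    return S

# Toy run: K constant with value [0, 1] (constant maps are reproducible)
# and the single-valued T(x) = x - 0.3, whose VI on [0, 1] has the unique
# solution 0.3 (here supplied in closed form instead of a real VI solver).
grid = {i / 10 for i in range(11)}             # finite sample of FP(K) = [0, 1]
K_const = lambda z: grid                       # sampled K(z), here all of [0, 1]
solve_vi = lambda z: {0.3}                     # VI(T, [0, 1]) solved in closed form
S = procedure5(grid, K_const, solve_vi, k=5)
assert S == {0.3}
print("solutions found:", S)
```

Since the first picked z already satisfies \(K(z)\supseteq R\), the toy run terminates after a single iteration, exactly as the removal step \(R:=R{\setminus }K(z)\) intends.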

Let us observe that if K is reproducible on C and a procedure to compute solutions of VIs is available, then Procedure 5, with k finite, will stop in a finite number of steps and, by Proposition 4, all points in S are solutions of QVI(T, K). Moreover, we clearly have that:

  1. (i)

    if \(C \, \cap \, \)QVI\((T,K) \ne \emptyset \), then \(S \ne \emptyset \);

  2. (ii)

if \(|C \, \cap \, \)QVI\((T,K)| \ge k\) and, at any “Step 1”, \(|{\bar{S}}(z)| \ge \min \{|VI(T,K(z))|, k\}\), that is, either \(\bar{S}(z)=VI(T,K(z))\) or \(|\bar{S}(z)| \ge k\), then \(|S| \ge k\).

Therefore Procedure 5 can be efficiently used, in practice, to compute a number of different solutions by making a wide-ranging choice of z at “Step 1”. Note that, if the assumption of reproducibility does not hold, then the procedure is useless, since the set \({\bar{S}}(z)\) computed at “Step 1” can contain points that are not solutions of QVI(T, K).

It is also important to notice that if the set \(C \cap \text{ FP }(K)\) is included in the union of a finite number of components \(K(z^i)\) and, at any “Step 1”, \({\bar{S}}(z^i)\) contains all the solutions of \(VI(T,K(z^i))\), then Procedure 5, with \(k = \infty \), will determine in a finite number of steps all the solutions of QVI(T, K) in C.

Let us end this section with a proposition describing some other sufficient conditions ensuring that Procedure 5 stops after a finite number of steps.

Proposition 6

Let \(T:\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) and \(K{:}\mathbb {R}^n\rightrightarrows \mathbb {R}^n\) be two set-valued maps. Assume that K is reproducible on \(C \subseteq \mathbb {R}^n\) and one of the following situations holds:

  1. (i)

    the subset \(C\cap \text{ FP }(K)\) is compact and for any \(z\in C\cap \text{ FP }(K)\), the set K(z) is open;

  2. (ii)

\(C\cap \text{ FP }(K)\) is bounded and there exists \(c>0\) such that, for all \(z \in C \cap \text{ FP }(K)\), \(K(z)\subset C\) and one can find \({\bar{z}} \in K(z)\) for which the ball \(B({\bar{z}}, c)\) is included in K(z).

Suppose that, in Procedure 5, \(k = \infty \) and, at any “Step 1”, \({\bar{S}}(z) \supseteq C \cap VI(T,K(z))\). Then Procedure 5 returns, in a finite number of steps, a set S such that

$$\begin{aligned} S \supseteq C \cap \text{ QVI }(T,K). \end{aligned}$$

Proof

It is sufficient to show that \(C \cap \text{ FP }(K)\) is included in the union of a finite number of components \(K(z^i)\).

(i):

The family \(\{K(z)~:~z\in C\cap \text{ FP }(K)\}\) is an open covering of the compact set \(C\cap \text{ FP }(K)\) from which one can extract a finite subcovering.

(ii):

Let \(\{z^i\}_i\) be the sequence of points picked at “Step 1” and let \(\{{\bar{z}}^i\}_i\) be the sequence of centers of the balls associated with \(\{K(z^i)\}_i\). For all i we obtain \(B({\bar{z}}^i, c) \subseteq K(z^i) \subseteq C \cap \text{ FP }(K)\) by recalling that \(K(z^i) \subseteq C\) and by using item (i) of Proposition 1, since K is reproducible on C. Now suppose, for a contradiction, that the procedure runs for infinitely many steps, so that

$$\begin{aligned} C\cap \text{ FP }(K) \supseteq \cup _{i=1}^\infty B({\bar{z}}^i, c). \end{aligned}$$
(10)

For any two different iterates \(i_1\) and \(i_2\), it holds that \(B({\bar{z}}^{i_1}, c) \cap B({\bar{z}}^{i_2}, c) = \emptyset \). Indeed, if \(K(z^{i_1})\) and \(K(z^{i_2})\) shared a point w, then the reproducibility of K would give \(K(z^{i_1})=K(w)=K(z^{i_2})\), which is impossible since the later point \(z^{i_2}\) is picked in \(R{\setminus } K(z^{i_1})\) while \(z^{i_2}\in K(z^{i_2})\). Hence (10) is incompatible with the boundedness of \(C \cap \text{ FP }(K)\).

\(\square \)

The above hypotheses (i) and (ii) are quite restrictive. Nevertheless let us observe that the following “toy example” satisfies hypothesis (ii). Let \(C=[0,1/2] \times [0,1]\) and define the subsets \(C_1=\{(x_1,x_2)\in \mathbb {R}^2{:}\,x_1\in [0,1/2],~x_2\in [0,x_1]\}\) and \(C_2=\{(x_1,x_2)\in \mathbb {R}^2{:}\,x_1\in [0,1/2],~x_2\in ]x_1,1]\}\). Now define the set-valued map K from \(\mathbb {R}^2\) to \(\mathbb {R}^2\) by

$$\begin{aligned} K(x)=\left\{ \begin{array}{ll} C_1 &{}\quad \text{ if } x_1\ge x_2\\ C_2 &{}\quad \text{ if } x_1<x_2. \end{array} \right. \end{aligned}$$

Clearly the map K is reproducible on C and one has \(C \cap FP(K) = C\). The set C is composed of the two components \(C_1\) and \(C_2\), and therefore hypothesis (ii) holds. Moreover, by considering for example the (single-valued) map T defined on \(\mathbb {R}^2\) by \(T(x_1,x_2)=\{(x_1-1,x_2-1)\}\), one clearly obtains that each iteration of Procedure 5 consists in solving a variational inequality that has a unique solution. These solutions are (1/2, 1/2) and (1/2, 1), both in C.
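The two solutions just announced can be recovered by a brute-force grid version of the VI test; the discretization (step 1/20) and the tolerance below are our own choices, not part of the example.

```python
# Grid check of the toy example: T(x) = (x1 - 1, x2 - 1),
# C1 = {0 <= x2 <= x1 <= 1/2} and C2 = {0 <= x1 <= 1/2, x1 < x2 <= 1}.
# A grid point xbar solves the sampled VI on a component iff
# <T(xbar), y - xbar> >= 0 for every grid point y of that component.

step = [i / 20 for i in range(21)]                      # 0, 0.05, ..., 1
C1 = [(x1, x2) for x1 in step if x1 <= 0.5 for x2 in step if x2 <= x1]
C2 = [(x1, x2) for x1 in step if x1 <= 0.5 for x2 in step if x1 < x2 <= 1]

def vi_solutions(component):
    sols = []
    for xb in component:
        t = (xb[0] - 1, xb[1] - 1)                      # T is single-valued here
        if all(t[0] * (y[0] - xb[0]) + t[1] * (y[1] - xb[1]) >= -1e-9
               for y in component):
            sols.append(xb)
    return sols

assert vi_solutions(C1) == [(0.5, 0.5)]                 # unique solution on C1
assert vi_solutions(C2) == [(0.5, 1.0)]                 # unique solution on C2
print("VI solutions:", vi_solutions(C1) + vi_solutions(C2))
```

Since this T is the gradient of \(x\mapsto \frac{1}{2}\Vert x-(1,1)\Vert ^2\), each VI solution is the projection of (1, 1) onto the corresponding component, which matches the grid output.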

4 Pseudo-variational generalized Nash equilibrium problems

As already explained in Sect. 1, it is very important, from the computational point of view, to make precise the links between a generalized Nash equilibrium problem and its associated variational inequalities. These links have been widely studied in the literature, and it is now well known that, even if the strategy map K of a generalized Nash equilibrium problem GNEP\((\theta ,K)\) is driven by a convex Rosen’s law (1), only some particular solutions of GNEP\((\theta ,K)\), called variational solutions, will be obtained by solving the associated variational inequality, see e.g. Facchinei et al. (2007), Facchinei and Sagratella (2011). Our aim here is to show that, under a reproducibility assumption, the complete set of solutions of the generalized Nash equilibrium problem can be obtained by solving variational inequalities.

Let us recall, see e.g. Pang and Fukushima (2005), that if the loss functions \(\theta _\nu \) of GNEP\((\theta ,K)\) are continuously differentiable and convex with respect to \(x^\nu \) and the set-valued map K is convex valued, then GNEP\((\theta ,K)\) can be equivalently reformulated as the quasivariational inequality QVI(F, K) where \(F(x) := \left( \nabla _{x^\nu } \theta _\nu (x^\nu , x^{-\nu }) \right) _{\nu =1}^p \).

Theorem 1

Let GNEP\((\theta ,K)\) be a generalized Nash equilibrium problem for which the loss functions \(\theta _\nu \) are continuously differentiable and convex with respect to \(x^\nu \) and the strategy set-valued map K is convex valued. If K is reproducible on \(\text{ FP }(K)\), then GNEP\((\theta ,K)\) is F-pseudo-variational (see Definition 1).

Proof

The proof follows from Corollary 1 and the above-recalled equivalence between the solution set of GNEP\((\theta ,K)\) and that of QVI(F, K). \(\square \)

Remark 1

It is important to note that, even if the strategy map is given by Rosen’s law with a nonempty convex subset X of \(\mathbb {R}^n\), the set-valued map K is not necessarily reproducible on \(\text{ FP }(K)\). This situation can be simply illustrated by the case of a set-valued map in two real variables following Rosen’s law with X the closed unit ball B(0, 1) of \(\mathbb {R}^2\).
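This failure of reproducibility is easy to check numerically. In the sketch below (our own computation), the Rosen map associated with the closed unit ball is represented through its interval sections \(K_\nu (s)=[-\sqrt{1-s^2},\sqrt{1-s^2}]\); at the fixed point \(\hat{x}=(0,0)\) one finds \(z=(1,0)\in K(\hat{x})\) with \(K(z)\ne K(\hat{x})\).

```python
import math

# Rosen's law with X = closed unit ball of R^2 gives K(x) = K1(x2) x K2(x1),
# where K_nu(s) = {t : t^2 + s^2 <= 1} is an interval for |s| <= 1.

def section(s):
    """The interval K_nu(s), encoded as an (endpoint, endpoint) pair."""
    r = math.sqrt(max(1 - s * s, 0.0))
    return (-r, r)

def K(x):
    return (section(x[1]), section(x[0]))   # K(x) = K1(x2) x K2(x1)

xhat = (0.0, 0.0)
assert K(xhat) == ((-1.0, 1.0), (-1.0, 1.0))    # so xhat in K(xhat): fixed point

z = (1.0, 0.0)                                  # z lies in K(xhat) = [-1,1]^2
assert -1.0 <= z[0] <= 1.0 and -1.0 <= z[1] <= 1.0
assert K(z) == ((-1.0, 1.0), (0.0, 0.0))        # K(z) = [-1,1] x {0}
assert K(z) != K(xhat)                          # reproducibility fails at xhat
print("K(xhat) =", K(xhat), " K(z) =", K(z))
```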

In the more general case of quasiconvex loss functions, an analogue of the reformulation result obtained for differentiable convex loss functions can be established. The possible lack of differentiability and convexity forces us to use more elaborate tools of nonsmooth analysis, namely the normal operator, see Aussel and Hadjisavvas (2005), Aussel and Ye (2006), Aussel and Dutta (2008).

Let us first recall some definitions and related structures associated with quasiconvex functions. Recall that a function \( f : \mathbb {R}^n \rightarrow \mathbb {R}\) is said to be quasiconvex if, for any \( x, y \in \mathbb {R}^n \) and \( \lambda \in [0,1] \), we have \(f(\lambda x+(1-\lambda )y)\le \max \{f(x),f(y)\}\). An equivalent and useful characterization of quasiconvexity is that the function f is quasiconvex if and only if its sublevel set \( S_\lambda := \{ y \in \mathbb {R}^n : f(y) \le \lambda \} \) is convex for all \( \lambda \in \mathbb {R}\). Obviously convex and differentiable pseudoconvex functions are quasiconvex. The strict sublevel set of f will be denoted by \(S_\lambda ^< := \{ y \in \mathbb {R}^n : f(y)< \lambda \}\). Finally, we recall that a function \(f:\mathbb {R}^n \rightarrow \mathbb {R}\) is said to be semistrictly quasiconvex if f is quasiconvex and, for any \( x , y \in \mathbb {R}^n \) with \( f(x) \ne f(y) \) and \( \lambda \in (0,1) \), one has \( f(\lambda x + ( 1 - \lambda ) y ) < \max \{ f(x), f(y) \}. \) Roughly speaking, a semistrictly quasiconvex function is a quasiconvex function which does not admit “flat parts”, except possibly \(\arg \min _{\mathbb {R}^n} f\).
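These definitions can be explored numerically. The following sketch (a sampled test of our own, which of course is not a proof) illustrates that \(t\mapsto t^3\) passes the quasiconvexity inequality on a grid, although it is not convex, while \(t\mapsto -t^2\) fails it, its sublevel sets not being intervals.

```python
# Sampled check of the quasiconvexity inequality
#   f(l*x + (1-l)*y) <= max(f(x), f(y))
# over grids of points and weights; a tolerance absorbs rounding noise.

def quasiconvex_on_samples(f, pts, weights):
    return all(f(l * x + (1 - l) * y) <= max(f(x), f(y)) + 1e-9
               for x in pts for y in pts for l in weights)

pts = [i / 4 for i in range(-8, 9)]       # -2.0, -1.75, ..., 2.0
weights = [j / 10 for j in range(11)]     # 0.0, 0.1, ..., 1.0

assert quasiconvex_on_samples(lambda t: t ** 3, pts, weights)       # quasiconvex
assert not quasiconvex_on_samples(lambda t: -t ** 2, pts, weights)  # not
print("t**3 passes the sampled quasiconvexity test; -t**2 fails it")
```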

Let us now recall from Aussel and Hadjisavvas (2005) the fundamental concept of adjusted sublevel set, which provides, for quasiconvex functions, more information than the usual concepts of (large and strict) sublevel sets. Given a quasiconvex function f, the adjusted sublevel set is defined by

$$\begin{aligned} S^a_f (x) := S_{f(x)} \bigcap \overline{B\,(S^<_{f(x)}, \rho _x)}, \end{aligned}$$

where \( \rho _x = \text{ dist } (x, S^<_{f(x)}) \), if \(x\not \in \arg \min _{\mathbb {R}^n} f\) and \(S^a_f (x)=S_{f(x)}\) otherwise. Here \(B\,(S^<_{f(x)}, \rho _x)=S^<_{f(x)} + \rho _x B(0,1)\) denotes the \(\rho _x \)-neighbourhood of the set \( S^<_{f(x)}\) where B(0, 1) is the unit ball in \(\mathbb {R}^n \). The bar in the above expression denotes the closure of a set.

Given a quasiconvex function \( f : \mathbb {R}^n \rightarrow \mathbb {R}\), the normal operator associated with f is a set-valued map \( N^a_f : \mathbb {R}^n \rightrightarrows \mathbb {R}^n\) which is given as

$$\begin{aligned} N^a_f (x) := \left\{ v \in \mathbb {R}^n {:}\, \langle v , y - x \rangle \le 0, \quad \forall y \in S^a_f (x) \right\} , \end{aligned}$$

that is, \(N^a_f(x)\) is the polar cone to the adjusted sublevel set \(S^a_f (x)\). It is important to observe that, in the case of a semistrictly quasiconvex function, \(N^a_f(x)\) is simply the polar cone to the classical sublevel set \(S_{f(x)}\) or to the strict sublevel set \(S^<_{f(x)}\); that is, for any \(x\not \in \arg \min _{\mathbb {R}^n} f\), \(N^a_f(x)=(S_{f(x)}-x)^\circ =(S^<_{f(x)}-x)^\circ \).
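As a simple illustration of these definitions (a worked example of our own), take \(n=1\) and \(f(t)=t^3\), a semistrictly quasiconvex function with \(\arg \min _{\mathbb {R}} f=\emptyset \). Then, for every \(x\in \mathbb {R}\),

$$\begin{aligned} S_{f(x)} = (-\infty , x], \qquad S^<_{f(x)} = (-\infty , x), \qquad \rho _x = \text{ dist }\,(x, S^<_{f(x)}) = 0, \end{aligned}$$

so that \(S^a_f(x) = S_{f(x)} \cap \overline{B\,(S^<_{f(x)}, 0)} = (-\infty , x]\) and

$$\begin{aligned} N^a_f (x) = \left\{ v \in \mathbb {R} : \langle v , y - x \rangle \le 0, \quad \forall \, y \le x \right\} = [0,+\infty ) = (S_{f(x)}-x)^\circ , \end{aligned}$$

in accordance with the polar-cone formula above for semistrictly quasiconvex functions.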

To simplify notation, we define, for any \(\nu \) and any \(x\in \mathbb {R}^n\), the subsets \(S_\nu (x)\) and \(A_\nu (x^{-\nu })\) of \(\mathbb {R}^{n_\nu }\) by

$$\begin{aligned} S_\nu (x):=S_{\theta _\nu (\cdot ,x^{-\nu })}^a (x^\nu )\quad \text{ and }\quad A_\nu (x^{-\nu }):=\arg \min _{\mathbb {R}^{n_\nu }}\theta _\nu (\cdot ,x^{-\nu }), \end{aligned}$$

and we define the normal operator \(N^a_\theta : \mathbb {R}^n \rightrightarrows \mathbb {R}^n\) by

$$\begin{aligned} N^a_\theta (x):=F_1(x)\times \ldots \times F_p(x), \end{aligned}$$

where

$$\begin{aligned}F_\nu (x):=\left\{ \begin{array}{l@{\quad }l} \overline{B_\nu (0,1)} &{} \text{ if } x^\nu \in A_\nu (x^{-\nu })\\ \text{ conv }(N^a_{\theta _\nu }(x^\nu )\cap S_\nu (0,1)) &{} \text{ otherwise, } \end{array}\right. \end{aligned}$$

with \(\overline{B_\nu (0,1)}\) and \(S_\nu (0,1)\) denoting the closed unit ball and the unit sphere of \(\mathbb {R}^{n_\nu }\) and \(N^a_{\theta _\nu }(x^\nu )\) standing for the normal operator of the quasiconvex function \( \theta _\nu (\cdot , x^{-\nu })\) at \(x^\nu \), that is,

$$\begin{aligned} N^a_{\theta _\nu }(x^\nu ):=\{v^\nu \in \mathbb {R}^{n_\nu } : \langle v^\nu ,u^\nu -x^\nu \rangle \le 0,~\forall \,u^\nu \in S_\nu (x)\}. \end{aligned}$$
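In one dimension, the components \(F_\nu \) defined above can be computed directly: the unit sphere of \(\mathbb {R}\) is \(\{-1,1\}\), so \(F_\nu (x)\) is either the interval \([-1,1]\) (at an argmin) or the convex hull of the surviving admissible directions. A minimal grid sketch (helper name and loss function are illustrative; the sublevel-set test applies to semistrictly quasiconvex losses, for which the adjusted and plain sublevel sets give the same normal cone):

```python
def F_nu(theta_nu, x_nu, grid):
    """1-D sketch of F_nu: the closed unit ball at an argmin, otherwise
    conv of the unit normals admissible for the sublevel set, as an interval."""
    fx = theta_nu(x_nu)
    strict = [y for y in grid if theta_nu(y) < fx]
    if not strict:                       # x_nu minimises theta_nu on the grid
        return (-1.0, 1.0)               # closed unit ball of R
    # v in {-1, 1} is admissible iff v*(y - x_nu) <= 0 for all sublevel y
    dirs = [v for v in (-1.0, 1.0)
            if all(v * (y - x_nu) <= 1e-9
                   for y in grid if theta_nu(y) <= fx)]
    return (min(dirs), max(dirs))        # convex hull of surviving directions

grid = [i * 0.01 for i in range(401)]    # discretised strategy interval [0, 4]
quad = lambda y: (y - 2.0) ** 2          # illustrative loss of player nu
ball = F_nu(quad, 2.0, grid)             # at the argmin: (-1.0, 1.0)
cone = F_nu(quad, 1.0, grid)             # off the argmin: (-1.0, -1.0)
```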

According to Aussel and Hadjisavvas (2005, Proposition 3.4), the normal operator \(N^a_\theta \) has nonempty (convex) values. Moreover, for any \(\nu \) and any \(x^\nu \not \in A_\nu (x^{-\nu })\), \(N^a_{\theta _\nu }(x^\nu )\) is a closed cone and \( S_\nu (0,1)\) is compact, so \(F_\nu (x)\) is compact, being the convex hull of a compact set [see e.g. Hiriart-Urruty and Lemaréchal (1993, Theorem 1.4.3)]. Using classical calculus rules, we conclude that the set-valued map \(N^a_\theta \) has nonempty convex compact values. Based on the proofs given in Aussel and Dutta (2008), one can actually deduce the following generalization of Aussel and Dutta (2014, Theorem 1). We provide below an adapted sketch of the proof for the sake of completeness.

Theorem 2

Let GNEP\((\theta ,K)\) be a generalized Nash equilibrium problem for which the loss functions \(\theta _\nu \) are continuous and semistrictly quasiconvex with respect to \(x^\nu \) and the strategy set-valued map K is convex valued. Then \(\bar{x}\) is a solution of GNEP\((\theta ,K)\) if and only if \(\bar{x}\) is a solution of QVI\((N^a_\theta , K)\).

Proof

Let \(\bar{x}\) be a solution of GNEP\((\theta ,K)\). For any \(\nu \), we can assume, without loss of generality, that \(\bar{x}^\nu \not \in A_\nu (\bar{x}^{-\nu })\). Then the subset \(S_\nu (\bar{x})\) is closed and convex with nonempty interior \(\text{ int }(S_\nu (\bar{x}))=\{u^\nu : \theta _\nu (u^\nu ,\bar{x}^{-\nu })<\theta _\nu (\bar{x}^\nu ,\bar{x}^{-\nu })\}\). By a classical separation theorem applied to the convex sets \(K_\nu ({\bar{x}}^{-\nu })\) and \(\text{ int }(S_\nu (\bar{x}))\), there exists \(v^\nu \in F_\nu (\bar{x})\) such that \(\langle v^{\nu }, x^\nu -\bar{x}^{\nu }\rangle \ge 0\) for any \(x^\nu \in K_\nu (\bar{x}^{-\nu })\). Thus \(\bar{x}\) is a solution of QVI\((N^a_\theta ,K)\).

Now let \(\bar{x}\in \text{ QVI }(N^a_\theta , K)\) and let \(\nu \in \{1,\dots ,p\}\). If \( \bar{x}^\nu \in A_\nu (\bar{x}^{-\nu })\) then obviously \(\bar{x}^\nu \) solves player \(\nu \)'s problem given the other players' strategies \(\bar{x}^{-\nu }\). Otherwise, according to Aussel and Dutta (2008, Lemma 3.1), there exists \(u^\nu \in N^a_{\theta _\nu }(\bar{x}^\nu )\setminus \{0\}\) such that \(\langle u^\nu , y^\nu -\bar{x}^\nu \rangle \ge 0\) for any \(y^\nu \in K_\nu (\bar{x}^{-\nu })\). This, together with Aussel and Ye (2006, Proposition 3.2), implies that \(\bar{x}\) is a solution of GNEP\((\theta , K)\). \(\square \)

Therefore, as an immediate consequence of Theorem 2 and Corollary 1, we obtain a sufficient condition for a GNEP with nonconvex loss functions to be pseudo-variational.

Theorem 3

Let GNEP\((\theta ,K)\) be a generalized Nash equilibrium problem for which the loss functions \(\theta _\nu \) are continuous and semistrictly quasiconvex with respect to \(x^\nu \) and the strategy set-valued map K is convex valued. If, in addition, K is reproducible on \(\text{ FP }(K)\), then GNEP\((\theta ,K)\) is \(N^a_\theta \)-pseudo-variational (see Definition 1).

In light of the above, we can use Procedure 5, with \(T:=N^a_\theta \), to find all solutions of a GNEP satisfying the assumptions of Theorem 3.

5 Pseudo-NEP generalized Nash equilibrium problems

In this section we consider generalized Nash equilibrium problems for which the loss functions are not necessarily differentiable and we do not assume any convexity condition.

Definition 4

We say that GNEP\((\theta ,K)\) is a pseudo-Nash equilibrium problem (pseudo-NEP for short) if and only if

$$\begin{aligned} \text{ GNEP }(\theta ,K) \equiv \cup _{z\in \text {FP}(K)} \text{ NEP }(\theta ,K(z)), \end{aligned}$$

that is, any solution of GNEP\((\theta ,K)\) is a solution of the Nash equilibrium problem NEP\((\theta ,K(z))\) for some fixed point z of the map K, and, conversely, any solution of such a NEP\((\theta ,K(z))\) is a solution of GNEP\((\theta ,K)\).
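In the simplest situation, where the strategy sets do not depend on the rivals' strategies at all, the decomposition above reduces the GNEP to a single NEP, which on a finite grid can be solved by enumeration. A minimal sketch with purely illustrative loss functions and a hypothetical helper:

```python
from itertools import product

grid = [0, 1, 2]
theta = [lambda x: (x[0] - x[1]) ** 2,   # player 1's loss
         lambda x: (x[1] - 1) ** 2]      # player 2's loss

def is_nep(x, K):
    """x is a Nash equilibrium of NEP(theta, K): no player has a
    profitable unilateral deviation inside its (constant) strategy set."""
    for nu in range(2):
        for dev in K[nu]:
            y = list(x)
            y[nu] = dev
            if theta[nu](y) < theta[nu](x) - 1e-12:
                return False
    return True

K = [grid, grid]                         # constant strategy sets
solutions = [x for x in product(grid, grid) if is_nep(x, K)]
# → [(1, 1)]
```

Here player 2 always plays 1 and player 1 matches it, so the unique equilibrium is (1, 1).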

Here we do not assume any convexity of the players' loss functions since no variational tools are used. Let us define the set-valued maps \(\Omega :\mathbb {R}^n \rightrightarrows \mathbb {R}^n\) and \(Y:\mathbb {R}^n \rightrightarrows \mathbb {R}^n\) by

$$\begin{aligned} \Omega (x)&:= \Omega _1(x^{-1}) \times \cdots \times \Omega _p(x^{-p}), \\ Y(x)&:= Y_1(x^{-1}) \times \cdots \times Y_p(x^{-p}), \end{aligned}$$

where, for any \(\nu =1,\dots ,p\), \(\Omega _\nu : \mathbb {R}^{n_{-\nu }} \rightrightarrows \mathbb {R}^{n_\nu }\) and \(Y_\nu : \mathbb {R}^{n_{-\nu }} \rightrightarrows \mathbb {R}^{n_\nu }\) are set-valued maps. We will further denote by GNEP\((\theta ,\Omega ,Y)\) the generalized Nash equilibrium problem in which \(K(x) := \Omega (x) \cap Y(x)\), that is, for any \(\nu \in \{1,\ldots ,p\}\),

$$\begin{aligned} K_\nu (x^{-\nu }) := \Omega _\nu (x^{-\nu }) \cap Y_\nu (x^{-\nu }). \end{aligned}$$

Given a point \({\bar{x}}\in \mathbb {R}^n\), we denote by GNEP\((\theta ,\bar{K}(\cdot ,\bar{x}))\) the generalized Nash equilibrium problem whose loss functions are the same as those of GNEP\((\theta ,\Omega ,Y)\), but in which the strategy set-valued map \(\bar{K}\) is parametrized and given, for all \(\nu \), by

$$\begin{aligned} \bar{K}_\nu (x^{-\nu },{\bar{x}}^{-\nu }) := \Omega _\nu (x^{-\nu }) \cap Y_\nu ({\bar{x}}^{-\nu }). \end{aligned}$$

The following theorem links the solution set of GNEP\((\theta ,\Omega ,Y)\) with those of the family GNEP\((\theta ,\bar{K}(\cdot ,\bar{x}))\).

Theorem 4

Suppose that Y is reproducible on \(\mathbb {R}^n\).

  (i)

    Let \(z \in K(z)\). If \(\bar{x}\) is a solution of GNEP\((\theta ,\bar{K}(\cdot ,z))\), then \(\bar{x}\) is a solution of GNEP\((\theta ,\Omega ,Y)\).

  (ii)

    If \(\bar{x}\) is a solution of GNEP\((\theta ,\Omega ,Y)\), then, for all \(z \in Y(\bar{x})\), \(\bar{x}\) is a solution of GNEP\((\theta ,\bar{K}(\cdot ,z))\).

Proof

(i). Since \({\bar{x}}\) is feasible for GNEP\((\theta ,\bar{K}(\cdot ,z))\), we have \({\bar{x}} \in \Omega ({\bar{x}}) \cap Y(z)\). Since Y is reproducible on \(\mathbb {R}^n\) and \({\bar{x}} \in Y(z)\), it follows that \(Y(z) \equiv Y(\bar{x})\). Therefore \({\bar{x}} \in \Omega ({\bar{x}}) \cap Y({\bar{x}})\), that is, \({\bar{x}}\) is feasible for GNEP\((\theta ,\Omega ,Y)\).

Moreover, for all \(\nu \), \(\bar{x}^\nu \) solves player \(\nu \)'s optimization problem, that is,

$$\begin{aligned} \theta _\nu (\bar{x}^\nu , \bar{x}^{-\nu }) \le \theta _\nu (x^\nu , \bar{x}^{-\nu }), \qquad \forall \, x^\nu \in \Omega _\nu (\bar{x}^{-\nu }) \cap Y_\nu (z^{-\nu }). \end{aligned}$$
(11)

Again, since \(Y(z) \equiv Y(\bar{x})\), \(\bar{x}^\nu \) also satisfies

$$\begin{aligned} \theta _\nu (\bar{x}^\nu , \bar{x}^{-\nu }) \le \theta _\nu (x^\nu , \bar{x}^{-\nu }), \qquad \forall \, x^\nu \in \Omega _\nu (\bar{x}^{-\nu }) \cap Y_\nu (\bar{x}^{-\nu }), \end{aligned}$$
(12)

that is, \(\bar{x}\) is a solution of GNEP\((\theta ,\Omega ,Y)\).

(ii). The point \({\bar{x}}\) is feasible for GNEP\((\theta ,\Omega ,Y)\), that is, \({\bar{x}} \in K(\bar{x})=\Omega ({\bar{x}}) \cap Y({\bar{x}})\). Since Y is reproducible on \(\mathbb {R}^n\) and \(z \in Y(\bar{x})\), we have \(Y(z) \equiv Y(\bar{x})\), and therefore \({\bar{x}}\) is feasible for GNEP\((\theta ,\bar{K}(\cdot ,z))\).

Moreover, \(\bar{x}^\nu \) is a solution of (12) for all \(\nu \) and thus, using again that \(Y(z) \equiv Y(\bar{x})\), we conclude that, for all \(\nu \), \(\bar{x}^\nu \) is also a solution of (11), that is, \({\bar{x}}\) is a solution of GNEP\((\theta ,\bar{K}(\cdot ,z))\). \(\square \)

The following corollary of Theorem 4 gives sufficient conditions for a generalized Nash equilibrium problem to be a pseudo-NEP.

Corollary 2

Suppose that Y is reproducible on \(\mathbb {R}^n\) and that \(\Omega _\nu (x^{-\nu }) = \Omega _\nu \) is constant for all \(\nu \in \{1,\ldots ,p\}\). Then GNEP\((\theta ,K)\) is a pseudo-NEP.

Finally we propose two applications, inspired by market models, that naturally involve reproducible set-valued maps and can thus be modeled as pseudo-NEPs.

Example 3

Let us consider the case in which the p players share a set of r common resources. In this case all players have the following common constraints

$$\begin{aligned} \sum _{\mu =1}^p a^\mu _j (x^\mu ) \le b_j, \qquad j = 1, \ldots , r, \end{aligned}$$

where \(a^\mu _j : \mathbb {R}^{n_\mu } \rightarrow \mathbb {R}\) is the function measuring the consumption of resource j by player \(\mu \) (we assume that \(a^\mu _j\) is injective, e.g. it is linear), and \(b_j > 0\) is the amount of resource j available in the system. Let us suppose that all players can store, without additional cost, the amounts of resources that they do not use. Then, for any player \(\mu \) and any resource j, we can introduce a surplus variable \(y^\mu _j \ge 0\) indicating how much of that resource is stored by the player. In this case we can rewrite the common constraints as equations:

$$\begin{aligned} \sum _{\mu =1}^p \left( a^\mu _j (x^\mu ) + y^\mu _j \right) = b_j, \qquad j = 1, \ldots , r. \end{aligned}$$

Assuming that any player \(\nu \) has some private constraints \(X_\nu \), the optimization problem that he must solve is

$$\begin{aligned} \min _{x^\nu , y_1^\nu , \ldots , y_r^\nu } \;&\theta _\nu (x^\nu , x^{-\nu }) \\&\left\{ \begin{array}{l} \sum _{\mu =1}^p \left( a^\mu _j (x^\mu ) + y^\mu _j \right) = b_j, \qquad j = 1, \ldots , r,\\ x^\nu \in X_\nu , \qquad y^\nu _j \ge 0, \qquad j = 1, \ldots , r. \end{array}\right. \end{aligned}$$

Thus, using Proposition 2 and Corollary 2, the associated generalized Nash equilibrium problem is a pseudo-NEP.
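The rewriting of the common inequality constraints as equalities can be checked numerically; in the sketch below all data (players, coefficients, strategies, the equal split of the slack) are hypothetical and purely illustrative.

```python
# Hypothetical data: p = 2 players, r = 2 shared resources, scalar
# strategies, and linear consumptions a^mu_j(x) = A[mu][j] * x.
A = [[1.0, 2.0],    # player 1: consumption coefficient per resource
     [3.0, 1.0]]    # player 2
b = [10.0, 8.0]     # available amounts b_j
x = [2.0, 1.5]      # current strategies x^mu

# total consumption of each resource j (must satisfy used[j] <= b[j])
used = [sum(A[mu][j] * x[mu] for mu in range(2)) for j in range(2)]

# split each slack b_j - used[j] into nonnegative surplus variables
# y^mu_j; an equal split is one (arbitrary) admissible choice
y = [[(b[j] - used[j]) / 2 for j in range(2)] for mu in range(2)]

# the inequality constraints now hold as equalities:
lhs = [sum(A[mu][j] * x[mu] + y[mu][j] for mu in range(2)) for j in range(2)]
# lhs == b
```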

Example 4

This example is taken from Facchinei et al. (2013). Consider a market with p firms. Each firm owns l plants, each one generating electric energy for sale. We denote by \(x^\nu _j\) the energy produced by firm \(\nu \) at the j-th plant. The unit energy price in the market depends on the total amount of energy produced by all the firms; we denote by \(p(x^1,\ldots ,x^p)\) the inverse demand function of the market. The profit of each firm \(\nu \) thus depends on the generation levels of the other firms in the market and is equal to

$$\begin{aligned} \Pi _\nu (x^1,\ldots ,x^p) = p(x^1,\ldots ,x^p) \left( \sum _{j=1}^l x^\nu _j\right) - c_\nu (x^\nu ), \end{aligned}$$

where \(c_\nu \) is a cost function. In turn, each generation level is constrained by the technological limitations of the power plants (\(X_\nu \)). The coordination, or regulation, of the market is done by the Independent System Operator (ISO), whose actions in the market are considered as those of an additional player. Accordingly, letting the ISO be player number 0, the ISO tries to maximize the social welfare by encouraging all the firms to satisfy the total market demand \(d>0\).

In this framework, the optimization problem solved by the ISO is

$$\begin{aligned}&\min _{x^0 \in \mathbb {R}} \; x^0 \\&\qquad \left\{ \begin{array}{l} x^0 + \sum _{\nu =1}^p \sum _{j=1}^l x^\nu _j = d \\ l \le x^0 \le u, \end{array}\right. \end{aligned}$$

with \(l\le u\), and the optimization problem solved by each firm \(\nu \) is

$$\begin{aligned} \max _{x^\nu } \;&\Pi _\nu (x^1,\ldots ,x^p) \\&\left\{ \begin{array}{l} x^0 + \sum _{\nu =1}^p \sum _{j=1}^l x^\nu _j = d \\ x^\nu \in X_\nu . \end{array}\right. \end{aligned}$$

Again, according to Proposition 2 and Corollary 2, this simplified electricity market model is a pseudo-NEP.
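On hypothetical numbers the interplay between the ISO's balancing variable and the firms' profits can be traced directly; the linear inverse demand, the quadratic costs and every constant below are purely illustrative and not taken from Facchinei et al. (2013).

```python
# Hypothetical data: 2 firms, 2 plants each; all numbers illustrative.
p0, s = 50.0, 0.1                 # linear inverse demand p(q) = p0 - s*q
d = 100.0                         # total market demand
lo_b, up_b = 0.0, 20.0            # ISO bounds l <= x0 <= u
X = [[20.0, 15.0], [30.0, 25.0]]  # generation levels x^nu_j

total = sum(sum(firm) for firm in X)      # firms' total production
price = p0 - s * total                    # unit price from inverse demand
x0 = d - total                            # forced by the ISO equality constraint
feasible = lo_b <= x0 <= up_b             # ISO box constraint

def profit(nu, c=0.05):
    """Profit Pi_nu with illustrative quadratic cost c * q^2."""
    q = sum(X[nu])
    return price * q - c * q ** 2
```

Here the firms produce 90 units in total, so the ISO's equality constraint forces x0 = 10, which lies inside its (illustrative) bounds.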