1 Introduction

Optimization problems are standard tools for the formulation and solution of various decision making problems; see e.g. Polyak (1983). That is, one has to find an element, interpreted as a decision, from some given feasible set D that yields the maximal (or minimal) value of some goal (utility) function f. For brevity, we write this problem as

$$\begin{aligned} \max \limits _{y \in D} \rightarrow f(y). \end{aligned}$$
(1)

This formulation assumes that both the feasible set and the goal function are known exactly. However, this assumption does not seem very realistic, since the description of both the goal function and the feasible set may be incomplete due to various kinds of uncertainty in real data; see e.g. Hlaváček et al. (2004), Ben-Tal et al. (2009). Usually, this leads to the appearance of indefinite factors. For instance, one can denote the collection of these factors by \(\omega \) and take the problem

$$\begin{aligned} \max \limits _{y \in D(\omega )} \rightarrow f(y,\omega ), \end{aligned}$$
(2)

where \(\omega \in \varOmega \) and \(\varOmega \) denotes the whole set of possible values of the indefinite factors. However, unlike (1), problem (2) may not have a definite solution, i.e. it requires a proper reformulation. There exists a number of such reformulations, depending on the available knowledge about the properties of the indefinite factors; see e.g. Fishburn (1970), Larichev (1979), Moiseev (1981), Hlaváček et al. (2004), Ben-Tal et al. (2009).

In many cases, it is required that all the necessary knowledge about the problem data and all the parameters be derived beforehand. This means that one has to solve problem (1), or a proper reformulation of (2), and then implement the obtained decision in the real system. In this situation, the basic optimization problem is considered to be absolute. Of course, the implementation of the decision usually admits proper corrections caused by possible data perturbations, but the model remains the same. At the same time, the presentation of the goal and constraints defining the system model may vary together with the changes of the system state, especially if the system under consideration is rather complicated. Moreover, only limited information about the goal and constraints may be known at each state. Therefore, our decisions will depend on the current state, and this fact should be reflected in the mathematical formulation of the decision making problem. This means that the corresponding optimization problem should then be considered as relative or subjective with respect to the system states.

In this paper, we intend to describe such mathematical models and establish some of their properties. We propose equilibrium formulations of these problems and show that they give general (quasi-)equilibrium problems. Finding a solution of a quasi-equilibrium problem may be rather difficult because of the presence of the moving feasible set. We propose to apply a regularized version of the penalty method to the general quasi-equilibrium problem, which enables us to establish existence results under weak coercivity conditions and to replace the initial quasi-equilibrium problem with a sequence of the usual equilibrium problems. We illustrate our approach with several examples of applications and show that it can be extended to non-cooperative game problems.

2 Basic formulations of relative optimization problems

We first consider the problem of finding an optimal decision for the case where the presentation (model) of the goal depends on the current state. That is, given a current state x, the goal function at x is described as f(x, y) for the state estimate y. We may suppose that f(x, x) gives a rather precise estimate of the goal at x, but the value of f(x, y), which gives our estimate of the decision y at x, may differ from f(y, y), which gives the estimate of the choice of y at y. Besides, the value f(y, y) is not supposed to be known at x. For this reason, the global optimization approach, which consists in solving the optimization problem

$$\begin{aligned} \max \limits _{y \in D} \rightarrow u(y)= f(y,y), \end{aligned}$$
(3)

does not seem suitable here, even if the feasible set D is fixed. We propose to apply the equilibrium formulation of this relative problem: Find \(x^{*} \in D\) such that

$$\begin{aligned} f(x^{*},x^{*}) \ge f(x^{*},y) \quad \forall y \in D. \end{aligned}$$
(4)

This means that the feasible state \(x^{*}\) yields the maximal value of the goal function among all feasible states, as evaluated at the current state \(x^{*}\). Hence, we can call \(x^{*}\) a relatively optimal point since it is based on the current (relative) knowledge about the system. It follows that we cannot obtain a genuine optimal point in the sense of (1) or (3) due to the presence of data uncertainty. Nevertheless, we believe that solutions of problem (4) can be used to decide whether the current state is suitable or should be changed.
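For intuition, condition (4) can be checked numerically by brute force on a discretized feasible set. The sketch below (with hypothetical bi-functions, not taken from the text) tests whether a candidate point is relatively optimal on a finite sample of D:

```python
def is_relatively_optimal(f, x_star, grid, tol=1e-9):
    # Condition (4): f(x*, x*) >= f(x*, y) for all feasible y.
    # The continuous set D is replaced by the finite sample `grid`,
    # so this is only a necessary-condition check on the sample.
    base = f(x_star, x_star)
    return all(f(x_star, y) <= base + tol for y in grid)

grid = [i / 100 for i in range(101)]          # sample of D = [0, 1]

f1 = lambda x, y: -(y - x) ** 2               # every state is relatively optimal
f2 = lambda x, y: y                           # only y = 1 is relatively optimal
print(is_relatively_optimal(f1, 0.3, grid))   # True
print(is_relatively_optimal(f2, 0.5, grid))   # False
```

The bi-function f1 shows that relative optimality need not single out one point: every state can be self-consistent in the sense of (4) even when no state is distinguished in the sense of (3).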

It is easy to see that (4) coincides with the general equilibrium problem (EP for short), which gives a common format for many other nonlinear problems such as optimization, fixed point, variational inequality, saddle point, and non-cooperative game equilibrium problems; see, e.g., Baiocchi and Capelo (1984), Blum and Oettli (1994), Konnov (2001) and references therein.

However, it is natural to suppose that the feasible set D also depends on the states. Namely, let us define the feasible set mapping as follows:

$$\begin{aligned} D (x)=V \bigcap W(x), \end{aligned}$$
(5)

where V is a set in the space \({\mathbb {R}}^{n}\) and \(W : {\mathbb {R}}^{n} \rightarrow \varPi ({\mathbb {R}}^{n})\) is a set-valued mapping. Here and below, \(\varPi (A)\) denotes the family of all subsets of a set A. That is, V is the constant part of the feasible set, whereas W(x) describes the varying knowledge about the constraints at the current state x. Therefore, we can utilize the general formulation of the relative optimization problem: Find \(x^{*} \in D(x^{*})\) such that

$$\begin{aligned} f(x^{*},x^{*}) \ge f(x^{*},y) \quad \forall y \in D(x^{*}). \end{aligned}$$
(6)

This is nothing but the so-called quasi-equilibrium problem (QEP for short); see Bensoussan and Lions (1984), Harker (1991), Aubin (1998). As to the set W(x), it can be defined with the help of some constraint bi-functions \( h_{i}(x,y)\), \(i=1, \ldots , m\), as follows

$$\begin{aligned} W(x)=\left\{ y \in {\mathbb {R}}^{n} \ | \ h_{i}(x,y) \le 0, \ i=1, \ldots , m \right\} . \end{aligned}$$
(7)

That is, \( h_{i}(x,y)\) gives our estimate of the i-th constraint for the state y at x. We observe that problem (2) with uncertainty can be reduced to (6) if the indefinite factors \(\omega \) depend on the states, i.e. when \(\omega =\omega (x,y)\).
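A direct membership test for the moving set D(x) = V ∩ W(x), with W(x) of form (7), can be sketched as follows (the set V and the constraint bi-function h below are hypothetical toy data):

```python
def in_moving_set(x, y, in_V, constraints, tol=1e-12):
    # D(x) = V ∩ W(x), with W(x) from (7):
    # W(x) = { y : h_i(x, y) <= 0 for i = 1, ..., m }.
    return in_V(y) and all(h(x, y) <= tol for h in constraints)

in_V = lambda y: 0.0 <= y <= 2.0          # constant part V = [0, 2]
h = lambda x, y: y - (1.0 + 0.5 * x)      # one state-dependent constraint

print(in_moving_set(0.0, 0.5, in_V, [h]))  # True:  0.5 <= 1 + 0
print(in_moving_set(0.0, 1.5, in_V, [h]))  # False: 1.5 >  1 + 0
print(in_moving_set(2.0, 1.5, in_V, [h]))  # True:  1.5 <= 1 + 1
```

Note how the same candidate y = 1.5 is infeasible at the state x = 0 but feasible at x = 2: this is exactly the moving-set effect that distinguishes QEPs from EPs.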

Example 2.1

(Treatment of industrial wastes). Let us consider an industrial firm which may utilize n production technologies and has a plant for the treatment of its wastes containing m pollutant substances. Let \(x=(x_{1},\ldots ,x_{n})^{\top } \in {\mathbb {R}}^{n}\) be the vector of technology activity levels of the firm. Then \(q(x)= (q_{1}(x),\ldots ,q_{m}(x))^{\top } \in {\mathbb {R}}^{m}\) is the corresponding vector of its wastes and \(\mu (x)\) is the benefit of this firm. Next, suppose that the vector c of unit treatment charges depends on the pollution volumes, that is \(c=c(q(x))\). Further, we denote by \(X \subseteq {\mathbb {R}}^{n}_{+}\) the feasible activity level set of the firm and suppose that the total pollution volumes within the planned period must be bounded above by the fixed vector \(b \in {\mathbb {R}}^{m}_{+}\). Here and below, \({\mathbb {R}}_{+}^{s}\) denotes the non-negative orthant in \({\mathbb {R}}^{s}\), i.e.,

$$\begin{aligned} {\mathbb {R}}_{+}^{s} = \{ x \in {\mathbb {R}}^{s} \ | \ x_{j} \ge 0 \quad j=1, \ldots , s \}. \end{aligned}$$

Then the firm has the utility (profit) function

$$\begin{aligned} f(x)= \mu (x)-\sum _{i=1}^{m} q_{i}(x)c_{i}[q(x)], \end{aligned}$$

and the constraint set

$$\begin{aligned} D=\left\{ x \in X \bigg | \sum _{j=1}^{m} q_{j}(x) \le b \right\} . \end{aligned}$$

If all the parameters are known, we obtain the usual problem (1) for determining the optimal decision.

Let us now consider the case where the exact values of some parameters are not known. For instance, if x is the current vector of activity levels, then one can calculate the exact values of the functions \(c_{i}[q(y)]\) only if q(y) belongs to some neighborhood U(x) of q(x); that is, we have in fact \(c_{i}=c_{i}[q(x),q(y)]\). This means that small perturbations of the activity levels enable one to calculate exact values, but greater perturbations may require new facilities that were not used before, so that only theoretical approximate evaluations of the same parameters can be calculated. Moreover, the same situation occurs if the firm does not utilize some feasible treatment technologies at x. Utilization of these technologies may become necessary for other activity levels, but this may require essential changes in the industrial organization. Hence, it would be difficult to evaluate the desired parameters \(c_{i}\) exactly at x. Clearly, these evaluations will differ from the same parameters calculated near y. In order to create an adequate model, we can take the relative utility function

$$\begin{aligned} f(x,y)= \mu (y)-\sum _{i=1}^{m} q_{i}(y)c_{i}[q(x),q(y)], \end{aligned}$$

and the relative optimization problem of form (4) in order to decide whether the current activity level is suitable or not. A similar uncertainty in the parameters of the functions \(q_{i}\) clearly leads to the more general formulation (6).
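As a toy numerical illustration of this relative utility (all data hypothetical: two technologies, one pollutant, linear benefit and wastes, and a treatment charge that is exact at the current pollution level and only roughly estimated away from it):

```python
# Hypothetical instance of Example 2.1: n = 2 technologies, m = 1 pollutant.
mu = lambda x: 3.0 * x[0] + 5.0 * x[1]      # benefit mu(x)
q = lambda x: 0.5 * x[0] + 1.0 * x[1]       # pollution volume q(x)

def charge(qx, qy):
    # Unit treatment charge c[q(x), q(y)]: exact at qy = qx, and only a
    # rough (here: linearly inflated) estimate farther from q(x).
    return 1.0 + 0.2 * abs(qy - qx)

def f(x, y):
    # Relative utility f(x, y) = mu(y) - q(y) * c[q(x), q(y)].
    return mu(y) - q(y) * charge(q(x), q(y))

x = (1.0, 1.0)
print(f(x, x))           # 6.5: exact profit estimate at the current state
print(f(x, (2.0, 0.0)))  # 4.9: estimate of another state, as seen from x
```

Here f(x, x) reflects the precise knowledge available at the current state, while f(x, y) for a distant y is distorted by the inflated charge estimate, as in the discussion above.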

3 Existence of solutions of equilibrium problems

The first issue in the investigation of any problem is establishing conditions that guarantee the existence of its solutions. We intend to utilize known existence results for EPs on not necessarily compact sets, since they were obtained under rather mild coercivity conditions; see, e.g., Blum and Oettli (1994), Bianchi and Pini (2005), Konnov and Dyabilkin (2011), Konnov (2015).

Given a function \(\varphi : {\mathbb {R}}^{n} \rightarrow {\mathbb {R}}\) and a number \(\gamma \), we define the set

$$\begin{aligned} X(\varphi ,\gamma )=\left\{ x \in {\mathbb {R}}^{n} \ | \ \varphi (x) \le \gamma \right\} . \end{aligned}$$

We recall that the function \(\varphi : {\mathbb {R}}^{n} \rightarrow {\mathbb {R}}\) is said to be weakly coercive with respect to a set K if there exists a number \(\gamma \) such that the set \(X(\varphi ,\gamma )\bigcap K\) is nonempty and bounded. Next, a function \(\varphi : {\mathbb {R}}^{n} \rightarrow {\mathbb {R}}\) is said to be upper (lower) semi-continuous on a set X if, for any point \(z \in X\) and any sequence \(\{ x^{k}\} \rightarrow z\), \(x^{k} \in X\), it holds that

$$\begin{aligned} \limsup _{k \rightarrow \infty } \varphi (x^{k}) \le \varphi (z) \ (\liminf _{k \rightarrow \infty } \varphi (x^{k}) \ge \varphi (z)). \end{aligned}$$

Also, a set-valued mapping \(T : {\mathbb {R}}^{n} \rightarrow \varPi ({\mathbb {R}}^{n})\) is said to be lower semi-continuous at a point \(z \in X\) on a set X if, for any sequence \(\{ x^{k}\} \rightarrow z\), \( x^{k} \in X\), and any \(t \in T(z)\) there exists a sequence \(\{ t^{k}\} \rightarrow t\), \( t^{k} \in T(x^{k})\). The mapping T is said to be lower semi-continuous on the set X if it is lower semi-continuous at any point of X.

Let K be a nonempty set in \({\mathbb {R}}^{n}\) and let \(\eta : K \times K \rightarrow {\mathbb {R}}\) be an equilibrium bi-function, i.e., \(\eta (x, x)=0\) for each \(x \in K\). Then, we can define the following EP: find a point \(x^{*} \in K\) such that

$$\begin{aligned} \eta (x^{*}, y) \ge 0, \quad \forall y \in K. \end{aligned}$$
(8)

Let us define the coercivity condition.

(C1) There exist a lower semi-continuous and convex function \(\mu : {\mathbb {R}}^{n} \rightarrow {\mathbb {R}}\), which is weakly coercive with respect to the set K, and a number \(\gamma \), such that, for any point \(x \in K {\setminus } X(\mu ,\gamma )\), there is a point \(z \in K\) with

$$\begin{aligned} \min \{ \eta (x, z), \mu (z)-\mu (x)\} <0 \quad \text{ and } \quad \max \{ \eta (x, z), \mu (z)-\mu (x)\} \le 0. \end{aligned}$$

We state the existence result for EPs as a proper specialization of Theorem 3.1 from Konnov (2015).

Proposition 3.1

If K is a nonempty, convex and closed set in \({\mathbb {R}}^{n}\), \(\eta : K \times K \rightarrow {\mathbb {R}}\) is an equilibrium bi-function, \(\eta (\cdot , y)\) is upper semi-continuous for each \(y \in K\), \(\eta (x, \cdot )\) is convex for each \(x \in K\), and the assumptions in (C1) are fulfilled, then EP (8) has a solution.

It was also indicated in Konnov (2015) that EP (8) then has a solution in the set \(X(\mu ,\gamma )\bigcap K\).

Clearly, Proposition 3.1 can be used for establishing existence results for problem (4) if we set

$$\begin{aligned} \eta (x, y)= f (x,x)-f (x,y) \quad {\textit{and}} \quad K=D. \end{aligned}$$
(9)

We can conclude that the existence of solutions of EPs (8) or (4) requires, in particular, additional concavity of the function \(f(x, \cdot )\), whereas this is not necessary for the usual optimization problem of form (1).

Example 3.1

(Extended linear programming). Let us consider the classical problem of income maximization subject to resource limitations. In this model we are given a firm that may produce n commodities and utilize m factors (resources). Let \(c_{j}\) denote the price of the j-th commodity, \(x_{j}\) denote the unknown output of the j-th commodity, \(b_{i}\) denote the endowment of the i-th resource, and \(a_{ij}\) denote the amount of the i-th resource required for producing one unit of the j-th commodity. If all the values \(c_{j}\), \(b_{i}\), and \(a_{ij}\) are fixed and known exactly, we obtain the classical linear programming problem, which is a particular case of (1):

$$\begin{aligned} \max \limits _{y \in D} \rightarrow \langle c,y\rangle =\sum \limits _{j=1}^{n } c _{j} y_{j}, \end{aligned}$$
(10)

where

$$\begin{aligned} D=\left\{ y \in {\mathbb {R}}_{+}^{n} \ \bigg |\ \sum \limits _{j=1}^{n } a_{ij} y_{j} \le b_{i} \quad \text{ for } \ i=1,\ldots ,m \right\} . \end{aligned}$$
(11)

However, some parameters may not be known exactly. Suppose that the prices depend on the outputs, i.e. \(c = c(x)\), so that we can evaluate the prices only at the current output level x. Then we can define the bi-function

$$\begin{aligned} f (x,y)=\langle c(x),y\rangle \end{aligned}$$
(12)

and consider problem (4), (12), where D is defined in (11). Observe that it has the same formulation as the Cassel–Wald economic equilibrium model; see Kuhn (1956) for more detail. However, the relative formulation and the usual optimization approach may give different solutions. In order to illustrate this difference, we take the simplest case where

$$\begin{aligned} m=1, \quad n=2, \quad a_{11}=a_{12}=b_{1}=1. \end{aligned}$$

That is, (10)–(11) reduces to the problem

$$\begin{aligned} \max \ \rightarrow \ \{ c_{1} y_{1}+c _{2} y_{2} \ |\ y_{1}+ y_{2} \le 1, \ y_{1}\ge 0, y_{2}\ge 0\}. \end{aligned}$$
(13)

For any fixed vector c with \(c_{1}> c_{2}\) (respectively, \(c_{1}< c_{2}\)) we have the solution \(x'=(1,0)^{\top }\) (respectively, \(x'=(0,1)^{\top }\)). Suppose first that the prices depend on the outputs as follows:

$$\begin{aligned} c_{j}(x)=1-x_{j} \quad \hbox {for} \ j=1,2. \end{aligned}$$
(14)

Then the usual optimization approach leads to a cyclic process. In fact, starting from any point \(x^{0}\) where \(c_{1}(x^{0}) \ne c_{2}(x^{0})\) and taking the optimal response \(x^{1}\) in the sense of the above optimization problem (13), we obtain an infinite sequence of solutions \(\{x^{k}\}\) with zero goal value, i.e. \(\langle c(x^{k}),x^{k}\rangle =0\). Let us now consider the relative optimization problem: Find \(x^{*} \in D\) such that

$$\begin{aligned} c_{1}(x^{*}) x^{*}_{1}+c _{2}(x^{*}) x^{*}_{2} \ge c_{1}(x^{*}) y_{1}+c _{2}(x^{*}) y_{2} \quad \forall y \in D, \end{aligned}$$
(15)

where

$$\begin{aligned} D=\left\{ x \in {\mathbb {R}}_{+}^{2} \ \bigg |\ x_{1}+ x_{2} \le 1 \right\} . \end{aligned}$$

It clearly has the unique solution \(x^{*}=(0.5,0.5)^{\top }\) since it yields \(c(x^{*})=(0.5,0.5)^{\top }\) and \(\langle c(x^{*}),x^{*}\rangle =0.5\). Besides, the point \(x^{*}\) is the unique solution of the global optimization problem (3), (12), (14), where

$$\begin{aligned} u(y)=(1-y_{1})y_{1}+ (1-y_{2})y_{2} \end{aligned}$$

is a strongly concave function and \(\nabla u(x^{*})={\mathbf {0}}\). However, problems (3) and (4) may give different solutions. Let us now take

$$\begin{aligned} c_{j}(x)=3-4 x_{j} \quad \text{ for } \ j=1,2, \end{aligned}$$
(16)

instead of (14). Then \(x^{*}=(0.5,0.5)^{\top }\) is a solution of problem (4), (12), (16) since \(c(x^{*})=(1,1)^{\top }\) and

$$\begin{aligned} \langle c(x^{*}),x^{*}\rangle =1 \ge y_{1}+ y_{2} \end{aligned}$$

for any \(y \in D\). At the same time, the global optimization problem (3), (12), (16) with the cost function

$$\begin{aligned} u(y)=(3-4y_{1})y_{1}+ (3-4y_{2})y_{2} \end{aligned}$$

has the unique solution \({\bar{x}}=(3/8,3/8)^{\top }\) with \(u(\bar{x})=9/8\) since u is also strongly concave and \(\nabla u(\bar{x})={\mathbf {0}}\). However, \({\bar{x}}\) is not a relative optimal point. In fact,

$$\begin{aligned} \langle c({\bar{x}}),{\bar{x}}\rangle =9/8 < \langle c({\bar{x}}),x^{*}\rangle =3/2. \end{aligned}$$

That is, relative optimality reflects the restrictions on the knowledge about the problem and yields a certain restricted optimality. Next, denote by \({\mathbb {Z}}_{+}\) the set of non-negative integers and re-define the feasible set D as follows:

$$\begin{aligned} D=\left\{ y \in {\mathbb {R}}^{2} \ \bigg |\ y_{1}+ y_{2} \le 1, \ y_{1}\in {\mathbb {Z}}_{+}, y_{2}\in {\mathbb {Z}}_{+} \right\} . \end{aligned}$$

Then the usual problem (10) with any fixed c obviously has an integer solution, but this is not the case for problem (4), (12), (14). In fact, this additional condition makes the feasible set non-convex.
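The computations of this example can be replayed numerically. The sketch below checks the cycling behaviour under the price rule (14) and the claims for the rule (16); the grid resolution and tolerances are chosen ad hoc:

```python
def best_response(c):
    # Optimal vertex of problem (13) for a fixed price vector c.
    return (1.0, 0.0) if c[0] > c[1] else (0.0, 1.0)

prices14 = lambda x: (1.0 - x[0], 1.0 - x[1])     # price rule (14)
dot = lambda u, v: u[0] * v[0] + u[1] * v[1]

x, traj = (1.0, 0.0), []
for _ in range(4):                                # best responses cycle
    x = best_response(prices14(x))
    traj.append(x)
print(traj)                                       # alternates (0,1), (1,0), ...
print([dot(prices14(z), z) for z in traj])        # zero goal value at every step

# Price rule (16): x* = (1/2, 1/2) is relatively optimal, while
# x_bar = (3/8, 3/8) maximizes the global goal u.
c16 = lambda x: (3.0 - 4.0 * x[0], 3.0 - 4.0 * x[1])
pts = [i / 100 for i in range(101)]
Dgrid = [(a, b) for a in pts for b in pts if a + b <= 1.0 + 1e-12]

x_star, x_bar = (0.5, 0.5), (0.375, 0.375)
u = lambda y: dot(c16(y), y)
print(max(dot(c16(x_star), y) for y in Dgrid) <= dot(c16(x_star), x_star) + 1e-9)  # True
print(u(x_bar), dot(c16(x_bar), x_star))          # 1.125 and 1.5, matching the text
```

The finite grid only samples D, so the relative-optimality check is a necessary-condition test, not a proof; it does reproduce the values 9/8 and 3/2 computed above.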

Example 3.2

(Multi-criteria optimization). Let us consider a multi-criteria maximization problem (see e.g. Sawaragi et al. 1985; Miettinen 1998) with respect to m scalar functions \(\phi _{i} : D \rightarrow {\mathbb {R}}\) on a feasible set \(D \subseteq {\mathbb {R}}^{n}\). Usually, solution sets of separate optimization problems

$$\begin{aligned} \max \limits _{y \in D} \rightarrow \phi _{i}(y), \quad i=1,\ldots ,m, \end{aligned}$$
(17)

do not share a common point, which would correspond to the ideal solution, since the goal functions \(\phi _{i}\) reflect different and even contradictory goals. The most popular and simplest way to overcome this drawback consists in associating each goal function \(\phi _{i}\) with some weight \(\lambda _{i}\ge 0\), taking a normalization condition, say,

$$\begin{aligned} \sum \limits _{i=1}^{m } \lambda _{i}=1, \end{aligned}$$

and solving the scalar parametric optimization problem

$$\begin{aligned} \max \limits _{x \in D} \rightarrow \sum \limits _{i=1}^{m } \lambda _{i} \phi _{i}(x); \end{aligned}$$
(18)

see e.g. Sawaragi et al. (1985), Miettinen (1998) and the references therein. It is well known that each solution of (18) is a weak Pareto maximal point (or a Pareto maximal point if all the weights are positive). Therefore, it can serve as a solution of system (17). The main difficulty of this approach lies in assigning the right weights, due to the diversity of comparative goal evaluations. Note that different weights may give rather different solutions of problem (18). Let us consider the situation where the weights depend on the current state, i.e. they are subjective and can be defined as functions of the state. In other words, we are able to define

$$\begin{aligned} \sum \limits _{i=1}^{m } \lambda _{i}(x)=1, \quad \lambda _{i}(x) \ge 0, \ i=1,\ldots ,m, \end{aligned}$$

at each point \(x \in D\). Then we define the goal bi-function

$$\begin{aligned} f (x,y)=\langle \lambda (x),\phi (y)\rangle = \sum \limits _{i=1}^{m } \lambda _{i}(x) \phi _{i}(y); \end{aligned}$$
(19)

and consider the relative optimization problem (4), whose solutions can be treated as solutions of system (17) in the sense that the point so derived is optimal with respect to the weights given at this point. Existence of solutions of problem (4) and (19) can be deduced from Proposition 3.1. For instance, suppose that the functions \(\lambda _{i}\) and \(\phi _{i}\) are continuous, the functions \(\phi _{i}\) are concave, the set D is nonempty, convex and closed. Then if either D is bounded or a proper coercivity condition holds, problem (4) and (19) has a solution.
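A naive best-response iteration for problem (4), (19) can be sketched as follows. The criteria, the weight functions, and the grid below are hypothetical, and convergence of such an iteration is not claimed in general; here it happens to settle at a state-consistent point:

```python
# Two concave criteria on D = [0, 1] and state-dependent weights
# lam(x) = (1 - x, x), which satisfy lam_1(x) + lam_2(x) = 1, lam_i(x) >= 0.
phi = (lambda y: -(y - 0.2) ** 2, lambda y: -(y - 0.8) ** 2)
lam = lambda x: (1.0 - x, x)

# Goal bi-function (19): f(x, y) = sum_i lam_i(x) * phi_i(y).
f = lambda x, y: sum(l * p(y) for l, p in zip(lam(x), phi))

grid = [i / 1000 for i in range(1001)]   # discretized D
x = 0.0
for _ in range(100):                     # naive fixed-point iteration:
    x = max(grid, key=lambda y: f(x, y)) # maximize f(x, .) and move to the argmax
print(round(x, 3))                       # settles near 0.5
```

At a fixed point of this iteration, the current state maximizes the goal weighted by its own weights, i.e. it solves (4), (19) on the grid.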

4 Regularized penalty method for quasi-equilibrium problems

In order to apply Proposition 3.1 to the more general problem (5)–(6), we will utilize a suitable penalty method. The penalty method is one of the basic tools for the solution of various equilibrium-type problems with complex nonlinear constraints; see e.g. Fedorov (1979), Gwinner (1983), Harker and Choi (1991), Konnov (2014, 2015). In this way, we both obtain an existence result and present a solution method for this problem.

Let V be a nonempty set in \({\mathbb {R}}^{n}\), \(W : {\mathbb {R}}^{n} \rightarrow \varPi ({\mathbb {R}}^{n})\) a set-valued mapping, and let \(\varPhi : V \times V \rightarrow {\mathbb {R}}\) be an equilibrium bi-function. Then, we can define the general QEP: Find \(x^{*} \in D(x^{*})\) such that

$$\begin{aligned} \varPhi (x^{*},y) \ge 0 \quad \forall y \in D(x^{*}), \end{aligned}$$
(20)

where D(x) is defined in (5). Clearly, problem (5)–(6) coincides with QEP (5) and (20) if we set

$$\begin{aligned} \varPhi (x, y)= f (x,x)-f (x,y); \end{aligned}$$
(21)

cf. (9).

Let us now define a general penalty bi-function \(P : {\mathbb {R}}^{n} \times {\mathbb {R}}^{n} \rightarrow {\mathbb {R}}\) with respect to the values of the mapping W, i.e.

$$\begin{aligned} P(x,y)=0 \ \text{ if } \ y \in W(x) \quad \text{ and } \quad P(x,y)>0 \quad \text{ if } \ y \notin W(x). \end{aligned}$$

For instance, if the set W(x) has the form (7), we can take

$$\begin{aligned} P(x,y) = \sum \limits _{i= 1}^{m} \left( \max \{h_{i}(x,y), 0\}\right) ^{\sigma }, \quad \sigma \ge 1. \end{aligned}$$
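In code, this penalty with \(\sigma = 2\) reads as follows (the constraint below is a hypothetical toy example):

```python
def penalty(x, y, constraints, sigma=2.0):
    # P(x, y) = sum_i max(h_i(x, y), 0)^sigma with sigma >= 1:
    # zero on W(x), positive and growing with the violation outside it.
    return sum(max(h(x, y), 0.0) ** sigma for h in constraints)

h = lambda x, y: y - x                    # toy constraint: W(x) = { y : y <= x }
print(penalty(1.0, 0.5, [h]))             # 0.0: y is feasible for W(1.0)
print(penalty(1.0, 2.0, [h]))             # 1.0: violation 1.0, squared
```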

Then we determine the penalized bi-function

$$\begin{aligned} \varPhi _{\tau }(x, y) = \varPhi (x, y)+\tau [P(x,y)-P(x,x)] \end{aligned}$$

with \(\tau > 0\) on the set V. Hence, we can in principle consider penalized EPs of the form: Find a point \(x(\tau ) \in V\) such that

$$\begin{aligned} \varPhi _{\tau }(x(\tau ),y) \ge 0 \quad \forall y \in V. \end{aligned}$$
(22)

However, it was noticed in Konnov (2014, 2015) that application of the regularized penalty method enables one to obtain convergence under weaker assumptions. For this reason, we take the following coercivity condition.

(C2) There exist a continuous convex function \(\mu : {\mathbb {R}}^{n} \rightarrow {\mathbb {R}}\) that is weakly coercive with respect to the set V and a number \(\gamma \) such that, for any point \(x \in V{\setminus } X(\mu ,\gamma )\), there is a point \(z \in V\) with \( P(x,z) \le P(x,x)\),

$$\begin{aligned} \min \{ \varPhi (x, z), \mu (z)-\mu (x)\} <0 \quad {\textit{and}} \quad \max \{ \varPhi (x, z), \mu (z)-\mu (x)\} \le 0. \end{aligned}$$

Given numbers \(\tau > 0\) and \(\varepsilon > 0\), we will consider the perturbed problem of finding a point \(x(\tau ,\varepsilon ) \in V\) such that

$$\begin{aligned} \varPhi _{\tau }(x(\tau ,\varepsilon ), y)+ \varepsilon [\mu (y)-\mu (x(\tau ,\varepsilon ))] \ge 0 \quad \forall y \in V; \end{aligned}$$
(23)

cf. (22). Our goal is to show that the trajectory \(\{x(\tau ,\varepsilon )\}\) converges to a solution of QEP (5), (20) as \(\tau \rightarrow +\infty \) and \(\varepsilon \rightarrow 0\). For brevity, we set \(x^{k}=x(\tau _{k}, \varepsilon _{k})\). This means that the point \(x^{k}\) is an arbitrary solution of the auxiliary EP (23) with \(\tau =\tau _{k}\) and \(\varepsilon =\varepsilon _{k}\).
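The scheme can be sketched in the simplest special case \(\varPhi (x,y)=g(y)-g(x)\) (a plain minimization problem) with a fixed constraint set, where each perturbed EP (23) reduces to minimizing \(g + \tau P + \varepsilon \mu \). In the sketch below, brute-force grid minimization stands in for an EP solver and all data are hypothetical:

```python
grid = [-2.0 + 0.001 * i for i in range(5001)]   # discretized V = [-2, 3]
g = lambda y: (y - 2.0) ** 2                     # Phi(x, y) = g(y) - g(x)
P = lambda y: max(y - 1.0, 0.0) ** 2             # penalty for the constraint y <= 1
mu = lambda y: y ** 2                            # weakly coercive regularizer

x = None
for k in range(1, 11):
    tau, eps = 2.0 ** k, 0.5 ** k                # tau_k -> +infinity, eps_k -> 0, cf. (25)
    # x(tau_k, eps_k): solution of the perturbed EP (23) in this special case.
    x = min(grid, key=lambda y: g(y) + tau * P(y) + eps * mu(y))
print(round(x, 2))                               # approaches 1.0, the constrained minimizer
```

The unconstrained minimizer of g is y = 2, which violates the constraint; as tau grows and eps vanishes, the iterates x(tau_k, eps_k) approach the constrained minimizer y = 1, illustrating the convergence asserted in Theorem 4.1.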

We now define the basic assumptions for the initial QEP (5), (20). For brevity, we also set

$$\begin{aligned} \varphi (x)= P (x,x). \end{aligned}$$
(24)

(B1) V is a nonempty, convex, and closed set in \({\mathbb {R}}^{n}\), the equilibrium bi-function \(\varPhi : {\mathbb {R}}^{n} \times {\mathbb {R}}^{n} \rightarrow {\mathbb {R}}\) is upper semi-continuous on \(V\times V\), and \(\varPhi (x, \cdot )\) is convex for each \(x \in V\).

(B2) The function \(P (\cdot , y)\) is upper semi-continuous for each \(y \in V\), the function \(\varphi : {\mathbb {R}}^{n} \rightarrow {\mathbb {R}}\) is lower semi-continuous on V, \(P(x, \cdot )\) is convex for each \(x \in V\), and the mapping \(D : {\mathbb {R}}^{n} \rightarrow \varPi ({\mathbb {R}}^{n})\) is lower semi-continuous on V and has nonempty values on the set \(V \bigcap X(\mu ,\gamma )\).

Theorem 4.1

Suppose that assumptions (B1), (B2), and (C2) are fulfilled and that the sequences \(\{\tau _{k}\}\) and \(\{\varepsilon _{k}\}\) satisfy

$$\begin{aligned} \{\tau _{k}\} \nearrow +\infty , \ \{\varepsilon _{k}\} \searrow 0. \end{aligned}$$
(25)

Then:

  1. (i)

    EP (23) has a solution for each pair \(\tau > 0\) and \(\varepsilon > 0\) and all these solutions belong to \(V \bigcap X(\mu ,\gamma )\);

  2. (ii)

    Each sequence \(\{x^{k}\}\) of solutions of EP (23) with \((\tau ,\varepsilon ) =(\tau _{k},\varepsilon _{k})\) has limit points, all these limit points belong to \(V \bigcap X(\mu ,\gamma )\) and are solutions of QEP (5), (20).

Proof

We first prove assertion (i). For brevity, set

$$\begin{aligned} \varPhi _{\tau ,\varepsilon }(x, y) = \varPhi (x, y)+\tau [P(x,y)-P(x,x)] +\varepsilon [\mu (y)-\mu (x)]. \end{aligned}$$

We now show that, for any \(\tau > 0\) and \(\varepsilon > 0\), (C1) is true with \(\eta =\varPhi _{\tau ,\varepsilon }\) and \(K=V\). Take any \(x \in V {\setminus } X(\mu ,\gamma )\), then by (C2) there is \(z \in V\) such that

$$\begin{aligned} \varPhi _{\tau ,\varepsilon }(x, z) = \varPhi (x,z)+\tau [P(x,z)-P(x,x)] +\varepsilon [\mu (z)-\mu (x)]< 0. \end{aligned}$$

Since \(\varPhi _{\tau ,\varepsilon }(\cdot , y)\) is upper semi-continuous for each \(y \in V\) and \(\varPhi _{\tau ,\varepsilon }(x,\cdot )\) is convex for each \(x \in V\), EP (23) has a solution for any \(\tau > 0\) and \(\varepsilon > 0\) due to Proposition 3.1 with \(\eta = \varPhi _{\tau ,\varepsilon }\) and \(K=V\). It also follows that \(x(\tau ,\varepsilon ) \in V \bigcap X(\mu ,\gamma )\). Hence, assertion (i) is true.

By (i), the sequence \(\{x^{k}\}\) exists and is bounded. Therefore, it has limit points. Since \(V \bigcap X(\mu ,\gamma )\) is convex and closed, all these limit points must belong to \(V \bigcap X(\mu ,\gamma )\). Let \(x'\) be an arbitrary limit point of \(\{x^{k}\}\), i.e. \( \{x^{k_{s}} \} \rightarrow x'\). Then, by definition,

$$\begin{aligned} 0 \le P(x^{k_{s}},x^{k_{s}})\le \tau _{k_{s}}^{-1}\varPhi (x^{k_{s}},y)+ P(x^{k_{s}},y) +\varepsilon _{k_{s}}\tau _{k_{s}}^{-1} [\mu (y)-\mu (x^{k_{s}})] \quad \forall y \in V. \end{aligned}$$

Since \(x' \in V \bigcap X(\mu ,\gamma )\), \(D(x')\) is nonempty due to (B2). Take any \(y' \in D(x')\), then there exists a sequence of points \( \{ y^{k_{s}} \} \subset V \), \(\{ y^{k_{s}} \} \rightarrow y'\) such that \( P (x^{k_{s}},y^{k_{s}})= 0\) since the mapping D is lower semi-continuous on V. Setting \(y=y^{k_{s}}\) in the above inequality, we obtain

$$\begin{aligned} 0\le & {} P(x',x') =\varphi (x')\le \liminf _{s \rightarrow \infty } \varphi (x^{k_{s}}) = \liminf _{s \rightarrow \infty } P(x^{k_{s}},x^{k_{s}})\\\le & {} \limsup _{s \rightarrow \infty } \left\{ \tau _{k_{s}}^{-1}\varPhi (x^{k_{s}},y^{k_{s}}) + P (x^{k_{s}},y^{k_{s}})+\varepsilon _{k_{s}}\tau _{k_{s}}^{-1} \left[ \mu (y^{k_{s}})-\mu (x^{k_{s}}) \right] \right\} \\= & {} \limsup _{s \rightarrow \infty } \left\{ \tau _{k_{s}}^{-1}\varPhi (x^{k_{s}},y^{k_{s}}) +\varepsilon _{k_{s}}\tau _{k_{s}}^{-1} \left[ \mu (y^{k_{s}})-\mu (x^{k_{s}}) \right] \right\} \le 0, \end{aligned}$$

because of (B1), (B2), and (25). Hence \(x' \in D(x')\). But \( \{x^{k_{s}} \} \rightarrow x'\) and for each \(y \in V\) we have

$$\begin{aligned} \varPhi (x^{k_{s}}, y) + \tau _{k_{s}} \left[ P(x^{k_{s}},y)-P(x^{k_{s}},x^{k_{s}}) \right] +\varepsilon _{k_{s}}\left[ \mu (y)-\mu (x^{k_{s}}) \right] \ge 0 \end{aligned}$$

due to (23). Take an arbitrary point \(y' \in D(x')\). Since the mapping D is lower semi-continuous on V, there exists a sequence of points \( \{ z^{k_{s}} \} \subset V \), \(\{ z^{k_{s}} \} \rightarrow y'\) such that \( P (x^{k_{s}},z^{k_{s}})= 0\). Setting \(y=z^{k_{s}}\) in the above inequality, we obtain

$$\begin{aligned}&\varPhi (x^{k_{s}}, z^{k_{s}}) - \tau _{k_{s}} P(x^{k_{s}},x^{k_{s}}) +\varepsilon _{k_{s}}\left[ \mu (z^{k_{s}})-\mu (x^{k_{s}}) \right] \\&\quad =\varPhi (x^{k_{s}}, z^{k_{s}}) + \tau _{k_{s}} \left[ P(x^{k_{s}},z^{k_{s}})-P(x^{k_{s}},x^{k_{s}}) \right] +\varepsilon _{k_{s}}\left[ \mu (z^{k_{s}})-\mu (x^{k_{s}}) \right] \ge 0. \end{aligned}$$

It now follows that

$$\begin{aligned} \varPhi (x', y')\ge & {} \limsup _{s \rightarrow +\infty } \varPhi (x^{k_{s}}, z^{k_{s}}) \\\ge & {} \liminf _{s \rightarrow +\infty } \left[ \tau _{k_{s}}P(x^{k_{s}},x^{k_{s}}) \right] + \liminf _{s \rightarrow +\infty } \left\{ -\varepsilon _{k_{s}}\left[ \mu (z^{k_{s}})-\mu (x^{k_{s}}) \right] \right\} \\\ge & {} \liminf _{s \rightarrow +\infty } \left\{ \varepsilon _{k_{s}}\left[ \mu (x^{k_{s}})-\mu (z^{k_{s}}) \right] \right\} = 0. \end{aligned}$$

Therefore \(x'\) solves QEP (5), (20) and assertion (ii) is true. The proof is complete. \(\square \)

Corollary 4.1

Suppose that assumptions (B1), (B2), and (C2) are fulfilled. Then QEP (5), (20) has a solution in \(V \bigcap X(\mu ,\gamma )\).

The assertion follows from part (ii) of Theorem 4.1.

It should be noted that QEP (5), (20) may, in principle, have other solutions under the conditions of Theorem 4.1; moreover, the solution set may be unbounded in general.

Remark 4.1

We note that (B1) and (B2) imply that the mapping \(x \mapsto D(x)\) has convex values. Due to (21), the above results are easily adjusted to problem (5)–(6). In fact, we only have to specialize the assumptions in (B1). For instance, if the bi-function \(f : {\mathbb {R}}^{n} \times {\mathbb {R}}^{n} \rightarrow {\mathbb {R}}\) is lower semi-continuous on \(V\times V\), the function \(\theta : {\mathbb {R}}^{n} \rightarrow {\mathbb {R}}\) defined by \(\theta (x)= f (x,x)\) is upper semi-continuous on V, and \(f(x, \cdot )\) is concave for each \(x \in V\), then (B1) holds true for the bi-function \(\varPhi \) defined in (21). Thus, the conditions of Theorem 4.1 are somewhat different from those in Bensoussan and Lions (1984), Yuan and Tan (1997), Aubin (1998), mainly due to weaker coercivity conditions. The set of these conditions seems rather mild even in comparison with the usual coercivity conditions for EPs. In fact, let us take QEP (5)–(6) with \(V = {\mathbb {R}}^{n}_{+}\) and set

$$\begin{aligned} W(x)=\left\{ y \in {\mathbb {R}}^{n} \ \bigg | \ A y \le b(x) \right\} , \end{aligned}$$

where A is an \(m \times n\) matrix, \(m<n\), \(b : {\mathbb {R}}^{n} \rightarrow {\mathbb {R}}^{m}_{>}\) is a continuous mapping, and

$$\begin{aligned} {\mathbb {R}}_{>}^{m} = \{ v \in {\mathbb {R}}^{m} \ | \ v_{j} > 0 \quad j=1, \ldots , m \}. \end{aligned}$$

It follows that \({\mathbf {0}} \in D(x)\) for any point \(x \in V\). We can determine the penalty bi-function as follows: \(P(x,y)=\Vert [Ay-b(x)]_{+}\Vert ^{2}\). Then all the assumptions in (B2) hold true. For the sake of simplicity we determine the bi-function \(\varPhi (x,y)=\langle F(x),y-x \rangle \), which corresponds to a quasi-variational inequality. If \(F : {\mathbb {R}}^{n} \rightarrow {\mathbb {R}}^{n}\) is an arbitrary coercive continuous mapping, i.e. \(\langle F(x),x \rangle \rightarrow +\infty \) as \(\Vert x\Vert \rightarrow \infty \), then there exists a number \(\gamma >0\) such that \(\langle F(x),x \rangle > 0\) if \(\Vert x\Vert \ge \sqrt{\gamma }\). It follows that all the assumptions in (B1) and (C2) hold true with \(z={\mathbf {0}}\) for any point \( x \in V{\setminus } X(\mu ,\gamma )\) and \(\mu (x)=\Vert x\Vert ^{2}\).
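The penalty bi-function of this remark is easy to code (the matrix A and the mapping b below are hypothetical toy data):

```python
import numpy as np

def qvi_penalty(A, b, x, y):
    # P(x, y) = || [A y - b(x)]_+ ||^2 for the moving set
    # W(x) = { y : A y <= b(x) }.
    r = np.maximum(A @ y - b(x), 0.0)
    return float(r @ r)

A = np.array([[1.0, 1.0]])                       # one constraint, m = 1 < n = 2
b = lambda x: np.array([1.0 + 0.5 * x[0]])       # b(x) > 0, so 0 is in D(x) for all x

x = np.array([0.0, 0.0])
print(qvi_penalty(A, b, x, np.array([0.3, 0.3])))  # 0.0: feasible for W(x)
print(qvi_penalty(A, b, x, np.array([1.0, 1.0])))  # 1.0: violation 1.0, squared
```

Since b(x) is strictly positive, the origin is always feasible, which is exactly the point z = 0 used above to verify (C2).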

There exist a number of iterative methods for solving EP (4); see e.g. Zukhovitskii et al. (1969), Krawczyk and Uryasev (2000), Mastroeni (2003), Konnov and Pinyagina (2003), Chadli et al. (2004), Konnov (2008) and references therein. Many of these methods are based on the reduction of this problem to a variational inequality problem (VI for short). Suppose that \(f (x, \cdot )\) is concave and differentiable for each \(x \in D\) and set

$$\begin{aligned} F(x) = \left. \nabla _{y} f (x,y) \right| _{y=x}; \end{aligned}$$
(26)

see Rosen (1965). Then EP (4) is equivalent to the following VI: Find a point \(x^{*} \in D\) such that

$$\begin{aligned} \langle F(x^{*}),x^{*}-x \rangle \ge 0 \quad \forall x \in D. \end{aligned}$$
(27)

That is, we can replace EP (4) with VI (26)–(27) and solve it with suitable iterative methods; see e.g. Patriksson (1999), Konnov (2001), Facchinei and Pang (2003) and references therein. Due to Theorem 4.1, all these methods can be applied to QEP (5), (20) with proper utilization of the penalty method.
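A minimal sketch of such an iterative method is the classical projection iteration \(x^{k+1} = \pi _{D}(x^{k} + t F(x^{k}))\) applied to VI (26)–(27); a fixed point of this iteration satisfies (27) by the characterization of the Euclidean projection. The box-shaped set D, the concave quadratic utility, and the step size below are hypothetical choices made only to keep the iteration easy to verify.

```python
import numpy as np

# Hypothetical setting: D = [0,1]^2 and F is the gradient (cf. (26)) of the
# strongly concave utility f(y) = <c, y> - ||y||^2 / 2, i.e. F(x) = c - x.
c = np.array([0.5, 1.5])

def F(x):
    return c - x                      # gradient of f at y = x

def proj_box(y, lo=0.0, hi=1.0):
    return np.clip(y, lo, hi)         # Euclidean projection onto D

x = np.zeros(2)
t = 0.5                               # fixed step size
for _ in range(100):
    x = proj_box(x + t * F(x))        # x^{k+1} = proj_D(x^k + t F(x^k))

# Here the solution of (27) is the projection of c onto the box.
assert np.allclose(x, [0.5, 1.0], atol=1e-6)
```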

5 Application to non-cooperative game problems

The relative or subjective approach can be applied to other classes of mathematical models that are used for decision making in complex systems. We now describe its possible application to non-cooperative game problems.

Let us first consider the usual l-person non-cooperative game where the i-th player has a utility function \(\phi _{i} : {\mathbb {R}}^{n} \rightarrow {\mathbb {R}}\) and a strategy set \(D_{i} \subseteq {\mathbb {R}}^{n_{i}}\) with

$$\begin{aligned} D=D_{1} \times \dots \times D_{l} \end{aligned}$$

and \(\sum _{i=1}^{l} n_{i}=n\). A point \(x^{*}= (x^{*}_{1},\ldots ,x^{*}_{l})^{\top }\in D\) is said to be a Nash equilibrium point (Nash 1951) for this game, if

$$\begin{aligned} \phi _{i}(x^{*}_{-i},y_{i}) \le \phi _{i}(x^{*}) \quad \forall y_{i} \in D_{i}, \ i=1,\ldots ,l. \end{aligned}$$
(28)

Here and below, we set \((x_{-i},y_{i})=(x_{1},\ldots ,x_{i-1},y_{i},x_{i+1},\ldots ,x_{l})\) for brevity. In general, the strategy set \(D_{i}\) may be defined by various constraints. Suppose that

$$\begin{aligned} D_{i}=\left\{ x_{i} \in X_{i} \ | \ h_{i}(x_{i}) \le 0 \right\} , \end{aligned}$$

where \(X_{i} \subseteq {\mathbb {R}}^{n_{i}}\) and \(h_{i} : {\mathbb {R}}^{n_{i}} \rightarrow {\mathbb {R}}\) for \(i=1,\ldots ,l\). Within the usual game-theoretic setting it is supposed that each i-th player knows precisely his/her utility function \(\phi _{i}\), constraint function \(h_{i}\), and set \(X_{i}\), but does not know the choice of the other players before his/her own choice.
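Definition (28) can be checked numerically on a small instance. The two-player game below, with \(\phi _{i}(x)=x_{i}(1-x_{1}-x_{2})\) on \(D_{i}=[0,1]\), is a hypothetical example whose symmetric Nash equilibrium is \(x^{*}=(1/3,1/3)\); the code verifies inequality (28) over a grid of unilateral deviations.

```python
import numpy as np

# Hypothetical two-player game: phi_i(x) = x_i * (1 - x_1 - x_2) on [0,1]^2.
def phi(i, x):
    return x[i] * (1.0 - x[0] - x[1])

x_star = np.array([1.0 / 3.0, 1.0 / 3.0])   # symmetric Nash equilibrium

# check (28): no player gains by deviating unilaterally within D_i = [0,1]
grid = np.linspace(0.0, 1.0, 1001)
for i in range(2):
    for y_i in grid:
        x_dev = x_star.copy()
        x_dev[i] = y_i
        assert phi(i, x_dev) <= phi(i, x_star) + 1e-12
```

Each player's best response to the other playing 1/3 maximizes \(y(2/3-y)\), attained at \(y=1/3\), which confirms the equilibrium.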

Nevertheless, if the system becomes rather complicated in the sense that it involves many active elements (players) whose behaviour is not very clear, then we can suppose that each i-th player has only estimates of the functions \(\phi _{i}\) and \(h_{i}\), and that these estimates are associated with the current state. That is, given a state \(x \in {\mathbb {R}}^{n}\), the i-th player has the strategy set

$$\begin{aligned} D_{i}(x)=\left\{ y_{i} \in X_{i} \ | \ h_{i}(x,y_{i}) \le 0 \right\} \end{aligned}$$

and the utility function \(\phi _{i} (x,x_{-i},\cdot )\). Then we can define a relative or subjective equilibrium point \(x^{*} \in D(x^{*})\) such that

$$\begin{aligned} \phi _{i}(x^{*},x^{*}_{-i},y_{i}) \le \phi _{i}(x^{*},x^{*}) \quad \forall y_{i} \in D_{i}(x^{*}), \ i=1,\ldots ,l; \end{aligned}$$
(29)

where

$$\begin{aligned} D(x)=D_{1} (x) \times \dots \times D_{l}(x). \end{aligned}$$
(30)

It is well known that problem (28) is equivalent to (4) where

$$\begin{aligned} f (x,y) = \sum _{i=1}^{l} \phi _{i} (x_{-i},y_{i}); \end{aligned}$$

see Nikaido and Isoda (1955). Similarly, we can reduce (29)–(30) to (6). It suffices to set

$$\begin{aligned} f (x,y) = \sum _{i=1}^{l} \phi _{i}(x,x_{-i},y_{i}). \end{aligned}$$
(31)

If a point \(x^{*} \in D(x^{*})\) solves (29)–(30), then summing the inequalities in (29) gives (6) where f is defined in (31). Conversely, let \(x^{*} \in D(x^{*})\) be a solution of (6), (31). Fix an arbitrary index k and an arbitrary element \(v_{k} \in D_{k}(x^{*})\) and take \(y=(x^{*}_{-k},v_{k})\). From (6) and (31) we now obtain

$$\begin{aligned} \phi _{k}(x^{*},x^{*}_{-k},v_{k}) \le \phi _{k}(x^{*},x^{*}), \end{aligned}$$

hence, \(x^{*}\) is a solution of (29)–(30). Therefore, we can utilize the results of the previous sections for investigating and solving relative game equilibrium problems.
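The Nikaido–Isoda reduction can also be verified numerically. For the hypothetical two-player game \(\phi _{i}(x)=x_{i}(1-x_{1}-x_{2})\) on \([0,1]^{2}\), the bi-function \(f(x,y)=\sum _{i} \phi _{i}(x_{-i},y_{i})\) of the classical reduction of (28) to (4) must satisfy \(f(x^{*},y) \le f(x^{*},x^{*})\) for all \(y \in D\) at the Nash point \(x^{*}=(1/3,1/3)\).

```python
import numpy as np

# Nikaido-Isoda bi-function for the hypothetical game
# phi_i(x) = x_i * (1 - x_1 - x_2):
# f(x, y) = phi_1(y_1, x_2) + phi_2(x_1, y_2)
def ni_function(x, y):
    return y[0] * (1.0 - y[0] - x[1]) + y[1] * (1.0 - x[0] - y[1])

x_star = np.array([1.0 / 3.0, 1.0 / 3.0])   # Nash equilibrium of the game
f_star = ni_function(x_star, x_star)        # equals 2/9

# check the equilibrium inequality f(x*, y) <= f(x*, x*) on a grid over D
grid = np.linspace(0.0, 1.0, 201)
for u in grid:
    for v in grid:
        assert ni_function(x_star, np.array([u, v])) <= f_star + 1e-12
```

Since f separates over the components of y, its maximum over D at \(x=x^{*}\) is attained exactly at \(y=x^{*}\), in agreement with the equivalence argument above.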

6 Conclusions

We considered relative optimization problem formulations where the goal function and feasible set depend on the current state of the system under consideration. Solutions of these problems can be used to decide whether the current state is suitable or should be changed. We proposed equilibrium and quasi-equilibrium formulations of the corresponding problems, which enabled us to use the corresponding theory and methods for their investigation. In the general case we obtained a quasi-equilibrium problem, for which we proposed a regularized penalty method. This approach enabled us to establish existence results under weak coercivity conditions and to replace the quasi-equilibrium problem with a sequence of usual equilibrium problems. We described several examples of applications and showed that the subjective approach can be extended to non-cooperative game problems.