1 Introduction

Let C be a nonempty closed convex subset of the Euclidean space \(\mathcal R^{n}\) with inner product 〈⋅,⋅〉 and norm ∥⋅∥. The Ky Fan inequalities, KF(f, C) for short, are formulated as follows:

$$\text{Find~} x^{*}\in C \text{~such that~} f(x^{*}, x)\geq 0~ \forall x\in C, $$

where f is a bifunction from C × C to \(\mathcal R\) such that f(x, x) = 0 for all x ∈ C, and for each x ∈ C, the function f(x, y) is convex and subdifferentiable with respect to the second argument y on C. The solution set of Problem KF(f, C) is denoted by Sol(f, C). An important example of Ky Fan inequalities is the variational inequalities, denoted by VI(F, C). It corresponds to f(x, y) = 〈F(x), y − x〉 for every x, y ∈ C, where \(F: C\to \mathcal R^{n}\) is a continuous mapping, and can be expressed as follows:

$$\text{Find~} \bar x\in C \text{~such that~} \langle F(\bar x), y-\bar x\rangle\geq 0~ \forall y\in C. $$

In the present work, we are concerned with the variational inequalities on the solution set of Ky Fan inequalities, VIKF(F, f, C) for short, which consist in finding a point \(x^{*}\) such that

$$x^{*}\in \text{Sol}(f, C) \text{~and~} \langle F(x^{*}), x-x^{*}\rangle \geq 0~ \forall x\in \text{Sol}(f, C), $$

where the bifunction \(f: C\times C\to \mathcal R\) and \(F: C\to \mathcal R^{n}.\) The variational inequalities on the solution set of Ky Fan inequalities form a special class of quasivariational inequalities and bilevel Ky Fan inequalities (see [1, 7, 9, 15, 16, 23]). In turn, this class of problems includes, as special cases, lexicographic variational inequalities, Ky Fan inequalities, some classes of mathematical programs with equilibrium constraints (see [4, 8, 13, 18, 27]), bilevel minimization problems (see [2, 25]), variational inequalities (see [14]), minimum-norm problems over the solution set of variational inequalities (see [6, 30]), and bilevel convex programming models (see [26, 29]). These problems have recently attracted many researchers' interest, due to their extensive applications in a variety of areas such as image recovery, signal processing, and network resource allocation (see, for instance, [3, 9, 12, 19, 27]). The merit of this formulation is that it unifies all these particular problems in a convenient way.

A particularly interesting case occurs when F(x) = x for all x ∈ C; the problem then reduces to the minimum-norm problem over the solution set of the following Ky Fan inequalities:

$$\text{Find~} x^{*}\in C \text{~such that~} x^{*}=\text{Pr}_{\text{Sol}(f, C)}(0), $$

where \(\text{Pr}_{\text{Sol}(f, C)}(0)\) is the projection of 0 onto Sol(f, C); indeed, with F(x) = x the inequality \(\langle x^{*}, x-x^{*}\rangle\geq 0\) for all \(x\in \text{Sol}(f, C)\) is exactly the variational characterization of the point of Sol(f, C) closest to the origin. To solve Problem VIKF(F, f, C), that is, to find \(x^{*}\in\text{Sol}(f, C)\) such that

$$\langle F(x^{*}), x-x^{*}\rangle\geq 0~ \forall x\in \text{Sol}(f, C), $$

where F is Lipschitz continuous and strongly monotone from \(\mathcal R^{n}\) to \(\mathcal R^{n}\), and

$$f(x, y)= \langle G (x), y-x\rangle+\varphi (y)-\varphi (x), $$

where \(G: \mathcal R^{n}\to \mathcal R^{n}\) is monotone and \(\varphi : \mathcal R^{n}\to (-\infty , +\infty ]\) is lower semicontinuous and convex, Maingé [21] proposed the projected subgradient method and showed the convergence of the iterative sequences under appropriate conditions imposed on the regularization parameters.

Anh et al. [7] introduced an extragradient algorithm for solving VIKF(F, f, C), where f(x, y) = 〈G(x), y − x〉 for every x, y ∈ C. The algorithm consists of roughly two loops. At each iteration k of the outer loop, they apply the extragradient method to the lower variational inequality problem; then, starting from the iterate obtained in the outer loop, they compute an \(\epsilon_{k}\)-solution of Problem VI(G, C). The convergence of the algorithm crucially depends on the starting point \(x^{0}\) and the parameters chosen in advance. Under the assumptions that F is strongly monotone and Lipschitz continuous, G is pseudomonotone and Lipschitz continuous on C, and the parameter sequences are chosen appropriately, they showed that the two iterative sequences {x^{k}} and {z^{k}} converge to the same point \(x^{*}\), which is a solution of the problem. However, at each iteration of the outer loop, the scheme requires computing an approximate solution of a variational inequality problem.

There exist some other solution methods for lexicographic variational inequalities when the cost operator has some monotonicity property (see [13, 18, 23, 26]). All of these methods require solving auxiliary variational inequalities. In order to avoid this requirement, we combine the contractivity of the mapping T := I − μF in [16] for solving the variational inequalities VI(F, C), under suitable conditions on F and μ, with the extragradient method proposed by Korpelevich in [20]. Then, the strong convergence of the proposed sequences will be considered in a real Hilbert space. Let Fix(T) be the fixed point set of a demicontractive mapping T : C → C. For solving VI(F, Sol(f, C) ∩ Fix(T)) in a real Hilbert space, by using the hybrid steepest descent method for fixed point problems introduced by Yamada and Ogura in [28], Maingé and Moudafi in [22] proposed the following iterative process:

$$\left\{\begin{array}{llllllll} x^{0}\in C,\\ z^{k} \text{~such that~} f(z^{k}, y)+\dfrac 1{r_{k}}\langle y-z^{k}, z^{k}-x^{k}\rangle\geq 0~ \forall y\in C,\\ x^{k+1}=[(1-\lambda)I+\lambda T](t^{k})\text{~with~} t^{k}=z^{k}-\alpha_{k}F(z^{k}), \end{array}\right. $$

where I stands for the identity mapping on \(\mathcal H\), \(\lambda \in (0, 1)\), \(\{\alpha _{k}\}\subset [0, 1)\), and \(\{r_{k}\}\subset (0, \infty )\). Under certain conditions on the parameters, the sequence {x^{k}} converges strongly to a solution of Problem VI(F, Sol(f, C) ∩ Fix(T)). By replacing the computation of z^{k} in the Maingé and Moudafi algorithm by an extragradient iteration, Vuong et al. [24] recently proposed extragradient-viscosity methods for solving Problem VI(F, Sol(f, C) ∩ Fix(T)). The strong convergence of the iterates generated by these algorithms is obtained by combining a viscosity approximation method with an extragradient method, where the basic iteration comes directly from the extragradient method, under Lipschitz-type continuity and pseudomonotonicity assumptions on the equilibrium bifunction. In this paper, we are interested in finding a solution of the variational inequalities on the solution set of Ky Fan inequalities VIKF(F, f, C), where the operators F and f satisfy the following usual conditions:

(A1) For each x ∈ C, f(x, ⋅) is lower semicontinuous and convex on C. If {x^{k}} ⊂ C is bounded and \(\epsilon_{k}\searrow 0\) as \(k\to \infty \), then the sequence {w^{k}} with \(w^{k}\in \partial^{\epsilon_{k}}_{2}f(x^{k}, x^{k})\) is bounded, where \(\partial^{\epsilon_{k}}_{2}f(x^{k}, x^{k})\) stands for the \(\epsilon_{k}\)-subdifferential of the convex function f(x^{k}, x) with respect to the second argument at x^{k}:

$$\begin{array}{@{}rcl@{}} \partial^{\epsilon_{k}}_{2}f(x^{k}, x^{k})&=&\left\{w\in \mathcal R^{n}\,:\, f(x^{k}, x)-f(x^{k},x^{k})\geq \langle w, x-x^{k}\rangle-\epsilon_{k}~ \forall x\in C\right\}\\ &=&\left\{w\in \mathcal R^{n}\,:\, f(x^{k}, x)+\epsilon_{k}\geq \langle w, x-x^{k}\rangle~ \forall x\in C\right\}; \end{array} $$

(A2) f is pseudomonotone on C with respect to every solution \(x^{*}\) of Problem VIKF(F, f, C) and satisfies the following condition, called the strict paramonotonicity property:

$$y\in C, f(y, x^{*})=f(x^{*}, y)=0\Rightarrow y\in \text{Sol}(f, C); $$
(A3) For each x ∈ C, f(⋅, x) is upper semicontinuous on C;

(A4) The solution set Sol(f, C) of Problem KF(f, C) is nonempty;

(A5) F is L-Lipschitz continuous and β-strongly monotone.

It is well known that if the cost mapping F of the variational inequalities VI(F, C) is continuous and strongly monotone on the nonempty closed convex subset \(C\subset \mathcal R^{n}\), then Problem VI(F, C) has a unique solution (see [12, 14]). When f is pseudomonotone and Sol(f, C) ≠ ∅, the set Sol(f, C) is convex (see [10]). Thus, under Conditions (A2), (A4), and (A5), Problem VIKF(F, f, C) has a unique solution. The purpose of this paper is to propose an algorithm for directly solving the variational inequalities on the solution set of Ky Fan inequalities by using the projected subgradient method and fixed point techniques.

The rest of this paper is divided into three sections. In Section 2, we recall some properties for monotonicity, the metric projection onto a closed convex set, and introduce in detail a new algorithm for solving Problem V I K F(F, f, C). The third section is devoted to the convergence analysis for the algorithm. Finally, in the last section, we consider a numerical example to illustrate the convergence of the algorithm.

2 Preliminaries

Let C be a nonempty closed convex subset of the Euclidean space \(\mathcal R^{n}\). For each \(x\in \mathcal R^{n}\), there exists a unique point in C, denoted by \(\text{Pr}_{C}(x)\), satisfying

$$\|x-\text{Pr}_{C}(x)\|\leq \|x-y\|~ \forall y\in C. $$

Then, \(\text{Pr}_{C}\) is called the metric projection of \(\mathcal R^{n}\) onto C. It is well known that \(\text{Pr}_{C}\) satisfies the following properties (a quick numerical check is sketched after the list):

(a) \(\|\text {Pr}_{C}(x)-\text {Pr}_{C}(y)\|\leq \|x-y\|~ \forall x,y\in \mathcal R^{n};\)

(b) \(\|\text {Pr}_{C}(x)-\text {Pr}_{C}(y)\|^{2}\leq \langle \text {Pr}_{C}(x)-\text {Pr}_{C}(y), x-y\rangle ~ \forall x,y\in \mathcal R^{n};\)

(c) \(\langle x-\text {Pr}_{C}(x), y-\text {Pr}_{C}(x)\rangle \leq 0~ \forall y\in C, x\in \mathcal R^{n};\)

(d) \(\|\text {Pr}_{C}(x)-y\|^{2}\leq \|x-y\|^{2}-\|\text {Pr}_{C}(x)-x\|^{2}~ \forall y\in C, x\in \mathcal R^{n};\)

(e) \(\|\text {Pr}_{C}(x)-\text {Pr}_{C}(y)\|^{2}\leq \|x-y\|^{2}-\|\text {Pr}_{C}(x)-x+y-\text {Pr}_{C}(y)\|^{2}~ \forall x,y\in \mathcal R^{n}.\)
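For concreteness, here is a quick numerical sanity check of properties (a)–(c) in Python. It uses the closed unit ball as C, for which the projection has the explicit formula below; the helper name proj_ball, the random seed, and the tolerances are ours, purely for illustration.

```python
import numpy as np

def proj_ball(x, center=np.zeros(3), radius=1.0):
    """Closed-form metric projection onto the closed ball B(center, radius)."""
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + (radius / dist) * d

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = 3 * rng.normal(size=3), 3 * rng.normal(size=3)
    px, py = proj_ball(x), proj_ball(y)
    # (a) nonexpansiveness of Pr_C
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
    # (b) firm nonexpansiveness of Pr_C
    assert np.linalg.norm(px - py) ** 2 <= (px - py) @ (x - y) + 1e-12
    # (c) variational characterization, tested at a random point z in C
    z = proj_ball(3 * rng.normal(size=3))
    assert (x - px) @ (z - px) <= 1e-12
```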

Now, we list some well-known definitions which will be used in our analysis.

Definition 2.1

Let C be a nonempty subset of \(\mathcal R^{n}\). The operator \(\varphi : C \to \mathcal R^{n}\) is said to be

(i):

γ-strongly monotone on C if for each x, y ∈ C,

$$\langle \varphi(x)-\varphi(y), x-y\rangle\geq \gamma \|x-y\|^{2};$$
(ii):

pseudomonotone on C if for each x, y ∈ C,

$$\langle \varphi(y), x-y\rangle\geq 0 \Rightarrow \langle \varphi(x), x-y\rangle\geq 0;$$
(iii):

Lipschitz continuous with constant L > 0 (L-Lipschitz continuous for short) on C if for each x, y ∈ C,

$$\|\varphi(x)-\varphi(y)\|\leq L\|x-y\|.$$

If φ : C → C and L = 1, then φ is called nonexpansive on C.

The bifunction \(f: C\times C \to \mathcal R\) is said to be

(iv):

pseudomonotone on C × C if for each x, y ∈ C,

$$f(x, y)\geq 0\Rightarrow f(y, x)\leq 0.$$

In the case f(x, y) ≡ 0, Problem VIKF(F, f, C) is the variational inequality VI(F, C), and Algorithm 1 becomes the basic projection algorithm (see Algorithm 12.1.1 in [14]).

Algorithm 1 (the projected subgradient algorithm for Problem VIKF(F, f, C); its parameter conditions are referred to as (2.1) below) is displayed as a figure in the original.
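Since the figure is not reproduced here, the following Python sketch reconstructs the iteration from the quantities manipulated in the convergence analysis of Section 3: \(w^{k}\in \partial^{\epsilon_{k}}_{2}f(x^{k}, x^{k})\), step sizes with \(\alpha_{k}\gamma_{k}=\beta_{k}\) and \(\|w^{k}\|\leq\gamma_{k}\), \(y^{k}=\text{Pr}_{C}(x^{k}-\alpha_{k}w^{k})\), and \(x^{k+1}=\text{Pr}_{C}[y^{k}-\mu\xi_{k}F(y^{k})]\) as in Claim 1. The rule \(\gamma_{k}=\max\{\lambda_{k}, \|w^{k}\|\}\) and the stopping test are our assumptions, not the authors' verbatim statement.

```python
import numpy as np

def vikf_projected_subgradient(x0, proj_C, F, eps_subgrad, mu,
                               beta, xi, eps, lam, tol=1e-3, max_iter=10_000):
    """Hedged sketch of Algorithm 1 for Problem VIKF(F, f, C).

    proj_C(z)          -- the metric projection Pr_C(z)
    F(x)               -- cost mapping of the upper variational inequality
    eps_subgrad(x, e)  -- returns some w in the e-subdifferential of f(x, .) at x
    mu                 -- fixed scale, mu in (0, 2*beta_F/L**2) as in Lemma 2.2
    beta, xi, eps, lam -- parameter sequences k -> beta_k, xi_k, eps_k, lam_k,
                          assumed to satisfy the conditions (2.1)
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        # Lower-level step: eps_k-subgradient projection for KF(f, C).
        w = eps_subgrad(x, eps(k))
        gamma = max(lam(k), np.linalg.norm(w))  # gives alpha_k * gamma_k = beta_k
        y = proj_C(x - (beta(k) / gamma) * w)
        # Upper-level step: damped projected step for VI(F, Sol(f, C)).
        x_next = proj_C(y - mu * xi(k) * F(y))
        if max(np.linalg.norm(y - x), np.linalg.norm(x_next - x)) <= tol:
            return x_next, k
        x = x_next
    return x, max_iter
```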

To investigate the convergence of this algorithm, we recall the following technical lemmas which will be used in the sequel.

Lemma 2.2 (See [16])

Let \(A: \mathcal R^{n}\to \mathcal R^{n}\) be β-strongly monotone and L-Lipschitz continuous, and let λ ∈ (0, 1] and \(\mu \in (0, \frac {2\beta }{L^{2}})\). Then, the mapping T(x) := x − λμA(x), \(x\in \mathcal R^{n}\), satisfies the inequality

$$\|T(x)-T(y)\|\leq (1-\lambda\tau)\|x-y\|~ \forall x,y\in \mathcal R^{n}, $$

where \(\tau =1-\sqrt {1-\mu (2\beta -\mu L^{2})}\in (0, 1]\).
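As a quick illustration of Lemma 2.2, the following Python snippet checks the contraction bound numerically for an affine mapping A(x) = Mx + q with M positive definite; the construction of M mirrors Example 4.1 below, while the seed and the choice μ = β/L² are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A0 = rng.uniform(-3, 3, (n, n))
B0 = rng.uniform(-3, 3, (n, n))
M = A0 @ A0.T + (B0 - B0.T) + np.diag(rng.uniform(0, 1, n))  # positive definite
q = rng.uniform(-3, 3, n)
A = lambda x: M @ x + q

beta = np.linalg.eigvalsh((M + M.T) / 2).min()  # strong monotonicity modulus
L = np.linalg.norm(M, 2)                        # Lipschitz constant
mu = beta / L**2                                # safely inside (0, 2*beta/L**2)
lam = 0.7
tau = 1 - np.sqrt(1 - mu * (2 * beta - mu * L**2))
T = lambda x: x - lam * mu * A(x)

for _ in range(1000):
    x, y = rng.normal(size=n), rng.normal(size=n)
    assert (np.linalg.norm(T(x) - T(y))
            <= (1 - lam * tau) * np.linalg.norm(x - y) + 1e-10)
```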

Lemma 2.3 (See [30])

Let {a_{k}} and {δ_{k}} be sequences of nonnegative real numbers such that

$$a_{k+1} \leq a_{k}+\delta_{k}~ \forall k\geq 0,$$

where {δ_{k}} satisfies \({\sum }_{k=0}^{\infty }\delta _{k}<\infty \). Then, the limit \(\lim _{k\to \infty }a_{k}\) exists.

3 Convergence Results

In this section, we will state and prove the convergence of the sequences in Algorithm 1.

Theorem 3.1

Let C be a nonempty closed convex subset of \(\mathcal R^{n}\). Let the mapping \(F: C\to \mathcal R^{n}\) and the bifunction \(f: C\times C\to \mathcal R\) satisfy Assumptions (A1)–(A5). Then, the sequences {x^{k}} and {y^{k}} generated by Algorithm 1 converge to the same point, which is the unique solution of Problem VIKF(F, f, C).

Suppose that \(x^{*}\) is the unique solution of Problem VIKF(F, f, C). The proof of the theorem is divided into the following claims.

Claim 1

$$ \|x^{k+1}-x^{*}\|^{2}\leq \bar \xi_{k} \|x^{k}-x^{*}\|^{2}+\frac{2\beta_{k}}{\lambda}f(x^{k}, x^{*})+S_{k}, $$
(3.1)

where \(\tau =1-\sqrt {1-\mu (2\beta -\mu L^{2})}, \bar \xi _{k}:=1-\tau \xi _{k}, S_{k}=\frac {2\beta _{k}\epsilon _{k}\bar {\xi _{k}}}{\lambda }+{\beta _{k}^{2}}\bar {\xi _{k}}+\frac {\xi _{k}\tau \mu ^{2}}{\tau ^{2}}\|F(x^{*})\|^{2}\). Moreover, the limit \(c:=\lim _{k\to \infty }\|x^{k}-x^{*}\|\) exists.

Proof

For each x ∈ C, set \(g_{k}(x) := x-\xi_{k}\mu F(x)\) for all k = 0, 1, ⋯. By Lemma 2.2, we have

$$\|g_{k}(x)-g_{k}(y)\|\leq (1-\xi_{k}\tau)\|x-y\|\enskip \forall x,y\in C. $$

By using the nonexpansiveness of the projection and the triangle inequality, we get that

$$\begin{array}{@{}rcl@{}} \|x^{k+1}-x^{*}\|&=& \|\text{Pr}_{C}[y^{k}-\mu\xi_{k} F(y^{k})]-x^{*}\|\\ &=& \|\text{Pr}_{C}[y^{k}-\mu\xi_{k} F(y^{k})]-\text{Pr}_{C}(x^{*})\|\\ &\leq& \|g_{k}(y^{k})-g_{k}(x^{*})-\mu\xi_{k} F(x^{*})\|\\ &\leq& \|g_{k}(y^{k})-g_{k}(x^{*})\|+\mu\xi_{k} \|F(x^{*})\|\\ &\leq& \bar\xi_{k}\|y^{k}-x^{*}\|+\mu\xi_{k} \|F(x^{*})\|. \end{array} $$

This implies that

$$\begin{array}{@{}rcl@{}} \|x^{k+1}-x^{*}\|^{2}&\leq& [\bar\xi_{k}\|y^{k}-x^{*}\|+\mu\xi_{k} \|F(x^{*})\|]^{2}\\ &=& \left[(1-\xi_{k}\tau)\|y^{k}-x^{*}\|+\xi_{k}\tau\left( \frac{\mu}{\tau} \|F(x^{*})\|\right)\right]^{2}\\ &\leq& (1-\xi_{k}\tau)\|y^{k}-x^{*}\|^{2}+\frac{\xi_{k}\tau\mu^{2}}{\tau^{2}} \|F(x^{*})\|^{2}. \end{array} $$
(3.2)

Since \(y^{k} =\text{Pr}_{C}(x^{k}-\alpha_{k} w^{k})\) and \(x^{*}\in \text{Sol}(F, f, C) \subseteq C\), we have

$$\begin{array}{@{}rcl@{}} \|y^{k}-x^{*}\|^{2}&=& \|\text{Pr}_{C}(x^{k}-\alpha_{k} w^{k})-\text{Pr}_{C}(x^{*})\|^{2}\\ &\leq& \|x^{k}-\alpha_{k} w^{k}-x^{*}\|^{2}\\ &=& \|x^{k}-x^{*}\|^{2}-2\alpha_{k}\langle w^{k}, x^{k}-x^{*}\rangle+{\alpha_{k}^{2}}\|w^{k}\|^{2}\\ &\leq& \|x^{k}-x^{*}\|^{2}+2\alpha_{k}(f(x^{k}, x^{*})-f(x^{k}, x^{k})+\epsilon_{k})+{\alpha_{k}^{2}}{\gamma_{k}^{2}}\\ &=& \|x^{k}-x^{*}\|^{2}+2\alpha_{k} f(x^{k}, x^{*})+2\alpha_{k}\epsilon_{k}+{\beta_{k}^{2}}\\ &\leq&\|x^{k}-x^{*}\|^{2}+2\frac{\beta_{k}}{\lambda} f(x^{k}, x^{*})+2\frac{\beta_{k}\epsilon_{k}}{\lambda}+{\beta_{k}^{2}}. \end{array} $$

Combining this and (3.2), we obtain (3.1).

On the other hand, since \(x^{*}\in \text{Sol}(F, f, C) \subseteq \text{Sol}(f, C)\), i.e., \(f(x^{*}, x)\geq 0\) for all x ∈ C, and \(x^{k}\in C\), the pseudomonotonicity of f with respect to \(x^{*}\) gives \(f(x^{k}, x^{*})\leq 0\). Then, using \(\bar \xi _{k}\in (0, 1]\) and (3.1), we get

$$\begin{array}{@{}rcl@{}} \|x^{k+1}-x^{*}\|^{2}&\leq &\bar \xi_{k} \|x^{k}-x^{*}\|^{2}+\frac{2\beta_{k}}{\lambda}f(x^{k}, x^{*})+S_{k}\\ &\leq & \|x^{k}-x^{*}\|^{2}+S_{k}. \end{array} $$

Then, we have

$$a_{k+1}\leq a_{k}+\delta_{k}\enskip \forall k\geq 0, $$

where \(a_{k} := \|x^{k}-x^{*}\|^{2}\) and \(\delta_{k} := S_{k}\). By the assumptions (2.1), we have

$$\sum\limits_{k=0}^{\infty}{\beta_{k}^{2}}<\infty, \sum\limits_{k=0}^{\infty}\beta_{k}\epsilon_{k}<\infty, \sum\limits_{k=0}^{\infty}\xi_{k}<\infty, $$

and hence \({\sum }_{k=0}^{\infty }S_{k}<\infty \). Using Lemma 2.3, the limit \(\lim _{k\to \infty }\|x^{k}-x^{*}\|^{2}\) exists, and we denote \(c:=\lim _{k\to \infty }\|x^{k}-x^{*}\|\). □

Claim 2

\(\limsup \limits _{k\to \infty }f(x^{k}, x^{*})=0\) and \(\lim \limits _{k\to \infty }\|y^{k}-x^{k}\|=0\).

Proof

Since \(-f(x^{k}, x^{*})\geq 0\) and \(\bar \xi _{k}\in (0, 1]\) for every k, Claim 1 yields

$$0\leq \frac{2\beta_{k}}{\lambda} [-f(x^{k}, x^{*})]\leq \|x^{k}-x^{*}\|^{2}-\|x^{k+1}-x^{*}\|^{2}+S_{k}.$$

Then, we have

$$\frac{2}{\lambda} \sum\limits_{k=0}^{\infty}\beta_{k}[-f(x^{k}, x^{*})]\leq \|x^{0}-x^{*}\|^{2}+\sum\limits_{k=0}^{\infty}S_{k}<\infty,$$

and hence,

$$\sum\limits_{k=0}^{\infty}\beta_{k}[-f(x^{k}, x^{*})]<\infty. $$

Then, by \({\sum }_{k=0}^{\infty }\beta _{k}=\infty \) and − f(x k,x ) ≥ 0, we can deduce that \(\limsup \limits _{k\to \infty }f(x^{k}, x^{*})=0\) as desired.

On the other hand, by the definition of \(y^{k}\), \(x^{k}\in C\), and the properties of \(\text{Pr}_{C}\), we have

$$\begin{array}{@{}rcl@{}} \|y^{k}-x^{k}\|&=&\|\text{Pr}_{C}(x^{k}-\alpha_{k} w^{k})-\text{Pr}_{C}(x^{k})\|\\ &\leq& \|x^{k}-\alpha_{k} w^{k}-x^{k}\|\\ &=& \alpha_{k}\|w^{k}\|\\ &\leq& \alpha_{k}\gamma_{k}\\ &=&\beta_{k}\\ &\to&0 \text{~as~} k\to\infty. \end{array} $$

So, \(\lim _{k\to \infty }\|y^{k}-x^{k}\|=0\). □

Claim 3

Suppose that \(\{x^{k_{j}}\}\) is a subsequence of \(\{x^{k}\}\) such that

$$ \limsup\limits_{k\to\infty}f(x^{k}, x^{*})=\lim\limits_{j\to\infty} f(x^{k_{j}}, x^{*}), $$
(3.3)

and \(\bar x\) is a limit point of \(\{x^{k_{j}}\}\). Then, \(\bar x\in \text {Sol}(f, C)\).

Proof

For simplicity of notation and without loss of generality, we may assume that \(x^{k_{j}}\) converges to \(\bar x\) as \(j\to \infty \). Since f(⋅, x^{*}) is upper semicontinuous, by Claim 2 and (3.3) we have

$$\begin{array}{@{}rcl@{}} f(\bar x, x^{*})&\geq &\limsup\limits_{j\to\infty}f(x^{k_{j}}, x^{*})\\ &=&\lim\limits_{j\to\infty}f(x^{k_{j}}, x^{*})\\ &=&\limsup\limits_{k\to\infty}f(x^{k}, x^{*})\\ &=& 0. \end{array} $$

On the other hand, since f is pseudomonotone with respect to \(x^{*}\) and \(f(x^{*}, \bar x)\geq 0\), we have \(f(\bar x, x^{*})\leq 0\). Thus, \(f(\bar x, x^{*})=0\). Then, by Assumption (A2), we can deduce that \(\bar x\) is a solution of KF(f, C) as well. □

Claim 4

The sequences {x^{k}} and {y^{k}} converge to the unique solution \(x^{*}\in \text{Sol}(F, f, C)\).

Proof

By Claims 2 and 3, if the subsequence \(\{x^{k_{j}}\}\) converges to \(\bar x\), then \(\bar x\in \text {Sol}(f, C)\) and also \(y^{k_{j}}\to \bar x\). In a similar way as in Claim 1, we also have

$$\begin{array}{@{}rcl@{}} \|x^{k+1}-x^{*}\|^{2}&\leq& \bar\xi_{k} \|y^{k}-x^{*}\|^{2}\\ &\leq& \|y^{k}-x^{*}\|^{2}\\ &\leq& \|x^{k}-x^{*}\|^{2}+2\frac{\beta_{k}\epsilon_{k}}{\lambda}+{\beta_{k}^{2}}. \end{array} $$
(3.4)

Combining this and \(\lim _{k\to \infty }\|x^{k}-x^{*}\|^{2}=c^{2}\), we obtain

$$\lim\limits_{k\to\infty}\|y^{k}-x^{*}\|=c.$$

From \(y^{k_{j}}\to \bar x\in \text {Sol}(f, C)\) as \(j\to \infty \), it follows that

$$\liminf\limits_{k\to\infty}\langle y^{k}-x^{*}, F(x^{*})\rangle=\lim\limits_{j\to\infty}\langle y^{k_{j}}-x^{*}, F(x^{*})\rangle=\langle \bar x-x^{*}, F(x^{*})\rangle\geq 0. $$

Since F is β-strongly monotone on C, we get

$$\begin{array}{@{}rcl@{}} \liminf\limits_{k\to\infty}\langle y^{k}-x^{*}, F(y^{k})\rangle&=&\liminf\limits_{k\to\infty}[\langle y^{k}-x^{*}, F(y^{k})-F(x^{*})\rangle+\langle y^{k}-x^{*}, F(x^{*})\rangle]\\ &\geq &\liminf\limits_{k\to\infty}[\beta\|y^{k}-x^{*}\|^{2}+\langle y^{k}-x^{*}, F(x^{*})\rangle]\\ &=&\beta c^{2}+\liminf\limits_{k\to\infty}\langle y^{k}-x^{*}, F(x^{*})\rangle\\ &\geq&\beta c^{2}. \end{array} $$
(3.5)

Assume, to get a contradiction, that c > 0, and choose \(\epsilon =\frac {1}{2} \beta c^{2}\). It follows from (3.5) that there exists k 0 such that the inequality

$$\langle y^{k}-x^{*}, F(y^{k})\rangle\geq \beta c^{2}-\epsilon=\beta c^{2}- \frac{1}{2} \beta c^{2}=\frac{1}{2} \beta c^{2}>0$$

holds for all \(k\geq k_{0}\). On the other hand,

$$\begin{array}{@{}rcl@{}} &&\|x^{k+1}-x^{*}\|^{2}\\ &=&\|\text{Pr}_{C}(y^{k}-\beta_{k}\mu F(y^{k}))-\text{Pr}_{C}(x^{*})\|^{2}\\ &\leq &\|y^{k}-\beta_{k}\mu F(y^{k})-x^{*}\|^{2}\\ &=&\|y^{k}-x^{*}\|^{2}-2\beta_{k}\mu\langle F(y^{k}), y^{k}-x^{*}\rangle+{\beta_{k}^{2}}\mu^{2}\|F(y^{k})\|^{2}\\ &=&\|y^{k}-x^{*}\|^{2}-2\beta_{k}\mu\langle F(y^{k})-F(x^{*}), y^{k}-x^{*}\rangle-2\beta_{k}\mu\langle F(x^{*}), y^{k}-x^{*}\rangle\\ &&+{\beta_{k}^{2}}\mu^{2}\|F(y^{k})\|^{2}\\ &\leq &\|y^{k}-x^{*}\|^{2}-2\beta_{k}\mu\beta\|y^{k}-x^{*}\|^{2}-2\beta_{k}\mu\langle F(x^{*}), y^{k}-x^{*}\rangle+{\beta_{k}^{2}}\mu^{2}\|F(y^{k})\|^{2}\\ &\leq &\|y^{k}-x^{*}\|^{2}-2\beta_{k}\mu\langle F(x^{*}), y^{k}-x^{*}\rangle+{\beta_{k}^{2}}M, \end{array} $$

where \(M:=\sup \{\mu ^{2}\|F(y^{k})\|^{2}\,:\, k=0,1,\dots \}<\infty \). Then, using (3.4), we get

$$\begin{array}{@{}rcl@{}} \|x^{k+1}-x^{*}\|^{2}&\leq &\|x^{k}-x^{*}\|^{2}+2\frac{\beta_{k}\epsilon_{k}}{\lambda}+{\beta_{k}^{2}}-2\beta_{k}\mu\langle F(x^{*}), y^{k}-x^{*}\rangle+{\beta_{k}^{2}}M\\ &\leq &\|x^{k}-x^{*}\|^{2}+2\frac{\beta_{k}\epsilon_{k}}{\lambda}+{\beta_{k}^{2}}-\beta_{k}\mu\beta c^{2}+{\beta_{k}^{2}}M\enskip \forall k\geq k_{0}. \end{array} $$

After summation, we can write

$$\|x^{k+1}-x^{*}\|^{2}-\|x^{k_{0}}-x^{*}\|^{2}+\mu\beta c^{2} \sum\limits_{j=k_{0}}^{k}\beta_{j}\leq \frac{2}{\lambda}\sum\limits_{j=k_{0}}^{k}\beta_{j}\epsilon_{j}+(1+M)\sum\limits_{j=k_{0}}^{k}{\beta_{j}^{2}}.$$

Passing to the limit as \(k\to \infty \), we get \({\sum }_{j=k_{0}}^{\infty }\beta _{j} <\infty \), which contradicts the assumption \({\sum }_{k=0}^{\infty }\beta _{k} =\infty \) in (2.1). As a consequence, we have c = 0, \(x^{k}\to x^{*}\), and \(y^{k}\to x^{*}\). □

Now, we use Algorithm 1 to solve the variational inequalities VI(F, C) over the solution set of the variational inequalities VI(G, C); this problem is called the lexicographic variational inequalities, LVI(F, G, C) for short. In this case, f(x, y) = 〈G(x), y − x〉 for all x, y ∈ C and

$$\partial^{\epsilon_{k}}_{2} f(x, x)=\{G(x)\}\enskip \forall x\in C, \epsilon_{k}\geq 0. $$

Thus, Algorithm 1 and its convergence lead to the following results.

Algorithm 2 (the specialization of Algorithm 1 to Problem LVI(F, G, C)) is displayed as a figure in the original.
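Relative to the Algorithm 1 sketch in Section 2, the only change suggested by the display above is that the \(\epsilon_{k}\)-subgradient oracle degenerates to \(w^{k}=G(x^{k})\). A hypothetical one-line specialization (all names come from our earlier sketch, not from the paper):

```python
# Specialization of the earlier Algorithm 1 sketch to LVI(F, G, C):
# per the display above, the epsilon-subgradient oracle reduces to
# w^k = G(x^k), independently of epsilon_k.
def lvi_subgrad_oracle(G):
    return lambda x, eps_k: G(x)

# Hypothetical usage with the driver sketched in Section 2:
# x_star, iters = vikf_projected_subgradient(
#     x0, proj_C, F, lvi_subgrad_oracle(G), mu, beta, xi, eps, lam)
```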

When f(x, y) = 〈G(x), y − x〉 for all x, y ∈ C, we obtain the following application of Theorem 3.1. We suppose that the mappings F and G satisfy the following five conditions:

(B1) G is upper semicontinuous on C;

(B2) G is pseudomonotone on C with respect to every solution \(x^{*}\) of Problem LVI(F, G, C) and satisfies the following condition, called the strict paramonotonicity property:

$$y\in C, \langle G(x^{*}), y-x^{*}\rangle=\langle G(y), x^{*}-y\rangle=0\Rightarrow y \text{~is a solution of Problem~} VI(G, C); $$

(B3) For each x ∈ C, 〈G(⋅), x − ⋅〉 is upper semicontinuous on C;

(B4) F is L-Lipschitz continuous and β-strongly monotone;

(B5) The solution set of Problem LVI(F, G, C) is nonempty.

Theorem 3.2

Let C be a nonempty closed convex subset of \(\mathcal R^{n}\). Let the mappings \(F: C\to \mathcal R^{n}\) and \(G: C\to \mathcal R^{n}\) satisfy Assumptions (B1)–(B5). Then, the sequences {x^{k}} and {y^{k}} generated by Algorithm 2 converge to the same point \(x^{*}\), which is the unique solution of Problem LVI(F, G, C).

4 Numerical Illustration

In this section, we consider a numerical example to illustrate the convergence of Algorithm 1. The computations were performed in Matlab R2013a on a laptop with an Intel(R) Core(TM) i3-3110M CPU @ 2.40 GHz and 4 GB RAM. The projections in Algorithm 1 were computed by solving strongly convex quadratic problems with the quadratic-programming solver from the Matlab Optimization Toolbox.

Example 4.1

Let us take \(\mathcal H:=\mathcal R^{5}\) and F(x) = Mx + q, with the matrix M generated as suggested in [17] and later in [22]:

$$M=A A^{T}+B+D, $$

where A is a 5 × 5 matrix, B is a 5 × 5 skew-symmetric matrix, D is a 5 × 5 diagonal matrix, and q is a vector in \(\mathcal R^{5}\). The feasible set C and the equilibrium bifunction f are defined by

$$\begin{array}{@{}rcl@{}} C&=&\left\{x\in \mathcal R^{5}_{+}\,:\, x_{1}+x_{2}\geq 1.5,\; x_{1}+x_{2}+x_{3}+2x_{4}+x_{5}\geq 5,\; 3x_{1}+2x_{2}+x_{3}+3x_{4}+4x_{5}\leq 12\right\},\\ f(x, y)&=&\langle d, \arctan(x-y)\rangle+\langle v, x-y\rangle, \text{~where~} \arctan(x):=(\arctan(x_{1}),\dots,\arctan(x_{5}))^{T},\\ && d=(1,3,5,7,9)^{T},\quad v=(2,4,6,8,1)^{T}. \end{array} $$

The pseudomonotone bifunction f(x, y) was suggested by Bnouhachem in [11] and later used in [5]. Then, we have

$$\partial^{\epsilon_{k}}_{2} f(x, y)=\left\{\left(-\frac{d_{1}}{1+(x_{1}-y_{1})^{2}}-v_{1},\dots, -\frac{d_{5}}{1+(x_{5}-y_{5})^{2}}-v_{5}\right)^{T}\right\}, $$

and hence,

$$\begin{array}{@{}rcl@{}} \partial^{\epsilon_{k}}_{2} f(x, x)&=&\left\{\left( -d_{1}-v_{1},\dots, -d_{5}-v_{5}\right)^{T}\right\}\Rightarrow \\w^{k}&=& \left( -d_{1}-v_{1},\dots, -d_{5}-v_{5}\right)^{T}\enskip \forall k\geq 0. \end{array} $$

The initial data are as follows.

  • Every entry of A, B, and q is randomly and uniformly generated from (−3,3), and every diagonal entry of D is randomly generated from (0,1);

  • The stopping tolerance: an iterate is accepted as an 𝜖-solution if \(\max \{\|y^{k}-x^{k}\|, \|x^{k+1}-x^{k}\|\}\leq \epsilon \);

  • The parameters: \(\mu =\frac {1}{\|M\|}\in \left (0, \frac {2\beta }{L^{2}}\right )=\left (0, \frac {2}{\|M\|}\right ), \lambda _{k}=1+\frac {10}{k+2}, \beta _{k}=\frac {1}{k+1}, \xi _{k}=\frac {1}{k^{2}+1}\) for all k ≥ 0;

  • The starting point is \(x^{0} = (1,2,0,0,1)^{T}\) or \(x^{0} = (1,1,1,1,0)^{T}\).

The matrix M is positive definite, so F is strongly monotone and Lipschitz continuous on C. The numerical results are shown in Tables 1 and 2.
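For readers who want to reproduce the qualitative behavior, the following self-contained Python sketch implements Example 4.1 with the Algorithm 1 reconstruction from Section 2 inlined. The random seed, the SLSQP-based projection (the paper uses Matlab's quadratic-programming solver instead), and the iteration cap are our assumptions, so iteration counts will not match Tables 1 and 2 exactly.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2023)
n = 5
A0 = rng.uniform(-3, 3, (n, n))
B0 = rng.uniform(-3, 3, (n, n))
M = A0 @ A0.T + (B0 - B0.T) + np.diag(rng.uniform(0, 1, n))
q = rng.uniform(-3, 3, n)
F = lambda x: M @ x + q

d = np.array([1., 3., 5., 7., 9.])
v = np.array([2., 4., 6., 8., 1.])
w_const = -(d + v)   # the constant subgradient w^k computed above

# Pr_C as a strongly convex QP over the constraints defining C,
# solved here with SLSQP instead of Matlab's QP solver.
cons = [
    {'type': 'ineq', 'fun': lambda x: x},                                   # x >= 0
    {'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 1.5},
    {'type': 'ineq', 'fun': lambda x: x[0] + x[1] + x[2] + 2*x[3] + x[4] - 5},
    {'type': 'ineq', 'fun': lambda x: 12 - (3*x[0] + 2*x[1] + x[2] + 3*x[3] + 4*x[4])},
]
def proj_C(z):
    res = minimize(lambda x: 0.5 * np.sum((x - z) ** 2), z,
                   jac=lambda x: x - z, constraints=cons)
    return res.x

mu = 1.0 / np.linalg.norm(M, 2)        # mu = 1/||M|| as in the text
x = np.array([1., 1., 1., 1., 0.])     # x^0 = (1,1,1,1,0)^T
for k in range(500):                   # iteration cap: our assumption
    lam_k, beta_k, xi_k = 1 + 10 / (k + 2), 1 / (k + 1), 1 / (k**2 + 1)
    gamma_k = max(lam_k, np.linalg.norm(w_const))
    y = proj_C(x - (beta_k / gamma_k) * w_const)
    x_next = proj_C(y - mu * xi_k * F(y))
    done = max(np.linalg.norm(y - x), np.linalg.norm(x_next - x)) <= 1e-3
    x = x_next
    if done:
        break
print(k, x)
```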

Table 1 Results for Algorithm 1 with different starting points and the tolerance 𝜖 = 10^{−3}
Table 2 Results for Algorithm 1 with different parameters, \(x^{0} = (1,1,1,1,0)^{T}\), and the tolerance 𝜖 = 10^{−3}

From the preliminary numerical results reported in the tables, we observe that:

(a) Similar to other methods for equilibrium problems, such as the proximal point algorithm, the extragradient method, and the gap function method, the speed of Algorithm 1 depends very much on the starting point \(x^{0}\);

(b) Algorithm 1 is quite sensitive to the choice of the parameters λ_{k}, β_{k}, and ξ_{k}.