1 Introduction

Let H be a real Hilbert space with an inner product 〈⋅,⋅〉 and its induced norm denoted by ∥⋅∥, and let C be a nonempty closed convex subset of H. The bilevel variational inequality under consideration in this paper can be formulated as

$$ \text{Find } x^{*}\in\text{Sol}(C, G)\text{ such that } \langle F(x^{*}), y-x^{*}\rangle\geq 0\quad\forall y\in\text{Sol}(C, G), $$
(1)

where F:H→H is a given mapping and

$$\text{Sol}(C, G)=\{y^{*}\in C: \langle G(y^{*}), z-y^{*}\rangle\geq 0~\forall z\in C\}, $$

i.e., Sol(C,G) is the solution set of the variational inequality V I(C,G) defined as

$$ \text{Find } y^{*}\in C\text{ such that } \langle G(y^{*}), z-y^{*}\rangle\geq 0\quad\forall z\in C $$
(2)

with G:H→H. Some additional conditions on F and G will be detailed later.

The bilevel problem (1) belongs to the class of mathematical programs with equilibrium constraints [23, 25]. Moreover, it covers several bilevel problems considered in the literature (see, e.g., [11, 13, 17–20, 23, 29]). In the special case when F(x) = x−x 0 for all x∈H, with x 0 a fixed element of H, the bilevel variational inequality (1) reduces to the problem of finding the projection of x 0 onto Sol(C,G). The latter problem, where G is monotone or pseudomonotone, arises in the Tikhonov regularization method [15, 16] and has recently been solved in several papers [3, 4, 6, 10, 12, 31] by using projection techniques. It is well known (see, e.g., [12, p. 1110]) that the projection method for monotone variational inequalities may fail to converge. To overcome this difficulty, the extragradient (or double projection) method, first proposed by Korpelevich [21] for saddle point problems and by Antipin [1], can be applied to pseudomonotone variational inequalities with guaranteed convergence. However, since the extragradient method requires two projections onto C per iteration, its efficiency may suffer when these projections are expensive. Motivated by this fact, Censor et al. [7] modified the extragradient method for solving the variational inequality V I(C,G) defined by (2) by replacing the second projection onto the constraint set with a projection onto a half-space containing C. This method, called the subgradient projection method, can be described as follows.

Starting from a given point x 0∈H, for all k≥0, the next iterate is defined as

$$\left\{\begin{array}{l} y^{k}=P_{C}\big(x^{k}-\tau G(x^{k})\big),\\ T_{k}=\{\omega\in H: \langle x^{k}-\tau G(x^{k})-y^{k}, \omega-y^{k}\rangle\leq 0\},\\ x^{k+1}=P_{T_{k}}(x^{k}-\tau G(y^{k})). \end{array}\right. $$

It was proved that if G is monotone and L-Lipschitz continuous on C and the stepsize \(\tau \in \big (0,\frac {1}{L}\big )\), then both sequences {x k} and {y k} converge weakly to a solution of V I(C,G).
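To make the scheme concrete, the following minimal Python sketch (our own illustration, not the implementation of [7]) carries out the three steps above, assuming a user-supplied operator G and projector proj_C onto C; the projection onto the half-space T k is available in closed form.

```python
import numpy as np

def subgradient_projection(G, proj_C, x0, tau, max_iter=1000, tol=1e-8):
    """Sketch of the scheme above: one projection onto C, then a
    closed-form projection onto the half-space T_k.  Here tau should
    lie in (0, 1/L), with L the Lipschitz constant of G."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        gx = G(x)
        y = proj_C(x - tau * gx)
        a = x - tau * gx - y                  # normal vector defining T_k
        v = x - tau * G(y)                    # point to be projected onto T_k
        viol = np.dot(a, v - y)
        x_new = v - (viol / np.dot(a, a)) * a if viol > 0 else v
        if np.linalg.norm(x_new - x) <= tol:  # simple heuristic stopping rule
            return x_new
        x = x_new
    return x
```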

This method has been further modified and extended to obtain strong convergence results for monotone variational inequalities and generalized problems [8, 9, 22].

Inspired by the works of Censor et al. [7–9], in this paper we propose a subgradient algorithm combined with the Halpern method to obtain a strongly convergent algorithm for solving the bilevel variational inequality (1), where F is strongly monotone but G may only be pseudomonotone. It is worth mentioning that, since G may not be monotone, the bilevel problem (1) cannot be formulated as a variational inequality over the fixed-point set of a nonexpansive mapping.

The paper is organized as follows. After some preliminary results in Section 2, a general scheme is proposed in Section 3 and its strong convergence is established. In particular, when F is defined by F(x) = x−x 0 for all x∈H, where x 0 is a fixed element of H, we prove that the sequence {x k} generated by our algorithm converges strongly to the projection of x 0 onto the solution set of the variational inequality V I(C,G). This is particularly interesting when x 0=0 (the minimum-norm problem).

2 Preliminaries

In what follows, we write \(x^{k}\rightharpoonup x\) to indicate that the sequence {x k} converges weakly to x, and x k→x to indicate that the sequence {x k} converges strongly to x. Recall that the (nearest point or metric) projection from H onto C, denoted by P C , is defined in such a way that, for each x∈H, P C (x) is the unique point in C with the property

$$\| x-P_{C}(x)\|=\min\{\|x-y\|: y\in C\}. $$

Some important properties of the projection operator P C are gathered in the following lemma.

Lemma 1

[14] For given x∈H and y∈C:

  1. (i)

    y=P C (x) if and only if 〈x−y,z−y〉≤0 ∀z∈C.

  2. (ii)

∥P C (x)−z∥ 2 ≤∥x−z∥ 2 −∥x−P C (x)∥ 2 ∀z∈C.
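As a small computational illustration of Lemma 1, consider projecting onto a half-space C={z∈H:〈a,z〉≤b}, for which P C has a closed form (this is exactly the type of set T k appearing in the methods of this paper). The sketch below is our own toy numpy check of property (i); the function name proj_halfspace and the random test data are not from the paper.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Projection of x onto the half-space C = {z : <a, z> <= b}, a != 0."""
    viol = np.dot(a, x) - b
    return x if viol <= 0 else x - (viol / np.dot(a, a)) * a

rng = np.random.default_rng(0)
a, b = rng.standard_normal(5), 1.0
x = rng.standard_normal(5) * 5.0
p = proj_halfspace(x, a, b)
# Lemma 1(i): <x - P_C(x), z - P_C(x)> <= 0 for every z in C
for _ in range(100):
    z = proj_halfspace(rng.standard_normal(5) * 5.0, a, b)   # some point of C
    assert np.dot(x - p, z - p) <= 1e-10
```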

Let us also recall some well-known definitions which will be used in this paper.

Definition 1

A mapping ϕ:H→H is said to be

  1. (i)

    β-strongly monotone on H if there exists β>0 such that

    $$\langle \phi(x)-\phi(y), x-y\rangle\geq\beta\|x-y\|^{2}\quad\forall x, y\in H; $$
  2. (ii)

    L-Lipschitz continuous on H if

    $$\|\phi(x)-\phi(y)\|\leq L\|x-y\|\quad \forall x, y\in H; $$
  3. (iii)

    monotone on H if

    $$\langle \phi(x)-\phi(y), x-y\rangle\geq 0\quad\forall x, y\in H; $$
  4. (iv)

    pseudomonotone on H if

    $$\langle \phi(x), y-x\rangle\geq 0\quad\Longrightarrow\quad \langle \phi(y), y-x\rangle\geq 0 \quad \forall x, y\in H. $$

It is well known that if F:H→H is β-strongly monotone and L-Lipschitz continuous on H and if Sol(C,G) is a nonempty, closed, and convex subset of H, then the bilevel problem (1) has a unique solution (see, e.g., [27]).
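Existence is the delicate part and is covered by [27]; uniqueness follows from a short standard argument. If \(x_{1}^{*}\) and \(x_{2}^{*}\) both solve (1), then, since each is feasible for the other's inequality,

$$\langle F(x_{1}^{*}), x_{2}^{*}-x_{1}^{*}\rangle\geq 0\quad\text{and}\quad\langle F(x_{2}^{*}), x_{1}^{*}-x_{2}^{*}\rangle\geq 0. $$

Adding these inequalities gives \(\langle F(x_{1}^{*})-F(x_{2}^{*}), x_{1}^{*}-x_{2}^{*}\rangle \leq 0\), while the β-strong monotonicity of F gives \(\langle F(x_{1}^{*})-F(x_{2}^{*}), x_{1}^{*}-x_{2}^{*}\rangle \geq \beta \|x_{1}^{*}-x_{2}^{*}\|^{2}\); hence \(x_{1}^{*}=x_{2}^{*}\).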

The next lemmas will be used for proving the convergence of the proposed algorithm described below.

Lemma 2

Suppose ϕ:H→H is β-strongly monotone and L-Lipschitz continuous on H, and let 0<α<1, 0≤η≤1−α, and \(0<\mu <\frac {2\beta }{L^{2}}\). Then

$$\|(1-\eta)x-\alpha\mu \phi(x)-[(1-\eta)y-\alpha\mu \phi(y)]\|\leq(1-\eta-\alpha\tau)\|x-y\|\quad\forall x, y\in H, $$

where

$$\tau=1-\sqrt{1-\mu(2\beta-\mu L^{2})}\in (0, 1]. $$

Proof

It is clear that

$$\begin{array}{@{}rcl@{}} &&\|(1-\eta)x-\alpha\mu \phi(x)-[(1-\eta)y-\alpha\mu \phi(y)]\|\\ &&\qquad=\|(1-\eta)(x-y)-\alpha\mu(\phi(x)-\phi(y))\|\\ &&\qquad=\|(1-\eta-\alpha)(x-y)+\alpha[(x-y)-\mu(\phi(x)-\phi(y))]\|\\ &&\qquad\leq (1-\eta-\alpha)\|x-y\|+\alpha\|(x-y)-\mu(\phi(x)-\phi(y))\|. \end{array} $$
(3)

By the strong monotonicity and the Lipschitz continuity of ϕ, we have

$$\begin{array}{@{}rcl@{}} \|(x-y)-\mu(\phi(x)- \phi(y))\|^{2}&=&\|x-y\|^{2}-2\mu\langle x-y, \phi(x)-\phi(y)\rangle\\ &&+\mu^{2}\|\phi(x)-\phi(y)\|^{2}\\ &\leq&\|x-y\|^{2}-2\mu\beta\|x-y\|^{2}+\mu^{2}L^{2}\|x-y\|^{2}\\ &=&(1-2\mu\beta+\mu^{2}L^{2})\|x-y\|^{2}. \end{array} $$

Hence,

$$ \|(x-y)-\mu(\phi(x)-\phi(y))\|\leq\sqrt{1-\mu(2\beta-\mu L^{2})}\|x-y\|. $$
(4)

Combining (3) with (4) gives

$$\begin{array}{@{}rcl@{}} &&\|(1-\eta)x-\alpha\mu\phi(x)-[(1-\eta)y-\alpha\mu\phi(y)]\|\\ &&\qquad\leq (1-\eta-\alpha)\|x-y\|+\alpha\|(x-y)-\mu(\phi(x)-\phi(y))\|\\ &&\qquad\leq (1-\eta-\alpha)\|x-y\|+\alpha\sqrt{1-\mu(2\beta-\mu L^{2})}\|x-y\|\\ &&\qquad=(1-\eta-\alpha\tau)\|x-y\|, \end{array} $$

where

$$\tau=1-\sqrt{1-\mu(2\beta-\mu L^{2})}\in (0, 1]. $$

This completes the proof. □
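The estimate of Lemma 2 is easy to sanity-check numerically. The sketch below is our own toy test: it uses the diagonal mapping F from Example 1 in Section 5, which is β=2-strongly monotone and L=6-Lipschitz continuous, together with sample admissible parameters α, η, μ.

```python
import numpy as np

beta, L = 2.0, 6.0
alpha, eta, mu = 0.3, 0.5, 0.1   # 0 < alpha < 1, 0 <= eta <= 1 - alpha, 0 < mu < 2*beta/L**2
tau = 1.0 - np.sqrt(1.0 - mu * (2.0 * beta - mu * L ** 2))

D = np.diag([2.0, 4.0, 3.0, 2.0, 6.0, 4.0, 5.0, 3.0])
phi = lambda x: D @ x            # beta = 2, L = 6 on R^8

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.standard_normal(8), rng.standard_normal(8)
    lhs = np.linalg.norm((1 - eta) * x - alpha * mu * phi(x)
                         - ((1 - eta) * y - alpha * mu * phi(y)))
    rhs = (1 - eta - alpha * tau) * np.linalg.norm(x - y)
    assert lhs <= rhs + 1e-12    # the inequality of Lemma 2
```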

Lemma 3

([24]) Let {a n } be a sequence of nonnegative real numbers. Suppose that for any integer m, there exists an integer p such that p≥m and a p ≤a p+1 . Let n 0 be an integer such that \(a_{n_{0}}\leq a_{n_{0}+1}\) and define, for every integer n≥n 0,

$$\tau(n)=\max\{k\in\mathbb{N}:~ n_{0}\leq k\leq n, a_{k}\leq a_{k+1}\}. $$

Then \(\{\tau (n)\}_{n\geq n_{0}}\) is a nondecreasing sequence satisfying \(\lim _{n\to \infty }\tau (n)=\infty \) and the following inequalities hold true:

$$a_{\tau(n)}\leq a_{\tau(n)+1}, \quad a_{n}\leq a_{\tau(n)+1} \quad \forall n\geq n_{0}. $$

Lemma 4

([30]) Assume {a n } is a sequence of nonnegative real numbers satisfying the condition

$$a_{n+1}\leq (1-\alpha_{n})a_{n}+\alpha_{n}\xi_{n}\quad\forall n\geq 0, $$

where {α n } is a sequence in (0,1) and {ξ n } is a sequence in \(\mathbb {R}\) such that

  1. (i)

    \({\sum }_{n=0}^{\infty }\alpha _{n}=\infty \);

  2. (ii)

    \(\limsup _{n\to \infty }\xi _{n}\leq 0\).

Then \(\lim _{n\to \infty }a_{n}=0\).

3 The Algorithm and Its Strong Convergence

The algorithm we are going to describe can be considered as a combination of the one in [7] with the Halpern method for solving the bilevel variational inequality (1). Furthermore, we impose the following assumptions on the mappings F,G:HH associated with the bilevel problem (1).

  • (A1) F is β-strongly monotone and L-Lipschitz continuous on H.

  • (A2) G is pseudomonotone and γ-Lipschitz continuous on H.

  • (A3) \(\limsup _{k\to \infty }\langle G(x^{k}), y-y^{k}\rangle \leq \langle G(\overline {x}), y-\overline {y}\rangle \) for all sequences {x k} and {y k} converging weakly to \(\overline {x}\) and \(\overline {y}\), respectively.

Let us also mention that conditions (A1)–(A3) are classical assumptions for variational inequalities. Furthermore, it is easy to see that if G satisfies the property (A2), then the solution set Sol(C,G) of the variational inequality V I(C,G) is closed and convex (see, e.g., [26]).

Algorithm 1

Choose x 0∈H, sequences {α k}⊂(0,1), {η k}, and \(\{\lambda _{k}\}\subset [a, b]\subset \big (0,\frac {1}{\gamma }\big )\), and a parameter \(0<\mu <\frac {2\beta }{L^{2}}\) such that \(\lim _{k\to \infty }\alpha _{k}=0\), \({\sum }_{k=0}^{\infty }\alpha _{k}=\infty \), 0≤η k ≤1−α k for all \(k\in \mathbb {N}\), and \(\lim _{k\to \infty }\eta _{k}=\eta <1\). For each k≥0, compute

$$\left\{\begin{array}{l} y^{k}=P_{C}\big(x^{k}-\lambda_{k} G(x^{k})\big),\\ T_{k}=\{\omega\in H: \langle x^{k}-\lambda_{k} G(x^{k})-y^{k}, \omega-y^{k}\rangle\leq 0\},\\ z^{k}=P_{T_{k}}\big(x^{k}-\lambda_{k}G(y^{k})\big),\\ x^{k+1}=\eta_{k}x^{k}+(1-\eta_{k})z^{k}-\alpha_{k}\mu F(z^{k}). \end{array}\right. $$

Remark 1

In Algorithm 1, we can choose, for example, \(\alpha _{k}=\frac {1}{k+3}\), \(\eta _{k}=\frac {k+1}{2(k+3)}\), \(\lambda _{k}=\frac {1}{2\gamma }\) for all \(k\in \mathbb {N}\). An elementary computation shows that \(\{\alpha _{k}\}\subset (0, 1)\), \(\lim _{k\to \infty }\alpha _{k}=0\) and \({\sum }_{k=0}^{\infty }\alpha _{k}=\infty \). Furthermore, 0≤η k ≤1−α k for all \(k\in \mathbb {N}\) and \(\lim _{k\to \infty }\eta _{k}=\frac {1}{2}<1\).
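A minimal Python sketch of Algorithm 1 with the parameter choices of Remark 1 could look as follows. This is our own illustration (not the authors' code), assuming numpy, a user-supplied projector proj_C onto C, and the constants β, L of F and γ of G; the projection onto T k is computed in closed form.

```python
import numpy as np

def proj_halfspace_at(v, a, y):
    """Project v onto T = {w : <a, w - y> <= 0} (closed form)."""
    viol = np.dot(a, v - y)
    return v if viol <= 0 else v - (viol / np.dot(a, a)) * a

def algorithm1(F, G, proj_C, x0, gamma, beta, L, max_iter=5000, tol=1e-6):
    """Sketch of Algorithm 1 with alpha_k = 1/(k+3), eta_k = (k+1)/(2(k+3)),
    lambda_k = 1/(2*gamma) as in Remark 1, and mu = beta/L**2 in (0, 2*beta/L**2)."""
    mu = beta / L ** 2
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        alpha = 1.0 / (k + 3)
        eta = (k + 1) / (2.0 * (k + 3))
        lam = 1.0 / (2.0 * gamma)
        gx = G(x)
        y = proj_C(x - lam * gx)
        a = x - lam * gx - y                     # normal vector defining T_k
        z = proj_halfspace_at(x - lam * G(y), a, y)
        x_new = eta * x + (1 - eta) * z - alpha * mu * F(z)
        if np.linalg.norm(x_new - x) <= tol:     # heuristic stopping rule
            return x_new
        x = x_new
    return x
```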

The following theorem establishes the strong convergence of the algorithm.

Theorem 1

Suppose that assumptions (A1)–(A3) hold and Sol(C,G)≠∅. Then the sequence {x k} generated by Algorithm 1 converges strongly to the unique solution of the bilevel problem (1).

Proof

The proof of the theorem is divided into several steps.

Step 1:

For all x ∗∈Sol(C,G), we have

$$ \|z^{k}-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-(1-\lambda_{k}\gamma)\|x^{k}-y^{k}\|^{2}-(1-\lambda_{k}\gamma)\|y^{k}-z^{k}\|^{2}\quad\forall k\in\mathbb{N}. $$
(5)

Indeed, let x ∗∈Sol(C,G). From (i) of Lemma 1 and the definition of y k, we have

$$ \langle x^{k}-\lambda_{k} G(x^{k})-y^{k}, z-y^{k}\rangle\leq 0\quad\forall z\in C. $$
(6)

Using (6) and the definition of T k , we obtain \(C\subset T_{k}\). It follows from x ∗∈Sol(C,G) and the pseudomonotonicity of G that

$$ \langle G(y^{k}), y^{k}-x^{*}\rangle\geq 0\quad\forall k\in\mathbb{N}. $$
(7)

From (ii) of Lemma 1 and (7), we obtain

$$\begin{array}{@{}rcl@{}} \|z^{k}-x^{*}\|^{2}&=&\|P_{T_{k}}(x^{k}-\lambda_{k}G(y^{k}))-x^{*}\|^{2}\\ &\leq&\|x^{k}-\lambda_{k}G(y^{k})-x^{*}\|^{2}-\|x^{k}-\lambda_{k}G(y^{k})-z^{k}\|^{2}\\ &=&\|x^{k}-x^{*}\|^{2}-2\lambda_{k}\langle x^{k}-x^{*}, G(y^{k})\rangle+{\lambda_{k}^{2}}\|G(y^{k})\|^{2}-\|x^{k}-z^{k}\|^{2}\\ &&+2\lambda_{k}\langle x^{k}-z^{k}, G(y^{k})\rangle-{\lambda_{k}^{2}}\|G(y^{k})\|^{2}\\ &=&\|x^{k}-x^{*}\|^{2}-\|x^{k}-z^{k}\|^{2}+2\lambda_{k}\langle x^{*}-z^{k}, G(y^{k})\rangle\\ &=&\|x^{k}-x^{*}\|^{2}-\|x^{k}-z^{k}\|^{2}-2\lambda_{k}\langle G(y^{k}), y^{k}-x^{*}\rangle\\ &&+2\lambda_{k}\langle y^{k}-z^{k}, G(y^{k})\rangle\\ &\leq&\|x^{k}-x^{*}\|^{2}-\|x^{k}-z^{k}\|^{2}+2\lambda_{k}\langle y^{k}-z^{k}, G(y^{k})\rangle. \end{array} $$
(8)

Using the Cauchy–Schwarz inequality, the arithmetic–geometric mean inequality, and the γ-Lipschitz continuity of G on H, we obtain

$$\begin{array}{@{}rcl@{}} 2\langle G(x^{k})-G(y^{k}), z^{k}-y^{k}\rangle&\leq&2\|G(x^{k})-G(y^{k})\|\|z^{k}-y^{k}\|\\ &\leq& 2\gamma\|x^{k}-y^{k}\| \|z^{k}-y^{k}\|\\ &\leq&\gamma(\|x^{k}-y^{k}\|^{2}+\|y^{k}-z^{k}\|^{2}). \end{array} $$
(9)

From the definition of T k and the fact that z k∈T k , we have

$$\langle x^{k}-\lambda_{k} G(x^{k})-y^{k}, z^{k}-y^{k}\rangle\leq 0. $$

It follows from the above inequality, (8), and (9) that

$$\begin{array}{@{}rcl@{}} \|z^{k}-x^{*}\|^{2}&\leq&\|x^{k}-x^{*}\|^{2}-\|x^{k}-z^{k}\|^{2}+2\lambda_{k}\langle y^{k}-z^{k}, G(y^{k})\rangle\\ &=&\|x^{k}-x^{*}\|^{2}+2\lambda_{k}\langle y^{k}-z^{k}, G(y^{k})\rangle-\|(x^{k}-y^{k})+(y^{k}-z^{k})\|^{2}\\ &=&\|x^{k}-x^{*}\|^{2}+2\lambda_{k}\langle y^{k}-z^{k}, G(y^{k})\rangle-\|x^{k}-y^{k}\|^{2}-\|y^{k}-z^{k}\|^{2}\\ &&-2\langle y^{k}-z^{k}, x^{k}-y^{k}\rangle\\ &=&\|x^{k}-x^{*}\|^{2}-\|x^{k}-y^{k}\|^{2}-\|y^{k}-z^{k}\|^{2}\\ &&+2\langle y^{k}-z^{k}, \lambda_{k}G(y^{k})-x^{k}+y^{k}\rangle\\ &=&\|x^{k}-x^{*}\|^{2}-\|x^{k}-y^{k}\|^{2}-\|y^{k}-z^{k}\|^{2}\\ &&+2\langle x^{k}-\lambda_{k}G(x^{k})-y^{k}, z^{k}-y^{k}\rangle\\ &&+2\lambda_{k}\langle G(x^{k})-G(y^{k}), z^{k}-y^{k}\rangle\\ &\leq&\|x^{k}-x^{*}\|^{2}-\|x^{k}-y^{k}\|^{2}-\|y^{k}-z^{k}\|^{2}\\ &&+\lambda_{k}\gamma(\|x^{k}-y^{k}\|^{2}+\|y^{k}-z^{k}\|^{2})\\ &=&\|x^{k}-x^{*}\|^{2}-(1-\lambda_{k}\gamma)\|x^{k}-y^{k}\|^{2}-(1-\lambda_{k}\gamma)\|y^{k}-z^{k}\|^{2}. \end{array} $$
Step 2:

The sequences {x k}, {F(x k)}, {y k}, and {z k} are bounded. Since, for every k≥0, 1−λ k γ≥1−b γ>0, it follows from (5) that

$$ \|z^{k}-x^{*}\|\leq\|x^{k}-x^{*}\|\quad\forall k\in\mathbb{N}. $$
(10)

It follows from (10) and Lemma 2 that

$$\begin{array}{@{}rcl@{}} \|x^{k+1}-x^{*}\|&=&\|\eta_{k}x^{k}+(1-\eta_{k})z^{k}-\alpha_{k}\mu F(z^{k})-x^{*}\|\\ &=&\|(1-\eta_{k})z^{k}-\alpha_{k}\mu F(z^{k})-[(1-\eta_{k})x^{*}-\alpha_{k}\mu F(x^{*})]\\ &&+\eta_{k}(x^{k}-x^{*})-\alpha_{k}\mu F(x^{*})\|\\ &\leq&\|(1-\eta_{k})z^{k}-\alpha_{k}\mu F(z^{k})-[(1-\eta_{k})x^{*}-\alpha_{k}\mu F(x^{*})]\|\\ &&+\eta_{k}\|x^{k}-x^{*}\|+\alpha_{k}\mu\|F(x^{*})\|\\ &\leq& (1-\eta_{k}-\alpha_{k}\tau)\|z^{k}-x^{*}\|+\eta_{k}\|x^{k}-x^{*}\|+\alpha_{k}\mu\|F(x^{*})\|\\ &\leq& (1-\eta_{k}-\alpha_{k}\tau)\|x^{k}-x^{*}\|+\eta_{k}\|x^{k}-x^{*}\|+\alpha_{k}\mu\|F(x^{*})\|\\ &=&(1-\alpha_{k}\tau)\|x^{k}-x^{*}\|+\alpha_{k}\mu\|F(x^{*})\|\\ &=&(1-\alpha_{k}\tau)\|x^{k}-x^{*}\|+\alpha_{k}\tau\frac{\mu\|F(x^{*})\|}{\tau}, \end{array} $$
(11)

where

$$\tau=1-\sqrt{1-\mu(2\beta-\mu L^{2})}\in (0, 1]. $$

We obtain from (11) that

$$\|x^{k+1}-x^{*}\|\leq\max\left\{\|x^{k}-x^{*}\|,\frac{\mu\|F(x^{*})\|}{\tau}\right\}. $$

So, by induction, we obtain, for every k≥0, that

$$\|x^{k}-x^{*}\|\leq\max\left\{\|x^{0}-x^{*}\|, \frac{\mu\|F(x^{*})\|}{\tau}\right\}. $$

Hence, the sequence {x k} is bounded and so are the sequences {F(x k)}, {y k}, and {z k}.

Step 3:

We prove that x k converges strongly to x ∗, where x ∗ is the unique solution of (1). Using Lemma 2, (10), and the inequality

$$\|x-y\|^{2}\leq\|x\|^{2}-2\langle y, x-y\rangle\qquad\forall x, y\in H, $$

we obtain successively

$$\begin{array}{@{}rcl@{}} \|x^{k+1}-x^{*}\|^{2}&=&\|\eta_{k}x^{k}+(1-\eta_{k})z^{k}-\alpha_{k}\mu F(z^{k})-x^{*}\|^{2}\\ &=&\|(1-\eta_{k})z^{k}-\alpha_{k}\mu F(z^{k})-[(1-\eta_{k})x^{*}-\alpha_{k}\mu F(x^{*})]\\ &&+\eta_{k}(x^{k}-x^{*})-\alpha_{k}\mu F(x^{*})\|^{2}\\ &\leq&\|(1-\eta_{k})z^{k}-\alpha_{k}\mu F(z^{k})-[(1-\eta_{k})x^{*}-\alpha_{k}\mu F(x^{*})]\\ &&+\eta_{k}(x^{k}-x^{*})\|^{2}-2\alpha_{k}\mu\langle F(x^{*}), x^{k+1}-x^{*}\rangle\\ &\leq&\big\{\|(1-\eta_{k})z^{k}-\alpha_{k}\mu F(z^{k})-[(1-\eta_{k})x^{*}-\alpha_{k}\mu F(x^{*})]\|\\ &&+\eta_{k}\|x^{k}-x^{*}\|\big\}^{2}-2\alpha_{k}\mu\langle F(x^{*}), x^{k+1}-x^{*}\rangle\\ &\leq&\big[(1-\eta_{k}-\alpha_{k}\tau)\|z^{k}-x^{*}\|+\eta_{k}\|x^{k}-x^{*}\|\big]^{2}\\ &&-2\alpha_{k}\mu\langle F(x^{*}), x^{k+1}-x^{*}\rangle\\ &\leq&(1-\eta_{k}-\alpha_{k}\tau)\|z^{k}-x^{*}\|^{2}+\eta_{k}\|x^{k}-x^{*}\|^{2}\\ &&-2\alpha_{k}\mu\langle F(x^{*}), x^{k+1}-x^{*}\rangle\\ &\leq&(1-\eta_{k}-\alpha_{k}\tau)\|x^{k}-x^{*}\|^{2}+\eta_{k}\|x^{k}-x^{*}\|^{2}\\ &&-2\alpha_{k}\mu\langle F(x^{*}), x^{k+1}-x^{*}\rangle\\ &=&(1-\alpha_{k}\tau)\|x^{k}-x^{*}\|^{2}+2\alpha_{k}\mu\langle F(x^{*}), x^{*}-x^{k+1}\rangle. \end{array} $$
(12)

Let us consider two cases.

Case 1 :

There exists k 0 such that {∥x k−x ∗∥} is decreasing for k≥k 0. In that case, the limit of {∥x k−x ∗∥} exists. So, it follows from (10) and (12) that

$$\begin{array}{@{}rcl@{}} 0\leq\|x^{k}-x^{*}\|^{2}-\|z^{k}-x^{*}\|^{2}&\leq&-\frac{\alpha_{k}\tau}{1-\eta_{k}}\|z^{k}-x^{*}\|^{2}-\frac{2\alpha_{k}\mu}{1-\eta_{k}}\langle F(x^{*}), x^{k+1}-x^{*}\rangle\\ &&+\frac{1}{1-\eta_{k}}(\|x^{k}-x^{*}\|^{2}-\|x^{k+1}-x^{*}\|^{2}). \end{array} $$
(13)

Since the limit of {∥x k−x ∗∥} exists, \(\lim _{k\to \infty }\alpha _{k}=0\), \(\lim _{k\to \infty }\eta _{k}=\eta <1\), and {x k} and {z k} are bounded sequences, it follows from (13) that

$$ \underset{k\to\infty}{\lim}(\|x^{k}-x^{*}\|^{2}-\|z^{k}-x^{*}\|^{2})=0. $$
(14)

From (5) and \(\{\lambda _{k}\}\subset [a, b]\subset (0,\frac {1}{\gamma })\), we get

$$ (1-b\gamma)\|x^{k}-y^{k}\|^{2}\leq(1-\lambda_{k}\gamma)\|x^{k}-y^{k}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\|z^{k}-x^{*}\|^{2}. $$
(15)

Then, from (14) and (15), we obtain

$$\underset{k\to\infty}{\lim}\|x^{k}-y^{k}\|=0. $$

Now, we prove that

$$ \underset{k\longrightarrow\infty}{\limsup}\langle F(x^{*}), x^{*}-x^{k+1}\rangle\leq 0. $$
(16)

Take a subsequence \(\{x^{k_{i}}\}\) of \(\{x^{k}\}\) such that

$$\underset{k\longrightarrow\infty}{\limsup}\langle F(x^{*}), x^{*}-x^{k+1}\rangle=\underset{i\to\infty}{\lim}\langle F(x^{*}), x^{*}-x^{k_{i}}\rangle. $$

Since \(\{x^{k_{i}}\}\) is bounded, we may assume that \(x^{k_{i}}\) converges weakly to some \(\overline {x}\in H\). Therefore

$$ \underset{k\longrightarrow\infty}{\limsup} \langle F(x^{*}), x^{*}-x^{k+1}\rangle=\underset{i\to\infty}{\lim}\langle F(x^{*}), x^{*}-x^{k_{i}}\rangle=\langle F(x^{*}), x^{*}-\overline{x}\rangle. $$
(17)

Since \(\lim _{k\to \infty }\|x^{k}-y^{k}\|=0\) and \(x^{k_{i}}\rightharpoonup \overline {x}\), it follows that \(y^{k_{i}}\) converges weakly to \(\overline {x}\). Since C is closed and convex, it is also weakly closed, and thus \(\overline {x}\in C\). Next, we prove \(\overline {x}\in \text {Sol}(C, G)\). Indeed, let x∈C. From (6), we have 〈x k−λ k G(x k)−y k, x−y k〉≤0. In particular, for all \(i\in \mathbb {N}\)

$$\langle x^{k_{i}}-\lambda_{k_{i}}G(x^{k_{i}})-y^{k_{i}}, x-y^{k_{i}}\rangle\leq 0. $$

Since \(\lambda _{k_{i}}>0\) for every \(i\in \mathbb {N}\), it follows from the above inequality that

$$ \langle G(x^{k_{i}}), x-y^{k_{i}}\rangle\geq\frac{\langle x^{k_{i}}-y^{k_{i}}, x-y^{k_{i}}\rangle}{\lambda_{k_{i}}}. $$
(18)

Applying the Cauchy–Schwarz inequality and recalling that \(\lambda _{k_{i}}\geq a>0\) for all \(i\in \mathbb {N}\), we obtain

$$\begin{array}{@{}rcl@{}} \left|\frac{\langle x^{k_{i}}-y^{k_{i}}, x-y^{k_{i}}\rangle}{\lambda_{k_{i}}}\right|=\frac{|\langle x^{k_{i}}-y^{k_{i}}, x-y^{k_{i}}\rangle|}{\lambda_{k_{i}}}&\leq&\frac{|\langle x^{k_{i}}-y^{k_{i}}, x-y^{k_{i}}\rangle|}{a}\\ &\leq&\frac{\|x^{k_{i}}-y^{k_{i}}\| \|x-y^{k_{i}}\|}{a}. \end{array} $$
(19)

Since \(\|x^{k_{i}}-y^{k_{i}}\|\to 0\) as \(i\to \infty \) and \(\{y^{k_{i}}\}\) is bounded, the right-hand side of the inequality (19) tends to zero, and hence \(\lim _{i\to \infty }\frac {\langle x^{k_{i}}-y^{k_{i}}, x-y^{k_{i}}\rangle }{\lambda _{k_{i}}}=0\). So, using (18), condition (A3), and the weak convergence of the sequences \(\{x^{k_{i}}\}\) and \(\{y^{k_{i}}\}\) to \(\overline {x}\), we get

$$0\leq\underset{i\longrightarrow\infty}{\limsup}\langle G(x^{k_{i}}), x-y^{k_{i}}\rangle\leq\langle G(\overline{x}), x-\overline{x}\rangle, $$

i.e., \(\overline {x}\in \text {Sol}(C, G)\). Since x ∗ is the solution to the bilevel problem (1) and \(\overline {x}\in \text {Sol}(C, G)\), we can conclude that

$$\langle F(x^{*}), \overline{x}-x^{*}\rangle\geq 0. $$

Thus, it follows from (17) that \(\limsup _{k\to \infty }\langle F(x^{*}), x^{*}-x^{k+1}\rangle \leq 0\). The inequality (12) can be written as follows

$$\|x^{k+1}-x^{*}\|^{2}\leq(1-\alpha_{k}\tau)\|x^{k}-x^{*}\|^{2}+\alpha_{k}\tau\xi_{k}, $$

where

$$\xi_{k}=\frac{2\mu\langle F(x^{*}), x^{*}-x^{k+1}\rangle}{\tau}. $$

From (16), we have that \(\limsup _{k\to \infty }\xi _{k}\leq 0\). By Lemma 4, we have \(\lim _{k\to \infty }\|x^{k}-x^{*}\|^{2}=0\), that is, x kx as \(k\to \infty \).

Case 2 :

Suppose there exists a subsequence \(\{x^{k_{j}}\}\) of {x k} such that

$$\|x^{k_{j}}-x^{*}\|\leq\|x^{k_{j}+1}-x^{*}\|\qquad\forall j\in\mathbb{N}. $$

From Lemma 3, there exists a nondecreasing sequence \(\{\tau (k)\}\) in \(\mathbb N\) such that \(\lim _{k\to \infty }\tau (k)=\infty \) and the following inequalities hold for all (sufficiently large) \(k\in \mathbb N\)

$$ \|x^{\tau(k)}-x^{*}\|\leq \|x^{\tau(k)+1}-x^{*}\|,\qquad \|x^{k}-x^{*}\|\leq\|x^{\tau(k)+1}-x^{*}\|. $$
(20)

So, it follows from (11) and (20) that

$$\begin{array}{@{}rcl@{}} &&\|x^{\tau(k)}-x^{*}\|\leq\|x^{\tau(k)+1}-x^{*}\|\leq (1-\eta_{\tau(k)}-\alpha_{\tau(k)}\tau)\|z^{\tau(k)}-x^{*}\|\\ &&\quad+\eta_{\tau(k)}\|x^{\tau(k)}-x^{*}\|+\alpha_{\tau(k)}\mu\|F(x^{*})\|. \end{array} $$
(21)

Also, from (10) and (21), we have

$$ 0\leq\|x^{\tau(k)}-x^{*}\|-\|z^{\tau(k)}-x^{*}\|\leq -\frac{\alpha_{\tau(k)}\tau}{1-\eta_{\tau(k)}}\|z^{\tau(k)}-x^{*}\|+\frac{\alpha_{\tau(k)}\mu}{1-\eta_{\tau(k)}}\|F(x^{*})\|. $$
(22)

Since \(\lim _{k\to \infty }\alpha _{k}=0\), \(\lim _{k\to \infty }\eta _{k}=\eta <1\) and \(\{z^{k}\}\) is bounded, (22) ensures that

$$ \underset{k\to\infty}{\lim}(\|x^{\tau(k)}-x^{*}\|-\|z^{\tau(k)}-x^{*}\|)=0. $$
(23)

Therefore, the boundedness of {x k}, {z k}, and (23) guarantee that

$$ \underset{k\to\infty}{\lim}(\|x^{\tau(k)}-x^{*}\|^{2}-\|z^{\tau(k)}-x^{*}\|^{2})=0. $$
(24)

From 1−λ τ(k) γ≥1−b γ>0, (5), and (24), it follows that

$$ \underset{k\to\infty}{\lim}\|x^{\tau(k)}-y^{\tau(k)}\|=0,\qquad\underset{k\to\infty}{\lim}\|y^{\tau(k)}-z^{\tau(k)}\|=0. $$
(25)

Note that

$$\|x^{\tau(k)}-z^{\tau(k)}\|\leq\|x^{\tau(k)}-y^{\tau(k)}\|+\|y^{\tau(k)}-z^{\tau(k)}\|. $$

This, together with (25), implies

$$ \underset{k\to\infty}{\lim}\|x^{\tau(k)}-z^{\tau(k)}\|=0. $$
(26)

On the other hand, Lemma 2 guarantees that

$$\begin{array}{@{}rcl@{}} \|x^{\tau(k)+1}-x^{\tau(k)}\|&=&\|\eta_{\tau(k)}x^{\tau(k)}+(1-\eta_{\tau(k)})z^{\tau(k)}-\alpha_{\tau(k)}\mu F(z^{\tau(k)})-x^{\tau(k)}\|\\ &=&\|(1-\eta_{\tau(k)})z^{\tau(k)}-\alpha_{\tau(k)}\mu F(z^{\tau(k)})\\ &&-[(1-\eta_{\tau(k)})x^{\tau(k)}-\alpha_{\tau(k)}\mu F(x^{\tau(k)})]-\alpha_{\tau(k)}\mu F(x^{\tau(k)})\|\\ &\leq&\|(1-\eta_{\tau(k)})z^{\tau(k)}-\alpha_{\tau(k)}\mu F(z^{\tau(k)})\\ &&-[(1-\eta_{\tau(k)})x^{\tau(k)}-\alpha_{\tau(k)}\mu F(x^{\tau(k)})]\|+\alpha_{\tau(k)}\mu\|F(x^{\tau(k)})\|\\ &\leq& (1-\eta_{\tau(k)}-\alpha_{\tau(k)}\tau)\|z^{\tau(k)}-x^{\tau(k)}\|+\alpha_{\tau(k)}\mu\|F(x^{\tau(k)})\|\\ &\leq&\|z^{\tau(k)}-x^{\tau(k)}\|+\alpha_{\tau(k)}\mu\|F(x^{\tau(k)})\|. \end{array} $$

Then, it follows from the above inequality, (26), \(\lim _{k\to \infty }\alpha _{k}=0\) and the boundedness of {F(x τ(k))} that

$$\underset{k\to\infty}{\lim}\|x^{\tau(k)+1}-x^{\tau(k)}\|=0. $$

As proved in the first case, we can conclude that

$$ \underset{k\longrightarrow\infty}{\limsup}\langle F(x^{*}), x^{*}-x^{\tau(k)+1}\rangle=\underset{k\longrightarrow\infty}{\limsup}\langle F(x^{*}), x^{*}-x^{\tau(k)}\rangle\leq 0. $$
(27)

It follows from (12) and (20) that

$$\begin{array}{@{}rcl@{}} \|x^{\tau(k)+1}-x^{*}\|^{2}&\leq&(1-\alpha_{\tau(k)}\tau)\|x^{\tau(k)}-x^{*}\|^{2}+2\alpha_{\tau(k)}\mu\langle F(x^{*}), x^{*}-x^{\tau(k)+1}\rangle\\ &\leq&(1-\alpha_{\tau(k)}\tau)\|x^{\tau(k)+1}-x^{*}\|^{2}+2\alpha_{\tau(k)}\mu\langle F(x^{*}), x^{*}-x^{\tau(k)+1}\rangle. \end{array} $$

Hence, by (20), we have

$$ \|x^{k}-x^{*}\|^{2}\leq\|x^{\tau(k)+1}-x^{*}\|^{2}\leq\frac{2\mu}{\tau}\langle F(x^{*}), x^{*}-x^{\tau(k)+1}\rangle. $$
(28)

Taking the limit in (28) as \(k\longrightarrow \infty \), and using (27), we obtain

$$\underset{k\longrightarrow\infty}{\limsup}\|x^{k}-x^{*}\|^{2}\leq 0. $$

Therefore, x kx as \(k\to \infty \). The proof is complete. □

Remark 2

By the continuity of G, condition (A3) is automatically satisfied in finite-dimensional spaces. In infinite-dimensional spaces, condition (A3) can be dropped if G is monotone, because (A3) is used only to show that \(\overline {x}\in \text {Sol}(C, G)\). We now prove \(\overline {x}\in \text {Sol}(C, G)\) using the monotonicity of G instead.

Proof

For all xC, from (6), we have

$$\langle x^{k_{i}}-\lambda_{k_{i}}G(x^{k_{i}})-y^{k_{i}}, x-y^{k_{i}}\rangle\leq 0\quad\forall i\in\mathbb{N}. $$

This, together with the monotonicity of G and the Cauchy–Schwarz inequality, implies that

$$\begin{array}{@{}rcl@{}} \langle G(x), x^{k_{i}}-x\rangle&\leq&\langle G(x^{k_{i}}), x^{k_{i}}-x\rangle\\ &=&\langle G(x^{k_{i}}), x^{k_{i}}-y^{k_{i}}\rangle+\tfrac{1}{\lambda_{k_{i}}}\langle\lambda_{k_{i}} G(x^{k_{i}}), y^{k_{i}}-x\rangle\\ &=&\langle G(x^{k_{i}}), x^{k_{i}}-y^{k_{i}}\rangle+\tfrac{1}{\lambda_{k_{i}}}\langle x^{k_{i}}-y^{k_{i}}, y^{k_{i}}-x\rangle\\ &&+\tfrac{1}{\lambda_{k_{i}}}\langle x^{k_{i}}-\lambda_{k_{i}}G(x^{k_{i}})-y^{k_{i}}, x-y^{k_{i}}\rangle\\ &\leq&\langle G(x^{k_{i}}), x^{k_{i}}-y^{k_{i}}\rangle+\tfrac{1}{\lambda_{k_{i}}}\langle x^{k_{i}}-y^{k_{i}}, y^{k_{i}}-x\rangle\\ &\leq&\|G(x^{k_{i}})\| \|x^{k_{i}}-y^{k_{i}}\|+\tfrac{1}{\lambda_{k_{i}}}\|x^{k_{i}}-y^{k_{i}}\| \|y^{k_{i}}-x\|\\ &\leq&\|G(x^{k_{i}})\| \|x^{k_{i}}-y^{k_{i}}\|+\tfrac{1}{a}\|x^{k_{i}}-y^{k_{i}}\| \|y^{k_{i}}-x\|. \end{array} $$
(29)

Taking the limit in (29) as \(i\to \infty \), using the boundedness of \(\{G(x^{k_{i}})\}\) and \(\{y^{k_{i}}\}\), and recalling that \(\|x^{k_{i}}-y^{k_{i}}\|\to 0\) and \(x^{k_{i}}\rightharpoonup \overline {x}\), we obtain \(\langle G(x), \overline {x}-x\rangle \leq 0\) and hence,

$$ \langle G(x), x-\overline{x}\rangle\geq 0\quad\forall x\in C. $$
(30)

Let \(x_{t}:=(1-t)\overline {x}+tx\in C\), t∈[0,1]. From (30), we have

$$0\leq\langle G(x_{t}), x_{t}-\overline{x}\rangle=\langle G(x_{t}), (1-t)\overline{x}+tx-\overline{x}\rangle=t\langle G(x_{t}), x-\overline{x}\rangle. $$

Then, for all 0<t≤1

$$\begin{array}{@{}rcl@{}} 0&\leq&\langle G(x_{t}), x-\overline{x}\rangle\\ &=&\langle G(x_{t})-G(\overline{x}), x-\overline{x}\rangle+\langle G(\overline{x}), x-\overline{x}\rangle\\ &\leq&\gamma\|x_{t}-\overline{x}\| \|x-\overline{x}\|+\langle G(\overline{x}), x-\overline{x}\rangle\\ &=&\gamma\|(1-t)\overline{x}+tx-\overline{x}\| \|x-\overline{x}\|+\langle G(\overline{x}), x-\overline{x}\rangle\\ &=&\gamma t\|x-\overline{x}\|^{2}+\langle G(\overline{x}), x-\overline{x}\rangle. \end{array} $$

Taking the limit as t→0+, we have \(\langle G(\overline {x}), x-\overline {x}\rangle \geq 0\), i.e., \(\overline {x}\in \text {Sol}(C, G)\). □

4 Applications

We consider the special case when F(x) = x−x 0 with x 0 a given vector in H. This mapping F is 1-Lipschitz continuous and 1-strongly monotone on H, and in this situation, by choosing μ=1 and η k =0 for all \(k\in \mathbb {N}\), the bilevel variational inequality (1) becomes the problem of finding the projection of x 0 onto the solution set of the variational inequality V I(C,G). When x 0=0, this projection is the minimum-norm solution in Sol(C,G). Finding such a solution is of interest because of its applications. A typical example is the least-squares solution to the constrained linear inverse problem (see, for instance, [28]).
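Indeed, with η k =0, μ=1, and F(z k )=z k −x 0, the update of Algorithm 1 simplifies to

$$x^{k+1}=z^{k}-\alpha_{k}(z^{k}-x^{0})=\alpha_{k}x^{0}+(1-\alpha_{k})P_{T_{k}}\big(x^{k}-\lambda_{k}G(y^{k})\big), $$

which is precisely the Halpern-type iteration (31) appearing in Corollary 1 below.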

Corollary 1

Let G:H→H be pseudomonotone and L-Lipschitz continuous on H. Suppose that Sol(C,G)≠∅ and \(\limsup _{k\to \infty }\langle G(x^{k}), y-y^{k}\rangle \leq \langle G(\overline {x}), y-\overline {y}\rangle \) for all sequences {x k } and {y k } converging weakly to \(\overline {x}\) and \(\overline {y}\) , respectively. Let {x k } be the sequence generated by

$$ \left\{\begin{array}{l} x^{0}\in H,\\ y^{k}=P_{C}(x^{k}-\lambda_{k} G(x^{k})),\\ T_{k}=\{\omega\in H: \langle x^{k}-\lambda_{k} G(x^{k})-y^{k}, \omega-y^{k}\rangle\leq 0\},\\ x^{k+1}=\alpha_{k}x^{0}+(1-\alpha_{k})P_{T_{k}}(x^{k}-\lambda_{k}G(y^{k}))~ \forall k\geq 0, \end{array}\right. $$
(31)

where \(\{\alpha _{k}\}\subset (0, 1)\) is a sequence, \(\{\lambda _{k}\}\subset [a, b]\) for some \(a, b\in (0, \frac {1}{L})\) . Assume \(\lim _{k\to \infty }\alpha _{k}=0\) and \({\sum }_{k=0}^{\infty }\alpha _{k}=\infty \) . Then, the sequence {x k} generated by (31) converges strongly to P Sol(C,G) (x 0).

From Remark 2 and Corollary 1, we get the following corollary, which includes the corresponding result of [22, Theorem 3.1] as a special case.

Corollary 2

Let G:H→H be monotone and L-Lipschitz continuous on H such that Sol(C,G) is nonempty. Let {x k} be the sequence generated by

$$ \left\{\begin{array}{l} x^{0}\in H,\\ y^{k}=P_{C}(x^{k}-\lambda_{k} G(x^{k})),\\ T_{k}=\{\omega\in H: \langle x^{k}-\lambda_{k} G(x^{k})-y^{k}, \omega-y^{k}\rangle\leq 0\},\\ x^{k+1}=\alpha_{k}x^{0}+(1-\alpha_{k})P_{T_{k}}(x^{k}-\lambda_{k}G(y^{k}))~ \forall k\geq 0, \end{array}\right. $$
(32)

where \(\{\alpha _{k}\}\subset (0, 1)\) is a sequence, \(\{\lambda _{k}\}\subset [a, b]\) for some \(a, b\in (0, \frac {1}{L})\) . Assume \(\lim _{k\to \infty }\alpha _{k}=0\) and \({\sum }_{k=0}^{\infty }\alpha _{k}=\infty \) . Then, the sequence {x k} generated by (32) converges strongly to P Sol(C,G) (x 0).

5 Numerical Illustrations

Example 1

Let \(\mathcal {H}=\mathbb {R}^{n}\) with the norm \(\|x\|=\big ({\sum }_{k=1}^{n}{x_{k}^{2}}\big )^{\frac {1}{2}}\) and the inner product \(\langle x, y\rangle ={\sum }_{k=1}^{n}x_{k}y_{k}\), where \(x=(x_{1}, x_{2},\ldots , x_{n})^{T}\in \mathbb {R}^{n}\), \(y=(y_{1}, y_{2},\ldots ,y_{n})^{T}\in \mathbb {R}^{n}\). Consider the affine mapping \(G: \mathbb {R}^{n}\longrightarrow \mathbb {R}^{n}\)

$$G(x)=Mx+q, $$

where M is an n×n positive semidefinite matrix and \(q\in \mathbb {R}^{n}\). Then G is pseudomonotone and ∥M∥-Lipschitz continuous on \(\mathbb {R}^{n}\). Indeed, for every \(x, y\in \mathbb {R}^{n}\), since M is positive semidefinite, we have

$$\langle G(x)-G(y), x-y\rangle=\langle M(x-y), x-y\rangle\geq 0. $$

Hence, G is monotone and consequently pseudomonotone on \(\mathbb {R}^{n}\). To see the Lipschitz continuity on \(\mathbb {R}^{n}\) of G, we observe that for every \(x, y\in \mathbb {R}^{n}\)

$$\|G(x)-G(y)\|=\|M(x-y)\|\leq\|M\| \|x-y\|, $$

where ∥M∥ denotes the spectral norm of the matrix M, i.e., the square root of the largest eigenvalue of the positive-semidefinite matrix M T M. It then follows that G is ∥M∥-Lipschitz continuous on \(\mathbb {R}^{n}\). Note that for any real matrix A, the product A T A is positive semidefinite. Now, to illustrate Algorithm 1, the mapping \(G: \mathbb {R}^{8}\longrightarrow \mathbb {R}^{8}\) is defined by

$$G(x)=Mx+q, $$

where M = A T A, with the matrix A and the vector q (randomly generated) given by

$$A= \left( \begin{array}{cccccccc} 0 & 0 & 1 & 1 & -1 & 0 & -1 & -2\\ 0 & 0 & -1 & 1 & 0 & 0 & 1 & 0\\ 0 & -1 & -1 & 1 & -2 & -1 & -1 & 1\\ 0 & 0 & -1 & -2 & 0 & -2 & -2 & 0\\ -2 & 0 & 0 & 0 & -2 & 0 & 0 & 1\\ 0 & 0 & -1 & -1 & 0 & -2 & -2 & -1\\ -1 & 0 & 0 & -2 & -1 & 0 & -1 & 0\\ 0 & -1 & -1 & -2 & 1 & -1 & -2 & -1 \end{array}\right)\Rightarrow M=\left( \begin{array}{cccccccc} 5 & 0 & 0 & 2 & 5 & 0 & 1 & -2\\ 0 & 2 & 2 & 1 & 1 & 2 & 3 & 0\\ 0 & 2 & 6 & 4 & 0 & 6 & 5 & -1\\ 2 & 1 & 4 & 16 & -3 & 7 & 11 & 2\\ 5 & 1 & 0 & -3 & 11 & 1 & 2 & -3\\ 0 & 2 & 6 & 7 & 1 & 10 & 11 & 2\\ 1 & 3 & 5 & 11 & 2 & 11 & 16 & 5\\ -2 & 0 & -1 & 2 & -3 & 2 & 5 & 8 \end{array}\right), $$
$$q=(3, 3, -2, 3, 1, -2, -1,1)^{T}. $$

Then G is pseudomonotone and Lipschitz continuous on \(\mathbb {R}^{8}\) with γ=∥M∥=37.5747. Let \(F: \mathbb {R}^{8}\longrightarrow \mathbb {R}^{8}\) be defined by

$$F(x)=(2x_{1}, 4x_{2}, 3x_{3}, 2x_{4}, 6x_{5}, 4x_{6}, 5x_{7}, 3x_{8}) $$

for all \(x=(x_{1}, x_{2},\ldots , x_{8})^{T}\in \mathbb {R}^{8}\). It is easy to see that F is strongly monotone with β=2 and Lipschitz continuous with L=6 on \(\mathbb {R}^{8}\).

Let \(C=\{x\in \mathbb {R}^{8}: \|x\|\leq 2\}\). Fix \(x^{0}=(6, 10, 4, -8, 7, 9, -6, 5)^{T}\in \mathbb {R}^{8}\) (randomly chosen) as a starting point for Algorithm 1. With \(\alpha _{k}=\frac {1}{k+3}\), \(\eta _{k}=\frac {k+1}{2(k+3)}\), λ k =0.01, μ=0.1, we obtain the iterates reported in Table 1 and an approximate solution

$$\begin{array}{@{}rcl@{}} x^{2673}&=&(-0.44954, -1.53672, 0.50317, -0.55589, -0.13462, -0.14708,\\ &&\quad 0.78948, -0.45707)^{T} \end{array} $$

with the tolerance \(\epsilon =10^{-6}\).

Table 1 Numerical results for Example 1
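For reproducibility, the data of this example can be set up as follows. The sketch below is our own Python transcription (the paper's experiments were run in MATLAB), reusing the algorithm1 helper sketched after Remark 1, with the understanding that here λ k =0.01 and μ=0.1 are used in place of the defaults of that sketch.

```python
import numpy as np

A = np.array([[ 0,  0,  1,  1, -1,  0, -1, -2],
              [ 0,  0, -1,  1,  0,  0,  1,  0],
              [ 0, -1, -1,  1, -2, -1, -1,  1],
              [ 0,  0, -1, -2,  0, -2, -2,  0],
              [-2,  0,  0,  0, -2,  0,  0,  1],
              [ 0,  0, -1, -1,  0, -2, -2, -1],
              [-1,  0,  0, -2, -1,  0, -1,  0],
              [ 0, -1, -1, -2,  1, -1, -2, -1]], dtype=float)
M = A.T @ A                                   # positive semidefinite
q = np.array([3, 3, -2, 3, 1, -2, -1, 1], dtype=float)
d = np.array([2, 4, 3, 2, 6, 4, 5, 3], dtype=float)

G = lambda x: M @ x + q                       # pseudomonotone, gamma = ||M||
F = lambda x: d * x                           # beta = 2, L = 6
proj_ball = lambda x: x if np.linalg.norm(x) <= 2 else 2.0 * x / np.linalg.norm(x)

x0 = np.array([6, 10, 4, -8, 7, 9, -6, 5], dtype=float)
gamma = np.linalg.norm(M, 2)                  # spectral norm, approx 37.5747
# x_approx = algorithm1(F, G, proj_ball, x0, gamma, beta=2.0, L=6.0, tol=1e-6)
```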

Example 2

Let

$$C=\{(x_{1}, x_{2},\ldots,x_{6})^{T}\in\mathbb{R}^{6}: 2x_{1}+3x_{2}-4x_{3}+x_{4}+2x_{5}-x_{6}\geq -12\} $$

and \(G: \mathbb {R}^{6}\longrightarrow \mathbb {R}^{6}\) be defined by \(G(x)=(\sin \|x\|+2)b\) for all \(x\in \mathbb {R}^{6}\), where \(b=(2, 3, -4, 1, 2, -1)^{T}\in \mathbb {R}^{6}\). It is easy to verify that G is pseudomonotone on \(\mathbb {R}^{6}\). Furthermore, for all \(x, y\in \mathbb {R}^{6}\), we have

$$\begin{array}{@{}rcl@{}} \|G(x)-G(y)\|&=&\|b\| |\sin\|x\|-\sin\|y\||\\ &=&\sqrt{35}|\sin\|x\|-\sin\|y\||\\ &\leq&\sqrt{35}|\|x\|-\|y\||\\ &\leq&\sqrt{35}\|x-y\|. \end{array} $$

So G is \(\sqrt {35}\)-Lipschitz continuous on \(\mathbb {R}^{6}\). It is easy to see that the solution set Sol(C,G) of V I(C,G) is given by

$$\begin{array}{@{}rcl@{}} \text{Sol}(C, G)&=&\{x=(x_{1}, x_{2},\ldots,x_{6})^{T}\in\mathbb{R}^{6}: 2x_{1}+3x_{2}-4x_{3}+x_{4}+2x_{5}-x_{6}=-12\}\\ &=&\{x=(x_{1}, x_{2},\ldots,x_{6})^{T}\in\mathbb{R}^{6}: \langle b, x\rangle+12=0\}. \end{array} $$

Select a random starting point \(x^{0}=(3, -1, 4, -3, 0, 2)^{T}\in \mathbb {R}^{6}\) for the algorithm in Corollary 1. We take \(\lambda _{k}=\frac {k+1}{10(k+11)}\) and \(\alpha _{k}=\frac {1}{k+2}\); then \(\{\alpha _{k}\}\subset (0, 1)\), \(\lim _{k\to \infty }\alpha _{k}=0\), \({\sum }_{k=0}^{\infty }\alpha _{k}=\infty \), and \(\{\lambda _{k}\}\subset [\frac {1}{110},\frac {1}{10}]\subset (0, \frac {1}{\sqrt {35}})\). With \(\epsilon =10^{-9}\), we obtain the iterates reported in Table 2; the approximate solution obtained after 28236 iterations is

$$x^{28236}=(3.34285, -0.48573, 3.31431, -2.82858, 0.34285, 1.82858)^{T}, $$

which is a good approximation to the projection of x 0 onto Sol(C,G)

$$P_{\text{Sol}(C, G)}(x^{0})=x^{0}-\frac{\langle b, x^{0}\rangle+12}{\|b\|^{2}}b=\left( \frac{117}{35}, -\frac{17}{35}, \frac{116}{35}, -\frac{99}{35}, \frac{12}{35}, \frac{64}{35}\right). $$
Table 2 Numerical results for Example 2
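The closed-form projection above is simply the projection of x 0 onto the hyperplane \(\{x: \langle b, x\rangle =-12\}\); it can be checked with a few lines of numpy (our own verification script, not part of the original experiments):

```python
import numpy as np

b = np.array([2, 3, -4, 1, 2, -1], dtype=float)
x0 = np.array([3, -1, 4, -3, 0, 2], dtype=float)
# Projection of x0 onto the hyperplane {x : <b, x> = -12}
p = x0 - (np.dot(b, x0) + 12.0) / np.dot(b, b) * b
print(p)   # (117, -17, 116, -99, 12, 64)/35, approx (3.34286, -0.48571, 3.31429, -2.82857, 0.34286, 1.82857)
```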

All iterative schemes were implemented in MATLAB R2012a and run on a laptop with an Intel(R) Core(TM) i3-3217U CPU @ 1.80 GHz and 2 GB of RAM.