1 Introduction

Let H be a real Hilbert space with inner product 〈⋅,⋅〉 and induced norm ∥⋅∥. Let C be a nonempty closed convex subset of H and let A : H → H be an operator.

The variational inequality problem (VIP) for A on C is to find a point x∗ ∈ C such that

$$ \langle Ax^{*},x-x^{*}\rangle \geq 0, \forall x\in C. $$
(1)

We denote by VI(C,A) the solution set of the problem (VIP) (1). Finding solutions of the problem (1) is a fundamental problem in optimization theory and, as such, it has received a great deal of attention. There are two general approaches to studying variational inequality problems: the regularization method and the projection method. Along these directions, many algorithms have been proposed for solving the problem (1) (see, for example, [13,14,15,16, 26, 31, 34, 35, 43, 46, 53,54,55]).

The basic idea consists of extending the projected gradient method for solving the problem of minimizing f(x) subject to x ∈ C, given by

$$ x_{n+1}=P_{C}(x_{n}-\alpha_{n} \nabla f(x_{n})), \forall n\geq 0, $$
(2)

where {αn} is a positive real sequence satisfying certain conditions and PC is the metric projection onto C.

For the convergence properties of this method in the case where \(f: \mathbb {R}^{2}\to \mathbb {R}\) is a convex and differentiable function, one may see [1]. An immediate extension of the method (2) to the problem (VIP) (1) is obtained by substituting the operator A for the gradient, so that we generate a sequence {xn} in the following manner:

$$ x_{n+1}=P_{C}(x_{n}-\alpha_{n} Ax_{n}), \forall n\geq 0. $$

However, the convergence of this method requires the rather strong assumption that the operator is strongly monotone or inverse strongly monotone (see, for example, [51]).

To avoid this strong assumption, Korpelevich [27] introduced the extragradient method (EGM) for solving saddle point problems and, after that, this method was further extended to variational inequality problems in both Euclidean spaces and Hilbert spaces. The convergence of the extragradient method only requires that the operator A is monotone and L-Lipschitz continuous. More precisely, the extragradient method is of the form:

$$ \left\{\begin{array}{l} y_{n}=P_{C}(x_{n}-\lambda Ax_{n}),\\ x_{n+1}=P_{C}(x_{n}-\lambda Ay_{n}), \forall n\geq 0, \end{array}\right. $$
(3)

where λ ∈ (0,1/L) and PC denotes the metric projection from H onto C.

If the solution set VI(C,A) is nonempty, then the sequence {xn} generated by the process (3) converges weakly to an element in VI(C,A).
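For concreteness, the following is a minimal Python sketch of the iteration (3), assuming for illustration that C is the Euclidean unit ball (so that PC has the closed form x/max{1,∥x∥}) and that A is a given monotone L-Lipschitz operator; the skew-symmetric test operator at the end is a hypothetical example, not one taken from this paper.

```python
import numpy as np

def proj_unit_ball(x):
    # Closed-form metric projection onto the Euclidean unit ball.
    return x / max(1.0, np.linalg.norm(x))

def extragradient(A, x0, lam, tol=1e-6, max_iter=10_000):
    # Korpelevich's method (3): two projections per iteration, lam in (0, 1/L).
    x = x0
    for _ in range(max_iter):
        y = proj_unit_ball(x - lam * A(x))       # extrapolation step
        x_next = proj_unit_ball(x - lam * A(y))  # correction step
        if np.linalg.norm(x_next - x) < tol:
            break
        x = x_next
    return x

# Example: A(x) = Mx with M skew-symmetric, hence monotone with L = ||M|| = 1.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_star = extragradient(lambda x: M @ x, x0=np.ones(2), lam=0.4)
```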

In recent years, the (EGM) (3) has received great attention from many authors, who have improved it in various ways (see, for example, [13,14,15,16,17, 29, 34, 35, 38, 41, 47,48,49] and the references therein).

In fact, the (EGM) (3) needs to calculate two projections onto the closed convex set C in each iteration. If C is a general closed convex set, these projections may have no closed form and must be computed by an inner iterative procedure, which can seriously affect the efficiency of the algorithm.

Several methods have been proposed to overcome this drawback. The first one is the subgradient extragradient method proposed by Censor et al. [14], in which the second projection onto C is replaced by a projection onto a specific constructible half-space. Their algorithm is of the form:

$$ \left\{\begin{array}{l} y_{n}=P_{C}(x_{n}-\lambda Ax_{n}),\\ T_{n}=\{w\in H|\langle x_{n}-\lambda Ax_{n}-y_{n},w-y_{n}\rangle \le 0\},\\ x_{n+1}=P_{T_{n}}(x_{n}-\lambda Ay_{n}), \forall n\geq 0, \end{array}\right. $$

where λ ∈ (0,1/L).
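A key point is that the projection onto the half-space Tn has a closed form, so no inner iterative solver is needed. A minimal sketch of one step (proj_C and A are assumed given):

```python
import numpy as np

def proj_halfspace(w, a, y):
    # Projection onto T = {w : <a, w - y> <= 0}; closed form.
    s = a @ (w - y)
    return w if s <= 0 else w - (s / (a @ a)) * a

def subgradient_extragradient_step(A, x, lam, proj_C):
    u = x - lam * A(x)
    y = proj_C(u)
    a = u - y                 # outward normal of the half-space T_n
    return proj_halfspace(x - lam * A(y), a, y)
```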

The second one is the method proposed by Tseng in [49]. Tseng’s method is of the form:

$$ \left\{\begin{array}{l} y_{n}=P_{C}(x_{n}-\lambda Ax_{n}),\\ x_{n+1}=y_{n}-\lambda (Ay_{n}-Ax_{n}), \forall n\geq 0, \end{array}\right. $$

where λ ∈ (0,1/L). Recently, Tseng's extragradient method for solving the problem (VIP) (1) has received great attention from many authors (see, for example, [7, 43, 50] and the references therein).
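Tseng's update thus needs only one projection but two evaluations of A per iteration. One step, in the same sketch style as above (proj_C and A assumed given):

```python
def tseng_step(A, x, lam, proj_C):
    # Forward-backward-forward step: one projection, two evaluations of A.
    y = proj_C(x - lam * A(x))
    return y - lam * (A(y) - A(x))
```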

The third one is the projection and contraction method studied by some authors [25, 42]. The projection and contraction method is of the form:

$$ \left\{\begin{array}{l} x_{0}\in H,\\ y_{n}=P_{C}(x_{n}-\lambda Ax_{n}),\\ d(x_{n},y_{n})=(x_{n}-y_{n})-\lambda(Ax_{n}-Ay_{n}),\\ x_{n+1}=x_{n}-\gamma \beta_{n} d(x_{n},y_{n}), \forall n\geq 0, \end{array}\right. $$

where γ ∈ (0,2), λ ∈ (0,1/L), and

$$ \beta_{n}:=\frac{\phi(x_{n},y_{n})}{\|d(x_{n},y_{n})\|^{2}}, \quad \phi(x_{n},y_{n}):=\langle x_{n}-y_{n},d(x_{n},y_{n})\rangle, \forall n\geq 0. $$
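Here the step length βn adapts to the local geometry instead of being fixed. A minimal sketch of one projection and contraction step (again with A and proj_C assumed given):

```python
import numpy as np

def pcm_step(A, x, lam, gamma, proj_C):
    # One projection-and-contraction step; gamma in (0, 2), lam in (0, 1/L).
    y = proj_C(x - lam * A(x))
    d = (x - y) - lam * (A(x) - A(y))
    if np.allclose(d, 0.0):
        return y                      # x = y here, so y solves the VIP
    beta = ((x - y) @ d) / (d @ d)    # beta_n = phi(x_n, y_n) / ||d||^2
    return x - gamma * beta * d
```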

In recent years, the projection and contraction method has received great attention from many authors, who have improved it in various ways (see, for example, [12, 19,20,21] and the references therein).

Note that each of the three methods above needs to calculate only one projection onto C in each iteration, which can considerably improve the performance of the algorithms.

Now, let us mention inertial-type algorithms, which are based upon a discrete version of a second-order dissipative dynamical system [4, 5] and can be regarded as a procedure for speeding up convergence (see [3, 33, 39]).

In 2001, Alvarez and Attouch [3] applied the inertial technique to obtain an inertial proximal method for finding a zero of a maximal monotone operator, which is as follows: for any xn− 1, xn ∈ H and two parameters 𝜃n ∈ [0,1) and λn > 0, find xn+ 1 ∈ H such that

$$ 0\in \lambda_{n} A(x_{n+1})+x_{n+1}-x_{n}-\theta_{n}(x_{n}-x_{n-1}), \forall n\geq 0, $$

which can be written equivalently as follows:

$$ x_{n+1}=J_{\lambda_{n}}^{A}(x_{n}+\theta_{n}(x_{n}-x_{n-1})), \forall n\geq 0, $$

where \(J_{\lambda _{n}}^{A}\) is the resolvent of A with parameter λn and the inertia is induced by the term 𝜃n(xn − xn− 1).
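As a concrete instance, if A = ∂g is the subdifferential of g(x) = ∥x∥1, then the resolvent \(J_{\lambda }^{A}\) is the componentwise soft-thresholding operator, and the inertial proximal iteration can be sketched as follows (the values of λ and 𝜃 are illustrative choices):

```python
import numpy as np

def soft_threshold(x, lam):
    # Resolvent J_lam of A = subdifferential of the l1-norm.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def inertial_proximal(x0, x1, lam=0.1, theta=0.3, n_iter=200):
    x_prev, x = x0, x1
    for _ in range(n_iter):
        w = x + theta * (x - x_prev)           # inertial extrapolation
        x_prev, x = x, soft_threshold(w, lam)  # resolvent (proximal) step
    return x  # approaches the unique zero of A, namely x = 0

x = inertial_proximal(np.ones(3), np.ones(3))
```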

Recently, the inertial methods have been studied by several authors (see [2, 3, 6,7,8,9,10,11, 18, 22, 32, 36, 37, 44, 45]).

In 2015, Bot and Csetnek [11] introduced the so-called inertial hybrid proximal-extragradient algorithm, which combines the inertial-type algorithm with the hybrid proximal-extragradient method for a maximally monotone operator.

Very recently, Dong et al. [22] combined the projection and contraction method with the inertial technique to obtain the inertial projection and contraction method (shortly, IPCM), which is of the form:

$$ \left\{\begin{array}{l} x_{0},x_{1}\in H,\\ w_{n}=x_{n}+\alpha_{n}(x_{n}-x_{n-1}),\\ y_{n}=P_{C}(w_{n}-\lambda Aw_{n}),\\ d(w_{n},y_{n})=(w_{n}-y_{n})-\lambda(Aw_{n}-Ay_{n}),\\ x_{n+1}=w_{n}-\gamma \beta_{n} d(w_{n},y_{n}), \forall n\geq 0, \end{array}\right. $$

where γ ∈ (0,2),λ ∈ (0,1/L), and

$$ \beta_{n}:= \left\{\begin{array}{ll} \frac{\phi(w_{n},y_{n})}{\|d(w_{n},y_{n})\|^{2}}, & \text{ if } d(w_{n},y_{n})\ne 0\\ 0, &\text{ if } d(w_{n},y_{n})= 0, \end{array}\right. $$

where ϕ(wn,yn) := 〈wnyn,d(wn,yn)〉. Under appropriate conditions, they proved that the sequence {xn} converges weakly to an element of VI(C,A).
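Putting the pieces together, a minimal sketch of the IPCM loop reads as follows (the inertial parameter is passed as a callable alpha(n); A and proj_C are assumed given):

```python
import numpy as np

def ipcm(A, x0, x1, lam, gamma, alpha, proj_C, n_iter=1000):
    # Inertial projection and contraction method: a PCM step applied
    # at the extrapolated point w_n instead of x_n.
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        w = x + alpha(n) * (x - x_prev)
        y = proj_C(w - lam * A(w))
        d = (w - y) - lam * (A(w) - A(y))
        if np.allclose(d, 0.0):
            return y
        beta = ((w - y) @ d) / (d @ d)
        x_prev, x = x, w - gamma * beta * d
    return x
```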

Motivated and inspired by the works mentioned above, in this paper we study the strong convergence of an algorithm for solving the classical variational inequality problem with a Lipschitz continuous and monotone mapping in real Hilbert spaces. The algorithm combines the inertial projection and contraction method with the viscosity method. Under several appropriate conditions imposed on the parameters, we prove that the proposed algorithm converges strongly to a point in VI(C,A). Finally, we present several numerical experiments to support the convergence theorems. The numerical illustrations show that the proposed algorithm with inertial effects converges faster than the original algorithm without inertial effects.

This paper is organized as follows: In Section 2, we recall some definitions and preliminary results for further use. Section 3 deals with analyzing the convergence of the proposed algorithm. Finally, in Section 4, we perform several numerical examples to illustrate the computational performance of our algorithm.

2 Preliminaries

Let H be a real Hilbert space and C be a nonempty closed convex subset of H. The weak convergence of {xn} to x is denoted by \(x_{n}\rightharpoonup x\) as \(n\to \infty \), while the strong convergence of {xn} to x is written as xn → x as \(n\to \infty \).

For all x, y ∈ H, we have

$$ \|x+y\|^{2}=\|x\|^{2}+2\langle x,y\rangle +\|y\|^{2} $$

and

$$ \|x+y\|^{2}\le \|x\|^{2}+2\langle y,x+y\rangle. $$
(4)

For each x ∈ H, there exists a unique nearest point in C, denoted by PCx, such that

$$ \|x-P_{C}x\|\le \|x-y\|, \forall y\in C. $$

PC is called the metric projection of H onto C. It is known that PC is nonexpansive.
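Although PC is defined only implicitly, it admits a closed form for many standard sets. As an illustration, here are sketches for the unit ball and a box (both reappear in the examples of Section 4):

```python
import numpy as np

def proj_ball(x, r=1.0):
    # P_C for C = {x : ||x|| <= r}: rescale x when it lies outside.
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def proj_box(x, lo=-5.0, hi=5.0):
    # P_C for C = [lo, hi]^m: clip each component.
    return np.clip(x, lo, hi)
```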

Lemma 1

[23] Let C be a nonempty closed convex subset of a real Hilbert space H. For any \(x\in H\) and \(z\in C\), we have

$$ z=P_{C}x \Longleftrightarrow \langle x-z,z-y\rangle \geq 0, \forall y\in C. $$

Lemma 2

[23] Let C be a closed and convex subset of a real Hilbert space H and let \(x\in H\). Then we have the following:

  1. (1)

\(\|P_{C}x-P_{C}y\|^{2}\le \langle P_{C}x-P_{C}y,x-y\rangle\) for all \(y\in H\);

  2. (2)

\(\|P_{C}x-y\|^{2}\le \|x-y\|^{2}-\|x-P_{C}x\|^{2}\) for all \(y\in C\).

For some properties of the metric projection, refer to Section 3 in [23].

Definition 1

Let T : H → H be an operator. Then

  1. (1)

    T is said to be L-Lipschitz continuous with L > 0 if

    $$ \|Tx-Ty\|\le L \|x-y\|, \forall x,y \in H; $$
  2. (2)

    T is said to be monotone if

    $$ \langle Tx-Ty,x-y \rangle \geq 0, \forall x,y \in H. $$

Lemma 3

[30] Let {an} be a sequence of nonnegative real numbers such that there exists a subsequence \(\{a_{n_{j}}\}\) of {an} such that \(a_{n_{j}}<a_{n_{j}+1}\) for each \(j\in \mathbb {N}\). Then there exists a nondecreasing sequence {mk} of \(\mathbb {N}\) such that \(\lim _{k\to \infty }m_{k}=\infty \) and the following properties are satisfied for all (sufficiently large) numbers \(k\in \mathbb {N}\):

$$ a_{m_{k}}\le a_{m_{k}+1}, \quad a_{k}\le a_{m_{k}+1}. $$

In fact, mk is the largest number n in the set {1,2,⋯ ,k} such that an < an+ 1.

Lemma 4

Let {an} be a sequence of nonnegative real numbers such that

$$ a_{n+1}\le (1-\alpha_{n})a_{n}+\alpha_{n} b_{n}, \forall n\geq 0, $$

where {αn}⊂ (0,1) and {bn} are sequences such that

  1. (a)

    \(\sum \nolimits _{n=0}^{\infty } \alpha _{n}=\infty \);

  2. (b)

\(\limsup _{n\to \infty }b_{n}\le 0\).

Then \(\lim _{n\to \infty }a_{n}=0\).

Remark 1

Lemma 4 has been shown and used by several authors. For detailed proofs, see Liu [28] and Xu [52]. Furthermore, a variant of Lemma 4 has already been used by Reich in [40].

Lemma 5

[26] Let A : H → H be a monotone and L-Lipschitz continuous mapping on C. Let S = PC(I − μA), where μ > 0. If {xn} is a sequence in H satisfying \(x_{n}\rightharpoonup q\) and xn − Sxn → 0, then q ∈ VI(C,A) = Fix(S).

3 Main results

In this section, we assume that A : H → H is monotone and Lipschitz continuous on H with constant L, that VI(C,A) ≠ ∅, and that f : H → H is a contraction mapping with contraction parameter κ ∈ [0,1).

Now, we introduce the following algorithm:

Algorithm 1 Choose \(x_{0},x_{1}\in H\), \(\lambda \in (0,1/L)\), \(\alpha \geq 0\), and sequences \(\{\alpha _{n}\}\subset [0,\alpha ]\) and \(\{\beta _{n}\}\subset (0,1)\) such that \(\beta _{n}\to 0\), \({\sum }_{n=1}^{\infty }\beta _{n}=\infty \), and \(\frac {\alpha _{n}}{\beta _{n}}\|x_{n}-x_{n-1}\|\to 0\). For each n ≥ 1, compute

$$ \left\{\begin{array}{l} w_{n}=x_{n}+\alpha_{n}(x_{n}-x_{n-1}),\\ y_{n}=P_{C}(w_{n}-\lambda Aw_{n}),\\ d_{n}=(w_{n}-y_{n})-\lambda(Aw_{n}-Ay_{n}),\\ z_{n}=w_{n}-\theta_{n}d_{n}, \quad \theta_{n}=(1-\lambda L)\frac{\|w_{n}-y_{n}\|^{2}}{\|d_{n}\|^{2}},\\ x_{n+1}=\beta_{n}f(x_{n})+(1-\beta_{n})z_{n}. \end{array}\right. $$

If yn = wn or dn = 0, then stop: yn solves the problem (VIP) (1) (see Lemma 6).
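For intuition, the following is a minimal Python sketch of Algorithm 1 in \(\mathbb {R}^{m}\), under the assumptions that the Lipschitz constant L is known and a closed-form projection proj_C is available; A, the contraction f, and the parameter sequences are supplied by the caller.

```python
import numpy as np

def algorithm1(A, f, x0, x1, lam, L, alpha, beta, proj_C, n_iter=1000):
    # Viscosity-type inertial projection and contraction method (Algorithm 1).
    # alpha(n) and beta(n) return the inertial and viscosity parameters.
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        w = x + alpha(n) * (x - x_prev)           # inertial extrapolation
        y = proj_C(w - lam * A(w))
        d = (w - y) - lam * (A(w) - A(y))
        if np.allclose(w, y) or np.allclose(d, 0.0):
            return y                              # y in VI(C, A) by Lemma 6
        theta = (1.0 - lam * L) * ((w - y) @ (w - y)) / (d @ d)
        z = w - theta * d                         # contraction step
        x_prev, x = x, beta(n) * f(x) + (1.0 - beta(n)) * z
    return x
```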

Lemma 6

If yn = wn or dn = 0 in Algorithm 1, then yn ∈ VI(C,A).

Proof

Since A is L-Lipschitz continuous, we have

$$ \begin{array}{@{}rcl@{}} \|d_{n}\|&=&\|w_{n}-y_{n}-\lambda [ Aw_{n}-Ay_{n}]\|\\ &\geq& \|w_{n}-y_{n}\|-\lambda\|Aw_{n}- Ay_{n}\|\\ &\geq& \|w_{n}-y_{n}\|- \lambda L\|w_{n}-y_{n}\|\\ &=&(1-\lambda L)\|w_{n}-y_{n}\|. \end{array} $$
(8)

It is also easy to see that

$$ \|d_{n}\|\le (1+\lambda L)\|w_{n}-y_{n}\|. $$
(9)

Combining (8) and (9), we get

$$ (1-\lambda L)\|w_{n}-y_{n}\|\le \|d_{n}\|\le (1+\lambda L)\|w_{n}-y_{n}\| $$
(10)

and so wn = yn if and only if dn = 0. Therefore, if wn = yn or dn = 0, then wn = yn and we get

$$ y_{n}=P_{C}(y_{n}-\lambda Ay_{n}). $$

This implies that ynVI(C,A). This completes the proof. □

Theorem 1

The sequence {xn} generated by Algorithm 1 converges strongly to an element p ∈ VI(C,A), where p = PVI(C,A)f(p).

Proof

Claim 1. :

Let zn = wn − 𝜃ndn. Now, we show that

$$ \|z_{n}-p\|^{2}\le \|w_{n}-p\|^{2}-\|z_{n}-w_{n}\|^{2}. $$
(11)

Indeed, we have

$$ \begin{array}{@{}rcl@{}} &&\langle w_{n}-p,d_{n}\rangle\\ &=&\langle w_{n}-y_{n},d_{n}\rangle+\langle y_{n}-p,d_{n}\rangle\\ &=&\langle w_{n}-y_{n},w_{n}-y_{n}-\lambda (Aw_{n}-Ay_{n})\rangle+\langle y_{n}-p,w_{n} - y_{n}-\lambda (Aw_{n} - Ay_{n})\rangle\\ & = &\|w_{n} - y_{n}\|^{2} - \langle w_{n} - y_{n},\lambda (Aw_{n} - Ay_{n})\rangle + \langle y_{n} - p,w_{n} - y_{n} - \lambda (Aw_{n}-Ay_{n})\rangle\\ &\geq& \|w_{n}-y_{n}\|^{2}-\lambda L\|w_{n}-y_{n}\|^{2}+\langle y_{n}-p,w_{n}-y_{n}-\lambda (Aw_{n}-Ay_{n})\rangle. \end{array} $$
(12)

Note that, from yn = PC(wnλAwn), it follows that

$$ \langle y_{n}-w_{n}+\lambda Aw_{n},y_{n}-p\rangle \le 0. $$
(13)

By the monotonicity of A and pVI(C,A), we have

$$ \langle Ay_{n},y_{n}-p\rangle \geq\langle Ap,y_{n}-p\rangle \geq 0. $$
(14)

Thus, from (13) and (14), it follows that

$$ \langle w_{n}-y_{n}-\lambda (Aw_{n}-Ay_{n}),y_{n}-p\rangle \geq 0. $$
(15)

Combining (12) and (15), we obtain

$$ \langle w_{n}-p,d_{n}\rangle\geq (1-\lambda L)\|w_{n}-y_{n}\|^{2}. $$
(16)

On the other hand, we have

$$ \begin{array}{@{}rcl@{}} \|z_{n}-p\|^{2}&=&\|w_{n}-\theta_{n}d_{n}-p\|^{2}\\ &=&\|w_{n}-p\|^{2}-2\theta_{n}\langle d_{n},w_{n}-p\rangle +{\theta_{n}^{2}}\|d_{n}\|^{2}. \end{array} $$
(17)

Combining (16) and (17), we get

$$ \|z_{n}-p\|^{2}\le \|w_{n}-p\|^{2}-2\theta_{n}(1-\lambda L)\|w_{n}-y_{n}\|^{2}+{\theta_{n}^{2}}\|d_{n}\|^{2}. $$
(18)

On the other hand, since \(\theta _{n}=(1-\lambda L)\frac {\|w_{n}-y_{n}\|^{2}}{\|d_{n}\|^{2}}\), it follows that

$$ \|w_{n}-y_{n}\|^{2}=\frac{\theta_{n}\|d_{n}\|^{2}}{1-\lambda L}. $$
(19)

Substituting (19) into (18), we obtain

$$ \begin{array}{@{}rcl@{}} \|z_{n}-p\|^{2}&\le& \|w_{n}-p\|^{2}-2{\theta_{n}^{2}}\cdot\|d_{n}\|^{2}+{\theta^{2}_{n}}\|d_{n}\|^{2}\\ &=&\|w_{n}-p\|^{2}-\|\theta_{n}\cdot d_{n}\|^{2}. \end{array} $$
(20)

By the definition of the sequence {zn}, we have zn = wn − 𝜃ndn and so

$$ \theta_{n}d_{n}=w_{n}-z_{n}, $$
(21)

which implies, from (20) and (21), that

$$ \|z_{n}-p\|^{2}\le\|w_{n}-p\|^{2}-\|w_{n}-z_{n}\|^{2}. $$
Claim 2. :

The sequence {xn} is bounded. By Claim 1, we have

$$ \|z_{n}-p\|\le\|w_{n}-p\|. $$
(22)

From the definition of wn, we get

$$ \begin{array}{@{}rcl@{}} \|w_{n}-p\|&=&\|x_{n}+\alpha_{n} (x_{n}-x_{n-1})-p\| \\ &\le&\|x_{n}-p\|+\alpha_{n}\|x_{n}-x_{n-1}\|\\ &=&\|x_{n}-p\|+\beta_{n}\cdot\frac{\alpha_{n}}{\beta_{n}}\|x_{n}-x_{n-1}\|. \end{array} $$
(23)

From the conditions imposed on the parameters {αn} and {βn} in Algorithm 1, we have \(\frac {\alpha _{n}}{\beta _{n}}\|x_{n}-x_{n-1}\|\to 0\); hence, there exists a constant M1 > 0 such that

$$ \frac{\alpha_{n}}{\beta_{n}}\|x_{n}-x_{n-1}\|\le M_{1}\ \forall n. $$
(24)

Combining (22), (23), and (24), we obtain

$$ \|z_{n}-p\|\le \|w_{n}-p\|\le\|x_{n}-p\|+\beta_{n} M_{1}. $$
(25)

Since we have xn+ 1 = βnf(xn) + (1 − βn)zn, it follows that

$$ \begin{array}{@{}rcl@{}} \|x_{n+1}-p\|&=&\|\beta_{n} f(x_{n})+(1-\beta_{n})z_{n}-p\|\\ &=&\|\beta_{n} (f(x_{n})-p)+(1-\beta_{n})(z_{n}-p)\|\\ &\le& \beta_{n} \|f(x_{n})-p\|+(1-\beta_{n})\|z_{n}-p\|\\ &\le& \beta_{n} \|f(x_{n})-f(p)\|+\beta_{n} \|f(p)-p\|+(1-\beta_{n})\|z_{n}-p\|\\ &\le& \beta_{n} \kappa \|x_{n}-p\|+\beta_{n} \|f(p)-p\|+(1-\beta_{n})\|z_{n}-p\|. \end{array} $$
(26)

Substituting (25) into (26), we obtain

$$ \begin{array}{@{}rcl@{}} \|x_{n+1}-p\|&\le& (1-(1-\kappa)\beta_{n} )\|x_{n}-p\|+\beta_{n} M_{1}+\beta_{n} \|f(p)-p\|\\ &=&(1-(1-\kappa)\beta_{n} )\|x_{n}-p\|+(1-\kappa)\beta_{n} \frac{ M_{1}+ \|f(p)-p\|}{1-\kappa}\\ &\le& \max\left\{\|x_{n}-p\|, \frac{ M_{1}+ \|f(p)-p\|}{1-\kappa}\right\}\\ &\le& \cdots\\ &\le& \max\left\{\|x_{0}-p\|, \frac{ M_{1}+ \|f(p)-p\|}{1-\kappa}\right\}. \end{array} $$

This implies that {xn} is bounded; consequently, {zn}, {f(xn)}, and {wn} are bounded as well.

Claim 3. :
$$ (1-\beta_{n})\|z_{n}-w_{n}\|^{2}\le \|x_{n+1}-p\|^{2}-\|x_{n}-p\|^{2}+\beta_{n} M_{4} $$

for some M4 > 0. Indeed, we get

$$ \begin{array}{@{}rcl@{}} \|x_{n+1}-p\|^{2}&\le& \beta_{n} \|f(x_{n})-p\|^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}\\ &\le& \beta_{n} (\|f(x_{n})- f(p)\|+\|f(p)-p\|)^{2}+(1-\beta_{n})\|z_{n}-p\|^{2} \\ &\le& \beta_{n} (\kappa\|x_{n}- p\|+\|f(p)-p\|)^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}\\ &\le& \beta_{n} (\|x_{n}- p\|+\|f(p)-p\|)^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}\\ &=&\beta_{n}\|x_{n}-p\|^{2}+\beta_{n} (2\|x_{n}- p\|\cdot\|f(p)-p\|+\|f(p)-p\|^{2})\\ &&\quad+(1-\beta_{n})\|z_{n}-p\|^{2}\\ &\le&\beta_{n} \|x_{n}-p\|^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}+\beta_{n} M_{2} \end{array} $$
(27)

for some M2 > 0. Substituting (11) into (27), we get

$$ \|x_{n+1}-p\|^{2} \le \beta_{n} \|x_{n}-p\|^{2}+(1-\beta_{n})\|w_{n}-p\|^{2} -(1-\beta_{n})\|z_{n}-w_{n}\|^{2}+\beta_{n} M_{2}, $$
(28)

which implies from (25) that

$$ \begin{array}{@{}rcl@{}} \|w_{n}-p\|^{2}&\le& (\|x_{n}-p\|+\beta_{n} M_{1})^{2}\\ &=&\|x_{n}-p\|^{2}+\beta_{n}(2M_{1}\|x_{n}-p\| +\beta_{n}{M_{1}^{2}})\\ &\le&\|x_{n}-p\|^{2}+\beta_{n} M_{3} \end{array} $$
(29)

for some M3 > 0. Combining (28) and (29), we obtain

$$ \begin{array}{@{}rcl@{}} \|x_{n+1}-p\|^{2}&\le& \beta_{n} \|x_{n}-p\|^{2}+(1-\beta_{n})\|x_{n}-p\|^{2}+\beta_{n} M_{3}\\ &&\quad -(1-\beta_{n})\|z_{n}-w_{n}\|^{2}+\beta_{n} M_{2}\\ &=&\|x_{n}-p\|^{2}+\beta_{n} M_{3} -(1-\beta_{n})\|z_{n}-w_{n}\|^{2}+\beta_{n} M_{2}. \end{array} $$

This implies that

$$ (1-\beta_{n})\|z_{n}-w_{n}\|^{2}\le \|x_{n+1}-p\|^{2}-\|x_{n}-p\|^{2}+\beta_{n} M_{4}, $$

where M4 := M2 + M3.

Claim 4. :
$$ \|w_{n}-y_{n}\|^{2}\le \frac{(1+\lambda L)^{2}}{(1-\lambda L)^{2}}\|z_{n}-w_{n}\|^{2}. $$

Indeed, we have

$$ \begin{array}{@{}rcl@{}} \|w_{n}-y_{n}\|^{2}&=&\frac{\theta_{n}\|d_{n}\|^{2}}{1-\lambda L}=\frac{\|\theta_{n} d_{n}\|^{2}}{(1-\lambda L)\theta_{n}}=\frac{\|z_{n}-w_{n}\|^{2}}{(1-\lambda L)\theta_{n}}. \end{array} $$
(30)

It follows from (10) that

$$ \frac{\|w_{n}-y_{n}\|}{\|d_{n}\|}\geq \frac{1}{1+\lambda L}. $$

Therefore, we have

$$ \theta_{n}=(1-\lambda L)\frac{\|w_{n}-y_{n}\|^{2}}{\|d_{n}\|^{2}}\geq \frac{1-\lambda L}{(1+\lambda L)^{2}}, $$

that is,

$$ \frac{1}{\theta_{n}} \le \frac{(1+\lambda L)^{2}} {1-\lambda L}. $$
(31)

Combining (30) and (31), we obtain

$$ \|w_{n}-y_{n}\|^{2}\le \frac{(1+\lambda L)^{2}}{(1-\lambda L)^{2}}\|z_{n}-w_{n}\|^{2}. $$
Claim 5. :
$$ \begin{array}{@{}rcl@{}} &&\|x_{n+1}-p\|^{2}\\ &\le& (1-(1-\kappa)\beta_{n})\|x_{n}-p\|^{2}\\ &&\quad+(1-\kappa)\beta_{n} \cdot\left[\frac{2}{1-\kappa}\langle f(p)-p, x_{n+1}-p\rangle+\frac{3M}{1-\kappa}\cdot\frac{\alpha_{n}}{\beta_{n}}\cdot\|x_{n}-x_{n-1}\|\right] \end{array} $$

for some M > 0. Indeed, we have

$$ \begin{array}{@{}rcl@{}} &&\|w_{n}-p\|^{2}\\ &=&\|x_{n}+\alpha_{n} (x_{n}-x_{n-1})-p\|^{2}\\ &=&\|x_{n}-p\|^{2}+2\alpha_{n}\langle x_{n}-p,x_{n}-x_{n-1}\rangle +{\alpha_{n}^{2}}\|x_{n}-x_{n-1}\|^{2}\\ &\le&\|x_{n}-p\|^{2}+2\alpha_{n}\| x_{n}-p\|\|x_{n}-x_{n-1}\| +{\alpha_{n}^{2}}\|x_{n}-x_{n-1}\|^{2}. \end{array} $$
(32)

Using (4), we have

$$ \begin{array}{@{}rcl@{}} &&\|x_{n+1}-p\|^{2}\\ &=&\|\beta_{n} f(x_{n})+(1-\beta_{n})z_{n}-p\|^{2}\\ &=&\|\beta_{n} (f(x_{n})-f(p))+(1-\beta_{n})(z_{n}-p)+\beta_{n} (f(p)-p)\|^{2}\\ &\le& \|\beta_{n} (f(x_{n})-f(p))+(1-\beta_{n})(z_{n}-p)\|^{2}+2\beta_{n} \langle f(p)-p, x_{n+1}-p\rangle\\ &\le& \beta_{n} \|f(x_{n})-f(p)\|^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}+2\beta_{n} \langle f(p)-p, x_{n+1}-p\rangle\\ &\le& \beta_{n} \kappa^{2} \|x_{n}-p\|^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}+2\beta_{n} \langle f(p)-p, x_{n+1}-p\rangle\\ &\le& \beta_{n} \kappa \|x_{n}-p\|^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}+2\beta_{n} \langle f(p)-p, x_{n+1}-p\rangle\\ &\le& \beta_{n} \kappa \|x_{n}-p\|^{2}+(1-\beta_{n})\|w_{n}-p\|^{2}+2\beta_{n} \langle f(p)-p, x_{n+1}-p\rangle. \end{array} $$
(33)

Substituting (32) into (33), we have

$$ \begin{array}{@{}rcl@{}} &&\|x_{n+1}-p\|^{2}\\ &\le& (1-(1-\kappa)\beta_{n})\|x_{n}-p\|^{2}+2\alpha_{n}\| x_{n}-p\|\|x_{n}-x_{n-1}\|\\ &&\quad +{\alpha_{n}^{2}}\|x_{n}-x_{n-1}\|^{2}+2\beta_{n} \langle f(p)-p, x_{n+1}-p\rangle\\ &=& (1-(1-\kappa)\beta_{n})\|x_{n}-p\|^{2}+(1-\kappa)\beta_{n} \cdot\frac{2}{1-\kappa}\langle f(p)-p, x_{n+1}-p\rangle\\ &&\quad+\alpha_{n} \|x_{n}-x_{n-1}\| (2\| x_{n}-p\|+\alpha_{n}\|x_{n}-x_{n-1}\|) \\ &\le& (1-(1-\kappa)\beta_{n})\|x_{n}-p\|^{2}+(1-\kappa)\beta_{n} \cdot\frac{2}{1-\kappa}\langle f(p)-p, x_{n+1}-p\rangle\\ &&\quad+\alpha_{n} \|x_{n}-x_{n-1}\| (2\| x_{n}-p\|+\alpha\|x_{n}-x_{n-1}\|) \\ &\le& (1-(1-\kappa)\beta_{n})\|x_{n}-p\|^{2}\\ &&\quad+(1-\kappa)\beta_{n}\cdot\frac{2}{1-\kappa}\langle f(p)-p, x_{n+1}-p\rangle+3M\alpha_{n}\|x_{n}-x_{n-1}\|\\ &\le& (1-(1-\kappa)\beta_{n})\|x_{n}-p\|^{2}\\ &&\quad+(1-\kappa)\beta_{n} \cdot\left[\frac{2}{1-\kappa}\langle f(p)-p, x_{n+1} - p\rangle+\frac{3M}{1-\kappa}\cdot\frac{\alpha_{n}}{\beta_{n}}\cdot\|x_{n}-x_{n-1}\|\right], \end{array} $$

where \(M:=\sup _{n\in \mathbb {N}}\{\|x_{n}-p\|,\alpha \|x_{n}-x_{n-1}\|\}>0\).

Claim 6. :

Finally, we show that \(\left \{\|x_{n}-p\|^{2}\right \}\) converges to zero by considering two possible cases on the sequence \(\left \{\|x_{n}-p\|^{2}\right \}\).

Case 1. :

There exists \(N\in \mathbb {N}\) such that ∥xn+ 1p2 ≤∥xnp2 for each nN. This implies that \(\lim _{n\to \infty }\|x_{n}-p\|\) exists and, according to Claim 3, we get

$$ \lim_{n\to\infty}\|z_{n}-w_{n}\|=0. $$
(34)

Now, we show that, as \(n\to \infty \),

$$ \|x_{n+1}-x_{n}\| \to 0,\quad \|y_{n}-w_{n}\|\to 0. $$
(35)

Indeed, we have

$$ \|x_{n+1}-z_{n}\|=\beta_{n}\|z_{n}-f(x_{n})\|\to 0,\quad \|x_{n}-w_{n}\|=\alpha_{n} \|x_{n}-x_{n-1}\|=\beta_{n}\cdot\frac{\alpha_{n}}{\beta_{n}} \|x_{n}-x_{n-1}\|\to 0. $$
(36)

This implies, from (34) and (36), that

$$ \|x_{n+1}-x_{n}\|\le \|x_{n+1}-z_{n}\|+\|z_{n}-w_{n}\|+\|w_{n}-x_{n}\|\to 0. $$

From (34) and Claim 4, we get

$$ \lim_{n\to\infty}\|y_{n}-w_{n}\|=0. $$
(37)

Since the sequence {xn} is bounded, it follows that there exists a subsequence \(\{x_{n_{k}}\}\) of {xn} converging weakly to a point zH such that

$$ \limsup_{n\to \infty}\langle f(p)-p,x_{n}-p\rangle =\lim_{k\to \infty}\langle f(p)-p,x_{n_{k}}-p\rangle=\langle f(p)-p,z-p\rangle, $$
(38)

which implies, from (34), that

$$ w_{n_{k}}\rightharpoonup z. $$
(39)

From (37), (39), and Lemma 5, we have zVI(C,A). Also, from (38) and the definition of p = PVI(C,A)f(p), we have

$$ \limsup_{n\to \infty}\langle f(p)-p,x_{n}-p\rangle =\langle f(p)-p,z-p\rangle\le 0. $$
(40)

Combining (35) and (40), we have

$$ \begin{array}{@{}rcl@{}} \limsup_{n\to \infty}\langle f(p)-p,x_{n+1}-p\rangle &=& \limsup_{n\to \infty}\langle f(p)-p,x_{n}-p\rangle\\ &=&\langle f(p)-p,z-p\rangle\\ &\le& 0. \end{array} $$

Using Lemma 4 and Claim 5, we get \(\lim _{n\to \infty }\|x_{n}-p\|=0\).

Case 2. :

There exists a subsequence \(\{\| x_{n_{j}}-p\|^{2}\}\) of {∥xnp2} such that

$$ \| x_{n_{j}}-p\|^{2} < \| x_{n_{j}+1}-p\|^{2}, \forall j\geq 1. $$

In this case, it follows from Lemma 3 that there exists a nondecreasing sequence {mk} of \(\mathbb {N}\) such that \(\lim _{k\to \infty }m_{k}=\infty \) and the following inequalities hold: for each k ≥ 1,

$$ \| x_{m_{k}}-p\|^{2}\le \| x_{m_{k}+1}-p\|^{2},\quad \| x_{k}-p\|^{2}\le \| x_{m_{k}}-p\|^{2}. $$
(41)

According to Claim 3, we have

$$ \begin{array}{@{}rcl@{}} (1-\beta_{m_{k}})\|z_{m_{k}}-w_{m_{k}}\|^{2}&\le& \|x_{m_{k}}-p\|^{2}-\|x_{m_{k}+1}-p\|^{2}+\beta_{m_{k}} M_{4}\\ &\le& \beta_{m_{k}} M_{4}. \end{array} $$

Therefore, we obtain

$$ \lim_{k\to\infty}\|z_{m_{k}}-w_{m_{k}}\|=0. $$

Using the same arguments as in the proof of Case 1, we obtain

$$ \|x_{m_{k}+1}-x_{m_{k}}\|\to 0 $$

and

$$ \limsup_{k\to \infty}\langle f(p)-p,x_{m_{k}+1}-p\rangle\le 0. $$

According to Claim 5, we have

$$ \begin{array}{@{}rcl@{}} &&\|x_{m_{k}+1}-p\|^{2}\\ &\le& (1-(1-\kappa)\beta_{m_{k}})\|x_{m_{k}}-p\|^{2}\\ &&\quad+(1-\kappa)\beta_{m_{k}} \cdot\left[\frac{2}{1-\kappa}\langle f(p)-p, x_{m_{k}+1}-p\rangle+\frac{3M}{1-\kappa}\cdot\frac{\alpha_{m_{k}}}{\beta_{m_{k}}}\cdot\|x_{m_{k}}-x_{m_{k}-1}\|\right]. \end{array} $$
(42)

From (41) and (42), we obtain

$$ \begin{array}{@{}rcl@{}} &&\|x_{m_{k}+1}-p\|^{2}\\ &\le& (1-(1-\kappa)\beta_{m_{k}})\|x_{m_{k}+1}-p\|^{2}\\ &&\quad+(1-\kappa)\beta_{m_{k}} \cdot\left[\frac{2}{1-\kappa}\langle f(p)-p, x_{m_{k}+1}-p\rangle+\frac{3M}{1-\kappa}\cdot\frac{\alpha_{m_{k}}}{\beta_{m_{k}}}\cdot\|x_{m_{k}}-x_{m_{k}-1}\|\right]. \end{array} $$

Thus, we have

$$ \|x_{m_{k}+1}-p\|^{2}\le \frac{2}{1-\kappa}\langle f(p)-p, x_{m_{k}+1}-p\rangle+\frac{3M}{1-\kappa}\cdot\frac{\alpha_{m_{k}}}{\beta_{m_{k}}}\cdot\|x_{m_{k}}-x_{m_{k}-1}\|. $$

Therefore, we have

$$ \limsup_{k\to \infty}\|x_{m_{k}+1}-p\|\le 0. $$
(43)

Combining (41) and (43), we have \(\limsup _{k\to \infty }\|x_{k}-p\|\le 0\), that is, xk → p as \(k\to \infty \). This completes the proof. □

4 Numerical illustrations

In this section, we provide two numerical examples to test the proposed algorithms. All the codes were written in Matlab (R2015a) and run on a PC with an Intel(R) Core(TM) i3-370M processor (2.40 GHz).

Now, we apply Algorithm 1 to solve the variational inequality problem (VIP) (1) and compare numerical results with other algorithms. In the numerical results reported in the following tables, “Iter.” and “Sec.” stand for the number of iterations and the cpu time in seconds, respectively.

Example 1

Suppose that H = L2([0,1]) with the inner product

$$ \langle x,y\rangle:={{\int}_{0}^{1}}x(t)y(t)dt, \forall x,y\in H, $$

and the induced norm

$$ \| x\|:=\left( {{\int}_{0}^{1}}|x(t)|^{2}dt\right)^{\frac{1}{2}}, \forall x\in H. $$

Let C := {x ∈ H : ∥x∥≤ 1} be the unit ball. Define an operator A : C → H by

$$ (Ax)(t)=\max\{0,x(t)\}. $$

It is easy to see that A is 1-Lipschitz continuous and monotone on C. For the given C and A, the solution set of the variational inequality problem (VIP) (1) is Γ = {0} ≠ ∅. It is known that

$$P_{C}(x)=\left\{\begin{array}{ll} \frac{x}{\|x\|_{L^{2}}}, &\text{if }\|x\|_{L^{2}}>1,\\ x, & \text{if }\|x\|_{L^{2}}\leq 1. \end{array}\right. $$
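For a numerical treatment, the L2([0,1]) setting can be discretized on a uniform grid, with the integral norm approximated by a quadrature rule; the following is a sketch of the ingredients of this example (the grid size and the quadrature rule are illustrative choices):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)   # uniform grid on [0, 1]
h = t[1] - t[0]

def l2_norm(x):
    # Rectangle-rule approximation of the L2([0,1]) norm.
    return np.sqrt(h * np.dot(x, x))

def A(x):
    # (Ax)(t) = max{0, x(t)}: monotone and 1-Lipschitz.
    return np.maximum(x, 0.0)

def proj_C(x):
    # Projection onto the unit ball of L2([0,1]).
    nx = l2_norm(x)
    return x if nx <= 1.0 else x / nx

x0 = 0.5 * t**2                   # one of the starting points used below
```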

Now, we apply Algorithm 1 (iPCM), Maingé’s algorithm [30] (Algorithm M) and Kraikaew and Saejung’s algorithm [26] (Algorithm KR) to solve the variational inequality problem (VIP) (1). We use:

  1. (a)

    The same parameter λ = 0.5,

  2. (b)

    The stopping rule ∥xn − 0∥ < 10− 3 for all algorithms,

  3. (c)

    The same starting point x0.

Moreover, for Algorithm 1, we take f(x) = x1, \(\beta _{n}=\frac {1}{n}\), and α = 0.6. We also choose \(\alpha _{n}=\frac {1}{n}\) for Maingé's algorithm and for Kraikaew and Saejung's algorithm. We compare the three algorithms with different starting points x0 and report the results in Table 1.

Table 1 Comparison of three algorithms in Example 1

The convergence behavior of the algorithms with different starting points is given in Figs. 1, 2, and 3. In these figures, the error ∥xn − 0∥ is represented by the y-axis, and the number of iterations is represented by the x-axis.

Fig. 1

Comparison of three algorithms in Example 1 with \(x_{0}=\frac {1}{2}t^{2}\)

Fig. 2

Comparison of three algorithms in Example 1 with x0 = et

Fig. 3

Comparison of three algorithms in Example 1 with x0 = (t3 + 1)et

Example 2

Consider the linear operator \(A:\mathbb {R}^{m}\to \mathbb {R}^{m}\) defined by A(x) = Mx + q, which is taken from [24] and has been considered by many authors for numerical experiments (see, for example, [18, 25]), where

$$M=BB^{T}+S+D,$$

B is an m × m matrix, S is an m × m skew-symmetric matrix, D is an m × m diagonal matrix whose diagonal entries are nonnegative (so M is positive semidefinite), q is a vector in \(\mathbb {R}^{m}\), and

$$C := \left\{x\in\mathbb{R}^{m}: -5\leq x_{i}\leq 5,\ i=1,...,m\right\}.$$

Then A is monotone and Lipschitz continuous with the Lipschitz constant L = ∥M∥. For q = 0, the unique solution of the corresponding variational inequality is x∗ = 0.

Now, we compare our Algorithm 1 (iPCM) with its non-inertial counterpart (Algorithm 1 with αn = 0, shortly, PCM). The starting point is \(x_{0}=(1,1,\cdots ,1)\in \mathbb {R}^{m}\). All entries of the matrices B, S, D are generated randomly from a normal distribution.
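The test problem can be generated as follows (a sketch; the seed and the way the diagonal of D is made nonnegative are illustrative choices):

```python
import numpy as np

m = 50
rng = np.random.default_rng(0)

B = rng.standard_normal((m, m))
S0 = rng.standard_normal((m, m))
S = S0 - S0.T                                 # skew-symmetric matrix
D = np.diag(np.abs(rng.standard_normal(m)))   # nonnegative diagonal matrix
M = B @ B.T + S + D
q = np.zeros(m)

A = lambda x: M @ x + q                       # monotone, L = ||M||
L = np.linalg.norm(M, 2)
proj_C = lambda x: np.clip(x, -5.0, 5.0)      # projection onto the box C
x0 = np.ones(m)
```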

Control parameters and stopping rules are chosen as in Example 1 except α = 0.9, f(x) = 0, and \(\beta _{n}=\frac {1}{n+2}\). The results are described in Table 2 and Figs. 4, 5, 6, and 7.

Table 2 Comparison of two algorithms with different m
Fig. 4

Comparison of two algorithms in Example 2 with m = 10

Fig. 5

Comparison of two algorithms in Example 2 with m = 50

Fig. 6

Comparison of two algorithms in Example 2 with m = 80

Fig. 7

Comparison of two algorithms in Example 2 with m = 150

In Fig. 8, we illustrate the performance of Algorithm 1 for different choices of the contraction f(x) = 0.82x, 0.75x, 0.5x, 0.125x, where m = 150 and the stopping criterion is ∥xn − 0∥ < 10− 4.

Fig. 8

The performances of Algorithm 1 for different choices of the contraction f(x) = 0.82x, 0.75x, 0.5x, 0.125x

The computing times for Algorithm 1 are 1.1341, 1.5652, 3.6404, and 7.2754 seconds for f(x) = 0.82x, 0.75x, 0.5x, 0.125x, respectively, and the corresponding numbers of iterations are 106, 150, 351, and 702.

5 Conclusions

In this paper, we have proposed a new method for solving variational inequality problems with monotone and Lipschitz continuous operators in real Hilbert spaces. Under suitable conditions imposed on the parameters, we have proved the strong convergence of the algorithm. The efficiency of the proposed algorithm has also been illustrated by several numerical experiments.