Abstract
In this paper, we introduce a new algorithm that combines the inertial projection and contraction method with the viscosity method for solving monotone variational inequality problems in real Hilbert spaces, and we prove a strong convergence theorem for the proposed algorithm under standard assumptions on the cost operator. Finally, we present some numerical experiments to illustrate the proposed algorithm.
1 Introduction
Let H be a real Hilbert space with the inner product 〈⋅,⋅〉 and the induced norm ∥⋅∥. Let C be a nonempty closed convex subset of H and A : H → H be an operator.
The variational inequality problem (VIP) for A on C is to find a point x∗∈ C such that
Let us denote by VI(C,A) the solution set of the problem (VIP) (1). Finding solutions of the problem (VIP) (1) is a fundamental problem in optimization theory, and so (VIP) (1) has received a lot of attention from many authors. In fact, there are two general approaches to variational inequality problems: the regularization method and the projection method. Along these lines, many algorithms have been proposed for solving the problem (VIP) (1) (see, for example, [13,14,15,16, 26, 31, 34, 35, 43, 46, 53,54,55]).
The basic idea consists of extending the projected gradient method for solving the problem of minimizing f(x) subject to x ∈ C given by
where {αn} is a positive real sequence satisfying certain conditions and PC is the metric projection onto C.
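The projected gradient step (2) can be sketched in a few lines. The following is a minimal illustration, not code from the paper: the step sizes, the objective f(x) = ½∥x − b∥², and the box constraint set are all illustrative choices.

```python
import numpy as np

# Sketch of the projected gradient method (2):
#   x_{n+1} = P_C(x_n - alpha_n * grad f(x_n)),
# here with a constant step size, for f(x) = 0.5*||x - b||^2
# over the box C = [0, 1]^2 (illustrative choices, not from the paper).

def project_box(x, lo=0.0, hi=1.0):
    """Metric projection onto the box [lo, hi]^m (componentwise clip)."""
    return np.clip(x, lo, hi)

def projected_gradient(grad, proj, x0, alpha=0.5, iters=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = proj(x - alpha * grad(x))
    return x

b = np.array([2.0, -0.5])
grad = lambda x: x - b                 # gradient of 0.5*||x - b||^2
x_star = projected_gradient(grad, project_box, x0=np.zeros(2))
# The minimizer over the box is the projection of b onto it: (1.0, 0.0).
```

Here the fixed point of the iteration is exactly PC(b), the projection of the unconstrained minimizer onto the feasible set.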
For convergence properties of this method in the case where \(f: \mathbb {R}^{2}\to \mathbb {R}\) is a convex and differentiable function, one may see [1]. An immediate extension of the method (2) to (VIP) (1) is the projected gradient method for optimization problems, substituting the operator A for the gradient, so that we generate a sequence {xn} in the following manner:
However, the convergence of this method requires the rather strong assumption that the operator is strongly monotone or inverse strongly monotone (see, for example, [51]).
To avoid this strong assumption, Korpelevich [27] introduced the extragradient method (EGM) for solving saddle point problems and, after that, this method was further extended to variational inequality problems in both Euclidean spaces and Hilbert spaces. The convergence of the extragradient method only requires that the operator A is monotone and L-Lipschitz continuous. More precisely, the extragradient method is of the form:
where λ ∈ (0,1/L) and PC denotes the metric projection from H onto C.
If the solution set VI(C,A) is nonempty, then the sequence {xn} generated by the process (3) converges weakly to an element in VI(C,A).
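The extragradient iteration (3) is easy to sketch. The following is an illustrative implementation, not an example from the paper: the monotone, 1-Lipschitz rotation operator and the unit-ball constraint are assumed test data, for which the unique solution is x∗ = 0.

```python
import numpy as np

# Sketch of the extragradient method (3):
#   y_n = P_C(x_n - lam * A x_n),  x_{n+1} = P_C(x_n - lam * A y_n),
# for the monotone, 1-Lipschitz rotation A(x) = (x2, -x1) on the unit
# ball C; the unique solution of the VI is x* = 0 (illustrative data).

def project_ball(x, r=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

A = lambda x: np.array([x[1], -x[0]])  # skew-symmetric => monotone, L = 1

def extragradient(x0, lam=0.4, iters=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project_ball(x - lam * A(x))     # first projection onto C
        x = project_ball(x - lam * A(y))     # second projection onto C
    return x

x = extragradient([1.0, 0.5])   # converges to the solution 0
```

Note the two projections onto C per iteration, which is exactly the cost discussed next.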
In recent years, the (EGM) (3) has received great attention from many authors, who have improved it in various ways (see, for example, [13,14,15,16,17, 29, 34, 35, 38, 41, 47,48,49] and the references therein).
In fact, the (EGM) (3) requires two projections onto the closed convex set C in each iteration. If C is a general closed convex set, this may seriously affect the efficiency of the algorithm.
To our knowledge, there are some methods to overcome this drawback. The first one is the subgradient extragradient method proposed by Censor et al. [14], in which the second projection onto C is replaced by a projection onto a specific constructible half-space. Their algorithm is of the form:
where λ ∈ (0,1/L).
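The key point of the subgradient extragradient method is that projecting onto the half-space Tn = {w : ⟨xn − λAxn − yn, w − yn⟩ ≤ 0} has a closed form. The sketch below illustrates this on the same assumed test problem as above (rotation operator, unit ball); it is not code from [14].

```python
import numpy as np

# Sketch of the subgradient extragradient method of Censor et al. [14]:
# the second projection onto C is replaced by a closed-form projection
# onto the half-space T_n = {w : <x_n - lam*A x_n - y_n, w - y_n> <= 0}.
# Test operator and set are illustrative, not from the paper.

def project_ball(x, r=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

def project_halfspace(x, a, b):
    """Projection onto {w : <a, w> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

A = lambda x: np.array([x[1], -x[0]])  # monotone, 1-Lipschitz

def subgrad_extragradient(x0, lam=0.4, iters=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        u = x - lam * A(x)
        y = project_ball(u)                 # the only projection onto C
        a = u - y                           # normal vector of T_n
        if a @ a < 1e-16:                   # u in C, so T_n is all of H
            x = x - lam * A(y)
        else:
            x = project_halfspace(x - lam * A(y), a, a @ y)
    return x

x = subgrad_extragradient([1.0, 0.5])   # converges to the solution 0
```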
The second one is the method proposed by Tseng in [49]. Tseng’s method is of the form:
where λ ∈ (0,1/L). Recently, Tseng’s extragradient method for solving (VIP) (1) has received great attention from many authors (see, for example, [7, 43, 50] and the references therein).
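Tseng's method also needs only one projection per iteration; the second projection of (3) is replaced by an explicit correction step. The sketch below uses the same assumed rotation-on-the-unit-ball test problem as above, not an example from [49].

```python
import numpy as np

# Sketch of Tseng's method [49]: one projection per iteration,
#   y_n = P_C(x_n - lam * A x_n),
#   x_{n+1} = y_n - lam * (A y_n - A x_n).
# Illustrative test problem with solution x* = 0 (not from the paper).

def project_ball(x, r=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

A = lambda x: np.array([x[1], -x[0]])  # monotone, 1-Lipschitz

def tseng(x0, lam=0.4, iters=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project_ball(x - lam * A(x))   # the only projection onto C
        x = y - lam * (A(y) - A(x))        # explicit correction step
    return x

x = tseng([1.0, 0.5])   # converges to the solution 0
```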
The third one is the projection and contraction method studied by some authors [25, 42]. The projection and contraction method is of the form:
where γ ∈ (0,2), λ ∈ (0,1/L), and
In recent years, the projection and contraction method has received great attention from many authors, who have improved it in various ways (see, for example, [12, 19,20,21] and the references therein).
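A projection and contraction step can be sketched as follows. This is an illustration on the same assumed test problem as the sketches above, using the classical step length θn = ⟨xn − yn, dn⟩/∥dn∥² from [25]; it is not code from the paper.

```python
import numpy as np

# Sketch of the projection and contraction method [25, 42]:
#   y_n = P_C(x_n - lam * A x_n),
#   d_n = x_n - y_n - lam * (A x_n - A y_n),
#   x_{n+1} = x_n - gamma * theta_n * d_n,
# with the classical step theta_n = <x_n - y_n, d_n> / ||d_n||^2.
# Test operator and set are illustrative, not from the paper.

def project_ball(x, r=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

A = lambda x: np.array([x[1], -x[0]])  # monotone, 1-Lipschitz

def projection_contraction(x0, lam=0.4, gamma=1.8, iters=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project_ball(x - lam * A(x))   # the only projection onto C
        d = x - y - lam * (A(x) - A(y))
        dd = d @ d
        if dd < 1e-20:                     # d_n = 0 => y_n solves the VI
            return y
        theta = ((x - y) @ d) / dd
        x = x - gamma * theta * d          # contraction step
    return x

x = projection_contraction([1.0, 0.5])   # converges to the solution 0
```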
Note that each of the three methods above requires only one projection onto C per iteration, which may improve the performance of the algorithms.
Now, let us mention an inertial-type algorithm, which is based upon a discrete version of a second-order dissipative dynamical system [4, 5] and can be regarded as a procedure for speeding up convergence (see [3, 33, 39]).
In 2001, Alvarez and Attouch [3] applied the inertial technique to obtain an inertial proximal method for solving the problem of finding a zero of a maximal monotone operator, which reads as follows: for any xn− 1,xn ∈ H and two parameters 𝜃n ∈ [0,1), λn > 0, find xn+ 1 ∈ H such that
which can be written equivalently as follows:
where \(J_{\lambda _{n}}^{A}\) is the resolvent of A with parameter λn and the inertia is induced by the term 𝜃n(xn − xn− 1).
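For a concrete feel of the inertial proximal step, consider the simple (maximal monotone) operator A = I, whose resolvent is Jλ(x) = x/(1 + λ). The operator and parameter values below are illustrative assumptions, not those of [3].

```python
import numpy as np

# Sketch of the inertial proximal method of Alvarez and Attouch [3]:
#   x_{n+1} = J_{lam}^A ( x_n + theta_n * (x_n - x_{n-1}) ).
# For the maximal monotone operator A = I the resolvent is simply
#   J_lam(x) = x / (1 + lam),
# and the iterates converge to the unique zero x* = 0 of A.
# Operator and parameters are illustrative, not from the paper.

def inertial_proximal(x0, lam=1.0, theta=0.3, iters=200):
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        w = x + theta * (x - x_prev)       # inertial extrapolation
        x_prev, x = x, w / (1.0 + lam)     # resolvent of A = I
    return x

x = inertial_proximal([2.0, -1.0])   # close to the zero of A
```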
Recently, the inertial methods have been studied by several authors (see [2, 3, 6,7,8,9,10,11, 18, 22, 32, 36, 37, 44, 45]).
In 2015, Bot and Csetnek [11] introduced the so-called inertial hybrid proximal-extragradient algorithm, which combines the inertial-type algorithm with the hybrid proximal-extragradient method for a maximally monotone operator.
Very recently, Dong et al. [22] proposed a combination of the projection and contraction method with the inertial method, called the inertial projection and contraction method (shortly, IPCM). The IPCM is of the form
where γ ∈ (0,2),λ ∈ (0,1/L), and
where ϕ(wn,yn) := 〈wn − yn,d(wn,yn)〉. Under appropriate conditions, they proved that the sequence {xn} converges weakly to an element of VI(C,A).
Motivated and inspired by these works, in this paper we study the strong convergence of an algorithm for solving the classical variational inequality problem with a Lipschitz continuous and monotone mapping in real Hilbert spaces. The algorithm is inspired by the inertial projection and contraction method and the viscosity method. Under several appropriate conditions imposed on the parameters, we prove that the proposed algorithm converges strongly to a point in VI(C,A). Finally, we present several numerical experiments to support the convergence theorem. The numerical illustrations show that the proposed algorithm with inertial effects is faster than the original algorithm without inertial effects.
This paper is organized as follows: In Section 2, we recall some definitions and preliminary results for further use. Section 3 deals with analyzing the convergence of the proposed algorithm. Finally, in Section 4, we perform several numerical examples to illustrate the computational performance of our algorithm.
2 Preliminaries
Let H be a real Hilbert space and C be a nonempty closed convex subset of H. The weak convergence of {xn} to x is denoted by \(x_{n}\rightharpoonup x\) as \(n\to \infty \), while the strong convergence of {xn} to x is written as xn → x as \(n\to \infty \).
For all x,y ∈ H, we have
and
For all x ∈ H, there exists the unique nearest point in C, denoted by PCx, such that
The mapping PC is called the metric projection of H onto C. It is known that PC is nonexpansive.
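For two common feasible sets the metric projection has a closed form, and its nonexpansiveness can be checked numerically. The following is an illustrative sketch, not a computation from the paper.

```python
import numpy as np

# Closed-form metric projections onto two common sets, plus a quick
# numerical check of the nonexpansiveness of P_C mentioned above.
# Illustrative sketch, not a computation from the paper.

def project_ball(x, r=1.0):
    """P_C for the closed ball C = {x : ||x|| <= r}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

def project_box(x, lo, hi):
    """P_C for the box C = [lo, hi]^m (componentwise clipping)."""
    return np.clip(x, lo, hi)

# Numerically verify ||P_C x - P_C y|| <= ||x - y|| on random pairs.
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    assert np.linalg.norm(project_ball(x) - project_ball(y)) \
        <= np.linalg.norm(x - y) + 1e-12
```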
Lemma 1
[23] Let C be a nonempty closed convex subset of a real Hilbert space H. For any x ∈ H and z ∈ C, we have
Lemma 2
[23] Let C be a closed and convex subset in a real Hilbert space H and let x ∈ H. Then we have the following:
- (1)
∥PCx − PCy∥2 ≤ 〈PCx − PCy, x − y〉 for all y ∈ H;
- (2)
∥PCx − y∥2 ≤ ∥x − y∥2 −∥x − PCx∥2 for all y ∈ C.
For some properties of the metric projection, refer to Section 3 in [23].
Definition 1
Let T : H → H be an operator. Then
- (1)
T is said to be L-Lipschitz continuous with L > 0 if
$$ \|Tx-Ty\|\le L \|x-y\|, \forall x,y \in H; $$
- (2)
T is said to be monotone if
$$ \langle Tx-Ty,x-y \rangle \geq 0, \forall x,y \in H. $$
Lemma 3
[30] Let {an} be a sequence of nonnegative real numbers such that there exists a subsequence \(\{a_{n_{j}}\}\) of {an} such that \(a_{n_{j}}<a_{n_{j}+1}\) for each \(j\in \mathbb {N}\). Then there exists a nondecreasing sequence {mk} of \(\mathbb {N}\) such that \(\lim _{k\to \infty }m_{k}=\infty \) and the following properties are satisfied by all (sufficiently large) numbers \(k\in \mathbb {N}\):
In fact, mk is the largest number n in the set {1,2,⋯ ,k} such that an < an+ 1.
Lemma 4
Let {an} be a sequence of nonnegative real numbers such that:
where {αn}⊂ (0,1) and {bn} are sequences such that
- (a)
\(\sum \nolimits _{n=0}^{\infty } \alpha _{n}=\infty \);
- (b)
\(\limsup _{n\to \infty }b_{n}\le 0\). Then \(\lim _{n\to \infty }a_{n}=0\).
Remark 1
Lemma 4 was shown and used by several authors. For detailed proofs, see Liu [28] and Xu [52]. Furthermore, a variant of Lemma 4 has already been used by Reich in [40].
Lemma 5
[26] Let A : H → H be a monotone and L-Lipschitz continuous mapping on C. Let S = PC(I − μA), where μ > 0. If {xn} is a sequence in H satisfying \(x_{n}\rightharpoonup q\) and xn − Sxn → 0, then q ∈ VI(C,A) = Fix(S).
3 Main results
In this section, we assume that A : H → H is monotone and Lipschitz continuous on H with constant L, that VI(C,A)≠∅, and that f : H → H is a contraction mapping with contraction parameter κ ∈ [0,1).
Now, we introduce the following algorithm:
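Algorithm 1 appears only as a figure in the published version. As a reading aid, the sketch below reconstructs its steps from the quantities used in the proof (wn, yn, dn, θn, zn, and the update xn+1 = βnf(xn) + (1 − βn)zn). The test operator (a rotation on the unit ball), the contraction f, and the constant inertial weight α are illustrative assumptions; in the paper, the inertial parameters αn must satisfy conditions ensuring (αn/βn)∥xn − xn−1∥→ 0.

```python
import numpy as np

# Hedged reconstruction of Algorithm 1 from the quantities in the proof:
#   w_n = x_n + alpha_n (x_n - x_{n-1})                  (inertial step)
#   y_n = P_C(w_n - lam * A w_n)
#   d_n = w_n - y_n - lam * (A w_n - A y_n)
#   theta_n = (1 - lam*L) * ||w_n - y_n||^2 / ||d_n||^2
#   z_n = w_n - theta_n * d_n                            (PC step)
#   x_{n+1} = beta_n f(x_n) + (1 - beta_n) z_n           (viscosity step)
# The test problem, f, and the constant alpha are illustrative.

def project_ball(x, r=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

A = lambda x: np.array([x[1], -x[0]])   # monotone, L-Lipschitz, L = 1
f = lambda x: 0.5 * x                   # contraction with kappa = 0.5

def algorithm1(x0, lam=0.4, L=1.0, alpha=0.3, iters=1000):
    x_prev = x = np.asarray(x0, dtype=float)
    for n in range(1, iters + 1):
        beta = 1.0 / (n + 1)
        w = x + alpha * (x - x_prev)            # inertial step
        y = project_ball(w - lam * A(w))
        d = w - y - lam * (A(w) - A(y))
        if d @ d < 1e-24:                       # w_n = y_n: y_n solves the VI
            return y
        theta = (1.0 - lam * L) * ((w - y) @ (w - y)) / (d @ d)
        z = w - theta * d                       # projection-contraction step
        x_prev, x = x, beta * f(x) + (1 - beta) * z   # viscosity step
    return x

x = algorithm1([1.0, 0.5])   # converges to the solution x* = 0
```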
Lemma 6
If yn = wn or dn = 0 in Algorithm 1, then yn ∈ VI(C,A).
Proof
Since A is L-Lipschitz continuous, we have
It is also easy to see that
and so wn = yn if and only if d(wn,yn) = 0. Therefore, if wn = yn or dn = 0, then wn = yn and we get
This implies that yn ∈ VI(C,A). This completes the proof. □
Theorem 1
The sequence {xn} generated by Algorithm 1 converges strongly to an element p ∈ VI(C,A), where p = PVI(C,A) ∘ f(p).
Proof
- Claim 1. :
-
Let zn = wn − 𝜃ndn. Now, we show that
$$ \|z_{n}-p\|^{2}\le \|w_{n}-p\|^{2}-\|z_{n}-w_{n}\|^{2}. $$(11)Indeed, we have
$$ \begin{array}{@{}rcl@{}} &&\langle w_{n}-p,d_{n}\rangle\\ &=&\langle w_{n}-y_{n},d_{n}\rangle+\langle y_{n}-p,d_{n}\rangle\\ &=&\langle w_{n}-y_{n},w_{n}-y_{n}-\lambda (Aw_{n}-Ay_{n})\rangle+\langle y_{n}-p,w_{n} - y_{n}-\lambda (Aw_{n} - Ay_{n})\rangle\\ & = &\|w_{n} - y_{n}\|^{2} - \langle w_{n} - y_{n},\lambda (Aw_{n} - Ay_{n})\rangle + \langle y_{n} - p,w_{n} - y_{n} - \lambda (Aw_{n}-Ay_{n})\rangle\\ &\geq& \|w_{n}-y_{n}\|^{2}-\lambda L\|w_{n}-y_{n}\|^{2}+\langle y_{n}-p,w_{n}-y_{n}-\lambda (Aw_{n}-Ay_{n})\rangle. \end{array} $$(12)Note that, from yn = PC(wn − λAwn), it follows that
$$ \langle y_{n}-w_{n}+\lambda Aw_{n},y_{n}-p\rangle \le 0. $$(13)By the monotonicity of A and p ∈ VI(C,A), we have
$$ \langle Ay_{n},y_{n}-p\rangle \geq\langle Ap,y_{n}-p\rangle \geq 0. $$(14)Thus, from (13) and (14), it follows that
$$ \langle w_{n}-y_{n}-\lambda (Aw_{n}-Ay_{n}),y_{n}-p\rangle \geq 0. $$(15)Combining (12) and (15), we obtain
$$ \langle w_{n}-p,d_{n}\rangle\geq (1-\lambda L)\|w_{n}-y_{n}\|^{2}. $$(16)On the other hand, we have
$$ \begin{array}{@{}rcl@{}} \|z_{n}-p\|^{2}&=&\|w_{n}-\theta_{n}d_{n}-p\|^{2}\\ &=&\|w_{n}-p\|^{2}-2\theta_{n}\langle d_{n},w_{n}-p\rangle +{\theta_{n}^{2}}\|d_{n}\|^{2}. \end{array} $$(17)Combining (16) and (17), we get
$$ \|z_{n}-p\|^{2}\le \|w_{n}-p\|^{2}-2\theta_{n}(1-\lambda L)\|w_{n}-y_{n}\|^{2}+{\theta_{n}^{2}}\|d_{n}\|^{2}. $$(18)On the other hand, since \(\theta _{n}=(1-\lambda L)\frac {\|w_{n}-y_{n}\|^{2}}{\|d_{n}\|^{2}}\), it follows that
$$ \|w_{n}-y_{n}\|^{2}=\frac{\theta_{n}\|d_{n}\|^{2}}{1-\lambda L}. $$(19)Substituting (19) into (18), we obtain
$$ \begin{array}{@{}rcl@{}} \|z_{n}-p\|^{2}&\le& \|w_{n}-p\|^{2}-2{\theta_{n}^{2}}\cdot\|d_{n}\|^{2}+{\theta^{2}_{n}}\|d_{n}\|^{2}\\ &=&\|w_{n}-p\|^{2}-\|\theta_{n}\cdot d_{n}\|^{2}. \end{array} $$(20)By the definition of the sequence {zn}, we have zn = wn − 𝜃ndn and so
$$ \theta_{n}d_{n}=w_{n}-z_{n}, $$(21)which implies, from (20) and (21), that
$$ \|z_{n}-p\|^{2}\le\|w_{n}-p\|^{2}-\|w_{n}-z_{n}\|^{2}. $$ - Claim 2. :
-
The sequence {xn} is bounded. By Claim 1, we have
$$ \|z_{n}-p\|\le\|w_{n}-p\|. $$(22)From the definition of wn, we get
$$ \begin{array}{@{}rcl@{}} \|w_{n}-p\|&=&\|x_{n}+\alpha_{n} (x_{n}-x_{n-1})-p\| \\ &\le&\|x_{n}-p\|+\alpha_{n}\|x_{n}-x_{n-1}\|\\ &=&\|x_{n}-p\|+\beta_{n}\cdot\frac{\alpha_{n}}{\beta_{n}}\|x_{n}-x_{n-1}\|. \end{array} $$(23)From the conditions imposed on the parameters {αn} and {βn}, we have \(\frac {\alpha _{n}}{\beta _{n}}\|x_{n}-x_{n-1}\|\to 0\); hence, there exists a constant M1 > 0 such that
$$ \frac{\alpha_{n}}{\beta_{n}}\|x_{n}-x_{n-1}\|\le M_{1}\ \forall n. $$(24)Combining (22), (23), and (24), we obtain
$$ \|z_{n}-p\|\le \|w_{n}-p\|\le\|x_{n}-p\|+\beta_{n} M_{1}. $$(25)Since we have xn+ 1 = βnf(xn) + (1 − βn)zn, it follows that
$$ \begin{array}{@{}rcl@{}} \|x_{n+1}-p\|&=&\|\beta_{n} f(x_{n})+(1-\beta_{n})z_{n}-p\|\\ &=&\|\beta_{n} (f(x_{n})-p)+(1-\beta_{n})(z_{n}-p)\|\\ &\le& \beta_{n} \|f(x_{n})-p\|+(1-\beta_{n})\|z_{n}-p\|\\ &\le& \beta_{n} \|f(x_{n})-f(p)\|+\beta_{n} \|f(p)-p\|+(1-\beta_{n})\|z_{n}-p\|\\ &\le& \beta_{n} \kappa \|x_{n}-p\|+\beta_{n} \|f(p)-p\|+(1-\beta_{n})\|z_{n}-p\|. \end{array} $$(26)Substituting (25) into (26), we obtain
$$ \begin{array}{@{}rcl@{}} \|x_{n+1}-p\|&\le& (1-(1-\kappa)\beta_{n} )\|x_{n}-p\|+\beta_{n} M_{1}+\beta_{n} \|f(p)-p\|\\ &=&(1-(1-\kappa)\beta_{n} )\|x_{n}-p\|+(1-\kappa)\beta_{n} \frac{ M_{1}+ \|f(p)-p\|}{1-\kappa}\\ &\le& \max\left\{\|x_{n}-p\|, \frac{ M_{1}+ \|f(p)-p\|}{1-\kappa}\right\}\\ &\le& \cdots\\ &\le& \max\left\{\|x_{0}-p\|, \frac{ M_{1}+ \|f(p)-p\|}{1-\kappa}\right\}. \end{array} $$This implies that {xn} is bounded. Consequently, {zn}, {f(xn)}, and {wn} are bounded as well.
- Claim 3. :
-
$$ (1-\beta_{n})\|z_{n}-w_{n}\|^{2}\le \|x_{n+1}-p\|^{2}-\|x_{n}-p\|^{2}+\beta_{n} M_{4} $$
for some M4 > 0. Indeed, we get
$$ \begin{array}{@{}rcl@{}} \|x_{n+1}-p\|^{2}&\le& \beta_{n} \|f(x_{n})-p\|^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}\\ &\le& \beta_{n} (\|f(x_{n})- f(p)\|+\|f(p)-p\|)^{2}+(1-\beta_{n})\|z_{n}-p\|^{2} \\ &\le& \beta_{n} (\kappa\|x_{n}- p\|+\|f(p)-p\|)^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}\\ &\le& \beta_{n} (\|x_{n}- p\|+\|f(p)-p\|)^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}\\ &=&\beta_{n}\|x_{n}-p\|^{2}+\beta_{n} (2\|x_{n}- p\|\cdot\|f(p)-p\|+\|f(p)-p\|^{2})\\ &&\quad+(1-\beta_{n})\|z_{n}-p\|^{2}\\ &\le&\beta_{n} \|x_{n}-p\|^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}+\beta_{n} M_{2} \end{array} $$(27)for some M2 > 0. Substituting (11) into (27), we get
$$ \|x_{n+1}-p\|^{2} \le \beta_{n} \|x_{n}-p\|^{2}+(1-\beta_{n})\|w_{n}-p\|^{2} -(1-\beta_{n})\|z_{n}-w_{n}\|^{2}+\beta_{n} M_{2}, $$(28)which implies from (25) that
$$ \begin{array}{@{}rcl@{}} \|w_{n}-p\|^{2}&\le& (\|x_{n}-p\|+\beta_{n} M_{1})^{2}\\ &=&\|x_{n}-p\|^{2}+\beta_{n}(2M_{1}\|x_{n}-p\| +\beta_{n}{M_{1}^{2}})\\ &\le&\|x_{n}-p\|^{2}+\beta_{n} M_{3} \end{array} $$(29)for some M3 > 0. Combining (28) and (29), we obtain
$$ \begin{array}{@{}rcl@{}} \|x_{n+1}-p\|^{2}&\le& \beta_{n} \|x_{n}-p\|^{2}+(1-\beta_{n})\|x_{n}-p\|^{2}+\beta_{n} M_{3}\\ &&\quad -(1-\beta_{n})\|z_{n}-w_{n}\|^{2}+\beta_{n} M_{2}\\ &=&\|x_{n}-p\|^{2}+\beta_{n} M_{3} -(1-\beta_{n})\|z_{n}-w_{n}\|^{2}+\beta_{n} M_{2}. \end{array} $$This implies that
$$ (1-\beta_{n})\|z_{n}-w_{n}\|^{2}\le \|x_{n+1}-p\|^{2}-\|x_{n}-p\|^{2}+\beta_{n} M_{4}, $$where M4 := M2 + M3.
- Claim 4. :
-
$$ \|w_{n}-y_{n}\|^{2}\le \frac{(1+\lambda L)^{2}}{(1-\lambda L)^{2}}\|z_{n}-w_{n}\|^{2}. $$
Indeed, we have
$$ \begin{array}{@{}rcl@{}} \|w_{n}-y_{n}\|^{2}&=&\frac{\theta_{n}\|d_{n}\|^{2}}{1-\lambda L}=\frac{\|\theta_{n} d_{n}\|^{2}}{(1-\lambda L)\theta_{n}}=\frac{\|z_{n}-w_{n}\|^{2}}{(1-\lambda L)\theta_{n}}. \end{array} $$(30)It follows from (10) that
$$ \frac{\|w_{n}-y_{n}\|}{\|d_{n}\|}\geq \frac{1}{1+\lambda L}. $$Therefore, we have
$$ \theta_{n}=(1-\lambda L)\frac{\|w_{n}-y_{n}\|^{2}}{\|d_{n}\|^{2}}\geq \frac{1-\lambda L}{(1+\lambda L)^{2}}, $$that is,
$$ \frac{1}{\theta_{n}} \le \frac{(1+\lambda L)^{2}} {1-\lambda L}. $$(31)Combining (30) and (31), we obtain
$$ \|w_{n}-y_{n}\|^{2}\le \frac{(1+\lambda L)^{2}}{(1-\lambda L)^{2}}\|z_{n}-w_{n}\|^{2}. $$ - Claim 5. :
-
$$ \begin{array}{@{}rcl@{}} &&\|x_{n+1}-p\|^{2}\\ &\le& (1-(1-\kappa)\beta_{n})\|x_{n}-p\|^{2}\\ &&\quad+(1-\kappa)\beta_{n} \cdot\left[\frac{2}{1-\kappa}\langle f(p)-p, x_{n+1}-p\rangle+\frac{3M}{1-\kappa}\cdot\frac{\alpha_{n}}{\beta_{n}}\cdot\|x_{n}-x_{n-1}\|\right] \end{array} $$
for some M > 0. Indeed, we have
$$ \begin{array}{@{}rcl@{}} &&\|w_{n}-p\|^{2}\\ &=&\|x_{n}+\alpha_{n} (x_{n}-x_{n-1})-p\|^{2}\\ &=&\|x_{n}-p\|^{2}+2\alpha_{n}\langle x_{n}-p,x_{n}-x_{n-1}\rangle +{\alpha_{n}^{2}}\|x_{n}-x_{n-1}\|^{2}\\ &\le&\|x_{n}-p\|^{2}+2\alpha_{n}\| x_{n}-p\|\|x_{n}-x_{n-1}\| +{\alpha_{n}^{2}}\|x_{n}-x_{n-1}\|^{2}. \end{array} $$(32)Using (4), we have
$$ \begin{array}{@{}rcl@{}} &&\|x_{n+1}-p\|^{2}\\ &=&\|\beta_{n} f(x_{n})+(1-\beta_{n})z_{n}-p\|^{2}\\ &=&\|\beta_{n} (f(x_{n})-f(p))+(1-\beta_{n})(z_{n}-p)+\beta_{n} (f(p)-p)\|^{2}\\ &\le& \|\beta_{n} (f(x_{n})-f(p))+(1-\beta_{n})(z_{n}-p)\|^{2}+2\beta_{n} \langle f(p)-p, x_{n+1}-p\rangle\\ &\le& \beta_{n} \|f(x_{n})-f(p)\|^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}+2\beta_{n} \langle f(p)-p, x_{n+1}-p\rangle\\ &\le& \beta_{n} \kappa^{2} \|x_{n}-p\|^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}+2\beta_{n} \langle f(p)-p, x_{n+1}-p\rangle\\ &\le& \beta_{n} \kappa \|x_{n}-p\|^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}+2\beta_{n} \langle f(p)-p, x_{n+1}-p\rangle\\ &\le& \beta_{n} \kappa \|x_{n}-p\|^{2}+(1-\beta_{n})\|w_{n}-p\|^{2}+2\beta_{n} \langle f(p)-p, x_{n+1}-p\rangle. \end{array} $$(33)Substituting (32) into (33), we have
$$ \begin{array}{@{}rcl@{}} &&\|x_{n+1}-p\|^{2}\\ &\le& (1-(1-\kappa)\beta_{n})\|x_{n}-p\|^{2}+2\alpha_{n}\| x_{n}-p\|\|x_{n}-x_{n-1}\|\\ &&\quad +{\alpha_{n}^{2}}\|x_{n}-x_{n-1}\|^{2}+2\beta_{n} \langle f(p)-p, x_{n+1}-p\rangle\\ &=& (1-(1-\kappa)\beta_{n})\|x_{n}-p\|^{2}+(1-\kappa)\beta_{n} \cdot\frac{2}{1-\kappa}\langle f(p)-p, x_{n+1}-p\rangle\\ &&\quad+\alpha_{n} \|x_{n}-x_{n-1}\| (2\| x_{n}-p\|+\alpha_{n}\|x_{n}-x_{n-1}\|) \\ &\le& (1-(1-\kappa)\beta_{n})\|x_{n}-p\|^{2}+(1-\kappa)\beta_{n} \cdot\frac{2}{1-\kappa}\langle f(p)-p, x_{n+1}-p\rangle\\ &&\quad+\alpha_{n} \|x_{n}-x_{n-1}\| (2\| x_{n}-p\|+\alpha\|x_{n}-x_{n-1}\|) \\ &\le& (1-(1-\kappa)\beta_{n})\|x_{n}-p\|^{2}\\ &&\quad+(1-\kappa)\beta_{n}\cdot\frac{2}{1-\kappa}\langle f(p)-p, x_{n+1}-p\rangle+3M\alpha_{n}\|x_{n}-x_{n-1}\|\\ &\le& (1-(1-\kappa)\beta_{n})\|x_{n}-p\|^{2}\\ &&\quad+(1-\kappa)\beta_{n} \cdot\left[\frac{2}{1-\kappa}\langle f(p)-p, x_{n+1} - p\rangle+\frac{3M}{1-\kappa}\cdot\frac{\alpha_{n}}{\beta_{n}}\cdot\|x_{n}-x_{n-1}\|\right], \end{array} $$where \(M:=\sup _{n\in \mathbb {N}}\{\|x_{n}-p\|,\alpha \|x_{n}-x_{n-1}\|\}>0\).
- Claim 6. :
-
Finally, we show that \(\left \{\|x_{n}-p\|^{2}\right \}\) converges to zero by considering two possible cases on the sequence \(\left \{\|x_{n}-p\|^{2}\right \}\).
- Case 1. :
-
There exists \(N\in \mathbb {N}\) such that ∥xn+ 1 − p∥2 ≤∥xn − p∥2 for each n ≥ N. This implies that \(\lim _{n\to \infty }\|x_{n}-p\|\) exists and, according to Claim 3, we get
$$ \lim_{n\to\infty}\|z_{n}-w_{n}\|=0. $$(34)Now, we show that, as \(n\to \infty \),
$$ \|x_{n+1}-x_{n}\| \to 0,\quad \|y_{n}-w_{n}\|\to 0. $$(35)Indeed, we have
$$ \|x_{n+1}-z_{n}\|=\beta_{n}\|z_{n}-f(x_{n})\|\to 0,\ \|x_{n}-w_{n}\|=\alpha_{n} \|x_{n}-x_{n-1}\|=\beta_{n}\cdot\frac{\alpha_{n}}{\beta_{n}} \|x_{n}-x_{n-1}\|\to 0. $$(36)This implies, from (34) and (36), that
$$ \|x_{n+1}-x_{n}\|\le \|x_{n+1}-z_{n}\|+\|z_{n}-w_{n}\|+\|w_{n}-x_{n}\|\to 0. $$From (34) and Claim 4, we get
$$ \lim_{n\to\infty}\|y_{n}-w_{n}\|=0. $$(37)Since the sequence {xn} is bounded, it follows that there exists a subsequence \(\{x_{n_{k}}\}\) of {xn} converging weakly to a point z ∈ H such that
$$ \limsup_{n\to \infty}\langle f(p)-p,x_{n}-p\rangle =\lim_{k\to \infty}\langle f(p)-p,x_{n_{k}}-p\rangle=\langle f(p)-p,z-p\rangle, $$(38)which implies, from (34), that
$$ w_{n_{k}}\rightharpoonup z. $$(39)From (37), (39), and Lemma 5, we have z ∈ VI(C,A). Also, from (38) and the definition of p = PVI(C,A) ∘ f(p), we have
$$ \limsup_{n\to \infty}\langle f(p)-p,x_{n}-p\rangle =\langle f(p)-p,z-p\rangle\le 0. $$(40)Combining (35) and (40), we have
$$ \begin{array}{@{}rcl@{}} \limsup_{n\to \infty}\langle f(p)-p,x_{n+1}-p\rangle &=& \limsup_{n\to \infty}\langle f(p)-p,x_{n}-p\rangle\\ &=&\langle f(p)-p,z-p\rangle\\ &\le& 0. \end{array} $$Using Lemma 4 and Claim 5, we get \(\lim _{n\to \infty }\|x_{n}-p\|=0\).
- Case 2. :
-
There exists a subsequence \(\{\| x_{n_{j}}-p\|^{2}\}\) of {∥xn − p∥2} such that
$$ \| x_{n_{j}}-p\|^{2} < \| x_{n_{j}+1}-p\|^{2}, \forall j\geq 1. $$In this case, it follows from Lemma 3 that there exists a nondecreasing sequence {mk} of \(\mathbb {N}\) such that \(\lim _{k\to \infty }m_{k}=\infty \) and the following inequalities hold: for each k ≥ 1,
$$ \| x_{m_{k}}-p\|^{2}\le \| x_{m_{k}+1}-p\|^{2},\quad \| x_{k}-p\|^{2}\le \| x_{m_{k}}-p\|^{2}. $$(41)According to Claim 3, we have
$$ \begin{array}{@{}rcl@{}} (1-\beta_{m_{k}})\|z_{m_{k}}-w_{m_{k}}\|^{2}&\le& \|x_{m_{k}}-p\|^{2}-\|x_{m_{k}+1}-p\|^{2}+\beta_{m_{k}} M_{4}\\ &\le& \beta_{m_{k}} M_{4}. \end{array} $$Therefore, we obtain
$$ \lim_{k\to\infty}\|z_{m_{k}}-w_{m_{k}}\|=0. $$Using the same arguments as in the proof of Case 1, we obtain
$$ \|x_{m_{k}+1}-x_{m_{k}}\|\to 0 $$and
$$ \limsup_{k\to \infty}\langle f(p)-p,x_{m_{k}+1}-p\rangle\le 0. $$According to Claim 5, we have
$$ \begin{array}{@{}rcl@{}} &&\|x_{m_{k}+1}-p\|^{2}\\ &\le& (1-(1-\kappa)\beta_{m_{k}})\|x_{m_{k}}-p\|^{2}\\ &&\quad+(1-\kappa)\beta_{m_{k}} \cdot\left[\frac{2}{1-\kappa}\langle f(p)-p, x_{m_{k}+1}-p\rangle+\frac{3M}{1-\kappa}\cdot\frac{\alpha_{m_{k}}}{\beta_{m_{k}}}\cdot\|x_{m_{k}}-x_{m_{k}-1}\|\right]. \end{array} $$(42)Combining this with the first inequality in (41), we obtain $$ \begin{array}{@{}rcl@{}} &&\|x_{m_{k}+1}-p\|^{2}\\ &\le& (1-(1-\kappa)\beta_{m_{k}})\|x_{m_{k}+1}-p\|^{2}\\ &&\quad+(1-\kappa)\beta_{m_{k}} \cdot\left[\frac{2}{1-\kappa}\langle f(p)-p, x_{m_{k}+1}-p\rangle+\frac{3M}{1-\kappa}\cdot\frac{\alpha_{m_{k}}}{\beta_{m_{k}}}\cdot\|x_{m_{k}}-x_{m_{k}-1}\|\right]. \end{array} $$Thus, we have
$$ \|x_{m_{k}+1}-p\|^{2}\le \frac{2}{1-\kappa}\langle f(p)-p, x_{m_{k}+1}-p\rangle+\frac{3M}{1-\kappa}\cdot\frac{\alpha_{m_{k}}}{\beta_{m_{k}}}\cdot\|x_{m_{k}}-x_{m_{k}-1}\|. $$Therefore, we have
$$ \limsup_{k\to \infty}\|x_{m_{k}+1}-p\|\le 0. $$(43)Combining (41) and (43), we have \(\limsup _{k\to \infty }\|x_{k}-p\|\le 0\), that is, xk → p. This completes the proof.
□
4 Numerical illustrations
In this section, we provide two numerical examples to test the proposed algorithms. All the codes were written in Matlab (R2015a) and run on a PC with an Intel(R) Core(TM) i3-370M processor at 2.40 GHz.
Now, we apply Algorithm 1 to solve the variational inequality problem (VIP) (1) and compare the numerical results with other algorithms. In the numerical results reported in the following tables, “Iter.” and “Sec.” stand for the number of iterations and the CPU time in seconds, respectively.
Example 1
Suppose that H = L2([0,1]) with the inner product
and the induced norm
Let C := {x ∈ H : ∥x∥≤ 1} be the unit ball. Define an operator A : C → H by
It is easy to see that A is 1-Lipschitz continuous and monotone on C. For given C and A, the set of solutions of the variational inequality problem (VIP) (1) is given by Γ = {0}≠∅. It is known that
Now, we apply Algorithm 1 (iPCM), Maingé’s algorithm [30] (Algorithm M) and Kraikaew and Saejung’s algorithm [26] (Algorithm KR) to solve the variational inequality problem (VIP) (1). We use:
- (a)
The same parameter λ = 0.5,
- (b)
The stopping rule ∥xn − 0∥ < 10− 3 for all algorithms,
- (c)
The same starting point x0.
Moreover, for Algorithm 1, we take f(x) = x1, \(\beta _{n}=\frac {1}{n}\), and α = 0.6. We also choose \(\alpha _{n}=\frac {1}{n}\) for Maingé’s algorithm and Kraikaew and Saejung’s algorithm. We now compare the three algorithms with different starting points x0 and report the results in Table 1.
The convergence behavior of the algorithms with different starting points is given in Figs. 1, 2, and 3. In these figures, the error ∥xn − 0∥ is represented on the y-axis and the number of iterations on the x-axis.
Example 2
Consider the linear operator \(A:\mathbb {R}^{m}\to \mathbb {R}^{m}\) defined by A(x) = Mx + q, which is taken from [24] and has been considered by many authors for numerical experiments (see, for example, [18, 25]), where
B is an m × m matrix, S is an m × m skew-symmetric matrix, D is an m × m diagonal matrix, whose diagonal entries are nonnegative (so M is positive definite), q is a vector in \(\mathbb {R}^{m}\), and
Then, A is monotone and Lipschitz continuous with the Lipschitz constant L = ||M||. For q = 0, the unique solution of the corresponding variational inequality is {0}.
Now, we compare our Algorithm 1 (iPCM) with the standard algorithm (Algorithm 1 with αk = 0, shortly, PCM). The starting point is \(x_{0}=(1,1,\cdots ,1)\in \mathbb {R}^{m}\). All entries of the matrices B,S,D are generated randomly (matrices of normally distributed random numbers).
Control parameters and stopping rules are chosen as in Example 1 except α = 0.9, f(x) = 0, and \(\beta _{n}=\frac {1}{n+2}\). The results are described in Table 2 and Figs. 4, 5, 6, and 7.
In Fig. 8, we illustrate the performances of Algorithm 1 for different choices of the contraction f(x) = 0.82x, 0.75x, 0.5x, 0.125x, where m = 150 and the stopping criterion is ∥xn − 0∥ < 10− 4.
The computing times for Algorithm 1 are 1.1341, 1.5652, 3.6404, and 7.2754 seconds for f(x) = 0.82x, 0.75x, 0.5x, 0.125x, respectively, and the corresponding numbers of iterations are 106, 150, 351, and 702.
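For readers who wish to reproduce the flavor of the Example 2 comparison, the sketch below implements it under explicitly stated assumptions: since the displayed formula for M is not reproduced in this version, we assume the standard construction M = BBᵀ + S + D of Harker and Pang [24], consistent with the description above; the feasible set is taken as C = ℝᵐ₊; the diagonal of D is drawn from (0.5, 2) so that M is positive definite; and the inertial weight is chosen adaptively as αn = min(α, n⁻²/∥xn − xn−1∥), a common rule enforcing (αn/βn)∥xn − xn−1∥→ 0. None of these choices are guaranteed to match the paper's exact setup.

```python
import numpy as np

# Sketch of the Example 2 comparison (iPCM vs. PCM). Assumed, not from
# the paper's displayed formulas: M = B B^T + S + D (standard Harker-
# Pang [24] construction); feasible set C = R^m_+; strictly positive
# diagonal D; adaptive inertial weight alpha_n = min(alpha, n^{-2}/||x_n - x_{n-1}||).

rng = np.random.default_rng(1)
m = 20
B = rng.normal(size=(m, m))
S0 = rng.normal(size=(m, m))
S = S0 - S0.T                                  # skew-symmetric part
D = np.diag(rng.uniform(0.5, 2.0, size=m))     # positive diagonal part
M = B @ B.T + S + D                            # positive definite
L = np.linalg.norm(M, 2)                       # Lipschitz constant ||M||
A = lambda x: M @ x                            # q = 0, so x* = 0

def ipcm(alpha, lam, iters=2000, tol=1e-3):
    """Algorithm 1 with f(x) = 0 and beta_n = 1/(n+2); alpha = 0 gives PCM."""
    x_prev = x = np.ones(m)
    for n in range(1, iters + 1):
        beta = 1.0 / (n + 2)
        dx = np.linalg.norm(x - x_prev)
        a_n = min(alpha, n**-2 / dx) if dx > 0 else alpha
        w = x + a_n * (x - x_prev)             # inertial step
        y = np.maximum(w - lam * A(w), 0.0)    # P_C for C = R^m_+
        d = w - y - lam * (A(w) - A(y))
        if d @ d < 1e-24:                      # w = y: y solves the VI
            return y, n
        theta = (1 - lam * L) * ((w - y) @ (w - y)) / (d @ d)
        z = w - theta * d
        x_prev, x = x, (1 - beta) * z          # viscosity step with f = 0
        if np.linalg.norm(x) < tol:
            return x, n
    return x, iters

x_i, it_i = ipcm(alpha=0.9, lam=0.5 / L)       # inertial (iPCM)
x_p, it_p = ipcm(alpha=0.0, lam=0.5 / L)       # non-inertial (PCM)
```

Comparing it_i with it_p gives the flavor of Table 2; the exact figures depend on the random seed, the problem size, and the assumptions above.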
5 Conclusions
In this paper, we have proposed a new method for solving monotone and Lipschitz continuous variational inequality problems in real Hilbert spaces. Under suitable conditions imposed on the parameters, we have proved the strong convergence of the algorithm. The efficiency of the proposed algorithm has also been illustrated by several numerical experiments.
References
Alber, Y., Iusem, A.N.: Extension of subgradient techniques for nonsmooth optimization in Banach spaces. Set-Valued Anal. 9, 315–335 (2001)
Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14, 773–782 (2004)
Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001)
Attouch, H., Goudon, X., Redont, P.: The heavy ball with friction. I. The continuous dynamical system. Commun. Contemp. Math. 2, 1–34 (2000)
Attouch, H., Czarnecki, M. O.: Asymptotic control and stabilization of nonlinear oscillators with non-isolated equilibria. J. Differ. Equ. 179, 278–310 (2002)
Bot, R. I., Csetnek, E. R., Laszlo, S.C.: An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions. EURO J. Comput. Optim. 4, 3–25 (2016)
Bot, R. I., Csetnek, E. R.: An inertial Tseng’s type proximal algorithm for nonsmooth and nonconvex optimization problems. J. Optim. Theory Appl. 171, 600–616 (2016)
Bot, R. I., Csetnek, E. R.: An inertial forward-backward-forward primal-dual splitting algorithm for solving monotone inclusion problems. Numer. Algorithms 71, 519–540 (2016)
Bot, R. I., Csetnek, E. R., Hendrich, C.: Inertial Douglas-Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 256, 472–487 (2015)
Bot, R. I., Csetnek, E. R.: An inertial alternating direction method of multipliers. Minimax Theory Appl. 1, 29–49 (2016)
Bot, R. I., Csetnek, E. R.: A hybrid proximal-extragradient algorithm with inertial effects. Numer. Funct. Anal. Optim. 36, 951–963 (2015)
Cai, X., Gu, G., He, B.: On the \(O(1/t)\) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators. Comput. Optim. Appl. 57, 339–363 (2014)
Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
Censor, Y., Gibali, A., Reich, S.: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Meth. Softw. 26, 827–845 (2011)
Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 61, 1119–1132 (2012)
Ceng, L. C., Hadjisavvas, N., Wong, N. C.: Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 46, 635–646 (2010)
Chen, C., Ma, S., Yang, J.: A general inertial proximal point algorithm for mixed variational inequality problem. SIAM J. Optim. 25, 2120–2142 (2015)
Dong, Q. L., Jiang, D., Gibali, A.: A modified subgradient extragradient method for solving the variational inequality problem. Numer Algorithms. https://doi.org/10.1007/s11075-017-0467-x (2018)
Dong, Q. L., Cho, Y. J., Rassias, T. M.: The projection and contraction methods for finding common solutions to variational inequality problems. Optim. Lett. https://doi.org/10.1007/s11590-017-1210-1 (2017)
Dong, Q.L., Gibali, A., Jiang, D., Ke, S.H.: Convergence of projection and contraction algorithms with outer perturbations and their applications to sparse signals recovery. J. Fixed Point Theory Appl. https://doi.org/10.1007/s11784-018-0501-1 (2018)
Dong, Q. L., Cho, Y. J., Zhong, L. L., Rassias, Th. M.: Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 70, 687–704 (2018)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York (1984)
Harker, P. T., Pang, J. S.: A damped-Newton method for the linear complementarity problem. In: Allgower, E.L., Georg, K. (eds.) Computational Solution of Nonlinear Systems of Equations. AMS Lectures on Applied Mathematics 26, 265–284 (1990)
He, B. S.: A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 35, 69–76 (1997)
Kraikaew, R., Saejung, S.: Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 163, 399–412 (2014)
Korpelevich, G. M.: The extragradient method for finding saddle points and other problems. Ekonomika i Matematicheskie Metody 12, 747–756 (1976)
Liu, L. S.: Ishikawa and Mann iteration process with errors for nonlinear strongly accretive mappings in Banach spaces. J. Math. Anal. Appl. 194, 114–125 (1995)
Gibali, A., Thong, D. V., Tuan, P. A.: Two simple projection-type methods for solving variational inequalities. Anal. Math. Phys. https://doi.org/10.1007/s13324-019-00330-w (2019)
Maingé, P. E.: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47, 1499–1515 (2008)
Maingé, P. E., Gobinddass, M. L.: Convergence of one step projected gradient methods for variational inequalities. J. Optim. Theory Appl. 171, 146–168 (2016)
Maingé, P. E.: Inertial iterative process for fixed points of certain quasi-nonexpansive mappings. Set Valued Anal. 15, 67–79 (2007)
Maingé, P. E.: Regularized and inertial algorithms for common fixed points of nonlinear operators. J. Math. Anal. Appl. 34, 876–887 (2008)
Malitsky, Y. V., Semenov, V. V.: A hybrid method without extrapolation step for solving variational inequality problems. J. Glob. Optim. 61, 193–202 (2015)
Malitsky, Y. V.: Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim. 25, 502–520 (2015)
Moudafi, A., Elisabeth, E.: An approximate inertial proximal method using enlargement of a maximal monotone operator. Int. J. Pure Appl. Math. 5, 283–299 (2003)
Moudafi, A., Oliny, M.: Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 155, 447–454 (2003)
Nadezhkina, N., Takahashi, W.: Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings. SIAM J. Optim. 16, 1230–1241 (2006)
Polyak, B. T.: Some methods of speeding up the convergence of iteration methods. Zh. Vychisl. Mat. Mat. Fiz. 4, 1–17 (1964)
Reich, S.: Constructive Techniques for Accretive and Monotone Operators. Applied Nonlinear Analysis, pp 335–345. Academic Press, New York (1979)
Solodov, M. V., Svaiter, B. F.: A new projection method for variational inequality problems. SIAM J. Control Optim. 37, 765–776 (1999)
Sun, D. F.: A class of iterative methods for solving nonlinear projection equations. J. Optim. Theory Appl. 91, 123–140 (1996)
Thong, D. V., Hieu, D. V.: Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 78, 1045–1060 (2018)
Thong, D. V., Hieu, D. V.: Modified subgradient extragradient method for inequality variational problems. Numer. Algorithms 79, 597–610 (2018)
Thong, D. V., Vuong, P. T.: Modified Tseng’s extragradient methods for solving pseudo-monotone variational inequalities. Optimization. https://doi.org/10.1080/02331934.2019.1616191 (2019)
Thong, D.V., Vinh, N.T., Cho, Y.J.: Accelerated subgradient extragradient methods for variational inequality problems. J. Sci. Comput. https://doi.org/10.1007/s10915-019-00984-5 (2019)
Thong, D. V., Vinh, N. T., Cho, Y. J.: A strong convergence theorem for Tseng’s extragradient method for solving variational inequality problems. Optim Lett. https://doi.org/10.1007/s11590-019-01391-3 (2019)
Thong, D. V., Vinh, N. T., Hieu, D. V.: Accelerated hybrid and shrinking projection methods for variational inequality problems. Optimization 68, 981–998 (2019)
Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)
Wang, F. H., Xu, H. K.: Weak and strong convergence theorems for variational inequality and fixed point problems with Tseng’s extragradient method. Taiwanese J. Math. 16, 1125–1136 (2012)
Xiu, N. H., Zhang, J. Z.: Some recent advances in projection-type methods for variational inequalities. J. Comput. Appl. Math. 152, 559–587 (2003)
Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)
Wang, Y. M., Xiao, Y. B., Wang, X., Cho, Y. J.: Equivalence of well-posedness between systems of hemivariational inequalities and inclusion problems. J. Nonlinear Sci. Appl. 9, 1178–1192 (2016)
Xiao, Y. B., Huang, N. J., Wong, M. M.: Well-posedness of hemivariational inequalities and inclusion problems. Taiwan. J. Math. 15, 1261–1276 (2011)
Xiao, Y. B., Huang, N. J., Cho, Y. J.: A class of generalized evolution variational inequalities in Banach spaces. Appl. Math. Lett. 25, 914–920 (2012)
Acknowledgments
The authors would like to thank Professor Aviv Gibali and two anonymous reviewers for their comments on the manuscript which helped us very much in improving and presenting the original version of this paper.
Thong, D.V., Vinh, N.T. & Cho, Y.J. New strong convergence theorem of the inertial projection and contraction method for variational inequality problems. Numer Algor 84, 285–305 (2020). https://doi.org/10.1007/s11075-019-00755-1
Keywords
- Inertial projection and contraction method
- Viscosity method
- Variational inequality problem
- Monotone operator