Abstract
In this paper we study a classical monotone and Lipschitz continuous variational inequality in real Hilbert spaces. Two projection-type methods, a Mann-type scheme and its viscosity generalization, are introduced together with their strong convergence theorems. Our methods generalize and extend some related results in the literature, and their main advantages are the strong convergence and an adaptive step-size rule which avoids the need to know the Lipschitz constant of the operator associated with the variational inequality a priori. Preliminary numerical experiments in finite and infinite dimensional spaces compare and illustrate the behavior of the proposed schemes.
1 Introduction
In this paper, we study the classical Variational Inequality (VI) of Fichera [14, 15] in real Hilbert spaces. The VI is formulated as follows: Find a point \(x^*\in C\) such that
$$\begin{aligned} \langle Ax^*,x-x^*\rangle \ge 0 \ \ \ \forall x\in C, \end{aligned}$$(1)
where \(C\subseteq H\) is a nonempty, closed and convex subset of a real Hilbert space H and \(A: H\rightarrow H\) is a given mapping. We denote by VI(C, A) the solution set of the VI (1).
Variational inequalities are fundamental problems which stand at the core of diverse applied fields such as economics, engineering mechanics and transportation; see for example [2, 3, 20], just to name a few. In the last decades, many iterative methods have been constructed for solving variational inequalities and their related optimization problems; see for example the excellent books of Facchinei and Pang [13] and Konnov [20] and the many references therein.
The simplest method for solving VIs, derived from optimization theory, is known as the gradient method (GM). The iterative step of this method requires the calculation of the orthogonal projection onto the feasible set C of the VI per iteration. Given the current iterate \(x_n\), the algorithm’s iterative step has the following form:
$$\begin{aligned} x_{n+1}=P_C(x_n-\tau Ax_n), \end{aligned}$$(2)
where \(\tau \in (0,\dfrac{1}{L})\), L is the Lipschitz constant of A and \(P_C\) denotes the metric projection onto C. The gradient method (2) converges under Lipschitz continuity and some restrictive monotonicity assumption, such as strong monotonicity or inverse strong monotonicity; see for example [18]. Korpelevich [21] (and, independently, Antipin [1]) proposed a double-projection method, known as the extragradient method (EM), which enables convergence in Euclidean spaces under Lipschitz continuity and plain monotonicity. Given the current iterate \(x_n\), the algorithm’s iterative step has the following form:
$$\begin{aligned} y_n=P_C(x_n-\tau Ax_n), \ \ \ x_{n+1}=P_C(x_n-\tau Ay_n), \end{aligned}$$(3)
where \(\tau \) and \(P_C\) are as above. This method is studied intensively and extended and improved in various ways, for example see, e.g. [6,7,8,9,10, 25, 26, 31, 34, 35] and the references therein.
Although the extragradient method converges under a weaker monotonicity assumption than the gradient method, it requires the calculation of two projections onto C per iteration. So, when the set C is not “easy” to project onto, a minimum distance subproblem has to be solved twice per iteration in order to evaluate \(P_C\), a fact which might affect the applicability and computational complexity of the method.
In a direction to overcome this obstacle, Censor et al. [7,8,9] introduced the so-called subgradient extragradient method (SEM). In this algorithm, the second projection onto C is replaced by an easily computed projection onto a constructible half-space \(T_n\) which contains C. Given the current iterate \(x_n\), the algorithm’s iterative step has the following form:
$$\begin{aligned} y_n&=P_C(x_n-\tau Ax_n), \\ T_n&=\{w\in H \mid \langle x_n-\tau Ax_n-y_n,w-y_n\rangle \le 0\}, \\ x_{n+1}&=P_{T_n}(x_n-\tau Ay_n), \end{aligned}$$(4)
where \(\tau \in (0,1/L)\).
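A sketch of the SEM iteration in Python, using the closed-form projection onto the half-space \(T_n\); the operator \(A(x)=x-b\) and the box feasible set are illustrative choices (any monotone Lipschitz A would do).

```python
import numpy as np

b = np.array([2.0, 0.5])
A = lambda x: x - b                    # illustrative monotone, 1-Lipschitz operator
P_C = lambda x: np.clip(x, 0.0, 1.0)   # projection onto C = [0,1]^2

def proj_halfspace(w, a, y):
    """Project w onto T = {v : <a, v - y> <= 0} (closed form)."""
    s = a @ (w - y)
    if s <= 0.0 or a @ a == 0.0:       # w already in T, or T is the whole space
        return w
    return w - (s / (a @ a)) * a

tau = 0.5
x = np.zeros(2)
for _ in range(200):
    y = P_C(x - tau * A(x))
    a = x - tau * A(x) - y             # normal vector defining T_n, with C ⊆ T_n
    x = proj_halfspace(x - tau * A(y), a, y)

print(x)   # approaches the solution (1, 0.5) of this test VI
```

The second projection is now a one-line formula, so only one genuine projection onto C is needed per iteration.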
Another method which uses only one projection onto C is the projection and contraction method (PC) of He [17] (see also Sun [32]). In this method, the point \(y_n\) is calculated in the same spirit as (3), but the next iterate \(x_{n+1}\) is calculated via some adaptive step-size rules. Given the current iterate \(x_n\), the algorithm’s iterative step has the following form:
$$\begin{aligned} y_n=P_C(x_n-\tau _n Ax_n), \end{aligned}$$
and then the next iterate \(x_{n+1}\) is generated via the following PC-algorithm:
$$\begin{aligned} x_{n+1}=x_n-\gamma \eta _n d_n, \end{aligned}$$
where \(\gamma \in (0,2)\), \(\tau _n\in (0,1/L)\) (or \(\tau _n\) is updated by some self-adaptive rule),
$$\begin{aligned} d_n:=x_n-y_n-\tau _n(Ax_n-Ay_n) \end{aligned}$$
and
$$\begin{aligned} \eta _n:=\dfrac{\langle x_n-y_n,d_n\rangle }{\Vert d_n\Vert ^2}. \end{aligned}$$
Recently, projection and contraction type methods for solving VIs have received great attention by many authors, see, e.g., [4, 11, 12], just to name a few.
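A minimal Python sketch of a projection-and-contraction iteration in the spirit of He [17], with the classical step-size choice \(\eta _n=\langle x_n-y_n,d_n\rangle /\Vert d_n\Vert ^2\); the operator and feasible set are illustrative, not from the paper.

```python
import numpy as np

b = np.array([2.0, 0.5])
A = lambda x: x - b                     # illustrative monotone, 1-Lipschitz operator
P_C = lambda x: np.clip(x, 0.0, 1.0)    # projection onto C = [0,1]^2

tau, gamma = 0.5, 1.5                   # tau in (0, 1/L), gamma in (0, 2)
x = np.zeros(2)
for _ in range(500):
    y = P_C(x - tau * A(x))
    d = x - y - tau * (A(x) - A(y))
    nd2 = d @ d
    if nd2 == 0.0:                      # d_n = 0  <=>  x_n = y_n solves the VI
        break
    eta = (x - y) @ d / nd2
    x = x - gamma * eta * d             # contraction step, no second projection

print(x)   # approaches the solution (1, 0.5) of this test VI
```

Only one projection onto C is computed per iteration; the update direction \(d_n\) does the work of the second projection.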
Since the SEM and PC algorithms were originally introduced in Euclidean spaces, a natural question is how to extend them to infinite dimensional spaces and obtain strong convergence. In 2012, Censor et al. [8] proposed two subgradient extragradient variants which converge strongly in real Hilbert spaces. One of the SEM variants has the following form. Given the current iterate \(x_n\), the next iterate \(x_{n+1}\) is calculated via the following.
Inspired by the results in [8], Kraikaew and Saejung [22] combined the subgradient extragradient method and the Halpern-type method and proposed the so-called Halpern subgradient extragradient method. Given the current iterate \(x_n\), the next iterate \(x_{n+1}\) is calculated via the following:
$$\begin{aligned} y_n=P_C(x_n-\tau Ax_n), \ \ \ z_n=P_{T_n}(x_n-\tau Ay_n), \ \ \ x_{n+1}=\alpha _n x_0+(1-\alpha _n)z_n, \end{aligned}$$(7)
where \(\tau \in (0,\dfrac{1}{L})\), \(\{\alpha _n\}\subset (0,1)\), \(\lim _{n\rightarrow \infty } \alpha _n=0\), \(\sum _{n=1}^\infty \alpha _n =+\infty \) and \(x_0\in H\). Similar to (6) of Censor et al. [8], (7) converges strongly to the specific point \(p=P_{VI(C,A)}x_0.\)
Two other very recent and related viscosity-type methods, which are also used for comparison with our methods in Sect. 4, are those of Shehu and Iyiola [30, Algorithm 3.1] and Thong and Hieu [33, Algorithm 3].
The setting of Shehu and Iyiola [30, Algorithm 3.1] is as follows. Given \(\rho ,\mu \in (0,1)\), let \(\{\alpha _n\}_{n=0}^\infty \subset (0,1)\), let f be a contraction and choose an arbitrary starting point \(x_1\in H\). Given the current iterate \(x_n\), calculate
where \(\lambda _n=\rho ^{l_n}\) and \(l_n\) is the smallest nonnegative integer l such that
where \(r_{\rho ^{l_n}}(x_n):=x_n-P_C(x_n-\rho ^{l_n} Ax_n)\). Construct the set \(T_n\) as in (4) and compute
and calculate the next iterate as follows.
The setting of Thong and Hieu [33, Algorithm 3] is as follows. Given \(\rho \in [0,1)\), \(\mu ,l\in (0,1)\) and \(\gamma >0\), let \(\{\alpha _n\}_{n=0}^\infty \subset (0,1)\), let f be a contraction and choose an arbitrary starting point \(x_1\in H\). Given the current iterate \(x_n\), calculate
where \(\lambda _n\) is chosen to be the largest \(\lambda \in \{\gamma ,\gamma l, \gamma l^2,\cdots \}\) satisfying
Calculate the next iterate as follows.
where \(z_n=y_n-\lambda _n (Ay_n-Ax_n)\).
Motivated and inspired by the above results and the ongoing research in these directions, we suggest two modified projection-type methods, a Mann-type [27] and a viscosity-type [28] method, for solving monotone and Lipschitz continuous variational inequalities, which converge strongly in real Hilbert spaces and do not require the knowledge of the Lipschitz constant of A a priori.
The paper is organized as follows. We first recall some basic definitions and results in Sect. 2. Our algorithms are presented and analysed in Sect. 3. In Sect. 4 we present some numerical experiments which demonstrate the algorithms' performance, as well as provide a preliminary computational overview by comparing them with some related algorithms. Final remarks and conclusions are given in Sect. 5.
2 Preliminaries
Let H be a real Hilbert space and C be a nonempty, closed and convex subset of H. The weak convergence of \(\{x_n\}_{n=1}^{\infty }\) to x is denoted by \(x_{n}\rightharpoonup x\) as \(n\rightarrow \infty \), while the strong convergence of \(\{x_n\}_{n=1}^{\infty }\) to x is written as \(x_n\rightarrow x\) as \(n\rightarrow \infty .\) For each \(x,y\in H\) and \(\alpha \in \mathbb {R}\), we have
for all \(x, y, z \in H\) and for all \(\alpha , \beta , \gamma \in [0,1]\) with \(\alpha + \beta + \gamma = 1.\)
Definition 2.1
Let \(T:H\rightarrow H\) be an operator. Then
1. The operator T is called L-Lipschitz continuous with \(L>0\) if
$$\begin{aligned} \Vert Tx-Ty\Vert \le L \Vert x-y\Vert \ \ \ \forall x,y \in H. \end{aligned}$$(13)
If \(L=1\), then the operator T is called nonexpansive, and if \(L\in (0,1)\), T is called a contraction.
2. T is called monotone if
$$\begin{aligned} \langle Tx-Ty,x-y \rangle \ge 0 \ \ \ \forall x,y \in H. \end{aligned}$$(14)
3. The fixed point set of T, denoted by Fix(T), is defined as follows:
$$\begin{aligned} Fix(T):=\{x\in H \mid Tx=x\}. \end{aligned}$$(15)
For every point \(x\in H\), there exists a unique nearest point in C, denoted by \(P_Cx\) such that \(\Vert x-P_Cx\Vert \le \Vert x-y\Vert \ \forall y\in C\). \(P_C\) is called the metric projection of H onto C. It is known that \(P_C\) is nonexpansive.
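Two standard closed-form projections (onto a ball and onto a box) illustrate the metric projection numerically, together with a spot-check of the variational characterization stated in Lemma 2.1 below; the data are illustrative.

```python
import numpy as np

def proj_ball(x, c, r):
    """Projection onto the ball {y : ||y - c|| <= r}."""
    d = x - c
    n = np.linalg.norm(d)
    return x if n <= r else c + (r / n) * d

def proj_box(x, lo, hi):
    """Projection onto the box {y : lo <= y <= hi} (componentwise clip)."""
    return np.clip(x, lo, hi)

# Project (3, 4) onto the unit ball: the result is (0.6, 0.8).
x = np.array([3.0, 4.0])
z = proj_ball(x, np.zeros(2), 1.0)
print(z)

# Lemma 2.1 characterization: <x - z, z - y> >= 0 for all y in C.
rng = np.random.default_rng(0)
for _ in range(100):
    y = proj_ball(rng.uniform(-1, 1, 2), np.zeros(2), 1.0)  # random y in C
    assert (x - z) @ (z - y) >= -1e-12
```

The inequality check is a finite sample, of course, not a proof; it is only meant to make the geometric meaning of Lemma 2.1 concrete.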
Lemma 2.1
[16] Let C be a nonempty closed convex subset of a real Hilbert space H. Given \(x\in H\) and \(z\in C\). Then \(z=P_Cx\Longleftrightarrow \langle x-z,z-y\rangle \ge 0 \ \ \forall y\in C.\)
Lemma 2.2
[16] Let C be a closed and convex subset in a real Hilbert space H, \(x\in H\). Then
i) \(\Vert P_Cx-P_Cy\Vert ^2\le \langle P_C x-P_C y,x-y\rangle \ \forall y\in C\);
ii) \(\Vert P_C x-y\Vert ^2\le \Vert x-y\Vert ^2-\Vert x-P_Cx\Vert ^2 \ \forall y\in C\);
iii) \(\langle (I-P_C)x-(I-P_C)y,x-y\rangle \ge \Vert (I-P_C)x-(I-P_C)y\Vert ^2 \ \forall y\in C.\)
For further properties of the metric projection, the interested reader is referred to Section 3 in [16].
The following lemmas are useful for the convergence analysis of our proposed methods.
Lemma 2.3
[22] Let \(A:H\rightarrow H\) be a monotone and L-Lipschitz continuous mapping on C. Let \(S=P_C(I-\tau A)\), where \(\tau >0\). If \(\{x_n\}\) is a sequence in H satisfying \(x_n\rightharpoonup q\) and \(x_n -Sx_n\rightarrow 0\) then \(q\in VI(C,A)=Fix(S)\).
Lemma 2.4
[24] Let \(\{a_n\}\) be a sequence of nonnegative real numbers such that there exists a subsequence \(\{a_{n_j}\}\) of \(\{a_n\}\) with \(a_{n_{j}}<a_{n_{j}+1}\) for all \(j\in \mathbb {N}\). Then there exists a nondecreasing sequence \(\{m_k\}\) in \(\mathbb {N}\) such that \(\lim _{k\rightarrow \infty }m_k=\infty \) and the following properties are satisfied for all (sufficiently large) \(k\in \mathbb {N}\):
$$\begin{aligned} a_{m_k}\le a_{m_k+1} \ \ \ \text { and } \ \ \ a_{k}\le a_{m_k+1}. \end{aligned}$$
In fact, \(m_k\) is the largest number n in the set \(\{1,2,\cdots ,k\}\) such that \(a_n<a_{n+1}\).
The next technical lemma is very useful and has been used by many authors, for example Liu [23] and Xu [36]. Furthermore, a variant of Lemma 2.5 was already used by Reich in [29].
Lemma 2.5
Let \(\{a_n\}\) be a sequence of nonnegative real numbers such that
$$\begin{aligned} a_{n+1}\le (1-\alpha _n)a_n+\alpha _n b_n, \ \ \ n\ge 0, \end{aligned}$$
where \(\{\alpha _n\}\subset (0,1)\) and \(\{b_n\}\) is a sequence such that
a) \(\sum _{n=0}^\infty \alpha _n=\infty \);
b) \(\limsup _{n\rightarrow \infty }b_n\le 0.\)
Then \(\lim _{n\rightarrow \infty }a_n=0.\)
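A quick numerical illustration of Lemma 2.5 with illustrative parameter choices: taking \(\alpha _n=1/(n+2)\) (so that \(\sum _n \alpha _n=\infty \)) and \(b_n=1/(n+2)\rightarrow 0\), the recursion \(a_{n+1}=(1-\alpha _n)a_n+\alpha _n b_n\) drives \(a_n\) to 0, as the lemma predicts.

```python
# Illustrative instance of the recursion in Lemma 2.5.
a = 1.0
for n in range(200_000):
    alpha = 1.0 / (n + 2)   # alpha_n in (0,1), sum alpha_n = infinity
    b = 1.0 / (n + 2)       # b_n -> 0, so limsup b_n <= 0
    a = (1 - alpha) * a + alpha * b

print(a)   # small: a_n -> 0, consistent with the lemma
```

The decay is slow here (roughly \(\log n/n\)), which is typical when \(\alpha _n\) is not summable; the lemma guarantees only convergence, not a rate.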
3 Main results
In this section we introduce our two modified projection-type methods for solving VIs. For the convergence analysis of the methods, we assume the following conditions.
Condition 3.1
The operator \(A:H\rightarrow H\) associated with the VI (1) is monotone and L-Lipschitz continuous on H.
Condition 3.2
The solution set of the VI (1) is nonempty, that is \(VI(C,A)\ne \emptyset \).
Condition 3.3
Let \(\{\alpha _n\}\) and \(\{\beta _n\}\) be two real sequences in (0, 1) such that \(\{\beta _n\}\subset (a,b)\subset (0,1-\alpha _n)\) for some \(a>0, b>0\) and
$$\begin{aligned} \lim _{n\rightarrow \infty }\alpha _n=0, \ \ \ \sum _{n=1}^{\infty }\alpha _n=\infty . \end{aligned}$$
3.1 Mann-type projection algorithm
We start the analysis of the algorithm’s convergence by proving the validity of the stopping criterion.
Lemma 3.1
Assume that Conditions 3.1–3.2 hold. Then the Armijo-like search rule (16) is well defined and \(\min \Big \{\lambda ,\dfrac{\mu l}{L}\Big \}\le \tau _n\le \lambda .\)
Proof
See e.g., Lemma 3.1 in [33]. \(\square \)
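A sketch of the Armijo-like rule (16) as we read it from [33]: \(\tau _n\) is the largest \(\tau \in \{\lambda ,\lambda l,\lambda l^2,\dots \}\) satisfying \(\tau \Vert Ax_n-Ay_\tau \Vert \le \mu \Vert x_n-y_\tau \Vert \), where \(y_\tau =P_C(x_n-\tau Ax_n)\). The operator and feasible set below are illustrative test data.

```python
import numpy as np

def armijo_stepsize(x, A, P_C, lam=7.55, l=0.5, mu=0.85, max_iter=100):
    """Largest tau in {lam, lam*l, lam*l^2, ...} such that
    tau * ||A x - A y_tau|| <= mu * ||x - y_tau||,  y_tau = P_C(x - tau * A x)."""
    tau = lam
    for _ in range(max_iter):
        y = P_C(x - tau * A(x))
        if tau * np.linalg.norm(A(x) - A(y)) <= mu * np.linalg.norm(x - y):
            return tau, y
        tau *= l
    return tau, P_C(x - tau * A(x))

# Illustrative data: A(x) = x - b is 1-Lipschitz, C = [0,1]^2.
b = np.array([2.0, 0.5])
A = lambda x: x - b
P_C = lambda x: np.clip(x, 0.0, 1.0)

tau, y = armijo_stepsize(np.zeros(2), A, P_C)
print(tau)   # here ||Ax - Ay|| = ||x - y||, so the rule returns the first tau <= mu
```

Note that no Lipschitz constant enters the routine; Lemma 3.1 guarantees the backtracking loop terminates whenever A is Lipschitz continuous.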
Lemma 3.2
Let \(\{d_n\}\) be a sequence generated by Algorithm 3.1. Then \(d_n=0\) if and only if \(x_n=y_n\).
Proof
Indeed, we will prove that
We have
and it is also easy to see that
Combining (18) and (19) we obtain
It follows from (17) that \(d_n=0\) if and only if \(x_n=y_n.\) \(\square \)
Remark 3.1
Lemma 3.2 justifies the following stopping criterion: if \(d_n=0\), then stop and \(y_n\) is a solution of VI(C, A).
Lemma 3.3
Assume that Conditions 3.1 and 3.2 hold. Let \(\{z_n\}\) be a sequence generated by Algorithm 3.1. Then
Proof
Using (16) we have
On the other hand, since \(y_n=P_C(x_n-\tau _n Ax_n)\) we get
By the monotonicity of A and \(p\in VI(C,A)\) we have
Combining (21) and (24) we get
On the other hand, we have
It follows from (25) and (26) that
Since \(\eta _n=(1-\mu )\dfrac{\Vert x_n-y_n\Vert ^2}{\Vert d_n\Vert ^2}\), it follows that \(\Vert x_n-y_n\Vert ^2=\dfrac{\eta _n \Vert d_n\Vert ^2}{1-\mu }\). Thus,
\(\square \)
Lemma 3.4
Assume that Conditions 3.1–3.2 hold and let the sequence \(\{x_n\}\) be generated by Algorithm 3.1. Then
Proof
We have
On the other hand, from (17) we get
thus,
It follows from (28) and (29) that
\(\square \)
Theorem 3.1
Assume that Conditions 3.1–3.3 hold. Then any sequence \(\{x_n\}\) generated by Algorithm 3.1 converges strongly to \(p\in VI(C,A)\), where \(\Vert p\Vert =\min \{\Vert z\Vert : z\in VI(C,A)\}\).
Proof
Thanks to Lemma 3.3 we get
Claim 1. We prove that the sequence \(\{x_n\}\) is bounded. We have
On the other hand, using (30) we get
This implies that
That is, the sequence \(\{x_n\}\) is bounded, and so is \(\{z_n\}\).
Claim 2. We show that
Indeed, using (12) we have
which, together with Lemma 3.3, gives
Therefore, we get
Claim 3. We show that
Indeed, set \(t_n=(1-\beta _n)x_n+\beta _n z_n\). We have
and
Claim 4. Now, we will show that the sequence \(\{\Vert x_n-p\Vert ^2\}\) converges to zero by considering two possible cases on the sequence \(\{\Vert x_n-p\Vert ^2\}\).
Case 1: There exists an \(N\in {{\mathbb {N}}}\) such that \(\Vert x_{n+1}-p\Vert ^2\le \Vert x_n-p\Vert ^2\) for all \(n\ge N.\) This implies that \(\lim _{n\rightarrow \infty }\Vert x_n-p\Vert ^2\) exists. It then follows from Claim 2 that
which, together with Lemma 3.4, gives
We also have
Since \(\{x_n\}\) is bounded we assume that there exists a subsequence \(\{x_{n_j}\}\) of \(\{x_n\}\) such that \(x_{n_j}\rightharpoonup q\) and
Since \(x_{n_j}\rightharpoonup q\), \(\min \{\lambda ,\dfrac{\mu l}{L}\}\le \tau _n\le \lambda \) and \(\Vert x_n-y_n\Vert =\Vert x_n-P_C(x_n-\tau _n Ax_n)\Vert \rightarrow 0\), Lemma 2.3 yields \(q\in VI(C,A).\)
Since \(q\in VI(C,A) \) and \(\Vert p\Vert =\min \{ \Vert z\Vert : z\in VI(C,A)\}\), that is, \(p=P_{VI(C,A)}0\), we obtain
By \(\Vert x_{n+1}-x_n\Vert \rightarrow 0\) we get
Therefore by Claim 3 and Lemma 2.5 we get \(\lim _{n\rightarrow \infty }\Vert x_{n}-p\Vert ^2=0\), that is \(x_n\rightarrow p.\)
Case 2: There exists a subsequence \(\{\Vert x_{n_j}-p\Vert ^2\}\) of \(\{\Vert x_{n}-p\Vert ^2\}\) such that \(\Vert x_{n_j}-p\Vert ^2 < \Vert x_{n_j+1}-p\Vert ^2\) for all \(j\in \mathbb {N}\). In this case, it follows from Lemma 2.4 that there exists a nondecreasing sequence \(\{m_k\}\) of \(\mathbb {N}\) such that \(\lim _{k\rightarrow \infty }m_k=\infty \) and the following inequalities hold for all \(k\in \mathbb {N}\):
By \(\{\beta _n\}\subset (a,b)\) and Claim 2, we have
Therefore, we get
As proved in the first case, we obtain
and
By Claim 3 we have
This implies that
Therefore, we obtain \(\limsup _{k\rightarrow \infty }\Vert x_k-p\Vert \le 0\), that is \(x_k\rightarrow p\). The proof is completed. \(\square \)
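For concreteness, here is a Python sketch of a Mann-type iteration in the spirit of Algorithm 3.1, as we read it from Lemmas 3.1–3.4 and the proof of Theorem 3.1: the Armijo step size (16), the projection-and-contraction point \(z_n=x_n-\gamma \eta _n d_n\) with \(\eta _n=(1-\mu )\Vert x_n-y_n\Vert ^2/\Vert d_n\Vert ^2\), and a Mann update which we take to be \(x_{n+1}=(1-\alpha _n-\beta _n)x_n+\beta _n z_n\) (our reading; check against the published algorithm). The operator, feasible set and parameters are illustrative assumptions, not the paper's data.

```python
import numpy as np

b = np.array([2.0, 0.5])
A = lambda x: x - b                     # illustrative monotone, 1-Lipschitz operator
P_C = lambda x: np.clip(x, 0.0, 1.0)    # C = [0,1]^2; here VI(C,A) = {(1, 0.5)}

lam, l, mu, gamma = 7.55, 0.5, 0.85, 1.5

def step(x, n):
    # Armijo-like rule (16): backtrack until tau*||Ax - Ay|| <= mu*||x - y||
    tau = lam
    y = P_C(x - tau * A(x))
    while tau * np.linalg.norm(A(x) - A(y)) > mu * np.linalg.norm(x - y):
        tau *= l
        y = P_C(x - tau * A(x))
    d = x - y - tau * (A(x) - A(y))
    nd2 = d @ d
    if nd2 == 0.0:                      # x = y: current point already solves the VI
        return x
    eta = (1 - mu) * ((x - y) @ (x - y)) / nd2
    z = x - gamma * eta * d             # projection-and-contraction point
    alpha, beta = 1.0 / (n + 2), 0.4    # Condition 3.3-style parameters
    return (1 - alpha - beta) * x + beta * z   # Mann-type update

x = np.zeros(2)
for n in range(50_000):
    x = step(x, n)

print(x)   # approaches the minimum-norm solution (1, 0.5)
```

Since the vanishing weight \(\alpha _n\) is given to the zero vector, the iterates drift toward the minimum-norm element of VI(C, A), matching the limit point in Theorem 3.1.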
3.2 Viscosity projection type algorithm
In this section, we propose our viscosity projection type algorithm for solving variational inequalities, with the usage of a \(\rho \)-contraction \(f:H\rightarrow H\).
Theorem 3.2
Assume that Conditions 3.1–3.2 hold and given a \(\rho \)-contraction \(f:H\rightarrow H\). Assume that \(\{\alpha _n\}\) is a real sequence in (0, 1) such that
Then any sequence \(\{x_n\}\) generated by Algorithm 3.2 converges strongly to an element \(p\in VI(C,A)\), where \(p=P_{ VI(C,A)}\circ f(p)\).
Proof
Claim 1. We prove that \(\{x_n\}\) is bounded. Indeed, according to Lemma 3.3 we have
Using (42) we obtain
This implies that the sequence \(\{x_n\}\) is bounded. Consequently, \(\{f(x_n)\}, \{y_n\}\) and \(\{z_n\}\) are bounded.
Claim 2. We show that
Indeed, using (11) and Lemma 3.3 we have
This implies that
Claim 3. We show that
Indeed, using (10) and (42) we have
Claim 4. Now, we will show that the sequence \(\{\Vert x_n-p\Vert ^2\}\) converges to zero by considering two possible cases on the sequence \(\{\Vert x_n-p\Vert ^2\}\).
Case 1: There exists an \(N\in {{\mathbb {N}}}\) such that \(\Vert x_{n+1}-p\Vert ^2\le \Vert x_n-p\Vert ^2\) for all \(n\ge N.\) This implies that \(\lim _{n\rightarrow \infty }\Vert x_n-p\Vert ^2\) exists.
By Claim 2 and \(\lim _{n\rightarrow \infty }\alpha _n=0\) we get
and by Lemma 3.4
We also have
Since the sequence \(\{x_n\}\) is bounded, there exists a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) which converges weakly to some \(z\in H\) and such that
From (45) and Lemma 2.3 we have \(z\in VI(C,A)\).
By the definition of p and \(z\in VI(C,A)\) we have
which, together with (46) and (47), gives
Using Lemma 2.5, (49) and Claim 3 we obtain \(x_n\rightarrow p.\)
Case 2. There exists a subsequence \(\{\Vert x_{n_j}-p\Vert ^2\}\) of \(\{\Vert x_{n}-p\Vert ^2\}\) such that \(\Vert x_{n_j}-p\Vert ^2 < \Vert x_{n_j+1}-p\Vert ^2\) for all \(j\in \mathbb {N}\). In this case, it follows from Lemma 2.4 that there exists a nondecreasing sequence \(\{m_k\}\) of \(\mathbb {N}\) such that \(\lim _{k\rightarrow \infty }m_k=\infty \) and the following inequalities hold for all \(k\in \mathbb {N}\):
and
According to Claim 2 we get
We obtain
and by Lemma 3.4 we get
Using the same arguments as in the proof of Case 1, we obtain
Thanks to Claim 3, we have
together with (50), we deduce that
It follows that
Combining (51), (54) and (56) we get
that is \(x_k \rightarrow p\). The proof is completed. \(\square \)
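Analogously, a hedged Python sketch of a viscosity iteration in the spirit of Algorithm 3.2, as we read it from Theorem 3.2 and its proof: the same Armijo/projection-and-contraction step produces \(z_n\), and the viscosity update is \(x_{n+1}=\alpha _n f(x_n)+(1-\alpha _n)z_n\) with a \(\rho \)-contraction f. The operator, set and f below are illustrative assumptions.

```python
import numpy as np

b = np.array([2.0, 0.5])
A = lambda x: x - b                     # illustrative monotone, 1-Lipschitz operator
P_C = lambda x: np.clip(x, 0.0, 1.0)    # C = [0,1]^2; here VI(C,A) = {(1, 0.5)}
f = lambda x: 0.5 * x                   # rho-contraction with rho = 1/2

lam, l, mu, gamma = 7.55, 0.5, 0.85, 1.5

x = np.zeros(2)
for n in range(50_000):
    # Armijo-like rule (16)
    tau = lam
    y = P_C(x - tau * A(x))
    while tau * np.linalg.norm(A(x) - A(y)) > mu * np.linalg.norm(x - y):
        tau *= l
        y = P_C(x - tau * A(x))
    d = x - y - tau * (A(x) - A(y))
    nd2 = d @ d
    if nd2 == 0.0:
        break                           # x = y: the VI is solved exactly
    eta = (1 - mu) * ((x - y) @ (x - y)) / nd2
    z = x - gamma * eta * d             # projection-and-contraction point
    alpha = 1.0 / (n + 2)
    x = alpha * f(x) + (1 - alpha) * z  # viscosity update

print(x)   # approaches the solution (1, 0.5)
```

Here the solution set is a singleton, so the fixed-point condition \(p=P_{VI(C,A)}\circ f(p)\) of Theorem 3.2 is satisfied by (1, 0.5) regardless of the particular contraction f.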
4 Numerical illustrations
In this section we present two numerical experiments which demonstrate the performance of our Mann-type and viscosity-type projection algorithms (Algorithms 3.1 and 3.2) in finite and infinite dimensional spaces. In both experiments the parameters are chosen as \(\lambda =7.55,\, l=0.5,\, \mu =0.85\) and \(\gamma =1.99\), \(\alpha _k=1/k, \beta _k=(k-1)/2k\).
Example 1
Suppose that \(H = L^2([0,1])\) with norm \(\Vert x\Vert :=\Big (\int _0^1 |x(t)|^2dt\Big )^{\frac{1}{2}}\) and inner product \(\langle x,y\rangle := \int _0^1 x(t)y(t)dt,~~ \forall x,y \in H\). Let \(C:=\{x \in H \mid \Vert x\Vert \le 1\}\) be the unit ball. Define the operator \(A:C\rightarrow H\) by \((Ax)(t)=\max (0,x(t))\). Then it can be easily verified that A is 2-Lipschitz continuous and monotone on C (see [19]). With these choices of C and A, the solution set of the variational inequality is \(\{0\} \ne \emptyset \). It is known that, see for example [5],
We implement our algorithms with different starting points \(x_1(t)\). We choose the stopping criterion \(||x_{n+1}-x_n||<\varepsilon \) with \(\varepsilon =10^{-30}\). The results are presented in Table 1 and Figs. 1 and 2.
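The ingredients of Example 1 are easy to discretize; the grid size and the Riemann-sum inner product below are our own illustrative choices.

```python
import numpy as np

# Discretize L^2([0,1]) on a uniform grid; <x, y> ≈ h * sum(x * y).
t = np.linspace(0.0, 1.0, 1000)
h = t[1] - t[0]
norm = lambda x: np.sqrt(h * np.sum(x * x))

P_C = lambda x: x / max(1.0, norm(x))   # projection onto the unit ball of L^2
A = lambda x: np.maximum(0.0, x)        # (Ax)(t) = max(0, x(t))

# Sanity checks: P_C maps into C, and A is monotone, on random samples.
rng = np.random.default_rng(1)
for _ in range(5):
    x = rng.standard_normal(t.size)
    y = rng.standard_normal(t.size)
    assert norm(P_C(x)) <= 1.0 + 1e-12
    assert h * np.sum((A(x) - A(y)) * (x - y)) >= 0.0
```

Monotonicity here is even pointwise: \((\max (0,a)-\max (0,b))(a-b)\ge 0\) for all real a, b, which is why the discrete check holds for every sample.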
Example 2
In this example we consider a nonlinear variational inequality with \(A:\mathbb {R}^m\rightarrow \mathbb {R}^m\) defined as \(Ax=Mx+Fx+q\), where M is an \(m\times m\) symmetric positive semi-definite matrix, q is a vector in \(\mathbb {R}^{m}\) and Fx is the proximal mapping of the function \(g(x)=\frac{1}{4}||x||^4\), i.e.,
The feasible set is a polyhedral convex set, given by \( C=\left\{ x\in \mathbb {R}^m \mid Qx\le b\right\} \), where \(Q\in \mathbb {R}^{r\times m}\) and \(b\in \mathbb {R}^r\). In this case, A is monotone and Lipschitz continuous with \(L=||M||+1\). All the entries of Q, M, q are generated randomly in \((-2,2)\) and those of b in (0, 1), \(m=100,r=10\), and we choose the stopping criterion \(||x_n-y_n||<\varepsilon \) with \(\varepsilon =10^{-5}\). The starting point is \(x_0=(1,1,\ldots ,1)\in \mathbb {R}^m\). The projections onto C and the evaluation of F are computed using the MATLAB solver fmincon. For comparison we choose two very recent viscosity-type methods, Shehu and Iyiola [30, Algorithm 3.1] and Thong and Hieu [33, Algorithm 3]. In all algorithms we take the contraction \(f(x)=x/2\). The numerical results are shown in Fig. 3 with respect to the logarithmic scale. In Fig. 4 we illustrate the performance of Algorithm 3.2 for different choices of the contraction \(f(x)=0.9x,0.75x,0.5x,0.25x\).
5 Conclusions
In this paper we proposed two projection-type methods, a Mann-type and a viscosity-type scheme [27, 28], for solving variational inequalities in real Hilbert spaces. Both algorithms converge strongly under monotonicity and Lipschitz continuity of the VI associated mapping A. The algorithms require the calculation of only one projection onto the VI's feasible set C per iteration, and, by using the projection and contraction technique, there is no need to know the Lipschitz constant of A in advance. These two properties emphasize the applicability of and advantages over several existing results in the literature. Numerical experiments in finite and infinite dimensional spaces compare and illustrate the performance of our new schemes.
References
Antipin, A.S.: On a method for convex programs using a symmetrical modification of the Lagrange function. Ekonomika i Mat. Metody. 12, 1164–1173 (1976)
Aubin, J.P., Ekeland, I.: Applied Nonlinear Analysis. Wiley, New York (1984)
Baiocchi, C., Capelo, A.: Variational and Quasivariational Inequalities, Applications to Free Boundary Problems. Wiley, New York (1984)
Cai, X., Gu, G., He, B.: On the \(O(1/t)\) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators. Comput. Optim. Appl. 57, 339–363 (2014)
Cegielski, A.: Iterative Methods for Fixed Point Problems in Hilbert Spaces. Lecture Notes in Mathematics, vol. 2057. Springer, Berlin (2012)
Ceng, L.C., Hadjisavvas, N., Wong, N.C.: Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 46, 635–646 (2010)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
Censor, Y., Gibali, A., Reich, S.: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Meth. Softw. 26, 827–845 (2011)
Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 61, 1119–1132 (2011)
Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 56, 301–323 (2012)
Dong, Q.L., Gibali, A., Jiang, D., Ke, S.H.: Convergence of projection and contraction algorithms with outer perturbations and their applications to sparse signals recovery. J. Fixed Point Theory Appl. 20, 16 (2018). https://doi.org/10.1007/s11784-018-0501-1
Dong, Q.L., Cho, Y.J., Zhong, L.L., Rassias, Th.M.: Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 70, 687–704 (2018)
Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer Series in Operations Research, vols. I and II. Springer, New York (2003)
Fichera, G.: Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, VIII Ser. Rend. Cl. Sci. Fis. Mat. Nat. 34, 138–142 (1963)
Fichera, G.: Problemi elastostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, Mem. Cl. Sci. Fis. Mat. Nat. Sez. I, VIII. Ser. 7, 91–140 (1964)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York (1984)
He, B.S.: A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 35, 69–76 (1997)
He, B.S., Liao, L.Z.: Improvements of some projection methods for monotone nonlinear variational inequalities. J. Optim. Theory Appl. 112, 111–128 (2002)
Hieu, D.V., Anh, P.K., Muu, L.D.: Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 66, 75–96 (2017)
Konnov, I.V.: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2001)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Ekonomika i Mat. Metody. 12, 747–756 (1976)
Kraikaew, R., Saejung, S.: Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 163, 399–412 (2014)
Liu, L.S.: Ishikawa and Mann iteration process with errors for nonlinear strongly accretive mappings in Banach space. J. Math. Anal. Appl. 194, 114–125 (1995)
Maingé, P.E.: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47, 1499–1515 (2008)
Malitsky, Y.V.: Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim. 25, 502–520 (2015)
Malitsky, Y.V., Semenov, V.V.: A hybrid method without extrapolation step for solving variational inequality problems. J. Glob. Optim. 61, 193–202 (2015)
Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506–510 (1953)
Moudafi, A.: Viscosity approximating methods for fixed point problems. J. Math. Anal. Appl. 241, 46–55 (2000)
Reich, S.: Constructive Techniques for Accretive and Monotone Operators. Applied Nonlinear Analysis, pp. 335–345. Academic Press, New York (1979)
Shehu, Y., Iyiola, O.S.: Strong convergence result for monotone variational inequalities. Numer. Algorithms 76, 259–282 (2017)
Solodov, M.V., Svaiter, B.F.: A new projection method for variational inequality problems. SIAM J. Control Optim. 37, 765–776 (1999)
Sun, D.F.: A class of iterative methods for solving nonlinear projection equations. J. Optim. Theory Appl. 91, 123–140 (1996)
Thong, D.V., Hieu, D.V.: Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 78, 1045–1060 (2018)
Thong, D.V., Hieu, D.V.: Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 79, 597–610 (2018)
Thong, D.V., Hieu, D.V.: Inertial extragradient algorithms for strongly pseudomonotone variational inequalities. J. Comput. Appl. Math. 341, 80–98 (2018)
Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Dedicated to Professor Le Dung Muu on the Occasion of his 70th Birthday.
Cite this article
Gibali, A., Thong, D.V. & Tuan, P.A. Two simple projection-type methods for solving variational inequalities. Anal.Math.Phys. 9, 2203–2225 (2019). https://doi.org/10.1007/s13324-019-00330-w
Keywords
- Projection-type method
- Variational inequality
- Mann-type method
- Viscosity method
- Projection and contraction method