Abstract
The purpose of this paper is to introduce a new modified subgradient extragradient method for finding an element of the set of solutions of the variational inequality problem for a pseudomonotone and Lipschitz continuous mapping in real Hilbert spaces. It is well known that, for the existing subgradient extragradient methods, the step size requires a line-search process or knowledge of the Lipschitz constant of the mapping, which restricts the applications of these methods. To overcome this barrier, in this work we present a modified subgradient extragradient method with adaptive step sizes that requires neither an extra projection nor an additional evaluation of the mapping. The proposed method uses only one projection onto the feasible set per iteration, and strong convergence is proved without prior knowledge of the Lipschitz constant of the variational inequality mapping. Numerical experiments illustrate the performance of our new algorithm and provide a comparison with related algorithms.
1 Introduction
In this paper, we consider the classical variational inequality problem (VI) of Fichera [15, 16] and Stampacchia [37] (see also Kinderlehrer and Stampacchia [21]) in real Hilbert spaces. The problem (VI) is formulated as follows: find \(x^*\in C\) such that
$$\begin{aligned} \langle Ax^*,x-x^*\rangle \ge 0, \quad \forall x\in C, \end{aligned}$$
(VI)
where C is a nonempty closed convex subset of a real Hilbert space H, \(A: H\rightarrow H\) is a single-valued mapping, and \(\langle \cdot ,\cdot \rangle \) and \(\Vert \cdot \Vert \) are the inner product and the norm in H, respectively. We denote by VI(C, A) the solution set of the problem (VI).
In recent years, techniques for the variational inequality problem have been applied to a variety of areas, such as operations research, nonlinear equations, and network equilibrium problems; see, for instance, [2, 3, 5, 14, 21, 24,25,26,27,28] and the extensive list of references therein.
Many authors have proposed and analyzed several methods for solving the problem (VI). One of the most popular is the extragradient method introduced by Korpelevich [23]:
$$\begin{aligned} {\left\{ \begin{array}{ll} y_n=P_C(x_n-\lambda Ax_n),\\ x_{n+1}=P_C(x_n-\lambda Ay_n), \end{array}\right. } \end{aligned}$$
where the mapping \(A:C\rightarrow H\) is monotone and L-Lipschitz continuous and \(\lambda \in \big (0,\dfrac{1}{L}\big )\). The algorithm converges to an element of VI(C, A) provided that VI(C, A) is nonempty.
In recent years, the extragradient method has received great attention from many authors in various ways (see, for example, [6, 9,10,11,12, 19, 29, 30, 36, 39, 43] and the references therein).
Censor et al. [10, 11] proposed the subgradient extragradient method:
$$\begin{aligned} {\left\{ \begin{array}{ll} y_n=P_C(x_n-\lambda Ax_n),\\ x_{n+1}=P_{T_n}(x_n-\lambda Ay_n), \end{array}\right. } \end{aligned}$$
(1)
where \(T_n=\{x\in H | \langle x_n -\lambda Ax_n-y_n,x-y_n\rangle \le 0\}\), the mapping \(A:H\rightarrow H\) is monotone and L-Lipschitz continuous and \(\lambda \in \big (0,\dfrac{1}{L}\big )\). This method replaces the two projections onto C in the extragradient method by one projection onto C and one onto a half-space. Since the projection onto the half-space \(T_n\) can be calculated explicitly, the subgradient extragradient method requires only one projection onto C per iteration. For this reason, the subgradient extragradient method [10, 11] has recently received great attention from many authors, who have improved and extended it in various ways to obtain weak and strong convergence results (see [8, 12, 22, 34, 35, 38, 39, 41, 42] and the references therein).
Inspired by the results in [10, 11], Kraikaew and Saejung [22] introduced the Halpern subgradient extragradient method for solving monotone variational inequalities as follows:
$$\begin{aligned} {\left\{ \begin{array}{ll} y_n=P_C(x_n-\lambda Ax_n),\\ z_n=P_{T_n}(x_n-\lambda Ay_n),\\ x_{n+1}=\gamma _n x_0+(1-\gamma _n)z_n, \end{array}\right. } \end{aligned}$$
(2)
where \(T_n=\{x\in H: \langle x_n-\lambda Ax_n-y_n,x-y_n\rangle \le 0\}\), the mapping \(A:H\rightarrow H\) is monotone and L-Lipschitz continuous, \(\lambda \in (0,\frac{1}{L})\), \(\{\gamma _n\}\subset (0,1)\), \(\lim _{n\rightarrow \infty } \gamma _n=0\) and \(\sum _{n=1}^\infty \gamma _n =+\infty \). They proved that the sequence \(\{x_n\}\) generated by (2) converges strongly to a point \(x^*\), where \(x^*=P_{\mathrm{VI}(C,A)}x_0.\)
It is worth pointing out that the main shortcoming of algorithms (1) and (2) is that they require knowledge of the Lipschitz constant of the mapping, or at least some estimate of it. Very recently, motivated and inspired by the algorithms in [10, 11], Yang et al. [42] introduced a modified subgradient extragradient method with a new step size for solving monotone variational inequalities. It is worth noting that the convergence analysis of the algorithm in [42] requires neither prior knowledge of the Lipschitz constant of the variational inequality mapping nor any additional evaluation of \(P_C\).
Pseudomonotone mappings in the sense of Karamardian were introduced in [20] as a generalization of monotone mappings. The notion of a pseudomonotone mapping has found many applications in variational inequalities and economics.
In [12], Censor et al. showed that the subgradient extragradient method can be successfully applied to solve pseudomonotone variational inequalities in finite dimensional Euclidean spaces. Since, in infinite dimensional spaces, norm convergence is often more desirable, a natural question arises:

How can the result of Censor et al. [12] be designed and extended so that strong convergence is obtained in infinite dimensional Hilbert spaces?
To answer this question, in this paper we develop a new version of the subgradient extragradient method with the step size technique of [42] for finding an element of the set of solutions of a variational inequality problem with a pseudomonotone and Lipschitz continuous mapping in Hilbert spaces, and we prove that the sequence generated by the proposed algorithm converges strongly to a solution of the pseudomonotone variational inequality.
This paper is organized as follows: In Sect. 2, we recall some definitions and preliminary results for further use. In Sect. 3, we deal with analyzing the convergence of the proposed algorithm. Finally, in Sect. 4, we give several numerical experiments to illustrate the convergence of the proposed algorithm and compare it with previously known algorithms.
2 Preliminaries
Let H be a real Hilbert space and C be a nonempty closed convex subset of H. The weak convergence of \(\{x_n\}\) to x is denoted by \(x_{n}\rightharpoonup x\) as \(n\rightarrow \infty \), while the strong convergence of \(\{x_n\}\) to x is written as \(x_n\rightarrow x\) as \(n\rightarrow \infty .\) For each \(x,y,z\in H\), we have
$$\begin{aligned} \Vert x+y\Vert ^2\le \Vert x\Vert ^2+2\langle y,x+y\rangle \end{aligned}$$
(3)
and
$$\begin{aligned} \Vert \alpha x+\beta y+\gamma z\Vert ^2=\alpha \Vert x\Vert ^2+\beta \Vert y\Vert ^2+\gamma \Vert z\Vert ^2-\alpha \beta \Vert x-y\Vert ^2-\alpha \gamma \Vert x-z\Vert ^2-\beta \gamma \Vert y-z\Vert ^2 \end{aligned}$$
(4)
for all \(\alpha , \beta , \gamma \in [0,1]\) with \(\alpha + \beta + \gamma = 1.\)
Definition 2.1
Let \(T:H\rightarrow H\) be a mapping. Then we have the following:
(1) T is called L-Lipschitz continuous with \(L>0\) if
$$\begin{aligned} \Vert Tx-Ty\Vert \le L \Vert x-y\Vert , \, \, \, \forall x,y \in H. \end{aligned}$$
(2) T is called monotone if
$$\begin{aligned} \langle Tx-Ty,x-y \rangle \ge 0, \, \, \, \forall x,y \in H. \end{aligned}$$
(3) T is called pseudomonotone if
$$\begin{aligned} \langle Tx,y-x \rangle \ge 0\,\, \Longrightarrow \,\, \langle Ty,y-x \rangle \ge 0, \, \, \, \forall x,y \in H. \end{aligned}$$
(4) T is called sequentially weakly continuous if, for each sequence \(\{x_n\}\) in H with \(x_{n}\rightharpoonup x\) as \(n\rightarrow \infty \), \(Tx_n \rightharpoonup Tx\) as \(n\rightarrow \infty \).
It is easy to see that every monotone operator T is pseudomonotone (indeed, if \(\langle Tx,y-x\rangle \ge 0\), then the monotonicity of T gives \(\langle Ty,y-x\rangle \ge \langle Tx,y-x\rangle \ge 0\)), but the converse is not true.
Now, we present an academic example of the variational inequality problem in an infinite dimensional space, where the cost function A is pseudomonotone, L-Lipschitz continuous and sequentially weakly continuous on C, but A fails to be a monotone mapping on H.
Example 1
Consider the Hilbert space defined as follows:
$$\begin{aligned} H=\ell ^2:=\Big \{u=(u_1,u_2,\ldots ,u_i,\ldots ) : \sum _{i=1}^{\infty }|u_i|^2<\infty \Big \}, \end{aligned}$$
equipped with the inner product and the induced norm on H:
$$\begin{aligned} \langle u,v\rangle =\sum _{i=1}^{\infty }u_iv_i \quad \text {and}\quad \Vert u\Vert =\sqrt{\langle u,u\rangle } \end{aligned}$$
for any \(u=(u_1,u_2,\ldots ,u_i,\ldots ), v=(v_1,v_2,\ldots ,v_i,\ldots )\in H\), respectively. Let \(\alpha ,\beta \in {\mathbb {R}}\) such that \(\beta>\alpha>\dfrac{\beta }{2}>2\) and consider the set and the mapping:
Then it is easy to see that \(VI(C,A)\ne \emptyset \) since \(0\in VI(C,A)\). Moreover, let
$$\begin{aligned} C_\alpha :=\{u\in H: \Vert u\Vert \le \alpha \}. \end{aligned}$$
It is known that A is pseudomonotone, \((\beta +2\alpha )\)-Lipschitz continuous on \(C_\alpha \) and A fails to be a monotone mapping on H (see [18, Example 4.1]).
Now, we show that \(C\subset C_\alpha \). Indeed, let \(u=(u_1,u_2,\ldots ,u_i,\ldots )\in C\). Then we have
which implies that \(\Vert u\Vert \le \alpha \), that is, \(u\in C_\alpha \) and so \(C\subset C_\alpha \).
Further, since \(C\subset C_\alpha \), it follows that A is pseudomonotone and \((\beta +2\alpha )\)-Lipschitz continuous on C. On the other hand, since C is compact and A is continuous on H, A is sequentially weakly continuous on C.
Remark 2.1
(1) It should be noted here that the mapping A is not sequentially weakly continuous on \(C_\alpha \) since \(C_\alpha \) is not compact in H.
(2) An example on noncompact sets can be found in [4, Example 2.1], where the mapping A is pseudomonotone, Lipschitz continuous and sequentially weakly continuous.
For every point \(x\in H\), there exists a unique nearest point in C, denoted by \(P_Cx\), such that
$$\begin{aligned} \Vert x-P_Cx\Vert \le \Vert x-y\Vert , \quad \forall y\in C. \end{aligned}$$
Then \(P_C\) is called the metric projection of H onto C. It is known that \(P_C\) is nonexpansive.
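For simple feasible sets, \(P_C\) is available in closed form, which is what makes projection-type methods practical. As a minimal illustration (the helper names below are ours, not from the paper):

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection onto the closed ball {z : ||z - center|| <= radius}."""
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= radius else center + (radius / nd) * d

def project_box(x, lo, hi):
    """Metric projection onto the box {z : lo <= z <= hi} (componentwise clip)."""
    return np.clip(x, lo, hi)
```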
Lemma 2.1
([17]) Let C be a nonempty closed convex subset of a real Hilbert space H. Then, for any \(x\in H\) and \(z\in C\),
$$\begin{aligned} z=P_Cx \Longleftrightarrow \langle x-z,z-y\rangle \ge 0, \quad \forall y\in C. \end{aligned}$$
Lemma 2.2
([7]) For any \(x\in H\) and \(v\in H\) with \(v\ne 0\), let \(T=\left\{ z\in H: \left\langle v, z-x\right\rangle \le 0\right\} \). Then, for all \(u\in H\), the projection \(P_T(u)\) is given by
$$\begin{aligned} P_T(u)=u-\max \Big \{0,\dfrac{\langle v,u-x\rangle }{\Vert v\Vert ^2}\Big \}v. \end{aligned}$$
In particular, if \(u\notin T\), then we have
$$\begin{aligned} P_T(u)=u-\dfrac{\langle v,u-x\rangle }{\Vert v\Vert ^2}v. \end{aligned}$$
Lemma 2.2 gives us an explicit formula for the projection of any point onto a half-space. For more properties of the metric projection, the interested reader can refer to Sect. 3 in [17] and Chapter 4 in [7].
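Lemma 2.2 is what makes the subgradient extragradient step computationally cheap: the projection onto the half-space is a single closed-form correction. A direct NumPy transcription of the formula (the function name is ours):

```python
import numpy as np

def project_halfspace(u, v, x):
    """Projection of u onto T = {z in H : <v, z - x> <= 0}, v != 0 (Lemma 2.2).

    If <v, u - x> <= 0, then u already lies in T and P_T(u) = u; otherwise
    P_T(u) = u - (<v, u - x> / ||v||^2) v, as in the lemma.
    """
    t = np.dot(v, u - x)
    return u if t <= 0 else u - (t / np.dot(v, v)) * v
```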
The following lemmas are useful for the convergence analysis of our proposed method:
Lemma 2.3
([13, Lemma 2.1]) Consider the solution set VI(C, A) of the problem (VI), where C is a nonempty closed convex subset of a real Hilbert space H and \(A : C \rightarrow H\) is pseudomonotone and continuous. Then \(x^*\in VI(C, A)\) if and only if
$$\begin{aligned} \langle Ax,x-x^*\rangle \ge 0, \quad \forall x\in C. \end{aligned}$$
Lemma 2.4
([33]) Let \(\{a_n\}\) be a sequence of nonnegative real numbers, \(\{\gamma _n\}\) be a sequence of real numbers in (0, 1) with \(\sum _{n=1}^\infty \gamma _n=\infty \) and \(\{b_n\}\) be a sequence of real numbers. Assume that
$$\begin{aligned} a_{n+1}\le (1-\gamma _n)a_n+\gamma _n b_n, \quad \forall n\ge 1. \end{aligned}$$
If \(\limsup _{k\rightarrow \infty } b_{n_k} \le 0\) for every subsequence \(\{a_{n_k}\}\) of \(\{a_n\}\) satisfying
$$\begin{aligned} \liminf _{k\rightarrow \infty }(a_{n_k+1}-a_{n_k})\ge 0, \end{aligned}$$
then \(\lim _{n\rightarrow \infty }{a_n} = 0\).
3 Main results
In this section, we introduce a modified subgradient extragradient algorithm for solving the pseudomonotone variational inequality problem. Under mild assumptions, the sequence generated by the proposed method converges strongly to \(x^*\in VI(C,A)\), where
$$\begin{aligned} x^*=P_{VI(C,A)}(0), \end{aligned}$$
that is, \(x^*\) is the minimum-norm solution of the problem (VI).
First, the following conditions are assumed for the convergence of the method:
Condition 1
The feasible set C is a nonempty closed convex subset of a real Hilbert space H.
Condition 2
The mapping \(A:H\rightarrow H\) is L-Lipschitz continuous and pseudomonotone on H, and satisfies the following condition:
$$\begin{aligned} \text {for every sequence } \{x_n\}\subset C \text { with } x_n\rightharpoonup z, \text { one has } \Vert Az\Vert \le \liminf _{n\rightarrow \infty }\Vert Ax_n\Vert . \end{aligned}$$
(5)
Condition 3
The solution set of the problem (VI) is nonempty, that is, \(VI(C,A)\ne \emptyset \).
Condition 4
Assume that \(\{\gamma _n\}\) and \(\{\beta _n\}\) are two real sequences in (0, 1) such that \(\{\beta _n\}\subset (a,1-\gamma _n)\) for some \(a>0\) and
$$\begin{aligned} \lim _{n\rightarrow \infty }\gamma _n=0, \quad \sum _{n=1}^{\infty }\gamma _n=\infty . \end{aligned}$$
Now, we present our algorithm.

Algorithm 3.1

Initialization: Take \(\lambda _0>0\), \(\mu \in (0,1)\) and let \(x_0\in H\) be arbitrary.

Step 1. Compute \(y_n=P_C(x_n-\lambda _n Ax_n)\).

Step 2. Compute \(w_n=P_{T_n}(x_n-\lambda _n Ay_n)\), where
$$\begin{aligned} T_n:=\{x\in H \mid \langle x_n-\lambda _n Ax_n-y_n,x-y_n\rangle \le 0\}. \end{aligned}$$

Step 3. Compute
$$\begin{aligned} x_{n+1}=(1-\beta _n-\gamma _n)x_n+\beta _n w_n \end{aligned}$$
and update
$$\begin{aligned} \lambda _{n+1}= {\left\{ \begin{array}{ll} \min \Big \{\dfrac{\mu (\Vert x_n-y_n\Vert ^2+\Vert w_n-y_n\Vert ^2)}{2\langle Ax_n-Ay_n,w_n-y_n\rangle },\lambda _n\Big \} &{} \text {if } \langle Ax_n-Ay_n,w_n-y_n\rangle >0,\\ \lambda _n &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
(6)

Set \(n:=n+1\) and return to Step 1.
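To make the scheme concrete, the following is a minimal NumPy sketch of one possible implementation of Algorithm 3.1; the function name `subgrad_extragradient`, the default parameter choices and the stopping rule \(\Vert x_n-y_n\Vert \le \varepsilon \) (the one used in Sect. 4) are ours, and the exact projection oracle `proj_C` onto C must be supplied by the user:

```python
import numpy as np

def subgrad_extragradient(A, proj_C, x0, lam0=0.5, mu=0.05,
                          gamma=lambda n: 1.0 / (n + 3),
                          beta=lambda n: 0.1 * (1.0 - 1.0 / (n + 3)),
                          tol=1e-6, max_iter=10000):
    """Sketch of Algorithm 3.1: subgradient extragradient with the adaptive
    stepsize rule (6) and a Mann-type update (implicit anchor at 0)."""
    x, lam = np.asarray(x0, dtype=float), float(lam0)
    for n in range(max_iter):
        Ax = A(x)
        y = proj_C(x - lam * Ax)                  # Step 1: one projection onto C
        if np.linalg.norm(x - y) <= tol:          # stopping rule of Sect. 4
            break
        v = x - lam * Ax - y                      # T_n = {z : <v, z - y> <= 0}
        Ay = A(y)
        u = x - lam * Ay
        vv = np.dot(v, v)                         # Step 2: project onto T_n (Lemma 2.2)
        w = u - (max(0.0, np.dot(v, u - y)) / vv) * v if vv > 0 else u
        b, g = beta(n), gamma(n)                  # Step 3: Mann-type update
        x_next = (1.0 - b - g) * x + b * w
        num = np.linalg.norm(x - y) ** 2 + np.linalg.norm(w - y) ** 2
        den = np.dot(Ax - Ay, w - y)
        if den > 0:                               # stepsize rule (6): no Lipschitz constant
            lam = min(mu * num / (2.0 * den), lam)
        x = x_next
    return x
```

Note that the iteration needs no line search: the stepsize \(\lambda _n\) is updated from already computed quantities, so each step costs one projection onto C, one closed-form half-space projection and two evaluations of A.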
Remark 3.2
It is easy to show that condition (5) is weaker than the sequential weak continuity of the mapping A (see [1, 32]), which is frequently assumed in recent works on pseudomonotone variational inequality problems (see, e.g., [40]). Indeed, if A is sequentially weakly continuous, then, due to the weak lower semicontinuity of the norm, condition (5) is fulfilled. However, the converse is not true. Indeed, let \(Ax = \Vert x\Vert x\) and suppose that \(x_n \rightharpoonup x.\) Due to the weak lower semicontinuity of the norm, one has \(\Vert x\Vert \le \liminf _{n\rightarrow \infty } \Vert x_n\Vert ;\) hence
$$\begin{aligned} \Vert Ax\Vert =\Vert x\Vert ^2\le \Big (\liminf _{n\rightarrow \infty }\Vert x_n\Vert \Big )^2\le \liminf _{n\rightarrow \infty }\Vert x_n\Vert ^2=\liminf _{n\rightarrow \infty }\Vert Ax_n\Vert . \end{aligned}$$
Thus, condition (5) is satisfied. However, A is not sequentially weakly continuous. Indeed, let \(x_n = e_n + e_1,\) where \(\{e_n\}\) is an orthonormal system in H. Then \(x_n \rightharpoonup e_1,\) while, for \(n >1,\) \(Ax_n = \sqrt{2}(e_n + e_1) \rightharpoonup \sqrt{2}e_1 \ne A(e_1) = e_1.\)
Lemma 3.5
([42]) Assume that Conditions 1–3 hold. Then the sequence \(\{\lambda _n\}\) generated by (6) is non-increasing and
$$\begin{aligned} \lim _{n\rightarrow \infty }\lambda _n=\lambda \ge \min \Big \{\frac{\mu }{L},\lambda _0\Big \}. \end{aligned}$$
The following lemmas are helpful in analyzing the convergence of the algorithm:
Lemma 3.6
Assume that Conditions 1–3 hold. Let \(\{w_n\}\) be a sequence generated by Algorithm 3.1. Then we have
$$\begin{aligned} \Vert w_n-x^*\Vert ^2\le \Vert x_n-x^*\Vert ^2-\Big (1-\mu \dfrac{\lambda _n}{\lambda _{n+1}}\Big )\big (\Vert x_n-y_n\Vert ^2+\Vert w_n-y_n\Vert ^2\big ) \end{aligned}$$
(7)
for all \(x^*\in VI(C,A)\).
Proof
First, it is easy to see that, by the definition of \(\{\lambda _n\}\),
$$\begin{aligned} \langle Ax_n-Ay_n,w_n-y_n\rangle \le \dfrac{\mu }{2\lambda _{n+1}}\big (\Vert x_n-y_n\Vert ^2+\Vert w_n-y_n\Vert ^2\big ). \end{aligned}$$
(8)
Indeed, if \(\langle Ax_n-Ay_n, w_n-y_n\rangle <0\), then the inequality (8) holds. Otherwise, from (6), we have
This implies that
Therefore, the inequality (8) holds. Now, using the inequality (8) and \(x^*\in VI(C,A)\subset C\subset T_n\), we prove that the inequality (7) holds. Indeed, we have
This implies that
Since \(x^*\) is a solution of the problem (VI), we have \(\langle Ax^*,x-x^*\rangle \ge 0\) for all \(x\in C\). By the pseudomonotonicity of A on C, we have \(\langle Ax,x-x^*\rangle \ge 0\) for all \(x\in C\). Taking \(x:=y_n\in C\), we get
Thus we have
From (9) and (10), it follows that
Since \(y_n=P_{T_n}(x_n-\lambda _n Ax_n)\) and \(w_n\in T_n\), we have
which, together with (8), implies that
This completes the proof. \(\square \)
Remark 3.3
Unlike the corresponding result in [42], our Lemma 3.6 is proved under the assumption that A is pseudomonotone rather than monotone.
We adapt the technique developed in [40] to obtain the following result.
Lemma 3.7
Assume that Conditions 1–3 hold and \(\{x_n\}\) is a sequence generated by Algorithm 3.1. If there exists a subsequence \(\{x_{n_k}\}\) convergent weakly to \(z\in H\) and
$$\begin{aligned} \lim _{k\rightarrow \infty }\Vert x_{n_k}-y_{n_k}\Vert =0, \end{aligned}$$
then \(z\in VI(C,A).\)
Proof
We have
or, equivalently,
Consequently, we have
Since \(\{x_{n_k}\}\) is weakly convergent, \(\{x_{n_k}\}\) is bounded. Then, by the Lipschitz continuity of A, \(\{Ax_{n_k}\}\) is bounded. Since \(\Vert x_{n_k}-y_{n_k}\Vert \rightarrow 0\), \(\{y_{n_k}\}\) is also bounded and, according to Lemma 3.5, we have
Passing (13) to limit as \(k\rightarrow \infty \), we get
Moreover, we have
Since \(\lim _{k\rightarrow \infty }\Vert x_{n_k}-y_{n_k}\Vert =0\) and A is L-Lipschitz continuous on H, we get
which, together with (14) and (15), implies that
Next, we show that \(z\in VI(C,A).\) We choose a sequence \(\{\epsilon _k\}\) of positive numbers such that \(\{\epsilon _k\}\) is decreasing and convergent to 0. For each \(k\ge 1\), we denote by \(n_{N_k}\) the smallest positive integer such that
Since \(\{ \epsilon _k\}\) is decreasing, it is easy to see that the sequence \(\{n_{N_k}\}\) is increasing. Furthermore, for each \(k\ge 1\), since \(\{y_{n_{N_k}}\}\subset C\), we have \(Ay_{n_{N_k}}\ne 0\) and, setting
we have \(\langle Ay_{n_{N_k}}, v_{n_{N_k}}\rangle = 1\) for each \(k\ge 1\). Now, it follows from (16) that, for each \(k\ge 1\),
Since A is pseudomonotone on H, we get
This implies that
Now, we show that \(\lim _{k\rightarrow \infty }\epsilon _k v_{n_{N_k}}=0\). Indeed, since \(x_{n_k}\rightharpoonup z\) and \(\lim _{k\rightarrow \infty }\Vert x_{n_k}-y_{n_k}\Vert =0,\) we obtain \(y_{n_k}\rightharpoonup z \text { as } k \rightarrow \infty \). Since \(\{y_n\} \subset C\), we have \(z\in C\). We can suppose that \(Az \ne 0\) (otherwise, z is a solution). Since the mapping A satisfies the condition (5), we obtain
Since \(\{y_{n_{N_k}}\}\subset \{y_{n_k}\}\) and \(\epsilon _k \rightarrow 0\) as \(k \rightarrow \infty \), we obtain
which implies that \(\lim _{k\rightarrow \infty } \epsilon _k v_{n_{N_k}} = 0.\) Now, letting \(k\rightarrow \infty \), the right hand side of (17) tends to zero since A is Lipschitz continuous, \(\{x_{n_{N_k}}\}, \{v_{n_{N_k}}\}\) are bounded and \(\lim _{k\rightarrow \infty }\epsilon _k v_{n_{N_k}}=0\). Thus we get
Hence, for all \(x\in C\), we have
Therefore, by Lemma 2.3, \(z\in VI(C,A)\). This completes the proof. \(\square \)
Remark 3.4
When A is monotone, it is not necessary to impose the sequential weak continuity of A, see [8].
Theorem 3.1
Assume that Conditions 1–4 hold. Then the sequence \(\{x_n\}\) generated by Algorithm 3.1 converges strongly to \(x^*\in VI(C,A)\), where
$$\begin{aligned} x^*=P_{VI(C,A)}(0). \end{aligned}$$
Proof
Since \(\lim _{n\rightarrow \infty }\Big (1-\mu \dfrac{\lambda _n}{\lambda _{n+1}}\Big )=1-\mu >0\), there exists \(n_0\in {\mathbb {N}}\) such that
$$\begin{aligned} 1-\mu \dfrac{\lambda _n}{\lambda _{n+1}}>0, \quad \forall n\ge n_0. \end{aligned}$$
(18)
Combining (7) and (18), we get
$$\begin{aligned} \Vert w_n-x^*\Vert \le \Vert x_n-x^*\Vert , \quad \forall n\ge n_0. \end{aligned}$$
(19)
Claim 1. The sequence \(\{x_n\}\) is bounded. It follows from (19) that
That is, the sequence \(\{x_n\}\) is bounded, and so is \(\{w_n\}\).

Claim 2. Note that
Indeed, using (4), we have
It follows from (7) and (20) that
Thus we get
Moreover, since \(\beta _n\ge a\) for all \(n\ge 1\), we obtain
Claim 3. Note that
Indeed, setting \(t_n=(1-\beta _n)x_n+\beta _n w_n\) for each \(n\ge 1\), we have
and
Now, using the inequality (3), we get
Combining (23) and (24), we obtain
Claim 4. \(\{\Vert x_n-x^*\Vert ^2\}\) converges to zero. Indeed, for each \(n\ge 0\), set
Then, Claim 3 can be rewritten as follows:
By Lemma 2.4, it is sufficient to show that \(\limsup _{k\rightarrow \infty }b_{n_k}\le 0\) for every subsequence \(\{a_{n_k}\}\) of \(\{a_n\}\) satisfying
Equivalently, we need to show that \(\limsup _{k\rightarrow \infty }\langle x^*, x^*-x_{n_k+1}\rangle \le 0\) and \(\limsup _{k\rightarrow \infty }\Vert x_{n_k}-w_{n_k}\Vert \le 0\) for every subsequence \(\{\Vert x_{n_k}-x^*\Vert \}\) of \(\{\Vert x_n-x^*\Vert \}\) satisfying
Suppose that \(\{\Vert x_{n_k}-x^*\Vert \}\) is a subsequence of \(\{\Vert x_n-x^*\Vert \}\) such that
Then, we have
By Claim 2, we obtain
This implies that
Thus we have
On the other hand, we have
Since the sequence \(\{x_{n_k}\}\) is bounded, it follows that there exists a subsequence \(\{x_{n_{k_j}}\}\) of \(\{x_{n_k}\}\), which converges weakly to some \(z\in H\), such that
From \( \lim _{k\rightarrow \infty }\Vert x_{n_k}-y_{n_k}\Vert =0\) and Lemma 3.7, we have \(z\in VI(C,A)\) and, from (26) and the definition of \(x^*=P_{VI(C,A)} 0\), we have
Combining (25) and (27), we have
Hence it follows from (28), \(\lim _{k\rightarrow \infty }\Vert x_{n_k}-w_{n_k}\Vert =0\), Claim 3 and Lemma 2.4 that \(\lim _{n\rightarrow \infty }\Vert x_n-x^*\Vert =0\), that is, \(x_n\rightarrow x^*\) as \(n\rightarrow \infty \).
This completes the proof. \(\square \)
Remark 3.5
Our result generalizes some related results in the literature and hence can be applied to a wider class of nonlinear mappings. For example, in the next section, we present the advantages of our method compared with the recent results [22, Theorem 3.1], [34, Theorem 3.3], [41, Theorem 3.1] and [42, Theorem 3.7] as follows:

(1) In Theorem 3.1, we replace the monotonicity of A by pseudomonotonicity and sequential weak continuity (indeed, only the weaker condition (5) is needed).

(2) We also obtain strong convergence without using the viscosity technique.
4 Numerical illustrations
In this section, we present numerical experiments relating to the problem (VI). In the first example, we compare Algorithm 3.1 with Algorithm 2 of Yang et al. [42]. In the second example, we illustrate the convergence of Algorithm 3.1 and compare it with three well-known algorithms, namely Algorithm 2 of Yang et al. [42], Algorithm 1 of Censor et al. [10] and Algorithm 3.1 of Kraikaew and Saejung [22]. All the numerical experiments are performed on an HP laptop with an Intel(R) Core(TM) i5-6200U CPU at 2.3 GHz and 4 GB of RAM. All the programs are written in Matlab 2015a.
Problem 1
The first problem is Example 2.1 in [4]. Assume that \(A:{\mathbb {R}}^m \rightarrow {\mathbb {R}}^m\) is defined by
where Q is a positive definite matrix, P is a positive semidefinite matrix, \(q \in {\mathbb {R}}^m\) and \(\beta >0\). Observe that A is differentiable and there exists \(M > 0\) such that \(\Vert \nabla A(x)\Vert \le M \) for all \(x \in {\mathbb {R}}^m\). Therefore, by the Mean Value Theorem, A is Lipschitz continuous. Also, A is pseudomonotone but not monotone.
Let \(C:=\{x \in {\mathbb {R}}^m|Bx \le b\}\), where B is a matrix of size \(l \times m\) and \(b \in {\mathbb {R}}^l_{+}\) with \(l=10\).
For all tests, we take \(\beta =0.01\) and \(P= R^TR, Q= U^TU\), where all entries of the matrices \( R, U \in {\mathbb {R}}^{m \times m}\) and of the vector \(q \in {\mathbb {R}}^m\) are generated randomly from a normal distribution with mean zero and unit variance. B is a random matrix and \(b \in {\mathbb {R}}^l\) is a random vector with non-negative entries. The starting points are \(x_0=(1,1,\ldots ,1) \in {\mathbb {R}}^m\) and \(\lambda _0=0.5\) for all algorithms. To terminate the algorithms, we use the condition \(\Vert x_{n}-y_n\Vert \le \epsilon \) with \(\epsilon =10^{-3}\). We choose \(\gamma _n=\frac{1}{n+3}\) and \(\mu =0.05\) for all algorithms and \(\beta _n=\frac{1}{10}(1-\gamma _n)\) for Algorithm 3.1. The numerical results are described in Table 1.
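For reproducibility, the data generation can be sketched as follows. Two caveats: the closed form of A used below, \(Ax=(e^{-x^{T}Qx}+\beta )(Px+q)\), is our recollection of [4, Example 2.1] (its display is not reproduced above), and the QP-based projection onto C is one convenient choice, not necessarily the implementation behind Table 1. The final call reuses the `subgrad_extragradient` sketch from Sect. 3.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, ell, beta = 50, 10, 0.01

R = rng.standard_normal((m, m))
U = rng.standard_normal((m, m))
P, Q = R.T @ R, U.T @ U                # P positive semidefinite, Q (a.s.) positive definite
q = rng.standard_normal(m)
B = rng.standard_normal((ell, m))
b = np.abs(rng.standard_normal(ell))   # b has non-negative entries

def A(x):
    # assumed form of the operator of [4, Example 2.1]; pseudomonotone, not monotone
    return (np.exp(-x @ Q @ x) + beta) * (P @ x + q)

def proj_C(z):
    # metric projection onto C = {x : Bx <= b}, solved as a small QP (SLSQP)
    res = minimize(lambda x: 0.5 * np.sum((x - z) ** 2), z,
                   jac=lambda x: x - z,
                   constraints=[{"type": "ineq", "fun": lambda x: b - B @ x}])
    return res.x

x = subgrad_extragradient(A, proj_C, x0=np.ones(m), lam0=0.5, mu=0.05, tol=1e-3)
```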
Problem 2
Let \(H=L^{2}([0,1]) \) with the norm \(\Vert \cdot \Vert \) and the inner product \(\langle x,y\rangle \) defined by
$$\begin{aligned} \Vert x\Vert =\Big (\int _0^1 |x(t)|^2\,dt\Big )^{1/2} \quad \text {and}\quad \langle x,y\rangle =\int _0^1 x(t)y(t)\,dt, \end{aligned}$$
respectively. The operator \(A:H\rightarrow H\) is defined by
It can be easily verified that A is 1-Lipschitz continuous and monotone. The feasible set \(C:=\{x \in H:\Vert x\Vert \le 1\}\) is the unit ball. Observe that \(0\in VI(C,A)\) and so \(VI(C,A)\ne \emptyset .\) For all tests, we take \(\lambda =\lambda _0=0.5\) for all algorithms. We choose \(\gamma _n=\frac{1}{n+3}\) and \(\mu =0.05\) for Algorithm 2 of Yang et al. and for Algorithm 3.1, \(\gamma _n=\frac{1}{n+3}\) for Algorithm 3.1 of Kraikaew and Saejung, and \(\beta _n=\frac{1}{10}(1-\gamma _n)\) for Algorithm 3.1. To terminate the algorithms, we use the condition \(\Vert x_{n}-0\Vert \le \epsilon \) with \(\epsilon =10^{-4}\) or a maximum of 500 iterations. The numerical results are described in Table 2.
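In practice, this infinite dimensional test is run on a uniform grid of [0, 1], with integrals replaced by quadrature. A sketch of the discretization follows (grid size and helper names are ours, and the operator A is left abstract since its defining display is not restated here):

```python
import numpy as np

N = 1000                         # grid points on [0, 1]
t = np.linspace(0.0, 1.0, N)
h = t[1] - t[0]

def inner(x, y):
    """Discretized inner product of L^2([0,1]) (simple Riemann sum)."""
    return h * np.sum(x * y)

def norm_L2(x):
    return np.sqrt(inner(x, x))

def proj_unit_ball(x):
    """Exact projection onto C = {x : ||x|| <= 1} in the L^2 norm."""
    nx = norm_L2(x)
    return x if nx <= 1.0 else x / nx

# With an operator A acting on grid functions, the run mirrors Sect. 3's sketch;
# note subgrad_extragradient measures the Euclidean grid norm, which differs
# from norm_L2 only by the factor sqrt(h) (rescale tol accordingly):
# x = subgrad_extragradient(A, proj_unit_ball, x0=t.copy(), lam0=0.5, tol=1e-4)
```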
5 Conclusions
We proposed a new modified subgradient extragradient method for solving the pseudomonotone variational inequality problem in real Hilbert spaces. To obtain the strong convergence theorem, we combined the subgradient extragradient method with the Mann-type method [31]. The proposed algorithm requires neither additional projections nor knowledge of the Lipschitz constant of the mapping. Further, we presented several numerical experiments to illustrate the performance of the proposed algorithm in comparison with known algorithms.
References
Anh, P.K., Thong, D.V., Vinh, N.T.: Improved inertial extragradient methods for solving pseudo-monotone variational inequalities. Optimization (2020). https://doi.org/10.1080/02331934.2020.1808644
Bauschke, H.H., Combettes, P.L.: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 26, 248–264 (2001)
Bauschke, H.H., Combettes, P.L.: Construction of best Bregman approximations in reflexive Banach spaces. Proc. Am. Math. Soc. 131, 3757–3766 (2003)
Bot, R.I., Csetnek, E.R., Vuong, P.T.: The forward-backward-forward method from discrete and continuous perspective for pseudo-monotone variational inequalities in Hilbert spaces. Eur. J. Oper. Res. 287, 49–60 (2020)
Burachik, R.S., Lopes, J.O., Svaiter, B.F.: An outer approximation method for the variational inequality problem. SIAM J. Control Optim. 43, 2071–2088 (2005)
Cai, G., Gibali, A., Iyiola, O.S., Shehu, Y.: A new double projection method for solving variational inequalities in Banach spaces. J. Optim. Theory Appl. 178, 219–239 (2018)
Cegielski, A.: Iterative Methods for Fixed Point Problems in Hilbert Spaces. Lecture Notes in Mathematics. Springer, Berlin (2012)
Denisov, S.V., Semenov, V.V., Chabak, L.M.: Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern. Syst. Anal. 51, 757–765 (2015)
Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 56, 301–323 (2012)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
Censor, Y., Gibali, A., Reich, S.: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Meth. Softw. 26, 827–845 (2011)
Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 61, 1119–1132 (2012)
Cottle, R.W., Yao, J.C.: Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 75, 281–295 (1992)
Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer Series in Operations Research. Springer, New York (2003)
Fichera, G.: Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 34, 138–142 (1963)
Fichera, G.: Problemi elastostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, Mem., Cl. Sci. Fis. Mat. Nat., Sez. I, VIII. Ser. 7, 91–140 (1964)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York (1984)
Khanh, P.D., Vuong, P.T.: Modified projection method for strongly pseudomonotone variational inequalities. J. Glob. Optim. 58, 341–350 (2014)
Khanh, P.Q., Thong, D.V., Vinh, N.T.: Versions of the subgradient extragradient method for pseudomonotone variational inequalities. Acta. Appl. Math. 170, 319–345 (2020)
Karamardian, S., Schaible, S.: Seven kinds of monotone maps. J. Optim. Theory Appl. 66, 37–46 (1990)
Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and their Applications. Academic, New York (1980)
Kraikaew, R., Saejung, S.: Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 163, 399–412 (2014)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Ekon. Matem. Metody. 12, 747–756 (1976)
Konnov, I.V.: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2001)
Dong, Q.L., Cho, Y.J., Zhong, L.L., Rassias, Th.M.: Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 70, 687–704 (2018)
Dong, Q.L., Cho, Y.J., Rassias, Th.M.: The projection and contraction methods for finding common solutions to variational inequality problems. Optim. Lett. 12, 1871–1896 (2018)
Dong, Q.L., Tang, Y.C., Cho, Y.J., Rassias, Th.M.: Optimal choice of the step length of the projection and contraction methods for solving the split feasibility problem. J. Glob. Optim. 71, 341–360 (2018)
Hieu, D.V., Cho, Y.J., Xiao, Y.: Modified extragradient algorithms for solving equilibrium problems. Optimization 67, 2003–2029 (2018)
Malitsky, Y.V., Semenov, V.V.: A hybrid method without extrapolation step for solving variational inequality problems. J. Glob. Optim. 61, 193–202 (2015)
Malitsky, Y.V.: Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim. 25, 502–520 (2015)
Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506–510 (1953)
Reich, S., Thong, D.V., Dong, Q.L., Xiao, H.L., Dung, V.T.: New algorithms and convergence theorems for solving variational inequalities with non-Lipschitz mappings. Numer. Algorithms (2020). https://doi.org/10.1007/s11075-020-00977-8
Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 75, 742–750 (2012)
Shehu, Y., Iyiola, O.S.: Strong convergence result for monotone variational inequalities. Numer. Algorithms 76, 259–282 (2017)
Shehu, Y., Iyiola, O.S.: Projection methods with alternating inertial steps for variational inequalities: weak and linear convergence. Appl. Numer. Math. 157, 315–337 (2020)
Solodov, M.V., Svaiter, B.F.: A new projection method for variational inequality problems. SIAM J. Control Optim. 37, 765–776 (1999)
Stampacchia, G.: Formes bilineaires coercitives sur les ensembles convexes. C. R. Acad. Sci. 258, 4413–4416 (1964)
Thong, D.V., Hieu, D.V.: Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 78, 1045–1060 (2018)
Thong, D.V., Hieu, D.V.: Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 79, 597–610 (2018)
Vuong, P.T.: On the weak convergence of the extragradient method for solving pseudomonotone variational inequalities. J. Optim. Theory Appl. 176, 399–409 (2018)
Yang, J., Liu, H.: Strong convergence result for solving monotone variational inequalities in Hilbert space. Numer. Algorithms 80, 741–752 (2019)
Yang, J., Liu, H., Liu, Z.: Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 67, 2247–2258 (2018)
Yao, Y., Marino, G., Muglia, L.: A modified Korpelevich’s method convergent to the minimum-norm solution of a variational inequality. Optimization 63, 559–569 (2014)
Acknowledgements
The authors would like to thank the two anonymous reviewers for their comments on the manuscript, which helped us very much in improving the presentation of the original version of this paper.