Abstract
In this paper, we propose a new version of the extragradient method for solving non-Lipschitzian and pseudo-monotone variational inequalities in real Hilbert spaces. First, we show that the proposed method converges strongly to the minimum-norm solution of a variational inequality under mild assumptions. Second, we obtain a linear convergence rate for this algorithm under strong pseudo-monotonicity and Lipschitz continuity assumptions. Our results improve some recent contributions in the literature on the extragradient method. Finally, we illustrate the performance of the proposed algorithm with some numerical results.
1 Introduction
We focus on the following classical variational inequality (VI) (Fichera 1963, 1964), which consists in finding a point \(x^*\in C\) such that
$$\begin{aligned} \langle Fx^*,x-x^*\rangle \ge 0 \quad \forall x\in C, \end{aligned}$$ (1)
where C is a nonempty closed convex subset of a real Hilbert space H and \(F: C\rightarrow H\) is a single-valued mapping. As commonly done, we denote by \(\mathrm{Sol}(C,F)\) the solution set of VI (1). Variational inequalities are fundamental in a broad range of mathematical and applied sciences; the theoretical and algorithmic foundations as well as the applications of variational inequalities have been extensively studied in the literature and continue to attract intensive research. For the current state of the art in the finite-dimensional setting, see for instance Facchinei and Pang (2003) and the extensive list of references therein.
Many authors have proposed and analyzed iterative methods for solving the variational inequality (1). The simplest is the projection method
$$\begin{aligned} x_{n+1}=P_C(x_n-\lambda Fx_n), \end{aligned}$$
which can be seen as an extension of the projected gradient method introduced for solving optimization problems. However, its convergence requires quite restrictive assumptions: F must be L-Lipschitz continuous and \(\alpha \)-strongly monotone.
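The projection method just described can be sketched numerically. The following is a minimal illustration on a toy problem of our own choosing (not from the paper): \(F(x)=2x+b\) is strongly monotone and Lipschitz continuous, and C is the unit ball, so the iteration converges for a small enough step-size.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Metric projection onto the Euclidean ball of given radius centered at 0.
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

def projected_gradient(F, x0, lam, n_iter=500):
    # x_{n+1} = P_C(x_n - lam * F(x_n)); converges when F is strongly
    # monotone and Lipschitz continuous and lam is small enough.
    x = x0.copy()
    for _ in range(n_iter):
        x = project_ball(x - lam * F(x))
    return x

# Toy problem (our choice): F(x) = 2x + b is 2-strongly monotone, 2-Lipschitz.
b = np.array([3.0, 0.0])
F = lambda x: 2.0 * x + b
# The unconstrained zero of F is (-1.5, 0), outside the unit ball,
# so the VI solution is the boundary point (-1, 0).
x_star = projected_gradient(F, np.zeros(2), lam=0.3)
```

The fixed point of the iteration map is exactly the VI solution, and the map is a contraction here, which is what the strong monotonicity assumption buys.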
To weaken the convergence assumptions of the gradient method, Korpelevich (1976) (and, independently, Antipin (1976)) proposed a double projection method, known as the extragradient method, in Euclidean space for F monotone and L-Lipschitz continuous. In recent years, the extragradient method has been extended to infinite-dimensional spaces in various ways; see, e.g., Censor et al. (2011a, 2011b, 2011c, 2012), Gibali (2015), Khobotov (1987), Malitsky (2015), Malitsky and Semenov (2015), Marcotte (1991), Solodov and Svaiter (1999), Gibali and Shehu (2019), Shehu et al. (2020), Shehu and Iyiola (2020) and the references therein.
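The advantage of Korpelevich's two-projection scheme over the single-projection method shows up already for plainly monotone (not strongly monotone) operators. The rotation operator below is a standard textbook illustration of our own choosing: the single projection step diverges, while the extragradient step contracts.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # monotone (<Ax, x> = 0), L = 1
F = lambda x: A @ x

def project(x, radius=10.0):
    # Projection onto a large ball (mostly inactive; keeps iterates bounded).
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

def extragradient(x0, lam=0.5, n_iter=200):
    # Korpelevich's method: predictor y_n, then corrector step using F(y_n).
    x = x0.copy()
    for _ in range(n_iter):
        y = project(x - lam * F(x))
        x = project(x - lam * F(y))
    return x

def plain_projection(x0, lam=0.5, n_iter=200):
    x = x0.copy()
    for _ in range(n_iter):
        x = project(x - lam * F(x))
    return x

x0 = np.array([1.0, 1.0])
x_eg = extragradient(x0)       # contracts toward the solution 0
x_pg = plain_projection(x0)    # spirals outward until the ball stops it
```

For this rotation, one step of the extragradient map shrinks norms by the factor \(\sqrt{1-\lambda^2+\lambda^4}<1\), whereas the plain projection step expands them by \(\sqrt{1+\lambda^2}>1\).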
However, when the variational inequality mapping is not Lipschitz continuous, or when its Lipschitz constant is difficult to compute or approximate, Korpelevich's method cannot be applied, since its step-size depends on this constant. Motivated by this problem, it is of interest to propose an iterative method for solving VI (1) when the underlying cost mapping F is uniformly continuous on bounded subsets of C but not Lipschitz continuous on C. In Iusem (1994), Iusem proposed an algorithm that overcomes this barrier. Recently, Cai et al. (2021) introduced a modification of the subgradient extragradient method which uses a different Armijo-type line-search and only assumes that F is pseudo-monotone on C in the sense of Karamardian (1976) and Vuong (2018). In this work, motivated by the work of Cai et al., we propose a new extragradient method for solving variational inequality problems with pseudo-monotone and non-Lipschitz continuous operators in real Hilbert spaces. We establish strong convergence and a convergence rate of the proposed algorithm under suitable conditions. Our results improve some recent contributions in the literature.
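An Armijo-type extragradient step in the spirit described above can be sketched as follows. The specific backtracking condition \(\tau \Vert F(v)-F(y)\Vert \le \mu \Vert v-y\Vert\) is a common generic choice and an assumption of ours, not necessarily the paper's exact rule (5); the non-Lipschitz test mapping \(F(x)=\mathrm{sign}(x)\sqrt{|x|}+x\), whose slope blows up at the origin, is likewise our own illustration.

```python
import numpy as np

def project_box(x, lo=-2.0, hi=2.0):
    # C = [lo, hi]^n; the projection is the componentwise clip.
    return np.clip(x, lo, hi)

def armijo_step(F, x, gamma=1.0, l=0.5, mu=0.4, max_back=50):
    # Backtracking: tau = gamma * l**m for the smallest m with
    #   tau * ||F(x) - F(y)|| <= mu * ||x - y||,  y = P_C(x - tau * F(x)).
    # No Lipschitz constant of F is required anywhere.
    tau, Fx = gamma, F(x)
    for _ in range(max_back):
        y = project_box(x - tau * Fx)
        if tau * np.linalg.norm(Fx - F(y)) <= mu * np.linalg.norm(x - y):
            break
        tau *= l
    return tau, y

def extragradient_armijo(F, x0, n_iter=300):
    x = x0.copy()
    for _ in range(n_iter):
        tau, y = armijo_step(F, x)
        x = project_box(x - tau * F(y))   # extragradient (corrector) step
    return x

# Monotone but non-Lipschitz near 0: sqrt has unbounded slope there.
F = lambda x: np.sign(x) * np.sqrt(np.abs(x)) + x
x_sol = extragradient_armijo(F, np.array([1.5]))  # unique VI solution is 0
```

The point of the line-search is visible here: near the solution the local slope of F is unbounded, so any fixed step-size tied to a global Lipschitz constant would have to be zero, while the backtracking rule adapts \(\tau_n\) automatically.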
The paper is organized as follows. We first recall some basic definitions and results in Sect. 2. Our algorithms are presented and analyzed in Sect. 3. In Sect. 4, we present some numerical experiments to demonstrate the performance of the proposed algorithm.
2 Preliminaries
Let H be a real Hilbert space and C be a nonempty, closed and convex subset of H. The weak convergence of \(\{x_n\}_{n=1}^{\infty }\) to x is denoted by \(x_{n}\rightharpoonup x\) as \(n\rightarrow \infty \), while the strong convergence of \(\{x_n\}_{n=1}^{\infty }\) to x is written as \(x_n\rightarrow x\) as \(n\rightarrow \infty .\) For each \(x,y,z\in H\), we have
for all \(\alpha , \beta , \gamma \in [0, 1]\) with \(\alpha + \beta + \gamma = 1.\)
Definition 2.1
Let \(T:H\rightarrow H\) be an operator.
1. The operator T is called L-Lipschitz continuous with \(L>0\) if
$$\begin{aligned} \Vert Tx-Ty\Vert \le L \Vert x-y\Vert \quad \forall x,y \in H. \end{aligned}$$
2. The operator T is called monotone if
$$\begin{aligned} \langle Tx-Ty,x-y \rangle \ge 0 \quad \forall x,y \in H. \end{aligned}$$
3. The operator T is called pseudo-monotone if
$$\begin{aligned} \langle Tx,y-x \rangle \ge 0 \Longrightarrow \langle Ty,y-x \rangle \ge 0 \quad \forall x,y \in H. \end{aligned}$$
4. The operator T is called \(\delta \)-strongly pseudo-monotone if there exists \(\delta > 0\) such that
$$\begin{aligned} \langle Tx,y-x \rangle \ge 0 \Longrightarrow \langle Ty,y-x \rangle \ge \delta \Vert y - x\Vert ^2 \quad \forall x,y \in H. \end{aligned}$$
5. The operator T is called \(\alpha \)-strongly monotone if there exists a constant \(\alpha >0\) such that
$$\begin{aligned} \langle Tx-Ty,x-y\rangle \ge \alpha \Vert x-y\Vert ^2 \quad \forall x,y\in H. \end{aligned}$$
6. The operator T is called sequentially weakly continuous if, for each sequence \(\{x_n\}\), \(x_n\rightharpoonup x\) implies \(Tx_n\rightharpoonup Tx\).
It is easy to see that every monotone operator is pseudo-monotone, but the converse is not true.
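A minimal numerical illustration of this gap, using a one-dimensional example of our own choosing: any strictly positive mapping on \(\mathbb {R}\) is pseudo-monotone (the premise \(F(x)(y-x)\ge 0\) forces \(y\ge x\), and then the conclusion holds automatically), yet a decreasing positive mapping such as \(F(x)=1/(1+x^2)\) is not monotone.

```python
import numpy as np

# F(x) = 1/(1+x^2) is strictly positive on R, hence pseudo-monotone.
F = lambda x: 1.0 / (1.0 + x * x)

rng = np.random.default_rng(0)
xs = rng.uniform(-5, 5, size=1000)
ys = rng.uniform(-5, 5, size=1000)

# Pseudo-monotonicity check on random pairs:
# F(x)(y-x) >= 0  must imply  F(y)(y-x) >= 0.
premise = F(xs) * (ys - xs) >= 0
conclusion = F(ys) * (ys - xs) >= 0
pseudo_ok = np.all(conclusion[premise])

# Monotonicity fails: <F(x)-F(y), x-y> < 0, e.g. for x = 0, y = 1.
mono_gap = (F(0.0) - F(1.0)) * (0.0 - 1.0)   # (1 - 0.5) * (-1) = -0.5 < 0
```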
For every point \(x\in H\), there exists a unique nearest point in C, denoted by \(P_Cx\), such that \(\Vert x-P_Cx\Vert \le \Vert x-y\Vert \ \forall y\in C\). \(P_C\) is called the metric projection of H onto C. It is known that \(P_C\) is nonexpansive.
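For a concrete instance, the metric projection onto a box is the componentwise clip. The sketch below (our own example, not from the paper) numerically verifies both the variational characterization of the nearest point, \(\langle x-P_Cx,\,P_Cx-y\rangle \ge 0\) for all \(y\in C\), and nonexpansiveness.

```python
import numpy as np

def P_box(x, lo=-1.0, hi=1.0):
    # Metric projection onto the box C = [lo, hi]^n: componentwise clip.
    return np.clip(x, lo, hi)

rng = np.random.default_rng(1)
x = rng.normal(size=5) * 3
z = P_box(x)

# Variational characterization: <x - z, z - y> >= 0 for all y in C,
# checked here on 200 random points of C.
ys = rng.uniform(-1, 1, size=(200, 5))
char_ok = np.all((x - z) @ (z - ys).T >= -1e-12)

# Nonexpansiveness: ||P_C u - P_C v|| <= ||u - v||.
u, v = rng.normal(size=5) * 3, rng.normal(size=5) * 3
nonexp = np.linalg.norm(P_box(u) - P_box(v)) <= np.linalg.norm(u - v) + 1e-12
```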
Lemma 2.1
(Goebel and Reich 1984) Let C be a nonempty closed convex subset of a real Hilbert space H. Given \(x\in H\) and \(z\in C\). Then, \(z=P_Cx\Longleftrightarrow \langle x-z,z-y\rangle \ge 0 \ \ \forall y\in C.\)
Lemma 2.2
(Goebel and Reich 1984) Let C be a closed and convex subset in a real Hilbert space H, \(x\in H\). Then,
(i) \(\Vert P_Cx-P_Cy\Vert ^2\le \langle P_C x-P_C y,x-y\rangle \ \forall y\in C\);
(ii) \(\Vert P_C x-y\Vert ^2\le \Vert x-y\Vert ^2-\Vert x-P_Cx\Vert ^2 \ \forall y\in C;\)
(iii) \(\langle (I-P_C)x-(I-P_C)y,x-y\rangle \ge \Vert (I-P_C)x-(I-P_C)y\Vert ^2 \ \forall y\in C.\)
For properties of the metric projection, the interested reader is referred to Goebel and Reich (1984), Section 3.
The following lemmas are useful for the convergence analysis of our proposed methods.
Lemma 2.3
(Denisov et al. 2015) For \(x\in H\) and \(\alpha \ge \beta >0\), the following inequalities hold:
where \(F:C\rightarrow H\) is a nonlinear mapping.
Lemma 2.4
(Iusem and Nasri 2011) Let \(H_1\) and \(H_2\) be two real Hilbert spaces. Suppose \(F : H_1 \rightarrow H_2\) is uniformly continuous on bounded subsets of \(H_1\) and M is a bounded subset of \(H_1\). Then, F(M) (the image of M under F) is bounded.
Lemma 2.5
(Cottle and Yao 1992) Consider VI (1) with C a nonempty, closed, convex subset of a real Hilbert space H and \(F : C \rightarrow H\) pseudo-monotone and continuous. Then, \(x^*\in \mathrm{Sol}(C,F)\) if and only if
$$\begin{aligned} \langle Fx,x-x^*\rangle \ge 0 \quad \forall x\in C. \end{aligned}$$
Definition 2.2
(Ortega and Rheinboldt 1993) Let \(\{x_n\}\) be a sequence in H.
(i) \(\{x_n\}\) is said to converge R-linearly to \(x^*\) with rate \(\rho \in [0, 1)\) if there is a constant \(c>0\) such that
$$\begin{aligned} \Vert x_n-x^*\Vert \le c\rho ^n \quad \forall n\in \mathbb {N}. \end{aligned}$$
(ii) \(\{x_n\}\) is said to converge Q-linearly to \(x^*\) with rate \(\rho \in [0, 1)\) if
$$\begin{aligned} \Vert x_{n+1}-x^*\Vert \le \rho \Vert x_n-x^*\Vert \quad \forall n\in \mathbb {N}. \end{aligned}$$
Lemma 2.6
(Saejung and Yotkaew 2012) Let \(\{a_n\}\) be a sequence of nonnegative real numbers, \(\{\gamma _n\}\) be a sequence of real numbers in (0, 1) with \(\sum _{n=1}^\infty \gamma _n=\infty \) and \(\{b_n\}\) be a sequence of real numbers. Assume that
$$\begin{aligned} a_{n+1}\le (1-\gamma _n)a_n+\gamma _n b_n \quad \forall n\ge 1. \end{aligned}$$
If \(\limsup _{k\rightarrow \infty } b_{n_k} \le 0\) for every subsequence \(\{a_{n_k}\}\) of \(\{a_n\}\) satisfying
$$\begin{aligned} \liminf _{k\rightarrow \infty }\,(a_{n_k+1}-a_{n_k})\ge 0, \end{aligned}$$
then \(\lim _{n\rightarrow \infty }{a_n} = 0\).
3 Main results
In this section, we establish the strong convergence and the convergence rate of the sequence generated by the proposed method under suitable assumptions. The proposed algorithm is of the form:
Remark 3.1
(5) can be rewritten as follows
3.1 Strong convergence
To prove the strong convergence, we need the following conditions:
Condition 1
The feasible set C of the VI (1) is a nonempty, closed, and convex subset of the real Hilbert space H.
Condition 2
The mapping \(F:C\rightarrow H\) is pseudo-monotone and uniformly continuous on bounded subsets of C.
Condition 3
The solution set of the VI (1) is nonempty; that is, \(\mathrm{Sol}(C,F)\ne \emptyset \).
We start the algorithm’s convergence analysis by proving that (5) terminates after a finite number of steps.
Lemma 3.7
Assume that the mapping \(F:C\rightarrow H\) is uniformly continuous on bounded subsets of C. Then the Armijo line-search rule (5) is well defined; in addition, we have \(\tau _n\le \gamma .\)
Proof
If \(v_n\in \mathrm{Sol}(C,F)\), then \(v_n=P_{C}(v_n-\gamma Fv_n)\) and \(m_n=0\). We consider the situation \(v_n\notin \mathrm{Sol}(C,F)\) and assume, to the contrary, that for all m we have
By the Cauchy–Schwarz inequality, we have
and
Combining (6), (7) and (8), we find
This implies that
Since \(v_n\in C\) for all n and \(P_C\) is continuous, we have \(\lim _{m\rightarrow \infty }\Vert v_n-P_C(v_n-\gamma l^m Fv_n)\Vert =0.\) From the uniform continuity of the operator F on bounded subsets of C, it implies that
Combining (10) and (11), we get
Setting \(z_m=P_C(v_n-\gamma l^m Fv_n)\), we have
This implies that
Taking the limit \(m\rightarrow \infty \) in (13) and using (12), we obtain
which implies that \(v_n\in \mathrm{Sol}(C,F)\), a contradiction. \(\square \)
Lemma 3.8
Assume that Conditions 1–3 hold and let \(\{v_n\}\) be any sequence generated by Algorithm 3.1. Moreover, we assume that the mapping \(F:C\rightarrow H\) satisfies the following condition:
If there exists a subsequence \(\{v_{n_k}\}\) of \(\{v_n\}\) such that \(\{v_{n_k}\}\) converges weakly to \(z\in C\) and \(\lim _{k\rightarrow \infty }\Vert v_{n_k}-y_{n_k}\Vert =0\) then \(z\in \mathrm{Sol}(C,F).\)
Proof
We have \(y_{n_k}=P_C(v_{n_k}-\tau _{n_k}Fv_{n_k})\), thus
or equivalently
This implies that
Now, we show that
To show this, we consider two possible cases. Suppose first that \(\liminf _{k\rightarrow \infty }\tau _{n_k}>0\). Since \(\{v_{n_k}\}\) is bounded and F is uniformly continuous on bounded subsets of C, Lemma 2.4 implies that \(\{Fv_{n_k}\}\) is bounded. Letting \(k\rightarrow \infty \) in (15) and using \(\Vert v_{n_k}-y_{n_k}\Vert \rightarrow 0\), we get
Now, assume that \(\liminf _{k\rightarrow \infty }\tau _{n_k}=0\). Setting \(z_{n_k}=P_{C}(v_{n_k}-\tau _{n_k}l^{-1}Fv_{n_k})\), we have \(\tau _{n_k}l^{-1}>\tau _{n_k}\). Applying Lemma 2.3, we obtain
Consequently, \(z_{n_k}\rightharpoonup z\in C\); in particular, \(\{z_{n_k}\}\) is bounded. By the uniform continuity of F on bounded subsets of C, it follows that
By the Armijo line-search rule (5), arguing as in the proof of inequality (9), we have
That is,
Combining (17) and (18), we obtain
Furthermore, we have
This implies that
Taking the limit \(k\rightarrow \infty \) in (19), we get
Therefore, the inequality (16) is proved. Next, we show that \(z\in \mathrm{Sol}(C,F)\).
Now, choose a decreasing sequence \(\{\varepsilon _k\}\) of positive numbers tending to 0. For each k, denote by \(N_k\) the smallest positive integer such that
where the existence of \(N_k\) follows from (16). Since \(\{ \varepsilon _k\}\) is decreasing, it is easy to see that the sequence \(\{N_k\}\) is increasing. Furthermore, for each k, since \(\{v_{N_k}\}\subset C\), we have \(Fv_{N_k}\ne 0\) and, setting
we have \(\langle Fv_{N_k}, u_{N_k}\rangle = 1\) for each k. Now, we can deduce from (20) that for each k
Since F is pseudo-monotone, we get
This implies that
Now, we show that \(\lim _{k\rightarrow \infty }\varepsilon _k u_{N_k}=0\). Indeed, we have \(v_{n_k}\rightharpoonup z \text { as } k \rightarrow \infty \). Since F satisfies condition (14), we have
Since \(\{v_{N_k}\}\subset \{v_{n_k}\}\) and \(\varepsilon _k \rightarrow 0\) as \(k \rightarrow \infty \), we obtain
which implies that \(\lim _{k\rightarrow \infty } \varepsilon _k u_{N_k} = 0.\)
Now, letting \(k\rightarrow \infty \), the right-hand side of (21) tends to zero, since F is uniformly continuous, \(\{v_{N_k}\}\) and \(\{u_{N_k}\}\) are bounded, and \(\lim _{k\rightarrow \infty }\varepsilon _k u_{N_k}=0\). Thus, we get
Hence, for all \(x\in C\), we have
By Lemma 2.5, we obtain \(z\in \mathrm{Sol}(C,F)\) and the proof is complete. \(\square \)
Remark 3.2
We emphasize that if F is sequentially weakly continuous, then F satisfies condition (14), but the converse is not true; hence, condition (14) is strictly weaker than the sequential weak continuity of the operator F. Sequential weak continuity is the assumption frequently used in recent articles on pseudo-monotone variational inequality problems, e.g., Vuong (2018), Thong et al. (2020) and Vuong and Shehu (2019). Moreover, when F is monotone, condition (14) can be removed (Denisov et al. 2015; Vuong 2018).
Theorem 3.1
Assume that Conditions 1–3 hold. Moreover, we assume that the mapping \(F:C\rightarrow H\) satisfies the following condition:
Then, any sequence \(\{v_n\}\) generated by Algorithm 3.1 converges strongly to the element \(u^*\in \mathrm{Sol}(C,F)\), where \(u^*=P_{\mathrm{Sol}(C,F)}\,0\), that is, \(u^*\) is the minimum-norm solution of VI (1).
Proof
Let \(u^*\) be a solution of VI (1). We divide the proof into several claims.
Claim 1.
Indeed, from \(u^*\in \mathrm {Sol}(C,F)\subset C\), we have
This implies that
Since \(u^*\) is a solution of VI (1), we have \(\langle Fu^*,x-u^*\rangle \ge 0\) for all \(x\in C\). By the pseudo-monotonicity of F on C, we have \(\langle Fx,x-u^*\rangle \ge 0\) for all \(x\in C\). Taking \(x:=y_n\in C\), we get
Thus, we have
Since \(y_n=P_{C}(v_n-\tau _n Fv_n)\) and \(w_n\in C\), we have
Combining (5) and (26), we obtain
Substituting (27) into (25), we obtain
Claim 2. The sequence \(\{v_n\}\) is bounded. Indeed, we have
That is, the sequence \(\{v_n\}\) is bounded, and so is \(\{w_n\}\).
Claim 3.
Indeed, using (4), we have
Substituting (22) into the above inequality, we get
Thus, we get
Moreover, since \(\beta _n\ge a\) for all \(n\ge 1\), we obtain
Claim 4.
Indeed, setting \(t_n=(1-\beta _n)v_n+\beta _n w_n\) for each \(n\ge 1\), we have
and
Now, using the inequality (3), we get
Combining (31) and (32), we obtain
Claim 5. \(\{\Vert v_n-u^*\Vert ^2\}\) converges to zero. Indeed, for each \(n\ge 0\), set
Then, Claim 4 can be rewritten as follows:
By Lemma 2.6, it is sufficient to show that \(\limsup _{k\rightarrow \infty }b_{n_k}\le 0\) for every subsequence \(\{a_{n_k}\}\) of \(\{a_n\}\) satisfying
Equivalently, we need to show that \(\limsup _{k\rightarrow \infty }\langle u^*, u^*-v_{n_k+1}\rangle \le 0\) and \(\limsup _{k\rightarrow \infty }\Vert v_{n_k}-w_{n_k}\Vert \le 0\) for every subsequence \(\{\Vert v_{n_k}-u^*\Vert \}\) of \(\{\Vert v_n-u^*\Vert \}\) satisfying
Suppose that \(\{\Vert v_{n_k}-u^*\Vert \}\) is a subsequence of \(\{\Vert v_n-u^*\Vert \}\) such that
Then, we have
By Claim 3, we obtain
This implies that
Thus, we have
On the other hand, we have
Since the sequence \(\{v_{n_k}\}\) is bounded, there exists a subsequence \(\{v_{n_{k_j}}\}\) of \(\{v_{n_k}\}\) converging weakly to some \(z\in H\) such that
From \( \lim _{k\rightarrow \infty }\Vert v_{n_k}-y_{n_k}\Vert =0\) and Lemma 3.8, we have \(z\in \mathrm{Sol}(C,F)\) and, from (34) and the definition of \(u^*=P_{\mathrm{Sol}(C,F)} 0\), we have
Combining (33) and (35), we have
Hence, it follows from (36), \(\lim _{k\rightarrow \infty }\Vert v_{n_k}-w_{n_k}\Vert =0\), Claim 4 and Lemma 2.6 that
This completes the proof. \(\square \)
Remark 3.3
Our result generalizes some related results in the literature and hence might be applied to a wider class of nonlinear mappings. For example, in the next section, we present the advantages of our method compared with the recent results (Cai et al. 2021, Theorem 3.1) as follows:
-
(1)
In Theorem 3.1, we replaced the sequentially weakly continuity of F by condition (14), which is strictly weaker than the sequential weak continuity of the operator F.
-
(2)
We also obtained the strong convergence without using the viscosity technique.
3.2 Convergence rate
In this section, we provide a result on the convergence rate of the iterative sequence generated by Algorithm 3.1 when \(\alpha _n=0\) and \(\beta _n=\beta \in (0,1)\) for all n and the mapping \(F: C\rightarrow H\) is L-Lipschitz continuous and \(\delta \)-strongly pseudo-monotone on C. In this case, Algorithm 3.1 is of the form:
Theorem 3.2
Assume that \(F: C\rightarrow H\) is L-Lipschitz continuous and \(\delta \)-strongly pseudo-monotone on C. Then, the sequence \(\{v_n\}\) generated by Algorithm 3.2 converges strongly to \(u^*\) with a Q-linear rate, where \(u^*\) is a unique solution of VI.
Proof
We have \(\langle Fu^*,y_n-u^*\rangle \ge 0\); hence, by the strong pseudo-monotonicity of F on C, we get
Using (37), we have
Substituting (38) into (23), we obtain
Thus,
Substituting (27) into (39), we obtain
Now, we show that \(\tau _n> \dfrac{\mu l}{L}\) for all n. Indeed, by the search rule (5), \(\dfrac{\tau _n}{l}\) must violate inequality (5), i.e.,
It follows that
Thus,
This implies that
Combining (40) and (41), we get
On the other hand, using (2), we have
Let \(\xi :=\dfrac{1}{2}\min \bigg \{\beta (1-\mu ),2\beta \dfrac{\mu l}{L}\delta \bigg \}\in (0,1)\). From (42), we get
This implies that
Since \(\sqrt{1-\xi }\in (0,1)\), the sequence \(\{v_n\}\) converges strongly to \(u^*\) with a Q-linear rate. \(\square \)
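A Q-linear rate of the kind guaranteed by Theorem 3.2 can be observed numerically. The sketch below is a toy instance of our own: it runs the classical fixed-step extragradient iteration (as a stand-in for the paper's line-search variant) on \(F(x)=Mx\) with a strongly monotone M, and measures the step-to-step error ratio.

```python
import numpy as np

M = np.array([[2.0, 1.0], [-1.0, 2.0]])  # 2-strongly monotone, L = ||M||
F = lambda x: M @ x                       # unique VI solution on C is x* = 0

def P_box(x):                             # C = [-1, 1]^2
    return np.clip(x, -1.0, 1.0)

x = np.array([0.9, -0.7])
tau = 0.2
errs = [np.linalg.norm(x)]
for _ in range(60):
    y = P_box(x - tau * F(x))             # predictor step
    x = P_box(x - tau * F(y))             # corrector step
    errs.append(np.linalg.norm(x))

errs = np.array(errs)
ratios = errs[1:] / errs[:-1]             # Q-factor estimates, all < 1
```

For this affine instance the iteration map is linear away from the box boundary, so the measured ratio is essentially constant, exactly the Q-linear behavior defined in Definition 2.2 (ii).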
4 Numerical illustrations
In this section, we perform four numerical experiments to show the behavior of Algorithm 3.2 and compare it with other algorithms. In the numerical results listed in the following tables, 'Iter.' denotes the total number of internal and external iterations, and the number in brackets denotes the total number of iterations used to find a suitable \(\tau _n.\) The experiments are as follows.
Example 4.1
(Shehu et al. 2019) Consider \(C:=\{x \in H:\Vert x\Vert \le 2\}\). Let \(g: C\rightarrow \mathbb {R}\) be defined by
Observe that g is Lipschitz continuous with constant \(L_g=\frac{16}{25}\) and \(\frac{1}{5}\le g(u)\le 1, \quad \forall \, u \in C\). Define the Volterra integral operator \(A:L^2([0,1])\rightarrow L^2([0,1])\) by
Then, A is bounded linear monotone and \(\Vert A\Vert =\frac{2}{\pi }\). Now, define \(F:C\rightarrow L^2([0,1])\) by
Hence, F is not monotone on C, but it is pseudo-monotone and Lipschitz continuous with constant \(L = 82/\pi \). The parameters are chosen as follows:
Algorithm 3.2: \(\gamma =1.5,\,\mu =0.99,\,l=0.5,\,a=0.0001,\,\alpha _n=\frac{1}{n+2},\,\beta _n=a+0.99(1-a-\alpha _n)\).
Algorithm 3.1 in Cai et al. (2021): \(f(x)=0,\,\gamma =1.5,\,\mu =0.6,\,l=0.5,\,\beta _n=\frac{1}{n+2}\).
Algorithm 3 in Thong et al. (2020): \(f(x)=0,\,\gamma =1.1,\,\mu =0.5,\,l=0.5,\,\alpha _n=\frac{1}{n+2}\).
Algorithm 3.1 in Thong and Gibali (2019): \(\gamma =1.72,\,\mu =0.5,\,l=0.5,\,\alpha _n=\frac{1}{n+2}\).
We now compare the four algorithms with the initial functions \(x_0=x_1=t^2\) and the stopping condition \(\Vert x_{n}-y_n\Vert <\mathrm{Error}\).
The results in Fig. 1 illustrate that Algorithm 3.2 behaves better than Algorithm 3.1 in Cai et al. (2021), Algorithm 3 in Thong et al. (2020) and Algorithm 3.1 in Thong and Gibali (2019).
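As a side check on Example 4.1, the stated operator norm \(\Vert A\Vert =\frac{2}{\pi }\) of the Volterra integral operator can be verified by discretization; the rectangle-rule scheme below is our own construction.

```python
import numpy as np

n = 500
h = 1.0 / n
# Discretize (Ax)(t) = integral_0^t x(s) ds on [0,1] by a lower-triangular
# cumulative-sum matrix; its spectral norm approximates the L2 operator
# norm of A, which equals 2/pi.
T = h * np.tril(np.ones((n, n)))
op_norm = np.linalg.norm(T, 2)            # largest singular value

# A is monotone: the symmetric part of T is positive definite here.
min_sym_eig = np.linalg.eigvalsh((T + T.T) / 2.0).min()
```

The discretization error is O(1/n), so with n = 500 the computed norm agrees with \(2/\pi \approx 0.6366\) to about three digits.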
Example 4.2
(Harker and Pang 1990) Let \(F(x): = M x + q\) where
and B is an \(m\times m\) matrix, C is an \(m\times m\) skew-symmetric matrix, D is an \(m\times m\) diagonal matrix whose diagonal entries are nonnegative (so M is positive semidefinite), and q is a vector in \(\mathbb {R}^m\). The feasible set \(C\subset \mathbb {R}^m\) is the closed convex set \(C: = \{x\in \mathbb {R}^m: Qx\le b\}\), where Q is a \(d\times m\) matrix and b is a nonnegative vector. Clearly, F is monotone and Lipschitz continuous with constant \(L =\Vert M\Vert \). Let \( q= 0\); then the solution set is \(\varGamma = \{0\}\). The parameters are chosen as follows:
Algorithm 3.2: \(\gamma =0.01,\,\mu =0.99,\,l=0.55,\,\alpha _n=\frac{1}{n+2},\,\beta _n=a+0.99(1-a-\alpha _n)\).
Algorithm 3.2 in Gibali et al. (2019): \(f(x)=0,\,\lambda =0.01,\,\gamma =1,\,\mu =0.5,\,l=0.5,\,\alpha _n=\frac{1}{n+2}\).
We now compare the two algorithms with \(d=20\), \(m=20\) and the stopping condition \(\Vert v_n\Vert <\mathrm{Error}\). From Fig. 2 and Table 1, we observe that Algorithm 3.2 performs better than Algorithm 3.2 in Gibali et al. (2019).
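The test problem of Example 4.2 can be reproduced as follows. The displayed formula for M is not shown above, so the sketch assumes the standard Harker–Pang form \(M = BB^{T} + C + D\); the skew-symmetric matrix is named S in the code to avoid clashing with the feasible set C.

```python
import numpy as np

rng = np.random.default_rng(42)
m = 20
B = rng.uniform(-2, 2, size=(m, m))
S0 = rng.uniform(-2, 2, size=(m, m))
S = S0 - S0.T                            # skew-symmetric part
D = np.diag(rng.uniform(0, 2, size=m))   # nonnegative diagonal entries

# Assumed Harker-Pang construction for the missing display:
M = B @ B.T + S + D
F = lambda x: M @ x                      # q = 0, so the solution set is {0}

# The symmetric part B B^T + D is positive semidefinite, so F is monotone.
sym_eigs = np.linalg.eigvalsh((M + M.T) / 2.0)
L = np.linalg.norm(M, 2)                 # Lipschitz constant of F
```

The skew part S cancels in the symmetric part, which is why monotonicity depends only on \(BB^{T}+D\).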
Example 4.3
(Shehu et al. 2019) Next, let us consider Sol(C, F) with
and \(C:=\{x \in \mathbb {R}^2:-10 \le x_i \le 10,\ i=1,2\}.\) This problem has a unique solution \(x^* =(0,-1)^T\). Clearly, F is not monotone on C. However, using the Monte Carlo approach (see Shehu et al. 2019), it can be shown that F is pseudo-monotone on C. The parameters are chosen as follows:
Algorithm 3.2: \(\gamma =0.24,\,\mu =0.99,\,l=0.58,\,a=0.0001,\,\alpha _n=\frac{1}{n+2},\,\beta _n=a+0.99(1-a-\alpha _n)\).
Algorithm 3 in Thong et al. (2020): \(f(x)=0,\,\gamma =1.8,\,\mu =0.5,\,l=0.22,\,\alpha _n=\frac{1}{n+2}\).
Algorithm 3.1 in Thong and Gibali (2019): \(\gamma =0.84,\,\mu =0.01,\,l=0.99,\,\alpha _n=\frac{1}{n+2}\).
Algorithm 3.2 in Thong et al. (2020): \(\tau _1=0.43,\,\mu =0.99,\,\alpha =0,\,\beta _n=\frac{1}{n+2}\).
We now compare the four algorithms with the initial points \(v_0=v_1=[-0.3;-0.3]\) and the stopping condition \(\Vert v_n-x^*\Vert <\mathrm{Error}\). Figure 3 and Table 2 demonstrate that Algorithm 3.2 performs better than Algorithm 3 in Thong et al. (2020), Algorithm 3.1 in Thong and Gibali (2019) and Algorithm 3.2 in Thong et al. (2020).
Example 4.4
(Cai et al. 2021) Suppose that \(H = L^2([0,1])\) with norm \(\Vert x\Vert :=\Big (\int _0^1 |x(t)|^2\mathrm{d}t\Big )^{\frac{1}{2}}\) and inner product \(\langle x,y\rangle := \int _0^1 x(t)y(t)\mathrm{d}t,\quad x,y \in H\). Let \(C:=\{x \in H:\Vert x\Vert \le 1\}\) be the unit ball. Define an operator \(F:C\rightarrow H\) by
where
It was shown in Cai et al. (2021) that F is sequentially weakly continuous on C, monotone and 2-Lipschitz continuous.
The parameters are chosen as follows:
Algorithm 3.2: \(\gamma =0.01,\,\mu =0.99,\,l=0.5,\,a=0.0001,\,\alpha _n=\frac{1}{n+2}, \,\beta _n=a+0.99(1-\alpha _n-a)\).
Algorithm 3.1 in Cai et al. (2021): \(f(x)=0,\,\gamma =0.02,\,\mu =0.8,\,l=0.5,\,\beta _n=\frac{1}{n+2}\).
Algorithm 3.2 in Thong et al. (2020): \(f(x)=0,\,\tau _1=0.4,\,\mu =0.03,\,\alpha _n=0,\,\beta _n=\frac{1}{n+2}\).
Algorithm 1 in Yang and Hongwei (2019): \(f(x)=0,\,\lambda _0=0.02,\,\mu =0.3,\,\alpha _n=\frac{1}{n+2}\).
We now compare the four algorithms with the initial functions \(x_0=x_1=(0.5,0.5,\dots ,0.5)\) and the stopping condition \(\Vert x_n-y_n\Vert <\mathrm{Error}\). The results in Table 3 show that Algorithm 3.2 behaves better than Algorithm 3.1 in Cai et al. (2021), Algorithm 3.2 in Thong et al. (2020) and Algorithm 1 in Yang and Hongwei (2019) as the error tolerance decreases.
5 Conclusions
In this paper, we introduced a new version of the extragradient method for solving non-Lipschitzian pseudo-monotone variational inequalities in real Hilbert spaces. Strong convergence and the convergence rate of the proposed method were presented. Our work extends and generalizes some existing results in the literature. Finally, several numerical experiments were reported to illustrate the performance of the proposed algorithm.
References
Antipin AS (1976) On a method for convex programs using a symmetrical modification of the Lagrange function. Ekonomika i Mat Metody 12:1164–1173
Cai G, Dong QL, Peng Y (2021) Strong convergence theorems for solving variational inequality problems with pseudo-monotone and non-Lipschitz operators. J Optim Theory Appl 188:447–472
Censor Y, Gibali A, Reich S (2011) The subgradient extragradient method for solving variational inequalities in Hilbert space. J Optim Theory Appl 148:318–335
Censor Y, Gibali A, Reich S (2011) Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim Methods Softw 26:827–845
Censor Y, Gibali A, Reich S (2011) Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 61:1119–1132
Censor Y, Gibali A, Reich S (2012) Algorithms for the split variational inequality problem. Numer Algorithms 56:301–323
Cottle RW, Yao JC (1992) Pseudo-monotone complementarity problems in Hilbert space. J Optim Theory Appl 75:281–295
Denisov SV, Semenov VV, Chabak LM (2015) Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern Syst Anal 51:757–765
Facchinei F, Pang JS (2003) Finite-dimensional variational inequalities and complementarity problems. Springer series in operations research, vols. I and II. Springer, New York
Fichera G (1963) Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad Naz Lincei VIII Ser Rend Cl Sci Fis Mat Nat 34:138–142
Fichera G (1964) Problemi elastostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno. Atti Accad Naz Lincei Mem Cl Sci Fis Mat Nat Sez I VIII Ser 7:91–140
Gibali A (2015) A new non-Lipschitzian projection method for solving variational inequalities in Euclidean spaces. J Nonlinear Anal Optim Theory Appl 6:41–51
Gibali A, Shehu Y (2019) An efficient iterative method for finding common fixed point and variational inequalities in Hilbert. Optimization 68(1):13–32
Gibali A, Thong DV, Tuan PA (2019) Two simple projection-type methods for solving variational inequalities. Anal Math Phys 9:2203–2225
Goebel K, Reich S (1984) Uniform convexity, hyperbolic geometry, and nonexpansive mappings. Marcel Dekker, New York
Harker PT, Pang JS (1990) A damped Newton method for the linear complementarity problem. In: Simulation and optimization of large systems. Lectures in applied mathematics, vol 26. AMS, Providence, pp 265–284
Iusem AN (1994) An iterative algorithm for the variational inequality problem. Comput Appl Math 13:103–114
Iusem AN, Nasri M (2011) Korpelevich’s method for variational inequality problems in Banach spaces. J Glob Optim 50:59–76
Karamardian S (1976) Complementarity problems over cones with monotone and pseudo-monotone maps. J Optim Theory Appl 18:445–454
Khobotov EN (1987) Modifications of the extragradient method for solving variational inequalities and certain optimization problems. USSR Comput Math Math Phys 27:120–127
Korpelevich GM (1976) The extragradient method for finding saddle points and other problems. Ekonomika i Mat Metody 12:747–756
Malitsky YV (2015) Projected reflected gradient methods for monotone variational inequalities. SIAM J Optim 25:502–520
Malitsky YV, Semenov VV (2015) A hybrid method without extrapolation step for solving variational inequality problems. J Glob Optim 61:193–202
Marcotte P (1991) Application of Khobotov’s algorithm to variational inequalities and network equilibrium problems. Inf Syst Oper Res 29:258–270
Ortega JM, Rheinboldt WC (1970) Iterative solution of nonlinear equations in several variables. Academic Press, New York
Saejung S, Yotkaew P (2012) Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal 75:742–750
Shehu Y, Iyiola OS (2020) Projection methods with alternating inertial steps for variational inequalities: weak and linear convergence. Appl Numer Math 157:315–337
Shehu Y, Dong QL, Jiang D (2019) Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 68:385–409
Shehu Y, Li XH, Dong QL (2020) An efficient projection-type method for monotone variational inequalities in Hilbert spaces. Numer Algorithms 84:365–388
Solodov MV, Svaiter BF (1999) A new projection method for variational inequality problems. SIAM J Control Optim 37:765–776
Thong DV, Gibali A (2019) Extragradient methods for solving non-Lipschitzian pseudo-monotone variational inequalities. J Fixed Point Theory Appl 21:20. https://doi.org/10.1007/s11784-018-0656-9
Thong DV, Shehu Y, Iyiola OS (2020) Weak and strong convergence theorems for solving pseudo-monotone variational inequalities with non-Lipschitz mappings. Numer Algorithms 84:795–823
Thong DV, Hieu DV, Rassias TM (2020) Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems. Optim Lett 14:115–144
Vuong PT (2018) On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J Optim Theory Appl 176:399–409
Vuong PT, Shehu Y (2019) Convergence of an extragradient-type method for variational inequality with applications to optimal control problems. Numer Algorithms 81:269–291
Yang J, Hongwei L (2019) Strong convergence result for solving monotone variational inequalities in Hilbert space. Numer Algorithms 80:741–752
Acknowledgements
The authors would like to thank Associate Editor and anonymous reviewers for their comments on the manuscript which helped us very much in improving and presenting the original version of this paper.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Communicated by Orizon Pereira Ferreira.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Thong, D.V., Li, X., Dong, QL. et al. Revisiting the extragradient method for finding the minimum-norm solution of non-Lipschitzian pseudo-monotone variational inequalities. Comp. Appl. Math. 41, 186 (2022). https://doi.org/10.1007/s40314-022-01887-2
Keywords
- Extragradient method
- Mann type method
- Variational inequality problem
- Pseudo-monotone mapping
- Strong convergence
- Convergence rate