Abstract
In this paper, we introduce and analyze a new algorithm for solving equilibrium problems involving pseudomonotone and Lipschitz-type bifunctions in real Hilbert spaces. The algorithm requires solving only one strongly convex programming problem per iteration. A weak and a strong convergence theorem are established without any knowledge of the Lipschitz-type constants of the bifunction. As a special case of the equilibrium problem, the variational inequality is also considered. Finally, numerical experiments are performed to illustrate the advantages of the proposed algorithm.
1 Introduction
In this paper, we consider the equilibrium problem (EP), which is to find \(x^*\in C\) such that
$$\begin{aligned} f(x^*,y)\ge 0, \quad \forall y\in C, \end{aligned}$$(1)
where C is a nonempty closed convex subset of a real Hilbert space H and \(f: H\times H\longrightarrow {\mathbb {R}}\) is a bifunction. The solution set of (1) is denoted by EP(f). The equilibrium problem is also called the Ky Fan inequality due to his contribution to this field [1]. The problem unifies many important mathematical models, such as the saddle point problem, the fixed point problem, the variational inequality and the Nash equilibrium problem [2, 3]. Recently, methods for solving the equilibrium problem have been studied extensively [4,5,6,7,8,9,10,11,12,13,14,15,16,17]. One of the most popular methods is the proximal point method [4,5,6], but it cannot be applied to pseudomonotone equilibrium problems. Another method is the proximal-like method (the extragradient method) [7]. Using the idea of the Korpelevich extragradient method [8], this method was extended by Quoc et al. in [9] to
$$\begin{aligned} \left\{ \begin{array}{l} y_{n}=argmin\left\{ \lambda f(x_n,y)+\frac{1}{2}\Vert x_{n}-y\Vert ^2, \ y\in C\right\} , \\ x_{n+1}=argmin\left\{ \lambda f(y_n,y)+\frac{1}{2}\Vert x_{n}-y\Vert ^2, \ y\in C\right\} , \end{array} \right. \end{aligned}$$(2)
where \(\lambda \) is a suitable parameter. It was proved that the sequence \(\{x_n\}\) generated by (2) converges to a solution of the equilibrium problem under the assumptions of pseudomonotonicity and a Lipschitz-type condition on f. However, at each iteration one needs to solve two strongly convex programming problems. This method was improved by many authors; see, e.g., [10,11,12,13,14,15]. Based on Malitsky's work on the variational inequality [18], Nguyen [15] proposed the following method
$$\begin{aligned} \left\{ \begin{array}{l} x_{n}=\frac{(\varphi -1)y_n+x_{n-1}}{\varphi }, \\ y_{n+1}=argmin\left\{ \lambda f(y_n,y)+\frac{1}{2}\Vert x_{n}-y\Vert ^2, \ y\in C\right\} , \end{array} \right. \end{aligned}$$(3)
where \(\varphi =\frac{\sqrt{5}+1}{2}\) and \(\lambda \) is a suitable parameter. It is easy to see that this method needs only one strongly convex programming problem per iteration. The main drawback of algorithms (2) and (3) is the requirement to know the Lipschitz-type constants of the equilibrium bifunction. To overcome this shortcoming, Dang [10, 11, 13, 14] proposed non-summable and diminishing step size sequences for solving strongly pseudomonotone equilibrium problems. In this work, we propose a new gradient method for solving pseudomonotone equilibrium problems. It is worth pointing out that the proposed algorithm uses a new step size and does not require knowledge of the Lipschitz-type constants of the bifunction.
The remainder of this paper is organized as follows. In Sect. 2, we present some definitions and preliminaries that will be needed throughout the paper. In Sect. 3, we propose a new algorithm and analyze its convergence. In Sect. 4, we particularize our method to the variational inequality. Finally, preliminary numerical experiments are provided which demonstrate the performance of our algorithm.
2 Preliminaries
In this section, we recall some concepts and results for further use.
Definition 2.1
A bifunction \(f : C \times C \longrightarrow {\mathbb {R}}\) is said to be:
- (i) Monotone on C if \(f(x,y)+f(y,x)\le 0, \ \forall x, y\in C. \)
- (ii) Pseudomonotone on C if \(f(x,y)\ge 0\Longrightarrow f(y,x)\le 0, \ \forall x, y\in C. \)
- (iii) Strongly pseudomonotone on C if there exists \(\gamma >0\) such that \(f(x,y)\ge 0\Longrightarrow f(y,x)\le -\gamma \Vert x-y\Vert ^2, \ \forall x, y\in C. \)
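As an illustrative sketch (not from the paper), the bifunction \(f(x,y)=\langle x, y-x\rangle \) is strongly pseudomonotone with \(\gamma =1\), since the identity \(f(y,x)=-f(x,y)-\Vert x-y\Vert ^2\) gives \(f(y,x)\le -\Vert x-y\Vert ^2\) whenever \(f(x,y)\ge 0\); a quick numerical check:

```python
import numpy as np

# Hypothetical example (not from the paper): f(x, y) = <x, y - x> is
# strongly pseudomonotone with gamma = 1, because
# f(y, x) = -f(x, y) - ||x - y||^2 <= -||x - y||^2 whenever f(x, y) >= 0.
rng = np.random.default_rng(0)

def f(x, y):
    return np.dot(x, y - x)

violations = 0
for _ in range(10_000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    if f(x, y) >= 0 and f(y, x) > -np.dot(x - y, x - y) + 1e-12:
        violations += 1
print(violations)  # 0
```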
Definition 2.2
A mapping \(h : C \longrightarrow {\mathbb {R}}\) is called subdifferentiable at \(x\in C\) if there exists a vector \(w\in H\) such that \(h(y)-h(x)\ge \langle w, y-x\rangle , \forall y\in C.\)
Definition 2.3
A mapping \(F : H \rightarrow H\) is said to be sequentially weakly continuous if the sequence \(\{x_n\}\) converges weakly to x implies \(\{F(x_n)\}\) converges weakly to F(x).
For solving the equilibrium problem, we assume that the bifunction f satisfies the following conditions:
- (C1):
f is pseudomonotone on C and \(f(x,x)=0\) for all \(x\in C\).
- \((C1')\):
f is strongly pseudomonotone on C and \(f(x,x)=0\) for all \(x\in C\).
- (C2):
f satisfies the Lipschitz-type condition on C. That is, there exist two positive constants \(c_1\), \(c_2\) such that \(f(x,y)+f(y,z)\ge f(x,z)-c_1\Vert x-y\Vert ^2-c_2\Vert y-z\Vert ^2, \ \forall x, y, z\in C.\)
- (C3):
f(x, .) is convex and subdifferentiable on C for every fixed \(x \in C\).
- (C4):
\(\limsup _{n\rightarrow \infty }f (x_n, y)\le f (x, y)\) for every sequence \(\{x_n\}\) which converges weakly to x and for each \(y\in C\).
For a proper, convex and lower semicontinuous function \(g: C\rightarrow (-\infty ,+\infty ]\) and \(\lambda >0, \) the proximal mapping of g associated with \(\lambda \) is defined by
$$\begin{aligned} prox_{\lambda g}(x)=argmin\left\{ \lambda g(y)+\frac{1}{2}\Vert x-y\Vert ^2, \ y\in C\right\} , \quad x\in H. \end{aligned}$$
The following lemma is a property of the proximal mapping.
Lemma 2.1
[19] For all \(x\in H, y \in C \) and \(\lambda > 0,\) the following inequality holds:
$$\begin{aligned} \lambda \left( g(y)-g(prox_{\lambda g}(x))\right) \ge \langle x-prox_{\lambda g}(x), y-prox_{\lambda g}(x)\rangle . \end{aligned}$$
Remark 2.1
From Lemma 2.1, we note that if \(x=prox_{\lambda g}(x)\), then
$$\begin{aligned} g(y)\ge g(x), \quad \forall y\in C, \end{aligned}$$
that is, x minimizes g over C.
Lemma 2.2
Let \(\delta \in (0,+\infty ) \) and \(x,\ y\in H\). Then
$$\begin{aligned} \Vert (1+\delta )x-\delta y\Vert ^2=(1+\delta )\Vert x\Vert ^2-\delta \Vert y\Vert ^2+\delta (1+\delta )\Vert x-y\Vert ^2. \end{aligned}$$
Lemma 2.3
Let \(\{a_n\},\) \(\{b_n\}\) be two nonnegative real sequences such that \(\exists N>0, \ \forall n>N,\) \(a_{n+1}\le a_n-b_n\). Then \(\{a_n\}\) is bounded, the limit of \(\{a_n\}\) exists, and \(\lim \nolimits _{n\rightarrow \infty }b_n=0.\)
Lemma 2.4
Let \(\{{x_n}\}\) be a sequence in H such that \(x_n\rightharpoonup x \). Then
$$\begin{aligned} \liminf _{n\rightarrow \infty }\Vert x_n-x\Vert <\liminf _{n\rightarrow \infty }\Vert x_n-y\Vert , \quad \forall y\ne x. \end{aligned}$$
For a closed and convex set \(K\subseteq H\), the (metric) projection \(P_K: H\longrightarrow K\) is defined, for all \(x\in H\), by \(P_K(x)=argmin\{{\Vert y-x \Vert \mid y\in K }\}\).
Lemma 2.5
Let C be a nonempty, closed and convex set in H and \(x \in H\). Then
$$\begin{aligned} \langle x-P_C(x), y-P_C(x)\rangle \le 0, \quad \forall y\in C. \end{aligned}$$
3 Algorithm and its convergence
In this section, we propose an iterative algorithm for solving the equilibrium problem (1). The algorithm is designed as follows:
Algorithm 3.1
(Step 0) Choose \(\lambda _1>0\), \(x_{0}, y_0, y_1\in C,\) \(\mu \in (0, 1) \), \(\alpha \in (0, 1) \), \(\theta \in (0, 1] \), \(\delta \in (\frac{\sqrt{1+4(\frac{\alpha }{2-\theta }+1-\alpha )}-1}{2},1).\)
(Step 1) Given the current iterates \(x_{n-1}\), \(y_{n-1}\), \(y_n\), compute
$$\begin{aligned} x_{n}= & {} (1-\delta )y_n+\delta x_{n-1}, \end{aligned}$$(7)
$$\begin{aligned} y_{n+1}= & {} argmin\left\{ \lambda _{n} f(y_n,y)+\frac{1}{2}\Vert x_{n}-y\Vert ^2, \ y\in C\right\} =prox_{\lambda _{n} f(y_n,.)}(x_{n}). \end{aligned}$$(8)
If \(y_{n+1} = x_n = y_n\), then stop: \(y_n \) is a solution. Otherwise, go to Step 2.
(Step 2) Compute
$$\begin{aligned} \lambda _{n+1}= {\left\{ \begin{array}{ll} \min \left\{ {\frac{\alpha \mu \theta (\Vert y_n-y_{n-1}\Vert ^2+\Vert y_{n+1}-y_n\Vert ^2)}{4\delta (f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1}))},\lambda _n }\right\} , &{} \text {if } f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1})>0, \\ \lambda _n, &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$(9)
Set \(n := n + 1\) and return to Step 1.
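For concreteness, the update (9) can be transcribed directly into Python; `f` below stands for the bifunction and the remaining arguments follow the notation of Algorithm 3.1 (a sketch, not the authors' code):

```python
import numpy as np

def next_lambda(f, y_prev, y_cur, y_next, lam, alpha, mu, theta, delta):
    """One application of the step-size rule (9)."""
    denom = f(y_prev, y_next) - f(y_prev, y_cur) - f(y_cur, y_next)
    if denom > 0:
        num = alpha * mu * theta * (
            np.dot(y_cur - y_prev, y_cur - y_prev)
            + np.dot(y_next - y_cur, y_next - y_cur))
        return min(num / (4 * delta * denom), lam)
    return lam  # otherwise keep the previous step size
```

By construction the resulting sequence \(\{\lambda _n\}\) is nonincreasing, and Lemma 3.1 shows it stays bounded away from zero.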
Remark 3.1
Under hypotheses (C1) and (C3), from Lemma 2.1 and Remark 2.1, we obtain that if Algorithm 3.1 terminates at some iterate, i.e., \(y_{n+1} = x_n = y_n\), then \(y_n\in EP(f).\)
Lemma 3.1
The sequence \( \{\lambda _n\} \) generated by Algorithm 3.1 is a monotonically decreasing sequence with lower bound \( \min \{{\frac{\alpha \mu \theta }{4\delta \max \{c_1,c_2\}},\lambda _1 }\}\).
Proof
It is easily checked that \( \{\lambda _n\} \) is a monotonically decreasing sequence. Since f is a Lipschitz-type bifunction with constants \(c_1\) and \(c_2\), in the case of \(f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1})>0,\) we have
It is clear that the sequence \( \{\lambda _n\} \) has the lower bound \( \min \{{\frac{\alpha \mu \theta }{4\delta \max \{c_1,c_2\}},\lambda _1 }\}\). \(\square \)
Remark 3.2
The limit of \( \{\lambda _n\} \) exists and we denote \(\lambda =\lim \nolimits _{n\rightarrow \infty }\lambda _n.\) It is obvious that \(\lambda >0\). If \(\lambda _1\le \frac{\alpha \mu \theta }{4\delta \max \{c_1,c_2\}}, \) then \(\{\lambda _n\}\) is a constant sequence. The following lemma plays a crucial role in the proof of Theorem 3.1.
Lemma 3.2
Assume that conditions (C1), (C2) and (C3) hold and \(EP(f)\ne \emptyset \). Let \(\{x_n\}\) and \(\{y_n\}\) be sequences generated by Algorithm 3.1. Then \(\{x_n\}\) and \(\{y_n\}\) are bounded.
Proof
Since \(y_{n+1}=prox_{\lambda _n f(y_n,.)}(x_n),\) by Lemma 2.1 we get
Let \(u\in EP(f).\) Substituting \(y=u\) into the last inequality, we have
As \(u\in EP(f),\) we obtain \(f(u,y_n)\ge 0.\) Thus \(f(y_n,u)\le 0\) because of the pseudomonotonicity of f. Hence, from (12) and \(\lambda _n>0\), we obtain
Since \(y_{n}=prox_{\lambda _{n-1} f(y_{n-1},.)}(x_{n-1}),\) we get
In particular, substituting \(y=y_{n+1}\) into the last inequality, we have
Since \(x_{n}=(1-\delta )y_n+\delta x_{n-1}\), we obtain \(y_n=\frac{1}{1-\delta }x_{n}-\frac{\delta }{1-\delta } x_{n-1} \). Hence,
Combining (15), (16) and \(\lambda _{n}>0\), we have
That is
By definition of \(\lambda _n\) and (19), we obtain
In the last inequality, in the case of \(f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1})\le 0,\) it is obvious that
Since
we have that \(\exists N\ge 0,\) such that \(\forall n\ge N,\)\(\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }-1>0 \), \(0<\lambda _n\frac{\mu }{\lambda _{n+1}}<1\) and \(\alpha <\frac{\lambda _n}{\lambda _{n-1}}\).
From the relation \(y_{n+1}=\frac{1}{1-\delta }x_{n+1}-\frac{\delta }{1-\delta } x_{n} \), by Lemma 2.2 and (16), we have
Also, from \( \frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }-1\le \frac{\lambda _{n-1}}{\lambda _{n-1}}\frac{1}{\delta }-1= \frac{1}{\delta }-1\), (20), (21) and (22), it follows that \(\forall n\ge N,\)
Thus,
For \(n\ge N,\) let
Combining (24), (25) and \(-2\langle x_{n}-y_{n+1},y_{n+1}-y_{n}\rangle \le \eta \Vert x_{n}-y_{n+1}\Vert ^2+\frac{1}{\eta }\Vert y_{n}-y_{n+1}\Vert ^2 \), we have
Since \(\delta \in (\frac{\sqrt{1+4(\frac{\alpha }{2-\theta }+1-\alpha )}-1}{2},1),\) we obtain \(\frac{1}{\delta }-1-\delta -\frac{\alpha }{\delta }+\frac{\alpha \eta }{\delta }<0.\)
For \(n\ge N,\) let
Then (26) can be written as \(a_{n+1}\le a_n-b_n\), \(\forall n\ge N.\) From Lemma 2.3, we can conclude that \(\{a_n\}\) is bounded, \(\lim \nolimits _{n\rightarrow \infty }b_n=0\) and the limit of \(\{a_n\}\) exists. By definition of \(b_n\), we can show that \(\lim \nolimits _{n\rightarrow \infty }\Vert y_{n+1}-x_{n}\Vert =0.\) From the relation (16), \(\Vert y_n-y_{n-1}\Vert \le \Vert y_n-x_{n}\Vert +\Vert x_n-y_{n-1}\Vert \) and \(\Vert x_n-y_{n-1}\Vert \le \Vert x_n-x_{n-1}\Vert +\Vert x_{n-1}-y_{n-1}\Vert \), we get
Also, we obtain \(\lim \nolimits _{n\rightarrow \infty }a_n=\lim \nolimits _{n\rightarrow \infty }\frac{1}{1-\delta }\Vert x_n-u\Vert ^2. \) This implies that the sequence \(\{x_n\}\) is bounded and so \(\{y_n\}\) is bounded. That is the desired result. \(\square \)
Theorem 3.1
Assume that \((C1){-}(C4)\) and \( EP(f)\ne \emptyset \) hold. Then the sequences \(\{x_n\}\) and \(\{y_n\}\) generated by Algorithm 3.1 converge weakly to a solution of the equilibrium problem.
Proof
By Lemma 3.2, the sequence \(\{x_n\}\) is bounded and there exists a subsequence \(\{x_{n_k}\}\) that converges weakly to some \(x^*\in H\). Then \(y_{n_k}\rightharpoonup x^* \), \(y_{{n_k}+1}\rightharpoonup x^* \) and \(x^*\in C.\) From the relation (11), we have
Since f satisfies the Lipschitz-type condition on C, we have
From the relations (17) and (30), it follows that
Combining the relations (29) and (31), it follows that, for all \(y \in C\),
Letting \(k\rightarrow \infty \), using the facts that \(\lim \nolimits _{k\rightarrow \infty }\Vert y_{n_k}-x_{n_k}\Vert =\lim \nolimits _{k\rightarrow \infty }\Vert x_{n_k}-y_{{n_k}+1}\Vert =\lim \nolimits _{k\rightarrow \infty }\Vert y_{n_k}-y_{{n_k}-1}\Vert =0\), \(\{x_{n}\}\) is bounded, \(\lim \nolimits _{n\rightarrow \infty }\lambda _{n}=\lambda >0\) and hypothesis (C4), we obtain \(f(x^*,y)\ge 0, \ \ \forall y\in C.\) That is, \(x^*\in EP(f)\). Next we prove that \(x_n\rightharpoonup x^*\). Assume that \(\{x_n\}\) has at least two weak cluster points \(x^*\in EP(f)\) and \({\bar{x}}\in EP(f)\) such that \(x^*\ne {\bar{x}}\), and let \(\{x_{n_i}\}\) be a subsequence such that \(x_{n_i}\rightharpoonup {\bar{x}}\) as \(i\rightarrow \infty \), noting the fact that \( \forall u\in EP(f)\),
By Lemma 2.4, we get
This is impossible. Hence we deduce that \(x_n\rightharpoonup x^*\). Since \(\lim \nolimits _{n\rightarrow \infty }\Vert x_n-y_n\Vert =0\), we have \(y_n\rightharpoonup x^*\). That is the desired result. \(\square \)
Next, we prove that Algorithm 3.1 converges strongly to the solution of (1) under a strong pseudomonotonicity assumption on the bifunction f.
Theorem 3.2
Assume that \((C1')\), (C2), (C3) and \( EP(f)\ne \emptyset \) hold. Then the sequences \(\{x_n\}\) and \(\{y_n\}\) generated by Algorithm 3.1 converge strongly to the unique solution u of the equilibrium problem.
Proof
The strong pseudomonotonicity of the bifunction f implies that (1) has a unique solution, which we denote by u. Since \(y_n\in C\), we have \(f(u,y_n)\ge 0. \) As f is strongly pseudomonotone, we get \(f(y_n,u)\le -\gamma \Vert y_n-u\Vert ^2. \) Hence, from (12) and \(\lambda _n>0\), we have
Adding (35) and (17), we obtain
Moreover, by Lemma 3.1, Remark 3.2 and (36), we also have
By definition of \(\lambda _n\) and (37), we obtain
Using (38) and the same techniques as in the proof of (21)–(24), we have \(\exists N\ge 0,\) such that \(\forall n\ge N,\)
For \(n\ge N,\) let
Using (40) and the same argument as in the proof of (26), we obtain \(a_{n+1}\le a_n-b_n\), \(\forall n\ge N.\) From Lemma 2.3 and the definition of \(b_n\), we conclude that \(\{a_n\}\) is bounded and \(\lim \nolimits _{n\rightarrow \infty }b_n=0\). Using \(\frac{1}{\delta }-1-\delta -\frac{\alpha }{\delta }+\frac{\alpha \eta }{\delta }<0\) and (28), we have
The proof is complete. \(\square \)
4 The case of variational inequalities
Let \(f(x, y)=\langle F(x),y-x\rangle , \ \forall x, y\in C,\) where \(F : C \rightarrow H\) is a mapping. Then the equilibrium problem becomes the variational inequality. That is, find \(x^*\in C\) such that
Moreover, we have \(prox_{\lambda _n f(y_n,.)}(x_n)=P_C(x_{n}-\lambda _n F(y_n)). \) The solution set of (42) is denoted by VI(F, C). It is well known that \(x^*\in VI(F,C)\) if and only if it satisfies the following projection equation
$$\begin{aligned} x^*=P_C(x^*-\lambda F(x^*)), \end{aligned}$$(43)
where \(\lambda \) is any positive real number. For solving pseudomonotone variational inequality, we propose the following method.
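The identity \(prox_{\lambda _n f(y_n,.)}(x_n)=P_C(x_{n}-\lambda _n F(y_n))\) holds because for \(f(x,y)=\langle F(x),y-x\rangle \) the two objectives differ only by terms constant in y. A small numerical check, as an illustration with \(C\) the nonnegative orthant (where the projection is a componentwise clip) and an arbitrary test mapping:

```python
import numpy as np

# For f(x, y) = <F(x), y - x>, minimising
#     lam * <F(y_n), y - y_n> + 0.5 * ||x_n - y||^2   over C
# is equivalent to projecting x_n - lam * F(y_n) onto C.  We verify the
# first-order optimality condition of the subproblem at the clipped point:
#     <lam * F(y_n) + p - x_n, y - p> >= 0   for all y in C = R^2_+.
rng = np.random.default_rng(1)
M = np.array([[2.0, 1.0], [1.0, 3.0]])
F = lambda x: M @ x                      # an arbitrary test mapping
x_n, y_n, lam = rng.normal(size=2), rng.normal(size=2), 0.5

p = np.maximum(x_n - lam * F(y_n), 0.0)  # candidate: P_C(x_n - lam * F(y_n))
g = lam * F(y_n) + p - x_n               # gradient of the subproblem at p
ok = all(np.dot(g, y - p) >= -1e-10 for y in rng.uniform(0, 5, size=(200, 2)))
print(ok)  # True
```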
Algorithm 4.1
(Step 0) Choose \(\lambda _1>0\), \(x_{0}, y_0, y_1\in C,\)\(\mu \in (0, 1) \), \(\alpha \in (0, 1) \), \(\theta \in (0, 1] \), \(\delta \in (\frac{\sqrt{1+4(\frac{\alpha }{2-\theta }+1-\alpha )}-1}{2},1).\)
(Step 1) Given the current iterates \(x_{n-1}\), \(y_{n-1}\), \(y_n\), compute
$$\begin{aligned} x_{n}= & {} (1-\delta )y_n+\delta x_{n-1}, \\ y_{n+1}= & {} P_C(x_{n}-\lambda _n F(y_n)). \end{aligned}$$
If \(y_{n+1} = x_n = y_n\) (or \(F(y_n)=0\)), then stop: \(y_n \) is a solution. Otherwise, go to Step 2.
(Step 2) Compute
$$\begin{aligned} \lambda _{n+1}= {\left\{ \begin{array}{ll} \min \left\{ {\frac{\alpha \mu \theta (\Vert y_n-y_{n-1}\Vert ^2+\Vert y_{n+1}-y_n\Vert ^2)}{4\delta \langle F(y_{n-1})-F(y_{n}), y_{n+1}-y_n\rangle },\lambda _n }\right\} , &{} \text {if } \langle F(y_{n-1})-F(y_{n}), y_{n+1}-y_n\rangle >0, \\ \lambda _n, &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
Set \(n := n + 1\) and return to Step 1.
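A minimal sketch of Algorithm 4.1 in Python, under the assumption that \(C\) is the nonnegative orthant so that \(P_C\) is a componentwise clip; with \(f(x,y)=\langle F(x),y-x\rangle \), the quantity tested in rule (9) reduces to \(\langle F(y_{n-1})-F(y_n), y_{n+1}-y_n\rangle \):

```python
import numpy as np

def algorithm_4_1(F, x0, n_iter, lam=0.4, alpha=0.98, mu=0.98,
                  theta=1.0, delta=0.62):
    """Sketch of Algorithm 4.1 on C = R^n_+ (P_C = componentwise clip)."""
    x_prev = x0.astype(float).copy()
    y_prev, y = x_prev.copy(), x_prev.copy()
    for _ in range(n_iter):
        x = (1 - delta) * y + delta * x_prev             # x_n, cf. (7)
        y_next = np.maximum(x - lam * F(y), 0.0)         # y_{n+1} = P_C(...)
        denom = np.dot(F(y_prev) - F(y), y_next - y)     # rule (9) for VIs
        if denom > 0:
            num = alpha * mu * theta * (np.dot(y - y_prev, y - y_prev)
                                        + np.dot(y_next - y, y_next - y))
            lam = min(num / (4 * delta * denom), lam)
        x_prev, y_prev, y = x, y, y_next
    return y

# Toy strongly monotone test (our own, not the paper's): F(x) = x - b,
# whose solution on R^n_+ is the projection P_C(b) = max(b, 0).
b = np.array([1.0, -2.0, 3.0])
sol = algorithm_4_1(lambda x: x - b, np.ones(3), 2000)
```

On this toy problem the iterates approach \(\max (b,0)=(1,0,3)\).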
Remark 4.1
If \(F(y_n)=0\), we have \(y_{n}=P_C(y_{n}-\lambda F(y_n))\). Thus \(y_n\in VI(F,C)\) follows directly from (43).
Recall that the mapping F is Lipschitz continuous on C if there exists a constant \(L>0\) such that
$$\begin{aligned} \Vert F(x)-F(y)\Vert \le L\Vert x-y\Vert , \quad \forall x, y\in C. \end{aligned}$$
If F is Lipschitz continuous and pseudomonotone, then conditions \((C1){-}(C3)\) hold for f with \(c_1=c_2=\frac{L}{2}\). The following conclusion then follows from Lemma 3.2.
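The claim \(c_1=c_2=\frac{L}{2}\) follows from \(f(x,z)-f(x,y)-f(y,z)=\langle F(x)-F(y), z-y\rangle \le L\Vert x-y\Vert \Vert z-y\Vert \le \frac{L}{2}(\Vert x-y\Vert ^2+\Vert z-y\Vert ^2)\). A numerical sanity check with a random linear mapping (an illustration, not from the paper):

```python
import numpy as np

# Check the Lipschitz-type condition (C2) for f(x, y) = <F(x), y - x>
# with c1 = c2 = L / 2, where F(x) = M x and L = ||M||_2.
rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
F = lambda x: M @ x
L = np.linalg.norm(M, 2)                 # Lipschitz constant of x -> M x
f = lambda x, y: np.dot(F(x), y - x)

ok = True
for _ in range(1000):
    x, y, z = rng.normal(size=(3, 4))
    lhs = f(x, y) + f(y, z)
    rhs = f(x, z) - (L / 2) * (np.dot(x - y, x - y) + np.dot(y - z, y - z))
    ok = ok and lhs >= rhs - 1e-9
print(ok)  # True
```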
Lemma 4.1
Let \(\{x_n\}\) and \(\{y_n\}\) be sequences generated by Algorithm 4.1 and \(VI(F,C)\ne \emptyset \). Then \(\{x_n\}\) and \(\{y_n\}\) are bounded.
The next statement is classical.
Lemma 4.2
[20] Assume that \(F:C\rightarrow H\) is a continuous and pseudomonotone mapping. Then \(x^*\in VI(F,C)\) if and only if \(x^*\) is a solution of the following problem
$$\begin{aligned} \langle F(y), y-x^*\rangle \ge 0, \quad \forall y\in C. \end{aligned}$$
We analyze the finite- and infinite-dimensional cases separately.
Theorem 4.1
Let H be a finite dimensional real Hilbert space. Assume that F is a pseudomonotone Lipschitz continuous mapping on C and VI(F, C) is nonempty. Let \(\{x_n\}\) and \(\{y_n\}\) be two sequences generated by Algorithm 4.1. Then \(\{x_n\}\) and \(\{y_n\}\) converge to the same point \(x^*\in VI(F,C)\).
Proof
Since the sequence \(\{x_n\}\) is bounded, there exists a subsequence \(\{x_{n_k}\}\) that converges to some \(x^*\in H\). From the relation (28), we have \(y_{n_k}\rightarrow x^* \), \(y_{{n_k}+1}\rightarrow x^* \) and \(x^*\in C.\) Noting the fact that
$$\begin{aligned} y_{n_k+1}=P_C(x_{n_k}-\lambda _{n_k} F(y_{n_k})). \end{aligned}$$
By the continuity of F and the projection, we get
We deduce from (43) that \(x^*\in VI(F,C)\). By using (33), we obtain that \(\lim \nolimits _{n\rightarrow \infty }\Vert x_n-x^*\Vert \) exists. Combining \(\lim \nolimits _{k\rightarrow \infty }x_{n_k}=x^*\) and \(\lim \nolimits _{n\rightarrow \infty }\Vert x_n-y_n\Vert =0\), we have \(\lim \nolimits _{n\rightarrow \infty }x_n=\lim \nolimits _{n\rightarrow \infty }y_n=x^*\). That is the desired result. \(\square \)
Inspired by [21], we give the proof of the following theorem.
Theorem 4.2
Assume that F is pseudomonotone on an infinite-dimensional Hilbert space H, sequentially weakly continuous and Lipschitz continuous on C, and that VI(F, C) is nonempty. Let \(\{x_n\}\) and \(\{y_n\}\) be two sequences generated by Algorithm 4.1. Then \(\{x_n\}\) and \(\{y_n\}\) converge weakly to the same point \(x^*\in VI(F,C)\).
Proof
From Lemma 4.1, the sequences \(\{x_n\}\) and \(\{y_n\}\) are bounded. Hence there exists a subsequence \(\{x_{n_k}\}\) of \(\{x_{n}\}\) that converges weakly to some \(x^*\in H\). Then \(y_{n_k}\rightharpoonup x^* \) and \(x^*\in C.\) Next we prove \(x^*\in VI(F,C)\). Since \(y_{n_k+1}=P_C(x_{n_k}-\lambda _{n_k} F(y_{n_k}))\), by Lemma 2.5, we have
That is
Therefore, we get
Fixing \(z\in C\), let \(k\rightarrow \infty \), using the facts (28), \(\{y_{n}\}\) is bounded and \(\lim \nolimits _{k\rightarrow \infty }\lambda _{n_k}=\lambda >0,\) we obtain
We choose a decreasing positive sequence \(\{\varepsilon _k\}\) such that \(\lim \nolimits _{k\rightarrow \infty }\varepsilon _k=0\). By definition of the lower limit, for each \(\varepsilon _k\), we denote by \(m_k\) the smallest positive integer such that
As \(\{\varepsilon _k\}\) is decreasing, it is easy to see that the sequence \(\{m_k\}\) is increasing. From Remark 4.1, for each k, \(F(y_{n_{m_k}})\ne 0.\) Let \(z_{n_{m_k}}=\frac{F(y_{n_{m_k}})}{\Vert F(y_{n_{m_k}})\Vert ^2}\). Then we get \(\langle F(y_{n_{m_k}}), z_{n_{m_k}}\rangle =1\) for each k. Moreover, from (51), we have
By the definition of pseudomonotonicity, we obtain
Since \(\{y_{n_k}\}\) converges weakly to \(x^*\in C\) and F is sequentially weakly continuous on C, we have \(\{F(y_{n_k})\}\) converges weakly to \(F(x^*)\). We can suppose that \(F(x^*)\ne 0\) (otherwise, \(x^*\) is a solution). Since the norm mapping is sequentially weakly lower semicontinuous, we have
As \(\{y_{n_{m_k}}\}\subset \{y_{n_k}\} \) and \(\lim \nolimits _{k\rightarrow \infty }\varepsilon _k=0\), we have
Letting \(k\rightarrow \infty \) in (53), we get
By Lemma 4.2, we obtain \(x^*\in VI(F,C)\) and as in the proof of Theorem 3.1, we have \(x_n\rightharpoonup x^*\) and \(y_n\rightharpoonup x^*\). That is the desired result. \(\square \)
Remark 4.2
When F is monotone, as in [22, 23], it is not necessary to impose the sequential weak continuity of F.
Now applying Theorem 3.2 with variational inequalities, we have the following result.
Theorem 4.3
Assume that F is strongly pseudomonotone on an infinite-dimensional Hilbert space H, Lipschitz continuous on C, and that VI(F, C) is nonempty. Let \(\{x_n\}\) and \(\{y_n\}\) be two sequences generated by Algorithm 4.1. Then \(\{x_n\}\) and \(\{y_n\}\) converge strongly to the unique solution \(u\in VI(F,C)\).
5 Numerical experiments
In this section, we provide numerical experiments to illustrate our algorithms and compare them with existing algorithms in [15, 23, 24]. First, we compare Algorithm 3.1 with the Algorithm (3) (Algorithm 3.1 in [15]). Then we compare Algorithm 4.1 with Algorithm A in [23], Algorithm 2.1 in [24] and Algorithm 3.2 in [24]. We report the number of iterations (iter.) and the computing time (time) measured in seconds for all the tests. The termination criteria are the following
For Algorithm 3.1, we take \(\alpha =\mu =0.98\), \(\theta =1 (\delta =0.62)\) and \(\theta =0.75 (\delta =0.53)\). For Algorithm A in [23], we use \(\alpha =\lambda _0=0.4\) and \(\delta =1.001\). For Algorithm 3.2 in [24], we choose \(P=I\), \(\alpha _{-1}=1\), \(\theta =1.5\), \(\rho =0.1\) and \(\beta =0.3 \). We take \(\varepsilon =10^{-6}\) for all algorithms.
Problem 1
We consider the equilibrium problem for the following bifunction \(f : H \times H \rightarrow {\mathbb {R}}\), which comes from the Nash–Cournot equilibrium model in [9,10,11,12,13,14,15]:
$$\begin{aligned} f(x,y)=\langle Px+Qy+q, y-x\rangle , \end{aligned}$$
where \(q \in {\mathbb {R}}^m\) is chosen randomly with its elements in \([-m, m]\), and the matrices P and Q are two square matrices of order m such that Q is symmetric positive semidefinite and \(Q-P\) is negative semidefinite. In this case, the bifunction f satisfies \((C1){-}(C4)\) with the Lipschitz-type constants \(c_1 = c_2 = \frac{\Vert P-Q\Vert }{2}\), see [9, Lemma 6.2]. For Algorithm 3.1, we take \(\lambda _1=\frac{1}{2c_1}\). For Algorithm (3), we take \(\lambda =\frac{\varphi }{4c_1}\).
For the numerical experiments, we suppose that the feasible set \(C\subset {\mathbb {R}}^m \) has the form of
where \(m = 10, 100, 500\). We take \(y_{1}=x_0=y_0=(1,\ldots ,1)\) for all algorithms. For every m, as shown in Table 1, we have generated two random samples with different choices of P, Q and q. Table 1 shows that our algorithm may perform better, even when the Lipschitz constants are known.
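One way to generate data with the stated properties (the paper's exact generator is not specified, so this construction is an assumption): taking \(Q=AA^T\) makes Q symmetric positive semidefinite, and \(P=Q+BB^T\) makes \(Q-P=-BB^T\) negative semidefinite.

```python
import numpy as np

# Hypothetical generator for Problem 1 data: Q symmetric PSD,
# Q - P negative semidefinite, q uniform in [-m, m].
rng = np.random.default_rng(3)
m = 10
A = rng.normal(size=(m, m))
Q = A @ A.T                              # symmetric positive semidefinite
B = rng.normal(size=(m, m))
P = Q + B @ B.T                          # Q - P = -B B^T is negative semidefinite
q = rng.uniform(-m, m, size=m)

c1 = c2 = np.linalg.norm(P - Q, 2) / 2   # Lipschitz-type constants, cf. [9]
lam1 = 1 / (2 * c1)                      # step size used for Algorithm 3.1
```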
Problem 2
The second problem is the HpHard problem, where we choose \(F(x)=Mx+q\) with \(q\in {\mathbb {R}}^n\) and \(M=NN^T+S+D\), where every entry of the \(n\times n\) matrix N and of the \(n\times n\) skew-symmetric matrix S is uniformly generated from \((-5,5)\), every diagonal entry of the \(n\times n\) diagonal matrix D is uniformly generated from (0, 0.3) (so M is positive definite), and every entry of q is uniformly generated from \((-500,0)\). The feasible set is \({\mathbb {R}}^n_+\). This problem was considered in [24]. For all tests, we take \(y_{1}=x_0=y_0=(1,1,\ldots ,1)\). For Algorithm 4.1, we choose \(\lambda _1=0.4\). For Algorithm 2.1 in [24], we take \(P=(I+M^T)(I+M)\) and \(\theta =0.7\). For every n, as shown in Table 2, we have generated three random samples with different choices of M and q.
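The HpHard data can be generated exactly as described; since S is skew-symmetric, the symmetric part of M is \(NN^T+D\), which is positive definite. A sketch of the generator:

```python
import numpy as np

# Generate M = N N^T + S + D and q as in Problem 2.
rng = np.random.default_rng(4)
n = 100
N = rng.uniform(-5, 5, size=(n, n))
S = np.triu(rng.uniform(-5, 5, size=(n, n)), 1)
S = S - S.T                                  # skew-symmetric, entries in (-5, 5)
D = np.diag(rng.uniform(0, 0.3, size=n))     # positive diagonal entries
M = N @ N.T + S + D
q = rng.uniform(-500, 0, size=n)

# M is positive definite iff its symmetric part N N^T + D is.
min_eig = np.linalg.eigvalsh((M + M.T) / 2).min()
```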
Problem 3
The third problem was considered in [23, 25], where
The feasible set is \(C=R_+^m\). We take \(\lambda _{1}=0.4\) for Algorithm 4.1. For all tests, we take \(y_{1}=x_0=y_0=(0,0,\ldots ,0)\). The results are summarized in Table 3.
Problem 4
The Kojima–Shindo nonlinear complementarity problem (NCP) was considered in [23, 25, 26], where \(n = 4\) and the mapping F is defined by
The feasible set is \(C = \{x\in {\mathbb {R}}^4_+ \mid x_1 + x_2 + x_3 + x_4 = 4\}.\) We choose as starting points \(y_{1}=x_0=y_0 = (1,1,1,1)\) and \(y_{1}=x_0=y_0 = (2,0,0,2).\) We take \(\lambda _{1}=0.8\) for Algorithm 4.1. Tables 2, 3 and 4 illustrate that Algorithm 4.1 may work better. As in the previous experiments, Algorithms 3.1 and 4.1 may perform better when choosing \(\theta =0.75\).
6 Conclusions
In this work, we considered equilibrium problems involving Lipschitz-type and pseudomonotone bifunctions whose Lipschitz-type constants are unknown. We modified the gradient method with a new step size, and a weak and a strong convergence theorem were proved for the sequences generated by the algorithm. The numerical experiments confirm the computational effectiveness of the proposed algorithm.
References
Fan, K.: A Minimax Inequality and Applications, Inequalities III, pp. 103–113. Academic Press, New York (1972)
Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problem. Springer, New York (2003)
Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. Rev. Fr. Autom. Inform. Rech. Opér. Anal. Numér. 4, 154–159 (1970)
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
Konnov, I.V.: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2000)
Flam, S.D., Antipin, A.S.: Equilibrium programming and proximal-like algorithms. Math. Program. 78, 29–41 (1997)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Ekonomika i Matematicheskie Metody 12, 747–756 (1976)
Quoc, T.D., Muu, L.D., Hien, N.V.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–776 (2008)
Dang, V.H., Duong, V.T.: New extragradient-like algorithms for strongly pseudomonotone variational inequalities. J. Glob. Optim. 70, 385–399 (2018)
Dang, V.H., Yeol, J.C., Yibin, X.: Modified extragradient algorithms for solving equilibrium problems. Optimization 67, 2003–2029 (2018)
Dang, V.H.: Halpern subgradient extragradient method extended to equilibrium problems. RACSAM 111, 823–840 (2017)
Dang, V.H.: Convergence analysis of a new algorithm for strongly pseudomontone equilibrium problems. Numer. Algorithms 77, 983–1001 (2018)
Dang, V.H.: New inertial algorithm for a class of equilibrium problems. Numer. Algorithms 80, 1413–1436 (2019)
Nguyen, T.V.: Golden ratio algorithms for solving equilibrium problems in Hilbert spaces. arXiv:1804.01829
Mastroeni, G.: On auxiliary principle for equilibrium problems. Pubblicazioni del Dipartimento di Matematica dell'Università di Pisa 3, 1244–1258 (2000)
Daniele, P., Giannessi, F., Maugeri, A.: Equilibrium Problems and Variational Models. Kluwer, Alphen aan den Rijn (2003)
Malitsky, Y.: Golden ratio algorithms for variational inequalities. Math. Program. (2019). https://doi.org/10.1007/s10107-019-01416-w
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)
Cottle, R.W., Yao, J.C.: Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 75, 281–295 (1992)
Phan, T.V.: On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl. 176, 399–409 (2018)
Yang, J., Liu, H.W., Liu, Z.X.: Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 67, 2247–2258 (2018)
Yang, J., Liu, H.W.: A modified projected gradient method for monotone variational inequalities. J. Optim. Theory Appl. 179(1), 197–211 (2018)
Solodov, M.V., Tseng, P.: Modified projection-type methods for monotone variational inequalities. SIAM J. Control Optim. 34(5), 1814–1830 (1996)
Malitsky, Yu.V.: Projected reflected gradient methods for variational inequalities. SIAM J. Optim. 25(1), 502–520 (2015)
Yang, J., Liu, H.W.: Strong convergence result for solving monotone variational inequalities in Hilbert space. Numer. Algorithms 80, 741–752 (2019)
Acknowledgements
The authors would like to thank the Associate Editor and the anonymous referees for their valuable comments and suggestions which helped to improve the original version of this paper.
Cite this article
Yang, J., Liu, H.: A self-adaptive method for pseudomonotone equilibrium problems and variational inequalities. Comput. Optim. Appl. 75, 423–440 (2020). https://doi.org/10.1007/s10589-019-00156-z