Abstract
In this paper, we propose two algorithms for finding the solution of a bilevel equilibrium problem in a real Hilbert space. Under sufficient assumptions on the bifunctions, involving pseudomonotonicity and Lipschitz-type conditions, we obtain the strong convergence of the iterative sequence generated by the first algorithm. Furthermore, the strong convergence of the sequence generated by the second algorithm is obtained without a Lipschitz-type condition.
1 Introduction
Let C be a nonempty closed convex subset of a real Hilbert space H, and let f and g be bifunctions from \(H\times H\) to \(\mathbb{R}\) such that \(f(x,x)=0\) and \(g(x,x)=0\) for all \(x\in H\). The equilibrium problem associated with g and C, denoted by \(EP(C,g)\), is: Find \(x^{*}\in C\) such that
$$g\bigl(x^{*},y\bigr)\geq0, \quad \forall y\in C. $$
(1)
The solution set of problem (1) is denoted by Ω. The equilibrium problem is important because it encompasses many problems arising in applied areas, such as the fixed point problem, the (generalized) Nash equilibrium problem in game theory, the saddle point problem, the variational inequality problem, the optimization problem, and others.
The basic method for solving the monotone equilibrium problem is the proximal method (see [14, 16, 19]). In 2008, Tran et al. [25] proposed an extragradient algorithm for solving the equilibrium problem, which solves a strongly convex minimization problem at each iteration. Furthermore, Hieu [9] introduced subgradient extragradient methods for pseudomonotone equilibrium problems; for other methods, see the details in [1, 8, 10, 15, 17, 22, 28].
In this paper, we consider the equilibrium problem whose constraint set is the solution set of another equilibrium problem (the bilevel equilibrium problem): Find \(x^{*}\in\varOmega\) such that
$$f\bigl(x^{*},y\bigr)\geq0, \quad \forall y\in\varOmega. $$
(2)
The solution set of problem (2) is denoted by \(\varOmega^{*}\). Bilevel equilibrium problems were introduced by Chadli et al. [4] in 2000. This class of problems is important and interesting because it generalizes many others, such as optimization problems over equilibrium constraints, variational inequalities over equilibrium constraints, hierarchical minimization problems, and complementarity problems. Furthermore, particular cases of the bilevel equilibrium problem can be applied to real-world models; for example, the variational inequality over the fixed point set of a firmly nonexpansive mapping was applied to the power control problem of CDMA networks by Iiduka [11]. For more on the relation of bilevel equilibrium problems to these particular cases, see [7, 12, 21].
Methods for solving bilevel equilibrium problems have been studied extensively by many authors. In 2010, Moudafi [20] introduced a simple proximal method and proved weak convergence to a solution of problem (2). In 2014, Quy [23] introduced an algorithm combining the proximal method with the Halpern method for solving bilevel monotone equilibrium and fixed point problems. For more details and the most recent works on methods for solving bilevel equilibrium problems, we refer the reader to [2, 5, 24]. These authors considered methods for monotone and pseudo-paramonotone equilibrium problems. If the bifunction satisfies only a more general monotonicity property, such as pseudomonotonicity, the above methods cannot be used for solving the bilevel equilibrium problem.
Inspired by the above works, in this paper we propose methods for finding the solution of bilevel equilibrium problems in which f is strongly monotone and g is pseudomonotone and Lipschitz-type continuous. First, we obtain a strongly convergent sequence by combining an extragradient subgradient method with the Halpern method. Second, we obtain a strongly convergent sequence without Lipschitz-type continuity of the bifunction g by combining an Armijo line search with the extragradient subgradient method.
2 Preliminaries
Let C be a nonempty closed convex subset of a real Hilbert space H. We write \(x_{n}\rightharpoonup x\) and \(x_{n}\rightarrow x\) for the weak convergence and the strong convergence of a sequence \(\{x_{n}\}\) to x, respectively. For every \(x\in H\), there exists a unique element \(P_{C}x\) defined by
$$P_{C}x=\operatorname{arg\,min}\bigl\{ \Vert y-x \Vert : y\in C\bigr\} . $$
It is also known that \(P_{C}\) is a nonexpansive mapping from H onto C, i.e., \(\|P_{C}(x)-P_{C}(y)\|\leq\|x-y\|\) \(\forall x,y\in H\). For every \(x\in H\) and \(y\in C\), we have
$$\bigl\langle x-P_{C}x, y-P_{C}x \bigr\rangle\leq0. $$
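These two properties of \(P_{C}\), nonexpansiveness and the variational characterization, can be checked numerically. Below is a small Python sketch using a box in \(\mathbb{R}^{2}\) as the closed convex set C (for a box, the projection is a coordinatewise clip); the function name and sample sizes are illustrative.

```python
import numpy as np

def project_box(x, lo=-5.0, hi=5.0):
    # Metric projection onto the box C = [lo, hi]^n (closed and convex),
    # computed coordinatewise.
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(scale=10.0, size=2)
    y = rng.normal(scale=10.0, size=2)
    px, py = project_box(x), project_box(y)
    # Nonexpansiveness: ||P_C x - P_C y|| <= ||x - y||
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
    c = rng.uniform(-5.0, 5.0, size=2)   # an arbitrary point of C
    # Variational characterization: <x - P_C x, c - P_C x> <= 0 for all c in C
    assert np.dot(x - px, c - px) <= 1e-12
print("projection properties verified")
```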
A bifunction \(\psi:H\times H \to\mathbb{R}\) is called:
-
(i)
β-strongly monotone on C if
$$\psi(x, y) + \psi(y, x) \leq-\beta \Vert x-y \Vert ^{2}\quad \forall x, y \in C; $$ -
(ii)
monotone on C if
$$\psi(x, y) + \psi(y, x) \leq0\quad \forall x, y\in C; $$ -
(iii)
pseudomonotone on C if
$$\psi(x,y)\geq0\Rightarrow\psi(y,x)\leq0, \quad \forall x, y\in C. $$
It is easy to check that every monotone bifunction is pseudomonotone. The converse does not hold in general: for example, let \(\phi(x,y)= \frac{2y-x}{1-x}\) for all \(x,y\in\mathbb{R}\); then ϕ is pseudomonotone on \(\mathbb{R}^{+}\setminus\{0\}\), but ϕ is not monotone on \(\mathbb{R}^{+}\setminus\{0\}\). Let \(\psi(x, \cdot)\) be convex for every \(x\in H\). For each fixed \(x\in H\), the subdifferential of \(\psi(x, \cdot)\) at x, denoted by \(\partial_{2}\psi(x, x)\), is defined by
$$\partial_{2}\psi(x,x)=\bigl\{ w\in H : \psi(x,y)-\psi(x,x)\geq\langle w, y-x\rangle\ \forall y\in H\bigr\} , $$
as studied in [13]. In this paper, we consider the bifunctions f and g under the following conditions.
Condition A
-
(A1)
\(f(x, \cdot)\) is convex, weakly lower semicontinuous and subdifferentiable on H for every fixed \(x \in H\).
-
(A2)
\(f (\cdot, y)\) is weakly upper semicontinuous on H for every fixed \(y\in H\).
-
(A3)
f is β-strongly monotone on H.
-
(A4)
For each \(x,y\in H\), there exists \(L>0\) such that
$$\Vert w-v \Vert \leq L \Vert x-y \Vert ,\quad \forall w\in \partial_{2}f(x,x),v\in \partial_{2}f(y,y). $$ -
(A5)
The function \(x \mapsto\partial_{2}f(x,x)\) is bounded on the bounded subsets of H.
Condition B
-
(B1)
\(g(x, \cdot)\) is convex, weakly lower semicontinuous, and subdifferentiable on H for every fixed \(x\in H\).
-
(B2)
\(g(\cdot, y)\) is weakly upper semicontinuous on H for every fixed \(y\in H\).
-
(B3)
g is pseudomonotone on C with respect to Ω, i.e.,
$$g \bigl(x, x^{*}\bigr)\leq0,\quad \forall x\in C, x^{*}\in \varOmega. $$ -
(B4)
g is Lipschitz-type continuous, i.e., there are two positive constants \(L_{1},L_{2}\) such that \(g(x,y)+g(y,z)\geq g(x,z)- L_{1}\|x-y\|^{2}-L_{2}\|y-z\|^{2}\), \(\forall x,y,z\in H\).
-
(B5)
g is jointly weakly continuous on \(H \times H\) in the sense that, if \(x, y \in H\) and \(\{x_{n}\}, \{y_{n}\} \subset H\) converge weakly to x and y, respectively, then \(g(x_{n}, y_{n}) \rightarrow g(x, y)\) as \(n\rightarrow+\infty\).
Example 2.1
Let \(f,g : \mathbb{R}\times\mathbb{R} \to\mathbb{R}\) be defined by \(f(x,y)= 5y^{2}-7x^{2}+2xy\) and \(g(x,y)=2y^{2}-7x^{2}+5xy\). It follows that f and g satisfy Condition A and Condition B, respectively.
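The claims of Example 2.1 can be verified directly: a short computation gives \(f(x,y)+f(y,x)=-2(x-y)^{2}\) (so f is 2-strongly monotone), \(g(x,y)+g(y,x)=-5(x-y)^{2}\leq0\) (so g is monotone, hence pseudomonotone), and \(g(x,y)+g(y,z)-g(x,z)=-5(y-x)(y-z)\), which yields the Lipschitz-type condition (B4) with \(L_{1}=L_{2}=\frac{5}{2}\). The following Python sketch checks these identities on a grid:

```python
import itertools
import numpy as np

f = lambda x, y: 5*y**2 - 7*x**2 + 2*x*y
g = lambda x, y: 2*y**2 - 7*x**2 + 5*x*y

pts = np.linspace(-10, 10, 41)
for x, y in itertools.product(pts, pts):
    # f is beta-strongly monotone with beta = 2: f(x,y) + f(y,x) = -2(x-y)^2
    assert abs(f(x, y) + f(y, x) + 2*(x - y)**2) < 1e-8
    # g is monotone (hence pseudomonotone): g(x,y) + g(y,x) = -5(x-y)^2 <= 0
    assert g(x, y) + g(y, x) <= 1e-8
for x, y, z in itertools.product(pts[::4], pts[::4], pts[::4]):
    # Lipschitz-type condition (B4) with L1 = L2 = 5/2, using
    # g(x,y) + g(y,z) - g(x,z) = -5(y-x)(y-z) >= -2.5(x-y)^2 - 2.5(y-z)^2
    assert g(x, y) + g(y, z) >= g(x, z) - 2.5*(x - y)**2 - 2.5*(y - z)**2 - 1e-8
print("Condition A/B checks passed")
```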
Lemma 2.2
([3, Propositions 3.1, 3.2])
If the bifunction g satisfies Assumptions (B1), (B2), and (B3), then the solution set Ω is closed and convex.
Remark 2.3
Let the bifunction f satisfy Condition A and the bifunction g satisfy Condition B. If \(\varOmega\neq\emptyset\), then the bilevel equilibrium problem (2) has a unique solution; see the details in [23].
Lemma 2.4
([6])
Let C be a nonempty closed convex subset of a real Hilbert space H and \(\phi:C\to\mathbb{R}\) be a convex, lower semicontinuous, and subdifferentiable function on C. Then \(x^{*}\) is a solution to the convex optimization problem \(\min\{\phi(x): x\in C\}\) if and only if \(0\in\partial\phi(x^{*})+N_{C}(x^{*})\), where \(\partial\phi(\cdot)\) denotes the subdifferential of ϕ and \(N_{C}(x^{*})\) is the normal cone of C at \(x^{*}\).
Lemma 2.5
([27])
Let \(\{a_{n}\}\) be a sequence of nonnegative real numbers, \(\{\alpha_{n}\}\) be a sequence in \((0,1)\), and \(\{\xi_{n}\}\) be a sequence in \(\mathbb{R}\) satisfying the condition
$$a_{n+1}\leq(1-\alpha_{n})a_{n}+\alpha_{n}\xi_{n},\quad n\geq0, $$
where \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\) and \(\limsup_{n\rightarrow\infty }\xi_{n}\leq0\). Then \(\lim_{n\rightarrow\infty}a_{n}=0\).
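The mechanism of Lemma 2.5 can be illustrated numerically. The choices \(\alpha_{n}=\xi_{n}=\frac{1}{n+1}\) below are for illustration only: they satisfy \(\sum_{n}\alpha_{n}=\infty\) and \(\limsup_{n}\xi_{n}\leq0\), so the recursion \(a_{n+1}\leq(1-\alpha_{n})a_{n}+\alpha_{n}\xi_{n}\) forces \(a_{n}\rightarrow0\).

```python
# Numerical illustration of Lemma 2.5 (Xu's lemma): run the recursion
# a_{n+1} = (1 - alpha_n) a_n + alpha_n xi_n with alpha_n = xi_n = 1/(n+1).
# One can check a_n ~ (a_1 + log n)/n, so a_n -> 0 despite xi_n > 0.
a = 10.0
for n in range(200000):
    alpha = 1.0 / (n + 1)
    xi = 1.0 / (n + 1)
    a = (1 - alpha) * a + alpha * xi
print(a)  # small, tending to 0 as the iteration count grows
```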
Lemma 2.6
([18])
Let \(\{a_{n}\}\) be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence \(\{a_{n_{j}}\}\) of \(\{a_{n}\}\) such that \(a_{n_{j}}< a_{n_{j}+1}\) for all \(j\in\mathbb{N}\).
Also consider the sequence of integers \(\{\tau(n)\}_{n\geq n_{0}}\) defined, for all \(n\geq n_{0}\), by
$$\tau(n)=\max\{k\leq n : a_{k}< a_{k+1}\}. $$
Then \(\{\tau(n)\}_{n\geq n_{0}}\) is a nondecreasing sequence verifying
$$\lim_{n\rightarrow\infty}\tau(n)=\infty, $$
and, for all \(n\geq n_{0}\), the following two estimates hold:
$$a_{\tau(n)}\leq a_{\tau(n)+1} \quad\text{and}\quad a_{n}\leq a_{\tau(n)+1}. $$
Lemma 2.7
Suppose that f is β-strongly monotone on H and satisfies (A4). Let \(0<\alpha<1\), \(0\leq\eta\leq1-\alpha\), and \(0< \mu<\frac{2\beta }{L^{2}}\). For each \(x,y\in H, w\in\partial_{2}f(x,x)\), and \(v\in\partial_{2}f(y,y)\), we have
where \(\tau=1-\sqrt{1-\mu(2\beta-\mu L^{2})}\in(0,1]\).
Proof
Let \(x,y\in H, w\in\partial_{2}f(x,x)\), and \(v\in\partial_{2}f(y,y)\). Thus
Since \(f:H\times H\to\mathbb{R}\) is β-strongly monotone, \(w\in\partial_{2}f(x,x)\), and \(v\in\partial_{2}f(y,y)\), we have
From (5) and (A4), we have
This implies that
By using (4) and (6), we can conclude that
where \(\tau=1-\sqrt{1-\mu(2\beta-\mu L^{2})}\in(0,1]\). □
3 The extragradient subgradient Halpern methods
In this section, we propose an algorithm for finding the solution of a bilevel equilibrium problem under the strong monotonicity of f and the pseudomonotonicity and Lipschitz-type continuity of g.
Algorithm 1
Initialization: Choose \(x_{0}\in H, 0<\mu< \frac{2\beta }{L^{2}}\), the sequences \(\{\alpha_{n}\}\subset(0,1), \{\eta_{n}\}\), and \(\{ \lambda_{n}\}\) are such that
Set \(n=0\) and go to Step 1.
Step 1. Compute
$$\begin{aligned} y_{n}&=\operatorname{arg\,min} \biggl\{ \lambda_{n}g(x_{n},y)+\frac{1}{2} \Vert x_{n}-y \Vert ^{2} : y\in C \biggr\} , \\ z_{n}&=\operatorname{arg\,min} \biggl\{ \lambda_{n}g(y_{n},y)+\frac{1}{2} \Vert x_{n}-y \Vert ^{2} : y\in C \biggr\} . \end{aligned}$$
Step 2. Compute \(w_{n}\in\partial_{2}f(z_{n},z_{n})\) and
$$x_{n+1}=\eta_{n}x_{n}+(1-\eta_{n})z_{n}-\alpha_{n}\mu w_{n}. $$
Set \(n=n+1\) and go back to Step 1.
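To make the scheme concrete, here is a minimal Python sketch of Algorithm 1 on the one-dimensional instance of Example 2.1 with \(C=[-5,5]\), where the lower-level solution set is \(\varOmega=\{0\}\) and hence the bilevel solution is \(x^{*}=0\). The proximal subproblems for \(y_{n}\) and \(z_{n}\) follow the derivation in the proof of Theorem 3.1; the Halpern-type update for \(x_{n+1}\) and the parameter values are assumptions made for illustration, not the paper's verbatim formulas.

```python
from scipy.optimize import minimize_scalar

# Bifunctions of Example 2.1 on C = [-5, 5]:
# f(x,y) = 5y^2 - 7x^2 + 2xy is 2-strongly monotone, with
# d/dy f(x,y)|_{y=x} = 12x, so we may take beta = 2 and L = 12.
g = lambda x, y: 2*y**2 - 7*x**2 + 5*x*y
df2 = lambda x: 12*x                      # the element of the set ∂_2 f(x, x)

def prox(x_center, lam, x_eval):
    # argmin_{y in C} lam * g(x_eval, y) + 0.5 * (y - x_center)^2
    res = minimize_scalar(lambda y: lam*g(x_eval, y) + 0.5*(y - x_center)**2,
                          bounds=(-5.0, 5.0), method='bounded')
    return res.x

# Assumed illustrative parameters: mu < 2*beta/L^2 = 1/36 and
# lam < min(1/(2L_1), 1/(2L_2)) = 1/5 since L_1 = L_2 = 5/2 for this g.
mu, lam, eta = 0.02, 0.1, 0.5
x = 3.0
for n in range(200):
    alpha = 1.0 / (n + 2)
    y = prox(x, lam, x)                   # extragradient step
    z = prox(x, lam, y)                   # second (subgradient) step
    # Assumed Halpern-type update combining x_n, z_n, and w_n in ∂_2 f(z_n, z_n):
    x = eta*x + (1 - eta)*z - alpha*mu*df2(z)
print(x)  # approaches the unique bilevel solution x* = 0
```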
Theorem 3.1
Let bifunctions f and g satisfy Condition A and Condition B, respectively. Assume that \(\varOmega\neq\emptyset\). Then the sequence \(\{ x_{n}\}\) generated by Algorithm 1 converges strongly to the unique solution of the bilevel equilibrium problem (2).
Proof
Under the assumptions on the two bifunctions f and g, the bilevel equilibrium problem (2) has a unique solution, which we denote by \(x^{*}\).
Step 1: Show that
The definition of \(y_{n}\) and Lemma 2.4 imply that
There are \(w\in\partial_{2}g(x_{n},y_{n})\) and \(\bar{w}\in N_{C}(y_{n})\) such that
Since \(\bar{w}\in N_{C}(y_{n})\), we have
By using (8) and (9), we obtain \(\lambda_{n}\langle w,y-y_{n}\rangle\geq\langle x_{n}-y_{n},y-y_{n}\rangle\) for all \(y\in C\). Since \(z_{n}\in C\), we have
It follows from \(w\in\partial_{2}g(x_{n},y_{n})\) that
By using (10) and (11), we get
Similarly, the definition of \(z_{n}\) implies that
There are \(u\in\partial_{2}g(y_{n},z_{n})\) and \(\bar{u}\in N_{C}(z_{n})\) such that
Since \(\bar{u}\in N_{C}(z_{n})\), we have
By using (13) and (14), we obtain \(\lambda_{n}\langle u,y-z_{n}\rangle\geq\langle x_{n}-z_{n},y-z_{n}\rangle\) for all \(y\in C\). Since \(x^{*}\in C\), we have
It follows from \(u\in\partial_{2}g(y_{n},z_{n})\) that
By using (15) and (16), we get
Since \(x^{*}\in\varOmega\), we have \(g(x^{*},y_{n})\geq0\). It follows from the pseudomonotonicity of g on C with respect to Ω that \(g(y_{n},x^{*})\leq0\). This implies that
Since g is Lipschitz-type continuous, there exist two positive constants \(L_{1},L_{2}\) such that
By using (18) and (19), we get
From (12) and the above inequality, we obtain
We know that
From (20), we can conclude that
Step 2: The sequences \(\{x_{n}\}, \{w_{n}\}, \{ y_{n}\}\), and \(\{z_{n}\}\) are bounded.
Since \(0<\lambda_{n}<a\), where \(a=\min ( \frac {1}{2L_{1}},\frac{1}{2L_{2}} )\), we have
It follows from (7) and the above inequalities that
By Lemma 2.7 and (21), we obtain
where \(w_{n}\in\partial_{2}f(z_{n},z_{n})\) and \(v\in\partial _{2}f(x^{*},x^{*})\). This implies that
By induction, we obtain
Thus the sequence \(\{x_{n}\}\) is bounded. By using (21), \(\{y_{n}\}\) and \(\{z_{n}\}\) are also bounded, and by Condition (A5) we can conclude that \(\{w_{n}\}\) is bounded as well.
Step 3: Show that the sequence \(\{x_{n}\}\) converges strongly to \(x^{*}\).
Since \(x^{*}\in\varOmega^{*}\), we have \(f(x^{*},y)\geq0\) for all \(y\in\varOmega\). Thus \(x^{*}\) is a minimum of the convex function \(f(x^{*},\cdot)\) over Ω. By Lemma 2.4, we obtain \(0\in\partial_{2}f(x^{*},x^{*})+N_{\varOmega}(x^{*})\). Then there exists \(v\in\partial_{2}f(x^{*},x^{*})\) such that
Note that
From Lemma 2.7 and (24), we obtain
It follows that
Let us consider two cases.
Case 1: There exists \(n_{0}\) such that \(\{\|x_{n}-x^{*}\|\}\) is decreasing for \(n\geq n_{0}\). Therefore the limit of sequence \(\{\|x_{n}-x^{*}\|\} \) exists. By using (21) and (25), we obtain
Since \(\lim_{n\rightarrow\infty}\eta_{n}=\eta<1, \lim_{n\rightarrow \infty}\alpha_{n}=0\) and the limit of \(\{\|x_{n}-x^{*}\|\}\) exists, we have
From \(0< \lambda_{n}<a\) and inequality (7), we get
By using (27), we obtain \(\lim_{n\rightarrow\infty}\|x_{n}-y_{n}\|=0\). Next, we show that
Take a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that
Since \(\{x_{n_{k}}\}\) is bounded, we may assume that \(\{x_{n_{k}}\}\) converges weakly to some \(\bar{x}\in H\). Therefore
Since \(\lim_{n\rightarrow\infty}\|x_{n}-y_{n}\|=0\) and \(x_{n_{k}}\rightharpoonup\bar{x}\), we have \(y_{n_{k}}\rightharpoonup \bar{x}\). Since C is closed and convex, it is also weakly closed and thus \(\bar {x}\in C\). Next, we show that \(\bar{x}\in\varOmega\). From the definition of \(\{y_{n}\}\) and Lemma 2.4, we obtain
There exist \(\bar{w}\in N_{C}(y_{n})\) and \(w\in\partial _{2}g(x_{n},y_{n})\) such that
Since \(\bar{w}\in N_{C}(y_{n})\), we have \(\langle\bar{w},y- y_{n}\rangle\leq0\) for all \(y\in C\). From (30), we obtain
Since \(w\in\partial_{2}g(x_{n},y_{n})\), we have
Combining (31) and (32), we get
Letting \(n=n_{k}\) and \(k\rightarrow\infty\) in (33), and using the assumption on \(\lambda_{n}\) and (B5), we obtain \(g(\bar{x},y)\geq0\) for all \(y\in C\). This implies that \(\bar{x}\in\varOmega\). By inequality (23), we obtain \(\langle v, \bar{x}- x^{*}\rangle \geq0\). It follows from (29) that
We can write inequality (25) in the following form:
where \(\xi_{n}=\frac{2\mu}{\tau}\langle v,x^{*}-x_{n+1}\rangle\). It follows from (34) that \(\limsup_{n\rightarrow\infty}\xi _{n}\leq0\). By Lemma 2.5, we can conclude that \(\lim_{n\rightarrow\infty}\|x_{n}-x^{*}\|^{2}=0\). Hence \(x_{n}\rightarrow x^{*}\) as \(n \rightarrow\infty\).
Case 2: There exists a subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\} \) such that \(\|x_{n_{j}}-x^{*}\|\leq\|x_{n_{j}+1}-x^{*}\|\) for all \(j\in\mathbb{N}\). By Lemma 2.6, there exists a nondecreasing sequence \(\{\tau (n)\}\) of \(\mathbb{N}\) such that \(\lim_{n\rightarrow\infty}\tau(n)=\infty\), and for each sufficiently large \(n\in\mathbb{N}\), we have
Combining (22) and (35), we have
Since \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\lim_{n\rightarrow\infty}\eta_{n}=\eta<1\), \(\{z_{n}\}\) is bounded, and (37) holds, we have \(\lim_{n\rightarrow\infty}(\|x_{\tau(n)}-x^{*}\|-\|z_{\tau(n)}-x^{*}\|)=0\). It follows from the boundedness of \(\{x_{n}\}\) and \(\{z_{n}\}\) that
By using the assumption of \(\{\lambda_{n}\}\), we get the following two inequalities:
From (7), we obtain
This implies that
It follows from (38) and the above inequality that
Note that \(\|x_{\tau(n)}-z_{\tau(n)}\|\leq\|x_{\tau(n)}-y_{\tau(n)}\|+\|y_{\tau (n)}-z_{\tau(n)}\|\). From (39), we have
By using the definition of \(x_{n+1}\) and Lemma 2.7, we obtain
where \(t_{\tau(n)}\in\partial_{2}f(z_{\tau(n)},z_{\tau(n)})\) and \(w_{\tau(n)}\in\partial_{2}f(x_{\tau(n)},x_{\tau(n)})\). Since \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), by the boundedness of \(\{w_{\tau(n)}\}\) and (40), we have \(\lim_{n\rightarrow\infty}\|x_{\tau(n)+1}-x_{\tau(n)}\|=0\). As proved in the first case, we can conclude that
Combining (25) and (35), we obtain
By using (35) again, we have
From (41), we can conclude that \(\limsup_{n\rightarrow\infty}\|x_{n}-x^{*}\|^{2}\leq0\). Hence \(x_{n}\rightarrow x^{*}\) as \(n\rightarrow\infty\). This completes the proof. □
4 The extragradient subgradient methods with line searches
In this section, we introduce an algorithm for finding the solution of a bilevel equilibrium problem without a Lipschitz-type condition on the bifunction g.
Algorithm 2
Initialization: Choose \(x_{0}\in C\), \(0<\mu< \frac{2\beta }{L^{2}}\), \(\rho\in(0,2)\), \(\gamma\in(0,1)\), the sequences \(\{\lambda_{n}\}\), \(\{\xi_{n}\}\), and \(\{\alpha_{n}\}\subset(0,1)\) such that
Set \(n=0\), and go to Step 1.
Step 1. Compute
$$y_{n}=\operatorname{arg\,min} \biggl\{ \lambda_{n}g(x_{n},y)+\frac{1}{2} \Vert x_{n}-y \Vert ^{2} : y\in C \biggr\} . $$
If \(y_{n}=x_{n}\), then set \(u_{n}=x_{n}\) and go to Step 4. Otherwise, go to Step 2.
Step 2. (Armijo line search rule) Find m as the smallest positive integer satisfying
$$\textstyle\begin{cases} z_{n,m}=(1-\gamma^{m})x_{n}+\gamma^{m}y_{n}, \\ g(z_{n,m},x_{n})-g(z_{n,m},y_{n})\geq\frac{\rho}{2\lambda_{n}} \Vert x_{n}-y_{n} \Vert ^{2}. \end{cases} $$
Set \(z_{n}=z_{n,m}\) and \(\gamma_{n}=\gamma^{m}\).
Step 3. Choose \(t_{n}\in\partial_{2}g(z_{n},x_{n})\) and compute \(u_{n}=P_{C}(x_{n}-\xi_{n}\sigma_{n}t_{n})\), where \(\sigma_{n}=\frac{g(z_{n},x_{n})}{\|t_{n}\|^{2}}\).
Step 4. Compute \(w_{n}\in\partial_{2}f(u_{n},u_{n})\) and
$$x_{n+1}=u_{n}-\alpha_{n}\mu w_{n}. $$
Set \(n=n+1\), and go back to Step 1.
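The Armijo line search of Step 2 can be illustrated in code. The termination condition below, \(g(z_{n,m},x_{n})-g(z_{n,m},y_{n})\geq\frac{\rho}{2\lambda_{n}}\|x_{n}-y_{n}\|^{2}\) with \(z_{n,m}=(1-\gamma^{m})x_{n}+\gamma^{m}y_{n}\), is a reconstruction consistent with how the rule is invoked in the proof of Theorem 4.4, and should be read as a sketch. The bifunction and the closed-form \(y_{n}\) come from Example 2.1 (taken on \(C=\mathbb{R}\) for simplicity).

```python
def armijo(g, x, y, lam, rho=1.0, gamma=0.5, m_max=100):
    # Armijo rule of Step 2 (assumed form): find the smallest positive
    # integer m such that, with z_m = (1 - gamma^m)*x + gamma^m*y,
    #   g(z_m, x) - g(z_m, y) >= (rho / (2*lam)) * |x - y|^2.
    thresh = rho / (2*lam) * (x - y)**2
    for m in range(1, m_max + 1):
        z = (1 - gamma**m)*x + gamma**m*y
        if g(z, x) - g(z, y) >= thresh:
            return z, gamma**m
    raise RuntimeError("line search did not terminate")

g = lambda x, y: 2*y**2 - 7*x**2 + 5*x*y   # bifunction of Example 2.1
x, lam = 3.0, 0.1
# Closed-form proximal point y_n = argmin_y lam*g(x,y) + 0.5*(y - x)^2 on R:
y = x * (1 - 5*lam) / (1 + 4*lam)
z, gam = armijo(g, x, y, lam)
print(z, gam)  # accepted point z_n and step size gamma_n = gamma^m
```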
Lemma 4.1
([26])
Suppose that \(y_{n}\neq x_{n}\) for some \(n \in\mathbb{N}\). Then the line search corresponding to \(x_{n}\) and \(y_{n}\) (Step 2) is well defined, \(g(z_{n}, x_{n}) > 0\), and \(0 \notin\partial_{2}g(z_{n},x_{n})\).
Lemma 4.2
([26])
Let \(g:\varTheta\times\varTheta \rightarrow\mathbb{R}\) be a bifunction satisfying conditions (B1) on C and (B5) on Θ, where Θ is an open convex set containing C. Let \(\bar{x}, \bar{y}\in\varTheta\) and \(\{x_{n}\}, \{y_{n}\}\) be two sequences in Θ converging weakly to \(\bar{x}, \bar{y}\), respectively. Then, for any \(\varepsilon>0\), there exist \(\eta>0\) and \(n_{\varepsilon}\in\mathbb{N}\) such that
for all \(n\geq n_{\varepsilon}\), where B denotes the closed unit ball in H.
Lemma 4.3
([8])
Let the bifunction g satisfy assumption (B1) on \(C\times C\), (B5) on \(\varTheta\times\varTheta\). Suppose that \(\{x_{n}\}\) is a bounded sequence in \(C, \rho>0\) and \(\{y_{n}\}\) is a sequence such that
Then \(\{y_{n}\}\) is also bounded.
Theorem 4.4
Let the bifunction f satisfy Condition A and g satisfy conditions (B1)–(B3) and (B5). Assume that \(\varOmega\neq\emptyset\). Then the sequence \(\{x_{n}\}\) generated by Algorithm 2 converges strongly to the unique solution of the bilevel equilibrium problem (2).
Proof
Let \(x^{*}\) be the unique solution of the bilevel equilibrium problem (2). Then we have \(x^{*}\in\varOmega\) and there exists \(v\in\partial _{2}f(x^{*},x^{*})\) such that
Step 1: Show that
By the definition of \(u_{n}\), we have
Since \(t_{n}\in\partial_{2}g(z_{n},x_{n})\) and \(g(z_{n},\cdot)\) is convex on C, we have \(g(z_{n},x^{*})-g(z_{n},x_{n})\geq\langle t_{n}, x^{*}-x_{n}\rangle\). It follows that
Since g is pseudomonotone on C with respect to Ω, we have \(g(z_{n},x^{*})\leq0\). It follows from (46) and the definition of \(\sigma_{n}\) that
Combining (45) with (47), we obtain
Step 2: The sequences \(\{x_{n}\}\), \(\{y_{n}\}\), \(\{u_{n}\}\), and \(\{w_{n}\}\) are bounded.
Since \(\xi_{n}\in[\underline{\xi},\overline{\xi}]\subset(0,2)\), it follows from (44) that
By the definition of \(x_{n+1}\), we get
From Lemma 2.7, (48), and (49), we can conclude that
This implies that
By induction, we obtain
Thus the sequence \(\{x_{n}\}\) is bounded. Hence we can conclude from (48) and Lemma 4.3 that \(\{y_{n}\}\) and \(\{u_{n}\}\) are bounded, respectively. From Condition (A5), \(\{w_{n}\}\) is also bounded.
Step 3: We show that if there is a subsequence \(\{ x_{n_{k}}\}\) of \(\{x_{n}\}\) converging weakly to x̄ and \(\lim_{k\rightarrow\infty}(\sigma_{n_{k}}\|t_{n_{k}}\|)=0\), then we have \(\bar{x}\in\varOmega\).
Firstly, we will show that \(\{t_{n_{k}}\}\) is bounded. Since \(\{z_{n}\}\) is bounded, there is a subsequence \(\{z_{n_{k}}\}\) of \(\{z_{n}\}\) converging weakly to z̄. By using Lemma 4.2, for any \(\varepsilon>0\), there exist \(\eta>0\) and \(k_{0}\) such that
for all \(k\geq k_{0}\). Since \(t_{n_{k}}\in\partial_{2}g(z_{n_{k}},x_{n_{k}})\), the sequence \(\{t_{n_{k}}\}\) is bounded. Next, we show that \(\| x_{n_{k}}-y_{n_{k}}\|\rightarrow0\). Without loss of generality, we can assume that \(x_{n_{k}}\neq y_{n_{k}}\) for all \(k \in\mathbb{N}\). By Lemma 4.1, we obtain \(g(z_{n_{k}}, x_{n_{k}}) > 0\) and \(t_{n_{k}}\neq0\). Since \(\lim_{k\rightarrow\infty}(\sigma_{n_{k}}\| t_{n_{k}}\|)=0\) and \(\{t_{n_{k}}\}\) is bounded, we have
It follows from the convexity of \(g(z_{n_{k}},\cdot)\) that
This implies that
By the Armijo line search, we get \(\frac{\rho}{2\lambda_{n_{k}}}\|x_{n_{k}}-y_{n_{k}}\|^{2}\leq g(z_{n_{k}},x_{n_{k}})-g(z_{n_{k}},y_{n_{k}})\), and (52) implies that
Combining (51) with (53), we obtain
Then we consider two cases.
Case 1. \(\limsup_{k\rightarrow\infty}\gamma_{n_{k}}>0\). There exist \(\bar{\gamma}>0 \) and a subsequence of \(\{\gamma_{n_{k}}\}\) denoted again by \(\{\gamma_{n_{k}}\}\) such that \(\gamma_{n_{k}} > \bar{\gamma}\) for all k. So we get from (54) that
Since \(x_{n_{k}}\rightharpoonup\bar{x}\) and (55), we have \(y_{n_{k}}\rightharpoonup\bar{x}\). On the other hand, by the definition of \(y_{n_{k}}\), we have
Therefore
Letting \(k\rightarrow\infty\) in the above inequality, using (55) and the jointly weak continuity of g, we have \(g(\bar{x},y)-g(\bar{x},\bar{x})\geq0\) for all \(y\in C\). So, \(g(\bar{x},y)\geq0\) for all \(y\in C\). Hence \(\bar{x}\in\varOmega\).
Case 2. \(\limsup_{k\rightarrow\infty}\gamma_{n_{k}}=0\). From the boundedness of \(\{y_{n}\}\), there exists \(\{y_{n_{k}}\}\subseteq\{y_{n}\}\) such that \(y_{n_{k}}\rightharpoonup\bar{y}\). Let \(\{m_{k}\}\) be the sequence of the smallest non-negative integers such that \(z_{n_{k}}=(1-\gamma^{m_{k}})x_{n_{k}}+\gamma^{m_{k}}y_{n_{k}}\) and
Since \(\gamma^{m_{k}}\rightarrow0\), we have \(m_{k}>0\). It follows from the Armijo line search that, for \(m_{k}-1\), we have \(\bar{z}_{n_{k}}=(1-\gamma^{m_{k}-1})x_{n_{k}}+\gamma^{m_{k}-1}y_{n_{k}}\) and
On the other hand, by the definition of \(y_{n}\), we have
Letting \(n=n_{k}\) and \(y=x_{n_{k}}\) in (58), we get
Combining (57) with (59), we obtain
Since \(x_{n_{k}}\rightharpoonup\bar{x}, y_{n_{k}}\rightharpoonup\bar{y}\), and \(\gamma_{n_{k}}\rightarrow0\), we have \(\bar{z}_{n_{k}}\rightharpoonup \bar{x}\). From (60) and the joint weak continuity of g on \(H \times H\), we get \(-g(\bar{x},\bar{y})<-\frac{\rho}{2}g(\bar{x},\bar{y})\). Since \(\rho\in(0,2)\), we have \(g(\bar{x},\bar{y})\geq0\). Taking \(k\rightarrow\infty\) in (59), we obtain \(\lim_{k\rightarrow\infty}\|x_{n_{k}}-y_{n_{k}}\|=0\). As in Case 1, it follows that \(\bar{x}\in\varOmega\).
Step 4: Show that the sequence \(\{x_{n}\}\) converges strongly to \(x^{*}\). By using the definition of \(x_{n+1}\) and (44), we obtain
Set \(a_{n}=\|x_{n}-x^{*}\|^{2}\). It follows from the boundedness of \(\{w_{n}\}\) and \(\{u_{n}\}\) that
Combining (61), (62) with the definition of \(a_{n}\), we get
Let us consider two cases.
Case 1: There exists \(n_{0}\) such that \(\{a_{n}\}\) is decreasing for \(n\geq n_{0}\). Therefore the limit of \(\{a_{n}\}\) exists, denoted by a. It follows that \(\lim_{n\rightarrow\infty}(a_{n+1}-a_{n})=0\). From (63) and the definition of \(\alpha_{n}\), we have
By the definitions of \(x_{n+1}\) and \(u_{n}\), we obtain
It follows that \(\lim_{n\rightarrow\infty}\|u_{n}-x_{n}\|^{2}=0\), which implies that
Since \(\{u_{n}\}\subseteq C\) is bounded, there exists a subsequence \(\{ u_{n_{k}}\}\) of \(\{u_{n}\}\) that converges weakly to some \(\bar{u}\in C\) and satisfies the equality
Since \(u_{n_{k}}\rightharpoonup\bar{u}\in C\) and (43), we have
Since \(w_{n}\in\partial_{2}f(u_{n},u_{n}), v\in\partial_{2}f(x^{*},x^{*})\) and f is β-strongly monotone on C, we have
This implies that \(\langle w_{n}, u_{n}-x^{*}\rangle\geq\beta\|u_{n}-x^{*}\|^{2}+\langle u_{n}-x^{*},v\rangle\). Combining (66), (67) with the above inequality, we get
Assume that \(a>0\). Choose \(\varepsilon= \frac{1}{2}\beta a\). There exists \(n_{0}\) such that
It follows from (61) and the above inequality that \(a_{n+1}-a_{n}\leq-\alpha_{n}\mu\beta a^{2}+\alpha_{n}^{2}\mu M_{2}\). Summing this inequality from \(n_{0}\) to n, we obtain
Since \(\sum_{k=n_{0}}^{\infty}\alpha_{k}=\infty\) and \(\sum_{k=n_{0}}^{\infty}\alpha_{n}^{2}<\infty\), we can conclude from (69) that \(\liminf_{n\rightarrow\infty}a_{n}=-\infty\), which is a contradiction. So, \(a=0\), and we can conclude that \(\lim_{n\rightarrow\infty}\|x_{n}-x^{*}\|^{2}=0\).
Case 2: There exists a subsequence \(\{a_{n_{j}}\}\) of \(\{a_{n}\} \) such that \(a_{n_{j}}\leq a_{n_{j}+1}\) for all \(j\in\mathbb{N}\). Let \(\{\tau(n)\}\) be a nondecreasing sequence defined in Lemma 2.6. Thus
From (63) and the definition of \(\alpha_{n}\), we have
By the definition of \(u_{\tau(n)}\), we obtain
This implies that \(\lim_{n\rightarrow\infty}\|u_{\tau(n)}-x_{\tau(n)}\| ^{2}=0\). Since \(\{x_{\tau(n)}\}\) is bounded, there exists a subsequence \(\{x_{{\tau(n)}_{k}}\}\subseteq\{x_{\tau(n)}\}\) such that \(x_{{\tau(n)}_{k}}\rightharpoonup\bar{x}\in C\) and thus \(u_{{\tau(n)}_{k}}\rightharpoonup\bar{x}\in C\). From (71) and Step 3 of this proof, we get \(\bar{x}\in\varOmega\). Next, we will show that \(u_{\tau(n)_{k}}\rightarrow x^{*}\). It follows from (61) that
This implies that
Since f is β-strongly monotone on C and \(w_{\tau(n)}\in \partial_{2}f(u_{\tau(n)_{k}},u_{\tau(n)_{k}})\), we have
It follows from (73) and the above inequality that
Taking \(k\rightarrow\infty\), by using \(u_{\tau(n)_{k}}\rightharpoonup \bar{x}\) and \(\alpha_{\tau(n)_{k}}\rightarrow0\), we get
Therefore \(\lim_{k\rightarrow\infty}\|u_{\tau(n)_{k}}-x^{*}\|^{2}=0\). Then, it is easy to see that \(\lim_{n\rightarrow\infty}\|u_{\tau(n)}-x^{*}\|^{2}=0\). By the definition of \(x_{n+1}\), we have
Since \(\{w_{\tau(n)}\}\) is bounded, and from the definition of \(\alpha _{\tau(n)}\), we have \(\lim_{n\rightarrow\infty}\|x_{\tau(n)+1}-x^{*}\|^{2}=0\). This means that \(\lim_{n\rightarrow\infty}a_{\tau(n)+1}=0\). It follows from (50) that \(\lim_{n\rightarrow\infty}a_{n}=0\). Hence \(\lim_{n\rightarrow\infty}\|x_{n}-x^{*}\|^{2}=0\). This completes the proof. □
5 Numerical examples
Let \(H=\mathbb{R}^{n}\) and \(C= \{x\in\mathbb{R}^{n}:-5 \leq x_{i}\leq 5,\forall i\in\{1,2,\ldots,n\}\}\). Let the bifunction \(g: \mathbb {R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) be defined by
where P and Q are random symmetric positive semidefinite matrices such that \(P-Q\) is positive semidefinite. Then g is pseudomonotone on \(\mathbb{R}^{n}\). Indeed, suppose that \(g(x,y)\geq0\) for some \(x,y\in\mathbb{R}^{n}\); then we have
Next, we obtain that g is Lipschitz-type continuous with \(L_{1}=L_{2}=\frac{1}{2}\|P-Q\|\). Indeed, for each \(x,y,z \in\mathbb{R}^{n}\),
where \(\|P-Q\|\) is the spectral norm of the matrix \(P-Q\), that is, the square root of the largest eigenvalue of the positive semidefinite matrix \((P-Q)^{T}(P-Q)\). It is easy to check that \(\varOmega\neq\emptyset\). Furthermore, we define the bifunction \(f:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) as
with A and B being positive definite matrices defined by
where \(M,N\) are random \(n\times n\) matrices and \(I_{n}\) is the identity matrix. Then f is n-strongly monotone on \(\mathbb{R}^{n}\). Indeed, for \(x,y\in\mathbb{R}^{n}\), we get
Moreover, \(\partial_{2}f(x,x)=\{(A+B) x\}\) and \(\|(A+B)x - (A+B)y\|\leq\|A+B\|\|x-y\|\) for all \(x,y\in\mathbb{R}^{n}\). Thus the mapping \(x \mapsto\partial_{2}f(x,x)\) is bounded and \(\|A+B\|\)-Lipschitz continuous on every bounded subset of H. In this example, we consider the quadratic optimization problem
where H is a matrix and f, x are vectors (here H and f denote the standard quadratic-programming data, not the Hilbert space and the bifunction above). The subproblems for \(y_{k}\) and \(z_{k}\) in Algorithm 1 can be written in the form of problem (76).
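The random data described above can be generated as follows. Since the displayed formulas for g, A, and B are elided here, this is a sketch under assumptions: the Gram construction \(P=Q+S^{T}S\) is one simple way to obtain symmetric positive semidefinite P, Q with \(P-Q\) positive semidefinite, and \(A=M^{T}M+nI_{n}\), \(B=N^{T}N+nI_{n}\) is one construction consistent with f being n-strongly monotone.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
R, S, M, N = (rng.normal(size=(n, n)) for _ in range(4))

# Symmetric PSD matrices with P - Q PSD (assumed Gram construction):
Q = R.T @ R
P = Q + S.T @ S                  # P - Q = S^T S is positive semidefinite

# Assumed construction making A, B positive definite with eigenvalues >= n:
A = M.T @ M + n * np.eye(n)
B = N.T @ N + n * np.eye(n)

tol = 1e-8
assert np.min(np.linalg.eigvalsh(P - Q)) >= -tol    # P - Q is PSD
assert np.min(np.linalg.eigvalsh(A)) >= n - tol     # A is positive definite
assert np.min(np.linalg.eigvalsh(B)) >= n - tol     # B is positive definite

# Lipschitz-type constants for g: L1 = L2 = 0.5 * ||P - Q|| (spectral norm)
L1 = 0.5 * np.linalg.norm(P - Q, 2)
print("data generated; L1 =", round(L1, 3))
```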
We have tested this example for \(n=5, 10, 50, 100\), and 500. The starting point \(x_{0}\) is chosen randomly. Take the parameters
We have implemented Algorithm 1 for this problem in Matlab R2015 running on a desktop with Intel(R) Core(TM) i5-7200U CPU 2.50 GHz and 4 GB RAM. We used the stopping criterion \(\|x_{k+1} - x_{k}\|<\varepsilon\), where \(\varepsilon= 0.001\) is the tolerance. Denote:
-
N.P: the number of the tested problems.
-
Average iteration: the average number of iterations.
-
Average times: the average CPU-computation times (in s).
The computation results are reported in the following tables.
From the numerical results in Table 1, we see that the sequence generated by our algorithm converges, and the method is effective for solving bilevel equilibrium problems.
6 Conclusions
We have proposed two iterative algorithms for finding the solution of a bilevel equilibrium problem in a real Hilbert space. The sequences generated by our algorithms converge strongly to the solution. Furthermore, we reported numerical results to support our algorithms.
References
Anh, P.N., Anh, T.T.H., Hien, N.D.: Modified basic projection methods for a class of equilibrium problems. Numer. Algorithms https://doi.org/10.1007/s11075-017-0431-9
Bento, G.C., Cruz Neto, J.X., Lopes, J.O., Soares, P.A. Jr, Soubeyran, A.: Generalized proximal distances for bilevel equilibrium problems. SIAM J. Optim. 26, 810–830 (2016)
Bianchi, M., Schaible, S.: Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 90, 31–43 (1996)
Chadli, O., Chbani, Z., Riahi, H.: Equilibrium problems with generalized monotone bifunctions and applications to variational inequalities. J. Optim. Theory Appl. 105, 299–323 (2000)
Chbani, Z., Riahi, H.: Weak and strong convergence of proximal penalization and proximal splitting algorithms for two-level hierarchical Ky Fan minimax inequalities. Optimization 64, 1285–1303 (2015)
Daniele, P., Giannessi, F., Maugeri, A.: Equilibrium Problems and Variational Models. Kluwer Academic, Norwell (2003)
Dempe, S.: Annotated bibliography on bilevel programming and mathematical programs with equilibrium constraints. Optimization 52, 333–359 (2003)
Dinh, B.V., Kim, D.S.: Extragradient algorithms for equilibrium problems and symmetric generalized hybrid mappings. Optim. Lett. https://doi.org/10.1007/s11590-016-1025-5
Hieu, D.V.: Weak and strong convergence of subgradient extragradient methods for pseudomonotone equilibrium. Commun. Korean Math. Soc. 31, 879–893 (2016)
Hieu, D.V., Moudafi, A.: A barycentric projected-subgradient algorithm for equilibrium problems. J. Nonlinear Var. Anal. 1, 43–59 (2017)
Iiduka, H.: Fixed point optimization algorithm and its application to power control in CDMA data networks. Math. Program. 133, 227–242 (2012)
Iiduka, H., Yamada, I.: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. 19, 1881–1893 (2009)
Iusem, A.N.: On the maximal monotonicity of diagonal subdifferential operators. J. Convex Anal. 18, 489–503 (2011)
Iusem, A.N., Nasri, M.: Inexact proximal point methods for equilibrium problems in Banach spaces. Numer. Funct. Anal. Optim. 28, 1279–1308 (2007)
Kim, D.S., Dinh, B.V.: Parallel extragradient algorithms for multiple set split equilibrium problems in Hilbert spaces. Numer. Algorithms 77, 741–761 (2018)
Konnov, I.V.: Application of the proximal point method to nonmonotone equilibrium problems. J. Optim. Theory Appl. 119, 317–333 (2003)
Liu, Y.: A modified hybrid method for solving variational inequality problems in Banach spaces. J. Nonlinear Funct. Anal. 2017, Article ID 31 (2017)
Mainge, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)
Moudafi, A.: Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 15, 91–100 (1999)
Moudafi, A.: Proximal methods for a class of bilevel monotone equilibrium problems. J. Glob. Optim. 47, 287–292 (2010)
Muu, L.D., Oettli, W.: Optimization over equilibrium sets. Optimization 49, 179–189 (2000)
Muu, L.D., Quoc, T.D.: Regularization algorithms for solving monotone Ky Fan inequalities with application to a Nash-Cournot equilibrium model. J. Optim. Theory Appl. 142, 185–204 (2009)
Quy, N.V.: An algorithm for a bilevel problem with equilibrium and fixed point constraints. Optimization 64, 1–17 (2014)
Thuy, L.Q., Hai, T.N.: A projected subgradient algorithm for bilevel equilibrium problems and applications. J. Optim. Theory Appl. https://doi.org/10.1007/s10957-017-1176-2
Tran, D.Q., Muu, L.D., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–776 (2008)
Vuong, P.T., Strodiot, J.J., Nguyen, V.H.: Extragradient methods and linesearch algorithms for solving Ky Fan inequalities and fixed point problems. J. Optim. Theory Appl. 155, 605–627 (2013)
Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)
Yao, Y., Petrusel, A., Qin, X.: An improved algorithm based on Korpelevich’s method for variational inequalities in Banach spaces. J. Nonlinear Convex Anal. 19, 397–406 (2018)
Acknowledgements
The first author would like to thank the Thailand Research Fund for support through the Royal Golden Jubilee Ph.D. Program under Grant No. PHD/0032/2555, and Naresuan University.
Funding
This research was supported by the Thailand Research Fund through the Royal Golden Jubilee Ph.D. Program (Grant No. PHD/0032/2555) and Naresuan University.
Contributions
All authors contributed equally to this work. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Yuying, T., Dinh, B.V., Kim, D.S. et al. Extragradient subgradient methods for solving bilevel equilibrium problems. J Inequal Appl 2018, 327 (2018). https://doi.org/10.1186/s13660-018-1898-1