1 Introduction

In this paper, we consider the equilibrium problem (EP) which is to find \(x^*\in C\) such that

$$\begin{aligned} f(x^*,y)\ge 0, \ \ \forall y\in C, \end{aligned}$$
(1)

where C is a nonempty closed convex subset of a real Hilbert space H and \(f: H\times H\longrightarrow {\mathbb {R}}\) is a bifunction. The solution set of (1) is denoted by EP(f). The equilibrium problem is also called the Ky Fan inequality, owing to Ky Fan's contribution to this field [1]. The problem unifies many important mathematical models, such as the saddle point problem, the fixed point problem, the variational inequality and the Nash equilibrium problem [2, 3]. Recently, methods for solving the equilibrium problem have been studied extensively [4,5,6,7,8,9,10,11,12,13,14,15,16,17]. One of the most popular is the proximal point method [4,5,6], but it cannot be applied to pseudomonotone equilibrium problems. Another is the proximal-like method (the extragradient method) [7]. Using the idea of the Korpelevich extragradient method [8], Quoc et al. [9] extended this method as follows:

$$\begin{aligned} \left\{ \begin{aligned}\begin{array}{ll} x_{0}\in C, y_{n}=argmin\left\{ \lambda f(x_n,y)+\frac{1}{2}\Vert x_n-y\Vert ^2, \ y\in C\right\} , \\ x_{n+1}= argmin\left\{ \lambda f(y_n,y)+\frac{1}{2}\Vert x_{n}-y\Vert ^2, \ y\in C\right\} ,\\ \end{array} \end{aligned} \right. \end{aligned}$$
(2)

where \(\lambda \) is a suitable parameter. It was proved that the sequence \(\{x_n\}\) generated by (2) converges to a solution of the equilibrium problem under pseudomonotonicity and a Lipschitz-type condition on f. However, at each iteration one needs to solve two strongly convex programming problems. This method was improved by many authors; see, e.g., [10,11,12,13,14,15]. Based on Malitsky's work on the variational inequality [18], Nguyen [15] proposed the following method

$$\begin{aligned} \left\{ \begin{aligned}\begin{array}{ll} x_{0}, y_{1}\in C, x_{n}=\frac{(\varphi -1)y_n+x_{n-1}}{\varphi } \\ y_{n+1}= argmin\left\{ \lambda f(y_n,y)+\frac{1}{2}\Vert x_{n}-y\Vert ^2, \ y\in C\right\} ,\\ \end{array} \end{aligned} \right. \end{aligned}$$
(3)

where \(\varphi =\frac{\sqrt{5}+1}{2}\) and \(\lambda \) is a suitable parameter. It is easy to see that this method needs only one strongly convex programming problem per iteration. The main drawback of algorithms (2) and (3) is the requirement to know the Lipschitz-type constants of the equilibrium bifunction. To overcome this shortcoming, Dang [10, 11, 13, 14] proposed non-summable, diminishing step size sequences for solving strongly pseudomonotone equilibrium problems. In this work, we propose a new gradient method for solving pseudomonotone equilibrium problems. It is worth pointing out that the proposed algorithm uses a new step size and does not require knowledge of the Lipschitz-type constants of the bifunction.
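To make scheme (2) concrete, consider the special bifunction \(f(x,y)=\langle F(x),y-x\rangle \), for which each argmin step reduces to a projection onto C. The following minimal Python sketch uses the illustrative choices \(F(x)=x\) and \(C=[0.5,2]\) (our own, not from the paper); the corresponding problem has the unique solution \(x^*=0.5\).

```python
# Sketch of the extragradient scheme (2) for f(x, y) = <F(x), y - x>;
# both argmin subproblems then reduce to projections onto C.
# F(x) = x and C = [0.5, 2] are illustrative choices, not from the paper.

def proj(x, lo=0.5, hi=2.0):
    """Metric projection onto the interval C = [lo, hi]."""
    return min(max(x, lo), hi)

def F(x):
    return x  # monotone and 1-Lipschitz, so a fixed lam in (0, 1/L) works

def extragradient(x0, lam=0.4, iters=100):
    x = x0
    for _ in range(iters):
        y = proj(x - lam * F(x))   # y_n:     first argmin step, as a projection
        x = proj(x - lam * F(y))   # x_{n+1}: second argmin step
    return x

print(extragradient(2.0))  # -> 0.5, the solution on C = [0.5, 2]
```

Note that the fixed \(\lambda \) must be chosen with the Lipschitz constant in hand, which is precisely the drawback that adaptive step sizes remove.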

The remainder of this paper is organized as follows. In Sect. 2, we present some definitions and preliminaries that will be needed throughout the paper. In Sect. 3, we propose a new algorithm and analyze its convergence. In Sect. 4, we particularize our method to the variational inequality. Finally, preliminary numerical experiments are provided that demonstrate the performance of our algorithm.

2 Preliminaries

In this section, we recall some concepts and results for further use.

Definition 2.1

A bifunction \(f : C \times C \longrightarrow {\mathbb {R}}\) is said to be:

  1. (i)

    monotone on C if \(f(x,y)+f(y,x)\le 0, \ \forall x, y\in C;\)

  2. (ii)

    pseudomonotone on C if \(f(x,y)\ge 0\Longrightarrow f(y,x)\le 0, \ \forall x, y\in C;\)

  3. (iii)

    strongly pseudomonotone on C if there exists \(\gamma >0\) such that \(f(x,y)\ge 0\Longrightarrow f(y,x)\le -\gamma \Vert x-y\Vert ^2, \ \forall x, y\in C.\)

Definition 2.2

A mapping \(h : C \longrightarrow {\mathbb {R}}\) is called subdifferentiable at \(x\in C\) if there exists a vector \(w\in H\) such that \(h(y)-h(x)\ge \langle w, y-x\rangle , \forall y\in C.\)

Definition 2.3

A mapping \(F : H \rightarrow H\) is said to be sequentially weakly continuous if the sequence \(\{x_n\}\) converges weakly to x implies \(\{F(x_n)\}\) converges weakly to F(x).

For solving the equilibrium problem, we assume that the bifunction f satisfies the following conditions:

(C1):

f is pseudomonotone on C and \(f(x,x)=0\) for all \(x\in C\).

\((C1')\):

f is strongly pseudomonotone on C and \(f(x,x)=0\) for all \(x\in C\).

(C2):

f satisfies the Lipschitz-type condition on C. That is, there exist two positive constants \(c_1\), \(c_2\) such that \(f(x,y)+f(y,z)\ge f(x,z)-c_1\Vert x-y\Vert ^2-c_2\Vert y-z\Vert ^2, \ \forall x, y, z\in C.\)

(C3):

f(x, .) is convex and subdifferentiable on C for every fixed \(x \in C\).

(C4):

\(\limsup _{n\rightarrow \infty }f (x_n, y)\le f (x, y)\) for every sequence \(\{x_n\}\) converging weakly to x and for each \(y\in C\).

For a proper, convex and lower semicontinuous function \(g: C\rightarrow (-\infty ,+\infty ]\) and \(\lambda >0, \) the proximal mapping of g associated with \(\lambda \) is defined by

$$\begin{aligned} prox_{\lambda g}(x)=argmin\left\{ \lambda g(y)+\frac{1}{2}\Vert x-y\Vert ^2: y\in C\right\} , \ x\in H. \end{aligned}$$
(4)
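For intuition, when g is linear, say \(g(y)=\langle c,y\rangle \), and C is a box, the minimization in (4) separates by coordinate and \(prox_{\lambda g}(x)\) is simply the componentwise clipping of \(x-\lambda c\) to the box. The sketch below checks this closed form against a brute-force grid search; all concrete numbers are illustrative choices, not from the paper.

```python
# Sketch: the proximal mapping (4) for a linear g(y) = <c, y> over a box C.
# The subproblem separates, so prox_{lam g}(x) = clip(x - lam*c) componentwise.
# All concrete numbers below are illustrative.

def prox_linear_box(x, c, lam, lo, hi):
    return [min(max(xi - lam * ci, l), h)
            for xi, ci, l, h in zip(x, c, lo, hi)]

def objective(y, x, c, lam):
    # the strongly convex objective lam*g(y) + (1/2)|x - y|^2 in one dimension
    return lam * c * y + 0.5 * (x - y) ** 2

x, c, lam, lo, hi = 0.3, 1.0, 0.5, -1.0, 1.0
p = prox_linear_box([x], [c], lam, [lo], [hi])[0]

# brute-force check of optimality on a fine grid over C = [-1, 1]
grid = [lo + i * (hi - lo) / 10000 for i in range(10001)]
best = min(grid, key=lambda y: objective(y, x, c, lam))
print(p, best)  # both close to x - lam*c = -0.2
```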

The following lemma is a property of the proximal mapping.

Lemma 2.1

[19] For all \(x\in H, y \in C \) and \(\lambda > 0,\) the following inequality holds:

$$\begin{aligned} \lambda \{g(y)-g(prox_{\lambda g}(x))\}\ge \langle x-prox_{\lambda g}(x),y-prox_{\lambda g}(x)\rangle . \end{aligned}$$
(5)

Remark 2.1

From Lemma 2.1, we note that if \(x=prox_{\lambda g}(x)\), then

$$\begin{aligned} x\in Argmin\{g(y):y\in C\}:=\{x\in C: g(x)=\min \limits _{y\in C}g(y)\}. \end{aligned}$$
(6)

Lemma 2.2

Let \(\delta \in (0,+\infty ) \) and \(x,\ y\in H\). Then

$$\begin{aligned} \Vert (\delta +1)x-\delta y\Vert ^2=(\delta +1)\Vert x\Vert ^2-\delta \Vert y\Vert ^2+\delta (\delta +1)\Vert x-y\Vert ^2. \end{aligned}$$

Lemma 2.3

Let \(\{a_n\},\)\(\{b_n\}\) be two nonnegative real sequences such that \(a_{n+1}\le a_n-b_n\) for all \(n>N\) and some \(N>0\). Then \(\{a_n\}\) is bounded, \(\lim \nolimits _{n\rightarrow \infty }a_n\) exists and \(\lim \nolimits _{n\rightarrow \infty }b_n=0.\)

Lemma 2.4

Let \(\{{x_n}\}\) be a sequence in H such that \(x_n\rightharpoonup x \). Then

$$\begin{aligned} \liminf \limits _{n\rightarrow \infty }\Vert x_n-x\Vert <\liminf \limits _{n\rightarrow \infty }\Vert x_n-y\Vert ,\ \ \forall y\ne x. \end{aligned}$$

For a closed and convex \(K\subseteq H\), the (metric) projection \(P_K: H\longrightarrow K\) is defined, for all \(x\in H\), by \(P_K(x)=argmin\{{\parallel y-x \parallel \mid y\in K }\}\).

Lemma 2.5

Let C be a nonempty, closed and convex set in H and \(x \in H\). Then

$$\begin{aligned} \ \langle P_Cx-x, y-P_Cx\rangle \ge 0,\ \ \forall y\in C. \end{aligned}$$

3 Algorithm and its convergence

In this section, we propose an iterative algorithm for solving the equilibrium problem (1). The algorithm is designed as follows:

Algorithm 3.1

(Step 0):

Choose \(\lambda _1>0\), \(x_{0}, y_0, y_1\in C,\)\(\mu \in (0, 1) \), \(\alpha \in (0, 1) \), \(\theta \in (0, 1] \), \(\delta \in (\frac{\sqrt{1+4(\frac{\alpha }{2-\theta }+1-\alpha )}-1}{2},1).\)

(Step 1):

Given the current iterate \(x_{n-1}\), \(y_{n-1}\), \(y_n\), compute

$$\begin{aligned} x_{n}= & {} (1-\delta )y_n+\delta x_{n-1}. \end{aligned}$$
(7)
$$\begin{aligned} y_{n+1}= & {} argmin\left\{ \lambda _{n} f(y_n,y)+\frac{1}{2}\Vert x_{n}-y\Vert ^2, \ y\in C\right\} =prox_{\lambda _{n} f(y_n,.)}(x_{n}).\nonumber \\ \end{aligned}$$
(8)

If \(y_{n+1} = x_n = y_n\), then stop: \(y_n \) is a solution. Otherwise, go to step 2.

(Step 2):

Compute

$$\begin{aligned} \lambda _{n+1}= {\left\{ \begin{array}{ll} &{} min\left\{ {\frac{\alpha \mu \theta (\Vert y_n-y_{n-1}\Vert ^2+\Vert y_{n+1}-y_n\Vert ^2)}{4\delta (f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1}))},\lambda _n }\right\} , \\ &{} \qquad \qquad if\ \ f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1})>0, \\ &{}\lambda _n, \qquad otherwise. \\ \end{array}\right. } \end{aligned}$$
(9)

Set \(n := n + 1\) and return to step 1.
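As a hedged illustration (the bifunction, the set C and the parameter values below are our own choices, not from the paper), Algorithm 3.1 can be run on \(f(x,y)=y^2-x^2\) over \(C=[-1,2]\), which is pseudomonotone with \(f(x,x)=0\) and \(EP(f)=\{0\}\). For this f the prox step (8) has the closed form \(y_{n+1}=P_C\big (x_n/(1+2\lambda _n)\big )\), and the bracketed combination in (9) vanishes identically, so \(\lambda _n\) stays at \(\lambda _1\):

```python
import math

# Sketch of Algorithm 3.1 on the illustrative bifunction f(x, y) = y^2 - x^2
# over C = [-1, 2]; here EP(f) = {0} and the prox step (8) has a closed form.
# All parameter values are illustrative choices satisfying Step 0.

def f(x, y):
    return y * y - x * x

def proj(x, lo=-1.0, hi=2.0):
    return min(max(x, lo), hi)

def algorithm31(x_prev, y_prev, y, lam=0.5, mu=0.5, alpha=0.5, theta=1.0,
                delta=0.7, iters=200):
    # Step 0 requires delta > (sqrt(1 + 4*(alpha/(2-theta) + 1 - alpha)) - 1)/2
    assert delta > (math.sqrt(1 + 4 * (alpha / (2 - theta) + 1 - alpha)) - 1) / 2
    for _ in range(iters):
        x = (1 - delta) * y + delta * x_prev           # step (7)
        y_next = proj(x / (1 + 2 * lam))               # step (8), closed form
        den = f(y_prev, y_next) - f(y_prev, y) - f(y, y_next)
        if den > 0:                                    # step-size rule (9)
            num = alpha * mu * theta * ((y - y_prev) ** 2 + (y_next - y) ** 2)
            lam = min(num / (4 * delta * den), lam)
        x_prev, y_prev, y = x, y, y_next
    return y

print(algorithm31(x_prev=1.5, y_prev=1.0, y=1.2))  # -> close to 0
```

For bifunctions with a genuinely positive combination in (9), the adaptive rule gradually shrinks \(\lambda _n\) toward the lower bound of Lemma 3.1 without any Lipschitz constant being supplied.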

Remark 3.1

Under hypotheses (C1) and (C3), from Lemma 2.1 and Remark 2.1, we obtain that if Algorithm 3.1 terminates at some iterate, i.e., \(y_{n+1} = x_n = y_n\), then \(y_n\in EP(f).\)

Lemma 3.1

The sequence \( \{\lambda _n\} \) generated by Algorithm 3.1 is a monotonically decreasing sequence with lower bound \( \min \{{\frac{\alpha \mu \theta }{4\delta \max \{c_1,c_2\}},\lambda _1 }\}\).

Proof

It is easily checked that \( \{\lambda _n\} \) is a monotonically decreasing sequence. Since f is a Lipschitz-type bifunction with constants \(c_1\) and \(c_2\), in the case of \(f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1})>0,\) we have

$$\begin{aligned}&\frac{\alpha \mu \theta (\Vert y_n-y_{n-1}\Vert ^2+\Vert y_{n+1}-y_n\Vert ^2)}{4\delta (f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1}))}\nonumber \\&\quad \ge \frac{\alpha \mu \theta (\Vert y_n-y_{n-1}\Vert ^2+\Vert y_{n+1}-y_n\Vert ^2)}{4\delta (c_1\Vert y_{n-1}-y_n\Vert ^2+c_2\Vert y_{n+1}-y_n\Vert ^2)} \nonumber \\&\quad \ge \frac{\alpha \mu \theta (\Vert y_n-y_{n-1}\Vert ^2+\Vert y_{n+1}-y_n\Vert ^2)}{4\delta \max \{c_1,c_2\}(\Vert y_{n-1}-y_n\Vert ^2+\Vert y_{n+1}-y_n\Vert ^2)} \nonumber \\&\quad =\frac{\alpha \mu \theta }{4\delta \max \{c_1,c_2\}} . \end{aligned}$$
(10)

It is clear that the sequence \( \{\lambda _n\} \) has the lower bound \( \min \{{\frac{\alpha \mu \theta }{4\delta \max \{c_1,c_2\}},\lambda _1 }\}\). \(\square \)

Remark 3.2

The limit of \( \{\lambda _n\} \) exists and we denote \(\lambda =\lim \nolimits _{n\rightarrow \infty }\lambda _n.\) It is obvious that \(\lambda >0\). If \(\lambda _1\le \frac{\alpha \mu \theta }{4\delta \max \{c_1,c_2\}}, \) then \(\{\lambda _n\}\) is a constant sequence. The following lemma plays a crucial role in the proof of Theorem 3.1.

Lemma 3.2

Assume that conditions (C1), (C2) and (C3) hold. Let \(\{x_n\}\) and \(\{y_n\}\) be the sequences generated by Algorithm 3.1 and let \(EP(f)\ne \emptyset \). Then \(\{x_n\}\) and \(\{y_n\}\) are bounded.

Proof

Since \(y_{n+1}=prox_{\lambda _n f(y_n,.)}(x_n),\) by Lemma 2.1 we get

$$\begin{aligned} \lambda _n (f(y_n,y)-f(y_n,y_{n+1}))\ge \langle x_n-y_{n+1},y-y_{n+1}\rangle ,\ \forall y\in C. \end{aligned}$$
(11)

Let \(u\in EP(f).\) Substituting \(y=u\) into the last inequality, we have

$$\begin{aligned} \lambda _n (f(y_n,u)-f(y_n,y_{n+1}))\ge \langle x_n-y_{n+1},u-y_{n+1}\rangle . \end{aligned}$$
(12)

As \(u\in EP(f),\) we obtain \(f(u,y_n)\ge 0.\) Thus \(f(y_n,u)\le 0\) because of the pseudomonotonicity of f. Hence, from (12) and \(\lambda _n>0\), we obtain

$$\begin{aligned} -\lambda _nf(y_n,y_{n+1})\ge \langle x_n-y_{n+1},u-y_{n+1}\rangle . \end{aligned}$$
(13)

Since \(y_{n}=prox_{\lambda _{n-1} f(y_{n-1},.)}(x_{n-1}),\) we get

$$\begin{aligned} \lambda _{n-1} (f(y_{n-1},y)-f(y_{n-1},y_{n}))\ge \langle x_{n-1}-y_{n},y-y_{n}\rangle ,\ \forall y\in C. \end{aligned}$$
(14)

In particular, substituting \(y=y_{n+1}\) into the last inequality, we have

$$\begin{aligned} \lambda _{n-1}(f(y_{n-1},y_{n+1})-f(y_{n-1},y_n))\ge \langle x_{n-1}-y_n, y_{n+1}-y_n\rangle . \end{aligned}$$
(15)

Since \(x_{n}=(1-\delta )y_n+\delta x_{n-1}\), we obtain \(y_n=\frac{1}{1-\delta }x_{n}-\frac{\delta }{1-\delta } x_{n-1} \). Hence,

$$\begin{aligned} y_n-x_{n}=\frac{\delta }{1-\delta } (x_n-x_{n-1})=\delta (y_n-x_{n-1}). \end{aligned}$$
(16)

Combining (15), (16) and \(\lambda _{n}>0\), we have

$$\begin{aligned} \lambda _{n}(f(y_{n-1},y_{n+1})-f(y_{n-1},y_n))\ge \frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }\langle x_{n}-y_n, y_{n+1}-y_n\rangle . \end{aligned}$$
(17)

Adding (13) and (17), we get

$$\begin{aligned}&2\lambda _n(f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1})) \ \ \nonumber \\&\quad \ge 2\langle x_n-y_{n+1},u-y_{n+1}\rangle +2\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }\langle x_{n}-y_n, y_{n+1}-y_n\rangle \nonumber \\&\quad =\Vert x_n-y_{n+1}\Vert ^2+\Vert y_{n+1}-u\Vert ^2-\Vert x_n-u\Vert ^2 \ \ \ \ \ \ \ \ \ \ \ \nonumber \\&\qquad +\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }(\Vert x_n-y_{n}\Vert ^2+\Vert y_{n+1}-y_{n}\Vert ^2-\Vert x_{n}-y_{n+1}\Vert ^2). \end{aligned}$$
(18)

That is

$$\begin{aligned}&\Vert y_{n+1}-u\Vert ^2\le \Vert x_n-u\Vert ^2-\Vert x_n-y_{n+1}\Vert ^2-\frac{\lambda _{n}}{\lambda _{n-1}} \frac{1}{\delta }(\Vert x_n-y_{n}\Vert ^2+\Vert y_{n+1}\nonumber \\&\qquad -y_{n}\Vert ^2-\Vert x_{n}-y_{n+1}\Vert ^2)+ 2\lambda _n(f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1})) \ \nonumber \\&\quad =\Vert x_n-u\Vert ^2+\left( \frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }-1\right) \Vert x_n-y_{n+1}\Vert ^2-\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }(\Vert x_n-y_{n}\Vert ^2+\Vert y_{n+1}-y_{n}\Vert ^2) \nonumber \\&\qquad + 2\lambda _{n+1}\frac{\lambda _{n}}{\lambda _{n+1}}(f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1})). \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \end{aligned}$$
(19)

By definition of \(\lambda _n\) and (19), we obtain

$$\begin{aligned}&\Vert y_{n+1}-u\Vert ^2 \le \Vert x_n-u\Vert ^2+\left( \frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }-1\right) \Vert x_n-y_{n+1}\Vert ^2\nonumber \\&\quad -\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }(\Vert x_n-y_{n}\Vert ^2+\Vert y_{n+1}-y_{n}\Vert ^2) \nonumber \\&\quad + \frac{1}{2}\mu \frac{\lambda _{n}}{\lambda _{n+1}}\frac{1}{\delta } \alpha \theta (\Vert y_n-y_{n-1}\Vert ^2+\Vert y_n-y_{n+1}\Vert ^2). \end{aligned}$$
(20)

In deriving the last inequality, for the case \(f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1})\le 0,\) we used the obvious estimate

$$\begin{aligned}&2\lambda _{n+1}\frac{\lambda _{n}}{\lambda _{n+1}}(f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1})) \\&\quad \le 0\le \frac{1}{2}\mu \frac{\lambda _{n}}{\lambda _{n+1}}\frac{1}{\delta }\alpha \theta (\Vert y_n-y_{n-1}\Vert ^2+\Vert y_n-y_{n+1}\Vert ^2). \end{aligned}$$

Since

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\frac{\lambda _{n}}{\lambda _{n-1}}=1>\alpha , \ \ \ \lim \limits _{n\rightarrow \infty }\lambda _n\frac{\mu }{\lambda _{n+1}} =\mu , \ \ 0<\mu <1. \end{aligned}$$
(21)

we have that there exists \(N\ge 0\) such that, for all \(n\ge N,\) \(\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }-1>0 \), \(0<\lambda _n\frac{\mu }{\lambda _{n+1}}<1\) and \(\alpha <\frac{\lambda _n}{\lambda _{n-1}}\).

From the relation \(y_{n+1}=\frac{1}{1-\delta }x_{n+1}-\frac{\delta }{1-\delta } x_{n} \), by Lemma 2.2 and (16), we have

$$\begin{aligned} \Vert y_{n+1}-u\Vert ^2&=\Vert \frac{1}{1-\delta }(x_{n+1}-u)-\frac{\delta }{1-\delta }(x_{n}-u)\Vert ^2 \nonumber \\&=\frac{1}{1-\delta }\Vert x_{n+1}-u\Vert ^2-\frac{\delta }{1-\delta }\Vert x_n-u\Vert ^2+\frac{1}{1-\delta }\frac{\delta }{1-\delta }\Vert x_{n+1}-x_n\Vert ^2\nonumber \\&=\frac{1}{1-\delta }\Vert x_{n+1}-u\Vert ^2-\frac{\delta }{1-\delta }\Vert x_n-u\Vert ^2+\delta \Vert y_{n+1}-x_n\Vert ^2. \end{aligned}$$
(22)

Also, from \( \frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }-1\le \frac{\lambda _{n-1}}{\lambda _{n-1}}\frac{1}{\delta }-1= \frac{1}{\delta }-1\), (20), (21) and (22), it follows that, for all \(n\ge N,\)

$$\begin{aligned}&\frac{1}{1-\delta }\Vert x_{n+1}-u\Vert ^2-\frac{\delta }{1-\delta }\Vert x_n-u\Vert ^2+\delta \Vert y_{n+1}-x_n\Vert ^2 \nonumber \\&\quad \le \Vert x_n-u\Vert ^2+\left( \frac{1}{\delta }-1\right) \Vert x_{n}-y_{n+1}\Vert ^2 -\frac{\alpha }{\delta }(\Vert x_n-y_{n}\Vert ^2+\Vert y_{n+1}-y_{n}\Vert ^2)\nonumber \\&\quad +\frac{\alpha }{2\delta }\theta (\Vert y_{n}-y_{n-1}\Vert ^2+\Vert y_{n}-y_{n+1}\Vert ^2). \end{aligned}$$
(23)

Thus,

$$\begin{aligned}&\frac{1}{1-\delta }\Vert x_{n+1}-u\Vert ^2+\frac{\alpha \theta }{2\delta }\Vert y_{n+1}-y_{n}\Vert ^2 \nonumber \\&\quad \le \frac{1}{1-\delta }\Vert x_n-u\Vert ^2+\frac{\alpha \theta }{2\delta }\Vert y_{n}-y_{n-1}\Vert ^2 +\left( \frac{\theta \alpha }{\delta }-\frac{\alpha }{\delta }\right) \Vert y_{n+1}-y_{n}\Vert ^2 \nonumber \\&\qquad +\left( \frac{1}{\delta }-1-\delta \right) \Vert y_{n+1}-x_{n}\Vert ^2-\frac{\alpha }{\delta }\Vert y_{n}-x_{n}\Vert ^2 \nonumber \\&\quad =\frac{1}{1-\delta }\Vert x_n-u\Vert ^2+\frac{\alpha \theta }{2\delta }\Vert y_{n}-y_{n-1}\Vert ^2\nonumber \\&\qquad +\frac{(\theta -1)\alpha }{\delta } \Vert y_{n+1}-y_{n}\Vert ^2+\left( \frac{1}{\delta }-1-\delta \right) \Vert y_{n+1}-x_{n}\Vert ^2 \nonumber \\&\qquad -\frac{\alpha }{\delta }(\Vert x_{n}-y_{n+1}\Vert ^2+\Vert y_{n}-y_{n+1}\Vert ^2+2\langle x_{n}-y_{n+1},y_{n+1}-y_{n}\rangle ). \end{aligned}$$
(24)

For \(n\ge N,\) let

$$\begin{aligned} a_n&=\frac{1}{1-\delta }\Vert x_n-u\Vert ^2+\frac{\alpha \theta }{2\delta }\Vert y_{n}-y_{n-1}\Vert ^2,\nonumber \\ \eta&=\frac{1}{2-\theta }. \end{aligned}$$
(25)

Combining (24), (25) and \(-2\langle x_{n}-y_{n+1},y_{n+1}-y_{n}\rangle \le \eta \Vert x_{n}-y_{n+1}\Vert ^2+\frac{1}{\eta }\Vert y_{n}-y_{n+1}\Vert ^2 \), we have

$$\begin{aligned} a_{n+1}&\le a_n +\left( \frac{(\theta -1)\alpha }{\delta }-\frac{\alpha }{\delta }+ \frac{\alpha }{\delta }\frac{1}{\eta }\right) \Vert y_{n+1}-y_{n}\Vert ^2+\left( \frac{1}{\delta }-1-\delta - \frac{\alpha }{\delta }\right. \nonumber \\&\qquad \left. +\frac{\alpha \eta }{\delta }\right) \Vert y_{n+1}-x_{n}\Vert ^2\nonumber \\&\quad =a_n +\left( \frac{1}{\delta }-1-\delta -\frac{\alpha }{\delta }+\frac{\alpha \eta }{\delta }\right) \Vert y_{n+1}-x_{n}\Vert ^2. \end{aligned}$$
(26)

Since \(\delta \in (\frac{\sqrt{1+4(\frac{\alpha }{2-\theta }+1-\alpha )}-1}{2},1),\) we obtain \(\frac{1}{\delta }-1-\delta -\frac{\alpha }{\delta }+\frac{\alpha \eta }{\delta }<0.\)
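Indeed, multiplying the sign condition by \(\delta >0\) turns it into a quadratic inequality in \(\delta \):

$$\begin{aligned} \frac{1}{\delta }-1-\delta -\frac{\alpha }{\delta }+\frac{\alpha \eta }{\delta }<0 \Longleftrightarrow \delta ^2+\delta >1-\alpha +\frac{\alpha }{2-\theta }, \end{aligned}$$

and the positive root of \(t^2+t-\left( 1-\alpha +\frac{\alpha }{2-\theta }\right) =0\) is exactly \(\frac{\sqrt{1+4(\frac{\alpha }{2-\theta }+1-\alpha )}-1}{2}\), so the inequality holds for every \(\delta \) above this root.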

For \(n\ge N,\) let

$$\begin{aligned} b_n=-\left( \frac{1}{\delta }-1-\delta -\frac{\alpha }{\delta }+ \frac{\alpha \eta }{\delta }\right) \Vert y_{n+1}-x_{n}\Vert ^2. \end{aligned}$$
(27)

Then (26) can be written as \(a_{n+1}\le a_n-b_n\), \(\forall n\ge N.\) From Lemma 2.3, we conclude that \(\{a_n\}\) is bounded, the limit of \(\{a_n\}\) exists and \(\lim \nolimits _{n\rightarrow \infty }b_n=0.\) By the definition of \(b_n\), it follows that \(\lim \nolimits _{n\rightarrow \infty }\Vert y_{n+1}-x_{n}\Vert =0.\) From the relation (16), \(\Vert y_n-y_{n-1}\Vert \le \Vert y_n-x_{n}\Vert +\Vert x_n-y_{n-1}\Vert \) and \(\Vert x_n-y_{n-1}\Vert \le \Vert x_n-x_{n-1}\Vert +\Vert x_{n-1}-y_{n-1}\Vert \), we get

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\Vert y_n-x_{n}\Vert =\lim \limits _{n\rightarrow \infty }\Vert x_n-x_{n-1}\Vert =\lim \limits _{n\rightarrow \infty }\Vert y_n-y_{n-1}\Vert =\lim \limits _{n\rightarrow \infty }\Vert y_{n+1}-x_{n}\Vert =0. \end{aligned}$$
(28)

Also, we obtain \(\lim \nolimits _{n\rightarrow \infty }a_n=\lim \nolimits _{n\rightarrow \infty }\frac{1}{1-\delta }\Vert x_n-u\Vert ^2. \) This implies that the sequence \(\{x_n\}\) is bounded and so \(\{y_n\}\) is bounded. That is the desired result. \(\square \)

Theorem 3.1

Assume that \((C1){-}(C4)\) and \( EP(f)\ne \emptyset \) hold. Then the sequences \(\{x_n\}\) and \(\{y_n\}\) generated by Algorithm 3.1 converge weakly to a solution of the equilibrium problem.

Proof

By Lemma 3.2, the sequence \(\{x_n\}\) is bounded and there exists a subsequence \(\{x_{n_k}\}\) that converges weakly to some \(x^*\in H\). Then \(y_{n_k}\rightharpoonup x^* \), \(y_{{n_k}+1}\rightharpoonup x^* \) and \(x^*\in C.\) From the relation (11), we have

$$\begin{aligned} \lambda _{n_k} (f(y_{n_k},y)-f(y_{n_k},y_{{n_k}+1}))\ge \langle x_{n_k}-y_{{n_k}+1},y-y_{{n_k}+1}\rangle ,\ \forall y\in C. \end{aligned}$$
(29)

Since f satisfies the Lipschitz-type condition on C, we have

$$\begin{aligned} \lambda _{n_k}(f(y_{n_k},y_{{n_k}+1}))\ge & {} \lambda _{n_k}(f(y_{{n_k}-1},y_{{n_k}+1})\nonumber \\&-f(y_{{n_k}-1},y_{n_k}))-\lambda _{n_k}c_1\Vert y_{n_k}-y_{{n_k}-1}\Vert ^2\nonumber \\&-\lambda _{n_k}c_2\Vert y_{n_k}-y_{{n_k}+1}\Vert ^2. \end{aligned}$$
(30)

From the relations (17) and (30), it follows that

$$\begin{aligned} \lambda _{n_k}(f(y_{n_k},y_{{n_k}+1}))\ge & {} \frac{\lambda _{n_k}}{\lambda _{{n_k}-1}}\frac{1}{\delta }\langle x_{n_k}-y_{n_k}, y_{{n_k}+1}\nonumber \\&-y_{n_k}\rangle -\lambda _{n_k}c_1\Vert y_{n_k}-y_{{n_k}-1}\Vert ^2\nonumber \\&-\lambda _{n_k}c_2\Vert y_{n_k}-y_{{n_k}+1}\Vert ^2. \end{aligned}$$
(31)

Combining the relations (29) and (31), it follows that, for all \(y \in C\),

$$\begin{aligned} f(y_{n_k},y)\ge&\, \frac{1}{\lambda _{{n_k}-1}}\frac{1}{\delta }\langle x_{n_k}-y_{n_k}, y_{{n_k}+1}-y_{n_k}\rangle +\frac{1}{\lambda _{n_k}}\langle x_{n_k}-y_{{n_k}+1},y-y_{{n_k}+1}\rangle \nonumber \\&\quad -c_1\Vert y_{n_k}-y_{{n_k}-1}\Vert ^2-c_2\Vert y_{n_k}-y_{{n_k}+1}\Vert ^2. \end{aligned}$$
(32)

Letting \(k\rightarrow \infty \) and using the facts that \(\lim \nolimits _{k\rightarrow \infty }\Vert y_{n_k}-x_{n_k}\Vert =\lim \nolimits _{k\rightarrow \infty }\Vert x_{n_k}-y_{{n_k}+1}\Vert =\lim \nolimits _{k\rightarrow \infty }\Vert y_{n_k}-y_{{n_k}-1}\Vert =0\), that \(\{x_{n}\}\) is bounded, that \(\lim \nolimits _{n\rightarrow \infty }\lambda _{n}=\lambda >0\) and the hypothesis (C4), we obtain \(f(x^*,y)\ge 0, \ \ \forall y\in C.\) That is, \(x^*\in EP(f)\). Next we prove that \(x_n\rightharpoonup x^*\). Assume that \(\{x_n\}\) has at least two weak cluster points \(x^*\in EP(f)\) and \({\bar{x}}\in EP(f)\) such that \(x^*\ne {\bar{x}}\). Let \(\{x_{n_i}\}\) be a sequence such that \(x_{n_i}\rightharpoonup {\bar{x}}\) as \(i\rightarrow \infty \), noting the fact that, \( \forall u\in EP(f)\),

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\Vert x_n-u \Vert =\lim \limits _{n\rightarrow \infty }\sqrt{(1-\delta )a_n}. \end{aligned}$$
(33)

By Lemma 2.4, we get

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\Vert x_n-{\bar{x}}\Vert =&\, \lim \limits _{i\rightarrow \infty }\Vert x_{n_i}-{\bar{x}}\Vert =\liminf \limits _{i\rightarrow \infty }\Vert x_{n_i}-{\bar{x}}\Vert<\liminf \limits _{i\rightarrow \infty }\Vert x_{n_i}-x^*\Vert \ \nonumber \\ =&\,\lim \limits _{n\rightarrow \infty }\Vert x_n-x^*\Vert =\lim \limits _{k\rightarrow \infty }\Vert x_{n_k}-x^*\Vert =\liminf \limits _{k\rightarrow \infty }\Vert x_{n_k}-x^*\Vert \nonumber \\ <&\,\liminf \limits _{k\rightarrow \infty }\Vert x_{n_k}-{\bar{x}}\Vert =\lim \limits _{k\rightarrow \infty }\Vert x_{n_k}-{\bar{x}}\Vert =\lim \limits _{n\rightarrow \infty }\Vert x_n-{\bar{x}}\Vert . \ \ \end{aligned}$$
(34)

This is impossible. Hence we deduce that \(x_n\rightharpoonup x^*\). Since \(\lim \nolimits _{n\rightarrow \infty }\Vert x_n-y_n\Vert =0\), we have \(y_n\rightharpoonup x^*\). That is the desired result. \(\square \)

Next, we prove that Algorithm 3.1 converges strongly to the solution of (1) under a strong pseudomonotonicity assumption on the bifunction f.

Theorem 3.2

Assume that \((C1')\), (C2), (C3) and \( EP(f)\ne \emptyset \) hold. Then the sequences \(\{x_n\}\) and \(\{y_n\}\) generated by Algorithm 3.1 converge strongly to the unique solution u of the equilibrium problem.

Proof

The strong pseudomonotonicity assumption on the bifunction f implies that (1) has a unique solution, which we denote by u. Since \(y_n\in C\), we have \(f(u,y_n)\ge 0. \) As f is strongly pseudomonotone, we get \(f(y_n,u)\le -\gamma \Vert y_n-u\Vert ^2. \) Hence, from (12) and \(\lambda _n>0\), we have

$$\begin{aligned} -\lambda _nf(y_n,y_{n+1})\ge \langle x_n-y_{n+1},u-y_{n+1}\rangle +\lambda _n\gamma \Vert y_n-u\Vert ^2. \end{aligned}$$
(35)

Adding (35) and (17), we obtain

$$\begin{aligned}&2\lambda _n(f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1})) \ \ \nonumber \\&\quad \ge 2\langle x_n-y_{n+1},u-y_{n+1}\rangle +2\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }\langle x_{n}-y_n, y_{n+1}-y_n\rangle +2\lambda _n\gamma \Vert y_n-u\Vert ^2 \nonumber \\&\quad =\Vert x_n-y_{n+1}\Vert ^2+\Vert y_{n+1}-u\Vert ^2-\Vert x_n-u\Vert ^2 +2\lambda _n\gamma \Vert y_n-u\Vert ^2 \nonumber \\&\quad +\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }(\Vert x_n-y_{n}\Vert ^2+\Vert y_{n+1}-y_{n}\Vert ^2-\Vert x_{n}-y_{n+1}\Vert ^2). \end{aligned}$$
(36)

Moreover, by Lemma 3.1, Remark 3.2 and (36), we also have

$$\begin{aligned}&\Vert y_{n+1}-u\Vert ^2\le \Vert x_n-u\Vert ^2-\Vert x_n-y_{n+1}\Vert ^2\nonumber \\&\quad -\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta } (\Vert x_n-y_{n}\Vert ^2+\Vert y_{n+1}-y_{n}\Vert ^2-\Vert x_{n}-y_{n+1}\Vert ^2) \nonumber \\&\quad + 2\lambda _n(f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1}))-2\lambda _n\gamma \Vert y_n-u\Vert ^2 \nonumber \\&\quad \le \Vert x_n-u\Vert ^2+\left( \frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }-1\right) \Vert x_n-y_{n+1}\Vert ^2-\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }(\Vert x_n-y_{n}\Vert ^2+\Vert y_{n+1}-y_{n}\Vert ^2) \nonumber \\&\quad + 2\lambda _{n+1}\frac{\lambda _{n}}{\lambda _{n+1}}(f(y_{n-1},y_{n+1})-f(y_{n-1},y_{n})-f(y_n,y_{n+1}))-2\lambda \gamma \Vert y_n-u\Vert ^2. \end{aligned}$$
(37)

By definition of \(\lambda _n\) and (37), we obtain

$$\begin{aligned}&\Vert y_{n+1}-u\Vert ^2 \le \Vert x_n-u\Vert ^2+\left( \frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }-1\right) \Vert x_n-y_{n+1}\Vert ^2\nonumber \\&\quad -\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\delta }(\Vert x_n-y_{n}\Vert ^2 +\Vert y_{n+1}-y_{n}\Vert ^2) \nonumber \\&\quad + \frac{1}{2}\mu \frac{\lambda _{n}}{\lambda _{n+1}}\frac{1}{\delta }\alpha \theta (\Vert y_n-y_{n-1}\Vert ^2+\Vert y_n-y_{n+1}\Vert ^2)-2\lambda \gamma \Vert y_n-u\Vert ^2. \end{aligned}$$
(38)

Using (38) and the same techniques as in the proof of (21)–(24), there exists \(N\ge 0\) such that, for all \(n\ge N,\)

$$\begin{aligned}&\frac{1}{1-\delta }\Vert x_{n+1}-u\Vert ^2+\frac{\alpha \theta }{2\delta } \Vert y_{n+1}-y_{n}\Vert ^2 \nonumber \\&\quad \le \frac{1}{1-\delta }\Vert x_n-u\Vert ^2 +\frac{\alpha \theta }{2\delta }\Vert y_{n}-y_{n-1}\Vert ^2 \nonumber \\&\quad +\frac{(\theta -1)\alpha }{\delta } \Vert y_{n+1}-y_{n}\Vert ^2+\left( \frac{1}{\delta }-1-\delta \right) \Vert y_{n+1}-x_{n}\Vert ^2 \nonumber \\&\quad -\frac{\alpha }{\delta }(\Vert x_{n}-y_{n+1}\Vert ^2+\Vert y_{n}-y_{n+1} \Vert ^2+2\langle x_{n}-y_{n+1},y_{n+1}-y_{n}\rangle )-2\lambda \gamma \Vert y_n-u\Vert ^2. \end{aligned}$$
(39)

For \(n\ge N,\) let

$$\begin{aligned} a_n&=\frac{1}{1-\delta }\Vert x_n-u\Vert ^2 +\frac{\alpha \theta }{2\delta }\Vert y_{n}-y_{n-1}\Vert ^2, \ \ \ \ \ \ \eta =\frac{1}{2-\theta },\nonumber \\ b_n&=-\left( \frac{1}{\delta }-1- \delta -\frac{\alpha }{\delta }+ \frac{\alpha \eta }{\delta }\right) \Vert y_{n+1}-x_{n}\Vert ^2+2\lambda \gamma \Vert y_n-u\Vert ^2. \end{aligned}$$
(40)

Using (40) and the same argument as in the proof of (26), we obtain \(a_{n+1}\le a_n-b_n\), \(\forall n\ge N.\) From Lemma 2.3 and the definition of \(b_n\), we can conclude that \(\{a_n\}\) is bounded, \(\lim \nolimits _{n\rightarrow \infty }b_n=0\). Using \(\frac{1}{\delta }-1-\delta -\frac{\alpha }{\delta }+\frac{\alpha \eta }{\delta }<0\) and (28), we have

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\Vert x_n-u\Vert =\lim \limits _{n\rightarrow \infty }\Vert y_n-u\Vert =\lim \limits _{n\rightarrow \infty }b_n=0. \end{aligned}$$
(41)

The proof is complete. \(\square \)

4 The case of variational inequalities

Let \(f(x, y)=\langle F(x),y-x\rangle , \ \forall x, y\in C,\) where \(F : C \rightarrow H\) is a mapping. Then the equilibrium problem becomes the variational inequality: find \(x^*\in C\) such that

$$\begin{aligned} \langle F(x^*),y-x^*\rangle \ge 0, \ \ \forall y\in C. \end{aligned}$$
(42)

Moreover, we have \(prox_{\lambda _n f(y_n,.)}(x_n)=P_C(x_{n}-\lambda _n F(y_n)). \) The solution set of (42) is denoted by \(VI(F,C)\). It is well known that \(x^*\in VI(F,C)\) if and only if it satisfies the following projection equation

$$\begin{aligned} x^*=P_C(x^*-\lambda F(x^*)), \end{aligned}$$
(43)

where \(\lambda \) is any positive real number. For solving the pseudomonotone variational inequality, we propose the following method.

Algorithm 4.1

(Step 0) Choose \(\lambda _1>0\), \(x_{0}, y_0, y_1\in C,\)\(\mu \in (0, 1) \), \(\alpha \in (0, 1) \), \(\theta \in (0, 1] \), \(\delta \in (\frac{\sqrt{1+4(\frac{\alpha }{2-\theta }+1-\alpha )}-1}{2},1).\)

(Step 1) Given the current iterate \(x_{n-1}\), \(y_{n-1}\), \(y_n\), compute

$$\begin{aligned} x_{n}=(1-\delta )y_n+\delta x_{n-1}. \\ y_{n+1}= P_C(x_{n}-\lambda _n F(y_n)). \end{aligned}$$

If \(y_{n+1} = x_n = y_n\) (or \(F(y_n)=0\)), then stop: \(y_n \) is a solution. Otherwise, go to step 2.

(Step 2) Compute

$$\begin{aligned} \ \lambda _{n+1}= {\left\{ \begin{array}{ll} min\{{\frac{\alpha \mu \theta (\Vert y_n-y_{n-1}\Vert ^2+\Vert y_{n+1}-y_n\Vert ^2)}{4\delta \langle F(y_{n-1})-F(y_n),y_{n+1}-y_n \rangle },\lambda _n }\} , &{} if\ \ \langle F(y_{n-1})-F(y_n),y_{n+1}-y_n \rangle >0, \\ \lambda _n,&{} otherwise. \\ \end{array}\right. } \end{aligned}$$

Set \(n := n + 1\) and return to step 1.
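As a sketch under illustrative assumptions (the operator \(F(x)=x-0.5\), the set \(C=[0,2]\) and the parameter values are our own choices, not from the paper), Algorithm 4.1 can be implemented in a few lines; the corresponding variational inequality has the unique solution \(x^*=0.5\):

```python
# Sketch of Algorithm 4.1 on an illustrative instance: F(x) = x - 0.5
# (monotone and 1-Lipschitz) over C = [0, 2], with unique solution x* = 0.5.
# The parameter values below satisfy the requirements of Step 0.

def F(x):
    return x - 0.5

def proj(x, lo=0.0, hi=2.0):
    return min(max(x, lo), hi)

def algorithm41(x_prev, y_prev, y, lam=0.5, mu=0.5, alpha=0.5, theta=1.0,
                delta=0.7, iters=1000):
    for _ in range(iters):
        x = (1 - delta) * y + delta * x_prev   # Step 1, averaging
        y_next = proj(x - lam * F(y))          # Step 1, projection
        # Step 2: <F(y_{n-1}) - F(y_n), y_{n+1} - y_n>, here in one dimension
        den = (F(y_prev) - F(y)) * (y_next - y)
        if den > 0:
            num = alpha * mu * theta * ((y - y_prev) ** 2 + (y_next - y) ** 2)
            lam = min(num / (4 * delta * den), lam)
        x_prev, y_prev, y = x, y, y_next
    return y

print(algorithm41(x_prev=2.0, y_prev=1.5, y=1.8))  # -> close to 0.5
```

No Lipschitz constant is supplied: the step size adapts itself, while staying bounded away from zero as guaranteed by Lemma 3.1.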

Remark 4.1

If \(F(y_n)=0\), we have \(y_{n}=P_C(y_{n}-\lambda F(y_n))\). Thus \(y_n\in VI(F,C)\) follows directly from (43).
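In practice, the fixed point characterization (43) also yields a computable stopping criterion: the natural residual \(r_\lambda (x)=\Vert x-P_C(x-\lambda F(x))\Vert \) vanishes exactly at solutions of (42). A minimal sketch (the operator and set are illustrative choices, not from the paper):

```python
# The natural residual r(x) = ||x - P_C(x - lam*F(x))||, which by (43)
# is zero exactly at solutions of (42); useful as a stopping criterion.
# F(x) = x - 0.5 and C = [0, 2] are illustrative choices.

def natural_residual(x, F, proj, lam=1.0):
    return abs(x - proj(x - lam * F(x)))

F = lambda x: x - 0.5
proj = lambda x: min(max(x, 0.0), 2.0)

print(natural_residual(0.5, F, proj))  # 0.0 at the solution x* = 0.5
print(natural_residual(1.0, F, proj))  # 0.5, i.e. positive away from it
```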

Recall that a mapping F is Lipschitz continuous on C if there exists a constant \(L>0\) such that

$$\begin{aligned} \parallel F(x)-F(y) \parallel \le L\parallel x-y \parallel ,\ \ \forall x, y\in C. \end{aligned}$$
(44)

If F is Lipschitz-continuous and pseudomonotone, then the conditions \((C1){-}(C3)\) hold for f with \(c_2=c_1=\frac{L}{2}\). Then the following conclusion follows from Lemma 3.2.

Lemma 4.1

Let \(\{x_n\}\) and \(\{y_n\}\) be sequences generated by Algorithm 4.1 and \(VI(F,C)\ne \emptyset \). Then \(\{x_n\}\) and \(\{y_n\}\) are bounded.

The next statement is classical.

Lemma 4.2

[20] Assume that \(F:C\rightarrow H\) is a continuous and pseudomonotone mapping. Then \(x^*\in VI(F,C)\) if and only if \(x^*\) is a solution of the following problem:

$$\begin{aligned} find \ \ \ \ \ x\in C \ \ \ \ s.t. \ \ \langle F(y),y-x\rangle \geqslant 0, \ \ \forall y\in C. \end{aligned}$$

We analyze the finite- and infinite-dimensional cases separately.

Theorem 4.1

Let H be a finite dimensional real Hilbert space. Assume that F is a pseudomonotone and Lipschitz continuous mapping on C and that \(VI(F,C)\) is nonempty. Let \(\{x_n\}\) and \(\{y_n\}\) be two sequences generated by Algorithm 4.1. Then \(\{x_n\}\) and \(\{y_n\}\) converge to the same point \(x^*\in VI(F,C)\).

Proof

Since the sequence \(\{x_n\}\) is bounded, there exists a subsequence \(\{x_{n_k}\}\) that converges to some \(x^*\in H\). From the relation (28), we have \(y_{n_k}\rightarrow x^* \), \(y_{{n_k}+1}\rightarrow x^* \) and \(x^*\in C.\) Note that

$$\begin{aligned} y_{{n_k}+1}=P_C(x_{n_k}-\lambda _{n_k} F(y_{n_k})). \end{aligned}$$
(45)

By the continuity of F and the projection, we get

$$\begin{aligned} x^*=\lim \limits _{k\rightarrow \infty }y_{{n_k}+1}=\lim \limits _{k\rightarrow \infty }P_C(x_{n_k}-\lambda _{n_k} F(y_{n_k}))=P_C(x^*-\lambda F(x^*)). \end{aligned}$$
(46)

We deduce from (43) that \(x^*\in VI(F,C)\). By using (33), we obtain that \(\lim \nolimits _{n\rightarrow \infty }\Vert x_n-x^*\Vert \) exists. Combining \(\lim \nolimits _{k\rightarrow \infty }x_{n_k}=x^*\) and \(\lim \nolimits _{n\rightarrow \infty }\Vert x_n-y_n\Vert =0\), we have \(\lim \nolimits _{n\rightarrow \infty }x_n=\lim \nolimits _{n\rightarrow \infty }y_n=x^*\). That is the desired result. \(\square \)

Inspired by [21], we give the proof of the following theorem.

Theorem 4.2

Assume that F is pseudomonotone on an infinite-dimensional Hilbert space H, sequentially weakly continuous and Lipschitz continuous on C, and that \(VI(F,C)\) is nonempty. Let \(\{x_n\}\) and \(\{y_n\}\) be two sequences generated by Algorithm 4.1. Then \(\{x_n\}\) and \(\{y_n\}\) converge weakly to the same point \(x^*\in VI(F,C)\).

Proof

From Lemma 4.1, the sequences \(\{x_n\}\) and \(\{y_n\}\) are bounded. Hence there exists a subsequence \(\{x_{n_k}\}\) of \(\{x_{n}\}\) that converges weakly to some \(x^*\in H\). Then \(y_{n_k}\rightharpoonup x^* \) and \(x^*\in C.\) Next we prove \(x^*\in VI(F,C)\). Since \(y_{n_k+1}=P_C(x_{n_k}-\lambda _{n_k} F(y_{n_k}))\), by Lemma 2.5, we have

$$\begin{aligned} \langle y_{n_k+1}-x_{n_k}+\lambda _{n_k}F(y_{n_k}), z-y_{n_k+1}\rangle \ge 0, \ \ \ \forall z\in C. \end{aligned}$$
(47)

That is,

$$\begin{aligned} \langle x_{n_k}-y_{n_k+1}, z-y_{n_k+1}\rangle \le \lambda _{n_k}\langle F(y_{n_k}), z-y_{n_k+1}\rangle , \ \ \ \forall z\in C. \end{aligned}$$
(48)

Therefore, we get

$$\begin{aligned} \frac{1}{\lambda _{n_k}}\langle x_{n_k}-y_{n_k+1}, z-y_{n_k+1}\rangle + \langle F(y_{n_k}), y_{n_k+1}-y_{n_k}\rangle \le \langle F(y_{n_k}), z-y_{n_k}\rangle , \ \ \ \forall z\in C. \end{aligned}$$
(49)

Fix \(z\in C\) and let \(k\rightarrow \infty \). Using (28), the boundedness of \(\{y_{n}\}\) and \(\lim \nolimits _{k\rightarrow \infty }\lambda _{n_k}=\lambda >0,\) we obtain

$$\begin{aligned} \liminf \limits _{k\rightarrow \infty }\langle F(y_{n_k}), z-y_{n_k}\rangle \ge 0. \end{aligned}$$
(50)

We choose a decreasing positive sequence \(\{\varepsilon _k\}\) such that \(\lim \nolimits _{k\rightarrow \infty }\varepsilon _k=0\). By the definition of the lower limit, for each \(\varepsilon _k\), we denote by \(m_k\) the smallest positive integer such that

$$\begin{aligned} \langle F(y_{n_i}), z-y_{n_i}\rangle +\varepsilon _k\ge 0, \ \forall i\ge m_k. \end{aligned}$$
(51)

As \(\{\varepsilon _k\}\) is decreasing, it is easy to see that the sequence \(\{m_k\}\) is increasing. From Remark 4.1, for each k, \(F(y_{n_{m_k}})\ne 0.\) Let \(z_{n_{m_k}}=\frac{F(y_{n_{m_k}})}{\Vert F(y_{n_{m_k}})\Vert ^2}\). Then we get \(\langle F(y_{n_{m_k}}), z_{n_{m_k}}\rangle =1\) for each k. Moreover, from (51), we have

$$\begin{aligned} \langle F(y_{n_{m_k}}), z+\varepsilon _kz_{n_{m_k}} -y_{n_{m_k}}\rangle \ge 0. \end{aligned}$$
(52)

By the definition of pseudomonotonicity, we obtain

$$\begin{aligned} \langle F(z+\varepsilon _kz_{n_{m_k}}), z+\varepsilon _kz_{n_{m_k}} -y_{n_{m_k}}\rangle \ge 0. \end{aligned}$$
(53)

Since \(\{y_{n_k}\}\) converges weakly to \(x^*\in C\) and F is sequentially weakly continuous on C, we have \(\{F(y_{n_k})\}\) converges weakly to \(F(x^*)\). We can suppose that \(F(x^*)\ne 0\) (otherwise, \(x^*\) is a solution). Since the norm mapping is sequentially weakly lower semicontinuous, we have

$$\begin{aligned} \Vert F(x^*)\Vert \le \liminf \limits _{k\rightarrow \infty }\Vert F(y_{n_k})\Vert . \end{aligned}$$
(54)

As \(\{y_{n_{m_k}}\}\subset \{y_{n_k}\} \) and \(\lim \nolimits _{k\rightarrow \infty }\varepsilon _k=0\), we have

$$\begin{aligned} 0\le \lim \limits _{k\rightarrow \infty }\Vert \varepsilon _kz_{n_{m_k}}\Vert = \lim \limits _{k\rightarrow \infty }\frac{\varepsilon _k}{\Vert F(y_{n_{m_k}})\Vert }\le \frac{0}{\Vert F(x^*)\Vert }=0. \end{aligned}$$
(55)

Letting \(k\rightarrow \infty \) in (53), we get

$$\begin{aligned} \langle F(z), z -x^*\rangle \ge 0, \ \ \forall z\in C. \end{aligned}$$
(56)

By Lemma 4.2, we obtain \(x^*\in VI(F,C)\) and as in the proof of Theorem 3.1, we have \(x_n\rightharpoonup x^*\) and \(y_n\rightharpoonup x^*\). That is the desired result. \(\square \)

Remark 4.2

When F is monotone, as in [22, 23], it is not necessary to impose the sequential weak continuity of F.

Now, applying Theorem 3.2 to variational inequalities, we have the following result.

Theorem 4.3

Assume that F is strongly pseudomonotone on an infinite dimensional real Hilbert space H, Lipschitz continuous on C, and that \(VI(F,C)\) is nonempty. Let \(\{x_n\}\) and \(\{y_n\}\) be two sequences generated by Algorithm 4.1. Then \(\{x_n\}\) and \(\{y_n\}\) converge strongly to the unique solution \(u\in VI(F,C)\).

5 Numerical experiments

In this section, we provide numerical experiments to illustrate our algorithms and compare them with existing algorithms in [15, 23, 24]. First, we compare Algorithm 3.1 with Algorithm (3) (Algorithm 3.1 in [15]). Then we compare Algorithm 4.1 with Algorithm A in [23], Algorithm 2.1 in [24] and Algorithm 3.2 in [24]. We report the number of iterations (iter.) and the computing time (time), measured in seconds, for all tests. Each algorithm is terminated once its stopping criterion falls below the tolerance \(\varepsilon \).

For Algorithm 3.1, we take \(\alpha =\mu =0.98\), \(\theta =1\) (\(\delta =0.62\)) and \(\theta =0.75\) (\(\delta =0.53\)). For Algorithm A in [23], we use \(\alpha =\lambda _0=0.4\) and \(\delta =1.001\). For Algorithm 3.2 in [24], we choose \(P=I\), \(\alpha _{-1}=1\), \(\theta =1.5\), \(\rho =0.1\) and \(\beta =0.3\). We take \(\varepsilon =10^{-6}\) for all algorithms.

Problem 1

We consider the equilibrium problem for the following bifunction \(f : H \times H \rightarrow {\mathbb {R}}\), which comes from the Nash–Cournot equilibrium model in [9,10,11,12,13,14,15]:

$$\begin{aligned} f(x,y) = \langle Px+Qy+q, y-x \rangle , \end{aligned}$$
(57)

where \(q \in {\mathbb {R}}^m\) is chosen randomly with its elements in \([-m, m]\), and the matrices P and Q are two square matrices of order m such that Q is symmetric positive semidefinite and \(Q-P\) is negative semidefinite. In this case, the bifunction f satisfies \((C1){-}(C4)\) with the Lipschitz-type constants \(c_1 = c_2 = \frac{\Vert P-Q\Vert }{2}\), see [9, Lemma 6.2]. For Algorithm 3.1, we take \(\lambda _1=\frac{1}{2c_1}\). For Algorithm (3), we take \(\lambda =\frac{\varphi }{4c_1}\).

For numerical experiments: we suppose that the feasible set \(C\subset {\mathbb {R}}^m \) has the form of

$$\begin{aligned} C= \{x\in {\mathbb {R}}^m: -2\le x_i\le 5, i=1,\ldots , m\}, \end{aligned}$$
(58)

where \(m = 10, 100, 500\). We take \(y_{1}=x_0=y_0=(1,\ldots ,1)\) for all algorithms. For every m, as shown in Table 1, we generated two random samples with different choices of P, Q and q. Table 1 shows that our algorithm may perform better, even when the Lipschitz constants are known.
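The random data of Problem 1 can be generated as in the following sketch. Only the structural properties stated above (Q symmetric positive semidefinite, \(Q-P\) negative semidefinite, entries of q in \([-m,m]\)) are taken from the text; the helper name `make_problem1` and the uniform sampling of the factors A and B are assumptions.

```python
import numpy as np

def make_problem1(m, rng):
    # Q symmetric positive semidefinite: Q = A A^T
    A = rng.uniform(-1.0, 1.0, (m, m))
    Q = A @ A.T
    # Q - P negative semidefinite  <=>  P - Q positive semidefinite,
    # so take P = Q + B B^T for a random factor B
    B = rng.uniform(-1.0, 1.0, (m, m))
    P = Q + B @ B.T
    q = rng.uniform(-m, m, m)    # entries of q drawn from [-m, m]
    return P, Q, q

P, Q, q = make_problem1(10, np.random.default_rng(0))
```

With this construction, f satisfies the Lipschitz-type condition with \(c_1=c_2=\frac{\Vert P-Q\Vert }{2}\), which can be computed directly from the generated matrices.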

Table 1 Problem 1

Problem 2

The second problem is the HpHard problem. We choose \(F(x)=Mx+q\) with \(q\in {\mathbb {R}}^n\) and \(M=NN^T+S+D\), where every entry of the \(n\times n\) matrix N and of the \(n\times n\) skew-symmetric matrix S is uniformly generated from \((-5,5)\), every diagonal entry of the \(n\times n\) diagonal matrix D is uniformly generated from (0, 0.3) (so M is positive definite), and every entry of q is uniformly generated from \((-500,0)\). The feasible set is \(R^n_+\). This problem was considered in [24]. For all tests, we take \(y_{1}=x_0=y_0=(1,1,\ldots ,1)\). For Algorithm 4.1, we choose \(\lambda _1=0.4\). For Algorithm 2.1 in [24], we take \(P=(I+M^T)(I+M)\) and \(\theta =0.7\). For every n, as shown in Table 2, we generated three random samples with different choices of M and q.
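A minimal sketch of the HpHard data generation described above; the helper name `make_hphard` is an assumption, while the distributions and the structure \(M=NN^T+S+D\) follow the text. The skew-symmetric S is built by mirroring a random strict upper triangle.

```python
import numpy as np

def make_hphard(n, rng):
    N = rng.uniform(-5.0, 5.0, (n, n))
    U = np.triu(rng.uniform(-5.0, 5.0, (n, n)), 1)
    S = U - U.T                               # skew-symmetric, entries in (-5, 5)
    D = np.diag(rng.uniform(0.0, 0.3, n))
    M = N @ N.T + S + D                       # symmetric part N N^T + D is positive definite
    q = rng.uniform(-500.0, 0.0, n)
    return M, q

M, q = make_hphard(100, np.random.default_rng(1))
```

Since the skew part S drops out of \(x^TMx\), positive definiteness of M follows from that of \(NN^T+D\).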

Table 2 Problem 2

Problem 3

The third problem was considered in [23, 25], where

$$\begin{aligned} F(x)= & {} (f_1(x),f_2(x),\ldots ,f_m(x)), \\ f_i(x)= & {} x_{i-1}^2+x_{i}^2+x_{i-1}x_i+x_{i}x_{i+1}-2x_{i-1}+4x_{i}+x_{i+1}-1,\\ i= & {} 1,2,\ldots ,m, \ \ x_0=x_{m+1}=0. \end{aligned}$$

The feasible set is \(C=R_+^m\). We take \(\lambda _{1}=0.4\) for Algorithm 4.1. For all tests, we take \(y_{1}=x_0=y_0=(0,0,\ldots ,0)\). The results are summarized in Table 3.
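With the boundary convention \(x_0=x_{m+1}=0\), the mapping F of Problem 3 can be evaluated in vectorized form; the padding trick below is an implementation sketch, not taken from the cited sources.

```python
import numpy as np

def F(x):
    # pad with the boundary values x_0 = x_{m+1} = 0
    z = np.concatenate(([0.0], np.asarray(x, dtype=float), [0.0]))
    xm1, xi, xp1 = z[:-2], z[1:-1], z[2:]   # x_{i-1}, x_i, x_{i+1}
    return xm1**2 + xi**2 + xm1 * xi + xi * xp1 - 2 * xm1 + 4 * xi + xp1 - 1

print(F(np.zeros(5)))   # every component f_i(0) = -1
```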

Table 3 Problem 3

Problem 4

The Kojima–Shindo nonlinear complementarity problem (NCP) was considered in [23, 25, 26], where \(n = 4\) and the mapping F is defined by

$$\begin{aligned} F(x_1,x_2,x_3,x_4)={ \left[ \begin{array}{ccc} 3x_1^2+2x_1x_2+2x_2^2+x_3+3x_4-6 \\ 2x_1^2+x_1+x_2^2+10x_3+2x_4-2 \\ 3x_1^2+x_1x_2+2x_2^2+2x_3+9x_4-9 \\ x_1^2+3x_2^2+2x_3+3x_4-3 \end{array} \right] } \end{aligned}$$

The feasible set is \(C = \{x\in R^4_+ \mid x_1 + x_2 + x_3 + x_4 = 4\}.\) We choose the starting points \(y_{1}=x_0=y_0 = (1,1,1,1)\) and \(y_{1}=x_0=y_0 = (2,0,0,2).\) We take \(\lambda _{1}=0.8\) for Algorithm 4.1. Tables 2, 3 and 4 illustrate that Algorithm 4.1 may work better. As in the previous experiments, Algorithms 3.1 and 4.1 may perform better when choosing \(\theta =0.75\).
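For reference, the Kojima–Shindo mapping displayed above translates directly into code; the values at the two starting points used in the experiments are easy to check by hand.

```python
import numpy as np

def F(x):
    # Kojima-Shindo NCP mapping, n = 4
    x1, x2, x3, x4 = x
    return np.array([
        3*x1**2 + 2*x1*x2 + 2*x2**2 + x3 + 3*x4 - 6,
        2*x1**2 + x1 + x2**2 + 10*x3 + 2*x4 - 2,
        3*x1**2 + x1*x2 + 2*x2**2 + 2*x3 + 9*x4 - 9,
        x1**2 + 3*x2**2 + 2*x3 + 3*x4 - 3,
    ])

print(F((1.0, 1.0, 1.0, 1.0)))   # -> [ 5. 14.  8.  6.]
```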

Table 4 Problem 4

6 Conclusions

In this work, we considered the equilibrium problem involving Lipschitz-type and pseudomonotone bifunctions whose Lipschitz-type constants are unknown. We modified the extragradient method with a new step size rule. Weak and strong convergence theorems were proved for the sequences generated by the algorithm. The numerical experiments confirm the computational effectiveness of the proposed algorithm.