1 Introduction

In this paper, we consider the equilibrium problem (EP) of finding \(x^*\in C\) such that

$$\begin{aligned} f(x^*,y)\ge 0, \quad \forall y\in C, \end{aligned}$$
(1)

where C is a nonempty closed convex subset of a real Hilbert space H and \(f: H\times H\longrightarrow \mathbb {R}\) is a bifunction. The set of solutions of (1) is denoted by EP(f). This problem is also known as the Ky Fan inequality, owing to his contributions to this field [1]. It unifies many important mathematical problems, such as optimization problems, complementarity problems, variational inequality problems, Nash equilibrium problems, and fixed point problems [2,3,4]. In recent decades, many methods have been proposed and analyzed for approximating solutions of equilibrium problems [5,6,7,8,9,10,11,12,13]. One of the most common is the proximal point method [5, 6], but it cannot be adapted to pseudomonotone equilibrium problems [7].

Another fundamental class of methods for equilibrium problems is the extragradient-like methods [6, 7, 9,10,11,12]. Some of these methods use step sizes that depend on the Lipschitz-type constants of the bifunctions [6, 12]. This can be restrictive in applications because the Lipschitz-type constants are often unknown or difficult to estimate.

Recently, iterative methods for finding a common element of the set of solutions of equilibrium problems and the set of fixed points of operators in a real Hilbert space have been further developed by many authors [14,15,16]. Very recently, Dang [8, 10, 11] proposed algorithms that use an adaptive step size sequence for solving strongly pseudomonotone equilibrium problems in real Hilbert spaces.

The main purpose of this paper is to propose a new step-size rule for finding a common element of the set of fixed points of a quasi-nonexpansive mapping and the set of solutions of equilibrium problems involving pseudomonotone and Lipschitz-type bifunctions.

The paper is organized as follows. In Sect. 2, we present some definitions and preliminaries that will be needed in the paper. In Sect. 3, we propose the new algorithms and analyze their convergence. Finally, some numerical experiments are provided.

2 Preliminaries

In this section, we recall some definitions and preliminaries for further use.

Definition 2.1

A bifunction \(f : C \times C \longrightarrow \mathbb {R}\) is said to be:

  1. (i)

    monotone on C if \(f(x,y)+f(y,x)\le 0, \ \forall x, y\in C. \)

  2. (ii)

    pseudomonotone on C if \(f(x,y)\ge 0\Longrightarrow f(y,x)\le 0, \ \forall x, y\in C. \)

  3. (iii)

    strongly pseudomonotone on C if there exists a constant \(\gamma >0\) such that \(f(x,y)\ge 0\Longrightarrow f(y,x)\le -\gamma \Vert x-y\Vert ^2, \ \forall x, y\in C. \)

  4. (iv)

    satisfying the Lipschitz-type condition on C if there exist two positive constants \(c_1\), \(c_2\) such that \(f(x,y)+f(y,z)\ge f(x,z)-c_1\Vert x-y\Vert ^2-c_2\Vert y-z\Vert ^2, \ \forall x, y, z\in C.\)

From the above definitions, we see that (i)\(\Rightarrow \)(ii) and (iii)\(\Rightarrow \)(ii).
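For illustration, the bifunction \(f(x,y)=\langle Ax, y-x\rangle \) with a positive semidefinite matrix A is monotone, since \(f(x,y)+f(y,x)=-\langle A(x-y),x-y\rangle \le 0\). A minimal numerical check (a sketch only; the matrix size and sampling are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A = B @ B.T  # symmetric positive semidefinite, so f below is monotone

def f(x, y):
    # Bifunction of variational-inequality type: f(x, y) = <Ax, y - x>
    return (A @ x) @ (y - x)

# Definition 2.1(i): f(x, y) + f(y, x) <= 0 for all x, y
worst = max(f(x, y) + f(y, x)
            for x, y in ((rng.standard_normal(3), rng.standard_normal(3))
                         for _ in range(1000)))
assert worst <= 1e-10
```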

Definition 2.2

A function \(h : C \longrightarrow \mathbb {R}\) is called subdifferentiable at \(x\in C\) if there exists a vector \(w\in H\) such that \(h(y)-h(x)\ge \langle w, y-x\rangle , \ \forall y\in C.\)

Definition 2.3

Let \(S : H \longrightarrow H\) be a mapping with \(F (S) \ne \emptyset \). Then

  1. (i)

    S is called quasi-nonexpansive if \(\Vert S(x)-y\Vert \le \Vert x-y\Vert , \ \forall x\in H, y\in F(S), \) where F(S) denotes the set of fixed points of S.

  2. (ii)

    \(I - S\) is called demiclosed at zero if, for any sequence \(\{x_n\}\subset H\) with \(x_n\rightharpoonup x\) and \(\Vert S(x_n)-x_n\Vert \rightarrow 0 \), it follows that \(x\in F(S)\).

Here, we assume that the bifunction f satisfies the following conditions:

(A1):

f is pseudomonotone on C and \(f(x,x)=0\) for all \(x\in C\).

(A2):

f satisfies the Lipschitz-type condition on H.

(A3):

f(x, .) is convex and subdifferentiable on H for every fixed \(x \in H\).

(A4):

f is jointly weakly continuous on \(H \times C\) in the sense that, if \(x \in H, y \in C\) and \(\{x_n\}, \{y_n\}\) converge weakly to x and y, respectively, then \(f (x_n, y_n) \rightarrow f (x, y)\) as \(n\rightarrow \infty .\)

Remark 2.1

If f satisfies \((A1)-(A4)\), then the set of solutions EP(f) of problem (1) is closed and convex (see [9, 12]). If S is quasi-nonexpansive, then F(S) is a closed convex subset of H [17, Proposition 1].

For a proper, convex and lower semicontinuous function \(g: C\rightarrow (-\infty ,+\infty ]\) and \(\lambda >0, \) the proximal mapping of g with \(\lambda \) is defined by

$$\begin{aligned} prox_{\lambda g}(x)=argmin\{\lambda g(y)+\frac{1}{2}\Vert x-y\Vert ^2: y\in C\}, \ x\in H. \end{aligned}$$
(2)
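For some g the proximal mapping (2) admits a closed form. For instance, with \(C=H=\mathbb {R}^n\) and \(g(y)=\Vert y\Vert _1\), the minimizer is componentwise soft-thresholding (a standard fact, sketched here for illustration; this operator is not used by the algorithms below):

```python
import numpy as np

def prox_l1(x, lam):
    # prox_{lam*||.||_1}(x): minimize lam*||y||_1 + 0.5*||x - y||^2 over R^n
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([2.0, -0.3, 0.5])
p = prox_l1(x, 1.0)                      # soft-thresholding gives [1, 0, 0]

# Lemma 2.1: lam*(g(y) - g(p)) >= <x - p, y - p> for every y
g = lambda v: np.abs(v).sum()
rng = np.random.default_rng(1)
for _ in range(100):
    y = rng.standard_normal(3)
    assert 1.0 * (g(y) - g(p)) >= (x - p) @ (y - p) - 1e-10
```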

The following lemma is a property of the proximal mapping [10, 11, 18].

Lemma 2.1

For all \(x\in H, y \in C \) and \(\lambda > 0,\) the following inequality holds:

$$\begin{aligned} \lambda \{g(y)-g(prox_{\lambda g}(x))\}\ge \langle x-prox_{\lambda g}(x),y-prox_{\lambda g}(x)\rangle . \end{aligned}$$
(3)

Remark 2.2

From Lemma 2.1, we note that if \(x=prox_{\lambda g}(x)\), then

$$\begin{aligned} x\in Argmin\{g(y):y\in C\}:=\{x\in C: g(x)=\min \limits _{y\in C}g(y)\}. \end{aligned}$$
(4)

For a nonempty closed and convex set \(C\subseteq H\), the (metric) projection \(P_C:H\longrightarrow C\) is defined for all \(x\in H\) by \(P_C(x)=argmin\{{\parallel y-x \parallel : y\in C }\}\). It is known that \(P_C\) has the following property.

Lemma 2.2

Let C be a nonempty, closed and convex set in H and \( x \in H\). Then

$$\begin{aligned} \langle P_C(x)-x, y-P_C(x)\rangle \ge 0, \quad \forall y\in C. \end{aligned}$$
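For a halfspace \(C=\{y: \langle a,y\rangle \le b\}\), the projection has the explicit form \(P_C(x)=x-\frac{\max \{0,\langle a,x\rangle -b\}}{\Vert a\Vert ^2}a\); this is the same type of projection as onto the sets \(T_n\) of Sect. 3. A quick numerical check of the characterization in Lemma 2.2 (a sketch; the data are arbitrary):

```python
import numpy as np

def proj_halfspace(x, a, b):
    # P_C(x) for the halfspace C = {y : <a, y> <= b}
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - (viol / (a @ a)) * a

a, b = np.array([1.0, 2.0]), 1.0
x = np.array([3.0, 3.0])
p = proj_halfspace(x, a, b)              # here p = [1.4, -0.2]

assert a @ p <= b + 1e-12                # p lies in C
rng = np.random.default_rng(2)
for _ in range(100):
    y = proj_halfspace(rng.standard_normal(2), a, b)   # sample points of C
    assert (p - x) @ (y - p) >= -1e-10   # Lemma 2.2
```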

Lemma 2.3

Let \( u,\ v\in H\). Then \( \ \Vert u+v\Vert ^2\le \Vert u\Vert ^2+2\langle v, u+v\rangle . \)

Lemma 2.4

[19] Let \(\{a_n\}\) be a nonnegative real sequence for which there exists \(N>0\) such that \(a_{n+1}\le (1-\alpha _n)a_n+\alpha _nb_n\) for all \(n\ge N\), where \(\{\alpha _n\}\subset (0, 1)\), \( \sum _{n=0}^\infty \alpha _n=\infty \) and \(\{b_n\}\) is a sequence such that \(\limsup \limits _{n\rightarrow \infty }b_n\le 0\). Then \(\lim \limits _{n\rightarrow \infty }a_n=0.\)

Lemma 2.5

[20] Let \(\{a_n\}\) be a nonnegative real sequence for which there exists a subsequence \(\{a_{n_j}\}\) of \(\{a_n\}\) such that \(a_{n_j}<a_{{n_j}+1} \) for all \(j\in \mathbb {N}\). Then there exists a nondecreasing sequence \(\{m_k\}\) of \(\mathbb {N}\) such that \(\lim \limits _{k\rightarrow \infty }m_k=\infty \), and the following properties hold for all (sufficiently large) \(k\in \mathbb {N}\):

$$\begin{aligned} a_{m_k}\le a_{{m_k}+1} \ and \ a_k\le a_{{m_k}+1}. \end{aligned}$$

In fact, \(m_k\) is the largest number n in the set \( \{1,2,\ldots ,k\}\) such that \(a_n<a_{n+1}\).
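For illustration, the sequence \(\{m_k\}\) of Lemma 2.5 can be computed directly from this description (a Python sketch; the sample sequence and the 1-indexing convention via a dummy leading entry are our own choices):

```python
def m_k(a, k):
    # Largest n in {1,...,k} with a[n] < a[n+1] (Lemma 2.5); a[0] is a dummy.
    return max(n for n in range(1, k + 1) if a[n] < a[n + 1])

a = [None, 5.0, 1.0, 2.0, 1.5, 3.0, 0.5, 4.0]   # a_1, ..., a_7 after the dummy
for k in range(2, 7):                           # k large enough that m_k exists
    m = m_k(a, k)
    assert a[m] <= a[m + 1] and a[k] <= a[m + 1]   # the two stated inequalities
```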

The normal cone \(N_C\) to C at a point \(x \in C\) is defined by

$$\begin{aligned} N_C(x)=\{w\in H: \langle w,y-x\rangle \le 0, \ \forall y\in C\}. \end{aligned}$$

Lemma 2.6

[21, Chapter 7] Let C be a nonempty convex subset of a real Hilbert space H and \(g : C \rightarrow \mathbb {R}\) be a convex, subdifferentiable, and lower semicontinuous function on C. Then \(x^*\) is a solution of the convex problem

$$\begin{aligned} min \{g(x) : x \in C\} \end{aligned}$$

if and only if \(0 \in \partial g(x^*)+N_C(x^*)\), where \(\partial g(.)\) denotes the subdifferential of g and \(N_C(x^*)\) is the normal cone of C at \(x^*\).

3 Algorithm and its convergence

Inspired by the algorithms in [12,13,14,15,16, 22,23,24,25,26] and the viscosity scheme [27], we propose the following method.

Algorithm 3.1

(Step 0):

Take \(\lambda _0>0\), \(x_0\in H,\)\(\mu \in (0, 1) \).

(Step 1):

Given the current iterate \(x_n\), compute

$$\begin{aligned} y_{n}=argmin\{\lambda _n f(x_n,y)+\frac{1}{2}\Vert x_n-y\Vert ^2, \ y\in C\}=prox_{\lambda _n f(x_n,.)}(x_n). \end{aligned}$$
(5)
(Step 2):

Choose \(w_n\in \partial _2f(x_n,y_n)\) such that \( x_n-\lambda _n w_n-y_{n}\in N_C(y_n) \), compute

$$\begin{aligned} z_n=argmin\{\lambda _n f(y_n,y)+\frac{1}{2}\Vert x_n-y\Vert ^2, \ y\in T_n\}=prox_{\lambda _n f(y_n,.)}(x_n), \end{aligned}$$
(6)

where \(T_{n}=\{v\in H|\langle x_n-\lambda _n w_n-y_{n}, v-y_{n} \rangle \le 0\}\).

(Step 3):

Compute \(t_n=\alpha _nx_0+(1-\alpha _n)z_n \), \(x_{n+1}=\beta _nz_n+(1-\beta _n)St_n\) and

$$\begin{aligned} \lambda _{n+1}{=} {\left\{ \begin{array}{ll} min\{{\frac{\mu (\Vert x_n{-}y_n\Vert ^2+\Vert z_n{-}y_n\Vert ^2)}{2 (f(x_n,z_n){-}f(x_n,y_n){-}f(y_n,z_n))},\lambda _n }\} , &{} if\ f(x_n,z_n){-}f(x_n,y_n){-}f(y_n,z_n){>}0, \\ \lambda _n,&{} otherwise. \\ \end{array}\right. } \end{aligned}$$
(7)

Set \(n := n + 1\) and return to Step 1.
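For illustration, the scalar form of the step-size rule (7) can be sketched as a small helper (the helper name and the test bifunction are our own choices; with \(f(x,y)=A(x)(y-x)\) and A Lipschitz with \(L=2\), so that \(c_1=c_2=1\) by Remark 3.4, the returned value matches the lower bound \(\frac{\mu }{2\max \{c_1,c_2\}}\) of Lemma 3.1):

```python
def next_lambda(lam, mu, f, x, y, z):
    # Step-size update (7): shrink lam only when the Lipschitz-type
    # quantity is positive; hence {lam_n} is nonincreasing (Lemma 3.1).
    denom = f(x, z) - f(x, y) - f(y, z)
    if denom > 0:
        return min(mu * ((x - y) ** 2 + (z - y) ** 2) / (2.0 * denom), lam)
    return lam

A = lambda u: 2.0 * u                 # Lipschitz with L = 2, so c1 = c2 = 1
f = lambda u, v: A(u) * (v - u)       # VI-type bifunction
lam = next_lambda(1.0, 0.9, f, 1.0, 0.0, 1.0)
assert abs(lam - 0.45) < 1e-12        # equals mu / (2 * max(c1, c2)) here
assert lam <= 1.0                     # never increases
```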

Remark 3.1

The domain of the first proximal mapping \(y_{n}=prox_{\lambda _n f(x_n,.)}(x_n)\) is C. The domain of the second proximal mapping \(z_n=prox_{\lambda _n f(y_n,.)}(x_n)\) is \(T_{n}.\)

Remark 3.2

We note that \(w_n\) exists and \(C\subseteq T_{n}\). Since \(y_{n}=argmin\{\lambda _n f(x_n,y)+\frac{1}{2}\Vert x_n-y\Vert ^2, \ y\in C\}\), by Lemma 2.6, we obtain \(0\in \partial _2\{\lambda _n f(x_n,y)+\frac{1}{2}\Vert x_n-y\Vert ^2\}(y_n)+N_C(y_n).\) That is, there exists \(w_n\in \partial _2f(x_n,y_n)\) such that \( x_n-\lambda _n w_n-y_{n}\in N_C(y_n) \). Hence, there exists \(w\in N_C(y_n)\) such that \(\lambda _nw_n+y_n-x_n+w=0.\) Thus,

$$\begin{aligned} \langle x_n-y_n, y-y_n\rangle =\langle \lambda _nw_n,y-y_n\rangle +\langle w,y-y_n\rangle \le \lambda _n \langle w_n,y-y_n\rangle , \ \forall y\in C. \end{aligned}$$

That is, \(\langle x_n-\lambda _n w_n-y_{n}, y-y_{n} \rangle \le 0, \ \forall y\in C.\) Hence, \(C\subseteq T_{n}\).

Lemma 3.1

The sequence \( \{\lambda _n\} \) generated by Algorithm 3.1 is a monotonically decreasing sequence with lower bound \( \min \{{\frac{\mu }{2\max \{c_1,c_2\} },\lambda _0 }\}\).

Proof

It is easy to see that \( \{\lambda _n\} \) is a monotonically decreasing sequence. Since f satisfies the Lipschitz-type condition with constants \(c_1\) and \(c_2\), in the case of \(f(x_n,z_n)-f(x_n,y_n)-f(y_n,z_n)>0,\) we have

$$\begin{aligned} \frac{\mu (\Vert x_n{-}y_n\Vert ^2{+}\Vert z_n{-}y_n\Vert ^2)}{2 (f(x_n,z_n){-}f(x_n,y_n){-}f(y_n,z_n))}{\ge } \frac{\mu (\Vert x_n-y_n\Vert ^2{+}\Vert z_n{-}y_n\Vert ^2)}{2(c_1\Vert x_n{-}y_n\Vert ^2{+}c_2\Vert y_n{-}z_n\Vert ^2)}{\ge } \frac{\mu }{2\max \{c_1,c_2\}}. \end{aligned}$$
(8)

Thus, the sequence \( \{\lambda _n\} \) has the lower bound \( \min \{{\frac{\mu }{2\max \{c_1,c_2\} },\lambda _0 }\}\). \(\square \)

Remark 3.3

It is obvious that the limit of \( \{\lambda _n\} \) exists, and we denote \(\lambda =\lim \limits _{n\rightarrow \infty }\lambda _n.\) Clearly \(\lambda >0\). If \(\lambda _0\le \frac{\mu }{2\max \{c_1,c_2\} }, \) then \(\{\lambda _n\}\) is a constant sequence.

The following lemma plays a crucial role in the proof of Theorem 3.1.

Lemma 3.2

Suppose that \(S : H \longrightarrow H\) is a quasi-nonexpansive mapping. Let \(\{x_n\}\), \(\{y_n\}\), \(\{z_n\}\) and \(\{t_n\}\) be the sequences generated by Algorithm 3.1 and assume that \(F(S)\cap EP(f)\ne \emptyset .\) Then the sequences \(\{x_n\}\), \(\{y_n\},\) \(\{z_n\}\) and \(\{t_n\}\) are bounded.

Proof

Since \(z_n=argmin\{\lambda _n f(y_n,y)+\frac{1}{2}\Vert x_n-y\Vert ^2, \ y\in T_n\}\), by Lemma 2.1, we get

$$\begin{aligned} \lambda _n (f(y_n,y)-f(y_n,z_n))\ge \langle x_n-z_n,y-z_n\rangle ,\ \forall y\in T_n. \end{aligned}$$
(9)

Note that \(F(S)\cap EP(f)\subseteq EP(f)\subseteq C\subseteq T_{n}.\) Let \(u\in F(S)\cap EP(f).\) Substituting \(y=u\) into the last inequality, we have

$$\begin{aligned} \lambda _n (f(y_n,u)-f(y_n,z_n))\ge \langle x_n-z_n,u-z_n\rangle . \end{aligned}$$
(10)

As \(u\in EP(f),\) we obtain \(f(u,y_n)\ge 0.\) Thus \(f(y_n,u)\le 0\) because of the pseudomonotonicity of f. Hence, from (10) and \(\lambda _n>0\), we obtain

$$\begin{aligned} -\lambda _nf(y_n,z_n)\ge \langle x_n-z_n,u-z_n\rangle . \end{aligned}$$
(11)

Since \(w_n\in \partial _2f(x_n,y_n),\) we get \(f(x_n,y)-f(x_n,y_n)\ge \langle w_n,y-y_n\rangle , \forall y\in H.\) In particular, substituting \(y=z_n\) into the last inequality, we have \(f(x_n,z_n)-f(x_n,y_n)\ge \langle w_n,z_n-y_n\rangle ,\) and hence

$$\begin{aligned} \lambda _n(f(x_n,z_n)-f(x_n,y_n))\ge \lambda _n\langle w_n,z_n-y_n\rangle . \end{aligned}$$
(12)

By the definition of \(T_n\), we have \(\langle x_n-\lambda _n w_n-y_{n},z_n-y_{n} \rangle \le 0.\) Then

$$\begin{aligned} \lambda _n\langle w_n,z_n-y_n\rangle \ge \langle x_n-y_n,z_n-y_n\rangle . \end{aligned}$$
(13)

Combining (11), (12) and (13), we have

$$\begin{aligned}&2\lambda _n(f(x_n,z_n)-f(x_n,y_n)-f(y_n,z_n))\nonumber \\&\quad \ge 2\langle x_n-y_{n},z_n-y_{n}\rangle +2\langle x_n-z_{n},u-z_{n} \rangle \nonumber \\&\quad = \Vert x_n-y_{n}\Vert ^2+\Vert y_n-z_{n}\Vert ^2-\Vert x_n-z_{n}\Vert ^2\nonumber \\&\qquad -\Vert x_n-u\Vert ^2+\Vert x_n-z_{n}\Vert ^2+\Vert z_n-u\Vert ^2. \end{aligned}$$
(14)

Thus

$$\begin{aligned} \Vert z_n-u\Vert ^2&\le \Vert x_n-u\Vert ^2-\Vert x_n-y_{n}\Vert ^2-\Vert y_n-z_{n}\Vert ^2\nonumber \\&\quad +2\lambda _n(f(x_n,z_n)-f(x_n,y_n)-f(y_n,z_n)). \end{aligned}$$
(15)

By the definition of \(\lambda _n\) and (15), we obtain

$$\begin{aligned} \Vert z_n-u\Vert ^2&\le \Vert x_n-u\Vert ^2-\Vert x_n-y_n\Vert ^2- \Vert y_n-z_n\Vert ^2\nonumber \\&\quad +2\lambda _n(f(x_n,z_n)-f(x_n,y_n) -f(y_n,z_n))\nonumber \\&=\Vert x_n-u\Vert ^2-\Vert x_n-y_n\Vert ^2- \Vert y_n-z_n\Vert ^2\nonumber \\&\quad +2\frac{\lambda _n}{\lambda _{n+1}} \lambda _{n+1}(f(x_n,z_n)-f(x_n,y_n)-f(y_n,z_n))\nonumber \\&\le \Vert x_n-u\Vert ^2-\Vert x_n-y_n\Vert ^2-\Vert y_n-z_n\Vert ^2\nonumber \\&\quad +\frac{\lambda _n}{\lambda _{n+1}}\mu (\Vert x_n-y_n\Vert ^2+\Vert z_n-y_n\Vert ^2). \end{aligned}$$
(16)

Note that the limit

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\lambda _n\frac{\mu }{\lambda _{n+1}} =\mu , \quad 0<\mu <1. \end{aligned}$$
(17)

That is, there exists \(N\ge 0\) such that \(0<\lambda _n\frac{\mu }{\lambda _{n+1}}<1\) for all \(n\ge N.\)

Hence, by (16), \(\Vert z_n-u\Vert \le \Vert x_n-u\Vert \) for all \(n\ge N.\)

Thus, \(\forall n\ge N,\)

$$\begin{aligned} \Vert x_{n+1}-u\Vert&=\Vert \beta _nz_n+(1-\beta _n)St_n-u\Vert \nonumber \\&\le \beta _n\Vert z_n-u\Vert +(1-\beta _n)\Vert St_n-u\Vert \nonumber \\&\le \beta _n\Vert z_n-u\Vert +(1-\beta _n)\Vert t_n-u\Vert \nonumber \\&\le \beta _n\Vert z_n-u\Vert +(1-\beta _n)\Vert \alpha _nx_0+(1-\alpha _n)z_n-u\Vert \nonumber \\&\le \beta _n\Vert z_n-u\Vert +(1-\beta _n)(\alpha _n\Vert x_0-u\Vert +(1-\alpha _n)\Vert z_n-u\Vert )\nonumber \\&\le (\beta _n+(1-\beta _n)(1-\alpha _n))\Vert z_n-u\Vert +(1-\beta _n)\alpha _n\Vert x_0-u\Vert \nonumber \\&\le max \{\Vert x_0-u\Vert , \Vert x_n-u\Vert \}\nonumber \\&\le \cdots \le max \{\Vert x_0-u\Vert , \Vert x_{N}-u\Vert \}. \end{aligned}$$
(18)

Hence the sequence \(\{x_n\}\) is bounded, and consequently \(\{y_n\}\), \(\{z_n\}\) and \(\{t_n\}\) are bounded as well. This is the desired result. \(\square \)

Theorem 3.1

Let S be a quasi-nonexpansive mapping such that \(I-S\) is demiclosed at zero. Assume that \(\{\alpha _n\}\subset (0, 1)\), \( \sum _{n=0}^\infty \alpha _n=\infty \), \(\lim \limits _{n\rightarrow \infty }\alpha _n=0 \) and \(\beta _n\in [a,b]\subset (0,1)\). Moreover, the assumptions \((A1)-(A4)\) and \(F(S)\cap EP(f)\ne \emptyset \) hold. Then the sequence \(\{x_n\}\) generated by Algorithm 3.1 converges strongly to \(x^*=P_{F(S)\cap EP(f)}(x_0)\).

Proof

Let \(x^*=P_{F(S)\cap EP(f)}(x_0)\), by Lemma 2.2, we have

$$\begin{aligned} \langle x_0-x^*,z-x^*\rangle \le 0, \ \forall z\in F(S)\cap EP(f). \end{aligned}$$
(19)

By the proof of Lemma 3.2, there exists \(N_1\ge 0\) such that \(\Vert z_n-x^*\Vert \le \Vert x_n-x^*\Vert \) for all \(n\ge N_1.\) From Lemma 3.2, the sequences \(\{x_n\}\), \(\{y_n\}\), \(\{z_n\}\) and \(\{t_n\}\) are bounded. By Lemma 2.3, we have

$$\begin{aligned} \Vert x_{n+1}-x^*\Vert ^2&=\Vert \beta _nz_n+(1-\beta _n)St_n-x^*\Vert ^2\nonumber \\&=\beta _n\Vert z_n-x^*\Vert ^2+(1-\beta _n)\Vert St_n-x^*\Vert ^2-\beta _n(1-\beta _n) \Vert St_n-z_n\Vert ^2\nonumber \\&\le \beta _n\Vert z_n-x^*\Vert ^2+(1-\beta _n)\Vert t_n-x^*\Vert ^2- \beta _n(1-\beta _n)\Vert St_n-z_n\Vert ^2\nonumber \\&=\beta _n\Vert z_n-x^*\Vert ^2+(1-\beta _n)\Vert \alpha _nx_0+(1-\alpha _n)z_n-x^*\Vert ^2\nonumber \\&\quad -\beta _n(1-\beta _n)\Vert St_n-z_n\Vert ^2\nonumber \\&\le \beta _n\Vert z_n-x^*\Vert ^2-\beta _n(1-\beta _n)\Vert St_n-z_n\Vert ^2\nonumber \\&\quad +(1-\beta _n)(2\alpha _n\langle x_0-x^*,t_n-x^*\rangle +(1-\alpha _n) \Vert z_n-x^*\Vert ^2)\nonumber \\&{=}(\beta _n{+}(1{-}\beta _n)(1{-}\alpha _n)) \Vert z_n{-}x^*\Vert ^2{+}2(1{-}\beta _n)\alpha _n\langle x_0{-}x^*,t_n{-}x^*\rangle \nonumber \\&\quad -\beta _n(1-\beta _n)\Vert St_n-z_n\Vert ^2. \end{aligned}$$
(20)

Hence, for \(\forall n\ge N_1,\) we have

$$\begin{aligned} \Vert x_{n{+}1}{-}x^*\Vert ^2&{\le }(\beta _n{+}(1{-}\beta _n)(1{-}\alpha _n))\Vert x_n{-}x^*\Vert ^2{+}2(1{-}\beta _n)\alpha _n\langle x_0-x^*,t_n-x^*\rangle \nonumber \\&\quad -\beta _n(1-\beta _n)\Vert St_n-z_n\Vert ^2. \end{aligned}$$
(21)

Moreover, by (20) and (16), we get

$$\begin{aligned} \Vert x_{n{+}1}{-}x^*\Vert ^2&{\le }(\beta _n{+}(1{-}\beta _n)(1{-}\alpha _n))\Vert z_n{-}x^*\Vert ^2{+}2(1{-}\beta _n)\alpha _n\langle x_0-x^*,t_n-x^*\rangle \nonumber \\&\le \Vert x_n-x^*\Vert ^2-(1-\lambda _n\frac{\mu }{\lambda _{n+1}}) (\Vert y_n-x_n\Vert ^2\nonumber \\&\quad +\Vert y_n-z_n\Vert ^2)+2(1-\beta _n)\alpha _n\langle x_0-x^*,t_n-x^*\rangle . \end{aligned}$$
(22)

Case 1. Suppose that there exists \(N_2\in \mathbb {N}\) (\(N_2\ge N_1\)) such that \(\Vert x_{n+1}-x^*\Vert \le \Vert x_n-x^*\Vert , \ \forall n\ge N_2.\) Then \(\lim \limits _{n\rightarrow \infty }\Vert x_n-x^*\Vert \) exists, and we denote \(l=\lim \limits _{n\rightarrow \infty }\Vert x_n-x^*\Vert .\) From (22), we get

$$\begin{aligned}&(1-\lambda _n\frac{\mu }{\lambda _{n+1}})(\Vert y_n-x_n\Vert ^2+\Vert y_n-z_n\Vert ^2)\nonumber \\&\quad \le \Vert x_n-x^*\Vert ^2-\Vert x_{n+1}-x^*\Vert ^2+2(1-\beta _n)\alpha _n\langle x_0-x^*,t_n-x^*\rangle . \end{aligned}$$
(23)

Combining (23) and \(\lim \limits _{n\rightarrow \infty }(1-\lambda _n\frac{\mu }{\lambda _{n+1}}) =1-\mu >0,\) we obtain \(\lim \limits _{n\rightarrow \infty }\Vert x_n-y_n\Vert =0\), \(\lim \limits _{n\rightarrow \infty }\Vert z_n-y_n\Vert =0\) and \(\lim \limits _{n\rightarrow \infty }\Vert z_n-x_n\Vert =0\). Since \(\{x_n\}\) is bounded, there exists a subsequence \(\{x_{n_k}\}\) converging weakly to some \(z_0\in H\) such that

$$\begin{aligned} \limsup _{n\rightarrow \infty }\langle x_0-x^*, x_n-x^* \rangle =\lim \limits _{k\rightarrow \infty }\langle x_0-x^*, x_{n_k}-x^* \rangle =\langle x_0-x^*, z_0-x^* \rangle . \end{aligned}$$
(24)

Then \(y_{n_k}\rightharpoonup z_0\) and \(z_0\in C.\) Since \(y_{n_k}=prox_{\lambda _{n_k} f(x_{n_k},.)}(x_{n_k}),\) by Lemma 2.1, we have

$$\begin{aligned} \lambda _{n_k} (f(x_{n_k},y)-f(x_{n_k},y_{n_k}))\ge \langle x_{n_k}-y_{n_k},y-y_{n_k}\rangle ,\ \forall y\in C. \end{aligned}$$
(25)

Passing to the limit in the last inequality as \(k\longrightarrow \infty \) and using assumptions (A1), (A4) and \(\lim \limits _{k\rightarrow \infty }\lambda _{n_k}=\lambda > 0,\) we get \(f(z_0, y)\ge 0, \ \forall y\in C. \) That is, \(z_0\in EP(f).\) Next we prove \(z_0\in F(S)\). Indeed, it follows from (21) that

$$\begin{aligned} 0&=\lim \limits _{n\rightarrow \infty }\Vert x_{n+1}-x^*\Vert ^2-\Vert x_{n} -x^*\Vert ^2+\alpha _n(1-\beta _n)\Vert x_n-x^*\Vert ^2\nonumber \\&\quad -2(1-\beta _n)\alpha _n\langle x_0-x^*,t_n-x^*\rangle \nonumber \\&\le \liminf _{n\rightarrow \infty }(-\beta _n(1-\beta _n)\Vert St_n-z_n\Vert ^2)\nonumber \\&=-\limsup \limits _{n\rightarrow \infty }\beta _n(1-\beta _n)\Vert St_n-z_n\Vert ^2\nonumber \\&\le -a(1-b)\limsup \limits _{n\rightarrow \infty }\Vert St_n-z_n\Vert ^2. \end{aligned}$$
(26)

Hence \(\lim \limits _{n\rightarrow \infty }\Vert St_n-z_n\Vert =0.\) Since \(t_n-z_n=\alpha _n(x_0-z_n)\) and \(\alpha _n\rightarrow 0\), we get \(\lim \limits _{n\rightarrow \infty }\Vert t_n-z_n\Vert =0.\) Consequently, \(t_{n_k}\rightharpoonup z_0\) and \(\lim \limits _{n\rightarrow \infty }\Vert t_n-St_n\Vert =0.\) Using the demiclosedness of the mapping \(I-S\), we have \(z_0\in F(S)\). That is, \(z_0\in F(S)\cap EP(f)\). Using (19) and (24), we obtain \(\limsup \limits _{n\rightarrow \infty }\langle x_0-x^*, x_n-x^* \rangle =\langle x_0-x^*, z_0-x^* \rangle \le 0.\) Hence, we get

$$\begin{aligned} \limsup _{n\rightarrow \infty }\langle x_0{-}x^*, t_{n}{-}x^* \rangle {\le }\limsup _{n\rightarrow \infty }\langle x_0{-}x^*, t_{n}{-}x_n \rangle {+}\limsup _{n\rightarrow \infty }\langle x_0{-}x^*, x_n{-}x^* \rangle {\le }0. \end{aligned}$$
(27)

By (21), for \(\forall n\ge N_2,\) we have

$$\begin{aligned} \Vert x_{n+1}{-}x^*\Vert ^2&{\le }(\beta _n{+}(1{-}\beta _n)(1{-}\alpha _n))\Vert x_n{-}x^*\Vert ^2 {+}2(1{-}\beta _n)\alpha _n\langle x_0{-}x^*,t_n{-}x^*\rangle \nonumber \\&=(1-\alpha _n(1-\beta _n))\Vert x_n-x^*\Vert ^2+(1-\beta _n) \alpha _n2\langle x_0-x^*,t_n-x^*\rangle . \end{aligned}$$
(28)

It follows from (27), (28) and Lemma 2.4 that \(\lim \limits _{n\rightarrow \infty }\Vert x_n-x^*\Vert ^2=0\).

Case 2. There exists a subsequence \(\{\Vert x_{n_j}-x^*\Vert \}\) of \(\{\Vert x_n-x^*\Vert \}\) such that \(\Vert x_{n_j}-x^*\Vert <\Vert x_{n_j+1}-x^*\Vert \) for all \(j\in \mathbb {N}\). By Lemma 2.5, there exists a nondecreasing sequence \(\{m_k\}\) of \(\mathbb {N}\) such that \(\lim \limits _{k\rightarrow \infty }m_k=\infty \) and the following inequalities hold for all (sufficiently large) \(k\in \mathbb {N}\):

$$\begin{aligned} 0\le \Vert x_{m_k}-x^*\Vert \le \Vert x_{m_k+1}-x^*\Vert \ \ and \ \ \Vert x_k-x^*\Vert \le \Vert x_{m_k+1}-x^*\Vert . \end{aligned}$$
(29)

By (22), we have

$$\begin{aligned}&(1-\lambda _{m_k}\frac{\mu }{\lambda _{{m_k}+1}})(\Vert y_{m_k} -x_{m_k}\Vert ^2+\Vert y_{m_k}-z_{m_k}\Vert ^2)\nonumber \\&\quad \le \Vert x_{m_k}-x^*\Vert ^2-\Vert x_{{m_k}+1}-x^* \Vert ^2+2(1-\beta _{m_k})\alpha _{m_k}\langle x_0-x^*, t_{m_k}-x^*\rangle . \end{aligned}$$
(30)

Combining (30) and \(\lim \limits _{k\rightarrow \infty }(1-\lambda _{m_k}\frac{\mu }{\lambda _{{m_k}+1}}) =1-\mu >0,\) we obtain \(\lim \limits _{k\rightarrow \infty }\Vert x_{m_k}-y_{m_k}\Vert =0\) and \(\lim \limits _{k\rightarrow \infty }\Vert z_{m_k}-y_{m_k}\Vert =0.\) Using the same argument as in the proof of Case 1, we have \(\limsup _{k\rightarrow \infty }\langle x_0-x^*, t_{m_k}-x^* \rangle \le 0.\) Using (29) and the same argument as in the proof of (28), for all \(m_k\ge N_1\), we have

$$\begin{aligned} \Vert x_{{m_k}{+}1}{-}x^*\Vert ^2&{\le }(1{-}\alpha _{m_k}(1{-}\beta _{m_k}))\Vert x_{m_k}{-}x^*\Vert ^2 {+}(1{-}\beta _{m_k})\alpha _{m_k}2\langle x_0{-}x^*,t_{m_k}{-}x^*\rangle \nonumber \\&\le (1{-}\alpha _{m_k}(1{-}\beta _{m_k}))\Vert x_{{m_k}{+}1}{-}x^* \Vert ^2{+}(1{-}\beta _{m_k})\alpha _{m_k}2\langle x_0{-}x^*,t_{m_k}{-}x^*\rangle . \end{aligned}$$
(31)

This implies that

$$\begin{aligned} \Vert x_{{m_k}+1}-x^*\Vert ^2 \le 2\langle x_0-x^*,t_{m_k}-x^*\rangle , \quad \forall \ m_k\ge N_1. \end{aligned}$$
(32)

Since \(\limsup \limits _{k\rightarrow \infty }2\langle x_0-x^*,t_{m_k}-x^*\rangle \le 0 ,\) we obtain \(\lim \limits _{k\rightarrow \infty }\Vert x_{m_k+1}-x^*\Vert ^2=0\) and \(\lim \limits _{k\rightarrow \infty }\Vert x_{m_k}-x^*\Vert ^2=0\). Since \(\Vert x_k-x^*\Vert \le \Vert x_{{m_k}+1}-x^*\Vert \), we have \(\lim \limits _{k\rightarrow \infty }\Vert x_k-x^*\Vert =0\), that is, \(x_k\rightarrow x^*.\) This is the desired result. \(\square \)

If \(f(x,y)=\langle A(x),y-x\rangle \), \(\forall x, y\in H,\) where \(A : H \rightarrow H\) is a mapping, then the equilibrium problem becomes the variational inequality problem: find \(x^*\in C\) such that

$$\begin{aligned} \langle A(x^*),y-x^*\rangle \ge 0, \quad \forall y\in C. \end{aligned}$$

Moreover, we have

$$\begin{aligned}&argmin\{\lambda _n f(x_n,y)+\frac{1}{2}\Vert x_n-y\Vert ^2, \ y\in C\}\\&\quad =argmin\{\lambda _n \langle A(x_n),y-x_n\rangle +\frac{1}{2}\Vert x_n-y\Vert ^2+\frac{\lambda _n^2}{2}\Vert Ax_n\Vert ^2, \ y\in C\}\\&\quad =argmin\{\frac{1}{2}\Vert (x_n-\lambda _nA(x_n))-y\Vert ^2, \ y\in C\}\\&\quad =P_C(x_{n}-\lambda _n A(x_n)) \end{aligned}$$

and

$$\begin{aligned} argmin\{\lambda _n f(y_n,y)+\frac{1}{2}\Vert x_n-y\Vert ^2, \ y\in T_n\}=P_{T_n}(x_{n}-\lambda _n A(y_n)). \end{aligned}$$
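This reduction can be verified numerically in one dimension: for \(f(x,y)=A(x)(y-x)\) and an interval C, the minimizer of \(\lambda f(x_n,y)+\frac{1}{2}(x_n-y)^2\) coincides with the clipped point \(P_C(x_n-\lambda A(x_n))\). A brute-force check over a fine grid (a sketch; the operator and data are arbitrary choices of ours):

```python
import numpy as np

A = lambda u: u + np.sin(u)            # a monotone operator, as in Sect. 4
lo, hi = -2 * np.pi, 2 * np.pi         # C = [-2*pi, 2*pi]
lam, x = 0.4, 1.3

# Brute-force minimization of lam*f(x, y) + 0.5*(x - y)^2 over a grid of C
grid = np.linspace(lo, hi, 200001)
obj = lam * A(x) * (grid - x) + 0.5 * (x - grid) ** 2
brute = grid[np.argmin(obj)]

closed = np.clip(x - lam * A(x), lo, hi)   # P_C(x - lam*A(x))
assert abs(brute - closed) < 1e-3          # agree up to the grid resolution
```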

Remark 3.4

Let \(f(x,y)=\langle A(x),y-x\rangle \), \(\forall x, y\in H\), and suppose that A is Lipschitz-continuous, i.e., there exists \(L>0\) such that

$$\begin{aligned} \parallel A(x)-A(y) \parallel \le L\parallel x-y \parallel ,\ \ \forall x, y\in H. \end{aligned}$$

Then condition (A2) holds for f with \(c_1=c_2=\frac{L}{2}\). If A is monotone and Lipschitz-continuous, conditions (A1), (A3) and (A4) can be dropped in Theorem 3.1. Indeed, it is obvious that f(x, y) satisfies (A1) and (A3). From \(f(x,y)=\langle A(x),y-x\rangle \) and (25), we get

$$\begin{aligned} \lambda _{n_k} (\langle A(x_{n_k}),y-x_{n_k}\rangle -\langle A(x_{n_k}),y_{n_k}-x_{n_k}\rangle )\ge \langle x_{n_k}-y_{n_k},y-y_{n_k}\rangle ,\ \forall y\in C. \end{aligned}$$

That is

$$\begin{aligned} \lambda _{n_k} \langle A(x_{n_k}),y-y_{n_k}\rangle \ge \langle x_{n_k}-y_{n_k},y-y_{n_k}\rangle ,\ \forall y\in C. \end{aligned}$$

Using the monotonicity of A, for all \(y\in C\) we have

$$\begin{aligned} 0&\le \langle y_{n_k}-x_{n_k},y-y_{n_k}\rangle + \lambda _{n_k}\langle A(x_{n_k}), y-y_{n_k}\rangle \\&=\langle y_{n_k}-x_{n_k},y-y_{n_k}\rangle + \lambda _{n_k}\langle A(x_{n_k}), y-x_{n_k}\rangle + \lambda _{n_k}\langle A(x_{n_k}), x_{n_k}-y_{n_k}\rangle \\&\le \langle y_{n_k}-x_{n_k},y-y_{n_k}\rangle + \lambda _{n_k}\langle A(y), y-x_{n_k}\rangle + \lambda _{n_k}\langle A(x_{n_k}), x_{n_k}-y_{n_k}\rangle . \end{aligned}$$

Letting \(k\rightarrow \infty \) and using the facts that \(\lim \limits _{k\rightarrow \infty }\Vert y_{n_k}-x_{n_k}\Vert =0\), \(\{y_{n_k}\}\) is bounded and \(\lim \limits _{k\rightarrow \infty }\lambda _{n_k}=\lambda >0,\) we obtain \(\langle A(y), y-z_0\rangle \ge 0, \quad \forall y\in C.\) By a similar argument to that in [22] (using the Minty lemma), we have \(z_0\in EP(f)\).

Corollary 3.1

Let S be a quasi-nonexpansive mapping such that \(I-S\) is demiclosed at zero and \(f(x,y)=\langle A(x),y-x\rangle \), \(\forall x, y\in H.\) Let \(A : H \rightarrow H\) be a monotone and Lipschitz-continuous mapping and assume that \(F(S)\cap EP(f)\ne \emptyset .\) Assume that \(\{\alpha _n\}\subset (0, 1)\), \( \sum _{n=0}^\infty \alpha _n=\infty \), \(\lim \limits _{n\rightarrow \infty }\alpha _n=0 \) and \(\beta _n\in [a,b]\subset (0,1)\). Then the sequence \(\{x_n\}\) generated by

$$\begin{aligned} \left\{ \begin{aligned}\begin{array}{llllll} \lambda _0>0, x_0\in H, \mu \in (0, 1) ,\\ y_{n}=P_C(x_n-\lambda _n A(x_n)), \\ T_{n}=\{x\in H|\langle x_n-\lambda _n A(x_n)-y_{n},x-y_{n} \rangle \le 0\},\\ z_{n}=P_{T_{n}}(x_n-\lambda _n A(y_n)),\\ t_n=\alpha _nx_0+(1-\alpha _n)z_n, \ \ x_{n+1}=\beta _nz_n+(1-\beta _n)St_n \end{array} \end{aligned} \right. \end{aligned}$$

converges strongly to \(x^*=P_{F(S)\cap EP(f)}(x_0)\).
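The scheme of Corollary 3.1 is easy to prototype. The sketch below runs it on the one-dimensional data of Problem 3 in Sect. 4 (\(A(x)=x+\sin x\), \(S(x)=\frac{x}{2}\sin x\), \(C=[-2\pi ,2\pi ]\)), with the step-size rule (7) written out for \(f(x,y)=A(x)(y-x)\); the choices \(\alpha _n=\frac{1}{n+2}\) and \(\beta _n=\frac{1}{2}\) are illustrative, not those of the experiments:

```python
import math

def corollary_31(A, S, proj_C, x0, lam0=0.4, mu=0.9, iters=2000):
    # Iteration of Corollary 3.1 in R^1; alpha_n, beta_n are sample choices.
    x, lam = x0, lam0
    for n in range(iters):
        alpha, beta = 1.0 / (n + 2), 0.5
        y = proj_C(x - lam * A(x))
        w = x - lam * A(x) - y               # outward normal of T_n at y
        u = x - lam * A(y)
        if w != 0.0 and w * (u - y) > 0.0:   # z = P_{T_n}(u)
            z = u - (w * (u - y)) / (w * w) * w
        else:
            z = u
        t = alpha * x0 + (1 - alpha) * z
        x_next = beta * z + (1 - beta) * S(t)
        # Step-size rule (7): for f(x, y) = A(x)*(y - x), the denominator
        # f(x,z) - f(x,y) - f(y,z) simplifies to (A(x) - A(y))*(z - y).
        denom = (A(x) - A(y)) * (z - y)
        if denom > 0:
            lam = min(mu * ((x - y) ** 2 + (z - y) ** 2) / (2 * denom), lam)
        x = x_next
    return x

A = lambda u: u + math.sin(u)                   # monotone, Lipschitz with L = 2
S = lambda u: 0.5 * u * math.sin(u)             # quasi-nonexpansive, F(S) = {0}
proj_C = lambda u: min(max(u, -2 * math.pi), 2 * math.pi)

x_star = corollary_31(A, S, proj_C, x0=1.0)
assert abs(x_star) < 1e-2                       # the unique common solution is 0
```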

Now we introduce another algorithm, which is Algorithm 3.1 without the fixed point step. Its convergence proof is similar to that of Algorithm 3.1 and is therefore omitted. The algorithm is of the following form.

Algorithm 3.2

(Step 0):

Take \(\lambda _0>0\), \(x_0\in H,\)\(\mu \in (0, 1) \).

(Step 1):

Given the current iterate \(x_n\), compute

$$\begin{aligned} y_{n}=argmin\{\lambda _n f(x_n,y)+\frac{1}{2}\Vert x_n-y\Vert ^2, \ y\in C\}=prox_{\lambda _n f(x_n,.)}(x_n). \end{aligned}$$

If \(x_n = y_n\), then stop: \(x_n \) is a solution. Otherwise, go to Step 2.

(Step 2):

Choose \(w_n\in \partial _2f(x_n,y_n)\) such that \( x_n-\lambda _n w_n-y_{n}\in N_C(y_n) \), compute

$$\begin{aligned} z_n=argmin\{\lambda _n f(y_n,y)+\frac{1}{2}\Vert x_n-y\Vert ^2, \ y\in T_n\}=prox_{\lambda _n f(y_n,.)}(x_n), \end{aligned}$$

where \(T_{n}=\{v\in H|\langle x_n-\lambda _n w_n-y_{n}, v-y_{n} \rangle \le 0\}\).

(Step 3):

Compute \(x_{n+1}=\alpha _nx_0+(1-\alpha _n)z_n\) and

$$\begin{aligned} \lambda _{n+1}= {\left\{ \begin{array}{ll} min\{{\frac{\mu (\Vert x_n{-}y_n\Vert ^2+\Vert z_n{-}y_n\Vert ^2)}{2 (f(x_n,z_n){-}f(x_n,y_n){-}f(y_n,z_n))},\lambda _n }\} , &{} if\ f(x_n,z_n){-}f(x_n,y_n){-}f(y_n,z_n)>0, \\ \lambda _n,&{} otherwise. \\ \end{array}\right. } \end{aligned}$$

Set \(n := n + 1\) and return to Step 1.

Remark 3.5

The domain of the first proximal mapping \(y_{n}=prox_{\lambda _n f(x_n,.)}(x_n)\) is C. The domain of the second proximal mapping \(z_n=prox_{\lambda _n f(y_n,.)}(x_n)\) is \(T_{n}.\)

Remark 3.6

Under assumptions \((A1)-(A4)\), from Lemma 2.1 and Remark 2.2, we obtain that if Algorithm 3.2 terminates at some iterate, i.e., \(x_n = y_n\), then \(x_n\in EP(f).\)

Theorem 3.2

Assume that \(\{\alpha _n\}\subset (0, 1)\), \( \sum _{n=0}^\infty \alpha _n=\infty \) and \(\lim \limits _{n\rightarrow \infty }\alpha _n=0 \). Moreover, the assumptions \((A1)-(A4)\) and \(EP(f)\ne \emptyset \) hold. Then the sequence \(\{x_n\}\) generated by Algorithm 3.2 converges strongly to \(x^*=P_{EP(f)}(x_0)\).

4 Numerical experiments

In this section, we present some numerical experiments. We compare our algorithms with Dang’s algorithm (the algorithm in [12]) and Algorithm 1 in [15]. We take \(\alpha _n=\frac{1}{10000(n+2)}\) for all the methods. To terminate the algorithms, we use the stopping condition \(\Vert y_n-x_n\Vert \le \varepsilon .\)

Problem 1

We consider the equilibrium problem for the following bifunction \(f : H \times H \rightarrow \mathbb {R}\), which comes from the Nash-Cournot equilibrium model in [10, 12]:

$$\begin{aligned} f(x,y) = \langle Px+Qy+q, y-x \rangle , \end{aligned}$$

where \(q \in \mathbb {R}^m\) is chosen randomly with its elements in \([-m, m]\), and P and Q are two square matrices of order m such that Q is symmetric positive semidefinite and \(Q-P\) is negative semidefinite. In this case, the bifunction f satisfies \((A1)-(A4) \) with the Lipschitz-type constants \(c_1 = c_2 = \frac{\Vert P-Q\Vert }{2}\); see [9, Lemma 6.2]. For Algorithm 3.2, we take \(\lambda _0=\frac{1}{4c_1}\) and \(\mu =0.9\). For Dang’s algorithm and Algorithm 1 in [15], we take \(\lambda =\frac{1}{4c_1}.\) For Algorithm 1 in [15], we take \(S=I\) and \(F(x)=x-x_0\).

For the numerical experiments, we suppose that the feasible set \(C\subset \mathbb {R}^m \) has the form

$$\begin{aligned} C= \{x\in \mathbb {R}^m: Ax\le b\}, \end{aligned}$$

where A is a matrix of size \(k\times m\) (\(m = 10, 100, 200\) and \(k = 100\)) with entries generated randomly in \([-2,2]\) and \(b\in \mathbb {R}^k\) is a vector with elements generated randomly in [1, 3]. The numerical results are shown in Table 1.
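The matrices P and Q of Problem 1 must satisfy two spectral conditions (Q symmetric positive semidefinite, \(Q-P\) negative semidefinite). One way to generate such instances (a sketch; this particular construction is our own choice, not prescribed by [10, 12]):

```python
import numpy as np

def make_instance(m, seed=0):
    # Q = B B^T is symmetric PSD; P = Q + D with D diagonal PSD,
    # so Q - P = -D is negative semidefinite, as Problem 1 requires.
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((m, m))
    Q = B @ B.T
    D = np.diag(rng.uniform(0.0, 1.0, m))
    P = Q + D
    q = rng.uniform(-m, m, m)            # elements of q in [-m, m]
    return P, Q, q

P, Q, q = make_instance(10)
assert np.linalg.eigvalsh(Q).min() >= -1e-8        # Q is PSD
assert np.linalg.eigvalsh(Q - P).max() <= 1e-8     # Q - P is NSD
```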

Table 1 Problem 1

Problem 2

Let \(H=L^{2}([0,1]) \) with norm \(\Vert x\Vert =(\int _{0}^{1}|x(t)|^{2}dt)^{\frac{1}{2}} \) and inner product \(\langle x,y\rangle =\int _{0}^{1}x(t)y(t)dt,\ x, y\in H.\) The bifunction f is defined by \(f(x,y) = \langle Ax, y-x \rangle \), where the operator \(A:H\rightarrow H\) is defined by \(Ax(t)=\max (0,x(t)),\ t\in [0,1],\) for all \(x\in H.\) It can easily be verified that A is Lipschitz-continuous and monotone. The feasible set is \(C=\{x\in H: \int _{0}^{1}(t^{2}+1)x(t)dt\le 1 \}.\) Observe that \(0\in EP(f)\), so \(EP(f)\ne \emptyset .\) We take \(\lambda _0=0.7 \) and \(\mu =0.9\) for Algorithm 3.2 and \(\lambda =0.7\) for Dang’s algorithm. The results are presented in Table 2.

Table 2 Problem 2

Problem 3

The third problem was considered in [28], where \(f(x,y)=\langle A(x),y-x\rangle \), \(\forall x, y\in \mathbb {R}.\) The mapping \(A: \mathbb {R}\rightarrow \mathbb {R} \) is defined by \(A(x)=x+\sin x\) and \(S: \mathbb {R}\rightarrow \mathbb {R} \) is defined by \(S(x)=\frac{x}{2}\sin x.\) The feasible set is \(C=[-2\pi ,2\pi ]\). It is easy to show that A is monotone and Lipschitz-continuous with \(L=2\), S is a quasi-nonexpansive mapping such that \(I-S\) is demiclosed at zero, and \(F(S)\cap EP(f)=\{0\}\ne \emptyset ;\) see [28, Example 4.1]. We take \(\lambda _0=0.4,\ \beta _n=\frac{1}{2}\) and \(\mu =0.9\) for Algorithm 3.1. For Algorithm 1 in [15], we take \(\beta =0\), \(\beta _n=\frac{1}{2}\), \(F(x)=x-x_0\) and \(\lambda _n=\lambda =0.4\). The numerical results are shown in Table 3.

Table 3 Problem 3

From the aforementioned numerical results, we see that the proposed algorithms are effective.

5 Conclusions

In this paper, we considered equilibrium problems involving pseudomonotone bifunctions satisfying a Lipschitz-type condition whose Lipschitz-type constants are unknown. We modified the Halpern subgradient extragradient method with a new step-size rule and proved that the sequence generated by Algorithm 3.1 converges strongly to a common solution of an equilibrium problem and a fixed point problem. A second algorithm was also proposed, and numerical experiments confirm the effectiveness of the proposed algorithms.