1 Introduction

Let H be a real Hilbert space and let f : H→(−∞, ∞] be a proper, lower semicontinuous and convex function. In order to find a minimum point of f, Martinet [19] proposed the following iterative method: x1 ∈ H and

$${x}_{n + 1}=\arg\min\limits_{y{\in} H}\left\{f(y)+\frac{1}{2}\|y-{x}_{n}\|^{2}\right\}, $$

for all n ≥ 1. He proved that the sequence {xn} converges weakly to a minimum point of f. Note that the above sequence {xn} can be rewritten in the form

$$\partial f({x}_{n + 1})+{x}_{n + 1}\ni {x}_{n},\ \forall n\geq 1. $$
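
Indeed, the first-order optimality condition for the strongly convex subproblem above, together with the sum rule for subdifferentials, gives

$$0{\in} \partial\left(f(\cdot)+\frac{1}{2}\|\cdot-{x}_{n}\|^{2}\right)({x}_{n + 1})=\partial f({x}_{n + 1})+{x}_{n + 1}-{x}_{n},$$

which is exactly the inclusion above.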

We know that the subdifferential operator ∂f of f is a maximal monotone operator (see [27]). So, the problem of finding a null point of a maximal monotone operator plays an important role in optimization. One popular method of solving the equation 0 ∈ A(x), where A is a maximal monotone operator on a Hilbert space H, is the proximal point algorithm. The proximal point algorithm generates, for any starting point x0 = x ∈ H, a sequence {xn} by the rule

$$ {x}_{n + 1}={J}_{c_{n}}^{A}({x}_{n}), \text{ for all } n{\in} \mathbb{N}, $$
(1.1)

where {cn} is a sequence of positive real numbers and \({J}_{c_{n}}^{A}=(I+c_{n}A)^{-1}\) is the resolvent of A. Many authors have studied this algorithm: some of them dealt with the weak convergence of the sequence {xn} generated by (1.1), while others proved strong convergence theorems by imposing additional assumptions on A. Moreover, Rockafellar [29] gave a more practical method, which is an inexact variant of the method:

$$ {x}_{n}+e_{n}{\in} {x}_{n + 1}+c_{n}A{x}_{n + 1}\text{ for all } n{\in} \mathbb{N}, $$
(1.2)

where {en} is regarded as an error sequence and {cn} is a sequence of positive regularization parameters. Note that the algorithm (1.2) can be rewritten as

$$ {x}_{n + 1}={J}_{c_{n}}^{A}({x}_{n}+e_{n})\text{ for all } n{\in} \mathbb{N}. $$
(1.3)

This method is called the inexact proximal point algorithm. It was shown in Rockafellar [29] that if en→0 quickly enough, namely \({\sum }_{n = 1}^{{\infty }} \|e_{n}\|<{\infty } \), then \({x}_{n}\rightharpoonup z{\in } H\) with 0∈Az.
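
As a small numerical illustration of (1.1)–(1.3) (ours, not taken from the paper), consider the operator A(x) = Mx − q on ℝm with M symmetric positive definite, whose resolvent has the closed form Jc(x) = (I + cM)−1(x + cq); the constant parameter c and the O(1/n²) error schedule below are illustrative choices that make ∑∥en∥ < ∞.

```python
import numpy as np

# A(x) = M x - q with M symmetric positive definite is maximal monotone on R^m,
# and its resolvent is J_c(x) = (I + c M)^{-1} (x + c q).
rng = np.random.default_rng(0)
m = 5
B = rng.standard_normal((m, m))
M = B.T @ B + np.eye(m)            # symmetric positive definite
q = rng.standard_normal(m)
x_star = np.linalg.solve(M, q)     # the unique zero of A

x = np.zeros(m)                    # starting point x_0
c = 1.0                            # regularization parameter c_n (constant here)
for n in range(1, 200):
    e = rng.standard_normal(m) / n**2                      # summable error sequence
    x = np.linalg.solve(np.eye(m) + c * M, x + c * q + e)  # x_{n+1} = J_c(x_n + e_n)

print(np.linalg.norm(x - x_star))  # small: the iterates approach the zero of A
```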

In [10], Burachik et al. used the enlargement A^ε to devise an approximate generalized proximal point algorithm. The exact version of this algorithm can be stated as follows: having xn, the next element xn+1 is the solution of

$$ 0{\in} c_{n}A(x) +\bigtriangledown f (x)-\bigtriangledown f({x}_{n}), $$
(1.4)

where f is a suitable regularization function. Note that if \(f(x)=\frac {1}{2}\|x\|^{2}\), then the above algorithm becomes the classical proximal point algorithm. Approximate solutions of (1.4) are treated in [10] via A^ε. Specifically, an approximate solution of (1.4) is regarded as an exact solution of

$$0{\in} c_{n}A^{\varepsilon_{n}}(x) +\bigtriangledown f (x)-\bigtriangledown f({x}_{n}), $$

for an appropriate value of εn. Note that, if \(f(x)=\frac {1}{2}\|x\|^{2}\), the above relation is equivalent to the problem of finding an element xn+1 ∈ H and \(v_{n + 1}{\in } A^{\varepsilon _{n}}({x}_{n + 1})\) with εn ≥ 0 such that

$$0=c_{n}v_{n + 1}+({x}_{n + 1}-{x}_{n}). $$

They proved that if \({\sum }_{n = 1}^{{\infty }}\varepsilon _{n}<{\infty }\), then the sequence {xn} converges weakly to a null point of A.

In [25], Solodov and Svaiter proposed a new criterion for the approximate solution of subproblems as follows: Two elements yn and vn are admissible if

$$v_{n}{\in} A({y}_{n}),\ 0=c_{n}v_{n}+({y}_{n}-{x}_{n})-e_{n},$$

and the error en satisfies

$$\|e_{n}\|\leq \sigma \max\{c_{n}\|v_{n}\|,\|{y}_{n}-{x}_{n}\|\},$$

where σ is a real number in [0,1), and the next iterate xn+1 is obtained by projecting xn onto the hyperplane

$$\{z{\in} H : \langle v_{n},z-{y}_{n}\rangle = 0\}.$$

By combining the ideas of [10, 25], Solodov et al. [24] proposed an even simpler method, in which no projection is performed. An approximate solution is regarded as a pair yn, vn such that

$$v_{n}{\in} A^{\varepsilon_{n}}({y}_{n}),\ 0=c_{n}v_{n} +({y}_{n}-{x}_{n})-e_{n},$$

where εn, en are “relatively small” in comparison with ∥yn − xn∥, and the next iterate xn+1 is defined by

$${x}_{n + 1}={x}_{n}-c_{n}v_{n}.$$

Rockafellar [29] posed the open question of whether the sequence generated by (1.1) converges strongly or not. In 1991, Güler [15] gave an example showing that Rockafellar’s proximal point algorithm may fail to converge strongly. An example of Bauschke, Matoušková, and Reich [7] also shows that the proximal point algorithm may converge weakly but not in norm. In 2000, Solodov and Svaiter [26] proposed the following algorithm (the hybrid projection method): Choose any x0 ∈ H and σ ∈ [0,1). At iteration n, having xn, choose μn > 0 and find (yn, vn), an inexact solution of

$$0{\in} A(x)+\mu_{n} (x-{x}_{n}),$$

with tolerance σ. Define

$$C_{n}=\{z{\in} H : \langle z-{y}_{n},v_{n}\rangle\leq 0\},$$

and

$$Q_{n}=\{z{\in} H : \langle z-{x}_{n},{x}_{0}-{x}_{n}\rangle \leq 0\}.$$

Take

$${x}_{n + 1}={P}_{C_{n}\cap Q_{n}}{x}_{0}.$$

They proved that if the sequence of regularization parameters μn is bounded from above, then {xn} converges strongly to a point \(\bar{x}\in A^{-1}0\). Moreover, based on the important fact that Cn and Qn in the above algorithm are two half-spaces, they showed that

$${x}_{n + 1}={x}_{0}+\lambda_{1} v_{n} +\lambda_{2} ({x}_{0}-{x}_{n}),$$

where (λ1, λ2) is the solution of the linear system of two equations with two unknowns:

$$\left\{\begin{array}{ll} \lambda_{1} \|v_{n}\|^{2} +\lambda_{2} \langle v_{n}, {x}_{0}-{x}_{n}\rangle =-\langle {x}_{0}-{y}_{n},v_{n}\rangle\\ \lambda_{1} \langle v_{n}, {x}_{0}-{x}_{n}\rangle +\lambda_{2} \|{x}_{0}-{x}_{n}\|^{2}=-\|{x}_{0}-{x}_{n}\|^{2}. \end{array}\right. $$
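
In finite dimensions, this last step is a two-line computation. A minimal sketch (our own, assuming the vectors are given as NumPy arrays and the 2×2 system is nonsingular, e.g. x0 ≠ xn and vn ≠ 0):

```python
import numpy as np

def hybrid_projection_update(x0, xn, yn, vn):
    """x_{n+1} = P_{C_n ∩ Q_n}(x0), computed from the 2x2 linear system above
    (both half-space constraints are assumed to be active at the projection)."""
    d = x0 - xn
    A = np.array([[vn @ vn, vn @ d],
                  [vn @ d,  d @ d]])
    b = np.array([-(x0 - yn) @ vn, -(d @ d)])
    lam1, lam2 = np.linalg.solve(A, b)
    return x0 + lam1 * vn + lam2 * d
```

Degenerate cases (for instance xn = x0 at the first iteration, where only the half-space Cn is active) have to be handled separately.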

In 2003, Bauschke et al. introduced a new algorithm (see [6, Algorithm 4.1]) for finding a common fixed point of a family of operators \((T_{i})_{i\in \mathbb I}\) in the class \(\mathcal B\) of operators (see [5]). Let E be a real Banach space and f : E→(−∞, ∞] be a lower semicontinuous convex function which is Gâteaux differentiable on int dom f and Legendre, i.e., it satisfies the following two properties:

  1. (i)

    ∂f is both locally bounded and single-valued on its domain;

  2. (ii)

    (∂f)−1 is locally bounded on its domain and f is strictly convex on every convex subset of dom ∂f.

The Bregman distance associated with f is the function

$$\begin{array}{@{}rcl@{}} D:\ E\times E&\longrightarrow& [0,{\infty} ]\\ (x,y)&\longmapsto& \left\{\begin{array}{llll}f(x)-f(y)-\langle x-y,\bigtriangledown f(y)\rangle\ \text{ if } x,y{\in} \text{ int dom } f,\\ {\infty}\ {}\text{otherwise}, \end{array}\right. \end{array} $$

and the D-projector onto a set C ⊂ E is the operator

$$\begin{array}{@{}rcl@{}} {P}_{C}:\ E&\longrightarrow& 2^{E},\\ y&\longmapsto& \{x{\in} C : D(x,y)=D_{C}(y)< {\infty}\}. \end{array} $$

It is easy to see that if E is a real Hilbert space and f(x) = ∥x∥²/2 for all x ∈ E, and C is a nonempty closed convex subset of E, then PC is the metric projection from E onto C.
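
Indeed, for this choice of f one has ▽f(y) = y, and hence

$$D(x,y)=\frac{1}{2}\|x\|^{2}-\frac{1}{2}\|y\|^{2}-\langle x-y,y\rangle=\frac{1}{2}\|x-y\|^{2},$$

so minimizing D(⋅, y) over C is the same as minimizing ∥⋅ − y∥ over C, i.e., PC is the metric projection.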

They gave an application of [6, Algorithm 4.1] for finding a common zero of a family of maximal monotone operators \((A_{i})_{i\in \mathbb I}\) in a Banach space E as follows: for every \(n{\in } \mathbb {N}\), take i(n) ∈ I, γn ∈ (0, ∞) and set \({x}_{n+1}={P}_{Q({x}_{0},\,{x}_{n},\,(\bigtriangledown f + \gamma_{n}A_{i(n)})^{-1}\circ \bigtriangledown f({x}_{n}))}\,{x}_{0}\), where

$$\begin{array}{@{}rcl@{}} Q(x,y,z)&=&\{u{\in} E : \langle u-y,\bigtriangledown f(x)-\bigtriangledown f(y)\rangle \leq 0\}\\ &&\cap \{u{\in} E : \langle u-z, \bigtriangledown f(y)-\bigtriangledown f(z)\rangle \leq 0\}. \end{array} $$

They proved that if ▽f is uniformly continuous on bounded subsets of E, if for every i ∈ I and every strictly increasing sequence {pn} such that i(pn) ≡ i one has \(\inf _{n}\gamma _{p_{n}}>0\), and if the following conditions hold:

  1. (i)

    The index control mapping \(\text {i}:\ \mathbb {N}\longrightarrow I\) satisfies

    $$(\forall i{\in} I) (\exists M_{i}>0) (\forall n\in\mathbb{N})\ i\in\{\text{i}(n),\ldots, \text{i}(n+M_{i}-1)\}.$$
  2. (ii)

    For every sequence {yn} in int dom f and every bounded sequence {zn} in int dom f, one has

    $$D({y}_{n},z_{n})\to 0\Rightarrow {y}_{n}-z_{n}\to 0,$$

    then xn → PSx0, where \(S=\overline {\text {dom}}f\cap ({\cap }_{i{\in } I}A_{i}^{-1}0)\).

In order to find a fixed point of a nonexpansive mapping T on a closed and convex subset C of H, motivated by the result of Solodov and Svaiter, Takahashi et al. [32] introduced the following iterative method:

$$\begin{array}{@{}rcl@{}} C_{0}&=&C,\ {x}_{0}{\in} C,\\ {y}_{n}&=&\alpha_{n} {x}_{n} +(1-\alpha_{n})T{x}_{n},\\ C_{n + 1}&=&\{z{\in} {C}_{n} : \|{y}_{n}-z\|\leq \|{x}_{n}-z\|\},\\ {x}_{n + 1}&=&{P}_{C_{n + 1}}{x}_{0},\ n\geq 0, \end{array} $$

and they proved that the sequence {xn} converges strongly to PF(T)x0 when {αn} ⊂ [0, a) with a ∈ [0,1). Moreover, they also gave a similar iterative method to find a zero of a maximal monotone operator, in the following form

$$\begin{array}{@{}rcl@{}} C_{0}&=&C,\ {x}_{0}{\in} C,\\ {y}_{n}&=&\alpha_{n} {x}_{n} +(1-\alpha_{n})J^{A}_{c_{n}}{x}_{n},\\ C_{n + 1}&=&\{z{\in} {C}_{n} : \|{y}_{n}-z\|\leq \|{x}_{n}-z\|\},\\ {x}_{n + 1}&=&{P}_{C_{n + 1}}{x}_{0},\ n\geq 0. \end{array} $$
(1.5)

They showed that if {αn} ⊂ [0, a) with a ∈ [0,1) and cn → ∞, then the sequence {xn} generated by (1.5) converges strongly to \({P}_{A^{-1} 0}{x}_{0}\).
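
One observation that is useful for implementation: in a Hilbert space, each set Cn+1 in (1.5) is the intersection of Cn with a half-space, since

$$\|{y}_{n}-z\|\leq \|{x}_{n}-z\| \iff \langle z,{x}_{n}-{y}_{n}\rangle \leq \frac{1}{2}\left(\|{x}_{n}\|^{2}-\|{y}_{n}\|^{2}\right).$$

Hence, computing xn+1 in (1.5) amounts to projecting x0 onto an intersection of half-spaces (see also Remark 3.4 below).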

We can see that the shrinking projection method (1.5) of Takahashi et al. is more complex than the hybrid projection method of Solodov and Svaiter: in the iterative method (1.5), to define xn+1, we have to find the projection of x0 onto the intersection of n closed and convex subsets of H, whereas in the hybrid projection method we only compute the projection of x0 onto the intersection of two half-spaces. Recently, however, many mathematicians have studied the shrinking projection method for solving various problems; see, for instance, Dadashi [13], Kimura [18], Qin et al. [22], Takahashi [33], Takahashi et al. [30, 34], Sean et al. [37].

In this paper, by using the shrinking projection method, we introduce two parallel iterative methods for finding a common null point of a finite family of maximal monotone operators in Banach spaces. Moreover, we give some applications of the main results to the problem of finding a common minimum point of convex functions, to the convex feasibility problem, and to a system of variational inequalities. In Section 5, a numerical example is given to illustrate the effectiveness of the proposed algorithms.

2 Preliminaries

Let E be a real Banach space with norm ∥⋅∥ and let E∗ be its dual. The value of f ∈ E∗ at x ∈ E will be denoted by ⟨x, f⟩. When {xn} is a sequence in E, xn → x (resp. \({x}_{n}\rightharpoonup x,\ {x}_{n}\overset {*}{\rightharpoonup } x\)) will denote strong (resp. weak, weak∗) convergence of the sequence {xn} to x. Let JE denote the normalized duality mapping from E into \(2^{E^{*}}\) given by

$${J}_{E}x=\left\{f{\in} E^{*} : \langle x,f\rangle=\|x\|^{2}=\|f\|^{2}\right\},\ \forall x{\in} E. $$

We always use SE to denote the unit sphere SE = {x ∈ E : ∥x∥ = 1}. A Banach space E is said to be strictly convex if, for all x, y ∈ SE with x ≠ y and all t ∈ (0,1),

$$\|(1-t)x+ty\|<1. $$

A Banach space E is said to be uniformly convex if, for any ε ∈ (0,2], there exists a δ = δ(ε) > 0 such that the inequalities ∥x∥ ≤ 1, ∥y∥ ≤ 1 and ∥x − y∥ ≥ ε imply

$$\frac{\|x+y\|}{2}\leq 1-\delta.$$

Recall that a Banach space E is said to have the Kadec-Klee property if, for every sequence {xn} ⊂ E such that ∥xn∥ → ∥x∥ and \({x}_{n}\rightharpoonup x\) as n → ∞, we have xn → x as n → ∞. It is well known that every uniformly convex Banach space has the Kadec-Klee property (see [12, 23]).

A Banach space E is said to be smooth provided the limit

$$\lim_{t\rightarrow 0}\frac{\|x+ty\|-\|x\|}{t}$$

exists for each x and y in SE. In this case, the norm of E is said to be Gâteaux differentiable. It is said to be uniformly Gâteaux differentiable if, for each y ∈ SE, this limit is attained uniformly for x ∈ SE.

Let E be a reflexive Banach space. We know that E is uniformly convex if and only if E∗ is uniformly smooth [1, 14].

We have the following properties of the normalized duality mapping JE (see [1, 12, 14]):

i) E is reflexive if and only if JE is surjective.

ii) If E∗ is strictly convex, then JE is single-valued.

iii) If E is a smooth, strictly convex and reflexive Banach space, then JE is a single-valued bijection.

iv) If E is uniformly smooth, then JE is uniformly continuous on each bounded subset of E.

We know that if E is a smooth, strictly convex and reflexive Banach space and C is a nonempty, closed, and convex subset of E, then, for each x ∈ E, there exists a unique z ∈ C such that

$$\|x-z\|=\inf_{y{\in} C}\|x-y\|.$$

The mapping PC : E → C defined by PCx = z is called the metric projection from E onto C, and we write d(x, C) = ∥x − PCx∥.

Let \(A:\ E\longrightarrow 2^{E^{*}}\) be an operator. The effective domain of A is denoted by D(A), that is, D(A) = {x ∈ E : Ax ≠ ∅}. Recall that A is called a monotone operator if ⟨x − y, u − v⟩ ≥ 0 for all x, y ∈ D(A) and all u ∈ Ax, v ∈ Ay. A monotone operator A on E is called maximal monotone if its graph is not properly contained in the graph of any other monotone operator on E. We know that if A is a maximal monotone operator on E and if E is a uniformly convex and smooth Banach space, then R(JE + rA) = E∗ for all r > 0, where R(JE + rA) is the range of JE + rA (see [9, 28]). For all x ∈ E and r > 0, there exists a unique xr ∈ E such that

$$0{\in} {J}_{E}({x}_{r}-x)+rA{x}_{r}.$$

We define Jr by Jrx = xr; the mapping Jr : E → E is called the metric resolvent of A.

The set of null points of A is defined by \(A^{-1}0=\{z{\in} E : 0{\in} Az\}\), and we know that \(A^{-1}0\) is a closed and convex subset of E (see [31]).

Let \(A:\ E\longrightarrow 2^{E^{*}}\) be a maximal monotone operator. In [11], for each ε ≥ 0, Burachik and Svaiter defined Aε(x), an ε-enlargement of A, as follows

$$A^{\varepsilon} x=\{u{\in} E^{*} : \langle y-x,v-u\rangle \geq -\varepsilon,\ \forall y{\in} E,\ v{\in} Ay\}. $$

It is easy to see that A0x = Ax and that if 0 ≤ ε1 ≤ ε2, then \(A^{\varepsilon _{1}}x\subseteq A^{\varepsilon _{2}}x\) for any x ∈ E. The use of elements of Aεx instead of Ax allows an extra degree of freedom, which is very useful in various applications.

Let {Cn} be a sequence of closed, convex, and nonempty subsets of a reflexive Banach space E. We define the subsets s-LinCn and w-LsnCn of E as follows: x ∈ s-LinCn if and only if there exists a sequence {xn} ⊂ E converging strongly to x such that xn ∈ Cn for all n ≥ 1; x ∈ w-LsnCn if and only if there exist a subsequence \(\{C_{n_{k}}\}\) of {Cn} and a sequence {yk} ⊂ E such that \({y}_{k}\rightharpoonup x\) and \({y}_{k}{\in } {C}_{n_{k}}\) for all k ≥ 1. If s-LinCn = w-LsnCn = C0, then C0 is called the limit of {Cn} in the sense of Mosco [20] and it is denoted by \(C_{0}=\text {M}-\lim _{n\to {\infty }}C_{n}\).

Remark 2.1

We know that, if {Cn} is a decreasing sequence of closed convex subsets of a reflexive Banach space E and \(C_{0}={\cap }_{n = 1}^{{\infty }} {C}_{n}\neq \emptyset \), then \(C_{0}=\text {M}-\lim _{n\to {\infty }}C_{n}\) (see [8, 17]).

Indeed, it is clear that if x ∈ C0, then x ∈ s-LinCn and x ∈ w-LsnCn, because the constant sequence {xn} with xn = x for all n ≥ 1 converges strongly to x. Thus, we have C0 ⊂ s-LinCn and C0 ⊂ w-LsnCn.

Now, we will show that C0 ⊇ s-LinCn and C0 ⊇ w-LsnCn. Let x ∈ s-LinCn. From the definition of s-LinCn, there exists a sequence {xn} ⊂ E with xn ∈ Cn for all n ≥ 1 such that xn → x as n → ∞. Since {Cn} is a decreasing sequence, xn+k ∈ Cn for all n ≥ 1 and k ≥ 0. So, letting k → ∞ and using the closedness of Cn, we get that x ∈ Cn for all n ≥ 1. Thus, x ∈ C0 and hence C0 ⊇ s-LinCn. Next, let y ∈ w-LsnCn. From the definition of w-LsnCn, there exist a subsequence \(\{C_{n_{k}}\}\) of {Cn} and a sequence {yk} ⊂ E such that \({y}_{k}\rightharpoonup y\) and \({y}_{k}{\in } {C}_{n_{k}}\) for all k ≥ 1. Since {Cn} is a decreasing sequence, we have

$$ {y}_{k+p}{\in} {C}_{n_{k}} $$
(2.1)

for all k ≥ 1 and p ≥ 0. Since \(C_{n_{k}}\) is closed and convex, \(C_{n_{k}}\) is weakly closed in E for all k ≥ 1. So, letting p → ∞ in (2.1), we get that \(y{\in } {C}_{n_{k}}\) for all k ≥ 1. Since \(C_{k}\supseteq {C}_{n_{k}}\), y ∈ Ck for all k ≥ 1. So, y ∈ C0 and hence C0 ⊇ w-LsnCn.

Consequently, we obtain that s-LinCn = w-LsnCn = C0. Thus, \(C_{0}=\text {M}-\lim _{n\to {\infty }}C_{n}\).

The following lemmas will be needed in the sequel for the proof of main theorems.

Lemma 2.2

[2, 3, 16] Let E be a smooth, strictly convex and reflexive Banach space. Let C be a nonempty closed convex subset of E and let x1 ∈ E and z ∈ C. Then, the following conditions are equivalent:

  1. i)

    z = PCx1;

  2. ii)

    ⟨z − y, JE(x1 − z)⟩ ≥ 0, ∀y ∈ C.

Lemma 2.3

[36] Let E be a Banach space, R ∈ (0, ∞) and BR = {x ∈ E : ∥x∥ ≤ R}. If E is uniformly convex, then there exists a continuous, strictly increasing and convex function g : [0,2R]→[0, ∞) with g(0) = 0 such that

$$\|\alpha x + (1-\alpha )y\|^{2}\leq \alpha \|x\|^{2}+(1-\alpha)\|y\|^{2}-\alpha (1-\alpha)g(\|x-y\|),$$

for all x, y ∈ BR and α ∈ [0, 1].

Lemma 2.4

[35] Let E be a smooth, reflexive, and strictly convex Banach space having the Kadec-Klee property. Let {Cn} be a sequence of nonempty closed convex subsets of E. If \(C_{0}=\mathrm {M}-\lim _{n\to {\infty }}C_{n}\) exists and is nonempty, then \(\{{P}_{C_{n}}x\}\) converges strongly to \({P}_{C_{0}}x\) for each x ∈ E.

Lemma 2.5

[11] The graph of \(A^{\varepsilon } :\ \mathbb R_{+}\times E\longrightarrow 2^{E^{*}}\) is demiclosed, i.e., the conditions below hold:

i) If {xn} ⊂ E converges strongly to x0, \(\{u_{n}{\in } A^{\varepsilon _{n}}{x}_{n}\}\) converges weakly to u0 in E∗, and \(\{\varepsilon _{n}\}\subset \mathbb R_{+}\) converges to ε, then u0 ∈ Aεx0.

ii) If {xn} ⊂ E converges weakly to x0, \(\{u_{n}{\in } A^{\varepsilon _{n}}{x}_{n}\}\) converges strongly to u0 in E∗, and \(\{\varepsilon _{n}\}\subset \mathbb R_{+}\) converges to ε, then u0 ∈ Aεx0.

3 Main Results

First, we have the following lemma:

Lemma 3.1

Let E be a uniformly convex and smooth Banach space and let {Cn} be a decreasing sequence of closed and convex subsets of E such that \(C_{0}={\cap }_{n = 1}^{{\infty }} {C}_{n}\neq \emptyset \). Let \({P}_{n}={P}_{C_{n}}u\) with u ∈ E and let {xn} be a sequence in E such that

$${x}_{n}{\in} \{z{\in} {C}_{n} : \|u-z\|^{2}\leq d^{2}(u,C_{n})+\delta_{n}\},$$

for all n ≥ 1, where {δn} is a sequence of positive real numbers. If \(\lim _{n\to {\infty }}\delta _{n}= 0\), then {xn} and {Pn} converge strongly to the same point \({P}_{0}={P}_{C_{0}}u\).

Proof

From Remark 2.1, we have \(C_{0}=\text {M}-\lim _{n\to {\infty }}C_{n}\). By Lemma 2.4, we have \({P}_{n}\to {P}_{0}={P}_{C_{0}}u\), as n → ∞.

Since \({P}_{n}={P}_{C_{n}}u\), d(u, Cn) = ∥u − Pn∥. From xn ∈ Cn and the definition of xn, we have

$$ \|u-{x}_{n}\|^{2}\leq \|u-{P}_{n}\|^{2}+\delta_{n},\ \forall n\geq 2. $$
(3.1)

From (3.1) and the boundedness of {Pn}, the sequence {xn} is bounded. So, R = max{supn{∥xn∥},supn{∥Pn∥}} < ∞.

From the convexity of Cn, we have αPn + (1 − α)xn ∈ Cn for all α ∈ (0,1). Thus, from the definition of \({P}_{C_{n}}u\) and applying Lemma 2.3 on BR, we get

$$\begin{array}{@{}rcl@{}} \|{P}_{n}-u\|^{2}&\leq& \|\alpha {P}_{n} +(1-\alpha) {x}_{n}-u\|^{2}\\ &\leq& \alpha \|{P}_{n}-u\|^{2} +(1-\alpha)\|{x}_{n}-u\|^{2}-\alpha (1-\alpha)g (\|{x}_{n}-{P}_{n}\|), \end{array} $$

Combining this with (3.1), we obtain that

$$ \alpha g (\|{x}_{n}-{P}_{n}\|) \leq \delta_{n},\ \forall \alpha{\in} (0,1). $$
(3.2)

In (3.2), letting α → 1, we get

$$g (\|{x}_{n}-{P}_{n}\|) \leq \delta_{n}. $$

By the property of g and δn → 0, we have

$$\|{x}_{n}-{P}_{n}\|\to 0. $$

Thus, the sequences {xn} and {Pn} converge strongly to the same point P0, as n → ∞. □

Now, we have the following theorem:

Theorem 3.2

Let E be a uniformly convex and smooth Banach space and let \(A_{i}:\ E\longrightarrow 2^{E^{*}}\), i = 1,2,…, N, be maximal monotone operators of E into \(2^{E^{*}}\) such that \(S={\cap }_{i = 1}^{N}A_{i}^{-1}0\neq \emptyset \). Let {εn} and {δn} be nonnegative real sequences and let {ri,n}, i = 1,2,…, N, be positive real sequences such that mini{infn{ri,n}} ≥ r > 0. For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and

i) Find yi,n ∈ E such that \({J}_{E}({y}_{i,n}-{x}_{n})+r_{i,n}A^{\varepsilon _{n}}_{i}{y}_{i,n}\ni 0,\ i = 1,2,\dots ,N\).

ii) Choose in such that \(\|{y}_{{i}_{n},n}-{x}_{n}\|={\max }_{i = 1,\dots ,N}\{\|{y}_{i,n}-{x}_{n}\|\},\ \text {let}\ {y}_{n}={y}_{{i}_{n},n}\),

$$\begin{array}{@{}rcl@{}} C_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{n}-z,{J}_{E}({x}_{n}-{y}_{n})\rangle \geq -\varepsilon_{n}r_{{i}_{n},n}\}. \end{array} $$
(3.3)

iii) Find xn+1 ∈ {z ∈ Cn+1 : ∥u − z∥² ≤ d²(u, Cn+1) + δn+1}, n = 1,2,…

If \(\lim _{n\to {\infty }}\varepsilon _{n}r_{i,n}=\lim _{n\to {\infty }}\delta _{n} = 0\) for all i = 1,2,…, N, then the sequence {xn} converges strongly to PSu, as n → ∞.

Proof

First, we show that S ⊂ Cn for all n ≥ 1 by mathematical induction. Indeed, it is clear that S ⊂ C1 = E. Suppose that S ⊂ Cn for some n ≥ 1. Taking v ∈ S, we have

$${J}_{E}({y}_{{i}_{n},n}-{x}_{n})+r_{{i}_{n},n}A^{\varepsilon_{n}}_{{i}_{n}}{y}_{{i}_{n},n}\ni 0,\ A_{{i}_{n}}v\ni 0.$$

From the definition of \(A^{\varepsilon _{n}}_{{i}_{n}}\), we get

$$\langle {y}_{n}-v,{J}_{E}({x}_{n}-{y}_{n})\rangle \geq -\varepsilon_{n} r_{{i}_{n},n}.$$

Thus, v ∈ Cn+1. Since v is arbitrary in S, S ⊂ Cn+1. So, by induction, we obtain that S ⊂ Cn for all n ≥ 1.

Moreover, Cn is a closed and convex subset of E for all n. Hence, the sequence {xn} is well defined.

Now, for each n ≥ 1, denote by \({P}_{n}={P}_{C_{n}}u\). By Lemma 3.1, we obtain that the sequences {xn} and {Pn} converge strongly to the same point \({P}_{0} ={P}_{C_{0}}u\) with \(C_{0}={\cap }_{n = 1}^{{\infty }} {C}_{n}\).

From Pn+1 ∈ Cn+1 and the definition of Cn+1, we have

$$\langle {y}_{n}-{P}_{n + 1},{J}_{E}({x}_{n}-{y}_{n})\rangle \geq -\varepsilon_{n} r_{{i}_{n},n}.$$

The above inequality is equivalent to

$$\langle {y}_{n}-{x}_{n},{J}_{E}({x}_{n}-{y}_{n})\rangle+\langle {x}_{n}-{P}_{n + 1},{J}_{E}({x}_{n}-{y}_{n})\rangle \geq -\varepsilon_{n} r_{{i}_{n},n}.$$

So, we have

$$\begin{array}{@{}rcl@{}} \|{x}_{n}-{y}_{n}\|^{2}-\varepsilon_{n} r_{{i}_{n},n}&\leq& \langle {x}_{n}-{P}_{n + 1},{J}_{E}({x}_{n}-{y}_{n})\rangle\\ &\leq& \|{x}_{n}-{P}_{n + 1}\|\ \|{x}_{n}-{y}_{n}\|\\ &\leq& \frac{1}{2}(\|{x}_{n}-{P}_{n + 1}\|^{2} + \|{x}_{n}-{y}_{n}\|^{2}). \end{array} $$

This implies that

$$\|{x}_{n}-{y}_{n}\|^{2}\leq \|{x}_{n}-{P}_{n + 1}\|^{2} + 2\varepsilon_{n} r_{{i}_{n},n}. $$

From Pn → P0, xn → P0 and \(\varepsilon _{n} r_{{i}_{n},n}\to 0\), we obtain that

$$\|{x}_{n}-{y}_{n}\|\to 0.$$

By the definition of yn, we get that

$$ \|{x}_{n}-{y}_{i,n}\|\to 0,\ \forall i = 1,2,\dots,N. $$
(3.4)

This implies that yi,n → P0 for all i = 1,2,…, N, as n → ∞. Furthermore, since mini{infn{ri,n}} ≥ r > 0, it follows from (3.4) that

$$0\leftarrow \frac{1}{r_{i,n}}{J}_{E}({x}_{n}-{y}_{i,n}){\in} A^{\varepsilon_{n}}_{i}{y}_{i,n},$$

for all i = 1,2,…, N, as n → ∞. So, by Lemma 2.5, we obtain \({P}_{0}{\in } A^{-1}_{i}0\) for all i = 1,2,…, N, that is, P0 ∈ S.

Finally, we show that P0 = PSu. Indeed, let x∗ = PSu. Since S ⊂ Cn, x∗ ∈ Cn. Thus, from \({P}_{n}={P}_{C_{n}}u\), we have

$$\|{P}_{n}-u\|\leq \|u-{x}^{*}\|,\ \forall n\geq 1.$$

Letting n → ∞, we get that ∥u − P0∥ ≤ ∥u − x∗∥. By the uniqueness of x∗, we obtain that P0 = x∗ = PSu. This completes the proof. □

Now, in the following theorem, we give another way to construct the subsets Cn.

Theorem 3.3

Let E be a uniformly convex and smooth Banach space and let \(A_{i}:\ E\longrightarrow 2^{E^{*}}\), i = 1,2,…, N, be maximal monotone operators of E into \(2^{E^{*}}\) such that \(S={\cap }_{i = 1}^{N}A_{i}^{-1}0\neq \emptyset \). Let {εn} and {δn} be nonnegative real sequences and let {ri,n}, i = 1,2,…, N, be positive real sequences such that mini{infn{ri,n}} ≥ r > 0. For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and

$$\begin{array}{@{}rcl@{}} &&\text{i)} ~\text{Find} ~{y}_{i,n}{\in} E\ \text{ such that } {J}_{E}({y}_{i,n}-{x}_{n})+r_{i,n}A^{\varepsilon_{n}}_{i}{y}_{i,n}\ni 0,\ i = 1,2,\dots,N,\\ &&C^{i}_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{i,n}-z,{J}_{E}({x}_{n}-{y}_{i,n})\rangle \geq -\varepsilon_{n} r_{i,n}\},\ i = 1,2,\dots,N,\\ &&C_{n + 1}={\cap}_{i = 1}^{N}C_{n + 1}^{i}.\\ &&\text{ii)} ~\text{Find}~ {x}_{n + 1}{\in} \{z{\in} {C}_{n + 1} : \|u-z\|^{2}\leq d^{2}(u,C_{n + 1})+\delta_{n + 1}\},\ n = 1,2,\dots \end{array} $$

If \(\lim _{n\to {\infty }}\varepsilon _{n}r_{i,n}=\lim _{n\to {\infty }}\delta _{n} = 0\) for all i = 1,2,…, N, then the sequence {xn} converges strongly to PSu, as n → ∞.

Proof

First, we show that S ⊂ Cn for all n ≥ 1 by mathematical induction. Indeed, it is clear that S ⊂ C1 = E. Suppose that S ⊂ Cn for some n ≥ 1. Taking v ∈ S, we have

$${J}_{E}({y}_{i,n}-{x}_{n})+r_{i,n}A^{\varepsilon_{n}}_{i}{y}_{i,n}\ni 0,\ A_{i}v\ni 0.$$

From the definition of \(A^{\varepsilon _{n}}_{i}\), we get

$$\langle {y}_{i,n}-v,{J}_{E}({x}_{n}-{y}_{i,n})\rangle \geq -\varepsilon_{n} r_{i,n}.$$

Thus, \(v{\in } C^{i}_{n + 1}\) for all i = 1,2,…, N. So, \(v{\in } {C}_{n + 1}={\cap }_{i = 1}^{N}C^{i}_{n + 1}\). By induction, we obtain that S ⊂ Cn for all n ≥ 1.

It is clear that {Cn} is a decreasing sequence of closed and convex subsets of E with \({\cap }_{n = 1}^{{\infty }} {C}_{n}=C_{0}\supset S\neq \emptyset \).

Now, for each n, denote by \({P}_{n}={P}_{C_{n}}u\). By Lemma 3.1, the sequences {xn} and {Pn} converge strongly to the same point \({P}_{0} ={P}_{C_{0}}u\).

We have \({P}_{n + 1}{\in } {C}_{n + 1}={\cap }_{i = 1}^{N}C^{i}_{n + 1}\). Hence, \({P}_{n + 1}{\in } C^{i}_{n + 1}\) for all i = 1,2,…, N. Thus, from the definition of \(C^{i}_{n + 1}\), we have

$$\langle {y}_{i,n}-{P}_{n + 1},{J}_{E}({x}_{n}-{y}_{i,n})\rangle \geq -\varepsilon_{n} r_{i,n},$$

for all i = 1,2,…, N. Thus, we get that

$$\|{x}_{n}-{y}_{i,n}\|^{2}\leq \|{x}_{n}-{P}_{n + 1}\|^{2}+ 2\varepsilon_{n} r_{i,n},$$

for all i = 1,2,…, N. From Pn → P0, xn → P0 and \(\varepsilon _{n} r_{i,n}\to 0\), we obtain that

$$\|{x}_{n}-{y}_{i,n}\|\to 0,$$

for all i = 1,2,…, N. The rest of the proof follows the pattern of Theorem 3.2. This completes the proof. □

Remark 3.4

a) In Theorems 3.2 and 3.3, if N = 1, then the sequence {xn} is defined as follows: For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and

$$\begin{array}{@{}rcl@{}} &&\text{\text{i)} Find } {y}_{n}{\in} E\ \text{such that } {J}_{E}({y}_{n}-{x}_{n})+r_{n}A^{\varepsilon_{n}}{y}_{n}\ni 0,\\ &&C_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{n}-z,{J}_{E}({x}_{n}-{y}_{n})\rangle \geq -\varepsilon_{n} r_{n}\}.\\ &&\text{\text{ii)} Find } {x}_{n + 1}{\in} \{z{\in} {C}_{n + 1} : \|u-z\|^{2}\leq d^{2}(u,C_{n + 1})+\delta_{n + 1}\},\ n = 1,2, \dots, \end{array} $$

where {rn} is a positive real sequence and {εn}, {δn} are nonnegative real sequences such that infn{rn} ≥ r > 0 and \(\lim _{n\to {\infty }}r_{n}\varepsilon _{n}=\lim _{n\to {\infty }}\delta _{n}= 0\).

b) In Theorem 3.3, to define the element xn+1, we have to find the projection of u onto the intersection of n × N half-spaces. In Theorem 3.2, we only find the projection of u onto the intersection of n half-spaces. So, the algorithm to define xn+1 in Theorem 3.2 is simpler than the algorithm in Theorem 3.3. However, in both cases, we can find the element xn+1 as an approximate solution of the following minimization problem: find a minimum point of \(f(x)=\frac {1}{2}\|x-u\|^{2}\) over the intersection of a finite family of half-spaces. In particular, if \(E=\mathbb R^{m}\), then we can find xn+1 easily by using the “Quadratic Programming Algorithms” package in MATLAB software, as shown in the sketch below.
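
For example, in E = ℝm the projection step can be carried out with any quadratic programming solver; a minimal Python sketch (ours; the half-space data A, b and the use of SciPy's SLSQP routine are illustrative choices) projects u onto {z : Az ≤ b} by minimizing ½∥z − u∥².

```python
import numpy as np
from scipy.optimize import minimize

def project_onto_halfspaces(u, A, b):
    """Solve min_z 0.5*||z - u||^2 subject to A @ z <= b (intersection of half-spaces)."""
    cons = [{'type': 'ineq',
             'fun': lambda z: b - A @ z,   # SLSQP requires constraint values >= 0
             'jac': lambda z: -A}]
    res = minimize(lambda z: 0.5 * np.dot(z - u, z - u), x0=u,
                   jac=lambda z: z - u, constraints=cons, method='SLSQP')
    return res.x

# toy example in R^3: project (2,2,2) onto {z_1 <= 1} ∩ {z_2 <= 0.5}
u = np.array([2.0, 2.0, 2.0])
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([1.0, 0.5])
print(project_onto_halfspaces(u, A, b))   # approximately [1.0, 0.5, 2.0]
```

Since step iii) of Theorems 3.2 and 3.3 only requires ∥u − xn+1∥² ≤ d²(u, Cn+1) + δn+1, a solution returned by the solver up to its numerical tolerance is admissible, with δn+1 absorbing the inexactness.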

Next, we have the following corollaries:

Corollary 3.5

Let E be a uniformly convex and smooth Banach space and let \(A_{i}:\ E\longrightarrow 2^{E^{*}}\), i = 1,2,…, N, be maximal monotone operators of E into \(2^{E^{*}}\) such that \(S={\cap }_{i = 1}^{N}A_{i}^{-1}0\neq \emptyset \). Let \({J^{i}_{r}}\) be the metric resolvent of Ai for r > 0, i = 1,2,…, N. Let {δn} be a nonnegative real sequence and let {ri,n}, i = 1,2,…, N, be positive real sequences such that mini{infn{ri,n}} ≥ r > 0. For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and

$$\begin{array}{@{}rcl@{}} &&\text{i) } {y}_{i,n}=J^{i}_{r_{i,n}}{x}_{n},\ i = 1,2,\dots,N\\ &&\text{ii) } \text{ Choose } {i}_{n} \text{ such that } \|{y}_{{i}_{n},n}-{x}_{n}\|=\max\limits_{i = 1,\dots,N}\{\|{y}_{i,n}-{x}_{n}\|\},\ \text{let}\ {y}_{n}={y}_{{i}_{n},n},\\ &&C_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{n}-z,{J}_{E}({x}_{n}-{y}_{n})\rangle \geq 0\},\text{ or} \\ &&\text{ii*) } C^{i}_{n + 1}=\{z{\in} {C}_{n}:\ \langle {y}_{i,n}-z,{J}_{E}({x}_{n}-{y}_{i,n})\rangle \geq 0\},\ i = 1,2,\dots,N\\ &&C_{n + 1}={\cap}_{i = 1}^{N}C_{n + 1}^{i},\\ &&\text{iii) } \text{ Find } {x}_{n + 1}{\in} \{z{\in} {C}_{n + 1} : \|u-z\|^{2}\leq d^{2}(u,C_{n + 1})+\delta_{n + 1}\},\ n = 1,2,\dots \end{array} $$

If \(\lim _{n\to {\infty }}\delta _{n} = 0\), then the sequence {xn} converges strongly to PSu, as n → ∞.

Proof

If εn = 0 for all n ≥ 1 in Theorems 3.2 and 3.3, then the inclusion defining the elements yi,n, i = 1,2,…, N, can be rewritten in the form

$${J}_{E}({y}_{i,n}-{x}_{n})+r_{i,n}A_{i}{y}_{i,n}\ni 0.$$

This inclusion is equivalent to

$${y}_{i,n}=J^{i}_{r_{i,n}}{x}_{n},$$

for all i = 1,2,…, N.

So, applying Theorems 3.2 and 3.3 with εn = 0 for all n ≥ 1, we obtain the conclusion of this corollary. □

Corollary 3.6

Let E be a uniformly convex and smooth Banach space and let \(A_{i}:\ E\longrightarrow 2^{E^{*}}\), i = 1,2,…, N, be maximal monotone operators of E into \(2^{E^{*}}\) such that \(S={\cap }_{i = 1}^{N}A_{i}^{-1}0\neq \emptyset \). Let {εn} be a nonnegative real sequence and let {ri,n}, i = 1,2,…, N, be positive real sequences such that mini{infn{ri,n}} ≥ r > 0. For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and

$$\begin{array}{@{}rcl@{}} &&\text{i) } \text{ Find } {y}_{i,n}{\in} E\ \text{such that } {J}_{E}({y}_{i,n}-{x}_{n})+r_{i,n}A^{\varepsilon_{n}}_{i}{y}_{i,n}\ni 0,\ i = 1,2,\dots,N\\ &&\text{ii) } \text{ Choose } {i}_{n} \text{ such that } \|{y}_{{i}_{n},n}-{x}_{n}\|=\max\limits_{i = 1,\dots,N}\{\|{y}_{i,n}-{x}_{n}\|\},\ \text{ let } {y}_{n}={y}_{{i}_{n},n},\\ &&C_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{n}-z,{J}_{E}({x}_{n}-{y}_{n})\rangle \geq -\varepsilon_{n} r_{{i}_{n},n}\},\text{ or}\\ &&\text{ii*) } C^{i}_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{i,n}-z,{J}_{E}({x}_{n}-{y}_{i,n})\rangle \geq -\varepsilon_{n}r_{i,n}\},\ i = 1,2,\dots,N\\ &&C_{n + 1}={\cap}_{i = 1}^{N}C_{n + 1}^{i},\\ &&\text{iii) } {x}_{n + 1}={P}_{C_{n + 1}}u,\ n = 1,2,\dots \end{array} $$

If \(\lim _{n\to {\infty }}\varepsilon _{n} r_{i,n}= 0\) for all i = 1,2,…, N, then the sequence {xn} converges strongly to PSu, as n → ∞.

Proof

If δn = 0 for all n ≥ 1 in Theorems 3.2 and 3.3, then the element xn+1 is defined by

$${x}_{n + 1}\in\{z{\in} {C}_{n + 1} : \|u-z\|\leq d(u,C_{n + 1})\},$$

that is \({x}_{n + 1}={P}_{C_{n + 1}}u\).

So, applying Theorems 3.2 and 3.3 with δn = 0 for all n ≥ 1, we obtain the conclusion of this corollary. □

Remark 3.7

If εn = δn = 0 for all n ≥ 1, then the sequence {xn} is defined as follows: For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and

$$\begin{array}{@{}rcl@{}} &&\text{i) } {y}_{i,n}=J^{i}_{r_{i,n}}{x}_{n},\ i = 1,2,\dots,N,\\ &&\text{ii) } \text{Choose } {i}_{n} \text{ such that } \|{y}_{{i}_{n},n}-{x}_{n}\|=\max\limits_{i = 1,\dots,N}\{\|{y}_{i,n}-{x}_{n}\|\},\ \text{let}\ {y}_{n}={y}_{{i}_{n},n},\\ &&C_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{n}-z,{J}_{E}({x}_{n}-{y}_{n})\rangle \geq 0\},\text{ or} \\ &&\text{ii*) } C^{i}_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{i,n}-z,{J}_{E}({x}_{n}-{y}_{i,n})\rangle \geq 0\},\ i = 1,2,\dots,N\\ &&C_{n + 1}={\cap}_{i = 1}^{N}C_{n + 1}^{i},\\ &&\text{iii) } {x}_{n + 1}={P}_{C_{n + 1}}u,\ n = 1,2,\dots \end{array} $$

Remark 3.8

In Remark 3.7, if E is a real Hilbert space and N = 1, then we obtain the result of Takahashi et al. in [32] (see [32, Theorem 4.5]). Note that in this case we do not use the condition rn → ∞. So, Theorems 3.2 and 3.3 are more general than the result of Takahashi et al. Furthermore, in the proofs of Theorems 3.2 and 3.3, we used the properties (Remark 2.1) of the limit of {Cn} in the sense of Mosco [20] and Lemmas 2.3–2.5 to show that the sequence {xn} converges strongly to PSu, while in order to prove [32, Theorem 4.5], Takahashi et al. used the NST(I) condition together with Lemma 3.1 and Theorems 3.2 and 3.3 of [32]. Thus, the proofs of the main theorems in this paper are simpler than the proof of [32, Theorem 4.5].

4 Applications

4.1 The Common Minimum Point Problem

Let E be a Banach space and let f : E→(−∞, ∞] be a proper, lower semicontinuous and convex function. The subdifferential of f is the multi-valued mapping \(\partial f :\ E\longrightarrow 2^{E^{*}}\) which is defined by

$$\partial f(x)=\{g{\in} E^{*} : f(y)-f(x)\geq \langle y-x,g\rangle,\ \forall y{\in} E\}$$

for all x ∈ E. We know that ∂f is a maximal monotone operator (see [28]) and x0 ∈ argminx∈E f(x) if and only if ∂f(x0) ∋ 0.

For each ε ≥ 0, the ε-subdifferential of f is given by

$$\partial_{\varepsilon} f(x)=\{u{\in} E^{*} : f(y)-f(x) \geq \langle y-x,u\rangle -\varepsilon ,\ \forall y{\in} E\},$$

We know that ∂εf(x) ⊂ ∂εf(x) for any x ∈ E, that is, the ε-subdifferential is always contained in the ε-enlargement of ∂f. Moreover, in some particular cases, we have that \(\partial _{\varepsilon } f(x)\subsetneq \partial ^{\varepsilon } f(x)\) (see [10, Examples 2 and 3]).
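
As a simple one-dimensional illustration (our own computation, in the spirit of the examples in [10]): take f(x) = ½x² on ℝ, so that ∂f(x) = {x}. A direct computation from the two definitions gives

$$\partial_{\varepsilon} f(x)=\left[x-\sqrt{2\varepsilon},\,x+\sqrt{2\varepsilon}\right],\qquad \partial^{\varepsilon} f(x)=\left[x-2\sqrt{\varepsilon},\,x+2\sqrt{\varepsilon}\right],$$

so the inclusion is strict for every ε > 0.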

In [4], when E is a real Hilbert space, Alvarez proposed the following approximate inertial proximal algorithm:

$$c_{n}\partial_{\varepsilon_{n}}f ({x}_{n + 1}) +{x}_{n + 1}-{x}_{n} -\alpha_{n} ({x}_{n}-{x}_{n-1})\ni 0. $$

In [21], Moudafi and Elisabeth extended the above iterative method in the form

$$ c_{n}\partial^{\varepsilon_{n}}f ({x}_{n + 1}) +{x}_{n + 1}-{x}_{n} -\alpha_{n} ({x}_{n}-{x}_{n-1})\ni 0. $$
(4.1)

They proved that if there exists c > 0 such that cn ≥ c for all n ≥ 1, if there is α ∈ [0,1) such that {αn} ⊂ [0, α], if \({\sum }_{k = 1}^{{\infty }} c_{k}\varepsilon _{k}<{\infty }\) and

$$\sum\limits_{n = 1}^{{\infty}}\alpha_{n} \|{x}_{n}-{x}_{n-1}\|^{2} <{\infty},$$

then the sequence {xn} converges weakly to a minimum point of f.

Note that, if αn = 0 for all n ≥ 1, then (4.1) becomes

$$c_{n}\partial^{\varepsilon_{n}}f ({x}_{n + 1}) +{x}_{n + 1}-{x}_{n} \ni 0. $$

From Theorems 3.2 and 3.3, we have the following theorem:

Theorem 4.1

Let E be a uniformly convex and smooth Banach space and let fi, i = 1,2,…, N, be proper, lower semicontinuous and convex functions of E into (−∞, ∞] such that \(S={\cap }_{i = 1}^{N}\arg \min _{x{\in } E} f_{i}(x)\neq \emptyset \). Let {εn} and {δn} be nonnegative real sequences and let {ri,n}, i = 1,2,…, N, be positive real sequences such that mini{infn{ri,n}} ≥ r > 0. For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and

$$\begin{array}{@{}rcl@{}} &&\text{i)}\text{ Find } {y}_{i,n}{\in} E\ \text{ such that } {J}_{E}({y}_{i,n}-{x}_{n})+r_{i,n}\partial^{\varepsilon_{n}} f_{i}({y}_{i,n})\ni 0,\ i = 1,2,\dots,N,\\ &&\text{ii)}\text{ Choose } {i}_{n} \text{ such that } \|{y}_{{i}_{n},n}-{x}_{n}\|=\max\limits_{i = 1,\dots,N}\{\|{y}_{i,n}-{x}_{n}\|\},\ \text{ let } {y}_{n}={y}_{{i}_{n},n},\\ &&C_{n + 1}=\{z{\in} {C}_{n}:\ \langle {y}_{n}-z,{J}_{E}({x}_{n}-{y}_{n})\rangle \geq -\varepsilon_{n} r_{{i}_{n},n}\},\text{ or }\\ &&\text{ii*)}~ C^{i}_{n + 1}=\{z{\in} {C}_{n}:\ \langle {y}_{i,n}-z,{J}_{E}({x}_{n}-{y}_{i,n})\rangle \geq -\varepsilon_{n} r_{i,n}\},\ i = 1,2,\dots,N,\\ &&C_{n + 1}={\cap}_{i = 1}^{N}C_{n + 1}^{i},\\ &&\text{iii)}\text{ Find } {x}_{n + 1}{\in} \{z{\in} {C}_{n + 1}:\ \|u-z\|^{2}\leq d^{2}(u,C_{n + 1})+\delta_{n + 1}\},\ n = 1,2,\dots \end{array} $$

If \(\lim _{n\to {\infty }}\varepsilon _{n}r_{i,n}=\lim _{n\to {\infty }}\delta _{n} = 0\) for all i = 1,2,…, N, then the sequence {xn} converges strongly to PSu, as n → ∞.

Remark 4.2

In Theorem 4.1, if εn = 0 for all n ≥ 1, then the sequence {xn} is defined as follows: For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and

$$\begin{array}{@{}rcl@{}} &&\text{i) } {y}_{i,n}=\arg\min_{y{\in} E}\left\{ f_{i}(y)+\frac{1}{2r_{i,n}}\|y-{x}_{n}\|^{2}\right\},\ i = 1,2,\dots,N,\\ &&\text{ii) } \text{Choose } {i}_{n} \text{ such that } \|{y}_{{i}_{n},n}-{x}_{n}\|=\max\limits_{i = 1,\dots,N}\{\|{y}_{i,n}-{x}_{n}\|\},\ \text{let}\ {y}_{n}={y}_{{i}_{n},n},\\ &&C_{n + 1}=\{z{\in} {C}_{n}:\ \langle {y}_{n}-z,{J}_{E}({x}_{n}-{y}_{n})\rangle \geq 0\},\text{ or} \\ &&\text{ii*) } C^{i}_{n + 1}=\{z{\in} {C}_{n}:\ \langle {y}_{i,n}-z,{J}_{E}({x}_{n}-{y}_{i,n})\rangle \geq 0\},\ i = 1,2,\dots,N,\\ &&C_{n + 1}={\cap}_{i = 1}^{N}C_{n + 1}^{i},\\ &&\text{iii) } \text{ Find } {x}_{n + 1}{\in} \{z{\in} {C}_{n + 1}:\ \|u-z\|^{2}\leq d^{2}(u,C_{n + 1})+\delta_{n + 1}\},\ n = 1,2,\dots \end{array} $$

Note that if E is a real Hilbert space, then the element yi, n can be defined as follows

$${y}_{i,n}=(I+r_{i,n}\partial f_{i})^{-1}({x}_{n})$$

for all i = 1,2,…, N and for all n ≥ 0.
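
As a concrete instance of this resolvent (our illustrative choice of fi, not taken from the paper): take H = ℝm and fi(x) = ∥x∥1; then (I + ri,n∂fi)−1 is the componentwise soft-thresholding operator.

```python
import numpy as np

def prox_l1(x, r):
    """(I + r * d||.||_1)^{-1}(x): componentwise soft-thresholding with threshold r."""
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

print(prox_l1(np.array([1.5, -0.2, 0.7]), r=0.5))   # approximately [1.0, 0.0, 0.2]
```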

4.2 The Convex Feasibility Problem

Let C be a nonempty closed convex subset of E. Let iC be the indicator function of C, that is,

$${i}_{C}(x)=\left\{\begin{array}{lllll} 0& \text{ if } x{\in} C,\\ {\infty}& \text{ if } x\notin C. \end{array}\right.$$

It is easy to see that iC is a proper, lower semicontinuous and convex function, so its subdifferential ∂iC is a maximal monotone operator. We know that

$$\partial {i}_{C} (u)=N(u,C)=\{f{\in} E^{*} : \langle u-y, f\rangle \geq 0\ \forall y{\in} C\},$$

where N(u, C) is the normal cone of C at u.

We denote the metric resolvent of ∂iC by Jr with r > 0. Suppose u = Jrx for x ∈ E, that is,

$$\frac{{J}_{E}(x-u)}{r}{\in} \partial {i}_{C} (u)=N(u,C).$$

Thus, we have

$$\langle u-y,{J}_{E}(x-u)\rangle \geq 0,$$

for all y ∈ C. From Lemma 2.2, we get that u = PCx.

So, from Corollary 3.5, we have the following theorem:

Theorem 4.3

Let E be a uniformly convex and smooth Banach space and let Qi, i = 1,2,…, N, be nonempty closed convex subsets of E such that \(S={\cap }_{i = 1}^{N}Q_{i}\neq \emptyset \). Let {δn} be a nonnegative real sequence. For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and

$$\begin{array}{@{}rcl@{}} &&\text{i) } {y}_{i,n}={P}_{Q_{i}}{x}_{n},\ i = 1,2,\dots,N,\\ &&\text{ii) } \text{Choose } {i}_{n} \text{ such that } \|{y}_{{i}_{n},n}-{x}_{n}\|=\max\limits_{i = 1,\dots,N}\{\|{y}_{i,n}-{x}_{n}\|\},\ \text{ let } {y}_{n}={y}_{{i}_{n},n},\\ &&C_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{n}-z,{J}_{E}({x}_{n}-{y}_{n})\rangle \geq 0\},\text{ or}\\ &&\text{ii*) } C^{i}_{n + 1}=\{z{\in} {C}_{n}:\ \langle {y}_{i,n}-z,{J}_{E}({x}_{n}-{y}_{i,n})\rangle \geq 0\},\ i = 1,2,\dots,N,\\ &&C_{n + 1}={\cap}_{i = 1}^{N}C_{n + 1}^{i},\\ &&\text{iii) } \text{ Find } {x}_{n + 1}{\in} \{z{\in} {C}_{n + 1} : \|u-z\|^{2}\leq d^{2}(u,C_{n + 1})+\delta_{n + 1}\},\ n = 1,2,\dots \end{array} $$

If \(\lim _{n\to {\infty }}\delta _{n} = 0\), then the sequence {xn} converges strongly to PSu, as n → ∞.

4.3 A System of Variational Inequalities

Let C be a nonempty closed convex subset of E and let A : C → E∗ be a monotone operator which is hemicontinuous (that is, for any x ∈ C, y ∈ E with x + tny ∈ C and tn → 0+, we have \(A(x+t_{n}y)\rightharpoonup Ax\)). Then, a point u ∈ C is called a solution of the variational inequality for A if

$$\langle y-u, Au\rangle \geq 0\ \forall y{\in} C.$$

We denote by VI(C, A) the set of all solutions of the variational inequality for A.

Define a mapping TA by

$$T_{A}x=\left\{\begin{array}{llll}Ax+N(x,C)&\text{if } x{\in} C,\\ \emptyset&\text{if } x\notin C. \end{array}\right.$$

By Rockafellar [28], we know that TA is maximal monotone and \(T_{A}^{-1}0=VI(C,A)\).

For any y ∈ E and r > 0, we know that the variational inequality VI(C, rA(⋅) + JE(⋅ − y)) has a unique solution. Let x be this solution, that is,

$$\langle z-x, rA(x)+{J}_{E}(x-y)\rangle \geq 0\ \forall z{\in} C.$$

From the definition of N(x, C), we have

$$-rAx-{J}_{E}(x-y){\in}N(x,C)=rN(x,C), $$

which implies that

$$\frac{{J}_{E}(y-x)}{r}{\in} Ax+N(x,C)={T}_{A}x. $$

Thus, we obtain that x = Jry, where Jr is the metric resolvent of TA.

Now, let E be a uniformly convex and smooth Banach space and let Ki, i = 1,2,…, N, be closed convex subsets of E. Let Ai : Ki → E∗ be monotone and hemicontinuous operators. Suppose that \(S={\cap }_{i = 1}^{N}VI(K_{i},A_{i})\neq \emptyset \).

We consider the following problem:

$$ \text{Find an element } {x}^{*}{\in} S. $$
(4.2)

To solve Problem (4.2), we define the operators \(T_{A_{i}}\) as follows

$$T_{A_{i}}x=\left\{\begin{array}{llll} A_{i}x+N(x,K_{i})& \text{if } x{\in} K_{i},\\ \emptyset&\text{if } x\notin K_{i}, \end{array}\right.$$

for all i = 1,2,…, N.

So, from Corollary 3.5, we have the following theorem:

Theorem 4.4

Let {δn} be a nonnegative real sequence and let {ri,n}, i = 1,2,…, N, be positive real sequences such that mini{infn{ri,n}} ≥ r > 0. For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and

$$\begin{array}{@{}rcl@{}} &&\text{i) } {y}_{i,n}=VI\left( K_{i},r_{i,n}A_{i}(\cdot ) +{J}_{E}(\cdot -{x}_{n})\right),\ i = 1,2,\dots,N,\\ &&\text{ii) } \text{ Choose } {i}_{n} \text{ such that } \|{y}_{{i}_{n},n}-{x}_{n}\|=\max\limits_{i = 1,\dots,N}\{\|{y}_{i,n}-{x}_{n}\|\},\ \text{ let } {y}_{n}={y}_{{i}_{n},n},\\ &&C_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{n}-z,{J}_{E}({x}_{n}-{y}_{n})\rangle \geq 0\},\text{ or}\\ &&\text{ii*) } C^{i}_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{i,n}-z,{J}_{E}({x}_{n}-{y}_{i,n})\rangle \geq 0\},\ i = 1,2,\dots,N,\\ &&C_{n + 1}={\cap}_{i = 1}^{N}C_{n + 1}^{i},\\ &&\text{iii) } \text{ Find } {x}_{n + 1}{\in} \{z{\in} {C}_{n + 1} : \|u-z\|^{2}\leq d^{2}(u,C_{n + 1})+\delta_{n + 1}\},\ n = 1,2,\dots \end{array} $$

If \(\lim _{n\to {\infty }}\delta _{n} = 0\), then the sequence {xn} converges strongly to PSu, as n → ∞.
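
In a Hilbert space (where JE is the identity), step i) of the above algorithm asks for the unique solution of the strongly monotone variational inequality VI(Ki, ri,nAi(⋅) + (⋅ − xn)). A minimal sketch of one such subproblem solve (ours; the affine operator A, the nonnegative orthant K and the step size τ are illustrative assumptions) uses the projected fixed-point iteration z ← PK(z − τ(rA(z) + z − xn)), which is a contraction for sufficiently small τ.

```python
import numpy as np

def vi_step(x_n, A, proj_K, r, tau=0.1, iters=500):
    """Approximate y = VI(K, r*A(.) + (. - x_n)) by projected fixed-point iterations."""
    z = proj_K(x_n)
    for _ in range(iters):
        z = proj_K(z - tau * (r * A(z) + z - x_n))
    return z

# toy usage: K = nonnegative orthant of R^2, A(z) = M z + q monotone (M is PSD)
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])
q = np.array([1.0, -1.0])
y = vi_step(x_n=np.array([0.5, 0.5]),
            A=lambda z: M @ z + q,
            proj_K=lambda z: np.maximum(z, 0.0),
            r=1.0)
print(y)   # approximately [0.0, 0.75]
```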

5 Numerical Test

We take E = L2([0,1]) with the inner product

$$\langle f,g\rangle ={{\int}_{0}^{1}} f(t)g(t)dt$$

and the norm

$$\| f \| =\left( {{\int}_{0}^{1}} f^{2}(t)dt\right)^{1/2},$$

for all f, g ∈ L2([0, 1]).

Now, let

$$Q_{i}=\{x{\in} L_{2}([0,1]) : \langle a_{i},x\rangle =b_{i}\},$$

where ai(t) = t^{i−1}, \(b_{i}=\frac {1}{i + 2}\) for all i = 1,2,…,10 and t ∈ [0,1].

It is easy to check that \(x(t)=t^{2}{\in } S={\cap }_{i = 1}^{10}Q_{i}\). We consider the problem of finding an element x ∈ S.
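
A sketch of how this test problem can be set up numerically (our own discretization and code, not the authors'): approximate L2([0,1]) by function values on a uniform grid, so that inner products become quadrature sums, and use the explicit formula for the metric projection onto the hyperplane Qi. For brevity, the last line below replaces the exact projection step xn+1 = PCn+1x1 of the two cases described below by a simple averaging of the projections PQixn; the actual step would project x1 onto the accumulated half-spaces, for instance with a quadratic programming routine as in Remark 3.4 b).

```python
import numpy as np

m = 200                                      # grid points on [0, 1]
t = np.linspace(0.0, 1.0, m)
dt = t[1] - t[0]
ip = lambda f, g: np.sum(f * g) * dt         # discretized L2 inner product

N = 10
a = [t ** (i - 1) for i in range(1, N + 1)]  # a_i(t) = t^{i-1}
b = [1.0 / (i + 2) for i in range(1, N + 1)] # b_i = 1/(i+2)

def proj_Q(x, ai, bi):
    """Metric projection onto the hyperplane Q_i = {x : <a_i, x> = b_i}."""
    return x - (ip(ai, x) - bi) / ip(ai, ai) * ai

x = np.zeros(m)                              # starting point x_1 = 0
for n in range(100):
    ys = [proj_Q(x, a[i], b[i]) for i in range(N)]
    x = sum(ys) / N                          # simplified stand-in for the projection step
print(max(abs(ip(a[i], x) - b[i]) for i in range(N)))   # residual of the 10 constraints
```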

Now, by using Theorem 4.3, we consider the convergence of the sequence {xn} which is generated by the following two cases:

Case A. :
$$\begin{array}{@{}rcl@{}} &&\text{i) } {y}_{i,n}={P}_{Q_{i}}{x}_{n},\ i = 1,2,\dots,N,\\ &&\text{ii) } \text{ Choose } {i}_{n} \text{ such that } \|{y}_{{i}_{n},n}-{x}_{n}\|=\max\limits_{i = 1,\dots,N}\{\|{y}_{i,n}-{x}_{n}\|\},\ \text{ let } {y}_{n}={y}_{{i}_{n},n},\\ &&C_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{n}-z,{J}_{E}({x}_{n}-{y}_{n})\rangle \geq 0\},\\ &&\text{iii) } {x}_{n + 1}={P}_{C_{n + 1}}{x}_{1},\ n = 1,2,\ldots \end{array} $$
Case B. :
$$\begin{array}{@{}rcl@{}} &&\text{i) } {y}_{i,n}={P}_{Q_{i}}{x}_{n},\ i = 1,2,\dots,N,\\ &&\text{ii*) } C^{i}_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{i,n}-z,{J}_{E}({x}_{n}-{y}_{i,n})\rangle \geq 0\},\ i = 1,2,\dots,N,\\ &&C_{n + 1}={\cap}_{i = 1}^{N}C_{n + 1}^{i},\\ &&\text{iii) } {x}_{n + 1}={P}_{C_{n + 1}}{x}_{1},\ n = 1,2,\ldots \end{array} $$

The numerical results are reported in Table 1.

Table 1 Numerical results

The behavior of the approximate solution xn(t) under the stopping conditions ∥xn+1 − xn∥ < 10−4 and ∥xn+1 − xn∥ < 10−5 is presented in Figs. 1 and 2.

Fig. 1 The behavior of xn(t) with the stopping condition ∥xn+1 − xn∥ < 10−4

Fig. 2 The behavior of xn(t) with the stopping condition ∥xn+1 − xn∥ < 10−5