1 Introduction

Let C and Q be nonempty closed and convex subsets of real p-uniformly convex and uniformly smooth Banach spaces E and F, respectively. Let \(A:E\longrightarrow F\) be a bounded linear operator and \(A^*: F^* \rightarrow E^*\) be the adjoint of A. The split feasibility problem (SFP) is formulated as follows:

$$\begin{aligned} \text {Find an element}\quad x^*\in S=C\cap A^{-1}(Q) \end{aligned}$$
(1.1)

The model of the SFP given above was first introduced by Censor and Elfving [11] for modeling inverse problems. It also plays an important role in medical image reconstruction and signal processing (see [5, 7]). In view of these applications, several iterative algorithms for solving (1.1) were presented in [5, 7, 12, 16, 18, 29,30,31, 33,34,35,36] and the references therein.

There are some generalizations of the SFP, for example, the multiple-set SFP (MSSFP) (see [12, 22]), the split common fixed point problem (SCFPP) (see [15, 23]), the split variational inequality problem (SVIP) (see [16]), the split common null point problem (SCNPP) (see [8]) and so on.

In 2014, Wang [37] modified Schopfer’s algorithm [26] and proved strong convergence for the following multiple-set split feasibility problem (MSSFP):

$$\begin{aligned} \text {Find an element}\quad x^*\in S=\Bigg (\bigcap _{i=1}^N C_i\Bigg )\bigcap \Bigg (\bigcap _{j=N+1}^{N+M} A^{-1} (Q_j)\Bigg ), \end{aligned}$$
(1.2)

where \(C_i\) and \(Q_j\) are nonempty closed convex subsets of two p-uniformly convex and uniformly smooth Banach spaces E and F, respectively. For each \(n\in {\mathbb {N}}\), he defined

$$\begin{aligned} T_n(x)= {\left\{ \begin{array}{ll} \Pi _{C_{i(n)}}(x)&{}\quad 1\le i(n)\le N,\\ J_{q}^*[J_{p}(x)-t_nA^*J_{p}(I-P_{Q_{i(n)}})A(x)]&{}\quad N+1\le i(n)\le N+M, \end{array}\right. } \end{aligned}$$

where \(i:{\mathbb {N}}\rightarrow \{1,2,\ldots ,N+M\}\) is the cyclic control mapping

$$\begin{aligned} i(n)=n\,\text {mod} \, (N+M)+1, \end{aligned}$$

and \(t_n\) satisfies

$$\begin{aligned} 0<t\le t_n\le \bigg (\dfrac{q}{C_q\Vert A\Vert ^q}\bigg )^{1/(q-1)}, \end{aligned}$$
(1.3)

with \(C_q\) defined as in Lemma 2.1. He proposed the following algorithm: for any initial guess \(x_0={\bar{x}}\), define \(\{x_n\}\) recursively by

$$\begin{aligned} {\left\{ \begin{array}{ll} y_n=T_n(x_n)\\ D_n=\{w\in E:\Delta _p(y_n,w)\le \Delta _p(x_n,w)\}\\ E_n=\{w\in E:\langle x_n-w,J_p({\bar{x}})-J_p(x_n)\rangle \ge 0\}\\ x_{n+1}=\Pi _{D_n \cap E_n} (\bar{x}), \end{array}\right. } \end{aligned}$$
(1.4)

where \(\Delta _p\) is the Bregman distance with respect to \(f(x)=\dfrac{1}{p}\Vert x\Vert ^p\), \(\Pi _C\) denotes the Bregman projection and \(J_p\) is the duality mapping. He proved the following strong convergence theorem.

Theorem 1.1

The sequence \(\{x_n\}\), generated by (1.4), converges strongly to the Bregman projection \(\Pi _S{\bar{x}}\) of \(\bar{x}\) onto the solution set S.

Note that algorithm (1.4) studied in the above work is not a parallel one. Consequently, it is computationally expensive when the families of sets \(C_i\) and \(Q_j\) are large.

In 2016, Shehu et al. [27] constructed an iterative scheme for solving the following problem:

$$\begin{aligned} \text {Find an element}\quad x^*\in C \cap A^{-1} (Q)\cap F(T), \end{aligned}$$
(1.5)

where T is a left Bregman strongly nonexpansive mapping of C into C. If \(T = I\), the identity map, then \(F(T) = C\), and in this case problem (1.5) reduces to the SFP (1.1). They proved the following result.

Theorem 1.2

Let E and F be two p-uniformly convex and uniformly smooth Banach spaces. Let C and Q be nonempty, closed and convex subsets of E and F, respectively, let \(A : E\rightarrow F\) be a bounded linear operator and \(A^*: F^* \rightarrow E^*\) be the adjoint of A. Suppose that the SFP (1.1) has a nonempty solution set S. Let T be a left Bregman strongly nonexpansive mapping of C into C such that \(F(T)={\hat{F}}(T)\) and \(F(T)\cap S\ne \emptyset \). Let \(\{\alpha _n\}\) be a sequence in (0, 1). For a fixed \(u\in E\), let the sequence \(\{x_n\}\) be iteratively generated by \(u_1\in E\) and

$$\begin{aligned} {\left\{ \begin{array}{ll} x_n=\Pi _CJ_q[J_p(u_n)-t_nA^*J_p(I-P _{Q})A(u_n)]\\ u_{n+1}=\Pi _CJ_q[\alpha _n J_p(u)+(1-\alpha _n)J_pT(x_n)], \quad n\ge 1. \end{array}\right. } \end{aligned}$$
(1.6)

Suppose the following conditions are satisfied:

(i) \(\displaystyle \lim _{n\rightarrow \infty } \alpha _n =0\);

(ii) \(\displaystyle \sum _{n=1}^{\infty } \alpha _n =\infty \);

(iii) \(0<t\le t_n\le k<\bigg (\dfrac{q}{C_q\Vert A\Vert ^q}\bigg )^{1/(q-1)}\).

Then \(\{x_n\}\) converges strongly to an element \(x^*\in F(T)\cap S\), where \(x^*=\Pi _{F(T)\cap S}u\).

In this paper, we study the above works in the following more general setting:

$$\begin{aligned} S=\Bigg (\displaystyle \bigcap _{i=1}^N C_i\Bigg )\bigcap \Bigg (\displaystyle \bigcap _{j=1}^M A^{-1}(Q_j)\Bigg )\bigcap \Bigg (\displaystyle \bigcap _{k=1}^K F(T_k)\Bigg )\ne \emptyset , \end{aligned}$$

where \(C_i\) and \(Q_j\) are nonempty closed convex subsets of two p-uniformly convex and uniformly smooth Banach spaces E and F, respectively, \(F(T_k)\) is the set of fixed points of a left Bregman strongly nonexpansive mapping \(T_k:\ E\longrightarrow E\) such that \({\hat{F}}(T_k)=F(T_k)\), and \(A:\ E\longrightarrow F\) is a bounded linear operator. We introduce a new strongly convergent, parallel and explicit iterative algorithm under a condition on the step sizes similar to (1.3).

The rest of this paper is organized as follows. In Sect. 2, we list some related facts that will be used in the proof of our result. In Sect. 3, we introduce a new parallel iterative algorithm and prove a strong convergence theorem for this algorithm. Finally, in Sect. 4, we give two numerical examples for illustrating the main result.

2 Preliminaries

In this section, we recall some definitions and results which will be used later. Let E be a real Banach space with dual space \(E^*\). For simplicity, the norms of E and \(E^*\) are both denoted by \(\Vert \cdot \Vert \), and we write \(\langle x,f\rangle \) instead of f(x) for \(f\in E^*\) and \(x\in E\).

The modulus of convexity \(\delta _E:\ [0,2]\longrightarrow [0,1]\) is defined by

$$\begin{aligned} \delta _E(\varepsilon )=\inf \bigg \{ 1-\dfrac{\Vert x+y\Vert }{2}:\ \Vert x\Vert =\Vert y\Vert =1,\ \Vert x-y\Vert \ge \varepsilon \bigg \}, \end{aligned}$$

for all \(\varepsilon \in [0,2]\). The modulus of smoothness \(\rho _E:\ [0,\infty )\longrightarrow [0,\infty )\) is defined as

$$\begin{aligned} \rho _E(\tau )=\sup \bigg \{ \dfrac{\Vert x+\tau y\Vert +\Vert x-\tau y\Vert }{2}-1:\ \Vert x\Vert =\Vert y\Vert =1\bigg \}, \end{aligned}$$

for all \(\tau \in [0,\infty )\). Recall that a Banach space E is said to be

(i) uniformly convex if \(\delta _E(\varepsilon )>0\) for all \(\varepsilon \in (0,2]\), and p-uniformly convex if there exists \(c_p>0\) such that \(\delta _E(\varepsilon )\ge c_p\varepsilon ^p\) for all \(\varepsilon \in (0,2]\);

(ii) uniformly smooth if \(\displaystyle \lim _{\tau \rightarrow 0}\rho _E(\tau )/\tau =0\), and q-uniformly smooth if there is \(C_q>0\) such that \(\rho _E(\tau )\le C_q\tau ^q\) for all \(\tau >0\).

The \(L_p\) space is 2-uniformly convex for \(1<p\le 2\) and p-uniformly convex for \(p\ge 2\). Let \(1<q\le 2\le p\) with \(1/p+1/q=1\). It is well-known that E is p-uniformly convex if and only if its dual \(E^*\) is q-uniformly smooth (see [24]).

The duality mapping \(J_p:\ E\longrightarrow 2^{E^*}\) is defined by

$$\begin{aligned} J_p(x)=\{x^*\in E^*:\ \langle x,x^*\rangle =\Vert x\Vert ^p,\ \Vert x^*\Vert =\Vert x\Vert ^{p-1}\}. \end{aligned}$$

It is also well-known that if E is p-uniformly convex and uniformly smooth, then its dual space \(E^*\) is q-uniformly smooth and uniformly convex. In this situation, the duality mapping \(J_p\) is one-to-one, single-valued and satisfies \(J_p=(J_q^*)^{-1}\), where \(J_q^*\) is the duality mapping of \(E^*\) (see [1, 17]).
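For the model case \(E={\mathbb {R}}^n\) with the Euclidean norm, where \(J_p(x)=\Vert x\Vert ^{p-2}x\), the relation \(J_p=(J_q^*)^{-1}\) can be verified directly; a minimal sketch of this special case:

```python
import numpy as np

p = 3.0
q = p / (p - 1.0)   # conjugate exponent, 1/p + 1/q = 1

def Jp(x):
    # duality mapping of E = (R^n, ||.||_2): J_p(x) = ||x||^(p-2) x
    return np.linalg.norm(x) ** (p - 2) * x

def Jq_star(y):
    # duality mapping of E* with the conjugate exponent q
    return np.linalg.norm(y) ** (q - 2) * y

x = np.array([0.3, -1.2, 2.0])
assert np.allclose(Jq_star(Jp(x)), x)   # J_p = (J_q^*)^{-1}
```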

We have the following lemma:

Lemma 2.1

[32] Let \(x,y\in E\). If E is q-uniformly smooth, then there is a \(C_q>0\) such that

$$\begin{aligned} \Vert x-y\Vert ^q\le \Vert x\Vert ^q-q\langle y, J_q(x)\rangle +C_q\Vert y\Vert ^q. \end{aligned}$$
(2.1)
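In a Hilbert space one may take \(q=2\), \(J_2=I\) and \(C_2=1\), and (2.1) becomes the familiar identity \(\Vert x-y\Vert ^2=\Vert x\Vert ^2-2\langle y,x\rangle +\Vert y\Vert ^2\); a quick numerical check of this special case:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)

# In a Hilbert space, q = 2, J_2 = I and one may take C_2 = 1; (2.1) then
# holds with equality: ||x - y||^2 = ||x||^2 - 2<y, x> + ||y||^2.
lhs = np.linalg.norm(x - y) ** 2
rhs = np.linalg.norm(x) ** 2 - 2.0 * np.dot(y, x) + np.linalg.norm(y) ** 2
assert abs(lhs - rhs) < 1e-9
```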

Let \(f:\ E\longrightarrow (-\infty ,+\infty ]\) be a convex and Gâteaux differentiable function. The function \(\Delta _f:\ \text {dom}\,f\times \text {int}\,\text {dom}\,f\longrightarrow [0,+\infty )\), defined by

$$\begin{aligned} \Delta _f(y,x)=f(y)-f(x)-\langle y-x, \nabla f(x)\rangle , \end{aligned}$$

is called the Bregman distance with respect to f (see [13]).

If E is a smooth and strictly convex Banach space and \(f(x)=\dfrac{1}{p}\Vert x\Vert ^p\), then \(\nabla f(x)=J_p(x)\) and thus the Bregman distance with respect to f is given by

$$\begin{aligned} \Delta _p(x,y)&=\dfrac{1}{p}(\Vert y\Vert ^p-\Vert x\Vert ^p) -\langle y-x, J_p(x)\rangle \\&=\dfrac{1}{q}\Vert x\Vert ^p-\langle y, J_p(x)\rangle +\dfrac{1}{p}\Vert y\Vert ^p\\&=\dfrac{1}{q}(\Vert x\Vert ^p-\Vert y\Vert ^p)-\langle y,J_p(x)-J_p(y)\rangle . \end{aligned}$$

It is easy to show that, for any \(x,y,z\in E\), we have

$$\begin{aligned}&\Delta _p(x,y)=\Delta _p(x,z)+\Delta _p(z,y)+\langle z-y,J_p(x)-J_p(z)\rangle , \end{aligned}$$
(2.2)
$$\begin{aligned}&\Delta _p(x,y)+\Delta _p(y,x)=\langle x-y,J_p(x)-J_p(y)\rangle . \end{aligned}$$
(2.3)
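Both identities are purely algebraic consequences of the definition of \(\Delta _p\) and can be verified numerically; a small sketch in \({\mathbb {R}}^3\) with \(p=4\), where \(J_p(x)=\Vert x\Vert ^{p-2}x\):

```python
import numpy as np

p = 4.0
q = p / (p - 1.0)   # conjugate exponent

def Jp(x):
    # duality mapping for the Euclidean norm: J_p(x) = ||x||^(p-2) x
    return np.linalg.norm(x) ** (p - 2) * x

def Dp(x, y):
    # Bregman distance Δ_p(x, y) = (1/q)||x||^p - <y, J_p(x)> + (1/p)||y||^p
    return (np.linalg.norm(x) ** p / q - np.dot(y, Jp(x))
            + np.linalg.norm(y) ** p / p)

rng = np.random.default_rng(1)
x, y, z = (rng.standard_normal(3) for _ in range(3))

# three-point identity (2.2)
lhs22 = Dp(x, y)
rhs22 = Dp(x, z) + Dp(z, y) + np.dot(z - y, Jp(x) - Jp(z))
assert abs(lhs22 - rhs22) < 1e-8

# symmetrized identity (2.3)
assert abs(Dp(x, y) + Dp(y, x) - np.dot(x - y, Jp(x) - Jp(y))) < 1e-8
```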

We know that if E is p-uniformly convex, then the Bregman distance has the following property:

$$\begin{aligned} \tau \Vert x-y\Vert ^p\le \Delta _p(x,y)\le \langle x-y, J_p(x)-J_p(y)\rangle , \end{aligned}$$
(2.4)

for all \(x,y\in E\) and for some fixed number \(\tau >0\).

Now, let C be a nonempty closed convex subset of E. The metric projection

$$\begin{aligned} P_C(x):=\text {arg}\min _{y\in C}\Vert x-y\Vert ,\quad x\in E, \end{aligned}$$

is the unique minimizer of \(y\mapsto \Vert x-y\Vert \) over C; it can be characterized by the following variational inequality (see [20]):

$$\begin{aligned} \langle z-P_Cx, J_p(x-P_Cx)\rangle \le 0,\quad \forall z\in C. \end{aligned}$$
(2.5)

The Bregman projection

$$\begin{aligned} \Pi _C(x):=\text {arg}\min _{y\in C}\Delta _p(x,y),\quad x\in E, \end{aligned}$$

is the unique minimum point of the Bregman distance (see [6]). The Bregman projection can also be characterized by the following variational inequality:

$$\begin{aligned} \langle z-\Pi _Cx, J_p(x)-J_p(\Pi _Cx)\rangle \le 0,\quad \forall z\in C. \end{aligned}$$
(2.6)

It follows that

$$\begin{aligned} \Delta _p(\Pi _Cx,z)\le \Delta _p(x,z)-\Delta _p(x,\Pi _Cx),\quad \forall z\in C. \end{aligned}$$
(2.7)
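In the Hilbert case \(p=2\) one has \(J_2=I\), \(\Delta _2(x,y)=\frac{1}{2}\Vert x-y\Vert ^2\), and \(\Pi _C=P_C\), so (2.6) and (2.7) can be checked directly for, e.g., a half-space; a minimal sketch under these assumptions:

```python
import numpy as np

# Hilbert case p = 2: J_2 = I, Δ_2(x, y) = ||x - y||^2 / 2, and the Bregman
# projection Π_C coincides with the metric projection P_C.
def proj_halfspace(x, a, b):
    # P_C for the half-space C = {z : <a, z> <= b}
    return x - max(np.dot(a, x) - b, 0.0) * a / np.dot(a, a)

def D2(x, y):
    return 0.5 * np.linalg.norm(x - y) ** 2

rng = np.random.default_rng(2)
a = rng.standard_normal(4)
x = rng.standard_normal(4) + 5.0
b = 1.0

px = proj_halfspace(x, a, b)
z = proj_halfspace(rng.standard_normal(4), a, b)   # an arbitrary point of C

# variational characterization (2.6): <z - Π_C x, J(x) - J(Π_C x)> <= 0
assert np.dot(z - px, x - px) <= 1e-9
# descent property (2.7): Δ(Π_C x, z) <= Δ(x, z) - Δ(x, Π_C x)
assert D2(px, z) <= D2(x, z) - D2(x, px) + 1e-9
```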

Let C be a convex subset of \(\text {int}\,\text {dom}\,f\), where \(f(x)=\dfrac{1}{p}\Vert x\Vert ^p\) with \(2\le p<\infty \), and let T be a self-mapping of C. A point p in the closure of C is said to be an asymptotic fixed point of T (see [14, 25]) if C contains a sequence \(\{x_n\}\) which converges weakly to p and satisfies \(\lim \nolimits _{n\rightarrow \infty }\Vert x_n-T(x_n)\Vert =0\). The set of asymptotic fixed points of T is denoted by \({\hat{F}} (T)\). The operator T is called left Bregman strongly nonexpansive (L-BSNE) with respect to a nonempty \({\hat{F}} (T)\) (see [21]) if

$$\begin{aligned} \Delta _p(Tx,p)\le \Delta _p(x,p), \end{aligned}$$
(2.8)

for all \(p\in {\hat{F}} (T)\) and \(x\in C\), and if whenever \(\{x_n\}\subset C\) is bounded, \(p\in {\hat{F}} (T)\), and

$$\begin{aligned} \lim _{n\rightarrow \infty }(\Delta _p(x_n,p)-\Delta _p(T(x_n),p))=0, \end{aligned}$$
(2.9)

it follows that

$$\begin{aligned} \lim _{n\rightarrow \infty }\Delta _p(T(x_n),x_n)=0. \end{aligned}$$
(2.10)

3 Main results

We consider the problem: find an element \(x^\dagger \) such that

$$\begin{aligned} x^\dagger \in S=\Bigg (\displaystyle \bigcap _{i=1}^N C_i\Bigg )\bigcap \Bigg (\displaystyle \bigcap _{j=1}^M A^{-1}(Q_j)\Bigg )\bigcap \Bigg (\displaystyle \bigcap _{k=1}^K F(T_k)\Bigg )\ne \emptyset . \end{aligned}$$
(3.1)

To solve the Problem (3.1), we introduce the following algorithm:

Algorithm 3.1

For any initial guess \(x_0=x\in E\), define the sequence \(\{x_n\}\) by

$$\begin{aligned}&y_{i,n}=\Pi _{C_i}x_n,\ i=1,2,\ldots ,N,\\&\text {Choose } i_n \text { such that }\Delta _p(y_{i_n,n},x_n)=\max \nolimits _{i=1,\ldots ,N}\Delta _p(y_{i,n},x_n), \text { let }y_n=y_{i_n,n},\\&z_{j,n}=J_q^*[J_p(y_n)-t_nA^*J_p(I-P _{Q_j})A(y_n)],\ j = 1,2,\ldots ,M\\&\text {Choose } j_n \text { such that }\Delta _p(z_{j_n,n},y_n)=\max \nolimits _{j=1,\ldots ,M}\Delta _p(z_{j,n},y_n), \text { let }z_n = z_{j_n,n},\\&t_{k,n}=T_k(z_n),\ k=1,2,\ldots ,K,\\&\text {Choose }k_n \text { such that }\Delta _p(t_{k_n,n},z_n)=\max \nolimits _{k=1,\ldots ,K}\Delta _p(t_{k,n},z_n), \text { let }t_n=t_{k_n,n},\\&H_n=\{z\in E:\ \Delta _p(t_n,z)\le \Delta _p(z_n,z)\le \Delta _p(y_n,z)\le \Delta _p(x_n,z)\},\\&D_n=\{z\in E:\ \langle x_n-z,J_p(x_0)-J_p(x_n)\rangle \ge 0\},\\&x_{n+1}=\Pi _{H_n\cap D_n}(x_0),\ n\ge 0, \end{aligned}$$

where the step sizes \(\{t_n\}\) in the definition of \(z_{j,n}\) satisfy condition (1.3).
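To make the structure of Algorithm 3.1 concrete, the following sketch runs it in the Hilbert case \(p=q=2\) (where \(J_p=J_q^*=I\), \(\Delta _2(x,y)=\frac{1}{2}\Vert x-y\Vert ^2\) and every Bregman projection is a metric projection), with hypothetical random half-spaces \(C_i\), \(Q_j\) and balls for the \(F(T_k)\); the projection onto \(H_n\cap D_n\), which is an intersection of half-spaces when \(p=2\), is computed here by Dykstra's algorithm (an assumption of this sketch, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def proj_halfspace(x, a, b):
    """Metric projection onto {z : <a, z> <= b} (returns x if a = 0)."""
    aa = np.dot(a, a)
    if aa < 1e-15:
        return x
    return x - max(np.dot(a, x) - b, 0.0) * a / aa

def dykstra(x, halfspaces, iters=300):
    """Dykstra's algorithm: metric projection onto an intersection of half-spaces."""
    y = x.copy()
    incs = [np.zeros_like(x) for _ in halfspaces]
    for _ in range(iters):
        for i, (a, b) in enumerate(halfspaces):
            w = y + incs[i]
            y = proj_halfspace(w, a, b)
            incs[i] = w - y
    return y

def proj_ball(x, c, r):
    d = np.linalg.norm(x - c)
    return x if d <= r else c + r * (x - c) / d

dim, N, M, K = 4, 3, 3, 2
Cs = [(rng.standard_normal(dim), 1.0 + rng.random()) for _ in range(N)]  # C_i
Qs = [(rng.standard_normal(dim), 1.0 + rng.random()) for _ in range(M)]  # Q_j
balls = [(np.zeros(dim), 2.0 + k) for k in range(K)]                     # F(T_k)
A = rng.random((dim, dim))
t = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)  # step size; satisfies (1.3) for p = 2

x0 = 5.0 * rng.standard_normal(dim)        # note: 0 lies in S by construction
xn = x0.copy()
for _ in range(30):
    # parallel step over the C_i: keep the farthest projection
    yn = max((proj_halfspace(xn, a, b) for a, b in Cs),
             key=lambda y: np.linalg.norm(y - xn))
    # parallel gradient-type step over the Q_j
    zn = max((yn - t * A.T @ (A @ yn - proj_halfspace(A @ yn, c, d))
              for c, d in Qs),
             key=lambda z: np.linalg.norm(z - yn))
    # parallel step over the T_k
    tn = max((proj_ball(zn, c, r) for c, r in balls),
             key=lambda s: np.linalg.norm(s - zn))
    # for p = 2, Δ(u, z) <= Δ(v, z) is the half-space 2<v-u, z> <= |v|^2 - |u|^2
    hs = [(2.0 * (v - u), np.dot(v, v) - np.dot(u, u))
          for u, v in ((tn, zn), (zn, yn), (yn, xn))]
    hs.append((x0 - xn, np.dot(x0 - xn, xn)))          # D_n
    xn = dykstra(x0, hs)                               # x_{n+1}
```

Any quadratic-programming solver could replace the Dykstra subroutine for the projection step; the three parallel `max` selections are exactly the index choices \(i_n\), \(j_n\), \(k_n\) of Algorithm 3.1 in this special case.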

First of all, we prove the following propositions.

Proposition 3.1

In Algorithm 3.1, we have \(S\subset H_n\cap D_n\) for all \(n\ge 0\).

Proof

First, it is easy to see that \(H_n\) and \(D_n\) are closed convex subsets of E.

Let \(u\in S\). We have

$$\begin{aligned} \Delta _p(t_n,u)=\Delta _p(T_{k_n}(z_n),u)\le \Delta _p(z_n,u). \end{aligned}$$
(3.2)

From the property of the Bregman projection in (2.7), we have

$$\begin{aligned}&\Delta _p(y_n,u)=\Delta _p(\Pi _{C_{i_n}}(x_n),u)\le \Delta _p(x_n,u). \end{aligned}$$
(3.3)

Now, we will show that \(\Delta _p(z_n,u)\le \Delta _p(y_n,u)\). Let \(w_n=A(y_n)-P_{Q_{j_n}}A(y_n)\). Then we have

$$\begin{aligned} z_n=J_q ^*(J_p(y_n)-t_nA^*J_p(w_n)). \end{aligned}$$

From the definition of \(J_p\) and (2.5), we have

$$\begin{aligned} \begin{aligned} \langle A(y_n)-A(u),J_p(w_n)\rangle&=\Vert A(y_n)-P_{Q_{j_n}}A(y_n)\Vert ^p\\&\quad +\langle P _{Q_{j_n}}A(y_n)- A(u),J_p(w_n)\rangle \\&\ge \Vert w_n\Vert ^p. \end{aligned} \end{aligned}$$
(3.4)

Thus, from Lemma 2.1 and (3.4), we get that

$$\begin{aligned} \Delta _p(z_n,u)&=\Delta _p(J_q^*(J_p(y_n)-t_nA^*J_p(w_n)),u)\\&=\dfrac{1}{q}\Vert J_p(y_n)-t_nA^*J_p(w_n)\Vert ^q-\langle u,J_p(y_n)\rangle \\&\quad +t_n\langle A(u), J_p(w_n)\rangle +\dfrac{1}{p}\Vert u\Vert ^p\\&\le \dfrac{1}{q}\Vert J_p(y_n)\Vert ^q-t_n\langle A(y_n), J_p(w_n)\rangle +\dfrac{C_q(t_n\Vert A\Vert )^q}{q}\Vert J_p(w_n)\Vert ^q\\&\quad -\langle u, J_p(y_n)\rangle +t_n\langle A(u), J_p(w_n)\rangle + \dfrac{1}{p}\Vert u\Vert ^p\\&=\dfrac{1}{q}\Vert y_n\Vert ^p-\langle u, J_p(y_n)\rangle +\dfrac{1}{p}\Vert u\Vert ^p +t_n\langle A(u)-A(y_n), J_p(w_n)\rangle \\&\quad +\dfrac{C_q(t_n\Vert A\Vert )^q}{q}\Vert w_n\Vert ^p\\&=\Delta _p(y_n,u)+t_n\langle A(u)-A(y_n), J_p(w_n)\rangle +\dfrac{C_q(t_n\Vert A\Vert )^q}{q}\Vert w_n\Vert ^p\\&\le \Delta _p(y_n,u)-\bigg (t_n-\dfrac{C_q(t_n\Vert A\Vert )^q}{q}\bigg )\Vert w_n\Vert ^p. \end{aligned}$$

From the condition (1.3), we obtain that

$$\begin{aligned} \Delta _p(z_n,u)\le \Delta _p(y_n,u). \end{aligned}$$
(3.5)

So, from (3.2), (3.3) and (3.5), we get that \(u\in H_n\). Hence, \(S\subset H_n\) for all \(n\ge 0\).

Finally, we show that \(S\subset D_n\) for all \(n\ge 0\). Indeed, \(D_0=E\), so \(S\subset D_0\). Suppose that \(S\subset D_n\) for some \(n\ge 0\), then \(S\subset H_n\cap D_n\). Thus, from \(x_{n+1}=\Pi _{H_n\cap D_n}(x_0)\) and (2.6), we have

$$\begin{aligned} \langle x_{n+1}-u,J_p(x_0)-J_p(x_{n+1})\rangle \ge 0, \end{aligned}$$

so that \(u\in D_{n+1}\). By induction, we obtain that \(S\subset D_n\) for all \(n\ge 0\). \(\square \)

Proposition 3.2

In the Algorithm 3.1, we have that \(x_{n+1}-x_n\rightarrow 0\) as \(n\rightarrow \infty \).

Proof

From Proposition 3.1, the sequence \(\{x_n\}\) is well-defined.

Fix \(u\in S\). It follows from \(x_{n+1}=\Pi _{H_n\cap D_n}(x_0)\) and (2.7) that

$$\begin{aligned} \Delta _p(x_{n+1},u)\le \Delta _p(x_0,u). \end{aligned}$$
(3.6)

Hence, the sequence \(\{\Delta _p(x_{n},u)\}\) is bounded. Thus, by (2.4), the sequence \(\{x_n\}\) is also bounded.

Now, from \(x_{n+1}\in D_n\) and from the definition of \(D_n\), we have

$$\begin{aligned} \langle x_n-x_{n+1},J_p(x_0)-J_p(x_n)\rangle \ge 0. \end{aligned}$$
(3.7)

So, we obtain that

$$\begin{aligned} \langle x_{n}-x_0,J_p(x_0)-J_p(x_n)\rangle \ge \langle x_{n+1}-x_0, J_p(x_0)-J_p(x_n)\rangle . \end{aligned}$$
(3.8)

Thus, from (2.3) and (3.8), we have

$$\begin{aligned} \langle x_{n+1}-x_0, J_p(x_0)-J_p(x_n)\rangle \le -\Delta _p(x_n,x_0)-\Delta _p(x_0,x_n). \end{aligned}$$
(3.9)

Hence, expanding the left-hand side of (3.9) by (2.2) and (2.3) and rearranging, we get

$$\begin{aligned} -\Delta _p (x_n,x_{n+1})+\Delta _p(x_n,x_0)+\Delta _p(x_0,x_{n+1})\ge \Delta _p(x_n,x_0)+\Delta _p(x_0,x_n). \end{aligned}$$

This is equivalent to

$$\begin{aligned} \Delta _p(x_0,x_{n+1})\ge \Delta _p(x_0,x_n)+\Delta _p (x_n,x_{n+1}), \end{aligned}$$
(3.10)

which implies that the sequence \(\{\Delta _p(x_0,x_n)\}\) is nondecreasing. Since \(\{x_n\}\) is bounded, so is \(\{\Delta _p(x_0,x_n)\}\), and hence the finite limit

$$\begin{aligned} a=\lim _{n\rightarrow \infty } \Delta _p(x_0,x_n) \end{aligned}$$

exists.

So, from (3.10), we obtain that \(\lim \nolimits _{n\rightarrow \infty }\Delta _p (x_n,x_{n+1})=0\). It follows from (2.4) that

$$\begin{aligned} \lim _{n\rightarrow \infty }\Vert x_{n+1}-x_n\Vert =0. \end{aligned}$$

\(\square \)

Proposition 3.3

In the Algorithm 3.1, we have the sequences \(\{x_n-y_n\}\), \(\{x_n-z_n\}\) and \(\{x_n-t_n\}\) converge to zero as \(n\rightarrow \infty \).

Proof

Since \(x_{n+1}\in H_n\), we have

$$\begin{aligned} \Delta _p(t_n,x_{n+1})\le \Delta _p(z_n,x_{n+1})\le \Delta _p(y_n,x_{n+1})\le \Delta _p(x_n,x_{n+1}). \end{aligned}$$

Thus, since \(\Delta _p(x_n,x_{n+1})\rightarrow 0\) (see the proof of Proposition 3.2), we obtain that

$$\begin{aligned} \Delta _p(t_n,x_{n+1})\rightarrow 0,\ \Delta _p(z_n,x_{n+1})\rightarrow 0,\ \Delta _p(y_n,x_{n+1})\rightarrow 0. \end{aligned}$$

It follows from (2.4) that

$$\begin{aligned} \Vert x_{n+1}-t_n\Vert \rightarrow 0,\ \Vert x_{n+1}-z_n\Vert \rightarrow 0,\ \Vert x_{n+1}-y_n\Vert \rightarrow 0 \end{aligned}$$

which, combined with \(\Vert x_{n+1}-x_n\Vert \rightarrow 0\), gives

$$\begin{aligned} x_n-t_n\rightarrow 0,\ x_n-z_n\rightarrow 0,\ \text {and }x_n-y_n\rightarrow 0. \end{aligned}$$

\(\square \)

Proposition 3.4

In Algorithm 3.1, we have \(\omega _w(x_n)\subset S\), where \(\omega _w(x_n)\) denotes the set of all weak cluster points of the sequence \(\{x_n\}\).

Proof

We will prove this proposition by several steps.

Since \(\{x_n\}\) is bounded, \(\omega _w(x_n)\ne \emptyset \). Let \({\bar{x}}\in \omega _w(x_n)\); then there is a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) which converges weakly to \({\bar{x}}\).

Step 1. \({\bar{x}}\in \bigcap \nolimits _{k=1}^KF(T_k)\)

From Proposition 3.3, we have \(t_n-z_n\rightarrow 0\), and hence \(\Delta _p(t_n,z_n)\rightarrow 0\). Thus, from the definition of \(t_n\), we obtain \(\Delta _p(t_{k,n},z_n)\rightarrow 0\), that is, \(\Delta _p(T_k(z_n),z_n)\rightarrow 0\) for all \(k=1,2,\ldots ,K\). Moreover, since \(x_n-z_n\rightarrow 0\) and \(x_{n_k}\rightharpoonup {\bar{x}}\), we have \(z_{n_k}\rightharpoonup {\bar{x}}\). Therefore, \({\bar{x}} \in {\hat{F}} (T_k)=F(T_k)\) for all \(k=1,2,\ldots ,K\), which implies \({\bar{x}}\in \bigcap \nolimits _{k=1}^KF(T_k).\)

Step 2. \({\bar{x}}\in \bigcap \nolimits _{i=1}^N C_i\)

From Proposition 3.3, we have \(\Delta _p(y_n,x_n)\rightarrow 0\). So, it follows from the definition of \(y_n\) that \(\Delta _p(y_{i,n},x_n)\rightarrow 0\) and hence

$$\begin{aligned} \Vert y_{i,n}-x_n\Vert \rightarrow 0, \end{aligned}$$
(3.11)

for all \(i=1,2,\ldots ,N\).

We need to prove that \(\Delta _p({\bar{x}}, \Pi _{C_i}({\bar{x}}))=0\) for all \(i=1,2,\ldots ,N\). Indeed, from (2.3), (2.6) and (2.4), we have the following estimate

$$\begin{aligned} \Delta _p({\bar{x}}, \Pi _{C_i}({\bar{x}}))&\le \langle {\bar{x}}- \Pi _{C_i}{\bar{x}}, J_p({\bar{x}})-J_p(\Pi _{C_i}({\bar{x}}))\rangle \\&=\langle {\bar{x}}- x_{n_k}, J_p({\bar{x}})-J_p(\Pi _{C_i}({\bar{x}}))\rangle \\&\quad +\langle x_{n_k}- \Pi _{C_i}(x_{n_k}), J_p({\bar{x}})-J_p(\Pi _{C_i}({\bar{x}}))\rangle \\&\quad +\langle \Pi _{C_i}(x_{n_k})-\Pi _{C_i}({\bar{x}}), J_p({\bar{x}})-J_p(\Pi _{C_i}({\bar{x}}))\rangle \\&\le \langle {\bar{x}}- x_{n_k}, J_p({\bar{x}})-J_p(\Pi _{C_i}({\bar{x}}))\rangle \\&\quad +\langle x_{n_k}- \Pi _{C_i}(x_{n_k}), J_p({\bar{x}})-J_p(\Pi _{C_i}({\bar{x}}))\rangle \\&=\langle {\bar{x}}- x_{n_k}, J_p({\bar{x}})-J_p(\Pi _{C_i}({\bar{x}}))\rangle \\&\quad +\langle x_{n_k}- y_{i,n_k}, J_p({\bar{x}})-J_p(\Pi _{C_i}({\bar{x}}))\rangle . \end{aligned}$$

From (3.11), letting \(k\rightarrow \infty \) yields \(\Delta _p({\bar{x}}, \Pi _{C_i}({\bar{x}}))=0\) for all \(i=1,2,\ldots ,N\); that is, \({\bar{x}}\in C_i\) for all \(i=1,2,\ldots ,N\), and hence \({\bar{x}} \in \bigcap \nolimits _{i=1}^N C_i\).

Step 3. \({\bar{x}}\in \bigcap \nolimits _{j=1}^MA^{-1}(Q_j)\)

From the Proposition 3.3, we have \(\Delta _p(z_n,y_n)\rightarrow 0\). Thus, from the definition of \(z_n\), we get that \(\Delta _p(z_{j,n},y_n)\rightarrow 0\) and hence we obtain

$$\begin{aligned} \Vert z_{j,n}-y_n\Vert \rightarrow 0, \end{aligned}$$
(3.12)

for all \(j=1,2,\ldots ,M\).

Since E is a uniformly smooth Banach space, the duality mapping \(J_p\) is uniformly continuous on bounded sets (see [17, Theorem 2.16]) and hence we have

$$\begin{aligned} t_nA^*J_p(I-P _{Q_j})A(y_n)=J_p(y_n)-J_p(z_{j,n})\rightarrow 0. \end{aligned}$$

Since \(0<t\le t_n\) for all n, we obtain

$$\begin{aligned} \Vert A^*J_p(I-P _{Q_j})A(y_n)\Vert \rightarrow 0. \end{aligned}$$
(3.13)

Let us now fix some \(u\in S\), then \(A(u)\in Q_j\) for all \(j=1,2,\ldots ,M\). It follows from (2.5) that

$$\begin{aligned} \Vert (I-P _{Q_j})A(y_{n_k})\Vert ^p&=\langle (I-P _{Q_j})A(y_{n_k}), J_p((I-P _{Q_j})A(y_{n_k}))\rangle \\&=\langle A(y_{n_k})-A(u), J_p((I-P _{Q_j})A(y_{n_k}))\rangle \\&\quad +\langle A(u)-P_{Q_j}A(y_{n_k}), J_p((I-P_{Q_j})A(y_{n_k}))\rangle \\&\le \langle A(y_{n_k})-A(u), J_p((I-P_{Q_j})A(y_{n_k}))\rangle \\&=\langle y_{n_k}-u, A^*J_p((I-P_{Q_j})A(y_{n_k}))\rangle \\&\le K_0\Vert A^*J_p((I-P_{Q_j})A(y_{n_k}))\Vert , \end{aligned}$$

where \(K_0=\sup _{k}\Vert y_{n_k}\Vert +\Vert u\Vert <\infty \). Combining this with (3.13), we obtain that

$$\begin{aligned} \Vert (I-P _{Q_j})A(y_{n_k})\Vert \rightarrow 0 \end{aligned}$$
(3.14)

for all \(j=1,2,\ldots ,M\).

Now, from (2.5), we have

$$\begin{aligned} \Vert (I-P_{Q_j})A({\bar{x}})\Vert ^p&=\langle A({\bar{x}})-P_{Q_j}A({\bar{x}}),J_p(A({\bar{x}})-P_{Q_j}A({\bar{x}}))\rangle \\&=\langle A({\bar{x}})-A(y_{n_k}),J_p(A({\bar{x}})-P_{Q_j}A({\bar{x}}))\rangle \\&\quad + \langle A(y_{n_k})-P_{Q_j}A(y_{n_k}),J_p(A({\bar{x}})-P_{Q_j}A({\bar{x}}))\rangle \\&\quad + \langle P_{Q_j}A(y_{n_k})-P_{Q_j}A({\bar{x}}),J_p(A({\bar{x}})-P_{Q_j}A({\bar{x}}))\rangle \\&\le \langle A({\bar{x}})-A(y_{n_k}),J_p(A({\bar{x}})-P_{Q_j}A({\bar{x}}))\rangle \\&\quad + \langle A(y_{n_k})-P_{Q_j}A(y_{n_k}),J_p(A({\bar{x}})-P_{Q_j}A({\bar{x}}))\rangle , \end{aligned}$$

From the continuity of A, \(x_n-y_n\rightarrow 0\) and \(x_{n_k}\rightharpoonup {\bar{x}}\), we get that \(A(y_{n_k})\rightharpoonup A({\bar{x}})\). Hence, letting \(k\rightarrow \infty \) and using (3.14), we obtain

$$\begin{aligned} \Vert A({\bar{x}})- P_{Q_j} A({\bar{x}})\Vert =0, \end{aligned}$$

for all \(j=1,2,\ldots ,M\); that is, \(A({\bar{x}})\in Q_j\) for all \(j=1,2,\ldots ,M\), and hence \({\bar{x}}\in \bigcap \nolimits _{j=1}^MA^{-1}(Q_j)\).

Thus, from Step 1, Step 2 and Step 3, we conclude that \({\bar{x}}\in S\). Since \({\bar{x}}\) is arbitrary, \(\omega _w(x_n)\subset S\). \(\square \)

Now, we are in position to prove our main result.

Theorem 3.5

In the Algorithm 3.1, we have that the sequence \(\{x_n\}\) converges strongly to \(x^\dagger =\Pi _S(x_0)\), as \(n\rightarrow \infty \).

Proof

Let \(\{x_{n_k}\}\) be any weakly convergent subsequence of \(\{x_n\}\), say \(x_{n_k}\rightharpoonup x^*\). Then, from Proposition 3.4, we have \(x^*\in S\).

Since \(x_{n+1}=\Pi _{H_n\cap D_n}(x_0)\) and \(\Pi _S(x_0)\in S\subset H_n\cap D_n\), we have

$$\begin{aligned} \Delta _p(x_0,x_{n+1})\le \Delta _p(x_0,\Pi _Sx_0), \end{aligned}$$

which, combined with \(\Delta _p(x_0,x_{n+1})\ge \Delta _p(x_0,x_{n})\) (see (3.10)), gives

$$\begin{aligned} \Delta _p(x_0,x_{n})\le \Delta _p(x_0,\Pi _Sx_0),\quad \forall n\ge 0. \end{aligned}$$
(3.15)

Thus, from (2.2) and (3.15), we get

$$\begin{aligned} \Delta _p(\Pi _S(x_0),x_{n_k})&=\Delta _p(x_0,x_{n_k})-\Delta _p(x_0,\Pi _S(x_0))\\&\quad +\langle x_{n_k}-\Pi _S(x_0), J_p(x_0)-J_p(\Pi _S(x_0))\rangle \\&\le \langle x_{n_k}-\Pi _S(x_0), J_p(x_0)-J_p(\Pi _S(x_0))\rangle . \end{aligned}$$

Since \(x_{n_k}\rightharpoonup x^*\) and, by (2.6), \(\langle x^*-\Pi _S(x_0), J_p(x_0)-J_p(\Pi _S(x_0))\rangle \le 0\) because \(x^*\in S\), we have

$$\begin{aligned} \limsup _{k\rightarrow \infty }\Delta _p(\Pi _S(x_0),x_{n_k})&\le \limsup _{k\rightarrow \infty }\langle x_{n_k}-\Pi _S(x_0), J_p(x_0)-J_p(\Pi _S(x_0))\rangle \\&= \langle x^*-\Pi _S(x_0), J_p(x_0)-J_p(\Pi _S(x_0))\rangle \le 0, \end{aligned}$$

which implies \(\lim \nolimits _{k\rightarrow \infty }\Delta _p(\Pi _S(x_0),x_{n_k})=0\) and hence \(x_{n_k}\rightarrow \Pi _S(x_0)\) thanks to (2.4). We have thus shown that every weakly convergent subsequence of \(\{x_n\}\) converges strongly to the same limit \(\Pi _S(x_0)\). Since \(\{x_n\}\) is bounded and E is reflexive, every subsequence of \(\{x_n\}\) has a further subsequence converging weakly, and hence strongly, to \(\Pi _S(x_0)\); therefore the whole sequence converges, that is, \(x_n\rightarrow x^\dagger =\Pi _S(x_0)\). \(\square \)

Next, from Theorem 3.5, we obtain the following two corollaries. First, we have an algorithm for solving the MSSFP in two Banach spaces.

Corollary 3.6

Let \(C_i\), \(i=1,2,\ldots ,N\) and \(Q_j\), \(j=1,2,\ldots ,M\) be the nonempty closed convex subsets of two p-uniformly convex and uniformly smooth Banach spaces E and F, respectively. Let \(A:\ E\rightarrow F\) be a bounded linear operator. Suppose that \(S=\big (\bigcap \nolimits _{i=1}^NC_i\big )\bigcap \big (\bigcap \nolimits _{j=1}^MA^{-1}(Q_j)\big )\ne \emptyset \). If the sequence \(\{t_n\}\) satisfies the condition (1.3), then the sequence \(\{x_n\}\) generated by \(x_0\in E\) and

$$\begin{aligned}&y_{i,n}=\Pi _{C_i}(x_n),\ i=1,2,\ldots ,N,\\&\text {Choose } i_n \text { such that }\Delta _p(y_{i_n,n},x_n)=\max \nolimits _{i=1,\ldots ,N}\Delta _p(y_{i,n},x_n), \text { let }y_n=y_{i_n,n},\\&z_{j,n}=J_q^*[J_p(y_n)-t_nA^*J_p(I-P _{Q_j})A(y_n)],\ j=1,2,\ldots ,M\\&\text {Choose } j_n \text { such that }\Delta _p(z_{j_n,n},y_n)=\max \nolimits _{j=1,\ldots ,M}\Delta _p(z_{j,n},y_n), \text { let }z_n=z_{j_n,n},\\&H_n=\{z\in E:\ \Delta _p(z_n,z)\le \Delta _p(y_n,z)\le \Delta _p(x_n,z)\},\\&D_n=\{z\in E:\ \langle x_n-z,J_p(x_0)-J_p(x_n)\rangle \ge 0\},\\&x_{n+1}=\Pi _{H_n\cap D_n}(x_0),\ n\ge 0, \end{aligned}$$

converges strongly to \(x^\dagger =\Pi _S(x_0)\), as \(n\rightarrow \infty \).

Proof

Applying Theorem 3.5 with \(T_k(x)=x\) for all \(x\in E\) and all \(k=1,2,\ldots ,K\), we obtain the corollary. \(\square \)

Finally, we have the following result for the problem of finding a common fixed point of a finite family of L-BSNE operators in Banach spaces.

Corollary 3.7

Let E be a p-uniformly convex and uniformly smooth Banach space. Let \(T_k:\ E\rightarrow E\), \(k=1,2,\ldots ,K\) be the left Bregman strongly nonexpansive mappings such that \({\hat{F}}(T_k)=F(T_k)\) and \(S=\bigcap \nolimits _{k=1}^KF(T_k)\ne \emptyset \). Then the sequence \(\{x_n\}\) generated by \(x_0\in E\) and

$$\begin{aligned}&t_{k,n}=T_k(x_n),\ k=1,2,\ldots ,K,\\&\text {Choose }k_n \text { such that }\Delta _p(t_{k_n,n},x_n)=\max \nolimits _{k=1,\ldots ,K}\Delta _p(t_{k,n},x_n), \text { let }t_n=t_{k_n,n},\\&H_n=\{z\in E:\ \Delta _p(t_n,z)\le \Delta _p(x_n,z)\},\\&D_n=\{z\in E:\ \langle x_n-z,J_p(x_0)-J_p(x_n)\rangle \ge 0\},\\&x_{n+1}=\Pi _{H_n\cap D_n}(x_0),\ n\ge 0, \end{aligned}$$

converges strongly to \(x^\dagger =\Pi _S(x_0)\), as \(n\rightarrow \infty \).

Proof

Applying Theorem 3.5 with \(F\equiv E\), \(C_i=Q_j=E\) for all \(i=1,2,\ldots ,N\) and all \(j=1,2,\ldots ,M\), and \(A=I\), we obtain the corollary. \(\square \)

4 Numerical test

Example 4.1

We consider Problem (3.1) with \(C_i\subset {\mathbb {R}}^{\mathcal {N}}\) and \(Q_j\subset {\mathbb {R}}^{\mathcal {M}}\) defined by

$$\begin{aligned}&C_i=\{x\in {\mathbb {R}}^{\mathcal {N}}:\ \langle a^C_i,x\rangle \le b^C_i\},\\&Q_j=\{x\in {\mathbb {R}}^{\mathcal {M}}:\ \langle a^Q_j,x\rangle \le b^Q_j\}, \end{aligned}$$

where \(a^C_i\in {\mathbb {R}}^{\mathcal {N}}\), \(a^Q_j\in {\mathbb {R}}^{\mathcal {M}}\) and \(b^C_i, b^Q_j\in {\mathbb {R}}\) for all \(i=1,2,\ldots ,N\) and all \(j=1,2,\ldots ,M\); \(T_k\) is the metric projection from \({\mathbb {R}}^{\mathcal {N}}\) onto \(S_k\) with

$$\begin{aligned} S_k=\{x\in {\mathbb {R}}^{\mathcal {N}}:\ \Vert x-I_k\Vert ^2\le R_k^2\}, \end{aligned}$$

for all \(k=1,2,\ldots ,K\), and A is a bounded linear operator from \({\mathbb {R}}^{\mathcal {N}}\) into \({\mathbb {R}}^{\mathcal {M}}\) whose matrix entries are randomly generated in [2, 4].

Next, the coordinates of \(a^C_i\) and \(a^Q_j\) are randomly generated in [1, 3], the values \(b^C_i\) and \(b^Q_j\) in [2, 4], the centers \(I_k\) in \([-1,1]\) and the radii \(R_k\) of \(S_k\) in [2, 10], respectively.

Clearly, \(S=\big (\bigcap \nolimits _{i=1}^N C_i\big )\bigcap \big (\bigcap \nolimits _{j=1}^M A^{-1}(Q_j)\big )\bigcap \big (\bigcap \nolimits _{k=1}^K F(T_k)\big )\ne \emptyset \) whenever \(\Vert I_k\Vert \le R_k\) for all k, since then \(0\in S\).

Now, we test Algorithm 3.1 with an initial point \(x_0\in {\mathbb {R}}^{\mathcal {N}}\) whose coordinates are randomly generated in \([-5,5]\), and with \({\mathcal {N}}=20\), \({\mathcal {M}}=40\), \(N=50\), \(M=100\), \(K=200\) and \(t_n=\dfrac{1}{2\Vert A\Vert ^2}\). After five runs with randomized data, we obtain the results in Table 1.

Table 1 Table of numerical results for Example 4.1

Remark 4.2

In the above example, the function TOL is defined by

$$\begin{aligned} \text {TOL}_n= & {} \dfrac{1}{N}\sum _{i=1}^N\Vert x_n-P_{C_i}x_n\Vert ^2+\dfrac{1}{M}\sum _{j=1}^M\Vert Ax_n-P_{Q_j}Ax_n\Vert ^2\\&+\dfrac{1}{K}\sum _{k=1}^K\Vert x_n-T_kx_n\Vert ^2, \end{aligned}$$

for all \(n\ge 1\). Note that if \(\hbox {TOL}_n=0\) at the nth step, then \(x_n\in S\), that is, \(x_n\) is a solution of the problem.
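The residual \(\text {TOL}_n\) is straightforward to implement; a minimal sketch for data generated as in Example 4.1 (with shrunken dimensions, and with the radii \(R_k\) enlarged by \(\Vert I_k\Vert \) so that \(0\in S\) is guaranteed for this test instance):

```python
import numpy as np

rng = np.random.default_rng(4)
# small stand-ins for the sizes used in Example 4.1
Ndim, Mdim, N, M, K = 6, 8, 4, 5, 3

aC = rng.uniform(1, 3, (N, Ndim))
bC = rng.uniform(2, 4, N)
aQ = rng.uniform(1, 3, (M, Mdim))
bQ = rng.uniform(2, 4, M)
A = rng.uniform(2, 4, (Mdim, Ndim))
centers = rng.uniform(-1, 1, (K, Ndim))
radii = np.linalg.norm(centers, axis=1) + rng.uniform(2, 10, K)  # forces 0 in S_k

def P_half(x, a, b):
    # metric projection onto the half-space {x : <a, x> <= b}
    return x - max(np.dot(a, x) - b, 0.0) * a / np.dot(a, a)

def P_ball(x, c, r):
    d = np.linalg.norm(x - c)
    return x if d <= r else c + r * (x - c) / d

def TOL(x):
    Ax = A @ x
    s1 = np.mean([np.linalg.norm(x - P_half(x, aC[i], bC[i])) ** 2 for i in range(N)])
    s2 = np.mean([np.linalg.norm(Ax - P_half(Ax, aQ[j], bQ[j])) ** 2 for j in range(M)])
    s3 = np.mean([np.linalg.norm(x - P_ball(x, centers[k], radii[k])) ** 2 for k in range(K)])
    return s1 + s2 + s3

assert TOL(np.zeros(Ndim)) < 1e-12   # 0 solves this instance, so TOL vanishes
```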

Example 4.3

We take \(E=F=L_2([0,1])\) with the inner product

$$\begin{aligned} \langle f,g\rangle =\int _0^1 f(t)g(t){\text {d}}t \end{aligned}$$

and the norm

$$\begin{aligned} \Vert f \Vert =\Bigg (\int _0^1 f^2(t){\text {d}}t\Bigg )^{1/2}, \end{aligned}$$

for all \(f,g\in L_2([0,1]).\)

Now, let

$$\begin{aligned} C_i=\{x\in L_2([0,1]):\ \langle a_i,x\rangle =b_i\}, \end{aligned}$$

where \(a_i(t)=t^{i-1}\), \(b_i=\dfrac{1}{i+1}\) for all \(i=1,2,\ldots , N\) and \(t\in [0,1]\),

$$\begin{aligned} Q_j=\{x\in L_2([0,1]):\ \langle c_j,x\rangle \ge d_j\}, \end{aligned}$$

in which \(c_j(t)=t+j\), \(d_j=\dfrac{1}{4}\) for all \(j=1,2,\ldots ,M\) and \(t\in [0,1]\), and

$$\begin{aligned} T_k=P_{S_k}, \end{aligned}$$

in here \(S_k=\{x\in L_2([0,1]):\ \Vert x-I_k\Vert \le k+1\},\) with \(I_k(t)=t+k\) for all \(k=1,2,\ldots , K\) and \(t\in [0,1]\).

Let us assume that

$$\begin{aligned} A:\ L_2([0,1])\longrightarrow L_2([0,1]),\ (Ax)(t)=\dfrac{x(t)}{2}. \end{aligned}$$

We consider the problem of finding an element \(x^\dagger \) such that

$$\begin{aligned} x^\dagger \in S=\Bigg (\displaystyle \bigcap _{i=1}^N C_i\Bigg )\bigcap \Bigg (\displaystyle \bigcap _{j=1}^M A^{-1}(Q_j)\Bigg )\bigcap \Bigg (\displaystyle \bigcap _{k=1}^K F(T_k)\Bigg ). \end{aligned}$$
(4.1)

It is easy to see that \(S\ne \emptyset \), because \(x(t)=t\in S\).

Table 2 Table of numerical results for Example 4.3
Fig. 1

The behavior of \(\Vert x_{n+1}-x_n\Vert \) with the stop condition \(\Vert x_{n+1}-x_n\Vert <10^{-3}\)

Fig. 2

The behavior of \(x_n(t)\) with the stop condition \(\Vert x_{n+1}-x_n\Vert <10^{-2}\)

Fig. 3

The behavior of \(x_n(t)\) with the stop condition \(\Vert x_{n+1}-x_n\Vert <10^{-3}\)

We have

$$\begin{aligned} \Pi _{C_i}(x)= & {} P_{C_i}(x)=\dfrac{b_i-\langle a_i,x\rangle }{\Vert a_i\Vert ^2}a_i+x,\\ P_{Q_j}(x)= & {} \max \bigg \{0, \dfrac{d_j-\langle c_j,x\rangle }{\Vert c_j\Vert ^2}\bigg \} c_j+x, \end{aligned}$$

and

$$\begin{aligned} T_k(x)= {\left\{ \begin{array}{ll}x,&{} \text {if }\,\,\Vert x-I_k\Vert \le k+1,\\ I_k+\dfrac{k+1}{\Vert x-I_k\Vert }(x-I_k),&{}\text {otherwise}. \end{array}\right. } \end{aligned}$$
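These closed-form projections can be checked on a uniform grid, with the \(L_2([0,1])\) inner product approximated by the trapezoidal rule; a sketch for the index-2 data \(C_2\), \(Q_2\), \(S_2\) used later in Problem (4.2):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10001)
dt = t[1] - t[0]

def ip(f, g):
    # L2([0,1]) inner product via the trapezoidal rule
    h = f * g
    return float(np.sum(h[:-1] + h[1:]) * dt / 2.0)

def nrm(f):
    return np.sqrt(ip(f, f))

def P_hyperplane(x, a, b):
    # projection onto {x : <a, x> = b}, as in the text
    return (b - ip(a, x)) / ip(a, a) * a + x

def P_halfspace(x, c, d):
    # projection onto {x : <c, x> >= d}, as in the text
    return max(0.0, (d - ip(c, x)) / ip(c, c)) * c + x

def T(x, I, r):
    # P_{S_k}: projection onto the ball of radius r about I
    dist = nrm(x - I)
    return x if dist <= r else I + (r / dist) * (x - I)

x0 = np.exp(t ** 2 + 1)        # the initial point used in the experiments
a2, b2 = t, 1.0 / 3.0          # C_2 data: a_2(t) = t, b_2 = 1/3
c2, d2 = t + 2, 0.25           # Q_2 data: c_2(t) = t + 2, d_2 = 1/4
I2, r2 = t + 2, 3.0            # S_2 data: I_2(t) = t + 2, radius k + 1 = 3

assert abs(ip(a2, P_hyperplane(x0, a2, b2)) - b2) < 1e-8   # lands on C_2
assert ip(c2, P_halfspace(x0, c2, d2)) >= d2 - 1e-8        # lands in Q_2
assert nrm(T(x0, I2, r2) - I2) <= r2 + 1e-8                # lands in S_2
```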

Using Algorithm 3.1 with \(N=10\), \(M=20\) and \(K=40\), we obtain the numerical results reported in Table 2.

Fig. 4

The behavior of \(x_n(t)\) with the stop condition \(\Vert x_{n+1}-x_n\Vert <10^{-7}\)

The behavior of \(\Vert x_{n+1}-x_n\Vert \) in Table 2 is described in Fig. 1.

The behaviors of the approximate solution \(x_n(t)\) in the cases \(\Vert x_{n+1}-x_n\Vert <10^{-2}\) and \(\Vert x_{n+1}-x_n\Vert <10^{-3}\) are presented in Figs. 2 and 3.

Now, we consider a special case of problem (4.1) as follows:

$$\begin{aligned} \text {Find an element }x^\dagger \in C\cap A^{-1}(Q)\cap F(T), \end{aligned}$$
(4.2)

where \(C=C_2\), \(Q=Q_2\) and \(T=T_2\).

Applying algorithm (1.6) and Algorithm 3.1 with \(t_n=1\) and \(\alpha _n =\dfrac{1}{n}\) for all \(n\ge 1\), and \(u(t)=x_0(t)=\exp (t^2+1)\) for all \(t\in [0,1]\), we obtain the numerical results in Table 3.

Table 3 Table of numerical results for Problem (4.2)

Figure 4 shows the behavior of the approximate solution \(x_n(t)\) for the stop condition \(\Vert x_{n+1}-x_n\Vert <10^{-7}\) in Table 3.