1 Introduction

The multiple-sets split feasibility problem (MSSFP) [1] is a general way to characterize various linear inverse problems that arise in many real-world applications, such as medical image reconstruction [2] and intensity-modulated radiation therapy [3, 4]. One must find a point closest to the intersection of a family of closed convex sets in one space such that its image under a linear transformation is closest to the intersection of another family of closed convex sets in the image space. Many iterative projection methods have been developed to solve this problem; see for example [5–14] and the references therein. Among these works, Censor et al. [1] proposed an iterative algorithm for the MSSFP based on the gradient projection method. This algorithm uses a fixed step size restricted by the Lipschitz constant of the gradient operator, which depends on the operator norm of the linear transformation. To avoid this inconvenience, Zhao and Yang [12] introduced a self-adaptive projection method that adopts Armijo-like searches to solve the MSSFP; see also [10] and [11]. However, these algorithms require inner iterations to obtain a suitable step size. The recent work of Zhao and Yang [13] suggested a new self-adaptive way to compute the step size directly in each iteration; it needs neither an estimate of the Lipschitz constant nor a choice of the number of inner iterations. A similar approach for solving the two-sets split feasibility problem can be found in López et al. [14].

On the other hand, a fixed-point method for solving the MSSFP was proposed by Xu [15]. He proved that the MSSFP is equivalent to finding a common fixed point of a finite family of averaged mappings. Consequently, he proposed three iteration methods to solve the MSSFP: (i) a successive iteration method; (ii) a simultaneous iteration method; and (iii) a cyclic iteration method. These methods also use a fixed step size that depends on the Lipschitz constant.

Since the iteration methods introduced by Xu [15] use a fixed step size that relies on the Lipschitz constant, the purpose of this paper is to introduce a cyclic iteration method and a simultaneous iteration method with self-adaptive step size for solving the MSSFP. Further, we consider a special case of the MSSFP where the closed convex sets are level sets of convex functions and propose a relaxed cyclic iteration method and a relaxed simultaneous iteration method with self-adaptive step size, using projections onto half-spaces instead of the original convex sets, which is much more practical. For generality, we prove the theoretical convergence results in an infinite-dimensional Hilbert space setting. Some numerical experiments are reported to verify the efficiency of the proposed methods.

The rest of this paper is organized as follows. In the next section, we introduce some basic definitions and lemmas which will be used in the following sections. In Sect. 3, we propose a cyclic iteration scheme and a simultaneous iteration scheme with self-adaptive step size and prove their convergence. A relaxed cyclic iteration scheme and a relaxed simultaneous iteration scheme with self-adaptive step size are given in Sect. 4 together with theoretical convergence proofs. In Sect. 5, we present some numerical experiments that compare our methods with others and show their effectiveness. We give some conclusions in the final section.

2 Preliminaries

In this section, we collect some important definitions and useful lemmas which will be used in the following sections. Let \(H\) be a real Hilbert space with inner product \(\langle \cdot , \cdot \rangle \) and norm \(\Vert \cdot \Vert \). We adopt the following notations: (i) \(\varOmega \) denotes the solution set of the MSSFP; (ii) \(x_n \rightarrow x\,(x_n \rightharpoonup x)\) means that \(x_n\) converges strongly (weakly) to \(x\); (iii) \(\omega _{w}(x_n)\) denotes the set of weak cluster points of the sequence \(\{x_n\}\); and (iv) \(\mathrm{Fix}(T)\) denotes the set of fixed points of the mapping \(T\).

Definition 2.1

([16]) A mapping \(T:H\rightarrow H\) is said to be

  • (i) nonexpansive, if \(\Vert Tx-Ty\Vert \le \Vert x-y\Vert ,\quad \forall x,y\in H\);

  • (ii) firmly nonexpansive, if \(\langle x-y, Tx - Ty \rangle \ge \Vert Tx-Ty\Vert ^{2},\quad \forall x,y \in H\);

  • (iii) an averaged mapping, if there exists a nonexpansive mapping \(S:H \rightarrow H\) and a real number \(t \in (0,1)\) satisfying \( T = (1-t)I + t S, \) where \(I\) denotes the identity mapping.

Recall that the orthogonal projection \(P_{C}\) from \(H\) onto a nonempty closed convex subset \(C\subset H\) is defined by

$$\begin{aligned} P_{C}x = \arg \min _{y\in C} \Vert x-y\Vert . \end{aligned}$$

The orthogonal projection has the following well-known properties (see for example [17]).

Lemma 2.1

Let \(C\subset H\) be nonempty, closed and convex. Then for all \(x, y\in H\) and \(z\in C\),

  • (i) \(\langle x - P_{C}x, z- P_{C}x \rangle \le 0\);

  • (ii) \(\Vert P_{C}x - P_{C}y\Vert ^{2} \le \langle P_{C}x - P_{C}y, x-y \rangle \);

  • (iii) \(\Vert P_{C}x - z\Vert ^{2} \le \Vert x-z\Vert ^{2} - \Vert P_{C}x -x\Vert ^{2}\).

We give two examples of projection operators. These results can be found in Chapter 4 of Cegielski [18].

  1. (1)

    If \(C=\{x\in {\mathbb {R}}^{n}: \Vert x-u\Vert \le r \}\) is a closed ball centered at \(u\in {\mathbb {R}}^{n}\) with radius \(r>0\), then

    $$\begin{aligned} P_{C}x = \left\{ \begin{array}{ll} u+ r \frac{x-u}{\Vert x-u\Vert }, &{}\quad x \not \in C, \\ x, &{}\quad x\in C. \end{array} \right. \end{aligned}$$
  2. (2)

    If \(C=[\mathbf {a},\mathbf {b}]\subset {\mathbb {R}}^{n}\) is a closed rectangle in \({\mathbb {R}}^{n}\), where \(\mathbf {a}=(a_1, a_2, \ldots , a_n)^\mathrm{T}\) and \(\mathbf {b}=(b_1, b_2, \ldots , b_n)^\mathrm{T}\) with \(a_i \le b_i\), for all \(i\), then for \(x\in {\mathbb {R}}^{n}\),

    $$\begin{aligned} (P_{C}x)_i = \left\{ \begin{array}{lll} a_i, &{}\quad \text {if}\,x_i < a_i, \\ x_i, &{}\quad \text {if}\,x_i \in [a_i, b_i], \\ b_i, &{}\quad \text {if}\,x_i > b_i. \end{array} \right. \end{aligned}$$
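As a numerical illustration (ours, not from [18]), the two closed-form projections above can be implemented directly; the function names below are our own.

```python
import numpy as np

def project_ball(x, u, r):
    """Projection onto the closed ball {x : ||x - u|| <= r}."""
    d = np.linalg.norm(x - u)
    if d <= r:
        return np.asarray(x, float).copy()
    return u + r * (x - u) / d

def project_box(x, a, b):
    """Componentwise projection onto the rectangle [a, b]."""
    return np.clip(x, a, b)
```

`project_ball` returns the boundary point \(u + r(x-u)/\Vert x-u\Vert\) whenever \(x\) lies outside the ball, and `project_box` clips each coordinate \(x_i\) into \([a_i, b_i]\), exactly as in the two formulas above.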

Remark 2.1

It is easily seen that a firmly nonexpansive mapping is nonexpansive, by the Cauchy–Schwarz inequality. The projection \(P_{C}\) is firmly nonexpansive and hence nonexpansive.

The mathematical form of the MSSFP can be formulated as finding a point \(x^{*}\) with the property

$$\begin{aligned} x^{*}\in \bigcap _{i=1}^{t}C_i,\quad \text {such}\,\text {that}\, Ax^{*} \in \bigcap _{j=1}^{r}Q_j, \end{aligned}$$
(1)

where \(t, r\ge 1\) are positive integers, \(\{C_i\}_{i=1}^{t}\subseteq H_1\) and \(\{Q_j\}_{j=1}^{r}\subseteq H_2\) are closed convex sets in the Hilbert spaces \(H_1\) and \(H_2\), respectively, and \(A:H_1 \rightarrow H_2\) is a bounded linear operator. When \(t=r=1\), the MSSFP reduces to the two-sets split feasibility problem (SFP) [19]:

$$\begin{aligned} \text {Finding}\,\text {a}\,\text {point}\ x^{*}\in C,\quad \text {such}\,\text {that}\ Ax^{*}\in Q, \end{aligned}$$
(2)

where \(C\subseteq H_1\) and \(Q\subseteq H_2\) are nonempty, closed and convex sets. Recall the proximity function introduced by Censor et al. [1]:

$$\begin{aligned} g(x) := \frac{1}{2}\sum _{i=1}^{t}\alpha _i \left\| x-P_{C_i}(x)\right\| ^{2} + \frac{1}{2}\sum _{j=1}^{r}\beta _j \left\| Ax - P_{Q_j}(Ax)\right\| ^{2}, \end{aligned}$$
(3)

where \(\{\alpha _i\}_{i=1}^{t}\) and \(\{\beta _j\}_{j=1}^{r}\) are positive real numbers, and \(P_{C_i}\) and \(P_{Q_j}\) are the metric projections onto \(C_i\) and \(Q_j\), respectively.

Proposition 2.1

([1]) Suppose that the solution set of the MSSFP is nonempty. Then the following statements hold:

  1. (i)

\(x^{*}\) is a solution of the MSSFP if and only if \(g(x^{*})=0\);

  2. (ii)

    The proximity function \(g(x)\) is convex and differentiable with gradient

    $$\begin{aligned} \nabla g(x) = \sum _{i=1}^{t}\alpha _i \left( x-P_{C_i}x\right) + \sum _{j=1}^{r}\beta _j A^{*}\left( I-P_{Q_j}\right) (Ax), \end{aligned}$$
    (4)

    and the Lipschitz constant of \(\nabla g(x)\) is \(L=\sum \nolimits _{i=1}^{t}\alpha _i + \Vert A\Vert ^{2}\sum \nolimits _{j=1}^{r}\beta _j\).

Xu [15] introduced a proximity function which is different from the proximity function (3),

$$\begin{aligned} p(x) := \frac{1}{2}\sum _{j=1}^{r}\beta _j \left\| Ax - P_{Q_j}(Ax)\right\| ^{2}, \end{aligned}$$
(5)

where \(\beta _j > 0\) for all \(1\le j\le r\). Consequently, he derived the following results.

Proposition 2.2

([15]) Suppose that the solution set of the MSSFP is nonempty. Then the following statements hold:

  1. (i)

    The function \(p(x)\) is convex and differentiable with gradient

    $$\begin{aligned} \nabla p(x) = \sum _{j=1}^{r}\beta _j A^{*}(I - P_{Q_j})(Ax), \end{aligned}$$
    (6)

    and the Lipschitz constant of \(\nabla p(x)\) is \(L= \Vert A\Vert ^{2}\sum \nolimits _{j=1}^{r}\beta _j\);

  2. (ii)

\(x^{*}\) is a solution of the MSSFP if and only if \(x^{*}\) is a common fixed point of the averaged mappings \(\{T_i\}_{i=1}^{t}\), where \(T_i :=P_{C_i}(I - \gamma \nabla p), \gamma >0,i=1, 2,\ldots , t\).

It is well known that in a real Hilbert space \(H\), the following equality holds:

$$\begin{aligned} \left\| \alpha x + (1-\alpha )y\right\| ^2 = \alpha \Vert x\Vert ^2 + (1-\alpha )\Vert y\Vert ^2 - \alpha (1-\alpha )\Vert x-y\Vert ^2, \end{aligned}$$
(7)

for all \(x, y\in H\) and \( \alpha \in [0,1]\). We will make use of a more general form of this equality, which can be found in Lemma 2.13 of Bauschke and Combettes [16].

Lemma 2.2

For all \(x_1, x_2, \ldots , x_n\in H\), the following equality holds

$$\begin{aligned} \left\| \sum _{i=1}^{n}\lambda _i x_i \right\| ^2 = \sum _{i=1}^{n}\lambda _i \Vert x_i\Vert ^2 - \frac{1}{2}\sum _{i, j =1 }^{n} \lambda _i \lambda _j \Vert x_i -x_j\Vert ^2,\quad \ n\ge 2, \end{aligned}$$
(8)

where \(\lambda _i \in [0,1], i=1, 2, \ldots , n, \sum \nolimits _{i=1}^{n}\lambda _i =1\).

Definition 2.2

([20]) Suppose that \(C\) is a nonempty, closed and convex set in \(H\) and \(\{x_n\}\) is a sequence in \(H\). Then, \(\{x_n\}\) is Fejér monotone with respect to \(C\) if

$$\begin{aligned} \Vert x_{n+1}-z\Vert \le \Vert x_n -z\Vert ,\quad \forall z\in C. \end{aligned}$$

Fejér-monotone sequences are very useful in the analysis of iterative optimization algorithms. It is easy to see that a Fejér-monotone sequence \(\{x_n\}\) is bounded and that \(\lim _{n\rightarrow \infty }\Vert x_n - z\Vert \) exists for each \(z\in C\).

The demiclosedness principle for nonexpansive mappings is well known in Hilbert spaces; see for example Theorem 4.17 of Bauschke and Combettes [16].

Lemma 2.3

(Demiclosedness principle for nonexpansive mappings) Let \(C\) be a closed convex subset of \(H\) and \(T:C\rightarrow C\) be a nonexpansive mapping with \(\mathrm{Fix}(T)\ne \emptyset \). If \(\{x_n\}\) is a sequence in \(C\) that converges weakly to \(x\) and \(\{(I-T)x_n\}\) converges strongly to \(y\), then \((I-T)x=y\). In particular, if \(y=0\), then \(x=Tx\).

The following result is useful when proving weak convergence of a sequence.

Lemma 2.4

([16]) Let \(K\) be a nonempty, closed and convex subset of a Hilbert space \(H\). Let \(\{x_n\}\) be a sequence in \(H\) satisfying the properties:

  1. (i)

    \(\lim _{n\rightarrow \infty }\Vert x_n - x\Vert \) exists for each \(x\in K\);

  2. (ii)

    \(\omega _{w}(x_n)\subset K\).

Then, \(\{x_n\}\) converges weakly to a point in \(K\).

We will use convex functions to define the closed convex sets \(\{C_i\}_{i=1}^{t}\) and \(\{Q_j\}_{j=1}^{r}\) in Sect. 4. Recall that a function \(\varphi : H\rightarrow {\mathbb {R}}\) is said to be convex if

$$\begin{aligned} \varphi (\lambda x + (1-\lambda )y) \le \lambda \varphi (x) + (1-\lambda ) \varphi (y), \end{aligned}$$
(9)

for all \(\lambda \in [0,1]\) and for all \(x,y\in H\). Let \(x_0\in H\). We say that \(\varphi \) is subdifferentiable at \(x_0\) if there exists \(\xi \in H\) such that

$$\begin{aligned} \varphi (y) \ge \varphi (x_0) + \langle \xi , y - x_0 \rangle ,\quad \text {for}\,\text {all}\, y\in H. \end{aligned}$$
(10)

The subdifferential of \(\varphi \) at \(x_0\), denoted by \(\partial \varphi (x_0)\), consists of all \(\xi \) satisfying relation (10).

The following lemma provides an important boundedness property of the subdifferential in finite-dimensional spaces.

Lemma 2.5

([21]) Suppose \(f:{\mathbb {R}}^{n}\rightarrow {\mathbb {R}}\) is a finite convex function, then it is subdifferentiable everywhere, and its subdifferentials are uniformly bounded on any bounded subset of \({\mathbb {R}}^{n}\).

3 A Cyclic and Simultaneous Iteration Method for Solving the MSSFP

In this section, we propose a cyclic iteration method and a simultaneous iteration method with self-adaptive step size for solving the MSSFP and prove the theoretical convergence. In what follows, the functions \(p(x)\) and \(\nabla p(x)\) are defined in (5) and (6), respectively. First, we prove the convergence of a cyclic iterative sequence with self-adaptive step size for solving the MSSFP.

Theorem 3.1

Assume that the MSSFP is consistent (i.e., the solution set \(\varOmega \) is nonempty). For any \(x_0\in H_1\), the cyclic iteration scheme \(\{x_n\}\) is defined by

$$\begin{aligned} x_{n+1} = P_{C_{[n]}}\left( x_n - \lambda _n \nabla p(x_n)\right) ,\quad \ n\ge 0, \end{aligned}$$
(11)

where \([n] = (n\mod \ t) +1\), the mod function takes values in \(\{0, 1, \ldots , t-1\}\), and the step size is \(\lambda _n := \frac{\rho _n p(x_n)}{\Vert \nabla p(x_n) \Vert ^{2}}\) with \(0 < \underline{\rho } \le \rho _n \le \overline{\rho } < 4\). Then the iterative sequence \(\{x_n\}\) converges weakly to a solution of the MSSFP.

Proof

Let \(p\in \varOmega \). Using the nonexpansivity of the projection operator, we have

$$\begin{aligned} \left\| x_{n+1} - p \right\| ^{2}&= \left\| P_{C_{[n]}}\left( x_n - \lambda _n \nabla p(x_n)\right) - p\right\| ^{2} \nonumber \\&\le \left\| x_n - \lambda _n \nabla p(x_n) - p \right\| ^{2} \nonumber \\&= \Vert x_n -p \Vert ^{2} - 2 \lambda _n \left\langle \nabla p(x_n), x_n -p \right\rangle + \left\| \lambda _n \nabla p(x_n)\right\| ^{2}. \end{aligned}$$
(12)

From Lemma 2.1, we obtain

$$\begin{aligned} \left\langle \nabla p(x_n), x_n -p \right\rangle&= \left\langle \sum _{j=1}^{r}\beta _j A^{*}(I-P_{Q_j})Ax_n, x_n - p \right\rangle \nonumber \\&= \sum _{j=1}^{r}\beta _j \left\langle (I-P_{Q_j})Ax_n, Ax_n - P_{Q_j}(Ax_n)\right\rangle \nonumber \\&\quad +\, \sum _{j=1}^{r}\beta _j \left\langle (I-P_{Q_j})Ax_n, P_{Q_j}(Ax_n) - Ap \right\rangle \nonumber \\&\ge \sum _{j=1}^{r}\beta _j \left\| Ax_n - P_{Q_j}(Ax_n)\right\| ^{2} = 2 p(x_n). \end{aligned}$$
(13)

Substituting (13) into (12), we get

$$\begin{aligned} \Vert x_{n+1} - p \Vert ^{2}&\le \Vert x_n -p\Vert ^{2} - 4 \lambda _n p(x_n) + \Vert \lambda _n \nabla p(x_n) \Vert ^{2} \nonumber \\&= \Vert x_n -p\Vert ^{2} - \rho _{n}(4-\rho _n) \frac{p^{2}(x_n)}{\Vert \nabla p(x_n)\Vert ^{2}}. \end{aligned}$$
(14)

Since \(0<\rho _n <4\), it follows from (14) that

$$\begin{aligned} \Vert x_{n+1} - p \Vert \le \Vert x_n -p\Vert , \end{aligned}$$
(15)

which implies that \(\{x_n\}\) is a Fejér-monotone sequence and that \(\lim _{n\rightarrow \infty }\Vert x_n -p\Vert \) exists. We can also get from (14) that

$$\begin{aligned} \underline{\rho }\left( 4-\overline{\rho }\right) \frac{p^{2}(x_n)}{\Vert \nabla p(x_n) \Vert ^{2}}&\le \rho _n (4-\rho _n) \frac{p^{2}(x_n)}{\Vert \nabla p(x_n) \Vert ^{2}} \nonumber \\&\le \Vert x_n - p \Vert ^{2} - \Vert x_{n+1}-p\Vert ^{2}. \end{aligned}$$
(16)

This implies that

$$\begin{aligned} \sum _{n=0}^{\infty }\frac{p^{2}(x_n)}{\Vert \nabla p(x_n) \Vert ^{2}} < +\infty . \end{aligned}$$
(17)

Since \(\nabla p\) is Lipschitz continuous and \(\{x_n\}\) is bounded, \(\{\nabla p(x_n)\}\) is also bounded. Hence, from (17), we conclude that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{2}\sum _{j=1}^{r}\beta _j \left\| Ax_{n} - P_{Q_j}(Ax_n)\right\| ^2 = 0. \end{aligned}$$

It follows from the above that

$$\begin{aligned} \lim _{n\rightarrow \infty }\left\| Ax_n - P_{Q_j}(Ax_n)\right\| =0, \end{aligned}$$
(18)

for any \(j = 1, 2, \ldots , r\). Since the sequence \(\{x_n\}\) is bounded, there exists a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) such that \(x_{n_k} \rightharpoonup {\widehat{x}}\). Next, we show that \({\widehat{x}}\) is a solution of the MSSFP. Since, for each \(j\), the convex function \(x \mapsto \Vert Ax - P_{Q_j}(Ax)\Vert \) is weakly lower semicontinuous, (18) yields

$$\begin{aligned} \left\| A{\widehat{x}} - P_{Q_j}(A{\widehat{x}}) \right\| \le \liminf _{k\rightarrow \infty } \left\| Ax_{n_k} - P_{Q_j}(Ax_{n_k}) \right\| = 0,\quad j = 1, 2, \ldots , r. \end{aligned}$$

Therefore, \(A{\widehat{x}}\in \bigcap _{j=1}^{r}Q_j\). In the following, we will prove \({\widehat{x}}\in \bigcap _{i=1}^{t}C_i\).

Let \(u_n = x_n - \lambda _n \nabla p(x_n)\). Since \(\Vert u_{n_k} - x_{n_k}\Vert = \lambda _{n_k}\Vert \nabla p(x_{n_k})\Vert \le \overline{\rho }\,\frac{p(x_{n_k})}{\Vert \nabla p(x_{n_k})\Vert } \rightarrow 0\) by (17), the subsequence \(\{u_{n_k}\}\) also converges weakly to \(\widehat{x}\). On the other hand, we have the estimate

$$\begin{aligned} \Vert x_{n_{k}+1} - p \Vert ^{2} \le \Vert u_{n_{k}} - p \Vert ^{2} \le \Vert x_{n_k} - p \Vert ^{2} - \rho _{n_k}(4-\rho _{n_k})\frac{p^{2}(x_{n_k})}{\Vert \nabla p(x_{n_k})\Vert ^{2}}. \end{aligned}$$
(19)

Then, \(\lim _{k\rightarrow \infty }\Vert u_{n_{k}} -p \Vert \) exists and equals \(\lim _{k\rightarrow \infty }\Vert x_{n_k} - p \Vert \). By Lemma 2.1, we have

$$\begin{aligned} \left\| P_{C_{[n_{k}]}}(u_{n_k}) - u_{n_k} \right\| ^{2}&\le \Vert u_{n_k} -p \Vert ^{2} - \left\| P_{C_{[n_k]}}(u_{n_k}) - p \right\| ^{2} \nonumber \\&= \Vert u_{n_k} - p \Vert ^{2} - \Vert x_{n_{k}+1} - p \Vert ^{2} \rightarrow 0,\quad \text {as}\,k\rightarrow \infty . \end{aligned}$$
(20)

Notice that the index set \(\{1, 2, \ldots , t\}\) is finite. Then, for any \(i\in \{1, 2, \ldots , t\}\), we can choose a subsequence \(\{n_{k_{l}}\}\subset \{n_k\}\) such that \([n_{k_{l}}]=i\), and hence

$$\begin{aligned} \Vert P_{C_i}(u_{n_{k_{l}}}) - u_{n_{k_{l}}} \Vert \rightarrow 0,\quad \text {as}\, l\rightarrow \infty . \end{aligned}$$

Since the projection operator \(P_{C_i}\) is nonexpansive, by the demiclosedness of nonexpansive mappings (Lemma 2.3), we know that \(\widehat{x}\in C_i\), i.e., \(\widehat{x}\in \bigcap _{i=1}^{t}C_i\). Therefore, \(\widehat{x} \in \varOmega \). By Lemma 2.4, we can conclude that the sequence \(\{x_n\}\) converges weakly to a solution of the MSSFP. This completes the proof. \(\square \)
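To make the scheme concrete, the following sketch runs the cyclic iteration (11) on a small two-dimensional MSSFP of our own choosing; the sets, the matrix \(A\), and the choice \(\rho_n \equiv 2\) are illustrative assumptions, not data from this paper.

```python
import numpy as np

# Toy MSSFP data in R^2 (our own illustrative choice, not from the paper):
# C_1 = ball(0, 2), C_2 = box [-1, 1]^2, A = 2I,
# Q_1 = box [0, 1]^2, Q_2 = ball(0, 2), beta_j = 1.

def proj_ball(x, u, r):
    d = np.linalg.norm(x - u)
    return x if d <= r else u + r * (x - u) / d

def proj_box(x, a, b):
    return np.clip(x, a, b)

A = 2.0 * np.eye(2)
proj_C = [lambda x: proj_ball(x, np.zeros(2), 2.0),
          lambda x: proj_box(x, -np.ones(2), np.ones(2))]
proj_Q = [lambda y: proj_box(y, np.zeros(2), np.ones(2)),
          lambda y: proj_ball(y, np.zeros(2), 2.0)]

def p(x):       # proximity function (5) with beta_j = 1
    Ax = A @ x
    return 0.5 * sum(np.linalg.norm(Ax - P(Ax)) ** 2 for P in proj_Q)

def grad_p(x):  # gradient (6) with beta_j = 1
    Ax = A @ x
    return sum(A.T @ (Ax - P(Ax)) for P in proj_Q)

def cyclic(x0, rho=2.0, iters=50, tol=1e-12):
    """Cyclic scheme (11); sets are indexed from 0 here, from 1 in the paper."""
    x, t = np.asarray(x0, float), len(proj_C)
    for n in range(iters):
        g = grad_p(x)
        if np.linalg.norm(g) < tol:        # Ax_n is already (nearly) feasible
            x = proj_C[n % t](x)
            continue
        lam = rho * p(x) / (g @ g)         # self-adaptive step size lambda_n
        x = proj_C[n % t](x - lam * g)
    return x

x_cyc = cyclic([3.0, -2.0])
```

Starting from \(x_0 = (3, -2)\), the iterates reach (up to rounding) the solution \((0.5, 0)\), at which \(p(x) = 0\).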

Second, we propose a simultaneous iterative sequence with self-adaptive step size for solving the MSSFP and prove its convergence. The proof is similar to that of Theorem 3.1; we give the details for the sake of completeness.

Theorem 3.2

Assume that the MSSFP is consistent (i.e., the solution set \(\varOmega \) is nonempty). For any initial \(x_0 \in H_1\), define the simultaneous iteration scheme as follows,

$$\begin{aligned} x_{n+1} = \sum _{i=1}^{t}w_i P_{C_{i}}\left( x_n - \lambda _n \nabla p(x_n)\right) ,\quad \ n\ge 0, \end{aligned}$$
(21)

where \(\{w_i\}_{i=1}^{t}\subseteq [0,1]\) with \(\sum \nolimits _{i=1}^{t}w_i =1\), and the step size \(\{\lambda _n\}\) is the same as in Theorem 3.1. Then the iterative sequence \(\{x_n\}\) converges weakly to a solution of the MSSFP.

Proof

Let \(p\in \varOmega \). By the iteration scheme (21), the nonexpansivity of the projection operators \(P_{C_i}\) and Lemma 2.2, we obtain

$$\begin{aligned} \Vert x_{n+1} - p \Vert ^{2}&= \left\| \sum _{i=1}^{t}w_i P_{C_{i}}(x_n - \lambda _n \nabla p(x_n)) - p \right\| ^{2} \nonumber \\&\le \Vert x_n - \lambda _n \nabla p(x_n) - p \Vert ^{2} \nonumber \\&= \Vert x_n -p \Vert ^{2} - 2 \lambda _n \left\langle \nabla p(x_n), x_n -p \right\rangle + \Vert \lambda _n \nabla p(x_n) \Vert ^{2}. \end{aligned}$$
(22)

Substituting inequality (13) into (22), we get

$$\begin{aligned} \Vert x_{n+1} - p \Vert ^{2}&\le \Vert x_n -p\Vert ^{2} - 4 \lambda _n p(x_n) + \Vert \lambda _n \nabla p(x_n) \Vert ^{2} \nonumber \\&= \Vert x_n -p\Vert ^{2} - \rho _{n}(4-\rho _n) \frac{p^{2}(x_n)}{\Vert \nabla p(x_n)\Vert ^{2}}. \end{aligned}$$
(23)

Since \(0<\rho _n <4\), it follows from (23) that

$$\begin{aligned} \Vert x_{n+1} - p \Vert ^{2} \le \Vert x_n -p\Vert ^{2}, \end{aligned}$$
(24)

which implies that \(\{x_n\}\) is a Fejér-monotone sequence and that \(\lim _{n\rightarrow \infty }\Vert x_n -p\Vert \) exists. We can also get from (23) that

$$\begin{aligned} \underline{\rho }(4-\overline{\rho })\frac{p^{2}(x_n)}{\Vert \nabla p(x_n) \Vert ^{2}} \le \Vert x_n - p \Vert ^{2} - \Vert x_{n+1}-p\Vert ^{2}. \end{aligned}$$
(25)

Then,

$$\begin{aligned} \sum _{n=0}^{\infty }\frac{p^{2}(x_n)}{\Vert \nabla p(x_n) \Vert ^{2}} < +\infty . \end{aligned}$$
(26)

Since \(\nabla p\) is Lipschitz continuous and \(\{x_n\}\) is bounded, \(\{\nabla p(x_n)\}\) is bounded. Hence, we can conclude from (26) that

$$\begin{aligned} \lim _{n\rightarrow \infty }\Vert Ax_n - P_{Q_j}(Ax_n) \Vert =0,\quad \text {for}\,\text {any}\ j= 1, 2, \ldots , r. \end{aligned}$$
(27)

Since the sequence \(\{x_n\}\) is bounded, there exists a subsequence \(\{x_{n_l}\}\) of \(\{x_n\}\) that converges weakly to \(\tilde{x}\). Next, we show that \(\tilde{x}\) is a solution of the MSSFP. Since, for each \(j\), the convex function \(x \mapsto \Vert Ax - P_{Q_j}(Ax)\Vert \) is weakly lower semicontinuous, (27) yields

$$\begin{aligned} \left\| A\tilde{x} - P_{Q_j}\left( A\tilde{x}\right) \right\| \le \liminf _{l\rightarrow \infty } \Vert Ax_{n_l} - P_{Q_j}(Ax_{n_l}) \Vert = 0,\quad j = 1, 2, \ldots , r. \end{aligned}$$

Therefore, \(A\tilde{x}\in \bigcap _{j=1}^{r}Q_j\). In the following, we will prove that \(\tilde{x}\in \bigcap _{i=1}^{t}C_i\).

Let \(u_n = x_n - \lambda _n \nabla p(x_n)\). Since \(\Vert u_{n_l} - x_{n_l}\Vert = \lambda _{n_l}\Vert \nabla p(x_{n_l})\Vert \le \overline{\rho }\,\frac{p(x_{n_l})}{\Vert \nabla p(x_{n_l})\Vert } \rightarrow 0\) by (26), the subsequence \(\{u_{n_l}\}\) also converges weakly to \(\tilde{x}\). On the other hand, we have the estimate

$$\begin{aligned} \Vert x_{n_{l}+1} - p \Vert ^{2} \le \left\| u_{n_l} - p \right\| ^{2} \le \Vert x_{n_l} - p \Vert ^{2} - \rho _{n_l}(4-\rho _{n_l})\frac{p^{2}(x_{n_l})}{\Vert \nabla p(x_{n_l})\Vert ^{2}}. \end{aligned}$$
(28)

Then, \(\lim _{l\rightarrow \infty }\Vert u_{n_{l}} -p \Vert \) exists and equals \(\lim _{l\rightarrow \infty }\Vert x_{n_l} - p \Vert \). With the help of Lemma 2.1(iii), we have

$$\begin{aligned} \sum _{i=1}^{t}w_i\Vert P_{C_{i}}(u_{n_l}) - u_{n_l} \Vert ^{2}&\le \Vert u_{n_l} -p \Vert ^{2} - \sum _{i=1}^{t}w_i\Vert P_{C_{i}}(u_{n_l}) - p \Vert ^{2} \nonumber \\&\le \Vert u_{n_l} - p \Vert ^{2} - \Vert x_{n_{l}+1} - p \Vert ^{2} \rightarrow 0,\quad \text {as}\,l\rightarrow \infty . \end{aligned}$$
(29)

Thus, for any \(i \in \{1, 2, \ldots , t\}\), we have

$$\begin{aligned} \Vert P_{C_{i}}(u_{n_l}) - u_{n_l} \Vert \rightarrow 0,\quad \text {as}\,l\rightarrow \infty . \end{aligned}$$
(30)

By the demiclosedness principle (Lemma 2.3), \(\tilde{x}\in C_i\) for each \(i\), i.e., \(\tilde{x}\in \bigcap _{i=1}^{t}C_i\). Therefore, \(\tilde{x} \in \varOmega \). By Lemma 2.4, we conclude that the sequence \(\{x_n\}\) converges weakly to a solution of the MSSFP. This completes the proof. \(\square \)
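For comparison with the cyclic variant, here is a sketch of the simultaneous scheme (21) on toy data of our own choosing (equal weights \(w_i = 1/2\) and \(\rho_n \equiv 2\) are illustrative assumptions): all projections onto the sets \(C_i\) are applied at once and averaged.

```python
import numpy as np

# Toy MSSFP data in R^2 (our own illustration, not from the paper):
# C_1 = ball(0, 2), C_2 = box [-1, 1]^2, A = 2I,
# Q_1 = box [0, 1]^2, Q_2 = ball(0, 2), beta_j = 1, weights w = (1/2, 1/2).

def proj_ball(x, u, r):
    d = np.linalg.norm(x - u)
    return x if d <= r else u + r * (x - u) / d

def proj_box(x, a, b):
    return np.clip(x, a, b)

A = 2.0 * np.eye(2)
proj_C = [lambda x: proj_ball(x, np.zeros(2), 2.0),
          lambda x: proj_box(x, -np.ones(2), np.ones(2))]
proj_Q = [lambda y: proj_box(y, np.zeros(2), np.ones(2)),
          lambda y: proj_ball(y, np.zeros(2), 2.0)]

def p(x):       # proximity function (5) with beta_j = 1
    Ax = A @ x
    return 0.5 * sum(np.linalg.norm(Ax - P(Ax)) ** 2 for P in proj_Q)

def grad_p(x):  # gradient (6) with beta_j = 1
    Ax = A @ x
    return sum(A.T @ (Ax - P(Ax)) for P in proj_Q)

def simultaneous(x0, w=(0.5, 0.5), rho=2.0, iters=50, tol=1e-12):
    x = np.asarray(x0, float)
    for _ in range(iters):
        g = grad_p(x)
        u = x if np.linalg.norm(g) < tol else x - (rho * p(x) / (g @ g)) * g
        x = sum(wi * P(u) for wi, P in zip(w, proj_C))  # averaged update (21)
    return x

x_sim = simultaneous([3.0, -2.0])
```

On this toy data, the simultaneous iterates also settle (up to rounding) at the solution \((0.5, 0)\).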

Remark 3.1

The cyclic iterative sequence (11) and the simultaneous iterative sequence (21) not only extend the iteration methods of Xu [15] from a constant step size to a self-adaptive step size, but also generalize the results of López et al. [14] from the SFP to the MSSFP.

4 A Relaxed Cyclic and Simultaneous Iteration Method for Solving the MSSFP

In the previous section, we have proved that the cyclic iterative sequence (11) and the simultaneous iterative sequence (21) with self-adaptive step size converge weakly to a solution of the MSSFP. In this section, we will consider a special case of the MSSFP, where the closed convex sets \(\{C_i\}_{i=1}^{t}\) and \(\{Q_j\}_{j=1}^{r}\) are level sets of convex functions. Before we state our main results, we make the following two assumptions.

  • (A1) The set \(C_i\) is given by

    $$\begin{aligned} C_i := \{ x\in H_1 | c_{i}(x)\le 0 \}, \end{aligned}$$

    where \(c_i : H_1\rightarrow {\mathbb {R}}, i=1, 2, \ldots , t\) are convex functions. The set \(Q_j\) is given by

    $$\begin{aligned} Q_j := \{ y\in H_2 | q_j (y)\le 0 \}, \end{aligned}$$

    where \(q_{j}: H_2\rightarrow {\mathbb {R}}, j=1, 2, \ldots , r\) are convex functions. Assume that both \(c_i\) and \(q_j\) are subdifferentiable on \(H_1\) and \(H_2\), respectively, and that \(\partial c_{i}\) and \(\partial q_{j}\) are bounded operators (i.e., bounded on bounded sets).

  • (A2) For any \(x\in H_1\) and \(y\in H_2\), at least one subgradient \(\xi _{i}\in \partial c_{i}(x)\) and \(\eta _j \in \partial q_{j}(y)\) can be calculated, where \(\partial c_{i}(x)\) and \(\partial q_{j}(y)\) are the subdifferentials of \(c_{i}(x)\) and \(q_{j}(y)\) at the points \(x\) and \(y\), respectively.

    $$\begin{aligned} \partial c_{i}(x) := \left\{ \xi _i \in H_1 | c_{i}(z) \ge c_{i}(x) + \langle \xi _i, z -x \rangle ,\quad \forall z \in H_1\right\} , \end{aligned}$$

    and

    $$\begin{aligned} \partial q_{j}(y) := \left\{ \eta _j \in H_2 | q_{j}(u) \ge q_{j}(y) + \langle \eta _j, u -y \rangle ,\quad \forall u\in H_2\right\} . \end{aligned}$$

    Define \(C_{i}^{n}\) and \(Q_{j}^{n}\) to be the following half-spaces:

    $$\begin{aligned} C_{i}^{n} := \left\{ x\in H_1 | c_{i}(x_n) + \left\langle \xi _{i}^{n}, x - x_n \right\rangle \le 0\right\} , \end{aligned}$$

    where \(\xi _{i}^{n}\in \partial c_{i}(x_n), i=1, 2, \ldots , t\) and

    $$\begin{aligned} Q_{j}^{n} := \left\{ y\in H_2 | q_{j}(Ax_n) + \left\langle \eta _{j}^{n}, y - Ax_n\right\rangle \le 0\right\} , \end{aligned}$$

    where \(\eta _{j}^{n}\in \partial q_{j}(Ax_n), j=1, 2, \ldots , r\).

By the definition of the subgradient, it is clear that \(C_i \subseteq C_{i}^{n}\), \(Q_j \subseteq Q_{j}^{n}\), and the orthogonal projections onto these half-spaces \(C_{i}^{n}\) and \(Q_{j}^{n}\) can be directly calculated.
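The computational advantage comes from the closed form of the projection onto a half-space \(\{x : \langle a, x\rangle \le b\}\), namely \(P(x) = x - \frac{\max \{\langle a, x\rangle - b,\, 0\}}{\Vert a\Vert ^2}\, a\) for \(a \ne 0\). A sketch (the function names are ours):

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Closed-form projection onto the half-space {x : <a, x> <= b} (a != 0)."""
    viol = a @ x - b
    if viol <= 0:          # x already lies in the half-space
        return np.asarray(x, float).copy()
    return x - (viol / (a @ a)) * a

def halfspace_C(c_xn, xi, xn):
    """Return (a, b) describing C^n = {x : c(x_n) + <xi, x - x_n> <= 0}."""
    return xi, xi @ xn - c_xn
```

For instance, with \(c(x) = \Vert x\Vert ^2 - 4\) and \(x_n = (3, 0)\), one has \(c(x_n) = 5\) and \(\xi = 2x_n = (6, 0)\), so \(C^n = \{x : 6x_1 \le 13\}\), which indeed contains the ball \(C\).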

First, we present a relaxed cyclic projection scheme with self-adaptive step size. Since the projections onto the half-spaces \(C_{i}^{n}\) and \(Q_{j}^{n}\) have closed forms, the following iteration scheme is easy to implement. We define the function \(p_{n}(x)\) as follows,

$$\begin{aligned} p_{n}(x) := \frac{1}{2}\sum _{j=1}^{r}\beta _j \left\| Ax - P_{Q_{j}^{n}}(Ax)\right\| ^{2}, \end{aligned}$$
(31)

where \(\beta _j >0\) for all \(1\le j\le r\). The gradient of \(p_{n}(x)\) is

$$\begin{aligned} \nabla p_{n}(x) = \sum _{j=1}^{r}\beta _j A^{*}\left( I - P_{Q_{j}^{n}}\right) (Ax). \end{aligned}$$
(32)

Theorem 4.1

Suppose the MSSFP is consistent (i.e., the solution set \(\varOmega \) is nonempty) and that conditions (A1) and (A2) hold. For any initial \(x_0\in H_1\), the relaxed cyclic iteration scheme is defined by

$$\begin{aligned} x_{n+1} = P_{C_{[n]}^{n}}\left( x_n - \lambda _n \nabla p_{n}(x_n)\right) ,\quad n\ge 0, \end{aligned}$$
(33)

where \([n] = (n\mod \ t) +1\), the mod function takes values in \(\{0, 1, \ldots , t-1\}\), and the step size \(\{\lambda _n\}\) is chosen as \( \lambda _n := \frac{\rho _n p_{n}(x_n)}{\Vert \nabla p_{n}(x_n) \Vert ^{2}}\) with \(0 < \underline{\rho } \le \rho _n \le \overline{\rho } < 4\). Then, the iterative sequence \(\{x_n\}\) converges weakly to a solution of the MSSFP.

Proof

Let \(p\in \varOmega \). Since \(P_{C_{i}^{n}}\) is nonexpansive for each \(i\in \{1, 2, \ldots , t\}\), arguing as in the proof of inequality (14) in Theorem 3.1, we have

$$\begin{aligned} \Vert x_{n+1} - p \Vert ^{2} \le \Vert x_n -p\Vert ^{2} - \rho _{n}(4-\rho _n) \frac{p_{n}^{2}(x_n)}{\Vert \nabla p_{n}(x_n)\Vert ^{2}}. \end{aligned}$$
(34)

Thus, the sequence \(\{x_n\}\) is a Fejér-monotone sequence, and \(\lim _{n\rightarrow \infty }\Vert x_n -p\Vert \) exists. It follows from (34) and \(0<\underline{\rho }\le \rho _n\le \overline{\rho } <4\) that

$$\begin{aligned} \sum _{n=0}^{\infty }\frac{p_{n}^{2}(x_n)}{\Vert \nabla p_{n}(x_n) \Vert ^{2}} < +\infty . \end{aligned}$$
(35)

Since \(\nabla p_{n}\) is Lipschitz continuous (with a constant independent of \(n\)) and \(\{x_n\}\) is bounded, \(\{\nabla p_{n}(x_n)\}\) is bounded. Hence, from (35), we can conclude that \( p_{n}(x_n)\rightarrow 0\) as \(n\rightarrow \infty \).

Since \(\partial q_{j}\) is bounded on bounded sets, there exists \(\eta > 0\) such that \(\Vert \eta ^{n}_{j}\Vert \le \eta \) for all \(j\) and \(n\). Noticing that \(P_{Q_{j}^{n}}Ax_n \in Q_{j}^{n}\), we get

$$\begin{aligned} q_{j}(Ax_{n}) \le \left\langle \eta ^{n}_{j},Ax_{n}-P_{Q^{n}_{j}}Ax_{n}\right\rangle \le \eta \left\| Ax_{n}-P_{Q^{n}_{j}}Ax_{n} \right\| \rightarrow 0,\quad \text { as } n\rightarrow \infty . \end{aligned}$$
(36)

We now prove that \(\omega _{w}(x_n)\subset \varOmega \). Since \(\{x_n\}\) is bounded, there exists a subsequence \(\{x_{n_{k}}\}\subset \{x_n\}\) such that \(x_{n_{k}}\rightharpoonup \hat{x}\). By the weak lower semicontinuity of the convex function \(q_{j}\) and (36), we obtain

$$\begin{aligned} q_{j}(A\hat{x})\le \liminf _{k\rightarrow \infty }q_{j}(Ax_{n_{k}})\le 0. \end{aligned}$$

Then, \(A\hat{x}\in Q_{j}\) for \(j=1,2, \ldots , r\), i.e., \(A\hat{x}\in \bigcap _{j=1}^{r}Q_{j}\).

Next, we show that \(\hat{x}\in \bigcap _{i=1}^{t}C_{i}\). Let \(u_{n_{k}} = x_{n_{k}} - \lambda _{n_{k}} \nabla p_{n_{k}}(x_{n_{k}})\); then

$$\begin{aligned} \Vert u_{n_{k}}-x_{n_{k}}\Vert =\lambda _{n_{k}}\Vert \nabla p_{n_{k}}(x_{n_{k}})\Vert \le \frac{4p_{n_{k}}(x_{n_{k}})}{\Vert \nabla p_{n_{k}}(x_{n_{k}})\Vert }\rightarrow 0,\quad \text {as}\,k\rightarrow \infty . \end{aligned}$$
(37)

Note that \(p\in C_i \subseteq C_{i}^{n}\) for any \(i=1, 2, \ldots , t\). With the help of Lemma 2.1 and the fact that \(\Vert u_{n_k} - p \Vert \le \Vert x_{n_k} -p \Vert \), we have

$$\begin{aligned} \Vert x_{n_{k}+1}-p\Vert ^{2}&\le \Vert u_{n_{k}}-p\Vert ^{2}- \left\| \left( I-P_{C^{n_{k}}_{[n_{k}]}}\right) (u_{n_{k}}) \right\| ^{2}\nonumber \\&\le \Vert x_{n_{k}}-p\Vert ^{2}- \left\| \left( I-P_{C^{n_{k}}_{[n_{k}]}} \right) (u_{n_{k}}) \right\| ^{2}. \end{aligned}$$
(38)

It turns out that

$$\begin{aligned} \left\| \left( I-P_{C^{n_{k}}_{[n_{k}]}}\right) (u_{n_{k}}) \right\| \rightarrow 0,\quad \text {as}\,k\rightarrow \infty . \end{aligned}$$
(39)

Since the family of convex sets \(\{C_{i}\}_{i=1}^{t}\) is finite, for any \(i=1,2,\ldots , t\), we can choose a subsequence \(\{n_{k_{l}}\}\subset \{n_k\}\) such that \([n_{k_{l}}]=i\); then we get

$$\begin{aligned} c_{i}(x_{n_{k_{l}}})&\le \left\langle \xi _{i}^{n_{k_{l}}}, x_{n_{k_{l}}}-P_{C_{i}^{n_{k_{l}}}}(u_{n_{k_{l}}}) \right\rangle \nonumber \\&\le \xi \left( \left\| x_{n_{k_{l}}}-u_{n_{k_{l}}}\right\| +\left\| u_{n_{k_{l}}}-P_{C_{i}^{n_{k_{l}}}}(u_{n_{k_{l}}})\right\| \right) \rightarrow 0,\quad \text {as}\,l\rightarrow \infty , \end{aligned}$$
(40)

where \(\xi > 0\) satisfies \(\Vert \xi _{i}^{n_{k_{l}}}\Vert \le \xi \). By the weak lower semicontinuity of the convex function \(c_{i}\), we obtain

$$\begin{aligned} c_{i}(\hat{x})\le \liminf _{l\rightarrow \infty }c_{i}(x_{n_{k_{l}}})\le 0. \end{aligned}$$
(41)

Consequently, \(\hat{x}\in C_{i}\), \(i=1,2,\ldots ,t\). Therefore, \(\hat{x} \in \varOmega \). Noticing that \(\lim _{n\rightarrow \infty }\Vert x_n - p\Vert \) exists for any \(p\in \varOmega \) and \(\omega _{w}(x_n)\subset \varOmega \), by Lemma 2.4 we conclude that the sequence \(\{x_n\}\) converges weakly to a solution of the MSSFP. This completes the proof. \(\square \)
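The relaxed scheme (33) can be sketched end to end on a small problem of our own design, where the sets are given only through convex functions and all projections are onto half-spaces; the problem data, \(\beta_1 = 1\), and \(\rho_n \equiv 2\) are illustrative assumptions, not data from this paper.

```python
import numpy as np

# Toy data for the relaxed scheme (ours, not from the paper): H1 = H2 = R^2,
# A = I, c_1(x) = ||x||^2 - 4, c_2(x) = x_1 + x_2 - 2,
# q_1(y) = ||y - (3, 0)||^2 - 4, beta_1 = 1.
A = np.eye(2)
c = [lambda x: x @ x - 4.0, lambda x: x[0] + x[1] - 2.0]
dc = [lambda x: 2.0 * x, lambda x: np.array([1.0, 1.0])]      # gradients of c_i
q = [lambda y: (y - np.array([3.0, 0.0])) @ (y - np.array([3.0, 0.0])) - 4.0]
dq = [lambda y: 2.0 * (y - np.array([3.0, 0.0]))]             # gradient of q_1

def proj_halfspace(x, a, b):
    """Closed-form projection onto {x : <a, x> <= b}."""
    viol = a @ x - b
    return np.asarray(x, float) if viol <= 0 else x - (viol / (a @ a)) * a

def relaxed_cyclic(x0, rho=2.0, iters=500, tol=1e-12):
    """Relaxed cyclic scheme (33); sets are indexed from 0 here."""
    x, t = np.asarray(x0, float), len(c)
    for n in range(iters):
        Ax = A @ x
        g, pn = np.zeros(2), 0.0
        for qj, dqj in zip(q, dq):          # build p_n(x_n) and grad p_n(x_n)
            eta = dqj(Ax)
            PAx = proj_halfspace(Ax, eta, eta @ Ax - qj(Ax))  # onto Q_j^n
            pn += 0.5 * np.linalg.norm(Ax - PAx) ** 2
            g = g + A.T @ (Ax - PAx)
        u = x if np.linalg.norm(g) < tol else x - (rho * pn / (g @ g)) * g
        xi = dc[n % t](x)
        x = proj_halfspace(u, xi, xi @ x - c[n % t](x))       # onto C_[n]^n
    return x

x_rel = relaxed_cyclic([-2.0, 3.0])
```

Starting from \(x_0 = (-2, 3)\), the iterates become feasible for all three constraints up to a small tolerance, while every step uses only the cheap half-space projections.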

Second, we propose a relaxed simultaneous iterative sequence with self-adaptive step size for solving the MSSFP and establish its convergence.

Theorem 4.2

Assume that the MSSFP is consistent (i.e., the solution set \(\varOmega \) is nonempty) and that conditions (A1) and (A2) hold. For any initial \(x_0\in H_1\), the relaxed simultaneous iterative sequence is defined by

$$\begin{aligned} x_{n+1} = \sum _{i=1}^{t}w_i P_{C_{i}^{n}}\left( x_n - \lambda _n \nabla p_{n}(x_n)\right) ,\quad \ n\ge 0, \end{aligned}$$
(42)

where \(\{w_i\}_{i=1}^{t}\subseteq [0,1]\) with \(\sum \nolimits _{i=1}^{t}w_i =1\), and the step size \(\{\lambda _n\}\) is chosen the same as in Theorem 4.1. Then, the iterative sequence \(\{x_n\}\) converges weakly to a solution of the MSSFP.

Proof

The main idea of the proof of Theorem 4.2 is similar to that of Theorem 4.1. For completeness, we give the details below. Let \(p\in \varOmega \). Replacing \(p(x_n)\) and \(\nabla p(x_n)\) with \(p_{n}(x_n)\) and \(\nabla p_{n}(x_n)\) in (22) of Theorem 3.2, respectively, and applying the same argument, we have

$$\begin{aligned} \Vert x_{n+1} - p \Vert ^{2}&\le \Vert x_n -p\Vert ^{2} - \rho _{n}(4-\rho _n) \frac{p_{n}^{2}(x_n)}{\Vert \nabla p_{n}(x_n)\Vert ^{2}}. \end{aligned}$$
(43)

Thus, the sequence \(\{x_n\}\) is Fejér-monotone with respect to \(\varOmega \), and \(\lim _{n\rightarrow \infty }\Vert x_n -p\Vert \) exists. It follows from (43) and \(0<\underline{\rho }\le \rho _n\le \overline{\rho } <4\) that

$$\begin{aligned} \sum _{n=0}^{\infty }\frac{p_{n}^{2}(x_n)}{\Vert \nabla p_{n}(x_n) \Vert ^{2}} < +\infty . \end{aligned}$$
(44)
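In detail, since \(\varphi (\rho )=\rho (4-\rho )\) is concave on \([\underline{\rho },\overline{\rho }]\subset (0,4)\), we have \(\rho _n(4-\rho _n)\ge \sigma :=\min \{\underline{\rho }(4-\underline{\rho }),\,\overline{\rho }(4-\overline{\rho })\}>0\). Summing (43) over \(n=0,1,\ldots ,N\) and telescoping gives

$$\begin{aligned} \sigma \sum _{n=0}^{N}\frac{p_{n}^{2}(x_n)}{\Vert \nabla p_{n}(x_n)\Vert ^{2}}\le \sum _{n=0}^{N}\left( \Vert x_n -p\Vert ^{2}-\Vert x_{n+1}-p\Vert ^{2}\right) \le \Vert x_0 -p\Vert ^{2}, \end{aligned}$$

and letting \(N\rightarrow \infty \) yields (44).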

Since \(\partial q_{j}\) is bounded on bounded sets, for any \(j = 1, 2, \ldots , r\), we have

$$\begin{aligned} q_{j}(Ax_{n}) \le \left\langle \eta ^{n}_{j},Ax_{n}-P_{Q^{n}_{j}}Ax_{n}\right\rangle \le \left\| \eta ^{n}_{j}\right\| \left\| Ax_{n}-P_{Q^{n}_{j}}Ax_{n} \right\| \rightarrow 0,\quad \text {as}\, n\rightarrow \infty . \end{aligned}$$
(45)

Since \(\{x_n\}\) is bounded, there exists a subsequence \(\{x_{n_{k}}\}\subset \{x_n\}\) such that \(x_{n_{k}}\rightharpoonup \hat{x}\). By the weak lower semicontinuity of the convex function \(q_{j}\) and (45), we obtain

$$\begin{aligned} q_{j}(A\hat{x})\le \liminf _{k\rightarrow \infty }q_{j}(Ax_{n_{k}})\le 0. \end{aligned}$$

Then \(A\hat{x}\in Q_{j}\) for \(j=1,2, \ldots , r\); that is, \(A\hat{x}\in \bigcap _{j=1}^{r}Q_{j}\).

Letting \(u_{n_{k}} = x_{n_{k}} - \lambda _{n_{k}} \nabla p_{n_{k}}(x_{n_{k}})\), we have

$$\begin{aligned} \Vert u_{n_{k}}-x_{n_{k}}\Vert =\lambda _{n_{k}}\Vert \nabla p_{n_{k}}(x_{n_{k}})\Vert \le \frac{4p_{n_{k}}(x_{n_{k}})}{\Vert \nabla p_{n_{k}}(x_{n_{k}})\Vert }\rightarrow 0,\quad \text {as}\,k\rightarrow \infty . \end{aligned}$$
(46)

Since \(\Vert u_{n_{k}}-p\Vert \le \Vert x_{n_{k}}-p\Vert \), by Lemma 2.1 we have

$$\begin{aligned} \sum _{i=1}^{t}w_i\left\| P_{C_{i}^{n_{k}}}(u_{n_{k}})-u_{n_{k}}\right\| ^{2}&\le \Vert u_{n_{k}}-p\Vert ^{2}-\sum _{i=1}^{t}w_i \left\| P_{C_{i}^{n_{k}}}(u_{n_{k}})-p \right\| ^{2}\nonumber \\&\le \Vert x_{n_{k}}-p\Vert ^{2}-\Vert x_{n_{k}+1}-p\Vert ^{2}. \end{aligned}$$
(47)

Since \(\lim _{n\rightarrow \infty }\Vert x_n -p\Vert \) exists, the right-hand side of (47) tends to zero; hence, for any \(i = 1, 2, \ldots , t\), we obtain

$$\begin{aligned} \left\| \left( I- P_{C_{i}^{n_{k}}}\right) (u_{n_{k}}) \right\| \rightarrow 0,\quad \text {as}\,k\rightarrow \infty . \end{aligned}$$
(48)

Notice that the subdifferential \(\partial c_i\) is bounded on bounded sets. By (46) and (48), we know that

$$\begin{aligned} c_{i}(x_{n_{k}})&\le \left\langle \xi _{i}^{n_{k}},x_{n_{k}}-P_{C_{i}^{n_{k}}}(u_{n_{k}})\right\rangle \nonumber \\&\le \left\| \xi _{i}^{n_{k}} \right\| \left( \Vert x_{n_{k}}-u_{n_{k}}\Vert + \left\| u_{n_{k}}-P_{C_{i}^{n_{k}}}(u_{n_{k}}) \right\| \right) \rightarrow 0,\quad \text {as}\, k\rightarrow \infty . \end{aligned}$$
(49)

By the weak lower semicontinuity of the convex function \(c_{i}\), we obtain

$$\begin{aligned} c_{i}(\hat{x})\le \liminf _{k\rightarrow \infty }c_{i}(x_{n_{k}})\le 0. \end{aligned}$$
(50)

Consequently, \(\hat{x}\in C_{i}\) for \(i=1,2,\ldots ,t\), and therefore \(\hat{x} \in \varOmega \). Applying Lemma 2.4 with \(K :=\varOmega \), we conclude that the full iterative sequence \(\{x_n\}\) converges weakly to a solution of the MSSFP. This completes the proof. \(\square \)
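To make scheme (42) concrete, the following Python sketch performs one relaxed simultaneous update. It is our illustration, not the paper's code: we assume the step size \(\lambda _n = \rho _n p_{n}(x_n)/\Vert \nabla p_{n}(x_n)\Vert ^{2}\) (which is consistent with estimates (43) and (46)), the hypothetical concretization \(p_{n}(x) = \frac{1}{2}\sum _{j}\beta _j\Vert (I-P_{Q_{j}^{n}})Ax\Vert ^{2}\) with positive weights \(\beta _j\), and the usual half-space relaxations \(C_{i}^{n}\), \(Q_{j}^{n}\) built from subgradients at the current iterate. All function names are ours.

```python
import numpy as np

def proj_halfspace(u, anchor, c_val, grad):
    """Project u onto the half-space {z : c_val + <grad, z - anchor> <= 0},
    i.e., the relaxation at `anchor` of the level set {z : c(z) <= 0}.
    Assumes grad != 0 whenever c_val > 0."""
    viol = c_val + grad @ (u - anchor)
    if viol <= 0:
        return u.copy()
    return u - (viol / (grad @ grad)) * grad

def relaxed_simultaneous_step(x, A, c_list, q_list, w, beta, rho=1.0):
    """One update of a scheme of type (42).  Each entry of c_list / q_list
    is a pair (value, gradient) of callables for the convex functions
    defining C_i = {c_i <= 0} and Q_j = {q_j <= 0}; w and beta are
    positive weights (our notation, not necessarily the paper's)."""
    y = A @ x
    # residuals of A x with respect to the half-space relaxations Q_j^n
    res = [y - proj_halfspace(y, y, qv(y), qg(y)) for qv, qg in q_list]
    p_val = 0.5 * sum(b * (ri @ ri) for b, ri in zip(beta, res))
    grad_p = sum(b * (A.T @ ri) for b, ri in zip(beta, res))
    g2 = grad_p @ grad_p
    if g2 > 0.0:
        u = x - (rho * p_val / g2) * grad_p   # self-adaptive step size
    else:
        u = x          # A x already satisfies the relaxed Q-constraints
    # weighted average of projections onto the relaxed sets C_i^n (anchored at x)
    return sum(wi * proj_halfspace(u, x, cv(x), cg(x))
               for wi, (cv, cg) in zip(w, c_list))
```

With \(\rho _n\in (0,4)\), inequality (43) guarantees the Fejér property \(\Vert x_{n+1}-p\Vert \le \Vert x_n -p\Vert \) for every solution \(p\), which is easy to observe numerically with this sketch.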

5 Numerical Experiments

In this section, we present some preliminary numerical results that show the efficiency of our proposed methods for solving the MSSFP. All the codes are written in MATLAB and run on a Lenovo personal computer with a Pentium(R) Dual-Core CPU @ 2.8 GHz and 2.00 GB of RAM. For convenience, we use \({\mathbf {e}}_{0}\) and \({\mathbf {e}}_{1}\) to denote the \(n\)-dimensional real vectors with all elements equal to 0 and 1, respectively; that is, \({\mathbf {e}}_{0}=(0, 0, \ldots , 0)^\mathrm{T}\) and \({\mathbf {e}}_{1}=(1, 1, \ldots , 1)^\mathrm{T}\). randn is the MATLAB command that generates normally distributed random numbers.

We apply our proposed iteration methods to solve the MSSFP and compare them with the methods proposed by Zhao and Yang [13]. Throughout the computational experiments, the parameter \(\rho _n =1\) was used in all our iteration schemes and \(w_n =1\) in the schemes of Zhao and Yang [13]. To measure the performance of these iterative methods, we report the number of iterations at which the objective function \(f(x)\) in (3) satisfies \(f(x) \le \delta \) for a given small \(\delta \).

First, we compare the cyclic iterative sequence (11) and the simultaneous iterative sequence (21) with Zhao and Yang’s [13] method on Example 5.1. The numerical results are reported in Table 1.

Table 1 Comparing cyclic iterative sequence (11) and simultaneous iterative sequence (21) with Zhao and Yang’s [13] method to solve Example 5.1 with the problem size of \(t=r=20, m=60, n=80\)

Example 5.1

The MSSFP with \(C_{i} = \{x\in {\mathbb {R}}^{n} | \Vert x-d_i\Vert \le r_i \}\), \(i=1, 2, \ldots , t\), and \(Q_{j} = \{y\in {\mathbb {R}}^{m} | L_j \le y \le U_j \}\), \(j=1, 2, \ldots , r\). Let \(A=(a_{ij})_{m\times n}\) with \(a_{ij}\in [0,1]\), where \(d_{i}\in [{\mathbf {e}}_{0}, 10{\mathbf {e}}_{1}]\), \(r_{i}\in [40, 60]\), \(L_{j}\in [10{\mathbf {e}}_{1}, 40{\mathbf {e}}_{1}]\), and \(U_{j}\in [50{\mathbf {e}}_{1}, 100{\mathbf {e}}_{1}]\) are all generated randomly.
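The sets in Example 5.1 are convenient precisely because their projections have closed forms, so the non-relaxed schemes (11) and (21) can be applied directly. A minimal sketch of these two projections (ours, not the paper's MATLAB code):

```python
import numpy as np

def proj_ball(x, center, radius):
    """Euclidean projection onto the ball {x : ||x - center|| <= radius}."""
    d = x - center
    nd = np.linalg.norm(d)
    return x.copy() if nd <= radius else center + (radius / nd) * d

def proj_box(y, low, up):
    """Euclidean projection onto the box {y : low <= y <= up}, componentwise."""
    return np.clip(y, low, up)
```

`proj_ball` either leaves an interior point unchanged or rescales it onto the sphere; `proj_box` is a componentwise clip, which is why box constraints on \(Q_j\) cost only \(O(m)\) per projection.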

Second, we compare the relaxed cyclic projection iterative sequence (33) and the relaxed simultaneous projection iterative sequence (42) with the relaxed iterative algorithm of Zhao and Yang [13]. The numerical results are reported in Table 2.

Table 2 Comparing relaxed cyclic iterative sequence (33) and relaxed simultaneous iterative sequence (42) with Zhao and Yang’s [13] relaxed method to solve Example 5.2 with the problem size of \(t=r=30, m=50, n=60\)

Example 5.2

The MSSFP with \(C_{i} = \{x\in {\mathbb {R}}^{n} | \Vert x-d_i\Vert \le r_i \}, i=1, 2, \ldots , t\), and \(Q_{j} = \{y\in {\mathbb {R}}^{m} | \frac{1}{2}y^{T}B_j y + b_{j}^{T}y + c_j \le 0\}, j=1, 2, \ldots , r\), where \(d_{i}\in (6{\mathbf {e}}_{1}, 16{\mathbf {e}}_{1}), r_{i}\in (100, 120), b_{j}\in (-30{\mathbf {e}}_{1}, -20{\mathbf {e}}_{1}), c_{j}\in (-60, -50)\), and all elements of the matrix \(B_{j}\) lie in the interval \((2, 10)\); all of these are generated randomly. The matrix \(A\) is the same as in Example 5.1.
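Unlike the boxes in Example 5.1, the quadratic sets \(Q_j\) here admit no closed-form projection, which is what makes the relaxed schemes relevant: each iteration projects onto a half-space relaxation of \(Q_j\) built from the gradient at the current point. A sketch of that relaxed projection (assuming \(B_j\) is symmetric positive semidefinite so that \(q_j\) is convex; the function name is ours):

```python
import numpy as np

def relaxed_quad_proj(y, B, b, c):
    """Project y onto the half-space relaxation, at y, of
    Q = {z : 0.5 z^T B z + b^T z + c <= 0} with B symmetric PSD.
    The relaxation is {z : q(y) + <grad q(y), z - y> <= 0}."""
    q = 0.5 * y @ (B @ y) + b @ y + c
    if q <= 0:
        return y.copy()      # y is already in Q, hence in the relaxation
    g = B @ y + b            # gradient of q at y (nonzero when q(y) > 0, B PSD)
    return y - (q / (g @ g)) * g
```

A point already in \(Q\) is returned unchanged; a point outside is moved onto the bounding hyperplane of the relaxation, which in general still lies outside \(Q\) itself. Feasibility is therefore reached only in the limit, as the convergence proofs above reflect.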

We can see from Tables 1 and 2 that the (relaxed) cyclic iterative sequence converges faster than the other two iterative methods. For a relatively large tolerance \(\delta \) (e.g., \(\delta = 10^{-5}\)), the (relaxed) simultaneous iterative sequence converges more slowly than the method of Zhao and Yang [13]. However, if a more accurate solution is required, the iterative method of Zhao and Yang [13] needs more iterations than the (relaxed) simultaneous iterative sequence.

6 Conclusions

The multiple-sets split feasibility problem includes the two-sets split feasibility problem as a special case. Many iterative projection methods have been developed to solve these problems. However, to employ these iteration schemes, one needs a priori knowledge of the norm of the bounded linear operator. In this paper, we introduced a cyclic iterative sequence (11) and a simultaneous iterative sequence (21) for solving the MSSFP with a self-adaptive step size that requires no prior information about the operator norm. We also proposed a relaxed cyclic projection scheme (33) and a relaxed simultaneous projection scheme (42). Theoretical convergence results are established in the infinite-dimensional Hilbert space setting. Numerical experiments indicated that our iterative methods perform better than the methods of Zhao and Yang [13].