1 Introduction

Let C and Q be nonempty closed convex subsets of Hilbert spaces \(H_1\) and \(H_2\), respectively. Let \(A:H_1\rightarrow H_2\) be a bounded linear operator with its adjoint operator \(A^*\). The split feasibility problem (SFP) is formulated as finding a point \(x^*\in H_1\) satisfying

$$\begin{aligned} x^*\in C\ \text {and} \ \ Ax^*\in Q. \end{aligned}$$
(1)

The SFP was first proposed by Censor and Elfving [6] in 1994 for modeling inverse problems which arise in phase retrieval and medical image reconstruction [4]. It has since been found that the SFP can also be used to model problems in various application domains, for instance, intensity-modulated radiation therapy and gene regulatory network inference [5, 7, 8, 17].

For solving the SFP, Byrne [4] introduced the well-known CQ algorithm, which involves the computation of the projections \(P_C\) and \(P_Q\) onto C and Q. In addition, the step size of the CQ algorithm depends on the operator norm \(\Vert A\Vert \), which is not easy to compute (or even to estimate). In 2004, Yang [19] generalized the CQ method to the so-called relaxed CQ algorithm, which requires computing metric projections onto half-spaces (relaxed sets) \(C^n\) and \(Q^n\). Since \(P_{C^n}\) and \(P_{Q^n}\) are easily calculated, this method appears to be very practical. To avoid computing the norm of A (which is both complicated and costly), in 2012, López et al. [11] introduced a relaxed CQ algorithm for solving the SFP with a new adaptive way of determining the sequence of step sizes \(\tau _n\), defined as follows:

$$\begin{aligned} \tau _n:= \frac{\rho _nf_n(x_n)}{\Vert \nabla f_n(x_n)\Vert ^2}, \end{aligned}$$

where \(\rho _n\in (0, 4)\) for all \(n \ge 1\) and \(\liminf _{n\rightarrow \infty } \rho _n(4- \rho _n)>0.\) It was proved that the sequence \(\{x_n\}\) generated with the step sizes \(\tau _n\) converges weakly to a solution of the SFP.
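For intuition, this adaptive step size can be sketched numerically in finite dimensions. The sketch below assumes the standard choice in [11], \(f_n(x)=\frac{1}{2}\Vert (I-P_{Q})Ax\Vert ^2\) with \(\nabla f_n(x)=A^*(I-P_{Q})Ax\) (for brevity we use fixed sets rather than relaxed ones); the operator, sets, and data are illustrative only.

```python
import numpy as np

def proj_box(y, lo, hi):
    """Metric projection onto the box [lo, hi]^m (componentwise clipping)."""
    return np.clip(y, lo, hi)

def proj_ball(y, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = y - center
    n = np.linalg.norm(d)
    return y if n <= radius else center + radius * d / n

# Toy SFP in R^3: C = B(0, 2), Q = [-1, 1]^2, A a random 2x3 operator.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
x = rng.standard_normal(3)

rho = 1.0  # rho_n, kept inside (0, 4)
for _ in range(5000):
    r = A @ x - proj_box(A @ x, -1.0, 1.0)   # (I - P_Q) A x
    g = A.T @ r                              # gradient of f(x) = 0.5 ||(I - P_Q) A x||^2
    gg = np.dot(g, g)
    if gg < 1e-24:
        break                                # residual vanished: x solves the SFP
    tau = rho * (0.5 * np.dot(r, r)) / gg    # adaptive step; no knowledge of ||A|| needed
    x = proj_ball(x - tau * g, np.zeros(3), 2.0)
```

Note that \(\tau_n\) is computed from quantities already available in the iteration, which is what makes the method free of norm estimates.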

Some generalizations of the SFP have been studied by many authors. Recently, Reich and Tuyen [12] introduced and studied the split feasibility problem with multiple output sets (SFPMOS). Let \(H, H_i, i = 1, 2,...,N,\) be real Hilbert spaces and let \(A_i: H\rightarrow H_i, i = 1, 2,...,N,\) be bounded linear operators. Let C and \(Q_i, i = 1, 2,...,N,\) be nonempty, closed, and convex subsets of H and \(H_i, i = 1, 2,...,N\), respectively. The SFPMOS is to find a point \(x^*\) such that

$$\begin{aligned} x^*\in \Gamma := C \cap (\cap _{i=1}^NA_i^{-1}(Q_i))\ne \emptyset . \end{aligned}$$
(2)

The solution set of problem (SFPMOS) is denoted by \(\Gamma \).

Furthermore, Reich and Tuyen [12] proposed the following two algorithms for solving the SFPMOS by extending the CQ algorithm:

$$\begin{aligned}{} & {} x_{n+1}= P_C(x_n-\lambda _n \sum _{i=1}^N A_i^*(I-P_{Q_i})A_ix_n), \end{aligned}$$
(3)
$$\begin{aligned}{} & {} x_{n+1}= \alpha _nf(x_n)+(1-\alpha _n)P_C(x_n-\lambda _n\sum _{i=1}^NA_i^*(I-P_{Q_i})A_ix_n), \end{aligned}$$
(4)

where \(f: C\rightarrow C\) is a strict contraction mapping, \(\{\lambda _n\} \subset (0,\infty )\) and \(\{\alpha _n\}\subset (0, 1).\) It was proved that if the sequence \(\{\lambda _n\}\) satisfies the condition

$$\begin{aligned} 0<a\le \lambda _n \le b<\frac{2}{N\max _{i=1,2,...N}\{\Vert A_i\Vert ^2\}}, \end{aligned}$$

for all \(n\ge 1,\) then the sequences \(\{x_n\}\) generated by (3) and (4) converge weakly and strongly, respectively, to a solution of the SFPMOS.
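To make the role of the step-size bound concrete, the following sketch implements iteration (3) on synthetic data (the sets, operators, and dimensions are hypothetical; in infinite dimensions only weak convergence is guaranteed). The point to notice is that the admissible range of \(\lambda_n\) requires the spectral norms \(\Vert A_i\Vert\) explicitly.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dim = 3, 4
As = [rng.standard_normal((3, dim)) for _ in range(N)]  # hypothetical A_i

def proj_C(y):          # P_C for C = [-1, 1]^dim
    return np.clip(y, -1.0, 1.0)

def proj_Q(y):          # P_{Q_i} for Q_i = the unit ball (so x = 0 is feasible)
    n = np.linalg.norm(y)
    return y if n <= 1.0 else y / n

# Fixed step obeying 0 < lambda < 2 / (N max_i ||A_i||^2): the spectral
# norms ||A_i|| must be computed (or estimated) in advance.
lam = 1.0 / (N * max(np.linalg.norm(A, 2) ** 2 for A in As))

x = rng.standard_normal(dim)
x0_norm = np.linalg.norm(x)
for _ in range(10000):
    g = sum(A.T @ (A @ x - proj_Q(A @ x)) for A in As)
    x = proj_C(x - lam * g)

residual = sum(np.linalg.norm(A @ x - proj_Q(A @ x)) for A in As)
```

Since \(0\) solves this toy instance, the iterates are Fejér monotone with respect to it, and the aggregate residual decays toward zero.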

However, we note that each iteration step of Algorithms (3) and (4) requires computing the metric projections onto the sets C and \(Q_i\), as well as calculating or estimating the operator norms \(\Vert A_i\Vert \). In general, these are not easy tasks in practice.

Next, we consider several self-adaptive projection methods that avoid these difficulties. In 2019, Yao et al. [14] presented two self-adaptive iterative algorithms for solving the multiple-sets split feasibility problem (MSSFP) and proved weak and strong convergence results. The MSSFP is to find a point \(x^*\) such that

$$\begin{aligned} x^*\in \bigcap _{i=1}^s C_i\ \text {and} \ \ Ax^*\in \bigcap _{j=1}^t Q_j, \end{aligned}$$
(5)

where \(\{C_i\}_{i=1}^s\) and \(\{Q_j\}_{j=1}^t\) are two finite families of nonempty, closed and convex subsets of \(H_1\) and \(H_2\), respectively.

Yao et al. [14] proposed the following self-adaptive method with selection technique:

$$\begin{aligned} \left\{ \begin{array}{ll} z_n=P_{C_{i_n}}x_n,\\ y_n=A^*(I-P_{Q_{j_n}})Ax_n,\\ x_{n+1}=x_n-\tau _n(x_n+y_n-z_n), \end{array} \right. \end{aligned}$$
(6)

where

$$\begin{aligned} i_n\in \arg \max _{i\in I_1}\Vert x_n-P_{C_i}x_n\Vert ,\ \ I_1=\{1,2,...,s\}, \\ j_n\in \arg \max _{j\in I_2}\Vert Ax_n-P_{Q_j}Ax_n\Vert ,\ \ I_2=\{1,2,...,t\}, \\ \tau _n=\lambda _n\frac{\Vert x_n-z_n\Vert ^2+\Vert y_n\Vert ^2}{2\Vert x_n+y_n-z_n\Vert ^2}, \end{aligned}$$

in which \(\lambda _n>0.\)
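A numerical sketch of the selection technique (6) is given below. All data are illustrative; we normalize A so that \(\Vert A\Vert \le 1\), a simplifying assumption under which the unit relaxation parameter \(\lambda_n=1\) is safe (the conditions in [14] are more general).

```python
import numpy as np

rng = np.random.default_rng(2)
s, t, dim = 3, 2, 4
# Hypothetical MSSFP data: C_i and Q_j are unit balls with centers near the
# origin, so x = 0 is feasible. A is normalized so that ||A|| <= 1.
A = rng.standard_normal((3, dim))
A /= np.linalg.norm(A, 2)
Cc = [0.1 * rng.standard_normal(dim) for _ in range(s)]  # centers of C_i
Qc = [0.1 * rng.standard_normal(3) for _ in range(t)]    # centers of Q_j

def proj_ball(y, c, r=1.0):
    d = y - c
    n = np.linalg.norm(d)
    return y if n <= r else c + r * d / n

x = rng.standard_normal(dim)
lam = 1.0                                                # lambda_n
for _ in range(20000):
    # selection technique: only the most violated C_i and Q_j are used
    i_n = max(range(s), key=lambda i: np.linalg.norm(x - proj_ball(x, Cc[i])))
    j_n = max(range(t), key=lambda j: np.linalg.norm(A @ x - proj_ball(A @ x, Qc[j])))
    z = proj_ball(x, Cc[i_n])                            # z_n
    y = A.T @ (A @ x - proj_ball(A @ x, Qc[j_n]))        # y_n
    d = x + y - z
    dd = np.dot(d, d)
    if dd < 1e-24:
        break                                            # x_n already solves the MSSFP
    tau = lam * (np.dot(x - z, x - z) + np.dot(y, y)) / (2.0 * dd)
    x = x - tau * d
```

Only one projection onto a \(C_i\) and one onto a \(Q_j\) are evaluated per iteration, which is the computational advantage of the selection rule.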

In 2022, Reich et al. [14] introduced a split inverse problem called the split common fixed point problem with multiple output sets. To solve this problem, they proposed a self-adaptive algorithm based on the viscosity approximation method and proved a strong convergence theorem; see [14] for more details.

Very recently, Taddele et al. [16] proposed two new self-adaptive relaxed CQ algorithms for solving the SFPMOS:

$$\begin{aligned} x_{n+1}=x_n-\rho _1^n(I-P_{C^n})x_n-\tau _n\sum _{i=1}^N\vartheta _iA_i^*(I-P_{Q_i^n})A_ix_n, \\ x_{n+1}=\alpha _nu+(1-\alpha _n)(x_n-\rho _1^n(I-P_{C^n})x_n-\tau _n\sum _{i=1}^N\vartheta _iA_i^*(I-P_{Q_i^n})A_ix_n), \end{aligned}$$

where \(\tau _n:=\frac{\rho _2^n\sum _{i=1}^N\vartheta _i\Vert (I-P_{Q_i^n})A_ix_n\Vert ^2}{\bar{\tau _n}^2}\) and \(\bar{\tau _n}:=\max \{\Vert \sum _{i=1}^N\vartheta _iA_i^*(I-P_{Q_i^n})A_ix_n\Vert ,\beta \}.\) They established weak and strong convergence theorems for the proposed algorithms. We note that although the step sizes of these algorithms are adaptive, each iteration must evaluate all N projections \(P_{Q_i^n}\) appearing in the sums above, which can be computationally expensive.
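The structure of this step size is easy to see in isolation. The sketch below evaluates \(\tau_n\) and the safeguard \(\bar{\tau}_n\) for made-up residual data (all arrays and weights are illustrative; \(\beta>0\) only guards the division when the aggregated gradient vanishes).

```python
import numpy as np

rng = np.random.default_rng(3)
N = 3
theta = np.array([0.2, 0.3, 0.5])       # weights vartheta_i (illustrative)
rho2, beta = 1.0, 1e-8                  # rho_2^n in (0, 4) and beta > 0

# Made-up residuals (I - P_{Q_i^n}) A_i x_n and their pullbacks A_i^*(...).
resid = [rng.standard_normal(4) for _ in range(N)]
pulled = [rng.standard_normal(5) for _ in range(N)]

num = rho2 * sum(w * np.dot(r, r) for w, r in zip(theta, resid))
g = sum(w * v for w, v in zip(theta, pulled))    # aggregated gradient term
tau_bar = max(np.linalg.norm(g), beta)           # safeguard: never divide by ~0
tau = num / tau_bar ** 2
```

Every term in the numerator requires one projection \(P_{Q_i^n}\), which is the cost the present paper's selection technique is designed to avoid.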

In addition, variational inequalities are crucial for both optimization theory and its applications. We therefore also turn our attention to the variational inequality problem over the solution set of the SFPMOS (P2, for short). Let \(F: H \rightarrow H\) be a given operator and suppose that the solution set \(\Gamma \) of the SFPMOS is nonempty. Then P2 is described as follows:

$$\begin{aligned} \text {Find\ a\ point}\ x^*\in \Gamma \ \ \text {such\ that}\ \langle Fx^*,x-x^*\rangle \ge 0,\ \forall \ x\in \Gamma . \end{aligned}$$
(7)

The solution set of problem (P2) is denoted by VIP(F, \(\Gamma \)).

Motivated and inspired by [14] and [20], we further improve Reich and Tuyen's algorithm (4) for solving the SFPMOS (2). In particular, we develop and improve some previously discussed results in the following ways:

1. In contrast to algorithm (4), we propose two new relaxed CQ algorithms with adaptive step sizes to solve the SFPMOS. It is worth noting that the parameters of these two algorithms are chosen without prior knowledge of the norms of the transfer mappings. Our proposed algorithms also employ selection techniques that reduce the computational cost of the projections.

2. Building on the two algorithms in [14] and [20], we compute projections onto half-spaces instead of onto general closed convex sets. Strong convergence of the proposed Halpern-type algorithms is proved.

3. As a generalisation of our proposed algorithms, we propose two new adaptive algorithms for solving the variational inequality problem over the solution set of the SFPMOS (7), which, to the best of our knowledge, has not been addressed before.

The paper is organized as follows: In Sect. 2, we recall some existing results, lemmas, and definitions for subsequent use. In Sect. 3, we present algorithms for solving the SFPMOS and, as a generalisation, problem P2, and analyse the convergence of the proposed algorithms. To demonstrate that our proposed approach is implementable, several numerical examples are provided in Sect. 4.

2 Preliminaries

Let H be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and induced norm \(\Vert \cdot \Vert \). Let the symbols “\(\rightharpoonup \)” and “\(\rightarrow \)” denote weak and strong convergence, respectively. For any sequence \(\{x_n\}\subset H\), \(\omega _w(x_n) =\{x\in H:\exists \{x_{n_k}\}\subset \{x_n\}\) such that \(x_{n_k}\rightharpoonup x\}\) denotes the weak limit set of \(\{x_n\}.\)

Definition 1

[2] Let C be a nonempty, closed and convex subset of H. For every element \(x\in H,\) there exists a unique nearest point in C, denoted by \(P_Cx\), such that

$$\begin{aligned} \Vert x-P_Cx\Vert =\min \{\Vert x-y\Vert :y\in C\}. \end{aligned}$$

The operator \(P_C\) is called a metric projection from H onto C.

It has the following well-known properties which are used in the subsequent convergence analysis.

Lemma 2

[10] Let \(C\subset H\) be a nonempty, closed and convex set. Then, the following assertions hold for any \(x, y\in H\) and \(z\in C:\)

  1. (i)

    \( \langle x-P_Cx,z-P_Cx \rangle \le 0;\)

  2. (ii)

    \( \Vert P_Cx-P_Cy\Vert \le \Vert x-y\Vert ;\)

  3. (iii)

    \(\Vert P_Cx-P_Cy\Vert ^2 \le \langle P_Cx-P_Cy, x-y\rangle ;\)

  4. (iv)

    \(\Vert P_Cx-z\Vert ^2 \le \Vert x-z\Vert ^2-\Vert x-P_Cx\Vert ^2.\)
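These properties can be checked numerically for a concrete projection, say onto a closed ball (the set and the sample points below are arbitrary choices of ours):

```python
import numpy as np

def proj_ball(y, c, r):
    """Metric projection onto the closed ball B(c, r)."""
    d = y - c
    n = np.linalg.norm(d)
    return y if n <= r else c + r * d / n

rng = np.random.default_rng(4)
c0, r0 = np.zeros(3), 1.0
x, y = 3.0 * rng.standard_normal(3), 3.0 * rng.standard_normal(3)
z = proj_ball(2.0 * rng.standard_normal(3), c0, r0)    # an arbitrary point of C
Px, Py = proj_ball(x, c0, r0), proj_ball(y, c0, r0)

assert np.dot(x - Px, z - Px) <= 1e-12                                   # (i)
assert np.linalg.norm(Px - Py) <= np.linalg.norm(x - y) + 1e-12          # (ii)
assert np.linalg.norm(Px - Py) ** 2 <= np.dot(Px - Py, x - y) + 1e-12    # (iii)
assert (np.linalg.norm(Px - z) ** 2
        <= np.linalg.norm(x - z) ** 2 - np.linalg.norm(x - Px) ** 2 + 1e-12)  # (iv)
```

Property (iii) is the firm nonexpansiveness of \(P_C\), which is stronger than the plain nonexpansiveness in (ii).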

Next, we give some inequalities required for proving convergence analysis. For each \(x,y\in H,\) we have

$$\begin{aligned} \Vert x+y\Vert ^2\le \Vert x\Vert ^2+2\langle y,x+y\rangle . \end{aligned}$$
(8)

Lemma 3

(Peter-Paul Inequality) If a and b are non-negative real numbers, then

$$\begin{aligned} ab\le \frac{a^2}{2\epsilon }+\frac{\epsilon b^2}{2},\ \forall \ \epsilon > 0. \end{aligned}$$

Definition 4

[2] Let \(f: H\rightarrow (-\infty , +\infty ]\) be a proper function, and let \(x\in H\). Then f is called

  1. (i)

    Lower semicontinuous at x if \(x_n\rightarrow x\) implies \(f(x)\le \liminf _{n\rightarrow \infty } f (x_n)\);

  2. (ii)

    Weakly lower semicontinuous at x if \(x_n\rightharpoonup x\) implies \(f(x)\le \liminf _{n\rightarrow \infty } f(x_n)\).

Definition 5

[2] Let \(f: H\rightarrow (-\infty , +\infty ]\) be proper. The subdifferential of f is the set-valued operator

$$\begin{aligned} \partial f: H\rightarrow 2^H: x\mapsto \{u\in H|\langle y-x, u\rangle + f (x) \le f (y), \forall y\in H\}. \end{aligned}$$

Let \(x\in H\). Then f is subdifferentiable at x if \(\partial f(x)\ne \emptyset \); the elements of \(\partial f(x)\) are the subgradients of f at x.
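As a concrete example (ours, not from the text): for the convex function \(f(x)=\Vert x\Vert _1\) on \(\mathbb {R}^5\), the vector \(\mathrm {sign}(x)\) is a subgradient at x (any value in \([-1,1]\) may be used in coordinates where \(x_i=0\)). The sketch verifies the subdifferential inequality at random test points.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(5)      # almost surely has no zero entries
g = np.sign(x)                  # a subgradient of f(y) = ||y||_1 at x

# Subgradient inequality: f(y) >= f(x) + <g, y - x> for all y.
ok = all(
    np.abs(y).sum() >= np.abs(x).sum() + np.dot(g, y - x) - 1e-12
    for y in rng.standard_normal((200, 5))
)
```

The same inequality, applied to the functions c and \(q_i\) of Sect. 3, is what makes the relaxed half-spaces there contain the original level sets.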

Lemma 6

[11] A mapping \(T: H\rightarrow H\) is said to be:

  1. (i)

    L-Lipschitz continuous if there exists a constant \(L > 0\) such that

    $$\begin{aligned} \Vert Tx-Ty\Vert \le L\Vert x-y\Vert ,\ \forall \ x,\ y \in H; \end{aligned}$$
  2. (ii)

    \(\kappa \)-strongly monotone if there exists a constant \(\kappa >0\) such that

    $$\begin{aligned} \langle Tx-Ty, x-y \rangle \ge \kappa \Vert x-y\Vert ^2,\ \forall \ x,\ y \in H. \end{aligned}$$

Lemma 7

[15] Let \(\{\Upsilon _n\}\) be a sequence of nonnegative real numbers, \(\{b_n\}\) a sequence of real numbers, and \(\{\alpha _n\}\) a sequence in the open interval (0, 1) such that \( \sum _{n=1}^\infty \alpha _n=\infty .\) Assume that

$$\begin{aligned} \Upsilon _{n+1}\le (1-\alpha _n)\Upsilon _n+\alpha _n b_n, \ \ \forall \ n\ \ge \ 1. \end{aligned}$$

If \(\limsup _{k\rightarrow \infty } b_{n_k}\le 0\) for every subsequence \(\{\Upsilon _{n_k}\}\) of \(\{\Upsilon _n\}\) satisfying \(\liminf _{k\rightarrow \infty }(\Upsilon _{n_{k+1}}-\Upsilon _{n_k})\ge 0,\) then \(\lim _{n\rightarrow \infty } \Upsilon _n=0\).

3 Results

In this section, we consider a general case of the SFPMOS, in which C and \(Q_i\) are level sets of convex and subdifferentiable functions \(c: H\rightarrow {\mathbb {R}}\) and \(q_i: H_i\rightarrow {\mathbb {R}}\) defined as follows:

$$\begin{aligned} C=\{x\in H: c(x)\le 0\}\ \ \text {and}\ \ Q_i=\{y\in H_i: q_i(y)\le 0\}. \end{aligned}$$

Let \(\partial c\) and \(\partial q_i\) denote the subdifferentials of c and \(q_i\), respectively. Being convex and lower semicontinuous, c and \(q_i\) are also weakly lower semicontinuous. We define the half-spaces \(C^n\) and \(Q_i^n\ (i=1,2,...,N)\) containing C and \(Q_i\), respectively:

$$\begin{aligned} \begin{aligned}&C^n:= \{x\in H: c(x_n)+\langle \xi _n, x-x_n\rangle \le 0\},\ \ \ \ \xi _n \in \partial c(x_n), \\ {}&Q_i^n:=\{y\in H_i: q_i(A_ix_n)+\langle \eta ^n_i, y-A_i x_n\rangle \le 0\},\ \ \ \ \eta ^n_i \in \partial q_i(A_ix_n). \end{aligned} \end{aligned}$$
(9)

By the definition of the subgradient, it is easy to see that \(C \subseteq C^n\) and \(Q_i \subseteq Q_i^n\) ([9]).
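The practical point of (9) is that projections onto these half-spaces are available in closed form: for \(\xi _n\ne 0\), \(P_{C^n}x = x-\frac{\max \{0,\ c(x_n)+\langle \xi _n, x-x_n\rangle \}}{\Vert \xi _n\Vert ^2}\,\xi _n\) (and \(C^n=H\) when \(\xi _n=0\) and \(c(x_n)\le 0\)). A sketch with the illustrative choice \(c(x)=\Vert x\Vert ^2-1\):

```python
import numpy as np

def proj_halfspace(x, xn, c_xn, xi):
    """P_{C^n} for C^n = {x : c(x_n) + <xi, x - x_n> <= 0}, with xi in dc(x_n)."""
    viol = c_xn + np.dot(xi, x - xn)
    if viol <= 0:
        return x.copy()                       # x already lies in the half-space
    return x - (viol / np.dot(xi, xi)) * xi   # closed-form projection

# c(x) = ||x||^2 - 1, so C is the closed unit ball and dc(x) = {2x}.
xn = np.array([2.0, 0.0])                     # current (infeasible) iterate
c_xn = np.dot(xn, xn) - 1.0                   # c(x_n) = 3 > 0
xi = 2.0 * xn                                 # the unique subgradient at x_n
p = proj_halfspace(xn, xn, c_xn, xi)          # -> array([1.25, 0.])

# C is contained in C^n: check with the ball point (1, 0).
inside = c_xn + np.dot(xi, np.array([1.0, 0.0]) - xn) <= 0.0
```

The projected point lands exactly on the bounding hyperplane of \(C^n\), which is all the relaxed algorithms below ever need to compute.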

Now, we introduce two self-adaptive relaxed CQ algorithms for solving the SFPMOS (2) in the case where the SFPMOS is consistent (i.e. \(\Gamma \ne \emptyset \)).

3.1 Two iterative algorithms for solving SFPMOS

Algorithm 1
figure a

Self-adaptive relaxed CQ algorithm I for SFPMOS
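Algorithm 1 itself appears as a figure. The following Python sketch is our reading of it, inferred from the case analysis (13)-(15) in the proof of Theorem 8: the larger of the two projection residuals is attacked at each step, and the Halpern anchor u drives strong convergence. All sets, operators, and parameter choices are illustrative, and exact projections are used in place of the relaxations \(C^n, Q_i^n\) for brevity.

```python
import numpy as np

rng = np.random.default_rng(6)
N, dim = 2, 3
As = [rng.standard_normal((2, dim)) / 3.0 for _ in range(N)]  # illustrative A_i

def proj_unit_ball(y):
    n = np.linalg.norm(y)
    return y if n <= 1.0 else y / n

u = rng.standard_normal(dim)          # Halpern anchor; the limit is P_Gamma(u)
x = rng.standard_normal(dim)
lam, rho = 0.5, 0.5                   # lambda_n, rho_n in (0, 1)
for n in range(1, 10001):
    alpha = 1.0 / (n + 1)             # alpha_n -> 0 with sum alpha_n = +infinity
    rC = x - proj_unit_ball(x)                        # (I - P_C) x_n
    rQ = [A @ x - proj_unit_ball(A @ x) for A in As]  # (I - P_{Q_i}) A_i x_n
    i_n = max(range(N), key=lambda i: np.linalg.norm(rQ[i]))
    if np.linalg.norm(rC) >= np.linalg.norm(rQ[i_n]):
        y = x - lam * rC              # Case 1: attack the C-residual
    else:
        g = As[i_n].T @ rQ[i_n]
        tau = rho * np.dot(rQ[i_n], rQ[i_n]) / np.dot(g, g)
        y = x - tau * g               # Case 2: attack the Q_{i_n}-residual
    x = alpha * u + (1 - alpha) * y   # Halpern step toward the anchor u
```

Only the single most violated constraint among \(C, Q_1,\dots ,Q_N\) is processed per iteration, and no operator norm enters the step sizes.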

Theorem 8

Suppose that the sequences \(\{\lambda _n\}\), \(\{\rho _n\}\) and \(\{\alpha _n\}\) in (0, 1) satisfy the following conditions:

  1. (i)

    \(0<a\le \lambda _n \le b<1\), \(0<c\le \rho _n \le d < 1;\)

  2. (ii)

    \(\lim \limits _{n\rightarrow \infty }\alpha _n=0\) and \(\sum _{n=0}^\infty \alpha _n=\infty .\)

Then \(\{x_n\}\) generated by Algorithm 1 converges strongly to a solution \(x^*\), where \(x^*=P_\Gamma u.\)

Proof

Let \(x^*\in \Gamma .\) The proof is divided into four steps as follows.

Step 1. The sequence \(\{x_n\}\) is bounded. We consider the following two cases:

Case 1: \(\hat{d_n}=\Lambda _n.\)

From the definition of \(\{y_n\}\), we have

$$\begin{aligned} \begin{aligned} \Vert y_n-x^*\Vert ^2=&\ \Vert x_n-\lambda _n(x_n-P_{C^n}x_n)-x^*\Vert ^2 \\ =&\ \Vert x_n-x^*\Vert ^2+\lambda _n^2\Vert x_n-P_{C^n}x_n\Vert ^2-2\lambda _n\langle x_n-x^*, x_n-P_{C^n}x_n\rangle \\ =&\ \Vert x_n-x^*\Vert ^2+\lambda _n^2\Vert x_n-P_{C^n}x_n\Vert ^2-2\lambda _n\langle x_n-P_{C^n}x_n, x_n-P_{C^n}x_n\rangle \\&-2\lambda _n\langle P_{C^n}x_n-x^*, x_n-P_{C^n}x_n\rangle \\ =&\ \Vert x_n-x^*\Vert ^2+\lambda _n^2\Vert x_n-P_{C^n}x_n\Vert ^2-2\lambda _n\Vert x_n-P_{C^n}x_n\Vert ^2 \\&-2\lambda _n\langle P_{C^n}x_n-x^*, x_n-P_{C^n}x_n\rangle . \end{aligned}\nonumber \\ \end{aligned}$$
(13)

Since \(x^*\in C\subseteq C^n\), it follows from Lemma 2 (i) that

$$\begin{aligned} \langle x_n-P_{C^n}x_n, P_{C^n}x_n-x^* \rangle \ge 0. \end{aligned}$$

This implies that

$$\begin{aligned} \Vert y_n-x^*\Vert ^2\le \Vert x_n-x^*\Vert ^2-2\lambda _n(1-\lambda _n)\Vert x_n-P_{C^n}x_n\Vert ^2. \end{aligned}$$
(14)

Case 2: \(\bar{d_n}=\Lambda _n.\)

Using (11) and Lemma 2 (i), we have

$$\begin{aligned} \Vert y_n-x^*\Vert ^2=&\ \Vert x_n-\tau _nA_{i_n}^*(A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n)-x^*\Vert ^2 \nonumber \\=&\ \Vert x_n-x^*\Vert ^2+\tau _n^2\Vert A_{i_n}^*(A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n)\Vert ^2\nonumber \\ {}&-2\tau _n\langle x_n-x^*, A_{i_n}^*(A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n)\rangle \nonumber \\=&\ \Vert x_n-x^*\Vert ^2+\tau _n^2\Vert A_{i_n}^*(A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n)\Vert ^2\nonumber \\ {}&-2\tau _n\langle A_{i_n}x_n-A_{i_n}x^*, A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\rangle \nonumber \\=&\ \Vert x_n-x^*\Vert ^2+\tau _n^2\Vert A_{i_n}^*(A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n)\Vert ^2\nonumber \\ {}&-2\tau _n\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert ^2\nonumber \\ {}&-2\tau _n\langle P_{Q_{i_n}^n}A_{i_n}x_n-A_{i_n}x^*, A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\rangle \nonumber \\ \le&\ \Vert x_n-x^*\Vert ^2+\tau _n^2\Vert A_{i_n}^*(A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n)\Vert ^2-2\tau _n \Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert ^2 \nonumber \\=&\ \Vert x_n-x^*\Vert ^2+\Big (\frac{\rho _n\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert ^2}{\Vert A_{i_n}^*(A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n)\Vert ^2}\Big )^2\Vert A_{i_n}^*(A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n)\Vert ^2\nonumber \\ {}&-2\,\frac{\rho _n\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert ^2}{\Vert A_{i_n}^*(A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n)\Vert ^2}\, \Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert ^2 \nonumber \\ \le&\ \Vert x_n-x^*\Vert ^2-2\rho _n(1-\rho _n)\frac{\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert ^4}{\Vert A_{i_n}^*(A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n)\Vert ^2}. \end{aligned}$$
(15)

By (14) and (15), we obtain

$$\begin{aligned} \Vert y_n-x^*\Vert \le \Vert x_n-x^*\Vert . \end{aligned}$$
(16)

We next demonstrate that the sequence \(\{x_n\}\) is bounded. Indeed, using (8) and (16), we get

$$\begin{aligned} \Vert x_{n+1}-x^*\Vert ^2= & {} \ \Vert \alpha _nu+(1-\alpha _n)y_n-x^*\Vert ^2 \nonumber \\= & {} \ \Vert \alpha _n(u-x^*)+(1-\alpha _n)(y_n-x^*)\Vert ^2 \nonumber \\\le & {} \ \Vert (1-\alpha _n)(y_n-x^*)\Vert ^2+2\alpha _n \langle u-x^*, x_{n+1}-x^* \rangle \nonumber \\\le & {} \ (1-\alpha _n)\Vert y_n-x^*\Vert ^2+2\alpha _n\langle u-x^*, x_{n+1}-x^* \rangle \nonumber \\\le & {} \ (1-\alpha _n)\Vert x_n-x^*\Vert ^2+2\alpha _n\langle u-x^*, x_{n+1}-x^* \rangle . \end{aligned}$$
(17)

It follows from Lemma 3 and (17) that

$$\begin{aligned}{} & {} \Vert x_{n+1}-x^*\Vert ^2\le \ \ (1-\alpha _n)\Vert x_n-x^*\Vert ^2+2\alpha _n\langle u-x^*, x_{n+1}-x^* \rangle \nonumber \\{} & {} \quad \le \ (1-\alpha _n)\Vert x_n-x^*\Vert ^2+2\alpha _n\Vert u-x^*\Vert \Vert x_{n+1}-x^*\Vert \nonumber \\{} & {} \quad \le \ (1-\alpha _n)\Vert x_n-x^*\Vert ^2+4\alpha _n\Vert u-x^*\Vert ^2+\frac{1}{4}\alpha _n\Vert x_{n+1}-x^*\Vert ^2 \nonumber \\{} & {} \quad \le \ \frac{1-\alpha _n}{1-\frac{1}{4}\alpha _n}\Vert x_n-x^*\Vert ^2+\frac{\frac{3}{4}\alpha _n}{1-\frac{1}{4}\alpha _n}\frac{16}{3}\Vert u-x^*\Vert ^2 \nonumber \\{} & {} \quad \le \ \ \max \{\Vert x_n-x^*\Vert ^2, \frac{16}{3}\Vert u-x^*\Vert ^2\} \end{aligned}$$
(18)
$$\begin{aligned}{} & {} \quad \vdots \nonumber \\{} & {} \quad \le \ \ \max \{\Vert x_0-x^*\Vert ^2, \frac{16}{3}\Vert u-x^*\Vert ^2\}. \end{aligned}$$
(19)

This implies that the sequence \(\{x_n\}\) is bounded, and so are \(\{y_n\}\) and \(\{A_ix_n\},\ i=1,..., N.\)

Step 2. \(\Vert x_n-P_{C^n}x_n\Vert \rightarrow 0\) and \(\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert \rightarrow 0\) hold as \(n\rightarrow \infty \).

By (17), we obtain

$$\begin{aligned} \begin{aligned} \Vert x_{n+1}-x^*\Vert ^2 \le (1-\alpha _n)\Vert x_n-x^*\Vert ^2+2\alpha _n\langle u-x^*, x_{n+1}-x^* \rangle . \end{aligned} \end{aligned}$$
(20)

Setting \(\Upsilon _n=\Vert x_n-x^*\Vert ^2\), we obtain

$$\begin{aligned} \Upsilon _{n+1}\le (1-\alpha _n)\Upsilon _n+\alpha _nb_n, \end{aligned}$$
(21)

where \(b_n=2\langle u-x^*, x_{n+1}-x^*\rangle .\)

In view of Lemma 7, it suffices to demonstrate that \(\limsup _{k\rightarrow \infty }b_{n_k}\le 0\) for every subsequence \(\{\Upsilon _{n_k}\}\) of \(\{\Upsilon _n\}\) satisfying

$$\begin{aligned} \liminf _{k\rightarrow \infty }(\Upsilon _{n_{k+1}}-\Upsilon _{n_k})\ge 0. \end{aligned}$$

From (17), we also obtain

$$\begin{aligned} \Vert x_{n+1}-x^*\Vert ^2\le \Vert y_n-x^*\Vert ^2+2\alpha _n\langle u-x^*,x_{n+1}-x^*\rangle . \end{aligned}$$
(22)

If we put (14) in (22), we obtain that

$$\begin{aligned}{} & {} \Vert x_{n+1}-x^*\Vert ^2\le \Vert x_n-x^*\Vert ^2+2\alpha _n\langle u-x^*, x_{n+1}-x^* \rangle \\ {}{} & {} -2\lambda _n(1-\lambda _n)\Vert x_n-P_{C^n}x_n\Vert ^2. \end{aligned}$$

Using the hypothesis \(\lambda _n\in [a,b]\subset (0,1)\), we get from the inequality above that

$$\begin{aligned} \Vert x_n-P_{C^n}x_n\Vert ^2\le & {} \frac{1}{2\lambda _n(1-\lambda _n)}(\Upsilon _n-\Upsilon _{n+1}+\alpha _nM) \nonumber \\ {}\le & {} \frac{1}{2a(1-b)}(\Upsilon _n-\Upsilon _{n+1}+\alpha _nM), \end{aligned}$$
(23)

where \(M>0\) is a constant such that \(2\Vert u-x^*\Vert \Vert x_{n+1}-x^* \Vert \le M\) for all \(n\in {\mathbb {N}}.\)

Since \(x^*\in \Gamma ,\) we have \(A_i^*(I-P_{Q_i^n})A_ix^*=0.\) Hence,

$$\begin{aligned} \Vert A_i^*(I-P_{Q_i^n})A_ix_n-A_i^*(I-P_{Q_i^n})A_ix^*\Vert \le (\max _{1\le i\le N}\Vert A_i\Vert ^2)\Vert x_n-x^*\Vert . \end{aligned}$$

Since \(\{x_n\}\) is bounded, the sequence \(\{\Vert A_i^*(I-P_{Q_i^n})A_ix_n\Vert \}_{n=1}^{\infty }\) is bounded as well. Similarly, using (15), (22) and \(\rho _n\in [c,d]\subset (0,1)\), we see that

$$\begin{aligned} \Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert ^4\le \frac{K^2}{2c(1-d)}(\Upsilon _n-\Upsilon _{n+1}+\alpha _nM), \end{aligned}$$
(24)

where \(K>0\) is a constant such that \(\Vert A_{i_n}^*(I-P_{Q_{i_n}^n})A_{i_n}x_n\Vert \le K\) for all \(n\in {\mathbb {N}}.\)

Since \(\Upsilon _n-\Upsilon _{n+1}\rightarrow 0\) and \(\alpha _n\rightarrow 0 \ (n\rightarrow \infty ),\) it follows from (23) and (24) that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\Vert x_n-P_{C^n}x_n\Vert =0\ \ \text {and}\ \lim \limits _{n\rightarrow \infty }\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert =0. \end{aligned}$$
(25)

Step 3. We show that \(\omega _w(x_n)\subset \Gamma .\)

Let \({\hat{x}}\in \omega _w(x_n).\) Then there exists a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) such that \(x_{n_k}\rightharpoonup {\hat{x}}\) as \(k\rightarrow \infty .\) Since \(\partial c\) is bounded on bounded sets, there exists a constant \(\xi >0\) such that \(\Vert \xi _n\Vert \le \xi \) for all \(n \ge 0.\) Then, from the fact that \(P_{C^n}x_n\in C^n\) and (9), it follows that

$$\begin{aligned} c(x_{n_k})\le \langle \xi _{n_k}, x_{n_k}-P_{C^{n_k}}x_{n_k}\rangle \le \Vert \xi _{n_k}\Vert \Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert \le \xi \Vert (I-P_{C^{n_k}})x_{n_k}\Vert . \end{aligned}$$

By applying (25) and the weakly lower semicontinuity of c, we have

$$\begin{aligned} c({\hat{x}})\le \liminf _{k\rightarrow \infty } c(x_{n_k})\le 0. \end{aligned}$$

Consequently, \({\hat{x}}\in C.\) In the same way, there exists a constant \(\eta >0\) such that \(\Vert \eta ^n_{i_n}\Vert \le \eta \) for all \(n\ge 0.\) Since \(P_{Q_{i_n}^n}A_{i_n}x_n\in Q_{i_n}^n\), it follows that

$$\begin{aligned} \begin{aligned} q_{i_n}(A_{i_n}x_{n_k})\le&\ \langle \eta _{i_n}^{n_k}, A_{i_n}x_{n_k}- P_{Q_{i_n}^n}A_{i_n}x_{n_k}\rangle \\ \le&\ \Vert \eta _{i_n}^{n_k}\Vert \Vert A_{i_n}x_{n_k}- P_{Q_{i_n}^n}A_{i_n}x_{n_k}\Vert \\ \le&\ \eta \Vert (I-P_{Q_{i_n}^{n_k}})A_{i_n}x_{n_k}\Vert . \end{aligned} \end{aligned}$$
(26)

By applying (25) and the weakly lower semicontinuity of \(q_i\), we have

$$\begin{aligned} q_{i_n}(A_{i_n}{\hat{x}})\le \liminf _{k\rightarrow \infty } q_{i_n}(A_{i_n}x_{n_k}) \le 0. \end{aligned}$$

According to the definition of \(i_n\), it turns out that \(A_i{\hat{x}}\in Q_i\) for \(i = 1, 2,...,N.\) Consequently, we have \({\hat{x}}\in \Gamma ,\) which implies that \(\omega _w(x_n)\subset \Gamma .\)

Step 4. We prove that \(\Upsilon _n\rightarrow 0\).

We now show that \(\Vert x_{{n_k}+1}-x_{n_k}\Vert \rightarrow 0\) as \(k\rightarrow \infty .\) Indeed, it follows from the boundedness of the sequence \(\{y_n\}\) and the assumption of \(\alpha _n\rightarrow 0\) that

$$\begin{aligned} \Vert x_{{n_k}+1}-y_{n_k}\Vert =\alpha _{n_k}\Vert u-y_{n_k}\Vert \rightarrow 0,\ k\rightarrow \infty . \end{aligned}$$
(27)

In Case 1, from the definition of \(y_n\), \(\lambda _n\), \(\rho _n\) and (25), we obtain that

$$\begin{aligned} \Vert y_{n_k}-x_{n_k}\Vert =\lambda _{n_k}\Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert \rightarrow 0,\ k\rightarrow \infty . \end{aligned}$$
(28)

In Case 2, from the definition of \(y_n\), \(\tau _n\), \(\rho _n\) and (25), we have

$$\begin{aligned} \begin{aligned} \Vert y_{n_k}-x_{n_k}\Vert&\ =\tau _{n_k}\Vert A_{i_{n_k}}^*(A_{i_{n_k}}x_{n_k}-P_{Q_{i_{n_k}}^{n_k}}A_{i_{n_k}}x_{n_k})\Vert \\ {}&\ =\rho _{n_k}\frac{\Vert A_{i_{n_k}}x_{n_k}-P_{Q_{i_{n_k}}^{n_k}}A_{i_{n_k}}x_{n_k}\Vert ^2}{\Vert A_{i_{n_k}}^*(A_{i_{n_k}}x_{n_k}-P_{Q_{i_{n_k}}^{n_k}}A_{i_{n_k}}x_{n_k})\Vert }\rightarrow 0,\ k\rightarrow \infty . \end{aligned} \end{aligned}$$
(29)

Thus, from (28) and (29), we get \(\Vert y_{n_k}-x_{n_k}\Vert \rightarrow 0.\) Consequently, we have

$$\begin{aligned} \lim _{k\rightarrow \infty }\Vert x_{{n_k}+1}-x_{n_k}\Vert =0. \end{aligned}$$
(30)

Let \(\{x_{{n_k}_j}\}\) be a subsequence of \(\{x_{n_k}\}\) such that

$$\begin{aligned} x_{{n_k}_j}\rightharpoonup {\hat{x}},\ j\rightarrow \infty . \end{aligned}$$

Furthermore, according to (30), \(x^*=P_\Gamma u\) and Lemma 2 (i), we have

$$\begin{aligned} \begin{aligned} \limsup _{k\rightarrow \infty }b_{n_k}&=\limsup _{k\rightarrow \infty }2\langle u-x^*, x_{{n_k}+1}-x^*\rangle \\ {}&\ =\lim _{j\rightarrow \infty }2\langle u-x^*, x_{{{n_k}_j}+1}-x^*\rangle \\ {}&\ =2\langle u-x^*, {\hat{x}}-x^*\rangle \\ {}&\ \le \ 0. \end{aligned} \end{aligned}$$
(31)

Finally, applying Lemma 7 to (21), we conclude that \(\Upsilon _n\rightarrow 0,\) that is \(x_n\rightarrow x^*.\) This completes the proof. \(\square \)

Algorithm 2
figure b

Self-adaptive relaxed CQ algorithm II for SFPMOS

Assume that the sequence \(\{x_n\}\) generated by Algorithm 2 is infinite. In other words, Algorithm 2 does not terminate in a finite number of iterations.
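As with Algorithm 1, the algorithm itself is displayed as a figure. The sketch below is our reading of Algorithm 2, inferred from (32)-(35), Lemma 9, and the proof of Theorem 10: the residuals \(x_n-z_n\) and \(y_n^{i_n}\) are combined in one direction with the adaptive step \(t_n=\rho _n\big (\Vert x_n-z_n\Vert ^2+\Vert (I-P_{Q_{i_n}^n})A_{i_n}x_n\Vert ^2\big )/\big (2\Vert x_n+y_n^{i_n}-z_n\Vert ^2\big )\). Data and sets are illustrative, with exact projections in place of the relaxed ones.

```python
import numpy as np

rng = np.random.default_rng(7)
N, dim = 2, 3
As = [rng.standard_normal((2, dim)) / 3.0 for _ in range(N)]  # illustrative A_i

def proj_unit_ball(y):
    n = np.linalg.norm(y)
    return y if n <= 1.0 else y / n

u = rng.standard_normal(dim)                  # Halpern anchor
x = rng.standard_normal(dim)
rho = 1.0                                     # rho_n, kept inside (0, 4)
for n in range(1, 10001):
    alpha = 1.0 / (n + 1)                     # alpha_n satisfying (C1)
    z = proj_unit_ball(x)                     # z_n = P_C x_n
    rQ = [A @ x - proj_unit_ball(A @ x) for A in As]
    i_n = max(range(N), key=lambda i: np.linalg.norm(rQ[i]))
    y = As[i_n].T @ rQ[i_n]                   # y_n^{i_n}
    d = x + y - z
    dd = np.dot(d, d)
    if dd < 1e-24:
        break                                 # x_n solves the problem (cf. Lemma 9)
    S = np.dot(x - z, x - z) + np.dot(rQ[i_n], rQ[i_n])
    x = alpha * u + (1 - alpha) * (x - rho * S / (2.0 * dd) * d)
```

Unlike Algorithm 1, both residuals are processed simultaneously in a single combined direction, while the selection of \(i_n\) still keeps the per-iteration cost at one projection among the N output sets.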

Lemma 9

Let \(\{x_n\}\) be the bounded sequence generated by Algorithm 2. Then \(\Vert x_n+y_n^{i_n}-z_n\Vert =0\) holds if and only if \(x_n\) is a solution of problem (2).

Proof

Assume that \(\Vert x_n+y_n^{i_n}-z_n\Vert =0.\) For any \(x^*\in \Gamma \), by Lemma 2 (iii), we have

$$\begin{aligned} \begin{aligned}&\Vert x_n-P_{C^n}x_n\Vert ^2+\Vert (I-P_{Q_{i_n}^n})A_{i_n}x_n\Vert ^2 \\ \le&\ \langle x_n-P_{C^n}x_n,x_n-x^*\rangle +\langle (I-P_{Q_{i_n}^n})A_{i_n}x_n,A_{i_n}x_n-A_{i_n}x^*\rangle \\ =&\ \langle x_n-P_{C^n}x_n+A_{i_n}^*(I-P_{Q_{i_n}^n})A_{i_n}x_n,x_n-x^*\rangle \\ =&\ \langle x_n+y_n^{i_n}-z_n,x_n-x^* \rangle \\ \le&\ \Vert x_n+y_n^{i_n}-z_n\Vert \Vert x_n-x^*\Vert . \end{aligned} \end{aligned}$$
(34)

Since \(\{x_n\}\) is bounded and \(\Vert x_n+y_n^{i_n}-z_n\Vert =0\), we get

$$\begin{aligned} \Vert x_n-P_{C^n}x_n\Vert =0\ \ \ \text {and}\ \ \ \Vert (I-P_{Q_{i_n}^n})A_{i_n}x_n\Vert =0. \end{aligned}$$

According to the definition of \(i_n\), it follows from the above equation that

$$\begin{aligned} \Vert (I-P_{Q_i^n})A_ix_n\Vert =0,\ \text {for\ all}\ i\in I. \end{aligned}$$

Hence \(x_n\in C^n\) and \(A_ix_n\in Q_i^n\) for all \(i\in I.\) From (9), we then have \(x_n\in C\) and \(A_ix_n\in Q_i\) for all \(i\in I.\) Therefore, \(x_n\in \Gamma .\) \(\square \)

Theorem 10

Suppose that the sequences \(\{\alpha _n\}\) and \(\{\rho _n\}\) satisfy the following conditions:

(C1) \(\lim \limits _{n\rightarrow \infty } \alpha _n = 0\) and \(\sum _{n=1}^\infty \alpha _n=\infty ;\)

(C2) \(0< \liminf \limits _{n\rightarrow \infty } \rho _n\le \limsup \limits _{n\rightarrow \infty } \rho _n <4.\)

Then the sequence \(\{x_n\}\) generated by Algorithm 2 converges strongly to \(x^* =P_\Gamma u.\)

Proof

The proof is divided into four steps as follows.

Step 1. The sequence \(\{x_n\}\) is bounded.

Set \(w_n =x_n-t_n(x_n+y_n^{i_n}-z_n).\) It follows from (33) and (34) that

$$\begin{aligned} \Vert w_n-x^*\Vert ^2 =&\ \Vert x_n-t_n(x_n+y_n^{i_n}-z_n)-x^*\Vert ^2 \nonumber \\=&\ \Vert x_n-x^*\Vert ^2+t_n^2\Vert x_n+y_n^{i_n}-z_n\Vert ^2-2t_n\langle x_n-x^*, x_n+y_n^{i_n}-z_n\rangle \nonumber \\ \le&\ \Vert x_n-x^*\Vert ^2+t_n^2\Vert x_n+y_n^{i_n}-z_n\Vert ^2-2t_n\big (\Vert x_n-P_{C^n}x_n \Vert ^2+\Vert (I-P_{Q_{i_n}^n})A_{i_n}x_n\Vert ^2\big )\nonumber \\ \le&\ \Vert x_n-x^*\Vert ^2-\rho _n\Big (1-\frac{1}{4}\rho _n\Big )\frac{\big (\Vert x_n-P_{C^n}x_n\Vert ^2+\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert ^2\big )^2}{\Vert x_n+y_n^{i_n}-z_n\Vert ^2}.\nonumber \\ \end{aligned}$$
(35)

It implies that

$$\begin{aligned} \Vert w_n-x^*\Vert \le \Vert x_n-x^*\Vert . \end{aligned}$$
(36)

From (32) and (36), we get

$$\begin{aligned} \begin{aligned} \Vert x_{n+1}-x^*\Vert =&\ \Vert \alpha _nu+(1-\alpha _n)w_n-x^*\Vert \\=&\ \Vert \alpha _n(u-x^*)+(1-\alpha _n)(w_n-x^*)\Vert \\ \le&\ \alpha _n\Vert u-x^*\Vert +(1-\alpha _n)\Vert w_n-x^*\Vert \\ \le&\ \alpha _n\Vert u-x^*\Vert +(1-\alpha _n)\Vert x_n-x^*\Vert \\ \le&\ \max \{\Vert u-x^*\Vert ,\Vert x_n-x^*\Vert \} \\ \vdots \\ \le&\ \max \{\Vert u-x^*\Vert ,\Vert x_0-x^*\Vert \}. \end{aligned} \end{aligned}$$
(37)

Hence, \(\{x_n\}\) is bounded, and so are the sequences \(\{A_ix_n\}\), \(\{P_{C^n}x_n\}\) and \(\{P_{Q_i^n}A_ix_n\}\ (i\in I)\).

Step 2. \(\Vert x_n-P_{C^n}x_n\Vert \rightarrow 0\) and \(\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert \rightarrow 0\) hold as \(n\rightarrow \infty \).

From (8) and (35), we deduce

$$\begin{aligned}{} & {} \Vert x_{n+1}-x^*\Vert ^2 =\ \Vert \alpha _nu+(1-\alpha _n)w_n-x^*\Vert ^2 \nonumber \\{} & {} \quad =\ \Vert \alpha _n(u-x^*)+(1-\alpha _n)(w_n-x^*)\Vert ^2 \nonumber \\{} & {} \quad \le \ (1-\alpha _n)\Vert w_n-x^*\Vert ^2 + 2\alpha _n\langle u-x^*, x_{n+1}-x^*\rangle \nonumber \\{} & {} \quad \le \ (1-\alpha _n)\Vert x_n-x^*\Vert ^2+ 2\alpha _n\langle u-x^*, x_{n+1}-x^*\rangle \nonumber \\{} & {} -(1-\alpha _n)\rho _n(1-\frac{1}{4}\rho _n)\frac{(\Vert x_n-P_{C^n}x_n\Vert ^2+\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert ^2)^2}{\Vert x_n+y_n^{i_n}-z_n\Vert ^2} \nonumber \\{} & {} \quad \le \ (1-\alpha _n)\Vert x_n-x^*\Vert ^2+\alpha _n\Big [2\langle u-x^*, x_{n+1}-x^*\rangle \nonumber \\{} & {} \qquad -\frac{1-\alpha _n}{\alpha _n}\rho _n(1-\frac{1}{4}\rho _n)\frac{(\Vert x_n-P_{C^n}x_n\Vert ^2 +\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert ^2)^2}{\Vert x_n+y_n^{i_n}-z_n\Vert ^2}\Big ]. \end{aligned}$$
(38)

Setting \(\Upsilon _n=\Vert x_n-x^*\Vert ^2\), we have

$$\begin{aligned} \Upsilon _{n+1}\le (1-\alpha _n)\Upsilon _n+\alpha _nb_n,\ n\ge 1, \end{aligned}$$
(39)

where

$$\begin{aligned}{} & {} b_n=2\langle u-x^*, x_{n+1}-x^*\rangle -\frac{1-\alpha _n}{\alpha _n}\rho _n(1-\frac{1}{4}\rho _n)\\ {}{} & {} \times \frac{(\Vert x_n-P_{C^n}x_n\Vert ^2+\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert ^2)^2}{\Vert x_n+y_n^{i_n}-z_n\Vert ^2}. \end{aligned}$$

Now, we prove that \(\Upsilon _n\rightarrow 0\) by Lemma 7. Suppose that \(\{\Upsilon _{n_k}\}\) is an arbitrary subsequence of \(\{\Upsilon _n\}\) satisfying

$$\begin{aligned} \liminf \limits _{k\rightarrow \infty } (\Upsilon _{n_{k+1}}-\Upsilon _{n_k})\ge 0. \end{aligned}$$
(40)

Let \(M>0\) be a constant such that \(2\Vert u-x^*\Vert \Vert x_{n+1}-x^* \Vert \le M\) for all \(n\in {\mathbb {N}}.\) By (38), we have

$$\begin{aligned} \begin{aligned}&(1-\alpha _{n_k})\rho _{n_k}(1-\frac{1}{4}\rho _{n_k})\frac{(\Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert ^2+\Vert A_{i_n}x_{n_k}-P_{Q_{i_n}^{n_k}}A_{i_n}x_{n_k}\Vert ^2)^2}{\Vert x_{n_k}+y_{n_k}^{i_n}-z_{n_k}\Vert ^2} \\ \le&\ (1-\alpha _{n_k})\Vert x_{n_k}-x^*\Vert ^2-\Vert x_{{n_k}+1}-x^*\Vert ^2+2\alpha _{n_k}\langle u-x^*, x_{{n_k}+1}-x^*\rangle \\ \le&\ \Vert x_{n_k}-x^*\Vert ^2-\Vert x_{{n_k}+1}-x^*\Vert ^2+2\alpha _{n_k}\Vert u-x^*\Vert \Vert x_{{n_k}+1}-x^*\Vert \\ \le&\ \Upsilon _{n_k}-\Upsilon _{n_{k+1}}+\alpha _{n_k}M. \end{aligned} \end{aligned}$$
(41)

From (40) and (41) together with conditions (C1) and (C2), we have that

$$\begin{aligned} \begin{aligned}&\limsup _{k\rightarrow \infty }(1-\alpha _{n_k})\rho _{n_k}(1-\frac{1}{4}\rho _{n_k})\\ {}&\qquad \frac{(\Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert ^2+\Vert A_{i_n}x_{n_k}-P_{Q_{i_n}^{n_k}}A_{i_n}x_{n_k}\Vert ^2)^2}{\Vert x_{n_k}+y_{n_k}^{i_n}-z_{n_k}\Vert ^2} \\ {}&\qquad \le \ \limsup _{k\rightarrow \infty }(\Upsilon _{n_k}-\Upsilon _{n_{k+1}}+\alpha _{n_k}M) \\ {}&\qquad =\ -\liminf _{k\rightarrow \infty }(\Upsilon _{n_{k+1}}-\Upsilon _{n_k})+\limsup _{k\rightarrow \infty }\alpha _{n_k}M \\ {}&\qquad \le \ 0. \end{aligned} \end{aligned}$$
(42)

Thus we see that

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }\frac{(\Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert ^2+\Vert A_{i_n}x_{n_k}-P_{Q_{i_n}^{n_k}}A_{i_n}x_{n_k}\Vert ^2)^2}{\Vert x_{n_k}+y_{n_k}^{i_n}-z_{n_k}\Vert ^2}=0, \end{aligned}$$

which yields

$$\begin{aligned} \lim _{k\rightarrow \infty }\Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert =0,\ \ \lim _{k\rightarrow \infty }\Vert A_{i_n}x_{n_k}-P_{Q_{i_n}^{n_k}}A_{i_n}x_{n_k}\Vert =0. \end{aligned}$$
(43)

Step 3. We show that \(\omega _w(x_n)\subset \Gamma .\)

Assume that \({\hat{x}}\in \omega _w(x_n)\) and \(\{x_{n_k}\}\) is a subsequence of \(\{x_n\}\) which converges weakly to \({\hat{x}}.\) Since \(P_{C^n} x_n\in C^n\) and (9), it follows that

$$\begin{aligned} c(x_{n_k})\le \langle \xi _{n_k}, x_{n_k}-P_{C^{n_k}}x_{n_k}\rangle \le \Vert \xi _{n_k}\Vert \Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert \le \xi \Vert (I-P_{C^{n_k}})x_{n_k}\Vert , \end{aligned}$$
(44)

where \(\xi \) satisfies \(\Vert \xi _{n_k}\Vert \le \xi .\) By applying (43) and the weak lower semicontinuity of c, we have

$$\begin{aligned} c({\hat{x}})\le \liminf _{k\rightarrow \infty } c(x_{n_k})\le 0. \end{aligned}$$

Consequently, \({\hat{x}}\in C.\) Similarly, there exists a constant \(\eta >0\) such that \(\Vert \eta ^{n_k}_{i_n}\Vert \le \eta \) for all \(n\ge 0.\)

Since \(P_{Q_{i_n}^{n_k}}(A_{i_n}x_{n_k})\in Q_{i_n}^{n_k}\), it follows that

$$\begin{aligned} \begin{aligned} q_{i_n}(A_{i_n}x_{n_k})\le&\langle \eta _{i_n}^{n_k}, A_{i_n}x_{n_k}- P_{Q_{i_n}^{n_k}}A_{i_n}x_{n_k}\rangle \\ \le&\ \Vert \eta _{i_n}^{n_k}\Vert \Vert A_{i_n}x_{n_k}- P_{Q_{i_n}^{n_k}}A_{i_n}x_{n_k}\Vert \\ \le&\ \eta \Vert (I-P_{Q_{i_n}^{n_k}}) A_{i_n}x_{n_k}\Vert . \end{aligned} \end{aligned}$$
(45)

By applying (43) and the weak lower semicontinuity of \(q_i\), we have

$$\begin{aligned} q_{i_n}(A_{i_n}{\hat{x}})\le \liminf _{k\rightarrow \infty } q_{i_n}(A_{i_n}x_{n_k}) \le 0. \end{aligned}$$
(46)

It follows that \(A_i{\hat{x}}\in Q_i\) for \(i = 1, 2,...,N.\) Consequently, we have \({\hat{x}}\in \Gamma ,\) which implies that \(\omega _w(x_n)\subset \Gamma .\)

Step 4. We prove that \(x_n\rightarrow x^*\).

First, we have the following estimate:

$$\begin{aligned} \begin{aligned} \Vert x_{{n_k}+1}-x_{n_k}\Vert =&\ \Vert \alpha _{n_k}u+(1-\alpha _{n_k})(x_{n_k}-t_{n_k}(x_{n_k}+y_{n_k}^{i_n}-z_{n_k}))-x_{n_k}\Vert \\ \le&\ \alpha _{n_k}\Vert u-x_{n_k}\Vert +(1-\alpha _{n_k})t_{n_k}\Vert x_{n_k}+y_{n_k}^{i_n}-z_{n_k}\Vert \\ \le&\ \alpha _{n_k}\Vert u-x_{n_k}\Vert +t_{n_k}\Vert x_{n_k}+y_{n_k}^{i_n}-z_{n_k}\Vert \\ \le&\ \alpha _{n_k}\Vert u-x_{n_k}\Vert +\rho _{n_k}\\ {}&\quad \times \frac{\Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert ^2+\Vert A_{i_{n_k}}x_{n_k}-P_{Q_{i_n}^{n_k}}A_{i_{n_k}}x_{n_k}\Vert ^2}{2\Vert x_{n_k}+y_{n_k}^{i_n}-z_{n_k}\Vert }. \end{aligned} \end{aligned}$$
(47)

Since \(\{x_n\}\) is bounded, combining (43) with conditions (C1) and (C2) yields

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }\Vert x_{n_k}-x_{{n_k}+1}\Vert =0. \end{aligned}$$
(48)

Let \(\{x_{{n_k}_j}\}\) be a subsequence of \(\{x_{n_k}\}\) such that \(x_{{n_k}_j}\rightharpoonup {\hat{x}}\ \text {as} \ j\rightarrow \infty .\)

Furthermore, by (48), \(x^* =P_\Gamma u\) and Lemma 2 (i), we infer that

$$\begin{aligned} \limsup \limits _{k\rightarrow \infty }b_{n_k}=&\lim \limits _{j\rightarrow \infty }b_{{n_k}_j} \nonumber \\ =&\ \lim \limits _{j\rightarrow \infty }\Big [2\langle u-x^*, x_{{n_k}_j+1}-x^*\rangle \nonumber \\ -\frac{1-\alpha _{{n_k}_j}}{\alpha _{{n_k}_j}}\rho _{{n_k}_j}(1-\frac{1}{4}&\rho _{{n_k}_j})\frac{\Big ( \Vert x_{{n_k}_j}-P_{C^{{n_k}_j}}x_{{n_k}_j}\Vert ^2+\Vert A_{i_n}x_{{n_k}_j}-P_{Q_{i_n}^{{n_k}_j}}A_{i_n}x_{{n_k}_j}\Vert ^2\Big )^2}{\Vert x_{{n_k}_j}+y_{{n_k}_j}^{i_n}-z_{{n_k}_j}\Vert ^2}\Big ] \nonumber \\ \le&\ \lim \limits _{j\rightarrow \infty } 2\langle u-x^*, x_{{n_k}_j+1}-x^*\rangle \nonumber \\ =&\ 2\langle u-x^*, \hat{x}-x^*\rangle \nonumber \\ \le&\ 0. \end{aligned}$$
(49)

Finally, applying Lemma 7 to (39), we conclude that \(\Upsilon _n\rightarrow 0,\) that is, \(x_n\rightarrow x^*.\) This completes the proof. \(\square \)

3.2 Two iterative algorithms for variational inequalities over the solution set of SFPMOS

Now, we introduce two self-adaptive algorithms for solving variational inequalities over the solution set of the SFPMOS (7). Before analyzing the convergence of these algorithms, we impose the following conditions.

Assumption 1

The control sequences satisfy the following conditions:

  1. (i)

    \(0<a\le \lambda _n \le b<1\), \(0<c\le \rho _n \le d < 1;\)

  2. (ii)

    \(\{\beta _n\}\subset (0, 1)\), \(\lim \limits _{n\rightarrow \infty }\beta _n=0\) and \(\sum _{n=0}^\infty \beta _n=\infty \);

  3. (iii)

    \(\gamma \) is a positive real number in the open interval \((0, 2\kappa /L^2).\)
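As a quick numerical illustration of condition (ii) (using the schedule \(\beta _n=\frac{1}{100n+1}\) that is chosen in Example 1 below; the cutoff \(10^6\) is an arbitrary choice of ours), the following sketch checks that \(\beta _n\rightarrow 0\) while the partial sums keep growing like \(\frac{1}{100}\log n\) and hence diverge:

```python
# Numerical illustration (not a proof) of Assumption 1 (ii) for the
# schedule beta_n = 1/(100 n + 1) used later in Example 1:
# beta_n -> 0, while the partial sums behave like (log n)/100,
# which diverges as n -> infinity.
def beta(n):
    return 1.0 / (100 * n + 1)

S = sum(beta(n) for n in range(1, 10**6 + 1))
print(beta(10**6), S)   # beta_{1e6} ~ 1e-8, while S ~ 0.144 and still growing
```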

Algorithm 3

Self-adaptive algorithm I for P2

Lemma 11

Suppose that \(F:H\rightarrow H\) is L-Lipschitz continuous and \(\kappa \)-strongly monotone. Then

  1. (i)
    $$\begin{aligned} \Vert (I-\beta _n\gamma F)y_n-(I-\beta _n\gamma F)x^*\Vert \le (1-\beta _n\varphi )\Vert y_n-x^*\Vert , \end{aligned}$$

    where

    $$\begin{aligned} \varphi =1-\sqrt{1-\gamma (2\kappa -\gamma L^2)}; \end{aligned}$$
  2. (ii)
    $$\begin{aligned}{} & {} \Vert x_{n+1}-x^*\Vert ^2\le \Vert x_n-x^*\Vert ^2+\beta _n\gamma (\beta _n\gamma \Vert Fy_n\Vert ^2-2\langle y_n-x^*,Fy_n\rangle )\\ {}{} & {} -2\lambda _n(1-\lambda _n)\Vert x_n-P_{C^n}x_n\Vert ^2; \end{aligned}$$
  3. (iii)
    $$\begin{aligned} \begin{aligned} \Vert x_{n+1}-x^*\Vert ^2\le&\ \Vert x_n-x^*\Vert ^2+\beta _n\gamma (\beta _n\gamma \Vert Fy_n\Vert ^2-2\langle y_n-x^*,Fy_n\rangle )\\ {}&-2\rho _n(1-\rho _n)\frac{\Vert A_ix_n-P_{Q_i^n}A_ix_n\Vert ^4}{\Vert A_i^*(I-P_{Q_i^n})A_ix_n\Vert ^2}. \end{aligned} \end{aligned}$$
    (53)

Proof

(i) Taking any \(x^*\in \Gamma ,\) it follows from Lemma 3.1 in [18] that

$$\begin{aligned} \Vert (I-\gamma F)y_n-(I-\gamma F)x^*\Vert ^2\le (1-\gamma (2\kappa -\gamma L^2))\Vert y_n-x^*\Vert ^2. \end{aligned}$$
(54)

From (54), we have

$$\begin{aligned} \begin{aligned}&\ \Vert (I-\beta _n\gamma F)y_n-(I-\beta _n\gamma F)x^*\Vert \\=&\ \Vert (1-\beta _n)(y_n-x^*)+\beta _n(y_n-x^*)-\beta _n\gamma Fy_n+\beta _n\gamma Fx^*\Vert \\ \le&\ (1-\beta _n)\Vert y_n-x^*\Vert +\beta _n\Vert (I-\gamma F)y_n-(I-\gamma F)x^*\Vert \\ \le&\ (1-\beta _n)\Vert y_n-x^*\Vert +\beta _n\sqrt{1-\gamma (2\kappa -\gamma L^2)}\Vert y_n-x^*\Vert . \end{aligned} \end{aligned}$$
(55)

(ii) Using (14), we get

$$\begin{aligned} \begin{aligned} \Vert x_{n+1}-x^*\Vert ^2=&\ \Vert (I-\beta _n\gamma F)y_n-x^*\Vert ^2 \\ \le&\ \Vert y_n-x^*-\beta _n\gamma Fy_n\Vert ^2 \\ \le&\ \Vert y_n-x^*\Vert ^2+\beta _n^2\gamma ^2\Vert Fy_n\Vert ^2-2\beta _n\gamma \langle y_n-x^*, Fy_n\rangle \\ \le&\ \Vert x_n-x^*\Vert ^2+\beta _n\gamma (\beta _n\gamma \Vert Fy_n\Vert ^2 \\ {}&-2\langle y_n-x^*,Fy_n\rangle )-2\lambda _n(1-\lambda _n)\Vert x_n-P_{C^n}x_n\Vert ^2. \end{aligned} \end{aligned}$$
(56)

(iii) Similarly to (ii), using (15), we have

$$\begin{aligned} \begin{aligned} \Vert x_{n+1}-x^*\Vert ^2 \le&\ \Vert x_n-x^*\Vert ^2+\beta _n\gamma (\beta _n\gamma \Vert Fy_n\Vert ^2-2\langle y_n-x^*,Fy_n\rangle ) \\ {}&-2\rho _n(1-\rho _n)\frac{\Vert A_ix_n-P_{{Q_i}^n}A_ix_n\Vert ^4}{\Vert A_i^*(I-P_{Q^n_i})A_ix_n\Vert ^2}. \end{aligned} \end{aligned}$$
(57)

\(\square \)
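As a numerical aside (not part of the proof; the values of \(\kappa \), L and \(\gamma \) below are made up for illustration), one can check that \(\varphi =1-\sqrt{1-\gamma (2\kappa -\gamma L^2)}\) indeed lies in \((0, 1]\) for every \(\gamma \in (0, 2\kappa /L^2)\), so that \(1-\beta _n\varphi \) is a genuine contraction factor:

```python
import math

# Sanity check (illustration only): for an L-Lipschitz, kappa-strongly
# monotone F and any gamma in (0, 2*kappa/L**2), the quantity
#     phi = 1 - sqrt(1 - gamma*(2*kappa - gamma*L**2))
# lies in (0, 1], so 1 - beta_n*phi is a contraction factor.
def phi(gamma, kappa, L):
    assert 0 < gamma < 2 * kappa / L**2, "gamma must lie in (0, 2*kappa/L^2)"
    return 1.0 - math.sqrt(1.0 - gamma * (2 * kappa - gamma * L**2))

kappa, L = 2.0, 2.0          # e.g. F(x) = 2x is 2-Lipschitz and 2-strongly monotone
for gamma in (0.05, 0.15, 0.45):
    p = phi(gamma, kappa, L)
    assert 0.0 < p <= 1.0
    print(f"gamma={gamma:.2f}  phi={p:.4f}")
```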

Theorem 12

Let \(F: H\rightarrow H\) be an L-Lipschitz continuous and \(\kappa \)-strongly monotone operator, and suppose that Assumption 1 holds. Then the sequence \(\{x_n\}\) generated by Algorithm 3 converges strongly to the unique solution of \(VIP(F,\Gamma ).\)

Proof

Let \(x^*\) be the unique solution of the variational inequality \(VIP(F,\Gamma ),\) that is,

$$\begin{aligned} \langle Fx^*,z-x^*\rangle \ge 0,\ \forall \ z\in \Gamma . \end{aligned}$$
(58)

From (16) in Theorem 8, (52) and Lemma 11 (i), we have

$$\begin{aligned} \begin{aligned} \Vert x_{n+1}-x^*\Vert&= \ \Vert (I-\beta _n\gamma F)y_n-x^*\Vert \\&= \ \Vert (I-\beta _n\gamma F)y_n-(I-\beta _n\gamma F)x^*-\beta _n\gamma Fx^*\Vert \\&\le \ \Vert (I-\beta _n\gamma F)y_n-(I-\beta _n\gamma F)x^*\Vert +\beta _n\gamma \Vert Fx^*\Vert \\&=\ (1-\beta _n\varphi )\Vert y_n-x^*\Vert +\beta _n\gamma \Vert Fx^*\Vert \\&\le \ (1-\beta _n\varphi )\Vert x_n-x^*\Vert +\beta _n\varphi \frac{\gamma }{\varphi } \Vert Fx^*\Vert \\&\le \ \max \{\Vert x_n-x^*\Vert , \frac{\gamma }{\varphi }\Vert Fx^*\Vert \} \\&\vdots \\&\le \ \max \{\Vert x_0-x^*\Vert ,\frac{\gamma }{\varphi }\Vert Fx^*\Vert \}. \end{aligned} \end{aligned}$$
(59)

This implies that the sequence \(\{x_n\}\) is bounded, and so are \(\{y_n\}\) and \(\{A_ix_n\}, i=1,..., N.\) Using Lemma 11 (i), we get that

$$\begin{aligned} \begin{aligned} \Vert x_{n+1}-x^*\Vert ^2=&\ \langle (I-\beta _n\gamma F)y_n-x^*,x_{n+1}-x^*\rangle \\ =&\ \langle (I-\beta _n\gamma F)y_n-(I-\beta _n\gamma F)x^*,x_{n+1}-x^*\rangle -\beta _n\gamma \langle Fx^*,x_{n+1}-x^*\rangle \\ \le&\ \frac{\Vert (I-\beta _n\gamma F)y_n-(I-\beta _n\gamma F)x^*\Vert ^2+\Vert x_{n+1}-x^*\Vert ^2}{2}\\ {}&-\beta _n\gamma \langle Fx^*,x_{n+1}-x^*\rangle \\ \le&\ \frac{(1-\beta _n\varphi )\Vert y_n-x^*\Vert ^2+\Vert x_{n+1}-x^*\Vert ^2}{2}-\beta _n\gamma \langle Fx^*,x_{n+1}-x^*\rangle \\ \le&\ \frac{(1-\beta _n\varphi )\Vert x_n-x^*\Vert ^2+\Vert x_{n+1}-x^*\Vert ^2}{2}-\beta _n\gamma \langle Fx^*,x_{n+1}-x^*\rangle . \end{aligned} \end{aligned}$$
(60)

Hence, we infer that

$$\begin{aligned} \Vert x_{n+1}-x^*\Vert ^2\le (1-\beta _n\varphi )\Vert x_n-x^*\Vert ^2-2\beta _n\gamma \langle Fx^*, x_{n+1}-x^*\rangle . \end{aligned}$$

Setting \(\Upsilon _n=\Vert x_n-x^*\Vert ^2\), we have

$$\begin{aligned} \Upsilon _{n+1}\le (1-\alpha _n)\Upsilon _n+\alpha _nb_n, \end{aligned}$$
(61)

where \(\alpha _n=\beta _n\varphi \) and \(b_n=-\frac{2\gamma }{\varphi }\langle Fx^*, x_{n+1}-x^*\rangle .\)
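The recursion (61) is precisely the setting of Lemma 7. As an illustration only (the sequences \(\alpha _n=\frac{1}{n+1}\) and \(b_n=\frac{1}{n}\) below are made up, not those of the proof), simulating the extreme case \(\Upsilon _{n+1}=(1-\alpha _n)\Upsilon _n+\alpha _nb_n\) shows how \(\Upsilon _n\) is driven to 0 once \(\alpha _n\rightarrow 0\), \(\sum \alpha _n=\infty \) and \(b_n\rightarrow 0\):

```python
# Illustration only (not part of the proof): the recursion
#     Y_{n+1} <= (1 - a_n) Y_n + a_n b_n
# forces Y_n -> 0 whenever a_n -> 0, sum a_n = infinity and
# limsup b_n <= 0 (the content of Lemma 7).  We simulate the
# extreme case of equality with a decaying perturbation b_n.
Y = 10.0
for n in range(1, 200001):
    a_n = 1.0 / (n + 1)          # a_n -> 0, sum a_n diverges
    b_n = 1.0 / n                # limsup b_n = 0
    Y = (1 - a_n) * Y + a_n * b_n
print(Y)   # close to 0
```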

Now, we prove that \(\Upsilon _n\rightarrow 0\) by using Lemma 7. To this end, suppose that \(\{\Upsilon _{n_k}\}\) is an arbitrary subsequence of \(\{\Upsilon _n\}\) satisfying

$$\begin{aligned} \liminf _{k\rightarrow \infty }(\Upsilon _{n_{k+1}}-\Upsilon _{n_k})\ge 0. \end{aligned}$$
(62)

Since \(\{x_n\}\) is bounded, \(\{y_n\}\) is bounded too. Hence, there exists a positive real number \({\bar{M}}\) such that \(\max \{\sup _n\Vert y_n\Vert ,\sup _n\Vert Fy_n\Vert \}\le {\bar{M}}.\)

In Case 1, from Lemma 11 (ii) and \(\{\lambda _n\}\subset [a, b]\subset (0, 1),\) we get

$$\begin{aligned} 2a(1-b)\Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert ^2\le \Upsilon _{n_k}-\Upsilon _{n_{k+1}}+\beta _{n_k}\gamma (\beta _{n_k}\gamma {\bar{M}}+2{\bar{M}}+2\Vert x^*\Vert ){\bar{M}}. \end{aligned}$$

Thus, by (62) and \(\beta _n\rightarrow 0\), we obtain that

$$\begin{aligned} \begin{aligned}&\limsup _{k\rightarrow \infty }\Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert ^2 \\ \le&\ \frac{1}{2a(1-b)}\limsup _{k\rightarrow \infty }[\Upsilon _{n_k}-\Upsilon _{n_{k+1}}+\beta _{n_k}\gamma (\beta _{n_k}\gamma {\bar{M}}+2{\bar{M}}+2\Vert x^*\Vert ){\bar{M}}] \\ \le&\ \frac{1}{2a(1-b)}\Big [\limsup _{k\rightarrow \infty }(\Upsilon _{n_k}-\Upsilon _{n_{k+1}})+\limsup _{k\rightarrow \infty }\beta _{n_k}\gamma (\beta _{n_k}\gamma {\bar{M}}+2{\bar{M}}+2\Vert x^*\Vert ){\bar{M}}\Big ] \\ =&\ \frac{1}{2a(1-b)}\Big [-\liminf _{k\rightarrow \infty }(\Upsilon _{n_{k+1}}-\Upsilon _{n_k})+\limsup _{k\rightarrow \infty }\beta _{n_k}\gamma (\beta _{n_k}\gamma {\bar{M}}+2{\bar{M}}+2\Vert x^*\Vert ){\bar{M}}\Big ] \\ \le&\ 0. \end{aligned}\nonumber \\ \end{aligned}$$
(63)

In Case 2, proceeding as above, by Lemma 11 (iii) and the condition \(\{\rho _n\}\subset [c, d]\subset (0, 1),\) we infer that

$$\begin{aligned}{} & {} 2c(1-d)\Vert A_{i_n}x_{n_k}-P_{Q_{i_n}^{n_k}}A_{i_n}x_{n_k}\Vert ^4\\ {}{} & {} \quad \le K^2(\Upsilon _{n_k}-\Upsilon _{n_{k+1}}+\beta _{n_k}\gamma (\beta _{n_k}\gamma {\bar{M}}+2{\bar{M}}+2\Vert x^*\Vert ){\bar{M}}), \end{aligned}$$

where \(K>0\) is a constant such that \(\Vert A_{i_n}^*(I-P_{Q_{i_n}^n})A_{i_n}x_n\Vert \le K\) for all \(n\in {\mathbb {N}}.\) Thus, we obtain that

$$\begin{aligned} \begin{aligned}&\limsup _{k\rightarrow \infty }\Vert A_{i_n}x_{n_k}-P_{Q_{i_n}^{n_k}}A_{i_n}x_{n_k}\Vert ^4 \\ \le&\ \frac{K^2}{2c(1-d)}\limsup _{k\rightarrow \infty }[\Upsilon _{n_k}-\Upsilon _{n_{k+1}}+\beta _{n_k}\gamma (\beta _{n_k}\gamma {\bar{M}}+2{\bar{M}}+2\Vert x^*\Vert ){\bar{M}}] \\ \le&\ 0. \end{aligned} \end{aligned}$$
(64)

This implies that

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }\Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert =0,\ \ \lim \limits _{k\rightarrow \infty }\Vert A_{i_n}x_{n_k}-P_{Q_{i_n}^{n_k}}A_{i_n}x_{n_k}\Vert =0. \end{aligned}$$

By the same process as in Step 3 of Theorem 1, we obtain \(\omega _w(x_n)\subset \Gamma .\) Let \({\hat{x}}\in \omega _w(x_n);\) then there exists a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) such that \(x_{n_k}\rightharpoonup {\hat{x}}\) as \(k\rightarrow \infty .\)

Next, we show that \(\limsup \limits _{k\rightarrow \infty } b_{n_k}\le 0.\) Let \(\{x_{{n_k}_j}\}\) be a subsequence of \(\{x_{n_k}\}\) attaining the lower limit below; passing to a further subsequence if necessary, we may assume \(x_{{n_k}_j}\rightharpoonup {\hat{x}}.\) Thus, using (58), we have

$$\begin{aligned} \begin{aligned} \liminf _{k\rightarrow \infty }\langle Fx^*,x_{n_k}-x^*\rangle&=\lim _{j\rightarrow \infty }\langle Fx^*,x_{{n_k}_j}-x^*\rangle \\ {}&=\langle Fx^*, {\hat{x}}-x^*\rangle \\ {}&\ge 0. \end{aligned} \end{aligned}$$
(65)

From (27)–(30) in Theorem 8, we have

$$\begin{aligned} \lim _{k\rightarrow \infty }\Vert x_{{n_k}+1}-x_{n_k}\Vert =0. \end{aligned}$$
(66)

From (61) and (66), we have

$$\begin{aligned} \begin{aligned} \limsup _{k\rightarrow \infty }b_{n_k}&=\limsup _{k\rightarrow \infty }-\frac{2\gamma }{\varphi }\langle Fx^*, x_{n_k+1}-x^*\rangle \\ {}&=-\frac{2\gamma }{\varphi }\liminf _{k\rightarrow \infty }\langle Fx^*, x_{{n_k}+1}-x^*\rangle \\ {}&=-\frac{2\gamma }{\varphi }\langle Fx^*, {\hat{x}}-x^*\rangle \\ {}&\le 0. \end{aligned} \end{aligned}$$
(67)

Therefore we can immediately conclude that \(\Upsilon _n\rightarrow 0,\) that is, \(x_n \rightarrow x^*\) as \(n \rightarrow \infty .\) This completes the proof of the theorem. \(\square \)

Algorithm 4

Self-adaptive algorithm II for P2

Theorem 13

Let \(F: H\rightarrow H\) be an L-Lipschitz continuous and \(\kappa \)-strongly monotone operator, and suppose that Assumption 1 holds. Then the sequence \(\{x_n\}\) generated by Algorithm 4 converges strongly to the unique solution of \(VIP(F,\Gamma ).\)

Proof

Set \(w_n =x_n-t_n(x_n+y_n^{i_n}-z_n).\) It follows from (35) that

$$\begin{aligned}{} & {} \Vert w_n-x^*\Vert ^2 \le \Vert x_n-x^*\Vert ^2-\rho _n(1-\frac{1}{4}\rho _n)\\ {}{} & {} \quad \times \frac{(\Vert x_n-P_{C^n}x_n\Vert ^2+\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert ^2)^2}{\Vert x_n+y_n^{i_n}-z_n\Vert ^2}, \end{aligned}$$

which implies

$$\begin{aligned} \Vert w_n-x^*\Vert \le \Vert x_n-x^*\Vert . \end{aligned}$$

Using the above inequalities, we get

$$\begin{aligned} \Vert x_{n+1}-x^*\Vert ^2=&\ \Vert (I-\beta _n\gamma F)w_n-x^*\Vert ^2 \nonumber \\ \le&\ \Vert w_n-x^*-\beta _n\gamma Fw_n\Vert ^2 \nonumber \\ \le&\ \Vert w_n-x^*\Vert ^2+\beta _n^2\gamma ^2\Vert Fw_n\Vert ^2-2\beta _n\gamma \langle w_n-x^*, Fw_n\rangle \nonumber \\ \le&\ \Vert x_n-x^*\Vert ^2+\beta _n\gamma (\beta _n\gamma \Vert Fw_n\Vert ^2-2\langle w_n-x^*,Fw_n\rangle ) \nonumber \\&-\rho _n(1-\frac{1}{4}\rho _n)\frac{(\Vert x_n-P_{C^n}x_n\Vert ^2+\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert ^2)^2}{\Vert x_n+y_n^{i_n}-z_n\Vert ^2}. \end{aligned}$$
(69)

Arguing as in (59) with \(w_n\) in place of \(y_n,\) we get that \(\{x_n\}\) is bounded.

Arguing as in (60) and (61) and setting \(\Upsilon _n=\Vert x_n-x^*\Vert ^2\), we have

$$\begin{aligned} \Upsilon _{n+1}\le (1-\alpha _n)\Upsilon _n+\alpha _nb_n, \end{aligned}$$
(70)

where \(\alpha _n=\beta _n\varphi \) and \(b_n=-\frac{2\gamma }{\varphi }\langle Fx^*, x_{n+1}-x^*\rangle .\)

Now, we prove that \(\Upsilon _n\rightarrow 0\) by Lemma 7. Suppose that \(\{\Upsilon _{n_k}\}\) is an arbitrary subsequence of \(\{\Upsilon _n\}\) satisfying

$$\begin{aligned} \liminf \limits _{k\rightarrow \infty } (\Upsilon _{n_{k+1}}-\Upsilon _{n_k})\ge 0. \end{aligned}$$
(71)

From (69), we get

$$\begin{aligned} \begin{aligned}&\rho _n(1-\frac{1}{4}\rho _n)\frac{(\Vert x_n-P_{C^n}x_n\Vert ^2+\Vert A_{i_n}x_n-P_{Q_{i_n}^n}A_{i_n}x_n\Vert ^2)^2}{\Vert x_n+y_n^{i_n}-z_n\Vert ^2} \le \Upsilon _n-\Upsilon _{n+1}\\ {}&\ +\beta _n\gamma (\beta _n\gamma \Vert Fw_n\Vert ^2-2\langle w_n-x^*,Fw_n\rangle ). \end{aligned} \end{aligned}$$
(72)

Thus, from Assumption 1 (ii) and (71), we obtain

$$\begin{aligned}{} & {} \begin{aligned}&\limsup _{k\rightarrow \infty }\rho _{n_k}(1-\frac{1}{4}\rho _{n_k})\frac{(\Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert ^2+\Vert A_{i_n}x_{n_k}-P_{Q_{i_n}^{n_k}}A_{i_n}x_{n_k}\Vert ^2)^2}{\Vert x_{n_k}+y_{n_k}^{i_n}-z_{n_k}\Vert ^2} \\ \le&\ \limsup _{k\rightarrow \infty }[\Upsilon _{n_k}-\Upsilon _{n_{k+1}}+\beta _{n_k}\gamma (\beta _{n_k}\gamma {\bar{M}}+2{\bar{M}}+2\Vert x^*\Vert ){\bar{M}}] \end{aligned} \end{aligned}$$
(73)
$$\begin{aligned}{} & {} \begin{aligned} \\ =&\ -\liminf _{k\rightarrow \infty }(\Upsilon _{n_{k+1}}-\Upsilon _{n_k})+\limsup _{k\rightarrow \infty }\beta _{n_k}\gamma (\beta _{n_k}\gamma {\bar{M}}+2{\bar{M}}+2\Vert x^*\Vert ){\bar{M}} \\ \le&\ 0, \end{aligned} \end{aligned}$$
(74)

where \({\bar{M}}\) is a positive real number satisfying \(\max \{\sup _n\Vert w_n\Vert ,\sup _n\Vert Fw_n\Vert \}\le {\bar{M}}.\)

Under Assumption 1, we see that

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }\frac{(\Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert ^2+\Vert A_{i_n}x_{n_k}-P_{Q_{i_n}^{n_k}}A_{i_n}x_{n_k}\Vert ^2)^2}{\Vert x_{n_k}+y_{n_k}^{i_n}-z_{n_k}\Vert ^2}=0, \end{aligned}$$

which yields

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }\Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert =0,\ \ \lim \limits _{k\rightarrow \infty }\Vert A_{i_n}x_{n_k}-P_{Q_{i_n}^{n_k}}A_{i_n}x_{n_k}\Vert =0. \end{aligned}$$
(75)

By the same process as in Step 3 of Theorem 1, we obtain \(\omega _w(x_n)\subset \Gamma .\) Furthermore, by the assumptions on the parameters, we obtain

$$\begin{aligned} \begin{aligned} \Vert x_{{n_k}+1}-w_{n_k}\Vert =\beta _{n_k}\gamma \Vert Fw_{n_k}\Vert \le {\bar{M}}\gamma \beta _{n_k}\rightarrow 0,\ \ k\rightarrow \infty . \end{aligned} \end{aligned}$$
(76)

and

$$\begin{aligned} \begin{aligned} \Vert w_{n_k}-x_{n_k}\Vert&=\ t_{n_k}\Vert x_{n_k}+y_{n_k}^{i_n}-z_{n_k}\Vert \\ {}&=\ \rho _{n_k}\frac{\Vert x_{n_k}-P_{C^{n_k}}x_{n_k}\Vert ^2+\Vert A_{i_{n_k}}x_{n_k}-P_{Q_{i_n}^{n_k}}A_{i_{n_k}}x_{n_k}\Vert ^2}{2\Vert x_{n_k}+y_{n_k}^{i_n}-z_{n_k}\Vert } \\ {}&\rightarrow 0,\ \ k\rightarrow \infty . \end{aligned} \end{aligned}$$
(77)

From (76) and (77), we get

$$\begin{aligned} \Vert x_{{n_k}+1}-x_{n_k}\Vert \rightarrow 0,\ \ k\rightarrow \infty . \end{aligned}$$

Therefore, we infer that

$$\begin{aligned} \begin{aligned} \limsup _{k\rightarrow \infty }b_{n_k}&=\limsup _{k\rightarrow \infty }-\frac{2\gamma }{\varphi }\langle Fx^*, x_{{n_k}+1}-x^*\rangle \\ {}&=-\frac{2\gamma }{\varphi }\liminf _{k\rightarrow \infty }\langle Fx^*, x_{{n_k}+1}-x^*\rangle \\ {}&=-\frac{2\gamma }{\varphi }\langle Fx^*, {\hat{x}}-x^*\rangle \\ {}&\le \ 0. \end{aligned} \end{aligned}$$
(78)

Finally, applying Lemma 7 to (70), we conclude that \(\Upsilon _n\rightarrow 0,\) that is, \(x_n\rightarrow x^*.\) This completes the proof. \(\square \)

Remark 1

We have the following comments on Algorithms 1–4.

  • Algorithms 1–4 are based on the gradient algorithm with a selection technique. In each iteration, Algorithm 1 only needs to compute two projections: \(P_{C^n}x_n\) and one projection selected from \(\{P_{Q_i^n}A_ix_n\}_{i=1}^N\). Algorithm 2 only needs to compute the projection involving \(x_n-P_{C^n}x_n+A_i^*P_{Q_i^n}A_ix_n\) in each iteration. This technique reduces the computational effort spent on projections and speeds up the convergence of the algorithms.

  • Different selection methods in the proposed algorithms result in different representations of the adaptive step sizes. Algorithm 1 and Algorithm 2 are two different projection and Halpern-type algorithms, which results in different ranges of step sizes. Algorithm 1 and Algorithm 3 use the same step size, as do Algorithm 2 and Algorithm 4.

4 Numerical experiment

In this section, we provide some numerical examples to illustrate the convergence of the proposed algorithms. All codes for the numerical computations are implemented in MATLAB R2015b. The numerical results are obtained on a personal computer with an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz.

Fig. 1

Example 1: Comparison of all algorithms in different cases

Fig. 2

Example 2: Comparison of all algorithms in different cases

Using \(\Vert x_{n+1}-x_n\Vert <10^{-6}\) as the stopping criterion, we plot the graphs of \(TOL=\Vert x_{n+1}-x_n\Vert ^2\) against the number of iterations n. The numerical results are reported in Figs. 1 and 2 and Tables 1 and 2.
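To make the role of the stopping criterion concrete, here is a minimal sketch: a toy two-dimensional SFP with invented data, solved by the classical CQ iteration with fixed step \(1/\Vert A\Vert ^2\). It is not one of the algorithms compared below; it only illustrates how the criterion terminates the loop.

```python
import numpy as np

# Toy illustration of the stopping criterion ||x_{n+1} - x_n|| < 1e-6:
# a tiny SFP whose sets C and Q are Euclidean balls (so P_C and P_Q are
# exact), solved by the classical CQ iteration with the fixed step
# 1/||A||^2.  All data (A, C, Q) are invented for this sketch.
def proj_ball(y, center, r):
    d = np.linalg.norm(y - center)
    return y if d <= r else center + r * (y - center) / d

A = np.array([[2.0, 0.0], [0.0, 1.0]])
c_C, r_C = np.zeros(2), 1.0              # C = ball(0, 1)
c_Q, r_Q = np.array([2.0, 0.0]), 1.0     # Q = ball((2,0), 1); x* = (1,0) is feasible
tau = 1.0 / np.linalg.norm(A, 2) ** 2    # fixed CQ step in (0, 2/||A||^2)

x = np.array([5.0, 4.0])
for n in range(10000):
    grad = A.T @ (A @ x - proj_ball(A @ x, c_Q, r_Q))
    x_new = proj_ball(x - tau * grad, c_C, r_C)
    if np.linalg.norm(x_new - x) < 1e-6:  # stopping criterion
        x = x_new
        break
    x = x_new
print(n, x)  # x is (approximately) feasible for both constraints
```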

Example 1. [12] Consider \(H = {\mathbb {R}}^{10},\) \(H_1 = {\mathbb {R}}^{20},\) \(H_2 = {\mathbb {R}}^{30}\) and \(H_3 = {\mathbb {R}}^{40}.\) Find a point \(x^*\in {\mathbb {R}}^{10}\) such that

$$\begin{aligned} x^*\in \Gamma :=C\cap A_1^{-1}(Q_1)\cap A_2^{-1}(Q_2)\cap A_3^{-1}(Q_3)\ne \emptyset , \end{aligned}$$

where \(C = \{x\in {\mathbb {R}}^{10}:\Vert x- c\Vert ^2\le r^2\},\) \(Q_i = \{x\in {\mathbb {R}}^{10(i+1)}:\Vert x- c_i\Vert ^2\le r_i^2\},\) and \(A_1: {\mathbb {R}}^{10}\rightarrow {\mathbb {R}}^{20},\) \(A_2: {\mathbb {R}}^{10}\rightarrow {\mathbb {R}}^{30},\) \(A_3: {\mathbb {R}}^{10}\rightarrow {\mathbb {R}}^{40}\) are bounded linear operators whose representing matrices have entries randomly generated in the closed interval \([-5, 5]\). In this case, for any \(x\in {\mathbb {R}}^{10}\), we have \(c(x) =\Vert x-c\Vert ^2-r^2\) and \(q_i(A_i x) =\Vert A_ix-c_i\Vert ^2-r_i^2\) for \(i = 1, 2, 3.\)

The half-spaces \(C^n\) and \(Q^n_i (i = 1, 2, 3),\) are defined by

$$\begin{aligned} C^n =\{x\in {\mathbb {R}}^{10}:\Vert x_n- c\Vert ^2-r^2\le 2\langle x_n-c,x_n- x\rangle \}, \end{aligned}$$

and

$$\begin{aligned} Q^n_i =\{z\in {\mathbb {R}}^{10(i+1)}:\Vert A_ix_n-c_i\Vert ^2-r_i^2 \le 2\langle A_ix_n-c_i,A_ix_n- z\rangle \}. \end{aligned}$$
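Since \(C^n\) can be rewritten as \(\{x:\langle \xi _n, x\rangle \le \langle \xi _n, x_n\rangle -c(x_n)\}\) with the subgradient \(\xi _n=2(x_n-c)\), the relaxed projections reduce to the one-line projection onto a half-space \(\{x:\langle a, x\rangle \le b\}\). A sketch with illustrative random data (all numbers below are our own choices):

```python
import numpy as np

# Projection onto the half-space {x : <a, x> <= b} -- this is all that
# P_{C^n} (and, analogously, P_{Q_i^n}) requires in Example 1.
def proj_halfspace(x, a, b):
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

rng = np.random.default_rng(0)
c, r = rng.uniform(-5, 5, 10), 2.0   # illustrative ball data for C
x_n = rng.uniform(-5, 5, 10)

xi = 2 * (x_n - c)                   # subgradient of c(x) = ||x-c||^2 - r^2 at x_n
b = xi @ x_n - (np.linalg.norm(x_n - c) ** 2 - r ** 2)
p = proj_halfspace(x_n, xi, b)
assert xi @ p <= b + 1e-9            # p lies in the half-space C^n
```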

The control parameters for each algorithm are chosen as follows.

  • Algorithm 1 and Algorithm 2 (Our new algorithm 1 and 2): \(\lambda _n=\tau _n=\frac{1}{10n+1}\) and \(\alpha _n=\frac{1}{n+1}.\)

  • Algorithm 3 and Algorithm 4 (Our new algorithm 3 and 4): \(\gamma = 0.15,\ \beta _n =\frac{1}{100n+1},\) and \(Fx = 2x\) for all \(x\in {\mathbb {R}}^{10}.\)

  • Algorithm (1.6) (in (4), [12]): \(\lambda _1=\frac{1}{6}, \lambda _2=\frac{1}{3}, \lambda _3=\frac{1}{2},\) and \(f(x) = 0.975x\) for all \(x\in {\mathbb {R}}^{10}.\)

  • Algorithm (79) (in [16]) is as follows:

    $$\begin{aligned} x_{n+1}=\alpha _nu+(1-\alpha _n)(x_n-\rho _1^n(I-P_{C^n})x_n-\tau _n\sum _{i=1}^N\vartheta _iA_i^*(I-P_{Q_i^n})A_ix_n), \end{aligned}$$

where \(\tau _n:=\frac{\rho _2^n\sum _{i=1}^N\vartheta _i\Vert (I-P_{Q_i^n})A_ix_n\Vert ^2}{{\bar{\tau }}_n^2}\) and \({\bar{\tau }}_n:=\max \{\Vert \sum _{i=1}^N\vartheta _iA_i^*(I-P_{Q_i^n})A_ix_n\Vert ,\beta \}.\)

We take \(\rho _1^n=\rho _2^n=\frac{1}{10n+1},\) \(\vartheta _1=\frac{1}{6},\ \vartheta _2=\frac{1}{3},\ \vartheta _3=\frac{1}{2},\) and \(\beta =0.05.\)
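For orientation, one evaluation of the step \(\tau _n\) of Alg. (79) can be sketched as follows (illustrative random data with \(N=3\); for brevity we use exact ball projections in place of the relaxed sets \(Q_i^n\), and all numerical values are our own):

```python
import numpy as np

# Sketch of the adaptive step of Alg. (79) for one iterate x
# (illustrative data only: three random operators and ball sets).
rng = np.random.default_rng(1)
N, dims = 3, (20, 30, 40)
A = [rng.uniform(-5, 5, (m, 10)) for m in dims]
centers = [rng.uniform(-1, 1, m) for m in dims]
radii = [3.0, 3.0, 3.0]
theta = [1 / 6, 1 / 3, 1 / 2]
rho2, beta = 0.5, 0.05

def proj_ball(y, c, r):
    d = np.linalg.norm(y - c)
    return y if d <= r else c + r * (y - c) / d

x = rng.uniform(-1, 1, 10)
res = [A[i] @ x - proj_ball(A[i] @ x, centers[i], radii[i]) for i in range(N)]
g = sum(theta[i] * A[i].T @ res[i] for i in range(N))
tau_bar = max(np.linalg.norm(g), beta)   # the safeguard: never divide by ~0
tau = rho2 * sum(theta[i] * np.linalg.norm(res[i]) ** 2 for i in range(N)) / tau_bar ** 2
print(tau)
```

The `max` with \(\beta \) keeps \({\bar{\tau }}_n\) bounded away from zero, so the quotient defining \(\tau _n\) is always well defined.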

Now we present the results of the iterations for four cases, where \(e_1=(1,1,...,1)\in {\mathbb {R}}^{10}\).

Case 1: Take \(u = 10e_1\) and \(x_1 = 5e_1;\)

Case 2: Take \(u = 3e_1\) and \(x_1 = 0.5e_1;\)

Case 3: Take \(u = 3e_1\) and \(x_1 = -0.5e_1;\)

Case 4: Take \(u=e_1\) and \(x_1 = -0.2e_1.\)

Table 1 Numerical results of all algorithms with different \(x_1\) and u

Example 2. Let \(H=H_1=L_2([0, 1])\), with inner product \(\langle y,z\rangle := \int _0^1 y(t)z(t)dt\) and norm \(\Vert y\Vert := (\int _0^1 |y(t)|^2dt)^{\frac{1}{2}}\) for all \(y, z\in L_2([0,1])\). Furthermore, we consider the sets

$$\begin{aligned} C:=\{x\in L_2([0,1])|\int _0^1x(t)dt\le 1\} \end{aligned}$$

and

$$\begin{aligned} Q:=\{y\in L_2([0,1])|\int _0^1 |y(t)-\sin t|^2dt\le 16\}. \end{aligned}$$

The operator \(A:L_2([0,1])\rightarrow L_2([0,1])\) is defined by \((Ax)(t)=\int _0^tx(s)ds\) for all \(t\in [0,1]\) and \(x\in L_2([0,1]).\) Then A is a bounded linear operator with \(\Vert A\Vert =\frac{2}{\pi }\), and its adjoint is \((A^*x)(t)=\int _t^1x(s)ds.\)
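Reading A as the Volterra integration operator \((Ax)(t)=\int _0^tx(s)ds\) (the reading consistent with the stated norm \(\Vert A\Vert =2/\pi \)), a simple discretization lets one check this norm numerically; the grid size below is an arbitrary choice of ours:

```python
import numpy as np

# Numerical check (illustration): discretizing the Volterra operator
# (Ax)(t) = int_0^t x(s) ds on a uniform grid gives the lower-triangular
# matrix h * tril(ones); its largest singular value approaches
# ||A|| = 2/pi ~= 0.6366 as the grid is refined.
n = 1000
h = 1.0 / n
M = h * np.tril(np.ones((n, n)))
norm_A = np.linalg.norm(M, 2)   # spectral norm = largest singular value
print(norm_A)                   # close to 2/pi
```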

The metric projections \(P_C\) and \(P_Q\) are defined by

$$\begin{aligned} P_C(x(t))=\left\{ \begin{array}{ll} x(t)+1-a, &{} a>1,\\ x(t), &{}a\le 1, \end{array} \right. \end{aligned}$$
(79)

and

$$\begin{aligned} P_Q(y(t))=\left\{ \begin{array}{ll} \sin (t)+\frac{4(y(t)-\sin (t))}{\sqrt{b}}, &{} b>16, \\ y(t), &{}b\le 16, \end{array} \right. \end{aligned}$$
(80)

where \(a=\int _0^1x(t)dt\) and \(b=\int _0^1|y(t)-\sin (t)|^2dt.\)
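For completeness, discretized versions of (79) and (80) can be sketched as follows; the uniform grid and trapezoidal quadrature are our own choices for illustration:

```python
import numpy as np

# Discretized sketches of the projections (79) and (80) on a uniform
# grid over [0, 1] (trapezoidal quadrature; illustration only).
t = np.linspace(0.0, 1.0, 1001)
h = t[1] - t[0]

def integral(f):
    # trapezoidal rule for int_0^1 f(t) dt on the grid
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

def P_C(x):
    a = integral(x)
    return x + (1.0 - a) if a > 1 else x          # shift by the constant 1 - a

def P_Q(y):
    b = integral((y - np.sin(t)) ** 2)
    return np.sin(t) + 4 * (y - np.sin(t)) / np.sqrt(b) if b > 16 else y

x1 = np.exp(3 * t)                 # the starting point x_1 of Case 1 below
z = P_C(x1)
assert integral(z) <= 1.0 + 1e-9   # P_C lands in C
```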

Now we present the results of the iterations for the following four cases.

Case 1: Take \(u = t\) and \(x_1 = e^{3t};\)

Case 2: Take \(u = t^3+2t\) and \(x_1 = \sin (3t);\)

Case 3: Take \(u = -t\) and \(x_1 = \frac{2^t}{2};\)

Case 4: Take \(u=\frac{e^{5t}}{5}\) and \(x_1 = \sin (3t).\)

Table 2 Numerical results of all algorithms with different \(x_1\) and u

Remark 2

The preliminary results presented in Tables 1 and 2 and Figs. 1 and 2 demonstrate the advantages and computational efficiency of the proposed methods over some known schemes.

  • The suggested Algorithms 1–4 require fewer iterations and less CPU time than Alg. (1.6) and Alg. (79) to reach the same stopping criterion.

  • The step size of Alg. (1.6) depends on the norms of the transfer operators, and thus its performance is significantly weaker than that of our Algorithms 1–4 with adaptive step sizes.

  • The advantage of our proposed algorithms compared to Alg. (79) lies in the application of the selection technique, which is shown to be advantageous when N is sufficiently large. We have only discussed the case where \(N=3.\)

5 Conclusions

In this paper, based on the relaxed CQ method and the Halpern-type method, we proposed two new adaptive iterative algorithms for finding solutions of the split feasibility problem with multiple output sets in infinite-dimensional Hilbert spaces. More importantly, according to different selection conditions, we gave two different adaptive step sizes that require no prior knowledge of the norms of the involved operators. Moreover, as a generalization, we constructed two new algorithms to solve variational inequalities over the solution set of the split feasibility problem with multiple output sets. Under suitable conditions, we established strong convergence theorems for the suggested algorithms. Finally, the advantages of the proposed algorithms were confirmed by two numerical examples. It would be interesting to extend the results obtained in this paper to Banach spaces or other more general settings.