1 Introduction

Let \(\mathcal {H}_1\) and \(\mathcal {H}_2\) be two real Hilbert spaces, and let \(A:\mathcal {H}_1\longrightarrow \mathcal {H}_2\) be a bounded linear operator. Let \(C_i\subseteq \mathcal {H}_1\), \(i=1,2,\ldots ,M\), and \(Q_j\subseteq \mathcal {H}_2\), \(j=1,2,\ldots , N\), be nonempty closed convex subsets, and let \(F_i: \mathcal {H}_1\longrightarrow \mathcal {H}_1\), \(i=1,2,\ldots ,M\), and \(G_j: \mathcal {H}_2\longrightarrow \mathcal {H}_2\), \(j=1,2,\ldots , N\), be mappings. The multiple-sets split variational inequality problem (MSSVIP), first introduced by Censor et al. [18], can be formulated as follows:

$$\begin{aligned} \text {Find a point } x^*\in \displaystyle \bigcap \limits _{i=1}^{M}C_i\text { with } \langle F_i(x^*), x-x^*\rangle \ge 0\;\;\forall x\in C_i,\; i=1,2,\ldots ,M, \end{aligned}$$

such that

$$\begin{aligned} \text {the point } Ax^*\in \displaystyle \bigcap \limits _{j=1}^{N}Q_j\text { solves } \langle G_j(Ax^*), y-Ax^*\rangle \ge 0\;\;\forall y\in Q_j,\; j=1,2,\ldots ,N. \end{aligned}$$

If the solution sets of variational inequality problems \(VIP(C_i, F_i)\) and \(VIP(Q_j, G_j)\) are denoted by \(\hbox {Sol}(C_i, F_i)\) and \(\hbox {Sol}(Q_j, G_j)\), respectively, then the MSSVIP becomes the problem of finding \(x^*\in \displaystyle \bigcap \nolimits _{i=1}^{M}\hbox {Sol}(C_i, F_i)\) such that \(Ax^*\in \displaystyle \bigcap \nolimits _{j=1}^{N}\hbox {Sol}(Q_j, G_j)\).

When \(M=N=1\), the MSSVIP reduces to the split variational inequality problem (SVIP for short),

$$\begin{aligned} \text {Find a point } x^*\in C\text { with } \langle F(x^*), x-x^*\rangle \ge 0\;\;\forall x\in C, \end{aligned}$$

such that

$$\begin{aligned} \text {the point } Ax^*\in Q\text { solves } \langle G(Ax^*), y-Ax^*\rangle \ge 0\;\;\forall y\in Q, \end{aligned}$$

where C and Q are two nonempty closed convex subsets of \(\mathcal {H}_1\) and \(\mathcal {H}_2\), respectively, and \(F: \mathcal {H}_1\longrightarrow \mathcal {H}_1\), \(G: \mathcal {H}_2\longrightarrow \mathcal {H}_2\) are two mappings.

The SVIP was introduced and investigated by Censor et al. [18] in the case when F is \(\alpha \)-inverse strongly monotone on \(\mathcal {H}_1\) and G is \(\beta \)-inverse strongly monotone on \(\mathcal {H}_2\). Their method starts from a given point \(x^0\in \mathcal {H}_1\) and, for all \(n\ge 0\), defines the next iterate as

$$\begin{aligned} x^{n+1}=P_C^{F, \lambda }(x^n+\gamma A^*(P_Q^{G, \lambda }-I)(Ax^n)), \end{aligned}$$

where \(\gamma \in \Big (0, \displaystyle \frac{1}{\Vert A\Vert ^2}\Big )\), \(0\le \lambda \le 2\min \{\alpha , \beta \}\), \(P_C^{F, \lambda }=P_C(I-\lambda F)\) and \(P_Q^{G, \lambda }=P_Q(I-\lambda G)\). They showed that the sequence \(\{x^n\}\) converges weakly to a solution of the split variational inequality problem, provided that the solution set of the SVIP is nonempty. If we consider only the problem VIP(C, F), then it is the classical variational inequality problem, which has been studied by many authors; see, for example, [1, 4, 5, 19, 20, 24,25,26, 28, 31, 34, 35, 38, 42, 43] and the references therein.
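For readers who prefer pseudocode, the iteration above can be sketched in Python as follows; this is a minimal sketch, not the authors' implementation. The names svip_censor, proj_C and proj_Q are our own placeholders: the metric projections onto C and Q and the mappings F, G are assumed to be supplied by the user as callables, and A is assumed to be given as a matrix.

```python
import numpy as np

def svip_censor(x0, A, proj_C, proj_Q, F, G, lam, gamma, n_iters=1000):
    """Sketch of the iteration x^{n+1} = P_C^{F,lam}(x^n + gamma * A^T (P_Q^{G,lam} - I)(A x^n)).

    proj_C, proj_Q : user-supplied metric projections onto C and Q,
    F, G           : the inverse strongly monotone mappings,
    lam, gamma     : step sizes chosen as in the text.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        Ax = A @ x
        # P_Q^{G,lam}(Ax) = P_Q(Ax - lam * G(Ax))
        PQ = proj_Q(Ax - lam * G(Ax))
        # x^n + gamma * A^*(P_Q^{G,lam} - I)(A x^n)
        u = x + gamma * (A.T @ (PQ - Ax))
        # x^{n+1} = P_C^{F,lam}(u) = P_C(u - lam * F(u))
        x = proj_C(u - lam * F(u))
    return x
```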

If \(F_i=0\) for all \(i=1,2,\ldots ,M\) and \(G_j=0\) for all \(j=1,2,\ldots ,N\), then the MSSVIP becomes the multiple-sets split feasibility problem (MSSFP):

$$\begin{aligned} \text {Find } x^*\in \displaystyle \bigcap \limits _{i=1}^{M}C_i\text { such that } Ax^*\in \displaystyle \bigcap \limits _{j=1}^{N}Q_j. \end{aligned}$$

The MSSFP, which was first introduced and studied by Censor et al. [17], is a general framework for various linear inverse problems which arise in many real-world applications such as intensity-modulated radiation therapy [14, 15] and medical image reconstruction [22]. A special case of the MSSFP, when \(M=N=1\), is the split feasibility problem (SFP), which is formulated as finding a point \(x^*\in C\) such that \(Ax^*\in Q\), where C and Q are two nonempty closed convex subsets of real Hilbert spaces \(\mathcal {H}_1\) and \(\mathcal {H}_2\), respectively. Recently, it has been found that the SFP has practical applications in various disciplines such as signal processing, image restoration and intensity-modulated radiation therapy [15, 16, 21], among many other fields. Many iterative projection methods have been developed to solve the MSSFP, the SFP and their generalizations; see, for example, [2, 3, 6,7,8,9,10,11,12, 17, 23, 27, 29, 30, 32, 33, 36, 39,40,41, 44, 45, 47, 48] and the references cited therein.

In 2012, Ceng et al. [13, Theorem 3.1] presented a method, called the relaxed extragradient method, for finding the minimum-norm solution of the SFP. This method is given as follows: for \(x^0\in \mathcal {H}_1\), the sequence \(\{x^n\}\) is generated by

$$\begin{aligned} {\left\{ \begin{array}{ll} {y^n=P_C(x^n-\lambda _n\nabla f_{\alpha _n}(x^n))},\\ {x^{n+1}=\beta _nx^n+\gamma _ny^n+\delta _nP_C(x^n-\lambda _n\nabla f_{\alpha _n}(y^n))\;\;\forall n\ge 0}, \end{array}\right. } \end{aligned}$$
(1)

where \(\nabla f_{\alpha _n}\) stands for \(A^*(I-P_Q)A+\alpha _n I\). They proved that, under some conditions imposed on \(\{\lambda _n\}\), \(\{\alpha _n\}\), \(\{\beta _n\}\), \(\{\gamma _n\}\) and \(\{\delta _n\}\), the sequence \(\{x^n\}\) generated by (1) converges strongly to the minimum-norm solution of the SFP if the solution set \(\varGamma \) of the SFP is nonempty.
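A similar Python sketch of scheme (1) is given below, again with placeholder names of our own; the parameter sequences are supplied as callables, and their admissible ranges are those imposed in [13, Theorem 3.1].

```python
import numpy as np

def relaxed_extragradient(x0, A, proj_C, proj_Q, lam, alpha, beta, gamma, delta, n_iters=1000):
    """Sketch of scheme (1). lam, alpha, beta, gamma, delta are callables n -> parameter value,
    chosen according to the conditions of [13, Theorem 3.1]."""
    x = np.asarray(x0, dtype=float)

    def grad_f(z, a):
        # nabla f_{alpha_n}(z) = A^*(I - P_Q)A z + alpha_n * z
        Az = A @ z
        return A.T @ (Az - proj_Q(Az)) + a * z

    for n in range(n_iters):
        a_n, l_n = alpha(n), lam(n)
        y = proj_C(x - l_n * grad_f(x, a_n))
        x = beta(n) * x + gamma(n) * y + delta(n) * proj_C(x - l_n * grad_f(y, a_n))
    return x
```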

Motivated and inspired by the above-mentioned works, in this paper we present an iterative method for finding the minimum-norm solution of the MSSVIP, that is, a point \(x^*\in \varOmega \) such that

$$\begin{aligned} \Vert x^*\Vert \le \Vert x\Vert \;\;\forall x\in \varOmega , \end{aligned}$$

where \(\varOmega =\Big \{x^*\in \displaystyle \bigcap \limits _{i=1}^{M}\hbox {Sol}(C_i, F_i): Ax^*\in \displaystyle \bigcap \limits _{j=1}^{N}\hbox {Sol}(Q_j, G_j)\Big \}\) is the solution set of the MSSVIP and \(A:\mathcal {H}_1\longrightarrow \mathcal {H}_2\) is a bounded linear operator between \(\mathcal {H}_1\) and \(\mathcal {H}_2\).

The paper is organized as follows. Section 2 deals with some definitions and lemmas that will be used in the third section, where we describe the algorithm and prove its strong convergence. We close Sect. 3 by considering some applications to the multiple-sets split feasibility problem and the split variational inequality problem. Finally, in the last section, we present a numerical example to illustrate the convergence of the proposed algorithm.

2 Preliminaries

Let C be a nonempty closed convex subset of a real Hilbert space \(\mathcal {H}\). It is well known that for every point \(x\in \mathcal {H}\), there exists a unique point \(P_C(x)\in C\) such that

$$\begin{aligned} \Vert x-P_C(x)\Vert =\min \{\Vert x-y\Vert : y\in C\}. \end{aligned}$$
(2)

The mapping \(P_C:\mathcal {H}\longrightarrow C\) defined by (2) is called the metric projection of \(\mathcal {H}\) onto C. It is known that \(P_C\) is nonexpansive. Moreover, for all \(x\in \mathcal {H}\) and \(y\in C\), \(y=P_C(x)\) if and only if

$$\begin{aligned} \langle x-y, z-y\rangle \le 0\;\;\forall z\in C. \end{aligned}$$

We denote the strong (weak) convergence of a sequence \(\{x^n\}\) to x in \(\mathcal {H}\) by \(x^n\longrightarrow x\) (\(x^n\rightharpoonup x\)), respectively. In what follows, we recall some well-known definitions and lemmas which will be used in this paper.

Definition 1

Let \(\mathcal {H}_1\) and \(\mathcal {H}_2\) be two Hilbert spaces and let \(A: \mathcal {H}_1\longrightarrow \mathcal {H}_2\) be a bounded linear operator. The operator \(A^*: \mathcal {H}_2\longrightarrow \mathcal {H}_1\) with the property \(\langle A(x), y\rangle =\langle x, A^*(y)\rangle \) for all \(x\in \mathcal {H}_1\) and \(y\in \mathcal {H}_2\) is called the adjoint operator of A.

The adjoint operator of a bounded linear operator A between Hilbert spaces \(\mathcal {H}_1\), \(\mathcal {H}_2\) always exists and is uniquely determined. Furthermore, \(A^*\) is a bounded linear operator and \(\Vert A^*\Vert =\Vert A\Vert \).
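In the finite-dimensional setting of Sect. 4, where A is represented by a matrix, the adjoint is simply the transpose. A short numerical check of the defining identity, as a sketch using the operator A of Example 1:

```python
import numpy as np

# A from Example 1 in Sect. 4: A(x) = (x1 + x3 + x4, x2 + x3 - x4)^T
A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, -1.0]])

rng = np.random.default_rng(1)
x, y = rng.normal(size=4), rng.normal(size=2)

# Defining identity <A x, y> = <x, A^* y> with A^* = A^T
assert np.isclose(np.dot(A @ x, y), np.dot(x, A.T @ y))

# ||A^*|| = ||A|| (spectral norm); here both equal sqrt(3)
print(np.linalg.norm(A, 2), np.linalg.norm(A.T, 2))   # ~1.7320508 each
```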

Definition 2

A mapping \(F:\mathcal {H}\longrightarrow \mathcal {H}\) is said to be

(i) L-Lipschitz continuous on \(\mathcal {H}\) if

$$\begin{aligned} \Vert F(x)-F(y)\Vert \le L\Vert x-y\Vert \;\;\forall x, y\in \mathcal {H}; \end{aligned}$$

(ii) monotone on \(\mathcal {H}\) if

$$\begin{aligned} \langle F(x)-F(y), x-y\rangle \ge 0\;\;\forall x, y\in \mathcal {H}. \end{aligned}$$

Lemma 1

Let C be a nonempty closed convex subset of a real Hilbert space \(\mathcal {H}\). Let \(G: \mathcal {H}\longrightarrow \mathcal {H}\) be monotone and L-Lipschitz continuous on \(\mathcal {H}\) such that \(\hbox {Sol}(C, G)\ne \emptyset \). Let \(x\in \mathcal {H}\), \(0<\lambda <\dfrac{1}{L}\) and define

$$\begin{aligned} \begin{aligned} y&=P_C(x-\lambda G(x)),\\ z&=x-y-\lambda (G(x)-G(y)),\\ t&={\left\{ \begin{array}{ll} x-\dfrac{\langle x-y, z\rangle }{\Vert z\Vert ^2}z &{} \text{ if } y\ne x,\\ x &{} \text{ if } y=x. \end{array}\right. } \end{aligned} \end{aligned}$$

Then,

  1. (i)
    $$\begin{aligned} \langle x-y, z\rangle \ge (1-\lambda L)\Vert x-y\Vert ^2. \end{aligned}$$
    (3)
  2. (ii)
    $$\begin{aligned} \Vert x-y\Vert \le \dfrac{\sqrt{1+\lambda ^2L^2}}{1-\lambda L}\Vert x-t\Vert . \end{aligned}$$
    (4)
  3. (iii)

    For all \(x^*\in \hbox {Sol}(C, G)\)

    $$\begin{aligned} \Vert t-x^*\Vert ^2\le \Vert x-x^*\Vert ^2-\Vert x-t\Vert ^2. \end{aligned}$$
    (5)

Proof

(i) From the definition of z and the Lipschitz continuity of G, we have

$$\begin{aligned} \langle x-y, z\rangle&=\langle x-y, x-y-\lambda (G(x)-G(y))\rangle \\&=\Vert x-y\Vert ^2-\lambda \langle x-y, G(x)-G(y)\rangle \\&\ge \Vert x-y\Vert ^2-\lambda \Vert x-y\Vert \Vert G(x)-G(y)\Vert \\&\ge \Vert x-y\Vert ^2-\lambda L\Vert x-y\Vert ^2\\&=(1-\lambda L)\Vert x-y\Vert ^2. \end{aligned}$$

Hence, the inequality (3) holds. Since \(1-\lambda L>0\), it follows from (3) that \(z\ne 0\) whenever \(y\ne x\), so t is well defined.

(ii) If \(y=x\), then \(t=x\) and (4) holds. Suppose that \(y\ne x\). From the monotonicity and the L-Lipschitz continuity of G, we have

$$\begin{aligned} \begin{aligned} \Vert z\Vert ^2&=\Vert x-y-\lambda (G(x)-G(y))\Vert ^2\\&=\Vert x-y\Vert ^2+\lambda ^2\Vert G(x)-G(y)\Vert ^2-2\lambda \langle x-y, G(x)-G(y)\rangle \\&\le \Vert x-y\Vert ^2+\lambda ^2\Vert G(x)-G(y)\Vert ^2\\&\le \Vert x-y\Vert ^2+\lambda ^2L^2\Vert x-y\Vert ^2\\&=(1+\lambda ^2L^2)\Vert x-y\Vert ^2. \end{aligned} \end{aligned}$$

This implies that

$$\begin{aligned} \Vert x-y\Vert \ge \dfrac{\Vert z\Vert }{\sqrt{1+\lambda ^2L^2}}. \end{aligned}$$

So, by the definition of t and (3), we have

$$\begin{aligned} \Vert x-t\Vert =\bigg \Vert \dfrac{\langle x-y, z\rangle }{\Vert z\Vert ^2}z\bigg \Vert =\dfrac{\langle x-y, z\rangle }{\Vert z\Vert }\ge \dfrac{(1-\lambda L)\Vert x-y\Vert ^2}{\Vert z\Vert }\ge \dfrac{(1-\lambda L)\Vert x-y\Vert }{\sqrt{1+\lambda ^2L^2}} \end{aligned}$$

and hence

$$\begin{aligned} \Vert x-y\Vert \le \dfrac{\sqrt{1+\lambda ^2L^2}}{1-\lambda L}\Vert x-t\Vert . \end{aligned}$$

(iii) If \(y=x\), then \(t=x\) and the inequality (5) holds trivially. Suppose that \(y\ne x\). Since G is monotone and \(x^*\in \hbox {Sol}(C, G)\), we have

$$\begin{aligned} \langle G(y), y-x^*\rangle \ge \langle G(x^*), y-x^*\rangle \ge 0. \end{aligned}$$
(6)

From \(y=P_C(x-\lambda G(x))\) and \(x^*\in C\), we have

$$\begin{aligned} \langle x-\lambda G(x)-y, x^*-y\rangle \le 0. \end{aligned}$$

Combining the above inequality with \(x-\lambda G(x)-y=z-\lambda G(y)\), we get

$$\begin{aligned} \langle z-\lambda G(y), x^*-y\rangle \le 0. \end{aligned}$$
(7)

It follows from (6) and (7) that

$$\begin{aligned} \langle z, x^*-y\rangle \le \lambda \langle G(y), x^*-y\rangle \le 0. \end{aligned}$$
(8)

By the definition of t, (3) and (8), we have

$$\begin{aligned} \Vert t-x^*\Vert ^2&=\Big \Vert (x-x^*)-\dfrac{\langle x-y, z\rangle }{\Vert z\Vert ^2}z\Big \Vert ^2\\&=\Vert x-x^*\Vert ^2-\dfrac{2\langle x-y, z\rangle }{\Vert z\Vert ^2}\langle x-x^*, z\rangle + \dfrac{(\langle x-y, z\rangle )^2}{\Vert z\Vert ^2}\\&=\Vert x-x^*\Vert ^2-\dfrac{(\langle x-y, z\rangle )^2}{\Vert z\Vert ^2}+\dfrac{2\langle x-y, z\rangle }{\Vert z\Vert ^2}\langle x^*-y, z\rangle \\&\le \Vert x-x^*\Vert ^2-\dfrac{(\langle x-y, z\rangle )^2}{\Vert z\Vert ^2}\\&=\Vert x-x^*\Vert ^2-\Vert x-t\Vert ^2. \end{aligned}$$

The proof is complete. \(\square \)
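The map \(x\mapsto t\) of Lemma 1 is the basic building block of the algorithm in the next section. A minimal Python sketch of this map (the name lemma1_step is ours; the projection \(P_C\) and the mapping G are assumed to be supplied by the user as callables) is the following.

```python
import numpy as np

def lemma1_step(x, proj_C, G, lam):
    """The map x -> t of Lemma 1:
       y = P_C(x - lam*G(x)),  z = x - y - lam*(G(x) - G(y)),
       t = x - (<x - y, z> / ||z||^2) z  if y != x,  and  t = x  otherwise."""
    x = np.asarray(x, dtype=float)
    y = proj_C(x - lam * G(x))
    if np.allclose(y, x):     # exact equality y = x is replaced by a numerical tolerance
        return x
    z = x - y - lam * (G(x) - G(y))
    return x - (np.dot(x - y, z) / np.dot(z, z)) * z
```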

Lemma 2

([29, Lemma 3]) Let C be a nonempty closed convex subset of a real Hilbert space \(\mathcal {H}\). Let \(F: \mathcal {H}\longrightarrow \mathcal {H}\) be monotone and L-Lipschitz continuous on \(\mathcal {H}\). Assume that \(\lambda _n\ge a>0\) for all n, \(\{x^n\}\) is a sequence in \(\mathcal {H}\) satisfying \(x^n\rightharpoonup \overline{x}\) and \(\lim \limits _{n\longrightarrow \infty }\Vert x^n-y^n\Vert =0\), where \(y^n=P_C(x^n-\lambda _n F(x^n))\) for all n. Then \(\overline{x}\in \hbox {Sol}(C, F)\).

Lemma 3

([37, Remark 4.4]) Let \(\{a_n\}\) be a sequence of nonnegative real numbers. Suppose that for any integer m, there exists an integer p such that \(p\ge m\) and \(a_p\le a_{p+1}\). Let \(n_0\) be an integer such that \(a_{n_0}\le a_{n_0+1}\) and define, for every integer \(n\ge n_0\),

$$\begin{aligned} \tau (n)=\max \{k\in \mathbb {N}: n_0\le k\le n, a_k\le a_{k+1}\}. \end{aligned}$$

Then, \(\{\tau (n)\}_{n\ge n_0}\) is a nondecreasing sequence satisfying \(\lim \limits _{n\longrightarrow \infty }\tau (n)=\infty \) and the following inequalities hold:

$$\begin{aligned} a_{\tau (n)}\le a_{\tau (n)+1}, \ a_n\le a_{\tau (n)+1} \ \forall n\ge n_0. \end{aligned}$$

Lemma 4

( [46, Lemma 2.5]) Assume \(\{a_n\}\) is a sequence of nonnegative real numbers satisfying the condition

$$\begin{aligned} a_{n+1}\le (1-\alpha _n)a_n+\alpha _n\xi _n,\;\;\forall n\ge 0, \end{aligned}$$

where \(\{\alpha _n\}\) is a sequence in (0, 1) and \(\{\xi _n\}\) is a sequence in \(\mathbb {R}\) such that

  1. (i)

    \(\displaystyle \sum \limits _{n=0}^{\infty }\alpha _n=\infty \);

  2. (ii)

    \(\limsup \limits _{n\longrightarrow \infty }\xi _n\le 0\).

Then, \(\lim \limits _{n\longrightarrow \infty }a_n=0\).

3 The Algorithm and Convergence Analysis

In this section, we propose a strong convergence algorithm for finding the minimum-norm solution of the multiple-sets split variational inequality problem. We impose the following assumptions on the mappings \(F_i: \mathcal {H}_1\longrightarrow \mathcal {H}_1\) and \(G_j: \mathcal {H}_2\longrightarrow \mathcal {H}_2\), \(i=1,2,\ldots ,M\) and \(j=1,2,\ldots , N\).

\((A_F)\): \(F_i\) is monotone and \(L_i\)-Lipschitz continuous on \(\mathcal {H}_1\) for each \(i=1,2,\ldots ,M\).

\((A_G)\): \(G_j\) is monotone and \(\kappa _j\)-Lipschitz continuous on \(\mathcal {H}_2\) for each \(j=1,2,\ldots ,N\).

The algorithm can be expressed as follows.

Algorithm 1.

Step 0. Choose \(\{\delta _n\}\subset [\underline{\delta }, \overline{\delta }]\subset \Big (0, \dfrac{1}{\Vert A\Vert ^2+1}\Big )\), \(\{\lambda _n\}\subset [\underline{\lambda }, \overline{\lambda }]\subset \Big (0, \min \limits _{1\le i\le M}\Big \{\displaystyle \frac{1}{L_i}\Big \}\Big )\), \(\{\mu _n\}\subset [\underline{\mu }, \overline{\mu }]\subset \Big (0, \min \limits _{1\le j\le N}\Big \{\displaystyle \frac{1}{\kappa _j}\Big \}\Big )\), \(\{\alpha _n\}\subset (0, 1)\) such that \(\lim \limits _{n\longrightarrow \infty }\alpha _n=0\), \(\displaystyle \sum \limits _{n=0}^{\infty }\alpha _n=\infty \).

Step 1. Let \(x^0\in \mathcal {H}_1\). Set \(n:=0\).

Step 2. Compute, for all \(j=1,2,\ldots ,N\)

$$\begin{aligned} \begin{aligned} b^n&=A(x^n),\\ u_j^n&=P_{Q_j}(b^n-\mu _n G_j(b^n)),\\ v_j^n&=b^n-u_j^n-\mu _n(G_j(b^n)-G_j(u_j^n)),\\ w_j^n&= {\left\{ \begin{array}{ll} b^n-\dfrac{\langle b^n-u_j^n, v_j^n\rangle }{\Vert v_j^n\Vert ^2}v_j^n &{} \text{ if } u_j^n\ne b^n,\\ b^n &{} \text{ if } u_j^n=b^n. \end{array}\right. }\\ \end{aligned} \end{aligned}$$

Step 3. Find among \(w_j^n\), \(j=1,2,\ldots ,N\), the farthest element from \(b^n\), i.e.,

$$\begin{aligned} j_n=\hbox {argmax}\{\Vert w_j^n-b^n\Vert : j=1,2,\ldots ,N\},\;r^n:=w_{j_n}^n. \end{aligned}$$

Step 4. Compute, for all \(i=1,2,\ldots ,M\)

$$\begin{aligned} \begin{aligned} a^n&=x^n+\delta _n A^*(r^n-b^n),\\ y_i^n&=P_{C_i}(a^n-\lambda _n F_i(a^n)),\\ z_i^n&=a^n-y_i^n-\lambda _n(F_i(a^n)-F_i(y_i^n)),\\ s_i^n&= {\left\{ \begin{array}{ll} a^n-\dfrac{\langle a^n-y_i^n, z_i^n\rangle }{\Vert z_i^n\Vert ^2}z_i^n &{} \text{ if } y_i^n\ne a^n,\\ a^n &{} \text{ if } y_i^n=a^n. \end{array}\right. } \end{aligned} \end{aligned}$$

Step 5. Find among \(s_i^n\), \(i=1,2,\ldots ,M\), the farthest element from \(a^n\), i.e.,

$$\begin{aligned} i_n=\hbox {argmax}\{\Vert s_i^n-a^n\Vert : i=1,2,\ldots ,M\},\;t^n:=s_{i_n}^n. \end{aligned}$$

Step 6. Compute

$$\begin{aligned} x^{n+1}=(1-\alpha _n)t^n. \end{aligned}$$

Step 7. Set \(n:=n+1\), and go to Step 2.
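A compact Python sketch of Algorithm 1 is given below. It is only a sketch, not the authors' implementation: it reuses the function lemma1_step from the sketch following Lemma 1, the lists of projections and mappings as well as the parameter sequences (given as callables) are assumed to be supplied by the user, and a fixed iteration count replaces a stopping rule.

```python
import numpy as np

def algorithm1(x0, A, projs_C, projs_Q, Fs, Gs, delta, lam, mu, alpha, n_iters=1000):
    """Sketch of Algorithm 1 (fixed iteration count instead of a stopping rule).

    projs_C, Fs : lists with the projections P_{C_i} and the mappings F_i, i = 1,...,M,
    projs_Q, Gs : lists with the projections P_{Q_j} and the mappings G_j, j = 1,...,N,
    delta, lam, mu, alpha : callables n -> parameter value, chosen as in Step 0.
    """
    x = np.asarray(x0, dtype=float)
    for n in range(n_iters):
        b = A @ x                                                  # Step 2
        ws = [lemma1_step(b, PQ, G, mu(n)) for PQ, G in zip(projs_Q, Gs)]
        r = max(ws, key=lambda w: np.linalg.norm(w - b))           # Step 3
        a = x + delta(n) * (A.T @ (r - b))                         # Step 4
        ss = [lemma1_step(a, PC, F, lam(n)) for PC, F in zip(projs_C, Fs)]
        t = max(ss, key=lambda s: np.linalg.norm(s - a))           # Step 5
        x = (1.0 - alpha(n)) * t                                   # Step 6
    return x
```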

The following theorem establishes the validity and convergence of the algorithm.

Theorem 1

Suppose that the assumptions \((A_F),(A_G)\) hold. Then, the sequence \(\{x^n\}\) generated by Algorithm 1 converges strongly to the minimum-norm solution of the multiple-sets split variational inequality problem.

Proof

Let \(x^*=P_\varOmega (0)\). We will prove that \(\{x^n\}\) converges in norm to \(x^*\). The proof is divided into several steps.

Step 1. For all \(n\ge 0\), we have

$$\begin{aligned} \Vert x^{n+1}-x^*\Vert&\le (1-\alpha _n)\Vert t^n-x^*\Vert +\alpha _n\Vert x^*\Vert , \end{aligned}$$
(9)
$$\begin{aligned} \Vert x^{n+1}-x^*\Vert ^2&\le (1-\alpha _n)\Vert t^n-x^*\Vert ^2+2\alpha _n\langle x^*, x^*-t^n+\alpha _n t^n\rangle . \end{aligned}$$
(10)

In fact, it holds that

$$\begin{aligned} \Vert x^{n+1}-x^*\Vert&=\Vert (1-\alpha _n)t^n-x^*\Vert \\&=\Vert (1-\alpha _n)(t^n-x^*)-\alpha _nx^*\Vert \\&\le (1-\alpha _n)\Vert t^n-x^*\Vert +\alpha _n\Vert x^*\Vert , \end{aligned}$$

and

$$\begin{aligned} \Vert x^{n+1}-x^*\Vert ^2&=\Vert (1-\alpha _n)(t^n-x^*)-\alpha _nx^*\Vert ^2\\&=(1-\alpha _n)^2\Vert t^n-x^*\Vert ^2-2\alpha _n(1-\alpha _n)\langle x^*, t^n-x^*\rangle +\alpha _n^2\Vert x^*\Vert ^2\\&\le (1-\alpha _n)\Vert t^n-x^*\Vert ^2-2\alpha _n(1-\alpha _n)\langle x^*, t^n-x^*\rangle +2\alpha _n^2\Vert x^*\Vert ^2\\&=(1-\alpha _n)\Vert t^n-x^*\Vert ^2+2\alpha _n\langle x^*, x^*-t^n+\alpha _n t^n\rangle . \end{aligned}$$

Step 2. For all \(n\ge 0\), we show that

$$\begin{aligned} \Vert a^n-x^*\Vert ^2\le \Vert x^n-x^*\Vert ^2-\delta _n(1-\delta _n\Vert A\Vert ^2)\Vert r^n-b^n\Vert ^2. \end{aligned}$$
(11)

Since \(x^*\in \varOmega \), we have \(x^*\in \hbox {Sol}(C_i, F_i)\), \(Ax^*\in \hbox {Sol}(Q_j, G_j)\) for all \(i=1,2,\ldots ,M\), \(j=1,2,\ldots ,N\). From Lemma 1, we get, for all \(n\ge 0\)

$$\begin{aligned} \Vert r^n-Ax^*\Vert ^2&=\Vert w_{j_n}^n-Ax^*\Vert ^2\nonumber \\&\le \Vert b^n-Ax^*\Vert ^2-\Vert b^n-w_{j_n}^n\Vert ^2\nonumber \\&=\Vert b^n-Ax^*\Vert ^2-\Vert b^n-r^n\Vert ^2. \end{aligned}$$
(12)

Hence,

$$\begin{aligned} \Vert r^n-Ax^*\Vert \le \Vert b^n-Ax^*\Vert . \end{aligned}$$
(13)

From (13), we obtain

$$\begin{aligned} \Vert a^n-x^*\Vert ^2&=\Vert (x^n-x^*)+\delta _n A^*(r^n-b^n)\Vert ^2\\&=\Vert x^n-x^*\Vert ^2+\Vert \delta _n A^*(r^n-b^n)\Vert ^2+2\delta _n\langle x^n-x^*, A^*(r^n-b^n)\rangle \\&\le \Vert x^n-x^*\Vert ^2+\delta _n^2\Vert A^*\Vert ^2\Vert r^n-b^n\Vert ^2+2\delta _n\langle A(x^n-x^*), r^n-b^n\rangle \\&=\Vert x^n-x^*\Vert ^2+\delta _n^2\Vert A\Vert ^2\Vert r^n-b^n\Vert ^2+2\delta _n\langle b^n-Ax^*, r^n-b^n\rangle \\&=\Vert x^n-x^*\Vert ^2+\delta _n^2\Vert A\Vert ^2\Vert r^n-b^n\Vert ^2+2\delta _n[\langle r^n-Ax^*, r^n-b^n\rangle \\&\quad -\Vert r^n-b^n\Vert ^2]\\&=\Vert x^n-x^*\Vert ^2+\delta _n^2\Vert A\Vert ^2\Vert r^n-b^n\Vert ^2+\delta _n\big [\big (\Vert r^n-Ax^*\Vert ^2-\Vert b^n-Ax^*\Vert ^2\big )\\&\quad -\Vert r^n-b^n\Vert ^2\big ]\\&\le \Vert x^n-x^*\Vert ^2+\delta _n^2\Vert A\Vert ^2\Vert r^n-b^n\Vert ^2-\delta _n\Vert r^n-b^n\Vert ^2\\&=\Vert x^n-x^*\Vert ^2-\delta _n(1-\delta _n\Vert A\Vert ^2)\Vert r^n-b^n\Vert ^2. \end{aligned}$$

Step 3. We show that the sequences \(\{x^n\}\), \(\{a^n\}\) and \(\{t^n\}\) are bounded. From Lemma 1, we have, for all \(n\ge 0\)

$$\begin{aligned} \Vert t^n-x^*\Vert ^2&=\Vert s_{i_n}^n-x^*\Vert ^2\nonumber \\&\le \Vert a^n-x^*\Vert ^2-\Vert a^n-s_{i_n}^n\Vert ^2\nonumber \\&=\Vert a^n-x^*\Vert ^2-\Vert a^n-t^n\Vert ^2. \end{aligned}$$
(14)

From (11), (14) and \(\{\delta _n\}\subset {[}\underline{\delta }, \overline{\delta }]\subset \Big (0, \displaystyle \frac{1}{\Vert A\Vert ^2+1}\Big )\), we get

$$\begin{aligned} \Vert t^n-x^*\Vert \le \Vert a^n-x^*\Vert \le \Vert x^n-x^*\Vert \;\;\forall n\ge 0. \end{aligned}$$
(15)

Using (9) and (15), we have

$$\begin{aligned} \Vert x^{n+1}-x^*\Vert \le (1-\alpha _n)\Vert x^n-x^*\Vert +\alpha _n\Vert x^*\Vert \;\;\forall n\ge 0. \end{aligned}$$

This implies that

$$\begin{aligned} \Vert x^{n+1}-x^*\Vert \le \max \{\Vert x^n-x^*\Vert ,\Vert x^*\Vert \}\;\;\forall n\ge 0. \end{aligned}$$

So, by induction, we obtain, for every \(n\ge 0\) that

$$\begin{aligned} \Vert x^n-x^*\Vert \le \max \{\Vert x^0-x^*\Vert ,\Vert x^*\Vert \}. \end{aligned}$$

Hence, the sequence \(\{x^n\}\) is bounded and so are the sequences \(\{a^n\}\) and \(\{t^n\}\), thanks to (15).

Step 4. We prove that \(\{x^n\}\) converges strongly to \(x^*=P_\varOmega (0)\). Let us consider two cases.

Case 1. There exists \(n_0\) such that the sequence \(\{\Vert x^n-x^*\Vert \}\) is decreasing for \(n\ge n_0\). In this case, the limit of \(\{\Vert x^n-x^*\Vert \}\) exists. So, it follows from (10) and (15), for all \(n\ge 0\), that

$$\begin{aligned} (\Vert x^{n+1}-x^*\Vert ^2-\Vert x^n-x^*\Vert ^2)-2\alpha _n\langle x^*, x^*-t^n+\alpha _n t^n\rangle&\le \Vert t^n-x^*\Vert ^2-\Vert x^n-x^*\Vert ^2\\&\le \Vert a^n-x^*\Vert ^2-\Vert x^n-x^*\Vert ^2\\&\le 0. \end{aligned}$$

Since the limit of \(\{\Vert x^n-x^*\Vert \}\) exists, \(\lim \limits _{n\longrightarrow \infty }\alpha _n=0\) and \(\{t^n\}\) is bounded, it follows from the above inequality that

$$\begin{aligned}&\lim \limits _{n\longrightarrow \infty }(\Vert t^n-x^*\Vert ^2-\Vert x^n-x^*\Vert ^2)=0, \end{aligned}$$
(16)
$$\begin{aligned}&\lim \limits _{n\longrightarrow \infty }(\Vert a^n-x^*\Vert ^2-\Vert x^n-x^*\Vert ^2)=0. \end{aligned}$$
(17)

Combining (16) and (17), we have

$$\begin{aligned} \lim \limits _{n\longrightarrow \infty }(\Vert a^n-x^*\Vert ^2-\Vert t^n-x^*\Vert ^2)=0. \end{aligned}$$
(18)

From (14), we find

$$\begin{aligned} \lim \limits _{n\longrightarrow \infty }\Vert a^n-t^n\Vert =0. \end{aligned}$$
(19)

From (11), (17) and \(\{\delta _n\}\subset [\underline{\delta }, \overline{\delta }]\subset \Big (0, \displaystyle \frac{1}{\Vert A\Vert ^2+1}\Big )\), we get

$$\begin{aligned} \lim \nolimits _{n\longrightarrow \infty }\Vert r^n-b^n\Vert =0. \end{aligned}$$
(20)

On the other hand,

$$\begin{aligned} \Vert a^n-x^n\Vert&=\Vert \delta _n A^*(r^n-b^n)\Vert \le \delta _n\Vert A^*\Vert \Vert r^n-b^n\Vert \le \overline{\delta }\Vert A\Vert \Vert r^n-b^n\Vert , \end{aligned}$$

which together with (20) implies

$$\begin{aligned} \lim \nolimits _{n\longrightarrow \infty }\Vert a^n-x^n\Vert =0. \end{aligned}$$
(21)

From Lemma 1, we have, for all \(i=1,2,\ldots ,M\)

$$\begin{aligned} \Vert a^n-y_i^n\Vert&\le \dfrac{\sqrt{1+\lambda _n^2L_i^2}}{1-\lambda _n L_i}\Vert a^n-s_i^n\Vert \nonumber \\&\le \dfrac{\sqrt{1+\overline{\lambda }^2L_i^2}}{1-\overline{\lambda } L_i}\Vert a^n-s_{i_n}^n\Vert \nonumber \\&=\dfrac{\sqrt{1+\overline{\lambda }^2L_i^2}}{1-\overline{\lambda } L_i}\Vert a^n-t^n\Vert . \end{aligned}$$
(22)

From (19) and (22), we get

$$\begin{aligned} \lim \limits _{n\longrightarrow \infty }\Vert a^n-y_i^n\Vert =0\;\;\forall i=1,2,\ldots ,M. \end{aligned}$$
(23)

From Lemma 1, we have

$$\begin{aligned} \Vert b^n-u_j^n\Vert&\le \dfrac{\sqrt{1+\mu _n^2\kappa _j^2}}{1-\mu _n \kappa _j}\Vert b^n-w_j^n\Vert \nonumber \\&\le \dfrac{\sqrt{1+\overline{\mu }^2\kappa _j^2}}{1-\overline{\mu }\kappa _j}\Vert b^n-w_{j_n}^n\Vert \nonumber \\&=\dfrac{\sqrt{1+\overline{\mu }^2\kappa _j^2}}{1-\overline{\mu }\kappa _j}\Vert b^n-r^n\Vert . \end{aligned}$$
(24)

From (20) and (24), we have

$$\begin{aligned} \lim \limits _{n\longrightarrow \infty }\Vert b^n-u_j^n\Vert =0\;\;\forall j=1,2,\ldots ,N. \end{aligned}$$
(25)

Note that

$$\begin{aligned} \Vert x^n-t^n\Vert \le \Vert x^n-a^n\Vert +\Vert a^n-t^n\Vert . \end{aligned}$$

Using the above inequality, (19) and (21), we have

$$\begin{aligned} \lim \limits _{n\longrightarrow \infty }\Vert x^n-t^n\Vert =0. \end{aligned}$$
(26)

Choose a subsequence \(\{t^{n_k}\}\) of \(\{t^n\}\) such that

$$\begin{aligned} \limsup \limits _{n\longrightarrow \infty }\langle x^*, x^*-t^n\rangle =\lim \limits _{k\longrightarrow \infty }\langle x^*, x^*-t^{n_k}\rangle . \end{aligned}$$

Since \(\{t^{n_k}\}\) is bounded, we may assume without loss of generality that \(t^{n_k}\rightharpoonup \overline{t}\). Therefore,

$$\begin{aligned} \limsup \limits _{n\longrightarrow \infty }\langle x^*, x^*-t^n\rangle =\langle x^*, x^*-\overline{t}\rangle . \end{aligned}$$

From \(t^{n_k}\rightharpoonup \overline{t}\) and (19), we get \(a^{n_k}\rightharpoonup \overline{t}\). From (23), we have \(\lim \limits _{k\longrightarrow \infty }\Vert a^{n_k}-y_i^{n_k}\Vert =0\) for all \(i=1,2,\ldots ,M\). Note that \(y_i^{n_k}=P_{C_i}(a^{n_k}-\lambda _{n_k} F_i(a^{n_k}))\). From Lemma 2, we get \(\overline{t}\in \hbox {Sol}(C_i, F_i)\) for all \(i=1,2,\ldots ,M\), that is, \(\overline{t}\in \displaystyle \bigcap \nolimits _{i=1}^{M}\hbox {Sol}(C_i, F_i)\).

From \(t^{n_k}\rightharpoonup \overline{t}\) and (26), we get \(x^{n_k}\rightharpoonup \overline{t}\). Thus, \(b^{n_k}=A(x^{n_k})\rightharpoonup A(\overline{t})\). From (25), we have \(\lim \limits _{k\longrightarrow \infty }\Vert b^{n_k}-u_j^{n_k}\Vert =0\) for all \(j=1,2,\ldots ,N\). Note that \(u_j^{n_k}=P_{Q_j}(b^{n_k}-\mu _{n_k} G_j(b^{n_k}))\). From Lemma 2, we obtain \(A(\overline{t})\in \hbox {Sol}(Q_j, G_j)\) for all \(j=1,2,\ldots ,N\), that is, \(A(\overline{t})\in \displaystyle \bigcap \nolimits _{j=1}^{N}\hbox {Sol}(Q_j, G_j)\), which together with \(\overline{t}\in \displaystyle \bigcap \nolimits _{i=1}^{M}\hbox {Sol}(C_i, F_i)\) implies \(\overline{t}\in \varOmega \).

From (10) and (15), we get

$$\begin{aligned} \Vert x^{n+1}-x^*\Vert ^2\le (1-\alpha _n)\Vert x^n-x^*\Vert ^2+\alpha _n\xi _n, \end{aligned}$$

where

$$\begin{aligned} \xi _n=2\langle x^*, x^*-t^n+\alpha _n t^n\rangle . \end{aligned}$$

Since \(x^*=P_\varOmega (0)\) and \(\overline{t}\in \varOmega \), it follows that

$$\begin{aligned} \langle x^*, x^*-\overline{t}\rangle \le 0. \end{aligned}$$

Therefore, since \(\{t^n\}\) is bounded and \(\lim \limits _{n\longrightarrow \infty }\alpha _n=0\), we get

$$\begin{aligned} \limsup \limits _{n\longrightarrow \infty }\xi _n&=\limsup \limits _{n\longrightarrow \infty }\big [2\langle x^*, x^*-t^n\rangle +2\alpha _n\langle x^*, t^n\rangle \big ]\\&=\limsup \limits _{n\longrightarrow \infty }2\langle x^*, x^*-t^n\rangle \\&=2\langle x^*, x^*-\overline{t}\rangle \\&\le 0. \end{aligned}$$

By Lemma 4, we have \(\lim \limits _{n\longrightarrow \infty }\Vert x^n-x^*\Vert ^2=0\), that is, \(x^n\longrightarrow x^*\).

Case 2. Suppose that for any integer m, there exists an integer n such that \(n\ge m\) and \(\Vert x^n-x^*\Vert \le \Vert x^{n+1}-x^*\Vert \). According to Lemma 3, there exists a nondecreasing sequence \(\{\tau (n)\}\) in \(\mathbb N\) such that \(\lim \limits _{n\longrightarrow \infty }\tau (n)=\infty \) and the following inequalities hold for all (sufficiently large) \(n\in \mathbb N\):

$$\begin{aligned} \Vert x^{\tau (n)}-x^*\Vert \le \Vert x^{\tau (n)+1}-x^*\Vert ,\;\;\Vert x^n-x^*\Vert \le \Vert x^{\tau (n)+1}-x^*\Vert . \end{aligned}$$
(27)

From (9), (15) and (27), we get

$$\begin{aligned} \begin{aligned} 0&\le \Vert a^{\tau (n)}-x^*\Vert -\Vert t^{\tau (n)}-x^*\Vert \\&\le \Vert x^{\tau (n)}-x^*\Vert -\Vert t^{\tau (n)}-x^*\Vert \\&\le \Vert x^{\tau (n)+1}-x^*\Vert -\Vert t^{\tau (n)}-x^*\Vert \\&\le \alpha _{\tau (n)}\Vert x^*\Vert -\alpha _{\tau (n)}\Vert t^{\tau (n)}-x^*\Vert . \end{aligned} \end{aligned}$$
(28)

From the boundedness of \(\{t^n\}\), \(\lim \limits _{n\longrightarrow \infty }\alpha _n=0\) and (28), we have

$$\begin{aligned} \lim \limits _{n\longrightarrow \infty }(\Vert a^{\tau (n)}-x^*\Vert -\Vert t^{\tau (n)}-x^*\Vert )=0,\;\lim \limits _{n\longrightarrow \infty }(\Vert x^{\tau (n)}-x^*\Vert -\Vert t^{\tau (n)}-x^*\Vert )=0.\nonumber \\ \end{aligned}$$
(29)

Thus,

$$\begin{aligned} \lim \limits _{n\longrightarrow \infty }(\Vert a^{\tau (n)}-x^*\Vert -\Vert x^{\tau (n)}-x^*\Vert )=0. \end{aligned}$$
(30)

Hence, from (29), (30) and the boundedness of \(\{x^n\}\), \(\{a^n\}\), \(\{t^n\}\), we obtain

$$\begin{aligned}&\lim \limits _{n\longrightarrow \infty }(\Vert a^{\tau (n)}-x^*\Vert ^2-\Vert t^{\tau (n)}-x^*\Vert ^2)=0, \\&\lim \limits _{n\longrightarrow \infty }(\Vert t^{\tau (n)}-x^*\Vert ^2-\Vert x^{\tau (n)}-x^*\Vert ^2)=0, \\&\lim \limits _{n\longrightarrow \infty }(\Vert a^{\tau (n)}-x^*\Vert ^2-\Vert x^{\tau (n)}-x^*\Vert ^2)=0. \end{aligned}$$

As proved in the first case, we obtain

$$\begin{aligned} \limsup \limits _{n\longrightarrow \infty }\langle x^*, x^*-t^{\tau (n)}\rangle \le 0. \end{aligned}$$

So, from the boundedness of \(\{t^n\}\) and \(\lim \limits _{n\longrightarrow \infty }\alpha _n=0\), we have

$$\begin{aligned} \limsup \limits _{n\longrightarrow \infty }\langle x^*, x^*-t^{\tau (n)}+\alpha _{\tau (n)}t^{\tau (n)}\rangle&=\limsup \limits _{n\longrightarrow \infty }\big [\langle x^*, x^*-t^{\tau (n)}\rangle +\alpha _{\tau (n)}\langle x^*, t^{\tau (n)}\rangle \big ]\nonumber \\&=\limsup \limits _{n\longrightarrow \infty }\langle x^*, x^*-t^{\tau (n)}\rangle \nonumber \\&\le 0. \end{aligned}$$
(31)

From (10), (15) and (27), we have

$$\begin{aligned} \Vert x^{\tau (n)+1}-x^*\Vert ^2\le (1-\alpha _{\tau (n)})\Vert x^{\tau (n)+1}-x^*\Vert ^2+2\alpha _{\tau (n)}\langle x^*, x^*-t^{\tau (n)}+\alpha _{\tau (n)} t^{\tau (n)}\rangle . \end{aligned}$$

Since \(\alpha _{\tau (n)}>0\), it follows from the above inequality that

$$\begin{aligned} \Vert x^{\tau (n)+1}-x^*\Vert ^2\le 2\langle x^*, x^*-t^{\tau (n)}+\alpha _{\tau (n)}t^{\tau (n)}\rangle . \end{aligned}$$
(32)

From (27) and (32), we obtain

$$\begin{aligned} \Vert x^n-x^*\Vert ^2\le 2\langle x^*, x^*-t^{\tau (n)}+\alpha _{\tau (n)}t^{\tau (n)}\rangle . \end{aligned}$$
(33)

Taking the limit superior in (33) as \(n\longrightarrow \infty \) and using (31), we obtain that

$$\begin{aligned} \limsup \limits _{n\longrightarrow \infty }\Vert x^n-x^*\Vert ^2\le 0. \end{aligned}$$

Therefore, \(x^n\longrightarrow x^*\). This completes the proof of the theorem. \(\square \)

From Algorithm 1 and Theorem 1, by taking \(F_i=0\) for all \(i=1,2,\ldots ,M\) and \(G_j=0\) for all \(j=1,2,\ldots ,N\), we get the following corollary:

Corollary 1

Assume that the solution set \(\varOmega \) of the MSSFP is nonempty. Choose \(x^0\in \mathcal {H}_1\), \(\{\delta _n\}\subset [\underline{\delta }, \overline{\delta }]\subset \Big (0, \dfrac{1}{\Vert A\Vert ^2+1}\Big )\), \(\{\alpha _n\}\subset (0, 1)\) such that \(\lim \nolimits _{n\longrightarrow \infty }\alpha _n=0\) and \(\displaystyle \sum \nolimits _{n=0}^{\infty }\alpha _n=\infty \).

For each iteration \(n\ge 0\), compute

$$\begin{aligned} u^n=A(x^n),\;\;v^n:=P_{Q_{j_n}}(u^n), \end{aligned}$$

where

$$\begin{aligned} j_n=\hbox {argmax}\{\Vert P_{Q_j}(u^n)-u^n\Vert : j=1,2,\ldots ,N\}. \end{aligned}$$

Further, we compute

$$\begin{aligned} y^n=x^n+\delta _n A^*(v^n-u^n),\;\;z^n:=P_{C_{i_n}}(y^n), \end{aligned}$$

where

$$\begin{aligned} i_n=\hbox {argmax}\{\Vert P_{C_i}(y^n)-y^n\Vert : i=1,2,\ldots ,M\} \end{aligned}$$

and define the next iteration as

$$\begin{aligned} x^{n+1}=(1-\alpha _n)z^n. \end{aligned}$$

Then, the sequence \(\{x^n\}\) converges strongly to the minimum-norm solution of the MSSFP.

Corollary 2

Let C and Q be two nonempty closed convex subsets of two real Hilbert spaces \(\mathcal {H}_1\) and \(\mathcal {H}_2\), respectively. Let \(F: \mathcal {H}_1\longrightarrow \mathcal {H}_1\) be monotone and L-Lipschitz continuous on \(\mathcal {H}_1\), \(G: \mathcal {H}_2\longrightarrow \mathcal {H}_2\) be monotone and \(\kappa \)-Lipschitz continuous on \(\mathcal {H}_2\). Suppose that the sequences \(\{\alpha _n\}\), \(\{\delta _n\}\), \(\{\lambda _n\}\) and \(\{\mu _n\}\) satisfy the following restrictions:

$$\begin{aligned} {\left\{ \begin{array}{ll} {\{\alpha _n\}\subset (0, 1), \lim \limits _{n\longrightarrow \infty }\alpha _n=0, \displaystyle \sum \limits _{n=0}^{\infty }\alpha _n=\infty },\\ {\{\delta _n\}\subset [\underline{\delta }, \overline{\delta }]\subset \Big (0, \dfrac{1}{\Vert A\Vert ^2+1}\Big )},\\ {\{\lambda _n\}\subset [\underline{\lambda }, \overline{\lambda }]\subset \Big (0, \dfrac{1}{L}\Big )},\\ {\{\mu _n\}\subset [\underline{\mu }, \overline{\mu }]\subset \Big (0, \dfrac{1}{\kappa }\Big )}. \end{array}\right. } \end{aligned}$$

Choose \(x^0\in \mathcal {H}_1\), and for each iteration \(n\ge 0\), compute

$$\begin{aligned} \begin{aligned} b^n&=A(x^n),\\ u^n&=P_Q(b^n-\mu _n G(b^n)),\\ v^n&=b^n-u^n-\mu _n(G(b^n)-G(u^n)),\\ w^n&= {\left\{ \begin{array}{ll} b^n-\dfrac{\langle b^n-u^n, v^n\rangle }{\Vert v^n\Vert ^2}v^n &{} \text{ if } u^n\ne b^n,\\ b^n &{} \text{ if } u^n=b^n. \end{array}\right. }\\ \end{aligned} \end{aligned}$$

Further, we compute

$$\begin{aligned} \begin{aligned} a^n&=x^n+\delta _n A^*(w^n-b^n),\\ y^n&=P_C(a^n-\lambda _n F(a^n)),\\ z^n&=a^n-y^n-\lambda _n(F(a^n)-F(y^n)),\\ t^n&= {\left\{ \begin{array}{ll} a^n-\dfrac{\langle a^n-y^n, z^n\rangle }{\Vert z^n\Vert ^2}z^n &{} \text{ if } y^n\ne a^n,\\ a^n &{} \text{ if } y^n=a^n. \end{array}\right. } \end{aligned} \end{aligned}$$

and define the next iteration as

$$\begin{aligned} x^{n+1}=(1-\alpha _n)t^n. \end{aligned}$$

Then, the sequence \(\{x^n\}\) converges strongly to the minimum-norm solution of the SVIP, provided that the solution set \(\varOmega =\{x^*\in \hbox {Sol}(C, F): Ax^*\in \hbox {Sol}(Q, G)\}\) of the SVIP is nonempty.

4 Numerical Illustrations

Example 1

Let \(\mathcal {H}_1=\mathbb {R}^4\) with the norm \(\Vert x\Vert =(x_1^2+x_2^2+x_3^2+x_4^2)^{\frac{1}{2}}\) for \(x=(x_1, x_2, x_3, x_4)^T\in \mathbb {R}^4\) and \(\mathcal {H}_2=\mathbb {R}^2\) with the norm \(\Vert y\Vert =(y_1^2+y_2^2)^{\frac{1}{2}}\) for all \(y=(y_1, y_2)^T\in \mathbb {R}^2\).

Let \(A(x)=(x_1+x_3+x_4, x_2+x_3-x_4)^T\) for all \(x=(x_1, x_2, x_3, x_4)^T\in \mathbb {R}^4\). Then, A is a bounded linear operator from \(\mathbb {R}^4\) into \(\mathbb {R}^2\) with \(\Vert A\Vert =\sqrt{3}\). For \(y=(y_1, y_2)^T\in \mathbb {R}^2\), let \(B(y)=(y_1, y_2, y_1+y_2, y_1-y_2)^T\). Then, B is a bounded linear operator from \(\mathbb {R}^2\) into \(\mathbb {R}^4\) with \(\Vert B\Vert =\sqrt{3}\). Moreover, for any \(x=(x_1, x_2, x_3, x_4)^T\in \mathbb {R}^4\) and \(y=(y_1, y_2)^T\in \mathbb {R}^2\), \(\langle A(x), y\rangle =\langle x, B(y)\rangle \), so \(B=A^*\) is the adjoint operator of A.

Let

$$\begin{aligned} \begin{aligned} C_1&=\{x=(x_1, x_2, x_3, x_4)^T\in \mathbb {R}^4: 2x_1+x_2+3x_3+x_4\le 8\},\\ C_2&=\{x=(x_1, x_2, x_3, x_4)^T\in \mathbb {R}^4: x_1+x_2+2x_3\le 5\},\\ C_3&=\{x=(x_1, x_2, x_3, x_4)^T\in \mathbb {R}^4: x_1-x_2-4x_3\le 3\},\\ C_4&=\{x=(x_1, x_2, x_3, x_4)^T\in \mathbb {R}^4: 4x_1-x_2+x_3-x_4\le 18\},\\ C_5&=\{x=(x_1, x_2, x_3, x_4)^T\in \mathbb {R}^4: 3x_1-2x_2-5x_3-3x_4\ge 14\},\\ C_6&=\{x=(x_1, x_2, x_3, x_4)^T\in \mathbb {R}^4: -x_1+x_2+4x_3\le -3\} \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} Q_1&=\{(u_1, u_2)^T\in \mathbb {R}^2: 2u_1+u_2\ge 8\},\\ Q_2&=\{(u_1, u_2)^T\in \mathbb {R}^2: u_1+u_2\ge 5\}. \end{aligned} \end{aligned}$$

The solution set \(\varOmega =\{x=(x_1, x_2, x_3, x_4)^T\in C_1\cap C_2\cap C_3\cap C_4\cap C_5\cap C_6: A(x)\in Q_1\cap Q_2\}\) of the MSSFP is given by

$$\begin{aligned} \varOmega =\left\{ x=(x_1, x_2, x_3, x_4)^T\in \mathbb {R}^4: {\left\{ \begin{array}{ll} {2x_1+x_2+3x_3+x_4=8},\\ {x_1+x_2+2x_3=5},\\ {x_1-x_2-4x_3=3},\\ {4x_1-x_2+x_3-x_4\le 18},\\ {3x_1-2x_2-5x_3-3x_4\ge 14} \end{array}\right. }\right\} \end{aligned}$$

which can be rewritten as follows

$$\begin{aligned} \varOmega =\Big \{(a+4, 1-3a, a, -2a-1)^T: \dfrac{1}{10}\le a\le \dfrac{1}{5}\Big \}. \end{aligned}$$

For \(x=(a+4, 1-3a, a, -2a-1)^T\in \varOmega \), we have

$$\begin{aligned} \begin{aligned} \Vert x\Vert&=\sqrt{(a+4)^2+(1-3a)^2+a^2+(-2a-1)^2}\\&=\sqrt{15a^2+6a+18}\\&=\sqrt{15\Big (\dfrac{1}{10}-a\Big )^2+9a+\dfrac{357}{20}}\\&\ge \sqrt{\dfrac{9}{10}+\dfrac{357}{20}}=\dfrac{\sqrt{75}}{2}. \end{aligned} \end{aligned}$$

Equality holds above if and only if \(a=\dfrac{1}{10}\), so the minimum-norm solution of the MSSFP is \(x^*=\Big (\dfrac{41}{10}, \dfrac{7}{10}, \dfrac{1}{10}, -\dfrac{6}{5}\Big )^T\). We select a randomly chosen starting point \(x^0=(-3,7,-4,10)^T\in \mathbb {R}^4\) for Algorithm 1. With \(\delta _n=\dfrac{n+1}{5n+6}\), \(\alpha _n=\dfrac{1}{n+2}\), \(\varepsilon =10^{-5}\) and the stopping criterion \(\Vert x^{n+1}-x^n\Vert \le \varepsilon \), the approximate solution obtained after 111490 iterations (in 27.2783 seconds) is

$$\begin{aligned} x^{111490}=(4.09989912, 0.69989497, 0.10001402, -1.20001237)^T, \end{aligned}$$

which is a good approximation to the minimum-norm solution \(x^*=\Big (\dfrac{41}{10}, \dfrac{7}{10}, \dfrac{1}{10}, -\dfrac{6}{5}\Big )^T\) of the MSSFP.

The iterative schemes were implemented in Python 3.9 on a laptop with an Intel(R) Core(TM) i5-5200U CPU @ 2.20 GHz and 4 GB RAM.
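As a rough illustration, the following script (assuming the proj_halfspace, lemma1_step and algorithm1 sketches given in Sects. 2 and 3 are in scope) sets up the data of this example and runs Algorithm 1 with \(F_i=0\) and \(G_j=0\), i.e. in the setting of Corollary 1. A fixed iteration budget is used here in place of the stopping criterion above, so it is a sketch rather than an exact reproduction of the reported run.

```python
import numpy as np

# Data of Example 1
A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, -1.0]])

# Half-spaces {x : <c, x> <= b}; C_5, Q_1 and Q_2 are rewritten with reversed signs.
C = [([2, 1, 3, 1], 8), ([1, 1, 2, 0], 5), ([1, -1, -4, 0], 3),
     ([4, -1, 1, -1], 18), ([-3, 2, 5, 3], -14), ([-1, 1, 4, 0], -3)]
Q = [([-2, -1], -8), ([-1, -1], -5)]

projs_C = [lambda x, c=np.array(c, float), b=float(b): proj_halfspace(x, c, b) for c, b in C]
projs_Q = [lambda x, c=np.array(c, float), b=float(b): proj_halfspace(x, c, b) for c, b in Q]
zero = lambda x: np.zeros_like(x)                      # F_i = 0 and G_j = 0 (Corollary 1)

x = algorithm1(np.array([-3.0, 7.0, -4.0, 10.0]), A,
               projs_C, projs_Q,
               Fs=[zero] * 6, Gs=[zero] * 2,
               delta=lambda n: (n + 1) / (5 * n + 6),
               alpha=lambda n: 1 / (n + 2),
               lam=lambda n: 0.5, mu=lambda n: 0.5,    # irrelevant when F = G = 0
               n_iters=100000)                         # the reported run used ~111490 iterations
print(x)   # approaches x* = (4.1, 0.7, 0.1, -1.2)^T
```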

5 Conclusion

The paper has presented an iterative algorithm for finding the minimum-norm solution of the multiple-sets split variational inequality problem in real Hilbert spaces. Under suitable conditions on the parameters, we have proved the strong convergence of the iterative sequence to the minimum-norm solution of the MSSVIP. As special cases, we have applied the main theorem to find the minimum-norm solution of the MSSFP and the SVIP. A simple numerical example has been presented to illustrate the performance of the proposed algorithm.