1 Introduction

Let \(\mathcal {H}\) be a real Hilbert space and let \(A\colon \mathcal {H}\to 2^{\mathcal {H}}\) be a set-valued operator. The domain and the graph of A are, respectively, defined by \(\operatorname {dom} A=\{x\in \mathcal {H}~|~ Ax\neq \varnothing \}\) and \(\operatorname {gra} A=\{(x,u) \in \mathcal {H}\times \mathcal {H}~|~u\in Ax\}\). We denote by \(\operatorname {zer} A=\{x\in \mathcal {H}~|~0\in Ax\}\) the set of zeros of A, and by \(\operatorname {ran} A=\{u\in \mathcal {H}~|~(\exists \; x\in \mathcal {H})\; u\in Ax\}\) the range of A. The inverse of A is \(A^{-1}\colon \mathcal {H}\to 2^{\mathcal {H}}\colon u\mapsto \{x\in \mathcal {H}~|~u\in Ax\}\). Moreover, A is monotone if

$$(\forall(x,y)\in\mathcal{H}\times \mathcal{H})~ (\forall (u,v)\in Ax\times Ay)\quad \langle x-y~|~u-v \rangle \geq 0, $$

and maximally monotone if it is monotone and there exists no monotone operator \(B\colon \mathcal {H}\to 2^{\mathcal {H}}\) such that \(\operatorname {gra} B\) properly contains \(\operatorname {gra} A\).

A basic problem in monotone operator theory is to find a zero of the sum of two maximally monotone operators A and B acting on a real Hilbert space \(\mathcal {H}\), that is, find \(\overline {x}\in \mathcal {H}\) such that

$$ 0\in A\overline{x} + B\overline{x}. $$
(1)
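
For instance (a standard special case, recalled here only for orientation), if \(A=\partial f\) and \(B=\partial g\), where f and g are proper lower semicontinuous convex functions on \(\mathcal {H}\) and a standard qualification condition ensures \(\partial f+\partial g=\partial (f+g)\), then (1) amounts to the convex minimization problem

$$\underset{x\in\mathcal{H}}{\text{minimize}}\;\; f(x)+g(x). $$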

Suppose that the problem (1) has at least one solution \(\overline {x}\). Then, there exists \(\overline {v}\in B\overline {x}\) such that \(-\overline {v}\in A\overline {x}\). The set of all such pairs \((\overline {x},\overline {v})\) defines the extended set of solutions to the problem (1) [20],

$$E(A,B) = \{(\overline{x},\overline{v})~|~\overline{v}\in B\overline{x},~-\overline{v}\in A\overline{x}\}. $$

Conversely, if E(A, B) is non-empty and \((\overline {x},\overline {v})\in E(A,B)\), then the set of solutions to the problem (1) is also non-empty since \(\overline {x}\) solves (1) and \(\overline {v}\) solves its dual problem [2], i.e.,

$$0\in B^{-1}v -A^{-1}(-v). $$

It is remarkable that three fundamental methods, namely the Douglas–Rachford splitting method, the forward-backward splitting method, and the forward-backward-forward splitting method, converge weakly to points in E(A, B) [22, Theorem 1], [14, 23]. We next consider a more general problem where one of the operators has a linearly composite structure. In this case, the problem (1) becomes [11, (1.2)],

$$ 0\in A\overline{x} + (L^{\ast}\circ B\circ L)\overline{x}, $$
(2)

where B acts on a real Hilbert space \(\mathcal {G}\) and L is a bounded linear operator from \(\mathcal {H}\) to \(\mathcal {G}\). Then, it is shown in [11, Proposition 2.8(iii)(iv)] that whenever the set of solutions to (2) is non-empty, the extended set of solutions

$$E(A,B,L) = \{(\overline{x},\overline{v})~|~-L^{\ast}\overline{v}\in A\overline{x},~ L\overline{x} \in B^{-1}\overline{v}\}. $$

is non-empty and, for every \((\overline {x},\overline {v}) \in E(A,B,L)\), \(\overline {v}\) is a solution to the dual problem of (2) [11, Eq. (1.3)],

$$ 0\in B^{-1}\overline{v} - L\circ A^{-1}\circ(-L^{\ast})\overline{v}. $$
(3)

The algorithm proposed in [11, Eq. (3.1)] to solve the pair (2)–(3) converges weakly to a point in E(A, B, L) [11, Theorem 3.1]. Let us now consider the case when the monotone inclusions involve parallel sums of monotone operators. This type of inclusion was first introduced in [18, Problem 1.1] and then studied in [24] and [6]. A simple case is

$$ 0\in A\overline{x} + L^{\ast}\circ(B\;\square\; D)\circ L\overline{x} + C\overline{x}, $$
(4)

where B and D act on \(\mathcal {G}\), C acts on \(\mathcal {H}\), and the symbol \(\square \) denotes the parallel sum operation defined by

$$B\;\square\; D = (B^{-1}+D^{-1})^{-1}. $$
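
As an elementary illustration (not taken from the cited works), if \(B=\beta \operatorname {Id}\) and \(D=\delta \operatorname {Id}\) for some \(\beta ,\delta \in ]0,+\infty [\), then

$$B\;\square\; D = \left(\beta^{-1}+\delta^{-1}\right)^{-1}\operatorname{Id} = \frac{\beta\delta}{\beta+\delta}\,\operatorname{Id}, $$

in analogy with the rule for combining resistors in parallel.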

Then, whenever the set of solutions to (4) is non-empty, so is its extended set of solutions, defined by

$$ E(A,B,C,D,L) = \left\{(\overline{x},\overline{v})~|~-L^{\ast}\overline{v}\in (A+C)\overline{x},~ L\overline{x} \in (B^{-1}+D^{-1})\overline{v}\right\}. $$

Furthermore, if there exists \((\overline {x},\overline {v})\in E(A,B,C,D,L)\), then \(\overline {x}\) solves (4) and \(\overline {v}\) solves its dual problem defined by

$$0\in B^{-1}\overline{v} -L\circ(A+C)^{-1}\circ(-L^{\ast})\overline{v} + D^{-1}\overline{v}. $$

Under suitable conditions on the operators, the algorithms in [6, 18, 24] converge weakly to a point in E(A, B, C, D, L). We also note the more complex situation in which B and D in (4) admit linearly composite structures, introduced first in [4] and then in [7]; in this case, (4) becomes

$$ 0\in A\overline{x} + L^{\ast}\circ\Big((M^{\ast}\circ B\circ M)\;\square\; (N^{\ast}\circ D\circ N)\Big)\circ L\overline{x} + C\overline{x}, $$
(5)

where M and N are bounded linear operators from \(\mathcal {G}\) to real Hilbert spaces \(\mathcal {Y}\) and \(\mathcal {X}\), respectively, and B and D act on \(\mathcal {Y}\) and \(\mathcal {X}\), respectively. Then, under suitable conditions on the operators, simple calculations show that the algorithms proposed in [4] and [7] converge weakly to points in the extended set of solutions,

$$\begin{array}{@{}rcl@{}} &&E(A,B,C,D,L,M,N)\\ &&\quad =\!\! \left\{(\overline{x},\overline{v})|-L^{\ast}\overline{v}\in (A+C)\overline{x}, L\overline{x} \in \left((M^{\ast}\circ B\circ M)^{-1}+(N^{\ast}\circ D\circ N)^{-1}\right)\overline{v}\right\}. \end{array} $$
(6)

Furthermore, for each \((\overline {x},\overline {v}) \in E(A,B,C,D,L,M,N)\), \(\overline {v}\) solves the dual problem of (5),

$$0\in (M^{\ast}\circ B\circ M)^{-1}\overline{v} -L\circ(A+C)^{-1}\circ(-L^{\ast})\overline{v}+ (N^{\ast}\circ D\circ N)^{-1}\overline{v}. $$

To sum up, the above analysis shows that each primal problem formulation mentioned has a dual problem which admits an explicit formulation, and the corresponding algorithm converges weakly to a point in the extended set of solutions. However, there is a class of inclusions whose dual problems are no longer available, for instance, when A is univariate and C is multivariate, as in [1, Problem 1.1]. Therefore, it is necessary to find a new way to overcome this limitation. Observe that a formulation in the form of (6) recovers both the primal problem and the dual problem. Hence, it is more convenient to formulate the problem in the form of (6) in order to overcome this limitation. This approach was first used in [25]. In this paper, we extend it to the following problem in order to unify some recent primal-dual frameworks in the literature.

Problem 1

Let m, s be strictly positive integers. For every i∈{1,…, m}, let \((\mathcal {H}_{i},\langle \cdot |\cdot \rangle )\) be a real Hilbert space, let \(z_{i}\in \mathcal {H}_{i}\), let \(A_{i}\colon \mathcal {H}_{i}\to 2^{\mathcal {H}_{i}}\) be maximally monotone, let \(C_{i}\colon \mathcal {H}_{1}\times \dots \times \mathcal {H}_{m}\to \mathcal {H}_{i}\) be such that

$$\begin{array}{@{}rcl@{}} &&\left(\exists \nu_{0} \in [0,+\infty[\right)~ \left(\forall (x_{i})_{1\leq i\leq m}\in \mathcal{H}_{1}\times\cdots\times\mathcal{H}_{m}\right)~ \left(\forall (y_{i})_{1\leq i\leq m}\in \mathcal{H}_{1}\times\cdots\times\mathcal{H}_{m}\right)\\ && \left\{\begin{array}{l} {\sum}_{i=1}^{m} \|C_{i}(x_{1},\ldots, x_{m}) - C_{i}(y_{1},\ldots, y_{m}) \|^{2}{\leq\nu_{0}^{2}} {\sum}_{i=1}^{m} \|x_{i}-y_{i} \|^{2},\\ {\sum}_{i=1}^{m} \langle C_{i}(x_{1},\ldots, x_{m}) - C_{i}(y_{1},\ldots, y_{m})~|~x_{i}-y_{i}\rangle \geq 0. \end{array}\right. \end{array} $$
(7)

For every k∈{1,…, s}, let \((\mathcal {G}_{k},\langle \cdot \mid \cdot \rangle ), (\mathcal {Y}_{k},\langle \cdot \mid \cdot \rangle )\), and \((\mathcal {X}_{k},\langle \cdot \mid \cdot \rangle )\) be real Hilbert spaces, let \(r_{k} \in \mathcal {G}_{k}\), let \(B_{k}\colon \mathcal {Y}_{k}\to 2^{\mathcal {Y}_{k}}\) be maximally monotone, let \(D_{k}\colon \mathcal {X}_{k}\to 2^{\mathcal {X}_{k}}\) be maximally monotone, let \(M_{k}\colon \mathcal {G}_{k}\to \mathcal {Y}_{k}\) and \(N_{k}\colon \mathcal {G}_{k}\to \mathcal {X}_{k}\) be bounded linear operators, and, for every i∈{1,…, m}, let \(L_{k,i}\colon \mathcal {H}_{i} \to \mathcal {G}_{k}\) be a bounded linear operator. The problem is to find \(\overline {x}_{1} \in \mathcal {H}_{1},\ldots , \overline {x}_{m} \in \mathcal {H}_{m}\) and \(\overline {v}_{1} \in \mathcal {G}_{1},\ldots , \overline {v}_{s} \in \mathcal {G}_{s}\) such that

$$ \left\{\begin{array}{l} z_{1}-{\sum}_{k=1}^{s} L_{k,1}^{\ast}\overline{v}_{k}\in A_{1}\overline{x}_{1}+ C_{1}(\overline{x}_{1},\ldots,\overline{x}_{m}), \\ \vdots\\ z_{m}-{\sum}_{k=1}^{s} L_{k,m}^{\ast}\overline{v}_{k}\in A_{m}\overline{x}_{m} + C_{m}(\overline{x}_{1},\ldots,\overline{x}_{m}),\\ {\sum}_{i=1}^{m}L_{1,i}\overline{x}_{i}-r_{1} \in (M^{\ast}_{1}\circ B_{1}\circ M_{1})^{-1}\overline{v}_{1} + (N^{\ast}_{1}\circ D_{1}\circ N_{1})^{-1}\overline{v}_{1},\\ \vdots\\ {\sum}_{i=1}^{m}L_{s,i}\overline{x}_{i}-r_{s} \in (M^{\ast}_{s}\circ B_{s}\circ M_{s})^{-1} \overline{v}_{s} + (N^{\ast}_{s}\circ D_{s}\circ N_{s})^{-1} \overline{v}_{s}. \end{array}\right. $$
(8)

We denote by Ω the set of solutions to (8).

Here are some connections to existing primal-dual problems in the literature.

  1. (i)

    In Problem 1, set m = 1 and \((\forall k\in \{1,\ldots ,s\})\; L_{k,1} = \operatorname {Id}\). Then, by removing \(\overline {v}_{1},\ldots , \overline {v}_{s}\) from (8), we obtain the primal inclusion in [4, (1.7)]. Furthermore, by removing \(\overline {x}_{1}\) from (8), we obtain the dual inclusion.

  2. (ii)

    In Problem 1, set m = 1 and let \(C_{1}\) be cocoercive (i.e., \(C_{1}^{-1}\) is strongly monotone). Then, by removing \(\overline {v}_{1},\ldots , \overline {v}_{s}\) from (8), we obtain the primal inclusion in [7, (1.1)]. Furthermore, by removing \(\overline {x}_{1}\) from (8), we obtain the dual inclusion, which is weaker than the dual inclusion in [7, (1.2)].

  3. (iii)

    In Problem 1, set \((\forall k\in \{1,\ldots ,s\})\; \mathcal {Y}_{k} = \mathcal {X}_{k} =\mathcal {G}_{k}\) and \(M_{k}= N_{k} = \operatorname {Id}\), and let \((D_{k}^{-1})_{1\leq k\leq s}\) be single-valued. Then we obtain an instance of the system of inclusions in [25, (1.3)] in which the coupling terms are restricted to be cocoercive in the product space. Furthermore, if, for every i∈{1,…, m}, \(C_{i}\) acts only on \(\mathcal {H}_{i}\) and \((D_{k}^{-1})_{1\leq k\leq s}\) are Lipschitzian, then by removing, respectively, \(\overline {v}_{1},\ldots , \overline {v}_{s}\) and \(\overline {x}_{1},\ldots , \overline {x}_{m}\), we obtain, respectively, the primal inclusion in [16, (1.2)] and the dual inclusion in [16, (1.3)].

  4. (iv)

    In Problem 1, set s = m, (∀i∈{1,…, m}) \(z_{i} = 0\), \(A_{i} = 0\), and (∀k∈{1,…, s}) \(r_{k} = 0\), \((\forall k\neq i)\; L_{k,i} = 0\). Then, we obtain the dual inclusion in [5, (1.2)], where \((D^{-1}_{k})_{1\leq k\leq s}\) are single-valued and Lipschitzian. Moreover, by removing the variables \(\overline {v}_{1},\ldots ,\overline {v}_{s}\), we obtain the primal inclusion in [5, (1.2)].

In the present paper, we develop the splitting technique in [4] and, based on the convergence result of the algorithm proposed in [16], we propose a splitting algorithm for solving Problem 1 and prove its convergence in Section 2. We provide some application examples in the last section.

Notations

(See [3]) The scalar products and the norms of all Hilbert spaces used in this paper are denoted, respectively, by 〈⋅∣⋅〉 and ∥⋅∥. We denote by \(\mathcal {B}(\mathcal {H},\mathcal {G})\) the space of all bounded linear operators from \(\mathcal {H}\) to \(\mathcal {G}\). The symbols \(\rightharpoonup \) and → denote, respectively, weak and strong convergence. The resolvent of A is

$$J_{A}=(\text{Id} + A)^{-1}, $$

where Id denotes the identity operator on \(\mathcal {H}\). We say that A is uniformly monotone at \(x\in \operatorname {dom} A\) if there exists an increasing function \(\phi \colon [0,+\infty [\to [0,+\infty ]\) vanishing only at 0 such that

$$ \left(\forall u\in Ax\right)~\left(\forall (y,v)\in\text{gra}A\right)\quad\langle x-y\mid u-v\rangle\geq\phi(\|x-y\|). $$

The class of all lower semicontinuous convex functions \(f\colon \mathcal {H}\to ]-\infty ,+\infty ]\) such that \(\operatorname {dom} f=\{x\in \mathcal {H}\mid f(x) < +\infty \}\neq \varnothing \) is denoted by \({\Gamma }_{0}(\mathcal {H})\). Now, let \(f\in {\Gamma }_{0}(\mathcal {H})\). The conjugate of f is the function \(f^{\ast }\in {\Gamma }_{0}(\mathcal {H})\) defined by \(f^{\ast }\colon u\mapsto \sup _{x\in \mathcal {H}}(\langle x\mid u\rangle - f(x))\), and the subdifferential of \(f\in {\Gamma }_{0}(\mathcal {H})\) is the maximally monotone operator

$$\partial f\colon\mathcal{H}\to 2^{\mathcal{H}}\colon x \mapsto\left\{u\in\mathcal{H}\mid (\forall y\in\mathcal{H})~ \langle y-x\mid u\rangle + f(x) \leq f(y)\right\} $$

with inverse given by

$$(\partial f)^{-1}=\partial f^{\ast}. $$

Moreover, the proximity operator of f is

$$\operatorname{prox}_{f}=J_{\partial f} \colon\mathcal{H}\to\mathcal{H}\colon x \mapsto\mathop{\mathrm{argmin}}_{y\in\mathcal{H}}\left( f(y) + \frac12\|x-y\|^{2}\right).$$
(9)
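
As a concrete illustration of (9) (an aside, not part of the original development), take \(\mathcal {H}=\mathbb {R}^{d}\) and \(f=\gamma \|\cdot \|_{1}\) with \(\gamma >0\); the proximity operator is then the componentwise soft-thresholding map \(x\mapsto \operatorname {sign}(x)\max (|x|-\gamma ,0)\). A minimal numerical sketch, assuming NumPy (the function name prox_l1 is ours):

```python
import numpy as np

def prox_l1(x: np.ndarray, gamma: float) -> np.ndarray:
    """Proximity operator of gamma*||.||_1, i.e., componentwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

# prox_f(x) is the unique minimizer of f(y) + 0.5*||x - y||^2, cf. (9).
x = np.array([3.0, -0.2, 0.7])
print(prox_l1(x, gamma=0.5))  # prints [ 2.5 -0.   0.2]
```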

2 Algorithm and Convergence

We now state the main result of the paper, in which we introduce our splitting algorithm, prove its convergence, and provide connections to existing works.

Theorem 1

In Problem 1, suppose that \(\varOmega \not =\varnothing \) and that

$$ \beta = \nu_{0}+\sqrt{\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{s}\|N_{k}L_{k,i}\|^{2}+\max_{1\leq k\leq s}\left(\|N_{k}\|^{2}+\|M_{k}\|^{2}\right)} > 0. $$
(10)

For every i∈{1,…,m}, let \(\left (a^{i}_{1,1,n}\right )_{n\in \mathbb {N}}, \left (b^{i}_{1,1,n}\right )_{n\in \mathbb {N}}, \left (c^{i}_{1,1,n}\right )_{n\in \mathbb {N}}\) be absolutely summable sequences in \(\mathcal {H}_{i}\); for every k∈{1,…,s}, let \(\left (a^{k}_{1,2,n}\right )_{n\in \mathbb {N}}, \left (c^{k}_{1,2,n}\right )_{n\in \mathbb {N}}\) be absolutely summable sequences in \(\mathcal {G}_{k}\), let \(\left (a^{k}_{2,1,n}\right )_{n\in \mathbb {N}}, \left (b^{k}_{2,1,n}\right )_{n\in \mathbb {N}}, \left (c^{k}_{2,1,n}\right )_{n\in \mathbb {N}}\) be absolutely summable sequences in \(\mathcal {X}_{k}\), and let \(\left (a^{k}_{2,2,n}\right )_{n\in \mathbb {N}}, \left (b^{k}_{2,2,n}\right )_{n\in \mathbb {N}}, \left (c^{k}_{2,2,n}\right )_{n\in \mathbb {N}}\) be absolutely summable sequences in \(\mathcal {Y}_{k}\). For every i∈{1,…,m} and k∈{1,…,s}, let \(x^{i}_{1,0} \in \mathcal {H}_{i}, x_{2,0}^{k} \in \mathcal {G}_{k}\) and \(v_{1,0}^{k} \in \mathcal {X}_{k}, v_{2,0}^{k} \in \mathcal {Y}_{k}\), let ε∈]0,1/(β+1)[, let \((\gamma _{n})_{n\in \mathbb {N}}\) be a sequence in [ε,(1−ε)/β], and set

$$ \begin{array}{l} \text{For }~n=0,1,\ldots,\\ \left\lfloor \begin{array}{l} \text{For }~i=1,\ldots, m,\\ \left\lfloor \begin{array}{l} s_{1,1,n}^{i} = x_{1,n}^{i} -\gamma_{n}\left(C_{i}\left(x_{1,n}^{1},\ldots,x_{1,n}^{m}\right)+ {\sum}_{k=1}^{s}L_{k,i}^{\ast}N_{k}^{\ast}v_{1,n}^{k} + a_{1,1,n}^{i}\right),\\ p_{1,1,n}^{i} = J_{\gamma_{n} A_{i}}\left(s_{1,1,n}^{i} +\gamma_{n} z_{i}\right) + b_{1,1,n}^{i}, \end{array} \right.\\ \text{For }~k=1,\ldots,s,\\ \left\lfloor \begin{array}{l} p_{1,2,n}^{k} = x_{2,n}^{k} +\gamma_{n}\left(N_{k}^{\ast}v_{1,n}^{k} - M_{k}^{\ast}v_{2,n}^{k} + a_{1,2,n}^{k} \right),\\ s_{2,1,n}^{k} = v_{1,n}^{k} + \gamma_{n}\left({\sum}_{i=1}^{m}N_{k}L_{k,i} x_{1,n}^{i} - N_{k}x_{2,n}^{k} + a_{2,1,n}^{k} \right),\\ p_{2,1,n}^{k} = s_{2,1,n}^{k}-\gamma_{n} \left(N_{k}r_{k}+ J_{\gamma_{n}^{-1}D_{k}}\left(\gamma_{n}^{-1}s_{2,1,n}^{k} -N_{k}r_{k}\right) + b_{2,1,n}^{k} \right),\\ q_{2,1,n}^{k} = p_{2,1,n}^{k} +\gamma_{n}\left( N_{k}{\sum}_{i=1}^{m}L_{k,i}p_{1,1,n}^{i}-N_{k}p_{1,2,n}^{k} +c_{2,1,n}^{k} \right), \\ v_{1,n+1}^{k} = v_{1,n}^{k} - s_{2,1,n}^{k} + q_{2,1,n}^{k},\\ s_{2,2,n}^{k} = v_{2,n}^{k} + \gamma_{n}\left(M_{k}x_{2,n}^{k} + a_{2,2,n}^{k} \right),\\ p_{2,2,n}^{k} = s_{2,2,n}^{k}-\gamma_{n} \left(J_{\gamma_{n}^{-1}B_{k}}\left(\gamma_{n}^{-1}s_{2,2,n}^{k}\right) + b_{2,2,n}^{k} \right),\\ q_{2,2,n}^{k} = p_{2,2,n}^{k} +\gamma_{n}\left( M_{k}p_{1,2,n}^{k} +c_{2,2,n}^{k} \right), \\ v_{2,n+1}^{k} = v_{2,n}^{k} - s_{2,2,n}^{k} + q_{2,2,n}^{k},\\ q_{1,2,n}^{k} = p_{1,2,n}^{k} +\gamma_{n}\left(N_{k}^{\ast}p_{2,1,n}^{k} - M_{k}^{\ast}p_{2,2,n}^{k} + c_{1,2,n}^{k} \right),\\ x_{2,n+1}^{k} = x_{2,n}^{k} -p_{1,2,n}^{k}+ q_{1,2,n}^{k}, \end{array} \right.\\ \text{For }~i=1,\ldots, m,\\ \left\lfloor \begin{array}{l} q_{1,1,n}^{i} = p_{1,1,n}^{i} -\gamma_{n}\left(C_{i}\left(p_{1,1,n}^{1},\ldots,p_{1,1,n}^{m}\right) + {\sum}_{k=1}^{s}L_{k,i}^{\ast}N^{\ast}_{k}p_{2,1,n}^{k} + c_{1,1,n}^{i} \right),\\ x_{1,n+1}^{i} = x_{1,n}^{i} -s_{1,1,n}^{i}+ q_{1,1,n}^{i}. \end{array} \right.\\ \end{array} \right.\\ \end{array} $$
(11)

Then, the following hold for each i∈{1,…,m} and k∈{1,…,s}.

  1. (i)

    \({\sum }_{n\in \mathbb {N}}\|x_{1,n}^{i} - p_{1,1,n}^{i} \|^{2} < +\infty \) and \({\sum }_{n\in \mathbb {N}}\|x_{2,n}^{k} - p_{1,2,n}^{k} \|^{2} < +\infty \).

  2. (ii)

    \({\sum }_{n\in \mathbb {N}}\|v_{1,n}^{k} - p_{2,1,n}^{k} \|^{2} < +\infty \) and \({\sum }_{n\in \mathbb {N}}\|v_{2,n}^{k} - p_{2,2,n}^{k} \|^{2} < +\infty \).

  3. (iii)

    \(x_{1,n}^{i}\rightharpoonup \overline {x}_{1,i}, x_{2,n}^{k}\rightharpoonup \overline {y}_{k}, v_{1,n}^{k}\rightharpoonup \overline {v}_{1,k}, v_{2,n}^{k}\rightharpoonup \overline {v}_{2,k}\) and, for every (i,k)∈{1,…,m}×{1,…,s},

    $$ \left\{\begin{array}{l} z_{i}- {\sum}_{k=1}^{s} L_{k,i}^{\ast}N_{k}^{\ast}\overline{v}_{1,k}\in A_{i}\overline{x}_{1,i}+ C_{i}(\overline{x}_{1,1},\ldots, \overline{x}_{1,m})\quad \text{and}\quad M^{\ast}_{k}\overline{v}_{2,k} = N^{\ast}_{k}\overline{v}_{1,k},\\ N_{k}\Big({\sum}_{i=1}^{m} L_{k,i}\overline{x}_{1,i} -r_{k} -\overline{y}_{k}\Big)\in D_{k}^{-1}\overline{v}_{1,k}\quad \text{and}\quad M_{k}\overline{y}_{k} \in B_{k}^{-1}\overline{v}_{2,k},\\ \left(\overline{x}_{1,1},\ldots,\overline{x}_{1,m},N_{1}^{\ast}\overline{v}_{1,1},\ldots,N_{s}^{\ast}\overline{v}_{1,s}\right) \in\varOmega. \end{array}\right. $$
  4. (iv)

    Suppose that \(A_{j}\) is uniformly monotone at \(\overline {x}_{1,j}\) for some j∈{1,…,m}; then \(x_{1,n}^{j} \to \overline {x}_{1,j}\).

  5. (v)

    Suppose that the operator \((x_{i})_{1\leq i\leq m} \mapsto (C_{j}(x_{1},\ldots ,x_{m}))_{1\leq j\leq m}\) is uniformly monotone at \((\overline {x}_{1,1},\ldots ,\overline {x}_{1,m})\); then \((\forall i\in \{1,\ldots ,m\})\; x_{1,n}^{i}\to \overline {x}_{1,i}\).

  6. (vi)

    Suppose that there exist j∈{1,…,m} and an increasing function \(\phi _{j}\colon [0,+\infty [\to [0,+\infty ]\) vanishing only at 0 such that

    $$\begin{array}{@{}rcl@{}} &&\left(\forall (x_{i})_{1\leq i\leq m}\in \mathcal{H}_{1}\times\cdots\times\mathcal{H}_{m}\right) \\ &&\quad\sum\limits_{i=1}^{m} \langle C_{i}(x_{1},\ldots, x_{m})\! -\! C_{i}(\overline{x}_{1,1},\ldots,\overline{x}_{1,m})\!\mid \!x_{i}\,-\, \overline{x}_{1,i}\rangle\! \geq \phi_{j}(\|x_{j}-\overline{x}_{1,j}\!\|), \end{array} $$
    (12)

    then \(x_{1,n}^{j}\to \overline {x}_{1,j}\).

  7. (vii)

    Suppose that \(D^{-1}_{j}\) is uniformly monotone at \(\overline {v}_{1,j}\) for some j∈{1,…,s}; then \(v_{1,n}^{j} \to \overline {v}_{1,j}\).

  8. (viii)

    Suppose that \(B^{-1}_{j}\) is uniformly monotone at \(\overline {v}_{2,j}\) for some j∈{1,…,s}; then \(v_{2,n}^{j} \to \overline {v}_{2,j}\).

Proof

Let us introduce the Hilbert direct sums

$$\boldsymbol{\mathcal{H}} = \mathcal{H}_{1}\oplus\dots\oplus\mathcal{H}_{m}, \quad \boldsymbol{\mathcal{G}} = \mathcal{G}_{1}\oplus\dots\oplus\mathcal{G}_{s},\quad \boldsymbol{\mathcal{Y}} = \mathcal{Y}_{1}\oplus\dots\oplus\mathcal{Y}_{s}, \quad \boldsymbol{\mathcal{X}} = \mathcal{X}_{1}\oplus\dots\oplus\mathcal{X}_{s}. $$

We use boldface symbols to denote the elements of these spaces. The scalar products and the norms of these spaces are defined in the usual way. For example, in \(\boldsymbol {\mathcal {H}}\),

$$\langle \cdot\mid\cdot\rangle \colon ({\boldsymbol{x}},\boldsymbol{y})\mapsto \sum\limits_{i=1}^{m} \langle x_{i}\mid y_{i}\rangle \quad \text{ and }\quad \|\cdot\|\colon {\boldsymbol x}\mapsto \sqrt{\langle {\boldsymbol x}\mid {\boldsymbol x}\rangle}. $$

Set

$$ \left\{\begin{array}{l} {\boldsymbol A}\colon \boldsymbol{\mathcal{H}}\to 2^{\boldsymbol{\mathcal{H}}}\colon {\boldsymbol x} \mapsto \huge{\times}_{i=1}^{m} A_{i}x_{i},\\ {\boldsymbol{C}} \colon \boldsymbol{\mathcal{H}}\to \boldsymbol{\mathcal{H}}\colon {\boldsymbol x} \mapsto (C_{i}{\boldsymbol x})_{1\leq i\leq m},\\ {\boldsymbol L}\colon \boldsymbol{\mathcal{H}}\to\boldsymbol{\mathcal{G}}\colon {\boldsymbol x} \mapsto \left({\sum}_{i=1}^{m}L_{k,i}x_{i}\right)_{1\leq k\leq s},\\ {\boldsymbol N}\colon\boldsymbol{\mathcal{G}}\to\boldsymbol{\mathcal{X}}\colon {\boldsymbol v} \mapsto (N_{k}v_{k})_{1\leq k\leq s},\\ {\boldsymbol z} = (z_{1},\ldots, z_{m}), \end{array}\right. ~ \text{and }~ \left\{\begin{array}{l} {\boldsymbol B}\colon \boldsymbol{\mathcal{Y}}\to 2^{\boldsymbol{\mathcal{Y}}}\colon {\boldsymbol v} \mapsto \huge{\times}_{k=1}^{s} B_{k}v_{k},\\ {\boldsymbol D}\colon \boldsymbol{\mathcal{X}}\to 2^{\boldsymbol{\mathcal{X}}}\colon {\boldsymbol v} \mapsto \huge{\times}_{k=1}^{s} D_{k}v_{k},\\ {\boldsymbol M }\colon\boldsymbol{\mathcal{G}}\to\boldsymbol{\mathcal{Y}}\colon {\boldsymbol v} \mapsto (M_{k}v_{k})_{1\leq k\leq s},\\ {\boldsymbol r} = (r_{1},\ldots, r_{s}). \end{array}\right. $$
(13)

Then, it follows from (7) that

$$ \left(\forall({\boldsymbol x},{\boldsymbol y})\in\boldsymbol{\mathcal{H}}^{2}\right)\quad \|{\boldsymbol{Cx}}-{\boldsymbol{Cy}} \|\leq \nu_{0}\|{\boldsymbol x}-{\boldsymbol y}\| \quad \text{and} \quad \langle {\boldsymbol{Cx}}-{\boldsymbol{Cy}}\mid {\boldsymbol x}-{\boldsymbol y}\rangle\geq0, $$

which shows that C is \(\nu _{0}\)-Lipschitzian and monotone, hence maximally monotone [3, Corollary 20.25]. Moreover, it follows from [3, Proposition 20.23] that A, B, and D are maximally monotone. Furthermore,

$$ \left\{\begin{array}{l} {\boldsymbol L}^{\ast}\colon \boldsymbol{\mathcal{G}}\to\boldsymbol{\mathcal{H}}\colon {\boldsymbol v} \mapsto \left({\sum}_{k=1}^{s} L_{k,i}^{\ast}v_{k}\right)_{1\leq i\leq m},\\ {\boldsymbol M }^{\ast}\colon\boldsymbol{\mathcal{Y}}\to\boldsymbol{\mathcal{G}}\colon{\boldsymbol v}\mapsto \left(M_{k}^{\ast}v_{k}\right)_{1\leq k\leq s},\\ {\boldsymbol N}^{\ast}\colon\boldsymbol{\mathcal{X}}\to\boldsymbol{\mathcal{G}}\colon{\boldsymbol v}\mapsto \left(N_{k}^{\ast}v_{k}\right)_{1\leq k\leq s}. \end{array}\right. $$
(14)

Then, using (13) and (14), we can rewrite the system of monotone inclusions (8) as monotone inclusions in \(\boldsymbol {\mathcal {K}} = \boldsymbol {\mathcal {H}}\oplus \boldsymbol {\mathcal {G}}\),

$$ \text{find } (\overline{{\boldsymbol x}},\overline{{\boldsymbol v}})\in\boldsymbol{\mathcal{K}} \text{ such that } \left\{\begin{array}{l} {\boldsymbol z}-{\boldsymbol L}^{\ast}\overline{{\boldsymbol v}}\in({\boldsymbol A}+{\boldsymbol{C}})\overline{{\boldsymbol x}}, \\ {\boldsymbol L}\overline{{\boldsymbol x}}-{\boldsymbol r}\in \left(({\boldsymbol M }^{\ast}\circ{\boldsymbol B}\circ{\boldsymbol M })^{-1} + ({\boldsymbol N}^{\ast}\circ{\boldsymbol D}\circ{\boldsymbol N})^{-1}\right)\overline{{\boldsymbol v}}. \end{array}\right. $$
(15)

It follows from (15) that there exists \(\overline {{\boldsymbol y}} \in \boldsymbol {\mathcal {G}}\) such that

$$\left\{\begin{array}{l} {\boldsymbol z}-{\boldsymbol L}^{\ast}\overline{{\boldsymbol v}}\in({\boldsymbol A}+{\boldsymbol{C}})\overline{{\boldsymbol x}}, \\ \overline{{\boldsymbol y}} \in ({\boldsymbol M }^{\ast}\circ{\boldsymbol B}\circ{\boldsymbol M })^{-1}\overline{{\boldsymbol v}},\\ {\boldsymbol L}\overline{{\boldsymbol x}}-\overline{{\boldsymbol y}} -{\boldsymbol r}\in ({\boldsymbol N}^{\ast}\circ{\boldsymbol D}\circ{\boldsymbol N})^{-1}\overline{{\boldsymbol v}} \end{array}\right. \Longleftrightarrow~ \left\{\begin{array}{l} {\boldsymbol z}-{\boldsymbol L}^{\ast}\overline{{\boldsymbol v}}\in({\boldsymbol A}+{\boldsymbol{C}})\overline{{\boldsymbol x}},\\ \overline{{\boldsymbol v}} \in {\boldsymbol M }^{\ast}\circ{\boldsymbol B}\circ{\boldsymbol M }\overline{{\boldsymbol y}},\\ \overline{{\boldsymbol v}} \in {\boldsymbol N}^{\ast}\circ{\boldsymbol D}\circ{\boldsymbol N}({\boldsymbol L}\overline{{\boldsymbol x}}-\overline{{\boldsymbol y}} -{\boldsymbol r}), \end{array}\right. $$

which implies that

$$ \left\{\begin{array}{l} {\boldsymbol z}\in ({\boldsymbol A}+{\boldsymbol{C}})\overline{{\boldsymbol x}} + {\boldsymbol L}^{\ast}{\boldsymbol N}^{\ast}\left({\boldsymbol D}({\boldsymbol N}{\boldsymbol L}\overline{{\boldsymbol x}}-{\boldsymbol N}\overline{{\boldsymbol y}}-{\boldsymbol N}{\boldsymbol r})\right),\\ 0\in{\boldsymbol M }^{\ast}\circ{\boldsymbol B}\circ{\boldsymbol M }\overline{{\boldsymbol y}}-{\boldsymbol N}^{\ast}\left({\boldsymbol D}({\boldsymbol N}{\boldsymbol L}\overline{{\boldsymbol x}}-{\boldsymbol N}\overline{{\boldsymbol y}}-{\boldsymbol N}{\boldsymbol r})\right). \end{array}\right. $$
(16)

Since \(\varOmega \not =\varnothing \), the problem (16) possesses at least one solution. The problem (16) is a special case of the primal problem in [16, (1.2)] with

$$ \left\{\begin{array}{l} m =2, K =2,\\ \boldsymbol{\mathcal{H}}_{1} = \boldsymbol{\mathcal{H}}, \boldsymbol{\mathcal{G}}_{1} = \boldsymbol{\mathcal{X}},\\ \boldsymbol{\mathcal{H}}_{2} =\boldsymbol{\mathcal{G}}, \boldsymbol{\mathcal{G}}_{2} = \boldsymbol{\mathcal{Y}},\\ {\boldsymbol z}_{1} = {\boldsymbol z},\; {\boldsymbol z}_{2}=0,\\ {\boldsymbol r}_{1} = {\boldsymbol N}{\boldsymbol r},{\boldsymbol r}_{2} =0, \end{array}\right. \left\{\begin{array}{l} {\boldsymbol L}_{1,1} = {\boldsymbol N}{\boldsymbol L},\\ {\boldsymbol L}_{1,2} = -{\boldsymbol N},\\ {\boldsymbol L}_{2,1} = 0,\\ {\boldsymbol L}_{2,2} = {\boldsymbol M }, \end{array}\right. \left\{\begin{array}{l} {\boldsymbol A}_{1} = {\boldsymbol A},\\ {\boldsymbol{C}}_{1} = {\boldsymbol{C}},\\ {\boldsymbol A}_{2} = 0,\\ {\boldsymbol{C}}_{2} = 0, \end{array}\right. \text{ and }~ \left\{\begin{array}{l} {\boldsymbol B}_{1} = {\boldsymbol D}, \\ {\boldsymbol D}^{-1}_{1} = 0,\\ {\boldsymbol B}_{2} = {\boldsymbol B},\\ {\boldsymbol D}^{-1}_{2} = 0. \end{array}\right. $$
(17)

In view of [16, (1.4)], the dual problem of (16) is to find \(\overline {{\boldsymbol v}}_{1}\in \boldsymbol {\mathcal {X}}\) and \(\overline {{\boldsymbol v}}_{2}\in \boldsymbol {\mathcal {Y}}\) such that

$$\left\{\begin{array}{l} -{\boldsymbol{Nr}}\in -{\boldsymbol{NL}}({\boldsymbol A}+{\boldsymbol{C}})^{-1}({\boldsymbol z}-{\boldsymbol L}^{\ast}{\boldsymbol N}^{\ast}\overline{{\boldsymbol v}}_{1})+{\boldsymbol N}\{\boldsymbol{0}\}^{-1}({\boldsymbol N}^{\ast}\overline{{\boldsymbol v}}_{1}-{\boldsymbol M }^{\ast}\overline{{\boldsymbol v}}_{2}) +{\boldsymbol D}^{-1}\overline{{\boldsymbol v}}_{1},\\ 0\in -{\boldsymbol M }\{\boldsymbol{0}\}^{-1}({\boldsymbol N}^{\ast}\overline{{\boldsymbol v}}_{1}-{\boldsymbol M }^{\ast}\overline{{\boldsymbol v}}_{2})+{\boldsymbol B}^{-1}\overline{{\boldsymbol v}}_{2}, \end{array}\right. $$

where \(\{0\}^{-1}\) denotes the inverse of the zero operator, which maps each point to \(\{0\}\). We next show that the algorithm (11) is an application of the algorithm in [16, (2.4)] to (16). It follows from [3, Proposition 23.16] that

$$ (\forall {\boldsymbol x}\in\boldsymbol{\mathcal{H}})(\forall\gamma\in\left]0,+\infty\right[)\quad J_{\gamma{\boldsymbol A}_{1}}{\boldsymbol x} = (J_{\gamma A_{i}}x_{i})_{1\leq i\leq m} $$
(18)

and

$$ (\forall {\boldsymbol v}\in\boldsymbol{\mathcal{X}})(\forall\gamma\in ]0,+\infty[)~ J_{\gamma{\boldsymbol B}_{1}}{\boldsymbol v} = (J_{\gamma D_{k}}v_{k})_{1\leq k\leq s}~ \text{ and }~ (\forall {\boldsymbol v}\in\boldsymbol{\mathcal{Y}})~ J_{\gamma{\boldsymbol B}_{2}}{\boldsymbol v} = (J_{\gamma B_{k}}v_{k})_{1\leq k\leq s}. $$
(19)

Let us set

$$ (\forall n\in\mathbb{N}) \left\{\begin{array}{l} {\boldsymbol a}_{1,1,n} = \left(a_{1,1,n}^{1},\ldots, a_{1,1,n}^{m}\right),\\ {\boldsymbol b}_{1,1,n} = \left(b_{1,1,n}^{1},\ldots, b_{1,1,n}^{m}\right),\\ {\boldsymbol c}_{1,1,n} = \left(c_{1,1,n}^{1},\ldots, c_{1,1,n}^{m}\right),\\ {\boldsymbol a}_{1,2,n} = \left(a_{1,2,n}^{1},\ldots, a_{1,2,n}^{s}\right),\\ {\boldsymbol c}_{1,2,n} = \left(c_{1,2,n}^{1},\ldots, c_{1,2,n}^{s}\right), \end{array}\right. \text{ and }~ (\forall n\in\mathbb{N}) \left\{\begin{array}{l} {\boldsymbol a}_{2,1,n} = \left(a_{2,1,n}^{1},\ldots, a_{2,1,n}^{s}\right),\\ {\boldsymbol b}_{2,1,n} = \left(b_{2,1,n}^{1},\ldots, b_{2,1,n}^{s}\right),\\ {\boldsymbol c}_{2,1,n} = \left(c_{2,1,n}^{1},\ldots, c_{2,1,n}^{s}\right),\\ {\boldsymbol a}_{2,2,n} = \left(a_{2,2,n}^{1},\ldots, a_{2,2,n}^{s}\right),\\ {\boldsymbol b}_{2,2,n} = \left(b_{2,2,n}^{1},\ldots, b_{2,2,n}^{s}\right),\\ {\boldsymbol c}_{2,2,n} = \left(c_{2,2,n}^{1},\ldots, c_{2,2,n}^{s}\right). \end{array}\right. $$
(20)

Then, it follows from our assumptions that every sequence defined in (20) is absolutely summable. Now set

$$(\forall n\in\mathbb{N}) \left\{\begin{array}{l} {\boldsymbol x}_{1,n} = \left(x_{1,n}^{1},\ldots, x^{m}_{1,n}\right),\\ {\boldsymbol x}_{2,n} = \left(x_{2,n}^{1},\ldots, x_{2,n}^{s}\right) \end{array}\right. \quad \text{and} \quad \left\{\begin{array}{l} {\boldsymbol v}_{1,n} = \left(v_{1,n}^{1},\ldots, v^{s}_{1,n}\right),\\ {\boldsymbol v}_{2,n} = \left(v_{2,n}^{1},\ldots, v_{2,n}^{s}\right), \end{array}\right. $$

and set

$$ (\forall n\in\mathbb{N}) \left\{\begin{array}{l} {\boldsymbol s}_{1,1,n} \!= \left(s_{1,1,n}^{1},\ldots, s_{1,1,n}^{m}\right),\\ {\boldsymbol p}_{1,1,n} \!= \left(p_{1,1,n}^{1},\ldots, p_{1,1,n}^{m}\right),\\ {\boldsymbol q}_{1,1,n} \!= \left(q_{1,1,n}^{1},\ldots, q_{1,1,n}^{m}\right),\\ {\boldsymbol p}_{1,2,n} \!= \left(p_{1,2,n}^{1},\ldots, p_{1,2,n}^{s}\right),\\ {\boldsymbol q}_{1,2,n} \!= \left(q_{1,2,n}^{1},\ldots, q_{1,2,n}^{s}\right), \end{array}\right. \text{and }~ (\forall n\in\mathbb{N}) \left\{\begin{array}{l} {\boldsymbol s}_{2,1,n} \,=\, \left(s_{2,1,n}^{1},\ldots, s_{2,1,n}^{s}\right),\\ {\boldsymbol p}_{2,1,n} \,=\, \left(p_{2,1,n}^{1},\ldots, p_{2,1,n}^{s}\right),\\ {\boldsymbol q}_{2,1,n} \,=\, \left(q_{2,1,n}^{1},\ldots, q_{2,1,n}^{s}\right),\\ {\boldsymbol s}_{2,2,n} \,=\, \left(s_{2,2,n}^{1},\ldots, s_{2,2,n}^{s}\right),\\ {\boldsymbol p}_{2,2,n} \,=\, \left(p_{2,2,n}^{1},\ldots, p_{2,2,n}^{s}\right),\\ {\boldsymbol q}_{2,2,n} \,=\, \left(q_{2,2,n}^{1},\ldots, q_{2,2,n}^{s}\right). \end{array}\right. $$

Then, in view of (13), (14), (17), (18), and (19), algorithm (11) reduces to a special case of the algorithm in [16, (2.4)]. Moreover, it follows from (10) and (17) that the condition [16, (1.1)] is satisfied. Furthermore, the conditions on the stepsize sequence \((\gamma _{n})_{n\in \mathbb {N}}\) and, as shown above, the conditions on the operators and on the error sequences are also satisfied. To sum up, all the conditions in [16, Problem 1.1] and [16, Theorem 2.4] are satisfied.

(i), (ii): These conclusions follow from [16, Theorem 2.4 (i)] and [16, Theorem 2.4(ii)], respectively.

(iii): It follows from [16, Theorem 2.4(iii)(c)] and [16, Theorem 2.4(iii)(d)] that \({\boldsymbol x}_{1,n}\rightharpoonup \overline {{\boldsymbol x}}_{1}, {\boldsymbol x}_{2,n}\rightharpoonup \overline {{\boldsymbol y}}\) and \({\boldsymbol v}_{1,n}\rightharpoonup \overline {{\boldsymbol v}}_{1}, {\boldsymbol v}_{2,n}\rightharpoonup \overline {{\boldsymbol v}}_{2}\). We next derive from [16, Theorem 2.4(iii)(a)] and [16, Theorem 2.4(iii)(b)] that, for every i∈{1,…, m} and k∈{1,…, s},

$$ \left\{\begin{array}{l} z_{i}- {\sum}_{k=1}^{s} L_{k,i}^{\ast}N_{k}^{\ast}\overline{v}_{1,k}\in A_{i}\overline{x}_{1,i} + C_{i}(\overline{x}_{1,1},\ldots, \overline{x}_{1,m}),\\ M^{\ast}_{k}\overline{v}_{2,k} = N^{\ast}_{k}\overline{v}_{1,k}, \end{array}\right. $$
(21)

and

$$ \left\{\begin{array}{l} N_{k}\left({\sum}_{i=1}^{m} L_{k,i}\overline{x}_{1,i} -r_{k} -\overline{y}_{k}\right)\in D_{k}^{-1}\overline{v}_{1,k},\\ M_{k}\overline{y}_{k} \in B_{k}^{-1}\overline{v}_{2,k}. \end{array}\right. $$
(22)

We have

$$\begin{array}{@{}rcl@{}} (22)&\Leftrightarrow&\left\{\begin{array}{l}\overline{v}_{1,k}\in D_{k}\left(N_{k}\left({\sum}_{i=1}^{m} L_{k,i}\overline{x}_{1,i} -r_{k} -\overline{y}_{k}\right) \right),\\\overline{v}_{2,k}\in B_{k}(M_{k}\overline{y}_{k})\end{array}\right.\\&\Rightarrow& \left\{\begin{array}{l} N^{*}_{k}\overline{v}_{1,k}\in N^{*}_{k}\left(D_{k}\left(N_{k}\left({\sum}_{i=1}^{m}L_{k,i}\overline{x}_{1,i} -r_{k} -\overline{y}_{k}\right)\right)\right),\\ M^{*}_{k}\overline{v}_{2,k}\in M^{*}_{k}(B_{k}( M_{k}\overline{y}_{k}))\end{array}\right.\nonumber\\&\Rightarrow&\left\{\begin{array}{l}{\sum}_{i=1}^{m} L_{k,i}\overline{x}_{1,i} -r_{k} -\overline{y}_{k} \in (N^{\ast}_{k}\circ D_{k}\circ N_{k})^{-1}(N^{\ast}_{k}\overline{v}_{1,k}),\\\overline{y}_{k}\in (M^{*}_{k}\circ B_{k}\circ M_{k})^{-1}(M^{\ast}_{k}\overline{v}_{2,k})\end{array}\right.\nonumber\\&\Rightarrow& \sum\limits_{i=1}^{m} L_{k,i}\overline{x}_{1,i} -r_{k}\in (N^{\ast}_{k}\circ D_{k}\circ N_{k})^{-1}(N^{\ast}_{k}\overline{v}_{1,k})+(M^{\ast}_{k}\circ B_{k}\circ M_{k})^{-1}(N^{\ast}_{k}\overline{v}_{1,k}). \end{array} $$
(23)

Therefore, (21) and (23) show that \((\overline {x}_{1,1},\ldots , \overline {x}_{1,m}, N^{\ast }_{1}\overline {v}_{1,1},\ldots , N^{\ast }_{s}\overline {v}_{1,s})\) is a solution to (8).

(iv): For every \(n\in \mathbb {N}\) and every i∈{1,…, m} and k∈{1,…, s}, set

$$ \left\{\!\!\begin{array}{l} \widetilde{s}_{1,1,n}^{i} = x^{i}_{1,n} - \gamma_{n}\left(C_{i}(x_{1,n}^{1},\ldots, x_{1,n}^{m})\right.\\ \qquad \quad+\left. {\sum}_{k=1}^{s}L_{k,i}^{\ast}N_{k}^{\ast}v_{1,n}^{k}\right),\\ \widetilde{p}_{1,2,n}^{k} = x_{2,n}^{k} -\gamma_{n}\left( N_{k}^{\ast}v_{1,n}^{k} - M_{k}^{\ast}v_{2,n}^{k} \right),\\ \widetilde{p}_{1,1,n}^{i}=J_{\gamma_{n} A_{i}}(\widetilde{s}_{1,1,n}^{i}+ \gamma_{n}z_{i}) \end{array}\right.\!\!\!\!\! \text{ and} \left\{\begin{array}{l} \widetilde{s}_{2,1,n}^{k} = v_{1,n}^{k} + \gamma_{n}\left({\sum}_{i=1}^{m}N_{k}L_{k,i} x_{1,n}^{i}\right.\\ \hspace{1,2cm}\left. - N_{k}x_{2,n}^{k}\right),\\ \widetilde{p}_{2,1,n}^{k} = \widetilde{s}_{2,1,n}^{k}-\gamma_{n} \left(N_{k}r_{k}\right.\\ \left.\hspace{1,2cm}+ J_{\gamma_{n}^{-1}D_{k}}\!\left(\gamma_{n}^{-1}\widetilde{s}_{2,1,n}^{k} -N_{k}r_{k}\right)\right),\\ \widetilde{s}_{2,2,n}^{k} = v_{2,n}^{k} + \gamma_{n}M_{k}x_{2,n}^{k},\\ \widetilde{p}_{2,2,n}^{k} = \widetilde{s}_{2,2,n}^{k} -\gamma_{n}J_{\gamma^{-1}_{n}B_{k}}(\gamma^{-1}_{n}\widetilde{s}_{2,2,n}^{k}). \end{array}\right. $$
(24)

Since \((\forall i\in \{1,\ldots ,m\})\; a_{1,1,n}^{i}\to 0, b_{1,1,n}^{i}\to 0, (\forall k\in \{1,\ldots ,s\})\;a_{2,1,n}^{k}\to 0, a_{2,2,n}^{k}\to 0\) and \(b_{2,1,n}^{k}\to 0, b_{2,2,n}^{k}\to 0\), and since the resolvents of \((A_{i})_{1\leq i\leq m}, (B_{k}^{-1})_{1\leq k\leq s}\) and \((D_{k}^{-1})_{1\leq k\leq s}\) are nonexpansive, we obtain

$$\left\{\begin{array}{l} (\forall i\in\{1,\ldots, m\})\quad\widetilde{p}_{1,1,n}^{i} - p_{1,1,n}^{i}\to 0,\\ (\forall k\in\{1,\ldots, s\})\quad\widetilde{p}_{1,2,n}^{k}-p_{1,2,n}^{k}\to 0, \end{array}\right. $$

and

$$\left\{\begin{array}{l} (\forall k\in \{1,\ldots,s\})\quad \widetilde{p}_{2,1,n}^{k}-p_{2,1,n}^{k} \to 0,\\ (\forall k\in \{1,\ldots,s\})\quad \widetilde{p}_{2,2,n}^{k}-p_{2,2,n}^{k} \to 0. \end{array}\right. $$

In turn, by (i) and (ii), we obtain

$$ \left\{\begin{array}{l} (\forall i\in\{1,\ldots,m\})\quad \widetilde{p}_{1,1,n}^{i} - x_{1,n}^{i}\to 0, \quad \widetilde{p}_{1,1,n}^{i} \rightharpoonup \overline{x}_{1,i},\\ (\forall k\in\{1,\ldots,s\})\quad \widetilde{p}_{1,2,n}^{k}-x_{2,n}^{k} \to 0, \quad \widetilde{p}_{1,2,n}^{k}\rightharpoonup \overline{y}_{k} \end{array}\right. $$
(25)

and

$$ (\forall k\in\{1,\ldots,s\})\quad \left\{\begin{array}{l} \widetilde{p}_{2,1,n}^{k} - v_{1,n}^{k}\to 0,\quad \widetilde{p}_{2,1,n}^{k} \rightharpoonup \overline{v}_{1,k},\\ \widetilde{p}_{2,2,n}^{k}-v_{2,n}^{k} \to 0,\quad \widetilde{p}_{2,2,n}^{k}\rightharpoonup \overline{v}_{2,k}. \end{array}\right. $$
(26)

Set

$$ (\forall n\in\mathbb{N})~ \left\{\begin{array}{l} \widetilde{{\boldsymbol p}}_{1,1,n}= (\widetilde{p}_{1,1,n}^{1},\ldots, \widetilde{p}_{1,1,n}^{m}),\\ \widetilde{{\boldsymbol p}}_{1,2,n}= (\widetilde{p}_{1,2,n}^{1},\ldots,\widetilde{p}_{1,2,n}^{s}) \end{array}\right. \text{and } \left\{\begin{array}{l} \widetilde{{\boldsymbol p}}_{2,1,n}= (\widetilde{p}_{2,1,n}^{1},\ldots, \widetilde{p}_{2,1,n}^{s}),\\ \widetilde{{\boldsymbol p}}_{2,2,n}= (\widetilde{p}_{2,2,n}^{1},\ldots, \widetilde{p}_{2,2,n}^{s}). \end{array}\right. $$
(27)

Then, it follows from (26) that

$$ \left\{\begin{array}{l} \gamma_{n}^{-1}({\boldsymbol x}_{1,n}-\widetilde{{\boldsymbol p}}_{1,1,n})\to 0,\\ \gamma_{n}^{-1}({\boldsymbol x}_{2,n}-\widetilde{{\boldsymbol p}}_{1,2,n})\to 0 \end{array}\right. \quad \text{and }\quad \left\{\begin{array}{l} \gamma_{n}^{-1}({\boldsymbol v}_{1,n}-\widetilde{{\boldsymbol p}}_{2,1,n})\to 0,\\ \gamma_{n}^{-1}({\boldsymbol v}_{2,n}-\widetilde{{\boldsymbol p}}_{2,2,n})\to 0. \end{array}\right. $$
(28)

Furthermore, we derive from (24) that, for every i∈{1,…, m} and k∈{1,…, s}

$$ (\forall n\in\mathbb{N})\quad\!\!\!\!\! \left\{\begin{array}{l} \gamma^{-1}_{n}(x^{i}_{1,n}-\widetilde{p}_{1,1,n}^{i})\! -\!{\sum}_{k = 1}^{s} L_{k,i}^{\ast}N_{k}^{\ast}v_{1,n}^{k}\! -\! C_{i}(x_{1,n}^{1},\ldots,x_{1,n}^{m}) \in \!-z_{i} + A_{i}\widetilde{p}_{1,1,n}^{i},\\ \gamma^{-1}_{n}(\widetilde{s}_{2,2,n}^{k}-\widetilde{p}_{2,2,n}^{k}) \in B_{k}^{-1}\widetilde{p}_{2,2,n}^{k},\\ \gamma^{-1}_{n}(\widetilde{s}_{2,1,n}^{k}-\widetilde{p}_{2,1,n}^{k}) \in N_{k}r_{k}+ D_{k}^{-1}\widetilde{p}_{2,1,n}^{k}. \end{array}\right. $$
(29)

Since \(A_{j}\) is uniformly monotone at \(\overline {x}_{1,j}\), using (29) and (21), there exists an increasing function \(\phi _{A_{j}} \colon [0,+\infty [ \to [0,+\infty ]\) vanishing only at 0 such that, for every \(n\in \mathbb {N}\),

$$\begin{array}{@{}rcl@{}} &&\phi_{A_{j}}(\|\widetilde{p}_{1,1,n}^{j} -\overline{x}_{1,j}\|)\\ &&\leq\left\langle \widetilde{p}_{1,1,n}^{j} -\overline{x}_{1,j}\mid\gamma_{n}^{-1}(x_{1,n}^{j}-\widetilde{p}_{1,1,n}^{j}) -\sum\limits_{k = 1}^{s}L_{k,j}^{\ast}N_{k}^{\ast}(v_{1,n}^{k}-\overline{v}_{1,k}) -(C_{j}{\boldsymbol x}_{1,n}-C_{j}\overline{{\boldsymbol x}}_{1})\right\rangle\\ &&= \left\langle \widetilde{p}_{1,1,n}^{j}-\overline{x}_{1,j}\mid \gamma_{n}^{-1}(x_{1,n}^{j}-\widetilde{p}_{1,1,n}^{j})\right\rangle -\! \sum\limits_{k = 1}^{s}\left\langle \widetilde{p}_{1,1,n}^{j}-\overline{x}_{1,j}\mid L_{k,j}^{\ast}N_{k}^{\ast}(v_{1,n}^{k} - \overline{v}_{1,k})\right\rangle \,-\,\chi_{j,n}, \end{array} $$

where we denote \((\forall n \in \mathbb {N})\; \chi _{j,n} = \langle \widetilde {p}_{1,1,n}^{j} -\overline {x}_{1,j}\mid C_{j}{\boldsymbol x}_{1,n} -C_{j}\overline {{\boldsymbol x}}_{1}\rangle \). Therefore,

$$\begin{array}{@{}rcl@{}} && \phi_{A_{j}}(\|\widetilde{p}_{1,1,n}^{j}-\overline{x}_{1,j}\|)\\ &&\leq \langle \widetilde{{\boldsymbol p}}_{1,1,n} -\overline{{\boldsymbol x}}_{1} \mid \gamma_{n}^{-1}({\boldsymbol x}_{1,n}-\widetilde{{\boldsymbol p}}_{1,1,n})\rangle - \langle \widetilde{{\boldsymbol p}}_{1,1,n}-\overline{{\boldsymbol x}}_{1}\mid {\boldsymbol L}^{\ast}{\boldsymbol N}^{\ast}({\boldsymbol v}_{1,n} - \overline{{\boldsymbol v}}_{1})\rangle - \chi_{n}\\ &&=\langle \widetilde{{\boldsymbol p}}_{1,1,n} -\overline{{\boldsymbol x}}_{1} \mid \gamma_{n}^{-1}({\boldsymbol x}_{1,n}-\widetilde{{\boldsymbol p}}_{1,1,n})\rangle - \langle \widetilde{{\boldsymbol p}}_{1,1,n}-{\boldsymbol x}_{1,n} \mid {\boldsymbol L}^{\ast}{\boldsymbol N}^{\ast}({\boldsymbol v}_{1,n} - \overline{{\boldsymbol v}}_{1})\rangle \\ &&\quad-\langle {\boldsymbol x}_{1,n}-\overline{{\boldsymbol x}}_{1} \mid {\boldsymbol L}^{\ast}{\boldsymbol N}^{\ast}({\boldsymbol v}_{1,n} - \overline{{\boldsymbol v}}_{1})\rangle - \chi_{n}, \end{array} $$
(30)

where \(\chi _{n} = {\sum }_{i=1}^{m}\chi _{i,n} = \langle \widetilde {{\boldsymbol p}}_{1,1,n} -\overline {{\boldsymbol x}}_{1} \mid {\boldsymbol {C}}{\boldsymbol x}_{1,n} -{\boldsymbol {C}}\overline {{\boldsymbol x}}_{1}\rangle \). Since \((B_{k}^{-1})_{1\leq k\leq s}\) and \((D_{k}^{-1})_{1\leq k\leq s}\) are monotone, we derive from (22) and (29) that for every k∈{1,…, s},

$$\left\{\!\!\begin{array}{l} 0\leq\left\langle \widetilde{p}_{2,1,n}^{k}-\overline{v}_{1,k} \mid \gamma_{n}^{-1}(v_{1,n}^{k} -\widetilde{p}_{2,1,n}^{k})+{\sum}_{i=1}^{m} N_{k}L_{k,i}(x_{1,n}^{i}-\overline{x}_{1,i})-N_{k}\left(x_{2,n}^{k}-\overline{y}_{k}\right) \right\rangle,\\ 0\leq\langle \widetilde{p}_{2,2,n}^{k}-\overline{v}_{2,k} \mid \gamma_{n}^{-1}(v_{2,n}^{k}-\widetilde{p}_{2,2,n}^{k}) + M_{k}(x_{2,n}^{k}-\overline{y}_{k})\rangle, \end{array}\right. $$

which implies that

$$ 0\leq \left\langle \widetilde{{\boldsymbol p}}_{2,2,n}-\overline{{\boldsymbol v}}_{2} \mid \gamma_{n}^{-1}({\boldsymbol v}_{2,n}-\widetilde{{\boldsymbol p}}_{2,2,n})\right\rangle +\left\langle \widetilde{{\boldsymbol p}}_{2,2,n}-\overline{{\boldsymbol v}}_{2} \mid {\boldsymbol M }({\boldsymbol x}_{2,n}-\overline{{\boldsymbol y}})\right\rangle\\ $$
(31)

and

$$\begin{array}{@{}rcl@{}} 0&\leq& \left\langle \widetilde{{\boldsymbol p}}_{2,1,n}-\overline{{\boldsymbol v}}_{1} \mid \gamma_{n}^{-1}({\boldsymbol v}_{1,n}-\widetilde{{\boldsymbol p}}_{2,1,n}) \right\rangle +\langle {\boldsymbol N}{\boldsymbol L}({\boldsymbol x}_{1,n}-\overline{{\boldsymbol x}}_{1}) \mid \widetilde{{\boldsymbol p}}_{2,1,n} - \overline{{\boldsymbol v}}_{1}\rangle \\ &&-\langle \widetilde{{\boldsymbol p}}_{2,1,n}-\overline{{\boldsymbol v}}_{1}\mid {\boldsymbol N}({\boldsymbol x}_{2,n}-\overline{{\boldsymbol y}})\rangle. \end{array} $$
(32)

We expand \((\chi _{n})_{n\in \mathbb {N}}\) as

$$\begin{array}{@{}rcl@{}} (\forall n\in\mathbb{N})\quad \chi_{n} &=& \langle {\boldsymbol x}_{1,n}-\overline{{\boldsymbol x}}_{1} \mid {\boldsymbol{C}}{\boldsymbol x}_{1,n}-{\boldsymbol{C}}\overline{{\boldsymbol x}}_{1}\rangle + \langle \widetilde{{\boldsymbol p}}_{1,1,n} -{\boldsymbol x}_{1,n} \mid {\boldsymbol{C}}{\boldsymbol x}_{1,n}-{\boldsymbol{C}}\overline{{\boldsymbol x}}_{1}\rangle\\ &\geq& \langle \widetilde{{\boldsymbol p}}_{1,1,n} -{\boldsymbol x}_{1,n} \mid {\boldsymbol{C}}{\boldsymbol x}_{1,n}-{\boldsymbol{C}}\overline{{\boldsymbol x}}_{1}\rangle, \end{array} $$
(33)

where the last inequality follows from the monotonicity of C. Now, adding (30)–(33) and using \({\boldsymbol M }^{\ast }\overline {{\boldsymbol v}}_{2} = {\boldsymbol N}^{\ast }\overline {{\boldsymbol v}}_{1}\), we obtain

$$\begin{array}{@{}rcl@{}} &\phi_{A_{j}}&(\|\widetilde{p}_{1,1,n}^{j} -\overline{x}_{1,j}\|)\\ &\leq&\!\! \langle \widetilde{{\boldsymbol p}}_{1,1,n} -\overline{{\boldsymbol x}}_{1} \mid \gamma_{n}^{-1}({\boldsymbol x}_{1,n}-\widetilde{{\boldsymbol p}}_{1,1,n})\rangle - \langle \widetilde{{\boldsymbol p}}_{1,1,n}-{\boldsymbol x}_{1,n} \mid {\boldsymbol L}^{\ast}{\boldsymbol N}^{\ast}({\boldsymbol v}_{1,n} - \overline{{\boldsymbol v}}_{1})\rangle\\ &&+ \langle \widetilde{{\boldsymbol p}}_{2,2,n} -\overline{{\boldsymbol v}}_{2} \mid \gamma_{n}^{-1}({\boldsymbol v}_{2,n}-\widetilde{{\boldsymbol p}}_{2,2,n})\rangle + \langle \widetilde{{\boldsymbol p}}_{2,1,n}-\overline{{\boldsymbol v}}_{1} \mid \gamma_{n}^{-1}({\boldsymbol v}_{1,n}-\widetilde{{\boldsymbol p}}_{2,1,n})\rangle \\ &&+\langle {\boldsymbol M }^{\ast}\widetilde{{\boldsymbol p}}_{2,2,n} - {\boldsymbol N}^{\ast}\widetilde{{\boldsymbol p}}_{2,1,n} \mid {\boldsymbol x}_{2,n}-\overline{{\boldsymbol y}}\rangle + \langle {\boldsymbol N}{\boldsymbol L}({\boldsymbol x}_{1,n}-\overline{{\boldsymbol x}}_{1}) \mid \widetilde{{\boldsymbol p}}_{2,1,n} -{\boldsymbol v}_{1,n}\rangle-\chi_{n}.\qquad \end{array} $$
(34)

We next derive from (11) that

$$(\forall k\in\{1,\ldots,s\})\quad M^{\ast}_{k}p_{2,2,n}^{k}- N^{\ast}_{k}p_{2,1,n}^{k}= \gamma_{n}^{-1}(p_{1,2,n}^{k} -q_{1,2,n}^{k}) + c_{1,2,n}^{k}, $$

which, together with (27), (28), and [11, Theorem 2.5(i)], implies that

$${\boldsymbol M }^{\ast}\widetilde{{\boldsymbol p}}_{2,2,n}- {\boldsymbol N}^{\ast}\widetilde{{\boldsymbol p}}_{2,1,n}\to 0. $$

Furthermore, since \((({\boldsymbol x}_{i,n})_{n\in \mathbb {N}})_{1\leq i\leq 2}\) and \((\widetilde {{\boldsymbol p}}_{1,1,n})_{n\in \mathbb {N}}, (\widetilde {{\boldsymbol p}}_{2,1,n})_{n\in \mathbb {N}}, (\widetilde {{\boldsymbol p}}_{2,2,n})_{n\in \mathbb {N}}, ({\boldsymbol v}_{1,n})_{n\in \mathbb {N}}\) converge weakly, they are bounded. Hence,

$$\begin{array}{@{}rcl@{}} \tau &=& \sup_{n\in\mathbb{N}}\left\{\|{\boldsymbol x}_{1,n}-\overline{{\boldsymbol x}}_{1}\|, \|{\boldsymbol x}_{2,n}-\overline{{\boldsymbol y}}\|, \max_{1\leq i\leq2}\{\|\widetilde{{\boldsymbol p}}_{2,i,n}-\overline{{\boldsymbol v}}_{i}\|,\|\widetilde{{\boldsymbol p}}_{1,1,n}-\overline{{\boldsymbol x}}_{1}\|\},\|{\boldsymbol v}_{1,n}-\overline{{\boldsymbol v}}_{1}\| \right\}\\ & <&+\infty. \end{array} $$
(35)

Then, using the Cauchy–Schwarz inequality, the Lipschitz continuity of C, (35), and (28), it follows from (34) that

$$\begin{array}{@{}rcl@{}} \phi_{A_{j}}(\|\widetilde{p}_{1,1,n}^{j} -\overline{x}_{1,j}\|)&\leq& \tau((\gamma_{n}^{-1}+\|{\boldsymbol N}{\boldsymbol L}\|)(\|\widetilde{{\boldsymbol p}}_{1,1,n} -{\boldsymbol x}_{1,n} \|+ \|\widetilde{{\boldsymbol p}}_{2,1,n} -{\boldsymbol v}_{1,n}\|)\\ && +\|\gamma_{n}^{-1}({\boldsymbol v}_{2,n}-\widetilde{{\boldsymbol p}}_{2,2,n})\| + \upmu\|\widetilde{{\boldsymbol p}}_{1,1,n} -{\boldsymbol x}_{1,n}\|\\ &&+\|{\boldsymbol M}^{\ast}\widetilde{\boldsymbol p}_{2,2,n}- {\boldsymbol N}^{\ast}\widetilde{\boldsymbol p}_{2,1,n}\|)\\ &\rightarrow & 0, \end{array} $$
(36)

in turn, \(\widetilde {p}_{1,1,n}^{j} \to \overline {x}_{1,j}\) and hence, by (25), \(x_{1,n}^{j}\to \overline {x}_{1,j}\).

(v): Since C is uniformly monotone at \(\overline {{\boldsymbol x}}_{1}\), there exists an increasing function \(\phi _{{\boldsymbol {C}}}\colon [0,+\infty [\to [0,+\infty ]\) vanishing only at 0 such that

$$(\forall n\in\mathbb{N})\quad \langle {\boldsymbol x}_{1,n}-\overline{{\boldsymbol x}}_{1} \mid {\boldsymbol{C}}{\boldsymbol x}_{1,n}- {\boldsymbol{C}}\overline{{\boldsymbol x}}_{1}\rangle \geq \phi_{{\boldsymbol{C}}}(\|{\boldsymbol x}_{1,n}-\overline{{\boldsymbol x}}_{1}\|), $$

and hence, (33) becomes

$$\begin{array}{@{}rcl@{}} (\forall n\in\mathbb{N})\quad \chi_{n} &=& \langle {\boldsymbol x}_{1,n}-\overline{{\boldsymbol x}}_{1} \mid {\boldsymbol{C}}{\boldsymbol x}_{1,n}- {\boldsymbol{C}}\overline{{\boldsymbol x}}_{1}\rangle + \langle \widetilde{{\boldsymbol p}}_{1,1,n} -{\boldsymbol x}_{1,n} \mid {\boldsymbol{C}}{\boldsymbol x}_{1,n}- {\boldsymbol{C}}\overline{{\boldsymbol x}}_{1}\rangle\\ &\geq& \langle \widetilde{{\boldsymbol p}}_{1,1,n} -{\boldsymbol x}_{1,n} \mid {\boldsymbol{C}}{\boldsymbol x}_{1,n}- {\boldsymbol{C}}\overline{{\boldsymbol x}}_{1}\rangle + \phi_{{\boldsymbol{C}}}(\|{\boldsymbol x}_{1,n}-\overline{{\boldsymbol x}}_{1}\|). \end{array} $$

Proceeding as in (iv), (36) becomes

$$\begin{array}{@{}rcl@{}} \phi_{{\boldsymbol{C}}}(\|{\boldsymbol x}_{1,n}-\overline{{\boldsymbol x}}_{1}\|)&\leq&\tau\left((\gamma_{n}^{-1}+\|{\boldsymbol N}{\boldsymbol L}\|) (\|\widetilde{{\boldsymbol p}}_{1,1,n} -{\boldsymbol x}_{1,n} \|+\|\widetilde{{\boldsymbol p}}_{2,1,n} -{\boldsymbol v}_{1,n}\|)\right.\\ && +\|\gamma_{n}^{-1}({\boldsymbol v}_{2,n}-\widetilde{{\boldsymbol p}}_{2,2,n})\|+ \upmu\|\widetilde{{\boldsymbol p}}_{1,1,n} -{\boldsymbol x}_{1,n}\|\\ &&\left.+\|{\boldsymbol M}^{\ast}\widetilde{\boldsymbol p}_{2,2,n}- {\boldsymbol N}^{\ast}\widetilde{\boldsymbol p}_{2,1,n}\| \right)\\ &\to& 0, \end{array} $$
(37)

in turn, \({\boldsymbol x}_{1,n}\to \overline {{\boldsymbol x}}_{1}\) or equivalently \((\forall i\in \{1,\ldots ,m\})\; x_{1,n}^{i}\to \overline {x}_{1,i}\).

(vi): Using the same argument as in the proof of (v), we arrive at (37) with \(\phi _{{\boldsymbol {C}}}(\|{\boldsymbol x}_{1,n}-\overline {{\boldsymbol x}}_{1}\|)\) replaced by \(\phi _{j}(\|x_{1,n}^{j}-\overline {x}_{1,j}\|)\), and hence we obtain the conclusion.

(vii) and (viii): Use the same argument as in the proof of (v). □
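
To make the recursion (11) easier to parse, the following is a minimal numerical sketch of a single iteration in the simplest case m = s = 1, with all error terms set to zero. The operators L, M, N are represented by matrices, C by a callable, and the resolvents \(J_{\gamma A}\), \(J_{\gamma ^{-1}B}\), \(J_{\gamma ^{-1}D}\) are supplied by the user as callables; all names are illustrative and not taken from [16] or from the present paper.

```python
import numpy as np
from typing import Callable

# A resolvent callable takes (point, parameter) and returns J_{parameter * Operator}(point).
Resolvent = Callable[[np.ndarray, float], np.ndarray]

def iterate_once(x1, x2, v1, v2, gamma,
                 C: Callable[[np.ndarray], np.ndarray],
                 L: np.ndarray, M: np.ndarray, N: np.ndarray,
                 z: np.ndarray, r: np.ndarray,
                 JA: Resolvent, JB: Resolvent, JD: Resolvent):
    """One error-free iteration of (11) with m = s = 1 (illustrative sketch only)."""
    # forward step on the primal variable x1
    s11 = x1 - gamma * (C(x1) + L.T @ (N.T @ v1))
    p11 = JA(s11 + gamma * z, gamma)                                    # J_{gamma A}
    # auxiliary primal variable x2
    p12 = x2 + gamma * (N.T @ v1 - M.T @ v2)
    # dual variable v1 (attached to D)
    s21 = v1 + gamma * (N @ (L @ x1) - N @ x2)
    p21 = s21 - gamma * (N @ r + JD(s21 / gamma - N @ r, 1.0 / gamma))  # J_{gamma^{-1} D}
    q21 = p21 + gamma * (N @ (L @ p11) - N @ p12)
    v1_new = v1 - s21 + q21
    # dual variable v2 (attached to B)
    s22 = v2 + gamma * (M @ x2)
    p22 = s22 - gamma * JB(s22 / gamma, 1.0 / gamma)                    # J_{gamma^{-1} B}
    q22 = p22 + gamma * (M @ p12)
    v2_new = v2 - s22 + q22
    # correction steps
    q12 = p12 + gamma * (N.T @ p21 - M.T @ p22)
    x2_new = x2 - p12 + q12
    q11 = p11 - gamma * (C(p11) + L.T @ (N.T @ p21))
    x1_new = x1 - s11 + q11
    return x1_new, x2_new, v1_new, v2_new
```

In this sketch the stepsize gamma is kept fixed; in Theorem 1 it may be any sequence \((\gamma _{n})_{n\in \mathbb {N}}\) in [ε,(1−ε)/β], with β given by (10).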

Remark 1

Here are some remarks.

  1. (i)

    In the special case when m = 1 and \((\forall k\in \{1,\ldots ,s\})\;\mathcal {G}_{k}=\mathcal {H}_{1}, L_{k,1} =\operatorname {Id}\), algorithm (11) reduces to the recent algorithm proposed in [4, (3.15)], where the convergence results are proved under the same conditions.

  2. (ii)

    In the special case when m = 1, an alternative algorithm proposed in [7] can be used to solve Problem 1.

  3. (iii)

    In the case when \((\forall k\in \{1,\ldots ,s\})(\forall i\in \{1,\ldots ,m\})\; L_{k,i} = 0\), algorithm (11) separates into two independent algorithms which solve, respectively, the first m inclusions and the last s inclusions in (8).

  4. (iv)

    In the case when \((\forall k\in \{1,\ldots ,s\})\;\mathcal {X}_{k} =\mathcal {Y}_{k}=\mathcal {G}_{k}\) and \(N_{k} = M_{k} =\operatorname {Id}\), we obtain a new splitting method for solving a coupled system of monotone inclusions. An alternative method can be found in [25] for the case when C is restricted to be cocoercive and \((D_{k})_{1\leq k\leq s}\) are strongly monotone.

  5. (v)

    Condition (12) is satisfied, for example, when each \(C_{i}\) is restricted to be univariate and monotone, and \(C_{j}\) is uniformly monotone.

3 Applications to Minimization Problems

The proposed algorithm has the structure of the forward-backward-forward splitting methods in [4, 11, 16, 18, 23]. Applications of this type of algorithm to specific problems in applied mathematics can be found in [3, 4, 10, 11, 16, 18, 19, 23] and the references therein. We provide an application to the following minimization problem, which extends [4, Problem 4.1] and [7, Problem 4.1]. We recall that the infimal convolution of two functions f and g from \(\mathcal {H}\) to ]−∞,+∞] is

$$f\;\square\; g\colon x \mapsto \inf_{y\in\mathcal{H}}(f(y)+g(x-y)). $$
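
For example (a standard fact recalled here for orientation), if \(g=\frac {1}{2}\|\cdot \|^{2}\) and \(f\in {\Gamma }_{0}(\mathcal {H})\), then \(f\;\square \;\frac {1}{2}\|\cdot \|^{2}\) is the Moreau envelope of f and, by (9), the infimum defining it at x is attained at \(y=\operatorname {prox}_{f}x\), so that

$$\left(f\;\square\;\tfrac12\|\cdot\|^{2}\right)(x)=\min_{y\in\mathcal{H}}\Big(f(y)+\tfrac12\|x-y\|^{2}\Big) =f(\operatorname{prox}_{f}x)+\tfrac12\|x-\operatorname{prox}_{f}x\|^{2}. $$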

Problem 2

Let m, s be strictly positive integers. For every i∈{1,…, m}, let \((\mathcal {H}_{i},\langle \cdot \mid \cdot \rangle )\) be a real Hilbert space, let \(z_{i}\in \mathcal {H}_{i}\), let \(f_{i}\in {\Gamma }_{0}(\mathcal {H}_{i})\), let \(\varphi \colon \mathcal {H}_{1}\times \cdots \times \mathcal {H}_{m}\to \mathbb {R}\) be a convex differentiable function with \(\nu _{0}\)-Lipschitz continuous gradient \(\nabla \varphi = (\nabla _{1}\varphi ,\ldots ,\nabla _{m}\varphi )\), for some \(\nu _{0}\in [0,+\infty [\). For every k∈{1,…, s}, let \((\mathcal {G}_{k},\langle \cdot \mid \cdot \rangle ), (\mathcal {Y}_{k},\langle \cdot \mid \cdot \rangle )\) and \((\mathcal {X}_{k},\langle \cdot \mid \cdot \rangle )\) be real Hilbert spaces, let \(r_{k} \in \mathcal {G}_{k}\), let \(g_{k}\in {\Gamma }_{0}(\mathcal {Y}_{k})\), let \(\ell _{k}\in {\Gamma }_{0}(\mathcal {X}_{k})\), let \(M_{k}\colon \mathcal {G}_{k}\to \mathcal {Y}_{k}\) and \(N_{k}\colon \mathcal {G}_{k}\to \mathcal {X}_{k}\) be bounded linear operators. For every i∈{1,…, m} and every k∈{1,…, s}, let \(L_{k,i}\colon \mathcal {H}_{i} \to \mathcal {G}_{k}\) be a bounded linear operator. The primal problem is to

$$\begin{array}{@{}rcl@{}} \underset{{x_{1}\in\mathcal{H}_{1}},\ldots, x_{m}\in\mathcal{H}_{m}}{\text{minimize}} &&\sum\limits_{k=1}^{s}\left((\ell_{k}\circ N_{k})\;\square\; (g_{k}\circ M_{k})\right)\left(\sum\limits_{i=1}^{m} L_{k,i}x_{i} -r_{k}\right)\\ &&+\sum\limits_{i=1}^{m}\left(f_{i}(x_{i}) - \langle x_{i} \mid z_{i}\rangle\right)+\varphi(x_{1},\ldots,x_{m}), \end{array} $$
(38)

and the dual problem is to

$$\begin{array}{@{}rcl@{}} \underset{\begin{array}{c}{\boldsymbol v}_{1}\in\boldsymbol{\mathcal{X}},{\boldsymbol v}_{2}\in \boldsymbol{\mathcal{Y}},\\ (\forall k\in\{1,\ldots,s\})\;M^{\ast}_{k}v_{2,k}=\! N^{\ast}_{k}v_{1,k} \end{array}}{\text{minimize}}&&\!\!\left(\varphi^{\ast}\;\square\; \left(\sum\limits_{i=1}^{m} f_{i}^{\ast}\right)\right) \!\left(\Big(z_{i}-\sum\limits_{k=1}^{s} L_{k,i}^{\ast}N^{\ast}_{k}v_{1,k}\Big)_{1\leq i\leq m} \right)\\ && \!\!\!\,+\,\!\sum\limits_{k=1}^{s}\left(\ell_{k}^{\ast}(v_{1,k})\! + \!g_{k}^{\ast}(v_{2,k}) +\! \langle N_{k}^{\ast}v_{1,k}\mid r_{k}\rangle\right) \end{array} $$
(39)
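
As an illustration (ours, not part of the original problem statement), take m = s = 1, \(M_{1}=N_{1}=\operatorname {Id}\), and \(\ell _{1}=\iota _{\{0\}}\), so that \(\ell _{1}\;\square \;g_{1}=g_{1}\); then (38) reduces to the classical composite problem

$$\underset{x\in\mathcal{H}_{1}}{\text{minimize}}\;\; f_{1}(x)-\langle x\mid z_{1}\rangle+g_{1}(L_{1,1}x-r_{1})+\varphi(x). $$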

Corollary 1

In Problem 2, suppose that (10) is satisfied and there exists \({\boldsymbol x}= (x_{1},\ldots , x_{m}) \in \mathcal {H}_{1}\times \dots \times \mathcal {H}_{m}\) such that, for all i∈{1,…,m},

$$\begin{array}{@{}rcl@{}} z_{i}&\in& \partial f_{i}(x_{i})+ \sum\limits_{k=1}^{s} L_{k,i}^{\ast}\left(\left((N_{k}^{\ast}\circ(\partial \ell_{k})\circ N_{k}) \;\square\; (M_{k}^{\ast}\circ (\partial g_{k})\circ M_{k})\right)\!\left(\sum\limits_{j=1}^{m} L_{k,j}x_{j}\!-r_{k}\right)\right)\\ &&+ \nabla_{i}\varphi({\boldsymbol x}). \end{array} $$
(40)

For every i∈{1,…,m}, let \((a^{i}_{1,1,n})_{n\in \mathbb {N}}, (b^{i}_{1,1,n})_{n\in \mathbb {N}}, (c^{i}_{1,1,n})_{n\in \mathbb {N}}\) be absolutely summable sequences in \(\mathcal {H}_{i}\) , for every k∈{1,…,s}, let \((a^{k}_{1,2,n})_{n\in \mathbb {N}}, (c^{k}_{1,2,n})_{n\in \mathbb {N}}\) be absolutely summable sequences in \(\mathcal {G}_{k}\) , let \((a^{k}_{2,1,n})_{n\in \mathbb {N}}, (b^{k}_{2,1,n})_{n\in \mathbb {N}}, (c^{k}_{2,1,n})_{n\in \mathbb {N}}\) be absolutely summable sequences in \(\mathcal {X}_{k}, (a^{k}_{2,2,n})_{n\in \mathbb {N}}, (b^{k}_{2,2,n})_{n\in \mathbb {N}}, (c^{k}_{2,2,n})_{n\in \mathbb {N}}\) be absolutely summable sequences in \(\mathcal {Y}_{k}\) . For every i∈{1,…,m} and k∈{1,…,s}, let \(x^{i}_{1,0} \in \mathcal {H}_{i}, x_{2,0}^{k} \in \mathcal {G}_{k}\) and \(v_{1,0}^{k} \in \mathcal {X}_{k}, v_{2,0}^{k} \in \mathcal {Y}_{k}\) , let ε∈]0,1/(β+1)[, let \((\gamma _{n})_{n\in \mathbb {N}}\) be a sequence in [ε,(1−ε)/β] and set

$$ \begin{array}{l} \text{For }\;n=0,1,\ldots,\\ \left\lfloor \begin{array}{l} \text{For }\;i=1,\ldots, m\\ \left\lfloor \begin{array}{l} s_{1,1,n}^{i} = x_{1,n}^{i} -\gamma_{n}\left(\nabla_{i}\varphi(x_{1,n}^{1},\ldots,x_{1,n}^{m})+ {\sum}_{k=1}^{s}L_{k,i}^{\ast}N_{k}^{\ast}v_{1,n}^{k} + a_{1,1,n}^{i}\right),\\ p_{1,1,n}^{i} = \operatorname{prox}_{\gamma_{n} f_{i}}(s_{1,1,n}^{i} +\gamma_{n} z_{i}) + b_{1,1,n}^{i} \end{array} \right.\\ \text{For }\;k=1,\ldots,s\\ \left\lfloor \begin{array}{l} p_{1,2,n}^{k} = x_{2,n}^{k} +\gamma_{n}\left( N_{k}^{\ast}v_{1,n}^{k} - M_{k}^{\ast}v_{2,n}^{k} + a_{1,2,n}^{k} \right),\\ s_{2,1,n}^{k} = v_{1,n}^{k} + \gamma_{n}\left({\sum}_{i=1}^{m}N_{k}L_{k,i} x_{1,n}^{i} - N_{k}x_{2,n}^{k} + a_{2,1,n}^{k} \right),\\ p_{2,1,n}^{k} = s_{2,1,n}^{k}-\gamma_{n} \left(N_{k}r_{k}+ \operatorname{prox}_{\gamma_{n}^{-1}\ell_{k}}(\gamma_{n}^{-1}s_{2,1,n}^{k} -N_{k}r_{k}) + b_{2,1,n}^{k} \right),\\ q_{2,1,n}^{k} = p_{2,1,n}^{k} +\gamma_{n}\left( N_{k}{\sum}_{i=1}^{m}L_{k,i}p_{1,1,n}^{i}-N_{k}p_{1,2,n}^{k} +c_{2,1,n}^{k} \right), \\ v_{1,n+1}^{k} = v_{1,n}^{k} - s_{2,1,n}^{k} + q_{2,1,n}^{k},\\ s_{2,2,n}^{k} = v_{2,n}^{k} + \gamma_{n}\left(M_{k}x_{2,n}^{k} + a_{2,2,n}^{k} \right),\\ p_{2,2,n}^{k} = s_{2,2,n}^{k}-\gamma_{n} \left(\operatorname{prox}_{\gamma_{n}^{-1}g_{k}}(\gamma_{n}^{-1}s_{2,2,n}^{k}) + b_{2,2,n}^{k} \right),\\ q_{2,2,n}^{k} = p_{2,2,n}^{k} +\gamma_{n}\left( M_{k}p_{1,2,n}^{k} +c_{2,2,n}^{k} \right), \\ v_{2,n+1}^{k} = v_{2,n}^{k} - s_{2,2,n}^{k} + q_{2,2,n}^{k},\\ q_{1,2,n}^{k} = p_{1,2,n}^{k} +\gamma_{n}\left(N_{k}^{\ast}p_{2,1,n}^{k} - M_{k}^{\ast}p_{2,2,n}^{k} + c_{1,2,n}^{k} \right),\\ x_{2,n+1}^{k} = x_{2,n}^{k} -p_{1,2,n}^{k}+ q_{1,2,n}^{k} \end{array} \right.\\ \text{For }\;i=1,\ldots, m\\ \left\lfloor \begin{array}{l} q_{1,1,n}^{i} = p_{1,1,n}^{i} -\gamma_{n}\left(\nabla_{i}\varphi(p_{1,1,n}^{1},\ldots,p_{1,1,n}^{m})+ {\sum}_{k=1}^{s}L_{k,i}^{\ast}N^{\ast}_{k}p_{2,1,n}^{k} + c_{1,1,n}^{i} \right),\\ x_{1,n+1}^{i} = x_{1,n}^{i} -s_{1,1,n}^{i}+ q_{1,1,n}^{i}. \end{array} \right.\\ \end{array} \right.\\ \end{array} $$
(41)

Then, the following hold for each i∈{1,…,m} and k∈{1,…,s},

  1. (i)

    \({\sum }_{n\in \mathbb {N}}\|x_{1,n}^{i} - p_{1,1,n}^{i} \|^{2} < +\infty \) and \({\sum }_{n\in \mathbb {N}}\|x_{2,n}^{k} - p_{1,2,n}^{k} \|^{2} < +\infty \).

  2. (ii)

    \({\sum }_{n\in \mathbb {N}}\|v_{1,n}^{k} - p_{2,1,n}^{k} \|^{2} < +\infty \) and \({\sum }_{n\in \mathbb {N}}\|v_{2,n}^{k} - p_{2,2,n}^{k} \|^{2} < +\infty \).

  3. (iii)

    \(x_{1,n}^{i}\rightharpoonup \overline {x}_{1,i}, v_{1,n}^{k}\rightharpoonup \overline {v}_{1,k}, v_{2,n}^{k}\rightharpoonup \overline {v}_{2,k}\); moreover, \((\overline {x}_{1,1},\ldots ,\overline {x}_{1,m})\) solves (38) and \((\overline {v}_{1,1},\ldots ,\overline {v}_{1,s}, \overline {v}_{2,1},\ldots ,\overline {v}_{2,s})\) solves (39).

  4. (iv)

    Suppose that \(f_{j}\) is uniformly convex at \(\overline {x}_{1,j}\) for some j∈{1,…,m}; then \(x_{1,n}^{j} \to \overline {x}_{1,j}\).

  5. (v)

    Suppose that φ is uniformly convex at \((\overline {x}_{1,1},\ldots ,\overline {x}_{1,m})\) , then \((\forall i\in \{1,\ldots ,m\})\; x_{1,n}^{i} \to \overline {x}_{1,i}\).

  6. (vi)

    Suppose that \(\ell ^{\ast }_{j}\) is uniformly convex at \(\overline {v}_{1,j}\) for some j∈{1,…,s}; then \(v_{1,n}^{j} \to \overline {v}_{1,j}\).

  7. (vii)

    Suppose that \(g^{\ast }_{j}\) is uniformly convex at \(\overline {v}_{2,j}\) for some j∈{1,…,s}; then \(v_{2,n}^{j} \to \overline {v}_{2,j}\).

Proof

Set

$$ \left\{\begin{array}{l} (\forall i\in\{1,\ldots,m\})\quad A_{i} = \partial f_{i},\quad C_{i} = \nabla_{i}\varphi,\\ (\forall k\in\{1,\ldots,s\})\quad B_{k} = \partial g_{k},\quad D_{k} = \partial\ell_{k}. \end{array}\right. $$
(42)

Then, it follows from [3, Theorem 20.40] that \((A_{i})_{1\leq i\leq m}\), \((B_{k})_{1\leq k\leq s}\), and \((D_{k})_{1\leq k\leq s}\) are maximally monotone. Moreover, \((C_{1},\ldots,C_{m})=\nabla\varphi\) is \(\nu_{0}\)-Lipschitzian. Therefore, all the conditions on the operators in Problem 1 are satisfied. Let \(\boldsymbol {\mathcal {H}}, \boldsymbol {\mathcal {G}}, \boldsymbol {\mathcal {X}}\), and \(\boldsymbol {\mathcal {Y}}\) be defined as in the proof of Theorem 1, let L, M, N, z, and r be defined as in (13), and define

$$\left\{\begin{array}{l} f\colon\boldsymbol{\mathcal{H}}\to ]-\infty,+\infty[\colon {\boldsymbol x}\mapsto {\sum}_{i=1}^{m} f_{i}(x_{i}),\\ g\colon\boldsymbol{\mathcal{Y}}\to ]-\infty,+\infty[\colon {\boldsymbol v}\mapsto {\sum}_{k=1}^{s} g_{k}(v_{k}),\\ \ell\colon\boldsymbol{\mathcal{X}}\to ]-\infty,+\infty[\colon {\boldsymbol v}\mapsto {\sum}_{k=1}^{s} \ell_{k}(v_{k}). \end{array}\right. $$

Observe that, by [3, Proposition 13.27],

$$f^{\ast}\colon {\boldsymbol y}\mapsto \sum\limits_{i=1}^{m} f_{i}^{\ast}(y_{i}),\quad g^{\ast}\colon {\boldsymbol v} \mapsto \sum\limits_{k=1}^{s}g_{k}^{\ast}(v_{k}), \quad \text{ and }\quad \ell^{\ast}\colon {\boldsymbol v} \mapsto\sum\limits_{k=1}^{s} \ell_{k}^{\ast}(v_{k}). $$

We also have

$$(\ell\circ {\boldsymbol N})\;\square\; (g\circ {\boldsymbol M})\colon {\boldsymbol v}\mapsto \sum\limits_{k=1}^{s} \left((\ell_{k}\circ N_{k})\;\square\; (g_{k}\circ M_{k})\right)(v_{k}). $$

Then, the primal problem becomes

$$\mathop{\mathrm{minimize}}_{{\boldsymbol x}\in\boldsymbol{\mathcal{H}}} f({\boldsymbol x}) - \langle {\boldsymbol x}\mid {\boldsymbol z}\rangle + \left((\ell\circ {\boldsymbol N})\;\square\; (g\circ {\boldsymbol M })\right) ({\boldsymbol L}{\boldsymbol x} -{\boldsymbol r})+ \varphi({\boldsymbol x}),$$
(43)

and the dual problem becomes

$$ \underset{\begin{array}{c}{\boldsymbol v}_{2}\in\boldsymbol{\mathcal{Y}},{\boldsymbol v}_{1}\in\boldsymbol{\mathcal{X}},\\ {\boldsymbol M }^{\ast}{\boldsymbol v}_{2}={\boldsymbol N}^{\ast}{\boldsymbol v}_{1} \end{array}}{\text{minimize}} (\varphi^{\ast}\;\square\; f^{\ast})({\boldsymbol z}-{\boldsymbol L}^{\ast}{\boldsymbol N}^{\ast}{\boldsymbol v}_{1}) +\ell^{\ast}({\boldsymbol v}_{1}) + g^{\ast}({\boldsymbol v}_{2})+ \langle {\boldsymbol N}^{\ast}{\boldsymbol v}_{1} \mid {\boldsymbol r}\rangle. $$

Using the same argument as in [7, page 15], we have

$$\begin{array}{@{}rcl@{}} &&\inf_{{\boldsymbol x}\in\boldsymbol{\mathcal{H}}} f({\boldsymbol x})- \langle {\boldsymbol x}\mid{\boldsymbol z}\rangle + \left((\ell\circ {\boldsymbol N})\;\square\; (g\circ {\boldsymbol M })\right) ({\boldsymbol L}{\boldsymbol x} -{\boldsymbol r})+ \varphi({\boldsymbol x})\\ &&\geq \underset{\begin{array}{c}{\boldsymbol v}_{2}\in\boldsymbol{\mathcal{Y}}, {\boldsymbol v}_{1}\in\boldsymbol{\mathcal{X}},\\ {\boldsymbol M }^{\ast}{\boldsymbol v}_{2}={\boldsymbol N}^{\ast}{\boldsymbol v}_{1} \end{array}}{\sup}{-(\varphi^{\ast}\;\square\; f^{\ast}) ({\boldsymbol z}-{\boldsymbol L}^{\ast}{\boldsymbol N}^{\ast}{\boldsymbol v}_{1}) -\ell^{\ast}({\boldsymbol v}_{1}) - g^{\ast}({\boldsymbol v}_{2})- \langle {\boldsymbol N}^{\ast}{\boldsymbol v}_{1}\mid {\boldsymbol r}\rangle}.\\ \end{array} $$
(44)

Furthermore, condition (40) implies that the set of solutions to (8) is non-empty. In addition, we derive from (9), (42), and [15, Lemma 2.10] that (41) is a special case of (11), and all the remaining conditions of Theorem 1 are satisfied. Therefore, by Theorem 1(iii), we have

$$ \left\{\begin{array}{l} z_{i}- {\sum}_{k=1}^{s} L_{k,i}^{\ast}N_{k}^{\ast}\overline{v}_{1,k}\in \partial f_{i}(\overline{x}_{1,i}) + \nabla_{i}\varphi(\overline{x}_{1,1},\ldots, \overline{x}_{1,m})\quad \text{ and }\quad M^{\ast}_{k}\overline{v}_{2,k} = N^{\ast}_{k}\overline{v}_{1,k};\\ N_{k}\left({\sum}_{i=1}^{m} L_{k,i}\overline{x}_{1,i} -r_{k} -\overline{y}_{k}\right) \in \partial\ell_{k}^{\ast}(\overline{v}_{1,k})\quad \text{ and }\quad M_{k}\overline{y}_{k} \in \partial g_{k}^{\ast}(\overline{v}_{2,k}), \end{array}\right. $$

which is equivalent to

$$ \left\{\begin{array}{l} {\boldsymbol z}- {\boldsymbol L}^{\ast}{\boldsymbol N}^{\ast}\overline{{\boldsymbol v}}_{1}\in \partial f(\overline{{\boldsymbol x}}_{1}) + \nabla\varphi(\overline{{\boldsymbol x}}_{1})\quad \text{ and }\quad {\boldsymbol M }^{\ast}\overline{{\boldsymbol v}}_{2} = {\boldsymbol N}^{\ast}\overline{{\boldsymbol v}}_{1};\\ {\boldsymbol N} ({\boldsymbol L}\overline{{\boldsymbol x}}_{1} -{\boldsymbol r} -\overline{{\boldsymbol y}})\in \partial\ell^{\ast}(\overline{{\boldsymbol v}}_{1})\quad \text{ and }\quad {\boldsymbol M }\overline{{\boldsymbol y}} \in \partial g^{\ast}(\overline{{\boldsymbol v}}_{2}). \end{array}\right. $$

We next prove that \(\overline {{\boldsymbol x}}_{1} = (\overline {x}_{1,1},\ldots ,\overline {x}_{1,m})\in \boldsymbol {\mathcal {H}}\) is a solution to the primal problem and \((\overline {{\boldsymbol v}}_{1},\overline {{\boldsymbol v}}_{2}) = (\overline {v}_{1,1},\ldots ,\overline {v}_{1,s},\overline {v}_{2,1},\ldots ,\overline {v}_{2,s})\in \boldsymbol {\mathcal {X}}\times \boldsymbol {\mathcal {Y}}\) is a solution to the dual problem. Now, we have

$$\left\{\begin{array}{l} f(\overline{{\boldsymbol x}}_{1})+\varphi(\overline{{\boldsymbol x}}_{1}) + (f+\varphi)^{\ast}({\boldsymbol z}- {\boldsymbol L}^{\ast}{\boldsymbol N}^{\ast}\overline{{\boldsymbol v}}_{1})=\langle \overline{{\boldsymbol x}}_{1}\mid {\boldsymbol z}- {\boldsymbol L}^{\ast}{\boldsymbol N}^{\ast}\overline{{\boldsymbol v}}_{1}\rangle,\\ \ell({\boldsymbol N} ({\boldsymbol L}\overline{{\boldsymbol x}}_{1} -{\boldsymbol r} -\overline{{\boldsymbol y}}))+ \ell^{\ast}(\overline{{\boldsymbol v}}_{1}) = \langle {\boldsymbol N}({\boldsymbol L}\overline{{\boldsymbol x}}_{1} -{\boldsymbol r} -\overline{{\boldsymbol y}}) \mid \overline{{\boldsymbol v}}_{1}\rangle,\\ g({\boldsymbol M }\overline{{\boldsymbol y}})+ g^{\ast}(\overline{{\boldsymbol v}}_{2}) = \langle {\boldsymbol M }\overline{{\boldsymbol y}} \mid \overline{{\boldsymbol v}}_{2}\rangle, \end{array}\right. $$

which implies that

$$\begin{array}{@{}rcl@{}} &&f(\overline{{\boldsymbol x}}_{1}) - \langle \overline{{\boldsymbol x}}_{1} \mid {\boldsymbol z} \rangle + \left((\ell\circ {\boldsymbol N})\;\square\; (g\circ {\boldsymbol M })\right) ({\boldsymbol L}\overline{{\boldsymbol x}}_{1}-{\boldsymbol r})+ \varphi(\overline{{\boldsymbol x}}_{1})\\ &&\qquad \leq f(\overline{{\boldsymbol x}}_{1}) - \langle \overline{{\boldsymbol x}}_{1}\mid {\boldsymbol z}\rangle + g({\boldsymbol M }\overline{{\boldsymbol y}})+\ell\left({\boldsymbol N} ({\boldsymbol L}\overline{{\boldsymbol x}}_{1} -{\boldsymbol r} -\overline{{\boldsymbol y}})\right) + \varphi(\overline{\boldsymbol x}_{1})\\ &&\qquad \leq -(f+\varphi)^{\ast}({\boldsymbol z}- {\boldsymbol L}^{\ast}{\boldsymbol N}^{\ast}\overline{{\boldsymbol v}}_{1})-\ell^{\ast}(\overline{{\boldsymbol v}}_{1})-g^{\ast}(\overline{{\boldsymbol v}}_{2}) - \langle {\boldsymbol r}\mid {\boldsymbol N}^{\ast}\overline{{\boldsymbol v}}_{1}\rangle\\ &&\qquad= -(f^{\ast}\square\varphi^{\ast})({\boldsymbol z}- {\boldsymbol L}^{\ast}{\boldsymbol N}^{\ast}\overline{{\boldsymbol v}}_{1})-\ell^{\ast}(\overline{{\boldsymbol v}}_{1})-g^{\ast}(\overline{{\boldsymbol v}}_{2}) - \langle {\boldsymbol r} \mid {\boldsymbol N}^{\ast}\overline{{\boldsymbol v}}_{1}\rangle. \end{array} $$

Combining this inequality and (44), we get

$$\begin{array}{@{}rcl@{}} &&f(\overline{{\boldsymbol x}}_{1}) - \langle \overline{{\boldsymbol x}}_{1} \mid {\boldsymbol z}\rangle + \left((\ell\circ {\boldsymbol N})\;\square\; (g\circ {\boldsymbol M })\right) ({\boldsymbol L}\overline{{\boldsymbol x}}_{1}-{\boldsymbol r}) + \varphi(\overline{{\boldsymbol x}}_{1})\\ &&\qquad= \inf_{{\boldsymbol x}\in\boldsymbol{\mathcal{H}}} f({\boldsymbol x}) -\langle {\boldsymbol x}\mid {\boldsymbol z}\rangle + \left((\ell\circ {\boldsymbol N})\;\square\; (g\circ {\boldsymbol M })\right) ({\boldsymbol L}{\boldsymbol x} -{\boldsymbol r}) + \varphi({\boldsymbol x}) \end{array} $$

and

$$\begin{array}{@{}rcl@{}} &&(f^{\ast}\square\varphi^{\ast})({\boldsymbol z}- {\boldsymbol L}^{\ast}{\boldsymbol N}^{\ast}\overline{{\boldsymbol v}}_{1})+\ell^{\ast}(\overline{{\boldsymbol v}}_{1})+g^{\ast}(\overline{{\boldsymbol v}}_{2}) + \langle {\boldsymbol r}\mid {\boldsymbol N}^{\ast}\overline{{\boldsymbol v}}_{1}\rangle \\ &&\,=\,\!\underset{\begin{array}{c}{\boldsymbol v}_{2}\in \boldsymbol{\mathcal{Y}},{\boldsymbol v}_{1}\in\boldsymbol{\mathcal{X}},\\ (\forall k\in\{1,\ldots,s\})\;M^{\ast}_{k}v_{2,k} = N^{\ast}_{k}v_{1,k} \end{array}}{\min} \left(\varphi^{\ast}\!\;\square\;\! \bigg(\sum\limits_{i=1}^{m} f_{i}^{\ast}\bigg)\right) \!\left(\Big(z_{i}\,-\,\sum\limits_{k=1}^{s} L_{k,i}^{\ast}N^{\ast}_{k}v_{1,k}\Big)_{1\leq i\leq m}\right)\\ &&\hspace{3.5cm} +\sum\limits_{k=1}^{s}\left(\ell_{k}^{\ast}(v_{1,k}) + g_{k}^{\ast}(v_{2,k}) + \langle N_{k}^{\ast}v_{1,k}\mid r_{k}\rangle\right). \end{array} $$

Therefore, the conclusions follow from Theorem 1 and the fact that the uniform convexity of a function in \({\Gamma }_{0}(\mathcal {H})\) at a point in the domain of its subdifferential implies the uniform monotonicity of its subdifferential at that point. □
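
For the reader's convenience, the fact invoked in the last step can be stated explicitly (with an unspecified modulus, which is all that is needed here): if \(h\in{\Gamma}_{0}(\mathcal{H})\) is uniformly convex at a point \(x\in\operatorname{dom}\partial h\), then there exists an increasing function \(\phi\colon [0,+\infty[\to [0,+\infty]\) vanishing only at 0 such that

$$(\forall u\in\partial h(x))(\forall (y,v)\in\operatorname{gra}\partial h)\quad \langle x-y\mid u-v\rangle \geq \phi(\|x-y\|), $$

i.e., \(\partial h\) is uniformly monotone at x. Applying this to \(h\in\{f_{j},\varphi,\ell_{j}^{\ast},g_{j}^{\ast}\}\) provides the uniform monotonicity required in Theorem 1 for the strong convergence claims (iv)–(vii).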

Remark 2

Here are some remarks.

  1. (i)

    In the special case when m = 1 and \((\forall k\in \{1,\ldots ,s\})\;\mathcal {G}_{k}=\mathcal {H}_{1}, L_{k,i} =\operatorname {Id}\), algorithm (41) reduces to [4, (4.20)]. When m > 1, algorithm (41) can be applied to multicomponent signal decomposition and recovery problems [8, 9], where the smooth multivariate function φ models the smooth couplings between the components and the first term in (38) models the non-smooth couplings; a schematic implementation of one iteration of (41) is sketched after this remark.

  2. (ii)

    Sufficient conditions ensuring that (40) is satisfied can be found in [7, Proposition 4.2].
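
To make the structure of (41) more concrete, the following Python sketch (our illustration, not the authors' code) implements one iteration for the single-component case m = s = 1, with all error terms \(a, b, c\) set to zero. The operators L (for \(L_{1,1}\)), N, and M are assumed to be given as NumPy matrices, `prox_f`, `prox_g`, and `prox_ell` as functions \((u,\tau)\mapsto\operatorname{prox}_{\tau f_{1}}(u)\), etc., `grad_phi` as \(\nabla\varphi\), and the step size γ must be chosen according to the hypotheses of Theorem 1 on \((\gamma_{n})_{n\in\mathbb{N}}\) (not reproduced here).

```python
import numpy as np

def iterate_41_once(x1, x2, v1, v2, L, N, M, z, r, gamma,
                    prox_f, prox_g, prox_ell, grad_phi):
    """One iteration of (41) with m = s = 1 and zero error terms (illustrative sketch)."""
    # Primal points: x1 (component space) and x2 (auxiliary space)
    s11 = x1 - gamma * (grad_phi(x1) + L.T @ (N.T @ v1))
    p11 = prox_f(s11 + gamma * z, gamma)
    p12 = x2 + gamma * (N.T @ v1 - M.T @ v2)
    # Dual point v1 (associated with ell)
    s21 = v1 + gamma * (N @ (L @ x1) - N @ x2)
    p21 = s21 - gamma * (N @ r + prox_ell(s21 / gamma - N @ r, 1.0 / gamma))
    q21 = p21 + gamma * (N @ (L @ p11) - N @ p12)
    v1_new = v1 - s21 + q21
    # Dual point v2 (associated with g)
    s22 = v2 + gamma * (M @ x2)
    p22 = s22 - gamma * prox_g(s22 / gamma, 1.0 / gamma)
    q22 = p22 + gamma * (M @ p12)
    v2_new = v2 - s22 + q22
    # Corrector steps for the primal points
    q12 = p12 + gamma * (N.T @ p21 - M.T @ p22)
    x2_new = x2 - p12 + q12
    q11 = p11 - gamma * (grad_phi(p11) + L.T @ (N.T @ p21))
    x1_new = x1 - s11 + q11
    return x1_new, x2_new, v1_new, v2_new
```

For m, s > 1 the same template applies blockwise, with the sums over i and k in (41) carried out explicitly.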

In the remainder of this section, we provide concrete examples from image restoration [8, 9, 12, 21] that can be formulated as special cases of problem (38).

Example 1

(Image decomposition) Let us consider the case where the noisy image \(r\in \mathbb {R}^{K\times K}\) is decomposed into three parts,

$$r = \overline{x}_{1}+\overline{x}_{2}+w, $$

where w is the noise component. To recover the ideal image \(\overline {x} = \overline {x}_{1}+\overline {x}_{2}\), in which \(\overline {x}_{1}\) is the piecewise-constant part and \(\overline {x}_{2}\) is the piecewise-smooth part, we propose to solve the following variational problem:

$$\underset{x_{1}\in C_{1}, x_{2}\in C_{2}}{\text{minimize}} \frac{1}{2}\|r- x_{1}-x_{2}\|^{2} + \alpha \|\nabla x_{1}\|_{1,2} + \beta \|\nabla^{2}x_{2}\|_{1,4},$$
(45)

where \(\nabla\) and \(\nabla^{2}\) are, respectively, the first- and second-order discrete gradient operators (see [21, Section 2.1] for their closed-form expressions), and \(C_{1}\) and \(C_{2}\) are non-empty closed convex subsets of \(\mathbb {R}^{K\times K}\) modeling the prior information on the ideal components \(\overline {x}_{1}\) and \(\overline {x}_{2}\), respectively. The norms \(\|\cdot\|_{1,2}\) and \(\|\cdot\|_{1,4}\) are defined, respectively, by

$$\|\cdot\|_{1,2}\colon \mathbb{R}^{K\times K}\times\mathbb{R}^{K\times K} \ni (x,y)\mapsto \sum\limits_{1\leq i,j\leq K}\sqrt{|x(i,j)|^{2} + |y(i,j)|^{2}} $$

and

$$\|\cdot\|_{1,4}\colon (\mathbb{R}^{K\times K})^{4} \ni (x,y,u,v)\mapsto \sum\limits_{1\leq i,j\leq K}\sqrt{|x(i,j)|^{2} + |y(i,j)|^{2}\,+\,|u(i,j)|^{2}+|v(i,j)|^{2}}. $$

The problem (45) is a special case of (38) with

$$\left\{\begin{array}{l} m =2=s,~ N_{1} = \operatorname{Id},~ N_{2} = \operatorname{Id},~ M_{1} = \operatorname{Id},~ M_{2}= \operatorname{Id},\\ L_{1,1} = \nabla,~ L_{1,2} = L_{2,1} = 0,~ L_{2,2} = \nabla^{2},~ r_{1} = 0,~ r_{2}=0,\\ g_{1} = \alpha\|\cdot\|_{1,2},~ g_{2} = \beta\|\cdot\|_{1,4},~ \ell_{1} = \ell_{2} = \iota_{\{0\}},\\ f_{1} = \iota_{C_{1}},~ f_{2} = \iota_{C_{2}},~ z_{1} = 0,~ z_{2}=0,~ \varphi\colon (x_{1},x_{2})\mapsto \frac{1}{2}\|r- x_{1}-x_{2}\|^{2}. \end{array}\right. $$

We note that in the case when \(C_{1} = C_{2} = \mathbb {R}^{K\times K}\), the problem (45) was proposed in [12, (30)].
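
To run (41) on (45), one only needs the discrete gradients, the mixed norms, and a few proximity operators. The sketch below is ours, written under one common finite-difference convention (the paper's exact operators are those of [21, Section 2.1]); it shows the first-order gradient, its adjoint, and the proximity operator of \(\tau\alpha\|\cdot\|_{1,2}\), which reduces to pixelwise vector soft-thresholding. In addition, \(\operatorname{prox}_{\gamma f_{1}}\) and \(\operatorname{prox}_{\gamma f_{2}}\) are simply the projections onto \(C_{1}\) and \(C_{2}\), and \(\operatorname{prox}_{\gamma^{-1}\ell_{k}}\) with \(\ell_{k}=\iota_{\{0\}}\) is the zero map.

```python
import numpy as np

def grad(x):
    """First-order discrete gradient (forward differences, zero at the last row/column)."""
    gx = np.zeros_like(x)
    gy = np.zeros_like(x)
    gx[:-1, :] = x[1:, :] - x[:-1, :]
    gy[:, :-1] = x[:, 1:] - x[:, :-1]
    return gx, gy

def grad_adj(gx, gy):
    """Adjoint of grad, consistent with the finite-difference convention used above."""
    out = np.zeros_like(gx)
    out[1:, :] += gx[:-1, :]
    out[:-1, :] -= gx[:-1, :]
    out[:, 1:] += gy[:, :-1]
    out[:, :-1] -= gy[:, :-1]
    return out

def prox_alpha_norm12(gx, gy, tau, alpha):
    """prox of tau*alpha*||.||_{1,2}: pixelwise shrinkage of the vector (gx(i,j), gy(i,j)).
    The same shrinkage, applied to four channels, gives the prox of tau*beta*||.||_{1,4}."""
    mag = np.sqrt(gx**2 + gy**2)
    scale = np.maximum(0.0, 1.0 - tau * alpha / np.maximum(mag, 1e-12))
    return scale * gx, scale * gy
```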

The next example is an application to the problem of recovering an ideal image from multiple observations [17, (3.4)].

Example 2

Let p, K, and \((q_{i})_{1\leq i\leq p}\) be strictly positive integers, let \(\mathcal {H} = \mathbb {R}^{K\times K}\), and, for every i∈{1,…,p}, let \(\mathcal {G}_{i}=\mathbb {R}^{q_{i}}\) and let \(T_{i}\colon \mathcal {H}\to \mathcal {G}_{i}\) be a linear mapping. Consider the problem of recovering an ideal image \(\overline {x}\) from the observations

$$(\forall i\in\{1,\ldots,p\})\quad t_{i} = T_{i}\overline{x} + w_{i}, $$

where each \(w_{i}\) is a noise component. Let \((\alpha,\beta)\in [0,+\infty[^{2}\) and \((\omega_{i})_{1\leq i\leq p}\in \left]0,+\infty\right[^{p}\), and let \(C_{1}\) and \(C_{2}\) be nonempty closed convex subsets of \(\mathcal {H}\) modeling the prior information on the ideal image. We propose the following variational problem to recover \(\overline {x}\):

$$ \underset{x\in C_{1}\cap C_{2}}{\text{minimize}} \sum\limits_{k=1}^{p}\frac{\omega_{k}}{2}\|t_{k}- T_{k}x\|^{2} + (\alpha\|\cdot\|_{1,2}\circ \nabla)\;\square\;(\beta \|\cdot\|_{1,4}\circ \nabla^{2})(x).$$
(46)

The problem (46) is a special case of the primal problem (38) with

$$\left\{\begin{array}{l} m=1,~ s=2,~ L_{1,1} =\operatorname{Id},~ L_{2,1} = \operatorname{Id},\\ f_{1} = \iota_{C_{1}},~ N_{1} =\nabla,~ \ell_{1} = \alpha \|\cdot\|_{1,2},~ g_{1} = \beta \|\cdot\|_{1,4},~ M_{1} =\nabla^{2},\\ N_{2} =\operatorname{Id},~ \ell_{2} =\iota_{\{0\}},~ g_{2} = \iota_{C_{2}},~ M_{2} =\operatorname{Id},~ \varphi = \frac{1}{2}{\sum}_{k=1}^{p}\omega_{k}\|t_{k}- T_{k}\cdot\|^{2},\\ \nu_{0} = {\sum}_{k=1}^{p}\omega_{k}\|T_{k}\|^{2},~ \|\nabla\|^{2} \leq 8. \end{array}\right. $$

Using the same argument as in [4, Section 5.3], we can check that (40) is satisfied. In the following experiment, we use p = 2 and \(C_{2} = [0,1]^{K\times K}\), while \(C_{1}\) is defined by [13]

$$C_{1} = \left\{x\in\mathbb{R}^{K\times K} \mid (\forall (i,j)\in\{1,\ldots,K/8\}^{2})\quad \hat{x}(i,j) = \hat{\overline{x}}(i,j)\right\}, $$

where \(\hat {x}\) denotes the discrete Fourier transform of x. The operators \(T_{1}\) and \(T_{2}\) are convolution operators with uniform kernels of size 15×15 and 17×17, respectively. Furthermore, \(\omega_{1} = \omega_{2} = 0.5\), \(\alpha = \beta = 0.001\), and \(w_{1}\), \(w_{2}\) are zero-mean white noise.
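
For completeness, here is a sketch (ours, with hypothetical helper names) of how the problem data of this experiment could be assembled: the blur operators \(T_{k}\) as circular convolutions implemented in the Fourier domain (the periodic boundary convention is our assumption), the gradient of φ together with the constant \(\nu_{0}\), and the projections onto \(C_{2}=[0,1]^{K\times K}\) and onto the Fourier-domain constraint set \(C_{1}\) (index ranges and conjugate-symmetry bookkeeping are simplified).

```python
import numpy as np

def uniform_blur(size, K):
    """T: circular convolution with a size x size uniform kernel (Fourier implementation).
    Returns the operator, its adjoint, and an upper bound on ||T||^2."""
    kernel = np.zeros((K, K))
    kernel[:size, :size] = 1.0 / size**2
    kernel = np.roll(kernel, (-(size // 2), -(size // 2)), axis=(0, 1))  # center the kernel
    k_hat = np.fft.fft2(kernel)
    T = lambda x: np.real(np.fft.ifft2(k_hat * np.fft.fft2(x)))
    Tt = lambda y: np.real(np.fft.ifft2(np.conj(k_hat) * np.fft.fft2(y)))
    return T, Tt, np.max(np.abs(k_hat)) ** 2

def grad_phi(x, Ts, Tts, ts, omegas):
    """Gradient of phi(x) = (1/2) * sum_k omega_k * ||t_k - T_k x||^2."""
    return sum(w * Tt(T(x) - t) for T, Tt, t, w in zip(Ts, Tts, ts, omegas))

def proj_C2(x):
    """Projection onto C_2 = [0,1]^{K x K}."""
    return np.clip(x, 0.0, 1.0)

def proj_C1(x, xbar_hat, K):
    """Projection onto C_1: reimpose the prescribed low-frequency DFT coefficients."""
    x_hat = np.fft.fft2(x)
    x_hat[:K // 8, :K // 8] = xbar_hat[:K // 8, :K // 8]
    return np.real(np.fft.ifft2(x_hat))

# Example assembly with the values reported above (K is the image size):
# T1, T1t, n1 = uniform_blur(15, K); T2, T2t, n2 = uniform_blur(17, K)
# omegas = (0.5, 0.5); nu0 = omegas[0] * n1 + omegas[1] * n2  # Lipschitz constant of grad_phi
```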

The results are presented in Table 1 and Fig. 1.

Table 1 Signal-to-noise ratio of the observations and of the deblurring results

Fig. 1 Deblurring by algorithm (41)