1 Introduction

Throughout this paper, we assume that

$$ {\mathscr{H}} ~\text{ is a real Hilbert space} $$

with inner product 〈⋅,⋅〉 and induced norm ∥⋅∥. Denote the set of all nonempty subsets of \({\mathscr{H}}\) containing finitely many elements by \(\mathcal {P}({\mathscr{H}})\). Given \(K \in \mathcal {P}({\mathscr{H}})\), the circumcenter of K is defined to be either the empty set or the unique point CC(K) such that CC(K) ∈ aff(K) and CC(K) is equidistant from all points in K; see [4, Proposition 3.3].

Let \(m \in \mathbb {N} \setminus \{0\}\), and let T1,…,Tm− 1,Tm be operators from \({\mathscr{H}}\) to \({\mathscr{H}}\). Assume

$$ \cap^{m}_{j=1} \text{Fix} T_{j} \neq \varnothing. $$

The associated set-valued operator \(\mathcal {S}: {\mathscr{H}} \rightarrow \mathcal {P}({\mathscr{H}})\) is defined by

$$ (\forall x \in {\mathscr{H}}) \quad \mathcal{S}(x) := \{T_{1}x, \ldots, T_{m-1}x, T_{m}x\}. $$

The circumcenter mapping \(CC_{\mathcal {S}}\) induced by \(\mathcal {S}\) is defined as the composition of CC and \(\mathcal {S}\), that is, \((\forall x \in {\mathscr{H}})~CC_{\mathcal {S}}(x) =CC(\mathcal {S}(x))\). If \(CC_{\mathcal {S}}\) is proper, i.e., \((\forall x \in {\mathscr{H}})\) \(CC_{\mathcal {S}}x \in {\mathscr{H}}\), then we are able to define the circumcenter method induced by \(\mathcal {S}\) as

$$ x_{0}=x ~\text{ and }~x_{k}=CC_{\mathcal{S}}(x_{k-1}) = CC_{\mathcal{S}}^{k}x, ~\text{ where }~k=1,2,\ldots. $$

Motivated by Behling, Bello Cruz and Santos [7], we worked on circumcenters of finite sets in Hilbert spaces in [4] and on the properness of circumcenter mappings in [5]. For other recent developments on circumcentered isometry methods, see also [9, 10, 16] and [6]. In this paper, we study the properness of the circumcenter mapping induced by isometries, and the circumcenter methods induced by isometries. The class of isometries includes the reflectors associated with closed affine subspaces. We provide convergence and even linear convergence results for the circumcentered isometry methods. In particular, for circumcentered reflection methods, we also offer some applications and evaluate their linear convergence rate by comparing them with two classical algorithms, namely, the Douglas–Rachford method (DRM) and the method of alternating projections (MAP).

More precisely, our main results are the following:

  • Theorem 3.3 provides the properness of the circumcenter mapping induced by isometries.

  • Theorem 4.7 presents a sufficient condition for the weak convergence of circumcentered isometry methods.

  • Theorems 4.14 and 4.15 present sufficient conditions for the linear convergence of circumcentered isometry methods in Hilbert space and \(\mathbb {R}^{n}\), respectively.

  • Proposition 5.18 takes advantage of the linear convergence of the DRM to establish the linear convergence of other circumcentered reflection methods.

Theorem 3.3 extends [5, Theorem 4.3] from reflectors to isometries. Based on the demiclosedness principle for circumcenter mappings built in [5, Theorem 3.20], we obtain Theorem 4.7, which implies the weak convergence of the DRM and of the circumcentered reflection method, the main actor in [8]. Motivated by the role played by the Douglas–Rachford operator in the proof of [7, Theorem 1], we establish Theorem 4.14 and Proposition 5.18. As a corollary of Proposition 5.18, we observe that Proposition 5.19 yields [7, Theorem 1]. Motivated by the role that the firmly nonexpansive operator A played in [8, Theorem 3.3] to deduce the linear convergence of the circumcentered reflection method in \(\mathbb {R}^{n}\), we obtain Proposition 2.10 and Theorem 4.15(ii). Theorem 4.15(ii) says that certain α-averaged operators can be used to construct linearly convergent methods, which in turn imply the linear convergence of the circumcentered isometry methods. As applications of Theorem 4.15, Propositions 5.10, 5.14, and 5.15 exhibit particular classes of circumcentered reflection methods that are linearly convergent.

The rest of the paper is organized as follows. In Section 2, we present various basic results for subsequent use. Our main theoretical results start in Section 3. Some results in [5, Section 4.1] are generalized in Section 3.1 to deduce the properness of the circumcenter mapping induced by isometries. Thanks to the properness, we are able to generate the circumcentered isometry methods in Section 4. In Section 4.2, we focus on exploring sufficient conditions for the (weak, strong and linear) convergence of the circumcentered isometry methods. In Sections 5 and 6, we consider the circumcentered reflection methods. In Section 5, we first display some particular linearly convergent circumcentered reflection methods. Then the circumcentered reflection methods are used to accelerate the DRM, which is in turn used to find the best approximation onto the intersection of finitely many linear subspaces. Finally, in Section 6, in order to evaluate the rate of linear convergence of the circumcentered reflection methods, we use performance profiles, with performance measures based on both the required number of iterations and the run time, to compare four circumcentered reflection methods with the DRM and the MAP for solving best approximation problems associated with two linear subspaces whose Friedrichs angle is taken in certain ranges.

We now turn to notation. Let C be a nonempty subset of \({\mathscr{H}}\). Denote the cardinality of C by card(C). The intersection of all the linear subspaces of \({\mathscr{H}}\) containing C is called the span of C, and is denoted by spanC; its closure is the smallest closed linear subspace of \({\mathscr{H}}\) containing C and it is denoted by \(\overline {\text {span}} C\). C is an affine subspace of \({\mathscr{H}}\) if \(C \neq \varnothing \) and \((\forall \rho \in \mathbb {R})\) ρC + (1 − ρ)C = C; moreover, the smallest affine subspace containing C is the affine hull of C, denoted affC. An affine subspace U is said to be parallel to an affine subspace M if U = M + a for some \(a \in {\mathscr{H}}\). Every affine subspace U is parallel to a unique linear subspace L, which is given by (∀y ∈ U) L := U − y = U − U. For every affine subspace U, we denote the linear subspace parallel to U by parU. The orthogonal complement of C is the set \( C^{\perp } =\{x \in {\mathscr{H}}~|~ \langle x,y\rangle =0~\text {for all}~y \in C\}\). The best approximation operator (or projector) onto C is denoted by PC, while RC := 2PC −Id is the reflector associated with C. For two subsets A, B of \({\mathscr{H}}\), the distance between A and B is \(\mathrm{d}(A,B) := \inf \{\|a-b\| ~|~ a \in A,\, b \in B\}\). A sequence \((x_{k})_{k \in \mathbb {N}}\) in \({\mathscr{H}}\) converges weakly to a point \(x \in {\mathscr{H}}\) if, for every \(u \in {\mathscr{H}}\), \(\langle x_{k},u\rangle \rightarrow \langle x,u\rangle \); in symbols, \(x_{k} \rightharpoonup x\). Let \(T: {\mathscr{H}} \rightarrow {\mathscr{H}}\) be an operator. The set of fixed points of the operator T is denoted by FixT, i.e., \(\text {Fix}T := \{x \in {\mathscr{H}} ~|~ Tx=x\}\). T is asymptotically regular if for each \(x \in {\mathscr{H}}\), \(T^{k}x - T^{k+1}x \to 0\). For other notation not explicitly defined here, we refer the reader to [3].

2 Auxiliary Results

This section contains several results that will be useful later.

2.1 Projections

Fact 2.1

[3, Proposition 29.1] Let C be a nonempty closed convex subset of \({\mathscr{H}}\), and let \(x \in {\mathscr{H}}\). Set D := z + C, where \(z \in {\mathscr{H}}\). Then \(\mathrm{P}_{D}x = z + \mathrm{P}_{C}(x - z)\).

Fact 2.2

[11, Theorems 5.8 and 5.13] Let M be a closed linear subspace of \({\mathscr{H}}\). Then:

  1. (i)

    \(x=\mathrm {P}_{M}x+\mathrm {P}_{M^{\perp }}x\) for each \(x \in {\mathscr{H}}\). Briefly, \(\text {Id} =\mathrm {P}_{M}+\mathrm {P}_{M^{\perp }}\).

  2. (ii)

    \(M^{\perp }=\{x \in {\mathscr{H}}~|~ \mathrm {P}_{M}(x)=0\}\) and \(M=\{x \in {\mathscr{H}}~|~ \mathrm {P}_{M^{\perp }}(x)=0\}=\{x \in {\mathscr{H}}~|~ \mathrm {P}_{M}(x)=x\}\).

Fact 2.3

[5, Proposition 2.10] Let C be a closed affine subspace of \({\mathscr{H}}\). Then the following hold:

  1. (i)

    The projector PC and the reflector RC are affine operators.

  2. (ii)

    \((\forall x \in {\mathscr{H}})\) \((\forall v \in C)\) \(\|x-\mathrm {P}_{C} x\|^{2} + \|\mathrm {P}_{C} x -v\|^{2}=\|x-v\|^{2}\).

  3. (iii)

    \((\forall x \in {\mathscr{H}})\) \((\forall y \in {\mathscr{H}})\) \(\|x - y\| = \|\mathrm{R}_{C}x - \mathrm{R}_{C}y\|\).

Lemma 2.4

Let \(M := \text {aff}\{x, x_{1}, \ldots , x_{n}\} \subseteq {\mathscr{H}}\), where \(x_{1}-x, \ldots , x_{n}-x\) are linearly independent. Then for every \(y \in {\mathscr{H}}\),

$$ \mathrm{P}_{M}(y)=x+ {\sum}^{n}_{i=1} \langle y-x, e_{i}\rangle e_{i}, $$

where (∀i ∈{1,…,n}) \(e_{i}=\frac {x_{i}-x - {\sum }^{i-1}_{j=1} \langle x_{i}-x, e_{j}\rangle e_{j}}{\|x_{i}-x - {\sum }^{i-1}_{j=1} \langle x_{i}-x, e_{j}\rangle e_{j}\|}\).

Proof

Since \(x_{1}-x, \ldots , x_{n}-x\) are linearly independent, by the Gram–Schmidt orthogonalization process [17, p. 309], let (∀i ∈{1,…,n}) \(e_{i} :=\frac {x_{i}-x - {\sum }^{i-1}_{j=1} \langle x_{i}-x,e_{j}\rangle e_{j}}{\|x_{i}-x - {\sum }^{i-1}_{j=1} \langle x_{i}-x,e_{j}\rangle e_{j}\|}\); then e1,…,en are orthonormal. Moreover

$$ \text{span}\{e_{1}, \ldots, e_{n}\} = \text{span}\{x_{1}-x, \ldots, x_{n}-x\}:=L. $$

Since M = x + L, by Fact 2.1 we know \(\mathrm{P}_{M}(y) = x + \mathrm{P}_{L}(y-x)\). By [3, Proposition 29.15], we obtain that for every \(z \in {\mathscr{H}}\), \(\mathrm {P}_{L}(z)= {\sum }^{n}_{i=1} \langle z, e_{i}\rangle e_{i}\), where (∀i ∈{1,…,n}) \(e_{i}=\frac {x_{i}-x - {\sum }^{i-1}_{j=1} \langle x_{i}-x,e_{j}\rangle e_{j}}{\|x_{i}-x - {\sum }^{i-1}_{j=1} \langle x_{i}-x,e_{j}\rangle e_{j}\|}\). □
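
To illustrate Lemma 2.4 numerically, here is a minimal sketch (assuming NumPy; the helper name proj_affine_hull and the random test data are ours and purely illustrative) that builds the orthonormal vectors e1,…,en by Gram–Schmidt and evaluates PM(y). The printed inner products are approximately zero because y − PM(y) is orthogonal to parM.

```python
import numpy as np

def proj_affine_hull(x, pts, y):
    """P_M(y) for M = aff{x, x_1, ..., x_n} via Lemma 2.4, assuming
    x_1 - x, ..., x_n - x are linearly independent."""
    es = []
    for xi in pts:
        v = xi - x - sum(np.dot(xi - x, e) * e for e in es)  # Gram-Schmidt step
        es.append(v / np.linalg.norm(v))
    return x + sum(np.dot(y - x, e) * e for e in es)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
pts = [rng.standard_normal(4) for _ in range(2)]
y = rng.standard_normal(4)
p = proj_affine_hull(x, pts, y)
# y - P_M(y) should be orthogonal to every x_i - x
print([float(np.dot(y - p, xi - x)) for xi in pts])  # both entries are (numerically) 0
```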

2.2 Firmly Nonexpansive Mappings

Definition 2.5

[3, Definition 4.1] Let D be a nonempty subset of \({\mathscr{H}}\) and let \(T:D \rightarrow {\mathscr{H}}\). Then T is

  1. (i)

    firmly nonexpansive if

    $$ (\forall x, y \in D) \quad \|Tx -Ty\|^{2} + \|(\text{Id} -T)x -(\text{Id} -T)y\|^{2} \leq \|x -y\|^{2}; $$
  2. (ii)

    nonexpansive if it is Lipschitz continuous with constant 1, i.e.,

    $$ (\forall x, y \in D) \quad \|Tx -Ty\| \leq \|x-y\|; $$
  3. (iii)

    firmly quasinonexpansive if

    $$ (\forall x \in D)\quad (\forall y \in \text{Fix}T) \quad \|Tx -y\|^{2}+\|Tx-x\|^{2} \leq \|x -y\|^{2}; $$
  4. (iv)

    quasinonexpansive if

    $$ (\forall x \in D)\quad (\forall y \in \text{Fix}T) \quad \|Tx-y\| \leq \|x-y\|. $$

Fact 2.6

[3, Corollary 4.24] Let D be a nonempty closed convex subset of \({\mathscr{H}}\) and let \(T : D \to {\mathscr{H}}\) be nonexpansive. Then FixT is closed and convex.

Definition 2.7

[3, Definition 4.33] Let D be a nonempty subset of \({\mathscr{H}}\), let \(T: D \rightarrow {\mathscr{H}}\) be nonexpansive, and let α ∈]0,1[. Then T is averaged with constant α, or α-averaged for short, if there exists a nonexpansive operator \(R: D \rightarrow {\mathscr{H}}\) such that T = (1 − α)Id + αR.

Fact 2.8

[3, Proposition 4.35] Let D be a nonempty subset of \({\mathscr{H}}\), let \(T: D \rightarrow {\mathscr{H}}\) be nonexpansive, and let α ∈]0,1[. Then the following are equivalent:

  1. (i)

    T is α-averaged.

  2. (ii)

    (∀xD) (∀yD) \(\|Tx -Ty\|^{2} +\frac {1-\alpha }{\alpha } \|(\text {Id} -T)x -(\text {Id} -T)y\|^{2} \leq \|x -y\|^{2}\).

Fact 2.9

[3, Proposition 4.42] Let D be a nonempty subset of \({\mathscr{H}}\), let (Ti)i∈I be a finite family of nonexpansive operators from D to \({\mathscr{H}}\), let (ωi)i∈I be real numbers in ]0,1] such that \({\sum }_{i \in \mathrm {I}} \omega _{i}=1\), and let (αi)i∈I be real numbers in ]0,1[ such that, for every i ∈I, Ti is αi-averaged, and set \(\alpha :={\sum }_{i \in \mathrm {I}} \omega _{i} \alpha _{i}\). Then \({\sum }_{i \in \mathrm {I}} \omega _{i} T_{i}\) is α-averaged.

The following result is motivated by [8, Lemma 2.1(iv)].

Proposition 2.10

Assume \({\mathscr{H}}=\mathbb {R}^{n}\). Let \(T: \mathbb {R}^{n} \rightarrow \mathbb {R}^{n}\) be linear and α-averaged with α ∈]0,1[. Then \(\|T\mathrm {P}_{(\text {Fix}T)^{\perp }}\| <1\).

Proof

If \((\text{Fix} T)^{\perp} = \{0\}\), then \(\mathrm {P}_{(\text {Fix} T)^{\perp }} =0\) and so \(T\mathrm {P}_{(\text {Fix} T)^{\perp }} =0\). Hence, the required result is trivial.

Now assume \((\text{Fix} T)^{\perp} \neq \{0\}\). By definition, \((\text{Fix} T)^{\perp}\) is a closed linear subspace of \(\mathbb {R}^{n}\). Since T is α-averaged, by Fact 2.8,

$$ (\forall x \in \mathbb{R}^{n}) \quad (\forall y \in \mathbb{R}^{n}) \quad \|Tx -Ty\|^{2} +\frac{1-\alpha}{\alpha} \|(\text{Id} -T)x -(\text{Id} -T)y\|^{2} \leq \|x -y\|^{2}. $$
(2.1)

Since \((\text{Fix} T)^{\perp} \neq \{0\}\), it is easy to see that

$$ \left\|T\mathrm{P}_{(\text{Fix}T)^{\perp}}\right\| = \sup_{\underset{\|x\| \leq 1}{x \in \mathcal{H}}} \left\|T\mathrm{P}_{(\text{Fix}T)^{\perp}} x\right\| ~\stackrel{y= \mathrm{P}_{(\text{Fix}T)^{\perp}}x}{=} \sup_{\underset{\|y\| \leq 1}{y\in (\text{Fix}T)^{\perp}} } \|Ty\| = \sup_{\underset{\|y\| = 1}{y\in (\text{Fix}T)^{\perp}}} \|Ty\|. $$
(2.2)

Suppose to the contrary that \(\|T\mathrm {P}_{(\text {Fix}T)^{\perp }}\| = 1\). Then by (2.2) and by the Bolzano–Weierstrass Theorem, there exists \(\overline {y} \in (\text {Fix} T)^{\perp }\) with \(\|\overline {y}\|=1\) and \(\|T \overline {y}\|=1\).

For every \(x \in \mathbb {R}^{n}\), substituting y = PFixTx in (2.1), we get,

$$ \|Tx-\mathrm{P}_{\text{Fix}T}x\|^{2}+\frac{1-\alpha}{\alpha}\|x -Tx\|^{2} \leq \|x-\mathrm{P}_{\text{Fix}T}x\|^{2}, $$

which implies that

$$ (\forall x\not \in \text{Fix}T) \quad \|Tx-\mathrm{P}_{\text{Fix}T}x\| < \|x-\mathrm{P}_{\text{Fix}T}x\|. $$
(2.3)

Since \(\text{Fix} T \cap (\text{Fix} T)^{\perp} = \{0\}\), \(\overline {y} \in (\text {Fix}T)^{\perp }\) and \(\|\overline {y}\|=1\), we have \( \overline {y} \not \in \text {Fix}T\). By Fact 2.2(ii), \(\overline {y} \in (\text {Fix}T)^{\perp }\) implies that \(\mathrm {P}_{\text {Fix}T}(\overline {y} )=0\); thus, substituting \(x=\overline {y}\) in (2.3), we obtain

$$ 1 = \|T\overline{y}\| = \|T\overline{y}-\mathrm{P}_{\text{Fix}T}\overline{y}\| < \|\overline{y}-\mathrm{P}_{\text{Fix}T}\overline{y}\|=\|\overline{y}\|=1, $$

which is a contradiction.□

Definition 2.11

[3, Definition 5.1] Let C be a nonempty subset of \({\mathscr{H}}\) and let \((x_{k})_{k \in \mathbb {N}}\) be a sequence in \({\mathscr{H}}\). Then \((x_{k})_{k \in \mathbb {N}}\) is Fejér monotone with respect to C if

$$ (\forall x \in C) \quad (\forall k \in \mathbb{N}) \quad \|x_{k+1}-x\| \leq \|x_{k}-x\|. $$

Fact 2.12

[3, Proposition 5.4] Let C be a nonempty subset of \({\mathscr{H}}\) and let \((x_{k})_{k \in \mathbb {N}}\) be Fejér monotone with respect to C. Then \((x_{k})_{k \in \mathbb {N}}\) is bounded.

Fact 2.13

[3, Proposition 5.9] Let C be a closed affine subspace of \({\mathscr{H}}\) and let \((x_{k})_{k \in \mathbb {N}}\) be a sequence in \({\mathscr{H}}\). Suppose that \((x_{k})_{k \in \mathbb {N}}\) is Fejér monotone with respect to C. Then the following hold:

  1. (i)

    \((\forall k \in \mathbb {N})\) PCxk = PCx0.

  2. (ii)

    Suppose that every weak sequential cluster point of \((x_{k})_{k \in \mathbb {N}}\) belongs to C. Then \(x_{k} \rightharpoonup \mathrm {P}_{C}x_{0}\).

2.3 The Douglas–Rachford Method

Definition 2.14

[1, p. 2] Let U and V be closed convex subsets of \({\mathscr{H}}\) such that \(U \cap V \neq \varnothing \). The Douglas–Rachford splitting operator is TV,U := PV(2PU −Id) + Id −PU.

It is well known that

Definition 2.15

[11, Definition 9.4] The Friedrichs angle between two linear subspaces U and V is the angle α(U,V ) between 0 and \(\frac {\pi }{2}\) whose cosine, \(c(U,V) :={\cos \limits } \alpha (U,V)\), is defined by the expression

$$ c(U,V) = \sup\left\{ |\langle u,v\rangle| ~|~ u \in U \cap (U \cap V)^{\perp}, v \in V \cap (U \cap V)^{\perp}, \|u\| \leq 1, \|v\| \leq 1\right\}. $$

Fact 2.16

[11, Theorem 9.35] Let U and V be closed linear subspaces of \({\mathscr{H}}\). Then the following are equivalent:

  1. (i)

    c(U,V ) < 1;

  2. (ii)

    U + V is closed.

Fact 2.17

[1, Theorem 4.1] Let U and V be closed linear subspaces of \({\mathscr{H}}\) and T := TV,U defined in Definition 2.14. Let \(n \in \mathbb {N} \setminus \{0\}\) and let \(x \in {\mathscr{H}}\). Denote the c(U,V ) defined in Definition 2.15 by cF. Then

$$ \|T^{n}x - \mathrm{P}_{\text{Fix}T}x\| \leq {c^{n}_{F}} \|x - \mathrm{P}_{\text{Fix}T}x\| \leq {c^{n}_{F}} \|x\|. $$
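
As a hedged numerical illustration of Fact 2.17 (our own sketch, assuming NumPy; the two lines and the starting point are arbitrary choices), consider two distinct lines U and V through the origin of \(\mathbb {R}^{2}\). Then U ∩ V = U⊥ ∩ V⊥ = {0}, so FixT = {0} (cf. the proof of Lemma 2.18), and c(U,V) is simply the cosine of the angle between the two lines; the iterates of T contract at least by this factor.

```python
import numpy as np

def proj_line(a):
    u = a / np.linalg.norm(a)
    return np.outer(u, u)                  # orthogonal projector onto span{a}

a, b = np.array([1.0, 0.0]), np.array([1.0, 1.0])
PU, PV, I = proj_line(a), proj_line(b), np.eye(2)
T = PV @ (2 * PU - I) + I - PU             # Douglas-Rachford operator T_{V,U}
cF = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))  # = c(U,V) since U ∩ V = {0}

x0 = np.array([2.0, -1.0])
x = x0.copy()
for n in range(1, 6):
    x = T @ x                              # here Fix T = {0}, so P_{Fix T} x0 = 0
    print(n, np.linalg.norm(x), cF**n * np.linalg.norm(x0))  # ||T^n x0|| <= c_F^n ||x0||
```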

Lemma 2.18

Let U and V be closed linear subspaces of \({\mathscr{H}}\) and T := TV,U. Let \(x \in {\mathscr{H}}\). Then

$$ \mathrm{P}_{U \cap V} (x) = \mathrm{P}_{\text{Fix}T}(x)~ \Leftrightarrow~ x \in \overline{\text{span}} (U \cup V)~ \Leftrightarrow~ x \in \overline{U+V}. $$

Proof

By [1, Proposition 3.6], \(\mathrm {P}_{\text {Fix}T} = \mathrm {P}_{U\cap V} + \mathrm {P}_{U^{\perp } \cap V^{\perp }}\). Moreover, by [11, Theorems 4.6(5) & 4.5(8)], we have \(U^{\perp } \cap V^{\perp } = (\overline {U+V})^{\perp }=(\overline {\text {span}} (U \cup V))^{\perp }\). Hence, by Fact 2.2(ii), we obtain that \(\mathrm {P}_{U \cap V} (x) = \mathrm {P}_{\text {Fix}T}(x) \Leftrightarrow \mathrm {P}_{U^{\perp } \cap V^{\perp }}x=0 \Leftrightarrow \mathrm {P}_{(\overline {\text {span}} (U \cup V))^{\perp }}x=0 \Leftrightarrow x \in ((\overline {\text {span}} (U \cup V))^{\perp } )^{\perp } =\overline {\text {span}} (U \cup V) = \overline {U+V} \). □

Lemma 2.19

Let U and V be closed linear subspaces of \({\mathscr{H}}\) and T := TV,U. Let \(x \in {\mathscr{H}}\). Let K be a closed linear subspace of \({\mathscr{H}}\) such that \(U\cap V \subseteq K \subseteq \overline {U+V}\). Then

$$ \mathrm{P}_{\text{Fix}T}\mathrm{P}_{K}x=\mathrm{P}_{U \cap V}\mathrm{P}_{K}x =\mathrm{P}_{U\cap V}x. $$

Proof

Since \(\mathrm {P}_{K}x \in K \subseteq \overline {U+V}\), by Lemma 2.18,

$$ \mathrm{P}_{\text{Fix}T}\mathrm{P}_{K}x=\mathrm{P}_{U \cap V}\mathrm{P}_{K}x. $$

On the other hand, by assumption, \(U\cap V \subseteq K\). Hence, by [11, Lemma 9.2], we get PUVPKx = PKPUVx = PUVx.□

2.4 Isometries

Definition 2.20

[15, Definition 1.6-1] A mapping \(T: {\mathscr{H}} \rightarrow {\mathscr{H}}\) is said to be isometric or an isometry if

$$ (\forall x \in \mathcal{H}) \quad (\forall y \in \mathcal{H}) \quad \|Tx -Ty\| =\|x-y\|. $$
(2.4)

Note that in some references, an isometry is defined to be a linear operator satisfying (2.4). In this paper, the definition of isometry follows [15, Definition 1.6-1], where linearity is not required.

Corollary 2.21

Let α ∈]0,1[, and let \(T: {\mathscr{H}} \to {\mathscr{H}}\) be α-averaged with \(\text {Fix}T \neq \varnothing \). Assume that T≠Id. Then T is not an isometry.

Proof

Because T≠Id, \(\text {Fix}T \neq {\mathscr{H}}\). Take \(x \in {\mathscr{H}} \setminus \text {Fix}T\). Then

$$ \|x -Tx\| >0. $$
(2.5)

By assumption, \(\text {Fix}T \neq \varnothing \); take y ∈ FixT, that is, y − Ty = 0. Because \(T: {\mathscr{H}} \to {\mathscr{H}}\) is α-averaged, by Fact 2.8,

$$ \begin{array}{@{}rcl@{}} &&\|Tx -Ty\|^{2} +\frac{1-\alpha}{\alpha} \|(\text{Id} -T)x -(\text{Id} -T)y\|^{2} \leq \|x -y\|^{2}\\ \Leftrightarrow~ &&\|Tx -Ty\|^{2} +\frac{1-\alpha}{\alpha} \|x -Tx\|^{2} \leq \|x -y\|^{2} \\ \overset{(2.5)}{\Rightarrow} ~&& \|Tx -Ty\| < \|x -y\|, \end{array} $$

which, by Definition 2.20, implies that T is not isometric. □

Definition 2.22

[3, p. 32] If \(\mathcal {K}\) is a real Hilbert space and \(T \in {\mathscr{B}}({\mathscr{H}},\mathcal {K})\), then the adjoint of T is the unique operator \(T^{\ast } \in {\mathscr{B}}(\mathcal {K}, {\mathscr{H}})\) that satisfies

$$ (\forall x \in \mathcal{H}) \quad (\forall y \in \mathcal{K}) \quad \langle Tx, y\rangle =\langle x, T^{\ast}y\rangle. $$

Lemma 2.23

  1. (i)

    Let C be a closed affine subspace of \({\mathscr{H}}\). Then the reflector RC := 2PC −Id is isometric.

  2. (ii)

    Let \(a \in {\mathscr{H}}\). The translation operator \((\forall x \in {\mathscr{H}})\)Tax := x + a is isometric.

  3. (iii)

    Let \(T \in {\mathscr{B}}({\mathscr{H}},{\mathscr{H}})\) and let \(T^{\ast}\) be the adjoint of T. Then T is isometric if and only if \(T^{\ast}T = \text{Id}\).

  4. (iv)

    The identity operator is isometric.

Proof

(i): The result follows from Fact 2.3(iii).

(ii): It is clear from the definitions.

(iii): Assume that \(T^{\ast}T = \text{Id}\). Let \(x \in {\mathscr{H}}\) and \(y \in {\mathscr{H}}\). Now \(\|Tx - Ty\|^{2} = \langle Tx - Ty, Tx - Ty\rangle = \langle T(x-y), T(x-y)\rangle = \langle x-y, T^{\ast}T(x-y)\rangle = \langle x-y, x-y\rangle = \|x-y\|^{2}\). For the proof of the opposite direction, refer to [15, Exercise 8 in p. 207].

(iv): The required result follows easily from (iii). □

Clearly, the reflector associated with an affine subspace is affine and not necessarily linear. The translation operator Ta defined in Lemma 2.23(ii) is not linear whenever a≠ 0.
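
The following short check (a sketch of ours, assuming NumPy; the particular line and rotation are arbitrary) illustrates Lemma 2.23(i) and (iii) in \(\mathbb {R}^{2}\): a reflector through a line and an orthogonal matrix both preserve distances, and the orthogonal matrix satisfies QᵀQ = Id.

```python
import numpy as np

rng = np.random.default_rng(1)
u = np.array([1.0, 2.0]); u /= np.linalg.norm(u)
R = 2 * np.outer(u, u) - np.eye(2)          # reflector R_U for the line U = span{u}
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal matrix, so Q^T Q = Id

x, y = rng.standard_normal(2), rng.standard_normal(2)
print(np.isclose(np.linalg.norm(R @ x - R @ y), np.linalg.norm(x - y)))  # Lemma 2.23(i)
print(np.isclose(np.linalg.norm(Q @ x - Q @ y), np.linalg.norm(x - y)))  # Definition 2.20
print(np.allclose(Q.T @ Q, np.eye(2)))                                   # Lemma 2.23(iii)
```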

Lemma 2.24

Assume \(F:{\mathscr{H}} \rightarrow {\mathscr{H}}\) and \(T: {\mathscr{H}} \rightarrow {\mathscr{H}}\) are isometric. Then the composition FT of T and F is isometric. In particular, the composition of finitely many isometries is an isometry.

Proof

The first statement comes directly from the definition of isometry. Then by induction, we obtain the last assertion. □

Lemma 2.25

Let \(T: {\mathscr{H}} \rightarrow {\mathscr{H}}\) be an isometry. Then the following hold:

  1. (i)

    T is nonexpansive.

  2. (ii)

    FixT is closed and convex.

Proof

(i): This is trivial from Definition 2.20 and Definition 2.5(ii). (ii): Combine (i) and Fact 2.6. □

2.5 Circumcenter Operators and Circumcenter Mappings

In order to study circumcentered isometry methods, we require facts on circumcenter operators and circumcenter mappings. Recall that \(\mathcal {P}({\mathscr{H}})\) is the set of all nonempty subsets of \({\mathscr{H}}\) containing finitely many elements. By [4, Proposition 3.3], we know that the following definition is well defined.

Definition 2.26

(Circumcenter operator) [4, Definition 3.4] The circumcenter operator is

$$ CC: \mathcal{P}({\mathscr{H}}) \rightarrow {\mathscr{H}} \cup \{\varnothing\} : K \mapsto \begin{cases} p, & \text{if } p \in \text{aff}(K) \text{ and } \{\|p-y\| ~|~ y \in K\} \text{ is a singleton};\\ \varnothing, & \text{otherwise}. \end{cases} $$

In particular, when \(CC(K) \in {\mathscr{H}}\), that is, \(CC(K) \neq \varnothing \), we say that the circumcenter of K exists and we call CC(K) the circumcenter of K.

Recall that T1,…,Tm− 1,Tm are operators from \({\mathscr{H}}\) to \({\mathscr{H}}\) with \(\cap ^{m}_{j=1} \text {Fix} T_{j} \neq \varnothing \) and that

$$ (\forall x \in {\mathscr{H}}) \quad \mathcal{S}(x) = \{T_{1}x, \ldots, T_{m-1}x, T_{m}x\}. $$

Definition 2.27

(Circumcenter mapping) [5, Definition 3.1] The circumcenter mapping induced by \(\mathcal {S}\) is

$$ CC_{\mathcal{S}}: {\mathscr{H}} \rightarrow {\mathscr{H}} \cup \{\varnothing\} : x \mapsto CC(\mathcal{S}(x)), $$

that is, for every \(x \in {\mathscr{H}}\), if the circumcenter of the set \(\mathcal {S}(x)\) defined in Definition 2.26 does not exist, then \(CC_{\mathcal {S}}x= \varnothing \). Otherwise, \(CC_{\mathcal {S}}x\) is the unique point satisfying the two conditions below:

  1. (i)

    \(CC_{\mathcal {S}}x \in \text {aff}(\mathcal {S}(x))=\text {aff}\{T_{1}(x), \ldots , T_{m-1}(x), T_{m}(x)\}\), and

  2. (ii)

    \(\{\|CC_{\mathcal {S}}x - T_{i}(x)\|~|~ i \in \{1, \ldots , m-1,m\}\}\) is a singleton, that is,

    $$ \|CC_{\mathcal{S}}x -T_{1}(x)\| = {\cdots} =\|CC_{\mathcal{S}}x -T_{m-1}(x)\| = \|CC_{\mathcal{S}}x -T_{m}(x)\|. $$

In particular, if for every \(x \in {\mathscr{H}}\), \(CC_{\mathcal {S}}x \in {\mathscr{H}}\), then we say the circumcenter mapping \(CC_{\mathcal {S}}\) induced by \(\mathcal {S}\) is proper. Otherwise, we call \(CC_{\mathcal {S}}\) improper.

Fact 2.28

[5, Proposition 3.10(i)&(iii)] Assume \(CC_{\mathcal {S}}\) is proper. Then the following hold:

  1. (i)

    \(\cap ^{m}_{j=1} \text {Fix}T_{j} \subseteq \text {Fix} CC_{\mathcal {S}}\).

  2. (ii)

    If T1 = Id, then \(\cap ^{m}_{i=1} \text {Fix}T_{i} = \text {Fix} CC_{\mathcal {S}}\).

To facilitate the notation, from now on, for any nonempty and finite family of operators F1,…,Ft, set

$$ {\Omega}(F_{1}, \ldots, F_{t}) := \left\{ F_{i_{r}} {\cdots} F_{i_{1}} ~|~ r \in \mathbb{N} ~\text{ and }~ i_{1}, \ldots, i_{r} \in \{1, \ldots, t\} \right\}, $$
(2.6)

which is the set consisting of all finite compositions of operators from {F1,…,Ft}. We use the empty product convention, so for r = 0, \(F_{i_{0}}{\cdots } F_{i_{1}} = \text {Id}\).

Proposition 2.29

Let t be a positive integer. Let F1,…,Ft be t operators from \({\mathscr{H}}\) to \({\mathscr{H}}\). Assume that \(CC_{\mathcal {S}}\) is proper. Assume that \(\mathcal {S}\) is a finite subset of Ω(F1,…,Ft) defined in (2.6) such that \(\{\text {Id}, F_{1}, F_{2}F_{1}, \ldots , F_{t} F_{t-1} \cdots F_{2}F_{1} \} \subseteq \mathcal {S}\) or \(\{\text {Id}, F_{1}, F_{2}, \ldots , F_{t}\} \subseteq \mathcal {S}\). Then \(\text {Fix} CC_{\mathcal {S}} = \cap ^{t}_{j=1} \text {Fix} F_{j}\).

Proof

Because each element of \(\mathcal {S}\) is a composition of operators from {F1,…,Ft}, and because (∀i ∈{1,…,t}) \(\cap ^{t}_{j=1} \text {Fix} F_{j} \subseteq \text {Fix} F_{i}\), we obtain that

$$ \cap^{t}_{j=1} \text{Fix} F_{j} \subseteq \cap_{T \in \mathcal{S}} \text{Fix}T = \text{Fix} CC_{\mathcal{S}}, $$
(2.7)

where the equality is from Fact 2.28(ii).

On the other hand, if \(\{\text {Id}, F_{1}, F_{2}, \ldots , F_{t}\} \subseteq \mathcal {S}\), then clearly \(\cap _{T \in \mathcal {S}} \text {Fix}T \subseteq \cap ^{t}_{j=1} \text {Fix} F_{j}\). Hence, by (2.7), \(\text {Fix} CC_{\mathcal {S}} = \cap ^{t}_{j=1} \text {Fix} F_{j}\).

Suppose that \(\{\text {Id}, F_{1}, F_{2}F_{1}, \ldots , F_{t} F_{t-1} {\cdots } F_{2}F_{1} \} \subseteq \mathcal {S}\). Then for every \(x \in {\mathscr{H}}\), by Definition 2.27,

$$ \begin{array}{@{}rcl@{}} x \in \text{Fix} CC_{\mathcal{S}} ~& \Rightarrow&~\|x {}-{}x\| {}={} \|x {}-{}F_{1}x\| {}={}\|x - F_{2}F_{1}x\|= {\cdots} = \|x - F_{t} F_{t-1} {\cdots} F_{2}F_{1}x\|\\ & \Leftrightarrow&~x= F_{1}x= F_{2}F_{1}x=\cdots= F_{t} F_{t-1} {\cdots} F_{2}F_{1}x\\ & \Leftrightarrow&~x= F_{1}x= F_{2}x=\cdots= F_{t-1}x= F_{t}x\\ & \Leftrightarrow&~x \in \cap^{t}_{j=1} \text{Fix} F_{j} , \end{array} $$

which imply that \(\text {Fix} CC_{\mathcal {S}} \subseteq \cap ^{t}_{j=1} \text {Fix} F_{j}\). Again, by (2.7), \(\text {Fix} CC_{\mathcal {S}} = \cap ^{t}_{j=1} \text {Fix} F_{j}\). Therefore, the proof is complete. □

The following example shows that the condition “\(\{\text {Id}, F_{1},F_{2}F_{1}, \ldots , F_{t} F_{t-1} {\cdots } F_{2}F_{1} \} \subseteq \mathcal {S}\)” in Proposition 2.29 above is indeed critical. Clearly, for each reflector RU, we have FixRU = U.

Example 2.30

Assume \({\mathscr{H}} = \mathbb {R}^{2}\). Set \(U_{1}:= \mathbb {R}\cdot (1,0)\), \(U_{2} :=\mathbb {R}\cdot (1,1)\) and \(U_{3} :=\mathbb {R}\cdot (0,1)\). Assume \(\mathcal {S} = \{\text {Id}, \mathrm {R}_{U_{3}}\mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\}\). By Fact 2.32, \(CC_{\mathcal {S}} = \frac {1}{2} (\text {Id} + \mathrm {R}_{U_{3}}\mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}})\). Since \((\forall x \in U_{2})\) \(\mathrm {R}_{U_{3}}\mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}x=x\), and since the set of fixed points of a linear and continuous operator is a linear subspace, we deduce that \(\cap ^{3}_{i=1}U_{i} = \{(0,0)\} \varsubsetneqq U_{2} = \text {Fix} CC_{\mathcal {S}}\).
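
A quick numerical verification of Example 2.30 (a sketch of ours, assuming NumPy) confirms that the point (3,3) ∈ U2 ∖ {(0,0)} is fixed by \(CC_{\mathcal {S}} = \frac {1}{2} (\text {Id} + \mathrm {R}_{U_{3}}\mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}})\), so \(\text {Fix} CC_{\mathcal {S}}\) is strictly larger than \(\cap ^{3}_{i=1}U_{i}\).

```python
import numpy as np

def refl_line(a):                           # reflector R_U for the line U = span{a}
    u = a / np.linalg.norm(a)
    return 2 * np.outer(u, u) - np.eye(2)

R1 = refl_line(np.array([1.0, 0.0]))        # R_{U_1}
R2 = refl_line(np.array([1.0, 1.0]))        # R_{U_2}
R3 = refl_line(np.array([0.0, 1.0]))        # R_{U_3}
CC = 0.5 * (np.eye(2) + R3 @ R2 @ R1)       # = CC_S by Fact 2.32

z = np.array([3.0, 3.0])                    # a nonzero point of U_2
print(np.allclose(CC @ z, z))               # True, although U_1 ∩ U_2 ∩ U_3 = {(0,0)}
```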

Fact 2.31

(Demiclosedness principle for circumcenter mappings) [5, Theorem 3.20] Suppose that T1 = Id, that each operator in \(\mathcal {S} = \{T_{1}, T_{2}, \ldots , T_{m}\}\) is nonexpansive, and that \(CC_{\mathcal {S}}\) is proper. Then \(\text {Fix} CC_{\mathcal {S}} = \cap ^{m}_{i=1}\text {Fix}T_{i}\) and the demiclosedness principle holds for \(CC_{\mathcal {S}}\), that is,

$$ \left. \begin{array}{l} x_{k} \rightharpoonup \overline{x},\\ x_{k} - CC_{\mathcal{S}}x_{k} \to 0 \end{array} \right\} ~ \Rightarrow ~\overline{x} \in \text{Fix} CC_{\mathcal{S}}. $$
(2.8)

Fact 2.32

[5, Proposition 3.3] Assume m = 2 and \(\mathcal {S}=\{T_{1}, T_{2}\}\). Then \(CC_{\mathcal {S}}\) is proper. Moreover, \((\forall x \in {\mathscr{H}})\)\(CC_{\mathcal {S}}x = \frac {T_{1}x +T_{2}x}{2}\).

The following result plays a critical role in our calculations of circumcentered reflection methods in our numerical experiments in Section 6 below.

Proposition 2.33

Assume \(CC_{\mathcal {S}}\) is proper. Let \(x \in {\mathscr{H}}\). Set \(d_{x} :=\dim (\text {span}\{T_{2}x-T_{1}x, \ldots , T_{m}x-T_{1}x\})\). Let \(\widetilde {\mathcal {S}} := \{T_{1}, T_{i_{1}}, \ldots , T_{i_{d_{x}}}\} \subseteq \mathcal {S}\) be such that

$$ T_{i_{1}}x -T_{1}x, \ldots,T_{i_{d_{x}}}x-T_{1}x ~\text{ is a basis of }~ \text{span}\{T_{2}x-T_{1}x, \ldots, T_{m}x-T_{1}x\}. $$

Then

$$ CC_{\mathcal{S}}x= CC_{\widetilde{\mathcal{S}}}x= T_{1}x+ {\sum}^{d_{x}}_{j=1} \alpha_{i_{j}}(x) (T_{i_{j}}x-T_{1}x), $$

where

$$ \begin{pmatrix} \alpha_{i_{1}}(x)\\ \vdots\\ \alpha_{i_{d_{x}}}(x)\\ \end{pmatrix} =\frac{1}{2} G(T_{i_{1}}x -T_{1}x, \ldots, T_{i_{d_{x}}}x-T_{1}x)^{-1} \begin{pmatrix} \|T_{i_{1}}x -T_{1}x\|^{2} \\ \vdots\\ \|T_{i_{d_{x}}}x-T_{1}x\|^{2} \\ \end{pmatrix}, $$

and \(G(T_{i_{1}}x -T_{1}x, \ldots , T_{i_{d_{x}}}x-T_{1}x)\) is the Gram matrix of \( T_{i_{1}}x -T_{1}x, \ldots , T_{i_{d_{x}}}x-T_{1}x\).

Proof

The desired result follows from [4, Corollary 4.3]. □
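
To make the formula of Proposition 2.33 concrete, here is a minimal numerical sketch (ours, assuming NumPy; the helper name circumcenter and the three sample points are illustrative, and the differences are chosen linearly independent so that the basis selection is trivial). It computes the coefficients α from the Gram matrix and verifies that the resulting point is equidistant from all points of the set.

```python
import numpy as np

def circumcenter(points):
    """Circumcenter of p_0, ..., p_d via the Gram-matrix formula of
    Proposition 2.33, assuming p_1 - p_0, ..., p_d - p_0 are linearly independent."""
    p0 = points[0]
    D = np.stack([p - p0 for p in points[1:]])      # rows: p_i - p_0
    G = D @ D.T                                     # Gram matrix
    rhs = 0.5 * np.array([np.dot(d, d) for d in D])
    alpha = np.linalg.solve(G, rhs)
    return p0 + D.T @ alpha

pts = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])]
c = circumcenter(pts)
print(c)                                             # [1. 1. 0.]
print([float(np.linalg.norm(c - p)) for p in pts])   # all equal (≈ 1.4142)
```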

3 Circumcenter Mappings Induced by Isometries

Denote I := {1,…,m}. Recall that (∀i ∈I) \(T_{i} : {\mathscr{H}} \rightarrow {\mathscr{H}}\) and that

$$ \mathcal{S} = \{T_{1}, \ldots, T_{m-1}, T_{m}\} ~\text{ with }~ \cap^{m}_{j=1} \text{Fix} T_{j} \neq \varnothing. $$

In the remaining part of the paper, we assume additionally that

$$ (\forall i \in \mathrm{I}) \quad T_{i} ~\text{ is isometric}. $$

3.1 Properness of Circumcenter Mapping Induced by Isometries

The following three results generalize Lemma 4.1, Proposition 4.2, and Theorem 4.3 respectively in [5, Section 4] from reflectors associated with affine subspaces to isometries. In view of [6, Theorem 3.14(ii)], we know that isometries are indeed more general than reflectors associated with affine subspaces. The proofs are similar to those given in [5, Section 4].

Lemma 3.1

Let \(x \in {\mathscr{H}}\). Then

$$ (\forall z \in \cap^{m}_{j=1} \text{Fix}T_{j}) \quad (\forall i \in \{1,2,\ldots, m\}) \quad \|T_{i}x -z\| = \|x-z\|. $$

Proof

Let \(z \in \cap ^{m}_{j=1} \text {Fix}T_{j}\) and i ∈{1,2,…,m}. Since Ti is isometric and \(z \in \cap ^{m}_{j=1} \text {Fix}T_{j} \subseteq \text {Fix}T_{i}\), we have \(\|T_{i}x - z\| = \|T_{i}x - T_{i}z\| = \|x - z\|\). □

Proposition 3.2

For every \(z \in \cap ^{m}_{j=1} \text {Fix}T_{j}\), and for every \(x \in {\mathscr{H}}\), we have

  1. (i)

    \(\mathrm {P}_{\text {aff}(\mathcal {S}(x))}(z) \in \text {aff}(\mathcal {S}(x))\), and

  2. (ii)

    \(\big \{\|\mathrm {P}_{\text {aff}(\mathcal {S}(x))}(z) -Tx\| ~|~ T \in \mathcal {S}\}\) is a singleton.

Proof

Let \(z \in \cap ^{m}_{j=1} \text {Fix}T_{j}\), and let \(x \in {\mathscr{H}}\).

(i): Because \(\text {aff}(\mathcal {S}(x))\) is a nonempty finite-dimensional affine subspace, we know \(\mathrm {P}_{\text {aff} (\mathcal {S}(x))}(z)\) is well-defined. Clearly, \(\mathrm {P}_{\text {aff}(\mathcal {S}(x))}(z) \in \text {aff}(\mathcal {S}(x))\).

(ii): Take an arbitrary but fixed element \(T \in \mathcal {S}\). Then \(T x \in \mathcal {S}(x) \subseteq \text {aff}(\mathcal {S}(x))\). Denote \(p := \mathrm {P}_{\text {aff}(\mathcal {S}(x))}(z)\). By Fact 2.3(ii),

$$ \|z-p\|^{2}+\|p- Tx\|^{2} = \|z-Tx\|^{2}. $$
(3.1)

By Lemma 3.1, ∥zTx∥ = ∥zx∥. Thus, (3.1) yields that

$$ (\forall T \in \mathcal{S} ) \quad \|p- Tx\| = (\|z -x\|^{2} - \|z-p\|^{2} )^{\frac{1}{2}}, $$

which implies that \(\{\|p - Tx\| ~|~ T \in \mathcal {S}\}\) is a singleton. □

The following Theorem 3.3(i) states that the circumcenter mapping induced by isometries is proper, which makes the circumcentered isometry method well-defined and is therefore fundamental for our study on circumcentered isometry methods.

Theorem 3.3

Let \(x\in {\mathscr{H}}\). Then the following hold:

  1. (i)

    The circumcenter mapping \(CC_{\mathcal {S}} : {\mathscr{H}} \rightarrow {\mathscr{H}}\) induced by \(\mathcal {S}\) is proper; moreover, \(CC_{\mathcal {S}}x\) is the unique point satisfying the two conditions below:

    1. (a)

      \(CC_{\mathcal {S}}x\in \text {aff}(\mathcal {S}(x))\), and

    2. (b)

      \(\{\|CC_{\mathcal {S}}x-Tx\| ~|~ T \in \mathcal {S}\}\) is a singleton.

  2. (ii)

    \((\forall z \in \cap ^{m}_{j=1} \text {Fix}T_{j})\)\(CC_{\mathcal {S}}x= \mathrm {P}_{\text {aff}(\mathcal {S}(x))}(z)\).

  3. (iii)

    Assume that \(\varnothing \neq W \subseteq \cap ^{m}_{j=1} \text {Fix}T_{j}\) and that W is closed and convex. Then \(CC_{\mathcal {S}}x= \mathrm {P}_{\text {aff}(\mathcal {S}(x))}\left (\mathrm {P}_{\cap ^{m}_{j=1} \text {Fix}T_{j}} x\right ) = \mathrm {P}_{\text {aff}(\mathcal {S}(x))}(\mathrm {P}_{W} x)\).

Proof

(i) and (ii) come from Proposition 3.2 and [5, Proposition 3.6].

Using Lemma 2.25 and the underlying assumptions, we know \(\cap ^{m}_{j=1} \text {Fix} T_{j} \) is nonempty, closed and convex, so \(\mathrm {P}_{\cap ^{m}_{j=1} \text {Fix}T_{j}} x \in \cap ^{m}_{j=1} \text {Fix} T_{j}\) is well-defined. Hence (iii) comes from (ii). □

3.2 Further Properties of Circumcenter Mappings Induced by Isometries

Similarly to Proposition 2.33, we provide a formula of the circumcenter mapping in the following result. Because \(\mathrm {P}_{\cap ^{m}_{i=1} \text {Fix} T_{i} } x\) or PWx is unknown in general, Proposition 2.33 is more practical.

Proposition 3.4

Let \(\varnothing \neq W \subseteq \cap ^{m}_{j=1} \text {Fix}T_{j}\) and let W be closed and convex. Let \(x \in {\mathscr{H}}\). Set \(d_{x}:= \dim (\text {span}\{T_{2}x-T_{1}x, \ldots , T_{m}x-T_{1}x\})\). Let \(\widetilde {\mathcal {S}} := \{ T_{1}, T_{i_{1}}, \ldots , T_{i_{d_{x}}}\} \subseteq \mathcal {S}\) be such that

$$ T_{i_{1}}x -T_{1}x, \ldots,T_{i_{d_{x}}}x-T_{1}x ~\text{ is a basis of }~\text{span}\{T_{2}x-T_{1}x, \ldots, T_{m}x-T_{1}x\}. $$
(3.2)

Then

$$ CC_{\mathcal{S}}x = T_{1}x+ {\sum}^{d_{x}}_{j=1} \left\langle \mathrm{P}_{\cap^{m}_{i=1} \text{Fix}T_{i}} x-T_{1}x, e_{j}\right\rangle e_{j} = T_{1}x + {\sum}^{d_{x}}_{j=1} \langle \mathrm{P}_{W} x-T_{1}x, e_{j}\rangle e_{j}, $$

where (∀j ∈{1,…,dx}) \(e_{j} =\frac {T_{i_{j}}x-T_{1}x - {\sum }^{j-1}_{k=1} \langle T_{i_{j}}x-T_{1}x,e_{k}\rangle e_{k}}{\|T_{i_{j}}x-T_{1}x - {\sum }^{j-1}_{k=1} \langle T_{i_{j}}x-T_{1}x,e_{k}\rangle e_{k}\|}\).

Proof

By Theorem 3.3(iii),

$$ CC_{\mathcal{S}}x = \mathrm{P}_{\text{aff}(\mathcal{S}(x))}\left( \mathrm{P}_{\cap^{m}_{j=1} \text{Fix}T_{j}} x\right) = \mathrm{P}_{\text{aff} (\mathcal{S}(x))}(\mathrm{P}_{W} x). $$

By (3.2), we know that

$$ \text{aff}(\mathcal{S}(x))=\text{aff}\{T_{1}x, T_{i_{1}}x, \ldots, T_{i_{d_{x}}}x \}=T_{1}x +\text{span}\{T_{i_{1}}x-T_{1}x, \ldots, T_{i_{d_{x}}}x-T_{1}x\}. $$

Substituting (x,x1,…,xn,M) by \((T_{1}x,T_{i_{1}}x, \ldots ,T_{i_{d_{x}}}x,\text {aff}(\mathcal {S}(x)))\) in Lemma 2.4, we obtain the desired result. □

The following result plays an important role for the proofs of the linear convergence of circumcentered isometry methods.

Lemma 3.5

Let \(x \in {\mathscr{H}}\) and \(z \in \cap ^{m}_{j=1} \text {Fix}T_{j}\). Then the following hold:

  1. (i)

    Let \(F: {\mathscr{H}} \to {\mathscr{H}}\) satisfy \((\forall y \in {\mathscr{H}})\)\(F(y) \in \text {aff}(\mathcal {S}(y))\). Then \(\|z- CC_{\mathcal {S}}x\|^{2} + \|CC_{\mathcal {S}}x -~Fx\|^{2} = \|z- Fx\|^{2}\);

  2. (ii)

    If \(T_{\mathcal {S}} \in \text {aff} \mathcal {S}\), then \(\|z - CC_{\mathcal {S}}x\|^{2} + \|CC_{\mathcal {S}}x - T_{\mathcal {S}}x\|^{2} = \|z - T_{\mathcal {S}}x\|^{2}\);

  3. (iii)

    If \(\text {Id} \in \text {aff} \mathcal {S}\), then \(\|z- CC_{\mathcal {S}}x\|^{2} + \|CC_{\mathcal {S}}x -x\|^{2} =\|z- x\|^{2}\);

  4. (iv)

    \((\forall T \in \mathcal {S})\)\(\|z- CC_{\mathcal {S}}x\|^{2} + \|CC_{\mathcal {S}}x -Tx\|^{2} =\|z- x\|^{2}\).

Proof

Using Theorem 3.3(ii), we obtain

$$ CC_{\mathcal{S}}x=\mathrm{P}_{\text{aff}(\mathcal{S}(x))}(z). $$
(3.3)

(i): Since \(F(x) \in \text {aff}(\mathcal {S}(x))\), Fact 2.3(ii) implies

$$ \|z-CC_{\mathcal{S}}x\|^{2} + \|CC_{\mathcal{S}}x-Fx\|^{2}= \|z-Fx\|^{2}. $$

(ii) and (iii) come directly from (i).

Note that \((\forall T \in \mathcal {S})\)T is isometric and \(z \in \cap ^{m}_{j=1} \text {Fix}T_{j} \subseteq \text {Fix}T\). Hence, (iv) follows easily from (ii). □

We now present some calculus rules for circumcenter mappings.

Corollary 3.6

Assume \((\forall T \in \mathcal {S})\)T is linear. Then

  1. (i)

    \(CC_{\mathcal {S}}\) is homogeneous, that is \((\forall x \in {\mathscr{H}})\)\((\forall \lambda \in \mathbb {R})\)\(CC_{\mathcal {S}} (\lambda x)=\lambda CC_{\mathcal {S}} x\);

  2. (ii)

    \(CC_{\mathcal {S}}\) is quasitranslation, that is, \((\forall x \in {\mathscr{H}})\)\((\forall z \in \cap ^{m}_{j=1} \text {Fix}T_{j}) ~ CC_{\mathcal {S}} (x+ z ) = CC_{\mathcal {S}} (x) + z\).

Proof

By assumption, \((\forall T \in \mathcal {S})\)T is linear, so for every \(\alpha , \upbeta \in \mathbb {R}\), and for every \(x ,y \in {\mathscr{H}}\),

$$ (\forall T \in \mathcal{S}) \quad T(\alpha x + \upbeta y) = \alpha T x+ \upbeta Ty. $$

Note that by Theorem 3.3(i), \(CC_{\mathcal {S}}\) is proper. By Fact 2.28(i), \(0 \in \cap ^{m}_{j=1} \text {Fix} T_{j} \subseteq \text {Fix} CC_{\mathcal {S}}\). Hence,

$$ (\forall x \in \mathcal{H}) \quad CC_{\mathcal{S}} (0 x)=0=0 CC_{\mathcal{S}} x. $$

Therefore, (i) is from [4, Proposition 6.1] and (ii) comes from [4, Proposition 6.3]. □

The following result characterizes the fixed point set of circumcenter mappings induced by isometries under some conditions.

Proposition 3.7

Recall that \(\mathcal {S}=\{T_{1}, \ldots , T_{m-1}, T_{m}\}\). Then the following hold:

  1. (i)

    Assume T1 = Id. Then \(\text {Fix} CC_{\mathcal {S}} = \cap ^{m}_{j=1} \text {Fix}T_{j}\).

  2. (ii)

    Let F1,…,Ft be isometries from \({\mathscr{H}}\) to \({\mathscr{H}}\). Assume that \(CC_{\mathcal {S}}\) is proper, and that \(\mathcal {S}\) is a finite subset of Ω(F1,…,Ft) defined in (2.6) such that \(\{\text {Id}, F_{1}, F_{2}F_{1}, \ldots , F_{t} F_{t-1} {\cdots } F_{2}F_{1}\} \subseteq \mathcal {S}\) or \(\{\text {Id}, F_{1}, F_{2}, \ldots , F_{t}\} \subseteq \mathcal {S}\). Then \(\text {Fix} CC_{\mathcal {S}} = \cap ^{t}_{j=1} \text {Fix} F_{j} = \cap ^{m}_{j=1} \text {Fix}T_{j}\).

Proof

(i) is clear from Theorem 3.3(i) and Fact 2.28(ii).

(ii): Combining Theorem 3.3(i) with Proposition 2.29, we obtain \(\text {Fix} CC_{\mathcal {S}} = \cap ^{t}_{j=1} \text {Fix} F_{j} \). In addition, (i) proved above implies that \(\text {Fix} CC_{\mathcal {S}} = \cap ^{m}_{j=1} \text {Fix}T_{j} \). Hence, the proof is complete.□

Proposition 3.8

Let F1,…,Ft be isometries from \({\mathscr{H}}\) to \({\mathscr{H}}\). Assume that \(CC_{\mathcal {S}}\) is proper, and that \(\mathcal {S}\) is a finite subset of Ω(F1,…,Ft) defined in (2.6) such that \(\{\text {Id}, F_{1}, F_{2}F_{1}, \ldots , F_{t} F_{t-1} \cdots F_{2}F_{1} \} \subseteq \mathcal {S}\) or \(\{\text {Id}, F_{1}, F_{2}, \ldots , F_{t}\} \subseteq \mathcal {S}\). Then

$$ (\forall x \in \mathcal{H})\quad (\forall y \in \text{Fix} CC_{\mathcal{S}}) \quad \|CC_{\mathcal{S}}x -y\|^{2} +\|CC_{\mathcal{S}}x-x\|^{2} = \|x -y\|^{2}. $$
(3.4)

In particular, \(CC_{\mathcal {S}}\) is firmly quasinonexpansive.

Proof

Proposition 3.7(ii) says that in both cases stated in the assumptions, \(\text {Fix} CC_{\mathcal {S}} = \cap ^{t}_{j=1} \text {Fix} F_{j} = \cap _{T \in \mathcal {S}} \text {Fix}T\). Combining this result with Lemma 3.5(iii), we obtain (3.4).

Hence, by Definition 2.5(iii), \(CC_{\mathcal {S}}\) is firmly quasinonexpansive. □

Corollary 3.9

Let U1,…,Ut be closed affine subspaces in \({\mathscr{H}}\). Assume that \(\mathcal {S}_{1} =\{\text {Id}, \mathrm {R}_{U_{1}}, \ldots , \mathrm {R}_{U_{t}}\}\) and that \(\mathcal {S}_{2} =\{\text {Id}, \mathrm {R}_{U_{1}}, \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}, \ldots , \mathrm {R}_{U_{t}}\cdots \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\}\). Then

  1. (i)

    (∀i ∈{1,2}) \(\text {Fix} CC_{\mathcal {S}_{i}} = \bigcap _{T \in \mathcal {S}_{i}} \text {Fix}T = \cap ^{t}_{j=1} \text {Fix} \mathrm {R}_{U_{j}} = \cap ^{t}_{j=1} U_{j}\).

  2. (ii)

    \(CC_{\mathcal {S}_{1}}\) and \(CC_{\mathcal {S}_{2}}\) are firmly quasinonexpansive.

Proof

We obtain (i) and (ii) by substituting \(F_{1}=\mathrm {R}_{U_{1}}, \ldots , F_{t}=\mathrm {R}_{U_{t}}\) in Propositions 3.7 and 3.8, respectively. □

In fact, the \(CC_{\mathcal {S}_{2}}\) in Corollary 3.9 is the main actor in [8].

4 Circumcenter Methods Induced by Isometries

Recall that \(\mathcal {S}=\{ T_{1}, \ldots , T_{m-1}, T_{m} \}\) with \(\cap ^{m}_{j=1} \text {Fix} T_{j} \neq \varnothing \) and that every element of \(\mathcal {S}\) is isometric and affine.

Let \(x \in {\mathscr{H}}\). The circumcenter method induced by \(\mathcal {S}\) is

$$ x_{0} :=x ~\text{ and }~x_{k} :=CC_{\mathcal{S}}(x_{k-1})=CC_{\mathcal{S}}^{k}x, \quad\text{where }~k=1,2,\ldots. $$

Theorem 3.3(i) says that \(CC_{\mathcal {S}}\) is proper, which ensures that the circumcenter method induced by \(\mathcal {S}\) is well defined. Since every element of \(\mathcal {S}\) is isometric, we refer to this method as the circumcenter method induced by isometries (or circumcentered isometry method).
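
For concreteness, the following is a small numerical sketch of such a circumcentered isometry method (ours, assuming NumPy; the two lines, the starting point, and the helper names are illustrative). It runs the circumcentered reflection method with \(\mathcal {S} = \{\text {Id}, \mathrm {R}_{U_{1}}, \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\}\) for two lines in \(\mathbb {R}^{3}\) and reports the distance to \(\mathrm {P}_{U_{1} \cap U_{2}}x = 0\), which decreases rapidly, consistent with Corollary 4.9 and the linear convergence results of Section 5. The circumcenter is computed with the Gram-matrix formula of Proposition 2.33, using a least-squares solve since the linear system is consistent by Theorem 3.3(i).

```python
import numpy as np

def refl_line(a):                               # reflector onto the line span{a}
    u = a / np.linalg.norm(a)
    return 2 * np.outer(u, u) - np.eye(len(a))

def circumcenter(points):                       # Gram-matrix formula, cf. Proposition 2.33
    p0 = points[0]
    D = np.stack([p - p0 for p in points[1:]])
    G = D @ D.T
    rhs = 0.5 * np.array([np.dot(d, d) for d in D])
    alpha, *_ = np.linalg.lstsq(G, rhs, rcond=None)  # consistent system (Theorem 3.3(i))
    return p0 + D.T @ alpha

R1 = refl_line(np.array([1.0, 0.0, 0.0]))       # U_1 = span{(1,0,0)}
R2 = refl_line(np.array([1.0, 1.0, 1.0]))       # U_2 = span{(1,1,1)}, so U_1 ∩ U_2 = {0}

x = np.array([1.0, 2.0, 3.0])
for k in range(15):
    x = circumcenter([x, R1 @ x, R2 @ R1 @ x])  # CC_S with S = {Id, R_{U_1}, R_{U_2}R_{U_1}}
    print(k + 1, np.linalg.norm(x))             # distance to P_{U_1∩U_2}x = 0 shrinks rapidly
```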

4.1 Properties of Circumcentered Isometry Methods

In this subsection, we provide some properties of circumcentered isometry methods. All of the properties are interesting in their own right. Moreover, the following Propositions 4.1 and 4.2 play an important role in the convergence proofs later.

Proposition 4.1

Let \(x \in {\mathscr{H}}\). Then the following hold:

  1. (i)

    \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) is a Fejér monotone sequence with respect to \(\cap ^{m}_{j=1} \text {Fix}T_{j}\).

  2. (ii)

    \((\forall z \in \cap ^{m}_{j=1} \text {Fix}T_{j})\) the limit \(\lim _{k \rightarrow + \infty } \|CC_{\mathcal {S}}^{k}x-z\|\) exists.

  3. (iii)

    \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) is a bounded sequence.

  4. (iv)

    Assume \(\varnothing \neq W \subseteq \cap ^{m}_{j=1} \text {Fix}T_{j}\). Then \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) is a Fejér monotone sequence with respect to W.

  5. (v)

    Assume \(\text {Id} \in \text {aff} \mathcal {S}\). Then \(CC_{\mathcal {S}}\) is asymptotically regular, that is for every \(y \in {\mathscr{H}}\),

    $$ \lim_{k \rightarrow \infty}CC_{\mathcal{S}}^{k} y-CC_{\mathcal{S}}^{k+1} y =0. $$

Proof

For every \(k \in \mathbb {N}\), substitute x by \(CC_{\mathcal {S}}^{k}x\) in Lemma 3.5(iv) to obtain

$$ (\forall T \in \mathcal{S}) \quad (\forall z \in \cap^{m}_{j=1} \text{Fix}T_{j}) \quad \|z - CC_{\mathcal{S}}^{k+1}x\|^{2} + \|CC_{\mathcal{S}}^{k+1}x-T CC_{\mathcal{S}}^{k}x\|^{2} = \|z - CC_{\mathcal{S}}^{k}x\|^{2}. $$
(4.1)

(i): By (4.1), it is clear that

$$ \left( \forall z \in \cap^{m}_{j=1} \text{Fix}T_{j}\right) \quad (\forall k \in \mathbb{N}) \quad \|CC_{\mathcal{S}}^{k+1}x -z\| \leq \|CC_{\mathcal{S}}^{k}x-z\|. $$
(4.2)

By Definition 2.11, \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) is a Fejér monotone sequence with respect to \(\cap ^{m}_{j=1} \text {Fix}T_{j}\).

(ii): By (4.2), clearly \((\forall z \in \cap ^{m}_{j=1} \text {Fix}T_{j})\)\( \lim _{k \rightarrow + \infty } \|CC_{\mathcal {S}}^{k}x-z\|\) exists.

(iii): It directly comes from (i) and Fact 2.12.

(iv): The desired result is directly from (i) and Definition 2.11.

(v): Let \(z \in \cap ^{m}_{j=1} \text {Fix}T_{j}\). By (ii) above, we know \(L_{z} := \lim _{k \rightarrow + \infty } \|CC_{\mathcal {S}}^{k}x-z\|\) exists. Since \(\text {Id} \in \text {aff} \mathcal {S}\), for every \(k \in \mathbb {N}\), substituting x by \(CC_{\mathcal {S}}^{k}x\) in Lemma 3.5(iii), we have

$$ \|CC_{\mathcal{S}}^{k}x-CC_{\mathcal{S}}^{k+1}x\|^{2} = \|CC_{\mathcal{S}}^{k}x-z\|^{2} - \|CC_{\mathcal{S}}^{k+1}x -z\|^{2}. $$
(4.3)

Summing over k from 0 to infinity in both sides of (4.3), we obtain

$$ {\sum}^{\infty}_{k=0} \|CC_{\mathcal{S}}^{k}x-CC_{\mathcal{S}}^{k+1}x\|^{2} = \|x-z\|^{2} - {L^{2}_{z}} < +\infty, $$

which yields \(\lim _{k \rightarrow + \infty } CC_{\mathcal {S}}^{k}x-CC_{\mathcal {S}}^{k+1}x=0\), i.e., \(CC_{\mathcal {S}}\) is asymptotically regular. □

The following results are motivated by [7, Lemmas 1 and 3]. Note that by Lemma 2.25(ii), \(\cap ^{m}_{j=1} \text {Fix}T_{j}\) is always closed and convex.

Proposition 4.2

Let \(\varnothing \neq W \subseteq \cap ^{m}_{j=1} \text {Fix}T_{j}\) such that W is convex and closed. Let \(x \in {\mathscr{H}}\). Then the following hold:

  1. (i)

    \((\forall T \in \mathcal {S})\) PWTx = T PWx = PWx and d(x,W) = d(Tx,W).

  2. (ii)

    \((\forall k \in \mathbb {N})\)\(CC_{\mathcal {S}}^{k}\mathrm {P}_{W} x =\mathrm {P}_{W} x\).

  3. (iii)

    Assume W is closed and affine. Then \((\forall k \in \mathbb {N})\)\(\mathrm {P}_{W} (CC_{\mathcal {S}}^{k}x)=\mathrm {P}_{W} x\).

  4. (iv)

    Let \(T_{\mathcal {S}} \in \text {aff}(\mathcal {S})\). Then \(\|\mathrm {P}_{W} x - CC_{\mathcal {S}}x\|^{2} + \|CC_{\mathcal {S}}x - T_{\mathcal {S}}x\|^{2} = \|\mathrm {P}_{W} x - T_{\mathcal {S}}x\|^{2}\).

Proof

(i): Let \(T \in \mathcal {S}\). Since \(W \subseteq \cap ^{m}_{j=1} \text {Fix}T_{j} \subseteq \text {Fix}T\), it is clear that T PWx = PWx. Moreover, since \(\mathrm {P}_{W} x \in W \subseteq \cap ^{m}_{j=1} \text {Fix} T_{j} \subseteq \text {Fix}T\), \(\mathrm {P}_{W} Tx \in W \subseteq \cap ^{m}_{i=1} \text {Fix}T_{i} \subseteq \text {Fix}T\), and since T is isometric, we have

$$ \begin{array}{@{}rcl@{}} \|x - \mathrm{P}_{W}x\| & \leq& \|x - \mathrm{P}_{W}Tx\| \quad (\text{by definition of best approximation and $\mathrm{P}_{W}Tx \in W$}) \\ &=& \|Tx - \mathrm{P}_{W}Tx\| \quad (\text{\textit{T} is isometric})\\ &\leq& \|Tx - \mathrm{P}_{W}x\| \quad (\text{by definition of best approximation and $\mathrm{P}_{W}x \in W$}) \\ &=& \|x - \mathrm{P}_{W}x\|, \quad (\text{\textit{T} is isometric}) \end{array} $$

which imply that

$$ \|x - \mathrm{P}_{W}x\| =\|Tx - \mathrm{P}_{W}Tx\| = \|x - \mathrm{P}_{W}Tx\|. $$
(4.4)

Since W is nonempty, closed and convex, the best approximation of x onto W uniquely exists. So (4.4) implies that PWTx = PWx and d(x,W) = d(Tx,W).

(ii): By assumption and by Fact 2.28(i), \(\mathrm {P}_{W}x \in W \subseteq \cap ^{m}_{j=1} \text {Fix} T_{j} \subseteq \text {Fix} CC_{\mathcal {S}}\), thus it is clear that \((\forall k \in \mathbb {N})\)\(CC_{\mathcal {S}}^{k}\mathrm {P}_{W} x =\mathrm {P}_{W} x\).

(iii): The required result comes from Proposition 4.1(iv) and Fact 2.13(i).

(iv): By Theorem 3.3(iii), \(CC_{\mathcal {S}}x = \mathrm {P}_{\text {aff}(\mathcal {S}(x))}\mathrm {P}_{W}x\). Since \(T_{\mathcal {S}} \in \text {aff}(\mathcal {S})\) implies that \(T_{\mathcal {S}}x \in \text {aff}(\mathcal {S}(x))\), by Fact 2.3(ii), \(\|\mathrm {P}_{W} x - CC_{\mathcal {S}}x\|^{2} + \|CC_{\mathcal {S}}x - T_{\mathcal {S}}x\|^{2} = \|\mathrm {P}_{W} x - T_{\mathcal {S}}x\|^{2}\). □

With \(W=\cap ^{m}_{j=1} \text {Fix}T_{j}\) in the following result, we know that \((\forall x \in {\mathscr{H}})\) the distance between \(CC_{\mathcal {S}}x \in \text {aff}(\mathcal {S}(x))\) and \(\mathrm {P}_{\cap ^{m}_{j=1} \text {Fix}T_{j}}x \in \cap ^{m}_{j=1} \text {Fix}T_{j}\) is exactly the distance between the two affine subspaces \(\text {aff}(\mathcal {S}(x))\) and \(\cap ^{m}_{j=1} \text {Fix}T_{j}\).

Corollary 4.3

Let \(\varnothing \neq W \subseteq \cap ^{m}_{j=1} \text {Fix} T_{j}\) such that W is closed and affine. Let \(x \in {\mathscr{H}}\). Then

$$ \|CC_{\mathcal{S}}x - \mathrm{P}_{W}x\| = \mathrm{d}(\text{aff}(\mathcal{S}(x)), W). $$

Proof

By Theorem 3.3(ii), \((\forall z \in \cap ^{m}_{j=1} \text {Fix}T_{j})\)\(CC_{\mathcal {S}}x= \mathrm {P}_{\text {aff}(\mathcal {S}(x))}(z)\), which implies that

$$ (\forall z \in W \subseteq \cap^{m}_{j=1} \text{Fix}T_{j}) \quad \|CC_{\mathcal{S}}x - z\| = \mathrm{d}(\text{aff}(\mathcal{S}(x)), z). $$
(4.5)

Now taking infimum over all z in W in (4.5), we obtain

$$ \mathrm{d}(CC_{\mathcal{S}}x, W) = \inf_{z \in W} \|CC_{\mathcal{S}}x - z\| = \inf_{z \in W} \mathrm{d}(\text{aff}(\mathcal{S}(x)), z) =\mathrm{d}(\text{aff}(\mathcal{S}(x)), W). $$

Hence, using Proposition 4.2(iii), we deduce that \(\|CC_{\mathcal {S}}x - \mathrm {P}_{W}x\| = \|CC_{\mathcal {S}}x -\mathrm {P}_{W}(CC_{\mathcal {S}}x)\| = \mathrm {d}(CC_{\mathcal {S}}x, W) = \mathrm {d}(\text {aff}(\mathcal {S}(x)), W)\).□

Proposition 4.4

Let \(\varnothing \neq W \subseteq \cap ^{m}_{j=1} \text {Fix} T_{j}\) such that W is closed and affine. Let \(x \in {\mathscr{H}}\). Then the following are equivalent:

  1. (i)

    \(CC_{\mathcal {S}}x \in W\);

  2. (ii)

    \(CC_{\mathcal {S}}x = \mathrm {P}_{W}x\);

  3. (iii)

    (∀k ≥ 1) \(CC_{\mathcal {S}}^{k}x = \mathrm {P}_{W}x\).

Proof

“(i) ⇒ (ii)”: If \(CC_{\mathcal {S}}x \in W\), then \(CC_{\mathcal {S}}x = \mathrm {P}_{W} CC_{\mathcal {S}}x = \mathrm {P}_{W}x\) using Proposition 4.2(iii).

“(ii) ⇒ (iii)”: Assume \(CC_{\mathcal {S}}x = \mathrm {P}_{W}x\). By Fact 2.28(i), \(\mathrm {P}_{W}x \in W \subseteq \cap ^{m}_{j=1} \text {Fix}T_{j} \subseteq \text {Fix} CC_{\mathcal {S}}\). Hence,

$$ (\forall k \geq 2) \quad CC_{\mathcal{S}}^{k}x = CC_{\mathcal{S}}^{k-1}(CC_{\mathcal{S}}x) = CC_{\mathcal{S}}^{k-1}(\mathrm{P}_{W}x) = \mathrm{P}_{W}x. $$

“(iii) ⇒ (i)”: Take k = 1. □

Corollary 4.5

Let \(\varnothing \neq W \subseteq \cap ^{m}_{j=1} \text {Fix} T_{j}\) such that W is closed and affine. Let \(x \in {\mathscr{H}}\). Assume that \(\lim _{k \rightarrow \infty } CC_{\mathcal {S}}^{k}x \neq \mathrm {P}_{W}x\). Then

$$ (\forall k \in \mathbb{N}) \quad CC_{\mathcal{S}}^{k}x \not \in W. $$

Proof

We argue by contradiction and thus assume there exists \(n \in \mathbb {N}\) such that \(CC_{\mathcal {S}}^{n}x \in W\). If n = 0, then, by Fact 2.28(i), \((\forall k \in \mathbb {N})\)\(CC_{\mathcal {S}}^{k}x =x = \mathrm {P}_{W}x\), which contradicts the assumption, \(\lim _{k \rightarrow \infty } CC_{\mathcal {S}}^{k}x \neq \mathrm {P}_{W}x\). Assume n ≥ 1. Then Proposition 4.4 implies (∀kn) \(CC_{\mathcal {S}}^{k}x = \mathrm {P}_{W} CC_{\mathcal {S}}^{n-1}x\), which is absurd.□

Proposition 4.6

Assume \((\forall T \in \mathcal {S})\)T is linear. Then

  1. (i)

    \((\forall x \in {\mathscr{H}})\)\((\forall \lambda \in \mathbb {R})\)\(CC_{\mathcal {S}}^{k} (\lambda x)=\lambda CC_{\mathcal {S}}^{k} x\).

  2. (ii)

    \((\forall x \in {\mathscr{H}})\)\((\forall z \in \cap ^{m}_{j=1} \text {Fix} T_{j})\)\(CC_{\mathcal {S}}^{k} (x+ z) = CC_{\mathcal {S}}^{k} (x) + z\).

Proof

The required results follow easily from Corollary 3.6 and some easy induction. □

4.2 Convergence

In this subsection, we consider the weak, strong and linear convergence of circumcentered isometry methods.

Theorem 4.7

Assume T1 = Id and \(\cap ^{m}_{j=1} \text {Fix} T_{j}\) is an affine subspace of \({\mathscr{H}}\). Let \(x \in {\mathscr{H}}\). Then \((CC_{\mathcal {S}}^{k}x)\) weakly converges to \(\mathrm {P}_{\cap ^{m}_{j=1} \text {Fix}T_{j}} x\) and \((\forall k \in \mathbb {N})\)\(\mathrm {P}_{\cap ^{m}_{j=1} \text {Fix} T_{j}} (CC_{\mathcal {S}}^{k}x) = \mathrm {P}_{\cap ^{m}_{j=1} \text {Fix}T_{j}} x\). In particular, if \({\mathscr{H}}\) is finite-dimensional space, then \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) converges to \(\mathrm {P}_{\cap ^{m}_{j=1} \text {Fix} T_{j}}x\).

Proof

By Proposition 4.2(iii), we have \((\forall k \in \mathbb {N} \setminus \{0\})\)\(\mathrm {P}_{\cap ^{m}_{j=1} \text {Fix} T_{j}} (CC_{\mathcal {S}}^{k}x) =\mathrm {P}_{\cap ^{m}_{j=1} \text {Fix}T_{j}} x\).

In Proposition 4.1(i), we proved that \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) is a Fejér monotone sequence with respect to \( \cap ^{m}_{j=1} \text {Fix} T_{j}\). By assumptions above and Fact 2.13(ii), in order to prove the weak convergence, it suffices to show that every weak sequential cluster point of \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) belongs to \(\cap ^{m}_{j=1} \text {Fix} T_{j}\). Because every bounded sequence in a Hilbert space possesses weakly convergent subsequence, by Fact 2.12, there exist weak sequential cluster points of \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\). Assume \(\bar {x}\) is a weak sequential cluster point of \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\), that is, there exists a subsequence \((CC_{\mathcal {S}}^{k_{j}}x)_{j \in \mathbb {N}}\) of \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) such that \(CC_{\mathcal {S}}^{k_{j}}x \rightharpoonup \bar {x}\). Applying Proposition 4.1(v), we know that \(CC_{\mathcal {S}}^{k}x -CC_{\mathcal {S}}(CC_{\mathcal {S}}^{k}x) \rightarrow 0\). So \(CC_{\mathcal {S}}^{k_{j}}x -CC_{\mathcal {S}}(CC_{\mathcal {S}}^{k_{j}}x) \rightarrow 0\). Combining the results above with Lemma 2.25(i), Theorem 3.3(i) and Fact 2.31, we conclude that \(\bar {x} \in \text {Fix} CC_{\mathcal {S}} = \cap ^{m}_{j=1} \text {Fix} T_{j}\). □

From Theorem 4.7, we obtain the well-known weak convergence of the Douglas–Rachford method next.

Corollary 4.8

Let U1,U2 be two closed affine subspaces in \({\mathscr{H}}\). Denote \(T_{U_{2},U_{1}} := \frac {\text {Id} +\mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}}{2}\) the Douglas–Rachford operator. Let \(x \in {\mathscr{H}}\). Then the Douglas–Rachford method \((T^{k}_{U_{2},U_{1}}x )_{k \in \mathbb {N}}\) weakly converges to \(\mathrm {P}_{\text {Fix} T_{U_{2},U_{1}}}x\). In particular, if \({\mathscr{H}}\) is finite-dimensional space, then \((T^{k}_{U_{2},U_{1}}x)_{k \in \mathbb {N}}\) converges to \(\mathrm {P}_{\text {Fix}T_{U_{2},U_{1}}}x\).

Proof

Set \(\mathcal {S} := \{\text {Id}, \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}} \}\). By Fact 2.32, we know that \(CC_{\mathcal {S}} = T_{U_{2},U_{1}}\). Since U1,U2 are closed affine, thus, by Lemma 2.23(i) and Lemma 2.24, \(\mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\) is isometric and, by Lemma 2.25(i) and Fact 2.3(i), \(\mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\) is nonexpansive and affine. So \(\text {Fix} \text {Id} \cap \text {Fix} \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}} = \text {Fix} \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}} \) is closed and affine. In addition, by definition of \(T_{U_{2},U_{1}} \), it is clear that \(\text {Fix} T_{U_{2},U_{1}} = \text {Fix} \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\).

Hence, the result comes from Theorem 4.7. □

We now provide examples of weakly convergent circumcentered reflection methods.

Corollary 4.9

Let U1,…,Ut be closed affine subspaces in \({\mathscr{H}}\). Assume that \(\mathcal {S}_{1} =\{\text {Id}, \mathrm {R}_{U_{1}}, \ldots , \mathrm {R}_{U_{t}}\}\) and that \(\mathcal {S}_{2} =\{\text {Id}, \mathrm {R}_{U_{1}}, \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}, \ldots , \mathrm {R}_{U_{t}}{\cdots } \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\}\). Let \(x \in {\mathscr{H}}\). Then both \((CC_{\mathcal {S}_{1}}^{k}x)\) and \((CC_{\mathcal {S}_{2}}^{k}x)\) weakly converge to \(\mathrm {P}_{\cap ^{t}_{j=1} U_{j}} x\). In particular, if \({\mathscr{H}}\) is a finite-dimensional space, then both \((CC_{\mathcal {S}_{1}}^{k}x)\) and \((CC_{\mathcal {S}_{2}}^{k}x)\) converge to \(\mathrm {P}_{\cap ^{t}_{j=1} U_{j}} x\).

Proof

Since U1,…,Ut are closed affine subspaces in \({\mathscr{H}}\), \(\cap ^{t}_{j=1} U_{j}\) is a closed affine subspace in \({\mathscr{H}}\). Moreover, by Lemma 2.23(i) and Lemma 2.24, every element of \(\mathcal {S}_{1}\) and of \(\mathcal {S}_{2}\) is isometric. In addition, by Corollary 3.9(i), (∀i ∈{1,2}) \( \bigcap _{T \in \mathcal {S}_{i} } \text {Fix} T = \cap ^{t}_{j=1} U_{j}\). Therefore, the required results follow from Theorem 4.7. □

In fact, in Section 5.2 below, we will show that if \({\mathscr{H}} \) is finite-dimensional space, then both \((CC_{\mathcal {S}_{1}}^{k}x)\) and \((CC_{\mathcal {S}_{2}}^{k}x)\) defined in Corollary 4.9 above linearly converge to \(\mathrm {P}_{\cap ^{t}_{j=1} U_{j}} x\).

Corollary 4.10

Assume that A1,…,Ad are orthogonal matrices in \(\mathbb {R}^{n \times n}\) and that \(\mathcal {S}=\{\text {Id}, A_{1}, \ldots , A_{d}\}\). Let \(x \in \mathbb {R}^{n}\). Then \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) converges to \(\mathrm {P}_{\cap ^{d}_{j=1} \text {Fix} A_{j}}x\).

Proof

Since \(\text {Fix} \text {Id} = \mathbb {R}^{n}\), we have \( \text {Fix} \text {Id} \bigcap (\cap ^{d}_{j=1} \text {Fix} A_{j} ) = \cap ^{d}_{j=1} \text {Fix} A_{j}\), which is a closed linear subspace of \(\mathbb {R}^{n}\). Moreover, by [17, p. 321], the linear isometries on \(\mathbb {R}^{n}\) are precisely the orthogonal matrices. Hence, the result comes from Lemma 2.23(iv) and Theorem 4.7. □

Remark 4.11

If we replace \(\mathrm {P}_{\cap ^{m}_{j=1} \text {Fix} T_{j}} x\) by PWx for any \(\varnothing \neq W \subseteq \cap ^{m}_{j=1} \text {Fix} T_{j}\), the result shown in Theorem 4.7 may not hold. For instance, consider \({\mathscr{H}} =\mathbb {R}^{n}\), \(\mathcal {S}=\{\text {Id}\}\), \(W \varsubsetneqq \mathbb {R}^{n}\) closed and affine, and \(x \in \mathbb {R}^{n} \setminus W\). Then \(CC_{\mathcal {S}}^{k}x \equiv x \not \to \mathrm {P}_{W}x\).

Let us now present sufficient conditions for the strong convergence of circumcentered isometry methods.

Theorem 4.12

Let W be a nonempty closed affine subset of \(\cap ^{m}_{j=1} \text {Fix}T_{j}\), and let \(x \in {\mathscr{H}}\). Then the following hold:

  1. (i)

    If \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) has a norm cluster point in W, then \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) converges in norm to PW(x).

  2. (ii)

    The following are equivalent:

    1. (a)

      \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) converges in norm to PW(x).

    2. (b)

      \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) converges in norm to some point in W.

    3. (c)

      \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) has norm cluster points, all lying in W.

    4. (d)

      \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) has norm cluster points, one lying in W.

Proof

(i): Assume \(\overline {x} \in W\) is a norm cluster point of \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\), that is, there exists a subsequence \((CC_{\mathcal {S}}^{k_{j}}x)_{j \in \mathbb {N}}\) of \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) such that \(\lim _{j \rightarrow \infty }CC_{\mathcal {S}}^{k_{j}}x=\overline {x}\). Now for every \(j \in \mathbb {N}\),

$$ \begin{array}{@{}rcl@{}} \|CC_{\mathcal{S}}^{k_{j}}x-\mathrm{P}_{W} x\| &=& \|CC_{\mathcal{S}}^{k_{j}}x-\mathrm{P}_{W} (CC_{\mathcal{S}}^{k_{j}}x)\| \quad \text{(by Proposition~4.2(iii))}\\ & \leq& \|CC_{\mathcal{S}}^{k_{j}}x-\overline{x}\| \quad (\text{since}~\overline{x} \in W). \end{array} $$

So

$$ 0 \leq \lim_{j \rightarrow \infty} \|CC_{\mathcal{S}}^{k_{j}}x-\mathrm{P}_{W} (x)\| \leq \lim_{j \rightarrow \infty} \|CC_{\mathcal{S}}^{k_{j}}x-\overline{x}\|=0. $$

Hence, \( \lim _{j \rightarrow + \infty } CC_{\mathcal {S}}^{k_{j}}x = \mathrm {P}_{W} (x)\).

Substituting PWx for z in Proposition 4.1(ii), we see that \(\lim _{k \rightarrow + \infty } \|CC_{\mathcal {S}}^{k}x- \mathrm {P}_{W} x\|\) exists. Hence,

$$ \lim_{k \rightarrow + \infty} \|CC_{\mathcal{S}}^{k}x-\mathrm{P}_{W} x\| = \lim_{j \rightarrow + \infty} \|CC_{\mathcal{S}}^{k_{j}}x-\mathrm{P}_{W} x\| =0, $$

from which it follows that \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) converges strongly to PWx.

(ii): By Proposition 4.1 (iv), \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) is a Fejér monotone sequence with respect to W. Then the equivalences follow from [2, Theorem 2.16(v)] and (i) above. □

To facilitate a later proof, we provide the following lemma.

Lemma 4.13

Let \(\varnothing \neq W \subseteq \cap ^{m}_{j=1} \text {Fix} T_{j}\) such that W is closed and affine. Assume there exists γ ∈ [0,1[ such that

$$ (\forall x \in \mathcal{H}) \quad \|CC_{\mathcal{S}}x-\mathrm{P}_{W} x\| \leq \gamma \|x -\mathrm{P}_{W} x\|. $$
(4.6)

Then

$$ (\forall x \in \mathcal{H}) \quad (\forall k \in \mathbb{N}) \quad \|CC_{\mathcal{S}}^{k}x -\mathrm{P}_{W} x\| \leq \gamma^{k} \|x - \mathrm{P}_{W} x\|. $$

Proof

Let \(x \in {\mathscr{H}}\). For k = 0, the result is trivial.

Assume for some \(k \in \mathbb {N}\) we have

$$ (\forall y \in \mathcal{H}) \quad \|CC_{\mathcal{S}}^{k}y-\mathrm{P}_{W} y\| \leq \gamma^{k} \|y -\mathrm{P}_{W} y\|. $$
(4.7)

Now

$$ \begin{array}{@{}rcl@{}} \|CC_{\mathcal{S}}^{k+1}x-\mathrm{P}_{W} x\| &=& \|CC_{\mathcal{S}}(CC_{\mathcal{S}}^{k}x)-\mathrm{P}_{W} (CC_{\mathcal{S}}^{k}x)\| \quad (\text{by Proposition~4.2(iii)})\\ &\stackrel{(4.6)} {\leq}& \gamma \|CC_{\mathcal{S}}^{k}x-\mathrm{P}_{W} (CC_{\mathcal{S}}^{k}x)\| \\ &=& \gamma \|CC_{\mathcal{S}}^{k}x-\mathrm{P}_{W} x\| \quad (\text{by Proposition~4.2(iii)})\\ &\stackrel{\text{(4.7)}} {\leq}& \gamma^{k+1} \|x -\mathrm{P}_{W} x\|. \end{array} $$

Hence, we obtain the desired result inductively. □

The following powerful result will play an essential role in proving the linear convergence of the circumcenter method induced by reflectors.

Theorem 4.14

Let W be a nonempty, closed and affine subspace of \(\cap ^{m}_{j=1} \text {Fix} T_{j}\).

  1. (i)

    Assume that there exist \(F: {\mathscr{H}} \to {\mathscr{H}}\) and γ ∈ [0,1[ such that \((\forall y \in {\mathscr{H}})\)\(F(y) \in \text {aff}(\mathcal {S}(y))\) and

    $$ (\forall x \in \mathcal{H}) \quad \|Fx-\mathrm{P}_{W} x\| \leq \gamma \|x -\mathrm{P}_{W} x\|. $$
    (4.8)

    Then

    $$ (\forall x \in \mathcal{H})\quad (\forall k \in \mathbb{N}) \quad \|CC_{\mathcal{S}}^{k}x-\mathrm{P}_{W} x\| \leq \gamma^{k} \|x -\mathrm{P}_{W} x\|. $$
    (4.9)

    Consequently, \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) converges linearly to PWx with a linear rate γ.

  2. (ii)

    If there exist \(T_{\mathcal {S}} \in \text {aff}(\mathcal {S})\) and γ ∈ [0,1[, such that

    $$ (\forall x \in \mathcal{H}) \quad \|T_{\mathcal{S}}x-\mathrm{P}_{W} x\| \leq \gamma \|x -\mathrm{P}_{W} x\|, $$

    then \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) converges linearly to PWx with a linear rate γ.

Proof

(i): Using the assumptions and applying Lemma 3.5(i) with \((\forall x \in {\mathscr{H}})\)z = PWx, we obtain that

$$ (\forall x \in \mathcal{H}) \quad \|CC_{\mathcal{S}}x-\mathrm{P}_{W} x\| \leq \|Fx-\mathrm{P}_{W} x\| \stackrel{(4.8)}{\leq} \gamma \|x -\mathrm{P}_{W} x\|. $$

Hence, (4.9) follows directly from Lemma 4.13.

(ii): Since \(T_{\mathcal {S}} \in \text {aff}(\mathcal {S})\) implies that \((\forall y \in {\mathscr{H}})\)\(T_{\mathcal {S}}y \in \text {aff}(\mathcal {S}(y))\), the required result follows from (i) above by substituting \(F=T_{\mathcal {S}}\). □

Theorem 4.15

Let \(T_{\mathcal {S}} \in \text {aff}(\mathcal {S})\) satisfy that \( \text {Fix} T_{\mathcal {S}} \subseteq \cap _{T \in \mathcal {S}} \text {Fix} T\). Then the following hold:

  1. (i)

    \(\text {Fix}T_{\mathcal {S}} = \cap _{T \in \mathcal {S}} \text {Fix}T\).

  2. (ii)

    Let \({\mathscr{H}}= \mathbb {R}^{n}\). Assume that \(T_{\mathcal {S}}\) is linear and α-averaged with α ∈]0,1[. For every \(x \in {\mathscr{H}}\), \( (CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) converges to \(\mathrm {P}_{\cap _{T \in \mathcal {S}} \text {Fix} T }x\) with a linear rate \(\|T_{\mathcal {S}} \mathrm {P}_{(\cap _{T \in \mathcal {S}} \text {Fix} T)^{\perp }}\| \in [0,1[\).

Proof

(i): Clearly, \(T_{\mathcal {S}} \in \text {aff}(\mathcal {S})\) implies that \(\cap _{T \in \mathcal {S}} \text {Fix} T \subseteq \text {Fix} T_{\mathcal {S}}\). Combining the result with the assumption, \(\text {Fix} T_{\mathcal {S}} \subseteq \cap _{T \in \mathcal {S}} \text {Fix}T\), we get (i).

(ii): Since \(T_{\mathcal {S}}\) is linear and α-averaged, Fact 2.6 implies that \(\text {Fix} T_{\mathcal {S}}\) is a nonempty closed linear subspace. It is clear that

$$ T_{\mathcal{S}} \mathrm{P}_{\text{Fix} T_{\mathcal{S}} } = \mathrm{P}_{\text{Fix} T_{\mathcal{S}}}. $$
(4.10)

Using Proposition 2.10, we know

$$ \gamma :=\|T_{\mathcal{S}}\mathrm{P}_{(\text{Fix} T_{\mathcal{S}})^{\perp}}\|<1. $$

Now for every \(x \in \mathbb {R}^{n}\),

$$ \begin{array}{@{}rcl@{}} \|T_{\mathcal{S}}x-\mathrm{P}_{\text{Fix} T_{\mathcal{S}}} x\| & \stackrel{(4.10)}{=}& \|T_{\mathcal{S}}x-T_{\mathcal{S}}\mathrm{P}_{\text{Fix} T_{\mathcal{S}}} x\| \\ &=& \|T_{\mathcal{S}} (x-\mathrm{P}_{\text{Fix} T_{\mathcal{S}} } x)\| \quad (T_{\mathcal{S}}~\text{linear}) \\ &=& \|T_{\mathcal{S}} \mathrm{P}_{(\text{Fix} T_{\mathcal{S}} )^{\perp}} (x)\| \quad (\text{by Fact~2.2(i)})\\ &=& \|T_{\mathcal{S}} \mathrm{P}_{(\text{Fix} T_{\mathcal{S}} )^{\perp}} \mathrm{P}_{(\text{Fix} T_{\mathcal{S}} )^{\perp}} (x)\|\\ & \leq& \|T_{\mathcal{S}} \mathrm{P}_{(\text{Fix} T_{\mathcal{S}})^{\perp}}\| \|\mathrm{P}_{(\text{Fix} T_{\mathcal{S}} )^{\perp}} (x)\| \\ &=&\gamma \|x-\mathrm{P}_{\text{Fix} T_{\mathcal{S}}} (x)\| \quad (\text{by Fact~2.2(i)}). \end{array} $$

Hence, the desired result follows from Theorem 4.14(ii) by substituting \(W =\text {Fix} T_{\mathcal {S}}\) and (i) above. □

Useful properties of the \(T_{\mathcal {S}}\) in Theorem 4.15 can be found in the following results.

Proposition 4.16

Let \(\varnothing \neq W \subseteq \cap ^{m}_{j=1} \text {Fix} T_{j}\) such that W is a closed and affine subspace of \({\mathscr{H}}\) and let \(T_{\mathcal {S}} \in \text {aff}(\mathcal {S})\). Let \(x \in {\mathscr{H}}\). Then

  1. (i)

    \((\forall k \in \mathbb {N})\)\(\mathrm {P}_{W} (T^{k}_{\mathcal {S}}x)=T^{k}_{\mathcal {S}}\mathrm {P}_{W} x=\mathrm {P}_{W} x\).

  2. (ii)

    \(\|\mathrm {P}_{W} (CC_{\mathcal {S}}x) -CC_{\mathcal {S}}x\|^{2} = \|\mathrm {P}_{W} (T_{\mathcal {S}}x) -T_{\mathcal {S}}x\|^{2} -\|CC_{\mathcal {S}}x-T_{\mathcal {S}}x\|^{2}\).

  3. (iii)

    \( d(CC_{\mathcal {S}}x, W) = \|CC_{\mathcal {S}}x - \mathrm {P}_{W} (x)\| \leq \|T_{\mathcal {S}}x - \mathrm {P}_{W} x\| = d(T_{\mathcal {S}}x, W)\).

Proof

(i): Denote I := {1,…,m}. By assumption, \(T_{\mathcal {S}} \in \text {aff}(\mathcal {S})\), that is, there exist \((\alpha _{i})_{i\in \mathrm {I}} \in \mathbb {R}^{m}\) such that \({\sum }^{m}_{i=1} \alpha _{i} =1\) and \(T_{\mathcal {S}} ={\sum }^{m}_{i=1} \alpha _{i} T_{i}\). Because W is closed and affine, by Fact 2.3(i), PW is affine. Hence, using Proposition 4.2(i), we obtain that

$$ \mathrm{P}_{W}T_{\mathcal{S}}x = \mathrm{P}_{W} \left( {\sum}^{m}_{i=1} \alpha_{i} T_{i} x \right) = {\sum}^{m}_{i=1} \alpha_{i} \mathrm{P}_{W}T_{i} x = {\sum}^{m}_{i=1} \alpha_{i} \mathrm{P}_{W}x = \mathrm{P}_{W}x. $$

Using \(T_{\mathcal {S}} \in \text {aff}(\mathcal {S})\) again, we know \(\mathrm {P}_{W}x \in W \subseteq \cap ^{m}_{j=1} \text {Fix}T_{j} \subseteq \text {Fix}T_{\mathcal {S}}\). So it is clear that \(T_{\mathcal {S}}\mathrm {P}_{W}x= \mathrm {P}_{W}x\). Then (i) follows easily by induction on k.

(ii): The result comes from Proposition 4.2(iii), Proposition 4.2(iv) and the item (i) above.

(iii): The desired result follows from Proposition 4.2(iii) and from the (ii) & (i) above. □

Remark 4.17

Recall our global assumptions that \(\mathcal {S}=\{ T_{1}, \ldots , T_{m-1}, T_{m} \}\) with \(\cap ^{m}_{j=1} \text {Fix} T_{j} \neq \varnothing \) and that every element of \(\mathcal {S}\) is isometric. So, by Corollary 2.21, for every i ∈{1,…,m}, if Ti≠Id, then Ti is not averaged. Hence, we cannot construct the operator \(T_{\mathcal {S}}\) used in Theorem 4.15(ii) as in Fact 2.9. See also Proposition 5.10 and Lemmas 5.12 and 5.13 below for further examples of \(T_{\mathcal {S}}\).

Remark 4.18

(Relationship to [6]) In the present paper, we study the circumcentered isometry method systematically. We first show that the circumcenter mapping induced by isometries is proper, which makes the circumcentered isometry method well defined and opens the door to any study of circumcentered isometry methods. We then consider the weak, strong, and linear convergence of the circumcentered isometry method. In addition, we provide examples of linearly convergent circumcentered reflection methods in \(\mathbb {R}^{n}\) and some applications of circumcentered reflection methods. We also display performance profiles showing the outstanding performance of two of our new circumcentered reflection methods, without theoretical proofs. This paper plays a fundamental role in our study [6]. In particular, Theorem 4.14(i) and Theorem 4.15(ii) are two principal facts used in some proofs of [6], which is an in-depth study of the linear convergence of circumcentered isometry methods. Indeed, in [6], we first exhibit, for each of the linearly convergent circumcentered reflection methods in \(\mathbb {R}^{n}\) presented in this paper, a corresponding linearly convergent circumcentered isometry method. We then provide two sufficient conditions for the linear convergence of circumcentered isometry methods in Hilbert spaces in which another operator is first applied to the initial point; one of these sufficient conditions is inspired by Proposition 5.18 of this paper. Moreover, we present sufficient conditions for the linear convergence of circumcentered reflection methods in Hilbert space. In addition, we find circumcentered reflection methods attaining the known linear convergence rate of the accelerated symmetric MAP in Hilbert spaces, which explains the dominant performance of the CRMs in the numerical experiments in this paper.

5 Circumcenter Methods Induced by Reflectors

As Lemma 2.23(i) showed, the reflector associated with any closed affine subspace is an isometry. This section is devoted to the study of circumcenter methods induced by reflectors. In the whole section, we assume that \(t \in \mathbb {N} \setminus \{0\}\) and that

$$ U_{1}, \ldots, U_{t}~\text{are closed affine subspaces in}~\mathcal{H}~\text{with}~\cap^{t}_{i=1} U_{i} \neq \varnothing, $$

and set that

Suppose \(\mathcal {S}\) is a finite set such that

We assume that

$$ \mathrm{R}_{U_{i_{r}}}{\cdots} \mathrm{R}_{U_{i_{1}}}~\text{is the representative element of the set}~\mathcal{S}. $$

In order to prove some convergence results on the circumcenter methods induced by reflectors later, we consider, for an affine subspace U, the linear subspace par U parallel to U. We denote

$$ L_{1} :=\text{par} U_{1}, \ldots, L_{t} := \text{par} U_{t}. $$
(5.1)

We set

$$ \mathcal{S}_{L} := \left\{ \mathrm{R}_{L_{i_{r}}}{\cdots} \mathrm{R}_{L_{i_{1}}} ~|~ \mathrm{R}_{U_{i_{r}}}{\cdots} \mathrm{R}_{U_{i_{1}}} \in \mathcal{S} \right\}. $$

Note that if \(\text {Id} \in \mathcal {S}\), then the corresponding element in \(\mathcal {S}_{L}\) is Id.

For example, if \(\mathcal {S} \) = \(\{\text {Id}, \mathrm {R}_{U_{1}}, \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}, \mathrm {R}_{U_{3}}\mathrm {R}_{U_{1}}\}\), then \(\mathcal {S}_{L}\) = \(\{\text {Id}, \mathrm {R}_{L_{1}}, \mathrm {R}_{L_{2}}\mathrm {R}_{L_{1}}, \mathrm {R}_{L_{3}}\mathrm {R}_{L_{1}}\}\).

5.1 Properties of Circumcentered Reflection Methods

Lemma 5.1

\(\cap ^{t}_{i=1} U_{i} \) is closed and affine. Moreover, \(\varnothing \neq \cap ^{t}_{i=1} U_{i} \subseteq \cap _{T \in \mathcal {S}} \text {Fix} T\).

Proof

By the underlying assumptions, \(\cap ^{t}_{i=1} U_{i}\) is closed and affine.

Take an arbitrary but fixed \(\mathrm {R}_{U_{i_{r}}}{\cdots } \mathrm {R}_{U_{i_{1}}} \in \mathcal {S}\). If \(\mathrm {R}_{U_{i_{r}}}{\cdots } \mathrm {R}_{U_{i_{1}}} = \text {Id}\), then \(\cap ^{t}_{i=1} U_{i} \subseteq {\mathscr{H}} = \text {Fix} \text {Id}\). Assume \(\mathrm {R}_{U_{i_{r}}}{\cdots } \mathrm {R}_{U_{i_{1}}} \neq \text {Id}\). Let \(x \in \cap ^{t}_{i=1} U_{i}\). Since (∀j ∈{1,…,t}) \(\cap ^{t}_{i=1} U_{i} \subseteq U_{j} = \text {Fix} \mathrm {R}_{U_{j}}\), we clearly have \(\mathrm {R}_{U_{i_{r}}}\cdots \mathrm {R}_{U_{i_{1}}}x =x\). Hence, \(\cap ^{t}_{i=1} U_{i} \subseteq \cap _{T \in \mathcal {S}} \text {Fix} T\) as required. □

Lemma 5.1 tells us that we may substitute \(\cap ^{t}_{i=1} U_{i} \) for W in all of the results in Section 4. Therefore, the circumcenter methods induced by reflectors can be used for the best approximation problem associated with the intersection \(\cap ^{t}_{i=1} U_{i}\) of finitely many affine subspaces.

Lemma 5.2

Let \(x\in {\mathscr{H}}\) and let \(z \in \cap ^{t}_{i=1} U_{i}\). Then the following hold:

  1. (i)

    \((\forall \mathrm {R}_{U_{i_{r}}}{\cdots } \mathrm {R}_{U_{i_{1}}} \in \mathcal {S})\)\(\mathrm {R}_{U_{i_{r}}}{\cdots } \mathrm {R}_{U_{i_{1}}}x = z + \mathrm {R}_{L_{i_{r}}}{\cdots } \mathrm {R}_{L_{i_{1}}} (x-z)\).

  2. (ii)

    \(\mathcal {S}(x)=z + \mathcal {S}_{L}(x-z)\).

  3. (iii)

    \((\forall k \in \mathbb {N})\)\(CC_{\mathcal {S}}^{k}x = z+ CC_{\mathcal {S}_{L}}^{k}(x-z)\).

Proof

(i): Let \(\mathrm {R}_{U_{i_{r}}}{\cdots } \mathrm {R}_{U_{i_{1}}} \in \mathcal {S}\). For every \(y \in {\mathscr{H}}\) and for every i ∈{1,…,t}, we have \(\mathrm {R}_{U_{i}}y = \mathrm {R}_{z+L_{i}}y =2\mathrm {P}_{z+L_{i}}y - y = 2(z+ \mathrm {P}_{L_{i}}(y-z))-y =z+ (2 \mathrm {P}_{L_{i}}(y-z) - (y-z)) =z + \mathrm {R}_{L_{i}}(y-z)\), where the third and the fifth equalities follow from Fact 2.1; thus

$$ (\forall y \in \mathcal{H}) \quad (\forall i \in \{1, \ldots, t\}) \quad \mathrm{R}_{U_{i}}y = z + \mathrm{R}_{L_{i}}(y-z). $$
(5.2)

Then assume for some k ∈{1,…,r − 1},

$$ \mathrm{R}_{U_{i_{k}}} {\cdots} \mathrm{R}_{U_{i_{1}}}x = z + \mathrm{R}_{L_{i_{k}}}{\cdots} \mathrm{R}_{L_{i_{1}}} (x-z). $$
(5.3)

Now

$$ \begin{array}{@{}rcl@{}} \mathrm{R}_{U_{i_{k+1}}}\mathrm{R}_{U_{i_{k}}}{\cdots} \mathrm{R}_{U_{i_{1}}}x & \stackrel{(5.3)}{=}& \mathrm{R}_{U_{i_{k+1}}} \left( z + \mathrm{R}_{L_{i_{k}}}{\cdots} \mathrm{R}_{L_{i_{1}}} (x-z) \right) \\ &\stackrel{(5.2)}{=}& z + \mathrm{R}_{L_{i_{k+1}}}\mathrm{R}_{L_{i_{k}}}{\cdots} \mathrm{R}_{L_{i_{1}}} (x-z). \end{array} $$

Hence, by induction, we know (i) is true.

(ii): Combining the result proved in (i) above with the definitions of the set-valued operator \(\mathcal {S}\) and \(\mathcal {S}_{L}\), we obtain

$$ \begin{array}{@{}rcl@{}} \mathcal{S}(x) & =& \left\{\mathrm{R}_{U_{i_{r}}}{\cdots} \mathrm{R}_{U_{i_{2}}}\mathrm{R}_{U_{i_{1}}}x ~|~ \mathrm{R}_{U_{i_{r}}}{\cdots} \mathrm{R}_{U_{i_{2}}}\mathrm{R}_{U_{i_{1}}} \in \mathcal{S} \right\} \\ &=& \left\{ z + \mathrm{R}_{L_{i_{r}}}{\cdots} \mathrm{R}_{L_{i_{2}}}\mathrm{R}_{L_{i_{1}}}(x-z) ~|~ \mathrm{R}_{U_{i_{r}}}{\cdots} \mathrm{R}_{U_{i_{2}}}\mathrm{R}_{U_{i_{1}}} \in \mathcal{S} \right\} \\ & =& z + \left\{ \mathrm{R}_{L_{i_{r}}}{\cdots} \mathrm{R}_{L_{i_{2}}}\mathrm{R}_{L_{i_{1}}}(x-z) ~|~ \mathrm{R}_{U_{i_{r}}}{\cdots} \mathrm{R}_{U_{i_{2}}}\mathrm{R}_{U_{i_{1}}} \in \mathcal{S} \right\} \\ & =& z + \mathcal{S}_{L}(x-z). \end{array} $$

(iii): By [4, Proposition 6.3], for every \(K \in \mathcal {P}({\mathscr{H}})\) and \(y \in {\mathscr{H}}\), CC(K + y) = CC(K) + y. Because \(z \in \cap ^{t}_{i=1} U_{i} \subseteq \cap _{T \in \mathcal {S}} \text {Fix}T\), by Definition 2.27,

$$ \begin{array}{@{}rcl@{}} (\forall y \in \mathcal{H})\quad CC_{\mathcal{S}}y = CC(\mathcal{S}(y)) &\stackrel{\text{(ii)}}{=}& CC(z + \mathcal{S}_{L}(y-z)) = z + CC(\mathcal{S}_{L}(y-z))\\ & =& z + CC_{\mathcal{S}_{L}}(y-z). \end{array} $$
(5.4)

Assume for some \(k \in \mathbb {N}\),

$$ (\forall y \in \mathcal{H}) \quad CC_{\mathcal{S}}^{k}y = z+ CC_{\mathcal{S}_{L}}^{k}(y-z). $$
(5.5)

Now

$$ \begin{array}{@{}rcl@{}} CC_{\mathcal{S}}^{k+1}x & =&CC_{\mathcal{S}} \left( CC_{\mathcal{S}}^{k}x \right) \\ & =& CC_{\mathcal{S}} \left( z+ CC_{\mathcal{S}_{L}}^{k}(x-z) \right) \quad (\text{by (5.5)})\\ & =&z + CC_{\mathcal{S}_{L}} \left( z+ CC_{\mathcal{S}_{L}}^{k}(x-z) - z\right) \quad (\text{by (5.4)}) \\ & =& z+ CC_{\mathcal{S}_{L}}^{k+1}(x-z). \end{array} $$

Hence, by induction, we know (iii) is true. □

The following Proposition 5.3 says that the convergence of the circumcenter methods induced by reflectors associated with linear subspaces is equivalent to the convergence of the corresponding circumcenter methods induced by reflectors associated with affine subspaces. In fact, Proposition 5.3 is a generalization of [7, Corollary 3].

Proposition 5.3

Let \(x\in {\mathscr{H}}\) and let \(z \in \cap ^{t}_{i=1} U_{i}\). Then \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) converges to \(\mathrm {P}_{\cap ^{t}_{i=1} U_{i} }x\) (with a linear rate γ ∈ [0,1[) if and only if \((CC_{\mathcal {S}_{L}}^{k}(x-z))_{k \in \mathbb {N}}\) converges to \(\mathrm {P}_{\cap ^{t}_{i=1} L_{i} }(x-z)\) (with a linear rate γ ∈ [0,1[).

Proof

By Lemma 5.2(iii), we know that \((\forall k \in \mathbb {N})\)\(CC_{\mathcal {S}}^{k}x = z+ CC_{\mathcal {S}_{L}}^{k}(x-z)\). Moreover, by Fact 2.1, \(\mathrm {P}_{\cap ^{t}_{i=1} U_{i} }x = \mathrm {P}_{z +\cap ^{t}_{i=1} L_{i} }x = z+ \mathrm {P}_{\cap ^{t}_{i=1} L_{i} }(x-z)\). Hence, the equivalence holds. □

The proof of Proposition 5.5 requires the following result.

Lemma 5.4

Let \(x \in {\mathscr{H}}\) and let \(\mathrm {R}_{U_{i_{r}}}{\cdots } \mathrm {R}_{U_{i_{1}}} \in \mathcal {S}\). Let L1,L2,…,Lt be the closed linear subspaces defined in (5.1). Then \(\mathrm {R}_{U_{i_{r}}}{\cdots } \mathrm {R}_{U_{i_{1}}}x-x \in (\cap ^{t}_{i=1} L_{i})^{\perp }\), that is,

$$ (\forall z \in \cap^{t}_{i=1} L_{i}) \quad \langle \mathrm{R}_{U_{i_{r}}}{\cdots} \mathrm{R}_{U_{i_{1}}}x-x, z\rangle=0. $$

Proof

By Lemma 5.2(i), for every \(z \in \cap ^{t}_{i=1} L_{i}\),

$$ \langle\mathrm{R}_{U_{i_{r}}}{\cdots} \mathrm{R}_{U_{i_{1}}}x-x, z\rangle = \langle z + \mathrm{R}_{L_{i_{r}}}{\cdots} \mathrm{R}_{L_{i_{1}}}(x-z)-x, z\rangle = \langle \mathrm{R}_{L_{i_{r}}}{\cdots} \mathrm{R}_{L_{i_{1}}}(x-z)-(x-z), z\rangle. $$

Hence, it suffices to prove

$$ (\forall y \in \mathcal{H}) \quad (\forall z \in \cap^{t}_{i=1} L_{i}) \quad \langle\mathrm{R}_{L_{i_{r}}}{\cdots} \mathrm{R}_{L_{i_{1}}}y-y, z\rangle = 0. $$

Let \(y \in {\mathscr{H}}\) and \(z\in \cap ^{t}_{i=1} L_{i}\). Take an arbitrary j ∈{1,2,…,t}. By Fact 2.2(i) \(\langle \mathrm {R}_{L_{j}}(y)-y,z\rangle = \langle 2(\mathrm {P}_{L_{j}}-\text {Id} )y,z\rangle = \langle -2\mathrm {P}_{L_{j}^{\perp }}y,z\rangle =0\), which yields that

$$ (\forall w \in \mathcal{H}) \quad \left( \forall d \in \{1,2, \ldots,t\}\right) \quad \langle \mathrm{R}_{L_{d}}(w)-w, z\rangle = 0. $$
(5.6)

Recall \({\prod }^{0}_{j=1}\mathrm {R}_{L_{i_{j}}} =\text {Id}\). So we have

$$ \mathrm{R}_{L_{i_{r}}}\mathrm{R}_{L_{i_{r-1}}}{\cdots} \mathrm{R}_{L_{i_{1}}}(y) -y = {\sum}^{r-1}_{j=0} \left( \mathrm{R}_{L_{i_{j+1}}}\mathrm{R}_{L_{i_{j}}}{\cdots} \mathrm{R}_{L_{i_{1}}}(y)- \mathrm{R}_{L_{i_{j}}}{\cdots} \mathrm{R}_{L_{i_{1}}}(y) \right). $$
(5.7)

Hence,

$$ \begin{array}{@{}rcl@{}} \left\langle\mathrm{R}_{L_{i_{r}}}\mathrm{R}_{L_{i_{r-1}}}{\cdots} \mathrm{R}_{L_{i_{1}}}(y)-y,z\right\rangle &\stackrel{(5.7)}{=} & \left\langle{\sum}^{r-1}_{j=0} \left( \mathrm{R}_{L_{i_{j+1}}}\mathrm{R}_{L_{i_{j}}}{\cdots} \mathrm{R}_{L_{i_{1}}}(y)- \mathrm{R}_{L_{i_{j}}}{\cdots} \mathrm{R}_{L_{i_{1}}}(y) \right),z\right\rangle \\ &=& {\sum}^{r-1}_{j=0} \left\langle \mathrm{R}_{L_{i_{j+1}}}\left( \mathrm{R}_{L_{i_{j}}}{\cdots} \mathrm{R}_{L_{i_{1}}}(y)\right)- \mathrm{R}_{L_{i_{j}}}{\cdots} \mathrm{R}_{L_{i_{1}}}(y), z\right\rangle\\ &\stackrel{\text{(5.6)}}{=} & 0. \end{array} $$

Hence, the proof is complete. □

Proposition 5.5

Assume \(\text {Id} \in \mathcal {S}\). Let L1,L2,…,Lt be the closed linear subspaces defined in (5.1). Let \(x \in {\mathscr{H}}\). Then the following hold:

  1. (i)

    \(CC_{\mathcal {S}}x-x \in (\cap ^{t}_{i=1} L_{i})^{\perp }\), that is, \((\forall z \in \cap ^{t}_{i=1} L_{i})\)\(\langle CC_{\mathcal {S}}x-x,z\rangle =0\).

  2. (ii)

    \((\forall k \in \mathbb {N}) \)\(CC_{\mathcal {S}}^{k}x-x \in (\cap ^{t}_{i=1} L_{i})^{\perp }\), that is,

    $$ (\forall k \in \mathbb{N}) \quad (\forall z \in\cap^{t}_{i=1} L_{i}) \quad \langle CC_{\mathcal{S}}^{k}x-x,z\rangle =0. $$
    (5.8)

Proof

(i): By Theorem 3.3(i), we know that \(CC_{\mathcal {S}}\) is proper. Hence, by Proposition 2.33 and \(\text {Id} \in \mathcal {S}\), there exist \(n \in \mathbb {N}\) and \(\alpha _{1}, \ldots , \alpha _{n} \in \mathbb {R}\) and \(T_{1}, \ldots , T_{n} \in \mathcal {S}\) such that

$$ CC_{\mathcal{S}}x=x+ {\sum}^{n}_{j=1} \alpha_{j} (T_{j}x-x). $$
(5.9)

Let \(z \in \cap ^{t}_{i=1} L_{i}\). Since \(\{T_{1}, \ldots , T_{n}\} \subseteq \mathcal {S}\), by Lemma 5.4, \({\sum }^{n}_{j=1} \alpha _{j} \langle T_{j}x-x,z\rangle =0\). Therefore,

$$ \langle CC_{\mathcal{S}}x-x,z\rangle \stackrel{(5.9)}{=} {\sum}^{n}_{j=1} \alpha_{j} \langle T_{j}x-x,z\rangle = 0. $$

Hence, (i) is true.

(ii): When k = 0, (5.8) is trivial. By (i),

$$ (\forall y \in \mathcal{H}) \quad (\forall z \in \cap^{t}_{i=1} L_{i}) \quad \langle CC_{\mathcal{S}}y-y,z\rangle = 0. $$
(5.10)

Then for every \(k \in \mathbb {N} \setminus \{0\}\), and for every \(z \in \cap ^{t}_{i=1} L_{i}\),

$$ \begin{array}{@{}rcl@{}} \langle CC_{\mathcal{S}}^{k}x-x,z\rangle & =& \left\langle {\sum}^{k-1}_{i=0}\left( CC_{\mathcal{S}}^{i+1}(x)-CC_{\mathcal{S}}^{i}(x) \right),z\right\rangle\\ & =& \left\langle{\sum}^{k-1}_{i=0}\left( CC_{\mathcal{S}}(CC_{\mathcal{S}}^{i}(x))-CC_{\mathcal{S}}^{i}(x) \right),z\right\rangle\\ & =& {\sum}^{k-1}_{i=0} \left\langle CC_{\mathcal{S}}(CC_{\mathcal{S}}^{i}(x))-CC_{\mathcal{S}}^{i}(x),z\right\rangle\\ & \stackrel{\text{(5.10)}}{=}&0. \end{array} $$

Hence, (ii) holds. □

Remark 5.6

Assume \(\text {Id} \in \mathcal {S}\). Let \(x \in {\mathscr{H}}\), let \(z \in \cap ^{t}_{i=1} U_{i}\), and let \(k \in \mathbb {N}\). Then

$$ \begin{array}{@{}rcl@{}} \mathrm{P}_{\cap^{t}_{i=1} U_{i}} x -\mathrm{P}_{\cap^{t}_{i=1} U_{i}} CC_{\mathcal{S}}^{k}x & =& z + \mathrm{P}_{\cap^{t}_{i=1} L_{i}} (x-z) -z-\mathrm{P}_{\cap^{t}_{i=1} L_{i}} (CC_{\mathcal{S}}^{k}(x)-z) \quad (\text{by Fact~2.1})\\ &=& \mathrm{P}_{\cap^{t}_{i=1} L_{i}} (x-z) -\mathrm{P}_{\cap^{t}_{i=1} L_{i}} CC_{\mathcal{S}_{L}}^{k}(x-z) \quad (\text{ by Lemma~5.2(iii)})\\ & =& \mathrm{P}_{\cap^{t}_{i=1} L_{i}} ((x-z) - CC_{\mathcal{S}_{L}}^{k}(x-z)) =0 \quad (\text{by Proposition~5.5(ii)}). \end{array} $$

In fact, we proved \((\forall x \in {\mathscr{H}} )\)\(\mathrm {P}_{\cap ^{t}_{i=1} U_{i}} CC_{\mathcal {S}}^{k}x =\mathrm {P}_{\cap ^{t}_{i=1} U_{i}} x \) which is a special case of Proposition 4.2(iii).

In the remainder of this subsection, we consider cases when the initial points of circumcentered isometry methods are drawn from special sets.

Lemma 5.7

Let x be in \({\mathscr{H}}\). Then the following hold:

  1. (i)

    Suppose \(x \in \text {aff}(\cup ^{t}_{i=1} U_{i})\). Then \(\text {aff} \mathcal {S}(x) \subseteq \text {aff}(\cup ^{t}_{i=1} U_{i})\) and \((\forall k \in \mathbb {N})\)\(CC_{\mathcal {S}}^{k}x \in \text {aff}(\cup ^{t}_{i=1} U_{i})\).

  2. (ii)

    Suppose \(x \in \text {span}(\cup ^{t}_{i=1} U_{i})\). Then \(\text {aff} \mathcal {S}(x) \subseteq \text {span} \mathcal {S}(x) \subseteq \text {span}(\cup ^{t}_{i=1} U_{i})\) and \((\forall k \in \mathbb {N})\)\(CC_{\mathcal {S}}^{k}x \in \text {span}(\cup ^{t}_{i=1} U_{i})\).

Proof

(i): Let \(\mathrm {R}_{U_{i_{r}}}{\cdots } \mathrm {R}_{U_{i_{1}}}\) be an arbitrary but fixed element in \(\mathcal {S}\). If r = 0, \(\mathrm {R}_{U_{i_{r}}}{\cdots } \mathrm {R}_{U_{i_{1}}}x =x \in \text {aff}(\cup ^{t}_{i=1} U_{i})\). Assume r ≥ 1. Since i1 ∈{1,…,t}, \( \mathrm {P}_{U_{i_{1}}}x \in \text {aff}(\cup ^{t}_{i=1} U_{i})\). So

$$ \mathrm{R}_{U_{i_{1}}} x = 2 \mathrm{P}_{U_{i_{1}}}x - x \in \text{aff}(\cup^{t}_{i=1} U_{i}). $$

Assume for some j ∈{1,…,r − 1},

$$ \mathrm{R}_{U_{i_{j}}}{\cdots} \mathrm{R}_{U_{i_{1}}}x \in \text{aff}(\cup^{t}_{i=1} U_{i}). $$

Now, since \(i_{j+1} \in \{1,\ldots ,t\}\), we have \(\mathrm {P}_{U_{i_{j+1}}}(\mathrm {R}_{U_{i_{j}}}{\cdots } \mathrm {R}_{U_{i_{1}}}x) \in \text {aff}(\cup ^{t}_{i=1} U_{i})\). Hence,

$$ \mathrm{R}_{U_{i_{j+1}}}\mathrm{R}_{U_{i_{j}}}{\cdots} \mathrm{R}_{U_{i_{1}}}x = 2 \mathrm{P}_{U_{i_{j+1}}}\left( \mathrm{R}_{U_{i_{j}}}{\cdots} \mathrm{R}_{U_{i_{1}}}x\right) - \mathrm{R}_{U_{i_{j}}}{\cdots} \mathrm{R}_{U_{i_{1}}}x \in \text{aff}(\cup^{t}_{i=1} U_{i}). $$

Hence, we have inductively proved \(\mathrm {R}_{U_{i_{r}}}{\cdots } \mathrm {R}_{U_{i_{1}}}x \in \text {aff}(\cup ^{t}_{i=1} U_{i})\).

Since \(\mathrm {R}_{U_{i_{r}}}{\cdots } \mathrm {R}_{U_{i_{1}}}x \in \mathcal {S}(x)\) is chosen arbitrarily, we conclude that \(\mathcal {S}(x) \subseteq \text {aff}(\cup ^{t}_{i=1} U_{i})\) which in turn yields \(\text {aff} \mathcal {S}(x) \subseteq \text {aff}(\cup ^{t}_{i=1} U_{i})\).

Moreover, by Theorem 3.3(i), \(CC_{\mathcal {S}}x \in \text {aff} \mathcal {S}(x) \subseteq \text {aff}(\cup ^{t}_{i=1} U_{i})\). Therefore, an easy inductive argument yields \((\forall k \in \mathbb {N})\)\(CC_{\mathcal {S}}^{k}x \in \text {aff}(\cup ^{t}_{i=1} U_{i})\).

(ii): Using a technique similar to the one shown in the proof of (i), we see that \(x \in \text {span}(\cup ^{t}_{i=1} U_{i})\) implies \(\mathcal {S}(x) \subseteq \text {span}(\cup ^{t}_{i=1} U_{i})\). The remaining part of the proof is similar to the proof of (i), so we omit it. □

Corollary 5.8

Assume U1,…,Ut are closed linear subspaces in \({\mathscr{H}}\). Then the following hold:

  1. (i)

    \(CC_{\mathcal {S}} \mathrm {P}_{(\cap ^{t}_{i=1}U_{i})^{\perp }}= CC_{\mathcal {S}} - \mathrm {P}_{\cap ^{t}_{i=1}U_{i}}= \mathrm {P}_{(\cap ^{t}_{i=1}U_{i})^{\perp }} CC_{\mathcal {S}}\).

  2. (ii)

    Let \(x \in (\cap ^{t}_{i=1} U_{i})^{\perp }\). Then \((\forall k \in \mathbb {N})\)\(CC_{\mathcal {S}}^{k}x \in (\cap ^{t}_{i=1} U_{i})^{\perp }\).

Proof

(i): Let \(x \in {\mathscr{H}}\). By Fact 2.2(i), we get \(\mathrm {P}_{(\cap ^{t}_{i=1}U_{i})^{\perp }} = \text {Id} - \mathrm {P}_{\cap ^{t}_{i=1}U_{i}}\). By Lemma 5.1, \(- \mathrm {P}_{\cap ^{t}_{i=1}U_{i}}x \in \cap ^{t}_{i=1}U_{i} \subseteq \cap ^{t}_{j=1} \text {Fix} T_{j}\). Applying Corollary 3.6(ii) with \(z =- \mathrm {P}_{\cap ^{t}_{i=1}U_{i}}x\), we obtain \(CC_{\mathcal {S}}(x - \mathrm {P}_{\cap ^{t}_{i=1}U_{i}}x) =CC_{\mathcal {S}}x - \mathrm {P}_{\cap ^{t}_{i=1}U_{i}}x\). Hence,

$$ CC_{\mathcal{S}}\left( \mathrm{P}_{(\cap^{t}_{i=1}U_{i})^{\perp}}x\right)= CC_{\mathcal{S}}\left( x - \mathrm{P}_{\cap^{t}_{i=1}U_{i}}x\right) = CC_{\mathcal{S}}x - \mathrm{P}_{\cap^{t}_{i=1}U_{i}}x. $$
(5.11)

On the other hand, substituting \(W= \cap ^{t}_{i=1}U_{i}\) in Proposition 4.2(iii), we obtain that

$$ \mathrm{P}_{(\cap^{t}_{i=1}U_{i})^{\perp}} (CC_{\mathcal{S}}x) = CC_{\mathcal{S}}x - \mathrm{P}_{\cap^{t}_{i=1}U_{i}} CC_{\mathcal{S}}x = CC_{\mathcal{S}}x - \mathrm{P}_{\cap^{t}_{i=1}U_{i}}x. $$
(5.12)

Thus, (5.11) and (5.12) yield

$$ CC_{\mathcal{S}} \mathrm{P}_{(\cap^{t}_{i=1}U_{i})^{\perp}}= CC_{\mathcal{S}} - \mathrm{P}_{\cap^{t}_{i=1}U_{i}}= \mathrm{P}_{(\cap^{t}_{i=1}U_{i})^{\perp}} CC_{\mathcal{S}}. $$

(ii): By (i), \(CC_{\mathcal {S}}x = CC_{\mathcal {S}} \mathrm {P}_{(\cap ^{t}_{i=1} U_{i})^{\perp }} x = \mathrm {P}_{(\cap ^{t}_{i=1} U_{i})^{\perp }} CC_{\mathcal {S}}x \in (\cap ^{t}_{i=1} U_{i})^{\perp }\), which implies that

$$ (\forall y \in (\cap^{t}_{i=1} U_{i})^{\perp}) \quad CC_{\mathcal{S}} y \in (\cap^{t}_{i=1} U_{i})^{\perp}. $$

Hence, we obtain (ii) by induction. □

The following example tells us that in Corollary 5.8(i), the condition “U1,…,Ut are linear subspaces in \({\mathscr{H}}\)” is indeed necessary.

Example 5.9

Assume \({\mathscr{H}} = \mathbb {R}^{2}\) and \(U_{1}:=\{(x_{1},x_{2}) \in \mathbb {R}^{2} ~|~ x_{2}=1\}\) and \(U_{2} := \{ (x_{1},x_{2}) \in \mathbb {R}^{2} ~|~ x_{2}=x_{1}+1\}\). Assume \(\mathcal {S} =\{\text {Id}, \mathrm {R}_{U_{1}}, \mathrm {R}_{U_{2}}\}\). Let x := (1,0). Since \(U_{1} \cap U_{2} = \{(0,1)\}\) and \((U_{1} \cap U_{2})^{\perp }=\{(x_{1},x_{2}) \in \mathbb {R}^{2} ~|~ x_{2}=0\}\), we have

$$ CC_{\mathcal{S}} \mathrm{P}_{(U_{1} \cap U_{2})^{\perp}}x =(0,1) \neq (0,0)= CC_{\mathcal{S}}x - \mathrm{P}_{U_{1} \cap U_{2}}x= \mathrm{P}_{(U_{1} \cap U_{2})^{\perp}} CC_{\mathcal{S}}x. $$

5.2 Linear Convergence of Circumcentered Reflection Methods

This subsection is motivated by [8, Theorem 3.3]. In particular, [8, Theorem 3.3] is the special case of Proposition 5.10 below in which \(\{\text {Id}, \mathrm {R}_{U_{1}}, \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}, \ldots , \mathrm {R}_{U_{t}}\mathrm {R}_{U_{t-1}}{\cdots } \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\} = \mathcal {S}\) and U1,…,Ut are linear subspaces. The operator \(T_{\mathcal {S}}\) defined in Proposition 5.10 below is the operator A defined in [8, Lemma 2.1].

Proposition 5.10

Assume that \({\mathscr{H}}=\mathbb {R}^{n}\) and that

$$ \{ \text{Id}, \mathrm{R}_{U_{1}}, \mathrm{R}_{U_{2}}\mathrm{R}_{U_{1}}, \ldots, \mathrm{R}_{U_{t}}\mathrm{R}_{U_{t-1}}{\cdots} \mathrm{R}_{U_{2}}\mathrm{R}_{U_{1}}\} \subseteq \mathcal{S}. $$

Let L1,…,Lt be the closed linear subspaces defined in (5.1). Define \(T_{\mathcal {S}} : \mathbb {R}^{n} \to \mathbb {R}^{n}\) by \(T_{\mathcal {S}} := \frac {1}{t} {\sum }^{t}_{i=1} T_{i}\), where \(T_{1} := \frac {1}{2}(\text {Id} + \mathrm {P}_{L_{1}})\) and (∀i ∈{2,…,t}) \(T_{i} := \frac {1}{2} (\text {Id} + \mathrm {P}_{L_{i}} \mathrm {R}_{L_{i-1}} {\cdots } R_{L_{1}})\). Let \(x \in {\mathscr{H}}\). Then \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) converges to \(\mathrm {P}_{\cap ^{t}_{i=1} U_{i} }x\) with a linear rate \(\|T_{\mathcal {S}} \mathrm {P}_{(\cap ^{t}_{i=1} L_{i})^{\perp }}\|\in [0,1[ \).

Proof

Now

$$ \begin{array}{@{}rcl@{}} T_{1} & =& \frac{1}{2}(\text{Id} + \mathrm{P}_{L_{1}})= \frac{1}{2} \left( \text{Id} +\frac{\text{Id} + \mathrm{R}_{L_{1}}}{2}\right) =\frac{3}{4}\text{Id} +\frac{1}{4} R_{L_{1}} \\ & \in& \text{aff}\{ \text{Id}, \mathrm{R}_{L_{1}}, \mathrm{R}_{L_{2}}\mathrm{R}_{L_{1}}, \ldots, \mathrm{R}_{L_{t}}\mathrm{R}_{L_{t-1}}{\cdots} \mathrm{R}_{L_{2}}\mathrm{R}_{L_{1}}\}, \end{array} $$

and for every i ∈{2,…,t},

$$ \begin{array}{@{}rcl@{}} T_{i} & =& \frac{1}{2} (\text{Id} + \mathrm{P}_{L_{i}} \mathrm{R}_{L_{i-1}} {\cdots} \mathrm{R}_{L_{1}}) \\ & =& \frac{1}{2} \left( \text{Id} + \left( \frac{\mathrm{R}_{L_{i}} + \text{Id} }{2}\right) \mathrm{R}_{L_{i-1}} {\cdots} \mathrm{R}_{L_{1}} \right) \\ & =&\frac{1}{2} \text{Id} + \frac{1}{4} \mathrm{R}_{L_{i}}\mathrm{R}_{L_{i-1}} {\cdots} \mathrm{R}_{L_{1}} + \frac{1}{4} \mathrm{R}_{L_{i-1}} {\cdots} \mathrm{R}_{L_{1}} \\ & \in& \text{aff}\{\text{Id}, \mathrm{R}_{L_{1}}, \mathrm{R}_{L_{2}}\mathrm{R}_{L_{1}}, \ldots, \mathrm{R}_{L_{t}}\mathrm{R}_{L_{t-1}}{\cdots} \mathrm{R}_{L_{2}}\mathrm{R}_{L_{1}}\}, \end{array} $$

which yield that

$$ T_{\mathcal{S}}=\frac{1}{t} {\sum}^{t}_{i=1} T_{i} \in \text{aff}\{\text{Id}, \mathrm{R}_{L_{1}}, \mathrm{R}_{L_{2}}\mathrm{R}_{L_{1}}, \ldots, \mathrm{R}_{L_{t}}\mathrm{R}_{L_{t-1}}{\cdots} \mathrm{R}_{L_{2}}\mathrm{R}_{L_{1}}\} \subseteq \text{aff} (\mathcal{S}_{L}). $$

Using [8, Lemma 2.1(i)], we know that \(T_{\mathcal {S}}\) is linear and \(\frac {1}{2}\)-averaged, and by [8, Lemma 2.1(ii)], \(\text {Fix} T_{\mathcal {S}} = \cap ^{t}_{i=1} L_{i}\). Hence, by Theorem 4.15(ii) and Lemma 5.1, we obtain that for every \(y \in {\mathscr{H}}\), \( (CC_{\mathcal {S}_{L}}^{k}y)_{k \in \mathbb {N}}\) converges to \(\mathrm {P}_{\cap ^{t}_{i=1} L_{i} }y\) with a linear rate \(\|T_{\mathcal {S}} \mathrm {P}_{(\cap ^{t}_{i=1} L_{i})^{\perp } }\|\in [0,1[ \). Therefore, the desired result follows from Proposition 5.3. □
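The linear rate \(\|T_{\mathcal {S}} \mathrm {P}_{(\cap ^{t}_{i=1} L_{i})^{\perp }}\|\) of Proposition 5.10 is a plain operator norm and can therefore be evaluated numerically. The following sketch (hypothetical data, not part of the paper) takes three linear subspaces of \(\mathbb {R}^{30}\) sharing one known direction, so that their intersection is known, builds the operator \(T_{\mathcal {S}}\) of Proposition 5.10 for t = 3, and prints the rate.

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 30, 3

def proj(A):
    # Orthogonal projector onto the column space of A (full column rank assumed).
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

# Linear subspaces L1, L2, L3 sharing the direction c, so that (generically)
# their intersection is span{c}.
c = rng.standard_normal((n, 1))
P = [proj(np.hstack([c, rng.standard_normal((n, 2))])) for _ in range(t)]
R = [2 * Pi - np.eye(n) for Pi in P]

# T_S of Proposition 5.10: T_1 = (Id + P_{L1})/2 and
# T_i = (Id + P_{Li} R_{L(i-1)} ... R_{L1})/2 for i >= 2.
I = np.eye(n)
Ts = [0.5 * (I + P[0])]
comp = R[0]
for i in range(1, t):
    Ts.append(0.5 * (I + P[i] @ comp))
    comp = R[i] @ comp
T_S = sum(Ts) / t

P_perp = I - proj(c)                        # projector onto (∩ L_i)^⊥
print(np.linalg.norm(T_S @ P_perp, 2))      # the linear rate of Proposition 5.10, < 1
```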

Remark 5.11

In fact, [8, Lemma 2.1(ii)] states that \(\text {Fix} T_{\mathcal {S}} = \cap ^{t}_{i=1} L_{i}\). In the proof of [8, Lemma 2.1(ii)], the authors claimed that “it is easy to see that FixTi = Li”. We provide more details here. For every i ∈{1,…,t}, by [3, Proposition 4.49], we know that \(\text {Fix} T_{i} =\text {Fix} \mathrm {P}_{L_{i}} \cap \text {Fix} \mathrm {R}_{L_{i-1}}\cdots \mathrm {R}_{L_{1}} \subseteq L_{i}\). As the proof of [8, Lemma 2.1(ii)] showed that \(\text {Fix} T_{\mathcal {S}} \subseteq \cap ^{t}_{i=1} \text {Fix} T_{i}\), we get that \(\text {Fix} T_{\mathcal {S}} \subseteq \cap ^{t}_{i=1} L_{i}\). On the other hand, by the definition of \(T_{\mathcal {S}} \), we have \(\cap ^{t}_{i=1} L_{i} \subseteq \text {Fix} T_{\mathcal {S}}\). Altogether, \(\text {Fix} T_{\mathcal {S}} = \cap ^{t}_{i=1} L_{i}\), which implies that [8, Lemma 2.1(ii)] is true.

The idea of the proofs of the following two lemmas comes from [8, Lemma 2.1].

Lemma 5.12

Assume that \({\mathscr{H}}=\mathbb {R}^{n}\) and that \(\{\text {Id}, \mathrm {R}_{U_{1}},\ldots , \mathrm {R}_{U_{t-1}},\mathrm {R}_{U_{t}}\} \subseteq \mathcal {S}\). Let L1,…,Lt be the closed linear subspaces defined in (5.1). Define the operator \(T_{\mathcal {S}}: \mathbb {R}^{n} \to \mathbb {R}^{n}\) as \(T_{\mathcal {S}} :=\frac {1}{t} {\sum }^{t}_{i=1}\mathrm {P}_{L_{i}}\). Then the following hold:

  1. (i)

    \(T_{\mathcal {S}} \in \text {aff} (\mathcal {S}_{L})\).

  2. (ii)

    \(T_{\mathcal {S}}\) is linear and firmly nonexpansive.

  3. (iii)

    \(\text {Fix} T_{\mathcal {S}}=\cap ^{t}_{i=1} L_{i} = \cap _{F \in \mathcal {S}_{L}} \text {Fix} F\).

Proof

(i): Now (∀i ∈{1,…,t}), \(\mathrm {P}_{L_{i}}=\frac {\text {Id} + \mathrm {R}_{L_{i}}} {2}\), so

$$ T_{\mathcal{S}}=\frac{1}{t} {\sum}^{t}_{i=1}\mathrm{P}_{L_{i}} = \frac{1}{t} {\sum}^{t}_{i=1} \frac{\text{Id} + \mathrm{R}_{L_{i}}} {2} \in \text{aff}\{\text{Id}, \mathrm{R}_{L_{1}},\ldots, \mathrm{R}_{L_{t-1}},\mathrm{R}_{L_{t}}\} \subseteq \text{aff}(\mathcal{S}_{L}). $$

(ii): Let i ∈{1,…,t}. Because \(\mathrm {P}_{L_{i}}\) is firmly nonexpansive, it is \(\frac {1}{2}\)-averaged. Using Fact 2.9, we know \(T_{\mathcal {S}}\) is \(\frac {1}{2}\)-averaged, that is, it is firmly nonexpansive. In addition, because (∀i ∈{1,…,t}) Li is a linear subspace, each \(\mathrm {P}_{L_{i}}\) is linear, and hence \(T_{\mathcal {S}}\) is linear.

(iii): Every projection is firmly nonexpansive, hence quasinonexpansive. Therefore, the result follows from [3, Proposition 4.47] and Theorem 4.15(i). □

Lemma 5.13

Assume that \({\mathscr{H}}=\mathbb {R}^{n}\) and that \(\{\text {Id}, \mathrm {R}_{U_{1}},\ldots , \mathrm {R}_{U_{t-1}},\mathrm {R}_{U_{t}}\} \subseteq \mathcal {S}\). Let L1,…,Lt be the closed linear subspaces defined in (5.1). Define the operator \(T_{\mathcal {S}}: \mathbb {R}^{n} \to \mathbb {R}^{n}\) by \(T_{\mathcal {S}} :=\frac {1}{t} {\sum }^{t}_{i=1}T_{i}\), where (∀i ∈{1,2,…,t}) \(T_{i} := \frac {1}{2} (\text {Id} +\mathrm {P}_{L_{i}})\). Then

  1. (i)

    \(T_{\mathcal {S}} \in \text {aff}(\mathcal {S}_{L})\).

  2. (ii)

    \(T_{\mathcal {S}}\) is linear and firmly nonexpansive.

  3. (iii)

    \(\text {Fix} T_{\mathcal {S}}=\cap ^{t}_{i=1} L_{i} = \cap _{F \in \mathcal {S}_{L}} \text {Fix} F\).

Proof

(i): Now for every i ∈{1,…,t}, \(T_{i}=\frac {1}{2} (\text {Id} +\mathrm {P}_{L_{i}})=\frac {1}{2} \left (\text {Id} +\frac {\text {Id} + \mathrm {R}_{L_{i}}}{2}\right ) = \frac {3}{4}\text {Id} +\frac {1}{4} \mathrm {R}_{L_{i}}\). Hence,

$$ T_{\mathcal{S}}=\frac{1}{t} {\sum}^{t}_{i=1}T_{i} = \frac{1}{t} {\sum}^{t}_{i=1} \left( \frac{3}{4}\text{Id} +\frac{1}{4} \mathrm{R}_{L_{i}}\right) \in \text{aff}\{\text{Id}, \mathrm{R}_{L_{1}}, \mathrm{R}_{L_{2}}, \ldots, \mathrm{R}_{L_{t}}\} \subseteq \text{aff}(\mathcal{S}_{L}). $$

The proofs for (ii) and (iii) are similar to the corresponding parts of the proof in Lemma 5.12. □

Proposition 5.14

Assume that \({\mathscr{H}}= \mathbb {R}^{n}\) and \(\{\text {Id}, \mathrm {R}_{U_{1}}, \ldots , \mathrm {R}_{U_{t-1}},\mathrm {R}_{U_{t}}\} \subseteq \mathcal {S}\). Then for every \(x \in {\mathscr{H}}\), \( (CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) converges to \( \mathrm {P}_{\cap ^{t}_{i=1} U_{i} }x\) with a linear rate \(\big \|(\frac {1}{t} {\sum }^{t}_{i=1}\mathrm {P}_{L_{i}})\mathrm {P}_{(\cap ^{t}_{i=1} L_{i})^{\perp }}\big \|\).

Proof

Combining Lemma 5.12 and Theorem 4.15(ii), we know that for every \(y \in {\mathscr{H}}\), \((CC_{\mathcal {S}_{L}}^{k}y)_{k \in \mathbb {N}}\) converges to \(\mathrm {P}_{ \cap ^{t}_{i=1} L_{i} }y\) with a linear rate \(\big \|(\frac {1}{t} {\sum }^{t}_{i=1}\mathrm {P}_{L_{i}})\mathrm {P}_{(\cap ^{t}_{i=1} L_{i})^{\perp }}\big \|\).

Hence, the required result comes from Proposition 5.3. □

Proposition 5.15

Assume that \({\mathscr{H}}=\mathbb {R}^{n}\) and \(\{\text {Id}, \mathrm {R}_{U_{1}}, \mathrm {R}_{U_{2}}, \ldots , \mathrm {R}_{U_{t}}\} \subseteq \mathcal {S}\). Denote \(T_{\mathcal {S}} := \frac {1}{t} {\sum }^{t}_{i=1}T_{i}\) where (∀i ∈{1,2,…,t}) \(T_{i} := \frac {1}{2} (\text {Id} +\mathrm {P}_{L_{i}})\). Let \(x \in \mathbb {R}^{n}\). Then \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) linearly converges to \(\mathrm {P}_{\cap ^{t}_{i=1} U_{i} }x\) with a linear rate \(\big \|T_{\mathcal {S}}\mathrm {P}_{(\cap ^{t}_{i=1} L_{i})^{\perp }}\big \|\).

Proof

Using a method similar to that used in the proof of Proposition 5.14, together with Lemma 5.13 and Theorem 4.15(ii), we obtain the required result. □

Clearly, we can take \(\mathcal {S}=\{\text {Id}, \mathrm {R}_{U_{1}}, \mathrm {R}_{U_{2}}, \ldots , \mathrm {R}_{U_{t}}\}\) in Propositions 5.14 and 5.15. In addition, Propositions 5.14 and 5.15 tell us that for different \(T_{\mathcal {S}} \in \text {aff}(\mathcal {S}_{L})\), we may obtain different linear convergence rates of \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\).
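For instance, the following small sketch (hypothetical NumPy data, not part of the paper; two linear subspaces of \(\mathbb {R}^{50}\) sharing one known direction so that their intersection is known) evaluates the two rates appearing in Propositions 5.14 and 5.15 for t = 2; in general they differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

def proj(A):
    Q, _ = np.linalg.qr(A)      # orthonormal basis of the column space
    return Q @ Q.T

# L1, L2 share the direction c, so (generically) L1 ∩ L2 = span{c}.
c = rng.standard_normal((n, 1))
P1 = proj(np.hstack([c, rng.standard_normal((n, 3))]))
P2 = proj(np.hstack([c, rng.standard_normal((n, 3))]))
P_perp = np.eye(n) - proj(c)                # projector onto (L1 ∩ L2)^⊥

I = np.eye(n)
T_514 = 0.5 * (P1 + P2)                                   # T_S of Proposition 5.14
T_515 = 0.5 * (0.5 * (I + P1) + 0.5 * (I + P2))           # T_S of Proposition 5.15

print(np.linalg.norm(T_514 @ P_perp, 2))    # two different linear rates,
print(np.linalg.norm(T_515 @ P_perp, 2))    # both strictly less than 1
```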

5.3 Accelerating the Douglas–Rachford Method

In this subsection, we consider the case when t = 2.

Lemma 5.16

Let L1, L2 be the closed linear subspaces defined in (5.1), and let \(z \in L_{1} + L_{2}\). Denote by \(T:=T_{L_{2},L_{1}}\) the operator defined in Definition 2.14. Assume \(L_{1} \cap L_{2} \subseteq \cap _{F \in \mathcal {S}_{L}} \text {Fix} F\). Then

$$ (\forall k \in \mathbb{N}) \quad \mathrm{P}_{L_{1}\cap L_{2}}(z) = \mathrm{P}_{L_{1}\cap L_{2}}\left( CC_{\mathcal{S}_{L}}^{k}z\right) = \mathrm{P}_{\text{Fix}T}\left( CC_{\mathcal{S}_{L}}^{k}z\right). $$

Proof

Using Lemma 5.7(ii), we get \((CC_{\mathcal {S}_{L}}^{k}z )_{k \in \mathbb {N}} \subseteq \text {span}(L_{1} \cup L_{2})= L_{1} +L_{2}\). Combining Lemma 5.1, Proposition 4.2(iii) (by taking W = L1L2) with Lemma 2.18, we obtain that \((\forall k \in \mathbb {N})\)\( \mathrm {P}_{\text {Fix} T}z = \mathrm {P}_{L_{1}\cap L_{2}}z= \mathrm {P}_{L_{1}\cap L_{2}}(CC_{\mathcal {S}_{L}}^{k}z) = \mathrm {P}_{\text {Fix} T}(CC_{\mathcal {S}_{L}}^{k}z)\). □

Corollary 5.17

Let L1, L2 be the closed linear subspaces defined in (5.1). Assume \(L_{1} \cap L_{2} \subseteq \cap _{F \in \mathcal {S}_{L}} \text {Fix} F\). Let \(x \in {\mathscr{H}}\). Let K be a closed linear subspace of \({\mathscr{H}}\) such that

$$ L_{1} \cap L_{2} \subseteq K \subseteq L_{1}+L_{2}. $$

Denote by \(T:=T_{L_{2},L_{1}}\) the operator defined in Definition 2.14. Then

$$ \begin{array}{@{}rcl@{}} (\forall k \in \mathbb{N}) \quad \mathrm{P}_{L_{1}\cap L_{2}}x &=& \mathrm{P}_{\text{Fix} T}\mathrm{P}_{K}x =\mathrm{P}_{L_{1}\cap L_{2}}\mathrm{P}_{K}x\\ &=&\mathrm{P}_{L_{1}\cap L_{2}}\left( CC_{\mathcal{S}_{L}}^{k}\mathrm{P}_{K}x\right) = \mathrm{P}_{\text{Fix}T}\left( CC_{\mathcal{S}_{L}}^{k}\mathrm{P}_{K}x\right). \end{array} $$

Proof

Because \(\mathrm {P}_{K}x \in K \subseteq L_{1}+L_{2}\), Lemma 2.19 implies that

$$ \mathrm{P}_{L_{1}\cap L_{2}}x =\mathrm{P}_{L_{1}\cap L_{2}}\mathrm{P}_{K}x = \mathrm{P}_{\text{Fix} T}\mathrm{P}_{K}x. $$

Applying Lemma 5.16 with z = PKx, we get the desired result. □

Using Corollary 5.17, Proposition 4.2(iv), Facts 2.16 and 2.17, and an idea similar to the proof of [7, Theorem 1], we obtain the following more general result, which is motivated by [7, Theorem 1]. In fact, Proposition 5.19(i) reduces to [7, Theorem 1] when \({\mathscr{H}}=\mathbb {R}^{n}\) and \(\mathcal {S} = \{\text {Id}, \mathrm {R}_{U_{1}}, \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}} \}\).

Proposition 5.18

Let L1, L2 be the closed linear subspaces defined in (5.1). Assume \(L_{1} \cap L_{2} \subseteq \cap _{F \in \mathcal {S}_{L}} \text {Fix} F\). Let K be a closed affine subspace of \({\mathscr{H}}\) such that for KL = parK,

$$ L_{1}\cap L_{2} \subseteq K_{L} \subseteq L_{1}+L_{2}. $$

Denote by \(T:=T_{U_{2},U_{1}}\) and \(T_{L}:=T_{L_{2},L_{1}}\) the operators defined in Definition 2.14, and denote by cF the constant c(L1,L2) defined in Definition 2.15. Assume there exists \(d \in \mathbb {N} \setminus \{0\}\) such that \(T^{d} \in \text {aff} \mathcal {S}\). Let \(x \in {\mathscr{H}}\). Then

$$ (\forall k \in \mathbb{N}) \quad \|CC_{\mathcal{S}}^{k}\mathrm{P}_{K}x - \mathrm{P}_{U_{1}\cap U_{2}}x\| \leq (c_{F})^{d k} \|\mathrm{P}_{K}x - \mathrm{P}_{U_{1}\cap U_{2}}x\|. $$

Proof

By definition, \(T^{d} \in \text {aff} \mathcal {S}\) means that \({T_{L}^{d}} \in \text {aff} \mathcal {S}_{L}\). Using Corollary 5.17, we get

$$ \begin{array}{@{}rcl@{}} (\forall n \in \mathbb{N}) \quad \mathrm{P}_{L_{1}\cap L_{2}}x &=& \mathrm{P}_{\text{Fix} T_{L}} \mathrm{P}_{K_{L}}x =\mathrm{P}_{L_{1}\cap L_{2}}\mathrm{P}_{K_{L}}x\\ &=&\mathrm{P}_{L_{1}\cap L_{2}}\left( CC_{\mathcal{S}_{L}}^{n}\mathrm{P}_{K_{L}}x\right) = \mathrm{P}_{\text{Fix} T_{L}}\left( CC_{\mathcal{S}_{L}}^{n}\mathrm{P}_{K_{L}}x\right). \end{array} $$
(5.13)

Since \({T^{d}_{L}} \in \text {aff} \mathcal {S}_{L}\), Proposition 4.2(iv) implies that

$$ (\forall y \in \mathcal{H}) \quad \|CC_{\mathcal{S}_{L}}(y)- \mathrm{P}_{L_{1} \cap L_{2}} y\| \leq \|{T^{d}_{L}}(y)- \mathrm{P}_{L_{1} \cap L_{2}} y\|. $$
(5.14)

Using Fact 2.17, we get

$$ (\forall y \in \mathcal{H}) \quad \|{T^{d}_{L}}y - \mathrm{P}_{\text{Fix} T_{L}}y\| \leq {c^{d}_{F}} \|y - \mathrm{P}_{\text{Fix} T_{L}}y\|. $$
(5.15)

If k = 0, then the result is trivial. Thus, we assume that for some k ≥ 0, we have

$$ \|CC_{\mathcal{S}_{L}}^{k}\mathrm{P}_{K_{L}}x - \mathrm{P}_{L_{1}\cap L_{2}}x\| \leq (c_{F})^{d k} \|\mathrm{P}_{K_{L}}x - \mathrm{P}_{L_{1}\cap L_{2}}x\|. $$
(5.16)

Then

$$ \begin{array}{@{}rcl@{}} \|CC_{\mathcal{S}_{L}}^{k+1}\mathrm{P}_{K_{L}}x - \mathrm{P}_{L_{1}\cap L_{2}}x\| & \stackrel{(5.13)}{=}&\|CC_{\mathcal{S}_{L}}(CC_{\mathcal{S}_{L}}^{k}\mathrm{P}_{K_{L}}x) - \mathrm{P}_{L_{1}\cap L_{2}}(CC_{\mathcal{S}_{L}}^{k}\mathrm{P}_{K_{L}}x)\|\\ & \stackrel{(5.14)}{\leq}& \|{T_{L}^{d}} (CC_{\mathcal{S}_{L}}^{k}\mathrm{P}_{K_{L}}x) - \mathrm{P}_{L_{1}\cap L_{2}}(CC_{\mathcal{S}_{L}}^{k}\mathrm{P}_{K_{L}}x)\| \\ & \stackrel{(5.13)}{=}& \|{T^{d}_{L}} (CC_{\mathcal{S}_{L}}^{k}\mathrm{P}_{K_{L}}x) - \mathrm{P}_{\text{Fix} T_{L}}(CC_{\mathcal{S}_{L}}^{k}\mathrm{P}_{K_{L}}x)\| \\ & \stackrel{(5.15)}{\leq}& {c^{d}_{F}} \|CC_{\mathcal{S}_{L}}^{k}\mathrm{P}_{K_{L}}x - \mathrm{P}_{\text{Fix}T_{L}} (CC_{\mathcal{S}_{L}}^{k}\mathrm{P}_{K_{L}}x)\| \\ & \stackrel{(5.13)}{=}& {c^{d}_{F}} \|CC_{\mathcal{S}_{L}}^{k}\mathrm{P}_{K_{L}}x - \mathrm{P}_{L_{1}\cap L_{2}}x\|\\ & \stackrel{(5.16)}{\leq}& {c^{d}_{F}} (c_{F})^{d k} \|\mathrm{P}_{K_{L}}x - \mathrm{P}_{L_{1}\cap L_{_{2}}}x\|\\ &=& (c_{F})^{d(k+1)} \|\mathrm{P}_{K_{L}}x - \mathrm{P}_{L_{1}\cap L_{2}}x\|. \end{array} $$

Hence, we have inductively proved

$$ (\forall k \in \mathbb{N}) \quad (\forall y\in \mathcal{H}) \quad \|CC_{\mathcal{S}_{L}}^{k}\mathrm{P}_{K_{L}}y - \mathrm{P}_{L_{1}\cap L_{2}}y\| \leq (c_{F})^{d k} \|\mathrm{P}_{K_{L}}y- \mathrm{P}_{L_{1}\cap L_{2}}y\|. $$
(5.17)

Let \(u \in U_{1} \cap U_{2}\). By Lemma 5.2(iii), we know that \((\forall k \in \mathbb {N})\)\((\forall y \in {\mathscr{H}})\)\(CC_{\mathcal {S}}^{k}y = u+ CC_{\mathcal {S}_{L}}^{k}(y-u)\) and by Fact 2.1, we have \(\mathrm {P}_{\cap ^{2}_{i=1} U_{i} }y = \mathrm {P}_{u +\cap ^{2}_{i=1} L_{i} }y = u+ \mathrm {P}_{\cap ^{2}_{i=1} L_{i} }(y-u)\). Hence, we obtain that for every \(k \in \mathbb {N}\) and for every \(x \in {\mathscr{H}}\),

$$ \begin{array}{@{}rcl@{}} \|CC_{\mathcal{S}}^{k}(\mathrm{P}_{K}x) - \mathrm{P}_{U_{1}\cap U_{2}}x\| & =& \|u + CC_{\mathcal{S}_{L}}^{k}(\mathrm{P}_{K}(x) -u) - u -\mathrm{P}_{L_{1}\cap L_{2}}(x-u)\| \\ & =& \|CC_{\mathcal{S}_{L}}^{k}(\mathrm{P}_{K_{L}}(x-u) ) -\mathrm{P}_{L_{1}\cap L_{2}}(x-u)\| \\ & \stackrel{(5.17)}{\leq}& (c_{F})^{d k} \|\mathrm{P}_{K_{L}}(x-u)- \mathrm{P}_{L_{1}\cap L_{2}}(x-u)\|\\ &=& (c_{F})^{d k} \|u+\mathrm{P}_{K_{L}}(x-u)-\left( u+ \mathrm{P}_{L_{1}\cap L_{2}}(x-u) \right)\|\\ &=& (c_{F})^{d k} \|\mathrm{P}_{K}x- \mathrm{P}_{U_{1}\cap U_{2}}x\|. \end{array} $$

Therefore, the proof is complete. □

Let us now provide an application of Proposition 5.18.

Proposition 5.19

Assume that U1, U2 are two closed affine subspaces with parU1 + parU2 being closed. Let \(x \in {\mathscr{H}}\). Let cF be the cosine of the Friedrichs angle between parU1 and parU2. Then the following hold:

  1. (i)

    Assume that \(\{\text {Id}, \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\} \subseteq \mathcal {S}\). Then each of the three sequences \((CC_{\mathcal {S}}^{k}(\mathrm {P}_{U_{1}}x))_{k \in \mathbb {N}}\), \((CC_{\mathcal {S}}^{k}(\mathrm {P}_{U_{2}}x))_{k \in \mathbb {N}}\), and \((CC_{\mathcal {S}}^{k}(\mathrm {P}_{U_{1} + U_{2}}x))_{k \in \mathbb {N}}\) converges linearly to \(\mathrm {P}_{U_{1} \cap U_{2}}x\). Moreover, their rates of convergence are no larger than cF ∈ [0,1[.

  2. (ii)

    Assume that \(\{\text {Id}, \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}, \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\} \subseteq \mathcal {S}\). Then the sequences \((CC_{\mathcal {S}}^{k}(\mathrm {P}_{U_{1}}x))_{k \in \mathbb {N}}\), \((CC_{\mathcal {S}}^{k}(\mathrm {P}_{U_{2}}x))_{k \in \mathbb {N}}\), and \((CC_{\mathcal {S}}^{k}(\mathrm {P}_{U_{1} + U_{2}}x))_{k \in \mathbb {N}}\) converge linearly to \(\mathrm {P}_{U_{1} \cap U_{2}}x\). Moreover, their rates of convergence are no larger than \({c^{2}_{F}}\).

Proof

Clearly, under the conditions of each statement, \(\text {par} U_{1} \cap \text {par} U_{2} \subseteq \cap _{F \in \mathcal {S}_{L}} \text {Fix} F\). In addition, we are able to substitute KL in Proposition 5.18 by any one of parU1, parU2 or parU1 + parU2.

(i): Since \(\{\text {Id}, \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\} \subseteq \mathcal {S}\),

$$ T_{U_{2},U_{1}} :=\frac{\text{Id} +\mathrm{R}_{U_{2}}\mathrm{R}_{U_{1}}}{2} \in \text{aff}\{\text{Id}, \mathrm{R}_{U_{2}}\mathrm{R}_{U_{1}}\} \subseteq \text{aff} \mathcal{S}. $$

Substitute d = 1 in Proposition 5.18 to obtain

$$ (\forall k \in \mathbb{N}) \quad \|CC_{\mathcal{S}}^{k}\mathrm{P}_{K_{L}}x- \mathrm{P}_{U_{1}\cap U_{2}}x\| \leq {c^{k}_{F}} \|\mathrm{P}_{K_{L}}x - \mathrm{P}_{U_{1}\cap U_{2}}x\|. $$

Because parU1 + parU2 is closed, by Fact 2.16, we know that cF ∈ [0,1[.

(ii): Since \(\{\text {Id}, \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}, \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\} \subseteq \mathcal {S}\), by [5, Proposition 4.13(i)], we know that

$$ T^{2}_{U_{2},U_{1}} = \left( \frac{\text{Id} +\mathrm{R}_{U_{2}}\mathrm{R}_{U_{1}}}{2}\right)^{2} \in \text{aff} \mathcal{S}. $$

The remainder of the proof is similar to the proof of (i) above; the only difference is that this time we substitute d = 2 instead of d = 1. □
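A minimal numerical illustration of Proposition 5.19(i) (hypothetical data, not part of the paper; two lines in \(\mathbb {R}^{2}\) with prescribed Friedrichs cosine cF): starting from \(\mathrm {P}_{U_{1}}x_{0}\), the error of the circumcentered reflection method is controlled by the bound \({c^{k}_{F}}\|\mathrm {P}_{U_{1}}x_{0} - \mathrm {P}_{U_{1}\cap U_{2}}x_{0}\|\).

```python
import numpy as np

def circumcenter(points):
    # Gram-system solve for the circumcenter, as in the earlier sketches.
    p0, rest = points[0], points[1:]
    V = np.array([p - p0 for p in rest])
    if V.size == 0:
        return p0
    alpha, *_ = np.linalg.lstsq(V @ V.T, 0.5 * np.sum(V * V, axis=1), rcond=None)
    return p0 + V.T @ alpha

# Two lines in R^2 whose Friedrichs angle has cosine cF (so U1 ∩ U2 = {0}).
cF = 0.9
u1 = np.array([1.0, 0.0])
u2 = np.array([cF, np.sqrt(1.0 - cF**2)])
P1, P2 = np.outer(u1, u1), np.outer(u2, u2)
R1, R2 = 2 * P1 - np.eye(2), 2 * P2 - np.eye(2)

x0 = np.array([3.0, -1.0])
x = P1 @ x0                          # the special initial point P_{U1} x0
err0 = np.linalg.norm(x)             # here P_{U1 ∩ U2} x0 = 0
for k in range(1, 11):
    x = circumcenter([x, R1 @ x, R2 @ R1 @ x])     # S_2 = {Id, R_{U1}, R_{U2}R_{U1}}
    print(k, np.linalg.norm(x), cF**k * err0)      # error vs. the c_F^k bound
```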

The following example shows that the special choice of the initial points in Proposition 5.19 is indeed necessary.

Example 5.20

Assume that U1, U2 are two closed linear subspaces in \({\mathscr{H}}\) such that U1 + U2 is closed. Assume \(\mathcal {S} = \{\text {Id}, \mathrm {R}_{U_{2}}\mathrm {R}_{U_{1}}\}\). Let \(x \in {\mathscr{H}} \setminus (U_{1}+U_{2})\). Clearly, \(U_{1} \cap U_{2} \subseteq \cap _{T \in \mathcal {S}} \text {Fix} T\). But

$$ \lim_{k \rightarrow \infty} CC_{\mathcal{S}}^{k}x = \mathrm{P}_{\text{Fix} CC_{\mathcal{S}}}x \not \in U_{1} \cap U_{2}. $$

Proof

By definition of \(\mathcal {S}\) and by Fact 2.32, \(CC_{\mathcal {S}} = T_{U_{2},U_{1}}\), where the \(T_{U_{2},U_{1}}\) is the Douglas–Rachford operator defined in Definition 2.14. By assumptions, Facts 2.16 and 2.17 imply that \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) converges linearly to \(\mathrm {P}_{\text {Fix} CC_{\mathcal {S}}}x\). So

$$ \lim_{k \rightarrow \infty} CC_{\mathcal{S}}^{k}x = \mathrm{P}_{\text{Fix} CC_{\mathcal{S}}}x. $$
(5.18)

Since \(x \not \in U_{1} + U_{2} =\overline {U_{1} + U_{2}}\), Lemma 2.18 yields that

$$ \mathrm{P}_{\text{Fix} CC_{\mathcal{S}}}x \neq \mathrm{P}_{U_{1} \cap U_{2}}x. $$
(5.19)

Assume to the contrary \(\mathrm {P}_{\text {Fix} CC_{\mathcal {S}}}x \in U_{1} \cap U_{2}\). By Theorem 4.12(ii) and (5.18), we get \(\mathrm {P}_{\text {Fix} CC_{\mathcal {S}}}x = \mathrm {P}_{U_{1} \cap U_{2}}x\), which contradicts (5.19).

Therefore, \(\lim _{k \rightarrow \infty } CC_{\mathcal {S}}^{k}x = \mathrm {P}_{\text {Fix} CC_{\mathcal {S}}}x \not \in U_{1} \cap U_{2}\). □

5.4 Best Approximation for the Intersection of Finitely Many Affine Subspaces

In this subsection, our main goal is to apply Proposition 5.19(i) to find the best approximation onto the intersection of finitely many affine subspaces. Unless stated otherwise, let I := {1,…,N} with N ≥ 1 and let \({\mathscr{H}}^{N}\) be the real Hilbert space obtained by endowing the Cartesian product with the usual vector space structure and with the inner product \((\mathbf {x},\mathbf {y}) \mapsto {\sum }^{N}_{i =1} \langle x_{i},y_{i}\rangle \), where x = (xi)i∈I and y = (yi)i∈I (for details, see [3, Proposition 29.16]).

Let (∀i ∈I) Ci be a nonempty closed convex subset of \({\mathscr{H}}\). Define two subsets of \({\mathscr{H}}^{N}\):

$$ \mathbf{C} := C_{1} \times {\cdots} \times C_{N} \quad \text{and} \quad \mathbf{D} := \{(x, \ldots, x) \in \mathcal{H}^{N} ~|~ x \in \mathcal{H}\}, $$

which are both closed and convex (in fact, D is a linear subspace).

Fact 5.21

[3, Propositions 29.3 and 29.16] Let x := (xi)i∈I. Then

  1. (i)

    \(\mathrm {P}_{\mathbf {C}}\mathbf {x} = \left (\mathrm {P}_{C_{i}} x_{i} \right )_{i \in \mathrm {I}}\).

  2. (ii)

    \(\mathrm {P}_{\mathbf {D}}\mathbf {x} = \left (\frac {1}{N} {\sum }_{i \in \mathrm {I}} x_{i} \right )_{i \in \mathrm {I}}\).

The following two results are clear from the definition of the sets C and D.

Lemma 5.22

Let \(x \in {\mathscr{H}}\). Then \((x, \ldots ,x) \in \mathbf {C} \cap \mathbf {D}\) if and only if \(x \in \cap _{i\in \mathrm {I}}C_{i}\).

Proposition 5.23

Let \(x \in {\mathscr{H}}\). Then \(\mathrm {P}_{\mathbf {C} \cap \mathbf {D} } (x, \ldots ,x) = \left (\mathrm {P}_{\cap ^{N}_{i=1}C_{i}}x, \ldots , \mathrm {P}_{\cap ^{N}_{i=1}C_{i}}x\right )\).

Fact 5.24

[3, Corollary 5.30] Let t be a strictly positive integer, set J := {1,…,t}, let (Uj)j∈J be a family of closed affine subspaces of \({\mathscr{H}}\) such that \(\cap ^{t}_{j=1} U_{j} \neq \varnothing \). Let \(x_{0} \in {\mathscr{H}}\). Set \((\forall n \in \mathbb {N})\)\(x_{n+1} := \mathrm {P}_{U_{t}}{\cdots } \mathrm {P}_{U_{1}}x_{n}\). Then \(x_{n} \rightarrow \mathrm {P}_{\cap ^{t}_{j=1} U_{j} }x_{0}\).

Using Fact 5.24 and Proposition 5.23, we obtain the following interesting by-product, which can be treated as a new method to solve the best approximation problem associated with \(\cap ^{N}_{i=1}C_{i}\).

Proposition 5.25

Assume (∀i ∈I) Ci is a closed affine subspace of \({\mathscr{H}}\) with \(\cap ^{N}_{i=1}C_{i} \neq \varnothing \). Let \(x \in {\mathscr{H}}\). Then the following hold:

  1. (i)

    \(\mathrm {P}_{\mathbf {C} \cap \mathbf {D} } (x, \ldots ,x) = \lim _{k \to \infty } (\mathrm {P}_{\mathbf {D} } \mathrm {P}_{\mathbf {C} })^{k} (x, \ldots ,x) \).

  2. (ii)

Set \(Q:=\frac {1}{N}(\mathrm {P}_{C_{1}} + {\cdots } + \mathrm {P}_{C_{N}})\). Then

$$ Q^{k}x \to \mathrm{P}_{\cap^{N}_{i=1}C_{i}}x. $$

Proof

Since (∀i ∈I) Ci is a closed affine subspace of \({\mathscr{H}}\) with \(\cap ^{N}_{i=1}C_{i} \neq \varnothing \), the set C is a closed affine subspace of \({\mathscr{H}}^{N}\) and \(\mathbf {C} \cap \mathbf {D} \neq \varnothing \). By the definition of D, it is a closed linear subspace of \({\mathscr{H}}^{N}\).

(i): The result follows from Fact 5.24 by taking t = 2 and considering the two closed affine subspaces C and D in \({\mathscr{H}}^{N}\).

(ii): Combine Fact 5.21 and Proposition 5.23 with (i) above to obtain the desired result. □
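A minimal sketch of Proposition 5.25(ii) for two planes in \(\mathbb {R}^{3}\) (hypothetical data, not part of the paper; linear, hence affine, subspaces): iterating the average of the projections recovers the projection onto the intersection.

```python
import numpy as np

def proj(A):
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

P1 = proj(np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]))   # C1 = x-y plane
P2 = proj(np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]]))   # C2 = x-z plane
Q = 0.5 * (P1 + P2)                                          # Q of Proposition 5.25(ii)

x = np.array([2.0, -1.0, 3.0])
for _ in range(200):
    x = Q @ x
print(x)  # tends to P_{C1 ∩ C2} x = (2, 0, 0), the projection onto the x-axis
```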

Fact 5.26

[2, Lemma 5.18] Assume each set Ci is a closed linear subspace. Then \(C^{\perp }_{1} + {\cdots } + C^{\perp }_{N}\) is closed if and only if D + C is closed.

The next proposition shows that we can use the circumcenter method induced by reflectors to solve the best approximation problem associated with finitely many closed affine subspaces. Recall that for each affine subspace U, we denote by par U the linear subspace parallel to U, i.e., \(\text{par} U := U - U\).

Proposition 5.27

Assume U1,…,Ut are closed affine subspaces in \({\mathscr{H}}\), with \(\cap ^{t}_{i=1} U_{i} \neq \varnothing \) and (parU1) + ⋯ + (parUt) being closed. Set J := {1,…,t}, \(\mathbf {C}:= U_{1} \times {\cdots } \times U_{t}\), and \(\mathbf {D}:= \{(x, \ldots , x) \in {\mathscr{H}}^{t}~ |~ x \in {\mathscr{H}}\}\). Assume \(\{\text {Id}, \mathrm {R}_{\mathbf {C}}\mathrm {R}_{\mathbf {D}}\} \subseteq \mathcal {S}\) or \(\{\text {Id}, \mathrm {R}_{\mathbf {D}}\mathrm {R}_{\mathbf {C}}\} \subseteq \mathcal {S}\). Let \(x \in {\mathscr{H}}\) and set \(\mathbf {x} := (x, \ldots , x) \in \mathbf {D}\). Then \((CC_{\mathcal {S}}^{k} \mathbf {x} )_{k \in \mathbb {N}}\) converges to \(\mathrm {P}_{\mathbf {C} \cap \mathbf {D}} \mathbf {x} = (\mathrm {P}_{\cap ^{t}_{i=1}U_{i}}x, \ldots , \mathrm {P}_{\cap ^{t}_{i=1}U_{i}}x)\) linearly.

Proof

Denote \(\mathbf{C_{L}} := (\text{par} U_{1}) \times {\cdots} \times (\text{par} U_{t})\). Clearly, \(\mathbf{C_{L}} = \text{par}\mathbf{C}\). Since parU1,…,parUt are closed linear subspaces, \(\mathbf{C_{L}}\) is a closed linear subspace. It is clear that D = parD is a closed linear subspace. Because (parU1) + ⋯ + (parUt) is closed, by Fact 5.26, we get that \(\mathbf{C_{L}} + \mathbf{D}\) is closed. Then using Proposition 5.19(i), we know there exists a constant cF ∈ [0,1[ such that

$$ (\forall k \in \mathbb{N})\quad (\forall \mathbf{y} \!\in\! \mathbf{D}) \quad \|CC_{\mathcal{S}_{L}}^{k}\mathbf{y} -\mathrm{P}_{\mathbf{C_{L}}\cap \mathbf{D}} \mathbf{y}\| = \|CC_{\mathcal{S}_{L}}^{k}\mathrm{P}_{\mathbf{D}}\mathbf{y} -\mathrm{P}_{\mathbf{C_{L}}\cap \mathbf{D}} \mathbf{y}\| \leq {c^{k}_{F}} \|\mathrm{P}_{\mathbf{D}}\mathbf{y} -\mathrm{P}_{\mathbf{C_{L}}\cap \mathbf{D}} \mathbf{y}\|, $$

which implies that \((CC_{\mathcal {S}_{L}}^{k} (\mathbf {x}-\mathbf {u}) )_{k \in \mathbb {N}}\) converges linearly to \(\mathrm {P}_{\mathbf {C_{L}}\cap \mathbf {D}} (\mathbf {x}-\mathbf {u})\) for any \(u \in \cap ^{t}_{i=1} U_{i}\), where \(\mathbf{u} := (u, \ldots ,u)\). Hence, by Proposition 5.3, we conclude that \((CC_{\mathcal {S}}^{k} \mathbf {x} )_{k \in \mathbb {N}}\) converges linearly to PC∩Dx. Since, by Proposition 5.23, \(\mathrm {P}_{\mathbf {C} \cap \mathbf {D}} \mathbf {x} =\left (\mathrm {P}_{\cap ^{t}_{i=1}U_{i}}x, \ldots , \mathrm {P}_{\cap ^{t}_{i=1}U_{i}}x\right )\), we conclude that \((CC_{\mathcal {S}}^{k} \mathbf {x} )_{k \in \mathbb {N}}\) converges linearly to \(\left (\mathrm {P}_{\cap ^{t}_{i=1}U_{i}}x, \ldots , \mathrm {P}_{\cap ^{t}_{i=1}U_{i}}x\right )\). □
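The following sketch (hypothetical data, not part of the paper) illustrates Proposition 5.27 in the simplest case \(\mathcal {S} = \{\text {Id}, \mathrm {R}_{\mathbf {C}}\mathrm {R}_{\mathbf {D}}\}\), in which, by Fact 2.32, \(CC_{\mathcal {S}}\) is the Douglas–Rachford operator \(T_{\mathbf {C},\mathbf {D}}\); the two planes in \(\mathbb {R}^{3}\) are hypothetical, and the projectors onto C and D come from Fact 5.21.

```python
import numpy as np

def proj(A):
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

n = 3
P_U1 = proj(np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]))   # U1 = x-y plane
P_U2 = proj(np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]]))   # U2 = x-z plane

# Projectors and reflectors for C = U1 x U2 and the diagonal D in R^{2n} (Fact 5.21).
def P_C(v): return np.concatenate([P_U1 @ v[:n], P_U2 @ v[n:]])
def P_D(v):
    avg = 0.5 * (v[:n] + v[n:])
    return np.concatenate([avg, avg])
def R_C(v): return 2.0 * P_C(v) - v
def R_D(v): return 2.0 * P_D(v) - v

x0 = np.array([2.0, -1.0, 3.0])
v = np.concatenate([x0, x0])                  # x = (x0, x0) ∈ D
for _ in range(100):
    v = 0.5 * (v + R_C(R_D(v)))               # CC_S for S = {Id, R_C R_D} (Fact 2.32)
print(v[:n])  # tends to P_{U1 ∩ U2} x0 = (2, 0, 0)
```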

6 Numerical Experiments

In order to explore the convergence rate of the circumcenter methods, in this section we use the performance profile introduced by Dolan and Moré [13] to compare the circumcenter methods induced by reflectors developed in Section 5 with the Douglas–Rachford method (DRM) and the method of alternating projections (MAP) for solving best approximation problems associated with linear subspaces. (Recall that, by Proposition 5.3, any convergence result on circumcenter methods induced by reflectors associated with linear subspaces yields the corresponding convergence result for the methods associated with affine subspaces.)

In the whole section, given a pair of closed linear subspaces U1, U2 and an initial point x0, the problem we are going to solve is to

$$ \text{find}~ \mathrm{P}_{U_{1} \cap U_{2}}x_{0}. $$

Denote by cF the cosine of the Friedrichs angle between U1 and U2. It is well known that the sharp linear convergence rates of DRM and MAP for finding \(\mathrm {P}_{U_{1}\cap U_{2}}x_{0}\) are cF and \({c^{2}_{F}}\), respectively (see [1, Theorem 4.3] and [11, Theorem 9.8] for details). Hence, if cF is “small”, then we expect DRM and MAP to converge to \(\mathrm {P}_{U_{1} \cap U_{2}}x_{0}\) “fast”, whereas if cF ≈ 1, the two classical solvers should converge to \(\mathrm {P}_{U_{1} \cap U_{2}}x_{0}\) “slowly”. The cF associated with the problems in each experiment below is randomly chosen from a certain range.
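For concreteness, cF can be estimated numerically from orthonormal bases of U1 and U2: the singular values of \(Q_{1}^{\top}Q_{2}\) are the cosines of the principal angles, and cF is the largest one strictly below 1 (singular values equal to 1 correspond to directions in U1 ∩ U2). The short sketch below is our own illustration; the tolerance is a hypothetical choice.

```python
# Estimate the cosine of the Friedrichs angle between span(Q1) and span(Q2),
# where Q1, Q2 have orthonormal columns.  Singular values of Q1^T Q2 are the
# cosines of the principal angles; values (numerically) equal to 1 belong to
# U1 ∩ U2 and are discarded.  The tolerance 1e-12 is an illustrative choice.
import numpy as np

def cos_friedrichs(Q1, Q2, tol=1e-12):
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)   # sorted in decreasing order
    s = s[s < 1.0 - tol]
    return float(s[0]) if s.size else 0.0

rng = np.random.default_rng(1)
Q1 = np.linalg.qr(rng.standard_normal((1000, 20)))[0]
Q2 = np.linalg.qr(rng.standard_normal((1000, 20)))[0]
cF = cos_friedrichs(Q1, Q2)
print(cF, cF**2)    # the sharp linear rates of DRM and MAP, respectively
```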

6.1 Numerical Preliminaries

Dolan and Moré define a benchmark in terms of a set P of benchmark problems, a set S of optimization solvers, and a convergence measure matrix T. Once these components of a benchmark are defined, performance profiles can be used to compare the performance of the solvers.

We assume \({\mathscr{H}}= \mathbb {R}^{1000}\). In each of our experiments, we randomly generate 10 pairs of linear subspaces U1, U2 with Friedrichs angles in a certain range; pairs of linear subspaces with a prescribed Friedrichs angle are constructed as in [14]. For each pair of subspaces, we randomly choose 10 initial points x0. This results in a total of 100 problems, which constitute our set P of benchmark problems. Set

$$ \begin{array}{@{}rcl@{}} \mathcal{S}_{1} &:=& \{ \text{Id}, R_{U_{1}}, R_{U_{2}}\},\quad \mathcal{S}_{2} := \{\text{Id}, R_{U_{1}}, R_{U_{2}}R_{U_{1}}\},\\ \mathcal{S}_{3} &:=& \{ \text{Id}, R_{U_{1}}, R_{U_{2}}, R_{U_{2}}R_{U_{1}}\}, \quad \mathcal{S}_{4} := \{ \text{Id}, R_{U_{1}}, R_{U_{2}}, R_{U_{2}}R_{U_{1}}, R_{U_{1}}R_{U_{2}}, R_{U_{1}}R_{U_{2}}R_{U_{1}}\}. \end{array} $$

Notice that \(CC_{\mathcal{S}_{2}}\) is the C–DRM operator \(C_{T}\) in [7] and hence it is also the CRM operator \(C\) in [8] when m = 2.
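In code, these operator sets can be represented simply as lists of callables; the sketch below uses hypothetical names and assumes that R1 and R2 implement the reflectors \(R_{U_{1}}\) and \(R_{U_{2}}\).

```python
# Encode S_1, ..., S_4 as lists of callables; CC_{S_i}(x) is then the
# circumcenter of {T(x) : T in S_i}.  R1, R2 are assumed to implement the
# reflectors R_{U_1} and R_{U_2}.
def make_operator_sets(R1, R2):
    Id = lambda x: x
    R21 = lambda x: R2(R1(x))        # R_{U_2} R_{U_1}
    R12 = lambda x: R1(R2(x))        # R_{U_1} R_{U_2}
    R121 = lambda x: R1(R2(R1(x)))   # R_{U_1} R_{U_2} R_{U_1}
    S1 = [Id, R1, R2]
    S2 = [Id, R1, R21]               # the C-DRM / CRM set
    S3 = [Id, R1, R2, R21]
    S4 = [Id, R1, R2, R21, R12, R121]
    return S1, S2, S3, S4
```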

Our test algorithms and the sequences to monitor are listed in Table 1.

Table 1 Forming the set of solvers S

Hence, our set S of optimization solvers is a subset of the set consisting of the six algorithms above.

For every i ∈{1,2,3,4}, we calculate the operator \(CC_{\mathcal {S}_{i}}\) by applying Proposition 2.33; for notational simplicity, the corresponding solvers are abbreviated as in Table 1.

We use \(10^{-6}\) as the tolerance in our stopping criteria and terminate an algorithm when the number of iterations reaches \(10^{6}\) (in which case the problem is declared unsolved). For each problem p with exact solution \(\overline {x}=\mathrm {P}_{U_{1} \cap U_{2}}x_{0}\) and each solver s, the performance measure considered in this section is either

(6.1)

or

(6.2)

where \(a^{(k)}_{p,s}\) is the kth iterate of solver s applied to problem p. We would not have access to \(\overline {x}=\mathrm {P}_{U_{1} \cap U_{2}}x_{0}\) in applications, but we use it here to assess the true performance of the algorithms. After collecting the related performance matrices \(T = (t_{p,s})_{100 \times \text{card}(S)}\), we use the perf.m file of Dolan and Moré [12] to generate the performance profile plots. All of our calculations are implemented in Matlab.
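For reference, the benchmarking loop can be summarized schematically as follows (a Python sketch under our own assumptions, not the authors' Matlab code): each solver is a self-map of \(\mathbb{R}^{1000}\) iterated until the distance to \(\overline{x}\) drops below the tolerance or the iteration cap is reached, and the recorded iteration counts and elapsed times play the roles of the measures (6.1) and (6.2).

```python
# Schematic benchmarking loop (a sketch; the paper's experiments use Matlab).
# A "solver" is a callable x -> x_next; "problems" is a list of (x0, xbar) pairs.
import time
import numpy as np

TOL, MAXIT = 1e-6, 10**6

def run(solver, x0, xbar):
    x, start = x0.copy(), time.perf_counter()
    for k in range(1, MAXIT + 1):
        x = solver(x)
        if np.linalg.norm(x - xbar) <= TOL:
            return k, time.perf_counter() - start
    return np.inf, np.inf                       # declared unsolved

def benchmark(problems, solvers):
    iters = np.empty((len(problems), len(solvers)))
    times = np.empty_like(iters)
    for p, (x0, xbar) in enumerate(problems):
        for s, solver in enumerate(solvers):
            iters[p, s], times[p, s] = run(solver, x0, xbar)
    return iters, times                         # performance matrices T = (t_{p,s})
```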

6.2 Performance Evaluation

In this subsection, we present the performance profiles from four experiments. (We ran many other experiments and the results were similar to the ones shown here.) The cosines of the Friedrichs angles in the four experiments are drawn from [0.01,0.05[, [0.05,0.5[, [0.5,0.9[, and [0.9,0.95[, respectively. In each of the four experiments, we randomly generate 10 pairs of linear subspaces with the cosine of the Friedrichs angle, cF, in the corresponding range and, as mentioned in the last subsection, for each pair of subspaces we randomly choose 10 initial points x0, which gives 100 problems per experiment. The outputs of each of the four experiments are the performance profiles with the performance measure shown in (6.1) (the left-hand side pictures in Figs. 1 and 2) and with the performance measure shown in (6.2) (the right-hand side ones in Figs. 1 and 2).

Fig. 1 Performance profiles on six solvers for cF ∈ [0.01,0.5[

Fig. 2 Performance profiles on six solvers for cF ∈ [0.5,0.95[

According to Fig. 1, we conclude that when cF ∈ [0.01,0.5[, \(CC_{\mathcal {S}_{4}}\) needs the smallest number of iterations to satisfy the inequality shown in (6.1), that MAP is the fastest to attain the inequality shown in (6.2), and that \(CC_{\mathcal {S}_{3}}\) takes second place in terms of both the required number of iterations and the run time. Note that the circumcentered reflection methods need to solve the linear system from Proposition 2.33 in every iteration. Hence, it is reasonable that MAP is the fastest even though it needs more iterations than the circumcentered reflection methods.

From Fig. 2(a) and (b), we see that when cF ∈ [0.5,0.9[, the numbers of iterations required by \(CC_{\mathcal {S}_{2}}\) and \(CC_{\mathcal {S}_{3}}\) are similar (their lines almost overlap) and dominate the other four algorithms, and that \(CC_{\mathcal {S}_{2}}\) is the fastest, followed closely by MAP and \(CC_{\mathcal {S}_{3}}\). By Fig. 2(c) and (d), we find that when cF ∈ [0.9,0.95[, in which case MAP and DRM are very slow for solving the best approximation problem, \(CC_{\mathcal {S}_{3}}\) needs the fewest iterations and is the fastest in every one of the 100 problems.

Note that in \(\mathbb {R}^{1000}\), the calculation of projections accounts for the majority of the total time needed to solve the problems. As mentioned before, we apply Proposition 2.33 to calculate our circumcenter mappings \(CC_{\mathcal {S}_{1}}\), \(CC_{\mathcal {S}_{2}}\), \(CC_{\mathcal {S}_{3}}\), and \(CC_{\mathcal {S}_{4}}\). Because the largest number of operators in our sets \(\mathcal {S}_{i}\) is 6 (attained by \(\mathcal {S}_{4}\)), the size of the Gram matrix in Proposition 2.33 is at most 5 × 5. As shown in Fig. 2(a) and (c), the methods \(CC_{\mathcal {S}_{2}}\), \(CC_{\mathcal {S}_{3}}\), and \(CC_{\mathcal {S}_{4}}\) need fewer iterations to solve the problems than MAP and DRM. It is well known that MAP and DRM are very slow when cF is close to 1. Hence, it is not surprising that Fig. 2(b) shows that \(CC_{\mathcal {S}_{2}}\) is the fastest for cF ∈ [0.5,0.9[ and Fig. 2(d) shows that \(CC_{\mathcal {S}_{3}}\) is the fastest for cF ∈ [0.9,0.95[.

The main conclusions that can be drawn from our experiments are the following.

When cF ∈ [0.01,0.5[ is small, \(CC_{\mathcal {S}_{4}}\) is the winner in terms of the number of iterations, while MAP is the best solver with regard to the required run time; \(CC_{\mathcal {S}_{3}}\) takes second place in the performance profiles under both performance measures (6.1) and (6.2).

When cF ∈ [0.5,0.9[, Behling, Bello Cruz and Santos’ method \(CC_{\mathcal {S}_{2}}\) is the best solver, and the performance of \(CC_{\mathcal {S}_{3}}\) is outstanding with respect to both the required number of iterations and the run time.

When cF ∈ [0.9,0.95[, \(CC_{\mathcal {S}_{3}}\) is the best option with regard to both the required number of iterations and the run time.

Altogether, if the user has no prior knowledge of the range of cF, then we recommend \(CC_{\mathcal {S}_{3}}\).

7 Concluding Remarks

Generalizing some of our work in [5] and using the idea in [7], we showed the properness of the circumcenter mapping induced by isometries, which allowed us to study circumcentered isometry methods. Sufficient conditions for the (weak, strong, linear) convergence of circumcentered isometry methods were presented. In addition, we provided certain classes of linearly convergent circumcentered reflection methods and established some of their applications. Numerical experiments suggested that three (including the C–DRM introduced in [7]) of our four chosen circumcentered reflection methods dominate DRM and MAP in terms of the number of iterations for every pair of linear subspaces with the cosine of the Friedrichs angle cF ∈ [0.01,0.95[. Although MAP was the fastest solver for cF ∈ [0.01,0.5[ and C–DRM was the fastest for cF ∈ [0.5,0.9[, one of our new circumcentered reflection methods is a competitive choice when we have no prior knowledge of the Friedrichs angle cF.

We showed the weak convergence of a certain class of circumcentered isometry methods in Theorem 4.7. Naturally, we may ask whether strong convergence holds. If \(\mathcal {S}\) consists of isometries and \(\cap _{T \in \mathcal {S}} \text {Fix} T \neq \varnothing \), then Theorem 3.3(i) shows the properness of \(CC_{\mathcal {S}}\). Assuming additionally that \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) has a norm cluster point in \(\cap _{T \in \mathcal {S}} \text {Fix}T\), Theorem 4.12(i) says that \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) converges to \(\mathrm {P}_{\cap _{T \in \mathcal {S}} \text {Fix}T}x\). Another question is: can one find a more general condition on \(\mathcal {S}\) such that \(CC_{\mathcal {S}}\) is proper and \((CC_{\mathcal {S}}^{k}x)_{k \in \mathbb {N}}\) has a norm cluster point in \(\cap _{T \in \mathcal {S}} \text {Fix} T\) for some \(x \in {\mathscr{H}}\)? These are interesting questions to explore in future work.