1 Introduction

We deal in this paper with the convex feasibility problem (CFP), defined as follows: given closed and convex sets \(K_1, \dots , K_m\subset \mathbb {R}^n\), find a point in \(\cap _{i=1}^mK_i\).

We study the Circumcentered-Reflection Method (CRM) for solving CFP under an error bound regularity condition. CRM and generalized circumcenters were introduced in [11, 12] and subsequently studied in [7, 8, 9, 10, 14, 16], motivated by their geometric appeal for accelerating classical projection/reflection-based methods. We will see that a nonaffine setting equipped with an error bound condition guarantees at least linear convergence of CRM. Moreover, we present a quite general instance for which CRM converges superlinearly, opening a path for future research on circumcenter-type schemes. In addition to these contributions, we show that CRM is faster than the famous Method of Alternating Projections (MAP), even in the absence of an error bound. MAP has a vast literature (see, for instance, [2, 3, 21, 23]) and, in principle, concerns CFP for two sets. However, the discussion below allows us to apply both CRM and MAP to the multi-set CFP.

Two very well-known methods for CFP related to MAP are the Sequential Projection Method (SePM) and the Simultaneous Projection Method (SiPM), which can be traced back to [18] and [20], respectively, and are defined as follows. Consider the operators \({\widehat{P}}, {\overline{P}}:\mathbb {R}^n\rightarrow \mathbb {R}^n\) given by \({\widehat{P}}:=P_{K_m}\circ \dots \circ P_{K_1}\), \(\overline{P}:=\frac{1}{m}\sum _{i=1}^mP_{K_i}\), where each \(P_{K_i}:\mathbb {R}^n\rightarrow K_i\) is the orthogonal projection onto \(K_i\). Starting from an arbitrary \(z \in \mathbb {R}^n\), SePM and SiPM generate sequences \((\hat{x}^k)_{k\in \mathbb {N}}\) and \((\bar{x}^k)_{k\in \mathbb {N}}\) given by \(\hat{x}^{k+1}={\widehat{P}}(\hat{x}^k)\) and \(\bar{x}^{k+1}=\overline{P}(\bar{x}^k)\), respectively, where \(\bar{x}^{0} = \hat{x}^{0} = z\). When \(\cap _{i=1}^mK_i\ne \emptyset\), the sequences generated by both methods are known to be globally convergent to points belonging to \(\cap _{i=1}^mK_i\), i.e., to solve CFP. Under suitable assumptions, both methods also have interesting convergence properties in the infeasible case, i.e., when \(\cap _{i=1}^mK_i =\emptyset\), but we will not deal with this case. See [4] for an in-depth study of these and other projection methods for CFP.
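The two operators above can be sketched numerically. The following is an illustrative example of ours (not from the paper), using three Euclidean balls in \(\mathbb {R}^2\) with closed-form projections and a common point:

```python
import numpy as np

def proj_ball(x, center, radius):
    """Orthogonal projection onto a Euclidean ball."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

# three balls whose intersection contains the origin
balls = [(np.array([1.0, 0.0]), 1.5),
         (np.array([-1.0, 0.0]), 1.5),
         (np.array([0.0, 1.0]), 1.5)]
projections = [lambda x, c=c, r=r: proj_ball(x, c, r) for c, r in balls]

def sepm_step(x):
    # sequential: apply P_{K_1} first, then P_{K_2}, ..., i.e. P_{K_m} o ... o P_{K_1}
    for P in projections:
        x = P(x)
    return x

def sipm_step(x):
    # simultaneous: average of the m projections
    return sum(P(x) for P in projections) / len(projections)

x_hat = x_bar = np.array([3.0, -2.0])
for _ in range(200):
    x_hat, x_bar = sepm_step(x_hat), sipm_step(x_bar)

# maximal constraint violation; nonpositive means feasible
feas = lambda x: max(np.linalg.norm(x - c) - r for c, r in balls)
```

Both iterates approach points of the intersection, as the global convergence result quoted above predicts.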

An interesting relation between SePM and SiPM was found by Pierra [22]. Consider the sets \(\mathrm {K}:=K_1 \times \dots \times K_m\subset \mathbb {R}^{nm}\) and \(\mathrm {U}:=\{(x,\dots ,x)\mid x\in \mathbb {R}^n\}\subset \mathbb {R}^{nm}\). Apply SePM to the sets \(\mathrm {K},\mathrm {U}\) in the product space \(\mathbb {R}^{n m}\), i.e., take \(\mathrm {x}^{k+1}=P_{\mathrm {U}}\left( P_{\mathrm {K}}(\mathrm {x}^k)\right)\) starting from \(\mathrm {x}^0\in \mathrm {U}\). Clearly, \(\mathrm {x}^k\) belongs to \(\mathrm {U}\) for all \(k\in \mathbb {N}\), so that we may write \(\mathrm {x}^k=(x^k,\ldots ,x^k)\) with \(x^k\in \mathbb {R}^n\). It was proved in [22] that \(x^{k+1}={\overline{P}}(x^k)\), i.e., a step of SePM applied to these two convex sets in the product space \(\mathbb {R}^{nm}\) is equivalent to a step of SiPM in the original space \(\mathbb {R}^n\). Thus, SePM with just two sets plays a special role and therefore carries a name of its own, namely MAP. Observe that in the equivalence above one of the two sets in the product space, namely \(\mathrm {U}\), is a linear subspace. This fact is essential for the convergence of CRM applied to solving CFP; see [1, 14].
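Pierra's equivalence is easy to check numerically. A minimal sketch (our own example with three halfspaces in \(\mathbb {R}^2\); \(P_{\mathrm K}\) acts blockwise and \(P_{\mathrm U}\) averages the blocks):

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Orthogonal projection onto {x : <a, x> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

A = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
B = [1.0, 1.0, 1.0]
m = len(A)

def P_K(X):
    # componentwise projection onto K = K_1 x ... x K_m
    return np.array([proj_halfspace(X[i], A[i], B[i]) for i in range(m)])

def P_U(X):
    # projection onto the diagonal subspace U: average the m blocks
    return np.tile(X.mean(axis=0), (m, 1))

x = np.array([4.0, -3.0])
X = np.tile(x, (m, 1))                 # the diagonal point (x, ..., x)
step_product = P_U(P_K(X))[0]          # one SePM step in the product space
step_sipm = sum(proj_halfspace(x, A[i], B[i]) for i in range(m)) / m
```

The two computed points coincide, as proved in [22].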

Let us now focus on the alleged acceleration effect of CRM with respect to MAP. There is abundant numerical evidence of this effect (see [11, 13, 14, 19]); in this paper, we will present some analytical evidence, which strengthens the results from [14]. In view of Pierra’s reformulation [22], the general CFP can be seen as a specific convex-affine intersection problem, and both CRM and MAP converge for the general convex-affine intersection problem. Hence, from now on, we seek a point common to a given closed convex set \(K\subset \mathbb {R}^n\) and an affine manifold \(U\subset \mathbb {R}^n\).

For finding a point in \(K\cap U\ne \emptyset\), MAP and CRM iterate by means of the operators \(T=P_U\circ P_K\) and \(C(\cdot )={{\,\mathrm{circ}\,}}(\cdot , R_K(\cdot ), R_U(R_K(\cdot )))\), respectively, where \(R_K=2P_K-{{\,\mathrm{Id}\,}}\) and \(R_U=2P_U-{{\,\mathrm{Id}\,}}\) are the reflection operators with respect to K and U, respectively. For a point \(x\in \mathbb {R}^n\), C(x), when it exists, is the point equidistant from \(x,R_K(x)\) and \(R_U(R_K(x))\) that lies in the affine manifold determined by these three points.

A first result in the analytical study of the acceleration effect of CRM over MAP was derived in [14], where it was proved that, for all \(x\in U\), C(x) is well-defined and \({{\,\mathrm{dist}\,}}(C(x),K\cap U)\le {{\,\mathrm{dist}\,}}(T(x),K\cap U)\), where \({{\,\mathrm{dist}\,}}\) stands for the Euclidean distance. The previous inequality means that the point obtained after a CRM step is closer to (or at least no farther from) \(K\cap U\) than the one obtained after a MAP step from the same point. This local (or myopic) acceleration does not imply immediately that the CRM sequence converges faster than the MAP one. In order to show global acceleration, we will focus on special situations where the convergence rate of the MAP can be precisely established.

MAP is known to be linearly convergent in several special situations, e.g., when both K and U are affine manifolds (see [21]) or when \(K\cap U\) has nonempty interior (see [3]). In Sect. 3 we will consider another such case, namely when a certain so-called error bound (EB from now on) holds, meaning that there exists \(\omega \in (0,1)\) such that \({{\,\mathrm{dist}\,}}(x, K)\ge \omega {{\,\mathrm{dist}\,}}(x,K\cap U)\) for all \(x\in U\). This error bound resembles the regularity conditions presented in [3, 4, 15]. We will prove that in this case both the MAP and the CRM sequences converge linearly, with asymptotic constants bounded by \(\sqrt{1-\omega ^2}\) for MAP, and by the strictly better bound \(\sqrt{{1-\omega ^2}}/\sqrt{{1+\omega ^{2}}}\) for CRM, thus showing that under EB, CRM is faster than MAP. For the case of MAP, linear convergence under the error bound condition with this asymptotic constant is already known (see, for instance, [3, Corollary 3.14]), even if U is not affine; we present it for the sake of completeness. Then, in Sect. 4 we will exhibit two rather generic families of examples where CRM indeed converges faster than MAP. In the first one, \(K\subset \mathbb {R}^{n+1}\) will be the epigraph of a convex function \(f:\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) and \(U\subset \mathbb {R}^{n+1}\) a supporting hyperplane of K. We will show that in this situation, under adequate assumptions on f, the MAP sequence converges sublinearly, while the CRM sequence converges linearly, and we will also give an explicit bound for the asymptotic constant of the CRM sequence. In addition, we will present a somewhat pathological example for which both the MAP sequence and the CRM one converge sublinearly. In the second family, K will still be the epigraph of a convex f, but U will not be a supporting hyperplane of K; rather, it will intersect the interior of K. 
In this case, under not too demanding assumptions on f, the MAP sequence converges linearly (we will give an explicit bound of its asymptotic constant), while CRM converges superlinearly. These results firmly corroborate the already established numerical evidence in [14] of the superiority of CRM over MAP.

2 Preliminaries

We recall first the definition of Q-linear and R-linear convergence.

Definition 2.1

Let \((y^k)_{k\in \mathbb {N}}\subset \mathbb {R}^n\) be a convergent sequence to \(y^*\). Assume that \(y^k\ne y^*\) for all \(k\in \mathbb {N}\). Define

$$\begin{aligned} q:=\limsup _{k\rightarrow \infty }\frac{\left\Vert y^{k+1}-y^*\right\Vert }{\left\Vert y^k-y^*\right\Vert }, \quad \text{ and }\quad r:=\limsup _{k\rightarrow \infty }\left\Vert y^k-y^*\right\Vert ^{1/k}. \end{aligned}$$

Then, the convergence of \((y^k)_{k\in \mathbb {N}}\) is

  1. (i)

    Q-superlinear if \(q=0\);

  2. (ii)

    Q-linear if \(q\in (0,1)\);

  3. (iii)

    Q-sublinear if \(q\ge 1\);

  4. (iv)

    R-superlinear if \(r=0\);

  5. (v)

    R-linear if \(r\in (0,1)\);

  6. (vi)

    R-sublinear if \(r\ge 1\).

The values \(q\) and \(r\) are called asymptotic constants of \((y^k)_{k\in \mathbb {N}}\).

It is well known that Q-linear convergence implies R-linear convergence (with an asymptotic constant not exceeding the Q-asymptotic one), but the converse statement does not hold true.
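The quantities in Definition 2.1 can be estimated empirically. A sketch of ours, applied to two scalar sequences with limit \(y^*=0\): the geometric sequence \(2^{-k}\) is Q-linear with \(q=1/2\), while the harmonic sequence \(1/k\) is Q-sublinear:

```python
import numpy as np

def q_r_estimates(y, tail=20):
    """Estimate q and r of Definition 2.1 from the last `tail` terms
    of a sequence converging to 0 (limsups approximated by tail maxima)."""
    y = np.asarray(y, dtype=float)
    q = max(abs(y[k + 1]) / abs(y[k]) for k in range(len(y) - tail, len(y) - 1))
    r = max(abs(y[k]) ** (1.0 / k) for k in range(len(y) - tail, len(y)))
    return q, r

geom = [0.5 ** k for k in range(1, 60)]   # Q-linear, q = 1/2
harm = [1.0 / k for k in range(1, 60)]    # Q-sublinear, q = 1

q_g, r_g = q_r_estimates(geom)
q_h, r_h = q_r_estimates(harm)
```

For the geometric sequence the estimates of q and r both sit near 1/2; for the harmonic one both drift toward 1, consistently with items (ii), (iii), (v) and (vi).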

We now recall the notion of Fejér monotonicity in \(\mathbb {R}^{n}\).

Definition 2.2

A sequence \((y^k)_{k\in \mathbb {N}}\) is Fejér monotone with respect to a set M when \(\left\Vert y^{k+1}-y\right\Vert \le \left\Vert y^k-y\right\Vert\) for all \(y\in M\) and all \(k\in \mathbb {N}\).

Proposition 2.3

If \((y^k)_{k\in \mathbb {N}}\) is Fejér monotone with respect to M then

  1. (i)

    \((y^k)_{k\in \mathbb {N}}\) is bounded;

  2. (ii)

    if a cluster point \({\bar{y}}\) of \((y^k)_{k\in \mathbb {N}}\) belongs to M, then \(\displaystyle \lim _{k\rightarrow \infty }y^k={\bar{y}}\).

Proof

See Theorem 2.16 in [4]. \(\square\)

We end this section with the main convergence results for MAP and CRM.

Proposition 2.4

Take closed and convex sets \(K_1,K_2\subset \mathbb {R}^n\) such that \(K_1\cap K_2\ne \emptyset\). Let \(P_{K_1}, P_{K_2}\) be the orthogonal projections onto \(K_1,K_2\) respectively. Consider the sequence \((z^k)_{k\in \mathbb {N}}\) generated by MAP starting from any \(z^0\in \mathbb {R}^n\), i.e., \(z^{k+1}=P_{K_2}(P_{K_1}(z^k))\). Then \((z^k)_{k\in \mathbb {N}}\) is Fejér monotone with respect to \(K_1\cap K_2\) and converges to a point \(z^*\in K_1\cap K_2\).

Proof

See [17, Theorem 4]. \(\square\)

Let us present the formal definition of the circumcenter.

Definition 2.5

Let \(x,y,z\in \mathbb {R}^n\) be given. The circumcenter \({{\,\mathrm{circ}\,}}(x,y,z)\in \mathbb {R}^n\) is a point satisfying

  1. (i)

    \(\left\Vert {{\,\mathrm{circ}\,}}(x,y,z) - x\right\Vert =\left\Vert {{\,\mathrm{circ}\,}}(x,y,z) - y\right\Vert =\left\Vert {{\,\mathrm{circ}\,}}(x,y,z) - z\right\Vert\) and,

  2. (ii)

    \({{\,\mathrm{circ}\,}}(x,y,z)\in {{\,\mathrm{aff}\,}}\{x,y,z\}:=\{w\in \mathbb {R}^n \mid w=x+\alpha (y-x)+\beta (z-x),\;\alpha ,\beta \in \mathbb {R}\}\).

The point \({{\,\mathrm{circ}\,}}(x,y,z)\) is well and uniquely defined if the cardinality of the set \(\{x,y,z\}\) is one or two. In the case in which the three points are all distinct, \({{\,\mathrm{circ}\,}}(x,y,z)\) is well and uniquely defined only if x, y and z are not collinear. For more general notions, definitions and results on circumcenters see [12, 13].
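Definition 2.5 translates into a small linear system: writing \({{\,\mathrm{circ}\,}}(x,y,z)=x+\alpha (y-x)+\beta (z-x)\) and imposing the two equidistance conditions in (i) gives a \(2\times 2\) system in \(\alpha ,\beta\). A minimal sketch (helper names are ours):

```python
import numpy as np

def circumcenter(x, y, z):
    """Solve the 2x2 system for circ(x, y, z); np.linalg.solve raises
    LinAlgError exactly in the degenerate (collinear) cases."""
    u, v = y - x, z - x
    G = 2.0 * np.array([[u @ u, u @ v],
                        [u @ v, v @ v]])          # Gram matrix, times 2
    rhs = np.array([u @ u, v @ v])
    a, b = np.linalg.solve(G, rhs)
    return x + a * u + b * v

x, y, z = np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 2.0])
c = circumcenter(x, y, z)   # circumcenter of this right triangle is (1, 1)
```

The returned point is equidistant from the three vertices and lies in their affine hull by construction, matching (i) and (ii).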

Consider now a closed convex set \(K\subset \mathbb {R}^n\) and an affine manifold \(U\subset \mathbb {R}^n\). Let \(P_K, P_U\) be the orthogonal projections onto K and U, respectively. Define \(R_K,R_U, T, C:\mathbb {R}^n\rightarrow \mathbb {R}^n\) as

$$\begin{aligned} R_K=2P_K-{{\,\mathrm{Id}\,}}, \quad R_U=2P_U-{{\,\mathrm{Id}\,}},\quad T = P_U \circ P_K, \quad C(\cdot )={{\,\mathrm{circ}\,}}(\cdot ,R_K(\cdot ), R_U(R_K(\cdot ))). \end{aligned}$$
(2.1)
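To see the operators in (2.1) at work, consider two lines in \(\mathbb {R}^2\) meeting at the origin (an illustrative example of ours). Since \(R_K\) and \(R_U\) are isometries fixing \(K\cap U=\{0\}\), the three points \(x, R_K(x), R_U(R_K(x))\) are equidistant from 0, so their circumcenter is exactly 0, while T contracts by \(\cos ^2 t\) per step:

```python
import numpy as np

def circumcenter(x, y, z):
    # 2x2 linear system coming from the equidistance conditions
    u, v = y - x, z - x
    G = 2.0 * np.array([[u @ u, u @ v], [u @ v, v @ v]])
    a, b = np.linalg.solve(G, np.array([u @ u, v @ v]))
    return x + a * u + b * v

t = 0.2                                      # angle between the two lines
d = np.array([np.cos(t), np.sin(t)])         # direction of the line K

P_K = lambda p: (p @ d) * d                  # projection onto K
P_U = lambda p: np.array([p[0], 0.0])        # projection onto U (the x-axis)
R_K = lambda p: 2 * P_K(p) - p
R_U = lambda p: 2 * P_U(p) - p
T = lambda p: P_U(P_K(p))                    # MAP operator from (2.1)

x0 = np.array([1.0, 0.0])
x1 = T(x0)                                   # one MAP step: shrinks by cos(t)^2
c_crm = circumcenter(x0, R_K(x0), R_U(R_K(x0)))  # CRM lands on K ∩ U at once
```

For two lines one CRM step already solves the problem; the nonaffine case studied below is where the rate analysis becomes interesting.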

Proposition 2.6

Let \(K\subset \mathbb {R}^n\) be a nonempty closed convex set. Then, the orthogonal projection \(P_K\) onto K is firmly nonexpansive, that is, for all \(x,y\in \mathbb {R}^n\) we have

$$\begin{aligned} \left\Vert P_K(x)-P_K(y)\right\Vert ^2\le \left\Vert x-y\right\Vert ^2-\left\Vert (x-P_K(x)) - (y - P_K(y))\right\Vert ^2. \end{aligned}$$

Proof

See [5, Theorem 4.16]. \(\square\)

Proposition 2.7

Assume that \(K\cap U\ne \emptyset\). Let \((x^k)_{k\in \mathbb {N}}\) be the sequence generated by CRM starting from any \(x^0\in U\), i.e., \(x^{k+1}=C(x^k)\). Then,

  1. (i)

    for all \(x\in U\), we have that C(x) is well defined and belongs to U;

  2. (ii)

    for all \(x\in U\), it holds that \(\left\Vert C(x) -y\right\Vert \le \left\Vert T(x)-y\right\Vert\), for any \(y\in K\cap U\);

  3. (iii)

    \((x^k)_{k\in \mathbb {N}}\) is Fejér monotone with respect to \(K\cap U\);

  4. (iv)

    \((x^k)_{k\in \mathbb {N}}\) converges to a point in \(K\cap U\).

Proof

All these results can be found in [14]: (i) is proved in Lemma 3, (ii) in Theorem 2, and (iii) and (iv) in Theorem 1. \(\square\)

3 Linear convergence of MAP and CRM under an error bound assumption

We start by introducing an assumption on a pair of convex sets \(K,K'\subset \mathbb {R}^n\), denoted as EB (as in Error Bound), which will ensure linear convergence of MAP and CRM.

  1. EB

    \(K\cap K'\ne \emptyset\) and there exists \(\omega \in (0,1)\) such that \({{\,\mathrm{dist}\,}}(x,K)\ge \omega {{\,\mathrm{dist}\,}}(x,K\cap K')\) for all \(x\in K'\).
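To make EB concrete, consider an example of ours: \(K'\) the x-axis in \(\mathbb {R}^2\) and \(K=\{(a,b)\mid b\ge |a|\}\), so that \(K\cap K'=\{0\}\). For \(x=(s,0)\) one has \({{\,\mathrm{dist}\,}}(x,K)=|s|/\sqrt{2}\) and \({{\,\mathrm{dist}\,}}(x,K\cap K')=|s|\), so EB holds with any \(\omega \le 1/\sqrt{2}\). A numerical sketch:

```python
import numpy as np

def dist_to_cone(p):
    """Distance from p to K = {(a, b) : b >= |a|}, via the closed-form projection."""
    a, b = p
    if b >= abs(a):                    # already in K
        return 0.0
    if b <= -abs(a):                   # inside the polar cone: projects to 0
        return float(np.hypot(a, b))
    s = (a + np.sign(a) * b) / 2.0     # project onto the boundary ray b = |a|
    return float(np.hypot(a - s, b - np.sign(a) * s))

samples = np.concatenate([np.linspace(-3, -0.05, 60), np.linspace(0.05, 3, 60)])
ratios = [dist_to_cone((s, 0.0)) / abs(s) for s in samples]
omega_hat = min(ratios)                # empirical EB modulus; here 1/sqrt(2)
```

Here the ratio is constant in s, so the empirical modulus equals the exact one.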

Now we consider a closed and convex set \(K\subset \mathbb {R}^n\) and an affine manifold \(U\subset \mathbb {R}^n\). Assuming that K and U satisfy Assumption EB, we will prove linear convergence of the sequences \((z^k)_{k\in \mathbb {N}}\) and \((x^k)_{k\in \mathbb {N}}\) generated by MAP and CRM, respectively. We start by proving that, for both methods, the distance sequences \(({{\,\mathrm{dist}\,}}(z^k, K\cap U))_{k\in \mathbb {N}}\) and \(({{\,\mathrm{dist}\,}}(x^k, K\cap U))_{k\in \mathbb {N}}\) converge linearly to 0, which will be a corollary of the next proposition.

Proposition 3.1

Assume that K and U satisfy EB. Consider \(T,C:\mathbb {R}^n\rightarrow \mathbb {R}^n\) as in (2.1). Then, for all \(x\in U\),

$$\begin{aligned} (1-\omega ^2)\left\Vert x-P_{K\cap U}(x)\right\Vert ^2\ge \left\Vert T(x)-P_{K\cap U}(T(x))\right\Vert ^2\ge \left\Vert C(x)-P_{K\cap U}(C(x))\right\Vert ^2, \end{aligned}$$
(3.1)

with \(\omega\) as in Assumption EB.

Proof

It follows easily from Proposition 2.6 that

$$\begin{aligned} \left\Vert P_K(x)-y\right\Vert ^2\le \left\Vert x-y\right\Vert ^2-\left\Vert x-P_K(x)\right\Vert ^2 \end{aligned}$$
(3.2)

for all \(x\in \mathbb {R}^n\) and all \(y\in K\cap U\subset K\). Invoking again Proposition 2.6, we get from (3.2)

$$\begin{aligned} \left\Vert T(x)-y\right\Vert ^2&=\left\Vert P_U(P_K(x))- y\right\Vert ^2\le \left\Vert P_K(x)-y\right\Vert ^2-\left\Vert P_U(P_K(x))-P_K(x)\right\Vert ^2 \nonumber \\&\le \left\Vert x-y\right\Vert ^2-\left\Vert x-P_K(x)\right\Vert ^2-\left\Vert P_U(P_K(x))-P_K(x)\right\Vert ^2 \nonumber \\&\le \left\Vert x-y\right\Vert ^2-\left\Vert x-P_K(x)\right\Vert ^2= \left\Vert x-y\right\Vert ^2-{{\,\mathrm{dist}\,}}^2(x,K)\nonumber \\&\le \left\Vert x-y\right\Vert ^2-\omega ^2 {{\,\mathrm{dist}\,}}^2(x,K\cap U) \end{aligned}$$
(3.3)

for all \(x\in U, y\in K\cap U\), using Assumption EB in the last inequality. Take now \(y=P_{K\cap U}(x)\). Then, in view of (3.3),

$$\begin{aligned} \left\Vert C(x)-P_{K\cap U}(C(x))\right\Vert ^2&\le \left\Vert C(x)-P_{K\cap U}(T(x))\right\Vert ^2\nonumber \\&\le \left\Vert T(x)-P_{K\cap U}(T(x))\right\Vert ^2 \le \left\Vert T(x)-P_{K\cap U}(x)\right\Vert ^2\nonumber \\&\le \left\Vert x-P_{K\cap U}(x)\right\Vert ^2-\omega ^2 {{\,\mathrm{dist}\,}}^2(x,K\cap U) \nonumber \\&= (1-\omega ^2)\left\Vert x-P_{K\cap U}(x)\right\Vert ^2, \end{aligned}$$
(3.4)

using the definition of \(P_{K\cap U}\) in the first and the third inequality and Proposition 2.7(ii) in the second inequality. Note that (3.1) follows immediately from (3.4). \(\square\)

Corollary 3.2

Let \((z^k)_{k\in \mathbb {N}}\) and \((x^k)_{k\in \mathbb {N}}\) be the sequences generated by MAP and CRM starting at any \(z^0\in \mathbb {R}^n\) and any \(x^0\in U\), respectively. If K and U satisfy Assumption EB, then the sequences \(({{\,\mathrm{dist}\,}}(z^k,K\cap U))_{k\in \mathbb {N}}\) and \(({{\,\mathrm{dist}\,}}(x^k,K\cap U))_{k\in \mathbb {N}}\) converge Q-linearly to 0, and the asymptotic constants are bounded above by \(\sqrt{1-\omega ^2}\), with \(\omega\) as in Assumption EB.

Proof

In view of the definition of \(P_{K\cap U}\), (3.1) can be rewritten as

$$\begin{aligned} (1-\omega ^2){{\,\mathrm{dist}\,}}^2(x,K\cap U)\ge {{\,\mathrm{dist}\,}}^2(T(x),K\cap U)\ge {{\,\mathrm{dist}\,}}^2(C(x),K\cap U), \end{aligned}$$
(3.5)

for all \(x\in U\). Since \(z^{k+1}=T(z^k)\), we get from the first inequality in (3.5),

$$\begin{aligned}(1-\omega ^2){{\,\mathrm{dist}\,}}^2(z^k,K\cap U)\ge {{\,\mathrm{dist}\,}}^2(z^{k+1},K\cap U),\end{aligned}$$

using the fact that \((z^k)_{k\in \mathbb {N}}\subset U\). Hence

$$\begin{aligned} \frac{{{\,\mathrm{dist}\,}}(z^{k+1},K\cap U)}{{{\,\mathrm{dist}\,}}(z^k,K\cap U)}\le \sqrt{1-\omega ^2}. \end{aligned}$$
(3.6)

By the same token, using the second inequality in (3.5) and Proposition 2.7(ii), we get

$$\begin{aligned} \frac{{{\,\mathrm{dist}\,}}(x^{k+1},K\cap U)}{{{\,\mathrm{dist}\,}}(x^k,K\cap U)}\le \sqrt{1-\omega ^2}. \end{aligned}$$
(3.7)

The inequalities in (3.6) and (3.7) imply the result. \(\square\)

We remark that the result for MAP holds when U is any closed and convex set, not necessarily an affine manifold. We need U to be an affine manifold in Proposition 2.7 (otherwise, \((x^k)_{k\in \mathbb {N}}\) may even diverge), but this proposition is used in our proofs only when the CRM sequence is involved.

Next, we show that, under Assumption EB, CRM achieves a linear rate with an asymptotic constant better than the one given in Corollary 3.2.

Proposition 3.3

Let \((x^k)_{k\in \mathbb {N}}\) be the sequence generated by CRM starting at any \(x^0\in U\). If K and U satisfy Assumption EB, then the sequence \(({{\,\mathrm{dist}\,}}(x^k,K\cap U))_{k\in \mathbb {N}}\) converges Q-linearly to 0 with asymptotic constant bounded above by \(\sqrt{\dfrac{1-\omega ^2}{1+\omega ^{2}}},\) where \(\omega\) is as in Assumption EB.

Proof

Take \(y^*\in K\cap U\) and \(x\in U\). Note that

$$\begin{aligned} {{\,\mathrm{dist}\,}}^2(x,K)&=\left\Vert x-P_K(x)\right\Vert ^2 \nonumber \\&\le \left\Vert x-y^*\right\Vert ^2-\left\Vert P_K(x)-y^*\right\Vert ^2\nonumber \\&=\left\Vert x-y^*\right\Vert ^2-\left\Vert P_K(x)-P_K(y^*)\right\Vert ^2\nonumber \\&\le \left\Vert x-y^*\right\Vert ^2-\left\Vert P_U(P_K(x))-P_U(P_K(y^*))\right\Vert ^2-\left\Vert P_U(P_K(x))-P_K(x)\right\Vert ^2\nonumber \\&=\left\Vert x-y^*\right\Vert ^2-\left\Vert P_U(P_K(x))-y^*\right\Vert ^2-\left\Vert P_U(P_K(x))-P_K(x)\right\Vert ^2, \end{aligned}$$
(3.8)

using the definition of orthogonal projection onto K, the fact that \(y^*\in K\) and Proposition 2.6 in the first inequality, and again Proposition 2.6 regarding U in the second inequality.

Now, we will invoke some results from [14] to prove that C(x) is indeed the orthogonal projection of x onto the intersection of U with the halfspace \(H_x^+:=\{y\in \mathbb {R}^n \mid \left\langle {y-P_K(x)},{ x-P_K(x)} \right\rangle \le 0\}\) containing K. In [14, Lemma 3] it is proved that \(C(x)=P_{H_x\cap U}(x)\) with \(H_x:=\{y\in \mathbb {R}^n \mid \left\langle {y-P_K(x)},{ x-P_K(x)} \right\rangle = 0\}\). Using the arguments employed at the beginning of the proof of [14, Lemma 5], we get

$$\begin{aligned} C(x) = P_{H_x\cap U}(x) = P_{H_x^+\cap U}(x). \end{aligned}$$

Hence, the above equality and the fact that \(y^*\in K\cap U \subset H_x^+\cap U\) imply \(\left\langle {y^*-C(x)},{x-C(x)} \right\rangle \le 0\) and since x, \(P_U(P_K(x))\) and C(x) are collinear (see [14, Eq. (7)]), we get \(\left\langle {y^*-C(x)},{P_U(P_K(x))-C(x)} \right\rangle \le 0\). Thus,

$$\begin{aligned} \Vert P_U(P_K(x))-y^*\Vert ^2\ge \Vert C(x)-y^*\Vert ^2+\Vert C(x)-P_U(P_K(x))\Vert ^2. \end{aligned}$$
(3.9)

Now, (3.9) and (3.8) imply

$$\begin{aligned}&{{{\,\mathrm{dist}\,}}}^{2}(x,K)\le \left\Vert x-y^{*}\right\Vert ^{2}-\left\Vert C(x)-y^{*}\right\Vert ^{2}-\left\Vert C(x)-P_U(P_K(x))\right\Vert ^{2}\\&\qquad -\left\Vert P_U(P_K(x))-P_{K}(x)\right\Vert ^{2}\\&\quad =\left\Vert x-y^{*}\right\Vert ^{2}-\left\Vert C(x)-y^{*}\right\Vert ^{2}-\left\Vert C(x)-P_K(x)\right\Vert ^{2}\\&\quad \le \left\Vert x-y^{*}\right\Vert ^{2}-\left\Vert C(x)-y^{*}\right\Vert ^{2}-{{\,\mathrm{dist}\,}}^{2}(C(x), K)\\&\quad \le \left\Vert x-y^{*}\right\Vert ^{2}-{{\,\mathrm{dist}\,}}^{2}(C(x),K\cap U) -{{\,\mathrm{dist}\,}}^{2}(C(x), K), \end{aligned}$$

using the definition of the distance in the last two inequalities. Now, taking \(y^{*}=P_{K\cap U}(x)\) and using the error bound condition for x and C(x), we obtain

$$\begin{aligned} \omega ^2{{\,\mathrm{dist}\,}}^2(x,K\cap U)&\le {{\,\mathrm{dist}\,}}^2(x,K)\nonumber \\&\le {{\,\mathrm{dist}\,}}^2(x,K\cap U)-{{\,\mathrm{dist}\,}}^2(C(x),K\cap U) -\omega ^2{{\,\mathrm{dist}\,}}^2(C(x), K\cap U)\nonumber \\&= {{\,\mathrm{dist}\,}}^2(x,K\cap U)- (1+\omega ^2){{\,\mathrm{dist}\,}}^2(C(x), K\cap U). \end{aligned}$$
(3.10)

Rearranging (3.10), we get \((1+\omega ^2){{\,\mathrm{dist}\,}}^2(C(x), K\cap U)\le (1-\omega ^2){{\,\mathrm{dist}\,}}^2(x,K\cap U)\) and, since \(x^{k+1}=C(x^k)\), we have

$$\begin{aligned} \frac{{{\,\mathrm{dist}\,}}(x^{k+1}, K\cap U)}{{{\,\mathrm{dist}\,}}(x^{k},K\cap U)}\le \sqrt{ \frac{1-\omega ^2}{1+\omega ^2}}, \end{aligned}$$

which implies the result. \(\square\)

Propositions 3.1 and 3.3 do not entail immediately that the sequences \((x^k)_{k\in \mathbb {N}}, (z^k)_{k\in \mathbb {N}}\) themselves converge linearly; a sequence \((y^k)_{k\in \mathbb {N}}\subset \mathbb {R}^n\) may converge to a point \(y\in M\subset \mathbb {R}^n\) in such a way that \(({{\,\mathrm{dist}\,}}(y^k, M))_{k\in \mathbb {N}}\) converges linearly to 0 but \((y^k)_{k\in \mathbb {N}}\) itself converges sublinearly. Take, for instance, \(M=\{(s,0)\mid s\in \mathbb {R}\}\subset \mathbb {R}^2\) and \(y^k=\left( 1/k,2^{-k}\right)\). This sequence converges to \(0\in M\), and \({{\,\mathrm{dist}\,}}(y^k,M)=2^{-k}\) converges linearly to 0 with asymptotic constant equal to 1/2, but the first component of \(y^k\) converges to 0 sublinearly, and hence the same holds for the sequence \((y^k)_{k\in \mathbb {N}}\). The next lemma, possibly of some interest on its own, establishes that this situation cannot occur when \((y^k)_{k\in \mathbb {N}}\) is Fejér monotone with respect to M. The result below is similar to [5, Theorem 5.12]; however, we include its proof for the sake of completeness.
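The counterexample is easy to probe numerically. The sketch below also checks that Fejér monotonicity with respect to M fails for it (here against the point \((10,0)\in M\)), which is why the next lemma does not apply:

```python
import numpy as np

ks = np.arange(1, 40)
Y = np.stack([1.0 / ks, 0.5 ** ks], axis=1)   # y^k = (1/k, 2^{-k})

# dist(y^k, M) = 2^{-k}: exactly halves at every step (Q-linear, constant 1/2)
dist_ratios = Y[1:, 1] / Y[:-1, 1]

# ||y^k - 0|| ~ 1/k: the ratios drift toward 1 (sublinear)
norm_ratios = np.linalg.norm(Y[1:], axis=1) / np.linalg.norm(Y[:-1], axis=1)

# Fejér monotonicity w.r.t. M fails: distances to (10, 0) in M increase
y = np.array([10.0, 0.0])
fejer_gaps = np.linalg.norm(Y[1:] - y, axis=1) - np.linalg.norm(Y[:-1] - y, axis=1)
```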

Lemma 3.4

Consider a nonempty closed convex set \(M\subset \mathbb {R}^n\) and \((y^k)_{k\in \mathbb {N}}\subset \mathbb {R}^n\). Assume that \((y^k)_{k\in \mathbb {N}}\) is Fejér monotone with respect to M, and that \(({{\,\mathrm{dist}\,}}(y^k,M))_{k\in \mathbb {N}}\) converges R-linearly to 0. Then \((y^k)_{k\in \mathbb {N}}\) converges R-linearly to some point \(y^*\in M\), with asymptotic constant bounded above by the asymptotic constant of \(({{\,\mathrm{dist}\,}}(y^k,M))_{k\in \mathbb {N}}\).

Proof

Fix \(k\in \mathbb {N}\) and note that the Fejér monotonicity hypothesis implies that, for all \(j\ge k\),

$$\begin{aligned} \left\Vert y^j-P_M(y^k)\right\Vert \le \left\Vert y^k-P_M(y^k)\right\Vert ={{\,\mathrm{dist}\,}}(y^k, M). \end{aligned}$$
(3.11)

By Proposition 2.3(i), \((y^k)_{k\in \mathbb {N}}\) is bounded. Take any cluster point \({\bar{y}}\) of \((y^k)_{k\in \mathbb {N}}\). Taking limits with \(j\rightarrow \infty\) in (3.11) along a subsequence \((y^{k_j})_{j\in \mathbb {N}}\) of \((y^k)_{k\in \mathbb {N}}\) converging to \({\bar{y}}\), we get that \(\left\Vert {\bar{y}}-P_M(y^k)\right\Vert \le {{\,\mathrm{dist}\,}}(y^k,M)\). Since \(\lim _{k\rightarrow \infty }{{\,\mathrm{dist}\,}}(y^k,M)=0\), we conclude that \((P_M(y^k))_{k\in \mathbb {N}}\) converges to \({\bar{y}}\), so that there exists a unique cluster point, say \(y^*\). Therefore, \(\lim _{k\rightarrow \infty }y^k=y^*\), and hence \(\left\Vert y^*-P_M(y^k)\right\Vert \le {{\,\mathrm{dist}\,}}(y^k,M)\). Since \(y^*=\lim _{k\rightarrow \infty }P_M(y^k)\), we conclude that \(y^*\in M\). Observe further that

$$\begin{aligned} \left\Vert y^k-y^*\right\Vert&\le \left\Vert y^k-P_M(y^k)\right\Vert +\left\Vert P_M(y^k)-y^*\right\Vert \nonumber \\&= {{\,\mathrm{dist}\,}}(y^k,M)+\left\Vert y^*-P_M(y^k)\right\Vert \le 2{{\,\mathrm{dist}\,}}(y^k,M). \end{aligned}$$
(3.12)

Taking kth-root and then \(\limsup\) with \(k\rightarrow \infty\) in (3.12), and using the R-linearity hypothesis,

$$\begin{aligned} \limsup _{k\rightarrow \infty }\left\Vert y^k-y^*\right\Vert ^{1/k}&\le \limsup _{k\rightarrow \infty }2^{1/k}{{\,\mathrm{dist}\,}}(y^k, M)^{1/k}\\&=\limsup _{k\rightarrow \infty }{{\,\mathrm{dist}\,}}(y^k,M)^{1/k}<1, \end{aligned}$$

establishing both that \((y^k)_{k\in \mathbb {N}}\) converges R-linearly to \(y^*\in M\) and the statement on the asymptotic constant. \(\square\)

With the help of Lemma 3.4, we prove next the R-linear convergence of the MAP and CRM sequences under Assumption EB, and give bounds for their asymptotic constants.

Theorem 3.5

Consider a closed and convex set \(K\subset \mathbb {R}^n\) and an affine manifold \(U\subset \mathbb {R}^n\). Assume that K and U satisfy Assumption EB. Let \((z^k)_{k\in \mathbb {N}}, (x^k)_{k\in \mathbb {N}}\) be the sequences generated by MAP and CRM, respectively, starting from arbitrary points \(z^0\in \mathbb {R}^n, x^0\in U\). Then both sequences \((z^k)_{k\in \mathbb {N}}\) and \((x^k)_{k\in \mathbb {N}}\) converge R-linearly to points in \(K\cap U\), and the asymptotic constants are bounded above by \(\sqrt{1-\omega ^2}\) for MAP, and by \(\sqrt{\dfrac{1-\omega ^2}{1+\omega ^{2}}}\) for CRM, with \(\omega\) as in Assumption EB.

Proof

In view of Propositions 2.4 and 2.7(iii) and (iv), both sequences are Fejér monotone with respect to \({K\cap U}\) and converge to points in \({K\cap U}\). By Corollary 3.2, both sequences \(({{\,\mathrm{dist}\,}}(z^k,{K\cap U}))_{k\in \mathbb {N}}\) and \(({{\,\mathrm{dist}\,}}(x^k,{K\cap U}))_{k\in \mathbb {N}}\) converge Q-linearly to 0, and hence R-linearly to 0. Corollary 3.2 shows that the asymptotic constant of the sequence \(({{\,\mathrm{dist}\,}}(z^k,{K\cap U}))_{k\in \mathbb {N}}\) is bounded above by \(\sqrt{1-\omega ^2}\), and Proposition 3.3 establishes that the asymptotic constant of the sequence \(({{\,\mathrm{dist}\,}}(x^k,{K\cap U}))_{k\in \mathbb {N}}\) is bounded above by \(\sqrt{{1-\omega ^2}}/\sqrt{{1+\omega ^{2}}}\). Finally, by Lemma 3.4, both \((z^k)_{k\in \mathbb {N}}\) and \((x^k)_{k\in \mathbb {N}}\) are R-linearly convergent, with the announced bounds for their asymptotic constants. \(\square\)

We remark that, in view of Theorem 3.5, the upper bound for the asymptotic constant of the CRM sequence is substantially better than the one for the MAP sequence. Note that the CRM bound improves upon the MAP one by a factor of \(\sqrt{1+\omega ^2}\), which increases to \(\sqrt{2}\) as \(\omega\) approaches 1.
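Theorem 3.5 can be illustrated on a pair of ours for which the EB modulus is known: \(K=\{(a,b)\mid b\ge |a|\}\) and U the x-axis, where \({{\,\mathrm{dist}\,}}((s,0),K)=|s|/\sqrt{2}\) gives \(\omega =1/\sqrt{2}\), so the bounds are \(\approx 0.707\) for MAP and \(\approx 0.577\) for CRM. In this sketch MAP contracts by exactly 1/2 per step, below its bound, while CRM even reaches the solution 0 in a single step:

```python
import numpy as np

def P_K(p):
    """Projection onto the cone K = {(a, b) : b >= |a|}."""
    a, b = p
    if b >= abs(a):
        return np.array([a, b])
    if b <= -abs(a):
        return np.array([0.0, 0.0])
    s = (a + np.sign(a) * b) / 2.0
    return np.array([s, np.sign(a) * s])

P_U = lambda p: np.array([p[0], 0.0])
R_K = lambda p: 2 * P_K(p) - p
R_U = lambda p: 2 * P_U(p) - p

def circumcenter(x, y, z):
    u, v = y - x, z - x
    G = 2.0 * np.array([[u @ u, u @ v], [u @ v, v @ v]])
    a, b = np.linalg.solve(G, np.array([u @ u, v @ v]))
    return x + a * u + b * v

x0 = np.array([1.0, 0.0])
map_iter = P_U(P_K(x0))                               # MAP step: (0.5, 0)
crm_iter = circumcenter(x0, R_K(x0), R_U(R_K(x0)))    # CRM step: (0, 0)
```

That CRM terminates finitely here is special to this cone; the bound of Theorem 3.5 is what survives in general.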

4 Two families of examples for which CRM is much faster than MAP

We will present now two rather generic families of examples for which CRM is faster than MAP. In the first one, MAP converges sublinearly while CRM converges linearly; in the second one, MAP converges linearly and CRM converges superlinearly.

In both families, we work in \(\mathbb {R}^{n+1}\). K will be the epigraph of a proper convex function \(f:\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\), and U the hyperplane \(\{(x,0)\mid x\in \mathbb {R}^n\}\subset \mathbb {R}^{n+1}\). From now on, we consider f to be continuously differentiable in \({{\,\mathrm{int}\,}}({{\,\mathrm{dom}\,}}(f))\), where \({{\,\mathrm{dom}\,}}(f):=\{x\in \mathbb {R}^n\mid f(x)<+\infty \}\) and \({{\,\mathrm{int}\,}}({{\,\mathrm{dom}\,}}(f))\) is its topological interior. Next, we make the following assumptions on f:

  1. A1.

    \(0\in {{\,\mathrm{int}\,}}({{\,\mathrm{dom}\,}}(f))\) is the unique minimizer of f.

  2. A2.

    \(\nabla f(0)=0\).

  3. A3.

    \(f(0)=0\).

We will show that, under these assumptions, MAP always converges sublinearly, while, under an additional hypothesis, CRM converges linearly.

Note that under hypotheses A1 to A3, \(0\in \mathbb {R}^{n}\) is the unique zero of f and hence \(K\cap U=\{(0,0)\}\subset \mathbb {R}^{n+1}\). In view of Propositions 2.4 and 2.7, the sequences generated by MAP and CRM, with arbitrary initial points in \(\mathbb {R}^{n+1}\) and U, respectively, both converge to (0, 0) and are Fejér monotone with respect to \(\{(0,0)\}\), so that, in view of A1, for large enough k the iterates of both sequences belong to \({{\,\mathrm{int}\,}}({{\,\mathrm{dom}\,}}(f))\times \mathbb {R}\). We now take any point \((x,0)\in U\), with \(x\ne 0\), and proceed to compute \(P_K(x,0)\). Since \((x,0)\notin K\) (because \(x\ne 0\) and \(K\cap U=\{(0,0)\}\)), \(P_K(x,0)\) must belong to the boundary of K, i.e., it must be of the form \((u,f(u))\), where u is determined by minimizing \(\left\Vert (x,0)-(u,f(u))\right\Vert ^2\), so that \(u-x +f(u)\nabla f(u) =0\), or equivalently

$$\begin{aligned} x=u+f(u)\nabla f(u). \end{aligned}$$
(4.1)

Note that, since \(x\ne 0\), (4.1) and A3 imply \(u\ne 0\). With the notation of Sect. 2 and bearing in mind that T and C are the MAP and CRM operators defined in (2.1), it is easy to check that

$$\begin{aligned} \begin{aligned}&P_K(x,0) =(u,f(u)),\text { and } \\&T(x,0)=P_U(P_K(x,0))= (u,0), \end{aligned} \end{aligned}$$
(4.2)

with u as in (4.1). Moreover,

$$\begin{aligned} \begin{aligned}&R_K(x,0)=(2u-x,2f(u)), \\&P_U(R_K(x,0))= (2u-x,0), \text { and } \\&R_U(R_K(x,0))=(2u-x,-2f(u)). \end{aligned} \end{aligned}$$

Next, we compute \(C(x,0)={{\,\mathrm{circ}\,}}((x,0),R_K(x,0), R_U(R_K(x,0)))\). Suppose that \(C(x,0)=(v,s)\). The conditions

$$\left\Vert (v,s)-(x,0)\right\Vert =\left\Vert (v,s)-R_K(x,0)\right\Vert =\left\Vert (v,s)-R_U(R_K(x,0))\right\Vert$$

give rise to two quadratic equations whose solution is

$$\begin{aligned} s=0,\qquad v=u-\left[ \frac{f(u)}{\left\Vert x-u\right\Vert }\right] ^2(x-u)=u -\frac{f(u)}{\left\Vert \nabla f(u)\right\Vert ^2}\nabla f(u), \end{aligned}$$
(4.3)

using (4.1) in the last equality.
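The computation leading to (4.3) can be verified numerically. A sketch with \(f(u)=u^2\) (which satisfies A1–A3; helper names are ours): build x from (4.1), form the three points defining \(C(x,0)\), and compare their circumcenter with the closed form in (4.3), which here equals \(u/2\):

```python
import numpy as np

def circumcenter(p, q, r):
    # 2x2 linear system from the equidistance conditions of Definition 2.5
    u_, v_ = q - p, r - p
    G = 2.0 * np.array([[u_ @ u_, u_ @ v_], [u_ @ v_, v_ @ v_]])
    a, b = np.linalg.solve(G, np.array([u_ @ u_, v_ @ v_]))
    return p + a * u_ + b * v_

f = lambda t: t * t
df = lambda t: 2 * t                       # gradient of f in dimension one

u = 0.3
x = u + f(u) * df(u)                       # (4.1): then P_K(x, 0) = (u, f(u))
p0 = np.array([x, 0.0])
RK = np.array([2 * u - x, 2 * f(u)])       # R_K(x, 0)
RURK = np.array([2 * u - x, -2 * f(u)])    # R_U(R_K(x, 0))

c = circumcenter(p0, RK, RURK)             # should equal (v, 0), v as in (4.3)
v_formula = u - f(u) * df(u) / df(u) ** 2  # = u/2 for f(u) = u^2
```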

We proceed to compute the quotients \(\left\Vert T(x,0)-0\right\Vert /\left\Vert (x,0)-0\right\Vert\), \(\left\Vert C(x,0)-0\right\Vert /\left\Vert (x,0)-0\right\Vert\). Since both the MAP and the CRM sequences converge to 0, these quotients are needed for determining their convergence rates. In view of (4.2) and (4.3), these quotients reduce to \(\left\Vert u\right\Vert /\left\Vert x\right\Vert , \left\Vert v\right\Vert /\left\Vert x\right\Vert\). We state the result of the computation of these quotients in the next proposition.

Proposition 4.1

Take \((x,0)\in U\) with \(x\ne 0\). Let \(T(x,0)=(u,0)\) and \(C(x,0)=(v,0)\). Then,

$$\begin{aligned} \frac{\left\Vert T(x,0)\right\Vert }{\left\Vert (x,0)\right\Vert }=\frac{\left\Vert u\right\Vert }{\left\Vert x\right\Vert }=\frac{1}{\left\Vert {\bar{u}}+\dfrac{f(u)}{\left\Vert u\right\Vert }\nabla f(u)\right\Vert } \end{aligned}$$
(4.4)

with \({\bar{u}}=u/\left\Vert u\right\Vert\),

$$\begin{aligned} \left[ \frac{\left\Vert C(x,0)\right\Vert }{\left\Vert T(x,0)\right\Vert }\right] ^2=\left[ \frac{\left\Vert v\right\Vert }{\left\Vert u\right\Vert }\right] ^2\le 1-\left[ \frac{f(u)}{\left\Vert u\right\Vert \,\left\Vert \nabla f(u)\right\Vert }\right] ^2, \end{aligned}$$
(4.5)

and

$$\begin{aligned} \left[ \frac{\left\Vert C(x,0)\right\Vert }{\left\Vert (x,0)\right\Vert }\right] ^2\le \left[ 1-\left( \frac{f(u)}{\left\Vert u\right\Vert \,\left\Vert \nabla f(u)\right\Vert }\right) ^2\right] \left[ \frac{\left\Vert u\right\Vert }{\left\Vert x\right\Vert }\right] ^2. \end{aligned}$$
(4.6)

Proof

In view of (4.1),

$$\begin{aligned} \frac{\left\Vert u\right\Vert }{\left\Vert x\right\Vert }=\frac{\left\Vert u\right\Vert }{\left\Vert u+f(u)\nabla f(u)\right\Vert } \end{aligned}$$

and (4.4) follows by dividing the numerator and the denominator by \(\left\Vert u\right\Vert\).

We proceed to establish (4.5). In view of (4.3), we have

$$\begin{aligned} \left\Vert v\right\Vert ^2&=\left\Vert u -\frac{f(u)}{\left\Vert \nabla f(u)\right\Vert ^2}\nabla f(u)\right\Vert ^2\nonumber \\&=\left\Vert u\right\Vert ^2+\left[ \frac{f(u)}{\left\Vert \nabla f(u)\right\Vert }\right] ^2- 2\frac{f(u)}{\left\Vert \nabla f(u)\right\Vert ^2}\langle \nabla f(u),u\rangle \nonumber \\&\le \left\Vert u\right\Vert ^2+\left[ \frac{f(u)}{\left\Vert \nabla f(u)\right\Vert }\right] ^2-2\left[ \frac{f(u)}{\left\Vert \nabla f(u)\right\Vert }\right] ^2\nonumber \\&= \left\Vert u\right\Vert ^2-\left[ \frac{f(u)}{\left\Vert \nabla f(u)\right\Vert }\right] ^2, \end{aligned}$$
(4.7)

using the gradient inequality \(\langle \nabla f(u),u\rangle \ge f(u)\), which holds because f is convex and \(f(0)=0\). Now, (4.5) follows by dividing (4.7) by \(\left\Vert u\right\Vert ^2\). Finally, (4.6) follows by multiplying (4.5) by \(\dfrac{\left\Vert u\right\Vert ^2}{\Vert x\Vert ^2}=\dfrac{\left\Vert T(x,0)\right\Vert ^2}{\Vert (x,0)\Vert ^2}\). \(\square\)
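As a concrete sanity check (not part of the paper's argument), the following sketch evaluates both sides of (4.5) for the hypothetical one-dimensional instance \(f(t)=t^2\), recovering u from x through relation (4.1) by bisection:

```python
# Numerical check of (4.5) for the illustrative scalar case f(t) = t^2
# (f(0) = 0 and f'(0) = 0, in the spirit of A1-A3).  By (4.1),
# x = u + f(u) f'(u); by (4.3), v = u - (f(u)/f'(u)^2) f'(u) = u - f(u)/f'(u).

def f(t):
    return t * t

def fp(t):
    return 2.0 * t

def project_u(x, tol=1e-14):
    """Solve u + f(u) f'(u) = x for u in (0, x] by bisection (the map is increasing)."""
    lo, hi = 0.0, x
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + f(mid) * fp(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 0.8
u = project_u(x)
v = u - f(u) / fp(u)              # scalar form of (4.3)
lhs = (v / u) ** 2                # [||C(x,0)|| / ||T(x,0)||]^2
rhs = 1.0 - (f(u) / (u * fp(u))) ** 2
print(lhs, rhs)
```

For this instance the two sides are exactly 1/4 and 3/4, so (4.5) holds with room to spare; the bound (4.6) then follows by scaling with \(\left\Vert u\right\Vert ^2/\left\Vert x\right\Vert ^2\).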

Next we compute the limits with \(x\rightarrow 0\) of the quotients in Proposition 4.1.

Proposition 4.2

Take \((x,0)\in U\) with \(x\ne 0\). Let \(T(x,0)=(u,0)\) and \(C(x,0)=(v,0)\). Then,

$$\begin{aligned} \limsup _{x\rightarrow 0}\frac{\left\Vert T(x,0)\right\Vert }{\left\Vert (x,0)\right\Vert }=\lim _{x\rightarrow 0}\frac{\left\Vert u\right\Vert }{\left\Vert x\right\Vert }=1 \end{aligned}$$
(4.8)

and

$$\begin{aligned} \limsup _{x\rightarrow 0}\left[ \frac{\left\Vert C(x,0)\right\Vert }{\left\Vert (x,0)\right\Vert }\right] ^2=\limsup _{x\rightarrow 0}\left[ \frac{\left\Vert v\right\Vert }{\left\Vert x\right\Vert }\right] ^2\le 1-\liminf _{x\rightarrow 0}\left[ \frac{f(x)}{\left\Vert x\right\Vert \,\left\Vert \nabla f(x)\right\Vert }\right] ^2. \end{aligned}$$
(4.9)

Proof

By convexity of f, using A3, \(f(y)\le \langle \nabla f(y),y\rangle \le \left\Vert \nabla f(y)\right\Vert \,\left\Vert y\right\Vert\) for all \(y\in {{\,\mathrm{int}\,}}({{\,\mathrm{dom}\,}}(f))\) sufficiently close to 0. Hence, for all nonzero \(y\in {{\,\mathrm{int}\,}}({{\,\mathrm{dom}\,}}(f))\), \(0< f(y)/\left\Vert y\right\Vert \le \left\Vert \nabla f(y)\right\Vert\), using A1 and A3. Since \(\lim _{y\rightarrow 0}\nabla f(y)=0\) by A1 and A2 and the convexity of f, it follows that

$$\begin{aligned} {} \lim _{y\rightarrow 0} f(y)/\left\Vert y\right\Vert =0. \end{aligned}$$
(4.10)

Now we take limits with \(x\rightarrow 0\) in (4.4). Since \((u,0)=P_K((x,0))\) and using the continuity of projections, \(\lim _{x\rightarrow 0}u=0\). Thus,

$$\begin{aligned} \limsup _{x\rightarrow 0}\frac{\left\Vert T(x,0)\right\Vert }{\left\Vert (x,0)\right\Vert }&= \lim _{x\rightarrow 0}\frac{\left\Vert T(x,0)\right\Vert }{\left\Vert (x,0)\right\Vert } =\lim _{x\rightarrow 0}\frac{1}{\left\Vert {\bar{u}}+\dfrac{f(u)}{\left\Vert u\right\Vert }\nabla f(u)\right\Vert } \\&=\lim _{u\rightarrow 0}\frac{1}{\left\Vert {\bar{u}}+\dfrac{f(u)}{\left\Vert u\right\Vert }\nabla f(u)\right\Vert }=\frac{1}{\left\Vert {\bar{u}}\right\Vert }=1, \end{aligned}$$

using (4.10) and the fact that \(\left\Vert {\bar{u}}\right\Vert = \left\Vert u/\left\Vert u\right\Vert \right\Vert = 1\). We have proved that (4.8) holds. Now we deal with (4.9). Taking \(x\rightarrow 0\) in (4.6), we have

$$\begin{aligned} \limsup _{x\rightarrow 0}\left[ \frac{\left\Vert C(x,0)\right\Vert }{\left\Vert (x,0)\right\Vert }\right] ^2\le \left[ 1-\liminf _{x\rightarrow 0}\left( \frac{f(u)}{\left\Vert u\right\Vert \,\left\Vert \nabla f(u)\right\Vert }\right) ^2\right] \limsup _{x\rightarrow 0}\left[ \frac{\left\Vert u\right\Vert }{\left\Vert x\right\Vert }\right] ^2. \end{aligned}$$
(4.11)

The second \(\limsup\) on the right-hand side of (4.11) equals \(\lim _{x\rightarrow 0}\left[ \frac{\left\Vert u\right\Vert }{\left\Vert x\right\Vert }\right] ^2\), which is 1 by (4.8). Since \(\lim _{x\rightarrow 0}u=0\), as observed above, (4.9) follows. \(\square\)

We proceed to establish the convergence rates of the sequences generated by MAP and CRM for this choice of K and U.

Corollary 4.3

Consider \(K, U\subset \mathbb {R}^{n+1}\) given by \(K= {{\,\mathrm{epi}\,}}(f)\), with \(f:\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) satisfying A1 to A3, and \(U:=\{(x,0)\mid x\in \mathbb {R}^n\}\subset \mathbb {R}^{n+1}\). Let \((z^k,0)_{k\in \mathbb {N}}\) and \((x^k,0)_{k\in \mathbb {N}}\) be the sequences generated by MAP and CRM, starting from \((z^0,0)\in \mathbb {R}^{n+1}\) and \((x^0,0) \in U\), respectively. Then,

$$\begin{aligned} \limsup _{k\rightarrow \infty }\frac{\left\Vert (z^{k+1},0)\right\Vert }{\left\Vert (z^k,0)\right\Vert }=1 \end{aligned}$$

and

$$\begin{aligned} \limsup _{k\rightarrow \infty }\frac{\left\Vert (x^{k+1},0)\right\Vert }{\left\Vert (x^k,0)\right\Vert }\le \sqrt{1-\gamma ^2}, \end{aligned}$$
(4.12)

with

$$\begin{aligned} \gamma :=\liminf _{x\rightarrow 0}\frac{f(x)}{\left\Vert x\right\Vert \,\left\Vert \nabla f(x)\right\Vert }. \end{aligned}$$
(4.13)

Proof

Since \(T(z^k,0)=(z^{k+1},0)\), \(C(x^k,0)=(x^{k+1},0)\), and \(\lim _{k\rightarrow \infty }x^k=\lim _{k\rightarrow \infty }z^k=0\), it suffices to apply Proposition 4.2 with \(x=z^k\) in (4.8) and \(x=x^k\) in (4.9).

\(\square\)

We now add an additional hypothesis on f.

A4.:

\(f\) satisfies \(\displaystyle \liminf _{x\rightarrow 0}\dfrac{f(x)}{\left\Vert x\right\Vert \,\left\Vert \nabla f(x)\right\Vert }>0.\)

Observe that, by the convexity of f and the Cauchy-Schwarz inequality, we have

$$\begin{aligned} 0\le f(x)\le \left\langle {\nabla f(x)},{x} \right\rangle \le \left\Vert \nabla f(x)\right\Vert \left\Vert x\right\Vert . \end{aligned}$$

Thus, A1 to A3 imply \(f(x)/(\left\Vert x\right\Vert \,\left\Vert \nabla f(x)\right\Vert )\in (0,1]\), for all \(x\ne 0\), so that A4 just excludes the case in which the \(\liminf\) above is equal to 0. Next we rephrase Corollary 4.3.

Corollary 4.4

Consider \(K, U\subset \mathbb {R}^{n+1}\) given by \(K= {{\,\mathrm{epi}\,}}(f)\), with \(f:\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) satisfying A1 to A3 and \(U:=\{(x,0)\mid x\in \mathbb {R}^n\}\subset \mathbb {R}^{n+1}\). Then the sequence generated by MAP from an arbitrary initial point converges sublinearly. If f also satisfies hypothesis A4, then the sequence generated by CRM from an initial point in U converges linearly, and its asymptotic constant is bounded above by \(\sqrt{1-\gamma ^2}<1\), with \(\gamma\) as in (4.13).

Proof

Immediate from Corollary 4.3 and hypothesis A4.

\(\square\)

Next we discuss several situations for which hypothesis A4 holds, showing that it is rather generic. The first case is as follows.

Proposition 4.5

Assume that f, besides satisfying A1 to A3, is of class \({{\mathcal {C}}}^2\) and \(\nabla ^2f(0)\) is nonsingular. Then, assumption A4 holds, and \(\gamma \ge \lambda _{\min }/(2\lambda _{\max })>0\), where \(\lambda _{\min }, \lambda _{\max }\) are the smallest and largest eigenvalues of \(\nabla ^2f(0)\), respectively.

Proof

In view of A2, A3 and the hypothesis on \(\nabla ^2f(0)\), we have

$$\begin{aligned} f(x)=\frac{1}{2}\left\langle {x},{\nabla ^2f(0)x} \right\rangle +o(\left\Vert x\right\Vert ^2)\ge \frac{\lambda _{\min }}{2}\left\Vert x\right\Vert ^2+o(\left\Vert x\right\Vert ^2). \end{aligned}$$
(4.14)

Also, using the Taylor expansion of \(\nabla f\) around \(x=0\), \(\nabla f(x)=\nabla ^{2}f(0)x + o(\left\Vert x\right\Vert )\), so that

$$\begin{aligned} \left\Vert x\right\Vert \,\left\Vert \nabla f(x)\right\Vert&=\left\Vert x\right\Vert \,\left\Vert \nabla ^2f(0)x\right\Vert +o(\left\Vert x\right\Vert ^2) \nonumber \\&\le \left\Vert x\right\Vert ^2\left\Vert \nabla ^2f(0)\right\Vert +o(\left\Vert x\right\Vert ^2)\nonumber \\&\le \lambda _{\max }\left\Vert x\right\Vert ^2 +o(\left\Vert x\right\Vert ^2). \end{aligned}$$
(4.15)

By (4.14), (4.15),

$$\begin{aligned} \frac{f(x)}{\left\Vert x\right\Vert \,\left\Vert \nabla f(x)\right\Vert }\ge \frac{\lambda _{\min }\left\Vert x\right\Vert ^2+o(\left\Vert x\right\Vert ^2)}{2\lambda _{\max }\left\Vert x\right\Vert ^2+ o(\left\Vert x\right\Vert ^2)} \end{aligned}$$
(4.16)

and the result follows by taking \(\liminf\) in (4.16), since the right-hand side converges to \(\dfrac{\lambda _{\min } }{2\lambda _{\max }}>0\) as \(x \rightarrow 0\). \(\square\)
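The bound of Proposition 4.5 can be probed numerically. The sketch below uses the illustrative quadratic \(f(x)=(x_1^2+100x_2^2)/2\), whose Hessian at the origin is \(\mathrm{diag}(1,100)\); for a quadratic the ratio in (4.16) is scale invariant, so sampling one small circle of directions suffices:

```python
# Sanity check of Proposition 4.5 for the illustrative quadratic
# f(x) = (x1^2 + 100 x2^2)/2, with lambda_min = 1, lambda_max = 100.
import math

def ratio(x1, x2):
    """f(x) / (||x|| ||grad f(x)||) for this quadratic."""
    fval = 0.5 * (x1 * x1 + 100.0 * x2 * x2)
    gnorm = math.hypot(x1, 100.0 * x2)   # grad f = (x1, 100 x2)
    xnorm = math.hypot(x1, x2)
    return fval / (xnorm * gnorm)

bound = 1.0 / (2.0 * 100.0)              # lambda_min / (2 lambda_max) = 0.005
worst = min(ratio(1e-3 * math.cos(a), 1e-3 * math.sin(a))
            for a in (k * math.pi / 500 for k in range(500)))
print(worst, bound)
```

For this instance the worst sampled ratio is about 0.099, comfortably above the guaranteed bound 0.005: the estimate \(\gamma \ge \lambda _{\min }/(2\lambda _{\max })\) is safe but not tight.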

Note that nonsingularity of \(\nabla ^2f(0)\) holds when f is of class \({{\mathcal {C}}}^2\) and strongly convex.

We consider next other instances for which assumption A4 holds. Now we deal with the case in which \(f(x)=\phi (\left\Vert x\right\Vert )\) with \(\phi :\mathbb {R}\rightarrow \mathbb {R}\cup \{+\infty \}\) satisfying A1 to A3. This case has a one-dimensional flavor, and the computations are easier. The first point to note is that

$$\begin{aligned} \liminf _{x\rightarrow 0}\frac{f(x)}{\left\Vert x\right\Vert \,\left\Vert \nabla f(x)\right\Vert }=\liminf _{t\rightarrow 0}\frac{\phi (t)}{t\phi '(t)}, \end{aligned}$$
(4.17)

so that assumption A4 becomes:

A4\(^\prime\):

\(\phi\) satisfies \(\displaystyle \liminf _{t\rightarrow 0}\dfrac{\phi (t)}{t\phi '(t)}>0\).

More importantly, in this case \(\nabla f(x)\) and x are collinear, which allows for an improvement in the asymptotic constant: we will have \(1-\gamma\) instead of \(\sqrt{1-\gamma ^2}\) in (4.12), as we show next. We reformulate Propositions 4.1 and 4.2 for this case.

Proposition 4.6

Assume that \(f(x)=\phi (\left\Vert x\right\Vert )\), with \(\phi :\mathbb {R}\rightarrow \mathbb {R}\cup \{+\infty \}\) satisfying A1 to A3 and A4\(^{\prime }\). Take \((x,0)\in U\) with \(x\ne 0\). Let \(C(x,0)=(v,0)\). Then,

  1. (i)
    $$\begin{aligned} \frac{\left\Vert C(x,0)\right\Vert }{\left\Vert T(x,0)\right\Vert }=\frac{\left\Vert v\right\Vert }{\left\Vert u\right\Vert }= 1-\frac{\phi (\left\Vert u\right\Vert )}{\phi '(\left\Vert u\right\Vert )\left\Vert u\right\Vert }, \end{aligned}$$
    (4.18)
  2. (ii)
    $$\begin{aligned} \limsup _{x\rightarrow 0}\frac{\left\Vert C(x,0)\right\Vert }{\left\Vert (x,0)\right\Vert }= 1-\liminf _{x\rightarrow 0}\frac{f(x)}{\left\Vert x\right\Vert \,\left\Vert \nabla f(x)\right\Vert }= 1-\liminf _{t\rightarrow 0}\frac{\phi (t)}{t\phi '(t)}. \end{aligned}$$
    (4.19)

Proof

In this case

$$\begin{aligned} \nabla f(x)=\frac{\phi '(\left\Vert x\right\Vert )}{\left\Vert x\right\Vert }x \end{aligned}$$

so that (4.1) becomes

$$\begin{aligned} x=\left( 1+\frac{\phi (\left\Vert u\right\Vert )\phi '(\left\Vert u\right\Vert )}{\left\Vert u\right\Vert }\right) u, \end{aligned}$$

and (4.3) can be rewritten as

$$\begin{aligned} v=\left( 1-\frac{\phi (\left\Vert u\right\Vert )}{\phi '(\left\Vert u\right\Vert )\left\Vert u\right\Vert }\right) u. \end{aligned}$$

Hence,

$$\begin{aligned} \frac{\left\Vert v\right\Vert }{\left\Vert u\right\Vert }=1-\frac{\phi (\left\Vert u\right\Vert )}{\phi '(\left\Vert u\right\Vert )\left\Vert u\right\Vert }, \end{aligned}$$

establishing (4.18). Then, (4.19) follows from (4.18) as in the proofs of Propositions 4.1 and 4.2, taking into account (4.17). \(\square\)

Corollary 4.7

Let \((x^k,0)_{k\in \mathbb {N}}\) be the sequence generated by CRM with \((x^0,0)\in U\). Assume that \(f(x)=\phi (\left\Vert x\right\Vert )\) with \(\phi :\mathbb {R}\rightarrow \mathbb {R}\cup \{+\infty \}\) satisfying A1 to A3. Then,

$$\begin{aligned}\limsup _{k\rightarrow \infty }\frac{ \left\Vert (x^{k+1},0)\right\Vert }{\left\Vert (x^k,0)\right\Vert }=1-{\hat{\gamma }},\end{aligned}$$

with

$$\begin{aligned} {\hat{\gamma }}{:=} \liminf _{t\rightarrow 0}\frac{\phi (t)}{t\phi '(t)}. \end{aligned}$$
(4.20)

If \(\phi\) satisfies hypothesis A4\(^{\prime }\), then the CRM sequence is Q-linearly convergent, with asymptotic constant equal to \(1-{\hat{\gamma }}\).

Proof

It is an immediate consequence of Proposition 4.6(ii), in view of the definition of the circumcenter operator C, given in (2.1). \(\square\)
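To illustrate Corollary 4.7 numerically, one can run CRM in its radial form: by the displays in the proof of Proposition 4.6, for a scalar iterate \(x>0\) we have \(x=r+\phi (r)\phi '(r)\) with \(r=\left\Vert u\right\Vert\), and the next CRM iterate is \(r-\phi (r)/\phi '(r)\). The sketch below takes the hypothetical choice \(\phi (t)=t^2\), for which \({\hat{\gamma }}=1/2\) by Proposition 4.8:

```python
# Radial CRM recursion for the illustrative choice phi(t) = t^2.
# Each step: solve r + phi(r) phi'(r) = x for r, then move to r - phi(r)/phi'(r).

def phi(t):
    return t * t

def dphi(t):
    return 2.0 * t

def solve_r(x, tol=1e-15):
    """Bisection for r + phi(r) phi'(r) = x on (0, x] (the map is increasing)."""
    lo, hi = 0.0, x
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + phi(mid) * dphi(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x, ratios = 1.0, []
for _ in range(15):
    r = solve_r(x)
    x_next = r - phi(r) / dphi(r)   # radial CRM step
    ratios.append(x_next / x)
    x = x_next
print(ratios[0], ratios[-1])
```

The successive ratios increase toward \(1-{\hat{\gamma }}=1/2\), matching the Q-linear rate of Corollary 4.7. Note that the radial CRM step \(r-\phi (r)/\phi '(r)\) is formally a Newton step for \(\phi\); since here the root of \(\phi\) at 0 is not simple (\(\phi '(0)=0\)), only linear convergence results, which hints at the superlinear behavior obtained later when the root is simple.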

We verify next that assumption A4\(^\prime\) is rather generic. It holds, e.g., if \(\phi\) is analytic around 0.

Proposition 4.8

If \(\phi\) satisfies A1 to A3 and is analytic around 0 then it satisfies A4\(^{\prime }\), and \({\hat{\gamma }}= 1/p\), where \(p :=\min \{j\mid \phi ^{(j)}(0)\ne 0\}\).

Proof

In this case

$$\phi (t)=(1/p!)\phi ^{(p)}(0)t^p +O(t^{p+1})$$

and

$$t\phi '(t)=(1/(p-1)!)\phi ^{(p)}(0)t^p + O(t^{p+1}),$$

and the result follows by taking limits with \(t\rightarrow 0\), taking into account (4.20). \(\square\)

Note that for an analytic \(\phi\) the asymptotic constant is always of the form \(1-1/p\) with \(p\in \mathbb {N}\). This is not the case in general. Take, e.g., \(\phi (t)=\left|t\right|^{\alpha }\) with \(\alpha \in \mathbb {R}\), \(\alpha > 1\). Then a simple computation shows that \({\hat{\gamma }}=1/\alpha\). Note that, for noninteger \(\alpha\), \(\phi\) is of class \({{\mathcal {C}}}^p\), where p is the integer part of \(\alpha\), but not of class \({{\mathcal {C}}}^{p+1}\), so that Proposition 4.8 does not apply.

Take now

$$\begin{aligned} f(x):={\left\{ \begin{array}{ll} 1-\sqrt{1-\left\Vert x\right\Vert ^2},\quad \,\,\, \mathrm{if}\,\, \left\Vert x\right\Vert \le 1,\\ +\infty ,\qquad \qquad \qquad \;\;\; \mathrm{otherwise}, \end{array}\right. } \end{aligned}$$

i.e., \(f(x)=\phi (\left\Vert x\right\Vert )\) with \(\phi (t)=1-\sqrt{1-t^2}\) when \(t\in [-1,1]\), \(\phi (t)=+\infty\) otherwise. Note that f satisfies A1 to A3 and its effective domain is the unit ball in \(\mathbb {R}^n\). Since \(\phi\) is analytic around 0 and \(\phi ''(0)\ne 0\), we get from Proposition 4.8 that \({\hat{\gamma }}=1/2\), and so the asymptotic constant of the CRM sequence is also 1/2. Note that the graph of f is the lower hemisphere of the sphere bounding the ball \(B\subset \mathbb {R}^{n+1}\) centered at (0, 1) with radius 1. Observe also that the projection onto B of a point of the form \((x,0)\in \mathbb {R}^{n+1}\) is of the form \((u,t)\) with \(t<1\), so it belongs to \({{\,\mathrm{epi}\,}}(f)\). Hence, the sequences generated by CRM for the pair K, U coincide for \(K={{\,\mathrm{epi}\,}}(f)\) and for \(K=B\). It follows easily that the sequence generated by CRM for a pair K, U, where K is any ball and U is a hyperplane tangent to the ball, converges linearly, with asymptotic constant equal to 1/2. We remark that in all these cases the sequence generated by MAP converges sublinearly, by virtue of Corollary 4.4.
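The tangent ball case can also be checked with a hypothetical two-dimensional implementation of CRM itself: reflect in K, reflect in U, and take the circumcenter of the three points, the latter obtained from a two-by-two linear system, cf. [7, 11]. Here K is the unit ball centered at (0, 1) and U is the horizontal axis:

```python
# One CRM step for K = closed ball of radius 1 centered at (0, 1),
# U = horizontal axis in R^2 (an illustrative implementation).
import math

CENTER = (0.0, 1.0)

def proj_ball(p):
    dx, dy = p[0] - CENTER[0], p[1] - CENTER[1]
    d = math.hypot(dx, dy)
    if d <= 1.0:
        return p
    return (CENTER[0] + dx / d, CENTER[1] + dy / d)

def reflect_ball(p):
    q = proj_ball(p)
    return (2 * q[0] - p[0], 2 * q[1] - p[1])

def reflect_U(p):
    return (p[0], -p[1])       # reflection in the horizontal axis

def circumcenter(p0, p1, p2):
    # Solve <c, p1-p0> = (|p1|^2-|p0|^2)/2 and <c, p2-p0> = (|p2|^2-|p0|^2)/2.
    a11, a12 = p1[0] - p0[0], p1[1] - p0[1]
    a21, a22 = p2[0] - p0[0], p2[1] - p0[1]
    b1 = (p1[0]**2 + p1[1]**2 - p0[0]**2 - p0[1]**2) / 2
    b2 = (p2[0]**2 + p2[1]**2 - p0[0]**2 - p0[1]**2) / 2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

def crm_step(p):
    r1 = reflect_ball(p)
    r2 = reflect_U(r1)
    return circumcenter(p, r1, r2)

first = crm_step((1.0, 0.0))   # equals (sqrt(2)-1, 0)
x, ratios = (1.0, 0.0), []
for _ in range(12):
    nxt = crm_step(x)
    ratios.append(math.hypot(*nxt) / math.hypot(*x))
    x = nxt
print(first, ratios[-1])
```

Starting from (1, 0), the first CRM iterate is \((\sqrt{2}-1,0)\), and the ratios between successive distances to the solution approach the asymptotic constant 1/2, in agreement with the discussion above.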

We look now at a case where hypothesis A4\(^{\prime }\) fails. Define

$$\begin{aligned} f(x):={\left\{ \begin{array}{ll} e^{-\left\Vert x\right\Vert ^{-2}},\quad \,\, \mathrm{if}\,\, \left\Vert x\right\Vert \le \frac{1}{\sqrt{3}},\\ +\infty ,\qquad \quad \,\,\, \mathrm{otherwise}. \end{array}\right. } \end{aligned}$$

so that \(f(x)=\phi (\left\Vert x\right\Vert )\) with \(\phi (t)=e^{-1/t^2}\) when \(t\in (-3^{-1/2}, 3^{-1/2})\), \(\phi (t)=+\infty\) otherwise. Again f satisfies A1 to A3. It is easy to check that \(\phi (t)/(t\phi '(t))=(1/2)t^2\), so that \(\lim _{t\rightarrow 0}\phi (t)/(t\phi '(t))=0\) and A4\(^{\prime }\) fails. It is known that this \(\phi\), which is of class \({{\mathcal {C}}}^\infty\) but not analytic, is extremely flat at 0 (in fact, \(\phi ^{(k)}(0)=0\) for all k), and not even CRM can overcome so much flatness; in view of Corollary 4.7, in this case it converges sublinearly, as MAP does. The examples above are also presented as a case study in [6], illustrating the slow convergence of the proximal point algorithm, the Douglas-Rachford algorithm and alternating projections.
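The flatness is easy to visualize with a simplified scalar sketch: here \(\phi (r)/\phi '(r)=r^3/2\), and the term \(\phi (r)\phi '(r)\) in the radial relation for the projection, being of order \(e^{-2/r^2}\), is negligible for small r, so the radial CRM step reduces approximately to \(x_{k+1}=x_k-x_k^3/2\):

```python
# Approximate radial CRM recursion for phi(t) = exp(-1/t^2):
# phi(r)/phi'(r) = r^3/2, and r is essentially x since phi(r) phi'(r)
# is of order exp(-2/r^2).  The ratios creep up to 1: sublinear convergence.
x = 0.5
ratios = []
for _ in range(1000):
    x_next = x - x**3 / 2.0
    ratios.append(x_next / x)
    x = x_next
print(ratios[0], ratios[-1])
```

The ratios \(x_{k+1}/x_k\) increase toward 1, the signature of sublinear convergence.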

Let us abandon such an appalling situation and move on to other examples where CRM will again exhibit its superiority; next, we deal with our second family of examples. Here we keep the framework of the first family with just one change, namely in hypothesis A3 on f: we now require that \(f(0)<0\). With this single trick (and a couple of additional technical assumptions), we will achieve linear convergence of the MAP sequence and superlinear convergence of the CRM one. We will also assume that the effective domain of f is the whole space (unlike in the previous family, we have no interesting examples with smaller effective domains here; moreover, since the limits of the sequences can now be anywhere, a hypothesis on the effective domain becomes rather cumbersome). We will also demand that f be of class \({{\mathcal {C}}}^2\).

Finally, we restrict ourselves to the case of \(f(x)=\phi (\left\Vert x\right\Vert )\), with \(\phi :\mathbb {R}\rightarrow \mathbb {R}\cup \{+\infty \}\). This assumption is not essential, but it considerably simplifies our analysis. Thus, we rewrite the assumptions on \(\phi\) in this new context. We assume that the function \(\phi\) is proper, strictly convex and twice continuously differentiable, satisfying

A2\(^{\prime }\).:

\(\phi '(0)=0\).

A3\(^{\prime }\).:

\(\phi (0)<0\).

In the remainder of the paper we will study the behavior of the MAP and CRM sequences for the pair \(K,U\subset \mathbb {R}^{n+1}\), where K is the epigraph of \(f(x)=\phi (\left\Vert x\right\Vert )\), with \(\phi\) satisfying hypotheses A2\(^{\prime }\) and A3\(^{\prime }\) above, and \(U:=\{(x,0)\mid x\in \mathbb {R}^n\}\subset \mathbb {R}^{n+1}\). As in the previous case, Propositions 2.4 and 2.7 ensure that both sequences converge to points in \(K\cap U\). Since we are dealing with convergence rates, we exclude the case in which the sequences of interest converge in finitely many steps. We continue with an elementary property of the limit of these sequences.

Proposition 4.9

Assume that K, U are as above. Let \((x^*,0)\) be the limit of either the MAP or the CRM sequence and \(t^*:=\left\Vert x^*\right\Vert\). Then, \(\phi (t^*)=0\) and \(\phi '(t^*)>0\).

Proof

Since these sequences stay in U, remain outside K (otherwise convergence would be finite), and converge to points in \(K\cap U\), it follows that their limits must belong to \({{\,\mathrm{bd}\,}}(K)\cap U\), where \({{\,\mathrm{bd}\,}}(K):=\{(x,f(x))\mid x\in \mathbb {R}^n\}\) denotes the boundary of K. So, we conclude that \(0=f(x^*)=\phi (t^*)\). Now, since \(\phi '(0)=0\), in view of A2\(^{\prime }\), and \(\phi '\) is strictly increasing, we conclude that \(\phi '(t)>0\) for all \(t>0\). Note that \(x^*\ne 0\), because \(f(x^*)=0\) and \(f(0)<0\) by A3\(^{\prime }\). Hence \(t^*=\left\Vert x^*\right\Vert >0\), so that \(\phi '(t^*)>0\). \(\square\)

Now we analyze the behavior of the operators C and T, in this case.

Proposition 4.10

Assume that \(K,U\subset \mathbb {R}^{n+1}\) are defined as

$$U:=\{(x,0)\mid x\in \mathbb {R}^n\}\subset \mathbb {R}^{n+1}$$

and

$$K= {{\,\mathrm{epi}\,}}(f)$$

where

$$f(x)=\phi (\left\Vert x\right\Vert )$$

and \(\phi\) satisfies A2\(^{\prime }\) and A3\(^{\prime }\). Let T and C be the operators associated with MAP and CRM, respectively, and \((z^*,0)\) and \((x^*,0)\) the limits of the sequences \((z^k,0)_{k\in \mathbb {N}}\) and \((x^k,0)_{k\in \mathbb {N}}\) generated by these methods, starting from some \((z^0,0)\in \mathbb {R}^{n+1}\) and some \((x^0,0)\in U\), respectively. Then,

$$\begin{aligned} \limsup _{x\rightarrow z^*}\frac{\left\Vert T(x,0)-(z^*,0)\right\Vert }{\left\Vert (x,0)-(z^*,0)\right\Vert }= \frac{1}{1+\phi '(\left\Vert z^*\right\Vert )^2} \end{aligned}$$
(4.21)

and

$$\begin{aligned} \limsup _{x\rightarrow z^*}\frac{\left\Vert C(x,0)-(x^*,0)\right\Vert }{\left\Vert (x,0)-(x^*,0)\right\Vert }=0. \end{aligned}$$
(4.22)

Proof

Since, in this case, \(\nabla f(x)=\dfrac{\phi '(\left\Vert x\right\Vert )}{\left\Vert x\right\Vert }x\) for all \(x\ne 0\), we rewrite (4.1) and (4.3) as

$$\begin{aligned} x=\left( 1+\frac{\phi (\left\Vert u\right\Vert )\phi '(\left\Vert u\right\Vert )}{\left\Vert u\right\Vert }\right) u \end{aligned}$$
(4.23)

and

$$\begin{aligned} v=\left( 1-\frac{\phi (\left\Vert u\right\Vert )}{\phi '(\left\Vert u\right\Vert )\left\Vert u\right\Vert }\right) u. \end{aligned}$$
(4.24)

In view of (4.23) and (4.24), u, v and x are collinear. In terms of the operators C and T, we have that x, C(x) and T(x) are collinear, so the same holds for the whole sequences generated by MAP and CRM, and hence also for their limits \((z^*,0), (x^*,0)\). This is a consequence of the one-dimensional flavor of this family of examples. So, we define \(s:=\left\Vert z^*\right\Vert\), \(t:=\left\Vert x^*\right\Vert\), \(r:=\left\Vert u\right\Vert\), so that \(u=(r/s)z^*\) in the MAP case and \(u=(r/t)x^*\) in the CRM case. We compute next the quotients

$$\begin{aligned} \frac{\left\Vert T(x,0)-(z^*,0)\right\Vert }{\left\Vert (x,0)-(z^*,0)\right\Vert } = \frac{\left\Vert u-z^*\right\Vert }{\left\Vert x-z^*\right\Vert } \end{aligned}$$

and

$$\begin{aligned} \frac{\left\Vert C(x,0)-(x^*,0)\right\Vert }{\left\Vert (x,0)-(x^*,0)\right\Vert } = \frac{\left\Vert v-x^*\right\Vert }{\left\Vert x-x^*\right\Vert }, \end{aligned}$$

needed for determining the convergence rate of the MAP and CRM sequences. We start with the MAP case.

$$\begin{aligned} \frac{\left\Vert T(x,0)-(z^*,0)\right\Vert }{\left\Vert (x,0)-(z^*,0)\right\Vert }&= \frac{\left\Vert u-z^*\right\Vert }{\left\Vert x-z^*\right\Vert }= \frac{s\left|\frac{r}{s}-1\right|}{s\left|\frac{r}{s}-1+\frac{\phi (r)\phi '(r)}{s}\right|} \nonumber \\&=\frac{\left|r-s\right|}{\left|r-s+\phi (r)\phi '(r)\right|}\nonumber \\&=\frac{1}{\left|1+\phi '(r)\left( \frac{\phi (r)-\phi (s)}{r-s}\right) \right|}, \end{aligned}$$
(4.25)

using (4.23) in the second equality and the fact that \(\phi (s)=\phi (\left\Vert z^*\right\Vert )=f(z^*)=0\), established in Proposition 4.9, in the fourth one.

Now, we perform a similar computation for the operator C, needed for the CRM sequence.

$$\begin{aligned} \frac{\left\Vert C(x,0)-(x^*,0)\right\Vert }{\left\Vert (x,0)-(x^*,0)\right\Vert }&= \frac{\left\Vert v-x^*\right\Vert }{\left\Vert x-x^*\right\Vert }= \frac{t\left|\left( 1-\frac{\phi (r)}{\phi '(r)r}\right) \frac{r}{t}-1\right|}{t\left|\frac{r}{t}-1+\frac{\phi (r)\phi '(r)}{t}\right|} \nonumber \\&=\frac{\left|\left( 1-\frac{\phi (r)}{r\phi '(r)}\right) r-t\right|}{\left|r-t+\phi (r)\phi '(r)\right|}= \frac{\left|r-t-\frac{\phi (r)}{\phi '(r)}\right|}{\left|r-t+\phi (r)\phi '(r)\right|} \nonumber \\&=\frac{\left|1-\frac{1}{\phi '(r)}\left( \frac{\phi (r)-\phi (t)}{r-t}\right) \right|}{\left|1+ \phi '(r)\left( \frac{\phi (r)-\phi (t)}{r-t}\right) \right|}, \end{aligned}$$
(4.26)

using (4.24) in the second equality, and Proposition 4.9, which implies \(\phi (t)=0\), in the fifth one.

Finally, we take limits in (4.25) with \(x\rightarrow z^*\) and in (4.26) with \(x\rightarrow x^*\). Note that, since \((u,f(u))=P_K((x,0))\) and projections are continuous, \(\lim _{x\rightarrow z^*}u=z^*\) and \(\lim _{x\rightarrow x^*}u=x^*\), because \((z^*,0)\) and \((x^*,0)\) belong to K with \(f(z^*)=f(x^*)=0\). Hence, we may take limits with \(r\rightarrow s\) in the right-hand side of (4.25) and with \(r\rightarrow t\) in the right-hand side of (4.26), obtaining

$$\begin{aligned} \limsup _{x\rightarrow z^*}\frac{\left\Vert T(x,0)-(z^*,0)\right\Vert }{\left\Vert (x,0)-(z^*,0)\right\Vert }&= \limsup _{r\rightarrow s}\frac{1}{\left|1+\phi '(r)\left( \frac{\phi (r)-\phi (s)}{r-s}\right) \right|}\nonumber \\&= \frac{1}{\left|1+\displaystyle \lim _{r\rightarrow s}\phi '(r)\left( \frac{\phi (r)-\phi (s)}{r-s}\right) \right|} \nonumber \\&= \frac{1}{1+\phi '(s)^2} \end{aligned}$$
(4.27)

and

$$\begin{aligned} \limsup _{x\rightarrow x^*}\frac{\left\Vert C(x,0)-(x^*,0)\right\Vert }{\left\Vert (x,0)-(x^*,0)\right\Vert }&= \limsup _{r\rightarrow t}\frac{\left|1-\frac{1}{\phi '(r)}\left( \frac{\phi (r)- \phi (t)}{r-t}\right) \right|}{\left|1+\phi '(r)\left( \frac{\phi (r)-\phi (t)}{r-t}\right) \right|}\nonumber \\&= \frac{\left|1-\displaystyle \lim _{r\rightarrow t}\frac{1}{\phi '(r)}\left( \frac{\phi (r)-\phi (t)}{r-t}\right) \right|}{\left|1+\displaystyle \lim _{r\rightarrow t}\phi '(r)\left( \frac{\phi (r)-\phi (t)}{r-t}\right) \right|}\nonumber \\&=\frac{\left|1-\frac{\phi '(t)}{\phi '(t)}\right|}{\left|1+\phi '(t)^2\right|}=0. \end{aligned}$$
(4.28)

The results follow, in view of the definitions of s and t, from (4.27) and (4.28), respectively. \(\square\)

Note that the denominators in (4.27) and (4.28) have the same form; the difference lies in the numerators: in the MAP case the numerator is 1, while in the CRM case the presence of the difference quotient \((\phi (r)-\phi (t))/(r-t)\) makes the numerator go to 0 when r tends to t.

Corollary 4.11

Under the assumptions of Proposition 4.10 the sequence generated by MAP converges Q-linearly to a point \((z^*,0)\in K\cap U\), with asymptotic constant equal to \(1/(1+\phi '(\left\Vert z^*\right\Vert )^2)\), and the sequence generated by CRM converges superlinearly.

Proof

The result for the MAP sequence follows from (4.21) in Proposition 4.10, observing that for \(x=z^k\), we have \(T(x,0)=(z^{k+1}, 0)\). Note that the asymptotic constant is indeed smaller than 1, because \(z^*\ne 0\), and \(\phi '(\left\Vert z^*\right\Vert )\ne 0\) by Proposition 4.9. The result for the CRM sequence follows from (4.22) in Proposition 4.10, observing that for \(x=x^k\), we have \(C(x,0)=(x^{k+1},0)\). \(\square\)
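Both rates of Corollary 4.11 can be observed in a scalar sketch based on (4.23) and (4.24): for positive scalars, the MAP step maps x to the solution r of \(r+\phi (r)\phi '(r)=x\), while the CRM step maps x to \(r-\phi (r)/\phi '(r)\). We take the hypothetical instance \(\phi (t)=t^2-\beta\) with \(\beta =0.06\), so that \(t^*=\sqrt{\beta }\) and the predicted MAP constant is \(1/(1+\phi '(t^*)^2)=1/1.24\approx 0.806\):

```python
# Scalar MAP vs CRM rates for the illustrative instance phi(t) = t^2 - beta.
import math

BETA = 0.06
t_star = math.sqrt(BETA)                 # phi(t*) = 0

def phi(t):
    return t * t - BETA

def dphi(t):
    return 2.0 * t

def solve_r(x, tol=1e-14):
    """Bisection for r + phi(r) phi'(r) = x on [0, x] (the map is increasing here)."""
    lo, hi = 0.0, x
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + phi(mid) * dphi(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# MAP: iterate long enough to reach the asymptotic regime, then measure one ratio.
z = 1.0
for _ in range(50):
    z = solve_r(z)
map_rate = (solve_r(z) - t_star) / (z - t_star)

# CRM: the ratios themselves shrink, revealing superlinear convergence.
x, crm_ratios = 1.0, []
for _ in range(4):
    r = solve_r(x)
    x_next = r - phi(r) / dphi(r)        # CRM step, cf. (4.24)
    crm_ratios.append((x_next - t_star) / (x - t_star))
    x = x_next
print(map_rate, crm_ratios)
```

The empirical MAP ratio settles at the predicted constant \(1/1.24\), while the CRM ratios collapse toward 0, exhibiting the superlinearity of Corollary 4.11.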

We now present an example that, although very simple, enables one to visualize how fast CRM is in comparison to MAP.

Example 4.12

Let \(\phi :\mathbb {R}\rightarrow \mathbb {R}\) be given by \(\phi (t) = \left|t\right|^\alpha - \beta\), where \(\alpha >1\) and \(\beta \ge 0\). Consider \(K, U\subset \mathbb {R}^2\) with \(K :={{\,\mathrm{epi}\,}}(\phi )\) and U the abscissa axis. Note that, if \(\beta = 0\), the error bound condition EB between K and U does not hold. For any \(\beta >0\), though, EB is easily verified. Figure 1 shows CRM and MAP tracking a point in \(K\cap U\) up to a precision \(\varepsilon > 0\), with the same starting point \((1.1, 0)\in \mathbb {R}^2\). We fix \(\alpha =2\) and take \(\beta = 0\) in Fig. 1a and \(\beta = 0.06\) in Fig. 1b. We count and display the iterations of the MAP sequence \((z^k)_{k\in \mathbb {N}}\) and the CRM sequence \((x^k)_{k\in \mathbb {N}}\) until \({{\,\mathrm{dist}\,}}(z^k,K\cap U) \le \varepsilon\) and \({{\,\mathrm{dist}\,}}(x^k,K\cap U) \le \varepsilon\), with \(\varepsilon = 10^{-3}\). The figures illustrate the behavior of MAP and CRM established in Corollaries 4.4 and 4.11.

Fig. 1 Illustrative comparison between MAP and CRM
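The flavor of Example 4.12 can be reproduced in the same scalar radial model (the iteration counts below come from this sketch under the stated assumptions, not from the paper's figure). With \(\beta =0.06\), starting at 1.1 and stopping when the distance to \(K\cap U\) drops below \(\varepsilon =10^{-3}\):

```python
# Iteration counts for scalar MAP vs CRM, phi(t) = |t|^2 - beta, beta = 0.06.
import math

BETA, EPS = 0.06, 1e-3
t_star = math.sqrt(BETA)                 # K inter U is the ball of radius t*

def phi(t):
    return t * t - BETA

def dphi(t):
    return 2.0 * t

def solve_r(x, tol=1e-14):
    """Bisection for r + phi(r) phi'(r) = x on [0, x]."""
    lo, hi = 0.0, x
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + phi(mid) * dphi(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def count(step):
    """Iterations of `step` from 1.1 until dist to K inter U is below EPS."""
    x, k = 1.1, 0
    while x - t_star > EPS:
        x = step(x)
        k += 1
    return k

map_iters = count(solve_r)
crm_iters = count(lambda x: (lambda r: r - phi(r) / dphi(r))(solve_r(x)))
print(map_iters, crm_iters)
```

In this model CRM reaches the tolerance in a handful of iterations, while MAP needs a few dozen, consistent with the comparison displayed in Fig. 1b.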

We emphasize that in the cases above MAP exhibits its usual behavior, i.e., linear convergence. The examples of the first family were somewhat special because, roughly speaking, the angle between K and U goes to 0 near the intersection. On the other hand, the superlinear convergence of CRM is quite remarkable. The additional computations of CRM over MAP reduce to the trivial determination of the reflections and the solution of a system of two linear equations in two variables for finding the circumcenter [7, 11]. MAP, in turn, is a typical first-order method (projections disregard the curvature of the sets), and thus its convergence is generically no better than linear. We have shown that the CRM acceleration improves this linear convergence to superlinear in a rather large class of instances. Long live CRM!

We conjecture that CRM enjoys superlinear convergence whenever U intersects the interior of K. The results in this section firmly support this conjecture.