1 Introduction

In this paper, we study the matrix equation

$$\begin{aligned} X+A^{*}X^{-1}A+B^{*}X^{-1}B=Q, \end{aligned}$$
(1)

where \(A\), \(B\) are square matrices and \(Q\) is a positive definite matrix. Here, \(A^{*}\) denotes the conjugate transpose of the matrix \(A\). The matrix Eq. (1) can be reduced to

$$\begin{aligned} Y+C^{*}Y^{-1}C+D^{*}Y^{-1}D=I, \end{aligned}$$
(2)

where \(I\) is the identity matrix. Moreover, Eq. (1) is solvable if and only if Eq. (2) is solvable. Equations (2) and (1) were first considered by Long et al. (2008) and Vaezzadeh et al. (2013), respectively. Equations (1) and (2) also appear as particular cases of the equations studied in El-Sayed and Ran (2001), Ran and Reurings (2002), He and Long (2010), Duan et al. (2011) and Liu and Chen (2011). El-Sayed and Ran (2001) and Ran and Reurings (2002) investigated the equation \(X+A^{*}\mathcal {F}(X)A=Q\). He and Long (2010) and Duan et al. (2011) investigated the equation \(X+\sum _{i=1}^m A_i^{*}X^{-1}A_i = I\). Liu and Chen (2011) studied the equation \(X^s + A^{*}X^{-t_1}A + B^{*}X^{-t_2}B=Q\). Berzig et al. (2012) considered the equation \(X = Q - A^{*}X^{-1}A + B^{*}X^{-1}B\). Zhou et al. (2013) and Li et al. (2014) investigated the equation \(X+A^{*}\overline{X}^{-1}A=Q\).

Specifically, if \(B=0\), the Eq. (1) reduces to

$$\begin{aligned} X+A^{*}X^{-1}A=Q, \end{aligned}$$
(3)

which has many applications and has been studied recently by several authors (Anderson et al. 1990; Engwerda 1993; Zhan and Xie 1996; Zhan 1996; Guo and Lancaster 1999; Xu 2001; Meini 2002; Sun and Xu 2003; Hasanov and Ivanov 2006; Hasanov 2010).

In this paper, we write \(A>0\) \((A\ge 0)\) if \(A\) is a Hermitian positive definite (semidefinite) matrix. For Hermitian matrices \(A\) and \(B\), we write \(A>B\) \((A\ge B)\) if \(A-B>0\) \((A-B\ge 0)\). Positive definite solutions \(X_{\mathrm{S}}\) and \(X_{\mathrm{L}}\) of a matrix equation are called minimal and maximal, respectively, if \(X_{\mathrm{S}}\le X\le X_{\mathrm{L}}\) for any positive definite solution \(X\) of the equation.

Long et al. (2008) presented some conditions for the existence of a positive definite solution of (2). They proposed two iterative methods for computing the maximal positive definite solution of (2): the basic fixed point iteration (BFPI) and an inversion-free variant of the BFPI. Vaezzadeh et al. (2013) studied Eq. (1) and considered inversion-free iterative methods, giving a partial generalization of the convergence theorems of Guo and Lancaster (1999). Popchev et al. (2011, 2012) performed a perturbation analysis of (1).

Motivated by the work in Long et al. (2008), Vaezzadeh et al. (2013) and Popchev et al. (2011, 2012), we continue the study of the fixed point iteration and the inversion-free variants of the BFPI for solving (1). In Sect. 2, we give the convergence rate of the BFPI. In Sect. 3, we improve the convergence theorems of two inversion-free iterative methods proved by Vaezzadeh et al. (2013). With these methods we obtain the maximal positive definite solution of (1). Some numerical examples illustrating the convergence behaviour of the algorithms are presented in Sect. 4.

Throughout this paper, we denote by \(\Vert A\Vert \) and \(\rho (A)\) the spectral norm and the spectral radius of a square matrix \(A\), respectively.

2 Basic fixed point iteration

Long et al. (2008) investigated Eq. (2). They proposed several iterative algorithms and obtained conditions for the existence of positive definite solutions of (2).

We consider the BFPI:

Algorithm 2.1

(Basic fixed point iteration) Let \(X_0 = Q\). For \(n=0,1,\ldots ,\) compute

$$\begin{aligned} X_{n+1}=Q - A^{*}X_n^{-1}A- B^{*}X_n^{-1}B. \end{aligned}$$
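For illustration, a minimal numpy sketch of Algorithm 2.1 follows; the function name `bfpi`, the iteration cap `max_iter` and the tolerance are our own illustrative choices (the stopping test anticipates the criterion used in Sect. 4).

```python
import numpy as np

def bfpi(A, B, Q, tol=1e-10, max_iter=1000):
    """Basic fixed point iteration (Algorithm 2.1): a minimal sketch.

    Iterates X_{n+1} = Q - A* X_n^{-1} A - B* X_n^{-1} B from X_0 = Q.
    If Eq. (1) is solvable, the iterates decrease monotonically to the
    maximal positive definite solution X_L.
    """
    X = Q.copy()
    for _ in range(max_iter):
        Xinv = np.linalg.inv(X)
        X_new = Q - A.conj().T @ Xinv @ A - B.conj().T @ Xinv @ B
        # stop when ||X_{n+1} - X_n||_inf falls below the tolerance
        if np.linalg.norm(X_new - X, np.inf) <= tol:
            return X_new
        X = X_new
    return X
```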

Long et al. (2008) proved that, if Eq. (1) with \(Q=I\) has a positive definite solution, then Algorithm 2.1 defines a monotonically decreasing matrix sequence that converges to a positive definite solution of (1). The convergence rate, however, was not considered in Long et al. (2008). It is easy to prove by induction that, if Eq. (1) has a positive definite solution, then for a general positive definite matrix \(Q\) Algorithm 2.1 defines a monotonically decreasing matrix sequence that converges to the maximal positive definite solution \(X_{\mathrm{L}}\) of (1), i.e.

$$\begin{aligned} X_0=Q \ge X_n\ge X_{n+1}\ge X_{\mathrm{L}}, \quad n=1,2,\ldots , \quad \lim _{n\rightarrow \infty }X_n = X_{\mathrm{L}}. \end{aligned}$$
(4)

We now establish the following result.

Theorem 2.1

If Eq. (1) has a positive definite solution, then for Algorithm 2.1 we have

$$\begin{aligned} \Vert X_{n+1}-X_{\mathrm{L}}\Vert \le \left( \left\| X_{\mathrm{L}}^{-1}A\right\| ^2 +\left\| X_{\mathrm{L}}^{-1}B\right\| ^2\right) \,\Vert X_n-X_{\mathrm{L}}\Vert , \end{aligned}$$

for all \(n\ge 0\).

Proof

The proof is similar to that of Theorem 2.3 in Guo and Lancaster (1999). Since \(X_{n+1}= Q - A^{*}X_n^{-1}A- B^{*}X_n^{-1}B\) and \(X_{\mathrm{L}} = Q - A^{*}X_{\mathrm{L}}^{-1}A- B^{*}X_{\mathrm{L}}^{-1}B\), we have

$$\begin{aligned} X_{n+1}-X_{\mathrm{L}}&= A^{*}\left( X_{\mathrm{L}}^{-1} - X_n^{-1}\right) A+B^{*}\left( X_{\mathrm{L}}^{-1} - X_n^{-1}\right) B\\&= A^{*}\left( X_{\mathrm{L}}^{-1} + X_n^{-1} -X_{\mathrm{L}}^{-1}\right) (X_n-X_{\mathrm{L}})X_{\mathrm{L}}^{-1} A\\&+B^{*}\left( X_{\mathrm{L}}^{-1} + X_n^{-1} -X_{\mathrm{L}}^{-1}\right) (X_n-X_{\mathrm{L}})X_{\mathrm{L}}^{-1} B\\&= A^{*} X_{\mathrm{L}}^{-1}(X_n-X_{\mathrm{L}})X_{\mathrm{L}}^{-1} A + B^{*} X_{\mathrm{L}}^{-1}(X_n-X_{\mathrm{L}})X_{\mathrm{L}}^{-1} B\\&- A^{*} X_{\mathrm{L}}^{-1}(X_n-X_{\mathrm{L}})X_n^{-1}(X_n-X_{\mathrm{L}})X_{\mathrm{L}}^{-1} A\\&- B^{*} X_{\mathrm{L}}^{-1}(X_n-X_{\mathrm{L}})X_n^{-1}(X_n-X_{\mathrm{L}})X_{\mathrm{L}}^{-1} B. \end{aligned}$$

Hence,

$$\begin{aligned} 0\le X_{n+1}-X_{\mathrm{L}} \le A^{*} X_{\mathrm{L}}^{-1}(X_n-X_{\mathrm{L}})X_{\mathrm{L}}^{-1} A + B^{*} X_{\mathrm{L}}^{-1}(X_n-X_{\mathrm{L}})X_{\mathrm{L}}^{-1} B \end{aligned}$$

and

$$\begin{aligned} \Vert X_{n+1}-X_{\mathrm{L}}\Vert \le \left( \left\| X_{\mathrm{L}}^{-1}A\right\| ^2 +\left\| X_{\mathrm{L}}^{-1}B\right\| ^2\right) \,\Vert X_n-X_{\mathrm{L}}\Vert . \end{aligned}$$

\(\square \)

Remark 2.1

If \(\Vert X_{\mathrm{L}}^{-1}A\Vert ^2 +\Vert X_{\mathrm{L}}^{-1}B\Vert ^2 < 1\) in Theorem 2.1, then Algorithm 2.1 converges to \(X_{\mathrm{L}}\) linearly with rate \(r\le \Vert X_{\mathrm{L}}^{-1}A\Vert ^2 +\Vert X_{\mathrm{L}}^{-1}B\Vert ^2\). Moreover, if \(X\) is a positive definite solution of Eq. (1) and \(\Vert X^{-1}A\Vert ^2 +\Vert X^{-1}B\Vert ^2 < 1\), then \(X\equiv X_{\mathrm{L}}\).

3 Inversion-free variants of the basic fixed point iteration

Zhan (1996) proposed an inversion-free variant of the BFPI for the maximal solution of (3) when \(Q=I\). Guo and Lancaster (1999) considered this algorithm for general positive definite \(Q\) and resolved the question of its convergence rate.

Long et al. (2008) investigated Eq. (2). They applied Zhan’s idea to (2) and proposed an inversion-free variant of the BFPI for the maximal solution of (2). We rewrite their algorithm for general \(Q\), so that it applies directly to (1).

Algorithm 3.1

Let \(X_0=Q, Y_0=Q^{-1}\). For \(n=0,1,2,\ldots \) compute

$$\begin{aligned} \left\{ \begin{array}{l} X_{n+1}=Q-A^{*}Y_nA-B^{*}Y_nB \\ Y_{n+1}=Y_n(2I-X_nY_n) \end{array} \right. \end{aligned}$$
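A corresponding numpy sketch of Algorithm 3.1, under the same illustrative assumptions as the sketch of Algorithm 2.1; note that only \(Q\) is inverted, once, before the loop, while the Newton–Schulz-type step \(Y_{n+1}=Y_n(2I-X_nY_n)\) avoids inverting the iterates \(X_n\).

```python
import numpy as np

def fifv_bfpi(A, B, Q, tol=1e-10, max_iter=1000):
    """Inversion-free variant of the BFPI (Algorithm 3.1): a sketch."""
    I = np.eye(Q.shape[0])
    X = Q.copy()
    Y = np.linalg.inv(Q)  # Y_0 = Q^{-1}: the only explicit inversion
    for _ in range(max_iter):
        # X_{n+1} uses the current Y_n; Y_{n+1} uses the current X_n
        X_new = Q - A.conj().T @ Y @ A - B.conj().T @ Y @ B
        Y = Y @ (2 * I - X @ Y)
        if np.linalg.norm(X_new - X, np.inf) <= tol:
            return X_new
        X = X_new
    return X
```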

The convergence of Algorithm 3.1 was established in Long et al. (2008) for \(Q=I\). In particular, Long et al. (2008) showed that, if (1) with \(Q=I\) has a positive definite solution, then for Algorithm 3.1 \(X_0\ge X_1\ge \cdots \), \(Y_0\le Y_1\le \cdots \), and \(\lim _{n\rightarrow \infty }X_n = X_{\mathrm{L}}\), \(\lim _{n\rightarrow \infty }Y_n = X_{\mathrm{L}}^{-1}\), where \(X_{\mathrm{L}}\) is the maximal positive definite solution. For a general positive definite matrix \(Q\) the convergence properties of Algorithm 3.1 are preserved.

Vaezzadeh et al. (2013) studied Eq. (1) with \(Q=I\) and investigated the convergence rate of Algorithm 3.1. The following result is given in Vaezzadeh et al. (2013).

Theorem 3.1

(Vaezzadeh et al. 2013, Theorem 2) If the matrix Eq. (1) with \(Q=I\) has a positive definite solution, then for Algorithm 3.1 and any \(\epsilon >0\) we have

$$\begin{aligned} \left\| Y_{n+1}-X_{\mathrm{L}}^{-1}\right\| \le {\bigg (\left\| AX_{\mathrm{L}}^{-1}\right\| +\left\| BX_{\mathrm{L}}^{-1}\right\| +\epsilon \bigg )}^2\left\| Y_{n-1}-X_{\mathrm{L}}^{-1}\right\| \end{aligned}$$
(5)

and

$$\begin{aligned} \Vert X_{n+1}-X_{\mathrm{L}}\Vert \le \left( \Vert A\Vert ^2+\Vert B\Vert ^2\right) \left\| Y_n-X_{\mathrm{L}}^{-1}\right\| \end{aligned}$$
(6)

for all \(n\) large enough.

We now show that the above result can be improved.

Theorem 3.2

If the matrix Eq. (1) has a positive definite solution, then for Algorithm 3.1 and any \(\epsilon >0\), we have

$$\begin{aligned} \left\| Y_{n+1}-X_{\mathrm{L}}^{-1}\right\| \le \left( \left\| AX_{\mathrm{L}}^{-1}\right\| ^2+\left\| BX_{\mathrm{L}}^{-1}\right\| ^2+\epsilon \right) \left\| Y_{n-1}-X_{\mathrm{L}}^{-1}\right\| \end{aligned}$$
(7)

and

$$\begin{aligned} \Vert X_{n+1}-X_{\mathrm{L}}\Vert \le \left( \Vert A\Vert ^2+\Vert B\Vert ^2\right) \left\| Y_n-X_{\mathrm{L}}^{-1}\right\| \end{aligned}$$
(8)

for all \(n\) large enough. Moreover, if \(A\) and \(B\) are nonsingular, then

$$\begin{aligned} \Vert X_{n+1}-X_{\mathrm{L}}\Vert \le \left( \left\| X_{\mathrm{L}}^{-1}A\right\| ^2+\left\| X_{\mathrm{L}}^{-1}B\right\| ^2+\epsilon \right) \Vert X_{n-1}-X_{\mathrm{L}}\Vert \end{aligned}$$
(9)

for all \(n\) large enough.

Proof

In the proof of Theorem 3.1, Vaezzadeh et al. (2013) obtained the expression

$$\begin{aligned} X_{\mathrm{L}}^{-1}-Y_{n+1}&= \left( X_{\mathrm{L}}^{-1}-Y_n\right) X_{\mathrm{L}}\left( X_{\mathrm{L}}^{-1}-Y_n\right) +Y_n A^{*}\left( X_{\mathrm{L}}^{-1}-Y_{n-1}\right) A Y_n\nonumber \\&+ Y_n B^{*}\left( X_{\mathrm{L}}^{-1}-Y_{n-1}\right) BY_n. \end{aligned}$$
(10)

The inequality (7) follows from (10) since \(\Vert Y_n-X_{\mathrm{L}}^{-1}\Vert \le \Vert Y_{n-1}-X_{\mathrm{L}}^{-1}\Vert \) and \(\lim Y_n =X_{\mathrm{L}}^{-1}\). The inequality (8) follows from

$$\begin{aligned} X_{n+1}-X_{\mathrm{L}}=A^{*}\left( X_{\mathrm{L}}^{-1}-Y_n\right) A+B^{*}\left( X_{\mathrm{L}}^{-1}-Y_n\right) B \end{aligned}$$
(11)

Next, from (10) and (11) (with shifted indices) we obtain

$$\begin{aligned} X_{\mathrm{L}}^{-1}-Y_n=\left( X_{\mathrm{L}}^{-1}-Y_{n-1}\right) X_{\mathrm{L}}\left( X_{\mathrm{L}}^{-1}-Y_{n-1}\right) +Y_{n-1}(X_{n-1}-X_{\mathrm{L}})Y_{n-1} \end{aligned}$$
(12)

If \(A\) and \(B\) are nonsingular, from (11) and (12) we have

$$\begin{aligned} X_{n+1}-X_{\mathrm{L}}&= A^{*}Y_{n-1}(X_{n-1}-X_{\mathrm{L}})Y_{n-1}A +B^{*}Y_{n-1}(X_{n-1}-X_{\mathrm{L}})Y_{n-1}B \nonumber \\&+A^{*}\left( X_{\mathrm{L}}^{-1}-Y_{n-1}\right) AA^{-1}X_{\mathrm{L}}\left( X_{\mathrm{L}}^{-1}-Y_{n-1}\right) A\nonumber \\&+B^{*}\left( X_{\mathrm{L}}^{-1}-Y_{n-1}\right) BB^{-1}X_{\mathrm{L}}\left( X_{\mathrm{L}}^{-1}-Y_{n-1}\right) B. \end{aligned}$$

Hence,

$$\begin{aligned} \Vert X_{n+1}-X_{\mathrm{L}}\Vert&\le \left( \Vert Y_{n-1}A\Vert ^2+\Vert Y_{n-1}B\Vert ^2\right) \Vert X_{n-1}-X_{\mathrm{L}}\Vert \nonumber \\&+ \left\| A^{*}\left( X_{\mathrm{L}}^{-1}-Y_{n-1}\right) A + B^{*}\left( X_{\mathrm{L}}^{-1}-Y_{n-1}\right) B\right\| \\&\times \bigg (\left\| A^{-1}X_{\mathrm{L}} \left( X_{\mathrm{L}}^{-1}-Y_{n-1}\right) A\right\| + \left\| B^{-1}X_{\mathrm{L}}\left( X_{\mathrm{L}}^{-1}-Y_{n-1}\right) B\right\| \bigg ) \\&= \left( \Vert Y_{n-1}A\Vert ^2+\Vert Y_{n-1}B\Vert ^2\right) \Vert X_{n-1}-X_{\mathrm{L}}\Vert \\&+ \bigg (\left\| A^{-1}X_{\mathrm{L}} \left( X_{\mathrm{L}}^{-1}-Y_{n-1}\right) A\right\| + \left\| B^{-1}X_{\mathrm{L}}\left( X_{\mathrm{L}}^{-1}-Y_{n-1}\right) B\right\| \bigg ) \\&\times \Vert X_n-X_{\mathrm{L}}\Vert . \end{aligned}$$

Therefore, since \(\Vert X_n-X_{\mathrm{L}}\Vert \le \Vert X_{n-1}-X_{\mathrm{L}}\Vert \) and \(\lim Y_n =X_{\mathrm{L}}^{-1}\), the inequality (9) is satisfied for all \(n\) large enough. \(\square \)

Remark 3.1

According to Theorem 3.1, linear convergence of Algorithm 3.1 is guaranteed if \((\Vert AX_{\mathrm{L}}^{-1}\Vert +\Vert BX_{\mathrm{L}}^{-1}\Vert )^2 < 1\). According to our result (Theorem 3.2), however, the weaker condition \(\Vert AX_{\mathrm{L}}^{-1}\Vert ^2+\Vert BX_{\mathrm{L}}^{-1}\Vert ^2 < 1\) suffices. It is obvious that, whenever \(A\ne 0\) and \(B\ne 0\),

$$\begin{aligned} \left\| AX_{\mathrm{L}}^{-1}\right\| ^2+\left\| BX_{\mathrm{L}}^{-1}\right\| ^2 < \bigg (\left\| AX_{\mathrm{L}}^{-1}\right\| +\left\| BX_{\mathrm{L}}^{-1}\right\| \bigg )^2. \end{aligned}$$

Hence, there are matrices \(A\), \(B\) and maximal solutions \(X_{\mathrm{L}}\) of Eq. (1) for which \(\Vert AX_{\mathrm{L}}^{-1}\Vert ^2+\Vert BX_{\mathrm{L}}^{-1}\Vert ^2<1\) while \((\Vert AX_{\mathrm{L}}^{-1}\Vert +\Vert BX_{\mathrm{L}}^{-1}\Vert )^2>1\); see Examples 4.1 and 4.2.

Vaezzadeh et al. (2013) proposed a modification of Algorithm 3.1 with \(Q=I\) and investigated its convergence rate. For a general positive definite matrix \(Q\) this algorithm takes the following form:

Algorithm 3.2

Let \(X_0=Q\), \(Y_0=Q^{-1}\). For \(n=0,1,\ldots \), compute

$$\begin{aligned} \left\{ \!\begin{array}{l} Y_{n+1}=Y_n(2I-X_nY_n) \\ X_{n+1}=Q-A^{*}Y_{n+1}A-B^{*}Y_{n+1}B \\ \end{array} \right. \end{aligned}$$
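A sketch of Algorithm 3.2 under the same illustrative assumptions; the only change from the sketch of Algorithm 3.1 is the order of the two updates, so that \(X_{n+1}\) is formed with the refined \(Y_{n+1}\).

```python
import numpy as np

def sifv_bfpi(A, B, Q, tol=1e-10, max_iter=1000):
    """Second inversion-free variant of the BFPI (Algorithm 3.2): a sketch."""
    I = np.eye(Q.shape[0])
    X = Q.copy()
    Y = np.linalg.inv(Q)  # Y_0 = Q^{-1}
    for _ in range(max_iter):
        Y = Y @ (2 * I - X @ Y)  # Y_{n+1} = Y_n (2I - X_n Y_n), done first
        X_new = Q - A.conj().T @ Y @ A - B.conj().T @ Y @ B
        if np.linalg.norm(X_new - X, np.inf) <= tol:
            return X_new
        X = X_new
    return X
```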

We note that Algorithm 3.2 is a generalization of the algorithm for (3) proposed in Guo and Lancaster (1999).

Vaezzadeh et al. (2013, Theorem 3) showed that, if (1) with \(Q=I\) has a positive definite solution, then for Algorithm 3.2 \(X_0\ge X_1\ge \cdots \), \(Y_0\le Y_1\le \cdots \), and \(\lim _{n\rightarrow \infty }X_n = X_{\mathrm{L}}\), \(\lim _{n\rightarrow \infty }Y_n = X_{\mathrm{L}}^{-1}\). For the convergence rate, the following result is given in Vaezzadeh et al. (2013).

Theorem 3.3

(Vaezzadeh et al. 2013, Theorem 4) If the matrix Eq. (1) with \(Q=I\) has a positive definite solution, then for Algorithm 3.2 and any \(\epsilon >0\) we have

$$\begin{aligned} \left\| Y_{n+1}-X_{\mathrm{L}}^{-1}\right\| \le {\bigg (\left\| AX_{\mathrm{L}}^{-1}\right\| +\left\| BX_{\mathrm{L}}^{-1}\right\| +\epsilon \bigg )}^2\left\| Y_n-X_{\mathrm{L}}^{-1}\right\| \end{aligned}$$
(13)

and

$$\begin{aligned} \Vert X_n-X_{\mathrm{L}}\Vert \le (\Vert A\Vert ^2+\Vert B\Vert ^2)\left\| Y_n-X_{\mathrm{L}}^{-1}\right\| \end{aligned}$$
(14)

for all \(n\) large enough.

For a general positive definite matrix \(Q\) the convergence properties of Algorithm 3.2 are preserved. We now show that the above result can be improved.

Theorem 3.4

If the matrix Eq. (1) has a positive definite solution, then for Algorithm 3.2 and any \(\epsilon >0\), we have

$$\begin{aligned} \left\| Y_{n+1}-X_{\mathrm{L}}^{-1}\right\| \le \left( \left\| AX_{\mathrm{L}}^{-1}\right\| ^2+\left\| BX_{\mathrm{L}}^{-1}\right\| ^2+\epsilon \right) \left\| Y_n-X_{\mathrm{L}}^{-1}\right\| \end{aligned}$$
(15)

and

$$\begin{aligned} \Vert X_n-X_{\mathrm{L}}\Vert \le (\Vert A\Vert ^2+\Vert B\Vert ^2)\left\| Y_n-X_{\mathrm{L}}^{-1}\right\| \end{aligned}$$
(16)

for all \(n\) large enough. Moreover, if \(A\) and \(B\) are nonsingular, then

$$\begin{aligned} \Vert X_{n+1}-X_{\mathrm{L}}\Vert \le \left( \left\| X_{\mathrm{L}}^{-1}A\right\| ^2+\left\| X_{\mathrm{L}}^{-1}B\right\| ^2+\epsilon \right) \Vert X_n-X_{\mathrm{L}}\Vert \end{aligned}$$
(17)

for all \(n\) large enough.

Proof

The proof is similar to that of Theorem 3.2. \(\square \)

4 Numerical experiments

In this section, we present some numerical examples to show the effectiveness of the new convergence rate results for the considered inversion-free methods. The examples are modifications of those in Long et al. (2008) and Vaezzadeh et al. (2013), and we compare Algorithm 2.1 (BFPI), Algorithm 3.1 (FIFV-BFPI) and Algorithm 3.2 (SIFV-BFPI). For the stopping criterion we take

$$\begin{aligned} \Vert X_n-X_{n-1}\Vert _\infty \le 10^{-10}, \end{aligned}$$

where \(\Vert A\Vert _\infty =\max _i\sum _{j=1}^m|a_{ij}|\) for a complex \(m\times m\) matrix \(A\).

We use the following notations:

  • \(k\) is the smallest number of iterations such that the stopping criterion is satisfied;

  • \(\mathrm{res}(X_k)={\Vert X_k + A^{*}X_k^{-1}A + B^{*}X_k^{-1}B - Q\Vert }_\infty \);

  • \(r_1=\Vert X_{\mathrm{L}}^{-1}A\Vert ^2+\Vert X_{\mathrm{L}}^{-1}B\Vert ^2\)—convergence rate of Algorithm 2.1 (BFPI);

  • \(r_{2y}=(\Vert AX_{\mathrm{L}}^{-1}\Vert +\Vert BX_{\mathrm{L}}^{-1}\Vert )^2\)—convergence “semi”-rate of \(Y_n\) in Algorithm 3.1 (FIFV-BFPI) and convergence rate of \(Y_n\) in Algorithm 3.2 (SIFV-BFPI), as given by Vaezzadeh et al. (2013) [see (5) and (13)];

  • \(r_{3y}=\Vert AX_{\mathrm{L}}^{-1}\Vert ^2+\Vert BX_{\mathrm{L}}^{-1}\Vert ^2\) and \(r_{3x}=r_1\) are the convergence “semi”-rates of \(Y_n\) and \(X_n\) in Algorithm 3.1, respectively [see (7) and (9)]. Moreover, \(r_{3y}\) and \(r_{3x}\) are the convergence rates of \(Y_n\) and \(X_n\) in Algorithm 3.2, respectively [see (15) and (17)];

  • \(\varepsilon _x(r)=r-\frac{\Vert X_n-X_{\mathrm{L}}\Vert }{\Vert X_{n-1}-X_{\mathrm{L}}\Vert }\) and \(\varepsilon '_x(r)=r-\frac{\Vert X_n-X_{\mathrm{L}}\Vert }{\Vert X_{n-2}-X_{\mathrm{L}}\Vert }\);

  • \(\varepsilon _y(r)=r-\frac{\Vert Y_n-X_{\mathrm{L}}^{-1}\Vert }{\Vert Y_{n-1}-X_{\mathrm{L}}^{-1}\Vert }\) and \(\varepsilon '_y(r)=r-\frac{\Vert Y_n-X_{\mathrm{L}}^{-1}\Vert }{\Vert Y_{n-2}-X_{\mathrm{L}}^{-1}\Vert }\).

In our case, for Algorithm 3.1 a convergence “semi”-rate means that:

  • \(\Vert Y_{n+1}-X_{\mathrm{L}}^{-1}\Vert \le (r_{2y}+\epsilon )\Vert Y_{n-1}-X_{\mathrm{L}}^{-1}\Vert \) is satisfied [see (5)];

  • \(\Vert Y_{n+1}-X_{\mathrm{L}}^{-1}\Vert \le (r_{3y}+\epsilon )\Vert Y_{n-1}-X_{\mathrm{L}}^{-1}\Vert \) is satisfied [see (7)];

  • \(\Vert X_{n+1}-X_{\mathrm{L}}\Vert \le (r_{3x}+\epsilon )\Vert X_{n-1}-X_{\mathrm{L}}\Vert \) is satisfied [see (9)].

The convergence rate of Algorithm 3.1 is approximately the square root of the “semi”-rate.
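When \(X_{\mathrm{L}}\) is known, as in the examples below, the quantities \(r_1\), \(r_{2y}\) and \(r_{3y}\) can be evaluated directly. A minimal sketch follows (the spectral norm is numpy's matrix 2-norm; the function name `rates` is our own illustrative choice):

```python
import numpy as np

def rates(A, B, XL):
    """Theoretical convergence (semi-)rates, assuming X_L is known."""
    XLinv = np.linalg.inv(XL)
    nA = np.linalg.norm(A @ XLinv, 2)           # ||A X_L^{-1}||
    nB = np.linalg.norm(B @ XLinv, 2)           # ||B X_L^{-1}||
    r1 = (np.linalg.norm(XLinv @ A, 2) ** 2
          + np.linalg.norm(XLinv @ B, 2) ** 2)  # rate of BFPI; r_3x = r_1
    r2y = (nA + nB) ** 2                        # rate from Theorems 3.1/3.3
    r3y = nA ** 2 + nB ** 2                     # improved rate, Theorems 3.2/3.4
    return r1, r2y, r3y
```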

Example 4.1

Consider the Eq. (1) with

$$\begin{aligned} A&= \frac{1}{10}\left( \begin{array}{r@{\quad }r@{\quad }r} 0.10 &{} -1.50 &{} -2.59 \\ 0.15 &{} 2.12 &{} -0.64 \\ 0.25 &{} -0.69 &{} 1.39 \\ \end{array} \right) \!,\,\, B=\frac{1}{10}\left( \begin{array}{r@{\quad }r@{\quad }r} 1.60 &{} -0.25 &{} 0.20 \\ -0.25 &{} -2.88 &{} -0.60 \\ 0.04 &{} -0.16 &{} -1.20 \\ \end{array} \right) \!,\\ Q&= \frac{1}{2}I+2A^{*}A+2B^{*}B. \end{aligned}$$
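Assuming the sketches `bfpi` and `rates` above are in scope, the data of this example can be set up as follows; here \(Q\) is constructed so that \(X=\frac{1}{2}I\) solves Eq. (1) exactly.

```python
import numpy as np

A = np.array([[ 0.10, -1.50, -2.59],
              [ 0.15,  2.12, -0.64],
              [ 0.25, -0.69,  1.39]]) / 10
B = np.array([[ 1.60, -0.25,  0.20],
              [-0.25, -2.88, -0.60],
              [ 0.04, -0.16, -1.20]]) / 10
# X = I/2 solves (1) by construction; per the text it is the maximal solution
Q = 0.5 * np.eye(3) + 2 * A.T @ A + 2 * B.T @ B

XL = bfpi(A, B, Q)      # should approximate I/2
print(rates(A, B, XL))  # approximately (0.7537, 1.5063, 0.7537)
```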

For Example 4.1 the maximal solution is \(X_{\mathrm{L}}=\frac{1}{2}I\), with \(r_1=r_{3y}=r_{3x}=0.7537\) and \(r_{2y}=1.5063\). Table 1 reports the number of iterations \(k\) at which the stopping criterion is satisfied, the norm \({\Vert X_k-X_{k-1}\Vert }_\infty \) and \(\mathrm{res}(X_k)\) for the three algorithms.

Table 1 Numerical results of Example 4.1

The rest of our numerical results are reported in Table 2.

Table 2 Numerical results of Example 4.1

Example 4.2

Consider the Eq. (1) with

$$\begin{aligned} A&= \frac{1}{70}\left( \begin{array}{l@{\quad }l@{\quad }l@{\quad }l@{\quad }l} 40 &{} 25 &{} 23 &{} 35 &{} 66 \\ 25 &{} 32 &{} 27 &{} 45 &{} 21\\ 23 &{} 27 &{} 28 &{} 16 &{} 24\\ 35 &{} 45 &{} 16 &{} 52 &{} 65\\ 66 &{} 21 &{} 24 &{} 65 &{} 69\\ \end{array} \right) \!,\,\, B=\frac{1}{70}\left( \begin{array}{l@{\quad }l@{\quad }l@{\quad }l@{\quad }l} 11 &{} 21 &{} 23 &{} 25 &{} 32\\ 21 &{} 31 &{} 60 &{} 42 &{} 33\\ 23 &{} 60 &{} 34 &{} 18 &{} 26\\ 25 &{} 42 &{} 18 &{} 44 &{} 30\\ 32 &{} 33 &{} 26 &{} 30 &{} 50\\ \end{array} \right) \!,\\ Q&= X+A^{*}X^{-1}A+B^{*}X^{-1}B, \, \text{ where } \, \, X=\left( \begin{array}{l@{\quad }l@{\quad }l@{\quad }l@{\quad }l} 3 &{} 1 &{} 0 &{} 0 &{} 0\\ 1 &{} 3 &{} 1 &{} 0 &{} 0\\ 0 &{} 1 &{} 3 &{} 1 &{} 0\\ 0 &{} 0 &{} 1 &{} 3 &{} 1\\ 0 &{} 0 &{} 0 &{} 1 &{} 3\\ \end{array} \right) \end{aligned}$$
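Similarly, the data of this example, where \(Q\) is built from the prescribed solution \(X\), might be set up as follows (same illustrative assumptions as before):

```python
import numpy as np

A = np.array([[40, 25, 23, 35, 66],
              [25, 32, 27, 45, 21],
              [23, 27, 28, 16, 24],
              [35, 45, 16, 52, 65],
              [66, 21, 24, 65, 69]], dtype=float) / 70
B = np.array([[11, 21, 23, 25, 32],
              [21, 31, 60, 42, 33],
              [23, 60, 34, 18, 26],
              [25, 42, 18, 44, 30],
              [32, 33, 26, 30, 50]], dtype=float) / 70
# tridiagonal X with 3 on the diagonal and 1 on the off-diagonals
X = 3 * np.eye(5) + np.eye(5, k=1) + np.eye(5, k=-1)
Xinv = np.linalg.inv(X)
Q = X + A.T @ Xinv @ A + B.T @ Xinv @ B  # X solves Eq. (1) by design
```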

For Example 4.2 the maximal solution is \(X_{\mathrm{L}}=X\), with \(r_1=r_{3x}=r_{3y}=0.7450\) and \(r_{2y}=1.4602\). Our numerical results are reported in Tables 3 and 4.

Table 3 Numerical results of Example 4.2
Table 4 Numerical results of Example 4.2