1 Introduction

Let \({\mathbb {M}}_n\) stand for the algebra of \(n\times n\) complex matrices and let \(\mathcal {P}_n\) denote the normal cone of positive definite matrices in \({\mathbb {M}}_n\). For a real-valued function f defined on the set of eigenvalues of a Hermitian matrix A, the matrix f(A) is defined by means of the functional calculus.

Let A, B be positive definite matrices. It is well known that the matrix geometric mean \(A\sharp B=A^{1/2}\left( A^{-1/2}BA^{-1/2}\right) ^{1/2}A^{1/2}\) was first defined by Pusz and Woronowicz [1]. They showed that the geometric mean is the unique positive definite solution of the Riccati equation

$$\begin{aligned} XA^{-1}X=B. \end{aligned}$$
(1)
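For readers who wish to experiment, the following small numerical sketch (in Python with NumPy; the helper names `rand_pd`, `powm` and `gmean` are ours, not from [1]) checks that \(A\sharp B\) solves the Riccati equation (1):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_pd(n):
    """A random n x n real positive definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def powm(A, t):
    """A^t for symmetric positive definite A, via the spectral decomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w**t) @ V.T

def gmean(A, B):
    """Matrix geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    Ah, Ahi = powm(A, 0.5), powm(A, -0.5)
    return Ah @ powm(Ahi @ B @ Ahi, 0.5) @ Ah

A, B = rand_pd(4), rand_pd(4)
X = gmean(A, B)
# X should be the unique positive definite solution of X A^{-1} X = B
print(np.allclose(X @ np.linalg.inv(A) @ X, B))  # True
```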

In 2005, Lim [2] studied the inverse means problem for the geometric mean and the contraharmonic mean. Using the Riccati equation (1) as a lemma, for positive definite matrices \(A \le B\) he studied the equation

$$\begin{aligned} X= A + 2BX^{-1}B. \end{aligned}$$

He showed that the last equation has a unique positive definite solution of the form \(X = \frac{1}{2} \left( A + A\sharp \left( A + 8BA^{-1}B\right) \right) \); in the scalar case the equation reads \(x^2 = ax + 2b^2\), whose positive root is \(\frac{1}{2}\left( a+\sqrt{a^2+8b^2}\right) \). Lim and co-authors [3] studied the non-linear equation

$$\begin{aligned} X = B \sharp (A+X). \end{aligned}$$

They proved that this equation has a unique positive definite solution \(X =\frac{1}{2}(B+B\sharp (B+4A)).\) Interestingly, both results rest on an elementary approach: solving the corresponding quadratic matrix equations. In 2020, Lee and co-authors [4] studied the matrix equation
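As a numerical sanity check on the closed form of [3], one can verify that \(X =\frac{1}{2}(B+B\sharp (B+4A))\) indeed satisfies \(X = B \sharp (A+X)\); a short sketch (the helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_pd(n):
    """A random n x n real positive definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def powm(A, t):
    """A^t for symmetric positive definite A, via the spectral decomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w**t) @ V.T

def gmean(A, B):
    """A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    Ah, Ahi = powm(A, 0.5), powm(A, -0.5)
    return Ah @ powm(Ahi @ B @ Ahi, 0.5) @ Ah

A, B = rand_pd(4), rand_pd(4)
# closed-form candidate from [3] for the equation X = B # (A + X)
X = 0.5 * (B + gmean(B, B + 4 * A))
print(np.allclose(X, gmean(B, A + X)))  # True
```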

$$\begin{aligned} X^p=A+M^T(X\sharp B)M. \end{aligned}$$

Similar to the approach of Lim and Pálfia [5], they used the Thompson metric and Banach fixed point theorem to show that the equation has a unique positive definite solution. After that, Zhai and Jin [6] generalized this equation for m non-singular real matrices. More precisely, they studied two non-linear matrix equations as follows:

$$\begin{aligned} X^p=A+ \sum _{i=1}^m M_i^T(X\sharp B)M_i \end{aligned}$$

and

$$\begin{aligned} X^p=A+ \sum _{i=1}^j M_i^T\left( X\sharp B\right) M_i + \sum _{i=j+1}^m M_i^T\left( X^{-1}\sharp B\right) M_i, \end{aligned}$$

where p, m, j are positive integers such that \(1 \le j \le m,\) A, B are positive definite matrices and \(M_1, M_2, \ldots , M_m\) are non-singular real matrices.

In 2021, Dinh and co-authors [7] considered similar matrix equations for the weighted matrix geometric mean

$$\begin{aligned} A\sharp _t B = A^{1/2}\left( A^{-1/2}BA^{-1/2}\right) ^{t}A^{1/2}. \end{aligned}$$

Namely, they studied the following matrix equations:

$$\begin{aligned} X^p=A+\sum _{i=1}^m M_i^T\left( X \sharp _t B\right) M_i, \end{aligned}$$
(2)

and

$$\begin{aligned} X^p=A+ \sum _{i=1}^j M_i^T\left( X \sharp _t B\right) M_i+ \sum _{i=j+1}^m M_i^T\left( X^{-1} \sharp _t B\right) M_i, \end{aligned}$$
(3)

where p, m are positive integers, A, B are \(n \times n\) positive definite matrices and \(M_1, M_2, \ldots , M_m\) are \(n \times n\) nonsingular real matrices. At the end of their paper, they noted that the weighted geometric mean \(A\sharp _t B\) is a matrix generalization of \(a^{1-t}b^t\) for two non-negative numbers a and b, and that there is another, symmetric generalization, \(\left( A^{\frac{1-t}{2t}}BA^{\frac{1-t}{2t}}\right) ^t\), which appears in the definition of the sandwiched quasi-relative entropy \({\text {Tr}}\left( A^{\frac{1-t}{2t}}BA^{\frac{1-t}{2t}}\right) ^t\) [8].
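The two generalizations coincide when A and B commute (both reduce to \(A^{1-t}B^t\)) but differ in general. A quick numerical illustration (the function names `wgmean` and `sandwiched` are ours):

```python
import numpy as np

def powm(A, t):
    """A^t for symmetric positive definite A, via the spectral decomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w**t) @ V.T

def wgmean(A, B, t):
    """Weighted geometric mean A #_t B."""
    Ah, Ahi = powm(A, 0.5), powm(A, -0.5)
    return Ah @ powm(Ahi @ B @ Ahi, t) @ Ah

def sandwiched(A, B, t):
    """The symmetric generalization (A^{(1-t)/(2t)} B A^{(1-t)/(2t)})^t."""
    S = powm(A, (1 - t) / (2 * t))
    return powm(S @ B @ S, t)

t = 0.3
# commuting case: both expressions equal A^{1-t} B^t
A, B = np.diag([1.0, 2.0, 3.0]), np.diag([4.0, 1.0, 2.0])
print(np.allclose(wgmean(A, B, t), sandwiched(A, B, t)))   # True
# non-commuting case: the two expressions differ
A2, B2 = np.diag([1.0, 4.0]), np.array([[2.0, 1.0], [1.0, 2.0]])
print(np.allclose(wgmean(A2, B2, t), sandwiched(A2, B2, t)))  # False
```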

In this paper, we study the general case of equations (2) and (3). More precisely, for positive definite matrices A and B, for nonsingular matrices \(M_1, M_2, \ldots , M_m\) and for an arbitrary Kubo–Ando mean \(\sigma \), we show that each of the following two equations:

$$\begin{aligned} X^p =A+\sum _{i=1}^m M_i^*\left( X\sigma B\right) M_i \end{aligned}$$

and

$$\begin{aligned} X^p=A+\sum \limits _{i=1}^jM_i^*(X\sigma B)M_i+\sum \limits _{i=j+1}^mM_i^*(X^{-1}\sigma B)M_i, \end{aligned}$$

where m is a positive integer and \(p>1\), has a unique positive definite solution. We also study the multi-step stationary iterative method for these equations and prove the corresponding convergence results.

2 Preliminaries

In 1980, Kubo and Ando introduced the theory of matrix means over the set of positive semi-definite matrices as follows:

Definition 1

([9]) A binary operation \(\sigma \) on the class of positive semi-definite matrices, \((A, B)\mapsto A\sigma B\), is called a mean if the following requirements are fulfilled:

  (1) \(A\le C\) and \(B\le D\) imply \(A\sigma B\le C\sigma D\).

  (2) \(C(A\sigma B) C\le (CAC)\sigma (CBC)\).

  (3) \(A_n \downarrow A\) and \(B_n\downarrow B\) imply \((A_n \sigma B_n)\downarrow (A\sigma B)\).

  (4) \(I\sigma I=I\) for the identity matrix I.

A mean \(\sigma \) is symmetric if \(A\sigma B=B\sigma A\) for all A, B.

An immediate consequence of (2) is

$$\begin{aligned} C(A\sigma B) C=(CAC)\sigma (CBC) \text { for invertible } C. \end{aligned}$$

In particular, \(a(A\sigma B)=(aA)\sigma (aB)\) for \(a>0\).

They also demonstrated an isomorphism between matrix means and non-negative normalized operator monotone functions through the expression

$$\begin{aligned} A\sigma B= A^{1/2}f_\sigma \left( A^{-1/2}BA^{-1/2}\right) A^{1/2}, \end{aligned}$$

where \(f_\sigma \) is a non-negative normalized operator monotone function which is called the representing function of \(\sigma \).

Three well-known symmetric matrix means are the arithmetic mean, the geometric mean, and the harmonic mean, denoted by \(\nabla \), \(\sharp \) and !, respectively. Their formulas and representing functions are as follows:

  • Arithmetic: \(A\nabla B =\dfrac{1}{2}(A+B)\), with representing function \(f_{\nabla }(x)=\dfrac{1+x}{2}\).

  • Geometric: \(A \sharp B =A^{1/2}(A^{-1/2}BA^{-1/2})^{1/2}A^{1/2}\), with representing function \(f_{\sharp }(x)=\sqrt{x}\).

  • Harmonic: \(A ! B =2(A^{-1}+B^{-1})^{-1}\), with representing function \(f_{!}(x)=\dfrac{2x}{1+x}\).

For \(t\in [0,1]\), the weighted versions of the three symmetric means above are:

  • The weighted arithmetic mean: \(A\nabla _t B= (1-t)A+tB\).

  • The weighted geometric mean: \(A\sharp _t B= A^{1/2}\left( A^{-1/2}BA^{-1/2}\right) ^tA^{1/2}\).

  • The weighted harmonic mean: \(A!_tB= \left( (1-t)A^{-1}+tB^{-1} \right) ^{-1}\).
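These weighted means satisfy the matrix arithmetic–geometric–harmonic inequality \(A!_tB\le A\sharp _t B\le A\nabla _t B\) in the Loewner order; a brief numerical check (helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_pd(n):
    """A random n x n real positive definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def powm(A, t):
    """A^t for symmetric positive definite A, via the spectral decomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w**t) @ V.T

def weighted_means(A, B, t):
    """Weighted arithmetic, geometric and harmonic means of A and B."""
    arith = (1 - t) * A + t * B
    Ah, Ahi = powm(A, 0.5), powm(A, -0.5)
    geom = Ah @ powm(Ahi @ B @ Ahi, t) @ Ah
    harm = np.linalg.inv((1 - t) * np.linalg.inv(A) + t * np.linalg.inv(B))
    return arith, geom, harm

A, B = rand_pd(4), rand_pd(4)
arith, geom, harm = weighted_means(A, B, 0.25)
# X <= Y in the Loewner order iff the smallest eigenvalue of Y - X is >= 0
print(np.linalg.eigvalsh(arith - geom).min() >= -1e-9)  # True
print(np.linalg.eigvalsh(geom - harm).min() >= -1e-9)   # True
```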

Although Kubo–Ando theory characterizes all possible matrix means, it does not clarify how weighted means correspond to symmetric ones.

In [10], Pálfia defined, for every symmetric matrix mean \(\sigma \) and every \(t \in [0,1]\), an associated weighted mean, denoted by \(\sigma _t\), via the following procedure.

Definition 2

([10, Definition 3]). Let \(\sigma \) be a symmetric matrix mean, \(A, B\in \mathcal {P}_n\) and \(t\in [0,1]\). Let \(a_0=0, b_0=1, A_0=A, B_0=B\). For \(m=0,1,2,\ldots \), define \(a_{m+1}, b_{m+1}, A_{m+1}, B_{m+1}\) recursively by the following procedure:

  • if \(a_m=t\), then \(a_{m+1}=a_m\), \(b_{m+1}=a_m\), \(A_{m+1}=A_m\) and \(B_{m+1}=A_m\);

  • else if \(b_m=t\), then \(a_{m+1}=b_m\), \(b_{m+1}=b_m\), \(A_{m+1}=B_m\) and \(B_{m+1}=B_m\);

  • else if \(\frac{a_m+b_m}{2}\le t\), then \(a_{m+1}=\frac{a_m+b_m}{2}\), \(b_{m+1}=b_m\), \(A_{m+1}=A_m\sigma B_m\) and \(B_{m+1}=B_m\);

  • else \(a_{m+1}=a_m\), \(b_{m+1}=\frac{a_m+b_m}{2}\), \(A_{m+1}=A_m\) and \(B_{m+1}=A_m\sigma B_m\).
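A minimal sketch of this bisection in code (ours, for illustration): running it with \(\sigma =\nabla \) should recover the weighted arithmetic mean \(\nabla _t\).

```python
import numpy as np

def weighted_mean(sigma, A, B, t, steps=60):
    """Palfia's bisection (Definition 2): builds A sigma_t B from a symmetric mean."""
    a, b = 0.0, 1.0
    for _ in range(steps):
        if a == t:
            B = A                                # both sequences collapse to A_m
        elif b == t:
            A = B                                # both sequences collapse to B_m
        elif (a + b) / 2 <= t:
            A, a = sigma(A, B), (a + b) / 2      # move the left endpoint to the midpoint
        else:
            B, b = sigma(A, B), (a + b) / 2      # move the right endpoint to the midpoint
    return (A + B) / 2                           # A_m and B_m share the same limit

arith = lambda X, Y: (X + Y) / 2                 # sigma = arithmetic mean
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, -0.2], [-0.2, 3.0]])
t = 0.3
# the construction should recover (1 - t) A + t B
print(np.allclose(weighted_mean(arith, A, B, t), (1 - t) * A + t * B))  # True
```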

Theorem 3

[10, Theorem 1]. The sequences \(A_m\) and \(B_m\) given in Definition 2 converge and have the same limit.

Definition 4

([10, Definition 4]). The common limit of \(A_m\) and \(B_m\) in Theorem 3 will be denoted by \(A\sigma _t B\); the mean \(\sigma _t\) is regarded as the weighted mean corresponding to the symmetric matrix mean \(\sigma \).

In [10], the author not only showed that the weighted mean \(\sigma _t\) yields the correct weighted means in the case of the arithmetic, geometric, and harmonic means, but also established several important properties of \(\sigma _t\), as follows:

Proposition 5

([10, Proposition 3]). For \(t\in [0,1]\), the corresponding weighted mean \(\sigma _t\) to a symmetric mean \(\sigma \) fulfills the following properties:

  (1’) \(A\le C\) and \(B\le D\) imply \(A\sigma _t B\le C\sigma _t D\).

  (2’) \(C(A\sigma _t B) C\le (CAC)\sigma _t (CBC)\).

  (3’) \(A_n \downarrow A\) and \(B_n\downarrow B\) imply \((A_n \sigma _t B_n)\downarrow (A\sigma _t B)\).

  (4’) \(I\sigma _t I=I\) for the identity matrix I.

  (5’) \(A\sigma _{1/2}B=A\sigma B\).

  (6’) If \(A\sigma B\le A\tau B\) then \(A\sigma _t B\le A\tau _t B\).

  (7’) \(A \sigma _t B\) is continuous in t.

An immediate consequence of (2’) is that \(C(A\sigma _t B) C=(CAC)\sigma _t (CBC)\) for invertible C; in particular, \(a(A\sigma _t B)=(aA)\sigma _t (aB)\) for \(a>0\).

3 Main results

In this section, we use the fixed point theorem for increasing maps to show the existence and uniqueness of solutions of the matrix equations.

Definition 6

([11, Definition 2.1.1]). An operator \(T:\mathcal {P}_n \rightarrow \mathcal {P}_n \) is said to be increasing if \(0 < x \le y\) implies \(Tx \le Ty.\)

Lemma 7

([11, Theorem 2.2.6]). Let \(T:\mathcal {P}_n \rightarrow \mathcal {P}_n \) be an increasing operator, and suppose that there exists \(r \in (0,1)\) for which

$$\begin{aligned} T(sx) \ge s^r T(x),\hspace{2.84544pt}x \in \mathcal {P}_n,\hspace{2.84544pt}s \in (0,1). \end{aligned}$$

Then T has a unique fixed point \(x^* \in \mathcal {P}_n.\)

Theorem 8

Let \(A,B \in \mathcal {P}_n\), let m be a positive integer and \(p>1\). Then, for nonsingular matrices \(M_1, M_2, \ldots , M_m\) in \(\mathbb {M}_n\) and for an arbitrary Kubo–Ando mean \(\sigma \), the matrix equation

$$\begin{aligned} X^p =A+\sum _{i=1}^m M_i^*\left( X\sigma B\right) M_i \end{aligned}$$
(4)

has a unique positive definite solution \(X^*\) in \(\mathcal {P}_n.\)

Proof

We consider the function

$$\begin{aligned} T(X)=\Big (A+\sum _{i=1}^m M_i^*\left( X\sigma B\right) M_i \Big )^{\frac{1}{p}}. \end{aligned}$$

We show that T(X) satisfies the conditions of Lemma 7, and hence has a unique fixed point \(X^*\) in \(\mathcal {P}_n\); equivalently, the equation (4) has a unique positive definite solution \(X^*\) in \(\mathcal {P}_n\).

Let \(0< X_1 \le X_2\). By the monotonicity of \(\sigma \) we have \( X_1\sigma B\le X_2\sigma B\). Consequently,

$$\begin{aligned} M_i^*\left( X_1\sigma B\right) M_i \le M_i^* \left( X_2\sigma B\right) M_i,\quad i=1, 2, \ldots , m. \end{aligned}$$

Therefore,

$$\begin{aligned} A+\sum \limits _{i=1}^m M_i^*\left( X_1\sigma B\right) M_i \le A+\sum \limits _{i=1}^mM_i^* \left( X_2\sigma B\right) M_i. \end{aligned}$$

Since \(p>1\), the function \(x^{\frac{1}{p}}\) is operator monotone on \((0,+\infty )\). We have

$$\begin{aligned} T(X_1)&= \left( A+\sum \limits _{i=1}^m M_i^*\left( X_1\sigma B\right) M_i\right) ^{\frac{1}{p}} \\&\le \left( A+\sum \limits _{i=1}^mM_i^* \left( X_2\sigma B\right) M_i\right) ^{\frac{1}{p}} \\&= T(X_2), \end{aligned}$$

so the function T(X) is increasing.

Let \(X\in \mathcal {P}_n\) and \(s\in (0,1)\). Since \(p>1\), we can choose a constant \(r\in (0,1)\) such that \(\dfrac{1}{p}<r<1\), and hence \(rp>1\). We have

$$\begin{aligned} (sX)\sigma B&= (sX)\sigma \left( s\left( \dfrac{1}{s}B\right) \right) \\&= s\left( X\sigma \left( \dfrac{1}{s}B\right) \right) \\&\ge s \left( X\sigma B\right) , \end{aligned}$$

by the homogeneity of \(\sigma \) and its monotonicity, since \(\frac{1}{s}B\ge B\). Consequently,

$$\begin{aligned} T(sX)&=\left( A+ \sum \limits _{i=1}^m M_i^*\left( (sX)\sigma B\right) M_i\right) ^{\frac{1}{p}}\\&\ge \left( A+s\sum _{i=1}^m M_i^*(X\sigma B)M_i\right) ^{\frac{1}{p}}\\&\ge \left( s^{rp} \left( A+\sum \limits _{i=1}^mM_i^*\left( X\sigma B\right) M_i\right) \right) ^{\frac{1}{p}} \\&= s^r \left( A+\sum \limits _{i=1}^mM_i^*\left( X\sigma B\right) M_i\right) ^{\frac{1}{p}} \\&=s^r T(X), \end{aligned}$$

where the second inequality uses \(s^{rp}\le s\le 1\), which is valid since \(rp>1\) and \(s\in (0,1)\).

Thus, T(X) satisfies all conditions of Lemma 7; in other words, equation (4) has a unique positive definite solution \(X^*\in \mathcal {P}_n\). \(\square \)

Now, let \(X_1, X_2, \ldots , X_m\) be initial matrices in \(\mathcal {P}_n\) and consider the multi-step stationary iterative method for the equation (4) as follows:

$$\begin{aligned} X_{l+m+1} = \left( A+\sum \limits _{i=1}^m M_i^*\left( X_{l+i}\sigma B \right) M_i\right) ^{\frac{1}{p}} \end{aligned}$$
(5)

for \(l=1,2,3,\ldots \).
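A numerical sketch of this iteration (in Python; the names are ours), taking \(\sigma =\sharp \), \(m=2\) and \(p=2\) with real coefficient matrices (so \(M_i^*=M_i^T\)); the residual of equation (4) at the limit should vanish:

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_pd(n):
    """A random n x n real positive definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def powm(A, t):
    """A^t for symmetric positive definite A, via the spectral decomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w**t) @ V.T

def gmean(A, B):
    """A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    Ah, Ahi = powm(A, 0.5), powm(A, -0.5)
    return Ah @ powm(Ahi @ B @ Ahi, 0.5) @ Ah

n, m, p = 3, 2, 2
A, B = rand_pd(n), rand_pd(n)
Ms = [rng.standard_normal((n, n)) for _ in range(m)]  # nonsingular almost surely

def step(window):
    """X_{l+m+1} = (A + sum_i M_i^* (X_{l+i} # B) M_i)^{1/p}, per (5)."""
    S = A + sum(M.T @ gmean(X, B) @ M for M, X in zip(Ms, window))
    return powm(S, 1 / p)

window = [rand_pd(n), rand_pd(n)]          # m initial matrices X_1, X_2
for _ in range(200):
    window = window[1:] + [step(window)]   # slide over the last m iterates

X = window[-1]
residual = powm(X, p) - A - sum(M.T @ gmean(X, B) @ M for M in Ms)
print(np.linalg.norm(residual) < 1e-8)  # True
```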

In the following theorem, we show that the matrix sequence \(\{X_k\}\) generated by (5) converges to \(X^*\).

Theorem 9

For any \(X_1, X_2, \ldots , X_m\in \mathcal {P}_n\), the sequence of matrices \(\{X_k\}\) generated by (5) converges to the unique positive definite solution \(X^*\) of the equation (4).

Proof

For matrices \(X_1, X_2, \ldots , X_m\) and \(X^*\), there exists \(a\in (0,1)\) such that

$$\begin{aligned} aX^*\le X_i\le a^{-1}X^*,\hspace{2.84544pt}i=1,2,\ldots , m. \end{aligned}$$
(6)

We show that for any \(b\in \mathbb {N}\), we have

$$\begin{aligned} a^{r^b}X^*\le X_k\le a^{-r^b}X^*, \hspace{2.84544pt}k=bm+i\hspace{2.84544pt}(i=1,2,\ldots , m) \end{aligned}$$
(7)

for some \(r\in (0,1)\) with \(rp>1\). Then, since \(\lim \nolimits _{b\rightarrow \infty }a^{r^b}=\lim \nolimits _{b\rightarrow \infty }a^{-r^b}=1\), the Squeeze theorem in the normal cone \(\mathcal {P}_n\) implies that \(\{X_k\}\) converges to \(X^*\).

Now, we prove (7) by mathematical induction. For \(b=0\), the inequality (7) reduces to (6). Assume that (7) is true for \(b=q-1\) for some positive integer q, that is,

$$\begin{aligned} a^{r^{q-1}}X^*\le X_{(q-1)m+i}\le a^{-r^{q-1}}X^* \end{aligned}$$
(8)

for \(k=(q-1)m+i\) and \(i=1,2,\ldots ,m\).

Since \(X_{qm+i}=T(X_{(q-1)m+i})\) and T(X) is increasing, it follows from (8) that

$$\begin{aligned} T(a^{r^{q-1}}X^*)\le T(X_{(q-1)m+i}) =X_{qm+i}\le T(a^{-r^{q-1}}X^*). \end{aligned}$$

Moreover, \(T(sX)\ge s^rT(X)\) in the case \(s\in (0,1)\) and \(T(sX)\le s^rT(X)\) in the case \(s>1\). Therefore,

$$\begin{aligned} T(a^{r^{q-1}}X^*)\ge \left( a^{r^{q-1}}\right) ^rT(X^*) =a^{r^q}T(X^*)=a^{r^q}X^* \end{aligned}$$

and

$$\begin{aligned} T\left( a^{-r^{q-1}}X^*\right) \le a^{-r^q}T(X^*)=a^{-r^q}X^*. \end{aligned}$$

So, we have

$$\begin{aligned} a^{r^q}X^*\le X_{qm+i}\le a^{-r^q}X^*. \end{aligned}$$

Thus, (7) is true, and \(\{X_k\}\) converges to \(X^*\). \(\square \)

Definition 10

([11, Definition 2.1.3]). Suppose that \(D\subset E\) and let \(T:D\times D\rightarrow E\) be an operator. We say that T is mixed monotone if for any \(x_1, x_2, y_1,y_2\in D\) with \(x_1\le x_2\) and \(y_1\ge y_2\), we have \(T(x_1,y_1)\le T(x_2,y_2)\). A point \(x^*\in D\) is a fixed point of T if it satisfies \(x^*=T(x^*, x^*)\).

Lemma 11

([11, Corollary 2.1.5]). Let \(T: \mathcal {P}_n\times \mathcal {P}_n\rightarrow \mathcal {P}_n\) be a mixed monotone operator. Suppose that there exists a constant \(r\in (0,1)\) such that

$$\begin{aligned} T\left( sx, \dfrac{1}{s}x\right) \ge s^rT(x,x), \quad x\in \mathcal {P}_n, \quad s\in (0,1). \end{aligned}$$

Then T has a unique fixed point \(x^*\in \mathcal {P}_n\).

Theorem 12

Let \(A, B\in \mathcal {P}_n\), let \(M_i\; (i=1,2,\ldots , m)\) be nonsingular matrices in \(\mathbb {M}_n\), let m be a positive integer and \(p>1\). Then, for any integer j with \(1\le j\le m\), the nonlinear matrix equation

$$\begin{aligned} X^p=A+\sum \limits _{i=1}^jM_i^*(X\sigma B)M_i+\sum \limits _{i=j+1}^mM_i^*(X^{-1}\sigma B)M_i \end{aligned}$$
(9)

has a unique positive definite solution \(X^*\in \mathcal {P}_n\).

Proof

Consider the following operator

$$\begin{aligned} T(X,Y)=\left( A+ \sum \limits _{i=1}^jM_i^*(X\sigma B)M_i+\sum \limits _{i=j+1}^mM_i^*(Y^{-1}\sigma B)M_i \right) ^\frac{1}{p}. \end{aligned}$$

For \(X_1, X_2, Y_1, Y_2\in \mathcal {P}_n\) with \(X_1\le X_2\) and \(Y_1\ge Y_2\) (so that \(Y_1^{-1}\le Y_2^{-1}\)), the monotonicity of the mean \(\sigma \) and of the function \(x^{1/p}\) yields

$$\begin{aligned} T(X_1,Y_1)&=\left( A+ \sum \limits _{i=1}^jM_i^*(X_1\sigma B)M_i+\sum \limits _{i=j+1}^mM_i^*(Y_1^{-1}\sigma B)M_i \right) ^\frac{1}{p}\\&\le \left( A+ \sum \limits _{i=1}^jM_i^*(X_2\sigma B)M_i+\sum \limits _{i=j+1}^mM_i^*(Y_2^{-1}\sigma B)M_i \right) ^\frac{1}{p}\\&=T(X_2,Y_2). \end{aligned}$$

Thus, T is mixed monotone.

Let \(X\in \mathcal {P}_n\) and \(s\in (0,1)\). Since \(p>1\), we can choose a constant \(r\in (0,1)\) with \(\dfrac{1}{p}<r<1\), and hence \(rp>1\). We have

$$\begin{aligned} T(sX, s^{-1}X)&=\left( A+ \sum \limits _{i=1}^jM_i^*((sX)\sigma B)M_i+\sum \limits _{i=j+1}^mM_i^*((s^{-1}X)^{-1}\sigma B)M_i \right) ^\frac{1}{p}\\&= \left( A+ \sum \limits _{i=1}^jM_i^*((sX)\sigma B)M_i+\sum \limits _{i=j+1}^mM_i^*((sX^{-1})\sigma B)M_i \right) ^\frac{1}{p}\\&= \left( A+ s\sum \limits _{i=1}^jM_i^*\left( X\sigma \left( \frac{1}{s}B\right) \right) M_i+s\sum \limits _{i=j+1}^mM_i^*\left( X^{-1}\sigma \left( \frac{1}{s}B\right) \right) M_i \right) ^\frac{1}{p}\\&\ge \left( s^{rp}\left( A+ \sum \limits _{i=1}^jM_i^*(X\sigma B)M_i+\sum \limits _{i=j+1}^mM_i^*(X^{-1}\sigma B)M_i \right) \right) ^\frac{1}{p}\\&= s^r \left( A+ \sum \limits _{i=1}^jM_i^*(X\sigma B)M_i+\sum \limits _{i=j+1}^mM_i^*(X^{-1}\sigma B)M_i \right) ^\frac{1}{p}\\&=s^rT(X,X). \end{aligned}$$

Therefore, since the map T(X, Y) satisfies all hypotheses of Lemma 11, the equation (9) has a unique positive definite solution \(X^*\in \mathcal {P}_n\). \(\square \)

Theorem 13

For any \(X_1, X_2, \ldots , X_m\in \mathcal {P}_n\), the matrix sequence \(\{X_k\}\) generated by

$$\begin{aligned} X_{l+m+1}=\left( A+ \sum \limits _{i=1}^jM_i^*(X_{l+i}\sigma B)M_i+\sum \limits _{i=j+1}^mM_i^*(X_{l+i}^{-1}\sigma B)M_i \right) ^\frac{1}{p} \end{aligned}$$
(10)

converges to the unique positive definite solution \(X^*\) of equation (9).

Proof

Let \(X^*\) be the unique positive definite solution of equation (9). Since the initial matrices \(X_1, X_2, \ldots , X_m\in \mathcal {P}_n\) and \(X^*\) are positive definite, there exists a constant \(a\in (0,1)\) such that

$$\begin{aligned} aX^*\le X_i\le a^{-1}X^*, \; i=1, 2, \ldots , m. \end{aligned}$$
(11)

We first show that for any \(b\in \mathbb {N}\),

$$\begin{aligned} a^{r^b}X^*\le X_k\le a^{-r^b}X^*, \; k=bm+i, i=1,2,\ldots ,m \end{aligned}$$
(12)

for some \(r\in (0,1)\) with \(rp>1\). Then, since \(\lim \nolimits _{b\rightarrow \infty }a^{r^b}=\lim \nolimits _{b\rightarrow \infty }a^{-r^b}=1\), the Squeeze theorem in the normal cone \(\mathcal {P}_n\) implies that \(\{X_k\}\) converges to \(X^*\).

We prove (12) by mathematical induction. For \(b=0\), the inequality (12) reduces to (11). Assume that (12) is true for \(b=q-1\) for some positive integer q, i.e., for \(k=(q-1)m+i \; (i=1,2,\ldots , m)\), we have

$$\begin{aligned} a^{r^{q-1}}X^*\le X_{(q-1)m+i}\le a^{-r^{q-1}}X^*. \end{aligned}$$
(13)

Note that \(X_{qm+i}=T(X_{(q-1)m+i},X_{(q-1)m+i})\) and T is mixed monotone. Therefore, it follows from (13) that

$$\begin{aligned} T(a^{r^{q-1}}X^*, a^{-r^{q-1}}X^*)\le & {} X_{qm+i}=T(X_{(q-1)m+i}, X_{(q-1)m+i})\\\le & {} T(a^{-r^{q-1}}X^*, a^{r^{q-1}}X^*). \end{aligned}$$

From \(T(sX, s^{-1}X)\ge s^rT(X,X)\), we have

$$\begin{aligned} T(a^{r^{q-1}}X^*, a^{-r^{q-1}}X^*)\ge a^{r^q}T(X^*, X^*)=a^{r^q}X^*. \end{aligned}$$
(14)

On the other hand, for \(s>1\) the inequality \(T(sX, s^{-1}X)\ge s^rT(X,X)\) is reversed. Applying this with \(s=a^{-r^{q-1}}>1\), we have

$$\begin{aligned} T(a^{-r^{q-1}}X^*, a^{r^{q-1}}X^*)\le a^{-r^q}T(X^*, X^*)=a^{-r^q}X^*. \end{aligned}$$
(15)

From (14) and (15), it implies that

$$\begin{aligned} a^{r^q}X^*\le X_{qm+i}\le a^{-r^q}X^*. \end{aligned}$$

Thus, (12) is true and \(X_k\) converges to \(X^*\). \(\square \)
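The same kind of numerical experiment supports Theorem 13. Here is a sketch (names ours) of iteration (10) with \(\sigma =\sharp \), \(m=2\), \(j=1\), \(p=2\) and real coefficient matrices, checking the residual of equation (9):

```python
import numpy as np

rng = np.random.default_rng(4)

def rand_pd(n):
    """A random n x n real positive definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def powm(A, t):
    """A^t for symmetric positive definite A, via the spectral decomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w**t) @ V.T

def gmean(A, B):
    """A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    Ah, Ahi = powm(A, 0.5), powm(A, -0.5)
    return Ah @ powm(Ahi @ B @ Ahi, 0.5) @ Ah

n, m, j, p = 3, 2, 1, 2
A, B = rand_pd(n), rand_pd(n)
Ms = [rng.standard_normal((n, n)) for _ in range(m)]  # nonsingular almost surely

def rhs(X, Y):
    """A + sum_{i<=j} M_i^* (X # B) M_i + sum_{i>j} M_i^* (Y^{-1} # B) M_i."""
    return (A + Ms[0].T @ gmean(X, B) @ Ms[0]
              + Ms[1].T @ gmean(np.linalg.inv(Y), B) @ Ms[1])

window = [rand_pd(n), rand_pd(n)]   # m initial matrices X_1, X_2
for _ in range(200):
    # X_{l+m+1} per (10): the i-th term uses the lagged iterate X_{l+i}
    window = window[1:] + [powm(rhs(window[0], window[1]), 1 / p)]

X = window[-1]
residual = powm(X, p) - rhs(X, X)
print(np.linalg.norm(residual) < 1e-8)  # True
```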

4 Concluding remark

In the case \(\sigma =\sharp _t\), the equations (4) and (9) reduce to the cases in [7]. When \(\sigma =\sharp _{1/2}=\sharp \), they are the cases considered in [6].

In the case \(p=1\), the method presented in this article still applies for the mean \(\sigma \) with representing function \(x^t\), where \(t\in (0, 1]\). For the case \(t\in (0,1)\) we refer the reader to [7], while the case \(t=1\) is straightforward and can be verified directly. However, the method does not apply to all Kubo–Ando means, as the following examples show.

Example 1

Let \(\sigma =w_l\) be the left-trivial mean, \(Aw_lB=A\) for all \(A, B\in \mathcal {P}_n\). The equation

$$\begin{aligned} X=A+M_1^* (Xw_l B)M_1, \end{aligned}$$

where \(A, B\in \mathcal {P}_n\) and \(M_1=I_n\), simplifies to \(X=A+X\). This equation has no solution since A is a positive definite matrix.

Example 2

Let \(\sigma =\nabla \) be arithmetic mean, \(A\nabla B=\frac{A+B}{2}\) for all \(A, B\in \mathcal {P}_n\), and consider the equation

$$\begin{aligned} X=A+M_1^* (X\nabla B)M_1, \end{aligned}$$

where \(A, B\in \mathcal {P}_n\) and \(M_1=\sqrt{2}I_n\). Simplifying, we get \(X = A+X+B\). This equation has no solution because A, B are positive definite matrices.