1 Introduction

Matrix \(A\in \mathbb {R}^{n\times n} \) is called stable (Routh–Hurwitz stable) if all its eigenvalues lie in the open left half-plane of the complex plane. For a stable matrix A, a perturbation \(E \in \mathbb R^{n\times n} \) may cause some eigenvalues of \(A+E\) to cross the imaginary axis, i.e., may lead to a loss of stability. Given some norm \(||\, \cdot \, ||\) in \( \mathbb {R}^{n\times n} \), the smallest perturbation E that makes \(A+E\) unstable is called the destabilizing real perturbation. It is connected with the notion of the distance to instability (stability radius) under real perturbations, which is formally defined as

$$\begin{aligned} \beta _{\mathbb {R}}(A)=\min \{||E||\,\big |\, \eta (A+E)\ge 0,E\in \mathbb {R}^{n\times n}\}. \end{aligned}$$
(1)

Here \( \eta (\cdot ) \) denotes the spectral abscissa of the matrix, i.e., the maximal real part of its eigenvalues.

The present paper is devoted to the choice of the Frobenius norm in (1), and it is thereby an extension of the investigation started by the authors in [10, 11]. It should be mentioned that while the 2-norm variant of the problem and the application of pseudospectra to its solution have been explored intensively [2, 3, 7, 12], including numerical computation of the spectral norm of a matrix [13], there are just a few studies [1, 6, 9] of the Frobenius norm counterpart. The treatment of the latter is considered far more complex than that of the former due to the fundamental difference between the spectral and Frobenius norms. We refer to the paper [11] for a discussion of the practical applications of the stated problem and for the related references. The major difficulty in utilizing numerical procedures for estimating (1) is that none of them is able to guarantee convergence to the global minimum of the distance function. As an alternative to this approach, we attack the problem with a combination of symbolic and numerical methods.

It is known that the set of stable matrices in \( \mathbb R^{n\times n} \) is bounded by two manifolds, namely the one consisting of singular matrices and the other containing the matrices with a pair of eigenvalues of opposite signs. Both boundaries are algebraic manifolds. The distance from the matrix A to the manifold of singular matrices is given by the least singular value of A. More difficult is the treatment of the second alternative, which is the focus of the present paper. In Sect. 3, the so-called distance equation [11, 14] is constructed, i.e., the univariate equation whose zero set contains all the critical values of the squared distance function. We also detail the structures of the nearest matrix \( B_{*} \) and of the corresponding matrix of the smallest perturbation \(E_{*}\) such that \( B_{*}=A+E_{*} \). A result is presented on the feasibility of a simultaneous quasi-triangular Schur decomposition for the matrices \( B_{*} \) and \( E_{*} \).

This result is utilized in Sect. 4 and Sect. 5 for the classes of stable matrices whose distance to instability \(\beta _{\mathbb {R}}(A) \) can be explicitly expressed via the eigenvalues of A; these happen to be the symmetric and the orthogonal matrices.

Remark. All the numerical computations were performed in CAS Maple 15.0 (the LinearAlgebra package and the functions discrim and resultant). We present the results of approximate computations to an accuracy of \(10^{-6}\).

2 Algebraic Preliminaries

Let \(M=[m_{jk}]_{j,k=1}^n\in \mathbb {R}^{n\times n}\) be an arbitrary matrix and

$$\begin{aligned} f(z)=\det (zI-M)=z^n+a_1z^{n-1}+\ldots +a_n \in \mathbb {R}[z] \end{aligned}$$
(2)

be its characteristic polynomial. Find the real and imaginary parts of \(f(x+\mathbf {i} y)\) (\(\{x,y\}\subset \mathbb {R}\)):

$$f(z)=f(x+ \mathbf {i} y)=\varPhi (x,y^2)+ \mathbf {i} y\varPsi (x,y^2),$$

where

$$\begin{aligned} \varPhi (x,Y)= & {} f(x)-\frac{1}{2!}f^{\prime \prime }(x)Y+\frac{1}{4!}f^{(4)}(x)Y^2-\ldots ,\\ \varPsi (x,Y)= & {} f^{\prime }(x)-\frac{1}{3!}f^{(3)}(x)Y+\frac{1}{5!}f^{(5)}(x)Y^2-\ldots . \end{aligned}$$

Compute the resultant of polynomials \(\varPhi (0,Y)\) and \( \varPsi (0,Y)\) in terms of the coefficients of (2):

$$ K(f):=\mathcal{R}_Y(\varPhi (0,Y),\varPsi (0,Y)) $$
$$\begin{aligned} =\mathcal{R}_Y(a_n-a_{n-2}Y+a_{n-4}Y^2-\dots ,a_{n-1}-a_{n-3}Y+a_{n-5}Y^2-\dots ) . \end{aligned}$$
(3)

Polynomial f(z) possesses a root with zero real part iff either \( a_n =0\) or \( K(f)=0\).
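This criterion is straightforward to prototype in a computer algebra system. Below is a hedged sketch in Python/sympy (the paper's own computations were done in Maple; the helper name K_test is ours), applied to the characteristic polynomial of the matrix A of Example 1 below:

```python
# A sketch (not the authors' Maple code): K(f) from (3) via a resultant computation.
import sympy as sp

z, Y = sp.symbols('z Y')

def K_test(f):
    """K(f) = Res_Y(Phi(0,Y), Psi(0,Y)) for a monic polynomial f in z."""
    n = int(sp.degree(f, z))
    Phi = sum((-1)**j * f.diff(z, 2*j) / sp.factorial(2*j) * Y**j
              for j in range(n // 2 + 1))
    Psi = sum((-1)**j * f.diff(z, 2*j + 1) / sp.factorial(2*j + 1) * Y**j
              for j in range(n // 2 + 1))
    return sp.resultant(Phi.subs(z, 0), Psi.subs(z, 0), Y)

f = z**3 + 23*z**2 + 167*z + 385    # = (z+5)(z+7)(z+11), a stable spectrum
print(K_test(f))                    # -3456: nonzero, and a_n = 385 is nonzero too,
                                    # so f has no roots on the imaginary axis
```

This criterion results in the following statement [11].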

Theorem 1

Equations

$$\begin{aligned} \det M = 0 \end{aligned}$$
(4)

and

$$\begin{aligned} K(f):= \mathcal{R}_Y(\varPhi (0,Y),\varPsi (0,Y))=0 \end{aligned}$$
(5)

define implicit manifolds in \( \mathbb R^{n^2} \) that compose the boundary for the domain of stability, i.e., the domain in the matrix space \( \mathbb R^{n\times n}\)

$$\begin{aligned} \mathbb P = \{ \text{ vec }\,(M)\in \mathbb R^{n^2}| M \text{ is } \text{ stable } \} . \end{aligned}$$
(6)

Here \(\mathrm{vec} (\cdot ) \) stands for the vectorization of the matrix:

$$ \mathrm{vec}\,(M)=[m_{11},m_{21},\ldots ,m_{n1},m_{12},\ldots ,m_{n2},\ldots ,m_{1n},\ldots ,m_{nn}]^{\top } . $$

Therefore, the distance to instability from a stable matrix A is computed as the smallest of the distances to the two algebraic manifolds in \( \mathbb {R}^{n^2}\). The Euclidean distance to the set of singular matrices equals the minimal singular value \( \sigma _{\min } (A) \) of the matrix A. If \(\beta _{\mathbb {R}}(A)=\sigma _{\min }(A)\), then the destabilizing perturbation is given by the rank-one matrix

$$\begin{aligned} E_{*}=-AV_{*}V_{*}^{\top }, \end{aligned}$$
(7)

where \(V_{*}\) stands for the normalized right singular vector of A corresponding to \(\sigma _{\min }(A)\).
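For illustration, a minimal numpy sketch (ours, not taken from the cited works) of the rank-one formula (7):

```python
# A sketch of (7): perturbation to the nearest singular matrix.
import numpy as np

A = np.array([[-5., 3., -4.],
              [0., -7., 8.],
              [0., 0., -11.]])                 # the stable matrix of Example 1 below
U, s, Vt = np.linalg.svd(A)
V = Vt[-1]                                     # right singular vector for sigma_min(A)
E = -A @ np.outer(V, V)                        # destabilizing perturbation (7)
print(np.linalg.norm(E, 'fro') - s[-1])        # ~0: ||E_*||_F = sigma_min(A)
print(np.linalg.svd(A + E)[1][-1])             # ~0: A + E_* is singular
```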

More complicated is the problem of distance evaluation from A to the manifold (5) corresponding to the matrices with a pair of eigenvalues of opposite signs (i.e., either \( \pm \lambda \) or \( \pm \mathbf {i} \beta \) for \( \{\lambda , \beta \} \subset \mathbb {R} \setminus \{0\}\)). First of all, the function (3), treated w.r.t. the entries of the matrix M, is not convex. Indeed, for \( n=3 \), the characteristic polynomial of the Hessian of this function is as follows:

$$ z^9 - 4 a_1 z^8- \left( 3\sum _{j,k=1}^3m_{jk}^2+7\, a_1^2\right) z^7+\dots + 4 \left( \sum _{j,k=1}^3 m_{jk}^2 +a_1^2+a_2\right) \left[ K(f)\right] ^2 z $$
$$ -8 \left[ K(f)\right] ^3 . $$

It cannot possess all its (real) zeros of the same sign, and thus the Hessian is not a sign-definite matrix. Therefore, one may expect that any gradient-based numerical procedure applied to searching for the minimum of the distance function related to the stated problem will face the traditional difficulty of distinguishing the local minima.

The general problem of finding the Euclidean distance in a multidimensional space from a point to an implicitly defined algebraic manifold can be solved via the construction of the so-called distance equation [11, 14], i.e., the univariate equation whose zero set contains all the critical values of the squared distance function. In the next section, we develop an approach for the construction of this equation for the case of the manifold (5).

3 Distance to the Manifold (5)

The starting point in this construction is the following result [15].

Theorem 2

Distance from a stable matrix \(A\in \mathbb {R}^{n\times n}\) to the manifold (5) equals

$$\begin{aligned} \sqrt{z_{\min }} \end{aligned}$$
(8)

where

$$\begin{aligned} z_{\min } =\displaystyle \min _{\{X,Y\} \subset \mathbb R^n}\left\{ ||AX||^2+||AY||^2-(X^{\top }AY)^2-(Y^{\top }AX)^2 \right\} \end{aligned}$$
(9)

subject to the constraints

$$\begin{aligned} ||X||=1,\ ||Y||=1,\ X^{\top }Y=0 , \quad (X^{\top }AY)(Y^{\top }AX)\le 0 . \end{aligned}$$
(10)

All vector norms here are 2-norms.

If \(\beta _{\mathbb {R}}(A) \) equals the value (8), attained at the vectors \(X_{*}\) and \(Y_{*}\), then the destabilizing perturbation is computed by the formula

$$\begin{aligned} E_{*}=(aX_{*}-AY_{*})Y_{*}^{\top }+(bY_{*}-AX_{*})X_{*}^{\top } \ \text{ where } \ a:=X_{*}^{\top }AY_{*},b:=Y_{*}^{\top } AX_{*}. \end{aligned}$$
(11)

It is known [5] that the matrix (11) has rank 2.

Theorem 3

[11]. If \(a\ne -b\), then the matrix (11) has a unique nonzero eigenvalue

$$\begin{aligned} \lambda _{*}=-X_{*}^{\top }AX_{*}=-Y_{*}^{\top }AY_{*} \end{aligned}$$
(12)

of the multiplicity 2.

In what follows, we will consider the most general case \(a\ne -b\).

Constructive computation of (8) is a nontrivial task. Utilization of numerical optimization procedures results in convergence to any of several local minima (including those satisfying the inappropriate condition \( a+b=0 \)); a multistart sketch of this baseline approach is given below. In [11], an approach was proposed that reduces the problem to finding an unconstrained minimum of an appropriate rational function; unfortunately, that approach is applicable only to the particular case of third-order matrices.
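The baseline numerical attack on (9)–(10) can be sketched as follows (scipy assumed; the helper name and the multistart strategy are ours). It typically recovers \(z_{\min }\) but, in line with the remark above, cannot certify that the global minimum has been reached:

```python
# A hedged multistart sketch for the constrained problem (9)-(10).
import numpy as np
from scipy.optimize import minimize

def z_min_multistart(A, trials=200, seed=0):
    n = A.shape[0]
    rng = np.random.default_rng(seed)

    def objective(v):
        X, Y = v[:n], v[n:]
        a, b = X @ A @ Y, Y @ A @ X
        return X @ A.T @ A @ X + Y @ A.T @ A @ Y - a**2 - b**2

    cons = [{'type': 'eq',   'fun': lambda v: v[:n] @ v[:n] - 1},
            {'type': 'eq',   'fun': lambda v: v[n:] @ v[n:] - 1},
            {'type': 'eq',   'fun': lambda v: v[:n] @ v[n:]},
            # (X^T A Y)(Y^T A X) <= 0, rewritten as a nonnegativity constraint
            {'type': 'ineq', 'fun': lambda v: -(v[:n] @ A @ v[n:]) * (v[n:] @ A @ v[:n])}]
    best = np.inf
    for _ in range(trials):
        res = minimize(objective, rng.standard_normal(2 * n),
                       method='SLSQP', constraints=cons)
        if res.success:
            best = min(best, res.fun)
    return best   # for the matrix A of Example 1 this lands on z_min ~ 49.502398
```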

To treat the general case, we convert the constrained optimization problem (9)–(10) into a new one with a smaller number of variables and constraints. Denote the objective function in (9) by \(F(X,Y)\), and consider the Lagrange function

$$ L(X,Y,\tau _1,\tau _2,\mu ):=F(X,Y)-\tau _1 (X^{\top }X-1)-\tau _2 (Y^{\top }Y-1)- \mu (X^{\top }Y) $$

with the Lagrange multipliers \( \tau _1, \tau _2\) and \( \mu \). Its derivatives with respect to X and Y yield the system

$$\begin{aligned} 2\,A^{\top }AX-2(X^{\top }AY)AY-2(Y^{\top }AX)A^{\top }Y-2\tau _1 X-\mu Y= & {} 0, \end{aligned}$$
(13)
$$\begin{aligned} 2\,A^{\top }AY-2(Y^{\top }AX)AX-2(X^{\top }AY)A^{\top }X-2\tau _2 Y - \mu X= & {} 0 . \end{aligned}$$
(14)

Together with the conditions (10), this algebraic system contains \(2n+3\) variables appearing in a nonlinear manner. We will make some manipulations aimed at halving the number of these variables.

Equation (13), together with two of the conditions (10), provides the Lagrange equations for the constrained optimization problem

$$ \min _{X\in \mathbb R^n} F(X,Y) \ s.t. \ X^{\top }X=1, \ X^{\top }Y=0 . $$

Since \(F(X,Y)\) is a quadratic function w.r.t. X:

$$ F(X,Y)=X^{\top } \mathfrak A(Y) X+\mathfrak {b}(Y), $$

where

$$ \mathfrak A(Y):=A^{\top }A-AYY^{\top } A^{\top } - A^{\top } YY^{\top } A, \ \mathfrak {b}(Y):=Y^{\top }A^{\top }AY , $$

one can apply the traditional method of finding its critical values [4]. First, resolve (13) w.r.t. X

$$\begin{aligned} X=\frac{\mu }{2} (\mathfrak A - \tau _1 I)^{-1} Y . \end{aligned}$$
(15)

Substitute this into \( X^{\top }X=1 \):

$$\begin{aligned} \frac{\mu ^2}{4} Y^{\top } (\mathfrak A - \tau _1 I)^{-2} Y -1 =0 \end{aligned}$$
(16)

and into \( X^{\top }Y=0 \):

$$\begin{aligned} \frac{\mu }{2} Y^{\top } (\mathfrak A - \tau _1 I)^{-1} Y=0 . \end{aligned}$$
(17)

Next, introduce a new variable z responsible for the critical values of F:

$$ z- F(X,Y)=0 $$

and substitute (15) into it. Skipping some intermediate computations, one arrives at

$$\begin{aligned} \varPhi (Y,\tau _1,\mu ,z):=z-\frac{\mu ^2}{4} Y^{\top } (\mathfrak A - \tau _1 I)^{-1} Y - \tau _1 - \mathfrak {b}(Y)=0 . \end{aligned}$$
(18)

The next step consists in the elimination of the parameters \( \tau _1\) and \( \mu \) from (16)–(18). It can be readily verified that \(\partial \varPhi / \partial \mu \) coincides, up to a sign, with the left-hand side of (17). One might expect that \(\partial \varPhi / \partial \tau _1\) similarly coincides with the left-hand side of (16). This is not the case:

$$\begin{aligned} \partial \varPhi / \partial \tau _1+ \{\text {left-hand side of } (16) \} \equiv -2 . \end{aligned}$$
(19)

Introduce the functions

$$\begin{aligned} \widetilde{\varPhi } (Y,\tau _1,\mu ,z):=\left| \begin{array}{cc} \mathfrak {A} - \tau _1 I &{} \mu /2 Y \\ \mu /2 Y^{\top } &{} z-\tau _1-\mathfrak {b}(Y) \end{array} \right| _{(n+1)\times (n+1)}, \quad \mathfrak {F}(\tau _1):=\det (\mathfrak {A} - \tau _1 I) . \end{aligned}$$
(20)

By the Schur complement formula, one has

$$\begin{aligned} \varPhi \equiv \widetilde{\varPhi } / \mathfrak F(\tau _1) . \end{aligned}$$
(21)
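The identity (21) admits a quick numerical spot check on random data (a sketch; numpy assumed):

```python
# Spot check of (21): Phi = tilde(Phi) / F(tau_1) on random data.
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
Y = rng.standard_normal(n)
tau1, mu, z = 0.3, 1.7, 2.5
Af = A.T @ A - A @ np.outer(Y, Y) @ A.T - A.T @ np.outer(Y, Y) @ A   # frak A(Y)
b = Y @ A.T @ A @ Y                                                  # frak b(Y)
R = np.linalg.inv(Af - tau1 * np.eye(n))
Phi = z - mu**2 / 4 * (Y @ R @ Y) - tau1 - b                         # (18)
M = np.block([[Af - tau1 * np.eye(n), (mu / 2 * Y)[:, None]],
              [(mu / 2 * Y)[None, :], np.array([[z - tau1 - b]])]])  # (20)
print(np.isclose(Phi, np.linalg.det(M) / np.linalg.det(Af - tau1 * np.eye(n))))  # True
```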

Replace \( \varPhi \) by \(\widetilde{\varPhi }\). From (18) deduce

$$\begin{aligned} \widetilde{\varPhi } =0 . \end{aligned}$$
(22)

From (17) one gets that

$$\begin{aligned} \partial \widetilde{\varPhi } / \partial \mu =0 . \end{aligned}$$
(23)

Under condition (22), the following relation is valid

$$ \frac{\partial \varPhi }{\partial \tau _1}\equiv \frac{\widetilde{\varPhi }^{\prime }_{\tau _1} \mathfrak F - \mathfrak F^{\prime }_{\tau _1} \widetilde{\varPhi }}{\mathfrak F^2}= \frac{\widetilde{\varPhi }^{\prime }_{\tau _1}}{\mathfrak F}. $$

In view of (19), replace (16) by

$$\begin{aligned} \widetilde{\varPhi }^{\prime }_{\tau _1}+2\, {\mathfrak F} = 0 . \end{aligned}$$
(24)

Finally, eliminate \( \tau _1 \) and \( \mu \) from (22), (23) and (24); the elimination of \( \mu \) is simplified by the fact that the polynomial \( \widetilde{\varPhi }\) is quadratic w.r.t. this parameter. An intermediate relation arising in this elimination is

$$ Y^{\top } A \cdot A^{\top } Y +\tau _1 - z = 0 . $$

The resulting equation

$$\begin{aligned} G(z,Y)=0 \end{aligned}$$
(25)

is an algebraic one w.r.t. its variables.

Conjecture 1

One has

$$ \deg _z G(z,Y)=n-1, \ \deg _Y G(z,Y)=2\,n ,$$

and the coefficient of \( z^{n-1} \) equals \( Y^{\top }Y\).

Equation (25) represents z as an implicit function of Y. We need to find the minimum of this function subject to the constraint \( Y^{\top } Y=1\). This can be done via direct elimination of one of the variables \( y_1,y_2,\dots , y_n \), say \( y_1 \), from equation (25) and \( Y^{\top } Y=1\), and further computation of the (absolute) minimum of the implicitly defined function of the variables \( y_2,\dots , y_n \). The elimination procedure for these variables consists of successive resultant computations and results, after discarding some extraneous factors, in the distance equation \( \mathcal F(z)=0\).

Conjecture 2

Generically, one has

$$ \deg \mathcal F(z) = \binom{n}{2}^2 , $$

while the number of real zeros of \( \mathcal F(z) \) is \( \ge \binom{n}{2} \).

The real zeros of \( \mathcal F(z) \) are the critical values of the squared distance function. In all the examples we have computed, the true distance is provided by the square root of the least positive zero of this equation.

Example 1

For the upper triangular matrix

$$ A=\left[ \begin{array}{rrr} -5 &{} 3 &{}-4\\ 0 &{} -7 &{} 8 \\ 0 &{} 0 &{}-11 \end{array} \right] , $$

the distance equation to the manifold (5) is as follows:

$$\begin{array}{c} \mathcal F(z):= 2761712704\,z^9-8117525391152\,z^8+9928661199130545\,z^7 \\ \\ -6661449509594611833\,z^6+2725873911089976326856\,z^5 \\ \\ -710084397702478808373248\,z^4 +117904392917228522430951424\,z^3 \\ \\ -11941405917828362824496906240\,z^2+653700309832952667775747751936\,z \\ \\ -13855088524292326555552906739712=0\end{array}$$

with real zeros

$$z_{\min } \approx 49.502398,\ z_2 \approx 178.803874,\ z_3 \approx 207.566503.$$

Distance to (5) equals \( \sqrt{z_{\min }} \approx 7.035794 \), and it is provided by the perturbation matrix

$$ E_{*} \approx \left[ \begin{array}{rrr} 4.346976 &{} 0.523508 &{} -0.557899\\ 0.705685 &{} 3.592395 &{} 1.164459 \\ -1.972167 &{} 3.053693 &{} 1.430776 \end{array} \right] . $$

Spectrum of the matrix \( B_{*} =A+E_{*} \) is \(\approx \{ -13.629850, \pm 1.273346 \, \mathbf{i} \}\).

The perturbation matrix corresponding to the zero \( z_2 \) of the distance equation is

$$ E_2\approx \left[ \begin{array}{rrr} 3.435003 &{} -5.117729 &{} -0.980014 \\ -3.957240 &{} 6.004731 &{} -0.650159\\ -0.242289 &{} -0.207877 &{} 9.360120 \end{array} \right] . $$

Spectrum of the matrix \( B_2 =A+E_2 \) is \( \approx \{ -4.200144, \pm 1.517560 \} \).

   \(\square \)
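The zeros listed in Example 1 are easy to double-check numerically; a sketch with numpy, the coefficients of \( \mathcal F \) copied verbatim:

```python
# Real zeros of the distance equation of Example 1 and the resulting distance.
import numpy as np

coeffs = [2761712704, -8117525391152, 9928661199130545,
          -6661449509594611833, 2725873911089976326856,
          -710084397702478808373248, 117904392917228522430951424,
          -11941405917828362824496906240, 653700309832952667775747751936,
          -13855088524292326555552906739712]
r = np.roots(coeffs)
real = np.sort(r[np.abs(r.imag) < 1e-6 * (1 + np.abs(r))].real)
print(real)               # ~ [49.502398, 178.803874, 207.566503]
print(np.sqrt(real[0]))   # ~ 7.035794, the distance to the manifold (5)
```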

Example 2

For the matrix

$$ A=\left[ \begin{array}{rrll} -1 &{}-4 &{}-1 &{} \;\;\;0\\ 2 &{}-3 &{} \;\;\;2 &{} \;\;\;0\\ 4 &{} 1 &{}-5 &{} -0.02 \\ 0 &{} 0 &{} \;\;\;0.1 &{} -1 \end{array}\right] , $$

the distance to the manifold (5) equals \( \sqrt{z_{\min }} \) where \( z_{\min } \approx 10.404067 \). The vectors providing this value, as the solution to the constrained optimization problem (9)–(10), are as follows:

$$ X_{*}\approx \left[ \begin{array}{r} -0.262202\\ -0.089560\\ -0.242204\\ 0.929820 \end{array}\right] , \ Y_{*}\approx \left[ \begin{array}{r} 0.719155 \\ 0.148735 \\ 0.571599 \\ 0.366015 \end{array}\right] . $$

The perturbation matrix is determined via (11):

$$ E_{*} \approx \left[ \begin{array}{rrrr} 1.550382 &{} 0.346249 &{} 1.256766 &{} 0.018654 \\ -1.735702 &{} -0.386136 &{} -1.405552 &{} -0.066067 \\ -0.125734 &{} -0.027972 &{} -0.101818 &{} -0.004775 \\ -0.061674 &{} -0.048946 &{} -0.083641 &{} 1.057733 \end{array}\right] . $$

The only nonzero eigenvalue (12) of this matrix is \( \lambda _{*} \approx 1.060080 \). The spectrum of the corresponding nearest matrix \( B_{*}=A+E_{*} \) is

$$ \mu _1\approx -5.937509, \mu _2 \approx -1.942329, \mu _{3,4}=\pm 0.066088 \, \mathbf{i} . $$
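These data permit a direct consistency check of (11) and (12); a numpy sketch with the approximate values printed above:

```python
# Reconstructing E_* of Example 2 from X_*, Y_* via (11) and checking (12).
import numpy as np

A = np.array([[-1., -4., -1.,  0.  ],
              [ 2., -3.,  2.,  0.  ],
              [ 4.,  1., -5., -0.02],
              [ 0.,  0.,  0.1, -1. ]])
X = np.array([-0.262202, -0.089560, -0.242204, 0.929820])
Y = np.array([ 0.719155,  0.148735,  0.571599, 0.366015])
a, b = X @ A @ Y, Y @ A @ X
E = np.outer(a * X - A @ Y, Y) + np.outer(b * Y - A @ X, X)   # formula (11)
print(np.linalg.matrix_rank(E))            # 2
print(-X @ A @ X, -Y @ A @ Y)              # both ~ 1.060080 = lambda_*
print(np.linalg.norm(E, 'fro')**2)         # ~ 10.404067 = z_min
print(np.linalg.eigvals(A + E))            # a pair ~ +-0.066088i on the imaginary axis
```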

Just for the sake of curiosity, let us find the real Schur decomposition [8] for the matrices \( B_{*} \) and \( E_{*}\). The orthogonal matrix

$$ P \approx \left[ \begin{array}{rrrr} 0.326926 &{} -0.579063 &{} -0.541040&{} \;\;\;0.514858\\ -0.403027&{} 0.627108 &{} -0.529829 &{}\;\;\; 0.404454\\ -0.029186&{} 0.045432&{} 0.652787&{} \;\;\;0.755614\\ 0.854304&{} 0.518994&{} -0.020604 &{} \;\;\;0.019594 \end{array} \right] $$

furnishes the lower quasi-triangular Schur decomposition for \( B_{*} \):

$$ P^{\top } B_{*} P \approx \left[ \begin{array}{ll|ll} \;\;\;0 &{} \;\;\;0.159482 &{} \;\;\;0 &{} \;\;\;0 \\ -0.027386 &{} \;\;\;0 &{} \;\;\;0 &{} \;\;\;0 \\ \hline -0.974903 &{} \;\;\;1.383580 &{} \;\;\;\mu _1 &{} \;\;\;0 \\ \;\;\;2.170730 &{} -3.675229 &{} -2.733014 &{} \;\;\;\mu _2 \end{array} \right] . $$

Eigenvalues of the upper left-corner block of this matrix

$$ \left[ \begin{array}{ll} \;\;\;0 &{}\;\;\;0.159482 \\ -0.027386 &{}\;\;\; 0 \end{array} \right] $$

equal \( \mu _{3,4}\).

Surprisingly, it turns out that the matrix P also provides the upper quasi-triangular Schur decomposition for \( E_{*} \):

$$ P^{\top } E_{*} P \approx \left[ \begin{array}{llll} \;\;\;\lambda _{*} &{} \;\;\;0 &{} -0.172898 &{} \;\;\;1.393130\\ \;\;\;0 &{} \;\;\;\lambda _{*} &{} \;\;\;0.251668 &{} -2.474365 \\ \;\;\;0 &{} \;\;\;0 &{} \;\;\;0 &{}\;\;\; 0 \\ \;\;\;0 &{} \;\;\;0 &{} \;\;\;0 &{}\;\;\; 0 \end{array} \right] . $$

   \(\square \)

The discovered property is confirmed by the following result.

Theorem 4

Let \( A\in \mathbb {R}^{n\times n}\) be a stable matrix, and let \( B_{*} \) be the matrix in the manifold (5) nearest to A, with \( E_{*} \) the corresponding destabilizing perturbation: \( B_{*} = A+E_{*} \). There exists an orthogonal matrix \( P \in \mathbb {R}^{n\times n}\) such that the matrix \( P^{\top }E_{*}P \) contains only two nonzero rows, while the matrix \(P^{\top }B_{*}P \) is of the lower quasi-triangular form.

Proof

Let the orthogonal matrix P furnish the lower quasi-triangular Schur decomposition for \(B_{*}\):

$$P^{\top }B_{*}P=\left[ \begin{array}{crccc} \widetilde{b}_{11}&{}\widetilde{b}_{12}&{}0&{}\ldots &{}0\\ \widetilde{b}_{21}&{} \widetilde{b}_{22}&{}0&{}\ldots &{}0\\ \hline &{}&{} \widetilde{\mathbf {B}} &{}&{} \end{array}\right] , $$

where \( \widetilde{\mathbf {B}} \in \mathbb {R}^{(n-2)\times n}\) is the lower quasi-triangular matrix while the matrix

$$\begin{aligned} \left[ \begin{array}{cc} \widetilde{b}_{11}&{} \widetilde{b}_{12}\\ \widetilde{b}_{21}&{} \widetilde{b}_{22} \end{array}\right] \end{aligned}$$
(26)

has its eigenvalues of the opposite signs, i.e., \(\widetilde{b}_{11}+\widetilde{b}_{22}=0\).

It turns out that the matrix P also provides the upper quasi-triangular Schur decomposition for \(E_{*}\):

$$\begin{aligned} P^{\top }E_{*}P=\left[ \begin{array}{ccccc} \lambda _{*}&{}0&{}e_{13}&{}\ldots &{}e_{1n}\\ 0&{}\lambda _{*} &{}e_{23}&{}\ldots &{}e_{2n}\\ \hline &{}&{} &{} \mathbb O_{(n-2)\times n} &{} \end{array}\right] , \end{aligned}$$
(27)

where \( \lambda _{*}\) is defined by (12). Indeed, represent \( P^{\top }E_{*}P \) as a stack matrix:

$$ P^{\top }E_{*}P=\left[ \begin{array}{c} \mathbf{E}_1 \\ \mathbf{E}_2 \end{array} \right] \quad \text{ where } \ \mathbf{E}_1 \in \mathbb {R}^{2\times n}, \ \mathbf{E}_2 \in \mathbb {R}^{(n-2)\times n} . $$

Then

$$\begin{aligned} P^{\top }AP + \left[ \begin{array}{c} \mathbf{E}_1 \\ \mathbb {O} \end{array} \right] = \mathfrak B \quad \text{ where } \ \mathfrak B:= \left[ \begin{array}{crccc} \widetilde{b}_{11}&{}\widetilde{b}_{12}&{}0&{}\ldots &{}0\\ \widetilde{b}_{21}&{} \widetilde{b}_{22}&{}0&{}\ldots &{}0\\ \hline &{}&{} \widetilde{\mathbf {B}} - \mathbf {E}_2 &{}&{} \end{array}\right] \, \end{aligned}$$
(28)

and, consequently,

$$ A+P \left[ \begin{array}{c} \mathbf{E}_1 \\ \mathbb {O} \end{array} \right] P^{\top }=P \mathfrak B P^{\top } . $$

Matrix \( \mathfrak B \) still lies in the manifold (5); so does the matrix \( P \mathfrak B P^{\top } \). If \( \mathbf{E}_2 \ne \mathbb {O} \), then the latter is closer to A than \( B_{*}\) since

$$ \left\| P \left[ \begin{array}{c} \mathbf{E}_1 \\ \mathbb {O} \end{array} \right] P^{\top } \right\| =\Vert \mathbf{E}_1 \Vert < \sqrt{\Vert \mathbf{E}_1 \Vert ^2 + \Vert \mathbf{E}_2 \Vert ^2 }=\Vert E_{*} \Vert . $$

This contradicts the minimality of \( \Vert E_{*} \Vert \). Therefore, the matrix \( P^{\top }E_{*}P \) contains only two nonzero rows, namely those composing the matrix \( \mathbf{E}_1 \).

Furthermore, the matrix \(E_{*}\) has a single real eigenvalue \(\lambda _{*}\) of the multiplicity 2 (Theorem 3). Consider the second order submatrix located in the upper-left corner of \(P^{\top }E_{*}P\):

$$\begin{aligned} \left[ \begin{array}{cc} e_{11}&{}e_{12}\\ e_{21}&{}e_{22} \end{array}\right] . \end{aligned}$$
(29)

This submatrix has the double eigenvalue \(\lambda _{*}\), and its norm is the minimal possible. Hence, it should have the following form

$$\left[ \begin{array}{cc} \lambda _{*}&{}0\\ 0&{}\lambda _{*} \end{array}\right] .$$

Indeed, let us find the minimum of the norm of (29) under the constraints

$$(e_{11}-e_{22})^2+4e_{12}e_{21}=0,\ e_{11}+e_{22}=2\lambda _{*}$$

by the Lagrange multiplier method. We have the Lagrangian function

$$F(e_{11},e_{22},e_{12},e_{21},\mu ,\nu )=\sum _{j,k=1}^2e_{jk}^2+\mu ((e_{11}-e_{22})^2+4e_{12}e_{21})+\nu (e_{11}+e_{22}-2\lambda _{*}),$$

where \(\mu \) and \(\nu \) are the Lagrange multipliers. We obtain the system of equations:

$$\begin{aligned} e_{11}+\mu (e_{11}-e_{22})+\nu= & {} 0,\\ e_{22}-\mu (e_{11}-e_{22})+\nu= & {} 0,\\ e_{12}+2\mu e_{21}= & {} 0,\\ e_{21}+2\mu e_{12}= & {} 0,\\ (e_{11}-e_{22})^2+4\,e_{12}e_{21}= & {} 0,\\ e_{11}+e_{22}-2\lambda _{*}= & {} 0 \end{aligned}$$

whence it follows that

$$\begin{aligned} e_{12}(1-4\mu ^2)= & {} 0,\\ e_{21}(1-4\mu ^2)= & {} 0,\\ e_{22}= & {} 2\lambda _{*}-e_{11},\\ \nu= & {} -\lambda _{*},\\ (e_{11}-\lambda _{*})(1+2\mu )= & {} 0,\\ (e_{11}-\lambda _{*})^2+e_{12}e_{21}= & {} 0. \end{aligned}$$
  • If \(\mu \ne \pm 1/2\), then \(e_{12}=e_{21}=0\) and \(e_{11}=e_{22}=\lambda _{*}\).

  • If \(\mu =1/2\), then \(e_{12}=-e_{21}\), after that by the fifth equation, \(e_{11}=\lambda _{*}\), by the third equation \(e_{22}=\lambda _{*}\), and by the last equation, \(e_{12}=-e_{21}=0\).

  • If \(\mu =-1/2\), then \(e_{12}=e_{21}\) and by the last equation, \(e_{11}=\lambda _{*}\) and \(e_{12}=0\).    \(\square \)

We next investigate some classes of matrices where the distance to instability can be directly expressed via the eigenvalues.

4 Symmetric Matrix

Theorem 5

Let \(\lambda _1,\lambda _2,\ldots ,\lambda _n\) be the eigenvalues of a stable symmetric matrix A arranged in descending order:

$$ \lambda _n\le \lambda _{n-1}\le \ldots \le \lambda _2\le \lambda _1<0 .$$

The distance from A to the manifold (5) equals

$$ |\lambda _1+\lambda _2|/\sqrt{2} . $$

Proof

For a symmetric matrix A, the matrix \( B_{*} \) nearest to it in the manifold (5) possesses two real eigenvalues of opposite signs. Indeed, in this case the block (26) becomes symmetric: \( \widetilde{b}_{12} = \widetilde{b}_{21} \), and its eigenvalues equal \(\pm \alpha \) where \( \alpha := \sqrt{\widetilde{b}_{11}^2+ \widetilde{b}_{12}^2}\).

Since orthogonal transformations preserve the lengths of vectors and angles between them, we can consider our problem for diagonal matrix \(A_d=\text{ diag }\,\{\lambda _1,\lambda _2, \ldots ,\lambda _n\}\). It is evident that the matrix \(E_{d*}=\text{ diag }\,\{\lambda _{*},\lambda _{*},0,\ldots ,0\}\) where \(\lambda _{*}=-(\lambda _1+\lambda _2)/2\) is such that the matrix \(B_{d*}=A_d+E_{d*}\) belongs to the manifold (5). The distance from \(A_d\) to \(B_{d*}\) equals \(|\lambda _1+\lambda _2|/\sqrt{2}\). We need to prove that this matrix \(E_{d*}\) gives us the destabilizing perturbation, i.e., its Frobenius norm is the smallest.

Assume the contrary, i.e., that there exist matrices \(\widetilde{E}_{d*},\widetilde{B}_{d*}\) and \(\widetilde{P}\) satisfying Theorem 4 such that the norm of the matrix \(\widetilde{E}_{d*}\), which coincides with the norm of the matrix

$$\begin{aligned} \widetilde{P}^{\top }\widetilde{E}_{d*}\widetilde{P}= \left[ \begin{array}{crccc} \widetilde{b}_{11}&{}\widetilde{b}_{12}&{}0&{}\ldots &{}0\\ \widetilde{b}_{12}&{} \widetilde{b}_{22}&{}0&{}\ldots &{}0\\ \hline &{}&{} \widetilde{\widetilde{\mathbf{B}}} &{}&{} \end{array}\right] -\widetilde{P}^{\top }A_d\widetilde{P}= \left[ \begin{array}{ccccc} \widetilde{\lambda }_{*}&{}0&{}\widetilde{e}_{13}&{}\ldots &{}\widetilde{e}_{1n}\\ 0&{}\widetilde{\lambda }_{*} &{}\widetilde{e}_{23}&{}\ldots &{}\widetilde{e}_{2n}\\ \hline &{}&{} &{} \mathbb O_{(n-2)\times n} &{} \end{array}\right] \end{aligned}$$

is smaller than \(||E_{d*}||\). Consider the matrix \(\widetilde{A}=\widetilde{P}^{\top }A_d\widetilde{P}=[\widetilde{a}_{ij}]_{i,j=1}^n\). Since \(\widetilde{b}_{11} = -\widetilde{b}_{22}\), one gets \(\widetilde{\lambda }_{*}=-(\widetilde{a}_{11}+\widetilde{a}_{22})/2\). Let us estimate this value:

$$\begin{array}{c} -2\widetilde{\lambda }_{*}=\lambda _1(p_{11}^2+p_{12}^2)+\lambda _2(p_{21}^2+p_{22}^2)+\ldots +\lambda _n(p_{n1}^2+p_{n2}^2)\\ =\lambda _1(p_{11}^2+p_{21}^2+\ldots +p_{n1}^2)-\lambda _1(p_{21}^2+p_{31}^2+\ldots +p_{n1}^2)\\ +\, \lambda _2(p_{12}^2+p_{22}^2+\ldots +p_{n2}^2)-\lambda _2(p_{12}^2+p_{32}^2+\ldots +p_{n2}^2)\\ +\, \lambda _1p_{12}^2+\lambda _2p_{21}^2+\lambda _3(p_{31}^2+p_{32}^2)+\ldots +\lambda _n(p_{n1}^2+p_{n2}^2)\\ =\lambda _1+\lambda _2+(\lambda _2-\lambda _1)p_{21}^2+(\lambda _3-\lambda _1)p_{31}^2+\ldots +(\lambda _n-\lambda _1)p_{n1}^2\\ +\, (\lambda _1-\lambda _2)p_{12}^2+(\lambda _3-\lambda _2)p_{32}^2+\ldots +(\lambda _n-\lambda _2)p_{n2}^2 \\ \le \lambda _1+\lambda _2+(\lambda _2-\lambda _1)p_{21}^2+(\lambda _2-\lambda _1)p_{31}^2+\ldots +(\lambda _2-\lambda _1)p_{n1}^2\\ +\,(\lambda _1-\lambda _2)p_{12}^2+(\lambda _3-\lambda _2)p_{32}^2+\ldots +(\lambda _n-\lambda _2)p_{n2}^2 \\ =\lambda _1+\lambda _2+\left[ (\lambda _2-\lambda _1)-(\lambda _2-\lambda _1)p_{11}^2-(\lambda _2-\lambda _1)p_{12}^2\right] \\ +\,(\lambda _3-\lambda _2)p_{32}^2+\ldots +(\lambda _n-\lambda _2)p_{n2}^2\le \lambda _1+\lambda _2.\end{array}$$

Both values are non-positive, therefore

$$\widetilde{\lambda }_{*}^2\ge \left( \frac{\lambda _1+\lambda _2}{2}\right) ^2.$$

Finally, we obtain

$$||\widetilde{E}_{d*}||\ge \widetilde{\lambda }_{*}\sqrt{2}\ge \lambda _{*}\sqrt{2}=||E_{d*}||,$$

which contradicts the assumption. Hence \(E_{d*}=\text{ diag }\,\{\lambda _{*},\lambda _{*},0,\ldots ,0\}\) indeed provides the destabilizing perturbation for \(A_d\).

   \(\square \)

Corollary 1

The destabilizing perturbation providing the distance in Theorem 5 is given by the rank-2 matrix

$$\begin{aligned} E_{*}=-\frac{1}{2}(\lambda _1+\lambda _2)\left( P_{[1]}P_{[1]}^{\top }+P_{[2]}P_{[2]}^{\top }\right) \end{aligned}$$
(30)

where \( P_{[1]} \) and \( P_{[2]}\) are the normalized eigenvectors of A corresponding to the eigenvalues \( \lambda _1 \) and \( \lambda _2 \), respectively.
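A hedged numpy sketch of Theorem 5 and Corollary 1 (the helper name is ours); applied to the matrix of Example 3 below, it reproduces \( E_{*} \) and the distance \( 19/\sqrt{2} \):

```python
# Distance from a stable symmetric A to the manifold (5), with E_* from (30).
import numpy as np

def symmetric_distance(A):
    w, P = np.linalg.eigh(A)        # ascending order: w[-1] = lambda_1, w[-2] = lambda_2
    lam1, lam2 = w[-1], w[-2]
    E = -0.5 * (lam1 + lam2) * (np.outer(P[:, -1], P[:, -1])
                                + np.outer(P[:, -2], P[:, -2]))
    return abs(lam1 + lam2) / np.sqrt(2), E
```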

Example 3

For the matrix

$$ A=\frac{1}{9}\left[ \begin{array}{rrr} -121&{}-14&{}34\\ -14&{}-94&{}20\\ 34&{}20&{}-118 \end{array}\right] $$

with eigenvalues \(\lambda _1=-9,\lambda _2=-10,\lambda _3=-18\), the orthogonal matrix

$$ P= \frac{1}{3}\left[ \begin{array}{rrr} 1&{}2&{}2\\ 2&{}-2&{}1\\ 2&{}1&{}-2 \end{array}\right] $$

reduces it to the diagonal form \( P^{\top } A P= \text{ diag }\,\{\lambda _1,\lambda _2,\lambda _3\} \). Distance from A to the manifold (5) equals

$$\frac{1}{\sqrt{2}}|9+10|\approx 13.435028 .$$

The corresponding destabilizing matrix is determined by (30)

$$ E_{*}=\frac{1}{18} \left[ \begin{array}{rrr} 95 &{} -38 &{} 76 \\ -38 &{} 152 &{} 38 \\ 76 &{} 38 &{} 95 \end{array} \right] . $$

It is instructive to observe the form the general distance equation takes for this example:

$$\mathcal{F}(z)=(z-729/2)(z-361/2)(z-392)(z-545)^2(z-1513/2)^2(z-1145/2)^2=0 .$$

   \(\square \)

Conjecture 3

Let \( \{\lambda _j\}_{j=1}^n \) be the spectrum of a symmetric matrix A. Denote

$$ \left\{ \varLambda _{jk}:=\frac{1}{2}(\lambda _j+\lambda _k)^2 \Bigg | 1\le j < k \le n \right\} . $$

Distance equation for A can be represented as

$$ \prod _{1\le j < k \le n} \left( z-\varLambda _{jk} \right) \cdot \prod \left( z-(\varLambda _{jk}+\varLambda _{\ell s}) \right) ^2=0 . $$

The second product is extended to all the possible pairs of indices (j, k) and \( (\ell , s)\) such that \( j<k \), \( \ell < s \) and \( (j,k)\ne (\ell ,s)\).

Corollary 2

In the notation of Theorem 5 and Corollary 1, the distance to instability for a stable symmetric matrix A equals \( |\lambda _1 | \), with the destabilizing perturbation \(E_{*} = - \lambda _1 P_{[1]}P_{[1]}^{\top } \).

Though this corollary makes the result of Theorem 5 redundant for evaluating the distance to instability of symmetric matrices, the latter result nevertheless is useful for establishing an upper bound for this distance for arbitrary matrices.

Theorem 6

Let \( A \in \mathbb R^{n\times n} \) be a stable matrix. Denote by \( d( \cdot )\) the distance to the manifold (5). One has:

$$ d(A) \le \sqrt{\left\| \frac{1}{2}\left( A-A^{\top }\right) \right\| ^2+d^2\left( \frac{1}{2}\left( A+A^{\top }\right) \right) } . $$

The proof follows from the fact that the skew-symmetric matrix \( A-A^{\top } \) is orthogonal to the symmetric matrix \( A+A^{\top } \) with respect to the inner product in \( \mathbb {R}^{n\times n} \) defined by \( \langle A_1, A_2 \rangle :={\text {trace}} (A_1^{\top } A_2) \).

For instance, this theorem yields the estimate \( d(A) < 5.654250 \) for the matrix of Example 2.
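The bound is straightforward to evaluate; a numpy sketch (the helper name is ours) reproducing this estimate:

```python
# Evaluating the bound of Theorem 6, with Theorem 5 applied to the symmetric part.
import numpy as np

def upper_bound_d(A):
    S, W = (A + A.T) / 2, (A - A.T) / 2
    w = np.linalg.eigvalsh(S)                    # (A+A^T)/2 assumed stable symmetric
    d_S = abs(w[-1] + w[-2]) / np.sqrt(2)        # Theorem 5
    return np.sqrt(np.linalg.norm(W, 'fro')**2 + d_S**2)

A = np.array([[-1., -4., -1.,  0.  ],
              [ 2., -3.,  2.,  0.  ],
              [ 4.,  1., -5., -0.02],
              [ 0.,  0.,  0.1, -1. ]])
print(upper_bound_d(A))                          # ~ 5.654250, the estimate quoted above
```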

5 Orthogonal Matrix

Now we consider how to find the distance to instability for a stable orthogonal matrix \(A\in \mathbb {R}^{n\times n}\). We assume that this matrix has at least one pair of non-real eigenvalues.

Theorem 7

Let \(\cos \alpha _j\pm \mathbf {i}\sin \alpha _j\), \(j\in \{1,\ldots ,k\}\), be the non-real eigenvalues of a stable orthogonal matrix A arranged in descending order of their real parts:

$$\cos \alpha _k\le \cos \alpha _{k-1}\le \ldots \le \cos \alpha _1<0 .$$

(All the other eigenvalues of A, if any, equal \((-1)\)). The distance from A to the manifold (5) equals \(\sqrt{2} |\cos \alpha _1 | \).

Proof

First, there exists an orthogonal transformation bringing the matrix A to the block diagonal form

$$ A_J= \left[ \begin{array}{cccccc} \mathbf{A}_1 &{} \dots &{} &{} &{} &{} \\ &{} \ddots &{} &{} &{} \mathbb {O} &{} \\ &{} &{} \mathbf{A}_k &{} &{} &{}\\ &{} &{} &{} -1 &{} &{} \\ &{} \mathbb {O} &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} &{} -1 \end{array}\right] \ \text{ where } \ \mathbf{A}_{\ell }:=\left[ \begin{array}{rr} \cos \alpha _{\ell }&{}-\sin \alpha _{\ell } \\ \sin \alpha _{\ell } &{} \cos \alpha _{\ell } \end{array} \right] ,\ \ell \in \{1,\ldots ,k\} . $$

It is evident that the matrix

$$\begin{aligned} E_{J*}=\text{ diag }\,\{-\cos \alpha _1,-\cos \alpha _1,0,\ldots ,0\} \end{aligned}$$
(31)

is such that the matrix \(B_{J*}=A_J+E_{J*}\) belongs to the manifold (5). The distance from \(A_J\) to \(B_{J*}\) equals \(\sqrt{2}|\cos \alpha _1| \). We need to prove that this matrix \(E_{J*}\) provides the destabilizing perturbation, i.e., its Frobenius norm is the smallest.

Assume the contrary, i.e., that there exist matrices \(\widetilde{E}_{J*},\widetilde{B}_{J*}\) and \(\widetilde{P}\) satisfying Theorem 4 such that the norm of the matrix \(\widetilde{E}_{J*}\), which coincides with the norm of the matrix

$$\begin{aligned} \widetilde{P}^{\top }\widetilde{E}_{J*}\widetilde{P}= \left[ \begin{array}{crccc} \widetilde{b}_{11}&{}\widetilde{b}_{12}&{}0&{}\ldots &{}0\\ \widetilde{b}_{21}&{} \widetilde{b}_{22}&{}0&{}\ldots &{}0\\ \hline &{}&{} \widetilde{\widetilde{\mathbf {B}}} &{}&{} \end{array}\right] -\widetilde{P}^{\top }A_J\widetilde{P}= \left[ \begin{array}{ccccc} \widetilde{\lambda }_{*}&{}0&{}\widetilde{e}_{13}&{}\ldots &{}\widetilde{e}_{1n}\\ 0&{}\widetilde{\lambda }_{*} &{}\widetilde{e}_{23}&{}\ldots &{}\widetilde{e}_{2n}\\ \hline &{}&{} &{} \mathbb O_{(n-2)\times n} &{} \end{array}\right] \end{aligned}$$

is smaller than \(||E_{J*}||\). Consider the matrix \(\widetilde{A}=\widetilde{P}^{\top }A_J\widetilde{P}=[\widetilde{a}_{ij}]_{i,j=1}^n\). Since \(\widetilde{b}_{11} = -\widetilde{b}_{22}\), one gets \(\widetilde{\lambda }_{*}=-(\widetilde{a}_{11}+\widetilde{a}_{22})/2\). Let us estimate this value:

$$\begin{array}{c} -2\widetilde{\lambda }_{*}=(p_{11}^2+p_{21}^2)\cos \alpha _1+(p_{31}^2+p_{41}^2)\cos \alpha _2+\ldots +(p_{2k-1,1}^2+p_{2k,1}^2)\cos \alpha _k\\ +\,(p_{12}^2+p_{22}^2)\cos \alpha _1+(p_{32}^2+p_{42}^2)\cos \alpha _2+\ldots +(p_{2k-1,2}^2+p_{2k,2}^2)\cos \alpha _k \\ -\, p_{2k+1,1}^2-\ldots -p_{n1}^2-p_{2k+1,2}^2-\ldots -p_{n2}^2.\end{array}$$

Add (and subtract) the terms \(p_{31}^2+p_{41}^2+\ldots +p_{n-1,1}^2+p_{n1}^2\) and \(p_{32}^2+p_{42}^2+\ldots +p_{n-1,2}^2+p_{n2}^2\) to the coefficients of \(\cos \alpha _1\) to obtain the sums of squares of the first and the second columns of the matrix \(\widetilde{P}\):

$$\begin{array}{c} -2\widetilde{\lambda }_{*}=2\cos \alpha _1+(\cos \alpha _2-\cos \alpha _1)(p_{31}^2+p_{41}^2+p_{32}^2+p_{42}^2)+\ldots \\ +\,(\cos \alpha _k-\cos \alpha _1)(p_{2k-1,1}^2+p_{2k,1}^2+p_{2k-1,2}^2+p_{2k,2}^2)\\ -\cos \alpha _1(p_{2k+1,1}^2+p_{2k+1,2}^2+\ldots +p_{n1}^2+p_{n2}^2)-p_{2k+1,1}^2-p_{2k+1,2}^2-\ldots -p_{n1}^2-p_{n2}^2\\ =2\cos \alpha _1+(\cos \alpha _2-\cos \alpha _1)(p_{31}^2+p_{41}^2+p_{32}^2+p_{42}^2)+\ldots \\ +\,(\cos \alpha _k-\cos \alpha _1)(p_{2k-1,1}^2+p_{2k,1}^2+p_{2k-1,2}^2+p_{2k,2}^2)\\ +\,(-1-\cos \alpha _1)(p_{2k+1,1}^2+p_{2k+1,2}^2+\ldots +p_{n1}^2+p_{n2}^2).\end{array}$$

Since

$$\cos \alpha _k-\cos \alpha _1\le \cos \alpha _{k-1}-\cos \alpha _1\le \ldots \le \cos \alpha _2-\cos \alpha _1\le 0, -1-\cos \alpha _1<0,$$

the following inequality holds

$$-2\widetilde{\lambda }_{*}\le 2\cos \alpha _1 .$$

Finally, taking into account that the double nonzero eigenvalue of the perturbation (31) is \( \lambda _{*}=-\cos \alpha _1 \), we obtain

$$||\widetilde{E}_{J*}||\ge \widetilde{\lambda }_{*}\sqrt{2}\ge \lambda _{*}\sqrt{2}=||E_{J*}||,$$

which contradicts the assumption. Hence the matrix (31) indeed provides the destabilizing perturbation for \(A_J\).

   \(\square \)

Corollary 3

The destabilizing perturbation providing the distance in Theorem 7 is given by

$$\begin{aligned} E_{*}=-\cos \alpha _1\left[ \mathfrak {R}(P_{[1]}) \mathfrak {R}(P_{[1]})^{\top }+\mathfrak {I}(P_{[1]}) \mathfrak {I}(P_{[1]})^{\top }\right] \end{aligned}$$
(32)

where \( \mathfrak {R}(P_{[1]}) \) and \( \mathfrak {I}(P_{[1]})\) are the normalized real and imaginary parts of the eigenvector of A corresponding to the eigenvalue \( \cos \alpha _1+\mathbf {i}\sin \alpha _1 \).

Matrix (32) is, evidently, symmetric. In view of Theorem 1, the following result is valid:

Corollary 4

If \( \eta ( \cdot ) \) denotes the spectral abscissa of the matrix then, taking into account that \( \sigma _{\min }(A)=1 \) for an orthogonal matrix A, the stability radius of the stable orthogonal matrix A can be evaluated by the formula

$$ \beta _{\mathbb {R}}(A) = \min \{ 1,\ \sqrt{2}\, |\eta (A)| \} . $$

(Here \( |\eta (A)|=|\cos \alpha _1| \) under the standing assumption that A possesses at least one pair of non-real eigenvalues.)
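A minimal numpy sketch of this evaluation (assuming, as above, that A is stable, orthogonal, and has a pair of non-real eigenvalues):

```python
# Stability radius of a stable orthogonal matrix via its spectral abscissa.
import numpy as np

def beta_orthogonal(A):
    eta = np.linalg.eigvals(A).real.max()      # eta(A) = cos(alpha_1) < 0
    return min(1.0, np.sqrt(2.0) * abs(eta))   # sigma_min(A) = 1 for orthogonal A

A = np.array([[-2., -2.,  1.],
              [ 1., -2., -2.],
              [-2.,  1., -2.]]) / 3
print(beta_orthogonal(A))                      # ~ 0.707106 = 1/sqrt(2), cf. Example 4
```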

Example 4

For the matrix

$$A=\frac{1}{3}\left[ \begin{array}{rrr} -2&{}-2&{}1\\ 1&{}-2&{}-2\\ -2&{}1&{}-2 \end{array}\right] $$

with the eigenvalues \(\lambda _1=-1,\lambda _{2,3}=-\frac{1}{2}\pm \mathbf {i}\frac{\sqrt{3}}{2}\), the orthogonal matrix

$$P=\frac{1}{\sqrt{6}}\left[ \begin{array}{crr} \sqrt{2}&{}2&{}0\\ \sqrt{2}&{}-1&{}-\sqrt{3}\\ \sqrt{2}&{}-1&{}\sqrt{3} \end{array}\right] $$

reduces it to the form

$$P^{\top }AP=\frac{1}{2}\left[ \begin{array}{rrr} -2&{}0&{}0\\ 0&{}-1&{}\sqrt{3}\\ 0&{}-\sqrt{3}&{}-1 \end{array}\right] .$$

The distance from A to instability equals \( 1/\sqrt{2}\approx 0.707106\). The corresponding destabilizing matrix is determined by (32)

$$E_{*}=\frac{1}{6}\left[ \begin{array}{rrr} 2&{}-1&{}-1\\ -1&{}2&{}-1\\ -1&{}-1&{}2 \end{array}\right] .$$

Distance equation for the matrix A transforms into

$$\mathcal{F}(z):=(z-1/2)(z-15/8)^2(z^2-3z+9)(z-5)^4=0 .$$

   \(\square \)

The results of the present section can evidently be extended to the case of matrices orthogonally equivalent to the block-diagonal matrices with real blocks of the types

$$ [ \lambda ] \quad \text{ and } \quad r \left[ \begin{array}{rr} \cos \alpha &{}-\sin \alpha \\ \sin \alpha &{} \cos \alpha \end{array} \right] \ ; \quad r>0, \cos \alpha< 0, \lambda < 0 . $$

6 Conclusion

We treat the problem of evaluating the Frobenius-norm real stability radius in the framework of symbolic computations, i.e., we look for a reduction of the problem to univariate algebraic equation solving. Though the obtained results clear up some issues of the problem, the latter, in its general statement, still remains open.

As mentioned in the Introduction, the main problem with exploiting numerical procedures for estimating the distance to instability is the reliability of their results. The results of the present paper can supply these procedures with test samples of matrix families with trustworthy values of the distance to instability.