Abstract
We continue the investigation of Frobenius-norm real stability radius computation started in our previous publication (LNCS, vol. 12291 (2020)). Using an elimination-of-variables procedure, we reduce the problem to solving a univariate equation. We also discuss the structure of the destabilizing perturbation matrix, as well as the cases of symmetric and orthogonal matrices, where the stability radius can be expressed explicitly via the matrix eigenvalues. Several examples are presented.
1 Introduction
A matrix \(A\in \mathbb {R}^{n\times n} \) is called stable (Routh–Hurwitz stable) if all its eigenvalues lie in the open left half of the complex plane. For a stable matrix A, a perturbation \(E \in \mathbb R^{n\times n} \) may cause eigenvalues of \(A+E\) to cross the imaginary axis, i.e., may lead to a loss of stability. Given a norm \(||\, \cdot \, ||\) in \( \mathbb {R}^{n\times n} \), a smallest perturbation E that makes \(A+E\) unstable is called a destabilizing real perturbation. It is connected with the notion of the distance to instability (stability radius) under real perturbations, formally defined as

\( \beta _{\mathbb {R}}(A)=\min \{\, ||E|| \ : \ E\in \mathbb {R}^{n\times n},\ \eta (A+E)\ge 0 \,\}. \qquad (1) \)
Here \( \eta (\cdot ) \) denotes the spectral abscissa of the matrix, i.e., the maximal real part of its eigenvalues.
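As a quick numerical aside (the paper itself proceeds symbolically), the spectral abscissa \( \eta (\cdot ) \) and the resulting stability test can be sketched as follows; the matrices are hypothetical illustrations.

```python
import numpy as np

def spectral_abscissa(A):
    """eta(A): the maximal real part of the eigenvalues of A."""
    return max(ev.real for ev in np.linalg.eigvals(A))

def is_stable(A):
    """A is Routh-Hurwitz stable iff eta(A) < 0."""
    return spectral_abscissa(A) < 0

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])      # eigenvalues -1, -3: stable
E = np.array([[2.0, 0.0],
              [0.0, 0.0]])       # pushes the eigenvalue -1 to +1
print(is_stable(A))       # True
print(is_stable(A + E))   # False
```

Computing \(\beta _{\mathbb {R}}(A)\) itself is the hard part: it is a global minimization over all destabilizing E.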
The present paper is devoted to the choice of the Frobenius norm in (1), and is thereby an extension of the investigation started by the authors in [10, 11]. While the 2-norm variant of the problem and the application of pseudospectra to its solution have been explored intensively [2, 3, 7, 12], including numerical computation of the spectral norm of a matrix [13], there are only a few studies [1, 6, 9] of the Frobenius-norm counterpart. The latter is considered far more difficult than the former owing to the fundamental difference between the spectral and Frobenius norms. We refer to [11] for a discussion of practical applications of the stated problem and for related references. The major difficulty in using numerical procedures to estimate (1) is that none of them can guarantee convergence to the global minimum of the distance function. As an alternative to this approach, we attack the problem with a combination of symbolic and numerical methods.
It is known that the set of stable matrices in \( \mathbb R^{n\times n} \) is bounded by two manifolds: one consisting of singular matrices, and the other containing matrices with a pair of eigenvalues of opposite signs. Both boundaries are algebraic manifolds. The distance from the matrix A to the manifold of singular matrices is given by the least singular value of A. More difficult is the treatment of the second alternative, which is the focus of the present paper. In Sect. 3, the so-called distance equation [11, 14] is constructed, i.e., the univariate equation whose zero set contains all the critical values of the squared distance function. We also detail the structure of the nearest matrix \( B_{*} \) and of the corresponding smallest perturbation matrix \(E_{*}\) such that \( B_{*}=A+E_{*} \). A result is presented on the feasibility of a simultaneous quasi-triangular Schur decomposition of the matrices \( B_{*} \) and \( E_{*} \).
It is utilized in Sect. 4 and Sect. 5 for the classes of stable matrices where the distance to instability \(\beta _{\mathbb {R}}(A) \) can be explicitly expressed via the eigenvalues of A; these happen to be the symmetric and the orthogonal matrices.
Remark. All numerical computations were performed in CAS Maple 15.0 (the LinearAlgebra package and the functions discrim and resultant). We present the results of approximate computations to an accuracy of \(10^{-6}\).
2 Algebraic Preliminaries
Let \(M=[m_{jk}]_{j,k=1}^n\in \mathbb {R}^{n\times n}\) be an arbitrary matrix and
be its characteristic polynomial. Find the real and imaginary parts of \(f(x+\mathbf {i} y)\) (\(\{x,y\}\subset \mathbb {R}\)):
where
Compute the resultant of polynomials \(\varPhi (0,Y)\) and \( \varPsi (0,Y)\) in terms of the coefficients of (2):
The polynomial f(z) possesses a root with zero real part iff either \( a_n =0\) or \( K(f)=0\). This results in the following statement [11].
Theorem 1
Equations
and
define implicit manifolds in \( \mathbb R^{n^2} \) that compose the boundary for the domain of stability, i.e., the domain in the matrix space \( \mathbb R^{n\times n}\)
Here \(\mathrm{vec} (\cdot ) \) stands for the vectorization of the matrix:
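The criterion behind the second boundary can be checked on small examples. The following sketch (an illustration only: the resultant is taken directly w.r.t. y, so it may differ from the paper's K(f) by a nonzero factor) tests two hypothetical cubics:

```python
import sympy as sp

y = sp.Symbol('y', real=True)
z = sp.Symbol('z')

def K(f):
    """Resultant of Re f(iy) and Im f(iy) w.r.t. y.

    Vanishes iff f has a purely imaginary (possibly zero) root; it agrees
    with the paper's K(f) up to a nonzero factor.
    """
    g = sp.expand(f.subs(z, sp.I * y))
    return sp.resultant(sp.re(g), sp.im(g), y)

f1 = z**3 + z**2 + 4*z + 4     # = (z + 1)*(z**2 + 4): roots -1, +/- 2i
f2 = z**3 + 2*z**2 + 3*z + 1   # no purely imaginary roots
print(K(f1))   # 0
print(K(f2))   # nonzero
```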
Therefore, the distance to instability from a stable matrix A is computed as the smaller of the distances to the two algebraic manifolds in \( \mathbb {R}^{n^2}\). The Euclidean distance to the set of singular matrices equals the minimal singular value \( \sigma _{\min } (A) \) of the matrix A. If \(\beta _{\mathbb {R}}(A)=\sigma _{\min }(A)\), then the destabilizing perturbation is given by the rank-one matrix
where \(V_{*}\) stands for the normalized right singular vector of A corresponding to \(\sigma _{\min }(A)\).
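This case is easy to verify numerically. The sketch below assumes the standard form of the rank-one perturbation, \(E_{*}=-\sigma _{\min }(A)\, U_{*}V_{*}^{\top }\) with \(U_{*}\) the corresponding left singular vector (the displayed formula itself is not reproduced here); the matrix is a hypothetical stable example.

```python
import numpy as np

A = np.array([[-3.0, 1.0],
              [0.5, -2.0]])            # hypothetical stable matrix
U, s, Vt = np.linalg.svd(A)
sigma_min = s[-1]
u, v = U[:, -1], Vt[-1, :]             # singular vectors for sigma_min
E = -sigma_min * np.outer(u, v)        # assumed form of the rank-one perturbation

print(np.linalg.norm(E))                   # Frobenius norm equals sigma_min
print(abs(np.linalg.det(A + E)) < 1e-12)   # True: A + E is singular
```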
More complicated is the problem of evaluating the distance from A to the manifold (5) corresponding to matrices with a pair of eigenvalues of opposite signs (i.e., either \( \pm \lambda \) or \( \pm \mathbf {i} \beta \) for \( \{\lambda , \beta \} \subset \mathbb {R} \setminus \{0\}\)). First of all, the function (3), treated w.r.t. the entries of the matrix M, is not convex. Indeed, for \( n=3 \), the characteristic polynomial of the Hessian of this function is as follows:
It cannot have all its (real) zeros of the same sign; thus, the Hessian is not a sign-definite matrix. Therefore, one may expect that any gradient-based numerical procedure applied to the search for the minimum of the distance function related to the stated problem will meet the traditional trouble of getting stuck in local minima.
The general problem of finding the Euclidean distance in a multidimensional space from a point to an implicitly defined algebraic manifold can be solved via the construction of the so-called distance equation [11, 14], i.e., the univariate equation whose zero set contains all the critical values of the squared distance function. In the next section, we develop an approach for the construction of this equation for the case of the manifold (5).
3 Distance to the Manifold (5)
The starting point in this construction is the following result [15].
Theorem 2
Distance from a stable matrix \(A\in \mathbb {R}^{n\times n}\) to the manifold (5) equals
where
All vector norms here are 2-norms.
If \(\beta _{\mathbb {R}}(A) \) equals the value (8) that is attained at the columns \(X_{*}\) and \(Y_{*}\), then the destabilizing perturbation is computed by the formula
It is known [5] that the matrix (11) has rank 2.
Theorem 3
[11]. If \(a\ne -b\), then the matrix (11) has a unique nonzero eigenvalue
of multiplicity 2.
In what follows, we will consider the most general case \(a\ne -b\).
Constructive computation of (8) is a nontrivial task. Numerical optimization procedures converge to one of several local minima (including those satisfying the inappropriate condition \( a+b=0 \)). In [11], an approach was proposed that reduces the problem to finding an unconstrained minimum of an appropriate rational function; unfortunately, it is applicable only in the particular case of third-order matrices.
To treat the general case, we convert the constrained optimization problem (9)–(10) into a new one with fewer variables and constraints. Denote the objective function in (9) by F(X, Y), and consider the Lagrange function
with the Lagrange multipliers \( \tau _1, \tau _2\) and \( \mu \). Its derivatives with respect to X and Y yield the system
Together with the conditions (10), this algebraic system is nonlinear in its \(2n+3\) variables. We now perform some manipulations aimed at halving the number of these variables.
Equation (13), together with two of the conditions (10), provides the Lagrange equations for the constrained optimization problem
Since F(X, Y) is a quadratic function w.r.t. X:
where
one can apply the traditional method of finding its critical values [4]. First, resolve (13) w.r.t. X
Substitute this into \( X^{\top }X=1 \):
and into \( X^{\top }Y=0 \):
Next, introduce a new variable z responsible for the critical values of F:
and substitute (15) into it. Skipping some intermediate computations, one arrives at
The next step consists in the elimination of the parameters \( \tau _1\) and \( \mu \) from (16)–(18). It can be readily verified that \(\partial \varPhi / \partial \mu \) coincides, up to a sign, with the left-hand side of (17). One might expect that \(\partial \varPhi / \partial \tau _1\) coincides with the left-hand side of (16). This is not the case:
Introduce the functions
Due to Schur complement formula, one has
Replace \( \varPhi \) by \(\widetilde{\varPhi }\). From (18) deduce
From (17) one gets that
Under condition (22), the following relation is valid
In view of (19), replace (16) by
Finally, eliminate \( \tau _1 \) and \( \mu \) from (22), (23) and (24) (elimination of \( \mu \) is simplified by the fact that the polynomial \( \widetilde{\varPhi }\) is a quadratic one w.r.t. this parameter):
The resulting equation
is an algebraic one w.r.t. its variables.
Conjecture 1
One has
and the coefficient of \( z^{n-1} \) equals \( Y^{\top }Y\).
Equation (25) represents z as an implicit function of Y. We need to find the minimum of this function subject to the constraint \( Y^{\top } Y=1\). This can be done via direct elimination of any one of the variables \( y_1,y_2,\dots , y_n \), say \( y_1 \), from equation (25) and \( Y^{\top } Y=1\), followed by computation of the (absolute) minimum of the implicitly defined function of the variables \( y_2,\dots , y_n \). The elimination procedure consists of successive resultant computations and results, after expelling some extraneous factors, in the distance equation \( \mathcal F(z)=0\).
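The successive-resultant elimination can be sketched on a toy analogue (hypothetical stand-in polynomials, not the actual equation (25)): here z is the value of a quadratic form on the unit circle, \( y_1 \) is eliminated by a resultant, and \( y_2 \) is expelled via the critical-point condition (a discriminant); extraneous factors appear, exactly as described above.

```python
import sympy as sp

y1, y2, z = sp.symbols('y1 y2 z')

# Stand-ins: an implicit relation between z and Y (here z = Y^T M Y for a
# sample symmetric M) and the normalization Y^T Y = 1.
g1 = z - (y1**2 + 2*y1*y2 + 3*y2**2)
g2 = y1**2 + y2**2 - 1

R = sp.resultant(g1, g2, y1)   # y1 eliminated: a polynomial in y2 and z
D = sp.discriminant(R, y2)     # y2 expelled: resultant of R and dR/dy2
print(sp.factor(D))
# Up to a constant and an extraneous factor (z - 1)**2, the result is
# (z**2 - 4*z + 2)**2; the roots 2 +/- sqrt(2) of the surviving factor are
# exactly the extreme values of the quadratic form on the unit circle.
```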
Conjecture 2
Generically, one has
while the number of real zeros of \( \mathcal F(z) \) is \( \ge {n \atopwithdelims ()2} \).
Real zeros of \( \mathcal F(z)=0 \) are the critical values of the squared distance function. In all the examples we have computed, the true distance is provided by the square root of the least positive zero of this equation (see Note 1).
Example 1
For the upper triangular matrix
the distance equation to the manifold (5) is as follows:
with real zeros
Distance to (5) equals \( \sqrt{z_{\min }} \approx 7.035794 \), and it is provided by the perturbation matrix
Spectrum of the matrix \( B_{*} =A+E_{*} \) is \(\approx \{ -13.629850, \pm 1.273346 \, \mathbf{i} \}\).
The perturbation matrix corresponding to the zero \( z_2 \) of the distance equation is
Spectrum of the matrix \( B_2 =A+E_2 \) is \( \approx \{ -4.200144, \pm 1.517560 \} \).
\(\square \)
Example 2
For the matrix
the distance to the manifold (5) equals \( \sqrt{z_{\min }} \) where \( z_{\min } \approx 10.404067 \). Vectors providing this value, as the solution to the constrained optimization problem (9)–(10), are as follows (see Note 2):
The perturbation matrix is determined via (11):
The only nonzero eigenvalue (12) of this matrix is \( \lambda _{*} \approx 1.060080 \). The spectrum of the corresponding nearest to A matrix \( B_{*}=A+E_{*} \) is
Just for the sake of curiosity, let us find the real Schur decomposition [8] for the matrices \( B_{*} \) and \( E_{*}\). The orthogonal matrix
furnishes the lower quasi-triangular Schur decomposition for \( B_{*} \):
Eigenvalues of the upper left-corner block of this matrix
equal \( \mu _{3,4}\).
Surprisingly, it turns out that the matrix P also provides the upper quasi-triangular Schur decomposition for \( E_{*} \):
\(\square \)
The discovered property is confirmed by the following result.
Theorem 4
Let \( A\in \mathbb {R}^{n\times n}\) be a stable matrix, and let \( B_{*} \) and \( E_{*} \) be the matrix in the manifold (5) nearest to A and the corresponding destabilizing perturbation, respectively: \( B_{*} = A+E_{*} \). Then there exists an orthogonal matrix \( P \in \mathbb {R}^{n\times n}\) such that the matrix \( P^{\top }E_{*}P \) contains only two nonzero rows while the matrix \(P^{\top }B_{*}P \) is of the lower quasi-triangular form.
Proof
Let the orthogonal matrix P furnish the lower quasi-triangular Schur decomposition for \(B_{*}\):
where \( \widetilde{\mathbf {B}} \in \mathbb {R}^{(n-2)\times n}\) is the lower quasi-triangular matrix while the matrix
has its eigenvalues of the opposite signs, i.e., \(\widetilde{b}_{11}+\widetilde{b}_{22}=0\).
It turns out that the matrix P also provides the upper quasi-triangular Schur decomposition for \(E_{*}\):
where \( \lambda _{*}\) is defined by (12). Indeed, represent \( P^{\top }E_{*}P \) as a stack matrix:
Then
and, consequently,
Matrix \( \mathfrak B \) still lies in the manifold (5); so does the matrix \( P \mathfrak B P^{\top } \). If \( \mathbf{E}_2 \ne \mathbb {O} \), then the latter is closer to A than \( B_{*}\) since
This contradicts the assumption. Therefore, the matrix \( P^{\top }E_{*}P \) contains only two nonzero rows, namely those composing the matrix \( \mathbf{E}_1 \).
Furthermore, the matrix \(E_{*}\) has a single real eigenvalue \(\lambda _{*}\) of multiplicity 2 (Theorem 3). Consider the second-order submatrix located in the upper-left corner of \(P^{\top }E_{*}P\):
This submatrix has the double eigenvalue \(\lambda _{*}\), and its norm is minimal. Hence, it must have the following form
Indeed, let us find the minimum of the norm of (29) under the constraints
by the Lagrange multiplier method. We have the Lagrangian function
where \(\mu \) and \(\nu \) are the Lagrange multipliers. We obtain the system of equations:
whence it follows that
-
If \(\mu \ne \pm 1/2\), then \(e_{12}=e_{21}=0\) and \(e_{11}=e_{22}=\lambda _{*}\).
-
If \(\mu =1/2\), then \(e_{12}=-e_{21}\); then, by the fifth equation, \(e_{11}=\lambda _{*}\), by the third equation, \(e_{22}=\lambda _{*}\), and, by the last equation, \(e_{12}=-e_{21}=0\).
-
If \(\mu =-1/2\), then \(e_{12}=e_{21}\) and by the last equation, \(e_{11}=\lambda _{*}\) and \(e_{12}=0\). \(\square \)
We next investigate some classes of matrices where the distance to instability can be directly expressed via the eigenvalues.
4 Symmetric Matrix
Theorem 5
Let \(\lambda _1,\lambda _2,\ldots ,\lambda _n\) be the eigenvalues of a stable symmetric matrix A arranged in descending order:
The distance from A to the manifold (5) equals
Proof
For a symmetric matrix A, the matrix \( B_{*} \) nearest in the manifold (5) possesses two real eigenvalues of opposite signs. Indeed, in this case, the block (26) becomes symmetric: \( \widetilde{b}_{12} = \widetilde{b}_{21} \), and its eigenvalues equal \(\pm \alpha \) where \( \alpha := \sqrt{\widetilde{b}_{11}^2+ \widetilde{b}_{12}^2}\).
Since orthogonal transformations preserve the lengths of vectors and the angles between them, we can consider our problem for the diagonal matrix \(A_d=\text{ diag }\,\{\lambda _1,\lambda _2, \ldots ,\lambda _n\}\). It is evident that the matrix \(E_{d*}=\text{ diag }\,\{\lambda _{*},\lambda _{*},0,\ldots ,0\}\), where \(\lambda _{*}=-(\lambda _1+\lambda _2)/2\), is such that the matrix \(B_{d*}=A_d+E_{d*}\) belongs to the manifold (5). The distance from \(A_d\) to \(B_{d*}\) equals \(|\lambda _1+\lambda _2|/\sqrt{2}\). We need to prove that this matrix \(E_{d*}\) gives us the destabilizing perturbation, i.e., that its Frobenius norm is the smallest.
Assume the converse: there exist matrices \(\widetilde{E}_{d*},\widetilde{B}_{d*}\) and \(\widetilde{P}\) satisfying Theorem 4 such that the norm of the matrix \(\widetilde{E}_{d*}\), which coincides with the norm of the matrix
is smaller than \(||E_{d*}||\). Consider the matrix \(\widetilde{A}=\widetilde{P}^{\top }A_d\widetilde{P}=[\widetilde{a}_{ij}]_{i,j=1}^n\). Since \(\widetilde{b}_{11} = -\widetilde{b}_{22}\), one gets \(\widetilde{\lambda }_{*}=-(\widetilde{a}_{11}+\widetilde{a}_{22})/2\). Let us estimate this value:
Both values are non-positive, therefore
Finally, we obtain
and it is clear that \(E_{d*}=\text{ diag }\,\{\lambda _{*},\lambda _{*},0,\ldots ,0\}\) provides the destabilizing perturbation for \(A_d\).
\(\square \)
Corollary 1
The destabilizing perturbation providing the distance in Theorem 5 is given by the rank-2 matrix
where \( P_{[1]} \) and \( P_{[2]}\) are the normalized eigenvectors of A corresponding to the eigenvalues \( \lambda _1 \) and \( \lambda _2 \), respectively.
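Theorem 5 and Corollary 1 are easy to verify numerically. A sketch with a hypothetical stable symmetric matrix, assuming (consistently with the proof above) that \(E_{*}=\lambda _{*}\,(P_{[1]}P_{[1]}^{\top }+P_{[2]}P_{[2]}^{\top })\) with \(\lambda _{*}=-(\lambda _1+\lambda _2)/2\):

```python
import numpy as np

# Hypothetical stable symmetric matrix (tridiagonal with nonzero off-diagonal
# entries, so its eigenvalues are distinct)
A = np.array([[-5.0, 1.0, 0.0],
              [1.0, -6.0, 2.0],
              [0.0, 2.0, -7.0]])

w, P = np.linalg.eigh(A)       # eigh returns eigenvalues in ascending order
lam, P = w[::-1], P[:, ::-1]   # reorder descending: lam[0] >= lam[1] >= lam[2]

lam_star = -(lam[0] + lam[1]) / 2
E = lam_star * (np.outer(P[:, 0], P[:, 0]) + np.outer(P[:, 1], P[:, 1]))

print(np.linalg.norm(E))       # Frobenius norm |lam_1 + lam_2| / sqrt(2)
ev = np.linalg.eigvalsh(A + E)
print(ev)                      # contains the pair +/- (lam_1 - lam_2) / 2
```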
Example 3
For the matrix
with eigenvalues \(\lambda _1=-9,\lambda _2=-10,\lambda _3=-18\), the orthogonal matrix
reduces it to the diagonal form \( P^{\top } A P= \text{ diag }\,\{\lambda _1,\lambda _2,\lambda _3\} \). Distance from A to the manifold (5) equals
The corresponding destabilizing matrix is determined by (30)
It is of interest to watch how the general form of the distance equation transforms for this example:
\(\square \)
Conjecture 3
Let \( \{\lambda _j\}_{j=1}^n \) be the spectrum of a symmetric matrix A. Denote
Distance equation for A can be represented as
The second product is taken over all possible pairs of indices (j, k) and \( (\ell , s)\) such that \( j<k\), \( \ell < s\), \( j\ne \ell \), and \( k \ne s\).
Corollary 2
In the notation of Theorem 5 and Corollary 1, the distance to instability for a stable symmetric matrix A equals \( |\lambda _1 | \), with the destabilizing perturbation \(E_{*} = - \lambda _1 P_{[1]}P_{[1]}^{\top } \).
Though this corollary makes the result of Theorem 5 redundant for evaluating the distance to instability of symmetric matrices, the theorem might nevertheless be useful for establishing an upper bound for this distance for arbitrary matrices.
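Corollary 2 can likewise be checked numerically: the stated rank-one perturbation shifts the largest eigenvalue of a hypothetical stable symmetric matrix to zero, at Frobenius-norm cost \( |\lambda _1| \).

```python
import numpy as np

A = np.array([[-5.0, 1.0, 0.0],
              [1.0, -6.0, 2.0],
              [0.0, 2.0, -7.0]])    # hypothetical stable symmetric matrix
w, P = np.linalg.eigh(A)
lam1 = w[-1]                        # largest (least negative) eigenvalue
E1 = -lam1 * np.outer(P[:, -1], P[:, -1])

print(np.linalg.norm(E1))           # |lam1|: the distance to instability
print(np.linalg.eigvalsh(A + E1))   # lam1 has been moved to 0
```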
Theorem 6
Let \( A \in \mathbb R^{n\times n} \) be a stable matrix. Denote by \( d( \cdot )\) the distance to the manifold (5). One has:
The proof follows from the fact that the skew-symmetric matrix \( A-A^{\top } \) is orthogonal to the symmetric matrix \( A+A^{\top } \) with respect to the inner product in \( \mathbb {R}^{n\times n} \) defined by \( \langle A_1, A_2 \rangle :={\text {trace}} (A_1^{\top } A_2) \).
For instance, this theorem yields the estimate \( d(A) < 5.654250 \) for the matrix of Example 2.
5 Orthogonal Matrix
Now we consider how to find the distance to instability for a stable orthogonal matrix \(A\in \mathbb {R}^{n\times n}\). We assume that this matrix has at least one pair of non-real eigenvalues.
Theorem 7
Let \(\cos \alpha _j\pm \mathbf {i}\sin \alpha _j\), \(j\in \{1,\ldots ,k\}\), be the non-real eigenvalues of an orthogonal matrix A, arranged in descending order of their real parts:
(All other eigenvalues of A, if any, equal \((-1)\).) The distance from A to the manifold (5) equals \(\sqrt{2} |\cos \alpha _1 | \).
Proof
First, there exists an orthogonal transformation bringing the matrix A to the block diagonal form
It is evident that the matrix
is such that the matrix \(B_{J*}=A_J+E_{J*}\) belongs to the manifold (5). The distance from \(A_J\) to \(B_{J*}\) equals \(\sqrt{2}|\cos \alpha _1| \). We need to prove that this matrix \(E_{J*}\) provides the destabilizing perturbation, i.e., its Frobenius norm is the smallest.
Assume the converse: there exist matrices \(\widetilde{E}_{J*},\widetilde{B}_{J*}\) and \(\widetilde{P}\) satisfying Theorem 4 such that the norm of the matrix \(\widetilde{E}_{J*}\), which coincides with the norm of the matrix
is smaller than \(||E_{J*}||\). Consider the matrix \(\widetilde{A}=\widetilde{P}^{\top }A_J\widetilde{P}=[\widetilde{a}_{ij}]_{i,j=1}^n\). Since \(\widetilde{b}_{11} = -\widetilde{b}_{22}\), one gets \(\widetilde{\lambda }_{*}=-(\widetilde{a}_{11}+\widetilde{a}_{22})/2\). Let us estimate this value:
Add (and subtract) the terms \(p_{31}^2+p_{41}^2+\ldots +p_{n-1,1}^2+p_{n1}^2\) and \(p_{32}^2+p_{42}^2+\ldots +p_{n-1,2}^2+p_{n2}^2\) to the coefficients of \(\cos \alpha _1\) to obtain the sums of squares of the first and the second columns of the matrix \(\widetilde{P}\):
Since
the following inequality holds
Finally, we obtain
and it is clear that the matrix (31) provides the destabilizing perturbation for \(A_J\).
\(\square \)
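The construction in the proof is easy to reproduce numerically. A sketch with a hypothetical \(3\times 3\) stable orthogonal matrix (a rotation through \(2\pi /3\), so that \(\cos \alpha _1=-1/2<0\), plus the real eigenvalue \(-1\)): cancelling the real part of the rotation block gives a perturbation of norm \(\sqrt{2}\,|\cos \alpha _1|\) landing on the manifold (5).

```python
import numpy as np

a = 2 * np.pi / 3
A = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,      -1.0]])   # orthogonal and stable

# Cancelling the real part of the rotation block lands on the manifold (5):
E = np.zeros((3, 3))
E[0, 0] = E[1, 1] = -np.cos(a)

print(np.linalg.norm(E))   # sqrt(2) * |cos a| ~ 0.707106: the claimed distance
ev = np.linalg.eigvals(A + E)
print(ev)                  # contains the pair +/- i*sin(a)
```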
Corollary 3
The destabilizing perturbation providing the distance in Theorem 7 is given as
where \( \mathfrak {R}(P_{[1]}) \) and \( \mathfrak {I}(P_{[1]})\) are the normalized real and imaginary parts of the eigenvector of A corresponding to the eigenvalue \( \cos \alpha _1+\mathbf {i}\sin \alpha _1 \).
Matrix (32) is, evidently, symmetric. In view of Theorem 1, the following result is valid:
Corollary 4
If \( \eta ( \cdot ) \) denotes the spectral abscissa of the matrix, then the stability radius of the orthogonal matrix A can be evaluated by the formula
Example 4
For the matrix
with the eigenvalues \(\lambda _1=-1,\lambda _{2,3}=-\frac{1}{2}\pm \mathbf {i}\frac{\sqrt{3}}{2}\), the orthogonal matrix
reduces it to the form
The distance from A to instability equals \( 1/\sqrt{2}\approx 0.707106\). The corresponding destabilizing matrix is determined by (32)
Distance equation for the matrix A transforms into
\(\square \)
The results of the present section can evidently be extended to the case of matrices orthogonally equivalent to the block-diagonal matrices with real blocks of the types
6 Conclusion
We treat the problem of Frobenius-norm real stability radius evaluation in the framework of symbolic computations, i.e., we look for a reduction of the problem to the solution of a univariate algebraic equation. Though the obtained results clear up some issues of the problem, the latter, in its general statement, remains open.
As mentioned in the Introduction, the main difficulty in exploiting numerical procedures for estimating the distance to instability is the reliability of the results. The results of the present paper can supply these procedures with test samples: matrix families with trustworthy values of the distance to instability.
Notes
- 1.
For the general problem of distance to arbitrary algebraic manifold, this is not always the case.
- 2.
Due to symmetry of the problem w.r.t. the entries of X and Y, the optimal solution is evaluated up to a sign.
References
Bobylev, N.A., Bulatov, A.V., Diamond, Ph.: Estimates of the real structured radius of stability of linear dynamic systems. Autom. Remote Control 62, 505–512 (2001)
Embree, M., Trefethen, L.N.: Generalizing eigenvalue theorems to pseudospectra theorems. SIAM J. Sci. Comput. 23(2), 583–590 (2002)
Freitag, M.A., Spence, A.: A Newton-based method for the calculation of the distance to instability. Linear Algebra Appl. 435, 3189–3205 (2011)
Gantmakher, F.R.: The Theory of Matrices, vol. I, II. Chelsea, New York (1959)
Guglielmi, N., Lubich, C.: Low-rank dynamics for computing extremal points of real pseudospectra. SIAM J. Matrix Anal. Appl. 34, 40–66 (2013)
Guglielmi, N., Manetta, M.: Approximating real stability radii. IMA J. Numer. Anal. 35(3), 1401–1425 (2014)
Hinrichsen, D., Pritchard, A.J.: Mathematical Systems Theory I: Modelling, State Space Analysis, Stability and Robustness. Springer, Heidelberg (2005)
Horn, R.A., Johnson, Ch.: Matrix Analysis, 2nd edn. Cambridge University Press, New York (2013)
Katewa, V., Pasqualetti, F.: On the real stability radius of sparse systems. Automatica 113, 108685 (2020)
Kalinina, E.A., Smol’kin, Yu.A., Uteshev, A.Yu.: Stability and distance to instability for polynomial matrix families. Complex perturbations. Linear Multilin. Algebra. https://doi.org/10.1080/03081087.2020.1759500
Kalinina, E.A., Smol’kin, Y.A., Uteshev, A.Y.: Routh – Hurwitz stability of a polynomial matrix family. Real perturbations. In: Boulier, F., England, M., Sadykov, T.M., Vorozhtsov, E.V. (eds.) CASC 2020. LNCS, vol. 12291, pp. 316–334. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-60026-6_18
Qiu, L., Bernhardsson, B., Rantzer, A., Davison, E.J., Young, P.M., Doyle, J.C.: A formula for computation of the real stability radius. Automatica 31(6), 879–890 (1995)
Rump, S.M.: Verified bounds for singular values, in particular for the spectral norm of a matrix and its inverse. BIT Numer. Math. 51(2), 367–384 (2011)
Uteshev, A.Yu., Goncharova, M.V.: Metric problems for algebraic manifolds: analytical approach. In: Constructive Nonsmooth Analysis and Related Topics – CNSA 2017 Proceedings 7974027 (2017)
Van Loan, C.F.: How near is a stable matrix to an unstable matrix? In: Datta, B.N., et al. (eds.) Linear Algebra and Its Role in Systems Theory 1984, Contemporary Mathematics, vol. 47, pp. 465–478. American Mathematical Society, Providence, Rhode Island (1985). https://doi.org/10.1090/conm/047
Acknowledgments
The authors are grateful to the anonymous referees for valuable suggestions that helped to improve the quality of the paper.
© 2021 Springer Nature Switzerland AG

Kalinina, E., Uteshev, A. (2021). On the Real Stability Radius for Some Classes of Matrices. In: Boulier, F., England, M., Sadykov, T.M., Vorozhtsov, E.V. (eds) Computer Algebra in Scientific Computing. CASC 2021. Lecture Notes in Computer Science, vol 12865. Springer, Cham. https://doi.org/10.1007/978-3-030-85165-1_12. Print ISBN 978-3-030-85164-4; Online ISBN 978-3-030-85165-1.