1 Introduction

Multidimensional systems are a class of systems in which information propagates in more than one dimension, in contrast with classical dynamical systems, where it propagates in a single dimension (the continuous or discrete time). The latter class of systems is usually referred to as 1-D systems, whereas multidimensional systems are also called n-D systems, where n denotes the number of dimensions in which the information propagates. Within the frequency domain approach, such a system is defined by means of a transfer function G of the following form

$$\begin{aligned} G(z_1,\ldots ,z_n)=\frac{N(z_1,\ldots ,z_n)}{D(z_1,\ldots ,z_n)}, \end{aligned}$$
(1)

where D and N are two polynomials in the complex variables \(z_1,\ldots ,z_n\) with real coefficients, i.e., \(D, \, N \in {\mathbb {R}}[z_1, \ldots , z_n]\), which are assumed to be factor coprime, i.e., \(\mathrm{gcd}(D, N)=1\), where gcd stands for the greatest common divisor of D and N. As for 1-D systems, a fundamental issue in multidimensional systems theory is stability analysis. In this paper, we are interested in testing the structural stability of discrete linear multidimensional systems defined by a multivariate rational transfer function. A system such as (1) is said to be structurally stable if the denominator D of G has no zeros in the closed unit polydisc \({\mathbb {U}}^n\) defined by:

$$\begin{aligned} {\mathbb {U}}^n :=\prod _{k=1}^n{\{z_k \in {\mathbb {C}}\ | \ |z_k| \le 1\}}. \end{aligned}$$

In other words, if \(V_{{\mathbb {C}}}(D)=\{z=(z_1, \ldots , z_n) \in {\mathbb {C}}^n \; | \; D(z)=0\}\) denotes the hypersurface formed by the complex zeros of D, then (1) is structurally stable if the following condition holds:

$$\begin{aligned} V_{{\mathbb {C}}}(D) \cap {\mathbb {U}}^n=\emptyset . \end{aligned}$$
(2)

In the context of discrete linear multidimensional systems, structural stability implies bounded-input bounded-output (BIBO) stability and, under certain conditions, other types of stability such as asymptotic or exponential stability. For further details on the connections between the different concepts of stability of multidimensional systems, see Bachelier et al. (2017) and the references therein.

The simplicity of (2) contrasts sharply with the difficulty of developing effective algorithms and efficient implementations for testing it. An important first step toward this objective was the formulation of new conditions that are equivalent to (2) but easier to handle. The following theorems, due to Strintzis (1977) and Decarlo et al. (1977), are two good representatives of these reformulations.

Theorem 1

(Strintzis 1977) Condition (2) is equivalent to:

$$\begin{aligned} \begin{array}{ll} D(0,\ldots ,0,z_n) \ne 0, &{}\quad |z_n| \le 1, \\ D(0,\ldots ,0,z_{n-1},z_n) \ne 0, &{} \quad |z_{n-1}| \le 1, \; |z_n|=1, \\ \vdots &{} \vdots \\ D(0,z_2,\ldots ,z_n) \ne 0, &{} \quad |z_2| \le 1, \; |z_i|=1, \; i=3, \ldots , n, \\ D(z_1,z_2,\ldots ,z_n) \ne 0, &{}\quad |z_1| \le 1, \; |z_i|=1, \; i=2, \ldots , n. \end{array} \end{aligned}$$

Theorem 2

(Decarlo et al. 1977) Condition (2) is equivalent to:

$$\begin{aligned} \begin{array}{ll} D(z_1,1,\ldots ,1) \ne 0, &{}\quad |z_1| \le 1, \\ D(1,z_2,1,\ldots ,1) \ne 0, &{}\quad |z_{2}| \le 1, \\ \vdots &{}\quad \vdots \\ D(1,\ldots ,1,z_n) \ne 0, &{}\quad |z_n| \le 1, \\ D(z_1,\ldots ,z_n) \ne 0, &{}\quad |z_1|=\cdots =|z_n|=1. \end{array} \end{aligned}$$

Recent algebraic methods for testing the stability of n-D discrete linear systems are mainly based on the conditions of Theorems 1 and 2.
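To make the conditions of Theorem 2 concrete, the following is a minimal numerical sketch in Python; the bivariate polynomial D and the grid resolution are hypothetical choices for illustration, and the grid scan of the torus is only a necessary check, not a certification (the symbolic methods presented below provide the actual proof):

```python
import numpy as np

# Hypothetical example: D(z1, z2) = 4 + z1 + z2, which is stable since
# |D| >= 4 - |z1| - |z2| >= 2 on the closed unit bidisc.

# First two conditions of Theorem 2: D(z, 1) = z + 5 and D(1, z) = z + 5
# must have no root of modulus <= 1.
for coeffs in ([1, 5], [1, 5]):
    assert all(abs(r) > 1 for r in np.roots(coeffs))

# Last condition: D must not vanish on the torus |z1| = |z2| = 1.
# A grid scan is only a necessary check, not a proof.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
z1, z2 = np.meshgrid(np.exp(1j * theta), np.exp(1j * theta))
min_mod = np.abs(4 + z1 + z2).min()
print(min_mod > 0)  # True
```

Here the first n conditions reduce to classical univariate root location, while the last one is exactly the torus condition (9) studied in Sect. 2.2.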

However, note that the specific case of 2-D systems has attracted considerable attention and numerous efficient tests have been proposed. See, e.g., Bistritz (1994, 2004), Hu and Jury (1994), Xu et al. (2004), Fu et al. (2010) and the references therein. Common to all these tests is the fact that they proceed recursively on the number of variables, reducing the computations with polynomials in two variables to computations with a set of univariate polynomials with algebraic coefficients by means of symbolic computation tools such as resultant and subresultant polynomials (see, e.g., Basu et al. 2006). Such a recursive approach, well suited to 2-D systems, becomes rather involved for n-D systems with \(n>2\), mainly due to the exponential growth of the degrees of the intermediate polynomials. This prevents these 2-D tests from being efficiently generalized to n-D systems.

For n-D systems with \(n>2\), very few implementations for the stability analysis exist. Among the recent work on this problem, one can mention the work of Serban and Najim (2007) where, using an extension of the 1-D Schur–Cohn criterion, a new stability condition is proposed as an alternative to the conditions of Theorems 1 and 2. As a result, the stability is expressed as a positivity condition of \(n-1\) polynomials on the unit polycircle:

$$\begin{aligned} {\mathbb {T}}^{n-1} :=\prod _{k=1}^{n-1} \left\{ z_k \in {\mathbb {C}}\, | \; |z_k|=1 \right\} . \end{aligned}$$

Unfortunately, such a condition becomes considerably hard to test effectively as soon as the system involves high degrees or many variables. To achieve practical efficiency, Dumitrescu (2006, 2008) proposes a sum of squares approach to test the last condition of Theorem 2. The proposed method is however conservative, i.e., it provides only a sufficient stability condition. Finally, LMI tests also exist for \(n=2\) (Bachelier et al. 2016a, b). To sum up, the existing stability tests for n-D systems with \(n > 2\) are either nonconservative but inefficient, or efficient (polynomial time) but conservative.

The contribution of this paper is threefold. First, a new algebraic approach for testing the stability of n-D systems is presented. Our approach, which starts with the stability conditions given by Decarlo et al. (1977) (see Theorem 2), transforms the problem of testing these conditions into that of deciding the existence of real zeros in certain algebraic or semi-algebraic sets. State-of-the-art real algebraic geometry techniques can then be used for this purpose. Unlike its existing counterparts, this new approach is not conservative. Moreover, it shows good practical performance for relatively small dimensions n.

Secondly, we address the specific case of 2-D systems with the main objective of achieving practical efficiency. Following the same approach as for n-D systems, but taking advantage of recent developments in solving bivariate algebraic systems of equations (see Bouzidi 2014b), we propose a stability test based on the existence of real solutions of bivariate algebraic systems that is efficient in practice. Namely, this test makes use of the software RS3 (Rouillier 2012), which provides very efficient tools for the symbolic solving of bivariate systems of equations.

Finally, the above 2-D stability test is extended in order to handle system parameters. More precisely, using the concept of discriminant variety developed in the computer algebra community (Lazard and Rouillier 2007), we provide a new method which, given a 2-D system depending on an arbitrary set of real parameters, decomposes the parameter space into regions inside which the stability is guaranteed.

The plan of the paper is the following. In Sect. 2.1, we first reformulate the stability condition (2) as the emptiness of a certain semi-algebraic set. We then present state-of-the-art computer algebra techniques (namely, Cylindrical Algebraic Decomposition, critical point methods and Rational Univariate Representations) which can be used to effectively study this last problem. In Sect. 2.2, we present a new approach for testing the last condition of Theorem 2 based on the Möbius transformation and a critical point method. Algorithms are presented. They are then illustrated with explicit examples in Sect. 2.3, where their implementations in Maple are discussed and timings are shown. A new stability test for 2-D systems is presented in Sect. 3, based on a recent approach, developed by two of the authors, for the efficient computation of the real solutions of systems of two polynomial equations in two variables. This approach relies on the efficient computation of Rational Univariate Representations using resultants, subresultant sequences and triangular polynomial systems. Finally, Sect. 4 shows how to use the concept of a discriminant variety, developed by one of the authors, to study the stability of 2-D systems with parameters. We show how the parameter space can be explicitly decomposed into cells such that the 2-D system is either stable or unstable for all values of the parameters belonging to each cell.

2 Stability of n-D discrete linear systems

In this section, Sect. 2.1 first gives a broad overview of computer algebra methods for computing the real zeros of semi-algebraic sets (namely, finite unions of sets defined by polynomial equations and inequalities) and recalls the basic ideas behind these methods. Then, Sect. 2.2 shows how to obtain new stability conditions that can be tested efficiently using the previously introduced computer algebra methods. Finally, Sect. 2.3 illustrates our stability test on non-trivial examples and shows its practical efficiency through experimental tests.

2.1 Computer algebra methods

Recall that the transfer function G defined by (1) is said to be structurally unstable if the set

$$\begin{aligned} E :=V_{{\mathbb {C}}}(D) \cap {\mathbb {U}}^n=\{ (z_1, \ldots , z_n) \in {\mathbb {C}}^n \; | \; D ( z_1, \ldots , z_n)= 0, \; | z_1 | \le 1, \ldots , | z_n | \le 1\} \end{aligned}$$

is not empty. The set E is a semi-algebraic subset of \({\mathbb {R}}^{2 n}\). Indeed, if we write \(z_k :=x_k+i \, y_k\), where \(x_k\) (resp., \(y_k\)) is the real part (resp., the imaginary part) of \(z_k\) and i is the imaginary unit, then the polynomial D can be rewritten as follows

$$\begin{aligned} D(z_1, \ldots , z_n)={{{\mathcal {R}}}}(x_1, \ldots , x_n, y_1, \ldots , y_n)+i \, {{\mathcal {I}}}(x_1, \ldots , x_n, y_1, \ldots , y_n), \end{aligned}$$

where \({{\mathcal {R}}}, \, {{\mathcal {I}}} \in {\mathbb {R}}[x_1, \ldots , x_n, y_1, \ldots , y_n]\), and the inequalities \(|z_k| \le 1\) are equivalent to \(x_k^2+y_k^2 \le 1\) for \(k=1, \ldots , n\), which shows that E is equivalently defined by the following semi-algebraic set:

$$\begin{aligned} \begin{aligned} E&\cong \{(x_1, \ldots , x_n, y_1, \ldots , y_n) \in {\mathbb {R}}^{2n} \; | \\&\; \quad {{\mathcal {R}}}(x_1, \ldots , x_n, y_1, \ldots , y_n)=0, \; \mathcal{I}(x_1, \ldots , x_n, y_1, \ldots , y_n)=0, \\&\;\quad x_k^2+y_k^2 \le 1, \; k=1, \ldots , n \}. \end{aligned} \end{aligned}$$
(3)

Testing (2) is thus equivalent to testing that the above semi-algebraic set is empty. This test can be performed using classical computer algebra methods for computing the real zeros of semi-algebraic systems which will be briefly overviewed in the next section.
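As a toy illustration of the reformulation (3), consider the hypothetical 1-D case \(D(z)=z-2\); the sketch below uses sympy (not part of the original text) to form \({\mathcal {R}}\) and \({\mathcal {I}}\) and check the emptiness of E:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Hypothetical 1-D instance of (3): D(z) = z - 2, whose only zero
# z = 2 lies outside the closed unit disc.
D = (x + sp.I * y) - 2
R = sp.re(sp.expand(D))    # real part:      x - 2
Im = sp.im(sp.expand(D))   # imaginary part: y

# Solve R = 0, I = 0 and keep the solutions inside the unit disc.
sols = sp.solve([sp.Eq(R, 0), sp.Eq(Im, 0)], [x, y], dict=True)
inside = [s for s in sols if bool(s[x]**2 + s[y]**2 <= 1)]
print(inside)  # [] -> the set E is empty: structural stability
```

The same construction applies verbatim for \(n>1\), at the cost of doubling the number of real variables.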

2.1.1 Cylindrical algebraic decomposition and critical point methods

To study the real zeros of semi-algebraic sets, two classes of symbolic algorithms are available: the algorithms based on Cylindrical Algebraic Decompositions (CAD) and those based on the study of the critical points of well-chosen functions (see, e.g., Basu et al. 2006).

The Cylindrical Algebraic Decomposition (CAD) Originally introduced by Collins (1975), the CAD has become a standard tool for the study of the real zeros of semi-algebraic sets. CAD refers both to an object and to an algorithm for computing this object. In short, a CAD associated to a finite set of polynomials \(F=\{ P_1, \ldots , P_s \} \subset {\mathbb {R}}[ x_1, \ldots , x_n]\) is a partition of \({\mathbb {R}}^n\) into connected semi-algebraic sets, called cells, on which each polynomial \(P_i\) has a constant sign (i.e., either \(+\), − or 0). Such a partition is called F-invariant. For instance, a CAD of a set of univariate polynomials in \({\mathbb {R}}[x]\) is a union of points and open intervals that form a partition of \({\mathbb {R}}\). Let \({\varPi }_k: {\mathbb {R}}^n \longrightarrow {\mathbb {R}}^{n-k}\) denote the projection onto the first \(n-k\) components of \({\mathbb {R}}^n\). The CAD is called cylindrical since, for every two cells \(c_1\) and \(c_2\), we either have \({\varPi }_k(c_1)={\varPi }_k(c_2)\) or \({\varPi }_k(c_1) \cap {\varPi }_k(c_2)=\emptyset \). This implies that the images of the cells by \({\varPi }_k\) define a cylindrical decomposition of \({\mathbb {R}}^{n-k}\).

Example 1

A CAD associated to \(P=x_1^2+x_2^2-1 \in {\mathbb {R}}[x_1,x_2]\) is a partition of \({\mathbb {R}}^2\) into the following semi-algebraic sets (cells), on each of which the polynomial P has a constant sign:

  • \(C_{1} = \{ (x_1, x_2) \in {\mathbb {R}}^2 \; | \; x_1 < -1 \}\),

  • \(C_{2} = \{ (x_1, x_2) \in {\mathbb {R}}^2 \; | \; x_1 = -1, \; x_2 < 0 \}\),

  • \(C_{3} = \{ (x_1, x_2) \in {\mathbb {R}}^2 \; | \; x_1 = -1, \; x_2 = 0 \}\),

  • \(C_{4} = \{ (x_1, x_2) \in {\mathbb {R}}^2 \; | \; x_1 = -1, \; x_2 > 0 \}\),

  • \(C_{5} = \{ (x_1, x_2) \in {\mathbb {R}}^2 \; | \; -1< x_1 < 1, \; x_1^2 + x_2^2 - 1 > 0, \; x_2 < 0 \}\),

  • \(C_{6} = \{ (x_1, x_2) \in {\mathbb {R}}^2 \; | \; -1< x_1 < 1, \; x_1^2 + x_2^2 - 1 = 0, \; x_2 < 0 \}\),

  • \(C_{7} = \{ (x_1, x_2) \in {\mathbb {R}}^2 \; | \; -1< x_1 < 1, \; x_1^2 + x_2^2 - 1 < 0 \}\),

  • \(C_{8} = \{ (x_1, x_2) \in {\mathbb {R}}^2 \; | \; -1< x_1 < 1, \; x_1^2 + x_2^2 - 1 = 0, \; x_2 > 0 \}\),

  • \(C_{9} = \{ (x_1, x_2) \in {\mathbb {R}}^2 \; | \; -1< x_1 < 1, \; x_1^2 + x_2^2 - 1 > 0, \; x_2 > 0 \}\),

  • \(C_{10} = \{ (x_1, x_2) \in {\mathbb {R}}^2 \; | \; x_1 = 1, \; x_2 < 0\}\),

  • \(C_{11} = \{ (x_1, x_2) \in {\mathbb {R}}^2 \; | \; x_1 = 1, \; x_2 = 0\}\),

  • \(C_{12} = \{ (x_1, x_2) \in {\mathbb {R}}^2 \; | \; x_1 = 1, \; x_2 > 0\}\),

  • \(C_{13} = \{ (x_1, x_2) \in {\mathbb {R}}^2 \; | \; x_1 > 1 \}\).
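The sign-invariance of P on each cell can be checked numerically; the sketch below (an illustration, not part of the original text) evaluates P at one hypothetical sample point per cell:

```python
# Sanity check: evaluate P = x1^2 + x2^2 - 1 at one sample point per
# cell of Example 1 and compare with the expected constant sign.
def sign(v):
    return (v > 0) - (v < 0)

P = lambda x1, x2: x1**2 + x2**2 - 1

samples = [  # (sample point, expected sign of P on the cell)
    ((-2.0,  0.0),  1),  # C1
    ((-1.0, -1.0),  1),  # C2
    ((-1.0,  0.0),  0),  # C3
    ((-1.0,  1.0),  1),  # C4
    (( 0.0, -2.0),  1),  # C5: below the circle
    (( 0.0, -1.0),  0),  # C6: lower half of the circle
    (( 0.0,  0.0), -1),  # C7: open unit disc
    (( 0.0,  1.0),  0),  # C8: upper half of the circle
    (( 0.0,  2.0),  1),  # C9: above the circle
    (( 1.0, -1.0),  1),  # C10
    (( 1.0,  0.0),  0),  # C11
    (( 1.0,  1.0),  1),  # C12
    (( 2.0,  0.0),  1),  # C13
]
print(all(sign(P(x1, x2)) == s for (x1, x2), s in samples))  # True
```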

The CAD algorithm mainly consists of two distinct phases: a projection phase and a lifting phase. During the projection phase, one proceeds recursively on the number of variables. Starting from the initial set of polynomials \(F_1=\{P_1,\ldots , P_s\}\), a first set of polynomials \(F_2 \subset {\mathbb {R}}[x_1,\ldots ,x_{n-1}]\) is computed with the property that a partition of \({\mathbb {R}}^{n-1}\) that is \(F_2\)-invariant naturally lifts to a partition of \({\mathbb {R}}^{n}\) that is \(F_1\)-invariant. Then, from \(F_2\), another set \(F_3 \subset {\mathbb {R}}[x_1,\ldots ,x_{n-2}] \) with the same property is computed, and so on, until a set of univariate polynomials \(F_n\) is obtained. At each step, the projection essentially consists in computing resultants (and subresultants) for all possible pairs of polynomials, as well as their discriminants (see, e.g., Basu et al. 2006).

Example 2

The projection phase for Example 1 consists in only one projection and yields the discriminant \(-4\,x_1^2+4\) of \(P=x_1^2+x_2^2-1\) with respect to \(x_2\).
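This projection can be reproduced with a computer algebra system; as an illustrative sketch (sympy standing in for any such system):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
P = x1**2 + x2**2 - 1

# Projection step of the CAD: eliminate x2 by taking the discriminant
# of P with respect to x2; it equals -4*x1**2 + 4, as in Example 2.
d = sp.discriminant(P, x2)
print(sp.expand(d))
```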

Starting from the set of univariate polynomials, the lifting step, which also proceeds recursively, consists in isolating the real roots of univariate polynomials with real algebraic numbers as coefficients, which can be viewed as solving a so-called triangular zero-dimensional system, namely, a system with a finite number of complex solutions having a triangular form.

Example 3

Continuing Example 2, the real roots of \(-4\,x_1^2+4\) are first isolated, which yields a partition of \({\mathbb {R}}\) that consists of the cells \(] - \infty , -1 [\), \(\{ -1 \}\), \(] -1, 1 [\), \(\{ 1 \}\) and \(] 1, + \infty [\). Then, the real roots of the polynomial \(x_1^2+x_2^2-1\) are isolated for each \(x_1\) in these cells, which yields the partition of \({\mathbb {R}}^2\) given in Example 1.

Although the computation of the CAD answers our problem, i.e., deciding the existence of real zeros, the partition of \({\mathbb {R}}^n\) given by the CAD provides more information than required. Moreover, its computation requires a number of arithmetic operations which is doubly exponential in the number of variables of the polynomial ring, i.e., n, due, at least, to the iterative computation of the resultants (and subresultants). Despite this, it is worth mentioning that, to some extent, most of the existing algorithms for testing the stability of multidimensional systems can be viewed as particular variants of the CAD.

Critical point methods Instead of computing a complete partition of the real space, critical-point-based methods compute at least one point in each real connected component of a given semi-algebraic set, which is sufficient to answer our question. Roughly speaking, the basic principle of these methods is to compute the critical points of a well-chosen function restricted to an algebraic set. Under certain conditions, the set of critical points is defined by a zero-dimensional polynomial system (i.e., a system admitting a finite number of complex solutions) and meets each real connected component of the original semi-algebraic set.

Example 4

We consider again the polynomial given in Example 1, i.e., \(P=x_1^2+x_2^2-1\). Our goal is to compute at least one point in the single real connected component of the algebraic set \(V_{\mathbb {R}}(P)=\{\alpha =(\alpha _1,\alpha _2) \in {\mathbb {R}}^2 \ | \ P(\alpha )=0 \}\). Let \({\varPi }: {\mathbb {R}}^2 \longrightarrow {\mathbb {R}}\) denote the projection onto the first component \(x_1\) of \((x_1, \, x_2) \in {\mathbb {R}}^2\). The critical points of \({\varPi }\) restricted to \(V_{\mathbb {R}}(P)\) are the points of \(V_{\mathbb {R}}(P)\) on which the derivative \(\frac{\partial P}{\partial x_2}\) vanishes. They are given as the real zeros of the following system

$$\begin{aligned} \begin{aligned}&P=x_1^2+x_2^2-1=0, \\&\dfrac{\partial P}{\partial x_2}=2\,x_2=0, \end{aligned} \end{aligned}$$

which yields (1, 0) and \((-1,0)\). These points belong to the real connected component of \(P=0\).
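This small system can be solved directly; the following sketch (sympy used as an illustration) reproduces the computation of Example 4:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
P = x1**2 + x2**2 - 1

# Critical points of the projection (x1, x2) |-> x1 restricted to the
# circle P = 0: real zeros of {P = 0, dP/dx2 = 0}.
sols = sp.solve([P, sp.diff(P, x2)], [x1, x2])
print(sorted(sols))  # [(-1, 0), (1, 0)]
```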

Let us consider the real hypersurface

$$\begin{aligned} V_{{\mathbb {R}}} (P) :=\{ \alpha =(\alpha _1, \ldots , \alpha _n) \in {\mathbb {R}}^n \ | \ P ( \alpha ) = 0 \} \end{aligned}$$

defined by \(P \in {\mathbb {Q}}[x_1, \ldots , x_n]\). Let us suppose that \(V_{{\mathbb {R}}} (P)\) is smooth, i.e., it does not contain singular points, namely, points where the rank of the Jacobian matrix is not maximal, and that \(V_{{\mathbb {R}}} (P)\) is compact (i.e., closed and bounded) for the Euclidean topology (see, e.g., Cox et al. 2007). The set of critical points of the projection with respect to some variable \(x_i\) restricted to \(V_{{\mathbb {R}}}(P)\), defined by

$$\begin{aligned} {\mathcal {C}}_{{\varPi }}( V_{{\mathbb {R}}}(P)) :=\left\{ \alpha \in V_{{\mathbb {R}}}(P) \ | \ \frac{\partial P}{\partial x_k}(\alpha )=0, \ \forall \; k \in \{1,\ldots ,i-1,i+1,\ldots ,n \} \right\} , \end{aligned}$$
(4)

is finite and meets each real connected component of \(V_{{\mathbb {R}}}(P)\) (see Bank et al. 2001; Safey El Din and Schost 2003).

There are several ways to circumvent the hypotheses (i.e., compactness and smoothness). For instance, we can use a distance function to a fixed point instead of the projection function to get rid of the compactness assumption (Seidenberg 1954), deform the variety to get a compact and smooth one (Rouillier et al. 2000), introduce more general notions of critical points (Bank et al. 2005), or separately study the subsets of singular points of the variety (Aubry et al. 2002).

For systems of polynomial equations and, more generally, systems of polynomial inequalities, several strategies have been proposed by different authors (see, e.g., Basu et al. 2006; Bank et al. 2001). Some are based on the use of sums of squares to reduce the problem of studying an algebraic set to that of studying a hypersurface (Basu et al. 2006), on the use of infinitesimal deformations, adding variables to avoid singularities and to deal with inequalities (Basu et al. 2006), or on the introduction of a special kind of critical points (generalized critical values) to circumvent the compactness hypothesis. But the basic ideas remain the same.

As already said, critical point methods compute less information than the CAD, but this information is sufficient in our case since we only have to decide whether a semi-algebraic set is empty. Moreover, a key advantage of these methods is that they transform the problem into solving a zero-dimensional polynomial system, and this transformation is performed within a number of arithmetic operations that is singly exponential in the number n of variables.

Remark 1

In practice, for systems depending on strictly fewer than 3 variables, the use of the CAD is usually preferred since it provides more information than the critical point methods for a negligible additional cost.

2.1.2 Symbolic resolution of univariate polynomials and zero-dimensional systems

The methods described in Sect. 2.1.1 rely on the resolution of univariate polynomials and, more generally, of zero-dimensional polynomial systems. For the stability analysis of multidimensional systems, we mainly have to decide whether or not a polynomial system admits real solutions. For polynomial systems with a finite number of solutions, an additional processing step turns this last problem into that of computing a univariate parameterization of all the solutions.

Given a zero-dimensional polynomial system and the ideal \(I \subset {\mathbb {Q}}[x_1, \ldots , x_n]\) generated by the corresponding polynomials, a Rational Univariate Representation (RUR) of \(V_{{\mathbb {C}}}(I)\) is defined by a separating linear form \(t :=a_1 \, x_1 + \cdots + a_n \, x_n\) and univariate polynomials \(f, f_1, f_{x_1}, \ldots , f_{x_n} \in {\mathbb {Q}}[ t]\) such that we have the following one-to-one correspondence between \(V_{{\mathbb {C}}}(I)\) and \(V_{{\mathbb {C}}} (f)\):

$$\begin{aligned} \begin{array}{ccc} V_{{\mathbb {C}}}(I) &{} \longleftrightarrow &{} V_{{\mathbb {C}}}(f) \\ \alpha =(\alpha _1, \ldots , \alpha _n) &{} \longmapsto &{} t(\alpha )=a_1 \, \alpha _1 + \cdots + a_n \, \alpha _n \\ \left( \dfrac{f_{x_1}(\beta )}{f_1(\beta )}, \ldots , \dfrac{f_{x_n}(\beta )}{f_1(\beta )} \right) &{} \longleftarrow &{} \beta \end{array} \end{aligned}$$
(5)

If (5) is a RUR of \(V_{{\mathbb {C}}}(I)\), then the bijection \(\phi _t\) between the zeros of I and those of f preserves the multiplicities and the real zeros (Rouillier 1999).

According to this definition, the computation of a Rational Univariate Representation can be divided into two independent parts. The first one consists in computing a separating linear form for V(I), that is, a linear combination \(a_1 \, x_1 + \cdots + a_n \, x_n\) of the variables such that the following map

$$\begin{aligned} \begin{aligned} V(I)&\longrightarrow {\mathbb {C}}\\ (\alpha _1,\ldots ,\alpha _n)&\longmapsto a_1 \, \alpha _1 + \cdots + a_n \, \alpha _n \end{aligned} \end{aligned}$$

is injective. The second part consists in computing the univariate polynomials \(f,f_1,f_{x_1},\ldots ,f_{x_n}\) that define the one-to-one mapping given in (5).

If we suppose that a separating linear form \(t:=a_1 \, x_1 + \cdots + a_n \, x_n\) is given, the polynomials of the RUR can be computed by means of simple linear algebra operations. Indeed, provided that I is a zero-dimensional ideal, the quotient algebra \({\mathcal {A}} :={\mathbb {Q}}[ x_1, \ldots ,x_n]/I\) is a finite-dimensional \({\mathbb {Q}}\)-vector space whose dimension is equal to the number of complex zeros of

$$\begin{aligned} V_{{\mathbb {C}}}(I)=\{\alpha =(\alpha _1, \ldots , \alpha _n) \in {\mathbb {C}}^n \; | \; \forall \; P \in I: \; P(\alpha )=0\} \end{aligned}$$

counted with multiplicities, and using Gröbner bases (see, e.g., Cox et al. 2007), one can compute a basis of the \({\mathbb {Q}}\)-vector space \({\mathcal {A}}\) as well as the matrix associated with the \({\mathbb {Q}}\)-endomorphism of \({\mathcal {A}}\) defined by multiplying any element of \({\mathcal {A}}\) by an element \(P \in {\mathbb {Q}}[ x_1, \ldots ,x_n]\) as follows

$$\begin{aligned} \begin{aligned} M_P: {\mathcal {A}}&\longrightarrow {\mathcal {A}} \\ \pi (a)&\longmapsto \pi (P \, a), \end{aligned} \end{aligned}$$
(6)

where \(\pi : {\mathbb {Q}}[x_1, \ldots , x_n] \longrightarrow {\mathcal {A}}\) denotes the \({\mathbb {Q}}\)-algebra epimorphism which sends a to its residue class \(\pi (a)\) in \({\mathcal {A}}\). Then, the first polynomial f of the RUR is the characteristic polynomial of \(M_t\), where \(t=a_1 \, x_1 + \cdots + a_n \, x_n\) is the separating form for V(I). Moreover, if

$$\begin{aligned} {\overline{f}}(t):=\frac{f}{\gcd (f,\partial f / \partial t)}= \displaystyle \sum _{i=0}^d{a_i}\,t^{d-i} \in {\mathbb {Q}}[t] \end{aligned}$$

denotes the square-free part of f and if we note

$$\begin{aligned} H_j(t):=\displaystyle \sum _{i=0}^j{a_i\,t^{j-i}}, \ \ j=0,\ldots ,d-1, \end{aligned}$$

then the other polynomials of the RUR are defined as follows:

$$\begin{aligned} \begin{aligned} f_1&:=\displaystyle \sum _{i=0}^{d-1}{\text {Trace}(M_t^i)\,H_{d-i-1}(t)}, \\ f_{x_k}&:=\displaystyle \sum _{i=0}^{d-1}{\text {Trace}(M_{x_k}\,M_t^i)\,H_{d-i-1}(t)}, \ \ k=1,\ldots ,n. \end{aligned} \end{aligned}$$
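As an illustration (not part of the original text), these trace formulas can be evaluated by hand for the toy ideal \(I=\langle x_1^2-1, x_2 \rangle \) of Example 5 with separating form \(t=x_1\); the sketch below uses numpy only for the matrix arithmetic:

```python
import numpy as np

# Toy ideal I = <x1**2 - 1, x2> from Example 5, separating form t = x1.
# A basis of A = Q[x1, x2]/I is {1, x1}, so d = 2, and the
# multiplication matrices in this basis are:
Mt = np.array([[0, 1], [1, 0]])  # multiplication by t = x1
Mx1 = Mt                         # multiplication by x1
# (multiplication by x2 is the zero matrix, so f_{x2} = 0)

# f = charpoly(Mt) = t**2 - 1 is already square-free, with coefficients
# a0 = 1, a1 = 0, a2 = -1, hence H_0(t) = 1 and H_1(t) = t.
# f_1    = Trace(Mt^0) H_1 + Trace(Mt^1) H_0
# f_{x1} = Trace(Mx1 Mt^0) H_1 + Trace(Mx1 Mt^1) H_0
f1_coeffs = [int(np.trace(np.eye(2))), int(np.trace(Mt))]   # f_1 = 2t
fx1_coeffs = [int(np.trace(Mx1)), int(np.trace(Mx1 @ Mt))]  # f_{x1} = 2
print(f1_coeffs, fx1_coeffs)  # [2, 0] [0, 2]

# At the roots t = 1 and t = -1 of f, x1 = f_{x1}(t)/f_1(t) = 2/(2t)
# recovers x1 = 1 and x1 = -1, the two solutions of Example 5.
```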

As mentioned above, the computation of the RUR requires finding a separating linear form. Such a form can be sought in an infinite set of linear forms that contains only a finite number of non-separating ones. For instance, the set \(\{x_1+i\,x_2+\cdots +i^{n-1}\,x_n \ | \ i \in {\mathbb {Q}}\}\) contains at most \((n-1)\frac{d(d-1)}{2}\) non-separating forms (see Basu et al. 2006, Lemma 4.9), so a random choice of a linear form in this set yields a separating form with probability close to one. Therefore, a standard strategy consists in choosing an arbitrary linear form, computing a RUR candidate from it, and then checking whether or not this candidate is a RUR. A well-known fact is that checking that a RUR candidate is a RUR, and thus that the chosen linear form is separating, is closely related to the problem of computing the number of distinct complex solutions of the system. Indeed, given this number, one can check that a linear form t is separating by checking whether or not the square-free part of the first polynomial f of the candidate RUR has degree equal to this number. In practice, several strategies can be used to compute the number of distinct complex solutions [e.g., computing the radical of the polynomial ideal defined by the system, or computing the rank of a well-chosen quadratic form (Rouillier 1999)]. To avoid costly computations resulting from many trials, the separation test is usually performed modulo a prime number, which yields a good prediction (Rouillier 1999).

Finally, note that alternative algorithms exist for the computation of univariate representations which do not require the pre-computation of a Gröbner basis (see, e.g., Giusti et al. 2001). In addition, for the specific case of polynomial systems with only two variables, univariate representations as well as separating forms can be efficiently obtained using algorithms based on resultants and subresultants sequence (see Bouzidi 2014a). For more details, see Sect. 3.

Once a RUR (5) is known, computing the real solutions of I, namely computing \(V_{{\mathbb {R}}}(I)=\{(x_1, \ldots , x_n) \in {\mathbb {R}}^n \; | \; \forall \; P \in I: \; P(x_1, \ldots , x_n)=0\}\) (resp., deciding whether or not the polynomial system defined by I has real solutions), reduces to computing the real roots of the univariate polynomial \(f \in {\mathbb {Q}}[t]\) (resp., deciding whether or not f has real roots). This can be done using classical bisection algorithms such as Sturm sequences or methods based on Descartes' rule of signs (see, e.g., Basu et al. 2006), which yield a set of intervals isolating the real roots of f.
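For instance, with sympy's built-in real-root isolation (an illustrative sketch, not part of the original text):

```python
import sympy as sp

t = sp.symbols('t')

# Real-root isolation of a single univariate polynomial decides
# real solvability once a RUR is known.
f = t**2 - 1                  # first RUR polynomial for Example 5 (t = x1)
print(sp.real_roots(f))       # [-1, 1] -> two real solutions

g = 36*t**4 + 13*t**2 + 1     # elimination polynomial of Example 6 below
print(len(sp.real_roots(g)))  # 0 -> no real solutions
```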

Example 5

Let us illustrate the concept of univariate representation with the polynomial system which encodes the critical points of \(P=x_1^2+x_2^2-1=0\) as defined in Example 4, namely:

$$\begin{aligned} \begin{aligned}&x_1^2+x_2^2-1=0, \\&\dfrac{\partial P}{\partial x_2}=2\,x_2=0. \end{aligned} \end{aligned}$$
(7)

We note that (7) forms a Gröbner basis of the ideal \(I= \left\langle P, \dfrac{\partial P}{\partial x_2} \right\rangle \) for the graded reverse lexicographic order (see, e.g., Cox et al. 2007), and a basis of the \({\mathbb {Q}}\)-vector space \({\mathcal {A}}={\mathbb {Q}}[x_1, x_2]/I\) is given by the monomials \(\{1,x_1\}\), which implies that the number of complex solutions counted with multiplicities is equal to two. Since the form \(t=x_2\) takes the single value 0 at the two distinct solutions, it is not a separating form. We can obtain a RUR of (7) by computing a Gröbner basis of I for a monomial ordering that eliminates \(x_2\), which yields:

$$\begin{aligned} \begin{aligned}&x_1^2-1=0, \\&x_2=0. \end{aligned} \end{aligned}$$

Example 6

Another example is given by the following polynomial system:

$$\begin{aligned} \begin{aligned}&P_1=36\,x_1^2\,x_2-10\,x_1-6\,x_2=0, \\&P_2=12\,x_1^2+30\,x_1\,x_2-2=0. \end{aligned} \end{aligned}$$
(8)

Computing a Gröbner basis of the ideal \(I=\langle P_1, P_2 \rangle \) generated by (8) for the graded reverse lexicographic order, we obtain \(I=\langle 9 \, x_2^2+1, 6 \, x_1^2+15 \, x_1 \, x_2-1 \rangle \), which shows that (8) is equivalently defined by these two polynomials. From this Gröbner basis of I, we can read off that the dimension of the \({\mathbb {Q}}\)-vector space \({\mathcal {A}}={\mathbb {Q}}[x_1, x_2]/I\) is 4: the monomials \(\{1, x_1, x_2, x_1 \, x_2\}\) form a basis of \({\mathcal {A}}\). As a consequence, (8) admits 4 complex solutions counted with multiplicities. Let us now solve (8). Computing a Gröbner basis of I for an order which eliminates \(x_2\) (Cox et al. 2007), we get \(I=\langle 36 \, x_1^4+13 \, x_1^2+1, 36 \, x_1^3+19 \, x_1+15 \, x_2 \rangle \). Hence, (8) can be parametrized as follows:

$$\begin{aligned} \begin{aligned}&x_2=-\dfrac{1}{15} \, x_1 \, \left( 36 \, x_1^2+19\right) , \\&36 \, x_1^4+13 \, x_1^2+1=0. \end{aligned} \end{aligned}$$

Solving the last univariate equation, we obtain that the four solutions of (8) are of the form of:

$$\begin{aligned} \left( x_1^{\star }, \, x_2^{\star }=-\dfrac{1}{15} \, x_1^{\star } \, (36 \, {x_1^{\star }}^2+19) \right) , \quad x_1^{\star } \in \left\{ \pm \dfrac{i}{2}, \pm \dfrac{i}{3} \right\} . \end{aligned}$$

In particular, (8) does not admit real solutions, a fact that can be directly checked by applying Descartes' rule of signs to the univariate polynomial \(36 \, x_1^4+13 \, x_1^2+1\).
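The elimination step of Example 6 can be reproduced with any Gröbner basis engine; the sketch below uses sympy as an illustration:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
P1 = 36*x1**2*x2 - 10*x1 - 6*x2
P2 = 12*x1**2 + 30*x1*x2 - 2

# Lexicographic Groebner basis with x2 > x1: the basis element free of
# x2 generates the elimination ideal and is proportional to
# 36*x1**4 + 13*x1**2 + 1.
G = sp.groebner([P1, P2], x2, x1, order='lex')
elim = [g for g in G.exprs if not g.has(x2)]
print(sp.real_roots(elim[0]))  # [] -> (8) has no real solutions
```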

2.2 Real algebraic stability conditions

As shown at the beginning of Sect. 2.1, testing (2) can be reduced to checking the emptiness of the semi-algebraic subset of \({\mathbb {R}}^{2n}\) defined by (3). This can be done using the methods described in Sects. 2.1.1 and 2.1.2. However, such an approach presents an important drawback: it doubles the number of variables [see (3)], which yields a computational overhead in practice due to the exponential cost of these methods.

To avoid this computational issue, we can start directly with the DeCarlo et al.’s stability conditions (see Theorem 2). From the computational point of view, the first n conditions of Theorem 2 can easily be checked using classical univariate stability tests (see, e.g., Marden 1949; Jury 1964; Bistritz 1984, 2002). We are then left with the last condition of Theorem 2, i.e.:

$$\begin{aligned} V_{{\mathbb {C}}}(D) \cap {\mathbb {T}}^n=\emptyset . \end{aligned}$$
(9)

This condition replaces the search for zeros of D in the unit polydisc \({\mathbb {U}}^n\) [see (2)] by a search for zeros on the unit polycircle \({{\mathbb {T}}}^n\). Our approach to testing condition (9) consists in applying a transformation that maps the unit polycircle \({{\mathbb {T}}}^n\) to the real space \({\mathbb {R}}^n\). More precisely, for each complex variable \(z_k\), we perform a change of variable \(z_k:=\phi (x_k)\) such that \(z_k \in {{\mathbb {T}}}\) if and only if \(x_k \in {\mathbb {R}}\). In particular, such a transformation leaves the number of variables unchanged. A classical transformation that satisfies this requirement is the so-called Möbius transformation, whose definition is recalled next.

Definition 1

Denoting the extended complex plane by \({\overline{{\mathbb {C}}}}\), namely, \({\overline{{\mathbb {C}}}}:={\mathbb {C}}\cup \{\infty \}\), a Möbius transformation is the following rational function

$$\begin{aligned} \begin{aligned} \phi : {\overline{{\mathbb {C}}}}&\longrightarrow {\overline{{\mathbb {C}}}} \\ z&\longmapsto \dfrac{a\,z+b}{c\,z+d}, \end{aligned} \end{aligned}$$

where \(a, \, b, \, c, \, d \in {\mathbb {C}}\) are fixed and satisfy \(a \, d-b \, c \ne 0\). We formally write:

$$\begin{aligned} \phi \left( -\dfrac{d}{c} \right) =\infty , \quad \phi (\infty )=\dfrac{a}{c}. \end{aligned}$$

Denote by \({{{\mathcal {H}}}}\) the class of circles of arbitrary radius in \({\overline{{\mathbb {C}}}}\) (this class includes lines, which can be considered as circles of infinite radius). The Möbius transformations then have the property of mapping \({{\mathcal {H}}}\) to itself, i.e., each circle in \({\overline{{\mathbb {C}}}}\) is mapped to another circle in \({\overline{{\mathbb {C}}}}\). In particular, the transformation \(\phi (z)=\dfrac{z-i}{z+i}\), which corresponds to the Möbius transformation with \(a=1\), \(b=-i\), \(c=1\) and \(d=i\), maps the real line \({\mathbb {R}}\) to the complex unit circle deprived of the point 1, that is, \({\mathbb {T}}{\setminus } \{1\}\).
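The mapping property of this particular transformation is easy to verify numerically (a minimal sketch; the function names are ours):

```python
def phi(x):
    """Mobius transformation z = (x - i)/(x + i), i.e., a = 1, b = -i, c = 1, d = i."""
    return (x - 1j) / (x + 1j)

def phi_inv(z):
    """Inverse map x = i (z + 1)/(1 - z), defined on the unit circle minus {1}."""
    return 1j * (z + 1) / (1 - z)

for x in [-10.0, -1.0, 0.0, 0.5, 3.0]:
    z = phi(x)
    assert abs(abs(z) - 1.0) < 1e-12      # phi(x) lies on the unit circle
    assert abs(phi_inv(z) - x) < 1e-12    # and phi is invertible there
print("phi maps the real line into the unit circle minus the point 1")
```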

Remark 2

Different transformations such as the classical parametrization of \({\mathbb {T}}{\setminus } \{-i\}\) defined by

$$\begin{aligned} \forall \; t_k \in {\mathbb {R}}, \quad x_k=\dfrac{2 \, t_k}{1+t_k^2}, \quad y_k=\dfrac{1-t_k^2}{1+t_k^2}, \quad k=1, \ldots , n, \end{aligned}$$

with the notation \(z_k=x_k+i \, y_k\), also fulfill the above requirement but usually yield polynomials of higher degree than those obtained with a Möbius transformation.
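The identity \((2\,t)^2+(1-t^2)^2=(1+t^2)^2\) behind this parametrization can be checked directly (a minimal sketch):

```python
# the tangent half-angle parametrization of Remark 2 lands on the unit
# circle for every real t, since (2t)^2 + (1 - t^2)^2 = (1 + t^2)^2
for t in [-5.0, -0.3, 0.0, 1.7, 12.0]:
    x = 2*t / (1 + t**2)
    y = (1 - t**2) / (1 + t**2)
    assert abs(x**2 + y**2 - 1.0) < 1e-12
print("(x, y) lies on the unit circle for all sampled t")
```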

Accordingly, the following result holds.

Proposition 1

Let \(D \in {\mathbb {R}}[z_1,\ldots ,z_n]\). Two polynomials \({{\mathcal {R}}}\) and \({{\mathcal {I}}}\) can be obtained such that:

$$\begin{aligned} {{\mathcal {V}}}_{\mathbb {C}}(D) \cap [{{\mathbb {T}}} {\setminus } \{1\}]^n = \emptyset \Longleftrightarrow {{\mathcal {V}}}_{\mathbb {R}}(\langle {{\mathcal {R}}}, {{\mathcal {I}}} \rangle ) =\emptyset . \end{aligned}$$

Proof

Given a polynomial \(D(z_1,\ldots ,z_n)\), we can perform the following substitution

$$\begin{aligned} z_k=\frac{x_k-i}{x_k+i}=\frac{x_k^2-1}{x_k^2+1}-i \, \frac{2 \, x_k}{x_k^2+1}, \quad k=1, \ldots , n, \end{aligned}$$
(10)

which yields a rational function in \({\mathbb {C}}(x_1,\ldots ,x_n)\) whose numerator can be written as

$$\begin{aligned} {{\mathcal {R}}}(x_1,\ldots ,x_n)+i\,{{\mathcal {I}}}(x_1,\ldots ,x_n), \end{aligned}$$

where \({{\mathcal {R}}}, \, {{\mathcal {I}}} \in {\mathbb {R}}[x_1, \ldots , x_n]\). If \((z_1,\ldots ,z_n)\) is a zero of \(D(z_1,\ldots ,z_n)\) that belongs to \([{{\mathbb {T}}} {\setminus } \{1\}]^n\), then, by construction,

$$\begin{aligned} (x_1,\ldots ,x_n)=\left( \dfrac{i\, (z_1+1)}{1-z_1}, \; \ldots \; ,\dfrac{i\, (z_n+1)}{1-z_n} \right) \end{aligned}$$

is a real zero of \({{\mathcal {R}}}(x_1,\ldots ,x_n)+i\,{{\mathcal {I}}}(x_1,\ldots ,x_n)\). Conversely, if \((x_1,\ldots ,x_n)\) is a real zero of \({{\mathcal {R}}}(x_1,\ldots ,x_n)+i\,{{\mathcal {I}}}(x_1,\ldots ,x_n)\), then

$$\begin{aligned} (z_1,\ldots ,z_n)=\left( \dfrac{x_1-i}{x_1+i}, \; \ldots \; ,\dfrac{x_n-i}{x_n+i} \right) \end{aligned}$$

is also a zero of \(D(z_1,\ldots ,z_n)\) that belongs to \([{{\mathbb {T}}} {\setminus } \{1\}]^n\). \(\square \)

Remark 3

If we denote by \(d_k\) the degree of D with respect to the variable \(z_k\), then we can easily check that the transformation used in the proof of Proposition 1 [see (10)] yields two polynomials \({{\mathcal {R}}}\) and \({{\mathcal {I}}}\) of total degrees at most \(\sum _{k=1}^n{d_k}\).

Example 7

Let us consider \(D(z_1, z_2)=(2\,z_1^2+10\,z_1+12)+(z_1^2+5\,z_1+6)\,z_2\). Applying the transformation (10) for \(k=1, \, 2\), we obtain the following polynomial system of total degree 3

$$\begin{aligned} \begin{aligned} {{\mathcal {R}}}&=36 \, x_1^2 \, x_2-10 \, x_1-6 \, x_2=0, \\ {{\mathcal {I}}}&=12 \, x_1^2+30 \, x_1 \, x_2-2=0, \end{aligned} \end{aligned}$$

which was considered in (8).
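This pair \(({{\mathcal {R}}}, {{\mathcal {I}}})\) can be double-checked numerically: after clearing the denominator \((x_1+i)^2 \, (x_2+i)\), the substitution (10) applied to D must give \({{\mathcal {R}}}+i\,{{\mathcal {I}}}\) at every real point. A minimal sketch (the function names are ours):

```python
def D(z1, z2):
    return (2*z1**2 + 10*z1 + 12) + (z1**2 + 5*z1 + 6)*z2

def R(x1, x2):
    return 36*x1**2*x2 - 10*x1 - 6*x2

def I_(x1, x2):  # trailing underscore to avoid ambiguity with the imaginary unit
    return 12*x1**2 + 30*x1*x2 - 2

for x1, x2 in [(0.3, -1.2), (2.0, 0.5), (-0.7, 4.0)]:
    z1 = (x1 - 1j) / (x1 + 1j)
    z2 = (x2 - 1j) / (x2 + 1j)
    # clearing denominators: D has degree 2 in z1 and degree 1 in z2
    numerator = D(z1, z2) * (x1 + 1j)**2 * (x2 + 1j)
    assert abs(numerator - (R(x1, x2) + 1j*I_(x1, x2))) < 1e-9
print("numerator of D(phi(x1), phi(x2)) equals R + i*I at all samples")
```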

We can also consider

$$\begin{aligned} D(z_1,z_2)=-3\, z_1^2 \, z_2^2+2\, z_1^2 \, z_2+2\, z_1 \, z_2^2-3\, z_1^2+4\,z_1 \, z_2-3\, z_2^2+2\,z_1+2\,z_2-3 \end{aligned}$$

which, after transformation (10) for \(k=1, \, 2\), yields the following two polynomials of total degree 2:

$$\begin{aligned} \begin{aligned}&{{\mathcal {R}}}=0, \\&{{\mathcal {I}}}=x_1^2+x_2^2-1=0. \end{aligned} \end{aligned}$$

According to Proposition 1, we can test that a polynomial \(D \in {\mathbb {R}}[z_1,\ldots ,z_n]\) does not have complex zeros in \([{\mathbb {T}}{\setminus } \{1\}]^n\) by first computing the polynomials \({{\mathcal {R}}}(x_1,\ldots ,x_n)\) and \({{\mathcal {I}}}(x_1,\ldots ,x_n)\) and then checking that the following polynomial system

$$\begin{aligned} \begin{aligned}&{{\mathcal {R}}}(x_1,\ldots ,x_n)=0, \\&{{\mathcal {I}}}(x_1,\ldots ,x_n)=0, \end{aligned} \end{aligned}$$
(11)

does not have any solution in \({\mathbb {R}}^n\) by means of the techniques described in Sect. 2.1.

Note that to check the last condition of Theorem 2, the above test is not sufficient since it excludes the points of the poly-circle that have at least one coordinate equal to 1. Hence, we also have to check that the polynomial D does not vanish at any of these points. Let us explain how this can be done in a systematic manner. Starting with D, we first compute the following polynomials:

$$\begin{aligned} D_{k}(z_1,\ldots ,z_{k-1},z_{k+1},\ldots ,z_n):=D(z_1,\ldots ,z_{k-1},1,z_{k+1},\ldots ,z_n), \; k=1,\ldots ,n. \end{aligned}$$

To each \(D_k\), we then apply the Möbius transformation (10) to the variables \(z_j\) with \(j=1, \ldots , k-1, k+1, \ldots , n\), followed by the test of Proposition 1. As above, this test allows us to check whether or not each \(D_k\) has complex zeros on \([{{\mathbb {T}}} {\setminus } \{1\}]^{n-1}\). But we still need to check that \(D_k\) does not vanish at the excluded points, namely, the points having at least one coordinate in \(\{z_1,\ldots ,z_{k-1},z_{k+1},\ldots ,z_n\}\) equal to 1. This can be done in the same way as above by considering the polynomials \(D_{k_l}\) obtained by substituting 1 for the variable \(z_l\) in the \(D_k\)’s. Proceeding recursively until we obtain univariate polynomials of the form \(D(1,\ldots ,1,z_k,1,\ldots ,1)\), we can then check that D does not vanish on the whole unit poly-circle \({\mathbb {T}}^n\).

Note that at step m of the above process, the set of polynomials we have to consider consists exactly of the polynomials obtained from D by substituting 1 for m of the n variables \(z_i\). From this observation, we obtain the following algorithm to check (9) based on Theorem 2.

Algorithm 1

figure a
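Although the listing of Algorithm 1 is not reproduced here, the enumeration it performs can be sketched as follows (the dictionary-based polynomial encoding and the helper name are ours, not the paper’s): at step m, 1 is substituted for every choice of m of the n variables.

```python
from itertools import combinations

def substitute_ones(poly, vars_to_one):
    """poly: dict mapping exponent tuples to coefficients.
    Substitute 1 for the variables whose indices lie in vars_to_one."""
    out = {}
    for exps, c in poly.items():
        new = tuple(0 if k in vars_to_one else e for k, e in enumerate(exps))
        out[new] = out.get(new, 0) + c
    return {e: c for e, c in out.items() if c != 0}

# D(z1, z2) = (2 z1^2 + 10 z1 + 12) + (z1^2 + 5 z1 + 6) z2,
# encoded as {(i, j): coefficient of z1^i z2^j}
D = {(2, 0): 2, (1, 0): 10, (0, 0): 12, (2, 1): 1, (1, 1): 5, (0, 1): 6}

n = 2
for m in range(n + 1):  # step m: substitute 1 for m of the n variables
    for subset in combinations(range(n), m):
        print(subset, substitute_ones(D, subset))
```

For this D, setting \(z_2=1\) gives \(3\,z_1^2+15\,z_1+18\) and setting \(z_1=1\) gives \(12\,z_2+24\), the univariate polynomials used later in Example 8.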

Let us now state our n-D stability test.

Algorithm 2

figure b

Remark 4

In Algorithm 1, the polynomials considered at the last step are \(D(1,\ldots ,z_k,\ldots ,1)\) for \(k=1,\ldots ,n\). Since the stability of these polynomials is checked at step 1 of Algorithm 2, we can skip this test in Algorithm 2 by stopping Algorithm 1 at step \(n-2\).

Remark 5

From the computational cost viewpoint, it should be stressed that the dominant part of Algorithm 1 is the first iteration of the outer loop, which consists in checking that the polynomial \(D(z_1,\ldots ,z_n)\) is devoid of zeros on the poly-circle \({\mathbb {T}}^n\). Indeed, at iteration k of the outer loop, the algorithm performs \({n \atopwithdelims ()k}\) calls to the routine checking the existence of real zeros of an algebraic system in \(n-k\) variables. Since this check has a cost that is at least singly exponential in the number of variables (see Basu et al. 2006), the cost of the outer loop, and thus of the whole algorithm, is dominated by the cost of the first iteration.

2.3 Examples and experiments

Let us illustrate Algorithms 1 and 2 with some explicit examples.

Example 8

We consider \(D(z_1, z_2)=(2\,z_1^2+10\,z_1+12)+(z_1^2+5\,z_1+6)\,z_2\), which appears in several articles on stability analysis (Xu et al. 2004; Li et al. 2013). It is known that D is structurally stable. Let us verify this result.

The first step of our procedure is to check that the two polynomials \(D(z_1,1)=3\,z_1^2+15\,z_1+18\) and \(D(1,z_2)=12\,z_2+24\) are stable, which can be done by, e.g., inspecting their roots (i.e., {-3, -2} and {-2}).

In a second step, we apply Algorithm 1 to \(D(z_1,z_2)\). As we have already checked that \(D(z_1,1)\) and \(D(1,z_2)\) are stable, we only have to consider \(D(z_1,z_2)\) itself. Using the Möbius transformation (10), this polynomial yields the polynomial system defined by (8). In Example 6, we proved that (8) does not admit real solutions.

Example 9

If we consider

$$\begin{aligned} D(z_1, z_2)=-3\, z_1^2 \, z_2^2+2 \, z_1^2 \, z_2+2 \, z_1 \, z_2^2-3 \, z_1^2+4\,z_1 \, z_2-3 \, z_2^2+2 \, z_1+2\, z_2-3, \end{aligned}$$

then, since \({{\mathcal {R}}}\) vanishes identically, the Möbius transformation (10) yields the single polynomial \(x_1^2+x_2^2-1\) (see Example 7), which admits an infinite number of zeros. Checking for the existence of real zeros of this polynomial can be done by checking for the existence of real solutions of the system of its critical points [see (7)].
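Assuming the critical-point system of (7) specializes here to \(\{f=0, \ \partial f/\partial x_2=0\}\) (an assumption on our part, since (7) is stated earlier in the paper), the computation is immediate:

```python
# f = x1^2 + x2^2 - 1; critical points of the projection onto the x1-axis:
# f = 0 and df/dx2 = 2*x2 = 0. The second equation gives x2 = 0, and then
# f = 0 gives x1^2 = 1, so the candidates are (+/-1, 0).
critical_points = [(x1, 0.0) for x1 in (-1.0, 1.0)]
for x1, x2 in critical_points:
    assert abs(x1**2 + x2**2 - 1) < 1e-12  # each candidate lies on the curve
print(critical_points)  # real critical points exist, so the real zero set is non-empty
```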

Example 10

We consider \(D(z_1,z_2,z_3)=(z_1^2+z_2^2+4)\,(z_1+z_2+z_3+5)\) which is known to be structurally stable (Li et al. 2013). Our procedure first checks that the polynomials \(D(z_1,1,1)= (z_1^2+5)\,(z_1+7)\), \(D(1,z_2,1)=(z_2^2+5)\,(z_2+7)\) and \(D(1,1,z_3)=6\,z_3+42\) are stable. Then, applying Algorithm 1 to D, we have to test the existence of zeros on the polycircle of the following polynomials

$$\begin{aligned} \begin{aligned} D(z_1,z_2,z_3)&=\left( z_1^2+z_2^2+4\right) \,(z_1+z_2+z_3+5), \\ D(z_1,1,z_3)&=\left( z_1^2+5\right) \,(z_1+z_3+6), \\ D(1,z_2,z_3)&=\left( z_2^2+5\right) \,(z_2+z_3+6), \\ D(z_1,z_2,1)&=\left( z_1^2+z_2^2+4\right) \,(z_1+z_2+6), \end{aligned} \end{aligned}$$
(12)

by considering the set of polynomial systems obtained by applying the Möbius transformation (10) to each of them. The main difficulty is to decide whether or not the following polynomial system, which corresponds to \(D(z_1, z_2, z_3)\), has real solutions:

$$\begin{aligned} \begin{aligned}&48\,x_1^3\,x_2^3\,x_3-72\,x_1^3\,x_2^2-96\,x_1^3\,x_2\,x_3-72\,x_1^2\, x_2^3-184\,x_1^2\,x_2^2\,x_3-96\,x_1\,x_2^3\,x_3 \\&\quad +24x_1^3+120\,x_1^2\,x_2 +72\,x_1^2\,x_3+120\, x_1\,x_2^2+176\,x_1\,x_2\,x_3+24\,x_2^3+72\,x_2^2\,x_3\\&\quad -40\,x_1-40\,x_2-24\,x_3=0, \\&36\,x_1^3\,x_2^3+100\,x_1^3\,x_2^2\,x_3+100\,x_1^2\, x_2^3\,x_3-68\,x_1^3\,x_2 -36\,x_1^3\,x_3-124\,x_1^2\,x_2^2 \\&\quad -180\,x_1^2\,x_2\,x_3-68\,x_1\,x_2^3 -180\,x_1\,x_2^2\,x_3-36\,x_2^3\,x_3+44\,x_1^2+116\,x_1\,x_2 \\&\quad +68\,x_1\,x_3+44\,x_2^2+68\,x_2\,x_3-12=0. \end{aligned} \end{aligned}$$

Using one of the methods described in Sect. 2.1.1, we can check that the above polynomial system does not have real zeros. Similarly, the second, third and fourth polynomials of (12) can be proved to be devoid of zeros on the corresponding polycircles, and we find again that D is stable.

Our stability test was implemented in a Maple routine named IsStable. It takes as input a polynomial defining the denominator of a transfer function and returns true if this polynomial satisfies (9) and false otherwise. For testing the first n conditions of Theorem 2, we use the classical 1-D Bistritz test (see Bistritz 1984), which was also implemented in Maple. To test the emptiness of a real algebraic set, which is the main critical step in Algorithm 1, we have implemented the two presented methods.Footnote 1 The first one uses the classical cylindrical algebraic decomposition. Such a decomposition is provided by the Maple routine CylindricalAlgebraicDecompose which can be found in the native package RegularChains[SemiAlgebraicSetTools]. The second method is based on the computation of the set of critical points of a given function restricted to the real algebraic set under consideration. An efficient implementation of this method is available in the external Maple library RAGlib (Safey El Din 2007) (see the command HasRealSolutions). Finally, we use the routine Isolate of the Maple package RootFinding in order to compute numerical approximations of the solutions through the computation of a univariate representation.

In Table 1, we show the average running times in CPU seconds of the IsStable routine for random (sparse or dense) polynomials in 2 and 3 variables with rational coefficients.Footnote 2 The two running time columns correspond to the two variants IsStableCAD and IsStableCRIT (depending on the method used for testing the emptiness of a real algebraic set) of the routine IsStable.

Table 1 CPU times in seconds of IsStableCAD and IsStableCRIT run on random polynomials in 2 and 3 variables with rational coefficients

Remark 6

Note that Algorithm 2 can check the structural stability of polynomials in 4 variables with degree up to 12 in less than 20 min. However, when the polynomials have more than 4 variables or larger degrees, these methods do not return a result in a reasonable time.

3 A stability test for 2-D systems

In Sect. 2, a general framework was proposed for the stability analysis of n-D systems with \(n \ge 2\). In this section, we restrict the study to the particular case of \(n=2\) and we show that substantial improvements with respect to practical efficiency can be obtained by using state-of-the-art algorithms developed in Bouzidi (2014b) for the computation of the solutions of bivariate algebraic systems of equations.

Recall that testing the stability of a 2-D system can be reduced to deciding whether an algebraic system of the form of \(\{ {\mathcal {R}} (x_1, x_2) = {\mathcal {I}} (x_1, x_2)=0\}\) admits real solutions. In the present case, without loss of generality, we can assume that the ideal I generated by the two polynomials \({\mathcal {R}}\) and \({\mathcal {I}}\) is zero-dimensional, i.e., \(\gcd ({\mathcal {R}},{\mathcal {I}})=1\).Footnote 3 Our contribution in this section is a dedicated method for deciding if a system of two polynomial equations in two variables, having a finite number of complex solutions, admits real solutions.

Since the ideal I is zero-dimensional, we can use the univariate representation techniques described in Sect. 2.1.2 to reduce the problem of deciding the existence of real solutions of I to that of deciding the existence of real roots of a univariate polynomial. Indeed, under the hypothesis that I is zero-dimensional, the quotient algebra \({\mathcal {A}} :={\mathbb {Q}}[x_1,x_2]/I\) inherits a finite-dimensional \({\mathbb {Q}}\)-vector space structure and, for any polynomial \(P \in {\mathbb {Q}}[x_1,x_2]\), we can consider the \({\mathbb {Q}}\)-endomorphism \(M_P\) of \({\mathcal {A}}\) defined by (6), whose characteristic polynomial can be written as follows

$$\begin{aligned} C_P (t) =\det (t \, I_d-M_P)= \prod _{(\alpha _1, \, \alpha _2) \in V(I)} (t - P(\alpha _1, \alpha _2))^{\mu (\alpha _1, \, \alpha _2)}, \end{aligned}$$
(13)

where \(I_d\) denotes the identity matrix of size \(d=\mathrm{dim}_{{\mathbb {Q}}}({{\mathcal {A}}})\) and \(\mu (\alpha _1, \alpha _2)\) the multiplicity of \((\alpha _1, \alpha _2)\) as a zero of I.

Furthermore, if \(P=x_1+a\,x_2\) is a separating form of V(I),Footnote 4 then the polynomial \(C_P\) coincides with the polynomial f of the Rational Univariate Representation of I computed with respect to P [see (5) and Sect. 2.1.2], which yields an important property relating the existence of real solutions of V(I) to the existence of real roots of \(C_P\). The following result can be proved by considering a univariate representation of the solutions (see, e.g., Rouillier 1999).

Theorem 3

Let \(P=x_1+a\,x_2\) be a separating form for V(I). Then, the univariate polynomial \(C_P\) has real roots if and only if V(I) has real solutions.

Consequently, the computation of a separating form of V(I) and the corresponding polynomial \(C_P\) reduces the problem of searching for real solutions of V(I) to the problem of searching for real roots of \(C_P\). In the following, instead of the classical strategy, which requires computations in the quotient algebra \({\mathcal {A}} :={\mathbb {Q}}[x_1,x_2]/I\), we propose an alternative approach based on the computation of the so-called resultant polynomial as well as the computation of a generic position. Thus, before going further, let us introduce the concepts of resultant and subresultant sequences, as well as some of their basic properties which are useful for the description of our algorithm.

3.1 Resultant and subresultant sequence

Let \({\mathbb {A}}\) be a unique factorization domain (Cox et al. 2007), e.g., \({\mathbb {A}}:={\mathbb {K}}[y]\), where \({\mathbb {K}}\) is a field. Let \(f=\sum _{i=0}^p a_i \, x^i \in {\mathbb {A}}[x]\) and \(g=\sum _{j=0}^q b_j \, x^j \in {\mathbb {A}}[x]\), that is, the \(a_i\)’s and \(b_j\)’s belong to \({\mathbb {A}}\). Let us suppose that \(a_p \ne 0\) and \(b_q \ne 0\) so that \(\deg _x f=p\) and \(\deg _x g=q\), and \(p \ge q\). Let \({\mathbb {A}}[x]_n=\{ P \in {\mathbb {A}}[x] \; | \; \deg _x P \le n\}\) be the set of polynomials with degree at most n and \(\{x^i\}_{i=0, \ldots , n}\) the standard basis of the free \({\mathbb {A}}\)-module \({\mathbb {A}}[x]_n\) of rank \(n+1\). We set \({\mathbb {A}}[x]_n=0\) for negative integers n. For \(0 \le k \le q\), we can consider the following homomorphism of free \({\mathbb {A}}\)-modules:

$$\begin{aligned} \begin{aligned} \varphi _k: {\mathbb {A}}[x]_{q-k-1} \times {\mathbb {A}}[x]_{p-k-1}&\longrightarrow {\mathbb {A}}[x]_{p+q-k-1} \\ (U, \, V)&\longmapsto U \, f+V \, g. \end{aligned} \end{aligned}$$

Using the standard basis of \({\mathbb {A}}[x]_{q-k-1}\) (resp., \({\mathbb {A}}[x]_{p-k-1}\), \({\mathbb {A}}[x]_{p+q-k-1}\)) and identifying the polynomial \(\sum _{i=0}^{q-k-1} u_i \, x^i \in {\mathbb {A}}[x]_{q-k-1}\) with the row vector \((u_0, \ldots , u_{q-k-1}) \in {\mathbb {A}}^{1 \times (q-k)}\), we obtain that

$$\begin{aligned} \varphi _k(u_0, \ldots , u_{q-k-1}, v_0, \ldots , v_{p-k-1})=(u_0, \ldots , u_{q-k-1}, v_0, \ldots , v_{p-k-1}) \, S_k, \end{aligned}$$

where the matrix \(S_k\) is the matrix defined by:

$$\begin{aligned} \begin{aligned} S_k&=\left( \begin{array}{c} U_k \\ V_k \end{array} \right) \in {\mathbb {A}}^{(q-k+p-k) \times (p+q-k)}, \\ U_k&= \left( \begin{array}{cccccccccc} a_0 &{} a_1 &{} \ldots &{} a_{q-k} &{} \ldots &{} a_p &{} 0 &{} \ldots &{} 0 \\ 0 &{} a_0 &{} \ldots &{} a_{q-k-1} &{} \ldots &{} a_{p-1} &{} a_p &{} \ldots &{} 0 \\ \vdots &{} \ddots &{} \ddots &{} \ddots &{} \ddots &{} \ddots &{} \ddots &{} \ddots &{} \vdots \\ 0 &{} \ldots &{} 0 &{} a_0 &{} \ldots &{} \ldots &{} \ldots &{} \ldots &{} a_p \end{array} \right) \in {\mathbb {A}}^{(q-k) \times (p+q-k)}, \\ V_k&= \left( \begin{array}{cccccccccc} b_0 &{} b_1 &{} \ldots &{} b_{p-k} &{} \ldots &{} b_q &{} 0 &{} \ldots &{} 0 \\ 0 &{} b_0 &{} \ldots &{} b_{q-k-1} &{} \ldots &{} b_{q-1} &{} b_q &{} \ldots &{} 0 \\ \vdots &{} \ddots &{} \ddots &{} \ddots &{} \ddots &{} \ddots &{} \ddots &{} \ddots &{} \vdots \\ 0 &{} \ldots &{} 0 &{} b_0 &{} \ldots &{} \ldots &{} \ldots &{} \ldots &{} b_q \end{array} \right) \in {\mathbb {A}}^{(p-k) \times (p+q-k)}. \end{aligned} \end{aligned}$$

To be consistent with the degrees of the polynomials, we attach the index \(i - 1\) to the ith column of \(S_k\), so that the column indices range from 0 to \(p + q - k - 1\).

Definition 2

For \(0 \le j \le p + q - k - 1\) and \(0 \le k \le q\), let \(\mathrm{sr}_{k, j}\) be the determinant of the submatrix of \(S_k\) formed by the last \(p + q - 2 \, k - 1\) columns, the column of index j and all the \(p + q - 2 \, k\) rows. The polynomial \(\mathrm{Sres}_k (f, g) = \mathrm{sr}_{k, k} \, x^k + \cdots + \mathrm{sr}_{k, 0}\) is the kth subresultant of f and g, and its leading term \(\mathrm{sr}_{k, k}\) is the kth principal subresultant of f and g. The matrix \(S_0 \in A^{(p+q) \times (p+q)}\) is the Sylvester matrix associated with f and g, and \(\mathrm{Res}_{x}(f,g)=\det S_0\) is the resultant of f and g.
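For constant coefficients (i.e., \({\mathbb {A}}={\mathbb {Q}}\)), the resultant of Definition 2 can be computed as the determinant of the Sylvester matrix with exact arithmetic. A minimal stdlib-only sketch (helper names are ours; rows store coefficients from column index 0 upward, matching the indexing above):

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix S_0 of f and g, given as coefficient lists
    from degree 0 up: f = [a_0, ..., a_p], g = [b_0, ..., b_q]."""
    p, q = len(f) - 1, len(g) - 1
    rows = []
    for i in range(q):  # q shifted copies of f
        rows.append([Fraction(0)]*i + [Fraction(c) for c in f] + [Fraction(0)]*(q - 1 - i))
    for j in range(p):  # p shifted copies of g
        rows.append([Fraction(0)]*j + [Fraction(c) for c in g] + [Fraction(0)]*(p - 1 - j))
    return rows

def det(m):
    """Exact determinant by Gaussian elimination over the rationals."""
    m = [row[:] for row in m]
    n, d = len(m), Fraction(1)
    for k in range(n):
        piv = next((r for r in range(k, n) if m[r][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            m[k], m[piv] = m[piv], m[k]
            d = -d
        d *= m[k][k]
        for r in range(k + 1, n):
            factor = m[r][k] / m[k][k]
            for c in range(k, n):
                m[r][c] -= factor * m[k][c]
    return d

# Res_x(x^2 - 1, x - 2) = f evaluated at the root of g, here 2^2 - 1
print(det(sylvester([-1, 0, 1], [-2, 1])))  # 3
```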

Remark 7

For \(k < j \le p+q-k-1\), we note that \(\mathrm{sr}_{k, j}=0\) since \(\mathrm{sr}_{k, j}\) is then the determinant of a matrix having the same column twice. Moreover, we can check that we have:

$$\begin{aligned} {\mathrm{sr}}_{q,q}=b_q^{p-q} \quad \text {and,} \; \text {for} \; q <p, \quad {\mathrm{Sres}}_q(f, \, g)=b_q^{p-q-1} \, g. \end{aligned}$$

Since \({\mathbb {A}}\) is an integral domain, we can consider its field of fractions, which we denote by \({\mathbb {F}}\), namely, \({\mathbb {F}}:=\{\frac{n}{d} \; | \; 0 \ne d, \, n \in {\mathbb {A}}\}\), and the Euclidean domain \({\mathbb {F}}[x]\). Since \(f, \, g \in {\mathbb {F}}[x]\), we can define the greatest common divisor \(\mathrm{gcd}(f, \, g)\), which is defined up to a non-zero element of \({\mathbb {F}}\), so that we can suppose that \(\mathrm{gcd}(f, \, g) \in {\mathbb {A}}\).

Theorem 4

(Basu et al. 2006) For increasing k, the first \(\mathrm{Sres}_k (f, g)\) such that \(\mathrm{sr}_{k, k} \ne 0\) is equal to \(\mathrm{gcd}(f, \, g)\).

Now, if we consider two polynomials in two variables \(f=\sum _{i=0}^p a_i(x_1) \, x_2^i\) and \(g=\sum _{j=0}^q b_j(x_1) \, x_2^j\), and set \(x=x_2\) so that \({\mathbb {A}}={\mathbb {K}}[x_1]\) and \({\mathbb {A}}[x]={\mathbb {K}}[x_1, x_2]\), then we have the following important results.

Theorem 5

(Basu et al. 2006) Let \(f(x_1,x_2), \, g(x_1,x_2) \in {\mathbb {K}}[x_1, x_2]\) be two bivariate polynomials.

  • The roots of \(\mathrm{Res}_{x_2}(f,g)(x_1)\) are the projections onto the \(x_1\)-axis of the common solutions of f and g, together with the common roots of \(a_p\) and \(b_q\).

  • For any root \(\alpha \) of \(\mathrm{Res}_{x_2}(f,g)\) such that \(a_p(\alpha )\) and \(b_q(\alpha )\) do not both vanish, the first polynomial \(\mathrm{Sres}_{k} (f,g)(\alpha , x_2)\), for increasing k, that does not identically vanish is of degree k and is the gcd of \(f (\alpha , x_2)\) and \(g (\alpha , x_2)\), up to a nonzero constant in the field of fractions of \({\mathbb {A}}(\alpha )\).

The subresultant sequences can be computed either by means of determinant computations or by applying a variant of the classical Euclidean algorithm (see Basu et al. 2006). The latter method, combined with evaluation/interpolation strategies, turns out to be much more efficient in practice, especially for the case of univariate or bivariate polynomials.

3.2 Computation of a separating form of bivariate polynomial systems

Given a linear form \(x_1+a\,x_2\) (not necessarily separating), the following theorem shows that, up to a non-zero factor in \({\mathbb {Q}}\), the univariate polynomial \(C_{x_1+a\,x_2}\) [see (13)] is equal to the resultant of the two polynomials obtained by applying a change of variables to \({\mathcal {R}}\) and \({\mathcal {I}}\).

Theorem 6

(Bouzidi et al. 2015b) Let \({\mathcal {R}}, \, {{\mathcal {I}}} \in {\mathbb {Q}}[x_1, x_2]\) and let us define the polynomials

$$\begin{aligned} {\mathcal {R}}^\prime (t,x_2) :={\mathcal {R}}(t-a\,x_2,x_2), \quad {\mathcal {I}}^\prime (t,x_2) :={\mathcal {I}}(t-a\,x_2,x_2), \end{aligned}$$
(14)

where \(a \in {\mathbb {Z}}\) is such that the leading coefficients of \({\mathcal {R}}^\prime \) and \({\mathcal {I}}^\prime \) with respect to \(x_2\) are coprime. Then, the resultant of \({\mathcal {R}}^\prime \) and \({\mathcal {I}}^\prime \) with respect to \(x_2\), denoted by \(\mathrm{Res}_{x_2}({\mathcal {R}}^\prime ,{\mathcal {I}}^\prime )\), is equal to:

$$\begin{aligned} c\,\prod _{(\alpha _1, \, \alpha _2) \in V(I)} (t - \alpha _1 -a\,\alpha _2)^{\mu (\alpha _1, \alpha _2)}, \quad c \in {\mathbb {Q}}{\setminus } \{0\}. \end{aligned}$$

In practice, the computation of \(C_{x_1+a\,x_2}\) as a resultant (see Theorem 6) is much more efficient than computing the characteristic polynomial of the \({\mathbb {Q}}\)-endomorphism \(M_{x_1+a\,x_2}\) [see (6)] since the computation of the matrix \(M_{x_1+a\,x_2}\) usually requires the costly pre-computation of a Gröbner basis of the ideal \(I=\langle {{\mathcal {R}}}, \, {{\mathcal {I}}} \rangle \) for the graded reverse lexicographic order.

Let us now focus on the computation of a separating form for V(I). Below, we propose a method that consists in applying a change of variables to the initial system and then, using resultants and subresultants, checking whether or not the resulting system is in generic position as defined below.

Definition 3

Let \(f(x_1,x_2), \, g(x_1,x_2) \in {\mathbb {Q}}[x_1, x_2]\). If \(\sharp S\) denotes the cardinality of a finite set S, then the system \(\{f,g\}\) is said to be in generic position with respect to \(x_1\) if we have:

$$\begin{aligned} \forall \; \alpha \in {\mathbb {C}}, \quad \sharp \left\{ \beta \in {\mathbb {C}}\; | \; f(\alpha ,\beta )=g(\alpha ,\beta )=0 \right\} \le 1. \end{aligned}$$

Let us first illustrate our approach with an example.

Fig. 1
figure 1

Intersection between a circle and a parabola

Fig. 2
figure 2

Intersection between two circles

Example 11

Consider the ideal \(I=\{f=x_2-x_1^2, \, g=x_1^2+x_2^2-2\}\) whose set of solutions V(I) consists of four points of \({\mathbb {C}}^2\). The resultant \(\mathrm{Res}_{x_2}(f,g)\) of f and g with respect to \(x_2\) is equal to \(x_1^4+x_1^2-2\). The roots of \(\mathrm{Res}_{x_2}(f,g)\) correspond to the projections of the four solutions of V(I) onto the \(x_1\)-axis. Since all these roots are distinct, \(x_1\) is a separating form (see Fig. 1 for the real solutions of V(I)). The fact that the solutions of V(I) project distinctly onto the \(x_1\)-axis can be algebraically described by the fact that for each root \(\alpha \) of \(\mathrm{Res}_{x_2}(f,g)\), the gcd \(x_2-\alpha ^2\) of \(f(\alpha , x_2)=x_2-\alpha ^2\) and \(g(\alpha , x_2)=x_2^2+\alpha ^2-2\) has only one root.
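The bookkeeping in this first example can be verified numerically (a minimal sketch): solving \(x_1^4+x_1^2-2=(x_1^2-1)(x_1^2+2)=0\) by hand gives the four projections.

```python
import cmath

# the four solutions of V(I) for f = x2 - x1^2, g = x1^2 + x2^2 - 2:
# substituting x2 = x1^2 into g gives x1^4 + x1^2 - 2 = (x1^2 - 1)(x1^2 + 2)
roots_x1 = [1, -1, 1j*cmath.sqrt(2), -1j*cmath.sqrt(2)]
solutions = [(a, a**2) for a in roots_x1]

for a, b in solutions:
    assert abs(b - a**2) < 1e-12 and abs(a**2 + b**2 - 2) < 1e-12  # on both curves
    assert abs(a**4 + a**2 - 2) < 1e-12   # a is a root of Res_{x2}(f, g)

# all four projections onto the x1-axis are distinct
assert len({complex(round(a.real, 9), round(a.imag, 9)) for a, _ in solutions}) == 4
print("x1 is a separating form for V(I)")
```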

Let us now consider the ideal \(I'=\{f'=(x_1-2)^2+x_2^2-2, \, g'=x_1^2+x_2^2-2 \}\). The resultant \(\mathrm{Res}_{x_2}(f',g')\) of \(f'\) and \(g'\) with respect to \(x_2\), namely, \(16 \, (x_1-1)^2\), has a single (real) root 1 of multiplicity 2, and \(\mathrm{gcd}(f'(1, x_2), g'(1,x_2))=x_2^2-1\) admits two distinct roots \(-1\) and 1 which correspond to two different solutions of \(V(I')\). This means that the system is not in generic position, and thus that \(x_1\) is not a separating form (see Fig. 2). In order to compute a separating form for \(V(I')\), we can apply a change of variables to \(f'\) and \(g'\), for instance \(t=x_1+x_2\), and then compute the resultant of these new polynomials \(f'(t-x_2, x_2)\) and \(g'(t-x_2, x_2)\) with respect to \(x_2\). This yields the polynomial \(t \, (t-2)\) whose two distinct roots \(\{0, \, 2 \}\) are the projections of the solutions onto the t-axis (see Fig. 3). For \(\alpha _1=0\) (resp., \(\alpha _2=2\)), the gcd of \(f'(-x_2, x_2)\) and \(g'(-x_2, x_2)\) (resp., of \(f'(2-x_2, x_2)\) and \(g'(2-x_2, x_2)\)) is \(x_2+1\) (resp., \(x_2-1\)). Since both gcds admit only one root, then the system \(\{f'(t-x_2, x_2),g'(t-x_2, x_2)\}\) is in generic position with respect to t and thus \(x_1+x_2\) is a separating form for \(V(I')\).

Fig. 3
figure 3

After the change of variables \((x_1=t-x_2, x_2)\)
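The separating-form bookkeeping for \(V(I')\) can likewise be verified numerically (a minimal sketch):

```python
# the two circles (x1 - 2)^2 + x2^2 = 2 and x1^2 + x2^2 = 2 intersect at
# (1, 1) and (1, -1): both project onto x1 = 1 (so x1 is not separating),
# while t = x1 + x2 takes the two distinct values 2 and 0 (so t separates)
solutions = [(1.0, 1.0), (1.0, -1.0)]
for x1, x2 in solutions:
    assert abs((x1 - 2)**2 + x2**2 - 2) < 1e-12
    assert abs(x1**2 + x2**2 - 2) < 1e-12
assert len({x1 for x1, _ in solutions}) == 1          # projections collide
assert len({x1 + x2 for x1, x2 in solutions}) == 2    # t separates the solutions
print("x1 + x2 is a separating form for V(I')")
```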

Given a linear form \(t=x_1 + a\,x_2\), it can be shown that it is separating for \(V(\langle {{{\mathcal {R}}}}, \, {{\mathcal {I}}} \rangle )\) if and only if the system \(\{{{\mathcal {R}}}^{\prime }, \, {{\mathcal {I}}}^{\prime }\}\) is in generic position (see Definition 3). Algebraically, this means that for each root \({\alpha }\) of \(\mathrm{Res}_{x}({{\mathcal {R}}}^{\prime }, {{\mathcal {I}}}^{\prime })\) (where \({{\mathcal {R}}}^{\prime }\) and \({{\mathcal {I}}}^{\prime }\) are defined as in Theorem 6), the gcd of \({{\mathcal {R}}}^{\prime } (\alpha , x)\) and \({{\mathcal {I}}}^{\prime } (\alpha , x)\), denoted \(\mathcal {G}(\alpha ,x)\), has exactly one distinct root.

To check the above genericity condition, we can consider a triangular description of the solutions of \(\{{{\mathcal {R}}}^\prime , \, {{\mathcal {I}}}^\prime \}\) given by a finite union of triangular systems:

$$\begin{aligned} V(\langle {{\mathcal {R}}}^\prime , \, {{\mathcal {I}}}^\prime \rangle )= \mathop {\bigcup }_{k=1}^{l} \left\{ (\alpha , \, \beta ) \in {\mathbb {C}}^2 \; | \; r_k(\alpha )={{\mathcal {G}}}_k(\alpha ,\beta )=0 \right\} . \end{aligned}$$

Such a triangular description can be obtained via a triangular decomposition algorithm based on the resultant and subresultant polynomials (see Algorithm 1 of Bouzidi et al. 2015b for more details). Such a triangular decomposition yields a set of triangular systems of the form \(\{ r_k(t), {\text {Sres}}_k(t, x_2)\}_{k=1, \ldots , l}\), where \(l=\min \{\deg _{x_2} {{\mathcal {R}}}^\prime , \, \deg _{x_2} {{\mathcal {I}}}^\prime \}\), \(\mathrm{Res}_{x_2}({{\mathcal {R}}}^\prime ,{{\mathcal {I}}}^\prime )=\prod _{k=1}^{l} r_k(t)\), \(r_k \in {\mathbb {K}}[t]\) is the factor of \(\mathrm{Res}_{x_2}({{\mathcal {R}}}^\prime ,{{\mathcal {I}}}^\prime )\) (possibly equal to 1) whose roots \(\alpha \)’s satisfy the property that the degree of \(\mathcal {G}(\alpha , x_2)\) [i.e., \(\mathrm{gcd}({{\mathcal {R}}}^\prime (\alpha , x_2), {{\mathcal {I}}}^\prime (\alpha , x_2))\)] in \(x_2\) is equal to k and \(\mathrm{Sres}_k(t, x_2)=\sum _{i=0}^{k}\mathrm{sr}_{k,i}(t) \, x_2^i\) is the kth subresultant of \({{\mathcal {R}}}^\prime \) and \({{\mathcal {I}}}^\prime \). Once a triangular decomposition \(\{ r_k(t), {\text {Sres}}_k(t, x_2)\}_{k=1, \ldots , l}\) of \(\{{{\mathcal {R}}}^\prime , \, {{\mathcal {I}}}^\prime \}\) is computed, the genericity condition is equivalent to the fact that \(\mathrm{Sres}_k(t, x_2)\) can be written as \((a_k(t)\, x_2-b_{k}(t))^k\) modulo \(r_k(t)\), with \(\mathrm{gcd}(a_k, b_{k})=1\). The next theorem checks this last condition.

Theorem 7

(Daouda et al. 2008) Let \({{\mathcal {R}}}(x_1, x_2)\), \({{\mathcal {I}}}(x_1, x_2) \in {\mathbb {Q}}[x_1, x_2]\). Define the polynomials \({{\mathcal {R}}}^{\prime } (t, x_2) \), \({{\mathcal {I}}}^{\prime } (t, x_2)\) as in Theorem 6, and let \(\{ r_k (t), {\text {Sres}}_k (t, x_2) \}_{k = 1, \ldots , l}\) be the triangular decomposition of \(\{ {{\mathcal {R}}}^{\prime }, \, {{\mathcal {I}}}^{\prime } \} \). Then, \(\{{{\mathcal {R}}}^{\prime }, \, {{\mathcal {I}}}^{\prime }\}\) is in generic position if and only if we have

$$\begin{aligned} k \, (k - i) \, \mathrm{sr}_{k, i} \, \mathrm{sr}_{k,k} - (i + 1) \, \mathrm{sr}_{k, k - 1} \, \mathrm{sr}_{k, i + 1} = 0 \; \mathrm{mod} \; r_k, \end{aligned}$$
(15)

for all \(k \in \{ 1, \ldots , l \}\) and for all \(i \in \{ 0, \ldots , k - 1 \}\).
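Condition (15) can be sanity-checked on pure powers: if \(\mathrm{Sres}_k(t, x_2)=(a\,x_2-b)^k\) with constant a and b (a simplification of the general case, in which the coefficients depend on t), then \(\mathrm{sr}_{k,i}=\binom{k}{i} \, a^i \, (-b)^{k-i}\) and (15) holds identically. A sketch:

```python
from math import comb

# coefficients of (a*x2 - b)^k: sr_{k,i} = C(k, i) * a^i * (-b)^(k - i)
def sr(k, i, a, b):
    return comb(k, i) * a**i * (-b)**(k - i)

# condition (15): k*(k - i)*sr_{k,i}*sr_{k,k} = (i + 1)*sr_{k,k-1}*sr_{k,i+1}
for k in range(1, 6):
    for i in range(k):
        for a, b in [(2, 3), (5, -7), (1, 4)]:
            lhs = k*(k - i) * sr(k, i, a, b) * sr(k, k, a, b)
            rhs = (i + 1) * sr(k, k - 1, a, b) * sr(k, i + 1, a, b)
            assert lhs == rhs
print("condition (15) holds for all pure powers (a*x2 - b)^k")
```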

Finally, our algorithm for checking whether the system \(\{{\mathcal {R}}, \, {\mathcal {I}} \}\) admits real solutions consists in computing the above triangular decomposition for the system \(\{{\mathcal {R}}^\prime , \, {\mathcal {I}}^\prime \}\) obtained after applying an arbitrary linear change of variable \(t=x_1+a\,x_2\). If the condition of Theorem 7 is satisfied, then \(x_1+a\,x_2\) is a separating form. It then remains to check whether the resultant \(\mathrm{Res}_{x_2}({\mathcal {R}}^{\prime }, {\mathcal {I}}^{\prime })\) of \({\mathcal {R}}^\prime \) and \({\mathcal {I}}^\prime \) with respect to \(x_2\) has real roots, which can be done using, for instance, Sturm sequences (Basu et al. 2006).
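A textbook Sturm-sequence root counter suffices for this last step. The following stdlib-only sketch (helper names are ours) counts the real roots of the resultant \(t\,(t-2)\) from Example 11 in an interval:

```python
from fractions import Fraction

def poly_rem(f, g):
    """Remainder of f by g; polynomials as Fraction lists, low to high degree."""
    f = f[:]
    while len(f) >= len(g) and any(f):
        c = f[-1] / g[-1]
        shift = len(f) - len(g)
        for k in range(len(g)):
            f[shift + k] -= c * g[k]
        while f and f[-1] == 0:
            f.pop()
    return f

def sturm_sign_changes(f, x):
    """Number of sign changes of the Sturm sequence of f evaluated at x."""
    df = [i * c for i, c in enumerate(f)][1:]        # derivative of f
    seq = [f, df]
    while any(seq[-1]):
        r = [-c for c in poly_rem(seq[-2], seq[-1])]  # negated Euclidean remainder
        if not any(r):
            break
        seq.append(r)
    vals = [sum(c * x**k for k, c in enumerate(p)) for p in seq]
    vals = [v for v in vals if v != 0]
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

def real_root_count(f, lo, hi):
    """Sturm's theorem: number of distinct real roots of f in (lo, hi]."""
    return sturm_sign_changes(f, lo) - sturm_sign_changes(f, hi)

# t*(t - 2) = -2 t + t^2 has the two real roots 0 and 2
f = [Fraction(0), Fraction(-2), Fraction(1)]
print(real_root_count(f, Fraction(-10), Fraction(10)))  # 2
```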

Remark 8

In practice, several strategies are used to reduce the running time of the above algorithm. For instance, the computation is stopped as soon as the resultant computed for some linear form \(x_1+a\,x_2\), that is, the resultant of \({\mathcal {R}}(t-a\,x_2,x_2)\) and \({\mathcal {I}}(t-a\,x_2,x_2)\) with respect to \(x_2\), is devoid of multiple factors, which implies that the form \(x_1+a\,x_2\) is separating by Theorem 6. The computation is also stopped when the computed resultant has no real zeros, since this implies that the system has no real zeros either. Another improvement concerns the way the candidate separating form \(x_1+a\,x_2\) is chosen. Indeed, in order to increase the probability that a form is separating, a first computation is performed modulo a prime number p (coefficients are then considered in the finite field \({\mathbb {F}}_p={\mathbb {Z}}/p\,{\mathbb {Z}}\)). Such a computation turns out to be very fast since it avoids coefficient swell in the algorithm. Provided that a linear form is separating modulo a prime p, then, with high probability, it is also separating over \({\mathbb {Z}}\), and we can choose it as a candidate for the algorithm over \({\mathbb {Z}}\).

3.3 Experiments

In order to measure the gain of our algorithm with respect to the general algorithm described in Sect. 2, we compare it with the general method Isolate, partially developed by the same authors and available in the RootFinding package of the Maple computer algebra system. This function first computes a Rational Univariate Representation (Rouillier 1999) from a Gröbner basis computed with the \(F_4\) algorithm (Faugère 1999), and then uses a variant of the Descartes algorithm (Rouillier and Zimmermann 2003) as well as multi-precision interval arithmetic (Revol and Rouillier 2005) to isolate the real roots of the system.

For the present experiments, we reuse black boxes developed for the algorithms described in Bouzidi et al. (2013, 2015a), Bouzidi (2014a), which use exactly the same technical base to design the component of the algorithm that computes the univariate polynomial \(C_{x_1+a\,x_2}\) and performs the separation check. All the other components are shared with the Maple RootFinding[Isolate] function.

For dense polynomials with coefficients that can be encoded on 23 bits (as if they were coming from floating-point numbers), the results, obtained on a Core i7 at 3.5 GHz with 32 GB of memory, are summarized in the following table, in which Degree denotes the total degree of the polynomial \(D(z_1,z_2)\) to be studied, \(\sharp V(I)\) the number of complex roots of the bivariate system to be solved to decide stability, RootFinding the running time (in seconds) of the function RootFinding[Isolate] and Dedicated the running time (in seconds) of our new dedicated algorithm.

Degree | \(\sharp V(I)\) | RootFinding | Dedicated
10     | 200             | 2.3         | \(<1\)
15     | 450             | 29.8        | \(<1\)
20     | 800             | 223.4       | \(<1\)
25     | 1280            | 866.9       | 1.42
30     | 1800            | 3348.2      | 2.79
35     | 2450            | \(>1\) h    | 7.81
40     | 3200            | \(>1\) h    | 15.51

For these examples, note that we did not report the computation times required for the two 1-D stability tests (i.e., the stability tests for \(D(1,z_2)\) and \(D(z_1,1)\)) since they are small compared with the resolution of the bivariate polynomial system.

Finally, we point out that our naive Maple implementation of the Möbius transform is the main bottleneck of our dedicated algorithm, compared with the extremely efficient algorithm used for the computation of the real solutions of systems of two polynomials in two variables.

4 A stability test for 2-D systems with parameters

In what follows, let \(u=\{u_1,\ldots , u_m \}\) denote a set of parameters. Throughout this section, these parameters are assumed to be real (for complex parameters, see Remark 10 at the end of the present section). In this last section, we study the structural stability of 2-D systems given by a transfer function of the form (1) that depends on the set of parameters u, i.e., where \(n=2\) and \(D \in {\mathbb {Q}}[z_1, z_2, u]\). In other words, our goal is to study (2) in terms of the parameters \(u_k\)'s. Roughly speaking, our approach consists in computing a set of polynomials \(\{h_1,\ldots ,h_s\}\), where \(h_k \in {\mathbb {Q}}[u_1,\ldots , u_m]\) for \(k=1, \ldots , s\), with the property that the stability of (1) does not change as long as the signs of \(h_1, \ldots , h_s\) do not change. Then, \({\mathbb {R}}^m\) can be decomposed into cells in which the signs of \(\{h_1,\ldots ,h_s\}\) remain the same, and the cells for which the system is structurally stable can then be selected by testing the structural stability of the system for a single parameter value in each cell.

Considering \(D(z_1,z_2,u)\) as a polynomial in the variables \(z_1\) and \(z_2\) with coefficients in \({\mathbb {Q}}[u_1,\ldots , u_m]\), we can apply the transformation given in Sect. 2.2, which yields the following set of conditions:

$$\begin{aligned} \begin{aligned}&D (z_1, 1, u) \ne 0, \quad | z_1 | \le 1, \\&D (1, z_2,u) \ne 0,\quad | z_2 | \le 1, \\&V(\langle {\mathcal {R}} (x_1, x_2,u), \, {\mathcal {I}} (x_1, x_2,u)\rangle ) \cap {\mathbb {R}}^2=\emptyset . \end{aligned} \end{aligned}$$
(16)

We start with the study of the first two conditions involving univariate polynomials with parameters. We first transform these conditions so that continuous stability tests can be applied. More precisely, we can apply the following change of variables

$$\begin{aligned} s_1=\frac{1-z_1}{1+z_1}, \quad s_2=\frac{1-z_2}{1+z_2} \end{aligned}$$

to the polynomials \(D(z_1,1, u)\) and \(D(1,z_2,u)\). We denote by \(D_1(s_1,u)\) (resp. \(D_2(s_2,u)\)) the numerator of \(D\left( \dfrac{1-s_1}{1+s_1},1, u \right) \) (resp. \(D\left( 1,\dfrac{1-s_2}{1+s_2},u \right) \)). The first two conditions of (16) then become:

$$\begin{aligned} \begin{aligned} D_1(s_1, u)&\ne 0, \quad \forall \; s_1 \in {\mathbb {C}}{:} \; \mathrm{Re}(s_1) \ge 0, \\ D_2(s_2, u)&\ne 0, \quad \forall \; s_2 \in {\mathbb {C}}{:} \; \mathrm{Re}(s_2) \ge 0. \end{aligned} \end{aligned}$$
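This change of variables is easy to sketch with sympy on a hypothetical slice \(D(z_1,1,u)\) (the polynomial below is an assumption chosen for illustration): the numerator is obtained by clearing the denominator \((1+s_1)^{\deg D}\).

```python
import sympy as sp

z1, s1, u1 = sp.symbols('z1 s1 u1')

# Hypothetical univariate slice D(z1, 1, u) of a parametric denominator.
D = z1**2 + u1*z1 + sp.Rational(1, 4)

# Substitute z1 = (1 - s1)/(1 + s1), which maps Re(s1) >= 0 onto |z1| <= 1,
# and clear the denominator (1 + s1)^2 to obtain the numerator D1(s1, u).
D1 = sp.expand(sp.cancel(D.subs(z1, (1 - s1)/(1 + s1)) * (1 + s1)**2))

# Sanity check at u1 = 0: D(z1, 1, 0) = z1^2 + 1/4 has both roots (+-i/2)
# inside the unit disc, and correspondingly the roots of D1(s1, 0) have
# positive real part, so this slice violates the stability condition.
roots = sp.solve(D1.subs(u1, 0), s1)
print([sp.re(r) for r in roots])  # both real parts equal 3/5 > 0
```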

Then, we can use a classical result of Liénard and Chipart (Basu et al. 2006, Thm. 9.30) that expresses the stability condition of a continuous polynomial D(s) as the positivity of some of its coefficients as well as of a certain signed principal subresultant sequence of two polynomials F(s) and G(s) satisfying \(D(s)=F(s^2)+s\,G(s^2)\). Using the specialization property of subresultants (see Basu et al. 2006), we can generalize this result to the case of univariate polynomials depending on parameters. In particular, applying this test to the polynomials \(D_1(s_1,u)\) and \(D_2(s_2,u)\) yields a set of polynomials \(\{h_i(u)\}_{i=1, \ldots , t}\) depending only on the parameters u, and the stability of \(D_1(s_1,u)\) and \(D_2(s_2,u)\), and thus of \(D(z_1,1,u)\) and \(D(1,z_2,u)\), holds provided that these polynomials are all positive.
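For low degrees the resulting conditions are very simple. For a hypothetical degree-2 slice (an assumption chosen for illustration), the Liénard–Chipart conditions reduce to the positivity of all the coefficients, which directly gives explicit polynomials \(h_i(u)\); a sympy sketch under this assumption:

```python
import sympy as sp

s, u = sp.symbols('s u', real=True)

# Hypothetical parametric polynomial D1(s, u) of degree 2: for degree 2,
# Hurwitz stability is equivalent to all coefficients having the same
# sign, so positivity of all coefficients cuts out the stability region.
D1 = sp.Poly((3 - u)*s**2 + (u + 1)*s + (u + 2), s)
h = D1.all_coeffs()  # the h_i(u): [3 - u, u + 1, u + 2]

# The stability region in the parameter u is the semi-algebraic set
# {u : h_i(u) > 0 for all i}.
region = sp.reduce_inequalities([c > 0 for c in h], u)
print(region)  # the region is -1 < u < 3
```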

The next problem is to decide whether or not the following system

$$\begin{aligned} \begin{aligned}&{\mathcal {R}}(x_1, x_2,u) =0, \\&{\mathcal {I}}(x_1, x_2,u) = 0, \end{aligned} \end{aligned}$$
(17)

admits real solutions. In what follows, we assume that (17) is generically zero-dimensional, that is, for almost all values of the parameters \(u \in {\mathbb {R}}^m\), (17) admits a finite number of complex solutions. The main tool we use to solve this problem is the notion of a discriminant variety, first introduced in Lazard and Rouillier (2007) and recalled in the next section.

4.1 Discriminant varieties: definition and properties

Before recalling the definition of a discriminant variety of an algebraic set, let us start with some useful notations.

For a set of polynomials \(p_1,\ldots ,p_l \in {\mathbb {Q}}[x_1,\ldots ,x_{n-m}, u_1,\ldots ,u_m]\), the corresponding algebraic set is defined as:

$$\begin{aligned} {{{\mathcal {S}}}}=\{\alpha \in {\mathbb {C}}^n \ | \ p_1(\alpha )=0,\ldots , p_l(\alpha )=0 \}. \end{aligned}$$
(18)

We consider the canonical projection onto the parameter space \({\mathbb {C}}^m\), namely, the following map

$$\begin{aligned} \begin{array}{cccc} {\varPi }_u: &{} {\mathbb {C}}^n &{} \longrightarrow &{} {\mathbb {C}}^m \\ &{} (x_1,\ldots ,x_{n-m}, u_1,\ldots ,u_m) &{} \longmapsto &{} (u_1,\ldots ,u_m), \end{array} \end{aligned}$$

and we denote by \(\overline{{\varPi }_u({{\mathcal {S}}})}\) the so-called Zariski closure of the projection of \({{\mathcal {S}}}\) onto the parameter space \({\mathbb {C}}^m\). For more details, see Cox et al. (2007).

Definition 4

(Lazard and Rouillier 2007) With the above notations, an algebraic set V of \({\mathbb {C}}^m\) is called a discriminant variety of \({{\mathcal {S}}}\) if and only if:

  1.

    V is contained in \(\overline{{\varPi }_u({{\mathcal {S}}})}\).

  2.

    The connected components \({{\mathcal {U}}}_1,\ldots ,{{\mathcal {U}}}_s\) of \(\overline{{\varPi }_u({{\mathcal {S}}})} {\setminus } V\) are analytic sub-manifolds (note that if \( \overline{{\varPi }_u({{\mathcal {S}}})}\) is connected, there is only one component).

  3.

    For \(j=1, \ldots , s\), \(({\varPi }_u^{-1}({{\mathcal {U}}}_j) \ \cap \ {{\mathcal {S}}}, {\varPi }_u)\) is an analytic covering of \({{\mathcal {U}}}_j\).

In broad terms, a discriminant variety yields a partition of the parameter space \({\mathbb {C}}^m\) into cells \({{\mathcal {U}}}\) such that, for each cell, the cardinality of \({\varPi }_u^{-1}(\mu ) \ \cap \ {{\mathcal {S}}}\), where \(\mu \in {{\mathcal {U}}}\), is locally constant on \({{\mathcal {U}}}\), and \({\varPi }_u^{-1}({{\mathcal {U}}}) \ \cap \ {{\mathcal {S}}}\) consists of a finite collection of sheets which are all locally homeomorphic to \({{\mathcal {U}}}\).

A consequence of Definition 4, stated in the following theorem, is a fundamental property of the discriminant variety regarding the number of solutions. In this theorem, we assume that the polynomial system \({{\mathcal {S}}}\) defined by (18) is generically zero-dimensional, i.e., for almost all values of the parameters \(\mu \in {\mathbb {C}}^m\), the polynomial system \({{\mathcal {S}}}_{u=\mu }\), obtained by substituting \(\mu \) for the parameters u, admits a finite number of complex solutions.

Theorem 8

(Lazard and Rouillier 2007) Let \({{\mathcal {S}}}\) be an algebraic system and \({{\mathcal {U}}}_1,\ldots ,{{\mathcal {U}}}_s\) defined as in Definition 4. Then, for two vectors of parameters \(\mu , \, \nu \in {{\mathcal {U}}}_j\), the specialized polynomial systems \({{\mathcal {S}}}_{u=\mu }\) and \({{\mathcal {S}}}_{u=\nu }\) have exactly the same number of distinct zeros.

For a given set \({{\mathcal {S}}}\), the smallest algebraic variety V that satisfies the conditions of Definition 4 is called the minimal discriminant variety (see Lazard and Rouillier 2007).

Example 12

A classical example of a discriminant variety is the zeros of the discriminant of a quadratic univariate polynomial \(f:=a\,x^2+b\,x+c\) whose coefficients are given as parameters. This discriminant is given as \(b^2-4\,a\,c\) and satisfies that for all \(a_0\), \(b_0\) and \(c_0\) such that \(b_0^2-4\,a_0\,c_0 \ne 0\), the polynomial \(a_0\,x^2+b_0\,x+c_0\) has exactly two distinct roots.

In the sequel, we simply say “discriminant variety” for “minimal discriminant variety”.

4.2 Computation of discriminant varieties

For a system \({{\mathcal {F}}}\) defined by a set \(\{p_1,\ldots ,p_l\} \subset {\mathbb {Q}}[x_1,\ldots ,x_{n-m},u_1,\ldots ,u_m]\), we can compute, by means of variable elimination using, e.g., standard Gröbner basis computations (see, e.g., Lazard and Rouillier 2007), a sequence of polynomials \(\{h_1,\ldots ,h_s\} \subset {\mathbb {Q}}[u_1,\ldots ,u_m]\) whose zeros define the discriminant variety of \({{\mathcal {F}}}\). For instance, in the case of the quadratic polynomial given in Example 12, the discriminant is computed by eliminating the variable x in the system defined by f and its derivative \(\frac{\partial f}{\partial x}\) with respect to x, which can be done, e.g., by computing the resultant of f and \(\frac{\partial f}{\partial x}\) with respect to x.
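This elimination can be reproduced with sympy's resultant; for the quadratic of Example 12 it recovers the discriminant \(b^2-4\,a\,c\) up to the factor \(-a\).

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')

f = a*x**2 + b*x + c

# Eliminate x from {f, df/dx}: the resultant equals -a*(b**2 - 4*a*c),
# i.e. the discriminant of Example 12 up to the factor -a.
r = sp.resultant(f, sp.diff(f, x), x)
assert sp.expand(r + a*(b**2 - 4*a*c)) == 0
```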

In our setting, namely a system of two polynomial equations in two variables \({{\mathcal {S}}}=V(\langle {{\mathcal {R}}}, \, {{\mathcal {I}}} \rangle )\), the discriminant variety, denoted by \(V_D\), consists of the union of the following two subsets (see Lazard and Rouillier 2007 for details):

  • The set \(O_{\infty }\) of \(\alpha \in {\mathbb {C}}^m\) such that \({\varPi }_u^{- 1} ({\mathcal {U}}) \cap {\mathcal {S}}\) is not compact for any compact neighborhood \({\mathcal {U}}\) of \(\alpha \) in \({\mathbb {C}}^m\).

  • The set \(O_c\) defined as the union of the critical values of \({\varPi }_u\) and of the projection of the singular points of \({{\mathcal {S}}}\) onto \({\mathbb {C}}^m\).

Intuitively, \(O_{\infty }\) represents parameter values above whose neighborhoods there exist either vertical leaves of solutions or leaves that go to infinity, while \(O_c\) represents parameter values above whose neighborhoods the number of leaves varies. Thus, the minimal discriminant variety \(V_D\) roughly represents the parameter values over which the number of solutions of (17) changes. Furthermore, an important remark for the computation of the discriminant variety of \({{{\mathcal {S}}}}\) is that both \(O_{\infty }\) and \(O_c\) are algebraic sets. \(V_D\) can thus be described as the union of two algebraic sets that can be computed independently.

Both \(O_{\infty }\) and \(O_c\) are projections of algebraic sets. Computing these varieties amounts to eliminating variables in the systems of equations corresponding to them, which corresponds to the following problem: given \(I = \langle f_1, \ldots , f_l \rangle \subset {\mathbb {K}}[x_1,x_2, u]\), compute \({\varPi }_u (V(I))=V(I_u)\), where \(I_u \subset {\mathbb {K}}[u]\) is defined by \(I_u = I \cap {\mathbb {K}}[u]\). Algorithmically, \(I_u\) can be obtained by means of a Gröbner basis for any elimination ordering < satisfying \(u< x_1,x_2\). More precisely, it suffices to compute a Gröbner basis for such an ordering and to keep only the polynomials that belong to \({\mathbb {K}}[u]\).
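A sympy sketch of this elimination step on a toy ideal (the generators are hypothetical): with a lex ordering that ranks the variables above the parameter, the basis elements free of \(x_1, x_2\) generate \(I_u\).

```python
import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')

# Toy ideal I = <x1 - u, x2 - u, x1*x2 - 1> in Q[x1, x2, u].
F = [x1 - u, x2 - u, x1*x2 - 1]

# Lex order with x1 > x2 > u eliminates the variables first; the basis
# elements lying in Q[u] generate the elimination ideal I_u = I ∩ Q[u].
G = sp.groebner(F, x1, x2, u, order='lex')
I_u = [g for g in G.exprs if not g.free_symbols & {x1, x2}]
print(I_u)  # [u**2 - 1]: the projection is the two points u = ±1
```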

In Lazard and Rouillier (2007), it was shown that the set \(O_c\) is equal to \({\varPi }_u (V (\langle {\mathcal {R}}, \, {\mathcal {I}}, {\text {Jac}}_{x_1,x_2}({\mathcal {R}},{\mathcal {I}}) \rangle ))\), where \({\text {Jac}}_{x_1,x_2} ({\mathcal {R}},{\mathcal {I}})\) denotes the determinant of the Jacobian matrix with respect to the variables \(x_1\) and \(x_2\). Hence, computing an ideal \(I_c\) such that \(O_c = V(I_c)\) amounts to computing the determinant \({\text {Jac}}_{x_1,x_2}({\mathcal {R}},{\mathcal {I}})\) and a Gröbner basis of the ideal \(\langle {\mathcal {R}}, \, {\mathcal {I}}, \, {\text {Jac}}_{x_1,x_2}({\mathcal {R}},{\mathcal {I}}) \rangle \) for any elimination monomial ordering < satisfying \(u < x_1,x_2\). It was also noticed that such an elimination ordering allows us to compute an ideal \(I_{\infty } \subset {\mathbb {Q}}[u]\) such that \(O_{\infty } = V(I_{\infty })\). Precisely, suppose that G is a reduced Gröbner basis of \(\langle {\mathcal {R}}, \, {\mathcal {I}} \rangle \) for a monomial ordering \(<_{u, x_1,x_2}\), that is, the product of two degree reverse lexicographic orderings \(<_u\) for the parameters and \(<_{x_1,x_2}\) for the variables. For more details, see Cox et al. (2007). Let us define the following set

$$\begin{aligned} I^i_{\infty } = \left\{ {\text {LC}}_{<_{x_1,x_2}}(g) \ \mid \ g \in G, \ \exists \, m \ge 0, \ {\text {LM}}_{<_{x_1,x_2}} (g) = x_i^m \right\} , \quad i =1, 2, \end{aligned}$$

where \({\text {LC}}_<\) (resp. \({\text {LM}}_<\)) denotes the leading coefficient (resp. leading monomial) of a polynomial for an admissible monomial ordering < (see Cox et al. 2007). Then, we have:

  • \(I^i_{\infty } \subset {\mathbb {K}}[u]\) is a Gröbner basis for \(<_u\).

  • \(O_{\infty } = V(I^1_{\infty }) \ \bigcup \ V (I^2_{\infty }) =V(I^1_{\infty } \bigcap I^2_{\infty } )\).
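On a toy pair \({\mathcal {R}}, {\mathcal {I}}\) with one parameter (an illustrative assumption, not a system coming from a transfer function), the computation of \(I_c\) via the Jacobian determinant and elimination can be sketched as:

```python
import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')

# Toy stand-ins for R and I with a single parameter u.
R = x1**2 + x2**2 - u
I = x1 - x2

# Determinant of the Jacobian of (R, I) with respect to x1, x2.
J = sp.Matrix([R, I]).jacobian([x1, x2]).det()

# Eliminate x1, x2 from <R, I, J> with a lex Groebner basis (x1 > x2 > u):
# the basis elements in u alone define O_c.
G = sp.groebner([R, I, J], x1, x2, u, order='lex')
I_c = [g for g in G.exprs if not g.free_symbols & {x1, x2}]
print(I_c)  # [u]: the two solutions x1 = x2 = ±sqrt(u/2) collide at u = 0
```

Consistently with the definition of a discriminant variety, the number of solutions of this toy system is constant (equal to two) outside the critical value u = 0.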

4.3 Discussing the number of real solutions

Once a discriminant variety \(V_D\) of \({\mathcal {S}}=V(\langle {\mathcal {R}}, \, {\mathcal {I}} \rangle )\), represented by a set of polynomials \(\{h_1,\ldots ,h_s\}\), is computed, we can compute a CAD adapted to these polynomials (see Sect. 2.1) in order to obtain a partition of the parameter space \({\mathbb {R}}^m\) defined by the discriminant variety \(V_D\) and the connected components of its complement \({\mathbb {R}}^m {\setminus } V_D\) (which has the property that over any cell \({\mathcal {U}}\) that does not meet \(V_D\), \({\varPi }_u^{- 1} ({\mathcal {U}})\) is an analytic covering of \({\mathcal {U}}\)). In particular, the number of zeros of \({\mathcal {S}}\) is constant over any connected set that does not intersect the discriminant variety.

Moreover, to compute the constant number of solutions over a connected component that does not meet the discriminant variety, it suffices to take a particular value \(\mu \) of the parameters in this component and then to solve the corresponding zero-dimensional polynomial system \({\mathcal {S}}_{u={\mu }}\).
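A sympy sketch of this sampling step, on a hypothetical zero-dimensional specialization (the system below is an illustrative assumption):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Hypothetical system (17) specialized at a sample parameter value taken
# inside a cell that avoids the discriminant variety.
R = x1**2 + x2**2 - 2
I = x1 - x2

# Solve the zero-dimensional system and count its distinct real solutions.
sols = sp.solve([R, I], [x1, x2])
n_real = sum(1 for s in sols if all(c.is_real for c in s))
print(n_real)  # 2 real solutions: (1, 1) and (-1, -1)
```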

Remark 9

Note that the structure of the solutions is not known above the discriminant variety itself. Since the discriminant variety is a set of measure zero, it is pointless here to study what happens for such parameter values. However, the discriminant variety is defined by a polynomial system, which can be added to the original system in order to pursue the study recursively.

The discriminant variety is defined for the complex solutions of (17). For real solutions, only two cases may occur:

  1.

    \({\varPi }_u ({\mathcal {S}} \cap {\mathbb {R}}^{m+2}) \subset V_D\). We then need to study \( {{\mathcal {S}}} \cap {\varPi }_u^{- 1} (V_D)\) instead of \({\mathcal {S}}\).

  2.

\({\varPi }_u ({\mathcal {S}} \cap {\mathbb {R}}^{m+2}) {\nsubseteq } V_D\). Then \(V_D \cap {\mathbb {R}}^m\) is a discriminant variety for \({\mathcal {S}} \cap {\mathbb {R}}^{m+2}\), which is the usual situation.

In the second case, note that if \(V_D\) is minimal for \({\mathcal {S}}\), then \(V_D \cap {\mathbb {R}}^m\) is not necessarily minimal for \({\mathcal {S}} \cap {\mathbb {R}}^{m+2}\).

4.4 Computing stability regions

Let us now come back to the original problem, namely the computation of the regions of the parameter space for which the conditions (16) are satisfied, i.e., which define stable systems. As mentioned at the beginning of Sect. 4, we can compute a set of polynomials \(\{p_i(u)\}_{i=1,\ldots , t}\) such that the first two conditions of (16) are satisfied if and only if \(p_i(u) > 0\) for \(i=1, \ldots , t\). We can also compute a set of polynomials \(\{q_j(u)\}_{j=1,\ldots , s}\) that defines a partition of the parameter space in which the number of real solutions of (17) is constant. Now, considering the global set of polynomials \(F :=\{p_1(u), \ldots , p_t(u), q_1(u), \ldots , q_s(u) \}\), we can compute a CAD adapted to F (see Sect. 2.1). This yields a disjoint union of cells in \({\mathbb {R}}^m\) in which the signs of all the polynomials of F (both the \(p_i\)'s and the \(q_j\)'s) are constant. In particular, inside each of these cells, both the signs of the \(p_i\)'s and the number of real solutions of (17) are constant, which implies that the system is either stable for all the parameter values in the cell or unstable for all of them. To determine the cells for which the system is stable, it then suffices to select a sample point \(u={\mu }\) in each cell and to test (16) after the evaluation of the parameters.

Finally, in practice, to reduce the running time, we only compute the cells of maximal dimension during the CAD.
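For a single parameter, this decomposition is easy to sketch: the real roots of the polynomials of F cut \({\mathbb {R}}\) into open intervals (the cells of maximal dimension), and one rational sample point per interval decides stability there. A sympy sketch with hypothetical conditions:

```python
import sympy as sp

u = sp.symbols('u', real=True)

# Hypothetical conditions F = {p_1, q_1} on a single parameter u.
F = [u + 1, u**2 - 4]

# Real roots of all polynomials of F: the cell boundaries of a 1-D "CAD".
bounds = sorted(set().union(*[sp.Poly(p, u).real_roots() for p in F]))

# One rational sample point per open interval (maximal-dimension cell);
# the stability test is then run at each sample point only.
samples = [bounds[0] - 1]
samples += [(a + b) / 2 for a, b in zip(bounds, bounds[1:])]
samples += [bounds[-1] + 1]
print(samples)  # [-3, -3/2, 1/2, 3]
```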

Remark 10

In this section, the parameters were assumed to be real. The main reason is that the coefficients of a transfer function are usually assumed to be real in control theory. Moreover, the univariate stability test we use holds only for real coefficients and the Maple implementation of the CAD provides a partition of the space \({\mathbb {R}}^m\). But our method can be extended to the case of complex parameters: we can first use a univariate stability test that handles complex coefficients (see, e.g., Bistritz 1986) and then carry out a complex variant of the CAD, that is, a partition of the space \({\mathbb {C}}^m\). For the latter, it is sufficient to consider the discriminant variety as an algebraic variety in the space \({\mathbb {R}}^{2m}\) (considering the real and imaginary parts of each parameter) and to perform the CAD associated with this variety. Of course, doubling the number of variables substantially increases the running time of this step.

Remark 11

Another interesting problem related to the stability of parameter-dependent systems concerns robustness analysis. One typical question is the following: given a nominal value of the parameters that ensures stability, how can we compute the minimal distance from this nominal value that can be tolerated while preserving stability? This is the subject of ongoing work by the authors, in which the use of critical point methods is investigated in order to compute this minimal distance as a critical value of a distance function restricted to some semi-algebraic set.

4.5 An illustrative example

We consider a 2-D system defined by a transfer function G [see (1)] depending on the parameters \(u=\{u_1, \, u_2\}\) and whose denominator D is defined by:

$$\begin{aligned} \begin{aligned} D(z_1, z_2)&= (4 \, u_1+2 \, u_2+3) \, z_1+ (-2 \, u_1+1) \, z_2+2 \, (2 \, u_1-u_2-1) \, z_1\,z_2 \\&\quad + (2 \, u_1-2 \, u_2+4) \, z_1^2 \,z_2+ (-u_1-u_2+1) \, z_1\, z_2^2. \end{aligned} \end{aligned}$$

Applying the algebraic transformation defined in Sect. 2.2 to D, the bivariate polynomial system (17) is defined by:

$$\begin{aligned} {\mathcal {R}} (x, y)= & {} 7\, u_1 \, x^2 \, y^2-3\, u_2 \, x^2 \, y^2+7\, x^2 \, y ^2+u_1 \, x^2+7\, u_1 \, y^2 -5\, u_2 \, x^2+u_2 \, y^2 \\&-x^2-3\, y^2+u_1-u_2-11, \\ {\mathcal {I}} (x, y)= & {} 10\, u_1 \, x^2 \, y-8\, u_1 \,x \, y^2+6 \, u_2 \, x^2 \, y+4 \, u_2 \,x \, y^2+4 \, x^2 \, y-6\, x \, y^2 \\&-8\, u_1 \, x+10\, u_1 \,y+4\, u_2\,x+6 \, u_2 \,y -6\,x+4\,y. \end{aligned}$$
Fig. 4 Global view—parameter space decomposition

The minimal discriminant variety of this bivariate system with respect to the projection onto \((u_1, \, u_2)\) can be obtained by means of the Maple function RootFinding[Parametric][DiscriminantVariety]. The discriminant variety is the union of 8 lines, 2 quadrics and 1 curve of degree 6, defined by:

  • \([u_1], [u_2], [4\,u_1-2\,u_2+3], [u_1-u_2-11], [u_1-5\,u_2-1], [5\,u_1+3\,u_2+2]\), \([7\,u_1-3\,u_2+7], [7\,u_1+u_2-3]\).

  • \([6\,u_1^2+4\,u_1\,u_2+2\,u_2^2-8\,u_2+1], [6\,u_1^2-6\,u_1\,u_2-4\,u_2^2+25\,u_1+3\,u_2+11]\).

  • \([1276\,u_1^6-2828\,u_1^5\,u_2-168\,u_1^4\, u_2^2+2896\,u_1^3\,u_2^3+1544\,u_1^2\,u_2^4+340\,u_1\,u_2^5\) \(+76\,u_2^6 +874\,u_1^5 -10474\,u_1^4\,u_2-4984\,u_1^3\,u_2^2-4300\, u_1^2\,u_2^3-1866\,u_1\,u_2^4+14\,u_2^5\) \(-72\,u_1^4-6542\,u_1^3\,u_2+6663\,u_1^2\,u_2^2 -1396\,u_1\,u_2^3-1053\,u_2^4-239\,u_1^3-2461\,u_1^2\,u_2\) \(+8675\,u_1\,u_2^2+665\,u_2^3+170\,u_1^2-1834\,u_1\,u_2 +2064\,u_2^2+301\,u_1-557\,u_2+91].\)

Now, computing the conditions on the parameters u that discriminate the situations where \(D(z_1,1)\) (resp., \(D(1,z_2)\)) has (or not) roots in the complex unit disc leads to the following 5 lines

$$\begin{aligned}{}[4\,u_1+2\,u_2+3], [7\,u_1-3\,u_2+7], [4\,u_1+3], [-2\,u_1+1], [3\,u_1-u_2+4], \end{aligned}$$

where \( [7\,u_1-3\,u_2+7]\) is already in the discriminant variety.

Decomposing the parameter space cylindrically with respect to these 14 curves gives 1161 cells shown in Fig. 4. A zoom on this decomposition is also shown in Fig. 5.

Fig. 5 Zoom \(u_1=-4\ldots 2,u_2=-7\ldots 7\)—parameter space decomposition

Fig. 6 Zoom around an unstable region: \(u_1=-0.4\ldots -0.6,u_2=-0.4\ldots -0.6\)—parameter space decomposition

For parameters belonging to these cells, the system is either stable or unstable. To test the stability of the corresponding system, it is sufficient to evaluate the parameters u at a numerical value \({\mu }\) in the cell, to count the number of real solutions of the resulting (non-parametric) zero-dimensional polynomial system (17) defined with \(u={\mu }\), and to perform the stability tests of \(D(z_1,1)\) and \(D(1,z_2)\).

It should be noticed that in some regions of the parameter space, some cells are very small.

Finally, it turns out that 118 of these regions correspond to unstable systems. For instance, the cell containing the pair \((u_1 = -0.4717912847, \, u_2 = -0.5389591122)\) defines unstable systems, while the cell containing the pair \((u_1 = -0.6152602220, \, u_2 = -0.5389591122)\) as well as the cell containing the pair \((u_1 = -0.3942379536, \, u_2 = -0.5389591122)\) define stable systems (see Fig. 6).

5 Conclusion

The main goal of this paper was to point out some advantages of using classical techniques from the computer algebra community in the context of the stability analysis of multidimensional systems. Indeed, using state-of-the-art algorithms for solving algebraic systems of equations, several methods for the study of the structural stability of these systems have been developed. The novelty of these methods compared to the existing ones is that they are non-conservative and show promising results in practice, especially for 2-D and 3-D systems. Moreover, beyond their intrinsic interest for testing stability, the obtained algorithms can also be used for solving similar problems such as the computation of stabilizing feedback controllers for 1-D linear systems or the stabilization of n-D systems. From the computational point of view, we will try to improve the practical behavior of these methods in the case of n-D systems (\(n > 3\)) by investigating the use of numerical routines while keeping the exactness of the approach, since it is critical in our problems. This investigation will be the subject of future works. In addition, other classes of linear systems, such as time-delay systems, share the same type of representation and can thus be addressed using the same computer algebra techniques.