
1 Introduction

Gyroscopic systems play an important role in a wide variety of engineering and physics applications, ranging from the design of civil structures (buildings, highways, and bridges) to the aircraft industry and the motion of fluids in flexible pipes.

In its most general form, a gyroscopic system is modeled by means of a linear differential system on a finite-dimensional space, as follows:

$$\displaystyle \begin{aligned} M\ddot x(t) + (G + D)\dot x(t) + (K+ N)x(t) = 0. \end{aligned} $$
(1)

Here, x(t) corresponds to the generalized coordinates of the system, M = M^T represents the mass matrix, G = −G^T and K = K^T are related to gyroscopic and potential forces, D = D^T and N = −N^T are related to dissipative (damping) and nonconservative positional (circulatory) forces, respectively. Therefore, the gyroscopic system (1) is not conservative when D and N are nonzero matrices.

The stability of the system is determined by its associated quadratic eigenvalue problem:

$$\displaystyle \begin{aligned} M\lambda^2 +(G+D) \lambda+(K+N) =0. \end{aligned} $$
(2)

In particular, the system is said to be strongly stable if all eigenvalues of (2) lie in the open left half plane, and weakly stable if all eigenvalues of (2) lie in the closed left half plane, at least one of them is purely imaginary, and all purely imaginary eigenvalues are semi-simple. It is unstable otherwise.
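To make the classification concrete, the eigenvalues of (2) can be computed by a standard companion linearization of the quadratic problem. The following Python sketch is our own illustration, not part of the original text; note that the semi-simplicity of the purely imaginary eigenvalues, required for weak stability, cannot be detected by sign tests alone.

```python
import numpy as np
from scipy.linalg import eig

def qep_eigvals(M, C, K):
    """Eigenvalues of det(lam^2*M + lam*C + K) = 0 via companion linearization."""
    n = M.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    return eig(A, B, right=False)

def classify(M, C, K, tol=1e-10):
    """Rough stability classification by the real parts of the eigenvalues."""
    lam = qep_eigvals(M, C, K)
    if np.all(lam.real < -tol):
        return "strongly stable"
    if np.any(lam.real > tol):
        return "unstable"
    # weak stability additionally requires the purely imaginary
    # eigenvalues to be semi-simple, which a sign test cannot verify
    return "weakly stable (assuming semi-simple imaginary eigenvalues)"
```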

Although nonconservative systems are of great interest, especially in the context of nonlinear mechanics (see, e.g., [8]), this work is confined to conservative systems. Thus, the equation of motion is given by:

$$\displaystyle \begin{aligned} M\ddot x(t) + G \dot x(t) + K x(t) = 0. \end{aligned} $$
(3)

In particular, the spectrum of the quadratic eigenvalue problem associated with (3) is characterized by Hamiltonian symmetry. Indeed, for any eigenvalue λ with a corresponding pair of left and right eigenvectors (y, x), that is:

$$\displaystyle \begin{aligned} (\lambda^2 M+\lambda G + K)x =0,\quad \ y^*(\lambda^2 M+\lambda G + K)=0 \qquad (x,y\ne 0), \end{aligned}$$

also \(\overline \lambda ,-\lambda ,-\overline \lambda \) are eigenvalues with corresponding pairs of left and right eigenvectors \((\overline y,\overline x), (x,y), (\overline x,\overline y)\), respectively.

Let us define the matrix pencil \(\mathcal Q(\lambda )= M\lambda ^2 +G \lambda +K\) such that the associated quadratic eigenvalue problem reads

$$\displaystyle \begin{aligned} \mathcal Q(\lambda)x= [M\lambda^2 +G \lambda+K]x=0. \end{aligned} $$
(4)

In the absence of gyroscopic forces, it is well known that the system \( M\ddot x(t) + Kx(t) = 0\) is stable for K positive definite and unstable otherwise. When G is nonzero, the system is weakly stable (see [11]) if the stiffness matrix K is positive definite, and may be unstable if K ≤ 0 and K is singular. In the latter case, indeed, the zero eigenvalue can be either semi-simple (and the system is stable) or defective (unstable). Indeed, as numbers in the complex plane, the eigenvalues are symmetrically placed with respect to both the real and imaginary axes. This property has two important consequences: on one hand, the eigenvalues can only move along the axis they belong to unless coalescence occurs; on the other hand, system (3) is stable only if all eigenvalues are purely imaginary.

For a conservative gyroscopic system, strong stability is thus impossible, since the presence of an eigenvalue in the left half plane would imply the existence of its symmetric counterpart in the right half plane. The only possibility for the system to be stable is to be marginally stable (a particular case of weak stability), which requires that all eigenvalues lie on the imaginary axis, and the only way to lead the system to instability is a strong interaction (a coalescence of two or more eigenvalues, which is necessary for them to leave the imaginary axis). The stiffness matrix K, whose signature is not prescribed, plays a fundamental role in the stability of the system, and many stability results based on the mutual relationship between G and K are available in the literature, as reported in [6, 7, 10] and references therein, and summarized in [12]. Given a marginally stable system of the form (3), the aim of this work is to find a measure of robustness of the system, that is, the maximal perturbation that retains stability.

The paper is organized as follows. In Sect. 2, we phrase the problem in terms of a structured distance to instability and present the methodology we adopt. In Sect. 3, we illustrate the system of ODEs for computing the minimal distance between pairs of eigenvalues. In Sect. 4, we derive a variational formula to compute the distance to instability. In Sect. 5, we present the complete method, and in Sect. 6, we report some numerical experiments.

2 Distance to Instability

The distance to instability is the size of the smallest additive perturbation that renders the system unstable. To estimate the robustness of (3), we use the Frobenius norm. In order to preserve the Hamiltonian symmetry of the system, we only allow specific classes of perturbations: gyroscopic forces and potential energy are subject to additive skew-symmetric and symmetric perturbations, respectively. In [9], such a measure of robustness is called strong stability, a term that seems misleading in view of the definitions in Sect. 1. Nevertheless, the author's aim was to find a neighboring system, that is, an arbitrarily close system which retains stability and symmetry properties. Interesting stability results are presented there for sufficiently small perturbations. Our goal, however, is to characterize these perturbations and give a measure of "how small" they need to be to avoid instability. The distance to instability is related to the ε-pseudospectrum of the system.

In particular, we assume that M is fixed and we allow specific additive perturbations on G and K.

Therefore, let us define the structured ε-pseudospectrum of [G, K] as follows:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sigma_{\varepsilon}([G,K])= \bigl\{\lambda \in {\mathbb C} \, :\, \lambda \in \sigma([G+\varDelta G,\,K+\varDelta K]) \ \mbox{with}\ \| [\varDelta G, \varDelta K] \|_F \leq \varepsilon,\\ \mbox{for some skew-symmetric } \varDelta G \mbox{ and symmetric } \varDelta K \bigr\}. \end{array} \end{aligned} $$
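As a rough illustration of this set (a sampling sketch of ours, not the method of the paper), one may draw random structured perturbations of norm ε and record the resulting eigenvalues, reusing the hypothetical qep_eigvals helper from the earlier sketch:

```python
import numpy as np

def random_structured_perturbation(n, eps, rng):
    """Random [DeltaG, DeltaK], DeltaG skew-symmetric and DeltaK symmetric,
    with Frobenius norm of the pair equal to eps."""
    dG = rng.standard_normal((n, n)); dG = (dG - dG.T) / 2
    dK = rng.standard_normal((n, n)); dK = (dK + dK.T) / 2
    s = np.sqrt(np.linalg.norm(dG, 'fro')**2 + np.linalg.norm(dK, 'fro')**2)
    return eps * dG / s, eps * dK / s

def sample_pseudospectrum(M, G, K, eps, nsamples=200, seed=0):
    """Monte-Carlo approximation (from below) of sigma_eps([G, K])."""
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(nsamples):
        dG, dK = random_structured_perturbation(G.shape[0], eps, rng)
        pts.extend(qep_eigvals(M, G + dG, K + dK))  # helper from Sect. 1 sketch
    return np.array(pts)
```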

We will call ε⋆ the sought measure, meaning that for every ε < ε⋆ the system remains marginally stable. Moreover, as mentioned in Sect. 1, the only way to lead the system to instability is a strong interaction, which means that at least two ε-pseudoeigenvalues coalesce. Exploiting this property, we compute the distance to instability in two phases: an outer iteration changes the measure ε of the perturbation, and an inner iteration lets the ε-pseudoeigenvalues move along the imaginary axis, for the fixed ε, until the candidates for coalescence are determined. The following remark suggests limiting our interest to systems in which the stiffness matrix is not positive definite.

Remark 1

When K is positive definite, the distance to instability of the system coincides with the distance to singularity of the matrix K, which is simply the smallest eigenvalue of K, because of the Hamiltonian symmetry.
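A trivial numerical check of Remark 1 (our illustration, assuming the distance to singularity is measured by a symmetric perturbation of K):

```python
import numpy as np

K = np.diag([3.0, 5.0, 7.0])             # an illustrative positive definite K
eps_star = np.linalg.eigvalsh(K).min()   # distance to singularity: here 3.0
```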

2.1 Methodology

We make use of a two-level methodology.

First, we fix as ε the Frobenius norm of the admitted perturbation [ΔG, ΔK]. Then, given a pair of (close) eigenvalues λ_1, λ_2 on the imaginary axis, we look for the perturbations associated with a minimum of the distance |λ_1 − λ_2| along the imaginary axis. This is obtained by integrating a suitable gradient system for the functional |λ_1 − λ_2|, preserving the norm of the perturbation [ΔG, ΔK].

The external method controls the perturbation level ε with the aim of finding the minimal value ε⋆ for which λ_1 and λ_2 coalesce. The method is based on a fast Newton-like iteration.

Two-level iterations of a similar type have previously been used in [4, 5] for other matrix-nearness problems.

To formulate the internal optimization problem, we introduce, for ε > 0, the functional:

$$\displaystyle \begin{aligned} f_\varepsilon(\varDelta G,\varDelta K) = \Big|\lambda_1(\varDelta G,\varDelta K) - \lambda_2(\varDelta G,\varDelta K) \Big| {} \end{aligned} $$
(5)

where λ_1(ΔG, ΔK) and λ_2(ΔG, ΔK) are the closest eigenvalues on the imaginary axis of the quadratic eigenvalue problem \(\det (M \lambda ^2 + (G+\varDelta G) \lambda + (K + \varDelta K))=0\).
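Numerically, the functional (5) can be evaluated by extracting the purely imaginary eigenvalues of the perturbed problem and taking the smallest gap. A minimal sketch, reusing the hypothetical qep_eigvals helper above and assuming at least two numerically imaginary eigenvalues exist:

```python
import numpy as np

def f_eps(M, G, K, dG, dK, eps, tol=1e-8):
    """Value of the functional (5): smallest gap between the purely
    imaginary eigenvalues of M*lam^2 + (G+eps*dG)*lam + (K+eps*dK)."""
    lam = qep_eigvals(M, G + eps * dG, K + eps * dK)
    theta = np.sort(lam[np.abs(lam.real) < tol].imag)
    return np.diff(theta).min()
```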

Thus, we can recast the problem of computing the distance to instability as follows:

  1. For fixed ε, compute

    $$\displaystyle \begin{aligned}{}[\varDelta G(\varepsilon), \varDelta K(\varepsilon)] \longrightarrow \mathop{\min}\limits_{\varDelta G, \varDelta K: \| [\varDelta G, \varDelta K] \|{}_F = \varepsilon} f_\varepsilon(\varDelta G,\varDelta K) := f(\varepsilon) {} \end{aligned} $$
    (6)

    with

    $$\displaystyle \begin{aligned} \varDelta G + \varDelta G^T = 0 \quad \mbox{and} \quad \varDelta K - \varDelta K^T=0. \end{aligned} $$
    (7)
  2. Compute

    $$\displaystyle \begin{aligned} \varepsilon^* \longrightarrow \mathop{\min}\limits_{\varepsilon > 0} \{\varepsilon: f(\varepsilon)=0\}. {} \end{aligned} $$
    (8)

    This means computing a pair (ΔG⋆, ΔK⋆) of norm ε⋆ such that λ_1(ε⋆) is a double eigenvalue of the quadratic eigenvalue problem \(\det (M\lambda ^2 + (G+\varDelta G_*) \lambda + (K + \varDelta K_*))=0\).

2.2 Algorithm

In order to perform the internal minimization in (6), we locally minimize the functional f_ε(ΔG, ΔK) over all [ΔG, ΔK] of at most unit Frobenius norm, by integrating a steepest-descent differential equation (namely, the gradient system associated with the functional (5)) until a stationary point is reached. The key instrument to deal with eigenvalue optimization is a classical variational result concerning the derivative of a simple eigenvalue of a quadratic eigenvalue problem.

In order to perform the minimization in (8), instead, denoting the minimum value of f_ε(ΔG, ΔK) by f(ε), we then determine the smallest perturbation size ε⋆ > 0 such that f(ε⋆) = 0, by making use of a quadratically convergent iteration.

Remark 2

For a fixed ε, we compute all the possible distances between the eigenvalues, in order to identify the pair which coalesces first (the global optimum).

The whole method is summarized later by Algorithm 2.

3 The Gradient System of ODEs

In this section, the goal is to design a system of differential equations that, for a given ε, finds the closest pair of ε-pseudoeigenvalues on the imaginary axis. This system turns out to be a gradient system for the considered functional, which yields a useful monotonicity property along its analytic solutions.

To this intent, let us define the two-parameter operator:

$$\displaystyle \begin{aligned} Q(\tau, \lambda) =M \lambda^2 +G(\tau) \lambda+ K(\tau). {} \end{aligned} $$
(9)

Let λ = λ(τ), and let λ_0 satisfy the quadratic eigenvalue problem (4).

Assuming that λ_0 is a simple eigenvalue, by Theorem 3.2 in [1] we have:

$$\displaystyle \begin{aligned}y_0^*\frac{\partial Q}{\partial {\lambda}} x_0 \neq 0,\end{aligned}$$

where x_0 and y_0 are the right and left eigenvectors of Q at λ_0, respectively.

Under this assumption, therefore, by the variational result (5.3) in [1], the derivative of λ with respect to τ is well defined and given by:

$$\displaystyle \begin{aligned} \frac{d \lambda}{d \tau}= -\left( y_0^*\frac{\partial Q}{\partial {\tau}} x_0\right) \bigg / \left(y_0^*\frac{\partial Q}{\partial {\lambda}} x_0 \right) \end{aligned} $$
(10)
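As a sanity check of (10) (our own sketch, with a hypothetical skew-symmetric direction E perturbing G(τ) = G + τE at τ = 0), the formula can be compared with a finite difference; x_0 and y_0 are obtained as singular vectors of Q(λ_0), and λ_0 is assumed simple:

```python
import numpy as np

def eigvecs_at(M, Gc, Kc, lam):
    """Right/left eigenvectors of Q(lam) = lam^2*M + lam*Gc + Kc via SVD."""
    U, s, Vh = np.linalg.svd(lam**2 * M + lam * Gc + Kc)
    return Vh.conj().T[:, -1], U[:, -1]   # x0 (Q x0 ~ 0) and y0 (y0^* Q ~ 0)

def dlam_dtau(M, G, K, E, lam):
    """Variational formula (10) for G(tau) = G + tau*E at tau = 0."""
    x0, y0 = eigvecs_at(M, G, K, lam)
    num = y0.conj() @ (lam * E) @ x0            # y0^* (dQ/dtau) x0
    den = y0.conj() @ (2 * lam * M + G) @ x0    # y0^* (dQ/dlam) x0
    return -num / den

# finite-difference reference (h small; pick the eigenvalue closest to lam):
# lam_h = qep_eigvals(M, G + h*E, K)
# fd = (lam_h[np.argmin(abs(lam_h - lam))] - lam) / h
```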

Next, let us consider the matrix-valued functions G_ε(t) = G + εΔG(t) and K_ε(t) = K + εΔK(t), where the augmented matrix [ΔG(t), ΔK(t)] satisfies (7), and

$$\displaystyle \begin{aligned} \|[\varDelta G(t), \varDelta K(t)]\|_F=1 \quad \mbox{for all } t\in {\mathbb R}. \end{aligned} $$
(11)

The corresponding quadratic eigenvalue problem is Q_ε(t, λ)x = 0, where:

$$\displaystyle \begin{aligned}Q_\varepsilon(t,\lambda)=M\lambda^2 +[G+\varepsilon \varDelta G(t)]\lambda + [K+\varepsilon\varDelta K(t)]. \end{aligned}$$

Moreover, let λ_1(t) = iθ_1(t) and λ_2(t) = iθ_2(t), with \(\theta _1(t), \theta _2(t) \in {\mathbb R}\), be the pair of purely imaginary eigenvalues of Q_ε(t, λ)x = 0 at minimal distance from each other. Conventionally, we assume θ_1 > θ_2.

For i = 1, 2, let y_i be such that

$$\displaystyle \begin{aligned} \gamma_i := y_i^* \left[2 {\mathbf{i}} \theta_i M +(G+ \varepsilon \varDelta G) \right] x_i > 0 \end{aligned} $$
(12)

is real and positive; this is always possible by suitably scaling the eigenvectors. Then, applying (10) gives

$$\displaystyle \begin{aligned} \begin{array}{rcl} \dot \theta_1 - \dot \theta_2 &\displaystyle = &\displaystyle {\mathbf{i}}\varepsilon \left [ \frac{y_1^* \left( {\mathbf{i}} \theta_1 \dot{\varDelta G} + \dot{\varDelta K} \right) x_1}{\gamma_1} - \frac{y_2^* \left( {\mathbf{i}} \theta_2 \dot{\varDelta G} + \dot{\varDelta K} \right) x_2}{\gamma_2}\right] \\ {} &\displaystyle = &\displaystyle \varepsilon \left\langle -\frac{\theta_1}{\gamma_1}y_1x_1^* + \frac{\theta_2}{\gamma_2}y_2x_2^*,\; \dot{\varDelta G}\right\rangle + \varepsilon \left\langle -\frac{{\mathbf{i}}}{\gamma_1}y_1x_1^* + \frac{{\mathbf{i}}}{\gamma_2}y_2x_2^*,\; \dot{\varDelta K}\right\rangle. {} \end{array} \end{aligned} $$
(13)

where, for a pair of matrices A and B, we denote by ⟨A, B⟩ the Frobenius inner product:

$$\displaystyle \begin{aligned} \langle A,B \rangle = \text{trace}(A ^* B). \end{aligned}$$

The derivative of [ΔG(t), ΔK(t)] must be chosen in the direction that gives the maximum possible decrease of the distance between the two closest eigenvalues, along the manifold of matrices [ΔG, ΔK] of unit Frobenius norm. Notice that constraint (11) is equivalent to

$$\displaystyle \begin{aligned} \Bigl\langle [\varDelta G, \varDelta K], [\dot{\varDelta G}, \dot{\varDelta K}]\Bigr\rangle=0. \end{aligned}$$

We have the following optimization result, which allows us to determine the constrained gradient of f_ε(ΔG, ΔK).

Theorem 3

Let \([\varDelta G, \varDelta K]\in {\mathbb R}^{n,2n}\) be a real matrix of unit norm satisfying conditions (7) and (11), and let x_i and y_i be right and left eigenvectors associated with the eigenvalues λ_i = iθ_i, for i = 1, 2, of Q_ε(t, λ)x = 0. Moreover, let γ_i, i = 1, 2, be two real and positive numbers, and consider the optimization problem:

$$\displaystyle \begin{aligned} \min_{Z \in \varOmega} \left\langle -\frac{\theta_1}{\gamma_1} y_1 x_1^* + \frac{\theta_2}{\gamma_2}y_2 x_2^*,\; Z_G\right\rangle + \left\langle -\frac{{\mathbf{i}}}{\gamma_1} y_1 x_1^* + \frac{{\mathbf{i}}}{\gamma_2}y_2 x_2^*,\; Z_K \right\rangle \end{aligned} $$
(14)

with

$$\displaystyle \begin{aligned} \varOmega = \Bigl\{ \|Z\|=1,\langle [\varDelta G, \varDelta K], Z\rangle=0, Z_G \in \mathcal{M}_{\mathit{\text{Skew}}}, Z_K \in \mathcal{M}_{\mathit{\text{Sym}}} \Bigr\}, \end{aligned}$$

where \(\mathcal {M}_{\mathit{\text{Skew}}}\) is the manifold of skew-symmetric matrices and \(\mathcal {M}_{\mathit{\text{Sym}}}\) the manifold of symmetric matrices.

The solution \(Z^\star =[Z_G^\star ,Z_K^\star ]\) of (14) is given by:

$$\displaystyle \begin{aligned} \mu Z^\star= \mu \bigl[Z_G^\star, Z_K^\star\bigr] = \bigl[f_G-\eta\, \varDelta G, f_K -\eta\, \varDelta K\bigr] \end{aligned} $$
(15)

where μ > 0 is a suitable scaling factor, and

$$\displaystyle \begin{aligned} \begin{array}{rcl} \eta & =& \Re\Bigl\langle [\varDelta G, \varDelta K], [f_G, f_K]\Bigr\rangle\\ f_G &=&\displaystyle{ \mathit{\text{Skew}} \left( \Re \left[ \frac{\theta_1}{\gamma_1} y_1 x_1^*-\frac{\theta_2}{\gamma_2} y_2 x_2^*\right] \right)} \\ f_K &=& \displaystyle{\mathit{\text{Sym}} \left( \Im \left[ \frac{1}{\gamma_2} y_2 x_2^*-\frac{1}{\gamma_1} y_1 x_1^*\right] \right)} \end{array} \end{aligned} $$
(16)

where Skew(B) denotes the skew-symmetric part of B and Sym(B) denotes the symmetric part of B.

Proof

Preliminarily, we observe that, for a real matrix

$$\displaystyle \begin{aligned} B = \frac{B+B^{\mathrm{T}}}{2} + \frac{B-B^{\mathrm{T}}}{2} = \text{Sym}(B) + \text{Skew}(B) \end{aligned}$$

the orthogonal projections (with respect to the Frobenius inner product) onto the manifold \(\mathcal {M}_{\text{Sym}}\) of symmetric matrices and onto the manifold \(\mathcal {M}_{\text{Skew}}\) of skew-symmetric matrices are given by Sym(B) and Skew(B), respectively. In fact:

$$\displaystyle \begin{aligned} \langle \text{Sym}(B), Z \rangle = 0 \quad \mbox{for all} \ Z \in \mathcal{M}_{\text{Skew}} \end{aligned}$$

and

$$\displaystyle \begin{aligned} \langle \text{Skew}(B), Z \rangle = 0 \quad \mbox{for all} \ Z \in \mathcal{M}_{\text{Sym}}. \end{aligned}$$

Looking at (14), we set the free gradients:

$$\displaystyle \begin{aligned} \phi_G = -\frac{\theta_1}{\gamma_1} y_1 x_1^* + \frac{\theta_2}{\gamma_2}y_2 x_2^* \quad \mbox{and} \quad \phi_K = -\frac{{\mathbf{i}}}{\gamma_1} y_1 x_1^* + \frac{{\mathbf{i}}}{\gamma_2}y_2 x_2^*. \end{aligned}$$

The proof is obtained by considering the orthogonal projection (with respect to the Frobenius inner product) of the matrices (which can be regarded as vectors) − ϕ_G and − ϕ_K onto the real manifold \(\mathcal {M}_{\text{Skew}}\) of skew-symmetric matrices and onto the real manifold \(\mathcal {M}_{\text{Sym}}\) of symmetric matrices, respectively, and by further projecting the resulting rectangular matrix onto the tangent space to the manifold of real rectangular matrices of unit norm.

3.1 The System of ODEs

Following Theorem 3, we consider the following system of ODEs, where we omit the dependence on t:

$$\displaystyle \begin{aligned} \left\{ \begin{array}{rcl} \displaystyle{\frac{d}{dt}} \varDelta G & = & f_G - \eta\, \varDelta G \\ {} \displaystyle{\frac{d}{dt}} \varDelta K & = & f_K - \eta\, \varDelta K \end{array} \right. {} \end{aligned} $$
(17)

with η, f G, and f K as in (16).

This is a gradient system, which implies that the functional f_ε(ΔG(t), ΔK(t)) decreases monotonically along solutions of (17), until a stationary point is reached; generically, such a point is associated with a local minimum of the functional.
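A minimal sketch of one explicit Euler step of (17), assembling η, f_G, and f_K as in (16) and renormalizing to unit Frobenius norm afterwards; the eigenvectors are recovered from an SVD of Q_ε(λ_i) and rescaled so that (12) holds. The helper structure and the choice of λ_1, λ_2 (the current closest imaginary pair) are our assumptions:

```python
import numpy as np

def skew(B): return (B - B.T) / 2
def sym(B):  return (B + B.T) / 2

def euler_step(M, G, K, dG, dK, eps, lam1, lam2, h=1e-2):
    """One Euler step of the gradient system (17), then renormalization."""
    Gp, Kp = G + eps * dG, K + eps * dK
    R, gam, th = [], [], []
    for lam in (lam1, lam2):
        U, s, Vh = np.linalg.svd(lam**2 * M + lam * Gp + Kp)
        x, y = Vh.conj().T[:, -1], U[:, -1]
        g = y.conj() @ (2 * lam * M + Gp) @ x
        y = y * (g / abs(g))                 # rescale so gamma_i > 0, cf. (12)
        R.append(np.outer(y, x.conj()))      # y_i x_i^*
        gam.append(abs(g)); th.append(lam.imag)
    # free gradients projected as in (16)
    fG = skew(np.real(th[0] / gam[0] * R[0] - th[1] / gam[1] * R[1]))
    fK = sym(np.imag(R[1] / gam[1] - R[0] / gam[0]))
    eta = np.trace(dG.T @ fG) + np.trace(dK.T @ fK)
    dG, dK = dG + h * (fG - eta * dG), dK + h * (fK - eta * dK)
    nrm = np.sqrt(np.linalg.norm(dG, 'fro')**2 + np.linalg.norm(dK, 'fro')**2)
    return dG / nrm, dK / nrm
```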

4 The Computation of the Distance to Instability

As mentioned in Sect. 1, the only way to break the Hamiltonian symmetry is a strong interaction, that is, the coalescence of two (or more) eigenvalues. This property allows us to reformulate the problem of the distance to instability in terms of a distance to defectivity (see [3]). In particular, since the matrices G and K must preserve their structure, we consider a structured distance to defectivity. Because of the coalescence, we do not expect the distance between the eigenvalues to be a smooth function of ε at the point where f(ε) = 0.

As an illustrative example, consider the gyroscopic system described by the equation:

(18)

The minimal distance among the eigenvalues of this system is achieved by the conjugate pair closest to the origin, that is, |θ_1| = |θ_2|, and coalescence occurs at the origin, as shown in Fig. 1 (left).

Fig. 1

Eigenvalues of system (18) on the left, before and at the moment of strong interaction at the origin. Eigenvalues of system (19) on the right: two strong interactions occur at the same time

Let us now replace the stiffness matrix in (18) with −I, that is:

(19)

Although |θ_1| = |θ_2| still holds, the strong interaction does not occur at the origin: here, two pairs coalesce at the same time, as shown in Fig. 1 (right).

4.1 Variational Formula for the ε-Pseudoeigenvalues with Respect to ε

We consider here the minimizers ΔG(ε) and ΔK(ε), computed as stationary points of the system of ODEs (17) for a given ε, and the associated eigenvalues λ_i(ε) = iθ_i(ε) of the quadratic eigenvalue problem, with ε < ε⋆ (which implies θ_1(ε) ≠ θ_2(ε)). We assume that all the abovementioned quantities are smooth functions of ε, which we expect to hold generically.

Formula (10) is useful to compute the derivative of the ε-pseudoeigenvalues with respect to ε. We need the derivative of the operator Q with respect to ε, which is given by:

$$\displaystyle \begin{aligned}\frac{\partial Q}{\partial \varepsilon}= \varDelta G \lambda + \varDelta K + \varepsilon (\varDelta G' \lambda +\varDelta K')\end{aligned}$$

Here, the notation \( A'= \frac {d A}{d \varepsilon }\) is adopted. Assuming that λ = λ_0 is a simple eigenvalue, with x_0 and y_0 the right and left eigenvectors of Q at λ_0, respectively, then

$$\displaystyle \begin{aligned}\frac{\partial \lambda}{\partial \varepsilon} = -\frac{y_0^* (\varDelta G \lambda + \varDelta K + \varepsilon (\varDelta G' \lambda +\varDelta K') ) x_0 }{y_0^* (2 M \lambda+ G+\varepsilon \varDelta G) x_0 } \end{aligned}$$

Claim

\(y_0^* (\varDelta G' \lambda +\varDelta K') x_0 =0\).

The norm conservation ||[ΔG, ΔK]||_F = 1, which is equivalent to \( \|\varDelta G\|{}^2_F + \|\varDelta K\|{}^2_F=1\), implies that 〈ΔG, ΔG′〉 = 0 = 〈ΔK, ΔK′〉. Also:

$$\displaystyle \begin{aligned} \Re (y_0^*\lambda_0 \varDelta G' x_0)= \Re (y_0^*\,{\mathbf{i}} \theta_0 \, \varDelta G' x_0)= \langle \varDelta G', \Re( y_0\,x_0^* )\rangle =\langle \varDelta G', \eta \varDelta G \rangle = 0, \end{aligned}$$

and

$$\displaystyle \begin{aligned} \Im (y_0^*\, \varDelta K' x_0)= \langle \varDelta K', \Im( y_0\,x_0^* )\rangle =\langle \varDelta K', \eta \varDelta K \rangle = 0. \end{aligned}$$

Therefore:

$$\displaystyle \begin{aligned} \frac{\partial \lambda}{\partial \varepsilon} = -\frac{y_0^* (\varDelta G \lambda + \varDelta K ) x_0 }{y_0^* (2 M \lambda+ G+\varepsilon \varDelta G) x_0 } \end{aligned}$$

and

$$\displaystyle \begin{aligned} \theta_1^{\prime}-\theta_2^{\prime}= \frac{1}{\gamma_2} \bigg[\theta_2 \Re(y_2^*\, \varDelta G \, x_2) + \Im (y_2^*\, \varDelta K \, x_2)\bigg]-\frac{1}{\gamma_1} \bigg[\theta_1 \Re(y_1^*\, \varDelta G \, x_1) + \Im (y_1^*\, \varDelta K \, x_1)\bigg] \end{aligned} $$
(20)

The previous expression provides f′(ε). Hence, for ε < ε⋆ we can exploit this knowledge. Since, generically, coalescence gives rise to a defective pair on the imaginary axis, the derivative f′(ε) becomes unbounded as ε approaches ε⋆.
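For completeness, a small sketch of evaluating f′(ε) through (20), given the minimizers and the two eigenvalues; the eigenvector extraction and the normalization (12) follow the same assumptions as in the previous sketches:

```python
import numpy as np

def f_prime(M, G, K, dG, dK, eps, lam1, lam2):
    """f'(eps) = theta_1' - theta_2' evaluated via formula (20)."""
    Gp, Kp = G + eps * dG, K + eps * dK
    terms = []
    for lam in (lam1, lam2):
        U, s, Vh = np.linalg.svd(lam**2 * M + lam * Gp + Kp)
        x, y = Vh.conj().T[:, -1], U[:, -1]
        g = y.conj() @ (2 * lam * M + Gp) @ x
        y = y * (g / abs(g))                 # normalization (12): gamma = |g| > 0
        terms.append((lam.imag * np.real(y.conj() @ dG @ x)
                      + np.imag(y.conj() @ dK @ x)) / abs(g))
    return terms[1] - terms[0]               # as in (20)
```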

Our goal is to approximate ε⋆ by solving f(ε) = δ, with δ > 0 a sufficiently small number. For ε < ε⋆ close to ε⋆, we generically have (see [3])

$$\displaystyle \begin{aligned} \begin{array}{rcl} && \left\{ \begin{array}{rcl} f(\varepsilon) & = & \gamma \sqrt{\varepsilon^\star - \varepsilon} + \mathcal{O}\bigl( (\varepsilon^\star - \varepsilon)^{3/2} \bigr) \\ {} f'(\varepsilon) & = & \displaystyle{-\frac{\gamma}{2 \sqrt{\varepsilon^\star - \varepsilon}}} + \mathcal{O}\bigl( (\varepsilon^\star - \varepsilon)^{1/2} \bigr), \end{array} \right. {} \end{array} \end{aligned} $$
(21)

which corresponds to the coalescence of two eigenvalues. For an iterative process, given ε_k, we use formula (20) to compute f′(ε_k) and estimate γ and ε⋆ by solving (21) with respect to γ and ε⋆. We denote the solutions by γ_k and ε⋆_k, that is:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \gamma_k &\displaystyle = &\displaystyle \sqrt{2 f(\varepsilon_k) |f'(\varepsilon_k)|}, \qquad \varepsilon^\star_k = \varepsilon_k + \frac{f(\varepsilon_k)}{2 |f'(\varepsilon_k)|} {} \end{array} \end{aligned} $$
(22)

and then compute

$$\displaystyle \begin{aligned} \begin{array}{rcl} \varepsilon_{k+1} &\displaystyle = &\displaystyle \varepsilon^\star_k - {\delta^2}/{\gamma_k^2}. {} \end{array} \end{aligned} $$
(23)

An algorithm based on the previous formulæ is Algorithm 2; the Newton-like update adds essentially no extra cost, since the computation of f′(ε_k) through (20) is very cheap.

Unfortunately, since the function f(ε) is not smooth at ε⋆ and vanishes identically for ε > ε⋆, the fast algorithm has to be complemented by a slower bisection technique in order to provide a reliable method for approximating ε⋆.

5 The Complete Algorithm

The whole Algorithm 2 follows:

Algorithm 2: Algorithm for computing ε⋆
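The algorithm itself is displayed as a figure in the published version; the following Python pseudocode is a hedged reconstruction of its structure from Sects. 2–4, combining the Newton-like update (22)–(23) with the bisection safeguard. Here inner_minimize stands for the repeated application of the gradient steps of Sect. 3 (for instance euler_step above) until stationarity, and eps_upper for an upper bound such as ε_u of Sect. 6; both are placeholders of ours, not names from the paper.

```python
import numpy as np

def compute_eps_star(M, G, K, dG, dK, inner_minimize, eps_upper,
                     delta=1e-4, tol=1e-8, kmax=100):
    """Two-level iteration for eps_star (hedged reconstruction of Algorithm 2)."""
    eps_lo, eps_hi = 0.0, eps_upper
    eps = 0.5 * (eps_lo + eps_hi)
    for _ in range(kmax):
        # inner phase: drive the closest imaginary pair together at fixed eps
        dG, dK, lam1, lam2 = inner_minimize(M, G, K, dG, dK, eps)
        f = abs(lam1 - lam2)
        if f > delta:                        # no coalescence yet: eps < eps_star
            eps_lo = eps
            fp = f_prime(M, G, K, dG, dK, eps, lam1, lam2)
            gamma_k = np.sqrt(2 * f * abs(fp))            # (22)
            eps_star_k = eps + f / (2 * abs(fp))          # (22)
            eps_new = eps_star_k - delta**2 / gamma_k**2  # (23)
            # fall back to bisection when the Newton step leaves the bracket
            eps = eps_new if eps_lo < eps_new < eps_hi else 0.5 * (eps_lo + eps_hi)
        else:                                # (near-)coalescence: eps >= eps_star
            eps_hi = eps
            eps = 0.5 * (eps_lo + eps_hi)
        if eps_hi - eps_lo < tol:
            break
    return eps_hi, dG, dK
```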

6 Numerical Experiments

We consider here some illustrative examples with M = I, taken from [2, 10, 13]. In the following, ε_u is chosen as the distance between the largest and the smallest eigenvalue, whereas ε_0 = 0 and ε_1 is obtained by (23).

6.1 Example 1

Let G = ⋯ and K = ⋯. In this example, the stiffness matrix is positive definite, and the distance to singularity is ε⋆ = 3, which coincides with the distance to instability, in accordance with Remark 1.

6.2 Example 2

Let us consider the equation of motion \(M\ddot x(t) + G \dot x(t) + K x(t) = 0\), with:

(24)

Here, the two closest eigenvalues of the system are the complex conjugate pair with θ_1 = −θ_2 = 2.1213e−02, and coalescence occurs at the origin, with ε⋆ = 4.6605e−01. Figure 2 illustrates these results. On the left, a zoom-in of the eigenvalues of system (24) near the origin is shown. In the center, coalescence occurs for the perturbed system \(M\ddot x(t) + (G+ \varepsilon ^\star \varDelta G )\dot x(t) + (K+\varepsilon ^\star \varDelta K) x(t) = 0\). On the right, for ε > ε⋆, the two eigenvalues become real after the strong interaction, and the positive one leads the system to instability.

Fig. 2

A zoom-in, before, during, and after strong interaction for system (24)

6.3 Example 3

This problem arises in the vibration analysis of a wiresaw. Let n be the dimension of the matrices, and let

$$\displaystyle \begin{aligned} M= I_n/2, \quad K= \text{diag}_{1\leq j\leq n} (j^2\pi^2 (1-v^2)/2)\end{aligned} $$

and G = (g_{jk}), where \(g_{jk}= \frac {4jk}{j^2-k^2}v\) if j + k is odd, and g_{jk} = 0 otherwise.

The parameter v is a nonnegative real number representing the speed of the wire. For v ∈ (0, 1), the stiffness matrix is positive definite. Here, we present two cases with v > 1, in which K is negative definite.
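The wiresaw matrices are easy to reproduce; a short sketch of ours (1-based indices j, k as in the formulas above):

```python
import numpy as np

def wiresaw(n, v):
    """Wiresaw model: M = I/2, K = diag(j^2 pi^2 (1-v^2)/2), G skew-symmetric."""
    M = np.eye(n) / 2
    j = np.arange(1, n + 1)
    K = np.diag(j**2 * np.pi**2 * (1 - v**2) / 2)
    G = np.zeros((n, n))
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            if (a + b) % 2 == 1:              # g_jk nonzero only for j + k odd
                G[a - 1, b - 1] = 4 * a * b * v / (a**2 - b**2)
    return M, G, K

M, G, K = wiresaw(4, 1.1)   # the first test case below: n = 4, v = 1.1
```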

First, consider n = 4 and v = 1.1. Then, the system is marginally stable, and the distance to instability is ε⋆ = 4.6739e−02. The eigenvalues iθ_1 = 3.4653i and iθ_2 = 2.5859i coalesce, as do their respective conjugates.