1 Introduction

Noncooperative differential games with linear dynamics and quadratic costs have been widely studied in the recent literature [2, 11–14, 16, 19]. The existence of explicit solutions, determined by a finite set of algebraic equations, makes them a very attractive modeling tool. They have been used in a variety of applications, particularly in economics and management science.

In most of these applications, the underlying dynamical equations are nonlinear and the cost functions are not exactly quadratic polynomials. Prices, interest rates, inventory levels, etc., assume only positive values, while in a linear-quadratic game all functions must be defined on the whole real line.

In spite of these discrepancies, linear-quadratic games can be regarded as a natural approximation to more realistic models. In an attempt to justify this approximation, one is led to the following

Question

Starting from a linear-quadratic differential game for two players, suppose that some small, nonlinear perturbations are added to the dynamics of the system and to the cost functions. Does the perturbed game still have a Nash equilibrium solution in feedback form, close to the original one?

For noncooperative differential games on a finite time interval [0, T], the analysis in [5, 9] shows that the answer is largely negative. Indeed, the value functions for the two players are determined by a system of two Hamilton–Jacobi equations, with given terminal conditions at \(t=T\). When the state space has dimension \(n=1\), in some cases this leads to a hyperbolic system, which has unique solutions [8]. However, in dimension \(n\ge 2\) the system is not hyperbolic and the backward Cauchy problem is generically ill posed. A small nonlinear perturbation of the terminal costs may produce a large change in the solution.

The aim of the present paper is to prove some positive results, in the case of differential games for two players on an infinite time horizon, with exponentially discounted costs.

Let \(u_1,u_2\) be the controls implemented by the two players, and assume that the state of the system \(x\in I\!\!R^n\) evolves according to

$$\begin{aligned} \dot{x} =f(x, u_1,u_2),\quad u_1(t)\in U_1,\quad u_2(t)\in U_2. \end{aligned}$$
(1.1)

Moreover, let

$$\begin{aligned} J_i=\int _0^\infty e^{-\gamma t}\,\phi _i\bigl (x(t), u_1(t), u_2(t)\bigr )\, \mathrm{d}t \quad i=1,2, \end{aligned}$$
(1.2)

be the exponentially discounted costs. Here and in the sequel, an upper dot denotes a derivative w.r.t. time, while the constant \(\gamma >0\) is a discount rate.

Since the dynamics and the payoffs do not depend explicitly on time, it is natural to seek a Nash equilibrium solution consisting of time-independent feedbacks.

Definition 1

A pair of functions \(x\mapsto u_1^*(x)\in U_1\), \(x\mapsto u_2^*(x)\in U_2\) will be called a Nash equilibrium solution to the noncooperative game (1.1)–(1.2) provided that:

  1. (i)

    The map \(u_1^*(\cdot )\) is an optimal feedback, in connection with the optimal control problem for the first player:

    $$\begin{aligned} \hbox {minimize}\int _0^\infty e^{-\gamma t}\,\phi _1 \bigl (x(t), u_1(t), u_2^*(x(t))\bigr )\, \mathrm{d}t \end{aligned}$$

    subject to

    $$\begin{aligned} \dot{x} =f(x, u_1,u_2^*(x)),\quad u_1(t)\in U_1. \end{aligned}$$
  2. (ii)

    The map \(u_2^*(\cdot )\) is an optimal feedback, in connection with the optimal control problem for the second player:

    $$\begin{aligned} \hbox {minimize}\int _0^\infty e^{-\gamma t}\,\phi _2 \bigl (x(t), u_1^*(x(t)), u_2\bigr )\, \mathrm{d}t \end{aligned}$$

    subject to

    $$\begin{aligned} \dot{x} =f(x, u_1^*(x),u_2),\quad u_2(t)\in U_2. \end{aligned}$$

This equilibrium solution can be found by solving the corresponding PDEs for the value functions. Assuming that the feedbacks \(u_1^*, u_2^*\) implemented by the two players are sufficiently regular, let \(t\mapsto x(t, x_0)\) be the trajectory of the system starting from an initial state \(x_0\). This is obtained by solving the Cauchy problem

$$\begin{aligned} \dot{x} =f(x,\,u_1^*(x), \, u_2^*(x)),\quad x(0)=x_0. \end{aligned}$$
(1.3)

The value functions for the two players are then computed by

$$\begin{aligned} V_i(x_0)\doteq \int _0^\infty e^{-\gamma t}\,\phi _i \Big (x(t, x_0), \,u_1^*(x(t, x_0)),\,u_2^*(x(t, x_0))\Big )\, \mathrm{d}t,\quad i=1,2. \end{aligned}$$
(1.4)

In the following we assume:

  • (H) For any \(x\in I\!\!R^n\) and every pair of vectors \((\xi _1, \xi _2)\in I\!\!R^n\times I\!\!R^n\), there exists a unique pair \((u^\sharp _1,\, u^\sharp _2)\in U_1\times U_2\) such that

    $$\begin{aligned} u^\sharp _1(x, \xi _1, \xi _2)= & {} \hbox {arg}\!\min _{\omega \in U_1} \Big \{ \xi _1\cdot f(x, \omega , u^\sharp _2) + \phi _1( x, \omega , u^\sharp _2)\Big \}, \end{aligned}$$
    (1.5)
    $$\begin{aligned} u^\sharp _2(x, \xi _1, \xi _2)= & {} \hbox {arg}\!\min _{\omega \in U_2} \Big \{ \xi _2\cdot f(x, \, u^\sharp _1, \omega ) + \phi _2(x, u^\sharp _1,\, \omega )\Big \}. \end{aligned}$$
    (1.6)

On an open region where \(V_1,V_2\) are continuously differentiable, a dynamic programming argument (see for example [1, 6, 15]) shows that these value functions satisfy a system of Hamilton–Jacobi equations:

$$\begin{aligned} \left\{ \begin{array}{rl} \gamma V_1 &{}=H^{(1)}(x,\nabla V_1, \nabla V_2)\,,\\ \gamma V_2 &{}=H^{(2)}(x,\nabla V_1, \nabla V_2)\,,\\ \end{array} \right. \end{aligned}$$
(1.7)

where, for \(i=1,2\),

$$\begin{aligned} H^{(i)}(x,\xi _1, \xi _2) \doteq \xi _i\cdot f\Big (x,\, u_1^\sharp (x, \xi _1,\xi _2),\, u_2^\sharp (x, \xi _1,\xi _2)\Big ) +\phi _i\Big (x,\, u_1^\sharp (x, \xi _1,\xi _2),\, u_2^\sharp (x, \xi _1,\xi _2)\Big ). \end{aligned}$$

If a solution to system (1.7) can be found, then the Nash equilibrium feedback controls are computed by

$$\begin{aligned} u_i^*(x) =u_i^\sharp (x, \, \nabla V_1(x),\, \nabla V_2(x)), \quad i=1,2. \end{aligned}$$
(1.8)

In general, system (1.7) is highly nonlinear and difficult to solve (globally on the entire space \(I\!\!R^n\)). Even in one space dimension, examples are known where this system admits infinitely many solutions, or no solution at all [7, 17]. In the case of linear dynamics

$$\begin{aligned} \dot{x} ~=~Ax+B_1 u_1 + B_2 u_2, \end{aligned}$$
(1.9)

and quadratic cost functionals

$$\begin{aligned} J_i=\int _0^\infty e^{-\gamma t}\,\Big (\mathbf{a}^T x(t)+x^T(t)R_ i x(t) + u^T_i(t) S_i u_i(t)\Big )\, \mathrm{d}t, \quad i=1,2, \end{aligned}$$
(1.10)

a particular solution to the system of PDEs (1.7) can be found in terms of quadratic polynomials. Namely

$$\begin{aligned} V_i(x)~ =~ {1\over 2} x^T M_i x+ \mathbf{n}^T _i x +e_i\,,\qquad \qquad i=1,2\,, \end{aligned}$$

where the superscript \(^T\) denotes transposition. To determine this particular solution, it suffices to solve a system of algebraic equations for the coefficients of the matrices \(M_1,M_2\), the vectors \(\mathbf{n}_1,\mathbf{n}_2\), and the scalars \(e_1,e_2\).

Our present goal is to understand whether this solution is stable w.r.t. small nonlinear perturbations of the dynamics and of the cost functionals. This issue can be studied in two separate settings: either on a bounded domain or on the entire space \(I\!\!R^n\).

  1. (I)

    Let \(u^*_1, u^*_2\) be the affine feedback controls in a Nash equilibrium for the linear-quadratic game (1.9)–(1.10), and consider any bounded domain \(\Omega \) which is positively invariant for the corresponding dynamical system

    $$\begin{aligned} \dot{x}=Ax + B_1 u^*_1(x) + B_2u_2^*(x). \end{aligned}$$
    (1.11)

    We recall that \(\Omega \) is positively invariant if, for every solution \(x(\cdot )\) of (1.11), one has the implication

    $$\begin{aligned} x(0)\in \Omega \quad \implies \quad x(t)\in \Omega \quad \hbox {for all}~t\ge 0. \end{aligned}$$

    In this setting a natural question is whether, for a suitably small perturbation of the dynamics and of the cost functions, the system of H–J equations (1.7) admits a solution defined on \(\Omega \), close to the original one. The uniqueness of this solution is also an important issue, both for the linear-quadratic game and for the nonlinear perturbation.

  2. (II)

    A further question is whether a solution to the perturbed game can be constructed on the entire space \(I\!\!R^n\), under suitable assumptions on how the perturbation grows as \(|x|\rightarrow \infty \).

We remark that, for many applications, the models are meaningful only on a proper subset of \(I\!\!R^n\). For example, state variables should often satisfy nonnegativity constraints. It is thus natural to construct solutions to the system of H–J equations (1.7) on a bounded domain \(\Omega \), as long as this domain is positively invariant for the corresponding dynamics. In this case, the feedback controls \(u_i^*{:}\Omega \mapsto I\!\!R^m\) will still provide a Nash equilibrium for a game where the state is required to remain inside \(\Omega \), assuming that the cost functions \(\phi _i\) in (1.2) satisfy

$$\begin{aligned} \phi _i(x)~=~+\infty \qquad \qquad \hbox {if}\quad x\notin \Omega . \end{aligned}$$

In this paper we confine the analysis to the one-dimensional case, under generic assumptions (i.e., valid as the coefficients range on an open dense set). Denoting by \(\xi _i=V_i'\in I\!\!R\), \(i=1,2\), the gradients of the value functions, the Hamilton–Jacobi equations (1.7) can be reduced to a system of two implicit ODEs, of the form

$$\begin{aligned} \Lambda (x,\xi _1,\xi _2) \begin{pmatrix} \xi '_1\\ \xi '_2 \end{pmatrix} = \begin{pmatrix} \psi _1(x,\xi _1,\xi _2)\\ \psi _2(x,\xi _1,\xi _2) \end{pmatrix}, \end{aligned}$$
(1.12)

where \(\Lambda \) is a suitable \(2\times 2\) matrix.

Fig. 1

The position of the line \(\xi =\xi ^*(x)\), relative to the double cone \(\Gamma ^-\) where \(\det \Lambda (x,\xi _1,\xi _2)\le 0\). Left: the line always remains outside \(\Gamma ^-\). Center: the line is inside \(\Gamma ^-\) for x in some bounded interval \([a,b]\). Right: the line is inside \(\Gamma ^-\) for x outside a bounded interval \(\,]a,b[\,\). Notice that trajectories of the implicit ODE (1.12) typically become singular as they approach the boundary of \(\Gamma ^-\) and cannot be prolonged any further (left figure)

In the linear-quadratic case, one can find a solution \((\xi _1^*, \xi _2^*)\) of (1.12), where each \(\xi _i^*(x)=\alpha _i x+\beta _i \) is an affine function. Moreover, the determinant \(\det \Lambda (x,\xi _1,\xi _2)\) is a homogeneous quadratic polynomial of the three variables \(x, \xi _1,\xi _2\). It vanishes on the surface of a double cone \(\Gamma ^-\), shown in Fig. 1. Looking at the relative position of the line \(\xi ^*=\bigl \{(x,\xi _1^*(x),\xi _2^*(x))\,;~~x\in I\!\!R\bigr \}\) w.r.t. the cone \(\Gamma ^-\), under generic conditions two main cases can occur.

Fig. 2

If \(\det \Lambda (x, \xi _1^*(x), \xi _2^*(x))\not =0\) for all \(x\in I\!\!R\), then the linear-quadratic game has infinitely many solutions. In one of these solutions the feedbacks \(u_1^*, u_2^*\) are affine, while in all the others they are fully nonlinear functions. All these solutions are stable under small nonlinear perturbations of the dynamics and of the cost functions

Fig. 3

A case where \(\det \Lambda (x, \xi _1^*(x), \xi _2^*(x))\) vanishes at two points \(\bar{x}_1<\bar{x}_2\) and the linear-quadratic game has a unique solution, which is stable under small nonlinear perturbations of the dynamics and of the cost functions. The graph of \(\xi ^*(\cdot )\) contains a heteroclinic orbit of a related dynamical system

Case 1: The line \(\xi ^*\) lies entirely outside the cone (Fig. 1, left). As a consequence, the matrix \(\Lambda (x, \xi _1, \xi _2)\) is invertible in a neighborhood of \(\xi ^*\). The system of ODEs (1.12) can thus be written in the standard form

$$\begin{aligned} \begin{pmatrix} \xi '_1\\ \xi '_2 \end{pmatrix} = \Lambda ^{-1}(x,\xi _1,\xi _2) \begin{pmatrix} \psi _1(x,\xi _1,\xi _2)\\ \psi _2(x,\xi _1,\xi _2) \end{pmatrix} \,. \end{aligned}$$
(1.13)

In this case, by solving (1.13) with slightly different initial data at \(x=0\), one can construct infinitely many Nash equilibrium solutions in feedback form for the same linear-quadratic game (Fig. 2). These solutions vary continuously under small nonlinear perturbations of the data.

Case 2: The line \(\xi ^*\) is partly inside, partly outside the cone \(\Gamma ^-\) (Fig. 1, center and right), crossing the surface of the cone at two points \(P_i^* = (\bar{x}_i, \xi _1^*(\bar{x}_i), \xi _2^*(\bar{x}_i))\), \(i=1,2\), with \(\bar{x}_1<\bar{x}_2\). In this case, the equation \(\det \Lambda (x,\xi _1,\xi _2)=0\) locally defines two surfaces \(\Sigma _1\), \(\Sigma _2\), near the points \(P_1^*\), \(P_2^*\), respectively (see Fig. 3). Still for the linear-quadratic game, a detailed analysis shows that a smooth solution of (1.12) can cross the surface \(\Sigma _1\) only along a special curve \(\gamma _1\subset \Sigma _1\). By the same argument, it can cross the surface \(\Sigma _2\) only along a curve \(\gamma _2\subset \Sigma _2\).

In general, the graph of a solution to (1.12) can be constructed as a concatenation of orbits of a related dynamical system on \(I\!\!R^3\), where all points on the curves \(\gamma _1\) and \(\gamma _2\) are steady states. Smooth solutions correspond to heteroclinic orbits, connecting a point on \(\gamma _1\) to a point on \(\gamma _2\). Depending on the properties of the two equilibrium points \(P_1^*\), \(P_2^*\) (source, saddle, sink), there can be a single orbit, or else a 1- or a 2-parameter family of heteroclinic orbits. In a typical case, shown in Fig. 3, the points \(P_1^*\in \gamma _1\) and \(P_2^*\in \gamma _2\) are both saddle points. The 2-dimensional center-stable manifold \({\mathcal M}_1\) through \(P_1^*\) and the 2-dimensional center-unstable manifold \({\mathcal M}_2\) through \(P_2^*\) are transversal. Their intersection is the unique heteroclinic orbit joining \(\gamma _1\) with \(\gamma _2\).

Under generic assumptions, all of the above configurations are structurally stable. Namely, the curves \(\gamma _i\) and the manifolds \({\mathcal M}_i\) are still well defined in the presence of a small nonlinear perturbation. The family of heteroclinic orbits is preserved as well.

The remainder of the paper is organized as follows. In Sect. 2 we recall the system of Hamilton–Jacobi equations for a noncooperative 2-player game in infinite time horizon, while in Sect. 3 we review the construction of affine feedback solutions for the linear-quadratic case. Solutions of (1.12) on a bounded interval are studied in Sect. 4. The main tool in the analysis is a representation of their graph as a concatenation of orbits, for a related dynamical system. The case of a heteroclinic orbit joining two saddle points, illustrated in Fig. 3, is worked out in detail in Sect. 5. This is the most pleasing case, where the linear-quadratic game has a unique feedback solution, which is stable under small nonlinear perturbations. In Sect. 6 we prove that, under suitable growth conditions, feedback solutions to the perturbed nonlinear game can be extended to the entire real line. Up to this section, all of our analysis is concerned with the system of ODEs (1.12) for the derivatives of the value functions. Finally, in Sect. 7 we show that, by constructing a solution to (1.12), one can recover the value functions \(V_1,V_2\) in (1.7) and provide a Nash equilibrium solution to the noncooperative differential game.

Remark 1

All of our results hold under “generic” assumptions on the coefficients \(a_0, b_i, R_i,S_i\) of the linear-quadratic game (3.2)–(3.3). In other words, they are valid as these coefficients range over an open dense set. In some cases, this set can be precisely determined. In general, it is described by a finite number of inequalities:

$$\begin{aligned} \Phi _j(a_0, b_1,b_2, R_1,R_2,S_1,S_2)\not =0,\quad j=1,\ldots ,N, \end{aligned}$$

where the \(\Phi _j\) are analytic functions. Having constructed examples where these inequalities do hold, by analyticity we conclude that they must hold on an open dense set.

In the earlier literature, a few examples of feedback equilibria for nonlinear infinite horizon differential games have been studied in [4, 7, 17].

2 The Basic Setting

Throughout the following, we consider a system whose dynamics is linear w.r.t. the controls \(u_1, u_2\). The state \(x\in I\!\!R\) thus evolves according to

$$\begin{aligned} \dot{x}=f(x) + g_1(x)u_1 + g_2(x) u_2,\quad u_1(t), \,u_2(t)\in I\!\!R. \end{aligned}$$
(2.1)

Here \(f,g_1,g_2{:}I\!\!R\mapsto I\!\!R\) are smooth maps with sublinear growth:

$$\begin{aligned} |f(x)|+|g_1(x)|+|g_2(x)|\le C\bigl (1+|x|\bigr ), \end{aligned}$$
(2.2)

for some constant C. The cost functionals for the two players are

$$\begin{aligned} J_i=\int _0^\infty e^{-\gamma t} \bigg [ \varphi _i(x(t)) + {u_i^2(t)\over 2} \bigg ]\, \mathrm{d}t. \end{aligned}$$
(2.3)

Call \(V_1,V_2\) the value functions in a Nash equilibrium in feedback form. By the previous analysis these functions satisfy the system of Hamilton–Jacobi equations (1.7), where

$$\begin{aligned} H^{(i)}(x,\xi _1, \xi _2) \doteq \xi _i\, \Big [f(x) + g_1(x) u^\sharp _1 + g_2(x) u_2^\sharp \Big ] + \varphi _i(x) + {(u_i^\sharp )^2\over 2},\qquad i=1,2. \end{aligned}$$
(2.4)

The optimal feedback controls are given by the formulas

$$\begin{aligned} \begin{array}{rl} u^\sharp _1(x, \xi _1,\xi _2)&{}\displaystyle =\hbox {arg}\!\min _\omega \left\{ \xi _1\,g_1(x) \,\omega +{\omega ^2\over 2}\right\} =-\xi _1\, g_1(x),\\ u^\sharp _2(x, \xi _1,\xi _2)&{}\displaystyle =~\hbox {arg}\!\min _\omega ~\left\{ \xi _2\, g_2(x)\, \omega + {\omega ^2\over 2}\right\} =-\xi _2\, g_2(x). \end{array} \end{aligned}$$
(2.5)
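Since each map \(\omega \mapsto \xi _i\, g_i(x)\,\omega +\omega ^2/2\) is strictly convex, the closed-form minimizers in (2.5) can be checked directly. A minimal numerical sketch (the sample values of \(\xi _i\) and \(g_i(x)\) are purely illustrative, not taken from the text):

```python
def u_sharp(xi, g):
    """Closed-form minimizer of  w -> xi*g*w + w**2/2,  as in (2.5)."""
    return -xi * g

# Illustrative sample values for (xi_i, g_i(x)).
for xi, g in [(0.7, 1.3), (-2.0, 0.5), (1.5, -0.8)]:
    cost = lambda w, xi=xi, g=g: xi * g * w + 0.5 * w ** 2
    w_star = u_sharp(xi, g)
    # brute-force comparison on a fine grid around the candidate minimizer
    grid = [w_star + 0.01 * k for k in range(-500, 501)]
    assert all(cost(w_star) <= cost(w) + 1e-12 for w in grid)
```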

Inserting (2.5) in (2.4), from (1.7) one obtains the implicit system of ODEs

$$\begin{aligned} \left\{ \begin{array}{rl} \gamma V_1&{}=\Big [f(x) - {1\over 2} g_1^2(x) V_1'- g_2^2(x) V_2'\Big ]V_1' + \varphi _1 ,\\ \gamma V_2&{}=\Big [f(x) - g_1^2(x)V_1' - {1\over 2} g_2^2(x) V_2'\Big ]V_2' + \varphi _2 \,. \end{array}\right. \end{aligned}$$
(2.6)

Differentiating (2.6) w.r.t. x we obtain a system of two ODEs for the derivatives of the value functions: \(\xi _1= V_1'\), \(\xi _2= V_2'\). Namely

$$\begin{aligned} \Lambda (x,\xi _1,\xi _2) \begin{pmatrix} \xi _1'\\ \\ \xi _2' \end{pmatrix} = \begin{pmatrix} \psi _1(x,\xi _1,\xi _2)\\ \\ \psi _2(x,\xi _1,\xi _2) \end{pmatrix}, \end{aligned}$$
(2.7)

where

$$\begin{aligned} \Lambda (x,\xi _1,\xi _2)= & {} \begin{pmatrix}\Lambda _{11}&{}\Lambda _{12}\\ \Lambda _{21}&{}\Lambda _{22} \end{pmatrix}= \begin{pmatrix} f-g_1^2\xi _1 -g_2^2 \xi _2 &{}&{} -g_2^2 \xi _1\\ \\ -g_1^2\xi _2 &{}&{} f- g_1^2 \xi _1 - g_2^2 \xi _2 \end{pmatrix}, \end{aligned}$$
(2.8)
$$\begin{aligned} \begin{pmatrix}\psi _1(x,\xi _1,\xi _2)\\ \\ \psi _2(x,\xi _1,\xi _2)\end{pmatrix}= & {} \begin{pmatrix} (\gamma -f') \xi _1 + g_1g_1' \xi _1^2 + 2 g_2 g_2' \xi _1\xi _2-\varphi _1' \\ \\ (\gamma -f')\xi _2 + g_2g_2' \xi _2^2 + 2 g_1 g_1' \xi _1\xi _2-\varphi _2' \end{pmatrix}. \end{aligned}$$
(2.9)

When both players adopt the equilibrium feedback strategies (2.5), by (2.1) the system evolves according to

$$\begin{aligned} \dot{x}~=~f-g_1^2\xi _1 - g_2^2 \xi _2\,. \end{aligned}$$
(2.10)

3 Review of the Linear-Quadratic Case

In this section we review the construction of feedback equilibrium solutions for the linear-quadratic case. Assume that in (2.1) and (2.3) one has

$$\begin{aligned} f(x)=a_0x,\quad g_i(x)=b_i,\quad \varphi _i(x)= R_ i x+{1\over 2} S_i x^2, \quad i=1,2, \end{aligned}$$
(3.1)

for some constants \(a_0, b_i, R_ i, S_i\). The dynamics and cost functionals thus become

$$\begin{aligned} \dot{x}= & {} a_0 x+ b_1 u_1 + b_2 u_2, \end{aligned}$$
(3.2)
$$\begin{aligned} J_i= & {} \int _0^\infty e^{-\gamma t} \left[ R_ i x+{1\over 2} S_i x^2+{u_i^2\over 2} \right] \, \mathrm{d}t. \end{aligned}$$
(3.3)

Notice that, if \(a_0\not = 0\), the more general case where \(f(x) = a_0 x + b_0\) can be reduced to (3.2) by a translation of coordinates. The following assumptions will be used throughout.

  1. (A1)

    The coefficients \(S_1, S_2, \gamma \) are strictly positive. Moreover, one of the following alternatives holds.

    1. Case 1:

      \(a_0>{\gamma \over 2}>0\) and \(S_1b_1^2+ S_2 b_2^2~>~ (2a_0-\gamma )^2/2\).

    2. Case 2:

      \(a_0<0\).

In the linear-quadratic case, the matrix \(\Lambda \) and the right-hand side \(\psi \) in the system of ODEs (2.7) take the special form

$$\begin{aligned} \Lambda \bigl (x,\xi _1,\xi _2\bigr )= & {} \begin{pmatrix} \Lambda _{11}&{}\Lambda _{12}\\ \Lambda _{21}&{}\Lambda _{22} \end{pmatrix}= \begin{pmatrix} a_0x- b_1^2\xi _1 -b_2^2 \xi _2 &{}&{} -b_2^2 \xi _1\\ \\ -b_1^2\xi _2 &{}&{} a_0x-b_1^2\xi _1 -b_2^2 \xi _2 \end{pmatrix}, \end{aligned}$$
(3.4)
$$\begin{aligned} \begin{pmatrix} \psi _1\bigl (x,\xi _1,\xi _2\bigr )\\ \\ \psi _2 \bigl (x,\xi _1,\xi _2 \bigr ) \end{pmatrix}= & {} \begin{pmatrix} (\gamma -a_0) \xi _1 -R_1- S_1 x \\ \\ (\gamma -a_0) \xi _2 -R_2-S_2 x \end{pmatrix}. \end{aligned}$$
(3.5)

Under the assumption (A1), one can construct an explicit solution where \(\xi _1,\xi _2\) are affine functions, namely

$$\begin{aligned} \xi _1^*(x)=\alpha _1x+\beta _1,\quad \xi _2^* (x)=\alpha _2x+\beta _2. \end{aligned}$$
(3.6)

Indeed, inserting (3.4)–(3.6) in (2.7) one obtains

$$\begin{aligned} \left\{ \begin{array}{l}\Big (a_0x- b_1^2(\alpha _1 x+\beta _1) -b_2^2 (\alpha _2 x+\beta _2)\Big ) \alpha _1 - b_2^2 (\alpha _1 x+\beta _1)\alpha _2\\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad =(\gamma -a_0)(\alpha _1 x+\beta _1) -(R_1+S_1 x)\,,\\ \Big (a_0x- b_1^2(\alpha _1 x+\beta _1) -b_2^2 (\alpha _2 x+\beta _2)\Big ) \alpha _2 - b_1^2 (\alpha _2 x+\beta _2)\alpha _1\\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad =(\gamma -a_0)(\alpha _2 x+\beta _2) -(R_2+S_2 x). \end{array} \right. \end{aligned}$$

Since these equalities must hold for every x, we obtain a system of four quadratic equations for the unknowns \(\alpha _1,\alpha _2,\beta _1,\beta _2\).

$$\begin{aligned} \left\{ \begin{array}{ll} b_1^2\alpha _1^2 + 2b_2^2\alpha _1\alpha _2 +(\gamma -2a_0) \alpha _1= S_1,\\ b_2^2\alpha _2^2 + 2b_1^2\alpha _1\alpha _2 +(\gamma -2a_0) \alpha _2= S_2,\\ (b_1^2\alpha _1+b_2^2\alpha _2 -a_0+\gamma )\beta _1 + b_2^2 \alpha _1\beta _2= R_1,\\ (b_1^2\alpha _1+b_2^2\alpha _2 -a_0+\gamma )\beta _2 + b_1^2 \alpha _2\beta _1= R_2. \end{array} \right. \end{aligned}$$
(3.7)

To solve (3.7), it is convenient to introduce the variables

$$\begin{aligned} A\doteq 2a_0-\gamma , \quad X_i\doteq {b_i^2\over A}\,\alpha _i, \quad K_i\doteq {S_ib_i^2\over A^2}, \quad i=1,2. \end{aligned}$$
(3.8)

The first two equations in (3.7) can then be written as

$$\begin{aligned} \left\{ \begin{array}{ll} X_1^2 + 2X_1X_2 -X_1= K_1,\\ X_2^2+2X_1X_2-X_2= K_2. \end{array} \right. \end{aligned}$$
(3.9)

Equivalently,

$$\begin{aligned} \left\{ \begin{array}{ll} (X_1+X_2-1)(X_1-X_2)= K_1-K_2,\\ 3(X_1+X_2)^2-(X_1-X_2)^2-2(X_1+X_2)= 2(K_1+K_2). \end{array} \right. \end{aligned}$$
(3.10)

Introducing the further variables

$$\begin{aligned} Y_1\doteq X_1+X_2-1,\quad Y_2\doteq X_1-X_2, \end{aligned}$$

we obtain

$$\begin{aligned} \left\{ \begin{array}{ll} Y_1Y_2= K_1-K_2,\\ 3Y_1^2+4Y_1-Y_2^2= 2(K_1+K_2)-1. \end{array} \right. \end{aligned}$$
(3.11)
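The change of variables leading from (3.9) to (3.11) is easy to sanity-check numerically: pick arbitrary values \(X_1,X_2\), define \(K_1,K_2\) so that (3.9) holds by construction, and verify both equations of (3.11). A quick, purely illustrative sketch:

```python
import random

random.seed(0)
for _ in range(100):
    X1, X2 = random.uniform(-3.0, 3.0), random.uniform(-3.0, 3.0)
    # define K1, K2 so that (3.9) holds by construction
    K1 = X1**2 + 2*X1*X2 - X1
    K2 = X2**2 + 2*X1*X2 - X2
    Y1, Y2 = X1 + X2 - 1, X1 - X2
    # the two equations in (3.11)
    assert abs(Y1*Y2 - (K1 - K2)) < 1e-9
    assert abs(3*Y1**2 + 4*Y1 - Y2**2 - (2*(K1 + K2) - 1)) < 1e-9
```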

By the first equation, \(Y_2= (K_1-K_2)/Y_1\). Inserting this expression in the second equation, we find

$$\begin{aligned} G(Y_1)\doteq 3Y_1^4+4Y_1^3+\bigl (1-2(K_1+K_2)\bigr )Y_1^2=(K_1-K_2)^2. \end{aligned}$$
(3.12)

By the assumption (A1) and the definitions (3.8), two cases can arise.

Case 1: \(a_0>{\gamma \over 2}>0\) and \(K_1+K_2> 1/2\). This implies

$$\begin{aligned} G(0)=G'(0)=0, \quad G''(0)=2 - 4(K_1+K_2)<0,\quad \lim _{s\rightarrow +\infty } G(s)=+\infty . \end{aligned}$$
(3.13)

Hence Eq. (3.12) has a solution \(Y_1^* > 0\).

Case 2: \(a_0<0\). This implies

$$\begin{aligned} G(-1)=-2(K_1+K_2)\le 0,\quad \lim _{s\rightarrow -\infty } G(s)=+\infty . \end{aligned}$$
(3.14)

Hence Eq. (3.12) has a solution \(Y_1^*< -1\).

In both cases, reverting to the original variables we find

$$\begin{aligned} \alpha _1={A\over 2b_1^2}\left[ Y_1^*+1 + {K_1-K_2\over Y_1^*}\right] , \quad \alpha _2={A\over 2b_2^2}\left[ Y_1^*+1 - {K_1-K_2\over Y_1^*}\right] . \end{aligned}$$
(3.15)
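In both cases the root \(Y_1^*\) of (3.12) can be computed by bisection, after which (3.15) yields \(\alpha _1,\alpha _2\). The sketch below runs this procedure on illustrative coefficients for Case 1 and Case 2 (all numerical values are sample choices, not from the text), and checks the first two equations of (3.7):

```python
def solve_alphas(a0, gamma, b1, b2, S1, S2, lo, hi):
    """Bisection root of G in (3.12) on [lo, hi], then alpha_i from (3.15)."""
    A = 2*a0 - gamma
    K1, K2 = S1*b1**2/A**2, S2*b2**2/A**2
    G = lambda y: 3*y**4 + 4*y**3 + (1 - 2*(K1 + K2))*y**2 - (K1 - K2)**2
    assert G(lo) * G(hi) < 0          # sign change: a root lies in [lo, hi]
    for _ in range(200):
        mid = 0.5*(lo + hi)
        lo, hi = (lo, mid) if G(lo)*G(mid) <= 0 else (mid, hi)
    Y1 = 0.5*(lo + hi)
    a1 = A/(2*b1**2) * (Y1 + 1 + (K1 - K2)/Y1)
    a2 = A/(2*b2**2) * (Y1 + 1 - (K1 - K2)/Y1)
    return a1, a2

def residual_37(a0, gamma, b1, b2, S1, S2, a1, a2):
    """Residual of the first two quadratic equations in (3.7)."""
    r1 = b1**2*a1**2 + 2*b2**2*a1*a2 + (gamma - 2*a0)*a1 - S1
    r2 = b2**2*a2**2 + 2*b1**2*a1*a2 + (gamma - 2*a0)*a2 - S2
    return abs(r1) + abs(r2)

# Case 1 sample: a0 > gamma/2 > 0 and K1 + K2 > 1/2; root with Y1* > 0.
a1, a2 = solve_alphas(2.0, 1.0, 1.0, 1.0, 18.0, 4.5, lo=0.1, hi=10.0)
assert a1 > 0 and a2 > 0
assert residual_37(2.0, 1.0, 1.0, 1.0, 18.0, 4.5, a1, a2) < 1e-8

# Case 2 sample: a0 < 0; root with Y1* < -1.
a1, a2 = solve_alphas(-1.0, 1.0, 1.0, 1.0, 9.0, 9.0, lo=-10.0, hi=-1.0)
assert a1 > 0 and a2 > 0
assert residual_37(-1.0, 1.0, 1.0, 1.0, 9.0, 9.0, a1, a2) < 1e-8
```

In the symmetric Case 2 sample, \(K_1=K_2\) and the quartic factors, so the computed \(\alpha _i\) match the closed form \((\sqrt{13}-1)/2\).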

As soon as \(\alpha _1,\alpha _2\) have been determined, the last two equations in (3.7) provide a linear system for the variables \(\beta _1,\beta _2\). This can be uniquely solved under the condition

$$\begin{aligned} (b_1^2\alpha _1+b_2^2\alpha _2 -a_0+\gamma )^2-b_1^2\alpha _1b_2^2\alpha _2\not =0. \end{aligned}$$
(3.16)

We observe that, under the assumption (A1), this inequality always holds. Indeed, dividing by \(A^2\), (3.16) can be rewritten as

$$\begin{aligned} \Big (X_1+X_2-1+{a_0\over A}\Big )^2 - X_1X_2 \not = 0. \end{aligned}$$

Equivalently,

$$\begin{aligned} \Big (Y_1^*+{a_0\over A}\Big )^2-{1\over 4}(Y_1^*+1)^2+{1\over 4}(Y_2^*)^2 \not = 0. \end{aligned}$$
(3.17)

Two cases are considered:

  • If \(a_0>{\gamma \over 2}>0\) and \(K_1+K_2 > 1/2\) then

    $$\begin{aligned} Y_1^*> 0\quad \mathrm {and}\quad {a_0\over A} = {a_0\over 2a_0-\gamma } > {1\over 2}. \end{aligned}$$

    Thus

    $$\begin{aligned} Y_1^*+{a_0\over A}> {1\over 2}(Y_1^*+1) > 0. \end{aligned}$$
  • If \(a_0<0\) then

$$\begin{aligned} Y^*_1 < -1\quad \mathrm {and}\quad 0< {a_0\over A} = {a_0\over 2a_0-\gamma } < {1\over 2}. \end{aligned}$$

    Hence

    $$\begin{aligned} Y_1^*+{a_0\over A}< {1\over 2}(Y_1^*+1) < 0. \end{aligned}$$

Therefore,

$$\begin{aligned} \Big (Y_1^*+{a_0\over A}\Big )^2-{1\over 4}(Y_1^*+1)^2 > 0 \end{aligned}$$
(3.18)

in Case 1 as well as in Case 2. Hence (3.17) holds.

Having determined the constants \(\alpha _i, \beta _i\), the affine functions \(\xi _1^*,\xi _2^*\) in (3.6) yield a solution to the system (2.7). According to (2.5), the corresponding Nash equilibrium feedback controls are

$$\begin{aligned} u_1^*(x) = - b_1(\alpha _1 x+ \beta _1), \quad u_2^*(x) = -b_2(\alpha _2 x+ \beta _2). \end{aligned}$$
(3.19)
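Once \(\alpha _1,\alpha _2\) are known, \(\beta _1,\beta _2\) follow from the \(2\times 2\) linear system and the feedbacks (3.19) can be assembled. The sketch below carries out the whole computation on illustrative Case 1 coefficients (sample values only, not from the text), verifying all four equations of (3.7) together with the nondegeneracy condition (3.16):

```python
# Illustrative coefficients (sample values, not taken from the text).
a0, gamma, b1, b2, S1, S2, R1, R2 = 2.0, 1.0, 1.0, 1.0, 18.0, 4.5, 1.0, 1.0

A = 2*a0 - gamma
K1, K2 = S1*b1**2/A**2, S2*b2**2/A**2
G = lambda y: 3*y**4 + 4*y**3 + (1 - 2*(K1 + K2))*y**2 - (K1 - K2)**2

lo, hi = 0.1, 10.0                          # Case 1: bracket a root with Y1* > 0
for _ in range(200):
    mid = 0.5*(lo + hi)
    lo, hi = (lo, mid) if G(lo)*G(mid) <= 0 else (mid, hi)
Y1 = 0.5*(lo + hi)
al1 = A/(2*b1**2)*(Y1 + 1 + (K1 - K2)/Y1)   # (3.15)
al2 = A/(2*b2**2)*(Y1 + 1 - (K1 - K2)/Y1)

# Last two equations of (3.7): a 2x2 linear system for beta1, beta2.
c = b1**2*al1 + b2**2*al2 - a0 + gamma
den = c**2 - b1**2*al1*b2**2*al2            # the determinant in (3.16)
assert abs(den) > 1e-8
be1 = (c*R1 - b2**2*al1*R2)/den             # Cramer's rule
be2 = (c*R2 - b1**2*al2*R1)/den

# All four equations of (3.7) should now hold.
assert abs(b1**2*al1**2 + 2*b2**2*al1*al2 + (gamma - 2*a0)*al1 - S1) < 1e-8
assert abs(b2**2*al2**2 + 2*b1**2*al1*al2 + (gamma - 2*a0)*al2 - S2) < 1e-8
assert abs(c*be1 + b2**2*al1*be2 - R1) < 1e-8
assert abs(c*be2 + b1**2*al2*be1 - R2) < 1e-8

# Nash equilibrium feedbacks (3.19).
u1_star = lambda x: -b1*(al1*x + be1)
u2_star = lambda x: -b2*(al2*x + be2)
```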

In the following, given a solution \((\alpha _1,\alpha _2,\beta _1,\beta _2)\) of the algebraic system (3.7), recalling (3.8) we shall write

$$\begin{aligned} X_1^* = {b_1^2\over A}\alpha _1,\quad X_2^* = {b_2^2\over A}\alpha _2, \quad Y_1^* = X_1^*+X_2^*-1, \quad Y_2^* = X_1^*-X_2^*. \end{aligned}$$
(3.20)

Remark 2

When the Nash equilibrium feedbacks (3.19) are implemented, the system (3.2) evolves according to

$$\begin{aligned} \begin{array}{ll} \dot{x}&{}= a_0 x - b_1^2(\alpha _1 x + \beta _1)- b_2^2 (\alpha _2 x + \beta _2)\\ &{}= (a_0-b_1^2 \alpha _1 - b_2^2\alpha _2)\,x + ( - b_1^2\beta _1-b_2^2\beta _2). \end{array} \end{aligned}$$
(3.21)

Under the assumption (A1), if either \(a_0<0\) or \(0<\gamma \le a_0\), then the stationary point

$$\begin{aligned} x^* = { b_1^2\beta _1+b_2^2\beta _2 \over a_0 - b_1^2 \alpha _1- b_2^2\alpha _2} \end{aligned}$$
(3.22)

is an asymptotically stable equilibrium for the ODE (3.21). However, this equilibrium is unstable when \(a_0=\gamma /2\).

To prove the above claims, we examine various cases.

  • If \(a_0<0\), then \(A=2a_0-\gamma <0\), while \(X_1^*+X^*_2=Y_1^*+1< 0\). Therefore

    $$\begin{aligned} a_0 - b_1^2 \alpha _1- b_2^2\alpha _2 = a_0 -A(X_1^*+X^*_2) \le a_0 < 0. \end{aligned}$$
  • If \(0<\gamma \le a_0\) and \(K_1+K_2>1/2\), recalling that \(X_1^*+X^*_2=Y_1^*+1 > 1\) and \(A= 2a_0-\gamma >0\), we obtain

    $$\begin{aligned} a_0 - b_1^2 \alpha _1- b_2^2\alpha _2 = a_0 -A(X_1^*+X^*_2) < a_0-A = \gamma -a_0 \le 0. \end{aligned}$$
  • On the other hand, if \(a_0=\gamma /2>0\), then \(A=0\) and

    $$\begin{aligned} a_0 - b_1^2 \alpha _1- b_2^2\alpha _2 = a_0 -A(X_1^*+X^*_2) = a_0 > 0. \end{aligned}$$
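The stability claim can also be observed in a simulation of (3.21). In the symmetric Case 2 sketch below (all coefficients are sample values: \(a_0=-1\), \(\gamma =1\), \(b_i=1\), \(S_i=9\), \(R_i=1\)), the quartic (3.12) factors and \(\alpha _i,\beta _i\) admit closed forms; an explicit Euler scheme then shows every trajectory settling at the stationary point (3.22):

```python
import math

# Symmetric Case 2 sample: a0 = -1, gamma = 1, b1 = b2 = 1, S1 = S2 = 9, R1 = R2 = 1.
# Here K1 = K2, so (3.12) factors and Y1* = (-2 - sqrt(13))/3 < -1, which gives the
# closed forms alpha_i = (sqrt(13) - 1)/2 and beta_i = 1/(3*alpha + 2).
a0, gamma = -1.0, 1.0
alpha = (math.sqrt(13.0) - 1.0)/2.0
beta = 1.0/(3.0*alpha + 2.0)
assert abs(3*alpha**2 + 3*alpha - 9.0) < 1e-12   # first equation of (3.7)

m = a0 - 2.0*alpha      # drift coefficient in (3.21)
q = -2.0*beta           # constant term in (3.21)
x_star = -q/m           # stationary point (3.22)
assert m < 0            # asymptotic stability, as claimed above

# Explicit Euler simulation of x' = m*x + q from several initial states.
dt, steps = 0.001, 10000
for x0 in (-2.0, 0.0, 3.0):
    x = x0
    for _ in range(steps):
        x += dt*(m*x + q)
    assert abs(x - x_star) < 1e-3   # every trajectory settles at x*
```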

Remark 3

Under the assumption (A1) one has \(\alpha _1,\alpha _2>0\). Since \(V_i'=\xi _i^*(x)=\alpha _i x+\beta _i\), the value functions of both players approach \(+\infty \) as \(x\rightarrow \pm \infty \).

Indeed, from (3.9) it follows

$$\begin{aligned} X^*_1\cdot (X^*_1 + 2X^*_2 -1)> 0\quad \mathrm {and}\quad X^*_2\cdot (X^*_2+2X^*_1-1) > 0. \end{aligned}$$
(3.23)

Without loss of generality we can assume that \(X_1^*\ge X_2^*\). There are two cases:

  • If \(a_0>{\gamma \over 2}>0\) and \(K_1+K_2>1/2\) then \(A=2a_0-\gamma >0\) and \(X^*_1+X^*_2=Y_1^*+1>0\). This implies that \(X^*_1>0\) and

    $$\begin{aligned} X^*_2+2X^*_1-1 = X^*_1+Y_1^* > 0. \end{aligned}$$

    Recalling (3.23) we have \(X^*_2>0\). Thus

    $$\begin{aligned} \alpha _i = {AX^*_i\over b_i^2} > 0\qquad i=1,2. \end{aligned}$$
    (3.24)
  • If \(a_0<0\) then \(A=2a_0-\gamma <0\) and \(X^*_1+X^*_2=Y_1^*+1<0\). This implies that \(X^*_2<0\) and

    $$\begin{aligned} X^*_1+2X^*_2-1 = X^*_2+Y_1^* < 0. \end{aligned}$$

    Recalling (3.23) we have \(X^*_1<0\). Hence (3.24) again holds.

4 Perturbed Solutions on a Bounded Domain

Together with the linear-quadratic game (1.9)–(1.10), we now consider a perturbed game, where the dynamics and the cost functions have the form

$$\begin{aligned} \dot{x}= & {} (a_0 x + f_0(x))+ (b_1 +h_1(x))u_1 + (b_2+h_2(x)) u_2, \end{aligned}$$
(4.1)
$$\begin{aligned} J_i= & {} \int _0^{+\infty } e^{-\gamma t} \left( R_i x + {1\over 2} S_i x^2 + \eta _i(x) +{u_i^2\over 2}\right) \, \mathrm{d}t, \end{aligned}$$
(4.2)

for some perturbations \(f_0, h_1,h_2,\eta _1,\eta _2\). The gradients of the value functions \(\xi _i(x)= V_i'(x)\) will again satisfy the implicit system of ODEs

$$\begin{aligned} \Lambda (x,\xi _1,\xi _2) \begin{pmatrix} \xi _1'\\ \\ \xi _2' \end{pmatrix} = \begin{pmatrix} \psi _1(x,\xi _1,\xi _2)\\ \\ \psi _2(x,\xi _1,\xi _2) \end{pmatrix}, \end{aligned}$$
(4.3)

where now

$$\begin{aligned} \Lambda \bigl (x,\xi _1,\xi _2\bigr ) = \begin{pmatrix} \Lambda _{11}&{}\Lambda _{12}\\ \Lambda _{21}&{}\Lambda _{22} \end{pmatrix} \doteq \begin{pmatrix} a_0x+ f_0- (b_1+ h_1)^2\xi _1 -(b_2+ h_2)^2 \xi _2 &{}\quad -(b_2+ h_2)^2 \xi _1\\ \\ -(b_1+h_1)^2\xi _2 &{}\quad a_0x+f_0-(b_1+ h_1)^2\xi _1 -(b_2+ h_2)^2 \xi _2 \end{pmatrix}, \end{aligned}$$
(4.4)
$$\begin{aligned} \begin{pmatrix} \psi _1\bigl (x,\xi _1,\xi _2\bigr )\\ \\ \psi _2 \bigl (x,\xi _1,\xi _2 \bigr ) \end{pmatrix} = \begin{pmatrix} (\gamma -a_0- f_0') \xi _1 -R_1- S_1 x - \eta _1' \\ \\ (\gamma -a_0- f_0') \xi _2 -R_2-S_2 x-\eta _2' \end{pmatrix}. \end{aligned}$$
(4.5)

Throughout the following, we assume that the condition (A1) holds, and let

$$\begin{aligned} \xi _i^*(x) = V_i'(x)=\alpha _i x+\beta _i, \quad i=1,2, \end{aligned}$$
(4.6)

be the gradients of the value functions in the affine solution of the linear-quadratic game. We denote by

$$\begin{aligned} \Lambda ^*(x) \doteq \Lambda (x,\xi _1^*(x),\xi ^*_2(x)) = \begin{pmatrix} a_0x- b_1^2\xi ^*_1 -b_2^2 \xi ^*_2 &{}\quad -b_2^2 \xi ^*_1\\ \\ -b_1^2\xi ^*_2 &{}\quad a_0x-b_1^2\xi ^*_1 -b_2^2 \xi ^*_2 \end{pmatrix} \end{aligned}$$
(4.7)

the matrix in (3.4), evaluated along the affine solution. Its determinant is

$$\begin{aligned} \det \Lambda ^*(x) = \bigl [ a_0 x - b_1^2(\alpha _1 x+\beta _1) - b_2^2(\alpha _2 x+\beta _2)\bigr ]^2 -b_1^2 b_2^2 (\alpha _1 x+\beta _1)(\alpha _2 x+\beta _2). \end{aligned}$$
(4.8)

Under generic assumptions on the coefficients, \(\det \Lambda ^*(x)\) is a polynomial of degree two, either with no real root or with two distinct roots. These two cases are qualitatively very different.
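Which of the two cases occurs can be read off the discriminant of the quadratic polynomial (4.8). The sketch below runs the full computation on illustrative Case 1 coefficients (sample values only, not from the text); for this particular choice the discriminant turns out to be negative, so \(\det \Lambda ^*\) has no real roots:

```python
# Illustrative Case 1 coefficients (sample values, not taken from the text).
a0, gamma, b1, b2, S1, S2, R1, R2 = 2.0, 1.0, 1.0, 1.0, 18.0, 4.5, 1.0, 1.0

# Affine solution: bisection for Y1* in (3.12), then (3.15) and the beta system.
A = 2*a0 - gamma
K1, K2 = S1*b1**2/A**2, S2*b2**2/A**2
G = lambda y: 3*y**4 + 4*y**3 + (1 - 2*(K1 + K2))*y**2 - (K1 - K2)**2
lo, hi = 0.1, 10.0
for _ in range(200):
    mid = 0.5*(lo + hi)
    lo, hi = (lo, mid) if G(lo)*G(mid) <= 0 else (mid, hi)
Y1 = 0.5*(lo + hi)
al1 = A/(2*b1**2)*(Y1 + 1 + (K1 - K2)/Y1)
al2 = A/(2*b2**2)*(Y1 + 1 - (K1 - K2)/Y1)
c = b1**2*al1 + b2**2*al2 - a0 + gamma
den = c**2 - b1**2*al1*b2**2*al2
be1 = (c*R1 - b2**2*al1*R2)/den
be2 = (c*R2 - b1**2*al2*R1)/den

# Expand det Lambda*(x) = (p*x + q)^2 - b1^2 b2^2 (al1*x + be1)(al2*x + be2)
# as qa*x^2 + qb*x + qc, following (4.8).
p = a0 - b1**2*al1 - b2**2*al2
q = -(b1**2*be1 + b2**2*be2)
qa = p**2 - b1**2*b2**2*al1*al2
qb = 2*p*q - b1**2*b2**2*(al1*be2 + al2*be1)
qc = q**2 - b1**2*b2**2*be1*be2

disc = qb**2 - 4*qa*qc
assert qa > 0 and disc < 0      # det Lambda* > 0 on the whole real line
```

Replacing the sample values by symmetric coefficients (e.g. \(S_1=S_2\), \(b_1=b_2\), \(R_1=R_2\)) makes \(\det \Lambda ^*\) factor into two real linear polynomials, so the two-distinct-roots case occurs as well.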

4.1 \(\det \Lambda ^*\) Has No Real Roots

In this case, the matrix \(\Lambda ^*(x)\) is invertible for every \(x\in I\!\!R\). In a neighborhood of the line \(\xi ^*\) the system (4.3) can be written in explicit form as

$$\begin{aligned} \begin{pmatrix} \xi _1'\\ \\ \xi _2' \end{pmatrix} = \Lambda ^{-1} \bigl (x,\xi _1,\xi _2\bigr ) \begin{pmatrix} \psi _1\bigl (x,\xi _1,\xi _2\bigr )\\ \\ \psi _2 \bigl (x,\xi _1,\xi _2 \bigr ) \end{pmatrix}. \end{aligned}$$
(4.9)

In this setting, the standard theory of uniqueness and continuous dependence for solutions to ODEs yields

Theorem 1

Assume that the quadratic polynomial \(\det \Lambda ^*(x)\) in (4.8) has no real roots, and let \(\Omega \subset I\!\!R\) be any bounded interval. Given \(\varepsilon >0\), there exist \(\delta _0, \delta _1>0\) such that the following holds. Consider any point \(x_0\in \Omega \), any perturbations \(f_0,\eta _1,\eta _2\in {\mathcal C}^1(\Omega )\), \(h_1,h_2\in {\mathcal C}^0(\Omega )\), and any data \(\bar{\xi }_1,\bar{\xi }_2\in I\!\!R\), satisfying

$$\begin{aligned} |\bar{\xi }_1-\xi _1^*(x_0)| \le \delta _0, \quad |\bar{\xi }_2-\xi _2^*(x_0)| \le \delta _0, \end{aligned}$$
(4.10)
$$\begin{aligned} \Vert f_0\Vert _{{\mathcal C}^1} + \Vert \eta _1\Vert _{{\mathcal C}^1} + \Vert \eta _2\Vert _{{\mathcal C}^1}+\Vert h_1\Vert _{{\mathcal C}^0}+ \Vert h_2\Vert _{{\mathcal C}^0} \le \delta _1. \end{aligned}$$
(4.11)

Then the system (4.3), with \(\Lambda ,\psi \) given at (4.4)–(4.5), has a unique solution on \(\Omega \), with initial data

$$\begin{aligned} \xi _1(x_0) = \bar{\xi }_1,\quad \xi _2(x_0) = \bar{\xi }_2. \end{aligned}$$

Moreover, this solution satisfies

$$\begin{aligned} |\xi _1(x)-\xi ^*_1(x)|+|\xi _2(x)-\xi ^*_2(x)| \le \varepsilon \quad \hbox {for all}~x\in \Omega . \end{aligned}$$
(4.12)

Notice that in this case, setting \(f_0=h_1=h_2=\eta _1=\eta _2=0\), one can already find infinitely many solutions to the implicit system of ODEs (4.3) on the interval \(\Omega \). One of these solutions is affine, given by (3.6), while the others are fully nonlinear. In Sect. 6 we shall analyze whether these nonlinear solutions can be extended beyond \(\Omega \), to the entire real line.
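In this well-posed regime, the conclusion of Theorem 1 can also be observed numerically. The following minimal sketch (not part of the proof) integrates (4.9) by forward Euler, using the unperturbed data of Example 1 below; the values \(R_1=3\), \(R_2=-3\), \(S_1=S_2=50\) in (4.5) are inferred from the affine solution \(\xi ^*_1=5x+1\), \(\xi ^*_2=5x-1\) and should be regarded as an assumption to be checked against (3.3).

```python
# Forward Euler integration of the explicit system (4.9), with the
# unperturbed coefficients of Example 1: a0 = 3, b1 = b2 = 1, gamma = 1,
# and (assumed) R1 = 3, R2 = -3, S1 = S2 = 50 in (4.5).

def Lam(x, xi1, xi2):
    """Matrix Lambda of (4.4) with f0 = h1 = h2 = 0."""
    d = 3*x - xi1 - xi2
    return ((d, -xi1), (-xi2, d))

def psi(x, xi1, xi2):
    """Vector (4.5) with f0' = eta1' = eta2' = 0."""
    return ((1 - 3)*xi1 - 3 - 50*x, (1 - 3)*xi2 + 3 - 50*x)

def rhs(x, xi1, xi2):
    """Explicit form (4.9): solve Lambda * xi' = psi by Cramer's rule."""
    (a, b), (c, d) = Lam(x, xi1, xi2)
    p1, p2 = psi(x, xi1, xi2)
    det = a*d - b*c          # = det Lambda, nonzero near the line xi*
    return ((p1*d - b*p2)/det, (a*p2 - c*p1)/det)

def euler(x0, xi0, x1, n=2000):
    x, (xi1, xi2) = x0, xi0
    h = (x1 - x0)/n
    for _ in range(n):
        d1, d2 = rhs(x, xi1, xi2)
        xi1, xi2 = xi1 + h*d1, xi2 + h*d2
        x += h
    return xi1, xi2

# From xi*(0) = (1, -1) the solution stays on the affine line,
# so at x = 1 it returns (approximately) xi*(1) = (6, 4).
xi1_end, xi2_end = euler(0.0, (1.0, -1.0), 1.0)
```

Along the affine solution the right-hand side of (4.9) is identically \((\alpha _1,\alpha _2)=(5,5)\), so the Euler iterates remain on the line up to rounding.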

Remark 4

Assume that the stationary point \(x^*\) in (3.22) is an asymptotically stable equilibrium for the linear ODE (3.21), contained in the interior of \(\Omega \). Then, choosing \(\delta _0,\delta _1>0\) sufficiently small in (4.10)–(4.11), the interval \(\Omega \) will be positively invariant for the corresponding feedback dynamics

$$\begin{aligned} \dot{x} = a_0 x + f_0(x) - \bigl (b_1 + h_1(x)\bigr )^2\xi _1(x) - \bigl (b_2 + h_2(x)\bigr )^2\xi _2(x). \end{aligned}$$
(4.13)

Example 1

Consider the linear system

$$\begin{aligned} \dot{x} = 3x+u_1+u_2 \end{aligned}$$
(4.14)

and the cost functions

$$\begin{aligned} J_1 = \int _0^{\infty }e^{-t}\cdot \Big (25x^2+ 3x+{1\over 2}u^2_1\Big )\, \mathrm{d}t,\quad J_2 = \int _0^{\infty }e^{-t}\cdot \Big (25x^2-3x+{1\over 2}u^2_2\Big )\, \mathrm{d}t. \end{aligned}$$
(4.15)

The corresponding constants in (3.8) are

$$\begin{aligned} A = 5\quad \mathrm {and}\quad K_1 = K_2 = 2. \end{aligned}$$

Solving (3.12) and (3.11) we obtain

$$\begin{aligned} Y_1^* = 1, \quad Y_2^* = 0, \quad X_1^* = X_2^* = 1. \end{aligned}$$

After a few computations, we find that (4.3) admits the affine solution

$$\begin{aligned} \xi ^*_1(x) = 5x+1,\quad \xi ^*_2(x) = 5x-1. \end{aligned}$$

The corresponding feedback dynamics is

$$\begin{aligned} \dot{x} = 3x-\xi _1^*(x)-\xi _2^*(x) = -7x, \end{aligned}$$

having \(x^*=0\) as an asymptotically stable equilibrium point. By (4.8), the determinant of the corresponding matrix \(\Lambda ^*\) is

$$\begin{aligned} \det \Lambda ^*(x) = (-7x)^2-(5x+1)\cdot (5x-1) = 24x^2+1, \end{aligned}$$

which is always positive.
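The computations in Example 1 can be double-checked with exact rational arithmetic; a short sketch, assuming \(a_0=3\), \(b_1=b_2=1\) as above:

```python
# Exact check of Example 1: the affine pair xi*_1 = 5x+1, xi*_2 = 5x-1
# yields the closed-loop drift -7x, and formula (4.8) with
# alpha1 = alpha2 = 5, beta1 = 1, beta2 = -1 reduces to 24x^2 + 1 > 0.
from fractions import Fraction

def xi1(x): return 5*x + 1
def xi2(x): return 5*x - 1

def drift(x):                      # feedback dynamics 3x - xi1 - xi2
    return 3*x - xi1(x) - xi2(x)

def detLam(x):                     # determinant (4.8) with b1 = b2 = 1
    return (3*x - xi1(x) - xi2(x))**2 - xi1(x)*xi2(x)

# Equality of polynomials of degree <= 2, checked on many sample points.
samples = [Fraction(k, 7) for k in range(-20, 21)]
assert all(drift(x) == -7*x for x in samples)
assert all(detLam(x) == 24*x**2 + 1 for x in samples)
assert min(detLam(x) for x in samples) > 0
```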

4.2 Det \(\Lambda ^*\) has Two Real Roots

We now assume that the determinant of the matrix \(\Lambda ^*(x)\) at (4.7) vanishes at two distinct points \(\bar{x}_1<\bar{x}_2\), so that

$$\begin{aligned} \bar{x}_1, \bar{x}_2\ne x^*\quad \mathrm {and}\quad p(x) \doteq \det \Lambda ^*(x) = C_{\Lambda }(x- \bar{x}_1)(x-\bar{x}_2) \end{aligned}$$
(4.16)

where \(x^*\) is the stationary point in (3.22) and \(C_{\Lambda }\) is a nonzero constant. Notice that, if this assumption holds, the coefficients \(R_i\) in (3.3) and \(\beta _i\) in (4.6) must satisfy

$$\begin{aligned} (R_1,R_2) \ne (0,0),\quad (\beta _1,\beta _2) \ne (0,0). \end{aligned}$$
(4.17)

Indeed, if \((R_1,R_2)=(0,0)\), then the last two equations of (3.7) together with (3.16) imply \((\beta _1,\beta _2)=(0,0)\). By (3.6), for \(i=1,2\) we thus have \(\xi _i^*(x)=\alpha _ix\). This yields

$$\begin{aligned} \det \Lambda ^*(x) = \big [(a_0-b_1^2\alpha _1-b_2^2 \alpha _2)^2-b_1^2b_2^2\alpha _1\alpha _2\big ]\cdot x^2. \end{aligned}$$
(4.18)

Hence \(\bar{x}_1=\bar{x}_2 = 0\), contradicting the assumption that the two roots are distinct.

We rewrite (4.3) as a Pfaffian system

$$\begin{aligned} \left\{ \begin{array}{ll} \Lambda _{11}\, \mathrm{d}\xi _1 + \Lambda _{12}\, \mathrm{d}\xi _2 - \psi _1\, \mathrm{d}x&{}= 0,\\ \Lambda _{21} \,\mathrm{d}\xi _1 + \Lambda _{22} \,\mathrm{d}\xi _2 - \psi _2\, \mathrm{d}x&{}=~0.\end{array}\right. \end{aligned}$$
(4.19)

Consider the vectors

$$\begin{aligned} \mathbf{v} \doteq \begin{pmatrix} -\psi _1\\ \Lambda _{11}\\ \Lambda _{12} \end{pmatrix},\quad \mathbf{w} \doteq \begin{pmatrix} -\psi _2\\ \Lambda _{21}\\ \Lambda _{22}\end{pmatrix}, \end{aligned}$$
(4.20)

and denote by \(\mathbf{v}\wedge \mathbf{w}\) their wedge product. To construct trajectories of (4.3), we seek continuously differentiable functions \(x\mapsto (\xi _1(x), \,\xi _2(x))\) whose graph is obtained by concatenating trajectories of the system

$$\begin{aligned} \begin{pmatrix}\dot{x}\\ \dot{\xi }_1\\ \dot{\xi }_2\end{pmatrix}\ =\mathbf{v}\wedge \mathbf{w} = \begin{pmatrix} \Lambda _{11}\Lambda _{22}- \Lambda _{12}\Lambda _{21}\\ \Lambda _{22}\psi _1-\Lambda _{12}\psi _2\\ \Lambda _{11}\psi _2-\Lambda _{21}\psi _1 \end{pmatrix}. \end{aligned}$$
(4.21)
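Concretely, \(\mathbf{v}\wedge \mathbf{w}\) is the usual cross product in \(I\!\!R^3\), and its first component is \(\det \Lambda \). A small sketch with the unperturbed coefficients of Example 1 (\(a_0=3\), \(b_1=b_2=1\), \(\gamma =1\); the values \(R_1=3\), \(R_2=-3\), \(S_1=S_2=50\) in (4.5) are an assumption inferred from the affine solution):

```python
# The field (4.21) as a cross product of the vectors v, w in (4.20).

def Lam(x, xi1, xi2):
    d = 3*x - xi1 - xi2
    return ((d, -xi1), (-xi2, d))

def psi(x, xi1, xi2):
    return ((1 - 3)*xi1 - 3 - 50*x, (1 - 3)*xi2 + 3 - 50*x)

def cross(v, w):
    return (v[1]*w[2] - v[2]*w[1],
            v[2]*w[0] - v[0]*w[2],
            v[0]*w[1] - v[1]*w[0])

def field(x, xi1, xi2):
    """Right-hand side of (4.21)."""
    (L11, L12), (L21, L22) = Lam(x, xi1, xi2)
    p1, p2 = psi(x, xi1, xi2)
    return cross((-p1, L11, L12), (-p2, L21, L22))

# At an integer sample point, the three components agree with (4.21):
x0, a, b = 1, 2, -3
f = field(x0, a, b)
(L11, L12), (L21, L22) = Lam(x0, a, b)
p1, p2 = psi(x0, a, b)
assert f == (L11*L22 - L12*L21, L22*p1 - L12*p2, L11*p2 - L21*p1)
```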

Let us first consider the unperturbed linear-quadratic case, where the matrix \(\Lambda \) and the vector \(\psi \) are given by (3.4) and (3.5), respectively. By (4.16), the equation \(\det \Lambda ^*(x)=0\) has two solutions. These correspond to the two points

$$\begin{aligned} P^*_1 = (\bar{x}_1, \xi _1^*(\bar{x}_1), \xi ^*_2(\bar{x}_1)),\quad P^*_2 = (\bar{x}_2, \xi _1^*(\bar{x}_2), \xi ^*_2(\bar{x}_2)). \end{aligned}$$
(4.22)

Moreover the derivative \({\mathrm{d}\over \mathrm{d}x}\det \Lambda ^*(x)\) does not vanish at \(x=\bar{x}_1\) and at \(x=\bar{x}_2\). By the implicit function theorem, there exist two surfaces \(\Sigma _1,\Sigma _2\subset I\!\!R^3\), containing the above two points, where the determinant of \(\Lambda (x, \xi _1,\xi _2)\) vanishes (see Fig. 3).

The following analysis will show that, under generic conditions, there exist

  • Two curves \(\gamma _1\subset \Sigma _1\) and \(\gamma _2\subset \Sigma _2\), containing the points \(P_1^*\) and \(P_2^*\), respectively, where the right-hand side of (4.21) vanishes.

  • Two 2-dimensional invariant manifolds \({\mathcal M}_1,{\mathcal M}_2\), containing \(\gamma _1\) and \(\gamma _2\), respectively, which intersect transversally.

As shown in Fig. 3, the transversal intersection \({\mathcal M}_1\cap {\mathcal M}_2\) defines a heteroclinic orbit connecting \(P_1^*\) with \(P_2^*\). Since this configuration is structurally stable, it is preserved by small \({\mathcal C}^1\) perturbations of the vector fields \(\mathbf{v},\mathbf{w}\) in (4.20).

The next lemma provides generic conditions on the coefficients \(R_i, S_i, b_i\) which guarantee the existence of two smooth curves \(\gamma _1,\gamma _2\) where the vector fields \(\mathbf{v},\mathbf{w}\) are parallel.

Lemma 1

Together with (A1) and (4.16), assume that at least one of the following equalities does not hold:

$$\begin{aligned} {R_1\over R_2} = {S_1\over S_2} = {b_2^2\over b_1^2}. \end{aligned}$$
(4.23)

Then the Jacobian matrix of the vector field \(\mathbf{v}\wedge \mathbf{w}\) has rank 2 at both points \(P_1^*\) and \(P_2^*\). As a consequence, the equation

$$\begin{aligned} \mathbf{v}\wedge \mathbf{w} = 0 \in I\!\!R^3 \end{aligned}$$
(4.24)

defines two smooth curves \(\gamma _1,\gamma _2{:}[-s_0,s_0]\rightarrow I\!\!R^3\), which we can parameterize by arc length, such that

$$\begin{aligned} \gamma _i\bigl ([-s_0,s_0]\bigr ) \subset \Sigma _i\quad \hbox {and}\quad \gamma _i(0) = P_i^*,\quad i=1,2. \end{aligned}$$
(4.25)

Proof

1. For \(i=1,2\), since we are assuming \(\bar{x}_i\not = x^*\), at the point \(P_i^*\) we have

$$\begin{aligned} \Lambda _{11} = \Lambda _{22} = a_0 - b_1^2\xi _1^*- b_2^2\xi _2^* \not = 0. \end{aligned}$$

Indeed, by (3.21) the above quantity coincides with the time derivative \(\dot{x}\), which vanishes only at the equilibrium point \(x^*\). In turn, det\(\Lambda ^*(\bar{x}_i)=0\) implies \(\Lambda _{12}\Lambda _{21}\not =0\) as well. By continuity, all four coefficients \(\Lambda _{jk}\) in (4.19)–(4.21) are nonzero on a neighborhood of \(P_i^*\). As a consequence, if the first and second components (or the first and third) of the wedge product in (4.21) vanish, then the remaining component vanishes as well. Next, by (4.16) it follows

$$\begin{aligned} \nabla (\Lambda _{11}\Lambda _{22}- \Lambda _{12}\Lambda _{21})(P_i^*) \ne 0,\quad i=1,2. \end{aligned}$$

By the implicit function theorem, it thus suffices to show that the Jacobian matrix of the map

$$\begin{aligned} \begin{pmatrix}x\\ \xi _1\\ \xi _2\end{pmatrix} \mapsto \begin{pmatrix} \Lambda _{11}\Lambda _{22}- \Lambda _{12}\Lambda _{21} \\ \Lambda _{22}\psi _1-\Lambda _{12}\psi _2\\ \Lambda _{11}\psi _2-\Lambda _{21}\psi _1\end{pmatrix} \end{aligned}$$

has rank \(\ge 2\) at the points \(P^*_1\) and \(P^*_2\). In the following we denote by

$$\begin{aligned} \mathcal {J} = \begin{pmatrix} {\mathcal J}_1\\ {\mathcal J}_2\\ {\mathcal J}_3\end{pmatrix} \doteq \begin{pmatrix} \nabla (\Lambda _{11}\Lambda _{22}- \Lambda _{12}\Lambda _{21}) \\ \nabla (\Lambda _{22}\psi _1-\Lambda _{12}\psi _2)\\ \nabla (\Lambda _{11}\psi _2-\Lambda _{21}\psi _1)\end{pmatrix} \end{aligned}$$
(4.26)

this \(3\times 3\) Jacobian matrix. Moreover, for a fixed index \(i\in \{1,2\}\), we denote by \({\mathcal J}^{i*}\) this particular matrix computed at the point \(P_i^*\) in (4.22).

2. We claim that the range of the linear map \(\mathbf{u}\mapsto {\mathcal J}^{i*}\mathbf{u}\) has dimension \(\ge 2\).

Indeed, since the function \(\xi ^*=(\xi _1^*, \xi _2^*)\) in (3.6) is an affine solution of (4.3) through \(P_i^*\), the system (4.21) has a corresponding solution of the form

$$\begin{aligned} \begin{pmatrix} 1\\ \alpha _1\\ \alpha _2 \end{pmatrix} \cdot x(t)+ \begin{pmatrix} 0\\ \beta _1\\ \beta _2 \end{pmatrix}. \end{aligned}$$

This solution satisfies

$$\begin{aligned} \dot{x} = \det \Lambda (x, \xi _1^*(x), \xi _2^*(x)). \end{aligned}$$

Therefore, as \(x(t)\rightarrow \bar{x}_i\), we have the convergence

$$\begin{aligned} {\ddot{x}(t)\over \dot{x}(t)} \rightarrow p'(\bar{x}_i) \ne 0 \end{aligned}$$

where p is defined in (4.16). This implies

$$\begin{aligned} {\mathcal J}^{i*}\begin{pmatrix} 1\\ \alpha _1\\ \alpha _2\end{pmatrix} = p'(\bar{x}_i)\cdot \begin{pmatrix}1\\ \alpha _1\\ \alpha _2\end{pmatrix}. \end{aligned}$$
(4.27)

Next, we observe that all functions

$$\begin{aligned} \Lambda _{11}=\Lambda _{22},\quad \Lambda _{12},\quad \Lambda _{21},\quad \psi _1+ R_ 1,\quad \psi _2+R_ 2, \end{aligned}$$

are linear homogeneous w.r.t. the variables \(x,\xi _1,\xi _2\). In general, if a function \(\phi {:}I\!\!R^3\mapsto I\!\!R\) is homogeneous of degree \(\beta \), so that \(\phi (t\mathbf{z})=t^\beta \phi (\mathbf{z})\), its directional derivative satisfies

$$\begin{aligned} \langle \nabla \phi (\mathbf{z}),\,\mathbf{z}\rangle = {d\over \mathrm{d}t} \phi (t\mathbf{z}) \bigg |_{t=1} = \beta \phi (\mathbf{z}). \end{aligned}$$
(4.28)

Moreover, since all \(\Lambda _{ij}\) are linear, for every \(\mathbf{z}\in I\!\!R^3\) we trivially have

$$\begin{aligned} \big \langle \nabla \Lambda _{ij}(\mathbf{z}),\, \mathbf{z}\big \rangle = \Lambda _{ij}(\mathbf{z}). \end{aligned}$$
(4.29)
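The identity (4.28) with \(\beta =2\) is easy to test numerically for \(\phi = \Lambda _{11}\Lambda _{22}-\Lambda _{12}\Lambda _{21}\). A sketch with the coefficients \(a_0=3\), \(b_1=b_2=1\) of Example 1 (central differences are exact for quadratic \(\phi \), up to rounding):

```python
# Euler's identity <grad phi(z), z> = 2 phi(z) for the degree-2
# homogeneous function phi = Lam11*Lam22 - Lam12*Lam21.

def phi(z):
    x, xi1, xi2 = z
    L11 = 3*x - xi1 - xi2        # = Lam22, linear homogeneous
    L12, L21 = -xi1, -xi2
    return L11*L11 - L12*L21

def grad(f, z, h=1e-5):
    """Central finite-difference gradient."""
    g = []
    for k in range(3):
        zp, zm = list(z), list(z)
        zp[k] += h
        zm[k] -= h
        g.append((f(zp) - f(zm)) / (2*h))
    return g

z = [0.5, 2.0, -1.0]
lhs = sum(gk*zk for gk, zk in zip(grad(phi, z), z))
two_phi = 2*phi(z)               # beta = 2 in (4.28)
```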

For convenience, for a fixed \(i\in \{1,2\}\) we shall write

$$\begin{aligned} \Lambda ^*_{jk} = \Lambda _{jk}(P_i^*)\quad \mathrm {and}\quad \psi ^*_j = \psi _j(P_i^*). \end{aligned}$$

We now claim that

$$\begin{aligned} {\mathcal J}^{i*}\begin{pmatrix} \bar{x}_i\\ \xi _1^*(\bar{x}_i)\\ \xi _2^*(\bar{x}_i)\end{pmatrix} \doteq \begin{pmatrix}\kappa _1\\ \kappa _2\\ \kappa _3\end{pmatrix} = \begin{pmatrix}0\\ R_ 1\Lambda ^*_{22}- R_ 2 \Lambda ^*_{12}\\ R_ 2 \Lambda ^*_{11}-R_ 1\Lambda ^*_{21}\end{pmatrix}. \end{aligned}$$
(4.30)

Indeed, applying (4.28) to the function \(\phi =\Lambda _{11}\Lambda _{22}- \Lambda _{12}\Lambda _{21} \) at the point \(P_i^*\) we obtain

$$\begin{aligned} \kappa _1 = \nabla (\Lambda _{11}\Lambda _{22}- \Lambda _{12}\Lambda _{21}) \cdot \begin{pmatrix} \bar{x}_i\\ \xi _1^*(\bar{x}_i)\\ \xi _2^*(\bar{x}_i)\end{pmatrix} = 2(\Lambda ^*_{11}\Lambda ^*_{22}- \Lambda ^*_{12}\Lambda ^*_{21}) = 0. \end{aligned}$$

Using (4.28)–(4.29), we then compute

$$\begin{aligned} \kappa _2= & {} \displaystyle \nabla (\Lambda _{22}\psi _1-\Lambda _{12}\psi _2)\begin{pmatrix} \bar{x}_i \\ \xi _1^*(\bar{x}_i)\\ \xi _2^*(\bar{x}_i)\end{pmatrix}\nonumber \\= & {} \displaystyle \Big [\nabla (\Lambda _{22}(\psi _1+R_1) -\Lambda _{12}(\psi _2+R_2))-(R_ 1\nabla \Lambda _{22}-R_2\nabla \Lambda _{12})\Big ] \begin{pmatrix}\bar{x}_i \\ \xi _1^*(\bar{x}_i)\\ \xi _2^*(\bar{x}_i) \end{pmatrix}\nonumber \\= & {} 2(R_1\Lambda ^*_{22}- R_2 \Lambda ^*_{12})-(R_1\Lambda ^*_{22}- R_2 \Lambda ^*_{12}). \end{aligned}$$
(4.31)

An entirely similar computation yields the value of \(\kappa _3\) given in (4.30).

If \((\kappa _2,\kappa _3)\ne (0,0)\), comparing (4.27) with (4.30) we conclude that the Jacobian matrix \({\mathcal J}^{i*}\) has rank 2.

Otherwise, if \((\kappa _2,\kappa _3)= (0,0)\), recalling that \(\Lambda _{11}=\Lambda _{22}\), by (4.30) we obtain

$$\begin{aligned} \begin{pmatrix} \Lambda ^*_{11} &{}&{} \Lambda ^*_{12}\\ \Lambda ^*_{21} &{}&{} \Lambda ^*_{22}\end{pmatrix} \begin{pmatrix} R_1\\ R_ 2 \end{pmatrix} = 2\Lambda ^*_{11}\cdot \begin{pmatrix}R_ 1\\ R_ 2\end{pmatrix}. \end{aligned}$$
(4.32)

By (4.3) it follows

$$\begin{aligned} \begin{pmatrix} \Lambda ^*_{11} &{}&{} \Lambda ^*_{12}\\ \Lambda ^*_{21} &{}&{} \Lambda ^*_{22}\end{pmatrix} \begin{pmatrix} \psi ^*_1\\ \psi ^*_2 \end{pmatrix} = 2\Lambda ^*_{11}\cdot \begin{pmatrix}\psi ^*_1\\ \psi ^*_2\end{pmatrix}\quad \mathrm {and}\quad \begin{pmatrix} \Lambda ^*_{11} &{}&{} \Lambda ^*_{12}\\ \Lambda ^*_{21} &{}&{} \Lambda ^*_{22}\end{pmatrix} \begin{pmatrix} \alpha _1\\ \alpha _2 \end{pmatrix}~=~\begin{pmatrix}\psi ^*_1\\ \psi ^*_2\end{pmatrix}. \end{aligned}$$

Since \((R_1,R_2)\ne (0,0) \), we have

$$\begin{aligned} \begin{pmatrix} \psi ^*_1\\ \psi ^*_2 \end{pmatrix} = C\cdot \begin{pmatrix}R_1\\ R_2 \end{pmatrix} \quad \mathrm {and}\quad \begin{pmatrix} \Lambda ^*_{11} &{}&{} \Lambda ^*_{12}\\ \Lambda ^*_{21} &{}&{} \Lambda ^*_{22}\end{pmatrix} \begin{pmatrix} \alpha _1\\ \alpha _2 \end{pmatrix} = 2C\Lambda ^*_{11}\cdot \begin{pmatrix}R_1\\ R_2\end{pmatrix}\, \end{aligned}$$
(4.33)

for some constant C. We distinguish two cases:

  • If \(C\ne 0\) then (4.32) and (4.33) imply

    $$\begin{aligned} {R_1\over R_2} = {\psi _1^*\over \psi _2^* } = {\Lambda ^*_{12}\over \Lambda ^*_{11}} = {\alpha _1\over \alpha _2} > 0. \end{aligned}$$
    (4.34)

    To derive a contradiction, let \({\mathcal J}_k^{i*}\) be the k-th row of the Jacobian matrix \({\mathcal J}^{i*}\), as in (4.26). Using the relations

    $$\begin{aligned} \Lambda ^*_{11}\psi _1^* = \Lambda _{12}^*\psi _2^*, \quad \Lambda ^*_{11}\psi ^*_2 =\Lambda ^*_{21}\psi ^*_1, \end{aligned}$$

    one obtains

    $$\begin{aligned} \Lambda _{11}^*{\mathcal J}^{i*}_2+\Lambda ^*_{12}{\mathcal J}^{i*}_3&= \bigl (*,\ \psi ^*_1b_2^2\Lambda ^*_{21},\ \psi _1^*b_1^2\Lambda ^*_{12}\bigr ),\\ {\mathcal J}_3^{i*}&= \bigl (*,\ -2b_1^2\Lambda ^*_{11}+b_2^2\Lambda ^*_{21},\ -2b_2^2\Lambda ^*_{11}+b_1^2\Lambda ^*_{12}\bigr ). \end{aligned}$$

    If the Jacobian matrix \({\mathcal J}^{i*}\) has rank \(\le 1\), then the above two vectors must be parallel. Hence

    $$\begin{aligned} b_2^4\Lambda ^*_{21}\Lambda ^*_{11} = b_1^4\Lambda ^*_{12}\Lambda ^*_{11}. \end{aligned}$$

    This yields

    $$\begin{aligned} {\Lambda ^*_{12}\over \Lambda _{11}^*} = {\xi _1^*(\bar{x}_i)\over \xi _2^*(\bar{x}_i)}~=~{b_2^2\over b_1^2}. \end{aligned}$$

    Recalling (4.34), we obtain

    $$\begin{aligned} {R_ 1\over R_2} = {\alpha _1\over \alpha _2} = {\beta _1\over \beta _2}={b_2^2\over b_1^2}. \end{aligned}$$

    Thus, from the first two equations of (3.7), we deduce

    $$\begin{aligned} {S_1\over S_2} = {b_1^2\alpha _1^2 + 2b_2^2\alpha _1\alpha _2 +(\gamma -2a_0)\alpha _1\over b_2^2\alpha _2^2 + 2b_1^2\alpha _1\alpha _2 +(\gamma -2a_0) \alpha _2} = {b_2^2\over b_1^2}, \end{aligned}$$

    contradicting the assumption (4.23).

  • If \(C=0\) then (4.33) implies the identities

    $$\begin{aligned} \psi ^*_1 = \psi ^*_2 = 0,\quad \Lambda ^*_{11}\alpha _1 = -\Lambda ^*_{12}\alpha _2,\quad \Lambda ^*_{21}\alpha _1 = -\Lambda ^*_{11}\alpha _2. \end{aligned}$$

    Therefore

    $$\begin{aligned} \begin{pmatrix} \Lambda ^*_{11} &{}&{} \Lambda ^*_{12}\\ \Lambda ^*_{21} &{}&{} \Lambda ^*_{22}\end{pmatrix} \begin{pmatrix}\alpha _1\\ -\alpha _2\end{pmatrix} = 2\Lambda ^*_{11}\begin{pmatrix}\alpha _1\\ -\alpha _2\end{pmatrix}. \end{aligned}$$

    Recalling (4.32), we obtain

    $$\begin{aligned} R_ 1\alpha _2 = -R_ 2\alpha _1. \end{aligned}$$

    Since \(\alpha _1,\alpha _2>0\), this implies \(R_ 1\not = 0\), \(R_ 2\ne 0\). Recalling (4.30) and (4.17), we thus have \(\Lambda ^*_{11}\ne 0\). On the other hand, using the identities \(\psi ^*_1=\psi ^*_2=0\), a direct computation yields

    $$\begin{aligned} {\mathcal J}^{i*} = \begin{pmatrix}*&{}&{} (\gamma -a_0)\Lambda ^*_{11} &{}&{} -(\gamma -a_0)\Lambda ^*_{12} \\ \\ * &{}&{} *&{}&{} *\\ \\ *&{}&{} -2b_1^2\Lambda ^*_{11}+b_2^2\Lambda ^*_{21} &{}&{} -2b_2^2\Lambda ^*_{11} + b_1^2\Lambda ^*_{12} \end{pmatrix}. \end{aligned}$$

    We claim that this matrix \({\mathcal J}\) at \(P_i^*\) has rank \(\ge 2\). If not, then

    $$\begin{aligned} \Lambda ^*_{11}\cdot (-2b_2^2\Lambda ^*_{11} + b_1^2\Lambda ^*_{12})+\Lambda ^*_{12}\cdot (-2b_1^2 \Lambda ^*_{11}+b_2^2\Lambda ^*_{21}) = 0. \end{aligned}$$

    This implies

    $$\begin{aligned} b_2^2\Lambda ^*_{11}+b_1^2\Lambda ^*_{12} = 0,\quad b_2^2\Lambda ^*_{21}+b_1^2\Lambda ^*_{11} = 0. \end{aligned}$$
    (4.35)

    Recalling (4.30) we obtain

    $$\begin{aligned} -{\alpha _1\over \alpha _2}={R_ 1\over R_ 2} = {\Lambda ^*_{12}\over \Lambda ^*_{11}} = -{b_2^2\over b^2_1}. \end{aligned}$$
    (4.36)

    Hence

    $$\begin{aligned} b_1^2\alpha _1 = b_2^2\alpha _2. \end{aligned}$$
    (4.37)

    Moreover, (4.35) also implies

    $$\begin{aligned} b_1^2\cdot \xi _1^*(\bar{x}_i) = b_2^2\cdot \xi _2^*(\bar{x}_i), \end{aligned}$$

    which yields

    $$\begin{aligned} b_1^2\beta _1 = b_2^2\beta _2. \end{aligned}$$
    (4.38)

    Inserting (4.37) and (4.38) in the last two equations of the system (3.7), we obtain \(b_1^2R_ 1=b_2^2R_ 2\), reaching a contradiction with (4.36).

In all cases, we conclude that the Jacobian matrix \({\mathcal J}\) has rank \(\ge 2\) at the points \(P_1^*\) and \(P_2^*\). Hence the curves \(\gamma _1,\gamma _2\) are well defined. \(\square \)

The same conclusion remains true also in the presence of sufficiently small nonlinear perturbations. Indeed, a straightforward application of the implicit function theorem yields:

Lemma 2

Under the same assumptions as in Lemma 1, there exists \(\delta _0>0\) sufficiently small such that the following holds. Let \(f_0, \eta _1,\eta _2,h_1,h_2\) be perturbations such that

$$\begin{aligned} \Vert f_0\Vert _{{\mathcal C}^2} + \Vert \eta _1\Vert _{{\mathcal C}^2} + \Vert \eta _2\Vert _{{\mathcal C}^2} +\Vert h_1\Vert _{{\mathcal C}^1}+ \Vert h_2\Vert _{{\mathcal C}^1} \doteq \delta \le \delta _0. \end{aligned}$$
(4.39)

Then there exist equilibrium points \(\overline{P}_1,\overline{P}_2\) for the ODE (4.21), with \(\Lambda ,\psi \) given at (4.4)–(4.5), such that

$$\begin{aligned} \Vert \overline{P}_i-P^*_i\Vert ~\le ~C\,\delta . \end{aligned}$$

Moreover, there exist two surfaces \(\Sigma _1,\Sigma _2\subset I\!\!R^3\), containing the above two points, where the determinant of \(\Lambda (x, \xi _1,\xi _2)\) vanishes. Equation (4.24) defines two curves \(\gamma _i{:}[-s_0,s_0]\rightarrow \Sigma _i\), parameterized by arc length, such that \(\gamma _i(0)=\overline{P}_i\) and

$$\begin{aligned} |\dot{\gamma }_i(0)-\dot{\gamma }^*_i(0)| \le C\delta \end{aligned}$$
(4.40)

for some constant \(C>0\), where \(\gamma ^*_1,\gamma ^*_2\) denote the corresponding curves for the unperturbed system, constructed in Lemma 1.

We observe that

  • Since the vector field \(\mathbf{v}\wedge \mathbf{w}\) vanishes on the curve \(\gamma _i\) and \(\overline{P}_i\in \gamma _i\), the curve \(\gamma _i\) describes the unique local center manifold for \(\overline{P}_i\).

  • At every point \(Q\in \Sigma _i{\setminus }\gamma _i\), the vector field \(\mathbf{v}\wedge \mathbf{w}\) is vertical (i.e., its first component is zero, while the last two components do not both vanish). Hence, a trajectory through Q cannot be the graph of a \({\mathcal C}^1\) solution of (4.3).

To construct a global, continuously differentiable solution to (4.3), we need to concatenate orbits of (4.21) which cross the surfaces \(\Sigma _i\) through some point along the curves \(\gamma _i\), \(i=1,2\). This can be achieved by a trajectory which lies in the intersection of two invariant manifolds \({\mathcal M}_1,{\mathcal M}_2\), as shown in Fig. 3.

4.3 Constructing a Solution as a Concatenation of Orbits

Let us first consider the unperturbed, linear-quadratic case. As before, for \(i=1,2\), we denote by \({\mathcal J}^{i*}\) the Jacobian matrix \({\mathcal J}\) at the point \(P_i^*= \bigl (\bar{x}_i, \xi _1^*(\bar{x}_i), \xi _2^*(\bar{x}_i) \bigr )\). Two of the eigenvalues of \({\mathcal J}^{i*}\) are known, namely

$$\begin{aligned} \lambda ^{*}_{i,1} = 0, \quad \lambda ^{*}_{1,2} = p'(\bar{x}_1) = C_{\Lambda }(\bar{x}_1-\bar{x}_2)\quad \mathrm {and}\quad \lambda ^{*}_{2,2} = p'(\bar{x}_2) = C_{\Lambda }(\bar{x}_2-\bar{x}_1). \end{aligned}$$
(4.41)

On the other hand, at an arbitrary point \((x,\xi _1,\xi _2)\), by (3.4) the Jacobian matrix in (4.26) has the form

$$\begin{aligned} {\mathcal J}=\begin{pmatrix}2a_0\Lambda _{11}&{}&{}*&{}&{}*\\ \\ *&{}&{}-b_1^2\psi _1 +b_2^2\psi _2 +(\gamma -a_0)\Lambda _{11} &{}&{} *\\ \\ *&{}&{} * &{}&{} -b_2^2\psi _2 +b_1^2\psi _1 +(\gamma -a_0)\Lambda _{11}\end{pmatrix}. \end{aligned}$$

At the point \(P_i^*\), the trace of the matrix \({\mathcal J}^{i*} = {\mathcal J}(P_i^*)\) is

$$\begin{aligned} \mathrm {Trace} ({\mathcal J}^{i*}) = 2\gamma \Lambda ^{i*}_{11}. \end{aligned}$$

The third eigenvalue of \({\mathcal J}^{i*}\) is thus computed as

$$\begin{aligned} \lambda ^{*}_{i,3} = \mathrm {Trace} ({\mathcal J}^{i*})- \lambda ^{*}_{i,2} = 2\gamma \Lambda ^{i*}_{11}-p'(\bar{x}_i). \end{aligned}$$
(4.42)

To proceed, we now impose generic conditions implying that, at \(P_i^*\), the three eigenvalues of the Jacobian matrix \({\mathcal J}\) are distinct. By (4.41) and (4.42), this will be the case if

$$\begin{aligned} \left\{ \begin{array}{ll} 2\gamma \Lambda ^{i*}_{11}&{}\ne ~p'(\bar{x}_i),\\ \gamma \Lambda ^{i*}_{11}&{}\ne ~p'(\bar{x}_i),\end{array} \right. \qquad \qquad i=1,2. \end{aligned}$$
(4.43)

Let \(x=\hat{x}_i\) be the solution to the equation

$$\begin{aligned} 2\gamma \Lambda ^*_{11}(x) = p'(\bar{x}_i) = \lambda ^{*}_{i,2}\quad \mathrm {for}\ i=1,2. \end{aligned}$$
(4.44)

The first condition in (4.43) is equivalent to

$$\begin{aligned} \hat{x}_i \ne \bar{x}_i\quad \mathrm {for}\ i=1,2. \end{aligned}$$

We observe that, since \(\bar{x}_i\) and \(\hat{x}_i\) depend on the coefficients of the dynamics and the cost functions (3.2)–(3.3), the inequality (4.43) will be satisfied for generic values of these coefficients.

In order to construct a solution \(\xi (\cdot )\) as a concatenation of orbits of (4.21), one needs to study the flow in a neighborhood of the equilibrium points \(P_1^*\) and \(P_2^*\). We recall that, since all points on the curve \(\gamma _i\), \(i=1,2\), are steady states, the \(3\times 3\) Jacobian matrix \({\mathcal J}\) in (4.26) always has a zero eigenvalue at \(P_i^*\). Different cases arise, depending on the sign of the two remaining eigenvalues \(\lambda _{i,2}^{*}, \lambda _{i,3}^{*}\). In turn, these signs depend on the relative position of \(\hat{x}_i\) with respect to \(\bar{x}_i\).

Case 1 If \(C_{\Lambda }>0\), then (4.41) implies

$$\begin{aligned} \lambda ^{*}_{1,2}< 0 < \lambda ^{*}_{2,2}. \end{aligned}$$

Recalling that

$$\begin{aligned} \Lambda ^*_{11}(x) = (a_0-b_1^2 \alpha _1 - b_2^2\alpha _2)\,x - ( b_1^2\beta _1+b_2^2\beta _2), \end{aligned}$$
(4.45)

with \((a_0-b_1^2 \alpha _1 - b_2^2\alpha _2)<0\), from (4.44) we obtain \(\hat{x}_1>\hat{x}_2\). Moreover, for \(i=1,2\),

$$\begin{aligned} \left\{ \begin{array}{ll} \lambda _{i,3}^{*}> 0&{}\quad \mathrm {if}\quad \bar{x}_i<\hat{x}_i, \\ \\ \lambda _{i,3}^{*}< 0&{}\quad \mathrm {if}\quad \bar{x}_i>\hat{x}_i.\end{array} \right. \end{aligned}$$
(4.46)

Three subcases can occur:

  • If \(\bar{x}_1<\hat{x}_1\) and \(\bar{x}_2>\hat{x}_2\), then

    $$\begin{aligned} \lambda ^{*}_{1,2}<0< \lambda ^{*}_{1,3}\quad \mathrm {and}\quad \lambda ^{*}_{2,3}<0< \lambda ^{*}_{2,2}; \end{aligned}$$

    hence both \(P_1^*\) and \(P_2^*\) are saddle points.

  • If \(\bar{x}_1>\hat{x}_1\), then \(\bar{x}_2>\hat{x}_2\) and

    $$\begin{aligned} \lambda ^{*}_{1,2}, \lambda ^{*}_{1,3}< 0\quad \mathrm {and}\quad \lambda ^{*}_{2,3}<0< \lambda ^{*}_{2,2}. \end{aligned}$$

    In this case \(P^*_1\) is a stable point, while \(P^*_2\) is a saddle point.

  • If \(\bar{x}_2<\hat{x}_2\) then \(\bar{x}_1<\hat{x}_1\) and

    $$\begin{aligned} \lambda ^{*}_{1,2}<0< \lambda ^{*}_{1,3}\quad \mathrm {and}\quad \lambda ^{*}_{2,2}, \lambda ^{*}_{2,3} > 0. \end{aligned}$$

    In this case \(P^*_1\) is a saddle point and \(P^*_2\) is an unstable source.

Case 2 If \(C_{\Lambda }<0\) then (4.41) implies

$$\begin{aligned} \lambda ^{*}_{2,2}< 0< \lambda ^{*}_{1,2}\quad \mathrm {and}\quad \hat{x}_1 < \hat{x}_2. \end{aligned}$$

Recalling (4.45), with \((a_0-b_1^2 \alpha _1 - b_2^2\alpha _2)<0\), from (4.44) we obtain \(\hat{x}_1<\hat{x}_2\), together with (4.46).

Four subcases can now occur:

  • If \([\bar{x}_1,\bar{x}_2]\subset [\hat{x}_1,\hat{x}_2]\), then

    $$\begin{aligned} \lambda ^{*}_{1,3}<0< \lambda ^{*}_{1,2}\quad \mathrm {and}\quad \lambda ^{*}_{2,2}<0< \lambda ^{*}_{2,3}. \end{aligned}$$

    In this case, both \(P_1^*\) and \(P_2^*\) are saddle points.

  • If \([\hat{x}_1,\hat{x}_2]\subset [\bar{x}_1,\bar{x}_2]\), then

    $$\begin{aligned} \lambda ^{*}_{1,3},\lambda ^{*}_{1,2} > 0\quad \mathrm {and}\quad \lambda ^{*}_{2,3},\lambda ^{*}_{2,2} < 0. \end{aligned}$$

    In this case, \(P^*_1\) is an unstable source, while \(P^*_2\) is stable.

  • If \(\hat{x}_1<\bar{x}_1\le \hat{x}_2<\bar{x}_2\), then

    $$\begin{aligned} \lambda ^{*}_{1,3}<0< \lambda ^{*}_{1,2}\quad \mathrm {and}\quad \lambda ^{*}_{2,3}, \lambda ^{*}_{2,2} < 0. \end{aligned}$$

    Hence \(P^*_1\) is a saddle point, while \(P^*_2\) is stable.

  • If \(\bar{x}_1< \hat{x}_1\le \bar{x}_2<\hat{x}_2\), then

    $$\begin{aligned} \lambda ^{*}_{1,3}, \lambda ^{*}_{1,2} > 0\quad \mathrm {and}\quad \lambda ^{*}_{2,2}<0< \lambda ^{*}_{2,3}. \end{aligned}$$

    Hence \(P^*_1\) is an unstable source, while \(P^*_2\) is a saddle point.

Consider again the affine solution \(\xi ^*=(\xi _1^*,\xi _2^*)\) of the system (4.3) in the linear-quadratic case, constructed in (3.6). Depending on the signs of the eigenvalues \(\lambda _{i,2}^{*}, \lambda _{i,3}^{*}\), by looking at the dynamics generated by (4.21) we see that the following cases can occur, under generic assumptions.

  1. (I)

    Assume that \(\lambda _{1,2}^{*}\cdot \lambda _{1,3}^{*}<0\) and \(\lambda _{2,2}^{*}\cdot \lambda _{2,3}^{*}<0\), so that both \(P^*_1\) and \(P^*_2\) are saddle points. Then in the linear-quadratic case \(\xi ^*\) is the unique globally smooth solution to the system (4.3). If a small nonlinear perturbation is present, so that \(\Lambda , \psi \) are given by (4.4)–(4.5), then (4.3) has a unique globally smooth solution close to \(\xi ^*\) (see Fig. 3).

  2. (II)

    Assume that \(\lambda _{1,2}^{*}<0<\lambda _{1,3}^{*}\) and \(\lambda _{2,2}^{*},\,\lambda _{2,3}^{*}<0\), so that \(P_1^*\) is a saddle and \(P_2^*\) is a sink. Then in the linear-quadratic case system (4.3) has a 1-parameter family of globally smooth solutions close to \(\xi ^*\). The same holds in the presence of a small nonlinear perturbation.

  3. (III)

    Assume that \(\lambda _{1,2}^{*}, \lambda _{1,3}^{*}>0\) and \(\lambda _{2,2}^{*},\lambda _{2,3}^{*}<0\), so that \(P^*_1\) is a source and \(P^*_2\) is a sink. Then in the linear-quadratic case system (4.3) has a 2-parameter family of globally smooth solutions close to \(\xi ^*\). The same holds in the presence of a small nonlinear perturbation.

To conclude this section, we give three examples corresponding to the three cases I, II, and III.

Example 2

(Two saddle points). Consider the linear system (4.14), but with cost functions

$$\begin{aligned} J_1 = \int _0^{\infty }e^{-t}\cdot \Big (25x^2+ 13x+{1\over 2}u^2_1\Big )\, \mathrm{d}t,\quad J_2 = \int _0^{\infty }e^{-t}\cdot \Big (25x^2+13x+{1\over 2}u^2_2\Big )\, \mathrm{d}t. \end{aligned}$$
(4.47)

In this case, (4.3) admits the affine solution

$$\begin{aligned} \xi ^*_1(x) = \xi ^*_2(x) = 5x+1. \end{aligned}$$

This yields the equilibrium point \(x^*=-{2\over 7}\), which is asymptotically stable for the dynamics

$$\begin{aligned} \dot{x} = 3x-\xi _1^*(x)-\xi _2^*(x) = -7x-2. \end{aligned}$$

The determinant of the corresponding matrix \(\Lambda ^*\) is computed as

$$\begin{aligned} \det \Lambda ^*(x) = (-7x-2)^2-(5x+1)^2 = 24\Big (x+{1\over 4}\Big )\Big (x+{1\over 2}\Big ). \end{aligned}$$

It vanishes at the two points \(\bar{x}_1=-{1\over 2}\), \(\bar{x}_2=-{1\over 4}\). We thus have \(x^*\in \,]\bar{x}_1,\bar{x}_2[\,\). Moreover, the equilibrium points in (4.22) are

$$\begin{aligned} P_1^*=\left( -{1\over 2}, -{3\over 2}, -{3\over 2}\right) \quad \mathrm {and}\quad P_2^*=\left( -{1\over 4},-{1\over 4},-{1\over 4}\right) . \end{aligned}$$

On the other hand, the constant in (4.16) is \(C_{\Lambda }=24>0\). Hence, (4.41) implies that \(\lambda _{1,2}^{*}=-6\) and \(\lambda _{2,2}^{*}=6\). By (4.42), the remaining eigenvalues are then computed as \(\lambda _{1,3}^{*}=9\) and \(\lambda _{2,3}^{*}=-{13\over 2}\). Therefore, both \(P_1^*\) and \(P_2^*\) are saddle points.
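The eigenvalues reported above can be reproduced directly from (4.41)–(4.42); a quick sanity check (outside the proof) in exact arithmetic, with \(\gamma =1\), \(\Lambda ^*_{11}(x)=-7x-2\) and \(C_{\Lambda }=24\):

```python
# Eigenvalue check for Example 2 via (4.41)-(4.42).
from fractions import Fraction as F

gamma, CL = 1, 24
xb1, xb2 = F(-1, 2), F(-1, 4)            # roots of det Lam*

def Lam11(x): return -7*x - 2            # closed-loop drift, cf. (4.45)
def dp(x):    return CL*((x - xb1) + (x - xb2))   # p'(x)

lam12, lam22 = dp(xb1), dp(xb2)          # (4.41)
lam13 = 2*gamma*Lam11(xb1) - lam12       # (4.42)
lam23 = 2*gamma*Lam11(xb2) - lam22

# Both equilibria are saddles: nonzero eigenvalues of opposite signs.
assert lam12*lam13 < 0 and lam22*lam23 < 0
```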

Example 3

(One saddle point). Consider the linear system (4.14), but with cost functions

$$\begin{aligned} J_1 = \int _0^{\infty }e^{-4t}\cdot \Big (4x^2+ 7x+{1\over 2}u^2_1\Big )\, \mathrm{d}t,\quad J_2 = \int _0^{\infty }e^{-4t}\cdot \Big (4x^2+7x+{1\over 2}u^2_2\Big )\, \mathrm{d}t. \end{aligned}$$
(4.48)

In this case, (4.3) admits the affine solution

$$\begin{aligned} \xi ^*_1(x) = \xi ^*_2(x) = 2x+1. \end{aligned}$$

This yields the equilibrium point \(x^*=-2\), which is asymptotically stable for the dynamics

$$\begin{aligned} \dot{x} = 3x-\xi _1^*(x)-\xi _2^*(x) = -x-2. \end{aligned}$$

The determinant of the corresponding matrix \(\Lambda ^*\) is computed as

$$\begin{aligned} \det \Lambda ^*(x) = (-x-2)^2-(2x+1)^2 = -3(x-1)(x+1). \end{aligned}$$

It vanishes at the two points \(\bar{x}_1=-1\), \( \bar{x}_2=1\). Notice that now \(x^*\notin [\bar{x}_1,\bar{x}_2]\). At the equilibrium points \(P_1^*=(-1,-1,-1)\), \(P_2^*=(1,3,3)\), the second eigenvalues of the Jacobian matrix \({\mathcal J}\) are found to be \(\lambda _{1,2}^{*}=6\) and \( \lambda _{2,2}^{*}=-6\). Moreover, recalling (4.42), the third eigenvalues are computed as \(\lambda _{1,3}^{*}=-14\) and \(\lambda _{2,3}^{*}=-18\). We thus conclude that \(P_1^*\) is a saddle and \(P_2^*\) is a sink.
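Again the classification can be cross-checked against (4.41)–(4.42), now with \(\gamma =4\), \(\Lambda ^*_{11}(x)=-x-2\) and \(C_{\Lambda }=-3\); a quick sketch:

```python
# Eigenvalue check for Example 3 via (4.41)-(4.42).

gamma, CL = 4, -3
xb1, xb2 = -1, 1                         # roots of det Lam*

def Lam11(x): return -x - 2              # closed-loop drift
def dp(x):    return CL*((x - xb1) + (x - xb2))   # p'(x) = -6x

lam12, lam22 = dp(xb1), dp(xb2)          # (4.41)
lam13 = 2*gamma*Lam11(xb1) - lam12       # (4.42)
lam23 = 2*gamma*Lam11(xb2) - lam22

# P1* is a saddle, P2* a sink.
assert lam12*lam13 < 0
assert lam22 < 0 and lam23 < 0
```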

Example 4

(No saddle points). Consider the linear system (4.14), but with cost functions

$$\begin{aligned} J_1 = \int _0^{\infty }e^{-4t}\cdot \Big (4x^2 -3x+{1\over 2}u^2_1\Big )\, \mathrm{d}t,\qquad J_2 = \int _0^{\infty }e^{-4t}\cdot \Big (4x^2+3x+{1\over 2}u^2_2\Big )\, \mathrm{d}t. \end{aligned}$$
(4.49)

In this case, we find that (4.3) admits the affine solution

$$\begin{aligned} \xi ^*_1(x) = 2x-1,\quad \xi ^*_2(x) = 2x+1. \end{aligned}$$

This yields the equilibrium point \(x^*=0\), which is asymptotically stable for the dynamics

$$\begin{aligned} \dot{x} = 3x-\xi _1^*(x)-\xi _2^*(x) = -x. \end{aligned}$$

The determinant of the corresponding matrix \(\Lambda ^*\) is computed as

$$\begin{aligned} \det \Lambda ^*(x) = (-x)^2-(2x-1)(2x+1) = 1-3x^2. \end{aligned}$$

It vanishes at the two points \( \bar{x}_1=-1 /\sqrt{3}\),  \(\bar{x}_2=1 /\sqrt{3}\). Notice that \(x^*\in [\bar{x}_1,\bar{x}_2]\). Moreover,

$$\begin{aligned} P_1^* = \Big (-{1\over \sqrt{3}}, -{2\over \sqrt{3}}-1, -{2\over \sqrt{3}}+1\Big ),\quad P_2^* = \Big ({1\over \sqrt{3}}, {2\over \sqrt{3}}-1, {2\over \sqrt{3}}+1\Big ). \end{aligned}$$

The eigenvalues of \({\mathcal J}^*\) at \(P_1^*\) and at \(P_2^*\) are found to be

$$\begin{aligned} \lambda _{1,2}^{*} = 2\sqrt{3},\quad \lambda _{1,3}^{*} = {2\sqrt{3}\over 3},\quad \lambda _{2,2}^{*} = -2\sqrt{3},\quad \lambda _{2,3}^{*} = -{2\sqrt{3}\over 3}. \end{aligned}$$

Therefore, \(P_1^*\) is a source and \(P_2^*\) is a sink.
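As in the previous example, the determinant computation can be checked numerically (a sketch, not the paper's code; the common diagonal entry of \(\Lambda ^*(x)\) is \(3x-\xi _1^*-\xi _2^*=-x\) and the off-diagonal product is \(\xi _1^*\xi _2^*\) since \(b_1=b_2=1\)):

```python
import numpy as np

a0 = 3.0
xi1 = lambda x: 2.0 * x - 1.0   # xi_1* from Example 4
xi2 = lambda x: 2.0 * x + 1.0   # xi_2* from Example 4

def detLam(x):
    f = a0 * x - xi1(x) - xi2(x)     # common diagonal entry of Lambda*(x)
    return f * f - xi1(x) * xi2(x)   # minus the off-diagonal product

xbar = 1.0 / np.sqrt(3.0)
# det Lambda* = 1 - 3x^2 vanishes at +- 1/sqrt(3)
assert abs(detLam(-xbar)) < 1e-12 and abs(detLam(xbar)) < 1e-12
assert abs(detLam(0.0) - 1.0) < 1e-12
# the equilibrium x* = 0 of xdot = -x lies between the two roots
assert -xbar < 0.0 < xbar
```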

5 The Case with Unique Solutions

In this section we study in greater detail the case where both \(P_1^*\) and \(P_2^*\) are saddle points, for the auxiliary dynamical system (4.21). We consider first the linear-quadratic case. To fix the ideas, let \(\lambda _{1,1}^{*} =\lambda _{2,1}^{*}=0\) and assume that

$$\begin{aligned} \lambda ^{*}_{1,2}< 0< \lambda ^{*}_{1,3}\quad \mathrm {and}\quad \lambda ^{*}_{2,3}< 0 < \lambda ^{*}_{2,2}. \end{aligned}$$
(5.1)

Notice that this will be the case if the constant \(C_{\Lambda }\) in (4.16) is positive and the equilibrium point \(x^*\) is asymptotically stable and lies in the interior of the interval \([\bar{x}_1,\bar{x}_2]\). Indeed, in this case

$$\begin{aligned} \lambda ^{*}_{1,2} = p'(\bar{x}_1) = C_{\Lambda }(\bar{x}_1-\bar{x}_2) < 0\quad \mathrm {and}\quad \lambda ^{*}_{2,2} = p'(\bar{x}_2) = C_{\Lambda }(\bar{x}_2-\bar{x}_1) > 0. \end{aligned}$$

Moreover, the matrix \(\Lambda ^*\) in (4.7) satisfies

$$\begin{aligned} \Lambda ^{*}_{11}(\bar{x}_1) > 0, \quad \Lambda ^{*}_{11}(\bar{x}_2) < 0. \end{aligned}$$

Therefore

$$\begin{aligned} \lambda ^{*}_{1,3} = 2\gamma \Lambda ^{*}_{11}(\bar{x}_1)-\lambda ^{*}_{1,2} > 0,\quad \lambda ^{*}_{2,3} = 2\gamma \Lambda ^{*}_{11}(\bar{x}_2)-\lambda ^{*}_{2,2} < 0. \end{aligned}$$

By continuity, we can then choose \(\bar{s} \in \,]0, s_0]\) sufficiently small such that for every \(s\in [-\bar{s} ,\bar{s} ]\) the Jacobian matrix \({\mathcal J}\) at the equilibrium point \(\gamma _i(s)\) defined at (4.25) has three real distinct eigenvalues \(\lambda _{i,j}(s)\), \(j=1,2,3\), which satisfy \(\lambda _{i,j}(0)= \lambda ^{*}_{i,j}\), together with

$$\begin{aligned} \lambda _{i,1}(s) = 0,\quad \lambda _{1,2}(s)< 0< \lambda _{1,3}(s)\quad \mathrm {and}\quad \lambda _{2,3}(s)< 0 < \lambda _{2,2}(s). \end{aligned}$$
(5.2)

Let \({\mathcal M}_1\) be the center-stable manifold for the equilibrium point \(P_1^*\). In other words, \({\mathcal M}_1\) is an invariant 2-dimensional manifold, whose tangent space at \(P_1^*\) is spanned by the eigenvectors

$$\begin{aligned} \mathbf{v}_1 = {\partial \gamma _1(s)\over \partial s}\bigg |_{s=0}, \quad \mathbf{v}_2 = P^*_2-P^*_1, \end{aligned}$$

corresponding to the eigenvalues \(\lambda _{1,1}^{*}=0\) and \(\lambda _{1,2}^{*}<0\), respectively.

We observe that, for general systems, a center manifold is not uniquely defined [3, 10, 18]. However, in the present setting it is clear that the local center manifold through \(P_1^*\) is unique, because it must coincide with the curve of steady states \(\gamma _1\). Indeed, the 2-dimensional manifold \({\mathcal M}_1\) can be uniquely determined as the union of the 1-dimensional stable manifolds of all equilibrium points \(\gamma _1(s)\).

Similarly, let \({\mathcal M}_2\) be the center-unstable manifold at \(P_2^*\). Notice that \({\mathcal M}_2\) is uniquely determined as the union of the 1-dimensional unstable manifolds of all equilibrium points \(\gamma _2(s)\). Its tangent space at \(P_2^*\) is spanned by the two eigenvectors

$$\begin{aligned} \mathbf{v}_1 = {\partial \gamma _2(s)\over \partial s}\bigg |_{s=0}, \quad \mathbf{v}_2 = P^*_2-P^*_1. \end{aligned}$$

The segment joining \(P_2^*\) with \(P_1^*\) (with endpoints removed) is a heteroclinic orbit of system (4.21), contained in the intersection \({\mathcal M}_1\cap {\mathcal M}_2\). To proceed, we now make the key assumption that this intersection is transversal:

  1. (A2)

    At any point P along the segment joining \(P_1^*\) with \(P_2^*\), the manifolds \({\mathcal M}_1\) and \({\mathcal M}_2\) intersect transversally. Namely, the union of the tangent spaces to \({\mathcal M}_1\) and \({\mathcal M}_2\) spans the whole space \(I\!\!R^3\).

Clearly, if the intersection is transversal at one such point P, then it is transversal at every other point.

For the reader’s convenience, we collect all the assumptions that will be used in our main theorem.

  1. (i)

    The assumption (A1) holds, providing the existence of the affine solution \((\xi ^*_1,\xi _2^*)\) in (3.6).

  2. (ii)

    The determinant of the matrix \(\Lambda (x,\xi _1,\xi _2)\) at (3.4) vanishes at the two points \(P_i^*=(\bar{x}_i, \xi _1^*(\bar{x}_i), \xi _2^*(\bar{x}_i))\), \(i=1,2\).

  3. (iii)

    At least one of the equalities in (4.23) does NOT hold. By Lemma 1, the curves of steady states \(\gamma _1,\gamma _2\) are thus well defined.

  4. (iv)

    At the points \(P_1^*\) and \(P_2^*\), the eigenvalues of the Jacobian matrix \({\mathcal J}\) in (4.26) satisfy the sign conditions (5.1).

  5. (v)

    As in (A2), the manifolds \({\mathcal M}_1,{\mathcal M}_2\) intersect transversally.

We are now ready to state our main uniqueness and stability theorem, for feedback solutions to the noncooperative differential game.

Theorem 2

Let the above assumptions (i)–(v) hold, and let \(\Omega \subset I\!\!R\) be any bounded interval containing \(\bar{x}_1, \bar{x}_2\) in its interior. Then, for any \(\varepsilon >0\) there exists \(\delta >0\) such that the following holds. If the perturbations \(f_0, h_1,h_2,\eta _1,\eta _2\) are small enough, i.e., if they satisfy (4.39), then system (4.3)–(4.5) has a unique \({\mathcal C}^1\) solution \((\xi _1,\xi _2)\) defined on the whole interval \(\Omega \), such that

$$\begin{aligned} |\xi _1(x)-\xi ^*_1(x)|+|\xi _2(x)-\xi ^*_2(x)| \le \varepsilon \quad \hbox {for all}~x\in \Omega . \end{aligned}$$
(5.3)

Remark 5

In the above theorem, the interval \(\Omega \) can be taken large enough so that it contains the stable equilibrium point \(x^*\) in (3.22). Choosing \(\delta >0\) sufficiently small, the set \(\Omega \) will then be positively invariant for the corresponding feedback dynamics (4.13).

Proof of Theorem 2

Consider the vectors \(\mathbf{v},\mathbf{w}\) introduced at (4.20). To prove the result, it suffices to check that all steps in the construction of the heteroclinic orbit connecting \(P_1^*\) with \(P_2^*\) remain valid also in the presence of a small \({\mathcal C}^1\) perturbation of the vector fields \(\mathbf{v},\mathbf{w}\).

  1. 1.

    Thanks to the assumption (iii), by the implicit function theorem the two curves \(\gamma _1,\gamma _2\) where \(\mathbf{v}\wedge \mathbf{w}=0\) (i.e., where \(\mathbf{v}\) and \(\mathbf{w}\) are parallel) are still well defined also in the presence of a small \({\mathcal C}^1\) perturbation. By continuity, at every point along these curves, the eigenvalues of the Jacobian matrix \({\mathcal J}\) in (4.26) still satisfy the strict inequalities (5.2).

  2. 2.

    Given the dynamical system

    $$\begin{aligned} \dot{x} = \mathbf{v}(x)\wedge \mathbf{w}(x), \end{aligned}$$
    (5.4)

    let \({\mathcal M}_1\) be the 2-dimensional manifold obtained as the union of all the 1-dimensional stable manifolds of the points along \(\gamma _1\). Similarly, let \({\mathcal M}_2\) be the 2-dimensional manifold obtained as the union of all the 1-dimensional unstable manifolds of the points along \(\gamma _2\). By the regularity theory of invariant manifolds (see Chapter 4 in [10]), the tangent space to these manifolds varies continuously with the vector field \(\mathbf{v}\wedge \mathbf{w}\), w.r.t. the \({\mathcal C}^1\) norm. In particular, if a small \({\mathcal C}^1\) perturbation is added to \(\mathbf{v}\) and \(\mathbf{w}\), then the intersection \({\mathcal M}_1\cap {\mathcal M}_2\) is still transversal. This 1-dimensional intersection provides the unique heteroclinic orbit connecting a point \(P_1\in \gamma _1\) with a point \(P_2\in \gamma _2\).

  3. 3.

    The graph of the desired solution \(x\mapsto (\xi _1,\xi _2)(x)\) of (4.3)–(4.5) is now obtained by concatenating the 1-dimensional stable manifold for \(P_1\) with the 1-dimensional unstable manifold for \(P_2\). \(\square \)

We conclude this section by providing an example where the crucial transversality condition (A2) can be directly checked.

Example 5

(continued). Consider again the linear-quadratic system with dynamics (4.14) and cost functionals (4.47). In this case, we have

$$\begin{aligned} S_1=S_2=50,\quad R_1=R_2=13,\quad b_1=b_2=1,\quad a_0=3, \end{aligned}$$
$$\begin{aligned} \xi _1^*(x) = \xi _2^*(x) = 5x+1,\quad \psi _1^*(x) = \psi _2^*(x) = -60x-15. \end{aligned}$$
(5.5)

and

$$\begin{aligned} \Lambda ^*_{11}(x) = -7x-2, \quad \Lambda ^*_{12}(x) = \Lambda ^*_{21}(x) = -5x-1. \end{aligned}$$

At the point \(P(x)=(x, \xi _1^*(x), \xi _2^*(x))\), the Jacobian matrix takes the form

$$\begin{aligned} {\mathcal J}^*(x) = \begin{pmatrix}-42x-12&{}&{}9x+3&{}&{}9x+3\\ \\ -80x+5&{}&{} 14x+4 &{}&{} 50x+13\\ \\ -80x+5&{}&{} 50x+13 &{}&{} 14x+4\end{pmatrix}. \end{aligned}$$

The eigenvalues are

$$\begin{aligned} \lambda _1(x) = -9(4x+1),\quad \lambda _2(x) = 48x+18,\quad \lambda _3(x) = -13(2x+1) \end{aligned}$$

with corresponding eigenvectors

$$\begin{aligned} \mathbf{v}_1(x) = \begin{pmatrix} 0\\ 1\\ -1 \end{pmatrix},\quad \mathbf{v}_2(x) = \begin{pmatrix} 1\\ 5\\ 5 \end{pmatrix},\quad \mathbf{v}_3(x) = \begin{pmatrix} {18x+6\over 16x-1}\\ 1\\ 1 \end{pmatrix}. \end{aligned}$$

Notice that \(\mathbf{v}_1\) and \(\mathbf{v}_2\) are independent of x. As in Fig. 3, let \({\mathcal M}_1\) be the center-stable manifold for the point \(P_1^*\), and let \({\mathcal M}_2\) be the center-unstable manifold for the point \(P_2^*\). Then the line \(\{P(x)=(x, \xi ^*_1(x), \xi ^*_2(x)); x\in I\!\!R\}\) lies in the intersection \({\mathcal M}_1\cap {\mathcal M}_2\). To check that this intersection is transversal we observe that, at every point \(P(x)=(x,\xi _1^*(x), \xi _2^*(x))\), the tangent space to the manifold \({\mathcal M}_1\) is spanned by the vectors \(\mathbf{v}_1\) and \(\mathbf{v}_2\), independent of x. On the other hand, at \(P(\bar{x}_2)\), the tangent space to \({\mathcal M}_2\) is spanned by \(\mathbf{v}_2\) and \(\mathbf{v}_3(\bar{x}_2)\). Hence \({\mathcal M}_1\) and \({\mathcal M}_2\) have transversal intersection.
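Since \({\mathcal J}^*(x)\) is displayed explicitly, the eigenvalue–eigenvector pairs and the transversality argument can be verified numerically. In the sketch below (not the paper's code), the value \(\bar x_2=-1/4\) is obtained from the entries in (5.5): \(\det \Lambda ^*(x)=(7x+2)^2-(5x+1)^2=3(4x+1)(2x+1)\), with roots \(-1/2\) and \(-1/4\).

```python
import numpy as np

def J(x):  # the Jacobian matrix J*(x) along the line P(x), as displayed above
    return np.array([[-42*x - 12,  9*x + 3,   9*x + 3],
                     [-80*x + 5,  14*x + 4,  50*x + 13],
                     [-80*x + 5,  50*x + 13, 14*x + 4]], dtype=float)

def eigs(x):
    return (-9*(4*x + 1), 48*x + 18, -13*(2*x + 1))

def vecs(x):
    v3x = (18*x + 6) / (16*x - 1)
    return (np.array([0., 1., -1.]), np.array([1., 5., 5.]), np.array([v3x, 1., 1.]))

# check J(x) v = lambda v for the three pairs, at several sample points
for x in (-0.7, 0.3, 2.0):
    for lam, v in zip(eigs(x), vecs(x)):
        assert np.allclose(J(x) @ v, lam * v)

# transversality at x2bar = -1/4: the vectors v1, v2, v3(x2bar) span R^3
x2bar = -0.25
M = np.column_stack(vecs(x2bar))
assert abs(np.linalg.det(M)) > 1e-8
```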

6 Perturbed Solutions on the Entire Real Line

In the previous sections we studied the existence of solutions to the perturbed system (4.3) on a bounded interval \(\Omega \subset I\!\!R\), possibly containing the two points where det \(\,\Lambda ^*(x)\) vanishes. We now examine whether these perturbed solutions can be extended to the whole real line. The heart of the matter is illustrated in Fig. 1. In the linear-quadratic case, the determinant of the matrix \(\Lambda (x,\xi _1,\xi _2)\) in (3.4) is negative inside the double cone

$$\begin{aligned} \Gamma ^- \doteq \left\{ (x,\xi _1,\xi _2); (a_0x- b_1^2\xi _1 -b_2^2 \xi _2 )^2 -b_1^2b_2^2\xi _1\xi _2< 0\right\} . \end{aligned}$$
(6.1)

To fix the ideas, assume that det \(\Lambda ^*(x)\not = 0\) for all \(x\in [x_0,\,+\infty [\,\). Notice that trajectories of the implicit ODE (4.3) can become singular near the boundary of \(\Gamma ^-\). In this case, they cannot be prolonged any further (Fig. 1, left). To make sure that a small perturbation of the trajectory \(\xi ^*(\cdot )\) is well defined for all \(x\in [x_0, +\infty [\,\) we must check that it never approaches the surface where \(\det \Lambda (x,\xi _1,\xi _2)=0\).

Recalling (3.8), set \(\bar{a}_0\doteq {a_0\over A}\). By (3.6) and (4.7) all entries of the matrix \(\Lambda ^*(x)\) are polynomials of degree one. Recalling (3.20), the matrix of leading coefficients is computed as

$$\begin{aligned} \begin{array}{ll}\displaystyle \Lambda _2^*&{}\displaystyle \doteq ~\lim _{x\rightarrow \infty } {\Lambda ^*(x)\over x} = \begin{pmatrix} a_0- b_1^2\alpha _1 -b_2^2 \alpha _2 &{}&{} -b_2^2 \alpha _1\\ \\ -b_1^2\alpha _2 &{}&{} a_0-b_1^2\alpha _1 -b_2^2 \alpha _2\end{pmatrix}\\ &{}\displaystyle =~A\cdot \begin{pmatrix} \bar{a}_0-X^*_1-X^*_2 &{}&{} -{b_2^2\over b_1^2} X^*_1\\ -{b_1^2\over b_2^2}X^*_2 &{}&{} \bar{a}_0-X^*_1-X^*_2\end{pmatrix}.\end{array} \end{aligned}$$
(6.2)

Throughout the following, we assume that \(\Lambda _2^*\) is invertible, i.e.,

$$\begin{aligned} p^*_2 \doteq \hbox {det}\, \Lambda _2^* = A^2\cdot \Big [\Big (\bar{a}_0-X^*_1-X^*_2\Big )^2-X_1^*X_2^*\Big ] \ne 0. \end{aligned}$$
(6.3)

The inverse matrix of \(\Lambda ^*_2\) is

$$\begin{aligned} (\Lambda _2^*)^{-1} = {1\over A\cdot \big ( [\bar{a}_0-X^*_1-X^*_2]^2-X^*_1X^*_2\big )}\cdot \begin{pmatrix} \bar{a}_0-X^*_1-X^*_2 &{}&{} {b_2^2\over b_1^2} X^*_1\\ \\ {b_1^2\over b_2^2}X^*_2 &{}&{} \bar{a}_0-X^*_1-X^*_2\end{pmatrix}. \end{aligned}$$

Let \(\lambda _1, \lambda _2\) be the eigenvalues of \((\Lambda ^*_2)^{-1}\) and denote by \(r_1,r_2 \) the corresponding normalized eigenvectors, so that \(|r_1|=|r_2|=1\). We have

$$\begin{aligned} \left\{ \begin{array}{l} \lambda _1+\lambda _2 = \displaystyle {2(\bar{a}_0-X_1^*-X_2^*)\over A\cdot \big ([\bar{a}_0-X^*_1-X^*_2]^2-X^*_1X^*_2\big )},\\ \lambda _1\cdot \lambda _2 = \displaystyle {1\over A^2\cdot \big ([\bar{a}_0-X^*_1-X^*_2]^2-X^*_1X^*_2\big )}. \end{array} \right. \end{aligned}$$
(6.4)
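The identities (6.4) can be checked numerically. In the sketch below, the values of \(A,\bar a_0,X_1^*,X_2^*,b_1,b_2\) are hypothetical sample numbers, and \(\Lambda _2^*\) is built in the second form displayed in (6.2):

```python
import numpy as np

# hypothetical sample data, only for a numerical check of (6.4)
A, a0bar, X1, X2 = 2.0, 0.6, 0.9, 1.1
b1, b2 = 1.0, 1.5
d = a0bar - X1 - X2                      # the common diagonal entry

Lam2 = A * np.array([[d, -(b2**2 / b1**2) * X1],
                     [-(b1**2 / b2**2) * X2, d]])
Linv = np.linalg.inv(Lam2)

# trace and determinant of (Lambda_2*)^{-1}, matching (6.4):
# lambda_1 + lambda_2 = 2d / (A*(d^2 - X1*X2)),  lambda_1*lambda_2 = 1 / (A^2*(d^2 - X1*X2))
denom = A * (d**2 - X1 * X2)
assert np.isclose(np.trace(Linv), 2 * d / denom)
assert np.isclose(np.linalg.det(Linv), 1.0 / (A * denom))
```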

We introduce the constant

$$\begin{aligned} \lambda _{\max } \doteq \max \big \{Re(\lambda _1),\, Re(\lambda _2)\big \}. \end{aligned}$$
(6.5)

Lemma 3

Under the assumption (A1), if

$$\begin{aligned} p^*_2 = A^2\cdot \Big [\Big (\bar{a}_0-X^*_1-X^*_2\Big )^2-X_1^*X_2^*\Big ] > 0, \end{aligned}$$
(6.6)

then there exists \(x_0>0\) such that the affine solution (3.6) satisfies

$$\begin{aligned} (x,\xi _1^*(x),\xi _2^*(x)) \in \Gamma ^+\doteq \big \{(x,\xi _1,\xi _2);\quad \det \Lambda (x,\xi _1,\xi _2)>0\big \} \end{aligned}$$

for all \(x\in ]-\infty ,-x_0]\cup [x_0,\infty [\,\). Moreover, one has

$$\begin{aligned} \lambda _{\max } < 0. \end{aligned}$$
(6.7)

Proof

For |x| large, the determinant of \(\Lambda ^*(x)\) has the same sign as the determinant of the matrix of leading-order coefficients \(\Lambda _2^*\). Hence the first assertion of the lemma is clear.

To prove (6.7), we consider two cases.

  • If \(a_0<0\), then

    $$\begin{aligned} A< 0,\quad \bar{a}_0>0\quad \mathrm {and}\quad X_1^*+X_2^* < 0. \end{aligned}$$

    Recalling (6.6) and (6.4), we obtain

    $$\begin{aligned} \lambda _1\cdot \lambda _2 > 0\quad \mathrm {and}\quad \lambda _1+\lambda _2 < 0. \end{aligned}$$
    (6.8)
  • If \(0<{\gamma \over 2}<a_0\) and \(K_1+K_2>1/2\), then

    $$\begin{aligned} A> 0,\quad \bar{a}_0>0,\quad X_1^*+X_2^*> 1> \bar{a}_0\quad \mathrm {and}\quad X^*_i > 0. \end{aligned}$$

    By (6.6) and (6.4) one again obtains (6.8).

In both cases, (6.8) implies (6.7). \(\square \)

Lemma 4

Under the assumption (A1), if

$$\begin{aligned} p^*_2 = A^2\cdot \Big [\Big (\bar{a}_0-X^*_1-X^*_2\Big )^2-X_1^*X_2^*\Big ] < 0, \end{aligned}$$
(6.9)

then there exists \(x_0>0\) such that the affine solution (3.6) satisfies

$$\begin{aligned} (x,\xi _1^*(x),\xi _2^*(x)) \in \Gamma ^- \doteq \big \{(x,\xi _1,\xi _2); \det \Lambda (x,\xi _1,\xi _2) < 0\big \} \end{aligned}$$

for all \(x\in \,]-\infty ,-x_0]\cup [x_0,\infty [\). Moreover, the eigenvalues \(\lambda _1\) and \(\lambda _2\) of \((\Lambda _2^*)^{-1}\) are real, with opposite signs. In particular,

$$\begin{aligned} \lambda _{\max } = \max \{\lambda _1,\lambda _2\} > 0. \end{aligned}$$
(6.10)

In the case \(a_0>0\) there exists a constant \(0<\gamma ^*_1<2a_0\), depending on \(S_1,S_2,\alpha _1,\alpha _2\), such that

$$\begin{aligned} \left\{ \begin{array}{ll} \gamma \cdot \lambda _{\max }&{}< 2\quad \hbox {if}\quad 0<\gamma<\gamma ^*_1, \\ \gamma \cdot \lambda _{\max }&{}> 2\quad \hbox {if}\quad \gamma ^*_1<\gamma <2a_0. \end{array}\right. \end{aligned}$$
(6.11)

Proof

As in Lemma 3, the first statement is clear, because for |x| large det\(\Lambda ^*(x)\) and det\(\Lambda _2^*\) have the same sign.

By (6.4) and (6.9) we have

$$\begin{aligned} \lambda _1\cdot \lambda _2 = \displaystyle {1\over A^2\cdot \big ([\bar{a}_0-X^*_1-X^*_2]^2-X^*_1X^*_2\big )} < 0. \end{aligned}$$

Hence the eigenvalues are real, and (6.10) holds.

To prove the last statement, we observe that

$$\begin{aligned} \gamma \cdot \lambda _{\max } = {\gamma \over 2a_0-\gamma }\cdot {1\over \big (\bar{a}_0-X_1^*-X_2^*+\sqrt{X^*_1X_2^*}\big )}. \end{aligned}$$

This yields (6.11), for some \(\gamma _1^*\in \,]0, \, 2a_0[\,\). \(\square \)
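As a numerical illustration of the first part of Lemma 4 (a sketch with hypothetical parameter values, not taken from the paper): when \((\bar a_0-X_1^*-X_2^*)^2<X_1^*X_2^*\), i.e., \(p_2^*<0\) as in (6.9), the eigenvalues of \((\Lambda _2^*)^{-1}\) come out real with opposite signs.

```python
import numpy as np

# hypothetical data with (a0bar - X1 - X2)^2 < X1*X2, i.e. p2* < 0 as in (6.9)
A, a0bar, X1, X2 = 1.0, 1.2, 0.8, 0.7
d = a0bar - X1 - X2                   # = -0.3, and d^2 = 0.09 < 0.56 = X1*X2
assert d**2 < X1 * X2

Lam2 = A * np.array([[d, -X1], [-X2, d]])   # b1 = b2 = 1 for simplicity
lams = np.linalg.eigvals(np.linalg.inv(Lam2))
assert np.all(np.isreal(lams))
# real eigenvalues of opposite signs, so lambda_max > 0 as in (6.10)
assert lams.min() < 0 < lams.max()
```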

The following theorem provides the existence of solutions to the perturbed ODE (4.3) on an unbounded interval \([x_0, +\infty [\,\). An entirely similar result holds on a domain of the form \(]-\infty ,\,-x_0]\). We recall that \(\lambda _{\max }\) is the constant introduced at (6.5).

Theorem 3

Let the assumption (A1) hold, together with (6.3).

If

$$\begin{aligned} \gamma \cdot \lambda _{\max } < 2, \end{aligned}$$
(6.12)

then there exist \(\delta _0, \delta _1>0\) sufficiently small and \(x_0>0\) such that the following holds. For any perturbations such that

$$\begin{aligned} \Vert f_0\Vert _{{\mathcal C}^1} + \Vert \eta _1\Vert _{{\mathcal C}^1} + \Vert \eta _2\Vert _{{\mathcal C}^1} +\Vert h_1\Vert _{{\mathcal C}^0}+ \Vert h_2\Vert _{{\mathcal C}^0} \le \delta _1 \end{aligned}$$
(6.13)

and any initial data

$$\begin{aligned} \xi _i(x_0) = \bar{\xi }_i\quad \hbox {with}\quad |\bar{\xi }_i- \xi _i^*(x_0)|\le \delta _0\cdot x_0,\quad i=1,2, \end{aligned}$$
(6.14)

the implicit ODE in (4.3)–(4.5) admits a unique solution \((\xi _1,\xi _2)\) which is defined for all \(x\in [x_0, \, +\infty [\).

Remark 6

There are two main cases where Theorem 3 applies.

  1. (i)

    If (6.6) holds, then Lemma 3 implies that \(\gamma \lambda _{\max }<0<2\). In this case, for |x| sufficiently large, we have \(\hbox {det}\,\Lambda ^*(x)>0\), and the solution \(\xi ^*\) lies outside the cone \(\Gamma ^-\). This is illustrated in Fig. 1, left and center.

  2. (ii)

    If (6.9) holds, then Lemma 4 yields \(\gamma \lambda _{\max }<2\), provided that \(\gamma <\gamma ^*_1\). In this case, for |x| sufficiently large, we have \(\hbox {det}\,\Lambda ^*(x)<0\), and the solution \(\xi ^*\) lies inside the cone \(\Gamma ^-\). This is illustrated in Fig. 1, right.

Proof of Theorem 3

By a rescaling of coordinates, without loss of generality we can assume that \(x_0=1\). The proof will be given in several steps.

1. We first consider the unperturbed case, where \(f_0= \eta _1=\eta _2=h_1=h_2=0\).

From (4.3) it follows that

$$\begin{aligned} \Lambda \bigl (x,\xi _1,\xi _2\bigr ) \begin{pmatrix}\xi _1'\\ \\ \xi _2' \end{pmatrix}-\Lambda ^*(x) \begin{pmatrix}(\xi ^*_1)'\\ \\ (\xi ^*_2)' \end{pmatrix} = \begin{pmatrix}(\gamma -a_0)(\xi _1-\xi ^*_1)\\ \\ (\gamma -a_0)(\xi _2-\xi ^*_2) \end{pmatrix}. \end{aligned}$$

Notice that \(\xi '\) is determined by the implicit ODE (4.3), while the derivative \((\xi ^*)' = \begin{pmatrix}\alpha _1\\ \alpha _2\end{pmatrix}\) is constant, according to (3.6). Introducing the variable \(\zeta =\xi -\xi ^*\), by a direct computation we obtain

$$\begin{aligned} \Lambda \bigl (x,\xi _1,\xi _2\bigr ) \begin{pmatrix}\zeta _1'\\ \\ \zeta _2'\end{pmatrix}-\begin{pmatrix} b_1^2\zeta _1+b_2^2 \zeta _2 &{}&{} b_2^2 \zeta _1\\ \\ b_1^2\zeta _2 &{}&{} b_1^2\zeta _1+ b_2^2 \zeta _2\end{pmatrix}\begin{pmatrix}\alpha _1\\ \\ \alpha _2\end{pmatrix} = \begin{pmatrix}(\gamma -a_0)\zeta _1\\ \\ (\gamma -a_0)\zeta _2 \end{pmatrix}. \end{aligned}$$

This can be written as

$$\begin{aligned} \Lambda \bigl (x,\xi _1,\xi _2\bigr ) \begin{pmatrix}\zeta _1'\\ \\ \zeta _2'\end{pmatrix} = (\gamma I -\Lambda ^*_2) \begin{pmatrix}\zeta _1\\ \\ \zeta _2 \end{pmatrix}, \end{aligned}$$

where \(\Lambda ^*_2\) is the matrix of leading-order terms (6.2) and I is the \(2\times 2\) identity matrix. We thus have

$$\begin{aligned} \begin{pmatrix} \zeta _1'\\ \\ \zeta _2'\end{pmatrix} = \big [\Lambda ^{-1}\bigl (x,\xi _1,\xi _2\bigr )\Lambda ^*(x)\big ] \big [(\Lambda ^*(x))^{-1}(\gamma I -\Lambda ^*_2)\big ] \begin{pmatrix}\zeta _1\\ \\ \zeta _2 \end{pmatrix}. \end{aligned}$$
(6.15)

To bring out the leading-order terms in the right-hand side of (6.15), we estimate

$$\begin{aligned} (\Lambda ^*(x))^{-1}(\gamma \,I -\Lambda ^*_2) = {1\over x}\cdot \big [\gamma (\Lambda _2^*)^{-1}-I \big ]+\mathcal O(1)\cdot {1\over x^2} \end{aligned}$$
(6.16)

and

$$\begin{aligned} \Lambda ^{-1}\bigl (x,\xi _1,\xi _2\bigr )\Lambda ^*(x)= & {} \left[ I -(\Lambda ^*(x))^{-1}\begin{pmatrix} b_1^2\zeta _1+b_2^2 \zeta _2 &{}&{} b_2^2 \zeta _1\\ \\ b_1^2\zeta _2 &{}&{} b_1^2\zeta _1+ b_2^2 \zeta _2\end{pmatrix}\right] ^{-1}, \end{aligned}$$

where the Landau symbol \(\mathcal O(1)\) denotes a uniformly bounded matrix-valued function.

Observing that \(|(\Lambda ^*(x))^{-1}|\le C/x\), for some constant C and all x sufficiently large, we can thus write (6.15) as

$$\begin{aligned} \begin{pmatrix}\zeta _1'\\ \\ \zeta _2' \end{pmatrix} = {1\over x}\cdot \big [\gamma (\Lambda _2^*)^{-1}-I \big ] \begin{pmatrix}\zeta _1\\ \\ \zeta _2 \end{pmatrix} +\Psi (x,\zeta _1,\zeta _2)\begin{pmatrix}\zeta _1\\ \\ \zeta _2 \end{pmatrix}. \end{aligned}$$
(6.17)

Here the remainder term satisfies the bound

$$\begin{aligned} |\Psi (x,\zeta _1,\zeta _2)| \le {C_2\over x^2}\cdot \bigl (1+ |\zeta _1|+|\zeta _2|\bigr ), \end{aligned}$$
(6.18)

as long as

$$\begin{aligned} {|\zeta _1|+|\zeta _2|\over |x|} \le \delta _2, \end{aligned}$$

for some large constant \(C_2>0\) and for \(\delta _2>0\) sufficiently small, depending on all coefficients \(a_i, b_i\).

2. Given any solution \(\xi (\cdot )\) to the implicit ODE (2.7), we introduce the auxiliary variables

$$\begin{aligned} t \doteq -{1\over x}, \quad z(t) = -t\, \zeta \left( {1\over -t}\right) = -t\left[ \xi \left( {1\over -t}\right) - \xi ^*\left( {1\over -t}\right) \right] . \end{aligned}$$
(6.19)

Here \(x\in [1,\,+\infty [\,\), \(t\in [-1,0[\,\), while \(\zeta = (\zeta _1,\zeta _2)(x)\) is a solution to (6.15). Denoting the derivative w.r.t. t by an upper dot, we obtain

$$\begin{aligned} \dot{z} = {1\over t} z(t) -{1\over t} \zeta '\left( {1\over -t}\right) . \end{aligned}$$
(6.20)
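As a quick sanity check of the change of variables (6.19)–(6.20), one can test a scalar model in which the matrix \(\gamma (\Lambda _2^*)^{-1}\) is replaced by a sample number b (purely illustrative): if \(\zeta '(x)=(b-1)\zeta (x)/x\), then \(z(t)=|t|\,\zeta (1/|t|)\) satisfies \(\dot z=(b-2)z/|t|\), which is the pattern of the leading-order equation derived next.

```python
import numpy as np

b = 0.4                                    # stands in for gamma*(Lambda_2*)^{-1}
zeta = lambda x: x**(b - 1.0)              # solves zeta' = (b-1)*zeta/x for x >= 1
z = lambda t: abs(t) * zeta(1.0 / abs(t))  # change of variables (6.19), t in [-1,0[

for t in (-0.8, -0.3, -0.05):
    h = 1e-6
    zdot = (z(t + h) - z(t - h)) / (2 * h)           # numerical derivative in t
    assert abs(zdot - (b - 2.0) * z(t) / abs(t)) < 1e-4
```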

To bring out the leading-order terms in the above equation, we use (6.17) and obtain

$$\begin{aligned} \dot{z}(t) = \left( {\gamma (\Lambda ^*_2)^{-1}-2I\over |t|} +K(t,z) \right) z(t)\quad t\in [-1,0[, \end{aligned}$$
(6.21)

where

$$\begin{aligned} K(t,z) = {1\over t^2}\cdot \Psi \left( {1\over |t|},{z_1\over |t|},{z_2\over |t|}\right) . \end{aligned}$$

By (6.18), the matrix K(tz) remains uniformly bounded, as long as \(|z|= |\zeta |/|x|\) remains sufficiently small. Notice that \(\gamma (\Lambda ^*_2)^{-1}- 2I\) is a matrix with eigenvalues \(\gamma \lambda _i-2\), \(i=1,2\). By the assumption (6.12) the real parts of these eigenvalues satisfy

$$\begin{aligned} \max \bigl \{ Re(\gamma \lambda _1-2),\, Re(\gamma \lambda _2-2)\bigr \} < -\lambda \end{aligned}$$
(6.22)

for some constant \(\lambda >0\). By a classical result on the stability of linear systems (see for example Theorem 2.61 in [10]), there exists an equivalent norm \(\Vert \cdot \Vert \) on \(I\!\!R^2\) such that every solution to the homogeneous equation

$$\begin{aligned} \dot{z} = \bigl [\gamma (\Lambda ^*_2)^{-1}- 2I\bigr ]\, z \end{aligned}$$
(6.23)

satisfies

$$\begin{aligned} {d\over \mathrm{d}t}~\Vert z(t)\Vert < -\lambda \Vert z(t)\Vert . \end{aligned}$$
(6.24)

Let \(C_3\) be an upper bound on the corresponding operator norm of the matrix K, as long as \(\Vert z\Vert \le \delta _3\), for some constant \(\delta _3>0\). Then the solution of (6.21) satisfies

$$\begin{aligned} {d\over \mathrm{d}t} \Vert z(t)\Vert \le \left( -{\lambda \over |t|}+C_3\right) \Vert z(t)\Vert , \quad t\in [-1,0[, \end{aligned}$$
(6.25)

as long as \(\Vert z(t)\Vert \le \delta _3\).

We now choose \(t_0<0\) such that

$$\begin{aligned} C_3 \le {\lambda \over 2|t_0|}. \end{aligned}$$

For \(t\in [t_0,0[\), (6.25) implies

$$\begin{aligned} {d\over \mathrm{d}t} \Vert z(t)\Vert \le -{\lambda \over 2|t|} \Vert z(t)\Vert , \end{aligned}$$
(6.26)

hence

$$\begin{aligned} \Vert z(t)\Vert \le \left| {t\over t_0}\right| ^{\lambda /2}\Vert z(t_0)\Vert . \end{aligned}$$

If now \(\Vert z(-1)\Vert \) is sufficiently small, then \(\Vert z(t_0) \Vert \le \delta _3\). In turn, (6.26) implies that z(t) is well defined for all \(t\in [t_0,0[\) and \(z(t)\rightarrow 0\) as \(t\rightarrow 0-\,\). Returning to the original variables \(x,\xi \), we conclude that if \(\zeta (1) = \xi (1)-\xi ^*(1)\) is sufficiently small, then the solution \(x\mapsto \xi (x)\) of (4.3) is well defined for all \(x\in [1,\,+\infty [\,\). This proves the theorem in the unperturbed case, where \(f_0=h_1=h_2=\eta _1=\eta _2=0\) (Fig. 4).
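The Gronwall-type step above, integrating (6.26) to obtain \(\Vert z(t)\Vert \le |t/t_0|^{\lambda /2}\Vert z(t_0)\Vert \), can be checked on a scalar model. In the sketch below, \(\lambda \) and \(t_0\) are sample values (not from the paper), and \(\dot z = -(\lambda /2|t|)\,z\) is integrated by explicit Euler toward \(t=0-\):

```python
import numpy as np

lam, t0, z0 = 1.0, -0.5, 0.2

# integrate zdot = -(lam/(2|t|)) z from t0 toward 0 with explicit Euler
t, z, h = t0, z0, 1e-5
while t + h < -1e-3:
    z += h * (-(lam / (2.0 * abs(t))) * z)
    t += h

# compare with the closed-form bound |t/t0|^(lam/2) * z(t0)
exact = z0 * (abs(t) / abs(t0)) ** (lam / 2.0)
assert abs(z - exact) < 1e-3 * z0
assert 0.0 < z < z0          # the norm decays along the integration
```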

3. In the remainder of the proof, we check that the same conclusion holds for the perturbed problem, where the dynamics is determined by (4.3)–(4.5). A minor modification of the argument in step 2 yields the following lemma.

Fig. 4

For \(t\in [t_0, 0[\,\), the domain \(\Vert z\Vert \le \delta _3\) is positively invariant for the ODE (6.21)

Lemma 5

Assume that both eigenvalues of the matrix \(\gamma (\Lambda ^*_2)^{-1}-2I\) have strictly negative real part. Given \(C_3\) and \(\delta _3>0\), there exist sufficiently small constants \(\delta _4, \delta _5\in \,]0, \delta _3]\) such that the following holds. Let \(z{:}[-1,0[\,\mapsto I\!\!R^2\) be a solution of the Cauchy problem

$$\begin{aligned} z(-1)=\bar{z},\quad \dot{z}(t) = \left( {\gamma (\Lambda ^*_2)^{-1}-2I\over |t|} +M(t,z)\right) z(t)+\psi (t,z), \end{aligned}$$
(6.27)

with \(|\bar{z}| \le \delta _5\) and

$$\begin{aligned} |M(t,z)| \le C_3\left( 1+{\delta _4\over |t|}\right) , \quad |\psi (t,z)| \le \delta _4, \quad \hbox {for}\,t\in [-1,0[, |z|\le \delta _3. \end{aligned}$$
(6.28)

Then z satisfies

$$\begin{aligned} |z(t)|~\le ~\delta _3 \quad \hbox {for all} \, t\in [-1,0[. \end{aligned}$$
(6.29)

Indeed, let \(\lambda >0\) be as in (6.22). Using the equivalent norm introduced at (6.23)–(6.24), we obtain

$$\begin{aligned} {d\over \mathrm{d}t} \Vert z(t)\Vert \le \left\{ -{\lambda \over |t|} + C\Big (1+{\delta _4\over |t|} \Big )\right\} \Vert z(t)\Vert + C\delta _4, \end{aligned}$$

for some constant C, depending on \(C_3\) and on the equivalent norm \(\Vert \cdot \Vert \). Choosing \(\delta _4\) and then \(t_0<0\) so that

$$\begin{aligned} C\delta _4 < {\lambda \over 3}, \quad {\lambda \over 3 |t_0|} = C, \end{aligned}$$

for all \(t\in [t_0, 0[\,\) we obtain

$$\begin{aligned} {d\over \mathrm{d}t} \Vert z(t)\Vert \le \left( -{2\lambda \over 3|t|} + C \right) \Vert z(t)\Vert + C\delta _4 \le -{\lambda \over 3|t|} \Vert z(t)\Vert + C\delta _4. \end{aligned}$$
(6.30)

By possibly taking a smaller constant \(\delta _4\), from (6.30) we deduce that the ball \(\{z\in I\!\!R^2; \Vert z\Vert \le \delta _3\}\) is positively invariant for the ODE in (6.27), for \(t\in [t_0, 0[\,\). Next, by choosing \(|\bar{z}|\) sufficiently small in the initial condition \(z(-1)=\bar{z}\), we can achieve the bound

$$\begin{aligned} \Vert z(t)\Vert \le \delta _3 \quad \hbox {for all}\,t\in [-1, t_0]. \end{aligned}$$

By positive invariance, this implies \(\Vert z(t)\Vert \le \delta _3\) for all \(t\in [-1, 0[\,\).

We have thus proved (6.29), with the Euclidean norm replaced by the equivalent norm \(\Vert \cdot \Vert \). Since the constant \(\delta _3\) was arbitrary, this completes the proof of Lemma 5.

4. To complete the proof of the theorem it now suffices to check that, if (6.13)–(6.14) hold with \(\delta _0,\delta _1>0\) sufficiently small, then the change of variables (6.19) yields a function z(t) which satisfies the assumptions of Lemma 5. This is achieved by a lengthy but straightforward computation.

Repeating the argument in step 1, one obtains

$$\begin{aligned} \Lambda \bigl (x,\xi _1,\xi _2\bigr )\begin{pmatrix} \zeta _1'\\ \\ \zeta _2'\end{pmatrix} = \Big [(\gamma -f_0')I -\Lambda ^*_2+\Lambda _{p}(x)\Big ] \begin{pmatrix}\zeta _1\\ \\ \zeta _2 \end{pmatrix}+\begin{pmatrix}\Phi _1(x)\\ \\ \Phi _2(x)\end{pmatrix}, \end{aligned}$$

where

$$\begin{aligned} \Lambda _p(x) = \begin{pmatrix} \alpha _1(2b_1+h_1)h_1+\alpha _2(2b_2+h_2)h_2 &{}&{} \alpha _1(2b_2+h_2)h_2\\ \\ \alpha _2(2b_1+h_1)h_1 &{}&{} \alpha _1(2b_1+h_1)h_1+\alpha _2(2b_2+h_2)h_2\end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} \begin{pmatrix}\Phi _1(x)\\ \\ \Phi _2(x) \end{pmatrix} = \begin{pmatrix} (-f_0+f_0')\alpha _1-\eta '_1\\ \\ (-f_0+f_0')\alpha _2-\eta '_2 \end{pmatrix} +\Lambda _p(x)\cdot \begin{pmatrix} \alpha _1x+\beta _1\\ \\ \alpha _2x+\beta _2 \end{pmatrix}. \end{aligned}$$

As long as the matrix \(\Lambda \bigl (x,\xi _1,\xi _2\bigr )\) is invertible, we can thus write

$$\begin{aligned} \begin{pmatrix}\zeta _1'\\ \\ \zeta _2' \end{pmatrix} = \Lambda \bigl (x,\xi _1,\xi _2\bigr )^{-1}\Big [(\gamma -f_0')I -\Lambda ^*_2+\Lambda _{p}(x)\Big ] \begin{pmatrix}\zeta _1\\ \\ \zeta _2 \end{pmatrix}+\Lambda \bigl (x,\xi _1,\xi _2\bigr )^{-1} \begin{pmatrix}\Phi _1(x)\\ \\ \Phi _2(x)\end{pmatrix}.\nonumber \\ \end{aligned}$$
(6.31)

We now have

$$\begin{aligned} \Lambda ^{-1}\bigl (x,\xi _1,\xi _2\bigr ) \Lambda ^*(x) = [(\Lambda ^*(x))^{-1} \Lambda \bigl (x,\xi _1,\xi _2\bigr )]^{-1} = I +K_2(x,\zeta _1,\zeta _2) \end{aligned}$$

for some matrix \(K_2\) whose norm satisfies the bound

$$\begin{aligned} \bigl \Vert K_2(x,\zeta _1,\zeta _2)\bigr \Vert ~\le ~{C_6\over x^2}\cdot \bigl (1+|\zeta _1|+|\zeta _2|\bigr ) \end{aligned}$$

provided

$$\begin{aligned} \Big |{\zeta _1\over x}\Big |+\Big |{\zeta _2\over x}\Big | \le \delta _6, \end{aligned}$$
(6.32)

for some constant \(C_6>0\) and some \(\delta _6>0\) sufficiently small.

Recalling (6.16), one can write (6.31) in the form

$$\begin{aligned} \begin{array}{ll}\displaystyle \begin{pmatrix}\zeta _1'\\ \\ \zeta _2'\end{pmatrix}&{}\displaystyle = (I +K_2(x,\zeta _1,\zeta _2)) \left[ {1\over x}\cdot \big [(\gamma -f_0') (\Lambda _2^*)^{-1}-I \big ]+{\mathcal O(1)\over x^2}+(\Lambda ^*(x))^{-1}\Lambda _{p}(x)\right] \begin{pmatrix}\zeta _1\\ \\ \zeta _2 \end{pmatrix}\\ &{}\displaystyle \quad +\,(I +K_2(x,\zeta _1,\zeta _2))(\Lambda ^*(x))^{-1}\begin{pmatrix}\Phi _1(x)\\ \\ \Phi _2(x) \end{pmatrix}.\end{array} \end{aligned}$$

We now have the estimates

$$\begin{aligned} \Vert \Lambda _p(x)\Vert= & {} \mathcal O(1)\cdot (|h_1(x)|+|h_2(x)|),\\ |\Phi _i(x)|= & {} \mathcal O(1)\cdot \Big [|f_0(x)|+|f'_0(x)|+|\eta _1'(x)|+|\eta _2'(x)|+(|h_1(x)|+|h_2(x)|)\cdot (1+|x|)\Big ]. \end{aligned}$$

In turn, these yield

$$\begin{aligned} \begin{pmatrix} \zeta _1'\\ \\ \zeta _2' \end{pmatrix} = \left[ {1\over x}\cdot \big [\gamma (\Lambda _2^*)^{-1}-I \big ]+{1\over x}\cdot K_3(x,\zeta _1,\zeta _2)\right] \begin{pmatrix}\zeta _1\\ \\ \zeta _2 \end{pmatrix}+\mathbf{b}(x,\zeta _1,\zeta _2), \end{aligned}$$

where the matrix \(K_3\) and the vector \(\mathbf{b}\) satisfy

$$\begin{aligned} K_3(x,\zeta _1,\zeta _2) = \mathcal O(1)\cdot \left( |f_0'(x)|+|h_1(x)|+|h_2(x)|+{1\over x^2}(1+|\zeta _1|+|\zeta _2|)\right) ,\quad \end{aligned}$$
(6.33)

and

$$\begin{aligned} \mathbf{b}(x,\zeta _1,\zeta _2) = \mathcal O(1)\cdot \left( |h_1(x)|+|h_2(x)|+{1\over |x|}\cdot (|f_0(x)|+|f'_0(x)|+|\eta _1'(x)|+|\eta _2'(x)|)\right) \nonumber \\ \end{aligned}$$
(6.34)

as long as (6.32) holds. Under the assumptions (6.13) and (6.32) we thus have

$$\begin{aligned} \Vert K_3(x,\zeta _1,\zeta _2)\Vert \le C_7\left( \delta _1+{1+\delta _3\over |x|}\right) ,\quad \Vert \mathbf{b}(x,\zeta _1,\zeta _2)\Vert \le C_7\,\delta _1, \end{aligned}$$
(6.35)

for some constant \(C_7\).

Performing the change of coordinates (6.19) we obtain

$$\begin{aligned} {d\over \mathrm{d}t} z(t) = \left[ {\gamma (\Lambda ^*_2)^{-1}-2I\over |t|} +M(t,z(t))\right] z(t)+\psi (t,z(t)), \end{aligned}$$
(6.36)

where

$$\begin{aligned} M(t,z) = {1\over |t|}K_3\left( {1\over |t|},{z_1\over |t|},{z_2\over |t|}\right) ,\quad \psi (t,z) = \mathbf{b}\left( {1\over |t|},{z_1\over |t|},{z_2\over |t|}\right) . \end{aligned}$$

By (6.35) one has the bounds

$$\begin{aligned} \Vert M(t,z)\Vert \le C_7\left( {\delta _1\over |t|} + (1+\delta _3) \right) ,\quad \Vert \psi (t,z)\Vert \le C_7 \delta _1. \end{aligned}$$
(6.37)

Given the constants \(C_7\) and \(\delta _3>0\), set \(C_3\doteq C_7(1+\delta _3)\) and let \(\delta _4,\delta _5\) be as in Lemma 5. By choosing \(\delta _1>0\) sufficiently small, we achieve

$$\begin{aligned} \Vert M(t,z)\Vert \le C_7\left( {\delta _1\over |t|} + (1+\delta _3) \right) \le C_3\left( {\delta _4\over |t|}+1\right) , \end{aligned}$$

while \(\Vert \psi (t,z)\Vert \le C_7 \delta _1\le \delta _4\). An application of Lemma 5 now yields the existence of a solution \(z:[-1,0[\,\mapsto I\!\!R^2\). Going back to the original variables, we obtain a solution \(x\mapsto \xi (x)\), globally defined for all \(x\in [1,+\infty [\,\).
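For the reader's convenience we note that, as suggested by the formulas defining \(M\) and \(\psi \) in terms of \(K_3\) and \(\mathbf{b}\), the change of coordinates (6.19) acts as \(x = 1/|t|\), \(\zeta _i(x)=z_i(t)/|t|\). Under this identification one has

$$\begin{aligned} t\in [-1,0[ \quad \Longleftrightarrow \quad x={1\over |t|}\in [1,+\infty [\,, \end{aligned}$$

so that a solution \(z\) defined on \([-1,0[\) indeed yields \(\zeta _1,\zeta _2\), and hence \(\xi \), on the whole half-line \([1,+\infty [\,\).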

Remark 7

From the inequality (6.30), it follows that

$$\begin{aligned} \lim _{t\rightarrow 0-} \Vert z(t)\Vert = 0. \end{aligned}$$

In terms of the original variables, this implies that the perturbed solutions satisfy

$$\begin{aligned} \lim _{|x|\rightarrow +\infty } {|\xi (x)-\xi ^*(x)|\over |x|} = 0. \end{aligned}$$

7 Equilibrium Solutions to Differential Games

All previous analysis has been concerned with the system of ODEs (1.12) describing the derivatives of value functions for the two players. We conclude the paper by showing that, starting with a smooth solution to these ODEs, one obtains a pair of value functions \(V_1,V_2\) satisfying the H–J equations (2.5). In turn, this yields the existence of a Nash equilibrium solution to the differential game (1.1)–(1.2), in the sense of Definition 1. Since here we are dealing with smooth value functions, all these claims can be proved by standard arguments.

Proposition 1

Let \(f, g_1,g_2{:}I\!\!R\mapsto I\!\!R\) be smooth maps with sublinear growth, as in (2.2), and let \(\varphi _i\) be such that

$$\begin{aligned} \varphi _i(x) \ge C_1|x|^2-C_2, \quad i=1,2 \end{aligned}$$
(7.1)

for some positive constants \(C_1, C_2\). Let \(x\mapsto (\xi _1(x),\xi _2(x))\) provide a solution to the implicit system of ODEs (2.7)–(2.9), and assume that the corresponding feedback dynamics (2.10) has a unique equilibrium point \(x^*\), which is stable. If in addition

$$\begin{aligned} |\xi _i(x)| \le \alpha |x|+\beta ,\quad i=1,2 \end{aligned}$$
(7.2)

for some positive constants \(\alpha ,\beta \), then the feedback controls

$$\begin{aligned} u_i^*(x) = -\xi _i(x) g_i(x), \quad x\in I\!\!R, i=1,2 \end{aligned}$$
(7.3)

provide a Nash equilibrium solution to the differential game with dynamics (2.1) and cost functionals (2.3).

Proof

1. Define the functions \(V_1,V_2\) by setting

$$\begin{aligned} \left\{ \begin{array}{ll} \gamma V_1(x)&{}\doteq \displaystyle \Big [f(x) - {1\over 2} g_1^2(x) \xi _1(x)- g_2^2(x) \xi _2(x)\Big ]\xi _1(x) + \varphi _1(x),\\ \gamma V_2(x)&{}\doteq \displaystyle \Big [f(x) - g_1^2(x)\xi _1(x) - {1\over 2} g_2^2(x) \xi _2(x)\Big ]\xi _2(x) + \varphi _2(x).\end{array}\right. \end{aligned}$$
(7.4)

By (2.7)–(2.9), a direct computation yields

$$\begin{aligned} \Lambda (x,\xi _1,\xi _2) \begin{pmatrix}\xi _1'\\ \\ \xi _2'\end{pmatrix} = \begin{pmatrix}-f' \xi _1 + g_1g_1' \xi _1^2 + 2 g_2 g_2' \xi _1\xi _2-\varphi _1' \\ \\ -f'\xi _2 + g_2g_2' \xi _2^2 + 2 g_1 g_1' \xi _1\xi _2-\varphi _2' \end{pmatrix} +\begin{pmatrix} \gamma V_1'\\ \\ \gamma V_2' \end{pmatrix}. \end{aligned}$$

Since \((\xi _1,\xi _2)\) is a solution to the implicit system of ODEs (2.7)–(2.9), for \(i=1,2\) we have

$$\begin{aligned} V'_i = \xi _i. \end{aligned}$$
(7.5)

Therefore \(V_1, V_2\) satisfy the H–J equations in (2.6).
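Explicitly, substituting (7.5) into the definitions (7.4) (a direct rewriting, stated here for the reader's convenience), the functions \(V_1,V_2\) satisfy

$$\begin{aligned} \left\{ \begin{array}{l} \gamma V_1(x) = \displaystyle \Big [f(x) - {1\over 2} g_1^2(x) V_1'(x)- g_2^2(x) V_2'(x)\Big ]V_1'(x) + \varphi _1(x),\\ \gamma V_2(x) = \displaystyle \Big [f(x) - g_1^2(x)V_1'(x) - {1\over 2} g_2^2(x) V_2'(x)\Big ]V_2'(x) + \varphi _2(x).\end{array}\right. \end{aligned}$$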

2. Consider the optimal control problem for the first player:

$$\begin{aligned} \hbox {minimize} \quad \int ^{+\infty }_{0}e^{-\gamma t}\, \left[ {u^2(t)\over 2}+\varphi _1(x(t))\right] ~\mathrm{d}t \end{aligned}$$
(7.6)

subject to

$$\begin{aligned} \dot{x}(t) = f(x)+g_1(x)u(t)+g_2(x)u^*_2(x), \quad x(0)=x_0. \end{aligned}$$
(7.7)

Here \(u_2^*(\cdot )\) is the feedback control of the second player, defined at (7.3). We claim that \(V_1(\cdot )\) is the value function for this problem.

Indeed, if the first player implements the feedback control \(u=u^*_1(\cdot )\) in (7.3), then for any initial datum \(x(0)=x_0\) the system evolves according to

$$\begin{aligned} \dot{x}(t) = f(x(t))+g_1(x(t))u_1^*(x(t))+g_2(x(t))u^*_2(x(t)), \quad x(0) = x_0. \end{aligned}$$

By (7.4) and (7.5), it follows that

$$\begin{aligned} {d\over \mathrm{d}t} V_1(x(t)) -\gamma V_1(x(t)) = -{[u_1^*(t)]^2\over 2}-\varphi _1(x(t)). \end{aligned}$$

Hence, for every \(\tau \),

$$\begin{aligned} \int ^\tau _{0}e^{-\gamma t}\, \Big [{[u_1^*(t)]^2\over 2} +\varphi _1({x}(t))\Big ] \mathrm{d}t = V_1(x(0))-e^{-\gamma \tau }V_1(x( \tau )). \end{aligned}$$

Letting \( \tau \rightarrow \infty \) and recalling that \(\lim _{ \tau \rightarrow \infty }{x}( \tau ) = x^*\), we conclude that the right-hand side converges to \(V_1(x(0))\). Calling \(V_1^*(\cdot ) \) the minimum value function for the problem (7.6)–(7.7), we thus have

$$\begin{aligned} V^*_1(x_0) \le V_1(x_0). \end{aligned}$$
(7.8)

To prove the converse inequality we need to show that, for any control \(u(\cdot )\) such that the Cauchy problem (7.7) admits a global solution in time, one has

$$\begin{aligned} V_1(x_0) \le \int ^{\infty }_{0}e^{-\gamma t} \left[ {u^2(t)\over 2}+\varphi _1(x(t))\right] \,\mathrm{d}t. \end{aligned}$$
(7.9)

Indeed, consider the function

$$\begin{aligned} \Phi (\tau ) = \int _0^{\tau }e^{-\gamma t} \left[ {u^2(t)\over 2}+\varphi _1(x(t))\right] \,\mathrm{d}t +e^{-\gamma \tau }\, V_1(x(\tau )). \end{aligned}$$

For a.e. \(\tau \in [0,+\infty )\), we compute

$$\begin{aligned} \begin{array}{ll} \displaystyle {d\over \mathrm{d}\tau }~\Phi (\tau )&{}=\displaystyle e^{-\gamma \tau }\, \bigg \{{u^2(\tau )\over 2}+\varphi _1(x(\tau ))\\ &{}\quad +\,V_1'(x(\tau ))\, \Big [f(x(\tau )) +g_1(x(\tau ))u(\tau ) \displaystyle +g_2(x(\tau ))u_2^* (x(\tau ))\Big ]-\gamma \, V_1(x(\tau ))\bigg \}\\ &{}\ge \displaystyle e^{-\gamma \tau }\, \bigg \{{[u_1^*(\tau )]^2\over 2}+\varphi _1(x(\tau ))\\ &{}\quad +\,V_1'(x(\tau ))\, \Big [f(x(\tau )) +g_1(x(\tau ))u_1^*(\tau ) \displaystyle +g_2(x(\tau ))u_2^*(x(\tau ))\Big ]-\gamma \, V_1(x(\tau ))\bigg \}\\ &{}= 0. \end{array} \end{aligned}$$

Hence, \(\Phi (\tau )\) is a nondecreasing function. We now estimate

$$\begin{aligned}&V_1(x_0) = \Phi (0) \le \lim _{\tau \rightarrow +\infty } \Phi (\tau ) \nonumber \\&\quad = \int ^{\infty }_{0}e^{-\gamma t} \left[ {u^2(t)\over 2} +\varphi _1(x(t))\right] \,\mathrm{d}t+\lim _{\tau \rightarrow +\infty }e^{-\gamma \tau }\, V_1(x(\tau )). \end{aligned}$$
(7.10)

If now

$$\begin{aligned} \lim _{\tau \rightarrow +\infty } e^{-\gamma \tau }\, V_1(x(\tau )) \le 0, \end{aligned}$$

then the inequality (7.9) follows. Assume that, on the contrary, there exist \(\tau _0,\varepsilon >0\) such that

$$\begin{aligned} e^{-\gamma \tau }\, |V_1(x(\tau ))| \ge \varepsilon \quad \hbox {for all}~\tau \ge \tau _0. \end{aligned}$$

By (7.2) and (7.5), we have

$$\begin{aligned} |V_1(x)| \le c_1+c_2|x|^2 \end{aligned}$$

for some constants \(c_1,c_2>0\). Hence, for all \(t\ge \tau _0\),

$$\begin{aligned} |x(t)|^2 \ge e^{\gamma t}\, {\varepsilon \over c_2}-{c_1\over c_2}. \end{aligned}$$
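Indeed, combining the previous two inequalities,

$$\begin{aligned} \varepsilon \le e^{-\gamma t}\, |V_1(x(t))| \le e^{-\gamma t}\bigl (c_1+c_2|x(t)|^2\bigr ),\qquad t\ge \tau _0, \end{aligned}$$

and solving for \(|x(t)|^2\) yields the stated lower bound.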

Recalling (7.1), we now estimate

$$\begin{aligned}&\int ^{\tau }_{\tau _0}e^{-\gamma t} \left[ {u^2(t)\over 2}+\varphi _1(x(t))\right] \,\mathrm{d}t \ge \int ^{\tau }_{\tau _0}e^{-\gamma t}\, \bigl [C_1|x(t)|^2-C_2\bigr ]\,\mathrm{d}t\\&\ge \int ^{\tau }_{\tau _0}\left( {C_1\varepsilon \over c_2}-e^{-\gamma t}\Big [{C_1c_1\over c_2}+C_2\Big ]\right) \,\mathrm{d}t = {C_1\varepsilon \over c_2}(\tau -\tau _0)-{1\over \gamma }\, \Big [{C_1c_1\over c_2}+C_2\Big ] \bigl (e^{-\gamma \tau _0}-e^{-\gamma \tau }\bigr ). \end{aligned}$$

Since the right-hand side diverges as \(\tau \rightarrow +\infty \), while the integral over \([0,\tau _0]\) is bounded below, we thus obtain

$$\begin{aligned} \int ^{\infty }_{0}e^{-\gamma t} \left[ {u^2(t)\over 2}+\varphi _1(x(t))\right] \, \mathrm{d}t = +\infty . \end{aligned}$$

Hence (7.9) trivially holds.

3. Having proved that \(V_1(\cdot )\) is the value function, it is clear that the feedback control \(u^*_1(\cdot )\) in (7.3) is optimal for player 1. An entirely similar argument shows that \(u_2^*(\cdot )\) is an optimal feedback control for player 2. \(\square \)

A similar result holds in the case where the solution \((\xi _1,\xi _2)\) of (2.7)–(2.9) is constructed only on a bounded interval.

Proposition 2

Let \(f, g_1,g_2{:}I\!\!R\mapsto I\!\!R\) be smooth maps with sublinear growth, as in (2.2). Let \(x\mapsto (\xi _1(x),\xi _2(x))\) provide a solution to the implicit system of ODEs (2.7)–(2.9) restricted to some interval \([a,b]\), and assume that the corresponding feedback dynamics (2.10) has a unique equilibrium point \(x^*\in [a,b]\), which is stable. Then the controls \(u_i^*\) in (7.3) are optimal for the game with dynamics (2.1) and cost functionals

$$\begin{aligned} J_i = \int _0^\infty e^{-\gamma t}\,L_i\bigl (x(t), u_i(t)\bigr )\, \mathrm{d}t, \quad i=1,2, \end{aligned}$$
(7.11)

where

$$\begin{aligned} L_i(x,u_i) = \left\{ \begin{array}{ll} \displaystyle \varphi _i(x)+{u_i^2\over 2} &{}\quad \hbox {if}\quad x\in [a,b],\\ +\infty &{}\quad \hbox {if}\quad x\notin [a,b].\end{array}\right. \end{aligned}$$
(7.12)

Indeed, by (7.12) both players face an optimization problem with state constraint \(x\in [a,b]\). The proof is achieved by the same arguments used in the previous case.

Remark 8

As a consequence of Proposition 1, the perturbed solution \((\xi _1,\xi _2)\) constructed in Sect. 6 on the whole real line yields a Nash equilibrium solution to the differential game with dynamics (2.1) and cost functionals (2.3). Indeed, (6.13) implies (7.1) and (2.2). By Remark 7, there exists a constant \(C>0\) such that

$$\begin{aligned} |\xi _i(x)-(\alpha _i x+\beta _i)| = |\xi _i(x)-\xi _i^*(x)| \le |x|+C,\quad i=1,2. \end{aligned}$$
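Here \(\xi _i^*(x)=\alpha _i x+\beta _i\). By the triangle inequality, this gives the linear bound

$$\begin{aligned} |\xi _i(x)| \le |\alpha _i x+\beta _i| + |x|+C \le \bigl (|\alpha _i|+1\bigr )|x| + \bigl (|\beta _i|+C\bigr ),\qquad i=1,2. \end{aligned}$$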

Hence the condition (7.2) holds.

On the other hand, by Proposition 2 the perturbed solution \((\xi _1,\xi _2)\) constructed in Sects. 4 and 5 on a bounded interval \([a,b]\) yields a Nash equilibrium solution to the differential game with dynamics (2.1) and cost functionals (7.11)–(7.12).