1 INTRODUCTION

Equations of mathematical physics, such as Euler–Lagrange equations of variational problems, have an important class of solutions, namely, soliton solutions [1, 2]. In a number of models, such solutions are well approximated by soliton solutions of finite-difference analogues of the original equations, which describe not a continuous medium, but rather the interaction of medium clumps placed at vertices of a lattice [2, 3]. The resulting systems are infinite-dimensional dynamical ones. The most widely considered class of such problems includes infinite-dimensional systems with Frenkel–Kontorova potentials (periodic and slowly growing) and Fermi–Pasta–Ulam potentials (of exponential growth); a detailed survey of them can be found in [4].

The study of soliton solutions (traveling-wave solutions) is based on the existence of a one-to-one correspondence between soliton solutions of infinite-dimensional dynamical systems and solutions of induced functional differential equations of pointwise type [5–8]. This relationship is a fragment of a more general scheme, which is beyond the scope of this paper [9]. Importantly, the study of soliton solutions of an infinite-dimensional dynamical system is equivalent to the study of solutions of the induced functional differential equation of pointwise type. An existence and uniqueness (existence) theorem for the induced functional differential equation of pointwise type guarantees the existence and uniqueness (existence) of a soliton solution with given initial values. Solutions of functional differential equations of pointwise type with a quasilinear right-hand side are studied within a formalism based on the group features of such equations, developed in [10, 5, 6, 11]; examples of such solutions can be found in [8]. For linear systems, criteria for the existence of solutions in the form of an analogue of Noether’s theorem and criteria for the pointwise completeness of solutions were obtained in [12]. Easy-to-verify sufficient conditions for the existence of solutions were also established in [13].

In this paper, a bifurcation theorem describing the loss and branching of solutions is derived for linear homogeneous functional differential equations of pointwise type.

2 FUNCTIONAL DIFFERENTIAL EQUATIONS OF POINTWISE TYPE

A functional differential equation of the pointwise type is an equation

$$\dot {x}(t) = f(t,x({{q}_{1}}(t)), \ldots ,x({{q}_{s}}(t))),\quad t \in {{B}_{R}},$$
(1)

where \(f:\mathbb{R} \times {{\mathbb{R}}^{{ns}}} \to {{\mathbb{R}}^{n}}\) is a \({{C}^{{(0)}}}\) mapping; \({{q}_{j}}(\cdot )\), \(j = 1, \ldots ,s\), are orientation-preserving diffeomorphisms of the real line; and \({{B}_{R}}\) denotes a closed interval \([{{t}_{0}},{{t}_{1}}]\), a closed half-line \([{{t}_{0}}, + \infty )\), or the real line \(\mathbb{R}\). By a suitable time substitution, the argument deviations \([{{q}_{j}}(t) - t]\), \(j = 1, \ldots ,s\), can always be made to satisfy the condition

$${{h}_{j}} = \mathop {\sup}\limits_{t \in \mathbb{R}} \left| {{{q}_{j}}(t) - t} \right| < + \infty ,\quad j = 1, \ldots ,s,$$

but the right-hand side of the equation may grow too fast with respect to the time variable under such a substitution, specifically, faster than exponentially.

In the study of functional differential equations of pointwise type, the main goal is to investigate the initial-boundary value problem

$$\dot {x}(t) = f(t,x({{q}_{1}}(t)), \ldots ,x({{q}_{s}}(t))),\quad t \in {{B}_{R}},$$
(2)
$$\dot {x}(t) = \varphi (t),\quad t \in \mathbb{R}{\backslash }{{B}_{R}},\quad \varphi (\cdot ) \in {{L}_{\infty }}(\mathbb{R},{{\mathbb{R}}^{n}}),$$
(3)
$$x(\bar {t}) = \bar {x},\quad \bar {t} \in \mathbb{R},\quad \bar {x} \in {{\mathbb{R}}^{n}},$$
(4)

which is referred to as the basic initial-boundary value problem. In the generic case, when \(\bar {t} \ne {{t}_{0}},{{t}_{1}}\) or the deviations of the argument are arbitrary, we obtain a problem with nonlocal initial-boundary conditions. For such a problem, we need to study the entire solution space, its dimension, possible degenerations, and pointwise completeness (whether a solution of the boundary value problem (2), (3) passes through each point of the phase space \({{\mathbb{R}}^{n}}\)). The difficulty with such equations is that the motion along solutions in the phase space \({{\mathbb{R}}^{n}}\) does not have the semigroup property, which is central to the theory of dynamical systems.

Krasovsky regularization. The idea of overcoming this shortcoming goes back to Krasovsky [14]. For retarded functional differential equations (\({{q}_{j}}(t) \leqslant t,\;j = 1, \ldots ,s\)), given a solution \(x(\cdot )\) of Eq. (1), the curve \({{x}_{t}}\) in the space \(C([ - d,0],{{\mathbb{R}}^{n}})\), \(d = {{\max}_{{j \in \{ 1, \ldots ,s\} }}}{{h}_{j}}\), is constructed according to the rule \({{x}_{t}}(\theta ) = x(t + \theta )\), \(\theta \in [ - d,0]\). The right-hand side of the functional differential equation induces a functional on the infinite-dimensional space \(C([ - d,0],{{\mathbb{R}}^{n}})\) and determines an ordinary differential equation satisfied by the curve \({{x}_{t}}\). The phase space for this equation is the infinite-dimensional space \(C([ - d,0],{{\mathbb{R}}^{n}})\). The motion along solutions in this phase space has the semigroup property, and methods of the theory of dynamical systems are applicable to such systems. This approach covers a class of functional differential equations that is much larger than the class of functional differential equations of pointwise type. At the same time, under this approach, information on the behavior of solutions in the original phase space \({{\mathbb{R}}^{n}}\) and the relationship with the theory of solutions of ordinary differential equations are lost.
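
As a purely illustrative aside (this sketch is not taken from [14]; the Euler discretization, the constant initial history, and all names in it are assumptions of the illustration), the following Python fragment integrates the retarded scalar example \(\dot {x}(t) = \alpha x(t - 1)\) segment by segment, treating the history segment \({{x}_{t}} \in C([ - 1,0],\mathbb{R})\) as the state:

```python
# Illustrative only: method-of-steps integration of x'(t) = alpha*x(t - 1),
# where the state carried forward is the whole history segment x_t on [-1, 0].
import numpy as np

alpha, dt, d = 0.2, 1e-3, 1.0
N = int(round(d / dt))                      # grid points per delay interval
segment = np.ones(N + 1)                    # x_t on [-d, 0]: constant initial history 1
trajectory = [segment.copy()]

for _ in range(5):                          # advance over five delay intervals
    new = np.empty_like(segment)
    new[0] = segment[-1]                    # continuity at the left end of the new segment
    for k in range(N):
        # Euler step; the right-hand side is evaluated on the previous segment,
        # i.e., f(x_t) = alpha * x(t - 1)
        new[k + 1] = new[k] + dt * alpha * segment[k]
    trajectory.append(new)
    segment = new

print([round(seg[-1], 4) for seg in trajectory])   # x(0), x(1), ..., x(5)
```

Each pass of the outer loop maps the current segment to the next one; this is precisely the semigroup action on \(C([ - d,0],{{\mathbb{R}}^{n}})\) described above.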

Another approach to the study of such equations is based on a formalism relying on constructions using a finitely generated group \(Q\) of diffeomorphisms of the straight line (the group operation in this group is a composition of diffeomorphisms) with the property \(\left\langle {{{q}_{1}}, \ldots ,{{q}_{s}}} \right\rangle \subseteq Q\). The idea of this approach is that an infinite-dimensional vector function \({{\{ x(q(t))\} }_{{q \in Q}}}\) constructed using the solution \(x(\cdot )\) of Eq. (1) is the solution of an induced infinite-dimensional ordinary differential equation with the phase space given by the complete direct product

$${{\mathcal{K}}^{n}} = \overline {\prod\limits_{q \in Q} } \,\mathbb{R}_{q}^{n},\quad \mathbb{R}_{q}^{n} = {{\mathbb{R}}^{n}},\quad \varkappa \in {{\mathcal{K}}^{n}},\quad \varkappa = {{\{ {{x}_{q}}\} }_{{q \in Q}}}.$$

With the use of this group approach, functional differential equations of pointwise type can be treated as an extension of the class of ordinary differential equations that preserves its key properties: existence and uniqueness theorems for initial-boundary value problems, continuous dependence on the initial-boundary values and on the right-hand side of the equation, the pointwise completeness of solutions to boundary value problems, etc., continue to hold. Importantly, with this approach, it is possible to examine the behavior of trajectories in the original phase space \({{\mathbb{R}}^{n}}\) and to reveal the obstacles that prevent functional differential equations of pointwise type from inheriting the properties of ordinary differential equations.

2.1 Existence and Uniqueness Theorem for a Functional Differential Equation of Pointwise Type

Define the weighted Banach space of functions \(x(\cdot )\)

$$\mathcal{L}_{\mu }^{n}{{C}^{{(k)}}}(\mathbb{R}) = \left\{ {x(\cdot ):x(\cdot ) \in {{C}^{{(k)}}}(\mathbb{R},{{\mathbb{R}}^{n}}),\mathop {\max}\limits_{0 \leqslant r \leqslant k} \mathop {\sup}\limits_{t \in \mathbb{R}} {{{\left\| {{{x}^{{(r)}}}(t){{\mu }^{{\left| t \right|}}}} \right\|}}_{{{{\mathbb{R}}^{n}}}}} < + \infty } \right\},\quad \mu \in (0,1),$$

equipped with the norm

$$\left\| {x(\cdot )} \right\|_{\mu }^{{(k)}} = \mathop {\max}\limits_{0 \leqslant r \leqslant k} \mathop {\sup}\limits_{t \in \mathbb{R}} {{\left\| {{{x}^{{(r)}}}(t){{\mu }^{{\left| t \right|}}}} \right\|}_{{{{\mathbb{R}}^{n}}}}}.$$

Assume that the following set of constraints is imposed on the right-hand side of a functional differential equation of pointwise type:

(a) \(f(\cdot ) \in {{C}^{{(0)}}}(\mathbb{R} \times {{\mathbb{R}}^{{ns}}},{{\mathbb{R}}^{n}})\) (more precisely, \(f(\cdot )\) is allowed to be merely piecewise continuous in the variable \(t\), with jump discontinuities at points of a discrete set);

(b) quasilinear growth: for any \(t,{{z}_{j}},{{\bar {z}}_{j}},j = 1, \ldots ,s\),

$${{\left\| {f(t,{{z}_{1}}, \ldots ,{{z}_{s}})} \right\|}_{{{{\mathbb{R}}^{n}}}}} \leqslant {{M}_{0}}(t) + {{M}_{1}}\sum\limits_{j = 1}^s \,{{\left\| {{{z}_{j}}} \right\|}_{{{{\mathbb{R}}^{n}}}}},\quad {{M}_{0}}(\cdot ) \in {{C}^{{(0)}}}(\mathbb{R},\mathbb{R}),$$

and the Lipschitz condition

$${{\left\| {f(t,{{z}_{1}}, \ldots ,{{z}_{s}}) - f(t,{{{\bar {z}}}_{1}}, \ldots ,{{{\bar {z}}}_{s}})} \right\|}_{{{{\mathbb{R}}^{n}}}}} \leqslant {{L}_{f}}\sum\limits_{j = 1}^s \,{{\left\| {{{z}_{j}} - {{{\bar {z}}}_{j}}} \right\|}_{{{{\mathbb{R}}^{n}}}}}$$

(actually, \({{M}_{1}} \leqslant {{L}_{f}}\), but the constants \({{M}_{1}}\) and \({{L}_{f}}\) can be taken identical);

(c) there exists \(\mu {\text{*}} \in {{\mathbb{R}}_{ + }}\) such that the expression

$$\mathop {\sup}\limits_{i \in \mathbb{Z}} {{M}_{0}}(t + i)\mathop {(\mu {\text{*}})}\nolimits^{\left| i \right|} $$

for any \(t \in \mathbb{R}\) has a finite value and is continuous when regarded as a function of \(t\);

(d) the values

$${{h}_{j}} = \mathop {\sup}\limits_{t \in \mathbb{R}} \left| {t - {{q}_{j}}(t)} \right|,\quad j = 1, \ldots ,s,$$

are finite.

The right-hand side \(f(\cdot )\) of the functional differential equation of pointwise type is considered an element of the Banach space \({{V}_{{\mu {\text{*}}}}}(\mathbb{R} \times {{\mathbb{R}}^{{ns}}},{{\mathbb{R}}^{n}})\) defined as

$$\begin{gathered} {{V}_{{\mu {\text{*}}}}}(\mathbb{R} \times {{\mathbb{R}}^{{ns}}},{{\mathbb{R}}^{n}}) = \{ f(\cdot ):f(\cdot )\;{\text{satisfies conditions (a)-}}({\text{d}})\} , \\ {{\left\| {f(\cdot )} \right\|}_{{{\text{Lip}}}}} = \mathop {\sup}\limits_{t \in \mathbb{R}} {{\left\| {f(t,0, \ldots ,0){{{(\mu {\text{*}})}}^{{\left| t \right|}}}} \right\|}_{{{{\mathbb{R}}^{n}}}}} + \mathop {\sup}\limits_{(t,{{z}_{1}}, \ldots ,{{z}_{s}},{{{\bar {z}}}_{1}}, \ldots ,{{{\bar {z}}}_{s}}) \in {{\mathbb{R}}^{{1 + 2ns}}}} \frac{{{{{\left\| {f(t,{{z}_{1}}, \ldots ,{{z}_{s}}) - f(t,{{{\bar {z}}}_{1}}, \ldots ,{{{\bar {z}}}_{s}})} \right\|}}_{{{{\mathbb{R}}^{n}}}}}}}{{\sum\nolimits_{j = 1}^s \,{{{\left\| {{{z}_{j}} - {{{\bar {z}}}_{j}}} \right\|}}_{{{{\mathbb{R}}^{n}}}}}}}, \\ \end{gathered} $$

where the parameter \(\mu {\text{*}} \in {{\mathbb{R}}_{ + }}\) coincides with the corresponding constant in condition (c). Obviously, for \(f(\cdot ) \in {{V}_{{\mu {\text{*}}}}}(\mathbb{R} \times {{\mathbb{R}}^{{ns}}},{{\mathbb{R}}^{n}})\), the smallest value of the constant \({{L}_{f}}\) in the Lipschitz condition (condition (b)) coincides with the value of the second term in the definition of the norm of \(f(\cdot ).\) In what follows, when talking about the Lipschitz condition, by the constant \({{L}_{f}},\) we mean this smallest value; additionally, we use the notation \(h = ({{h}_{1}}, \ldots ,{{h}_{s}})\).

Theorem 1 (see [10]). Given some \(\mu \in (0,\mu {\text{*}}) \cap (0,1)\), if

$${{L}_{f}}\sum\limits_{j = 1}^s \,{{\mu }^{{ - {{h}_{j}}}}} < \ln{{\mu }^{{ - 1}}},$$
(5)

then, for any fixed initial-boundary values

$$\bar {x} \in {{\mathbb{R}}^{n}},\quad \varphi (\cdot ) \in {{L}_{\infty }}(\mathbb{R},{{\mathbb{R}}^{n}}),$$

the basic initial-boundary value problem (2)–(4) has an (absolutely continuous) solution

$$x(\cdot ) \in \mathcal{L}_{\mu }^{n}{{C}^{{(0)}}}(\mathbb{R}).$$

This solution is unique and, when regarded as an element of the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(0)}}}(\mathbb{R})\), depends continuously on the initial-boundary values \(\varphi (\cdot ) \in {{L}_{\infty }}(\mathbb{R},{{\mathbb{R}}^{n}}),\) \(\bar {x} \in {{\mathbb{R}}^{n}}\), and the right-hand side \(f(\cdot )\) of the equation.

Condition (5) is sharp and cannot be improved: there are examples of equations that have a nonunique solution or no solution at all when this condition is violated. If inequality (5) holds for some \(\mu \in (0,\mu {\text{*}}) \cap (0,1)\), then it holds on some maximal interval, which is denoted by \(({{\mu }_{1}}({{L}_{f}};h),{{\mu }_{2}}({{L}_{f}};h))\). The existence and uniqueness theorem for functional differential equations of pointwise type implies that, under the conditions of Theorem 1, a solution exists in the smaller spaces \(\mathcal{L}_{{{{\mu }_{2}}({{L}_{f}};h) - \varepsilon }}^{n}{{C}^{{(0)}}}(\mathbb{R})\) with arbitrarily small \(\varepsilon > 0\) and is unique in the larger spaces \(\mathcal{L}_{{{{\mu }_{1}}({{L}_{f}};h) + \varepsilon }}^{n}{{C}^{{(0)}}}(\mathbb{R})\) with arbitrarily small \(\varepsilon > 0\).

Obviously, inequality (5) holds in the case of ordinary differential equations. Inequality (5) singles out a subclass of functional differential equations having properties of ordinary differential ones, such as the existence and uniqueness of solutions to initial-boundary value problems, the pointwise completeness of solutions to boundary value problems, and the continuous dependence of solutions on initial-boundary values and the right-hand side of the equation (roughness of the equation). Condition (5) determines a regular extension of the class of ordinary differential equations to the class of functional differential equations of pointwise type with the preservation of the above-mentioned properties of the former equations.
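
To make condition (5) concrete, the maximal interval \(({{\mu }_{1}}({{L}_{f}};h),{{\mu }_{2}}({{L}_{f}};h))\) on which it holds can be computed numerically for given \({{L}_{f}}\) and \(h = ({{h}_{1}}, \ldots ,{{h}_{s}})\). The sketch below is only an illustration (the function names and the use of SciPy root finding are assumptions of the sketch, not part of the original exposition):

```python
# Hypothetical helper: the maximal interval (mu_1, mu_2) in (0, 1) on which
# inequality (5), L_f * sum_j mu^(-h_j) < ln(1/mu), is satisfied.
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def gap(mu, L_f, h):
    """ln(1/mu) - L_f * sum_j mu^(-h_j); inequality (5) holds iff this is positive."""
    return np.log(1.0 / mu) - L_f * np.sum(mu ** (-np.asarray(h, dtype=float)))

def mu_interval(L_f, h, eps=1e-12):
    """Return (mu_1, mu_2), or None if (5) has no solution (assumes some h_j > 0)."""
    # gap(mu) -> -inf as mu -> 0+ and gap(1) = -s*L_f < 0, so the set where (5)
    # holds is a single interval around the interior maximizer of gap.
    res = minimize_scalar(lambda m: -gap(m, L_f, h), bounds=(eps, 1 - eps),
                          method="bounded")
    mu_star, top = res.x, -res.fun
    if top <= 0:
        return None
    mu1 = brentq(gap, eps, mu_star, args=(L_f, h))
    mu2 = brentq(gap, mu_star, 1 - eps, args=(L_f, h))
    return mu1, mu2

# Scalar delay equation with one unit deviation: L_f = |alpha|, h = (1,).
print(mu_interval(0.2, [1]))   # a nonempty interval inside (0, 1)
print(mu_interval(0.5, [1]))   # None, since 0.5 > 1/e
```

For \(h = (1)\) this reproduces the threshold \(\left| \alpha \right| < {{e}^{{ - 1}}}\) obtained for the delay equation (22) in Subsection 3.1.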

3 BIFURCATION

Consider the Cauchy (initial value) problem for a linear homogeneous system of functional differential equations of pointwise type defined on the entire straight line:

$$\dot {x}(t) = \sum\limits_{j = 1}^s \,{{A}_{j}}x(t + {{\tau }_{j}}),\quad t \in \mathbb{R},$$
(6)
$$x(\bar {t}) = \bar {x},\quad \bar {t} \in \mathbb{R},\quad \bar {x} \in {{\mathbb{R}}^{n}},$$
(7)

where \({{\tau }_{j}} \in \mathbb{Z},\) \(j = 1,\,\, \ldots ,\,\,s\), and \({{\tau }_{1}} < \ldots < {{\tau }_{s}}\). In view of the notation for functional differential equations of pointwise type, the right-hand side of Eq. (6) has the form

$$f(x({{q}_{1}}(t)),\,\, \ldots ,\,\,x({{q}_{s}}(t))) = \sum\limits_{j = 1}^s \,{{A}_{j}}x({{q}_{j}}(t)),$$

where \({{q}_{j}}(t) = t + {{\tau }_{j}},\) \(j = 1, \ldots ,s\). Obviously,

$${{h}_{{{{q}_{j}}}}} = \mathop {\sup}\limits_{t \in \mathbb{R}} \left| {{{q}_{j}}(t) - t} \right| = \left| {{{\tau }_{j}}} \right|,\quad j = 1, \ldots ,s.$$

Following the general notation for such equations, we set \(h = \left( {\left| {{{\tau }_{1}}} \right|, \ldots ,\left| {{{\tau }_{s}}} \right|} \right)\).

For this system, the characteristic equation is specified by the quasi-polynomial

$$\left| {\lambda E - \sum\limits_{j = 1}^s \,{{A}_{j}}{{e}^{{\lambda {{\tau }_{j}}}}}} \right| = 0.$$
(8)

The roots of quasi-polynomial (8) are considered in the field of complex numbers.

Consider the complexification of the \(n\)-dimensional space \({{\mathbb{R}}^{n}}\), i.e., the complex space \({{\mathcal{Z}}^{n}} = {{\mathbb{R}}^{n}} + i{{\mathbb{R}}^{n}}\). The linear operator \(A\) acting in the real space \({{\mathbb{R}}^{n}}\) generates a linear operator in the complexified space \({{\mathcal{Z}}^{n}}\) defined according to the rule \(Az = A(x + iy) = Ax + iAy\). Define

$$\mathbb{A}(\lambda ) = \sum\limits_{j = 1}^s \,{{A}_{j}}{{e}^{{\lambda {{\tau }_{j}}}}},\quad \lambda \in \mathbb{C}.$$
(9)

Obviously, for any \(k \in \mathbb{Z}\), it is true that \(\mathbb{A}(\lambda ) = \mathbb{A}(\lambda + i2\pi k)\). For each \(\lambda \in \mathbb{C}\), let \(\sigma (\mathbb{A}(\lambda ))\) denote the spectrum of the matrix \(\mathbb{A}(\lambda )\). Then the solution set of the characteristic equation (8) can be written as

$$\mathcal{R} = \{ \lambda :\lambda \in \sigma (\mathbb{A}(\lambda ))\} .$$
(10)

To solve the original linear functional differential equation (6), we need to describe the solution set \(\mathcal{R}\) of the characteristic quasi-polynomial (8) and the eigenspaces corresponding to each \(\lambda \in \mathcal{R}\).
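
Although the subsequent analysis is purely analytic, the portion of \(\mathcal{R}\) lying in a bounded rectangle can be approximated numerically. The following sketch is entirely illustrative (the grid-plus-Newton strategy, the tolerances, and the function names are assumptions of mine):

```python
# Illustrative sketch: approximate the roots of det(lambda*E - sum_j A_j*e^(lambda*tau_j)) = 0
# lying in a rectangle of the complex plane.
import numpy as np

def char_det(lam, A_list, taus):
    """Determinant in the characteristic equation (8) evaluated at lambda."""
    n = A_list[0].shape[0]
    M = lam * np.eye(n, dtype=complex)
    for A, tau in zip(A_list, taus):
        M -= A * np.exp(lam * tau)
    return np.linalg.det(M)

def roots_in_box(A_list, taus, re_lim, im_lim, grid=40, tol=1e-10):
    """Newton iterations (numerical derivative) started from a grid of seeds."""
    found = []
    for x in np.linspace(re_lim[0], re_lim[1], grid):
        for y in np.linspace(im_lim[0], im_lim[1], grid):
            lam, h = complex(x, y), 1e-6
            for _ in range(60):
                f = char_det(lam, A_list, taus)
                df = (char_det(lam + h, A_list, taus) - f) / h
                if df == 0:
                    break
                step = f / df
                lam -= step
                if abs(step) < tol:
                    break
            if (abs(char_det(lam, A_list, taus)) < 1e-8
                    and re_lim[0] <= lam.real <= re_lim[1]
                    and im_lim[0] <= lam.imag <= im_lim[1]
                    and all(abs(lam - r) > 1e-6 for r in found)):
                found.append(lam)
    return sorted(found, key=lambda z: (z.real, z.imag))

# Scalar example of Subsection 3.1: A_1 = (0.2), tau_1 = -1, i.e. lambda = 0.2*e^(-lambda).
print(roots_in_box([np.array([[0.2]])], [-1], (-4.0, 2.0), (0.0, 8.0)))
```

A more careful implementation would count roots with multiplicities via the argument principle; the version above is only meant to make the set \(\mathcal{R}\) tangible.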

Given \(\lambda \in \mathbb{C}\) and \(r \in \mathbb{R}\), define the set

$$\varrho (\lambda ) = \{ \bar {\lambda }:\bar {\lambda } = \lambda + i2\pi k,\;\bar {\lambda } \in \sigma (\mathbb{A}(\lambda )),\;k \in \mathbb{Z}\} ,\quad \rho (r) = \bigcup\limits_{{\text{Re}}\lambda = r} \,\varrho (\lambda ).$$
(11)

Lemma 1. For any \(\mathcal{N} > 0\), the set  \(\bigcup\nolimits_{|r| \leqslant \mathcal{N}} \,\rho (r)\) is either finite or empty.

Proof. Let \(r \in [ - \mathcal{N},\mathcal{N}]\) be given. For \(z \in {{\mathcal{Z}}^{n}}\), \({{\left\| z \right\|}_{{{{\mathcal{Z}}^{n}}}}} = 1\), \(\lambda \in \mathbb{C}\), and \(\operatorname{Re} \lambda = r\), consider the equality

$$\lambda z - \sum\limits_{j = 1}^s \,{{A}_{j}}{{e}^{{\lambda {{\tau }_{j}}}}}z = 0.$$
(12)

Let \(\lambda = \tilde {\lambda } + i2\pi l\), where \(l \in \mathbb{Z}\) and \(\left| {\operatorname{Im} \tilde {\lambda }} \right| < 2\pi \). Since \({{\tau }_{j}} \in \mathbb{Z}\), we have \({{e}^{{\lambda {{\tau }_{j}}}}} = {{e}^{{\tilde {\lambda }{{\tau }_{j}}}}}\), and Eq. (12) becomes

$$(\tilde {\lambda } + i2\pi l)z - \sum\limits_{j = 1}^s \,{{A}_{j}}{{e}^{{\tilde {\lambda }{{\tau }_{j}}}}}z = 0.$$
(13)

Since the absolute values of the real parts of \(\tilde {\lambda }\) are bounded by the number \(\mathcal{N}\), this equality cannot hold for large \(\left| l \right|\). Consequently, for all \(\lambda \in \rho (r)\) and \(r \in [ - \mathcal{N},\mathcal{N}]\), the values \(\left| {\operatorname{Im} \lambda } \right|\) are uniformly bounded. Since \(\left| {\lambda E - \sum\nolimits_{j = 1}^s \,{{A}_{j}}{{e}^{{\lambda {{\tau }_{j}}}}}} \right|\) is an analytic function of \(\lambda \) and the points of the set \(\bigcup\nolimits_{|r| \leqslant \mathcal{N}} \,\rho (r)\) are zeros of this analytic function lying in a bounded region, the set is either finite or empty.

Suppose that \(\lambda \in \sigma (\mathbb{A}(\lambda ))\). Then the eigenvectors corresponding to the eigenvalues \(\bar {\lambda } \in \varrho (\lambda )\) are linearly independent.

Along with the characteristic equation (8), we consider another one

$$\left| {\nu E - {\text{exp}}\left( {\sum\limits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}} \right)} \right| = 0.$$
(14)

Lemma 2. For any eigenvalue \(\lambda \) of the characteristic equation (8), the quantity \(\nu = {{e}^{\lambda }}\) is an eigenvalue of the characteristic equation (14) and, accordingly, for any eigenvalue \(\bar {\lambda } \in \varrho (\lambda )\), it is true that \(\nu = {{e}^{{\bar {\lambda }}}}\). Each eigenvector \(z \in {{\mathcal{Z}}^{n}}\) of the operator \(\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{e}^{{\lambda {{\tau }_{j}}}}}\) with eigenvalue \(\bar {\lambda } \in \varrho (\lambda )\) is an eigenvector of the operator \(\exp \left( {\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}} \right)\) with the eigenvalue \(\nu = {{e}^{\lambda }}\). The linear span of the eigensubspaces of the operator \(\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{e}^{{\lambda {{\tau }_{j}}}}}\) corresponding to the eigenvalues \(\bar {\lambda } \in \varrho (\lambda )\), and only this linear span, is the eigenspace of the operator \(\exp \left( {\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}} \right)\) with the eigenvalue \(\nu = {{e}^{\lambda }}\). Moreover, the linear span of the subspaces of associated vectors of the operator \(\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{e}^{{\lambda {{\tau }_{j}}}}}\) corresponding to eigenvalues from the set \(\varrho (\lambda )\) coincides with the subspace of associated vectors of the operator \(\exp \left( {\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}} \right)\) corresponding to the eigenvalue \(\nu = {{e}^{\lambda }}\).

Proof. Let \(\lambda \) be an eigenvalue of the characteristic equation (8). Then \(\lambda \) solves the characteristic equation for the matrix \(\mathbb{A}(\lambda )\), i.e., the equation

$$\left| {\zeta E - \sum\limits_{j = 1}^s \,{{A}_{j}}{{e}^{{\lambda {{\tau }_{j}}}}}} \right| = 0$$
(15)

and satisfies the equality

$$\left| {\lambda E - \sum\limits_{j = 1}^s \,{{A}_{j}}{{e}^{{\lambda {{\tau }_{j}}}}}} \right| = 0.$$

Setting \(\nu = {{e}^{\lambda }}\), we rewrite Eq. (15) as

$$\left| {\zeta E - \sum\limits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}} \right| = 0.$$

Consider the (analytic) function \(w(\xi ) = {{e}^{\xi }}\) and the corresponding operator function

$$w\left( {\sum\limits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}} \right) = {\text{exp}}\left( {\sum\limits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}} \right).$$

Since \(\lambda \) is an eigenvalue of the operator \(\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{e}^{{\lambda {{\tau }_{j}}}}}\), the spectral theorem from [15] implies that \(w(\lambda ) = {{e}^{\lambda }} = \nu \) is an eigenvalue of the operator \({\text{exp}}\left( {\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}} \right)\), i.e., a solution of the characteristic equation (14), while the eigenvector \(z \in {{\mathcal{Z}}^{n}}\) of the operator \(\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{e}^{{\lambda {{\tau }_{j}}}}}\) with the eigenvalue \(\lambda \) is the eigenvector of the operator \({\text{exp}}\left( {\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}} \right)\) with the eigenvalue \(\nu \). Similarly, each eigenvector \(z \in {{\mathcal{Z}}^{n}}\) of the operator \(\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}\) with \(\nu = {{e}^{\lambda }}\) and the corresponding eigenvalue \(\lambda + i2\pi k\) for some \(k \in \mathbb{Z}\) is the eigenvector of the operator \({\text{exp}}\left( {\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}} \right)\) with the same eigenvalue \(\nu \). By Lemma 1, there is a finite set \({{k}_{1}}, \ldots ,{{k}_{r}}\) of such values \(k \in \mathbb{Z}\). Note that, among the points of the spectrum of the operator \(\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}\), the zeros of the function \(({{e}^{\xi }} - \nu )\), \(\nu = {{e}^{\lambda }}\), are exactly the values \(\lambda + i2\pi {{k}_{l}},\) \(l = 1, \ldots ,r\). Since the derivative of the function \(w(\xi ) = {{e}^{\xi }}\) never vanishes, the multiplicity of the zeros \(\lambda + i2\pi {{k}_{l}},\) \(l = 1, \ldots ,r\), of the function \(({{e}^{\xi }} - \nu )\) is equal to 1. Then we have an expansion

$$({{e}^{\xi }} - \nu ) = v(\xi )(\xi - \lambda - i2\pi {{k}_{1}}) \ldots (\xi - \lambda - i2\pi {{k}_{r}}),$$
(16)

where \(v(\xi )\) has no zeros in the spectrum of the operator \(\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}\). Let \({{\theta }_{l}},\) \(l = 1, \ldots ,r\), denote the eigenvectors of the operator \(\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}\) corresponding to the eigenvalues \(\lambda + i2\pi {{k}_{l}},\) \(l = 1, \ldots ,r\). By the spectral theorem in [15], the linear span of the vectors \({{\theta }_{l}},\) \(l = 1, \ldots ,r\), is contained in the eigenspace of the operator \({\text{exp}}\left( {\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}} \right)\) corresponding to the eigenvalue \(\nu \). However, by virtue of expansion (16), the eigenspace of the operator \({\text{exp}}\left( {\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}} \right)\) coincides with the linear span of the eigenvectors \({{\theta }_{l}},\) \(l = 1, \ldots ,r\). In the same way, by virtue of expansion (16), we prove the last assertion of the lemma.
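
The spectral mapping underlying Lemma 2 is easy to check numerically in a toy case. The following fragment (an illustration with randomly chosen matrices, not part of the proof) confirms that the eigenvalues of \(\exp \left( {\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}} \right)\) are the exponentials of the eigenvalues of \(\sum\nolimits_{j = 1}^s \,{{A}_{j}}{{\nu }^{{{{\tau }_{j}}}}}\) and that the eigenvectors carry over:

```python
# Numerical illustration of the spectral mapping used in Lemma 2 (toy data).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A_list = [rng.standard_normal((3, 3)) for _ in range(2)]
taus = [-1, 1]
nu = 0.7 + 0.2j

M = sum(A * nu**tau for A, tau in zip(A_list, taus))   # sum_j A_j * nu^tau_j
mus, vecs = np.linalg.eig(M)
E = expm(M)

# eigenvalues of exp(M) are e^{mu} for the eigenvalues mu of M ...
print(np.allclose(np.sort_complex(np.linalg.eigvals(E)), np.sort_complex(np.exp(mus))))
# ... and (generically) the eigenvectors carry over
print(np.allclose(E @ vecs[:, 0], np.exp(mus[0]) * vecs[:, 0]))
```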

Any set of real matrices \(\mathcal{A} = ({{A}_{1}}, \ldots ,{{A}_{s}})\) and deviations \(\bar {\tau } = ({{\tau }_{1}}, \ldots ,{{\tau }_{s}})\) is denoted by \(\Theta = ({{A}_{1}}, \ldots ,{{A}_{s}},{{\tau }_{1}}, \ldots ,{{\tau }_{s}})\). Obviously, a pair \(\Theta = (\mathcal{A},\bar {\tau })\) uniquely determines the right-hand side of the linear homogeneous functional differential equation (6), and \(\mathcal{A} = ({{A}_{1}}, \ldots ,{{A}_{s}})\) determines it as an element of the Banach space \(\mathcal{L}({{\mathbb{R}}^{{ns}}},{{\mathbb{R}}^{n}})\). Using the previous notation, we set \(h = \left( {\left| {{{\tau }_{1}}} \right|, \ldots ,\left| {{{\tau }_{s}}} \right|} \right)\). In what follows, for all quantities, we indicate their dependence on \(\Theta \) or \(\mathcal{A}\).

Let \(\mathcal{T}(\Theta )\) be the solution set of quasi-polynomial (8) (taking into account multiplicity). For a given \(\Theta \), we can well define the quantities

$$\underline \delta (\Theta ) = {\text{inf}}\{ \delta :\sharp \{ \lambda :\lambda \in \mathcal{T}(\Theta ),\;\left| {\operatorname{Re} \lambda } \right| < \delta \} \geqslant n\} ,$$
(17)
$$\overline \delta (\Theta ) = {\text{sup}}\{ \delta :\sharp \{ \lambda :\lambda \in \mathcal{T}(\Theta ),\;\left| {\operatorname{Re} \lambda } \right| < \delta \} \leqslant n\} ,$$
(18)
$$\underline \mu (\Theta ) = {\text{exp}}( - \overline \delta (\Theta )),\quad \bar {\mu }(\Theta ) = {\text{exp}}( - \underline \delta (\Theta )),$$
(19)

where \(\sharp \) denotes the cardinality of a set.

For ordinary differential equations, i.e., for \(h = 0\), the quantity \(\underline \delta (\Theta )\) coincides with the upper Lyapunov exponent and \(\overline \delta (\Theta ) = + \infty (\underline \mu (\Theta ) = 0)\).
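
For a concrete root multiset, the quantities (17)–(19) reduce to order statistics of \(\left| {\operatorname{Re} \lambda } \right|\): if the values \(\left| {\operatorname{Re} \lambda } \right|\) are sorted as \({{r}_{1}} \leqslant {{r}_{2}} \leqslant \ldots \), then \(\underline \delta (\Theta ) = {{r}_{n}}\) and \(\overline \delta (\Theta ) = {{r}_{{n + 1}}}\) (interpreted as \( + \infty \) when no such root exists). The following fragment (the function name is mine; the Lambert \(W\) description of the scalar spectrum is a standard identity used again in Subsection 3.1) computes these quantities:

```python
# Sketch (my naming): delta_lower, delta_upper of (17)-(18) and mu_lower, mu_upper of (19)
# from a list of roots of the quasi-polynomial (8), for a system of dimension n.
import numpy as np
from scipy.special import lambertw

def deltas(roots, n):
    r = np.sort(np.abs(np.real(np.asarray(roots, dtype=complex))))
    delta_lower = r[n - 1]
    delta_upper = r[n] if len(r) > n else np.inf
    return delta_lower, delta_upper, np.exp(-delta_upper), np.exp(-delta_lower)

# Scalar equation (22) below with alpha = 0.2 (n = 1): the roots of lambda = alpha*e^(-lambda)
# are the Lambert W branches of lambda*e^lambda = alpha.
alpha = 0.2
roots = [lambertw(alpha, k=k) for k in (0, 1, -1)]   # real root and first complex pair
# (branches with |k| >= 2 have larger |Re| and do not affect the result for n = 1)
print(deltas(roots, n=1))
```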

Lemma 3. For any set \(\Theta = (\mathcal{A},\bar {\tau }) = ({{A}_{1}}, \ldots ,{{A}_{s}},{{\tau }_{1}}, \ldots ,{{\tau }_{s}})\), it is true that \(\overline \delta (\Theta ) \geqslant \underline \delta (\Theta )\).

Proof. It follows directly from the definitions of \(\overline \delta (\Theta )\) and \(\underline \delta (\Theta )\).

For the initial value problem (6), (7), an existence and uniqueness theorem for a solution from the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(0)}}}(\mathbb{R})\) with suitable values of \(\mu \in (0,1)\) can be formulated in terms of \(\overline \delta (\Theta )\) and \(\underline \delta (\Theta )\) (\(\underline \mu (\Theta )\) and \(\bar {\mu }(\Theta )\)). For solutions from the spaces \(\mathcal{L}_{\mu }^{n}{{C}^{{(0)}}}(\mathbb{R})\), we will describe bifurcations with respect to the parameter \(\mu \in (0,1)\) in the form of loss of solutions and their branching.

Theorem 2. Given a set \(\Theta = (\mathcal{A},\bar {\tau }) = ({{A}_{1}}, \ldots ,{{A}_{s}},{{\tau }_{1}}, \ldots ,{{\tau }_{s}})\), if \(\overline \delta (\Theta ) > \underline \delta (\Theta )\), then, for each \(\mu \in (\underline \mu (\Theta ),\bar {\mu }(\Theta )]\), Eq. (6) has \(n\) linearly independent solutions in the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(0)}}}(\mathbb{R})\). At \(\mu = \bar {\mu }(\Theta )\) and \(\mu = \underline \mu (\Theta ) \ne 0\), there is a bifurcation. Some of the \(n\) linearly independent solutions are lost in the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(0)}}}(\mathbb{R}),\) \(\mu > \bar {\mu }(\Theta )\), and new linearly independent solutions appear (branch) in the space \(\mathcal{L}_{{\underline \mu (\Theta )}}^{n}{{C}^{{(0)}}}(\mathbb{R})\).

If \(\overline \delta (\Theta ) = \underline \delta (\Theta )\), then Eq. (6) has \(m\) \((m > n)\) linearly independent solutions in the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(0)}}}(\mathbb{R}),\) where \(\mu = \bar {\mu }(\Theta ) = \underline \mu (\Theta )\). At \(\mu = \bar {\mu }(\Theta ) = \underline \mu (\Theta )\), there is a bifurcation. Solutions are lost from the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(0)}}}(\mathbb{R})\), \(\mu > \bar {\mu }(\Theta ) = \underline \mu (\Theta )\), so that only \(m'\) \((m' < n)\) linearly independent solutions remain in this space.

Proof. Suppose that \(\overline \delta (\Theta ) > \underline \delta (\Theta )\). By the definitions of \(\overline \delta (\Theta )\) and \(\underline \delta (\Theta )\), the quasi-polynomial (8) has \(n\) roots (counting their multiplicities) in the open cylinder \(\left| {\operatorname{Re} \lambda } \right| < \overline \delta (\Theta )\); in fact, these roots lie in the smaller closed cylinder \(\left| {\operatorname{Re} \lambda } \right| \leqslant \underline \delta (\Theta )\). Moreover, there are more than \(n\) roots in the closed cylinder \(\left| {\operatorname{Re} \lambda } \right| \leqslant \overline \delta (\Theta )\) and fewer than \(n\) roots in the open cylinder \(\left| {\operatorname{Re} \lambda } \right| < \underline \delta (\Theta )\). By virtue of the noted spectral properties, the assertions of the theorem follow in this case.

Suppose that \(\overline \delta (\Theta ) = \underline \delta (\Theta )\). By the definitions of \(\overline \delta (\Theta )\) and \(\underline \delta (\Theta )\), the quasi-polynomial (8) has more than \(n\) roots (counting their multiplicities) in the closed cylinder \(\left| {\operatorname{Re} \lambda } \right| \leqslant \overline \delta (\Theta )\) and fewer than \(n\) roots in the open cylinder \(\left| {\operatorname{Re} \lambda } \right| < \overline \delta (\Theta )\). By virtue of the noted spectral properties, the assertions of the theorem follow in this case as well.

For a linear homogeneous functional differential equation of pointwise type determined by the set \(\Theta = (\mathcal{A},\bar {\tau })\), the basic inequality of Theorem 1 (which guarantees the existence and uniqueness of a solution to the initial value problem (6), (7)) becomes

$${{L}_{\mathcal{A}}}\sum\limits_{j = 1}^s \,{{\mu }^{{ - |{{\tau }_{j}}|}}} < \ln{{\mu }^{{ - 1}}},\quad \mu \in (0,1),$$
(20)

where \({{L}_{\mathcal{A}}} = {{\max}_{{1 \leqslant j \leqslant s}}}\left\| {{{A}_{j}}} \right\|\).
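
For instance (an illustrative check with hypothetical matrices; the spectral norm is chosen here because the text does not fix a particular matrix norm), inequality (20) can be tested at a trial weight \(\mu \) as follows:

```python
# Check inequality (20) at a trial weight mu for a hypothetical pair (A_1, A_2)
# with deviations tau = (-1, 2); the spectral norm is used for ||A_j||.
import numpy as np

A_list = [np.array([[0.0, 0.1], [-0.1, 0.0]]), np.array([[0.05, 0.0], [0.0, 0.05]])]
taus = [-1, 2]
L_A = max(np.linalg.norm(A, 2) for A in A_list)

mu = 0.5
lhs = L_A * sum(mu ** (-abs(t)) for t in taus)
print(L_A, lhs, np.log(1.0 / mu), lhs < np.log(1.0 / mu))   # approx. 0.1, 0.6, 0.69, True
```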

Let us formulate the condition guaranteeing the bifurcation mentioned in the first part of Theorem 2.

Theorem 3. Let \(\Theta = (\mathcal{A},\bar {\tau }) = ({{A}_{1}}, \ldots ,{{A}_{s}},{{\tau }_{1}}, \ldots ,{{\tau }_{s}})\) be a given set for which the inequality

$${{L}_{\mathcal{A}}}\sum\limits_{j = 1}^s \,{{\mu }^{{ - |{{\tau }_{j}}|}}} < \ln{{\mu }^{{ - 1}}},\quad \mu \in (0,1),$$
(21)

has a solution. Then the estimate \(\overline \delta (\Theta ) > \underline \delta (\Theta )\) holds and the embedding

$$({{\mu }_{1}}({{L}_{\mathcal{A}}},h),{{\mu }_{2}}({{L}_{\mathcal{A}}},h)) \subseteq (\underline \mu (\Theta ),\bar {\mu }(\Theta ))$$

is valid. For each \(\mu \in (\underline \mu (\Theta ),\bar {\mu }(\Theta )]\), the initial value problem (6), (7) has a solution in the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(0)}}}(\mathbb{R})\). This solution is unique. At \(\mu = \bar {\mu }(\Theta ),\) \(\mu = \underline \mu (\Theta ) \ne 0\), there is a bifurcation. The existence of the solution is lost in the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(0)}}}(\mathbb{R}),\) \(\mu > \bar {\mu }(\Theta )\), and the uniqueness of the solution is lost in the space \(\mathcal{L}_{{\underline \mu (\Theta )}}^{n}{{C}^{{(0)}}}(\mathbb{R})\).

Proof. By Theorem 1, the existence and uniqueness theorem holds for the initial value problem (6), (7) for each \(\mu \in ({{\mu }_{1}}({{L}_{\mathcal{A}}},h),{{\mu }_{2}}({{L}_{\mathcal{A}}},h))\) in the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(0)}}}(\mathbb{R})\). Therefore, in the complex plane, the quasi-polynomial (8) has exactly \(n\) roots (counting their multiplicities) in the cylinder

$$\left| {\operatorname{Re} \lambda } \right| < {{\delta }_{1}}({{L}_{\mathcal{A}}},h),\quad {{\mu }_{1}}({{L}_{\mathcal{A}}},h) = {{e}^{{ - {{\delta }_{1}}({{L}_{\mathcal{A}}},h)}}}.$$

Moreover, these roots belong to the smaller cylinder

$$\left| {\operatorname{Re} \lambda } \right| \leqslant {{\delta }_{2}}({{L}_{\mathcal{A}}},h),\quad {{\mu }_{2}}({{L}_{\mathcal{A}}},h) = {{e}^{{ - {{\delta }_{2}}({{L}_{\mathcal{A}}},h)}}},$$

which implies the assertions of the theorem.

3.1 Example of Bifurcation

Consider a delay equation of the form

$$\dot {x}(t) = \alpha x(t - 1),\quad \alpha \in \mathbb{R},\quad t \in \mathbb{R}.$$
(22)

According to the notation used in this section, Eq. (22) is associated with the set \(\Theta = (\alpha , - 1)\) and the Lipschitz constant \({{L}_{\mathcal{A}}} = \left| \alpha \right|\). Consider the bifurcation of the solution space \(\mathcal{L}_{\mu }^{n}{{C}^{{(k)}}}(\mathbb{R})\), \(\mu \in (\underline \mu (\Theta ),\bar {\mu }(\Theta ))\), with respect to the parameters \(\mu \) and \(\alpha \).

The characteristic quasi-polynomial of Eq. (22) has the form

$$\lambda = \alpha {{e}^{{ - \lambda }}}.$$
(23)

For \(\alpha \geqslant 0\), it has a unique real root (see Fig. 1), which is denoted by \(\hat {\lambda }(\alpha )\). Note that \(\hat {\lambda }(\alpha )\) is a monotonically increasing function of \(\alpha \). The real parts of the complex roots are bounded from above, and the upper bound increases monotonically with respect to \(\alpha \) and is negative for small \(\alpha \).

Fig. 1. Plot of the solution to Eq. (23) for \(\alpha > 0\).

On the other hand, the characteristic equation (23) has two real roots for \( - {{e}^{{ - 1}}} < \alpha < 0\) (see Figs. 2 and 3) and a unique real root equal to \( - 1\) of multiplicity \(2\) for \(\alpha = - {{e}^{{ - 1}}}\). The right root is also denoted by \(\hat {\lambda }(\alpha )\), and \(\hat {\lambda }(\alpha )\) is a monotonically increasing function of \(\alpha \) (see Fig. 4). For \(\alpha < 0\), the real parts of the complex roots are bounded from above, and the upper bound decreases monotonically with respect to \(\alpha \). Moreover, this upper bound is smaller than the left real root, which tends to \( - \infty \) as \(\alpha \to 0 - 0\).
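
Since \(\lambda = \alpha {{e}^{{ - \lambda }}}\) is equivalent to \(\lambda {{e}^{\lambda }} = \alpha \), the real roots discussed above are the real branches of the Lambert \(W\) function (a standard identity, not stated explicitly in the text), so they can be evaluated directly; the snippet below is only a numerical cross-check:

```python
# Real roots of (23) via the Lambert W branches of lambda*e^lambda = alpha.
from math import exp
from scipy.special import lambertw

print(lambertw(0.2, k=0).real)                               # alpha > 0: the unique real root
print(lambertw(-0.2, k=0).real, lambertw(-0.2, k=-1).real)   # -1/e < alpha < 0: two real roots
print(lambertw(-exp(-1.0), k=0).real)                        # alpha = -1/e: double root, equal to -1
```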

Fig. 2. Plot of the solution to Eq. (23) for \(\alpha \in ( - {{e}^{{ - 1}}},0)\).

Fig. 3. Plot of the solution to Eq. (23) for \(\alpha = - {{e}^{{ - 1}}}\).

Fig. 4. Plot of the asymptotics of the spectrum of quasi-polynomial (23) for \(\alpha > {{e}^{{ - 1}}}\).

For the considered equation, the corresponding basic inequality of the existence and uniqueness theorem has the form

$$\left| \alpha \right|{{\mu }^{{ - 1}}} < \ln{{\mu }^{{ - 1}}}.$$
(24)

If we consider the function \(\xi (\mu ) = - \mu \ln\mu \) (see Fig. 5), it is easy to show that inequality (24) has a solution if and only if \(\left| \alpha \right| < {{e}^{{ - 1}}}\). Therefore, for \(\left| \alpha \right| < {{e}^{{ - 1}}},\) Theorem 1 (on the existence and uniqueness of a solution to Eq. (22) with given initial data) holds in the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(k)}}}(\mathbb{R})\) for any \(\mu \in ({{\mu }_{1}}(\left| \alpha \right|,1),{{\mu }_{2}}(\left| \alpha \right|,1))\).
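
For completeness, here is the elementary calculus behind this claim (it is not spelled out in the text): inequality (24) is equivalent to \(\left| \alpha \right| < - \mu \ln\mu = \xi (\mu )\), and

$$\xi '(\mu ) = - \ln\mu - 1 = 0\;\; \Leftrightarrow \;\;\mu = {{e}^{{ - 1}}},\qquad \xi ({{e}^{{ - 1}}}) = {{e}^{{ - 1}}},\qquad \xi (0 + ) = \xi (1) = 0,$$

so the maximum of \(\xi \) on \((0,1)\) equals \({{e}^{{ - 1}}}\), inequality (24) is solvable exactly when \(\left| \alpha \right| < {{e}^{{ - 1}}}\), and, by the concavity of \(\xi \), its solution set is an interval containing \(\mu = {{e}^{{ - 1}}}\).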

Fig. 5. Plot of the function \(\xi (\mu )\).

Case 1: \(\left| \alpha \right| < {{e}^{{ - 1}}}\). The existence and uniqueness theorem implies that \({{e}^{{ - |\hat {\lambda }(\alpha )|}}} \in \left( {{{\mu }_{1}}\left( {\left| \alpha \right|,1} \right),{{\mu }_{2}}\left( {\left| \alpha \right|,1} \right)} \right)\) and the characteristic equation has no other eigenvalues \(\tilde {\lambda }(\alpha )\) with the property \({{e}^{{ - |{\text{Re}}\tilde {\lambda }(\alpha )|}}} \in \left( {{{\mu }_{1}}\left( {\left| \alpha \right|,1} \right),{{\mu }_{2}}\left( {\left| \alpha \right|,1} \right)} \right)\). For the considered \(\Theta = (\alpha , - 1),\) the conditions of Theorem 3 are satisfied, specifically, \(\overline \delta (\Theta ) > \underline \delta (\Theta )\), \(\left( {{{\mu }_{1}}\left( {\left| \alpha \right|,1} \right),{{\mu }_{2}}\left( {\left| \alpha \right|,1} \right)} \right) \subseteq (\underline \mu (\Theta ),\bar {\mu }(\Theta ))\), and the solution space \(\mathcal{L}_{\mu }^{n}{{C}^{{(k)}}}(\mathbb{R})\) bifurcates with respect to the parameter \(\mu \) at \(\mu = \underline \mu (\Theta )\) and \(\mu = \bar {\mu }(\Theta )\) as described in Theorem 3.

Case 2: \(\alpha \geqslant {{e}^{{ - 1}}}\). As was noted above, the quasi-polynomial always has a unique real root \(\hat {\lambda }(\alpha )\). We are interested in the minimum value \({{\alpha }_{{\min}}} \geqslant {{e}^{{ - 1}}}\) for which the quasi-polynomial has a pair of complex conjugate roots \({{\lambda }_{1}}({{\alpha }_{{\min}}})\) and \({{\bar {\lambda }}_{1}}({{\alpha }_{{\min}}})\) such that \(\left| {\operatorname{Re} ({{\lambda }_{1}}({{\alpha }_{{\min}}}))} \right| = \left| {\operatorname{Re} ({{{\bar {\lambda }}}_{1}}({{\alpha }_{{\min}}}))} \right| = \left| {\hat {\lambda }({{\alpha }_{{\min}}})} \right|\) (see Fig. 4). Substituting these conditions into Eq. (23) yields the following condition on \(\hat {\lambda }({{\alpha }_{{\min}}})\):

$${{e}^{{2\hat {\lambda }({{\alpha }_{{\min}}})}}}\cos\left( {\hat {\lambda }({{\alpha }_{{\min}}})\sqrt {{{e}^{{4\hat {\lambda }({{\alpha }_{{\min}}})}}} - 1} } \right) = - 1.$$
(25)

The numerical solution of Eq. (25) gives the approximate value \(\hat {\lambda }({{\alpha }_{{\min}}}) = 0.8468\), and the corresponding value of \({{\alpha }_{{\min}}}\) is equal to 1.97496. Figure 4 shows the lines containing the eigenvalues of quasi-polynomial (23), the first pair of complex conjugate roots \({{\lambda }_{1}}({{\alpha }_{{\min}}})\) and \({{\bar {\lambda }}_{1}}({{\alpha }_{{\min}}})\) with a minimum (in absolute value) real part, and the unique real root \(\hat {\lambda }({{\alpha }_{{\min}}})\) equal to the absolute value of the real part of the indicated complex roots.
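
These numbers can be reproduced independently (a cross-check of mine, using the Lambert \(W\) branches rather than Eq. (25) itself): the roots of \(\lambda = \alpha {{e}^{{ - \lambda }}}\) are the branches \({{W}_{k}}(\alpha )\) of \(\lambda {{e}^{\lambda }} = \alpha \), and \({{\alpha }_{{\min}}}\) is the value at which \({{W}_{0}}(\alpha ) = - \operatorname{Re} {{W}_{1}}(\alpha )\):

```python
# Cross-check of alpha_min and of hat-lambda(alpha_min) via Lambert W branches.
from scipy.optimize import brentq
from scipy.special import lambertw

def gap(alpha):
    # real root W_0(alpha) minus |Re| of the first complex pair W_{+-1}(alpha)
    return lambertw(alpha, k=0).real + lambertw(alpha, k=1).real

alpha_min = brentq(gap, 0.5, 5.0)
print(alpha_min, lambertw(alpha_min, k=0).real)   # approximately 1.975 and 0.8468
```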

In this case, for \(\Theta = (\alpha , - 1)\) with \({{e}^{{ - 1}}} \leqslant \alpha < {{\alpha }_{{\min}}}\), the condition of Theorem 2 is satisfied, namely, \(\overline \delta (\Theta ) > \underline \delta (\Theta )\). Then, in the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(k)}}}(\mathbb{R}),\;\mu \in (\underline \mu (\Theta ),\bar {\mu }(\Theta ))\), the existence and uniqueness theorem for Eq. (22) with given initial data holds and the solution space bifurcates with respect to the parameter \(\mu \) at \(\mu = \underline \mu (\Theta )\) and \(\mu = \bar {\mu }(\Theta )\) as described in this theorem. The existence and uniqueness theorem implies the inclusion \({{e}^{{ - \hat {\lambda }(\alpha )}}} \in (\underline \mu (\Theta ),\bar {\mu }(\Theta ))\) and, accordingly, the existence and uniqueness theorem for Eq. (22) with given initial data holds in the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(k)}}}(\mathbb{R}),\) \(\mu = {{e}^{{ - \hat {\lambda }(\alpha )}}}\).

In the case of \({{\alpha }_{{\min}}}\), we have \(\overline \delta (\Theta ) = \underline \delta (\Theta )\) and, accordingly, \(\underline \mu (\Theta ) = \bar {\mu }(\Theta )\). By Theorem 2, the solution space bifurcates with respect to \(\mu \) at \(\mu = \underline \mu (\Theta ) = \bar {\mu }(\Theta )\) as described in this theorem. Moreover, \(\underline \mu (\Theta ) = \bar {\mu }(\Theta ) = {{e}^{{ - \hat {\lambda }({{\alpha }_{{{\text{min}}}}})}}}\) and a 3-parameter family of solutions appears in the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(k)}}}(\mathbb{R}),\) \(\mu = {{e}^{{ - \hat {\lambda }({{\alpha }_{{{\text{min}}}}})}}}\).

Case 3: \(\alpha \leqslant - {{e}^{{ - 1}}}\). It was previously noted that the characteristic equation (23) has two real roots of multiplicity \(1\) for \(\alpha \in ( - {{e}^{{ - 1}}},0)\) and the unique real root \(\hat {\lambda }( - {{e}^{{ - 1}}}) = - 1\) of multiplicity \(2\) for \(\alpha = - {{e}^{{ - 1}}}\) (see Figs. 2 and 3). The solution space associated with the right root \(\hat {\lambda }(\alpha )\) of the characteristic equation was described in Case 1. It was shown that the existence and uniqueness theorem for Eq. (22) with given initial data holds in the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(k)}}}(\mathbb{R}),\) \(\mu = {{e}^{{ - |\hat {\lambda }(\alpha )|}}}\).

As \(\alpha \) decreases further, there are no real roots (see Fig. 6). Moreover, there exists \(d > 0\) such that, for \( - d < \alpha < - {{e}^{{ - 1}}}\), the quasi-polynomial (23) has a unique pair of complex conjugate roots with a minimum real part in absolute value.

Fig. 6. Plot of the asymptotics of the spectrum of quasi-polynomial (23) for \(\alpha < - {{e}^{{ - 1}}}\).

For \(\alpha = - {{e}^{{ - 1}}}\), we have \(\overline \delta (\Theta ) = \underline \delta (\Theta )\) and, accordingly, \(\underline \mu (\Theta ) = \bar {\mu }(\Theta )\). By Theorem 2, the solution space bifurcates with respect to \(\mu \) at \(\mu = \underline \mu (\Theta ) = \bar {\mu }(\Theta )\) as described in this theorem. Moreover, \(\underline \mu (\Theta ) = \bar {\mu }(\Theta ) = {{e}^{{ - |\hat {\lambda }( - {{e}^{{ - 1}}})|}}}\) and a 2-parameter family of solutions appears in the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(k)}}}(\mathbb{R}),\) \(\mu = {{e}^{{ - |\hat {\lambda }( - {{e}^{{ - 1}}})|}}}\).

Let us examine the behavior of the solution space with respect to the parameter \(\alpha \) (bifurcation with respect to \(\alpha \)). For each \(\Theta = (\alpha , - 1),\) \(\alpha \in \mathbb{R}\), we consider solutions of the functional differential equation from the space \(\mathcal{L}_{\mu }^{n}{{C}^{{(k)}}}(\mathbb{R}),\) \(\mu \in (\underline \mu (\Theta ),\bar {\mu }(\Theta ))\). It follows from Cases 1–3 that, for \(\alpha \in ( - {{e}^{{ - 1}}},{{\alpha }_{{{\text{min}}}}}),\) the dimension of the solution space is 1 and the existence and uniqueness theorem holds true. The dimension of the solution space is equal to 2 for \(\alpha = - {{e}^{{ - 1}}}\) and to 3 for \(\alpha = {{\alpha }_{{{\text{min}}}}}\).