1 Introduction

At present, the theory of singular perturbations comprises a large number of different asymptotic methods. The foundations of the theory of asymptotic integration were laid in the works of Prandtl, Birkhoff, and Schlesinger. Of fundamental importance for the development of the theory of singular perturbations is the work of A.N. Tikhonov, where the classical theorems on the passage to the limit in nonlinear singularly perturbed problems were proved. This theory was developed in depth in the works of V. Vazov, M.I. Vishik, L.A. Lyusternik, A.B. Vasil'eva, V.F. Butuzov, N.N. Nefedov, Yu.A. Mitropol'skii, S.A. Lomov, A.N. Filatov, N.Kh. Rozov, N.I. Shkil, M.V. Fedoryuk, V.P. Maslov, V.A. Trenogin, N.N. Moiseev, M.M. Khapaev, V.F. Safonov, A.A. Bobodzhanov, A.V. Nesterov, V.G. Zadorozhniy, M.G. Dmitriev, M.I. Imanaliev, K.A. Kasymov, and other researchers.

In contrast to regular perturbation theory, the theory of singular perturbations is an asymptotic theory (therefore, it is called the theory of asymptotic integration) [1,2,3].

The regularization method of Lomov [4] makes it possible, under certain restrictions on the data of the problem [5, 7], to construct solutions of singularly perturbed problems as series in powers of a small parameter that converge in the usual (rather than asymptotic) sense. Such solutions are called pseudo-analytic (pseudoholomorphic); they were obtained mainly for linear problems using the theory of spaces of vectors of exponential type [5]. The aim of our research is to construct, on the basis of the regularization method of S.A. Lomov and its generalizations, an analytic theory of singular perturbations, a branch of mathematics intended to put the regular and singular theories on an equal footing.

2 Functional Analysis in the Theory of Singular Perturbations. Linear Problems

The first step in constructing the theory of ordinary convergence of series in powers of a small parameter was the study of conditions for the existence of solutions, analytic in the small parameter, of the equation

$$\begin{aligned} \varepsilon \frac{dy}{dt}-A(t)y=h(t), t\in [0,T], \end{aligned}$$
(1)

in which \(A(t)\in C([0,T];\mathcal {L}(E))\), the algebra of operator-valued functions continuous on the interval [0, T] with values in the space of bounded operators acting in a Banach space E.

Denote by \(C_E=C([0,T];E)\) the Banach space of functions continuous on the interval [0, T] with values in E, and let A(t) be continuously invertible for each t from this interval. Denote by \(F=A^{-1}(t)\frac{d}{dt}\) the closed unbounded operator acting in \(C_E\). We take an arbitrary \(c>0\) and denote by \(Y^c\) the set of elements \(u(t)\in C_E\) such that

$$\begin{aligned} \sup \limits _{k} \frac{\Vert F^k u\Vert _{C_E}}{c^k}< +\infty , \end{aligned}$$
(2)

with the norm defined by formula (2), and call it the space of vectors of exponential type \(\le \)c. One can prove that \(Y^c\) is a Banach space [4, 5]. Further, we take the union of these spaces over all positive c, equip it with the topology of the inductive limit, and call the resulting space the space of vectors of exponential type: \({{\exp }_{F}}E=\lim \limits _{c\rightarrow +\infty }\mathrm{ind}\,{{Y}^{c}}\). In [5] it is proved that any vector \(u(t)\in \exp _F E\) has the representation

$$\begin{aligned} \begin{array}{c} u(t)=u(0)+\int _0^t A(t_1)dt_1\cdot Fu(0)+\int _0^t A(t_1)dt_1 \int _0^{t_1} A(t_2)dt_2\cdot F^2u(0)\\ +\ldots +\int _0^t A(t_1)dt_1 \int _0^{t_1} A(t_2)dt_2\ldots \int _0^{t_{k-1}} A(t_k)dt_k\cdot F^ku(0)+\ldots , \end{array} \end{aligned}$$
(3)

and here the power series \(u(0)+\varepsilon Fu(0)+\ldots +\varepsilon ^k F^k u(0)+\ldots \) converges in a certain neighborhood of the value \(\varepsilon =0\).
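To see what representation (3) looks like in the simplest setting, one can take the scalar case \(E=\mathbb {R}\) with \(A(t)\equiv 1\), so that \(F=d/dt\) and the iterated integrals in (3) reduce to \(t^k/k!\); the representation then becomes the Taylor series of u at zero. A small symbolic sketch (the choice \(u=\sin t\) is purely illustrative):

```python
import sympy as sp

t = sp.Symbol('t')
u = sp.sin(t)  # an illustrative vector of exponential type for F = d/dt

# with A(t) = 1 the iterated integrals in (3) equal t**k/k!, and
# F**k u(0) = u^{(k)}(0), so (3) is the Taylor expansion of u at t = 0
N = 8
partial_sum = sum(sp.diff(u, t, k).subs(t, 0)*t**k/sp.factorial(k)
                  for k in range(N))
taylor = sp.series(u, t, 0, N).removeO()
difference = sp.simplify(taylor - partial_sum)
```

Here the first N iterated-integral terms of (3) coincide with the Taylor polynomial of the same order.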

Theorem 1

[5]. In order that there exist a solution of Eq. (1) analytic at the point \(\varepsilon =0\), it is necessary and sufficient that

$$\begin{aligned} \int _0^t h(\tau )d\tau \in \exp _F E, \end{aligned}$$
(4)

and such a solution is unique.

If we supplement Eq. (1) with an initial condition, then the resulting problem is singularly perturbed, and the corresponding space of vectors of exponential type is constructed for the so-called basic operator of the regularization method [4] (in any case, condition (4) is necessary).

We consider the question of the existence of an analytic solution at the point \(\varepsilon =0\) of the equation

$$\begin{aligned} \varepsilon \frac{dy}{dt} -A(t)y=h(t), \quad t\in [0,T], \end{aligned}$$
(5)

when the operator A(t) for each \( t \in [0, T] \) is closed unbounded.

S.A. Lomov posed the problem of describing classes of unbounded operators A(t) for which the space \(\exp _F E\) is as simple as possible, for example, coincides with a function space well studied in analysis.

Theorem 2

Let A(t) be a closed unbounded operator acting for each \(t\in [0,T]\) in a Banach space E, with a domain of definition independent of t, and let the inverse operator \(A^{-1}(t)\in \mathcal {L}(E)\) \(\forall t\in [0,T]\) admit an analytic continuation to the disk \( | t | \le R \), where \( R> T \). Then, if there exists a constant closed unbounded operator B such that for all natural n

$$\begin{aligned} \max \limits _{{|\eta _i|=R}\atop {i=\overline{1,n}}} \Vert A^{-1}(\eta _1)\ldots A^{-1}(\eta _n)\Vert \le \Vert B^{-n}\Vert , \end{aligned}$$

and the resolvent of this operator \( R (\lambda ; B) \) is an entire function of exponential type, then any vector-valued function u(t) with values in E analytic in the indicated disk belongs to \(\exp _F E\).

In the proof of this theorem, the integral Cauchy formula for functions of several complex variables was used [6].

We note that for A(t) we can take an ordinary linear differential operator with zero initial conditions, that is, consider equations of the form

$$\begin{aligned} \varepsilon u_t-\sum _{m=1}^{n} a_m(t,x) \partial _x^m u=h(t,x), \end{aligned}$$

in which the functions h(t, x) and \( a_m (t, x) \) (\( m = \overline{1, n} \)) are analytic on the rectangle \( [0, T] \times [0, X] \). For example, the equation \( \varepsilon u_t = (t + 1) u_x + tx^2 \), \( t \ge 0, \) where \(D((t+1)\partial _x)=\{y(x):\ y(x)\in C^1[0,X], \ y(0)=0\}\), has a unique solution analytic at the point \( \varepsilon = 0 \):

$$\begin{aligned} u(t,x,\varepsilon )=-\frac{tx^3}{3(t+1)}+2\sum _{k=1}^{\infty } \varepsilon ^k \frac{(-1)^k (2k-1)!! x^{k+3}}{(k+3)! (t+1)^{2k+1}}. \end{aligned}$$

This series converges for \( |\varepsilon | <1/(2X) \) uniformly in the domain \(\{(t,x): \ t\ge 0, \ 0\le x\le X\}\).
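This example can be checked symbolically: the partial sums of the printed series (whose coefficients correspond to the forcing term \(tx^2\)) satisfy the equation up to the truncation order. A sympy sketch:

```python
import sympy as sp

t, x, eps = sp.symbols('t x epsilon', positive=True)

N = 4  # truncation order
u = (-t*x**3/(3*(t + 1))
     + 2*sum(eps**k*(-1)**k*sp.factorial2(2*k - 1)*x**(k + 3)
             / (sp.factorial(k + 3)*(t + 1)**(2*k + 1))
             for k in range(1, N + 1)))

# residual of eps*u_t = (t+1)*u_x + t*x**2 for the truncated series
res = sp.expand(eps*sp.diff(u, t) - (t + 1)*sp.diff(u, x) - t*x**2)

# every coefficient of eps**k, k = 0..N, must vanish;
# the residual itself is O(eps**(N+1))
all_orders_vanish = all(sp.simplify(res.coeff(eps, k)) == 0
                        for k in range(N + 1))
```

The same recursion that generates the coefficients (order \(\varepsilon ^k\): \((t+1)\partial _x u_k=\partial _t u_{k-1}\)) is what the coefficient check confirms.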

3 Algebraic Foundations of the Theory of Singular Perturbations and Holomorphic Regularization

The well-known Poincaré expansion theorem guarantees the existence and uniqueness of a solution \( y (t,\varepsilon ) \) of the Cauchy problem

$$\begin{aligned} \frac{dy}{dt}=f(t,y,\varepsilon ), \quad y(t_0,\varepsilon )=y_0, \end{aligned}$$

analytic at the point \( \varepsilon = 0 \) if \( f (t, y, \varepsilon ) \) is analytic at the point \( (t_0, y_0,0) \) as a function of three variables. It is clear that for a singularly perturbed problem

$$\begin{aligned} \varepsilon \frac{dy}{dt}=f(t,y), \quad y(t_0,\varepsilon )=y_0 \end{aligned}$$
(6)

this is not so (in the general case). It is proved (and this is the main statement of the method of holomorphic regularization) that, although the solution itself need not depend analytically on \( \varepsilon \), the integrals of Eq. (6) inherit the analytic dependence on \( \varepsilon \) [8]. The algebraic basis of the method consists of commutation relations and equivalent homomorphisms of algebras of analytic functions of different numbers of variables [9]. We denote by \( \mathcal {A} _ {t_0} \) the algebra of functions of t analytic on the interval \([t_0,T]\), and by \(\mathcal {A}_{t_0y_0}\) the algebra of functions of the variables t and y analytic on the rectangle \( \varPi = [t_0, T] \times [y_0- \gamma , y_0 + \gamma ] \), where \( \gamma \) is a sufficiently large positive number.

To describe the algorithm of the method of holomorphic regularization, we consider the right-hand side of Eq. (6) to be a scalar function satisfying condition \((\alpha )\): \(f(t,y)\in \mathcal {A}_{t_0y_0}\) and \(f(t,y)\ne 0\) \(\forall (t,y)\in \varPi \). From the nonlinear Eq. (6) we pass to the linear equation for its integrals

$$\begin{aligned} \varepsilon U_t+f(t,y) U_y=0, \end{aligned}$$
(7)

whose solution, assuming the operator \( \partial _t \) to be subordinate to the operator \( f \partial _y \), we seek as a series in powers of \( \varepsilon \):

$$\begin{aligned} U(t,y,\varepsilon )=U_0(t,y)+\varepsilon U_1(t,y)+\ldots +\varepsilon ^n U_n(t,y)+\ldots . \end{aligned}$$
(8)

In accordance with the method of undetermined coefficients, we obtain a sequence of problems for determining the coefficients of this series:

$$\begin{aligned} \begin{array}{l} fU_{0,y}=0,\\ fU_{1,y}=-U_{0,t},\\ fU_{2,y}=-U_{1,t},\\ \ldots \ldots \ldots \ldots \ldots \\ fU_{n,y}=-U_{n-1,t},\\ \ldots \ldots \ldots \ldots \ldots \end{array} \end{aligned}$$
(9)

As a solution of the first equation of this sequence, we can take an arbitrary function \( \varphi (t) \in \mathcal {A} _ {t_0} \). We now introduce the following notation: \(g_k\equiv \frac{1}{f(t,y_k)}\), \(k=1,2,\ldots \); if h(t, y) is some function, then

$$\begin{aligned} J_kh\equiv \int _{y_0}^{y_k} h(t,y_{k+1}) dy_{k+1}, \quad k=1,2,\ldots ; \quad J_0h\equiv \int _{y_0}^{y} h(t,y_1) dy_{1}. \end{aligned}$$

Then we have:

$$\begin{aligned} \begin{array}{c} U(t,y,\varepsilon )=\varphi -\varepsilon J_0(g_1\partial _t\varphi )+\varepsilon ^2 J_0(g_1\partial _t J_1(g_2\partial _t\varphi ))\\ -\,\varepsilon ^3 J_0(g_1\partial _t J_1(g_2\partial _t J_2(g_3\partial _t\varphi )))+\ldots . \end{array} \end{aligned}$$
(10)
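The recursion (9) together with the iterated integrals in (10) is straightforward to implement symbolically. In the sketch below the choices \(f=t+1\), \(y_0=0\), \(\varphi (t)=t\) are purely illustrative (an f independent of y keeps all integrals polynomial); the truncated series is then checked against Eq. (7):

```python
import sympy as sp

t, y, eps = sp.symbols('t y epsilon')
y0 = 0        # illustrative initial value
f = t + 1     # illustrative right-hand side, nonzero on the region considered
phi = t       # an arbitrary analytic function phi(t)

# recursion (9): f*U_{n,y} = -U_{n-1,t}, normalized by U_n(t, y0) = 0, n >= 1
U = [phi]
for _ in range(3):
    U.append(sp.integrate(-sp.diff(U[-1], t)/f, (y, y0, y)))

# the truncated series must satisfy eps*U_t + f*U_y = 0 up to order eps**3
S = sum(eps**n*U[n] for n in range(4))
res = sp.expand(eps*sp.diff(S, t) + f*sp.diff(S, y))
all_vanish = all(sp.simplify(res.coeff(eps, k)) == 0 for k in range(4))
```

Each appended term realizes one application of \(J_0(g_1\partial _t(\cdot ))\) from (10).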

To prove the convergence of the power series (10), we use the lemma, which is proved by the method of mathematical induction.

Lemma 1

If in the expression \(\partial _t(b_1(\partial _t(b_2(\partial _t\ldots (\partial _t b_n))\ldots )\), in which \(b_1,\ldots \), \(b_n\) are functions of the variable t, we expand the brackets by the Leibniz rule for the derivative of a product and replace each derivative \(\partial _t^sb_r\), \(1\le r\le n\), \(0\le s\le n\), by s!, then the resulting sum equals \((2n-1)!!\).
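The lemma can be confirmed by direct symbolic computation for small n; in the sketch below (the helper name lemma_sum is ours) the nested derivative is expanded, every \(\partial _t^s b_r\) is replaced by \(s!\) (and each undifferentiated \(b_r\) by \(0!=1\)), and the resulting sum is compared with \((2n-1)!!\):

```python
import sympy as sp

t = sp.Symbol('t')

def lemma_sum(n):
    b = [sp.Function(f'b{r}')(t) for r in range(n)]
    # build the nested expression d/dt(b1*(d/dt(b2*(... d/dt(b_n) ...))))
    expr = sp.diff(b[-1], t)
    for r in range(n - 2, -1, -1):
        expr = sp.diff(b[r]*expr, t)
    expr = sp.expand(expr)
    # replace derivatives first, then the remaining undifferentiated factors
    for r in range(n):
        for s in range(n, 0, -1):
            expr = expr.subs(sp.Derivative(b[r], (t, s)), sp.factorial(s))
        expr = expr.subs(b[r], 1)
    return expr

checks = [lemma_sum(n) == sp.factorial2(2*n - 1) for n in range(1, 5)]
```

For n = 2, for instance, \(\partial _t(b_1\partial _t b_2)=b_1'b_2'+b_1b_2''\mapsto 1!\cdot 1!+1\cdot 2!=3=3!!\).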

We represent the coefficient \( U_n \) of the series (10) as follows:

$$\begin{aligned} U_n=(-1)^n (J_0J_1\ldots J_{n-1})[g_1(\partial _t(g_2(\partial _t\ldots \partial _t(g_n\partial _t\varphi ))\ldots )], \end{aligned}$$
(11)

where

$$\begin{aligned} (J_0J_1\ldots J_{n-1})[H(t,y_1,\ldots ,y_n)]\equiv \int _{y_0}^{y} dy_1 \int _{y_0}^{y_1} dy_2 \ldots \int _{y_0}^{y_{n-1}} H(t,y_1,\ldots ,y_n) dy_n. \end{aligned}$$

The functions \( \varphi \), \( g_1, \ldots , g_n \) are analytic in t on the segment \([t_0, T]\) uniformly in \(y\in [y_0-\gamma ,y_0+\gamma ]\), so \(\exists C>0\): \(|\partial _t^s \varphi |\le C^s s!\), \(|\partial _t^s g_r|\le C^s s!\) on the rectangle \(\varPi \). Thus,

$$\begin{aligned} |U_n(t,y)|\le \left| \int _{y_0}^{y} dy_1 \int _{y_0}^{y_1} dy_2 \ldots \int _{y_0}^{y_{n-1}}dy_n\right| C^n(2n-1)!!, \end{aligned}$$

from which it follows that \(|U_n(t,y)|\le \frac{\gamma ^n C^n(2n-1)!!}{n!}\) \(\forall (t,y)\in \varPi \).

The convergence of the series (10) in a neighborhood of the point \( \varepsilon = 0 \) uniformly on the rectangle \( \varPi \) follows from the d’Alembert test.
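Indeed, the estimate above yields the numerical majorant \(M_n=\gamma ^n C^n (2n-1)!!/n!\) for the coefficients of (10), and

$$\begin{aligned} \frac{M_{n+1}}{M_n}=\gamma C\,\frac{2n+1}{n+1}\rightarrow 2\gamma C, \quad n\rightarrow \infty , \end{aligned}$$

so the series (10) converges absolutely and uniformly on \(\varPi \) for \(|\varepsilon |<\frac{1}{2\gamma C}\).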

Since \( \varphi (t) \) enters expression (10) linearly, for each fixed sufficiently small \( \varepsilon \) we can regard \( U (t, y,\varepsilon ) \) as the image of some linear operator: \(U(t,y,\varepsilon )=A_f^\varepsilon [\varphi ]\).

Theorem 3

The mappings \( \{A_f ^ \varepsilon \} \) form a family of homomorphisms from the algebra \(\mathcal {A}_{t_0}\) to the algebra \(\mathcal {A}_{t_0y_0}\), analytic at the point \( \varepsilon = 0 \).

Proof. First we establish the commutation relation for the operator \( A_f ^ \varepsilon \). Since for any \( \varphi (t) \) the function \( U (t, y, \varepsilon ) \) is an integral of Eq. (6), there exists a function \( \varPhi \) of one variable such that \( A_f ^ \varepsilon [\varphi (t)] = \varPhi (A_f ^ \varepsilon [t]) \). Setting \( y = y_0 \) in both sides of this equality gives \( \varphi (t) \equiv \varPhi (t) \), i.e., we obtain the commutation relation

$$\begin{aligned} A_f^\varepsilon [\varphi (t)]=\varphi (A_f^\varepsilon [t]). \end{aligned}$$
(12)

Further, \(A_f^\varepsilon [\varphi _1\varphi _2]=(\varphi _1\varphi _2)(A_f^\varepsilon [t])=\varphi _1(A_f^\varepsilon [t]) \varphi _2(A_f^\varepsilon [t])=A_f^\varepsilon [\varphi _1]A_f^\varepsilon [\varphi _2], \) i.e. \(A_f^\varepsilon : \mathcal {A}_{t_0}\rightarrow \mathcal {A}_{t_0y_0}\) is a homomorphism.    \(\square \)

Thus, it is proved that the images of the homomorphisms \( \{A_f ^ \varepsilon \} \) are integrals of Eq. (6), analytic in the parameter.

4 Pseudoholomorphic Solutions of Singularly Perturbed Problems

Definition 1

The solution \( y (t,\varepsilon ) \) of the initial problem (6) is called pseudoholomorphic at \( \varepsilon = 0 \) if there exists a function \( Y (t, \eta , \varepsilon ) \), analytic in the third variable in a neighborhood of the value \( \varepsilon = 0 \) for each \( t \in [t_0, T] \) and every \( \eta \) from some unbounded set G, such that for some analytic function \( \varphi (t) \) the equality \(y(t,\varepsilon )=Y(t,\varphi (t)/\varepsilon ,\varepsilon )\) holds \(\forall t\in [t_0,T]\).

The following theorem gives sufficient conditions for the existence of a pseudoholomorphic solution y of the Cauchy problem (6).

Theorem 4

Suppose that the function \( \varphi (t) \), analytic on the interval \( [t_0, T] \), is such that \( \varphi (t_0) = 0 \), and the equation

$$\begin{aligned} \varphi '(t)\int _{y_0}^{y} \frac{dy_1}{f(t,y_1)}=\frac{\varphi (t)}{\varepsilon } \end{aligned}$$
(13)

has a solution of the form

$$\begin{aligned} y=Y_0\left( t,\varPsi \left( \frac{\varphi (t)}{\varepsilon }\right) \right) , \end{aligned}$$
(14)

in which \( q = \varPsi (\eta ) \) is an entire function with an asymptotic value equal to \( p_0 \), and the function \( Y_0 (t, q) \) is analytic on the rectangle \( \varPi _ {tq} ^ 0 = [t_0 , T] \times Q \), where Q is a segment containing the points \( p_0 \) and \( \varPsi (0) \). Then the solution \( y (t, \varepsilon ) \) of the Cauchy problem (6) is pseudoholomorphic at the point \(\varepsilon =0\).

Proof. We write the general integral of the Eq. (6) in the form

$$\begin{aligned} \widetilde{U}(t,y,\varepsilon )=\frac{\varphi (t)}{\varepsilon }, \end{aligned}$$
(15)

where

$$\begin{aligned} \begin{array}{c} \widetilde{U}(t,y,\varepsilon )=J_0(g_1\partial _t\varphi )-\varepsilon J_0(g_1\partial _t J_1(g_2\partial _t\varphi ))\\ +\,\varepsilon ^2 J_0(g_1\partial _t J_1(g_2\partial _t J_2(g_3\partial _t\varphi )))+\ldots . \end{array} \end{aligned}$$
(16)

We apply the function \( \varPsi \) to the left and right sides of (15):

$$\begin{aligned} \varPsi (\widetilde{U}(t,y,\varepsilon ))=\varPsi \left( \frac{\varphi (t)}{\varepsilon }\right) . \end{aligned}$$

We denote the right-hand side of the resulting equality by q and select the principal term on the left-hand side:

$$\begin{aligned} \varPsi (J_0(g_1\partial _t\varphi ))+\varepsilon F(t,y,\varepsilon )=q. \end{aligned}$$
(17)

Next, we take \( p> p_0 \) very close to \( p_0 \) and, assuming that \( p_0 <\varPsi (0) \), construct the rectangle \(\varPi _{tq}=[t_0,T]\times [p,\varPsi (0)]\). For Eq. (17) all the conditions of the implicit function theorem are satisfied on the rectangle \(\varPi _{tq}\): for \(\varepsilon =0\) its solution is \(y=Y_0(t,q)\), and the estimate of the modulus of this function does not depend on p because of the analyticity on the rectangle \( \varPi _{tq}^0 \). Consequently, in a neighborhood \( \sigma _ {tq} \) of each point \( (t, q) \in \varPi _ {tq} \) there exists a solution \( y = Y (t, q, \varepsilon ) \) of Eq. (17), analytic in some neighborhood of the value \( \varepsilon = 0 \). We choose a finite subcovering from the covering \( \{\sigma _ {tq} \} \) of the rectangle \( \varPi _ {tq} \). Then the function \( Y (t, q, \varepsilon ) \) is analytic, uniformly on this rectangle, in the smallest of the neighborhoods \( 0<\varepsilon <\varepsilon _0 \) corresponding to the finite subcovering.

Let the parameter \( \varepsilon \) in Eq. (6) satisfy the inequality \(0<\varepsilon <\varepsilon _0\), and let the curve \(\varGamma :q=\varPsi (\varphi (t)/\varepsilon )\) lie entirely in the rectangle \(\varPi _{tq}\). Then the representation

$$\begin{aligned} y(t,\varepsilon )=\sum _{n=0}^{\infty } \varepsilon ^n Y_n(t,\varphi (t)/\varepsilon ), \end{aligned}$$
(18)

holds, and this series converges uniformly on the segment \( [t_0, T] \). If the rectangle \( \varPi _ {tq} \) contains only part of the curve \( \varGamma \), then the series (18) converges uniformly on some interval \( [t_0, t_1] \subset [t_0, T] \), and a pseudoholomorphic extension of \( y (t, \varepsilon ) \) to the right is required.    \(\square \)

5 Generalizations and Examples

The method of holomorphic regularization admits a generalization to the case of equations of higher orders and systems [10, 12, 13]. In this section of the paper we give examples covering a sufficiently wide range of singularly perturbed initial problems.

\(1^\circ .\) \(\varepsilon y'=y^2-e^{2t}, \quad y(0,\varepsilon )=0\),

      \(\displaystyle {y(t,\varepsilon )=e^t\tanh \frac{1-e^t}{\varepsilon }+\frac{\varepsilon }{2} \tanh ^2\frac{1-e^t}{\varepsilon }+\ldots }\).

\(2^\circ .\) \(\varepsilon y'=e^{-ye^t}-10, \quad y(0,\varepsilon )=0\),

      \(\displaystyle {y(t,\varepsilon )=e^{-t}\left( 1+\frac{\varepsilon e^{-t}}{10}\right) \ln \frac{1+9e^{10(1-e^t)}}{10}+\ldots }\).

\(3^\circ .\) \(\varepsilon y''+yy'-y=0, \quad y(t_0,\varepsilon )=y_0>0, \quad y'(t_0,\varepsilon )=v_0\ne 1\),

      \(\displaystyle {y(t,\varepsilon )=t-t_0+y_0+\frac{\varepsilon }{y_0}(v_0-1)\left( 1-e^{-\frac{y_0(t-t_0)}{\varepsilon }} \right) +\ldots }\).
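For example \(3^\circ \) both initial conditions can be checked directly. In the sketch below the boundary-layer factor is written as \(1-e^{-y_0(t-t_0)/\varepsilon }\) (with this sign \(y'(t_0,\varepsilon )=v_0\) holds exactly), and the residual of the equation for the two-term approximation is uniformly \(O(\varepsilon )\):

```python
import sympy as sp

t, eps = sp.symbols('t epsilon', positive=True)
t0, y0, v0 = sp.symbols('t0 y0 v0', positive=True)

# two-term approximation: outer solution plus boundary-layer correction
y = t - t0 + y0 + eps/y0*(v0 - 1)*(1 - sp.exp(-y0*(t - t0)/eps))

ic_value = sp.simplify(y.subs(t, t0) - y0)              # y(t0) - y0
ic_slope = sp.simplify(sp.diff(y, t).subs(t, t0) - v0)  # y'(t0) - v0

# residual of eps*y'' + y*y' - y: every surviving term carries a factor
# eps or the exponentially small layer function, hence is uniformly O(eps)
res = sp.simplify(eps*sp.diff(y, t, 2) + y*sp.diff(y, t) - y)
```

Note that the outer part \(t-t_0+y_0\) alone solves \(yy'-y=0\) exactly; only the layer term produces the \(O(\varepsilon )\) residual.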

\(4^\circ .\) \(\varepsilon y''=e^{2t}-e^{2y}(y')^2, \quad y(0,\varepsilon )=y'(0,\varepsilon )=0\),

      \(\displaystyle {y(t,\varepsilon )=t-\varepsilon e^t \ln \left( 1-\tanh \frac{1-e^t}{\varepsilon }\right) +\ldots }\).

\(5^\circ .\) A mixed problem for a nonlinear parabolic equation:

$$\begin{aligned} \begin{array}{l} \varepsilon u_t=u_{xx}-u^3,\\ u(0,x,\varepsilon )=\sin x,\\ u(t,0,\varepsilon )=u(t,\pi ,\varepsilon )=0. \end{array} \end{aligned}$$

Second-order Galerkin approximation:

$$\begin{aligned} u_2(t,x,\varepsilon )=\frac{2e^{-t/\varepsilon }\sin x}{\sqrt{7-3e^{-2t/\varepsilon }}}. \end{aligned}$$
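The amplitude here solves the equation obtained by projecting the ansatz \(u=a(t)\sin x\) onto \(\sin x\) (our reconstruction of the Galerkin step, using \(\sin ^3x=\frac{3}{4}\sin x-\frac{1}{4}\sin 3x\)): \(\varepsilon a'=-a-\frac{3}{4}a^3\), \(a(0)=1\). Writing \(X=e^{-t/\varepsilon }\), so that \(\varepsilon X'=-X\), the amplitude \(a=2X/\sqrt{7-3X^2}\) can be checked symbolically:

```python
import sympy as sp

X = sp.Symbol('X', positive=True)  # X = exp(-t/eps), so eps*dX/dt = -X

a = 2*X/sp.sqrt(7 - 3*X**2)

# eps*a'(t) = (da/dX)*(eps*dX/dt) = -X*da/dX by the chain rule,
# so the amplitude equation eps*a' = -a - (3/4)*a**3 becomes algebraic in X
res = sp.simplify(-X*sp.diff(a, X) + a + sp.Rational(3, 4)*a**3)

initial = a.subs(X, 1)  # at t = 0 we have X = 1, and the amplitude equals 1
```

The substitution \(X=e^{-t/\varepsilon }\) turns the exponential identity into a purely algebraic one, which sympy verifies exactly.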

\(6^\circ .\) The Tikhonov type system

$$\begin{aligned} \begin{array}{l} \left\{ \begin{array}{l} y'=v^2,\\ \varepsilon v'=v^2-y^2e^{2t}, \end{array} \right. \\ y(0,\varepsilon )=-2, \quad v(0,\varepsilon )=0 \end{array} \end{aligned}$$

has the following pseudoholomorphic solution:

$$\begin{aligned} \left\{ \begin{array}{l} y(t,\varepsilon )=-2 e^{-2t}-2\varepsilon e^{-t}\tanh \frac{2(e^{-t}-1)}{\varepsilon }+\ldots ,\\ v(t,\varepsilon )=2 e^{-t}\tanh \frac{2(e^{-t}-1)}{\varepsilon }+\ldots . \end{array} \right. \end{aligned}$$

In the paper [14] an example of a weakly nonlinear singularly perturbed system is given:

$$\begin{aligned} \begin{array}{l} \left\{ \begin{array}{l} \varepsilon y'_1=-(e^t+1)y_1+2e^{-t}y_2+\varepsilon e^{ty_2},\\ \varepsilon y'_2=e^{2t}y_1-3y_2, \end{array} \right. \\ y_1(0,\varepsilon )=1, \quad y_2(0,\varepsilon )=0. \end{array} \end{aligned}$$

In these examples it is easy to observe the passage to the limit characteristic of singularly perturbed problems satisfying the conditions of Tikhonov's theorem [11].
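This passage to the limit is also easy to observe numerically. The sketch below integrates example \(1^\circ \) with the classical Runge-Kutta scheme (the step 0.0005 is an illustrative choice that resolves the \(O(\varepsilon )\) boundary layer) and compares the result at t = 1 with the two-term expansion and with the stable root \(y=-e^t\) of the degenerate equation:

```python
import math

def rhs(t, y, eps):
    # example 1: eps*y' = y**2 - exp(2*t)
    return (y*y - math.exp(2.0*t))/eps

def rk4(eps, t_end=1.0, h=0.0005):
    # classical 4th-order Runge-Kutta from y(0) = 0
    t, y = 0.0, 0.0
    for _ in range(round(t_end/h)):
        k1 = rhs(t, y, eps)
        k2 = rhs(t + h/2, y + h/2*k1, eps)
        k3 = rhs(t + h/2, y + h/2*k2, eps)
        k4 = rhs(t + h, y + h*k3, eps)
        y += h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return y

def two_term(t, eps):
    # y ~ e^t*tanh((1-e^t)/eps) + (eps/2)*tanh^2((1-e^t)/eps)
    th = math.tanh((1.0 - math.exp(t))/eps)
    return math.exp(t)*th + eps/2*th*th

err_005 = abs(rk4(0.05) - two_term(1.0, 0.05))
err_002 = abs(rk4(0.02) - two_term(1.0, 0.02))
limit_gap = abs(rk4(0.02) + math.e)  # distance to the degenerate root at t = 1
```

The error of the two-term expansion shrinks with \(\varepsilon \), while the numerical solution approaches \(-e\) at t = 1, as the limit theorem predicts.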