
(a) Elementary Matching

Milton Van Dyke’s Perturbation Methods in Fluid Mechanics [490] was effectively both the earliest and the most influential book specifically about applied singular perturbations. (Some credit might be given to earlier fluid dynamics textbooks, e.g., Hayes and Probstein [199].) Van Dyke extensively surveyed the large extant aeronautical and fluid dynamical literature, forcefully advocating and clarifying the so-called method of matched asymptotic (or inner and outer) expansions. Although Van Dyke acknowledged that Prandtl’s boundary layer theory was the prototype singular perturbation problem, he introduced the subject by describing incompressible fluid flow past a thin airfoil. The book’s highlight message, sometimes called Van Dyke’s magic rule, states:

The m-term inner expansion of (the n-term outer expansion) = the n-term outer expansion of (the m-term inner expansion).

This glib oversimplification (for any positive integer pairs m and n) allowed many practitioners to confidently solve significant applied problems asymptotically (an advantage unavailable before then).

To grasp the basic idea of Van Dyke’s procedure for m = n = 2, consider the linear initial value problem

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \epsilon y^{{\prime\prime}} + y^{{\prime}} + y = 0\ \ \mbox{ on $0 \leq t \leq T$}\quad &\mbox{ for a fixed finite time $T$} \\ \mbox{ $y(0) = 0$, $y^{{\prime}}(0) = \frac{1} {\epsilon } $} \quad &\mbox{ for a small $\epsilon > 0$} \end{array} \right. }$$
(3.1)

for a displacement y. We expect the impulsive large initial derivative to provide an immediate rapid upward response, so we naturally introduce the fast time

$$\displaystyle{ \tau = t/\epsilon. }$$
(3.2)

Then \(y^{{\prime}} = \frac{1} {\epsilon } y_{\tau }\) and \(\epsilon y^{{\prime\prime}} = \frac{1} {\epsilon } y_{\tau \tau }\), so we naturally seek a local inner expansion \(y^{in}\) satisfying the stretched problem

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} y_{\tau \tau } + y_{\tau } +\epsilon y = 0\mbox{ on $\tau \geq 0$} \quad & \\ \mbox{ with $y(0) = 0$ and $y_{\tau }(0) = 1$}.\quad & \end{array} \right. }$$
(3.3)

(Sophisticated readers will note that our selection of the stretched variable τ rebalances the orders of the three terms in the given ODE, changing their dominant balance in the terminology of Bender and Orszag [36]. Determining the “right” balance will more generally take some trial and error. The selection of the stretched variable also relates to a classical asymptotic technique called the Newton polygon (cf. Hille [205], Kung and Traub [269], and White [519]), which is implemented in Maple.) Setting

$$\displaystyle{ y^{in}(\tau,\epsilon ) \sim y_{ 0}(\tau ) +\epsilon y_{1}(\tau ) +\epsilon ^{2}y_{ 2}(\tau )+\ldots }$$
(3.4)

and expanding \(y_{\tau }\) and \(y_{\tau \tau }\) analogously, we will need to satisfy

$$\displaystyle{(y_{0\tau \tau } +\epsilon y_{1\tau \tau }+\ldots ) + (y_{0\tau } +\epsilon y_{1\tau }+\ldots ) +\epsilon (y_{0}+\ldots ) = 0,}$$

or

$$\displaystyle{y_{0\tau \tau } + y_{0\tau } +\epsilon (y_{1\tau \tau } + y_{1\tau } + y_{0})+\ldots = 0,}$$

and the corresponding initial conditions

$$\displaystyle{y(0,\epsilon ) = y_{0}(0) +\epsilon y_{1}(0)+\ldots = 0}$$

and

$$\displaystyle{y_{\tau }(0,\epsilon ) = y_{0\tau }(0) +\epsilon y_{1\tau }(0)+\ldots = 1}$$

as a regular perturbation expansion in powers of ε. Equating coefficients, we naturally require \(y_{0}\) to satisfy

$$\displaystyle{ y_{0\tau \tau } + y_{0\tau } = 0,\ \ \ y_{0}(0) = 0,\ \ \ \mbox{ and }\ \ y_{0\tau }(0) = 1, }$$
(3.5)

and \(y_{1}\) to next satisfy

$$\displaystyle{ y_{1\tau \tau } + y_{1\tau } + y_{0} = 0,\ \ \ y_{1}(0) = y_{1\tau }(0) = 0, }$$
(3.6)

etc. Thus, we uniquely obtain

$$\displaystyle{ y_{0}(\tau ) = 1 - e^{-\tau } }$$
(3.7)

while \(y_{1\tau \tau } + y_{1\tau } + 1 - e^{-\tau } = 0\) and the trivial initial conditions uniquely imply that

$$\displaystyle{ y_{1}(\tau ) = 2 -\tau +e^{-\tau }(-2-\tau ) }$$
(3.8)

(using, say, the method of undetermined coefficients).

We then expect the resulting uniquely determined series

$$\displaystyle{ y^{in}(\tau,\epsilon ) = 1 - e^{-\tau } +\epsilon (2 -\tau -e^{-\tau }(2+\tau ))+\ldots }$$
(3.9)

or inner expansion to be asymptotically valid at least for bounded τ values, i.e., for small values of t = O(ε). (It breaks down when τ is large, since the ratio of successive terms in the series ultimately becomes unbounded like ετ.)
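The order-by-order layer calculations above are easy to confirm symbolically. The following is a minimal sketch (not from the text; it assumes the sympy library is available) checking that (3.7) and (3.8) satisfy (3.5) and (3.6):

```python
import sympy as sp

tau = sp.symbols('tau', positive=True)
y0 = 1 - sp.exp(-tau)                   # (3.7)
y1 = 2 - tau - (2 + tau)*sp.exp(-tau)   # (3.8), rewritten slightly

# residuals of the order-by-order equations (3.5) and (3.6)
res0 = sp.simplify(sp.diff(y0, tau, 2) + sp.diff(y0, tau))
res1 = sp.simplify(sp.diff(y1, tau, 2) + sp.diff(y1, tau) + y0)
```

Both residuals vanish identically, and the initial conditions from (3.5) and (3.6) are met.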

For larger values of t, we shall alternatively seek an outer solution \(Y ^{out}\), depending on the original time variable t and ε. Thus, we will substitute the regular power series (i.e., outer) expansion

$$\displaystyle{ Y ^{out}(t,\epsilon ) \sim Y _{ 0}(t) +\epsilon Y _{1}(t)+\ldots }$$
(3.10)

into the given differential equation and equate coefficients of like powers of ε in (3.1) to successively require \(Y _{0}^{{\prime}} + Y _{0} = 0,\ \ Y _{1}^{{\prime}} + Y _{1} + Y _{0}^{{\prime\prime}} = 0\), etc. Hence

$$\displaystyle{ Y _{0}(t) = Ae^{-t}\ \ \ \mbox{ for some constant $A$}, }$$
(3.11)
$$\displaystyle{ Y _{1}(t) = (B - At)e^{-t}\ \ \ \mbox{ for some constant $B$}, }$$
(3.12)

etc., providing the first terms of an outer expansion

$$\displaystyle{ Y ^{out}(t,\epsilon ) = Ae^{-t} +\epsilon (B - At)e^{-t}+\ldots }$$
(3.13)

for finite t values and constants A, B, \(\ldots\) yet to be determined by matching this outer expansion to the inner expansion (3.9), as we now describe. (Note that the terms \(Y _{k}\) in (3.10) satisfy first-, rather than second-, order differential equations and that the prescribed initial values at t = 0 are so far irrelevant to the outer expansion.) In the 1950s, an alternative patching technique was sometimes applied to inner and outer expansions. Patching typically took place at an ε-dependent t value like \(-10\epsilon \ln \epsilon\). The concept still underlies some numerical methods (cf., e.g., Kopteva and O’Riordan [259] and Miller et al. [317] regarding the Shishkin mesh).
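The outer order equations can likewise be confirmed for arbitrary constants A and B; a small symbolic sketch (assuming sympy, not part of the text):

```python
import sympy as sp

t, A, B = sp.symbols('t A B')
Y0 = A*sp.exp(-t)             # (3.11)
Y1 = (B - A*t)*sp.exp(-t)     # (3.12)

res0 = sp.simplify(sp.diff(Y0, t) + Y0)                        # Y0' + Y0
res1 = sp.simplify(sp.diff(Y1, t) + Y1 + sp.diff(Y0, t, 2))    # Y1' + Y1 + Y0''
```

Both residuals vanish for every choice of A and B, confirming that the constants remain free until matching.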

We first rewrite the known inner expansion in terms of the outer variable t as

$$\displaystyle{y^{in}\left (\frac{t} {\epsilon },\epsilon \right ) = 1 - e^{-t/\epsilon } +\epsilon \left (2 -\frac{t} {\epsilon } - e^{-t/\epsilon }\left (2 + \frac{t} {\epsilon } \right )\right )+\ldots }$$

Taking the limit as \(\tau = t/\epsilon \rightarrow \infty \), the exponentials become negligible and we get the truncated two-term limit

$$\displaystyle{ (y^{in})^{out} = 1 - t + 2\epsilon +\ldots }$$
(3.14)

Analogously, we represent the outer expansion in terms of the inner variable τ as

$$\displaystyle{Y ^{out}(\epsilon \tau,\epsilon ) = Ae^{-\epsilon \tau } +\epsilon (B -\epsilon \tau A)e^{-\epsilon \tau } +\ldots.}$$

Expanding the exponentials in their Maclaurin expansions for moderate values of τ as \(\epsilon \rightarrow 0\) and truncating, we obtain

$$\displaystyle{ (Y ^{out})^{in} \equiv A +\epsilon (B - A\tau ) +\ldots. }$$
(3.15)

Since t = ετ, the asymptotic matching condition

$$\displaystyle{(y^{in})^{out} = (Y ^{out})^{in}}$$

(at this (m = n = 2) order) requires that

$$\displaystyle\begin{array}{rcl} (y^{in})^{out}& =& 1 - t + 2\epsilon +\ldots \\ & =& A - At +\epsilon B+\ldots = (Y ^{out})^{in}.{}\end{array}$$
(3.16)

We will naturally call this expression the common part of the inner and outer expansions (both truncated at the second order). We could express it in terms of either time variable t or τ. Note that the matching condition crudely corresponds to the idea of equating \(Y ^{out}(t,\epsilon )\) near t = 0 to \(y^{in}(\tau,\epsilon )\) near \(\tau = \infty \). We are, however, being much more explicit.

This process uniquely provides the unspecified constants A = 1 and B = 2 in the outer expansion, i.e., matching across the O(ε)-thick initial layer (by equating the common parts) has uniquely specified the outer expansion as

$$\displaystyle{ Y ^{out}(t,\epsilon ) = e^{-t} +\epsilon (2 - t)e^{-t}+\ldots }$$
(3.17)

We expect (3.17) to be the valid asymptotic solution for t outside the initial layer. Note that \(Y ^{out}(0,0)\neq y(0)\). (If this were not so, the inner and outer expansions would coincide for t = ε τ.) Note that we seem to implicitly invoke some idea about overlap of the two solutions in a joint region of validity of the inner and outer expansions.
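The matching computation (3.14)–(3.16) can itself be automated; the sketch below (an illustration assuming sympy, not part of the text) recovers A = 1 and B = 2 from Van Dyke’s rule:

```python
import sympy as sp

t, eps, A, B = sp.symbols('t epsilon A B')
tau = t/eps

# two-term inner expansion (3.9), written in the outer variable t
y_in = 1 - sp.exp(-tau) + eps*(2 - tau - (2 + tau)*sp.exp(-tau))
# two-term outer expansion (3.13) with A and B still free
Y_out = A*sp.exp(-t) + eps*(B - A*t)*sp.exp(-t)

# (y^in)^out: the exponentials are transcendentally small for fixed t > 0
yin_out = sp.expand(y_in.subs(sp.exp(-tau), 0))       # gives 1 - t + 2*eps

# (Y^out)^in: expand in eps at fixed inner variable, keep two terms, return to t
ts = sp.symbols('tau')
Yout_in = sp.series(Y_out.subs(t, eps*ts), eps, 0, 2).removeO()
Yout_in = sp.expand(Yout_in.subs(ts, t/eps))          # gives A - A*t + B*eps

# Van Dyke's rule: the two common parts agree identically in t and eps
mismatch = sp.expand(yin_out - Yout_in)
sol = sp.solve([mismatch.coeff(t, 1), mismatch.coeff(eps, 1),
                mismatch.coeff(t, 0).coeff(eps, 0)], [A, B], dict=True)[0]
```

The overdetermined system of coefficient equations is consistent, and `sol` returns A = 1 and B = 2, as found above.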

Rather than having separate asymptotic expansions, \(y^{in}\) very near t = 0 and \(Y ^{out}\) away from t = 0, we shall now define the additive composite expansion

$$\displaystyle\begin{array}{rcl} y^{c}& \equiv & Y ^{out} + y^{in} - (Y ^{out})^{in} \\ & = & (e^{-t} +\epsilon (2 - t)e^{-t}+\ldots ) \\ & & +(1 - e^{-\tau } +\epsilon [2 -\tau -(2+\tau )e^{-\tau }]+\ldots ) \\ & & -(1 - t + 2\epsilon +\ldots ) \\ & = & [e^{-t} - e^{-\tau }] +\epsilon [(2 - t)e^{-t} - (2+\tau )e^{-\tau }] +\ldots {}\end{array}$$
(3.18)

that we expect to be uniformly valid on any fixed finite interval 0 ≤ t ≤ T as \(\epsilon \rightarrow 0\), i.e., in the domains where the inner expansion, the outer expansion, and their common part are simultaneously defined. The limit of the sum \(y^{c}\) is \(y^{in}\) in the inner region and \(Y ^{out}\) in the outer region since the outer expansion in the inner region and the inner expansion in the outer region agree with their common (i.e., matching) part. (We note that other alternative composite expansions have also been introduced in the literature.) Eckhaus [133] formalizes the procedure using expansion operators. Van Dyke [492] noted that the terminology global and local approximations would be preferable to outer and inner approximations.

A more subtle matching technique using intermediate variables

$$\displaystyle{t_{\beta } \equiv \frac{t} {\epsilon ^{\beta }} }$$

for β satisfying 0 < β < 1 in both the inner and outer expansions is presented in Cole [92] and Holmes [209]. For some problems, the use of power series in ε for both the inner and outer expansions turns out to be inadequate for matching, but inserting intermediate terms suggested by their limits succeeds. The process is called switchback. To avoid going wrong, Van Dyke [491] made the practical suggestion

$$\displaystyle{\mbox{ Don't cut between logarithms.}}$$

Its subtle meaning could be clarified by examining detailed examples that caused anxiety 40 years ago.

The exact solution to the initial value problem (3.1) has the form

$$\displaystyle{ y(t,\epsilon ) = C(\epsilon )(e^{-\nu (\epsilon )t} - e^{-\kappa (\epsilon )t/\epsilon }) }$$
(3.19)

where

$$\displaystyle{ \nu (\epsilon ) \equiv \frac{1 -\sqrt{1 - 4\epsilon }} {2\epsilon } \sim 1 +\epsilon +\ldots }$$

and

$$\displaystyle{ \kappa (\epsilon ) \equiv \frac{1 + \sqrt{1 - 4\epsilon }} {2} \sim 1 -\epsilon -\epsilon ^{2} +\ldots. }$$

Thus, y(0) = 0 and \(y^{{\prime}}(0) = \frac{1} {\epsilon } = C(\epsilon )\left (-\nu (\epsilon ) + \frac{\kappa (\epsilon )} {\epsilon } \right )\) uniquely determine

$$\displaystyle{C(\epsilon ) \equiv \frac{1} {\kappa (\epsilon ) -\epsilon \nu (\epsilon )} = \frac{1} {1 - 2\epsilon +\ldots } = 1 + 2\epsilon +\ldots.}$$

The exact result

$$\displaystyle{ y(t,\epsilon ) \sim \frac{1} {1 - 2\epsilon +\ldots }\left (e^{-(1+\epsilon +\ldots )t} - e^{-(1-\epsilon -\epsilon ^{2}+\ldots )\frac{t} {\epsilon } }\right ) }$$
(3.20)

agrees asymptotically with the composite solution (3.18) obtained by matching for m = 2 and n = 2. (To carry out these calculations, we use the binomial expansion \(\sqrt{ 1 - x} = 1 -\frac{x} {2} -\frac{x^{2}} {8} +\ldots\), convergent for \(\vert x\vert < 1\).) Readers should personally experiment by matching solutions of (3.1) for values of m and n larger than 2.
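Such an experiment can begin with a numerical comparison. The following sketch (illustrative only; the grid, interval, and tolerances are assumptions, not from the text) evaluates the exact solution (3.19) against the two-term composite (3.18), whose maximum discrepancy on 0 ≤ t ≤ 2 should shrink roughly like ε²:

```python
import math

def exact(t, eps):
    # exact solution (3.19): decay rates from eps*r**2 + r + 1 = 0
    root = math.sqrt(1 - 4*eps)
    nu = (1 - root) / (2*eps)
    kappa = (1 + root) / 2
    C = 1 / (kappa - eps*nu)
    return C * (math.exp(-nu*t) - math.exp(-kappa*t/eps))

def composite(t, eps):
    # two-term composite expansion (3.18)
    tau = t / eps
    return (math.exp(-t) - math.exp(-tau)
            + eps*((2 - t)*math.exp(-t) - (2 + tau)*math.exp(-tau)))

def max_err(eps, T=2.0, n=2000):
    # maximum discrepancy over an equally spaced grid on [0, T]
    return max(abs(exact(i*T/n, eps) - composite(i*T/n, eps))
               for i in range(n + 1))
```

Here `max_err` should drop by roughly two orders of magnitude when ε drops from 10⁻² to 10⁻³, consistent with an O(ε²) error for matching at m = n = 2.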

Further, as we will extensively illustrate, matching results in the same uniform expansion as we’d get by determining the outer expansion Y (t, ε) as a function of t (with its unspecified constants) and adding to it a boundary layer corrector expansion v(τ, ε) (as a function of the stretched time τ = t∕ε) that tends exponentially to zero as \(\tau \rightarrow \infty \). Thus, we’ll have

$$\displaystyle{ y(t,\epsilon ) \sim Y (t,\epsilon ) + v(\tau,\epsilon ) }$$
(3.21)

Matching, then, ultimately cancels some terms in the inner expansion (retaining \(v \equiv y^{in} - (y^{in})^{out}\)), but it is somewhat inefficient because it requires us to determine terms in \(y^{in}\) that are later neglected (i.e., the common part).

Specifically, note that the exact solution (3.20) of (3.1) also has the form

$$\displaystyle{ y(t,\epsilon ) = Y (t,\epsilon ) + Z(t,\epsilon )e^{-t/\epsilon } }$$
(3.22)

for power series Y and Z depending on t and ε. Indeed, for bounded t, \(Y \equiv C(\epsilon )e^{-\nu (\epsilon )t}\) is the outer solution. The initial conditions require that

$$\displaystyle{y(0) = Y (0,\epsilon ) + Z(0,\epsilon ) = 0}$$

and

$$\displaystyle{\epsilon y^{{\prime}}(0) \equiv \epsilon Y ^{{\prime}}(0,\epsilon ) +\epsilon Z^{{\prime}}(0,\epsilon ) - Z(0,\epsilon ) = 1.}$$

Since y, Y and the corrector \(v \equiv Ze^{-t/\epsilon }\) all satisfy the given differential equation of (3.1), Z must then satisfy

$$\displaystyle{ \epsilon Z^{{\prime\prime}}- Z^{{\prime}} + Z = 0 }$$
(3.23)

as a series in ε. The representation (3.22) implies a more efficient power series method than matching. More sophisticated matching procedures for linear differential equations in the complex plane are considered in Olde Daalhuis et al. [359]. Likewise, the Russian A. M. Il’in [221] convincingly presents matching for partial differential equations.
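Equation (3.23) follows from a routine calculation that can be checked symbolically; a small sketch assuming sympy:

```python
import sympy as sp

t = sp.symbols('t')
eps = sp.symbols('epsilon', positive=True)
Z = sp.Function('Z')

# substitute the fast part v = Z(t)*exp(-t/eps) into eps*v'' + v' + v
v = Z(t)*sp.exp(-t/eps)
lhs = eps*sp.diff(v, t, 2) + sp.diff(v, t) + v

# factor out the exponential: its coefficient should be eps*Z'' - Z' + Z
E = sp.Symbol('E')   # stands for exp(-t/eps)
coeff = sp.expand(lhs).subs(sp.exp(-t/eps), E).coeff(E)
residual = sp.simplify(coeff - (eps*sp.diff(Z(t), t, 2)
                                - sp.diff(Z(t), t) + Z(t)))
```

The residual vanishes identically, recovering (3.23).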

The unusual problem

$$\displaystyle{(x+\epsilon )y' + y = 0,\ \ \ y(1) = 1}$$

has the exact solution

$$\displaystyle{y(x,\epsilon ) = \frac{1+\epsilon } {x+\epsilon },}$$

well-behaved for 0 < x ≤ 1, but algebraically unbounded near x = 0 where the limiting equation has a singular point. Complications there must be expected (cf. our discussion of Lighthill’s method in Chap. 5).
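A quick symbolic check of this exact solution and of its growth as ε → 0 (a sketch assuming sympy, not part of the text):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
eps = sp.symbols('epsilon', positive=True)
y = (1 + eps) / (x + eps)

residual = sp.simplify((x + eps)*sp.diff(y, x) + y)   # ODE residual
y_at_1 = sp.simplify(y.subs(x, 1))                    # initial condition
y_at_0 = sp.simplify(y.subs(x, 0))                    # (1+eps)/eps, large as eps -> 0
```

The residual vanishes and y(1) = 1, while y(0) = (1 + ε)/ε blows up like 1/ε, reflecting the singular point of the limiting equation at x = 0.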

Exercises

  1.

    Show that \(e^{-t/\epsilon } \leq \epsilon ^{n}\) holds for \(-n\epsilon \ln \epsilon \leq t < \infty \) and that the inequality is reversed for smaller t > 0.

  2.

    For the initial value problem

    $$\displaystyle{\epsilon \ddot{y} +\dot{ y} + y = 0,\ \ \ t \geq 0,\ \ \ y(0) = 1,\ \ \ \dot{y}(0) = 1,}$$

    show that the asymptotic solution has the form

    $$\displaystyle{y(t,\epsilon ) = Y (t,\epsilon ) +\epsilon D(t,\epsilon )e^{-t/\epsilon }}$$

    on \(0 \leq t \leq T < \infty \) for power series Y and D. The uniform limit for t ≥ 0 will be \(Y _{0}(t) = e^{-t}\), since \(\dot{Y }_{0} + Y _{0} = 0\) and \(Y _{0}(0) = 1\). Show that \(\dot{y}\) will jump near t = 0, however. Try computing the solution and its derivative for a small ε.

  3.

    (Cole [92]) The equation

    $$\displaystyle{\epsilon y'' + (1 +\alpha x)y' +\alpha y = 0}$$

    is exact, so it is possible to obtain its general solution. Suppose α > −1, so 1 +α x > 0 on 0 ≤ x ≤ 1. Impose the boundary values

    $$\displaystyle{y(0) = 0\mbox{ and }y(1) = 1,}$$

    so the outer limit is \(Y _{0}(x) = \frac{1+\alpha } {1+\alpha x}.\) Note that the limiting initial layer corrector

    $$\displaystyle{-(1+\alpha )e^{-x/\epsilon }}$$

    approximates the exact corrector

    $$\displaystyle{-(1+\alpha )e^{-\frac{1} {\epsilon } \int _{0}^{x}(1+\alpha s)\,ds }.}$$

    Find the exact solution and the first two terms of its outer solution

    $$\displaystyle{Y (x,\epsilon ) = Y _{0}(x) +\epsilon Y _{1}(x) + O(\epsilon ^{2}).}$$
  4.

    Consider the alternative composite expansion \(y^{c}\) for problem (3.1) when the common part is nonzero by setting

    $$\displaystyle{y^{c} = \frac{Y ^{out}y^{in}} {((Y ^{out})^{in})^{2}}.}$$
  5.

    Consider the two-point problem

    $$\displaystyle{\epsilon y^{{\prime\prime}} + (1 + x)^{2}y^{{\prime}} + 2(1 + x)y = 0,\ \ 0 < x < 1}$$
    $$\displaystyle{y(0) = 1,\ \ y(1) = 2.}$$
    (a)

      Obtain the exact solution and describe its limiting behavior. (Hint: the differential equation is exact.)

    (b)

      Determine a composite expansion in the form

      $$\displaystyle{y(x,\epsilon ) = A(x,\epsilon ) + v(\xi,\epsilon )}$$

      where A is an outer expansion valid for x > 0 and the boundary layer corrector \(v \rightarrow 0\) as \(\xi \equiv x/\epsilon \rightarrow \infty \).

    (c)

      Determine an asymptotic solution in the WKB form

      $$\displaystyle{y(x,\epsilon ) = A(x,\epsilon ) + e^{-\frac{1} {\epsilon } \int _{0}^{x}(1+s)^{2}\,ds }(y(0) - A(0,\epsilon )).}$$
  6.

    Consider the two-point problem

    $$\displaystyle{\epsilon y^{{\prime\prime}} + (1 + x)^{2}y^{{\prime}}- (1 + x)y = 0,\ \ \ 0 \leq x \leq 1}$$

    with y(0) = 1 and y(1) = 3.

    (a)

      Obtain the exact solution and determine its limiting behavior as \(\epsilon \rightarrow 0^{+}\). (Hint: y = 1 + x is a solution of the ODE.)

    (b)

      Use matched asymptotic expansions to obtain the two-term composite expansion.

    (c)

      Determine an asymptotic solution of the form

      $$\displaystyle{y(x,\epsilon ) = A(x,\epsilon ) + B(x,\epsilon )e^{-\frac{1} {\epsilon } \int _{0}^{x}(1+s)^{2}\,ds }}$$

      (with power series expansions for A and B).

    (d)

      Plot the inner expansion, the outer expansion, the composite expansion, and the numerical solution for ε = 1∕10 (on the same graph).

    (e)

      Show that

      $$\displaystyle{e^{-\frac{1} {\epsilon } \int _{0}^{x}(1+s)^{2}\,ds } - e^{-\frac{x} {\epsilon } } = O(\epsilon )\ \ \ \mbox{ on }\ \ \ 0 \leq x \leq 1.}$$
  7.

    Assuming a boundary layer of O(ε) thickness near x = 1, seek an asymptotic solution of

    $$\displaystyle\begin{array}{rcl} & & \epsilon u_{xx} = u_{x} + u_{t},\ \ u(0,t) = u_{0}(t),\ \ u(1,t) = u_{1}(t)\ \ \mbox{ for}\ t \geq 0 {}\\ & & \qquad \qquad \qquad \mbox{ and}\ u(x,0)\ \mbox{ given for}\ 0 \leq x \leq 1 {}\\ \end{array}$$

    in the form

    $$\displaystyle{u(x,t,\epsilon ) = A(x,t,\epsilon ) + B(x,t,\epsilon )e^{-(1-x)/\epsilon }.}$$

Basic issues concerning the validity of matching were raised by Fraenkel [155] and Eckhaus [133], among others (cf., e.g., Lo [297] and, especially, Skinner [463]). Some of the subtleties were reconsidered in the annotated edition of Van Dyke’s book [491] of 1975. Its frontispiece is the woodcut Sky and Water I, 1938 by the Dutch lithographer M. C. Escher featuring fish transforming vertically into birds (cf. Schattschneider [433] and [434] regarding relations between Escher’s work and groups, tilings, and other mathematical objects). (The author recently found this print for sale for about $48,000!) Van Dyke stated that the woodcut

gives a graphical impression of the “imperceptively smooth blending” of one flow into another that is the heart of the method of matched asymptotic expansions.

Milton Van Dyke (1922–2010) was an American who got a 1949 Caltech Ph.D. (with Paco Lagerstrom) and worked at NASA-Ames before taking a professorship in aeronautics at Stanford in 1959 (see Schwartz [442] for a brief biography). One reason for the annotated edition [491] of Perturbation Methods in Fluid Mechanics was that Academic Press let the 1964 original [490] go out of print because Van Dyke had insisted that the contract stipulate that

the book shall cost no more than three cents a page.

The Academic Press edition sold 8,000 copies. (In addition to the annotated edition, Parabolic Press (managed by Van Dyke) also published the picture book An Album of Fluid Motion (1984) by Van Dyke and the autobiographical Stories of a 20th Century Life (1994) by W. R. Sears.)

The more complicated use of intermediate limits/intermediate problems, rather than the formal matching of series, as proposed by Kaplun [235], relates to the often presumed existence of an overlap (as in analytic continuation in complex variables) between the domains of validity for the inner and outer expansions and the construction of a “composite” or uniform expansion as the formal sum of the inner and outer expansions less their “common part,” found by matching. Eckhaus and Fraenkel both showed that having an overlap is not necessary for matching to succeed. Fruchard and Schäfke [165], however, base their composite expansions on overlap. (The complication that the inner and outer expansions are expressed in terms of different variables indeed suggests the more sophisticated two-timing (or multiple scale) procedure that we will consider in Chap. 5.) The recent proofs of Skinner [463] and Fruchard and Schäfke [165] validate matching for a broad variety of ODE problems.

Fluid dynamicists have introduced a more elaborate triple deck technique (cf. Meyer [316], Sobey [467], and Veldman [498], noting important contributions by Stewartson, Williams, and Neiland) to handle viscous flow along a plate. Somewhat analogously, mathematicians have introduced a blow-up technique to analyze even more complicated matching (cf. Dumortier and Roussarie [127] and Kuehn [268]). Hastings and McLeod [198] combine blowup with classical methods to rigorously prove matching for the Lagerstrom model

$$\displaystyle{y^{{\prime\prime}} + \frac{n - 1} {r} y^{{\prime}} +\epsilon yy^{{\prime}} = 0,\ \ r \geq 1,\ \ y(1) = 0,\ \ y(\infty ) = 1}$$

in dimensions n = 2 and 3. This problem has been considered by a dozen authors since 1957. Most recently, Holzer and Kaper [211] used normal form techniques to handle a variety of problems with so-called logarithmic switchback.

(b) Tikhonov–Levinson Theory and Boundary Layer Corrections

(i) Introduction

Wolfgang Wasow’s Asymptotic Expansions for Ordinary Differential Equations [513] is a much more mathematical work than Van Dyke [490]. It is centered on singular perturbations, but also includes the study of regular and irregular singular points, as well as turning points. Much of the theory is carried out using matrix differential equations (which may have limited its appeal to the very applied audience). Its singular perturbation coverage includes boundary value problems for linear scalar ordinary differential equations, following Wasow’s [517] NYU doctoral thesis, as well as the (perhaps less efficient) methods of the prominent Russian analysts Vishik and Lyusternik [507, 508] and Pontryagin [398]. Results for nonlinear initial value problems rely on papers by the Soviet academician Andrei Nikolaevich Tikhonov (1906–1993) on the solution of

systems of equations with a small parameter in the term with the highest derivative

(a large percentage of singular perturbation problems, as we shall find). Tikhonov’s work on asymptotics appeared from 1948 to 1952 and was continued in the ongoing work of his former student Adelaida B. Vasil’eva (1926–) (Ph.D., Moscow State, 1961) (cf. Vasil’eva, Butuzov, and Kalachev [496] and earlier monographs in Russian by Vasil’eva, alone and with her former student and MSU colleague Vladimir Butuzov). Instead of matching per se, she directly obtains a composite expansion by the so-called boundary function method, a technique analogous to the boundary layer correction method or “the subtraction trick” (which first finds the outer solution (formally) and then subtracts it from the solution being sought; matching is then simple because the new outer expansion and the new common part are both trivial) (cf. Lions [295], O’Malley [366, 368], Smith [466], or Verhulst [500]). For a survey of Soviet work, see Vasil’eva [495].

We point out that J.-L. Lions (1928–2001) led a large school of French analysts (including many prominent former students) who applied asymptotics to control, stochastic, and partial differential equations. Readers are encouraged to consult their publications, e.g., [295].

The basic Tikhonov results were largely independently obtained later by Norman Levinson (1912–1975) , senior author of the long-dominant ODE textbook Coddington and Levinson [91]. Levinson’s approach was more geometric, aimed at describing relaxation oscillations, as occur for the van der Pol equation (cf. Levinson [287]), anticipating much recent work involving invariant manifolds. Related work was done with his junior colleague Earl Coddington and by a number of MIT graduate students from the 1950s, including D. Aronson, R. Davis, L. Flatto, V. Haas, S. Haber, J. Levin, V. Mizel, R. O’Brien, J. Scott-Thomas, and D. Trumpler.

(ii) A Nonlinear Example

To get an idea of Tikhonov–Levinson theory, we will first consider the specific planar initial value example

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = y, \quad &\mbox{ $x(0) = 1$} \\ \epsilon \dot{y} = x^{2} - y^{2},\quad &\mbox{ $y(0) = 0$} \end{array} \right. }$$
(3.24)

on a bounded t ≥ 0 interval as \(\epsilon \rightarrow 0^{+}\) (or the equivalent initial value problem for the second-order nonlinear scalar equation \(\epsilon \ddot{x} + (\dot{x})^{2} - x^{2} = 0\)), followed by the linear vector system

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = A(t)x + B(t)y\quad \\ \epsilon \dot{y} = C(t)x + D(t)y\quad \end{array} \right.}$$

and then the nonlinear system

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = f(x,y,t,\epsilon )\quad \\ \epsilon \dot{y} = g(x, y, t,\epsilon )\quad \end{array} \right.}$$

with appropriate smoothness and stability assumptions. We shall characterize the system dynamics for (3.24) as being slow-fast, with variable x being slow compared to y (since the velocity \(\dot{y} = O(1/\epsilon )\) when \(x^{2}\neq y^{2}\), while \(\dot{x} = O(1)\) for bounded y). The reduced problem (obtained for ε = 0)

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{X}_{0} = Y _{0}, \quad &\mbox{ $X_{0}(0) = 1$} \\ 0 = X_{0}^{2} - Y _{0}^{2}\quad &\mbox{ }\end{array} \right. }$$
(3.25)

omits the initial condition for y and implies the two possible roots

$$\displaystyle{ Y _{0} = \pm X_{0} }$$
(3.26)

of the algebraic equation, so \(X_{0}\) must satisfy either initial value problem

$$\displaystyle{ \dot{X}_{0} = \pm X_{0},\ \ \ X_{0}(0) = 1. }$$
(3.27)

Hence, possible outer limits for t > 0 are

$$\displaystyle{ (X_{0}(t),Y _{0}(t)) = (e^{\pm t},\pm e^{\pm t}). }$$
(3.28)

Because \(Y _{0}(0) = \pm 1\), while y(0) = 0, the fast variable y must initially converge nonuniformly. This suggests that we might actually have uniformly valid limits

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} x(t,\epsilon ) \sim X_{0}(t) \quad &\mbox{ } \\ \mbox{ and} \quad &\mbox{ } \\ y(t,\epsilon ) \sim Y _{0}(t) + v_{0}(\tau )\quad &\mbox{ } \end{array} \right. }$$
(3.29)

for bounded t’s, where \(v_{0}(0) = y(0) - Y _{0}(0)\) is the initial jump in the fast variable and the initial layer corrector \(v_{0}\) is significant only in a thin initial layer where \(v_{0} \rightarrow 0\) as the fast time \(\tau = \frac{t} {\epsilon }\) ranges from 0 to \(\infty \). Thus, the correction term \(v_{0}(\tau )\) provides the needed nonuniform convergence of the coordinate y in the O(ε)-thick initial layer near t = 0, described in terms of τ. Then, we need

$$\displaystyle{\epsilon \frac{dy} {dt} \sim \epsilon \frac{dY _{0}} {dt} + \frac{dv_{0}} {d\tau } \sim X_{0}^{2} - (Y _{ 0} + v_{0})^{2}.}$$

Since \(Y _{0} = \pm X_{0} = \pm e^{\pm t}\) is bounded (for t bounded), \(\epsilon \frac{dy} {dt} \sim \frac{dv_{0}} {d\tau }\) shows that \(v_{0}\) must nearly satisfy \(\frac{dv_{0}} {d\tau } \sim -2Y _{0}(\epsilon \tau )v_{0} - v_{0}^{2}.\) If we choose \(Y _{0}(t) = e^{t}\), \(Y _{0}\) will be nearly 1 near t = 0, so \(v_{0}\) must satisfy the initial value problem

$$\displaystyle{ \frac{dv_{0}} {d\tau } = -(2 + v_{0})v_{0},\ \ v_{0}(0) = -1 }$$
(3.30)

on τ ≥ 0. This problem is easy to solve explicitly as a Riccati equation. Indeed, checking the sign of \(\frac{dv_{0}} {d\tau }\) shows that \(v_{0}\) increases monotonically from − 1 to 0 as τ goes from 0 to \(\infty \). We shall say that the initial vector \(\left (\begin{array}{*{10}c} x(0)\\ y(0) \end{array} \right ) = \left (\begin{array}{*{10}c} 1\\ 0 \end{array} \right )\) lies in the domain of influence (or “region of attraction”) of the root \(Y _{0} = X_{0}\) of the reduced problem (3.25). If we, instead, tried using the other possible root \(Y _{0} = -X_{0} = -e^{t}\), the corresponding \(v_{0}\) would have to satisfy

$$\displaystyle{\frac{dv_{0}} {d\tau } \sim v_{0}(2 - v_{0}),\ \ \ v_{0}(0) = 1,}$$

but then \(v_{0} \rightarrow 2\) as \(\tau \rightarrow \infty \) would contradict the asymptotic stability required for the limiting initial layer correction \(v_{0}\). That one root of the limiting equation (3.25) is repulsive and thereby inappropriate corresponds to our expectation that there be a unique asymptotic solution to the given initial value problem (3.24). Vasil’eva’s work (as well as O’Malley’s) further suggests that the asymptotic solution of our initial value problem (3.24) indeed has the (higher-order) composite form

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} x(t,\epsilon ) = X(t,\epsilon ) +\epsilon u(\tau,\epsilon )\quad \\ y(t,\epsilon ) = Y (t,\epsilon ) + v(\tau,\epsilon )\quad \end{array} \right. }$$
(3.31)

uniformly on fixed bounded intervals 0 ≤ t ≤ T, where the outer solution \(\left (\begin{array}{*{10}c} X(t,\epsilon )\\ Y (t,\epsilon ) \end{array} \right )\) has an asymptotic power series expansion

$$\displaystyle{ \left (\begin{array}{*{10}c} X(t,\epsilon )\\ Y (t,\epsilon ) \end{array} \right ) \sim \sum _{j\geq 0}\left (\begin{array}{*{10}c} X_{j}(t) \\ Y _{j}(t) \end{array} \right )\epsilon ^{j} }$$
(3.32)

with

$$\displaystyle{ \left (\begin{array}{*{10}c} X_{0}(t) \\ Y _{0}(t) \end{array} \right ) = \left (\begin{array}{*{10}c} 1\\ 1 \end{array} \right )e^{t} }$$
(3.33)

and where all terms of the scaled supplemental initial layer corrector

$$\displaystyle{ \left (\begin{array}{*{10}c} u(\tau,\epsilon )\\ v(\tau,\epsilon ) \end{array} \right ) \sim \sum _{j\geq 0}\left (\begin{array}{*{10}c} u_{j}(\tau ) \\ v_{j}(\tau ) \end{array} \right )\epsilon ^{j} }$$
(3.34)

in (3.31) tend to zero as the fast time

$$\displaystyle{ \tau = t/\epsilon }$$
(3.35)

tends to infinity. Nonuniform convergence in the fast variable y (through v) provokes nonuniform convergence in the derivative \(\dot{x}\) of the slow variable since \(y =\dot{ x} = Y + v\). That is why X (as compared to Y) has the asymptotically less significant initial layer correction \(\epsilon u\). (Although we indicate full asymptotic expansions in (3.32) and (3.34), we in practice only generate a few terms of all the series.) The critical point is that our ansatz (3.31), especially its stability condition, usually allows us to bypass the tedium and inefficiency of actually matching inner and outer expansions. (A possible exception arises in singular cases when the outer limit \(\left (\begin{array}{*{10}c} X_{0}(t) \\ Y _{0}(t) \end{array} \right )\) is no longer defined or smooth at the initial point t = 0.)
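The leading-order ansatz (3.29) can be tested numerically. In the sketch below, the choice of ε, the step size, the classical RK4 integrator, and the closed-form layer solution \(v_{0}(\tau ) = -2/(1 + e^{2\tau })\) of (3.30) (obtained by separation of variables) are all illustrative assumptions, not from the text:

```python
import math

eps = 1.0e-3   # small parameter in (3.24)

def f(x, y):
    # slow-fast vector field of (3.24)
    return y, (x*x - y*y)/eps

def v0(tau):
    # explicit solution of the layer problem (3.30):
    # dv0/dtau = -(2 + v0)*v0, v0(0) = -1, i.e. v0 = -2/(1 + exp(2*tau)),
    # written to avoid overflow for large tau
    e = math.exp(-2.0*tau)
    return -2.0*e/(1.0 + e)

# classical RK4 with a step small enough for the O(1/eps) stiffness
h, T = 1.0e-4, 1.0
t, x, y = 0.0, 1.0, 0.0
errx = erry = 0.0
for _ in range(int(round(T/h))):
    k1 = f(x, y)
    k2 = f(x + 0.5*h*k1[0], y + 0.5*h*k1[1])
    k3 = f(x + 0.5*h*k2[0], y + 0.5*h*k2[1])
    k4 = f(x + h*k3[0], y + h*k3[1])
    x += h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
    y += h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
    t += h
    errx = max(errx, abs(x - math.exp(t)))                 # x ~ X0 = e^t
    erry = max(erry, abs(y - (math.exp(t) + v0(t/eps))))   # y ~ Y0 + v0
```

Both discrepancies stay O(ε)-small on 0 ≤ t ≤ 1, with the layer term v₀ absorbing the initial jump in y.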

Away from t = 0, the outer solution \(\left (\begin{array}{*{10}c} X\\ Y \end{array} \right )\) must satisfy the given system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{X} = Y \quad \\ \epsilon \dot{Y } = X^{2 } - Y ^{2}\quad \end{array} \right. }$$
(3.36)

as a power series (3.32) in ε, since the initial layer correction \(\left (\begin{array}{*{10}c} \epsilon u\\ v \end{array} \right )\) and its derivative have decayed to zero there. For ε = 0, we get the reduced system, and we pick its unique attractive solution \(\left (\begin{array}{*{10}c} X_{0} \\ Y _{0} \end{array} \right ) = \left (\begin{array}{*{10}c} 1\\ 1 \end{array} \right )e^{t}\) because the other possibility did not allow the needed asymptotic stability of v 0(τ). From the coefficient of ε in (3.36), we require that

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \dot{X}_{1} = Y _{1} \quad \\ \dot{Y }_{0} = 2X_{0}X_{1} - 2Y _{0}Y _{1}.\quad \end{array} \right.}$$

Since \(\dot{Y }_{0} = e^{t} = 2e^{t}(X_{1} - Y _{1})\), we need \(\dot{X}_{1} = Y _{1} = X_{1} -\frac{1} {2}\), so we obtain

$$\displaystyle{ X_{1}(t) = e^{t}\left (X_{ 1}(0) -\frac{1} {2}\right ) + \frac{1} {2} }$$
(3.37)

for an unspecified value X 1(0). Higher-order terms \(\left (\begin{array}{*{10}c} X_{k} \\ Y _{k} \end{array} \right )\) in the outer expansion likewise also follow readily and uniquely, up to specification of the initial values X k (0) for each k > 0.

Returning to the slow equation \(\dot{x} = y\), we have \(\dot{x} =\dot{ X} + \frac{du} {d\tau } = Y + v\); since \(\dot{X} = Y\), this implies the linear initial layer equation

$$\displaystyle{ \frac{du} {d\tau } = v. }$$
(3.38)

Since \(\epsilon \dot{Y } = X^{2} - Y ^{2}\), the nonlinear fast equation \(\epsilon \dot{y} =\epsilon \dot{ Y } + \frac{dv} {d\tau } = (X +\epsilon u)^{2} - (Y + v)^{2}\) implies the coupled initial layer equation

$$\displaystyle{ \frac{dv} {d\tau } = -2Y (\epsilon \tau,\epsilon )v - v^{2} + 2\epsilon X(\epsilon \tau,\epsilon )u +\epsilon ^{2}u^{2}, }$$
(3.39)

with the terms of the outer solution \(\left (\begin{array}{*{10}c} X\\ Y \end{array} \right )\) already known, up to specification of X(0, ε). The initial conditions

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} 1 = X(0,\epsilon ) +\epsilon u(0,\epsilon )\quad \\ 0 = Y (0,\epsilon ) + v(0,\epsilon )\quad \end{array} \right. }$$
(3.40)

indeed termwise imply that

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} 1 = X_{0}(0) \quad \\ 0 = Y _{0}(0) + v_{0}(0)\quad \end{array} \right. }$$
(3.41)

and

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} 0 = X_{k}(0) + u_{k-1}(0)\quad \\ 0 = Y _{k}(0) + v_{k}(0) \quad \end{array} \right. }$$
(3.42)

for each k ≥ 1. Thus, (3.41) requires

$$\displaystyle{ v_{0}(0) = -Y _{0}(0) = -1, }$$
(3.43)

while (3.42) successively determines the unknown

$$\displaystyle{ X_{k}(0) = -u_{k-1}(0) }$$
(3.44)

and, thereby, both Y k (0) (from the outer problem for X k ) and then v k (0).

Thus, the limiting initial layer system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{du_{0}} {d\tau } = v_{0} \quad \\ \frac{dv_{0}} {d\tau } = -2Y _{0}(0)v_{0} - v_{0}^{2} = -v_{ 0}(2 + v_{0})\quad \end{array} \right. }$$
(3.45)

for (3.38)–(3.39) is subject to the initial condition v 0(0) = −1. A direct integration provides

$$\displaystyle{ v_{0}(\tau ) =\tanh \tau -1, }$$
(3.46)

leaving the terminal value problem \(\frac{du_{0}} {d\tau } =\tanh \tau -1,\ \ \ u_{0}(\infty ) = 0\). Integrating backwards from \(\tau = \infty \), we uniquely get

$$\displaystyle{ u_{0}(\tau ) =\ln \cosh \tau -\tau +\ln 2. }$$
(3.47)

This immediately provides the needed initial value

$$\displaystyle{ X_{1}(0) = -u_{0}(0) = -\ln \,2, }$$
(3.48)

which uniquely specifies the second term \(\left (\begin{array}{*{10}c} X_{1} \\ Y _{1} \end{array} \right )\) of the outer solution via (3.37). In particular, \(Y _{1}(0) = X_{1}(0) -\frac{1} {2}\) next specifies

$$\displaystyle{ v_{1}(0) = -Y _{1}(0) = \frac{1} {2} +\ln 2, }$$
(3.49)

by (3.42), while v 1 by (3.39) must satisfy the linear differential equation

$$\displaystyle{ \frac{dv_{1}} {d\tau } = -2(Y _{0}(0) + v_{0}(\tau ))v_{1} - 2(\tau Y _{0}^{{\prime}}(0) + Y _{ 1}(0))v_{0} + 2X_{0}(0)u_{0}. }$$
(3.50)

Integrating this linear initial value problem provides v 1(τ) explicitly (though we won’t bother to write down its expression) and the uniform approximations

$$\displaystyle{ x(t,\epsilon ) = e^{t} +\epsilon \left [\frac{1} {2} -\left (\frac{1} {2} +\ln 2\right )e^{t} +\ln \left (\cosh \frac{t} {\epsilon } \right ) -\frac{t} {\epsilon } +\ln 2\right ] + O(\epsilon ^{2}) }$$
(3.51)

and

$$\displaystyle{ y(t,\epsilon ) = e^{t} +\tanh \frac{t} {\epsilon } - 1 +\epsilon \left [v_{1}\left (\frac{t} {\epsilon } \right ) -\left (\frac{1} {2} +\ln 2\right )e^{t}\right ] + O(\epsilon ^{2}). }$$
(3.52)
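Readers wishing to experiment can check the layer functions (3.46)–(3.47) and the composite approximation (3.51) numerically. The sketch below (assuming Python with NumPy; the step size and tolerances are our own choices, with a classical Runge–Kutta step taken small enough for the fast scale) integrates the full stiff system directly and compares it with the expansion.

```python
import numpy as np

eps = 0.01          # small parameter; t/eps <= 100, so cosh below does not overflow
h, T = 5e-4, 1.0    # RK4 step chosen small enough for the fast scale 1/eps
n = int(round(T / h))

def rhs(z):
    x, y = z
    return np.array([y, (x * x - y * y) / eps])  # x' = y, eps*y' = x^2 - y^2

z = np.array([1.0, 0.0])       # x(0) = 1, y(0) = 0
xs, ys = [z[0]], [z[1]]
for _ in range(n):             # classical fourth-order Runge-Kutta
    k1 = rhs(z)
    k2 = rhs(z + 0.5 * h * k1)
    k3 = rhs(z + 0.5 * h * k2)
    k4 = rhs(z + h * k3)
    z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    xs.append(z[0]); ys.append(z[1])

t = np.linspace(0.0, T, n + 1)
tau = t / eps
xs, ys = np.array(xs), np.array(ys)

# layer functions (3.46)-(3.47) and the composite expansion (3.51)
v0 = np.tanh(tau) - 1.0
u0 = np.log(np.cosh(tau)) - tau + np.log(2.0)
x_outer = np.exp(t)                                    # outer limit X0
x_comp = x_outer + eps * (0.5 - (0.5 + np.log(2.0)) * np.exp(t) + u0)
y_lead = np.exp(t) + v0                                # leading outer + layer for y

err_x0 = np.max(np.abs(xs - x_outer))   # O(eps)
err_x1 = np.max(np.abs(xs - x_comp))    # O(eps^2): the correction should help
err_y0 = np.max(np.abs(ys - y_lead))    # O(eps)
print(err_x0, err_x1, err_y0)
```

With ε = 0.01 the leading errors come out of order ε, while the corrected x-approximation (3.51) is roughly an order better, as the theory predicts.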

The blowup of e t as \(t \rightarrow \infty \) suggests that the results only apply on bounded t intervals. Hoppensteadt [212] added the necessary hypothesis that the solution of the reduced problem be asymptotically stable to Tikhonov’s original conditions in order to extend the Tikhonov–Levinson theory to the infinite t interval . Also see Vasil’eva [494], however. Before proceeding, the reader should note (with some amazement) the efficient interlacing construction of the expansions for the outer solution and the initial layer correction . Readers should also observe how closely Tikhonov–Levinson theory links singular perturbations and stability theory (cf. Cesari [74] and Coppel [96]).

(iii) Linear Systems

For the linear vector system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = A(t)x + B(t)y\quad \\ \epsilon \dot{y} = C(t)x + D(t)y\quad \end{array} \right. }$$
(3.53)

of m + n scalar equations on, say, 0 ≤ t ≤ 1, with smooth coefficients and prescribed bounded initial vectors

$$\displaystyle{ x(0)\ \ \ \mbox{ and }\ \ \ y(0), }$$
(3.54)

we will again seek a composite asymptotic solution of the form

$$\displaystyle\begin{array}{rcl} x(t,\epsilon )& = & X(t,\epsilon ) +\epsilon u(\tau,\epsilon ) \\ y(t,\epsilon )& = & Y (t,\epsilon ) + v(\tau,\epsilon ){}\end{array}$$
(3.55)

for τ = t∕ε, presuming the n × n matrix

$$\displaystyle{ D(t)\ \ \ \mbox{ remains strictly stable} }$$
(3.56)

(i.e., has all its eigenvalues strictly in the left half-plane) for 0 ≤ t ≤ 1.

Here, the limiting outer solution \(\left (\begin{array}{*{10}c} X_{0}(t) \\ Y _{0}(t) \end{array} \right )\) must satisfy the reduced problem

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \dot{X}_{0}\quad &= A(t)X_{0} + B(t)Y _{0},\ \ X_{0}(0) = x(0) \\ 0 \quad &= C(t)X_{0} + D(t)Y _{0}. \end{array} \right.}$$

Thus,

$$\displaystyle{ Y _{0}(t) = -D^{-1}(t)C(t)X_{ 0}(t) }$$
(3.57)

and X 0 must be found as a solution of the reduced initial value problem

$$\displaystyle{ \dot{X}_{0} = (A(t) - B(t)D^{-1}(t)C(t))X_{ 0},\ \ X_{0}(0) = x(0) }$$
(3.58)

of m equations. Note that the state matrix for X 0 in (3.58) is the Schur complement of the block D in the matrix \(\left (\begin{array}{*{10}c} A&B\\ C &D \end{array} \right )\). Higher-order terms \(\left (\begin{array}{*{10}c} X_{k} \\ Y _{k} \end{array} \right )\) in the outer expansion are determined from a regular perturbation solution of the system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{X} = A(t)X + B(t)Y \quad \\ \epsilon \dot{Y } = C(t)X + D(t)Y\quad \end{array} \right. }$$
(3.59)

about \(\left (\begin{array}{*{10}c} X_{0} \\ Y _{0} \end{array} \right )\), i.e. from the nonhomogeneous system

$$\displaystyle\begin{array}{rcl} \dot{X}_{j}& =& (A(t) - B(t)D^{-1}(t)C(t))X_{ j} + B(t)D^{-1}(t)\dot{Y }_{ j-1} \\ Y _{j}& =& -D^{-1}(t)C(t)X_{ j} + D^{-1}(t)\dot{Y }_{ j-1}. {}\end{array}$$
(3.60)

Moreover, linearity and the representation (3.55) imply that

$$\displaystyle{\frac{dx} {dt} = \frac{dX} {dt} + \frac{du} {d\tau } \ \ \mbox{ and}\ \ \epsilon \frac{dy} {dt} =\epsilon \frac{dY } {dt} + \frac{dv} {d\tau },}$$

so the initial layer correction must satisfy the nearly constant coefficient system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{du} {d\tau } =\epsilon A(\epsilon \tau )u + B(\epsilon \tau )v\quad \\ \frac{dv} {d\tau } =\epsilon C(\epsilon \tau )u + D(\epsilon \tau )v\quad \end{array} \right. }$$
(3.61)

and the limiting initial layer correction \(\left (\begin{array}{*{10}c} u_{0} \\ v_{0}\end{array} \right )\) must satisfy

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{du_{0}} {d\tau } = B(0)v_{0} \quad \\ \frac{dv_{0}} {d\tau } = D(0)v_{0},\ \ \ v_{0}(0) = y(0) - Y _{0}(0).\quad \end{array} \right. }$$
(3.62)

Integrating, we explicitly obtain the decaying n-vector

$$\displaystyle{ v_{0}(\tau ) = e^{D(0)\tau }(y(0) + D^{-1}(0)C(0)x(0)) }$$
(3.63)

while

$$\displaystyle{ u_{0}(\tau ) = -B(0)\int _{\tau }^{\infty }v_{ 0}(s)\,ds = B(0)D^{-1}(0)v_{ 0}(\tau ). }$$
(3.64)

Those unfamiliar with the matrix exponential should consult, e.g., Bellman [35].

Next, u 1 and v 1 will be decaying solutions of the initial value problem

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \frac{du_{1}} {d\tau } = B(0)v_{1} +\tau \dot{ B}(0)v_{0} + A(0)u_{0} \quad \\ \frac{dv_{1}} {d\tau } = D(0)v_{1} +\tau \dot{ D}(0)v_{0} + C(0)u_{0},\ \ \ v_{1}(0) = -Y _{1}(0)\quad \end{array} \right.}$$

which can be directly and uniquely solved. Taken vectorwise, the representation (3.55) determines the asymptotics of all solutions, i.e. of a fundamental matrix (cf. Coppel [96]) for the linear system (3.53) featuring initial layer behavior near t = 0.
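As a concrete scalar (m = n = 1) illustration of (3.57)–(3.58) and (3.62)–(3.64), take ẋ = y, εẏ = −x − y, i.e., A = 0, B = 1, C = −1, D = −1 (stable). Then X 0 = x(0)e −t, Y 0 = −X 0, v 0(τ) = e −τ(y(0) + x(0)), and u 0 = −v 0. The following sketch (assuming Python with NumPy; the example and step sizes are our own) compares the leading composite approximation against direct integration.

```python
import numpy as np

# scalar instance of (3.53): A = 0, B = 1, C = -1, D = -1 (stable)
eps = 0.01
h, T = 5e-4, 1.0
n = int(round(T / h))
x0, y0 = 1.0, 1.0

def rhs(z):
    x, y = z
    return np.array([y, (-x - y) / eps])   # x' = y, eps*y' = -x - y

z = np.array([x0, y0])
xs, ys = [x0], [y0]
for _ in range(n):                          # classical RK4
    k1 = rhs(z); k2 = rhs(z + 0.5 * h * k1)
    k3 = rhs(z + 0.5 * h * k2); k4 = rhs(z + h * k3)
    z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    xs.append(z[0]); ys.append(z[1])

t = np.linspace(0.0, T, n + 1)
tau = t / eps
# reduced solution (3.58): X0' = (A - B D^{-1} C) X0 = -X0, with Y0 = -X0 by (3.57)
X0 = x0 * np.exp(-t)
Y0 = -X0
# layer terms (3.63)-(3.64): v0 = e^{D(0) tau}(y(0) + D^{-1} C x(0)), u0 = B D^{-1} v0
v0 = np.exp(-tau) * (y0 + x0)
u0 = -v0

err_x = np.max(np.abs(np.array(xs) - (X0 + eps * u0)))
err_y = np.max(np.abs(np.array(ys) - (Y0 + v0)))
print(err_x, err_y)
```

Both errors come out of order ε, while dropping the layer term v 0 would leave an O(1) discrepancy in y near t = 0.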

(iv) Nonlinear Systems

The ansatz (3.55) applies directly to the initial value problem for the general slow-fast nonlinear system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = f(x,y,t,\epsilon ),\quad &\mbox{ $x(0)$ given}\\ \epsilon \dot{y} = g(x, y, t,\epsilon ),\quad &\mbox{$y(0)$ given} \end{array} \right. }$$
(3.65)

of m + n smooth differential equations on t ≥ 0 when the limiting differential-algebraic system (or reduced problem)

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{X}_{0} = f(X_{0},Y _{0},t,0),\quad &\mbox{ $X_{0}(0) = x(0)$} \\ 0 = g(X_{0},Y _{0},t,0) \quad &\mbox{ }\end{array} \right. }$$
(3.66)

(with m differential equations) has a smooth isolated solution

$$\displaystyle{ Y _{0} =\phi (X_{0},t) }$$
(3.67)

of the n algebraic constraint equations g = 0 selected so that

  1. (i)

    the resulting initial value problem

    $$\displaystyle{ \dot{X}_{0} = f(X_{0},\phi (X_{0},t),t,0),\ \ X_{0}(0) = x(0) }$$
    (3.68)

    has a solution X 0(t) defined on a finite interval 0 ≤ t ≤ T such that the Jacobian

    $$\displaystyle{g_{y}(X_{0},\phi (X_{0},t),t,0)}$$

    remains a stable n × n matrix there and

  2. (ii)

    the corresponding n-vector

    $$\displaystyle{v(0) = y(0) -\phi (x(0),0)}$$

    lies in the domain of influence of the trivial solution of the limiting autonomous initial layer system

    $$\displaystyle{ \frac{dv} {d\tau } = g(x(0),\phi (x(0),0) + v,0,0)\ \ \ \mbox{ for }\ \tau = t/\epsilon \geq 0. }$$
    (3.69)

Hypothesis (i) provides stability for the outer solution \(\left (\begin{array}{*{10}c} X\\ Y \end{array} \right )\) on the finite t interval (and, via the implicit function theorem, guarantees that the root ϕ is locally unique), while hypothesis (ii) provides asymptotic stability for v as \(\tau \rightarrow \infty \) within the initial layer, allowing the termwise construction of a decaying initial layer correction \(\left (\begin{array}{*{10}c} \epsilon u\\ v \end{array} \right )\) for τ ≥ 0. As an alternative, one could express condition (ii) in terms of the existence of an appropriate Liapunov function (cf. Khalil [251]). In practice, we begin by checking the hypotheses for various roots Y 0 of

$$\displaystyle{g(X_{0}(t),Y _{0}(t),t,0) = 0.}$$

Smooth ε-dependent initial values for (3.65) would pose no complication.

We have naturally presumed that outside any initial layers the limiting solution to any singularly perturbed initial value problem satisfies the reduced problem, but this isn’t always so. Eckhaus [133] introduced the counterexample

$$\displaystyle{\epsilon ^{3}\cos \left (\frac{t} {\epsilon ^{2}}\right )\ddot{x} +\epsilon \sin \left (\frac{t} {\epsilon ^{2}}\right )\dot{x} - x = 0,\ t \geq 0}$$

with initial values x(0) = 1 and \(\dot{x}(0) = 0\). Its solution,

$$\displaystyle{x(t,\epsilon ) = 1 +\epsilon -\epsilon \cos \left (\frac{t} {\epsilon ^{2}}\right ),}$$

however, tends to 1, rather than 0, as \(\epsilon \rightarrow 0\).

We further note that one practical way to approximate the solution of a differential-algebraic system

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = f(x,y,t)\quad \\ 0 = g(x, y, t)\quad \end{array} \right.}$$

is to regularize it, i.e. to introduce its singular perturbation

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = f(x,y,t)\quad \\ \epsilon \dot{y} = g(x, y, t)\quad \end{array} \right.}$$

and to approximately solve that for a small positive ε (cf. O’Malley and Kalachev [374] and Nipp and Stoffer [351]).
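As a small illustration of this regularization (our own example, assuming Python with NumPy), take ẋ = −x + y, 0 = x² − y. The reduced solution with x(0) = 1/2 is y = x² and ẋ = −x + x², whose exact solution is x(t) = 1/(1 + e^t). Solving the regularized system for small ε reproduces it to O(ε):

```python
import numpy as np

eps = 1e-3
h, T = 1e-4, 2.0
n = int(round(T / h))

def rhs(z):
    x, y = z
    return np.array([-x + y, (x * x - y) / eps])  # regularization: eps*y' = g = x^2 - y

# consistent initial data y(0) = x(0)^2 starts us on the constraint (no initial layer)
z = np.array([0.5, 0.25])
for _ in range(n):                          # classical RK4
    k1 = rhs(z); k2 = rhs(z + 0.5 * h * k1)
    k3 = rhs(z + 0.5 * h * k2); k4 = rhs(z + h * k3)
    z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x_exact = 1.0 / (1.0 + np.exp(T))   # solves x' = -x + x^2, x(0) = 1/2
err_x = abs(z[0] - x_exact)
err_g = abs(z[0] ** 2 - z[1])       # drift off the constraint g = 0 is O(eps)
print(err_x, err_g)
```

Note that g_y = −1 here, so the stability hypothesis for the regularization holds automatically.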

As anticipated, we shall seek an asymptotic solution

$$\displaystyle{ \left (\begin{array}{*{10}c} x(t,\epsilon )\\ y(t,\epsilon ) \end{array} \right ) = \left (\begin{array}{*{10}c} X(t,\epsilon )\\ Y (t,\epsilon ) \end{array} \right )+\left (\begin{array}{*{10}c} \epsilon u(\tau,\epsilon )\\ v(\tau,\epsilon ) \end{array} \right ) }$$
(3.70)

to (3.65) where the vector initial layer correction \(\left (\begin{array}{*{10}c} \epsilon u\\ v \end{array} \right ) \rightarrow 0\) as \(\tau \rightarrow \infty \). (Vasil’eva, typically, does not introduce the ε multiplying u in the x-variable representation of (3.70). After some effort, however, she gets a trivial leading term for u.) Thus, the outer solution \(\left (\begin{array}{*{10}c} X\\ Y \end{array} \right )\) must satisfy the given system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{X} = f(X,Y,t,\epsilon )\quad &\\ \epsilon \dot{Y } = g(X, Y, t,\epsilon )\quad \end{array} \right. }$$
(3.71)

as a power series \(\left (\begin{array}{*{10}c} X\\ Y \end{array} \right ) \sim \sum _{j\geq 0}\left (\begin{array}{*{10}c} X_{j}(t) \\ Y _{j}(t) \end{array} \right )\epsilon ^{j}\) in ε.

Further, the outer limit

$$\displaystyle{\left (\begin{array}{*{10}c} X_{0}(t) \\ Y _{0}(t) \end{array} \right )}$$

must correspond to an attractive root Y 0 = ϕ of the limiting algebraic equation of (3.66) such that g y (X 0, Y 0, t, 0) is stable and the resulting initial value problem

$$\displaystyle{ \dot{X}_{0} = f(X_{0}(t),\phi (X_{0}(t),t),t,0),X_{0}(0) = x(0) }$$
(3.72)

for the m-vector X 0 is guaranteed solvable (at least locally) by the classical existence and uniqueness theorem. Later terms \(\left (\begin{array}{*{10}c} X_{j} \\ Y _{j} \end{array} \right )\) must satisfy linearized systems

$$\displaystyle{ \dot{X}_{j} = f_{x}(X_{0},Y _{0},t,0)X_{j} + f_{y}(X_{0},Y _{0},t,0)Y _{j} + f_{j-1}(t) }$$
(3.73)
$$\displaystyle{ 0 = g_{x}(X_{0},Y _{0},t,0)X_{j} + g_{y}(X_{0},Y _{0},t,0)Y _{j} + g_{j-1}(t) }$$
(3.74)

for j > 0, where f j−1 and g j−1 are known successively in terms of preceding coefficients. We obtain Y j as an affine function of X j from (3.74) because the Jacobian g y (X 0, Y 0, t, 0) remains nonsingular. This leaves a linear system for X j , from (3.73), which will be uniquely solved once its initial value X j (0) is specified. Because \(\dot{x} =\dot{ X} + \frac{du} {d\tau }\), while \(\epsilon \dot{y} =\epsilon \dot{ Y } + \frac{dv} {d\tau }\), the initial layer correction \(\left (\begin{array}{*{10}c} \epsilon u\\ v \end{array} \right )\) must satisfy the nonlinear system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{du} {d\tau } \quad &= f(X +\epsilon u,Y + v,\epsilon \tau,\epsilon ) - f(X,Y,\epsilon \tau,\epsilon ) \\ \frac{dv} {d\tau } \quad &= g(X +\epsilon u,Y + v,\epsilon \tau,\epsilon ) - g(X,Y,\epsilon \tau,\epsilon ) \end{array} \right. }$$
(3.75)

as a power series

$$\displaystyle{\left (\begin{array}{*{10}c} u(\tau,\epsilon )\\ v(\tau,\epsilon ) \end{array} \right ) \sim \sum _{j\geq 0}\left (\begin{array}{*{10}c} u_{j}(\tau ) \\ v_{j}(\tau ) \end{array} \right )\epsilon ^{j}}$$

in ε. The t-dependent coefficients in (3.75) are expanded as functions of τ. Thus, \(\left (\begin{array}{*{10}c} u_{0} \\ v_{0}\end{array} \right )\) must satisfy

$$\displaystyle{ \frac{du_{0}} {d\tau } = f(X_{0}(0),Y _{0}(0) + v_{0},0,0) - f(X_{0}(0),Y _{0}(0),0,0) }$$
(3.76)

and

$$\displaystyle\begin{array}{rcl} \frac{dv_{0}} {d\tau } & = & g(X_{0}(0),Y _{0}(0) + v_{0},0,0) - g(X_{0}(0),Y _{0}(0),0,0) \\ & = & g(x(0),\phi (x(0),0) + v_{0},0,0). {}\end{array}$$
(3.77)

Since

$$\displaystyle{v_{0}(0) = y(0) - Y _{0}(0) = y(0) -\phi (x(0),0)}$$

has been assumed in hypothesis (ii) to lie in the domain of influence of the rest point v 0 = 0 of system (3.77), we are guaranteed that the nonlinear initial value problem for v 0 has the desired decaying solution v 0(τ) on τ ≥ 0. (One might need to obtain it numerically.) In terms of it, we simply integrate (3.76) to get

$$\displaystyle\begin{array}{rcl} & & u_{0}(\tau ) = -\int _{\tau }^{\infty }[f(x(0),\phi (x(0),0) + v_{ 0}(s),0,0) \\ & & \qquad \qquad \qquad \quad - f(x(0),\phi (x(0),0),0,0)]\,ds. {}\end{array}$$
(3.78)

This, in turn, provides the initial value

$$\displaystyle{ X_{1}(0) = -u_{0}(0) }$$
(3.79)

needed to specify the outer expansion term X 1(t) and thereby Y 1(t). Linearized problems for \(\left (\begin{array}{*{10}c} u_{j} \\ v_{j}\end{array} \right )\), j > 0, with successively determined initial vectors v j (0) = −Y j (0) will again have exponentially decaying solutions. This immediately specifies the needed vector X j+1(0) = −u j (0) for the next terms in the outer expansion.
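The leading-order construction just described is easy to automate. The sketch below (assuming Python with NumPy; the sample system, helper names, and tolerances are our own) takes f(x, y) = −x + y and g(x, y) = x − y − y³, whose attractive root y = ϕ(x) solves y³ + y = x (here g_y = −1 − 3y² is always stable), builds X 0 by integrating the reduced problem, attaches the decaying layer v 0(τ) from (3.77), and checks the composite leading approximation against a direct stiff integration.

```python
import numpy as np

eps = 0.01
h, T = 2e-4, 1.0
n = int(round(T / h))
x_init, y_init = 1.0, 2.0

def f(x, y): return -x + y
def g(x, y): return x - y - y ** 3        # g_y = -1 - 3y^2 < 0: uniformly stable

def phi(x):                               # attractive root of g = 0 via Newton iteration
    y = x
    for _ in range(20):
        y -= (y + y ** 3 - x) / (1.0 + 3.0 * y * y)
    return y

def rk4(z, F, step, count):               # RK4 integrator returning the whole history
    out = [z.copy()]
    for _ in range(count):
        k1 = F(z); k2 = F(z + 0.5 * step * k1)
        k3 = F(z + 0.5 * step * k2); k4 = F(z + step * k3)
        z = z + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(z.copy())
    return np.array(out)

# full stiff system
full = rk4(np.array([x_init, y_init]),
           lambda z: np.array([f(z[0], z[1]), g(z[0], z[1]) / eps]), h, n)

# reduced (outer) problem (3.72): X0' = f(X0, phi(X0)), X0(0) = x(0)
X0 = rk4(np.array([x_init]), lambda z: np.array([f(z[0], phi(z[0]))]), h, n)[:, 0]

# layer problem (3.77) in tau = t/eps: v0' = g(x(0), phi(x(0)) + v0), v0(0) = y(0) - phi(x(0))
v0 = rk4(np.array([y_init - phi(x_init)]),
         lambda v: np.array([g(x_init, phi(x_init) + v[0])]), h / eps, n)[:, 0]

Y0 = np.array([phi(x) for x in X0])
err_x = np.max(np.abs(full[:, 0] - X0))           # O(eps)
err_y = np.max(np.abs(full[:, 1] - (Y0 + v0)))    # O(eps)
print(err_x, err_y)
```

The interlacing of the two integrations mirrors the construction in the text: the outer solution supplies ϕ(x(0)) to the layer problem, and the layer solution would in turn supply X 1(0) = −u 0(0) at the next order.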

In the unusual situation that the outer solution

$$\displaystyle{\left (\begin{array}{*{10}c} X(t,\epsilon )\\ Y (t,\epsilon ) \end{array} \right )}$$

satisfies the initial condition

$$\displaystyle{\left (\begin{array}{*{10}c} X(0,\epsilon )\\ Y (0,\epsilon ) \end{array} \right ) = \left (\begin{array}{*{10}c} x(0)\\ y(0) \end{array} \right ),}$$

the resulting boundary layer correction

$$\displaystyle{\left (\begin{array}{*{10}c} \epsilon u(\tau,\epsilon )\\ v(\tau,\epsilon ) \end{array} \right )}$$

will be trivial. If we have

$$\displaystyle{y(0) =\phi (x(0),0),}$$

we can omit the trivial first terms \(\left (\begin{array}{*{10}c} u_{0} \\ v_{0}\end{array} \right )\) of the boundary layer correction. Later terms naturally satisfy linear problems.

A substantial simplification occurs when the nonlinear system (3.71) is linear with respect to the fast variable y . Thus, we separately consider the initial value problem for the system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = A(x,t,\epsilon ) + B(x,t,\epsilon )y\quad &\\ \epsilon \dot{y} = C(x, t,\epsilon ) + D(x, t,\epsilon )y\quad \end{array} \right. }$$
(3.80)

on t ≥ 0. The corresponding reduced system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{X_{0}} = A(X_{0},t,0) + B(X_{0},t,0)Y _{0}\quad & \\ 0 = C(X_{0},t,0) + D(X_{0},t,0)Y _{0} \quad \end{array} \right. }$$
(3.81)

will imply

$$\displaystyle{Y _{0}(t) = -D^{-1}(X_{ 0},t,0)C(X_{0},t,0)}$$

while X 0 must satisfy the reduced nonlinear vector problem

$$\displaystyle{ \dot{X_{0}} = A(X_{0},t,0) - B(X_{0},t,0)D^{-1}(X_{ 0},t,0)C(X_{0},t,0),\ X_{0}(0) = x(0). }$$
(3.82)

We suppose that (3.82) has a solution X 0(t) on \(0 \leq t \leq T < \infty \) with a resulting stable matrix

$$\displaystyle{D(X_{0}(t),t,0).}$$

Higher-order terms in the outer expansion \(\left (\begin{array}{*{10}c} X(t,\epsilon )\\ Y (t,\epsilon ) \end{array} \right )\) then follow successively, without complication, up to specification of X(0, ε).

The supplemental initial layer correction

$$\displaystyle{ \left (\begin{array}{*{10}c} \epsilon u(\tau,\epsilon )\\ v(\tau,\epsilon ) \end{array} \right ) }$$
(3.83)

must be a decaying solution of the stretched system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{du} {d\tau } \quad &= A(X +\epsilon u,t,\epsilon ) - A(X,t,\epsilon ) \\ \quad &\quad + B(X +\epsilon u,t,\epsilon )(Y + v) - B(X,t,\epsilon )Y \\ \frac{dv} {d\tau } \quad &= C(X +\epsilon u,t,\epsilon ) - C(X,t,\epsilon ) \\ \quad &\quad + D(X +\epsilon u,t,\epsilon )(Y + v) - D(X,t,\epsilon )Y.\end{array} \right. }$$
(3.84)

Moreover, the initial conditions require

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} X(0,\epsilon ) +\epsilon u(0,\epsilon ) = x(0)\quad \\ \mbox{ and} \quad \\ Y (0,\epsilon ) + v(0,\epsilon ) = y(0).\quad \end{array} \right. }$$
(3.85)

Thus, the limiting linear initial layer problem is

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{du_{0}} {d\tau } = B(x(0),0,0)v_{0} \quad \\ \frac{dv_{0}} {d\tau } = D(x(0),0,0)v_{0},\quad v_{0}(0) = y(0) - Y _{0}(0).\quad \end{array} \right. }$$
(3.86)

It has the decaying solution

$$\displaystyle{ v_{0}(\tau ) = e^{D(x(0),0,0)\tau }\left (y(0) + D^{-1}(x(0),0,0)C(x(0),0,0)\right ) }$$
(3.87)

with

$$\displaystyle\begin{array}{rcl} u_{0}(\tau )& =& -\int _{\tau }^{\infty }B(x(0),0,0)v_{ 0}(s)\,ds \\ & =& -B(x(0),0,0)D^{-1}(x(0),0,0)v_{ 0}(\tau ).{}\end{array}$$
(3.88)

Later terms follow readily. Again, X 1(0) = −u 0(0) will specify the O(ε) terms in the outer expansion.

A special case of (3.65) is provided by the scalar Liénard equation

$$\displaystyle{ \epsilon \ddot{x} + f(x)\dot{x} + g(x) = 0 }$$
(3.89)

on t ≥ 0 with initial values x(0) and \(\dot{x}(0)\) provided. We introduce \(y =\dot{ x}\), so \(\epsilon \dot{y} + f(x)y + g(x) = 0\). Then, the limiting solution is the monotonic solution of the separable equation

$$\displaystyle{ \dot{X_{0}} = -\frac{g(X_{0})} {f(X_{0})},\qquad X_{0}(0) = x(0), }$$
(3.90)

presuming the stability hypothesis

$$\displaystyle{ f(X_{0}) > 0 }$$
(3.91)

holds throughout. Then

$$\displaystyle{ y(t,\epsilon ) =\dot{ x}(t,\epsilon ) = -\frac{g(X_{0})} {f(X_{0})} + e^{-f(x(0))\tau }\left (\dot{x}(0) + \frac{g(x(0))} {f(x(0))}\right ) + O(\epsilon ) }$$
(3.92)

features an initial layer, while X 0(t) is determined only implicitly.
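For instance (our own illustration, assuming Python with NumPy), take f(x) = 1 + x² > 0 and g(x) = x with x(0) = ẋ(0) = 1. Then Y 0 = −X 0∕(1 + X 0²) and the layer decays like e^{−f(x(0))τ} = e^{−2τ}, with amplitude ẋ(0) + g(x(0))∕f(x(0)) = 3∕2:

```python
import numpy as np

# Lienard example: eps*x'' + (1 + x^2) x' + x = 0, so f(x) = 1 + x^2 > 0, g(x) = x
eps = 0.01
h, T = 1e-4, 1.0
n = int(round(T / h))
x_init, xdot_init = 1.0, 1.0

def rhs(z):
    x, y = z
    return np.array([y, (-(1.0 + x * x) * y - x) / eps])

def F(u): return -u / (1.0 + u * u)   # reduced equation (3.90): X0' = -g/f

z = np.array([x_init, xdot_init])
ys = [z[1]]
X0 = [x_init]
w = x_init
for _ in range(n):                          # classical RK4 for both systems
    k1 = rhs(z); k2 = rhs(z + 0.5 * h * k1)
    k3 = rhs(z + 0.5 * h * k2); k4 = rhs(z + h * k3)
    z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    ys.append(z[1])
    q1 = F(w); q2 = F(w + 0.5 * h * q1); q3 = F(w + 0.5 * h * q2); q4 = F(w + h * q3)
    w = w + (h / 6.0) * (q1 + 2 * q2 + 2 * q3 + q4)
    X0.append(w)

t = np.linspace(0.0, T, n + 1)
X0 = np.array(X0)
f0 = 1.0 + x_init ** 2                      # f(x(0)) = 2
Y0 = -X0 / (1.0 + X0 ** 2)
v0 = np.exp(-f0 * t / eps) * (xdot_init + x_init / f0)   # decaying layer term

err_y = np.max(np.abs(np.array(ys) - (Y0 + v0)))         # O(eps)
print(err_y)
```

Dropping v 0 would leave an O(1) mismatch of 3∕2 at t = 0, while the composite approximation is uniformly accurate to O(ε).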

Skinner [463] considers linear turning point problems of the special form

$$\displaystyle{ \epsilon ^{2}y^{{\prime}} + xa(x,\epsilon )y =\epsilon b(x,\epsilon ),\ \ y(0) =\epsilon \alpha (\epsilon ) }$$
(3.93)

for smooth functions a and b with a(x, ε) > 0. The simplest example seems to be

$$\displaystyle{ \epsilon ^{2}y^{{\prime}} + xy =\epsilon,\ \ y(0) = 0. }$$
(3.94)

Its exact solution is the inner solution

$$\displaystyle{ y(x,\epsilon ) = u(x/\epsilon ) }$$
(3.95)

for

$$\displaystyle{ u(\xi ) = e^{-\frac{\xi ^{2}} {2} }\int _{0}^{\xi }e^{\frac{r^{2}} {2} }\,dr. }$$
(3.96)

Integrating by parts repeatedly, we get the algebraically decaying behavior

$$\displaystyle{e^{-\frac{\xi ^{2}} {2} }\int ^{\xi } \frac{1} {r} \frac{d} {dr}\left (e^{\frac{r^{2}} {2} }\right )\,dr \sim \frac{1} {\xi } + \frac{1} {\xi ^{3}} +\ldots \ \ \mbox{ as }\ \xi \rightarrow \infty,}$$

corresponding to the readily generated outer expansion

$$\displaystyle{ Y (x,\epsilon ) \sim \frac{\epsilon } {x} + \frac{\epsilon ^{3}} {x^{3}}+\ldots, }$$
(3.97)

singular at the turning point x = 0. These problems are certainly more complicated than the initial value problems we have considered previously, so Skinner [463] is highly recommended reading.
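Both claims about the model problem (3.94) are easy to check numerically: differentiating (3.96) shows u solves u′ = 1 − ξ u, u(0) = 0, and for moderately large ξ the two-term algebraic tail 1∕ξ + 1∕ξ³ is already accurate. A sketch (assuming Python; the grid is our own choice):

```python
# u solves u' = 1 - xi*u, u(0) = 0, equivalent to (3.94) under y(x) = u(x/eps)
h = 1e-3
xi_end = 6.0
n = int(round(xi_end / h))

def F(xi, u): return 1.0 - xi * u

u, xi = 0.0, 0.0
for _ in range(n):                        # classical RK4
    k1 = F(xi, u)
    k2 = F(xi + 0.5 * h, u + 0.5 * h * k1)
    k3 = F(xi + 0.5 * h, u + 0.5 * h * k2)
    k4 = F(xi + h, u + h * k3)
    u += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    xi += h

two_term = 1.0 / xi_end + 1.0 / xi_end ** 3   # algebraic tail ~ 1/xi + 1/xi^3
print(u, two_term, abs(u - two_term))
```

The residual of the two-term tail at ξ = 6 is of size 3∕ξ⁵, the next term of the (divergent) asymptotic series, and adding the 1∕ξ³ term visibly improves on 1∕ξ alone.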

Exercises

  1. 1.

    Find the exact solution to the scalar equation

    $$\displaystyle{\epsilon \dot{y} = y - y^{3}}$$

    on t ≥ 0 and determine how the outer limit Y 0(t) for t > 0 depends on y(0).

  2. 2.

    Solve the initial value problem for the planar system

    $$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = xy \quad \\ \epsilon \dot{y} = y - y^{3}\quad \end{array} \right.}$$

on 0 ≤ t ≤ T < ∞ and determine the outer solution.

  3. 3.

    Obtain an O(ε 2) approximation to the solution of the planar initial value problem

    $$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = -x + (x +\kappa -\lambda )y,\quad &\mbox{ $x(0) = -1$}\\ \epsilon \dot{y} = x - (x+\kappa )y, \quad &\mbox{ $y(0) = 0$} \end{array} \right.}$$

    for positive constants κ and λ as \(\epsilon \rightarrow 0^{+}\). The problem arises in enzyme kinetics (cf. Segel and Slemrod [448], Murray [339], and Segel and Edelstein-Keshet [447]).

  4. 4.

    A model for autocatalysis is given by the slow-fast system

    $$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = x(1 + y^{2}) - y, \quad &x(0) = 1 \\ \epsilon \dot{y} = -x(1 + y^{2}) + e^{-t},\quad &y(0) = 1.\end{array} \right.}$$

    Seek an asymptotic solution of the form

    $$\displaystyle\begin{array}{rcl} x(t,\epsilon )& =& X(t,\epsilon ) +\epsilon u(\tau,\epsilon ) {}\\ y(t,\epsilon )& =& Y (t,\epsilon ) + v(\tau,\epsilon ) {}\\ \end{array}$$

    where \(\left (\begin{array}{*{10}c} u\\ v\end{array} \right ) \rightarrow 0\) as \(\tau = \frac{t} {\epsilon } \rightarrow \infty \).

    1. (a)

      Obtain the first two terms of the outer expansion \(\left (\begin{array}{*{10}c} X\\ Y\end{array} \right )\).

    2. (b)

      Obtain the system for \(\left (\begin{array}{*{10}c} u\\ v\end{array} \right )\).

    3. (c)

      Determine the uniform approximation

      $$\displaystyle\begin{array}{rcl} x(t,\epsilon )& =& X_{0}(t) + O(\epsilon ) {}\\ y(t,\epsilon )& =& Y _{0}(t) + v_{0}\left (\frac{t} {\epsilon } \right ) + O(\epsilon ). {}\\ \end{array}$$
  5. 5.

    Consider the initial value problem for the conservation equation

    $$\displaystyle{\epsilon \frac{d^{2}x} {dt^{2}} = f(x)}$$

    with x(0) and \(\frac{dx} {dt} (0)\) prescribed. (An example is the pendulum equation \(\epsilon \ddot{x} +\sin (\pi x) = 0\).) Consider an asymptotic solution

    $$\displaystyle{x(t,\epsilon ) = X(t,\epsilon ) +\epsilon u(\tau,\epsilon )}$$

where \(u \rightarrow 0\) as \(\tau = \frac{t} {\epsilon } \rightarrow \infty \) and 0 ≤ t ≤ T < ∞. Use Tikhonov–Levinson theory on the corresponding slow-fast system

    $$\displaystyle\begin{array}{rcl} \frac{dx} {dt} & =& y {}\\ \epsilon \frac{dy} {dt} & =& f(x) {}\\ \end{array}$$

    under appropriate conditions.

(v) Remarks

In the remainder of this section, we will survey some important results from the literature. Readers should consult the references for further details.

We note that the typical requirements of the classical existence-uniqueness theory do not hold for the singular perturbation systems under consideration since their Lipschitz constant becomes unbounded when ε tends to zero. Sophisticated estimates are, nonetheless, provided by Nipp and Stoffer [351]. Needed asymptotic techniques, presented in Wasow [513], are updated in Hsieh and Sibuya [220] through the introduction of Gevrey asymptotics (cf. Ramis [405] and Balser [25]). A formal power series \(\sum _{m=0}^{\infty }a_{m}x^{m}\) is defined to be of Gevrey order s if there exist nonnegative numbers C and A such that

$$\displaystyle{\vert a_{m}\vert \leq C(m!)^{s}A^{m}}$$

for all m (cf. Sibuya [457], Sibuya [458], and Canalis-Durand et al. [68]).

Fruchard and Schäfke [165] develop a “composite asymptotic expansion” approach which justifies matched asymptotic expansions for a class of ordinary differential equations, allowing some turning points. Their outer solutions and initial layer corrections are obtained as Gevrey expansions.

Instead of assuming asymptotic stability of the limiting fast system (the preceding hypothesis (ii)), one might instead consider the possibility of having rapid oscillations for the solution of the fast system (cf. Artstein et al. [13, 14]). It is useful, indeed, to interpret these solutions in terms of Young measures.

In his study of the quasi-static state analysis , Hoppensteadt [213, 214] considers the perturbed gradient system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{dx} {dt} \quad &= f(x,y,t,\epsilon ) \\ \epsilon \frac{dy} {dt} \quad &= -\bigtriangledown _{y}G(x,y) +\epsilon g(x,y,t,\epsilon ).\end{array} \right. }$$
(3.98)

Since

$$\displaystyle{\frac{dG} {dt} = -\frac{1} {\epsilon } \bigtriangledown _{y}G \cdot \bigtriangledown _{y}G + \bigtriangledown _{x}G \cdot f + \bigtriangledown _{y}G \cdot g = -\frac{1} {\epsilon } \vert \bigtriangledown _{y}G\vert ^{2} + O(1),}$$

we might expect (under natural assumptions) the fast vector y to tend rapidly to an isolated minimum y ∗ of the energy G(x, y), presuming y(0) is in its domain of attraction. The corresponding limiting slow variable will satisfy

$$\displaystyle{ \frac{dx} {dt} = f(x,y^{{\ast}},t,0). }$$
(3.99)
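A minimal sketch of this quasi-static behavior (our own example, assuming Python with NumPy): take G(x, y) = ½(y − x)², so ∇_yG = y − x and the minimizer is y ∗ = x, with f(x, y) = −y, so the limiting slow equation (3.99) is ẋ = −x. The energy collapses on the t-scale ε while x drifts slowly:

```python
import numpy as np

# G(x, y) = 0.5*(y - x)^2, grad_y G = y - x; minimizer y* = x; slow limit x' = -x
eps = 0.01
h, T = 5e-4, 1.0
n = int(round(T / h))

def rhs(z):
    x, y = z
    return np.array([-y, -(y - x) / eps])

z = np.array([1.0, 3.0])          # y(0) far from the minimizer y* = x(0)
G0 = 0.5 * (z[1] - z[0]) ** 2
t = 0.0
G_early = None
for _ in range(n):                # classical RK4
    k1 = rhs(z); k2 = rhs(z + 0.5 * h * k1)
    k3 = rhs(z + 0.5 * h * k2); k4 = rhs(z + h * k3)
    z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h
    if G_early is None and t >= 10 * eps:
        G_early = 0.5 * (z[1] - z[0]) ** 2   # energy after a few layer widths

gap_final = abs(z[1] - z[0])      # y has locked onto y* = x, up to O(eps)
x_slow = np.exp(-T)               # solution of the limiting slow equation (3.99)
print(G_early / G0, gap_final, abs(z[0] - x_slow))
```

After roughly ten layer widths the energy has dropped by orders of magnitude, and x then follows the limiting slow equation to O(ε).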

Extensions to more complicated systems are also given, including a four-dimensional Lorenz model

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x}_{1}\quad &= x_{2}x_{3} - bx_{1} +\epsilon f_{1}(x,y,t,\epsilon ) \\ \dot{x}_{2}\quad &= -x_{1}x_{3} + rx_{3} - x_{2} +\epsilon f_{2}(x,y,t,\epsilon ) \\ \dot{x}_{3}\quad &=\sigma (x_{2} - x_{3}) +\epsilon f_{3}(x,y,t,\epsilon ) \\ \epsilon \dot{y} \quad &=\lambda y - y^{3} +\epsilon g(x,y,t,\epsilon ) \end{array} \right.}$$

that has the function

$$\displaystyle{W(y) = \frac{1} {2}(y -\sqrt{\lambda })^{2}}$$

as a Liapunov or energy function for the branch \(y = \sqrt{\lambda } + O(\epsilon )\) (with λ > 0) of the limiting fast system (cf. Brauer and Nohel [59]). Solutions beginning nearby remain close to the manifold, and may exhibit chaotic behavior for certain values of the parameters b, r, and σ.

Asymptotic expansions, as in the ansatz (3.70), are used in Hairer and Wanner [192] to develop Runge–Kutta methods for numerically integrating vector initial value problems in the singularly perturbed form

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = f(x,y),\quad \\ \epsilon \dot{y} = g(x, y),\quad \end{array} \right. }$$
(3.100)

assuming that the Jacobian matrix g y is stable near the solution of the reduced differential-algebraic system. Note, in the planar situation, that trajectories will satisfy

$$\displaystyle{\epsilon \frac{dy} {dx} = \frac{g(x,y)} {f(x,y)}.}$$

In particular, Hairer and Wanner begin their treatment of such stiff differential equations by considering the one-dimensional example

$$\displaystyle{\dot{y} = g(x,y) = -50(y -\cos x)}$$

from Curtiss and Hirschfelder [107] (with ε = 0.02), pointing out the spurious oscillations one finds with the explicit Euler method, in contrast to the success obtained using the backward differentiation formula

$$\displaystyle{ y_{n+1} - y_{n} = hg(x_{n+1},y_{n+1}). }$$
(3.101)
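The contrast is easy to reproduce (a sketch in plain Python; the step size h = 0.1 is our own choice). Explicit Euler has amplification factor 1 − 50h = −4 and blows up, while the backward formula (3.101), solved in closed form since g is linear in y, stays damped:

```python
import math

h, T = 0.1, 3.0
n = int(round(T / h))

y_exp, y_imp = 0.0, 0.0   # y(0) = 0 for both schemes
worst_exp, worst_imp = 0.0, 0.0
for k in range(n):
    x_next = (k + 1) * h
    # explicit Euler: y_{n+1} = y_n + h*g(x_n, y_n); amplification 1 - 50h = -4
    y_exp = y_exp - 50.0 * h * (y_exp - math.cos(k * h))
    # backward differentiation (3.101), solved for y_{n+1} since g is linear in y
    y_imp = (y_imp + 50.0 * h * math.cos(x_next)) / (1.0 + 50.0 * h)
    worst_exp = max(worst_exp, abs(y_exp))
    worst_imp = max(worst_imp, abs(y_imp))

print(worst_exp, worst_imp)
```

The implicit iterates remain bounded near cos x, while the explicit ones grow by roughly a factor of four per step.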

Aiken [5] provides a review of the early literature from the chemical engineering perspective.

The existence of periodic solutions to the slow-fast vector system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = f(x,y,t,\epsilon )\quad \\ \epsilon \dot{y} = g(x, y, t,\epsilon )\quad \end{array} \right. }$$
(3.102)

was considered by Flatto and Levinson [149] and generalized in Wasow [513].

We will assume that f and g are periodic in t with period ω and that the reduced system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{X}_{0} = f(X_{0},Y _{0},t,0)\quad \\ 0 = g(X_{0},Y _{0},t,0) \quad \end{array} \right. }$$
(3.103)

has a solution \(\left (\begin{array}{*{10}c} X_{0} \\ Y _{0} \end{array} \right )\) of period ω. The question is whether or not the full system (3.102) has a nearby periodic solution of the same period.

Let s be the vector parameter of initial values for X 0, with the corresponding variational system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{d} {dt}\left (\frac{\partial X_{0}} {\partial s} \right ) = f_{x}(X_{0},Y _{0},t,0)\frac{\partial X_{0}} {\partial s} + f_{y}(X_{0},Y _{0},t,0)\frac{\partial Y _{0}} {\partial s} \quad \\ 0 = g_{x}(X_{0},Y _{0},t,0)\frac{\partial X_{0}} {\partial s} + g_{y}(X_{0},Y _{0},t,0)\frac{\partial Y _{0}} {\partial s} \quad \end{array} \right. }$$
(3.104)

We will assume that

  1. (i)

    there is a smooth nonsingular matrix P(t) of period ω so that

    $$\displaystyle{ P^{-1}(t)g_{ y}(X_{0},Y _{0},t,0)P(t) \equiv \left (\begin{array}{*{10}c} B(t)& 0 \\ 0 &-C(t)\end{array} \right ) }$$
    (3.105)

    with B(t) and C(t) being stable matrices.

    Since g y (X 0, Y 0, t, 0) is nonsingular and

    $$\displaystyle{ \frac{\partial Y _{0}} {\partial s} = -g_{y}^{-1}(X_{ 0},Y _{0},t,0)g_{x}(X_{0},Y _{0},t,0)\frac{\partial X_{0}} {\partial s}, }$$
    (3.106)

    \(\xi \equiv \frac{\partial X_{0}} {\partial s}\) will satisfy the linear system

    $$\displaystyle{ \frac{d\xi } {dt} = A(t)\xi }$$
    (3.107)

    for \(A(t)\! \equiv \! f_{x}(X_{0},Y _{0},t,0)-f_{y}(X_{0},Y _{0},t,0)g_{y}^{-1}(X_{0},Y _{0},t,0)g_{x}(X_{0},Y _{0},t,0)\).

We will also assume

  1. (ii)

    the variational equation (3.107) has no nontrivial solution of period ω.

(Recall Floquet theory and the Fredholm alternative theorem from Coddington and Levinson [91]). Flatto and Levinson [149] show that the full system (3.102) will then have a solution of period ω with a uniform asymptotic expansion

$$\displaystyle{ \left (\begin{array}{*{10}c} x(t,\epsilon )\\ y(t,\epsilon ) \end{array} \right ) \sim \sum _{k=0}^{\infty }\left (\begin{array}{*{10}c} X_{k}(t) \\ Y _{k}(t) \end{array} \right )\epsilon ^{k}. }$$
(3.108)

Because there are no distinguished boundary points, the periodic solution doesn’t need boundary layers.

Verhulst [501] considers the scalar Riccati example

$$\displaystyle{ \epsilon \dot{y} = a(t)y - y^{2} }$$
(3.109)

with a(t) positive and periodic. The reduced problem has a nontrivial and stable periodic solution

$$\displaystyle{Y _{0}(t) = a(t)}$$

while we suppose the singularly perturbed equation (3.109) has a regularly perturbed solution

$$\displaystyle{y(t,\epsilon ) = Y _{0}(t) +\epsilon Y _{1}(t)+\ldots }$$

This requires

$$\displaystyle{\dot{Y }_{0} = a(t)Y _{1} - 2Y _{0}Y _{1}}$$

at O(ε) order, so \(Y _{1}\! =\! -\frac{\dot{a}(t)} {a(t)}\) implies the corresponding periodic approximation

$$\displaystyle{ y(t,\epsilon ) \sim a(t) -\epsilon \frac{\dot{a}(t)} {a(t)} +\ldots. }$$
(3.110)
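The two-term approximation (3.110) is easy to test numerically. The following sketch (the periodic coefficient a(t) = 2 + sin t, the value ε = 0.05, and the tolerances are our sample choices, not from the text) integrates (3.109) with a classical Runge–Kutta scheme and compares the result with the one- and two-term approximations:

```python
import math

# sample data (our choices): a(t) = 2 + sin t is positive and 2*pi-periodic
eps = 0.05
a = lambda t: 2.0 + math.sin(t)
adot = lambda t: math.cos(t)

def rhs(t, y):
    # eps*ydot = a(t)*y - y**2, rewritten as ydot = ...
    return (a(t) * y - y * y) / eps

# classical fourth-order Runge-Kutta, starting on the reduced solution Y0 = a
t, y, dt = 0.0, a(0.0), 2e-4
while t < 10.0:
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

one_term = abs(y - a(t))                              # O(eps)
two_term = abs(y - (a(t) - eps * adot(t) / a(t)))     # O(eps^2)
print(one_term, two_term)
```

The two-term error is smaller than the one-term error by roughly a factor of ε, as (3.110) predicts.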

Kopell and Howard [258] studied the Belousov–Zhabotinsky reaction, which provides dramatic chemical oscillations with color changes. When one seeks a traveling wave solution, a concentration C satisfies a singularly perturbed differential equation

$$\displaystyle{C^{{\prime}} = F(C) +\beta C^{{\prime\prime}}}$$

with a small β > 0. Kopell [257] supposes that the reduced problem has a stable limit cycle and she seeks a nearby invariant manifold for the perturbed problem. This provides a major motivation for Fenichel’s geometric theory from 1979, which generalizes Anosov [11].

The concept of a slow integral manifold (cf. Wiggins [522], Nipp and Stoffer [351], Goussis [178], Shchepakina et al. [450], Kuehn [268], and Roberts [418]) is valuable in many applied contexts, including chemical kinetics, control theory, and computation (cf. also, Kokotović et al. [256] and Gear et al. [167]). Let’s again consider the initial value problem for the slow-fast m + n dimensional system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = f(x,y,t,\epsilon )\quad \\ \epsilon \dot{y} = g(x, y, t,\epsilon )\quad \end{array} \right. }$$
(3.111)

on t ≥ 0, subject to the usual Tikhonov–Levinson stability hypotheses. We will determine a corresponding slow manifold described by

$$\displaystyle{ y(t,\epsilon ) = h(x(t,\epsilon ),t,\epsilon ) }$$
(3.112)

for a vector function h to be determined termwise as a power series in ε. Motion along it will then be governed by the m-dimensional slow system

$$\displaystyle{ \dot{x} = f(x,h(x,t,\epsilon ),t,\epsilon ), }$$
(3.113)

subject to the prescribed initial vector x(0). This approach provides a substantial reduction in dimensionality when n is large, although it fails to describe the usual rapid nonuniform convergence of y in the O(ε)-thick initial layer. However, in chemical kinetics, for example, the initial layer behavior may occur too quickly to measure in the lab. Thus, it’s then natural to seek such a quasi-steady state. Still, the fast equation and the chain rule applied to (3.112) imply the invariance equation

$$\displaystyle{ \epsilon \dot{y} =\epsilon \left (\frac{\partial h} {\partial x}f + \frac{\partial h} {\partial t} \right ) = g. }$$
(3.114)

To lowest order, this requires

$$\displaystyle{ g(x,h_{0},t,0) = 0, }$$
(3.115)

so we naturally take

$$\displaystyle{ h_{0} =\phi (x,t) }$$
(3.116)

to be an isolated root of the limiting fast system (3.115). Moreover, we again require the root ϕ to be attractive in the sense that

$$\displaystyle{g_{y}(x,\phi (x,t),t,0)}$$

is a strictly stable matrix, thereby ruling out any repulsive roots that might occur. Higher-order terms in the expansion

$$\displaystyle{ h(x,t,\epsilon ) =\phi (x,t) +\epsilon h_{1}(x,t)+\ldots }$$
(3.117)

follow readily since g(x, ϕ(x, t), t, 0) = 0 implies the expansion \(g(x,h(x,t,\epsilon ),t,\epsilon ) = g_{y}(x,\phi (x,t),t,0)(\epsilon h_{1}(x,t)+\ldots ) +\epsilon g_{\epsilon }(x,\phi (x,t),t,0)+\ldots = 0\) about ε = 0. Balancing the O(ε) terms in (3.114) then implies that

$$\displaystyle\begin{array}{rcl} & & \frac{\partial h} {\partial x}(x,\phi (x,t),t,0)f(x,\phi (x,t),t,0) + \frac{\partial h} {\partial t} (x,\phi (x,t),t,0) \\ & & \qquad \qquad \qquad \qquad = g_{y}(x,\phi (x,t),t,0)h_{1}(x,t) + g_{\epsilon }(x,\phi (x,t),t,0). {}\end{array}$$
(3.118)

This specifies h 1 since g y is nonsingular and all else is known. h 2 next follows analogously from the O(ε 2) terms in (3.114). Thus, it is convenient to describe the slow manifold in terms of the outer limit, avoiding the initial layer correction.
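To illustrate the termwise construction of h, consider the toy slow-fast system ẋ = −x, εẏ = x − y (our own illustration, not an example from the text). Here g = x − y, so h 0 = φ = x, and (3.118) gives h 1 = x; in fact the slow manifold is exactly y = x∕(1 − ε). A minimal numerical check:

```python
import math

# a toy slow-fast system (our own illustration): xdot = -x, eps*ydot = x - y.
# Here g = x - y gives h0 = x, and the invariance equation gives h1 = x;
# the slow manifold is exactly y = h(x) = x/(1 - eps).
eps = 0.05

def rhs(s):
    x, y = s
    return (-x, (x - y) / eps)

def rk4_step(s, dt):
    def step(state, k, c):
        return tuple(si + c * ki for si, ki in zip(state, k))
    k1 = rhs(s)
    k2 = rhs(step(s, k1, dt / 2))
    k3 = rhs(step(s, k2, dt / 2))
    k4 = rhs(step(s, k3, dt))
    return tuple(s[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(2))

x0 = 1.0
s = (x0, x0 / (1.0 - eps))       # start exactly on the slow manifold
t, dt = 0.0, 1e-3
while t < 2.0:
    s = rk4_step(s, dt)
    t += dt

x, y = s
exact_gap = abs(y - x / (1.0 - eps))     # trajectory stays on the manifold
two_term_gap = abs(y - (x + eps * x))    # h0 + eps*h1 is off by O(eps^2)
print(exact_gap, two_term_gap)
```

The trajectory remains on the exact manifold to integrator accuracy, while the truncation h 0 + εh 1 is off only at O(ε²).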

Examples

1. Kokotović et al. [256] considered an initial value problem like

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = f(x,y,t) \equiv txy,\ \ x(0) = 1 \quad \\ \epsilon \dot{y} = g(x, y, t) \equiv -(y - 4)(y - 2)(y + tx),\ \ y(0)\mbox{ given.}\quad \end{array} \right. }$$
(3.119)

We naturally anticipate having an asymptotic solution of the form

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} x(t,\epsilon ) = X(t,\epsilon ) +\epsilon u(\tau,\epsilon )\quad \\ y(t,\epsilon ) = Y (t,\epsilon ) + v(\tau,\epsilon )\quad \end{array} \right.}$$

with an outer solution \(\left (\begin{array}{*{10}c} X\\ Y \end{array} \right )\) and an initial layer correction \(\left (\begin{array}{*{10}c} \epsilon u\\ v \end{array} \right )\) that tends to zero as τ = t∕ε tends to infinity. The outer limit \(\left (\begin{array}{*{10}c} X_{0} \\ Y _{0} \end{array} \right )\) will then satisfy the reduced problem

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{X}_{0} = tX_{0}Y _{0},\ \ \ X_{0}(0) = 1 \quad \\ 0 = -(Y _{0} - 4)(Y _{0} - 2)(Y _{0} + tX_{0}).\quad \end{array} \right. }$$
(3.120)

The first possibility

$$\displaystyle{Y _{0}(t) = 4,\ \ \ \dot{X}_{0} = 4tX_{0},\ \ X_{0}(0) = 1}$$

for the root Y 0 determines the bounded outer limit

$$\displaystyle{ \left (\begin{array}{*{10}c} X_{0}(t) \\ Y _{0}(t) \end{array} \right ) = \left (\begin{array}{*{10}c} e^{2t^{2} } \\ 4 \end{array} \right ) }$$
(3.121)

for finite t. It provides the complete outer expansion and thereby the corresponding stable integral manifold. For y(0) ≠ 4, however, we need a nontrivial boundary layer correction at t = 0. Its leading term v 0 must then satisfy

$$\displaystyle{ \frac{dv_{0}} {d\tau } = g(x(0),Y _{0}(0) + v_{0},0,0) = -v_{0}(v_{0} + 2)(v_{0} + 4),\ \ \ v_{0}(0) = y(0) - 4 }$$
(3.122)

and must decay to zero as \(\tau \rightarrow \infty \). Checking the sign of \(\frac{dv_{0}} {d\tau }\) shows that we will need v 0(0) > −2 or y(0) > 2 in order to attain such asymptotic stability.
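For this first root, a direct stiff integration of (3.119) confirms the outer limit (3.121); the values ε = 0.01 and y(0) = 5 (inside the basin y(0) > 2) are our sample choices:

```python
import math

# stiff integration of (3.119); eps = 0.01 and y(0) = 5 (inside the basin
# y(0) > 2 of the root Y0 = 4) are our sample choices
eps = 0.01

def rhs(t, s):
    x, y = s
    return (t * x * y,
            -(y - 4.0) * (y - 2.0) * (y + t * x) / eps)

def rk4_step(t, s, dt):
    def step(state, k, c):
        return tuple(si + c * ki for si, ki in zip(state, k))
    k1 = rhs(t, s)
    k2 = rhs(t + dt / 2, step(s, k1, dt / 2))
    k3 = rhs(t + dt / 2, step(s, k2, dt / 2))
    k4 = rhs(t + dt, step(s, k3, dt))
    return tuple(s[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(2))

t, s, dt = 0.0, (1.0, 5.0), 5e-5     # small dt resolves the O(eps) layer
while t < 0.5:
    s = rk4_step(t, s, dt)
    t += dt

x, y = s
print(abs(y - 4.0), abs(x - math.exp(2.0 * t * t)))
```

After the initial layer, y sits near the root 4 and x tracks \(e^{2t^{2}}\) closely.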

The second possibility

$$\displaystyle{ Y _{0}(t) = 2 }$$
(3.123)

provides \(X_{0}(t) = e^{t^{2} }\). For y(0) ≠ 2, we will need a nontrivial limiting initial layer correction v 0(τ) satisfying

$$\displaystyle{ \frac{dv_{0}} {d\tau } = -(v_{0} - 2)v_{0}(v_{0} + 2),\ \ \ v_{0}(0) = y(0) - 2. }$$
(3.124)

Its trivial rest point is, however, unstable, as is the corresponding integral manifold. Thus, we rule out (3.123), except when y(0) = 2 exactly.

Finally, when we take

$$\displaystyle{ Y _{0}(t) = -tX_{0}(t), }$$
(3.125)

\(\dot{X}_{0} = tX_{0}Y _{0} = -t^{2}X_{0}^{2}\), and X 0(0) = 1 determine the limiting outer solution

$$\displaystyle{ \left (\begin{array}{*{10}c} X_{0}(t) \\ Y _{0}(t) \end{array} \right ) = \left (\begin{array}{*{10}c} \frac{3} {t^{2}+3} \\ \frac{-3t} {t^{2}+3} \end{array} \right ). }$$
(3.126)

The next term in the outer expansion must then satisfy the linear system

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \dot{X}_{1} = t(X_{1}Y _{0} + X_{0}Y _{1}) = -t^{2}X_{0}X_{1} + tX_{0}Y _{1} \quad \\ \dot{Y }_{0} = -\left [Y _{1}(Y _{0} - 2) + (Y _{0} - 4)Y _{1}\right ](Y _{0} + tX_{0}) - (Y _{0} - 4)(Y _{0} - 2)(Y _{1} + tX_{1})\quad \end{array}\right.}$$

so

$$\displaystyle{ Y _{1} + tX_{1} = \frac{-\dot{Y }_{0}} {Y _{0}^{2} - 6Y _{0} + 8} = \frac{X_{0} - t^{3}X_{0}^{2}} {t^{2}X_{0}^{2} + 6tX_{0} + 8} }$$
(3.127)

where

$$\displaystyle{ \dot{X}_{1} = -2t^{2}X_{ 0}X_{1} + \frac{tX_{0}^{2}(1 - t^{3}X_{0})} {t^{2}X_{0}^{2} + 6tX_{0} + 8},\ \ X_{1}(0) = u_{0}(0). }$$
(3.128)

The corresponding limiting initial layer system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{du_{0}} {d\tau } = 0 \quad \\ \frac{dv_{0}} {d\tau } = -(v_{0} - 4)(v_{0} - 2)v_{0},\ \ v_{0}(0) = y(0)\quad \end{array} \right. }$$
(3.129)

has v 0 tending to its trivial rest point as \(\tau \rightarrow \infty \) provided y(0) < 2. In summary, we obtain one of the possible asymptotic solutions depending on the sign of y(0) − 2. The solution lies on an attractive slow invariant manifold when y(0) = 4 or 0.

The geometric singular perturbation theory of Fenichel [146] generalizes Tikhonov–Levinson theory by replacing its first stability assumption by normal hyperbolicity. See Fenichel [147], Kaper [233], Jones and Khibnik [227], Krupa and Szmolyan [265], Verhulst and Bakri [501], Kosiuk and Szmolyan [260], and Kuehn [268] for updated treatments. In particular, then, imaginary eigenvalues of g y are not allowed, but unstable eigenvalues are. (Interestingly, Neil Fenichel’s work was “ahead of its time.” It didn’t attract the attention it merited for many years.) Hastings and McLeod [198] include several applications of Fenichel’s theory, which they simplify. A large variety of sophisticated approaches are combined in Desroches et al. [116]. Other significant extensions of Tikhonov’s theorem include Nipp [349] (cf., also, Nipp and Stoffer [351]).

The situation where the first Tikhonov–Levinson stability assumption is violated because the Jacobian matrix g y is everywhere singular might be called a singular singular perturbation problem (cf. Gu et al. [187]). Narang-Siddarth and Valasek [340] say these are in nonstandard form.

2. A simple example of a singular problem is provided by the linear initial value problem

$$\displaystyle{ \epsilon \dot{y} = A(\epsilon )y,\ \ \ y(0) = \left (\begin{array}{*{10}c} 1\\ 1 \end{array} \right ) }$$
(3.130)

for the nearly singular state matrix

$$\displaystyle{ A(\epsilon ) = \left (\begin{array}{*{10}c} 1 - 2\epsilon &2 - 2\epsilon \\ -1+\epsilon & -2+\epsilon \end{array} \right ) }$$
(3.131)

with eigenvalues − 1 and −ε and corresponding eigenvectors \(\left (\begin{array}{*{10}c} 1\\ -1 \end{array} \right )\) and \(\left (\begin{array}{*{10}c} 2\\ -1 \end{array} \right )\). Applying the initial condition provides the exact solution

$$\displaystyle{ y(t,\epsilon ) = \left (\begin{array}{*{10}c} 4\\ -2 \end{array} \right )e^{-t}+\left (\begin{array}{*{10}c} -3 \\ 3\end{array} \right )e^{-t/\epsilon }\ \ \ \mbox{ for }\ \ t \geq 0 }$$
(3.132)

in the anticipated form

$$\displaystyle{y(t,\epsilon ) \sim Y _{0}(t) +\xi _{0}(\tau )}$$

for an outer solution Y 0(t) and an initial layer correction ξ 0(τ) that decays to zero as \(\tau = t/\epsilon \rightarrow \infty \).

If we, instead, simply sought an outer solution

$$\displaystyle{ Y (t,\epsilon ) \sim \sum _{j\geq 0}Y _{j}(t)\epsilon ^{j} }$$
(3.133)

of \(\epsilon \dot{y} = A(\epsilon )y\) with \(Y _{j} = \left (\begin{array}{*{10}c} Y _{1j} \\ Y _{2j} \end{array} \right )\), the leading terms require that

$$\displaystyle{ Y _{10} + 2Y _{20} = 0, }$$
(3.134)

but this leaves Y 0 otherwise unspecified. At O(ε), we’d need

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{Y }_{10} = Y _{11} + 2Y _{21} - 2Y _{10} - 2Y _{20}\quad \\ \mbox{ and} \quad \\ \dot{Y }_{ 20} = -Y _{11} - 2Y _{21} + Y _{10} + Y _{20}.\quad \end{array} \right. }$$
(3.135)

Adding implies that

$$\displaystyle{\dot{Y }_{10} +\dot{ Y }_{20} = -Y _{10} - Y _{20}.}$$

Because Y 10 = −2Y 20, however, \(\dot{Y }_{20} = -Y _{20}\) so

$$\displaystyle{ Y _{0}(t) = \left (\begin{array}{*{10}c} -2\\ 1 \end{array} \right )e^{-t}k_{ 0} }$$
(3.136)

for a constant k 0 to be determined by matching.

More directly, we could change variables by putting A(ε) in a more convenient triangular form. Let us set

$$\displaystyle{ z = \left (\begin{array}{*{10}c} z_{1} \\ z_{2}\end{array} \right ) \equiv Py = \left (\begin{array}{*{10}c} 1&-1\\ 1 & 1 \end{array} \right )y, }$$
(3.137)

so

$$\displaystyle{y = \frac{1} {2}\left (\begin{array}{*{10}c} 1 &1\\ -1 &1 \end{array} \right )z}$$

and the initial value problem (3.130) is transformed to

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \epsilon \dot{z}_{1} = -z_{1} + 3(1-\epsilon )z_{2},\ \ \ z_{1}(0) = 0\quad \\ \dot{z}_{2} = -z_{2},\ \ \ z_{2}(0) = 2, \quad \end{array} \right. }$$
(3.138)

a problem in fast-slow form that can be uniquely solved using Tikhonov–Levinson theory. We get

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} z_{1}(t) = 6e^{-t} - 6e^{-t/\epsilon }\quad \\ \mbox{ and} \quad \\ z_{2}(t) = 2e^{-t}, \quad \end{array} \right. }$$
(3.139)

corresponding to the constant k 0 = −2 in (3.136). Although only the first component of z has an initial layer, both components of y do.
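The exact solution (3.132) and the spectrum of A(ε) can be checked directly; the test values ε = 0.1 and t = 0.3 below are arbitrary choices of ours:

```python
import math

# direct check of the exact solution (3.132); eps = 0.1, t = 0.3 arbitrary
eps, t = 0.1, 0.3
A = ((1 - 2 * eps, 2 - 2 * eps),
     (-1 + eps, -2 + eps))

def sol(t):
    return (4 * math.exp(-t) - 3 * math.exp(-t / eps),
            -2 * math.exp(-t) + 3 * math.exp(-t / eps))

def dsol(t):
    return (-4 * math.exp(-t) + (3 / eps) * math.exp(-t / eps),
            2 * math.exp(-t) - (3 / eps) * math.exp(-t / eps))

y, yd = sol(t), dsol(t)
res = (eps * yd[0] - A[0][0] * y[0] - A[0][1] * y[1],
       eps * yd[1] - A[1][0] * y[0] - A[1][1] * y[1])

# trace and determinant confirm the eigenvalues -1 and -eps
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(res, sol(0.0), trace, det)
```

The residual of (3.130) vanishes to rounding error, sol(0) returns the prescribed initial vector (1, 1), and the trace and determinant equal −1 − ε and ε, respectively.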

3. A nonlinear example is given by

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \epsilon \dot{y}_{1} = y_{1} + y_{2} -\frac{1} {2}(y_{1} + y_{2})^{3} + \frac{\epsilon } {\sqrt{2}}(y_{1}^{2} - y_{2}^{2}),\ \ \ y_{1}(0) = -2 \quad \\ \epsilon \dot{y}_{2} = y_{1} + y_{2} -\frac{1} {2}(y_{1} + y_{2})^{3} - \frac{\epsilon } {\sqrt{2}}(y_{1}^{2} - y_{2}^{2}),\ \ \ y_{2}(0) = 0.\quad \end{array} \right. }$$
(3.140)

Now, the reduced problem

$$\displaystyle{ Y _{10} + Y _{20} -\frac{1} {2}(Y _{10} + Y _{20})^{3} = 0 }$$
(3.141)

has the three families of solutions

$$\displaystyle{ Y _{10} + Y _{20} = 0\ \ \mbox{ or }\ \ \pm \sqrt{2}. }$$
(3.142)

Tikhonov–Levinson theory doesn’t apply, but if we transform the problem by setting

$$\displaystyle{ z = \left (\begin{array}{*{10}c} 1& 1\\ 1 &-1 \end{array} \right )y, }$$
(3.143)

we get the separated fast-slow system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \epsilon \dot{z}_{1} = z_{1} -\frac{1} {2}z_{1}^{3},\ \ \ z_{1}(0) = -2\quad \\ \dot{z}_{2} = \sqrt{2}z_{1}z_{2},\ \ \ z_{2}(0) = -2. \quad \end{array} \right. }$$
(3.144)

We immediately integrate the Bernoulli equation for z 1 to get

$$\displaystyle{ z_{1}(t,\epsilon ) = -\sqrt{ \frac{2} {1 -\frac{1} {2}e^{-2t/\epsilon }}} }$$
(3.145)

and we conveniently rewrite it in the anticipated form

$$\displaystyle{ z_{1}(t,\epsilon ) = -\sqrt{2} + u_{1}(\tau ) }$$
(3.146)

with the outer solution \(-\sqrt{2}\) and the decaying initial layer correction

$$\displaystyle{ u_{1}(\tau ) = \sqrt{2}\left (1 - \frac{1} {\sqrt{1 - \frac{1} {2}e^{-2\tau }}}\right ). }$$
(3.147)

Integrating the remaining linear equation for z 2, we get

$$\displaystyle{ \begin{array}{*{10}c} z_{2}(t,\tau,\epsilon )& = -2e^{-2t}e^{\sqrt{2}\epsilon \int _{0}^{\tau }u_{ 1}(r)dr} \\ &\equiv Z_{2}(t,\epsilon )e^{-\sqrt{2}\epsilon \int _{\tau }^{\infty }u_{ 1}(r)dr}. \end{array} }$$
(3.148)

Setting

$$\displaystyle{ z_{2}(t,\epsilon ) = Z_{2}(t,\epsilon ) +\epsilon u_{2}(\tau,\epsilon ), }$$
(3.149)

we have a decaying initial layer correction ε u 2(τ, ε). Power series for Z 2 and u 2 can be obtained termwise. The limiting outer solution

$$\displaystyle{Z_{1}(t,0) = -\sqrt{2}\ \ \ \mbox{ and }\ \ \ Z_{2}(t,0) = -2e^{-2t}}$$

corresponds to the outer limits

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} Y _{10}(t) = - \frac{1} {\sqrt{2}} - e^{-2t}\quad \\ \mbox{ and } \quad \\ Y _{20}(t) = - \frac{1} {\sqrt{2}} + e^{-2t}\quad \end{array} \right. }$$
(3.150)

and to the conserved constant

$$\displaystyle{ Y _{10}(t) + Y _{20}(t) = -\sqrt{2}. }$$
(3.151)

Since neither Y 10(0) nor Y 20(0) agrees with the corresponding prescribed initial value, both components of y need initial layer corrections.
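One can verify numerically that (3.145) satisfies the fast equation of (3.144); the value ε = 0.05 and the sample times below are our choices:

```python
import math

# finite-difference check that (3.145) satisfies eps*dz1/dt = z1 - z1^3/2;
# eps = 0.05 and the sample times are our choices
eps = 0.05

def z1(t):
    return -math.sqrt(2.0 / (1.0 - 0.5 * math.exp(-2.0 * t / eps)))

h = 1e-6
worst = 0.0
for t in (0.01, 0.05, 0.2, 1.0):
    lhs = eps * (z1(t + h) - z1(t - h)) / (2.0 * h)
    rhs = z1(t) - 0.5 * z1(t) ** 3
    worst = max(worst, abs(lhs - rhs))

print(worst, z1(0.0))   # tiny residual; z1(0) = -2 as prescribed
```

The residual sits at finite-difference error level, and z 1 tends to the outer value \(-\sqrt{2}\) once the initial layer has passed.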

For the nonlinear n-dimensional initial value problem

$$\displaystyle{ \epsilon \dot{y} = g(y,t,\epsilon ),\ \ \ t \geq 0, }$$
(3.152)

we expect the limiting solution to satisfy

$$\displaystyle{g(Y _{0},t,0) = 0.}$$

When g y is singular with a constant rank 0 < k < n and when its nontrivial eigenvalues are stable, we might seek additional constraints on the outer limit Y 0 by differentiating g = 0. (Recall the related concept of the index of a differential-algebraic equation; cf. Ascher and Petzold [15] and Lamour et al. [279].) Shchepakina et al. [450] describe applications, including singular ones and bimolecular reactions.

Historical Remark

Tikhonov made important contributions to many fields of mathematics, including topology and cybernetics. He also rose to the top of the Communist Party hierarchy in the Soviet Union, attaining great power and exerting his anti-Semitism (like Pontryagin) by, for example, influencing the results of entrance exams at Moscow State University.

Levinson, as the child of poor Russian Jewish immigrants in Revere, Massachusetts, naturally supported leftish causes. Norbert Wiener recognized his brilliance and got him (with some help from Hardy) a faculty position at the Massachusetts Institute of Technology. (Harvard was, presumably, unwilling to hire Jewish mathematicians in 1937.) In the McCarthy era of Communist witch hunts, Levinson was called to Washington to testify, but he refused to “name names” (cf. Levinson [288] and O’Connor and Robertson [355]).

(c) Two-Point Problems

The linear first-order scalar equation

$$\displaystyle{ \epsilon y^{{\prime}} + a(x)y = b(x),\ \ x \geq 0 }$$
(3.153)

has the exact solution

$$\displaystyle{y(x,\epsilon ) = e^{-\frac{1} {\epsilon } \int _{0}^{x}a(s)\,ds }\,y(0) + \frac{1} {\epsilon } \int _{0}^{x}e^{-\frac{1} {\epsilon } \int _{s}^{x}a(t)\,dt }\,b(s)\,ds.}$$

For bounded x and smooth coefficients, we can use repeated integrations by parts when

$$\displaystyle{a(x) > 0}$$

to show that y has a generalized asymptotic expansion of the form

$$\displaystyle{ y(x,\epsilon ) \sim A(x,\epsilon ) + B(\epsilon )e^{-\frac{1} {\epsilon } \int _{0}^{x}a(s)\,ds } }$$
(3.154)

for power series A and B. For example, since

$$\displaystyle{\frac{1} {\epsilon } \int _{0}^{x}e^{-\frac{1} {\epsilon } \int _{s}^{x}a(t)\,dt }b(s)\,ds \sim \frac{b(x)} {a(x)} - e^{-\frac{1} {\epsilon } \int _{0}^{x}a(t)\,dt } \frac{b(0)} {a(0)},}$$

\(A(x,0) = \frac{b(x)} {a(x)}\) and \(B(0) = y(0) - \frac{b(0)} {a(0)}\). Indeed, the series can be found directly by regular perturbation methods (using undetermined coefficients in the power series for A and B). Since there is an initial layer near x = 0, we could also introduce the stretched variable

$$\displaystyle{\xi = x/\epsilon }$$

and expand the product \(B(\epsilon )e^{-\frac{1} {\epsilon } \int _{0}^{\epsilon \xi }a(s)\,ds }\) in its Maclaurin expansion about ε = 0 to find the composite asymptotic solution

$$\displaystyle{ y(x,\epsilon ) = A(x,\epsilon ) + C(\xi,\epsilon ) }$$
(3.155)

for the same outer solution A(x, ε), where the coefficients of the initial layer correction

$$\displaystyle{C(\xi,\epsilon ) \sim \sum _{k\geq 0}C_{k}(\xi )\epsilon ^{k}}$$

tend to zero as \(\xi \rightarrow \infty \). Clearly, the expansion (3.154) is preferable, because it provides more immediate details regarding boundary layer behavior. In particular, it explicitly shows that the nonuniform behavior in the initial layer depends on the stretched variable

$$\displaystyle{\eta = \frac{1} {\epsilon } \int _{0}^{x}a(s)\,ds,}$$

rather than its local limit a(0)ξ. Indeed, e η exactly satisfies the homogeneous equation. As we will later find, the expansion (3.155) corresponds to matching and (3.154) to two-timing.
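A small numerical experiment illustrates the outer limit b(x)∕a(x): integrating (3.153) with sample data (a(x) = 1 + x, b(x) = cos x, y(0) = 2, ε = 0.01, all our choices) and evaluating away from the initial layer gives agreement to O(ε):

```python
import math

# sample data (our choices) for eps*y' + a(x)*y = b(x)
eps = 0.01
a = lambda x: 1.0 + x
b = lambda x: math.cos(x)

def rhs(x, y):
    return (b(x) - a(x) * y) / eps

x, y, dx = 0.0, 2.0, 1e-4        # y(0) = 2; dx keeps RK4 stable (a/eps ~ 150)
while x < 0.5:
    k1 = rhs(x, y)
    k2 = rhs(x + dx / 2, y + dx / 2 * k1)
    k3 = rhs(x + dx / 2, y + dx / 2 * k2)
    k4 = rhs(x + dx, y + dx * k3)
    y += dx / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    x += dx

# away from the layer the decaying term in (3.154) is negligible,
# so y should agree with A(x, 0) = b(x)/a(x) up to O(eps)
print(abs(y - b(x) / a(x)))
```

The discrepancy at x = 0.5 is of size ε, consistent with the next coefficient A 1 = −(b∕a)′∕a of the outer expansion.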

For the linear second-order equation

$$\displaystyle{ \epsilon y^{{\prime\prime}} + a(x)y^{{\prime}} + b(x)y = c(x) }$$
(3.156)

with a(x) > 0, we cannot generally write down the exact solution (unless we happen to know a nontrivial solution of the homogeneous equation). Nonetheless, we will find that the asymptotic solution of the two-point problem with

$$\displaystyle{y(0)\ \ \mbox{ and }\ \ y(1)\ \ \mbox{ prescribed}}$$

will likewise have the asymptotic form

$$\displaystyle{ y(x,\epsilon ) \sim A(x,\epsilon ) + B(x,\epsilon )e^{-\frac{1} {\epsilon } \int _{0}^{x}a(s)\,ds } }$$
(3.157)

(corresponding to WKB theory) where the outer expansion A(x, ε) will now be a regular power series solution of the terminal value problem

$$\displaystyle{ \epsilon A^{{\prime\prime}} + a(x)A^{{\prime}} + b(x)A = c(x),\ \ A(1,\epsilon ) = y(1) }$$
(3.158)

and where B(x, ε) will be a regular power series solution of the initial value problem

$$\displaystyle{ \epsilon B^{{\prime\prime}}- aB^{{\prime}}- a^{{\prime}}B + bB = 0,\ \ B(0,\epsilon ) = y(0) - A(0,\epsilon ) }$$
(3.159)

(since the product \(Be^{-\frac{1} {\epsilon } \int _{0}^{x}a(s)\,ds }\) must satisfy the homogeneous differential equation). Curiously, the differential equation for B is the adjoint of that for A (when c is zero).
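For a constant-coefficient instance of (3.156), the exact solution is elementary and can be compared directly with the leading terms of (3.157); the data εy″ + y′ + y = 0, y(0) = 0, y(1) = 1 and ε = 0.01 are our choices:

```python
import math

# constant-coefficient instance of (3.156): eps*y'' + y' + y = 0 with
# y(0) = 0 and y(1) = 1 (our choices); the characteristic roots give the
# exact solution, while (3.157) gives the leading approximation
# A0(x) = e^{1-x} plus the layer term B0*e^{-x/eps} with B0 = -e
eps = 0.01
disc = math.sqrt(1.0 - 4.0 * eps)
lam_slow = (-1.0 + disc) / (2.0 * eps)     # ~ -1 + O(eps)
lam_fast = (-1.0 - disc) / (2.0 * eps)     # ~ -1/eps

c = 1.0 / (math.exp(lam_slow) - math.exp(lam_fast))    # fits y(0)=0, y(1)=1

def exact(x):
    return c * (math.exp(lam_slow * x) - math.exp(lam_fast * x))

def leading(x):
    return math.exp(1.0 - x) - math.e * math.exp(-x / eps)

err = max(abs(exact(x) - leading(x))
          for x in (0.0, 0.003, 0.01, 0.1, 0.5, 1.0))
print(err)
```

The leading approximation is uniformly O(ε) accurate on [0, 1], both inside and outside the layer.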

Analogous (though somewhat more complicated) results hold for the nonlinear equation

$$\displaystyle{ \epsilon y^{{\prime\prime}} + a(x)y^{{\prime}} + f(x,y) = 0 }$$
(3.160)

again for Dirichlet boundary conditions. Before obtaining them, let us first show how the more familiar method of boundary layer corrections works. With

$$\displaystyle{a(x) > 0\ \ \mbox{ on }\ \ 0 \leq x \leq 1,}$$

we would naturally expect an initial layer of nonuniform convergence when y(0) and y(1) are prescribed. Thus, for smooth coefficients a and f, we will seek a composite asymptotic expansion of the form

$$\displaystyle{ y(x,\epsilon ) = Y (x,\epsilon ) + v(\xi,\epsilon ) }$$
(3.161)

where \(v \rightarrow 0\) as the stretched coordinate

$$\displaystyle{\xi = \frac{x} {\epsilon } }$$

\(\rightarrow \infty \) and where the outer expansion Y and the initial layer correction v have power series expansions

$$\displaystyle{Y (x,\epsilon ) \sim \sum _{k\geq 0}Y _{k}(x)\epsilon ^{k}\ \ \mbox{ and }\ \ v(\xi,\epsilon ) \sim \sum _{ k\geq 0}v_{k}(\xi )\epsilon ^{k}.}$$

Away from x = 0, \(y \sim Y\) to all orders (and, likewise, for its derivatives), so Y must satisfy

$$\displaystyle{ \epsilon Y ^{{\prime\prime}} + a(x)Y ^{{\prime}} + f(x,Y ) = 0,\ \ \ Y (1,\epsilon ) = y(1). }$$
(3.162)

Clearly, Y 0 must satisfy the nonlinear reduced problem

$$\displaystyle{ a(x)Y _{0}^{{\prime}} + f(x,Y _{ 0}) = 0,\ \ \ Y _{0}(1) = y(1). }$$
(3.163)

Assuming that its solution Y 0 exists from x = 1 back to x = 0, later Y k s must satisfy linearized problems

$$\displaystyle{ a(x)Y _{k}^{{\prime}} + f_{ x}(x,Y _{0})Y _{k} +\alpha _{k-1}(x) = 0,\ \ \ Y _{k}(1) = 0 }$$
(3.164)

there, where each α k−1 is known successively in terms of earlier coefficients Y j and their first two derivatives. Using an integrating factor, each Y k then follows uniquely throughout the interval. It would be unlikely that Y (0, ε) = y(0), however, so a nontrivial corrector v must be expected.

Knowing Y asymptotically, \(y^{{\prime}} = Y ^{{\prime}} + \frac{1} {\epsilon } \frac{dv} {d\xi }\) and \(\epsilon y^{{\prime\prime}} =\epsilon Y ^{{\prime\prime}} + \frac{1} {\epsilon } \frac{d^{2}v} {d\xi ^{2}}\) imply that the initial layer correction v must satisfy the differential equation

$$\displaystyle{ \frac{d^{2}v} {d\xi ^{2}} + a(\epsilon \xi )\frac{dv} {d\xi } +\epsilon \Big (f\big(\epsilon \xi,Y (\epsilon \xi,\epsilon ) + v(\xi,\epsilon )\big) - f\big(\epsilon \xi,Y (\epsilon \xi,\epsilon )\big)\Big) = 0, }$$
(3.165)

the initial condition

$$\displaystyle{ v(0,\epsilon ) = y(0) - Y (0,\epsilon ), }$$
(3.166)

and decay to zero as \(\xi \rightarrow \infty \). Thus, the leading coefficient v 0 must satisfy the linear problem

$$\displaystyle{\frac{d^{2}v_{0}} {d\xi ^{2}} + a(0)\frac{dv_{0}} {d\xi } = 0,\ \ v_{0}(0) = y(0) - Y _{0}(0)\ \ \mbox{ and }\ \ v_{0} \rightarrow 0\mbox{ as }\xi \rightarrow \infty,}$$

so

$$\displaystyle{ v_{0}(\xi ) = e^{-a(0)\xi }(y(0) - Y _{ 0}(0)). }$$
(3.167)

Next, we will need

$$\displaystyle{\frac{d^{2}v_{1}} {d\xi ^{2}} + a(0)\frac{dv_{1}} {d\xi } + a^{{\prime}}(0)\xi \frac{dv_{0}} {d\xi } + f(0,Y _{0}(0) + v_{0}(\xi )) - f(0,Y _{0}(0)) = 0,}$$
$$\displaystyle{v_{1}(0) = -Y _{1}(0)\ \ \mbox{ and }\ \ v_{1} \rightarrow 0\ \ \mbox{ as }\ \ \xi \rightarrow \infty.}$$

The unique solution

$$\displaystyle\begin{array}{rcl} & & v_{1}(\xi ) = - e^{-a(0)\xi }Y _{ 1}(0) -\int _{0}^{\xi }e^{a(0)(s-\xi )}\big[f(0,Y _{ 0}(0) + v_{0}(s)) \\ & & \qquad \qquad \qquad \qquad \qquad \qquad - f(0,Y _{0}(0)) - a^{{\prime}}(0)a(0)\,s\,v_{ 0}(s)\big]\,ds {}\end{array}$$
(3.168)

decays like \(\xi e^{-a(0)\xi }\) as \(\xi \rightarrow \infty \). Subsequent v j s follow analogously, in turn. We will later obtain a somewhat more satisfying solution using multiscale methods with slow and fast variables x and \(\eta = \frac{1} {\epsilon } \int _{0}^{x}a(s)\,ds\). Numerical methods for such problems are presented in Roos et al. [419] and Ascher et al. [16]. Related techniques for partial differential equations are given in Shishkin and Shishkina [452], Linss [294], and Miller et al. [317]. The variety of two-point singular perturbation problems one can confidently solve numerically is, sadly, quite limited, compared to the success found for stiff initial value problems. This appropriately remains a topic of substantial current research and importance. Useful recommendations about software for solving singularly perturbed two-point problems can be found on the homepage of Professor Jeff Cash of Imperial College, London (cf. also, Soetaert et al. [469]).

If we consider the two-point problem for the scalar Liénard equation

$$\displaystyle{ \epsilon y'' + f(y)y' + g(y) = 0,\qquad 0 \leq x \leq 1 }$$
(3.169)

with y(0) and y(1) prescribed, we can again expect to have an asymptotic solution of the form

$$\displaystyle{ y(x,\epsilon ) = Y (x,\epsilon ) + v(\xi,\epsilon ) }$$
(3.170)

provided

  (i)

    the reduced problem

    $$\displaystyle{ f(Y _{0})Y _{0}' + g(Y _{0}) = 0,\qquad Y _{0}(1) = y(1) }$$
    (3.171)

    has a solution Y 0(x) on 0 ≤ x ≤ 1 with

    $$\displaystyle{f(Y _{0}) > 0.}$$

    (Note the monotonic implicit solution \(x - 1 =\int _{ Y _{0}(x)}^{y(1)}\frac{f(r)} {g(r)} \,dr\).)

and

  (ii)

    the linear integrated initial layer problem

    $$\displaystyle{ \frac{dv_{0}} {d\xi } + f(Y _{0}(0))v_{0} = 0,\qquad v_{0}(0) = y(0) - Y _{0}(0) }$$
    (3.172)

    has a solution v 0(ξ) on ξ ≥ 0 that decays to zero as \(\xi \equiv \frac{x} {\epsilon } \rightarrow \infty \). (This simply requires f(Y 0(0)) > 0 since the solution is an exponential.)

Treating the problem with f(Y 0) < 0 proceeds analogously, using a terminal layer, but real complications arise when f(Y 0) has a zero within the interval.

As a specific example, suppose

$$\displaystyle{ \epsilon y'' = 2yy'\qquad \mbox{ with}\qquad y(1) < 0\mbox{ and }y(0) + y(1) < 0. }$$
(3.173)

Then, we obtain the attractive constant outer solution

$$\displaystyle{ Y (x,\epsilon ) = y(1) < 0 }$$
(3.174)

while the supplementary initial layer correction v(ξ, ε) must be a decaying solution of v ξ ξ  = 2(y(1) + v)v ξ . Integrating from infinity, we must satisfy the Riccati equation

$$\displaystyle{ v_{\xi } - 2y(1)v - v^{2} = 0,\qquad v(0) = y(0) - y(1). }$$
(3.175)

With the assumed sign restrictions, v exists and decays to zero as \(\xi \rightarrow \infty \).
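For sample data y(1) = −1 and y(0) = −0.5 (so y(0) + y(1) < 0, our choices), the decay of the layer correction determined by (3.175) is easily confirmed:

```python
# layer correction for (3.173): v_xi = 2*y(1)*v + v^2, from (3.175);
# the data y(1) = -1, y(0) = -0.5 (so y(0) + y(1) < 0) are our choices
y1, y0 = -1.0, -0.5
v = y0 - y1                 # v(0) = 0.5, below the threshold -2*y(1) = 2
xi, dxi = 0.0, 1e-3
while xi < 8.0:
    v += dxi * (2.0 * y1 * v + v * v)   # forward Euler suffices here
    xi += dxi
print(v)    # decays (roughly like e^{-2*xi}) toward zero
```

Starting above the threshold −2y(1), by contrast, the Riccati solution would blow up rather than decay, which is why the sign restrictions are needed.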

Cole [92] considered the linear problem

$$\displaystyle{ \epsilon y^{{\prime\prime}} + \sqrt{x}y^{{\prime}}- y = 0,\ \ y(0) = 0\ \ \mbox{ and }\ \ y(1) = e^{2}, }$$
(3.176)

with an initial turning point. Because \(\sqrt{x} > 0\) for x > 0, we might stubbornly still seek an asymptotic solution of the composite form

$$\displaystyle{ y(x,\epsilon ) = Y (x,\epsilon ) + v(\xi,\epsilon ^{\beta }), }$$
(3.177)

with an outer expansion Y that satisfies the terminal value problem

$$\displaystyle{ \epsilon Y ^{{\prime\prime}} + \sqrt{x}Y ^{{\prime}}- Y = 0,\ \ Y (1,\epsilon ) = e^{2} }$$
(3.178)

as a power series in ε, and with an initial layer correction v satisfying the stretched equation

$$\displaystyle{ \epsilon ^{1-2\alpha }\frac{d^{2}v} {d\xi ^{2}} +\epsilon ^{-\alpha /2}\sqrt{\xi }\frac{dv} {d\xi } - v = 0, }$$
(3.179)

the initial condition

$$\displaystyle{ v(0,\epsilon ^{\beta }) = y(0) - Y (0,\epsilon ), }$$
(3.180)

and which decays to zero as the appropriate stretched variable

$$\displaystyle{ \xi = \frac{x} {\epsilon ^{\alpha }}, }$$
(3.181)

for some α > 0, tends to infinity. We will take v to have a power series in ε β, for a power β > 0 to be determined. The purpose of the new stretching ξ is to balance different terms in the differential equation (3.176) within the initial layer. The dominant balance argument (cf. Bender and Orszag [36] and Nipp [350]) here requires us to select α so that

$$\displaystyle{ 1 - 2\alpha = -\frac{\alpha } {2}\ \ \ \mbox{ or }\ \ \alpha = 2/3. }$$
(3.182)

Since this leaves

$$\displaystyle{ \frac{d^{2}v} {d\xi ^{2}} + \sqrt{\xi }\frac{dv} {d\xi } -\epsilon ^{1/3}v = 0, }$$
(3.183)

we naturally take β = 1∕3.

The outer expansion \(Y \sim \sum _{k\geq 0}Y _{k}\epsilon ^{k}\) for (3.176) must satisfy

$$\displaystyle{\sqrt{x}Y _{0}^{{\prime}}- Y _{ 0} = 0,\ \ \ Y _{0}(1) = e^{2}}$$

and \(\sqrt{x}Y _{1}^{{\prime}}- Y _{1} + Y _{0}^{{\prime\prime}} = 0,\ \ Y _{1}(1) = 0\), so we get

$$\displaystyle{ Y (x,\epsilon ) = e^{2\sqrt{x}}\left (1 +\epsilon \left (- \frac{1} {2x} + \frac{2} {\sqrt{x}} -\frac{3} {2}\right )+\ldots \right ). }$$
(3.184)
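Substituting the two-term expansion (3.184) back into (3.176) should leave a residual of exactly \(\epsilon ^{2}Y _{1}^{{\prime\prime}}\), hence of order ε²; a finite-difference check at the sample point x = 0.5 (our choice):

```python
import math

# residual of the two-term outer expansion (3.184) in (3.176); it equals
# eps^2 * Y1'' identically, hence is O(eps^2); x = 0.5 and the eps values
# are our choices
def Y(x, eps):
    return math.exp(2.0 * math.sqrt(x)) * (
        1.0 + eps * (-0.5 / x + 2.0 / math.sqrt(x) - 1.5))

def residual(x, eps, h=1e-5):
    d1 = (Y(x + h, eps) - Y(x - h, eps)) / (2.0 * h)
    d2 = (Y(x + h, eps) - 2.0 * Y(x, eps) + Y(x - h, eps)) / (h * h)
    return eps * d2 + math.sqrt(x) * d1 - Y(x, eps)

r1 = residual(0.5, 1e-2)
r2 = residual(0.5, 1e-3)
print(r1, r2 / r1)    # the ratio is near (1e-3/1e-2)^2 = 0.01
```

Reducing ε by a factor of ten reduces the residual by a factor of one hundred, confirming the O(ε²) accuracy of the outer terms.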

Indeed, following a suggestion of E. Kirkinis, we can peel off \(e^{2\sqrt{x}}\) by setting

$$\displaystyle{ y = e^{2\sqrt{x}}z. }$$
(3.185)

The corresponding outer expansion Z(x, ε) in inner variables provides

$$\displaystyle{ Z(\epsilon ^{2/3}\xi,\epsilon ) = 1 -\frac{\epsilon ^{1/3}} {2\xi } + \frac{2\epsilon ^{2/3}} {\sqrt{\xi }} -\frac{3\epsilon } {2}+\ldots, }$$
(3.186)

conveniently a power series in ε 1∕3. The transformed equation is

$$\displaystyle{\epsilon z^{{\prime\prime}} + \left (\sqrt{x} + \frac{2\epsilon } {\sqrt{x}}\right )z^{{\prime}} +\epsilon \left (\frac{1} {x} -\frac{1} {2} \frac{1} {(\sqrt{x})^{3}}\right )z = 0}$$

and the stretched equation for the corresponding inner solution w(ξ, ε 1∕3) in terms of ξ = xε 2∕3 is

$$\displaystyle{ \frac{d^{2}w} {d\xi ^{2}} + \left (\sqrt{\xi } + \frac{2\epsilon ^{1/3}} {\sqrt{\xi }} \right )\frac{dw} {d\xi } + \left (-\frac{\epsilon ^{1/3}} {2\xi ^{3/2}} + \frac{\epsilon ^{2/3}} {\xi } \right )w = 0,\ \ \ w(0) = 0. }$$
(3.187)

Expanding

$$\displaystyle{ w(\xi,\epsilon ^{1/3}) \sim \sum _{ k=0}^{\infty }w_{ k}(\xi )\epsilon ^{k/3} }$$
(3.188)

and integrating the resulting initial value problems, we first obtain

$$\displaystyle{ w_{0}(\xi ) = c_{0}\int _{0}^{\xi }e^{-\frac{2} {3} s^{3/2} }ds, }$$
(3.189)

and then

$$\displaystyle\begin{array}{rcl} & & w_{1}(\xi ) = c_{1}\int _{0}^{\xi }e^{-\frac{2} {3} s^{3/2} }ds \\ & & \qquad \qquad +\int _{ 0}^{\xi }e^{-\frac{2} {3} s^{3/2} }\int _{0}^{s}e^{\frac{2} {3} t^{3/2} }\left (- \frac{2} {\sqrt{t}} \frac{dw_{0}} {d\xi } + \frac{1} {2} \frac{w_{0}} {(\sqrt{t})^{3}}\right )\,dt\,ds{}\end{array}$$
(3.190)

for constants c 0 and c 1. When we write

$$\displaystyle{w_{0}(\xi ) = c_{0}\left (\int _{0}^{\infty }e^{-\frac{2} {3} s^{3/2} }ds -\int _{\xi }^{\infty }e^{-\frac{2} {3} s^{3/2} }ds\right )}$$

and apply the crude matching condition that

$$\displaystyle{\lim _{\xi \rightarrow \infty }w_{0}(\xi ) =\lim _{x\rightarrow 0}Z_{0}(x),}$$

we determine the unusual constant,

$$\displaystyle{ c_{0} = \frac{1} {\int _{0}^{\infty }e^{-\frac{2} {3} s^{3/2} }ds} \equiv \left (\frac{3} {2}\right )^{1/3} \frac{1} {\varGamma (2/3)}. }$$
(3.191)
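The constant (3.191) follows from the substitution u = (2∕3)s^{3∕2}, which converts the normalizing integral into \((2/3)^{1/3}\varGamma (2/3)\); a quadrature check:

```python
import math

# Simpson's-rule check of the normalizing integral in (3.191); the
# substitution u = (2/3)*s^{3/2} gives (2/3)^{1/3} * Gamma(2/3)
def f(s):
    return math.exp(-(2.0 / 3.0) * s ** 1.5)

n, L = 100000, 40.0            # the tail beyond s = 40 is negligible
h = L / n
total = f(0.0) + f(L)
for i in range(1, n):
    total += (4 if i % 2 else 2) * f(i * h)
integral = total * h / 3.0

c0 = 1.5 ** (1.0 / 3.0) / math.gamma(2.0 / 3.0)
print(integral, 1.0 / c0)
```

The computed integral and 1∕c 0 agree to quadrature accuracy.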

The asymptotic behavior of w 0 as \(\xi \rightarrow \infty \) follows by using repeated integrations by parts, i.e.

$$\displaystyle\begin{array}{rcl} w_{0}(\xi )& = & 1 - c_{0}\int _{\xi }^{\infty }e^{-\frac{2} {3} s^{3/2} }ds \\ & = & 1 - c_{0}\left [ \frac{1} {\sqrt{\xi }}- \frac{1} {2\xi ^{2}} + \frac{1} {(\sqrt{\xi })^{7}} + O\left (\frac{1} {\xi ^{5}} \right )\right ]e^{-\frac{2} {3} \xi ^{3/2} }.{}\end{array}$$
(3.192)
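The three-term tail approximation in (3.192) can itself be checked by quadrature at a moderate value of ξ (ξ = 5 is our choice):

```python
import math

# quadrature check of the integration-by-parts tail asymptotics in (3.192)
# at the sample value xi = 5 (our choice)
xi = 5.0
phi = lambda s: (2.0 / 3.0) * s ** 1.5

n, L = 100000, 20.0            # Simpson's rule on [xi, xi + 20]
h = L / n
total = math.exp(-phi(xi)) + math.exp(-phi(xi + L))
for i in range(1, n):
    total += (4 if i % 2 else 2) * math.exp(-phi(xi + i * h))
tail = total * h / 3.0

series = (xi ** -0.5 - 0.5 * xi ** -2.0 + xi ** -3.5) * math.exp(-phi(xi))
rel = abs(tail - series) / tail
print(rel)    # the neglected term is O(xi^{-5}), well under one percent here
```

The relative discrepancy is of the size of the first neglected term, as the integration-by-parts expansion predicts.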

Next, since \(w_{0} \rightarrow 1\) and \(\frac{dw_{0}} {d\xi } \rightarrow 0\) as \(\xi \rightarrow \infty \), w 1 must satisfy \(\frac{d^{2}w_{ 1}} {d\xi ^{2}} + \sqrt{\xi }\frac{dw_{1}} {d\xi } \sim \frac{1} {2(\sqrt{\xi })^{3}}\) nearby, so upon integrating, we get

$$\displaystyle\begin{array}{rcl} w_{1}(\xi )& \sim & K_{1} +\int _{ \xi }^{\infty }e^{-\frac{2} {3} s^{3/2} }\int _{s}^{\infty }e^{\frac{2} {3} t^{3/2} } \frac{dt} {2(\sqrt{t})^{3}}\,ds \\ & \sim & K_{1} -\frac{1} {2\xi } - \frac{2} {5(\sqrt{\xi })^{5}} - \frac{7} {8\xi ^{4}} - \frac{35} {11(\sqrt{\xi })^{11}} +\ldots.{}\end{array}$$
(3.193)

(This could be more simply determined by directly introducing a power series for the limiting behavior of w 1 using undetermined coefficients.) Further, integration by parts implies that

$$\displaystyle{ \int _{\xi }^{\infty }e^{-\frac{2} {3} s^{3/2} }\int _{s}^{\infty }\frac{e^{\frac{2} {3} t^{3/2} }} {2t^{3/2}} \,dt\,ds \sim -\int _{\xi }^{\infty } \frac{ds} {2s^{2}} = -\frac{1} {2\xi }. }$$
(3.194)

To match w 1 at infinity, we need K 1 = 0, so \(w_{1} \rightarrow 0\) as \(\xi \rightarrow \infty \). Thus,

$$\displaystyle\begin{array}{rcl} & & c_{1}\int _{0}^{\infty }e^{-\frac{2} {3} s^{3/2} }ds = \\ & & \quad -\int _{0}^{\infty }e^{-\frac{2} {3} s^{3/2} }\int _{0}^{s}e^{\frac{2} {3} t^{3/2} }\left (- \frac{2} {\sqrt{t}} \frac{dw_{0}} {dt} + \frac{1} {2} \frac{w_{0}} {(\sqrt{t})^{3}}\right )\,dt\,ds{}\end{array}$$
(3.195)

specifies c 1. Higher-order matching follows analogously. (We will not attempt a uniformly valid composite expansion.) Clearly, matching near turning points is complicated. (It might be an instance calling for the neutrix calculus (cf. van der Corput [100]), where infinities are appropriately canceled.) Our procedure for (3.176) might be compared to that of Miller [318] and Johnson [226] who, respectively, consider the linear problem

$$\displaystyle{\epsilon y^{{\prime\prime}} + 12x^{1/3}y^{{\prime}} + y = 0,\ \ \ y(0) = 1,\ \ \ y(1) = 1}$$

and the nonlinear problem

$$\displaystyle{\epsilon y^{{\prime\prime}} + \sqrt{x}y^{{\prime}} + y^{2} = 0,\ \ \ y(0) = 2,\ \ \ y(1) = 1/3.}$$

We now reconsider the nonlinear two-point problem

$$\displaystyle{ \epsilon y^{{\prime\prime}}- 2yy^{{\prime}} = 0,\ \ \ 0 \leq x \leq 1 }$$
(3.196)

with prescribed endvalues y(0) and y(1). Wasow [514] cited this as an example of the capriciousness of singular perturbations. (In response, Franz and Roos [158] have written about the capriciousness of numerical methods for singular perturbations.) If we integrate once to get \(\epsilon y^{{\prime}} = y^{2}-\alpha\), we can separate variables to provide the general solution

$$\displaystyle\begin{array}{rcl} y(x,\epsilon )& = & -\sqrt{\alpha }\tanh \left (\frac{\sqrt{\alpha }} {\epsilon } (x-\beta )\right ) \\ & = & -\sqrt{\alpha }\left (\frac{1 - e^{-\frac{2\sqrt{\alpha }} {\epsilon } (x-\beta )}} {1 + e^{-\frac{2\sqrt{\alpha }} {\epsilon } (x-\beta )}}\right ){}\end{array}$$
(3.197)

for ε-dependent constants α and β. (When α is real, we shall take it to be nonnegative.) The boundary conditions require that

$$\displaystyle{y(0) = -\sqrt{\alpha }\left (\frac{1 - e^{\frac{2\sqrt{\alpha }\beta } {\epsilon } }} {1 + e^{\frac{2\sqrt{\alpha }\beta } {\epsilon } }}\right )\ \ \mbox{ and }\ \ y(1) = -\sqrt{\alpha }\left (\frac{1 - e^{-\frac{2\sqrt{\alpha }} {\epsilon } }e^{\frac{2\beta \sqrt{\alpha }} {\epsilon } }} {1 + e^{-\frac{2\sqrt{\alpha }} {\epsilon } }e^{\frac{2\beta \sqrt{\alpha }} {\epsilon } }}\right ),}$$

so

$$\displaystyle{ e^{\frac{2\beta \sqrt{\alpha }} {\epsilon } } = \frac{\sqrt{\alpha } + y(0)} {\sqrt{\alpha }- y(0)} = \left (\frac{\sqrt{\alpha } + y(1)} {\sqrt{\alpha }- y(1)}\right )e^{\frac{2\sqrt{\alpha }} {\epsilon } }. }$$
(3.198)

We will, curiously, find different sorts of limiting behaviors for y in four different portions of the y(0)-y(1) plane of boundary values.

  1. (i)

    On the half-line where y(0) = −y(1) > 0: Because of the sign of the coefficient −2y of \(y^{{\prime}}\) in (3.196), we might expect y to be nearly constant near both x = 0 and 1. Thus, we can anticipate having the limit

    $$\displaystyle{y \sim y(0) > 0\ \ \ \mbox{ near }x = 0}$$

    and, likewise,

    $$\displaystyle{y \sim y(1) < 0\ \ \ \mbox{ near }x = 1.}$$

    Symmetry even suggests that a narrow shock (or transition) layer between these outer solutions will occur about the midpoint x = 1∕2 since y(1) = −y(0). Indeed, (3.198) implies that

    $$\displaystyle{e^{\frac{\sqrt{\alpha }} {\epsilon } } = \frac{\sqrt{\alpha } + y(0)} {\sqrt{\alpha }- y(0)}}$$

    (corresponding to β = 1∕2), which leads to the implicit relation

    $$\displaystyle{\sqrt{\alpha } = y(0) + (\sqrt{\alpha } + y(0))e^{-\sqrt{\alpha }/\epsilon }}$$

    for α. Iterating, we then find

    $$\displaystyle{ \sqrt{\alpha }\sim y(0) + 2y(0)e^{-y(0)/\epsilon }+\ldots }$$
    (3.199)

    We recognize this as a result involving exponential asymptotics (i.e., it uses asymptotically negligible correction terms like \(e^{-y(0)/\epsilon }\)). Thus, the limiting uniform solution

    $$\displaystyle{ y(x,\epsilon ) \sim -y(0)\tanh \left (\frac{y(0)} {\epsilon } \left (x -\frac{1} {2}\right )\right ) }$$
    (3.200)

    features an O(ε)-thick shock layer at the midpoint with the constant limit y(0) for \(x < \frac{1} {2}\) and y(1) for x > 1∕2.

  2. (ii)

    In the 135° sector of the y(0)-y(1) plane where y(0) > 0 and y(0) + y(1) > 0, we might expect the dominant endvalue y(0) to provide the limiting solution, except in a narrow terminal layer near x = 1. To see this, rewrite the boundary conditions (3.198) as

    $$\displaystyle{e^{2\sqrt{\alpha }(\beta -1)/\epsilon } = \frac{\sqrt{\alpha } + y(1)} {\sqrt{\alpha }- y(1)}}$$

    and

    $$\displaystyle{\sqrt{\alpha } = y(0) + (\sqrt{\alpha } + y(0))e^{-2\sqrt{\alpha }\beta /\epsilon }.}$$

    Using the general solution (3.197), \(\sqrt{\alpha }\sim y(0)\) implies that

    $$\displaystyle{ y(x,\epsilon ) \sim y(0)\left [\frac{y(0) + y(1) - (y(0) - y(1))e^{\frac{2y(0)} {\epsilon } (x-1)}} {y(0) + y(1) + (y(0) - y(1))e^{\frac{2y(0)} {\epsilon } (x-1)}}\right ], }$$
    (3.201)

    so y indeed has the constant limit y(0) for x < 1 and an ordinary O(ε)-thick boundary layer near x = 1.

    Curiously, when y(0) + y(1) is positive, but only asymptotically exponentially small, the previously found shock wave can be moved all the way from the midpoint x = 1∕2 to the endpoint x = 1. This demonstrates the supersensitivity of the shock location β. Imagine the computational consequences!

  3. (iii)

    One could analogously show (cf. (3.176)) that the limiting solution is y(1), except in an initial layer, when y(1) < 0 and y(0) + y(1) < 0. Now, for y(0) + y(1) appropriately exponentially negligible, the shock can be moved from \(x = \frac{1} {2}\) to x = 0. (This also follows by reflection from (ii).)

  4. (iv)

    In the quarter-plane where y(0) < 0 < y(1), we’d expect two endpoint layers. Instead of letting α be imaginary, we take the general solution to have the form

    $$\displaystyle{ y(x,\epsilon ) =\epsilon A\tan (A(x -\epsilon B)) }$$
    (3.202)

    with a trivial limit in 0 < x < 1 and boundary layers at both x = 0 and 1.

The limiting possibilities for solutions of (3.196) are illustrated in Fig. 3.1.

Figure 3.1:

The limiting solution to \(\epsilon y^{{\prime\prime}}- 2yy^{{\prime}} = 0\) differs in four different regions of the y(0)-y(1) plane
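Since (3.197) satisfies \(\epsilon y^{{\prime}} = y^{2}-\alpha\) exactly, the case (i) profile (3.200) can be checked by direct substitution. The short Python sketch below (ε = 0.05 and y(0) = 1 are our illustrative choices) confirms that the finite-difference residual of \(\epsilon y^{{\prime\prime}}- 2yy^{{\prime}}\) is negligible across the shock layer and that the boundary mismatch is only exponentially small:

```python
import math

eps, k = 0.05, 1.0                     # k plays the role of y(0) > 0

def y(x):
    # the uniform shock layer solution (3.200)
    return -k * math.tanh(k * (x - 0.5) / eps)

# Centered finite-difference residual of eps*y'' - 2*y*y' = 0
def residual(x, h=1e-5):
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    yp = (y(x + h) - y(x - h)) / (2 * h)
    return eps * ypp - 2 * y(x) * yp

# sample across the O(eps)-thick layer about x = 1/2
max_res = max(abs(residual(0.3 + 0.01 * i)) for i in range(41))
# boundary mismatch: y(0) - k and y(1) + k are O(e^{-k/eps})
bc_err = max(abs(y(0.0) - k), abs(y(1.0) + k))
```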

The preceding analysis for (3.196) anticipates the corresponding asymptotics for Burgers’ partial differential equation

$$\displaystyle{ u_{t} =\epsilon u_{xx} + uu_{x} }$$
(3.203)

on the planar strip where − 1 ≤ x ≤ 1 and t ≥ 0. For the constant boundary values u(±1, t) = ±1 and a prescribed smooth initial function u(x, 0), we might anticipate the development of a moving shock layer solution

$$\displaystyle{ \tanh \left (\frac{x - x_{\epsilon }(t)} {2\epsilon } \right ) }$$
(3.204)

(cf. Reyna and Ward [416] and Laforgue and O’Malley [275]). One can, indeed, use the Cole–Hopf transformation

$$\displaystyle{v(\eta,t) = e^{\int _{0}^{\eta }u(s,t)\,ds }\ \ \ \mbox{ for }\eta = \frac{x - x_{\epsilon }} {2\epsilon } }$$

to convert Burgers’ equation to the linear heat equation and to then solve that using Fourier series. Note the relation to the previously introduced Riccati transformations. One finds that the profile (3.204) moves asymptotically slowly after the shock is formed, according to the equation

$$\displaystyle{ \frac{dx_{\epsilon }} {dt} = e^{-1/\epsilon }(e^{-x_{\epsilon }/\epsilon } - e^{x_{\epsilon }/\epsilon }). }$$
(3.205)

The trivial rest point of (3.205) is reached after an asymptotically exponentially long time (i.e., we attain metastability due to the asymptotically negligible speed of the shock location x ε ). See O’Malley and Ward [379] for study of a variety of related problems. Also note the relationship to intermediate asymptotics and self-similar solutions, as presented by Barenblatt [28].
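The metastable motion governed by (3.205) is easy to observe numerically. The Python sketch below (our own illustration; the moderate values ε = 0.5 and ε = 0.2 keep the time scales computable) integrates (3.205) with classical RK4 and times the drift of x ε from 0.5 to 0.25. Separating variables shows the exact transit time is \((\epsilon /2)e^{1/\epsilon }[\ln \tanh (x_{0}/2\epsilon ) -\ln \tanh (x/2\epsilon )]\), which grows like \(e^{1/\epsilon }\) as ε shrinks:

```python
import math

def shock_time(eps, x0=0.5, x1=0.25, dt=1e-3):
    """Integrate dx/dt = e^{-1/eps}(e^{-x/eps} - e^{x/eps}) with RK4 and
    return the first time the shock location drops below x1."""
    rate = math.exp(-1.0 / eps)

    def f(x):
        return rate * (math.exp(-x / eps) - math.exp(x / eps))

    x, t = x0, 0.0
    while x > x1:
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return t

t_half = shock_time(0.5)   # exact value is about 1.173
t_slow = shock_time(0.2)   # smaller eps: markedly slower motion
```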

E [130] outlines a two-scale approach to the Allen–Cahn equation (describing phase transitions):

$$\displaystyle{ u_{t} =\epsilon \varDelta u -\frac{1} {\epsilon } V ^{{\prime}}(u) }$$
(3.206)

where V is a double-well potential with local minima u 1 and u 2. He introduces the stretched variable

$$\displaystyle{ \frac{\varphi (x,t)} {\epsilon }, }$$
(3.207)

with \(\varphi\) being the distance from x to a boundary curve Γ t of the domain and makes the multi-scale ansatz

$$\displaystyle{ u\left (x,t, \frac{\varphi } {\epsilon },\epsilon \right ) = U_{0}\left (\frac{\varphi (x,t)} {\epsilon } \right ) +\epsilon U_{1}\left (\frac{\varphi (x,t)} {\epsilon },x,t\right )+\ldots }$$
(3.208)

where \(U_{0}(\pm \infty ) = u_{1,2}\). Leading terms in (3.206) then imply that

$$\displaystyle{\varphi _{t}U_{0}^{{\prime}} = U_{ 0}^{{\prime\prime}}- V ^{{\prime}}(U_{ 0})}$$

so

$$\displaystyle{ \varphi _{t} = \frac{V (u_{1}) - V (u_{2})} {\int _{-\infty }^{\infty }(U_{0}^{{\prime}}(y))^{2}\,dy}. }$$
(3.209)

When V (u 1) = V (u 2), he gets shock layer motion on a longer time scale. Such multi-scale ideas will be developed in Chap. 5.
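For the standard quartic double well \(V (u) = \frac{1} {4}(1 - u^{2})^{2}\) (our illustrative choice, not from the text), the wells u 1 = −1 and u 2 = 1 have equal depths, the stationary profile \(U_{0}(y) =\tanh (y/\sqrt{2})\) satisfies \(U_{0}^{{\prime\prime}} = V ^{{\prime}}(U_{0})\), and the numerator of (3.209) vanishes, so the interface is indeed stationary at this order. A quick finite-difference check in Python:

```python
import math

# Assumed example potential: V(u) = (1 - u^2)^2 / 4, so V'(u) = u^3 - u,
# with equal-depth wells at u = -1 and u = +1.
def V(u):
    return 0.25 * (1.0 - u ** 2) ** 2

def Vp(u):
    return u ** 3 - u

def U0(y):
    return math.tanh(y / math.sqrt(2.0))

# Check U0'' = V'(U0) by centered differences on a grid.
h = 1e-5
max_err = 0.0
for i in range(-40, 41):
    y = 0.1 * i
    upp = (U0(y + h) - 2 * U0(y) + U0(y - h)) / h ** 2
    max_err = max(max_err, abs(upp - Vp(U0(y))))

# Equal well depths make the numerator of (3.209) vanish: phi_t = 0,
# so the layer moves only on a longer time scale, as the text notes.
well_gap = abs(V(-1.0) - V(1.0))
```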

Cole [92] shows that the asymptotic solution of the boundary value problem

$$\displaystyle{ \epsilon y^{{\prime\prime}} + yy^{{\prime}}- y = 0,\ \ \ \mbox{ $0 \leq x \leq 1$},\ \ \mbox{ $y(0)$ and $y(1)$ prescribed} }$$
(3.210)

(less tractable than (3.196)) also varies significantly depending on the endvalues y(0) and y(1). This problem was introduced at Caltech in the 1950s as a nonlinear model where solutions feature both endpoint boundary and interior shock layers. It’s often called the Cole–Lagerstrom problem. We shall illustrate some possibilities. See Cole [92], Dorr et al. [124], Chang and Howes [76], and Lagerstrom [276] for the complete list of possible asymptotic solutions in nine distinct subsets of the y(0)-y(1) plane. Shen and Han [451] provide results for a more general equation.

Note that the reduced equation

$$\displaystyle{ Y _{0}(Y _{0}^{{\prime}}- 1) = 0 }$$
(3.211)

has the trivial solution Y 0(x) ≡ 0 and the linear family of solutions Y 0(x) = x + c.

  1. (a)

    Suppose y(0) = 0 and y(1) = 2. If we take c = 1, Y 0(x) = x + 1 will satisfy the terminal condition Y 0(1) = 2. The positivity of x + 1 throughout 0 ≤ x ≤ 1 suggests that Y 0 might serve as an outer solution Y (x, ε) for an asymptotic solution with an initial layer, say

    $$\displaystyle{ y(x,\epsilon ) = x + 1 + v(\xi,\epsilon ) }$$
    (3.212)

    where v(0, ε) = −1, \(v \rightarrow 0\) as \(\xi = x/\epsilon \rightarrow \infty \), and

    $$\displaystyle{ v \sim \sum _{k\geq 0}v_{k}(\xi )\epsilon ^{k}. }$$
    (3.213)

    Then, \(y^{{\prime}} = 1 + \frac{1} {\epsilon } \frac{dv} {d\xi }\) and \(\epsilon y^{{\prime\prime}} = \frac{1} {\epsilon } \frac{d^{2}v} {d\xi ^{2}}\) imply that \(\frac{d^{2}v} {d\xi ^{2}} + (\epsilon \xi +1 + v)\frac{dv} {d\xi } = 0\), so the leading term v 0 must satisfy the autonomous equation \(\frac{d^{2}v_{ 0}} {d\xi ^{2}} + (1 + v_{0})\frac{dv_{0}} {d\xi } = 0\). Integrating backwards from infinity, we obtain the initial value problem

    $$\displaystyle{ \frac{dv_{0}} {d\xi } + v_{0} + \frac{v_{0}^{2}} {2} = 0,\ \ \ v_{0}(0) = -1. }$$
    (3.214)

    Integrating this Riccati equation provides \(v_{0}(\xi ) = - \frac{2} {1+e^{\xi }}\), i.e., the uniformly valid approximation

    $$\displaystyle{ y(x,\epsilon ) \sim x + 1 - \frac{2} {1 + e^{x/\epsilon }}, }$$
    (3.215)

    featuring an initial layer of O(ε)-thickness in x.

  2. (b)

    If we instead take y(0) = −1 and y(1) = 1, we might anticipate having an interior shock layer between the linear left- and right-sided outer solutions

    $$\displaystyle{ Y _{L}(x) = x - 1\ \ \ \mbox{ and }\ \ \ Y _{R}(x) = x. }$$
    (3.216)

    (Since Y L  < 0 and Y R  > 0, we wouldn’t expect endpoint layers.) Thus, we will assume an asymptotic solution of the form

    $$\displaystyle{ y(x,\epsilon ) = x - 1 + u(\kappa,\epsilon ) }$$
    (3.217)

    for the stretched variable

    $$\displaystyle{ \kappa \equiv \frac{x -\tilde{ x}} {\epsilon }, }$$
    (3.218)

    expecting a monotonic unit jump in y about the shock location \(\tilde{x}\) (to be determined) such that

    $$\displaystyle{u \rightarrow \left \{\begin{array}{@{}l@{\quad }l@{}} 0 \quad &\mbox{ as $\kappa \rightarrow -\infty $}\\ \tilde{x} - (\tilde{x} - 1) = 1\quad &\mbox{ as $\kappa \rightarrow \infty $}.\end{array} \right.}$$

    Since \(y^{{\prime}} = 1 + \frac{1} {\epsilon } \frac{du} {d\kappa }\) and \(\epsilon y^{{\prime\prime}} = \frac{1} {\epsilon } \frac{d^{2}u} {d\kappa ^{2}}\), u must satisfy \(\frac{d^{2}u} {d\kappa ^{2}} + (\tilde{x} +\epsilon \kappa -1 + u)\frac{du} {d\kappa } = 0\). Its leading term will satisfy

    $$\displaystyle{ \frac{du_{0}} {d\kappa } + (\tilde{x} - 1)u_{0} + \frac{u_{0}^{2}} {2} = 0, }$$
    (3.219)

    upon integrating from −. The rest points are 0 and \(2(1 -\tilde{ x})\). To get a solution joining the rest points \(u_{0}(-\infty ) = 0\) and \(u_{0}(\infty ) = 1\) requires taking

    $$\displaystyle{ \tilde{x} = 1/2, }$$
    (3.220)

    as we should have anticipated from symmetry. The corresponding limiting shock layer solution is

    $$\displaystyle{ u_{0}(\kappa ) = \frac{1} {1 + e^{-\kappa /2}}\ \ \ \mbox{ for }\kappa = \frac{x -\frac{1} {2}} {\epsilon }. }$$
    (3.221)

    With analogous nonsymmetric boundary values, the jump would instead be located elsewhere. Higher-order terms follow readily.
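Both layer profiles above can be checked by direct substitution. The Python sketch below (the grids and step sizes are our choices) verifies that \(v_{0}(\xi ) = -2/(1 + e^{\xi })\) satisfies the Riccati problem (3.214), and that the logistic profile \(u_{0}(\kappa ) = 1/(1 + e^{-\kappa /2})\) is a heteroclinic solution of (3.219) with \(\tilde{x} = 1/2\), joining the rest points 0 and 1:

```python
import math

# Case (a): v0(xi) = -2/(1 + e^xi) should satisfy (3.214).
def v0(xi):
    return -2.0 / (1.0 + math.exp(xi))

def res_a(xi, h=1e-6):
    vp = (v0(xi + h) - v0(xi - h)) / (2 * h)
    return vp + v0(xi) + 0.5 * v0(xi) ** 2

max_res_v = max(abs(res_a(0.5 * i)) for i in range(21))
v0_ic = v0(0.0) + 1.0                      # v0(0) = -1

# Case (b): with xt = 1/2, (3.219) reads du0/dk = u0/2 - u0^2/2, a logistic
# equation; the heteroclinic solution centered at k = 0 is 1/(1 + e^{-k/2}).
def u0(k):
    return 1.0 / (1.0 + math.exp(-k / 2.0))

def res_b(k, h=1e-6):
    up = (u0(k + h) - u0(k - h)) / (2 * h)
    return up - 0.5 * u0(k) + 0.5 * u0(k) ** 2

max_res_u = max(abs(res_b(-10.0 + i)) for i in range(21))
u_limits = (u0(-50.0), u0(50.0))           # -> (0, 1)
```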

Example

The two-point problem

$$\displaystyle{ \epsilon y^{{\prime\prime}} = 1 - (y^{{\prime}})^{2},\ \ 0 \leq x \leq 1,\ \ \mbox{ with }\ \ y(0) = 0 = y(1) }$$
(3.222)

can be solved by converting it to the slow-fast system

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} y^{{\prime}} = z \quad \\ \epsilon z^{{\prime}} = 1 - z^{2}.\quad \end{array} \right. }$$
(3.223)

If we select the left and right outer solutions

$$\displaystyle{Z_{L}(x) = -1\ \ \ \mbox{ and }\ \ \ Z_{R}(x) = 1,}$$

corresponding to

$$\displaystyle{Y _{L}(x) = -x\ \ \ \mbox{ and }\ \ \ Y _{R}(x) = x - 1,}$$

the two outer solutions Y L and Y R meet at x = 1∕2, where \(y^{{\prime}}\) must jump.

We naturally look for a shock layer as a function of the stretched variable

$$\displaystyle{\xi = \frac{1} {\epsilon } \left (x -\frac{1} {2}\right ).}$$

A direct integration (with the “right” endvalues) provides

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} z(x,\epsilon ) =\tanh ((x -\frac{1} {2})/\epsilon )\quad \\ \mbox{and}\quad \\ y(x,\epsilon ) =\epsilon \ln \left (\frac{\cosh \left (\left (x-\frac{1} {2} \right )/\epsilon \right )} {\cosh \left (1/2\epsilon \right )} \right )\quad \end{array} \right. }$$
(3.224)

with the anticipated angular asymptotics. Note that y has a minimum at x = 1∕2. (A maximum principle argument would rule out the selection Z L (x) = 1, Z R (x) = −1.)
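The closed-form pair (3.224) can be verified directly against the slow-fast system (3.223). In the sketch below (ε = 0.1 is our choice), ln cosh is evaluated via `log1p` so the formula stays stable for small ε:

```python
import math

eps = 0.1

def z(x):
    return math.tanh((x - 0.5) / eps)

def y(x):
    # eps * ln(cosh((x - 1/2)/eps) / cosh(1/(2 eps))), written via a
    # log-cosh helper to avoid overflow of cosh for small eps
    def logcosh(t):
        return abs(t) + math.log1p(math.exp(-2.0 * abs(t))) - math.log(2.0)
    return eps * (logcosh((x - 0.5) / eps) - logcosh(0.5 / eps))

# Check the slow-fast system (3.223): y' = z and eps*z' = 1 - z^2
def sys_residuals(x, h=1e-6):
    yp = (y(x + h) - y(x - h)) / (2 * h)
    zp = (z(x + h) - z(x - h)) / (2 * h)
    return abs(yp - z(x)), abs(eps * zp - (1.0 - z(x) ** 2))

r1_list, r2_list = [], []
for i in range(11):
    r1, r2 = sys_residuals(0.05 + 0.09 * i)
    r1_list.append(r1)
    r2_list.append(r2)
max_r1, max_r2 = max(r1_list), max(r2_list)
bc = (abs(y(0.0)), abs(y(1.0)))       # both endvalues vanish
```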

Exercises

  1. 1.

    Numerically solve

    $$\displaystyle{\epsilon y^{{\prime\prime}} = 2yy^{{\prime}},\ \ 0 \leq x \leq 1}$$

    with various boundary values y(0) and y(1) to illustrate the four possible types of limiting solution.

  2. 2.

    For y(0) = −1∕4 and y(1) = 1∕2, show that the limiting (piecewise linear) solution to the Cole–Lagerstrom equation (3.210) is given by

    $$\displaystyle{y(x,\epsilon ) \rightarrow \left \{\begin{array}{@{}l@{\quad }l@{}} x -\frac{1} {4},\quad &\mbox{ $0 \leq x \leq \frac{1} {4}$} \\ 0, \quad &\mbox{ $\frac{1} {4} \leq x \leq \frac{1} {2}$} \\ x -\frac{1} {2},\quad &\mbox{ $\frac{1} {2} \leq x \leq 1$}.\end{array} \right.}$$
  3. 3.

    Show that the nonlinear boundary value problem

    $$\displaystyle{\epsilon y^{{\prime\prime}} + f(y)y^{{\prime}} + g(y) = 0}$$

    with y(0) and y(1) prescribed can be converted to a slow-fast problem in the y-z (or Liénard) plane with \(z \equiv \epsilon y^{{\prime}} +\int ^{y}f(r)\,dr\).

For the more general nonlinear scalar problem

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \epsilon y^{{\prime\prime}} + f(x,y)y^{{\prime}} + g(x,y) = 0,\ \ 0 \leq x \leq 1\quad &\mbox{ } \\ \mbox{ with }y(0)\mbox{ and }y(1)\mbox{ prescribed}, \quad &\mbox{ }\end{array} \right. }$$
(3.225)

perhaps first studied in Coddington and Levinson [90], we will seek an initial layer solution in the composite form

$$\displaystyle{ y(x,\epsilon ) = Y (x,\epsilon ) + v(\xi,\epsilon ) }$$
(3.226)

where \(v \rightarrow 0\) as \(\xi = x/\epsilon \rightarrow \infty \). We will naturally require two stability assumptions:

  1. (i)

    that the reduced problem

    $$\displaystyle{ f(x,Y _{0})Y _{0}^{{\prime}} + g(x,Y _{ 0}) = 0,\ \ \ Y _{0}(1) = y(1) }$$
    (3.227)

    has a solution Y 0(x) on 0 ≤ x ≤ 1 with

    $$\displaystyle{ f(x,Y _{0}(x)) > 0 }$$
    (3.228)
  2. (ii)

    that the separable limiting integrated initial layer problem

    $$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{dv_{0}} {d\xi } +\int _{ 0}^{v_{0}}f(0,Y _{ 0}(0) + r)\,dr = 0,\quad \\ v_{0}(0) = y(0) - Y _{0}(0) \quad \end{array} \right. }$$
    (3.229)

    has a solution v 0(ξ) defined throughout ξ ≥ 0 that decays to zero as \(\xi \rightarrow \infty \).

Clearly, the outer solution Y (x, ε) ∼  j ≥ 0 Y j (x)ε j must satisfy the terminal value problem

$$\displaystyle{ \epsilon Y ^{{\prime\prime}} + f(x,Y )Y ^{{\prime}} + g(x,Y ) = 0,\ \ \ Y (1,\epsilon ) = y(1). }$$
(3.230)

Since we have assumed existence of the ε = 0 solution Y 0(x) throughout 0 ≤ x ≤ 1, the next term Y 1 must satisfy the linearized problem

$$\displaystyle{ f(x,Y _{0})Y _{1}^{{\prime}} +\big (f_{ y}(x,Y _{0})Y _{0}^{{\prime}} + g_{ y}(x,Y _{0})\big)Y _{1} + Y _{0}^{{\prime\prime}} = 0,\ \ \ Y _{ 1}(1) = 0. }$$
(3.231)

Presuming smoothness of the coefficients, it is straightforward to successively define all the \(Y _{k}\)s in terms of the attractive outer limit Y 0.

Knowing Y (x, ε) asymptotically, the supplementary initial layer correction v must satisfy

$$\displaystyle{\left (\epsilon Y ^{{\prime\prime}} + \frac{1} {\epsilon } \frac{d^{2}v} {d\xi ^{2}} \right ) + f(x,Y + v)\left (Y ^{{\prime}} + \frac{1} {\epsilon } \frac{dv} {d\xi } \right ) + g(x,Y + v) = 0}$$

and \(v(0,\epsilon ) = y(0) - Y (0,\epsilon )\), i.e.

$$\displaystyle\begin{array}{rcl} \frac{d^{2}v} {d\xi ^{2}} & +& f(x,Y + v)\frac{dv} {d\xi } \\ & +& \epsilon \big[(f(x,Y + v) - f(x,Y ))Y ^{{\prime}} + g(x,Y + v) - g(x,Y )\big] = 0{}\end{array}$$
(3.232)

when we substitute \(-f(x,Y )Y ^{{\prime}}- g(x,Y )\) for \(\epsilon Y ^{{\prime\prime}}\) and expand all functions of x = ε ξ in Taylor series about x = 0. Thus, v 0 must be a decaying solution of the nonlinear terminal value problem

$$\displaystyle{ \frac{d^{2}v_{0}} {d\xi ^{2}} + f(0,Y _{0}(0) + v_{0})\frac{dv_{0}} {d\xi } = 0,\ \ \ v_{0}(\infty ) = 0. }$$
(3.233)

Integrating backwards, we require v 0 to satisfy the initial value problem (3.229). Existence of v 0 is guaranteed by the second stability condition. The next terms in (3.232) require v 1 to satisfy

$$\displaystyle\begin{array}{rcl} \frac{d^{2}v_{1}} {d\xi ^{2}} & +& f(0,Y _{0}(0) + v_{0})\frac{dv_{1}} {d\xi } +\xi \big [f_{x}(0,Y _{0}(0) + v_{0}) \\ & +& f_{y}(0,Y _{0}(0) + v_{0})Y _{0}^{{\prime}}(0)\big]\frac{dv_{0}} {d\xi } \\ & +& \Big[\big(f(0,Y _{0}(0) + v_{0}) - f(0,Y _{0}(0)\big)Y _{0}^{{\prime}}(0) \\ & +& \big(g(0,Y _{0}(0) + v_{0}) - g(0,Y _{0}(0)\big)\Big] = 0.{}\end{array}$$
(3.234)

Because v 0 and \(\frac{dv_{0}} {d\xi }\) decay exponentially to zero as \(\xi \rightarrow \infty \), we can integrate backwards from infinity where \(v_{1}(\infty ) = 0\). Then, we integrate the resulting linear initial value problem with v 1(0) = −Y 1(0) to get v 1(ξ). Later terms follow analogously, in turn. We note that more direct multi-scale methods for (3.225) (which we will later consider) do not seem to be generally available except when f(x, y) is independent of y.

Example

Consider

$$\displaystyle{ \epsilon y^{{\prime\prime}} + e^{y}y^{{\prime}} = 1,\ \ \ y(0) = 0,\ \ y(1) = 1. }$$
(3.235)

The limiting outer problem

$$\displaystyle{ e^{Y _{0} }Y _{0}^{{\prime}} = 1,\ \ \ Y _{ 0}(1) = 1 }$$
(3.236)

implies that \(e^{Y _{0}} = x + c\) where e = 1 + c, so

$$\displaystyle{ Y _{0}(x) =\ln (x + e - 1) }$$
(3.237)

and \(e^{Y _{0}} > 0\). We will seek a uniform limit

$$\displaystyle{ y(x,\epsilon ) \sim Y _{0}(x) + v_{0}(\xi )\mbox{ for }\xi = x/\epsilon. }$$
(3.238)

Then, v 0 must satisfy \(\frac{d^{2}v_{ 0}} {d\xi ^{2}} + e^{Y _{0}(0)+v_{0}(\xi )}\frac{dv_{0}} {d\xi } = 0,\ \ Y _{0}(0) + v_{0}(0) = 0\) and \(v_{0} \rightarrow 0\) as \(\xi \rightarrow \infty \). An integration requires v 0 to satisfy the nonlinear initial value problem

$$\displaystyle{\frac{dv_{0}} {d\xi } + (e - 1)(e^{v_{0} } - 1) = 0,\ \ \ v_{0}(0) = -\ln (e - 1).}$$

Thus

$$\displaystyle{ v_{0}(\xi ) = -\ln (1 + (e - 2)e^{-(e-1)\xi }) }$$
(3.239)

and the uniformly valid limiting solution on 0 ≤ x ≤ 1 is

$$\displaystyle{ y(x,\epsilon ) =\ln \left ( \frac{x + e - 1} {1 + (e - 2)e^{-(e-1)x/\epsilon }}\right ) + O(\epsilon ). }$$
(3.240)
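This construction is easy to test numerically (our sketch; the grids and ε = 0.01 are arbitrary choices): the layer correction (3.239) satisfies its initial value problem, and the composite (3.240) takes the prescribed endvalues up to exponentially small and O(ε) errors.

```python
import math

e = math.e

def v0(xi):
    # the layer correction (3.239)
    return -math.log(1.0 + (e - 2.0) * math.exp(-(e - 1.0) * xi))

# Residual of the layer equation: v0' + (e - 1)(e^{v0} - 1) = 0
def layer_residual(xi, h=1e-6):
    vp = (v0(xi + h) - v0(xi - h)) / (2 * h)
    return vp + (e - 1.0) * (math.exp(v0(xi)) - 1.0)

max_res = max(abs(layer_residual(0.3 * i)) for i in range(21))
ic_err = abs(v0(0.0) + math.log(e - 1.0))      # v0(0) = -ln(e - 1)

# Endvalues of the composite limit (3.240)
eps = 0.01
def y(x):
    return math.log((x + e - 1.0)
                    / (1.0 + (e - 2.0) * math.exp(-(e - 1.0) * x / eps)))

y_left, y_right = y(0.0), y(1.0)               # ~0 and ~1
```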

We will next outline the construction of the asymptotic solution of the scalar two-point problem

$$\displaystyle{ \epsilon y^{{\prime\prime}} + f(x,y)y^{{\prime}} + g(x,y) = 0\ \ \mbox{ with $y(0)$ and $y(1)$ prescribed} }$$
(3.241)

featuring a sharp transition (i.e., a shock) layer at an interior point \(\tilde{x}\). (Recall several examples considered previously.)

We will assume

  1. (i)

    that the left limiting problem

    $$\displaystyle{ f(x,Y )Y ^{{\prime}} + g(x,Y ) = 0,\ \ \ Y (0) = y(0) }$$
    (3.242)

    has a solution Y L0(x) such that

    $$\displaystyle{ f(x,Y _{L0}(x)) < 0\ \ \ \mbox{ for }0 \leq x \leq \tilde{ x}, }$$
    (3.243)

    and

  2. (ii)

    the right limiting problem

    $$\displaystyle{ f(x,Y )Y ^{{\prime}} + g(x,Y ) = 0,\ \ \ Y (1) = y(1) }$$
    (3.244)

    has a solution Y R0(x) such that

    $$\displaystyle{ f(x,Y _{R0}(x)) > 0,\ \ \ \mbox{ for }\tilde{x} \leq x \leq 1. }$$
    (3.245)

    Here, the isolated jump location \(\tilde{x}\) is determined by the classical Rankine–Hugoniot jump condition

    $$\displaystyle{ \int _{Y _{L0}(\tilde{x})}^{Y _{R0}(\tilde{x})}f(\tilde{x},s)\,ds = 0 }$$
    (3.246)

    (cf. Whitham [520]).

We will also assume that

  1. (iii)

    the integrated shock layer problem

    $$\displaystyle{ \frac{du_{0}} {d\kappa } +\int _{ 0}^{u_{0} }f\left (\tilde{x},Y _{L0}(\tilde{x}) + r\right )\,dr = 0,\ \ \ -\infty <\kappa < \infty }$$
    (3.247)

    has a monotonic solution u 0(κ) on − < κ <  satisfying

    $$\displaystyle{u_{0}(-\infty ) = 0\ \ \mbox{ and }\ \ u_{0}(\infty ) = Y _{R0}(\tilde{x}) - Y _{L0}(\tilde{x}).}$$
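For instance, in the Cole–Lagerstrom case (b) considered earlier, f(x, y) = y, \(Y _{L0}(x) = x - 1\), and \(Y _{R0}(x) = x\), so (3.246) reduces to \(\int _{\tilde{x}-1}^{\tilde{x}}s\,ds =\tilde{ x} - 1/2 = 0\). The Python sketch below (our illustration) recovers \(\tilde{x} = 1/2\) by bisecting on the numerically evaluated jump integral:

```python
def rh(x, n=1000):
    # midpoint rule for the Rankine-Hugoniot integral (3.246) with
    # f(x, s) = s, taken between Y_L0(x) = x - 1 and Y_R0(x) = x;
    # analytically it equals x - 1/2 and is increasing in x
    a, b = x - 1.0, x
    h = (b - a) / n
    return sum((a + (i + 0.5) * h) * h for i in range(n))

lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if rh(mid) < 0.0:
        lo = mid
    else:
        hi = mid
x_shock = 0.5 * (lo + hi)
```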

Then, we can construct an asymptotic solution to (3.241) of the form

$$\displaystyle{ y(x,\epsilon ) = Y _{L}(x,\epsilon ) + u(\kappa,\epsilon ) }$$
(3.248)

where

$$\displaystyle{ \kappa = \frac{1} {\epsilon } (x -\tilde{ x}), }$$
(3.249)
$$\displaystyle{u(-\infty,\epsilon ) = 0,\ \ \mbox{ and }\ \ u(\infty,\epsilon ) \rightarrow Y _{R}(\tilde{x}+\epsilon \kappa,\epsilon ) - Y _{L}(\tilde{x}+\epsilon \kappa,\epsilon )}$$

for left- and right-outer expansions Y L (x, ε) and Y R (x, ε) with limits Y L0 and Y R0 such that

$$\displaystyle\begin{array}{rcl} & & Y _{L}(x,\epsilon ) \sim \sum _{j\geq 0}Y _{Lj}(x)\epsilon ^{j},\quad Y _{ R}(x,\epsilon ) \sim \sum _{j\geq 0}Y _{Rj}(x)\epsilon ^{j}, \\ & & \qquad \quad \qquad \qquad \qquad \mbox{ and}\quad u(\kappa,\epsilon ) \sim \sum _{j\geq 0}u_{j}(\kappa )\epsilon ^{j}. {}\end{array}$$
(3.250)

As an alternative, we could center the shock layer at the average value

$$\displaystyle{\frac{1} {2}\left (Y _{L}(\tilde{x},\epsilon ) + Y _{R}(\tilde{x},\epsilon )\right )}$$

and seek a shock layer solution v(κ, ε) tending to \(Y _{L}(\tilde{x},\epsilon )\) as \(\kappa \rightarrow -\infty \) and to \(Y _{R}(\tilde{x},\epsilon )\) as \(\kappa \rightarrow +\infty \).

Lorenz [299] considered the example

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \epsilon y^{{\prime\prime}} + y(1 - y^{2})y^{{\prime}}- y = 0\quad \\ y(0) = 1.6,\ \ \ y(1) = -1.7. \quad \end{array} \right.}$$

He showed that the limiting solution satisfies

$$\displaystyle{y(x,\epsilon ) \rightarrow \left \{\begin{array}{@{}l@{\quad }l@{}} Y _{L0}(x),\quad &0 < x < x_{1} = \frac{\sqrt{2}} {3} - 1.6 + \frac{(1.6)^{3}} {3} \\ 0, \quad &x_{1} < x < x_{2} \\ Y _{R0}(x),\quad &1 -\frac{\sqrt{2}} {3} + 1.7 -\frac{(1.7)^{3}} {3} = x_{2} < x \leq 1 \end{array} \right.}$$

for left and right limiting solutions Y L0(x) and Y R0(x) and with x 1 ≈ 0. 24 and x 2 ≈ 0. 59, found computationally. Readers should verify this conclusion analytically or numerically.

We might expect solutions of two-point problems for the semilinear vector equation

$$\displaystyle{ \epsilon ^{2}y^{{\prime\prime}} + f(x,y) = 0,\ \ 0 \leq x \leq 1 }$$
(3.251)

to instead converge, away from boundary layers at both endpoints, to a solution Y 0 of the limiting algebraic equation

$$\displaystyle{ f(x,Y _{0}(x)) = 0 }$$
(3.252)

when (i) the Jacobian

$$\displaystyle{f_{y}(x,Y _{0}(x))}$$

is a stable matrix throughout the interval and (ii) the corresponding boundary layer jumps y(0) − Y 0(0) and y(1) − Y 0(1) are appropriately restricted to achieve stability in the boundary layers.

Then, the asymptotic solution to (3.251) will take the form

$$\displaystyle{ y(x,\epsilon ) = Y (x,\epsilon ) + r(\kappa,\epsilon ) + s(\lambda,\epsilon ) }$$
(3.253)

where the terms of the initial layer correction \(r \rightarrow 0\) as \(\kappa = \frac{x} {\epsilon } \rightarrow \infty \) while those of the terminal layer correction \(s \rightarrow 0\) as \(\lambda = \frac{1-x} {\epsilon } \rightarrow \infty \). Franz and Roos [158], likewise, show that the limiting solution to

$$\displaystyle{\epsilon ^{2}y^{{\prime\prime}}- y(y - 1)\left (y - x -\frac{3} {2}\right ) = 0,\ \ y(0) = 0,\ \ y(1) = 5/2}$$

satisfies

$$\displaystyle{y(x,\epsilon ) \rightarrow \left \{\begin{array}{@{}l@{\quad }l@{}} 0, \quad &\mbox{ $0 \leq x < \frac{1} {2}$} \\ x + \frac{3} {2},\quad &\mbox{ $\frac{1} {2} < x \leq 1$.} \end{array} \right.}$$

These boundary values don’t require endpoint layers. However, a shock layer between the two outer limits is needed at x = 1∕2.
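The location x = 1∕2 can be recovered from an equal-area condition analogous to (3.246): a layer joining the outer roots y = 0 and y = x + 3∕2 of the cubic can sit where \(\int _{0}^{x+3/2}s(s - 1)\left (s - x -\frac{3} {2}\right )ds = 0\). Analytically this integral is \(c^{3}(2 - c)/12\) with c = x + 3∕2, vanishing at c = 2, i.e., at x = 1∕2. A hedged numerical sketch in Python:

```python
def area(x, n=2000):
    # midpoint rule for the equal-area integral over [0, x + 3/2]
    c = x + 1.5
    h = c / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        total += s * (s - 1.0) * (s - c) * h
    return total

# area(x) ~ c^3 (2 - c)/12 is positive for x < 1/2, negative for x > 1/2
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if area(mid) > 0.0:
        lo = mid
    else:
        hi = mid
x_shock = 0.5 * (lo + hi)
```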

A semilinear example with (at least) two solutions having both boundary and corner layers is contained in O’Donnell [356]. It is the system

$$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \epsilon y_{1}''\quad &= \left (y_{1} -\left \vert x -\frac{1} {2}\right \vert \right )(1 + y_{2}^{2}) \\ \epsilon y_{2}''\quad &= \left (y_{2} - 1 + \left \vert \frac{1} {3} - x\right \vert \right )(1 + y_{1}^{2})\end{array} \right.}$$

for appropriate values y 1(0),  y 1(1),  y 2(0), and y 2(1).

Other two-point problems are considered by Butuzov et al. [64]. Indeed, Butuzov et al. [63] and Vasil’eva et al. [497] develop a theory for the so-called contrast structures. See Schmeiser [439] for an application with ε-dependent boundary conditions. Somewhat comparable asymptotic results to those we’ve obtained are also available for certain integral equations (cf. Shubin [454] and its references) and other functional equations.

Overview (or Proof of Asymptotic Validity)

When one solves boundary value problems asymptotically, one typically uses only a few terms in the formal series generated, because of the effort involved and because the series generally diverge. Thus, if one knows the first two terms y 0(x, η) and y 1(x, η) in a formal approximation, with the given independent variable x and, say, a stretched variable η(x, ε) describing a boundary or interior layer of nonuniform convergence, we might write the actual solution y(x, ε) as

$$\displaystyle{ y(x,\epsilon ) = y_{0}(x,\eta ) +\epsilon y_{1}(x,\eta ) +\epsilon ^{2}R(x,\epsilon ) }$$
(3.254)

and convert the given boundary value problem for y into a new problem for the (scaled) remainder R. Often, we further convert the latter problem into an integral equation for R. If we can estimate its solution R using, for example, differential inequalities, the boundedness of R throughout the x interval will imply that our formal result

$$\displaystyle{y \sim y_{0} +\epsilon y_{1}}$$

is asymptotically correct to O(ε 2) in that interval. Such proofs are given in Smith [466], Murdock [335], and de Jager and Jiang [224].

(d) Linear Boundary Value Problems

So far, we may have casually given the incorrect impression that singularly perturbed boundary value problems have unique asymptotic solutions, typically consisting of an outer solution and endpoint boundary layer corrections. The actual situation is illustrated quite clearly by the constant linear n-dimensional vector system

$$\displaystyle{ \epsilon y^{{\prime}} = Ay,\ \ \ 0 \leq x \leq 1 }$$
(3.255)

subject to n coupled linear boundary conditions

$$\displaystyle{ \alpha y(0) +\beta y(1) =\gamma }$$
(3.256)

for scalar constants α and β and an n-vector γ. More complications naturally occur when the entries of the matrix A vary with x.

We will assume here that the matrix A has the spectral decomposition

$$\displaystyle{ A = PDP^{-1} }$$
(3.257)

for a nonsingular constant matrix P and a block-diagonal matrix

$$\displaystyle{ D \equiv \mbox{ diag }(S,0,U) }$$
(3.258)

where S is a stable \(r \times r\) matrix, 0 is the trivial \(s \times s\) matrix, and U is a \(t \times t\) unstable matrix with r + s + t = n. Then the so-called shearing transformation

$$\displaystyle{ y = Pz }$$
(3.259)

implies the two-point problem

$$\displaystyle{ \epsilon z^{{\prime}} = Dz,\ \ \ \alpha Pz(0) +\beta Pz(1) =\gamma }$$
(3.260)

for z. If we split

$$\displaystyle{ z \equiv \left (\begin{array}{*{10}c} z_{1} \\ z_{2} \\ z_{3}\end{array} \right ) }$$
(3.261)

for an r-dimensional vector z 1, an s-dimensional z 2, and a t-dimensional z 3, bounded solutions z must be of the form

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} z_{1}(x) = e^{Sx/\epsilon }z_{1}(0), \quad \\ z_{2}(x) = z_{2}(0), \quad \\ \mbox{ and} \quad \\ z_{3}(x) = e^{-U(1-x)/\epsilon }z_{3}(1)\quad \end{array} \right. }$$
(3.262)

for bounded endvalues z 1(0), z 2(0), and z 3(1) that act like shooting parameters in the transformed linear boundary condition

$$\displaystyle{\alpha P\left (\begin{array}{*{10}c} z_{1}(0) \\ z_{2}(0) \\ e^{-U/\epsilon }z_{3}(1) \end{array} \right )+\beta P\left (\begin{array}{*{10}c} e^{S/\epsilon }z_{1}(0) \\ z_{2}(0) \\ z_{3}(1) \end{array} \right ) =\gamma.}$$

Note that the matrix entries \(e^{-U/\epsilon }\) and \(e^{S/\epsilon }\) are asymptotically negligible. Asymptotically, then, we obtain a unique solution

$$\displaystyle{ y = P\left (\begin{array}{*{10}c} e^{Sx/\epsilon }z_{1}(0) \\ z_{2}(0) \\ e^{-\frac{U} {\epsilon } (1-x)}z_{3}(1) \end{array} \right ) }$$
(3.263)

of the given boundary value problem when we can uniquely solve the limiting n-dimensional linear system

$$\displaystyle{ \alpha P\left (\begin{array}{*{10}c} z_{1}(0) \\ z_{2}(0) \\ 0 \end{array} \right )+\beta P\left (\begin{array}{*{10}c} 0 \\ z_{2}(0) \\ z_{3}(1) \end{array} \right ) \sim \gamma }$$
(3.264)

for the shooting vectors z 1(0), z 2(0), and z 3(1). Note that the resulting solution (3.263) features an r-dimensional initial layer determined by z 1(0), an s-dimensional constant outer solution determined by z 2(0), and a t-dimensional terminal layer determined by z 3(1). If the Jacobian of (3.264) (with respect to these n unknown endvalues) is singular, however, there may be multiple solutions of the problem (3.255)–(3.256) or none at all. Generalizations to block-diagonalizable slow-fast linear systems without turning points are straightforward (cf. Harris [197], Flaherty and O’Malley [148], and O’Malley [368]). Surveys of classical results are contained in Wasow [512, 513], Hsieh and Sibuya [220], and Balser [25].

Wasow [511, 517] considered the higher-order variable coefficient linear scalar equation

$$\displaystyle{ \epsilon ^{\ell-k}L(y) + K(y) = 0,\ \ \ 0 \leq x \leq 1 }$$
(3.265)

where L(y) is an \(\ell\) th-order linear differential operator with leading term

$$\displaystyle{ y^{(\ell)} }$$
(3.266)

and where K(y) is a k-th order linear differential operator with leading term

$$\displaystyle{ \beta _{0}(x)y^{(k)} }$$
(3.267)

for \(\ell > k \geq 0\) and with β 0(x) ≠ 0 throughout the interval, thereby avoiding turning points. The prototype differential equation is

$$\displaystyle{\epsilon ^{\ell-k}y^{(\ell)} +\beta _{ 0}(x)y^{(k)} = 0.}$$

He also prescribed r linear scalar initial conditions

$$\displaystyle{ A_{i}y(0) =\gamma _{i} }$$
(3.268)

for (3.265) with

$$\displaystyle{ A_{i}y = y^{(\lambda _{i})} + \mbox{ lower-order terms},\ \ \ i = 1,\ldots,r }$$
(3.269)

for decreasing orders λ i as well as s linear terminal conditions

$$\displaystyle{ A_{j}y(1) =\gamma _{j} }$$
(3.270)

with

$$\displaystyle{ A_{j}y = y^{(\lambda _{j})} + \mbox{ lower-order terms},\ \ j = r + 1,\ldots,r + s =\ell }$$
(3.271)

for decreasing λ j s. (Treating boundary conditions coupling derivatives at the two endpoints would again be more complicated.)

Assuming appropriate smoothness of the coefficients, one can construct a complete set of smooth linearly independent asymptotic solutions of (3.265) of the form

$$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} G_{j}(x,\epsilon )e^{\frac{1} {\epsilon } \int ^{x}(-\beta _{ 0}(s))^{1/(\ell-k)}\,ds },\quad &j = 1,2,\ldots,\ell-k \\ \mbox{ and} \quad &\mbox{ } \\ G_{j}(x,\epsilon ), \quad &j =\ell -k + 1,\ldots,\ell\end{array} \right. }$$
(3.272)

for power series G j to be determined and distinct roots \((-\beta _{0}(x))^{ \frac{1} {\ell-k} }\), generalizing our WKB results for \(\ell= 2\) and k = 1. (Note the fixed regular spacing of these roots in the complex plane.) The first \(\ell-k\) of these solutions display boundary layer behavior near x = 0 whenever

$$\displaystyle{\mbox{ Re }(-\beta _{0}(x))^{ \frac{1} {(\ell-k)} } < 0.}$$

We will, more explicitly, use the representation

$$\displaystyle{ G_{j}(x,\epsilon )e^{\frac{1} {\epsilon } \int _{0}^{x}(-\beta _{ 0}(s))^{ \frac{1} {\ell-k} }\,ds } }$$
(3.273)

to specify initial layer behavior. Likewise, we will use solutions

$$\displaystyle{ G_{j}(x,\epsilon )e^{-\frac{1} {\epsilon } \int _{x}^{1}(-\beta _{ 0}(s))^{ \frac{1} {\ell-k} }\,ds } }$$
(3.274)

with terminal layers near x = 1 when

$$\displaystyle{\mbox{ Re }(-\beta _{0}(x))^{ \frac{1} {\ell-k} } > 0}$$

holds. We suppose that

$$\displaystyle{ \sigma }$$
(3.275)

of the \(\ell-k\) values \(\mbox{ Re }(-\beta _{0}(x))^{ \frac{1} {\ell-k} }\) are negative, that

$$\displaystyle{ \tau }$$
(3.276)

are positive, and that we are in the nonexceptional case when

$$\displaystyle{ \sigma +\tau =\ell -k. }$$
(3.277)

(Then, none of the roots are purely imaginary.) The last k asymptotic solutions (3.272) don’t feature boundary layer behavior and they can, indeed, be found as regular perturbations of any set of linearly independent solutions of the reduced equation K(y) = 0. (Solutions of the corresponding nonhomogeneous equation

$$\displaystyle{\epsilon ^{\ell-k}L(y) + K(y) = f(x)}$$

could be obtained from the asymptotic solutions (3.272) of the homogeneous equation by using variation of parameters.)

Note that even the harmless-looking two-point problem

$$\displaystyle{\epsilon ^{2}y'' + y = 0,\quad 0 \leq x \leq 1,\qquad y(0) = 0,\quad y(1) = 1}$$

hasn’t a limiting solution as \(\epsilon \rightarrow 0\). The solution

$$\displaystyle{y(x,\epsilon ) = \frac{\sin \frac{x} {\epsilon } } {\sin \frac{1} {\epsilon } } }$$

isn’t even defined for \(\epsilon = \frac{1} {n\pi },\ n = 1,\,2,\,\ldots \,\), and it is rapidly oscillating otherwise. Thus, Wasow’s quest wasn’t trivial.
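This nonconvergence is easy to see numerically: sampling y(1∕2, ε) along two different sequences \(\epsilon _{k} \rightarrow 0\) produces two different limits, so no limiting solution can exist. A minimal check:

```python
import numpy as np

# y(x, eps) = sin(x/eps)/sin(1/eps) solves eps^2*y'' + y = 0, y(0) = 0, y(1) = 1,
# yet y(1/2, eps) approaches different values along different sequences eps_k -> 0.
def y_half(eps):
    return np.sin(0.5/eps)/np.sin(1.0/eps)

k = np.arange(1, 50)
lim_a = y_half(1.0/(4*np.pi*k + np.pi/2))   # tends to sin(pi/4)/sin(pi/2) ~ 0.707
lim_b = y_half(1.0/(4*np.pi*k + 3.0))       # tends to sin(1.5)/sin(3)     ~ 7.068
print(lim_a[-1], lim_b[-1])
```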

Writing the solution of the boundary value problem (3.265)–(3.271) as a linear combination of the \(\ell\) asymptotic solutions, we get a unique solution only if the appropriate \(\ell\times \ell\) determinant obtained from applying the prescribed boundary conditions (3.268) and (3.270) is nonvanishing for small values of ε. Because of the special form of the differential equation and of the boundary conditions, many entries of the determinant involve a limiting Vandermonde form (cf. Horn and Johnson [215]), which allows us to factor the determinant asymptotically in a convenient fashion. Indeed, it will often allow us to define a cancellation law, implying which k limiting boundary conditions, together with the limiting equation K(y) = 0, will uniquely specify the limiting solution Y (x, 0) within 0 < x < 1. With no purely imaginary roots

$$\displaystyle{(-\beta _{0}(x))^{ \frac{1} {\ell-k} },}$$

such arguments show that the reduced problem will consist of the reduced equation K(y) = 0, the last \(r-\sigma \) limiting initial conditions (3.268) (presuming r ≥ σ) and of the last \(s-\tau \) limiting terminal conditions (3.270) (presuming s ≥ τ). (Recall that

$$\displaystyle{(r-\sigma ) + (s-\tau ) =\ell -(\ell-k) = k,}$$

the order of the reduced operator K.) The general result involves more complicated algebra, but the approach to take is clear in principle. See Wasow [513] for more details.

Wasow’s study can be motivated by the simpler question of finding the asymptotic behavior of the roots m(ε) of the polynomial equation

$$\displaystyle{ \epsilon ^{\ell-k}L(m) + K(m) = 0 }$$
(3.278)

where K is a polynomial of degree k and L, a polynomial of degree \(\ell > k\) (cf. Lin and Segel [291] and Murdock [335]). Indeed, one can also consider the polynomial

$$\displaystyle{ f(y,\epsilon ) = f_{\ell}(\epsilon )y^{\ell} + f_{\ell-1}(\epsilon )y^{\ell-1} +\ldots +f_{ 1}(\epsilon )y + f_{0}(\epsilon ) = 0. }$$
(3.279)

If asymptotic expansions

$$\displaystyle{ f_{s}(\epsilon ) \sim f_{s0}\epsilon ^{\rho _{s} }+\ldots,\ \ \ f_{s0}\neq 0\mbox{ and }\rho _{s} \geq 0 }$$
(3.280)

are given for the coefficients, it would be reasonable to use a dominant balance argument to seek asymptotic solutions satisfying

$$\displaystyle{ y \sim \epsilon ^{\nu _{p}}y_{p},\ \ \ y_{p}\neq 0 }$$
(3.281)

that provide a limiting balance in (3.279). This is the basis of the Newton polygon method. Knowing the limiting approximations (3.281) for the roots, we can readily improve upon them.
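A minimal numerical illustration of such a dominant balance, for the hypothetical quadratic \(\epsilon m^{2} + m + 1 = 0\) (so \(\ell= 2\), k = 1): the regular root balances m + 1 ≈ 0, while the singular root balances \(\epsilon m^{2} + m \approx 0\), giving the two-term approximations below.

```python
import numpy as np

# Dominant balance for eps*m^2 + m + 1 = 0:
# regular root  m ~ -1 - eps        (from m + 1 ~ 0),
# singular root m ~ -1/eps + 1      (from eps*m^2 + m ~ 0).
eps = 1e-3
roots = np.sort(np.roots([eps, 1.0, 1.0]).real)   # ascending: singular root first
m_singular = -1.0/eps + 1.0
m_regular = -1.0 - eps
print(roots, m_singular, m_regular)
```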

To solve problems (3.265)–(3.271) asymptotically, we don’t need the cumbersome machinery of matched asymptotic expansions. Indeed, the advances made by the distinguished American lineage, G. D. Birkhoff, R. E. Langer, H. L. Turrittin, and W. A. Harris, among others, between 1908 and 1960 suffice for such problems, until we encounter turning points or nonlinearities (cf. Turrittin [488] and Schissel [435]). They simply construct, algorithmically, a full linearly independent set of asymptotic solutions. We observe that they were most likely completely unaware of Prandtl’s boundary layer theory, though they knew the work of the Germans Fuchs and Frobenius in the nineteenth century and more recent developments by pure mathematicians worldwide. In quite a different direction, Devaney [117] considers complex maps of the form \(P(z) + \frac{\lambda } {(z-a)^{d}}\) for polynomials P, d > 0, and λ small.

We note that one way to obtain high-order singularly perturbed differential equations like (3.265) is to consider initial function problems for delay equations

$$\displaystyle{\dot{x}(t) = f(x(t),\ x(t-\tau ))}$$

for small values of the delay τ > 0. In particular, if one expands f to some finite order in powers of τ, the highest time derivative occurring will be multiplied by the corresponding power of τ. The resulting long-term solution behavior of the original delay equation and of the approximating differential equation (under suitable hypotheses) can be expected to agree (cf. Chicone [88] and Erneux [143]).
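A rough numerical sketch of this agreement for the hypothetical linear delay equation \(\dot{x}(t) = -x(t-\tau )\) with constant history x ≡ 1: expanding to first order in τ gives \(x(t-\tau ) \approx x(t) -\tau \dot{x}(t)\), hence the approximating ODE \(\dot{x} = -x/(1-\tau )\).

```python
import numpy as np

tau, dt, T = 0.05, 1e-4, 2.0       # hypothetical small delay and step size
n_hist = int(round(tau/dt))
steps = int(round(T/dt))

# delay equation xdot(t) = -x(t - tau) with history x = 1 on [-tau, 0],
# integrated by forward Euler using the stored delayed values
x = np.ones(n_hist + steps + 1)
for i in range(n_hist, n_hist + steps):
    x[i + 1] = x[i] - dt*x[i - n_hist]
x_dde = x[-1]

# first-order expansion x(t - tau) ~ x(t) - tau*xdot(t) gives xdot = -x/(1 - tau)
x_ode = np.exp(-T/(1.0 - tau))
print(x_dde, x_ode)
```

The two long-term values agree closely for this small τ, as the cited hypotheses suggest.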

Examples

  1. 1.

    Consider the singularly perturbed problem

    $$\displaystyle{ \epsilon ^{2}y^{{\prime\prime\prime}{\prime}}- y^{{\prime\prime}} = 0,\ \ \ 0 \leq x \leq 1 }$$
    (3.282)

    with prescribed boundary values

    $$\displaystyle{ y^{{\prime\prime\prime}}(0),\ \ \ y(0),\ y^{{\prime}}(1),\ \mbox{ and }y(1). }$$
    (3.283)

    Linearly independent solutions of the differential equation are given by

    $$\displaystyle{e^{-x/\epsilon },e^{x/\epsilon },\ \ 1,\mbox{ and }x}$$

    (as can be immediately verified), so we naturally seek a solution of the boundary value problem in the form

    $$\displaystyle{ y(x,\epsilon ) = a(\epsilon ) + b(\epsilon )x +\epsilon ^{3}c(\epsilon )e^{-x/\epsilon } +\epsilon d(\epsilon )e^{-(1-x)/\epsilon } }$$
    (3.284)

    for constants a, b, c, and d to be asymptotically determined as power series in ε (the scale factors \(\epsilon ^{3}\) and \(\epsilon \) were introduced to simplify later algebra). Formulas for the derivatives of y follow directly. The boundary conditions (omitting only asymptotically negligible coefficients like \(\frac{e^{-1/\epsilon }} {\epsilon ^{2}}\)) imply that

    $$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} y^{{\prime\prime\prime}}(0) \sim -c(\epsilon ) \quad \\ y(0) \sim a(\epsilon ) +\epsilon ^{3}c(\epsilon ) \quad \\ y^{{\prime}}(1) \sim b(\epsilon ) + d(\epsilon ) \quad \\ \mbox{ and} \quad \\ y(1) \sim a(\epsilon ) + b(\epsilon ) +\epsilon d(\epsilon ).\quad \end{array} \right. }$$
    (3.285)

    Solving these linear equations implies that the unique asymptotic solution of our two-point problem (3.282)–(3.283) has the form

    $$\displaystyle\begin{array}{rcl} y(x,\epsilon ) & & \sim [y(0) +\epsilon ^{3}y^{{\prime\prime\prime}}(0)] \\ & & + \frac{x} {1-\epsilon }[y(1) -\epsilon y^{{\prime}}(1) - y(0) -\epsilon ^{3}y^{{\prime\prime\prime}}(0)] -\epsilon ^{3}y^{{\prime\prime\prime}}(0)e^{-x/\epsilon } \\ & & +\frac{\epsilon e^{-(1-x)/\epsilon }} {1-\epsilon } [y(0) +\epsilon ^{3}y^{{\prime\prime\prime}}(0) + y^{{\prime}}(1) - y(1)]. {}\end{array}$$
    (3.286)

    The limiting solution

    $$\displaystyle{ Y _{0}(x) = y(0) + (y(1) - y(0))x }$$
    (3.287)

    exactly satisfies the reduced problem

    $$\displaystyle{ Y _{0}^{{\prime\prime}} = 0,\ \ Y _{ 0}(0) = y(0),\ \ Y _{0}(1) = y(1), }$$
    (3.288)

    so a cancellation law applies. Here y converges to Y 0 uniformly, while \(y^{{\prime}}\) converges to \(Y _{0}^{{\prime}}\) nonuniformly at x = 1 (though not at x = 0), and higher derivatives of y are generally algebraically unbounded at both endpoints as \(\epsilon \rightarrow 0\).

  2. 2.

    Boundary value problems need not have unique solutions. Consider, as an example, the planar slow-fast nonlinear system

    $$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x}\quad &= y \\ \epsilon \dot{y}\quad &= -\frac{1} {2}(1 + 3x^{2})y\end{array} \right. }$$
    (3.289)

    on \(0 \leq t \leq 1\) with the homogeneous separated boundary conditions

    $$\displaystyle{ x(0,\epsilon ) +\epsilon y(0,\epsilon ) = 0\ \ \ \mbox{ and }\ \ x(1,\epsilon ) = 0. }$$
    (3.290)

    The two-point problem certainly has the trivial solution. Because \(-\frac{1} {2}(1 + 3x^{2}) < 0\), we might anticipate having an initial layer. Thus, we naturally seek a nontrivial asymptotic solution of the form

    $$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} x(t,\epsilon ) = X(t,\epsilon ) + u(\tau,\epsilon ) \quad \\ y(t,\epsilon ) = Y (t,\epsilon ) + \frac{1} {\epsilon } v(\tau,\epsilon )\quad \end{array} \right. }$$
    (3.291)

    with an initial layer correction \(\left (\begin{array}{*{10}c} u\\ \frac{v} {\epsilon }\end{array} \right )\) tending to zero as the stretched variable

    $$\displaystyle{ \tau = \frac{t} {\epsilon } }$$
    (3.292)

    tends to infinity, thereby anticipating an initial impulse in the fast variable y and nonuniform convergence in the slow. Then, the outer solution \(\left (\begin{array}{*{10}c} X\\ Y\end{array} \right )\) must satisfy the given system

    $$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \dot{X} = Y \quad \\ \epsilon \dot{Y } = -\frac{1} {2}(1 + 3X^{2})Y \quad \end{array} \right. }$$
    (3.293)

    as a power series in ε, together with the terminal condition

    $$\displaystyle{ X(1,\epsilon ) = 0. }$$
    (3.294)

    A regular perturbation procedure readily implies that this outer expansion is trivial to all orders \(\epsilon ^{k}\). Thus, the initial layer correction must satisfy the initial value problem

    $$\displaystyle{ \left \{\begin{array}{@{}l@{\quad }l@{}} \frac{du} {d\tau } = v \quad \\ \frac{dv} {d\tau } = -\frac{1} {2}(1 + 3u^{2})v\quad \end{array} \right. }$$
    (3.295)

    with

    $$\displaystyle{ u(0,\epsilon ) + v(0,\epsilon ) = 0 }$$
    (3.296)

    and the asymptotic stability condition as \(\tau \rightarrow \infty \). Since \(\frac{dv} {d\tau } = -\frac{1} {2}(1 + 3u^{2})\frac{du} {d\tau }\), integrating from infinity implies that \(-2v = u + u^{3}\), leaving us the initial value problem

    $$\displaystyle{ \frac{du} {d\tau } = -(1 + u^{2})\frac{u} {2},\ \ \ u(0,\epsilon ) = u^{3}(0,\epsilon ). }$$
    (3.297)

    The three possible initial values are

    $$\displaystyle{ u(0,\epsilon ) = 0,\ \ 1,\mbox{ and } - 1. }$$
    (3.298)

    The two resulting nontrivial solutions are readily found as solutions of the Bernoulli equation for u to be

    $$\displaystyle{ x(t,\epsilon ) = \frac{\pm 1} {\sqrt{2e^{t/\epsilon } - 1}} }$$
    (3.299)

    and

    $$\displaystyle{ y(t,\epsilon ) = \mp \frac{1} {\epsilon } \frac{e^{t/\epsilon }} {(\sqrt{2e^{t/\epsilon } - 1})^{3}}. }$$
    (3.300)
  3. 3.

    Smith [466] considers the nonlinear two-point problem for

    $$\displaystyle{ \epsilon ^{2}\ddot{x} = (x^{2} - 1)(x^{2} - 4). }$$
    (3.301)

    With boundary values \(x(0) = x(1) = \frac{1} {2}\), he obtains a solution with the outer limit 2 and another with the outer limit − 1 (with endpoint layers). However, with the boundary values \(\dot{x}(0) = 0\) and \(x(1) = \frac{1} {2}\), he obtains a solution with outer limit 2 and another with outer limit − 1 (and terminal layers).
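The nontrivial solutions (3.299)–(3.300) of Example 2 can be checked directly: the following sympy sketch verifies the boundary condition (3.290) at t = 0 exactly and the system (3.289) numerically at a few sample points.

```python
import sympy as sp

t, eps = sp.symbols('t epsilon', positive=True)
E = sp.exp(t / eps)
x = 1 / sp.sqrt(2*E - 1)                    # (3.299), "+" branch
y = -(1/eps) * E / sp.sqrt(2*E - 1)**3      # (3.300)

# the boundary condition x(0) + eps*y(0) = 0 holds exactly
assert sp.simplify(x.subs(t, 0) + eps*y.subs(t, 0)) == 0

# residuals of xdot = y and eps*ydot = -(1/2)(1 + 3x^2)y, checked numerically
r1 = sp.lambdify((t, eps), sp.diff(x, t) - y, 'math')
r2 = sp.lambdify((t, eps), eps*sp.diff(y, t) + sp.Rational(1, 2)*(1 + 3*x**2)*y, 'math')
pts = [(0.1, 0.05), (0.5, 0.1), (0.9, 0.2)]
res1 = max(abs(r1(tv, ev)) for tv, ev in pts)
res2 = max(abs(r2(tv, ev)) for tv, ev in pts)
print(res1, res2)
```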

To determine the limiting behavior of solutions to singularly perturbed boundary value problems, it is helpful to know about the possibilities exhibited by a variety of solved examples. In this regard, readers are especially referred to the work of the late Fred Howes (cf., e.g., Howes [217], Chang and Howes [76]) who used many explicit examples to motivate more general results. The subtleties arising suggest that blind computation may often be useless. Other challenging problems can, for example, be found in Smith [466], Carrier [70], Bogaevski and Povzner [50], Hinch [206], Johnson [226], Verhulst [500], Cousteix and Mauss [104], Ablowitz [1], Holmes [209], and Paulsen [387], and in the following.

Exercises

  1. 1.

    Show that the asymptotic solution of

    $$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \epsilon ^{2}y^{{\prime\prime\prime}{\prime}}- y^{{\prime\prime}} = x^{2}, \quad \\ y(0) = 0,\ \ y^{{\prime}}(0) = 1,\ \ y^{{\prime}}(1) = 2,\ \ \mbox{ and }\ \ y^{{\prime\prime}}(1) = 3\quad \end{array} \right.}$$

    is given by

    $$\displaystyle\begin{array}{rcl} y(x,\epsilon ) = & -& \frac{1} {12}(x^{4} - 28x) + \frac{4\epsilon } {3}\left (-3x - 1 + e^{-x/\epsilon }\right ) {}\\ & +& \epsilon ^{2}\left (-x^{2} + 2x + 4 - 4e^{-x/\epsilon } + 4e^{-(1-x)/\epsilon }\right ) {}\\ & +& O(\epsilon ^{3}). {}\\ \end{array}$$
  2. 2.

    Approximate the eigenvalues λ(ε) and the corresponding eigenfunctions y(x, ε) for

    $$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \epsilon ^{2}y^{{\prime\prime\prime}{\prime}}- y^{{\prime\prime}} =\lambda y,\ \ 0 \leq x \leq 1 \quad \\ y(0) = y^{{\prime}}(0) = y(1) = y^{{\prime}}(1) = 0\quad \end{array} \right.}$$

    (cf. Moser [329], Handelman et al. [193], and Frank [156]).

  3. 3.

    Show that the following boundary value problems have no limiting solution as \(\epsilon \rightarrow 0^{+}\):

    1. a.

      \(\epsilon y^{{\prime\prime}}- y^{{\prime}} = 0\),   \(y^{{\prime}}(0) = 1\),   y(1) = 0

    2. b.

      \(\epsilon ^{2}y^{{\prime\prime\prime}} + y^{{\prime}} = 0\),   \(y^{{\prime}}(0) = 0\),   \(y(0) = y^{{\prime}}(1) = 1\).

  4. 4.

    Consider the initial value problem

    $$\displaystyle{ \epsilon \dot{y} = A(t)y,\ \ t \geq 0,\ \ \text{with }y(0)\ \text{given} }$$

    when

    $$\displaystyle{ A(t) = U^{-1}(t)\left (\begin{array}{*{10}c} -1& \eta_{\epsilon } \\ 0 &-1\\ \end{array} \right )U(t) }$$

    for

    $$\displaystyle{ U(t) = \left (\begin{array}{*{10}c} \cos t &\sin t\\ -\sin t &\cos t\\ \end{array} \right ). }$$

    Show that solutions for η > 2 can be unbounded, even though the eigenvalues of A(t) remain stable. Hint: Solve for v = Uy (cf. Kreiss [263]).

  5. 5.

    Solve

    $$\displaystyle{\epsilon y^{{\prime\prime}} + xy^{{\prime}} = x}$$

    on 0 ≤ x ≤ 1 with y(0) = y(1) = 1 and describe the limiting behavior as \(\epsilon \rightarrow 0\).

  6. 6.

    Show how one can find an asymptotic solution of the two-point problem

    $$\displaystyle{\epsilon y'' + (1 + x^{2})y' + 2xy = x,\qquad 0 \leq x \leq 1}$$

    in the form

    $$\displaystyle{y(x,\epsilon ) = A(x,\epsilon ) + e^{-\frac{1} {\epsilon } \int _{0}^{x}(1+s^{2})\,ds }\left (y(0) - A(0,\epsilon )\right ).}$$
  7. 7.

    Consider the nonlinear two-point problem

    $$\displaystyle{\epsilon y^{{\prime\prime}} = y^{{\prime}}- (y^{{\prime}})^{3},\ \ y(0) = 0,\ \ y(1) = \frac{1} {2}.}$$

    (Hint: The equation for \(z = y^{{\prime}}\) can be integrated explicitly.)

    Show that the (angular) limiting solution satisfies

    $$\displaystyle{y(x,\epsilon ) \rightarrow \left \{\begin{array}{@{}l@{\quad }l@{}} 0, \quad &0 \leq x \leq \frac{1} {2} \\ x -\frac{1} {2},\quad &\frac{1} {2} \leq x \leq 1.\end{array} \right.}$$

    Why couldn’t you obtain

    $$\displaystyle{y(x,\epsilon ) \rightarrow \left \{\begin{array}{@{}l@{\quad }l@{}} -x, \quad &0 \leq x \leq \frac{1} {4} \\ x -\frac{1} {2},\quad &\frac{1} {4} \leq x \leq 1?\end{array} \right.}$$

    Such problems are discussed in Chang and Howes [76] and elsewhere. The original reference is Haber and Levinson [189]. Also, see Vishik and Lyusternik [506].

  8. 8.

    Müller et al. [333] considered the two-point problem

    $$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \epsilon y^{{\prime\prime}} = y^{3}, \quad &0 < x < 1 \\ y(0) = 1,\quad &y(1) = 2.\end{array} \right.}$$
    1. a.

      Obtain the exact solution in terms of elliptic functions .

    2. b.

      Show that the limiting solution within 0 < x < 1 is trivial and that \(\sqrt{\epsilon }\)-thick endpoint layers occur.

  9. 9.

    Consider

    $$\displaystyle{\epsilon y^{{\prime\prime}} = y(y - x),\ \ y(-1) = 0,\ \ y(1) = 1.}$$

    Show that the inverse function x(y) satisfies

    $$\displaystyle{\epsilon \frac{d^{2}x} {dy^{2}} = y(x - y)\left (\frac{dx} {dy}\right )^{3},\ \ \ x(0) = -1,\ \ \ x(1) = 1}$$

    (cf. Howes [218]).

  10. 10.

    Pokrovskii and Sobolev [395] consider the piecewise linear system

    $$\displaystyle{\left \{\begin{array}{@{}l@{\quad }l@{}} \dot{x} = 1, \quad \\ \epsilon \dot{y} = x +\vert y\vert.\quad \end{array} \right.}$$
    1. a.

      Determine typical trajectories numerically.

    2. b.

      Show that

      $$\displaystyle{y = \left \{\begin{array}{@{}l@{\quad }l@{}} x-\epsilon,\ \ x <\epsilon \quad \\ 2\epsilon e^{(x-\epsilon )/\epsilon } - x-\epsilon,\ \ x \geq \epsilon \quad \end{array} \right.}$$

      and

      $$\displaystyle{y = \left \{\begin{array}{@{}l@{\quad }l@{}} -x-\epsilon,\ \ x < -\epsilon \quad \\ 2\epsilon e^{-(x+\epsilon )/\epsilon } + x-\epsilon,\ \ -\epsilon < x <\epsilon \nu \quad \\ \epsilon (1+\nu )e^{(x-\nu \epsilon )/\epsilon } - x-\epsilon,\ \ \epsilon \nu < x\quad \end{array} \right.}$$

      are invariant manifolds, where ν is a root of \(2e^{-1-\nu } +\nu -1 = 0\).
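For Exercise 10, a quick numerical sketch (with the hypothetical choice ε = 0.1): the root ν is near 0.59, and the first formula is indeed invariant, since its finite-difference residual in \(\epsilon \dot{y} = x + \vert y\vert \) (with x playing the role of time, because \(\dot{x} = 1\)) is negligibly small.

```python
import numpy as np
from scipy.optimize import brentq

eps = 0.1  # hypothetical value for illustration
# the root nu of 2*exp(-1-nu) + nu - 1 = 0 entering the second manifold
nu = brentq(lambda n: 2*np.exp(-1 - n) + n - 1, 0.0, 1.0)

def y1(x):
    # the first invariant manifold; x serves as time since xdot = 1
    return np.where(x < eps, x - eps, 2*eps*np.exp((x - eps)/eps) - x - eps)

# residual of eps*dy/dx = x + |y|, via centered finite differences
x = np.linspace(-1.0, 0.5, 2001)
h = 1e-6
resid_max = np.abs(eps*(y1(x + h) - y1(x - h))/(2*h) - (x + np.abs(y1(x)))).max()
print(nu, resid_max)
```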

Lomov [298] used the scalar initial value problem

$$\displaystyle{ \epsilon u^{{\prime}} + \frac{2u} {1 + x^{2}} = 2\left (\frac{3 + (\tan ^{-1}x)^{2}} {1 + x^{2}} \right ),\ \ u(0) = 1,\ \ x \geq 0 }$$
(3.302)

to introduce singular perturbations. The reduced problem has the solution

$$\displaystyle{U_{0}(x) = 3 + (\tan ^{-1}x)^{2}.}$$

Let us seek an outer expansion

$$\displaystyle{ U(x,\epsilon ) = U_{0}(x) +\epsilon U_{1}(x) +\epsilon ^{2}U_{ 2}(x)+\ldots }$$
(3.303)

Substituting this series into (3.302) requires

$$\displaystyle{\epsilon U_{0}^{{\prime}} +\epsilon ^{2}U_{ 1}^{{\prime}} +\ldots + \frac{2} {1 + x^{2}}(U_{0} +\epsilon U_{1} +\epsilon ^{2}U_{ 2}+\ldots ) = \frac{2} {1 + x^{2}}\left (3 + (\tan ^{-1}x)^{2}\right ).}$$

The ε coefficient implies that \(U_{0}^{{\prime}} = -\frac{2U_{1}} {1+x^{2}}\), so

$$\displaystyle{U_{1}(x) = -\tan ^{-1}x.}$$

Next the ε 2 coefficient implies that \(U_{1}^{{\prime}} = -\frac{2U_{2}} {1+x^{2}}\), or

$$\displaystyle{U_{2}(x) = \frac{1} {2}.}$$

Higher coefficients imply that U k  = 0 for k ≥ 3, so we have found an exact outer solution

$$\displaystyle{ U(x,\epsilon ) = 3 + (\tan ^{-1}x)^{2} -\epsilon \tan ^{-1}x + \frac{\epsilon ^{2}} {2} }$$
(3.304)

of the differential equation. The homogeneous differential equation has the complementary solution

$$\displaystyle{e^{-\frac{2} {\epsilon } \int ^{x} \frac{ds} {1+s^{2}} } = e^{-\frac{2} {\epsilon } \tan ^{-1}x }k,}$$

so the exact solution of our initial value problem (3.302) is

$$\displaystyle{ u(x,\epsilon ) = U(x,\epsilon ) + e^{-\frac{2} {\epsilon } \tan ^{-1}x }(1 - U(0,\epsilon )). }$$
(3.305)

Note that the second term is an initial layer correction increasing from \(-2 -\frac{\epsilon ^{2}} {2}\) when x = 0 to 0 as \(\frac{\tan ^{-1}x} {\epsilon } \rightarrow \infty \). It is essential since the outer solution doesn’t satisfy the prescribed initial condition. By contrast, the matched expansion solution would have the additive form

$$\displaystyle{ u(x,\epsilon ) = U(x,\epsilon ) + v(\xi,\epsilon ) }$$
(3.306)

where \(v \rightarrow 0\) as \(\xi = x/\epsilon \rightarrow \infty \). Since the initial layer correction v must satisfy

$$\displaystyle{ \frac{dv} {d\xi } + \frac{2} {1 +\epsilon ^{2}\xi ^{2}}v = 0,\ \ v(0,\epsilon ) = u(0) - U(0,\epsilon ), }$$
(3.307)

its leading term v 0 must satisfy \(\frac{dv_{0}} {d\xi } + 2v_{0} = 0\) and v 0(0) = −2. Thus,

$$\displaystyle{ v_{0}(\xi ) = -2e^{-2\xi } }$$
(3.308)

approximates the exact initial layer correction \(-e^{-\frac{2} {\epsilon } \tan ^{-1}x }\left (2 + \frac{\epsilon ^{2}} {2}\right )\).
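The exact solution (3.305) is easy to verify with sympy; note that the sign of the outer ε-coefficient, \(U_{1} = -\tan ^{-1}x\), follows from the recursion \(U_{0}^{{\prime}} = -\frac{2U_{1}} {1+x^{2}}\).

```python
import sympy as sp

x, eps = sp.symbols('x epsilon', positive=True)
A = sp.atan(x)
U = 3 + A**2 - eps*A + eps**2/2                # exact outer solution
u = U + sp.exp(-2*A/eps) * (1 - U.subs(x, 0))  # (3.305)

# the initial condition u(0) = 1 holds exactly
assert sp.simplify(u.subs(x, 0) - 1) == 0

# residual of eps*u' + 2u/(1+x^2) = 2(3 + atan(x)^2)/(1+x^2), checked numerically
resid = eps*sp.diff(u, x) + 2*u/(1 + x**2) - 2*(3 + A**2)/(1 + x**2)
r = sp.lambdify((x, eps), resid, 'math')
res = max(abs(r(xv, ev)) for xv in (0.0, 0.3, 1.0, 5.0) for ev in (0.05, 0.2))
print(res)
```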

The more general scalar problem

$$\displaystyle{ \epsilon u^{{\prime}} = a(x)u + b(x),\ \ \ x \geq 0 }$$
(3.309)

has the exact solution

$$\displaystyle{ u(x,\epsilon ) = e^{\frac{1} {\epsilon } \int _{0}^{x}a(s)\,ds }\,u(0) + \frac{1} {\epsilon } \int _{0}^{x}e^{\frac{1} {\epsilon } \int _{t}^{x}a(s)\,ds }\,b(t)\,dt. }$$
(3.310)

(as noted earlier). Assuming that

$$\displaystyle{ a(x) < 0, }$$
(3.311)

the homogeneous solution

$$\displaystyle{e^{\frac{1} {\epsilon } \int _{0}^{x}a(s)\,ds }}$$

features nonuniform convergence from 1 to 0 in an O(ε)-thick initial layer. Again, it is natural to seek an asymptotic solution of (3.309) in the form

$$\displaystyle{ u(x,\epsilon ) = U(x,\epsilon ) + e^{\frac{1} {\epsilon } \int _{0}^{x}a(s)\,ds }(u(0) - U(0,\epsilon )) }$$
(3.312)

for an outer expansion

$$\displaystyle{ U(x,\epsilon ) = U_{0}(x) +\epsilon U_{1}(x) +\epsilon ^{2}U_{ 2}(x) +\ldots. }$$
(3.313)

Then U must satisfy (3.309) as a power series in ε. Equating coefficients successively, we will need

$$\displaystyle{a(x)U_{0} + b(x) = 0,\ \ a(x)U_{1} = U_{0}^{{\prime}},\ \ a(x)U_{ 2} = U_{1}^{{\prime}},}$$

etc., so we uniquely obtain the expansion

$$\displaystyle{ U(x,\epsilon ) = -\frac{b(x)} {a(x)} - \frac{\epsilon } {a(x)}\left ( \frac{b(x)} {a(x)}\right )^{{\prime}}- \frac{\epsilon ^{2}} {a(x)}\left ( \frac{1} {a(x)}\left ( \frac{b(x)} {a(x)}\right )^{{\prime}}\right )^{{\prime}}+\ldots }$$
(3.314)

presuming sufficient smoothness of the coefficients a and b. As we’d expect, this also follows from (3.310) using repeated integration by parts. Rewriting

$$\displaystyle{\begin{array}{*{10}c} u(x,\epsilon )&=& e^{\frac{1} {\epsilon } \int _{0}^{x}a(s)\,ds }\left (u(0) -\int _{0}^{x} \frac{d} {dt}\left (e^{-\frac{1} {\epsilon } \int _{0}^{t}a(s)\,ds }\right )\frac{b(t)} {a(t)}dt\right ) \\ &=& e^{\frac{1} {\epsilon } \int _{0}^{x}a(s)\,ds }\left (u(0) - e^{-\frac{1} {\epsilon } \int _{0}^{x}a(s)\,ds }\frac{b(x)} {a(x)} + \frac{b(0)} {a(0)}\right. \\ && \qquad + \left.\int _{0}^{x}e^{-\frac{1} {\epsilon } \int _{0}^{t}a(s)\,ds }\left (\frac{b(t)} {a(t)}\right )^{{\prime}}dt\right ) \\ &=& -\frac{b(x)} {a(x)} + e^{\frac{1} {\epsilon } \int _{0}^{x}a(s)\,ds }\left (u(0) + \frac{b(0)} {a(0)}\right ) +\int _{ 0}^{x}e^{\frac{1} {\epsilon } \int _{t}^{x}a(s)\,ds } \frac{d} {dt}\left (\frac{b(t)} {a(t)}\right )dt \\ &=&-\frac{b(x)} {a(x)} - \frac{\epsilon } {a(x)} \frac{d} {dx}\left (\frac{b(x)} {a(x)}\right ) \\ &&\qquad + e^{\frac{1} {\epsilon } \int _{0}^{x}a(s)\,ds }\left (u(0) + \frac{b(0)} {a(0)} + \frac{\epsilon } {a(0)} \frac{d} {dx}\left (\frac{b(x)} {a(x)}\right )_{x=0}\right ) + O(\epsilon ^{2}), \end{array} }$$

we readily obtain the anticipated asymptotic approximation to any desired number of terms.
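The recursion behind (3.314) can be confirmed symbolically for generic smooth coefficients: with the three-term outer expansion, the residual in \(\epsilon u^{{\prime}} = a(x)u + b(x)\) is exactly the \(O(\epsilon ^{3})\) remainder \(\epsilon ^{3}U_{2}^{{\prime}}\). A sympy sketch:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')
a = sp.Function('a')(x)
b = sp.Function('b')(x)

# the three-term outer expansion (3.314)
U0 = -b/a
U1 = -sp.diff(b/a, x)/a
U2 = -sp.diff(sp.diff(b/a, x)/a, x)/a
U = U0 + eps*U1 + eps**2*U2

# eps*U' - (a*U + b) reduces exactly to the remainder eps^3 * U2'
residual = sp.simplify(sp.expand(eps*sp.diff(U, x) - a*U - b - eps**3*sp.diff(U2, x)))
print(residual)   # 0
```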

Next, applying the preceding result componentwise shows that the vector equation

$$\displaystyle{ \epsilon v^{{\prime}} = \varLambda (x)v + k(x) }$$
(3.315)

has an asymptotic solution of the form

$$\displaystyle{ v(x,\epsilon ) = V (x,\epsilon ) + e^{\frac{1} {\epsilon } \int _{0}^{x}\varLambda (s)\,ds }(v(0) - V (0,\epsilon )) }$$
(3.316)

when the state matrix \(\varLambda \) is an n × n diagonal matrix with stable eigenvalues λ i , and V is an outer expansion satisfying a system

$$\displaystyle{ \epsilon V ^{{\prime}} = \varLambda (x)V + k(x) }$$
(3.317)

as a regular power series in ε. Because \(e^{\frac{1} {\epsilon } \int _{0}^{x}\varLambda (s)\,ds }\) is diagonal with nontrivial decaying entries \(e^{\frac{1} {\epsilon } \int _{0}^{x}\lambda _{ i}(s)\,ds}\), the asymptotic solution of (3.315) is an additive function of the slow variable x and the fast variables \(\frac{1} {\epsilon } \int _{0}^{x}\lambda _{ i}(s)\,ds\), i = 1, \(\ldots \), n.
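A numerical sketch of a scalar instance of (3.315), with the hypothetical choices λ(x) = −(1 + x), k(x) = 1, v(0) = 0, and ε = 0.01: away from the initial layer, the stiff solution tracks the leading outer term \(V _{0}(x) = -k/\lambda = 1/(1 + x)\) to O(ε).

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01   # hypothetical data: lam(x) = -(1+x), k(x) = 1, v(0) = 0
sol = solve_ivp(lambda x, v: (-(1.0 + x)*v + 1.0)/eps, (0.0, 1.0), [0.0],
                method='Radau', rtol=1e-8, atol=1e-10)   # stiff solver
v_end = sol.y[0, -1]
print(v_end)   # near the outer value V0(1) = 1/2, up to O(eps)
```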

More generally, consider the vector system

$$\displaystyle{ \epsilon u^{{\prime}} = A(x)u + b(x) }$$
(3.318)

when the state matrix A can be factored as

$$\displaystyle{ A(x) = M(x)\varLambda (x)M^{-1}(x) }$$
(3.319)

for a smooth invertible n × n matrix M and a diagonal matrix \(\varLambda \) with distinct smooth stable eigenvalues λ i (x), i = 1, \(\ldots \), n. The kinematic change of variables

$$\displaystyle{ u = M(x)v }$$
(3.320)

converts the equation (3.318) to the nearly diagonal form

$$\displaystyle{ \epsilon v^{{\prime}} = (\varLambda -\epsilon M^{-1}M^{{\prime}})v + M^{-1}b }$$
(3.321)

which has an asymptotic solution

$$\displaystyle{ v(x,\epsilon ) = V (x,\epsilon ) + e^{\frac{1} {\epsilon } \int _{0}^{x}\varLambda (s)\,ds }\,w(x,\epsilon ) }$$
(3.322)

where V (x, ε) and w(x, ε) have power series expansions (cf. Lomov [298] and Wasow [513]). Their expansions can be obtained via undetermined coefficient methods.

If we can diagonalize the matrix D(t), assuming it is stable, we can similarly treat the initial value problem for the slow-fast linear vector system

$$\displaystyle\begin{array}{rcl} \dot{x}& =& A(t)x + B(t)y \\ \epsilon \dot{y}& =& C(t)x + D(t)y{}\end{array}$$
(3.323)

and interpret the result in terms of using the slow time t and the fast times \(\frac{1} {\epsilon } \int _{0}^{t}\lambda _{ i}(s)\,ds\) where the λ i (t)s are stable (nonrepeated) eigenvalues of D(t). Block diagonalization of a conditionally stable matrix D might similarly allow us to treat certain two-point problems. Readers might look ahead to Example 16 in Chap. 6.

Historical Remarks

The work of Kaplun and Lagerstrom at Caltech in the 1950s was especially important to the development of matched expansions and their applications to fluid mechanics. Comparable work was done simultaneously by Proudman and Pearson at Cambridge University in England (cf. Proudman and Pearson [403]). Paco Lagerstrom (1914–1989), a Swedish-born Princeton math Ph.D., directed the 1954 thesis of Saul Kaplun (1924–1964). (Contemporaries suggest they may have had an intimate personal, as well as professional, relationship.) The Polish-born Kaplun published only three papers, although Lagerstrom and others published a (not thoroughly edited) collection of his unfinished work as a monograph, Kaplun [237]. This, together with a book of reminiscences, My Son Saul [236], by his father and others, and various memorials (at Caltech and Tel Aviv), made Kaplun a hero of applied asymptotics in the 1960s. Lagerstrom persistently pursued their insights about matching for the next quarter century, using a limit process approach based upon the presumed overlapping domains of validity of the inner and outer expansions. Lagerstrom’s book Matched Asymptotic Expansions [276] appeared in 1988, as he wrote,

$$\displaystyle{\mbox{ after a long sequence of earlier drafts.}}$$

Edward Fraenkel (in [155]) and Wiktor Eckhaus (in [133]) both insisted that existence of an overlap was not necessary for matching to succeed. Even after Lagerstrom’s passing, Eckhaus [136] renewed the controversy, suggesting that the Kaplun extension theorem (intended to justify matching) could be based on Robinson’s lemma in nonstandard analysis (cf. Diener and Diener [120]). One fascinating example from Eckhaus [132], used in Lagerstrom [276], is the linear two-point problem

$$\displaystyle{ (\epsilon +x)u^{{\prime\prime}} + u^{{\prime}} = 1,\ \ u(0) = 0,\ u(1) = 2. }$$
(3.324)

Since x is an exact solution of the differential equation, one readily finds the exact solution

$$\displaystyle{ u(x,\epsilon ) = x + \frac{\ln \epsilon -\ln (x+\epsilon )} {\ln \epsilon -\ln (1+\epsilon )} }$$
(3.325)

of the two-point problem. Note the initial layer.
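The exact solution (3.325) and its boundary values can be confirmed with a few lines of sympy:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon', positive=True)
u = x + (sp.log(eps) - sp.log(x + eps)) / (sp.log(eps) - sp.log(1 + eps))

checks = [
    sp.simplify((eps + x)*sp.diff(u, x, 2) + sp.diff(u, x) - 1),  # ODE residual
    sp.simplify(u.subs(x, 0)),                                    # u(0) = 0
    sp.simplify(u.subs(x, 1) - 2),                                # u(1) = 2
]
print(checks)   # all three vanish identically
```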

Wiktor Eckhaus (1930–2000) was a significant contributor to both the nonlinear stability and singular perturbation literatures who had a number of productive students at Delft and Utrecht, including Ferdinand Verhulst, Johan Grasman, and Arjen Doelman, and the insightful early collaborator Eduardus de Jager. Verhulst , in turn, is known for his well-written texts on dynamical systems, averaging, singular perturbations and, most recently, Poincaré. He founded a publishing house, Epsilon, which produced mathematical monographs and textbooks in Dutch. Fortunately for most of us, later editions of many of its publications appeared in English from other publishers.