
In this chapter we give an introduction to applications of group analysis to equations with nonlocal operators, in particular, to integro-differential equations. The first section of this chapter contains a retrospective survey of different methods for constructing symmetries and finding invariant solutions of such equations. The presentation of the methods is carried out using simple model equations. In the next section, the classical scheme for constructing the determining equations of an admitted Lie group is generalized to equations with nonlocal operators. In the concluding sections of this chapter, the developed regular method of obtaining admitted Lie groups is illustrated by applications to some well-known integro-differential equations.

2.1 Integro-Differential Equations in Mathematics and in Applications

Equations with nonlocal operators include integro-differential equations (IDE), delay differential equations, stochastic differential equations and some other, less-known types of equations. They have long been intensively studied in mathematics and in numerous scientific and engineering applications.

The best-known integro-differential equations are kinetic equations (KE), which form the basis of the kinetic theories of rarefied gases, plasma, radiation transfer, and coagulation. The Boltzmann kinetic equation [10] in rarefied gas dynamics, the Vlasov and Landau equations in plasma physics [2], and the Smolukhovsky equation in coagulation theory [71] are widely used and have become classical. Numerous generalizations of these equations are also used in other applications. Brief outlines of delay and stochastic differential equations are presented in Chaps. 5 and 6.

The kinetic equations describe the time evolution of a distribution function (DF) of some interacting particles such as gas molecules, ions, electrons, aerosols, etc. The DF has the meaning of a nonnormalized probability density function defined on the space of dynamical variables of the particles. A large number of independent variables and the presence of complicated integral operators are typical features of KEs. KEs for dynamical systems with strong pair particle interaction include special operators which are called collision integrals. In general, they are integral operators with quadratic nonlinearity and multiple kernels, as in the Boltzmann and Smolukhovsky equations. For systems where collective (averaged) particle interactions are of principal importance, the nonlocal operators have the form of functionals of the DF, as, for example, in the Vlasov equation for collisionless plasma or in the Bhatnagar–Gross–Krook equation in rarefied gas dynamics [12]. These peculiarities create great difficulties for the investigation of integro-differential equations by both analytical and numerical methods. Starting with the classical paper [48], these difficulties were partially overcome by reducing the integro-differential equations to infinite systems of first order differential equations for power moments of the DF. Such systems are derived by integrating the original integro-differential equation with power weights with respect to some dynamical variables. Using certain asymptotic procedures [25] one can transform infinite systems for moments into finite hydrodynamic-type systems of partial differential equations such as the Navier–Stokes system for the Boltzmann equation or the system of ideal magnetic hydrodynamics for the Vlasov–Maxwell system. The mathematical theory of these systems has developed independently of the studies of the corresponding integro-differential equations.

2.2 Survey of Various Approaches for Finding Invariant Solutions

In pure mathematical theories and especially in applied disciplines special attention is given to the study of invariant solutions of integro-differential equations, which are directly associated with fundamental symmetry properties of these equations. In Chap. 1 an application of the classical Lie group theory for finding invariant solutions of differential equations was presented. Group analysis in this case is a universal tool for calculating complete sets of symmetries. However, a direct transfer of the known scheme of the group analysis method to integro-differential equations is impossible. As was shown already in the first work in this direction [28] (see also [29]), the main obstacle is the presence of nonlocal integral operators. Several approaches to this problem were worked out during the long history of studying invariant (self-similar) solutions of IDEs. The main approaches can be classified as follows:

  1. Use of a presentation of a solution or an admitted Lie group of transformations on the basis of a priori simplified assumptions;

  2. Investigation of infinite systems of differential equations for power moments;

  3. Transformation of an original integro-differential equation into a differential equation;

  4. Direct derivation of a Lie group of transformations through corresponding determining equations and construction of a representation of invariant solutions of IDE.

The methods of the first and fourth groups can be characterized as direct methods because they deal directly with an original IDE. At the same time the methods of the second and third groups are indirect: they are based on the replacement of the considered integro-differential equation by an infinite system of differential equations or by a single differential equation. This allows one to analyze the derived equations using the standard methods of the classical Lie group theory outlined in Chap. 1.

In the present section a brief survey of all these approaches is given. Each method is illustrated with a simple (model) integro-differential equation with a minimal number of variables. This allows us to explain the essence of the method without overly cumbersome calculations. The most noticeable results obtained within the corresponding frameworks are annotated with references.

2.2.1 Methods Using a Presentation of a Solution or an Admitted Lie Group

Methods of this type have a heuristic character, and the possibilities of their universalization are restricted. The epigraph of the chapter applies to them in particular. They have no direct relation to group theoretical analysis. However, these methods intuitively use some symmetry properties of equations, which allows one to choose a form of a solution or an admitted transformation. It is worth noting that most of the invariant solutions of IDEs known today were obtained by these methods.

Local-Equilibrium or Stationary Solutions

Historically, the first approach to finding invariant solutions of integro-differential (kinetic) equations was based on splitting the original equation into two simpler equations [10, 48]. One of these equations allows one to define the structure of the sought solution; consistency with the other equation provides its explicit form. Using this method, (local) equilibrium and stationary solutions of some kinetic equations were obtained. Here an application of this approach to basic types of integro-differential kinetic equations is considered.

The Kac equation [38] is the simplest model of the full Boltzmann kinetic equation. This equation is

$${\frac{\partial f}{\partial t}}+v{\frac{\partial f}{\partial x}}+F{\frac{\partial f}{\partial v}}=J(f,f),$$
(2.2.1)

where

$$J(f,f)=\int\limits_{-\infty}^\infty\,dw\int \limits_{-\pi }^\pi\,d\theta g(\theta)[f(v^{\prime})f(w^{\prime})-f(v)f(w)].$$
(2.2.2)

Here f(t,v,x) is the distribution function (DF), \(t\in{\mathbb{R}_{+}^{1}}\), v,x∈ℝ1, J(f,f) is the collision operator (integral), F is an external force, g(θ)=g(−θ) is a kernel associated with details of particle interaction subject to the normalization condition

$$\int\limits_{-\pi}^\pi g(\theta)\,d\theta=1.$$

For the sake of brevity only the velocity arguments of the DF are retained in the integrand of (2.2.2). In this case the function g(θ) corresponds to the Maxwell molecular model [25]. The collision transformation (v,w)→(v′,w′) is given by the group of rotations in \(R^{2}=R^{1}\times R^{1}\) (see (1.1.2)) with the matrix representation A

$$(v^{\prime},w^{\prime})=(v,w)A,\qquad A=\begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix}.$$

Separating (2.2.1) into two parts, the form of the local equilibrium solutions (so-called Maxwellians) is obtained from the equation J(f,f)=0. This equation is satisfied for any function g(θ) if and only if

$$f(v^{\prime})f(w^{\prime})-f(v)f(w)=0,$$
(2.2.3)

or, equivalently,

$$\ln f(v^{\prime})+\ln f(w^{\prime})=\ln f(v)+\ln f(w).$$

This means that ln f(v) is a summation invariant of the group of rotations in \(R^{2}\). Using the infinitesimal generator (1.1.7) \(X=w\partial_{v}-v\partial_{w}\) of the group, one obtains from XI=0 that in this case the unique summation invariant is \(v^{\prime 2}+w^{\prime 2}=v^{2}+w^{2}\). This gives us that the local Maxwellian solutions of (2.2.1) have the form

$$f_M(t,v,x)=a(t,x)\exp{[-b(t,x)v^2]}.$$
(2.2.4)

It is worth emphasizing the crucial step, which consists here in solving the functional equation (2.2.3). In turn, the solution is defined by the summation invariants of the group of transformations corresponding to the collision interaction. For example, in the case of a monatomic gas we deal with the group of rotations in \(R^{6}=R^{3}\times R^{3}\), which has four such invariants [25].
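These invariance properties can be checked by a short symbolic computation. The following minimal sympy sketch verifies that the collision transformation preserves \(v^{2}+w^{2}\), that the generator \(X=w\partial_{v}-v\partial_{w}\) annihilates this invariant, and that the exponent of a Maxwellian \(a\exp(-bv^{2})\) is unchanged by a collision, so that (2.2.3) holds.

```python
import sympy as sp

v, w, theta, a, b = sp.symbols('v w theta a b', real=True)

# collision transformation (v', w') = (v, w) A with the rotation matrix A
vp = v*sp.cos(theta) + w*sp.sin(theta)
wp = -v*sp.sin(theta) + w*sp.cos(theta)

# v'^2 + w'^2 = v^2 + w^2: the summation invariant of the rotation group
print(sp.simplify(vp**2 + wp**2 - v**2 - w**2))            # 0

# the generator X = w d/dv - v d/dw annihilates I = v^2 + w^2
I = v**2 + w**2
print(w*sp.diff(I, v) - v*sp.diff(I, w))                   # 0

# for a Maxwellian f(v) = a*exp(-b*v^2) the exponents of f(v')f(w') and f(v)f(w)
# coincide, hence f(v')f(w') - f(v)f(w) = 0, i.e. (2.2.3) is satisfied
print(sp.simplify(-b*(vp**2 + wp**2) + b*(v**2 + w**2)))   # 0
```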

The function (2.2.4) has also to satisfy the equation

$$\frac{\partial f_{M}}{\partial t}+v\frac{\partial f_{M}}{\partial x}+F\frac{\partial f_{M}}{\partial v}=0.$$

For example, if the force \(F=-\varphi^{\prime}\) is conservative with the potential φ(x), then \(b=\mathrm{const}\), \(a=C\exp(-2b\varphi)\), and the well-known Maxwell–Boltzmann distribution \(f_{M}(v,x)=C\exp[-b(v^{2}+2\varphi)]\) in a potential field is obtained.
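As a quick consistency check, the following sympy sketch (with an arbitrary potential φ(x) and illustrative constants C, b) confirms that this distribution is annihilated by the stationary transport operator with \(F=-\varphi^{\prime}\).

```python
import sympy as sp

v, x, C, b = sp.symbols('v x C b')
phi = sp.Function('phi')(x)            # arbitrary potential phi(x)
F = -sp.diff(phi, x)                   # conservative force F = -phi'

# Maxwell-Boltzmann distribution in a potential field
f_M = C*sp.exp(-b*(v**2 + 2*phi))

# stationary transport operator: v df/dx + F df/dv must vanish
print(sp.simplify(v*sp.diff(f_M, x) + F*sp.diff(f_M, v)))   # 0
```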

The local Maxwellian solutions of the full Boltzmann equation were completely studied by this method by outstanding scientists: J.C. Maxwell [48], L. Boltzmann [10], T. Carleman [14], H. Grad [27]. The local-equilibrium solutions for kinetic equations with similar collision integrals, such as the linear Boltzmann equation in neutron transfer theory [25], the Landau kinetic equation in plasma physics [2], the Wang Chang–Uhlenbeck equation in the kinetic theory of polyatomic gases [25] and others, were constructed using a similar approach.

There exists a wide class of integro-differential equations which include integral operators in the form of functionals depending on their solutions. In particular, kinetic equations with a self-consistent field (so-called Vlasov-type equations) belong to this class. These equations are used in plasma physics, gravitational astrophysics, the theory of nonlinear waves and elsewhere. In this case such equations have the form of a first order partial differential equation with associated equations for the functionals. According to the theory of differential equations, their general solutions are arbitrary differentiable functions of first integrals. This property allows one to find invariant solutions of some simple problems.

To illustrate this approach let us consider the one-dimensional problem of equilibrium of a plane gravitating homogeneous layer [59]. The problem is described by the Vlasov–Poisson system:

$$ v\frac{\partial f}{\partial x}+F\frac{\partial f}{\partial v}=0,$$
(2.2.5)
$$\frac{d^{2}\varphi}{dx^{2}}=C.$$
(2.2.6)

Here f(v,x) is the distribution function of gravitating particles, \(v\in R^{1}\) is the particle velocity, x∈[−1,1] is the space coordinate, \(F=-\varphi^{\prime}\) is the gravity force, and φ(x) is the gravitational potential. The density of particles ρ(x) is the zeroth-order moment of the DF:

$$\rho(x)=\int f(v,x)\,dv.$$
(2.2.7)

Since the density is constant along a layer, it can be written as ρ(x)=ρ 0 H(1−x 2), where H is the unit Heaviside step-function. The right hand side of (2.2.6) is constant C=ρ 0. Then F(x)=−x and the general solution of (2.2.5) is f=f 0(E), where the first integral E=v 2/2+x 2/2 is the energy invariant of the particle motion. It is also necessary to satisfy the self-consistency condition (2.2.7). In fact, one has to solve the integral equation of the first kind

$$\int f_0(E)\,dv=\rho_0H(1-x^2).$$

The last equation can be transformed into the Abel equation by the substitution y=1−2E:

$$\int\limits_0^z\frac{f_0(y)dy}{\sqrt{z-y}}={\rho}_0H(z),\qquad z=1-x^2.$$

The Abel equation is invertible for an arbitrary right hand side [72]:

$$f_0(y)=\frac{1}{\pi}\frac{d}{dy}\int\limits_0^y\frac{\rho _0H(z)dz}{\sqrt{y-z}}.$$

Finally, one obtains

$$f_0(E)=\frac{\rho_0H(1-2E)}{\pi\sqrt{1-2E}}.$$
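The self-consistency condition (2.2.7) for this solution can also be checked numerically. The sketch below (with the illustrative normalization ρ₀=1) integrates f₀(E) over the velocities and recovers, up to discretization error, the constant density inside the layer.

```python
import numpy as np

rho0 = 1.0   # illustrative normalization of the layer density

def f0(E):
    # invariant solution f0(E) = rho0*H(1 - 2E)/(pi*sqrt(1 - 2E))
    return np.where(1 - 2*E > 0, rho0/(np.pi*np.sqrt(np.abs(1 - 2*E))), 0.0)

def density(x, n=200001):
    # rho(x) = int f0(E) dv with E = (v^2 + x^2)/2; the support is v^2 < 1 - x^2
    vmax = np.sqrt(max(1 - x**2, 0.0))
    v = np.linspace(-vmax, vmax, n)[1:-1]   # drop the integrable endpoint singularity
    E = 0.5*(v**2 + x**2)
    return np.trapz(f0(E), v)

for x in (0.0, 0.3, 0.6, 0.9):
    print(x, density(x))   # each value is close to rho0
```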

Invariant solutions were similarly obtained for gravitating problems with cylindrical and spherical symmetries (see references in [59]). It is obvious that this method can also be used in other applications of the Vlasov-type equations with two independent variables. In particular, the one-dimensional dynamics of collisionless plasma with a neutralizing background and a potential field is described by the following system

$$ \frac{\partial f}{\partial t}+v\frac{\partial f}{\partial x}+F\frac{\partial f}{\partial v}=0,$$
(2.2.8)
(2.2.9)

From [1] it follows that there exists a transformation which maps (2.2.8), (2.2.9) to the above stationary Vlasov–Poisson system. Then one can derive non-stationary solutions of the Vlasov–Maxwell system (2.2.8), (2.2.9) starting from the stationary solutions.

A Priori Choice of Invariant Transformations

  1. Nikolskii’s transformations.

This approach was first systematically applied to the Boltzmann integro-differential equation by A.A. Nikolskii in the series of papers [51–53]. Transformations obtained by this approach provide nonstationary space-dependent solutions from space-homogeneous ones.

Let us illustrate the Nikolskii approach using the Kac equation (2.2.1). In the space-homogeneous case and in the absence of the external force F it becomes

$$\frac{\partial f(t,v)}{\partial t}=J(f,f).$$
(2.2.10)

Assume that f h (t,v) is a solution of (2.2.10). The Nikolskii transformation is

$$f_s(t,x,v)=f_h(\bar{t},\bar{v}),$$
(2.2.11)

where

$$\bar{t}=\tau(t),\quad\bar{v}=(1+t/t_0)\bigg(v-\frac{x}{t+t_0}\bigg).$$
(2.2.12)

Here τ(t) is a temporarily unknown function. One can consider the quantity \(c=v-\frac{x}{t+t_{0}}\) as the heat (eigen) microscopic velocity of a particle, and the quantity

$$U=\frac{x}{t+t_0}$$

as the macroscopic velocity of a continuum (model gas) at the space position x. Flows with this velocity distribution in the framework of one-dimensional ideal gas dynamics were studied by L.I. Sedov [61]. For t,t 0>0 it is an expansion flow of a gas; if t 0<0 it is a compression flow. Therefore the solution (2.2.11), (2.2.12) describes “expansion–compression” motions of a model gas. This means that the distribution function f s of eigen velocities is the same at each space point at any given instant.

Substitution of (2.2.11) into the left hand side of (2.2.1) with F(x)=0 gives

$$\frac{\partial f_s}{\partial t}(t,x,v)+v\frac {\partial f_s}{\partial x}(t,x,v)=\frac{d\tau}{dt}(t)\frac{\partial f_h}{\partial t}(\bar{t},\bar{v}),$$
(2.2.13)

where \((\bar{t},\bar{v})\) are defined by (2.2.12). Taking into account that f h (t,v) is a solution of (2.2.10), one can write

$$\frac{d\tau}{dt}(t)\frac{\partial f_{h}}{\partial t}(\bar{t},\bar{v})=\frac{d\tau}{dt}(t)\int\limits_{-\infty}^{\infty}d\bar{w}\int\limits_{-\pi}^{\pi}d\theta\,g(\theta)\bigl[f_{h}(\bar{v}^{\prime})f_{h}(\bar{w}^{\prime})-f_{h}(\bar{v})f_{h}(\bar{w})\bigr],$$
(2.2.14)

where \(\bar{v}^{\prime}=\bar{v}\cos\theta +\bar{w}\sin\theta\), \(\bar{w}^{\prime }=-\bar{v}\sin\theta+\bar{w}\cos\theta\).

By virtue of linearity of the collision transformation for dilations of the velocity space we have

$$(\lambda v',\lambda w')=(\lambda v,\lambda w)A.$$

Hence, the collision integral under such dilations is transformed as follows

$$J(f,f)(\lambda v)=\lambda J(f,f)(v).$$
(2.2.15)

Let us additionally assume that the studied class of distribution functions f h (t,v) leaves the collision integral invariant with respect to the translations of the velocity space

$$\bar{f}_h(t,v)=f_h(t,v-a).$$

This property corresponds [16] to the physical meaning of the distribution function as the particle number density in the velocity space. In this functional class the collision integral J(f,f) has the property

$$J(\bar{f}_h,\bar{f}_h)(v)=J(f_h,f_h)(v-a).$$
(2.2.16)

Sequentially exploiting the properties of the collision integral (2.2.15) and then (2.2.16), the equation (2.2.13) becomes

$$\frac{\partial f_s}{\partial t}+v\frac{\partial f_s}{\partial x}=(1+t/t_0)\frac{d\tau}{dt}J(f_s,f_s).$$
(2.2.17)

Hence, the function f s (t,x,v) determined by (2.2.11), (2.2.12) is a solution of the equation

$$\frac{\partial f}{\partial t}+v\frac{\partial f}{\partial x}=J(f,f)$$

if and only if the unknown function τ(t) satisfies the differential equation

$$\frac{d\tau}{dt}(1+t/t_0)=1.$$

Choosing τ(0)=0, one obtains that τ(t)=t 0ln (1+t/t 0) for any positive t. If the factor in front of J(f s ,f s ) in (2.2.17) is chosen as an arbitrary constant, then (2.2.12) is an equivalence transformation [56].
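Both the equation for τ(t) and the structure of the chain-rule identity (2.2.13) can be confirmed symbolically. In the minimal sympy sketch below, the second check shows that the free transport operator annihilates \(\bar{v}\), which is exactly why only the \(\partial f_{h}/\partial\bar{t}\) term survives in (2.2.13).

```python
import sympy as sp

t, x, v, t0 = sp.symbols('t x v t0', positive=True)

# Nikolskii transformation (2.2.12)
tau  = t0*sp.log(1 + t/t0)               # the choice with tau(0) = 0
vbar = (1 + t/t0)*(v - x/(t + t0))

# tau satisfies  (d tau/dt)*(1 + t/t0) = 1
print(sp.simplify(sp.diff(tau, t)*(1 + t/t0) - 1))          # 0

# the free transport operator annihilates vbar
print(sp.simplify(sp.diff(vbar, t) + v*sp.diff(vbar, x)))   # 0
```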

It is known [38] that for t→∞ a solution of the space homogeneous equation (2.2.10) with arbitrary initial data converges to the absolute Maxwellian distribution f M .

One can note that in an expansion flow for t,t 0>0 the equilibrium distribution is reached

$$\lim_{t\to\infty}f_s(t,x,v)=f_M(v).$$

Whereas in a compression flow (where t 0<0) one has for t→−0 that

$$f_s(0,x,v)=f_h\biggl(\tau(0),v-\frac{x}{t_0}\biggr)\neq f_M(v),$$

and the equilibrium distribution is not achieved (see [52]).

In many IDEs the differential operator has a similar form. If the collision integral possesses similar invariance properties, then Nikolskii’s transformation can also be applied. One can mention here the linear Boltzmann equation [25], the Landau equation [2] and some others. Unfortunately, as a rule, solutions of the space homogeneous equations, except for stationary equilibrium solutions, are unknown.

  2. The Bobylev approach.

All methods for constructing invariant solutions of IDEs presented in this subsection have an ad hoc character. This means that they are not universal and, hence, have a limited field of application. As a rule, such methods are based on intuitive windfalls rather than on a systematic approach. The most outstanding results in this direction were derived by Bobylev [57] for the Boltzmann kinetic equation for Maxwell molecules.

Here the windfall was the Fourier transform of the Boltzmann equation (BE) with respect to the velocity variables. The transformation drastically simplified the investigation of the mathematical properties of the BE. This allowed one not only to obtain a new nontrivial symmetry of the BE but also to complete the relaxation theory of a Maxwellian gas.

Let us demonstrate the Bobylev approach on the space homogeneous Kac model as was done in [32]. The Cauchy problem for the distribution function f(t,v) has the form

$$\frac{\partial f}{\partial t}=\int\limits_{-\infty}^{\infty}dw\int\limits_{-\pi}^{\pi}d\theta\,g(\theta)\bigl[f(v^{\prime})f(w^{\prime})-f(v)f(w)\bigr],$$
(2.2.18)
$$f(0,v)=f_{0}(v).$$
(2.2.19)

The equilibrium solution of (2.2.18) when t→∞ is the absolute Maxwellian distribution

$$f_M(v)={\frac{1}{\sqrt{2\pi}}}\exp(-v^2/2).$$
(2.2.20)

The problem (2.2.18), (2.2.19) possesses the mass and energy conservation laws of the forms

$$\begin{array}{c}\int\limits_{-\infty}^\infty f(t,v)\,dv=\int\limits_{-\infty }^\infty f_0(v)\,dv=1,\\[16pt]\int\limits_{-\infty}^\infty v^2f(t,v)\,dv=\int\limits_{-\infty }^\infty v^2f_0(v)\,dv=1.\end{array}$$
(2.2.21)

For an arbitrary integrable function ψ(v) and the collision integral (2.2.2) the following integral identity holds

$$I(\psi)=\int\limits_{-\infty}^{\infty}dv\,\psi(v)J(f,f)=\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\int\limits_{-\pi}^{\pi}g(\theta)\bigl(\psi(v^{\prime})-\psi(v)\bigr)f(v)f(w)\,dv\,dw\,d\theta.$$
(2.2.22)

The direct and inverse Fourier transforms are defined as follows

$$\varphi(k)=\int\limits_{-\infty}^{\infty}f(v)e^{-ikv}\,dv,$$
(2.2.23)
$$f(v)=(2\pi)^{-1}\int\limits_{-\infty}^{\infty}\varphi(k)e^{ikv}\,dk.$$
(2.2.24)

Applying the direct transform (2.2.23) to (2.2.18) and taking into account identity (2.2.22), one can derive the Fourier representation of the Cauchy problem (2.2.18), (2.2.19):

$$\frac{\partial\varphi(t,k)}{\partial t}=\hat{J}(\varphi ,\varphi),\quad \varphi(0,k)=\varPhi(k),$$
(2.2.25)

where

$$\hat{J}(\varphi,\varphi)=\int\limits_{-\pi}^\pi \,d\theta g(\theta)[\varphi(k\cos\theta)\varphi(k\sin\theta )-\varphi(k)\varphi(0)],$$

and

$$\varPhi(k)=\int\limits_{-\infty}^\infty f_0(v)e^{-ikv}\,dv.$$
(2.2.26)

Note that an essential simplification of the collision term occurred: the collision term now contains only a single integral over the collision parameter θ.

The Fourier transform of the equilibrium solution (2.2.20) is

$$\varphi_M(k)=\exp\biggl(-\frac{k^2}{2}\biggr).$$
(2.2.27)

The conservation laws (2.2.21) in terms of Fourier transforms become

$$\varphi(t,0)=\varPhi(0)=1,\quad{\frac{\partial ^2\varphi(t,k)}{\partial k^2}}\bigg|_{k=0}={\frac{\partial^2\varPhi(k)}{\partial k^2}}\bigg|_{k=0}=-1.$$
(2.2.28)
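In the Fourier representation the equilibrium and the conservation laws are particularly transparent. The sketch below checks that φ_M(k)=exp(−k²/2) makes the integrand of \(\hat{J}\) vanish pointwise in θ, so that \(\hat{J}(\varphi_{M},\varphi_{M})=0\) for any kernel g(θ), and that φ_M satisfies (2.2.28).

```python
import sympy as sp

k, theta = sp.symbols('k theta', real=True)
phi_M = sp.exp(-k**2/2)

# integrand of J^hat on phi_M vanishes identically in theta,
# hence J^hat(phi_M, phi_M) = 0 for any kernel g(theta)
integrand = (phi_M.subs(k, k*sp.cos(theta))*phi_M.subs(k, k*sp.sin(theta))
             - phi_M*phi_M.subs(k, 0))
print(sp.simplify(integrand))                               # 0

# conservation laws (2.2.28): phi(0) = 1 and phi''(0) = -1
print(phi_M.subs(k, 0), sp.diff(phi_M, k, 2).subs(k, 0))    # 1 -1
```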

One can easily verify that (2.2.25) admits some simple groups of transformations. In fact, there is the group of time translations \(\bar{t}=t+a\). The corresponding infinitesimal generator of this group is \(X_{1}=\partial_{t}\).

It is necessary to point out that each transformation in the k-space has a corresponding representation in the original v-space. In such a way there is a dilation group in the k-space

$$\bar{k}=e^ak,\qquad X_2=k\partial_k.$$
(2.2.29)

This transformation leads to the change of variables in the v-space:

$$\bar{f}(t,v)=e^{-a}f(t,e^{-a}v).$$

This property corresponds to the transformation defined by the infinitesimal generator:

$$Y_2=v\partial_v+f\partial_f.$$

The Bobylev symmetry of (2.2.25) is defined by the formula

$$\bar{\varphi}(t,k)=\exp\biggl(-\frac{ak^2}{2}\biggr)\varphi(t,k).$$
(2.2.30)

This symmetry corresponds to the infinitesimal generator \(X_{3}=-\frac{k^{2}}{2}\varphi \partial_{\varphi}\).

The invariance of (2.2.25) with respect to the change (2.2.30) is easily verified. Because the existence of an inverse Fourier transform requires that a≥0, the transformation (2.2.30) determines a semigroup. Using (2.2.24) and the convolution theorem, one can obtain the corresponding semigroup in the v-space:

$$\bar{f}(t,v)=\frac{1}{\sqrt{2\pi a}}\int\limits_{-\infty}^\infty f(t,w)dw\exp \biggl[-\frac{(v-w)^2}{2a}\biggr].$$
(2.2.31)

Here the corresponding infinitesimal generator is the one-dimensional Laplace operator

$$Y_5=\frac{1}{2}\partial_{vv}.$$

The invariant solution of the problem (2.2.25) which is consistent from the physical point of view has to satisfy the initial condition (2.2.26), the conservation laws (2.2.28) and has to converge to φ M (k) (2.2.27) for t→∞. Taking into account these demands, an invariant solution similar to the well-known BKW-mode [5] is constructed in the following way.

To reduce the number of independent variables and simultaneously use the new symmetry (2.2.30), one can seek a solution in the form

$$\varphi(k,t)=\exp\biggl(-\frac{ak^2}{2}\biggr)\varPsi(x),\qquad x=\tau(t)k,$$
(2.2.32)

where τ(t) is determined later. Substituting the presentation (2.2.32) into (2.2.25) and taking into account its invariance under the transformation (2.2.30), one obtains

$$\frac{d\tau}{dt}\frac{1}{\tau}\,x\frac{d\varPsi}{dx}=\hat{J}(\varPsi,\varPsi).$$

To separate variables here it is necessary to set

$$\frac{d\tau}{dt}\frac{1}{\tau}=c.$$

The last equation determines the function τ(t)=θ 0exp (ct), where c and θ 0 are arbitrary constants. To satisfy the initial conditions one has to require

$$\varphi(k,0)=\exp\biggl(-\frac{ak^2}{2}\biggr)\varPsi(\theta_0k)=\varPhi(k).$$

Hence, the representation of the invariant solution (2.2.32) becomes

$$\varphi(k,t)=\exp\biggl[\frac{1}{2}a(x^2-k^2)\biggr]\varPhi(x).$$
(2.2.33)

Since Φ(0)=1, for asymptotic convergence of (2.2.33) to the equilibrium solution (2.2.27), it is sufficient to accept that a=1 and c<0. Simultaneously this solution satisfies the mass conservation law. The energy conservation law will be automatically satisfied after constructing the solution in the explicit form.

One can check that the invariant solution (2.2.33) is determined by the infinitesimal generator \(X=-c^{-1}X_{1}+X_{2}+2X_{3}\). In fact, solving the first-order partial differential equation

$$X(I)\equiv-c^{-1}\frac{\partial I}{\partial t}+k\frac{\partial I}{\partial k}-k^2\varphi\frac{\partial I}{\partial\varphi}=0,$$

one derives two independent integrals \(I_{1}=k\theta_{0}\exp(ct)=k\tau(t)\) and \(I_{2}=\varphi\exp(k^{2}/2)\), which are independent invariants (see Chap. 1). Since for constructing the invariant solution one requires that

$$I_2=h(I_1),$$

one has the representation of the invariant solution φ=exp (−k 2/2)h(x). Finally to satisfy the imposed demands it is sufficient to set h(x)=exp (x 2/2)Φ(x).

Substitution of the presentation (2.2.33) into (2.2.25) gives the factor-equation

$$c\,x\,\bigg(\frac{d\varPhi}{dx}+x\varPhi\bigg)=\hat{J}(\varPhi,\varPhi).$$
(2.2.34)

To find the BKW-mode one uses the Taylor expansion

$$\varPhi(x)=1+\sum_{n=1}^\infty\frac{c_n}{n!}x^n,$$
(2.2.35)

where the choice c 0=1 explicitly enforces the mass conservation law.

After substituting (2.2.35) into (2.2.34) one obtains a specific nonlinear spectral problem for the coefficients c n . The even coefficients c 2k (n=2k) are determined separately from a closed subsystem. In particular, c 2=−1 and the energy conservation law is satisfied. A resonance property of the even eigenvalues allows one to truncate the series (2.2.35) and find a solution in the form

$$\varPhi(x)=1-\frac{x^2}{2},\quad x=k\tau(t)\equiv k\theta_0\exp(ct),\quad c=-\frac{1}{8}\int\limits_{-\pi}^\pi\,d\theta g(\theta)\sin^22\theta.$$

Applying the inverse Fourier transform to (2.2.33), one can derive the explicit expression of the BKW-mode in the v-space:

$$f(t,v)=\frac{1}{\sqrt{2\pi(1-\lambda(t))}}\biggl[1+\frac{\lambda (t)}{2(1-\lambda(t))}\biggl(\frac{v^2}{1-\lambda(t)}-1\biggr)\biggr]\exp\biggl[-\frac{v^2}{2(1-\lambda(t))}\biggr],$$

where λ(t)=τ 2(t) and \( {0<\theta_{0}^{2}<2/3}\).
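For a concrete kernel the truncated solution can be verified by direct substitution. The sketch below takes the illustrative uniform kernel g(θ)=1/(2π), which satisfies the normalization condition, checks that Φ(x)=1−x²/2 solves the factor-equation (2.2.34), and evaluates the corresponding constant c.

```python
import sympy as sp

x, theta = sp.symbols('x theta', real=True)
g = 1/(2*sp.pi)                     # illustrative kernel g(theta), normalized on [-pi, pi]

Phi = 1 - x**2/2                    # truncated series (2.2.35) with c_2 = -1
c = -sp.Rational(1, 8)*sp.integrate(g*sp.sin(2*theta)**2, (theta, -sp.pi, sp.pi))

# factor-equation (2.2.34): c*x*(Phi' + x*Phi) = J^hat(Phi, Phi)
lhs = c*x*(sp.diff(Phi, x) + x*Phi)
rhs = sp.integrate(g*(Phi.subs(x, x*sp.cos(theta))*Phi.subs(x, x*sp.sin(theta))
                      - Phi*Phi.subs(x, 0)), (theta, -sp.pi, sp.pi))
print(sp.simplify(lhs - rhs))       # 0
print(c)                            # -1/16 for this kernel
```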

  3. Scaling conjecture.

In the work of the authors [28] a generalization of the known symmetry properties of the Boltzmann equation and its models was proposed. In application to the Kac model in the absence of an external force F,

$$\frac{\partial f}{\partial t}+v\frac{\partial f}{\partial x}=J(f,f),$$
(2.2.36)

the admitted Lie group G of transformations T a was sought in the form

$$\begin{array}{c}\bar{f}=\psi(\bar{t},\bar{x},a)f,\quad t=q(\bar{t},\bar{x},a),\quad x=h(\bar{t},\bar{x},a),\\[6pt]v=r(\bar{t},\bar{x},a)\bar{v}.\end{array}$$
(2.2.37)

Here {f, t, x, v} and \(\{\bar{f},\,\bar{t},\,\bar{x},\,\bar{v}\}\) are the original and transformed variables, respectively, and ψ, q, h, r are unknown functions which define the sought group G with the group parameter a. These functions necessarily have to satisfy the main group superposition property in the form

$$T_b\,T_a=T_{a+b},$$
(2.2.38)

and the identity property for the group parameter a=0:

$$\begin{array}{c}\psi(\bar{t},\bar{x},0)=1,\quad q(\bar{t},\bar{x},0)=\bar{t},\quad h(\bar{t},\bar{x},0)=\bar{x},\\[6pt]r(\bar{t},\bar{x},0)\bar{v}=\bar{v}.\end{array}$$
(2.2.39)

The Lie group of transformations G is said to be admitted by (2.2.36) or (2.2.36) admits the group G if transformations (2.2.37) convert every solution of (2.2.36) into a solution of the same equation. This means that if a function f(t,x,v) is a solution of (2.2.36), then the function

$$\bar{f}(\bar{t},\bar{x},\bar{v},a)=\psi (\bar{x},\bar{t},a)\;f(q(\bar{x},\bar{t},a),h(\bar{x},\bar{t},a),r(\bar{t},\bar{x},a)\bar{v})$$
(2.2.40)

satisfies the equation

$$\frac{\partial\bar{f}}{\partial\bar{t}}+\bar{v}\frac{\partial\bar{f}}{\partial\bar{x}}=J(\bar{f},\bar{f}).$$
(2.2.41)

By virtue of the properties of the collision integral (2.2.16) and (2.2.37), one can show that

$$J(\bar{f},\bar{f})=g(\bar{t},\bar{x},a)J(f,f)$$
(2.2.42)

with some function \(g(\bar{t},\bar{x},a)\).

Calculating the derivatives of the function \(\bar{f}(\bar{t},\bar{x},\bar{v},a)\) (2.2.40) and the collision integral \(J(\bar{f},\bar{f})\), one gets

$$\frac{\partial\bar{f}}{\partial\bar{t}}=\frac{\partial\psi}{\partial\bar{t}}f+\psi\biggl(\frac{\partial f}{\partial t}\frac{\partial q}{\partial\bar{t}}+\frac{\partial f}{\partial x}\frac{\partial h}{\partial\bar{t}}+\frac{\partial f}{\partial v}\frac{\partial r}{\partial\bar{t}}\bar{v}\biggr),$$
$$\frac{\partial\bar{f}}{\partial\bar{x}}=\frac{\partial\psi}{\partial\bar{x}}f+\psi\biggl(\frac{\partial f}{\partial t}\frac{\partial q}{\partial\bar{x}}+\frac{\partial f}{\partial x}\frac{\partial h}{\partial\bar{x}}+\frac{\partial f}{\partial v}\frac{\partial r}{\partial\bar{x}}\bar{v}\biggr),$$
$$J(\bar{f},\bar{f})(\bar{t},\bar{x},\bar{v},a)=\frac{\psi^{2}(\bar{t},\bar{x},a)}{r(\bar{t},\bar{x},a)}J(f,f)(t,x,v),$$

where (t,x,v) are defined by (2.2.37). Since the function f(t,x,v) is a solution of the Kac equation (2.2.36), the collision integral J(f,f) can be replaced by the left hand side of this equation. This gives

$$J(\bar{f},\bar{f})=\frac{\psi^2}{r}\bigg({\frac{\partial f}{\partial t}}+r\bar{v}{\frac{\partial f}{\partial x}}\bigg).$$

Taking into account that f(t,x,v) is an arbitrary solution of (2.2.36), one can split the derived equation with respect to f and its derivatives:

$$\frac{\partial\psi}{\partial\bar{t}}+\bar{v}\frac{\partial\psi}{\partial\bar{x}}=0,\quad \frac{\partial q}{\partial\bar{t}}+\bar{v}\frac{\partial q}{\partial\bar{x}}=\frac{\psi}{r},\quad \frac{\partial h}{\partial\bar{t}}+\bar{v}\frac{\partial h}{\partial\bar{x}}=\psi\bar{v},\quad \biggl(\frac{\partial r}{\partial\bar{t}}+\bar{v}\frac{\partial r}{\partial\bar{x}}\biggr)\bar{v}=0.$$

Additional splitting of these equations with respect to the variable \(\bar{v}\) gives the equations

$$\frac{\partial\psi}{\partial\bar{t}}=0,\qquad \frac{\partial\psi}{\partial\bar{x}}=0,$$
(2.2.43)
$$\frac{\partial q}{\partial\bar{t}}=\frac{\psi}{r},\qquad \frac{\partial q}{\partial\bar{x}}=0,$$
(2.2.44)
$$\frac{\partial h}{\partial\bar{t}}=0,\qquad \frac{\partial h}{\partial\bar{x}}=\psi,$$
(2.2.45)
$$\frac{\partial r}{\partial\bar{t}}=0,\qquad \frac{\partial r}{\partial\bar{x}}=0.$$
(2.2.46)

From (2.2.43) one has that ψ=ψ(a). The general solution of (2.2.45) is

$$h(\bar{t},\bar{x},a)=\bar{x}\psi(a)+c_1(a)$$

with an arbitrary function c 1(a). Equations (2.2.46) define that

$$r=r(a).$$

The general solution of (2.2.44) is

$$q(\bar{t},\bar{x},a)=\bar{t}\frac{\psi(a)}{r(a)}+c_2(a),$$

where c 2(a) is an arbitrary function.

Thus, using the properties of the collision integral (2.2.15), one derives that the form of admitted transformations (2.2.37) is

$$\begin{array}{c}\bar{f}=\psi(a)f,\quad t=\bar{t}\frac{\psi (a)}{r(a)}+c_2(a),\\[6pt]x=\bar{x}\psi(a)+c_1(a),\quad v=r(a)\bar{v}.\end{array}$$
(2.2.47)

The identity conditions (2.2.39) of transformations (2.2.47) at a=0 impose the additional relations

$$\psi(0)=1,\quad c_1(0)=0,\quad r(0)=1,\quad c_2(0)=0.$$
(2.2.48)

The requirement to satisfy the main Lie group property (2.2.38) for the variables f and v leads to the conditions

$$\psi(a)\psi(b)=\psi (a+b),\quad r(a)r(b)=r(a+b).$$
(2.2.49)

Using (2.2.48), the general solutions of these equations are

$$\psi(a)=\exp(\hat{c}_1a),\quad r(a)=\exp(\hat{c}_2a),$$

where \(\hat{c}_{1}\) and \(\hat{c}_{2}\) are arbitrary constants. Hence, transformations (2.2.47) become

$$\begin{array}{c}\bar{f}=\exp(\hat{c}_1a)f(x,v,t),\quad\bar{x}=\left(x-c_1(a)\right)\exp(-\hat{c}_1a),\\[6pt]\bar{v}=v\exp(-\hat{c}_2a),\quad\bar{t}=(t-c_2(a))\exp[-(\hat{c}_1-\hat{c}_2)a].\end{array}$$
(2.2.50)

Since there is one-to-one correspondence between an infinitesimal generator and a Lie group, the undefined functions c 1(a) and c 2(a) in (2.2.50) can be found from the system of Lie equations.

Recall that the coefficients of the admitted generator of the Lie group G

$$X=\xi^t\frac{\partial}{\partial t}+\xi^x\frac{\partial}{\partial x}+\xi^v\frac{\partial}{\partial v}+\zeta^f\frac{\partial}{\partial f}$$

are defined by the formulae

$$\xi^{t}=\frac{d\bar{t}}{da}\bigg|_{a=0},\quad \xi^{x}=\frac{d\bar{x}}{da}\bigg|_{a=0},\quad \xi^{v}=\frac{d\bar{v}}{da}\bigg|_{a=0},\quad \zeta^{f}=\frac{d\bar{f}}{da}\bigg|_{a=0}.$$

By virtue of (2.2.48), one obtains that

$$\begin{array}{rcl}\xi^t&=&-c_2^{\prime}(0)-t(\hat{c}_1-\hat{c}_2),\\[6pt]\xi^x&=&-c_1^{\prime}(0)-\hat{c}_1x,\\[6pt]\xi^v&=&-v\hat{c}_2,\\[6pt]\zeta^f&=&\hat{c}_1f.\end{array}$$

Thus, one has the basis of admitted generators

$$\begin{array}{cc}\hat{c}_1:&X_4=f\partial_f-t\partial_t-x\partial_x,\\[6pt]\hat{c}_2:&X_3=v\partial_v-t\partial_t,\\[6pt]c_1^{\prime}(0):&X_1=\partial_x,\\[6pt]c_2^{\prime}(0):&X_2=\partial_t.\end{array}$$
(2.2.51)
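The coefficients obtained above can be reproduced mechanically by differentiating the finite transformations (2.2.50) with respect to the group parameter at a=0. A minimal sympy sketch (here c1p and c2p stand for the values \(c_{1}^{\prime}(0)\) and \(c_{2}^{\prime}(0)\); since only these values enter the generator, the arbitrary functions are modeled by their linearizations):

```python
import sympy as sp

a, t, x, v, f = sp.symbols('a t x v f')
c1h, c2h, c1p, c2p = sp.symbols('chat1 chat2 c1p c2p')

# only c_i(0) = 0 and c_i'(0) matter for the generator, so model c_i(a) = c_ip*a
c1, c2 = c1p*a, c2p*a

# finite transformations (2.2.50)
fbar = sp.exp(c1h*a)*f
xbar = (x - c1)*sp.exp(-c1h*a)
vbar = v*sp.exp(-c2h*a)
tbar = (t - c2)*sp.exp(-(c1h - c2h)*a)

for name, expr in [('xi^t', tbar), ('xi^x', xbar), ('xi^v', vbar), ('zeta^f', fbar)]:
    print(name, '=', sp.expand(sp.diff(expr, a).subs(a, 0)))
# xi^t   = -c2p - t*chat1 + t*chat2
# xi^x   = -c1p - x*chat1
# xi^v   = -v*chat2
# zeta^f =  f*chat1
```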

Now, after finding the invariants J from \(X_{i}J=0\) (i=1,…,4) in the usual way, one can obtain representations of invariant solutions.

It is seen that the integral transformation (2.2.31) is absent from the transformations (2.2.50). However, as will be shown in Chap. 3, such a simple scaling conjecture allows one [28] to define the 11-parameter Lie algebra admitted by the full Boltzmann equation and all its known extensions for some special cases of molecular potentials (see also [31]).

  4. Teshukov’s wave-type solutions.

It is worth mentioning here one more approach, which was developed by V.M. Teshukov. In [67] an extension of the theory of characteristics to systems of integro-differential equations was proposed. Using the generalized characteristics and Riemann invariants, simple waves of a system of integro-differential equations were determined.

The system of integro-differential equations describing the evolution of rotational free-boundary flows of an ideal incompressible fluid in the shallow-water approximation is the following

$$\begin{array}{c}u_t+uu_x+vu_y+gh_x=0,\quad u_x+v_y=0,\\[9pt]h_t+\Biggl(\,\int\limits_0^h\,u\,dy\Biggr)_x=0.\end{array}$$
(2.2.52)

Here (u,v) is the fluid-velocity vector, h is the layer depth, g is the gravitational acceleration, x and y are the Cartesian plane coordinates, and t is time. The impenetration condition v(x,0,t)=0 is satisfied at the layer bottom. Equations (2.2.52) are considered in the Eulerian–Lagrangian coordinates x′, λ, t′, where

$$x=x^{\prime},\quad t=t^{\prime},\quad y=\varPhi(x^{\prime},\lambda,t^{\prime}),$$

and Φ=Φ(x′,λ,t′) is the solution of the Cauchy problem

$$\varPhi_t+u(x,\varPhi,t)\varPhi_x=v(x,\varPhi,t),\quad \varPhi(x,\lambda,0)=\varPhi _0(x,\lambda).$$

In the new coordinates (2.2.52) become

$$u_t(x,\lambda,t)+u(x,\lambda,t)u_x(x,\lambda ,t)+g\int\limits_0^1H_x(x,v,t)\,dv=0,$$
$$H_t(x,\lambda,t)+(u(x,\lambda,t)H(x,\lambda,t))_x=0,$$

where the prime is omitted and H(x,λ,t)=Φ λ (x,λ,t)>0.

Solutions of the simple wave type are sought in the form

$$u=U(\alpha(x,t),\lambda),\quad H=P(\alpha(x,t),\lambda),$$

where α(x,t) is a function of two variables. The functions U(α,λ), P(α,λ) have to satisfy the equations

$$\begin{array}{c}(U(\alpha,\lambda)-k)U_\alpha(\alpha,\lambda)+g\int\limits_0^1P_\alpha (\alpha,\mu )\,d\mu =0,\\[16pt](U(\alpha,\lambda)-k)P_\alpha(\alpha,\lambda)+P(\alpha,\lambda )U_\alpha(\alpha,\lambda)=0,\end{array}$$

where k=−α t /α x . The existence of simple waves, their properties and extensions to other systems of integro-differential equations were studied in [17, 68–70].

2.2.2 Methods of Moments

The method of moments for finding symmetries of integro-differential equations is based on the idea of using an infinite system of partial differential equations which is equivalent to the original system of integro-differential equations. The general idea of considering such a system goes back to the pioneering paper [48], where the Boltzmann equation was studied by using the power moments defined on a solution of the Boltzmann equation.

The moment method for obtaining symmetries consists of the following steps. A finite subsystem of N moment equations is chosen. Applying the classical group analysis method developed for partial differential equations to the chosen subsystem, one finds the admitted Lie group (algebra) of this subsystem. Expanding the subsystem and letting N→∞, one takes the intersection of all calculated Lie groups. The final step consists of translating the symmetries obtained for the moment representation back into symmetries of the original integro-differential equations.

The first application of this method was done in [64] for the system of the Vlasov–Maxwell collisionless plasma equations.

It is worth noticing that among the indirect methods of studying symmetries of IDEs, the method of moments is the most universal one, despite the substantial restrictions on its applications.

Let us demonstrate this approach by the simple model Kac equation (2.2.1). The power moments for this model are defined as:

$$M_n=\int v^nfdv,\quad v\in R^1\qquad(n=0,1,\ldots).$$

Multiplying (2.2.1) by \(v^{n}\) and integrating it with respect to v, one obtains on the left hand side the expression

$$\frac{\partial M_n}{\partial t}+\frac{\partial M_{n+1}}{\partial x}.$$

This expression contains the two terms typical of the moment system of a kinetic equation. For the integration of the right hand side one can use the following integral identity for the collision integral (2.2.2):

$$ I\left( {{\upsilon }^{n}} \right)=\int\limits_{-\infty }^{\infty }{d\upsilon {{\upsilon }^{n}}J\left( f,f \right)}=\frac{1}{2}\int\limits_{-\infty }^{\infty }{\int\limits_{-\infty }^{\infty }{\int\limits_{-\pi }^{\pi }{g\left( \theta \right)\left[ {{{{\upsilon }'}}^{n}}+{{{{w}'}}^{n}}-{{\upsilon }^{n}}-{{w}^{n}} \right]f\left( \upsilon \right)f\left( w \right)d\upsilon \,dw\,d\theta }}},$$
(2.2.53)

where v′=vcos θ+wsin θ and w′=wcos θ−vsin θ. Integrating, one obtains the moment system for the Kac equation (2.2.1)

$$\frac{\partial M_n}{\partial t}+\frac{\partial M_{n+1}}{\partial x}-\varLambda_nM_0M_n=\sum_{m=1}^{n-1}H_{m,n-m}M_mM_{n-m}\quad(n=0,1,\ldots),$$

(2.2.54)

where

$$\varLambda_{2k}=\int\limits_{-\pi}^{\pi}g(\theta)\bigl[\cos^{2k}\theta+\sin^{2k}\theta-1-\delta_{k0}\bigr]\,d\theta,$$
$$\varLambda_{2k+1}=\int\limits_{-\pi}^{\pi}g(\theta)\bigl[\cos^{2k+1}\theta+\sin^{2k+1}\theta-1\bigr]\,d\theta\quad(k=0,1,\ldots),$$
$$H_{m,n-m}=\frac{1}{2}C_{n}^{m}\int\limits_{-\pi}^{\pi}g(\theta)\bigl[\cos^{m}\theta\sin^{n-m}\theta+(-1)^{m}\sin^{m}\theta\cos^{n-m}\theta\bigr]\,d\theta.$$

It is seen that for any N the last equation of the N-th order system contains the moment M N+1. Hence each truncated subsystem is unclosed. However, this does not prevent one from finding a symmetry.

Applying the classical group analysis method to this system and solving the determining equations, one obtains the admitted generator

$$X^{(3)}=k_1X_1+k_2X_2+k_3Y_3^{(3)}+k_4Y_4^{(3)}+p_1(t)\partial_{M_2}+(q_1(t,x)-xp_1^{\prime}(t))\partial_{M_3},$$

where

$$\begin{array}{c}X_1=\partial_t,\quad X_2=\partial_x,\\[3pt]Y_3^{(3)}=x\partial_x+M_1\partial_{M_1}+2M_2\partial_{M_2}+3M_3\partial_{M_3},\\[3pt]Y_4^{(3)}=t\partial_t-M_0\partial_{M_0}-2M_1\partial_{M_1}-3M_2\partial_{M_2}-4M_3\partial_{M_3}.\end{array}$$
(2.2.55)

The part of system (2.2.54) including the fourth moment M 4 consists of the equations

$$\begin{array}{c}\displaystyle{\frac{\partial M_0}{\partial t}+\frac{\partial M_1}{\partial x}=0,}\\[10pt]\displaystyle{\frac{\partial M_1}{\partial t}+\frac{\partial M_2}{\partial x}-\varLambda_1M_0M_1=0,}\\[10pt]\displaystyle{\frac{\partial M_2}{\partial t}+\frac{\partial M_3}{\partial x}=0,}\\[8pt]\displaystyle{\frac{\partial M_3}{\partial t}+\frac{\partial M_4}{\partial x}-\varLambda_3M_0M_3=(H_{1,2}+H_{2,1})M_1M_2.}\end{array}$$
(2.2.56)

Notice that: (a) system (2.2.56) contains (2.2.55) as a subsystem; (b) the set of derivatives for splitting the determining equations of system (2.2.56) contains the set of derivatives for splitting the determining equations of system (2.2.55). Because of these two properties, the generator admitted by system (2.2.56) can be obtained by extending the operator \(X^{(3)}\) to the space of the variables t,x,M 0,M 1,M 2,M 3 and M 4:

$$X^{(4)}=k_1X_1+k_2X_2+k_3Y_3^{(3)}+k_4Y_4^{(3)}+p_2(t,M_4)\partial_{M_2}+(q_2-xp_{2t})\partial_{M_3}+\zeta\,\partial_{M_4},$$

where p 2=p 2(t,M 4), q 2=q 2(t,x,M 4) and ζ=ζ(t,x,M 1,M 2,M 3,M 4). Applying this operator to system (2.2.56) one obtains that

$$p_2=0,\quad q_2=0$$

and

$$\zeta=(4k_3-5k_4)M_4+q_3(t).$$

This means that the admitted generator of system (2.2.56) is

$$X^{(4)}=k_1X_1+k_2X_2+k_3Y_3^{(4)}+k_4Y_4^{(4)}+q_3(t)\partial_{M_4},$$

where

$$Y_3^{(4)}=x\partial_x+\sum_{k=1}^4kM_k\partial_{M_k},\qquad Y_4^{(4)}=t\partial_t-\sum_{k=0}^4(k+1)M_k\partial_{M_k}.$$

During calculations the following condition was used

$$H_{1,2}+H_{2,1}\neq0.$$

One can check that if H 1,2+H 2,1=0, then the operator X (4) is also admitted by system (2.2.56).

Proceeding in this way, one obtains that the only generator which is admitted by all finite subsystems of (2.2.54) is

$$X=k_1X_1+k_2X_2+k_3Y_3+k_4Y_4,$$

where

$$Y_3=x\partial_x+\sum_{k=1}^\infty kM_k\partial_{M_k},\quad Y_4=t\partial_t-\sum_{k=0}^\infty(k+1)M_k\partial_{M_k}.$$

It is more convenient to rewrite the operator X in the form

$$X=k_1X_1+k_2X_2+k_3X_3+k_4X_4,$$

where

$$X_3=Y_4,\quad X_4=Y_3+Y_4=x\partial_x+t\partial_t-\sum_{k=0}^\infty M_k\partial_{M_k}.$$

Let us define corresponding generators in the space of the original variables (t,x,v,f).

Consider the generator

$$X_3=t\partial_t-\sum_{k=0}^\infty(k+1)M_k\partial_{M_k}.$$

It is necessary to obtain the corresponding group of transformations in an explicit form. Solving the Lie equations one has

$$\bar{t}=te^a,\quad \bar{x}=x,\quad \bar{M}_k=M_ke^{-(k+1)a}\quad (k=0,1,2,\ldots).$$
(2.2.57)

It is logical to assume that the variables v and f are also scaled in the space of the variables t, x, v, f:

$$\bar{v}=ve^{\alpha a},\quad \bar{f}=fe^{\beta a}.$$

Using this change, the transformed function and the transformed moments are determined by the formulae

$$\begin{array}{rcl}\bar{f}(\bar{t},\bar{x},\bar{v})&=&f(\bar {t}e^{-a},\bar{x},\bar{v}e^{-\alpha a})e^{\beta a},\\[12pt]\bar{M}_k&=&\int\limits_{-\infty}^\infty\bar{v}^k\bar{f}(\bar{v})\,d\bar{v}=e^{\beta a}\int\limits_{-\infty}^\infty\bar{v}^kf(\bar{v}e^{-\alpha a})\,d\bar{v}\\[12pt]&=&e^{(\beta+(k+1)\alpha)a}\int\limits_{-\infty}^\infty v^kf(v)\,dv=M_ke^{(\beta+(k+1)\alpha)a}.\end{array}$$
(2.2.58)

Thus, comparing with (2.2.57), one gets

$$\beta+\alpha=-1,\quad \alpha=-1.$$

This gives the generator

$$X_3=t\partial_t-v\partial_v.$$

Similar to the previous generator one obtains for the generator

$$X_4=x\partial_x+t\partial_t-\sum_{k=0}^\infty M_k\partial_{M_k}$$

that

$$\bar{t}=te^a,\quad \bar{x}=xe^a,\quad \bar{M}_k=M_ke^{-a}\quad (k=0,1,2,\ldots ).$$
(2.2.59)

Comparing this with (2.2.58), one finds

$$\beta+\alpha=-1,\quad \alpha=0.$$

This gives the generator

$$X_4=x\partial_x+t\partial_t-f\partial_f.$$

Therefore, the Kac equation (2.2.1) admits the Lie group with the generators:

$$X_1=\partial_t,\quad X_2=\partial_x,\quad X_3=t\partial_t-v\partial_v,\quad X_4=x\partial_x+t\partial_t-f\partial_f.$$
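The correspondence between the scalings of the moments and the scalings of (v,f) used in this transition can be checked on a concrete distribution. The following sympy sketch takes a centered Gaussian as a stand-in for f (an assumption made only for the illustration) and confirms that α=−1, β=0, i.e. the scaling generated by X₃=t∂_t−v∂_v, reproduces the moment scaling (2.2.57).

```python
import sympy as sp

v = sp.symbols('v', real=True)
a = sp.symbols('a', positive=True)

# concrete stand-in for the distribution function (any integrable f would do)
f = sp.exp(-v**2/2)/sp.sqrt(2*sp.pi)

alpha, beta = -1, 0          # the values found above for the generator X_3
fbar = sp.exp(beta*a)*f.subs(v, v*sp.exp(-alpha*a))   # transformed DF, cf. (2.2.58)

for k in range(5):
    M    = sp.integrate(v**k*f,    (v, -sp.oo, sp.oo))
    Mbar = sp.integrate(v**k*fbar, (v, -sp.oo, sp.oo))
    # the moments scale as Mbar_k = M_k*exp(-(k+1)*a), in agreement with (2.2.57)
    print(k, sp.simplify(Mbar - M*sp.exp(-(k + 1)*a)))      # 0 for every k
```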

Starting with [64] (see also [65]) the moment method was applied to Vlasov-type equations such as different modifications of the Benney equation [41], where a transition to a moment system is natural. In order to use the classical group analysis method it is necessary that each finite subsystem of a moment system contains a finite number of moments. Taking into account this property one can mention the papers [12, 13] where the moment method was used for the group analysis of the Bhatnagar–Gross–Krook (BGK) kinetic equation of rarefied gas dynamics. In the simplest model case this equation takes the form

$$\frac{\partial f}{\partial t}+v\frac{\partial f}{\partial x}=\nu(f_0-f).$$
(2.2.60)

Here as in (2.2.1), the distribution function is f=f(t,v,x), \(t\in{\mathbb{R}}_{+}^{1}\), v,x∈ℝ1. The local Maxwellian distribution

$$f_0=n\bigg(\frac{1}{2\pi T}\bigg)^{1/2}\exp \biggl[-\frac{(v-V)^2}{2T}\bigg]$$

is defined through the moments of an unknown solution

$$n=\int dv\,f,\quad V=\frac{1}{n}\int dv\,v\,f,\quad T=\frac{1}{n}\int dv\,(v-V)^2\,f.$$

Equations similar to the BGK equation with the so-called relaxation collision integral are also considered in the kinetic theory of molecular gases (the Landau–Teller equation [45]), in plasma physics, etc. For these equations, a finite subsystem for power moments contains a finite set of moments. However, in the general case of dissipative kinetic equations such as the Boltzmann equation, the Smolukhovsky equation and others, this property is exceptional. For example, the Boltzmann equation has this property only for Maxwellian-type molecular interaction. As noted, this case of the Boltzmann equation can be modeled by the Kac equation. The application of the group analysis method to the moment system corresponding to the Kac equation has been demonstrated above. For arbitrary intermolecular potentials, each moment equation contains an infinite number of moments. For this reason, in the general case constructing an admitted Lie group for such a system is as difficult as the direct integration of the moment system as a whole.

Other difficulties related to finding an admitted Lie group of transformations using moment equations are connected with the inverse transition from a Lie group of transformations of the moment system to the corresponding Lie group of the original equation. In all known cases [12, 13, 41, 64] one deals with Lie groups of scaling transformations similar to the example for the Kac equation considered above. The scaling transformations are naturally carried over to the original variables v, f. However, for more complicated transformations such a transition may not be as easy.

It is clear that the form of a moment system and its Lie group depend on the moment representation. For example, for the Boltzmann equation with Maxwell molecules (and also for the Kac model (2.2.1)) an alternative to the power moments is provided by the Fourier coefficients of the expansion of the distribution function in Hermite polynomials. In general there are no results on relations between these possible approaches.

Moreover, as a rule there are no rigorous proofs of equivalence between an original kinetic equation and the corresponding moment system. In some cases the Lie group obtained by the moment method coincides with the Lie group calculated by the regular method [29] applied to the original equations. For example, this happens for the 4-parameter Lie group derived for the moment system of the Vlasov equation [64] and for the Vlasov equation [30]. The 11-parameter Lie group of the Boltzmann equation with arbitrary power potential found in [12, 13] and the Lie group calculated directly from the equation [28] also coincide. At the same time as shown in [37], the finite Lie group calculated in [41] using the moment method for the Benney equation is not complete. Since the Benney equation possesses [44] an infinite set of conservation laws, one can expect that the finite dimension of the derived Lie algebra contradicts the infinite set of conservation laws. This inconsistency was considered in detail in [37] (see also Chap. 4).

These remarks show that in finding symmetries of IDEs, the relatively universal moment method cannot be a valuable alternative to the regular method which is constructed as a generalization of the classical Lie method for differential equations.

2.2.3 Methods Using a Transition to Equivalent Differential Equations

The idea of these approaches is quite obvious. However, its realization in each case has very individual features. Therefore the survey of these approaches is restricted here to several examples. In spite of this restriction, each of the chosen examples illustrates a technique which is used in at least two papers.

Vlasov-Type Equations as First-Order Partial Differential Equations

There exists the possibility of a direct application of the classical group analysis method (see Chap. 1) for finding invariant solutions of Vlasov-type kinetic equations. The idea of this application is related to the following.

It is well-known [20] that the Lie group admitted by the first-order quasilinear partial differential equation

$$u_t+a_i(x,u)u_{x_i}=b(x,u)$$
(2.2.61)

coincides with the Lie group admitted by the characteristic system of ordinary differential equations of the quasilinear equation (2.2.61)

$$\frac{du}{dt}=b(x,u),\qquad \frac{dx_i}{dt}=a_i(x,u)\quad (i=1,2,\ldots,n).$$

Here x=(x 1,x 2,…,x n ).

Having this in mind, let us separately consider the Vlasov kinetic equation (2.2.8), which can be rewritten in the form

$${\frac{\partial f}{\partial t}}+\dot{x}{\frac {\partial f}{\partial x}}+F{\frac{\partial f}{\partial\dot{x}}}=0,$$
(2.2.62)

where \(f=f(t,x,\dot{x}),\,F=F(t,x)\), and \(\dot{x}=v\). Here the self-consistency of the force F given by the Maxwell system (2.2.9) is temporarily neglected. Following [1] one makes the transition from the characteristic system of (2.2.62)

$$\frac{d\,t}{1}=\frac{d\,x}{\dot{x}}=\frac{d\dot{x}}{F}$$
(2.2.63)

to the equivalent second-order ordinary differential equation

$$\varPhi\equiv\frac{d^2x}{dt^2}-F(t,x)=0.$$

According to the remark given above, it is clear that this equation admits the same Lie group as (2.2.62) and (2.2.63). In notations of Chap. 1 the infinitesimal criterion for the generator

$$X=\xi(t,x)\partial_t+\eta(t,x)\partial_x,$$

to be admitted by the equation Φ=0 is

$$X_{(2)}\varPhi|_{\varPhi=0}\equiv(\xi\varPhi _t+\eta\varPhi _x+\zeta_1\varPhi_{\dot{x}}+\zeta_2\varPhi_{\ddot{x}})_{|\varPhi=0}=0.$$
(2.2.64)

Here X (2) is the second prolongation of the infinitesimal generator X, and the coefficients ζ1 and ζ2 are defined by the prolongation formulae. Calculations give that the determining equation (2.2.64) becomes

$$ \left( {{\eta }_{x}}-2{{\xi }_{t}} \right)F-\xi {{F}_{t}}-\eta {{F}_{x}}+{{\eta }_{tt}}+\left( 2{{\eta }_{tx}}-{{\xi }_{tt}}-3{{\xi }_{x}}F \right)\dot{x}+\left( {{\eta }_{xx}}-2{{\xi }_{tx}} \right){{\dot{x}}^{2}}-{{\xi }_{xx}}{{\dot{x}}^{3}}=0. $$

Splitting this determining equation with respect to powers of \(\dot{x}\) one finds

$$\begin{array}{c}\xi_{xx}=0,\quad \eta_{xx}-2\xi_{tx}=0,\\[6pt]2\eta_{tx}-\xi_{tt}-3\xi_xF=0,\quad (\eta_x-2\xi_t)F-\xi F_t-\eta F_x+\eta_{tt}=0.\end{array}$$
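This splitting can be reproduced by a short symbolic computation. The sketch below builds the second prolongation for the equation \(\ddot{x}=F(t,x)\), substitutes it into the determining equation (2.2.64) and collects the powers of \(\dot{x}\).

```python
import sympy as sp

t, x, xd = sp.symbols('t x xdot')          # xd stands for xdot
xi  = sp.Function('xi')(t, x)
eta = sp.Function('eta')(t, x)
F   = sp.Function('F')(t, x)

# total derivative on solutions of xddot = F(t, x)
def Dt(expr):
    return sp.diff(expr, t) + xd*sp.diff(expr, x) + F*sp.diff(expr, xd)

# prolongation formulae for zeta_1 and zeta_2
zeta1 = Dt(eta) - xd*Dt(xi)
zeta2 = Dt(zeta1) - F*Dt(xi)               # xddot already replaced by F

# determining equation (2.2.64) for Phi = xddot - F(t, x)
det_eq = sp.expand(zeta2 - xi*sp.diff(F, t) - eta*sp.diff(F, x))

# splitting with respect to the powers of xdot reproduces the four equations above
for power in range(4):
    print('xdot^%d :' % power, det_eq.coeff(xd, power))
```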

The general solution of the first two equations is [1]

$$\xi=xh_1(t)+h_2(t),\quad \eta=2x^2h_1^{\prime}(t)+xh_3(t)+h_4(t),$$

where h i (t) (i=1,2,3,4) are arbitrary functions. Using the standard technique of constructing invariant solutions, for particular choices of the functions h i (t) (i=1,2,3,4) the Vlasov equation (2.2.62) is reduced to the stationary Vlasov equation in the new variables \(\bar{f}\), \(\bar{x}\), V:

$$\varphi_1\frac{\partial\bar{f}}{\partial\bar{x}}+\varphi_2\frac{\partial\bar{f}}{\partial V}=0,$$

where \(\varphi_{1}(\bar{x},\,V)\), \(\varphi_{2}(\bar{x},V)\) are some known functions. The last equation can be integrated only in a few particular cases. Notice also that the obtained solutions have to be consistent with the Maxwell system (2.2.9). A brief survey of these results can be found in [1]. It is clear that the presented approach is effective only for similar one-dimensional problems in plasma physics, gravitational astrophysics, etc., where Vlasov-type equations with three independent variables appear.

Use of the Laplace Transform

Successful applications of the Laplace and other integral transforms for reducing integro-differential equations to differential ones are restricted to some degenerate cases. As a rule these equations either possess a high symmetry in the phase space or represent exactly solvable models [23].

As a first example let us consider the Fourier-image of the spatially homogeneous and isotropic Boltzmann equation derived in [4]

$$\varphi_t(x,t)+\varphi(x,t)\varphi(0,t)-\int\limits_0^1\varphi(xs,t)\varphi(x(1-s),t)\,ds=0.$$
(2.2.65)

One can notice that any solution of (2.2.65) possesses the property φ(0,t)=const. This property corresponds to the mass conservation law of the Boltzmann equation.

The change xs=y reduces (2.2.65) to the equation with the convolution-type integral:

$$x\varphi_t(x,t)+x\varphi(x,t)\varphi(0,t)-\int\limits_0^x\varphi(y,t)\varphi(x-y,t)\,dy=0.$$
(2.2.66)

In analysis of (2.2.66), one can assume that

$$\varphi(0,t)=1.$$
(2.2.67)

Then applying the Laplace transform

$$ u\left( z,t \right)=\mathcal{L}\left\{ \varphi \left( x,t \right) \right\}=\int\limits_{0}^{\infty }{{{e}^{-zx}}\varphi \left( x,t \right)dx,}$$
(2.2.68)

to (2.2.66), one arrives at the partial differential equation

$$\frac{\partial^2u}{\partial z\partial t}+\frac {\partial u}{\partial z}+u^2=0.$$
(2.2.69)

Since (2.2.69) is a partial differential equation, one can apply to this equation the classical group analysis method. In fact, assuming that the infinitesimal generator of the admitted Lie group is

$$X=\tau\partial_t+\xi\partial_z+\eta\partial_u,$$

the determining equation of this Lie group is

$$\bigl(X^{(2)}\varPsi\bigr)_{|(2.2.69)}=\big(\eta^{u_{zt}}+\eta^{u_z}+2u\eta\big)_{|(2.2.69)}=0.$$
(2.2.70)

Here the coefficients \(\eta^{u_{z}}\) and \(\eta^{u_{zt}}\) are defined by the prolongation formulae

$$\eta^{u_z}=D_z\eta-u_tD_z\tau-u_zD_z\xi,\quad \eta^{u_{zt}}=D_t\eta ^{u_z}-u_{zt}D_t\tau-u_{zz}D_t\xi.$$

The general solution of the determining equation (2.2.70) is

$$X=c_1Y_1+c_2Y_2+c_3Y_3+c_4Y_4,$$

where

$$Y_1=\partial_t,\quad Y_2=\partial_z,\quad Y_3=-z\partial_z+u\partial _u,\quad Y_4=e^t(-\partial_t+u\partial_u).$$

Notice that the original equation (2.2.65) admits the Lie algebra with the basis [28]

$$X_1=\partial_t,\quad X_2=x\varphi\partial_\varphi,\quad X_3=x\partial _x,\quad X_4=\varphi\partial_\varphi-t\partial_t.$$

The well-known solution of (2.2.65) is the BKW-solution [4, 42]: \(\varphi=6e^{y}(1-y)\), where \(y=xe^{-t}\). This solution is an invariant solution of (2.2.65) under the Lie group of transformations corresponding to the subalgebra {X 1+X 3}.
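The BKW-solution can be checked by direct substitution. In the following minimal sympy sketch the integral in (2.2.65) is evaluated in closed form and the residual of the equation vanishes.

```python
import sympy as sp

x, t, s = sp.symbols('x t s', positive=True)

y   = x*sp.exp(-t)
phi = 6*sp.exp(y)*(1 - y)                  # BKW-solution of (2.2.65)

collision = sp.integrate(phi.subs(x, x*s)*phi.subs(x, x*(1 - s)), (s, 0, 1))
residual  = sp.diff(phi, t) + phi*phi.subs(x, 0) - collision
print(sp.simplify(residual))               # 0
```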

Let us study the symmetries of (2.2.69) which inherit the symmetries of (2.2.65) and vice versa.

It is trivial to check that the transformations related with the generator X 1 in the space of the variables (x,t,φ) are inherited in the space of the variables (z,t,u).

The transformations corresponding to the generator X 2 map functions as

$$\bar{\varphi}(\bar{x},\bar{t})=e^{a\bar{x}}\varphi(\bar{x},\bar{t}).$$

Hence the Laplace transform (2.2.68) maps solutions of (2.2.65) as follows

$$\bar{u}(\bar{z},\bar{t})=\mathcal{L}\{\bar{\varphi}(\bar{x},\bar{t})\}=\int\limits_{0}^{\infty}e^{-\bar{z}\bar{x}}\bar{\varphi}(\bar{x},\bar{t})\,d\bar{x}=\int\limits_{0}^{\infty}e^{-(\bar{z}-a)\bar{x}}\varphi(\bar{x},\bar{t})\,d\bar{x}=u(\bar{z}-a,\bar{t}).$$

This means that the symmetry corresponding to the generator X 2 becomes the symmetry corresponding to the generator Y 2.

Implementation of a similar procedure for the generator X 3 gives

$$\bar{\varphi}(\bar{x},\bar{t})=\varphi (e^{-a}\bar{x},\bar{t}),$$

and the Laplace transform (2.2.68) maps solutions of (2.2.65) as follows

$$\bar{u}(\bar{z},\bar{t})=\int\limits_{0}^{\infty}e^{-\bar{z}\bar{x}}\varphi(e^{-a}\bar{x},\bar{t})\,d\bar{x}=e^{a}\int\limits_{0}^{\infty}e^{-e^{a}\bar{z}w}\varphi(w,\bar{t})\,dw=e^{a}u(e^{a}\bar{z},\bar{t}).$$

This relates the symmetry corresponding to the generator X 3 and the symmetry corresponding to the generator Y 3.

The heritage property fails for the generator \(X_{4}=\varphi\partial_{\varphi}-t\partial_{t}\), for which the transformations are

$$\bar{\varphi}(\bar{x},\bar{t})=e^a\varphi(\bar {x},e^a\bar{t}),$$

and the Laplace transforms of the functions \(\mathcal{L}\{\bar{\varphi}\}\) and \(\mathcal{L}\{\varphi\}\) are related by the formula

$$\bar{u}(\bar{z},\bar{t})=\mathcal{L}\{\bar{\varphi}(\bar{x},\bar{t})\}=e^{a}\int\limits_{0}^{\infty}e^{-\bar{z}\bar{x}}\varphi(\bar{x},e^{a}\bar{t})\,d\bar{x}=e^{a}u(\bar{z},e^{a}\bar{t}).$$

Thus the symmetry related to the generator \(X_{4}=\varphi\partial_{\varphi}-t\partial_{t}\) in the space of the variables (z,t,u) becomes the symmetry corresponding to the generator

$$Y_5=-t\partial_t+u\partial_u.$$

The last generator is not admitted by (2.2.69). This is explained by the restriction imposed by the condition (2.2.67): if φ(0,t)=1, then \(\bar{\varphi}(0,\bar {t})=e^{a}\varphi (0,e^{a}\bar{t})=e^{a}\neq 1\).

Let us analyze the symmetry of the generator \(Y_{4}=e^{t}(-\partial_{t}+u\partial_{u})\) admitted by (2.2.69). The transformations corresponding to this generator are

$$\bar{t}=t-\ln(1+ae^t),\quad \bar{u}=(1+ae^t)u.$$

These transformations map a function u(z,t) into the function

$$\bar{u}(\bar{z},\bar{t})=\frac{1}{1-ae^{\bar {t}}}u\big(\bar{z},\bar{t}-\ln(1-ae^{\bar{t}})\big).$$

The corresponding relations of the originals are

$$\bar{\varphi}(\bar{x},\bar {t})=\frac{1}{1-ae^{\bar{t}}}\varphi\big(\bar{x},\bar{t}-\ln (1-ae^{\bar{t}})\big).$$
(2.2.71)

These transformations of the function φ(x,t) define the generator

$$X_5=e^t(-\partial_t+\varphi\partial_\varphi).$$

Considering (2.2.71) at \(\bar{x}=0\), one gets

$$\bar{\varphi}(0,\bar{t})=\frac{1}{1-ae^{\bar {t}}}\varphi \big(0,\bar{t}-\ln(1-ae^{\bar{t}})\big).$$

Because of the mass conservation law φ(0,t)=const the operator X 5 is not admitted by (2.2.65).

One notices that the differences between the Lie group admitted by (2.2.65) and the Lie group admitted by (2.2.69) come from the assumption (2.2.67). In fact, the direct application of the Laplace transformation to (2.2.66) transforms it into the equation

$$\frac{\partial^2u}{\partial z\partial t}+k\frac{\partial u}{\partial z}+u^2=0,$$
(2.2.72)

where k=φ(0,t). Recall that according to the mass conservation law φ(0,t)=const. Because the functions u(z,t) and φ(x,t) are related by the Laplace transform, one can conclude that \(k=\mathcal{L}^{-1}\{u(z,t)\}(0,t)\). Hence (2.2.72) is also a nonlocal equation, and one cannot apply the classical group analysis method to it. This also explains the appearance of the new transformations.

Another way of applying the Laplace transform to (2.2.65) was proposed in [8]. Using the assumption (2.2.67) and the substitution y=e^{−λt}x, the equation (2.2.66) is reduced to the equation

$$-\lambda y^2\frac{d\varphi(y)}{dy}+y\varphi (y)-\int\limits_0^y\ \varphi(w)\varphi(y-w)\,dw=0,$$
(2.2.73)

where λ is a constant. The Laplace transform \(u(z)=\mathcal{L}\{\varphi(y)\}\) transforms (2.2.73) into the second-order ordinary differential equation

$$\lambda zu''+(2\lambda+1)u^{\prime}+u^2=0.$$
(2.2.74)

Considering λ=1/6 and exploiting the substitution v(p)=p^{−2}−p^{−3}u(p^{−1}), the equation (2.2.74) was reduced in [8] to the equation defining the Weierstrass elliptic function [72]:

$$v^{\prime\prime}=6v^2.$$
(2.2.75)

In the simplest case, when the invariants of the Weierstrass function are g_2=g_3=0, one has v(p)=(p−p_0)^{−2}, where p_0>1 is a constant. Returning to the original variables, one gets the solution

$$\varphi(y)=(1-y/p_0)e^{y/p_0}.$$

This is the Fourier image of the known BKW-solution of the Boltzmann equation [6]. However, the transition to the differential equation (2.2.74) does not allow one to describe explicitly the class of invariant BKW-solutions as a whole (compare with the corresponding example in the next section).
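As an aside (not from the original text), the last statement about the particular solution is easy to confirm with a computer algebra system: the following minimal sympy sketch substitutes φ(y)=(1−y/p_0)e^{y/p_0} into (2.2.73) with λ=1/6 and obtains an identically vanishing residual.

```python
import sympy as sp

y, w, p0 = sp.symbols('y w p0', positive=True)
lam = sp.Rational(1, 6)
phi = lambda s: (1 - s/p0) * sp.exp(s/p0)

# convolution term of (2.2.73)
conv = sp.integrate(sp.simplify(phi(w) * phi(y - w)), (w, 0, y))
residual = -lam * y**2 * sp.diff(phi(y), y) + y * phi(y) - conv
print(sp.simplify(residual))   # -> 0
```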

Let us proceed here with application of the classical group analysis method to (2.2.74). For arbitrary λ this equation admits the generator

$$Z_0=-z\partial_z+u\partial_u.$$

Additional admitted generators occur for λ satisfying the equation

$$(6\lambda-1)(3\lambda+2)(2\lambda+3)(\lambda-6)=0.$$

These generators are

$$\begin{array}{rcl}\lambda=1/6&:&Z_1=z^2\partial_z+(2-3uz)\partial_u,\\[6pt]\lambda=-2/3&:&Z_2=\sqrt{z}\,\partial_z,\\[6pt]\lambda=-3/2&:&Z_3=3z^{2/3}\partial _z-uz^{-1/3}\partial_u,\\[6pt]\lambda=6&:&Z_4=z^{-7/6}\bigl(3z^2\partial_z-(1+2uz)\partial_u\bigr).\end{array}$$

The presence of two admitted generators allows one to use Lie’s integration algorithm:Footnote 11 using canonical coordinates this algorithm reduces finding solutions of a second-order ordinary differential equation to quadratures. In fact, the use of canonical variables gives the changes

$$\begin{array}{rcl}\lambda=1/6&:&u=z^{-1}-z^{-3}v,\quad p=z^{-1},\\[6pt]\lambda=-2/3&:&u=v,\quad p=\sqrt{z},\\[6pt]\lambda=-3/2&:&u=z^{-1/3}v,\quad p=z^{1/3},\\[6pt]\lambda=6&:&u=z^{-1}-z^{-2/3}v,\quad p=z^{1/6}.\end{array}$$

In all of these cases (2.2.74) is reduced to the single equation (2.2.75). Since (2.2.75) does not contain the independent variable explicitly, one can apply the substitution v′=h(v). This substitution leads to the equation

$$h^{\prime}h=6v^2.$$

Integrating this equation, one obtains

$$h^2=4v^3+c_1,$$

where c 1 is an arbitrary constant. Thus

$$v^{\prime}=\gamma\sqrt{4v^3+c_1}\quad (\gamma=\pm1),$$

and the function v(p) is found from the equation

$$\int\frac{dv}{\sqrt{4v^3+c_1}}=\gamma p+c_2.$$

In particular, for c 1=0 one has

$$v=(\gamma p+c_2)^{-2}.$$

This determines a particular solution of (2.2.74) for the chosen λ:

$$\begin{array}{rcl}\lambda=1/6&:&u=\frac{1}{z}-\frac{1}{z(\gamma+c_2z)^2},\\[12pt]\lambda=-2/3&:&u=\frac{1}{(\gamma\sqrt{z}+c_2)^2},\\[12pt]\lambda=-3/2&:&u=\frac{1}{z^{1/3}(\gamma z^{1/3}+c_2)^2},\\[12pt]\lambda=6&:&u=\frac{1}{z}-\frac{1}{z^{2/3}(\gamma z^{1/6}+c_2)^2}.\end{array}$$
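These expressions can be verified directly. The following minimal sympy sketch (an aside, not from the original text; γ is taken equal to +1, the case γ=−1 being analogous) substitutes each listed function into (2.2.74) and checks that the residual vanishes.

```python
import sympy as sp

z, c2 = sp.symbols('z c2', positive=True)
q = sp.symbols('q', positive=True)          # auxiliary variable q = z**(1/6)

# particular solutions listed above, with gamma = +1
cases = {
    sp.Rational(1, 6): 1/z - 1/(z*(1 + c2*z)**2),
    sp.Rational(-2, 3): 1/(sp.sqrt(z) + c2)**2,
    sp.Rational(-3, 2): 1/(z**sp.Rational(1, 3)*(z**sp.Rational(1, 3) + c2)**2),
    sp.Integer(6): 1/z - 1/(z**sp.Rational(2, 3)*(z**sp.Rational(1, 6) + c2)**2),
}

for lam, u in cases.items():
    res = lam*z*sp.diff(u, z, 2) + (2*lam + 1)*sp.diff(u, z) + u**2
    # rewrite in the variable q = z**(1/6), which makes the residual a rational function
    print(lam, sp.simplify(res.subs(z, q**6)))   # -> 0 in every case
```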

The particular solutions of (2.2.65) are obtained by applying the inverse Laplace transform to the functions found. It is worth noting that the solutions for λ<0 have no physical meaning for the original equation (2.2.65). The case λ=1/6 was studied in [8]. In the case λ=6 it is difficult to find the inverse Laplace transform.

Other examples of applications of integral transforms to low-dimensional models of the Boltzmann equation can be found in [23, 46].

The use of the Laplace transform in studies of more realistic kinetic equations can be found in coagulation theory [71]. In fact, the Smolukhovsky kinetic equation of homogeneous coagulation has the form

$$\frac{\partial f(t,v)}{\partial t}=\frac{1}{2}\int\limits_0^{v}\beta(v-v_1,v_1)f(t,v-v_1)f(t,v_1)\,dv_1-f(t,v)\int\limits_0^\infty\beta(v,v_1)f(t,v_1)\,dv_1.$$
(2.2.76)

The Cauchy problem for this equation is considered with the following initial data

$$f(0,v)=f_0(v).$$

Application of the Laplace transform \(F(z)=\mathcal{L}\{f(v)\}\) to (2.2.76) with the coagulation kernel β(v,v_1)=b(v+v_1) gives the first-order partial differential equation

$$\frac{\partial F(t,z)}{\partial t}+b\left((F(t,z)-F(t,0))\frac {\partial F(t,z)}{\partial z}+MF(t,z)\right)=0,$$

where

$$M=\int\limits_0^\infty dvvf(t,v)=\mathrm{const}$$

is the total mass of coagulating particles. The obtained equation can be integrated in an explicit form. However, the inverse Laplace transform of the derived solution is only possible for a few initial functions f 0(v). More substantial results of a direct group analysis of (2.2.76) are presented in Chap. 3.

Use of a Moment Generating Function

This approach has a very restricted set of applications and has been used in just a few works devoted to invariant solutions of the spatially homogeneous and isotropic Boltzmann equation with the isotropic scattering model [43, 54, 66]. The original interest of the study in [43] was the system of normalized power moments for the formulated case of the Boltzmann equation. As shown in [9] this system can easily be derived by substitution of the Taylor expansion

$$\varphi(x,t)=\sum_{n=0}^{{\infty}}\frac{(-x)^n}{n!}M_n(t)$$

into (2.2.65). The system obtained in this way takes the form

$$\frac{dM_n}{dt}+M_n=\frac{1}{n+1}\sum_{k=0}^nM_kM_{n-k}\qquad(n=0,1,2,\ldots).$$
(2.2.77)

The moment generating function is introduced as follows

$$G(\xi,t)=\sum_{n=0}^\infty\xi^n\,M_n(t).$$

Multiplying (2.2.77) by ξ n and summing over all n, one finds

$$\frac{\partial G}{\partial t}+G=\sum_{n=0}^\infty\frac{\xi^n}{n+1}\sum_{k=0}^nM_kM_{n-k}.$$

Noting that

$$G^2=\sum_{n=0}^\infty\xi^n\sum_{k=0}^nM_kM_{n-k},$$

the last equation can be transformed into the following differential equation

$$\frac{\partial^2(\xi G)}{\partial t\partial\xi}+\frac{\partial(\xi G)}{\partial\xi}=G^2.$$
(2.2.78)
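As an aside, the passage from (2.2.77) to (2.2.78) can be verified order by order in ξ with a computer algebra system. The following minimal sympy sketch truncates the moment generating function at a finite order N (the value N=6 is an arbitrary illustrative choice), substitutes the moment equations (2.2.77), and checks that the coefficients of ξ^0,…,ξ^N in (2.2.78) vanish.

```python
import sympy as sp

t, xi = sp.symbols('t xi')
N = 6
M = [sp.Function(f'M{n}')(t) for n in range(N + 1)]

# truncated moment generating function G = sum_n xi^n M_n(t)
G = sum(xi**n * M[n] for n in range(N + 1))

# left-hand side of (2.2.78) minus right-hand side
expr = sp.diff(xi * G, t, xi) + sp.diff(xi * G, xi) - G**2

# substitute the moment equations dM_n/dt = -M_n + (1/(n+1)) sum_k M_k M_{n-k}
subs = {M[n].diff(t): -M[n] + sp.Rational(1, n + 1) * sum(M[k] * M[n - k] for k in range(n + 1))
        for n in range(N + 1)}
expr = sp.expand(expr.subs(subs))

# coefficients of xi^0 .. xi^N must vanish (higher orders are truncation artifacts)
print([sp.simplify(expr.coeff(xi, n)) for n in range(N + 1)])   # -> [0, 0, ..., 0]
```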

The change of variables

$$\xi=(z+1)^{-1},\qquad\xi G=u(z,t)$$

leads (2.2.78) into

$$\frac{\partial^2u}{\partial z\partial t}+\frac{\partial u}{\partial z}+u^2=0.$$
(2.2.79)

This equation coincides with (2.2.69), but the variables z, u in (2.2.79) and in (2.2.69) have a different origin.

Using further transformations of (2.2.79) and very complicated calculations, the invariant BKW-solution was also derived in [43]. Notice that in this approach the inverse transition to the distribution function involves considerable difficulties.

In [66] the equation (2.2.79) was studied by the classical group analysis method as done above for (2.2.69). The same admitted Lie algebra with the basis of the generators {Y 1,…,Y 4} was obtained there. It is natural that the discrepancy between this Lie algebra and the admitted Lie algebra of the original equation was also noted. Studying this discrepancy, the authors showed that the class of the BKW-solutions is the only one which satisfies the mass conservation law M 0(t)=1 (φ(0,t)=1). Recall that for (2.2.79) this law corresponds to the condition

$$u(z=\infty,t)=0.$$

It was also proposed in [66] to apply the other classes of invariant solutions of (2.2.79) obtained there to the spatially homogeneous and isotropic Boltzmann equation with a source term. In this case (2.2.79) has a nonzero function ψ(z,t) on the right-hand side, and the determining equations impose conditions on the function ψ(z,t).

Some years later the approach described above was directly applied in [54] to the spatially homogeneous and isotropic Boltzmann equation with a source term. Instead of (2.2.79) the slightly different nonautonomous equation

$$\frac{\partial^2u}{\partial z\partial t}+M_0(t)\frac{\partial u}{\partial z}+u^2=\sigma $$

was considered. This allowed the author to weaken the conditions imposed on the source function compared with [66].

Some Other Techniques

Within this subsection it is also worth mentioning two more approaches which could claim to be universal. Since they are based on very specific mathematical techniques, they are not widespread.

The method developed in [18] consists in reducing the original integro-differential equation to a system of boundary differential equations. As an example of such a transition one can consider the simple one-dimensional Hammerstein integral equation

$$u(x)=\int\limits_a^bK(x,s,u(s))\,ds,$$
(2.2.80)

where the kernel K(x,s,u) is a given function and x∈[a,b]. The equivalent system of boundary differential equations is introduced as follows

$$v_s(x,s)=K(x,s,u(s)),\qquad v(x,a)=0,$$
$$u(x)=v(x,b).$$

The new dependent variable v is nonlocal because it depends on all values of a solution u(x) on the interval [a,b]. For this reason the derived system is called a covering of (2.2.80).

In the more interesting case of the Smolukhovsky equation (2.2.76) which is considered in [18] the corresponding covering takes the form

$$u_{v_1}(v,v_1,t)-u_{v}(v,v_1,t)=\beta(v,v_1)f(t,v)f(t,v_1),$$
$$u(v,v_1,t)=-u(v_1,v,t),$$
$$w_{v_1}(v,v_1,t)=\beta(v,v_1)f(t,v),\qquad w(v,0,t)=0.$$

Using homomorphisms of the intervals on which the independent variables vary, the constructed covering is formally rewritten as another differential system. For this system a very complex generalization of the classical group analysis in its geometrical interpretation was developed. Its explanation here would be very long, and it is omitted. One can only remark that there are many coverings of the same integro-differential equation; because of this, different results can be obtained using this approach.

A technically simpler method of reducing integro-differential equations to differential ones was suggested in [28, 29]. In this method one uses Weyl's fractional integrals and derivatives. The ν-order (ν>0) integral is defined as

$$W_x^{-\nu}f(x)=\frac{1}{\varGamma(\nu)}\int\limits_x^\infty dy(y-x)^{\nu-1}f(y),$$

where Γ(x) is the Euler gamma-function. Correspondingly, the α-order Weyl derivative is

$$W_x^\alpha f(x)=E^nW_x^{-(n-\alpha)}f(x),\quad n-1<\alpha<n,\quad E^n=(-1)^n\frac{d^n}{dx^n}.$$
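As a quick illustration of these definitions (an aside, not from the original text), for f(x)=e^{−x} the Weyl integral of any order ν>0 reproduces e^{−x}: after the change y=x+u the defining integral reduces to the Euler integral for Γ(ν). A minimal sympy check:

```python
import sympy as sp

x, u, nu = sp.symbols('x u nu', positive=True)
# Weyl integral of exp(-x) after the substitution y = x + u
W = sp.exp(-x) * sp.integrate(u**(nu - 1) * sp.exp(-u), (u, 0, sp.oo)) / sp.gamma(nu)
print(sp.simplify(W))   # -> exp(-x)
```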

For example, one can consider the spatially homogeneous and isotropic Boltzmann equation with asymptotic collision integral [29]

$$x^\alpha f_t(x,t)+f(x,t)-\int\limits_0^1f(sx,t)f((1-s)x,t)\,ds=0.$$

Reducing it to the equation with the convolution-type integral and using the Laplace transform as was done for (2.2.65), one obtains

$$\int\limits_0^\infty dx\,e^{-zx}x^{1+\alpha}f_t(x,t)-F(z,t)-F^2(z,t)=0.$$

In terms of Weil’s derivatives one can rewrite the last equation in the form

$$W_z^{1+\alpha}F_t-F(z,t)-F^2(z,t)=0.$$

Since some properties of fractional Weyl derivatives are analogous to those of ordinary derivatives, this representation can ease the search for the admitted dilation group.Footnote 12 Because for arbitrary α the operator \(W_{x}^{\alpha}\) is nonlocal, for other transformations one needs a corresponding generalization of the classical group analysis scheme. A variant of such a generalization with another definition of fractional derivatives was announced in [26].

In conclusion one can summarize that all methods of reducing integro-differential equations to differential equations face the same difficulties: the lack of universality, the complexity of the direct and inverse transformations, the possible violation of the homomorphism of admitted groups, and others.

2.3 A Regular Method for Calculating Symmetries of Equations with Nonlocal Operators

The survey presented in the previous section gives a sufficiently complete idea of the methods for finding invariant solutions of integro-differential equations. However, it is worth noting that none of these methods allows one to be sure that a derived Lie group is the widest Lie group admitted by the considered equations. There is only one way to obtain such a result: it is necessary to develop a method for constructing the determining equations which define a Lie group admitted by the studied integro-differential equations. The completeness of an obtained Lie group is then a corollary of the uniqueness of the general solution of the determining equations.

In this section a regular direct method of a complete group analysis of equations with nonlocal operators is presented. In applications of group analysis to these equations it is necessary to pass through the same successive stages as for differential equations. The central concept of an admitted Lie group of equations with nonlocal terms is defined as a Lie group satisfying the determining equations. In contrast to partial differential equations, the property of an admitted Lie group to map any solution into a solution of the same equations is not required, although the method developed for constructing the determining equations uses this property. In practice the algorithm for obtaining the determining equations is no more difficult than for partial differential equations. The main difficulty consists in solving the determining equations, because they also contain nonlocal operators. As for partial differential equations, splitting the determining equations helps to obtain their general solution. The splitting method can be based, for example, on the existence of a solution of a Cauchy problem, and its realization depends on the properties of the Cauchy problem of the studied nonlocal equations. In the next section we demonstrate two different approaches.

As a rule, the considered equations or systems include, along with nonlocal operators, operators or equations with partial derivatives. Hence, the definition of an admitted Lie group for equations with nonlocal terms has to be consistent with the definition of an admitted Lie group of partial differential equations.

Since the definition of an admitted Lie group given for partial differential equations cannot be applied directly to equations with nonlocal terms, the concept of an admitted Lie group requires further discussion before such a definition is given. This discussion assists in establishing a definition of an admitted Lie group for equations with nonlocal terms.

2.3.1 Admitted Lie Group of Partial Differential Equations

One of the definitions of a Lie group admitted by a system of partial differential equations (S) is based on a knowledge of the solutions:Footnote 13 a Lie group is admitted by the system (S) if any solution of this system is mapped into a solution of the same system. Two other definitions are based on the geometrical approach: equations are considered as manifolds. One of these definitions deals with the manifold defined by the system (S). Another definition works with the extended frame of the system (S): system (S) and all its prolongations.Footnote 14 Notice that the definitions based on the geometrical approach have the following inadequacy: there are equations which have no solutions but which nevertheless admit (in this meaning) a Lie group. On the other hand, the geometrical approach has the advantage of being simple in applications.

It should also be mentioned here that different approaches have been developed for finite-difference equations. A review of these approaches can be found in [22] and in the references therein.

The classical geometrical definition of an admitted Lie group deals with invariant manifolds: the group is admitted by the system of equations

$$(S)\qquad S(x,u,p)=0$$
(2.3.1)

if the manifold defined by these equations is invariant with respect to this group. All functions are assumed to be sufficiently many times continuously differentiable, for example, of the class C^∞. The manifold

$$(S)=\{(x,u,p)\mid S(x,u,p)=0\},$$

defined by (2.3.1), is considered in the space J l of the variables

$$x=(x_1,x_2,\ldots,x_n),\ u=(u^1,u^2,\ldots,u^m),\quad p=(p_\alpha^j)\ (j=1,2,\ldots,m;\ |\alpha|\leq l).$$

Here and below the following notations are used:

$$\begin{array}{c}p_\alpha^j=D^{\alpha}u^j,\quad D^\alpha=D_{1}^{\alpha_1}D_{2}^{\alpha_2}\ldots D_{n}^{\alpha_n},\\[6pt]\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_n),\quad |\alpha|=\alpha_1+\alpha_2+\cdots+\alpha_n,\\[6pt]\alpha,i=(\alpha_1,\alpha_2,\ldots ,\alpha_{i-1},\alpha_i+1,\alpha_{i+1},\ldots,\alpha_n),\end{array}$$

where D j is the operator of the total differentiation with respect to x j (j=1,2,…,n).

Any local Lie group of point transformations

$$\bar{x}_i=f^i(x,u;a),\quad \bar {u}^j=\varphi ^j(x,u;a),$$
(2.3.2)

is defined by the transformations of the independent and dependent variablesFootnote 15 with the generator

$$X=\xi^i(x,u)\partial_{x_i}+\eta^j(x,u)\partial_{u^j},$$

where

$$\xi^i(x,u)=\frac{df^i}{da}(x,u;0),\quad \eta^j(x,u)=\frac{d\varphi ^j}{da}(x,u;0).$$

Here a is the group parameter.

Lie groups admitted in the sense of the geometrical approach have the property to transform any solution of the system of equations (S) into a solution of the same system. This property can be taken as a definition of the admitted Lie group of partial differential equations (S).

Definition 2.3.1

A Lie group (2.3.2) is admitted by system (S) if it maps any solution of (S) into a solution of the same system.

This definition supposes that the system (S) has at least one solution.

Recall that the determining equations for the admitted group are obtained as follows. Let a function u=u o (x) be given. Substituting it into the first part of transformation (2.3.2) and using the inverse function theorem one finds

$$x=g^x(\bar{x},a).$$
(2.3.3)

The transformed function \(u_{a}(\bar{x})\) is given by the formula

$$u_a(\bar{x})=f^u(g^x(\bar{x},a),u_o(g^x(\bar{x},a));a).$$

The transformed derivatives are \(\bar{p}_{\alpha}^{j}(\bar {x},a)=\varphi_{\alpha}^{j}(x,u_{o}(x),p(x);a)\), where p(x) are derivatives of the function u o (x), x is defined by (2.3.3), and the functions \(\varphi_{\alpha}^{j}(x,u,p;a)\) are defined by the prolongation formulae. The prolongation formulae are obtained by requiring the tangent conditions

$$du^j-p_k^jdx_k=0,\quad dp_\alpha^j-p_{\alpha,k}^jdx_k=0,$$
(2.3.4)

to be invariant. For example, for the first order derivatives

$$d\bar{u}^j-\bar{p}_k^jd\bar{x}_k=\big((\varphi_{x_k}^j+\varphi_{u^i}^jp_k^i)-\bar {p}_s^j(f_{x_k}^s+f_{u^i}^sp_k^i)\big)dx_k=0$$

or

$$\varPhi-PF=0,$$

where Φ, F and P are matrices with the entries

$$\begin{array}{c}\varPhi^j_k=\varphi_{x_k}^j+\varphi_{u^i}^jp_k^i,\quad F^s_k=f_{x_k}^s+f_{u^i}^sp_k^i,\quad P^j_s=p^j_s\\[6pt](s,k=1,2,\ldots,n;\ j=1,2,\ldots,m).\end{array}$$

Since the matrix F is invertible in a neighborhood of a=0, one has

$$P=\varPhi F^{-1}.$$

For higher order derivatives the prolongation formulae are obtained recurrently.
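As an illustrative aside (the particular group and the sample function below are assumptions made only for this sketch, not part of the original text), the rule P=ΦF^{−1} can be checked on a simple example with one independent and one dependent variable: the derivative transformed according to this rule coincides with the derivative of the transformed function.

```python
import sympy as sp

x, xb, u, p, a = sp.symbols('x xbar u p a')

# point transformation (an illustrative scaling group): x_bar = e^a x, u_bar = e^{2a} u
f   = sp.exp(a) * x
phi = sp.exp(2*a) * u

# Phi = phi_x + phi_u p,  F = f_x + f_u p,  and p_bar = Phi F^{-1}
Phi = sp.diff(phi, x) + sp.diff(phi, u) * p
F   = sp.diff(f, x) + sp.diff(f, u) * p
p_bar = Phi / F

# evaluate on the sample function u_o(x) = sin(x)
u0 = sp.sin(x)
p_bar_on_u0 = p_bar.subs({u: u0, p: sp.diff(u0, x)})
# express the result in terms of x_bar, i.e. substitute x = e^{-a} x_bar
p_bar_on_u0 = p_bar_on_u0.subs(x, sp.exp(-a) * xb)

# direct computation: u_a(x_bar) = e^{2a} sin(e^{-a} x_bar), then d/dx_bar
u_a = sp.exp(2*a) * sp.sin(sp.exp(-a) * xb)
print(sp.simplify(p_bar_on_u0 - sp.diff(u_a, xb)))   # -> 0
```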

Let the function u o (x) be a solution of the system (S). Since, by the given definition, any transformation of the admitted Lie group maps any solution into a solution of the same system, the function \(u_{a}(\bar{x})\) is also a solution of the system (S):

$$\bar{S}(\bar{x},a)=S(\bar{x},u_a(\bar {x}),\bar{p}_\alpha^j(\bar{x},a))=0.$$

In the last equations, instead of the independent variables \(\bar {x}\), a one can consider the independent variables x, a:

$$\bar{\bar{S}}(x,a)=\bar{S}(f^x(x,u_o(x);a),a).$$

Differentiating the functions \(\bar{\bar{S}}(x,a)\) or \(\bar{S}(\bar{x},a)\) with respect to the group parameter a and setting a=0, one obtains the determining equations

$$\bigg(\frac{\partial}{\partial a}\bar {\bar{S}}(x,a)\bigg)(x,0)=(XS)(x,u_o(x),p(x))=0$$
(2.3.5)

or

$$\bigg(\frac{\partial}{\partial a}\bar {S}(\bar{x},a)\bigg)(\bar{x},0)=(\tilde{X}S)(x,u_o(x),p(x))=0.$$
(2.3.6)

The operator \(\tilde{X}\) is the canonical Lie–Bäcklund operator [34]

$$\tilde{X}=\bar{\eta}^j\partial_{u^j}+D^\alpha\bar {\eta}^j\partial_{p_\alpha^j}$$

equivalent to the generator X. Here

$$\bar{\eta}^j=\eta^j(x,u)-\xi ^\beta(x,u)p_\beta^j.$$

Since the function u o (x) is a solution of the system (S), the solutions of the determining equations (2.3.5) and (2.3.6) coincide.

For solving the determining equations one needs to know arbitrary elements. In the geometrical definitions the arbitrary elements are coordinates of the manifolds. In the case of the determining equations (2.3.5) or (2.3.6) for establishing the arbitrary elements one can use, for example, a knowledge of the existence of a solution of the Cauchy problem.

From one point of view the last definition (related to a solution) is more difficult to apply than the geometrical definitions. From another point of view, however, this definition allows the construction of the determining equations for more general objects than differential equations: integro-differential equations, functional differential equations, or even more general types of equations.

2.3.2 The Approach for Equations with Nonlocal Operators

Let us consider an abstract system of integro-differential equations:

$$\varPhi(x,u)=0.$$
(2.3.7)

Here as above u is the vector of the dependent variables, x is the vector of the independent variables. Assume that a one-parameter Lie group G 1(X) of transformations

$$\bar{x}=f^x(x,u;a),\quad \bar{u}=f^u(x,u;a)$$
(2.3.8)

with the generator

$$X=\eta^j(x,u)\partial_{u_j}+\xi^i(x,u)\partial_{x_i},$$

transforms a solution u 0(x) of (2.3.7) into the solution u a (x) of the same equations. The transformed function u a (x) is

$$u_a(\bar{x})=f^u(x,u(x);a),$$

where \(x=\psi^{x}(\bar{x};a)\) is substituted into this expression. The function \(\psi^{x}(\bar{x};a)\) is found from the relation \(\bar{x}=f^{x}(x,u(x);a)\) using the inverse function theorem. Differentiating the equations Φ(x,u a (x)) with respect to the group parameter a and considering the result for the value a=0, one obtains the equations

$$\bigg({\frac{\partial}{\partial a}}\varPhi (x,u_a(x))\bigg)_{|a=0}=0.$$
(2.3.9)

For integro-differential equations the inverse function has to exist on some interval. Because of the local character of the inverse function theorem, this is one of the obstacles to applying the solution-based definition of an admitted Lie group to integro-differential equations. However, notice that (2.3.9) coincides with the equations

$$(\bar{X}\varPhi)(x,u_0(x))=0$$
(2.3.10)

obtained by the action of the canonical Lie–Bäcklund operator \(\bar{X}\), which is equivalent to the generator X:

$$\bar{X}=\bar{\eta}^j\partial_{u^j},$$

where \(\bar{\eta}^{j}=\eta^{j}(x,u)-\xi^{i}(x,u)p_{i}^{j}\). The actions of the derivatives \(\partial_{u^{j}}\) and \(\partial _{p_{\alpha}^{j}}\) are considered in terms of the Frechet derivatives. Equations (2.3.10) can be constructed without requiring the property that the Lie group should transform a solution into a solution. This allows the following definition of an admitted Lie group.

Definition 2.3.2

A one-parameter Lie group G 1 of transformations (2.3.8) is a symmetry group admitted by (2.3.7) if G 1 satisfies (2.3.10) for any solution u 0(x) of (2.3.7). Equations (2.3.10) are called the determining equations.

Remark 2.3.1

For a system of differential equations (without integral terms) the determining equations (2.3.10) coincide with the determining equations (2.3.6).

The way of obtaining determining equations for integro-differential equations is similar (and not more difficult) to the way used for differential equations. Notice also that the determining equations of integro-differential equations are integro-differential.

The advantage of the given definition of an admitted Lie group is that it provides a constructive method for obtaining the admitted group. Another advantage of this definition is the possibility to apply it for seeking Lie–Bäcklund transformations,Footnote 16 conditional symmetries and other types of symmetries for integro-differential equations.

The main difficulty in obtaining an admitted Lie group consists of solving the determining equations. There are some methods for simplifying determining equations. As for partial differential equations the main method for simplification is their splitting. It should be noted that, contrary to differential equations, the splitting of integro-differential equations depends on the studied equations. Since the determining equations (2.3.10) have to be satisfied for any solution of the original equations (2.3.7), the arbitrariness of the solution u 0(x) plays a key role in the process of solving the determining equations. The important circumstance in this process is the knowledge of the properties of solutions of the original equations. For example, one of these properties is the theorem of the existence of a solution of the Cauchy problem.

Besides splitting the determining equations, there are some other ways to simplify them. For example, for the Vlasov-type or Benney kinetic equations a specific approach was proposed in [40]. The principal feature of this approach consists in treating the local and nonlocal variables in the determining equations equally. It allows one to separate these equations into “local” and “nonlocal” parts. For solving the local part of the determining equations the classical group analysis method is applied. As a result one gets a group generator which defines a so-called intermediate symmetry. In the final step, using the information obtained from the intermediate symmetry, the nonlocal determining equations are solved by the authors' special procedure of variational differentiation (see Chap. 4 for details).

Remark 2.3.2

A geometrical approach for constructing an admitted Lie group for integro-differential equations is applied in [18, 19].

2.4 Illustrative Examples

This section deals with two examples which illustrate the method developed in the previous section. In the first example the method is applied to the Fourier-image of the spatially homogeneous isotropic kinetic Boltzmann equation. This is an integro-differential equation which contains a nonlinear integral operator with respect to a so-called inner variable. The complete solution of the determining equation is given [28] by constructing necessary conditions for the coefficients of the admitted generator. These conditions are obtained by using a particular class of solutions of the original integro-differential equation. It is worth noting that this particular class of solutions allows one to find the general solution of the determining equation.

Another example considered in this section is an application of the developed method to the equations describing one-dimensional motion of a viscoelastic continuum. The corresponding system of equations includes a linear Volterra integral equation of the second kind. The method of solving the determining equations in this case differs from that of the previous example. The arbitrariness of the initial data in the Cauchy problem allows one to split the determining equations. Solving the split equations, which are partial differential equations, one finds the general solution of the determining equations.

2.4.1 The Fourier-Image of the Spatially Homogeneous Isotropic Boltzmann Equation

In the case of the spatially homogeneous and isotropic Boltzmann equation the corresponding distribution function f(v,t) depends only on the modulus of the molecular velocity v and on time t. The Fourier-image of the spatially homogeneous and isotropic Boltzmann equation was derived in [4]. The considered equation is (2.2.65):

$$\varPhi\equiv\varphi_t(x,t)+\varphi(x,t)\varphi (0,t)-\int\limits_0^1\varphi(xs,t)\varphi(x(1-s),t)\,ds=0.$$
(2.4.1)

Here \(\varphi(x,t)=\tilde{\varphi}(k,t)\) with \(x=k^{2}/2\), and the Fourier transform \(\tilde{\varphi}(k,t)\) of the distribution function f(v,t) is defined as

$$\tilde{\varphi}(k,t)=\frac{4\pi}{k}\int\limits_0^\infty v\sin(kv)f(v,t)\,dv.$$

Further the existence of a solution of the Cauchy problem of (2.4.1) with the initial data

$$\varphi(x,t_0)=\varphi_0(x)$$
(2.4.2)

is used.Footnote 17

By virtue of the initial conditions (2.4.2) and the equation (2.4.1), one can find the derivatives of the function φ(x,t) at time t=t 0:

$$\begin{array}{rcl}\varphi_t(x,t_0)&=&-\varphi_0(0)\varphi_0(x)+\int\limits_0^1\varphi _0(sx)\varphi_0((1-s)x)\,ds,\\[6pt]\varphi_{xt}(x,t_0)&=&-\varphi_0(0)\varphi_0^{\prime}(x)+2\int \limits_0^1s\varphi_0^{\prime}(sx)\varphi_0((1-s)x)\,ds,\\[6pt]\varphi_{tt}(x,t_0)&=&\varphi_0^2(0)\varphi_0(x)-3\varphi_0(0)\int \limits_0^1\varphi_0(sx)\varphi_0((1-s)x)\,ds\\[6pt]&&{}+2\int\limits_0^1\int\limits_0^1\varphi_0((1-s)x)\varphi_0(ss^{\prime}x)\varphi_0(s(1-s^{\prime})x)\,ds\,ds^{\prime}.\end{array}$$
(2.4.3)

2.4.1.1 Admitted Lie Group

The generator of the admitted Lie group is sought in the form

$$X=\xi(x,t,\varphi)\partial_x+\eta(x,t,\varphi)\partial_t+\zeta (x,t,\varphi)\partial_\varphi.$$

The determining equation for (2.4.1) is

$$D_t\psi(x,t)+\psi(0,t)\varphi(x,t)+\psi(x,t)\varphi(0,t)-2\int\limits_0^1\varphi(x(1-s),t)\psi(xs,t)\,ds=0,$$
(2.4.4)

where φ(x,t) is an arbitrary solution of (2.4.1), D t is the total derivative with respect to t, and the function ψ(x,t) is

$$\psi(x,t)=\zeta(x,t,\varphi(x,t))-\xi(x,t,\varphi(x,t))\varphi _x(x,t)-\eta(x,t,\varphi(x,t))\varphi_t(x,t).$$

In the determining equation (2.4.4) the derivatives φ t , φ xt and φ tt are defined by formulae (2.4.3).

The method of solving the determining equation (2.4.4) consists in studying the properties of the functions ξ(x,t,φ), η(x,t,φ) and ζ(x,t,φ). These properties are obtained by sequentially considering the determining equation on a particular class of solutions of (2.4.1). This class of solutions is defined by the initial conditions

$$\varphi_0(x)=bx^n$$
(2.4.5)

at the given (arbitrary) time t=t 0. Here n is a positive integer. The determining equation is considered for any arbitrary initial time t 0.

While solving the determining equation we use the following properties. Multiplying any solution of (2.4.1) by e^{λx}, one maps it into a solution of the same equation (2.4.1). Taking into account the β-function [39]

$$B(m+1,n+1)=\int\limits_0^1s^m(1-s)^n\,ds=\frac{m!n!}{(m+n+1)!}$$

one uses the notations

$$P_n=\frac{(n!)^2}{(2n+1)!},\quad Q_n=2P_n\frac{(2n)!n!}{(3n+1)!}.$$

Notice that

$$2P_{n+1}=P_n\frac{1}{2+\frac{1}{n+1}}$$

and

$$\lim_{n\to\infty}P_n=0,\quad \lim_{n\to\infty}Q_n=0,\quad \lim_{n\to \infty}\frac{Q_n}{P_n}=0.$$

Assume that the coefficients of the infinitesimal generator X are represented by the formal Taylor series with respect to φ:

$$\xi(x,t,\varphi)=\sum_{l\geq0}q_l(x,t)\varphi^l,\qquad \eta(x,t,\varphi)=\sum_{l\geq0}r_l(x,t)\varphi^l,\qquad \zeta(x,t,\varphi)=\sum_{l\geq0}p_l(x,t)\varphi^l.$$

Equation (2.4.4) is studied by setting n=0,1,2,…, and varying the parameter b.

If n=0, then the determining equation (2.4.4) becomes

$$\frac{\partial\hat{\zeta}(x,t)}{\partial t}+b\bigl(\hat{\zeta}(0,t)+\hat{\zeta}(x,t)\bigr)-2b\int\limits_0^1\hat{\zeta}(xs,t)\,ds=0.$$

From this equation one obtains

$$\begin{array}{c}\frac{\partial p_0}{\partial t}=0,\quad \frac{\partial p_{l+1}}{\partial t}(x,t)+p_l(x,t)+p_l(0,t)-2\int\limits_0^1p_l(xs,t)\,ds=0\qquad \\[9pt](l=0,1,\ldots).\end{array}$$
(2.4.6)

Here and below \(\hat{\zeta}\), \(\hat{\xi}\) and \(\hat{\eta}\) are the coefficients of the operator X evaluated for the initial data (2.4.5).

If n≥1 in (2.4.5) one finds that

$$\begin{array}{c}\varphi_t(x,t_0)=P_nb^2x^{2n},\quad \varphi_x(x,t_0)=nbx^{n-1},\\[6pt]\varphi_{tt}(x,t_0)=Q_nb^3x^{3n},\quad \varphi_{tx}(x,t_0)=2nP_nb^2x^{2n-1}.\end{array}$$
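As an aside, these formulae are easy to confirm with a computer algebra system; the following minimal sympy sketch (not from the original text) evaluates the integrals in (2.4.3) for φ_0(x)=bx^n and several integer values of n.

```python
import sympy as sp

x, b, s, s1 = sp.symbols('x b s s1', positive=True)

P = lambda n: sp.factorial(n)**2 / sp.factorial(2*n + 1)
Q = lambda n: 2 * P(n) * sp.factorial(2*n) * sp.factorial(n) / sp.factorial(3*n + 1)

for n in range(1, 5):
    phi0 = lambda arg, n=n: b * arg**n
    # for n >= 1 one has phi_0(0) = 0, so only the integral terms of (2.4.3) survive
    phi_t = sp.integrate(phi0(s*x) * phi0((1 - s)*x), (s, 0, 1))
    phi_tt = 2 * sp.integrate(phi0((1 - s)*x) * phi0(s*s1*x) * phi0(s*(1 - s1)*x),
                              (s1, 0, 1), (s, 0, 1))
    print(n,
          sp.simplify(phi_t - P(n) * b**2 * x**(2*n)),    # -> 0
          sp.simplify(phi_tt - Q(n) * b**3 * x**(3*n)))   # -> 0
```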

The determining equation (2.4.4) becomes

$$\begin{array}{l}\displaystyle\hat{\zeta}_t+b\biggl(-nx^{n-1}\hat{\xi}_t+x^n\hat{\zeta}(0,t)-2x^n\int\limits_0^1(1-s)^n\hat{\zeta}(xs,t)\,ds\biggr)\\[9pt]\displaystyle\quad{}+b^2\biggl(-P_nx^{2n}\hat{\eta}_t+P_nx^{2n}\hat{\zeta}_\varphi-2nP_nx^{2n-1}\hat{\xi}-\delta_{n1}x\hat{\xi}(0,t)+2nx^{2n-1}\int\limits_0^1(1-s)^ns^{n-1}\hat{\xi}(xs,t)\,ds\biggr)\\[9pt]\displaystyle\quad{}+b^3\biggl(-nP_nx^{3n-1}\hat{\xi}_\varphi-Q_nx^{3n}\hat{\eta}+2P_nx^{3n}\int\limits_0^1(1-s)^ns^{2n}\hat{\eta}(xs,t)\,ds\biggr)-b^4P_n^2x^{4n}\hat{\eta}_\varphi=0.\end{array}$$
(2.4.7)

Using the arbitrariness of the value b, the equation (2.4.7) can be split into a series of equations by equating to zero the coefficients of b k (k=0,1,…) in the left-hand side of (2.4.7).

For k=0 the corresponding coefficient in the left-hand side of (2.4.7) vanishes because of the first equation of (2.4.6).

For k=1, the equation (2.4.7) yields:

$$x\Biggl(-p_0(x,t)+2\int\limits_0^1(1-(1-s)^n)p_0(xs,t)\,ds\Biggr)-n\frac {\partial q_0(x,t)}{\partial t}=0.$$

By virtue of arbitrariness of n, one finds

$$p_0(x,t)=0,\quad \frac{\partial q_0(x,t)}{\partial t}=0.$$

These relations provide that \(\hat{\zeta}(0,t)=0\).

For k=2 one obtains the equation

$$\begin{array}{l}\displaystyle x\biggl(-p_1(x,t)-p_1(0,t)+2\int\limits_0^1\bigl(1-(1-s)^ns^n\bigr)p_1(xs,t)\,ds+P_n\Bigl(p_1(x,t)-\frac{\partial r_0(x,t)}{\partial t}\Bigr)\biggr)\\[9pt]\displaystyle\quad{}-n\frac{\partial q_1(x,t)}{\partial t}-2nP_nq_0(x,t)+2n\int\limits_0^1(1-s)^ns^{n-1}q_0(xs,t)\,ds=0.\end{array}$$

Dividing consecutively by n and by P_n and letting n→∞, one obtains

$$p_1(x,t)=c_0+c_1x,\qquad \frac{\partial q_1(x,t)}{\partial t}=0,\qquad q_0(x,t)=c_2x,\qquad \frac{\partial r_0(x,t)}{\partial t}=-c_0,$$

where c 0, c 1, c 2 are arbitrary constants.

For k=3, one has

$$\begin{array}{l}\displaystyle x^{n+1}\biggl(-p_2(x,t)-p_2(0,t)+2\int\limits_0^1\bigl(1-(1-s)^ns^{2n}\bigr)p_2(xs,t)\,ds-P_n\frac{\partial r_1(x,t)}{\partial t}+2P_np_2(x,t)\\[9pt]\displaystyle\qquad\qquad{}+2P_n\int\limits_0^1(1-s)^ns^{2n}r_0(xs,t)\,ds-Q_nr_0(x,t)\biggr)\\[9pt]\displaystyle\quad{}+x^n\biggl(-n\frac{\partial q_2(x,t)}{\partial t}-3nP_nq_1(x,t)+2n\int\limits_0^1(1-s)^ns^{2n-1}q_1(xs,t)\,ds\biggr)=0.\end{array}$$

Similar to the previous case (k=2) one finds

$$q_1(x,t)=0,\qquad \frac{\partial q_2(x,t)}{\partial t}=0,\qquad p_2(x,t)=0,\qquad r_0(x,t)=c_3-c_0t,$$

where c 3 is an arbitrary constant.

For k=4+α (α=0,1,…), analysis of the corresponding coefficients of the equation (2.4.7) similarly yields

$$p_{\alpha+3}(x,t)=0,\quad q_{\alpha+2}(x,t)=0,\quad r_{\alpha +1}(x,t)=0\quad (\alpha=0,1,\ldots).$$

Thus, from the above equations, one finds

$$\xi=c_2x,\quad \eta=c_3-c_0t,\quad \zeta=(c_1x+c_0)\varphi $$
(2.4.8)

with the arbitrary constants c 0,c 1,c 2,c 3. Formulae (2.4.8) are the necessary conditions for the coefficients of the generator X to satisfy the determining equation (2.4.4). One can directly check that they also satisfy the determining equation (2.4.4). Thus, the calculations provide the unique solution of the determining equation (2.4.4).

Because of the uniqueness of the obtained solution of the determining equation (2.4.4) one finds a constructive proof of the next statement.

Theorem 2.4.1

The four-dimensional Lie algebra L 4={X 1,X 2,X 3,X 4} spanned by the generators

$$X_1=\partial_t,\quad X_2=x\varphi\partial_\varphi,\quad X_3=x\partial _x,\quad X_4=\varphi\partial_\varphi-t\partial_t$$
(2.4.9)

defines the complete Lie group G 4 admitted by (2.4.1).

2.4.1.2 Invariant Solutions

For constructing an invariant solution one has to choose a subalgebra. Since any subalgebra is equivalent to one of the representatives of an optimal system of admitted subalgebras, it is sufficient to study invariant solutions corresponding to the optimal system of subalgebras. Choosing a subalgebra from the optimal system of subalgebras, finding invariants of the subalgebra, and assuming dependence between these invariants, one obtains the representation of an invariant solution. Substituting this representation into (2.4.1) one gets the reduced equations: for the invariant solutions the original equation is reduced to the equation for a function with a single independent variable.

The optimal system of one-dimensional subalgebras of L 4 consists of the subalgebras

$$X_1,\ X_4+cX_3,\ X_2-X_1,\ X_4\pm X_2,\ X_1+X_3,$$
(2.4.10)

where c is an arbitrary constant. The corresponding representations of the invariant solutions are the following.

The invariants of the subalgebra {X 1} are φ and x. Hence, an invariant solution has the representation φ=g(x), where the function g has to satisfy the equation

$$g(x)g(0)-\int\limits_0^1g(xs)g(x(1-s))\,ds=0.$$
(2.4.11)

The Maxwell solution φ=pe λx is an invariant solution with respect to this subalgebra. Let a solution of (2.4.11) be represented through the formal series g(x)=∑ j≥0 a j x j. For the coefficients of the formal series one obtains

$$a_0\biggl(1-\frac{2}{k+1}\biggr)a_k=\sum_{j=1}^{k-1}\frac{j!(k-j)!}{(k+1)!}a_ja_{k-j}\quad (k=2,3,\ldots).$$

Notice that the value a_0=0 leads to the trivial case g=0, so one has to assume a_0≠0. Because (2.4.11) admits scaling of the function g, one can set a_0=1. Since multiplication by the function e^{λx} transforms any solution of (2.4.11) into another solution, one can also set a_1=0. Hence, all other coefficients vanish, a_j=0 (j=2,3,…). Thus, the general solution of (2.4.11) is g=e^{λx}. This means the uniqueness of the absolute Maxwell distribution, as was mentioned in the previous section.

In the case of the subalgebra {X 4+cX 3} the representation of an invariant solution is φ=t −1 g(y), where y=xt c, and the function g has to satisfy the equation

$$cyg^{\prime}(y)-g(y)+g(y)g(0)-\int\limits_0^1g(ys)g(y(1-s))\,ds=0.$$
(2.4.12)

Assuming that a solution is represented through the formal series g(y)=∑ j≥0 a j y j, one obtains the equations for the coefficients

$$a_0=0,\ (c-1)a_1=0,\ (ck-1)a_k=\sum_{j=0}^k\frac{j!(k-j)!}{(k+1)!}a_ja_{k-j}\quad (k=2,3,\ldots).$$

The case where ck≠1 for all k (k=1,2,…) leads to the trivial solution g=0 of (2.4.12). If c=α −1 where α is integer, then a k =0 (k=1,2,…,α−1), the coefficient a α is arbitrary, and for other coefficients a k (k=α+1,α+2,…) one obtains the recurrence formula

$$(\alpha^{-1}k-1)a_k=\sum_{j=1}^{k-1}\frac{j!(k-j)!}{(k+1)!}a_ja_{k-j}.$$

The representation of an invariant solution of the subalgebra {X_2−X_1} is φ=e^{−xt}g(x), where the function g satisfies the equation

$$-xg(x)+g(x)g(0)-\int\limits_0^1g(xs)g(x(1-s))\,ds=0.$$

If one assumes that a solution can be represented by the formal series g(x)=∑_{j≥0}a_jx^j, then the first two equations for the coefficients, obtained after substitution, are

$$a_0=0,\quad a_1(6+a_1)=0.$$

The case a 1=0 leads to the trivial solution g=0. If a 1≠0, then the other coefficients are defined by the recurrent formula

$$\biggl(1-\frac{6}{k(k+1)}\biggr)a_{k-1}=-\sum_{j=1}^{k-2}\frac{j!(k-j)!}{(k+1)!}a_ja_{k-j}\quad (k=3,4,\ldots).$$

An invariant solution of the subalgebra {X 1+X 3} has the form φ=g(y), where y=xe^{−t}. The function g has to satisfy the equation

$$-yg^{\prime}(y)+g(y)g(0)-\int\limits_0^1g(ys)g(y(1-s))\,ds=0.$$
(2.4.13)

The solution of this equation g=6e y(1−y) is known as the BKW-solution [3, 42].Footnote 18 This solution was obtained by assuming that the series g(y)=e y j≥0 a j y j can be terminated. In fact, substituting the function g(y)=e y j≥0 a j y j into (2.4.13) for the coefficients a k one obtains the equations

$$a_0+a_1=0,\qquad 2(a_0-6)a_2=a_1(6+a_1),\qquad 3(a_0-6)a_3=a_2(6+a_1),$$
$$\biggl(a_0\Bigl(1-\frac{2}{k+1}\Bigr)-k\biggr)a_k=a_{k-1}\biggl(1+\frac{2a_1}{k(k+1)}\biggr)+\sum_{j=2}^{k-2}\frac{j!(k-j)!}{(k+1)!}a_ja_{k-j}\qquad(k=4,5,\ldots).$$
(2.4.14)

One can check that the choice a 0=6, a 1=−6, and a k =0 (k=2,3,…) satisfies (2.4.14).
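As an aside (not part of the original derivation), the terminated series can also be confirmed by substituting g(y)=6e^{y}(1−y) directly into (2.4.13) with a computer algebra system:

```python
import sympy as sp

y, s = sp.symbols('y s', positive=True)
g = lambda arg: 6 * (1 - arg) * sp.exp(arg)

# convolution term of (2.4.13)
conv = sp.integrate(sp.simplify(g(y*s) * g(y*(1 - s))), (s, 0, 1))
residual = -y * sp.diff(g(y), y) + g(y) * g(0) - conv
print(sp.simplify(residual))   # -> 0
```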

A representation of an invariant solution of the subalgebra {X 4±X 2} is φ=t −(1±x) g(x), where the function g has to satisfy the equation

$$(1\pm x)g(x)-g(x)g(0)+\int\limits_0^1g(xs)g(x(1-s))\,ds=0.$$

2.4.2 Equations of One-Dimensional Viscoelastic Continuum Motion

One of the models describing the one-dimensional motion of a viscoelastic continuum is based on the equations [60]

$$v_t=\sigma_x,\ e_t=v_x,\ \sigma +\int\limits_{0}^tK(t,\tau)\sigma(x,\tau)\,d\tau=\varphi(e),$$
(2.4.15)

where the time t and the distance x are the independent variables, and the stress σ, the velocity v and the strain e are the dependent variables. The Volterra integral equation in the system (2.4.15) describes the dependence of the stress σ on the strain e; K(t,τ) is a heredity kernel and φ(e) is a known function. It is assumed that K≠0 and φ′(e)≠0.

Let the infinitesimal generator of a Lie group admitted by (2.4.15) be

$$X=\zeta^e\partial_e+\zeta^v\partial_v+\zeta^\sigma\partial _\sigma +\xi^x\partial_x+\xi^t\partial_t$$

with the coefficients depending on (t,x,v,e,σ). The determining equations are

$$\bigl(D_t\widehat{\zeta^v}-D_x\widehat{\zeta^\sigma}\bigr)_{|(S)}=0,\qquad \bigl(D_t\widehat{\zeta^e}-D_x\widehat{\zeta^v}\bigr)_{|(S)}=0,$$
(2.4.16)
$$\biggl(\varphi'\widehat{\zeta^e}-\widehat{\zeta^\sigma}-\int\limits_0^tK(t,\tau)\widehat{\zeta^\sigma}(x,\tau)\,d\tau\biggr)_{|(S)}=0,$$
(2.4.17)

where

$$\widehat{\zeta^e}=\zeta^e-\xi^xe_x-\xi^te_t,\ \widehat{\zeta ^v}=\zeta ^v-\xi^xv_x-\xi^tv_t,\ \widehat{\zeta^\sigma}=\zeta^\sigma-\xi ^x\sigma_x-\xi^t\sigma_t$$

with the functions e(x,t), v(x,t), σ(x,t) satisfying (2.4.15) substituted in them. The complete set of solutions of the determining equations is sought under the assumption that there exists a solution of the Cauchy problemFootnote 19

$$e(x_o,t)=e_0(t),\ v(x_o,t)=v_o(t),\ \sigma(x_o,t)=\sigma_o(t)$$

with arbitrary sufficiently smooth functions e 0(t),v o (t),σ o (t).

Derivatives of the functions e(x,t),v(x,t),σ(x,t) at the point x=x o can be found from (2.4.15):

$$v_t=v_o^{\prime},\ \sigma_t=\sigma_o^{\prime },\ \sigma_x=v_o^{\prime},\ v_x=e_t=\frac{g_1}{\varphi^{\prime}},\ e_x=\frac{g_2}{\varphi^{\prime}},$$
(2.4.18)

where

$$g_1=\sigma_o^{\prime}+K(t,t)\sigma_o+\int\limits_{0}^tK_t(t,\tau)\sigma _o(\tau )\,d\tau,\quad g_2=v_o^{\prime}+\int\limits_{0}^tK(t,\tau)v_o^{\prime}(\tau )\,d\tau .$$

Substituting the derivatives v_t, σ_t, σ_x, v_x, e_t, e_x into the determining equations (2.4.16), considered at the point x_o, one obtains equations which can be split with respect to \(v_{o},v_{o}^{\prime},v_{o}^{\prime}+\int_{0}^{t}K(t,\tau)v_{o}^{\prime}(\tau)\,d\tau\). In fact, setting the function v_o(t) such that

$$v_o(t)=a_1+a_2(t-t_o)+a_3\frac{(t-t_o)^{n+1}}{(n+1)}\quad (n\geq1),$$

one finds at the time t=t o :

$$\begin{array}{l}v_o(t_o)=a_1,\quad v_o^{\prime}(t_o)=a_2,\\[9pt]v_o^{\prime }(t_o)+\int\limits_{0}^{t_o}K(t_o,\tau)v_o^{\prime}(\tau)\,d\tau \\[9pt]\quad {}=a_2\Biggl(1+\int\limits_{0}^{t_o}K(t_o,\tau)\,d\tau\Bigg)+a_3\int\limits_{0}^{t_o}K(t_o,\tau)(\tau-t_o)^n\,d\tau.\end{array}$$
(2.4.19)

Since the set of the functions (tt o )n (n≥0) is complete in the space L 2(0,t o ], and t o is such that K(t o ,τ)≠0, there exists n for which \(\int_{0}^{t_{o}}K(t_{o},\tau)(\tau-t_{o})^{n}\,d\tau\neq0\). Hence, for the given values \(v_{o}(t_{o}),v_{o}^{\prime}(t_{o}),\int_{0}^{t_{o}}K(t_{o},\tau )v_{o}^{\prime}(\tau)d\tau\) one can solve (2.4.19) with respect to the coefficients a 1,a 2,a 3. This means that the values \(v_{o},v_{o}^{\prime },v_{o}^{\prime}+\int_{0}^{t}K(t,\tau)v_{o}^{\prime}(\tau)\,d\tau\) are arbitrary and one can split the determining equations with respect to them. Splitting the determining equations, one finds

$$\begin{array}{c}\xi_v=\xi_e=\xi_\sigma=0,\quad \eta_v=\eta_e=\eta_\sigma=0,\quad \zeta _e^v=-\xi_t,\\[6pt]\xi_x-\eta_t=\zeta_\sigma^\sigma-\zeta_v^v,\quad \zeta_e^\sigma =0,\quad \zeta_v^e-\zeta_\sigma^v=-\eta_x,\ \end{array}$$
(2.4.20)
$$\begin{array}{c}(\zeta_\sigma^v+\eta_x)\sigma_o^{\prime}+\zeta_t^v-\zeta_x^\sigma=g_1(2\xi_t+\zeta_v^\sigma),\\[6pt]\zeta_\sigma^e\sigma_o^{\prime}+\zeta_t^e-\zeta_x^v=g_1(\eta_t+\zeta_v^v-\xi_x-\zeta_e^e).\end{array}$$
(2.4.21)
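As an illustrative aside, the invertibility argument used above for splitting with respect to \(v_{o},v_{o}^{\prime},v_{o}^{\prime}+\int_{0}^{t}K(t,\tau)v_{o}^{\prime}(\tau)\,d\tau\) can be checked numerically. In the following sketch the kernel K(t,τ)=e^{t−τ}, the point t_o=1 and the exponent n=1 are illustrative assumptions (not from the book); the script assembles the linear map (2.4.19) from (a_1,a_2,a_3) to the three prescribed values and verifies that it is nonsingular.

```python
import numpy as np
from scipy.integrate import quad

t_o, n = 1.0, 1
K = lambda t, tau: np.exp(t - tau)          # hypothetical nondegenerate kernel

I0 = quad(lambda tau: K(t_o, tau), 0.0, t_o)[0]
In = quad(lambda tau: K(t_o, tau) * (tau - t_o)**n, 0.0, t_o)[0]

# linear map (a1, a2, a3) -> (v_o(t_o), v_o'(t_o), v_o'(t_o) + int_0^{t_o} K v_o' dtau)
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 1.0 + I0, In]])
print(np.linalg.det(A))                      # nonzero, so (2.4.19) is solvable
target = np.array([0.3, -1.2, 2.5])          # arbitrarily prescribed values
print(np.linalg.solve(A, target))            # the corresponding a1, a2, a3
```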

Equations (2.4.21) also can be split with respect to

$$\sigma_o(t_o),\quad \sigma_o^{\prime}(t_o),\quad e(t_o),\quad \sigma _o^{\prime }(t_o)+K(t_o,t_o)\sigma_o(t_o)+\int\limits_{0}^{t_o}K_t(t_o,\tau)\sigma _o(\tau)\,d\tau.$$

In fact, let

$$\sigma_o(\tau)=a_1+a_2(\tau -t_o)+(t_o-\tau)^2(a_3\psi_1(\tau)+a_4\psi_2(\tau)).$$

If the corresponding determinant is equal to zero for all functions ψ 1,ψ 2∈L 2[0,t o ], then by virtue of K(t,τ)≠0 one obtains that there exists a function f(t) such that

$$K_t(t,\tau)=f(t)K(t,\tau).$$
(2.4.22)

The general solution of this equation in some neighborhood of the point t=t o has the form

$$K(t,\tau)=h(t)g(\tau),$$
(2.4.23)

where f(t)=h′(t)/h(t). The kernels of the typeFootnote 20 (2.4.23) are excluded from the study, because for these kernels the system of equations (2.4.15) is reduced to a system of differential equations. Thus, for nondegenerate kernels, (2.4.21) can be split with respect to the considered values:

$$\begin{array}{c}\zeta_\sigma^v+\eta_x=0,\qquad \zeta_t^v-\zeta_x^\sigma=0,\qquad 2\xi_t+\zeta_v^\sigma=0,\qquad \zeta_\sigma^e=0,\\[6pt]\zeta_t^e-\zeta_x^v=0,\qquad \eta_t+\zeta_v^v-\xi_x-\zeta_e^e=0.\end{array}$$
(2.4.24)

For the case z=−∞ one also obtains (2.4.24).

Integrating (2.4.20), (2.4.24), one finds

$$\xi=t(c_1x+c_2)+c_3x^2+c_5x+c_6,\qquad \eta=x(c_3t+c_4)+c_1t^2+c_7t+c_8,$$
$$\zeta^v=-e(c_1x+c_2)-\sigma(c_3t+c_4)-v(2c_1t+2c_3x+c_5-c_9)+\lambda_{xt},$$
$$\zeta^\sigma=-\sigma(3c_1t+c_3x+c_7-c_9)-2v(c_1x+c_2)+\lambda_{tt},$$
$$\zeta^e=-e(c_1t+3c_3x+2c_5-c_7-c_9)-2v(c_3t+c_4)+\lambda_{xx}.$$
(2.4.25)

Here c i (i=1,2,…,9) are arbitrary constants, and λ(x,t) is an arbitrary function of two arguments.

For studying the remaining determining equations (2.4.17) it is convenient to write

$$\begin{array}{c}z_o=\zeta^\sigma+2v\xi_t=-\sigma(3c_1t+c_3x+c_7-c_9)+\lambda_{tt},\\[5pt]z_1=\zeta^e+2v\eta_x=-e(c_1t+3c_3x+2c_5-c_7-c_9)+\lambda_{xx}.\end{array}$$
(2.4.26)

Substituting (2.4.18) into (2.4.17) and evaluating some integrals by parts, one obtains

(2.4.27)

Because of the arbitrariness of the function v o (t), from the last equation one finds

$$ K(t,0)(\xi (t)-\xi (0))=0,$$
(2.4.28)
$$\xi_t-\varphi'\eta_x=0,$$
(2.4.29)
$$(\xi(t)-\xi(\tau))K_\tau(t,\tau)+\xi_t(\tau)K(t,\tau)=0,$$
(2.4.30)
$$\varphi'z_1-z_o-\int\limits_0^tK(t,\tau)z_o(\tau)\,d\tau-K(t,0)\eta(0)\sigma_o(0)-\int\limits_0^t\sigma_o(\tau)\bigl(K_\tau(t,\tau)\eta(\tau)+K_t(t,\tau)\eta(t)+K(t,\tau)\eta_t(\tau)\bigr)\,d\tau=0.$$
(2.4.31)

Substituting (2.4.25) into (2.4.29) and splitting them with respect to x, one obtains

$$c_1=0,\qquad c_3=0,$$
(2.4.32)
$$c_2=\varphi'c_4.$$
(2.4.33)

Equations (2.4.28)–(2.4.31) become

$$c_4K(t,0)=0,$$
(2.4.34)
$$c_4\bigl((t-\tau)K_\tau(t,\tau)+K(t,\tau)\bigr)=0,$$
(2.4.35)
(2.4.36)

If there exist functions \(\psi_{i}(\tau)=(t-\tau)^{n_{i}}\) (i=1,2) such that the determinant

$$\Delta_1=\int\limits_0^tz_3(t,\tau,x)\psi_1(\tau)\tau(t-\tau)\,d\tau\int\limits_0^tK(t,\tau)\tau(t-\tau)\psi_2(\tau)\,d\tau-\int\limits_0^tz_3(t,\tau,x)\psi_2(\tau)\tau(t-\tau)\,d\tau\int\limits_0^tK(t,\tau)\tau(t-\tau)\psi_1(\tau)\,d\tau$$

is not equal to zero, then choosing the function σ o (τ) one can obtain contradictory relations. Hence, Δ 1=0 for all functions ψ i (τ). Here z 3(t,τ,x)=(c 4 x+c 7 τ+c 8)K τ +(c 4 x+c 7 t+c 8)K t . Because K(t,τ)≠0 and the system of the functions (tτ)n is complete in L 2[0,t], there exists a function f 1(t,x) such that

$$z_3(t,\tau,x)=f_1(t,x)K(t,\tau).$$
(2.4.37)

Substituting (2.4.37) into (2.4.36), using (2.4.16), and splitting with respect to σ o (0), σ o (t) and e o (t), one obtains

$$c_7+f_1=0,$$
(2.4.38)
$$c_8K(t,0)=0,$$
(2.4.39)
(2.4.40)

Splitting (2.4.37) with respect to x, and because of (2.4.38), one finds

$$c_4(K_\tau+K_t)=0,$$
(2.4.41)
$$(c_7t+c_8)K_t+(c_7\tau+c_8)K_\tau=-c_7K.$$
(2.4.42)

Regarding (2.4.34), (2.4.35) and (2.4.41), one obtains

$$c_4=0.$$
(2.4.43)

If \(c_{7}^{2}+c_{8}^{2}\neq0\), then from (2.4.39), (2.4.42), one finds that c 8=0 and K=(c 7 t)−1 R(τ/t). The kernels of this type are excluded from the study, because they have a singularity at the time t=0. Hence,

$$c_7=0,\quad c_8=0,$$
(2.4.44)

and the group classification of (2.4.15), (2.4.16) is reduced to the study of (2.4.40).

From (2.4.40) it follows that the kernel of the admitted Lie groups is given by the generators

$$X_1=\partial_x,\quad X_2=\partial_v.$$
(2.4.45)

Extensions of the kernel (2.4.45) are obtained for specific functions φ(e).

If φ ′′≠0, then the classifying equations are

$$\varphi'\bigl(c_{10}+e(c_9-2c_5)\bigr)-c_9\varphi=c_{11},$$
(2.4.46)
$$\lambda_{tt}+\int\limits_0^tK(t,\tau)\lambda_{tt}(\tau)\,d\tau=c_{11},$$
(2.4.47)

where c 10, c 11 are arbitrary constants. Hence, the extension of the kernel of admitted Lie groups occurs for the following cases:

  1. (a)

    If φ=α+βln (a+ce), then the additional generator is

    $$Y_1=-cx/2\partial_x+cv/2\partial_v+(a+ce)\partial_e+\beta c\mu (t)\partial_\sigma.$$
  2. (b)

    If φ=α(a+ce)β+γ (β≠1), then system of equations (2.4.15) admits the generator

    $$Y_2=(\beta-1)cx\partial_x+(\beta+1)cv\partial_v+2(ce+a)\partial _e+2\beta c(\sigma-\gamma\mu(t))\partial_\sigma.$$
  3. (c)

    If φ=α+exp (γ e) (γ≠0), then there is the additional generator

    $$Y_3=\gamma x\partial_x+\gamma v\partial_v+2\gamma(\sigma-\alpha\mu (t))\partial_\sigma+2\partial_e.$$
  4. (d)

    If the function φ(e) is linear φ=Ee+E 1, then along with the generators X 1,X 2 system (2.4.15), (2.4.16) also admits the generators

    $$Y_4=v\partial_v+\sigma\partial_\sigma+e\partial_e,\quad Y_\lambda =\lambda_{xt}\partial_v+\lambda_{tt}\partial_\sigma+\lambda _{xx}\partial_e.$$

Here α,β,γ,a,c are constant, the function μ(t) is an arbitrary solution of the equation

$$\mu(t)+\int\limits_{0}^tK(t,\tau)\mu(\tau)\,d\tau=1,$$

and the function λ(x,t) is a solution of the equation

$$E\lambda_{xx}=\lambda_{tt}+\int\limits_{0}^tK(t,\tau)\lambda_{tt}(\tau )\,d\tau.$$
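As an aside, for a concrete kernel the function μ(t) is easily computed numerically; the following minimal sketch (the exponential kernel is an illustrative assumption, not from the book) solves the Volterra equation for μ above by the trapezoidal rule.

```python
import numpy as np

K = lambda t, tau: np.exp(-(t - tau))    # hypothetical nondegenerate kernel
T, N = 2.0, 200
h = T / N
t = np.linspace(0.0, T, N + 1)
mu = np.zeros(N + 1)
mu[0] = 1.0                              # at t = 0 the integral term vanishes

for i in range(1, N + 1):
    # trapezoidal approximation of int_0^{t_i} K(t_i, tau) mu(tau) dtau
    s = 0.5 * K(t[i], t[0]) * mu[0] + sum(K(t[i], t[j]) * mu[j] for j in range(1, i))
    mu[i] = (1.0 - h * s) / (1.0 + 0.5 * h * K(t[i], t[i]))

print(mu[-1])   # numerical value of mu(T)
```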

Remark 2.4.1

This approach was also used for other models of elasticity in [57, 63].