1 Introduction

There exists a class of particular solutions to the spatially inhomogeneous Boltzmann equation that corresponds to so-called homoenergetic affine flows. These solutions were introduced in 1956 independently by Galkin [17] and Truesdell [25]. There are many related references; we mention only two books [18, 26] and the contributions of Cercignani to this area [12,13,14].

Roughly speaking, these are spatially inhomogeneous flows of gas having a linear (with respect to the spatial variable \(x \in {\mathbb {R}}^3\)) profile of the bulk velocity. This assumption makes it possible to reduce the problem to the modified spatially homogeneous Boltzmann equation, which is considered in the present paper (see also [4]). Our aim is to study solutions of this equation having the self-similar form. Self-similar solutions of nonlinear equations of mathematical physics are always of interest to both physicists and mathematicians. The recent interest in this problem (see also [15]) is partly caused by papers [4, 19,20,21] related to that kind of flows. In particular, we mention the first proof of existence of self-similar solutions in [19] and the more detailed study of these solutions and their asymptotic properties in [4].

The modified Boltzmann equation from [4] depends on a small parameter related to the bulk velocity of the flow. In those papers this parameter is assumed to be “as small as we want”.

Obviously this assumption is not very convenient for applications. It is important to know more precisely the conditions of validity of mathematically rigorous results. One of the main goals of the present paper is to prove that practically all results of our previous paper [4] remain valid also for moderately small values of the parameter, when the contribution of the perturbation term in the equation does not exceed roughly \(10 \% \) of the contribution of the main term.

The paper is organized as follows. The connection of the modified Boltzmann equation with homoenergetic affine flows is explained at the beginning of Sect. 2. We confine ourselves in this paper to the case of Maxwell-type interactions. The statement of the problem of constructing the self-similar solution to the modified Boltzmann equation in \( \mathbb {R}^d, \, d \ge 2 \), is discussed in Sect. 2. The intensity of the extra term in the equation is characterized by the norm \( \Vert A\Vert \) of a matrix A of order d. In Sect. 3 we perform a detailed study of the tensor B of second moments of the solution. This problem was previously briefly considered in [4] under the assumption that \( \Vert A\Vert \) is “sufficiently small”. It is shown in Sect. 3 (Lemmas 3.1 and 3.2) that the same results for the corresponding eigenvalue problem for the matrix B can be proved for moderate values of \( \Vert A\Vert \), having roughly the order \( O(10^{-1}) \) in dimensionless units. The self-similar profile is constructed in the Fourier representation in Sect. 4. Here we follow (in a slightly more elementary way, convenient for physicists) the same route as in [4], but with much weaker assumptions on the smallness of \( \Vert A\Vert \). The convergence of solutions of the Cauchy problem to self-similar solutions is proved in Sect. 5 under the same restrictions on \( \Vert A\Vert \) as in Sect. 4. The main results of the paper are formulated in Theorems 1 and 2 of Sects. 4 and 5, respectively. The results and some open problems are briefly discussed in the Conclusions.

2 Homoenergetic Affine Flows and Modified Boltzmann Equation

We consider the spatially inhomogeneous Boltzmann equation for the distribution function f(x, v, t), where \( x \in \mathbb {R}^d \), \(v \in \mathbb {R}^d\), \(d\ge 2\), and \(t\in \mathbb {R}_{+}\) denote, respectively, the particle position, velocity and time. The equation reads

$$\begin{aligned} \partial _{t}f + v \cdot f_x = Q\left( f, f \right) , \end{aligned}$$
(2.1)

where \( Q\left( f,f \right) \) is the collision integral

$$\begin{aligned}&Q\left( f,f\right) \left( x, v, t \right) = \int \limits _{ {\mathbb {R}}^{d} \times S^{d-1} } \!\! \! dw dn \, g\left( |u|, {\hat{u}} \cdot n \right) \times \nonumber \\&\times \left[ f(x, v', t)f(x, w', t) - f(x, v, t) f(x, w, t) \right] , \nonumber \\&n \in S^{d-1}; \quad u=v-w, \quad {\hat{u}}={u} / {\vert u \vert }, \nonumber \\&v^{\prime } =\frac{1}{2} \left( v + w+\vert u \vert n \right) , \, w^{\prime }=\frac{1}{2}\left( v + w-\vert u \vert n \right) . \end{aligned}$$
(2.2)

The kernel \( g(|u|, \eta ) \) is usually considered in mathematical works as a given non-negative function. In the present paper we consider mainly the pseudo-Maxwell molecules with a kernel \( g( \eta ) \) that does not depend on the velocities. Note that the bilinear operator \( Q(\cdot , \cdot ) \) acts only on the v-variable and commutes with translations in v-space. It is easy to see that Eq. (2.1) formally admits a particular class of solutions such that

$$\begin{aligned} f(x,v,t) = {\tilde{f}}({\tilde{v}}, t), \quad {\tilde{v}}=v -K(t) x, \end{aligned}$$
(2.3)

where K(t) is a time-dependent matrix of order d. Indeed, we can substitute (2.3) into Eq. (2.1) and obtain the following equations for \( {\tilde{f}}({\tilde{v}}, t) \) and K(t) :

$$\begin{aligned} \partial _{t}f - K(t) \,v \cdot f_v = Q\left( f, f \right) , \quad K_t+K^2=0, \end{aligned}$$

where tildes are omitted. The general formula for K(t) reads

$$\begin{aligned} K(t) = (1+t A)^{-1} A, \quad A=K(0). \end{aligned}$$

The solutions (2.3) of the Boltzmann equation (2.1) are called the homoenergetic affine flows. They were already briefly discussed in the Introduction. Many related details and references can be found in [10, 12,13,14, 18, 19, 26].
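The formula for K(t) is easy to test numerically. The following sketch (our illustrative check, not part of the original argument; Python, 2×2 matrices) verifies by a central finite difference that \( K(t)=(1+tA)^{-1}A \) solves \( K_t+K^2=0 \), and that \( K(t)\equiv A \) for a nilpotent matrix.

```python
import random

# Illustrative check (not part of the original argument): verify numerically
# that K(t) = (I + t A)^{-1} A satisfies the matrix Riccati equation
# K_t + K^2 = 0, using 2x2 matrices and a central finite difference in t.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def mat_inv(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[ X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det,  X[0][0] / det]]

def K(t, A):
    I = [[1.0, 0.0], [0.0, 1.0]]
    return mat_mul(mat_inv(mat_add(I, mat_scale(t, A))), A)

random.seed(0)
A = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(2)]
t, h = 0.3, 1e-5
K_t = mat_scale(1.0 / (2 * h), mat_add(K(t + h, A), mat_scale(-1.0, K(t - h, A))))
residual = mat_add(K_t, mat_mul(K(t, A), K(t, A)))    # K_t + K^2, should be ~0
assert all(abs(residual[i][j]) < 1e-8 for i in range(2) for j in range(2))

# For the shear matrix (a_12 = a, all other entries zero) A^2 = 0,
# hence K(t) = K(0) = A for all t:
A_shear = [[0.0, 0.7], [0.0, 0.0]]
assert all(abs(K(5.0, A_shear)[i][j] - A_shear[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

The second assertion anticipates the shear-flow case discussed next, where the nilpotency of A makes K(t) constant.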

The most famous solution of this kind is the so-called shear flow, which formally corresponds to the matrix \( A=\{a_{ij}; \quad i,j=1,\ldots ,d\} \) with \( a_{12} =a=\mathrm {const} \) and all other elements equal to zero. In that case we obtain

$$\begin{aligned} A^2=0 \, \Rightarrow \, K(t)=K(0)=A. \end{aligned}$$

Moreover, Eq. (2.1) for f(v, t) can be written as

$$\begin{aligned} f_t - \mathrm {div}_v \, Av f =Q(f,f), \quad v \in \mathbb {R}^d. \end{aligned}$$
(2.4)

This equation for an arbitrary constant real matrix \( A \in M_{d \times d } (\mathbb {R})\) is called the modified Boltzmann equation [4]. It was shown in [10] (for \( d=2 \)) and [19] (for \( d=3 \)) that many interesting cases can be reduced by a change of variables to Eq. (2.4) with some constant matrix A.

We further simplify the problem and consider below only the case of Maxwell-type interactions, which corresponds to the kernel \( g(|u|, \eta )=g( \eta ) \) in (2.2). It is also assumed below that

$$\begin{aligned} \begin{aligned} g(\eta ) \ge 0, \quad \eta \in [-1,1]; \quad \int _{S^{d-1}} d\omega g(\omega \cdot n)=1, \quad \omega \in S^{d-1}; \\ f(v,0)=f_0(v)\ge 0, \quad \int _{\mathbb {R}^d} dv \ f_0(v) v=0, \quad \int _{\mathbb {R}^d} dv \ f_0(v)=1. \end{aligned} \end{aligned}$$
(2.5)

Formally the second term in (2.4) describes the action of the external force

$$\begin{aligned} F = - A v, \quad A \in M_{d\times d}(\mathbb {R}), \end{aligned}$$
(2.6)

which looks like an anisotropic friction force proportional to the components of the particle velocity. Let us consider, e.g., the simplest case of Eq. (2.4) with

$$\begin{aligned} A=a I, \quad a \in \mathbb {R}, \end{aligned}$$
(2.7)

where I is the unit matrix and a is a constant of either sign. If \( a > 0 \) this is just a regular friction force \( F = - a v \). The solution f(v, t) of (2.4) under the assumption (2.7) leads to the following behaviour of the second moment (energy):

$$\begin{aligned} {d{\mathcal {E}}(t)}/{dt}= - 2 a {\mathcal {E}}(t) \, \Rightarrow \, {\mathcal {E}}(t)= {\mathcal {E}}(0)e^{-2 a t}, \end{aligned}$$

where

$$\begin{aligned}\quad {\mathcal {E}}(t)=\int _{\mathbb {R}^d} dv f(v,t)|v|^2. \end{aligned}$$

This equality suggests considering Eq. (2.4) with \( A = a I \) in self-similar variables via the substitution

$$\begin{aligned} f(v,t)=e^{d a t} {\tilde{f}}({\tilde{v}},t), \quad {\tilde{v}} = v e^{a t}. \end{aligned}$$
(2.8)

Then, after simple calculations, we obtain the familiar spatially homogeneous Boltzmann equation for \( {\tilde{f}}({\tilde{v}},t) \)

$$\begin{aligned} {\tilde{f}}_{t}= Q({\tilde{f}},{\tilde{f}}), \quad {\tilde{f}}|_{t=0} =f_0({\tilde{v}}). \end{aligned}$$
(2.9)

If, in addition,

$$\begin{aligned} {\mathcal {E}}(0)= \int _{\mathbb {R}^d} dv f_{0}(v)|v|^2=d, \end{aligned}$$

then we know (H-theorem for (2.9)) that

$$\begin{aligned} {\tilde{f}}({\tilde{v}},t) \, \rightarrow \, {\tilde{f}}_{M}({\tilde{v}})= (2\pi )^{-d/2 } e^{- |{\tilde{v}}|^2 /2 }, \quad \text {as} \; t \rightarrow \infty . \end{aligned}$$

Hence, coming back to the initial variables, we (1) obtain the simplest self-similar solution of (2.4), (2.7), namely

$$\begin{aligned} f_{s-s}(v,t)=\left( 2\pi e^{-2 a t} \right) ^{-d/2} \exp \left( - {|v e^{ a t}|^{2}}/{2} \right) , \end{aligned}$$
(2.10)

and (2) show that this particular solution is an attractor for various classes of initial data. It is obvious that this simple example remains valid for \( a <0 \) in (2.7) (accelerating forces) and for an arbitrary kernel (not necessarily the Maxwellian one) in the collision integral.
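The solution (2.10) can also be checked directly: since each \( f_{s-s}(\cdot ,t) \) is a scaled Maxwellian, the collision integral vanishes and (2.10) must solve the pure transport equation \( f_t - a\,\mathrm {div}_v\, v f = 0 \). The sketch below (our numerical experiment, \( d=2 \); illustrative, not from the paper) confirms this pointwise with central finite differences.

```python
import math

# Illustrative sanity check (d = 2, our numerical experiment): the self-similar
# Maxwellian (2.10) with A = aI is annihilated by the collision integral, so it
# must satisfy the pure transport equation f_t - a div_v(v f) = 0.  We verify
# this pointwise with central finite differences.

d, a = 2, 0.4

def f_ss(v, t):
    s = math.exp(-2.0 * a * t)     # effective "temperature" e^{-2at}
    return (2.0 * math.pi * s) ** (-d / 2) * math.exp(-(v[0] ** 2 + v[1] ** 2) / (2.0 * s))

v, t, h = (0.3, -0.7), 0.5, 1e-5
f_t = (f_ss(v, t + h) - f_ss(v, t - h)) / (2 * h)
df1 = (f_ss((v[0] + h, v[1]), t) - f_ss((v[0] - h, v[1]), t)) / (2 * h)
df2 = (f_ss((v[0], v[1] + h), t) - f_ss((v[0], v[1] - h), t)) / (2 * h)
rhs = a * (d * f_ss(v, t) + v[0] * df1 + v[1] * df2)   # a div_v(v f)
assert abs(f_t - rhs) < 1e-7
```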

We note that the spatially homogeneous Boltzmann equation for Maxwell-type interactions, i.e. Eq. (2.4) with \( A=0 \), also has self-similar solutions of the form

$$\begin{aligned}f(v,t) = e^{-d c t} F( v e^{-c t}), \quad c = \mathrm {const}. \end{aligned}$$

Such solutions were constructed and studied for \( d=3 \) in papers [1,2,3] (see also [6, 11, 23]). The drawback of these solutions is that they have infinite energy \({\mathcal {E}} (t)\). Nevertheless they describe the large-time asymptotics for various classes of initial data with infinite energy, similarly to the elementary example (2.10) discussed above. This property is typical for a wide class of Maxwell models [5, 6].

Roughly speaking, our task is to prove that the situation is, to some extent, similar in the case of an arbitrary matrix A in (2.4) provided that its norm is not too large. In fact, all proofs were already done in our previous paper [4] with standard formulations of results like “There exists \( \varepsilon _{0} > 0 \) such that the following property holds under the assumption that \( \Vert A\Vert \le \varepsilon _{0} \)...”. This approach allows one to avoid some technical work, but it does not show the true limits (in terms of \( \Vert A\Vert \)) of the results. The main aim of this paper is to partly clarify this question. Here and below we use the so-called operator norm for matrices [22]. Its properties are discussed in the next section.

Following [4] we pass to the Fourier representation (see [6, 9] for details) of Eq. (2.4) and introduce the characteristic function [16] \(\varphi (k,t)\)

$$\begin{aligned} \varphi (k,t)=\int _{\mathbb {R}^d} dv \ f(v,t)e^{-i k\cdot v}, \,\, k\in \mathbb {R}^d. \end{aligned}$$
(2.11)

Then we obtain

$$\begin{aligned} \partial _t \varphi + \left( Ak\right) \cdot \partial _k \varphi = {\mathcal {I}}^{+}(\varphi ,\varphi )-\varphi _{|_{k=0}} \varphi \,, \end{aligned}$$
(2.12)

where

$$\begin{aligned} {\mathcal {I}}^{+}(\varphi ,\varphi )(k)= & {} \!\int \limits _{S^{d-1}} \!\!dn g\left( {\hat{k}} \cdot n \right) \varphi (k_{+}) \varphi (k_{-}), \nonumber \\ k_{\pm }= & {} \frac{1}{2}\left( k\pm \vert k \vert n\right) , \quad {\hat{k}}=\frac{k}{\vert k\vert }. \end{aligned}$$
(2.13)

The initial condition becomes

$$\begin{aligned} \varphi (k,0)=\varphi _0(k)= \int _{\mathbb {R}^d} dv \ f_0(v)e^{-i k\cdot v}, \quad \varphi _0(0)=1. \end{aligned}$$

Note that (2.4) implies mass conservation. Therefore

$$\begin{aligned} \varphi (0,t)=\varphi _0(0)=1, \end{aligned}$$
(2.14)

and we obtain from (2.12)

$$\begin{aligned} \partial _t \varphi +\varphi + \left( Ak\right) \cdot \partial _k \varphi = {\mathcal {I}}^{+}(\varphi ,\varphi ) = \Gamma (\varphi ). \end{aligned}$$
(2.15)

For brevity we consider in the rest of this section the self-similar solution only. Following [4], we look for such a solution in the form

$$\begin{aligned} \varphi _{s-s}(k,t)= \Psi (k e^{\beta t}),\, \beta \in \mathbb {R}. \end{aligned}$$
(2.16)

Note that it corresponds to the distribution function (2.10) with \( a =- \beta \). The parameter \( \beta \) will be defined below.

Then we pass to self-similar variables in (2.15) by substitution

$$\begin{aligned} \varphi (k,t)={\tilde{\varphi }}({\tilde{k}},t), \quad {\tilde{k}}=ke^{\beta t}, \end{aligned}$$
(2.17)

and obtain, omitting tildes,

$$\begin{aligned} \partial _t \varphi +\varphi + \left( A_{\beta } k\right) \cdot \partial _k \varphi =\Gamma \big [\varphi \big ], \quad A_{\beta }=A +{\beta }I. \end{aligned}$$
(2.18)

It is clear that the self-similar solution (2.16) of Eq. (2.15) becomes a stationary solution for Eq. (2.18). The differential form of the stationary solution is obvious from (2.18). Its integral form can be obtained at the formal level from the operator identity

$$\begin{aligned} \int \limits _{0}^{\infty } dt e^{-t \left( 1+ {\hat{D}}\right) }=\left( 1+ {\hat{D}}\right) ^{-1}, \end{aligned}$$
(2.19)

where \( {\hat{D}} \) is an abstract operator. We refer to [4] for conditions of equivalence of these integral and differential forms of equation for \( \Psi (k) \). The integral equation reads [4]

$$\begin{aligned} \Psi (k)=\int \limits _0^{\infty } dt E_{\beta }(t) \Gamma [\Psi (k)], \end{aligned}$$
(2.20)

where \( \Gamma [\Psi (k)] = {\mathcal {I}}^{+}(\Psi ,\Psi ) \) is given in (2.13),

$$\begin{aligned} E_{\beta }(t)=\exp \left[ -t\big ( 1+ A_{\beta } k\cdot \partial _k\big )\right] . \end{aligned}$$
(2.21)

It is easy to see that the action of the operator \( E_{\beta }(t)\) on any function \( \varphi (k) \) is given by the formula

$$\begin{aligned} E_{\beta }(t)\varphi (k) =e^{-t} \varphi \left[ e^{-{\beta }t} \big ( e^{-t A} k \big )\right] . \end{aligned}$$
(2.22)
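Formula (2.22) can be tested numerically. In the sketch below (our illustrative experiment; assumptions: \( d=2 \), the shear matrix for A so that \( e^{-tA}=I-tA \) exactly, and a Gaussian test function \( \varphi \)) we check by finite differences that \( u(k,t)=E_{\beta }(t)\varphi (k) \) satisfies \( u_t + u + (A_{\beta }k)\cdot \partial _k u = 0 \), in agreement with (2.21).

```python
import math

# Illustrative check of formula (2.22) (our numerical experiment): for the
# shear matrix A (so that exp(-tA) = I - tA exactly) and a Gaussian test
# function phi, the function u(k,t) = E_beta(t) phi(k) must satisfy
# u_t + u + (A_beta k) . d_k u = 0 with A_beta = A + beta I, cf. (2.21).

a, beta = 0.5, 0.2

def phi(k):                        # smooth test function (a characteristic function)
    return math.exp(-(k[0] ** 2 + k[1] ** 2) / 2.0)

def u(k, t):                       # formula (2.22): e^{-t} phi(e^{-beta t} e^{-tA} k)
    k1 = k[0] - t * a * k[1]       # e^{-tA} k for the shear matrix (A^2 = 0)
    k2 = k[1]
    s = math.exp(-beta * t)
    return math.exp(-t) * phi((s * k1, s * k2))

k, t, h = (0.4, -0.9), 0.7, 1e-5
u_t = (u(k, t + h) - u(k, t - h)) / (2 * h)
u_k1 = (u((k[0] + h, k[1]), t) - u((k[0] - h, k[1]), t)) / (2 * h)
u_k2 = (u((k[0], k[1] + h), t) - u((k[0], k[1] - h), t)) / (2 * h)
Abk = (a * k[1] + beta * k[0], beta * k[1])   # (A + beta I) k
residual = u_t + u(k, t) + Abk[0] * u_k1 + Abk[1] * u_k2
assert abs(residual) < 1e-7        # u solves the transport semigroup equation
```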

Eq. (2.20) will be solved below with all necessary estimates. We begin in the next section with the definition of \( \beta \) and some preliminary estimates.

3 Eigenvalue Problem for Matrices

We can apply the operator \( \big ( 1+ A_{\beta } k \cdot \partial _k \big ) \) to the Eq. (2.20) and obtain the equation for \( \Psi (k) \) in differential form (see also (2.18))

$$\begin{aligned} \big (1+{\beta } k\cdot \partial _k + A k\cdot \partial _k \big )\Psi (k)= \Gamma [\Psi ](k). \end{aligned}$$
(3.1)

It is always assumed below that \( \Psi (k) \) is a characteristic function (the Fourier transform of a probability measure in \( \mathbb {R}^{d} \)) and has the following asymptotic behaviour for small \( \vert k \vert \):

$$\begin{aligned} \Psi (k)=1 -\frac{1}{2}B:\,k\otimes k + O \left( |k|^{p} \right) \end{aligned}$$
(3.2)

for some \( 2 < p \le 4 \). The notation \( B=\{b_{ij}; \, i,j=1, \cdots , d\} \) is used for a symmetric positive definite matrix. We also denote for brevity

$$\begin{aligned} B:\,k\otimes k = \sum _{i,j=1}^{d}b_{ij} k_{i}k_{j}. \end{aligned}$$

The formula (3.2) means that the corresponding distribution function, i.e. the inverse Fourier transform of \(\Psi (k) \), has finite moments of the order \( 2+ \varepsilon \), \( \varepsilon > 0 \) (see [4] for details). It can be shown that the matrix B and the parameter \( \beta \) satisfy the following equation (see Eq. (2.7) in [4]):

$$\begin{aligned} \beta B + \theta \left( B - \frac{\mathrm {Tr }B}{d}I\right) + \langle B A\rangle =0, \end{aligned}$$
(3.3)

where

$$\begin{aligned}&\theta =\dfrac{q d}{4(d-1)}, \quad q=\int \limits _{S^{d-1}} dn g(\omega \cdot n) [1-(\omega \cdot n)^{2}], \nonumber \\&\omega \in S^{d-1};\;\; \mathrm {Tr }B = \sum \limits _{i=1}^{d} b_{ii}, \; \langle B A\rangle = \frac{1}{2}[ B A+ (B A)^{T}], \end{aligned}$$
(3.4)

where the superscript T denotes the transposed matrix. This equation can be easily obtained by substitution of (3.2) into Eq. (3.1). We are interested in the solution \(( \beta , B) \) of the eigenvalue problem (3.3) such that the eigenvalue \( \beta \) has the largest real part (as compared to the other eigenvalues). In addition, the real symmetric matrix B must have only positive eigenvalues. The existence of such a solution \(( \beta , B )\) was proved in Lemma 7.3 of [4] under the assumption that \( \Vert A\Vert \le \varepsilon _{0} \) for sufficiently small \( \varepsilon _{0} > 0 \), where

$$\begin{aligned} \Vert A\Vert = \sup _{ |k| = 1 } | A k |, \quad k \in \mathbb {R}^{d}. \end{aligned}$$
(3.5)
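As a quick illustration of the definition (3.5) (our example, not from the paper): for the \( d=2 \) shear matrix one has \( |Ak| = |a|\,|k_2| \), so \( \Vert A\Vert = |a| \). The supremum can be approximated by sampling the unit circle:

```python
import math

# Illustrative example of the operator norm (3.5): for the d = 2 shear matrix
# with a_12 = a one has |A k| = |a| |k_2|, hence ||A|| = |a|.  We approximate
# the supremum over the unit circle by dense sampling.

a = 0.7
A = [[0.0, a], [0.0, 0.0]]

def op_norm_sampled(A, m=100000):
    best = 0.0
    for i in range(m):
        t = 2.0 * math.pi * i / m
        k = (math.cos(t), math.sin(t))
        Ak = (A[0][0] * k[0] + A[0][1] * k[1], A[1][0] * k[0] + A[1][1] * k[1])
        best = max(best, math.hypot(Ak[0], Ak[1]))
    return best

assert abs(op_norm_sampled(A) - abs(a)) < 1e-6
```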

No estimate of \( \varepsilon _{0} \) was given in [4]. Our aim in this paper is to fill this gap and to show that the main results of that paper remain valid for moderately small values of \( \varepsilon _{0} \).

To this end we construct below the solution of the problem (3.3) in the explicit form of power series in \( \Vert A\Vert \). It is convenient to denote in (3.3)

$$\begin{aligned} \beta = \theta (\lambda -1),\quad A=-\Vert A\Vert {\tilde{A}}, \quad \varepsilon =\frac{\Vert A\Vert }{\theta }. \end{aligned}$$
(3.6)

Then we obtain a new equation for the eigenvalue \( \lambda \) and the symmetric matrix B

$$\begin{aligned} \lambda B =\frac{\mathrm {Tr }B}{d} I + \varepsilon \langle B A\rangle , \quad \Vert A\Vert =1, \end{aligned}$$
(3.7)

where tildes are omitted.

Note that the dimension of the linear space of symmetric \( (d \times d)\)-matrices is equal to \({d(d+1)}/{2}\). If \(\varepsilon =0\) we have a very simple problem

$$\begin{aligned} \lambda B ={\hat{P}} B = \frac{\mathrm {Tr} B}{d} I, \end{aligned}$$

where the operator \({\hat{P}}\) is a projector, since \({\hat{P}}^2 = {\hat{P}}\). Obviously this problem has two eigenvalues: \( \lambda = 1 \) and \( \lambda = 0 \). The corresponding “eigenmatrices” are \( B=I \) for \( \lambda =1 \) and any linear combination of matrices with zero trace for \( \lambda =0 \). It is clear that we need to consider in the problem (3.7) the perturbation of the largest eigenvalue \( \lambda =1 \). By using the standard procedure we assume that

$$\begin{aligned} \lambda =\sum \limits _{n=0}^{\infty } \lambda _{n} \varepsilon ^{n}, \quad B = \sum \limits _{n=0}^{\infty } \varepsilon ^{n} B_{n}, \quad \lambda _{0}=1, \quad B_{0}=I. \end{aligned}$$
(3.8)

Then we obtain for \( n \ge 1 \)

$$\begin{aligned} \sum \limits _{k=0}^{n} \lambda _{k} B_{n-k} = \frac{\mathrm {Tr} B_{n}}{d} I + \langle B_{n-1} A\rangle . \end{aligned}$$

Note that \( \mathrm {Tr} B_{0}= d \). Without any loss of generality we can assume that \( \mathrm {Tr} B_{n}= 0 \) for all \( n \ge 1 \). Hence, we obtain the following recurrent formulas for \( n \ge 1 \)

$$\begin{aligned} \lambda _{n} =d^{-1} \mathrm {Tr} \langle B_{n-1} A\rangle , \quad B_{n}= \langle B_{n-1} A \rangle - \sum \limits _{k=1}^{n} \lambda _{k} B_{n-k}, \end{aligned}$$
(3.9)

where \( \lambda _{0} \) and \( B_0 \) are given in (3.8).
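The recursion (3.9) is easy to implement. The following sketch (our illustrative check, not part of the proof) generates \( \lambda _n \) and \( B_n \) for the \( 2\times 2 \) shear matrix with \( \Vert A\Vert =1 \), sums the series (3.8) at \( \varepsilon =0.1 < 1/6 \), and verifies that the resulting pair \( (\lambda , B) \) solves (3.7) with \( \mathrm {Tr}\, B = d \).

```python
# Illustrative check (not part of the proof): implement the recursion (3.9)
# for the 2x2 shear matrix with ||A|| = 1, sum the series (3.8) at
# eps = 0.1 < 1/6, and verify that (lambda, B) solves (3.7) with Tr B = d.

d = 2
A = [[0.0, 1.0], [0.0, 0.0]]          # shear matrix, operator norm 1

def bracket(B):                        # <B A> = (B A + (B A)^T) / 2
    BA = [[sum(B[i][k] * A[k][j] for k in range(d)) for j in range(d)]
          for i in range(d)]
    return [[(BA[i][j] + BA[j][i]) / 2.0 for j in range(d)] for i in range(d)]

lam = [1.0]                            # lambda_0 = 1
Bs = [[[1.0, 0.0], [0.0, 1.0]]]        # B_0 = I
for n in range(1, 25):
    S = bracket(Bs[n - 1])
    lam.append((S[0][0] + S[1][1]) / d)          # lambda_n = Tr<B_{n-1} A>/d
    Bs.append([[S[i][j] - sum(lam[k] * Bs[n - k][i][j]
                              for k in range(1, n + 1))
                for j in range(d)] for i in range(d)])

eps = 0.1
lam_eps = sum(l * eps**n for n, l in enumerate(lam))
B_eps = [[sum(Bs[n][i][j] * eps**n for n in range(len(Bs)))
          for j in range(d)] for i in range(d)]
trB = B_eps[0][0] + B_eps[1][1]
S = bracket(B_eps)
res = max(abs(lam_eps * B_eps[i][j] - (trB / d) * (1.0 if i == j else 0.0)
              - eps * S[i][j]) for i in range(d) for j in range(d))
assert res < 1e-12 and abs(trB - d) < 1e-12      # (3.7) holds, Tr B = d
assert abs(lam_eps - 1.0) < 2 * eps              # estimate (3.16)
```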

We shall use below the following well-known properties of the norm (3.5), which are valid also for complex-valued matrices (in that case \( k \in {\mathbb {C}}^{d} \) in (3.5)):

$$\begin{aligned}&\Vert c A\Vert = |c| \, \Vert A \Vert , \quad c \in {\mathbb {C}}; \qquad | \mathrm {Tr} A| \le d \, \Vert A \Vert ;\qquad \nonumber \\&\Vert A_{1} + A_{2} \Vert \le \Vert A_{1} \Vert + \Vert A_{2} \Vert ; \; \; \Vert A_{1} A_{2} \Vert \le \Vert A_{1} \Vert \, \Vert A_2 \Vert , \qquad \end{aligned}$$
(3.10)

where A, \( A_{1} \) and \( A_{2} \) are arbitrary square matrices with complex elements (all details can be found in [22]). Inequalities (3.10) imply that

$$\begin{aligned} \Vert \langle B_{n-1} A \rangle \Vert \le \Vert B_{n-1} \Vert , \quad | \mathrm {Tr} \langle B_{n-1} A \rangle | \le d \Vert B_{n-1} \Vert , \quad n \ge 1, \end{aligned}$$

since \( \Vert A \Vert =1 \) in (3.7). Therefore it follows from (3.9) that

$$\begin{aligned} |\lambda _{n}| \le \Vert B_{n-1} \Vert , \quad \Vert B_{n} \Vert \le \Vert B_{n-1} \Vert + \sum \limits _{k=1}^{n} \Vert B_{k-1} \Vert \, \Vert B_{n-k} \Vert ,\quad n \ge 1, \end{aligned}$$

while \( \lambda _0=1 \) and \( \Vert B_{0} \Vert =1\).

Let us consider a function

$$\begin{aligned} y(x)= \sum \limits _{n=1}^{\infty } y_{n} x^{n}, \quad x \ge 0, \end{aligned}$$
(3.11)

defined by recurrent formulas

$$\begin{aligned} y_0=1; \quad y_n= y_{n-1} + \sum \limits _{k=1}^{n} y_{k-1}y_{n-k}, \quad n \ge 1. \end{aligned}$$
(3.12)

Obviously we have the estimates

$$\begin{aligned} \Vert B_{n} \Vert \le y_{n}, \quad |\lambda _n|\le y_{n-1}, \quad n \ge 1. \end{aligned}$$

Therefore

$$\begin{aligned} \begin{aligned} |\lambda - 1| \le \sum \limits _{n=1}^{\infty } |\lambda _n| \varepsilon ^{n} \le \varepsilon [1+ y(\varepsilon )] , \\ \Vert B - I \Vert \le \sum \limits _{n=1}^{\infty } \Vert B_n \Vert \varepsilon ^{n} \le y(\varepsilon ). \end{aligned} \end{aligned}$$
(3.13)

It is straightforward to derive from (3.11), (3.12) a quadratic equation for y(x). We obtain

$$\begin{aligned} y(x)= x [1 + y(x)] [2+y(x)], \quad y(0)=0. \end{aligned}$$

Hence,

$$\begin{aligned} y(x)=\frac{1}{2x} [1 - 3x -\sqrt{(1-3x)^{2} -8 x^2 }]. \end{aligned}$$
(3.14)

The radius of convergence of the series for y(x) is equal to

$$\begin{aligned} r_0 =3 - \sqrt{8} = (3 + \sqrt{8} )^{-1} > 1/6. \end{aligned}$$
(3.15)
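A short numerical check (our illustration) of the recurrence (3.12) against the closed form (3.14): the coefficients \( y_n \) generated by (3.12) reproduce the Taylor expansion of (3.14) inside the disk \( |x| < r_0 \), and \( y(1/6)=1 \), the value used in the derivation of (3.16).

```python
import math

# Illustrative check of the recurrence (3.12) against the closed form (3.14):
# the coefficients y_n reproduce the Taylor series of y(x) inside |x| < r_0,
# and y(1/6) = 1, the value used in the derivation of (3.16).

def y_closed(x):
    return (1.0 - 3.0 * x - math.sqrt((1.0 - 3.0 * x) ** 2 - 8.0 * x * x)) / (2.0 * x)

y = [1.0]                                   # y_0 = 1 (seed of the recurrence)
for n in range(1, 60):
    y.append(y[n - 1] + sum(y[k - 1] * y[n - k] for k in range(1, n + 1)))

x = 0.1                                     # inside the disk |x| < r_0 = 3 - sqrt(8)
series = sum(y[n] * x**n for n in range(1, 60))
assert abs(series - y_closed(x)) < 1e-9
assert abs(y_closed(1.0 / 6.0) - 1.0) < 1e-12
assert y[1] == 2.0 and y[2] == 6.0          # first coefficients of (3.11)
```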

If \( 0< \varepsilon < r_0 \) in (3.8), we have the estimates for \( \Vert B-I \Vert \) and \( |\lambda - 1| \) given in (3.13). By using more general methods it is possible to prove that the radius of convergence of the series (3.8) is greater than or equal to \( r_1 = 1/2 \) (note that \( r_0 \approx 1/6 \)), as follows from [22] (Chapter II, §3.5, Theorem 3.9). We need, however, more precise estimates also for \( \lambda \) and B. Note that \( y(0) =0 \), \( y(1/6 ) =1 \), \( y'(x) >0 \), \( y''(x) > 0 \) for all \( x \in [0,r_0) \). Hence, by convexity, \( 0< y(x) < 6 x \) if \( 0< x < 1/6 \), and we obtain from (3.13)

$$\begin{aligned} | \lambda ( \varepsilon ) -1|< 2 \varepsilon , \;\; \Vert B( \varepsilon ) - I \Vert < 6 \varepsilon \; \;\text {if} \; \; \varepsilon \in [0,1/6). \end{aligned}$$
(3.16)

The result can be formulated in the following way.

Lemma 3.1

We consider the eigenvalue problem (3.7), (3.4), where B is an unknown symmetric matrix of order \( d \ge 2 \) and A is a given real matrix of the same order. For any matrix A and any \( \varepsilon \ge 0\) satisfying the conditions

$$\begin{aligned} 0 \le \varepsilon \le 1/6, \quad \Vert A\Vert \le 1, \end{aligned}$$
(3.17)

in the notation of Eq. (3.5), there exists a unique solution \( (\lambda , B) \) of (3.7) such that

  1. (i)

    \( \lambda =\lambda (\varepsilon ) \) and \(B= B( \varepsilon ) \) are represented by the Taylor series (3.8), convergent for any \( \varepsilon \in [0,r_0) \), where \(r_0 > 1/6 \) is given in (3.15);

  2. (ii)

    the matrix \( B( \varepsilon ) \) is real and positive definite; it is uniquely defined by the condition \( \mathrm {Tr} B=d \) and satisfies the estimate (3.16);

  3. (iii)

    the eigenvalue \( \lambda ( \varepsilon ) \) is real and simple; it satisfies (3.16) and also the inequality \(\lambda > |\lambda ' | \), where \( \lambda ' \) is any other eigenvalue of the problem (3.7).

Proof

The convergence of the series (3.8) and the validity of the estimate (3.16) are already proved above. The generalization of all results to the case \( \Vert A\Vert \le 1 \) (instead of \(\Vert A\Vert =1 \) in (3.7)) is obvious. The fact that \(B( \varepsilon ) \) is positive definite follows from the estimate (3.16). If there is another solution \((\lambda , B') \) of (3.7) with the same \( \lambda \), then it is easy to see that we can choose \( B' \) in such a way that \(\mathrm { Tr} B' =0 \). Hence,

$$\begin{aligned} \lambda B' = \varepsilon \langle B' A \rangle \Rightarrow |\lambda | \le \varepsilon . \end{aligned}$$

This inequality contradicts the estimate (3.16), and therefore \( \lambda \) is simple. It remains to compare \( \lambda \) with the other eigenvalues. Let \((\lambda ', B') \) be any other solution of the problem (3.7) that differs from the solution \((\lambda , B) \) under consideration. The equation for \((\lambda ', B') \) reads

$$\begin{aligned} \lambda ' B'= \frac{\mathrm { Tr} B'}{d} I + \varepsilon \langle B' A \rangle , \quad \lambda ' \ne \lambda . \end{aligned}$$

It can be re-written in the form

$$\begin{aligned} \lambda ' B'= \lambda \frac{\mathrm { Tr} B'}{d} B + \varepsilon \langle \bar{B'} A \rangle , \quad \bar{B'} = B' - \frac{\mathrm { Tr} B'}{d} B. \end{aligned}$$
(3.18)

Taking the trace, we obtain

$$\begin{aligned} ( \lambda ' - \lambda ) \mathrm { Tr} B'= \varepsilon \mathrm { Tr} \langle \bar{B'} A \rangle . \end{aligned}$$

We can also derive from (3.18) the equation for the traceless matrix \( \bar{B'} \). It reads

$$\begin{aligned} \lambda '\bar{B'} + \frac{\mathrm { Tr} B'}{d} ( \lambda ' - \lambda )B= \varepsilon \langle \bar{B'} A \rangle . \end{aligned}$$

Then we finally obtain

$$\begin{aligned} \lambda '\bar{B'} = \varepsilon \left( \langle \bar{B'} A \rangle - \frac{\mathrm { Tr}\langle \bar{B'} A \rangle }{d } B\right) . \end{aligned}$$

It remains to use the inequalities (3.10) and (3.17) for \( \Vert A\Vert \). By assumption \( \Vert {B'}\Vert \ne 0 \), and therefore \( \bar{B'} \ne 0 \); otherwise \( {B'} =B \) and \( \lambda '= \lambda \). Hence, we obtain for \( \varepsilon \in [0, 1/6) \)

$$\begin{aligned} |\lambda '| \le \varepsilon (1 + \Vert B \Vert ) < 3 \varepsilon . \end{aligned}$$

Note that this estimate is valid for both complex and real eigenvalues \( \lambda ' \), since all properties (3.10) hold for complex-valued matrices [22]. Combining the estimate for \( |\lambda '| \) with the first inequality in (3.16) we obtain

$$\begin{aligned} \lambda -|\lambda '|> 1 - 5 \varepsilon > 1/6 \quad \text {if} \quad \varepsilon \in [0, 1/6). \end{aligned}$$
(3.19)

This completes the proof of Lemma 3.1.

\(\square \)

The corresponding results for the problem (3.3) are formulated in the next lemma.

Lemma 3.2

For any real matrix A such that

$$\begin{aligned} \Vert A\Vert < \frac{\theta }{6}, \quad {\theta }=\frac{q d}{4 (d-1)}, \nonumber \\ q=\int dn g({\hat{k}} \cdot n)[1 - ({\hat{k}}\cdot n)^{2}], \quad {\hat{k}} \in S^{d-1}, \end{aligned}$$
(3.20)

the eigenvalue problem (3.3), (3.4) has a solution \( (\beta , B) \) that is connected with the pair \( [\lambda (\varepsilon ), B(\varepsilon )] \) from Lemma 3.1 by the transformation

$$\begin{aligned} \beta = \theta [\lambda (\varepsilon )-1],\quad B = B(\varepsilon ), \quad \varepsilon =\Vert A\Vert /\theta . \end{aligned}$$
(3.21)

All properties of the pair \( (\beta , B) \) follow from Lemma 3.1. In particular,

$$\begin{aligned} \begin{aligned} \mathrm { Tr}B = d, \quad |\beta |< 2\Vert A\Vert , \\ \beta - \Re \beta ' \ge \theta - 5 \Vert A\Vert , \quad \Vert B - I\Vert < 1 \end{aligned} \end{aligned}$$
(3.22)

provided the condition (3.20) is satisfied.

Proof

is straightforward, since the eigenvalue problems (3.3) and (3.7) are equivalent. The estimate for \( \beta - \Re \beta ' \) follows from Eqs. (3.21) and (3.19). \(\square \)

This lemma will be directly applied to the construction of the self-similar profile in the next section.

4 Construction of the Self-similar Profile

We return to the integral equation (2.20) for the self-similar profile \( \Psi (k) \) from (2.16). The parameter \( \beta \) and the tensor (matrix) B are assumed below to be chosen in accordance with Lemma 3.2 of Sect. 3. Moreover, we choose the function

$$\begin{aligned} \Psi _0(k) = \exp \left[ -\frac{1}{2}B: k \otimes k \right] , \quad k \in \mathbb {R}^{d} \end{aligned}$$
(4.1)

as the first approximation for \( \Psi (k) \) in the iteration process

$$\begin{aligned} \Psi _{n+1}(k) = I(\Psi _n)= \int \limits _{0}^\infty dt E_\beta (t) \Gamma [\Psi _n (k)], \quad n=0,1,\cdots , \end{aligned}$$
(4.2)

in the notation of Eqs. (2.20)–(2.22).

The same iteration process is actually considered in [4] in the proof of Theorem 7.1. Therefore we can omit some details in order to avoid repetition. Our main goal is to prove a similar theorem under more definite assumptions (without assuming that the perturbation term in Eq. (2.18) is “as small as we want”).

First we need to estimate the difference between \( \Psi _0(k) \) and \( \Psi _1(k) \). We shall use the elementary inequality

$$\begin{aligned} \left| e^{-x} - 1 + x \right| \le \frac{1}{2} x^2, \quad x \ge 0. \end{aligned}$$
(4.3)

Therefore

$$\begin{aligned} \Psi _0(k) =1 -\frac{1}{2}B: k \otimes k + \delta _0(k), \quad |\delta _0(k)| \le \frac{1}{8} \Vert B\Vert ^{2} |k|^{4} \end{aligned}$$
(4.4)

in the notation of Eq. (3.5). Note that

$$\begin{aligned}&\Gamma [\Psi _0 (k)] = \int \limits _{S^{d-1}} dn g({\hat{k}} \cdot n) \Psi _0(k_{+}) \Psi _0(k_{-}) \\ {}&\quad =\int \limits _{S^{d-1}}\! dn \, g({\hat{k}} \cdot n) \exp \left[ -\frac{1}{2} B: (k_{+} \otimes k_{+} + k_{-} \otimes k_{-} ) \right] , \end{aligned}$$

where \( k_{\pm } = (k \pm |k| n)/2\). We substitute \( \Gamma [\Psi _0 (k)] \) into Eq. (4.2) with \( n=0 \) and obtain, by using the estimate (4.3) in the integrand,

$$\begin{aligned} \Psi _1 (k) = 1 - \Psi _1^{(1)} (k) + \delta _1(k), \end{aligned}$$

where

$$\begin{aligned} \Psi _1^{(1)} (k) = \frac{1}{2} \int \limits _{0}^{\infty } dt E_\beta (t)\, \int \limits _{S^{d-1}} dn g({\hat{k}} \cdot n) B: (k_{+} \otimes k_{+} + k_{-} \otimes k_{-} ), \\ |\delta _1(k)| \le C_0 \Vert B\Vert _{1}^{2} \int \limits _{0}^{\infty } dt e^{-t[1 + 4(\beta - \Vert A\Vert )] } |k|^4, \end{aligned}$$

as follows from Eqs. (4.3), (2.22). Here \( C_0 \) is an absolute constant. By formal construction (see the transition from Eq. (2.18) to its integral form (2.20)) we have

$$\begin{aligned} \Psi _1^{(1)} (k) = \frac{1}{2} B: k \otimes k. \end{aligned}$$

The detailed proof of this equality under the assumption that \( \Vert A\Vert < \beta + 0.5 \) is given in the proof of Theorem 7.1 in [4]. In fact, the convergence of the above integral for \( \delta _1(k) \) requires a stronger restriction, namely

$$\begin{aligned} \Vert A\Vert < \beta + \frac{1}{4}. \end{aligned}$$
(4.5)

We assume below that this condition is satisfied. Then it follows from the above considerations that

$$\begin{aligned} |\Psi _1 (k) -\Psi _0 (k)| \le \frac{C'_0 \Vert B\Vert _{1}^2 |k|^4}{1 + 4 (\beta - \Vert A\Vert )}, \quad k \in \mathbb {R}^{d}, \end{aligned}$$
(4.6)

with some absolute constant \( C'_0 \).

Obviously, both \( \Psi _0 (k) \) and \( \Psi _1 (k) \) are characteristic functions. Therefore

$$\begin{aligned} |\Psi _0 (k) | \le 1, \quad |\Psi _1 (k) | \le 1, \end{aligned}$$
(4.7)

and we obtain the inequality

$$\begin{aligned} |\Psi _1 (k) - \Psi _0 (k)| \le \min [C |k|^4, 2], \quad k \in \mathbb {R}^d \end{aligned}$$
(4.8)

with an appropriate constant C. Then we note that

$$\begin{aligned} \begin{aligned} |\Psi _{n+1} (k) - \Psi _n (k)| \le \\ \int \limits _{0}^{\infty } dt E_\beta (t) | \Gamma [\Psi _n (k)] -\Gamma [\Psi _{n-1} (k)] |, \quad n \ge 1, \end{aligned} \end{aligned}$$
(4.9)

in accordance with Eqs. (4.2). On the other hand, for any pair of characteristic functions \( \varphi (k) \) and \( \Psi (k) \), the following estimate holds

$$\begin{aligned} | \Gamma [\Psi (k)] - \Gamma [\varphi (k)] | \le {\mathcal {L}} [| \Psi - \varphi |](k), \end{aligned}$$
(4.10)

where

$$\begin{aligned} {\mathcal {L}} [\varphi ](k) = \int \limits _{S^{d-1}} dn g({\hat{k}} \cdot n) [ \varphi (k_{+} ) + \varphi (k_{-} )], \quad \\ k_{\pm }= (k \pm |k| n )/2, \quad n \in S^{d-1}, \quad {\hat{k}}=k/ |k|. \end{aligned}$$

This estimate follows from Lemma 3.1 of [4]. Note that \( |k_{\pm }|^{2}= |k|^{2} (1 \pm {\hat{k}} \cdot n )/2\), therefore

$$\begin{aligned} {\mathcal {L}} (|k|^{4} )= & {} \gamma |k|^{4}, \nonumber \\ \gamma= & {} \frac{1}{2} \int \limits _{S^{d-1}} dn g({\hat{k}} \cdot n) [1+ ({\hat{k}} \cdot n)^{2} ]= 1- q/2, \end{aligned}$$
(4.11)

in the notation of Eq. (3.4).
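As a concrete illustration (our example, not from the paper), take the isotropic kernel \( g = 1/(4\pi ) \) in \( d=3 \), normalized as in (2.5). Then \( q=2/3 \), \( \gamma =2/3 \) and \( \theta =1/4 \); the identity \( \gamma = 1 - q/2 \) can be confirmed by quadrature:

```python
# Illustrative worked example (our choice, not from the paper): the isotropic
# kernel g = 1/(4*pi) in d = 3, normalized as in (2.5).  We compute q and gamma
# by midpoint quadrature in mu = cos(theta) and confirm gamma = 1 - q/2,
# together with theta = q d / (4(d - 1)) = 1/4 from (3.4).

d, m = 3, 200000
h = 2.0 / m
# the surface integral over S^2 against g reduces to (1/2) int_{-1}^{1} ... dmu
q = 0.0
gamma = 0.0
for i in range(m):
    mu = -1.0 + (i + 0.5) * h
    q += 0.5 * h * (1.0 - mu * mu)
    gamma += 0.25 * h * (1.0 + mu * mu)
theta = q * d / (4.0 * (d - 1))

assert abs(q - 2.0 / 3.0) < 1e-9            # q = 2/3 for the isotropic kernel
assert abs(gamma - (1.0 - q / 2.0)) < 1e-9  # identity (4.11)
assert abs(theta - 0.25) < 1e-9
assert q / 24.0 < theta / 6.0               # (4.13) below is stronger than (3.20)
```

With these values the condition (3.20) reads \( \Vert A\Vert < 1/24 \), while the condition (4.13) obtained below reads \( \Vert A\Vert < 1/36 \).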

Coming back to the iterations (4.2), we can assume by induction that

$$\begin{aligned} |\Psi _{j+1}(k) - \Psi _{j}(k) | \le C_{j+1}|k|^{4}, \quad 0 \le j \le n-1, \end{aligned}$$

for some \( n \ge 2 \). Then we apply estimates (4.10), (4.11) and obtain

$$\begin{aligned} |\Psi _{n+1}(k) - \Psi _{n}(k) | \le C_{n} \gamma \int \limits _{0}^\infty dt E_\beta (t) |k|^{4}. \end{aligned}$$

It follows from Eq. (2.22) that

$$\begin{aligned} |\Psi _{n+1}(k) - \Psi _{n}(k) | \le C_{n+1}|k|^{4}, \\ C_{n+1}\le C_{n} \gamma [1 + 4(\beta - \Vert A\Vert )]^{-1}. \end{aligned}$$

Hence, the iterations (4.2) converge if \(C_{n+1}<C_{n} \), or, equivalently, if

$$\begin{aligned} \gamma < 1 + 4(\beta - \Vert A\Vert ). \end{aligned}$$
(4.12)

We use the estimate (3.22) for \( \beta \) and Eq. (4.11) for \( \gamma \) and obtain the sufficient condition for pointwise convergence of iterations (4.2) in the form

$$\begin{aligned} \Vert A\Vert < q / 24, \nonumber \\ q= \int \limits _{S^{d-1}} dn g(\omega \cdot n) [1 -(\omega \cdot n)^{2}], \quad \omega \in S^{d-1}. \end{aligned}$$
(4.13)

Note that this condition does not depend on the dimension d. It is slightly stronger than the condition (3.20) of Lemma 3.2, as expected.

The final result can be formulated in the following way.

Theorem 1

We consider the integral equation (2.20) and assume that the condition (4.13) for matrix A is fulfilled. It is also assumed that the solution \( \Psi (k) \) of this equation has asymptotic behaviour for small |k| in accordance with Eq. (3.2) for some \( p \in (2,4] \).

  1. (i)

    Then the parameter \( \beta \) in (2.20) and the symmetric matrix B in (3.2) normalized by condition \( \mathrm {Tr} B=d \) are uniquely defined by the solution \( (\beta , B) \) of the eigenvalue problem (3.3), (3.4) constructed in Lemma 3.2.

  2. (ii)

    For \( \beta \) and B from the item (i) there is a unique characteristic function \( \Psi (k) \) that solves Eq. (2.20) and satisfies the asymptotic formula (3.2) with \( p=4 \).

Proof

is almost done above. The iteration process (4.2) leads to pointwise convergence to a characteristic function

$$\begin{aligned} \Psi (k) = \lim \limits _{n \rightarrow \infty } \Psi _n(k), \quad k \in \mathbb {R}^{d}, \end{aligned}$$
(4.14)

because all \(\Psi _n(k) \), \( n \ge 0 \), are characteristic functions [16]. In fact the convergence is uniform on any compact set in \(\mathbb {R}^{d} \).

It remains to prove the uniqueness of \( \Psi (k) \). It is clear that

$$\begin{aligned} |\Psi (k)- 1 + \frac{1}{2} B: k \otimes k | \le C |k|^4. \end{aligned}$$
(4.15)

Let us assume that there is another solution \( \Psi ^{(1)}(k) \) of Eq. (2.20), which also satisfies a similar inequality. Then we obtain

$$\begin{aligned} | \Psi (k) - \Psi ^{(1)}(k) | \le C^{(1)}|k|^4, \quad C^{(1)}=\mathrm {const.} \end{aligned}$$

In particular, we can choose

$$\begin{aligned} C^{(1)} = d ( \Psi , \Psi ^{(1)} ) = \sup \frac{| \Psi (k) - \Psi ^{(1)}(k) |}{|k|^4}, \end{aligned}$$

where \( d ( \Psi , \Psi ^{(1)} ) \) is the distance between two characteristic functions used in [4].
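To make this distance concrete, here is a small numerical illustration of our own (not from [4]): two one-dimensional characteristic functions with equal second moments, for which the ratio \( |\Psi (k)- \Psi ^{(1)}(k)|/|k|^4 \) stays bounded and the supremum can be predicted by Taylor expansion at the origin.

```python
import math

def psi(k):  # characteristic function of the standard Gaussian
    return math.exp(-k * k / 2.0)

def phi(k):  # characteristic function of a Laplace law with variance 1
    return 1.0 / (1.0 + k * k / 2.0)

def dist(f, g, kmax=10.0, m=10_000):
    # discrete approximation of sup_{k != 0} |f(k) - g(k)| / |k|^4
    return max(abs(f(kmax * i / m) - g(kmax * i / m)) / (kmax * i / m) ** 4
               for i in range(1, m + 1))

# Both functions expand as 1 - k^2/2 + O(k^4), so the ratio is bounded;
# comparing the k^4 Taylor coefficients (1/8 vs 1/4) gives the supremum
# 1/8, approached in the limit k -> 0.
print(dist(psi, phi))  # approximately 0.125
```

The finiteness of this supremum for any two solutions with the same matrix B is exactly what makes the choice of \( C^{(1)} \) above legitimate.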

On the other hand it follows from Eq. (2.20) that

$$\begin{aligned} | \Psi (k) - \Psi ^{(1)}(k) | \le \int \limits _{0}^{\infty } dt E_\beta (t) | \Gamma [\Psi (k)] - \Gamma [\Psi ^{(1)} (k)] |. \end{aligned}$$

This estimate is almost a copy of (4.9). Therefore, we repeat the same considerations as before and obtain

$$\begin{aligned} C^{(1)} \le C^{(1)} \gamma [1 + 4 (\beta - \Vert A\Vert )]^{-1}. \end{aligned}$$

This contradicts the assumption (4.13) of the theorem, which guarantees the estimate (4.12), unless \( C^{(1)} = 0 \). Hence, the solution \( \Psi (k) \) is unique in the class of functions satisfying the estimate (4.15). This completes the proof. \(\square \)

5 Convergence to Self-similar Solution

The final step in our study is an improvement of Theorem 8.1 from [4]. We can show that this theorem with \( p=4 \) holds under the conditions of Theorem 1 from Sect. 4, i.e. for all A such that \( \Vert A\Vert < q/24 \), where q is given in (4.13). It is assumed below that this condition is fulfilled.

We briefly consider in this section the initial value problem for the characteristic function \( \varphi (k,t) \) in the self-similar coordinates (2.15). We obtain (see Eq. (2.18))

$$\begin{aligned} \varphi _t + A_\beta k \cdot \varphi _k + \varphi = \Gamma (\varphi ), \quad \varphi |_{t=0}=\varphi _0(k), \quad k \in \mathbb {R}^{d}, \end{aligned}$$
(5.1)

where \( A_\beta = A + \beta I \), \( \beta \in \mathbb {R}\). It is assumed that

$$\begin{aligned}&\left| \varphi _0(k) - \left( 1 - \frac{1}{2} G_0: k \otimes k \right) \right| \le C_0 |k|^{4}, \nonumber \\&C_0= \mathrm {const.}, \quad k \in \mathbb {R}^d, \end{aligned}$$
(5.2)

in the notation analogous to Eq. (3.2). Then it is known from [4] that there exists a unique characteristic function \( \varphi (k,t) \) that solves the problem (5.1) and satisfies the inequality

$$\begin{aligned} \left| \varphi (k,t) - \left( 1 - \frac{1}{2} G(t): k \otimes k \right) \right| \le C_1 |k|^{4}, \quad C_1= \mathrm {const.}, \end{aligned}$$
(5.3)

where G(t) is a time-dependent symmetric \((d \times d) \)-matrix that solves the problem

$$\begin{aligned} \frac{1}{2} G_t + \beta G + \theta \left( G - \frac{ 1}{d}\mathrm {Tr}G I \right) +&\left\langle G A \right\rangle = 0, \nonumber \\&G|_{t=0}= G_0, \end{aligned}$$
(5.4)

in the notation of equations (3.4) with \( B=G \). Since \( \varphi (k,t) \) is a characteristic function, the matrix G(t) is positive definite for all \( t \ge 0 \), except for the trivial solution \( \varphi (k,t)=1 \).

We assume below that the parameter \( \beta \) in Eqs. (5.1), (2.15) coincides with \( \beta \) from Theorem 1. Then the matrix B from Theorem 1 is obviously a stationary solution of Eqs. (5.4), and so is \( G_{st} =c^2 B \) for any \( c >0 \). Note that \( G_{st}\) is also a positive-definite matrix (like B). It follows from the standard theory of linear ODEs that the solution G(t) of (5.4) has the following form

$$\begin{aligned} G(t) = c^{2 }B + \sum \limits _{i=1}^{s} r_i e^{\gamma _i t} P_i(t), \quad t \ge 0, \end{aligned}$$
(5.5)

for some integer \( s \in [1, N_d -1] \), \(N_d = d(d+1)/2 \), constant parameters \( \gamma _i \in {\mathbb {C}} \) (distinct nonzero eigenvalues) and coefficients \( c^2 \), \( r_i \in {\mathbb {C}},\quad i =1, \cdots , s \). The polynomials \( P_i(t) \) can appear in the sum (5.5) in the case of multiple eigenvalues. It is clear that \( G(t) \rightarrow c^2 B \), as \( t \rightarrow \infty \), if \( \Re \gamma _i < 0 \) for all \( i \in [1,s] \). It follows from linear algebra that all \( \gamma _1, \cdots , \gamma _s \) can be found from the eigenvalue problem

$$\begin{aligned} \beta ' B' + \theta \left( B' - \frac{\mathrm {Tr} B'}{d} I \right) + \langle B' A \rangle =0 \end{aligned}$$
(5.6)

for a symmetric matrix \( B' \). Then \(\gamma _i =2 (\beta '_i - \beta ), \; i=1, \cdots , s \), where \( \{ \beta '_1, \cdots , \beta '_s \} \) are the distinct eigenvalues of the problem (5.6) such that \(\beta '_i \ne \beta \). We apply one of the estimates (3.22) and obtain

$$\begin{aligned} \Re \gamma _i< - 2 (\theta - 5 \Vert A\Vert )< - q/ 12, \quad \Vert A\Vert < q/24 \end{aligned}$$

under assumptions of Theorem 1. Hence,

$$\begin{aligned} G(t)= c^2 B + O[ \exp (- qt/ 12 ) ]. \end{aligned}$$
(5.7)

The result can be formulated as follows.

Lemma 5.1

Let the symmetric positive-definite \( (d \times d) \)-matrix G(t) be a solution of the Cauchy problem (5.4) with \( \beta \) from Lemma 3.2 and \( \Vert A\Vert < q/ 24 \). Then there exists a constant \( c > 0 \) such that the equality (5.7) with B from Lemma 3.2 holds for all \( t \ge 0 \).

Proof

is already done above. \(\square \)
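The relaxation of G(t) to its stationary value described by (5.5) and (5.7) can be illustrated numerically in the simplest special case \( A = 0 \), \( \beta = 0 \) (then \( \langle G A \rangle = 0 \) and no eigenvalue problem is needed). The sketch below takes \( d = 2 \) and a hypothetical value \( \theta = 0.4 \); Eq. (5.4) then reduces to \( G_t = -2\theta ( G - d^{-1}\mathrm {Tr}G \, I ) \), so \( \mathrm {Tr}G \) is conserved and the traceless part decays like \( e^{-2\theta t} \):

```python
# Forward-Euler integration of dG/dt = -2*theta*(G - (TrG/d) I),
# i.e. Eq. (5.4) specialized to A = 0, beta = 0; theta = 0.4 is a
# hypothetical value chosen for illustration only.
d, theta = 2, 0.4
G = [[2.0, 0.5],
     [0.5, 1.0]]            # symmetric positive-definite initial datum G0
dt, steps = 1.0e-3, 20_000  # integrate up to t = 20

for _ in range(steps):
    tr = G[0][0] + G[1][1]
    G = [[G[i][j] - 2.0 * theta * dt * (G[i][j] - (tr / d) * (i == j))
          for j in range(d)] for i in range(d)]

# Taking the trace of the equation shows TrG is conserved, while the
# traceless part decays exponentially, in line with (5.5) and (5.7).
print(G[0][0] + G[1][1])   # stays equal to TrG0 = 3
print(G[0][1])             # decays to 0: G(t) -> (TrG0/d) I
```

Here the limit matrix is isotropic because \( A = 0 \); for \( A \ne 0 \) the limit \( c^2 B \) is anisotropic and the decay rate is governed by the eigenvalues \( \gamma _i \) of (5.6).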

Lemma 5.1 is an analogue of Lemma 8.3 from [4]. Note that all parameters are defined in Lemma 5.1 explicitly. Finally we want to prove the convergence of \( \varphi (k,t) \) from (5.1) to a stationary solution of that equation, namely, to prove the equality

$$\begin{aligned} \lim \limits _{t \rightarrow \infty } \varphi (k,t) = \Psi (ck), \quad k \in \mathbb {R}^d, \end{aligned}$$
(5.8)

where the constant \( c > 0 \) is defined in Lemma 5.1.

Remark 5.1

It is easy to see that the function \( \Psi (k) \) constructed in Theorem 1 is even, i.e. \( \Psi (-k)= \Psi (k) \) (see also [4]).

To this end, we follow the scheme of [4] and introduce an auxiliary function

$$\begin{aligned} {\tilde{\varphi }}(k,t)= e^{- \frac{1}{2} G(t): k \otimes k }, \quad k \in \mathbb {R}^d, \end{aligned}$$
(5.9)

where G(t) is the same as in Lemma 5.1. We also assume that \( \varphi _0(k) \) in (5.1) satisfies the condition (5.2). Then we consider the difference

$$\begin{aligned} |\varphi (k,t) - \Psi (ck) | \le |\varphi (k,t) - {\tilde{\varphi }}(k,t)| + |{\tilde{\varphi }}(k,t) -\Psi (ck) |. \end{aligned}$$

It is straightforward to repeat arguments from [4] (the proof of Theorem 8.1) combined with Lemma 5.1 and obtain the following estimate for some \( t= T > 0 \):

$$\begin{aligned} |\varphi (k,t) - \Psi (ck) | \le C \left( |k|^4 + |k|^2 e^{- q T /12}\right) , \end{aligned}$$

provided \( \Vert A\Vert < q /24 \). Then we follow the same proof from [4] and obtain for any \( T > 0 \) and \( L> 0 \)

$$\begin{aligned}&|\varphi (k,T+L) - \Psi (ck) | \\&\quad \le C \{ |k|^4 \exp [- (q/2 - 4 \,\Vert A_{\beta }\Vert ) L ] \\&\qquad + |k|^2 \exp [- \left( q T/12 - 2 \,\Vert A_\beta \Vert \right) L ] \}. \end{aligned}$$

It follows from Lemma 3.2 that \( \Vert A_\beta \Vert < 3 \Vert A\Vert \). Taking \( L=T/3 \), we obtain

$$\begin{aligned} |\varphi (k,4T/3) - \Psi (ck) |\le & {} C \left( |k|^2 + |k|^4 \right) e^{ - \delta T}, \nonumber \\ \delta= & {} \frac{q-24 \Vert A\Vert }{12}, \quad k \in \mathbb {R}^d. \end{aligned}$$
(5.10)

Since \( \delta > 0\), this proves the pointwise convergence (5.8) for all \( k \in \mathbb {R}^d \). Hence, the following statement is proved.

Theorem 2

Let \( \varphi (k,t)\) be a solution of the problem (5.1), where \( \varphi _0(k)\) is a characteristic function (the Fourier transform of the probability distribution in \( \mathbb {R}^d \)) satisfying (5.2). Let the parameter \( \beta \) in (5.1) and the function \( \Psi (k) \) be the same as described in Theorem 1. Let \( \Vert A\Vert < q/24 \) in the notation of Eq. (4.13). Then there exist two constants \( c > 0 \) and \( C > 0 \) such that

$$\begin{aligned} \begin{aligned} |\varphi (k,t) - \Psi (ck) | \le C \left( |k|^2 + |k|^4 \right) e^{-\mu t}, \\ \mu =(q - 24 \Vert A\Vert )/16, \quad k \in \mathbb {R}^d, \quad t \ge 0. \end{aligned} \end{aligned}$$
(5.11)

Proof

is already done above. The estimate (5.11) obviously follows from (5.10). \(\square \)
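The passage from (5.10) to (5.11) is only a change of the time variable: with \( t = 4T/3 \) one has \( e^{-\delta T} = e^{-(3\delta /4) t} \), and \( (3/4)\,(q - 24\Vert A\Vert )/12 = (q - 24\Vert A\Vert )/16 = \mu \). A one-line exact-arithmetic check of this identity, on two hypothetical sample pairs \( (q, \Vert A\Vert ) \) satisfying \( \Vert A\Vert < q/24 \):

```python
from fractions import Fraction

# mu in (5.11) equals (3/4) * delta, where delta is the rate in (5.10).
for q, a in [(Fraction(2, 3), Fraction(1, 100)),
             (Fraction(1, 2), Fraction(1, 60))]:
    delta = (q - 24 * a) / 12   # decay rate in (5.10)
    mu = (q - 24 * a) / 16      # decay rate in (5.11)
    assert Fraction(3, 4) * delta == mu

print("mu = 3*delta/4: verified")
```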

This theorem can be considered as a certain improvement and clarification of Theorem 8.1 from [4]. We omit the translation of the results to the language of distribution functions in the velocity space \( \mathbb {R}^d \) because it would be almost a repetition of Section 10 from [4] with new conditions of applicability of the results.

6 Conclusions

We have considered the modified spatially homogeneous Maxwell–Boltzmann equation (2.4). The equation contains an additional force term \( \mathrm {div}\, (A v f) \), where \( v \in \mathbb {R}^{d} \) and A is an arbitrary constant \( (d \times d) \)-matrix. Applications of this equation are connected with the well-known homoenergetic solutions to the spatially inhomogeneous Boltzmann equation studied by many authors since the 1950s. The self-similar solutions and related questions for Eq. (2.4) were recently considered in detail in [4] by using the Fourier transform and some properties of the Boltzmann collision operator in the Fourier representation [5]. The main results of [4] were obtained under the assumption of a “sufficiently small norm of A” in (2.4), without explicit estimates of this “smallness”. Our aim in this paper was to fill this gap and to prove that most of the results related to self-similar solutions remain valid for moderately small matrices A with norm \( \Vert A\Vert = O(10^{-1}) \) in dimensionless units. This is important for applications because it shows the limits of the approach based on perturbation theory. The main results of the paper are formulated in Theorems 1 and 2 from Sects. 4 and 5, respectively. These theorems extend the corresponding results of [4] to moderate values of \( \Vert A\Vert \). The main idea of the proofs of the new estimates is based on a detailed study of the eigenvalue problem (3.3), see Lemma 3.1 from Sect. 3. A by-product result is the proof of existence of a bounded fourth moment of the self-similar profile for moderate values of \( \Vert A\Vert \). The question of existence of all moments of the self-similar profile F(v) remains open, even for matrices A with arbitrarily small norm.