1. LIOUVILLE EQUATIONS

Let \(v \) be a smooth vector field on an \(n \)-dimensional manifold \(M \) generating the autonomous system of differential equations

$$ \dot {x}=v(x).$$
(1.1)

Here \(x=(x_1,\ldots ,x_n)\) are local coordinates on \(M \). Let \(T^*M \) be the total space of the cotangent bundle of \(M \), and let \(y=(y_1,\ldots ,y_n) \) be the Cartesian coordinates in the vector space \(T_x^*M \).

The function

$$ H(x,y)=\big (y,v(x)\big )=\sum y_i v_i(x) $$
(1.2)

is invariantly defined on \(T^*M \). The parentheses \((\thinspace ,\thinspace \!) \) stand for pairing (the value of a covector on a vector). On the other hand, there exists a natural (canonical) symplectic structure on \(T^*\!M \) given by the nondegenerate closed 2-form

$$ d(y,dx)=\sum dy_i\wedge dx_i,$$

which permits one to treat the function (1.2) as a Hamiltonian and consider the associated system of differential Hamiltonian equations

$$ \dot {x}=\frac {\partial H}{\partial y}=v(x),\quad \dot {y}=-\frac {\partial H}{\partial x}= -\biggl (\frac {\partial v}{\partial x}\biggr )^{\!*}y.$$
(1.3)

The first group of equations coincides with the original system of differential equations (1.1), and the second one is the adjoint variational system for system (1.1).
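Relations (1.3) can be checked numerically: along a trajectory of the extended system, the Hamiltonian (1.2) must stay constant. The field, initial data, and step size below are illustrative assumptions (a sketch, not part of the original text).

```python
def v(x):
    # illustrative choice of field: v(x) = (x2, -x1), a rotation of the plane
    return [x[1], -x[0]]

def rhs(z):
    # z = (x1, x2, y1, y2); equations (1.3): xdot = v(x), ydot = -(dv/dx)^T y
    x1, x2, y1, y2 = z
    # for this field (dv/dx)^T = [[0, -1], [1, 0]], so ydot = (y2, -y1)
    return [x2, -x1, y2, -y1]

def rk4(z, dt, steps):
    # classical fourth-order Runge-Kutta integrator
    for _ in range(steps):
        k1 = rhs(z)
        k2 = rhs([a + 0.5*dt*b for a, b in zip(z, k1)])
        k3 = rhs([a + 0.5*dt*b for a, b in zip(z, k2)])
        k4 = rhs([a + dt*b for a, b in zip(z, k3)])
        z = [a + dt*(p + 2*q + 2*r + s)/6.0
             for a, p, q, r, s in zip(z, k1, k2, k3, k4)]
    return z

def H(z):
    # Hamiltonian (1.2): H = (y, v(x)) = y1*v1 + y2*v2
    x1, x2, y1, y2 = z
    return y1*x2 - y2*x1

z0 = [1.0, 0.0, 0.3, -0.7]
z1 = rk4(z0, 1e-3, 5000)  # H(z1) agrees with H(z0) up to integration error
```

The first two components of `rhs` reproduce system (1.1); the last two are the adjoint variational system, and conservation of `H` is exactly the statement that (1.2) is a first integral of (1.3).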

These observations, dating back to Liouville, allow reducing the study of system (1.1) to a problem about the Hamiltonian system (1.3) with Hamiltonian linear in the momenta. It is useful to associate the Hamilton–Jacobi equation

$$ \frac {\partial S}{\partial t}+ \biggl (\frac {\partial S}{\partial x},v(x)\biggr )=0$$
(1.4)

with the Hamiltonian (1.2). The function \(S \) has the meaning of action in the sense of Hamilton in the Hamiltonian formalism and, by virtue of Eq. (1.4), is constant along solutions, i.e., is a first integral of system (1.1).


Recall that a complete solution of Eq. (1.4) is a function \(S(t,x,\alpha ) \) (where \(\alpha =(\alpha _1,\ldots ,\alpha _n)\) is a set of parameters) satisfying Eq. (1.4) for each \(\alpha \) and the nondegeneracy condition

$$ \det \biggl [\frac {\partial ^2 S}{\partial x_i\partial \alpha _j}\biggr ]\ne 0.$$
(1.5)

Since the field \(v \) is time-independent, we can set

$$ S=-h(\alpha )t+W(x,\alpha ); $$

then it is easily seen that the function \(W \) satisfies the equation

$$ \biggl (\frac {\partial W}{\partial x},v(x)\biggr )=h $$
(1.6)

and condition (1.5) (with \(S\) replaced by \(W \)).

By the Jacobi theorem, if a complete solution of Eq. (1.4) is known, then the general solution of the corresponding Hamiltonian system (1.3) can be found from the relations

$$ \frac {\partial S}{\partial \alpha _i}=\beta _i,\quad 1\leq i\leq n. $$

The possibility of finding the variables \(x_1,\ldots ,x_n \) as functions of time \(t \) and \(2n \) arbitrary constants \(\alpha _i \), \(\beta _i\) ( \(1\le i\le n \)) is guaranteed by the implicit function theorem in view of condition (1.5). The solutions of the adjoint linear variational system are found by the formulas

$$ y_j=\frac {\partial S}{\partial x_j},\quad 1\leq j\leq n,$$

where one should substitute the solutions of system (1.1) for \(x \).

We say that the variables \(x_1,\ldots ,x_n\) in the system of ordinary differential equations are separated if the complete solution of Eq. (1.6) has the form

$$ W(x,\alpha )=\sum _{i=1}^n W_i(x_i,\alpha _1,\ldots ,\alpha _n).$$

This general definition is a key one for further analysis. In Secs. 2 and 3, we indicate meaningful examples of nonlinear systems solvable by separation of variables.

However, already for \(n=2\) there exist systems of differential equations that are customarily referred to as systems with separated variables and which do not fall under our consideration. We mean systems of the form

$$ \dot {x}_1=v_1(x_2),\quad \dot {x}_2=v_2(x_1). $$
(1.7)

To eliminate this contradiction, we agree not to distinguish between systems whose right-hand sides differ by a factor in the form of an everywhere positive (or negative) smooth function. Such systems have the same phase trajectories and are simultaneously integrable (or nonintegrable) by quadratures. Since we consider autonomous systems of differential equations, the problem of finding their trajectories (rather than solutions) seems even more natural. If we divide the right-hand sides of (1.7) by the product \( v_1v_2\) (in the domain where system (1.7) has no singular points), then we obtain a system with separated variables in the sense of our general definition.
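The claim that such a rescaling preserves trajectories can be illustrated numerically: both system (1.7) and the system obtained by dividing its right-hand sides by \(v_1v_2\) conserve the orbit invariant \(G(x)=\int v_2(x_1)\,dx_1-\int v_1(x_2)\,dx_2\). The concrete positive functions \(v_1\), \(v_2\) and step sizes below are illustrative assumptions.

```python
import math

def v1(x2):
    return 1.0 + x2*x2          # everywhere positive (illustrative choice)

def v2(x1):
    return 2.0 + math.sin(x1)   # everywhere positive

def rhs_orig(x):
    # system (1.7): x1dot = v1(x2), x2dot = v2(x1)
    return [v1(x[1]), v2(x[0])]

def rhs_scaled(x):
    # right-hand sides divided by v1(x2)*v2(x1): each equation now involves
    # only its own variable, i.e., the variables are separated
    return [1.0/v2(x[0]), 1.0/v1(x[1])]

def G(x):
    # orbit invariant: antiderivative of v2(x1) minus antiderivative of v1(x2)
    return (2.0*x[0] - math.cos(x[0])) - (x[1] + x[1]**3/3.0)

def rk4(f, x, dt, steps):
    for _ in range(steps):
        k1 = f(x)
        k2 = f([a + 0.5*dt*b for a, b in zip(x, k1)])
        k3 = f([a + 0.5*dt*b for a, b in zip(x, k2)])
        k4 = f([a + dt*b for a, b in zip(x, k3)])
        x = [a + dt*(p + 2*q + 2*r + s)/6.0
             for a, p, q, r, s in zip(x, k1, k2, k3, k4)]
    return x

x0 = [0.2, -0.1]
xa = rk4(rhs_orig, x0, 1e-3, 1000)    # both flows stay on the level set of G,
xb = rk4(rhs_scaled, x0, 1e-3, 1000)  # i.e., on one and the same trajectory
```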

To conclude this section, we present one relation important for further analysis. Let

$$ \Phi _1=\big (y,w_1(x)\big )\quad \text {and}\quad \Phi _2=\big (y,w_2(x)\big ) $$

be two functions on \(T^*M \) linear in the momenta, and let \(w_1 \) and \(w_2 \) be smooth vector fields on \(M \). Then the Poisson bracket \(\{\Phi _1,\Phi _2\} \) is equal to

$$ \big (y,[w_1,w_2]\big ), $$

where \([\thinspace {,}\thinspace ] \) is the commutator of vector fields. Consequently, the functions \(\Phi _1\) and \(\Phi _2 \) are in involution if and only if the fields \(w_1 \) and \(w_2 \) commute with each other.
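This identity is purely algebraic and can be verified in code for any pair of fields; the polynomial fields below are arbitrary illustrative choices, and the sign conventions for the Poisson bracket and the commutator are fixed in the comments (the identity holds literally for this consistent pair of conventions).

```python
def w1(x):
    # two illustrative polynomial vector fields on the plane
    return [x[0]**2, x[1]]

def w2(x):
    return [x[1], x[0]]

def Dw1(x):
    # Jacobian matrices of the fields, written out by hand
    return [[2.0*x[0], 0.0], [0.0, 1.0]]

def Dw2(x):
    return [[0.0, 1.0], [1.0, 0.0]]

def lie_bracket(x):
    # commutator [w1, w2] = (w1.grad) w2 - (w2.grad) w1, i.e., Dw2*w1 - Dw1*w2
    a, b, J1, J2 = w1(x), w2(x), Dw1(x), Dw2(x)
    return [J2[i][0]*a[0] + J2[i][1]*a[1] - J1[i][0]*b[0] - J1[i][1]*b[1]
            for i in (0, 1)]

def poisson(x, y):
    # {F, G} = sum_i (dF/dy_i dG/dx_i - dF/dx_i dG/dy_i), the convention
    # under which {Phi1, Phi2} = (y, [w1, w2]) holds with the bracket above
    a, b, J1, J2 = w1(x), w2(x), Dw1(x), Dw2(x)
    dF_dx = [y[0]*J1[0][i] + y[1]*J1[1][i] for i in (0, 1)]  # x-derivatives of (y, w1)
    dG_dx = [y[0]*J2[0][i] + y[1]*J2[1][i] for i in (0, 1)]  # x-derivatives of (y, w2)
    return sum(a[i]*dG_dx[i] - dF_dx[i]*b[i] for i in (0, 1))

x, y = [1.0, 2.0], [3.0, 5.0]
c = lie_bracket(x)
lhs = poisson(x, y)
rhs = y[0]*c[0] + y[1]*c[1]   # (y, [w1, w2]); equals lhs at every point
```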

Let the functions

$$ F_1=H=(y,v),\quad F_2=(y,w_2),\quad \ldots ,\quad F_n=(y,w_n)$$
(1.8)

be pairwise in involution with each other. In particular, all of them are integrals of the Hamiltonian system (1.3). In addition, let the vector fields \(w_1=v,w_2,\ldots ,w_n \) be linearly independent at each point of \(M \) and complete on \(M \). Then each connected component of the manifold \(M \) is a cylinder (diffeomorphic to the direct product \(\mathbb {R}^k\times \mathbb {T}^{n-k}\), where \(\mathbb {T}^m \) is an \(m \)-dimensional torus), and on this cylinder we can introduce \(k \) linear coordinates \(\psi _1,\ldots ,\psi _k \) and \(n-k \) angular coordinates \(\varphi _{k+1},\ldots ,\varphi _n\mod 2\pi \) so that system (1.1) in these coordinates acquires the form

$$ \dot {\psi }_i=c_i,\quad \dot {\varphi }_j=\omega _j;\quad c,\omega =\mathrm {const}.$$
(1.9)

This assertion is a global version of the theorem on the rectification of trajectories (see, e.g., [1, Ch. II, Sec. 4]). In the compact case (\(k=0 \)), we obtain conditionally periodic motions on \(\mathbb {T}^n \). Reducing a system of equations to the form (1.9) is often called a linearization of this system (because the variables \(\psi _i\), \(1\le i\le k \) and \(\varphi _j \), \(k+1\le j\le n\), vary linearly with time).

The application of the Liouville equations to the integration of nonlinear systems of differential equations can be found in [2]. This note should be viewed as a supplement to that paper.

2. STÄCKEL SYSTEMS

Consider a nonsingular \(n\times n\) matrix

$$ \big [\varphi _{ij}(x_j)\big ],\quad 1\leq i,j\leq n. $$
(2.1)

Let \(\Phi \) be its determinant, and let \(\Phi _{ij}\) be the cofactor of the entry \(\varphi _{ij}(x_j)\); note that the cofactor \(\Phi _{ij}\) does not depend on the coordinate \(x_j \), since it is formed without the \(j \)th column. Consider \(n \) systems of differential equations of the form

$$ \dot {x}_j=\frac {f_j(x_j)\Phi _{mj}(x)}{\Phi (x)},\quad 1\leq j\leq n,$$
(2.2)

where the \(f_j \) are nonvanishing smooth functions of a single variable. The integer \(m\) (\(1\leq m\leq n \)) labels these systems. We claim that each of these systems can be integrated by separation of variables. Moreover, all these nonlinear systems are linearized by one and the same transformation in the case of the torus (\(M=\mathbb {T}^n \)).

In the case under consideration, Eq. (1.6) acquires the form

$$ \sum _{j=1}^n f_j\Phi _{mj}\frac {\partial W}{\partial x_j}=\alpha _m\Phi \quad (h=\alpha _m),$$
(2.3)

or

$$ \sum _{j=1}^n\Phi _{mj}\biggl (f_j\frac {\partial W}{\partial x_j}- \sum _{k=1}^n\alpha _k\varphi _{kj}\biggr )=0.$$

Set

$$ W=\sum W_s(x_s,\alpha ), $$
(2.4)

where \(W_s\) (as a function of one variable \(x_s \)) satisfies the equation

$$ f_s\frac {\partial W_s}{\partial x_s}=\sum _{k=1}^n\alpha _k\varphi _{ks}(x_s).$$
(2.5)

Equation (2.5) can be solved trivially by a quadrature, and (since the matrix (2.1) is nonsingular) the function (2.4) gives a complete solution of Eq. (2.3).

We point out that all \(n\) systems (2.2) of differential equations are solved simultaneously by separating the variables \(x_1,\ldots ,x_n\). It is easy to understand that \(n\) functions of the form (1.8)

$$ F_k=\sum _j\Phi _{kj}f_jy_j/\Phi ,\quad 1\leq k\leq n,$$

form a complete set of first integrals in involution. Since the functions \(f_1,\ldots ,f_n \) nowhere vanish by assumption, we have

$$ \frac {\partial (F_1,\ldots ,F_n)}{\partial (y_1,\ldots ,y_n)}\ne 0.$$

Consequently, the phase flows of the systems of differential equations (2.2) for different \(m \) commute with each other.

This method for the separation of variables was first used by Stäckel to solve equations of dynamics (see, e.g., the monograph [3, Sec. 2.3] and references therein). In particular, he indicated Riemannian metrics whose geodesics can be found by quadratures. The same variables are separated in the corresponding Laplace–Beltrami operator.

It follows from Eq. (2.5) that

$$ W_s=\int _{x_s^0}^{x_s}\frac {\sum \alpha _k\varphi _{ks}(z)}{f_s(z)}\thinspace dz. $$

Consequently, by the Jacobi theorem,

$$ \begin {aligned} \sum _s\int _{x_s^0}^{x_s}\frac {\varphi _{ks}(z)}{f_s(z)}\thinspace dz&=\beta _k,\quad k\ne m, \\ \sum _s\int _{x_s^0}^{x_s}\frac {\varphi _{ms}(z)}{f_s(z)}\thinspace dz&=t+\beta _m, \end {aligned}$$
(2.6)

which gives the general solution of the \(m \)th system of differential equations in (2.2). For the arbitrary constants one can take \(x_1^0,\ldots ,x_n^0\) or \(\beta _1,\ldots ,\beta _n \).
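For a concrete illustration (an assumption-laden sketch, not part of the original text), take \(n=2 \), the Vandermonde matrix with rows \((1,1)\) and \((x_1,x_2)\), \(f_1=f_2=1 \), and \(m=2 \). Then \(\Phi =x_2-x_1 \), and relations (2.6) state that \(x_1+x_2 \) is constant along solutions while \((x_1^2+x_2^2)/2 \) grows linearly in \(t \); both facts are confirmed below.

```python
def rhs(x):
    # system (2.2) for the Staeckel matrix [[1, 1], [x1, x2]], f = 1, m = 2:
    # Phi = x2 - x1, cofactors Phi_21 = -1, Phi_22 = 1
    d = x[1] - x[0]
    return [-1.0/d, 1.0/d]

def rk4(x, dt, steps):
    for _ in range(steps):
        k1 = rhs(x)
        k2 = rhs([a + 0.5*dt*b for a, b in zip(x, k1)])
        k3 = rhs([a + 0.5*dt*b for a, b in zip(x, k2)])
        k4 = rhs([a + dt*b for a, b in zip(x, k3)])
        x = [a + dt*(p + 2*q + 2*r + s)/6.0
             for a, p, q, r, s in zip(x, k1, k2, k3, k4)]
    return x

x0 = [0.0, 1.0]          # x1 < x2, and the gap only grows, so Phi never vanishes
T, dt = 0.5, 1e-3
x = rk4(x0, dt, int(round(T/dt)))
# quadratures (2.6): the row k = 1 gives a constant, the row k = m = 2 gives t + const
I1 = (x[0] + x[1]) - (x0[0] + x0[1])
I2 = 0.5*(x[0]**2 + x[1]**2) - 0.5*(x0[0]**2 + x0[1]**2)
```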

3. ABEL EQUATIONS

Consider the special case of the Stäckel system in which the matrix (2.1) is the Vandermonde matrix

$$ \begin {bmatrix} 1 & 1 & \ldots & 1 \\ x_1 & x_2 & \ldots & x_n \\ \cdots & \cdots & \cdots & \cdots \\ x_1^{n-1} & x_2^{n-1} & \ldots & x_n^{n-1} \end {bmatrix}.$$

Let us explicitly write system (2.2) of differential equations with separated variables in the simplest case of \(m=n\),

$$ \dot {x}_j= {f_j(x_j)}\bigg /{\prod _{\substack {s=1 \\ s\ne j}}^n(x_j-x_s)},\quad 1\leq j\leq n.$$
(3.1)

If we disregard the content of Sec. 2, it is not at all obvious that this system can be solved by separation of variables.
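One way to see why the separation works is algebraic: differentiating relations of the type (3.3) along solutions of (3.1) reduces them to the Lagrange interpolation identity \(\sum _s x_s^{j-1}\big /\prod _{r\ne s}(x_s-x_r)=0 \) for \(j<n \) and \(=1 \) for \(j=n \). The code below is an illustrative numerical check of this identity (the point count and sampling are arbitrary choices).

```python
import random

def divided_sum(xs, j):
    # sum over s of x_s^(j-1) / prod_{r != s} (x_s - x_r)
    total = 0.0
    for s, xs_s in enumerate(xs):
        prod = 1.0
        for r, xs_r in enumerate(xs):
            if r != s:
                prod *= xs_s - xs_r
        total += xs_s**(j - 1) / prod
    return total

random.seed(1)
n = 5
xs = [random.uniform(-2.0, 2.0) for _ in range(n)]  # pairwise distinct a.s.
vals = [divided_sum(xs, j) for j in range(1, n + 1)]
# expected: vals[0], ..., vals[n-2] vanish, vals[n-1] equals 1
```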

In many classical integrable problems of mechanics, \(f_1=\ldots =f_n=f \) and

$$ f(z)=\sqrt {P(z)},$$
(3.2)

where \(P \) is a polynomial of degree \(2n+1 \) or \(2n+2 \) without multiple roots. In particular, for \(n=2 \) Eqs. (3.1) (the function \(f \) is determined by (3.2)) arise when integrating the equations of rotation of a massive solid body in the Kovalevskaya and Goryachev–Chaplygin cases (see, e.g., [1, Ch. II, Sec. 5]).

Equations (2.6) acquire the form

$$ \begin {gathered} \sum \int _{x_s^0}^{x_s}\dfrac {dz}{\sqrt {P(z)}}=\beta _1, \\ \cdots \cdots \cdots \cdots \cdots \cdots \cdots \\ \sum \int _{x_s^0}^{x_s}\dfrac {z^{n-2}\thinspace dz}{\sqrt {P(z)}}=\beta _{n-1}, \\ \sum \int _{x_s^0}^{x_s}\dfrac {z^{n-1}\thinspace dz}{\sqrt {P(z)}}=t+\beta _{n}. \end {gathered}$$
(3.3)

The problem of inverting these equations (finding \(x_1,\ldots ,x_n \)) is the subject of classical studies begun by Abel, Jacobi, and Riemann (the modern approach is described, e.g., in [4]). If \(P\) is a polynomial of degree \(2n-1 \), then (as Abel proved) the solution of the transcendental system (3.3) can be reduced to an algebraic problem of solving a system of polynomial equations. Jacobi associated the separating variables \(x_1,\ldots ,x_n\) with elliptic coordinates in \(\mathbb {R}^n\) [5, p. 209]. The relation between Abel equations (3.1) and Stäckel systems was also discussed in [6; 7, Ch. 3, Sec. 3] from the viewpoint of the problem of integrating the equations of dynamics.

4. SYSTEMS WITH MULTIVALUED INTEGRALS ON A TORUS

Let \(M \) be the \(n \)-dimensional torus \(\mathbb {T}^n=\{x_1,\ldots ,x_n\mod 2\pi \}\), and let \(f\colon \mathbb {T}^n\to \mathbb {R}\) be a smooth positive function. Consider the system of differential equations

$$ \dot {x}_j=\omega _j/f(x),\quad 1\leq j\leq n, $$
(4.1)

on \(\mathbb {T}^n\). Here \(\omega _1,\ldots ,\omega _n\) are nonzero constants.

Obviously, system (4.1) is integrable by quadratures. Moreover, it admits the multivalued integrals

$$ F_{ij}=\omega _j x_i-\omega _i x_j,\quad 1\leq i,j\leq n,$$

of which exactly \(n-1 \) are functionally independent. Note that, under some additional assumptions, the converse is also true: if a system on a torus admits \(n-1 \) independent multivalued integrals, then this system can be reduced to the form (4.1) in some angular variables (see [8]). The case of \(n=3\) was considered by Arnold [9]. System (4.1) admits the invariant measure \(f(x)\thinspace d^nx \). As shown by Kolmogorov [10], any system without singular points on a two-dimensional torus that admits an invariant measure with a positive smooth density can be reduced to the form (4.1).

First, let us make a simple remark. If

$$ f=\sum f_j(x_j),$$

then system (4.1) can be solved by separation of variables. By the way, this case is realized in integrable problems of rigid body dynamics: after the introduction of angular variables, the Abel equations (3.1) (for \(n=2\)) take exactly this form.

In the general case, Eq. (1.6) has the simple form

$$ \sum \omega _j\frac {\partial W}{\partial x_j}=h(\alpha )f.$$
(4.2)

Set \(f=\overline {f}+\widetilde {f} \), where \(\overline {f} \) is the mean value of the function \(f \) on \(\mathbb {T}^n \) and the mean of \(\widetilde {f}=f-\overline {f} \) is zero. Then the solution of Eq. (4.2) can be represented as the sum of functions \(\overline {W} \) and \(\widetilde {W} \), where

$$ \overline {W}=\sum \alpha _j x_j,\quad \sum \omega _j\alpha _j=h(\alpha )\overline {f}$$

and

$$ \sum \omega _j\frac {\partial \widetilde {W}}{\partial x_j}= h\widetilde {f}.$$
(4.3)

It is well known that for almost all \(n \)-tuples \((\omega _1,\ldots ,\omega _n)\in \mathbb {R}^n\) Eq. (4.3) admits a smooth solution (which is unique if the mean of \(\widetilde {W} \) is assumed to be zero).

Then \(S=-h(\alpha )t+W(x,\alpha ) \) and the variables \(x_1,\ldots ,x_n \) as functions of time can be found from the relations

$$ \frac {\partial S}{\partial \alpha _j}=-\frac {\omega _j}{\overline {f}}t+ x_j+\frac {\omega _j}{\overline {f}}W^{\prime }(x)=\beta _j,\quad 1\leq j\leq n. $$

Here \(W^{\prime }=\widetilde {W}/h \). Consequently, the change of variables

$$ x_j+\omega ^{\prime }_jW^{\prime }(x)=\varphi _j,\quad \omega ^{\prime }_j=\omega _j/\overline {f}$$
(4.4)

takes system (4.1) to the system

$$ \dot \varphi _j=\omega ^{\prime }_j,\quad 1\leq j\leq n, $$
(4.5)

which defines a conditionally periodic motion on the torus. The frequencies \( \omega ^{\prime }_j\) are the mean values of the right-hand sides of system (4.1) with respect to the invariant measure \(f\thinspace d^nx\).

Formulas (4.4) are indicated in the book [11, p. 156]; the same result about reducing system (4.1) to system (4.5) for almost all \(\omega \in \mathbb {R}^n \) is proved in a similar manner in the paper [12]. Kolmogorov’s proof in [10] for \(n=2\) is based on another idea.
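For a density \(f \) with a single Fourier harmonic, Eq. (4.3) can be solved in closed form and the linearization (4.4)–(4.5) verified numerically. The specific \(f \), frequencies, and step sizes below are illustrative assumptions (the frequencies are chosen so that no small divisor appears).

```python
import math

fbar, a = 1.5, 0.4                 # mean value and harmonic amplitude, so f > 0
om = [1.0, math.sqrt(2.0)]         # om1 + om2 != 0: the single divisor is safe

def f(x):
    # positive density on the torus with one harmonic
    return fbar + a*math.cos(x[0] + x[1])

def rhs(x):
    # system (4.1)
    fx = f(x)
    return [om[0]/fx, om[1]/fx]

def Wp(x):
    # W' solves sum_j om_j dW'/dx_j = f - fbar; one Fourier mode suffices here
    return a*math.sin(x[0] + x[1])/(om[0] + om[1])

def phi(x):
    # change of variables (4.4) with om'_j = om_j/fbar
    return [x[j] + (om[j]/fbar)*Wp(x) for j in (0, 1)]

def rk4(x, dt, steps):
    for _ in range(steps):
        k1 = rhs(x)
        k2 = rhs([p + 0.5*dt*q for p, q in zip(x, k1)])
        k3 = rhs([p + 0.5*dt*q for p, q in zip(x, k2)])
        k4 = rhs([p + dt*q for p, q in zip(x, k3)])
        x = [p + dt*(a1 + 2*a2 + 2*a3 + a4)/6.0
             for p, a1, a2, a3, a4 in zip(x, k1, k2, k3, k4)]
    return x

x0 = [0.3, 1.1]
T, dt = 2.0, 1e-3
xT = rk4(x0, dt, int(round(T/dt)))
p0, pT = phi(x0), phi(xT)
# in the new variables the motion is linear: pT[j] - p0[j] = (om[j]/fbar)*T
```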

5. CONDITIONS FOR SEPARATION OF VARIABLES

The next assertion answers the question about the separation of variables \(x_1,\ldots ,x_n\) in the system of differential equations (1.1), i.e., in the system

$$ \dot {x}_j=v_j(x_1,\ldots ,x_n),\quad 1\leq j\leq n. $$
(5.1)

Theorem.

The variables \( x_1,\ldots ,x_n\) in system (5.1) can be separated if and only if the components of the vector field \(v \) satisfy the following \( n^2(n-1)/2\) conditions:

$$ v_jv_k\frac {\partial ^2 v_i}{\partial x_j\partial x_k}- v_j\frac {\partial v_k}{\partial x_j}\frac {\partial v_i}{\partial x_k}- v_k\frac {\partial v_j}{\partial x_k}\frac {\partial v_i}{\partial x_j}=0, \quad 1\leq i\leq n,\quad 1\leq j<k\leq n.$$
(5.2)

The proof is based on the well-known result by Levi-Civita on the criterion for separation of variables in the Hamilton–Jacobi equation with the Hamiltonian \( H(x_1,\ldots ,x_n,y_1,\ldots ,y_n)\),

$$ \frac {\partial H}{\partial y_j}\frac {\partial H}{\partial y_k} \frac {\partial ^2 H}{\partial x_j\partial x_k}- \frac {\partial H}{\partial y_j}\frac {\partial H}{\partial x_k} \frac {\partial ^2 H}{\partial x_j\partial y_k}- \frac {\partial H}{\partial x_j}\frac {\partial H}{\partial y_k} \frac {\partial ^2 H}{\partial y_j\partial x_k}+ \frac {\partial H}{\partial x_j}\frac {\partial H}{\partial x_k} \frac {\partial ^2 H}{\partial y_j\partial y_k}=0$$
(5.3)

for all \(1\leq j<k\leq n \) (there is no summation over repeated indices). The proof of this fact and its discussion can be found, for example, in [3, Sec. 2.3].

Now substituting the Hamiltonian (1.2) into relations (5.3), we obtain conditions (5.2) for the separation of variables in the system of differential equations (1.1).

It is hardly possible to find all vector fields that satisfy system (5.2) in the most general case. Consider the special case mentioned in Sec. 1, where \(n=2 \) (system (1.7)). In this case, system (5.2) can be reduced to the two partial differential equations

$$ \begin {aligned} v_1v_2\frac {\partial ^2 v_1}{\partial x_1\partial x_2}- \frac {\partial v_1}{\partial x_2}\biggl (v_1\frac {\partial v_2}{\partial x_1}+ v_2\frac {\partial v_1}{\partial x_1}\biggr )&=0, \\ v_1v_2\frac {\partial ^2 v_2}{\partial x_1\partial x_2}- \frac {\partial v_2}{\partial x_1}\biggl (v_1\frac {\partial v_2}{\partial x_2}+ v_2\frac {\partial v_1}{\partial x_2}\biggr )&=0. \end {aligned}$$

They are easily transformed to the form

$$ \frac {\partial }{\partial x_1} \frac {\partial v_1/\partial x_2}{v_1v_2}=0,\quad \frac {\partial }{\partial x_2} \frac {\partial v_2/\partial x_1}{v_1v_2}=0. $$

The latter, in an obvious manner, implies the equalities

$$ \partial v_1/\partial x_2=\varphi (x_2)v_1v_2,\quad \partial v_2/\partial x_1=\psi (x_1)v_1v_2$$
(5.4)

with some smooth functions \(\varphi \) and \(\psi \).

First, let \(\varphi \equiv 0\). (The case of \(\psi \equiv 0\) can be considered in a similar way.) Then the first relation in (5.4) implies that \(v_1=\alpha (x_1)\), where \(\alpha \) is a smooth function. However, then the second equality in (5.4) implies that

$$ v_2=\beta (x_2)\gamma (x_1), $$

where \(\gamma =\exp \int \psi (x_1)\alpha (x_1)\thinspace dx_1 \). In this case, the Hamilton–Jacobi equation (1.6) can be solved by the separation of variables.

Now assume that the functions \(\varphi \) and \(\psi \) do not vanish anywhere. Set

$$ v_1=w_1/\psi ,\quad v_2=w_2/\varphi . $$
(5.5)

Then it follows from relations (5.4) that \(\partial w_1/\partial x_2=\partial w_2/\partial x_1 \). Consequently, the field \(w=(w_1,w_2) \) is potential,

$$ w_1=\partial a/\partial x_1,\quad w_2=\partial a/\partial x_2$$
(5.6)

with a smooth potential \(a \) satisfying the equation

$$ \frac {\partial ^2 a}{\partial x_1\partial x_2}= \frac {\partial a}{\partial x_1}\frac {\partial a}{\partial x_2}.$$

Based on this, we conclude that

$$ \frac {\partial a}{\partial x_1}=c_1(x_1)e^a,\quad \frac {\partial a}{\partial x_2}=c_2(x_2)e^a$$
(5.7)

with some smooth functions \(c_1\) and \(c_2 \) of one variable. Hence, according to Eqs. (5.5) and (5.6), we have

$$ v_1=\frac {c_1(x_1)}{\psi (x_1)}e^a,\quad v_2=\frac {c_2(x_2)}{\varphi (x_2)}e^a.$$
(5.8)

Finally, from the first equation in (5.7) we obtain

$$ -e^{-a}=\int c_1(x_1)\thinspace dx_1+d(x_2). $$

As a result, the representations (5.8) yield the following form of the right-hand sides of our system of differential equations:

$$ v_1=\frac {A(x_1)}{C(x_1)+D(x_2)},\quad v_2=\frac {B(x_2)}{C(x_1)+D(x_2)},$$

where \(A \), \(B \), \(C \), and \(D \) are smooth functions. This is precisely a Stäckel system with separated variables of the type considered in Sec. 2.
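The conclusion can be tested numerically: conditions (5.2) should vanish identically for right-hand sides of the form \(v_1=A(x_1)/(C(x_1)+D(x_2))\), \(v_2=B(x_2)/(C(x_1)+D(x_2))\). The sketch below checks this by finite differences; the particular \(A \), \(B \), \(C \), \(D \), test points, and step size are arbitrary illustrative choices, so the residual is small but not exactly zero because of discretization error.

```python
import math

def v(i, x):
    # v1 = A(x1)/(C(x1)+D(x2)), v2 = B(x2)/(C(x1)+D(x2)) with the illustrative
    # choices A = 2 + sin x1, B = 1 + x2^2, C = x1^2, D = 2 + cos x2 (C + D > 0)
    g = x[0]**2 + 2.0 + math.cos(x[1])
    num = 2.0 + math.sin(x[0]) if i == 0 else 1.0 + x[1]**2
    return num/g

h = 1e-4

def d1(i, j, x):
    # central-difference approximation to dv_i/dx_j
    xp, xm = list(x), list(x)
    xp[j] += h
    xm[j] -= h
    return (v(i, xp) - v(i, xm))/(2.0*h)

def d2(i, x):
    # mixed second derivative d^2 v_i/(dx_1 dx_2)
    pp = v(i, [x[0] + h, x[1] + h])
    pm = v(i, [x[0] + h, x[1] - h])
    mp = v(i, [x[0] - h, x[1] + h])
    mm = v(i, [x[0] - h, x[1] - h])
    return (pp - pm - mp + mm)/(4.0*h*h)

def condition(i, x):
    # left-hand side of (5.2) for the pair j = 1, k = 2 (indices 0 and 1 here)
    return (v(0, x)*v(1, x)*d2(i, x)
            - v(0, x)*d1(1, 0, x)*d1(i, 1, x)
            - v(1, x)*d1(0, 1, x)*d1(i, 0, x))

residual = max(abs(condition(i, x))
               for i in (0, 1)
               for x in ([0.3, 0.7], [-1.1, 0.5]))
```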