1 Introduction

This chapter is devoted to a survey of the historical background of Gröbner bases for D-modules and of linear rewriting theory, largely developed in algebra throughout the twentieth century, and to presenting the deep relationships between them. Completion methods form the common thread of these computational theories. In the theory of Gröbner bases, they were motivated by algorithmic problems in elimination theory such as computations in quotient polynomial rings modulo an ideal, manipulating algebraic equations, and computing Hilbert series. In rewriting theory, they were motivated by the computation of normal forms and linear bases for algebras and by computational problems in homological algebra.

In this chapter, we present the seminal ideas of the French mathematician M. Janet on the algebraic formulation of completion methods for polynomial systems. Indeed, the problem of completion already appears in Janet’s 1920 thesis [47], which proposed an original approach by formal methods in the study of systems of linear partial differential equations, PDE systems for short. The corresponding constructions were formulated in terms of polynomial systems, but without the notions of ideal and Noetherian induction. These two notions were introduced by Noether in 1921 [68] for commutative rings.

The work of M. Janet was forgotten for about half a century. It was rediscovered by Schwarz in 1992 in [81]. Our exposition in this chapter does not follow the historical order. The first section deals with the problems that motivated the PDE study undertaken by M. Janet. In Sect. 3, we present completion for monomial PDE systems as introduced by Janet in his monograph [51]. This completion used an original division procedure on monomials. In Sect. 4, we present an axiomatization of Janet’s notion of division, called involutive division, due to V. P. Gerdt. The last two sections concern the case of polynomial PDE systems, with M. Janet’s completion method used to reduce a linear PDE system to a canonical form and the axiomatization of the reductions involved in terms of rewriting theory.

1.1 From Analytical Mechanics Problems to Involutive Division

1.1.1 From Lagrange to Janet.

The analysis of linear PDE systems was mainly motivated in the eighteenth century by the desire to solve problems of analytical mechanics. The seminal work of J.-L. Lagrange gave the first systematic study of PDE systems launched by such problems. The case of a PDE in one unknown function of several variables was treated by J. F. Pfaff. The Pfaff problem will be recalled in Sect. 2.1. This theory was developed in two different directions: toward the general theory of differential invariants and toward the existence of solutions under given initial conditions. The differential invariants approach will be discussed in Sects. 2.1 and 2.1.4. The question of the existence of a solution satisfying given initial conditions was formulated in the Cauchy–Kowalevsky theorem recalled in Sect. 2.1.3.

1.1.2 Exterior Differential Systems.

Following the work of H. Grassmann in 1844, which set up the rules of exterior algebra computations, É. Cartan introduced exterior differential calculus in 1899. This algebraic calculus allowed him to describe a PDE system by an exterior differential system that is independent of the choice of coordinates. This led to the so-called Cartan–Kähler theory, reviewed in Sect. 2.2. We will present a geometrical property of involutivity on exterior differential systems in Sect. 2.2.6, which motivates the formal methods introduced by M. Janet for the analysis of linear PDE systems.

1.1.3 Generalizations of the Cauchy–Kowalevsky Theorem.

Another origin of the work of M. Janet is the Cauchy–Kowalevsky theorem, which gives conditions on the initial data under which a family of PDE systems is solvable, as we describe in Sect. 2.1.3. É. Delassus, C. Riquier, and M. Janet attempted to generalize this result to a wider class of linear PDE systems, which in turn led them to introduce the computation of normal forms for such systems.

1.1.4 The Janet Monograph.

Section 3 presents the historical work that motivated M. Janet to introduce an algebraic algorithm in order to compute normal forms of linear PDE systems. In particular, we recall the problem of computation of inversion of differentiation introduced by M. Janet in his monograph \(\ll \) Leçons sur les systèmes d’équations aux dérivées partielles \(\gg \) on the analysis of linear PDE systems, published in 1929 [51]. Therein, M. Janet introduced formal methods based on polynomial computations for the analysis of linear PDE systems. He developed an algorithmic approach for analyzing ideals in the polynomial ring \(\mathbb {K}[\frac{\partial }{\partial x_1}, \ldots , \frac{\partial }{\partial x_n}]\) of differential operators with constant coefficients. Having in mind the ring isomorphism between this ring and the ring \(\mathbb {K}[x_1,\ldots ,x_n]\) of polynomials in n variables, M. Janet gave his algorithmic construction in this latter ring. He began by introducing some remarkable properties of monomial ideals. In particular, he recovered Dickson’s Lemma [17], which asserts that monomial ideals are finitely generated. This property is essential for the Noetherian properties of the set of monomials. Note that M. Janet was not familiar with the axiomatization of the algebraic structure of ideals and the property of Noetherianity already introduced by Noether in [68] and [69]. Note also that Dickson’s Lemma was published in 1913 in a paper on number theory in an American journal. Due to the First World War, it took a long time before these works became accessible to the French mathematical community. Janet’s algebraic constructions given in his monograph will be recalled in Sect. 3 for monomial systems and in Sect. 5 for polynomial systems.

1.1.5 Janet’s Multiplicative Variables.

The computations on monomial and polynomial ideals carried out by M. Janet are based on the notion of multiplicative variable that he introduced in his thesis [47]. Given an ideal generated by a set of monomials, he distinguished the monomials contained in the ideal and those contained in the complement of the ideal. The notions of multiplicative and non-multiplicative variables appear in order to stratify these two families of monomials. We will recall this notion of multiplicativity of variables in Sect. 3.1.9. This leads to a refinement of the classical division on monomials, nowadays called Janet’s division.

1.1.6 Involutive Division and Janet’s Completion Procedure.

The notion of multiplicative variable is local, in the sense that it is defined with respect to a subset \(\mathcal {U}\) of the set of all monomials. A monomial u in \(\mathcal {U}\) is said to be a Janet divisor of a monomial w with respect to \(\mathcal {U}\) if \(w=uv\) and all variables occurring in v are multiplicative for u with respect to \(\mathcal {U}\). In this way, we distinguish the set \(\mathrm {cone}_\mathcal {J}(\mathcal {U})\) of monomials having a Janet divisor in \(\mathcal {U}\), called the multiplicative or involutive cone of \(\mathcal {U}\), and the set \(\mathrm {cone}(\mathcal {U})\) of multiples of monomials in \(\mathcal {U}\) for the classical division. Since the Janet division is a refinement of the classical division, the set \(\mathrm {cone}_\mathcal {J}(\mathcal {U})\) is a subset of \(\mathrm {cone}(\mathcal {U})\). M. Janet called a set of monomials \(\mathcal {U}\) complete when this inclusion is an equality.

To a monomial PDE system \((\Sigma )\) of the form

$$ \frac{\partial ^{\alpha _1+\ldots +\alpha _n}\varphi }{\partial x_1^{\alpha _1}\ldots \partial x_n^{\alpha _n}} = f_{\alpha }(x_1,x_2,\ldots , x_n), $$

where \((\alpha _1,\ldots ,\alpha _n)\) belongs to a subset I of \(\mathbb {N}^n\), M. Janet associated the set of monomials

$$ {{\,\mathrm{lm}\,}}(\Sigma ) =\{x_1^{\alpha _1}\ldots x_n^{\alpha _n}\;\;|\;\;(\alpha _1,\ldots ,\alpha _n)\in I\}. $$

The compatibility conditions of the system \((\Sigma )\) correspond to the factorizations of the monomials ux in \(\mathrm {cone}_\mathcal {J}({{\,\mathrm{lm}\,}}(\Sigma ))\), where u is in \({{\,\mathrm{lm}\,}}(\Sigma )\) and x is a non-multiplicative variable of u with respect to \({{\,\mathrm{lm}\,}}(\Sigma )\), as explained in Sect. 3.3.1. By definition, \({{\,\mathrm{lm}\,}}(\Sigma )\) is complete if and only if, for every monomial u in \({{\,\mathrm{lm}\,}}(\Sigma )\) and every non-multiplicative variable x of u with respect to \({{\,\mathrm{lm}\,}}(\Sigma )\), the monomial ux admits such a factorization, see Proposition 3.2.5.

The main procedure presented in Janet’s monograph [51] completes in a finite number of operations a finite set of monomials \(\mathcal {U}\) to a complete set of monomials \(\widetilde{\mathcal {U}}\) that contains \(\mathcal {U}\). This procedure consists in analyzing all the local defects of completeness, by adding all the monomials ux where u is in \(\mathcal {U}\) and x is a non-multiplicative variable for u with respect to \(\mathcal {U}\). This procedure will be recalled in Sect. 3.2.9. A generalization of this procedure to any involutive division was given by Gerdt in [25], and is recalled in Sect. 4.2.12.

Extending this procedure to a set of polynomials, M. Janet applied it to linear PDE systems, giving a procedure that transforms a linear PDE system into a complete PDE system with the same set of solutions. This construction is given in Sect. 5.6. In Sect. 6, we present such a procedure for an arbitrary involutive division given by V. P. Gerdt and Blinkov in [27] and its relation to the Buchberger completion procedure in commutative polynomial rings, [7].

1.1.7 The Space of Initial Conditions.

In order to stratify the complement of the involutive cone \(\mathrm {cone}_\mathcal {J}(\mathcal {U})\), M. Janet introduced the notion of complementary monomial, see Sect. 3.1.13. The complementary monomials generate this complement in such a way that the involutive cone of \(\mathcal {U}\) and the involutive cone of the set \(\mathcal {U}^{\complement }\) of complementary monomials form a partition of the set of all monomials, see Proposition 3.2.2.

For each complementary monomial v in \({{\,\mathrm{lm}\,}}(\Sigma )^{\complement }\), each analytic function in the multiplicative variables of v with respect to \({{\,\mathrm{lm}\,}}(\Sigma )^{\complement }\) provides an initial condition of the PDE system \((\Sigma )\) as stated by Theorem 3.3.3.

1.1.8 Polynomial Partial Differential Equations Systems.

In Sect. 5, we present the analysis of polynomial PDE systems as carried out by Janet in [51]. To deal with polynomials, he defined some total orders on the set of derivatives, corresponding to total orders on the set of monomials. We recall them in Sect. 5.1. The definitions of monomial orders given by M. Janet clarified the notion introduced previously by Riquier in [74]. In particular, he made more explicit the notions of parametric and principal derivatives in order to distinguish the leading derivative in a polynomial PDE. In this way, he extended the algorithms for monomial PDE systems to the case of polynomial PDE systems. In particular, using these notions, he defined the property of completeness for polynomial PDE systems: namely, a polynomial PDE system is complete if the associated set of monomials corresponding to the leading derivatives of the system is complete. Moreover, M. Janet extended the notion of complementary monomials to define the notion of initial condition for a polynomial PDE system as in the monomial case.

1.1.9 Initial Conditions.

In this way, the notion of completeness provides a suitable framework to discuss the existence and uniqueness of a solution of a linear PDE system satisfying given initial conditions. M. Janet proved that if a linear polynomial PDE system of the form

$$ D_i\varphi =\sum _{j}a_{i,j}D_{i,j}\varphi , \quad i \in I, $$

with one unknown function \(\varphi \) is such that all the functions \(a_{i,j}\) are analytic in a neighborhood of a point P in \(\mathbb {C}^n\) and if it is complete with respect to some total order, then it admits at most one analytic solution satisfying the initial condition formulated in terms of complementary monomials, see Theorems 5.3.4 and 5.3.6.

1.1.10 Integrability Conditions.

A linear polynomial PDE system of the above form is said to be completely integrable if it admits an analytic solution for any given initial condition. M. Janet gave an algebraic characterization of complete integrability by introducing integrability conditions formulated in terms of factorizations of leading derivatives of the PDEs by non-multiplicative variables. These integrability conditions are stated explicitly in Sect. 5.4.4 as a generalization to the polynomial situation of the integrability conditions formulated for monomial PDE systems in Sect. 3.3. M. Janet proved that a linear polynomial PDE system is completely integrable if and only if every integrability condition is trivially satisfied, as stated in Theorem 5.4.7.

1.1.11 Janet’s Procedure of Reduction of Linear PDE Systems to a Canonical Form.

In order to extend algorithmically the Cauchy–Kowalevsky theorem on the existence and uniqueness of solutions of initial value problems as presented in Sect. 2.1.3, M. Janet considered normal forms of linear PDE systems with respect to a suitable total order on derivatives, satisfying some analytic conditions on coefficients and a complete integrability condition on the system, as defined in Sect. 5.5.2. Such normal forms of PDE systems are called canonical by M. Janet.

Procedure 7 is Janet’s method for deciding if a linear PDE system can be transformed into a completely integrable system. If the system cannot be reduced to a canonical form, the procedure returns the obstructions to such a reduction. Janet’s procedure depends on a total order on derivatives of unknown functions of the PDE system. For this purpose, M. Janet introduced a general method to define a total order on derivatives using a parametrization of a weight order on variables and unknown functions, as explained in Sect. 5.1.5. The Janet procedure uses a specific weight order called canonical and defined in Sect. 5.6.2.

The first step of Janet’s method consists in applying the autoreduction procedure, defined in Sect. 5.6.4, in order to reduce any PDE of the system with respect to the total order on derivatives. Namely, two PDEs of the system cannot have the same leading derivative, and any PDE of the system is reduced with respect to the leading derivatives of the other PDEs, as specified in Procedure 5.

The second step is the completion procedure, Procedure 6, which ensures that the set of leading derivatives of the system forms a complete set of monomials in the sense given in Sect. 5.3.2.

Having transformed the PDE system to an autoreduced and complete system, one can look at its integrability conditions. M. Janet showed that this set of integrability conditions is a finite set of relations that do not contain principal derivatives, as explained in Sect. 5.4.4. Hence, these integrability conditions are \(\mathcal {J}\)-normal forms and uniquely defined. By Theorem 5.4.7, if all of these normal forms are trivial, then the system is completely integrable. Otherwise, any nontrivial condition in the set of integrability conditions that contains only unknown functions and variables imposes a relation on the initial conditions of the system. If there is no such relation, the procedure is applied again on the PDE system completed by all the integrability conditions. Note that this procedure depends on the Janet division and on a total order on the set of derivatives.

By this algorithmic method, M. Janet generalized in certain cases the Cauchy–Kowalevsky theorem, at a time when the algebraic structures needed to perform computations with polynomial ideals had not yet been introduced. This is pioneering work in the field of formal approaches to the analysis of PDE systems. Algorithmic methods for dealing with polynomial ideals were developed throughout the twentieth century and extended to a wide range of algebraic structures. In the next subsection, we present some milestones on these formal themes in mathematics.

1.2 Constructive Methods and Rewriting in Algebra Through the Twentieth Century

The constructions developed by M. Janet in his formal theory of linear partial differential equation systems are based on the structure of ideals, which he called module of forms. This notion corresponds to the one introduced previously by Hilbert in [43] with the terminology of algebraic form. Notice that Günther studied such a structure in [39]. The axiomatization of the notion of ideal in an arbitrary ring is due to Noether [68]. As we will explain in this chapter, M. Janet introduced algorithmic methods to compute a family of generators of an ideal having the involutive property, called an involutive basis. This property is used to obtain a normal form of linear partial differential equation systems.

Janet’s computation of involutive bases is based on a refinement of classical polynomial division, called involutive division. He defined a division that is suitable for reduction of linear partial differential equation systems. Thereafter, other involutive divisions were studied, in particular, by Thomas [86] and by Pommaret [72]; we refer to Sect. 4.3 for a discussion on these divisions.

The main purpose is to complete a generating family of an ideal into an involutive basis with respect to a given involutive division. This completion process is quite similar to the one introduced by means of the classical division in the theory of Gröbner bases. In fact, involutive bases appear to be particular cases of Gröbner bases. The principle of completion has been developed independently in rewriting theory, which proposes a combinatorial approach to equivalence relations, motivated by several computational and decision problems in algebra, computer science, and logic.

1.2.1 Some Milestones in Algebraic Rewriting and Constructive Algebra.

The main results in the work of M. Janet rely on constructive methods in linear algebra using the principle of computing normal forms by rewriting and the principle of completion of a generating set of an ideal. These two principles have been developed through the twentieth century in many algebraic contexts with different formulations and in several instances. We review below some important milestones in this long and rich history from T. Seki to the more recent developments.  

1683.:

Seki introduced the notion of resultant and developed the notion of determinant to express the resultant. He also made progress in elimination theory in the Kai-fukudai-no-hō, see, e.g., [94].

1840.:

Sylvester studied the resultant of two polynomials in [85] and gave an example for two quadratic polynomials.

1882.:

Kronecker [54] gave the first result in elimination theory using this notion.

1886.:

Weierstrass proved a fundamental result called preparation theorem on the factorization of analytic functions by polynomials. As an application, he obtained a division theorem for rings of convergent series [93].

1890.:

Hilbert proved that any ideal in a ring of commutative polynomials in a finite set of variables over a field or over the ring of integers is finitely generated [43]. This is the first formulation of the Hilbert basis theorem, which states that every polynomial ring over a Noetherian ring is Noetherian.

1913.:

In a paper on number theory, L. E. Dickson proved a monomial version of the Hilbert basis theorem by a combinatorial method [17, Lemma A].

1913.:

In a series of forgotten papers, N. Günther developed algorithmic approaches for polynomial rings [38,39,40]. A review of Günther’s theory can be found in [41].

1914.:

Dehn described the word problem for finitely presented groups [16]. Using systems of transformation rules, A. Thue studied the problem for finitely presented semigroups [87]. It was only much later, in 1947, that the problem for finitely presented monoids was shown to be undecidable, independently by Post [73] and Markov [64, 65].

1916.:

Macaulay was one of the pioneers in commutative algebra. In his book The algebraic theory of modular systems [59], following the fundamental Hilbert basis theorem, he initiated an algorithmic approach to treat generators of polynomial ideals. In particular, he introduced the notion of H-basis corresponding to a monomial version of Gröbner bases.

1920.:

Janet defended his doctoral thesis [47], which presents a formal study of systems of partial differential equations following works of Ch. Riquier and É. Delassus. In particular, he analyzed completely integrable systems and Hilbert functions of polynomial ideals.

1921.:

In her seminal paper, Idealtheorie in Ringbereichen [68], Noether laid the foundation of general commutative ring theory, and gave one of the first general definitions of a commutative ring. She also formulated the Finite Chain Theorem [68, Satz I, Satz von der endlichen Kette].

1923.:

Noether formulated in [69, 70] concepts of elimination theory in the language of ideals that she had introduced in [68].

1926.:

Hermann, a student of Noether [42], initiated purely algorithmic approaches to ideals, such as the ideal membership problem and the primary decomposition of ideals. This work is a fundamental contribution to the emergence of computer algebra.

1927.:

Macaulay showed in [60] that the Hilbert function of a polynomial ideal I is equal to the Hilbert function of the monomial ideal generated by the set of leading monomials of the elements of I with respect to a monomial order. As a consequence, the Hilbert function of a polynomial ideal agrees with a polynomial for all sufficiently large degrees.

1937.:

Based on early works by Ch. Riquier and Janet, J. M. Thomas reformulated in [86] the theory of normal forms of systems of partial differential equations in the algebraic language of B. L. van der Waerden’s Moderne Algebra [89, 90].

1937.:

In [32], W. Gröbner exhibited the isomorphism between the ring of polynomials with coefficients in an arbitrary field and the ring of differential operators with constant coefficients, see Proposition 3.1.2. The identification of these two rings was used before in the algebraic study of systems of partial differential equations, but without being explicit.

1942.:

In a seminal paper on rewriting theory, M. Newman presented rewriting as a combinatorial approach to study equivalence relations [66]. He proved a fundamental rewriting result stating that under a termination hypothesis, the confluence property is equivalent to local confluence.

1949.:

In his monograph Moderne algebraische Geometrie. Die idealtheoretischen Grundlagen [33], W. Gröbner surveyed algebraic computation on ideal theory with applications to algebraic geometry.

1962.:

Shirshov introduced in [83] an algorithmic method to compute normal forms in a free Lie algebra with respect to a family of elements of the Lie algebra satisfying a confluence property. The method is based on a completion procedure. He also proved a version of Newman’s lemma for Lie algebras, called composition lemma, and deduced a constructive proof of the Poincaré–Birkhoff–Witt theorem.

1964.:

Hironaka introduced in [44] a division algorithm and proposed the notion of standard basis, analogous to the notion of Gröbner basis, for rings of power series in order to solve problems of resolution of singularities in algebraic geometry.

1965.:

Under the supervision of W. Gröbner, B. Buchberger developed in his Ph.D. thesis an algorithmic theory of Gröbner bases for commutative polynomial algebras [7, 8, 10]. Buchberger gave a characterization of Gröbner bases in terms of S-polynomials as well as an algorithm to compute such bases, with a complete implementation in the assembler language of the computer ZUSE Z 23 V.

1967.:

Knuth and Bendix defined in [53] a completion procedure that, with respect to a given termination order, completes a set of equations in an algebraic theory into a confluent term rewriting system. The procedure is similar to Buchberger’s completion procedure. We refer the reader to [9] for a historical account of critical pair/completion procedures.

1972.:

Grauert introduced in [30] a generalization of Weierstrass’s preparation division theorem in the language of Banach algebras.

1973.:

Nivat formulated a critical pair lemma for string rewriting systems and proved that for a terminating rewriting system, the local confluence is decidable [67].

1976, 1978.:

Bokut in [5] and Bergman in [4] extended Gröbner bases and Buchberger’s algorithm to associative algebras. They obtained analogues of Newman’s confluence lemma for rewriting systems in free associative algebras compatible with a monomial order, called, respectively, the Diamond Lemma for ring theory and the Composition Lemma.

1978.:

Pommaret introduced in [72] a global involutive division simpler than the one introduced by M. Janet.

1980.:

Schreyer in his Ph.D. thesis [80] gave a method that computes syzygies in commutative multivariate polynomial rings using the division algorithm, see [18, Theorem 15.10].

1980.:

Huet [45] gave a proof of Newman’s lemma using a Noetherian well-founded induction method.

1985.:

Gröbner basis theory was extended to Weyl algebras by A. Galligo in [24], see also [79].

1997.:

Gerdt and Blinkov [25, 27] introduced the notion of involutive monomial division and its axiomatization.

1999, 2002.:

Faugère developed efficient algorithms for computing Gröbner bases: the F4 algorithm [20], and later the F5 algorithm [21].

2005.:

Gerdt [26] presented and analyzed an efficient involutive algorithm for computing Gröbner bases.

2012.:

Bächler, Gerdt, Lange-Hegermann, and Robertz algorithmized in [2] the Thomas decomposition of algebraic and differential systems.

 

1.3 Conventions and Notations

The set of nonnegative integers is denoted by \(\mathbb {N}\). In this chapter, \(\mathbb {K}[x_1,\ldots ,x_n]\) denotes the polynomial ring in the variables \(x_1,\ldots ,x_n\) over a field \(\mathbb {K}\) of characteristic zero. For a subset G of \(\mathbb {K}[x_1,\ldots ,x_n]\), we will denote by \(\mathrm {Id}(G)\) the ideal of \(\mathbb {K}[x_1,\ldots ,x_n]\) generated by G. A polynomial is either zero or it can be written as a finite sum of nonzero terms, each term being the product of a scalar in \(\mathbb {K}\) and a monomial.

1.3.1 Monomials.

We denote by \(\mathcal {M}(x_1,\ldots ,x_n)\) the set of monomials in the ring \(\mathbb {K}[x_1,\ldots ,x_n]\). For a subset I of \(\{x_1,\ldots ,x_n\}\), we will denote by \(\mathcal {M}(I)\) the set of monomials in \(\mathcal {M}(x_1,\ldots ,x_n)\) whose variables lie in I. A monomial u in \(\mathcal {M}(x_1,\ldots ,x_n)\) is written as \(u=x_1^{\alpha _1}\cdots x_n^{\alpha _n}\), where the \(\alpha _i\) are nonnegative integers. The integer \(\alpha _i\) is called the degree of the variable \(x_i\) in u; it will also be denoted by \(\deg _i(u)\). For \(\alpha =(\alpha _1,\ldots ,\alpha _n)\) in \(\mathbb {N}^n\), we denote \(x^\alpha =x_1^{\alpha _1}\cdots x_n^{\alpha _n}\) and \(|\alpha |=\alpha _1+\cdots +\alpha _n\).

For a finite subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) and \(1\leqslant i\leqslant n\), we denote by \(\deg _i(\mathcal {U})\) the largest degree in the variable \(x_i\) of the monomials in \(\mathcal {U}\), that is

$$ \deg _i(\mathcal {U}) = \max \big (\deg _i(u)\;|\;u\in \mathcal {U}\,\big ). $$

We call the cone of a subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) the set of all multiples of monomials in \(\mathcal {U}\), defined by

$$ \mathrm {cone}(\mathcal {U}) = \bigcup _{u\in \mathcal {U}} u\mathcal {M}(x_1,\ldots ,x_n) = \{\, uv \;|\; u \in \mathcal {U},\; v \in \mathcal {M}(x_1,\ldots , x_n) \,\}. $$
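For computations, it is convenient to encode a monomial \(x_1^{\alpha _1}\cdots x_n^{\alpha _n}\) by its exponent tuple \((\alpha _1,\ldots ,\alpha _n)\). The following minimal Python sketch, added here as an illustration and not taken from Janet’s text, implements divisibility of monomials and membership in \(\mathrm {cone}(\mathcal {U})\) in this encoding.

```python
# Monomials of K[x_1, ..., x_n] encoded as exponent tuples (deg_1(u), ..., deg_n(u)).

def divides(u, w):
    """u divides w iff deg_i(u) <= deg_i(w) for every variable x_i."""
    return all(a <= b for a, b in zip(u, w))

def in_cone(w, U):
    """w belongs to cone(U) iff some monomial of U divides w."""
    return any(divides(u, w) for u in U)

# U = {x_2^2 x_3, x_1^2 x_3^3} in M(x_1, x_2, x_3), as in Example 3.1.8 below
U = [(0, 2, 1), (2, 0, 3)]
print(in_cone((1, 2, 2), U))   # True: x_2^2 x_3 divides x_1 x_2^2 x_3^2
print(in_cone((3, 1, 0), U))   # False: no monomial of U divides x_1^3 x_2
```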

1.3.2 Homogeneous Polynomials.

A homogeneous polynomial in \(\mathbb {K}[x_1,\ldots ,x_n]\) is a polynomial for which all nonzero terms have the same degree. A homogeneous polynomial is of degree p if all its nonzero terms have degree p. We denote by \(\mathbb {K}[x_1,\ldots , x_n]_p\) the space of homogeneous polynomials of degree p. The dimension of this space is given by the formula:

$$ \Gamma _n^p:=\dim \big (\,\mathbb {K}[x_1,\ldots , x_n]_p\,\big )=\frac{(p+1)(p+2)\cdots (p+n-1)}{1\cdot 2\cdot \cdots \cdot (n-1)}. $$
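The product in the numerator is a binomial coefficient in disguise: \(\Gamma _n^p=\binom{p+n-1}{n-1}\). The following short Python check, added as an illustration, confirms this identity numerically for small values.

```python
# Numerical check that (p+1)(p+2)...(p+n-1)/(n-1)! equals C(p+n-1, n-1).
from math import comb, factorial, prod

def gamma(n, p):
    """Dimension of the space of homogeneous polynomials of degree p in n variables."""
    return prod(p + k for k in range(1, n)) // factorial(n - 1)

assert all(gamma(n, p) == comb(p + n - 1, n - 1) for n in range(1, 6) for p in range(6))
print(gamma(3, 4))   # 15 monomials of degree 4 in 3 variables
```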

1.3.3 Monomial Order.

Recall that a monomial order on \(\mathcal {M}(x_1,\ldots ,x_n)\) is a relation \(\preccurlyeq \) on \(\mathcal {M}(x_1,\ldots ,x_n)\) satisfying the following three conditions:

(i):

\(\preccurlyeq \) is a total order on \(\mathcal {M}(x_1,\ldots ,x_n)\),

(ii):

\(\preccurlyeq \) is compatible with multiplication, that is, if \(u\preccurlyeq u'\), then \(uw\preccurlyeq u'w\) for any monomial w in \(\mathcal {M}(x_1,\ldots ,x_n)\),

(iii):

\(\preccurlyeq \) is a well-order on \(\mathcal {M}(x_1,\ldots ,x_n)\), that is, every non-empty subset of \(\mathcal {M}(x_1,\ldots ,x_n)\) has a smallest element with respect to \(\preccurlyeq \).

The leading term, leading monomial, and leading coefficient of a polynomial f of \(\mathbb {K}[x_1,\ldots ,x_n]\), with respect to a monomial order \(\preccurlyeq \), will be denoted by \(\mathrm{lt}_\preccurlyeq (f)\), \({{\,\mathrm{lm}\,}}_\preccurlyeq (f)\), and \({{\,\mathrm{lc}\,}}_\preccurlyeq (f)\), respectively. For a set F of polynomials in \(\mathbb {K}[x_1,\ldots ,x_n]\), we will denote by \({{\,\mathrm{lm}\,}}_\preccurlyeq (F)\) the set of leading monomials of the polynomials in F. For simplicity, we will use notations \(\mathrm{lt}(f)\), \({{\,\mathrm{lm}\,}}(f)\), \({{\,\mathrm{lc}\,}}(f)\), and \({{\,\mathrm{lm}\,}}(F)\) if there is no danger of confusion.
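As an illustration (not part of the original text), here is a sketch of one standard monomial order in the exponent-tuple encoding used in the sketches above: the degree-lexicographic order, which satisfies conditions (i)–(iii). With tuples \((\alpha _1,\ldots ,\alpha _n)\) compared lexicographically after the total degree, it corresponds to \(x_1\succ x_2\succ \cdots \succ x_n\); the leading monomial of a polynomial is then the largest of its monomials for this order.

```python
# Degree-lexicographic comparison of monomials encoded as exponent tuples.
from functools import cmp_to_key

def deglex_cmp(u, v):
    """Negative if u comes before v: compare total degrees first, then lexicographically."""
    du, dv = sum(u), sum(v)
    if du != dv:
        return -1 if du < dv else 1
    return (u > v) - (u < v)

monomials = [(1, 0, 2), (0, 2, 0), (3, 0, 0), (0, 0, 1)]
print(sorted(monomials, key=cmp_to_key(deglex_cmp)))
# [(0, 0, 1), (0, 2, 0), (1, 0, 2), (3, 0, 0)], i.e. x_3 < x_2^2 < x_1 x_3^2 < x_1^3
```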

2 Exterior Differential Systems

Motivated by problems in analytical mechanics, Euler (1707–1783) and Lagrange (1736–1813) initiated the so-called variational calculus, cf. [57], which led to the problem of solving partial differential equations, PDEs for short. In this section, we briefly present the evolution of this theory to serve as a guide to M. Janet’s contributions. We follow the history to introduce material on exterior differential systems and various PDE problems. For a deeper discussion of the theory of differential equations and the Pfaff problem, we refer the reader to [22, 92] or [11].

2.1 Pfaff’s Problem

2.1.1 Partial Differential Equations for One Unknown Function.

In 1772, Lagrange [56] considered a PDE of the form

$$\begin{aligned} F(x,y,\varphi ,p,q)=0, \quad \text {with}\quad p=\frac{\partial \varphi }{\partial x} \quad \text {and}\quad q=\frac{\partial \varphi }{\partial y}, \end{aligned}$$
(2.1)

i.e., a PDE for one unknown function \(\varphi \) of two variables x and y. Lagrange’s method to solve this PDE can be summarized as follows.

(i):

Express the PDE (2.1) in the form

$$\begin{aligned} q=F_1(x,y,\varphi ,p), \quad \text {with}\quad p=\frac{\partial \varphi }{\partial x} \quad \text {and}\quad q=\frac{\partial \varphi }{\partial y}. \end{aligned}$$
(2.2)
(ii):

Ignore for the moment that \(p=\frac{\partial \varphi }{\partial x}\) and consider the 1-form

$$ \Omega =d\varphi -pdx-qdy=d\varphi -pdx-F_1(x,y,\varphi ,p)dy, $$

where p is regarded as some (not yet fixed) function of x, y, and \(\varphi \).

(iii):

If there exist functions M and \(\Phi \) of x, y, and \(\varphi \) satisfying \(M\Omega =d\Phi \), then \(\Phi (x,y,\varphi )=C\) for some constant C. Solving this new equation, we obtain a solution \(\varphi =\psi (x,y,C)\) to Eq. (2.2).

2.1.2 Pfaffian Systems.

In 1814–15, Pfaff (1765–1825) [71] studied a PDE for one unknown function of n variables; this work was then continued by Jacobi (1804–1851) (cf. [46]). Recall that a PDE of any order is equivalent to a system of first-order PDEs. Thus, we may restrict our attention to systems of first-order PDEs with m unknown functions

$$ F_k\big ( x_1, \ldots , x_n, \varphi ^1, \ldots , \varphi ^m, \frac{\partial \varphi ^a}{\partial x_i}~(1\leqslant a \leqslant m, 1\leqslant i\leqslant n)\big )=0, \quad \text {for}\quad 1\leqslant k \leqslant r. $$

Introducing new variables \(p^{a}_i\), the system lives on the space with coordinates \((x_i, \varphi ^{a},p_i^{a})\) and is given by

$$ {\left\{ \begin{array}{ll} F_k(x_i, \varphi ^{a},p_i^{a})=0, &{} \\ d\varphi ^{a}- \displaystyle {\sum _{i=1}^n} p_i^{a}dx_i=0, &{} \\ dx_1 \wedge \cdots \wedge dx_n \ne 0. &{} \end{array}\right. } $$

Note that the last condition means that the variables \(x_1,\ldots , x_n\) are independent. Such a system is called a Pfaffian system. One is interested in whether this system admits a solution and, if a solution exists, whether it is unique under some conditions. We will refer to these as Pfaff’s problems.

2.1.3 Cauchy–Kowalevsky’s Theorem.

A naive approach to Pfaff’s problems, with applications to mechanics in mind, is the question of the initial conditions. In a series of articles published in 1842, A. Cauchy (1789–1857) studied the system of first-order PDEs:

$$ \frac{\partial \varphi ^{a}}{\partial t}=f_a( t, x_1,\ldots , x_n) +\sum _{b=1}^{m}\sum _{i=1}^{n} f_{a,b}^{i}( t, x_1,\ldots , x_n) \frac{\partial \varphi ^{b}}{\partial x_i}, \quad \text {for}\quad 1\leqslant a\leqslant m, $$

where \(f_a, f_{a,b}^i\) and \(\varphi ^1, \ldots , \varphi ^m\) are functions of \(n+1\) variables \(t,x_1,\ldots ,x_n\). Kowalevsky (1850–1891) [91] in 1875 considered systems of PDEs of the following form: for some \(r_a \in \mathbb {Z}_{>0}\) (\(1\leqslant a\leqslant m\)),

$$ \frac{\partial ^{r_a} \varphi ^{a}}{\partial t^{r_a}}=\sum _{b=1}^{m} \sum _{\begin{array}{c} j=0 \\ j+\vert \alpha \vert \leqslant r_a \end{array}}^{r_a-1} f_{a,b}^{j,\alpha }( t, x_1,\ldots , x_n) \frac{\partial ^{j+\vert \alpha \vert } \varphi ^{b}}{\partial t^j \partial x^{\alpha }}, $$

where \(f_{a,b}^{j,\alpha }\) and \(\varphi ^1, \ldots , \varphi ^m\) are functions of \(n+1\) variables \(t,x_1,\ldots ,x_n\), and where for a multi-index \(\alpha =(\alpha _1,\cdots , \alpha _n)\) in \((\mathbb {Z}_{\geqslant 0})^n\), we set \(\vert \alpha \vert =\sum _{i=1}^n \alpha _i\) and \(\partial x^\alpha =\partial x_1^{\alpha _1} \cdots \partial x_n^{\alpha _n}\). They showed that under the hypothesis of analyticity of the coefficients, such a system admits a unique analytic local solution satisfying a given initial condition. This statement is now called the Cauchy–Kowalevsky theorem.

2.1.4 Completely Integrable Systems.

A first geometric approach to the above problem was undertaken by Frobenius (1849–1917) [23] and independently by Darboux (1842–1917) [15]. Let X be a differentiable manifold of dimension n. We consider the Pfaffian system

$$ \omega _i=0 \qquad 1\leqslant i\leqslant r, $$

where \(\omega _i\) are 1-forms defined on a neighborhood V of a point x in X. Suppose that the family

$$ \{(\omega _i)_y\}_{1\leqslant i\leqslant r}\subset T_y^*X $$

is linearly independent for all y in V. For \(0\leqslant p \leqslant n\), let us denote by \(\Omega _X^p(V)\) the space of differentiable p-forms on V. A p-dimensional distribution \(\mathcal {D}\) on X is a subbundle of TX with fiber of dimension p. A distribution \(\mathcal {D}\) is involutive if, for any vector fields \(\xi \) and \(\eta \) taking values in \(\mathcal {D}\), the Lie bracket

$$ [\xi , \eta ]:=\xi \eta -\eta \xi $$

takes values in \(\mathcal {D}\) as well. Such a Pfaffian system is said to be completely integrable.

G. Frobenius and G. Darboux showed that the ideal I of \(\bigoplus _{p=0}^n \Omega _X^p(V)\), generated by the 1-forms \(\omega _1,\ldots ,\omega _r\), is a differential ideal, i.e., \(dI \subset I\), if and only if the distribution \(\mathcal {D}\) on V defined as the annihilator of \(\omega _1,\ldots ,\omega _r\) is involutive.

2.2 The Cartan–Kähler Theory

Here, we give a brief historically oriented exposition of the so-called Cartan–Kähler theory. In particular, we will present the notion of system in involution. For the original treatment by the founders of the theory, we refer the reader to [14, 52]; modern introductions are provided in [6, 62], and a quick survey can be found in [95, Appendix].

2.2.1 Differential Forms.

Grassmann (1809–1877) [29] introduced in 1844 the first equation-based formulation of the structure of exterior algebra with the anti-commutativity rule

$$ x\wedge y = - y \wedge x. $$

Using this setting, Cartan (1869–1951) [11] defined in 1899 the exterior differential and differential p-forms. He showed that these notions are invariant under arbitrary coordinate transformation. Thanks to these differential structures, several results obtained in the nineteenth century were reformulated in a clear manner.

2.2.2 Exterior Differential Systems.

An exterior differential system \(\Sigma \) is a finite set of homogeneous differential forms, i.e., \(\Sigma \subset \bigcup _p \Omega _X^p\). Cartan [12], in 1901, studied exterior differential systems generated by 1-forms, i.e., Pfaffian systems. Later, Kähler (1906–2000) [52] generalized Cartan’s theory to any differential ideal I generated by an exterior differential system. For this reason, the general theory on exterior differential systems is nowadays called the Cartan–Kähler theory.

In the rest of this subsection, we discuss briefly the existence theorem for such a system. Since the argument developed here is local and we need the Cauchy–Kowalevsky theorem, we assume that all functions are analytic in \(x_1, \ldots , x_n\) unless otherwise stipulated.

2.2.3 Integral Elements.

Let \(\Sigma \) be an exterior differential system on a real analytic manifold X of dimension n such that the ideal generated by \(\Sigma \) is a differential ideal. For \(0\leqslant p \leqslant n\), set \(\Sigma ^p=\Sigma \cap \Omega _X^p\). We fix x in X. For \(p>0\), a pair \((E_p,x)\), with a p-dimensional vector subspace \(E_p\subset T_xX\), is called an integral p-element if \(\omega \vert _{E_p}=0\) for any \(\omega \) in \(\Sigma ^p_x:=\Sigma ^p \cap \Omega _{X,x}^p\), where \(\Omega _{X,x}^p\) denotes the space of differential p-forms defined on a neighborhood of x in X. We denote the set of integral elements of dimension p by \(I\Sigma ^p_x\).

An integral manifold Y is a submanifold of X whose tangent space \(T_{y}Y\) at each point y in Y is an integral element. Since the exterior differential system defined by \(\Sigma \) is completely integrable, there exist r independent functions \(\varphi _1(x), \ldots , \varphi _r(x)\), called integrals of motion or first integrals, defined on a neighborhood V of a point \(x \in X\) such that their restrictions to \(V\cap Y\) are constant.

The polar space \(H(E_p)\) of an integral element \(E_p\) of \(\Sigma \) at the point x is the vector subspace of \(T_xX\) generated by the vectors \(\xi \in T_xX\) such that \(E_p+\mathbb {R}\xi \) is an integral element of \(\Sigma \).

2.2.4 Regular Integral Elements.

Let \(E_0\) be the real analytic subvariety of X defined as the zero set of \(\Sigma ^0\) and let \(\mathcal {U}\) be the subset of its smooth points. A point in \(E_0\) is called an integral point. A tangent vector \(\xi \) in \(T_xX\) is called a linear integral element if \(\omega (\xi )=0\) for any \(\omega \in \Sigma _x^1\) with \(x \in \mathcal {U}\). We define inductively the properties called “regular” and “ordinary” as follows:

(i):

The zeroth-order character is the integer \(s_0=\max _{x \in \mathcal {U}} \{\dim \mathbb {R}\Sigma _x^1\}\). A point \(x \in E_0\) is said to be regular if \(\dim \mathbb {R}\Sigma _x^1=s_0\), and a linear integral element \(\xi \in T_xX\) is called ordinary if x is regular.

(ii):

Let \(E_1=\mathbb {R}\xi \), where \(\xi \) is an ordinary linear integral element. The first-order character is the integer \(s_1\) satisfying \(s_0+s_1=\max _{x \in \mathcal {U}} \{\dim H(E_1)\}\). The ordinary integral 1-element \((E_1,x)\) is said to be regular if \(\dim H(E_1)=s_0+s_1\). An integral 2-element \((E_2,x)\) is called ordinary if it contains at least one regular linear integral element.

(iii):

Assume that all these concepts are defined up to \((p-1)\)th step and that \(s_0+s_1+\cdots +s_{p-1}<n-p+1\). The pth-order character is the integer \(s_p\) satisfying

$$ \sum _{i=0}^p s_i=\max _{x \in \mathcal {U}} \,\{\dim H(E_p)\}. $$

An integral p-element \((E_p,x)\) is said to be regular if

$$ \sum _{i=0}^p s_i=\dim H(E_p). $$

The integral p-element \((E_p,x)\) is called ordinary if it contains at least one regular integral element \((E_{p-1},x)\).

Let h be the smallest positive integer such that \(\sum _{i=0}^h s_i=n-h\). Then, there does not exist an integral \((h+1)\)-element. The integer h is called the genus of the system \(\Sigma \). For \(0<p\leqslant h\), one has

$$\begin{aligned} \sum _{i=0}^{p-1} s_i \leqslant n-p. \end{aligned}$$

2.2.5 Theorem

Let \(0<p\leqslant h\) be an integer.

(i):

The case \(\sum _{i=0}^{p-1} s_i=n-p:\) let \((E_p,x)\) be an ordinary integral p-element and let \(Y_{p-1}\) be an integral manifold of dimension \(p-1\) such that \((T_xY_{p-1},x)\) is a regular integral \((p-1)\)-element contained in \((E_p,x)\). Then, there exists a unique integral manifold \(Y_p\) of dimension p containing \(Y_{p-1}\) such that \(T_xY_p=E_p\).

(ii):

The case \(\sum _{i=0}^{p-1} s_i<n-p:\) let \((E_p,x)\) be an integral p-element and let \(Y_{p-1}\) be an integral manifold of dimension \(p-1\) such that \((T_xY_{p-1},x)\) is a regular integral \((p-1)\)-element contained in \((E_p,x)\). Then, there is a one-to-one correspondence between the set of real analytic functions \(C^\omega (\mathbb {R}^p, \mathbb {R}^{n-p-\sum _{i=0}^{p-1} s_i})\) and the set of p-dimensional integral manifolds \(Y_p\) containing \(Y_{p-1}\) such that \(T_xY_p=E_p\).

This theorem states that, given a chain of ordinary integral elements

$$ (E_0,x) \subset (E_1,x) \subset \cdots \subset (E_h,x), \qquad \dim E_p=p \quad (0\leqslant p\leqslant h) , $$

one can inductively find integral manifolds \(Y_p\) of dimension p such that \(Y_0=\{x\}\), \(Y_{p-1} \subset Y_p\), and \(T_xY_p=E_p\). Notice that to obtain \(Y_p\) from \(Y_{p-1}\), one applies the Cauchy–Kowalevsky theorem to the PDE system defined by \(\Sigma ^p\), and the choice of real analytic functions in the above statement provides the data defining the integral manifold \(Y_p\).

2.2.6 Systems in Involution.

In many applications, the exterior differential systems one considers admit p independent variables \(x_1,\ldots , x_p\). In such a case, we are only interested in the p-dimensional integral manifolds on which no additional relation between \(x_1, \ldots , x_p\) is imposed. In general, an exterior differential system \(\Sigma \) for \(n-p\) unknown functions and p independent variables \(x_1,\ldots , x_p\) is said to be in involution if it satisfies the following two conditions:

  1. 1.

    its genus is larger than or equal to p,

  2. 2.

    the defining equations of the generic ordinary integral p-element introduce no linear relation among \(dx_1,\ldots , dx_p\).

2.2.7 Reduced Characters.

Consider a family \(\mathcal {F}\) of integral elements of dimensions \(1,2,\ldots , p-1\) that can be included in an integral p-element at a generic integral point \(x \in X\). Take a local chart with origin x. The reduced polar system \(H^{\mathrm {red}}(E_i)\) of an integral element at x is the polar system of the restriction of the exterior differential system \(\Sigma \) to the submanifold

$$ \{x_1=x_2=\cdots =x_p=0\}. $$

The integers \(s_0',\ldots , s_{p-1}'\), called the reduced characters, are defined in such a way that \(s_0'+\cdots +s_i'\) is the dimension of the reduced polar system \(H^{\mathrm {red}}(E_i)\) at a generic integral element. For convenience, one sets \(s_p'=n-p-(s_0'+\cdots +s_{p-1}')\).

Let \(\Sigma \) be an exterior differential system in \(n-p\) unknown functions of p independent variables such that the ideal generated by \(\Sigma \) is a differential ideal. É. Cartan showed that \(\Sigma \) is a system in involution if and only if the most general integral p-element in \(\mathcal {F}\) depends on \(s_1'+2s_2'+\cdots +ps_p'\) independent parameters.

2.2.8 Recent Developments.

In 1957, Kuranishi (1924–) [55] considered the problem of the prolongation of a given exterior differential system and treated what É. Cartan called the total case. Here, M. Kuranishi, as well as É. Cartan, worked locally in the analytic category. After an algebraic approach to integrability was proposed by Guillemin and Sternberg [34] in 1964, Singer and Sternberg [84] studied in 1965 some classes of infinite-dimensional systems which can be treated even in the \(C^\infty \)-category. In the 1970s, with the aid of jet bundles and the Spencer cohomology, Pommaret (cf. [72]) considered formally integrable involutive differential systems generalizing the work of M. Janet, in the language of sheaf theory. For other geometric aspects not using sheaf theory, see the books by Griffiths (1938–) [31] and Bryant et al. [6].

3 Monomial PDE Systems

In this section, we present the method introduced by M. Janet under the name “calcul inverse de la dérivation” in his monograph [51]. In [51, Chap. I], M. Janet considered monomial PDEs, that is, PDEs of the form

$$\begin{aligned} \frac{\partial ^{\alpha _1+\alpha _2+\cdots +\alpha _n}\varphi }{\partial x_1^{\alpha _1}\partial x_2^{\alpha _2}\cdots \partial x_n^{\alpha _n}} = f_{\alpha _1\alpha _2\ldots \alpha _n}(x_1,x_2,\ldots , x_n), \end{aligned}$$
(3.1)

where \(\varphi \) is an unknown function and the \(f_{\alpha _1\alpha _2\ldots \alpha _n}\) are analytic functions of several variables. By an algebraic method, he analyzed the solvability of such an equation, namely, the existence and the uniqueness of an analytic solution \(\varphi \) of the system. Notice that the analyticity condition guarantees the commutativity of partial differential operators. This property is crucial for the constructions that M. Janet carried out in the ring of commutative polynomials. Note that the first example of a PDE that does not admit any solution was found by Lewy in the 1950s in [58].

3.1 Ring of Partial Differential Operators and Multiplicative Variables

3.1.1 Historical Context.

In the beginning of the 1890s, following collaboration with C. Méray (1835–1911), Riquier (1853–1929) initiated his research on finding normal forms of systems of (infinitely many) PDEs for finitely many unknown functions of finitely many independent variables (see [75] and [76] for more details).

In 1894, Tresse [88] showed that such systems can always be reduced to systems of finitely many PDEs. This is the first result on Noetherianity of a module over a ring of differential operators. Based on this result, É. Delassus (1868–19..) formalized and simplified Riquier’s theory. In these works, one already finds an algorithmic approach for analyzing ideals of the ring \(\mathbb {K}[\frac{\partial }{\partial x_1}, \ldots , \frac{\partial }{\partial x_n}]\).

It was Janet (1888–1983) who, already in his thesis [47] published in 1920, had realized that the latter ring is isomorphic to the ring of polynomials in n variables \(\mathbb {K}[x_1, \ldots , x_n]\). At that time, several abstract notions on rings had been introduced by E. Noether in Germany, but M. Janet in France was not familiar with them. It was only in 1937 that W. Gröbner (1899–1980) proved this isomorphism.

3.1.2 Proposition

[32, Sect. 2.] There exists a ring isomorphism

$$ \Phi : \mathbb {K}[x_1,\ldots , x_n] \longrightarrow \mathbb {K}[\frac{\partial }{\partial x_1}, \ldots , \frac{\partial }{\partial x_n}], $$

from the ring of polynomials in n variables \(x_1, \ldots , x_n\) with coefficients in an arbitrary field \(\mathbb {K}\) to the ring of differential operators with constant coefficients.

3.1.3 Derivations and Monomials.

M. Janet considers monomials in the variables \(x_1,\ldots ,x_n\) and uses implicitly the isomorphism \(\Phi \) of Proposition 3.1.2. To a monomial \(x^\alpha =x_1^{\alpha _1}x_2^{\alpha _2}\cdots x_n^{\alpha _n}\), he associates the differential operator

$$ D^\alpha :=\Phi (x^\alpha )= \frac{\partial ^{|\alpha |}\quad }{\partial x_1^{\alpha _1}\partial x_2^{\alpha _2}\cdots \partial x_n^{\alpha _n}}. $$

In [51, Chap. I], M. Janet considered finite monomial PDE systems. The equations are of the form (3.1), and since the system has finitely many equations, the set of monomials associated to it is finite. The first result of the monograph is a finiteness result on monomials stating that a sequence of monomials in which none is a multiple of a preceding one is necessarily finite. M. Janet proved this result by induction on the number of variables. We can formulate it as follows.

3.1.4 Lemma

([51, Sect. 7]) Let \(\mathcal {U}\) be a subset of \(\mathcal {M}(x_1,\ldots ,x_n)\). If, for any two distinct monomials u and \(u'\) in \(\mathcal {U}\), the monomial u does not divide \(u'\), then the set \(\mathcal {U}\) is finite.

This result corresponds to Dickson’s Lemma [17], which asserts that every monomial ideal of \(\mathbb {K}[x_1,\ldots ,x_n]\) is finitely generated.

3.1.5 Stability of the Multiplication.

M. Janet paid special attention to families of monomials with the following property. A subset of monomials \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) is called multiplicatively stable if for any monomial u in \(\mathcal {M}(x_1,\ldots ,x_n)\) such that there exists \(u'\) in \(\mathcal {U}\) that divides u, one has that u is in \(\mathcal {U}\). In other words, the set \(\mathcal {U}\) is closed under multiplication by monomials in \(\mathcal {M}(x_1,\ldots ,x_n)\).

As a consequence of Lemma 3.1.4, if \(\mathcal {U}\) is a multiplicatively stable subset of \(\mathcal {M}(x_1,\ldots ,x_n)\), then it contains only finitely many elements that are not multiples of any other elements in \(\mathcal {U}\). Hence, there exists a finite subset \(\mathcal {U}_f\) of \(\mathcal {U}\) such that for any u in \(\mathcal {U}\), there exists \(u_f\) in \(\mathcal {U}_f\) such that \(u_f\) divides u.
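As an illustration (again an added sketch, not Janet’s own formulation), the elements of a finite list of monomials that are not multiples of any other element of the list can be extracted as follows; applied to a finite generating family of a multiplicatively stable set, this recovers a set of the kind \(\mathcal {U}_f\).

```python
# Minimal elements of a finite set of monomials (exponent tuples): those not divisible
# by any other element of the set; they generate the same multiplicatively stable set.

def divides(u, w):
    return all(a <= b for a, b in zip(u, w))

def minimal_generators(U):
    return [u for u in U if not any(divides(v, u) for v in U if v != u)]

# {x_2^2 x_3, x_1 x_2^2 x_3, x_1^2 x_3^3, x_1^2 x_2 x_3^3}
U = [(0, 2, 1), (1, 2, 1), (2, 0, 3), (2, 1, 3)]
print(minimal_generators(U))    # [(0, 2, 1), (2, 0, 3)]
```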

3.1.6 Ascending Chain Condition.

M. Janet observed another consequence of Lemma 3.1.4: the ascending chain condition on multiplicatively stable monomial sets, which he formulated as follows. Any ascending sequence of multiplicatively stable subsets of \(\mathcal {M}(x_1,\ldots ,x_n)\)

$$ \mathcal {U}_1 \subset \mathcal {U}_2 \subset \; \cdots \; \subset \mathcal {U}_k \subset \cdots $$

is finite. This corresponds to the Noetherian property on the set of monomials in finitely many variables.

3.1.7 Inductive Construction.

Let us fix a total order on the variables \(x_n>x_{n-1}>\cdots > x_1\). Let \(\mathcal {U}\) be a finite subset of \(\mathcal {M}(x_1,\ldots , x_n)\). Let us define, for every \(0\leqslant \alpha _n \leqslant \deg _n(\mathcal {U})\),

$$ [\alpha _n] = \{u \in \mathcal {U}\;|\; \deg _{n}(u)=\alpha _n\,\}. $$

The family \(([0],\ldots ,[\deg _n(\mathcal {U})])\) forms a partition of \(\mathcal {U}\). We define for every \(0\leqslant \alpha _n \leqslant \deg _n(\mathcal {U})\)

$$ \overline{[\alpha _n]} =\{u \in \mathcal {M}(x_1,\ldots ,x_{n-1}) \;|\; ux_n^{\alpha _n} \in \mathcal {U}\,\}. $$

We set for every \(0\leqslant i \leqslant \deg _n(\mathcal {U})\)

$$ \mathcal {U}_i' = \underset{0\leqslant \alpha _n \leqslant i}{\bigcup } \{u \in \mathcal {M}(x_1,\ldots ,x_{n-1}) \;|\; \text {there exists } u'\in \overline{[\alpha _n]} \text { such that } u'|u\,\}.$$

Finally, we set

$$ \mathcal {U}_k = {\left\{ \begin{array}{ll} \{\,ux_n^k\;|\; u\in \mathcal {U}'_k\,\}, &{} \text {if } k< \deg _n(\mathcal {U}), \\ \{\,ux_n^k\;|\; u\in \mathcal {U}'_{\deg _n(\mathcal {U})}\,\}, &{} \text{ if } k\geqslant \deg _n(\mathcal {U}), \end{array}\right. } $$

and \(M(\mathcal {U}) = \underset{k\geqslant 0}{\bigcup } \mathcal {U}_k\). By this inductive construction, M. Janet obtains the monomial ideal generated by \(\mathcal {U}\). Indeed, \(M(\mathcal {U})\) coincides with the following set of monomials:

$$ \{\,u \in \mathcal {M}(x_1,\ldots ,x_n) \;|\; \text {there exists } u' \text { in } \mathcal {U}\text { such that } u'|u \,\}. $$

3.1.8 Example.

Consider the subset \(\mathcal {U}=\{\,x_3x_2^2,x_3^3x_1^2\,\}\) of \(\mathcal {M}(x_1,x_2,x_3)\). We have

$$ [0] = \emptyset , \quad [1] = \{x_3x_2^2\}, \quad [2] = \emptyset , \quad [3] = \{x_3^3x_1^2\}. $$

Hence,

$$ \overline{[0]} = \emptyset , \quad \overline{[1]} = \{x_2^2\}, \quad \overline{[2]} = \emptyset , \quad \overline{[3]} = \{x_1^2\}. $$

The set \(M(\mathcal {U})\) is defined using the following subsets:

$$ \mathcal {U}_0' = \emptyset , \quad \mathcal {U}_1' = \{x_1^{\alpha _1}x_2^{\alpha _2}\;|\; \alpha _2\geqslant 2\}, \quad \mathcal {U}_2' = \mathcal {U}_1', \quad \mathcal {U}_3' = \{x_1^{\alpha _1}x_2^{\alpha _2}\;|\; \alpha _1\geqslant 2\;\text {or}\;\alpha _2\geqslant 2\}. $$

3.1.9 Janet’s Multiplicative Variables [47, Sect. 7].

Let us fix a total order \(x_n>x_{n-1}>\cdots >x_1\) on variables. Let \(\mathcal {U}\) be a finite subset of \(\mathcal {M}(x_1,\ldots ,x_n)\). For all \(1\leqslant i \leqslant n\), we define the following subset of \(\mathcal {U}\):

$$ [\alpha _i,\ldots ,\alpha _n] = \{u\in \mathcal {U}\;|\;\deg _j(u)=\alpha _j\;\;\text {for all}\;\;i\leqslant j \leqslant n\}. $$

That is, \([\alpha _i,\ldots ,\alpha _n]\) contains monomials of \(\mathcal {U}\) of the form \(vx_i^{\alpha _i}\cdots x_n^{\alpha _n}\), with v in \(\mathcal {M}(x_1,\ldots ,x_{i-1})\). The sets \([\alpha _i,\ldots ,\alpha _n]\) with \(\alpha _i,\ldots ,\alpha _n\) in \(\mathbb {N}\) form a partition of \(\mathcal {U}\). Moreover, for all \(1\leqslant i \leqslant n-1\), we have \([\alpha _i,\alpha _{i+1},\ldots ,\alpha _n] \subseteq [\alpha _{i+1},\ldots ,\alpha _n]\) and the sets \([\alpha _i,\ldots ,\alpha _n]\), where \(\alpha _i\in \mathbb {N}\), form a partition of \([\alpha _{i+1},\ldots ,\alpha _n]\).

Given a monomial u in \(\mathcal {U}\), the variable \(x_n\) is said to be multiplicative for u in the sense of Janet if

$$ \deg _n(u) = \deg _n(\mathcal {U}). $$

For \(i\leqslant n-1\), the variable \(x_i\) is said to be multiplicative for u in the sense of Janet if

$$ u\in [\alpha _{i+1},\ldots ,\alpha _n] \qquad \text {and}\qquad \deg _i(u)= \deg _i([\alpha _{i+1},\ldots ,\alpha _n]). $$

We will denote by \(\mathrm {Mult}_{\mathcal {J}}^\mathcal {U}(u)\) the set of multiplicative variables of u in the sense of Janet with respect to the set \(\mathcal {U}\), also called \(\mathcal {J}\)-multiplicative variables.

Note that, by definition, for any u and \(u'\) in \([\alpha _{i+1},\ldots ,\alpha _n]\), we have

$$ \{x_{i+1},\ldots ,x_n\} \cap \mathrm {Mult}_\mathcal {J}^\mathcal {U}(u) = \{x_{i+1},\ldots ,x_n\} \cap \mathrm {Mult}_\mathcal {J}^\mathcal {U}(u'). $$

Accordingly, we will denote this set of multiplicative variables by \(\mathrm {Mult}_\mathcal {J}^\mathcal {U}([\alpha _{i+1},\ldots ,\alpha _n])\).

3.1.10 Example.

Consider the subset \(\mathcal {U}=\{x_2x_3,x_2^2,x_1\}\) of \(\mathcal {M}(x_1,x_2,x_3)\) with the order \(x_3> x_2 > x_1\). We have \(\deg _3(\mathcal {U})=1\); hence, the variable \(x_3\) is \(\mathcal {J}\)-multiplicative for \(x_3x_2\) and not \(\mathcal {J}\)-multiplicative for \(x_2^2\) and \(x_1\).

For \(\alpha \in \mathbb {N}\), we have \([\alpha ]=\{u\in \mathcal {U}\;|\; \deg _3(u)=\alpha \}\), hence

$$ [0] = \{x_2^2,x_1\}, \qquad [1]=\{x_2x_3\}. $$

We have \(\deg _2(x_2^2)=\deg _2([0])\), \(\deg _2(x_1)\ne \deg _2([0])\) and \(\deg _2(x_2x_3)=\deg _2([1])\), so the variable \(x_2\) is \(\mathcal {J}\)-multiplicative for \(x_2^2\) and \(x_2x_3\) and not \(\mathcal {J}\)-multiplicative for \(x_1\). Further,

$$ [0, 0]=\{x_1\}, \quad [0, 2]=\{x_2^2\}, \quad [1, 1]=\{x_2x_3\}, $$

and \(\deg _1(x_2^2)=\deg _1([0, 2])\), \(\deg _1(x_1)= \deg _1([0, 0])\) and \(\deg _1(x_3x_2)=\deg _1([1, 1])\), so the variable \(x_1\) is \(\mathcal {J}\)-multiplicative for \(x_1\), \(x_2^2\) and \(x_3x_2\).
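The following Python sketch, added as an illustration under the same exponent-tuple encoding as in the sketches of Sect. 1.3 (with the variables ordered \(x_n>\cdots >x_1\)), computes Janet’s multiplicative variables of Sect. 3.1.9 and reproduces the example above.

```python
# Janet multiplicative variables: x_i is multiplicative for u when u has maximal x_i-degree
# in its class [alpha_{i+1}, ..., alpha_n] (monomials of U with the same degrees in x_{i+1}, ..., x_n).

def janet_multiplicative(u, U):
    """Indices i such that x_i is J-multiplicative for u with respect to U."""
    n = len(u)
    mult = set()
    for i in range(n, 0, -1):
        cls = [v for v in U if v[i:] == u[i:]]        # the class containing u
        if u[i - 1] == max(v[i - 1] for v in cls):
            mult.add(i)
    return mult

# Example 3.1.10: U = {x_2 x_3, x_2^2, x_1} in M(x_1, x_2, x_3)
U = [(0, 1, 1), (0, 2, 0), (1, 0, 0)]
for u in U:
    print(u, sorted(janet_multiplicative(u, U)))
# (0, 1, 1): [1, 2, 3]   (0, 2, 0): [1, 2]   (1, 0, 0): [1]
```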

3.1.11 Janet Divisor.

Let \(\mathcal {U}\) be a subset of \(\mathcal {M}(x_1,\ldots ,x_n)\). A monomial u in \(\mathcal {U}\) is called a Janet divisor of a monomial w in \(\mathcal {M}(x_1,\ldots ,x_n)\) with respect to \(\mathcal {U}\) if there is a decomposition \(w=uv\), where every variable occurring in v is \(\mathcal {J}\)-multiplicative for u with respect to \(\mathcal {U}\).

3.1.12 Proposition

Let \(\mathcal {U}\) be a subset of \(\mathcal {M}(x_1,\ldots ,x_n)\) and w be a monomial in \(\mathcal {M}(x_1,\ldots ,x_n)\). Then w admits in \(\mathcal {U}\) at most one Janet divisor with respect to \(\mathcal {U}\).

Proof

If u is a Janet divisor of w with respect to \(\mathcal {U}\), there is a v in \(\mathcal {M}(\mathrm {Mult}_\mathcal {J}^\mathcal {U}(u))\) such that \(w=uv\). We have \(\deg _n(v)=\deg _n(w)-\deg _n(u)\). If \(\deg _n(w)\geqslant \deg _n(\mathcal {U})\), then the variable \(x_n\) is \(\mathcal {J}\)-multiplicative and \(\deg _n(v)=\deg _n(w)-\deg _n(\mathcal {U})\). If \(\deg _n(w)<\deg _n(\mathcal {U})\), then \(x_n\) cannot be \(\mathcal {J}\)-multiplicative and \(\deg _n(v)=0\).

As a consequence, for any Janet divisors u and \(u'\) of w in \(\mathcal {U}\), we have \(\deg _n(u)=\deg _n(u')\) and \(u,u'\in [\alpha ]\) for some \(\alpha \in \mathbb {N}\).

Suppose now that u and \(u'\) are two distinct Janet divisors of w in \(\mathcal {U}\). There exists \(1<k\leqslant n\) such that \(u,u'\in [\alpha _k,\ldots ,\alpha _n]\) and \(\deg _{k-1}(u)\ne \deg _{k-1}(u')\). Suppose that \(\deg _{k-1}(u) >\deg _{k-1}(u')\). Then the variable \(x_{k-1}\) cannot be \(\mathcal {J}\)-multiplicative for \(u'\) with respect to \(\mathcal {U}\). It follows that \(u'\) cannot be a Janet divisor of w. This leads to a contradiction, hence \(u=u'\).    \(\square \)

3.1.13 Complementary Monomials.

Let \(\mathcal {U}\) be a finite subset of \(\mathcal {M}(x_1,\ldots ,x_n)\). The set of complementary monomials of \(\mathcal {U}\) is the set of monomials

$$\begin{aligned} \mathcal {U}^{\complement } = \bigcup _{1\leqslant i \leqslant n} \; \mathcal {U}^{\complement (i)}, \end{aligned}$$
(3.2)

where

$$ \mathcal {U}^{\complement (n)} = \{ x_n^\beta \; | \; 0 \leqslant \beta \leqslant \deg _n(\mathcal {U})\;\text {and}\; [\beta ]= \emptyset \}, $$

and for every \(1\leqslant i < n\),

$$\begin{aligned}&\mathcal {U}^{\complement (i)} = \big \{\, x_i^\beta x_{i+1}^{\alpha _{i+1}}\ldots x_n^{\alpha _n} \; \big | \; [\alpha _{i+1},\ldots ,\alpha _n] \ne \emptyset , \; \\&\quad 0 \leqslant \beta < \deg _i([\alpha _{i+1},\ldots ,\alpha _n]), \; [\beta , \alpha _{i+1}, \ldots , {\alpha _n}] = \emptyset \,\big \}. \end{aligned}$$

Note that the union in (3.2) is disjoint, since \(\mathcal {U}^{\complement (i)} \cap \mathcal {U}^{\complement (j)} = \emptyset \) for \(i\ne j\).
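As for the multiplicative variables, the complementary monomials can be computed mechanically from the definition. The following Python sketch is our own illustration (names and encoding are ours, monomials being exponent tuples); applied to the set of Example 3.1.15 below, it returns the eleven complementary monomials listed there.

```python
def complementary(U):
    """Complementary monomials of a finite set U (Sect. 3.1.13), as exponent tuples."""
    n = len(next(iter(U)))
    comp = set()
    # i = n: powers x_n^beta with 0 <= beta <= deg_n(U) whose class [beta] is empty.
    degs_n = {u[n - 1] for u in U}
    comp |= {(0,) * (n - 1) + (b,) for b in range(max(degs_n) + 1) if b not in degs_n}
    # i < n: for every non-empty class [alpha_{i+1}, ..., alpha_n], the monomials
    # x_i^beta x_{i+1}^{alpha_{i+1}} ... x_n^{alpha_n} with beta below the maximal
    # degree in x_i on the class and empty subclass [beta, alpha_{i+1}, ..., alpha_n].
    for i in range(n - 1, 0, -1):
        for tail in {u[i:] for u in U}:
            occupied = {u[i - 1] for u in U if u[i:] == tail}
            comp |= {(0,) * (i - 1) + (b,) + tail
                     for b in range(max(occupied)) if b not in occupied}
    return comp

# Example 3.1.15: U = {x_3^3 x_2^2 x_1^2, x_3^3 x_1^3, x_3 x_2 x_1^3, x_3 x_2}.
U = {(2, 2, 3), (3, 0, 3), (3, 1, 1), (0, 1, 1)}
print(len(complementary(U)), sorted(complementary(U)))   # the 11 monomials of the example
```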

3.1.14 Multiplicative Variables of Complementary Monomials.

For any monomial u in \(\mathcal {U}^{\complement }\), we define the set \(\mathrm {^{\complement }Mult}_\mathcal {J}^{\mathcal {U}^{\complement }}(u)\) of multiplicative variables of u with respect to the set of complementary monomials \(\mathcal {U}^{\complement }\) as follows. If the monomial u is in \(\mathcal {U}^{\complement (n)}\), we set

$$ \mathrm {^{\complement }Mult}_\mathcal {J}^{\mathcal {U}^{\complement (n)}}(u) = \{x_1,\ldots ,x_{n-1}\}. $$

For \(1\leqslant i \leqslant n-1\), for any monomial u in \(\mathcal {U}^{\complement (i)}\), there exist \(\alpha _{i+1},\ldots ,\alpha _n\) such that \(u\in [\alpha _{i+1},\ldots ,\alpha _n]\). Then

$$ \mathrm {^{\complement }Mult}_\mathcal {J}^{\mathcal {U}^{\complement (i)}}(u) = \{x_1,\ldots ,x_{i-1}\} \cup \mathrm {Mult}_\mathcal {J}^\mathcal {U}([\alpha _{i+1},\ldots ,\alpha _n]). $$

Finally, for u in \(\mathcal {U}^{\complement }\), there exists a unique \(1\leqslant i_u \leqslant n\) such that \(u\in \mathcal {U}^{\complement (i_u)}\). Then we set

$$ \mathrm {^{\complement }Mult}_\mathcal {J}^{\mathcal {U}^{\complement }}(u) = \mathrm {^{\complement }Mult}_\mathcal {J}^{\mathcal {U}^{\complement (i_u)}}(u). $$

3.1.15 Example [51, p. 17].

Consider the subset \(\mathcal {U}=\{\,x_3^3x_2^2x_1^2,x_3^3x_1^3,x_3x_2x_1^3,x_3x_2\,\}\) of \(\mathcal {M}(x_1,x_2,x_3)\) with the order \(x_3>x_2>x_1\). The following table gives the multiplicative variables for each monomial:

$$\begin{aligned} \begin{array}{c|ccc} x_3^3x_2^2x_1^2 &{} x_3 &{} x_2 &{} x_1\\ x_3^3x_1^3 &{} x_3 &{} &{} x_1\\ x_3x_2x_1^3 &{} &{} x_2 &{} x_1\\ x_3x_2 &{} &{} x_2 &{} \\ \end{array} \end{aligned}$$

The sets of complementary monomials are

$$\begin{aligned}&\mathcal {U}^{\complement (3)} = \{1,x_3^2\}, \quad \mathcal {U}^{\complement (2)} = \{x_3^3x_2,x_3\}, \\ \mathcal {U}^{\complement (1)}&= \{x_3^3x_2^2x_1,x_3^3x_2^2,x_3^3x_1^2,x_3^3x_1,x_3^3,x_3x_2x_1^2,x_3x_2x_1\}. \end{aligned}$$

The following table gives the multiplicative variables for each monomial:

$$\begin{aligned} \begin{array}{c|ccc} 1, x_3^2 &{} &{} x_2 &{} x_1\\ \hline x_3^3x_2 &{} x_3 &{} &{} x_1\\ x_3 &{} &{} &{} x_1\\ \hline x_3^3x_2^2x_1, x_3^3x_2^2 &{} x_3 &{} x_2 &{} \\ x_3^3x_1^2, x_3^3x_1, x_3^3 &{} x_3 &{} &{} \\ x_3x_2x_1^2, x_3x_2x_1 &{} &{} x_2 &{} \\ \end{array} \end{aligned}$$

3.2 Completion Procedure

In this subsection, we present the notion of complete system introduced by Janet in [51]. In particular, we recall the completion procedure that he gave in order to complete a finite set of monomials.

3.2.1 Complete Systems.

Let \(\mathcal {U}\) be a subset of \(\mathcal {M}(x_1,\ldots ,x_n)\). For a monomial u in \(\mathcal {U}\) (resp. in \(\mathcal {U}^{\complement }\)), M. Janet defined the involutive cone of u with respect to \(\mathcal {U}\) (resp. to \(\mathcal {U}^{\complement }\)) as the set of monomials

$$\begin{aligned} \mathrm {cone}_\mathcal {J}(u,\mathcal {U})&= \{\, uv \; | \; v\in \mathcal {M}(\mathrm {Mult}_\mathcal {J}^\mathcal {U}(u))\,\} \\ \text {(resp.}\quad \mathrm {cone}^{\complement }_\mathcal {J}(u,\mathcal {U})&= \{\, uv \; | \; v\in \mathcal {M}(\mathrm {^{\complement }Mult}_\mathcal {J}^{\mathcal {U}^{\complement }}(u))\,\} ). \end{aligned}$$

The involutive cone of the set \(\mathcal {U}\) is defined by

$$ \mathrm {cone}_\mathcal {J}(\mathcal {U}) = \underset{u \in \mathcal {U}}{\bigcup } \; \mathrm {cone}_\mathcal {J}(u,\mathcal {U}) \qquad \text {(resp.}\quad \mathrm {cone}^{\complement }_\mathcal {J}(\mathcal {U}) = \underset{u \in \mathcal {U}^{\complement }}{\bigcup } \; \mathrm {cone}^{\complement }_\mathcal {J}(u,\mathcal {U}) ). $$

M. Janet called a set of monomials \(\mathcal {U}\) complete when \(\mathrm {cone}(\mathcal {U}) = \mathrm {cone}_\mathcal {J}(\mathcal {U})\). An involutive cone is called a class in Janet’s monograph [51]. The terminology “involutive” first appeared in the paper [25] by Gerdt and is now standard. We refer the reader to [63] for a discussion of the relation between this notion and the notion of involutivity in the work of É. Cartan.

3.2.2 Proposition

[51, p. 18] For any finite subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\), we have the partition

$$ \mathcal {M}(x_1,\ldots ,x_n) = \mathrm {cone}_\mathcal {J}(\mathcal {U}) \amalg \mathrm {cone}^{\complement }_\mathcal {J}(\mathcal {U}). $$

3.2.3 A Proof of Completeness by Induction.

Let \(\mathcal {U}\) be a finite subset of \(\mathcal {M}(x_1,\ldots ,x_n)\). Consider the partition \([0],\ldots ,[\deg _n(\mathcal {U})]\) of the monomials of \(\mathcal {U}\) by their degrees in \(x_n\). Let \(\alpha _1<\alpha _2<\cdots <\alpha _k\) be the nonnegative integers such that \([\alpha _i]\) is non-empty. Recall that \(\overline{[\alpha _i]}\) is the set of monomials u in \(\mathcal {M}(x_1,\ldots ,x_{n-1})\) such that \(ux_n^{\alpha _i}\) is in \(\mathcal {U}\). With these notations, the following result gives an inductive method to prove that a finite set of monomials is complete.

3.2.4 Proposition

[51, p. 19] A finite set \(\mathcal {U}\) is complete if and only if the two following conditions are satisfied:

(i):

the sets \(\overline{[\alpha _1]},\ldots ,\overline{[\alpha _k]}\) are complete,

(ii):

for any \(1\leqslant i < k\), the set \(\overline{[\alpha _i]}\) is contained in \(\mathrm {cone}_\mathcal {J}(\overline{[\alpha _{i+1}]})\).

As an immediate consequence of this proposition, M. Janet obtained the following characterization.

3.2.5 Proposition

[51, p. 20] A finite subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) is complete if and only if, for any u in \(\mathcal {U}\) and any x non-multiplicative variable of u with respect to \(\mathcal {U}\), ux lies in \(\mathrm {cone}_\mathcal {J}(\mathcal {U})\).
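Proposition 3.2.5 gives an effective test of completeness. The following Python sketch is our own illustration of this criterion (monomials are encoded as exponent tuples and the names are ours); it checks that every non-multiplicative prolongation admits a Janet divisor, and confirms the completeness of the set of Example 3.2.6 below.

```python
def janet_multiplicative(u, U):
    """Janet-multiplicative variable indices of u in U (monomials as exponent tuples)."""
    return {i for i in range(1, len(u) + 1)
            if u[i - 1] == max(v[i - 1] for v in U if v[i:] == u[i:])}

def janet_divides(u, w, U):
    """u |_J^U w : w = u * v where every variable of v is multiplicative for u."""
    mult = janet_multiplicative(u, U)
    return all(wi >= ui and (wi == ui or i + 1 in mult)
               for i, (ui, wi) in enumerate(zip(u, w)))

def is_complete(U):
    """Criterion of Proposition 3.2.5: every non-multiplicative prolongation u * x_i
    of a monomial u of U must lie in the involutive cone of U."""
    n = len(next(iter(U)))
    for u in U:
        for i in set(range(1, n + 1)) - janet_multiplicative(u, U):
            ux = u[:i - 1] + (u[i - 1] + 1,) + u[i:]
            if not any(janet_divides(v, ux, U) for v in U):
                return False
    return True

# Example 3.2.6: U = {x_5 x_4, x_5 x_3, x_5 x_2, x_4^2, x_4 x_3, x_3^2} in M(x_1, ..., x_5).
U = {(0, 0, 0, 1, 1), (0, 0, 1, 0, 1), (0, 1, 0, 0, 1),
     (0, 0, 0, 2, 0), (0, 0, 1, 1, 0), (0, 0, 2, 0, 0)}
print(is_complete(U))   # True
```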

3.2.6 Example [51, p. 21].

Consider the subset  \(\mathcal {U}=\{\,x_5x_4,x_5x_3,x_5x_2,x_4^2,x_4x_3,x_3^2\}\) of \(\mathcal {M}(x_1,\ldots ,x_5)\). The multiplicative variables are given by the following table:

$$\begin{aligned} \begin{array}{c|ccccc} x_5x_4 &{} x_5 &{} x_4 &{} x_3 &{} x_2 &{} x_1\\ x_5x_3 &{} x_5 &{} &{}x_3 &{} x_2 &{} x_1\\ x_5x_2 &{} x_5 &{} &{}&{}x_2 &{} x_1\\ x_4^2 &{} &{} x_4 &{} x_3 &{} x_2 &{} x_1\\ x_3x_4 &{} &{} &{} x_3 &{} x_2 &{} x_1\\ x_3^2 &{} &{} &{} x_3 &{} x_2 &{} x_1\\ \end{array} \end{aligned}$$

To prove that this set of monomials is complete, we apply Proposition 3.2.5. The completeness follows from the identities

$$\begin{aligned} \begin{array}{c} x_5x_3\cdot x_4=x_5x_4\cdot x_3, \\ x_5x_2\cdot x_4=x_5x_4\cdot x_2,\; x_5x_2\cdot x_3=x_5x_3\cdot x_2, \\ x_4^2\cdot x_5=x_5x_4\cdot x_4, \\ x_4x_3\cdot x_5=x_5x_4\cdot x_3,\; x_4x_3\cdot x_4 = x_4^2\cdot x_3, \\ x_3^2\cdot x_5 = x_5x_3\cdot x_3,\; x_3^2\cdot x_4 = x_4x_3\cdot x_3. \end{array} \end{aligned}$$

3.2.7 Examples.

For every \(1\leqslant p\leqslant n\), the set of monomials of degree p is complete. Any finite set of monomials of degree 1 is complete.

3.2.8 Theorem

(Janet’s Completion Lemma, [51, p. 21]) For any finite subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) there exists a finite set  \(J(\mathcal {U})\) satisfying the following three conditions:

(i):

\(J(\mathcal {U})\) is complete,

(ii):

\(\mathcal {U}\subseteq J(\mathcal {U})\),

(iii):

\(\mathrm {cone}(\mathcal {U}) = \mathrm {cone}(J(\mathcal {U}))\).

3.2.9 Completion Procedure.

From Proposition 3.2.5, M. Janet deduced the completion procedure \(\mathbf{Complete}(\mathcal {U})\), Procedure 1, which computes a completion of a finite subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) [51, p. 21]. M. Janet did not give a proof of the fact that this procedure terminates. We will present a proof of the correctness and termination of this procedure in Sect. 4.2.

Procedure 1: \(\mathbf{Complete}(\mathcal {U})\), Janet’s completion procedure for a finite set of monomials.
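The following Python sketch is our own reconstruction of such a completion loop in the spirit of Procedure 1, not a transcription of Janet’s text: at each step it adds a non-multiplicative prolongation that has no Janet divisor in the current set, the choice of a degree-lexicographically minimal prolongation being our own assumption made only to render the procedure deterministic. Applied to the set of Example 3.2.10 below, it returns the five-element complete set obtained there.

```python
def janet_multiplicative(u, U):
    return {i for i in range(1, len(u) + 1)
            if u[i - 1] == max(v[i - 1] for v in U if v[i:] == u[i:])}

def janet_divides(u, w, U):
    mult = janet_multiplicative(u, U)
    return all(wi >= ui and (wi == ui or i + 1 in mult)
               for i, (ui, wi) in enumerate(zip(u, w)))

def complete(U):
    """Completion loop in the spirit of Procedure 1: while some non-multiplicative
    prolongation u * x_i has no Janet divisor in the current set, add one of them
    (here a degree-lexicographically minimal one, an arbitrary deterministic choice)."""
    Utilde, n = set(U), len(next(iter(U)))
    while True:
        candidates = [u[:i - 1] + (u[i - 1] + 1,) + u[i:]
                      for u in Utilde
                      for i in set(range(1, n + 1)) - janet_multiplicative(u, Utilde)]
        candidates = [w for w in candidates
                      if not any(janet_divides(v, w, Utilde) for v in Utilde)]
        if not candidates:
            return Utilde
        Utilde.add(min(candidates, key=lambda w: (sum(w), w[::-1])))

# Example 3.2.10: U = {x_3 x_2^2, x_3^3 x_1^2} in M(x_1, x_2, x_3).
print(sorted(complete({(0, 2, 1), (2, 0, 3)})))
# [(0, 2, 1), (0, 2, 2), (0, 2, 3), (2, 0, 3), (2, 1, 3)]  -- the complete set of the example
```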

3.2.10 Example [51, p. 28].

Consider the subset \(\mathcal {U}=\{\,x_3x_2^2,x_3^3x_1^2\,\}\) of \(\mathcal {M}(x_1,x_2,x_3)\) with the order \(x_3>x_2>x_1\). The following table gives the multiplicative variables for each monomial:

$$\begin{aligned} \begin{array}{c|ccc} x_3^3x_1^2 &{} x_3 &{} x_2 &{} x_1\\ x_3x_2^2 &{} &{} x_2 &{} x_1\\ \end{array} \end{aligned}$$

We complete the set \(\mathcal {U}\) as follows. The monomial \(x_3x_2^2\cdot x_3\) is not in \(\mathrm {cone}_\mathcal {J}(\mathcal {U})\); we set \(\widetilde{\mathcal {U}}\leftarrow \mathcal {U}\cup \{x_3^2x_2^2\}\) and we compute multiplicative variables with respect to \(\widetilde{\mathcal {U}}\):

$$\begin{aligned} \begin{array}{c|ccc} x_3^3x_1^2 &{} x_3 &{} x_2 &{} x_1\\ x_3^2x_2^2 &{} &{} x_2 &{} x_1\\ x_3x_2^2 &{} &{} x_2 &{} x_1\\ \end{array} \end{aligned}$$

The monomial \(x_3x_2^2\cdot x_3\) is in \(\mathrm {cone}_\mathcal {J}(\widetilde{\mathcal {U}})\), but \(x_3^2x_2^2\cdot x_3\) is not in \(\mathrm {cone}_\mathcal {J}(\widetilde{\mathcal {U}})\); we set \(\widetilde{\mathcal {U}} \leftarrow \widetilde{\mathcal {U}} \cup \{x_3^3x_2^2\}\). The multiplicative variables of this new set of monomials are

$$\begin{aligned} \begin{array}{c|ccc} x_3^3x_2^2 &{} x_3 &{} x_2 &{} x_1\\ x_3^3x_1^2 &{} x_3 &{} &{} x_1\\ x_3^2x_2^2 &{} &{} x_2 &{} x_1\\ x_3x_2^2 &{} &{} x_2 &{} x_1\\ \end{array} \end{aligned}$$

The monomial \(x_3^3x_1^2\cdot x_2\) is not in \(\mathrm {cone}_\mathcal {J}(\widetilde{\mathcal {U}})\); we set \(\widetilde{\mathcal {U}} \leftarrow \widetilde{\mathcal {U}} \cup \{x_3^3x_2x_1^2\}\). All the remaining non-multiplicative prolongations are in \(\mathrm {cone}_\mathcal {J}(\widetilde{\mathcal {U}})\), and we prove that the system

$$ \widetilde{\mathcal {U}} = \{\,x_3x_2^2,x_3^3x_1^2,x_3^3x_2^2,x_3^3x_2x_1^2,x_3^2x_2^2\,\} $$

is complete.

3.3 Inversion of Differentiation

In this subsection, we recall the results of Janet from [51] on the solvability of monomial PDE systems of the form

$$\begin{aligned} (\Sigma ) \qquad D^\alpha \varphi = f_{\alpha }(x_1,x_2,\ldots , x_n), \qquad \alpha \in \mathbb {N}^n, \end{aligned}$$
(3.3)

where \(\varphi \) is an unknown function and the \(f_{\alpha }\) are analytic functions of several variables. As recalled in Sect. 3.1.1, an infinite set of such partial differential equations can always be reduced to a finite set. This is a consequence of Dickson’s Lemma, whose formulation due to M. Janet is given in Lemma 3.1.4. Accordingly, without loss of generality, we can assume that the system \((\Sigma )\) is finite. Using Proposition 3.1.2, M. Janet associated to each differential operator \(D^\alpha \) the monomial \(x^\alpha \) in \(\mathcal {M}(x_1,\ldots ,x_n)\). In this way, to a PDE system \((\Sigma )\) in the variables \(x_1,\ldots ,x_n\) he associated a finite set \({{\,\mathrm{lm}\,}}(\Sigma )\) of monomials. By Theorem 3.2.8, any such set \({{\,\mathrm{lm}\,}}(\Sigma )\) of monomials can be completed into a finite complete set \(J({{\,\mathrm{lm}\,}}(\Sigma ))\) having the same cone as \({{\,\mathrm{lm}\,}}(\Sigma )\).

3.3.1 Computation of Inversion of Differentiation.

Let us now assume that the set of monomials \({{\,\mathrm{lm}\,}}(\Sigma )\) is finite and complete. Since the cone of \({{\,\mathrm{lm}\,}}(\Sigma )\) is equal to the involutive cone of \({{\,\mathrm{lm}\,}}(\Sigma )\), for each monomial u in \({{\,\mathrm{lm}\,}}(\Sigma )\) and each non-multiplicative variable \(x_i\) in \(\mathrm {NMult}_\mathcal {J}^{{{\,\mathrm{lm}\,}}(\Sigma )}(u)\), the monomial \(ux_i\) admits a decomposition

$$ ux_i = vw, $$

where v is in \({{\,\mathrm{lm}\,}}(\Sigma )\) and w belongs to \(\mathcal {M}(\mathrm {Mult}_\mathcal {J}^{{{\,\mathrm{lm}\,}}(\Sigma )}(v))\). To each such decomposition corresponds a compatibility condition of the PDE system \((\Sigma )\): for \(u=x^{\alpha }\), \(v=x^{\beta }\) and \(w=x^{\gamma }\) with \(\alpha , \beta \) and \(\gamma \) in \(\mathbb {N}^n\),

$$ \frac{\partial f_{\alpha }}{\partial x_i} = D^\gamma f_{\beta }. $$

Let us denote by \((C_\Sigma )\) the set of all such compatibility conditions. M. Janet showed that under the completeness hypothesis this set of compatibility conditions is sufficient for the PDE system \((\Sigma )\) to be formally integrable in the sense of [72].
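The data of these compatibility conditions, namely the decompositions \(ux_i=vw\), can be generated mechanically once the complete set of monomials is known. The following Python sketch is our own illustration (names and encoding are ours); applied to the complete set of Example 3.2.6, it produces the eight decompositions underlying the compatibility relations of Example 3.3.4 below.

```python
def janet_multiplicative(u, U):
    return {i for i in range(1, len(u) + 1)
            if u[i - 1] == max(v[i - 1] for v in U if v[i:] == u[i:])}

def janet_divides(u, w, U):
    mult = janet_multiplicative(u, U)
    return all(wi >= ui and (wi == ui or i + 1 in mult)
               for i, (ui, wi) in enumerate(zip(u, w)))

def compatibility_data(U):
    """For a complete set U, list the decompositions u * x_i = v * w of the
    non-multiplicative prolongations; each one encodes the compatibility
    condition  d f_u / d x_i = D^w f_v."""
    n = len(next(iter(U)))
    data = []
    for u in U:
        for i in set(range(1, n + 1)) - janet_multiplicative(u, U):
            ux = u[:i - 1] + (u[i - 1] + 1,) + u[i:]
            v = next(v for v in U if janet_divides(v, ux, U))   # the unique Janet divisor
            w = tuple(a - b for a, b in zip(ux, v))
            data.append((u, i, v, w))
    return data

# The complete set lm(Sigma) = {x_5 x_4, x_5 x_3, x_5 x_2, x_4^2, x_4 x_3, x_3^2} of Example 3.3.4.
U = {(0, 0, 0, 1, 1), (0, 0, 1, 0, 1), (0, 1, 0, 0, 1),
     (0, 0, 0, 2, 0), (0, 0, 1, 1, 0), (0, 0, 2, 0, 0)}
for u, i, v, w in compatibility_data(U):
    print(f"d f_{u} / d x_{i}  =  D^{w} f_{v}")   # the eight conditions of Example 3.3.4
```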

3.3.2 The Space of Initial Conditions.

Let us consider the set \({{\,\mathrm{lm}\,}}(\Sigma )^{\complement }\) of complementary monomials of the finite complete set \({{\,\mathrm{lm}\,}}(\Sigma )\). Suppose that the PDE system \((\Sigma )\) satisfies the set \((C_\Sigma )\) of compatibility conditions. M. Janet associated to each monomial \(v=x^{\beta }\) in \({{\,\mathrm{lm}\,}}(\Sigma )^{\complement }\) with \(\beta \in \mathbb {N}^n\) an analytic function

$$ \varphi _{\beta }(x_{i_1},\ldots ,x_{i_{k_v}}), $$

where \(\{x_{i_1},\ldots ,x_{i_{k_v}}\}=\mathrm {^{\complement }Mult}_\mathcal {J}^{{{\,\mathrm{lm}\,}}(\Sigma )^{\complement }}(v)\). By Proposition 3.2.2, the set of such analytic functions provides a compatible initial condition. Under these assumptions, M. Janet proved the following result.

3.3.3 Theorem

[51, p. 25] Let \((\Sigma )\) be a finite monomial PDE system such that \({{\,\mathrm{lm}\,}}(\Sigma )\) is complete. If \((\Sigma )\) satisfies the compatibility conditions \((C_\Sigma )\), then it admits a unique solution with initial conditions given for any \(v=x^{\beta }\) in \({{\,\mathrm{lm}\,}}(\Sigma )^{\complement }\) with \(\beta \in \mathbb {N}^n\) by

$$ \left. D^\beta \varphi \right| _{x_j=0 \; \forall x_j\in \mathrm {^{\complement }NMult}_\mathcal {J}^{{{\,\mathrm{lm}\,}}(\Sigma )^{\complement }}(v)}= \varphi _{\beta }(x_{i_1},\ldots ,x_{i_{k_v}}), $$

where \(\{x_{i_1},\ldots ,x_{i_{k_v}}\} = \mathrm {^{\complement }Mult}_\mathcal {J}^{{{\,\mathrm{lm}\,}}(\Sigma )^{\complement }}(v)\).

M. Janet called such data the initial conditions of the system. A method to obtain these initial conditions is illustrated in the following two examples.

3.3.4 Example [51, p. 26].

Consider the following monomial PDE system \((\Sigma )\) for the unknown function \(\varphi \) of the variables \(x_1,\ldots ,x_5\):

$$\begin{aligned} \frac{\partial ^2\varphi }{\partial x_5 \partial x_4}&= f_1(x_1,\ldots ,x_5),&\; \frac{\partial ^2\varphi }{\partial x_5\partial x_3}&= f_2(x_1,\ldots ,x_5),&\; \frac{\partial ^2\varphi }{\partial x_5\partial x_2}&= f_3(x_1,\ldots ,x_5), \\ \frac{\partial ^2\varphi }{\partial x_4^2}&= f_4(x_1,\ldots ,x_5),&\; \frac{\partial ^2\varphi }{\partial x_4\partial x_3}&= f_5(x_1,\ldots ,x_5),&\; \frac{\partial ^2\varphi }{\partial x_3^2}&= f_6(x_1,\ldots ,x_5). \end{aligned}$$

The set \((C_\Sigma )\) of compatibility relations of the PDE system \((\Sigma )\) is a consequence of the identities used in Example 3.2.6 to prove the completeness of the system:

$$\begin{aligned} \begin{array}{c|c} x_5x_3\cdot x_4=x_5x_4\cdot x_3, &{} \frac{\partial f_2}{\partial x_4}=\frac{\partial f_1}{\partial x_3},\\ x_5x_2\cdot x_4=x_5x_4\cdot x_2,\; x_5x_2\cdot x_3=x_5x_3\cdot x_2, &{} \frac{\partial f_3}{\partial x_4}=\frac{\partial f_1}{\partial x_2}, \frac{\partial f_3}{\partial x_3}=\frac{\partial f_2}{\partial x_2},\\ x_4^2\cdot x_5=x_5x_4\cdot x_4, &{}\frac{\partial f_4}{\partial x_5}=\frac{\partial f_1}{\partial x_4},\\ x_4x_3\cdot x_5=x_5x_4\cdot x_3,\; x_4x_3\cdot x_4 = x_4^2\cdot x_3, &{} \frac{\partial f_5}{\partial x_5}=\frac{\partial f_1}{\partial x_3},\; \frac{\partial f_5}{\partial x_4}=\frac{\partial f_4}{\partial x_3}, \\ x_3^2\cdot x_5 = x_5x_3\cdot x_3,\; x_3^2\cdot x_4 = x_4x_3\cdot x_3, &{}\frac{\partial f_6}{\partial x_5}=\frac{\partial f_2}{\partial x_3},\; \frac{\partial f_6}{\partial x_4}=\frac{\partial f_5}{\partial x_3}.\\ \end{array} \end{aligned}$$

The initial conditions are obtained using the multiplicative variables of the set \({{\,\mathrm{lm}\,}}(\Sigma )^{\complement }\) of complementary monomials of \({{\,\mathrm{lm}\,}}(\Sigma )\). We have

$$ {{\,\mathrm{lm}\,}}(\Sigma )^{\complement (5)} = {{\,\mathrm{lm}\,}}(\Sigma )^{\complement (4)} = {{\,\mathrm{lm}\,}}(\Sigma )^{\complement (1)} = \emptyset , \quad {{\,\mathrm{lm}\,}}(\Sigma )^{\complement (3)} = \{ 1,x_3,x_4\}, \quad {{\,\mathrm{lm}\,}}(\Sigma )^{\complement (2)} = \{ x_5\}. $$

The multiplicative variables of these monomials are given in the table

$$\begin{aligned} \begin{array}{c|c} 1, x_3, x_4 &{} x_1, x_2,\\ x_5 &{} x_1, x_5.\\ \end{array} \end{aligned}$$

By Theorem 3.3.3, the PDE system \((\Sigma )\) always admits a unique solution with any given initial conditions of the type

$$\begin{aligned} \left. \frac{\partial \varphi }{\partial x_4}\right| _{x_3=x_4=x_5=0}&= \varphi _{0,0,0,1,0}(x_1,x_2),\\ \left. \frac{\partial \varphi }{\partial x_3}\right| _{x_3=x_4=x_5=0}&= \varphi _{0,0,1,0,0}(x_1,x_2),\\ \left. \varphi \right| _{x_3=x_4=x_5=0}&= \varphi _{0,0,0,0,0}(x_1,x_2),\\ \left. \frac{\partial \varphi }{\partial x_5}\right| _{x_2=x_3=x_4=0}&= \varphi _{0,0,0,0,1}(x_1,x_5).\\ \end{aligned}$$

3.3.5 Example.

In a final example, M. Janet considered a monomial PDE system whose left-hand sides do not correspond to a complete set of monomials, namely the PDE system \((\Sigma )\) for one unknown function \(\varphi \) of the variables \(x_1,x_2,x_3\) given by

$$ \frac{\partial ^3\varphi }{\partial x_2^2 \partial x_3}= f_1(x_1,x_2,x_3), \qquad \frac{\partial ^5\varphi }{\partial x_1^2 \partial x_3^3}= f_2(x_1,x_2,x_3). $$

We consider the set of monomials \({{\,\mathrm{lm}\,}}(\Sigma )=\{x_3x_2^2,x_3^3x_1^2\}\). In Example 3.2.10, we completed \({{\,\mathrm{lm}\,}}(\Sigma )\) to the complete set of monomials

$$ J({{\,\mathrm{lm}\,}}(\Sigma )) = \{\,x_3x_2^2,x_3^3x_1^2,x_3^3x_2^2,x_3^3x_2x_1^2,x_3^2x_2^2\,\}. $$

The complementary sets of monomials are

$$\begin{aligned}&J({{\,\mathrm{lm}\,}}(\Sigma ))^{\complement (3)} = \{1\}, \; J({{\,\mathrm{lm}\,}}(\Sigma ))^{\complement (2)} = \{x_3^2x_2,x_3^2,x_3x_2,x_3\}, \\&J({{\,\mathrm{lm}\,}}(\Sigma ))^{\complement (1)} = \{ x_3^3x_2x_1, x_3^3x_2, x_3^3x_1,x_3^3\}. \end{aligned}$$

The multiplicative variables of these monomials are given in the table

$$\begin{aligned} \begin{array}{c|c} J({{\,\mathrm{lm}\,}}(\Sigma ))^{\complement (3)} &{} x_1, x_2,\\ J({{\,\mathrm{lm}\,}}(\Sigma ))^{\complement (2)} &{} x_1.\\ J({{\,\mathrm{lm}\,}}(\Sigma ))^{\complement (1)} &{} x_3.\\ \end{array} \end{aligned}$$

By Theorem 3.3.3, the PDE system \((\Sigma )\) always admits a unique solution for any given initial conditions of the type

$$ \left. \varphi \right| _{x_3=0} = \varphi _{0,0,0}(x_1,x_2) ,\quad \left. \frac{\partial \varphi }{\partial x_3}\right| _{x_2=x_3=0} = \varphi _{0,0,1}(x_1) ,\quad \left. \frac{\partial ^2 \varphi }{\partial x_3\partial x_2}\right| _{x_2=x_3=0} = \varphi _{0,1,1}(x_1), $$
$$ \left. \frac{\partial ^2 \varphi }{\partial x_3^2}\right| _{x_2=x_3=0} = \varphi _{0,0,2}(x_1) ,\quad \left. \frac{\partial ^3 \varphi }{\partial x_3^2\partial x_2}\right| _{x_2=x_3=0} = \varphi _{0,1,2}(x_1) ,\quad \left. \frac{\partial ^3 \varphi }{\partial x_3^3}\right| _{x_1=x_2=0} = \varphi _{0,0,3}(x_3), $$
$$ \left. \frac{\partial ^4 \varphi }{\partial x_3^3\partial x_1}\right| _{x_1=x_2=0} = \varphi _{1,0,3}(x_3) ,\quad \left. \frac{\partial ^4 \varphi }{\partial x_3^3\partial x_2}\right| _{x_1=x_2=0} = \varphi _{0,1,3}(x_3) ,\quad \left. \frac{\partial ^5 \varphi }{\partial x_3^3\partial x_2\partial x_1}\right| _{x_1=x_2=0} = \varphi _{1,1,3}(x_3). $$

4 Monomial Involutive Bases

In this section, we recall the general approach to involutive monomial divisions introduced by Gerdt in [25], see also [27, 28]. In particular, we give the axiomatic properties of an involutive division. The partition of the variables into multiplicative and non-multiplicative ones can be deduced from these axioms. In this way, we explain how the notion of multiplicative variable in the sense of Janet can be deduced from a particular involutive division.

4.1 Involutive Division

4.1.1 Involutive Division.

An involutive division \(\mathcal {I}\) on the set of monomials \(\mathcal {M}(x_1,\ldots ,x_n)\) is defined by a relation \(|_\mathcal {I}^\mathcal {U}\) in \(\mathcal {U}\times \mathcal {M}(x_1,\ldots ,x_n)\), for every subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\), satisfying, for all monomials u, \(u'\) in \(\mathcal {U}\) and v, w in \(\mathcal {M}(x_1,\ldots ,x_n)\), the following six conditions:

(i):

\(u |_\mathcal {I}^\mathcal {U}w\) implies u|w,

(ii):

\(u|_\mathcal {I}^\mathcal {U}u\), for all u in \(\mathcal {U}\),

(iii):

\(u|_\mathcal {I}^\mathcal {U}uv\) and \(u |_\mathcal {I}^\mathcal {U}uw\) if and only if \(u |_\mathcal {I}^\mathcal {U}uvw\),

(iv):

if \(u|_\mathcal {I}^\mathcal {U}w\) and \(u'|_\mathcal {I}^\mathcal {U}w\), then \(u|_\mathcal {I}^\mathcal {U}u'\) or \(u'|_\mathcal {I}^\mathcal {U}u\),

(v):

if \(u|_\mathcal {I}^\mathcal {U}u'\) and \(u'|_\mathcal {I}^\mathcal {U}w\), then \(u|_\mathcal {I}^\mathcal {U}w\),

(vi):

if \(\mathcal {U}' \subseteq \mathcal {U}\) and \(u\in \mathcal {U}'\), then \(u |_\mathcal {I}^\mathcal {U}w\) implies \(u|_\mathcal {I}^{\mathcal {U}'}w\).

When there is no danger of confusion, the relation \(|_\mathcal {I}^\mathcal {U}\) will also be denoted by \(|_\mathcal {I}\).

4.1.2 Multiplicative Monomial.

If \(u|_\mathcal {I}^\mathcal {U}w\), by (i) there exists a monomial v such that \(w=uv\). We say that u is an \(\mathcal {I}\)-involutive divisor of w, that w is an \(\mathcal {I}\)-involutive multiple of u, and that v is \(\mathcal {I}\)-multiplicative for u with respect to \(\mathcal {U}\). When the monomial uv is not an involutive multiple of u with respect to \(\mathcal {U}\), we say that v is \(\mathcal {I}\)-non-multiplicative for u with respect to \(\mathcal {U}\).

We define in the same way the notion of multiplicative (resp. non-multiplicative) variable. We denote by \(\mathrm {Mult}_\mathcal {I}^{\mathcal {U}}(u)\) (resp. \(\mathrm {NMult}_\mathcal {I}^\mathcal {U}(u)\)) the set of multiplicative (resp. non-multiplicative) variables for the division \(\mathcal {I}\) of a monomial u with respect to \(\mathcal {U}\). We have

$$ \mathrm {Mult}_\mathcal {I}^{\mathcal {U}}(u) = \{\;x\in \{x_1,\ldots ,x_n\}\;\big |\; u |_\mathcal {I}^\mathcal {U}ux \;\} $$

and thus obtain a partition of the set of variables \(\{\,x_1,\ldots ,x_n\,\}\) into sets of multiplicative and non-multiplicative variables. An involutive division \(\mathcal {I}\) is thus entirely defined by a partition

$$ \{x_1,\ldots ,x_n\} = \mathrm {Mult}_\mathcal {I}^{\mathcal {U}}(u) \sqcup \mathrm {NMult}_\mathcal {I}^\mathcal {U}(u), $$

for any finite subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) and any u in \(\mathcal {U}\), satisfying conditions \(\mathbf{(iv)}\), \(\mathbf{(v)}\) and \(\mathbf{(vi)}\) of Definition 4.1.1. The involutive division \(\mathcal {I}\) is then defined by setting \(u\mid _\mathcal {I}^\mathcal {U}w\) if \(w=uv\) and the monomial v belongs to \(\mathcal {M}(\mathrm {Mult}_\mathcal {I}^{\mathcal {U}}(u))\). Conditions \(\mathbf{(i)}\), \(\mathbf{(ii)}\), and \(\mathbf{(iii)}\) of Definition 4.1.1 are consequences of this definition.

4.1.3 Example.

Consider \(\mathcal {U}=\{x_1,x_2\}\) in \(\mathcal {M}(x_1,x_2)\) and suppose that \(\mathcal {I}\) is an involutive division such that \(\mathrm {Mult}_\mathcal {I}^{\mathcal {U}}(x_1)=\{x_1\}\) and \(\mathrm {Mult}_\mathcal {I}^\mathcal {U}(x_2)=\{x_2\}\). Then we have

$$ x_1\not \mid _\mathcal {I} x_1x_2, \quad \text {and}\quad x_2\not \mid _\mathcal {I} x_1x_2. $$

4.1.4 Autoreduction.

A subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) is said to be autoreduced with respect to an involutive division \(\mathcal {I}\), or \(\mathcal {I}\)-autoreduced, if it does not contain a monomial \(\mathcal {I}\)-divisible by another monomial of \(\mathcal {U}\).

In particular, by condition (iv) of the definition of an involutive division, for any monomials \(u,u'\) in \(\mathcal {U}\) and any monomial w in \(\mathcal {M}(x_1,\ldots ,x_n)\), if \(u|_\mathcal {I}w\) and \(u'|_\mathcal {I} w\), then \(u|_\mathcal {I}u'\) or \(u'|_\mathcal {I}u\). As a consequence, if a set of monomials \(\mathcal {U}\) is \(\mathcal {I}\)-autoreduced, then any monomial in \(\mathcal {M}(x_1,\ldots ,x_n)\) admits at most one \(\mathcal {I}\)-involutive divisor in \(\mathcal {U}\).

4.1.5 Janet Division.

We call Janet division the division on \(\mathcal {M}(x_1,\ldots ,x_n)\) given by the multiplicative variables in the sense of M. Janet defined in Sect. 3.1.9. Explicitly, for a subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) and monomials u in \(\mathcal {U}\) and w in \(\mathcal {M}(x_1,\ldots ,x_n)\), we define \(u|_\mathcal {J}^\mathcal {U}w\) if u is a Janet divisor of w as defined in Sect. 3.1.11, that is \(w=uv\), where \(v\in \mathcal {M}(\mathrm {Mult}_\mathcal {J}^\mathcal {U}(u))\) and \(\mathrm {Mult}_\mathcal {J}^\mathcal {U}(u)\) is the set of Janet’s multiplicative variables defined in Sect. 3.1.9.

By Proposition 3.1.12, for a fixed subset \(\mathcal {U}\), any monomial of \(\mathcal {M}(x_1,\ldots ,x_n)\) has at most one Janet divisor in \(\mathcal {U}\) with respect to \(\mathcal {U}\). As a consequence, conditions (iv) and (v) of Definition 4.1.1 hold trivially for the Janet division. Now suppose that \(\mathcal {U}' \subseteq \mathcal {U}\) and u is a monomial in \(\mathcal {U}'\). If \(u |_\mathcal {J}^\mathcal {U}w\), then there is a decomposition \(w=uv\) with \(v\in \mathcal {M}(\mathrm {Mult}_\mathcal {J}^\mathcal {U}(u))\). As \(\mathrm {Mult}_\mathcal {J}^{\mathcal {U}}(u) \subseteq \mathrm {Mult}_\mathcal {J}^{\mathcal {U}'}(u)\), this implies that \(u|_\mathcal {J}^{\mathcal {U}'}w\). Hence, condition (vi) of Definition 4.1.1 holds for the Janet division. We have thus proved the following result.

4.1.6 Proposition

[27, Proposition 3.6] Janet division is involutive.

4.2 Involutive Completion Procedure

4.2.1 Involutive Set.

Let \(\mathcal {I}\) be an involutive division on \(\mathcal {M}(x_1,\ldots ,x_n)\) and let \(\mathcal {U}\) be a set of monomials. The involutive cone of a monomial u in \(\mathcal {U}\) with respect to the involutive division \(\mathcal {I}\) is defined by

$$ \mathrm {cone}_\mathcal {I}(u,\mathcal {U}) = \{\;uv \;\big |\; v\in \mathcal {M}(x_1,\ldots ,x_n)\; \text {and} \; u |_\mathcal {I}^\mathcal {U}uv \;\}. $$

The involutive cone of \(\mathcal {U}\) with respect to the involutive division \(\mathcal {I}\) is the following subset of monomials:

$$ \mathrm {cone}_\mathcal {I}(\mathcal {U}) = \bigcup _{u\in \mathcal {U}} \mathrm {cone}_\mathcal {I}(u,\mathcal {U}). $$

Note that the inclusion \(\mathrm {cone}_\mathcal {I}(\mathcal {U}) \subseteq \mathrm {cone}(\mathcal {U})\) holds for any set \(\mathcal {U}\). When the set \(\mathcal {U}\) is \(\mathcal {I}\)-autoreduced, this union is disjoint, thanks to involutivity.

A subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) is \(\mathcal {I}\)-involutive if the following equality holds:

$$ \mathrm {cone}(\mathcal {U})=\mathrm {cone}_\mathcal {I}(\mathcal {U}). $$

In other words, a set \(\mathcal {U}\) is \(\mathcal {I}\)-involutive if any multiple of an element u in \(\mathcal {U}\) is also an \(\mathcal {I}\)-involutive multiple of an element v of \(\mathcal {U}\). Note that the monomial v can be different from the monomial u, as we have seen in Example 3.2.6.

4.2.2 Involutive Completion.

A completion of a subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) with respect to an involutive division \(\mathcal {I}\), or \(\mathcal {I}\)-completion for short, is a set of monomials \(\widetilde{\mathcal {U}}\) satisfying the following three conditions:

(i):

\(\widetilde{\mathcal {U}}\) is involutive,

(ii):

\(\mathcal {U}\subseteq \widetilde{\mathcal {U}}\),

(iii):

\(\mathrm {cone}(\widetilde{\mathcal {U}})=\mathrm {cone}(\mathcal {U})\).

4.2.3 Noetherianity.

An involutive division \(\mathcal {I}\) is said to be Noetherian if every finite subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) admits a finite \(\mathcal {I}\)-completion \(\widetilde{\mathcal {U}}\).

4.2.4 Proposition

[27, Proposition 4.5] Janet division is Noetherian.

4.2.5 Prolongation.

Let \(\mathcal {U}\) be a subset of \(\mathcal {M}(x_1,\ldots ,x_n)\). We call a prolongation of an element u of \(\mathcal {U}\) any product ux of u by a variable x. Given an involutive division \(\mathcal {I}\), a prolongation ux is multiplicative (resp. non-multiplicative) if x is a multiplicative (resp. non-multiplicative) variable for u.

4.2.6 Local Involutivity.

A subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) is locally involutive with respect to an involutive division \(\mathcal {I}\) if any non-multiplicative prolongation of an element of \(\mathcal {U}\) admits an involutive divisor in \(\mathcal {U}\), that is,

$$ \forall u \in \mathcal {U}\quad \forall x_i \in \mathrm {NMult}_\mathcal {I}^\mathcal {U}(u) \quad \exists v \in \mathcal {U}\quad \text {such that}\quad v|_\mathcal {I} ux_i. $$

4.2.7 Example [27, Example 4.8].

By definition, if \(\mathcal {U}\) is \(\mathcal {I}\)-involutive, then it is locally \(\mathcal {I}\)-involutive. The converse is false in general. Indeed, consider the involutive division \(\mathcal {I}\) on \(\mathcal {M}=\mathcal {M}(x_1,x_2,x_3)\) defined by

$$ \mathrm {Mult}_\mathcal {I}^\mathcal {M}(x_1) = \{x_1,x_3\},\quad \mathrm {Mult}_\mathcal {I}^\mathcal {M}(x_2) = \{x_1,x_2\},\quad \mathrm {Mult}_\mathcal {I}^\mathcal {M}(x_3) = \{x_2,x_3\}, $$

with \(\mathrm {Mult}_\mathcal {I}^\mathcal {M}(1) = \{x_1,x_2,x_3\}\) and \(\mathrm {Mult}_\mathcal {I}^\mathcal {M}(u)\) is empty for \(\deg (u)\geqslant 2\). Then the set \(\{x_1,x_2,x_3\}\) is locally \(\mathcal {I}\)-involutive, but not \(\mathcal {I}\)-involutive.
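This example can be verified mechanically. The following Python sketch is our own illustration (names and encoding are ours): it encodes the division by its table of multiplicative variables, checks that every non-multiplicative prolongation of an element of \(\{x_1,x_2,x_3\}\) has an involutive divisor, and exhibits the monomial \(x_1x_2x_3\), which has no involutive divisor in this set.

```python
# Multiplicative variables of the division of Example 4.2.7, given by its table
# (variable indices; monomials are exponent tuples in M(x_1, x_2, x_3)).
MULT = {(0, 0, 0): {1, 2, 3},
        (1, 0, 0): {1, 3}, (0, 1, 0): {1, 2}, (0, 0, 1): {2, 3}}

def mult_vars(u):
    return MULT.get(u, set())          # empty for every monomial of degree >= 2

def divides(u, w):
    """u |_I w : w = u * v with every variable of v multiplicative for u."""
    return all(wi >= ui and (wi == ui or i + 1 in mult_vars(u))
               for i, (ui, wi) in enumerate(zip(u, w)))

U = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # the set {x_1, x_2, x_3}

# Local involutivity: every non-multiplicative prolongation has an involutive divisor in U.
print(all(any(divides(v, u[:i - 1] + (u[i - 1] + 1,) + u[i:]) for v in U)
          for u in U for i in {1, 2, 3} - mult_vars(u)))        # True

# But the multiple x_1 x_2 x_3 of x_1 has no involutive divisor in U: U is not involutive.
print(any(divides(v, (1, 1, 1)) for v in U))                    # False
```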

4.2.8 Continuity.

An involutive division \(\mathcal {I}\) is continuous if, for any finite subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) and any finite sequence \((u_1,\ldots ,u_k)\) of elements in \(\mathcal {U}\) for which, for each \(1\leqslant j < k\), there exists \(x_{i_j}\) in \(\mathrm {NMult}_\mathcal {I}^\mathcal {U}(u_j)\) such that

$$ u_{k} |_\mathcal {I} u_{k-1}x_{i_{k-1}}, \; \ldots \; , u_{3} |_\mathcal {I} u_2x_{i_2}, \; u_{2} |_\mathcal {I} u_1x_{i_1}, $$

the elements \(u_1,\ldots ,u_k\) are pairwise distinct, that is, \(u_i\ne u_j\) for any \(i\ne j\).

For instance, the involutive division in Example 4.2.7 is not continuous. Indeed, there exists the following cycle of divisions:

$$ x_2|_\mathcal {I} x_1x_2, \quad x_1|_\mathcal {I} x_3x_1, \quad x_3|_\mathcal {I} x_2x_3, \quad x_2|_\mathcal {I} x_1x_2. $$

4.2.9 From Local to Global Involutivity.

Any \(\mathcal {I}\)-involutive subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) is locally \(\mathcal {I}\)-involutive. When the division \(\mathcal {I}\) is continuous, the converse is also true. Indeed, suppose that \(\mathcal {U}\) is locally \(\mathcal {I}\)-involutive and \(\mathcal {I}\) is continuous. Let us show that \(\mathcal {U}\) is \(\mathcal {I}\)-involutive.

Given a monomial u in \(\mathcal {U}\) and a monomial w in \(\mathcal {M}(x_1,\ldots ,x_n)\), we claim that the monomial uw admits an \(\mathcal {I}\)-involutive divisor in \(\mathcal {U}\). If \(u|_\mathcal {I} uw\), the claim is proved. Otherwise, there exists a non-multiplicative variable \(x_{k_1}\) in \(\mathrm {NMult}_\mathcal {I}^\mathcal {U}(u)\) such that \(x_{k_1} | w\). By local involutivity, the monomial \(ux_{k_1}\) admits an \(\mathcal {I}\)-involutive divisor \(v_1\) in \(\mathcal {U}\). If \(v_1|_\mathcal {I} uw\), the claim is proved. Otherwise, there exists a non-multiplicative variable \(x_{k_2}\) in \(\mathrm {NMult}_\mathcal {I}^\mathcal {U}(v_1)\) such that \(x_{k_2}\) divides \(\frac{uw}{v_1}\). By local involutivity, the monomial \(v_1x_{k_2}\) admits an \(\mathcal {I}\)-involutive divisor \(v_2\) in \(\mathcal {U}\).

In this way, we construct a sequence \((u,v_1,v_2,\ldots )\) of monomials in \(\mathcal {U}\) such that

$$ v_1|_\mathcal {I} ux_{k_1}, \quad v_2|_\mathcal {I} v_1x_{k_2}, \quad v_3|_\mathcal {I} v_2x_{k_3}, \quad \ldots $$

By the continuity hypothesis, the monomials \(v_1,v_2,\ldots \) are pairwise distinct. Moreover, all these monomials are divisors of uw, which has only finitely many divisors. As a consequence, the above sequence is finite. It follows that its last term \(v_k\) is an \(\mathcal {I}\)-involutive divisor of uw. We have thus proved the following result.

4.2.10 Theorem

[27, Theorem 4.10] Let \(\mathcal {I}\) be a continuous involutive division. A subset of \(\mathcal {M}(x_1,\ldots ,x_n)\) is locally \(\mathcal {I}\)-involutive if and only if it is \(\mathcal {I}\)-involutive.

4.2.11 Proposition

[27, Corollary 4.11] Janet division is continuous.

Procedure 2: involutive completion of a finite set of monomials with respect to an involutive division.

4.2.12 Involutive Completion Procedure.

Procedure 2 generalizes Janet’s completion procedure given in Sect. 3.2.9 to any involutive division. Let us fix a monomial order \(\preccurlyeq \) on \(\mathcal {M}(x_1,\ldots ,x_n)\). Given a set of monomials \(\mathcal {U}\), the procedure completes the set \(\mathcal {U}\) by all possible non-involutive prolongations of monomials in \(\mathcal {U}\).

By introducing the notion of constructive involutive division, Gerdt and Blinkov gave in [27] some conditions on the involutive division \(\mathcal {I}\) in order to establish the correctness and the termination of this procedure. A continuous involutive division \(\mathcal {I}\) is constructive if for any subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) and for any non-multiplicative prolongation ux of a monomial u in \(\mathcal {U}\) satisfying the two conditions

(i):

ux does not have an \(\mathcal {I}\)-involutive divisor in \(\mathcal {U}\),

(ii):

any non-multiplicative prolongation \(vy \ne ux\) of a monomial v in \(\mathcal {U}\) that divides ux has an \(\mathcal {I}\)-involutive divisor in \(\mathcal {U}\),

the monomial ux cannot be \(\mathcal {I}\)-involutively divided by a monomial w in \(\mathrm {cone}_\mathcal {I}(\mathcal {U})\) with respect to \(\mathcal {U}\cup \{w\}\).

If \(\mathcal {I}\) is a constructive division, then the completion procedure completes the set \(\mathcal {U}\) to an involutive set. We refer the reader to [27, Theorem 4.14] for a proof of the correctness and termination of the completion procedure under these hypotheses.
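The following Python sketch is our own rendering of such a generic completion loop, parameterized by the multiplicative-variable function of the chosen involutive division; the selection of a degree-lexicographically minimal prolongation is our own assumption, and termination is only guaranteed under the Noetherianity and constructivity hypotheses recalled above. Instantiated with the Janet division, it reproduces the completion of Example 3.2.10.

```python
def involutive_complete(U, mult):
    """Generic completion loop in the spirit of Procedure 2; 'mult' maps (u, V) to the
    set of multiplicative variable indices of u for the chosen involutive division.
    The degree-lexicographic choice of the added prolongation is our own; termination
    is only guaranteed for a Noetherian constructive division."""
    def divides(u, w, V):
        m = mult(u, V)
        return all(wi >= ui and (wi == ui or i + 1 in m)
                   for i, (ui, wi) in enumerate(zip(u, w)))

    V, n = set(U), len(next(iter(U)))
    while True:
        new = [u[:i - 1] + (u[i - 1] + 1,) + u[i:]
               for u in V for i in set(range(1, n + 1)) - mult(u, V)]
        new = [w for w in new if not any(divides(v, w, V) for v in V)]
        if not new:
            return V
        V.add(min(new, key=lambda w: (sum(w), w[::-1])))

def janet_multiplicative(u, U):
    return {i for i in range(1, len(u) + 1)
            if u[i - 1] == max(v[i - 1] for v in U if v[i:] == u[i:])}

# Instantiated with the Janet division, this reproduces the completion of Example 3.2.10.
print(sorted(involutive_complete({(0, 2, 1), (2, 0, 3)}, janet_multiplicative)))
```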

4.2.13 Example.

An application of this procedure to the set of monomials \(\mathcal {U}=\{\,x_3x_2^2,x_3^3x_1^2\,\}\) given by Janet in [51] is presented in Sect. 3.2.10.

4.3 Other Involutive Approaches

For the analysis of differential systems, several other notions of multiplicative variables were studied, by J. M. Thomas in 1937 and by J.-F. Pommaret in 1978. Other examples of involutive divisions can be found in [28].

4.3.1 Thomas Division.

In [86], Thomas introduced an involutive division that differs from that of M. Janet and is also used in the analysis of differential systems. The multiplicative variables in the sense of Thomas’s division for a monomial u with respect to a finite subset \(\mathcal {U}\) of \(\mathcal {M}(x_1,\ldots ,x_n)\) are defined by the rule

$$ x_i\in \mathrm {Mult}_{\mathcal {T}}^\mathcal {U}(u) \quad \text {if}\quad \deg _i(u)=\deg _i(\mathcal {U}). $$

In particular, we have \(u|_{\mathcal {T}}^\mathcal {U}w\) if \(w=uv\) and for all variables \(x_i\) in v, we have \(\deg _i(u)=\deg _i(\mathcal {U})\). The Thomas division is a Noetherian and continuous involutive division. We refer the reader to [27] for detailed proofs of these results. Note also that the Janet division is a refinement of the Thomas division, in the sense that for any finite set of monomials \(\mathcal {U}\) and any monomial u in \(\mathcal {U}\), the following inclusions hold:

$$ \mathrm {Mult}_{\mathcal {T}}^\mathcal {U}(u) \subseteq \mathrm {Mult}_\mathcal {J}^\mathcal {U}(u) \quad \text {and}\quad \mathrm {NMult}_\mathcal {J}^\mathcal {U}(u) \subseteq \mathrm {NMult}_{\mathcal {T}}^\mathcal {U}(u). $$
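A minimal Python sketch of the Thomas multiplicative variables (our own illustration, with the same encoding of monomials as exponent tuples as in the earlier sketches) makes this refinement visible on the set of Example 3.1.10.

```python
def thomas_multiplicative(u, U):
    """Thomas-multiplicative variable indices of u with respect to U:
    x_i is multiplicative for u iff deg_i(u) = deg_i(U)."""
    return {i for i in range(1, len(u) + 1)
            if u[i - 1] == max(v[i - 1] for v in U)}

# On Example 3.1.10, the Thomas division allows fewer multiplicative variables than Janet's.
U = {(0, 1, 1), (0, 2, 0), (1, 0, 0)}
for u in sorted(U):
    print(u, sorted(thomas_multiplicative(u, U)))
# (0, 1, 1) -> [3],  (0, 2, 0) -> [2],  (1, 0, 0) -> [1]
```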

4.3.2 Pommaret Division.

In [72], Pommaret studied an involutive division that is defined globally, that is, the multiplicative variables for the Pommaret division do not depend on a given subset of monomials. In this way, Pommaret’s division can be defined on an infinite set of monomials.

Fix an order on the variables \(x_1>x_2>\cdots >x_n\). Given a monomial \(u=x_1^{\alpha _1}\cdots x_k^{\alpha _k}\), with \(\alpha _k>0\), the Pommaret multiplicative variables for u are defined by the rule

$$ x_j\in \mathrm {Mult}_{\mathcal {P}}^{\mathcal {M}(x_1,\ldots ,x_n)}(u), \quad \text {if } j\geqslant k, \qquad \text {and}\qquad x_j\in \mathrm {NMult}_{\mathcal {P}}^{\mathcal {M}(x_1,\ldots ,x_n)}(u), \quad \text {if } j<k. $$

Set \(\mathrm {Mult}_{\mathcal {P}}^{\mathcal {M}(x_1,\ldots ,x_n)}(1)=\{x_1,\ldots ,x_n\}\). The Pommaret division is a continuous involutive division that is not Noetherian [27]. The Janet division is a refinement of the Pommaret division, that is, for an autoreduced finite set of monomials \(\mathcal {U}\), the following inclusions hold for any monomial u in \(\mathcal {U}\):

$$ \mathrm {Mult}_{\mathcal {P}}^\mathcal {U}(u) \subseteq \mathrm {Mult}_\mathcal {J}^\mathcal {U}(u) \quad \text {and}\quad \mathrm {NMult}_\mathcal {J}^\mathcal {U}(u) \subseteq \mathrm {NMult}_{\mathcal {P}}^\mathcal {U}(u). $$

Finally, let us remark that the separation of variables into multiplicative and non-multiplicative ones used in the Pommaret division appeared first in Janet’s work [51, Sect. 20]. For this reason, the terminology Pommaret division does not correctly reflect the history of the theory. We refer the reader to the monograph by Seiler [82, Section 3.5] for a historical account.
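A minimal Python sketch of the Pommaret multiplicative variables (our own illustration; it assumes the order \(x_1>x_2>\cdots >x_n\) and the encoding of monomials as exponent tuples used in the earlier sketches):

```python
def pommaret_multiplicative(u):
    """Pommaret-multiplicative variable indices of u = x_1^{a_1} ... x_n^{a_n} for the
    order x_1 > x_2 > ... > x_n: if x_k is the smallest variable occurring in u, the
    variables x_k, x_{k+1}, ..., x_n are multiplicative (all variables for u = 1)."""
    n = len(u)
    k = max((i + 1 for i, a in enumerate(u) if a > 0), default=1)
    return set(range(k, n + 1))

print(sorted(pommaret_multiplicative((0, 2, 1))))   # x_3 x_2^2 -> [3]
print(sorted(pommaret_multiplicative((1, 0, 0))))   # x_1       -> [1, 2, 3]
print(sorted(pommaret_multiplicative((0, 0, 0))))   # 1         -> [1, 2, 3]
```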

5 Polynomial Partial Differential Equations Systems

In this section, we extend the results on monomial PDE systems presented in Sect. 3 to linear (polynomial) PDE systems. All PDE systems are considered in the analytic category, meaning that all unknown functions, coefficients, and initial conditions are assumed to be analytic. In the first part, we recall the notion of principal derivative with respect to an order on derivatives introduced by M. Janet. This notion is used to give an algebraic characterization of the complete integrability of a PDE system. Then we present a procedure that decides whether a given finite linear PDE system can be transformed into a completely integrable linear PDE system. Finally, we recall the algebraic formulation of involutivity introduced by Janet in [51].

5.1 Parametric and Principal Derivatives

5.1.1 Motivations.

In [51, Chapter 2], M. Janet first considered the following PDE for one unknown function on \(\mathbb {C}^n\):

$$\begin{aligned} \frac{\partial ^2 \varphi }{\partial x_n^2}= \sum _{1\leqslant i,\, j<n}a_{i,j}(x)\frac{\partial ^2 \varphi }{\partial x_i \partial x_j}+ \sum _{1\leqslant i<n}a_i(x) \frac{\partial ^2 \varphi }{\partial x_i \partial x_n}+ \sum _{r=1}^n b_r(x) \frac{\partial \varphi }{\partial x_r}+c(x)\varphi +f(x), \end{aligned}$$
(5.1)

where the functions \(a_{i,j}(x)\), \(a_i(x)\), \(b_r(x)\), c(x) and f(x) are analytic functions in a neighborhood of a point \(P=(x_1^0,\ldots , x_n^0)\) in \(\mathbb {C}^n\). Given two analytic functions \(\varphi _1\) and \(\varphi _2\) in a neighborhood \(U_Q\) of a point \(Q=(x_1^0,\ldots , x_{n-1}^0)\) in \(\mathbb {C}^{n-1}\), M. Janet studied the problem of the existence of solutions of equation (5.1) with the initial condition

$$\begin{aligned} \varphi \vert _{x_n=x_n^0}=\varphi _1, \qquad \left. \frac{\partial \varphi }{\partial x_n} \right| _{x_n=x_n^0}=\varphi _2, \end{aligned}$$
(5.2)

in a neighborhood of the point Q. In Sect. 5.4.2, we will formulate such a condition, called an initial condition, for higher-order linear PDE systems with several unknown functions.

5.1.2 Principal and Parametric Derivatives.

In order to treat the problems of the existence and uniqueness of a solution of Eq. (5.1) under the initial condition (5.2), M. Janet introduced the notions of parametric and principal derivatives defined as follows. The partial derivatives \(D^\alpha \varphi \), with \(\alpha =(\alpha _1,\ldots ,\alpha _n)\), of an analytic function \(\varphi \) are determined by

(i):

\(\varphi _1\) and its derivatives for \(\alpha _n=0\),

(ii):

\(\varphi _2\) and its derivatives for \(\alpha _n=1\),

in the neighborhood \(U_Q\). These derivatives for \(\alpha _n=0\) and \(\alpha _n=1\) are called parametric, while the derivatives for \(\alpha _n\geqslant 2\), i.e., the derivatives of \(\frac{\partial ^2 \varphi }{\partial x_n^2}\), are called principal. Note that the values of the principal derivatives at the point P are entirely given by \(\varphi _1\) and \(\varphi _2\) and by their derivatives thanks to Eq. (5.1). Note that the notion of parametric derivative corresponds to a parametrization of the initial conditions of the system.

5.1.3 Janet’s Orders on Derivatives.

Let \(\alpha =(\alpha _1,\ldots ,\alpha _n)\) and \(\beta =(\beta _1,\ldots ,\beta _n)\) be in \(\mathbb {N}^n\). Let \(\varphi \) be an analytic function. The derivative \(D^\alpha \varphi \) is said to be posterior (resp. anterior) to \(D^\beta \varphi \) if

$$\begin{aligned} \vert \alpha \vert>\vert \beta \vert \quad (\text {resp. } \vert \alpha \vert<\vert \beta \vert ) \qquad \text {or } \qquad \vert \alpha \vert =\vert \beta \vert \text { and } \alpha _n>\beta _n \quad (\text {resp}. \alpha _n<\beta _n). \end{aligned}$$

Obviously, any derivative of \(\varphi \) admits only finitely many anterior derivatives of \(\varphi \). Using this notion of posteriority, M. Janet showed the existence and uniqueness of the solution to Eq. (5.1) under the initial conditions (5.2).

In his monograph, M. Janet gave several generalizations of the above notion of posteriority. The first one corresponds to the degree lexicographic order [51, Sect. 22], formulated as follows:

(i):

for \(\vert \alpha \vert \ne \vert \beta \vert \), the derivative \(D^\alpha \varphi \) is called posterior (resp. anterior) to \(D^\beta \varphi \), if \(\vert \alpha \vert >\vert \beta \vert \) (resp. \(\vert \alpha \vert <\vert \beta \vert \)),

(ii):

for \(\vert \alpha \vert =\vert \beta \vert \), the derivative \(D^\alpha \varphi \) is called posterior (resp. anterior) to \(D^\beta \varphi \) if the first nonzero difference

$$ \alpha _n-\beta _n\; , \quad \alpha _{n-1}-\beta _{n-1}\; , \quad \ldots \quad , \; \alpha _1-\beta _1, $$

is positive (resp. negative).

5.1.4 Generalization.

Let us consider the following generalization of equation (5.1):

$$\begin{aligned} D\varphi =\sum _{i \in I}a_{i}D_i\varphi +f, \end{aligned}$$
(5.3)

where D and the \(D_i\) are differential operators such that \(D_i\varphi \) is anterior to \(D\varphi \) for all i in I. The derivative \(D\varphi \) and all its derivatives are called principal derivatives of Eq. (5.3). All the other derivatives of \(\varphi \) are called parametric derivatives of Eq. (5.3).

5.1.5 Weight Order.

A further generalization of these order relations was given by M. Janet by introducing the notion of cote, which corresponds to a parametrization of a weight order defined as follows. Let us fix a positive integer s. We define a weight matrix

$$ C = \left[ \begin{array}{cccccc} C_{1,1} &{} \ldots &{} C_{n,1} \\ \vdots &{} &{}\vdots \\ C_{1,s} &{} \ldots &{} C_{n,s} \\ \end{array} \right] $$

that associates to each variable \(x_i\) nonnegative integers \(C_{i,1}, \ldots , C_{i,s}\), called the s-weights of \(x_i\). This notion was called cote by Janet in [51, Sect. 22] following the terminology introduced by Riquier [75]; Ritt used the term mark in [77]. To each derivative \(D^\alpha \varphi \), with \(\alpha =(\alpha _1,\ldots ,\alpha _n)\), of an analytic function \(\varphi \), we associate the s-weight \(\Gamma (C)=(\Gamma _1,\ldots ,\Gamma _s)\), where the \(\Gamma _k\) are defined by

$$ \Gamma _k =\sum _{i=1}^n \alpha _i C_{i,k}. $$

Given two monomial partial differential operators \(D^\alpha \) and \(D^\beta \) as in Sect. 5.1.3, we say that \(D^\alpha \varphi \) is posterior (resp. anterior) to \(D^\beta \varphi \) with respect to the weight matrix C if

(i):

\(\vert \alpha \vert \ne \vert \beta \vert \) and \(\vert \alpha \vert >\vert \beta \vert \) (resp. \(\vert \alpha \vert <\vert \beta \vert \)), or

(ii):

\(\vert \alpha \vert =\vert \beta \vert \) and the first nonzero difference

$$ \Gamma _1-\Gamma _1', \quad \Gamma _2-\Gamma _2'\; , \quad \ldots \quad , \; \Gamma _s-\Gamma _s', $$

is positive (resp. negative), where \((\Gamma _1',\ldots ,\Gamma _s')\) denotes the s-weight of \(D^\beta \varphi \).

In this way, we define an order on the set of monomial partial derivatives, called weight order. Note that, by setting \(C_{i,k}=\delta _{i+k,n+1}\), we recover the Janet order defined in Sect. 5.1.3.
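The comparison of two derivatives with respect to a weight matrix can be made explicit as follows; this Python sketch is our own illustration (names are ours), and its final lines check that the choice \(C_{i,k}=\delta _{i+k,n+1}\) recovers Janet’s order.

```python
def weights(alpha, C):
    """s-weights (Gamma_1, ..., Gamma_s) of D^alpha for a weight matrix C given as a
    list of s rows, row k containing (C_{1,k}, ..., C_{n,k})."""
    return tuple(sum(a * c for a, c in zip(alpha, row)) for row in C)

def posterior(alpha, beta, C):
    """True when D^alpha phi is posterior to D^beta phi for the weight order of Sect. 5.1.5."""
    if sum(alpha) != sum(beta):
        return sum(alpha) > sum(beta)
    for g, h in zip(weights(alpha, C), weights(beta, C)):
        if g != h:
            return g > h
    return False    # same total degree and same weights: neither posterior nor anterior

# With C_{i,k} = delta_{i+k, n+1} (here n = s = 3) we recover Janet's order of Sect. 5.1.3.
n = 3
C = [[1 if i + k == n + 1 else 0 for i in range(1, n + 1)] for k in range(1, n + 1)]
print(posterior((0, 1, 1), (1, 1, 0), C))   # True: same degree and larger exponent of x_3
```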

5.2 First-Order PDE Systems

We consider first the resolution of first-order PDE systems.

5.2.1 Complete Integrability.

In [51, Sect. 36], M. Janet considered a first-order PDE system of the form

$$\begin{aligned} (\Sigma ) \qquad \, \frac{\partial \varphi }{\partial y_\lambda }=f_\lambda (y_1,\ldots , y_h, z_1, \ldots , z_k, \varphi , q_1,\ldots , q_k) \qquad (1\leqslant \lambda \leqslant h), \end{aligned}$$
(5.4)

where \(\varphi \) is an unknown function of the independent variables \(y_1,\ldots , y_h, z_1,\ldots , z_k\), with \(h+k=n\) and \(q_i=\frac{\partial \varphi }{\partial z_i}\). It is assumed that the functions \(f_\lambda \) are analytic in a neighborhood of a point P. M. Janet wrote down explicitly the integrability condition of the PDE system \((\Sigma )\) as the equality

$$ \frac{\partial }{\partial y_\lambda }\left( \frac{\partial \varphi }{\partial y_\mu }\right) = \frac{\partial }{\partial y_\mu }\left( \frac{\partial \varphi }{\partial y_\lambda }\right) , $$

for any \(1\leqslant \lambda , \mu \leqslant h\). Differentiating (5.4), we deduce that

$$\begin{aligned} \frac{\partial }{\partial y_\lambda }\left( \frac{\partial \varphi }{\partial y_\mu }\right) =&\,\frac{\partial f_\mu }{\partial y_\lambda }+\frac{\partial \varphi }{\partial y_\lambda }\frac{\partial f_\mu }{\partial \varphi }+\sum _{i=1}^k \frac{\partial f_\mu }{\partial q_i}\frac{\partial ^2 \varphi }{\partial y_\lambda \partial z_i}, \\ =&\,\frac{\partial f_\mu }{\partial y_\lambda }+f_\lambda \frac{\partial f_\mu }{\partial \varphi } +\sum _{i=1}^k \frac{\partial f_\mu }{\partial q_i} \left( \frac{\partial f_\lambda }{\partial z_i}+ q_i\frac{\partial f_\lambda }{\partial \varphi }\right) + \sum _{i,j=1}^k\frac{\partial f_\lambda }{\partial q_i}\frac{\partial f_\mu }{\partial q_j} \frac{\partial ^2 \varphi }{\partial z_i \partial z_j}. \\ \end{aligned}$$

Hence, the integrability condition reads

$$\begin{aligned} \begin{aligned}&\;\frac{\partial }{\partial y_\lambda }\left( \frac{\partial \varphi }{\partial y_\mu }\right) -\frac{\partial }{\partial y_\mu }\left( \frac{\partial \varphi }{\partial y_\lambda }\right) \\ =&\;\frac{\partial f_\mu }{\partial y_\lambda }+f_\lambda \frac{\partial f_\mu }{\partial \varphi } +\sum _{i=1}^k \frac{\partial f_\mu }{\partial q_i} \left( \frac{\partial f_\lambda }{\partial z_i}+ q_i\frac{\partial f_\lambda }{\partial \varphi }\right) - \frac{\partial f_\lambda }{\partial y_\mu }-f_\mu \frac{\partial f_\lambda }{\partial \varphi } -\sum _{i=1}^k \frac{\partial f_\lambda }{\partial q_i} \left( \frac{\partial f_\mu }{\partial z_i}+ q_i\frac{\partial f_\mu }{\partial \varphi }\right) \\ =&\;0, \end{aligned} \end{aligned}$$
(5.5)

for any \(1\leqslant \lambda \ne \mu \leqslant h\). When the PDE system \((\Sigma )\) in (5.4) satisfies relation (5.5), it is said to be completely integrable.

5.2.2 Theorem

Suppose that the PDE system \((\Sigma )\) in (5.4) is completely integrable. Let \(P=(y_1^0,\ldots ,y_h^0,z_1^0,\ldots ,z_k^0)\) be a point in \(\mathbb {C}^n\) and let \(\varphi (z_1,\ldots , z_k)\) be an analytic function in a neighborhood of the point \(\pi (P)\), where \(\pi :\mathbb {C}^n \rightarrow \mathbb {C}^k\) denotes the canonical projection \((y_1,\ldots , y_h, z_1, \ldots , z_k) \, \mapsto \, (z_1,\ldots , z_k)\). Then the system \((\Sigma )\) admits a unique analytic solution in a neighborhood of the point P whose restriction to the subspace \(\{\,y_1=y_1^0,\ldots ,y_h=y_h^0\,\}\) coincides with \(\varphi \).

5.3 Higher Order Finite Linear PDE Systems

In [51, Sect. 39], M. Janet discussed the existence of solutions of a finite linear PDE system for one unknown function \(\varphi \) in which each equation is of the form

$$\begin{aligned} (\Sigma )\qquad D_i\varphi =\sum _{j}a_{i,j}D_{i,j}\varphi , \quad i \in I. \end{aligned}$$
(5.6)

All the functions \(a_{i,j}\) are assumed to be analytic in a neighborhood of a point P in \(\mathbb {C}^n\).

5.3.1 Principal and Parametric Derivatives.

Consider Janet’s order \(\preccurlyeq _{J}\) on derivatives as the generalization defined in Sect. 5.1.3. We assume that each equation of the system \((\Sigma )\) defined by (5.6) satisfies the following two conditions:

(i):

\(D_{i,j}\varphi \) is anterior to \(D_i\varphi \), for any i in I,

(ii):

all the \(D_i\)’s for i in I are distinct.

We extend the notion of principal derivative introduced in Sect. 5.1.4 for a single PDE to a system of the form (5.6) as follows. The derivatives \(D_i\varphi \), for i in I, and all their derivatives are called principal derivatives of the PDE system \((\Sigma )\) in (5.6) with respect to Janet’s order. Any other derivative of \(\varphi \) is called a parametric derivative.

5.3.2 Completeness with Respect to Janet’s Order.

Fix an order \(x_n>x_{n-1}>\cdots >x_1\) on the variables. By the isomorphism of Proposition 3.1.2, which identifies monomial partial differential operators with monomials in \(\mathcal {M}(x_1,\ldots ,x_n)\), we associate to the set of operators \(D_i\), for i in I, defined in Sect. 5.3.1, a set \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )\) of monomials. By definition, the set \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )\) consists of the monomials associated with the leading derivatives of the PDE system \((\Sigma )\) with respect to Janet’s order.

The PDE system \((\Sigma )\) is said to be complete with respect to Janet’s order \(\preccurlyeq _{J}\) if the set of monomials \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )\) is complete in the sense of Sect. 3.2.1. Procedure 6 is a completion procedure that transforms a finite linear PDE system into an equivalent complete linear PDE system.

By definition, the set of principal derivatives corresponds, via the isomorphism of Proposition 3.1.2, to the cone of the monomial set \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )\). Hence, when \((\Sigma )\) is complete, the set of principal derivatives corresponds to the involutive cone of \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )\). By Proposition 3.2.2, there is a partition

$$ \mathcal {M}(x_1,\ldots ,x_n) = \mathrm {cone}_\mathcal {J}({{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )) \amalg \mathrm {cone}^{\complement }_\mathcal {J}({{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )). $$

It follows that the set of parametric derivatives of a complete PDE system \((\Sigma )\) corresponds to the involutive cone of the set of monomials \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )^{\complement }\).
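Concretely, once \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )\) is known, deciding whether a given derivative is principal or parametric amounts to a divisibility test. The following Python sketch is our own illustration (names and encoding are ours), applied to the monomials of the complete system of Example 3.3.4.

```python
def is_principal(beta, lm_Sigma):
    """D^beta phi is a principal derivative when x^beta is a multiple of a monomial of
    lm(Sigma), i.e. lies in cone(lm(Sigma)); otherwise it is parametric."""
    return any(all(b >= a for b, a in zip(beta, alpha)) for alpha in lm_Sigma)

# Complete system of Example 3.3.4: lm(Sigma) = {x_5 x_4, x_5 x_3, x_5 x_2, x_4^2, x_4 x_3, x_3^2}.
lm_Sigma = {(0, 0, 0, 1, 1), (0, 0, 1, 0, 1), (0, 1, 0, 0, 1),
            (0, 0, 0, 2, 0), (0, 0, 1, 1, 0), (0, 0, 2, 0, 0)}
print(is_principal((0, 0, 1, 1, 1), lm_Sigma))   # True:  a derivative of d^2 phi / (dx_5 dx_4)
print(is_principal((3, 2, 0, 0, 0), lm_Sigma))   # False: x_1^3 x_2^2 gives a parametric derivative
```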

5.3.3 Initial Conditions.

Consider the set \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )^{\complement }\) of complementary monomials of \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )\), as defined in Sect. 3.1.13. To a monomial \(x^{\beta }\) in \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )^{\complement }\), with \(\beta =(\beta _1,\ldots ,\beta _n)\) in \(\mathbb {N}^n\) and

$$ \mathrm {^{\complement }Mult}_{\mathcal {J}}^{{{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )^{\complement }}(x^{\beta }) = \{x_{i_1},\ldots ,x_{i_{k_\beta }}\}, $$

we associate an arbitrary analytic function

$$ \varphi _{\beta }(x_{i_1},\ldots ,x_{i_{k_\beta }}). $$

Using these functions, M. Janet defined an initial condition:

$$ (C_\beta ) \qquad \left. D^\beta \varphi \right| _{x_j=0 \; \forall x_j\in \mathrm {^{\complement }NMult}_{\mathcal {J}}^{{{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )^{\complement }}(x^{\beta })}= \varphi _{\beta }(x_{i_1},\ldots ,x_{i_{k_\beta }}). $$

Then he introduced an initial condition for the Eq. (5.6) with respect to the Janet order as the set

$$\begin{aligned} \{\, C_\beta \;|\; x^\beta \in {{\,\mathrm{lm}\,}}_{\preccurlyeq _{J}}(\Sigma )^{\complement }\,\}. \end{aligned}$$
(5.7)

5.3.4 Theorem

[51, Sect. 39] If the PDE system \((\Sigma )\) in (5.6) is complete with respect to Janet’s order \(\preccurlyeq _{J}\), then it admits at most one analytic solution satisfying the initial condition (5.7).

5.3.5 PDE Systems with Several Unknown Functions.

The construction of initial conditions given in Sect. 5.3.3 for one unknown function can be extended to linear PDE systems on \(\mathbb {C}^n\) with several unknown functions using a weight order. Consider a linear PDE system with m unknown analytic functions \(\varphi ^1,\ldots , \varphi ^m\) of the form

$$\begin{aligned} (\Sigma ) \qquad D^\alpha \varphi ^r=\sum _{\begin{array}{c} (\beta ,s) \in \mathbb {N}^n\times \{1,2,\ldots ,m\} \end{array}} a_{\alpha ,\beta }^{r,s}D^\beta \varphi ^s, \qquad \alpha \in I^r, \end{aligned}$$
(5.8)

for \(1\leqslant r\leqslant m\), where \(I^r\) is a finite subset of \(\mathbb {N}^n\) and the \(a_{\alpha ,\beta }^{r,s}\) are analytic functions.

For such a system, we define a weight order as follows. Fix a positive integer s. To any variable \(x_i\) we associate \(s+1\) weights \(C_{i,0},C_{i,1},\ldots , C_{i,s}\) by setting \(C_{i,0}=1\) and taking \(C_{i,1},\ldots , C_{i,s}\) as defined in Sect. 5.1.5. To each unknown function \(\varphi ^{j}\), we associate \(s+1\) weights \(T_0^{(j)},T_1^{(j)}\ldots , T_s^{(j)}\). With these data, we define the \(s+1\) weights \(\Gamma _0^{(j)},\Gamma _1^{(j)}, \ldots , \Gamma _s^{(j)}\) of the partial derivative \(D^\alpha \varphi ^{j}\) with \(\alpha =(\alpha _1,\ldots , \alpha _n)\) in \(\mathbb {N}^n\) by setting

$$ \Gamma _k^{(j)}=\sum _{i=1}^n \alpha _i C_{i,k}+T_k^{(j)} \qquad (0\leqslant k\leqslant s). $$

We define the notions of anteriority and posteriority on derivatives with respect to this weight order, denoted by \(\preccurlyeq _{wo}\), as it is done in Sect. 5.3.1 for systems with one unknown function. In particular, we define the notions of principal and parametric derivatives in a similar way to the case of systems with one unknown function.

Now suppose that the system (5.8) is written in the form

$$\begin{aligned} (\Sigma ) \qquad D^\alpha \varphi ^r=\sum _{\begin{array}{c} (\beta ,s) \in \mathbb {N}^n\times \{1,2,\ldots ,m\} \\ D^\beta \varphi ^s \prec _{wo}D^\alpha \varphi ^r \end{array}} a_{\alpha ,\beta }^{r,s}D^\beta \varphi ^s, \qquad \alpha \in I^r. \end{aligned}$$
(5.9)

We can formulate the notion of completeness with respect to the weight order \(\preccurlyeq _{wo}\) as in Sect. 5.3.2. Let \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{wo}}(\Sigma ,\varphi ^r)\) be the set of monomials associated with the leading derivatives \(D^\alpha \varphi ^r\) of the equations of \((\Sigma )\) with \(\alpha \) in \(I^r\). The PDE system \((\Sigma )\) is complete with respect to \(\preccurlyeq _{wo}\) if, for any \(1\leqslant r \leqslant m\), the set of monomials \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{wo}}(\Sigma ,\varphi ^r)\) is complete in the sense of Sect. 3.2.1. Finally, we can formulate, as in (5.7), an initial condition for the linear PDE system (5.9) with respect to such a weight order:

$$\begin{aligned} \{\, C_{\beta ,r} \;\;|\;\; x^\beta \in {{\,\mathrm{lm}\,}}_{\preccurlyeq _{wo}}(\Sigma ,\varphi ^r)^{\complement },\;\;\text {for } 1\leqslant r \leqslant m\,\}. \end{aligned}$$
(5.10)

5.3.6 Theorem

[51, Sect. 40] If the PDE system \((\Sigma )\) in (5.9) is complete with respect to a weight order \(\preccurlyeq _{wo}\), then it admits at most one analytic solution satisfying the initial condition (5.10).

M. Janet asserted that this result could be proved in a way similar to the proof of Theorem 5.3.4.

5.4 Completely Integrable Higher Order Linear PDE Systems

In this subsection, we will introduce integrability conditions for higher order linear PDE systems with several unknown functions. The main result, Theorem 5.4.7, characterizes algebraically the complete integrability property for complete PDE systems. It states that, under the completeness property, the complete integrability condition is equivalent to all integrability conditions being trivially satisfied. In this subsection, we will assume that the linear PDE systems are complete. In Sect. 5.6 we will provide Procedure 6 that transforms a linear PDE system of the form (5.9) into a complete linear PDE system with respect to a weight order.

5.4.1 Formal Solutions.

Consider a linear PDE system \((\Sigma )\) of the form (5.9) with unknown functions \(\varphi ^1,\ldots ,\varphi ^m\) and independent variables \(x_1,\ldots ,x_n\). Assume that \((\Sigma )\) is complete; hence, the set of monomials \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{wo}}(\Sigma ,\varphi ^r)=\{x^\alpha \;|\; \alpha \in I^r\}\) is complete for all \(1\leqslant r\leqslant m\). For the remaining part of this subsection, we will denote \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{wo}}(\Sigma ,\varphi ^r)\) by \(\mathcal {U}_r\). Let \((\mathrm {cone}_{\mathcal {J},\preccurlyeq _{wo}}(\Sigma ))\) denote the PDE system

$$ \Phi (u)(D^{\alpha } \varphi ^r)= \sum _{\begin{array}{c} (\beta ,s) \in \mathbb {N}^n\times \{1,2,\ldots ,m\} \\ D^\beta \varphi ^s \prec _{wo}D^\alpha \varphi ^r \end{array}} \Phi (u)\left( a_{\alpha ,\beta }^{r,s}D^\beta \varphi ^s\right) , \qquad 1 \leqslant r\leqslant m, $$

for \(\alpha \in I^r\) and \(u \in \mathcal {M}(\mathrm {Mult}_{\mathcal {J}}^{\mathcal {U}_r}(x^\alpha ))\).

We use the PDE system \((\mathrm {cone}_{\mathcal {J},\preccurlyeq _{wo}}(\Sigma ))\) to compute the values of the principal derivative at a point \(P^0=(x_1^0,\ldots , x_n^0)\) of \(\mathbb {C}^n\). We call formal solutions of the PDE system \((\Sigma )\) at the point \(P^0\) the elements \(\varphi ^1,\ldots ,\varphi ^m\) in \(\mathbb {C}[[x_1-x_1^0, \ldots ,x_n -x_n^0]]\) which are solutions of \((\Sigma )\). If the system \((\Sigma )\) admits an analytic solution, then these formal solutions are convergent series and give analytic solutions of \((\Sigma )\) on a neighborhood of the point \(P^0\).

5.4.2 Initial Conditions.

We are interested in the conditions under which the system \((\Sigma )\) admits a solution for any given initial condition. The initial conditions are parametrized by the set \(\mathcal {U}_r^{\complement }\) of complementary monomials of the set of monomials \(\mathcal {U}_r\) as in Sect. 5.3.3. Explicitly, for \(1\leqslant r \leqslant m\), to a monomial \(x^{\beta }\) in \(\mathcal {U}_r^{\complement }\), with \(\beta \) in \(\mathbb {N}^n\) and \(\mathrm {^{\complement }Mult}_{\mathcal {J}}^{\mathcal {U}_r^{\complement }}(x^{\beta }) = \{x_{i_1},\ldots ,x_{i_{k_r}}\}\), we associate an arbitrary analytic function

$$ \varphi _{\beta ,r}(x_{i_1},\ldots ,x_{i_{k_r}}). $$

Then by initial condition one means the following data:

$$ (C_{\beta ,r}) \qquad \left. D^{\beta } \varphi ^r \right| _{x_j=x_j^0 \; \forall x_j\in \mathrm {^{\complement }NMult}_{\mathcal {J}}^{\mathcal {U}_r^{\complement }}(x^{\beta })}= \varphi _{\beta ,r}(x_{i_1},\ldots ,x_{i_{k_r}}). $$

Then, as the initial condition for the system \((\Sigma )\) in (5.8), one takes the set

$$\begin{aligned} \underset{1\leqslant r \leqslant m}{\bigcup }\{\, C_{\beta ,r} \;|\; x^{\beta } \in \mathcal {U}_r^{\complement }\,\}. \end{aligned}$$
(5.11)

Note that M. Janet calls degree of generality of the solution of the PDE system \((\Sigma )\) the dimension of the initial conditions of the system, that is

$$ \underset{1\leqslant r \leqslant m,\; u \in \mathcal {U}_r^{\complement }}{\mathrm {Max}} \big |\mathrm {^{\complement }Mult}_{\mathcal {J}}^{\mathcal {U}_r^{\complement }}(u)\big |. $$

5.4.3 \(\mathcal {J}\)-Normal Form.

Suppose that the PDE system \((\Sigma )\) is complete. Given a linear equation E among the unknown functions \(\varphi ^1,\ldots ,\varphi ^m\) and variables \(x_1,\ldots ,x_n\), a \(\mathcal {J}\)-normal form of E with respect to the system \((\Sigma )\) is an equation obtained from E by the reduction process that replaces principal derivatives by their expressions in terms of parametric derivatives, by means of a procedure similar to RightReduce given in Procedure 5.

5.4.4 Integrability Conditions.

Given \(1\leqslant r\leqslant m\) and \(\alpha \in I^r\), let \(x_i\) in \(\mathrm {NMult}_{\mathcal {J}}^{\mathcal {U}_r}(x^\alpha )\) be a non-multiplicative variable. Apply the partial derivative \(\Phi (x_i)=\frac{\partial }{\partial x_i}\) to the equation

$$ D^\alpha \varphi ^r=\sum _{\begin{array}{c} (\beta ,s) \in \mathbb {N}^n\times \{1,2,\ldots ,m\} \\ D^\beta \varphi ^s \prec _{wo}D^\alpha \varphi ^r \end{array}} a_{\alpha ,\beta }^{r,s}D^\beta \varphi ^s. $$

This yields the PDE

$$\begin{aligned} \Phi (x_i)(D^\alpha \varphi ^r)= \sum _{\begin{array}{c} (\beta ,s) \in \mathbb {N}^n\times \{1,2,\ldots ,m\} \\ D^\beta \varphi ^s \prec _{wo}D^\alpha \varphi ^r \end{array}} \left( \frac{\partial a_{\alpha ,\beta }^{r,s}}{\partial x_i}D^\beta \varphi ^s+a_{\alpha ,\beta }^{r,s}\Phi (x_i)(D^\beta \varphi ^s)\right) . \end{aligned}$$
(5.12)

Using the system \((\mathrm {cone}_{\mathcal {J},\preccurlyeq _{wo}}(\Sigma ))\), we can rewrite the PDE (5.12) as a PDE formulated in terms of parametric derivatives and independent variables. The set of monomials \(\mathcal {U}_r\) being complete, there exists \(\alpha '\) in \(\mathbb {N}^n\) with \(x^{\alpha '}\) in \(\mathcal {U}_r\) and u in \(\mathcal {M}(\mathrm {Mult}_{\mathcal {J}}^{\mathcal {U}_r}(x^{\alpha '}))\) such that \(x_ix^{\alpha }=ux^{\alpha '}\). Then we have \(\Phi (x_i)D^\alpha =\Phi (u)D^{\alpha '}\) and we obtain the equation

$$\begin{aligned} \sum _{\begin{array}{c} (\beta ,s) \in \mathbb {N}^n\times \{1,2,\ldots ,m\} \\ D^\beta \varphi ^s \prec _{wo}D^\alpha \varphi ^r \end{array}} \left( \frac{\partial a_{\alpha ,\beta }^{r,s}}{\partial x_i}D^\beta \varphi ^s+a_{\alpha ,\beta }^{r,s}\Phi (x_i)(D^\beta \varphi ^s)\right) = \sum _{\begin{array}{c} (\beta ',s) \in \mathbb {N}^n\times \{1,2,\ldots ,m\} \\ D^{\beta '} \varphi ^s \prec _{wo}D^{\alpha '} \varphi ^r \end{array}} \Phi (u)(a_{\alpha ',\beta '}^{r,s}D^{\beta '} \varphi ^s). \end{aligned}$$
(5.13)

Using the equations of the system \((\mathrm {cone}_{\mathcal {J},\preccurlyeq _{wo}}(\Sigma ))\), we replace all principal derivatives in the equation (5.13) by parametric derivatives and independent variables. The order \(\preccurlyeq _{wo}\) being well-founded, this process terminates. Moreover, when the PDE system \((\Sigma )\) is complete, this reduction process is confluent in the sense that any sequence of such transformations of Eq. (5.13) ends with the same \(\mathcal {J}\)-normal form. The set of resulting \(\mathcal {J}\)-normal forms is denoted by \({{\,\mathrm{\mathbf{IntCond}}\,}}_{\mathcal {J},\preccurlyeq _{wo}}(\Sigma )\).

5.4.5 Remarks.

Since the system \((\Sigma )\) is complete, any Eq. (5.13) is reduced to a unique normal form. Such a normal form allows us to judge whether a given integrability condition is trivial or not.

Recall that the parametric derivatives correspond to the initial conditions. Hence, a nontrivial relation in \({{\,\mathrm{\mathbf{IntCond}}\,}}_{\mathcal {J},\preccurlyeq _{wo}}(\Sigma )\) provides a nontrivial relation among the initial conditions. In this way, we can decide whether the system \((\Sigma )\) is completely integrable or not.

5.4.6 Completely Integrable Systems.

A complete linear PDE system \((\Sigma )\) of the form (5.9) is said to be completely integrable if it admits an analytic solution for any given initial condition (5.11). For the geometrical interpretation of this condition, we refer the reader to Sect. 2.1.4.

5.4.7 Theorem

[51, Sect. 42] Let \((\Sigma )\) be a complete finite linear PDE system of the form (5.9). Then the system \((\Sigma )\) is completely integrable if and only if every relation in \({{\,\mathrm{\mathbf{IntCond}}\,}}_{\mathcal {J},\preccurlyeq _{wo}}(\Sigma )\) is a trivial identity.

A proof of this result is given in [51, Sect. 43]. Note that the condition in this theorem is equivalent to asserting that any relation (5.13) is an algebraic consequence of a PDE equation of the system \((\mathrm {cone}_{\mathcal {J},\preccurlyeq _{wo}}(\Sigma ))\).

5.5 Canonical Forms of Linear PDE Systems

In this subsection, we recall from [51] the notion of canonical linear PDE system. A canonical system is a normal form with respect to a weight order on derivatives that in addition satisfies some analytic conditions, allowing one to extend the Cauchy–Kowalevsky theorem given in Sect. 2.1.3. Note that this terminology refers to a notion of normal form, but it does not correspond to the well-known notion for a rewriting system, which requires both termination and confluence. In this chapter, we present canonical systems with respect to a weight order, as is done in Janet’s monograph [51], but we point out here that this notion can be defined with any total order on derivatives.

5.5.1 Autoreduced PDE Systems.

Let \((\Sigma )\) be a finite linear PDE system. Suppose that a weight order \(\preccurlyeq _{wo}\) is fixed on the set of unknown functions \(\varphi ^1,\ldots ,\varphi ^m\) of \((\Sigma )\) and their derivatives, as defined in Sect. 5.3.5. Suppose also that each equation of the system \((\Sigma )\) can be expressed in the form

$$ (\Sigma ^{(\alpha ,r)}) \qquad D^\alpha \varphi ^r=\sum _{\begin{array}{c} (\beta ,s) \in \mathbb {N}^n\times \{1,2,\ldots ,m\} \\ D^\beta \varphi ^s \prec _{wo}D^\alpha \varphi ^r \end{array}} a_{(\beta ,s)}^{(\alpha ,r)}D^\beta \varphi ^s, $$

so that

$$\begin{aligned} (\Sigma ) = \bigcup _{(\alpha ,r)\in I} \Sigma ^{(\alpha ,r)}, \end{aligned}$$
(5.14)

the union being indexed by a multiset I. The support of the equation \((\Sigma ^{(\alpha ,r)})\) is defined by

$$ \mathrm {Supp}( \Sigma ^{(\alpha ,r)}) = \{\,(\beta ,s)\;|\;a_{(\beta ,s)}^{(\alpha ,r)} \ne 0\;\}. $$

For \(1\leqslant r \leqslant m\), consider the set of monomials \({{\,\mathrm{lm}\,}}_{\preccurlyeq _{wo}}(\Sigma ,\varphi ^r)\) corresponding to leading derivatives, that is, the monomials \(x^{\alpha }\) such that \((\alpha ,r)\) belongs to I. The system \((\Sigma )\) is said to be

(i):

\(\mathcal {J}\)-left-reduced with respect to \(\preccurlyeq _{wo}\) if for any \((\alpha ,r)\) in I there exist no \((\alpha ',r)\) in I and nontrivial monomial \(x^\gamma \) in \(\mathcal {M}(\mathrm {Mult}_\mathcal {J}^{{{\,\mathrm{lm}\,}}_{\preccurlyeq _{wo}}(\Sigma ,\varphi ^r)}(x^{\alpha '}))\) such that \(x^\alpha =x^\gamma x^{\alpha '}\);

(ii):

\(\mathcal {J}\)-right-reduced with respect to \(\preccurlyeq _{wo}\) if, for any \((\alpha ,r)\) in I and any \((\beta ,s)\) in \(\mathrm {Supp}( \Sigma ^{(\alpha ,r)})\), there exist no \((\alpha ',s)\) in I and nontrivial monomial \(x^\gamma \) in \(\mathcal {M}(\mathrm {Mult}_\mathcal {J}^{{{\,\mathrm{lm}\,}}_{\preccurlyeq _{wo}}(\Sigma ,\varphi ^s)}(x^{\alpha '}))\) such that \(x^\beta =x^\gamma x^{\alpha '}\);

(iii):

\(\mathcal {J}\)-autoreduced with respect to \(\preccurlyeq _{wo}\) if it is both \(\mathcal {J}\)-left-reduced and \(\mathcal {J}\)-right-reduced with respect to \(\preccurlyeq _{wo}\).

5.5.2 Canonical PDE Systems.

A PDE system \((\Sigma )\) is said to be \(\mathcal {J}\)-canonical with respect to a weight order \(\preccurlyeq _{wo}\) if it satisfies the following five conditions:

(i):

it consists of finitely many equations, and each equation can be expressed in the form

$$ D^\alpha \varphi ^r=\sum _{\begin{array}{c} (\beta ,s) \in \mathbb {N}^n\times \{1,2,\ldots ,m\} \\ D^\beta \varphi ^s \prec _{wo}D^\alpha \varphi ^r \end{array}} a_{(\beta ,s)}^{(\alpha ,r)}D^\beta \varphi ^s, $$
(ii):

the system \((\Sigma )\) is \(\mathcal {J}\)-autoreduced with respect to \(\preccurlyeq _{wo}\);

(iii):

the system \((\Sigma )\) is complete;

(iv):

the system \((\Sigma )\) is completely integrable;

(v):

the coefficients \(a_{(\beta ,s)}^{(\alpha ,r)}\) of the equations in (i) and the initial conditions of \((\Sigma )\) are analytic.

Under these assumptions, the system \((\Sigma )\) admits a unique analytic solution satisfying appropriate initial conditions parametrized by complementary monomials as in Sect. 5.3.3.

5.5.3 Remark.

We note that the notion of canonicity proposed by Janet in [51] does not impose the condition of being \(\mathcal {J}\)-autoreduced, even if M. Janet did mention this autoreduction property in some simple cases. The autoreduced property implies the minimality of the system. This fact was formulated by Gerdt and Blinkov in [28] with the notion of minimal involutive basis.

5.5.4 Example.

In [51, Sect. 44], M. Janet studied the following linear PDE system with one unknown function \(\varphi \):

$$ (\Sigma )\qquad {\left\{ \begin{array}{ll} p_{54}=p_{11}, \\ p_{53}=p_{41}, \\ p_{52}=p_{31}, \\ p_{44}=p_{52}, \\ p_{43}=p_{21}, \\ p_{33}=p_{42}, \end{array}\right. } $$

where \(p_{ij}\) denotes \(\dfrac{\partial ^2 \varphi }{\partial x_i \partial x_j}\). In Example 3.2.6, we have shown that the left-hand sides of the equations of this system form a complete set of monomials. Let us define the following weights for the variables:

$$\begin{aligned} \begin{array}{ccccc} x_1 &{} x_2 &{} x_3 &{} x_4 &{} x_5 \\ \hline 1 &{} 0 &{} 1 &{} 1 &{} 2 \\ 0 &{} 0 &{} 0 &{} 1 &{} 1 \\ \end{array} \end{aligned}$$

We deduce the following weights for the second derivatives:

$$\begin{aligned} \begin{array}{c|c|c|c|c|c|c|c|c} p_{22} &{} \begin{matrix} p_{21} \\ p_{32} \end{matrix} &{} p_{42} &{} \begin{matrix} p_{11} \\ p_{31} \\ p_{33} \end{matrix} &{} \begin{matrix} p_{52} \\ p_{41} \\ p_{43} \end{matrix} &{} p_{44} &{} \begin{matrix} p_{51} \\ p_{53} \end{matrix} &{} p_{54} &{} p_{55} \\ \hline 0 &{} 1 &{} 1 &{} 2 &{} 2 &{} 2 &{} 3 &{} 3 &{} 4 \\ 0 &{} 0 &{} 1 &{} 0&{} 1 &{} 2 &{} 1 &{} 2 &{} 2 \\ \end{array} \end{aligned}$$

As seen in Example 3.3.4, given any four analytic functions

$$ \varphi _0(x_1,x_2), \quad \varphi _3(x_1, x_2), \quad \varphi _4(x_1, x_2), \quad \varphi _5(x_1, x_5), $$

there exists a unique solution of the PDE system \((\Sigma )\). Note that the initial condition is given by

$$\begin{aligned} \varphi \vert _{x_3=x_3^0, x_4=x_4^0, x_5=x_5^0}&=\varphi _{0,0,0,0,0}(x_1,x_2),\\ \left. \frac{\partial \varphi }{\partial x_3}\right| _{x_3=x_3^0, x_4=x_4^0, x_5=x_5^0}&=\varphi _{0,0,1,0,0}(x_1, x_2), \\ \left. \frac{\partial \varphi }{\partial x_4}\right| _{x_3=x_3^0, x_4=x_4^0, x_5=x_5^0}&=\varphi _{0,0,0,1,0}(x_1, x_2), \\ \left. \frac{\partial \varphi }{\partial x_5}\right| _{x_2=x_2^0, x_3=x_3^0, x_4=x_4^0}&=\varphi _{0,0,0,0,1}(x_1, x_5). \end{aligned}$$

We set

$$\begin{aligned} \begin{array}{c|ccccc} A=p_{54}-p_{11} &{} x_5 &{} x_4 &{} x_3 &{} x_2 &{} x_1 \\ B=p_{53}-p_{41} &{} x_5 &{} &{} x_3 &{} x_2 &{} x_1 \\ C=p_{52}-p_{31} &{} x_5 &{} &{} &{} x_2 &{} x_1 \\ D=p_{44}-p_{52} &{} &{} x_4 &{} x_3 &{} x_2 &{} x_1 \\ E=p_{43}-p_{21} &{} &{} &{} x_3 &{} x_2 &{} x_1 \\ F=p_{33}-p_{42} &{} &{} &{} x_3 &{} x_2 &{} x_1 \end{array} \end{aligned}$$

where the variables on the right correspond to the multiplicative variables of the first term. In order to decide if the system \((\Sigma )\) is completely integrable it suffices to check if the terms

$$ B_4, C_4, C_3, D_5, E_5, E_4, F_5, F_4 $$

are linear combinations of derivatives of the terms A, B, C, D, E, F with respect to their multiplicative variables. Here \(Y_i\) denotes the derivative \(\frac{\partial }{\partial x_i} Y\) of a term Y. Finally, we observe that

$$\begin{aligned}&B_4=A_3-D_1-C_1, \\&C_4=A_2-E_1, \qquad C_3=B_2-F_1, \\&D_5=A_4-B_1-C_5, \\&E_5=A_3-C_1, \qquad E_4=D_3+B_2, \\&F_5=B_3-A_2+E_1, \qquad F_4=E_3-D_2-C_2. \end{aligned}$$

As a consequence, the system \((\Sigma )\) is completely integrable; hence, it is \(\mathcal {J}\)-canonical.
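
The table of multiplicative variables above can be recomputed mechanically. The following Python sketch is our own illustration of the Janet division used in this chapter (the variables being examined from \(x_5\) down to \(x_1\), as one can read off from the tables); it reproduces the multiplicative variables of the monomials \(x_5x_4\), \(x_5x_3\), \(x_5x_2\), \(x_4^2\), \(x_4x_3\), \(x_3^2\) associated with \(A,\ldots ,F\).

```python
def janet_multiplicative(U, n):
    """Janet-multiplicative variables (as 1-based indices) of each monomial
    of U, a list of exponent tuples (a_1, ..., a_n).  The variable x_i is
    multiplicative for u when deg_i(u) is maximal among the monomials of U
    having the same exponents as u in x_{i+1}, ..., x_n."""
    mult = {}
    for u in U:
        mult[u] = {i + 1 for i in range(n)
                   if u[i] == max(v[i] for v in U if v[i+1:] == u[i+1:])}
    return mult

# Leading monomials of A, ..., F: x5*x4, x5*x3, x5*x2, x4^2, x4*x3, x3^2
U = [(0, 0, 0, 1, 1), (0, 0, 1, 0, 1), (0, 1, 0, 0, 1),
     (0, 0, 0, 2, 0), (0, 0, 1, 1, 0), (0, 0, 2, 0, 0)]
for u, mv in janet_multiplicative(U, 5).items():
    print(u, sorted(mv))
# A: [1,2,3,4,5]  B: [1,2,3,5]  C: [1,2,5]  D: [1,2,3,4]  E: [1,2,3]  F: [1,2,3]
```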

5.6 Reduction of a PDE System to a Canonical Form

In his monograph [51], M. Janet did not discuss the correctness of the procedures that he introduced in order to reduce a finite linear PDE system to a canonical form. In this subsection, we explain how to transform a finite linear PDE system with several unknown functions, by derivation, elimination, and autoreduction, into an equivalent linear PDE system that is either in canonical form or an incompatible system. For linear PDE systems with constant coefficients, the correctness of the procedure can be verified easily.

5.6.1 Equivalence of PDE Systems.

Janet’s procedure transforms by reduction and completion a finite linear PDE system into a new PDE system, which is equivalent to the original one. In his work, M. Janet did not explain this notion of equivalence, which can be described as follows. Consider two finite linear PDE systems with m unknown functions and n independent variables,

$$ (\Sigma ^l) \qquad \sum _{j=1}^m p_{i,j}^l \varphi ^j =0, \qquad i \in I^l, $$

for \(l=1,2\), where \(p_{i,j}^l\) are linear differential operators. We say that the PDE systems \((\Sigma ^1)\) and \((\Sigma ^2)\) are equivalent if the sets of solutions of the two systems coincide. This notion can also be formulated by saying that the D-modules generated by the families of differential operators \((p_{i,1}^1,\ldots ,p_{i,m}^1)\) for \(i\in I^1\) and \((p_{i,1}^2,\ldots ,p_{i,m}^2)\) for \(i\in I^2\) coincide.

5.6.2 A Canonical Weight Order.

Consider a finite linear PDE system \((\Sigma )\) of m unknown functions \(\varphi ^1,\ldots , \varphi ^m\) of the independent variables \(x_1,\ldots , x_n\). To these variables and functions, we associate the following weights:

$$\begin{aligned} \begin{array}{ccccc|cccc} x_1 &{} x_2 &{} \ldots &{} x_{n-1} &{} x_n &{} \varphi ^1 &{} \varphi ^2 &{} \ldots &{} \varphi ^m \\ \hline 1 &{} 1 &{} \ldots &{} 1 &{} 1 &{} 0 &{} 0 &{} \ldots &{} 0 \\ 0 &{} 0 &{} \ldots &{} 0 &{} 0 &{} 1 &{} 2 &{} \ldots &{} m \\ 0 &{} 0 &{} \ldots &{} 0 &{} 1 &{} 0 &{} 0 &{} \ldots &{} 0 \\ 0 &{} 0 &{} \ldots &{} 1 &{} 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ \vdots &{} \vdots &{} &{} \vdots &{} \vdots &{} \vdots &{} \vdots &{} &{} \vdots \\ 0 &{} 1 &{} \ldots &{} 0 &{} 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ 1 &{} 0 &{} \ldots &{} 0 &{} 0 &{} 0 &{} 0 &{} \ldots &{} 0 \\ \end{array} \end{aligned}$$

The weight order on monomial partial derivatives induced by this weight system, as defined in Sect. 5.1.5, is total. Following M. Janet, this order is called the canonical weight order and is denoted by \(\preccurlyeq _{cwo}\).
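
As a small illustration (our own sketch; the helper names are assumptions), the weight table above can be generated for any n and m, and the weight vector of a derivative \(D^\alpha \varphi ^r\) read off from it; comparing such vectors lexicographically then orders the derivatives.

```python
def canonical_weight_matrix(n, m):
    """Rows of the canonical weight table: total order of derivation, index of
    the unknown function, then one row for each of x_n, x_{n-1}, ..., x_1."""
    rows = [[1] * n + [0] * m,
            [0] * n + list(range(1, m + 1))]
    for i in reversed(range(n)):
        rows.append([1 if k == i else 0 for k in range(n)] + [0] * m)
    return rows

def cwo_weight(alpha, r, rows, n):
    """Weight vector of D^alpha phi^r (alpha of length n, 1 <= r <= m)."""
    vec = list(alpha) + [0] * (len(rows[0]) - n)
    vec[n + r - 1] = 1                     # marks the unknown function phi^r
    return tuple(sum(w * v for w, v in zip(row, vec)) for row in rows)

rows = canonical_weight_matrix(n=3, m=2)
print(cwo_weight((1, 0, 0), 1, rows, 3))   # (1, 1, 0, 0, 1)
print(cwo_weight((0, 1, 0), 2, rows, 3))   # (1, 2, 0, 1, 0)
```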

5.6.3 Combination of Equations.

Consider the PDE system \((\Sigma )\) with the canonical weight order \(\preccurlyeq _{cwo}\) defined in Sect. 5.6.2. We assume that the system \((\Sigma )\) is given in the same form as (5.14) and that each equation of the system is written in the form

$$ (E^{(\alpha ,r)}_i) \qquad D^\alpha \varphi ^r=\sum _{\begin{array}{c} (\beta ,s) \in \mathbb {N}^n\times \{1,2,\ldots ,m\} \\ D^\beta \varphi ^s \prec _{cwo}D^\alpha \varphi ^r \end{array}} a_{(\alpha ,r),i}^{(\beta ,s)}D^\beta \varphi ^s, \qquad i \in I^{(\alpha ,r)}. $$

The leading pair \((\alpha ,r)\) of the equation \(E^{(\alpha ,r)}_i\) will be denoted by \(\mathrm {ldeg}_{\preccurlyeq _{cwo}}(E_{i}^{(\alpha ,r)})\). We will denote by \(\mathrm {Ldeg}_{\preccurlyeq _{cwo}}(\Sigma )\) the subset of \(\mathbb {N}^n\times \{1,\ldots ,m\}\) consisting of leading pairs of the equations forming the system \((\Sigma )\):

$$ \mathrm {Ldeg}_{\preccurlyeq _{cwo}}(\Sigma ) = \{\,\mathrm {ldeg}_{\preccurlyeq _{cwo}}(E) \;|\; E\;\text {is an equation of}\; \Sigma \,\}. $$

The canonical weight order \(\preccurlyeq _{cwo}\) induces a total order on \(\mathbb {N}^n\times \{1,\ldots ,m\}\) denoted by \(\prec _{lp}\). We will denote by \(K(\alpha ,r,i)\) the set of pairs \((\beta ,s)\) of running indices in the sum of the equation \(E^{(\alpha ,r)}_i\). Given i and j in \(I^{(\alpha ,r)}\), we set

$$ (\alpha _{i,j},r_{i,j}) = \mathrm {Max} \big ( (\beta ,s)\in K(\alpha ,r,i) \cup K(\alpha ,r,j) \; |\; a_{(\alpha ,r),i}^{(\beta ,s)} \ne a_{(\alpha ,r),j}^{(\beta ,s)}\big ). $$

We define

$$\begin{aligned} b_{(\alpha ,r)}^{(\alpha _{i,j},r_{i,j})} = {\left\{ \begin{array}{ll} a_{(\alpha ,r),i}^{(\alpha _{i,j},r_{i,j})}, &{} \text{ if } (\alpha _{i,j},r_{i,j})\in K(\alpha ,r,i) \setminus K(\alpha ,r,j), \\ -a_{(\alpha ,r),j}^{(\alpha _{i,j},r_{i,j})}, &{} \text{ if } (\alpha _{i,j},r_{i,j})\in K(\alpha ,r,j) \setminus K(\alpha ,r,i), \\ a_{(\alpha ,r),i}^{(\alpha _{i,j},r_{i,j})} - a_{(\alpha ,r),j}^{(\alpha _{i,j},r_{i,j})}, &{} \text{ if } (\alpha _{i,j},r_{i,j})\in K(\alpha ,r,i) \cap K(\alpha ,r,j), \\ \end{array}\right. } \end{aligned}$$
(5.15)

and we denote by \(E_{i,j}^{(\alpha ,r)}\) the equation

$$\begin{aligned} D^{\alpha _{i,j}}\varphi ^{r_{i,j}}= \sum _{(\beta ,s) \in K(\alpha ,r,j) \atop (\beta ,s) \prec _{lp} (\alpha _{i,j},r_{i,j})} c_{(\alpha _{i,j},r_{i,j}),j}^{(\beta ,s)} D^\beta \varphi ^s - \sum _{(\beta ,s) \in K(\alpha ,r,i) \atop (\beta ,s) \prec _{lp} (\alpha _{i,j},r_{i,j})} c_{(\alpha _{i,j},r_{i,j}),i}^{(\beta ,s)} D^\beta \varphi ^s, \end{aligned}$$
(5.16)

where, for any \(k=i,j\),

$$ c_{(\alpha _{i,j},r_{i,j}),k}^{(\beta ,s)} = a_{(\alpha ,r),k}^{(\beta ,s)}/b_{(\alpha ,r)}^{(\alpha _{i,j},r_{i,j})}. $$

Equation (5.16) corresponds to a combination of the two equations \(E_{i}^{(\alpha ,r)}\) and \(E_{j}^{(\alpha ,r)}\) and accordingly it will be denoted by \(\mathbf {Combine}_{\preccurlyeq _{cwo}}(E_{i}^{(\alpha ,r)},E_{j}^{(\alpha ,r)})\). Procedure 3 adds to a set of PDE equations \((\Sigma )\) an equation E by combination.

(Procedures 3, 4, and 5, namely \(\mathbf {Combine}_{\preccurlyeq _{cwo}}\), \(\mathbf {LeftReduce}_{\mathcal {J},\preccurlyeq _{cwo}}\), and \(\mathbf {RightReduce}_{\mathcal {J},\preccurlyeq _{cwo}}\), are given as algorithm figures in the original.)
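
To illustrate the combination (5.15)–(5.16), here is a simplified Python sketch restricted to constant coefficients (our own illustration; the function combine and the tuple encoding of the pairs \((\beta ,s)\) are assumptions). An equation \(D^\alpha \varphi ^r=\sum a^{(\beta ,s)}D^\beta \varphi ^s\) is stored as a pair (lead, rhs), and the built-in tuple order stands in for the order \(\prec _{lp}\).

```python
def combine(eq_i, eq_j):
    """Combine two equations having the same left-hand side D^alpha phi^r.

    An equation is (lead, rhs) with lead = (alpha, r) and rhs a dict mapping
    pairs (beta, s) to constant coefficients."""
    lead_i, rhs_i = eq_i
    lead_j, rhs_j = eq_j
    assert lead_i == lead_j
    keys = set(rhs_i) | set(rhs_j)
    diff = {k: rhs_i.get(k, 0) - rhs_j.get(k, 0) for k in keys}
    diff = {k: c for k, c in diff.items() if c != 0}
    if not diff:
        return None                     # the two equations coincide
    new_lead = max(diff)                # the pair (alpha_{i,j}, r_{i,j})
    b = diff[new_lead]                  # the coefficient (5.15)
    new_rhs = {k: -c / b for k, c in diff.items() if k != new_lead}
    return new_lead, new_rhs

# D^(2,0) phi^1 = 2 D^(1,0) phi^1 + D^(0,1) phi^1
# D^(2,0) phi^1 = 2 D^(1,0) phi^1 + 3 phi^1
e1 = (((2, 0), 1), {((1, 0), 1): 2, ((0, 1), 1): 1})
e2 = (((2, 0), 1), {((1, 0), 1): 2, ((0, 0), 1): 3})
print(combine(e1, e2))                  # D^(0,1) phi^1 = 3 phi^1
```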

Note that at each step of the procedure \(\mathbf{RightReduce}_{\mathcal {J},\preccurlyeq _{cwo}}\), the running system \(\Gamma \) remains \(\mathcal {J}\)-left-reduced. Combining this procedure with the procedure \(\mathbf{LeftReduce}_{\mathcal {J},\preccurlyeq _{cwo}}\), we obtain the following autoreduction procedure that transforms a PDE system into a \(\mathcal {J}\)-autoreduced PDE system.

5.6.4 Procedure \(\mathbf{Autoreduce}_{\mathcal {J},\preccurlyeq _{cwo}}(\Sigma )\).

Let us fix a canonical weight order \(\preccurlyeq _{cwo}\) for \(\varphi ^1,\ldots ,\varphi ^m\) and \(x_1,\ldots ,x_n\). Let \((\Sigma )\) be a finite linear PDE system given in the same form as (5.14), with unknown functions \(\varphi ^1,\ldots ,\varphi ^m\) of the independent variables \(x_1,\ldots ,x_n\). We assume that the leading derivatives of \((\Sigma )\) are all different. The procedure \(\mathbf{Autoreduce}_{\mathcal {J},\preccurlyeq _{cwo}}\) transforms the PDE system \((\Sigma )\) into a \(\mathcal {J}\)-autoreduced PDE system equivalent to \((\Sigma )\), by applying successively the procedures \(\mathbf{LeftReduce}_{\mathcal {J},\preccurlyeq _{cwo}}\) and \(\mathbf{RightReduce}_{\mathcal {J},\preccurlyeq _{cwo}}\). An algebraic version of this procedure is given in Procedure 9. Let us remark that the autoreduction procedure given in Janet’s monograph corresponds to \(\mathbf{LeftReduce}_{\mathcal {J},\preccurlyeq _{cwo}}\) and so does not deal with the right reduction of equations.

Note that the procedure \(\mathbf{Autoreduce}_{\mathcal {J},\preccurlyeq _{cwo}}\) fails if and only if the procedure \(\mathbf{Combine}_{\preccurlyeq _{cwo}}\) fails. This occurs when the procedure \(\mathbf{Combine}_{\preccurlyeq _{cwo}}\) is applied to equations \(E_i^{(\alpha ,r)}\) and \(E_j^{(\alpha ,r)}\) and some coefficients \(b_{(\alpha ,r)}^{(\alpha _{i,j},r_{i,j})}\), as defined in (5.15), vanish at some point of \(\mathbb {C}^n\). In particular, the procedure \(\mathbf{Autoreduce}_{\mathcal {J},\preccurlyeq _{cwo}}\) does not fail when all the coefficients are constant. This constraint on the coefficients of the system concerns only the left reduction and was not discussed in Janet’s monograph. As a consequence, we have the following result.

5.6.5 Theorem

If \((\Sigma )\) is a finite linear PDE system with constant coefficients, the procedure \(\mathbf{Autoreduce}_{\mathcal {J},\preccurlyeq _{cwo}}\) terminates and produces a finite autoreduced PDE system that is equivalent to \((\Sigma )\).

5.6.6 Completion Procedure of a PDE System.

Consider a finite linear PDE system \((\Sigma )\) with the canonical weight order \(\preccurlyeq _{cwo}\) given in Sect. 5.6.2. If the system \((\Sigma )\) is \(\mathcal {J}\)-autoreduced, then the following procedure \(\mathbf{Complete}_{\mathcal {J},\preccurlyeq _{cwo}}(\Sigma )\) transforms the system \((\Sigma )\) into a finite complete \(\mathcal {J}\)-autoreduced linear PDE system. This completion procedure appears in Janet’s monograph [51] but not in an explicit way.

(Procedure 6, \(\mathbf{Complete}_{\mathcal {J},\preccurlyeq _{cwo}}\), is given as an algorithm figure in the original.)

5.6.7 Completion and Integrability Conditions.

In Procedure 6, the set \(\mathcal {P}r_r\) contains all the obstructions to the completeness of a system. The procedure \(\mathbf{Complete}_{\mathcal {J},\preccurlyeq _{cwo}}\) adds to the system the necessary equations in order to eliminate all these obstructions. The equations added to the system have the form

$$ D^\beta \varphi ^r = \mathrm {Rhs}(E^{(\beta ,r)}) - a_{(\beta ,r)}^{(\delta ,r)} D^\delta \varphi ^r + a_{(\beta ,r)}^{(\delta ,r)} D^\gamma (\mathrm {Rhs}(E^{(\alpha ,r)})) $$

with \(\delta \ne \beta \); they lead to the definition of a new integrability condition of the form (5.13), using the construction given in Sect. 5.4.4.

5.6.8 Janet’s Procedure.

Given a finite linear PDE system \((\Sigma )\) with the canonical weight order \(\preccurlyeq _{cwo}\) defined in Sect. 5.6.2, Janet’s procedure \(\mathbf{Janet}_{\mathcal {J},\preccurlyeq _{cwo}}\) either transforms the system \((\Sigma )\) into a PDE system \((\Gamma )\) that is \(\mathcal {J}\)-canonical with respect to \(\preccurlyeq _{cwo}\), or computes an obstruction to the feasibility of such a transformation. In the first case, the solutions of the \(\mathcal {J}\)-canonical system \((\Gamma )\) are solutions of the initial system \((\Sigma )\). In the second case, the obstruction corresponds to a nontrivial relation on the initial conditions. We refer the reader to [81] or [78] for a deeper discussion on this procedure and its implementations.

The first step of the procedure consists in reducing the given PDE system \((\Sigma )\) to a PDE system \((\Gamma )\) that is \(\mathcal {J}\)-autoreduced and complete with respect to \(\preccurlyeq _{cwo}\), by applying successively the procedures \(\mathbf{Autoreduce}_{\mathcal {J}}\) and \({{\,\mathrm{\mathbf{Complete}}\,}}_{\mathcal {J}}\).

Then one computes the set \({{\,\mathrm{\mathbf{IntCond}}\,}}_{\mathcal {J},\preccurlyeq _{cwo}}(\Gamma )\) of integrability conditions of the system \((\Gamma )\). Recall from Sect. 5.4.4 that this set is a finite set of relations that do not contain principal derivatives. Hence, these integrability conditions are \(\mathcal {J}\)-normal forms with respect to \((\Gamma )\). Since the system \((\Gamma )\) is complete, these normal forms are unique, and by Theorem 5.4.7, if all of these normal forms are trivial, then the system \((\Gamma )\) is completely integrable. Otherwise, the procedure takes a nontrivial condition \(\mathcal {R}\) in the set \({{\,\mathrm{\mathbf{IntCond}}\,}}_{\mathcal {J},\preccurlyeq _{cwo}}(\Gamma )\) and distinguishes two cases. If the relation \(\mathcal {R}\) involves only the functions \(\varphi ^1,\ldots , \varphi ^m\) and the variables \(x_1,\ldots ,x_n\), then it imposes a relation on the initial conditions of the system \((\Gamma )\). In the other case, the set \({{\,\mathrm{\mathbf{IntCond}}\,}}_{\mathcal {J},\preccurlyeq _{cwo}}(\Gamma )\) contains at least one PDE involving a derivative of one of the functions \(\varphi ^1,\ldots ,\varphi ^m\), and the procedure \(\mathbf{Janet}_{\mathcal {J},\preccurlyeq _{cwo}}\) is applied again to the PDE system \((\Sigma )\) completed by all the PDE equations in \({{\,\mathrm{\mathbf{IntCond}}\,}}_{\mathcal {J},\preccurlyeq _{cwo}}(\Gamma )\).

(The procedure \(\mathbf{Janet}_{\mathcal {J},\preccurlyeq _{cwo}}\) is given as an algorithm figure in the original.)

5.6.9 Remarks.

If the procedure stops at the first loop, that is, if C consists only of trivial identities, then the system \((\Sigma )\) is reducible to the \(\mathcal {J}\)-canonical form \((\Gamma )\) equivalent to \((\Sigma )\).

When the set C contains an integrability condition involving at least one derivative of the unknown functions, the procedure is applied again to the system \((\Sigma )\cup C\). Notice that it would also be possible to call the procedure again on \((\Gamma )\cup C\), but, as done in Janet’s monograph [51], we choose to restart the procedure on \((\Sigma )\cup C\) in order to have a PDE system in which each equation has a clear meaning, namely, it comes either from the initial problem or from an integrability condition.

Finally, note that the procedure \(\mathbf{Janet}_{\mathcal {J},\preccurlyeq _{cwo}}\) fails on a PDE system \((\Sigma )\) if and only if the procedure \(\mathbf{Autoreduce}_{\mathcal {J},\preccurlyeq _{cwo}}\) fails on \((\Sigma ) \cup C\), where C consists of the potential nontrivial relations among the unknown functions and the variables added during the process, as explained in Sect. 5.6.4. In particular, by Theorem 5.6.5, if \((\Sigma )\) is a finite linear PDE system with constant coefficients, the procedure \(\mathbf{Autoreduce}_{\mathcal {J},\preccurlyeq _{cwo}}\) terminates and produces a finite autoreduced PDE system equivalent to \((\Sigma )\).

5.6.10 Example.

In [51, Sect. 47], M. Janet studied the PDE system

$$ (\Sigma ) \qquad {\left\{ \begin{array}{ll} p_{33}=&{}x_2p_{11}, \\ p_{22}=&{}0, \end{array}\right. } $$

where \(p_{i_1\ldots i_k}\) denotes the derivative \(\dfrac{\partial ^k \varphi }{\partial x_{i_1}\ldots \partial x_{i_k}}\) of an unknown function \(\varphi \) of the independent variables \(x_1,x_2,x_3\). The set of monomials of the left-hand side of the system \((\Sigma )\) is \(\mathcal {U}=\{x_3^2, x_2^2\}\). The set \(\mathcal {U}\) is not complete. Indeed, for instance the monomial \(x_3x_2^2\) is not in the involutive cone \(\mathrm {cone}_\mathcal {J}(\mathcal {U})\). If we complete the set \(\mathcal {U}\) by the monomial \(x_3x_2^2\) we obtain a complete set \(\widetilde{\mathcal {U}}:=\mathcal {U}\cup \{x_3x_2^2\}\). The PDE system \((\Sigma )\) is then equivalent to the PDE system

$$ (\Gamma ) \qquad {\left\{ \begin{array}{ll} p_{33}=&{} x_2p_{11}, \\ p_{322}=&{}0, \\ p_{22}=&{}0. \end{array}\right. } $$

Note that \(p_{322}=\partial _{x_3}p_{22}=0\). The table of multiplicative variables with respect to the set \(\widetilde{\mathcal {U}}\) is given by

$$\begin{aligned} \begin{array}{c|ccc} x_3^2 &{} x_3 &{} x_2 &{} x_1 \\ x_3x_2^2&{} &{} x_2 &{} x_1 \\ x_2^2 &{} &{} x_2 &{} x_1 \end{array} \end{aligned}$$

We deduce that there exists only one nontrivial compatibility condition, which reads

$$\begin{aligned} p_{3322}=&\,\partial _{x_3}p_{322}=\partial _{x_2}^2p_{33}, \qquad (x_3\cdot x_3x_2^2=(x_2)^2\cdot x_3^2) \\ =&\,\partial _{x_2}^2(x_2p_{11})=2p_{211}+x_2p_{2211}=2p_{211}=0, \qquad (p_{2211}=\partial _{x_1}^2 p_{22}=0). \end{aligned}$$

Hence, \(p_{211}=0\) is a nontrivial relation of the system \((\Gamma )\), and the PDE system \((\Sigma )\) is not completely integrable. We then consider the new PDE system given by

$$ (\Sigma ') \quad {\left\{ \begin{array}{ll} p_{33}=&{}x_2p_{11}, \\ p_{22}=&{}0, \\ p_{211}=&{}0. \end{array}\right. } $$

The associated set of monomials \(\mathcal {U}'=\{x_3^2, x_2^2, x_2x_1^2\}\) is not complete. It can be completed to the complete set \(\widetilde{\mathcal {U}'}:=\mathcal {U}' \cup \{x_3x_2^2, x_3x_2x_1^2\}\). The PDE system \((\Sigma ')\) is then equivalent to the following PDE system:

$$ (\Gamma ') \quad {\left\{ \begin{array}{ll} p_{33}=&{}x_2p_{11}, \\ p_{322}=&{}0, \\ p_{3211}=&{}0, \\ p_{22}=&{}0, \\ p_{221}=&{}0. \end{array}\right. } $$

Note that \(p_{322}=\partial _{x_3}p_{22}\) and \(p_{3211}=\partial _{x_3}p_{211}\). The multiplicative variables with respect to the set of monomials \(\widetilde{\mathcal {U}'}\) are given by the following table:

$$\begin{aligned} \begin{array}{c|ccc} x_3^2 &{} x_3 &{} x_2 &{} x_1 \\ x_3x_2^2&{} &{} x_2 &{} x_1 \\ x_3x_2x_1^2 &{} &{} &{} x_1 \\ x_2^2 &{} &{} x_2 &{} x_1 \\ x_2x_1^2 &{} &{} &{} x_1 \end{array} \end{aligned}$$

We deduce that the only nontrivial compatibility relation is

$$\begin{aligned} p_{33211}=&\; \partial _{x_3}(p_{3211})=0, \\ =&\; \partial _{x_1}^2\partial _{x_2}(p_{33})=\partial _{x_1}^2\partial _{x_2}(x_2p_{11}), \\ =&\; \partial _{x_1}^2(p_{11}+x_2p_{211})=p_{1111}, \qquad \text {since }p_{211}=0. \end{aligned}$$

We see that \(p_{1111}=0\) is a nontrivial relation of the system \((\Gamma ')\). Hence, the system \((\Sigma ')\) is not completely integrable. Now consider the new PDE system given by

$$ (\Sigma '') \quad {\left\{ \begin{array}{ll} p_{33}=&{}x_2p_{11}, \\ p_{22}=&{}0, \\ p_{211}=&{}0, \\ p_{1111}=&{}0. \end{array}\right. } $$

The associated set of monomials \(\mathcal {U}''=\{x_3^2, x_2^2, x_2x_1^2, x_1^4\}\) is not complete. It can be completed to the set of monomials \(\widetilde{\mathcal {U}''}:=\mathcal {U}''\cup \{x_3x_2^2, x_3x_2x_1^2, x_3x_1^4\}\). The PDE system \((\Sigma '')\) is seen to be equivalent to the system

$$ (\Gamma '')\quad {\left\{ \begin{array}{ll} p_{33}=&{}x_2p_{11}, \\ p_{322}=&{}0, \\ p_{31111}=&{}0, \\ p_{22}=&{}0, \\ p_{211}=&{}0, \\ p_{1111}=&{}0. \end{array}\right. } $$

Note that \(p_{322}=\partial _{x_3}p_{22}\) and \(p_{31111}=\partial _{x_3}p_{1111}\). All the compatibility conditions are trivial identities, and by Theorem 5.4.7 we deduce that the PDE system \((\Sigma '')\), obtained from the initial PDE system \((\Sigma )\) by adding compatibility conditions, is completely integrable.
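
The monomial completions used in this example can be checked mechanically. The following Python sketch (our own illustration of the Janet division and of the completion by non-multiplicative prolongations; the function names are assumptions) adjoins to a set of monomials every product by a non-multiplicative variable that has no involutive divisor, until no such product remains; applied to \(\mathcal {U}\), \(\mathcal {U}'\), and \(\mathcal {U}''\) it returns exactly \(\widetilde{\mathcal {U}}\), \(\widetilde{\mathcal {U}'}\), and \(\widetilde{\mathcal {U}''}\).

```python
def janet_mult(U, n):
    """Janet-multiplicative variable indices (1-based) of each monomial of U."""
    return {u: {i + 1 for i in range(n)
                if u[i] == max(v[i] for v in U if v[i+1:] == u[i+1:])}
            for u in U}

def has_involutive_divisor(w, U, mult):
    """True if some u in U divides w with a quotient that involves only
    Janet-multiplicative variables of u."""
    return any(all(w[i] >= u[i] for i in range(len(w))) and
               all(w[i] == u[i] for i in range(len(w)) if i + 1 not in mult[u])
               for u in U)

def janet_complete(U, n):
    """Complete U by adjoining non-multiplicative prolongations."""
    U = set(U)
    while True:
        mult, new = janet_mult(U, n), set()
        for u in U:
            for i in range(n):
                if i + 1 not in mult[u]:
                    w = tuple(u[k] + (1 if k == i else 0) for k in range(n))
                    if not has_involutive_divisor(w, U, mult):
                        new.add(w)
        if not new:
            return U
        U |= new

# Exponent tuples (a1, a2, a3) in the variables x1, x2, x3:
U = {(0, 0, 2), (0, 2, 0)}                       # x3^2, x2^2
print(sorted(janet_complete(U, 3)))              # adds x3*x2^2
print(sorted(janet_complete(U | {(2, 1, 0)}, 3)))             # adds x3*x2^2, x3*x2*x1^2
print(sorted(janet_complete(U | {(2, 1, 0), (4, 0, 0)}, 3)))  # adds x3*x1^4 as well
```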

5.6.11 Remark.

Let us mention that using a procedure similar to the one presented in this section, Janet in [51, Sect. 48] gave a constructive proof of a result obtained previously by Tresse [88] asserting that an infinite linear PDE system can be reduced to a finite linear PDE system.

5.7 Algebra, Geometry, and PDEs

The notion of ideal first appeared in the work of R. Dedekind. It also appeared in a seminal paper [43] of Hilbert, where he developed the theory of ideals in polynomial rings. In particular, he proved Noetherianity results, such as the Noetherianity of the ring of polynomials over a field, a result now known as Hilbert’s basis theorem. In his works on PDE systems [48,49,50], M. Janet used the notion of an ideal generated by homogeneous polynomials under the terminology of module of forms, which he defined as follows. He called a homogeneous polynomial in several variables a form, and he defined a module of forms as an algebraic system satisfying the two following conditions:

(i):

if a form f belongs to the system, then the form hf belongs to the system for every form h,

(ii):

if f and g are two forms of the same order in the system, then the form \(f+g\) belongs to the system.

Finally, in [51, Sect. 51], M. Janet recalled Hilbert’s basis theorem.

5.7.1 Characteristic Functions of Homogeneous Ideals.

In [51, Sect. 51], M. Janet recalled Hilbert’s description of the problem of finding the number of independent conditions so that a homogeneous polynomial of order p belongs to a given homogeneous ideal. These independent conditions correspond to the independent linear forms that annihilate all homogeneous polynomials of degree p in the ideal. Janet recalled from [43] that this number of independent conditions is expressed as a polynomial in p for sufficiently large p.

Let I be a homogeneous ideal of \(\mathbb {K}[x_1,\ldots , x_n]\) generated by polynomials \(f_1, \ldots , f_k\). Given a monomial order on \(\mathcal {M}(x_1,\ldots ,x_n)\), we can assume that all the leading coefficients are equal to 1. For any \(p\geqslant 0\), consider the homogeneous component of degree p so that \(I=\bigoplus _{p} I_p\), with

$$ I_p:=I \cap \mathbb {K}[x_1,\ldots , x_n]_p. $$

Recall that

$$ \dim I_p \leqslant \dim \big ( \mathbb {K}[x_1,\ldots , x_n]_p \big )=\Gamma _n^p. $$

The number of independent conditions such that a homogeneous polynomial of order p belongs to the ideal I is given by the difference

$$ \chi (p) := \Gamma _n^p - \dim I_p. $$

This is the number of monomials of degree p that cannot be divided by the monomials \({{\,\mathrm{lm}\,}}(f_1), \ldots , {{\,\mathrm{lm}\,}}(f_k)\). The function \(\chi (p)\) corresponds to a coefficient of the Hilbert series of the ideal I and is called the characteristic function of the ideal I, or postulation by Janet in [51, Sect. 52]. We refer the reader to [18] for the definition of Hilbert series of polynomial rings and their applications. In Sect. 5.8, we will show that the function \(\chi (p)\) is polynomial for sufficiently large p. Finally, note that the set of monomials that cannot be divided by the monomials \({{\,\mathrm{lm}\,}}(f_1), \ldots , {{\,\mathrm{lm}\,}}(f_k)\) consists of a finite number of classes of complementary monomials.

5.7.2 Geometric Remark.

M. Janet made the following geometric observation about the characteristic function. Suppose that p is sufficiently large so that the function \(\chi (p)\) is polynomial. Let \(\lambda -1\) be the degree of the leading term of the polynomial \(\chi (p)\). Consider the projective variety V(I) defined by

$$ V(I) = \{a \in \mathbb {P}^{n-1} \; |\; f(a) = 0 \; \text {for all }f \text {in }I\,\}. $$

The integer \(\mu ={{\,\mathrm{lc}\,}}(\chi (p))(\lambda - 1) !\) corresponds to the degree of the variety V(I) [43]. If \(\chi (p)=0\), then the variety V(I) is empty; otherwise, V(I) is a subvariety of \(\mathbb {P}^{n-1}\) of dimension \(\lambda - 1\).

5.7.3 Example [51, Sect. 53].

Consider the monomial ideal I of \(\mathbb {K}[x_1,x_2,x_3]\) generated by \(x_1^2\), \(x_1x_2\), and \(x_2^2\). The characteristic function \(\chi (p)\) of the ideal I is constant and equal to 3. The unique point that annihilates the ideal I is (0, 0, 1), with multiplicity 3. This result is compatible with the fact that the zeros of the ideal J generated by the polynomials

$$ (x_1-ax_3)(x_1-bx_3), \qquad (x_1-ax_3)(x_2-cx_3), \qquad (x_2-cx_3)(x_2-dx_3), $$

consist of the three points

$$ (a,c,1), \qquad (a,d,1), \qquad (b,c,1). $$
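
For a monomial ideal, the characteristic function is a simple count. A small Python sketch (our own check) counts the monomials of degree p in three variables that are divisible by none of \(x_1^2\), \(x_1x_2\), \(x_2^2\), and confirms that \(\chi (p)=3\) for every \(p\geqslant 1\).

```python
from itertools import combinations_with_replacement

def chi(p, gens, n):
    """Number of degree-p monomials in n variables divisible by no generator."""
    total = 0
    for combo in combinations_with_replacement(range(n), p):
        expo = [combo.count(i) for i in range(n)]
        if not any(all(e >= g for e, g in zip(expo, gen)) for gen in gens):
            total += 1
    return total

gens = [(2, 0, 0), (1, 1, 0), (0, 2, 0)]       # x1^2, x1*x2, x2^2
print([chi(p, gens, 3) for p in range(1, 7)])  # [3, 3, 3, 3, 3, 3]
```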

5.7.4 The Ideal – PDE Dictionary.

Let I be a homogeneous ideal of \(\mathbb {K}[x_1,\ldots , x_n]\) generated by a set \(F=\{f_1,\ldots , f_k\}\) of polynomials. For a fixed monomial order on \(\mathcal {M}(x_1,\ldots , x_n)\), we set \(\mathcal {U}={{\,\mathrm{lm}\,}}(F)\). Consider the ring isomorphism \(\Phi \) from \(\mathbb {K}[x_1,\ldots , x_n]\) to \(\mathbb {K}[\frac{\partial }{\partial x_1}, \ldots , \frac{\partial }{\partial x_n}]\) given in Proposition 3.1.2. To each polynomial f in I, we associate a PDE \(\Phi (f)\varphi =0\). In this way, the ideal I defines a PDE system \((\Sigma (I))\). Let \(\lambda \) and \(\mu \) be the integers associated to the characteristic function \(\chi (p)\) as defined in 5.7.2. The maximal number of arguments of the arbitrary analytic functions used to define the initial conditions

$$ \{\, C_\beta \;|\; x^\beta \in \mathcal {U}^{\complement }\,\} $$

of the PDE system \((\Sigma (I))\), as defined in (5.7), corresponds to \(\lambda \), explicitly,

$$ \lambda = \max _{v\in \mathcal {U}^{\complement }} |\mathrm {^{\complement }Mult}_{\mathcal {J}}^{\mathcal {U}^{\complement }}(v)|, $$

where \(\mathcal {U}^{\complement }\) denotes the set of complementary monomials of \(\mathcal {U}\). Moreover, the number of arbitrary analytic functions with \(\lambda \) arguments in the initial conditions \(\{\, C_\beta \;|\; x^\beta \in \mathcal {U}^{\complement }\,\}\) is equal to \(\mu \), that is

$$ \mu = \big |\,\{\, v \in \mathcal {U}^{\complement } \;\text {such that}\; |\mathrm {^{\complement }Mult}_{\mathcal {J}}^{\mathcal {U}^{\complement }}(v)| = \lambda \,\}\,\big |. $$

Conversely, let \((\Sigma )\) be a PDE system with one unknown function \(\varphi \) of the independent variables \(x_1,\ldots , x_n\). Denote by \(\mathrm {ldo}(\Sigma )\) the set of differential operators associated to the principal derivatives of PDE in \((\Sigma )\), with respect to Janet’s order on derivatives defined in Sect. 5.1.3. The isomorphism \(\Phi \) associates to any monomial differential operator \(\frac{\partial ^{\vert \alpha \vert }\;}{\partial x_1^{\alpha _1}\cdots \partial x_n^{\alpha _n}}\) in \(\mathrm {ldo}(\Sigma )\) a monomial \(x_1^{\alpha _1}\cdots x_n^{\alpha _n}\) in \(\mathcal {M}(x_1,\ldots ,x_n)\).

Denote by \(I(\Sigma )\) the ideal of \(\mathbb {K}[x_1,\ldots ,x_n]\) generated by \(\Phi ^{-1}(\mathrm {ldo}(\Sigma ))\). Note that, by construction, the ideal \(I(\Sigma )\) is monomial and for any monomial u in \(I(\Sigma )\) the derivative \(\Phi (u)\varphi \) is a principal derivative of the PDE system \((\Sigma )\) as defined in Sect. 5.3.1. In [51, Sect. 54], M. Janet called characteristic form any element of the ideal \(I(\Sigma )\).

In this way, M. Janet concluded that the degree of generality of the solutions of a linear PDE system with one unknown function is described by the leading term of the characteristic function of the ideal of characteristic forms defined in Sect. 5.7.1.

5.7.5 The Particular Case of First-Order Systems.

Consider a completely integrable first-order linear PDE system \((\Sigma )\). The number \(\lambda \), defined in Sect. 5.7.4, which is equal to the maximal number of arguments of the arbitrary functions used to define the initial conditions of the system \((\Sigma )\), is also equal in this case to the cardinality of the set \(\mathcal {U}^{\complement }\) of complementary monomials of the set of monomials \(\mathcal {U}=\Phi ^{-1}(\mathrm {ldo}(\Sigma ))\).

5.8 Involutive Systems

In this subsection, we recall the algebraic formulation of involutive systems as introduced by M. Janet. This formulation first appeared in his works [48] and [49]. Note, however, that this notion goes back to the work of É. Cartan in [13].

5.8.1 Characters and Derived Systems.

Let I be a proper ideal of \(\mathbb {K}[x_1,\ldots , x_n]\) generated by homogeneous polynomials. M. Janet introduced the characters of the homogeneous component \(I_p\) as the nonnegative integers \(\sigma _1,\sigma _2,\ldots , \sigma _n\) defined inductively by the formula

$$ \dim \left( I_p + \left( \sum _{i=1}^h \mathbb {K}[x_1,\ldots , x_n]_{p-1}x_i\right) \right) =\dim (I_p) +\sigma _1+\cdots +\sigma _h, \qquad 1\leqslant h\leqslant n. $$

Note that the sum \(\sigma _1+\sigma _2+\cdots +\sigma _n\) corresponds to the codimension of \(I_p\) in \(\mathbb {K}[x_1,\ldots , x_n]_p\).
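
For a monomial ideal, these dimensions are monomial counts, which gives a quick way to experiment with the characters. The following Python sketch (our own illustration, valid only for monomial ideals and for the given coordinates) computes \(\sigma _1,\ldots ,\sigma _n\) from the defining formula and checks the codimension remark above.

```python
from itertools import combinations_with_replacement
from math import comb

def monomials(n, p):
    for combo in combinations_with_replacement(range(n), p):
        yield tuple(combo.count(i) for i in range(n))

def characters(gens, n, p):
    """Characters sigma_1, ..., sigma_n of I_p, for the monomial ideal
    generated by gens (a list of exponent tuples)."""
    in_ideal = lambda m: any(all(e >= g for e, g in zip(m, gen)) for gen in gens)
    dim_Ip = sum(1 for m in monomials(n, p) if in_ideal(m))
    sigmas, prev = [], dim_Ip
    for h in range(1, n + 1):
        # dimension of I_p + sum_{i <= h} K[x]_{p-1} x_i, counted on monomials
        dim_h = sum(1 for m in monomials(n, p)
                    if in_ideal(m) or any(m[i] > 0 for i in range(h)))
        sigmas.append(dim_h - prev)
        prev = dim_h
    return sigmas, dim_Ip

gens, p = [(2, 0, 0), (0, 2, 0), (0, 0, 2)], 2       # x1^2, x2^2, x3^2
sigmas, dim_Ip = characters(gens, 3, p)
print(sigmas, sum(sigmas) == comb(3 + p - 1, p) - dim_Ip)   # [2, 1, 0] True
```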

Given a positive integer \(\lambda \), we set

$$ J_{p+\lambda }=\mathbb {K}[x_1,\ldots , x_n]_\lambda I_p. $$

We define the nonnegative integers \(\sigma _1^{(\lambda )},\sigma _2^{(\lambda )},\ldots , \sigma _n^{(\lambda )}\) by the relations

$$\begin{aligned}&\dim \left( J_{p+\lambda } + \left( \sum _{i=1}^h \mathbb {K}[x_1,\ldots , x_n]_{p+\lambda -1}x_i\right) \right) \\&\qquad =\dim (J_{p+\lambda })+\sigma _1^{(\lambda )}+\cdots +\sigma _h^{(\lambda )}, \qquad 1\leqslant h\leqslant n. \end{aligned}$$

For \(\lambda =1\), M. Janet called \(J_{p+1}\) the derived system of \(I_p\). Let us mention some properties of these numbers proved by M. Janet.

5.8.2 Lemma

We set \(\sigma _h'=\sigma _h^{(1)}\) and \(\sigma _h''=\sigma _h^{(2)}\) for \(1\leqslant h\leqslant n\). Then,

(i):

\(\sigma _1'+\sigma _2'+\cdots +\sigma _n' \leqslant \sigma _1+2\sigma _2+\cdots +n \sigma _n\).

(ii):

If \( \sigma _1'+\sigma _2'+\cdots +\sigma _n' = \sigma _1+2\sigma _2+\cdots +n \sigma _n\), the two following relations hold:

(a):

\(\sigma _1''+\sigma _2''+\cdots +\sigma _n'' = \sigma _1'+2\sigma _2'+\cdots +n \sigma _n'\).

(b):

\(\sigma _h'=\sigma _h+\sigma _{h+1}+\cdots +\sigma _n\).

We refer the reader to [51] for a proof of the relations of Lemma 5.8.2.

5.8.3 Involutive Systems.

The homogeneous component \(I_p\) is said to be in involution when

$$ \sigma _1'+\sigma _2'+\cdots +\sigma _n' = \sigma _1+2\sigma _2+\cdots +n \sigma _n. $$

Following property (ii)(a) of Lemma 5.8.2, if the component \(I_p\) is in involution, then the component \(I_{p+k}\) is in involution for all \(k\geqslant 0\).

5.8.4 Proposition

[51, Sect. 56 & Sect. 57] The characters of a homogeneous component \(I_p\) satisfy the two following properties:

(i):

\(\sigma _1\geqslant \sigma _2\geqslant \cdots \geqslant \sigma _n\).

(ii):

if \(I_p\ne \{0\}\), then \(\sigma _n=0\).

5.8.5 Polynomiality of Characteristic Functions.

Suppose that the homogeneous component \(I_p\) is in involution. We claim that the characteristic function \(\chi (P)\) defined in Sect. 5.7.1 is polynomial for \(P\geqslant p\). Indeed, using Lemma 5.8.2, we show by induction that for any \(1\leqslant h< n\) and any positive integer \(\lambda \), it holds that

$$ \sigma _h^{(\lambda )}=\sum _{k=0}^{n-h-1} \left( {\begin{array}{c}\lambda +k-1\\ k\end{array}}\right) \sigma _{h+k}. $$

The codimension of \(I_{p+\lambda }\) in \(\mathbb {K}[x_1,\ldots , x_n]_{p+\lambda }\) is given by

$$\begin{aligned} \sum _{h=1}^{n-1}\sigma _h^{(\lambda )}=&\,\sum _{h=1}^{n-1} \sum _{k=0}^{n-h-1}\left( {\begin{array}{c}\lambda +k-1\\ k\end{array}}\right) \sigma _{h+k} = \sum _{i=1}^{n-1} \left( \sum _{k=0}^{i-1}\left( {\begin{array}{c}\lambda +k-1\\ k\end{array}}\right) \right) \sigma _i \\ =&\,\sum _{i=1}^{n-1} \left( \sum _{k=0}^{i-1}\left( {\begin{array}{c}P-p+k-1\\ k\end{array}}\right) \right) \sigma _i = \sum _{i=1}^{n-1} \left( {\begin{array}{c}P-p+i-1\\ i-1\end{array}}\right) \sigma _i. \end{aligned}$$

This proves the polynomiality of the characteristic function of the ideal I for sufficiently large p.
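
The binomial resummation used in the last step can be checked numerically. The following Python snippet (our own sanity check) verifies, for random data, that exchanging the two summations and applying the hockey-stick identity gives the stated coefficient of \(\sigma _i\).

```python
from math import comb
from random import randint

for _ in range(100):
    n, lam = randint(2, 8), randint(1, 8)
    sigma = [randint(0, 5) for _ in range(n)]          # sigma[i-1] plays sigma_i
    lhs = sum(comb(lam + k - 1, k) * sigma[h + k - 1]
              for h in range(1, n) for k in range(n - h))
    rhs = sum(comb(lam + i - 1, i - 1) * sigma[i - 1] for i in range(1, n))
    assert lhs == rhs
print("resummation identity checked")
```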

5.9 Concluding Remarks

Recall that the so-called Cartan–Kähler theory is concerned with Pfaffian systems on a differentiable (or analytic) manifold, and its aim is to determine whether a given system is prolongable to a completely integrable system or is an incompatible system. The Cartan–Kähler method relies on a geometrical argument, which is to construct integral submanifolds of the system inductively. Here, a step of the induction is to find an integral submanifold of dimension \(i+1\) containing the integral submanifold of dimension i, and the theory does not allow one to decide whether such a step can be achieved or not.

Janet’s method, even though it works only locally, is completely algebraic and algorithmic, so that it partially fills in the steps where the Cartan–Kähler theory does not work.

According to these works, there are two seemingly different notions of involutivity, one due to G. Frobenius, G. Darboux, and É. Cartan, and the other due to M. Janet. The fact is that, at each step of the induction in the Cartan–Kähler theory, one has to study a system of PDEs. The system is called in involution (compare the notions of Sect. 2.2.6 and Sect. 5.8) if it can be written in a canonical form, as defined in Sect. 5.5.2, after a change of coordinates if necessary. Following Janet’s algebraic definition of involutivity, several involutive methods were developed for polynomial and differential systems [72, 86]. In these approaches, a differential system is involutive when its non-multiplicative derivatives are consequences of multiplicative derivatives. In [25, 27], Gerdt gave an algebraic characterization of involutivity for polynomial systems. Gerdt’s approach is presented in the next section.

6 Polynomial Involutive Bases

In this section, we present the algebraic definition of involutivity for polynomial systems given by  Gerdt in [25, 27]. In particular, we relate the notion of involutive basis for a polynomial ideal to the notion of Gröbner basis.

6.1 Involutive Reduction on Polynomials

6.1.1 Involutive Basis.

Recall that a monomial ideal I of \(\mathbb {K}[x_1,\ldots ,x_n]\) is an ideal generated by monomials. An involutive basis of the ideal I with respect to an involutive division \(\mathcal {I}\) is an involutive set of monomials \(\mathcal {U}\) that generates I. By the Dickson Lemma [17], every monomial ideal I admits a finite set of generators. When the involutive division \(\mathcal {I}\) is Noetherian as defined in Sect. 4.2.3, this generating set admits a finite \(\mathcal {I}\)-completion that forms an involutive basis of the ideal I. As a consequence, we deduce the following result.

6.1.2 Proposition

Let \(\mathcal {I}\) be a Noetherian involutive division on \(\mathcal {M}(x_1,\ldots ,x_n)\). Every monomial ideal of \(\mathbb {K}[x_1,\ldots ,x_n]\) admits an \(\mathcal {I}\)-involutive basis.

The objective of this section is to show how to extend this result to polynomial ideals with respect to a monomial order. In the remainder of this subsection, we assume that a monomial order \(\preccurlyeq \) is fixed on \(\mathcal {M}(x_1,\ldots ,x_n)\).

6.1.3 Multiplicative Variables for a Polynomial.

Let \(\mathcal {I}\) be an involutive division on \(\mathcal {M}(x_1,\ldots ,x_n)\). Let F be a set of polynomials from \(\mathbb {K}[x_1,\ldots ,x_n]\), and let f be a polynomial in F. We define the set of \(\mathcal {I}\)-multiplicative (resp. \(\mathcal {I}\)-non-multiplicative) variables of the polynomial f with respect to F and the monomial order \(\preccurlyeq \) by setting

$$ \mathrm {Mult}_{\mathcal {I},\preccurlyeq }^F(f) = \mathrm {Mult}_\mathcal {I}^{{{\,\mathrm{lm}\,}}_{\preccurlyeq }(F)}({{\,\mathrm{lm}\,}}_{\preccurlyeq }(f)) \quad (\,\text {resp.}\;\; \mathrm {NMult}_{\mathcal {I},\preccurlyeq }^F(f) = \mathrm {NMult}_\mathcal {I}^{{{\,\mathrm{lm}\,}}_{\preccurlyeq }(F)}({{\,\mathrm{lm}\,}}_{\preccurlyeq }(f))\,). $$

Note that the \(\mathcal {I}\)-multiplicative variables depend on the monomial order \(\preccurlyeq \) used to determine the leading monomials of the polynomials of F.

6.1.4 Polynomial Reduction.

Polynomial division can be described as a rewriting operation as follows. Given polynomials f and g in \(\mathbb {K}[x_1,\ldots ,x_n]\), we say that f is reducible modulo g with respect to \(\preccurlyeq \), if there is a term \(\lambda u\) in f whose monomial u is divisible by \({{\,\mathrm{lm}\,}}_\preccurlyeq (g)\) for the usual monomial division. In this case, we write the reduction \(f\rightarrow _{g}h\), where

$$ h = f - \frac{\lambda u}{\mathrm{lt}_\preccurlyeq (g)}g. $$

For a set G of polynomials of \(\mathbb {K}[x_1,\ldots ,x_n]\), we define a rewriting system corresponding to the division modulo G by considering the reduction relation \(\rightarrow _G\) defined by

$$ \rightarrow _G \;=\; \bigcup _{g\in G}\rightarrow _g. $$

We will denote by \(\overset{*}{\rightarrow }_G\) the reflexive and transitive closure of the relation \(\rightarrow _G\).

6.1.5 Involutive Reduction.

In the same way, we define a notion of reduction with respect to an involutive division \(\mathcal {I}\) on \(\mathcal {M}(x_1,\ldots ,x_n)\). Let G be a set of polynomials of \(\mathbb {K}[x_1,\ldots ,x_n]\) and let g be a polynomial in G. A polynomial f in \(\mathbb {K}[x_1,\ldots ,x_n]\) is said to be \(\mathcal {I}\)-reducible modulo g with respect to the monomial order \(\preccurlyeq \), if there is a term \(\lambda u\) of f, with \(\lambda \in \mathbb {K}-\{0\}\) and \(u\in \mathcal {M}(x_1,\ldots ,x_n)\), such that

$$ u={{\,\mathrm{lm}\,}}_\preccurlyeq (g)v \quad \text {and}\quad v\in \mathcal {M}(\mathrm {Mult}_\mathcal {I}^{{{\,\mathrm{lm}\,}}_\preccurlyeq (G)}({{\,\mathrm{lm}\,}}_\preccurlyeq (g))). $$

Such an \(\mathcal {I}\)-reduction is denoted by \(f\rightarrow _{g}^{\mathcal {I}}h\), where

$$ h = f - \frac{\lambda }{{{\,\mathrm{lc}\,}}_\preccurlyeq (g)}gv = f - \frac{\lambda u}{\mathrm{lt}_\preccurlyeq (g)}g. $$

6.1.6 Involutive Normal Forms.

Let G be a set of polynomials of \(\mathbb {K}[x_1,\ldots ,x_n]\). A polynomial f is said to be \(\mathcal {I}\)-reducible modulo G with respect to the monomial order \(\preccurlyeq \) if there exists a polynomial g in G such that f is \(\mathcal {I}\)-reducible modulo g. We will denote by \(\rightarrow _{G}^{\mathcal {I}}\) this reduction relation, defined by

$$ \rightarrow _{G}^{\mathcal {I}} \;=\; \bigcup _{g\in G}\rightarrow _{g}^{\mathcal {I}}. $$

The polynomial f is said to be \(\mathcal {I}\)-irreducible modulo G if it is not \(\mathcal {I}\)-reducible modulo G. An \(\mathcal {I}\)-normal form of a polynomial f is an \(\mathcal {I}\)-irreducible polynomial h such that there is a sequence of reductions from f to h:

$$ f \rightarrow _{G}^{\mathcal {I}} f_1 \rightarrow _{G}^{\mathcal {I}} f_2 \rightarrow _{G}^{\mathcal {I}} \cdots \rightarrow _{G}^{\mathcal {I}} h. $$

The procedure \(\mathbf{InvReduction}_{\mathcal {I},\preccurlyeq }(f,G)\) computes a normal form of f modulo G with respect to the division \(\mathcal {I}\). The proofs of its correctness and termination can be carried out as in the case of the division procedure for the classical polynomial division; see for instance [3, Proposition 5.22].

(The procedure \(\mathbf{InvReduction}_{\mathcal {I},\preccurlyeq }\) is given as an algorithm figure in the original.)
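
Here is a minimal Python sketch of such an involutive reduction (our own illustration, using the Janet division and the deglex order; it is not Gerdt's implementation, and the helper names are assumptions). Polynomials are dictionaries mapping exponent tuples to coefficients, and a term is rewritten only when its quotient by the chosen leading monomial involves multiplicative variables only.

```python
def deglex_key(m):
    return (sum(m), m)                         # total degree, then lexicographic

def lm(f):
    """Leading monomial of a nonzero polynomial {exponent tuple: coefficient}."""
    return max(f, key=deglex_key)

def janet_mult(U, n):
    """Janet-multiplicative variable indices (0-based) of each monomial of U."""
    return {u: {i for i in range(n)
                if u[i] == max(v[i] for v in U if v[i+1:] == u[i+1:])}
            for u in U}

def inv_reduce(f, G, n):
    """A Janet-normal form of f modulo the list of polynomials G."""
    f, lead = dict(f), {lm(g): g for g in G}
    mult = janet_mult(list(lead), n)
    reduced = True
    while reduced and f:
        reduced = False
        for u in sorted(f, key=deglex_key, reverse=True):
            for w, g in lead.items():
                q = tuple(u[i] - w[i] for i in range(n))
                if all(e >= 0 for e in q) and \
                   all(q[i] == 0 for i in range(n) if i not in mult[w]):
                    c = f[u] / g[w]            # coefficient of u divided by lc(g)
                    for m, a in g.items():     # f := f - c * x^q * g
                        mm = tuple(m[i] + q[i] for i in range(n))
                        f[mm] = f.get(mm, 0) - c * a
                        if f[mm] == 0:
                            del f[mm]
                    reduced = True
                    break
            if reduced:
                break
    return f

# G = {x1*x2 - x3, x2^2}; the term x1^2*x2 is rewritten by the first polynomial
# (x1 is Janet-multiplicative for its leading monomial), giving x2 + x1*x3.
g1, g2 = {(1, 1, 0): 1, (0, 0, 1): -1}, {(0, 2, 0): 1}
print(inv_reduce({(2, 1, 0): 1, (0, 1, 0): 1}, [g1, g2], 3))
```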

6.1.7 Remarks.

Note that the involutive normal form of a polynomial f is not unique; in general, it depends on the order in which the reductions are applied. Suppose that for each polynomial f we have an \(\mathcal {I}\)-normal form with respect to the monomial order \(\preccurlyeq \), denoted by \(\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(f)\). Denote by \(\mathrm {nf}_{\preccurlyeq }^G(f)\) a normal form of a polynomial f obtained by the classical division procedure. In general, the equality \(\mathrm {nf}_\preccurlyeq ^G(f)=\mathrm {nf}^G_{\mathcal {I},\preccurlyeq }(f)\) does not hold. For example, let \(G=\{x_1,x_2\}\) and consider the Thomas division \(\mathcal {T}\) defined in Sect. 4.3.1. Then \(\mathrm {nf}_\preccurlyeq ^G(x_1x_2)=0\), while \(\mathrm {nf}^G_{\mathcal {T},\preccurlyeq }(x_1x_2)=x_1x_2\) because the monomial \(x_1x_2\) is \(\mathcal {T}\)-irreducible modulo G.

6.1.8 Autoreduction.

Recall from Sect. 4.1.4 that a set of monomials \(\mathcal {U}\) is \(\mathcal {I}\)-autoreduced with respect to an involutive division \(\mathcal {I}\) if it does not contain a monomial \(\mathcal {I}\)-divisible by another monomial of \(\mathcal {U}\). In that case, every monomial in \(\mathcal {M}(x_1,\ldots ,x_n)\) admits at most one \(\mathcal {I}\)-involutive divisor in \(\mathcal {U}\).

A set G of polynomials of \(\mathbb {K}[x_1,\ldots ,x_n]\) is said to be \(\mathcal {I}\)-autoreduced with respect to the monomial order \(\preccurlyeq \), if it satisfies the two following conditions:

(i):

(left \(\mathcal {I}\)-autoreducibility) the set of leading monomials \({{\,\mathrm{lm}\,}}_\preccurlyeq (G)\) is \(\mathcal {I}\)-autoreduced,

(ii):

(right \(\mathcal {I}\)-autoreducibility) for any g in G, there is no term \(\lambda u \ne \mathrm{lt}_\preccurlyeq (g)\) of g, with \(\lambda \ne 0\) and \(u\in \mathrm {cone}_\mathcal {I}({{\,\mathrm{lm}\,}}_\preccurlyeq (G))\).

Note that condition (i) (resp. (ii)) corresponds to the \(\mathcal {J}\)-left-reduced (resp. \(\mathcal {J}\)-right-reduced) property given in Sect. 5.5.1. Any finite set G of polynomials of \(\mathbb {K}[x_1,\ldots ,x_n]\) can be transformed by Procedure 9 into a finite \(\mathcal {I}\)-autoreduced set that generates the same ideal. The proofs of the correctness and termination are immediate consequences of the properties of the involutive division.

(Procedure 9, \(\mathbf{Autoreduce}_{\mathcal {I},\preccurlyeq }\), is given as an algorithm figure in the original.)

6.1.9 Proposition

[27, Theorem 5.4] Let G be an \(\mathcal {I}\)-autoreduced set of polynomials of \(\mathbb {K}[x_1,\ldots ,x_n]\) and f be a polynomial in \(\mathbb {K}[x_1,\ldots ,x_n]\). Then \(\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(f)=0\) if and only if the polynomial f can be written in the form

$$ f = \sum _{i,j} \beta _{i,j}g_iv_{i,j}, $$

where \(g_i\in G\), \(\beta _{i,j}\in \mathbb {K}\) and \(v_{i,j}\in \mathcal {M}(\mathrm {Mult}_\mathcal {I}^{{{\,\mathrm{lm}\,}}_\preccurlyeq (G)}({{\,\mathrm{lm}\,}}_\preccurlyeq (g_i)))\), with \({{\,\mathrm{lm}\,}}_\preccurlyeq (v_{i,j})\ne {{\,\mathrm{lm}\,}}_\preccurlyeq (v_{i,k})\) if \(j\ne k\).

Proof

Suppose that \(\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(f)=0\). Then there exists a sequence of involutive reductions modulo G,

$$ f=f_0 \rightarrow _{g_{1}}^{\mathcal {I}} f_1 \rightarrow _{g_{2}}^{\mathcal {I}} f_2 \rightarrow _{g_{3}}^{\mathcal {I}} \cdots \rightarrow _{g_{k}}^{\mathcal {I}} f_k=0, $$

terminating at 0. For any \(1\leqslant i \leqslant k\), we have

$$ f_{i} = f_{i-1} - \frac{\lambda _{i,j}}{{{\,\mathrm{lc}\,}}_\preccurlyeq (g_{i})}g_{i}v_{i,j}, $$

with \(v_{i,j}\) in \(\mathcal {M}(\mathrm {Mult}_\mathcal {I}^{{{\,\mathrm{lm}\,}}_\preccurlyeq (G)}({{\,\mathrm{lm}\,}}_\preccurlyeq (g_i)))\). Summing these relations over i, and using \(f_0=f\) and \(f_k=0\), gives a decomposition of f of the stated form.

Conversely, suppose that f can be written in the indicated form. Then the leading monomial \({{\,\mathrm{lm}\,}}_\preccurlyeq (f)\) admits an involutive \(\mathcal {I}\)-divisor in \({{\,\mathrm{lm}\,}}_\preccurlyeq (G)\). Indeed, the leading monomial of the decomposition of f has the form

$$ {{\,\mathrm{lm}\,}}_\preccurlyeq \left( \sum _{i,j} g_iv_{i,j}\right) ={{\,\mathrm{lm}\,}}_\preccurlyeq (g_{i_0})v_{i_0,j_0}. $$

The monomial \({{\,\mathrm{lm}\,}}_\preccurlyeq (g_{i_0})\) is an involutive divisor of \({{\,\mathrm{lm}\,}}_\preccurlyeq (f)\), and by the autoreduction hypothesis, such a divisor is unique. Hence, the monomial \({{\,\mathrm{lm}\,}}_\preccurlyeq (g_{i_0})v_{i_0,j_0}\) cannot occur among the other monomials of the form \({{\,\mathrm{lm}\,}}_\preccurlyeq (g_i)v_{i,j}\). We apply the corresponding involutive reduction by \(g_{i_0}\) to the decomposition and repeat the argument on the remainder. In this way, we define a sequence of reductions ending at 0. This proves that \(\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(f)=0\).    \(\square \)

6.1.10 Uniqueness and Additivity of Involutive Normal Forms.

From the decomposition given in Proposition 6.1.9, we deduce two important properties of involutive normal forms. Let G be an \(\mathcal {I}\)-autoreduced set of polynomials of \(\mathbb {K}[x_1,\ldots ,x_n]\) and f be a polynomial. Suppose that \(h_1=\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(f)\) and \(h_2=\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(f)\) are two involutive normal forms of f. From the involutive reduction procedures that compute these two normal forms, we deduce two decompositions

$$ h_1 = f - \sum _{i,j} \beta _{i,j}g_i v_{i,j}, \qquad h_2 = f - \sum _{i,j} \beta '_{i,j}g_i v'_{i,j}. $$

As a consequence, \(h_1-h_2\) admits a decomposition as in Proposition 6.1.9, hence \(\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(h_1-h_2)=0\). Since every monomial of \(h_1-h_2\) occurs in \(h_1\) or in \(h_2\), the polynomial \(h_1-h_2\) is itself in \(\mathcal {I}\)-normal form, and we deduce that \(h_1=h_2\). This shows the uniqueness of the involutive normal form modulo an autoreduced set of polynomials.

In the same manner, we prove the following additivity formula for any polynomials f and \(f'\):

$$ \mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(f+f')=\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(f)+\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(f'). $$

6.2 Involutive Bases

Fix a monomial order \(\preccurlyeq \) on \(\mathcal {M}(x_1,\ldots ,x_n)\).

6.2.1 Involutive Bases.

Let I be an ideal of \(\mathbb {K}[x_1,\ldots ,x_n]\). A subset G of polynomials in \(\mathbb {K}[x_1,\ldots ,x_n]\) is an \(\mathcal {I}\)-involutive basis of the ideal I with respect to the monomial order \(\preccurlyeq \) if G is \(\mathcal {I}\)-autoreduced and satisfies the following property:

$$ \forall g \in G,\; \forall u\in \mathcal {M}(x_1,\ldots ,x_n),\quad \mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(gu)=0. $$

In other words, for any polynomial g in G and any monomial u in \(\mathcal {M}(x_1,\ldots ,x_n)\), there is a sequence of involutive reductions

$$ gu \longrightarrow h_1 \longrightarrow \cdots \longrightarrow h_k = 0, $$

where each step is an involutive reduction by some \(g_i\) in G. In particular, we recover the notion of involutive sets of monomials given in Sect. 4.2.1. Indeed, if G is an \(\mathcal {I}\)-involutive basis, then \({{\,\mathrm{lm}\,}}_\preccurlyeq (G)\) is an \(\mathcal {I}\)-involutive set of monomials of \(\mathcal {M}(x_1,\ldots ,x_n)\).

6.2.2 Proposition

Let \(\mathcal {I}\) be an involutive division on \(\mathbb {K}[x_1,\ldots ,x_n]\) and G be an \(\mathcal {I}\)-involutive subset of \(\mathbb {K}[x_1,\ldots ,x_n]\). A polynomial of \(\mathbb {K}[x_1,\ldots ,x_n]\) is reducible modulo G by the classical division if and only if it is \(\mathcal {I}\)-reducible modulo G.

Proof

Let f be a polynomial in \(\mathbb {K}[x_1,\ldots ,x_n]\). By the definition of the involutive reduction, if f is \(\mathcal {I}\)-reducible modulo G, then it is reducible modulo G by the classical division. Conversely, suppose that f is reducible by a polynomial g in G. That is, there exists a term \(\lambda u\) in f, where \(\lambda \) is a nonzero scalar and u is a monomial in \(\mathcal {M}(x_1,\ldots ,x_n)\) such that \(u={{\,\mathrm{lm}\,}}_\preccurlyeq (g)v\), where \(v\in \mathcal {M}(x_1,\ldots ,x_n)\). The set G being involutive, we have \(\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(gv)=0\). By Proposition 6.1.9, the polynomial gv can be written in the form

$$ gv = \sum _{i,j} \beta _{i,j}g_i v_{i,j}, $$

where \(g_i\in G\), \(\beta _{i,j}\in \mathbb {K}\), and \(v_{i,j}\in \mathcal {M}(\mathrm {Mult}_\mathcal {I}^{{{\,\mathrm{lm}\,}}_\preccurlyeq (G)}({{\,\mathrm{lm}\,}}_\preccurlyeq (g_i)))\). In particular, this shows that the monomial u admits an \(\mathcal {I}\)-involutive divisor in \({{\,\mathrm{lm}\,}}_\preccurlyeq (G)\), hence f is \(\mathcal {I}\)-reducible modulo G.    \(\square \)

6.2.3 Uniqueness of Normal Forms.

Let us mention an important consequence of Proposition 6.2.2, given in [27, Theorem 7.1]. Let G be an \(\mathcal {I}\)-involutive subset of \(\mathbb {K}[x_1,\ldots ,x_n]\). For any reduction procedure that computes a normal form \(\mathrm {nf}_\preccurlyeq ^G(f)\) of a polynomial f in \(\mathbb {K}[x_1,\ldots ,x_n]\) and any involutive reduction procedure that computes an involutive normal form \(\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(f)\), the uniqueness of the involutive normal form together with Proposition 6.2.2 yields

$$ \mathrm {nf}_\preccurlyeq ^G(f) = \mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(f). $$

6.2.4 Example.

We set \(\mathcal {U}=\{x_1,x_2\}\). We consider the deglex order induced by \(x_2>x_1\) and the Thomas division \(\mathcal {T}\). The monomial \(x_1x_2\) is \(\mathcal {T}\)-irreducible modulo \(\mathcal {U}\). Hence, it does not admit zero as a \(\mathcal {T}\)-normal form, and the set \(\mathcal {U}\) cannot be a \(\mathcal {T}\)-involutive basis of the ideal generated by \(\mathcal {U}\). On the other hand, the set \(\{x_1,x_2,x_1x_2\}\) is a \(\mathcal {T}\)-involutive basis of the ideal generated by \(\mathcal {U}\).

We now consider the Janet division \(\mathcal {J}\). We have \(\deg _2(\mathcal {U})=1\), \([0]=\{x_1\}\) and \([1]=\{x_2\}\). The \(\mathcal {J}\)-multiplicative variables are given by the table

$$ \begin{array}{c|cc} u & \mathrm {Mult}_{\mathcal {J}}^{\mathcal {U}}(u) & \\ \hline x_1 & x_1 & \\ x_2 & x_1 & x_2 \\ \end{array} $$

It follows that the monomial \(x_1x_2\) is not \(\mathcal {J}\)-reducible by \(x_1\) modulo \(\mathcal {U}\). However, it is \(\mathcal {J}\)-reducible by \(x_2\). We conclude that the set \(\mathcal {U}\) forms a \(\mathcal {J}\)-involutive basis.
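The Janet multiplicative variables appearing in this example can be recomputed with a short Python sketch (an illustration; the function name and the encoding are ours). Monomials are encoded as exponent tuples, so \(x_1\) is (1, 0) and \(x_2\) is (0, 1), and the variable indices 0 and 1 stand for \(x_1\) and \(x_2\). The sketch follows the convention used in this example: the classes [d] are formed with respect to the degree of the last variable, which is multiplicative exactly on the class of maximal degree, and the remaining variables are handled recursively inside each class.

\begin{verbatim}
def janet_mult_vars(U, k=None):
    # Janet multiplicative variables of a finite set U of monomials
    # (exponent tuples), processing the variables from the last one down.
    U = list(U)
    if k is None:
        k = len(U[0])                 # number of variables still to process
    if k == 0:
        return {u: set() for u in U}
    i = k - 1                         # variable handled at this level
    dmax = max(u[i] for u in U)
    mult = {}
    for d in {u[i] for u in U}:
        cls = [u for u in U if u[i] == d]
        sub = janet_mult_vars(cls, k - 1)   # recurse inside the class [d]
        for u in cls:
            mult[u] = sub[u] | ({i} if d == dmax else set())
    return mult

# U = {x1, x2}: reproduces Mult(x1) = {x1} and Mult(x2) = {x1, x2}.
print(janet_mult_vars({(1, 0), (0, 1)}))
# Leading monomials {x1*x2, x2^2, x1^3} of the completion example below:
# reproduces Mult = {x1}, {x1, x2}, {x1} respectively.
print(janet_mult_vars({(1, 1), (0, 2), (3, 0)}))
\end{verbatim}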

As an immediate consequence of the definition of involutive bases, the involutive reduction procedure provides a decision method for the ideal membership problem, as stated by the following result.

6.2.5 Proposition

[27, Corollary 6.4] Let I be an ideal of \(\mathbb {K}[x_1,\ldots ,x_n]\), and G be an \(\mathcal {I}\)-involutive basis of I with respect to a monomial order \(\preccurlyeq \). For any polynomial f of \(\mathbb {K}[x_1,\ldots ,x_n]\), we have \(f \in I\) if and only if \(\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(f)=0\).

Proof

If \(\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(f)=0\), then the polynomial f can be written in the form given in Proposition 6.1.9. This shows that f belongs to the ideal I. Conversely, suppose that f belongs to I. Then it can be decomposed in the form

$$ f = \sum _i h_ig_i, $$

where \(h_i=\sum _j\lambda _{i,j}u_{i,j}\in \mathbb {K}[x_1,\ldots ,x_n]\). Since the set G is \(\mathcal {I}\)-involutive, we have \(\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(u_{i,j}g_i)=0\) for any monomial \(u_{i,j}\) and any \(g_i\) in G. By the linearity of the operator \(\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(-)\), we see that \(\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(f)=0\).    \(\square \)

6.2.6 Local Involutivity.

Gerdt and Blinkov introduced in [27] the notion of local involutivity for a set of polynomials. A set G of polynomials in \(\mathbb {K}[x_1,\ldots ,x_n]\) is said to be locally involutive if the following condition holds:

$$ \forall g \in G,\; \forall x\in \mathrm {NMult}_\mathcal {I}^{{{\,\mathrm{lm}\,}}_\preccurlyeq (G)}({{\,\mathrm{lm}\,}}_\preccurlyeq (g)),\quad \mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(gx)=0. $$

For a continuous involutive division \(\mathcal {I}\), they prove that an \(\mathcal {I}\)-autoreduced set of polynomials is involutive if and only if it is locally involutive [27, Theorem 6.5]. This local involutivity criterion is essential for completing a set of polynomials into an involutive basis. Note that this result is analogous to the critical pair lemma in rewriting theory, which states that a rewriting system is locally confluent if and only if all its critical pairs are confluent, see, e.g., [36, 37]. Together with Newman's lemma, which states that local confluence and confluence are equivalent for terminating rewriting systems, this gives a constructive method to prove confluence of a terminating rewriting system by analyzing the confluence of its critical pairs.

6.2.7 Completion Procedure.

For a given monomial order \(\preccurlyeq \) on \(\mathcal {M}(x_1,\ldots ,x_n)\) and a continuous and constructive involutive division \(\mathcal {I}\), as defined in [27, Definition 4.12], Procedure 10 computes an \(\mathcal {I}\)-involutive basis of an ideal from a set of generators of the ideal. We refer the reader to [27, Sect. 8] or [19, Sect. 4.4] for the correctness of this procedure and conditions for its termination. This procedure is in the same vein as the completion procedure for rewriting systems by Knuth and Bendix [53] and the completion procedure for commutative polynomials by Buchberger [7].

(Procedure 10: completion of a generating set of an ideal into an \(\mathcal {I}\)-involutive basis.)
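For orientation, here is a schematic Python version of the completion loop, in the spirit of Procedure 10 but not a transcription of it. All division-specific work is delegated to caller-supplied functions, which are assumptions of this sketch: autoreduce (as in Procedure 9), nonmult_vars (the non-multiplicative variables of an element with respect to the current set), inv_normal_form (the involutive normal form) and lm_key (a sort key realizing the monomial order on leading monomials). Polynomial objects are assumed to support multiplication by a variable, as SymPy expressions do.

\begin{verbatim}
def involutive_completion(F, autoreduce, nonmult_vars, inv_normal_form, lm_key):
    # Adjoin the involutive normal forms of the non-multiplicative
    # prolongations g*x, smallest leading monomial first, autoreducing
    # after every addition, until every prolongation reduces to 0.
    # Termination requires hypotheses on the division; see [27, Sect. 8].
    G = autoreduce(list(F))
    while True:
        # Non-multiplicative prolongations, by increasing leading monomial.
        prolongations = sorted(
            (g * x for g in G for x in nonmult_vars(G, g)), key=lm_key)
        for p in prolongations:
            h = inv_normal_form(p, G)
            if h != 0:
                # A new element is needed; the multiplicative variables
                # change, so autoreduce and restart the outer loop.
                G = autoreduce(G + [h])
                break
        else:
            return G   # every prolongation reduces to 0: G is involutive
\end{verbatim}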

6.2.8 Example: Computation of an Involutive Basis.

Let I be the ideal of \(\mathbb {Q}[x_1,x_2]\) generated by the set \(F=\{f_1,f_2\}\), where the polynomials \(f_1\) and \(f_2\) are defined by

$$\begin{aligned} f_1&= x_2^2-2x_1x_2+1,\\ f_2&= x_1x_2 - 3x_1^2 -1. \end{aligned}$$

We compute an involutive basis of the ideal I with respect to the Janet division \(\mathcal {J}\) and the deglex order induced by \(x_2>x_1\). We have \({{\,\mathrm{lm}\,}}(f_1)=x_2^2\) and \({{\,\mathrm{lm}\,}}(f_2)=x_1x_2\), hence the following \(\mathcal {J}\)-reductions:

$$ x_2^2 \longrightarrow 2x_1x_2 - 1, \qquad x_1x_2 \longrightarrow 3x_1^2 + 1. $$

The polynomial \(f_1\) is \(\mathcal {J}\)-reducible by \(f_2\), and we have

$$ f_1 \longrightarrow x_2^2 - 2(3x_1^2+1) + 1 = x_2^2-6x_1^2-1. $$

Thus, we set \(f_3=x_2^2-6x_1^2-1\) and we consider the reduction

$$ x_2^2 \longrightarrow 6x_1^2 + 1. $$

The set \(F'=\{f_2,f_3\}\) is \(\mathcal {J}\)-autoreduced and generates the ideal I. Let us compute the multiplicative variables of the polynomials \(f_2\) and \(f_3\). We have \(\deg _2(F')=\deg _2(\{x_2^2,x_1x_2\})=2\), \([1]=\{x_1x_2\}\) and \([2]=\{x_2^2\}\). Hence, the \(\mathcal {J}\)-multiplicative variables are given by the table

$$ \begin{array}{c|c|cc} f & {{\,\mathrm{lm}\,}}(f) & \mathrm {Mult}_{\mathcal {J}}^{F'}(f) & \\ \hline f_2 & x_1x_2 & x_1 & \\ f_3 & x_2^2 & x_1 & x_2 \\ \end{array} $$

The polynomial \(f_2x_2 = x_1x_2^2-3x_1^2x_2-x_2\) is the only non-multiplicative prolongation to consider. This prolongation can be reduced as follows:

$$ f_2x_2 \longrightarrow 6x_1^3 + x_1 - 3x_1^2x_2 - x_2 \longrightarrow -3x_1^3 - 2x_1 - x_2. $$

We set \(f_4=-3x_1^3 - 2x_1 - x_2\); the associated reduction of \(f_4\) is

$$ x_1^3 \longrightarrow -\tfrac{2}{3}x_1 - \tfrac{1}{3}x_2, $$
and we set \(F'=\{f_2,f_3,f_4\}\). We have \(\deg _2(F')=2\), \([0]=\{x_1^3\}\), \([1]=\{x_1x_2\}\) and \([2]=\{x_2^2\}\). Hence, the \(\mathcal {J}\)-multiplicative variables are given by the table

$$ \begin{array}{c|c|cc} f & {{\,\mathrm{lm}\,}}(f) & \mathrm {Mult}_{\mathcal {J}}^{F'}(f) & \\ \hline f_2 & x_1x_2 & x_1 & \\ f_3 & x_2^2 & x_1 & x_2 \\ f_4 & x_1^3 & x_1 & \\ \end{array} $$

There are two non-multiplicative prolongations to consider:

$$ f_2x_2 = x_1x_2^2 - 3x_1^2x_2 - x_2,\qquad f_4x_2 = -3x_1^3x_2 - 2x_1x_2 - x_2^2. $$

We have \({{\,\mathrm{lm}\,}}(f_2x_2)=x_1x_2^2 < {{\,\mathrm{lm}\,}}(f_4x_2)=x_1^3x_2\). Hence, the prolongation \(f_2x_2\) must be examined first. We have the following reductions:

$$ f_2x_2 \longrightarrow 6x_1^3 + x_1 - 3x_1^2x_2 - x_2 \longrightarrow -3x_1^3 - 2x_1 - x_2 \longrightarrow 0. $$

Hence, there is no polynomial to add. The other non-multiplicative prolongation is \(f_4x_2\), which can also be \(\mathcal {J}\)-reduced to 0, for instance as follows:

$$ f_4x_2 \longrightarrow -9x_1^4 - 3x_1^2 - 2x_1x_2 - x_2^2 \longrightarrow 3x_1^2 + x_1x_2 - x_2^2 \longrightarrow 6x_1^2 + 1 - x_2^2 \longrightarrow 0. $$
All the non-multiplicative prolongations are \(\mathcal {J}\)-reducible to 0; consequently, the set \(F'\) is a Janet basis of the ideal I.
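As a cross-check of this computation (assuming SymPy is available), one can compare with the reduced Gröbner basis computed by SymPy; its 'grlex' order with the generators listed as x2, x1 realizes the deglex order induced by \(x_2>x_1\). Up to making each polynomial monic, the Janet basis \(\{f_2,f_3,f_4\}\) obtained above should agree with that reduced Gröbner basis, in accordance with Theorem 6.3.3 below.

\begin{verbatim}
from sympy import symbols, groebner

x1, x2 = symbols('x1 x2')
f1 = x2**2 - 2*x1*x2 + 1
f2 = x1*x2 - 3*x1**2 - 1
f3 = x2**2 - 6*x1**2 - 1
f4 = -3*x1**3 - 2*x1 - x2

# Reduced Groebner basis of I = (f1, f2) for deglex with x2 > x1.
GB = groebner([f1, f2], x2, x1, order='grlex', domain='QQ')
print(GB.exprs)

# Every element of the Janet basis computed above lies in I.
for f in (f2, f3, f4):
    assert GB.contains(f)
\end{verbatim}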

6.3 Involutive Bases and Gröbner Bases

In this subsection, we recall the notion of Gröbner basis and we show that any involutive basis is a Gröbner basis. We fix a monomial order \(\preccurlyeq \) on \(\mathcal {M}(x_1,\ldots ,x_n)\).

6.3.1 Gröbner Bases.

A subset G of \(\mathbb {K}[x_1,\ldots ,x_n]\) is a Gröbner basis with respect to the monomial order \(\preccurlyeq \) if it is finite and satisfies one of the following equivalent conditions:  

(i):

the reduction relation induced by G is Church-Rosser,

(ii):

the reduction relation induced by G is confluent,

(iii):

the reduction relation induced by G is locally confluent,

(iv):

the reduction relation induced by G has unique normal forms,

(v):

every polynomial f in \(\mathrm {Id}(G)\) reduces to 0 modulo G,

(vi):

every polynomial f in \(\mathrm {Id}(G)\setminus \{0\}\) is reducible modulo G,

(vii):

for any term t in \(\mathrm{lt}_\preccurlyeq (\mathrm {Id}(G))\), there is g in G such that \(\mathrm{lt}_\preccurlyeq (g)\) divides t,

(viii):

\(S_{\preccurlyeq }(g_1,g_2)\) reduces to 0 modulo G for all \(g_1,g_2\) in G, where

$$ S_{\preccurlyeq }(g_1,g_2) = \frac{\mu }{\mathrm{lt}_\preccurlyeq (g_1)} g_1 - \frac{\mu }{\mathrm{lt}_\preccurlyeq (g_2)}g_2, $$

with \(\mu = \mathrm {ppcm}({{\,\mathrm{lm}\,}}_\preccurlyeq (g_1),{{\,\mathrm{lm}\,}}_\preccurlyeq (g_2))\), is the S-polynomial of \(g_1\) and \(g_2\) with respect to the monomial order \(\preccurlyeq \),

(ix):

for all \(g_1,g_2\) in G, the critical pair

$$ \Big (\, \mu - \frac{\mu }{\mathrm{lt}_\preccurlyeq (g_1)}\,g_1,\ \mu - \frac{\mu }{\mathrm{lt}_\preccurlyeq (g_2)}\,g_2 \Big ), $$

obtained by reducing the monomial \(\mu = \mathrm {ppcm}({{\,\mathrm{lm}\,}}(g_1),{{\,\mathrm{lm}\,}}(g_2))\) by \(g_1\) and by \(g_2\), is confluent for the reduction relation induced by G.

 

We refer the reader to [3, Theorem 5.35] for proofs of these equivalences; see also [35, Section 3] and [61]. The proofs of the equivalence of conditions (i)–(iv) are classical results for terminating rewriting systems. Note that condition (viii) corresponds to the Buchberger criterion [7] and condition (ix) is a formulation of this criterion in rewriting terms. We refer to [1, Chapter 8] for the rewriting interpretation of the Buchberger algorithm.

A Gröbner basis of an ideal I of \(\mathbb {K}[x_1,\ldots ,x_n]\) with respect to a monomial order \(\preccurlyeq \) is a Gröbner basis with respect to \(\preccurlyeq \) that generates the ideal I. This can also be formulated by saying that G is a generating set of I such that \(\mathrm {Id}(\mathrm{lt}_\preccurlyeq (G)) = \mathrm {Id}(\mathrm{lt}_\preccurlyeq (I))\).
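To make condition (viii) concrete, here is a small SymPy sketch (again assuming SymPy is available; the function s_polynomial is ours). It computes the S-polynomials of the basis \(\{f_2,f_3,f_4\}\) from the example of Sect. 6.2 and checks that each of them reduces to 0 modulo the basis, for the deglex order induced by \(x_2>x_1\).

\begin{verbatim}
from sympy import symbols, expand, lcm, LM, LT, reduced

x1, x2 = symbols('x1 x2')
GENS, ORDER = (x2, x1), 'grlex'   # deglex with x2 > x1

def s_polynomial(g1, g2):
    # S(g1, g2) = (mu / lt(g1)) g1 - (mu / lt(g2)) g2, where mu is the
    # lcm of the leading monomials of g1 and g2.
    mu = lcm(LM(g1, *GENS, order=ORDER), LM(g2, *GENS, order=ORDER))
    return expand(mu / LT(g1, *GENS, order=ORDER) * g1
                  - mu / LT(g2, *GENS, order=ORDER) * g2)

f2 = x1*x2 - 3*x1**2 - 1
f3 = x2**2 - 6*x1**2 - 1
f4 = -3*x1**3 - 2*x1 - x2
G = [f2, f3, f4]

# Buchberger criterion: every S-polynomial reduces to 0 modulo G.
for i, gi in enumerate(G):
    for gj in G[i + 1:]:
        _, r = reduced(s_polynomial(gi, gj), G, *GENS, order=ORDER)
        assert r == 0
\end{verbatim}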

6.3.2 Involutive Bases and Gröbner Bases.

Let I be an ideal of \(\mathbb {K}[x_1,\ldots ,x_n]\). Suppose that G is an involutive basis of the ideal I with respect to an involutive division \(\mathcal {I}\) and the monomial order \(\preccurlyeq \). In particular, the set G generates the ideal I. For every \(g_1\) and \(g_2\) in G, we consider the S-polynomial \(S_{\preccurlyeq }(g_1,g_2)\) with respect to \(\preccurlyeq \). By definition, the polynomial \(S_{\preccurlyeq }(g_1,g_2)\) belongs to the ideal I. By the involutivity of the set G, it follows from Sect. 6.2.3 and Proposition 6.2.5 that we have

$$ \mathrm {nf}_\preccurlyeq ^G(S_{\preccurlyeq }(g_1,g_2))=\mathrm {nf}_{\mathcal {I},\preccurlyeq }^G(S_{\preccurlyeq }(g_1,g_2))=0. $$

Hence G satisfies the Buchberger criterion, condition (viii), and is therefore a Gröbner basis of the ideal I. We have thus proved the following result, due to V. P. Gerdt and Y. A. Blinkov.

6.3.3 Theorem

[27, Corollary 7.2] Let \(\preccurlyeq \) be a monomial order on \(\mathcal {M}(x_1,\ldots ,x_n)\) and \(\mathcal {I}\) be an involutive division on \(\mathbb {K}[x_1,\ldots ,x_n]\). Any \(\mathcal {I}\)-involutive basis of an ideal I of \(\mathbb {K}[x_1,\ldots ,x_n]\) is a Gröbner basis of I.

Since involutive divisibility is a restriction of the ordinary divisibility with respect to which Gröbner bases are defined, the converse of this result is false in general: a Gröbner basis need not be an involutive basis. For instance, by Example 6.2.4, the set \(\{x_1,x_2\}\) is a Gröbner basis of the ideal it generates, but it is not a \(\mathcal {T}\)-involutive basis for the Thomas division.