1 Introduction

As is well known, many linear differential equations have solutions that can be represented by integrals. This is true, for example, for the Laplace equation, the heat equation, and the wave equation, and one can consider these results as particular instances of the Fundamental Principle of Ehrenpreis [5], which states that every solution of an overdetermined system of homogeneous partial differential equations with constant coefficients can be represented as an integral, with respect to an appropriate measure, over the characteristic variety of the system. This variety is the algebraic variety determined by the symbol of the system, and it is therefore natural to expect that such an integral representation is connected to the geometry of the variety itself. The main goal of this paper is to study the specific case of a single partial differential equation whose symbol is a homogeneous polynomial. We will show how the use of classical instruments from algebraic geometry allows us to find integral representations for its solutions. We will also show how these ideas were already present, in their infancy, in Bateman’s [1], where the author obtained an integral representation for the solutions of the Laplace equation. Since the argument is essentially a counting argument, it is not surprising that, in order to obtain solutions of suitable differential equations, we are forced to delve into existence theorems of the classical theory of algebraic curves: the Riemann-Roch theorem, the Cayley-Bacharach theorem, Serret’s theorem, and more generally those theorems that give us the dimensions of suitable cohomology groups, such as the Brill-Noether Restsatz theorem.

The plan of the paper is as follows: Section 2 offers a first concrete example of the ideas behind twistor theory; specifically, the entire section is devoted to the use of differential geometry in partial differential equations, and we follow [10] to interpret the Laplace and ultrahyperbolic equations in terms of vector bundles. In Section 3 we first consider the case of the Laplace equation in Section 3.1; later on, in Section 3.2, we study integral representations of general partial differential equations in three variables whose symbols are homogeneous polynomials, thus recovering the results of [10, 11]. We finally generalize the theory of Section 3.2 to more than three variables in Section 4. The paper concludes with an appendix on the twistor correspondence, which we have isolated from the main body to facilitate the reading of the rest of the article.

The novelty of our presentation lies in the fact that throughout the paper we insist on the profound role of different aspects of classical algebraic geometry in the study of partial differential equations. We conclude this introduction by pointing out that our own interest in this topic stems from our desire to better understand the interconnections between the theory of twistors and the Fundamental Principle of Ehrenpreis-Palamodov-Malgrange, at least in the case of elliptic and strictly hyperbolic systems. This interest was stimulated by our reading of Ehrenpreis’ [6, 7], and we are currently investigating further how to clarify those interconnections.

2 Differential Geometry and Partial Differential Equations

This section was inspired by [10]. It has been known for some time that problems in real differential geometry can often be simplified by using complex coordinates. For example, in the plane \({\mathbb {R}}^2\) we can write \(z=x+iy\) and thereby identify \({\mathbb {R}}^2\simeq {\mathbb {C}}\). We then discover that a \(C^2\) function \(f:{\mathbb {R}}^2\rightarrow {\mathbb {R}}\) is harmonic if and only if we can write it as

$$\begin{aligned} f=\psi +\overline{\psi } \end{aligned}$$

where \(\psi :{\mathbb {C}}\rightarrow {\mathbb {C}}\) is a holomorphic (complex-analytic) function. This is because a \(C^2\) real-valued function f is harmonic if and only if its Laplacian is zero, and the Laplace operator \(\frac{\partial ^2}{\partial x^2}+\frac{\partial ^2}{\partial y^2}\) is proportional to the operator \(\frac{\partial ^2}{\partial z\partial \overline{z}}\). This shows us how to connect harmonic real-valued functions, objects of real differential geometry in the plane, to holomorphic functions of one complex variable, the natural objects of complex analysis. If we try the same technique in \({\mathbb {R}}^3\) we have to accept the fact that odd-dimensional spaces cannot be identified with complex spaces \({\mathbb {C}}^n\), for any integer n. We can however form another space, closely associated to the geometry of \(T={\mathbb {R}}^3\), that is intrinsically complex, and this is the fundamental idea behind twistor theory. Consider therefore the space M of all oriented lines of \({\mathbb {R}}^3\). The generic element of the space M is the oriented line L(u, v) given by

$$\begin{aligned} L(u,v)=\{v+tu,t\in {\mathbb {R}}\} \end{aligned}$$

where \(||u||=1\) and u, \(v\in {\mathbb {R}}^3\). Consider now the tangent bundle of the 2-sphere \(S^2\) defined by

$$\begin{aligned} TS^2=\{(u,v)\in {\mathbb {R}}^3\times {\mathbb {R}}^3:\Vert u\Vert =1,(u,v)=0\} \end{aligned}$$

with (u, v) denoting the Euclidean scalar product of u and v. We can now define a bijection

$$\begin{aligned} M&\rightarrow TS^2\\ L(u,v)&\mapsto (u,v-(v,u)u) \end{aligned}$$

where the second component is the point on L(u, v) closest to the origin of \(T={\mathbb {R}}^3\). Remark that the map is indeed \(TS^2\)-valued and is clearly surjective. It is injective because if \((u,v-(v,u)u)=(u_1,v_1-(v_1,u_1)u_1)\) then \(u=u_1\) and \(v-v_1=(v-v_1,u)u\), which gives \(L(u,v)=L(u_1,v_1)\). The mapping and its inverse mapping \((u,v)\in TS^2\mapsto L(u,v)\in M\) are smooth, a fact that shows that M and \(TS^2\) are at least diffeomorphic.
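As a quick numerical illustration (ours, not needed for the argument), one can check with a few lines of Python/NumPy that the map above is well defined, i.e. that \(v-(v,u)u\) is tangent to \(S^2\) at u and does not depend on the point chosen on the oriented line:

import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=3); u /= np.linalg.norm(u)    # unit direction of the line
v = rng.normal(size=3)                            # a point on the line L(u, v)

w = v - np.dot(v, u) * u                          # proposed image in TS^2
print(np.isclose(np.dot(w, u), 0.0))              # True: w is tangent to S^2 at u
# another point v + t*u on the same oriented line gives the same image
t = 2.7
print(np.allclose((v + t*u) - np.dot(v + t*u, u) * u, w))   # True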

To get to the next stage, we recall that the unit sphere \(S^2\) can be endowed with a structure of complex manifold by choosing a covering atlas \(\{U_0,U_1\}\), where \(U_0=S^2\setminus \{(0,0,1)\}\) and \(U_1=S^2\setminus \{(0,0,-1)\}\). We define complex coordinates on \(U_0\) by

$$\begin{aligned} \xi _0(x,y,z)=\dfrac{x+iy}{1-z} \end{aligned}$$

which is the stereographic projection of the point (x, y, z) from the north pole, and on \(U_1\) by

$$\begin{aligned} \xi _1(x,y,z)=\dfrac{x-iy}{1+z} \end{aligned}$$

which is the stereographic projection of the point (x, y, z) from the south pole. We have by construction

$$\begin{aligned} \xi _0(x,y,z)=\dfrac{1}{\xi _1(x,y,z)}=F(\xi _1(x,y,z)) \end{aligned}$$

where \(F(w)=\dfrac{1}{w}\), on \(U_0\cap U_1\). This defines a complex structure on \(S^2\). To define a complex structure on \(TS^2\) we use standard constructions in differential geometry; for example, a chart on \(TS^2\) corresponding to the chart \(\xi _0\) is given in local coordinates (u, v), where \(u\in U_0\) and \(v\in {\mathbb {R}}^3\), by

$$\begin{aligned} (\xi (u,v),\,\eta (u,v))=\left( \dfrac{u_1+iu_2}{1-u_3},\,d\xi _u(v) =\dfrac{v_1+iv_2}{1-u_3}+\dfrac{(u_1+iu_2)v_3}{(1-u_3)^2}\right) . \end{aligned}$$
(2.1)

By definition the points of M are oriented lines on \({\mathbb {R}}^3\). Moreover any point p in \({\mathbb {R}}^3\) defines a 2-sphere of lines, namely all oriented lines going through that point. Specifically the set of all lines through p is the set of all \((u,v)\in S^2\times {\mathbb {R}}^3\) satisfying

$$\begin{aligned} v=p-(p,u)u. \end{aligned}$$

We call this a real section of M and denote it by \(X_p\). Let us explore in more detail the geometry of these real sections.

First we observe that these \(X_p\) are called sections because the map

$$\begin{aligned} \rho _p:S^2&\rightarrow M\nonumber \\ u&\mapsto (u,p-(p,u)u) \end{aligned}$$
(2.2)

defines a section of the projection \(\pi :M\rightarrow S^2\) (namely \(\pi (\rho _p(u))=u\), for all \(u\in S^2\)), and, with some abuse of notation, the image of this section is \(X_p\). To understand why we called the sections \(X_p\) real sections, we need to define a real structure on M, through a map

$$\begin{aligned} \tau :M\rightarrow M \end{aligned}$$

called a real structure. This map is defined as the involution that sends an oriented line to the same line with opposite orientation, i.e.

$$\begin{aligned} \tau (u,v)=(-u,v). \end{aligned}$$

This real structure fixes the set \(X_p\) because

$$\begin{aligned} \tau (u,p-(p,u)u)=(-u,p-(p,u)u)=(-u,p-(p,-u)(-u)), \end{aligned}$$

and this explains why \(X_p\) is called a real section.

If \(p=(x,y,z)\) is a point of \({\mathbb {R}}^3\) then \(X_p\) is the set of all (u, v) that correspond to lines through p:

$$\begin{aligned} X_p=\{(u,p-(p,u)u),u\in S^2\}. \end{aligned}$$

If we substitute \(v=p-(p,u)u\) into equation (2.1) and simplify, we see that the equation of \(X_p\) as a subset of M is

$$\begin{aligned} \eta =\dfrac{1}{2}((x+iy)+2z\xi -(x-iy)\xi ^2) \end{aligned}$$
(2.3)

when we identify \(X_p\) with its image by the local chart given in equation (2.1). Hence under a similar identification, in coordinates, \(\rho _p\) is given by

$$\begin{aligned} \rho _p(\xi )=(\xi ,\dfrac{1}{2}((x+iy)+2z\xi -(x-iy)\xi ^2)). \end{aligned}$$

We will call any section that can be written in this way a holomorphic section. It is then possible to show that all holomorphic sections \(S^2\rightarrow M\) take the form

$$\begin{aligned} \xi \mapsto (\xi ,a+b\xi +c\xi ^2), a,b,c\in {\mathbb {C}}\end{aligned}$$

in local coordinates (this is because the holomorphic tangent bundle \(T{\mathbb {C}}P_1\) is the line bundle \(\mathscr {O}(2)\) whose holomorphic sections are given by degree two homogeneous complex polynomials in two variables, hence by a second degree trinomial in non-homogeneous coordinates). With our choice of coordinates and the definition of a real structure one can show that if \((\xi ,\eta )\) are the coordinates of a point \(m\in M\), then \(\left( \frac{-1}{\overline{\xi }}, -\frac{\overline{\eta }}{\overline{\xi }^2}\right) \) are the coordinates of \(\tau (m)\). So \(\tau \) is anti-holomorphic. Therefore a section is real (i.e. invariant under the anti-holomorphic involution \(\tau \)) if and only if the equation

$$\begin{aligned} \eta =a+b\xi +c\xi ^2 \end{aligned}$$

defines the same subset of M as

$$\begin{aligned} -\dfrac{\overline{\eta }}{\overline{\xi }^2}=a+b \left( \dfrac{-1}{\overline{\xi }}\right) +c\dfrac{1}{\overline{\xi }^2}. \end{aligned}$$

This immediately implies that \(a=-\overline{c}\) and that b is real. Hence the real sections defined by points of \({\mathbb {R}}^3\) as in equation (2.3) are precisely all the real sections of M. Thus we have a bijection between points of \({\mathbb {R}}^3\) and real sections of M. The correspondence we have now established between \(T={\mathbb {R}}^3\) and M is completely symmetric: points in M define special subsets (oriented lines) in \({\mathbb {R}}^3\) and points in \({\mathbb {R}}^3\) define special subsets (holomorphic real sections) in M.
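Equation (2.3) can be checked against the chart (2.1) numerically; the following short NumPy computation (purely illustrative, with randomly chosen p and u) substitutes \(v=p-(p,u)u\) into (2.1) and compares the result with the right-hand side of (2.3):

import numpy as np

rng = np.random.default_rng(1)
p = rng.normal(size=3); x, y, z = p
u = rng.normal(size=3); u /= np.linalg.norm(u)    # a random direction (away from the north pole)
v = p - np.dot(p, u) * u                          # the point of X_p lying over u

u1, u2, u3 = u
v1, v2, v3 = v
xi  = (u1 + 1j*u2) / (1 - u3)                     # the chart (2.1)
eta = (v1 + 1j*v2) / (1 - u3) + (u1 + 1j*u2) * v3 / (1 - u3)**2
print(np.isclose(eta, 0.5*((x + 1j*y) + 2*z*xi - (x - 1j*y)*xi**2)))   # True, as in (2.3)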

Let \(\omega =g(\xi ,\eta )d\xi \) be a differential one-form on M. If

$$\begin{aligned} \phi (x,y,z)=\int g\left( \xi ,\dfrac{1}{2}((x+iy)+2z\xi -(x-iy)\xi ^2)\right) d\xi \end{aligned}$$

and we differentiate under the integral sign we have

$$\begin{aligned} \dfrac{\partial ^2\phi }{\partial x^2}+\dfrac{\partial ^2\phi }{\partial y^2} +\dfrac{\partial ^2\phi }{\partial z^2}=0 \end{aligned}$$

that is \(\phi \) is harmonic.
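This differentiation under the integral sign can be verified symbolically. By the chain rule, the Laplacian of the integrand \(g\left( \xi ,\frac{1}{2}((x+iy)+2z\xi -(x-iy)\xi ^2)\right) \) equals \(g_{ww}\,((w_x)^2+(w_y)^2+(w_z)^2)+g_w\,\Delta w\), where w denotes the second argument of g; the following SymPy sketch (ours, purely illustrative) checks that both coefficients vanish identically, so that the conclusion holds for any g:

import sympy as sp

x, y, z, xi = sp.symbols('x y z xi')

# the second argument of g in the integrand above
w = sp.Rational(1, 2)*((x + sp.I*y) + 2*z*xi - (x - sp.I*y)*xi**2)

grad_sq = sp.expand(sp.diff(w, x)**2 + sp.diff(w, y)**2 + sp.diff(w, z)**2)
lap_w   = sp.diff(w, x, 2) + sp.diff(w, y, 2) + sp.diff(w, z, 2)
print(grad_sq, lap_w)   # 0 0, hence Delta g(xi, w) = 0 for every g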

Consider now the ultrahyperbolic equation in \({\mathbb {R}}^4\) given by

$$\begin{aligned} \dfrac{\partial ^2\varphi }{\partial x\partial y} -\dfrac{\partial ^2\varphi }{\partial s\partial z}=0. \end{aligned}$$

Let \(T={\mathbb {R}}^3\), let \(f:{\mathbb {R}}^3\rightarrow {\mathbb {R}}\) be an arbitrary element of the Schwartz space \(S({\mathbb {R}}^3)\), and identify M locally with \({\mathbb {R}}^4\). Choose local coordinates (s, x, y, z) for M on the open set where the third coordinate of u does not vanish or, equivalently, on the open set of lines which do not lie in planes of constant \(x_3\). A typical line in this open set is given by

$$\begin{aligned} L=\{(s+ty,x+tz,t),t\in {\mathbb {R}}\}. \end{aligned}$$

Define a function \(\varphi \) on M by

$$\begin{aligned} \varphi (L)=\displaystyle \int _Lf \end{aligned}$$

which gives in coordinates

$$\begin{aligned} \varphi (s,x,y,z)=\displaystyle \int _{-\infty }^{+\infty }f(s+ty,x+tz,t)dt. \end{aligned}$$

Now there are four variables s, x, y, z, while f is defined on \({\mathbb {R}}^3\), so we expect one differential condition (a constraint) on \(\varphi \). Indeed, differentiating under the integral sign one has

$$\begin{aligned} \dfrac{\partial ^2\varphi }{\partial x\partial y} -\dfrac{\partial ^2\varphi }{\partial s\partial z} =\displaystyle \int _{-\infty }^{+\infty } \left( t\,\partial _1\partial _2 f -t\,\partial _2\partial _1 f\right) (s+ty,x+tz,t)\,dt=0, \end{aligned}$$

where \(\partial _1\) and \(\partial _2\) denote differentiation of f with respect to its first and second arguments.

It is natural to ask if this procedure, which goes under the name of John transform, can be inverted. This is the case, as shown by John in [11].
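The computation above can also be tested on a concrete integrand; here is a small SymPy sketch (our own check, with a sample Gaussian playing the role of the Schwartz function f) showing that the integrand of the John transform satisfies the ultrahyperbolic identity pointwise:

import sympy as sp

s, x, y, z, t = sp.symbols('s x y z t', real=True)

# a sample rapidly decreasing f(u1, u2, u3) = exp(-(u1^2 + u2^2 + u3^2))
u1, u2, u3 = s + t*y, x + t*z, t
f = sp.exp(-(u1**2 + u2**2 + u3**2))

# the ultrahyperbolic operator applied to the integrand of the John transform
print(sp.simplify(sp.diff(f, x, y) - sp.diff(f, s, z)))   # 0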

This example illustrates the defining philosophy of “twistor” theory. Namely, an unconstrained function on “twistor” space T yields the solution to a differential equation on Minkowski space M, by means of an integral transform. We also have a simple geometric correspondence, another characteristic feature of twistor methods. More precisely we see

$$\begin{aligned} T&\longleftrightarrow M\\ \{\text {point in T}\}&\longrightarrow \{\text {oriented lines through point}\}\\ \{\text {line in T}\}&\longleftarrow \{\text {point in M}\}. \end{aligned}$$

Remark 2.1

We recall that the tautological line bundle H over \({\mathbb {C}}P_1\), also denoted by \(\mathscr {O}(-1)\), is the holomorphic line bundle whose fibre over a point \(\left[ z\right] =\left[ z_0:z_1\right] \) is the line \(\left[ z\right] \) itself. This can also be written as \(H=\{(\left[ z\right] ,w)\,|\,w=\lambda z,\lambda \in {\mathbb {C}}\}\subset {\mathbb {C}}P_1\times {\mathbb {C}}^2\). The projection map \(\pi :H\rightarrow {\mathbb {C}}P_1\) is the restriction of the projection from \({\mathbb {C}}P_1\times {\mathbb {C}}^2\), that is \((\left[ z\right] ,w)\mapsto \left[ z\right] \). By covering \({\mathbb {C}}P_1\) with the two open sets \(U_0=\{\left[ z\right] =\left[ z_0:z_1\right] ,z_0\ne 0\}\) and \(U_1=\{\left[ z\right] =\left[ z_0:z_1\right] ,z_1\ne 0\}\) we see that the transition function for the line bundle H is given by

$$\begin{aligned}&g_{01}:U_0\cap U_1\rightarrow {\mathbb {C}}^\times ,\\&\qquad \left[ z\right] \mapsto \dfrac{z_1}{z_0}. \end{aligned}$$

This follows from the fact that one can define local sections \(\psi _i:U_i\rightarrow H,\,i\in \{0,1\}\) by

$$\begin{aligned} \psi _0(\left[ z\right] )=\left( \left[ z\right] , \left( 1,\dfrac{z_1}{z_0}\right) \right) \end{aligned}$$

and

$$\begin{aligned} \psi _1(\left[ z\right] )=\left( \left[ z\right] , \left( \dfrac{z_0}{z_1},1\right) \right) . \end{aligned}$$

And one sees that \(\psi _0(\left[ z\right] ) =\dfrac{z_1}{z_0}\psi _1(\left[ z\right] )\). By dualizing the line bundle H one obtains the line bundle \(\mathscr {O}(1)\) with transition function \(g^{\star }_{01}(\left[ z\right] )=\dfrac{z_0}{z_1}\) and taking the tensor product of \(\mathscr {O}(1)\) with itself one gets the line bundle \(\mathscr {O}(2)\) with transition function given by \(\left[ z\right] \mapsto \left( \dfrac{z_0}{z_1}\right) ^2\).

Set \(\xi =\dfrac{z_0}{z_1}\) on \(U_1\) and \(w=\dfrac{z_1}{z_0}\) on \(U_0\), the two coordinates associated to \(U_1\) and \(U_0\) respectively. We have \(\xi =\dfrac{1}{w}\) on \(U_0\cap U_1\). This gives \(d\xi =-\dfrac{1}{w^2}dw\), i.e. \(dw=-\xi ^{-2}d\xi \), and therefore \(\partial _w=-\xi ^2\partial _{\xi }\). This shows that the line bundles \(\mathscr {O}(2)\) and \(T{\mathbb {C}}P_1\) on \({\mathbb {C}}P_1\) have the same transition functions and as a consequence they are isomorphic.

3 The Solution of Partial Differential Equations by Means of Definite Integrals

We want to begin this section with what Atiyah regarded as the beginning of twistor theory, understood as the representation, by means of definite integrals, of solutions of linear partial differential equations with constant coefficients on \({\mathbb {R}}^n\) or \({\mathbb {C}}^n\) whose symbol is a homogeneous polynomial in the partial derivatives. We will soon need some results from classical algebraic geometry, but we begin here with a relatively simple example where all the calculations can be made explicit.

3.1 The Laplace Equation and Some of Its Solutions

Consider two points \(P=(a,b,c)\) and \(M=(x,y,z)\) in the usual Euclidean space \({\mathbb {R}}^3\), and assume they are subjected to Newtonian attraction, with P being the attracting point, and M the attracted one. By a suitable normalization we have that the force exerted by P on M is \(\mathbf {F}=-\frac{\overrightarrow{PM}}{\Vert \overrightarrow{PM}\Vert ^{3}}\). We set \(r=\Vert \overrightarrow{PM} \Vert =\displaystyle \sqrt{(x-a)^2+(y-b)^2+(z-c)^2}\). Then the components of the force \(\mathbf {F}\) are

$$\begin{aligned} X=-\dfrac{x-a}{r^3},\quad Y=-\dfrac{y-b}{r^3}, \quad Z=-\dfrac{z-c}{r^3}, \end{aligned}$$

and this attraction derives from the potential

$$\begin{aligned} U(x,y,z)=\dfrac{1}{r}, \end{aligned}$$

since

$$\begin{aligned} \dfrac{\partial U}{\partial x}=\dfrac{\partial U}{\partial r} \dfrac{\partial r}{\partial x}=-\dfrac{x-a}{r^3} \end{aligned}$$

(the same calculation holds for the other partial derivatives of U).

If instead of an attracting point P, one has a finite attracting volume V, then the potential U is given for points (x, y, z) lying outside the volume V by

$$\begin{aligned} U(x,y,z)=\displaystyle \int \int \int _V\dfrac{dadbdc}{\sqrt{(x-a)^2 +(y-b)^2+(z-c)^2}}. \end{aligned}$$

By computing now the derivatives of U(x, y, z) we obtain

$$\begin{aligned} \dfrac{\partial U}{\partial x}=-\displaystyle \int \int \int _V\dfrac{(x-a)dadbdc}{\left[ (x-a)^2+(y-b)^2 +(z-c)^2\right] ^{\frac{3}{2}}} \end{aligned}$$

and

$$\begin{aligned} \dfrac{\partial ^2U}{\partial x^2}&=-\displaystyle \int \int \int _V\dfrac{dadbdc}{\left[ (x-a)^2+(y-b)^2 +(z-c)^2\right] ^{\frac{3}{2}}}\\&\quad +3\displaystyle \int \int \int _V\dfrac{(x-a)^2dadbdc}{\left[ (x-a)^2+(y-b)^2+(z-c)^2\right] ^{\frac{5}{2}}}. \end{aligned}$$

This gives

$$\begin{aligned} \dfrac{\partial ^2U}{\partial x^2}+\dfrac{\partial ^2U}{\partial y^2} +\dfrac{\partial ^2U}{\partial z^2}=0 \end{aligned}$$

and so

$$\begin{aligned} \Delta U=0. \end{aligned}$$

In the introduction we made reference to how Whittaker, [16, 17], found a way to write some solutions to this Laplace equation by means of definite integrals. We will now review in detail how that can be achieved.

The first observation is the fact that the Laplace differential operator \(\Delta =\frac{\partial ^2}{\partial x^2} +\frac{\partial ^2}{\partial y^2}+\frac{\partial ^2}{\partial z^2}\) is elliptic. This means that there is no non-zero element \((x,y,z)\) of \({\mathbb {R}}^3\) which satisfies \(x^2+y^2+z^2=0\). Hence, by the elliptic regularity theorem, any solution to the Laplace partial differential equation is real analytic.

Let then \(U(x,y,z)\) be a solution to the Laplace differential equation, expressed as a convergent power series with respect to the three variables x, y, z in a neighborhood of a given point \((x_0,y_0,z_0)\), and set

$$\begin{aligned} x=x_0+X,\quad y=y_0+Y,\quad z=z_0+Z. \end{aligned}$$

The series

$$\begin{aligned} U{=}a_0+a_1X+b_1Y+c_1Z+a_2X^2+b_2Y^2{+}c_2Z^2{+}2d_2YZ+2e_2ZX+2f_2XY+\ldots \end{aligned}$$

is therefore convergent for \(|X|+|Y|+|Z|\) sufficiently small. To determine the coefficients \(a_0\), \(a_1\), \(\ldots \), we will calculate the second order partial derivatives of U with respect to X, Y, Z, and put the resulting expressions in the equation

$$\begin{aligned} \dfrac{\partial ^2U}{\partial X^2}+\dfrac{\partial ^2U}{\partial Y^2} +\dfrac{\partial ^2U}{\partial Z^2}=0. \end{aligned}$$

By identification, this will give us linear relations from which we can deduce the values of the coefficients. Now note that if we consider, in the series of U, the homogeneous part \(U_n\) of degree n in X, Y, Z, the number of its coefficients is \(\frac{(n+1)(n+2)}{2}\), because this is the dimension of the space of homogeneous polynomials of degree n in three variables. As the Laplacian is of second order, when \(n\geqslant 2\) its action on \(U_n\) will give a homogeneous polynomial of degree \(n-2\). This term has to vanish identically, so we have \(\frac{n(n-1)}{2}\) (the dimension of the space of homogeneous polynomials of degree \(n-2\) in three variables) linear relations among the coefficients of \(U_n\). Therefore, among the coefficients of \(U_n\), there will be

$$\begin{aligned} \dfrac{(n+1)(n+2)}{2}-\dfrac{n(n-1)}{2}=2n+1 \end{aligned}$$

arbitrary coefficients when \(n\geqslant 2\) (the dimension of the space of homogeneous polynomials of degree n satisfying the Laplace equation is \(2n+1\)). Note that for \(n=0,\,1\), \(\frac{(n+1)(n+2)}{2}=2n+1\), and so there are \(2n+1\) arbitrary coefficients in \(U_n\) regardless of the value of n. By superposition these terms will be linear combinations of \(2n+1\) particular polynomial solutions, of degree n, to the Laplace equation. Let us look for such solutions (a basis of solutions).
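Before constructing such a basis explicitly, the count \(2n+1\) can be verified directly with a short SymPy computation (ours, purely illustrative): build a generic homogeneous polynomial of degree n in X, Y, Z, impose that its Laplacian vanishes identically, and count the free coefficients that remain.

import sympy as sp

X, Y, Z = sp.symbols('X Y Z')

def harmonic_dimension(n):
    # all monomials of degree n in three variables
    monomials = [X**a * Y**b * Z**(n - a - b)
                 for a in range(n + 1) for b in range(n + 1 - a)]
    coeffs = list(sp.symbols(f'c0:{len(monomials)}'))
    U = sum(c*mono for c, mono in zip(coeffs, monomials))
    lap = sp.expand(sp.diff(U, X, 2) + sp.diff(U, Y, 2) + sp.diff(U, Z, 2))
    if lap == 0:
        return len(monomials)
    # each coefficient of the degree n-2 polynomial lap gives one linear relation
    relations = sp.Poly(lap, X, Y, Z).coeffs()
    rank = sp.Matrix([[sp.diff(r, c) for c in coeffs] for r in relations]).rank()
    return len(monomials) - rank

for n in range(6):
    print(n, harmonic_dimension(n), 2*n + 1)   # the last two columns agree

We now construct such a basis explicitly.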

For that let us start with the expression

$$\begin{aligned} E_n:=(Z+iX\cos (u)+iY\sin (u))^n,\,u\in {\mathbb {R}}, \end{aligned}$$

which is a homogeneous polynomial of degree n in X, Y, Z and is a solution to the Laplace equation, since \((i\cos (u))^2+(i\sin (u))^2+1=0\). We can develop \(E_n\) into a Fourier series in u because it is smooth and \(2\pi \)-periodic in u. This gives

$$\begin{aligned} \displaystyle \sum _{0}^\infty g_m(X,Y,Z)\cos (mu) +\sum _{0}^\infty h_j(X,Y,Z)\sin (ju); \end{aligned}$$

with coefficients \(g_m\) and \(h_j\) linearly independent polynomials in X, Y, Z.

However the development in Fourier series of \(E_n\) contains only a finite number of terms. This follows by computing \(E_n\) via the binomial formula, by linearizing the various powers of \(\cos (u)\), \(\sin (u)\) and by uniqueness of the Fourier expansion of a continuous \(2\pi \)-periodic function. Therefore one can write

$$\begin{aligned} E_n=\displaystyle \sum _{0}^n g_m(X,Y,Z)\cos (mu) +\sum _{1}^nh_j(X,Y,Z)\sin (ju) \end{aligned}$$

where by Fourier one has

$$\begin{aligned}&\pi g_m(X,Y,Z)=\displaystyle \int _{-\pi }^\pi (Z+iX\cos u+iY\sin u)^n \cos (mu) du\\&\pi h_j(X,Y,Z)=\displaystyle \int _{-\pi }^\pi (Z+iX\cos u+iY\sin u)^n\sin (ju) du. \end{aligned}$$

To show that \(E_n\) may be written in such a form one can use an induction based on the classical formulas, valid for \(a,b\in {\mathbb {R}}\)

$$\begin{aligned}&\cos (a)\cos (b)=(\cos (a-b)+\cos (a+b))/2,\\&\quad \sin (a)\sin (b)=(\cos (a-b)-\cos (a+b))/2\\&\cos (a)\sin (b)=(\sin (a+b)-\sin (a-b))/2. \end{aligned}$$

We remark that the \(g_m\) are even in Y and that the \(h_j\) are odd in Y. For instance one has by definition

$$\begin{aligned} \pi g_m(X,-Y,Z)=\displaystyle \int _{-\pi }^\pi (Z+iX\cos u-iY\sin u)^n \cos (mu) du; \end{aligned}$$

by setting \(u=-v\), we obtain

$$\begin{aligned} \pi g_m(X,Y,Z)=\displaystyle \int _{\pi }^{-\pi } (Z+iX\cos v+iY\sin v)^n\cos (mv) (-dv)=\pi g_m(X,-Y,Z). \end{aligned}$$

Also the highest power of Z present in \(g_m\) or \(h_j\) is \(n-m\) (respectively \(n-j\)). To see this one may use an induction based on the formula

$$\begin{aligned} E_n=\displaystyle \sum _{0}^n g_m(X,Y,Z)\cos (mu) +\sum _{1}^nh_j(X,Y,Z) \sin (ju) \end{aligned}$$

and the fact that

$$\begin{aligned} E_{n+1}=(Z+iX\cos (u)+iY\sin (u))E_n. \end{aligned}$$

Now we can use these properties of \(g_m\) and \(h_j\) to show that they are linearly independent (and therefore, being \(2n+1\) in number, form a basis of the vector space of homogeneous polynomials of degree n in X, Y, Z which are solutions to the Laplace equation; X, Y, Z are still considered to be sufficiently small). Let \(\lambda _0\), \(\lambda _1\), \(\ldots \), \(\lambda _n\) and \(\mu _1\), \(\mu _2\), \(\ldots \), \(\mu _n\) be scalars such that

$$\begin{aligned} \lambda _0g_0+\ldots +\lambda _ng_n+\mu _1h_1+\ldots \mu _nh_n=0. \end{aligned}$$

Then since the \(g_m\) are even and the \(h_j\) are odd, we have separately

$$\begin{aligned} \lambda _0g_0+\ldots +\lambda _ng_n=0 \end{aligned}$$

and

$$\begin{aligned} \mu _1h_1+\ldots \mu _nh_n=0. \end{aligned}$$

Therefore from the fact that \(g_m\) and \(h_m\) are of degree \(n-m\) in Z, one deduces immediately that all the coefficients \(\lambda _m\) and \(\mu _m\) are zero.

This being said, every linear combination of the \(2n+1\) independent solutions can then be put in the form

$$\begin{aligned} \displaystyle \int _{-\pi }^\pi (Z+iX\cos u+iY\sin u)^nf_n(u) du. \end{aligned}$$

Here, for each n, \(f_n(u)=\frac{1}{\pi } (\sum _{0}^n\alpha _m\cos (mu)+\sum _{1}^n\beta _j\sin (ju))\), for some \(\alpha _m\) and \(\beta _j\). Assuming that \(|X|+|Y|+|Z|<B<1\) (for instance), and choosing \(D>0\) such that for each n we have \(|\alpha _m|<D\) for \(0\leqslant m\leqslant n\) and \(|\beta _j|<D\) for \(1\leqslant j\leqslant n\), we shall have

$$\begin{aligned}&U(X,Y,Z)=\sum _{0}^\infty \displaystyle \int _{-\pi }^\pi (Z+iX\cos u+iY\sin u)^nf_n(u) du \\&\quad \equiv \displaystyle \int _{-\pi }^\pi \Phi (Z+iX\cos u+iY\sin u, u) du, \end{aligned}$$

for \(\Phi \) a suitable well-defined function in two variables. We have therefore obtained a local integral representation of some solutions to the Laplace equation in \({\mathbb {R}}^3\).
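As a final illustration of this subsection (ours, not part of the original argument), the following SymPy sketch computes \(\pi g_m\) from the integral formula above for a few small values of n and m, and confirms that each of these polynomials is indeed harmonic:

import sympy as sp

X, Y, Z, u = sp.symbols('X Y Z u', real=True)

def g(n, m):
    integrand = sp.expand((Z + sp.I*X*sp.cos(u) + sp.I*Y*sp.sin(u))**n * sp.cos(m*u))
    return sp.integrate(integrand, (u, -sp.pi, sp.pi)) / sp.pi

for n in (2, 3):
    for m in range(n + 1):
        poly = sp.expand(g(n, m))
        lap = sp.expand(sp.diff(poly, X, 2) + sp.diff(poly, Y, 2) + sp.diff(poly, Z, 2))
        print(n, m, poly, lap)   # the last entry is 0 in every case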

3.2 The Kernel of the Partial Differential Operator \(F\left( \frac{\partial }{\partial x},\frac{\partial }{\partial y},\frac{\partial }{\partial z}\right) \)

In this subsection we will show how to generalize the result of the previous subsection to the case in which the Laplacian is replaced by another partial differential operator, whose symbol is still a homogeneous polynomial in three variables. As we will see, it is not so easy to calculate the dimensions of the spaces of coefficients in the series expansion of the solution, and therefore we have to resort to some significant results from classical algebraic geometry.

To this end, we introduce some classical terminology before proving the theorem stated later in this section. Let C be a smooth projective plane algebraic curve and let \(\tilde{C}\) be its associated compact Riemann surface. A divisor D on C is a formal sum \(D=\sum _{p\in C} n_p p\) with \(n_p\in {\mathbb {Z}}\), all but a finite number of the \(n_p\)’s being equal to zero. The degree of a divisor D is defined by \(\deg {D}:=\sum _{p\in C}n_p\). A divisor \(D=\sum _{p\in C} n_p p\) is called effective, and we will write \(D\ge 0\), when \(n_p\ge 0\) for all \(p\in C\). Two divisors D and \(D^\prime \) are linearly equivalent if there exists a rational function f on C such that \(D-D^\prime =(f)\), where (f) stands for \(f^{-1}(0)-f^{-1}(\infty )\), zeros and poles being counted with multiplicities. Linear equivalence is easily seen to be an equivalence relation on divisors. One important example of a divisor class on a smooth algebraic curve is the class of the divisor of any given rational differential on the curve; it will be called the canonical divisor class and will be denoted K. Let \(\mathscr {K}\left( C\right) \) be the field of rational functions on C. Let us now introduce the following vector space associated to a given divisor D

$$\begin{aligned} L(D)=\{0\}\cup \{f, f\in \mathscr {K}\left( C\right) , (f)+D\ge 0\}. \end{aligned}$$
(3.1)

The dimension of L(D) is denoted by l(D). Note that l(D) only depends on the divisor class of D. The fundamental theorem which enables one to compute l(D) in general is the Riemann-Roch theorem

Theorem 3.1

(R-R, [8]) Let C be a smooth projective plane algebraic curve of degree d, D a divisor on C, K the canonical divisor class of C and g the genus of C. Then

$$\begin{aligned} l(D)=\deg {D}+1-g+l(K-D). \end{aligned}$$

The following result is a simple consequence of the Riemann-Roch theorem.

Theorem 3.2

Any rational function f on a nonsingular plane curve C can be written as the quotient of two homogeneous polynomials in three variables and of the same degree, restricted to C:

$$\begin{aligned} f=\displaystyle \dfrac{Q}{R}|_C \end{aligned}$$

Let L be a line in \({\mathbb {C}}P_2\) not containing C; we set \(H:=\sum _{p\in L\cap C}I_p(C,L)p\), where \(I_p(C,L)\) is the intersection multiplicity at p between C and L. More generally, let X be any plane curve intersecting C only in isolated points; then we define the divisor cut on C by X, denoted by \(C\cdot X\), by the formula \(C\cdot X:=\sum _{p\in X\cap C}I_p(C,X)p\). Remark that any two such divisors \(C\cdot X\) and \(C\cdot X^\prime \), associated to curves X and \(X^\prime \) of the same degree, are linearly equivalent. This is because if \(X=\{F=0\}\) and \(X^\prime =\{F^\prime =0\}\) then \(C\cdot X-C\cdot X^\prime =\left( \frac{F}{F^\prime }|_C\right) \).

Let us now recall Bézout’s theorem

Theorem 3.3

(Bézout) Let C be a smooth plane curve of degree d. If X is a plane curve of degree e not containing C, then the degree of the divisor \(C\cdot X\) cut by X on C is \(d\cdot e\).

It follows from Bézout’s theorem [8, p. 86] that the degree of H is equal to the degree of C. We also have

Theorem 3.4

(\(\mathrm{M}=\mathrm{AF}+\mathrm{BG}\)) Let \(C=\{F=0\}\) be a smooth curve and \(X=\{G=0\}\) a curve not containing C. Then if a curve \(Y=\{M=0\}\) contains \(C\cdot X\) one can write \(M=AF+BG\), with A and B homogeneous polynomials of degrees, respectively, \(\deg (M)-\deg (F)\) and \(\deg (M)-\deg (G)\).

One of the central results in the classical theory of plane algebraic curves is the following theorem of Brill and Noether

Theorem 3.5

(Brill-Noether Restsatz) Let C be a non-singular plane curve, and let X be any plane curve not containing C. Then for any effective divisor D linearly equivalent to \(C\cdot X\) there is a plane curve \(X^\prime \), not containing C, such that \(C\cdot X^\prime =D\).

An important consequence of the Brill-Noether theorem [4, cor. 6] is

Corollary 3.6

Let C be a smooth plane curve of degree d, and let \(\Lambda \) be a subset of \(\lambda \) distinct points of C considered as an effective divisor on C. Then the dimension of the space of homogeneous polynomials of degree m in three variables modulo those vanishing on \(\Lambda \) is equal to

$$\begin{aligned} l(mH)-l(mH-\Lambda ). \end{aligned}$$
(3.2)

Proof

We apply the Restsatz to the curve C and to the m-th power of a generic line. The vector space of homogeneous polynomials of degree m cuts out on C the family of divisors linearly equivalent to mH. This family, denoted |mH|, is isomorphic to the projective space P(L(mH)) (see Lemma 3.7 below). This set has the same dimension as the projective space associated to the vector space V of homogeneous polynomials of degree m modulo those homogeneous polynomials vanishing on C.

To see this, take an effective divisor D linearly equivalent to mH. By the Restsatz, we have \(D=C\cdot X\) for some curve X of degree m not containing C. Therefore the map

$$\begin{aligned} P(V)&\rightarrow |mH|\\ X&\mapsto C\cdot X=D \end{aligned}$$

is surjective. Suppose now that \(D=C\cdot X^\prime \) for another curve \(X^\prime \) of degree m not containing C. We show that the defining polynomials of X and \(X^\prime \) are proportional modulo the polynomial P which defines C. The fact that X and \(X^\prime \) do not contain C is equivalent to the fact that their defining polynomials F and \(F_1\) (respectively) are not divisible by P. Since the curve \(X^\prime \) contains the divisor \(C\cdot X\), from Theorem 3.4 we can write \(F_1=A_2F+B_2P\), with \(A_2\) a complex number; moreover \(A_2\ne 0\), since otherwise \(F_1\) would be divisible by P. Hence F and \(F_1\) define proportional classes modulo the polynomials vanishing on C, and the map

$$\begin{aligned} P(V)&\rightarrow |mH|\\ X&\mapsto C\cdot X=D \end{aligned}$$

is injective.

We have thus shown that \(P(V)\simeq |mH|\simeq P(L(mH))\). This gives \(\dim V=l(mH)=\dim (L(mH))\). Likewise one shows that \(l(mH-\Lambda )=\dim L(mH-\Lambda )\) is the dimension of \(V^\prime \), the vector space of homogeneous polynomials of degree m passing through the points of \(\Lambda \) modulo those vanishing on C. This follows from the fact that \(L(mH-\Lambda )\simeq \Gamma (\tilde{C},\left[ mH-\Lambda \right] )\), where \(\left[ mH-\Lambda \right] \) is the line bundle, [9, chap. 1], associated to the divisor \(mH-\Lambda \). This latter line bundle is \(\mathscr {O}(m)|_{\widetilde{C}}\otimes \mathscr {O}_{\widetilde{C}} (\left[ -\Lambda \right] )\), and this concludes the proof. \(\square \)
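As an illustration of how these results will be used, let us record a standard computation (recalled here for the reader’s convenience): if C is a smooth plane curve of degree n, then its genus is \(g=\frac{1}{2}(n-1)(n-2)\), the canonical class is \(K=(n-3)H\) and \(\deg H=n\). Hence, for \(m\geqslant n-2\), \(\deg (K-mH)=n(n-3-m)<0\), so \(l(K-mH)=0\) and the Riemann-Roch theorem gives

$$\begin{aligned} l(mH)=mn+1-\dfrac{1}{2}(n-1)(n-2), \end{aligned}$$

which is precisely the number of independent terms that will appear in the proof of Theorem 3.13 below.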

Lemma 3.7

Given a divisor D on a nonsingular projective algebraic curve C, the set

$$\begin{aligned} |D|:=\{D^\prime \sim D,D^\prime \geqslant 0\}\simeq P(L(D)). \end{aligned}$$

Proof

For every \(D^\prime \in |D|\), there exists \(f\in \mathscr {K}(C)\), \(f\ne 0\), such that \(D^\prime =(f)+D\), and any two such f differ by a non-zero constant factor. Indeed if \((f)+D=(g)+D\) then \((f)=(g)\), so \((f/g)=0\). One sees then that f/g has no zeros or poles (so it is in particular holomorphic on \(\tilde{C}\)), therefore it is an element of \({\mathbb {C}}^\times \) because \(\mathscr {O}_{\tilde{C}}(\tilde{C})={\mathbb {C}}\). Therefore we have a bijective map

$$\begin{aligned} P(L(D))&\rightarrow |D|\\ f&\mapsto (f)+D \end{aligned}$$

\(\square \)

We will need the following version of Cayley-Bacharach’s theorem [4, th. CB4]

Theorem 3.8

(C-B1) Let \(X_1\), \(X_2\) be plane curves of degrees m and n respectively, with \(X_1\) smooth and meeting \(X_2\) in a collection of mn distinct points \(\Gamma =\{p_1,\ldots ,p_{mn}\}\). If \(X\subset {\mathbb {C}}P_2\) is any plane curve of degree \(m+n-3\) containing all but one point of \(\Gamma \), then X contains all of \(\Gamma \).

Corollary 3.9

(Chasles’ theorem) Let \(X_1\), \(X_2\subset {\mathbb {C}}P_2\) be cubic plane curves, with \(X_1\) smooth, meeting in nine points \(P_1,P_2,\ldots ,P_9\). If \(X\subset {\mathbb {C}}P_2\) is any cubic plane curve containing eight of them, then X contains the remaining point as well.

Let us introduce the following theorem of Serret [14, p. 99]

Theorem 3.10

(Serret) Let \(p=\left[ a,b,c\right] \) be a point of \({\mathbb {C}}P_2\) and associate to it the linear form \(l_p(x,y,z)=ax+by+cz\). Then the necessary and sufficient condition that every curve \(C^r\), of degree r, which passes through \(q-1\) of a set of q given points of \({\mathbb {C}}P_2\) should pass through the remaining one, is that there be a linear relation or syzygy (with all coefficients non-zero) connecting the \(r^{th}\) powers of the linear forms (or, by abuse of language, tangential equations) associated to the given points.

Proof

If one has a relation of the form

$$\begin{aligned} \sum _{i=1}^q\lambda _i(l_{p_i})^r=0 \end{aligned}$$

and the equation of the curve \(C^r\) is given by h, then \(h\left( \frac{\partial }{\partial x},\frac{\partial }{\partial y},\frac{\partial }{\partial z}\right) (l_{p_i})^r=r!h(p_i)\). Applying \(h\left( \frac{\partial }{\partial x},\frac{\partial }{\partial y},\frac{\partial }{\partial z}\right) \) to the syzygy therefore gives \(\sum _{i=1}^q\lambda _ih(p_i)=0\), with all the \(\lambda _i\) non-zero; so if \(q-1\) of the points lie on \(C^r\) the remaining one also does. The converse is shown along similar lines, see [14, p. 99]. \(\square \)

Corollary 3.11

When \(q=\genfrac(){0.0pt}0{m+2}{m}\) Serret’s theorem gives the necessary and sufficient condition that q points should lie on a curve of degree m.

Proof

Indeed it follows from the Riemann-Roch theorem (see below in the proof of Theorem 3.12) that there always exists a curve of degree m which passes through any \(q-1=\frac{1}{2}m(m+3)\) given points of the fixed smooth curve C. So if there is a linear relation between the \(m^{th}\) powers of the linear forms associated to the q points, then necessarily all the q points lie on that curve of degree m. The converse is the necessity statement of Serret’s theorem. \(\square \)

We finally prove the following generalized Cayley-Bacharach theorem, inspired by [14], which easily follows from Serret’s theorem and the Riemann-Roch theorem.

Theorem 3.12

(C-B2) Let \(X_1\) and \(X_2\) be plane curves of degrees m and n respectively, with \(X_1\) smooth and meeting \(X_2\) in a collection of mn distinct points \(\Gamma =\{p_1,\ldots ,p_{mn}\}\). Every curve \(C^{m+n-\gamma }\) \((\gamma > 3)\) of degree \(m+n-\gamma \) which passes through \(mn-\dfrac{1}{2}(\gamma -1)(\gamma -2)\) of the points \(X_1\cap X_2\) passes through the remainder except when these remaining \(\dfrac{1}{2}(\gamma -1)(\gamma -2)\) points lie on a curve \(C^{\gamma -3}\) of degree \(\gamma -3\).

Proof

Denote by \(\left[ a_s,b_s,c_s\right] \) the points of \(X_1\cap X_2\). From the Cayley-Bacharach Theorem 3.8 every curve \(C^{m+n-3}\) which passes through all but one point of \(X_1\cap X_2\) necessarily passes through the remaining point. Therefore from Serret’s theorem we have a syzygy

$$\begin{aligned} \sum _{s=1}^{mn}k_s(a_sx+b_sy+c_sz)^{m+n-3}=0, k_s\in {\mathbb {C}}^\times . \end{aligned}$$

Since this last equation is an identity in x, y, z, it can be differentiated repeatedly with respect to the variables x, y, z. Then, if \(F\left( \frac{\partial }{\partial x},\frac{\partial }{\partial y},\frac{\partial }{\partial z}\right) \) is a homogeneous polynomial of degree \(\nu \) in the operators, we evidently have

$$\begin{aligned} \sum _{s=1}^{mn}k_sF(a_s,b_s,c_s)(a_sx+b_sy+c_sz)^{m+n-3-\nu }=0. \end{aligned}$$

In particular, taking \(\nu =m+n-\gamma \)

$$\begin{aligned} \sum _{s=1}^{mn}k_sF(a_s,b_s,c_s)(a_sx+b_sy+c_sz)^{\gamma -3}=0. \end{aligned}$$

Thus if F is the equation of the curve \(C^{m+n-\gamma }\) provided by the hypothesis of the theorem, we obtain an identity involving only \(\dfrac{1}{2}(\gamma -1)(\gamma -2)\) of the points. But from the Riemann-Roch theorem we know that there always exists a curve \(C^{\gamma -3}\) passing through any \(\dfrac{1}{2}\gamma (\gamma -3)\) given points of \(X_1\). Indeed if \(\Lambda \) is that set of points considered as an effective divisor

$$\begin{aligned}&l((\gamma -3)H)-l((\gamma -3)H-\Lambda )\\&\quad =\dfrac{1}{2}\gamma (\gamma -3) +l(K-(\gamma -3)H)-l(K-(\gamma -3)H+\Lambda ), \end{aligned}$$

with K and H defined with respect to \(X_1\). Because \(\Lambda \ge 0\) it follows that \(l(K-(\gamma -3)H)-l(K-(\gamma -3)H+\Lambda )\le 0\), by definition. Thus \(l((\gamma -3)H)-l((\gamma -3)H-\Lambda )\le \dfrac{1}{2} \gamma (\gamma -3)\) which is strictly less than the dimension \(\dfrac{1}{2}(\gamma -1)(\gamma -2)\) of the vector space of homogeneous polynomials of degree \(\gamma -3\) in three variables. Given the interpretation we gave of the quantity \(l((\gamma -3)H)-l((\gamma -3)H-\Lambda )\), we conclude that there exists a non-trivial polynomial of degree \(\gamma -3\) vanishing on \(\Lambda \). Hence if the remaining \(\dfrac{1}{2}(\gamma -1)(\gamma -2)\) points do not lie on a \(C^{\gamma -3}\), they must all lie on the given curve of degree \(m+n-\gamma \), by Serret’s theorem. \(\square \)

We are finally ready for the main result of this work:

Theorem 3.13

Let C be a smooth projective plane curve of degree \(n\geqslant 2\) with equation given by \(F(\xi ,\eta ,\zeta )=0\). Let \(\left[ \xi ,\eta ,\zeta \right] \) be the coordinates of its points expressed as functions of a uniformizing parameter t. Then any real analytic solution V of the equation

$$\begin{aligned} \displaystyle F\left( \dfrac{\partial }{\partial x},\dfrac{\partial }{\partial y},\dfrac{\partial }{\partial z}\right) \phi (x,y,z)=0 \end{aligned}$$
(3.3)

on a sufficiently small open set can be put in the form

$$\begin{aligned} \displaystyle V(x,y,z)=\int \Phi (\xi x+\eta y+\zeta z, t)dt, \end{aligned}$$
(3.4)

for a suitable path of integration. The function \(\Phi \) depends naturally on F. It also depends on V and determines it, if the necessary differentiations are allowed.

Proof

Without loss of generality we assume V to be real analytic near the origin as a function of x, y, z. Expand V as an absolutely and uniformly convergent power series in x, y, z near the origin itself.

We now apply \(F\left( \dfrac{\partial }{\partial x},\dfrac{\partial }{\partial y},\dfrac{\partial }{\partial z}\right) \) to this series, and we set to zero the coefficients of the various powers. Let us look at the contribution of the homogeneous part of V of degree m; when \(m \ge n\) we find

$$\begin{aligned} \dfrac{1}{2}(m-n+1)(m-n+2) \end{aligned}$$

relations among the coefficients, since the action of \(F\left( \frac{\partial }{\partial x},\frac{\partial }{\partial y},\frac{\partial }{\partial z}\right) \) on the homogeneous part of degree m of V(x, y, z) gives a homogeneous polynomial of degree \(m-n\), and \(\frac{1}{2}(m-n+1)(m-n+2)\) is the dimension of the space of homogeneous polynomials of degree \(m-n\) in three variables. But when \(m<n\) no such relations arise, because the operator \(F\left( \frac{\partial }{\partial x},\frac{\partial }{\partial y},\frac{\partial }{\partial z}\right) \) annihilates the homogeneous parts of V(x, y, z) of degree \(m< n\) entirely.

As the dimension of the space of homogeneous polynomials of degree m in three variables is \(\left( {\begin{array}{c}m+2\\ m\end{array}}\right) \), the homogeneous part of degree m of the solution V is a linear combination of at most

$$\begin{aligned} \dfrac{1}{2}(m+1)(m+2)-\dfrac{1}{2}(m-n+1)(m-n+2)=mn+1 -\dfrac{1}{2}(n-1)(n-2) \end{aligned}$$

linearly independent terms when \(m\geqslant n\), and of at most \(\frac{1}{2}(m+1)(m+2)\) independent terms when \(m<n\).
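This elementary identity can be confirmed, for instance, with a one-line SymPy check (ours):

import sympy as sp

m, n = sp.symbols('m n')
lhs = sp.Rational(1, 2)*(m + 1)*(m + 2) - sp.Rational(1, 2)*(m - n + 1)*(m - n + 2)
rhs = m*n + 1 - sp.Rational(1, 2)*(n - 1)*(n - 2)
print(sp.simplify(lhs - rhs))   # 0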

We first consider the case \(m\geqslant n\). In order to express the terms of order m in the integral form (3.4) we proceed as follows. If we take \(M:=mn+1-\dfrac{1}{2}(n-1)(n-2)\) arbitrary points \(\left[ \xi _i,\eta _i,\zeta _i \right] ,\,1\leqslant i\leqslant M\), on the curve \(\{F=0\}\) (belonging to the same domain \(\mathscr {D}\) of the uniformizing parameter), then the corresponding powers of linear forms \((\xi _i x+\eta _i y+\zeta _i z)^m\) will in general be linearly independent.

For, if not, there would be a linear relation between them, of the form

$$\begin{aligned} \displaystyle \sum _1^{M}\lambda _i(\xi _i x+\eta _i y+\zeta _i z)^m=0, \,\lambda _i\in {\mathbb {C}},\,\forall \,i. \end{aligned}$$
(3.5)

Leaving out one of the M points, say \((\xi _1,\eta _1,\zeta _1)\), we can draw a curve of the m-th order \(f(\xi ,\eta ,\zeta )=0\) through the remaining \(M-1\) points. Indeed with the notations introduced above we have

$$\begin{aligned} l(mH-\Lambda )=\deg (mH-\Lambda )+1-g+l(K-mH+\Lambda ), \end{aligned}$$

where \(\Lambda \) is the set of the \(mn-\dfrac{1}{2}(n-1)(n-2)\) remaining points, considered as an effective divisor. From this last equality we deduce that \(l(mH-\Lambda )\ge 1\), since \(\deg (mH-\Lambda )=mn-\left( mn-\frac{1}{2}(n-1)(n-2)\right) =\frac{1}{2}(n-1)(n-2)=g\). Because \(L(mH-\Lambda )\) is interpreted [4] as the vector space of homogeneous polynomials of degree m vanishing on \(\Lambda \) modulo those of degree m vanishing on C, we can ascertain that there exists a curve f of degree m passing through the \(mn-\dfrac{1}{2}(n-1)(n-2)\) remaining points, and not vanishing identically on C. We now operate on the equation (3.5) by \(f\left( \frac{\partial }{\partial x},\frac{\partial }{\partial y},\frac{\partial }{\partial z}\right) \). Then the terms corresponding to the points lying on f disappear on account of the relation

$$\begin{aligned} \displaystyle f\left( \dfrac{\partial }{\partial x}, \dfrac{\partial }{\partial y},\dfrac{\partial }{\partial z}\right) (\xi x+\eta y+\zeta z)^m=m!f(\xi ,\eta ,\zeta ) \end{aligned}$$
(3.6)

and we are left with the equation

$$\begin{aligned} \displaystyle \lambda _1f(\xi _1,\eta _1,\zeta _1)=0. \end{aligned}$$
(3.7)

Therefore either \(f(\xi _1,\eta _1,\zeta _1)=0\), in which case all the points lie on a curve of the m-th degree and they would not have been chosen arbitrarily; or \(\lambda _1=0\). But \(\lambda _1\) can be taken to be any one of the coefficients; hence all the coefficients are zero and the syzygy or linear relation (3.5) does not exist.

This shows that we have \(M=mn+1-\dfrac{1}{2}(n-1)(n-2)\) independent solutions \((s_j)_{1\leqslant j\leqslant M}\) (or tangential equations) of the equation \(F\left( \frac{\partial }{\partial x},\frac{\partial }{\partial y},\frac{\partial }{\partial z}\right) \phi (x,y,z)=0\).

Similarly, when \(m<n\) we may take \(\left( {\begin{array}{c}m+2\\ m\end{array}}\right) \) points on the curve C which do not lie on a curve of the \(m^{th}\) degree. This is possible by Corollary 3.11 and the fact that \(m<n\).

We observe that for these so chosen \(\left( {\begin{array}{c}m+2\\ m\end{array}}\right) \) points the corresponding tangential equations are linearly independent. Indeed, if there were a syzygy among their \(m^{th}\) powers then, since one can always draw a curve \(C^m\) of degree m through any \(\dfrac{1}{2}m(m+3)\) given points (by the Riemann-Roch theorem, as before), Corollary 3.11 would force all the chosen points to lie on a curve of degree m, contrary to the way they were chosen. Moreover, since \(m<n\), every homogeneous polynomial of degree m satisfies equation (3.3); hence these \(m^{th}\) powers form a basis of the space of solutions of degree m, and we obtain a linear relation between the \(m^{th}\) powers of the tangential equations of any \(\dfrac{1}{2}(m+2)(m+1)+1\) points on the curve, all of which satisfy the equation (3.3).

So the conclusion of what we have said so far is that we can find a basis of the space of homogeneous polynomials of degree m which satisfy the partial differential equation:

$$\begin{aligned} F\left( \frac{\partial }{\partial x},\frac{\partial }{\partial y}, \frac{\partial }{\partial z}\right) \phi (x,y,z)=0 \end{aligned}$$

in the form \((\xi _i x+\eta _i y+\zeta _i z)^m,\,1\leqslant i\leqslant r\) for r well-chosen points on C with \(r=mn+1-\frac{1}{2}(n-1)(n-2)\) in case \(m\geqslant n\) and \(r=\genfrac(){0.0pt}0{m+2}{2}\) when \(m<n\).

In both cases, if r denotes the number of independent solutions, we have that

$$\begin{aligned} \displaystyle (\xi (t) x+\eta (t) y+\zeta (t) z)^m =\sum _1^r(\xi _i x+\eta _i y+\zeta _i z)^m\mu _i(t) \end{aligned}$$
(3.8)

is a solution of the partial differential equation (3.3) with \(\left[ \xi ,\eta ,\zeta \right] \in C\), expressed as a function of the uniformizing parameter t.

Indeed when \(m<n\) we have \(F\left( \frac{\partial }{\partial x}, \frac{\partial }{\partial y},\frac{\partial }{\partial z}\right) (\xi x+\eta y+\zeta z)^m=0\) for degree reasons. And when \(m\geqslant n\) we have

$$\begin{aligned} F\left( \dfrac{\partial }{\partial x},\dfrac{\partial }{\partial y}, \dfrac{\partial }{\partial z}\right) (\xi _i x+\eta _i y+\zeta _i z)^m =\dfrac{m!}{(m-n)!}F(\xi _i,\eta _i,\zeta _i)(\xi _i x+\eta _i y+\zeta _i z)^{m-n}=0, \end{aligned}$$

since the points \(\left[ \xi _i,\eta _i,\zeta _i\right] \) lie on the curve C.

The \(\mu _i\) are analytic functions of t because if \((\chi _i)_{1\leqslant i\leqslant r}\) is the basis dual to \(\{(\xi _i x+\eta _i y+\zeta _i z)^m\}_{1\leqslant i\leqslant r}\) then \(\chi _i((\xi (t)x+\eta (t) y+\zeta (t) z)^m)=\mu _i(t)\).

Let us take the \(\mu _i\) as in (3.8); we remark that the \(\mu _i\) are linearly independent over \({\mathbb {C}}\) as functions of t. Indeed if not, \((\xi (t) x+\eta (t) y+\zeta (t) z)^m\) would be expressible as a linear combination of \(r-1\) solutions to equation (3.3), and we would be able to obtain a syzygy among the tangential equations of any r points on the curve. More precisely if for instance \(\mu _1\) is expressed as a linear combination of the other \((\mu _i)_{2\leqslant i\leqslant r}\) then we have a relation of the form

$$\begin{aligned} (\xi (t) x+\eta (t) y+\zeta (t) z)^m=\displaystyle \sum _1^{r-1} \alpha _j(t)H_j(x,y,z) \end{aligned}$$

for fixed \((H_j(x,y,z))_{1\leqslant j\leqslant r-1}\) (solutions to (3.3)). Let us now take r arbitrary points \(\left[ \xi ^0_k,\eta _k^0,\zeta _k^0\right] _{1\leqslant k\leqslant r}\) on the curve C in the domain of the uniformizing parameter t. Then there exist \((t_k)_{1\leqslant k\leqslant r}\) so that \(\left[ \xi ^0_k,\eta _k^0,\zeta _k^0\right] =\left[ \xi (t_k), \eta (t_k),\zeta (t_k)\right] \), \(1\leqslant k\leqslant r\). Thus the corresponding m-th powers of linear forms \((\xi _k^0 x+\eta _k^0 y+\zeta ^0_k z)^m\), \(1\leqslant k\leqslant r\), are expressed as linear combinations of the \(r-1\) fixed given solutions \(H_j\). Hence by linear algebra the \((\xi _k^0 x+\eta _k^0 y+\zeta ^0_k z)^m\) are linearly dependent. This contradicts what was shown above.

We now consider r analytic functions \((f_s)_{1\leqslant s\leqslant r}\). We have

$$\begin{aligned} \displaystyle \int (\xi (t) x+\eta (t) y+\zeta (t) z)^mf_s(t)dt =\sum _1^r(\xi _i x+\eta _i y+\zeta _i z)^m\int \mu _i(t)f_s(t)dt, \end{aligned}$$
(3.9)

and the determinant of the matrix \((\displaystyle \theta _{s,i})_{1\leqslant s\leqslant r,1\leqslant i\leqslant r}:=(\int \mu _i(t)f_s(t)dt)_{1\leqslant i\leqslant r,1\leqslant s \leqslant r}\) will, in general, be non-zero (this follows by a suitable generalization of Lemma 3.14 below).

Accordingly we may choose r constants \(\lambda _j\), so that the expressions

$$\begin{aligned} \displaystyle \lambda _1\theta _{1,i}+\lambda _2\theta _{2,i} +\ldots +\lambda _r\theta _{r,i}\qquad (i=1,2,\ldots ,r) \end{aligned}$$
(3.10)

take any r assigned values \(p_1\),\(\ldots \),\(p_r\), and we have

$$\begin{aligned} \displaystyle \int (\xi (t) x+\eta (t) y+\zeta (t) z)^m \sum _1^r(\lambda _sf_s)(t)dt=\sum _1^rp_i(\xi _i x+\eta _i y+\zeta _i z)^m. \end{aligned}$$
(3.11)

But any homogeneous polynomial of degree m which is a solution of equation (3.3) can be expressed in the form

$$\begin{aligned} \displaystyle \sum _1^rp_i(\xi _i x+\eta _i y+\zeta _i z)^m, \end{aligned}$$
(3.12)

and can therefore be put in the form

$$\begin{aligned} \displaystyle \int (\xi x+\eta y+\zeta z)^mg_m(t)dt, \,\,g_m(t)=\sum _1^r(\lambda _sf_s)(t). \end{aligned}$$
(3.13)

The series \(\displaystyle \sum _{m\geqslant 0}((\xi x+\eta y+\zeta z)^mg_m)(t)\) converges uniformly on compact sets of the domain \(\mathscr {D}\) of the uniformizing parameter t provided that \(|x|+|y|+|z|\) is small. Hence if we integrate on a compact path \(\mathscr {L}\) of \(\mathscr {D}\), we can write

$$\begin{aligned} \displaystyle V=\int \sum _{m\geqslant 0} (\xi x+\eta y+\zeta z)^mg_m(t)dt=\int \Phi (\xi x+\eta y+\zeta z,t)dt \end{aligned}$$
(3.14)

which is the desired form of the solution. \(\square \)

Lemma 3.14

Let \(\nu _1\), \(\ldots \), \(\nu _N\) be continuous functions on a segment \(\left[ a,b\right] \), with \(-\infty<a<b<+\infty \) to fix ideas, which are linearly independent over \({\mathbb {C}}\). Then there exist N continuous functions \(l_1\), \(\ldots \), \(l_N\) with

$$\begin{aligned} \det \left( \int _a^b\nu _i(u)l_j(u)du\right) \not =0. \end{aligned}$$

Proof

We define

$$\begin{aligned} F_k(s)=\int _a^b\nu _k(u)e^{su}du,s\in {\mathbb {C}}. \end{aligned}$$

\(F_k\) is entire, for every k. Moreover, \(F_1\), \(F_2\), \(\ldots \), \(F_N\) are linearly independent. Indeed, if one has a relation

$$\begin{aligned} \displaystyle \sum _{k=1}^Nc_kF_k=0 \end{aligned}$$

then \(\sum _{k=1}^Nc_k\int _a^b\nu _k(u)e^{su}du=0\) for all \(s\in {\mathbb {C}}\). Therefore, differentiating p times with respect to s, \(\int _a^b(\sum _1^Nc_k\nu _k(u))u^pe^{su}du=0\) for all \(s\in {\mathbb {C}}\) and \(p\geqslant 0\). So, by linearity and taking \(s=0\), \(\int _a^b(\sum _1^Nc_k\nu _k(u))R(u)du=0\) for every polynomial R(u). Hence by the Stone-Weierstrass theorem \(\sum ^N_1c_k\nu _k=0\), and \(c_k=0\) for every k by linear independence of the \(\nu _k\). Now the \((F_k)_{1\leqslant k\leqslant N}\) are linearly independent entire functions, so they form a basis of solutions of the linear differential equation \(W(y,F_1,\ldots , F_N)=0\), where W denotes the Wronskian. Then the Wronskian determinant of \(F_1\), \(\ldots \), \(F_N\) is not identically zero, hence there is some \(s_0\in {\mathbb {C}}\) at which it does not vanish. So taking \(l_j(u)=u^{j-1}e^{s_0u}\), \(1\leqslant j\leqslant N\), we find that

$$\begin{aligned} \det \left( \int _a^b\nu _i(u)l_j(u)du\right) \not =0. \end{aligned}$$

\(\square \)
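A concrete numerical illustration of the lemma (ours, with the arbitrary choices \(\nu _i(u)=u^{i-1}\) on \(\left[ 0,1\right] \), \(s_0=1\), and the functions \(l_j\) from the end of the proof):

import numpy as np
from scipy.integrate import quad

N, s0, a, b = 4, 1.0, 0.0, 1.0
nu = [lambda u, k=k: u**k for k in range(N)]                 # nu_i(u) = u^(i-1)
l  = [lambda u, j=j: u**j * np.exp(s0*u) for j in range(N)]  # l_j(u) = u^(j-1) e^(s0 u)

A = np.array([[quad(lambda u, i=i, j=j: nu[i](u) * l[j](u), a, b)[0]
               for j in range(N)] for i in range(N)])
print(np.linalg.det(A))   # a non-zero number, as Lemma 3.14 predicts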

4 The Kernel of the Partial Differential Operator \(F\left( \frac{\partial }{\partial x_1},\frac{\partial }{\partial x_2},\ldots ,\frac{\partial }{\partial x_d}\right) \)

In this section we prove that one can always represent any real analytic function in the kernel of \(F\left( \frac{\partial }{\partial x_1},\frac{\partial }{\partial x_2},\ldots ,\frac{\partial }{\partial x_d}\right) \) on a sufficiently small open set by means of a definite integral. Here \(F(x_1,\ldots ,x_d)\) is a homogeneous polynomial of degree n whose associated hypersurface is smooth. Before doing so we introduce some notation.

Let \(X\subset {\mathbb {C}}P_{d}\) be a projective variety with vanishing ideal \(I(X)\subset {\mathbb {C}}[z_0 ,\ldots ,z_d]\). For \(m\geqslant 1\), we denote by \(I(X)_m\) the m-th homogeneous part \(I(X)\cap {\mathbb {C}}[z_0,\ldots ,z_d]_m\) of I(X), where \({\mathbb {C}}[z_0,\ldots ,z_d]_m\) is the subset of homogeneous polynomials of degree m in \({\mathbb {C}}[z_0,\ldots ,z_d]\). Since I(X) is a homogeneous ideal, the homogeneous coordinate ring S(X) is a graded ring with decomposition

$$\begin{aligned} S(X)=\displaystyle \bigoplus _{m\geqslant 0}S(X)_m \end{aligned}$$

where \(S(X)_m={\mathbb {C}}[z_0 ,\ldots ,z_d]_m/I(X)_m\). Each homogeneous part \(I(X)_m\) is a linear subspace of the \(\left( {\begin{array}{c}d+m\\ d\end{array}}\right) \)-dimensional \({\mathbb {C}}\)-vector space \({\mathbb {C}}[z_0,\ldots ,z_d]_m\). The dimension of \(I(X)_m\) is the number of independent hypersurfaces of degree m containing X.

Suppose now that \(X\subset {\mathbb {C}}P_d\) is a hypersurface given by some irreducible homogeneous polynomial F of degree n. The m-th homogeneous part \(I(X)_m\) then consists of all homogeneous polynomials of degree m divisible by F, so we can identify \(I(X)_m\) with \({\mathbb {C}}[z_0,\ldots ,z_d]_{m-n}\) for \(m\geqslant n\), and therefore

$$\begin{aligned} \dim (I(X)_m)=\genfrac(){0.0pt}0{m-n+d}{d}. \end{aligned}$$

Let us introduce the following theorem of Serret [14, p. 99-100]

Theorem 4.1

(Serret) Let \(p=\left[ a_0,a_1,\ldots ,a_d\right] \) be a point of \({\mathbb {C}}P_d\) and associate to it the linear form \(l_p(z_0,z_1,\ldots ,z_d)=a_0z_0 +a_1z_1+\ldots +a_dz_d\). Then the necessary and sufficient condition that every hypersurface \(C^r\), of given degree r, which passes through \(q-1\) of a set of q given points of \({\mathbb {C}}P_d\) should pass through the remaining point, is that there should be a linear relation (or syzygy) connecting the \(r^{th}\) powers of the linear forms (or, by abuse of language, tangential equations) associated to the given points.

Proof

The proof is similar to the case \(d=2\). See [14, p. 99-100]. \(\square \)

Corollary 4.2

When \(q=\left( {\begin{array}{c}m+d\\ d\end{array}}\right) \) Serret’s theorem gives the necessary and sufficient condition that q points should lie on a hypersurface of degree m.

Proof

Indeed there always exists a hypersurface of degree m which passes through any \(q-1=\left( {\begin{array}{c}m+d\\ d\end{array}}\right) -1\) given points of the fixed smooth hypersurface C. So if there is a linear relation between the \(m^{th}\) powers of the linear forms associated to the q points, then necessarily all the q points lie on a hypersurface of degree m. The converse is the necessity statement of Serret’s theorem. \(\square \)

Theorem 4.3

If \(\left[ \xi _1,\xi _2,\ldots ,\xi _d\right] \) are the coordinates of a point on the smooth projective hypersurface X of equation \(F(\xi _1,\xi _2,\ldots ,\xi _d)=0\) and of degree \(n\geqslant 2\), expressed as functions of uniformizing parameters \(t_1,t_2,\ldots ,t_{d-1}\), then any real analytic solution of the equation

$$\begin{aligned} \displaystyle F\left( \dfrac{\partial }{\partial x_1}, \dfrac{\partial }{\partial x_2},\ldots , \dfrac{\partial }{\partial x_d}\right) \phi (x_1,x_2,\ldots ,x_d)=0, \,d\geqslant 4 \end{aligned}$$
(4.1)

on a sufficiently small open set can be put in the form

$$\begin{aligned} \displaystyle V=\int \Phi (\xi _1 x_1+\xi _2 x_2+\ldots +\xi _d x_d, t_1,t_2,\ldots ,t_{d-1})dt_1dt_2\ldots dt_{d-1}, \end{aligned}$$
(4.2)

for a suitable region of integration. The function \(\Phi \) depends naturally on F. It also depends on V and determines it, if the necessary differentiations are allowed.

Proof

To prove this we choose as origin a point in the vicinity of which V is a real analytic function of \(x_1\), \(x_2,\ldots ,\) \(x_d\): we can expand V as a power series in \(x_1\), \(x_2,\ldots ,\) \(x_d\) converging absolutely and uniformly within a certain region.

Operating on this series with \(F\left( \frac{\partial }{\partial x_1},\frac{\partial }{\partial x_2},\ldots ,\frac{\partial }{\partial x_d}\right) \) and equating to zero the coefficients of the various powers, we find, when \(m\geqslant n\) (looking at the contribution of the homogeneous part of V of degree m),

$$\begin{aligned} \genfrac(){0.0pt}0{m-n+(d-1)}{d-1} \end{aligned}$$

relations among the coefficients of the homogeneous part of degree m, since the action of \(F\left( \frac{\partial }{\partial x_1},\frac{\partial }{\partial x_2},\ldots ,\frac{\partial }{\partial x_d}\right) \) on it gives a homogeneous polynomial of degree \(m-n\), and \(\left( {\begin{array}{c}m-n+(d-1)\\ d-1\end{array}}\right) \) represents the dimension of the space of homogeneous polynomials of degree \(m-n\) in d variables. But when \(m<n\) no such relations exist, because the operator \(F\left( \frac{\partial }{\partial x_1},\frac{\partial }{\partial x_2},\ldots ,\frac{\partial }{\partial x_d}\right) \) annihilates the homogeneous parts of \(V(x_1,x_2,\ldots ,x_d)\) of degree \(m < n\) entirely.

As the dimension of the space of homogeneous polynomials of degree m in d variables is \(\left( {\begin{array}{c}m+(d-1)\\ (d-1)\end{array}}\right) \), the terms of order m are a linear combination of at most

$$\begin{aligned} M_1=\genfrac(){0.0pt}0{m+(d-1)}{(d-1)}-\genfrac(){0.0pt}0{m-n+(d-1)}{d-1}, \end{aligned}$$

linearly independent elements when \(m\geqslant n\); and of at most \(\left( {\begin{array}{c}m+(d-1)\\ d-1\end{array}}\right) \) independent terms when \(m<n\).
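This dimension count is easy to test on a sample symbol; the following sympy sketch does so for the four-variable Laplacian (so \(d=4\), \(n=2\)) in degree \(m=3\), a purely illustrative choice.

```python
# Check of the count M_1 = C(m+d-1, d-1) - C(m-n+d-1, d-1) for one sample symbol:
# the four-variable Laplacian (d = 4, n = 2) in degree m = 3. The kernel of
# F(d/dx) on homogeneous cubics should have dimension M_1 = 20 - 4 = 16.
import sympy as sp
from itertools import combinations_with_replacement

x = sp.symbols('x1:5')                       # x1, x2, x3, x4
d, n, m = 4, 2, 3

def mono_basis(deg):
    return [sp.Mul(*c) for c in combinations_with_replacement(x, deg)]

def F_del(p):
    # F(d/dx) for the chosen symbol F = x1^2 + x2^2 + x3^2 + x4^2, i.e. the Laplacian
    return sum(sp.diff(p, v, 2) for v in x)

src, tgt = mono_basis(m), mono_basis(m - n)
A = sp.Matrix([[sp.Poly(F_del(b), *x).coeff_monomial(mono) for b in src] for mono in tgt])
M1 = sp.binomial(m + d - 1, d - 1) - sp.binomial(m - n + d - 1, d - 1)
print(len(src) - A.rank(), M1)               # both equal 16
```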

In order to express the terms of order m in the form (4.2) in the case \(m\geqslant n\) we proceed as follows. Take \(M_1\) arbitrary points on the hypersurface \(F(\xi _1,\xi _2,\ldots ,\xi _d)=0\) (belonging to the same domain \(\mathscr {D}\) of the uniformizing parameters \(t_1,\ldots ,t_{d-1}\)); then the corresponding quantities \((\xi _1 x_1+\xi _2 x_2+\ldots +\xi _d x_d)^m\) will in general be linearly independent.

For, if not, there would be a linear relation between them; let it be

$$\begin{aligned} \displaystyle \sum _1^{M_1}\lambda _i(\xi _1^i x_1+\xi _2^i x_2+\ldots +\xi _d^i x_d)^m=0. \end{aligned}$$
(4.3)

Leaving out one of the points, say \((\xi _1^1,\xi _2^1,\ldots ,\xi _d^1)\), we can draw a hypersurface of the m-th order \(f(\xi _1,\xi _2,\ldots ,\xi _d)=0\) through the remainder and not vanishing identically on X. Indeed with the notations introduced above we have

$$\begin{aligned} \dim (I(X)_m)=\genfrac(){0.0pt}0{m-n+d-1}{d-1} \end{aligned}$$

and this number is positive for \(d\geqslant 4\) and \(m\geqslant n\). This quantity is the dimension of the space of forms of degree m vanishing identically on X. Since we impose passage through \(M_1-1\) points, the subspace of the forms of degree m in the variables \(z_1,z_2,\ldots ,z_d\) vanishing at these points has codimension at most \(M_1-1\) in the space of all forms of degree m. So the dimension of the space of forms of degree m in the d variables \(z_1,z_2,\ldots ,z_d\) passing through the \(M_1-1\) points is at least \(\dim {\mathbb {C}}[z_1,z_2,\ldots ,z_d]_m-M_1+1=\left( {\begin{array}{c}m-n+(d-1)\\ d-1\end{array}}\right) +1>\dim (I(X)_m)\). So there is at least one non-zero homogeneous polynomial f of degree m in the d variables vanishing at the chosen \(M_1-1\) points but not vanishing identically on X.

We now operate on the equation (4.3) with \(f\left( \frac{\partial }{\partial x_1},\frac{\partial }{\partial x_2},\ldots ,\frac{\partial }{\partial x_d}\right) \). Then the terms corresponding to the points through which f passes disappear on account of the relation

$$\begin{aligned} \displaystyle f\left( \dfrac{\partial }{\partial x_1}, \dfrac{\partial }{\partial x_2},\ldots , \dfrac{\partial }{\partial x_d}\right) (\xi _1 x_1+\xi _2 x_2+\ldots +\xi _d x_d)^m =m!f(\xi _1 ,\xi _2 ,\ldots ,\xi _d), \end{aligned}$$
(4.4)

valid because f is homogeneous of degree m, and we are left with the equation

$$\begin{aligned} \displaystyle \lambda _1m!f(\xi _1^1,\xi _2^1,\ldots ,\xi _d^1)=0. \end{aligned}$$
(4.5)

Therefore either \(f(\xi _1^1,\xi _2^1,\ldots ,\xi _d^1)=0\), in which case all the \(M_1\) points lie on a hypersurface of the m-th degree not containing X and they would not have been chosen arbitrarily; or \(\lambda _1=0\). But \(\lambda _1\) can be taken to be any one of the coefficients; hence all the coefficients are zero and the syzygy or linear relation (4.3) does not exist.

Thus we have \(M_1\) independent solutions (or tangential equations) of the equation

$$\begin{aligned} F\left( \dfrac{\partial }{\partial x_1}, \dfrac{\partial }{\partial x_2},\ldots , \dfrac{\partial }{\partial x_d}\right) \phi (x_1,x_2,\ldots ,x_d)=0. \end{aligned}$$

In other words, there exists a linear relation between the m-th powers of the tangential equations of any \(M_1+1\) sufficiently general points on the hypersurface.

Similarly, when \(m<n\) we may take \(\left( {\begin{array}{c}m+(d-1)\\ d-1\end{array}}\right) \) points on the hypersurface which do not lie on a hypersurface of the \(m^{th}\) degree. This is possible by Corollary 4.2 and the fact that \(m<n\).

Having so chosen the \(\left( {\begin{array}{c}m+(d-1)\\ d-1\end{array}}\right) \) points on X we observe that the corresponding tangential equations are linearly independent. This follows from the fact that one can always draw a hypersurface of degree m through any \(\left( {\begin{array}{c}m+(d-1)\\ d-1\end{array}}\right) -1\) given points on X.

Again we have the corresponding assertion that, when \(m<n\), there is a linear relation between the \(m^{th}\) powers of the tangential equations of any \(\left( {\begin{array}{c}m+(d-1)\\ d-1\end{array}}\right) +1\) points on the hypersurface, and these powers all satisfy the equation (4.1).

So the conclusion of what we have said so far is that we can find a basis of the space of homogeneous polynomials which satisfy (4.1) in the form \((\xi _1^i x_1+\xi _2^i x_2+\ldots +\xi _d^i x_d)^m,\,1\leqslant i\leqslant r\) for r well-chosen points on X with \(r=M_1\) in case \(m\geqslant n\) and \(r=\left( {\begin{array}{c}m+(d-1)\\ d-1\end{array}}\right) \) when \(m<n\).

Taking the two cases together and denoting by r the number of independent solutions, we see that

$$\begin{aligned} \displaystyle&(\xi _1(t_1,t_2,\ldots ,t_{d-1}) x_1 +\xi _2(t_1,t_2,\ldots ,t_{d-1}) x_2+\ldots +\xi _d (t_1,t_2,\ldots ,t_{d-1})x_d)^m\nonumber \\&\quad =\sum _1^r(\xi _1^i x_1+\xi _2^i x_2+\ldots +\xi _d^i x_d)^m\mu _i(t_1,t_2,\ldots ,t_{d-1}) \end{aligned}$$
(4.6)

is a solution of the pde (4.1) with \(\left[ \xi _1,\xi _2,\ldots ,\xi _d\right] \in X\), expressed as a function of the uniformizing parameters \(t_1,t_2,\ldots ,t_{d-1}\).

Indeed when \(m<n\) then \(F\left( \frac{\partial }{\partial x_1},\frac{\partial }{\partial x_2},\ldots ,\frac{\partial }{\partial x_d}\right) (\xi _1 x_1+\xi _2 x_2+\ldots +\xi _d x_d)^m=0\) for degree reasons. And in case \(m\geqslant n\) we have

$$\begin{aligned}&F\left( \dfrac{\partial }{\partial x_1},\dfrac{\partial }{\partial x_2}, \ldots ,\dfrac{\partial }{\partial x_d}\right) (\xi _1^i x_1+\xi _2^i x_2 +\ldots +\xi _d^i x_d)^m\nonumber \\&\quad =\dfrac{m!}{(m-n)!}F(\xi _1^i,\xi _2^i ,\ldots ,\xi _d^i)(\xi _1^i x_1+\xi _2^i x_2 +\ldots +\xi _d^i x_d)^{m-n}, \end{aligned}$$
(4.7)

which vanishes because the points \(\left[ \xi _1^i,\xi _2^i,\ldots ,\xi _d^i\right] \) lie on X.
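The identity (4.7) can be verified symbolically. The sketch below (in sympy) does so for the illustrative symbol \(F=X_1X_2X_3-X_4^3\) and the rational family of points \(\xi (t_1,t_2,t_3)=(t_1,t_2,t_3^3/(t_1t_2),t_3)\) of X; in particular both sides vanish identically, as claimed.

```python
# Symbolic check of (4.7) for the illustrative symbol F = X1*X2*X3 - X4^3 (n = 3)
# and the rational family xi(t1, t2, t3) = (t1, t2, t3^3/(t1*t2), t3) of points of X:
# both sides vanish identically, since F(xi) = 0.
import sympy as sp

x1, x2, x3, x4, t1, t2, t3 = sp.symbols('x1 x2 x3 x4 t1 t2 t3')
m, n = 5, 3
F = lambda a, b, c, e: a*b*c - e**3
xi = (t1, t2, t3**3/(t1*t2), t3)
lin = sum(a*v for a, v in zip(xi, (x1, x2, x3, x4)))

lhs = sp.diff(lin**m, x1, x2, x3) - sp.diff(lin**m, x4, 3)       # F(d/dx)(xi.x)^m
rhs = sp.factorial(m)/sp.factorial(m - n) * F(*xi) * lin**(m - n)
print(sp.simplify(lhs - rhs), sp.simplify(lhs))                  # 0 0
```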

The \(\mu _i\) are analytic functions of \((t_1,t_2,\ldots ,t_{d-1})\): if \((\chi _i)_{1\leqslant i\leqslant r}\) is the basis dual to \(\{(\xi _1^i x_1+\xi _2^i x_2+\ldots +\xi _d^i x_d)^m\}_{1\leqslant i\leqslant r}\), then \(\mu _i(t_1,t_2,\ldots ,t_{d-1})=\chi _i\left( (\xi _1(t_1,\ldots ,t_{d-1}) x_1+\xi _2(t_1,\ldots ,t_{d-1}) x_2+\ldots +\xi _d(t_1,\ldots ,t_{d-1})x_d)^m\right) \).

Let us take the \(\mu _i\) as in (4.6); we remark that the \(\mu _i\) are linearly independent over \({\mathbb {C}}\) as functions of \((t_1,t_2,\ldots ,t_{d-1})\) by a reasoning similar to the one given in the proof of Theorem 3.13.
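As an illustration of the expansion (4.6) and of the analyticity of the \(\mu _i\), the following sympy sketch treats the simplest instance \(m=1<n\) for the illustrative symbol \(F=X_1X_2X_3-X_4^3\); here the space of degree-one solutions is the whole space of linear forms, so \(r=4\). Four points of X not lying on a hyperplane are chosen (the particular points are arbitrary), and the \(\mu _i\) are obtained by solving a linear system, which exhibits them as rational, hence analytic, functions of the parameters.

```python
# The simplest instance m = 1 < n of the expansion (4.6), for the illustrative
# symbol F = X1*X2*X3 - X4^3 and the rational parametrization
# xi(t1, t2, t3) = (t1, t2, t3^3/(t1*t2), t3) of the affine hypersurface X.
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')
xi = lambda a, b, c: sp.Matrix([a, b, sp.S(c)**3/(a*b), c])   # a point of X

base_pts = [(1, 1, 1), (2, 1, 1), (1, 2, 1), (1, 1, 2)]       # arbitrary choices on X
B = sp.Matrix.hstack(*[xi(*p) for p in base_pts])             # 4 x 4 and invertible
mu = sp.simplify(B.solve(xi(t1, t2, t3)))                     # the mu_i of (4.6) for m = 1
print(mu.T)

# sanity check: xi(t) . x = sum_i mu_i(t) * (xi^i . x) as linear forms in x
print(sp.simplify(B * mu - xi(t1, t2, t3)))                   # the zero vector
```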

We consider r functions \((f_s)_{1\leqslant s\leqslant r}\) of \(d-1\) variables \((t_1,t_2,\ldots ,t_{d-1})\). We have

$$\begin{aligned} \displaystyle&\int (\xi _1 x_1+\xi _2 x_2+\ldots +\xi _d x_d)^mf_s (t_1,t_2,\ldots t_{d-1})dt_1dt_2\ldots dt_{d-1}\nonumber \\&\quad =\sum _1^r(\xi _1^i x_1+\xi _2^i x_2+\ldots +\xi _d^i x_d)^{m} \int \mu _if_s(t_1,t_2,\ldots t_{d-1})dt_1dt_2\ldots dt_{d-1} \end{aligned}$$
(4.8)

and the vectors \((\displaystyle \theta _{s,i})_{1\leqslant i\leqslant r}:=(\int \mu _if_s(t_1,t_2,\ldots t_{d-1})dt_1dt_2\ldots dt_{d-1})_{1\leqslant i\leqslant r}\), for \(s=1,\ldots ,r\), will in general be linearly independent (by a several-variable analogue of Lemma 3.14).

Accordingly we may choose r constants \(\lambda _1,\lambda _2,\ldots ,\lambda _r\) so that the expressions

$$\begin{aligned} \displaystyle \lambda _1\theta _{1,i}+\lambda _2\theta _{2,i}+\ldots +\lambda _r\theta _{r,i}\qquad (i=1,2,\ldots ,r) \end{aligned}$$
(4.9)

take any r assigned values \(p_1\),\(\ldots \),\(p_r\), and we have

$$\begin{aligned} \displaystyle&\int (\xi _1 x_1+\xi _2 x_2+\ldots +\xi _d x_d)^m \sum _1^r\lambda _sf_s(t_1,t_2,\ldots ,t_{d-1})dt_1dt_2\ldots dt_{d-1}\nonumber \\&\quad =\sum _1^rp_i(\xi _1^i x_1+\xi _2^i x_2+\ldots +\xi _d^i x_d)^m. \end{aligned}$$
(4.10)

But any homogeneous polynomial solution of (4.1) of degree m can be expressed in the form

$$\begin{aligned} \displaystyle \sum _1^rp_i(\xi _1^i x_1+\xi _2^i x_2+\ldots +\xi _d^i x_d)^m, \end{aligned}$$
(4.11)

and therefore admits the representation

$$\begin{aligned} \displaystyle \int (\xi _1 x_1+\xi _2 x_2+\ldots +\xi _d x_d)^mg_m (t_1,t_2,\ldots ,t_{d-1})dt_1dt_2\ldots dt_{d-1}, \end{aligned}$$
(4.12)

where we have set \(g_m(t_1,t_2,\ldots ,t_{d-1}):=\sum _1^r\lambda _sf_s(t_1,t_2,\ldots ,t_{d-1})\).

The series

$$\begin{aligned} \displaystyle \sum _{m\geqslant 0}(\xi _1(t_1,\ldots ,t_{d-1})x_1+\xi _2(t_1,\ldots ,t_{d-1}) x_2+\ldots +\xi _d(t_1,\ldots ,t_{d-1}) x_d)^m\,g_m(t_1,t_2,\ldots ,t_{d-1}) \end{aligned}$$

converges uniformly on compact sets of the domain \(\mathscr {D}\) of the uniformizing parameters \(t_1\), \(t_2\), \(\ldots ,t_{d-1}\) provided that \(|x_1|+|x_2|+\ldots +|x_d|\) is small. Hence if we integrate on a compact submanifold \(\mathscr {S}\) of \(\mathscr {D}\), we can write

$$\begin{aligned} \displaystyle V&=\int \sum _{m\geqslant 0}(\xi _1 x_1+\ldots +\xi _d x_d)^m g_m(t_1,\ldots ,t_{d-1})dt_1\ldots dt_{d-1}\nonumber \\&=\int \Phi (\xi _1 x_1+\xi _2 x_2+\ldots +\xi _d x_d, t_1,t_2, \ldots ,t_{d-1})dt_1dt_2\ldots dt_{d-1}. \end{aligned}$$
(4.13)

This is the desired form of the solution.

This being done for all values of m, our series for V takes the required form

$$\begin{aligned} \displaystyle V(x_1,\ldots ,x_d)=\int \Phi (\xi _1 x_1+\xi _2 x_2+\ldots +\xi _d x_d, t_1,t_2, \ldots ,t_{d-1})dt_1dt_2\ldots dt_{d-1}. \end{aligned}$$
(4.14)

\(\square \)

Example 4.4

We treat the example of the partial differential equation

$$\begin{aligned} \displaystyle \dfrac{\partial ^{n-1}}{\partial x_1\partial x_2 \ldots \partial x_{n-1}}\phi (x_1,x_2,\ldots ,x_{n}) =\dfrac{\partial ^{n-1}}{\partial x_n^{n-1}}\phi (x_1,x_2,\ldots ,x_n). \end{aligned}$$

Using the following embedding of \(({\mathbb {C}}^{\times })^{n-2}\) into the characteristic variety \(z_1z_2\ldots z_{n-1}=z_n^{n-1}\) associated to the equation:

$$\begin{aligned} (t_1,t_2,\ldots ,t_{n-2})\mapsto \left( t_1,t_2,\ldots ,t_{n-2}, \frac{1}{t_1t_2\ldots t_{n-2}},1\right) \end{aligned}$$

we obtain an integral representation

$$\begin{aligned} \displaystyle \int _\mathscr {S}\Phi \left( x_1t_1+x_2t_2+ \ldots +x_{n-2}t_{n-2}+\frac{x_{n-1}}{t_1t_2 \ldots t_{n-2}}+x_n,t_1,t_2,\ldots ,t_{n-2}\right) dt_{1}dt_{2}\ldots dt_{n-2} \end{aligned}$$

for a suitable region \(\mathscr {S}\) in \(({\mathbb {C}}^{\times })^{n-2}\).
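For the smallest case \(n=3\) of this example the representation can be verified directly: the sympy sketch below checks that any function of the single combination \(t x_1+x_2/t+x_3\) satisfies \(\frac{\partial ^2\phi }{\partial x_1\partial x_2}=\frac{\partial ^2\phi }{\partial x_3^2}\), so that integrals of such functions over t do as well. The test profiles are arbitrary choices.

```python
# Check of the smallest case n = 3: for every value of the parameter t, a function
# of the single combination t*x1 + x2/t + x3 satisfies d^2/dx1dx2 = d^2/dx3^2,
# hence so does any integral of such expressions over t.
import sympy as sp

x1, x2, x3, t = sp.symbols('x1 x2 x3 t')
w = t*x1 + x2/t + x3

for profile in (lambda v: v**7, sp.exp, sp.cos):      # arbitrary test profiles
    phi = profile(w)
    residual = sp.diff(phi, x1, x2) - sp.diff(phi, x3, 2)
    print(sp.simplify(residual))                      # 0 for each profile
```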

We consider now the example of the partial differential equation

$$\begin{aligned} \left( \dfrac{\partial ^3}{\partial x^3}+\dfrac{\partial ^3}{\partial y^3} +\dfrac{\partial ^3}{\partial z^3}\right) \phi (x,y,z)=0. \end{aligned}$$

We explain how a parametrization of the Fermat cubic \(x^3+y^3=1\) arises, following Dixon [3]. A natural idea to parametrize this curve is to find two meromorphic (possibly multivalued) functions c(u), s(u) such that

$$\begin{aligned} c^3(u)+s^3(u)=1. \end{aligned}$$

We require, following [3], that they satisfy the nonlinear differential system

$$\begin{aligned}&s^\prime =c^2,\,\,c^\prime =-s^2\nonumber \\&s(0)=0,\,\,c(0)=1. \end{aligned}$$
(4.15)

These functions are analytic near the origin by the theory of ordinary differential equations. Moreover

$$\begin{aligned} s^3(u)+c^3(u)=1 \end{aligned}$$

because

$$\begin{aligned} (s^3+c^3)^\prime =3s^2c^2-3c^2s^2=0 \end{aligned}$$

and because the initial conditions give \(s^3(0)+c^3(0)=1\). So (s(u), c(u)) gives a parametrization of the curve \(x^3+y^3=1\) near the point (0, 1). Dixon established that the functions are meromorphic in the whole of the complex plane and doubly periodic (that is, elliptic), hence they provide a global parametrization of the Fermat cubic. More precisely, one can solve the differential system for s(u) and c(u) as usual by elimination:

$$\begin{aligned} s^\prime =c^2\Longrightarrow s^{\prime \prime }=2cc^\prime =-2cs^2 =-2s^2\sqrt{s^\prime }. \end{aligned}$$

This gives

$$\begin{aligned} s^{\prime \prime }\sqrt{s^\prime }=-2s^2s^\prime . \end{aligned}$$

Thus after one integration we have

$$\begin{aligned} \dfrac{2}{3}(s^\prime )^{3/2}=-\dfrac{2}{3}s^3+{\mathcal {K}}. \end{aligned}$$

Since \(s(0)=0\), \(s^\prime (0)=1\) we get \({\mathcal {K}}=2/3\) and

$$\begin{aligned} (s^{\prime })^{3/2}=-s^3+1. \end{aligned}$$

This gives

$$\begin{aligned}&\dfrac{ds}{du}=(1-s^3)^{2/3},\\&\dfrac{du}{ds}=(1-s^3)^{-2/3}\Longrightarrow du=(1-s^3)^{-2/3}ds. \end{aligned}$$

So

$$\begin{aligned} \displaystyle u=\int _0^udz=\displaystyle \int _0^{s(u)} \dfrac{dt}{(1-t^3)^{2/3}}. \end{aligned}$$

Similarly we obtain

$$\begin{aligned} u=\int _0^udz=\displaystyle \int _{c(u)}^1\dfrac{dt}{(1-t^3)^{2/3}}. \end{aligned}$$
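These inversion formulas are easy to test numerically; the following sketch (in Python with mpmath) integrates the system (4.15) and checks the first of the two formulas at one sample value of u, an arbitrary illustrative choice.

```python
# Numerical cross-check of u = int_0^{s(u)} (1 - t^3)^(-2/3) dt: we solve the
# system (4.15) with mpmath's ODE integrator and compare at u = 0.4.
import mpmath as mp

sc = mp.odefun(lambda u, y: [y[1]**2, -y[0]**2], 0, [0, 1])   # y = [s, c]
u0 = mp.mpf('0.4')
s_u0 = sc(u0)[0]
integral = mp.quad(lambda t: (1 - t**3)**(-mp.mpf(2)/3), [0, s_u0])
print(u0, integral)   # the two values agree to working precision
```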

Expanding \((1-t^3)^{-2/3}\) near 0 (respectively near 1) and using Lagrange inversion we arrive at the formulas given by

$$\begin{aligned}&s(u)=u-4\frac{u^4}{4!}+160\frac{u^7}{7!} -20800\frac{u^{10}}{10!}+6476800\frac{u^{13}}{13!}-\ldots \nonumber \\&c(u)=1-2\frac{u^3}{3!}+40\frac{u^6}{6!}-3680\frac{u^{9}}{9!} +880000\frac{u^{12}}{12!}-\ldots . \end{aligned}$$
(4.16)
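The coefficients in (4.16) can be regenerated directly from the system (4.15); the sympy sketch below does so by Picard iteration with truncation at a fixed order (the truncation order and the number of iterations are implementation choices).

```python
# Recompute the Taylor coefficients of s(u) and c(u) from s' = c^2, c' = -s^2,
# s(0) = 0, c(0) = 1 by Picard iteration with truncation, as a check of (4.16).
import sympy as sp

u = sp.symbols('u')
N = 16                                              # truncation order, enough for u^13
cut = lambda p: (sp.expand(p) + sp.O(u**N)).removeO()

s, c = sp.Integer(0), sp.Integer(1)
for _ in range(6):                                  # six iterations suffice below order 16
    s = sp.integrate(cut(c**2), u)                  # s(0) = 0 is built in
    c = 1 - sp.integrate(cut(s**2), u)              # c(0) = 1

print(cut(s))   # coefficients -1/6, 2/63, -13/2268, 23/22113 match (4.16)
print(cut(c))   # coefficients -1/3, 1/18, -23/2268, 25/13608 = 880000/12! match (4.16)
```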

Using this we have an integral representation for the solution

$$\begin{aligned} V(x,y,z)=\displaystyle \int _\gamma \Phi (c(u)x+s(u)y-z,u)du \end{aligned}$$

for a suitable path \(\gamma \).
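Finally, one can verify symbolically that such integrands do solve the equation: for any point (c, s) of the Fermat cubic \(c^3+s^3=1\), the power \((cx+sy-z)^m\) is annihilated by \(\frac{\partial ^3}{\partial x^3}+\frac{\partial ^3}{\partial y^3}+\frac{\partial ^3}{\partial z^3}\). A short sympy sketch, with \(m=6\) as an illustrative choice:

```python
# Check that (c*x + s*y - z)^m is annihilated by d^3/dx^3 + d^3/dy^3 + d^3/dz^3
# whenever c^3 + s^3 = 1; here m = 6 and c, s are treated as symbols.
import sympy as sp

x, y, z, c, s = sp.symbols('x y z c s')
m = 6
w = (c*x + s*y - z)**m
res = sp.diff(w, x, 3) + sp.diff(w, y, 3) + sp.diff(w, z, 3)
print(sp.factor(res))                          # factors as 120*(c**3 + s**3 - 1)*(c*x + s*y - z)**3
print(sp.expand(res.subs(s**3, 1 - c**3)))     # 0 on the Fermat cubic
```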