1 Introduction

Let \((X,x)\) and \((Y,y)\) be two germs of complex analytic manifolds of respective dimensions n and m. We will consider two nonzero holomorphic functions \(f\in \mathcal {O}_{X,x}\) and \(g\in \mathcal {O}_{Y,y}\), not necessarily reduced, and their Thom-Sebastiani sum \(h:=f+g\in \mathcal {O}_{X\times Y,(x,y)}\).

Let s be a dummy variable. The Bernstein module of f is the \(\mathcal {D}_{X,x}[s]\)-module \({\mathcal {B}}_f=\mathcal {O}_{X,x}[s,f^{-1}]\cdot f^s\), with the usual action of \(\mathcal {D}_{X,x}\). The Bernstein-Sato polynomial of f is then the monic generator of the ideal of polynomials \(b(s)\in \mathbb {C}[s]\) such that

$$\begin{aligned} P(s)f\cdot f^s=b(s)\cdot f^s \end{aligned}$$

in \({\mathcal {B}}_f\), for some \(P(s)\in \mathcal {D}_{X,x}[s]\), or equivalently, the minimal polynomial of the action of s on the quotient module \(\mathcal {D}_{X,x}[s]\cdot f^s/\mathcal {D}_{X,x}[s]\langle f\rangle \cdot f^s\). We will denote it by \(b_f(s)\).
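For instance, for the one-variable function \(f=x^2\) (a standard example, which reappears in the examples of Sect. 3), a direct computation gives

$$\begin{aligned} \tfrac{1}{4}\partial _x^2\, f\cdot f^s=\tfrac{1}{4}\partial _x^2\, x^{2s+2}=(s+1)\bigl (s+\tfrac{1}{2}\bigr )x^{2s}, \end{aligned}$$

so \((s+1)(s+1/2)\) satisfies such a functional equation with \(P(s)=\frac{1}{4}\partial _x^2\); in this case it is in fact the full Bernstein-Sato polynomial, \(b_{x^2}(s)=(s+1)(s+1/2)\).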

As long as f is not invertible, it is well known that \(s+1\) divides \(b_f(s)\), so we can define the reduced Bernstein-Sato polynomial of f as \(\tilde{b}_f(s):=b_f(s)/(s+1)\), which is also the minimal polynomial of the action of s on the \(\mathcal {D}_{X,x}[s]\)-module \(\mathcal {M}_f:=\mathcal {D}_{X,x}[s]\cdot f^s/\mathcal {D}_{X,x}[s]J_f\cdot f^s\) (see, for instance, [3, Lemma 1.1] and the commentary thereafter), where \(J_f\) is the “true” Jacobian ideal of f; in local coordinates \(x_1,\ldots ,x_n\), we have \(J_f:=\langle f,f'_{x_1},\ldots ,f'_{x_n}\rangle \subseteq \mathcal {O}_{X,x}\). In that sense, we have a new functional equation of the form

$$\begin{aligned} P(s)\cdot f^s=\tilde{b}_f(s)\cdot f^s \end{aligned}$$

in \({\mathcal {B}}_f\), where now P(s) belongs to \(\mathcal {D}_{X,x}[s]J_f\).

The polynomial \(\tilde{b}_f(s)\) is also called the microlocal Bernstein-Sato polynomial; we will see the reason in Sect. 2.

Everything in the paragraphs above can be analogously defined for g and h, and thus we may wonder about the relation between \(\tilde{b}_f\), \(\tilde{b}_g\) and \(\tilde{b}_h\). Before continuing, let us state some notation and define an important notion.

Given a polynomial \(p(s)\in \mathbb {C}[s]\), we will denote by \(R_p\subseteq \mathbb {C}\) the set of the opposites of its roots. For any given \(\alpha \in R_p\), we will denote by \(m_\alpha (p)\) the multiplicity of \(-\alpha \) as a root of p.

Definition 1.1

Let \(a(s),b(s)\in \mathbb {C}[s]\) be two nonzero polynomials. Let \((a*b)(s)=a(s)*b(s)\in \mathbb {C}[s]\) be the monic polynomial determined by \(R_{a*b}=R_a+R_b\) and the multiplicities

$$\begin{aligned} m_\gamma (a*b)=\max \{m_\alpha (a)+m_\beta (b)-1:\,\alpha +\beta =\gamma \}, \end{aligned}$$

for every \(\gamma \in R_{a*b}\). We will call \(a*b\) the star operation of a and b.

We will use the convention that adding the empty set to any other one gives the empty set. Therefore, if, for example, f defines a smooth divisor, in such a way that \(\tilde{b}_f=1\), then \(\tilde{b}_f*\tilde{b}_g=1=\tilde{b}_h\) as well.
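To fix ideas, here is a small instance in which the \(-1\) in the multiplicity formula is visible: taking \(a(s)=(s+1/2)^2\) and \(b(s)=(s+1/3)(s+2/3)\), we get \(R_{a*b}=\{5/6,7/6\}\), each sum of elements of \(R_a\) and \(R_b\) being attained only once, so

$$\begin{aligned} (a*b)(s)=\bigl (s+\tfrac{5}{6}\bigr )^{2}\bigl (s+\tfrac{7}{6}\bigr )^{2}, \end{aligned}$$

both multiplicities being \(2+1-1=2\).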

In [9], M. Saito studied the relation between the reduced Bernstein-Sato polynomial of h and those of f and g, and proved the following ([loc. cit., Proposition 0.7, Theorem 0.8]):

Theorem 1.2

Under the same conditions as above,

  • \(R_{\tilde{b}_f}+R_{\tilde{b}_g}\subseteq R_{\tilde{b}_h}+\mathbb {Z}_{\le 0}\) and \(R_{\tilde{b}_h}\subseteq R_{\tilde{b}_f}+R_{\tilde{b}_g}+\mathbb {Z}_{\ge 0}\).

  • In addition, if there exists a germ of vector field \(\chi \in \Theta _{Y,y}\) such that \(\chi (g)=g\), then \((\tilde{b}_f*\tilde{b}_g)(s)=\tilde{b}_h(s)\).

Remark 1.3

The condition on g given in the second point is usually referred to as g being Euler-homogeneous at y, with \(\chi \) an Euler vector field for g. Two easy consequences of that fact are that the Jacobian ideal \(J_h\subset \mathcal {O}_{X\times Y,(x,y)}\) is just the sum of the extended Jacobian ideals \(J_f^e+J_g^e\), and that \(t\cdot g^t=\chi \cdot g^t\) in \(\mathcal {O}_{Y,y}[t,g^{-1}]\cdot g^t\).
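The second consequence is a one-line computation, using only the chain rule and \(\chi (g)=g\):

$$\begin{aligned} \chi \cdot g^t=t\,g^{t-1}\,\chi (g)=t\,g^{t-1}\,g=t\cdot g^t \quad \text {in } \mathcal {O}_{Y,y}[t,g^{-1}]\cdot g^t. \end{aligned}$$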

Saito’s proof of the theorem uses the power of the Kashiwara-Malgrange filtration on \(\mathcal {D}_{X,x}[t,\partial _t]\). This note is the result of our efforts to find a purely algebraic proof of that result, one that can be extended to a more general context. What is new, to the best of our knowledge, is an explicit expression for the functional equation for the reduced Bernstein-Sato polynomial of the sum \(h=f+g\) in terms of those for f and g:

Theorem 1.4

Let \((X,x)\) and \((Y,y)\) be two germs of complex analytic manifolds of respective dimensions n and m. Let \(f\in \mathcal {O}_{X,x}\) and \(g\in \mathcal {O}_{Y,y}\) be two nonzero holomorphic functions, and let \(h:=f+g\in \mathcal {O}_{X\times Y,(x,y)}\). Assume moreover that \(\chi \in \Theta _{Y,y}\) is an Euler vector field for g, i.e. \(\chi (g)=g\), and that we have functional equations:

$$\begin{aligned} \begin{aligned} P(s)\cdot f^s=\tilde{b}_f(s)\cdot f^s \,\text { in } \mathcal {O}_{X,x}[s,f^{-1}]\cdot f^s,&\text { with } P(s)\in \mathcal {D}_{X,x}[s]J_f,\\ Q\cdot g^t=\tilde{b}_g(t)\cdot g^t \,\text { in } \mathcal {O}_{Y,y}[t,g^{-1}]\cdot g^t,&\text { with } Q\in \mathcal {D}_{Y,y} J_g. \end{aligned} \end{aligned}$$

Then, we have the functional equation

$$\begin{aligned} R(u)\cdot h^u=(\tilde{b}_f*\tilde{b}_g)(u)\cdot h^u \end{aligned}$$

in \(\mathcal {O}_{X\times Y,(x,y)}[u,h^{-1}]\cdot h^u\), where \(R(u)=P(u-\chi )A(u,\chi )+B(u,\chi )Q\in \mathcal {D}_{X\times Y,(x,y)}J_h\). There, \(A(s,t)\) and \(B(s,t)\) are certain polynomials in \(\mathbb {C}[s,t]\) that can be obtained from \(\tilde{b}_f\) and \(\tilde{b}_g\), whose meaning will be explained at the end of Sect. 2.1.

In particular, \(\tilde{b}_h\) divides \(\tilde{b}_f*\tilde{b}_g\).

Note that in the functional equation for g the operator Q does not depend on t. This is because g being Euler-homogeneous implies that \(t\cdot g^t=\chi \cdot g^t\).

Again, notice that we do not just prove that one polynomial divides the other, but provide a concrete functional equation for \(\tilde{b}_f*\tilde{b}_g\). The statement on just the divisibility was first proved by Yano in [10, Theorem 3.15] in the particular case that g is quasi-homogeneous and has an isolated singularity at y (note that \(\tilde{b}_f*\tilde{b}_g\) is hidden in the statement due to the simple expression of \(\tilde{b}_g\)).

In fact, even though the statements of Theorems 1.2 and 1.4 above relate holomorphic functions on germs of complex manifolds, the functional equation, and thus the relation between the reduced Bernstein-Sato polynomials in the case where g is Euler-homogeneous, can easily be generalized to the global algebraic and formal cases. The latter is just a consequence of the extension \(\mathcal {O}_{X,x}\rightarrow \mathbb {C}[[x_1,\ldots ,x_n]]\) being faithfully flat for a choice of local parameters \(x_1,\ldots ,x_n\) at x.

Let us consider the first case in a little more detail, so assume that f and g are nonzero polynomials. Then, denoting by \(\mathcal {V}(p)\subseteq \mathbb {A}_\mathbb {C}^r\) the vanishing locus of a polynomial \(p\in \mathbb {C}[x_1,\ldots ,x_r]\), we know that their associated (algebraic) Bernstein-Sato polynomials are just the least common multiples of their local versions at each point of \(\mathcal {V}(f)\subseteq \mathbb {A}_\mathbb {C}^n\) and \(\mathcal {V}(g)\subseteq \mathbb {A}_\mathbb {C}^m\), respectively (see, for instance, [5, Proposition 4.2.1]). In fact, we need only consider the respective singular points, since the reduced Bernstein-Sato polynomials are 1 elsewhere. Accordingly, let us write \(\tilde{b}_{h,(x,y)}(s)\) to denote the local polynomial at the point \((x,y)\in \mathbb {A}_\mathbb {C}^n\times \mathbb {A}_\mathbb {C}^m\). Therefore, \(\tilde{b}_h(s)={\text {lcm}}\{\tilde{b}_{h,(x,y)}(s)\,:\,(x,y)\in {\text {Sing}}\mathcal {V}(h)\subseteq \mathbb {A}_\mathbb {C}^n\times \mathbb {A}_\mathbb {C}^m\}\). The variety \({\text {Sing}}\mathcal {V}(h)\) is given by the equations

$$\begin{aligned} h=0,\quad h'_{x_i}=0,\quad h'_{y_j}=0, \end{aligned}$$
(1)

for \(i=1,\ldots ,n\), \(j=1,\ldots ,m\). Since \(h'_{x_i}=f'_{x_i}\) and \(h'_{y_j}=g'_{y_j}\) and g is Euler-homogeneous, the vanishing of the \(g'_{y_j}\) implies that of g (indeed, writing \(\chi =\sum _j\chi _j\partial _{y_j}\), we have \(g=\chi (g)=\sum _j\chi _jg'_{y_j}\)), so the equations (1) define the same set as \({\text {Sing}}\mathcal {V}(f)\times {\text {Sing}}\mathcal {V}(g)\). In conclusion, the functional equation for \(\tilde{b}_f*\tilde{b}_g\) at each point of \(\mathbb {A}_\mathbb {C}^n\times \mathbb {A}_\mathbb {C}^m\) implies the same relation for the global versions.

In fact, the proof of Theorem 1.4 can be extended almost literally to any context in which we have a properly working formal functional equation, like differentially admissible algebras (see [7, Definition 1.2.3.6, Theorem 3.2.2.1] and [8, Hypothesis 2.3, Proposition 3.10]), nonregular algebras or direct summands ([1, Proposition 2.18, Theorem 3.24]).

Regarding Bernstein-Sato polynomials of ideals (see [2]), we know thanks to [6, Theorem 1.1] that the Bernstein-Sato polynomial of a nonzero ideal \(\mathfrak {a}=\langle f_1(\underline{x}),\ldots ,f_r(\underline{x})\rangle \subseteq \mathbb {C}[x_1,\ldots ,x_n]\) is exactly the reduced Bernstein-Sato polynomial of \(z_1f_1(\underline{x})+\ldots +z_rf_r(\underline{x})\in \mathbb {C}[\underline{x},\underline{z}]\). Therefore, since such a polynomial is always Euler homogeneous at the origin of \(\mathbb {A}_\mathbb {C}^n\times \mathbb {A}_\mathbb {C}^r\) (assuming \(0\in \mathcal {V}(f_1,\ldots ,f_r)\subseteq \mathbb {A}_\mathbb {C}^n\)), the Bernstein-Sato polynomial of the sum of two ideals \(\mathfrak {a}\subseteq \mathbb {C}[x_1,\ldots ,x_n]\) and \(\mathfrak {b}\subseteq \mathbb {C}[y_1,\ldots ,y_m]\) always divides \(b_\mathfrak {a}*b_\mathfrak {b}\).
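The Euler-homogeneity claim is immediate: the vector field \(\chi =\sum _{i=1}^r z_i\partial _{z_i}\), which only involves the \(\underline{z}\)-variables, satisfies

$$\begin{aligned} \chi \Bigl (\sum _{i=1}^r z_if_i(\underline{x})\Bigr )=\sum _{i=1}^r z_if_i(\underline{x}). \end{aligned}$$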

On the other hand, we believe it would be worthwhile to find a proof of the remaining divisibility \(\tilde{b}_f*\tilde{b}_g\,|\,\tilde{b}_h\) that is as formal or algebraic as possible, following the spirit of the proof of the theorem above. However, up to now we have not been able to do so.

The rest of this note is organised as follows: in Sect. 2, we provide an alternative way to obtain the star operation of two polynomials, and we give a proof relating directly the definition of \(\tilde{b}_f\) given here to M. Saito’s definition via a microlocal construction. Finally, in Sect. 3 we prove Theorem 1.4.

2 Alternative definitions

In this section we will give a couple of equivalent definitions for both the star operation of two polynomials and the reduced Bernstein-Sato polynomial. Their actual equivalence might be folklore, but we have not been able to find it in the existing literature.

2.1 Two operations with polynomials

Let us recall Definition 1.1: the star operation of two polynomials \(a,b\in \mathbb {C}[s]\) is another polynomial \((a*b)(s)\in \mathbb {C}[s]\), such that \(R_{a*b}=R_a+R_b\) and, for any \(\gamma \in R_{a*b}\), its multiplicity is given by the highest value of \(m_\alpha (a)+m_\beta (b)-1\), where \(\alpha \in R_a\), \(\beta \in R_b\) and \(\alpha +\beta =\gamma \).

We will use another approach to work with such an operation:

Proposition 2.1

Let \(a,b\in \mathbb {C}[s]\) be two nonzero polynomials and let us denote by \((a\bullet b)(s)\in \mathbb {C}[s]\) the monic polynomial that verifies that \(\langle a(s),b(t)\rangle \cap \mathbb {C}[s+t]=\langle (a\bullet b) (s+t)\rangle \). Then, \(a\bullet b=a*b\).

Note that, in the statement of the proposition, we can take \((a\bullet b)(s)\) to be the generator of the ideal \(\langle a(s-t),b(t)\rangle \cap \mathbb {C}[s]\), just by a simple change of variables. This definition will be useful later on.

Proof

The proof is elementary but a bit long. If either a or b is constant, there is nothing to show. Therefore, let us prove first the proposition when \(a(s)=(s-\alpha )^d\) and \(b(s)=(s-\beta )^e\), for some \(\alpha ,\beta \in \mathbb {C}\) and \(d,e\ge 1\). In that case, clearly \((a*b)(u)=(u-\alpha -\beta )^{d+e-1}\). Let us then consider the ideal \(I=\langle (s-\alpha )^d,(t-\beta )^e\rangle \cap \mathbb {C}[s+t]\). Expanding \((s+t-\alpha -\beta )^{d+e-1}=((s-\alpha )+(t-\beta ))^{d+e-1}\) makes it clear that \((a*b)(s+t)\) belongs to I and is therefore a multiple of \((a\bullet b)(s+t)\).

To see the converse, we can assume, up to a simple change of variables, that \(\alpha =\beta =0\). Consider any \(p(s+t)=\sum _{i=0}^N p_i(s+t)^i\in I\). If \(N\ge d+e-1\), reasoning as above we can claim that \(p\in I\) if and only if \({\tilde{p}}:=\sum _{i=0}^{d+e-2} p_i(s+t)^i\) lies in I too, but that implies that \(p_i=0\) for each \(i=0,\ldots ,d+e-2\). Indeed, working modulo \(s^d\) and \(t^e\), the coefficient of \(s^{d-1}t^{e-1}\) in \({\tilde{p}}\) is \(p_{d+e-2}\left( {\begin{array}{c}d+e-2\\ d-1\end{array}}\right) \), which must vanish if \({\tilde{p}}\in I\). We can continue the same argument with the remaining coefficients in decreasing order of degree. Therefore, \({\tilde{p}}=0\) and \(I=\langle (a*b)(s+t)\rangle \), that is, \(a\bullet b=a*b\) in this case.
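A minimal instance of the computation above, with \(\alpha =\beta =0\) and \(d=e=2\): we have \((s+t)^3=s^3+3s^2t+3st^2+t^3\in \langle s^2,t^2\rangle \), whereas \((s+t)^2=s^2+2st+t^2\notin \langle s^2,t^2\rangle \), since every element of \(\langle s^2,t^2\rangle \) has all of its monomials divisible by \(s^2\) or \(t^2\) and \(st\) is not. Hence

$$\begin{aligned} \langle s^2,t^2\rangle \cap \mathbb {C}[s+t]=\langle (s+t)^3\rangle , \end{aligned}$$

in agreement with \((a*b)(u)=u^{3}\) for \(a(s)=b(s)=s^2\).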

Let us prove now that, if \(a(s),b(s),q(s)\in \mathbb {C}[s]\), then

$$\begin{aligned} {\text {lcm}}(a,b)\star q={\text {lcm}}(a\star q,b\star q), \end{aligned}$$
(2)

where \(\star =*,\bullet \). Since both operations are commutative and every monic polynomial is the least common multiple of the pure powers \((s-\alpha )^{m_\alpha }\) dividing it, formula (2), applied in each argument, reduces the general case to the case of powers of linear polynomials treated above; that suffices to finish the proof. We will write \(c(s):={\text {lcm}}(a(s),b(s))\) for the sake of brevity.

First, let us take \(\star =*\). On the one hand, \(R_{c}=R_a\cup R_b\), so \(R_{c}+R_q=(R_a+R_q)\cup (R_b+R_q)\). Since \(m_\alpha (c)=\max \{m_\alpha (a),m_\alpha (b)\}\) for every \(\alpha \in R_{c}\), we have that for any \(\gamma \in R_{c}+R_q\),

$$\begin{aligned} \begin{aligned} m_\gamma (c*q)&=\max \{m_\alpha (c)+m_\beta (q)-1:\,\alpha +\beta =\gamma \}\\&=\max \big \{\max \{m_\alpha (a)+m_\beta (q)-1:\,\alpha +\beta =\gamma \},\\&\qquad \quad \max \{m_\alpha (b)+m_\beta (q)-1:\,\alpha +\beta =\gamma \}\big \}. \end{aligned}\end{aligned}$$
(3)

On the other hand, the opposites of the roots of \({\text {lcm}}(a*q,b*q)\) are \(R_{a*q}\cup R_{b*q}=(R_a+R_q)\cup (R_b+R_q)\), and their multiplicities are given exactly by the outer maximum in formula (3), so formula (2) holds.

Now let \(\star \) be \(\bullet \), and let us show that \(I:=\langle c(s),q(t)\rangle =J:=\langle a(s),q(t)\rangle \cap \langle b(s),q(t)\rangle \subseteq \mathbb {C}[s,t]\). It is clear that \(I\subseteq J\); let us show the reverse inclusion. To do so, let us also write \(d(s):=\gcd (a(s),b(s))\), so that we have a Bézout identity of the form \(d=\alpha a+\beta b\) in \(\mathbb {C}[s]\). Then, if we have a polynomial \(p(s,t)=m(s,t)a(s)+n(s,t)q(t)=m'(s,t)b(s)+n'(s,t)q(t)\in J\), we can also write \(dp=\alpha ap+\beta bp\) and use both representations of p as an element of J, so that

$$\begin{aligned} p=\alpha \frac{a}{d}p+\beta \frac{b}{d}p=\alpha m'\frac{ab}{d}+\alpha \frac{a}{d} n'q+\beta m \frac{ab}{d}+\beta \frac{b}{d}nq\in I, \end{aligned}$$

since \(c=ab/d\).

In order to finish, just note that the generator of \(I\cap \mathbb {C}[s+t]\) is \({\text {lcm}}(a,b)\bullet q\), whereas the generator of \(J\cap \mathbb {C}[s+t]\) is \({\text {lcm}}(a\bullet q,b\bullet q)\). \(\square \)

Now we can explain all the ingredients involved in the statement of Theorem 1.4. Namely, since we know that \((\tilde{b}_f*\tilde{b}_g)(s)\) is the generator of the ideal \(\langle \tilde{b}_f(s-t),\tilde{b}_g(t)\rangle \cap \mathbb {C}[s]\), there must exist \(A(s,t),B(s,t)\in \mathbb {C}[s,t]\) such that \((\tilde{b}_f*\tilde{b}_g)(s)=A(s,t)\tilde{b}_f(s-t)+B(s,t)\tilde{b}_g(t)\). Those are the polynomials we use to build up the functional equation for \(\tilde{b}_f*\tilde{b}_g\).
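In the simplest situation, when both \(\tilde{b}_f(s)=s+\alpha \) and \(\tilde{b}_g(t)=t+\beta \) are linear, one can simply take \(A=B=1\), since

$$\begin{aligned} (\tilde{b}_f*\tilde{b}_g)(s)=s+\alpha +\beta =(s-t+\alpha )+(t+\beta )=\tilde{b}_f(s-t)+\tilde{b}_g(t). \end{aligned}$$

The cusp example of Sect. 3 provides a slightly less trivial pair A, B.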

2.2 Reduced Bernstein-Sato polynomial

There are at least three known objects called the reduced Bernstein-Sato polynomial \(\tilde{b}_f(s)\): the quotient of the usual polynomial by \(s+1\), the one obtained via the Jacobian ideal as in the introduction, which we will call the “Jacobian Bernstein-Sato polynomial”, and the microlocal Bernstein-Sato polynomial of M. Saito (see [9, § 1]). Although it is well known that the last two coincide, we include here a direct proof of that fact which does not rely on showing that both of them equal \(b_f(s)/(s+1)\). Before that, let us say a bit more about the microlocal setting, following [loc. cit.].

Let us call \({\widetilde{\mathcal {B}}}_f=\mathcal {O}_{X,x}[\partial _{t},\partial _t^{-1}]\cdot \delta (t-f)\), where \(\delta (t-f)\) is a symbol representing the delta function supported on the graph \(\{f=t\}\), on which \(\mathcal {D}_{X,x}\), t and the integer powers of \(\partial _t\) act in the usual way. Therefore, \({\widetilde{\mathcal {B}}}_f\) can be endowed with the structure of a \({\widetilde{\mathcal {D}}}_{t}\)-module, where by \({\widetilde{\mathcal {D}}}_t\) we mean the ring \(\mathcal {D}_{X,x}[t,\partial _t,\partial _t^{-1}]\) (called \({\widetilde{\mathcal {R}}}\) in Saito’s construction).

We can define a V-filtration on \({\widetilde{\mathcal {D}}}_t\) by setting \(V^0{\widetilde{\mathcal {D}}}_t=\mathcal {D}_{X,x}[t\partial _t,\partial _t^{-1}]\) and \(V^p{\widetilde{\mathcal {D}}}_t=\partial _t^{-p}V^0{\widetilde{\mathcal {D}}}_t=V^0{\widetilde{\mathcal {D}}}_t\partial _t^{-p}\). This filtration induces another one on \({\widetilde{\mathcal {B}}}_f\) just by taking \(G^p{\widetilde{\mathcal {B}}}_f=V^p{\widetilde{\mathcal {D}}}_t\cdot \delta (t-f)\). With all this in mind, the microlocal Bernstein-Sato polynomial \(\tilde{b}_{f,m}(s)\) is defined as the minimal polynomial of the action of \(s:=-\partial _tt\) on \({\text {Gr}}_G^0{\widetilde{\mathcal {B}}}_f\).

Proposition 2.2

Let \(f\in \mathcal {O}_{X,x}\) be a nonzero holomorphic function, and let \(\tilde{b}_{f,m}(s)\) be its microlocal Bernstein-Sato polynomial and \(\tilde{b}_{f,J}(s)\) be its Jacobian Bernstein-Sato polynomial. Then, \(\tilde{b}_{f,m}=\tilde{b}_{f,J}\).

Proof

Recall from above that \(\tilde{b}_{f,m}(s)\) and \(\tilde{b}_{f,J}(s)\) are, respectively, the minimal polynomials of the actions of s on \({\text {Gr}}_G^0{\widetilde{\mathcal {B}}}_f\) and on \(\mathcal {M}_f=\mathcal {D}_{X,x}[s]f^s/\mathcal {D}_{X,x}[s]J_ff^s\), where s acts on the first object as \(-\partial _tt\).

Following the well-known construction of [4, § 4], we can consider the isomorphism \({\widetilde{\mathcal {B}}}_f\rightarrow {\widetilde{\mathcal {M}}}:=\mathcal {D}_{X,x}[t,\partial _t,\partial _t^{-1}]/\langle t-f,\partial _i+f'_i\partial _t\rangle \) sending \(\delta (t-f)\) to the generator \(\bar{1}\) of \({\widetilde{\mathcal {M}}}\). As a consequence, we know that \(G^0{\widetilde{\mathcal {B}}}_f=V^0{\widetilde{\mathcal {D}}}_t\cdot \bar{1}= \mathcal {D}_{X,x}[\partial _t t,\partial _t^{-1}]\cdot \bar{1}\) and \(G^1{\widetilde{\mathcal {B}}}_f=V^1{\widetilde{\mathcal {D}}}_t\cdot \bar{1}=\partial _t^{-1}\mathcal {D}_{X,x}[\partial _t t,\partial _t^{-1}]\cdot \bar{1}\).

Moreover, \(\partial _t^{-1}\partial _i=-f'_i\) in \({\widetilde{\mathcal {M}}}\), so we obtain that

$$\begin{aligned} {\text {Gr}}_G^0{\widetilde{\mathcal {B}}}_f\cong \frac{\mathcal {D}_{X,x}[\partial _tt]+\mathcal {O}_{X,x}[\partial _t^{-1}]_{>0}}{\mathcal {D}_{X,x}[\partial _tt]J_f +\mathcal {O}_{X,x}[\partial _t^{-1}]_{>0}} \cdot \bar{1}, \end{aligned}$$

where \(\mathcal {O}_{X,x}[\partial _t^{-1}]_{>0}\) represents the polynomials in \(\partial _t^{-1}\) with coefficients in \(\mathcal {O}_{X,x}\) and no term of degree zero.

Now note that, as \(\mathcal {O}_{X,x}\)-modules,

$$\begin{aligned} \frac{\mathcal {D}_{X,x}[\partial _tt]+\mathcal {O}_{X,x}[\partial _t^{-1}]_{>0}}{\mathcal {D}_{X,x}[\partial _tt]J_f +\mathcal {O}_{X,x}[\partial _t^{-1}]_{>0}}\cong \frac{\mathcal {D}_{X,x}[\partial _tt]}{\mathcal {D}_{X,x}[\partial _tt]J_f} \end{aligned}$$

thanks to the second isomorphism theorem. Consequently,

$$\begin{aligned} {\text {Gr}}_G^0{\widetilde{\mathcal {B}}}_f\cong \frac{\mathcal {D}_{X,x}[\partial _tt]}{\mathcal {D}_{X,x}[\partial _tt]J_f}\cdot \bar{1}. \end{aligned}$$

Finally, recall that there is an isomorphism of \(\mathcal {D}_{X,x}\)-modules \(\mathcal {D}_{X,x}[\partial _tt]\cdot \delta (t-f)\rightarrow \mathcal {D}_{X,x}[s]\cdot f^s\) that sends \(\partial _tt\) to \(-s\), and which therefore induces an isomorphism \({\text {Gr}}_G^0{\widetilde{\mathcal {B}}}_f\cong \mathcal {M}_f\). Using this last correspondence, we see that the minimal polynomials of s on \(\mathcal {M}_f\) and of \(-\partial _tt\) on \({\text {Gr}}_G^0{\widetilde{\mathcal {B}}}_f\) coincide. \(\square \)

3 Proof of the main result

We provide in this section an elementary proof of Theorem 1.4.

Theorem 3.1

Let \((X,x)\) and \((Y,y)\) be two germs of complex analytic manifolds of respective dimensions n and m. Let \(f\in \mathcal {O}_{X,x}\) and \(g\in \mathcal {O}_{Y,y}\) be two nonzero holomorphic functions, and let \(h:=f+g\in \mathcal {O}_{X\times Y,(x,y)}\). Assume moreover that \(\chi \in \Theta _{Y,y}\) is an Euler vector field for g, i.e. \(\chi (g)=g\), and that we have functional equations:

$$\begin{aligned} \begin{aligned} b(s)\cdot f^s=P(s)\cdot f^s \,\text { in } \mathcal {O}_{X,x}[s,f^{-1}]\cdot f^s,&\text { with } P(s)=\sum _j P_js^j,\ P_j\in \mathcal {D}_{X,x}J_f,\\ c(t)\cdot g^{t}=Q\cdot g^{t} \,\text { in } \mathcal {O}_{Y,y}[t,g^{-1}]\cdot g^t,&\text { with } Q\in \mathcal {D}_{Y,y}J_g=\mathcal {D}_{Y,y}\langle g'_{y_1},\dots ,g'_{y_m}\rangle , \end{aligned} \end{aligned}$$

and let \(A(s,t), B(s,t) \in \mathbb {C}[s,t]\) be such that \((b * c)(s) = A(s,t) b(s-t) + B(s,t) c(t)\). Then, we have a functional equation:

$$\begin{aligned} (b * c)(s)\cdot h^s = R(s)\cdot h^s \,\text { in } \mathcal {O}_{X\times Y,(x,y)}[s,h^{-1}]\cdot h^s,\\ \text { with }\, R(s)=P(s-\chi ) A(s,\chi ) + B(s,\chi ) Q, \end{aligned}$$

where \(P(s-\chi ) = \sum _{j} P_j (s-\chi )^j = \sum _{j} (s-\chi )^j P_j\). Moreover, \(R(s) \in \mathcal {D}_{X\times Y,(x,y)}[s] J_h\). In particular, \(\tilde{b}_h\) divides \(\tilde{b}_f * \tilde{b}_g\).

Proof

For any integer \(k\ge 1\), we obtain by expanding \(h^k\) and R(k) that

$$\begin{aligned} \begin{aligned} R(k) \big (h^k\big )&= R(k) \left( \sum _{\ell =0}^k \left( {\begin{array}{c}k\\ \ell \end{array}}\right) f^{k-\ell } g^\ell \right) \\&=\sum _{\ell =0}^k \left( {\begin{array}{c}k\\ \ell \end{array}}\right) \left( P(k-\chi ) \Big ( A(k,\chi ) \big ( f^{k-\ell } g^\ell \big ) \Big ) + B(k,\chi )\Big ( Q \big (f^{k-\ell } g^\ell \big ) \Big )\right) . \end{aligned}\end{aligned}$$
(4)

In the first summands we have

$$\begin{aligned} \begin{aligned} P(k-\chi ) \Big ( A(k,\chi ) \big ( f^{k-\ell } g^\ell \big ) \Big )&= P(k-\chi ) \Big ( f^{k-\ell } A(k,\chi ) \big ( g^\ell \big ) \Big )\\&=P(k-\chi ) \big ( f^{k-\ell } A(k,\ell ) g^\ell \big )\\&=P(k-\ell ) \big ( f^{k-\ell } A(k,\ell ) g^\ell \big )\\&= P(k-\ell ) \big ( f^{k-\ell } \big ) A(k,\ell ) g^\ell \\&= b(k-\ell ) f^{k-\ell } A(k,\ell ) g^\ell , \end{aligned}\end{aligned}$$
(5)

just by elementary commuting relations and the fact that \(\chi (g^\ell )=\ell g^\ell \). Regarding the second summands in formula (4),

$$\begin{aligned} \begin{aligned} B(k,\chi )\Big (Q\big (f^{k-\ell }g^\ell \big )\Big )&=B(k,\chi )\Big (f^{k-\ell }Q\big (g^\ell \big )\Big )= B(k,\chi )\big (f^{k-\ell }c(\ell )g^\ell \big )\\&=B(k,\ell )f^{k-\ell }c(\ell )g^\ell \end{aligned}\end{aligned}$$
(6)

by the same arguments as above. Putting together formulas (4), (5) and (6) and using the functional equations for f and g and the expression of \((b*c)(s)\), we finally obtain that

$$\begin{aligned} R(k) \big (h^k\big )=\sum _{\ell =0}^k \left( {\begin{array}{c}k\\ \ell \end{array}}\right) (A(k,\ell )b(k-\ell )+B(k,\ell )c(\ell ))f^{k-\ell }g^\ell =(b * c)(k) h^k, \end{aligned}$$

hence, since this identity holds for every integer \(k\ge 1\), \(R(s)\cdot h^s =(b * c)(s)\cdot h^s\).

Now, we know from our hypotheses that \(P_j = P_{j0} f + \sum _{r=1}^n P_{jr} f'_{x_r}\), with \(P_{jr}\in \mathcal {D}_{X,x}\) and \(Q= \sum _{t=1}^m Q_t g'_{y_t}\), with \(Q_t\in \mathcal {D}_{Y,y}\). Therefore,

$$\begin{aligned} R(s)&= P(s-\chi ) A(s,\chi ) + B(s,\chi ) Q = A(s,\chi ) P(s-\chi ) + B(s,\chi ) Q \\&=\sum _j A(s,\chi ) (s-\chi )^j P_j + B(s,\chi ) Q, \end{aligned}$$

that belongs to \(\mathcal {D}_{X\times Y,(x,y)}[s] \langle f, f'_{x_1},\dots , f'_{x_n}, g'_{y_1},\dots , g'_{y_m} \rangle \). Moreover, since \(g = \chi (g) \in \mathcal {O}_{Y,y} \langle g'_{y_1},\dots ,g'_{y_m} \rangle \), we can affirm that \(\langle f,f'_{x_1},\dots ,f'_{x_n},g'_{y_1},\dots ,g'_{y_m} \rangle =\langle f+g,f'_{x_1},\dots ,f'_{x_n},g'_{y_1},\dots ,g'_{y_m} \rangle =J_h\).

The last claim is just an easy consequence of taking \(b(s)=\tilde{b}_f(s)\) and \(c(s)=\tilde{b}_g(s)\). \(\square \)

Examples 3.2

Let \(X=\mathbb {A}_x^1\), \(Y=\mathbb {A}_y^1\), and let us consider the well-known example of the cusp \(h=x^2+y^3\), that is, the sum of \(f=x^2\) and \(g=y^3\). As an example of our main result, we can obtain not just a multiple of the reduced Bernstein-Sato polynomial of h (in fact, the actual polynomial), but also a functional equation. Note that in this case both f and g are evidently Euler-homogeneous. Let us take g to play that role, so that \(\chi =\frac{1}{3}y\partial _y\in \Theta _{Y,y}\).

On one hand, we have that \(\tilde{b}_f(s)=(s+1/2)\) and \(\tilde{b}_g(t)=(t+1/3)(t+2/3)\). In this case, \((\tilde{b}_f*\tilde{b}_g)(s)=(s+5/6)(s+7/6)=\tilde{b}_h(s)\). On the other hand, following the notation of Theorem 3.1, we can take \(P(s)=\frac{1}{2}\partial _xx=\frac{1}{2}(x\partial _x+1)\), \(Q=\frac{1}{9}\partial _y^2y^2=\frac{1}{9}(y^2\partial _y^2+4y\partial _y+2)\), \(A(s,t)=s+t+3/2\) and \(B(s,t)=1\).
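One may check directly that these polynomials satisfy the required identity \((\tilde{b}_f*\tilde{b}_g)(s)=A(s,t)\tilde{b}_f(s-t)+B(s,t)\tilde{b}_g(t)\); the terms in t cancel:

$$\begin{aligned} \Bigl (s+t+\tfrac{3}{2}\Bigr )\Bigl (s-t+\tfrac{1}{2}\Bigr )+\Bigl (t+\tfrac{1}{3}\Bigr )\Bigl (t+\tfrac{2}{3}\Bigr )=\Bigl (s^2+2s+\tfrac{3}{4}-t^2-t\Bigr )+\Bigl (t^2+t+\tfrac{2}{9}\Bigr )=\Bigl (s+\tfrac{5}{6}\Bigr )\Bigl (s+\tfrac{7}{6}\Bigr ). \end{aligned}$$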

Summing up, we have \(R(s)\cdot h^s=(\tilde{b}_f*\tilde{b}_g)(s)\cdot h^s\), where

$$\begin{aligned} R(s)&=A(s,\chi )P(s-\chi )+B(s,\chi )Q\\&=\left( s+\frac{1}{3}y\partial _y+\frac{3}{2}\right) \frac{1}{2}(x\partial _x+1) +\frac{1}{9}(y^2\partial _y^2+4y\partial _y+2)\\&=\frac{1}{2}(x\partial _x+1)s+\frac{1}{6}xy\partial _x\partial _y+\frac{1}{9}y^2\partial _y^2+\frac{3}{4}x\partial _x +\frac{11}{18}y\partial _y+\frac{35}{36}. \end{aligned}$$

The example above can obviously be extended to the case of any suspension of the form \(h(x_1,\ldots ,x_n,z)=z^r+f(x_1,\ldots ,x_n)\), for any \(f\in \mathcal {O}_{\mathbb {A}^n}\) and \(r\ge 2\). In that case we can take \(g(z)=z^r\), for which we know that \(\chi =\frac{1}{r}z\partial _z\), \(Q=\frac{1}{r^{r-1}}\partial _z^{r-1}z^{r-1}\) and \(\tilde{b}_g(t)=\prod _{i=1}^{r-1}(t+i/r)\). If we have a reduced Bernstein-Sato functional equation of the form \(P(s)\cdot f^s=\tilde{b}_f(s)\cdot f^s\), we could write

$$\begin{aligned} R(s)\cdot h^s=(\tilde{b}_f*\tilde{b}_g)(s)\cdot h^s, \end{aligned}$$

where

$$\begin{aligned} R(s)=A\left( s,\frac{1}{r}z\partial _z\right) P\left( s-\frac{1}{r}z\partial _z\right) + B\left( s,\frac{1}{r}z\partial _z\right) \frac{1}{r^{r-1}}\partial _z^{r-1}z^{r-1}, \end{aligned}$$

\(A(s,t),B(s,t)\in \mathbb {C}[s,t]\) being such that \((\tilde{b}_f*\tilde{b}_g)(s)=A(s,t)\tilde{b}_f(s-t)+B(s,t)\tilde{b}_g(t)\). Note that, for instance, if no pair of roots of \(\tilde{b}_f\) differ by any j/r, with \(j=1,\ldots ,r\), then \((\tilde{b}_f*\tilde{b}_g)(s)=\prod _{i=1}^{r-1}\tilde{b}_f(s+i/r)\).
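For example, in the simplest suspension \(r=2\), the polynomial \(\tilde{b}_g(t)=t+1/2\) has a single simple root, so no coincidences between shifted roots can occur and, whatever \(\tilde{b}_f\) is, \((\tilde{b}_f*\tilde{b}_g)(s)=\tilde{b}_f(s+1/2)\).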