1 Introduction

Let \({\mathbb {k}}\) be a field. Let A and B be polynomial rings over \({\mathbb {k}}\), and let \(I \subseteq A\) and \(J \subseteq B\) be ideals. A binomial expansion for symbolic powers of the sum \(I+J\) in \(S = A \otimes _{\mathbb {k}}B\) was given by the second author and various co-authors in [8, 9]. Particularly, it was shown that, for any positive integer \(k \in {\mathbb N}\),

$$\begin{aligned} (I+J)^{(k)} = \sum _{\ell =0}^k I^{(\ell )} \cdot J^{(k-\ell )}. \end{aligned}$$
(1.1)

This formula was proved for symbolic powers defined using minimal primes in [9]. It was established for symbolic powers defined using all associated primes and, more generally, for saturated powers recently in [8]. The formula has been well received and has seen many applications since its discovery (cf. [2, 5, 6, 13, 14, 16, 18, 19, 20, 23]).

It is not known whether the integral closures of powers of an ideal can be realized as saturated powers. Thus, a natural question arises: does a similar binomial expansion exist for the integral closures of powers of sums of ideals? We shall address this question in this paper.

Simple examples exist to illustrate that the binomial expansion (1.1) does not hold in general when symbolic powers are replaced by the integral closures of powers. For instance, by taking \(I = (x^2) \subseteq {\mathbb {k}}[x] = A\) and \(J = (y^2) \subseteq B = {\mathbb {k}}[y]\), it is easy to see that \(xy \in \overline{I+J} \subseteq {\mathbb {k}}[x,y]\), while \(\overline{I} + \overline{J} = (x^2,y^2)\) does not contain xy. Particularly,

$$\begin{aligned} \overline{I+J} \not = \overline{I} + \overline{J}. \end{aligned}$$

On the other hand, a recent result of Mau and Trung [15, Theorem 2.1] showed that if \(I \subseteq A\) is a normally torsion-free square-free monomial ideal and \(J \subseteq B\) is an arbitrary monomial ideal, then, for any \(k \in {\mathbb N}\),

$$\begin{aligned} \overline{(I+J)^k} = \sum _{\ell =0}^k \overline{I^\ell } \cdot \overline{J^{k-\ell }}. \end{aligned}$$
(1.2)

(Note that if both I and J are normally torsion-free square-free monomial ideals, then \(I+J\) is normally torsion-free by [21, Corollary 5.6]; see also [21, Corollary 5.3 and Theorem 5.4] for more information about the addition of normally torsion-free ideals. Thus, in this case, equality (1.2) becomes the usual expansion of ordinary powers.) It is therefore desirable to establish new binomial expansions that work for all monomial ideals and to identify classes of monomial ideals for which the binomial expansion (1.2) holds.

In searching for a binomial expansion that holds for all monomial ideals, we focus instead on rational powers. Let \(u = \frac{p}{q} \in {\mathbb Q}_+\) be any positive rational number, with \(p,q \in {\mathbb N}\) and \(q \not = 0\). Following [22, Definition 10.5.1], the u-th rational power of an ideal I in a domain A is defined to be

$$\begin{aligned} I_u = \{x \in A ~\big |~ x^q \in \overline{I^p}\}. \end{aligned}$$

This definition of \(I_u\) does not depend on the particular presentation \(u = \frac{p}{q}\). Obviously, if u is a positive integer, then \(I_u = \overline{I^u}\) is the integral closure of \(I^u\). Rational powers have been extended to real powers in a recent work of Dongre et al. [4].

Our first main theorem reads as follows.

Theorem 2.2. Let \(I \subseteq A\) and \(J \subseteq B\) be monomial ideals. Let \(u \in {\mathbb Q}\) be any positive rational number. Then,

$$\begin{aligned} (I+J)_u = \sum _{0 \le \omega \le u, \ \omega \in {\mathbb Q}} I_\omega \cdot J_{u-\omega }, \end{aligned}$$

and the sum on the right-hand side is a finite sum.

Theorem 2.2, in particular, exhibits a reason why we should not expect the binomial expansion (1.2) to hold for all monomial ideals: when \(u=k\) is an integer, the right-hand side of (1.2) may be missing terms.

Our methods are based on [3, 10], where the membership in the integral closure \(\overline{I^k}\) was characterized in terms of the optimal solution to linear programming problems associated to I. Specifically, let M be the matrix whose columns are exponent vectors of the (unique set of) minimal monomial generators of I and assume that M is of size \(n \times m\). For \(\textbf{a}\in {\mathbb Z}_{\ge 0}^n\), consider the following linear programming problem:

$$\begin{aligned} {(\star )}&\left\{ \begin{array}{l} \text {maximize } \textbf{1}^{m} \cdot \textbf{y}, \\ \text {subject to } M \cdot \textbf{y}\le \textbf{a}, \textbf{y}\in {\mathbb R}^{m}_{\ge 0}. \end{array}\right. \end{aligned}$$

Let \(\nu ^*_\textbf{a}(I)\) be the optimal solution to (\(\star \)). It was shown in [10, Proposition 1.1] (see also [3, Proposition 3.5 and Remark 3.6]) that \(x^\textbf{a}\in \overline{I^k}\) if and only if \(\nu ^*_\textbf{a}(I) \ge k\). We provide a similar criterion for membership in a rational power \(I_u\); see Lemma 2.1.
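
As a concrete illustration of this criterion, the following minimal sketch computes \(\nu ^*_\textbf{a}(I)\) by solving (\(\star \)) numerically; it assumes SciPy is available, and the helper name nu_star is ours rather than notation from [3, 10]. Applied to the ideal \((x^2, y^2) \subseteq {\mathbb {k}}[x,y]\) from the example above, it returns \(\nu ^*_{(1,1)} = 1\), confirming that \(xy \in \overline{(x^2,y^2)}\).

```python
# Illustrative sketch: compute nu*_a(I) by solving the LP (star) with SciPy.
# Each column of M is the exponent vector of a minimal monomial generator of I;
# a is the exponent vector of the monomial x^a being tested.
import numpy as np
from scipy.optimize import linprog

def nu_star(M, a):
    """Optimal value of: maximize 1.y subject to M y <= a, y >= 0."""
    M = np.asarray(M, dtype=float)
    m = M.shape[1]
    # linprog minimizes, so maximize 1.y by minimizing -1.y
    res = linprog(c=-np.ones(m), A_ub=M, b_ub=np.asarray(a, dtype=float),
                  bounds=[(0, None)] * m, method="highs")
    return -res.fun

# (x^2, y^2) in k[x, y]: the columns of M are (2, 0) and (0, 2).
M = [[2, 0],
     [0, 2]]
print(nu_star(M, [1, 1]))  # 1.0, so xy lies in the integral closure of (x^2, y^2)
```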

The finite sum on the right-hand side of Theorem 2.2 can be made more precise in terms of jumping numbers of I and J. The jumping numbers of a monomial ideal were defined in [4] as a means to identify the distinct real powers of the ideal. It turns out that these jumping numbers are rational. For a fixed \(u \in {\mathbb Q}_+\), a rational number \(\theta \in [0,u]\) is called a jumping number of a monomial ideal I on the interval [0, u] if either \(\theta = u\) or \(I_\theta \not = I_{\theta '}\) for all \(\theta ' > \theta \) (see also [4, Corollary 5.7]).
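
For a quick illustration: for \(I = (x^2) \subseteq {\mathbb {k}}[x]\), one checks directly from the definition of rational powers that \(I_\theta = (x^{\lceil 2\theta \rceil })\) for every \(\theta > 0\), so the jumping numbers of I on the interval [0, 2] are exactly \(0, \frac{1}{2}, 1, \frac{3}{2}, 2\).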

Thanks to an anonymous referee’s observation and suggestion, we prove the following result.

Theorem 2.5. Let \(I \subseteq A\) and \(J \subseteq B\) be monomial ideals. Let \(u \in {\mathbb Q}\) be any positive rational number. Then,

$$\begin{aligned} (I+J)_u = \sum _{\begin{array}{c} \omega \text { is a jumping number} \\ \text {of { I} on } [0,u] \end{array}} I_\omega \cdot J_{u - \omega } = \sum _{\begin{array}{c} \theta \text { is a jumping number} \\ \text {of { J} on } [0,u] \end{array}} I_{u-\theta } \cdot J_\theta . \end{aligned}$$

Theorem 2.5 is achieved by observing that distinct terms on the right-hand side of the binomial expansion established in Theorem 2.2 are exactly those with different rational powers of I or of J.

As shown in Theorems 2.2 and 2.5, we cannot expect the binomial expansion (1.2) to hold for all monomial ideals. Corollary 2.6 is a special case of Theorem 2.2 and gives an improvement of the aforementioned result of Mau and Trung [15, Theorem 2.1]. Making use of the jumping numbers of powers of I and J, Corollary 2.9 is a consequence of Theorem 2.5 and presents another sufficient condition for the binomial expansion (1.2) to hold.

Corollaries 2.6 and 2.9. Let \(I \subseteq A\) and \(J \subseteq B\) be monomial ideals, and let \(k \in {\mathbb N}\). Suppose that at least one of the following conditions holds:

  1. (a)

    for every nonnegative integral vector \(\alpha \), \(\nu ^*_\alpha (I) \in {\mathbb Z}\); or

  2. (b)

    the jumping numbers on [0, k] of either I or J are all integers.

Then, we have

$$\begin{aligned} \overline{(I+J)^k} = \sum _{\ell = 0}^k \overline{I^\ell } \cdot \overline{J^{k-\ell }}. \end{aligned}$$

Having a binomial expansion as in (1.2) for \(\overline{(I+J)^k}\) allows us to estimate important algebraic invariants, such as the depth and the regularity, of \(S/\overline{(I+J)^k}\). In particular, we exhibit explicit formulas for the depth and regularity of \(S/\overline{(I+J)^k}\) in terms of those of the integral closures of powers of I and J, under the sufficient conditions in Corollaries 2.6 and 2.9. Such formulas for the ordinary and symbolic powers of \((I+J)\) were given in [9, 11, 17]. A formula for the integral closures of powers of \((I+J)\) would be desirable, for instance, as stated in [16].

Theorem 3.8. Let \(I \subseteq A\) and \(J \subseteq B\) be monomial ideals, and let \(k \in {\mathbb N}\). Suppose that at least one of the following conditions holds:

  1. (a)

    for every nonnegative integral vector \(\alpha \), \(\nu ^*_\alpha (I) \in {\mathbb Z}\); or

  2. (b)

    the jumping numbers on [0, k] of either I or J are all integers.

Then, we have

  1. (1)

    \({{\,\textrm{depth}\,}}S/\overline{(I+J)^k} \)    \(={\displaystyle \min _{\begin{array}{c} i \in [1,k-1] \\ j \in [1,k] \end{array}} \{{{\,\textrm{depth}\,}}A/\overline{I^{k-i}} + {{\,\textrm{depth}\,}}B/\overline{J^i} + 1, {{\,\textrm{depth}\,}}A/\overline{I^{k-j+1}} + {{\,\textrm{depth}\,}}B/\overline{J^j}\}}\),

  2. (2)

    \({{\,\textrm{reg}\,}}S/\overline{(I+J)^k} \)    \(={\displaystyle \max _{\begin{array}{c} i \in [1,k-1] \\ j \in [1,k] \end{array}} \{{{\,\textrm{reg}\,}}A/\overline{I^{k-i}} + {{\,\textrm{reg}\,}}B/\overline{J^i} + 1, {{\,\textrm{reg}\,}}A/\overline{I^{k-j+1}} + {{\,\textrm{reg}\,}}B/\overline{J^j}\}}\).

Our approach to proving Theorem 3.8 is similar to that of [9, Theorems 4.2 and 5.3]. Particularly, by setting

$$\begin{aligned} P_{k,t} = \overline{I^k} \cdot \overline{J^0} + \overline{I^{k-1}} \cdot \overline{J} + \dots + \overline{I^{k-t}} \cdot \overline{J^t}, \end{aligned}$$

for \(0 \le t \le k\), and observing that

  1. (a)

    \(P_{k,t} = P_{k,t-1} + \overline{I^{k-t}} \cdot \overline{J^t}\), and

  2. (b)

    \(P_{k,t-1} \cap \overline{I^{k-t}} \cdot \overline{J^t} = \overline{I^{k-t+1}} \cdot \overline{J^t},\)

one direction of the inequality in Theorem 3.8 follows from standard short exact sequences:

$$\begin{aligned} 0 \longrightarrow S\big /P_{k,t-1} \cap \overline{I^{k-t}} \cdot \overline{J^t} \longrightarrow S/P_{k,t-1} \oplus S\big /\overline{I^{k-t}} \cdot \overline{J^t} \longrightarrow S/P_{k,t} \longrightarrow 0. \end{aligned}$$
(1.3)
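
Here, the standard bounds being used are the following: for a short exact sequence \(0 \rightarrow M' \rightarrow M \rightarrow M'' \rightarrow 0\) of finitely generated graded S-modules, one has

$$\begin{aligned} {{\,\textrm{depth}\,}}M'' \ge \min \{{{\,\textrm{depth}\,}}M, {{\,\textrm{depth}\,}}M' - 1\} \text { and } {{\,\textrm{reg}\,}}M'' \le \max \{{{\,\textrm{reg}\,}}M, {{\,\textrm{reg}\,}}M' - 1\}. \end{aligned}$$

Applied to (1.3), these give bounds for the depth and regularity of \(S/P_{k,t}\) in terms of those of \(S/P_{k,t-1}\), \(S/\overline{I^{k-t}} \cdot \overline{J^t}\), and \(S/\overline{I^{k-t+1}} \cdot \overline{J^t}\).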

To establish the reverse inequality, we show that the decomposition \(P_{k,t} = P_{k,t-1} + \overline{I^{k-t}} \cdot \overline{J^t}\) is a Betti splitting, a notion defined by Francisco, Hà and Van Tuyl [7] which guarantees that the inequalities between Betti numbers that result from the exact sequences in (1.3) are in fact equalities. To accomplish this last step, we exhibit that the filtrations \(\{\overline{I^k}\}_{k \in {\mathbb N}}\) and \(\{\overline{J^k}\}_{k \in {\mathbb N}}\) are Tor-vanishing, in the sense of Nguyen and Vu [17].

2 Binomial expansion of integral closures of powers

Throughout the paper, let \(A = {\mathbb {k}}[X_1, \dots , X_r]\), \(B = {\mathbb {k}}[Y_1, \dots , Y_s]\), and \(S = A \otimes _{\mathbb {k}}B\). Let \(I \subseteq A\) and \(J \subseteq B\) be monomial ideals. By abusing notation, we shall write I and J also for their extensions in S.

Observe that the extension of \(\overline{I}\) in S is the same as the integral closure of IS in S, i.e., \(\overline{I}S = \overline{IS}\). This follows, for instance, from [22, Corollary 19.5.2], since S is a normal extension of A. Therefore, also by abusing notation, we shall write \(\overline{I}\) and \(\overline{J}\) to refer to the integral closures of I and J, considered both as ideals in A and B, respectively, and as their extensions in S.

Recall that for an ideal \(I \subseteq A\) and a positive rational number \(u = \frac{p}{q}\), with \(p, q \in {\mathbb N}\) and \(q \not = 0\), the u-th rational power of I is

$$\begin{aligned} I_u= \{x \in A ~\big |~ x^q \in \overline{I^p}\}. \end{aligned}$$

For monomial ideals, rational powers were extended to real powers in [4]. Particularly, for a monomial ideal \(I \subseteq A\) and \(u \in {\mathbb R}_{\ge 0}\), the u-th real power of I is defined to be

$$\begin{aligned} I_u=\{x^\textbf{a}\in A ~\big |~ \textbf{a}\in u \cdot {{\,\textrm{NP}\,}}(I)\cap {\mathbb Z}^r_{\ge 0}\}, \end{aligned}$$

where \({{\,\textrm{NP}\,}}(I)\) is the Newton polyhedron of I and, for \(\textbf{a}= (a_1, \dots , a_r) \in {\mathbb Z}^r_{\ge 0}\), \(x^\textbf{a}\) represents the monomial \(X_1^{a_1} \cdots X_r^{a_r}\) in A.
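
For example, for the ideal \(J = (y^2, yz) \subseteq {\mathbb {k}}[y,z]\) of Example 2.3 below, one computes \({{\,\textrm{NP}\,}}(J) = \{(a,b) \in {\mathbb R}^2_{\ge 0} ~\big |~ a \ge 1, \ a + b \ge 2\}\); hence \(J_u\) consists of the monomials \(y^az^b\) with \(a \ge u\) and \(a+b \ge 2u\), so that, for instance, \(J_{\frac{3}{2}} = (y^3, y^2z)\).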

The following membership criterion for rational powers is similar to that of [10, Proposition 1.1] and [3, Proposition 3.5 and Remark 3.6].

Lemma 2.1

Let \(I \subseteq A\) be any monomial ideal and let \(\textbf{a}\in {\mathbb Z}_{\ge 0}^r\). Let \(\frac{p}{q}\) be any rational number, where \(p,q \in {\mathbb N}\) and \(q \not = 0\). Then, \(x^\textbf{a}\in I_{\frac{p}{q}}\) if and only if \(\nu ^*_{\textbf{a}}(I) \ge \frac{p}{q}\).

Proof

By definition, \(x^\textbf{a}\in I_{\frac{p}{q}}\) if and only if \(\left( x^\textbf{a}\right) ^q \in \overline{I^p}\). By [10, Proposition 1.1], this is the case if and only if \(\nu ^*_{q\cdot \textbf{a}}(I) \ge p\) or, equivalently, \(\nu ^*_\textbf{a}(I) \ge \frac{p}{q}\). \(\square \)

Observe that if I is a monomial ideal then \(\overline{I^p}\) is a monomial ideal. This, particularly, implies that the u-th rational power \(I_u\), for \(u = \frac{p}{q}\), is also a monomial ideal. Our first main result is stated as follows.

Theorem 2.2

Let \(I \subseteq A\) and \(J \subseteq B\) be monomial ideals. Let \(u \in {\mathbb Q}\) be any positive rational number. Then,

$$\begin{aligned} (I+J)_u = \sum _{\begin{array}{c} 0\le \omega \le u, \ \omega \in {\mathbb Q} \end{array}} I_\omega \cdot J_{u-\omega }, \end{aligned}$$

and the sum on the right-hand side is a finite sum.

Proof

Fix a rational number \(0 \le \omega \le u\). By using a common denominator, without loss of generality, we may assume that \(u = \frac{p}{q}\) and \(\omega = \frac{k}{q}\) for some \(0 \le k \le p\). Consider arbitrary monomials \(x^\textbf{a}\in I_\omega \) and \(x^\textbf{e}\in J_{u-\omega }.\) By definition, we have

$$\begin{aligned} \left( x^\textbf{a}\right) ^q \in \overline{I^k} \text { and } \left( x^\textbf{e}\right) ^q \in \overline{J^{p-k}}. \end{aligned}$$

It follows that \(\left( x^{\textbf{a}+\textbf{e}}\right) ^q \in \overline{I^k} \cdot \overline{J^{p-k}} \subseteq \overline{(I+J)^p}\), where the latter inclusion is a consequence of [22, Proposition 10.5.2]. In particular, \(x^{\textbf{a}+\textbf{e}} \in (I+J)_{\frac{p}{q}}\). This proves that \(I_\omega J_{u-\omega } \subseteq (I+J)_u\). Since this holds for all rational numbers \(0 \le \omega \le u\), we obtain the inclusion

$$\begin{aligned} (I+J)_u \supseteq \sum _{\begin{array}{c} 0\le \omega \le u, \ \omega \in {\mathbb Q} \end{array}} I_\omega \cdot J_{u-\omega }. \end{aligned}$$

We shall proceed to get the reverse inclusion. As observed before, \((I+J)_u\) is a monomial ideal. Thus, it suffices to show that all monomials in \((I+J)_u\) belong to \(\sum \limits _{0 \le \omega \le u, \ \omega \in {\mathbb Q}} I_\omega \cdot J_{u-\omega }\).

Let \(m_1\) and \(m_2\) be the number of minimal generators of I and J, respectively. Let \(M_1\) be the \(r \times m_1\) exponent matrix of I and let \(M_2\) be the \(s \times m_2\) exponent matrix of J. Set \(n = r+s\) and \(m = m_1 + m_2\). Relabel the variables of S to be \(x_1, \dots , x_n\) (corresponding to \(X_1, \dots , X_r, Y_1, \dots , Y_s\)). Define M to be the following \(n \times m\) matrix:

$$\begin{aligned} M = \left( \begin{array}{c|c} M_1 & \textbf{0} \\ \hline \textbf{0} & M_2 \end{array}\right) . \end{aligned}$$

Consider any monomial \(x^\textbf{a}\in (I+J)_u\). We have \(\left( x^\textbf{a}\right) ^q \in \overline{(I+J)^p}\); that is, \(x^{q\cdot \textbf{a}} \in \overline{(I+J)^p}.\) By [10, Proposition 1.1], we get \(\nu ^*_{q \cdot \textbf{a}}(M) \ge p\) or, equivalently, \(\nu ^*_\textbf{a}(M) \ge u\). This condition states that the optimal solution to the following linear programming problem is at least u:

$$\begin{aligned} \left\{ \begin{array}{l} \text {maximize } \textbf{1}^m \cdot \textbf{y}, \\ \text {subject to } M \cdot \textbf{y}\le \textbf{a}, \ \textbf{y}\in {\mathbb R}^m_{\ge 0}. \end{array} \right. \end{aligned}$$
(2.1)

Suppose that \(\textbf{y}= (y_1, \dots , y_m) \in {\mathbb R}^m_{\ge 0}\) is a vector that gives the optimal solution to (2.1). Then, we have

$$\begin{aligned} M \cdot \textbf{y}\le \textbf{a}\text { and } \textbf{1}^m \cdot \textbf{y}\ge u. \end{aligned}$$

Write \(\textbf{a}= (\alpha , \textbf{0}) + (\textbf{0}, \beta )\), where \(\alpha \in {\mathbb Z}_{\ge 0}^r\) and \(\beta \in {\mathbb Z}_{\ge 0}^s\), and \(\textbf{y}= (\textbf{y}_1, \textbf{0}) + (\textbf{0}, \textbf{y}_2)\), where \(\textbf{y}_1 \in {\mathbb R}^{m_1}_{\ge 0}\) and \(\textbf{y}_2 \in {\mathbb R}^{m_2}_{\ge 0}\). Observe that, since M is a block matrix, the optimization problem (2.1) is equivalent to the following two problems:

$$\begin{aligned} (\dagger ) \left\{ \begin{array}{l} \text {maximize } \textbf{1}^{m_1} \cdot \textbf{y}_1, \\ \text {subject to } M_1 \cdot \textbf{y}_1 \le \alpha , \textbf{y}_1 \in {\mathbb R}^{m_1}_{\ge 0} \end{array}\right. \quad \text { and } \quad (\sharp ) \left\{ \begin{array}{l} \text {maximize } \textbf{1}^{m_2} \cdot \textbf{y}_2, \\ \text {subject to } M_2\cdot \textbf{y}_2 \le \beta , \textbf{y}_2 \in {\mathbb R}^{m_2}_{\ge 0}. \end{array}\right. \end{aligned}$$

Let \(\omega \) be the optimal solution to (\(\dagger \)). It can be seen that the system \(M_1 \cdot \textbf{y}_1 \le \alpha \) consists of linear inequalities with rational coefficients, so its feasible set has rational vertices. This implies that \(\omega = \nu ^*_\alpha (M_1) \in {\mathbb Q}\). We also have \(\nu ^*_\beta (M_2) = \nu ^*_\textbf{a}(M) - \nu ^*_\alpha (M_1) \ge u-\omega \in {\mathbb Q}\). It then follows from Lemma 2.1 that \(x^\alpha \in I_\omega \) and \(x^\beta \in J_{u-\omega }\). If \(\omega \ge u\) then we get \(x^\textbf{a}\in I_\omega S \subseteq I_u S\). If \(\omega \le u\) then we have \(x^\textbf{a}= x^\alpha \cdot x^\beta \in I_\omega J_{u-\omega }\). This is true for any monomial \(x^\textbf{a}\in (I+J)_u\). Hence, we obtain the inclusion

$$\begin{aligned} (I+J)_u \subseteq \sum _{\begin{array}{c} 0\le \omega \le u, \ \omega \in {\mathbb Q} \end{array}} I_\omega \cdot J_{u-\omega }, \end{aligned}$$

and the first assertion is established.

We continue to prove the second assertion. Observe that, by [22, Proposition 10.5.5], there exist positive integers e and f such that every rational power \(I_\omega \) is of the form \(I_{\frac{i}{e}}\) and every rational power \(J_\omega \) is of the form \(J_{\frac{j}{f}}\) for some \(i,j \in {\mathbb N}\). In particular, it was shown in [22, Proposition 10.5.5] that

$$\begin{aligned} I_u = I_{\frac{\lceil ue\rceil }{e}} = I_{\frac{c}{e}} \text { and } J_u = J_{\frac{\lceil uf\rceil }{f}} = J_{\frac{d}{f}}, \end{aligned}$$

where \(c = \lceil ue\rceil \) and \(d = \lceil uf\rceil \).

It can be seen that \(\frac{c-1}{e} < u\) and \(\frac{d-1}{f} < u\). Moreover, by [22, Proposition 10.5.2], \(I_\alpha \supseteq I_\beta \) if \(\alpha \le \beta \). Therefore, it follows that \(\{I_\omega ~\big |~ 0 \le \omega \le u\}\) coincides with \(\{I_{\frac{i}{e}} ~\big |~ 0 \le i \le c\}\) and \(\{J_{u-\omega } ~\big |~ 0 \le \omega \le u\}\) coincides with \(\{J_{\frac{j}{f}} ~\big |~ 0 \le j \le d\}\). Hence, in the sum \(\sum _{0 \le \omega \le u, \ \omega \in {\mathbb Q}}I_\omega J_{u-\omega }\), only finitely many distinct terms appear. This proves the second assertion of the theorem. \(\square \)

Example 2.3

Consider \(I = (x^2) \subseteq {\mathbb {k}}[x]\) and \(J = (y^2, yz) \subseteq {\mathbb {k}}[y,z]\). By [22, Theorem 10.3.5 and Proposition 10.5.5], the constants e and f as in the proof of Theorem 2.2 for rational powers of I and J can be taken to be \(e = f = 2\). Thus, Theorem 2.2 gives the following binomial expansion

$$\begin{aligned} \overline{(I+J)^2}&= I_2 + I_{\frac{3}{2}} \cdot J_{\frac{1}{2}} + I_1 \cdot J_1 + I_{\frac{1}{2}} \cdot J_{\frac{3}{2}} + J_2 \\&= \overline{I^2} + I_{\frac{3}{2}} \cdot J_{\frac{1}{2}} + \overline{I} \cdot \overline{J} + I_{\frac{1}{2}} \cdot J_{\frac{3}{2}} + \overline{J^2}. \end{aligned}$$

It can be verified directly that

$$\begin{aligned} \overline{(I+J)^2} = (y^2z^2, y^3z, xy^2z, x^2yz, y^4, xy^3, x^2y^2, x^3y, x^4), \end{aligned}$$

while

$$\begin{aligned} \overline{I^2} + \overline{I} \cdot \overline{J} + \overline{J^2} = (x^4, y^2z^2, y^3z, y^4, x^2yz, x^2y^2). \end{aligned}$$

Clearly, \(\overline{(I+J)^2} \not = \overline{I^2} + \overline{I} \cdot \overline{J} + \overline{J^2}.\)

On the other hand, it is easy to see that \(x^2 \in I \subseteq \overline{I}\) and \(y^6 \in J^3 \subseteq \overline{J^3}\), so \(x \in I_{\frac{1}{2}}\) and \(y^3 \in J_{\frac{3}{2}}\). Thus, \(xy^3 \in I_{\frac{1}{2}} \cdot J_{\frac{3}{2}}.\) Similarly, it can be seen that \(xy^2z \in I_{\frac{1}{2}} \cdot J_{\frac{3}{2}}\) and \(x^3y \in I_{\frac{3}{2}} \cdot J_{\frac{1}{2}}\).
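
The memberships above can also be confirmed with the linear programming criterion of Lemma 2.1. The following sketch (again assuming SciPy is available; nu_star is the same ad hoc helper as in the snippet in the introduction) computes the relevant optimal values and illustrates the identity \(\nu ^*_\textbf{a}(M) = \nu ^*_\alpha (M_1) + \nu ^*_\beta (M_2)\) coming from the block structure of M in the proof of Theorem 2.2.

```python
# Illustrative check of Example 2.3 via the LP criterion of Lemma 2.1 (assumes SciPy).
import numpy as np
from scipy.optimize import linprog

def nu_star(M, a):
    """Optimal value of: maximize 1.y subject to M y <= a, y >= 0."""
    M = np.asarray(M, dtype=float)
    res = linprog(c=-np.ones(M.shape[1]), A_ub=M, b_ub=np.asarray(a, dtype=float),
                  bounds=[(0, None)] * M.shape[1], method="highs")
    return -res.fun

# I = (x^2) in k[x] and J = (y^2, yz) in k[y, z], so I + J = (x^2, y^2, yz) in k[x, y, z].
M_I  = [[2]]                # exponent matrix of I (variable x)
M_J  = [[2, 1],             # exponent matrix of J (variables y, z)
        [0, 1]]
M_IJ = [[2, 0, 0],          # block matrix for I + J (variables x, y, z)
        [0, 2, 1],
        [0, 0, 1]]

# The monomial x*y^3 has exponent vector (1, 3, 0):
print(nu_star(M_IJ, [1, 3, 0]))  # 2.0, so x*y^3 lies in the integral closure of (I+J)^2
print(nu_star(M_I, [1]))         # 0.5, so x   lies in I_{1/2}
print(nu_star(M_J, [3, 0]))      # 1.5, so y^3 lies in J_{3/2}
```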

It would be desirable to see if the binomial expansion for rational powers in Theorem 2.2 holds for arbitrary ideals.

Question 2.4

Let \(I \subseteq A\) and \(J \subseteq B\) be arbitrary (or homogeneous) proper ideals. Is it true that for any \(u \in {\mathbb Q}_+\), we have

$$\begin{aligned} (I+J)_u = \sum _{0 \le \omega \le u, \ \omega \in {\mathbb Q}} I_\omega \cdot J_{u-\omega }? \end{aligned}$$

A closer look at the binomial terms in Theorem 2.2 reveals that the distinct powers of I and J correspond to jumping numbers. Thus, the finite sum in Theorem 2.2 can be made more precise using jumping numbers of powers of I and J, as follows. The authors thank an anonymous referee for pointing out this observation. (By definition, if u is sandwiched between two consecutive jumping numbers \(j' < j\) of a monomial ideal I, i.e., \(j'< u < j\), then we also consider u to be a jumping number of I on the interval [0, u].)

Theorem 2.5

Let \(I \subseteq A\) and \(J \subseteq B\) be monomial ideals. Let \(u \in {\mathbb Q}\) be any positive rational number. Then,

$$\begin{aligned} (I+J)_u = \sum _{\begin{array}{c} \omega \text { is a jumping number} \\ \text {of { I} on } [0,u] \end{array}} I_\omega \cdot J_{u - \omega } = \sum _{\begin{array}{c} \theta \text { is a jumping number} \\ \text {of { J} on } [0,u] \end{array}} I_{u-\theta } \cdot J_\theta . \end{aligned}$$

Proof

We shall establish the first equality, as the second one can be handled similarly. Note that, by [4, Theorem 5.9(1)], all jumping numbers of I and J are rational. Notice also that the binomial expansion of \((I+J)_u\) in Theorem 2.2 contains the term \(I_\omega \cdot J_{u-\omega }\) for every rational number \(\omega \in [0,u]\); in particular, it contains such a term for every jumping number \(\omega \) of I on [0, u]. Thus, it follows that

$$\begin{aligned} (I+J)_u \supseteq \sum _{\begin{array}{c} \omega \text { is a jumping number} \\ \text {of { I} on } [0,u] \end{array}} I_\omega \cdot J_{u - \omega }. \end{aligned}$$
(2.2)

To prove the other containment, observe that if \(j' < j\) are two consecutive jumping numbers of I on [0, u], then for any \(j'< \omega < j\), by [4, Lemma 5.1 and Corollary 5.7], we have

$$\begin{aligned} I_\omega = I_j \text { and } J_{u-\omega } \subseteq J_{u-j}. \end{aligned}$$

Thus, \(I_\omega \cdot J_{u-\omega } \subseteq I_j \cdot J_{u-j}\), which is included in the right-hand side of (2.2). This implies that all terms in the binomial expansion of Theorem 2.2 are included in the right-hand side of (2.2). Therefore,

$$\begin{aligned} (I+J)_u \subseteq \sum _{\begin{array}{c} \omega \text { is a jumping number} \\ \text {of { I} on } [0,u] \end{array}} I_\omega \cdot J_{u - \omega }, \end{aligned}$$

and the proof of the desired equality is complete. \(\square \)

As illustrated in Example 2.3, when \(u = k\) is a positive integer, the right-hand side of the binomial expansion for \(\overline{(I+J)^k} = (I+J)_u\) given in Theorem 2.2 has more terms than just the products of integral closures of powers of I and J appearing in (1.2). This explains why we cannot expect (1.2) to hold for arbitrary monomial ideals. Our next result gives a sufficient condition for the binomial expansion (1.2) to hold and generalizes [15, Theorem 2.1] to a larger class of ideals.

Corollary 2.6

Let \(I \subseteq A\) and \(J \subseteq B\) be monomial ideals. Suppose that for every \(\alpha \in {\mathbb Z}_{\ge 0}^r\), \(\nu ^*_\alpha (I) \in {\mathbb Z}\). Then, for any \(k \in {\mathbb N}\), we have

$$\begin{aligned} \overline{(I+J)^k} = \sum _{\ell =0}^k \overline{I^\ell }\cdot \overline{J^{k-\ell }}. \end{aligned}$$

Proof

It is easy to see that \(\sum _{\ell =0}^k \overline{I^\ell }\cdot \overline{J^{k-\ell }} \subseteq \overline{(I+J)^k}\) (see, for example, [22, Proposition 10.5.2]). We shall prove the other inclusion. Since \(\overline{(I+J)^k}\) is a monomial ideal, it suffices to show that all monomials in \(\overline{(I+J)^k}\) belong to \(\sum \limits _{\ell = 0 }^k \overline{I^\ell } \cdot \overline{J^{k-\ell }}.\)

Let \(M_1\), \(M_2\) and M be exponent matrices as in Theorem 2.2 (and using the same notations). Consider any monomial \(x^\textbf{a}\in \overline{(I+J)^k}\), for \(\textbf{a}\in {\mathbb Z}_{\ge 0}^n\). By [10, Proposition 1.1], we have \(\nu ^*_\textbf{a}(M) \ge k\). This condition states that the optimal solution to the linear programming problem (2.1) is at least k.

Suppose that \(\textbf{y}= (y_1, \dots , y_m) \in {\mathbb R}^m_{\ge 0}\) is a vector that gives the optimal solution to (2.1). Then, we have

$$\begin{aligned} M \cdot \textbf{y}\le \textbf{a}\text { and } \textbf{1}^m \cdot \textbf{y}\ge k. \end{aligned}$$

As before, write \(\textbf{a}= (\alpha , \textbf{0}) + (\textbf{0}, \beta )\), where \(\alpha \in {\mathbb Z}_{\ge 0}^r\) and \(\beta \in {\mathbb Z}_{\ge 0}^s\), and \(\textbf{y}= (\textbf{y}_1, \textbf{0}) + (\textbf{0}, \textbf{y}_2)\), where \(\textbf{y}_1 \in {\mathbb R}^{m_1}_{\ge 0}\) and \(\textbf{y}_2 \in {\mathbb R}^{m_2}_{\ge 0}\). As observed in the proof of Theorem 2.2, the optimization problem (2.1) is equivalent to the two optimization problems (\(\dagger \)) and (\(\sharp \)).

Let \(\ell \) be the optimal solution to (\(\dagger \)); that is, \(\nu ^*_\alpha (I) = \nu ^*_\alpha (M_1) = \ell \). Then, the optimal solution to (\(\sharp \)) satisfies \(\nu ^*_\beta (M_2) \ge k-\ell \). By the hypotheses, \(\ell \in {\mathbb Z}\) and, thus, \(k-\ell \in {\mathbb Z}\). If \(\ell \ge k\), then \(x^\alpha \in \overline{I^\ell } \subseteq \overline{I^k}\) by [10, Proposition 1.1], and so \(x^\textbf{a}\in \overline{I^k} \cdot \overline{J^0}\); hence, we may assume that \(0 \le \ell \le k\). The inequalities \(\nu ^*_\alpha (M_1) \ge \ell \) and \(\nu ^*_\beta (M_2) \ge k-\ell \), by [10, Proposition 1.1], then imply that \(x^{\alpha } \in \overline{I^\ell }\) and \(x^\beta \in \overline{J^{k - \ell }}\). As a consequence, we get that \(x^\textbf{a}\in \overline{I^\ell } \cdot \overline{J^{k-\ell }}\), which belongs to the right-hand side of the desired equality. The assertion is proved. \(\square \)

As immediate consequences of Corollary 2.6, we obtain the following corollaries which generalize [15, Theorem 2.1].

Corollary 2.7

Let \(I \subseteq A\) and \(J \subseteq B\) be monomial ideals. Suppose that I is square-free and \(I^{(k)} = \overline{I^k}\) for all \(k \in {\mathbb N}\). Then, for any \(k \in {\mathbb N}\), we have

$$\begin{aligned} \overline{(I+J)^k} = \sum _{\ell =0}^k \overline{I^\ell }\cdot \overline{J^{k-\ell }}. \end{aligned}$$

Proof

By [10, Proposition 1.1], for any \(\alpha \in {\mathbb Z}_{\ge 0}^r\), we have \(\nu ^*_\alpha (I) = \tau ^*_\alpha (I) = \tau _\alpha (I) \in {\mathbb Z}\). The conclusion now follows from Corollary 2.6. \(\square \)

Corollary 2.8

( [15, Theorem 2.1]) Let \(I \subseteq A\) be a normally torsion-free square-free monomial ideal, and let \(J \subseteq B\) be an arbitrary monomial ideal. Then, for any \(k \in {\mathbb N}\), we have

$$\begin{aligned} \overline{(I+J)^k} = \sum _{\ell =0}^k I^\ell \cdot \overline{J^{k-\ell }}. \end{aligned}$$

Proof

Since I is a square-free monomial ideal, we have \(I^k \subseteq \overline{I^k} \subseteq I^{(k)}\) for all \(k \in {\mathbb N}\). The assumption that I is normally torsion-free then implies that \(I^k = \overline{I^k} = I^{(k)}\) for all \(k \in {\mathbb N}\). The assertion now follows from Corollary 2.7. \(\square \)

Making use of jumping numbers of powers of I and J, as a consequence of Theorem 2.5, we present another sufficient condition for the binomial expansion (1.2) to hold. The authors again thank an anonymous referee for this observation.

Corollary 2.9

Let \(I \subseteq A\) and \(J \subseteq B\) be monomial ideals, and let \(k \in {\mathbb N}\). Suppose that the jumping numbers of either I or J on [0, k] are all integers. Then, we have

$$\begin{aligned} \overline{(I+J)^k} = \sum _{\ell = 0}^k \overline{I^\ell } \cdot \overline{J^{k-\ell }}. \end{aligned}$$

Proof

As in the proof of Corollary 2.6, we have \(\sum _{\ell = 0}^k \overline{I^\ell } \cdot \overline{J^{k-\ell }} \subseteq \overline{(I+J)^k}\). The other inclusion follows immediately from the hypothesis on jumping numbers of I or J and Theorem 2.5. \(\square \)

Corollary 2.10

Let \(I \subseteq A\) and \(J \subseteq B\) be monomial ideals. Then,

$$\begin{aligned} \overline{I+J} = \overline{I} + \overline{J} \end{aligned}$$

if and only if either I or J does not have any jumping number in (0, 1).

Proof

One implication follows directly from Corollary 2.9. We shall establish the other implication; that is, if \(\overline{I+J} = \overline{I} + \overline{J}\) then either I or J does not have any jumping number in (0, 1). Suppose, on the contrary, that both I and J have jumping numbers in (0, 1).

Let \(r \in (0,1)\) be a jumping number of I. If there is a jumping number of J lying in \([1-r,1)\), then it can be seen that \(I_r \not = I_1 = \overline{I}\) and \(J_{1-r} \not = J_1 = \overline{J}\). Choosing monomials \(f \in I_r {\setminus } \overline{I}\) and \(g \in J_{1-r} {\setminus } \overline{J}\), the monomial \(fg \in I_rJ_{1-r} \subseteq \overline{I+J}\) (by Theorem 2.5) does not lie in \(\overline{I} + \overline{J}\), which contradicts our assumption. We have shown that, if \(r \in (0,1)\) is a jumping number of I, then all jumping numbers of J in (0, 1) are smaller than \(1-r\). Hence, by interchanging I and J if necessary, we may assume that \(r < 1/2\). We may also take r to be the smallest jumping number of I and J in (0, 1).

It follows from [4, Theorem 5.9(3)] that nr is a jumping number of I for all \(n \in {\mathbb N}\). Let mr be the largest multiple of r that is strictly less than 1. In particular, \(1-mr \le r\). This implies that J cannot have any jumping number lying in \((0,1-mr)\). Therefore, J must have a jumping number lying in \([1-mr, 1)\). By the same argument as above, we conclude that \(I_{mr}J_{1-mr}\) is not contained in \(\overline{I} + \overline{J}\), which again contradicts our assumption. The assertion is proved. \(\square \)

The condition in Corollary 2.9 that the jumping numbers of either I or J on [0, k] are all integers is not a necessary condition, as illustrated in the following example. We thank an anonymous referee and Jonathan Montaño for providing us with this example.

Example 2.11

Consider \(I = (xy,yz,zx) \subseteq {\mathbb {k}}[x,y,z]\) and \(J = (ab,bc,ca) \subseteq {\mathbb {k}}[a,b,c]\). Direct computation shows that

$$\begin{aligned} (I+J)_2&= \overline{(I+J)^2} \\&= (b^2c^2, abc^2, ab^2c, a^2c^2, a^2bc, a^2b^2,\\&\qquad yzbc, yzac, yzab, y^2z^2, xzbc, xzac, xzab, \\&\quad \ \ \ xybc, xyac, xyab, xyz^2, xy^2z, x^2z^2, x^2yz, x^2y^2) \\&= \overline{I^2} + \overline{I} \cdot \overline{J} + \overline{J^2} = I_2 + I_1 J_1 + J_2. \end{aligned}$$

On the other hand, it can be seen that \(\frac{3}{2}\) is a jumping number of both I and J on [0, 2].
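
One way to see this for I is as follows: since \((xyz)^2 = (xy)(yz)(zx) \in I^3\), we have \(xyz \in I_{\frac{3}{2}}\); on the other hand, every generator of I has degree 2, so every point of \({{\,\textrm{NP}\,}}(I)\) has coordinate sum at least 2, and hence every monomial in \(I_\theta \) has degree at least \(2\theta \). Thus, \(xyz \notin I_\theta \) for any \(\theta > \frac{3}{2}\), and \(\frac{3}{2}\) is indeed a jumping number of I on [0, 2]. The same argument applies to J.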

3 Depth and regularity

Depth and regularity are perhaps among the most important invariants associated with ideals and modules. In this section, we shall use the binomial expansions established in the previous section to give bounds and precise formulas for the depth and regularity of rational powers \((I+J)_u\) and the integral closures \(\overline{(I+J)^k}\).

We start with a general bound for the depth and regularity of a rational power \((I+J)_u\), whose proof is an easy adaptation of that of [9, Theorem 4.2] together with the binomial expansion in Theorem 2.5.

Theorem 3.1

Let \(u \in {\mathbb Q}_+\), and assume that the jumping numbers of I on the interval [0, u] are \(0 \le k_0< k_1< \dots < k_n \le u\). Then, we have

  1. (1)

    \({{\,\textrm{depth}\,}}S/(I+J)_u \ge \min _{\begin{array}{c} \ell \in [1,n-1] \\ \theta \in [1,n] \end{array}} \{{{\,\textrm{depth}\,}}A/I_{k_{n-\ell }} + {{\,\textrm{depth}\,}}B/J_{u-k_{n-\ell }} + 1, {{\,\textrm{depth}\,}} A/I_{k_{n-\theta +1}} + {{\,\textrm{depth}\,}}B/J_{u-k_{n-\theta }}\}\),

  2. (2)

    \({{\,\textrm{reg}\,}}S/(I+J)_u \le \max _{\begin{array}{c} \ell \in [1,n-1] \\ \theta \in [1,n] \end{array}} \{{{\,\textrm{reg}\,}}A/I_{k_{n-\ell }} + {{\,\textrm{reg}\,}}B/J_{u-k_{n-\ell }} + 1, {{\,\textrm{reg}\,}}A/I_{k_{n-\theta +1}} + {{\,\textrm{reg}\,}}B/J_{u-k_{n-\theta }}\}\).

Proof

By Theorem 2.5, we have

$$\begin{aligned} (I+J)_u = I_{k_n} \cdot J_{u-k_n} + I_{k_{n-1}} \cdot J_{u-k_{n-1}} + \dots + I_{k_0} \cdot J_{u-k_0}. \end{aligned}$$

For \(0 \le t \le n\), set

$$\begin{aligned} P_{u,t} = I_{k_n}\cdot J_{u-k_n} + I_{k_{n-1}}\cdot J_{u-k_{n-1}} + \dots + I_{k_{n-t}} \cdot J_{u-k_{n-t}}. \end{aligned}$$

Observe that

  1. (a)

    \(P_{u,t} = P_{u,t-1} + I_{k_{n-t}} \cdot J_{u-k_{n-t}}\) for \(1 \le t \le n\); and

  2. (b)

    \(P_{u,t-1} \cap I_{k_{n-t}} \cdot J_{u-k_{n-t}} = I_{k_{n-t+1}} \cdot J_{u-k_{n-t}}.\)

Indeed, (a) is obvious. To see (b), notice first that, by [4, Lemma 5.1], \(P_{u,t-1} \subseteq I_{k_{n-t+1}}\). Thus,

$$\begin{aligned} P_{u,t-1} \cap I_{k_{n-t}} \cdot J_{u-k_{n-t}} \subseteq I_{k_{n-t+1}} \cap J_{u-k_{n-t}} = I_{k_{n-t+1}} \cdot J_{u-k_{n-t}}. \end{aligned}$$

On the other hand, also by [4, Lemma 5.1], we have

$$\begin{aligned} I_{k_{n-t+1}} \cdot J_{u-k_{n-t}} \subseteq I_{k_{n-t+1}} \cdot J_{u-k_{n-t+1}} \subseteq P_{u,t-1} \text { and } I_{k_{n-t+1}} \cdot J_{u-k_{n-t}} \subseteq I_{k_{n-t}} \cdot J_{u-k_{n-t}}. \end{aligned}$$

Therefore,

$$\begin{aligned} P_{u,t-1} \cap I_{k_{n-t}} \cdot J_{u-k_{n-t}} = I_{k_{n-t+1}} \cdot J_{u-k_{n-t}}. \end{aligned}$$

The desired inequalities for depth and regularity now follow by tracing through the exact sequences

$$\begin{aligned} 0 \rightarrow S/I_{k_{n-t+1}} \cdot J_{u-k_{n-t}} \rightarrow S/P_{u,t-1} \oplus S/I_{k_{n-t}} \cdot J_{u-k_{n-t}} \rightarrow S/P_{u,t} \rightarrow 0, \end{aligned}$$

and making use of [12, Lemmas 2.2 and 3.2]. \(\square \)

By a line of argument similar to that in Theorem 3.1, Corollaries 2.6 and 2.9 then give the following consequence.

Lemma 3.2

(See [9, Theorem 4.2]) Let \(k \in {\mathbb N}\). Suppose that at least one of the following conditions holds:

  1. (a)

    for every nonnegative integral vector \(\alpha \in {\mathbb Z}_{\ge 0}^r\), \(\nu ^*_\alpha (I) \in {\mathbb Z}\); or

  2. (b)

    the jumping numbers on [0, k] of either I or J are all integers.

Then,

  1. (1)

    \({{\,\textrm{depth}\,}}S/\overline{(I+J)^k} \)    \(\ge {\displaystyle \min _{\begin{array}{c} i \in [1,k-1] \\ j \in [1,k] \end{array}} \{{{\,\textrm{depth}\,}}A/\overline{I^{k-i}} + {{\,\textrm{depth}\,}}B/\overline{J^i} + 1, {{\,\textrm{depth}\,}}A/\overline{I^{k-j+1}} + {{\,\textrm{depth}\,}}B/\overline{J^j}\}}\),

  2. (2)

    \({{\,\textrm{reg}\,}}S/\overline{(I+J)^k} \)    \(\le {\displaystyle \max _{\begin{array}{c} i \in [1,k-1] \\ j \in [1,k] \end{array}} \{{{\,\textrm{reg}\,}}A/\overline{I^{k-i}} + {{\,\textrm{reg}\,}}B/\overline{J^i} + 1, {{\,\textrm{reg}\,}}A/\overline{I^{k-j+1}} + {{\,\textrm{reg}\,}}B/\overline{J^j}\}}\).

We will show that the inequalities in Lemma 3.2 are in fact equalities. To this end, for \(0 \le t \le k\), as before, set

$$\begin{aligned} P_{k,t} = \overline{I^k}\cdot \overline{J^0} + \overline{I^{k-1}}\cdot \overline{J} + \dots + \overline{I^{k-t}}\cdot \overline{J^t}. \end{aligned}$$

Then, as in the proof of Theorem 3.1 (see also [9, Theorem 4.2]), we have

  1. (a)

    \(P_{k,t} = P_{k,t-1} + \overline{I^{k-t}}\cdot \overline{J^t}\) for \(1 \le t \le k\);

  2. (b)

    \(P_{k,t-1} \cap \overline{I^{k-t}}\cdot \overline{J^t} = \overline{I^{k-t+1}}\cdot \overline{J^t}.\)

These decompositions allow us to invoke the following notion and property of a Betti splitting to investigate the depth and regularity of \(P_{k,t}\); see [7]. For homogeneous ideals P, I, and J in S such that \(P = I+J\), the sum \(P = I+J\) is called a Betti splitting if the graded Betti numbers of P, I, J and \(I \cap J\) satisfy the following relation:

$$\begin{aligned} \beta _{i,j}(P) = \beta _{i,j}(I) + \beta _{i,j}(J) + \beta _{i-1,j}(I \cap J) \text { for all } i \ge 0 \text { and } j \in {\mathbb Z}. \end{aligned}$$
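
For a simple illustration: in \(S = {\mathbb {k}}[x,y]\), the decomposition \((x^2, y^2) = (x^2) + (y^2)\) is a Betti splitting. Here \((x^2) \cap (y^2) = (x^2y^2)\), and the relation above is readily verified: \(\beta _0((x^2,y^2)) = 2 = \beta _0((x^2)) + \beta _0((y^2))\) and \(\beta _1((x^2,y^2)) = 1 = \beta _0((x^2y^2))\), while all other Betti numbers vanish.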

Lemma 3.3

( [7, Corollary 2.2]) Let \(P = I+J\) be a Betti splitting in S. Then,

  1. (a)

    \({{\,\textrm{depth}\,}}S/P = \min \{{{\,\textrm{depth}\,}}S/I, {{\,\textrm{depth}\,}}S/J, {{\,\textrm{depth}\,}}S/I \cap J - 1\},\)

  2. (b)

    \({{\,\textrm{reg}\,}}S/P = \max \{{{\,\textrm{reg}\,}}S/I, {{\,\textrm{reg}\,}}S/J, {{\,\textrm{reg}\,}}S/I \cap J - 1\}.\)

As in the proof of [9, Theorem 5.3], to establish the equalities in Lemma 3.2, it suffices to show that \(P_{k,t} = P_{k,t-1} + \overline{I^{k-t}}\cdot \overline{J^t}\) is a Betti splitting. This is also how we proceed. It turns out that Betti splittings can be characterized by Tor-vanishing homomorphisms. We shall recall the following necessary terminology and results from [7, 9, 17].

Definition 3.4

(See [9, 17])

  1. (1)

    We say that a homomorphism \(\phi : M \rightarrow N\) of graded S-modules is Tor-vanishing if

    $$\begin{aligned} {{\,\textrm{Tor}\,}}^S_i({\mathbb {k}}, \phi ) = 0 \text { for all } i \ge 0. \end{aligned}$$
  2. (2)

    We say that a filtration \(\{Q_k\}_{k \in {\mathbb N}}\) of S-modules is a Tor-vanishing filtration if, for all \(k \ge 1\), the inclusion map \(Q_k \rightarrow Q_{k-1}\) is Tor-vanishing. That is, \({{\,\textrm{Tor}\,}}^S_i({\mathbb {k}}, Q_k) \rightarrow {{\,\textrm{Tor}\,}}^S_i({\mathbb {k}}, Q_{k-1})\) is the zero map for all \(k \ge 1\).

Lemma 3.5

( [7, Proposition 2.1]) The following conditions are equivalent:

  1. (1)

    The decomposition \(P = I+J\) is a Betti splitting, and

  2. (2)

    The inclusion maps \(I \cap J \rightarrow I\) and \(I \cap J \rightarrow J\) are Tor-vanishing.

In light of Lemmas 3.3 and 3.5, to show that the inequalities in Lemma 3.2 are in fact equalities, our argument is based on the following essential fact: the family of integral closures of powers of a monomial ideal forms a Tor-vanishing filtration.

Lemma 3.6

Let I be a monomial ideal in A.

  1. (1)

    \(\{\overline{I^k}\}_{k \in {\mathbb N}}\) is a Tor-vanishing filtration of ideals.

  2. (2)

    Let \({\mathfrak {m}}\) denote the maximal homogeneous ideal of A. Then, for any \(k \in {\mathbb N}\), we have

    $$\begin{aligned} \overline{I^{k}} \subseteq {\mathfrak {m}}\cdot \overline{I^{k-1}}. \end{aligned}$$

Proof

(1) Recall from [9] that for a monomial ideal I, \(\delta ^*(I)\) denotes the ideal generated by elements of the form f/x, where f is a minimal monomial generator of I and x is a variable dividing f. By [17, Proposition 4.4 and Lemma 4.2] (see also [1, Proposition 3.5]), it suffices to show that

$$\begin{aligned} \delta ^*(\overline{I^k}) \subseteq \overline{I^{k-1}}. \end{aligned}$$
(3.1)

Let M be any minimal monomial generator of \(\overline{I^k}\). Then,

$$\begin{aligned} M^r \in (I^k)^r = I^{kr} \text { for some } r \in {\mathbb N}. \end{aligned}$$

That is, there exist monomials \(f_1, \dots , f_{kr} \in I\) (not necessarily distinct) such that \(M^r = f_1 \dots f_{kr}\).

Let x be a variable dividing M. Clearly, \(x^r ~\big |~ M^r\). This implies that there exist indices \(i_1, \dots , i_s\), for some \(s \le r\), such that \(x^r\) divides the product \(f_{i_1} \dots f_{i_s}\). By considering the product of the \(f_i\)’s for \(i \not = i_1, \dots , i_s\), it is easy to see that \((M/x)^r \in I^{kr-s} \subseteq I^{kr-r} = (I^{k-1})^r\). Thus, \(M/x \in \overline{I^{k-1}}\). We have established (3.1).

(2) The statement is trivial for \(k = 1\). Suppose that \(k \ge 2\). By definition, we have

$$\begin{aligned} \overline{I^k} \subseteq {\mathfrak {m}}\cdot \delta ^*(\overline{I^k}). \end{aligned}$$

The desired containment now follows from (3.1). \(\square \)
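
To illustrate the operation \(\delta ^*\) and the containment (3.1), here is a small self-contained sketch; the helper names and the choice of \(I = (x^2, y^2) \subseteq {\mathbb {k}}[x,y]\) are our own, and we use the fact that for this I the ideal \(\overline{I^k}\) consists of the monomials \(x^ay^b\) with \(a + b \ge 2k\).

```python
# Illustrative sketch of the delta* operation from the proof of Lemma 3.6.
# Monomials are represented by exponent tuples; delta_star(gens) returns the exponent
# vectors of all f/x with f a listed generator and x a variable dividing f.
def delta_star(gens):
    out = set()
    for f in gens:
        for i, e in enumerate(f):
            if e > 0:
                out.add(f[:i] + (e - 1,) + f[i + 1:])
    return out

# For I = (x^2, y^2) in k[x, y], the integral closure of I^k is generated by the
# monomials x^a y^b with a + b >= 2k (its Newton polyhedron is {a + b >= 2}).
def in_closure_of_power(exp, k):
    return sum(exp) >= 2 * k

closure_I2 = [(4, 0), (3, 1), (2, 2), (1, 3), (0, 4)]  # minimal generators of the closure of I^2

# Check the containment (3.1) in this example: delta*(closure of I^2) lies in the closure of I.
print(all(in_closure_of_power(g, 1) for g in delta_star(closure_I2)))  # True
```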

As a consequence of Lemma 3.6, we immediately obtain the following containment, which is of independent interest.

Corollary 3.7

For any positive integer e, we have

$$\begin{aligned} \overline{I^{k+e}} \subseteq {\mathfrak {m}}^e \cdot \overline{I^{k}}. \end{aligned}$$

Proof

The assertion follows from a repeated application of part (2) of Lemma 3.6. \(\square \)

We are now ready to state our last main result.

Theorem 3.8

Let \(I \subseteq A\) and \(J \subseteq B\) be monomial ideals, and let \(k \in {\mathbb N}\). Suppose that at least one of the following conditions holds:

  1. (a)

    for every nonnegative integral vector \(\alpha \in {\mathbb Z}_{\ge 0}^r\), \(\nu ^*_\alpha (I) \in {\mathbb Z}\); or

  2. (b)

    the jumping numbers on [0, k] of either I or J are all integers.

Then, we have

  1. (1)

    \({{\,\textrm{depth}\,}}S/\overline{(I+J)^k}\)    \(={\displaystyle \min _{\begin{array}{c} i \in [1,k-1] \\ j \in [1,k] \end{array}} \{{{\,\textrm{depth}\,}}A/\overline{I^{k-i}} + {{\,\textrm{depth}\,}}B/\overline{J^i} + 1, {{\,\textrm{depth}\,}}A/\overline{I^{k-j+1}} + {{\,\textrm{depth}\,}}B/\overline{J^j}\}}\),

  2. (2)

    \({{\,\textrm{reg}\,}}S/\overline{(I+J)^k} \)    \(={\displaystyle \max _{\begin{array}{c} i \in [1,k-1] \\ j \in [1,k] \end{array}} \{{{\,\textrm{reg}\,}}A/\overline{I^{k-i}} + {{\,\textrm{reg}\,}}B/\overline{J^i} + 1, {{\,\textrm{reg}\,}}A/\overline{I^{k-j+1}} + {{\,\textrm{reg}\,}}B/\overline{J^j}\}}\).

Proof

By Lemma 3.6, we have that \(\{\overline{I^k}\}_{k \in {\mathbb N}}\) and \(\{\overline{J^k}\}_{k \in {\mathbb N}}\) are Tor-vanishing filtrations of ideals in A and B, respectively. It then follows from the proof of [9, Theorem 5.3] that \(P_{k,t} = P_{k,t-1} + \overline{I^{k-t}} \cdot \overline{J^t}\) is a Betti splitting for all \(1 \le t \le k\). Notice again that \(P_{k,t-1} \cap \overline{I^{k-t}} \cdot \overline{J^t} = \overline{I^{k-t+1}} \cdot \overline{J^t}\). Now, applying Lemma 3.3 and [12, Lemmas 2.2 and 2.3] in the same way as in the proof of [9, Theorems 4.2 and 5.3], we obtain the desired equality. \(\square \)

As noted before, the condition that \(\nu ^*_\alpha (I) \in {\mathbb Z}\) in Theorem 3.8 is satisfied, for instance, when \(I^{(k)} = \overline{I^k}\) for all \(k \in {\mathbb N}\).

Corollary 3.9

Let \(I \subseteq A\) and \(J \subseteq B\) be monomial ideals. Suppose that \(I^{(k)} = \overline{I^k}\) for all \(k \in {\mathbb N}\). Then, the inequalities in Lemma 3.2 are equalities.

Remark 3.10

It would be of interest to know when the inequalities in Theorem 3.1 are equalities. Since the rational powers at jumping numbers of I and J do not necessarily form filtrations of ideals, it is not clear how the techniques of Betti splitting and Tor-vanishing would generalize to this case.