Abstract
We identify the p-adic unit roots of the zeta function of a projective hypersurface over a finite field of characteristic p as the eigenvalues of a product of special values of a certain matrix of p-adic series. That matrix is a product \(F(\varLambda ^p)^{-1}F(\varLambda )\), where the entries in the matrix \(F(\varLambda )\) are A-hypergeometric series with integral coefficients and \(F(\varLambda )\) is independent of p.
1 Introduction
Dwork [7] expressed the unit root of the zeta function of an ordinary elliptic curve in the Legendre family in characteristic p in terms of the Gaussian hypergeometric function \({}_2F_1(1/2,1/2,1,\lambda )\). There have since been a number of generalizations and extensions of that result: see [3, Sect. 1]. The point of the current paper is to prove such a result for projective hypersurfaces where the zeta function has multiple unit roots by using the matrix of A-hypergeometric series that appeared in [1]. We note that related results have recently been obtained for hypersurfaces in the torus by Beukers and Vlasenko [4, 5] using different methods.
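By way of illustration (a numerical check of our own, not part of the paper's argument), Dwork's result can be tested by brute force: for the Legendre curve \(y^2=x(x-1)(x-\lambda )\) over \({{\mathbb {F}}}_p\), the trace of Frobenius satisfies \(a_p\equiv (-1)^{(p-1)/2}\sum _{k=0}^{(p-1)/2}\left( {\begin{array}{c}(p-1)/2\\ k\end{array}}\right) ^2\lambda ^k \pmod p\), which is, up to sign, the truncation mod p of \({}_2F_1(1/2,1/2,1,\lambda )\); for an ordinary curve the unit root reduces to \(a_p\) mod p.

```python
from math import comb

def trace_frobenius(p, lam):
    """a_p = p + 1 - #E(F_p) for the Legendre curve y^2 = x(x-1)(x-lam)."""
    squares = {(y * y) % p for y in range(p)}
    affine = 0
    for x in range(p):
        v = (x * (x - 1) * (x - lam)) % p
        if v == 0:
            affine += 1          # one point with y = 0
        elif v in squares:
            affine += 2          # two points with y != 0
    return p + 1 - (affine + 1)  # +1 for the point at infinity

def truncated_2f1(p, lam):
    """(-1)^m * sum_k binom(m,k)^2 * lam^k mod p, with m = (p-1)/2:
    the mod-p truncation (up to sign) of 2F1(1/2,1/2,1,lam)."""
    m = (p - 1) // 2
    s = sum(comb(m, k) ** 2 * pow(lam, k, p) for k in range(m + 1))
    return ((-1) ** m * s) % p

p = 11
for lam in range(2, p):  # lam = 0, 1 give singular curves
    assert trace_frobenius(p, lam) % p == truncated_2f1(p, lam)
```

The congruence follows from \(a_p\equiv -\sum _x f(x)^{(p-1)/2}\pmod p\) together with the fact that \(\sum _x x^j\equiv -1\pmod p\) exactly when \((p-1)\mid j\), \(j>0\).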
Let \(\{x^{\mathbf{b}_k}\}_{k=1}^N\) be the set of all monomials of degree d in variables \(x_0,\dots ,x_n\) (so \(N = \left( {\begin{array}{c}d+n\\ n\end{array}}\right)\)) and let
$$f_\lambda = \sum _{k=1}^N \lambda _k x^{\mathbf{b}_k},\qquad \lambda = (\lambda _1,\dots ,\lambda _N),$$
be a homogeneous polynomial of degree d over the finite field \({{\mathbb {F}}}_q\), \(q=p^a\), p a prime. Let \(X_{\lambda }\subseteq {{\mathbb {P}}}^n_{{{\mathbb {F}}}_q}\) be the projective hypersurface defined by the vanishing of \(f_\lambda\) and let \(Z(X_\lambda /{{\mathbb {F}}}_q,t)\) be its zeta function. Define a rational function \(P_\lambda (t)\) by the equation
$$Z(X_\lambda /{{\mathbb {F}}}_q,t) = \frac{P_\lambda (t)^{(-1)^n}}{(1-t)(1-qt)\cdots (1-q^{n-1}t)}.$$
When \(X_\lambda\) is smooth, \(P_\lambda (t)\) is the characteristic polynomial of Frobenius acting on middle-dimensional primitive cohomology. In this case, \(P_\lambda (t)\) has degree
$$\frac{(d-1)^{n+1}+(-1)^{n+1}(d-1)}{d}.$$
In the general case, we know only that \(P_\lambda (t)\) is a rational function.
We regard \(f_\lambda\) as a polynomial with fixed exponents and variable coefficients, giving rise to a family of rational functions \(P_\lambda (t)\). The p-adic unit (reciprocal) roots of \(P_\lambda (t)\) all occur in the numerator (Proposition 1.1), and for generic \(\lambda\) the polynomial \(P_\lambda (t)\) has the maximal possible number of p-adic unit (reciprocal) roots (by the generic invertibility of the Hasse-Witt matrix; see below). Our goal in this paper is to give a p-adic analytic formula for these unit roots in terms of A-hypergeometric series.
Write \(\mathbf{b}_k = (b_{0k},\dots ,b_{nk})\) with \(\sum _{i=0}^n b_{ik} = d\). It will be convenient to define an augmentation of these vectors. For \(k=1,\dots ,N\), put
$$\mathbf{a}_k = (b_{0k},\dots ,b_{nk},1)\in {{\mathbb {N}}}^{n+2}$$
(where \({{\mathbb {N}}}\) denotes the nonnegative integers). Let \(A = \{\mathbf{a}_k\}_{k=1}^N\). Note that the vectors \(\mathbf{a}_k\) all lie on the hyperplane \(\sum _{i=0}^n u_i=du_{n+1}\) in \({{\mathbb {R}}}^{n+2}\). Let
$$U = \{(u_0,\dots ,u_n,1)\in A\mid u_i\ge 1\ \text{for } i=0,\dots ,n\}.$$
Note that \(\{x_0^{u_0}\cdots x_n^{u_n}\mid u\in U\}\) is the set of all monomials of degree d that are divisible by the product \(x_0\cdots x_n\), so \(|U |= \left( {\begin{array}{c}d-1\\ n\end{array}}\right)\). We assume throughout that \(d\ge n+1\), which implies \(U\ne \emptyset\). (If \(d<n+1\), then U is empty and none of the reciprocal roots of \(P_\lambda (t)\) is a p-adic unit. If one defines the determinant of an empty matrix to be 1, then Theorem 1.1 below is trivially true in this case.)
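Both counts are easy to confirm by enumeration; a minimal sketch (the values of \(n\) and \(d\) are arbitrary choices of ours):

```python
from itertools import product
from math import comb

def degree_d_monomials(d, n):
    """All exponent vectors (b_0,...,b_n) with b_i >= 0 and sum b_i = d."""
    return [b for b in product(range(d + 1), repeat=n + 1) if sum(b) == d]

n, d = 3, 5  # quintics in P^3, so d >= n + 1 as the paper assumes
mons = degree_d_monomials(d, n)
interior = [b for b in mons if all(bi >= 1 for bi in b)]  # divisible by x_0...x_n

assert len(mons) == comb(d + n, n)       # N = C(d+n, n) = 56
assert len(interior) == comb(d - 1, n)   # |U| = C(d-1, n) = 4
```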
We recall the definition of the Hasse-Witt matrix of \(f_\lambda\), a \((|U|\times |U|)\)-matrix with rows and columns indexed by U. Let \(\varLambda _1,\dots ,\varLambda _N\) be indeterminates and let \(H(\varLambda ) = \big [ H_{uv}(\varLambda )\big ]_{u,v\in U}\) be the matrix of polynomials:
$$H_{uv}(\varLambda ) = \sum _{\substack{\nu \in {{\mathbb {N}}}^N\\ \sum _{k=1}^N \nu _k\mathbf{a}_k = pu-v}} \frac{(p-1)!}{\nu _1!\cdots \nu _N!}\,\varLambda _1^{\nu _1}\cdots \varLambda _N^{\nu _N}. \tag{1.3}$$
Since the last coordinate of each \(\mathbf{a}_k\), u, and v equals 1, the condition on the summation implies \(\sum _{k=1}^N \nu _k = p-1\). In particular, it follows that \(\nu _k\le p-1\) for all k, so the coefficients of each \(H_{uv}(\varLambda )\) lie in \({{\mathbb {Q}}}\cap {{\mathbb {Z}}}_p\). Let \({\bar{H}}_{uv}(\varLambda )\in {{\mathbb {F}}}_p[\varLambda ]\) be the reduction mod p of \(H_{uv}(\varLambda )\) and let \({\bar{H}}(\varLambda )\) be the reduction mod p of \(H(\varLambda )\). Then \({\bar{H}}(\lambda )\) is the Hasse-Witt matrix of \(f_\lambda\) (relative to a certain basis: see Katz [8, Algorithm 2.3.7.14]).
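In down-to-earth terms, \({\bar{H}}_{uv}(\lambda )\) is the coefficient of \(x^{pu-v}\) in \(f_\lambda ^{p-1}\) reduced mod p. A toy computation of our own choosing (a cubic curve in \({{\mathbb {P}}}^2\), where U is a single point and the matrix is \(1\times 1\); the particular polynomial and prime are illustrative only):

```python
def poly_pow_mod(f, e, p):
    """e-th power of a multivariate polynomial mod p; polynomials are dicts
    mapping exponent tuples to coefficients."""
    nvars = len(next(iter(f)))
    result = {(0,) * nvars: 1}
    for _ in range(e):
        nxt = {}
        for ea, ca in result.items():
            for eb, cb in f.items():
                key = tuple(a + b for a, b in zip(ea, eb))
                nxt[key] = (nxt.get(key, 0) + ca * cb) % p
        result = {k: c for k, c in nxt.items() if c}
    return result

# Toy case: f = x0^3 + x1^3 + x2^3 + lam*x0*x1*x2 over F_5 (n = 2, d = 3),
# a sub-family of the full cubic family with most coefficients set to 0.
p, lam = 5, 2
f = {(3, 0, 0): 1, (0, 3, 0): 1, (0, 0, 3): 1, (1, 1, 1): lam}
fp = poly_pow_mod(f, p - 1, p)
u = v = (1, 1, 1)
target = tuple(p * ui - vi for ui, vi in zip(u, v))  # exponent p*u - v = (4,4,4)
hasse_witt = fp.get(target, 0)  # = 4 here, so this member of the family is ordinary
```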
Write \(P_\lambda (t) = Q_\lambda (t)/R_\lambda (t)\), where \(Q_\lambda (t),R_\lambda (t)\in 1 + {{\mathbb {Z}}}[t]\) are relatively prime polynomials. As a special case of [2, Theorem 1.4] we have the following result.
Proposition 1.1
For all \(\lambda \in {{\mathbb {F}}}_q^N\) we have \(R_\lambda (t)\equiv 1 \pmod {q}\) and
Proposition 1.1 implies that all the unit roots of \(P_\lambda (t)\) occur in the numerator and that there are at most \(\mathrm{card}(U)\) of them. The Hasse-Witt matrix is known to be generically invertible for a “sufficiently general” polynomial \(f_\lambda\) (Koblitz [9], Miller [10, 11]). We recall the precise version of that fact that we need.
We suppose the \(\mathbf{a}_k\) to be ordered so that \(U=\{\mathbf{a}_k\}_{k=1}^M\), \(M=\left( {\begin{array}{c}d-1\\ n\end{array}}\right)\). We may then write the matrix \(H(\varLambda )\) as \(\big [ H_{ij}(\varLambda )\big ]_{i,j=1}^M\), where
$$H_{ij}(\varLambda ) = H_{\mathbf{a}_i\mathbf{a}_j}(\varLambda ).$$
Consider the related matrix \(B(\varLambda ) = \big [B_{ij}(\varLambda )\big ]_{i,j=1}^M\) of Laurent polynomials defined by
$$B_{ij}(\varLambda ) = \varLambda _i^{-p}\varLambda _j H_{ij}(\varLambda ), \tag{1.4}$$
i.e., \(B(\varLambda ) = C(\varLambda ^p)^{-1}H(\varLambda )C(\varLambda )\), where \(C(\varLambda )\) is the diagonal matrix with entries \(\varLambda _1,\dots ,\varLambda _M\). Since \(\lambda _k^{p^a}=\lambda _k\) for \(\lambda _k\in {{\mathbb {F}}}_q\), Proposition 1.1 has the following corollary.
Corollary 1.1
For all \(\lambda \in ({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\),
Put \(D\,(\varLambda ) = \det B(\varLambda )\). By [1, Proposition 2.11] we have the following result.
Proposition 1.2
The Laurent polynomial \(D(\varLambda )\) has constant term 1.
Proposition 1.2 implies that neither \(D(\varLambda )\) nor \({\bar{D}}(\varLambda )\) is the zero polynomial, so the matrix \({\bar{B}}(\lambda )\) is invertible for \(\lambda\) in a Zariski open subset of \(({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\), a nonempty set for q sufficiently large. It follows from Corollary 1.1 that for \({\bar{D}}(\lambda )\ne 0\), \(P_\lambda (t)\) has M unit reciprocal roots. Let \(\{\pi _j(\lambda )\}_{j=1}^M\) be these roots and put
$$\rho (\lambda ,t) = \prod _{j=1}^M \big (1-\pi _j(\lambda )t\big ).$$
Our goal is to describe the polynomial \(\rho (\lambda ,t)\) in terms of the A-hypergeometric series introduced in [1, Equation (3.3)].
We begin by defining a ring that contains the desired series. For each \(i=1,\dots ,M\), the Laurent polynomials \(B_{ij}(\varLambda )\) have exponents lying in the set
$$L_i = \Big \{\nu + e_j - pe_i\ \Big |\ 1\le j\le M,\ \nu \in {{\mathbb {N}}}^N,\ \sum _{k=1}^N \nu _k\mathbf{a}_k = p\mathbf{a}_i-\mathbf{a}_j\Big \},$$
where \(e_1,\dots ,e_N\) denotes the standard basis of \({{\mathbb {Z}}}^N\).
Let \({{\mathscr {C}}}\subseteq {{\mathbb {R}}}^{N}\) be the real cone generated by \(\bigcup _{i=1}^M L_{i}\). By [1, Proposition 2.9], the cone \({{\mathscr {C}}}\) does not contain a line, hence it has a vertex at the origin. Put \(E={{\mathscr {C}}}\cap {{\mathbb {Z}}}^N\). Note that \((l_1,\dots ,l_N)\in E\) implies that \(\sum _{k=1}^N l_k\mathbf{a}_k = \mathbf{0}\) and that \(\sum _{k=1}^N l_k = 0\) since the last coordinate of each \(\mathbf{a}_k\) equals 1. Let \({{\mathbb {C}}}_p\) be the completion of an algebraic closure of \({{\mathbb {Q}}}_p\) with absolute value \(|\cdot |\) associated to the valuation \(\mathrm{ord}\), normalized by \(\mathrm{ord}\;p=1\) and \(|p|= p^{-1}\). Set
$$R_E = \Big \{ \sum _{l\in E} c_l\varLambda ^l\ \Big |\ c_l\in {{\mathbb {C}}}_p,\ \{|c_l|\}_{l\in E}\ \text{is bounded}\Big \},$$
i.e., \(R_E\) is the set of Laurent series over \({{\mathbb {C}}}_p\) having exponents in E and bounded coefficients. Note that \(R_E\) is a ring (since \({{\mathscr {C}}}\) has a vertex at the origin) and that the entries of \(B(\varLambda )\) all lie in \(R_E\). By Proposition 1.2, the Laurent polynomial \(D(\varLambda )\) is an invertible element of \(R_E\), so \(B(\varLambda )^{-1}\) has entries in \(R_E\).
We define a matrix \(F(\varLambda ) = \big [ F_{ij}(\varLambda )\big ]_{i,j=1}^M\) with entries in \(R_E\). For \(i\ne j\), put
and for \(i=j\) put
The coefficients of these series are multinomial coefficients, hence lie in \({{\mathbb {Z}}}\), so these series lie in \(R_E\). Note also that each \(F_{ii}(\varLambda )\) has constant term 1, while the \(F_{ij}(\varLambda )\) for \(i\ne j\) have no constant term. It follows that \(\det F(\varLambda )\) has constant term 1, hence \(\det F(\varLambda )\) is an invertible element of \(R_E\). We may therefore define a matrix \({{\mathscr {F}}}(\varLambda )\) with entries in \(R_E\) by the formula
$${{\mathscr {F}}}(\varLambda ) = F(\varLambda ^p)^{-1}F(\varLambda ).$$
The interpretation of the \(F_{ij}(\varLambda )\) as A-hypergeometric series will be explained in Sect. 7 (see Equation (7.10)).
Put
$${{\mathscr {D}}} = \big \{\lambda \in {{\mathbb {C}}}_p^N\ \big |\ |\lambda _i|= 1\ \text{for } i=1,\dots ,M,\ |\lambda _i|\le 1\ \text{for } i=M+1,\dots ,N,\ \text{and } |D(\lambda )|= 1\big \}.$$
Our main result is the following.
Theorem 1.1
The entries in the matrix \({{\mathscr {F}}}\,(\varLambda )\) are functions on \({{\mathscr {D}}}\). Let \(\lambda \in ({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\) and let \({\hat{\lambda }}\in {{\mathbb {Q}}}_p(\zeta _{q-1})^N\) be its Teichmüller lifting. If \({\bar{D}}(\lambda )\ne 0\), then \({\hat{\lambda }}^{p^i}\in {{\mathscr {D}}}\) for \(i=0,\dots ,a-1\) and
$$\rho (\lambda ,t) = \det \big (I - t\,{{\mathscr {F}}}({\hat{\lambda }}^{p^{a-1}}){{\mathscr {F}}}({\hat{\lambda }}^{p^{a-2}})\cdots {{\mathscr {F}}}({\hat{\lambda }})\big ).$$
Remark 1
The first sentence of Theorem 1.1 will be made more precise in Sect. 2 (see Theorem 2.1). The assertion that \({\hat{\lambda }}^{p^i}\in {{\mathscr {D}}}\) for all i if \({\bar{D}}(\lambda )\ne 0\) follows immediately from the fact that \({\bar{D}}(\varLambda )\) has coefficients in \({{\mathbb {F}}}_p\).
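For computations, the Teichmüller lifting can be found coordinate-wise by iterating the Frobenius \(x\mapsto x^p\), which converges p-adically to the unique root of unity congruent to the given coordinate. A minimal sketch for \(q=p\) (the chosen prime and precision are illustrative):

```python
def teichmuller(t, p, k):
    """Teichmuller lift of t mod p, as an integer mod p^k: the unique
    (p-1)-st root of unity in Z_p congruent to t, found by iterating
    Frobenius x -> x^p until it stabilizes."""
    mod = p ** k
    x = t % mod
    while True:
        y = pow(x, p, mod)
        if y == x:
            return x
        x = y

p, k = 7, 4
lift = teichmuller(3, p, k)
assert lift % p == 3                  # reduces to 3 in F_7
assert pow(lift, p - 1, p ** k) == 1  # a (p-1)-st root of unity mod 7^4
```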
Remark 2
The series \(F_{ij}(\varLambda )\) are related to the Laurent polynomials \(B_{ij}(\varLambda )\) by truncation (see [1, Proposition 3.8]). One can thus regard Theorem 1.1 as a refinement of Corollary 1.1, i.e., one may think of \({{\mathscr {F}}}(\varLambda )\) as a p-adic refinement of the Hasse-Witt matrix \(B(\varLambda )\).
2 Rings of p-adic series
This section has two purposes. We need a ring R large enough to contain all the series that will be encountered in the proof of Theorem 1.1 and we need to identify a subring \(R'\subseteq R\) whose elements define functions on \({{\mathscr {D}}}\).
Consider the \({{\mathbb {C}}}_p\)-vector space
$${{\mathbb {B}}} = \Big \{ \sum _{l\in {{\mathbb {Z}}}^N} c_l\varLambda ^l\ \Big |\ c_l\in {{\mathbb {C}}}_p,\ \{|c_l|\}_{l\in {{\mathbb {Z}}}^N}\ \text{is bounded}\Big \}$$
of all Laurent series with bounded coefficients. We define a norm on \({{\mathbb {B}}}\) by setting
$$\Big |\sum _{l\in {{\mathbb {Z}}}^N} c_l\varLambda ^l\Big |= \sup _{l\in {{\mathbb {Z}}}^N}|c_l|.$$
The \({{\mathbb {C}}}_p\)-vector space \({{\mathbb {B}}}\) is complete in this norm and \(R_E\) is a \({{\mathbb {C}}}_p\)-subspace of \({{\mathbb {B}}}\) which is also complete.
We begin by defining a subring \(R'_E\) of the ring \(R_E\) whose elements define functions on \({{\mathscr {D}}}\). Let \(L_E\) denote the subring of \(R_E\) consisting of the Laurent polynomials that lie in \(R_E\). Since \(D(\varLambda )\) is an invertible element of \(R_E\), we may define a subring of \(R_E\)
$$L_{E,D(\varLambda )} = \big \{ h(\varLambda )/D(\varLambda )^k\ \big |\ h(\varLambda )\in L_E,\ k\in {{\mathbb {N}}}\big \}.$$
The definitions of \(R_E\) and \({{\mathscr {D}}}\) show that the elements of \(L_{E,D(\varLambda )}\) are functions on \({{\mathscr {D}}}\). We define \(R_E'\) to be the completion of \(L_{E,D(\varLambda )}\) under the norm on \({{\mathbb {B}}}\). Clearly \(R_E'\) is a subring of \(R_E\). The next two lemmas will show that the elements of \(R_E'\) define functions on \({{\mathscr {D}}}\).
Lemma 2.1
For \(h\,(\varLambda )\in L_E\) and \(k\in {{\mathbb {N}}}\), one has
$$\big |h(\varLambda )/D(\varLambda )^k\big | = |h(\varLambda )|.$$
Proof
Write
$$\frac{1}{D(\varLambda )^k} = \sum _{l\in E} d_l\varLambda ^l.$$
Since \(D\,(\varLambda )\) has constant term 1 and p-integral coefficients, it follows that \(d_\mathbf{0} = 1\) and the \(d_l\) are p-integral. Write
$$h(\varLambda ) = \sum _{i=1}^r h_i\varLambda ^{e_i},$$
where \(e_i\in E\) for \(i=1,\dots ,r\). The coefficient of \(\varLambda ^l\) in the quotient \(h(\varLambda )/D(\varLambda )\,^k\) is
$$\sum _{i=1}^r h_i d_{l-e_i}, \tag{2.1}$$
with the understanding that \(d_{l-e_i} = 0\) if \(l-e_i\not \in E\). Since the \(d_{l-e_i}\) are p-integral
$$\Big |\sum _{i=1}^r h_i d_{l-e_i}\Big |\le \max _{1\le i\le r}|h_i |= |h(\varLambda )|,$$
which implies that \(|h(\varLambda )/D(\varLambda )^k|\le |h(\varLambda )|\).
We must have \(|h(\varLambda )|= |h_i|\) for some i. To fix ideas, suppose that
$$|h_i |= |h(\varLambda )|\quad \text{for } i=1,\dots ,s \tag{2.2}$$
and
$$|h_i |< |h(\varLambda )|\quad \text{for } i=s+1,\dots ,r. \tag{2.3}$$
Since the cone \({{\mathscr {C}}}\) has a vertex at the origin, we can choose a vector \(v\in {{\mathbb {R}}}^N\) such that the inner product \(v\bullet w\) is \(\ge 0\) for all \(w\in {{\mathscr {C}}}\) and \(v\bullet w=0\) only if \(w=\mathbf{0}\). For some \(i\in \{1,\dots ,s\}\), the inner product \(v\bullet e_i\) is minimal. To fix ideas, suppose that
$$v\bullet e_1\le v\bullet e_i\quad \text{for } i=1,\dots ,s. \tag{2.4}$$
From (2.1), the coefficient of \(\varLambda ^{e_1}\) in \(h(\varLambda )/D(\varLambda )^k\) is (using \(d_\mathbf{0} = 1\))
$$h_1 + \sum _{i=2}^r h_i d_{e_1-e_i}. \tag{2.5}$$
For \(i=2,\dots ,s\) we have by (2.4) that \(v\bullet (e_1-e_i)\le 0\). But \(e_1-e_i\ne \mathbf{0}\), so \(e_1-e_i\not \in E\). This implies \(d_{e_1-e_i} = 0\), so (2.5) simplifies to
$$h_1 + \sum _{i=s+1}^r h_i d_{e_1-e_i}.$$
Equations (2.2) and (2.3) now imply that
$$\Big |h_1 + \sum _{i=s+1}^r h_i d_{e_1-e_i}\Big |= |h(\varLambda )|,$$
hence \(|h(\varLambda )/D(\varLambda )^k|\ge |h(\varLambda )|\). \(\square\)
It follows from the definition of E that if \((l_1,\dots ,l_N)\in E\), then \(l_i\ge 0\) for \(i=M+1,\dots ,N\). This implies that if \(h(\varLambda )\in L_E\) and \(\lambda \in {{\mathscr {D}}}\), then
$$|h(\lambda )|\le |h(\varLambda )|. \tag{2.6}$$
Lemma 2.2
Let \(h_k(\varLambda )\in L_E\) for \(k\ge 1\). We suppose the sequence \(\{h_k(\varLambda )/D(\varLambda )^k\}_{k=1}^\infty\) converges to \(\xi (\varLambda )\in R_E'\). Then for all \(\lambda \in {{\mathscr {D}}}\), the sequence \(\{h_k(\lambda )/D(\lambda )^k\}_{k=1}^\infty\) converges in \({{\mathbb {C}}}_p\).
Proof
For \(k<k'\) we have
$$\begin{aligned} \Big |\frac{h_k(\lambda )}{D(\lambda )^k} - \frac{h_{k'}(\lambda )}{D(\lambda )^{k'}}\Big | &= \Big |\frac{h_k(\lambda )D(\lambda )^{k'-k} - h_{k'}(\lambda )}{D(\lambda )^{k'}}\Big | \\ &= \big |h_k(\lambda )D(\lambda )^{k'-k} - h_{k'}(\lambda )\big | \\ &\le \big |h_k(\varLambda )D(\varLambda )^{k'-k} - h_{k'}(\varLambda )\big | \\ &= \Big |\frac{h_k(\varLambda )}{D(\varLambda )^k} - \frac{h_{k'}(\varLambda )}{D(\varLambda )^{k'}}\Big |, \end{aligned}$$
where the second line follows from the requirement that \(|D(\lambda )|= 1\) for \(\lambda \in {{\mathscr {D}}}\), the third line holds by (2.6), and the fourth line holds by Lemma 2.1.
Since the sequence \(\{h_k(\varLambda )/D(\varLambda )^k\}_{k=1}^\infty\) converges, it is a Cauchy sequence. This inequality shows that the sequence \(\{h_k(\lambda )/D\,(\lambda )^k\}_{k=1}^\infty\) is also a Cauchy sequence, which must converge because \({{\mathbb {C}}}_p\) is complete. \(\square\)
It is easy to see that the limit of the sequence \(\{h_k(\lambda )/D(\lambda )^k\}_{k=1}^\infty\) is independent of the choice of sequence \(\{h_k(\varLambda )/D(\varLambda )^k\}_{k=1}^\infty\) converging to \(\xi (\varLambda )\). We may therefore define
$$\xi (\lambda ) = \lim _{k\rightarrow \infty } \frac{h_k(\lambda )}{D(\lambda )^k}.$$
Using this definition, the elements of \(R_E'\) become functions on \({{\mathscr {D}}}\). Note that the proof of Lemma 2.2 shows that, as functions on \({{\mathscr {D}}}\), the sequence \(\{h_k/D^k\}_{k=1}^\infty\) converges uniformly to \(\xi\).
We shall need the following fact later.
Lemma 2.3
If \(\xi (\varLambda )\in R_E'\) and \(\lambda \in {{\mathscr {D}}}\), then \(|\xi (\lambda )\vert \le |\xi (\varLambda )|\).
Proof
It follows from Lemma 2.1, Equation (2.6), and the fact that \(|D(\lambda )|= 1\) for \(\lambda \in {{\mathscr {D}}}\) that
$$\Big |\frac{h(\lambda )}{D(\lambda )^k}\Big |\le \Big |\frac{h(\varLambda )}{D(\varLambda )^k}\Big | \tag{2.7}$$
for \(h\,(\varLambda )\in L_E\), \(k\in {{\mathbb {N}}}\), and \(\lambda \in {{\mathscr {D}}}\). If we choose a sequence \(\{h_k(\varLambda )/D\,(\varLambda )^k\}_{k=1}^\infty\), with the \(h_k(\varLambda )\) in \(L_E\), converging to \(\xi (\varLambda )\in R_E'\), then for k sufficiently large we have
and for \(\lambda \in {{\mathscr {D}}}\) we have by the definition of \(\xi (\lambda )\)
The assertion of the lemma now follows from (2.7). \(\square\)
The following result is our more precise version of Theorem 1.1.
Theorem 2.1
The entries in the matrix \({{\mathscr {F}}}(\varLambda )\) lie in \(R_E'\). Let \(\lambda \in ({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\) and let \({\hat{\lambda }}\in {{\mathbb {Q}}}_p(\zeta _{q-1})^N\) be its Teichmüller lifting. If \({\bar{D}}(\lambda )\ne 0\), then \({\hat{\lambda }}^{p^i}\in {{\mathscr {D}}}\) for \(i=0,\dots ,a-1\) and
$$\rho (\lambda ,t) = \det \big (I - t\,{{\mathscr {F}}}({\hat{\lambda }}^{p^{a-1}}){{\mathscr {F}}}({\hat{\lambda }}^{p^{a-2}})\cdots {{\mathscr {F}}}({\hat{\lambda }})\big ).$$
Most of this paper is devoted to proving the first sentence of Theorem 2.1. The remaining assertion will then follow by an application of Dwork’s p-adic cohomology theory.
Unfortunately the rings \(R_E\) and \(R'_E\) are not large enough to contain all the relevant series we shall encounter and therefore need to be enlarged. Let \({\tilde{R}} = R_E[\varLambda _1^{\pm 1},\dots ,\varLambda _N^{\pm 1}]\) and let
It is clear from the definition of \({{\mathscr {D}}}\) that the elements of \({\tilde{R}}'\) are functions on \({{\mathscr {D}}}\). We define R (resp. \({R}'\)) to be the completion of \({\tilde{R}}\) (resp. \({\tilde{R}}'\)) under the norm on \({{\mathbb {B}}}\). Note that the elements of \({R}'\) define functions on \({{\mathscr {D}}}\). It follows from Lemma 2.3 that
$$|\xi (\lambda )|\le |\xi (\varLambda )|\quad \text{for all } \xi (\varLambda )\in R'\ \text{and } \lambda \in {{\mathscr {D}}}.$$
Remark
One can show using the argument of Dwork [6, Lemma 1.2] that for \(\xi (\varLambda )\in R'\)
$$|\xi (\varLambda )|= \sup _{\lambda \in {{\mathscr {D}}}} |\xi (\lambda )|.$$
Let \({{\mathscr {M}}}\subseteq {{\mathbb {Z}}}^{n+2}\) be the abelian group generated by A. We define an \({{\mathscr {M}}}\)-grading on R and \({R}'\). Every \(\xi (\varLambda )\in {R}\) can be written as a series \(\xi (\varLambda ) = \sum _{l\in {{\mathbb {Z}}}^N} c_l\varLambda ^l\). We say that \(\xi (\varLambda )\) has degree \(u\in {{\mathscr {M}}}\) if \(\sum _{k=1}^N l_k\mathbf{a}_k = u\) for all \(l\in {{\mathbb {Z}}}^N\) for which \(c_l\ne 0\). We denote by \({R}_u\) the \({{\mathbb {C}}}_p\)-subspace of R consisting of all elements of degree u. Each \({R}_u\) is complete in the norm and is a module over \({R}_\mathbf{0}\), which is a ring. Identical remarks apply to the induced grading on \({R}'\).
3 p-adic estimates
In this section we recall some estimates from [3, Sect. 3] to be applied later. Let \(\mathrm{AH}(t)= \exp (\sum _{i=0}^{\infty }t^{p^i}/p^i)\) be the Artin-Hasse series, a power series in t with p-integral coefficients, let \(\gamma _0\) be a zero of the series \(\sum _{i=0}^\infty t^{p^i}/p^i\) having \(\mathrm{ord}\;\gamma _0 = 1/(p-1)\), and set
$$\theta (t) = \mathrm{AH}(\gamma _0 t) = \sum _{i=0}^\infty \theta _i t^i.$$
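The p-integrality of the coefficients of \(\mathrm{AH}(t)\) can be verified numerically by truncating the defining exponential; a quick sanity check of our own in exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def artin_hasse_coeffs(p, deg):
    """Coefficients of exp(sum_i t^{p^i}/p^i) up to degree deg, as Fractions."""
    # g(t) = t + t^p/p + t^{p^2}/p^2 + ..., truncated at degree deg
    g = [Fraction(0)] * (deg + 1)
    pk, i = 1, 0
    while pk <= deg:
        g[pk] = Fraction(1, p ** i)
        pk *= p
        i += 1
    # exp(g) = sum_k g^k / k!, truncated at degree deg (g has no constant term)
    coeffs = [Fraction(0)] * (deg + 1)
    coeffs[0] = Fraction(1)
    power = [Fraction(1)] + [Fraction(0)] * deg  # running power g^k
    for k in range(1, deg + 1):
        nxt = [Fraction(0)] * (deg + 1)
        for a, ca in enumerate(power):
            if ca:
                for b, cb in enumerate(g):
                    if cb and a + b <= deg:
                        nxt[a + b] += ca * cb
        power = nxt
        for d2, c in enumerate(power):
            coeffs[d2] += c / factorial(k)
    return coeffs

p, deg = 3, 15
for c in artin_hasse_coeffs(p, deg):
    assert c.denominator % p != 0  # every coefficient is p-integral
```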
We then have
$$\mathrm{ord}\;\theta _i\ge \frac{i}{p-1}. \tag{3.1}$$
We define \({\hat{\theta }}(t) = \prod _{j=0}^\infty \theta (t^{p^j})\), which gives \(\theta (t) = {\hat{\theta }}(t)/{\hat{\theta }}(t^p)\). If we set
then
If we write \({\hat{\theta }}(t) = \sum _{i=0}^\infty {\hat{\theta }}_i(\gamma _0 t)^i/i!\), then by [3, Equation (3.8)] we have
We shall also need the series
Note that \({\hat{\theta }}(t) = \exp (\gamma _0t){\hat{\theta }}_1(t)\). By [3, Equation (3.10)]
Define the series \({\hat{\theta }}_1(\varLambda ,x)\) by the formula
Put \({{\mathbb {N}}}A = \{\sum _{j=1}^N l_j\mathbf{a}_j\mid l_j\in {{\mathbb {N}}}\}\). Expanding the product (3.7) according to powers of x we get
where
We have similar results for the reciprocal power series
If we write
then the coefficients satisfy
We also have
which we again expand in powers of x as
with
We also define
$$\theta (\varLambda ,x) = \prod _{k=1}^N \theta (\varLambda _k x^{\mathbf{a}_k}). \tag{3.16}$$
Expanding the right-hand side in powers of x, we have
$$\theta (\varLambda ,x) = \sum _{u\in {{\mathbb {N}}}A} \theta _u(\varLambda )x^u,$$
where
$$\theta _u(\varLambda ) = \sum _{\substack{\nu \in {{\mathbb {N}}}^N\\ \sum _{k=1}^N \nu _k\mathbf{a}_k = u}} \theta _{\nu _1}\cdots \theta _{\nu _N}\varLambda _1^{\nu _1}\cdots \varLambda _N^{\nu _N} \tag{3.17}$$
and
$$\sum _{k=1}^N \nu _k = u_{n+1}\quad \text{for each } \nu \text{ appearing in this sum}, \tag{3.18}$$
so \(\theta _u(\varLambda )\) is homogeneous of degree u. The equation \(\sum _{k=1}^N \nu _k\mathbf{a}_k = u\) has only finitely many solutions \(\nu \in {{\mathbb {N}}}^N\), so \(\theta _u(\varLambda )\) is a polynomial in the \(\varLambda _k\). Equations (3.1) and (3.18) show that
$$\mathrm{ord}\;c\ge \frac{u_{n+1}}{p-1}\quad \text{for every coefficient } c \text{ of } \theta _u(\varLambda ). \tag{3.19}$$
4 The Dwork-Frobenius operator
We define the spaces S and \(S'\) and Dwork’s Frobenius operator on those spaces.
Note that the abelian group \({{\mathscr {M}}}\) generated by A lies in the hyperplane \(\sum _{i=0}^n u_i = du_{n+1}\) in \({{\mathbb {R}}}^{n+2}\). Set \({{\mathscr {M}}}_- = {{\mathscr {M}}}\cap ({{\mathbb {Z}}}_{<0})^{n+2}\). We denote by \(\delta _-\) the truncation operator on formal Laurent series in variables \(x_0,\dots ,x_{n+1}\) that preserves only those terms having all exponents negative:
We use the same notation for formal Laurent series in a single variable t:
It is straightforward to check that if \(\xi _1\) and \(\xi _2\) are two series for which the product \(\xi _1\xi _2\) is defined and if no monomial in \(\xi _2\) has a negative exponent, then
$$\delta _-(\xi _1\xi _2) = \delta _-\big (\delta _-(\xi _1)\,\xi _2\big ).$$
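The operator \(\delta _-\) and the compatibility \(\delta _-(\xi _1\xi _2)=\delta _-\big (\delta _-(\xi _1)\xi _2\big )\) (for \(\xi _2\) with no negative exponents) are easy to test on Laurent polynomials; a brief randomized check of our own:

```python
from random import randint, seed

def mul(f, g):
    """Multiply Laurent polynomials given as dicts exponent-tuple -> coeff."""
    h = {}
    for ea, ca in f.items():
        for eb, cb in g.items():
            key = tuple(a + b for a, b in zip(ea, eb))
            h[key] = h.get(key, 0) + ca * cb
    return {e: c for e, c in h.items() if c}

def delta_minus(f):
    """Keep only terms in which every exponent is negative."""
    return {e: c for e, c in f.items() if all(ei < 0 for ei in e)}

seed(0)
nvars = 3
for _ in range(100):
    xi1 = {tuple(randint(-3, 3) for _ in range(nvars)): randint(-5, 5) for _ in range(6)}
    xi2 = {tuple(randint(0, 3) for _ in range(nvars)): randint(-5, 5) for _ in range(6)}
    assert delta_minus(mul(xi1, xi2)) == delta_minus(mul(delta_minus(xi1), xi2))
```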
Define S to be the \({{\mathbb {C}}}_p\)-vector space of formal series
Let \(S'\) be defined analogously with the condition “\(\xi _{u}(\varLambda )\in {R}_{u}\)” being replaced by “\(\xi _{u}(\varLambda )\in {R}_{u}'\)”. Define a norm on S by setting
Both S and \(S'\) are complete under this norm.
Let
We show that the product \(\theta (\varLambda ,x) \xi (\varLambda ^p,x^p)\) is well-defined as a formal series in x. We have formally
$$\theta (\varLambda ,x)\xi (\varLambda ^p,x^p) = \sum _{\rho } \zeta _\rho (\varLambda )x^\rho , \tag{4.1}$$
where
$$\zeta _\rho (\varLambda ) = \sum _{\substack{u\in {{\mathbb {N}}}A,\ \nu \in {{\mathscr {M}}}_-\\ u+p\nu =\rho }} \gamma _0^{\nu _{n+1}}\theta _u(\varLambda )\xi _\nu (\varLambda ^p). \tag{4.2}$$
Since \(\theta _u(\varLambda )\) is a polynomial, the product \(\theta _u(\varLambda )\xi _\nu (\varLambda ^p)\) is a well-defined element of \({R}_{\rho }\). It follows from (3.17), (3.19), and the equality \(u+p\nu =\rho\) that the coefficients of \(\gamma _0^{\nu _{n+1}}\theta _u(\varLambda )\) all have p-ordinal at least \(\big (\rho _{n+1}/(p-1)\big ) - \nu _{n+1}\). Since \(|\xi _\nu (\varLambda )|\) is bounded independently of \(\nu\) and there are only finitely many terms on the right-hand side of (4.2) with a given value of \(\nu _{n+1}\), the series (4.2) converges to an element of \({R}_\rho\) (because \(-\nu _{n+1}\rightarrow \infty\) as \(\nu \rightarrow \infty\)). This estimate also shows that if \(\xi (\varLambda ,x)\in S'\), then \(\zeta _\rho (\varLambda )\in {R}'_\rho\).
Define for \(\xi (\varLambda ,x)\in S\)
$$\alpha ^*\big (\xi (\varLambda ,x)\big ) = \delta _-\big (\theta (\varLambda ,x)\xi (\varLambda ^p,x^p)\big ). \tag{4.3}$$
For \(\rho \in {{\mathscr {M}}}_-\), put \(\eta _{\rho }(\varLambda ) = \gamma _0^{-\rho _{n+1}}\zeta _{\rho }(\varLambda )\), so that
with (by (4.2))
$$\eta _\rho (\varLambda ) = \sum _{\substack{u\in {{\mathbb {N}}}A,\ \nu \in {{\mathscr {M}}}_-\\ u+p\nu =\rho }} \gamma _0^{-\rho _{n+1}+\nu _{n+1}}\theta _u(\varLambda )\xi _\nu (\varLambda ^p). \tag{4.4}$$
Proposition 4.1
The map \(\alpha ^*\) is an endomorphism of S and of \(S'\), and for \(\xi (\varLambda ,x)\in S\) we have
$$\big |\alpha ^*(\xi (\varLambda ,x))\big |\le \big |p\,\xi (\varLambda ,x)\big |. \tag{4.5}$$
Proof
By (4.3), the proposition will follow from the estimate
$$|\eta _\rho (\varLambda )|\le \big |p\,\xi (\varLambda ,x)\big |\quad \text{for all } \rho \in {{\mathscr {M}}}_-.$$
Using (4.4), we see that this estimate will follow in turn from the estimate
$$\big |\gamma _0^{-\rho _{n+1}+\nu _{n+1}}\theta _u(\varLambda )\big |\le p^{-1}$$
for all \(u\in {{\mathbb {N}}}A\), \(\nu \in {{\mathscr {M}}}_-\), with \(u+p\nu = \rho\). From (3.17) and (3.19) we see that all coefficients of \(\gamma _0^{-\rho _{n+1}+\nu _{n+1}}\theta _u(\varLambda )\) have p-ordinal greater than or equal to
$$\frac{-\rho _{n+1}+\nu _{n+1}+u_{n+1}}{p-1}.$$
Since \(u+p\nu =\rho\), this expression simplifies to \(-\nu _{n+1}\), so
$$\big |\gamma _0^{-\rho _{n+1}+\nu _{n+1}}\theta _u(\varLambda )\big |\le p^{\nu _{n+1}},$$
and \(-\nu _{n+1}\ge 1\) since \(\nu \in {{\mathscr {M}}}_-\). \(\square\)
Note that the equality \(-\nu _{n+1} = 1\) occurs for only M points \(\nu \in {{\mathscr {M}}}_-\), namely, \(\nu = -\mathbf{a}_1,\dots ,-\mathbf{a}_M\). The following corollary is then an immediate consequence of the proof of Proposition 4.1.
Corollary 4.1
If \(\xi _{\nu }(\varLambda ) = 0\) for \(\nu = -\mathbf{a}_1,\dots ,-\mathbf{a}_M\), then \(|\alpha ^*(\xi (\varLambda ,x))|\le |p^{2}\xi (\varLambda ,x)|\).
5 A technical lemma
The action of \(\alpha ^*\) is extended to \(S^M\) componentwise: if
then
Let \(\xi (\varLambda ,x)\) be as in (5.1) with
Since \(\xi ^{(i)}_{-\mathbf{a}_j}(\varLambda )\in R_{-\mathbf{a}_j}\), the matrix
has entries in \(R_\mathbf{0}\). If \(\xi (\varLambda ,x)\in (S')^M\), then \({{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )\) has entries in \(R'_\mathbf{0}\). The map \(\xi (\varLambda ,x)\mapsto {{\mathbb {M}}}(\xi (\varLambda ,x))\) is a \({{\mathbb {C}}}_p\)-linear map from \(S^M\) to the \({{\mathbb {C}}}_p\)-vector space of \((M\times M)\)-matrices with entries in \(R_\mathbf{0}\). Note that if \(Y(\varLambda )\) is an \((M\times M)\)-matrix with entries in \(R_\mathbf{0}\), then
where we regard \(\xi (\varLambda ,x)\) as a column vector for the purpose of matrix multiplication. In particular, if \({{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )\) is an invertible matrix, then
We extend the norm on S to \(S^M\) and the norm on R to \(\mathrm{Mat}_M(R)\), the \((M\times M)\)-matrices with entries in R. For \(\xi (\varLambda ,x)\) as in (5.1), we define
$$|\xi (\varLambda ,x)|= \max _{1\le i\le M} \big |\xi ^{(i)}(\varLambda ,x)\big |,$$
and for \(Y(\varLambda ) = \big (Y_{ij}(\varLambda )\big )_{i,j=1}^M\) with \(Y_{ij}(\varLambda )\in R\) we define
$$|Y(\varLambda )|= \max _{1\le i,j\le M} |Y_{ij}(\varLambda )|.$$
Note that the matrix \({{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )\) has an inverse with entries in \(R_\mathbf{0}\) if and only if \(\det {{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )\) is an invertible element of \(R_\mathbf{0}\). Likewise, if \(\xi (\varLambda ,x)\in (S')^M\), this matrix has an inverse with entries in \(R'_\mathbf{0}\) if and only if its determinant is an invertible element of \(R'_\mathbf{0}\). The main result of this section is the following assertion.
Lemma 5.1
Let \(\xi (\varLambda ,x)\in S^M\) with \({{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )\) invertible, \(\big |{{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )\big |= |\xi (\varLambda ,x)|\), and \(\big |\det {{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )\big |= |\xi (\varLambda ,x)|^M\). Then \({{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )\) is invertible,
and
Remark
To prove Lemma 5.1, it suffices to show that \({{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )\) is invertible and that (5.7) holds: from Proposition 4.1 we get
If any of these inequalities were strict, the usual formula for the determinant of a matrix in terms of its entries would imply that
Similar reasoning applies to the inverse of \({{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )\). By (5.7) we have
The usual formula for the inverse of a matrix in terms of its cofactors implies that
But if this inequality were strict it would lead to a contradiction of (5.8), so we have the following corollary.
Corollary 5.1
Under the hypotheses of Lemma 5.1,
The proof of Lemma 5.1 will require several steps. We first consider a matrix constructed from \(\theta (\varLambda ,x)\) (Equation (3.16)):
$${{\mathbb {M}}}_\theta (\varLambda ) = \big [\theta _{p\mathbf{a}_i-\mathbf{a}_j}(\varLambda )\big ]_{i,j=1}^M.$$
From (3.17) and (3.18) we have explicit formulas for these matrix entries:
$$\theta _{p\mathbf{a}_i-\mathbf{a}_j}(\varLambda ) = \sum _{\substack{\nu \in {{\mathbb {N}}}^N\\ \sum _{k=1}^N \nu _k\mathbf{a}_k = p\mathbf{a}_i-\mathbf{a}_j}} \theta _{\nu _1}\cdots \theta _{\nu _N}\varLambda _1^{\nu _1}\cdots \varLambda _N^{\nu _N}.$$
Furthermore, since the last coordinate of \(p\mathbf{a}_i-\mathbf{a}_j\) equals \(p-1\), each \(\nu _k\) in this summation is \(\le p-1\), so from their definition \(\theta _{\nu _k} = \gamma _0^{\nu _k}/\nu _k!\). Since \(\sum _{k=1}^N \nu _k = p-1\), we have
$$\prod _{k=1}^N \theta _{\nu _k} = \frac{\gamma _0^{p-1}}{\nu _1!\cdots \nu _N!}.$$
Thus
$${{\mathbb {M}}}_\theta (\varLambda ) = \frac{\gamma _0^{p-1}}{(p-1)!}\,H(\varLambda ), \tag{5.10}$$
where \(H(\varLambda )\) is given in Equation (1.3), and
$$C(\varLambda )^{-p}{{\mathbb {M}}}_\theta (\varLambda )C(\varLambda ) = \frac{\gamma _0^{p-1}}{(p-1)!}\,B(\varLambda ), \tag{5.11}$$
where \(B(\varLambda )\) is given in Equation (1.4). It now follows from the discussion in Sect. 1 that \(C(\varLambda )^{-p}{{\mathbb {M}}}_\theta (\varLambda ) C(\varLambda )\) is an invertible element of \(R_\mathbf{0}'\). Furthermore, we have
And since \(|D(\varLambda )|= 1\) (since \(D(\varLambda )\) has constant term 1 and integral coefficients) we get
With \(\xi (\varLambda ,x)\) as in (5.1), we rewrite (5.2) as
with
We then have
If \(\nu \ne -\mathbf{a}_1,\dots ,-\mathbf{a}_M\), then \(\nu _{n+1}\le -2\), so by (5.14) we may write
We write \({{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big ) = {{\mathbb {M}}}^{(1)}(\varLambda ) + {{\mathbb {M}}}^{(2)}(\varLambda )\), where
and
It follows from (5.17), (5.18), and a short calculation that
Proof of Lemma 5.1
For notational convenience set
so that (5.19) can be rewritten as
From (5.10) we get
and from (5.11)
Applying our hypotheses gives
and
And since the equality in (5.22) would fail if the inequality in (5.21) were strict, we must have
The matrix \(\widetilde{{\mathbb {M}}}(\varLambda )\) is invertible by (5.10) and our hypotheses, and (5.22) implies
Arguing as in the derivation of Corollary 5.1 from (5.8) then gives
Rewrite (5.20) as
Estimate (4.6) implies that
so by (5.25)
It follows from (5.28) that \(I + \widetilde{{\mathbb {M}}}(\varLambda )^{-1}{{\mathbb {M}}}^{(2)}(\varLambda )\) is invertible, so by (5.26) the invertibility of \(\widetilde{{\mathbb {M}}}(\varLambda )\) implies the invertibility of \({{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )\). Estimate (5.28) implies that
6 Contraction mapping
We use the Dwork-Frobenius operator to construct a contraction mapping on a subset of \(S^M\). Finding the fixed point of this contraction mapping will be the crucial step in proving the first assertion of Theorem 2.1.
Put
and put \(T' = T\cap (S')^M\). Note that T and \(T'\) are closed in the topology on \(S^M\). Elements of T satisfy the hypotheses of Lemma 5.1. By that result, if \(\xi (\varLambda ,x)\in T\), then \({{\mathbb {M}}}(\alpha ^*(\xi (\varLambda ,x)))\) is invertible, so we may define
where we regard \(\alpha ^*(\xi (\varLambda ,x))\) on the right-hand side as a column vector for the purpose of matrix multiplication. By Equation (5.5) we have
Equation (4.5) and Corollary 5.1 imply that \(|\phi (\xi (\varLambda ,x))|\le 1\), so in fact
by (6.1). Equations (6.1) and (6.2) show that \(\phi (T)\subseteq T\) and \(\phi (T')\subseteq T'\).
Proposition 6.1
The operator \(\phi\) is a contraction mapping on T. More precisely, if \(\xi ^{(1)}(\varLambda ,x),\xi ^{(2)}(\varLambda ,x)\in T\), then
Proof
For notational convenience we sometimes write \(\eta ^{(i)}(\varLambda ,x) = \alpha ^*(\xi ^{(i)}(\varLambda ,x))\) for \(i=1,2\). Then
The difference \(\xi ^{(1)}(\varLambda ,x)-\xi ^{(2)}(\varLambda ,x)\) satisfies the hypothesis of Corollary 4.1, so
By Corollary 5.1 we have
so
By Proposition 4.1 we have
and by (6.4) we have
so
The assertion of the proposition now follows from (6.3), (6.6), and (6.7). \(\square\)
By the Banach fixed point theorem, the contraction mapping \(\phi\) has a unique fixed point, and since \(\phi\) is stable on \(T'\) that fixed point lies in \(T'\). In the next section we discuss certain hypergeometric series and their p-adic relatives; these p-adic relatives will be used to describe this fixed point explicitly.
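The contraction principle at work here is the same one that, in miniature, extracts the p-adic unit root of a quadratic \(x^2-ax+p\) with \(p\not \mid a\) (the shape of the zeta numerator of an ordinary elliptic curve, with a hypothetical trace \(a\) of our choosing): the map \(x\mapsto a-p/x\) contracts distances on units of \({{\mathbb {Z}}}_p\) by the factor \(|p|\), and iteration converges to the unit root. A sketch:

```python
def unit_root(a, p, k):
    """Unit root of x^2 - a*x + p in Z_p, computed mod p^k by iterating the
    contraction x -> a - p/x (requires p not dividing a)."""
    mod = p ** k
    x = a % mod
    for _ in range(k + 1):  # each iteration gains at least one power of p
        x = (a - p * pow(x, -1, mod)) % mod
    return x

p, k, a = 7, 5, 3  # illustrative prime, precision, and trace
pi0 = unit_root(a, p, k)
assert (pi0 * pi0 - a * pi0 + p) % p ** k == 0  # root of x^2 - 3x + 7 mod 7^5
assert pi0 % p == a % p                         # it is the p-adic unit root
```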
7 A-hypergeometric series
The entries of the matrix \(F(\varLambda )\) are A-hypergeometric in nature. The purpose of this section is to recall their construction and to introduce some related series that satisfy better p-adic estimates.
Let \(L\subseteq {{\mathbb {Z}}}^{N}\) be the lattice of relations on the set A:
$$L = \Big \{ (l_1,\dots ,l_N)\in {{\mathbb {Z}}}^{N}\ \Big |\ \sum _{k=1}^N l_k\mathbf{a}_k = \mathbf{0}\Big \}.$$
For each \(l=(l_1,\dots ,l_N)\in L\), we define a partial differential operator \(\Box _l\) in variables \(\{\varLambda _j\}_{j=1}^N\) by
$$\Box _l = \prod _{l_k>0} \left( \frac{\partial }{\partial \varLambda _k}\right) ^{l_k} - \prod _{l_k<0} \left( \frac{\partial }{\partial \varLambda _k}\right) ^{-l_k}. \tag{7.1}$$
For \(\beta = (\beta _0,\beta _1,\dots ,\beta _{n+1})\in {{\mathbb {C}}}^{n+2}\), the corresponding Euler (or homogeneity) operators are defined by
$$Z_i = \sum _{k=1}^N a_{ik}\varLambda _k\frac{\partial }{\partial \varLambda _k} - \beta _i \tag{7.2}$$
for \(i=0,\dots ,n+1\). The A-hypergeometric system with parameter \(\beta\) consists of Equations (7.1) for \(l\in L\) and (7.2) for \(i=0,1,\dots ,n+1\).
Consider the formal series
Let \(i\in \{1,\dots ,M\}\). The A-hypergeometric series of interest arise as the coefficients of \(\gamma _0^{u_{n+1}}x^u\) in the expression
For \(u\in {{\mathscr {M}}}_-\), let \(F_u^{(i)}(\varLambda )\) be the coefficient of \(\gamma _0^{u_{n+1}}x^u\) in (7.3) and write
If we set
then from (7.3)
It follows from the definition of q(t) that for \(i=1,\dots ,M\)
A straightforward calculation from (7.3) gives
for \(k=1,\dots ,N\). Equivalently, for \(u\in {{\mathscr {M}}}_-\), we have by (7.6)
More generally, if \(l_1,\dots ,l_N\) are nonnegative integers, it follows that
In particular, it follows from the definition of the box operators (7.1) that
The condition on the summation in (7.5) implies that \(F^{(i)}_u(\varLambda )\) satisfies the Euler operators (7.2) with parameter \(\beta = u\). In summary:
Lemma 7.1
For \(i=1,\dots ,M\) and \(u\in {{\mathscr {M}}}_-\), the series \(F^{(i)}_u(\varLambda )\) is a solution of the A-hypergeometric system with parameter \(\beta = u\).
Lemma 7.2
For \(i=1,\dots ,M\) and \(u\in {{\mathscr {M}}}_-\), the series \(F^{(i)}_u(\varLambda )\) lies in \(R_u\).
Proof
The homogeneity condition is clear from (7.5). We prove that \(F^{(i)}_u(\varLambda )\) lies in \({\tilde{R}}\).
Let \(u\in {{\mathscr {M}}}_-\). By the definition of the set A, we can write \(-u=\sum _{k=1}^N l_k\mathbf{a}_k\) with the \(l_k\) in \({{\mathbb {N}}}\). Suppose that \((l'_1,\dots ,l'_N)\in L_{i,u}\). Then
Furthermore, we have \(l_k+l'_k\ge 0\) for \(k\ne i\) and \(l_i+l'_i\le 0\), so
It follows that \(L_{i,u}\subseteq -(l_1,\dots ,l_N)+L_i\), hence \(F_u^{(i)}(\varLambda )\in \varLambda _1^{-l_1}\cdots \varLambda _N^{-l_N} R_E\). \(\square\)
Remark
We note the relation between the \(F^{(i)}_u(\varLambda )\) and the \(F_{ij}(\varLambda )\) defined in the Introduction: for \(i,j=1,\dots ,M\),
Lemma 7.3
For \(i=1,\dots ,M\) and \(u\in {{\mathscr {M}}}_-\), the coefficients of the series \(F^{(i)}_u(\varLambda )\) are integers divisible by \((-u_{n+1}-1)!\).
Proof
Since the last coordinate of each \(\mathbf{a}_k\) equals 1, the condition on the summation in (7.5) implies that
i.e.,
Applying this formula to the coefficients of the series in (7.5) implies the lemma. \(\square\)
Although the series \(F^{(i)}_u(\varLambda )\) are more natural to consider, we replace them in what follows by some related series \(G^{(i)}_u(\varLambda )\) that are more closely tied to the Dwork-Frobenius operator \(\alpha ^*\).
For \(i=1,\dots ,M\), define
where the second equality follows from (4.1). If we write
then by (3.8) and (7.4) one gets from the first equality of (7.11)
Note that as a series in \(\varLambda\), \(G^{(i)}_u(\varLambda )\) has exponents in \(L_{i,u}\).
We have an analogue of Lemma 7.3 for these series.
Lemma 7.4
For \(i=1,\dots ,M\) and \(u\in {{\mathscr {M}}}_-\), \(G^{(i)}_u(\varLambda )/(-u_{n+1}-1)!\) has p-integral coefficients.
Proof
Using (3.9) we can write (7.13) in the form
The series \(F_{u^{(1)}}^{(i)}(\varLambda )/(-u^{(1)}_{n+1}-1)!\) has integral coefficients by Lemma 7.3. The condition on the first summation on the right-hand side of (7.14) implies that
where \(-u_{n+1},-u^{(1)}_{n+1}\ge 1\) since \(u,u^{(1)}\in {{\mathscr {M}}}_-\). Since the last coordinate of each \(\mathbf{a}_k\) equals 1, the condition on the second summation on the right-hand side of (7.14) implies that
so by (7.15)
It follows that the ratio \((-u^{(1)}_{n+1}-1)!/\prod _{k=1}^N l_k!\) appearing in the second summation on the right-hand side of (7.14) is an integer divisible by \((-u_{n+1}-1)!\). For each N-tuple \((l_1,\dots ,l_N)\) appearing in the second summation on the right-hand side of (7.14) we have
by (3.6). This implies that the series on the right-hand side of (7.14) converges to a series in the \(\varLambda _k\) with p-integral coefficients that remain p-integral when divided by \((-u_{n+1}-1)!\). \(\square\)
We also have an analogue of Lemma 7.2.
Lemma 7.5
For \(i=1,\dots ,M\) and \(u\in {{\mathscr {M}}}_-\) the series \(G^{(i)}_u(\varLambda )\) lies in \({R}_u\).
Proof
The homogeneity condition is clear. Since the second summation on the right-hand side of (7.14) is finite, the sum
lies in \({\tilde{R}}\) by the proof of Lemma 7.2. Furthermore, it follows from (7.16) that the expression (7.17) has norm bounded by \(p^{-u^{(2)}_{n+1}(p-1)/p}\). In the first summation on the right-hand side of (7.14), a given \(u^{(2)}\in {{\mathbb {N}}}A\) can appear at most once, i.e., \(u^{(2)}\rightarrow \infty\) in this summation. This implies that the first summation on the right-hand side of (7.14) converges in the norm on \({{\mathbb {B}}}\). \(\square\)
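The convergence criterion invoked at the end of this proof is the standard ultrametric one; as a reminder (a sketch in our notation, writing \(c_{u^{(2)}}\) for the terms of the first summation):

```latex
% Ultrametric inequality for the norm on the p-adic Banach space B:
%   |x + y| \le \max(|x|, |y|).
% Consequently a series converges iff its terms tend to 0:
\sum_{u^{(2)}\in {\mathbb N}A} c_{u^{(2)}}
  \ \text{converges in}\ {\mathbb B}
\iff
|c_{u^{(2)}}| \longrightarrow 0
  \ \text{as}\ u^{(2)} \to \infty ,
% and here |c_{u^{(2)}}| \le p^{-u^{(2)}_{n+1}(p-1)/p} \to 0.
```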
We define a matrix \(G(\varLambda ) = \big [ G_{ij}(\varLambda ) \big ]_{i,j=1}^M\) corresponding to \(F(\varLambda )\) by the analogue of (7.10):
Like the \(F_{ij}(\varLambda )\), the monomials in the \(G_{ij}(\varLambda )\) have exponents in \(L_i\). Furthermore, the \(G_{ii}(\varLambda )\) have constant term 1, while the \(G_{ij}(\varLambda )\) for \(i\ne j\) have no constant term. This implies that \(\det G(\varLambda )\) is an invertible element of R, hence \(G(\varLambda )^{-1}\) is a matrix with entries in R. Since the series \(G_{ij}(\varLambda )\) have p-integral coefficients (Lemma 7.4) and the \(G_{ii}(\varLambda )\) have constant term 1, it follows that
This implies in particular that
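The invertibility of \(\det G(\varLambda )\) asserted above comes down to a constant-term computation; a sketch (writing \({\mathfrak m}\subset R\) for the ideal of series without constant term, our notation):

```latex
G(\varLambda) = I + N(\varLambda),
  \qquad N(\varLambda) \in M_{M}({\mathfrak m}),
\\
\det G(\varLambda) = 1 + h(\varLambda),
  \qquad h(\varLambda) \in {\mathfrak m},
\\
\det G(\varLambda)^{-1}
  = \sum_{m=0}^{\infty} \bigl(-h(\varLambda)\bigr)^{m} \in R .
```

The geometric series makes sense because every monomial of \(h(\varLambda )^m\) has total degree at least m, so the sum converges coefficientwise.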
We simplify our notation: for \(u,u^{(1)}\in {{\mathscr {M}}}_-\), \(u\ne u^{(1)}\), set
Note that this is a finite sum, \(C_{u,u^{(1)}}\) is p-integral, and \(\mathrm{ord}\,C_{u,u^{(1)}}>0\) by (7.16). Equation (7.14) then simplifies to
Furthermore, the estimate (7.16) implies that \(C_{u,u^{(1)}}\rightarrow 0\) as \(u^{(1)}\rightarrow \infty\) in the sense that for any \(\kappa >0\), the estimate \(\mathrm{ord}\, C_{u,u^{(1)}}>\kappa\) holds for all but finitely many \(u^{(1)}\).
We need the analogue of (7.21) with the roles of F and G reversed. It follows from (7.11) and (4.1) that for \(i=1,\dots ,M\)
This leads to the analogue of (7.14):
where the \({\hat{\theta }}'_{1,l_k}\) are defined by (3.10) and Lemma 7.4 tells us that the
have p-integral coefficients. We define for \(u,u^{(1)}\in {{\mathscr {M}}}_-\), \(u\ne u^{(1)}\),
Since the \({\hat{\theta }}'_{1,l_k}\) also satisfy the estimate (7.16) (see (3.11)), we get that \(C'_{u,u^{(1)}}\) is p-integral and \(\mathrm{ord}\,C'_{u,u^{(1)}}>0\). In addition, \(C'_{u,u^{(1)}}\rightarrow 0\) as \(u^{(1)}\rightarrow \infty\). Substitution into (7.23) now gives the desired formula:
For future reference, we record estimates that were used in the proof of (7.21) and (7.24).
Proposition 7.1
For \(u,u^{(1)}\in {{\mathscr {M}}}_-\), \(u\ne u^{(1)}\), we have \(\mathrm{ord}\,C_{u,u^{(1)}}>0\) (resp. \(\mathrm{ord}\,C'_{u,u^{(1)}}>0\)) and \(C_{u,u^{(1)}}\rightarrow 0\) (resp. \(C'_{u,u^{(1)}}\rightarrow 0\)) as \(u^{(1)}\rightarrow \infty\).
8 Eigenvectors of \(\alpha ^*\)
We use the series of Sect. 7 to construct eigenvectors of \(\alpha ^*\). These eigenvectors will lead to the fixed point of the contraction mapping of Sect. 6. We begin by recalling some results from [3].
By [3, Lemma 6.1] the product \({\hat{\theta }}_1(t)q(\gamma _0t)\) is well-defined so we may set
where the \(Q_i\) are defined by the second equality. The proof of [3, Lemma 6.1] shows that the \(Q_i\) are p-integral. By [3, Proposition 6.10] we have
It follows from (8.2) that
for some series A(t) in nonnegative powers of t. Fix i, \(1\le i\le M\), and replace t in this equation by \(\varLambda _ix^{\mathbf{a}_i}\):
Since \(\delta _-(Q(\varLambda _i x^{\mathbf{a}_i})) = Q(\varLambda _i x^{\mathbf{a}_i})\) and \(\delta _-(A(\varLambda _ix^{\mathbf{a}_i})) = 0\), we get
These series are related to the \(G^{(i)}(\varLambda ,x)\) defined in Sect. 7.
Lemma 8.1
We have
Proof
Combining (7.11), (7.3), and (3.7) and using (4.1) we get
Using (3.3) and (3.5), this may be rewritten as
The assertion of the lemma now follows from (8.1) and (4.1). \(\square\)
The following result is a key step in the proof of Theorem 2.1.
Theorem 8.1
For \(i=1,\dots ,M\) we have
Proof
Since \(\theta (t) = {\hat{\theta }}(t)/{\hat{\theta }}(t^p)\) we have
We now compute
where the first equality follows from Lemma 8.1, the second follows from (4.1) and rearranging the product, the third follows from (8.3) and (8.4), and the fourth follows from Lemma 8.1. Note that the rearrangement of the product occurring in the second equality is not completely trivial. One of the series contains negative powers of the \(x_i\), the rest contain positive powers of the \(x_i\), so they do not all lie in a common commutative ring. One has to write out both sides to verify that they are equal. (See Sect. 5 of Adolphson, A., Sperber, S.: On the integrality of hypergeometric series whose coefficients are factorial ratios, arXiv:2001.03283, for more details on this calculation.) \(\square\)
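The first step of the proof rests on the telescoping identity obtained by iterating \(\theta (t) = {\hat{\theta }}(t)/{\hat{\theta }}(t^p)\) over the a Frobenius twists (a sketch):

```latex
\prod_{i=0}^{a-1} \theta\bigl(t^{p^{i}}\bigr)
= \prod_{i=0}^{a-1}
    \frac{{\hat{\theta}}\bigl(t^{p^{i}}\bigr)}
         {{\hat{\theta}}\bigl(t^{p^{i+1}}\bigr)}
= \frac{{\hat{\theta}}(t)}{{\hat{\theta}}(t^{q})},
\qquad q = p^{a}.
```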
The action of \(\alpha ^*\) was extended to \(S^M\) coordinatewise, so if we set
then we get the following result.
Corollary 8.1
We have \(\alpha ^*\big (G(\varLambda ,x)\big ) = pG(\varLambda ,x)\).
We now verify that \(G(\varLambda )^{-1}G(\varLambda ,x)\in T\). First of all,
by (7.18) and (5.5). This implies in particular that
But \(|G(\varLambda ,x)|\le 1\) by Lemma 7.4 and \(|G(\varLambda )^{-1}|\le 1\) by (7.20), so
also. We thus conclude that
Corollary 8.2
The product \(G(\varLambda )^{-1}G(\varLambda ,x)\) is the unique fixed point of \(\phi\), hence lies in \(T'\).
Proof
From the definition of \(\alpha ^*\) and Corollary 8.1 we have
This implies by (7.18) and (5.4) that
hence
\(\square\)
Corollary 8.3
The entries of \(G(\varLambda ^p)^{-1}G(\varLambda )\) lie in \(R'_\mathbf{0}\).
Proof
Corollary 8.2 implies that \(G(\varLambda )^{-1}G(\varLambda ,x)\) lies in \((S')^M\), and since \(\alpha ^*\) is stable on \((S')^M\) we also have \(\alpha ^*\big (G(\varLambda )^{-1}G(\varLambda ,x)\big )\in (S')^M\). The assertion of the corollary now follows from (8.7). \(\square\)
Put \({{\mathscr {G}}}(\varLambda ) = G(\varLambda ^p)^{-1}G(\varLambda )\). We note one more consequence of Theorem 8.1.
Proposition 8.1
The reciprocal eigenvalues of \({{\mathscr {G}}}(\lambda )\) are p-adic units for all \(\lambda\) in \({{\mathscr {D}}}\).
Proof
We showed earlier that \(G(\varLambda )^{-1}G(\varLambda ,x)\in T\), so \(G(\varLambda )^{-1}G(\varLambda ,x)\) satisfies the hypotheses of Lemma 5.1. By (8.6) and (5.7) we have
and by (8.6) and Corollary 5.1 we have
Combining these equalities with (8.8) gives
By (2.8), we then have
which implies that \(\det {{\mathscr {G}}}(\lambda )\) assumes unit values on \({{\mathscr {D}}}\). Since \({{\mathscr {G}}}(\lambda )\) assumes p-integral values on \({{\mathscr {D}}}\), it follows that the roots of \(\det (I-t{{\mathscr {G}}}(\lambda ))\) are p-adic units. \(\square\)
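The final implication in this proof is the standard valuation argument: write \(\gamma _1,\dots ,\gamma _M\) (our notation) for the reciprocal roots of \(\det (I-t{{\mathscr {G}}}(\lambda ))\), i.e., the eigenvalues of \({{\mathscr {G}}}(\lambda )\). Since \({{\mathscr {G}}}(\lambda )\) is p-integral, its characteristic polynomial is monic with p-integral coefficients, so \(\mathrm{ord}\,\gamma _i\ge 0\) for all i, and

```latex
\sum_{i=1}^{M} \mathrm{ord}\,\gamma_{i}
= \mathrm{ord}\,\det {\mathscr G}(\lambda)
= 0
\quad\Longrightarrow\quad
\mathrm{ord}\,\gamma_{i} = 0
\ \ \text{for}\ i = 1,\dots,M,
```

so each \(\gamma _i\), and hence each root \(\gamma _i^{-1}\) of \(\det (I-t{{\mathscr {G}}}(\lambda ))\), is a p-adic unit.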
We introduce some additional notation that will be useful later. Put
This is an element of \(T'\) by Corollary 8.2, so each \({{\mathscr {G}}}^{(i)}(\varLambda ,x)\) lies in \(S'\) and we may write
with \({{\mathscr {G}}}_u^{(i)}(\varLambda )\) in \(R_u'\) for all i, u.
9 Relation between \({{\mathscr {F}}}(\varLambda )\) and \({{\mathscr {G}}}(\varLambda )\)
In this section we prove the first assertion of Theorem 2.1 and show that the second assertion of Theorem 2.1 is equivalent to the same statement with \({{\mathscr {F}}}(\varLambda )\) replaced by \({{\mathscr {G}}}(\varLambda )\) (see Proposition 9.2).
Put \(F(\varLambda ,x) = (F^{(1)}(\varLambda ,x),\dots ,F^{(M)}(\varLambda ,x))\), where \(F^{(i)}(\varLambda ,x)\) is given by (7.3). We regard \(F(\varLambda ,x)\) and \(G(\varLambda ,x)\) (defined in (8.5)) as column vectors so they can be multiplied on the left by \((M\times M)\)-matrices. The following result is a consequence of the fact that \(G(\varLambda )^{-1}G(\varLambda ,x)\) lies in \((S')^M\) (Corollary 8.2).
Proposition 9.1
The vectors \(F(\varLambda )^{-1}F(\varLambda ,x)\), \(G(\varLambda )^{-1}F(\varLambda ,x)\), and \(F(\varLambda )^{-1}G(\varLambda ,x)\) lie in \((S')^M\).
Proof
We begin by showing that \(G(\varLambda )^{-1}F(\varLambda ,x)\in (S')^M\). Writing Equation (7.24) for \(i=1,\dots ,M\) and multiplying by \(G(\varLambda )^{-1}\) gives the vector equation
We have \(|G(\varLambda )^{-1}|\le 1\) by (7.20). Lemma 7.4, Proposition 7.1, and Corollary 8.2 imply that the sum on the right-hand side of (9.1) converges to an element of \((R'_u)^M\). Since \(u\in {{\mathscr {M}}}_-\) was arbitrary, this shows that \(G(\varLambda )^{-1}F(\varLambda ,x)\in (S')^M\).
Take successively \(u=-\mathbf{a}_j\), \(j=1,\dots ,M\) in (9.1) and multiply the j-th equation by \(\varLambda _j\). We combine the column vectors in the resulting M equations into matrices and apply (7.10) and (7.18). The resulting matrix equation is
where we have written \(H(\varLambda )\) for the \((M\times M)\)-matrix whose j-th column is
The entries in the matrix \(H(\varLambda )\) all lie in \({R}'_\mathbf{0}\) by Corollary 8.2 and have norm \(<1\) by (7.20), Lemma 7.4, and Proposition 7.1. We can thus apply the usual geometric series formula to invert the right-hand side of (9.2). This proves that \(F(\varLambda )^{-1}G(\varLambda )\) is a matrix with entries in \({R}'_\mathbf{0}\), all entries having norm \(\le 1\). Since the left-hand side of (9.1) lies in \(({R}'_u)^M\), we can now multiply it by \(F(\varLambda )^{-1}G(\varLambda )\) to conclude that
This shows that \(F(\varLambda )^{-1}F(\varLambda ,x)\) lies in \((S')^M\).
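The inversion invoked in this argument is the matrix geometric series: for an \((M\times M)\)-matrix \(H(\varLambda )\) whose entries have norm \(<1\),

```latex
\bigl(I + H(\varLambda)\bigr)^{-1}
= \sum_{m=0}^{\infty} \bigl(-H(\varLambda)\bigr)^{m},
% entrywise convergent since |H(\varLambda)^m| \le |H(\varLambda)|^m \to 0;
% every entry of the inverse has norm \le 1
% (the m = 0 term contributes I, and all higher terms have norm < 1).
```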
The remaining assertion, that \(F(\varLambda )^{-1}G(\varLambda ,x)\) lies in \((S')^M\), can be proved similarly by reversing the roles of F and G and using (7.21) in place of (7.24). Since that result is not needed in what follows, we omit the details. \(\square\)
Proposition 9.2
The matrix \({{\mathscr {F}}}(\varLambda )\) has entries in \(R'_\mathbf{0}\). For any \(\lambda \in ({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\) with \({\bar{D}}(\lambda )\ne 0\) we have
where \({\hat{\lambda }}\in {{\mathbb {Q}}}_p(\zeta _{q-1})^N\) denotes the Teichmüller lifting of \(\lambda\).
Proof
We showed in the proof of Proposition 9.1 that \({{\mathscr {H}}}(\varLambda ):=F(\varLambda )^{-1}G(\varLambda )\) and its inverse \({{\mathscr {H}}}(\varLambda )^{-1} = G(\varLambda )^{-1}F(\varLambda )\) have entries in \({R}_\mathbf{0}'\). We then have
which implies the first assertion of the proposition since \({{\mathscr {G}}}(\varLambda )\) has entries in \(R_\mathbf{0}'\) by Corollary 8.3. Equation (9.4) implies
The Teichmüller lifting \({\hat{\lambda }}\) satisfies \({\hat{\lambda }}^{p^a} = {\hat{\lambda }}\), and \({{\mathscr {H}}}(\varLambda )\) is a function on \({{\mathscr {D}}}\), so the second assertion of the proposition follows by evaluating (9.5) at \(\varLambda = {\hat{\lambda }}\). \(\square\)
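The role of the conjugating matrix can be made explicit by a telescoping computation. From the definitions \({{\mathscr {F}}}(\varLambda ) = F(\varLambda ^p)^{-1}F(\varLambda )\), \({{\mathscr {G}}}(\varLambda ) = G(\varLambda ^p)^{-1}G(\varLambda )\), and \({{\mathscr {H}}}(\varLambda ) = F(\varLambda )^{-1}G(\varLambda )\) one gets formally \({{\mathscr {F}}}(\varLambda ) = {{\mathscr {H}}}(\varLambda ^p)\,{{\mathscr {G}}}(\varLambda )\,{{\mathscr {H}}}(\varLambda )^{-1}\), and the product over Frobenius twists telescopes (a sketch):

```latex
{\mathscr F}\bigl({\hat\lambda}^{p^{a-1}}\bigr)\cdots
  {\mathscr F}\bigl({\hat\lambda}^{p}\bigr)\,{\mathscr F}({\hat\lambda})
= {\mathscr H}\bigl({\hat\lambda}^{p^{a}}\bigr)\,
  {\mathscr G}\bigl({\hat\lambda}^{p^{a-1}}\bigr)\cdots
  {\mathscr G}({\hat\lambda})\,
  {\mathscr H}({\hat\lambda})^{-1}
% since the Teichmüller lifting satisfies \hat\lambda^{p^a} = \hat\lambda:
= {\mathscr H}({\hat\lambda})
  \Bigl({\mathscr G}\bigl({\hat\lambda}^{p^{a-1}}\bigr)\cdots
  {\mathscr G}({\hat\lambda})\Bigr)
  {\mathscr H}({\hat\lambda})^{-1}.
```

The two products are therefore conjugate and share their eigenvalues, which is how the second assertion of Theorem 2.1 reduces to the statement with \({{\mathscr {F}}}\) replaced by \({{\mathscr {G}}}\).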
The first assertion of Proposition 9.2 implies the first assertion of Theorem 2.1. The second assertion of Proposition 9.2 shows that the second assertion of Theorem 2.1 is equivalent to the following statement.
Theorem 9.1
Let \(\lambda \in ({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\) and let \({\hat{\lambda }}\in {{\mathbb {Q}}}_p(\zeta _{q-1})^N\) be its Teichmüller lifting. If \({\bar{D}}(\lambda )\ne 0\), then \({\hat{\lambda }}^{p^i}\in {{\mathscr {D}}}\) for \(i=0,\dots ,a-1\) and
10 Proof of Theorem 9.1
We apply Dwork’s p-adic cohomology theory to prove Theorem 9.1.
We begin by recalling the formula for the rational function \(P_\lambda (t)\) that was proved in [3]. For a subset \(I\subseteq \{0,1,\dots ,n\}\), let
Put \(g_\lambda = x_{n+1}f_\lambda\), so that
and let \({\hat{g}}_\lambda\) be its Teichmüller lifting:
From (3.15) we have
We also need the series \(\theta _0({\hat{\lambda }},x)\) defined by
Define an operator \(\psi\) on formal power series by
Denote by \(\alpha _{{\hat{\lambda }}}\) the composition
The map \(\alpha _{{\hat{\lambda }}}\) is stable on \(L^I\) for all I and we have ( [3, Equation (7.12)])
For notational convenience, put \(\Gamma = \{0,1,\dots ,n\}\). The following lemma is an immediate consequence of [3, Proposition 7.13(b)] (note that since we are assuming \(d\ge n+1\), we have \(\mu =0\) in that proposition).
Lemma 10.1
The unit reciprocal roots of \(P_\lambda (t)\) are obtained from the reciprocal roots of \(\det (I-t\alpha _{{\hat{\lambda }}}\mid L_0^\Gamma )\) of q-ordinal equal to 1 by division by q.
We give an alternate description of \(\det (I-t\alpha _{{\hat{\lambda }}}\mid L_0^\Gamma )\) ( [3, Sect. 7]). Set
a p-adic Banach space with norm \(|\xi ^*|= \sup _{u\in ({{\mathbb {Z}}}_{<0})^{n+2}}\{|c_u^*|\}\). Define a map \(\varPhi\) on formal power series by
Consider the formal composition \(\alpha ^*_{{\hat{\lambda }}} = \delta _-\circ \theta _0({\hat{\lambda }},x)\circ \varPhi ^a\). The following result is [3, Proposition 7.30].
Proposition 10.1
The operator \(\alpha ^*_{{\hat{\lambda }}}\) is an endomorphism of B which is adjoint to \(\alpha _{{\hat{\lambda }}}:L_0^\Gamma \rightarrow L_0^\Gamma\).
From Proposition 10.1, it follows by Serre [12, Proposition 15] that \(\det (I-t\alpha ^*_{{\hat{\lambda }}}\mid B) = \det (I-t\alpha _{{\hat{\lambda }}}\mid L_0^\Gamma )\), so Lemma 10.1 gives the following result.
Corollary 10.1
The unit reciprocal roots of \(P_\lambda (t)\) are obtained from the reciprocal roots of \(\det (I-t\alpha ^*_{{\hat{\lambda }}}\mid B)\) of q-ordinal equal to 1 by division by q.
We saw in Sect. 1 that \(P_\lambda (t)\) has exactly M unit roots when \(\lambda \in ({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\) and \({\bar{D}}(\lambda )\ne 0\). By Proposition 8.1, the eigenvalues of the \((M\times M)\)-matrix \({{\mathscr {G}}}(\lambda )\) are units for all \(\lambda \in {{\mathscr {D}}}\). So to prove Theorem 9.1, it suffices to establish the following result.
Proposition 10.2
Let \(\lambda \in ({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\) and let \({\hat{\lambda }}\in {{\mathbb {Q}}}_p(\zeta _{q-1})^N\) be its Teichmüller lifting. Assume that \({\bar{D}}(\lambda )\ne 0\). Then \({\hat{\lambda }}^{p^i}\in {{\mathscr {D}}}\) for \(i=0,\dots ,a-1\) and
is a factor of \(\det (I-t\alpha ^*_{{\hat{\lambda }}}\mid B)\).
Proof
Using the notation of (8.9), we have by (8.7)
Iterating this gives for all \(m\ge 0\)
Evaluate \({{\mathscr {G}}}(\varLambda ,x)\) at \(\varLambda = {\hat{\lambda }}\):
where by (8.10)
Since \(\gamma _0^{-(p-1)u_{n+1}}\rightarrow 0\) as \(u\rightarrow \infty\), this expression lies in B. One checks that the specialization of the left-hand side of (10.7) with \(m=a\) at \(\varLambda = {\hat{\lambda }}\) is \(\alpha _{{\hat{\lambda }}}^*\big ({{\mathscr {G}}}({\hat{\lambda }},x)\big )\), so specializing (10.7) with \(m=a\) and \(\varLambda = {\hat{\lambda }}\) gives
We have proved that \({{\mathscr {G}}}({\hat{\lambda }},x)\) is a vector of M elements of B and that the action of \(\alpha ^*_{{\hat{\lambda }}}\) on these M elements is represented by the matrix
This implies the assertion of the proposition. \(\square\)
References
Adolphson, A., Sperber, S.: \(A\)-hypergeometric series and the Hasse-Witt matrix of a hypersurface. Finite Fields Appl. 41, 55–63 (2016)
Adolphson, A., Sperber, S.: A generalization of the Hasse-Witt matrix of a hypersurface. Finite Fields Appl. 47, 203–221 (2017)
Adolphson, A., Sperber, S.: Distinguished-root formulas for generalized Calabi-Yau hypersurfaces. Algebra Number Theory 11(6), 1317–1356 (2017)
Beukers, F., Vlasenko, M.: Dwork crystals I. Int. Math. Res. Not., to appear
Beukers, F., Vlasenko, M.: Dwork crystals II. Int. Math. Res. Not., to appear
Dwork, B.: On the zeta function of a hypersurface. Inst. Hautes Études Sci. Publ. Math. No. 12, 5–68 (1962)
Dwork, B.: \(p\)-adic cycles. Inst. Hautes Études Sci. Publ. Math. No. 37, 27–115 (1969)
Katz, N.: Algebraic solutions of differential equations (\(p\)-curvature and the Hodge filtration). Invent. Math. 18, 1–118 (1972)
Koblitz, N.: \(p\)-adic variation of the zeta-function over families of varieties defined over finite fields. Compositio Math. 31(2), 119–218 (1975)
Miller, L.: Über gewöhnliche Hyperflächen. I. J. Reine Angew. Math. 282, 96–113 (1976)
Miller, L.: Über gewöhnliche Hyperflächen. II. J. Reine Angew. Math. 283(284), 402–420 (1976)
Serre, J.-P.: Endomorphismes complètement continus des espaces de Banach \(p\)-adiques. Inst. Hautes Études Sci. Publ. Math. 12, 69–85 (1962)
Communicated by Jens Funke.
Adolphson, A., Sperber, S. A-hypergeometric series and a p-adic refinement of the Hasse-Witt matrix. Abh. Math. Semin. Univ. Hambg. 91, 225–256 (2021). https://doi.org/10.1007/s12188-021-00243-1