1 Introduction

Dwork [7] expressed the unit root of the zeta function of an ordinary elliptic curve in the Legendre family in characteristic p in terms of the Gaussian hypergeometric function \({}_2F_1(1/2,1/2,1,\lambda )\). There have since been a number of generalizations and extensions of that result: see [3, Sect. 1]. The point of the current paper is to prove such a result for projective hypersurfaces where the zeta function has multiple unit roots by using the matrix of A-hypergeometric series that appeared in [1]. We note that related results have recently been obtained for hypersurfaces in the torus by Beukers and Vlasenko [4, 5] using different methods.

Let \(\{x^{\mathbf{b}_k}\}_{k=1}^N\) be the set of all monomials of degree d in variables \(x_0,\dots ,x_n\) (so \(N = \binom{d+n}{n}\)) and let

$$\begin{aligned} f_\lambda (x_0,\dots ,x_n) = \sum _{k=1}^N \lambda _k x^{\mathbf{b}_k}\in {{\mathbb {F}}}_q[x_0,\dots ,x_n] \end{aligned}$$
(1.1)

be a homogeneous polynomial of degree d over the finite field \({{\mathbb {F}}}_q\), \(q=p^a\), p a prime. Let \(X_{\lambda }\subseteq {{\mathbb {P}}}^n_{{{\mathbb {F}}}_q}\) be the projective hypersurface defined by the vanishing of \(f_\lambda\) and let \(Z(X_\lambda /{{\mathbb {F}}}_q,t)\) be its zeta function. Define a rational function \(P_\lambda (t)\) by the equation

$$\begin{aligned} P_\lambda (t) = \big (Z(X_\lambda /{{\mathbb {F}}}_q,t)\,(1-t)\,(1-qt)\cdots (1-q^{n-1}t)\big )^{(-1)^n}\in 1+t{{\mathbb {Z}}}\,[[t]]. \end{aligned}$$

When \(X_\lambda\) is smooth, \(P_\lambda (t)\) is the characteristic polynomial of Frobenius acting on middle-dimensional primitive cohomology. In this case, \(P_\lambda (t)\) has degree

$$\begin{aligned} d^{-1}\big ((d-1)^{n+1}+(-1)^{n+1}(d-1)\big ). \end{aligned}$$

In the general case, we know only that \(P_\lambda (t)\) is a rational function.
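For plane curves (\(n=2\)) this degree is \((d-1)(d-2)\), twice the genus, and for surfaces in \({{\mathbb {P}}}^3\) it is the dimension of primitive \(H^2\). A short sketch (ours, purely for illustration) checking that the formula yields an integer and matches these familiar cases:

```python
def degree_of_P(d, n):
    """Degree of P_lambda(t) for a smooth hypersurface of degree d in P^n:
    d^{-1}((d-1)^{n+1} + (-1)^{n+1}(d-1))."""
    num = (d - 1) ** (n + 1) + (-1) ** (n + 1) * (d - 1)
    assert num % d == 0  # the expression is always an integer
    return num // d

# Plane curves (n = 2): the formula collapses to (d-1)(d-2) = 2*genus.
for d in range(1, 12):
    assert degree_of_P(d, 2) == (d - 1) * (d - 2)

# Cubic surface: b_2 = 7, minus the hyperplane class gives 6.
assert degree_of_P(3, 3) == 6
# Quartic (K3) surface: b_2 = 22, primitive part has dimension 21.
assert degree_of_P(4, 3) == 21
```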

We regard \(f_\lambda\) as a polynomial with fixed exponents and variable coefficients, giving rise to a family of rational functions \(P_\lambda (t)\). The p-adic unit (reciprocal) roots of \(P_\lambda (t)\) all occur in the numerator (Proposition 1.1), and for generic \(\lambda\) the polynomial \(P_\lambda (t)\) has the maximal possible number of p-adic unit (reciprocal) roots (by the generic invertibility of the Hasse-Witt matrix, see below). Our goal in this paper is to give a p-adic analytic formula for these unit roots in terms of A-hypergeometric series.

Write \(\mathbf{b}_k = (b_{0k},\dots ,b_{nk})\) with \(\sum _{i=0}^n b_{ik} = d\). It will be convenient to define an augmentation of these vectors. For \(k=1,\dots ,N\), put

$$\begin{aligned} \mathbf{a}_k = (\mathbf{b}_k,1) = (b_{0k},b_{1k},\dots ,b_{nk},1)\in {{\mathbb {N}}}^{n+2} \end{aligned}$$

(where \({{\mathbb {N}}}\) denotes the nonnegative integers). Let \(A = \{\mathbf{a}_k\}_{k=1}^N\). Note that the vectors \(\mathbf{a}_k\) all lie on the hyperplane \(\sum _{i=0}^n u_i=du_{n+1}\) in \({{\mathbb {R}}}^{n+2}\). Let

$$\begin{aligned} U = \bigg \{ u=(u_0,\dots ,u_n,1)\in {{\mathbb {N}}}^{n+2}\mid \sum _{i=0}^n u_i=d\, \text { and } u_i>0 \, \text {for }\, i=0,\dots ,n\bigg \}. \end{aligned}$$

Note that \(\{x_0^{u_0}\cdots x_n^{u_n}\mid u\in U\}\) is the set of all monomials of degree d that are divisible by the product \(x_0\cdots x_n\), so \(|U| = \binom{d-1}{n}\). We assume throughout that \(d\ge n+1\), which implies \(U\ne \emptyset\). (If \(d<n+1\), then U is empty and none of the reciprocal roots of \(P_\lambda (t)\) is a p-adic unit. If one defines the determinant of an empty matrix to be 1, then Theorem 1.1 below is trivially true in this case.)
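The count \(|U|=\binom{d-1}{n}\) is easy to confirm by direct enumeration. The following Python sketch (the function name is ours, purely for illustration) lists U for small d and n and checks the count.

```python
from itertools import product
from math import comb

def enumerate_U(d, n):
    """Vectors u = (u_0, ..., u_n, 1) with sum(u_i) = d and every u_i > 0,
    i.e. the degree-d monomials divisible by x_0 * x_1 * ... * x_n."""
    return [u + (1,) for u in product(range(1, d + 1), repeat=n + 1)
            if sum(u) == d]

# Subtracting 1 from each u_i leaves a composition of d - (n+1) into
# n+1 nonnegative parts, whence |U| = C(d-1, n).
for d, n in [(3, 2), (4, 2), (5, 3), (6, 2)]:
    assert len(enumerate_U(d, n)) == comb(d - 1, n)
```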

We recall the definition of the Hasse-Witt matrix of \(f_\lambda\), a \((|U|\times |U|)\)-matrix with rows and columns indexed by U. Let \(\varLambda _1,\dots ,\varLambda _N\) be indeterminates and let \(H(\varLambda ) = \big [ H_{uv}(\varLambda )\big ]_{u,v\in U}\) be the matrix of polynomials:

$$\begin{aligned} H_{uv}(\varLambda ) = \sum _{\begin{array}{c} \nu \in {{\mathbb {N}}}^N\\ \sum _{k=1}^N \nu _k\mathbf{a}_k = pu-v \end{array}} \frac{\varLambda _1^{\nu _1}\cdots \varLambda _N^{\nu _N}}{\nu _1!\cdots \nu _N!}\in {{\mathbb {Q}}}[\varLambda _1,\dots ,\varLambda _N]. \end{aligned}$$
(1.2)

Since the last coordinate of each \(\mathbf{a}_k\), u, and v equals 1, the condition on the summation implies \(\sum _{k=1}^N \nu _k = p-1\). In particular, it follows that \(\nu _k\le p-1\) for all k, so the coefficients of each \(H_{uv}(\varLambda )\) lie in \({{\mathbb {Q}}}\cap {{\mathbb {Z}}}_p\). Let \({\bar{H}}_{uv}(\varLambda )\in {{\mathbb {F}}}_p[\varLambda ]\) be the reduction mod p of \(H_{uv}(\varLambda )\) and let \({\bar{H}}(\varLambda )\) be the reduction mod p of \(H(\varLambda )\). Then \({\bar{H}}(\lambda )\) is the Hasse-Witt matrix of \(f_\lambda\) (relative to a certain basis: see Katz [8, Algorithm 2.3.7.14]).
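To make (1.2) concrete: since the summation forces \(\sum _{k=1}^N \nu _k = p-1\), each entry \(H_{uv}(\varLambda )\) is computed by a finite enumeration. The sketch below (all names are ours) does this for small p, d, n and verifies the p-integrality of the coefficients observed above.

```python
from fractions import Fraction
from itertools import product
from math import factorial

def compositions(s, parts):
    """All tuples of `parts` nonnegative integers summing to s."""
    if parts == 1:
        yield (s,)
        return
    for first in range(s + 1):
        for rest in compositions(s - first, parts - 1):
            yield (first,) + rest

def hasse_witt_entry(p, d, n, u, v):
    """H_uv of (1.2) as a dictionary {nu: 1/(nu_1! ... nu_N!)}.  The last
    coordinate of each a_k forces sum(nu) = p - 1, so we enumerate exactly
    those nu and keep the ones with sum(nu_k * b_k) = p*u - v."""
    B = [b for b in product(range(d + 1), repeat=n + 1) if sum(b) == d]
    N = len(B)
    target = tuple(p * u[i] - v[i] for i in range(n + 1))
    entry = {}
    for nu in compositions(p - 1, N):
        if all(sum(nu[k] * B[k][i] for k in range(N)) == target[i]
               for i in range(n + 1)):
            c = Fraction(1)
            for nk in nu:
                c /= factorial(nk)
            entry[nu] = c
    return entry

# p = 5, d = 3, n = 2, u = v = (1,1,1): every nu_k <= p-1, so no
# denominator is divisible by p, i.e. the coefficients are p-integral.
H = hasse_witt_entry(5, 3, 2, (1, 1, 1), (1, 1, 1))
assert H and all(c.denominator % 5 != 0 for c in H.values())
```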

Write \(P_\lambda (t) = Q_\lambda (t)/R_\lambda (t)\), where \(Q_\lambda (t),R_\lambda (t)\in 1 + t{{\mathbb {Z}}}[t]\) are relatively prime polynomials. As a special case of [2, Theorem 1.4] we have the following result.

Proposition 1.1

For all \(\lambda \in {{\mathbb {F}}}_q^N\) we have \(R_\lambda (t)\equiv 1 \pmod {q}\) and

$$\begin{aligned} Q_\lambda (t) \equiv \det \big ( I-t {\bar{H}}(\lambda ^{p^{a-1}})\, {\bar{H}}(\lambda ^{p^{a-2}})\cdots {\bar{H}}(\lambda )\big ) \pmod {p}. \end{aligned}$$

Proposition 1.1 implies that all the unit roots of \(P_\lambda (t)\) occur in the numerator and that there are at most \(\mathrm{card}(U)\) of them. The Hasse-Witt matrix is known to be generically invertible for a “sufficiently general” polynomial \(f_\lambda\) (Koblitz [9], Miller [10, 11]). We recall the precise version of that fact that we need.

We suppose the \(\mathbf{a}_k\) to be ordered so that \(U=\{\mathbf{a}_k\}_{k=1}^M\), \(M=\binom{d-1}{n}\). We may then write the matrix \(H(\varLambda )\) as \(\big [ H_{ij}(\varLambda )\big ]_{i,j=1}^M\), where

$$\begin{aligned} H_{ij}(\varLambda ) = \sum _{\begin{array}{c} \nu \in {{\mathbb {N}}}^N\\ \sum _{k=1}^N \nu _k\mathbf{a}_k = p\mathbf{a}_i-\mathbf{a}_j \end{array}} \frac{\varLambda _1^{\nu _1}\cdots \varLambda _N^{\nu _N}}{\nu _1!\cdots \nu _N!}\in ({{\mathbb {Q}}}\cap {{\mathbb {Z}}}_p)[\varLambda _1,\dots ,\varLambda _N]. \end{aligned}$$
(1.3)

Consider the related matrix \(B(\varLambda ) = \big [B_{ij}(\varLambda )\big ]_{i,j=1}^M\) of Laurent polynomials defined by

$$\begin{aligned} B_{ij}(\varLambda ) = \varLambda _i^{-p}\varLambda _j H_{ij}(\varLambda )\in ({{\mathbb {Q}}}\cap {{\mathbb {Z}}}_p)[\varLambda _1,\dots ,\varLambda _{i-1},\varLambda _i^{-1},\varLambda _{i+1},\dots ,\varLambda _N], \end{aligned}$$
(1.4)

i.e., \(B(\varLambda ) = C(\varLambda ^p)^{-1}H(\varLambda )C(\varLambda )\), where \(C(\varLambda )\) is the \(M\times M\) diagonal matrix with diagonal entries \(\varLambda _1,\dots ,\varLambda _M\). Since \(\lambda _k^{p^a}=\lambda _k\) for \(\lambda _k\in {{\mathbb {F}}}_q\), Proposition 1.1 has the following corollary.

Corollary 1.1

For all \(\lambda \in ({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\),

$$\begin{aligned} Q_\lambda (t) \equiv \det \big ( I-t {\bar{B}}(\lambda ^{p^{a-1}})\, {\bar{B}}(\lambda ^{p^{a-2}})\cdots {\bar{B}}(\lambda )\big ) \pmod {p}. \end{aligned}$$

Put \(D(\varLambda ) = \det B(\varLambda )\). By [1, Proposition 2.11] we have the following result.

Proposition 1.2

The Laurent polynomial \(D(\varLambda )\) has constant term 1.

Proposition 1.2 implies that neither \(D(\varLambda )\) nor \({\bar{D}}(\varLambda )\) is the zero polynomial, so the matrix \({\bar{B}}(\lambda )\) is invertible for \(\lambda\) in a Zariski open subset of \(({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\), a nonempty set for q sufficiently large. It follows from Corollary 1.1 that for \({\bar{D}}(\lambda )\ne 0\), \(P_\lambda (t)\) has M unit reciprocal roots. Let \(\{\pi _j(\lambda )\}_{j=1}^M\) be these roots and put

$$\begin{aligned} \rho (\lambda ,t) =\prod _{j=1}^M \big (1-\pi _j(\lambda )t\big ). \end{aligned}$$

Our goal is to describe the polynomial \(\rho (\lambda ,t)\) in terms of the A-hypergeometric series introduced in [1, Equation (3.3)].

We begin by defining a ring that contains the desired series. For each \(i=1,\dots ,M\), the Laurent polynomials \(B_{ij}(\varLambda )\) have exponents lying in the set

$$\begin{aligned} L_{i} := \bigg \{ l=(l_1,\dots ,l_{N})\in {{\mathbb {Z}}}^{N}\mid \sum _{k=1}^{N} l_k\mathbf{a}_k = \mathbf{0}, l_i\le 0, \,\text { and}\, l_k\ge 0 \, \text { for }\, k\ne i\bigg \}. \end{aligned}$$

Let \({{\mathscr {C}}}\subseteq {{\mathbb {R}}}^{N}\) be the real cone generated by \(\bigcup _{i=1}^M L_{i}\). By [1, Proposition 2.9], the cone \({{\mathscr {C}}}\) does not contain a line, hence it has a vertex at the origin. Put \(E={{\mathscr {C}}}\cap {{\mathbb {Z}}}^N\). Note that \((l_1,\dots ,l_N)\in E\) implies that \(\sum _{k=1}^N l_k\mathbf{a}_k = \mathbf{0}\) and that \(\sum _{k=1}^N l_k = 0\) since the last coordinate of each \(\mathbf{a}_k\) equals 1. Let \({{\mathbb {C}}}_p\) be the completion of an algebraic closure of \({{\mathbb {Q}}}_p\) with absolute value \(|\cdot |\) associated to the valuation \(\mathrm{ord}\), normalized by \(\mathrm{ord}\;p=1\) and \(|p|= p^{-1}\). Set

$$\begin{aligned} R_E=\bigg \{ \xi (\varLambda ) = \sum _{l=(l_1,\dots ,l_N)\in E} c_l\varLambda _1^{l_1}\cdots \varLambda _N^{l_N}\mid c_l\in {{\mathbb {C}}}_p\, \text {and}\, \{|c_l\vert \}_{l\in E} \, \text {is bounded}\bigg \}, \end{aligned}$$

i.e., \(R_E\) is the set of Laurent series over \({{\mathbb {C}}}_p\) having exponents in E and bounded coefficients. Note that \(R_E\) is a ring (since \({{\mathscr {C}}}\) has a vertex at the origin) and that the entries of \(B(\varLambda )\) all lie in \(R_E\). By Proposition 1.2, the Laurent polynomial \(D(\varLambda )\) is an invertible element of \(R_E\), so \(B(\varLambda )^{-1}\) has entries in \(R_E\).

We define a matrix \(F(\varLambda ) = \big [ F_{ij}(\varLambda )\big ]_{i,j=1}^M\) with entries in \(R_E\). For \(i\ne j\), put

$$\begin{aligned} F_{ij}(\varLambda ) = \sum _{\begin{array}{c} (l_1,\dots ,l_N)\in L_i\\ l_j>0 \end{array}} (-1)^{-l_i-1} \frac{(-l_i-1) !}{\displaystyle (l_j-1)!\prod _{\begin{array}{c} k=1\\ k\ne i,j \end{array}}^N l_k!} \varLambda _1^{l_1}\cdots \varLambda _N^{l_N}, \end{aligned}$$

and for \(i=j\) put

$$\begin{aligned} F_{ii}(\varLambda ) = \sum _{(l_1,\dots ,l_N)\in L_i} (-1)^{-l_i} \frac{(-l_i)!}{\displaystyle \prod _{\begin{array}{c} k=1\\ k\ne i \end{array}}^N l_k!} \varLambda _1^{l_1}\cdots \varLambda _N^{l_N}. \end{aligned}$$

The coefficients of these series are multinomial coefficients, hence lie in \({{\mathbb {Z}}}\), so these series lie in \(R_E\). Note also that each \(F_{ii}(\varLambda )\) has constant term 1, while the \(F_{ij}(\varLambda )\) for \(i\ne j\) have no constant term. It follows that \(\det F(\varLambda )\) has constant term 1, hence \(\det F(\varLambda )\) is an invertible element of \(R_E\). We may therefore define a matrix \({{\mathscr {F}}}(\varLambda )\) with entries in \(R_E\) by the formula

$$\begin{aligned} {{\mathscr {F}}}(\varLambda ) = F(\varLambda ^p)^{-1}F(\varLambda ). \end{aligned}$$
(1.5)

The interpretation of the \(F_{ij}(\varLambda )\) as A-hypergeometric series will be explained in Sect. 7 (see Equation (7.10)).
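In the simplest case \(n=2\), \(d=3\) one has \(M=1\), with \(\mathbf{a}_1\) corresponding to the monomial \(x_0x_1x_2\), and a truncation of the single series \(F_{11}(\varLambda )\) can be computed by enumerating the points of \(L_1\) with \(-l_1\) bounded. The following Python sketch is ours, written directly from the displayed formula for \(F_{ii}\).

```python
from itertools import product
from math import factorial

def compositions(s, parts):
    """All tuples of `parts` nonnegative integers summing to s."""
    if parts == 1:
        yield (s,)
        return
    for first in range(s + 1):
        for rest in compositions(s - first, parts - 1):
            yield (first,) + rest

# Degree-3 monomials in x0, x1, x2, ordered with b_1 = (1,1,1), the unique
# monomial divisible by x0*x1*x2 (so M = 1 here).
B = [(1, 1, 1)] + [b for b in product(range(4), repeat=3)
                   if sum(b) == 3 and b != (1, 1, 1)]
N = len(B)  # N = 10

def F11_truncated(bound):
    """Coefficients {l: c} of F_11 summed over l in L_1 with -l_1 <= bound,
    following the displayed formula for F_ii."""
    coeffs = {}
    for m in range(bound + 1):             # m = -l_1 >= 0
        for rest in compositions(m, N - 1):
            # need sum_k l_k a_k = 0, i.e. the positive part sums to m*(1,1,1)
            if all(sum(rest[k] * B[k + 1][i] for k in range(N - 1)) == m
                   for i in range(3)):
                denom = 1
                for lk in rest:
                    denom *= factorial(lk)
                coeffs[(-m,) + rest] = (-1) ** m * factorial(m) // denom
    return coeffs

F = F11_truncated(3)
assert F[(0,) * N] == 1   # constant term 1
# l = (-3, ...) with one unit each at x0^3, x1^3, x2^3 gives -3!/(1*1*1) = -6:
l = (-3,) + tuple(1 if B[k] in [(3, 0, 0), (0, 3, 0), (0, 0, 3)] else 0
                  for k in range(1, N))
assert F[l] == -6
```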

Put

$$\begin{aligned} {{\mathscr {D}}}= & {} \{ (\lambda _1,\dots ,\lambda _N)\in {{\mathbb {C}}}_p^N\mid |\lambda _k|=1 \, \text { for }\, k=1,\dots ,M, \nonumber \\&|\lambda _k|\le 1 \,\text { for}\, k=M+1,\dots ,N, \, \text {and}\, |D(\lambda )|=1\}. \end{aligned}$$
(1.6)

Our main result is the following.

Theorem 1.1

The entries in the matrix \({{\mathscr {F}}}(\varLambda )\) are functions on \({{\mathscr {D}}}\). Let \(\lambda \in ({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\) and let \({\hat{\lambda }}\in {{\mathbb {Q}}}_p(\zeta _{q-1})^N\) be its Teichmüller lifting. If \({\bar{D}}(\lambda )\ne 0\), then \({\hat{\lambda }}^{p^i}\in {{\mathscr {D}}}\) for \(i=0,\dots ,a-1\) and

$$\begin{aligned} \rho (\lambda ,t) = \det \big ( I-t{{\mathscr {F}}}({\hat{\lambda }}^{p^{a-1}}){{\mathscr {F}}}({\hat{\lambda }}^{p^{a-2}})\cdots {{\mathscr {F}}}({\hat{\lambda }})\big ). \end{aligned}$$

Remark 1

The first sentence of Theorem 1.1 will be made more precise in Sect. 2 (see Theorem 2.1). The assertion that \({\hat{\lambda }}^{p^i}\in {{\mathscr {D}}}\) for all i if \({\bar{D}}(\lambda )\ne 0\) follows immediately from the fact that \({\bar{D}}(\varLambda )\) has coefficients in \({{\mathbb {F}}}_p\).

Remark 2

The series \(F_{ij}(\varLambda )\) are related to the Laurent polynomials \(B_{ij}(\varLambda )\) by truncation (see [1, Proposition 3.8]). One can thus regard Theorem 1.1 as a refinement of Corollary 1.1, i.e., one may think of \({{\mathscr {F}}}(\varLambda )\) as a p-adic refinement of the Hasse-Witt matrix \(B(\varLambda )\).

2 Rings of p-adic series

This section has two purposes: we need a ring R large enough to contain all the series that will be encountered in the proof of Theorem 1.1, and we need to identify a subring \(R'\subseteq R\) whose elements define functions on \({{\mathscr {D}}}\).

Consider the \({{\mathbb {C}}}_p\)-vector space

$$\begin{aligned} {{\mathbb {B}}}:= \bigg \{ \xi (\varLambda ) = \sum _{l\in {{\mathbb {Z}}}^N} c_l\varLambda ^l\mid \{|c_l|\}_{l\in {{\mathbb {Z}}}^N}\, \text { is bounded} \bigg \} \end{aligned}$$

of all Laurent series with bounded coefficients. We define a norm on \({{\mathbb {B}}}\) by setting

$$\begin{aligned} |\xi (\varLambda )|= \sup _{l\in {{\mathbb {Z}}}^N} |c_l|. \end{aligned}$$

The \({{\mathbb {C}}}_p\)-vector space \({{\mathbb {B}}}\) is complete in this norm and \(R_E\) is a \({{\mathbb {C}}}_p\)-subspace of \({{\mathbb {B}}}\) which is also complete.

We begin by defining a subring \(R'_E\) of the ring \(R_E\) whose elements define functions on \({{\mathscr {D}}}\). Let \(L_E\) denote the subring of \(R_E\) consisting of the Laurent polynomials that lie in \(R_E\). Since \(D(\varLambda )\) is an invertible element of \(R_E\), we may define a subring of \(R_E\)

$$\begin{aligned} L_{E,D(\varLambda )} := \bigg \{ \frac{h(\varLambda )}{D(\varLambda )^k}\mid h(\varLambda )\in L_E \, \text { and }\, k\in {{\mathbb {N}}}\bigg \}. \end{aligned}$$

The definitions of \(R_E\) and \({{\mathscr {D}}}\) show that the elements of \(L_{E,D(\varLambda )}\) are functions on \({{\mathscr {D}}}\). We define \(R_E'\) to be the completion of \(L_{E,D(\varLambda )}\) under the norm on \({{\mathbb {B}}}\). Clearly \(R_E'\) is a subring of \(R_E\). The next two lemmas will show that the elements of \(R_E'\) define functions on \({{\mathscr {D}}}\).

Lemma 2.1

For \(h\,(\varLambda )\in L_E\) and \(k\in {{\mathbb {N}}}\), one has

$$\begin{aligned} \bigg |\frac{h(\varLambda )}{D(\varLambda )^k}\bigg |= |h(\varLambda )|. \end{aligned}$$

Proof

Write

$$\begin{aligned} D(\varLambda )^{-k} = \sum _{l\in E} d_l\varLambda ^l. \end{aligned}$$

Since \(D\,(\varLambda )\) has constant term 1 and p-integral coefficients, it follows that \(d_\mathbf{0} = 1\) and the \(d_l\) are p-integral. Write

$$\begin{aligned} h(\varLambda ) = \sum _{i=1}^r h_i\varLambda ^{e_i} \end{aligned}$$

where \(e_i\in E\) for \(i=1,\dots ,r\). The coefficient of \(\varLambda ^l\) in the quotient \(h(\varLambda )/D(\varLambda )^k\) is

$$\begin{aligned} \sum _{i=1}^r h_i d_{l-e_i}, \end{aligned}$$
(2.1)

with the understanding that \(d_{l-e_i} = 0\) if \(l-e_i\not \in E\). Since the \(d_{l-e_i}\) are p-integral,

$$\begin{aligned} \bigg |\sum _{i=1}^r h_i d_{l-e_i}\bigg |\le \sup _{i=1,\dots ,r} |h_i|, \end{aligned}$$

which implies that \(|h(\varLambda )/D(\varLambda )^k|\le |h(\varLambda )|\).

We must have \(|h(\varLambda )|= |h_i|\) for some i. To fix ideas, suppose that

$$\begin{aligned} |h(\varLambda )|= |h_i|\quad \text {for }\, i=1,\dots ,s \end{aligned}$$
(2.2)

and

$$\begin{aligned} |h(\varLambda )|> |h_i|\quad \text {for }\, i=s+1,\dots ,r. \end{aligned}$$
(2.3)

Since the cone \({{\mathscr {C}}}\) has a vertex at the origin, we can choose a vector \(v\in {{\mathbb {R}}}^N\) such that the inner product \(v\bullet w\) is \(\ge 0\) for all \(w\in {{\mathscr {C}}}\) and \(v\bullet w=0\) only if \(w=\mathbf{0}\). Choose \(i\in \{1,\dots ,s\}\) for which the inner product \(v\bullet e_i\) is minimal. To fix ideas, suppose that

$$\begin{aligned} v\bullet e_1\le v\bullet e_i\quad \text {for }\, i=2,\dots ,s. \end{aligned}$$
(2.4)

From (2.1), the coefficient of \(\varLambda ^{e_1}\) in \(h(\varLambda )/D(\varLambda )^k\) is (using \(d_\mathbf{0} = 1\))

$$\begin{aligned} h_1 + \sum _{i=2}^r h_i d_{e_1-e_i}. \end{aligned}$$
(2.5)

For \(i=2,\dots ,s\) we have by (2.4) that \(v\bullet (e_1-e_i)\le 0\). But \(e_1-e_i\ne \mathbf{0}\), so \(e_1-e_i\not \in E\). This implies \(d_{e_1-e_i} = 0\), so (2.5) simplifies to

$$\begin{aligned} h_1 + \sum _{i=s+1}^r h_i d_{e_1-e_i}. \end{aligned}$$

Equations (2.2) and (2.3) now imply that

$$\begin{aligned} \bigg |h_1 + \sum _{i=s+1}^r h_i d_{e_1-e_i}\bigg |= |h(\varLambda )|, \end{aligned}$$

hence \(|h(\varLambda )/D(\varLambda )^k|\ge |h(\varLambda )|\). \(\square\)

It follows from the definition of E that if \((l_1,\dots ,l_N)\in E\), then \(l_i\ge 0\) for \(i=M+1,\dots ,N\). This implies that if \(h(\varLambda )\in L_E\) and \(\lambda \in {{\mathscr {D}}}\), then

$$\begin{aligned} |h\,(\lambda )|\le |h\,(\varLambda )|. \end{aligned}$$
(2.6)

Lemma 2.2

Let \(h_k(\varLambda )\in L_E\) for \(k\ge 1\). We suppose the sequence \(\{h_k(\varLambda )/D(\varLambda )^k\}_{k=1}^\infty\) converges to \(\xi (\varLambda )\in R_E'\). Then for all \(\lambda \in {{\mathscr {D}}}\), the sequence \(\{h_k(\lambda )/D(\lambda )^k\}_{k=1}^\infty\) converges in \({{\mathbb {C}}}_p\).

Proof

For \(k<k'\) we have

$$\begin{aligned} \bigg |\frac{h_k(\lambda )}{D^k(\lambda )} - \frac{h_{k'}(\lambda )}{D^{k'}(\lambda )}\bigg |&= \bigg |\frac{h_k(\lambda )D(\lambda )^{k'-k}-h_{k'}(\lambda )}{D^{k'}(\lambda )}\bigg |\\&= |h_k(\lambda )D(\lambda )^{k'-k}-h_{k'}(\lambda )|\\&\le |h_k(\varLambda )D(\varLambda )^{k'-k}-h_{k'}(\varLambda )|\\&= \bigg |\frac{h_k(\varLambda )D(\varLambda )^{k'-k}-h_{k'}(\varLambda )}{D^{k'}(\varLambda )}\bigg |\\&= \bigg |\frac{h_k(\varLambda )}{D^k(\varLambda )} - \frac{h_{k'}(\varLambda )}{D^{k'}(\varLambda )}\bigg |, \end{aligned}$$

where the second line follows from the requirement that \(|D(\lambda )|= 1\) for \(\lambda \in {{\mathscr {D}}}\), the third line holds by (2.6), and the fourth line holds by Lemma 2.1.

Since the sequence \(\{h_k(\varLambda )/D(\varLambda )^k\}_{k=1}^\infty\) converges, it is a Cauchy sequence. This inequality shows that the sequence \(\{h_k(\lambda )/D\,(\lambda )^k\}_{k=1}^\infty\) is also a Cauchy sequence, which must converge because \({{\mathbb {C}}}_p\) is complete. \(\square\)

It is easy to see that the limit of the sequence \(\{h_k(\lambda )/D(\lambda )^k\}_{k=1}^\infty\) is independent of the choice of sequence \(\{h_k(\varLambda )/D(\varLambda )^k\}_{k=1}^\infty\) converging to \(\xi (\varLambda )\). We may therefore define

$$\begin{aligned} \xi (\lambda ) = \lim _{k\rightarrow \infty } \frac{h_k(\lambda )}{D^k(\lambda )} . \end{aligned}$$

Using this definition, the elements of \(R_E'\) become functions on \({{\mathscr {D}}}\). Note that the proof of Lemma 2.2 shows that, as functions on \({{\mathscr {D}}}\), the sequence \(\{h_k/D^k\}_{k=1}^\infty\) converges uniformly to \(\xi\).

We shall need the following fact later.

Lemma 2.3

If \(\xi (\varLambda )\in R_E'\) and \(\lambda \in {{\mathscr {D}}}\), then \(|\xi (\lambda )\vert \le |\xi (\varLambda )|\).

Proof

It follows from Lemma 2.1, Equation (2.6), and the fact that \(|D(\lambda )|= 1\) for \(\lambda \in {{\mathscr {D}}}\) that

$$\begin{aligned} \bigg |\frac{h(\lambda )}{D(\lambda )^k}\bigg |\le \bigg |\frac{h(\varLambda )}{D(\varLambda )^k}\bigg |\end{aligned}$$
(2.7)

for \(h\,(\varLambda )\in L_E\), \(k\in {{\mathbb {N}}}\), and \(\lambda \in {{\mathscr {D}}}\). If we choose a sequence \(\{h_k(\varLambda )/D\,(\varLambda )^k\}_{k=1}^\infty\), with the \(h_k(\varLambda )\) in \(L_E\), converging to \(\xi (\varLambda )\in R_E'\), then for k sufficiently large we have

$$\begin{aligned} |\xi (\varLambda )|= \bigg |\frac{h_k(\varLambda )}{D(\varLambda )^k}\bigg |\end{aligned}$$

and for \(\lambda \in {{\mathscr {D}}}\) we have by the definition of \(\xi (\lambda )\)

$$\begin{aligned} |\xi (\lambda )|= \bigg |\frac{h_k(\lambda )}{D(\lambda )^k}\bigg |. \end{aligned}$$

The assertion of the lemma now follows from (2.7). \(\square\)

The following result is our more precise version of Theorem 1.1.

Theorem 2.1

The entries in the matrix \({{\mathscr {F}}}(\varLambda )\) lie in \(R_E'\). Let \(\lambda \in ({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\) and let \({\hat{\lambda }}\in {{\mathbb {Q}}}_p(\zeta _{q-1})^N\) be its Teichmüller lifting. If \({\bar{D}}(\lambda )\ne 0\), then \({\hat{\lambda }}^{p^i}\in {{\mathscr {D}}}\) for \(i=0,\dots ,a-1\) and

$$\begin{aligned} \rho (\lambda ,t) = \det \big ( I-t{{\mathscr {F}}}({\hat{\lambda }}^{p^{a-1}}){{\mathscr {F}}}({\hat{\lambda }}^{p^{a-2}})\cdots {{\mathscr {F}}}({\hat{\lambda }})\big ). \end{aligned}$$

Most of this paper is devoted to proving the first sentence of Theorem 2.1. The remaining assertion will then follow by an application of Dwork’s p-adic cohomology theory.

Unfortunately the rings \(R_E\) and \(R'_E\) are not large enough to contain all the relevant series we shall encounter and therefore need to be enlarged. Let \({\tilde{R}} = R_E[\varLambda _1^{\pm 1},\dots ,\varLambda _N^{\pm 1}]\) and let

$$\begin{aligned} {\tilde{R}}' = R'_E[\varLambda _1^{\pm 1},\dots ,\varLambda _M^{\pm 1},\varLambda _{M+1},\dots ,\varLambda _N]. \end{aligned}$$

It is clear from the definition of \({{\mathscr {D}}}\) that the elements of \({\tilde{R}}'\) are functions on \({{\mathscr {D}}}\). We define R (resp. \({R}'\)) to be the completion of \({\tilde{R}}\) (resp. \({\tilde{R}}'\)) under the norm on \({{\mathbb {B}}}\). Note that the elements of \({R}'\) define functions on \({{\mathscr {D}}}\). It follows from Lemma 2.3 that

$$\begin{aligned} |\xi (\lambda )|\le |\xi (\varLambda )|\quad \text {for } \xi (\varLambda )\in {R}' \, \text { and } \lambda \in {{\mathscr {D}}}. \end{aligned}$$
(2.8)

Remark

One can show using the argument of Dwork [6, Lemma 1.2] that for \(\xi (\varLambda )\in R'\)

$$\begin{aligned} \sup _{\lambda \in {{\mathscr {D}}}} |\xi (\lambda )|= |\xi (\varLambda )|. \end{aligned}$$

Let \({{\mathscr {M}}}\subseteq {{\mathbb {Z}}}^{n+2}\) be the abelian group generated by A. We define an \({{\mathscr {M}}}\)-grading on R and \({R}'\). Every \(\xi (\varLambda )\in {R}\) can be written as a series \(\xi (\varLambda ) = \sum _{l\in {{\mathbb {Z}}}^N} c_l\varLambda ^l\). We say that \(\xi (\varLambda )\) has degree \(u\in {{\mathscr {M}}}\) if \(\sum _{k=1}^N l_k\mathbf{a}_k = u\) for all \(l\in {{\mathbb {Z}}}^N\) for which \(c_l\ne 0\). We denote by \({R}_u\) the \({{\mathbb {C}}}_p\)-subspace of R consisting of all elements of degree u. Each \({R}_u\) is complete in the norm and is a module over \({R}_\mathbf{0}\), which is a ring. Identical remarks apply to the induced grading on \({R}'\).

3 p-adic estimates

In this section we recall some estimates from [3, Sect. 3] to be applied later. Let \(\mathrm{AH}(t)= \exp (\sum _{i=0}^{\infty }t^{p^i}/p^i)\) be the Artin-Hasse series, a power series in t with p-integral coefficients, let \(\gamma _0\) be a zero of the series \(\sum _{i=0}^\infty t^{p^i}/p^i\) having \(\mathrm{ord}\;\gamma _0 = 1/(p-1)\), and set

$$\begin{aligned} \theta (t) = \mathrm{AH}({\gamma }_0t)=\sum _{i=0}^{\infty }\theta _i t^i. \end{aligned}$$

We then have

$$\begin{aligned} \mathrm{ord}\, \theta _i\ge \frac{i}{p-1}. \end{aligned}$$
(3.1)
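Since \(\theta (t)=\mathrm{AH}(\gamma _0t)\) and \(\mathrm{ord}\,\gamma _0=1/(p-1)\), the estimate (3.1) is equivalent to the classical p-integrality of the Artin-Hasse coefficients. That integrality can be checked numerically; the sketch below (ours) computes the coefficients exactly with rational arithmetic.

```python
from fractions import Fraction

def artin_hasse_coeffs(p, nterms):
    """Taylor coefficients of AH(t) = exp(sum_{i>=0} t^{p^i} / p^i)
    through degree nterms, in exact rational arithmetic."""
    g = [Fraction(0)] * (nterms + 1)          # the exponent sum, truncated
    q, w = 1, Fraction(1)
    while q <= nterms:
        g[q] = w
        q *= p
        w /= p
    # exp via the ODE f' = g' f:  (k+1) f_{k+1} = sum_j (j+1) g_{j+1} f_{k-j}
    f = [Fraction(0)] * (nterms + 1)
    f[0] = Fraction(1)
    for k in range(nterms):
        s = sum((j + 1) * g[j + 1] * f[k - j] for j in range(k + 1))
        f[k + 1] = s / (k + 1)
    return f

# The Artin-Hasse coefficients are p-integral, which gives (3.1).
for p in (2, 3, 5):
    assert all(c.denominator % p != 0 for c in artin_hasse_coeffs(p, 24))
```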

We define \({\hat{\theta }}(t) = \prod _{j=0}^\infty \theta (t^{p^j})\), which gives \(\theta (t) = {\hat{\theta }}(t)/{\hat{\theta }}(t^p)\). If we set

$$\begin{aligned} \gamma _j = \sum _{i=0}^j \frac{\gamma _0^{p^i}}{p^i}, \end{aligned}$$
(3.2)

then

$$\begin{aligned} {\hat{\theta }}(t) = \exp \bigg (\sum _{j=0}^{\infty } \gamma _j t^{p^j}\bigg ) = \prod _{j=0}^\infty \exp (\gamma _j t^{p^j}). \end{aligned}$$
(3.3)

If we write \({\hat{\theta }}(t) = \sum _{i=0}^\infty {\hat{\theta }}_i(\gamma _0 t)^i/i!\), then by [3, Equation (3.8)] we have

$$\begin{aligned} \mathrm{ord}\,{\hat{\theta }}_i\ge 0. \end{aligned}$$
(3.4)

We shall also need the series

$$\begin{aligned} {\hat{\theta }}_1(t) := \prod _{j=1}^\infty \exp (\gamma _jt^{p^j}) =: \sum _{i=0}^\infty \frac{{\hat{\theta }}_{1,i}}{i!}(\gamma _0 t)^i. \end{aligned}$$
(3.5)

Note that \({\hat{\theta }}(t) = \exp (\gamma _0t){\hat{\theta }}_1(t)\). By [3, Equation (3.10)]

$$\begin{aligned} \mathrm{ord}\,{\hat{\theta }}_{1,i}\ge \frac{i(p-1)}{p}. \end{aligned}$$
(3.6)

Define the series \({\hat{\theta }}_1(\varLambda ,x)\) by the formula

$$\begin{aligned} {\hat{\theta }}_1(\varLambda ,x) = \prod _{k=1}^N {\hat{\theta }}_1(\varLambda _kx^{\mathbf{a}_k}). \end{aligned}$$
(3.7)

Put \({{\mathbb {N}}}A = \{\sum _{j=1}^N l_j\mathbf{a}_j\mid l_j\in {{\mathbb {N}}}\}\). Expanding the product (3.7) according to powers of x we get

$$\begin{aligned} {\hat{\theta }}_1(\varLambda ,x) = \sum _{u=(u_0,\dots ,u_{n+1})\in {{\mathbb {N}}}A} {\hat{\theta }}_{1,u}(\varLambda )\gamma _0^{u_{n+1}} x^u, \end{aligned}$$
(3.8)

where

$$\begin{aligned} {\hat{\theta }}_{1,u}(\varLambda ) = \sum _{\begin{array}{c} k_1,\dots ,k_N\in {{\mathbb {N}}}\\ \sum _{j=1}^N k_j\mathbf{a}_j = u \end{array}} \bigg (\prod _{j=1}^N \frac{{\hat{\theta }}_{1,k_j}}{k_j!}\bigg )\varLambda _1^{k_1}\cdots \varLambda _N^{k_N}. \end{aligned}$$
(3.9)

We have similar results for the reciprocal power series

$$\begin{aligned} {\hat{\theta }}_1(t)^{-1} = \prod _{j=1}^\infty \exp (-\gamma _jt^{p^j}). \end{aligned}$$

If we write

$$\begin{aligned} {\hat{\theta }}_1(t)^{-1} = \sum _{i=0}^\infty \frac{{\hat{\theta }}'_{1,i}}{i!}(\gamma _0 t)^i, \end{aligned}$$
(3.10)

then the coefficients satisfy

$$\begin{aligned} \mathrm{ord}\, {\hat{\theta }}'_{1,i}\ge \frac{i(p-1)}{p}. \end{aligned}$$
(3.11)

We also have

$$\begin{aligned} {\hat{\theta }}_1(\varLambda ,x)^{-1} = \prod _{k=1}^N {\hat{\theta }}_1(\varLambda _kx^{\mathbf{a}_k})^{-1}, \end{aligned}$$
(3.12)

which we again expand in powers of x as

$$\begin{aligned} {\hat{\theta }}_1(\varLambda ,x)^{-1} = \sum _{u=(u_0,\dots ,u_{n+1})\in {{\mathbb {N}}}A} {\hat{\theta }}'_{1,u}(\varLambda )\gamma _0^{u_{n+1}} x^u \end{aligned}$$
(3.13)

with

$$\begin{aligned} {\hat{\theta }}'_{1,u}(\varLambda ) = \sum _{\begin{array}{c} k_1,\dots ,k_N\in {{\mathbb {N}}}\\ \sum _{j=1}^N k_j\mathbf{a}_j = u \end{array}} \bigg (\prod _{j=1}^N \frac{{\hat{\theta }}'_{1,k_j}}{k_j!}\bigg ) \varLambda _1^{k_1}\cdots \varLambda _N^{k_N}. \end{aligned}$$
(3.14)

We also define

$$\begin{aligned} \theta (\varLambda ,x) = \prod _{k=1}^N \theta (\varLambda _kx^{\mathbf{a}_k}). \end{aligned}$$
(3.15)

Expanding the right-hand side in powers of x, we have

$$\begin{aligned} \theta (\varLambda ,x) = \sum _{u\in {{\mathbb {N}}}A} \theta _u(\varLambda )x^u, \end{aligned}$$
(3.16)

where

$$\begin{aligned} \theta _u(\varLambda ) = \sum _{\nu \in {{\mathbb {N}}}^N} \theta ^{(u)}_\nu \varLambda ^\nu \end{aligned}$$
(3.17)

and

$$\begin{aligned} \theta ^{(u)}_\nu = {\left\{ \begin{array}{ll} \prod _{k=1}^N \theta _{\nu _k} &{} \text {if }\, \sum _{k=1}^N \nu _k\mathbf{a}_k = u, \\ 0 &{} \text {if }\, \sum _{k=1}^N \nu _k\mathbf{a}_k \ne u, \end{array}\right. } \end{aligned}$$
(3.18)

so \(\theta _u(\varLambda )\) is homogeneous of degree u. The equation \(\sum _{k=1}^N \nu _k\mathbf{a}_k = u\) has only finitely many solutions \(\nu \in {{\mathbb {N}}}^N\), so \(\theta _u(\varLambda )\) is a polynomial in the \(\varLambda _k\). Equations (3.1) and (3.18) show that

$$\begin{aligned} \mathrm{ord}\,\theta ^{(u)}_\nu \ge \frac{\sum _{j=1}^N \nu _j}{p-1} = \frac{u_{n+1}}{p-1}. \end{aligned}$$
(3.19)

4 The Dwork-Frobenius operator

In this section we define the spaces S and \(S'\) and Dwork’s Frobenius operator on them.

Note that the abelian group \({{\mathscr {M}}}\) generated by A lies in the hyperplane \(\sum _{i=0}^n u_i = du_{n+1}\) in \({{\mathbb {R}}}^{n+2}\). Set \({{\mathscr {M}}}_- = {{\mathscr {M}}}\cap ({{\mathbb {Z}}}_{<0})^{n+2}\). We denote by \(\delta _-\) the truncation operator on formal Laurent series in variables \(x_0,\dots ,x_{n+1}\) that preserves only those terms having all exponents negative:

$$\begin{aligned} \delta _-\bigg (\sum _{k\in {{\mathbb {Z}}}^{n+2}}c_kx^k\bigg ) = \sum _{k\in ({{\mathbb {Z}}}_{<0})^{n+2}} c_kx^k. \end{aligned}$$

We use the same notation for formal Laurent series in a single variable t:

$$\begin{aligned} \delta _-\bigg (\sum _{k=-\infty }^\infty c_kt^k\bigg ) = \sum _{k=-\infty }^{-1} c_kt^k. \end{aligned}$$

It is straightforward to check that if \(\xi _1\) and \(\xi _2\) are two series for which the product \(\xi _1\xi _2\) is defined and if no monomial in \(\xi _2\) has a negative exponent, then

$$\begin{aligned} \delta _-\big (\delta _-(\xi _1)\xi _2\big ) = \delta _-(\xi _1\xi _2). \end{aligned}$$
(4.1)
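The identity (4.1) can be tested directly on sparse Laurent polynomials stored as dictionaries from exponent tuples to coefficients. The following sketch (ours) illustrates it in two variables; the key point, as in the text, is that a term of \(\xi _1\) with some nonnegative exponent keeps that exponent nonnegative after multiplication by \(\xi _2\).

```python
def delta_minus(xi):
    """Keep only the terms whose exponents are all negative."""
    return {e: c for e, c in xi.items() if all(ei < 0 for ei in e)}

def multiply(xi1, xi2):
    """Product of sparse Laurent polynomials {exponent_tuple: coefficient}."""
    out = {}
    for e1, c1 in xi1.items():
        for e2, c2 in xi2.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

# xi2 has no negative exponents, so a term of xi1 discarded by delta_minus
# (one with some exponent >= 0) contributes nothing all-negative to the
# product, and (4.1) holds.
xi1 = {(-2, -1): 3, (-1, 2): 5, (0, -3): 7, (1, 1): 2}
xi2 = {(0, 0): 1, (1, 0): 4, (2, 3): -2}
assert delta_minus(multiply(delta_minus(xi1), xi2)) == delta_minus(multiply(xi1, xi2))
```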

Define S to be the \({{\mathbb {C}}}_p\)-vector space of formal series

$$\begin{aligned} S = \bigg \{\xi (\varLambda ,x) = \sum _{u\in {{\mathscr {M}}}_-} \xi _{u}(\varLambda ) \gamma _0^{u_{n+1}}x^{u} \mid \xi _{u}(\varLambda )\in {R}_{u}\, \text { and } \{|\xi _{u}|\}_u \,\text { is bounded}\bigg \}. \end{aligned}$$

Let \(S'\) be defined analogously with the condition “\(\xi _{u}(\varLambda )\in {R}_{u}\)” being replaced by “\(\xi _{u}(\varLambda )\in {R}_{u}'\)”. Define a norm on S by setting

$$\begin{aligned} |\xi (\varLambda ,x)|= \sup _{u\in {{\mathscr {M}}}_-}\{|\xi _{u}|\}. \end{aligned}$$

Both S and \(S'\) are complete under this norm.

Let

$$\begin{aligned} \xi (\varLambda ,x) = \sum _{\nu \in {{\mathscr {M}}}_-} \xi _\nu (\varLambda ) \gamma _0^{\nu _{n+1}}x^\nu \in S. \end{aligned}$$

We show that the product \(\theta (\varLambda ,x) \xi (\varLambda ^p,x^p)\) is well-defined as a formal series in x. We have formally

$$\begin{aligned} \theta (\varLambda ,x) \xi (\varLambda ^p,x^p) = \sum _{\rho \in {{\mathscr {M}}}} \zeta _\rho (\varLambda )x^\rho , \end{aligned}$$

where

$$\begin{aligned} \zeta _\rho (\varLambda ) = \sum _{\begin{array}{c} u\in {{\mathbb {N}}}A,\nu \in {{\mathscr {M}}}_-\\ u+p\nu = \rho \end{array}} \gamma _0^{\nu _{n+1}}\theta _u(\varLambda )\xi _\nu (\varLambda ^p). \end{aligned}$$
(4.2)

Since \(\theta _u(\varLambda )\) is a polynomial, the product \(\theta _u(\varLambda )\xi _\nu (\varLambda ^p)\) is a well-defined element of \({R}_{\rho }\). It follows from (3.17), (3.19), and the equality \(u+p\nu =\rho\) that the coefficients of \(\gamma _0^{\nu _{n+1}}\theta _u(\varLambda )\) all have p-ordinal at least \(\big (\rho _{n+1}/(p-1)\big ) - \nu _{n+1}\). Since \(|\xi _\nu (\varLambda )|\) is bounded independently of \(\nu\) and there are only finitely many terms on the right-hand side of (4.2) with a given value of \(\nu _{n+1}\), the series (4.2) converges to an element of \({R}_\rho\) (because \(-\nu _{n+1}\rightarrow \infty\) as \(\nu \rightarrow \infty\)). This estimate also shows that if \(\xi (\varLambda ,x)\in S'\), then \(\zeta _\rho (\varLambda )\in {R}'_\rho\).

Define for \(\xi (\varLambda ,x)\in S\)

$$\begin{aligned} \alpha ^*\big (\xi (\varLambda ,x)\big )&= \delta _-\big (\theta (\varLambda ,x)\xi (\varLambda ^p,x^p)\big ) \\&= \sum _{\rho \in {{\mathscr {M}}}_-} \zeta _{\rho }(\varLambda )x^{\rho }. \end{aligned}$$

For \(\rho \in {{\mathscr {M}}}_-\), put \(\eta _{\rho }(\varLambda ) = \gamma _0^{-\rho _{n+1}}\zeta _{\rho }(\varLambda )\), so that

$$\begin{aligned} \alpha ^*\big (\xi (\varLambda ,x)\big ) = \sum _{\rho \in {{\mathscr {M}}}_-} \eta _{\rho }(\varLambda )\gamma _0^{\rho _{n+1}}x^{\rho } \end{aligned}$$
(4.3)

with (by (4.2))

$$\begin{aligned} \eta _{\rho }(\varLambda ) = \sum _{\begin{array}{c} u\in {{\mathbb {N}}}A,\,\nu \in {{\mathscr {M}}}_-\\ u+p\nu =\rho \end{array}}\gamma _0^{-\rho _{n+1}+\nu _{n+1}}\theta _u(\varLambda )\xi _{\nu }(\varLambda ^p). \end{aligned}$$
(4.4)

Proposition 4.1

The map \(\alpha ^*\) is an endomorphism of S and of \(S'\), and for \(\xi (\varLambda ,x)\in S\) we have

$$\begin{aligned} |\alpha ^*\big (\xi (\varLambda ,x)\big )|\le |p\xi (\varLambda ,x)|. \end{aligned}$$
(4.5)

Proof

By (4.3), the proposition will follow from the estimate

$$\begin{aligned} |\eta _{\rho }(\varLambda )|\le |p\xi (\varLambda ,x)|\quad \text {for all }\, \rho \in {{\mathscr {M}}}_-. \end{aligned}$$

Using (4.4), we see that this estimate will follow in turn from the estimate

$$\begin{aligned} |\gamma _0^{-\rho _{n+1}+\nu _{n+1}}\theta _u(\varLambda )|\le |p|\end{aligned}$$

for all \(u\in {{\mathbb {N}}}A\), \(\nu \in {{\mathscr {M}}}_-\), with \(u+p\nu = \rho\). From (3.17) and (3.19) we see that all coefficients of \(\gamma _0^{-\rho _{n+1}+\nu _{n+1}}\theta _u(\varLambda )\) have p-ordinal greater than or equal to

$$\begin{aligned} \frac{-\rho _{n+1}+\nu _{n+1}+u_{n+1}}{p-1}. \end{aligned}$$

Since \(u+p\nu =\rho\) gives \(u_{n+1} = \rho _{n+1}-p\nu _{n+1}\), the numerator equals \(-(p-1)\nu _{n+1}\) and this expression simplifies to \(-\nu _{n+1}\), so

$$\begin{aligned} |\gamma _0^{-\rho _{n+1}+\nu _{n+1}}\theta _u(\varLambda )|\le |p|^{-\nu _{n+1}} \end{aligned}$$
(4.6)

and \(-\nu _{n+1}\ge 1\) since \(\nu \in {{\mathscr {M}}}_-\). \(\square\)

Note that the equality \(-\nu _{n+1} = 1\) holds for exactly M points \(\nu \in {{\mathscr {M}}}_-\), namely, \(\nu = -\mathbf{a}_1,\dots ,-\mathbf{a}_M\). The following corollary is then an immediate consequence of the proof of Proposition 4.1.

Corollary 4.1

If \(\xi _{\nu }(\varLambda ) = 0\) for \(\nu = -\mathbf{a}_1,\dots ,-\mathbf{a}_M\), then \(|\alpha ^*(\xi (\varLambda ,x))|\le |p^{2}\xi (\varLambda ,x)|\).

5 A technical lemma

The action of \(\alpha ^*\) is extended to \(S^M\) componentwise: if

$$\begin{aligned} \xi (\varLambda ,x) = (\xi ^{(1)}(\varLambda ,x),\dots ,\xi ^{(M)}(\varLambda ,x))\in S^M, \end{aligned}$$
(5.1)

then

$$\begin{aligned} \alpha ^*\big (\xi (\varLambda ,x)\big ) = \big (\alpha ^*\big (\xi ^{(1)}(\varLambda ,x)\big ),\dots ,\alpha ^*\big (\xi ^{(M)}(\varLambda ,x)\big )\big ). \end{aligned}$$
(5.2)

Let \(\xi (\varLambda ,x)\) be as in (5.1) with

$$\begin{aligned} \xi ^{(i)}(\varLambda ,x) = \sum _{\nu \in {{\mathscr {M}}}_-} \xi _\nu ^{(i)}(\varLambda ) \gamma _0^{\nu _{n+1}}x^\nu . \end{aligned}$$
(5.3)

Since \(\xi ^{(i)}_{-\mathbf{a}_j}(\varLambda )\in R_{-\mathbf{a}_j}\), the matrix

$$\begin{aligned} {{\mathbb {M}}}\big (\xi (\varLambda ,x)\big ):= \big ( \varLambda _j\xi ^{(i)}_{-\mathbf{a}_j}(\varLambda )\big )_{i,j=1}^M \end{aligned}$$

has entries in \(R_\mathbf{0}\). If \(\xi (\varLambda ,x)\in (S')^M\), then \({{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )\) has entries in \(R'_\mathbf{0}\). The map \(\xi (\varLambda ,x)\mapsto {{\mathbb {M}}}(\xi (\varLambda ,x))\) is a \({{\mathbb {C}}}_p\)-linear map from \(S^M\) to the \({{\mathbb {C}}}_p\)-vector space of \((M\times M)\)-matrices with entries in \(R_\mathbf{0}\). Note that if \(Y(\varLambda )\) is an \((M\times M)\)-matrix with entries in \(R_\mathbf{0}\), then

$$\begin{aligned} {{\mathbb {M}}}\big (Y(\varLambda )\xi (\varLambda ,x)\big ) = Y(\varLambda ) {{\mathbb {M}}}\big (\xi (\varLambda ,x)\big ), \end{aligned}$$
(5.4)

where we regard \(\xi (\varLambda ,x)\) as a column vector for the purpose of matrix multiplication. In particular, if \({{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )\) is an invertible matrix, then

$$\begin{aligned} {{\mathbb {M}}}\bigg ( {{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )^{-1}\xi (\varLambda ,x)\bigg ) = I. \end{aligned}$$
(5.5)

We extend the norm on S to \(S^M\) and the norm on R to \(\mathrm{Mat}_M(R)\), the \((M\times M)\)-matrices with entries in R. For \(\xi (\varLambda ,x)\) as in (5.1), we define

$$\begin{aligned} |\xi (\varLambda ,x) |= \max \{|\xi ^{(i)}(\varLambda ,x) |\}_{i=1}^M \end{aligned}$$

and for \(Y(\varLambda ) = \big (Y_{ij}(\varLambda )\big )_{i,j=1}^M\) with \(Y_{ij}(\varLambda )\in R\) we define

$$\begin{aligned} |Y(\varLambda )|= \max \{|Y_{ij}(\varLambda )|\}_{i,j=1}^M. \end{aligned}$$

Note that the matrix \({{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )\) has an inverse with entries in \(R_\mathbf{0}\) if and only if \(\det {{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )\) is an invertible element of \(R_\mathbf{0}\). Likewise, if \(\xi (\varLambda ,x)\in (S')^M\), this matrix has an inverse with entries in \(R'_\mathbf{0}\) if and only if its determinant is an invertible element of \(R'_\mathbf{0}\). The main result of this section is the following assertion.

Lemma 5.1

Let \(\xi (\varLambda ,x)\in S^M\) with \({{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )\) invertible, \(\big |{{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )\big |= |\xi (\varLambda ,x)|\), and \(\big |\det {{\mathbb {M}}}\big (\xi (\varLambda ,x)\big )\big |= |\xi (\varLambda ,x)|^M\). Then \({{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )\) is invertible,

$$\begin{aligned} \big |{{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )\big |= \big |\alpha ^*\big (\xi (\varLambda ,x)\big )\big |= |p\xi (\varLambda ,x)|, \end{aligned}$$
(5.6)

and

$$\begin{aligned} \big |\det {{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )\big |= |p\xi (\varLambda ,x)|^M. \end{aligned}$$
(5.7)

Remark

To prove Lemma 5.1, it suffices to show that \({{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )\) is invertible and that (5.7) holds: from Proposition 4.1 we get

$$\begin{aligned} \big |{{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )\big |\le \big |\alpha ^*\big (\xi (\varLambda ,x)\big )\big |\le |p\xi (\varLambda ,x)|. \end{aligned}$$

If any of these inequalities were strict, the usual formula for the determinant of a matrix in terms of its entries would imply that

$$\begin{aligned} \big |\det {{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )\big |< |p\xi (\varLambda ,x)|^M. \end{aligned}$$

This would contradict (5.7), so all the inequalities above must be equalities, proving (5.6). Similar reasoning applies to the inverse of \({{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )\). By (5.7) we have

$$\begin{aligned} \big |\det {{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )^{-1}\big |= |p\xi (\varLambda ,x)|^{-M}. \end{aligned}$$
(5.8)

The usual formula for the inverse of a matrix in terms of its cofactors implies that

$$\begin{aligned} \big |{{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )^{-1}\big |\le \frac{1}{|p\xi (\varLambda ,x)|}. \end{aligned}$$
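
In more detail, write \(A = {{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )\). Each entry of the adjugate \(\mathrm{adj}(A)\) is, up to sign, a sum of products of \(M-1\) entries of A, so the ultrametric inequality together with \(|A|\le |p\xi (\varLambda ,x)|\) and (5.7) gives

$$\begin{aligned} |A^{-1}|= \frac{|\mathrm{adj}(A)|}{|\det A|}\le \frac{|A|^{M-1}}{|\det A|}\le \frac{|p\xi (\varLambda ,x)|^{M-1}}{|p\xi (\varLambda ,x)|^{M}} = \frac{1}{|p\xi (\varLambda ,x)|}. \end{aligned}$$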

But if this inequality were strict, then \(\big |\det {{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )^{-1}\big |\le \big |{{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )^{-1}\big |^M < |p\xi (\varLambda ,x)|^{-M}\), contradicting (5.8). We therefore have the following corollary.

Corollary 5.1

Under the hypotheses of Lemma 5.1,

$$\begin{aligned} \big |{{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )^{-1}\big |=\frac{1}{|p\xi (\varLambda ,x)|}. \end{aligned}$$

The proof of Lemma 5.1 will require several steps. We first consider a matrix constructed from \(\theta (\varLambda ,x)\) (Equation (3.16)):

$$\begin{aligned} {{\mathbb {M}}}_\theta (\varLambda ) = \big ( \theta _{p\mathbf{a}_i-\mathbf{a}_j}(\varLambda )\big )_{i,j=1}^M. \end{aligned}$$

From (3.17) and (3.18) we have explicit formulas for these matrix entries:

$$\begin{aligned} \theta _{p\mathbf{a}_i-\mathbf{a}_j}(\varLambda ) = \sum _{\begin{array}{c} \nu \in {{\mathbb {N}}}^N\\ \sum _{k=1}^N \nu _k\mathbf{a}_k = p\mathbf{a}_i-\mathbf{a}_j \end{array}} \bigg (\prod _{k=1}^N \theta _{\nu _k}\bigg ) \varLambda ^\nu . \end{aligned}$$

Furthermore, since the last coordinate of \(p\mathbf{a}_i-\mathbf{a}_j\) equals \(p-1\), each \(\nu _k\) in this summation is \(\le p-1\), so from their definition \(\theta _{\nu _k} = \gamma _0^{\nu _k}/\nu _k!\). Since \(\sum _{k=1}^N \nu _k = p-1\), we have

$$\begin{aligned} \theta _{p\mathbf{a}_i-\mathbf{a}_j}(\varLambda ) = \sum _{\begin{array}{c} \nu \in {{\mathbb {N}}}^N\\ \sum _{k=1}^N \nu _k\mathbf{a}_k = p\mathbf{a}_i-\mathbf{a}_j \end{array}} \frac{\gamma _0^{p-1} \varLambda ^\nu }{\nu _1!\cdots \nu _N!}. \end{aligned}$$

Thus

$$\begin{aligned} {{\mathbb {M}}}_\theta (\varLambda ) = \gamma _0^{p-1}H(\varLambda ), \end{aligned}$$
(5.9)

where \(H(\varLambda )\) is given in Equation (1.3), and

$$\begin{aligned} C(\varLambda )^{-p} {{\mathbb {M}}}_\theta (\varLambda ) C(\varLambda ) = \gamma _0^{p-1}B(\varLambda ), \end{aligned}$$
(5.10)

where \(B(\varLambda )\) is given in Equation (1.4). It now follows from the discussion in Sect. 1 that \(C(\varLambda )^{-p}{{\mathbb {M}}}_\theta (\varLambda ) C(\varLambda )\) is an invertible matrix with entries in \(R'_\mathbf{0}\). Furthermore, we have

$$\begin{aligned} \det C(\varLambda )^{-p} {{\mathbb {M}}}_\theta (\varLambda ) C(\varLambda ) = \gamma _0^{(p-1)M} D(\varLambda ). \end{aligned}$$
(5.11)

Since \(|D(\varLambda )|= 1\) (\(D(\varLambda )\) has constant term 1 and integral coefficients) and \(|\gamma _0^{p-1}|= |p|\), we get

$$\begin{aligned} |\det C(\varLambda )^{-p} {{\mathbb {M}}}_\theta (\varLambda ) C(\varLambda ) |= |p^M|. \end{aligned}$$
(5.12)

With \(\xi (\varLambda ,x)\) as in (5.1), we rewrite (5.2) as

$$\begin{aligned} \alpha ^*\big (\xi (\varLambda ,x)\big ) = \big ( \eta ^{(1)}(\varLambda ,x),\dots ,\eta ^{(M)}(\varLambda ,x)\big ), \end{aligned}$$

where (see (4.3) and (4.4))

$$\begin{aligned} \eta ^{(i)}(\varLambda ,x) = \sum _{\rho \in {{\mathscr {M}}}_-} \eta ^{(i)}_{\rho }(\varLambda )\gamma _0^{\rho _{n+1}}x^{\rho } \end{aligned}$$
(5.13)

with

$$\begin{aligned} \eta ^{(i)}_{\rho }(\varLambda ) = \sum _{\begin{array}{c} u\in {{\mathbb {N}}}A,\,\nu \in {{\mathscr {M}}}_-\\ u+p\nu =\rho \end{array}}\gamma _0^{-\rho _{n+1}+\nu _{n+1}}\theta _u(\varLambda )\xi ^{(i)}_{\nu }(\varLambda ^p). \end{aligned}$$
(5.14)

We then have

$$\begin{aligned} {{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big ) = \big ( \varLambda _j\eta ^{(i)}_{-\mathbf{a}_j}(\varLambda )\big )_{i,j=1}^M. \end{aligned}$$
(5.15)

If \(\nu \ne -\mathbf{a}_1,\dots ,-\mathbf{a}_M\), then \(\nu _{n+1}\le -2\), so by (5.14) we may write

$$\begin{aligned} \eta ^{(i)}_{-\mathbf{a}_j}(\varLambda ) = \sum _{k=1}^M \theta _{p\mathbf{a}_k-\mathbf{a}_j}(\varLambda )\xi ^{(i)}_{-\mathbf{a}_k}(\varLambda ^p) + \sum _{\begin{array}{c} u\in {{\mathbb {N}}}A,\,\nu \in {{\mathscr {M}}}_-\\ u+p\nu =-\mathbf{a}_j\\ \nu _{n+1}\le -2 \end{array}}\gamma _0^{1+\nu _{n+1}}\theta _u(\varLambda )\xi ^{(i)}_{\nu }(\varLambda ^p). \end{aligned}$$
(5.16)

We write \({{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big ) = {{\mathbb {M}}}^{(1)}(\varLambda ) + {{\mathbb {M}}}^{(2)}(\varLambda )\), where

$$\begin{aligned} {{\mathbb {M}}}^{(1)}_{ij}(\varLambda ) = \varLambda _j\sum _{k=1}^M \theta _{p\mathbf{a}_k-\mathbf{a}_j}(\varLambda )\xi ^{(i)}_{-\mathbf{a}_k}(\varLambda ^p) \end{aligned}$$
(5.17)

and

$$\begin{aligned} {{\mathbb {M}}}^{(2)}_{ij}(\varLambda ) = \varLambda _j\sum _{\begin{array}{c} u\in {{\mathbb {N}}}A,\,\nu \in {{\mathscr {M}}}_-\\ u+p\nu =-\mathbf{a}_j\\ \nu _{n+1}\le -2 \end{array}}\gamma _0^{1+\nu _{n+1}}\theta _u(\varLambda )\xi ^{(i)}_{\nu }(\varLambda ^p). \end{aligned}$$
(5.18)

It follows from (5.17), (5.18), and a short calculation that

$$\begin{aligned} {{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big ) = {{\mathbb {M}}}\big (\xi (\varLambda ^p,x)\big )C(\varLambda )^{-p} {{\mathbb {M}}}_\theta (\varLambda )C(\varLambda ) + {{\mathbb {M}}}^{(2)}(\varLambda ). \end{aligned}$$
(5.19)
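
For the reader's convenience, here is the short calculation behind (5.19), with \(C(\varLambda )\) the diagonal matrix \(\mathrm{diag}(\varLambda _1,\dots ,\varLambda _M)\) of Sect. 1. The (i, j) entry of \({{\mathbb {M}}}\big (\xi (\varLambda ^p,x)\big )C(\varLambda )^{-p}{{\mathbb {M}}}_\theta (\varLambda )C(\varLambda )\) is

$$\begin{aligned} \sum _{k=1}^M \varLambda _k^p\,\xi ^{(i)}_{-\mathbf{a}_k}(\varLambda ^p)\cdot \varLambda _k^{-p}\,\theta _{p\mathbf{a}_k-\mathbf{a}_j}(\varLambda )\cdot \varLambda _j = \varLambda _j\sum _{k=1}^M \theta _{p\mathbf{a}_k-\mathbf{a}_j}(\varLambda )\,\xi ^{(i)}_{-\mathbf{a}_k}(\varLambda ^p) = {{\mathbb {M}}}^{(1)}_{ij}(\varLambda ) \end{aligned}$$

by (5.17), which together with (5.16) and (5.18) gives (5.19).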

Proof of Lemma 5.1

For notational convenience set

$$\begin{aligned} \widetilde{{\mathbb {M}}}(\varLambda ) = {{\mathbb {M}}}\big (\xi (\varLambda ^p,x)\big )C(\varLambda )^{-p} {{\mathbb {M}}}_\theta (\varLambda )C(\varLambda ) \end{aligned}$$

so that (5.19) can be rewritten as

$$\begin{aligned} {{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big ) = \widetilde{{\mathbb {M}}}(\varLambda ) + {{\mathbb {M}}}^{(2)}(\varLambda ). \end{aligned}$$
(5.20)

From (5.10) we get

$$\begin{aligned} |\widetilde{{\mathbb {M}}}(\varLambda ) |\le |p{{\mathbb {M}}}(\xi (\varLambda ^p,x))|= |p{{\mathbb {M}}}(\xi (\varLambda ,x))|\end{aligned}$$

and from (5.11)

$$\begin{aligned} |\det \widetilde{{\mathbb {M}}}(\varLambda ) |= |p^M\det {{\mathbb {M}}}(\xi (\varLambda ^p,x))|= |p^M\det {{\mathbb {M}}}(\xi (\varLambda ,x))|. \end{aligned}$$

Applying our hypotheses gives

$$\begin{aligned} |\widetilde{{\mathbb {M}}}(\varLambda ) |\le |p\xi (\varLambda ,x)|\end{aligned}$$
(5.21)

and

$$\begin{aligned} |\det \widetilde{{\mathbb {M}}}(\varLambda ) |= |p \xi (\varLambda ,x)|^M. \end{aligned}$$
(5.22)

Since the equality in (5.22) would fail if the inequality in (5.21) were strict, we must have

$$\begin{aligned} |\widetilde{{\mathbb {M}}}(\varLambda ) |= |p\xi (\varLambda ,x)|. \end{aligned}$$
(5.23)

The matrix \(\widetilde{{\mathbb {M}}}(\varLambda )\) is invertible by (5.10) and our hypotheses, and (5.22) implies

$$\begin{aligned} |\det \widetilde{{\mathbb {M}}}(\varLambda )^{-1}|= |p \xi (\varLambda ,x)|^{-M}. \end{aligned}$$
(5.24)

Arguing as in the derivation of Corollary 5.1 from (5.8) then gives

$$\begin{aligned} |\widetilde{{\mathbb {M}}}(\varLambda )^{-1} |= \frac{1}{|p\xi (\varLambda ,x)|}. \end{aligned}$$
(5.25)

Rewrite (5.20) as

$$\begin{aligned} {{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big ) = \widetilde{{\mathbb {M}}}(\varLambda )\big ( I + \widetilde{{\mathbb {M}}}(\varLambda )^{-1}{{\mathbb {M}}}^{(2)}(\varLambda )\big ). \end{aligned}$$
(5.26)

Estimate (4.6) implies that

$$\begin{aligned} |{{\mathbb {M}}}^{(2)}(\varLambda )|\le |p^2\xi (\varLambda ,x)|, \end{aligned}$$
(5.27)

so by (5.25)

$$\begin{aligned} |\widetilde{{\mathbb {M}}}(\varLambda )^{-1}{{\mathbb {M}}}^{(2)}(\varLambda )|\le |p|. \end{aligned}$$
(5.28)

It follows from (5.28) that \(I + \widetilde{{\mathbb {M}}}(\varLambda )^{-1}{{\mathbb {M}}}^{(2)}(\varLambda )\) is invertible, so by (5.26) the invertibility of \(\widetilde{{\mathbb {M}}}(\varLambda )\) implies the invertibility of \({{\mathbb {M}}}\big (\alpha ^*(\xi (\varLambda ,x))\big )\). Estimate (5.28) implies that

$$\begin{aligned} \big |\det \big (I + \widetilde{{\mathbb {M}}}(\varLambda )^{-1}{{\mathbb {M}}}^{(2)}(\varLambda )\big ) \big |= 1, \end{aligned}$$
(5.29)

so (5.7) follows from (5.26) and (5.22). \(\square\)
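
For completeness, the invertibility of \(I + \widetilde{{\mathbb {M}}}(\varLambda )^{-1}{{\mathbb {M}}}^{(2)}(\varLambda )\) and the determinant estimate (5.29) are instances of a standard ultrametric fact: if X is a square matrix with \(|X|\le |p|<1\), then the Neumann series

$$\begin{aligned} (I+X)^{-1} = \sum _{k=0}^\infty (-X)^k \end{aligned}$$

converges because \(|X^k|\le |X|^k\rightarrow 0\), and every term other than 1 in the expansion of \(\det (I+X)\) has norm at most \(|X|\), so \(|\det (I+X)|= 1\).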

6 Contraction mapping

We use the Dwork-Frobenius operator to construct a contraction mapping on a subset of \(S^M\). Finding the fixed point of this contraction mapping will be the crucial step in proving the first assertion of Theorem 2.1.

Put

$$\begin{aligned} T = \{ \xi (\varLambda ,x)\in S^M\mid {{\mathbb {M}}}(\xi (\varLambda ,x)) = I\, \text { and }\, |\xi (\varLambda ,x)|= 1\} \end{aligned}$$

and put \(T' = T\cap (S')^M\). Note that T and \(T'\) are closed in the topology on \(S^M\). Elements of T satisfy the hypotheses of Lemma 5.1. By that result, if \(\xi (\varLambda ,x)\in T\), then \({{\mathbb {M}}}(\alpha ^*(\xi (\varLambda ,x)))\) is invertible, so we may define

$$\begin{aligned} \phi \big (\xi (\varLambda ,x)\big ) := {{\mathbb {M}}}(\alpha ^*(\xi (\varLambda ,x)))^{-1} \alpha ^*\big (\xi (\varLambda ,x)\big ), \end{aligned}$$

where we regard \(\alpha ^*(\xi (\varLambda ,x))\) on the right-hand side as a column vector for the purpose of matrix multiplication. By Equation (5.5) we have

$$\begin{aligned} {{\mathbb {M}}}\big (\phi (\xi (\varLambda ,x))\big ) = I. \end{aligned}$$
(6.1)

Equation (4.5) and Corollary 5.1 imply that \(|\phi (\xi (\varLambda ,x))|\le 1\), so in fact

$$\begin{aligned} |\phi (\xi (\varLambda ,x))|= 1 \end{aligned}$$
(6.2)

by (6.1). Equations (6.1) and (6.2) show that \(\phi (T)\subseteq T\) and \(\phi (T')\subseteq T'\).

Proposition 6.1

The operator \(\phi\) is a contraction mapping on T. More precisely, if \(\xi ^{(1)}(\varLambda ,x),\xi ^{(2)}(\varLambda ,x)\in T\), then

$$\begin{aligned} \big |\phi \big (\xi ^{(1)}(\varLambda ,x)\big ) - \phi \big (\xi ^{(2)}(\varLambda ,x)\big )\big |\le |p|\cdot |\xi ^{(1)}(\varLambda ,x) - \xi ^{(2)}(\varLambda ,x)|. \end{aligned}$$

Proof

For notational convenience we sometimes write \(\eta ^{(i)}(\varLambda ,x) = \alpha ^*(\xi ^{(i)}(\varLambda ,x))\) for \(i=1,2\). Then

$$\begin{aligned}&\phi \big (\xi ^{(1)}(\varLambda ,x)\big ) - \phi \big (\xi ^{(2)}(\varLambda ,x)) \nonumber \\&\quad ={{\mathbb {M}}}\big (\eta ^{(1)}(\varLambda ,x)\big )^{-1} \eta ^{(1)}(\varLambda ,x)-{{\mathbb {M}}}\big (\eta ^{(2)}(\varLambda ,x)\big )^{-1} \eta ^{(2)}(\varLambda ,x) \nonumber \\&\quad ={{\mathbb {M}}}\big (\eta ^{(1)}(\varLambda ,x)\big )^{-1}\bigg (\eta ^{(1)}(\varLambda ,x)-\eta ^{(2)}(\varLambda ,x)\bigg ) \nonumber \\&\qquad -{{\mathbb {M}}}\big (\eta ^{(1)}(\varLambda ,x)\big )^{-1}\bigg ({{\mathbb {M}}}\big (\eta ^{(1)}(\varLambda ,x) - \eta ^{(2)}(\varLambda ,x)\big )\bigg ) {{\mathbb {M}}}\big (\eta ^{(2)}(\varLambda ,x)\big )^{-1}\eta ^{(2)}(\varLambda ,x). \end{aligned}$$
(6.3)
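
The second equality in (6.3) is an instance of the identity

$$\begin{aligned} A^{-1}a - B^{-1}b = A^{-1}(a-b) - A^{-1}(A-B)B^{-1}b, \end{aligned}$$

valid for invertible matrices A, B and column vectors a, b, as one checks by expanding the right-hand side. It is applied with \(A = {{\mathbb {M}}}\big (\eta ^{(1)}(\varLambda ,x)\big )\), \(B = {{\mathbb {M}}}\big (\eta ^{(2)}(\varLambda ,x)\big )\), \(a = \eta ^{(1)}(\varLambda ,x)\), \(b = \eta ^{(2)}(\varLambda ,x)\), together with \(A-B = {{\mathbb {M}}}\big (\eta ^{(1)}(\varLambda ,x)-\eta ^{(2)}(\varLambda ,x)\big )\), which holds by the linearity of \({{\mathbb {M}}}\).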

The difference \(\xi ^{(1)}(\varLambda ,x)-\xi ^{(2)}(\varLambda ,x)\) satisfies the hypothesis of Corollary 4.1, so

$$\begin{aligned} \big |\alpha ^*\big (\xi ^{(1)}(\varLambda ,x)-\xi ^{(2)}(\varLambda ,x)\big )\big |\le |p^2|\cdot |\xi ^{(1)}(\varLambda ,x)-\xi ^{(2)}(\varLambda ,x)|. \end{aligned}$$
(6.4)

By Corollary 5.1 we have

$$\begin{aligned} |{{\mathbb {M}}}\big (\alpha ^*(\xi ^{(1)}(\varLambda ,x))\big )^{-1} |= |{{\mathbb {M}}}\big (\alpha ^*(\xi ^{(2)}(\varLambda ,x))\big )^{-1} |=1/|p|, \end{aligned}$$
(6.5)

so

$$\begin{aligned} \bigg |{{\mathbb {M}}}\big (\eta ^{(1)}(\varLambda ,x)\big )^{-1}\bigg (\eta ^{(1)}(\varLambda ,x)-\eta ^{(2)}(\varLambda ,x)\bigg )\bigg |\le |p|\cdot |\xi ^{(1)}(\varLambda ,x)-\xi ^{(2)}(\varLambda ,x)|. \end{aligned}$$
(6.6)

By Proposition 4.1 we have

$$\begin{aligned} \big |\alpha ^*\big (\xi ^{(2)}(\varLambda ,x)\big )\big |\le |p|, \end{aligned}$$

and by (6.4) we have

$$\begin{aligned} \big |{{\mathbb {M}}}\big (\alpha ^*\big (\xi ^{(1)}(\varLambda ,x) - \xi ^{(2)}(\varLambda ,x)\big )\big )\big |\le |p^2|\cdot |\xi ^{(1)}(\varLambda ,x)-\xi ^{(2)}(\varLambda ,x)|, \end{aligned}$$

so

$$\begin{aligned}&\bigg |{{\mathbb {M}}}\big (\eta ^{(1)}(\varLambda ,x)\big )^{-1}\bigg ({{\mathbb {M}}}\big (\eta ^{(1)}(\varLambda ,x) - \eta ^{(2)}(\varLambda ,x)\big )\bigg ) {{\mathbb {M}}}\big (\eta ^{(2)}(\varLambda ,x)\big )^{-1}\eta ^{(2)}(\varLambda ,x)\bigg |\nonumber \\&\quad \le |p|\cdot |\xi ^{(1)}(\varLambda ,x)-\xi ^{(2)}(\varLambda ,x)|. \end{aligned}$$
(6.7)

The assertion of the proposition now follows from (6.3), (6.6), and (6.7). \(\square\)

Since T is closed in the complete space \(S^M\), the contraction mapping theorem implies that \(\phi\) has a unique fixed point in T; and since \(\phi (T')\subseteq T'\) with \(T'\) closed, that fixed point lies in \(T'\). In the next section we discuss certain hypergeometric series and their p-adic relatives, which will be used to describe this fixed point explicitly.

7 A-hypergeometric series

The entries of the matrix \(F(\varLambda )\) are A-hypergeometric in nature. The purpose of this section is to recall their construction and to introduce some related series that satisfy better p-adic estimates.

Let \(L\subseteq {{\mathbb {Z}}}^{N}\) be the lattice of relations on the set A:

$$\begin{aligned} L= \bigg \{l=(l_1,\dots ,l_N)\in {{\mathbb {Z}}}^N\mid \sum _{j=1}^N l_j\mathbf{a}_j = \mathbf{0}\bigg \}. \end{aligned}$$

For each \(l=(l_1,\dots ,l_N)\in L\), we define a partial differential operator \(\Box _l\) in variables \(\{\varLambda _j\}_{j=1}^N\) by

$$\begin{aligned} \Box _l = \prod _{l_j>0} \bigg (\frac{\partial }{\partial \varLambda _j}\bigg )^{l_j} - \prod _{l_j<0} \bigg (\frac{\partial }{\partial \varLambda _j}\bigg )^{-l_j}. \end{aligned}$$
(7.1)

For \(\beta = (\beta _0,\beta _1,\dots ,\beta _{n+1})\in {{\mathbb {C}}}^{n+2}\), the corresponding Euler (or homogeneity) operators are defined by

$$\begin{aligned} Z_i = \sum _{j=1}^N a_{ij}\varLambda _j\frac{\partial }{\partial \varLambda _j} - \beta _i \end{aligned}$$
(7.2)

for \(i=0,\dots ,n+1\). The A-hypergeometric system with parameter \(\beta\) consists of Equations (7.1) for \(l\in L\) and (7.2) for \(i=0,1,\dots ,n+1\).

Consider the formal series

$$\begin{aligned} q(t) = \sum _{l=0}^\infty (-1)^l l!t^{-l-1}. \end{aligned}$$

Let \(i\in \{1,\dots ,M\}\). The A-hypergeometric series of interest arise as the coefficients of \(\gamma _0^{u_{n+1}}x^u\) in the expression

$$\begin{aligned} {F}^{(i)}(\varLambda ,x) := \delta _-\bigg (q(\gamma _0\varLambda _i x^{\mathbf{a}_i}) \prod _{\begin{array}{c} k=1\\ k\ne i \end{array}}^N \exp (\gamma _0\varLambda _kx^{\mathbf{a}_k})\bigg ). \end{aligned}$$
(7.3)

For \(u\in {{\mathscr {M}}}_-\), let \(F_u^{(i)}(\varLambda )\) be the coefficient of \(\gamma _0^{u_{n+1}}x^u\) in (7.3) and write

$$\begin{aligned} {F}^{(i)}(\varLambda ,x) = \sum _{u\in {{\mathscr {M}}}_-} F^{(i)}_u(\varLambda )\gamma _0^{u_{n+1}}x^u. \end{aligned}$$
(7.4)

If we set

$$\begin{aligned} L_{i,u} = \bigg \{ l=(l_1,\dots ,l_N)\in {{\mathbb {Z}}}^N\mid \sum _{k=1}^N l_k\mathbf{a}_k = u, l_i\le 0,\, \text { and }\, l_k\ge 0 \, \text {for}\, k\ne i \bigg \}, \end{aligned}$$

then from (7.3)

$$\begin{aligned} F_u^{(i)}(\varLambda ) = \sum _{l=(l_1,\dots ,l_N)\in L_{i,u}} \frac{(-1)^{-l_i-1}(-l_i-1)!}{\displaystyle \prod _{\begin{array}{c} k=1\\ k\ne i \end{array}}^N l_k!} \prod _{k=1}^N \varLambda _k^{l_k}. \end{aligned}$$
(7.5)

It follows from the definition of q(t) that for \(i=1,\dots ,M\)

$$\begin{aligned} \frac{\partial }{\partial \varLambda _i} q(\gamma _0\varLambda _ix^{\mathbf{a}_i}) = \gamma _0x^{\mathbf{a}_i}q(\gamma _0\varLambda _ix^{\mathbf{a}_i})-\frac{1}{\varLambda _i}. \end{aligned}$$
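
Indeed, differentiating the series defining q(t) term by term gives

$$\begin{aligned} q'(t) = \sum _{l=0}^\infty (-1)^l l!\,(-l-1)\,t^{-l-2} = \sum _{m=1}^\infty (-1)^m m!\,t^{-m-1} = q(t) - t^{-1}, \end{aligned}$$

and the chain rule then yields \(\frac{\partial }{\partial \varLambda _i}q(\gamma _0\varLambda _ix^{\mathbf{a}_i}) = \gamma _0x^{\mathbf{a}_i}\big (q(\gamma _0\varLambda _ix^{\mathbf{a}_i}) - (\gamma _0\varLambda _ix^{\mathbf{a}_i})^{-1}\big ) = \gamma _0x^{\mathbf{a}_i}q(\gamma _0\varLambda _ix^{\mathbf{a}_i}) - 1/\varLambda _i\).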

A straightforward calculation from (7.3) gives

$$\begin{aligned} \frac{\partial }{\partial \varLambda _k}{F}^{(i)}(\varLambda ,x) = \delta _-\big (\gamma _0x^{\mathbf{a}_k}{F}^{(i)}(\varLambda ,x)\big ) \end{aligned}$$
(7.6)

for \(k=1,\dots ,N\). Equivalently, for \(u\in {{\mathscr {M}}}_-\), we have by (7.6)

$$\begin{aligned} \frac{\partial }{\partial \varLambda _k}F^{(i)}_u(\varLambda ) = F^{(i)}_{u-\mathbf{a}_k}(\varLambda ). \end{aligned}$$
(7.7)

More generally, if \(l_1,\dots ,l_N\) are nonnegative integers, it follows that

$$\begin{aligned} \prod _{k=1}^N \bigg (\frac{\partial }{\partial \varLambda _k}\bigg )^{l_k} F^{(i)}_u(\varLambda ) = F_{u-\sum _{k=1}^N l_k\mathbf{a}_k}^{(i)}(\varLambda ). \end{aligned}$$
(7.8)

In particular, it follows from the definition of the box operators (7.1) that

$$\begin{aligned} \Box _l\big (F^{(i)}_u(\varLambda )\big ) = 0\quad \text {for all } l\in L, i=1,\dots ,M, \text { and all }\, u\in {{\mathscr {M}}}_-. \end{aligned}$$
(7.9)

The condition on the summation in (7.5) implies that \(F^{(i)}_u(\varLambda )\) satisfies the Euler operators (7.2) with parameter \(\beta = u\). In summary:

Lemma 7.1

For \(i=1,\dots ,M\) and \(u\in {{\mathscr {M}}}_-\), the series \(F^{(i)}_u(\varLambda )\) is a solution of the A-hypergeometric system with parameter \(\beta = u\).

Lemma 7.2

For \(i=1,\dots ,M\) and \(u\in {{\mathscr {M}}}_-\), the series \(F^{(i)}_u(\varLambda )\) lies in \(R_u\).

Proof

The homogeneity condition is clear from (7.5). We prove that \(F^{(i)}_u(\varLambda )\) lies in \({\tilde{R}}\).

Let \(u\in {{\mathscr {M}}}_-\). By the definition of the set A, we can write \(-u=\sum _{k=1}^N l_k\mathbf{a}_k\) with the \(l_k\) in \({{\mathbb {N}}}\). Suppose that \((l'_1,\dots ,l'_N)\in L_{i,u}\). Then

$$\begin{aligned} \sum _{k=1}^N (l_k+l'_k)\mathbf{a}_k = \mathbf{0}. \end{aligned}$$

Furthermore, we have \(l_k+l'_k\ge 0\) for \(k\ne i\) and \(l_i+l'_i\le 0\), so

$$\begin{aligned} (l_1+l'_1,\dots ,l_N+l'_N)\in L_i. \end{aligned}$$

It follows that \(L_{i,u}\subseteq -(l_1,\dots ,l_N)+L_i\), hence \(F_u^{(i)}(\varLambda )\in \varLambda _1^{-l_1}\cdots \varLambda _N^{-l_N} R_E\). \(\square\)

Remark

We note the relation between the \(F^{(i)}_u(\varLambda )\) and the \(F_{ij}(\varLambda )\) defined in the Introduction: for \(i,j=1,\dots ,M\),

$$\begin{aligned} F_{ij}(\varLambda ) = \varLambda _j F^{(i)}_{-\mathbf{a}_j}(\varLambda ),\quad \text {i.e.,}\quad F(\varLambda ) = {{\mathbb {M}}}\big (F(\varLambda ,x)\big ), \end{aligned}$$
(7.10)

where \(F(\varLambda ,x) = \big (F^{(1)}(\varLambda ,x),\dots ,F^{(M)}(\varLambda ,x)\big )\in S^M\).

Lemma 7.3

For \(i=1,\dots ,M\) and \(u\in {{\mathscr {M}}}_-\), the coefficients of the series \(F^{(i)}_u(\varLambda )\) are integers divisible by \((-u_{n+1}-1)!\).

Proof

Since the last coordinate of each \(\mathbf{a}_k\) equals 1, the condition on the summation in (7.5) implies that

$$\begin{aligned} \sum _{k=1}^N l_k = u_{n+1}, \end{aligned}$$

i.e.,

$$\begin{aligned} (-l_i-1)= (-u_{n+1}-1) +\sum _{\begin{array}{c} k=1\\ k\ne i \end{array}}^N l_k. \end{aligned}$$

Applying this formula to the coefficients of the series in (7.5) implies the lemma. \(\square\)
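
In more detail, set \(m = -u_{n+1}-1\ge 0\). The preceding identity allows the factor appearing in (7.5) to be written as

$$\begin{aligned} \frac{(-l_i-1)!}{\displaystyle \prod _{\begin{array}{c} k=1\\ k\ne i \end{array}}^N l_k!} = m!\cdot \frac{\big (m+\sum _{k\ne i}l_k\big )!}{m!\displaystyle \prod _{\begin{array}{c} k=1\\ k\ne i \end{array}}^N l_k!}, \end{aligned}$$

and the second factor on the right-hand side is a multinomial coefficient, hence an integer. Each coefficient of \(F^{(i)}_u(\varLambda )\) is therefore an integer divisible by \(m! = (-u_{n+1}-1)!\).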

Although the series \(F^{(i)}_u(\varLambda )\) are more natural to consider, we replace them in what follows by some related series \(G^{(i)}_u(\varLambda )\) that are more closely tied to the Dwork-Frobenius operator \(\alpha ^*\).

For \(i=1,\dots ,M\), define

$$\begin{aligned} {G}^{(i)}(\varLambda ,x)&= \delta _-\big ({F}^{(i)}(\varLambda ,x){\hat{\theta }}_1(\varLambda ,x)\big ) \nonumber \\&=\delta _-\bigg (q(\gamma _0\varLambda _ix^{\mathbf{a}_i}){\hat{\theta }}_1(\varLambda _ix^{\mathbf{a}_i})\bigg (\prod _{\begin{array}{c} k=1\\ k\ne i \end{array}}^N {\hat{\theta }}(\varLambda _kx^{\mathbf{a}_k})\bigg )\bigg ), \end{aligned}$$
(7.11)

where the second equality follows from (4.1). If we write

$$\begin{aligned} {G}^{(i)}(\varLambda ,x) = \sum _{u\in {{\mathscr {M}}}_-} G^{(i)}_u(\varLambda ) \gamma _0^{u_{n+1}}x^u, \end{aligned}$$
(7.12)

then by (3.8) and (7.4) one gets from the first equality of (7.11)

$$\begin{aligned} G^{(i)}_u(\varLambda ) = \sum _{\begin{array}{c} u^{(1)}\in {{\mathscr {M}}}_-,u^{(2)}\in {{\mathbb {N}}}A\\ u^{(1)}+u^{(2)} = u \end{array}} F_{u^{(1)}}^{(i)}(\varLambda ){\hat{\theta }}_{1,u^{(2)}}(\varLambda ). \end{aligned}$$
(7.13)

Note that as a series in \(\varLambda\), \(G^{(i)}_u(\varLambda )\) has exponents in \(L_{i,u}\).

We have an analogue of Lemma 7.3 for these series.

Lemma 7.4

For \(i=1,\dots ,M\) and \(u\in {{\mathscr {M}}}_-\), \(G^{(i)}_u(\varLambda )/(-u_{n+1}-1)!\) has p-integral coefficients.

Proof

Using (3.9) we can write (7.13) in the form

$$\begin{aligned} G^{(i)}_u(\varLambda ) = \sum _{\begin{array}{c} u^{(1)}\in {{\mathscr {M}}}_-,u^{(2)}\in {{\mathbb {N}}}A\\ u^{(1)}+u^{(2)} = u \end{array}} \frac{F_{u^{(1)}}^{(i)}(\varLambda )}{(-u^{(1)}_{n+1}-1)!} \sum _{\begin{array}{c} l_1,\dots ,l_N\in {{\mathbb {N}}}\\ \sum _{k=1}^N l_k\mathbf{a}_k = u^{(2)} \end{array}} \bigg (\prod _{k=1}^N {\hat{\theta }}_{1,l_k}\bigg ) \frac{(-u^{(1)}_{n+1}-1)!}{\prod _{k=1}^N l_k!} \varLambda _1^{l_1}\cdots \varLambda _N^{l_N}. \end{aligned}$$
(7.14)

The series \(F_{u^{(1)}}^{(i)}(\varLambda )/(-u^{(1)}_{n+1}-1)!\) has integral coefficients by Lemma 7.3. The condition on the first summation on the right-hand side of (7.14) implies that

$$\begin{aligned} (-u_{n+1}-1)+ u^{(2)}_{n+1} = -u^{(1)}_{n+1} -1, \end{aligned}$$
(7.15)

where \(-u_{n+1},-u^{(1)}_{n+1}\ge 1\) since \(u,u^{(1)}\in {{\mathscr {M}}}_-\). Since the last coordinate of each \(\mathbf{a}_k\) equals 1, the condition on the second summation on the right-hand side of (7.14) implies that

$$\begin{aligned} u^{(2)}_{n+1} = \sum _{k=1}^N l_k, \end{aligned}$$

so by (7.15)

$$\begin{aligned} (-u_{n+1}-1) + \sum _{k=1}^N l_k= -u^{(1)}_{n+1}-1. \end{aligned}$$

It follows that the ratio \((-u^{(1)}_{n+1}-1)!/\prod _{k=1}^N l_k!\) appearing in the second summation on the right-hand side of (7.14) is an integer divisible by \((-u_{n+1}-1)!\). For each N-tuple \((l_1,\dots ,l_N)\) appearing in the second summation on the right-hand side of (7.14) we have

$$\begin{aligned} \mathrm{ord}\,\prod _{k=1}^N {\hat{\theta }}_{1,l_k}\ge \frac{\sum _{k=1}^N l_k(p-1)}{p} = \frac{u^{(2)}_{n+1}(p-1)}{p} \end{aligned}$$
(7.16)

by (3.6). This implies that the series on the right-hand side of (7.14) converges to a series in the \(\varLambda _k\) with p-integral coefficients that remain p-integral when divided by \((-u_{n+1}-1)!\). \(\square\)

We also have an analogue of Lemma 7.2.

Lemma 7.5

For \(i=1,\dots ,M\) and \(u\in {{\mathscr {M}}}_-\) the series \(G^{(i)}_u(\varLambda )\) lies in \({R}_u\).

Proof

The homogeneity condition is clear. Since the second summation on the right-hand side of (7.14) is finite, the sum

$$\begin{aligned} \frac{F_{u^{(1)}}^{(i)}(\varLambda )}{(-u^{(1)}_{n+1}-1)!} \sum _{\begin{array}{c} l_1,\dots ,l_N\in {{\mathbb {N}}}\\ \sum _{k=1}^N l_k\mathbf{a}_k = u^{(2)} \end{array}} \bigg (\prod _{k=1}^N {\hat{\theta }}_{1,l_k}\bigg ) \frac{(-u^{(1)}_{n+1}-1)!}{\prod _{k=1}^N l_k!} \varLambda _1^{l_1}\cdots \varLambda _N^{l_N} \end{aligned}$$
(7.17)

lies in \({\tilde{R}}\) by the proof of Lemma 7.2. Furthermore, it follows from (7.16) that the expression (7.17) has norm bounded by \(p^{-u^{(2)}_{n+1}(p-1)/p}\). In the first summation on the right-hand side of (7.14), a given \(u^{(2)}\in {{\mathbb {N}}}A\) appears at most once, so \(u^{(2)}\rightarrow \infty\) in this summation. This implies that the first summation on the right-hand side of (7.14) converges in the norm on \({{\mathbb {B}}}\). \(\square\)

We define a matrix \(G(\varLambda ) = \big [ G_{ij}(\varLambda ) \big ]_{i,j=1}^M\) corresponding to \(F(\varLambda )\) by the analogue of (7.10):

$$\begin{aligned} G_{ij}(\varLambda ) = \varLambda _j G^{(i)}_{-\mathbf{a}_j}(\varLambda ),\quad \text {i.e.,}\quad G(\varLambda ) = {{\mathbb {M}}}\big (G(\varLambda ,x)\big ), \end{aligned}$$
(7.18)

where \(G(\varLambda ,x) = \big (G^{(1)}(\varLambda ,x),\dots ,G^{(M)}(\varLambda ,x)\big )\).

Like the \(F_{ij}(\varLambda )\), the monomials in the \(G_{ij}(\varLambda )\) have exponents in \(L_i\). Furthermore, the \(G_{ii}(\varLambda )\) have constant term 1, while the \(G_{ij}(\varLambda )\) for \(i\ne j\) have no constant term. This implies that \(\det G(\varLambda )\) is an invertible element of R, hence \(G(\varLambda )^{-1}\) is a matrix with entries in R. Since the series \(G_{ij}(\varLambda )\) have p-integral coefficients (Lemma 7.4) and the \(G_{ii}(\varLambda )\) have constant term 1, it follows that

$$\begin{aligned} \big |G(\varLambda ) \big |= \big |\det G(\varLambda )\big |= 1. \end{aligned}$$
(7.19)

This implies in particular that

$$\begin{aligned} |G(\varLambda )^{-1}|\le 1. \end{aligned}$$
(7.20)

We simplify our notation: for \(u,u^{(1)}\in {{\mathscr {M}}}_-\), \(u\ne u^{(1)}\), set

$$\begin{aligned} C_{u,u^{(1)}} = \sum _{\begin{array}{c} l_1,\dots ,l_N\in {{\mathbb {N}}}\\ \sum _{k=1}^N l_k\mathbf{a}_k = u-u^{(1)} \end{array}} \bigg (\prod _{k=1}^N {\hat{\theta }}_{1,l_k}\bigg ) \frac{(-u^{(1)}_{n+1}-1)!}{\prod _{k=1}^N l_k!} \varLambda _1^{l_1}\cdots \varLambda _N^{l_N}. \end{aligned}$$

Note that this is a finite sum, that \(C_{u,u^{(1)}}\) has p-integral coefficients, and that \(\mathrm{ord}\,C_{u,u^{(1)}}>0\) by (7.16). Equation (7.14) then simplifies to

$$\begin{aligned} G^{(i)}_u(\varLambda ) = F^{(i)}_u(\varLambda ) + \sum _{\begin{array}{c} u^{(1)}\in {{\mathscr {M}}}_-\\ u^{(1)}\ne u \end{array}} C_{u,u^{(1)}}\frac{F_{u^{(1)}}^{(i)}(\varLambda )}{(-u^{(1)}_{n+1}-1)!}. \end{aligned}$$
(7.21)

Furthermore, the estimate (7.16) implies that \(C_{u,u^{(1)}}\rightarrow 0\) as \(u^{(1)}\rightarrow \infty\) in the sense that for any \(\kappa >0\), the estimate \(\mathrm{ord}\, C_{u,u^{(1)}}>\kappa\) holds for all but finitely many \(u^{(1)}\).

We need the analogue of (7.21) with the roles of F and G reversed. It follows from (7.11) and (4.1) that for \(i=1,\dots ,M\)

$$\begin{aligned} F^{(i)}(\varLambda ,x) = \delta _-\big (G^{(i)}(\varLambda ,x) {\hat{\theta }}_1(\varLambda ,x)^{-1}\big ). \end{aligned}$$
(7.22)

This leads to the analogue of (7.14):

$$\begin{aligned} F^{(i)}_u(\varLambda ) = \sum _{\begin{array}{c} u^{(1)}\in {{\mathscr {M}}}_-,u^{(2)}\in {{\mathbb {N}}}A\\ u^{(1)}+u^{(2)} = u \end{array}} \frac{G_{u^{(1)}}^{(i)}(\varLambda )}{(-u^{(1)}_{n+1}-1)!} \sum _{\begin{array}{c} l_1,\dots ,l_N\in {{\mathbb {N}}}\\ \sum _{k=1}^N l_k\mathbf{a}_k = u^{(2)} \end{array}} \bigg (\prod _{k=1}^N {\hat{\theta }}'_{1,l_k}\bigg ) \frac{(-u^{(1)}_{n+1}-1)!}{\prod _{k=1}^N l_k!} \varLambda _1^{l_1}\cdots \varLambda _N^{l_N}, \end{aligned}$$
(7.23)

where the \({\hat{\theta }}'_{1,l_k}\) are defined by (3.10) and Lemma 7.4 tells us that the

$$\begin{aligned} {G_{u^{(1)}}^{(i)}(\varLambda )}/{(-u^{(1)}_{n+1}-1)!} \end{aligned}$$

have p-integral coefficients. We define for \(u,u^{(1)}\in {{\mathscr {M}}}_-\), \(u\ne u^{(1)}\),

$$\begin{aligned} C'_{u,u^{(1)}} = \sum _{\begin{array}{c} l_1,\dots ,l_N\in {{\mathbb {N}}}\\ \sum _{k=1}^N l_k\mathbf{a}_k = u-u^{(1)} \end{array}} \bigg (\prod _{k=1}^N {\hat{\theta }}'_{1,l_k}\bigg ) \frac{(-u^{(1)}_{n+1}-1)!}{\prod _{k=1}^N l_k!} \varLambda _1^{l_1}\cdots \varLambda _N^{l_N}. \end{aligned}$$

Since the \({\hat{\theta }}'_{1,l_k}\) also satisfy the estimate (7.16) (see (3.11)), we get that \(C'_{u,u^{(1)}}\) is p-integral and \(\mathrm{ord}\,C'_{u,u^{(1)}}>0\). In addition, \(C'_{u,u^{(1)}}\rightarrow 0\) as \(u^{(1)}\rightarrow \infty\). Substitution into (7.23) now gives the desired formula:

$$\begin{aligned} F^{(i)}_u(\varLambda ) = G^{(i)}_u(\varLambda ) + \sum _{\begin{array}{c} u^{(1)}\in {{\mathscr {M}}}_-\\ u^{(1)}\ne u \end{array}} C'_{u,u^{(1)}}\frac{G_{u^{(1)}}^{(i)}(\varLambda )}{(-u^{(1)}_{n+1}-1)!}. \end{aligned}$$
(7.24)

For future reference, we record estimates that were used in the proof of (7.21) and (7.24).

Proposition 7.1

For \(u,u^{(1)}\in {{\mathscr {M}}}_-\), \(u\ne u^{(1)}\), we have \(\mathrm{ord}\,C_{u,u^{(1)}}>0\) (resp. \(\mathrm{ord}\,C'_{u,u^{(1)}}>0\)) and \(C_{u,u^{(1)}}\rightarrow 0\) (resp. \(C'_{u,u^{(1)}}\rightarrow 0\)) as \(u^{(1)}\rightarrow \infty\).

8 Eigenvectors of \(\alpha ^*\)

We use the series of Sect. 7 to construct eigenvectors of \(\alpha ^*\). These eigenvectors will lead to the fixed point of the contraction mapping of Sect. 6. We begin by recalling some results from [3].

By [3, Lemma 6.1] the product \({\hat{\theta }}_1(t)q(\gamma _0t)\) is well-defined so we may set

$$\begin{aligned} Q(t) = \delta _-\big ({\hat{\theta }}_1(t)q(\gamma _0t)\big ) = \sum _{i=1}^\infty Q_i i!\gamma _0^{-i-1} t^{-i-1}, \end{aligned}$$
(8.1)

where the \(Q_i\) are defined by the second equality. The proof of [3, Lemma 6.1] shows that the \(Q_i\) are p-integral. By [3, Proposition 6.10] we have

$$\begin{aligned} \delta _-\big (\theta (t)Q(t^p)\big ) = pQ(t). \end{aligned}$$
(8.2)

It follows from (8.2) that

$$\begin{aligned} \theta (t)Q(t^p) = pQ(t) + A(t) \end{aligned}$$

for some series A(t) in nonnegative powers of t. Fix i, \(1\le i\le M\), and replace t in this equation by \(\varLambda _ix^{\mathbf{a}_i}\):

$$\begin{aligned} \theta (\varLambda _ix^{\mathbf{a}_i}) Q(\varLambda _i^p x^{p\mathbf{a}_i}) = A(\varLambda _ix^{\mathbf{a}_i}) + pQ(\varLambda _i x^{\mathbf{a}_i}). \end{aligned}$$

Since \(\delta _-(Q(\varLambda _i x^{\mathbf{a}_i})) = Q(\varLambda _i x^{\mathbf{a}_i})\) and \(\delta _-(A(\varLambda _ix^{\mathbf{a}_i})) = 0\), we get

$$\begin{aligned} \delta _-\bigg ( \theta (\varLambda _ix^{\mathbf{a}_i}) Q(\varLambda _i^p x^{p\mathbf{a}_i}) \bigg ) = pQ(\varLambda _i x^{\mathbf{a}_i}). \end{aligned}$$
(8.3)
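The splitting step above — \(\delta _-\) fixes series supported on negative exponents and annihilates series supported on nonnegative exponents — can be illustrated on truncated one-variable Laurent series. A minimal sketch (the dict representation and the name `delta_minus` are ours, purely illustrative):

```python
def delta_minus(series):
    """Project a (truncated) Laurent series onto its terms of
    strictly negative exponent; series is a dict {exponent: coeff}."""
    return {u: c for u, c in series.items() if u < 0}

# A(t): nonnegative powers only; Q(t): negative powers only.
A = {0: 5, 1: -2, 3: 7}
Q = {-1: 1, -2: 4, -3: 9}

# delta_minus fixes Q and kills A, so applied to A + p*Q it returns p*Q.
p = 3
total = dict(A)
for u, c in Q.items():
    total[u] = total.get(u, 0) + p * c

assert delta_minus(total) == {u: p * c for u, c in Q.items()}
assert delta_minus(A) == {}
assert delta_minus(Q) == Q
```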

These series are related to the \(G^{(i)}(\varLambda ,x)\) defined in Sect. 7.

Lemma 8.1

We have

$$\begin{aligned} G^{(i)}(\varLambda ,x) = \delta _-\bigg ( Q(\varLambda _i x^{\mathbf{a}_i}) \prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^N {\hat{\theta }}(\varLambda _jx^{\mathbf{a}_j})\bigg ). \end{aligned}$$

Proof

Combining (7.11), (7.3), and (3.7) and using (4.1) we get

$$\begin{aligned} G^{(i)}(\varLambda ,x) = \delta _-\bigg ( q(\gamma _0\varLambda _ix^{\mathbf{a}_i})\prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^N \exp (\gamma _0\varLambda _jx^{\mathbf{a}_j})\prod _{j=1}^N {\hat{\theta }}_1(\varLambda _jx^{\mathbf{a}_j})\bigg ). \end{aligned}$$

Using (3.3) and (3.5), this may be rewritten as

$$\begin{aligned} G^{(i)}(\varLambda ,x) = \delta _-\bigg ( q(\gamma _0\varLambda _ix^{\mathbf{a}_i}) {\hat{\theta }}_1(\varLambda _ix^{\mathbf{a}_i}) \prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^N {\hat{\theta }}(\varLambda _jx^{\mathbf{a}_j})\bigg ). \end{aligned}$$

The assertion of the lemma now follows from (8.1) and (4.1). \(\square\)

The following result is a key step in the proof of Theorem 2.1.

Theorem 8.1

For \(i=1,\dots ,M\) we have

$$\begin{aligned} \alpha ^*\big (G^{(i)}(\varLambda ,x)\big ) = pG^{(i)}(\varLambda ,x). \end{aligned}$$

Proof

Since \(\theta (t) = {\hat{\theta }}(t)/{\hat{\theta }}(t^p)\) we have

$$\begin{aligned} \prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^N \theta (\varLambda _jx^{\mathbf{a}_j})\prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^N {\hat{\theta }}(\varLambda _j^px^{p\mathbf{a}_j}) = \prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^N {\hat{\theta }}(\varLambda _jx^{\mathbf{a}_j}). \end{aligned}$$
(8.4)

We now compute

$$\begin{aligned} \alpha ^*\big (G^{(i)}(\varLambda ,x)\big )&= \delta _-\bigg (\prod _{j=1}^N \theta (\varLambda _jx^{\mathbf{a}_j}) \delta _-\bigg ( Q(\varLambda _i^p x^{p\mathbf{a}_i}) \prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^N {\hat{\theta }}(\varLambda _j^px^{p\mathbf{a}_j})\bigg )\bigg ) \\&= \delta _-\bigg (\theta (\varLambda _ix^{\mathbf{a}_i})Q(\varLambda _i^px^{p\mathbf{a}_i}) \prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^N \theta (\varLambda _jx^{\mathbf{a}_j}) \prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^N {\hat{\theta }}(\varLambda _j^px^{p\mathbf{a}_j})\bigg ) \\&= p\delta _-\bigg (Q(\varLambda _ix^{\mathbf{a}_i}) \prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^N {\hat{\theta }}(\varLambda _jx^{\mathbf{a}_j})\bigg ) \\&= pG^{(i)}(\varLambda ,x), \end{aligned}$$

where the first equality follows from Lemma 8.1, the second follows from (4.1) and rearranging the product, the third follows from (8.3) and (8.4), and the fourth follows from Lemma 8.1. Note that the rearrangement of the product occurring in the second equality is not completely trivial. One of the series contains negative powers of the \(x_i\), the rest contain positive powers of the \(x_i\), so they do not all lie in a common commutative ring. One has to write out both sides to verify that they are equal. (See Sect. 5 of: Adolphson, A., Sperber, S.: On the integrality of hypergeometric series whose coefficients are factorial ratios, available at arXiv:2001.03283, for more details on this calculation.) \(\square\)

The action of \(\alpha ^*\) was extended to \(S^M\) coordinatewise, so if we set

$$\begin{aligned} G(\varLambda ,x) = \big ( G^{(1)}(\varLambda ,x),\dots ,G^{(M)}(\varLambda ,x)\big )\in S^M, \end{aligned}$$
(8.5)

then we get the following result.

Corollary 8.1

We have \(\alpha ^*\big (G(\varLambda ,x)\big ) = pG(\varLambda ,x)\).

We now verify that \(G(\varLambda )^{-1}G(\varLambda ,x)\in T\). First of all,

$$\begin{aligned} {{\mathbb {M}}}\big (G(\varLambda )^{-1}G(\varLambda ,x)\big ) = I \end{aligned}$$

by (7.18) and (5.5). This implies in particular that

$$\begin{aligned} |G(\varLambda )^{-1}G(\varLambda ,x)|\ge 1. \end{aligned}$$

But \(|G(\varLambda ,x)|\le 1\) by Lemma 7.4 and \(|G(\varLambda )^{-1}|\le 1\) by (7.20), so

$$\begin{aligned} |G(\varLambda )^{-1}G(\varLambda ,x)|\le 1 \end{aligned}$$

also. We thus conclude that

$$\begin{aligned} |G(\varLambda )^{-1}G(\varLambda ,x)|= 1. \end{aligned}$$
(8.6)

Corollary 8.2

The product \(G(\varLambda )^{-1}G(\varLambda ,x)\) is the unique fixed point of \(\phi\), hence lies in \(T'\).

Proof

From the definition of \(\alpha ^*\) and Corollary 8.1 we have

$$\begin{aligned} \alpha ^*\big (G(\varLambda )^{-1}G(\varLambda ,x)\big ) = G(\varLambda ^p)^{-1} \alpha ^*\big (G(\varLambda ,x)\big ) = G(\varLambda ^p)^{-1}pG(\varLambda ,x). \end{aligned}$$
(8.7)

This implies by (7.18) and (5.4) that

$$\begin{aligned} {{\mathbb {M}}}\big (\alpha ^*\big (G(\varLambda )^{-1}G(\varLambda ,x)\big )\big ) = pG(\varLambda ^p)^{-1}G(\varLambda ), \end{aligned}$$
(8.8)

hence

$$\begin{aligned} \phi \big (G(\varLambda )^{-1}G(\varLambda ,x)\big ) = G(\varLambda )^{-1}G(\varLambda ,x). \end{aligned}$$

\(\square\)

Corollary 8.3

The entries of \(G(\varLambda ^p)^{-1}G(\varLambda )\) lie in \(R'_\mathbf{0}\).

Proof

Corollary 8.2 implies that \(G(\varLambda )^{-1}G(\varLambda ,x)\) lies in \((S')^M\), and since \(\alpha ^*\) is stable on \((S')^M\) we also have \(\alpha ^*\big (G(\varLambda )^{-1}G(\varLambda ,x)\big )\in (S')^M\). The assertion of the corollary now follows from (8.7). \(\square\)

Put \({{\mathscr {G}}}(\varLambda ) = G(\varLambda ^p)^{-1}G(\varLambda )\). We note one more consequence of Theorem 8.1.

Proposition 8.1

The reciprocal eigenvalues of \({{\mathscr {G}}}(\lambda )\) are p-adic units for all \(\lambda\) in \({{\mathscr {D}}}\).

Proof

We showed earlier that \(G(\varLambda )^{-1}G(\varLambda ,x)\in T\), so \(G(\varLambda )^{-1}G(\varLambda ,x)\) satisfies the hypotheses of Lemma 5.1. By (8.6) and (5.7) we have

$$\begin{aligned} \big |\det {{\mathbb {M}}}\big (\alpha ^*\big (G(\varLambda )^{-1}G(\varLambda ,x)\big )\big ) \big |= |p|\end{aligned}$$

and by (8.6) and Corollary 5.1 we have

$$\begin{aligned} \big |\det {{\mathbb {M}}}\big (\alpha ^*\big (G(\varLambda )^{-1}G(\varLambda ,x)\big )\big )^{-1} \big |= |1/p|. \end{aligned}$$

Combining these equalities with (8.8) gives

$$\begin{aligned} \big |\det \big (G(\varLambda ^p)^{-1}G(\varLambda )\big )\big |=\big |\det \big (G(\varLambda )^{-1}G(\varLambda ^p)\big )\big |= 1. \end{aligned}$$

By (2.8), we then have

$$\begin{aligned} |\det {{\mathscr {G}}}(\lambda )|\le 1 \;\text {and}\; |\det {{\mathscr {G}}}(\lambda )^{-1}|\le 1\;\text {for all}\, \lambda \in {{\mathscr {D}}}, \end{aligned}$$

which implies that \(\det {{\mathscr {G}}}(\lambda )\) assumes unit values on \({{\mathscr {D}}}\). Since \({{\mathscr {G}}}(\lambda )\) assumes p-integral values on \({{\mathscr {D}}}\), it follows that the roots of \(\det (I-t{{\mathscr {G}}}(\lambda ))\) are p-adic units. \(\square\)
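Concretely, the final step can be phrased in terms of eigenvalues: the entries of \({{\mathscr {G}}}(\lambda )\) are p-integral, so its characteristic polynomial is monic with p-integral coefficients and every eigenvalue \(e_i\) satisfies \(\mathrm{ord}\,e_i\ge 0\), while

$$\begin{aligned} \sum _{i=1}^M \mathrm{ord}\,e_i = \mathrm{ord}\det {{\mathscr {G}}}(\lambda ) = 0. \end{aligned}$$

Hence \(\mathrm{ord}\,e_i = 0\) for every i, and the roots \(e_i^{-1}\) of \(\det (I-t{{\mathscr {G}}}(\lambda ))\) are p-adic units.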

We introduce some additional notation that will be useful later. Put

$$\begin{aligned} {{\mathscr {G}}}(\varLambda ,x) = \big ({{\mathscr {G}}}^{(1)}(\varLambda ,x),\dots ,{{\mathscr {G}}}^{(M)}(\varLambda ,x)\big ) = G(\varLambda )^{-1}G(\varLambda ,x). \end{aligned}$$
(8.9)

This is an element of \(T'\) by Corollary 8.2, so each \({{\mathscr {G}}}^{(i)}(\varLambda ,x)\) lies in \(S'\) and we may write

$$\begin{aligned} {{\mathscr {G}}}^{(i)}(\varLambda ,x) = \sum _{u\in {{\mathscr {M}}}_-} {{\mathscr {G}}}_u^{(i)}(\varLambda ) \gamma _0^{u_{n+1}} x^u \end{aligned}$$
(8.10)

with \({{\mathscr {G}}}_u^{(i)}(\varLambda )\) in \(R_u'\) for all i, u.

9 Relation between \({{\mathscr {F}}}(\varLambda )\) and \({{\mathscr {G}}}(\varLambda )\)

In this section we prove the first assertion of Theorem 2.1 and show that the second assertion of Theorem 2.1 is equivalent to the same statement with \({{\mathscr {F}}}(\varLambda )\) replaced by \({{\mathscr {G}}}(\varLambda )\) (see Proposition 9.2).

Put \(F(\varLambda ,x) = (F^{(1)}(\varLambda ,x),\dots ,F^{(M)}(\varLambda ,x))\), where \(F^{(i)}(\varLambda ,x)\) is given by (7.3). We regard \(F(\varLambda ,x)\) and \(G(\varLambda ,x)\) (defined in (8.5)) as column vectors so they can be multiplied on the left by \((M\times M)\)-matrices. The following result is a consequence of the fact that \(G(\varLambda )^{-1}G(\varLambda ,x)\) lies in \((S')^M\) (Corollary 8.2).

Proposition 9.1

The vectors \(F(\varLambda )^{-1}F(\varLambda ,x)\), \(G(\varLambda )^{-1}F(\varLambda ,x)\), and \(F(\varLambda )^{-1}G(\varLambda ,x)\) lie in \((S')^M\).

Proof

We begin by showing that \(G(\varLambda )^{-1}F(\varLambda ,x)\in (S')^M\). Writing Equation (7.24) for \(i=1,\dots ,M\) and multiplying by \(G(\varLambda )^{-1}\) gives the vector equation

$$\begin{aligned}&G(\varLambda )^{-1}\begin{bmatrix} F^{(1)}_u(\varLambda ) \\ \vdots \\ F^{(M)}_u(\varLambda )\end{bmatrix} \nonumber \\&\quad =G(\varLambda )^{-1} \begin{bmatrix} G^{(1)}_u(\varLambda ) \\ \vdots \\ G^{(M)}_u(\varLambda )\end{bmatrix}+ \sum _{\begin{array}{c} u^{(1)}\in {{\mathscr {M}}}_-\\ u^{(1)}\ne u \end{array}} \frac{C'_{u,u^{(1)}}}{(-u^{(1)}_{n+1}-1)!}G(\varLambda )^{-1} \begin{bmatrix} G^{(1)}_{u^{(1)}}(\varLambda ) \\ \vdots \\ G^{(M)}_{u^{(1)}}(\varLambda )\end{bmatrix}. \end{aligned}$$
(9.1)

We have \(|G(\varLambda )^{-1}|\le 1\) by (7.20). Lemma 7.4, Proposition 7.1, and Corollary 8.2 imply that the sum on the right-hand side of (9.1) converges to an element of \((R'_u)^M\). Since \(u\in {{\mathscr {M}}}_-\) was arbitrary, this shows that \(G(\varLambda )^{-1}F(\varLambda ,x)\in (S')^M\).

Take successively \(u=-\mathbf{a}_j\), \(j=1,\dots ,M\) in (9.1) and multiply the j-th equation by \(\varLambda _j\). We combine the column vectors in the resulting M equations into matrices and apply (7.10) and (7.18). The resulting matrix equation is

$$\begin{aligned} G(\varLambda )^{-1}F(\varLambda ) = I+ H(\varLambda ), \end{aligned}$$
(9.2)

where we have written \(H(\varLambda )\) for the \((M\times M)\)-matrix whose j-th column is

$$\begin{aligned} \sum _{\begin{array}{c} u^{(1)}\in {{\mathscr {M}}}_-\\ u^{(1)}\ne -\mathbf{a}_j \end{array}} \frac{C'_{-\mathbf{a}_j,u^{(1)}}\varLambda _j}{(-u^{(1)}_{n+1}-1)!} G(\varLambda )^{-1}\begin{bmatrix} G^{(1)}_{u^{(1)}}(\varLambda ) \\ \vdots \\ G^{(M)}_{u^{(1)}}(\varLambda )\end{bmatrix}. \end{aligned}$$

The entries in the matrix \(H(\varLambda )\) all lie in \({R}'_\mathbf{0}\) by Corollary 8.2 and have norm \(<1\) by (7.20), Lemma 7.4, and Proposition 7.1. We can thus apply the usual geometric series formula to invert the right-hand side of (9.2). This proves that \(F(\varLambda )^{-1}G(\varLambda )\) is a matrix with entries in \({R}'_\mathbf{0}\), all entries having norm \(\le 1\). Since the left-hand side of (9.1) lies in \(({R}'_u)^M\), we can now multiply it by \(F(\varLambda )^{-1}G(\varLambda )\) to conclude that

$$\begin{aligned} F(\varLambda )^{-1}\begin{bmatrix} F^{(1)}_u(\varLambda ) \\ \vdots \\ F^{(M)}_u(\varLambda )\end{bmatrix}\in ({R}'_u)^M\quad \text {for all }\, u\in {{\mathscr {M}}}_-. \end{aligned}$$
(9.3)

This shows that \(F(\varLambda )^{-1}F(\varLambda ,x)\) lies in \((S')^M\).

The remaining assertion, that \(F(\varLambda )^{-1}G(\varLambda ,x)\) lies in \((S')^M\), can be proved similarly by reversing the roles of F and G and using (7.21) in place of (7.24). Since that result is not needed in what follows, we omit the details. \(\square\)
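The geometric-series inversion of \(I+H(\varLambda )\) used in the proof can be made concrete p-adically: when every entry of a matrix H is divisible by p, the series \(\sum _{k\ge 0}(-H)^k\) converges and its truncation inverts \(I+H\) modulo \(p^K\). A minimal numerical sketch (the specific matrix and parameters are made up for illustration):

```python
# Invert I + H by the geometric (Neumann) series when H = 0 (mod p),
# working modulo p^K; pure-Python 2x2 matrices for illustration.
p, K = 5, 8
mod = p ** K

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % mod
             for j in range(2)] for i in range(2)]

H = [[p * 3, p * 1], [p * 7, p * 2]]          # entries divisible by p
I = [[1, 0], [0, 1]]

# S = I - H + H^2 - ... ; terms shrink since H^k = 0 (mod p^k).
S, term, sign = [[1, 0], [0, 1]], [[1, 0], [0, 1]], 1
for _ in range(K):
    term = mat_mul(term, H)
    sign = -sign
    S = [[(S[i][j] + sign * term[i][j]) % mod for j in range(2)]
         for i in range(2)]

IplusH = [[(I[i][j] + H[i][j]) % mod for j in range(2)] for i in range(2)]
assert mat_mul(IplusH, S) == I      # (I+H) S = I (mod p^K)
```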

Proposition 9.2

The matrix \({{\mathscr {F}}}(\varLambda )\) has entries in \(R'_\mathbf{0}\). For any \(\lambda \in ({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\) with \({\bar{D}}(\lambda )\ne 0\) we have

$$\begin{aligned} \det \big (I- t{{\mathscr {F}}}({\hat{\lambda }}^{p^{a-1}}){{\mathscr {F}}}({\hat{\lambda }}^{p^{a-2}})\cdots {{\mathscr {F}}}({\hat{\lambda }})\big ) = \det \big (I-t{{\mathscr {G}}}({\hat{\lambda }}^{p^{a-1}}){{\mathscr {G}}}({\hat{\lambda }}^{p^{a-2}})\cdots {{\mathscr {G}}}({\hat{\lambda }})\big ) \end{aligned}$$

where \({\hat{\lambda }}\in {{\mathbb {Q}}}_p(\zeta _{q-1})^N\) denotes the Teichmüller lifting of \(\lambda\).

Proof

We showed in the proof of Proposition 9.1 that \({{\mathscr {H}}}(\varLambda ):=F(\varLambda )^{-1}G(\varLambda )\) and its inverse \({{\mathscr {H}}}(\varLambda )^{-1} = G(\varLambda )^{-1}F(\varLambda )\) have entries in \({R}_\mathbf{0}'\). We then have

$$\begin{aligned} {{\mathscr {F}}}(\varLambda )= & {} F(\varLambda ^p)^{-1}F(\varLambda ) = {{\mathscr {H}}}(\varLambda ^p)\big (G(\varLambda ^p)^{-1}G(\varLambda )\big ) {{\mathscr {H}}}(\varLambda )^{-1} \nonumber \\= & {} {{\mathscr {H}}}(\varLambda ^p){{\mathscr {G}}}(\varLambda ){{\mathscr {H}}}(\varLambda )^{-1}, \end{aligned}$$
(9.4)

which implies the first assertion of the proposition since \({{\mathscr {G}}}(\varLambda )\) has entries in \(R_\mathbf{0}'\) by Corollary 8.3. Equation (9.4) implies

$$\begin{aligned} {{\mathscr {H}}}(\varLambda ^{p^a})^{-1}{{\mathscr {F}}}(\varLambda ^{p^{a-1}}){{\mathscr {F}}}(\varLambda ^{p^{a-2}})\cdots {{\mathscr {F}}}(\varLambda ){{\mathscr {H}}}(\varLambda ) = {{\mathscr {G}}}(\varLambda ^{p^{a-1}}){{\mathscr {G}}}(\varLambda ^{p^{a-2}})\cdots {{\mathscr {G}}}(\varLambda ). \end{aligned}$$
(9.5)

The Teichmüller lifting \({\hat{\lambda }}\) satisfies \({\hat{\lambda }}^{p^a} = {\hat{\lambda }}\) and \({{\mathscr {H}}}(\varLambda )\) is a function on \({{\mathscr {D}}}\), so the second assertion of the proposition follows by evaluating (9.5) at \(\varLambda = {\hat{\lambda }}\). \(\square\)
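The property \({\hat{\lambda }}^{p^a} = {\hat{\lambda }}\) used here is easy to exhibit numerically. For \(q=p\) (so \(a=1\)), the Teichmüller representative of a unit residue can be computed by iterating \(x\mapsto x^p\), which converges p-adically. A minimal sketch (parameters chosen only for illustration):

```python
# Teichmüller lifting for q = p: iterate x -> x^p mod p^K; the limit t
# satisfies t^p = t (mod p^K) and t = lam (mod p).
p, K = 7, 8
mod = p ** K

def teichmuller(lam):
    t = lam % mod
    for _ in range(K):        # K iterations suffice mod p^K
        t = pow(t, p, mod)
    return t

t = teichmuller(3)
assert t % p == 3             # reduces to lam in F_p
assert pow(t, p, mod) == t    # fixed by x -> x^p
```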

The first assertion of Proposition 9.2 implies the first assertion of Theorem 2.1. The second assertion of Proposition 9.2 shows that the second assertion of Theorem 2.1 is equivalent to the following statement.

Theorem 9.1

Let \(\lambda \in ({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\) and let \({\hat{\lambda }}\in {{\mathbb {Q}}}_p(\zeta _{q-1})^N\) be its Teichmüller lifting. If \({\bar{D}}(\lambda )\ne 0\), then \({\hat{\lambda }}^{p^i}\in {{\mathscr {D}}}\) for \(i=0,\dots ,a-1\) and

$$\begin{aligned} \rho (\lambda ,t) = \det \big ( I-t{{\mathscr {G}}}({\hat{\lambda }}^{p^{a-1}}){{\mathscr {G}}}({\hat{\lambda }}^{p^{a-2}})\cdots {{\mathscr {G}}}({\hat{\lambda }})\big ). \end{aligned}$$

10 Proof of Theorem 9.1

We apply Dwork’s p-adic cohomology theory to prove Theorem 9.1.

We begin by recalling the formula for the rational function \(P_\lambda (t)\) that was proved in [3]. For a subset \(I\subseteq \{0,1,\dots ,n\}\), let

$$\begin{aligned} L^I= & {} \bigg \{ \sum _{u\in {{\mathbb {N}}}^{n+2}} c_u\gamma _0^{pu_{n+1}}x^u\mid \sum _{i=0}^n u_i = du_{n+1}, u_i>0\, \text { for } i\in I, c_u\in {{\mathbb {C}}}_p, \\ &\text {and}\, \{c_u\} \, \text { is bounded}\bigg \}. \end{aligned}$$

Put \(g_\lambda = x_{n+1}f_\lambda\), so that

$$\begin{aligned} g_\lambda (x_0,\dots ,x_{n+1}) = \sum _{k=1}^N \lambda _kx^{\mathbf{a}_k} \in {{\mathbb {F}}}_q[x_0,\dots ,x_{n+1}], \end{aligned}$$

and let \({\hat{g}}_\lambda\) be its Teichmüller lifting:

$$\begin{aligned} {\hat{g}}_\lambda (x_0,\dots ,x_{n+1}) = \sum _{k=1}^N{\hat{\lambda }}_kx^{\mathbf{a}_k}\in {{\mathbb {Q}}}(\zeta _{q-1})[x_0,\dots ,x_{n+1}]. \end{aligned}$$

From (3.15) we have

$$\begin{aligned} \theta ({\hat{\lambda }},x) = \prod _{j=1}^N \theta ({\hat{\lambda }}_jx^{\mathbf{a}_j}). \end{aligned}$$
(10.1)

We also need the series \(\theta _0({\hat{\lambda }},x)\) defined by

$$\begin{aligned} \theta _0({\hat{\lambda }},x) = \prod _{i=0}^{a-1} \prod _{j=1}^N \theta \big (({\hat{\lambda }}_jx^{\mathbf{a}_j})^{p^i}\big ) = \prod _{i=0}^{a-1}\theta ({\hat{\lambda }}^{p^i},x^{p^i}). \end{aligned}$$
(10.2)

Define an operator \(\psi\) on formal power series by

$$\begin{aligned} \psi \bigg (\sum _{u\in {{\mathbb {N}}}^{n+2}} c_ux^u\bigg ) = \sum _{u\in {{\mathbb {N}}}^{n+2}} c_{pu}x^u. \end{aligned}$$
(10.3)
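In coordinates, \(\psi\) keeps exactly the coefficients whose exponent vector is divisible by p, relabeling \(c_{pu}\) as the coefficient of \(x^u\). A minimal sketch on exponent dictionaries (the representation is ours, purely illustrative):

```python
# psi picks out coefficients c_{p*u}: psi(sum c_u x^u) = sum c_{p*u} x^u.
p = 3

def psi(series):
    """series: dict mapping exponent tuples u to coefficients c_u."""
    return {tuple(e // p for e in u): c
            for u, c in series.items()
            if all(e % p == 0 for e in u)}

f = {(0, 0): 1, (1, 2): 4, (3, 0): 5, (3, 6): 7, (6, 3): 2}
assert psi(f) == {(0, 0): 1, (1, 0): 5, (1, 2): 7, (2, 1): 2}
```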

Denote by \(\alpha _{{\hat{\lambda }}}\) the composition

$$\begin{aligned} \alpha _{{{\hat{\lambda }}}} := \psi ^a\circ \text {``multiplication by}\, \theta _0({\hat{\lambda }},x)." \end{aligned}$$

The map \(\alpha _{{\hat{\lambda }}}\) is stable on \(L^I\) for all I and we have ([3, Equation (7.12)])

$$\begin{aligned} P_\lambda (qt) = \prod _{I\subseteq \{0,1,\dots ,n\}} \det (I-q^{n+1-|I|}t\alpha _{{\hat{\lambda }}}\mid L_0^I)^{(-1)^{n+1+|I|}}. \end{aligned}$$
(10.4)

For notational convenience, put \(\Gamma = \{0,1,\dots ,n\}\). The following lemma is an immediate consequence of [3, Proposition 7.13(b)] (note that since we are assuming \(d\ge n+1\), we have \(\mu =0\) in that proposition).

Lemma 10.1

The unit reciprocal roots of \(P_\lambda (t)\) are obtained from the reciprocal roots of \(\det (I-t\alpha _{{\hat{\lambda }}}\mid L_0^\Gamma )\) of q-ordinal equal to 1 by division by q.

We give an alternate description of \(\det (I-t\alpha _{{\hat{\lambda }}}\mid L_0^\Gamma )\) ([3, Sect. 7]). Set

$$\begin{aligned} B = \bigg \{ \xi ^* = \sum _{u\in ({{\mathbb {Z}}}_{<0})^{n+2}} c_u^*\gamma _0^{pu_{n+1}}x^u\mid c_u^*\rightarrow 0 \, \text { as }\, u\rightarrow -\infty \bigg \}, \end{aligned}$$

a p-adic Banach space with norm \(|\xi ^*|= \sup _{u\in ({{\mathbb {Z}}}_{<0})^{n+2}}\{|c_u^*|\}\). Define a map \(\varPhi\) on formal power series by

$$\begin{aligned} \varPhi \bigg (\sum _{u\in {{\mathbb {Z}}}^{n+2}} c_ux^u\bigg ) = \sum _{u\in {{\mathbb {Z}}}^{n+2}} c_ux^{pu}. \end{aligned}$$

Consider the formal composition \(\alpha ^*_{{\hat{\lambda }}} = \delta _-\circ \theta _0({\hat{\lambda }},x)\circ \varPhi ^a\). The following result is [3, Proposition 7.30].

Proposition 10.1

The operator \(\alpha ^*_{{\hat{\lambda }}}\) is an endomorphism of B which is adjoint to \(\alpha _{{\hat{\lambda }}}:L_0^\Gamma \rightarrow L_0^\Gamma\).

From Proposition 10.1, it follows by Serre [12, Proposition 15] that

$$\begin{aligned} \det (I-t\alpha _{{\hat{\lambda }}}\mid L_0^\Gamma ) = \det (I-t\alpha ^*_{{\hat{\lambda }}}\mid B), \end{aligned}$$
(10.5)

so Lemma 10.1 gives the following result.

Corollary 10.1

The unit reciprocal roots of \(P_\lambda (t)\) are obtained from the reciprocal roots of \(\det (I-t\alpha ^*_{{\hat{\lambda }}}\mid B)\) of q-ordinal equal to 1 by division by q.

We saw in Sect. 1 that \(P_\lambda (t)\) has exactly M unit roots when \(\lambda \in ({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\) and \({\bar{D}}(\lambda )\ne 0\). By Proposition 8.1, the eigenvalues of the \((M\times M)\)-matrix \({{\mathscr {G}}}(\lambda )\) are units for all \(\lambda \in {{\mathscr {D}}}\). So to prove Theorem 9.1, it suffices to establish the following result.

Proposition 10.2

Let \(\lambda \in ({{\mathbb {F}}}_q^\times )^M\times {{\mathbb {F}}}_q^{N-M}\) and let \({\hat{\lambda }}\in {{\mathbb {Q}}}_p(\zeta _{q-1})^N\) be its Teichmüller lifting. Assume that \({\bar{D}}(\lambda )\ne 0\). Then \({\hat{\lambda }}^{p^i}\in {{\mathscr {D}}}\) for \(i=0,\dots ,a-1\) and

$$\begin{aligned} \det \big ( I-qt{{\mathscr {G}}}({\hat{\lambda }}^{p^{a-1}}){{\mathscr {G}}}({\hat{\lambda }}^{p^{a-2}})\cdots {{\mathscr {G}}}({\hat{\lambda }})\big ) \end{aligned}$$

is a factor of \(\det (I-t\alpha ^*_{{\hat{\lambda }}}\mid B)\).

Proof

Using the notation of (8.9), we have by (8.7)

$$\begin{aligned} \alpha ^*\big ({{\mathscr {G}}}(\varLambda ,x)\big ) = p{{\mathscr {G}}}(\varLambda ){{\mathscr {G}}}(\varLambda ,x). \end{aligned}$$
(10.6)

Iterating this gives for all \(m\ge 0\)

$$\begin{aligned} (\alpha ^*)^m\big ({{\mathscr {G}}}(\varLambda ,x)\big ) = p^m{{\mathscr {G}}}(\varLambda ^{p^{m-1}}){{\mathscr {G}}}(\varLambda ^{p^{m-2}})\cdots {{\mathscr {G}}}(\varLambda ){{\mathscr {G}}}(\varLambda ,x). \end{aligned}$$
(10.7)
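For instance, the case \(m=2\) of (10.7) follows from (10.6) and the semilinearity \(\alpha ^*(A(\varLambda )\xi ) = A(\varLambda ^p)\alpha ^*(\xi )\) used in (8.7):

$$\begin{aligned} (\alpha ^*)^2\big ({{\mathscr {G}}}(\varLambda ,x)\big ) = \alpha ^*\big (p{{\mathscr {G}}}(\varLambda ){{\mathscr {G}}}(\varLambda ,x)\big ) = p{{\mathscr {G}}}(\varLambda ^p)\,\alpha ^*\big ({{\mathscr {G}}}(\varLambda ,x)\big ) = p^2{{\mathscr {G}}}(\varLambda ^p){{\mathscr {G}}}(\varLambda ){{\mathscr {G}}}(\varLambda ,x), \end{aligned}$$

and the general case follows by induction on m.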

Evaluate \({{\mathscr {G}}}(\varLambda ,x)\) at \(\varLambda = {\hat{\lambda }}\):

$$\begin{aligned} {{\mathscr {G}}}({\hat{\lambda }},x) = \big ( {{\mathscr {G}}}^{(1)}({\hat{\lambda }},x),\cdots ,{{\mathscr {G}}}^{(M)}({\hat{\lambda }},x)\big ), \end{aligned}$$

where by (8.10)

$$\begin{aligned} {{\mathscr {G}}}^{(i)}({\hat{\lambda }},x)&= \sum _{u\in {{\mathscr {M}}}_-} {{\mathscr {G}}}_u^{(i)}({\hat{\lambda }})\gamma _0^{u_{n+1}} x^u \\&= \sum _{u\in {{\mathscr {M}}}_-} \big (\gamma _0^{-(p-1)u_{n+1}}{{\mathscr {G}}}_u^{(i)}({\hat{\lambda }})\big )\gamma _0^{pu_{n+1}}x^u. \end{aligned}$$

Since \(\gamma _0^{-(p-1)u_{n+1}}\rightarrow 0\) as \(u\rightarrow \infty\), this expression lies in B. One checks that the specialization of the left-hand side of (10.7) with \(m=a\) at \(\varLambda = {\hat{\lambda }}\) is \(\alpha _{{\hat{\lambda }}}^*\big ({{\mathscr {G}}}({\hat{\lambda }},x)\big )\), so setting \(m=a\) and \(\varLambda = {\hat{\lambda }}\) in (10.7) gives

$$\begin{aligned} \alpha ^*_{{\hat{\lambda }}}\big ({{\mathscr {G}}}({\hat{\lambda }},x)\big ) = q{{\mathscr {G}}}({\hat{\lambda }}^{p^{a-1}}){{\mathscr {G}}}({\hat{\lambda }}^{p^{a-2}})\cdots {{\mathscr {G}}}({\hat{\lambda }}){{\mathscr {G}}}({\hat{\lambda }},x). \end{aligned}$$
(10.8)

We have proved that \({{\mathscr {G}}}({\hat{\lambda }},x)\) is a vector of M elements of B and that the action of \(\alpha ^*_{{\hat{\lambda }}}\) on these M elements is represented by the matrix

$$\begin{aligned} q{{\mathscr {G}}}({\hat{\lambda }}^{p^{a-1}}){{\mathscr {G}}}({\hat{\lambda }}^{p^{a-2}})\cdots {{\mathscr {G}}}({\hat{\lambda }}). \end{aligned}$$

This implies the assertion of the proposition. \(\square\)