1 Introduction and Results

This paper is the second in a series devoted to the development of a rigorous renormalisation group method. In [5], we defined a normed algebra \(\mathcal {N}\) of functionals of the fields. The fields can be bosonic, or fermionic, or both, and in most of this paper there is no distinction between these possibilities. The algebra \(\mathcal {N}\) is equipped with the \(T_\phi \) semi-norm, which is defined in terms of a normed space \(\varPhi \) of test functions. In the renormalisation group method, a sequence of test function spaces \(\varPhi _j\) is chosen, with corresponding normed algebras \(\mathcal {N}_j\), and there is a dynamical system whose trajectories evolve through these normed algebras in the sequence \(\mathcal {N}_0 \rightarrow \mathcal {N}_1 \rightarrow \mathcal {N}_2 \rightarrow \cdots \). The dimension of the dynamical system is unbounded, but a finite number of local polynomials in the fields represent the relevant (expanding) and marginal (neutral) directions for the dynamical system. These local polynomials play a central role in the renormalisation group approach.

In this paper, we develop a general method for the extraction from an element \(F \in \mathcal {N}\) of a local polynomial \(\mathrm{Loc} _XF\), localised on a spatial region \(X\), that captures the relevant and marginal parts of \(F\). We also prove norm estimates which show that the norm of \(\mathrm{Loc} _XF\) is not much larger than the norm of \(F\), while the norm of \(F-\mathrm{Loc} _XF\) is substantially smaller than the norm of \(F\). The latter fact, which is crucial, indicates that \(\mathrm{Loc} _XF\) has encompassed the important part of \(F\), leaving the irrelevant remainder \(F-\mathrm{Loc} _XF\). The method used in our construction of \(\mathrm{Loc} _XF\) bears some relation to ideas in [8].

This paper is organised as follows. Section 1 contains the principal definitions and statements of results, as well as some of the simpler proofs. More substantial proofs are deferred to Sect. 2. Section 3 contains estimates for lattice Taylor expansions; these play an essential role in the proofs of Propositions 1.11 and 1.12, which provide the norm estimates on \(\mathrm{Loc} _XF\) and \(F-\mathrm{Loc} _XF\).

1.1 Fields and Test Functions

We recall some concepts and notation from [5].

Let \({\varLambda } = { {{\mathbb Z}}^d }/(mR{\mathbb Z})\) denote the \(d\)-dimensional discrete torus of (large) period \(mR\) for integers \(R \ge 2\) and \(m \ge 1\). In [5], we introduced an index set \(\varvec{\Lambda }= \varvec{\Lambda }_b \sqcup \varvec{\Lambda }_f\). The set \(\varvec{\Lambda }_b\) is itself a disjoint union of sets \(\varvec{\Lambda }_b^{(i)}\) (\(i=1,\ldots , s_b\)) corresponding to different species of boson fields. Each \(\varvec{\Lambda }_b^{(i)}\) is either a finite disjoint union of copies of \({\varLambda }\), with each copy representing a distinct field component for that species, or is \({\varLambda } \sqcup \bar{\varLambda }\) when a complex field species is intended. The set \(\varvec{\Lambda }_f\) has the same structure, with possibly a different number \(s_f\) of fermion field species.

An element of \({\mathbb R}^{\varvec{\Lambda }_b}\) is called a boson field, and can be written as \(\phi = (\phi _{x})_{x \in \varvec{\Lambda }_{b}}\). Let \(\mathcal {R}=\mathcal {R}(\varvec{\Lambda }_b)\) denote the ring of functions from \({\mathbb R}^{\varvec{\Lambda }_b}\) to \(\mathbb {C}\) having at least \(p_\mathcal {N}\) continuous derivatives, where \(p_\mathcal {N}\) is fixed. The fermion field \(\psi = (\psi _{y})_{y \in \varvec{\Lambda }_{f}}\) is a set of anticommuting generators for an algebra \(\mathcal {N}=\mathcal {N}(\varvec{\Lambda })\) over the ring \(\mathcal {R}\). By definition (see [5]), \(\mathcal {N}\) consists of elements \(F\) of the form

$$\begin{aligned} F = \sum _{y \in \vec {\varvec{\Lambda }}_f^*} \frac{1}{y!} F_y \psi ^y , \end{aligned}$$
(1.1)

where each coefficient \(F_{y}\) is an element of \(\mathcal {R}\). We will use test functions \(g : \vec {\varvec{\Lambda }}^* \rightarrow \mathbb {C}\) as defined in [5]. Also, given a boson field \(\phi \), we will use the bilinear pairing between elements of \(\mathcal {N}\) and test functions defined in [5] and written as

$$\begin{aligned} \langle F,g \rangle _\phi = \sum _{z \in \vec {\varvec{\Lambda }}^*} \frac{1}{z!} F_z(\phi )g_z. \end{aligned}$$
(1.2)

For our present purposes, we distinguish between the boson and fermion fields only through the dependence of the pairing on the boson field \(\phi \). When the distinction is unimportant, we use \(\varphi \) to denote both kinds of fields, and identify \(\vec {\varvec{\Lambda }}\) with \({\varLambda } \times \{1,2,\dots ,p_{\varvec{\Lambda }} \}\), where \(p_{\varvec{\Lambda }}\) is the number of copies of \({\varLambda }\) comprising \(\vec {\varvec{\Lambda }}\). This \(p_{\varvec{\Lambda }}\) is given by the sum, over all species, of the number of components within a species. Thus we can write the fields all evaluated at \(x \in {\varLambda }\) as the sequence \(\varphi (x) = (\varphi _{1}(x),\dots ,\varphi _{p_{\varvec{\Lambda }}}(x))\).

1.2 Local Monomials and Local Polynomials

Let \(e_{1},\dots ,e_{d}\) denote the standard unit vectors in \({ {{\mathbb Z}}^d }\), so that

$$\begin{aligned} \mathcal {U}= \{\pm e_{1},\dots ,\pm e_{d}\} \end{aligned}$$
(1.3)

is the set of all \(2d\) unit vectors. For \(e \in \mathcal {U}\) and \(f:{\varLambda }\rightarrow \mathbb {C}\), the difference operator is given by

$$\begin{aligned} \nabla ^{e} f(x)=f(x+e) - f(x). \end{aligned}$$
(1.4)

When \(e\) is one of the standard unit vectors \(\{e_{1},\dots ,e_{d} \}\), we refer to \(\nabla ^{e}\) as a forward derivative. When \(e\) is the negative of a standard unit vector we refer to \(\nabla ^{e}\) as a backward derivative, although it is the negative of a conventional backward derivative. We allow \(2d\) directions in \(\mathcal {U}\), rather than only \(d\), so as not to break lattice symmetries by favouring forward derivatives over backward derivatives. This introduces redundancy expressed by the identity

$$\begin{aligned} \nabla ^{e} + \nabla ^{-e} = - \nabla ^{-e}\nabla ^{e} , \end{aligned}$$
(1.5)

which is straightforward to verify by evaluating both sides on a function \(f\). For \(\alpha \in {\mathbb N}_0^\mathcal {U}\) with components \(\alpha (e) \in {\mathbb N}_0\), we write

$$\begin{aligned} \nabla ^\alpha = \prod _{e \in \mathcal {U}} (\nabla ^{e})^{\alpha (e)} , \quad \quad \nabla ^{0} = \mathrm {Id}, \end{aligned}$$
(1.6)

where the product is independent of the order of its factors.
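The identity (1.5) and the order-independence of the factors in (1.6) can both be checked mechanically. The following sketch (an illustration only, not part of the paper) does so for an arbitrary function on a one-dimensional periodic lattice.

```python
def nabla(f, e):
    """Discrete difference (1.4) on Z/(n Z): (nabla^e f)(x) = f(x+e) - f(x)."""
    n = len(f)
    return [f[(x + e) % n] - f[x] for x in range(n)]

f = [x ** 3 - 2 * x for x in range(7)]      # an arbitrary function on the torus

# identity (1.5): (nabla^e + nabla^{-e}) f = -(nabla^{-e} nabla^{e}) f
lhs = [a + b for a, b in zip(nabla(f, 1), nabla(f, -1))]
rhs = [-v for v in nabla(nabla(f, 1), -1)]
assert lhs == rhs

# the factors in the product (1.6) commute
assert nabla(nabla(f, 1), -1) == nabla(nabla(f, -1), 1)
```

Both assertions hold for any choice of \(f\), since each is a pointwise identity among shifts of \(f\).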

A local monomial \(M\) is a finite product of fields and their derivatives, all to be evaluated at the same point in \({\varLambda }\) (whose value we suppress). To be more precise, for \(m= (m_{1},\dots ,m_{p (m)})\) a finite sequence whose components \(m_{k} = (i_{k},\alpha _{k})\) are elements of \(\{1,\cdots ,p_{\varvec{\Lambda }} \}\times {\mathbb N}_0^\mathcal {U}\), we define

$$\begin{aligned} M_{m} = \prod _{k=1}^{p (m)} \nabla ^{\alpha _{k}}\varphi _{i_{k}} = \big (\nabla ^{\alpha _{1}}\varphi _{i_{1}}\big ) \cdots \big (\nabla ^{\alpha _{p(m)}}\varphi _{i_{p(m)}}\big ). \end{aligned}$$
(1.7)

The product in \(M_m\) is taken in the same order as the components \(i_k\) in \(m\). For example, if the sequence \(m\) is given by \(m=((1,\alpha _1),(1,\alpha _1),(1,\alpha _2),(1,\alpha _2), (1,\alpha _2),(2,\alpha _3))\) with \(\alpha _1 < \alpha _2\), then

$$\begin{aligned} M_m = (\nabla ^{\alpha _1}\varphi _1)^2 (\nabla ^{\alpha _2}\varphi _1)^3 \nabla ^{\alpha _3}\varphi _2. \end{aligned}$$
(1.8)

It is convenient to denote the number of times \(m\) contains a given pair \((i,\alpha )\) as \(n_{(i,\alpha )}=n_{(i,\alpha )}(m)\); in (1.8) we have \(n_{(1,\alpha _1)}=2\), \(n_{(1,\alpha _2)}=3\), \(n_{(2,\alpha _3)}=1\), and all other \(n_{(i,\alpha )}\) are zero. For a fermionic species \(i\), \(M_{m}=0\) when \(n_{(i,\alpha )} > 1\). Permutations of the order of the components of \(m\) give plus or minus the same monomial. We now define a subset \(\mathfrak {m}\) of sequences such that every non-zero monomial (1.7) is represented by exactly one \(m \in \mathfrak {m}\). First we fix an order \(\le \) on the elements of \({\mathbb N}_0^\mathcal {U}\). Let \(\mathfrak {m}\) be the set whose elements are finite sequences as defined above such that: (i) \(i_{1} \le \cdots \le i_{p (m)}\); (ii) for \(i\) a fermionic species, \(n_{(i,\alpha )} \in \{0,1\}\); (iii) for \(k<k'\) with \(i_{k}=i_{k'}\), \(\alpha _{k}\le \alpha _{k'}\). Conditions (i) and (iii) together amount to imposing lexicographic order on the components of a sequence \(m\).
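The passage from an arbitrary sequence of factors to its canonical representative in \(\mathfrak {m}\) can be sketched as follows (a hypothetical helper, not from the paper): sort the factors \((i_k,\alpha_k)\) lexicographically, flip the sign each time two fermionic factors are transposed, and return zero when a fermionic pair is repeated.

```python
def canonical(factors, fermionic):
    """factors: list of (species, alpha) with alpha a sortable tuple.
    fermionic: set of fermionic species indices.
    Returns (sign, sorted_factors), or (0, None) if the monomial vanishes."""
    sign = 1
    fs = list(factors)
    # bubble sort, so that each exchange is a single transposition
    for i in range(len(fs)):
        for j in range(len(fs) - 1 - i):
            if fs[j] > fs[j + 1]:
                # transposing two fermionic factors reverses the sign;
                # bosonic factors commute with everything
                if fs[j][0] in fermionic and fs[j + 1][0] in fermionic:
                    sign = -sign
                fs[j], fs[j + 1] = fs[j + 1], fs[j]
    for a, b in zip(fs, fs[1:]):
        if a == b and a[0] in fermionic:
            return 0, None  # a repeated anticommuting generator squares to zero
    return sign, fs
```

For example, exchanging two distinct fermionic factors yields the sign \(-1\), while the same exchange of bosonic factors yields \(+1\).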

The degree of a local monomial \(M_{m}\) is the length \(p=p (m)\) of the sequence \(m\in \mathfrak {m}\). For \(m\) equal to the empty sequence \(\varnothing \) of length \(0\), we set \(M_\varnothing = 1\), and we include \(m=\varnothing \) in \(\mathfrak {m}\). In addition, we specify a map which associates to each field species a value in \((0,+\infty ]\) called the scaling dimension (also known as engineering dimension), which we abbreviate as the dimension of the field species. Following tradition, for \(i = 1,\ldots , p_{\varvec{\Lambda }}\), we denote the dimension of the species of the field \(\varphi _i\) by \([\varphi _i]\). This dimension does not depend on the value of the field, only on its species. Then we define the dimension of \(M_{m}\) by

$$\begin{aligned}{}[M_{m}] = \sum _{k=1}^{p (m)} \big ([\varphi _{i_{k}}] + |\alpha _{k}|_1 \big ) , \end{aligned}$$
(1.9)

with the degenerate case \([M_\varnothing ]=[1]=0\).

Let \(\mathfrak {m}_+\) denote the subset of \(\mathfrak {m}\) for which only forward derivatives occur. Given \(d_{+} \ge 0\), let \(\mathcal {M}_+\) denote the set of monomials \(M_{m}\) with \(m\in \mathfrak {m}_+\), such that

$$\begin{aligned}{}[M_{m}] \le d_{+}. \end{aligned}$$
(1.10)

Example 1.1

Consider the case of a single real-valued boson field \(\varphi \) of dimension \([\varphi ]=\frac{d-2}{2}\), with no fermion field. The space \(\mathcal {N}_j\) is reached after \(j\) renormalisation group steps have been completed. Each renormalisation group step integrates out a fluctuation field, and the remaining field becomes increasingly smooth and small in magnitude. A basic principle is that there is an \(L>0\) such that \(\varphi _x\) will typically have magnitude approximately \(L^{-j[\varphi ]}\), and that moreover \(\varphi \) is roughly constant over distances of order \(L^{j}\). A block \(B\) in \({ {{\mathbb Z}}^d }\), of side \(L^{j}\), contains \(L^{dj}\) points, so the above assumptions lead to the rough correspondence

$$\begin{aligned} \sum _{x \in B} |\varphi _{x}|^p \approx L^{(d-p[\varphi ])j}. \end{aligned}$$
(1.11)

In the case of \(d=4\), for which \([\varphi ]=1\), this scales down when \(p>4\) and \(\varphi ^{p}\) is said to be irrelevant. The power \(p=4\) neither decays nor grows, and \(\varphi ^4\) is called marginal. Powers \(p<4\) grow with the scale, and \(\varphi ^{p}\) is said to be relevant. The assumption that \(\varphi \) is roughly constant over distances of order \(L^{j}\) translates into an assumption that each spatial derivative of \(\varphi \) produces a factor \(L^{-j}\), so that, e.g., \(\sum _{x \in B} |\nabla ^\alpha \varphi _{x}|^p \approx L^{(d-p[\varphi ]-p|\alpha |_1)j}\). Thus, in dimension \(d=4\) with \(d_+=4\), \(\mathcal {M}_+\) consists of the relevant monomials

$$\begin{aligned} 1,\;\; \varphi ,\;\; \varphi ^2,\;\; \varphi ^3,\;\; \nabla _i\varphi ,\;\; \nabla _j\nabla _i\varphi ,\;\;\varphi \nabla _i \varphi , \end{aligned}$$
(1.12)

together with the marginal monomials

$$\begin{aligned} \varphi ^4,\;\; \nabla _k\nabla _j\nabla _i \varphi , \;\; \varphi \nabla _j\nabla _i \varphi , \;\; \varphi ^2\nabla _i \varphi , \;\; \nabla _i \varphi \nabla _j \varphi , \end{aligned}$$
(1.13)

where each \(\nabla _l\) represents forward differentiation in the direction \(e_l\in \{+e_1,\ldots ,+e_d\}\). \(\square \)
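The counting in Example 1.1 can be reproduced by a short enumeration (an illustrative script, not part of the paper). Classify a monomial in a single field with \([\varphi ]=1\) by the multiset of derivative counts on its factors; by (1.9), the dimension of a shape \((a_1,\dots ,a_p)\) is \(\sum _k (1+a_k)\), and the relevant and marginal shapes are those of dimension \(<4\) and \(=4\) respectively.

```python
from itertools import combinations_with_replacement

def shapes(max_dim):
    """Multisets of per-factor derivative counts with dimension <= max_dim,
    for a single field of dimension [phi] = 1."""
    found = []
    for p in range(max_dim + 1):                         # number of factors
        for a in combinations_with_replacement(range(max_dim + 1), p):
            if p + sum(a) <= max_dim:                    # dimension of the shape
                found.append(a)
    return found

d_plus = 4
relevant = [a for a in shapes(d_plus) if len(a) + sum(a) < d_plus]
marginal = [a for a in shapes(d_plus) if len(a) + sum(a) == d_plus]
print(len(relevant), len(marginal))  # -> 7 5
```

Each shape stands for the family of monomials obtained by assigning lattice directions to its derivatives; the seven relevant shapes match the seven entries of (1.12), and the five marginal shapes include \((1,1)\), the shape of \(\nabla _i\varphi \nabla _j\varphi \), alongside \(\varphi ^4\), \(\nabla ^3\varphi \), \(\varphi \nabla ^2\varphi \) and \(\varphi ^2\nabla \varphi \).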

Let \(\mathcal {P}\) be the vector space over \({\mathbb C}\) freely generated by all the monomials \((M_{m})_{ m \in \mathfrak {m}}\) of finite dimension. A polynomial \(P \in \mathcal {P}\) has a unique representation

$$\begin{aligned} P = \sum _{m \in \mathfrak {m}} a_{m} M_{m} , \end{aligned}$$
(1.14)

where all but finitely many coefficients \(a_{m} \in {\mathbb C}\) are zero. Similarly, we define \(\mathcal {P}_+\) to be the vector subspace of \(\mathcal {P}\) freely generated by the monomials \((M_{m})_{ m \in \mathfrak {m}_+}\) of finite dimension. Given \(x\in {\varLambda }\), a polynomial \(P\in \mathcal {P}\) is mapped to an element \(P_{x} \in \mathcal {N}\) by evaluating the fields in \(P\) at \(x\). More generally, for any \(X \subset {\varLambda }\) and \(P \in \mathcal {P}\), we define an element of \(\mathcal {N}\) by

$$\begin{aligned} P(X) = \sum _{x \in X} P_x. \end{aligned}$$
(1.15)

For a real number \(t\) we define \(\mathcal {P}_{t}\) to be the subspace of \(\mathcal {P}\) spanned by the monomials with \([M_{m}]\ge t\). Let

$$\begin{aligned} \mathfrak {v}_+ = \{ m \in \mathfrak {m}_+ : [M_m] \le d_+\} = \{ m \in \mathfrak {m}_+ : M_m \in \mathcal {M}_+ \}, \end{aligned}$$
(1.16)

and let \(\mathcal {V}_+\) denote the vector subspace of \(\mathcal {P}_+\) generated by the monomials in \(\mathcal {M}_+\). By definition, the set \(\mathfrak {v}_+\) is finite. The use of only forward derivatives to define \(\mathcal {V}_+\) breaks the Euclidean symmetry of \({\varLambda }\). We wish to replace \(\mathcal {V}_+\) by a symmetric family of polynomials, and this leads us to consider symmetry in more detail.

Let \(\Sigma \) be the group of permutations of \(\mathcal {U}\) that are induced by automorphisms of \({ {{\mathbb Z}}^d }\) fixing the origin; these are the signed permutations of the coordinate axes. Let \(\Sigma _{\text {axes}}\) be the abelian subgroup of \(\Sigma \) whose elements fix \(\{e_{i},-e_{i} \}\) for each \(i=1,\dots ,d\). In other words, elements of \(\Sigma _{\text {axes}}\) act on \(\mathcal {U}\) by possibly reversing the signs of the unit vectors. Let \(\Sigma _{+}\) be the subgroup of permutations \(\sigma \) that map \(\{e_1,\ldots , e_d\}\) onto itself and obey \(\sigma (-e) = -\sigma (e)\). Then (i) \(\Sigma _{\text {axes}}\) is a normal subgroup of \(\Sigma \), (ii) every element of \(\Sigma \) is the product of an element of \(\Sigma _{\text {axes}}\) with an element of \(\Sigma _{+}\), and (iii) the intersection of the two subgroups is the identity. Therefore \(\Sigma \) is the semidirect product \(\Sigma = \Sigma _{\text {axes}}\rtimes \Sigma _{+}\).
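The semidirect-product structure can be verified exhaustively in a small case. The sketch below (an illustration only, not from the paper) realises the permutations of \(\mathcal {U}\) that extend to automorphisms of \({ {{\mathbb Z}}^d }\) — the signed permutations of the axes — for \(d=2\), and checks normality of the sign flips and the unique factorisation into a sign flip composed with an axis permutation.

```python
from itertools import permutations, product

d = 2

def basis(i, s):
    """The vector s * e_i in Z^d."""
    return tuple(s if j == i else 0 for j in range(d))

def apply(g, v):
    """Apply g, given by the images (g[0], ..., g[d-1]) of the basis vectors."""
    return tuple(sum(v[j] * g[j][i] for j in range(d)) for i in range(d))

def compose(g, h):
    """(g o h): images of the basis vectors under h, then g."""
    return tuple(apply(g, h[j]) for j in range(d))

def inverse(g):
    """Signed permutation matrices are orthogonal: inverse = transpose."""
    return tuple(tuple(g[i][j] for i in range(d)) for j in range(d))

Sigma = [tuple(basis(pi[j], s[j]) for j in range(d))
         for pi in permutations(range(d)) for s in product((1, -1), repeat=d)]
axes = [g for g in Sigma if all(abs(g[j][j]) == 1 for j in range(d))]  # sign flips
plus = [g for g in Sigma if all(1 in row for row in g)]                # no sign change

# (i) normality of the sign flips; (ii)+(iii) unique factorisation g = a o p
assert all(compose(compose(g, a), inverse(g)) in axes for g in Sigma for a in axes)
assert sorted(compose(a, p) for a in axes for p in plus) == sorted(Sigma)
assert len(axes) * len(plus) == len(Sigma)
```

For \(d=2\) the group has \(2^2 \cdot 2! = 8\) elements, with \(|\Sigma _{\text {axes}}| = 4\) and \(|\Sigma _{+}| = 2\).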

An element \({\varTheta }\in \Sigma \) acts on elements of \({\mathbb N}_0^\mathcal {U}\) via its action on components, as \(({\varTheta }\alpha )(e) = \alpha ({\varTheta }(e))\). The action of \({\varTheta }\) on derivatives is then given by \({\varTheta }\nabla ^\alpha = \nabla ^{{\varTheta }\alpha }\). This allows us to define an action of the group \(\Sigma \) on \(\mathcal {P}\) by linear transformations, determined by the action

$$\begin{aligned} M_{m} \mapsto {\varTheta } M_{m} = \prod _{k=1}^{p (m)} \nabla ^{{\varTheta }\alpha _{k}}\varphi _{i_{k}} = M_{{\varTheta } m} \end{aligned}$$
(1.17)

on the monomials, where \({\varTheta } m \in \mathfrak {m}\) is defined by the action of \({\varTheta }\) on the components \(\alpha _k\) of \(m\). We say that \(P \in \mathcal {P}\) is \(\Sigma _{\text {axes}}\)-covariant if there is a homomorphism \(\lambda (\cdot , P):\Sigma _{\text {axes}}\rightarrow \{-1,1\}\) such that

$$\begin{aligned} {\varTheta } P = \lambda ({\varTheta },P)P, \quad \quad {\varTheta } \in \Sigma _{\text {axes}}. \end{aligned}$$
(1.18)

As the notation indicates, the homomorphism can depend on \(P\).

The polynomials in \(\mathcal {V}_+\) contain only forward derivatives and hence do not form an invariant subspace of \(\mathcal {P}\) under the action of \(\Sigma \). We wish to replace \(\mathcal {V}_+\) by a suitable \(\Sigma \)-invariant subspace of \(\mathcal {P}\), which we will call \(\mathcal {V}\). As a first step in this process, we define a map that associates to a monomial \(M \in \mathcal {M}_+\) a polynomial \(P =P(M) \in \mathcal {P}\), by

$$\begin{aligned} P (M) = |\Sigma _{\text {axes}}|^{-1} \sum _{{\varTheta } \in \Sigma _{\text {axes}}} \lambda ({\varTheta },M) {\varTheta } M \end{aligned}$$
(1.19)

where \(\lambda ({\varTheta },M)=-1\) if the number of derivatives in \(M\) that are reversed by \({\varTheta }\) is odd and otherwise \(\lambda ({\varTheta },M)=1\). This is a homomorphism: for \({\varTheta },{\varTheta }' \in \Sigma _{\text {axes}}\), \(\lambda ({\varTheta }{\varTheta }',M)=\lambda ({\varTheta },M)\lambda ({\varTheta }',M)\). Note that \(P(M)\) consists of a linear combination of monomials whose degrees and dimensions are all equal to those of \(M\). We claim that for any \(M \in \mathcal {M}_+\), the polynomial \(P=P(M)\) of (1.19) obeys: \(P (M)\) is \(\Sigma _{\text {axes}}\)-covariant; \(M-P (M) \in \mathcal {P}_{t}\) for some \(t>[M]\) up to terms that vanish under the redundancy relation (1.5); and \(P ({\varTheta } M)={\varTheta } P (M)\) for \({\varTheta } \in \Sigma _{+}\). The proof of this fact is deferred to Sect. 2.3.

To enable the use of the redundancy relation (1.5), let \(\mathcal {R}_{1}\) be the vector subspace of \(\mathcal {P}\) generated by the relation (1.5); this is defined more precisely as follows. First, \(0 \in \mathcal {R}_1\). Given nonzero \(P \in \mathcal {P}\), we recursively replace any occurrence of \(\nabla ^e\nabla ^{-e}\) in any monomial in \(P\) by the equivalent expression \(-(\nabla ^e + \nabla ^{-e})\). Each replacement produces monomials of strictly lower dimension, so this procedure eventually terminates. If the resulting polynomial is the zero polynomial, then \(P \in \mathcal {R}_1\), and otherwise \(P \not \in \mathcal {R}_1\). The claim in the previous paragraph shows the existence of the polynomial \(\hat{P}\) of the next definition.
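The reduction deciding membership of \(\mathcal {R}_1\) can be made concrete in one dimension. The sketch below (a hypothetical helper, not from the paper) represents an operator as a linear combination of terms \((\nabla ^{e})^p(\nabla ^{-e})^q\) and applies the rewrite from (1.5) until no term contains both \(\nabla ^{e}\) and \(\nabla ^{-e}\); the input lies in \(\mathcal {R}_1\) exactly when its normal form is zero.

```python
from collections import Counter

def normal_form(terms):
    """terms: dict {(p, q): coeff} meaning coeff * (nabla^e)^p (nabla^-e)^q.
    Rewrite nabla^e nabla^-e -> -(nabla^e + nabla^-e) until no mixed terms remain."""
    out = Counter()
    work = list(terms.items())
    while work:
        (p, q), c = work.pop()
        if c == 0:
            continue
        if p > 0 and q > 0:
            # the rewrite lowers p + q by one, so the loop terminates
            work.append(((p, q - 1), -c))
            work.append(((p - 1, q), -c))
        else:
            out[(p, q)] += c
    return {k: v for k, v in out.items() if v != 0}
```

For instance, \(\nabla ^{e}\nabla ^{-e} + \nabla ^{e} + \nabla ^{-e}\) reduces to the zero polynomial, as required by (1.5), so it belongs to \(\mathcal {R}_1\).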

Definition 1.2

To each monomial \(M\in \mathcal {M}_+\) we choose a polynomial \(\hat{P} (M)\in \mathcal {P}\), which is a linear combination of monomials of the same degree and dimension as \(M\), such that

$$\begin{aligned} (\mathrm{{i}})&\quad \hat{P}(M) \;\text {is}\; \Sigma _{\text {axes}} \hbox {-covariant},\\ (\mathrm{{ii}})&\quad M-\hat{P} (M) \in \mathcal {P}_{t} + \mathcal {R}_{1}\; {\hbox {for some }}\; t>[M], \\ (\mathrm{{iii}})&\quad {\varTheta } \hat{P} (M) = \hat{P} ({\varTheta } M)\; \text {for}\; {\varTheta } \in \Sigma _{+}. \end{aligned}$$

Let \(\mathcal {V}\) be the vector subspace of \(\mathcal {P}\) spanned by the polynomials \(\{\hat{P} (M): M \in \mathcal {M}_+ \}\). We also define \(\mathcal {V}(X) = \{P(X) : P\in \mathcal {V}\}\), which is a subset of \(\mathcal {N}\).

Note that \(\mathcal {V}\) depends on our choice of \(\hat{P}(M)\) for each \(M \in \mathcal {M}_+\), but is spanned by monomials of dimension at most \(d_+\). The restriction of \({\varTheta }\) to \(\Sigma _+\) in item (iii) ensures that \({\varTheta } M \in \mathcal {M}_+\) when \(M \in \mathcal {M}_+\), so that \(\hat{P}({\varTheta } M)\) makes sense.

Example 1.3

In practice, we may prefer to choose \(\hat{P}\) satisfying the conditions of Definition 1.2 using a formula other than (1.19). For example, for \(e \in \mathcal {U}\) let \(M_{e} = \varphi \nabla ^{e}\nabla ^{e} \varphi \). The formula (1.19) gives

$$\begin{aligned} P (M_{e}) = (1/2)\left( \varphi \nabla ^{e}\nabla ^{e} \varphi + \varphi \nabla ^{-e}\nabla ^{-e} \varphi \right) , \end{aligned}$$
(1.20)

but via (1.5) the simpler choice \(\hat{P} (M_{e}) = - \varphi \nabla ^{-e}\nabla ^{e} \varphi \) also satisfies the conditions of Definition 1.2.
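As an independent check (a numerical illustration, not in the paper), the two candidates in Example 1.3 differ by an operator carrying two additional derivatives, hence of dimension greater than \([M_e]\): on any periodic lattice function, \(\tfrac{1}{2}(\nabla ^{e}\nabla ^{e} + \nabla ^{-e}\nabla ^{-e}) - (-\nabla ^{-e}\nabla ^{e}) = \tfrac{1}{2}\nabla ^{e}\nabla ^{e}\nabla ^{-e}\nabla ^{-e}\), consistent with Definition 1.2(ii).

```python
def nabla(f, e):
    """Discrete difference (1.4) on Z/(n Z)."""
    n = len(f)
    return [f[(x + e) % n] - f[x] for x in range(n)]

f = [x ** 2 % 11 for x in range(9)]   # arbitrary periodic test function

half_sum = [0.5 * (a + b)
            for a, b in zip(nabla(nabla(f, 1), 1), nabla(nabla(f, -1), -1))]
laplacian = [-v for v in nabla(nabla(f, 1), -1)]        # -nabla^{-e} nabla^{e} f
extra = [0.5 * v for v in nabla(nabla(nabla(nabla(f, 1), 1), -1), -1)]

# the two choices of P-hat(M_e) act identically up to the 4-derivative term
assert all(abs(h - l - e) < 1e-9 for h, l, e in zip(half_sum, laplacian, extra))
```

The four-derivative remainder has dimension \([\varphi ]+[\varphi ]+4\), strictly above \([M_e]=2[\varphi ]+2\), which is why either choice is admissible in Definition 1.2.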

Proposition 1.4

The subspace \(\mathcal {V}\) is a \(\Sigma \)-invariant subspace of \(\mathcal {P}\).

Proof

By Definition 1.2(iii), the set \(\{\hat{P} (M):M \in \mathcal {M}_+\}\) is mapped to itself by \(\Sigma _{+}\). Since \(\hat{P} (M)\) is \(\Sigma _{\text {axes}}\)-covariant, \(\mathcal {V}\) is invariant under \(\Sigma _{+}\) and \(\Sigma _{\text {axes}}\). Thus, since \(\Sigma = \Sigma _{\text {axes}}\rtimes \Sigma _{+}\), \(\mathcal {V}\) is invariant under \(\Sigma \). \(\square \)

1.3 The Operator loc

We would like to define polynomial functions on subsets of the torus, and for this we need to restrict to subsets which do not “wrap around” the torus. The restricted subsets we use are called coordinate patches and are defined as follows. Fix an integer \(p_{\varPhi } \ge 0\) and let \(\bar{p}_{\varPhi } = \max \{1,p_{\varPhi }\}\). For a nonempty subset \(X\subset {\varLambda }\), let \(X^{(1)} \supset X\) be the set of all points within \(L^{\infty }\) distance \(\bar{p}_{\varPhi }\) of \(X\). This definition is such that the values of derivatives \(\nabla ^\alpha g_z\) of a test function \(g\) can be computed when all components of \(z\) lie in \(X\), for all \(\alpha \) with \(|\alpha |_\infty \le p_{\varPhi }\), knowing only the values of \(g_z\) when all components of \(z\) lie in \(X^{(1)}\). For a nonempty subset \({\varLambda } '\subset {\varLambda }\), a map \(z = (x_{1}, \dots , x_{d})\) from \({\varLambda } '^{(1)}\) to \({ {{\mathbb Z}}^d }\) is said to be a coordinate on \({\varLambda } '\) if: (i) \(z\) is injective and maps nearest-neighbour points in \({\varLambda } '^{(1)}\) to nearest-neighbour points in \({ {{\mathbb Z}}^d }\), and (ii) nearest-neighbour points in the image \(z ({\varLambda } '^{(1)})\) of \({\varLambda } '^{(1)}\) are mapped by \(z^{-1}\) to nearest-neighbour points in \({\varLambda } '^{(1)}\). We say that a nonempty subset \({\varLambda } '\) of \({\varLambda }\) is a coordinate patch if there is a coordinate \(z\) on \({\varLambda } '\) such that \(z ({\varLambda } ')\) is a rectangle \(\{x \in { {{\mathbb Z}}^d }:|x_{i}| \le r_{i}, i= 1,\dots ,d\}\) for some nonnegative integers \(r_{1},\dots ,r_{d}\).

By “cutting open” the torus \({\varLambda }\), all rectangles with \(\max _i 2 (r_{i}+\bar{p}_{\varPhi })\) strictly smaller than the period of \({\varLambda }\) are clearly coordinate patches. By definition, the intersection of two coordinate patches is also a coordinate patch, unless it is empty. If \(z\) and \(\tilde{z}\) are both coordinates for a coordinate patch then there is a Euclidean automorphism \(E\) of \({ {{\mathbb Z}}^d }\) that fixes the origin and is such that \(\tilde{z} = Ez\). This is proved by noticing that the composition \(Z = \tilde{z}\circ z^{-1}\) is well defined on \(\{x \in { {{\mathbb Z}}^d }:\Vert x \Vert _{\infty } \le 1\}\), and therefore \(Z\) is a permutation of the set \(\mathcal {U}\) of unit vectors. The orthogonal transformation \(E\) that acts by the same permutation on \(\mathcal {U}\) is then an automorphism of \({ {{\mathbb Z}}^d }\) with the required properties.

Given a coordinate patch \({\varLambda }'\) with coordinate \(z\), and given \(\alpha = (\alpha _{1},\dots ,\alpha _{d})\) in \({\mathbb N}_0^{d}\), we define the monomial \(z^{\alpha } = x_{1}^{\alpha _{1}} \dots x_{d}^{\alpha _{d}}\), which is a function defined on \({\varLambda }'^{(1)}\). We will define a class of test functions \(\varPi = \varPi [{\varLambda } ']\) which are polynomials in each argument by specifying the monomials which span \(\varPi \). To a local monomial \(M_{m} \in \mathcal {M}_{+}\) in fields, as in (1.7), we associate a monomial \(p_{m}\in \varPi \) by replacing \(\nabla ^{\alpha _{k}}\varphi _{i_{k}}\) by \(z_{k}^{\alpha _{k}}\). Thus

$$\begin{aligned} p_{m} (z) = \prod _{k=1}^{p (m)} z_{k}^{\alpha _{k}} , \end{aligned}$$
(1.21)

which is a function defined on \({\varLambda }_{i_1}'^{(1)} \times \cdots \times {\varLambda }_{i_{p(m)}}'^{(1)}\). For the degenerate monomial \(m=\varnothing \), we set \(p_\varnothing =1\). We implicitly extend \(p_{m}\) by zero so that it becomes a test function defined on \(\vec {\varvec{\Lambda }}^*\). For example, we associate the monomial \( z_{1}^{\alpha _1} z_{2}^{\alpha _1} z_{3}^{\alpha _2} z_{4}^{\alpha _2} z_{5}^{\alpha _2} z_{6}^{\alpha _3}\) to the field monomial (1.8). However, we will also need the monomial \(z_{1}^{\alpha _2} z_{2}^{\alpha _2} z_{3}^{\alpha _2} z_{4}^{\alpha _3}z_{5}^{\alpha _1} z_{6}^{\alpha _1}\) which cannot be obtained from \(m \in \mathfrak {m}_{+}\) because the condition (iii) below (1.8) now requires \(\alpha _{2} \le \alpha _{3} \le \alpha _{1}\), whereas we chose \(\alpha _1<\alpha _2\) in (1.8). Therefore we define \(\bar{\mathfrak {m}}_{+}\) and \(\bar{\mathfrak {v}}_{+}\) by dropping the order condition (iii) in \(\mathfrak {m}_+\) and \(\mathfrak {v}_+\). The space \(\varPi =\varPi [{\varLambda } ']\) is the span of \(\{p_{m}: m \in \bar{\mathfrak {v}}_+\}\). Euclidean automorphisms of \({ {{\mathbb Z}}^d }\) that fix the origin act on \(\varPi \) and map it to itself, and therefore \(\varPi [{\varLambda } ']\) does not depend on the choice of coordinate on \({\varLambda } '\).

An alternative, equivalent characterisation of the monomials in \(\varPi [{\varLambda }']\) is as follows. We define the dimension of a monomial (1.21) to be its polynomial degree plus \(\sum _{k=1}^{p} [\varphi _{i_k}]\), i.e., \(\sum _{k=1}^{p}([\varphi _{i_k}]+|\alpha _k|_1)\), consistent with (1.9). Then we can define \(\varPi [{\varLambda }']\) to be the vector space spanned by the monomials (truncated outside \({\varLambda }'\) as above) whose dimension is at most \(d_+\).

In the following, we will also need the subspace \(S\varPi \) of \(\varPi \). This is the image of \(\varPi \) under the symmetry operator \(S\) defined in [5, Definition 3.4].

Recall the definition from [5] that, given \(X \subset {\varLambda }\), \(\mathcal {N}(X)\) consists of those \(F \in \mathcal {N}\) such that \(F_z(\phi )=0\) for all \(\phi \) whenever any component of \(z\) lies outside of \(X\). For nonempty \(X \subset {\varLambda }\), we say \(F \in \mathcal {N}_{X}\) if there exists a coordinate patch \({\varLambda } '\) such that \(F \in \mathcal {N}({\varLambda }')\) and \(X \subset {\varLambda }'\). The condition \(F \in \mathcal {N}_X\) guarantees that neither \(X\) nor \(F\) “wrap around” the torus.

Proposition 1.5

For nonempty \(X \subset {\varLambda }\) and \(F \in \mathcal {N}_X\), there is a unique \(V \in \mathcal {V}\), depending on \(F\) and \(X\), such that

$$\begin{aligned} \langle F,g \rangle _{0} = \langle V (X),g \rangle _{0} \quad \quad \text {for all } g \in \varPi . \end{aligned}$$
(1.22)

The polynomial \(V\) does not depend on the choice of coordinate \(z\) or coordinate patch \({\varLambda }'\) implicit in the requirement \(F \in \mathcal {N}_X\), as long as \(X \subset {\varLambda }'\) and \(F \in \mathcal {N}({\varLambda }')\). Moreover, \(\mathcal {V}(X)\) and \(S\varPi \) are dual vector spaces under the pairing \(\langle \cdot ,\cdot \rangle _0\).

The proof of Proposition 1.5 is deferred to Sect. 2.1. It allows us to define our basic object of study in this paper, the map \(\mathrm{loc}_X\).

Definition 1.6

For nonempty \(X \subset {\varLambda }\) we define \(\mathrm{loc}_{X}: \mathcal {N}_{X} \rightarrow \mathcal {V}(X)\) by \(\mathrm{loc}_X F = V (X)\), where \(V\) is the unique element of \(\mathcal {V}\) such that (1.22) holds. For \(X = \varnothing \), we define \(\mathrm{loc}_{\varnothing }=0\).

By definition, the map \(\mathrm{loc}_X : \mathcal {N}_{X} \rightarrow \mathcal {V}(X)\) is a linear map.

1.4 Properties of loc

By definition, for nonempty \(X \subset {\varLambda }\) and \(F \in \mathcal {N}_{X}\),

$$\begin{aligned} \langle F,g \rangle _{0} = \langle \mathrm{loc}_XF,g \rangle _{0} \quad \quad \text {for all } g \in \varPi . \end{aligned}$$
(1.23)

Also, if \(F=V(X) \in \mathcal {V}(X)\) then trivially \(\langle F,g \rangle _0 =\langle V(X),g \rangle _0\) and hence the uniqueness in Definition 1.6 implies that \(\mathrm{loc}_XF=V(X)=F\). Thus \(\mathrm{loc}_X\) acts as the identity on \(\mathcal {V}(X)\). The following proposition shows that \(\mathrm{loc}\) behaves well under composition.

Proposition 1.7

For \(X,X' \subset {\varLambda }\) and \(F \in \mathcal {N}_{X\cup X'}\), excluding the case \(X'=\varnothing \not = X\),

$$\begin{aligned}&\mathrm{loc}_{X}\circ \mathrm{loc}_{X'} F = \mathrm{loc}_{X} F . \end{aligned}$$
(1.24)

In particular, \(\mathrm{loc}_{X}\circ (\mathrm {Id}- \mathrm{loc}_{X})=0\) on \(\mathcal {N}_X\).

Proof

If \(X=\varnothing \) then both sides are zero, so suppose that \(X,X'\not = \varnothing \). Let \(g \in \varPi \). By (1.23),

$$\begin{aligned} \langle \mathrm{loc}_{X}\circ \mathrm{loc}_{X'}F,g \rangle _0 =\langle \mathrm{loc}_{X'}F,g \rangle _0 = \langle F,g \rangle _0 = \langle \mathrm{loc}_{X}F,g \rangle _0. \end{aligned}$$
(1.25)

Since \(\mathrm{loc}_{X} \circ \mathrm{loc}_{X'}F\) and \(\mathrm{loc}_{X} F\) are both in \(\mathcal {V}(X)\), their equality follows from the uniqueness in Definition 1.6. \(\square \)
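Definition 1.6 and the idempotence in Proposition 1.7 can be illustrated by a deliberately simplified toy model (not the paper's construction: one real field with \([\varphi ]=1\), \(d_+=2\), one spatial dimension, no fermions, and only the degree-one part of the pairing). A degree-one functional is represented by a kernel \(F\), paired with a test function \(g\) by \(\langle F,g \rangle = \sum _z F_z g_z\); then \(\mathrm{loc}_X F\) is the kernel of \(V(X)\) with \(V = a_0\varphi + a_1\nabla \varphi \), fixed by matching the pairing against the polynomial test functions \(g=1\) and \(g=z\).

```python
def pair(F, g, patch):
    """The toy pairing <F, g> = sum_z F_z g(z)."""
    return sum(F[z] * g(z) for z in patch)

def loc(F, X, patch):
    """Kernel of V(X) = sum_{x in X} (a0 * phi_x + a1 * (nabla phi)_x),
    chosen so that the pairing with g = 1 and g = z matches that of F."""
    # <(nabla phi)_x, g> = g(x+1) - g(x), so matching the two pairings gives:
    a0 = pair(F, lambda z: 1, patch) / len(X)
    a1 = (pair(F, lambda z: z, patch) - a0 * sum(X)) / len(X)
    K = {z: 0.0 for z in patch}
    for x in X:
        K[x] += a0 - a1   # phi_x term, plus the -phi_x part of (nabla phi)_x
        K[x + 1] += a1    # the +phi_{x+1} part of (nabla phi)_x
    return K

patch = list(range(6))
X = [2, 3]
F = {z: float(z * z - 1) for z in patch}
K = loc(F, X, patch)
assert loc(K, X, patch) == K     # idempotence, as in Proposition 1.7
```

The uniqueness in the toy model is just the solvability of the two linear matching conditions; applying `loc` a second time reproduces the same \(a_0, a_1\), which is the content of \(\mathrm{loc}_{X}\circ \mathrm{loc}_{X} = \mathrm{loc}_{X}\) in this restricted setting.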

The following proposition gives an additivity property of \(\mathrm{loc}\).

Proposition 1.8

Let \(X \subset {\varLambda }\) and \(F_x \in \mathcal {N}_{X}\) for all \(x \in X\). Suppose that \(P \in \mathcal {V}\) obeys \(\mathrm{loc}_{\{x\}}F_x = P_x\) for all \(x\in X\). Then \(\mathrm{loc}_{X} F(X)=P(X)\), where \(F(X)=\sum _{x\in X}F_x\).

Proof

If \(X\) is empty then both sides are zero, so suppose that \(X\) is not empty. Let \(g \in \varPi \). It follows from (1.23), linearity of the pairing, and the assumption, that

$$\begin{aligned} \langle \mathrm{loc}_XF(X),g \rangle _0&= \langle F(X),g \rangle _0 = \sum _{x\in X} \langle F_x,g \rangle _0 \end{aligned}$$
(1.26)
$$\begin{aligned}&= \sum _{x\in X} \langle \mathrm{loc}_{\{x\}}F_x,g \rangle _0 = \sum _{x\in X} \langle P_x,g \rangle _0 = \langle P(X),g \rangle _0. \end{aligned}$$
(1.27)

Since \(\mathrm{loc}_XF(X)\) and \(P(X)\) are both in \(\mathcal {V}(X)\), their equality follows from the uniqueness in Definition 1.6. \(\square \)

For nonempty \(X \subset {\varLambda }\), let \(\mathcal {E}(X)\) be the set of automorphisms of \({\varLambda }\) which map \(X\) to itself. Here, an automorphism of \({\varLambda }\) means a bijective map from \({\varLambda }\) to \({\varLambda }\) under which nearest-neighbour points are mapped to nearest-neighbour points under both the map and its inverse. In particular, \(\mathcal {E}({\varLambda })\) is the set of automorphisms of \({\varLambda }\). An automorphism \(E \in \mathcal {E}({\varLambda })\) defines a mapping of the boson field by \((\phi _E)_{x} = \phi _{Ex}\). Then, for \(F =\sum _{y \in \vec {\varvec{\Lambda }}_f^*} \frac{1}{y!}F_y \psi ^y \in \mathcal {N}\), we define \(E\) as a linear operator on \(\mathcal {N}\) by

$$\begin{aligned} (EF)(\phi )&= \sum _{y\in \vec {\varvec{\Lambda }}_f^*} \frac{1}{y!}F_{y} (\phi _{E}) \psi ^{Ey} = \sum _{y\in \vec {\varvec{\Lambda }}_f^*} \frac{1}{y!}F_{E^{-1}y} (\phi _{E}) \psi ^{y} , \end{aligned}$$
(1.28)

where in the second equality we have extended the action of \(E\) to component-wise action on \(\varvec{\Lambda }_f\), and we used the fact that summation over \(y\) is the same as summation over \(E^{-1}y\). The following proposition gives a Euclidean covariance property of \(\mathrm{loc}\).

Proposition 1.9

For \(X \subset {\varLambda }\), \(F \in \mathcal {N}_{X}\) and \(E \in \mathcal {E}({\varLambda })\),

$$\begin{aligned}&E\big (\mathrm{loc}_{X} F\big ) = \mathrm{loc}_{EX} (EF) . \end{aligned}$$
(1.29)

Proof

We define \(E^{*}:{\varPhi } \rightarrow {\varPhi }\) by \((E^{*}g)_{z}=g_{Ez}\). By (1.28), and by taking derivatives with respect to \(\phi _{x_i}\) for \(x_i \in \varvec{\Lambda }_b\), for \(x \in \vec {\varvec{\Lambda }}_b^*\) we have

$$\begin{aligned} (EF)_{x,y}(\phi ) = F_{E^{-1}x,E^{-1}y} (\phi _{E}). \end{aligned}$$
(1.30)

Therefore,

$$\begin{aligned} \langle E F,g \rangle _\phi = \sum _{z\in \vec {\varvec{\Lambda }}^*} \frac{1}{z!} F_{E^{-1}z} (\phi _{E}) g_{z} = \sum _{z\in \vec {\varvec{\Lambda }}^*} \frac{1}{z!} F_{z} (\phi _{E}) g_{Ez} = \langle F,E^{*}g \rangle _{\phi _{E}}. \end{aligned}$$
(1.31)

Since \(F \in \mathcal {N}_{X}\) there exists a coordinate patch \({\varLambda } '\) containing \(X\) such that \(F \in \mathcal {N}({\varLambda } ')\). Let \(g \in \varPi [E{\varLambda } ']\), and note that \(E^{*}\) maps test functions in \(\varPi [E{\varLambda } ']\) to test functions in \(\varPi [{\varLambda } ']\). By (1.23) and (1.31),

$$\begin{aligned} \langle E \mathrm{loc}_{X}F,g \rangle _{0} = \langle \mathrm{loc}_{X} F,E^{*}g \rangle _{0} = \langle F,E^{*}g \rangle _{0} = \langle E F,g \rangle _{0} = \langle \mathrm{loc}_{EX} E F,g \rangle _{0}. \end{aligned}$$
(1.32)

Since both \(E \mathrm{loc}_{X} F\) and \(\mathrm{loc}_{EX} E F\) are in \(\mathcal {V}(EX)\), their equality follows from the uniqueness in Proposition 1.5. \(\square \)

The subgroup of \(\mathcal {E}({\varLambda })\) consisting of automorphisms that fix the origin maps homomorphically to the group \(\Sigma \), with the element \({\varTheta }_E \in \Sigma \) determined from such an \(E\in \mathcal {E}({\varLambda })\) by the action of \(E\) on the set \(\mathcal {U}\) of unit vectors. Since \(\mathcal {E}({\varLambda })\) is the semidirect product of the subgroup of translations and the subgroup that fixes the origin, we can use this homomorphism to associate to each element \(E \in \mathcal {E}({\varLambda })\) a unique element \({\varTheta }_E \in \Sigma \). The following proposition ensures that the polynomial \(P\in \mathcal {V}\) determined by \(\mathrm{loc}_XF\) inherits symmetry properties of \(X\) and \(F\).

Proposition 1.10

For \(X \subset {\varLambda }\) and \(F \in \mathcal {N}_{X}\) such that \(EF=F\) for all \(E\in \mathcal {E}(X)\), the polynomial \(P\in \mathcal {V}\) determined by \(P (X)=\mathrm{loc}_{X} F \in \mathcal {V}(X)\) obeys \({\varTheta }_E P = P\) for all \(E\in \mathcal {E}(X)\).

Proof

By Proposition 1.9 and by hypothesis, \(EP(X) = \mathrm{loc}_{EX}EF= P(X)\). Therefore, for \(g\in \varPi \),

$$\begin{aligned} \langle F,g \rangle _0 = \langle P(X),g \rangle _0 = \langle EP(X),g \rangle _0. \end{aligned}$$
(1.33)

Since \(EP(X) = ({\varTheta }_EP)(X)\), this gives

$$\begin{aligned} \langle P(X),g \rangle _0 = \langle ({\varTheta }_EP)(X),g \rangle _0, \end{aligned}$$
(1.34)

and since \({\varTheta }_E P \in \mathcal {V}\) by Proposition 1.4, the uniqueness in Proposition 1.5 implies that \({\varTheta }_EP=P\), as required. \(\square \)

The next two propositions concern norm estimates, using the \(T_\phi \) semi-norm defined in [5]. The \(T_\phi \) semi-norm is itself defined in terms of a norm on test functions, and next we define the particular norm on test functions that we will use here.

The norm depends on a vector \(\mathfrak {h}= (\mathfrak {h}_1,\ldots , \mathfrak {h}_{p_{\varvec{\Lambda }}})\) of positive real numbers, one for each field species and component, where we assume that \(\mathfrak {h}_i\) depends only on the field species of the index \(i\). Given \(z=(z_1,\ldots , z_{p (z)}) \in \varvec{\Lambda }^*\), we define \(\mathfrak {h}^{-z}= \prod _{i=1}^{p (z)} \mathfrak {h}_{k(z_i)}^{-1}\), where \(k(z_i)\) denotes the copy of \({\varLambda }\) inhabited by \(z_i\in \varvec{\Lambda }\). Given \(p_\varPhi \ge 0\), the norm on test functions is defined by

$$\begin{aligned} \Vert g\Vert _{\varPhi (\mathfrak {h})} = \sup _{z\in \vec {\varvec{\Lambda }}^*} \sup _{|\alpha |_\infty \le p_{\varPhi }} \mathfrak {h}^{-z} |\nabla _R^\alpha g_{z}|, \end{aligned}$$
(1.35)

where \(\nabla _R^\alpha = R^{|\alpha |_1}\nabla ^\alpha \). In terms of this norm, a semi-norm on \(\mathcal {N}\) is defined by

$$\begin{aligned} \Vert F\Vert _{T_\phi } = \sup _{g \in B({\varPhi })}|\langle F,g \rangle _\phi |, \end{aligned}$$
(1.36)

where \(B({\varPhi })\) denotes the unit ball in \({\varPhi }={\varPhi }(\mathfrak {h})\). This \(T_\phi \) semi-norm depends on the boson field \(\phi \), via the pairing (1.2).

For the next two propositions, which provide essential norm estimates on \(\mathrm{loc}\), we restrict attention to the case where the torus \({\varLambda }\) has period \(L^N\) for integers \(L,N>1\). In practice, both \(L\) and \(N\) are large. We fix \(j<N\) and take \(R=L^j\). The proofs of the propositions, which make use of the results in Sect. 3, are deferred to Sect. 2.2. A \(j\)-polymer is defined to be a union of blocks of side \(R=L^j\) in a paving of \({\varLambda }\). Given a \(j\)-polymer \(X\), we define \(X_+\) by replacing each block \(B\) in \(X\) by a larger cube \(B_+\) centred on \(B\) and with side \(2L^j\) if \(L^j\) is even, or \(2L^j-1\) if \(L^j\) is odd (the parity consideration permits centring).

Proposition 1.11

Let \(L>1\), \(j<N\), and let \(X\) be a \(j\)-polymer with \(X_+ \subset U\) for a coordinate patch \(U\subset {\varLambda }\). For \(F \in \mathcal {N}(U)\), there is a constant \(\bar{C}'\), which depends only on \(L^{-j} \mathrm{diam}(U)\), such that

$$\begin{aligned} \Vert \mathrm{loc}_{X} F\Vert _{T_0} \le \bar{C}' \Vert F\Vert _{T_0}. \end{aligned}$$
(1.37)

The next result, which is crucial, involves the \(T_\phi \) semi-norm defined in terms of \({\varPhi }(\mathfrak {h})\), as well as the \(T_\phi '\) semi-norm defined in terms of the \({\varPhi }'(\mathfrak {h}')\) norm given by replacing \(R=L^j\) and \(\mathfrak {h}\) of (1.35) by \(R'=L^{j+1}\) and \(\mathfrak {h}'\). In addition, we assume that \(\mathfrak {h}'\) and \(\mathfrak {h}\) are chosen such that \(\mathfrak {h}_i'/\mathfrak {h}_i \le cL^{-[\varphi _i]}\) for each component \(i\), where \(c\) is a universal constant. Let

$$\begin{aligned} d_+' = \min \{ [M_m] : m \not \in \mathfrak {v}_+ \}, \end{aligned}$$
(1.38)

where \(\mathfrak {v}_+\) was defined in (1.16); thus \(d_+'\) denotes the smallest dimension of a monomial not in the range of loc. Let \([\varphi _\mathrm{min}] = \min \{[\varphi _i]: i = 1,\ldots ,p_{\varvec{\Lambda }}\}\). Given a positive integer \(A\), we define

$$\begin{aligned} \gamma = L^{-d_{+}'} + L^{-(A+1)[\varphi _\mathrm{min}]}. \end{aligned}$$
(1.39)

In anticipation of a hypothesis of Lemma 3.6, for the next proposition we impose the restriction that \(p_{\varPhi } \ge d_+' -[\varphi _\mathrm{min}]\). The choice of large \(L\) in the proposition depends only on \(d_+\).

Proposition 1.12

Let \(j<N\), let \(A < p_\mathcal {N}\) be a positive integer, let \(L\) be sufficiently large, let \(X\) be a \(j\)-polymer with \(X_+\) contained in a coordinate patch, and let \(Y \subset X\) be a nonempty \(j\)-polymer. For \(i=1,2\), let \(F_{i} \in \mathcal {N}(X)\). Then

$$\begin{aligned} \Vert F_1(1-\mathrm{loc}_Y)F_2 \Vert _{T_{\phi }'}&\le \gamma \, \bar{C} \left( 1 + \Vert \phi \Vert _{{\varPhi }'}\right) ^{A'} \sup _{0\le t \le 1} \big ( \Vert F_1F_2\Vert _{T_{t\phi }} + \Vert F_1\Vert _{T_{t\phi }}\Vert F_2\Vert _{T_{0}}\big ), \end{aligned}$$
(1.40)

where \(\gamma \) is given by (1.39), \(A'=A+d_+/[\varphi _\mathrm{min}]+1\), and \(\bar{C}\) depends only on \(L^{-j} \mathrm{diam}(X)\).

For the special case with \(F_1=1\), \(F_2=F\), \(Y=X\), and \(\phi = 0\), Proposition 1.12 asserts that

$$\begin{aligned} \Vert F-\mathrm{loc}_X F \Vert _{T_{0}'}&\le \gamma \bar{C} \Vert F\Vert _{T_{0}}. \end{aligned}$$
(1.41)

For the case of \(d\ge 4\), \(d_+=d\), \([\varphi _\mathrm{min}]=\frac{d-2}{2}\), and with \(A\) (and so \(p_\mathcal {N}\)) chosen sufficiently large that \((A+1)\frac{d-2}{2} \ge d +1\), we have \(d_+'=d_++1\) and \(\gamma =O(L^{-d-1})\). This shows that, when measured in the \(T_0'\) semi-norm, \(F-\mathrm{loc}_XF\) is substantially smaller than \(F\) measured in the \(T_0\) semi-norm.
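To make the decay rate concrete, the following numerical sketch (not part of the argument; the specific values \(d=4\), \(A=4\) are illustrative assumptions) evaluates \(\gamma \) of (1.39) and confirms that \(\gamma L^{d+1}\) remains bounded as \(L\) grows, which is the statement \(\gamma = O(L^{-d-1})\).

```python
# Illustrative values (assumptions, not taken from the text): d = 4, so
# [varphi_min] = (d-2)/2 = 1, d_+' = d_+ + 1 = 5, and A = 4 is the smallest
# positive integer with (A+1)*[varphi_min] >= d+1.
d = 4
phi_min = (d - 2) / 2
d_plus_prime = d + 1
A = 4

def gamma(L):
    # gamma = L^{-d_+'} + L^{-(A+1)[varphi_min]}, as in (1.39)
    return L ** (-d_plus_prime) + L ** (-(A + 1) * phi_min)

# gamma(L) * L^{d+1} is bounded in L (with these values it equals 2
# identically), i.e. gamma = O(L^{-d-1}).
ratios = [gamma(L) * L ** (d + 1) for L in (2, 4, 8, 16, 32)]
```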

1.5 An Example

The following example is not needed elsewhere in this paper, but it serves to illustrate the evaluation of \(\mathrm{loc}\).

Example 1.13

Consider the case where there is a single complex boson field \(\phi \), in dimension \(d=4\), with \([\varphi ]=1\), and with \(d_+=d=4\). The list of relevant and marginal monomials is as in (1.12)–(1.13), but now each factor of \(\varphi \) in those lists can be replaced by either \(\phi \) or its conjugate \(\bar{\phi }\). To define \(\mathcal {V}\), for each monomial \(M\) we choose \(P(M)\) as in (1.19), except for monomials containing \(\nabla ^e\nabla ^e\), for which we instead use \(\nabla ^{-e}\nabla ^e\) as in Example 1.3. Let \(X\subset {\varLambda }\) be contained in a coordinate patch and let \(a,x \in X\).

(i) Simple examples are given by

$$\begin{aligned} \mathrm{loc}_X |\phi _x|^6&= 0, \quad \quad \mathrm{loc}_{\{a\}} |\phi _x|^4 = |\phi _{a}|^4, \end{aligned}$$
(1.42)

which hold since in both cases the pairing requirement of Definition 1.6 is obeyed by the right-hand sides.

(ii) Let \(\tau _x = \phi _x\bar{\phi }_x\), let \(q:{\varLambda }\rightarrow \mathbb {C}\), let \(X\) be such that the range of \(q\) plus the diameter of \(X\) plus \(2\bar{p}_{\varPhi }\) is strictly less than the period of the torus, and let

$$\begin{aligned} F&= \sum _{x \in X, y \in {\varLambda }} q (x-y) \tau _{y}. \end{aligned}$$
(1.43)

The assumption on the range of \(q\) ensures that the coordinate patch condition in the definition of \(\mathrm{loc}_XF\) is satisfied. We define

$$\begin{aligned}&q^{(1)} = \sum _{x \in {\varLambda }}q (x), \quad \quad q^{(**)} = \sum _{x \in {\varLambda }} q (x) x_{1}^{2}, \end{aligned}$$
(1.44)

and assume that

$$\begin{aligned}&\sum _{x \in {\varLambda }} q (x) x_{i} = 0, \quad \quad \sum _{x \in {\varLambda }} q (x) x_{i}x_{j} = q^{(**)} \delta _{i,j} \quad \quad \quad i,j \in \{1,2,\cdots ,d \}. \end{aligned}$$
(1.45)

We claim that

$$\begin{aligned} \mathrm{loc}_{X} F&= \sum _{x\in X} \big ( q^{(1)}\tau _{x} + q^{(**)} \sigma _{x} \big ), \end{aligned}$$
(1.46)

where, with \({\varDelta } = - \sum _{i=1}^d \nabla ^{-e_i}\nabla ^{e_i}\),

$$\begin{aligned} \sigma _x = \frac{1}{2} \left( \phi _x \,{\varDelta } \bar{\phi }_x + \sum _{e\in \mathcal {U}} \nabla ^{e}\phi _x \,\nabla ^{e}\bar{\phi }_x + {\varDelta }\phi _x \,\bar{\phi }_x \right) . \end{aligned}$$
(1.47)

To verify (1.46), we define

$$\begin{aligned} A&= \sum _{y \in {\varLambda }} q (a-y) \tau _{y}. \end{aligned}$$
(1.48)

By Proposition 1.8, it suffices to show that

$$\begin{aligned} \mathrm{loc}_{\{a\}} A&= q^{(1)}\tau _{a} + q^{(**)} \sigma _{a}. \end{aligned}$$
(1.49)

For this, it suffices to show that \(A\) and \(q^{(1)}\tau _{a} + q^{(**)} \sigma _{a}\) have the same zero-field pairing with test functions \(g \in \varPi \). By definition, \(\langle A,g \rangle _0 = \sum _{y\in {\varLambda }} q(a-y) g_{y,y}\). Since the polynomial test function \(g = g_{y_{1},y_{2}}\) is in \(\varPi \), it is a quadratic polynomial in \(y_{1},y_{2}\) and we can write the coefficients of this polynomial in terms of lattice derivatives of \(g\) at the point \((a,a)\). For example the quadratic terms in \(g\) are \((1/2)\sum _{i,j = 1}^d (y_i-a_i)(y_j-a_j) \nabla ^{e_i}_1\nabla ^{e_j}_2g_{a, a}\). (The construction of lattice Taylor polynomials is described below in (2.4).)

The constant term in \(g\) is the zeroth derivative \(g_{a,a}\). The linear terms vanish in the pairing due to (1.45). For the quadratic terms with derivatives on both variables of \(g\), the only non-vanishing contribution to the pairing arises from \(\frac{1}{2} \sum _{i=1}^d(y_i-a_i)^2 \nabla ^{e_i}_1\nabla ^{e_i}_2g_{a, a}\), due to (1.45), where the subscripts on the derivatives indicate on which argument they act. For the quadratic terms with both derivatives on a single variable of \(g\), by (1.45) we may assume that both derivatives are in the same direction, and for those, we can replace the binomial coefficient \(\left( {\begin{array}{c}y_i-a_i\\ 2\end{array}}\right) \) by \(\frac{1}{2} (y_i-a_i)^2\) due to the first assumption in (1.45), to see that the relevant terms for the pairing are

$$\begin{aligned} \frac{1}{2} \sum _{i=1}^d (y_i-a_i)^2 \nabla ^{e_i}_1\nabla ^{e_i}_1 g_{a, a} + \frac{1}{2} \sum _{i=1}^d ( y_i- a_i)^2 \nabla ^{e_i}_2\nabla ^{ e_i}_2 g_{a, a}. \end{aligned}$$
(1.50)

Since \(g\) is a polynomial of total degree at most 2, we can use (1.5) to replace derivatives \(\nabla ^e\) by \(-\nabla ^{-e}\) in the above expressions involving two derivatives. Thus we obtain

$$\begin{aligned} \langle A,g \rangle _0 = q^{(1)} g_{a, a} + q^{(**)} \frac{1}{2} \left( {\varDelta }_1 g_{a, a} + \sum _{e \in \mathcal {U}} \nabla ^e_1 \nabla ^{e}_2 g_{a, a} + {\varDelta }_2 g_{a, a} \right) . \end{aligned}$$
(1.51)

By inspection, the right-hand side of (1.49) has the same pairing with \(g\) as \(A\), so (1.49) is verified.
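The moment cancellations used above can also be checked numerically. The following sketch (our own illustration, not part of the paper) takes \(d=2\), \(a=0\), a hypothetical nearest-neighbour \(q\) satisfying the assumptions (1.45), and a random polynomial test function of total degree at most 2; it verifies the pairing identity (1.51), with \(\mathcal {U}\) the set of \(2d\) unit vectors and all lattice derivatives computed as finite differences (exact here, since \(g\) has degree 2).

```python
import random

random.seed(0)

# Hypothetical q supported on the origin and its nearest neighbours in d = 2,
# chosen so that the moment assumptions (1.45) hold:
# sum_x q(x) x_i = 0 and sum_x q(x) x_i x_j = qss * delta_{ij}.
q = {(0, 0): 1.3, (1, 0): 0.7, (-1, 0): 0.7, (0, 1): 0.7, (0, -1): 0.7}
q1 = sum(q.values())                              # q^{(1)}
qss = sum(v * x[0] ** 2 for x, v in q.items())    # q^{(**)}

# A generic polynomial test function g(y, z), with y, z in Z^2, of total
# degree at most 2: random coefficients on all monomials in the 4 coordinates.
coef = {(): random.uniform(-1, 1)}
for i in range(4):
    coef[(i,)] = random.uniform(-1, 1)
    for j in range(i, 4):
        coef[(i, j)] = random.uniform(-1, 1)

def g(y, z):
    w = (*y, *z)
    total = 0.0
    for mono, c in coef.items():
        for k in mono:
            c *= w[k]
        total += c
    return total

U = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # the 2d unit vectors
shift = lambda p, e: (p[0] + e[0], p[1] + e[1])
a = (0, 0)

# Discrete Laplacians on each argument and the mixed-derivative sum of (1.51).
lap1 = sum(g(shift(a, e), a) - g(a, a) for e in U)
lap2 = sum(g(a, shift(a, e)) - g(a, a) for e in U)
mixed = sum(g(shift(a, e), shift(a, e)) - g(shift(a, e), a)
            - g(a, shift(a, e)) + g(a, a) for e in U)

# Left-hand side: <A, g>_0 = sum_y q(a - y) g_{y,y}.
lhs = sum(v * g((a[0] - x[0], a[1] - x[1]), (a[0] - x[0], a[1] - x[1]))
          for x, v in q.items())
# Right-hand side of (1.51).
rhs = q1 * g(a, a) + qss * 0.5 * (lap1 + mixed + lap2)
```

Since the identity holds for every test function of total degree at most 2, the random choice of coefficients is immaterial.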

(iii) Let

$$\begin{aligned} F ' = \sum _{x \in X, y \in {\varLambda }} q (x-y) ( \tau _{xy} + \tau _{yx}). \end{aligned}$$
(1.52)

By a similar analysis to that used in (ii),

$$\begin{aligned} \mathrm{loc}_{X} F ' = \sum _{x\in X} \left( 2 q^{(1)}\tau _{x} + q^{(**)} \frac{1}{2} \big ( \phi _{x} {\varDelta } \bar{\phi }_{x} + ({\varDelta } \phi )_{x} \bar{\phi }_{x} \big ) \right) . \end{aligned}$$
(1.53)

\(\square \)

1.6 Supersymmetry and loc

For our application to self-avoiding walk in [1, 2], we will use \(\mathrm{loc}\) in the context of a supersymmetric field theory involving a complex boson field \(\phi \) with conjugate \(\bar{\phi }\), and a pair of conjugate fermion fields \(\psi ,\bar{\psi }\), all of dimension \(\frac{d-2}{2}\). We now show that if \(F \in \mathcal {N}\) is supersymmetric then so is \(\mathrm{loc}_XF\).

The supersymmetry generator \({Q} = d + \underline{i}\), which is discussed in [4, Sect. 6], has the following properties: (i) \(Q\) is an antiderivation that acts on \(\mathcal {N}\), (ii) \(Q^{2}\) is the generator of the gauge flow characterised by \(q \mapsto e^{-2\pi i t}q\) for \(q = \phi _{x} ,\psi _{x}\) and \(\bar{q} \mapsto e^{+2\pi i t}\bar{q}\) for \(\bar{q} = \bar{\phi }_{x} , \bar{\psi }_{x}\), for all \(x \in {\varLambda }\). An element \(F \in \mathcal {N}\) is said to be gauge invariant if it is invariant under this flow and supersymmetric if \(QF=0\). By property (ii), supersymmetric elements are gauge invariant. Let \(\hat{Q} = (2 \pi i)^{-1/2} Q\). Then \(\hat{Q}\) is an antiderivation satisfying:

$$\begin{aligned}&\hat{Q}\phi = \psi , \quad \quad \hat{Q}\psi = - \phi , \quad \quad \hat{Q}\bar{\phi }= \bar{\psi }, \quad \quad \hat{Q}\bar{\psi }= \bar{\phi }. \end{aligned}$$
(1.54)
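As a brief worked illustration of (1.54) and the antiderivation property (the computation is ours; the combination \(\phi _x\bar{\phi }_x + \psi _x\bar{\psi }_x\) is the supersymmetric analogue of \(\tau _x\)):

```latex
\hat{Q}\big( \phi_x\bar{\phi}_x + \psi_x\bar{\psi}_x \big)
  = \big( \psi_x\bar{\phi}_x + \phi_x\bar{\psi}_x \big)
  + \big( (-\phi_x)\bar{\psi}_x - \psi_x\bar{\phi}_x \big)
  = 0 .
```

The minus sign on the term \(\psi _x\bar{\phi }_x\) arises because \(\hat{Q}\) anticommutes past the odd factor \(\psi _x\); thus \(\phi _x\bar{\phi }_x + \psi _x\bar{\psi }_x\) is annihilated by \(\hat{Q}\), hence supersymmetric.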

The gauge flow clearly maps \(\mathcal {V}\) to itself, and since the boson and fermion fields have the same dimension, \(Q\) maps \(\mathcal {V}\) to itself as well. The following observation is a general one, but it has the specific consequences that if \(F\) is gauge invariant then so is \(\mathrm{loc}_{X}F\), and that if \(F\) is supersymmetric then \(Q\mathrm{loc}_{X}F=\mathrm{loc}_XQF=0\), so \(\mathrm{loc}_{X}F\) is supersymmetric. This provides a simplifying feature in the analysis applied in [7].

Proposition 1.14

The map \(Q: \mathcal {N}\rightarrow \mathcal {N}\) commutes with \(\mathrm{loc}_{X}\).

Proof

Let \(F \in \mathcal {N}\) and \(g \in \varPi \). There is a map \(Q^*:\varPi \rightarrow \varPi \), which can be explicitly computed using (1.54), such that \(\langle QF,g \rangle _0=\langle F,Q^*g \rangle _0\). It then follows from (1.23) that

$$\begin{aligned} \langle Q\mathrm{loc}_{X} F,g \rangle _0 = \langle \mathrm{loc}_{X} F,Q^{*}g \rangle _0 = \langle F,Q^{*}g \rangle _0 = \langle QF,g \rangle _0 = \langle \mathrm{loc}_{X}QF,g \rangle _0.\quad \end{aligned}$$
(1.55)

Since \(Q: \mathcal {V}(X) \rightarrow \mathcal {V}(X)\) by (1.54), it then follows from the uniqueness in Definition 1.6 that \(Q\mathrm{loc}_{X} F = \mathrm{loc}_{X}QF\). \(\square \)

The proof of Proposition 1.14 displays the general principle that a linear map on \(\mathcal {N}\) commutes with \(\mathrm{loc}_{X}\) if its adjoint maps \(\varPi \) to itself. In particular, the map on \(\mathcal {N}\) induced by interchanging \(\phi \) with its conjugate \(\bar{\phi }\) commutes with \(\mathrm{loc}_{X}\) for all \(X\).

1.7 Observables and the Operator Loc

We now generalise the operator \(\mathrm{loc}\) in two ways: to modify the set onto which it localises, and to incorporate the effect of observable fields. The first of these is accomplished by the following definition.

Definition 1.15

For \(Y \subset X \subset {\varLambda }\) and \(F \in \mathcal {N}_{X}\), we define the linear operator \(\mathrm{loc}_{X,Y}:\mathcal {N}\rightarrow \mathcal {V}(Y)\) by

$$\begin{aligned}&\mathrm{loc}_{X,Y}F = P_{X} (Y) \quad \text {with } P_{X} \hbox { determined by }P_{X} (X) = \mathrm{loc}_{X}F. \end{aligned}$$
(1.56)

In other words, \(\mathrm{loc}_{X,Y}F\) evaluates the polynomial \(\mathrm{loc}_XF\) on the set \(Y\) rather than on \(X\). It is an immediate consequence of the definition that \(\mathrm{loc}_X = \mathrm{loc}_{X,X}\), and that if \(\{X_1,\ldots , X_m\}\) is a partition of \(X\) then

$$\begin{aligned} \mathrm{loc}_{X}= \sum _{i=1}^m \mathrm{loc}_{X,X_{i}}. \end{aligned}$$
(1.57)

The following norm estimate for \(\mathrm{loc}_{X,Y}\) is proved in Sect. 2.2.

Proposition 1.16

Suppose \({\varLambda }\) has period \(L^N\) with \(L,N>1\). Let \(j<N\), and let \(Y\subset X\) be \(j\)-polymers with \(X_+ \subset U\) for a coordinate patch \(U\subset {\varLambda }\). There is a constant \(\bar{C}'\), which depends only on \(L^{-j} \mathrm{diam}(U)\), such that for \(F \in \mathcal {N}(U)\),

$$\begin{aligned} \Vert \mathrm{loc}_{X,Y}F\Vert _{T_0} \le \bar{C}' \Vert F\Vert _{T_0}. \end{aligned}$$
(1.58)

Next, we incorporate the presence of an observable field. The observable field is not needed for our analysis of the self-avoiding walk susceptibility in [2], but it is used in our analysis of the two-point function in [1]. Specifically, it is applied in [1, Sect. 2.3], where the observable field \(\sigma \in \mathbb {C}\) is a complex variable such that differentiating the partition function with respect to \(\sigma \) and \(\bar{\sigma }\) at \(\sigma =0\) gives the two-point function. In particular, elements of \(\mathcal {N}\) become functions of \(\sigma \), and given an element \(F\in \mathcal {N}\) we need the norm of \(F\) to measure the size of the derivatives of \(F\) at zero with respect to \((\sigma ,\bar{\sigma })\). We can make our existing norm do this automatically by declaring \((\sigma ,\bar{\sigma })\) to be a new species of complex boson field; that is, \(\sigma \) is a function on \({\varLambda }\), but since we do not need the additional information encoded by the dependence of \(\sigma \) on \(x \in {\varLambda }\) we choose test functions that are constant in \(x\). This means that the norm only measures derivatives with respect to observable fields that are constant on \({\varLambda }\). Furthermore, we choose test functions such that only derivatives that are at most first order with respect to each of \(\sigma \) and \(\bar{\sigma }\) are measured, since higher-order dependence on \(\sigma \) plays no role in the analysis of the two-point function.

Thus, let \(\sigma \) be a new species of complex boson field. The norm on test functions is defined as in [5], with the previously chosen weights \(w_{\alpha _i,z_i}^{-1} = \mathfrak {h}_i^{-z_i}R^{|\alpha |}\) for the non-observable fields. For the observable field, we choose the weights differently, as follows. First, if \(\alpha \not = 0\) then we choose \(w_{\alpha _i,z_i}=0\) when \(i\) corresponds to the observable species. This eliminates test functions which are not constant in the observable variables. In addition, we set a test function equal to zero if it depends on more than one \(\sigma \) variable or more than one \(\bar{\sigma }\) variable, so that the observable dependence is via at most one \(\sigma \), one \(\bar{\sigma }\), or one pair \(\sigma \bar{\sigma }\). Therefore, modulo the ideal \(\mathcal {I}\) of zero norm elements, a general element \(F \in \mathcal {N}\) has the form

$$\begin{aligned} F = F^{\varnothing } + F^{a} + F^{b} + F^{ab} , \end{aligned}$$
(1.59)

where \(F^\varnothing \) is obtained from \(F\) by setting \(\sigma =\bar{\sigma }=0\), while \(F^{a} = F_\sigma \sigma \), \(F^{b} = F_{\bar{\sigma }} \bar{\sigma }\), and \(F^{ab} =F_{\sigma ,\bar{\sigma }} \sigma \bar{\sigma }\) with the derivatives evaluated at \(\sigma =\bar{\sigma }=0\). In the \(T_{\phi }\) semi-norm we will always set \(\sigma = \bar{\sigma }= 0\). We unite the above cases with the notation \(F^\alpha = F_\alpha \sigma ^\alpha \) for \(\alpha \in \{\varnothing , a,b,ab\}\). This corresponds to a direct sum decomposition,

$$\begin{aligned} \mathcal {N}/\mathcal {I}= \mathcal {N}^{\varnothing } \oplus \mathcal {N}^{a} \oplus \mathcal {N}^{b} \oplus \mathcal {N}^{ab} , \end{aligned}$$
(1.60)

with canonical projections \(\pi _\alpha : \mathcal {N}/\mathcal {I}\rightarrow \mathcal {N}^\alpha \) defined by \(\pi _\varnothing F = F_\varnothing \), \(\pi _aF = F_a\sigma \), and so on. Note that

$$\begin{aligned} \Vert F\Vert _{T_\phi }=\sum _{\alpha }\Vert F_\alpha \sigma ^\alpha \Vert _{T_\phi } =\sum _{\alpha }\Vert F_\alpha \Vert _{T_\phi }\Vert \sigma ^\alpha \Vert _{T_0}, \end{aligned}$$
(1.61)

by definition. We use the same value \(\mathfrak {h}_\sigma \) in the weight for both \(\sigma \) and \(\bar{\sigma }\). In particular, \(\mathfrak {h}_\sigma = \Vert \sigma \Vert _{T_0}=\Vert \bar{\sigma }\Vert _{T_0}\).

On each of the subspaces on the right-hand side of (1.60), we choose a value for the parameter \(d_+\) and construct corresponding spaces \(\mathcal {V}^\varnothing , \mathcal {V}^{a},\mathcal {V}^{b} ,\mathcal {V}^{ab}\) as in Definition 1.2. We allow the freedom to choose different values for the parameter \(d_{+}\) in each subspace, and in our application in [3, 6] we will make use of this freedom. Then we define

$$\begin{aligned} \mathcal {V}= \mathcal {V}^{\varnothing } \oplus \mathcal {V}^{a} \oplus \mathcal {V}^{b} \oplus \mathcal {V}^{ab}. \end{aligned}$$
(1.62)

The following definition extends the definition of the localisation operator by applying it in a graded fashion in the above direct sum decomposition.

Definition 1.17

Let \({\varLambda } '\) be a coordinate patch. Let \(a,b \in {\varLambda }'\) be fixed. Let \(X(\varnothing )=X\), \(X(a) = X \cap \{a\}\), \(X(b)=X \cap \{b\}\), and \(X(ab) = X \cap \{a,b\}\). For \(Y\subset X \subset {\varLambda }'\) and \(F \in \mathcal {N}_{X}\), we define the linear operator \(\mathrm{Loc} _{X,Y}:\mathcal {N}_{X} \rightarrow \mathcal {V}(Y)\) by specifying its action on each subspace in (1.60) as

$$\begin{aligned}&\mathrm{Loc} _{X,Y} F^\alpha = \sigma ^\alpha \mathrm{loc}_{X(\alpha ),Y(\alpha )}^\alpha F_\alpha , \end{aligned}$$
(1.63)

and the linear map \(\mathrm{Loc} _X : \mathcal {N}_{X} \rightarrow \mathcal {V}(X)\) by

$$\begin{aligned}&\mathrm{Loc} _{X}F = \mathrm{Loc} _{X,X}F = \mathrm{loc}_{X}^\varnothing F_{\varnothing } + \sigma \mathrm{loc}_{X(a)}^a F_{a} + \bar{\sigma }\mathrm{loc}_{X(b)}^b F_{b} + \sigma \bar{\sigma }\mathrm{loc}_{X(ab)}^{ab} F_{ab}. \end{aligned}$$
(1.64)

The space \(\mathcal {V}\) is defined by (1.62). Different choices of \(d_+\) are permitted on each subspace, and the label \(\alpha \) appearing on the operators \(\mathrm{loc}\) on the right-hand side of (1.63)–(1.64) is present to reflect these choices. The use of \(\mathcal {V}(X)\) to denote the range of \(\mathrm{Loc} _X\) is a convenient abuse of notation, which does not explicitly indicate that the range on the four subspaces in the four terms on the right-hand side of (1.64) are actually \(\mathcal {V}^\alpha (X(\alpha ))\).

It is immediate from the definition that

$$\begin{aligned}&\pi _{\alpha } \mathrm{Loc} _{X,Y} = \mathrm{Loc} _{X,Y} \pi _{\alpha }\quad \text { for } \alpha = \varnothing , a,b, ab, \end{aligned}$$
(1.65)

and from (1.57) that, for a partition \(\{X_1,\ldots , X_m\}\) of \(X\),

$$\begin{aligned}&\mathrm{Loc} _{X} = \sum _{i=1}^m \mathrm{Loc} _{X,X_{i}} . \end{aligned}$$
(1.66)

It is a consequence of Proposition 1.7 that

$$\begin{aligned} \mathrm{Loc} _{X'}\circ \mathrm{Loc} _X = \mathrm{Loc} _{X'} \quad \text {for } X' \subset X \subset {\varLambda }, \end{aligned}$$
(1.67)

and therefore

$$\begin{aligned} \mathrm{Loc} _{X}\circ (\mathrm {Id}- \mathrm{Loc} _X) = 0. \end{aligned}$$
(1.68)

Also, by Proposition 1.9, for an automorphism \(E\in \mathcal {E}({\varLambda })\),

$$\begin{aligned} E\big (\mathrm{Loc} _{X} F\big ) = \mathrm{Loc} _{EX} (EF) \quad \quad \text {if } F \in \mathcal {N}^{\varnothing }_{X}. \end{aligned}$$
(1.69)

Note that (1.69) fails in general for \(F \in \mathcal {N}_{X} \setminus \mathcal {N}^\varnothing _{X}\), due to the fixed points \(a,b\) in the definition of \(\mathrm{Loc} _{X,Y}F\). The following two propositions extend the norm estimates for \(\mathrm{loc}\) to \(\mathrm{Loc} \).

Proposition 1.18

Suppose \({\varLambda }\) has period \(L^N\) with \(L,N>1\). Let \(j<N\), and let \(Y\subset X\) be \(j\)-polymers with \(X_+ \subset U\) for a coordinate patch \(U\subset {\varLambda }\). There is a constant \(\bar{C}'\), which depends only on \(L^{-j} \mathrm{diam}(U)\), such that for \(F \in \mathcal {N}(U)\),

$$\begin{aligned} \Vert \mathrm{Loc} _{X,Y}F\Vert _{T_0} \le \bar{C}' \Vert F\Vert _{T_0}. \end{aligned}$$
(1.70)

Note that the case \(X=Y\) gives (1.70) for \(\mathrm{Loc} _XF\).

Proof

By definition, the triangle inequality, Proposition 1.16, and (1.61),

$$\begin{aligned} \Vert \mathrm{Loc} _{X,Y} F\Vert _{T_0}&= \sum _{\alpha = \varnothing , a,b, ab} \Vert \sigma ^\alpha \mathrm{loc}_{X,Y}^\alpha F_\alpha \Vert _{T_0} \le \bar{C}' \sum _{\alpha = \varnothing , a,b, ab} \Vert \sigma ^\alpha \Vert _{T_0}\Vert F_\alpha \Vert _{T_0} = \bar{C}' \Vert F\Vert _{T_0}, \end{aligned}$$
(1.71)

where \(\bar{C}'= \max _\alpha \bar{C}_\alpha '\), with \(\bar{C}_\alpha '\) the constant arising in each of the four applications of Proposition 1.16. \(\square \)

For the next proposition, which is applied in [6, Proposition 4.9], we write \(d_{\alpha }\) for the choice of \(d_{+}\), and \([\varphi _\mathrm{min}]\) for the common minimal field dimension on each space \(\mathcal {N}^\alpha \) for \(\alpha = \varnothing , a,b\) and \(ab\). We choose the spaces \({\varPhi }(\mathfrak {h})\) and \({\varPhi }'(\mathfrak {h}')\) as in Proposition 1.12. With \(d_\alpha '\) defined as in (1.38), let

$$\begin{aligned} \gamma _{\alpha ,\beta } = \left( L^{-d_{\alpha }'} + L^{-(A+1)[\varphi _\mathrm{min}]}\right) \left( \frac{\mathfrak {h}'_\sigma }{\mathfrak {h}_\sigma } \right) ^{|\alpha \cup \beta |}. \end{aligned}$$
(1.72)

As in Proposition 1.12, for the next proposition we again require that \(p_{\varPhi } \ge d_+' -[\varphi _\mathrm{min}]\) and consider the case where \({\varLambda }\) has period \(L^N\).

Proposition 1.19

Let \(j<N\), let \(A < p_\mathcal {N}\) be a positive integer, let \(L\) be sufficiently large, let \(X\) be a \(j\)-polymer with enlargement \(X_+\) contained in a coordinate patch, and let \(Y \subset X\) be a nonempty \(j\)-polymer. For \(i=1,2\), let \(F_{i} \in \mathcal {N}(X)\), with \(F_{2,\alpha }=0\) when \(Y(\alpha )=\varnothing \). Let \(F = F_1(1-\mathrm{Loc} _{Y})F_2\). Then

$$\begin{aligned} \Vert F\Vert _{T_{\phi }'}&\le \bar{C} \!\! \!\! \sum _{\alpha ,\beta =\varnothing ,a,b,ab} \gamma _{\alpha ,\beta } \left( 1 + \Vert \phi \Vert _{{\varPhi }'}\right) ^{A'} \nonumber \\&\quad \quad \quad \times \sup _{0\le t \le 1} \big ( \Vert F_{1,\beta }F_{2,\alpha }\Vert _{T_{t\phi }} + \Vert F_{1,\beta }\Vert _{T_{t\phi }}\Vert F_{2,\alpha }\Vert _{T_{0}}\big ) \Vert \sigma ^{\alpha \cup \beta }\Vert _{T_0}, \end{aligned}$$
(1.73)

where \(\gamma _{\alpha ,\beta }\) is given by (1.72), \(A'=A+d_+/[\varphi _\mathrm{min}]+1\), and \(\bar{C}\) depends only on \(L^{-j} \mathrm{diam}(X)\).

Proof

We use

$$\begin{aligned} \Vert F\Vert _{T_{\phi }'} \le \sum _{\alpha ,\beta } \Vert \sigma ^{\alpha \cup \beta } \Vert _{T_0'} \Vert F_{1,\beta }(1-\mathrm{loc}_{Y(\alpha )}^\alpha )F_{2,\alpha }\Vert _{T_\phi '} \end{aligned}$$
(1.74)

and apply Proposition 1.12 to each term. We also use

$$\begin{aligned} \Vert \sigma ^{\alpha \cup \beta } \Vert _{T_0'} = (\mathfrak {h}_\sigma ')^{|\alpha \cup \beta |} = \Vert \sigma ^{\alpha \cup \beta } \Vert _{T_0} \left( \frac{\mathfrak {h}'_\sigma }{\mathfrak {h}_\sigma } \right) ^{|\alpha \cup \beta |}. \end{aligned}$$
(1.75)

The constant \(\bar{C}\) is the largest of the four constants \(\bar{C}_\alpha \) arising from Proposition 1.12. \(\square \)

2 The Operator loc

In Sect. 2.1, we prove existence of the operator \(\mathrm{loc}\) and prove Proposition 1.5. In Sect. 2.2, we prove Propositions 1.11–1.12, using the results on Taylor polynomials proven in Sect. 3. Finally, in Sect. 2.3, we prove the claim which guaranteed existence of the polynomials \(\hat{P}\) used to define \(\mathcal {V}\) in Definition 1.2.

Throughout this section, \({\varLambda }'\) is a coordinate patch in \({\varLambda }\), and the space of polynomial test functions is \(\varPi = \varPi [{\varLambda }']\).

2.1 Existence and Uniqueness of loc: Proof of Proposition 1.5

Recall from [5, Proposition 3.5] that the pairing obeys

$$\begin{aligned} \langle F,g \rangle _\phi = \langle F,Sg \rangle _\phi \end{aligned}$$
(2.1)

for all \(F \in \mathcal {N}\), \(g\in {\varPhi }\), and for all boson fields \(\phi \). The symmetry operator \(S\) is defined in [5, Definition 3.4]; it obeys \(S^2=S\). Let \(m \in \mathfrak {m}\) have components \(m_k=(i_k,\alpha _k)\) for \(k=1,\ldots , p(m)\). Recall that \(m\) determines an abstract monomial \(M_{m}\) by (1.7) and, given \(a \in {\varLambda }\), \(M_{m}\) determines \(M_{m,a} \in \mathcal {N}\) by evaluation of \(M_{m}\) at \(a\). Recall from [5, Example 3.6] that, for any test function \(g\),

$$\begin{aligned} \langle M_{m,a}, g \rangle _0 = \nabla ^{m} (Sg)_{\vec a}, \quad \quad \nabla ^{m} = \prod _{k=1}^{p (m)} \nabla ^{\alpha _{k}}, \end{aligned}$$
(2.2)

where on the right-hand side \(\vec a\) indicates that each of the \(p(m)\) arguments is evaluated at \(a\), and \(\nabla ^{\alpha _k}\) acts on the variable \(z_k\).

We specified a basis for \(\varPi \) in (1.21), but now we require another basis. For \(z=(x_1,\ldots ,x_d)\) a coordinate on \({\varLambda }'\), and \(\alpha = (\alpha _{1},\dots ,\alpha _{d})\in {\mathbb N}_{0}^{d}\), we define the binomial coefficient \(\left( {\begin{array}{c}z\\ \alpha \end{array}}\right) = \left( {\begin{array}{c}x_{1}\\ \alpha _{1}\end{array}}\right) \cdots \left( {\begin{array}{c}x_{d}\\ \alpha _{d}\end{array}}\right) \). The new basis is obtained by replacing, in the definition (1.21) of \(p_{m}\), the monomial \(z_{k}^{\alpha _{k}}\) by the polynomial \(\left( {\begin{array}{c}z_{k}\\ \alpha _{k}\end{array}}\right) \). More generally, we can also move the origin. Thus for \(m\in \bar{\mathfrak {m}}_+\) and \(a \in {\varLambda } '\) we define

$$\begin{aligned} b_{m,z}^{(a)} = \prod _{k=1}^{p}\left( {\begin{array}{c}z_{k}-a\\ \alpha _{k}\end{array}}\right) . \end{aligned}$$
(2.3)

This is a polynomial function defined on \({\varLambda }_{i_1}'^{(1)} \times \cdots \times {\varLambda }_{i_{p(m)}}'^{(1)}\). We implicitly extend it by zero so that it is a test function defined on \(\vec {\varvec{\Lambda }}^{*}\). For \(p(m)=0\), we set \(b_\varnothing ^{(a)}=1\). For any \(a \in {\varLambda } '\), the set \(\{b_{m,z}^{(a)}:m \in \bar{\mathfrak {v}}_{+} \}\) is a basis for \(\varPi \). For \(g \in {\varPhi }\), we define \(\mathrm{Tay}_{a}: {\varPhi } \rightarrow \varPi \) by

$$\begin{aligned} (\mathrm{Tay}_{a} g)_z = \sum _{m \in \bar{\mathfrak {v}}_{+}} (\nabla ^{m}g)_{\vec {a}} \, b_{m,z}^{(a)}. \end{aligned}$$
(2.4)

The following lemma shows that \(\mathrm{Tay}_{a} g\) is the lattice analogue of a Taylor polynomial approximation to \(g\). Its proof is given in Sect. 3.1.

Lemma 2.1

Let \({\varLambda }'\) be a coordinate patch, and let \(a,z\in {\varLambda }'\).

  1. (i)

    For \(g\in {\varPhi }\), \(\mathrm{Tay}_{a}g\) is the unique \(p \in \varPi \) such that \(\nabla ^{m} (g-p)_z|_{z=\vec {a}} = 0\) for all \(m \in \bar{\mathfrak {v}}_{+}\).

  2. (ii)

    \(\mathrm{Tay}_{a}\) commutes with \(S\).

  3. (iii)

    For \(g \in \varPi \), \((\mathrm{Tay}_{a}g)_z = g_z\).

For \(m \in \mathfrak {m}_+\), let

$$\begin{aligned} f_{m}^{(a)} = N_m Sb_{m}^{(a)} , \end{aligned}$$
(2.5)

where \(N_{m}\) is a normalisation constant (whose value is chosen in (3.9) so that case \(m=m'\) holds in (2.6) below). The lexicographic ordering on \(\mathfrak {m}_+\) implies that \(f_{m}^{(a)} \not = f_{m'}^{(a)} \not = 0\) for \(m\not =m'\). Since \(\{b_{m}^{(a)}\}_{m\in \bar{\mathfrak {v}}_+}\) forms a basis of \(\varPi \), the linearly independent set \(\{f_{m}^{(a)}\}_{m\in \mathfrak {v}_+}\) forms a basis of \(S\varPi \). The next lemma, which is proved in Sect. 3.2, says that \(\{M_{m,a}\}_{m\in \mathfrak {v}_+}\) and \(\{f_{m'}^{(a)}\}_{m'\in \mathfrak {v}_+}\) are dual bases of \(\mathcal {V}_+\) and \(S\varPi \) with respect to the zero-field pairing.

Lemma 2.2

Let \({\varLambda }'\) be a coordinate patch, and let \(a,z\in {\varLambda }'\).

  1. (i)

    For \(m,m' \in \mathfrak {m}_+\),

    $$\begin{aligned} \big \langle {M_{m,a},f_{m'}^{(a)}\big \rangle }_{0}&= \delta _{m,m'}. \end{aligned}$$
    (2.6)
  2. (ii)

    For \(g \in {\varPhi }\),

    $$\begin{aligned} (\mathrm{Tay}_a S g)_z = \sum _{m\in \mathfrak {v}_+} \langle M_{m,a},g \rangle _0 f_{m,z}^{(a)}. \end{aligned}$$
    (2.7)

Definition 2.3

Given \(a \in {\varLambda }\), we define a linear map \(\mathrm{loc}_{+,a} : \mathcal {N}_{\{a\}} \rightarrow \mathcal {V}_{+}(\{a\})\) by

$$\begin{aligned} \mathrm{loc}_{+,a} F = \sum _{m\in \mathfrak {v}_+} \big \langle {F,f_m^{(a)}\big \rangle }_0 M_{m,a}. \end{aligned}$$
(2.8)

It is an immediate consequence of (2.8) and (2.6) that \(\mathrm{loc}_{+,a} M_{m,a}=M_{m,a}\) for all \(m\in \mathfrak {v}_+\). Since \(\mathcal {V}_+\) is spanned by the monomials \((M_m)_{m\in \mathfrak {v}_+}\), it follows that

$$\begin{aligned} \mathrm{loc}_{+,a} P_a = P_a \quad \quad P \in \mathcal {V}_+. \end{aligned}$$
(2.9)

The following lemma shows that the map \(\mathrm{loc}_{+,a}\) is dual to \(\mathrm{Tay}_a\) with respect to the zero-field pairing of \(\mathcal {N}\) and \({\varPhi }\).

Lemma 2.4

For any \(a\in {\varLambda }\), \(F \in \mathcal {N}_{\{a\}}\), and \(g\in {\varPhi }\),

$$\begin{aligned} \langle \mathrm{loc}_{+,a}F, g \rangle _0 = \langle F, \mathrm{Tay}_a g \rangle _0. \end{aligned}$$
(2.10)

In particular, if \(g \in \varPi \), then

$$\begin{aligned} \langle \mathrm{loc}_{+,a} F, g \rangle _0 = \langle F, g \rangle _0. \end{aligned}$$
(2.11)

Proof

For (2.10), we use Definition 2.3, linearity of the pairing, (2.7), Lemma 2.1(ii) and (2.1) to obtain

$$\begin{aligned} \langle \mathrm{loc}_{+,a} F, g \rangle _0&= \sum _{m \in \mathfrak {v}_+} \langle F,f_m^{(a)} \rangle _0 \langle M_{m,a},g \rangle _0 = \langle F, \mathrm{Tay}_a S g \rangle _0 \nonumber \\&= \langle F, S \mathrm{Tay}_a g \rangle _0 = \langle F, \mathrm{Tay}_a g \rangle _0. \end{aligned}$$
(2.12)

For (2.11), we use (2.10) and the fact that \(\mathrm{Tay}_a g = g\) for \(g\in \varPi \), by Lemma 2.1(iii). \(\square \)

Lemma 2.5

Let \(a\in {\varLambda }\) and \(X \subset {\varLambda }\) be such that \(X \cup \{a\}\) is contained in a coordinate patch. Given \(V_+ \in \mathcal {V}_+\), there exists a unique \(V \in \mathcal {V}\) (depending on \(V_+\), \(a\), and \(X\)) such that

$$\begin{aligned} \mathrm{loc}_{+,a} V(X) = V_{+,a}. \end{aligned}$$
(2.13)

In particular, the map \(V_{+} \mapsto V\) defines an isomorphism from \(\mathcal {V}_+\) to \(\mathcal {V}\).

Proof

Fix \(V_+ = \sum _{m \in \mathfrak {v}_+}\alpha _m M_{m,a} \in \mathcal {V}_{+}(\{a\})\); then \(\alpha _m = \langle V_{+,a},f_{m}^{(a)} \rangle _0\) by (2.6). Let \(\hat{P}_m = \hat{P}(M_m)\). We want to show that there is a unique \(V = \sum _{m'\in \mathfrak {v}_+} \beta _{m'} \hat{P}_{m'} \in \mathcal {V}\) such that

$$\begin{aligned} \alpha _m = \sum _{m'\in \mathfrak {v}_+}\beta _{m'} \big \langle {\hat{P}_{m'}(X),f_m^{(a)}\big \rangle }_0 = \sum _{m'\in \mathfrak {v}_+}\beta _{m'} B_{m',m}, \end{aligned}$$
(2.14)

where \(B_{m',m}=\langle \hat{P}_{m'}(X),f_m^{(a)} \rangle _0\). Let \(\hat{Q}_{m'}=\hat{P}_{m'}-M_{m'}\). According to Definition 1.2, \(\hat{Q}_{m'} \in \mathcal {P}_t + \mathcal {R}_1\) for some \(t > [M_{m'}]\). By definition, elements of \(\mathcal {R}_1(X)\) annihilate test functions in pairings. With (3.14)–(3.15) below, this implies that, for \([M_{m'}]\ge [M_m]\),

$$\begin{aligned} B_{m',m} = \big \langle {M_{m'}(X),f_m^{(a)}\big \rangle }_0 + \big \langle {\hat{Q}_{m'}(X),f_m^{(a)}\big \rangle }_0 = |X|\delta _{m',m} + 0 = |X|\delta _{m',m}. \end{aligned}$$
(2.15)

Thus the matrix \(B\) is triangular, with \(|X|\) on the diagonal, and hence \(B^{-1}\) exists. Then the row vector \(\beta \) is given in terms of the row vector \(\alpha \) by \(\beta = \alpha B^{-1}\), and this solution is unique. Since \(\mathcal {V}_+\) and \(\mathcal {V}\) have the same finite dimension, the map \(V_{+} \mapsto V\) defines an isomorphism between these two spaces. \(\square \)

The following commutative diagram illustrates the construction of \(\mathrm{loc}_X\) in the next proof:

figure a

Proof of Proposition 1.5

(i) Existence of \(V \in \mathcal {V}\). Given \(a\) in \(X\), let \(\mu _{X,a}: \mathcal {V}_+(\{a\}) \rightarrow \mathcal {V}(X)\) denote the map which associates the polynomial \(V(X)\) to \(V_{+,a}\) in Lemma 2.5. Let \(V (X) = (\mu _{X,a} \circ \mathrm{loc}_{+,a}) F\). By (2.11) and Lemma 2.5, for all \(g \in \varPi \),

$$\begin{aligned} \langle V (X),g \rangle _0 = \langle \mathrm{loc}_{+,a} V (X),g \rangle _0 = \langle \mathrm{loc}_{+,a} \mu _{X,a} \mathrm{loc}_{+,a} F,g \rangle _0 = \langle \mathrm{loc}_{+,a} F,g \rangle _0 = \langle F,g \rangle _0.\nonumber \\ \end{aligned}$$
(2.16)

This establishes (1.22).

(ii) Uniqueness. Given two polynomials in \(\mathcal {V}\) that satisfy (1.22), let \(P\) be their difference. Then \(P\) is a polynomial in \(\mathcal {V}\) such that, for all \(g \in \varPi \) and \(a \in X\),

$$\begin{aligned} 0 = \langle P (X),g \rangle _{0} = \langle \mathrm{loc}_{+,a} P (X),g \rangle _{0} , \end{aligned}$$
(2.17)

where we used (2.11). By (2.6), \(\mathrm{loc}_{+,a} P (X)\) is therefore zero as an element of \(\mathcal {V}_{+} (\{a \})\). By Lemma 2.5, \(P=0\). This proves uniqueness.

(iii) Independence of coordinate and coordinate patch. Recall the definition of \(F \in \mathcal {N}_X\) above Proposition 1.5. Suppose there are two coordinate patches \({\varLambda }',{\varLambda }''\) with corresponding coordinates \(z',z''\) that imply \(F \in \mathcal {N}_X\). Then there exists \(V'\) such that (1.22) holds for all \(g \in \varPi [{\varLambda } ']\) and \(V''\) such that (1.22) holds for all \(g \in \varPi [{\varLambda } '']\). In particular, \(V'\) and \(V''\) satisfy (1.22) for all \(g \in \varPi [{\varLambda } ' \cap {\varLambda } '']\). Since \({\varLambda } ' \cap {\varLambda } ''\) with either of the coordinates \(z',z''\) is also a valid choice of coordinate patch that contains \(X\), the uniqueness part (ii) with coordinate patch \({\varLambda } ' \cap {\varLambda } ''\) implies \(V'=V''\). So the polynomial \(V\) does not depend on the choice of \({\varLambda }'\) implicit in the requirement \(F \in \mathcal {N}_X\).

(iv) Duality. For \(n \in \mathfrak {v}_+\), let \(c_n\) be the vector \((c_n)_{n'} = B^{-1}_{n,n'}\), where \(B\) is the matrix in the proof of Lemma 2.5. It follows from that proof that the pairing of \(\sum _{n'}(c_n)_{n'} \hat{P}_{n'}(X)\) with \(f_m^{(a)}\) is \(\delta _{n,m}\). Thus the basis \((c_n)_{n \in \mathfrak {v}_+}\) is dual to the basis \((f_m^{(a)})_{m \in \mathfrak {v}_+}\) of \(S\varPi \). This completes the proof of Proposition 1.5. \(\square \)

It follows from (i) and (ii) above that, for any \(a \in X\),

$$\begin{aligned} \mathrm{loc}_X F = (\mu _{X,a} \circ \mathrm{loc}_{+,a}) F. \end{aligned}$$
(2.18)

2.2 Proof of Norm Estimates for loc

We now prove Propositions 1.11, 1.12 and 1.16, using the following definition which we recall from [5, (3.37)]. Given \(X \subset {\varLambda }\) and a test function \(g \in {\varPhi }\), we define

$$\begin{aligned} \Vert g\Vert _{{\varPhi }(X)} = \inf \{ \Vert g -f\Vert _{{\varPhi }} : f_{z} = 0 \text { if all components of z lie in X}\}. \end{aligned}$$
(2.19)

Let \(f\) be as in (2.19). By definition, if \(F \in \mathcal {N}(X)\) then \(\langle F,g \rangle _\phi = \langle F,g - f \rangle _\phi \). Hence \( |\langle F,g \rangle _{\phi }| \le \Vert F\Vert _{T_{\phi }}\, \Vert g - f\Vert _{{\varPhi } }\), and by taking the infimum over \(f\) we obtain

$$\begin{aligned} |\langle F,g \rangle _\phi | \le \Vert F\Vert _{T_{\phi }}\, \Vert g \Vert _{{\varPhi } (X)} \quad \quad F\in \mathcal {N}(X). \end{aligned}$$
(2.20)

Proof of Propositions 1.11 and 1.16

We use the notation in the proof of Lemma 2.5. By definition, \(\mathrm{loc}_{+,a}F = \sum _{m' \in \mathfrak {v}_+} \alpha _{m'} M_{m',a}\) with \(\alpha _{m'} = \langle F,f_{m'}^{(a)} \rangle _0\). Therefore, by (2.18) and the formula \(\beta = \alpha B^{-1}\) of the proof of Lemma 2.5,

$$\begin{aligned} \mathrm{loc}_X F = \sum _{m \in \mathfrak {v}_+} \beta _m \hat{P}_m(X) = \sum _{m,m' \in \mathfrak {v}_+} \langle F,f_{m'}^{(a)} \rangle _0 B_{m',m}^{-1} \hat{P}_m(X). \end{aligned}$$
(2.21)

By Definition 1.15, this implies that

$$\begin{aligned} \mathrm{loc}_{X,Y} F = \sum _{m \in \mathfrak {v}_+} \beta _m \hat{P}_m(Y) = \sum _{m,m' \in \mathfrak {v}_+} \langle F,f_{m'}^{(a)} \rangle _0 B_{m',m}^{-1} \hat{P}_m(Y). \end{aligned}$$
(2.22)

Hence, writing \(A=|X|^{-1}B\), and estimating the norm of \(\hat{P}_m(Y)= \sum _{y\in Y} \hat{P}_{m,y}\) by the triangle inequality, we obtain

$$\begin{aligned} \Vert \mathrm{loc}_{X,Y} F \Vert _{T_0}&\le \sum _{m,m' \in \mathfrak {v}_+} |\langle F,f_{m'}^{(a)} \rangle _0|\, |B_{m',m}^{-1}|\, \Vert \hat{P}_m(Y)\Vert _{T_0} \nonumber \\&\le \frac{|Y|}{|X|} \sum _{m,m' \in \mathfrak {v}_+} |\langle F,f_{m'}^{(a)} \rangle _0|\, |A_{m',m}^{-1}|\, \Vert \hat{P}_{m,0}\Vert _{T_0} \nonumber \\&\le \Vert F\Vert _{T_0} \frac{|Y|}{|X|} \sum _{m,m' \in \mathfrak {v}_+} \Vert f_{m'}^{(a)}\Vert _{{\varPhi }(U)} \, |A_{m',m}^{-1}|\, \Vert \hat{P}_{m,0}\Vert _{T_0}, \end{aligned}$$
(2.23)

where we used (2.20) in the last inequality.

It is shown in Lemmas 3.2 and 3.4 that

$$\begin{aligned} \Vert \hat{P}_{m,0}\Vert _{T_0} \le R^{-|\alpha (m)|_1}\mathfrak {h}^{m} , \quad \quad \Vert f_{m'}^{(a)}\Vert _{{\varPhi }(U)} \le \bar{C} \mathfrak {h}^{-m'} R^{|\alpha (m')|_1}, \end{aligned}$$
(2.24)

where \(\mathfrak {h}^m\) denotes the product of \(\mathfrak {h}_{i_k}\) over the components \((i_k,\alpha _k)\) of \(m\). It therefore suffices to show that

$$\begin{aligned} |A_{m',m}^{-1}| \le \bar{C} \mathfrak {h}^{m'} R^{-|\alpha (m')|_1} R^{|\alpha (m)|_1}\mathfrak {h}^{-m}. \end{aligned}$$
(2.25)

The inverse matrix \(A^{-1}\) can be computed using the formula

$$\begin{aligned} A^{-1} = \left( I + (A- I) \right) ^{-1} = \sum _{j=0}^{|\mathfrak {v}_+|-1}(-1)^j (A- I)^j, \end{aligned}$$
(2.26)

where we have used the fact that the upper triangular matrix \(A- I\) with zero diagonal is nilpotent. Consequently, \(A_{m',m}^{-1}\) is bounded by a sum of products of factors of the form

$$\begin{aligned} |X|^{-1}|\langle \hat{P}_{m'}(X),f_m^{(a)} \rangle _0| \le \Vert \hat{P}_{m',0}\Vert _{T_0} \Vert f_m^{(a)}\Vert _{{\varPhi }(\hat{X})}, \end{aligned}$$
(2.27)

where \(\hat{X}\) is a polymer which extends \(X\) in a minimal way to ensure that \(P_{m'}(X) \in \mathcal {N}(\hat{X})\) for all \(m'\in \mathfrak {v}_+\). The extension is present because the discrete derivatives in \(P_{m'}\) cause \(P_{m'}(X)\) to depend on points near the boundary, but outside \(X\). Now repeated application of (2.24) gives rise to a telescoping product in which the powers of \(R\) and \(\mathfrak {h}\) exactly cancel, leading to an upper bound

$$\begin{aligned} \Vert \mathrm{loc}_{X,Y} F \Vert _{T_0} \le \bar{C} \Vert F\Vert _{T_0}. \end{aligned}$$
(2.28)

This proves Proposition 1.16, and the special case \(Y=X\) then gives Proposition 1.11. \(\square \)
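The finite Neumann sum (2.26) is a general fact about unipotent triangular matrices: since \(A-I\) is strictly triangular it is nilpotent, so the geometric series terminates after \(|\mathfrak {v}_+|\) terms. A quick numerical check of this mechanism (illustrative only; the matrix below is an arbitrary unipotent upper-triangular matrix, not a pairing matrix from the proof):

```python
import numpy as np

n = 4
# A unipotent upper-triangular matrix: ones on the diagonal,
# arbitrary entries above it (fixed by a seed for reproducibility).
rng = np.random.default_rng(0)
A = np.eye(n) + np.triu(rng.normal(size=(n, n)), k=1)

N = A - np.eye(n)                       # strictly upper triangular, hence nilpotent
assert np.allclose(np.linalg.matrix_power(N, n), 0)

# Finite geometric series: A^{-1} = sum_{j=0}^{n-1} (-1)^j (A - I)^j.
A_inv = sum((-1) ** j * np.linalg.matrix_power(N, j) for j in range(n))
assert np.allclose(A @ A_inv, np.eye(n))
```

Multiplying the sum by \(A = I + N\) telescopes to \(I + (-1)^{n-1}N^{n}\), and the nilpotency \(N^{n}=0\) makes the remainder vanish.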

For the proof of Proposition 1.12, we need some preliminaries. For \(X\) contained in a coordinate patch \({\varLambda }'\), let \(\varPi (X) \subset {\varPhi }\) denote the set of test functions whose restriction to every argument in \(X\) agrees with the restriction of an element of \(\varPi \). This is not the same as \(\varPi [{\varLambda }']\) defined previously. Let

$$\begin{aligned} \varPi ^{\perp } (X) = \{G\in \mathcal {N}(X) : \langle G,f \rangle _{0}=0 \; \text {for all } f\in \varPi (X)\}. \end{aligned}$$
(2.29)

We claim that \(\varPi ^{\perp } (X)\) is an ideal in \(\mathcal {N}(X)\), namely that

$$\begin{aligned} \langle FG,f \rangle _{0}=0 \;\; \text {for all } F\in \mathcal {N}(X), G \in \varPi ^{\perp } (X), f \in \varPi (X). \end{aligned}$$
(2.30)

To prove (2.30), it suffices to consider test functions \(f\in \varPi (X)\) which vanish except on sequences \(z= (z_{1},\dots ,z_{p (z)})\) in \(\vec {\varvec{\Lambda }}^{*}\) with \(p (z)\) fixed equal to some positive integer \(n\). Likewise, we can assume that \(f_{z}=0\) unless the component species \(i (z_{1}),\dots ,i (z_{n})\) have specified values. These restrictions are sufficient because such test functions span \(\varPi (X)\). For such test functions, it follows from [5, (5.24)] that \(\langle FG,f \rangle _\phi = \langle G,F^\dagger f \rangle _\phi \), where, for some constants \(c_{z'}\),

$$\begin{aligned} (F^{\dagger }f)_{z''}&= \sum _{z'} c_{z'}F_{z'} \tilde{f}^{(z')}_{z''} \quad \text {with} \quad \tilde{f}^{(z')}_{z''} = \sum _{z \in z' \diamond z''} f_{z}. \end{aligned}$$
(2.31)

For each fixed \(z'\), the test function \(\tilde{f}^{(z')}\) is an element of \(\varPi (X)\), and hence \(\langle G,\tilde{f}^{(z')} \rangle _0=0\). Then (2.30) follows from (2.31) and the linearity of the pairing.

We define, on \({\varPhi }\), the semi-norm

$$\begin{aligned} \Vert g \Vert _{\tilde{{\varPhi }} (X)} = \inf \{ \Vert g -f\Vert _{{\varPhi } } : f \in \varPi (X)\}. \end{aligned}$$
(2.32)

Lemma 2.6

Let \(\epsilon >0\), \(X\subset {\varLambda }'\), and \(g \in {\varPhi }\). Then there exists a decomposition \(g=f+h\) with \(f \in \varPi (X)\), \(\Vert g\Vert _{\tilde{{\varPhi }} (X)} \le \Vert h\Vert _{{\varPhi }} \le (1+\epsilon )\Vert g\Vert _{\tilde{{\varPhi }} (X)}\) and \(\Vert f \Vert _{{\varPhi }} \le (2+\epsilon ) \Vert g\Vert _{{\varPhi }}\).

Proof

By (2.32), we can choose \(f \in \varPi (X)\) so that \(h=g-f\) obeys \(\Vert g\Vert _{\tilde{{\varPhi }}(X)} \le \Vert h \Vert _{{\varPhi }} \le (1+\epsilon )\Vert g\Vert _{\tilde{{\varPhi }}(X)}\), and then \(\Vert f \Vert _{{\varPhi }}\le \Vert h\Vert _{{\varPhi }} + \Vert g\Vert _{{\varPhi }} \le (2+\epsilon ) \Vert g\Vert _{{\varPhi }}\). \(\square \)

Proof of Proposition 1.12

Let \(R=L^j\). We write \(c\) for a generic constant and \(\bar{c}\) for a generic constant that depends on \(R^{-1}\mathrm{diam}(X)\). Let \(F \in \mathcal {N}(X)\) and \(A<p_\mathcal {N}\). We first apply [5, Proposition 3.11] to obtain

$$\begin{aligned} \Vert F \Vert _{T_{\phi }'}&\le \left( 1 + \Vert \phi \Vert _{{\varPhi }'}\right) ^{A+1} \left[ \Vert F\Vert _{T_{0}'} + \rho ^{(A+1)} \sup _{0\le t\le 1} \Vert F\Vert _{T_{t\phi }} \right] , \end{aligned}$$
(2.33)

where, due to our choice of norm, \(\rho ^{(A+1)}\le c L^{-(A+1)[\varphi _\mathrm{min}]}\). To estimate \(\Vert F\Vert _{T_{0}'}\), given a test function \(g\), we choose \(f\in \varPi (X)\) as in Lemma 2.6, and obtain

$$\begin{aligned} \left| \langle F,g \rangle _{0} \right|&\le \left| \langle F,f \rangle _{0}\right| + \left| \langle F,g-f \rangle _{0} \right| . \end{aligned}$$
(2.34)

Now we set \(F=F_1(1-\mathrm{loc}_Y)F_2\). By (1.23) and (2.29), \((1-\mathrm{loc}_Y)F_2 \in \varPi ^{\perp } (X)\). By (2.30), this implies that \(F \in \varPi ^{\perp } (X)\), so the first term on the right-hand side of (2.34) is zero. For the second term, we use

$$\begin{aligned} \left| \langle F,g-f \rangle _{0} \right|&\le \Vert F\Vert _{T_{0} } \Vert g-f\Vert _{{\varPhi }} \le \Vert F\Vert _{T_{0}} (1+\epsilon )\Vert g\Vert _{\tilde{{\varPhi }}(X)} \le \Vert F\Vert _{T_{0}} (1+\epsilon ) \bar{c} L^{-d_+ '}\Vert g\Vert _{{\varPhi }'} , \end{aligned}$$
(2.35)

where the final inequality is a consequence of Lemma 3.6. After taking the supremum over \(g \in B ({\varPhi }')\), followed by the infimum over \(\epsilon >0\), we obtain \( \Vert F\Vert _{T_{0} '}\le \bar{c}\, L^{-d_+'} \Vert F\Vert _{T_{0} }\), and hence

$$\begin{aligned} \Vert F \Vert _{T_{\phi }'} \le \left( 1 + \Vert \phi \Vert _{{\varPhi }'}\right) ^{A+1} \bar{c} \,\left( L^{-d_+ '} + L^{-(A+1)[\varphi _\mathrm{min}]} \right) \sup _{0\le t\le 1} \Vert F\Vert _{T_{t\phi }}. \end{aligned}$$
(2.36)

Next, we apply the triangle inequality and the product property of the \(T_\phi \) semi-norm to obtain

$$\begin{aligned} \Vert F \Vert _{T_{t\phi }}&\le \Vert F_1F_2\Vert _{T_{t\phi }} + \Vert F_1 \Vert _{T_{t\phi }} \Vert \mathrm{loc}_Y F_2 \Vert _{T_{t\phi }}. \end{aligned}$$
(2.37)

Since \(\mathrm{loc}_Y F_2 \in \mathcal {V}\), it is a polynomial of dimension at most \(d_+\), and hence of degree at most \(d_+/[\varphi _\mathrm{min}]\). It follows from [5, Proposition 3.10] that \(\Vert \mathrm{loc}_Y F_2 \Vert _{T_{t\phi }}\le (1+\Vert \phi \Vert _{\varPhi })^{d_+/[\varphi _\mathrm{min}]}\Vert \mathrm{loc}_Y F_2 \Vert _{T_{0}}\). With Proposition 1.11, this gives

$$\begin{aligned} \Vert F \Vert _{T_{t\phi }}&\le \Vert F_1F_2\Vert _{T_{t\phi }} + \bar{C}' (1+\Vert \phi \Vert _{\varPhi })^{d_+/[\varphi _\mathrm{min}]} \Vert F_1 \Vert _{T_{t\phi }} \Vert F_2 \Vert _{T_{0}}. \end{aligned}$$
(2.38)

Since \(\Vert \phi \Vert _{{\varPhi }} \le cL^{-[\varphi _\mathrm{min}]} \Vert \phi \Vert _{{\varPhi }'} \le c \Vert \phi \Vert _{{\varPhi }'}\) due to our choice of norm, this gives

$$\begin{aligned} \Vert F \Vert _{T_{t\phi }}&\le \Vert F_1F_2\Vert _{T_{t\phi }} + \bar{c} (1+\Vert \phi \Vert _{{\varPhi }'})^{d_+/[\varphi _\mathrm{min}]} \Vert F_1 \Vert _{T_{t\phi }} \Vert F_2 \Vert _{T_{0}}. \end{aligned}$$
(2.39)

Substitution of (2.39) into (2.36) completes the proof. \(\square \)

2.3 The Polynomials \(P(M)\)

We now prove the claim which guaranteed existence of the polynomials \(\hat{P}\) of Definition 1.2. These polynomials were used to define the \(\Sigma \)-invariant subspace \(\mathcal {V}\) of \(\mathcal {P}\).

Lemma 2.7

For any \(M \in \mathcal {M}_{+}\), the polynomial \(P=P(M)\) of (1.19) obeys: (i) \(P (M)\) is \(\Sigma _{\text {axes}}\)-covariant, (ii) \(M-P (M) \in \mathcal {P}_{t} + \mathcal {R}_{1}\) for some \(t>[M]\), and (iii) \(P ({\varTheta } M)={\varTheta } P (M)\) for \({\varTheta } \in \Sigma _{+}\).

Proof

(i) For \({\varTheta } ' \in \Sigma _{\text {axes}}\),

$$\begin{aligned} {\varTheta } 'P&= |\Sigma _{\text {axes}}|^{-1} \sum _{{\varTheta } \in \Sigma _{\text {axes}}} \lambda ({\varTheta } ,M) {\varTheta } '{\varTheta } M \nonumber \\&= |\Sigma _{\text {axes}}|^{-1} \sum _{{\varTheta } \in \Sigma _{\text {axes}}} \lambda ({\varTheta } '^{-1}{\varTheta } ,M) {\varTheta } M \nonumber \\&= \lambda ({\varTheta } '^{-1},M) |\Sigma _{\text {axes}}|^{-1} \sum _{{\varTheta } \in \Sigma _{\text {axes}}} \lambda ({\varTheta } ,M) {\varTheta } M \nonumber \\&= \lambda ({\varTheta } '^{-1},M) P = \lambda ({\varTheta } ',M) P , \end{aligned}$$
(2.40)

as required.

(ii) Given \(M\in \mathcal {M}_{+}\) and \({\varTheta } \in \Sigma _{\text {axes}}\), the monomial \({\varTheta } M\) is equal to \(M\) with derivatives switched from forward to backward in each coordinate where \({\varTheta }\) changes sign. Any derivative that was switched can be restored to its original direction using (1.5), modulo a term in \(\mathcal {P}_t + \mathcal {R}_{1}\). The use of (1.5) introduces a sign change for each restored derivative, with the effect that \(M\) is equal to \(\lambda ({\varTheta } ,M) {\varTheta } M\) modulo \(\mathcal {P}_t + \mathcal {R}_{1}\). Since \(P(M)\) is the average of the monomials \(\lambda ({\varTheta },M){\varTheta } M\) over \({\varTheta }\in \Sigma _{\text {axes}}\), it follows that \(M-P (M) \in \mathcal {P}_{t} + \mathcal {R}_{1}\).

(iii) Let \(M \in \mathcal {M}_{+}\), \({\varTheta } '\in \Sigma _{+}\), and \({\varTheta } \in \Sigma _{\text {axes}}\). Since \({\varTheta }'^{-1}{\varTheta } {\varTheta }' \in \Sigma _{\text {axes}}\), it makes sense to write \(\lambda ({\varTheta } '^{-1}{\varTheta } {\varTheta } ',M)\). Also, since the number of derivatives that change direction in the transformation \(M \mapsto {\varTheta } '^{-1}{\varTheta } {\varTheta } 'M\) is equal to the number that change direction in the transformation \({\varTheta }'M \mapsto {\varTheta }{\varTheta }'M\), it follows that \(\lambda ({\varTheta } '^{-1}{\varTheta } {\varTheta } ',M)= \lambda ({\varTheta } ,{\varTheta } 'M)\). Therefore, by the change of variables \({\varTheta } \mapsto {\varTheta } '^{-1}{\varTheta } {\varTheta } '\) in the sum,

$$\begin{aligned} {\varTheta } 'P (M)&= |\Sigma _{\text {axes}}|^{-1} \sum _{{\varTheta } \in \Sigma _{\text {axes}}} \lambda ({\varTheta } ,M) {\varTheta } '{\varTheta } M \nonumber \\&= |\Sigma _{\text {axes}}|^{-1} \sum _{{\varTheta } \in \Sigma _{\text {axes}}} \lambda ({\varTheta } '^{-1}{\varTheta } {\varTheta } ',M) {\varTheta } {\varTheta } 'M \nonumber \\&= |\Sigma _{\text {axes}}|^{-1} \sum _{{\varTheta } \in \Sigma _{\text {axes}}} \lambda ({\varTheta } ,{\varTheta } 'M) {\varTheta } ({\varTheta } 'M) = P ({\varTheta } 'M) , \end{aligned}$$
(2.41)

and the proof is complete. \(\square \)

3 Lattice Taylor Polynomials

3.1 Taylor Polynomials

Let \({\varLambda }'\) be a coordinate patch, and let \(a \in {\varLambda }'\). Recall the definition of the test functions \(b_{m}^{(a)}\) in (2.3), for \(m \in \bar{\mathfrak {m}}_+\). We now prove Lemma 2.1.

Proof of Lemma 2.1

(i) To show that \(p=\mathrm{Tay}_a g\) obeys the desired identity \(\nabla ^m(g-p)|_{z=\vec a} =0\), it suffices to show that

$$\begin{aligned} \nabla ^{m}b_{m',z}^{(a)}|_{z=\vec a}=\delta _{m,m'}, \quad \quad m,m' \in \bar{\mathfrak {m}}_{+}. \end{aligned}$$
(3.1)

To prove (3.1), it suffices to consider one species and the 1-dimensional case, since the derivatives and binomial coefficients all factor. For non-negative integers \(k,n\), it suffices to show that \(\nabla _+^n \left( {\begin{array}{c}x-a\\ k\end{array}}\right) |_{x=a} = \delta _{n,k}\), where we write \(\nabla _+\) to emphasise that this is a forward derivative. We use induction on \(n\), noting first that when \(n =0\) we have \(\nabla _+^n \left( {\begin{array}{c}x-a\\ k\end{array}}\right) |_{x=a} = \left( {\begin{array}{c}0\\ k\end{array}}\right) = \delta _{0,k} =\delta _{n, k}\). To advance the induction, we assume that the identity holds for \(n -1\) (for all \(k\in {\mathbb N}_0\)). Since \(\nabla _+ \left( {\begin{array}{c}x-a\\ k\end{array}}\right) = \left( {\begin{array}{c}x-a+1\\ k\end{array}}\right) -\left( {\begin{array}{c}x-a\\ k\end{array}}\right) = \left( {\begin{array}{c}x-a\\ k -1\end{array}}\right) \) for all \(x \in {\mathbb Z}\), the induction hypothesis gives, as required,

$$\begin{aligned} \left. \nabla _+^n \left( {\begin{array}{c}x-a\\ k\end{array}}\right) \right| _{x=a} = \left. \nabla _+^{n-1} \left( {\begin{array}{c}x-a\\ k -1\end{array}}\right) \right| _{x=a} = \delta _{n -1,k -1} = \delta _{n,k}. \end{aligned}$$
(3.2)

For the uniqueness, suppose \(q \in \varPi \) obeys \(\nabla ^m(g-q)|_{z=\vec a} =0\). Since \(\{b_{m}^{(a)}, m \in \bar{\mathfrak {v}}_{+} \}\) is a basis of \(\varPi \), there are constants \(c_m\) such that \(q=\sum _{m \in \bar{\mathfrak {v}}_+}c_m b_{m}^{(a)}\). By our assumption about \(q\) and (3.1), \(\nabla ^m g_{\vec a}=\nabla ^m q_{\vec a}=c_m\), so \(q=\mathrm{Tay}_a g\) as required.

(ii) It follows from (2.4) that the Taylor expansion of \(g\) with permuted arguments is obtained by permuting the arguments of \(\mathrm{Tay}_{a} g\), and from this it follows that \(\mathrm{Tay}_{a}\) commutes with \(S\).

(iii) This follows from the uniqueness in (i). \(\square \)

We also make note of a simple fact that we use below. Suppose the components of \(m\in \bar{\mathfrak {m}}_+\) are \((i_{k},\alpha _{k})\) and the components of \(m'\in \bar{\mathfrak {m}}_+\) are \((i_{k},\alpha _{k}')\) where \(k \in \{1,\dots ,p\}\) and \(\alpha _{k},\alpha _{k}' \in {\mathbb N}_{0}^{d}\). We say \(\alpha _{k} \ge \alpha _{k}'\) if each component of \(\alpha _{k}\) is at least as large as the corresponding component of \(\alpha _{k}'\). By examining the proof of (3.1), we find that

$$\begin{aligned} \nabla ^{m}b_{m',z}^{(a)}&= 0 \quad \text {if } \alpha _{k} > \alpha _{k}' \hbox { for some } k = 1,\dots ,p,\end{aligned}$$
(3.3)
$$\begin{aligned} \nabla ^{m}b_{m,z}^{(a)}&= 1. \end{aligned}$$
(3.4)

In other words, the condition \(z = \vec {a}\) is not needed in these cases.
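In one dimension, (3.1), (3.3) and (3.4) are elementary properties of forward differences of binomial coefficients, resting on the identity \(\nabla _+ \binom{x-a}{k} = \binom{x-a}{k-1}\). The following brute-force check (illustrative Python, external to the paper; the helper names are ours) confirms all three displayed facts:

```python
from math import factorial

def binom(y, k):
    # Generalized binomial coefficient C(y, k) for integer y (possibly negative).
    if k < 0:
        return 0
    num = 1
    for j in range(k):
        num *= (y - j)
    return num // factorial(k)

def fwd(f, x, n=1):
    # n-th forward difference (nabla_+^n f)(x) on the integer lattice.
    if n == 0:
        return f(x)
    return fwd(f, x + 1, n - 1) - fwd(f, x, n - 1)

a = 3
for n in range(6):
    for k in range(6):
        b = lambda x, k=k: binom(x - a, k)
        # (3.1): at the base point, nabla_+^n C(x-a, k) picks out delta_{n,k}.
        assert fwd(b, a, n) == (1 if n == k else 0)
        for x in range(-4, 8):
            if n > k:
                assert fwd(b, x, n) == 0   # (3.3): vanishes identically, not just at a
            if n == k:
                assert fwd(b, x, n) == 1   # (3.4): constant 1 for all x
```

The last two assertions verify precisely the point made above: when \(n \ge k\), the value of \(\nabla _+^{n}\binom{x-a}{k}\) does not depend on the evaluation point.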

3.2 Dual Pairing

For \(m \in \mathfrak {m}_+\) let \(\vec \Sigma (m)\) be the set of permutations of \(1,\ldots , p (m)\) that fix the species when they act on \(m\) by permuting components, i.e., \(\pi (i_{k},\alpha _{k}) = (i_{\pi k},\alpha _{\pi k})\) with \(i_{\pi k}= i_{k}\). Let \(|\vec \Sigma (m) |\) be the order of this group. There is also the subgroup \(\vec \Sigma _{0} (m)\) of permutations that fix \(m\). It has order

$$\begin{aligned} |\vec \Sigma _{0} (m)| = \prod _{(i,\alpha )}n_{(i,\alpha )}(m)! , \end{aligned}$$
(3.5)

with \(n_{(i,\alpha )}\) as defined below (1.8): \(n_{(i,\alpha )}\) denotes the number of times that \((i,\alpha )\) appears as a component of \(m\).

For example, for \(m=((1,\alpha _1),(1,\alpha _1),(1,\alpha _2),(1,\alpha _2), (1,\alpha _2),(2,\alpha _3))\) with \(\alpha _1 < \alpha _2\), we have \(|\vec \Sigma (m) |=5!1!\) and \(|\vec \Sigma _{0} (m)|= 2!3!1!\). For this choice of \(m\),

$$\begin{aligned} b_{m,z}^{(a)} = \left( {\begin{array}{c}z_1-a\\ \alpha _1\end{array}}\right) \left( {\begin{array}{c}z_2-a\\ \alpha _1\end{array}}\right) \left( {\begin{array}{c}z_3-a\\ \alpha _2\end{array}}\right) \left( {\begin{array}{c}z_4-a\\ \alpha _2\end{array}}\right) \left( {\begin{array}{c}z_5-a\\ \alpha _2\end{array}}\right) \left( {\begin{array}{c}z_6-a\\ \alpha _3\end{array}}\right) . \end{aligned}$$
(3.6)

For this, or for any other \(m \in \bar{\mathfrak {m}}_{+}\), a permutation \(\pi \) in \(\vec \Sigma (m)\) has an action on \(b_{m,z}^{(a)}\) either by mapping it to \(b_{\pi m,z}^{(a)}\) or to \(b_{m,\pi z}^{(a)}\), where \(\pi (z_{1},\dots ,z_{p}) = (z_{\pi 1},\dots ,z_{\pi p})\). The two actions are related by \(b_{\pi m,z}^{(a)} = b_{m,\pi ^{-1}z}^{(a)}\). Therefore \(\vec \Sigma _{0} (m)\) is the set of permutations that leave \(b_{m,z}^{(a)}\) invariant.
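For the example (3.6), the orders \(|\vec \Sigma (m)|\) and \(|\vec \Sigma _{0}(m)|\) and the normalisation constant (3.9) can be counted directly by enumerating permutations. The following brute-force check is purely illustrative and not part of the proof; the encoding of \(m\) as species/multi-index pairs is our own:

```python
from itertools import permutations
from math import factorial

# Components (i_k, alpha_k) of the example m, with alpha_1 < alpha_2 < alpha_3
# encoded as the integers 1 < 2 < 3.
m = [(1, 1), (1, 1), (1, 2), (1, 2), (1, 2), (2, 3)]
p = len(m)

# Sigma(m): permutations of the components that fix the species labels.
sigma = [pi for pi in permutations(range(p))
         if all(m[pi[k]][0] == m[k][0] for k in range(p))]
# Sigma_0(m): the subgroup of permutations that fix m itself.
sigma0 = [pi for pi in sigma
          if all(m[pi[k]] == m[k] for k in range(p))]

assert len(sigma) == factorial(5) * factorial(1)                   # |Sigma(m)| = 5! 1!
assert len(sigma0) == factorial(2) * factorial(3) * factorial(1)   # |Sigma_0(m)| = 2! 3! 1!
N_m = len(sigma) // len(sigma0)    # the normalisation constant of (3.9)
assert N_m == 10
```

The ratio \(N_m = |\vec \Sigma (m)|/|\vec \Sigma _{0}(m)| = 120/12 = 10\) counts the distinct rearrangements of the components of \(m\) within each species.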

By the definition of the symmetry operator \(S: {\varPhi } \rightarrow {\varPhi }\) in [5, Definition 3.4], for \(m \in \mathfrak {m}_{+}\),

$$\begin{aligned} \big (Sb_{m}^{(a)}\big )_z = |\vec \Sigma (m) |^{-1}\sum _{\sigma \in \vec \Sigma (m)} \mathrm {sgn}(\sigma _f) b_{m,\sigma z}^{(a)}, \end{aligned}$$
(3.7)

where \(\sigma _f\) denotes the restriction of \(\sigma \) to the fermion components of \(z\), and \(\mathrm {sgn}(\sigma _f)\) denotes the sign of this permutation. In (2.5), we defined

$$\begin{aligned} f_{m}^{(a)} = N_m Sb_{m}^{(a)}, \end{aligned}$$
(3.8)

and we now specify that

$$\begin{aligned} N_m = \frac{|\vec \Sigma (m)|}{|\vec \Sigma _{0} (m) |}. \end{aligned}$$
(3.9)

We are now in a position to prove Lemma 2.2. Lemma 2.2(i) is subsumed by Lemma 3.1 and is proved in (3.13).

Proof of Lemma 2.2(ii)

Let \(g \in \varPi \). By Lemma 2.1(ii), \(\mathrm{Tay}_{a} S = \mathrm{Tay}_{a} S^2 = S \mathrm{Tay}_{a}S\). With (2.4) and (2.2), this gives

$$\begin{aligned} \mathrm{Tay}_a (S g)&= S\sum _{m \in \bar{\mathfrak {v}}_{+}} \langle M_{m,a},g \rangle _0 b_{m}^{(a)}. \end{aligned}$$
(3.10)

Since \(\vec \Sigma _0(m)\) is the set of permutations that leave \(m\) invariant, the sum over \(\bar{\mathfrak {v}}_+\) can be written as a sum over \(\mathfrak {v}_+\), as

$$\begin{aligned} S\sum _{m \in \bar{\mathfrak {v}}_{+}} \langle M_{m,a},g \rangle _0 b_{m}^{(a)} = S\sum _{m \in \mathfrak {v}_{+}}\frac{1}{|\vec \Sigma _0(m)|} \sum _{\sigma \in \vec \Sigma (m)} \langle M_{\sigma m,a},g \rangle _0 b_{\sigma m}^{(a)}. \end{aligned}$$
(3.11)

The anticommutativity of the fermions implies that \(\langle M_{\sigma m,a},g \rangle _0=\mathrm {sgn}(\sigma _f)\langle M_{ m,a},g \rangle _0\). Since \(b_{\sigma m,z}^{(a)} = b_{m,\sigma ^{-1}z}^{(a)}\), it follows from (3.7) to (3.9) and the fact that \(Sf_m^{(a)}=f_m^{(a)}\) that

$$\begin{aligned} \mathrm{Tay}_a (S g) = S\sum _{m \in \mathfrak {v}_{+}} \langle M_{m,a},g \rangle _0 N_mSb_{ m}^{(a)} = S\sum _{m \in \mathfrak {v}_{+}} \langle M_{m,a},g \rangle _0 f_{ m}^{(a)} = \sum _{m \in \mathfrak {v}_{+}} \langle M_{m,a},g \rangle _0 f_{ m}^{(a)}, \end{aligned}$$
(3.12)

and the proof is complete. \(\square \)

The next lemma provides statements concerning the duality of field monomials and test functions, for use in Sect. 2. In particular, (3.13) gives Lemma 2.2(i).

Lemma 3.1

The following identities hold, for \(a, x \in {\varLambda }'\):

$$\begin{aligned} \big \langle {M_{m,a},f_{m'}^{(a)}\big \rangle }_{0}&= \delta _{m,m'} \quad \quad m,m' \in \mathfrak {m}_+,\end{aligned}$$
(3.13)
$$\begin{aligned} \big \langle {M_{m,x},f_{m'}^{(a)}\big \rangle }_{0}&= \delta _{m,m'} \quad \quad m,m' \in \mathfrak {m}_+ \; \text { with } [M_m] = [M_{m'}],\end{aligned}$$
(3.14)
$$\begin{aligned} \big \langle {M_{m,x}, f_{m'}^{(a)}\big \rangle }_0&=0 \quad \quad \quad \quad m \in \mathfrak {m} ,m' \in \mathfrak {m}_+\;\; \text {with } [M_m]>[M_{m'}]. \end{aligned}$$
(3.15)

Proof

We begin with a preliminary observation. Let \(m \in \mathfrak {m}\) and \(m' \in \mathfrak {m}_+\). It follows from (2.2), the identity \(S^2=S\), and (3.7)–(3.9) that

$$\begin{aligned} \big \langle {M_{m,x},f_{m'}^{(a)}\big \rangle }_{0} = N_{m'} \nabla ^{m} \big (Sb_{m'}^{(a)}\big )_{z}\big |_{z=\vec x} = |\vec \Sigma _{0} (m')|^{-1} \sum _{\sigma \in \vec \Sigma (m')} \mathrm {sgn}(\sigma _f)\, \nabla ^{m} b_{\sigma m',z}^{(a)}\big |_{z=\vec x}, \end{aligned}$$
(3.16)

where for the last step we recall that \(b_{\pi m,z}^{(a)} = b_{m,\pi ^{-1}z}^{(a)}\).

It is now easy to prove (3.13). Indeed, by (3.1) with \(x=a\), \(\nabla ^{m} b_{\sigma m',z}^{(a)}|_{z={\vec a}} =\delta _{m,\sigma m'}\). For \(m,m' \in \mathfrak {m}_{+}\), \(m=\sigma m'\) holds if and only if \(m=m'\) and \(\sigma \in \vec \Sigma _{0} (m')\). Since \(n_{(i,\alpha )}=1\) for fermion species \(i\), we have \(\mathrm {sgn}(\sigma _f)=1\) for permutations that fix \(m\), and (3.13) follows.

For the proof of (3.14)–(3.15), we first observe that, by the definition of the zero-field pairing, \(M_{m,x}\) has nonzero pairing only with test functions with the same number of variables as there are fields in \(M_{m,x}\). Therefore, we may assume that the number \(p(m)\) of fields in \(M_{m,x}\) is equal to the number \(p (m')\) of variables in \(f_{m'}^{(a)}\). Furthermore, the pairing only replaces the fields in \(M_{m,x}\) with test functions whose arguments match the species of the fields. Thus, for \(m,m' \in \mathfrak {m}\), the pairing \(\langle M_{m,x}, f_{m'}^{(a)} \rangle _0\) is zero unless \(p(m)=p(m')\) and the components \((i_{k},\alpha _{k})\) of \(m\) and the components \((i_{k}',\alpha _{k}')\) of \(m'\) obey \(i_k=i_k'\) for all \(k=1,\ldots ,p(m)\). For (3.14), the condition that \([M_m] = [M_{m'}]\) therefore becomes the condition that \(|\alpha |_1 = |\alpha '|_1\). Consider first the case where \(\alpha _{k} \not = \alpha _{k}'\) for some \(k\). Then, for some \(k\), \(\alpha _{k}>\alpha _{k}'\). Since \(m,m'\) are elements of \(\mathfrak {m}_{+}\), both the \(\alpha _{k}\) and the \(\alpha _{k}'\) are ordered within each species. Therefore it is also true that for any permutation \(\sigma \in \vec \Sigma (m')\) there is some \(k\) such that \(\alpha _{k}>\alpha _{\sigma k}'\). By (3.3), in this case \(\nabla ^{m} b_{\sigma m',z}^{(a)}=0\), so the right-hand side of (3.16) is zero. We are now reduced to the case \(\alpha _{k}=\alpha _{k}'\) for all \(k\). This means that \(m=m'\), and we complete the proof of (3.14) as in the proof of (3.13), applying (3.4) rather than (3.1).

Finally, we prove (3.15). As in the proof of (3.14), the condition that \([M_m]>[M_{m'}]\) implies that for any \(\sigma \) there is some \(k\) such that \(\alpha _{k}>\alpha _{\sigma k}'\). By (3.3), this implies that \(\nabla ^{m} b_{\sigma m',z}^{(a)}=0\), and hence the right-hand side of (3.16) is zero, and (3.15) is proved. \(\square \)

The following lemma is used in the proof of Proposition 1.11.

Lemma 3.2

For \(m \in \mathfrak {m}_+\), let \(\hat{P}_{m,x} = \hat{P}(M_{m,x})\), with \(\hat{P}\) given by Definition 1.2. Then there is a constant \(c\) such that

$$\begin{aligned} \Vert \hat{P}_{m,x}\Vert _{T_0} \le c\, R^{-|\alpha (m)|_1} \mathfrak {h}^{m}, \end{aligned}$$
(3.17)

where \(\mathfrak {h}^m\) denotes the product of \(\mathfrak {h}_{i_k}\) over the components \((i_k,\alpha _k)\) of \(m\).

Proof

By Definition 1.2, \(\hat{P}_m\) is a sum of monomials of the same degree and dimension as \(M_m\), so it suffices to prove (3.17) for a single such monomial \(\tilde{M}_m\). But for any test function \(g\), by (2.2) and by the definition of the \({\varPhi }(\mathfrak {h})\) norm in (1.35), we have

$$\begin{aligned} |\langle \tilde{M}_{m,x}, g \rangle _0| = |\nabla ^{\tilde{\alpha }(m)}(Sg)_z|_{z=\vec x}| \le R^{-|\alpha (m)|_1} \mathfrak {h}^{m} \Vert Sg\Vert _{{\varPhi }(\mathfrak {h})} \le R^{-|\alpha (m)|_1} \mathfrak {h}^{m} \Vert g\Vert _{{\varPhi }(\mathfrak {h})}, \end{aligned}$$
(3.18)

as required. \(\square \)

3.3 Norm Estimates and Taylor Approximation

The main results in this section are Lemmas 3.4 and 3.6, which are used in the proofs of Propositions 1.11 and 1.12 respectively. Lemma 3.3 is used to prove Lemmas 3.4 and 3.6, and Lemma 3.5 is used to prove Lemma 3.6. Lemmas 3.3–3.6 are in essence statements about test functions and Taylor approximation on the infinite lattice \({ {{\mathbb Z}}^d }\), which we can apply to the torus \({\varLambda }\) by judicious restriction to a coordinate patch. The correspondence between \({ {{\mathbb Z}}^d }\) and a coordinate patch is possible because norms of test functions are preserved by a coordinate map \(z\), as defined at the beginning of Sect. 1.3, since nearest-neighbour pairs, and hence derivatives, are preserved by \(z\). Thus in this section we work primarily on \({ {{\mathbb Z}}^d }\), with commentary in the statements of Lemmas 3.4 and 3.6 concerning applicability on the torus \({\varLambda }\).

Let \(j<N\) and let \(X\) be a \(j\)-polymer in \({\varLambda }\) or \({ {{\mathbb Z}}^d }\), depending on context. Recall that we defined an enlargement \(X_+\) of \(X\) by doubling its blocks, above the statement of Proposition 1.11. We extend this notion, as follows. For real \(t>0\) and a nonempty \(j\)-polymer \(X\subset { {{\mathbb Z}}^d }\), let \(X_{t}\subset { {{\mathbb Z}}^d }\) be the smallest subset that contains \(X\) and all points in \({ {{\mathbb Z}}^d }\) that are within distance \(tL^{j}\) of \(X\). In particular, \(X_+=X_{1/2}\). Below, we frequently write \(R=L^j\).

The following lemma shows that, given \(t>0\), it is possible to estimate the \({\varPhi }(X)\) norm of a test function \(g\) using the values of \(g\) only in \(X_{2t}\). In its statement, we write \(z \in {\mathbf X}_{2t}\) to mean that each component \(z_i\) of \(z\) lies in \(X_{2t}\). Recall from (2.19) that the \({\varPhi }(X)\) norm is defined in terms of the \({\varPhi }={\varPhi }(\mathfrak {h})\) norm of (1.35) by

$$\begin{aligned} \Vert g\Vert _{{\varPhi }(X)}&= \inf \{ \Vert g -f\Vert _{{\varPhi }} : f_{z} = 0 \text { if all components of } z \text { lie in } X \}, \end{aligned}$$
(3.19)

where we can interpret \(g\) as a test function either on \({ {{\mathbb Z}}^d }\) or on \({\varLambda }\), depending on context.

Lemma 3.3

Let \(t>0\), \(p \ge 1\), \(j<N\), and let \(X \subset { {{\mathbb Z}}^d }\) be a \(j\)-polymer. There is a function \(\chi _t\) of \(p\) variables, which takes the value \(1\) if each variable lies in \(X\) and the value \(0\) if any variable lies in \({ {{\mathbb Z}}^d }\setminus X_{2t}\), and a positive constant \(c_0\), independent of \(p\), \(X\) and \(R=L^j\), such that for any test function \(g\) on \({ {{\mathbb Z}}^d }\) which depends on \(p\) variables,

$$\begin{aligned} \Vert g\Vert _{{\varPhi }(X)} \le \Vert g\chi _t\Vert _{{\varPhi } ({ {{\mathbb Z}}^d })} \le \left( (1+c_0t^{-1})\mathfrak {h}^{-1}\right) ^p \sup _{z \in {\mathbf X}_{2t}} \sup _{|\beta |_\infty \le p_{\varPhi }} |\nabla ^\beta _R g_{z}|. \end{aligned}$$
(3.20)

Proof

By definition, \(g\) is a function of finite sequences each of whose components is in a disjoint union \({\mathbf X}\) of copies of \(X\), where the copies label species (fermions, bosons, field and conjugate field). We give the proof for the special case \({\mathbf X}=X\), so that \(g\) is a function of \(z=(z_{1},\dots ,z_{p})\) with \(z_i\in { {{\mathbb Z}}^d }\). The general proof is a straightforward elaboration of the notation.

Let \(t>0\). We first construct a \(t\)-dependent function \(\chi :{ {{\mathbb R}}^d}\rightarrow [0,1]\) such that

$$\begin{aligned} \chi |_{X} = 1, \quad \quad \chi |_{{ {{\mathbb Z}}^d }\setminus X_{2t}} = 0, \quad \quad \big |\nabla _{R}^{\alpha }\chi |_{{ {{\mathbb Z}}^d }} \big | \le c (\alpha ) t^{-|\alpha |_{1}} , \end{aligned}$$
(3.21)

where \(\nabla _{R}^{\alpha }=R^{|\alpha |_1}\nabla ^\alpha \), and where the estimate holds for all multi-indices \(\alpha \) and is uniform in \(X\). Let \(Y_{t}\) be the subset of \({ {{\mathbb R}}^d}\) obtained as the union of closed unit cubes centred on the lattice points of \(X_{t}\). Let \(\varphi \) be a smooth non-negative function on \({ {{\mathbb R}}^d}\) supported inside a ball of radius one and normalised so that \(\int \varphi \,dx = 1\). For \(a = tR\), let \(\varphi _{a} (x) = a^{-d}\varphi (a^{-1}x)\) and let \(\chi (x) = \int _{Y_{t}} \varphi _{a}(x-y)\,dy\). Then

$$\begin{aligned} 0 \le \chi (x) \le \int _{{ {{\mathbb R}}^d}} \varphi _{a} (x-y) \,dy = \int _{{ {{\mathbb R}}^d}} \varphi (x-y) \,dy = 1 \end{aligned}$$
(3.22)

as required. For \(x\in X \subset { {{\mathbb R}}^d}\), the distance between \(x\) and the complement of \(Y_{t}\) is at least \(a\) and therefore \(\chi (x) = \int _{Y_{t}} \varphi _{a} (x-y)\,dy = \int _{{ {{\mathbb R}}^d}} \varphi (x-y)\,dy=1\). Therefore \(\chi |_{X} = 1\) as required. For \(x \not \in X_{2t}\), in the definition of \(\chi \), \(x-y\) is not in the support of \(\varphi _{a}\) so \(\chi (x)=0\) as required. The partial derivative \(\chi ^{(\alpha )}\) of \(\chi \) of total order \(|\alpha |_{1}\) obeys

$$\begin{aligned} \big | \chi ^{(\alpha )} (x) \big |&\le a^{-|\alpha |_{1}} \int _{Y_{t}} \Big |\varphi ^{(\alpha )} \Big (\frac{x-y}{a}\Big )\Big | a^{-d}\,dy \nonumber \\&\le a^{-|\alpha |_{1}} \int _{{ {{\mathbb R}}^d}} \big |\varphi ^{(\alpha )} (x-y)\big | \,dy \le c(\alpha ) a^{-|\alpha |_{1}}. \end{aligned}$$
(3.23)

By the mean-value theorem, the finite difference derivative \(\nabla ^{\alpha } \chi |_{{ {{\mathbb Z}}^d }}\) is bounded by the supremum of the corresponding continuum derivative, which is at most \(c (\alpha ) a^{-|\alpha |_{1}}\). When we convert \(\nabla \) derivatives to \(\nabla _{R}\) derivatives, the factors of \(R\) convert this estimate to \(c (\alpha ) t^{-|\alpha |_{1}}\), since \(a=tR\). This establishes the last estimate in (3.21) and concludes the construction of \(\chi \).
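The mechanism behind this construction can be illustrated in exact arithmetic by a discrete one-dimensional analogue: convolving the indicator of the fattened set with a normalised kernel of small support produces a function equal to \(1\) on the original set and \(0\) away from the fattened set. This is only an illustrative sketch, not the continuum mollification of the proof, and the names (`cutoff`, `kernel`) are ours.

```python
from fractions import Fraction

# Discrete 1-d analogue of the cutoff construction: convolving the indicator
# of the fattened set Y with a normalised kernel supported in {-r,...,r}
# yields a function equal to 1 on points whose r-neighbourhood lies in Y,
# equal to 0 on points whose r-neighbourhood misses Y, and in [0,1] elsewhere.

def cutoff(Y, kernel):
    support = range(min(Y) - 5, max(Y) + 6)
    return {x: sum(w for k, w in kernel.items() if x - k in Y) for x in support}

kernel = {-1: Fraction(1, 4), 0: Fraction(1, 2), 1: Fraction(1, 4)}  # weights sum to 1
X = set(range(10, 15))
Y = set(range(8, 17))          # X fattened by 2 on each side
chi = cutoff(Y, kernel)

assert all(chi[x] == 1 for x in X)             # chi = 1 on X
assert all(chi[x] == 0 for x in range(3, 7))   # chi = 0 far from X
assert all(0 <= chi[x] <= 1 for x in chi)      # chi takes values in [0,1]
```

Exact rational weights are used so that the equalities `chi[x] == 1` and `chi[x] == 0` hold without floating-point tolerance.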

We extend \(\chi \) to a function on sequences: for a sequence \(z =(z_1,\ldots ,z_p)\), we define \(\chi _{t}(z) = \prod _{i=1}^{p} \chi (z_i)\). Since \(g\chi _t\) agrees with \(g\) when evaluated on \({\mathbf X}\), and is zero outside \({\mathbf X}_{2t}\), it follows from the definition of the \({\varPhi }(X)\) norm in (2.19) that

$$\begin{aligned} \Vert g\Vert _{{\varPhi }(X)} \le \Vert g\chi _t\Vert _{{\varPhi }({ {{\mathbb Z}}^d })}&\le \sup _{z\in {\mathbf X}_{2t}} \mathfrak {h}^{-z} \sup _{|\beta |_\infty \le p_{\varPhi }} | \nabla _R^\beta (g\chi _t )_{z} |. \end{aligned}$$
(3.24)

Recall the lattice product rule \(\nabla _{e} (hf)= (T_{e}f) \nabla h + h\nabla f\) for differentiating a product, where \(T_{e}\) is translation by the unit vector \(e\). When the derivatives in \(\nabla _{R}^{\beta } (g\chi _t)\) are expanded using the lattice product rule, one of the terms is \(\chi _t \nabla _{R}^{\beta }g\). The remaining terms all involve derivatives of \(\chi _t\), at most \(p_{\varPhi }\) in each coordinate, and by (3.21) each such derivative contributes a factor \(O(t^{-1})\). The number of terms grows exponentially in \(p\), so that, as required,

$$\begin{aligned} \sup _{|\beta |_\infty \le p_{\varPhi }} | \nabla _R^\beta (g\chi _t )_{z} | \le \big (1+O(t^{-1})\big )^p\sup _{|\beta |_\infty \le p_{\varPhi }} | \nabla _R^\beta g_{z} |. \end{aligned}$$
(3.25)

This completes the proof. \(\square \)
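The lattice product rule invoked in the proof above can be checked directly; the following is a minimal Python sketch for the forward difference in one dimension, with helper names (`grad`, `check_product_rule`) that are ours.

```python
# Check of the lattice product rule used in the proof of Lemma 3.3:
# with (grad g)_x = g_{x+1} - g_x and translation (Tg)_x = g_{x+1},
# one has grad(h*f) = (Tf)*grad(h) + h*grad(f), exactly, at every lattice point.

def grad(g, x):
    return g(x + 1) - g(x)

def check_product_rule(h, f, x):
    lhs = grad(lambda y: h(y) * f(y), x)
    rhs = f(x + 1) * grad(h, x) + h(x) * grad(f, x)
    return lhs == rhs

h = lambda x: x**3 - 2 * x
f = lambda x: 5 * x**2 + 1
assert all(check_product_rule(h, f, x) for x in range(-10, 10))
```

The identity is exact in integer arithmetic, since it is an algebraic rearrangement of \(h_{x+1}f_{x+1}-h_xf_x\).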

Lemma 3.4

Let \(j<N\), let \(m \in \mathfrak {m}_+\), let \(X\) be a \(j\)-polymer in \({ {{\mathbb Z}}^d }\), and let \(a \in X\). There is a constant \(\bar{C}\), independent of \(m\) but dependent on the diameter of \(R^{-1}X\), such that for the polynomial \(f_m^{(a)}\) defined on all of \({ {{\mathbb Z}}^d }\),

$$\begin{aligned} \Vert f_m^{(a)}\Vert _{{\varPhi } (X)} \le \bar{C} \mathfrak {h}^{-m} R^{|\alpha (m)|_1}. \end{aligned}$$
(3.26)

The same inequality holds for \(f_m^{(a)}\) as we have defined it on the torus, provided \(X_+\) lies in a coordinate patch.

Proof

For the case of \({ {{\mathbb Z}}^d }\), by the definition of \(f_m^{(a)}\) in (3.8), and by Lemma 3.3 with \(t=\frac{1}{2}\), it suffices to show that for \(z \in {\mathbf X}_+\) and for \(|\beta |_\infty \le p_{\varPhi }\),

$$\begin{aligned} |\nabla _R^\beta b_{m,z}^{(a)}| \le \bar{c} R^{|\alpha |_1}, \end{aligned}$$
(3.27)

where \(\bar{c}\) depends on \(m\) and \(R^{-1} X\). Note that any dependence on \(p\) (from Lemma 3.3) and \(m\) is uniformly bounded, since the number of variables is bounded when \(m \in \mathfrak {m}_{+}\).

To prove (3.27), we first note that if any component of \(\beta \) exceeds the corresponding component of \(\alpha = \alpha (m)\) then the left-hand side of (3.27) is equal to zero as in the proof of (3.15). Thus we may assume that each component of \(\beta \) is at most the corresponding component of \(\alpha \), and without loss of generality we may consider the 1-dimensional case. In this case, for \(j=j_-+j_+ \le k\), \(|\nabla _-^{j_-}\nabla _+^{j_+} \left( {\begin{array}{c}x-a\\ k\end{array}}\right) | = |\left( {\begin{array}{c}x-a-j_-\\ k-j\end{array}}\right) |\) and this is at most a multiple of \(R^{k-j}\), with the multiple dependent on the ratio of the diameter of \(X\) to \(R\). This proves (3.27) and completes the proof of (3.26) for \({ {{\mathbb Z}}^d }\). There is no dependence of \(\bar{C}\) on \(m \in \mathfrak {m}_+\), since \(\mathfrak {m}_+\) is a finite set.

This then implies the extension to the torus, since derivatives of \(b_m^{(a)}\) are the same on a coordinate patch and its image rectangle in \({ {{\mathbb Z}}^d }\). \(\square \)
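The one-dimensional binomial difference computation in the proof above can be verified numerically. In the following Python sketch, the helper names (`binom`, `fwd`, `bwd`) are ours; `binom` implements the binomial coefficient \(\binom{x-a}{k}\) with an integer, possibly negative, upper argument.

```python
from math import factorial

# Check of the 1-d identity used in the proof of Lemma 3.4:
# |D_-^{j_-} D_+^{j_+} binom(x-a, k)| = |binom(x-a-j_-, k-j)|, j = j_- + j_+.

def binom(n, k):
    """Binomial coefficient with integer upper argument, possibly negative."""
    if k < 0:
        return 0
    num = 1
    for i in range(k):
        num *= n - i
    return num // factorial(k)  # exact: k! divides any product of k consecutive integers

def fwd(g):
    return lambda x: g(x + 1) - g(x)   # forward difference

def bwd(g):
    return lambda x: g(x) - g(x - 1)   # backward difference

a, k = 3, 4
for jm in range(k + 1):
    for jp in range(k + 1 - jm):
        g = lambda x: binom(x - a, k)
        for _ in range(jp):
            g = fwd(g)
        for _ in range(jm):
            g = bwd(g)
        j = jm + jp
        assert all(abs(g(x)) == abs(binom(x - a - jm, k - j)) for x in range(-6, 14))
```

Each forward difference lowers \(k\) by one via Pascal's rule, while each backward difference additionally shifts the base point, which is exactly the identity asserted in the proof.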

The following Taylor remainder estimate is used to prove Lemma 3.6, which plays an important role in the proof of the crucial change of scale bound in Proposition 1.12. For its statement, given \(a\in {\mathbb Z}^d\), \(p \in {\mathbb N}\), \(z=(z_1,\ldots , z_p)\) with \(z_1,\ldots ,z_p \in {\mathbb Z}^d\) and with \((z_i)_j \ge a_j\) for all \(i=1,\ldots , p\) and \(j=1,\ldots ,d\), and \(t \in {\mathbb N}\), we define \(S_t(a,z) = \{ y =(y_1,\ldots ,y_p) : y_i\in {\mathbb Z}^d ,\; a_j -t \le (y_i)_j \le (z_i)_j \text { for all } i,j\}\). We make use of the map \(\mathrm{Tay}_{a} :{\varPhi } \rightarrow \varPi \) given by (2.4), interpreted as a map on test functions \(g\) defined on \({ {{\mathbb Z}}^d }\). The range of \(\mathrm{Tay}_a\) involves polynomials in the components of \(z\) to maximal degree \(s = d_+ - \sum _{k=1}^p [\varphi _{i(z_k)}]\), where \(i(z_k)\) denotes the field species corresponding to the component \(z_k\). Also, given a test function \(g\in {\varPhi }^{(p)}\), we write \(M_g = \sup _{y \in S_{s}(a,z)} \sup _{|\alpha |_1 =s+1} |\nabla ^\alpha g_y|\), where the supremum over \(\alpha \) is a supremum over only forward derivatives.

Lemma 3.5

For \(a \in { {{\mathbb Z}}^d }\), components of \(z=(z_1,\ldots , z_p)\) in \({ {{\mathbb Z}}^d }\) with \((z_i)_j \ge a_j\) for all \(i,j\), and for \(|\beta |_{1}=t\le s\) (forward or backward derivatives), the remainder in the approximation of \(g=g_z\) by its Taylor polynomial obeys

$$\begin{aligned} |\nabla ^\beta (g-\mathrm{Tay}_a g)_z| \le M_g \left( {\begin{array}{c}|z-\vec {a}|_{1} \\ s-t+1\end{array}}\right) , \end{aligned}$$
(3.28)

with \(M_g\) and \(s\) as defined above.

Proof

The proof is by induction on the dimension of \(z \in {\mathbb Z}^{dp}\) and does not depend on the grouping of the components of \(z\) into \({ {{\mathbb Z}}^d }\). Therefore we give the proof for the case \(d=1\). Also, without loss of generality, we assume that \(a=0\). Let \(f_z = \mathrm{Tay}_a g_z = \mathrm{Tay}_0 g_z\).

We first show that it suffices to establish (3.28) for the case \(|\beta |_{1}=t=0\), namely

$$\begin{aligned} |g_z-f_z|\le M_g \left( {\begin{array}{c}|z|_{1}\\ s+1\end{array}}\right) , \end{aligned}$$
(3.29)

with the supremum defining \(M_g\) taken over \(S_0(0,z)\), which we abbreviate as \(S_0(z)\). In fact, for the case where \(\beta \) involves only forward derivatives, \(\nabla ^\beta f\) is the degree \(s-t\) Taylor polynomial for \(\nabla ^\beta g\), and it follows from (3.29) that

$$\begin{aligned} |\nabla ^\beta (g-f)_z|\le M_{g }\left( {\begin{array}{c}|z|_{1} \\ s-t+1\end{array}}\right) , \end{aligned}$$
(3.30)

which is better than (3.28). To allow also backward derivatives, we simply note that a single backward derivative is equal in absolute value to a forward derivative at a point translated backwards, and this translation is handled in our estimate by the extension of \(S_0(z)\) to \(S_t(z)\) in the definition of \(M_g\).
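The reduction of backward to forward derivatives used above rests on the elementary identity \((\nabla _- g)_x = (\nabla _+ g)_{x-1}\), which can be checked directly (a minimal Python sketch; names ours):

```python
# A backward difference equals a forward difference evaluated one step back;
# this is the observation used to reduce backward to forward derivatives.
fwd = lambda g: (lambda x: g(x + 1) - g(x))
bwd = lambda g: (lambda x: g(x) - g(x - 1))

g = lambda x: x**4 - 3 * x
assert all(bwd(g)(x) == fwd(g)(x - 1) for x in range(-8, 8))
```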

It remains to prove (3.29). The proof is by induction on \(p\) (with \(s\) held fixed). Consider first the case \(p=1\). For a function \(\phi \) on \({\mathbb Z}\), let \((T\phi )_x =\phi _{x+1}\) and let \(D=T-I\). For \(m >0\), \(T^{m} = I + \sum _{n= 1}^m (T-I)T^{n-1}\). Iteration of this formula \(s\) times gives

$$\begin{aligned} T^{m}&= I + \sum _{m\ge n_{1}\ge 1}D + \sum _{m\ge n_{1}>n_{2}\ge 1}D^{2}T^{n_{2}-1} = \cdots = \sum _{\alpha =0}^{s}\left( {\begin{array}{c}m\\ \alpha \end{array}}\right) D^{\alpha } + E, \end{aligned}$$
(3.31)

where

$$\begin{aligned} E = \sum _{m\ge n_{1} > n_{2} > \cdots > n_{s+1} \ge 1} D^{s+1}T^{n_{s+1}-1}. \end{aligned}$$
(3.32)

We apply this operator identity to \(( T^{z_1}g)_0\) and obtain, for \(p=1\),

$$\begin{aligned} g_{z_1} = (T^{z_1}g)_0 = f_{z_1} + (Eg)_0. \end{aligned}$$
(3.33)

The remainder term obeys the estimate

$$\begin{aligned} |(Eg)_0|&\le \sum _{z_1\ge n_{1} > n_{2} > \cdots > n_{s+1} \ge 1} \ \sup _{x\in S_0 (z_1)} |D^{s+1}g_x | = \left( {\begin{array}{c}z_1 \\ s+1\end{array}}\right) \sup _{x\in S_0 (z_1)} |D^{s+1}g_x |. \end{aligned}$$
(3.34)

This proves (3.29) for \(p=1\).
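The \(p=1\) case just proved, namely the expansion (3.33) with remainder bound (3.34), can be checked numerically. In the Python sketch below (helper names `D`, `taylor` are ours), the expansion is verified to be exact on polynomials of degree at most \(s\), and the binomial remainder bound is verified for a non-polynomial example.

```python
from math import comb

# Check of the p = 1 case (3.29): with D the forward difference,
# g_m = sum_{alpha <= s} binom(m, alpha) (D^alpha g)_0 + E, where
# |E| <= binom(m, s+1) * sup_{0 <= x <= m-s-1} |D^{s+1} g_x|.

def D(g, n=1):
    """n-fold forward difference."""
    for _ in range(n):
        g = (lambda h: lambda x: h(x + 1) - h(x))(g)
    return g

def taylor(g, s, m):
    """Degree-s discrete Taylor polynomial of g at 0, evaluated at m >= 0."""
    return sum(comb(m, a) * D(g, a)(0) for a in range(s + 1))

s = 3
p = lambda x: 2 * x**3 - x + 7           # degree <= s: expansion is exact
assert all(taylor(p, s, m) == p(m) for m in range(12))

g = lambda x: x**5                        # generic test function
for m in range(10):
    M = max((abs(D(g, s + 1)(x)) for x in range(max(m - s, 1))), default=0)
    assert abs(g(m) - taylor(g, s, m)) <= comb(m, s + 1) * M
```

For \(m \le s\) the binomial coefficient \(\binom{m}{s+1}\) vanishes and the expansion is exact, matching the fact that the sum defining \(E\) is then empty.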

To advance the induction, we assume that (3.29) holds for \(p-1\). We write \(y = (z_1,\ldots , z_{p-1})\) and \(z=(y, z_{p})\), and apply the case \(p-1\) to \(g\) with the coordinate \(z_{p}\) regarded as a parameter. This gives

$$\begin{aligned} g_z = \sum _{|\beta |_{1} \le s} \left( {\begin{array}{c}y\\ \beta \end{array}}\right) D^{ \beta } g_{( 0, z_{p})} + \tilde{E}, \end{aligned}$$
(3.35)

where by the induction hypothesis \(|\tilde{E} | \le M_g \left( {\begin{array}{c}|y|_{1}\\ s+1\end{array}}\right) \). We also apply the case \(p=1\) to obtain

$$\begin{aligned} D^{ \beta } g_{( 0, z_{p})} = \sum _{\alpha = 0}^{s-|\beta |_{1}} \left( {\begin{array}{c}z_{p}\\ \alpha \end{array}}\right) D^\alpha D^{\beta } g_0 + E_1, \end{aligned}$$
(3.36)

with \(|E_1| \le M_g \left( {\begin{array}{c} z_{p} \\ s-|\beta |_{1}+1\end{array}}\right) \). The insertion of (3.36) into (3.35) yields

$$\begin{aligned} g_z = \sum _{|\beta |_{1} \le s} \left( {\begin{array}{c}y \\ \beta \end{array}}\right) \sum _{\alpha = 0}^{s-|\beta |_{1}} \left( {\begin{array}{c}z_{p} \\ \alpha \end{array}}\right) D^\alpha D^{\beta } g_0 + \sum _{|\beta |_{1} \le s} \left( {\begin{array}{c}y\\ \beta \end{array}}\right) E_1 + \tilde{E}. \end{aligned}$$
(3.37)

The first term on the right-hand side is just the Taylor polynomial \(f_z\) for \(g_z\). It therefore suffices to show that

$$\begin{aligned} \sum _{|\beta |_{1} \le s} \left( {\begin{array}{c}y\\ \beta \end{array}}\right) \left( {\begin{array}{c} z_{p} \\ s-|\beta |_{1}+1\end{array}}\right) + \left( {\begin{array}{c}|y|_{1}\\ s+1\end{array}}\right) \le \left( {\begin{array}{c}|z|_1\\ s+1\end{array}}\right) . \end{aligned}$$
(3.38)

However, (3.38) follows from a simple counting argument: the right-hand side counts the number of ways to choose \(s+1\) objects from \(|z|_1\), while the left-hand side decomposes this into two terms, in the first of which at least one object is chosen from the last coordinate of \(z\), and in the second of which no object is chosen from the last coordinate. This completes the proof of (3.29). \(\square \)
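The counting identity behind (3.38) can be confirmed by direct enumeration. In fact, by iterated Vandermonde convolution the two sides of (3.38) are equal, which in particular implies the stated inequality; the following Python sketch (helper name `lhs` is ours) verifies this on small cases.

```python
from math import comb
from itertools import product

# Check of (3.38): sum over multi-indices beta with |beta|_1 <= s of
# prod_i binom(y_i, beta_i) * binom(zp, s - |beta|_1 + 1), plus binom(|y|_1, s+1),
# equals binom(|y|_1 + zp, s+1).

def lhs(y, zp, s):
    total = comb(sum(y), s + 1)
    for beta in product(range(s + 1), repeat=len(y)):
        if sum(beta) <= s:
            term = comb(zp, s - sum(beta) + 1)
            for yi, bi in zip(y, beta):
                term *= comb(yi, bi)
            total += term
    return total

for y in [(2, 3), (4,), (1, 2, 3)]:
    for zp in range(6):
        for s in range(4):
            assert lhs(y, zp, s) == comb(sum(y) + zp, s + 1)
```

This is the enumeration described in the text: the two terms on the left-hand side count the choices of \(s+1\) objects according to whether at least one, or none, comes from the last coordinate.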

The following lemma is used in this paper in the proof of Proposition 1.12, and it is also used in [6, Lemma 1.2]. Its most natural setting is \({ {{\mathbb Z}}^d }\), but we do require it in the case of a torus \({\varLambda }\) with period \(L^N\) for integers \(L,N>1\). Given \(j<N\), let \(R=L^j\) and \(R'=L^{j+1}\). Let \({\varPhi }(\mathfrak {h}),{\varPhi }'(\mathfrak {h}')\) be test function spaces defined via weights involving parameters \(R=L^j,\mathfrak {h}\) and \(R'=L^{j+1},\mathfrak {h}'\) respectively. Suppose that \(\mathfrak {h}'_i/\mathfrak {h}_i \le cL^{-[\phi _i]}\), where \(c\) is a universal constant.

Lemma 3.6

Suppose that \(p_{\varPhi } \ge d_+'-[\varphi _\mathrm{min}]\). Fix \(L >1\). Let \(j<N\) and let \(X\) be an \(L^j\)-polymer on \({ {{\mathbb Z}}^d }\) with enlargement \(X_+\) as in Lemma 3.3 with \(t=\frac{1}{2}\). There exists \(\bar{C}_3\), which is independent of \(L\) and depends on \(j\) only via \(L^{-j}\mathrm{diam}(X)\), such that for any test function \(g\) on \({ {{\mathbb Z}}^d }\),

$$\begin{aligned} \Vert g\Vert _{\tilde{{\varPhi }} (X)} \le \bar{C}_3 L^{-d_{+}'} \Vert g\Vert _{\tilde{{\varPhi }}' (X_+)} , \end{aligned}$$
(3.39)

with \(d_+'\) given by (1.38). In particular, \(\Vert g\Vert _{\tilde{{\varPhi }} (X)} \le \bar{C}_3 L^{-d_+ '} \Vert g\Vert _{{\varPhi }'}\). The bound (3.39) also holds for a test function \(g\) on the torus \({\varLambda }\), provided \(L\) is sufficiently large and there is a coordinate patch \({\varLambda }' \supset X_+\).

Proof

We first consider the case of \({ {{\mathbb Z}}^d }\). We assume that \(X\) is connected; if it is not then the following argument can be applied in a componentwise fashion. For connected \(X\), let \(a\) be the largest point which is lexicographically no larger than any point in \(X\).

Given \(g\), we use Lemma 2.6 to choose \(f \in \varPi (X_+)\) such that \(h = g -f\) obeys \(\Vert h\Vert _{{\varPhi }'(X_+)} \le 2 \Vert g\Vert _{\tilde{{\varPhi }}' (X_+)}\). Then \(g-(h-\mathrm{Tay}_a h) \in \varPi (X)\), and hence

$$\begin{aligned} \Vert g\Vert _{\tilde{\varPhi }(X)} = \Vert h - \mathrm{Tay}_a h \Vert _{\tilde{\varPhi }(X)} \le \Vert h - \mathrm{Tay}_a h \Vert _{{\varPhi } (X)}. \end{aligned}$$
(3.40)

It suffices to prove that for every test function \(h\),

$$\begin{aligned} \Vert h - \mathrm{Tay}_a h\Vert _{{\varPhi } (X)}&\le \frac{1}{2} \bar{C}_3 L^{-d_+ '} \Vert h \Vert _{{\varPhi }' ( X_+)}, \end{aligned}$$
(3.41)

since \(\Vert h\Vert _{{\varPhi }'( X_+)} \le 2 \Vert g\Vert _{\tilde{\varPhi }'( X_+)} \le 2\Vert g\Vert _{{\varPhi }'}\).

The rest of the proof is concerned with proving (3.41). We write \(R=L^j\) and \(R'=L^{j+1}\). Let \(r= h - \mathrm{Tay}_a h\). By Lemma 3.3 with \(t=\frac{1}{2}\), there is a constant \(K>1\) such that

$$\begin{aligned} \Vert r\Vert _{{\varPhi } (X)}&\le \sup _{z \in {\mathbf X}_+} (K\mathfrak {h}^{-1})^z \sup _{|\beta |_\infty \le p_{\varPhi }} | \nabla _R^\beta r_{z} |. \end{aligned}$$
(3.42)

By the hypothesis on \(\mathfrak {h}'\), (3.42) implies that

$$\begin{aligned} \Vert r\Vert _{{\varPhi } (X)}&\le \sup _{z \in {\mathbf X}_+} (cK\mathfrak {h}'^{-1})^z \sup _{|\beta |_\infty \le p_{\varPhi }} L^{-(\sum _k [\varphi _{i_k}]+|\beta |_1)} | \nabla _{R'}^\beta r_{z} |, \end{aligned}$$
(3.43)

where the sum on the right-hand side is over the components present in \(z\). We write \(u \prec v\) to denote \(u \le \mathrm{const}\, v\) with a constant whose value is unimportant.

Consider first the case \(\sum _k [\varphi _{i_k}]+|\beta |_1 > d_{+}\), for which \(\nabla ^\beta r_z=\nabla ^\beta h_z\). By definition of \(d_+'\) in (1.38), \(\sum _k [\varphi _{i_k}]+|\beta |_1 \ge d_{+}'\). We claim that the contribution to the right-hand side of (3.43) due to this case is

$$\begin{aligned}&\prec L^{-d_{+}'} \Vert h\Vert _{{\varPhi }'( X_+)}, \end{aligned}$$
(3.44)

as required. In fact, here there is no dependence on \(R^{-1}\mathrm{diam}(X)\) in the constant, and the hypothesis on \(p_{\varPhi }\) ensures that there are sufficiently many derivatives in the norm of \(h\). The potentially dangerous factor \((cK)^z\) is uniformly bounded when \(p(z)\) is uniformly bounded, in particular with \(p(z) \le d_+'/[\varphi _\mathrm{min}]\). On the other hand, when \(p(z) > d_+'/[\varphi _\mathrm{min}]\), the excess \((cK)^{p(z)-d_+'/[\varphi _\mathrm{min}]}\) is more than compensated by the number of excess powers of \(L^{-1}\) from (3.43), namely \(\sum _k [\varphi _{i_k}]+|\beta |_1 - d_+' \ge p(z)[\varphi _\mathrm{min}] -d_+'\), for large \(L\).

For the case \(\sum _k [\varphi _{i_k}]+|\beta |_1 \le d_{+}\), we write \(t=|\beta |_1\) and \(s=d_{+}-\sum _k [\varphi _{i_k}] \ge t\). In this case, \(p(z)\) must be uniformly bounded, and hence so is the factor \((cK)^z\) in (3.43). By Lemma 3.5, there exists \(\bar{c}\), depending on \(R^{-1}\mathrm{diam}(X)\), such that

$$\begin{aligned} |\nabla ^\beta r_{z} |&\le \bar{c} \sup _{|\alpha |_1=s +1} R^{s-t+1} \sup _z |\nabla ^{\alpha } h_{z} | \le \bar{c} R^{s-t+1} (R')^{-s-1} (\mathfrak {h}')^{z} \Vert h \Vert _{{\varPhi }'(X_+)}, \end{aligned}$$
(3.45)

(the power of \(R\) in the first inequality arises from the binomial coefficient in (3.28), and it is here that the constant develops its dependence on \(R^{-1}\mathrm{diam} (X)\)) and hence

$$\begin{aligned} (\mathfrak {h}')^{-z}|\nabla _{R'}^\beta r_{z} |&\le \bar{c} R^{s-t+1} (R')^{t-s-1} \Vert h \Vert _{{\varPhi }'(X_+)} \prec \bar{c} L^{t-s-1} \Vert h \Vert _{{\varPhi }'(X_+)}. \end{aligned}$$
(3.46)

Thus the contribution to (3.43) due to this case is

$$\begin{aligned} \prec \bar{c} L^{-\sum _k [\varphi _{i_k}]-t+t-s-1} \Vert h \Vert _{{\varPhi }'(X_+)} = \bar{c} L^{-d_{+}-1} \Vert h \Vert _{{\varPhi }'( X_+)}. \end{aligned}$$
(3.47)

Since \(d_+ +1 \ge d_+'\) by the definition of \(d_{+}'\), this completes the proof for the case of \({ {{\mathbb Z}}^d }\).

The torus case follows from the \({ {{\mathbb Z}}^d }\) case by the coordinate patch assumption, once we choose \(L\) large enough to ensure that the set \(\cup _{z \in {\mathbf X}_+}S_s(a,z)\) lies in a coordinate patch if \(X_+\) does. This is possible because \(j<N\), so there is a gap of diameter at least \(L\) preventing \(X_+\) from wrapping around the torus, whereas the enlargement of \(X_+\) due to the set \(S_s(a,z)\) depends only on \(d_+\). This enlargement cannot wrap around the torus if \(L\) is large enough. \(\square \)