1 Introduction

In this article, we undertake the study of fundamental matrices associated to systems of generalized Schrödinger operators; we establish the existence of such fundamental matrices and we prove sharp upper and lower exponential decay estimates for them. Our work is strongly motivated by the papers [2] and [3] in which similar exponential decay estimates were established for fundamental solutions associated to Schrödinger operators of the form

$$\begin{aligned} - \Delta + \mu , \end{aligned}$$

and generalized magnetic Schrödinger operators of the form

$$\begin{aligned} - \left( \nabla - i \textbf{a} \right) ^T A \left( \nabla - i \textbf{a} \right) + V, \end{aligned}$$

respectively. In [2], \(\mu \) is assumed to be a nonnegative Radon measure, whereas in [3], A is bounded and uniformly elliptic, while \(\textbf{a}\) and V satisfy a number of reverse Hölder conditions. Here we consider systems of generalized electric Schrödinger operators of the form

$$\begin{aligned} \mathcal {L}_V= -D_\alpha \left( A^{\alpha \beta } D_\beta \right) + V, \end{aligned}$$
(1)

where \(A^{\alpha \beta } = A^{\alpha \beta }(x)\), for each \(\alpha , \beta \in \left\{ 1, \dots , n\right\} \), is a \(d \times d\) matrix with bounded measurable coefficients defined on \(\mathbb {R}^n\) that satisfies the boundedness and ellipticity conditions described by (25) and (24), respectively. Moreover, the zeroth order potential function V is assumed to be a matrix \({\mathcal {B}_p}\) function. We say that V is in the matrix \({\mathcal {B}_p}\) class if and only if \(\left\langle V \vec {e}, \vec {e} \right\rangle := \vec {e}^T V \vec {e}\) is a scalar \({\text {B}_p}\) function, uniformly in \(\vec {e} \in \mathbb {R}^d\). As such, the operators that we consider in this article fall between those of [2] and [3] in generality, while being more general than both in the sense that they describe elliptic systems of equations.

Many of the ideas in Shen’s prior work [4,5,6] have contributed to this article. In particular, we have built on some of the framework used to prove power decay estimates for fundamental solutions to Schrödinger operators \(-\Delta + V\), where V belongs to the scalar reverse Hölder class \({\text {B}_p}\), for \(p = \infty \) in [4] and \(p \ge \frac{n}{2}\) in [5], along with the exponential decay estimates for eigenfunctions of more general magnetic operators as in [6].

As in both [2] and [3], Fefferman–Phong inequalities (see [7], for example) serve as one of the main tools used to establish both the upper and lower exponential bounds that are presented in this article. However, since the Fefferman–Phong inequalities that we found in the literature only apply to scalar weights, we state and prove new matrix-weighted Fefferman–Phong inequalities (see Lemma 15 and Corollary 2) that are suited to our problem. To establish our new Fefferman–Phong inequalities, we build upon the notion of an auxiliary function associated to a scalar \({\text {B}_p}\) function that was introduced by Shen in [4]. More specifically, given a matrix function \(V \in {\mathcal {B}_p}\), we introduce a pair of auxiliary functions: the upper and lower auxiliary functions. (Sect. 3 contains precise definitions of these functions and examines their properties.) Roughly speaking, we can, in some settings, interpret these quantities as the auxiliary functions associated to the largest and smallest eigenvalues of V. The upper and lower auxiliary functions are used to produce two versions of the Fefferman–Phong inequalities. Using these auxiliary functions, we also define upper and lower Agmon distances (also defined in Sect. 3), which then appear in our lower and upper exponential bounds for the fundamental matrix, respectively. We remark that the Agmon distance originated in [8], where exponential upper bounds for eigenfunctions of N-body Schrödinger operators were first established.
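For orientation, it may help to recall the scalar objects that the upper and lower auxiliary functions generalize. In one common normalization (conventions vary slightly across the literature), Shen's auxiliary function and its associated Agmon distance for a scalar \(v \in {\text {B}_p}\) with \(p \ge \frac{n}{2}\) read:

```latex
% Shen's scalar auxiliary function: the critical radius at which the
% scaled average of the potential reaches unit size
\frac{1}{m(x, v)} = \sup \left\{ r > 0 \; : \; \frac{1}{r^{n-2}} \int_{B(x, r)} v(y) \, dy \le 1 \right\},

% the Agmon distance is the geodesic distance in the degenerate metric
% m(\cdot, v)^2 \, dx^2, the infimum taken over absolutely continuous
% curves \gamma : [0,1] \to \mathbb{R}^n joining x to y
d(x, y, v) = \inf_{\gamma} \int_0^1 m\left( \gamma(t), v \right) \left| \gamma'(t) \right| \, dt.
```

The matrix-weighted analogues, defined in Sect. 3, replace the single scalar function v by quantities capturing the upper and lower behavior of the quadratic form of V.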

Given the elliptic operator \(\mathcal {L}_V\) as in (1) that satisfies a suitable set of conditions, there exists a fundamental matrix function associated to \(\mathcal {L}_V\), which we denote by \(\Gamma ^V\). The fundamental matrix generalizes the notion of a fundamental solution to the systems setting; see for example [9], where the authors generalized the results of [10] to the systems setting. To make precise the notion of the fundamental matrix for our systems setting, we rely upon the constructions presented in [11]. In particular, we define our bilinear form associated to (1), and introduce a well-tailored Hilbert space that is used to establish the existence of weak solutions to PDEs of the form \(\mathcal {L}_V\vec {u} = \vec {f}\). We then assume that our operator \(\mathcal {L}_V\) satisfies a natural collection of de Giorgi–Nash–Moser estimates. This allows us to confirm that the framework from [11] holds for our setting, thereby verifying the existence of the fundamental matrix \(\Gamma ^V\). Section 6 contains these details.

In Sect. 7, assuming very mild conditions on V, we verify that the class of systems of “weakly coupled” elliptic operators of the form

$$\begin{aligned} - {{\,\textrm{div}\,}}\left( A \nabla \right) + V \end{aligned}$$

satisfies the de Giorgi–Nash–Moser estimates that are mentioned in the previous paragraph (see the remark at the end of Sect. 7 for details). Consequently, the fundamental matrices associated to weakly coupled elliptic systems exist and satisfy the required estimates. In fact, this additionally shows that Green’s functions associated to these elliptic systems exist and satisfy weaker interior estimates, though we will not need this fact. Further, we establish local Hölder continuity of bounded weak solutions \(\vec {u}\) to

$$\begin{aligned} - {{\,\textrm{div}\,}}\left( A \nabla \vec {u} \right) + V \vec {u} = 0 \end{aligned}$$

under even weaker conditions on V. Specifically, V need not be positive semidefinite a.e., or even symmetric; see Proposition 10 and Remark 10. Finally, although we will not pursue this line of thought in this paper, we note that the combination of Proposition 10 and Remark 10 likely leads to new Schauder estimates for bounded weak solutions \(\vec {u}\) to (1). We remark that this section on elliptic theory for weakly coupled Schrödinger systems could be of independent interest beyond the theory of fundamental matrices. A “weakly coupled (linear) elliptic operator” typically refers to an operator of the form

$$\begin{aligned} - {{\,\textrm{div}\,}}\left( \mathcal {A} \nabla \right) + B \nabla + V , \end{aligned}$$
(2)

where B is a \(d \times d\) matrix function and

$$\begin{aligned} \mathcal {A}= \begin{pmatrix} A_1 &{} &{} \\ &{} \, \ddots &{} \\ &{} &{} \, A_d \end{pmatrix}, \end{aligned}$$

where the matrices \(A_1, \ldots , A_d\) are uniformly elliptic. We refer the reader to [12,13,14] for boundedness and regularity results for weak solutions to \(- {{\,\textrm{div}\,}}\left( \mathcal {A} \nabla \vec {u} \right) + B \nabla \vec {u} + V\vec {u} = 0\) under very different conditions on the coefficients of matrices \(A_1, \ldots , A_d, B, \) and V.

Assuming the set-up outlined above, we now describe the decay results for the fundamental matrices. We show that there exists a small constant \(\varepsilon > 0\) so that for any \(\vec {e} \in \mathbb {S}^{d-1}\),

$$\begin{aligned} \frac{e^{-\varepsilon \overline{d}(x, y, V)}}{\left| x - y\right| ^{n-2}} \lesssim \left| \left\langle \Gamma ^V (x, y) \vec {e}, \vec {e} \right\rangle \right| \lesssim \frac{ e^{-\varepsilon \underline{d}(x, y, V)}}{\left| x - y\right| ^{n-2}}, \end{aligned}$$
(3)

where \(\overline{d}\) and \(\underline{d}\) denote the upper and lower Agmon distances associated to the potential function \(V \in {\mathcal {B}_p}\) (as defined in Sect. 3). That is, we establish an exponential upper bound for the norm of the fundamental matrix in terms of the lower Agmon distance function, and an exponential lower bound in terms of the upper Agmon distance function. The precise statements of these bounds are described by Theorems 11 and 12. For the upper bound, we assume that \(V \in {\mathcal {B}_p}\) along with a quantitative cancellability condition \({\mathcal{Q}\mathcal{C}}\) that will be made precise in Sect. 2.3. On the other hand, the lower bound requires the scalar condition \(\left| V\right| \in {\text {B}_p}\) and that the operator \(\mathcal {L}_V\) satisfies some additional properties—notably a scale-invariant Harnack-type condition. In fact, the term \(\overline{d}(x, y, V)\) in the lower bound of (3) can be replaced by \(d(x, y, \left| V\right| )\); see Remark 15.

Interestingly, (3) can be used to provide a beautiful connection between our upper and lower auxiliary functions and Landscape functions that are similar to those defined in [15]. Note that this connection was previously found in [16] for scalar elliptic operators with nonnegative potentials. We will briefly discuss these ideas at the end of Sect. 9, see Remark 17.

To further understand the structure of the bounds stated in (3), we consider a simple example. For some scalar functions \(0 < v_1 \le v_2 \in {\text {B}_p}\), define the matrix function

$$\begin{aligned} V = \begin{bmatrix} v_1 &{}\quad 0 \\ 0 &{}\quad v_2 \end{bmatrix}. \end{aligned}$$

A straightforward check shows that \(V \in {\mathcal {B}_p}\) and satisfies a nondegeneracy condition that will be introduced below. Moreover, the upper and lower Agmon distances satisfy \(\underline{d}\left( \cdot , \cdot , V \right) = d\left( \cdot , \cdot , v_1 \right) \) and \(\overline{d}\left( \cdot , \cdot , V \right) = d\left( \cdot , \cdot , v_2 \right) \), where \(d\left( x, y, v \right) \) denotes the standard Agmon distance from x to y that is associated to a scalar function \(v \in {\text {B}_p}\). We then set

$$\begin{aligned} \mathcal {L}_V= - \Delta + V. \end{aligned}$$

Since \(\vec {u}\) satisfies \(\mathcal {L}_V\vec {u} = \vec {f}\) if and only if \(u_i\) satisfies \(- \Delta u_i + v_i u_i = f_i\) for \(i = 1, 2\), then \(\mathcal {L}_V\) satisfies the set of elliptic assumptions required for our operator. Moreover, the fundamental matrix for \(\mathcal {L}_V\) has a diagonal form given by

$$\begin{aligned} \Gamma ^V = \begin{bmatrix} \Gamma ^{v_1} &{}\quad 0 \\ 0 &{}\quad \Gamma ^{v_2} \end{bmatrix}, \end{aligned}$$

where each \(\Gamma ^{v_i}\) is the fundamental solution for \(-\Delta + v_i\). The results of [2] and [3] show that for \(i = 1, 2\), there exists \(\varepsilon _i > 0\) so that

$$\begin{aligned} \frac{e^{-\varepsilon _i d(x, y, v_i)}}{\left| x - y\right| ^{n-2}} \lesssim \Gamma ^{v_i}(x, y) \lesssim \frac{ e^{-\varepsilon _i d(x, y, v_i)}}{\left| x - y\right| ^{n-2}}. \end{aligned}$$

Restated, for \(i = 1, 2\), we have

$$\begin{aligned} \frac{e^{-\varepsilon _i d(x, y, v_i)}}{\left| x - y\right| ^{n-2}} \lesssim \left\langle \Gamma ^V \vec {e}_i, \vec {e}_i \right\rangle \lesssim \frac{ e^{-\varepsilon _i d(x, y, v_i)}}{\left| x - y\right| ^{n-2}}, \end{aligned}$$

where \(\left\{ \vec {e}_1, \vec {e}_2\right\} \) denotes the standard basis for \(\mathbb {R}^2\). Since \(v_1 \le v_2\) implies that \(\underline{d}\left( x, y, V \right) = d\left( x, y, v_1 \right) \le d\left( x, y, v_2 \right) = \overline{d}\left( x, y, V \right) \), then we see that there exists \(\varepsilon > 0\) so that for any \(\vec {e} \in {\mathbb {S}}^1\),

$$\begin{aligned} \frac{e^{-\varepsilon \overline{d}(x, y, V)}}{\left| x - y\right| ^{n-2}} \lesssim \left\langle \Gamma ^V \vec {e}, \vec {e} \right\rangle \lesssim \frac{ e^{-\varepsilon \underline{d}(x, y, V)}}{\left| x - y\right| ^{n-2}}. \end{aligned}$$

Compared to estimate (3) that holds for our general operators, this example shows that our results are sharp up to constants. In particular, the best exponential upper bound we can hope for will involve the lower Agmon distance function, while the best exponential lower bound will involve the upper Agmon distance function.

As stated above, the Fefferman–Phong inequalities are crucial to proving the exponential upper and lower bounds of this article. The classical Poincaré inequality is one of the main tools used to prove the original Fefferman–Phong inequalities. Since we are working in a matrix setting, we use a new matrix-weighted Poincaré inequality. Interestingly, a fairly straightforward argument based on the scalar Poincaré inequality from [2] can be used to prove this matrix version of the Poincaré inequality, which is precisely what is needed to prove the main results described above.

Although the main theorems in this article may be interpreted as vector versions of the results in [2] and [3], many new ideas (that go well beyond the technicalities of working with systems) were required and developed to establish our results. We now describe these technical innovations.

First, the theory of matrix weights was not suitably developed for our needs. For example, we had to appropriately define the matrix reverse Hölder classes, \({\mathcal {B}_p}\). And while the scalar versions of \({\text {B}_p}\) and \({\text {A}_\infty }\) have a well-understood and very useful correspondence (namely, a scalar weight \(v \in \text {B}_p\) iff \(v^p \in \text {A}_\infty \)), this relationship was not known in the matrix setting. In order to arrive at a setting in which we could establish interesting results, we explored the connections between the matrix classes \({\mathcal {B}_p}\) that we develop, as well as \({\mathcal {A}_\infty }\) and \({\mathcal {A}_{p,\infty }}\) that were introduced in [1] and [17, 18], respectively. The matrix classes are introduced in Sect. 2, and many additional relationships (including a matrix version of the previously mentioned correspondence between A\({}_\infty \) and B\({}_p\)) are explored in Appendices A and B.

Given that we are working in a matrix setting, there was no reason to expect to work with a single auxiliary function. Instead, we anticipated that our auxiliary functions would either be matrix-valued, or that we would have multiple scalar-valued “directional” Agmon functions. We first tried to work with a matrix-valued auxiliary function based on the spectral decomposition of the matrix weight. However, since this set-up assumed that all eigenvalues belong to \({\text {B}_p}\), and it is unclear when that assumption holds, we decided that this approach was overly restrictive. As such, we decided to work with a pair of scalar-valued auxiliary functions that capture the upper and lower behaviors of the matrix weight. Once these functions were defined and understood, we could associate Agmon distances to them in the usual manner. These notions are introduced in Sect. 3.

Another virtue of this article is the verification of elliptic theory for a class of elliptic systems of the form (1). By following the ideas of Caffarelli from [19], we prove that under standard assumptions on the potential matrix V, the solutions to these systems are bounded and Hölder continuous. That is, instead of simply assuming that our operators are chosen to satisfy the de Giorgi–Nash–Moser estimates, we prove in Sect. 7 that these results hold for this class of examples. In particular, we can then fully justify the existence of their corresponding fundamental matrices. To the best of our knowledge, these ideas from [19] have not been used in the linear setting.

A final challenge that we overcame has to do with the fact that there are two distinct and unrelated Agmon distance functions associated to the matrix weight V. Because of this, we had to modify the scalar approach to proving exponential upper and lower bounds for the fundamental matrix associated to the operator \(\mathcal {L}_V:= \mathcal {L}_0+ V\). The first bound that we prove for the fundamental matrix is an exponential upper bound in terms of the lower Agmon distance. In the scalar setting, this upper bound is then used to prove the exponential lower bound. But for us, the best exponential lower bound that we can expect is in terms of the upper Agmon distance. If we followed the scalar proof, we would be led to a standstill, since the upper and lower Agmon distances of V aren’t related. We overcame this issue by introducing \(\mathcal {L}_\Lambda := \mathcal {L}_0+ \left| V\right| I_d\), an elliptic operator whose upper and lower Agmon distances agree and are comparable to the upper Agmon distance associated to \(\mathcal {L}_V\). In particular, the upper bound for the fundamental matrix of \(\mathcal {L}_\Lambda \) depends on the upper Agmon distance of V. This observation, along with a clever trick, allows us to prove the required exponential lower bound. These ideas are described in Sect. 9, using results from the end of Sect. 8.

The motivating reasons for studying systems of elliptic equations are threefold, as we now describe.

First, real-valued systems may be used to describe complex-valued equations and systems. To illuminate this point, we consider a simple example. Let \(\Omega \subseteq \mathbb {R}^n\) be an open set and consider the complex-valued Schrödinger operator given by

$$\begin{aligned} L_x = - {{\,\textrm{div}\,}}\left( c \nabla \right) + x, \end{aligned}$$

where \(c = \left( c^{\alpha \beta } \right) _{\alpha , \beta = 1}^n\) denotes the complex-valued coefficient matrix and x denotes the complex-valued potential function. That is, for each \(\alpha , \beta = 1, \ldots , n\),

$$\begin{aligned} c^{\alpha \beta } = a^{\alpha \beta } + i b^{\alpha \beta }, \end{aligned}$$

where both \(a^{\alpha \beta }\) and \(b^{\alpha \beta }\) are \(\mathbb {R}\)-valued functions defined on \(\Omega \subseteq \mathbb {R}^n\), while

$$\begin{aligned} x = v + i w, \end{aligned}$$

where both v and w are \(\mathbb {R}\)-valued functions defined on \(\Omega \). To translate our complex operator into the language of systems, we define

$$\begin{aligned} A = \begin{bmatrix} a &{}\quad - b \\ b &{}\quad a \end{bmatrix}, \quad \quad V = \begin{bmatrix} v &{}\quad -w \\ w &{}\quad v \end{bmatrix}. \end{aligned}$$

That is, each of the entries of A is an \(n \times n\) matrix function:

$$\begin{aligned} A_{11} = A_{22} = a, \quad A_{12} = -b, \quad A_{21} = b, \end{aligned}$$

while each of the entries of V is a scalar function:

$$\begin{aligned} V_{11} = V_{22} = v, \quad V_{12} = -w, \quad V_{21} = w. \end{aligned}$$

Then we define the systems operator

$$\begin{aligned} \mathcal {L}_V= -D_\alpha \left( A^{\alpha \beta } D_\beta \right) + V. \end{aligned}$$
(4)

If \(u = u_1 + i u_2\) is a \(\mathbb {C}\)-valued solution to \(L_x u = 0\), where both \(u_1\) and \(u_2\) are \(\mathbb {R}\)-valued, then \(\vec {u} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}\) is an \(\mathbb {R}^2\)-valued vector solution to the elliptic system described by \(\mathcal {L}_V\vec {u} = \vec {0}\).

This construction also works with complex systems. Let \(C = A + i B\), where each \(A^{\alpha \beta }\) and \(B^{\alpha \beta }\) is \(\mathbb {R}^{d \times d}\)-valued, for \(\alpha , \beta \in \left\{ 1, \ldots , n\right\} \). If we take \(X = V + i W\), where V and W are \(\mathbb {R}^{d \times d}\)-valued, then the operator

$$\begin{aligned} L_X = - D_\alpha \left( C^{\alpha \beta } D_\beta \right) + X \end{aligned}$$

describes a complex-valued system of d equations. Following the construction above, we get a real-valued system of 2d equations of the form described by (4), where now

$$\begin{aligned} A = \begin{bmatrix} A &{}\quad - B \\ B &{}\quad A \end{bmatrix}, \qquad V = \begin{bmatrix} V &{}\quad - W \\ W &{}\quad V \end{bmatrix}. \end{aligned}$$

In particular, if X is assumed to be a \(d \times d\) Hermitian matrix (meaning that \(X = X^*\)), then V is a \(2d \times 2d\) real, symmetric matrix. This shows that studying systems of equations with Hermitian potential functions (as is often done in mathematical physics) is equivalent to studying real-valued systems with symmetric potentials, as we do in this article. Moreover, X is positive (semi)definite iff V is positive (semi)definite. In conclusion, because there is much interest in complex-valued elliptic operators, we believe that it is very meaningful to study real-valued elliptic systems of equations.
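The algebra behind this dimension doubling is elementary and can be spot-checked numerically. The sketch below is purely illustrative (the matrices are arbitrary examples, not taken from this article): for a \(2 \times 2\) Hermitian X, it verifies that the doubled real matrix is symmetric and that the complex and doubled real matrix–vector products agree.

```python
# Spot-check of the "dimension doubling" X = V + iW  ->  [[V, -W], [W, V]].
# X below is an arbitrary Hermitian example.

X = [[2 + 0j, 1 - 3j],
     [1 + 3j, 5 + 0j]]          # Hermitian: X equals its conjugate transpose
d = len(X)

V = [[X[i][j].real for j in range(d)] for i in range(d)]
W = [[X[i][j].imag for j in range(d)] for i in range(d)]

# build the doubled 2d x 2d real matrix [[V, -W], [W, V]]
VV = [[0.0] * (2 * d) for _ in range(2 * d)]
for i in range(d):
    for j in range(d):
        VV[i][j] = V[i][j]
        VV[i][j + d] = -W[i][j]
        VV[i + d][j] = W[i][j]
        VV[i + d][j + d] = V[i][j]

# symmetry of the doubled matrix (uses V^T = V and W^T = -W for Hermitian X)
assert all(VV[i][j] == VV[j][i] for i in range(2 * d) for j in range(2 * d))

# complex mat-vec X u versus doubled real mat-vec on (Re u, Im u)
u = [1 - 2j, 3 + 1j]
y = [sum(X[i][j] * u[j] for j in range(d)) for i in range(d)]
ur = [z.real for z in u] + [z.imag for z in u]
yr = [sum(VV[i][j] * ur[j] for j in range(2 * d)) for i in range(2 * d)]
assert all(abs(yr[i] - y[i].real) < 1e-12 for i in range(d))
assert all(abs(yr[i + d] - y[i].imag) < 1e-12 for i in range(d))
print("doubling check passed")
```

The same computation, run blockwise, covers the system case \(C = A + iB\) with matrix-valued blocks.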

Our second motivation comes from physics and molecular dynamics. Schrödinger operators with complex Hermitian matrix potentials V naturally arise when one seeks to solve the Schrödinger eigenvalue problem for a molecule with Coulomb interactions between electrons and nuclei. More precisely, it is sometimes useful to convert the eigenvalue problem associated to the above (scalar) Schrödinger operator into a simpler eigenvalue problem associated to a Schrödinger operator with a matrix potential and Laplacian taken with respect to only the nuclear coordinates. See the classical references [20, p. 335–342] and [21, p. 148–153] for more details. Note that this potential is self-adjoint and is often assumed to have eigenvalues that are bounded below, or even approaching infinity as the nuclear variable approaches infinity. See for example [22,23,24,25,26], where various molecular dynamical approximation errors and asymptotics are computed utilizing the matrix Schrödinger eigenvalue problem stated above as their starting point.

With this in mind, we are hopeful that the results in this paper might find applications to the mathematical theory of molecular dynamics. Moreover, it would be interesting to know whether the results of Sects. 6 and 7 are true for “Schrödinger operators” with a matrix potential and nonzero first order terms. Note that such operators also appear naturally when one solves the same Schrödinger eigenvalue problem for a molecule with Coulomb interactions between electrons and nuclei, but only partially performs the “conversion” described in the previous paragraph. We again refer the reader to [20, p. 335–342] for additional details. It would also be interesting to determine whether defining a landscape function in terms of a Green’s function of a Schrödinger operator with a matrix potential would provide useful pointwise eigenfunction bounds.

Third, studying elliptic systems of PDEs with a symmetric nonnegative matrix potential provides a beautiful connection between the theory of matrix-weighted norm inequalities and the theory of elliptic PDEs. In particular, classical scalar reverse Hölder and Muckenhoupt A\({}_\infty \) assumptions on the scalar potential of elliptic equations are very often assumed (see [4,5,6] for example). On the other hand, while various matrix versions of these conditions have appeared in the literature (see for example [1, 17, 18, 27, 28]), the connections between elliptic systems of PDEs with a symmetric nonnegative matrix potential and the theory of matrix-weighted norm inequalities remain a mostly unexplored area (with the exception of [1], which provides a Shubin–Maz’ya type sufficiency condition for the discreteness of the spectrum of a Schrödinger operator with complex Hermitian positive-semidefinite matrix potential V on \(\mathbb {R}^n\)). This project led to the systematic development of the theory of matrix reverse Hölder classes, \({\mathcal {B}_p}\), as well as an examination of the connections between \({\mathcal {B}_p}\), \({\mathcal {A}_\infty }\), and \({\mathcal {A}_{2,\infty }}\). By going beyond the ideas from [1, 17, 18], we carefully study \({\mathcal {A}_\infty }\) and prove that \({\mathcal {A}_\infty }= {\mathcal {A}_{2,\infty }}\).

Unless otherwise stated, we assume that our \(d \times d\) matrix weights (which play the role of the potential in our operators) are real-valued, symmetric, and positive semidefinite. As described above, real symmetric potentials are equivalent to complex Hermitian potentials through a “dimension doubling” process. In fact, because of this equivalence, our results can be compared with those in mathematical physics, where systems with complex, Hermitian matrix potentials are considered. To reiterate, we assume throughout the body of the article that V is real-valued and symmetric. However, in Appendix B, we follow the matrix weights community convention and assume that our matrix weights are complex-valued and Hermitian.

1.1 Organization of the article

The next three sections are devoted to matrix weight preliminaries with the goal of stating and proving our matrix version of the Fefferman–Phong inequality, Lemma 15. In Sect. 2, we present the different classes of matrix weights that we work with throughout this article, including the aforementioned matrix reverse Hölder condition \({\mathcal {B}_p}\) for \(p > 1\) and the (non-Muckenhoupt) quantitative cancellability condition \({\mathcal{Q}\mathcal{C}}\) that will be crucial to the proof of Lemma 15. These two classes will be discussed in relationship to the existing matrix weight literature in Appendix B. Section 3 introduces the auxiliary functions and their associated Agmon distance functions. The Fefferman–Phong inequalities are then stated and proved in Sect. 4. Section 4 also contains the Poincaré inequality that is used to prove one of our new Fefferman–Phong inequalities. The following three sections are concerned with elliptic theory. Section 5 introduces the elliptic systems of the form (1) discussed earlier. The fundamental matrices associated to these operators are discussed in Sect. 6. In Sect. 7, we show that the elliptic systems of the form (1) satisfy the assumptions from Sect. 6. The last two sections, Sects. 8 and 9, are respectively concerned with the upper and lower exponential bounds for our fundamental matrices. Further, we discuss the aforementioned connection between our upper and lower auxiliary functions and Landscape functions at the end of Sect. 9.

Finally, in our first two appendices, we state and prove a number of results related to the theory of matrix weights that are interesting in their own right, but are not needed for the proofs of our main results. In Appendix A, we explore the quantitative cancellability class \({\mathcal{Q}\mathcal{C}}\) in depth, providing examples and comparing it to our other matrix classes. In Appendix B, we systematically develop the theory of the various matrix classes that are introduced in Sect. 2. In particular, we provide a comprehensive discussion of the matrix \({\mathcal {B}_p}\) class and characterize this class of matrix weights in terms of the more classical matrix weight class \({\mathcal {A}_{p,\infty }}\) from [17, 18]. This discussion nicely complements a related matrix weight characterization from [28, Corollary 3.8]. We also discuss how the matrix \({\mathcal {A}_\infty }\) class introduced in [1] relates to the other matrix weight conditions discussed in this paper. In particular, we establish that \({\mathcal {A}_\infty }= {\mathcal {A}_{2,\infty }}\). Further, we provide a new characterization of the matrix \({\mathcal {A}_{2,\infty }}\) condition in terms of a new reverse Brunn–Minkowski type inequality. We hope that Appendix B will appeal to the reader who is interested in the theory of matrix-weighted norm inequalities in their own right. The last appendix contains the proofs of technical results that we skipped in the body.

We have attempted to make this article as self-contained as possible, particularly for the reader who is not an expert in elliptic theory or matrix weights. In Appendix B, we have not assumed any prior knowledge of matrix weights. As such, we hope that this section can serve as a reference for the \({\mathcal {A}_\infty }\) theory of matrix weights.

1.2 Notation

As is standard, we use C, c, etc. to denote constants that may change from line to line. We may use the notation C(n, p) to indicate that the constant depends on n and p, for example. The notation \(a \lesssim b\) means that there exists a constant \(c > 0\) so that \(a \le c b\). If \(c = c\left( d, p \right) \), for example, then we may write \(a \lesssim _{(d, p)} b\). We say that \(a \simeq b\) if both \(a \lesssim b\) and \(b \lesssim a\) with dependence denoted analogously.

Let \(\left\langle \cdot , \cdot \right\rangle _d: \mathbb {R}^d \times \mathbb {R}^d \rightarrow \mathbb {R}\) denote the standard Euclidean inner product on d-dimensional space. When the dimension of the underlying space is understood from the context, we may drop the subscript and simply write \(\left\langle \cdot , \cdot \right\rangle \). For a vector \(\vec {v} \in \mathbb {R}^d\), its scalar length is \(\left| \vec {v}\right| = \left\langle \vec {v}, \vec {v} \right\rangle ^{\frac{1}{2}}\). The sphere in d-dimensional space is \(\mathbb {S}^{d-1}= \left\{ \vec {v} \in \mathbb {R}^d: \left| \vec {v}\right| = 1\right\} \).

For a \(d \times d\) real-valued matrix A, we use the 2-norm, which is given by

$$\begin{aligned} \left| A\right| = \left| A\right| _2 = \sup \left\{ \left| Ax\right| : x \in \mathbb {S}^{d-1}\right\} = \sup \left\{ \left\langle Ax, y \right\rangle : x, y \in \mathbb {S}^{d-1}\right\} . \end{aligned}$$

Alternatively, \(\left| A\right| \) is equal to its largest singular value, the square root of the largest eigenvalue of \(AA^T\). For symmetric positive semidefinite \(d \times d\) matrices A and B, we say that \(A \le B\) if \(\left\langle A \vec e, \vec e \right\rangle \le \left\langle B \vec e, \vec e \right\rangle \) for every \(\vec e \in \mathbb {R}^d\). Note that both \(\left| \vec {v}\right| \) and \(\left| A\right| \) are scalar quantities.

If A is symmetric and positive semidefinite, then \(\left| A\right| \) is equal to \(\lambda \), the largest eigenvalue of A. Let \(\vec {v} \in \mathbb {S}^{d-1}\) denote an eigenvector associated to \(\lambda \) and let \(\left\{ \vec {e}_i\right\} _{i=1}^d\) denote the standard basis of \(\mathbb {R}^d\). Observe that \(\vec {v} = \sum _{i=1}^d \left\langle \vec {v}, \vec {e}_i \right\rangle \vec {e}_i\) and for each j, \( \left\langle \vec {v}, \vec {e}_j \right\rangle ^2 \le \sum _{i=1}^d \left\langle \vec {v}, \vec {e}_i \right\rangle ^2 = 1\). Then, since \(A^{\frac{1}{2}}\) is well-defined, an application of Cauchy–Schwarz shows that

$$\begin{aligned} \begin{aligned} \left| A\right|&= \lambda = \left\langle A\vec {v}, \vec {v} \right\rangle \le d \sum _{i=1}^d \left\langle A\vec {e}_i,\vec {e}_i \right\rangle . \end{aligned} \end{aligned}$$
(5)
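Since \(\sum _{i=1}^d \left\langle A \vec {e}_i, \vec {e}_i \right\rangle = {{\,\textrm{tr}\,}}A\), estimate (5) can also be spot-checked numerically. The sketch below (an illustration only, for \(d = 2\)) generates random symmetric positive semidefinite matrices as \(M^T M\) and computes the largest eigenvalue in closed form; it also records the standard fact that for positive semidefinite A the trace alone already dominates \(\lambda \), so the factor d in (5) is convenient rather than sharp.

```python
import math
import random

# Numerical spot-check of (5) for d = 2: if A is symmetric positive
# semidefinite, then |A| = lambda_max(A) <= d * sum_i <A e_i, e_i>,
# where the right-hand side equals d * trace(A).

random.seed(1)
for _ in range(1000):
    # random symmetric PSD matrix A = M^T M, written out entrywise
    m11, m12, m21, m22 = (random.uniform(-3, 3) for _ in range(4))
    a = m11 * m11 + m21 * m21          # A[0][0]
    b = m11 * m12 + m21 * m22          # A[0][1] = A[1][0]
    c = m12 * m12 + m22 * m22          # A[1][1]
    # eigenvalues of [[a, b], [b, c]] via the quadratic formula
    lam_max = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    trace = a + c                      # = <A e_1, e_1> + <A e_2, e_2>
    assert lam_max <= 2 * trace + 1e-9  # inequality (5) with d = 2
    assert lam_max <= trace + 1e-9      # for PSD A even the factor d is spare
print("inequality (5) holds on all samples")
```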

Let \(\Omega \subseteq \mathbb {R}^n\) and \(p \in \left[ 1,\infty \right) \). For any d-vector \(\vec {v}\), we write \( \left\| \vec {v}\right\| _{L^p(\Omega )} = \left( \int _{\Omega } \left| \vec {v}(x)\right| ^p dx \right) ^{\frac{1}{p}}\). Similarly, for any \(d \times d\) matrix A, we use the notation \( \left\| A\right\| _{L^p(\Omega )} = \left( \int _{\Omega } \left| A(x)\right| ^p dx \right) ^{\frac{1}{p}}\). When \(p = \infty \), we write \( \left\| \vec {v}\right\| _{L^\infty (\Omega )} = {{\,\textrm{ess}\,}}\sup _{x \in \Omega } \left| \vec {v}(x)\right| \) and \( \left\| A\right\| _{L^\infty (\Omega )} = {{\,\textrm{ess}\,}}\sup _{x \in \Omega } \left| A(x)\right| \). We say that a vector \(\vec {v}\) belongs to \(L^p\left( \Omega \right) \) iff the scalar function \(\left| \vec {v}\right| \) belongs to \(L^p\left( \Omega \right) \). Similarly, for a matrix function A defined on \(\Omega \), \(A \in L^p\left( \Omega \right) \) iff \(\left| A\right| \in L^p\left( \Omega \right) \). In summary, we use the notation \(\left| \cdot \right| \) to denote norms of vectors and matrices, while we use the notations \(\left\| \cdot \right\| _p\), \(\left\| \cdot \right\| _{L^p}\), or \(\left\| \cdot \right\| _{L^p\left( \Omega \right) }\) to denote \(L^p\)-space norms.

We let \(C^\infty _c(\Omega )\) denote the set of all infinitely differentiable functions with compact support in \(\Omega \). If \(\vec {\varphi }: \Omega \rightarrow \mathbb {R}^d\) is a vector-valued function for which each component function \(\varphi _i \in C^\infty _c(\Omega )\), then we write \(\vec {\varphi } \in C^\infty _c(\Omega )\).

For \(x \in \mathbb {R}^n\) and \(r > 0\), we use the notation \(B(x, r)\) to denote the ball of radius r centered at x. We let \(Q(x, r)\) be the ball of radius r and center x in the \(\ell ^\infty \) norm on \(\mathbb {R}^n\) (i.e. the cube with side length 2r and center x). We write Q to denote a generic cube. We use the notation \(\ell \) to denote the sidelength of a cube. That is, \(\ell \left( Q\left( x, r \right) \right) = 2r\).

We will assume throughout that \(n \ge 3\) and \(d \ge 2\). In general, \(1< p < \infty \), but we may further specify the range of p as we go.

2 Matrix classes

Within this section, we define the classes of matrix functions that we work with, then we collect a number of observations about them.

2.1 Reverse Hölder matrices

Recall that for a nonnegative scalar-valued function v, we say that v belongs to the reverse Hölder class \({\text {B}_p}\) if \(v \in L^p_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \) and there exists a constant \(C_v\) so that for every cube Q in \(\mathbb {R}^n\),

$$\begin{aligned} \left( \frac{1}{\left| Q\right| } \int _Q v(x)^p \, dx \right) ^{\frac{1}{p}} \le \frac{C_v}{\left| Q\right| } \int _Q v(x) \, dx. \end{aligned}$$
(6)

Let V be a \(d \times d\) matrix weight on \(\mathbb {R}^n\). That is, V is a \(d \times d\) real-valued, symmetric, positive semidefinite matrix function defined on \(\mathbb {R}^n\). For such matrices, we define \({\mathcal {B}_p}\), the class of reverse Hölder matrices, via quadratic forms as follows.

Definition 1

(Matrix \({\mathcal {B}_p}\)) For matrix weights, we say that V belongs to the class of reverse Hölder matrices, \(V \in {\mathcal {B}_p}\), if \(V \in L^p_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \) and there exists a constant \(C_V\) so that for every cube Q in \(\mathbb {R}^n\) and every \(\vec e \in \mathbb {R}^d\) (or \(\mathbb {S}^{d-1}\)),

$$\begin{aligned} \left( \frac{1}{\left| Q\right| } \int _Q \left\langle V(x) \vec {e}, \vec {e} \right\rangle ^p \, dx \right) ^{\frac{1}{p}} \le \frac{C_V}{\left| Q\right| } \int _Q \left\langle V(x) \vec {e}, \vec {e} \right\rangle \, dx. \end{aligned}$$
(7)

This constant is independent of Q and \(\vec {e}\), so that the inequality holds uniformly in Q and \(\vec {e}\). We call \(C_V\) the (uniform) \({\mathcal {B}_p}\) constant of V. Note that \(C_V\) depends on V as well as p; to indicate this, we may use the notation \(C_{V,p}\).
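To make the definition concrete, the following sketch (our own toy example, not from the paper) numerically estimates the reverse Hölder ratio in (7) for the polynomial matrix weight \(V(y) = {\text {diag}}(1, \left| y\right| ^2)\) on \(\mathbb {R}^3\) with \(p = 2\), over a few cubes and directions; the ratios stay uniformly bounded, as Definition 1 requires:

```python
import numpy as np

# Illustration only: estimate (avg_Q <Ve,e>^p)^(1/p) / avg_Q <Ve,e>
# for V(y) = diag(1, |y|^2) on R^3 (n = 3, d = 2, p = 2) by grid quadrature.
def rh_ratio(center, r, e, p=2, m=24):
    axes = [np.linspace(c - r, c + r, m) for c in center]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    abs_y_sq = X**2 + Y**2 + Z**2
    v = e[0]**2 * 1.0 + e[1]**2 * abs_y_sq   # <V(y)e, e>
    return np.mean(v**p)**(1 / p) / np.mean(v)

ratios = [
    rh_ratio(np.array(c), r, np.array(e))
    for c in [(0.0, 0.0, 0.0), (1.0, -2.0, 0.5)]
    for r in (0.5, 1.0, 4.0)
    for e in [(1.0, 0.0), (0.0, 1.0), (0.6, 0.8)]
]
# ratio >= 1 by Jensen; a uniform upper bound is what Definition 1 asks for
assert all(1.0 - 1e-9 <= rho <= 2.0 for rho in ratios)
print(f"max ratio observed: {max(ratios):.3f}")
```

Of course, finitely many cubes prove nothing; for polynomial weights the uniform bound is supplied by Corollary 5 in Appendix A.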

Now we collect some observations about such matrix functions. The first result regards the norm of a matrix \({\mathcal {B}_p}\) function.

Lemma 1

(Largest eigenvalue is \({\text {B}_p})\) If \(V \in {\mathcal {B}_p}\), then \(\left| V\right| \in {\text {B}_p}\) with \(C_{\left| V\right| } \lesssim _{(d, p)} C_V\).

Proof

Let \(\vec {e}_1, \ldots , \vec {e}_d\) denote the standard basis for \(\mathbb {R}^d\). Using that \( \left| V\right| \le d \sum _{i=1}^d \left\langle V \vec {e}_i, \vec {e}_i \right\rangle \) (as explained in the notation section, see (5)) combined with the Hölder and Minkowski inequalities shows that

$$\begin{aligned} \left( \frac{1}{\left| Q\right| } \int _Q \left| V\right| ^p \right) ^{\frac{1}{p}}&\le \left( \frac{1}{\left| Q\right| } \int _Q \Big ( d \sum _{i=1}^d \left\langle V \vec {e}_i, \vec {e}_i \right\rangle \Big )^p \right) ^{\frac{1}{p}} = d \left( \frac{1}{\left| Q\right| } \int _Q \Big ( \sum _{i=1}^d \left\langle V \vec {e}_i, \vec {e}_i \right\rangle \Big )^p \right) ^{\frac{1}{p}} \\&\le d \sum _{i=1}^d \left( \frac{1}{\left| Q\right| } \int _Q \left\langle V \vec {e}_i, \vec {e}_i \right\rangle ^p \right) ^{\frac{1}{p}} \le d \, C_V \sum _{i=1}^d \frac{1}{\left| Q\right| } \int _Q \left\langle V \vec {e}_i, \vec {e}_i \right\rangle \le d^2 C_V \frac{1}{\left| Q\right| } \int _Q \left| V\right| , \end{aligned}$$

where we have used the reverse Hölder inequality (7) in the fourth step. \(\square \)

Lemma 2

(Gehring’s Lemma) If \(V \in {\mathcal {B}_p}\), then there exists \(\varepsilon \left( p, C_V \right) > 0\) so that \(V \in \mathcal {B}_{p+\varepsilon }\). In particular, \(V \in \mathcal {B}_q\) for all \(q \in \left[ 1, p + \varepsilon \right] \). Moreover, if \(q \le s\), then \(C_{V, q} \le C_{V, s}\).

Proof

Since \(\left\langle V(x)\vec {e}, \vec {e} \right\rangle \) is a scalar \({\text {B}_p}\) weight (with \({\text {B}_p}\) constant uniform in \(\vec {e} \in \mathbb {R}^d\)), then it follows from the proof of Gehring’s Lemma (see for example the dyadic proof in [29]) that \(V \in {\mathcal {B}_p}\) implies that there exists \(\varepsilon > 0\) such that \(V \in \mathcal {B}_{p + \varepsilon }\). Let \(q \le p + \varepsilon \), \(\vec {e} \in \mathbb {R}^d\). Then by Hölder’s inequality,

$$\begin{aligned} \left( \frac{1}{\left| Q\right| } \int _Q \left\langle V \vec {e}, \vec {e} \right\rangle ^q \right) ^{\frac{1}{q}} \le \left( \frac{1}{\left| Q\right| } \int _Q \left\langle V \vec {e}, \vec {e} \right\rangle ^{p + \varepsilon } \right) ^{\frac{1}{p + \varepsilon }} \le C_{V, p + \varepsilon } \, \frac{1}{\left| Q\right| } \int _Q \left\langle V \vec {e}, \vec {e} \right\rangle , \end{aligned}$$

showing that \(V \in \mathcal {B}_q\). If \(q \le s \le p + \varepsilon \), then the same argument holds with \(C_{V,s}\) in place of \(C_{V, p+\varepsilon }\). \(\square \)

Now we introduce an averaged version of V that will be extensively used in our arguments.

Definition 2

(Averaged matrix) Let V be a locally integrable matrix function, \(x \in \mathbb {R}^n\), \(r > 0\). We define the averaged matrix as

$$\begin{aligned} \Psi \left( x, r; V \right) = \frac{1}{r^{n-2}} \int _{Q\left( x,r \right) } V(y) dy. \end{aligned}$$
(8)

These averages have a controlled growth.

Lemma 3

(Controlled growth, cf. Lemma 1.2 in [5]) If \(V \in {\mathcal {B}_p}\), then for any \(0< r< R < \infty \),

$$\begin{aligned} \Psi \left( x, r; V \right) \le C_V \left( \frac{r}{R} \right) ^{2 - \frac{n}{p}} \Psi \left( x, R; V \right) , \end{aligned}$$

where \(C_V\) is the uniform \({\mathcal {B}_p}\) constant for V.

Proof

Let \(0< r < R\). Then for any \(\vec e \in \mathbb {R}^d\), applications of the Hölder inequality and the reverse Hölder inequality described by (7) show that

$$\begin{aligned} \left\langle \Psi \left( x, r; V \right) \vec {e}, \vec {e} \right\rangle&= \frac{1}{r^{n-2}} \int _{Q\left( x,r \right) } \left\langle V(y) \vec {e}, \vec {e} \right\rangle dy \le \frac{\left| Q\left( x,r \right) \right| ^{1 - \frac{1}{p}}}{r^{n-2}} \left( \int _{Q\left( x,R \right) } \left\langle V(y) \vec {e}, \vec {e} \right\rangle ^p dy \right) ^{\frac{1}{p}} \\&\le C_V \left( \frac{\left| Q\left( x,r \right) \right| }{\left| Q\left( x,R \right) \right| } \right) ^{1 - \frac{1}{p}} \frac{1}{r^{n-2}} \int _{Q\left( x,R \right) } \left\langle V(y) \vec {e}, \vec {e} \right\rangle dy = C_V \left( \frac{r}{R} \right) ^{2 - \frac{n}{p}} \left\langle \Psi \left( x, R; V \right) \vec {e}, \vec {e} \right\rangle . \end{aligned}$$

As \(\vec {e} \in \mathbb {R}^d\) was arbitrary, then \(\Psi \left( x, r; V \right) \le C_V \left( \frac{r}{R} \right) ^{2 - \frac{n}{p}} \Psi \left( x, R; V \right) \) in the sense of quadratic forms, which leads to the conclusion of the lemma. \(\square \)
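The scaling in Lemma 3 is easy to see in a concrete case. The following sketch (our own example) computes \(\Psi \left( 0, r; v \right) \) for the scalar weight \(v(y) = \left| y\right| ^2\) in \(\mathbb {R}^3\), where \(\Psi \left( 0, r; v \right) = c\, r^4\) exactly, so the growth bound of Lemma 3 (whose exponent \(2 - \frac{n}{p}\) is less than 2) holds with room to spare:

```python
import numpy as np

# Illustration: Psi(0, r; v) = r^{2-n} * integral_{Q(0,r)} |y|^2 dy grows
# like r^4 for v(y) = |y|^2 in R^3, consistent with Lemma 3.
def psi(r, m=40, n=3):
    axes = [np.linspace(-r, r, m)] * n
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    v = X**2 + Y**2 + Z**2
    vol = (2 * r) ** n
    return r ** (2 - n) * vol * np.mean(v)   # r^{2-n} * integral_Q v

r, R = 0.5, 4.0
exponent = np.log(psi(R) / psi(r)) / np.log(R / r)
assert abs(exponent - 4.0) < 1e-6                   # Psi(0, r; v) ~ c r^4
assert psi(r) / psi(R) <= (r / R) ** (2 - 3 / 2)    # Lemma 3 with p = 2, n = 3
print(f"observed growth exponent: {exponent:.4f}")
```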

Furthermore, the \({\mathcal {B}_p}\) matrices serve as doubling measures.

Lemma 4

(Doubling) If \(V \in {\mathcal {B}_p}\), then V is a doubling measure. That is, there exists a doubling constant \(\gamma = \gamma \left( n, p, C_V \right) > 0\) so that for every \(x \in \mathbb {R}^n\) and every \(r > 0\),

$$\begin{aligned} \int _{Q\left( x, 2r \right) } V(y) dy \le \gamma \int _{Q\left( x, r \right) } V(y) dy. \end{aligned}$$

Proof

Since each \(\left\langle V \vec {e}, \vec {e} \right\rangle \) belongs to \({\text {B}_p}\), then by the scalar result, \(\left\langle V \vec {e}, \vec {e} \right\rangle \) defines a doubling measure. Moreover, since the \({\text {B}_p}\) constant is independent of \(\vec {e} \in \mathbb {S}^{d-1}\), then so too is the doubling constant associated to each measure defined by \(\left\langle V \vec {e}, \vec {e} \right\rangle \). It follows that V defines a doubling measure. \(\square \)

2.2 Nondegenerate matrices

Next, we define a very natural class of nondegenerate matrices.

Definition 3

(Nondegeneracy class) We say that V belongs to the nondegeneracy class, \(V \in \mathcal{N}\mathcal{D}\), if V is a matrix weight that satisfies the following very mild nondegeneracy condition: For any measurable set \(E \subseteq \mathbb {R}^n\) with \(\left| E\right| > 0\), we have (in the usual sense of semidefinite matrices) that

$$\begin{aligned} V(E) := \int _E V(y) \, dy > 0. \end{aligned}$$
(9)

First we give an example of a matrix function in \({\mathcal {B}_p}\) but not in \(\mathcal{N}\mathcal{D}\).

Example 1

(\({\mathcal {B}_p}\setminus \mathcal{N}\mathcal{D}\) is nonempty) Take \(v: \mathbb {R}^n \rightarrow \mathbb {R}\) in \({\text {B}_p}\) and define

$$\begin{aligned} V = \left[ \begin{array}{cccc} v &{}\quad 0 &{}\quad \ldots &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \ldots &{}\quad 0 \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad 0 &{}\quad \ldots &{}\quad 0 \end{array}\right] . \end{aligned}$$

It is clear that \(V \in {\mathcal {B}_p}\). However, since V and its averages all have zero eigenvalues, then \(V \notin \mathcal{N}\mathcal{D}\).

Now we produce a number of examples in both \({\mathcal {B}_p}\) and \(\mathcal{N}\mathcal{D}\).

Example 2

(\({\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) polynomial matrices) Let V be a polynomial matrix.

(a)

    If V is symmetric, nontrivial along the diagonal, and positive semidefinite, then V satisfies (9). It follows from Corollary 5 in Appendix A that \(V \in {\mathcal {B}_p}\) as well.

(b)

    If for every \(\vec {e} \in \mathbb {R}^d {\setminus } \left\{ \vec {0} \right\} \), there exists \(i \in \left\{ 1, \ldots , d\right\} \) so that \( \sum _{j = 1}^d V_{ij}e_j \ne 0\), then

    $$\begin{aligned} \left\langle V^T V\vec {e}, \vec {e} \right\rangle&= \left\langle V\vec {e}, V\vec {e} \right\rangle = \sum _{k = 1}^d \left( \sum _{j = 1}^d V_{kj}e_j \right) ^2 \ge \left( \sum _{j = 1}^d V_{ij}e_j \right) ^2 > 0 \end{aligned}$$

    so that \(V^T V\) satisfies (9). Since \(V^T V\) is symmetric and polynomial, then \(V^T V \in \mathcal{N}\mathcal{D}\cap {\mathcal {B}_p}\). A similar argument shows that \(V V^T \in \mathcal{N}\mathcal{D}\cap {\mathcal {B}_p}\) as well.

As we will see below, the nondegeneracy condition described by (9) facilitates the introduction of one of our key tools. However, there are also practical reasons to avoid working with matrices that aren’t nondegenerate. For example, consider a matrix-valued Schrödinger operator of the form \(- \Delta + V\), where V is as given in Example 1. The fundamental matrix of this operator is diagonal with only the first entry exhibiting decay, while all other diagonal entries contain the fundamental solution for \(\Delta \). In particular, since the norm of this fundamental matrix doesn’t exhibit exponential decay, we believe that the assumption of nondegeneracy is very natural for our setting.

2.3 Quantitative cancellability condition

As we’ll see below, the single assumption that \(V \in {\mathcal {B}_p}\) will not suffice for our needs, and we’ll impose additional conditions on V. To define the quantitative cancellability condition that we use, we need to introduce the lower auxiliary function associated to \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\).

If \(V \in \mathcal{N}\mathcal{D}\), then by (8) and (9), for each \(x \in \mathbb {R}^n\) and \(r > 0\), we have \(\Psi \left( x, r; V \right) > 0\). If \(V \in {\mathcal {B}_p}\) and \(p > \frac{n}{2}\), then the power \(2 - \frac{n}{p} > 0\) and it follows from Lemma 3 that

$$\begin{aligned} \begin{aligned}&\lim _{r \rightarrow 0^+} \left\langle \Psi \left( x, r; V \right) \vec {e}, \vec {e} \right\rangle = 0 \; \text { for any } \vec {e} \in \mathbb {R}^d, \\&\lim _{R \rightarrow \infty } \min _{\vec {e} \in \mathbb {S}^{d-1}} \left\langle \Psi \left( x, R; V \right) \vec {e}, \vec {e} \right\rangle = \infty . \end{aligned} \end{aligned}$$

These observations allow us to make the following definition of \(\underline{m}\), the lower auxiliary function.

Definition 4

(Lower auxiliary function) Let \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) for some \(p > \frac{n}{2}\). We define the lower auxiliary function \( \underline{m}\left( \cdot , V \right) : \mathbb {R}^n\rightarrow (0, \infty )\) as follows:

$$\begin{aligned} \frac{1}{\underline{m}\left( x, V \right) } = \sup _{r > 0} \left\{ r : \min _{\vec {e} \in \mathbb {S}^{d-1}} \left\langle {\Psi }\left( x,r; V \right) \vec {e}, \vec {e} \right\rangle \le 1\right\} . \end{aligned}$$
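The quantity \(\frac{1}{\underline{m}\left( x, V \right) }\) can be computed by a one-dimensional search in r. As a minimal sketch (our own toy example, not from the paper), for \(V \equiv I\) in \(\mathbb {R}^3\) we have \(\Psi \left( x, r; I \right) = 2^n r^2 I\), so \(\frac{1}{\underline{m}} = 2^{-3/2}\) exactly, and a bisection on the smallest eigenvalue of \(\Psi \) recovers this value:

```python
import numpy as np

# Sketch: recover 1/underline{m}(x, V) by bisection on
# r -> smallest eigenvalue of Psi(x, r; V), for the toy weight V = I in R^3.
def min_eig_psi(x, r, V_func, m=8, n=3):
    axes = [np.linspace(xi - r, xi + r, m) for xi in x]
    pts = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, n)
    avg = np.mean([V_func(p) for p in pts], axis=0)  # average of V over Q(x, r)
    psi = r ** (2 - n) * (2 * r) ** n * avg          # Psi(x, r; V)
    return np.linalg.eigvalsh(psi)[0]

def inv_lower_m(x, V_func, lo=1e-3, hi=1e3, iters=60):
    # largest r with min-eig Psi(x, r; V) <= 1, via geometric bisection
    # (valid here because Psi is monotone increasing in r for this example)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if min_eig_psi(x, mid, V_func) <= 1 else (lo, mid)
    return lo

r_star = inv_lower_m(np.zeros(3), lambda p: np.eye(2))
assert abs(r_star - 2 ** (-1.5)) < 1e-6
print(f"1/underline_m = {r_star:.6f}  (exact: {2**-1.5:.6f})")
```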

We investigate this function and others in much more detail within Sect. 3. For now, we use \(\underline{m}\) to define our next matrix class.

Definition 5

(Quantitative cancellability class) If \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\), then we say that V belongs to the Quantitative Cancellability class, \(V \in {\mathcal{Q}\mathcal{C}}\), if there exists \(N_V > 0\) so that for every \(x \in \mathbb {R}^n\) and every \(\vec {e} \in \mathbb {R}^d\),

$$\begin{aligned} N_V \left| \vec {e}\right| ^2 \le \int _Q \left\langle V^\frac{1}{2} (y) V(Q)^{-1} V^\frac{1}{2} (y) \vec {e}, \vec {e} \right\rangle \, dy, \end{aligned}$$
(10)

where \(Q = Q\left( x, \frac{1}{\underline{m}(x, V)} \right) \).

For most of our applications, we consider \({\mathcal{Q}\mathcal{C}}\) as a subset of \({\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) in order to make sense of \(\underline{m}(\cdot , V)\). However, if \(V \notin {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) or \(\underline{m}\left( \cdot , V \right) \) is not well-defined, we say that \(V \in {\mathcal{Q}\mathcal{C}}\) if (10) holds for every cube \(Q \subseteq \mathbb {R}^n\).
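To see why (10) is a cancellability condition, note that for a diagonal weight V with a.e. positive entries the integrand diagonalizes and the integral in (10) is exactly the identity, so \(N_V = 1\) works for every cube. The following sketch (our own example) confirms this numerically for \(V(y) = {\text {diag}}(1, \left| y\right| ^2)\) on a sample cube in \(\mathbb {R}^3\):

```python
import numpy as np

# Illustration: for diagonal V, each diagonal entry of the QC integral in
# (10) is integral_Q v_i(y) dy / v_i(Q) = 1, so the integral equals I.
def qc_integral(center, r, m=30):
    axes = [np.linspace(c - r, c + r, m) for c in center]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    v2 = X**2 + Y**2 + Z**2                  # second diagonal entry of V(y)
    vol = (2 * r) ** 3
    VQ = vol * np.array([[1.0, 0.0], [0.0, np.mean(v2)]])  # V(Q)
    # V^{1/2}(y) V(Q)^{-1} V^{1/2}(y) is diagonal with entries v_i(y)/v_i(Q)
    return vol * np.array([[1.0 / VQ[0, 0], 0.0],
                           [0.0, np.mean(v2 / VQ[1, 1])]])

M = qc_integral((0.3, -1.0, 2.0), 1.5)
assert np.allclose(M, np.eye(2), atol=1e-12)
print(M)
```

The point of \({\mathcal{Q}\mathcal{C}}\) is that a quantitative remnant of this cancellation survives for genuinely noncommuting weights.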

To show that this class of matrices is meaningful, we provide a non-example.

Example 3

(\({\mathcal{Q}\mathcal{C}}\) is a proper subset of \({\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\)) Define \(V: \mathbb {R}^n \rightarrow \mathbb {R}^{2 \times 2}\) by

$$\begin{aligned} V(x) = \begin{bmatrix}1 &{}\quad \left| x\right| ^2 \\ \left| x\right| ^2 &{}\quad \left| x\right| ^4 \end{bmatrix} = \begin{bmatrix}1 &{}\quad x_1^2 + \cdots + x_n^2 \\ x_1^2 + \cdots + x_n^2 &{}\quad \left( x_1^2 + \cdots + x_n^2 \right) ^2 \end{bmatrix}. \end{aligned}$$

By Example 2, \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\). However, as shown in Appendix A, \(V \notin {\mathcal{Q}\mathcal{C}}\).

In Sect. 8, we prove one of our main results: an upper exponential decay estimate for the fundamental matrix of the elliptic operator \(\mathcal {L}_V\), where \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\cap {\mathcal{Q}\mathcal{C}}\). A further discussion of these matrix classes and their relationships is available in Appendix A.

2.4 Stronger conditions

To finish our discussion of matrix weights, we introduce some closely related and more well-known classes of matrices. Note that these assumptions are stronger and more readily checkable.

Definition 6

(\({\mathcal {A}_\infty }\) matrices) We say that V belongs to the A-infinity class of matrices, \(V \in {\mathcal {A}_\infty }\), if for any \(\varepsilon > 0\), there exists \(\delta > 0\) so that for every cube Q,

$$\begin{aligned} \left| \left\{ x \in Q: V(x) \ge \frac{\delta }{\left| Q\right| } \int _Q V(y) \, dy \right\} \right| \ge \left( 1 - \varepsilon \right) \left| Q\right| . \end{aligned}$$
(11)

This class of matrix weights was first introduced in [1], where the author proved a Shubin–Maz’ya type sufficiency condition for the discreteness of the spectrum of a Schrödinger operator \(- \Delta + V\), where \(V \in {\mathcal {A}_\infty }\). Interestingly, and somewhat surprisingly, we show in Appendix B that the condition \(V \in {\mathcal {A}_\infty }\) is equivalent to \(V \in \mathcal {A}_{2, \infty }\). The class \(\mathcal {A}_{2, \infty }\) is the readily checkable class of matrix weights introduced in [17, 18], which we now define.

Definition 7

(\(\mathcal {A}_{2, \infty }\) matrices) We say \(V \in \mathcal {A}_{2, \infty }\), if there exists \(A_V > 0\) so that for every cube Q, we have

We briefly discuss the relationship between \({\mathcal {B}_p}\) and \({\mathcal {A}_\infty }\). First, we have the following application of Gehring’s lemma.

Lemma 5

(\({\mathcal {A}_\infty }\subseteq \mathcal {B}_q\)) If \(V \in {\mathcal {A}_\infty }\), then \(V \in \mathcal {B}_q\) for some \(q > 1\).

Proof

Since \(\left\langle V \vec {e}, \vec {e} \right\rangle \in {\text {A}_\infty }\) uniformly in \(\vec {e} \in \mathbb {S}^{d-1}\), then by [30, Lemma 7.2.2], there exists \(q > 1\) so that \(\left\langle V \vec {e}, \vec {e} \right\rangle \in B_q\) uniformly in \(\vec {e} \in \mathbb {S}^{d-1}\). In particular, \(V \in \mathcal {B}_q\), as required. \(\square \)

On the other hand, when \(V \in {\mathcal {B}_p}\), there is no reason to expect that \(V \in {\mathcal {A}_\infty }\). If \(V \in {\mathcal {B}_p}\), then for each \(\vec {e} \in \mathbb {R}^d\), \(\left\langle V(x) \vec {e}, \vec {e} \right\rangle \) is a scalar \({\text {B}_p}\) function, which means that \(\left\langle V(x) \vec {e}, \vec {e} \right\rangle \) is a scalar \({\text {A}_\infty }\) function. As the \({\text {B}_p}\) constants are uniform in \(\vec {e}\), then for any \(\varepsilon > 0\), there exists \(\delta > 0\) so that for every cube Q, if we define \(Q_{\vec {e}} = \left\{ x \in Q: \left\langle V(x) \vec {e}, \vec {e} \right\rangle \ge \frac{\delta }{\left| Q\right| } \int _Q \left\langle V(y) \vec {e}, \vec {e} \right\rangle dy \right\} \), then \( \left| {Q_{\vec {e}}}\right| \ge (1-\varepsilon ) \left| Q\right| \). However, this doesn’t guarantee any quantitative lower bound on \( \left| {\bigcap _{\vec {e}} Q_{\vec {e}}}\right| \). In fact, Example 3 above gives a matrix function that belongs to \(\mathcal{N}\mathcal{D}\cap {\mathcal {B}_p}\) for every p, but has zero determinant everywhere. Therefore, for every x, there exists \(\vec {e}_x \in \mathbb {S}^{d-1}\) for which \(\left\langle V(x) \vec {e}_x, \vec {e}_x \right\rangle = 0\), while the nondegeneracy condition ensures that \(\int _Q \left\langle V(y) \vec {e}_x, \vec {e}_x \right\rangle dy > 0\) for every cube Q. From this observation, we see that the \({\mathcal {A}_\infty }\) condition (11) is impossible to satisfy. Alternatively, as shown in Appendix A, \(V \notin {\mathcal{Q}\mathcal{C}}\) and therefore by Lemma 6 below, \(V \notin {\mathcal {A}_\infty }\).

Recall that \(\lambda _d = \left| V\right| \) denotes the largest eigenvalue of V. As we saw in Lemma 1, if \(V \in {\mathcal {B}_p}\), then \(\lambda _d\) belongs to \({\text {B}_p}\). Let \(\lambda _1\) denote the smallest eigenvalue of V. That is, \(\lambda _1 = \left| V^{-1}\right| ^{-1}\). Under a stronger set of assumptions, we can also make the interesting conclusion that \(\lambda _1\) is in \({\text {B}_p}\).

Proposition 1

(Smallest eigenvalue is scalar \({\text {B}_p})\) If \(V \in {\mathcal {B}_p}\cap {\mathcal {A}_\infty }\), then \(\lambda _1 \in {\text {B}_p}\).

The proof of this result appears in the appendix of [31]. Although the assumption that \(V \in {\mathcal {B}_p}\cap {\mathcal {A}_\infty }\) implies that the smallest and largest eigenvalues of V belong to \({\text {B}_p}\), it is unclear what conditions would imply that the other eigenvalues belong to this reverse Hölder class.

The next result and its proof show that the \({\mathcal{Q}\mathcal{C}}\) condition can be thought of as a noncommutative, non-\({\mathcal {A}_\infty }\) condition that is very naturally implied by the noncommutativity that is built into the \({\mathcal {A}_\infty }\) definition.

Lemma 6

(\({\mathcal {A}_\infty }{\cap \mathcal{N}\mathcal{D}} \subseteq {\mathcal{Q}\mathcal{C}}\)) If \(V \in {\mathcal {A}_\infty }{\cap \mathcal{N}\mathcal{D}}\), then \(V \in {\mathcal{Q}\mathcal{C}}\).

In the following proof, we establish that (10) holds for all cubes \(Q \subseteq \mathbb {R}^n\), not just those at the special scale \(Q = Q\left( x, \frac{1}{\underline{m}(x, V)} \right) \).

Proof

Since \(V \in {\mathcal {A}_\infty }\), we may choose \(\delta > 0\) so that (11) holds with \(\varepsilon = \frac{1}{2}\). That is, for any \(Q \subseteq \mathbb {R}^n\), if we define \(S = \left\{ x \in Q: V(x) \ge \frac{\delta }{\left| Q\right| } V\left( Q \right) \right\} \), then \(\left| S\right| \ge \frac{1}{2} \left| Q\right| \). Since \(V(Q)\) is invertible, we may equivalently write

$$\begin{aligned} S = \left\{ x \in Q: V(x)^{\frac{1}{2}} V\left( Q \right) ^{-1} V(x)^{\frac{1}{2}} \ge \frac{\delta }{\left| Q\right| } I\right\} , \end{aligned}$$

from which it follows that

$$\begin{aligned} \int _Q V(x)^{\frac{1}{2}} V\left( Q \right) ^{-1} V(x)^{\frac{1}{2}} dx&\ge \int _{S} V(x)^{\frac{1}{2}} V\left( Q \right) ^{-1} V(x)^{\frac{1}{2}} dx \ge \int _{S} \frac{\delta }{\left| Q\right| } I dx \ge \frac{\delta }{2} I, \end{aligned}$$

showing that \(V \in {\mathcal{Q}\mathcal{C}}\). \(\square \)

Next we describe a collection of examples of matrix functions in \({\mathcal {B}_p}\cap {\mathcal {A}_{2,\infty }}\). Let \(A = \left( a_{ij} \right) _{i, j = 1}^d\) be a \(d \times d\) Hermitian, positive definite matrix and let \(\Gamma = \left( \gamma _{ij} \right) _{i, j = 1}^d\) be some constant matrix. We use A and \(\Gamma \) to define the \(d \times d\) matrix function \(V: \mathbb {R}^n \rightarrow \mathbb {R}^{d \times d}\) by

$$\begin{aligned} V(x) = \begin{pmatrix} a_{11} \left| x\right| ^{\gamma _{11}} &{}\quad \dots &{}\quad a_{1d} \left| x\right| ^{\gamma _{1d}} \\ \vdots &{}\quad \ddots &{}\quad \vdots \\ a_{d1} \left| x\right| ^{\gamma _{d1}} &{}\quad \dots &{}\quad a_{dd} \left| x\right| ^{\gamma _{dd}} \end{pmatrix}. \end{aligned}$$
(12)

By [32, Theorem 3.1], a matrix of the form (12) is positive definite a.e. iff \(\gamma _{ij} = \gamma _{ji} = \frac{1}{2} \left( \gamma _{ii} + \gamma _{jj} \right) \) for \(i, j = 1, \ldots , d\). Moreover, in this setting, [32, Lemma 3.4] shows that \(V^{-1}: \mathbb {R}^n \rightarrow \mathbb {R}^{d \times d}\) is well-defined and given by

$$\begin{aligned} V(x)^{-1} = \left( a^{ij} \left| x\right| ^{- \gamma _{ji}} \right) _{i, j =1}^d, \end{aligned}$$
(13)

where \(A^{-1} = \left( a^{ij} \right) _{i, j = 1}^d\). Under the assumption of positive definiteness, these matrices provide a full class of examples of matrix weights in \({\mathcal {B}_p}\cap {\mathcal {A}_{2,\infty }}\).

Proposition 2

Let V be defined by (12), where \(A = \left( a_{ij} \right) _{i, j = 1}^d\) is a \(d \times d\) Hermitian, positive definite matrix and \(\gamma _{ij} = \frac{1}{2} \left( \gamma _i + \gamma _j \right) \) for some \(\vec {\gamma } \in \mathbb {R}^d\). If \(p \ge 1\) and \(\gamma _{i} > - \frac{n}{p}\) for each \(1 \le i \le d\), then \(V \in {\mathcal {B}_p}\cap {\mathcal {A}_{2,\infty }}\).

The proof of this result appears in Appendix C.
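The structure behind (12) and (13) is a congruence: with \(\gamma _{ij} = \frac{1}{2}\left( \gamma _i + \gamma _j \right) \), the matrix \(V(x)\) factors as \(D(x) A D(x)\) where \(D(x) = {\text {diag}}\left( \left| x\right| ^{\gamma _i/2} \right) \), which makes positive definiteness and the inverse formula transparent. The following sketch (our own check, with arbitrary sample data) verifies (13) numerically:

```python
import numpy as np

# Check of (12)-(13): with gamma_ij = (gamma_i + gamma_j)/2, the matrix
# V(x) = (a_ij |x|^{gamma_ij}) equals D(x) A D(x) with
# D(x) = diag(|x|^{gamma_i/2}), so V(x)^{-1} = (a^{ij} |x|^{-gamma_ij}).
rng = np.random.default_rng(1)
d = 3
B = rng.standard_normal((d, d))
A = B @ B.T + d * np.eye(d)                  # Hermitian positive definite
gamma = np.array([1.0, -0.5, 2.0])           # sample exponent vector
G = (gamma[:, None] + gamma[None, :]) / 2    # gamma_ij = (gamma_i + gamma_j)/2

def V(x):
    return A * np.linalg.norm(x) ** G        # entrywise a_ij |x|^{gamma_ij}

def V_inv_formula(x):
    return np.linalg.inv(A) * np.linalg.norm(x) ** (-G)  # formula (13)

x = np.array([0.4, -1.2, 0.7])
assert np.allclose(V(x) @ V_inv_formula(x), np.eye(d), atol=1e-10)
assert np.all(np.linalg.eigvalsh(V(x)) > 0)  # positive definite off the origin
print("inverse formula (13) confirmed at a sample point")
```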

The classical Brunn–Minkowski inequality implies that the map \(A \mapsto (\det A)^{\frac{1}{d}}\), defined on the set of \(d \times d\) symmetric positive semidefinite matrices A, is concave. An application of Jensen’s inequality shows that

$$\begin{aligned} \left( \det \left( \frac{1}{\left| Q\right| } \int _Q V(x) \, dx \right) \right) ^{\frac{1}{d}} \ge \frac{1}{\left| Q\right| } \int _Q \left( \det V(x) \right) ^{\frac{1}{d}} \, dx; \end{aligned}$$
(14)

see [17, p. 48] for a proof. Accordingly, we make the following definition of an associated reverse class.

Definition 8

(\(\mathcal {R}_{\text {BM}}\)) We say that a matrix weight V belongs to the reverse Brunn–Minkowski class, \(V \in \mathcal {R}_{\text {BM}}\), if there exists a constant \(B_V > 0\) so that for any cube \(Q \subseteq \mathbb {R}^n\), it holds that

$$\begin{aligned} \left( \det \left( \frac{1}{\left| Q\right| } \int _Q V(x) \, dx \right) \right) ^{\frac{1}{d}} \le B_V \, \frac{1}{\left| Q\right| } \int _Q \left( \det V(x) \right) ^{\frac{1}{d}} \, dx. \end{aligned}$$
(15)
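The concavity inequality (14) is easy to observe numerically. The following sketch (our own illustration, not a proof) samples random positive semidefinite matrices, playing the role of values of V over a cube, and checks that the d-th root of the determinant of the average dominates the average of the d-th roots:

```python
import numpy as np

# Illustration of (14): by concavity of A -> det(A)^{1/d} on PSD matrices
# and Jensen's inequality, det(avg)^(1/d) >= avg(det^(1/d)).
rng = np.random.default_rng(2)
d, N = 3, 200
Bs = rng.standard_normal((N, d, d))
Vs = np.einsum("kij,klj->kil", Bs, Bs)   # N random PSD samples B_k B_k^T

lhs = np.linalg.det(Vs.mean(axis=0)) ** (1 / d)
rhs = np.mean(np.linalg.det(Vs) ** (1 / d))
assert lhs >= rhs - 1e-12
print(f"det(avg)^(1/d) = {lhs:.3f} >= avg(det^(1/d)) = {rhs:.3f}")
```

The reverse class \(\mathcal {R}_{\text {BM}}\) of Definition 8 asks for the opposite inequality to hold up to the constant \(B_V\), which is a genuine restriction.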

In Appendix C, we also provide the proof of the following “non-\({\mathcal {A}_\infty }\)” condition for \(V \in {\mathcal{Q}\mathcal{C}}\).

Proposition 3

If \(V \in \mathcal{N}\mathcal{D}\) and there exists a constant \(B_V > 0\) so that (15) holds for every cube \(Q \subseteq \mathbb {R}^n\), then \(V \in {\mathcal{Q}\mathcal{C}}\).

Even for a \(d \times d\) diagonal matrix weight V with a.e. positive entries \(\lambda _1(x), \ldots , \lambda _d(x)\), it is not clear when (15) holds. If each \(\lambda _j (x) \in {\text {A}_\infty }\) for \(1 \le j \le d\), then (15) holds. On the other hand, since every diagonal matrix weight V with a.e. positive entries belongs to \({\mathcal{Q}\mathcal{C}}\), (15) is not a necessary condition for membership in \({\mathcal{Q}\mathcal{C}}\). It would be interesting to find an easily checkable sufficient condition for \(V \in {\mathcal{Q}\mathcal{C}}\) that is at least trivially true in the case of diagonal matrix weights.

For a much deeper discussion of the classes \({\mathcal {B}_p}, {\mathcal {A}_\infty }, {\mathcal {A}_{2,\infty }}\), and their relationships to each other, we refer the reader to Appendix B. We don’t discuss matrix \(\mathcal {A}_p\) weights in Appendix B since they play no role in this paper. However, [33] serves as an excellent reference for the theory of matrix \(\mathcal {A}_p\) weights and the boundedness of singular integrals on these spaces.

3 Auxiliary functions and Agmon distances

Now that we have introduced the class of matrices that we work with, we develop the theory of their associated auxiliary functions. In the scalar setting, these ideas appear in [4, 2, 5], and [3], for example. As we are working with matrices instead of scalar functions, there are many different ways to generalize these ideas.

We assume from now on that \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) for some \(p \in \left[ \frac{n}{2}, \infty \right] \). By Lemma 2, there is no loss in assuming that \(p > \frac{n}{2}\). Since \(V \in \mathcal{N}\mathcal{D}\), then by (8) and (9), for each \(x \in \mathbb {R}^n\) and \(r > 0\), we have \(\Psi \left( x, r; V \right) > 0\). Since \(p > \frac{n}{2}\), then the power \(2 - \frac{n}{p} > 0\) and it follows from Lemma 3 that for any \(\vec {e} \in \mathbb {R}^d\),

$$\begin{aligned} \begin{aligned}&\lim _{r \rightarrow 0^+} \left\langle \Psi \left( x, r; V \right) \vec {e}, \vec {e} \right\rangle = 0 \\&\lim _{R \rightarrow \infty } \left\langle \Psi \left( x, R; V \right) \vec {e}, \vec {e} \right\rangle = \infty . \end{aligned} \end{aligned}$$

This allows us to make the following definition.

If \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) for some \(p > \frac{n}{2}\), then for \(x \in \mathbb {R}^n\) and \(\vec {e} \in \mathbb {S}^{d-1}\), the auxiliary function \(m\left( x, \vec {e}, V \right) \in (0, \infty )\) is defined by

$$\begin{aligned} \frac{1}{m\left( x, \vec {e}, V \right) } = \sup _{r > 0} \left\{ r : \left\langle \Psi \left( x,r; V \right) \vec {e}, \vec {e} \right\rangle \le 1\right\} . \end{aligned}$$

Remark 1

If v is a scalar \({\text {B}_p}\) function, then we may eliminate the \(\vec {e}\)-dependence and define

$$\begin{aligned} \frac{1}{m\left( x, v \right) } = \sup _{r > 0} \left\{ r : \Psi \left( x,r; v \right) \le 1\right\} . \end{aligned}$$

See [2, 4, 5], for example.

We recall the following lemma from [5], for example, that applies to scalar functions.

Lemma 7

(cf. [5, Lemma 1.4]) Assume that \(v \in {\text {B}_p}\) for some \(p > \frac{n}{2}\). There exist constants \(C, c, k_0 > 0\), depending on n, p, and \(C_v\), so that for any \(x, y \in \mathbb {R}^n\),

(a)

    If \( \left| x - y\right| \lesssim \frac{1}{m\left( x, v \right) }\), then \( m\left( x, v \right) \simeq _{(n, p, C_v)} m\left( y, v \right) \),

(b)

    \( m\left( y, v \right) \le C \left[ 1 + \left| x - y\right| m\left( x, v \right) \right] ^{k_0} m\left( x, v \right) \),

(c)

    \( m\left( y, v \right) \ge \frac{c \, m\left( x, v \right) }{\left[ 1 + \left| x -y\right| m\left( x, v \right) \right] ^{{k_0}/ \left( k_0+1 \right) }}\).

As the properties described in this lemma will be very useful below, we seek auxiliary functions that also satisfy this set of results in the matrix setting. We define two auxiliary functions as follows.

Definition 9

(Lower and upper auxiliary functions) Let \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) for some \(p > \frac{n}{2}\). We define the lower auxiliary function as follows:

$$\begin{aligned} \frac{1}{\underline{m}\left( x, V \right) } = \sup _{r > 0} \left\{ r : \min _{\vec {e} \in \mathbb {S}^{d-1}} \left\langle \Psi \left( x,r; V \right) \vec {e}, \vec {e} \right\rangle \le 1\right\} . \end{aligned}$$
(16)

The upper auxiliary function is given by

$$\begin{aligned} \frac{1}{\overline{m}\left( x, V \right) } = \sup _{r> 0} \left\{ r : \max _{\vec {e} \in \mathbb {S}^{d-1}} \left\langle \Psi \left( x,r; V \right) \vec {e}, \vec {e} \right\rangle \le 1\right\} = \sup _{r > 0} \left\{ r : \left| \Psi \left( x,r; V \right) \right| \le 1\right\} . \end{aligned}$$
(17)

Remark 2

Since \(\left| \Psi \left( x, r; V \right) \right| \) satisfies Lemma 3 whenever \(V \in {\mathcal {B}_p}\), then for the upper auxiliary function, \(\overline{m}\left( x, V \right) \), we do not need to assume that \(V \in \mathcal{N}\mathcal{D}\).

For V fixed, we define

$$\begin{aligned} \begin{aligned} \underline{\Psi }(x)&= \Psi \left( x, \frac{1}{\underline{m}\left( x, V \right) }; V \right) \\ \overline{\Psi }(x)&= \Psi \left( x, \frac{1}{\overline{m}\left( x, V \right) }; V \right) , \end{aligned} \end{aligned}$$

and observe that \(\overline{\Psi }(x) \le I \le \underline{\Psi }(x)\). In particular, for every \(\vec {e} \in \mathbb {S}^{d-1}\),

$$\begin{aligned} \left\langle \overline{\Psi }(x) \vec {e}, \vec {e} \right\rangle \le 1 \le \left\langle \underline{\Psi }(x) \vec {e}, \vec {e} \right\rangle . \end{aligned}$$
(18)

With this pair of functions in hand, we now seek to prove Lemma 7 for both \(\underline{m}\) and \(\overline{m}\). The following pair of observations for each auxiliary function will allow us to prove the desired results.

Lemma 8

(Lower observation) Let \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) for some \(p > \frac{n}{2}\). If \( c \ge \left\langle \Psi \left( x,r; V \right) \vec {e}, \vec {e} \right\rangle \) for some \(\vec {e} \in \mathbb {S}^{d-1}\), then \( r \le \max \left\{ 1, (C_Vc)^\frac{p}{2p - n}\right\} \frac{1}{\underline{m}(x, V)} \).

Proof

If \(r \le \frac{1}{\underline{m}\left( x, V \right) }\), then we are done, so assume that \(\frac{1}{\underline{m}\left( x, V \right) } < r\). Then it follows from (18) and Lemma 3 that for the vector \(\vec {e} \in \mathbb {S}^{d-1}\) from the hypothesis,

$$\begin{aligned} 1&\le \left\langle \underline{\Psi }(x) \vec {e}, \vec {e} \right\rangle = \left\langle \Psi \left( x, \frac{1}{\underline{m}\left( x, V \right) }; V \right) \vec {e}, \vec {e} \right\rangle \\&\le C_V \left( \frac{1}{\underline{m}\left( x, V \right) r} \right) ^{2 - \frac{n}{p}} \left\langle \Psi \left( x, r; V \right) \vec {e}, \vec {e} \right\rangle \le C_V c \left( \frac{1}{\underline{m}\left( x, V \right) r} \right) ^{2 - \frac{n}{p}} . \end{aligned}$$

The conclusion follows after algebraic simplifications. \(\square \)

As we observed in Lemma 1, if \(V \in {\mathcal {B}_p}\), then \(\left| V\right| \in {\text {B}_p}\). Thus, it is meaningful to discuss the quantity \(m\left( x, \left| V\right| \right) \). For \(\overline{m}\left( x, V \right) \), we rely on the following relationship regarding norms. Note that by Remark 2, we do not need to assume that \(V \in \mathcal{N}\mathcal{D}\) for this result.

Lemma 9

(Upper auxiliary function relates to norm) If \(V \in {\mathcal {B}_p}\) for some \(p > \frac{n}{2}\), then

$$\begin{aligned} \overline{m}(x, V) \le m(x, \left| V\right| ) \le \left( d^2 C_V \right) ^{\frac{p}{2p-n}} \overline{m}(x, V). \end{aligned}$$

Proof

For any \(r > 0\), choose \(\vec {e} \in \mathbb {S}^{d-1}\) so that

$$\begin{aligned} \left| \Psi (x, r ;V)\right|&= \left\langle \Psi (x, r ;V) \vec {e}, \vec {e} \right\rangle = \frac{1}{r^{n-2}} \int _{Q\left( x,r \right) } \left\langle V(y) \vec {e}, \vec {e} \right\rangle dy. \end{aligned}$$

Since \(\left\langle V(y) \vec {e}, \vec {e} \right\rangle \le \left| V(y)\right| \), then \(\left| \Psi (x, r;V)\right| \le \Psi (x, r; \left| V\right| )\). It follows that \(\frac{1}{\overline{m}\left( x, V \right) } \ge \frac{1}{m\left( x, \left| V\right| \right) }\) so that

$$\begin{aligned} {m(x, \left| V\right| )} \ge \overline{m}(x, V). \end{aligned}$$

Let \(\left\{ \vec {e}_i\right\} _{i=1}^d\) denote the standard basis of \(\mathbb {R}^d\). For any \(r > 0\), it follows from (5) that

$$\begin{aligned} \begin{aligned} \Psi (x, r; \left| V\right| )&= r^{2-n} \int _{Q(x, r)} \left| V(y)\right| \, dy \le r^{2-n} \int _{Q(x, r)} d \sum _{j = 1}^d \left\langle V(y)\vec {e}_j, \vec {e}_j \right\rangle \, dy \\&= d \sum _{j = 1}^d \left\langle \Psi (x, r; V) \vec {e}_j, \vec {e}_j \right\rangle \le d^2 \left| \Psi (x, r;V)\right| . \end{aligned} \end{aligned}$$
(19)

Combining the fact that \(\Psi \left( x, \frac{1}{m\left( x, \left| V\right| \right) }; \left| V\right| \right) = 1\) with the previous observation, Lemma 3, and the definition of \(\overline{m}\) shows that

$$\begin{aligned} 1&= \Psi \left( x, \frac{1}{m\left( x, \left| V\right| \right) }; \left| V\right| \right) \le d^2 \left| \Psi \left( x, \frac{1}{m\left( x, \left| V\right| \right) }; V \right) \right| \\&\le d^2 C_V \left[ \frac{\overline{m}(x, V)}{{m}(x, \left| V\right| )}\right] ^{2-\frac{n}{p}} \left| \Psi \left( x, \frac{1}{\overline{m}\left( x, {V} \right) }; V \right) \right| = d^2 C_V \left[ \frac{\overline{m}(x, V)}{{m}(x, \left| V\right| )}\right] ^{2-\frac{n}{p}}, \end{aligned}$$

and the second part of the inequality follows. \(\square \)

Since \(\left| V\right| = \lambda _d\), the largest eigenvalue of V, then this result shows that \(\overline{m}\left( x, V \right) \simeq m\left( x, \lambda _d \right) \), indicating why we call \(\overline{m}\) the upper auxiliary function.
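The first inequality of Lemma 9 can also be observed numerically. In the sketch below (our own toy example), we compute both \(\frac{1}{\overline{m}\left( x, V \right) }\), the largest r with \(\left| \Psi \left( x, r; V \right) \right| \le 1\), and \(\frac{1}{m\left( x, \left| V\right| \right) }\) by bisection for \(V(y) = {\text {diag}}(1, \left| y\right| ^2)\) in \(\mathbb {R}^3\), and confirm that \(\overline{m}\left( x, V \right) \le m\left( x, \left| V\right| \right) \):

```python
import numpy as np

# Illustration of Lemma 9's first inequality for V(y) = diag(1, |y|^2), n = 3:
# |Psi(x, r; V)| <= Psi(x, r; |V|) forces 1/overline{m} >= 1/m(x, |V|).
def cube_mean(f, x, r, m=30):
    axes = [np.linspace(xi - r, xi + r, m) for xi in x]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    return np.mean(f(X**2 + Y**2 + Z**2))

def largest_r(scalar_psi, lo=1e-3, hi=1e3, iters=50):
    for _ in range(iters):                    # geometric bisection in r
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if scalar_psi(mid) <= 1 else (lo, mid)
    return lo

x = np.array([1.0, 0.0, -0.5])
vol = lambda r: (2 * r) ** 3
# |Psi(x, r; V)| = max of the two diagonal entries of Psi(x, r; V)
psi_norm_V = lambda r: r ** (-1) * vol(r) * max(1.0, cube_mean(lambda s: s, x, r))
# Psi(x, r; |V|) with |V(y)| = max(1, |y|^2)
psi_abs_V = lambda r: r ** (-1) * vol(r) * cube_mean(lambda s: np.maximum(1.0, s), x, r)

inv_m_upper = largest_r(psi_norm_V)           # 1/overline{m}(x, V)
inv_m_norm = largest_r(psi_abs_V)             # 1/m(x, |V|)
assert inv_m_norm <= inv_m_upper + 1e-9       # i.e. overline{m} <= m(x, |V|)
print(f"1/m(x,|V|) = {inv_m_norm:.4f} <= 1/overline_m = {inv_m_upper:.4f}")
```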

Now we use these lemmas to establish a number of important tools related to the functions \(\underline{m}\left( x, V \right) \) and \(\overline{m}\left( x, V \right) \). From now on we will assume that \(\left| \cdot \right| \) on \(\mathbb {R}^n\) refers to the \(\ell ^\infty \) norm.

Lemma 10

(Auxiliary function properties) If \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) for some \(p > \frac{n}{2}\), then both \(\underline{m}(\cdot , V)\) and \(\overline{m}(\cdot , V)\) satisfy the conclusions of Lemma 7 where all constants depend on n, p, and \(C_V\). For \(\overline{m}(\cdot , V)\), the constants depend additionally on d and we may eliminate the assumption that \(V \in \mathcal{N}\mathcal{D}\).

Proof

First consider \(\overline{m}(\cdot , V)\). Lemma 9 combined with Lemma 1 implies that all of these properties follow immediately from Lemma 7.

Now consider \(\underline{m}(\cdot , V)\). Suppose \(\left| x - y\right| \le \frac{2^j -1}{\underline{m}\left( x, V \right) }\) for some \(j \in \mathbb {N}\). Then \(Q\left( y, \frac{1}{\underline{m}\left( x, V \right) } \right) \subseteq Q\left( x, \frac{2^j}{\underline{m}\left( x, V \right) } \right) \). Choose \(\vec {e} \in \mathbb {S}^{d-1}\) so that \(\left\langle \Psi \left( x, \frac{1}{\underline{m}\left( x, V \right) }; V \right) \vec {e}, \vec {e} \right\rangle = 1\). Then

$$\begin{aligned}&\left\langle \Psi \left( y, \frac{1}{\underline{m}\left( x, V \right) }; V \right) \vec {e}, \vec {e} \right\rangle \ = \underline{m}\left( x, V \right) ^{n-2} \int _{Q\left( y, \frac{1}{\underline{m}\left( x, V \right) } \right) } \left\langle V\left( z \right) \vec {e}, \vec {e} \right\rangle dz\\&\quad \le \underline{m}\left( x, V \right) ^{n-2}\int _{Q\left( x, \frac{2^j}{\underline{m}\left( x, V \right) } \right) } \left\langle V\left( z \right) \vec {e}, \vec {e} \right\rangle dz \\&\quad \le \underline{m}\left( x, V \right) ^{n-2} \gamma ^j \int _{Q\left( x, \frac{1}{\underline{m}\left( x, V \right) } \right) } \left\langle V\left( z \right) \vec {e}, \vec {e} \right\rangle dz = \gamma ^j \left\langle \underline{\Psi }(x) \vec {e}, \vec {e} \right\rangle = \gamma ^j, \end{aligned}$$

where we have used Lemma 4 and \(\gamma \) denotes the doubling constant. It then follows from Lemma 8 that \( \frac{1}{\underline{m}\left( x, V \right) } \le \frac{\max \left\{ 1, \left( C_V \gamma ^j \right) ^{p/(2p-n)}\right\} }{\underline{m}\left( y, V \right) }\) or

$$\begin{aligned} \underline{m}\left( y, V \right) \le \left( C_V \gamma ^j \right) ^{p/(2p-n)} \underline{m}\left( x, V \right) . \end{aligned}$$
(20)

Since \(\left| x - y\right| \le \frac{2^j -1}{\underline{m}\left( x, V \right) }\) and \(\frac{1}{\underline{m}\left( x, V \right) } \le \frac{\left( C_V \gamma ^j \right) ^{p/(2p-n)}}{\underline{m}\left( y, V \right) }\), then \(\left| x - y\right| \le \frac{\left( 2^j - 1 \right) \left( C_V \gamma ^j \right) ^{p/(2p-n)}}{\underline{m}\left( y, V \right) }\). Thus, \(Q\left( x, \frac{1}{\underline{m}\left( y, V \right) } \right) \subseteq Q\left( y, \frac{\left( 2^j - 1 \right) \left( C_V \gamma ^j \right) ^{p/(2p-n)}+1}{\underline{m}\left( y, V \right) } \right) \). Setting \( {\tilde{j}} = \lceil \ln \left[ \left( 2^j - 1 \right) \left( C_V \gamma ^j \right) ^{p/(2p-n)}+1\right] / \ln 2\rceil \), it can be shown, as above, that

$$\begin{aligned} \left\langle \Psi \left( x, \frac{1}{\underline{m}\left( y, V \right) }; V \right) \vec {e}, \vec {e} \right\rangle \le \gamma ^{{\tilde{j}}}, \end{aligned}$$

where now \(\vec {e} \in \mathbb {S}^{d-1}\) is such that \(\left\langle \underline{\Psi }(y) \vec {e}, \vec {e} \right\rangle = 1\). Arguing as above, we see that \(\frac{1}{\underline{m}\left( y, V \right) } \le \frac{ \max \left\{ 1, \left( C_V \gamma ^{\tilde{j}} \right) ^{p/(2p-n)}\right\} }{\underline{m}\left( x, V \right) }\), or

$$\begin{aligned} \underline{m}\left( x, V \right) \le \left( C_V \gamma ^{\tilde{j}} \right) ^{p/(2p-n)} \underline{m}\left( y, V \right) . \end{aligned}$$
(21)

When \(\left| x - y\right| \lesssim \frac{1}{\underline{m}\left( x, V \right) }\), we have that \(j \simeq 1\) and \(\tilde{j} \simeq 1\). Then statement (a) is a consequence of (20) and (21).

If \(\left| x - y\right| \le \frac{1}{\underline{m}\left( x,V \right) }\), then part (a) implies that \(\underline{m}\left( y, V \right) \lesssim \underline{m}\left( x, V \right) \) and the conclusion of (b) follows. Otherwise, choose \(j \in \mathbb {N}\) so that \(\frac{2^{j-1}-1}{\underline{m}\left( x, V \right) } \le \left| x - y\right| < \frac{2^j-1}{\underline{m}\left( x, V \right) }\). From (20), we see that

$$\begin{aligned} \underline{m}\left( y, V \right)&\le \left( C_V \gamma \right) ^{\frac{p}{2p-n}} \left( 2^{j-1} \right) ^{\frac{p \ln \gamma }{\left( 2p-n \right) \ln 2 } } \underline{m}\left( x, V \right) \\&\le \left( C_V \gamma \right) ^{\frac{p}{2p-n}} \left[ 1 + \left| x - y\right| \underline{m}\left( x, V \right) \right] ^{\frac{p \ln \gamma }{\left( 2p-n \right) \ln 2 } } \underline{m}\left( x, V \right) . \end{aligned}$$

Setting \(C = \left( C_V \gamma \right) ^{\frac{p}{2p-n}}\) and \(k_0 = \frac{p \ln \gamma }{\left( 2p-n \right) \ln 2 }\) gives the conclusion of (b).

If \(\left| x - y\right| \le \frac{1}{\underline{m}\left( x, V \right) }\) or \(\left| x - y\right| \le \frac{1}{\underline{m}\left( y, V \right) }\), then part (a) implies that \(\underline{m}\left( x, V \right) \lesssim \underline{m}\left( y, V \right) \), and the conclusion of (c) follows. Thus, we consider when \(\left| x - y\right| > \frac{1}{\underline{m}\left( x, V \right) }\) and \(\left| x - y\right| > \frac{1}{\underline{m}\left( y, V \right) }\). Repeating the arguments from the previous paragraph with x and y interchanged, we see that

$$\begin{aligned} \underline{m}\left( x, V \right) \le C \left[ 1 + \left| x - y\right| \underline{m}\left( y, V \right) \right] ^{k_0} \underline{m}\left( y, V \right) \le 2^{k_0}C \left| x - y\right| ^{k_0}\underline{m}\left( y, V \right) ^{k_0+1}. \end{aligned}$$

Rearranging gives that

$$\begin{aligned} \underline{m}\left( y, V \right) \ge \frac{2^{-k_0/(k_0+1)}C^{-1/(k_0+1)} \underline{m}\left( x, V \right) }{ \left( \underline{m}\left( x, V \right) \left| x - y\right| \right) ^{k_0/(k_0+1)}} \ge \frac{2^{-k_0/(k_0+1)}C^{-1/(k_0+1)} \underline{m}\left( x, V \right) }{ \left( 1 + \underline{m}\left( x, V \right) \left| x - y\right| \right) ^{k_0/(k_0+1)}}. \end{aligned}$$

Taking \(c = 2^{-k_0/(k_0+1)}C^{-1/(k_0+1)}\) leads to the conclusion of (c). \(\square \)

Using these auxiliary functions, we now define the associated Agmon distance functions.

Definition 10

(Agmon distances) Let \(\underline{m}(\cdot , V)\) and \(\overline{m}(\cdot , V)\) be as in (16) and (17), respectively. We define the lower Agmon distance function as

$$\begin{aligned} \underline{d}(x, y, V) = \inf _{\gamma } \int _0^1 \underline{m}(\gamma (t), V) \left| \gamma '(t)\right| \, dt , \end{aligned}$$

and the upper Agmon distance function as

$$\begin{aligned} \overline{d}(x, y, V) = \inf _{\gamma } \int _0^1 \overline{m}(\gamma (t), V) \left| \gamma '(t)\right| \, dt , \end{aligned}$$

where in both cases, the infimum ranges over all absolutely continuous \(\gamma :[0,1] \rightarrow \mathbb {R}^n\) with \(\gamma (0) = x\) and \(\gamma (1) = y\).
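To sketch what these distances measure, continue with the illustrative potential \(V(x) = \left( 1 + \left| x\right| ^2 \right) I\), for which both auxiliary functions are comparable to \(1 + \left| x\right| \) (again assuming Shen's scalar computation). The straight-line path from 0 to y, which is optimal up to constants in this radial example, gives

$$\begin{aligned} \underline{d}\left( 0, y, V \right) \simeq \int _0^1 \left( 1 + t \left| y\right| \right) \left| y\right| \, dt = \left| y\right| + \frac{\left| y\right| ^2}{2}, \end{aligned}$$

which is consistent with the Gaussian-type decay \(e^{-c \left| y\right| ^2}\) associated with the harmonic oscillator \(-\Delta + \left| x\right| ^2\).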

We make the following observation.

Lemma 11

(Property of Agmon distances) If \(\left| x - y\right| \le \frac{C}{\underline{m}(x, V)}\), then \(\underline{d}\left( x, y, V \right) \lesssim _{(n, p, C_V)} C\). If \(\left| x - y\right| \le \frac{C}{\overline{m}(x, V)}\), then \(\overline{d}\left( x, y, V \right) \lesssim _{(d, n, p, C_V)} C\).

Proof

We only prove the first statement since the second one is analogous. Let \(x, y \in \mathbb {R}^n\) be as given. Define \(\gamma : \left[ 0,1\right] \rightarrow \mathbb {R}^n\) by \(\gamma \left( t \right) = x + t\left( y - x \right) \). By Lemma 10(a), \(\underline{m}\left( \gamma \left( t \right) , V \right) \lesssim _{(n, p, C_V)} \underline{m}\left( x, V \right) \) for all \(t \in \left[ 0, 1\right] \). It follows that

$$\begin{aligned} \underline{d}\left( x, y, V \right)&\le \int _0^1 \underline{m}(\gamma (t), V) \left| \gamma '(t)\right| dt \lesssim _{(n, p, C_V)} \int _0^1 \underline{m}\left( x, V \right) \left| x - y\right| dt\\&\lesssim _{(n, p, C_V)} C, \end{aligned}$$

as required. \(\square \)

In future sections, the lower Agmon distance function will be an important tool for us once it has been suitably regularized. We regularize the function \(\underline{d}(\cdot , \cdot , V)\) by following the procedure from [2]. Observe that by Lemma 10(c), \(\underline{m}\left( \cdot , V \right) \) is a slowly varying function; see [34, Definition 1.4.7], for example. As such, we have the following.

Lemma 12

(cf. the proof of Lemma 3.3 in [2]) There exists a sequence \(\left\{ x_j\right\} _{j=1}^\infty \subseteq \mathbb {R}^n\) so that with \( Q_j = Q\left( x_j, \tfrac{1}{\underline{m}\left( x_j, V \right) } \right) \), we have \( \mathbb {R}^n = \bigcup _{j=1}^\infty Q_j\) and \( \sum _{j=1}^\infty \chi _{Q_j} \lesssim _{(n, p, C_V)} 1\). Moreover, for each j, there exists \(\phi _j \in C^\infty _0\left( Q_j \right) \) so that \(0 \le \phi _j \le 1\), \( \sum _{j=1}^\infty \phi _j = 1\), and \( \left| \nabla \phi _j(x)\right| \lesssim _{(n, p, C_V)} \underline{m}\left( x, V \right) \).

Remark 3

Since \(\overline{m}\left( \cdot , V \right) \) is also a slowly varying function, the same result applies to \(\overline{m}(\cdot , V)\) with constants that depend additionally on d.

Using this lemma and [34, Theorem 1.4.10], we can follow the process from [2, p. 542] to establish the following pair of results.

Lemma 13

(Lemma 3.3 in [2]) For each \(y \in \mathbb {R}^n\), there exists a nonnegative function \(\varphi _V(\cdot , y) \in C^\infty (\mathbb {R}^n)\) such that for every \(x \in \mathbb {R}^n\),

$$\begin{aligned}&\left| \varphi _V(x, y) - \underline{d}(x, y, V)\right| \lesssim _{(n, p, C_V)} 1 \\&\left| \nabla _x \varphi _V(x, y)\right| \lesssim _{(n, p, C_V)} \underline{m}(x, V). \end{aligned}$$

Lemma 14

(Lemma 3.7 in [2]) For each \(y \in \mathbb {R}^n\), there exists a sequence of nonnegative, bounded functions \(\left\{ \varphi _{V, j}\left( \cdot , y \right) \right\} \subseteq C^\infty (\mathbb {R}^n)\) such that for every \(x \in \mathbb {R}^n\),

$$\begin{aligned}&\varphi _{V, j} (x, y) \le \varphi _V(x, y) \\&\varphi _{V, j} (x, y) \rightarrow \varphi _V(x, y) \text { as } j \rightarrow \infty \\&\left| \nabla _x \varphi _{V,j}(x, y)\right| \lesssim _{(n, p, C_V)} \underline{m}(x, V). \end{aligned}$$

To conclude the section, we observe that under the stronger assumption that \(V \in {\mathcal {B}_p}\cap {\mathcal {A}_\infty }\), we can prove a result analogous to Lemma 9 for the smallest eigenvalue. By Proposition 1, \(\lambda _1 \in {\text {B}_p}\), so it is meaningful to discuss \(m\left( x, \lambda _1 \right) \). In subsequent sections, we will not assume that \(V \in {\mathcal {A}_\infty }\), so this result should be treated as an interesting observation. Its proof is provided in [31].

Proposition 4

(Lower auxiliary function relates to \(\lambda _1\)) If \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\cap {\mathcal {A}_\infty }\) for some \(p > \frac{n}{2}\), then

$$\begin{aligned} m(x, \lambda _1) \le \underline{m}(x, V) \lesssim m(x, \lambda _1), \end{aligned}$$

where the implicit constant depends on n, p, \(C_V\) and the \({\mathcal {A}_\infty }\) constants.

This result leads to the following observation.

Corollary 1

If \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\cap {\mathcal {A}_\infty }\) for some \(p > \frac{n}{2}\), then \(m\left( x, \lambda _1 \right) \) satisfies the conclusions of Lemma 7 where the constants have additional dependence on the \({\mathcal {A}_\infty }\) constants.

Remark 4

In fact, if we assume that \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\cap {\mathcal {A}_\infty }\), then we can show that Lemma 10 holds for \(\underline{m}\left( \cdot , V \right) \) in the same way that we show it holds for \(\overline{m}\left( \cdot , V \right) \). That is, we apply Lemma 7 to \(\lambda _1\), then use Proposition 4.

4 Fefferman–Phong inequalities

In this section, we present and prove our matrix versions of the Fefferman–Phong inequalities. The first result is a lower Fefferman–Phong inequality which holds with the lower auxiliary function from (16). This result will be applied in Sect. 8 where we establish upper bound estimates for the fundamental matrices. A corollary to this lower Fefferman–Phong inequality, which is used in Sect. 9 to prove lower bound estimates for the fundamental matrices, is then provided. In keeping with [2], we also present the upper bound with the upper auxiliary function from (17).

Before stating and proving the lower Fefferman–Phong inequality, we present the Poincaré inequality that will be used in its proof. Additional and more complex matrix-valued Poincaré inequalities and related Chanillo–Wheeden type conditions appear in a forthcoming manuscript.

Proposition 5

(Poincaré inequality) Let \(V \in \mathcal {B}_{\frac{n}{2}}\). For any open cube \(Q \subseteq \mathbb {R}^n\) and any \(\vec {u} \in C^1(Q)\), we have

$$\begin{aligned} {\int _Q \int _Q \left| (V(Q))^{-\frac{1}{2}} V^\frac{1}{2} (y) \left( \vec {u} (x) - \vec {u} (y) \right) \right| ^2 \, dx \, dy} \lesssim _{\left( d, n, C_V \right) } \left| Q\right| ^\frac{2}{n} {\int _Q \left| D \vec {u} (x)\right| ^2 \, dx}. \end{aligned}$$

We prove this result by following the arguments for the scalar version of the Poincaré inequality in Shen's article [2, Lemma 0.14].

Proof

Fix a cube Q and define the scalar weight \(v_Q (y) = \left| V(Q)^{-\frac{1}{2}} V(y) V(Q)^{-\frac{1}{2}}\right| \).

First we show that \(v_Q \in B_{\frac{n}{2}}\) with a comparable constant. For an arbitrary cube P, observe that by (5),

$$\begin{aligned} \left( \frac{1}{\left| P\right| } \int _P v_Q(y)^{\frac{n}{2}} \, dy \right) ^{\frac{2}{n}}&\le \left( \frac{1}{\left| P\right| } \int _P \left[ d \sum _{j = 1}^d \left\langle V(Q)^{-\frac{1}{2}} V(y) V(Q)^{-\frac{1}{2}} \vec {e}_j, \vec {e}_j \right\rangle \right] ^{\frac{n}{2}} dy \right) ^{\frac{2}{n}} \\&\lesssim _{(d, n)} \sum _{j = 1}^d \left( \frac{1}{\left| P\right| } \int _P \left\langle V(Q)^{-\frac{1}{2}} V(y) V(Q)^{-\frac{1}{2}} \vec {e}_j, \vec {e}_j \right\rangle ^{\frac{n}{2}} dy \right) ^{\frac{2}{n}} \\&\le C_V \sum _{j = 1}^d \frac{1}{\left| P\right| } \int _P \left\langle V(Q)^{-\frac{1}{2}} V(y) V(Q)^{-\frac{1}{2}} \vec {e}_j, \vec {e}_j \right\rangle dy \le \frac{d \, C_V}{\left| P\right| } \int _P v_Q(y) \, dy, \end{aligned}$$

where we have used that \(V \in \mathcal {B}_{\frac{n}{2}}\) to reach the third line. This shows that \(v_Q \in B_{\frac{n}{2}}\).

Since \(v_Q \in B_{\frac{n}{2}}\), it follows from [2, Lemma 0.14] that

$$\begin{aligned}&\int _Q \int _Q \left| (V(Q))^{-\frac{1}{2}} V^\frac{1}{2} (y) (\vec {u} (x) - \vec {u} (y))\right| ^2 \, dx \, dy \\&\quad \le {\int _Q \int _Q \left| (V(Q))^{-\frac{1}{2}} V^\frac{1}{2} (y)\right| ^2 \left| \vec {u} (x) - \vec {u} (y)\right| ^2 \, dx \, dy}\\&\quad = {\int _Q \int _Q \sum _{j = 1}^d \left| u_j (x) - u_j (y)\right| ^2 \, v_Q (y) dy \, dx}\\&\quad \lesssim _{\left( d, n, C_V \right) } \left| Q\right| ^{\frac{2}{n}} v_Q (Q) \int _Q \sum _{j = 1}^d \left| \nabla u_j (x)\right| ^2 \, dx\\&\quad = \left| Q\right| ^{\frac{2}{n}} v_Q (Q) \int _Q \left| D \vec {u} (x)\right| ^2 \, dx. \end{aligned}$$

Since

$$\begin{aligned} v_Q (Q)&= \int _Q \left| V(Q)^{-\frac{1}{2}} V(y) V(Q)^{-\frac{1}{2}}\right| \, dy \\&\le d \sum _{j = 1}^d \int _Q \left\langle V(Q)^{-\frac{1}{2}} V(y) V(Q)^{-\frac{1}{2}} \vec {e}_j, \vec {e}_j \right\rangle \, dy = d^2, \end{aligned}$$

the conclusion follows. \(\square \)
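As a sanity check, note that when \(d = 1\) and \(V = v\) is scalar, \(V(Q)^{-\frac{1}{2}} V^{\frac{1}{2}}(y) = \left( v(y) / v(Q) \right) ^{\frac{1}{2}}\), so Proposition 5 reduces to the scalar inequality of [2, Lemma 0.14]:

$$\begin{aligned} \int _Q \int _Q \left| u(x) - u(y)\right| ^2 \, \frac{v(y)}{v(Q)} \, dx \, dy \lesssim _{(n, C_v)} \left| Q\right| ^{\frac{2}{n}} \int _Q \left| \nabla u(x)\right| ^2 \, dx. \end{aligned}$$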

Now we present the lower Fefferman–Phong inequality. This result will be applied in Sect. 8 to prove the exponential upper bound on the fundamental matrix. Note that we assume here that V belongs to the first three matrix classes that were introduced in Sect. 2.

Lemma 15

(Lower Auxiliary Function Fefferman–Phong Inequality) Assume that \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\cap {\mathcal{Q}\mathcal{C}}\) for some \(p > \frac{n}{2}\). Then for any \( \vec {u} \in C^1 _0(\mathbb {R}^n)\), it holds that

$$\begin{aligned} \int _{\mathbb {R}^n} \left| \underline{m}\left( x, V \right) \vec {u}(x)\right| ^2 \, dx \lesssim _{(d, n, p, C_V, N_V)} \int _{\mathbb {R}^n}\left| D\vec {u} (x)\right| ^2 \, dx + \int _{\mathbb {R}^n}\left| V^\frac{1}{2} (x) \vec {u} (x)\right| ^2 \, dx. \end{aligned}$$

Proof

Fix \(x_0 \in \mathbb {R}^n\), let \(r_0 = \frac{1}{\underline{m}(x_0, V)}\), and set \(Q = Q(x_0, r_0)\). Property \({\mathcal{Q}\mathcal{C}}\) in (10) shows that

$$\begin{aligned} N_V \int _{Q} \left| \vec {u}(x)\right| ^2 \, dx&\le \int _{Q} \int _{Q} \left\langle V^\frac{1}{2} (y) V(Q) ^{-1} V^\frac{1}{2} (y)\vec {u}(x), \vec {u}(x) \right\rangle \, dy dx \nonumber \\&= \int _{Q} \int _{Q} \left| V(Q)^{-\frac{1}{2}} V^\frac{1}{2}(y) \vec {u}(x)\right| ^2 \, dy dx \nonumber \\&\lesssim \int _{Q} \int _{Q} \left| V(Q)^{-\frac{1}{2}} V^\frac{1}{2}(y)( \vec {u}(x) - \vec {u}(y))\right| ^2 \, dy dx \nonumber \\&\quad + \int _{Q} \int _{Q} \left| V(Q)^{-\frac{1}{2}} V^\frac{1}{2}(y) \vec {u}(y)\right| ^2 \, dy dx \nonumber \\&\lesssim _{(d, n, C_V)} r_0^{2} \int _{Q} \left| D\vec {u}(x)\right| ^2 \, dx + r_0^n \int _{Q} \left| V(Q)^{-\frac{1}{2}} V^\frac{1}{2}(y) \vec {u}(y)\right| ^2 \, dy, \end{aligned}$$
(22)

where the last line follows from an application of Proposition 5. Now we multiply this inequality through by \(r_0^{-2} = \underline{m}\left( x_0, V \right) ^{2}\), then apply Lemma 10 to conclude that \(\underline{m}\left( x_0, V \right) \simeq _{(n, p, C_V)} \underline{m}\left( x, V \right) \) on Q. It follows that

$$\begin{aligned} \int _{Q} \left| \underline{m}\left( x, V \right) \vec {u}(x)\right| ^2 dx&\lesssim _{(d, n, p, C_V, N_V)} \int _{Q} \left| D\vec {u}(x)\right| ^2 dx\\&\quad + r_0^{n-2} \left| V(Q)^{-1}\right| \int _{Q} \left| V^\frac{1}{2}(y) \vec {u}(y)\right| ^2 dy. \end{aligned}$$

Since \(r_0^{2-n}V(Q) = \Psi (x_0, r_0; V) \ge I\) implies that \(r_0^{n-2} \left| (V(Q))^{-1}\right| = \left| \Psi (x_0, r_0; V)^{-1} \right| \le 1\), we have shown that for any \(Q = Q\left( x_0, \frac{1}{\underline{m}\left( x_0, V \right) } \right) \),

$$\begin{aligned} \int _{Q} \left| \underline{m}\left( x, V \right) \vec {u}(x)\right| ^2 \, dx&\lesssim _{(d, n, p, C_V, N_V)} \int _{Q} \left| D\vec {u}(x)\right| ^2 \, dx + \int _{Q} \left| V^\frac{1}{2}(x) \vec {u}(x)\right| ^2 \, dx. \end{aligned}$$
(23)

According to Lemma 12, there exists a sequence \(\left\{ x_j\right\} _{j=1}^\infty \subseteq \mathbb {R}^n\) such that if we define \( Q_j = Q\left( x_j, \frac{1}{\underline{m}\left( x_j, V \right) } \right) \), then \( \mathbb {R}^n = \bigcup _{j=1}^\infty Q_j\) and \( \sum _{j=1}^\infty \chi _{Q_j} \lesssim _{(n, p, C_V)} 1.\) Therefore, it follows from (23) that

$$\begin{aligned}&\int _{\mathbb {R}^n}\left| \underline{m}\left( x, V \right) \vec {u}(x)\right| ^2 \, dx \le \sum _{j=1}^\infty \int _{Q_j} \left| \underline{m}\left( x, V \right) \vec {u}(x)\right| ^2 \, dx \\&\quad \lesssim _{(d, n, p, C_V, N_V)} \sum _{j=1}^\infty \left( \int _{Q_j} \left| D\vec {u}(x)\right| ^2 \, dx + \int _{Q_j} \left| V^\frac{1}{2}(x) \vec {u}(x)\right| ^2 \, dx \right) \\&\quad \lesssim _{(n, p, C_V)} \int _{\mathbb {R}^n} \left| D\vec {u}(x)\right| ^2 \, dx + \int _{\mathbb {R}^n} \left| V^\frac{1}{2}(x) \vec {u}(x)\right| ^2 \, dx, \end{aligned}$$

as required. \(\square \)
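In the scalar case \(d = 1\), the double integral in (22) collapses to \(\int _Q \left| u\right| ^2\), so the condition \({\mathcal{Q}\mathcal{C}}\) holds with \(N_V = 1\), and \(\underline{m}\left( \cdot , V \right) = m\left( \cdot , v \right) \). Lemma 15 then recovers Shen's scalar Fefferman–Phong inequality:

$$\begin{aligned} \int _{\mathbb {R}^n} m\left( x, v \right) ^2 \left| u(x)\right| ^2 \, dx \lesssim _{(n, p, C_v)} \int _{\mathbb {R}^n} \left| \nabla u(x)\right| ^2 \, dx + \int _{\mathbb {R}^n} v(x) \left| u(x)\right| ^2 \, dx. \end{aligned}$$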

Remark 5

If we assume that \(\vec {u} \equiv 1\) on Q, then the condition \({\mathcal{Q}\mathcal{C}}\) is necessary for (22) to hold on all such cubes. As such, the condition \({\mathcal{Q}\mathcal{C}}\) is a very natural assumption to impose. In fact, as we show in Appendix A, there are matrix weights \(V \in \left( {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D} \right) \setminus {\mathcal{Q}\mathcal{C}}\) for which this Fefferman–Phong estimate fails to hold.

Finally, if we replace V by \(\left| V\right| I\), we are essentially reduced to the scalar setting and we only need that \(\left| V\right| \in {\text {B}_p}\). In particular, we do not need to assume that \(V \in {\mathcal{Q}\mathcal{C}}\). As shown in Sects. 8 and 9, this result will be applied to prove the exponential lower bound on the fundamental matrix.

Corollary 2

(Norm Fefferman–Phong Inequality) Assume that \(\left| V\right| \in {\text {B}_p}\) for some \(p > \frac{n}{2}\). Then for any \( \vec {u} \in C^1 _0(\mathbb {R}^n)\), it holds that

$$\begin{aligned} \int _{\mathbb {R}^n} \left| m\left( x, \left| V\right| \right) \vec {u}(x)\right| ^2 \, dx \lesssim _{(d, n, p, C_{\left| V\right| })} \int _{\mathbb {R}^n}\left| D\vec {u} (x)\right| ^2 \, dx + \int _{\mathbb {R}^n}\left| V\right| \left| \vec {u} (x)\right| ^2 \, dx. \end{aligned}$$

To conclude the section, although we will not use it, we state the straightforward upper bound, which is similar to [2, Theorem 1.13(b)]. The proof is very similar to that of [2, Theorem 1.13(b)], so we omit it. Notice that for this result, we only assume that \(V \in {\mathcal {B}_p}\).

Proposition 6

(Upper Auxiliary Function Fefferman–Phong Inequality) Assume that \(V \in {\mathcal {B}_p}\) for some \(p > \frac{n}{2}\). For any \( \vec {u} \in C_0^1(\mathbb {R}^n)\), it holds that

$$\begin{aligned} \int _{\mathbb {R}^n} \left| V^\frac{1}{2}(x) \vec {u}(x)\right| ^2 \, dx \lesssim _{(d, n, p, C_V)} \int _{\mathbb {R}^n}\left| D\vec {u} (x)\right| ^2 \, dx + \int _{\mathbb {R}^n}\left| \overline{m}(x, V) \vec {u} (x)\right| ^2 \, dx. \end{aligned}$$

5 The elliptic operator

In this section, we introduce the generalized Schrödinger operators. For this section and the subsequent two, we do not need to assume that the matrix weight V belongs to \({\mathcal {B}_p}\), and we therefore work in a more general setting. In particular, to define the operator and the fundamental matrices, and to discuss a class of systems of operators that satisfy a set of elliptic theory results, we only require nondegeneracy and local p-integrability of the zeroth order potential terms. The stronger assumption that \(V \in {\mathcal {B}_p}\) is not required until we establish more refined bounds for the fundamental matrices; namely, the exponential upper and lower bounds. As such, the next three sections are presented for V in this more general setting.

For the leading operator, let \(A^{\alpha \beta } = A^{\alpha \beta }(x)\), for each \(\alpha , \beta \in \left\{ 1, \dots , n\right\} \), be a \(d \times d\) matrix with bounded measurable coefficients defined on \(\mathbb {R}^n\). We assume that there exist constants \(0< \lambda , \Lambda < \infty \) so that \(A^{\alpha \beta }\) satisfies an ellipticity condition of the form

$$\begin{aligned} \sum _{i, j = 1}^d \sum _{\alpha , \beta = 1}^n A^{\alpha \beta }_{ij}(x) \xi _\beta ^{j} \xi _\alpha ^{i}&\ge \lambda \sum _{i = 1}^d \sum _{\alpha = 1}^n \left| \xi _\alpha ^i\right| ^2 = \lambda \left| \xi \right| ^2 \quad \forall \, x \in \mathbb {R}^n, \xi \in \mathbb {R}^{d \times n} \end{aligned}$$
(24)

and a boundedness assumption of the form

$$\begin{aligned} \sum _{i, j = 1}^d \sum _{\alpha , \beta = 1}^n A_{ij}^{\alpha \beta }(x) \xi _\beta ^{j} \zeta _\alpha ^{i}&\le \Lambda \left| \xi \right| \left| \zeta \right| , \quad \forall \, x \in \mathbb {R}^n, \xi , \zeta \in \mathbb {R}^{d \times n}. \end{aligned}$$
(25)

For the zeroth order term, we assume that

$$\begin{aligned} V \in L^{\frac{n}{2}}_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \cap \mathcal{N}\mathcal{D}. \end{aligned}$$
(26)

In particular, since V is a matrix weight, it is a \(d \times d\), a.e. positive semidefinite, symmetric, \(\mathbb {R}\)-valued matrix function.
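For example, the simplest admissible choice \(A^{\alpha \beta }_{ij}(x) = \delta _{\alpha \beta } \delta _{ij}\) satisfies (24) and (25) with \(\lambda = \Lambda = 1\), and the operator reduces to a Schrödinger system whose equations are coupled only through V:

$$\begin{aligned} \sum _{i, j = 1}^d \sum _{\alpha , \beta = 1}^n \delta _{\alpha \beta } \, \delta _{ij} \, \xi _\beta ^{j} \xi _\alpha ^{i} = \left| \xi \right| ^2, \qquad \mathcal {L}_V\vec {u} = -\Delta \vec {u} + V \vec {u}. \end{aligned}$$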

Remark 6

Note that if \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) for some \(p \in \left[ \frac{n}{2}, \infty \right] \), then \(V \in {\mathcal {B}_p}\) implies that \(V \in L^p_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \subseteq L^{\frac{n}{2}}_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \), so such a choice of V satisfies (26). This more specific assumption on the potential functions will appear in Sects. 8 and 9.

The equations that we study are formally given by

$$\begin{aligned} \mathcal {L}_V\vec {u}&= -D_\alpha \left( A^{\alpha \beta } D_\beta \vec {u} \right) + V \vec {u}. \end{aligned}$$
(27)

To make sense of what it means for some function \(\vec {u}\) to satisfy (27), we introduce new Hilbert spaces. But first we recall some familiar and related Hilbert spaces. For any open \(\Omega \subseteq \mathbb {R}^n\), \(W^{1,2}(\Omega )\) denotes the family of all weakly differentiable functions \(u \in L^{2}(\Omega )\) whose weak derivatives are functions in \(L^2(\Omega )\), equipped with the norm that is given by

$$\begin{aligned} \left\| u\right\| _{W^{1,2}(\Omega )}^2 = \left\| u\right\| _{L^{2}(\Omega )}^2 + \left\| D u\right\| _{L^2(\Omega )}^2. \end{aligned}$$

The space \(W^{1,2}_0(\Omega )\) is defined to be the closure of \(C^\infty _c(\Omega )\) with respect to \(\left\| \cdot \right\| _{W^{1,2}(\Omega )}\). Recall that \(C^\infty _c(\Omega )\) denotes the set of all infinitely differentiable functions with compact support in \(\Omega \).

For any open \(\Omega \subseteq \mathbb {R}^n\), the space \(Y^{1,2}(\Omega )\) is the family of all weakly differentiable functions \(u \in L^{2^*}(\Omega )\) whose weak derivatives are functions in \(L^2(\Omega )\), where \(2^*=\frac{2n}{n-2}\). We equip \(Y^{1,2}(\Omega )\) with the norm

$$\begin{aligned} \left\| u\right\| ^2_{Y^{1,2}(\Omega )} := \left\| u\right\| ^2_{L^{2^*}(\Omega )} + \left\| D u\right\| ^2_{L^2(\Omega )}. \end{aligned}$$

Define \(Y^{1,2}_0(\Omega )\) as the closure of \(C^\infty _c(\Omega )\) in \(Y^{1,2}(\Omega )\). When \(\Omega = \mathbb {R}^n\), \(Y^{1,2}\left( \mathbb {R}^n \right) = Y^{1,2}_0\left( \mathbb {R}^n \right) \) (see, e.g., Appendix A in [11]). By the Sobolev inequality,

$$\begin{aligned} \left\| u\right\| _{L^{2^*}(\Omega )} \le c_n \left\| D u\right\| _{L^2(\Omega )} \quad \text {for all}\, u \in Y^{1,2}_0(\Omega ). \end{aligned}$$
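When \(\left| \Omega \right| < \infty \), a one-line application of Hölder's inequality (with exponents \(\frac{2^*}{2} = \frac{n}{n-2}\) and \(\frac{n}{2}\)) complements the Sobolev inequality:

$$\begin{aligned} \left\| u\right\| _{L^{2}(\Omega )} \le \left( \int _\Omega \left| u\right| ^{2^*} \right) ^{\frac{1}{2^*}} \left| \Omega \right| ^{\frac{1}{2} - \frac{1}{2^*}} = \left| \Omega \right| ^{\frac{1}{n}} \left\| u\right\| _{L^{2^*}(\Omega )}. \end{aligned}$$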

It follows that \(W^{1,2}_0(\Omega ) \subseteq Y^{1,2}_0(\Omega )\) with set equality when \(\Omega \) has finite measure. The bilinear form on \(Y_0^{1,2}(\Omega )\) that is given by

$$\begin{aligned} \left\langle u, v \right\rangle _{Y_0^{1,2}(\Omega )} := \int _{\Omega } D_\alpha u D_\alpha v \end{aligned}$$

defines an inner product on \(Y_0^{1,2}(\Omega )\). With this inner product, \(Y_0^{1,2}(\Omega )\) is a Hilbert space with norm

$$\begin{aligned} \left\| u\right\| _{Y_0^{1,2}(\Omega )} := \left\langle u, u \right\rangle _{Y_0^{1,2}(\Omega )}^{1/2} = \left\| Du\right\| _{L^2(\Omega )}. \end{aligned}$$

We refer the reader to [11, Appendix A] for further properties of \(Y^{1,2}(\Omega )\), and some relationships between \(Y^{1,2}(\Omega )\) and \(W^{1,2}(\Omega )\). These spaces can be generalized to vector-valued functions in the usual way.

Towards the development of our new function spaces, we start with the associated inner products. For any V as in (26) and any \(\Omega \subseteq \mathbb {R}^n\) open and connected, let \(\left\langle \cdot , \cdot \right\rangle _{W_V^{1,2}(\Omega )}: C_c^\infty (\Omega ) \times C_c^\infty (\Omega ) \rightarrow \mathbb {R}\) be given by

$$\begin{aligned} \left\langle \vec {u}, \vec {v} \right\rangle _{W_V^{1,2}(\Omega )} = \int _{\Omega } \left\langle V \vec {u}, \vec {v} \right\rangle + \left\langle D \vec {u}, D \vec {v} \right\rangle . \end{aligned}$$

This inner product induces a norm \(\left\| \cdot \right\| _{W_V^{1,2}(\Omega )}: C_c^\infty (\Omega ) \rightarrow \mathbb {R}\) that is defined by

$$\begin{aligned} \left\| \vec {u}\right\| ^2_{W_V^{1,2}(\Omega )} := \left\| V^{1/2} \vec {u}\right\| ^2_{L^{2}(\Omega )} + \left\| D \vec {u}\right\| _{L^2(\Omega )}^2 = \int _{\Omega } \left\langle V \vec {u}, \vec {u} \right\rangle + \left\langle D \vec {u}, D \vec {u} \right\rangle . \end{aligned}$$

The nondegeneracy condition described by (9) ensures that this is indeed a norm and not just a semi-norm. In particular, if \(\left\| D \vec {u}\right\| _{L^2(\Omega )} = 0\), then \(\vec {u} = \vec {c}\) a.e., where \(\vec {c}\) is a constant vector. But then by (9), \(\left\| V^{1/2} \vec {u}\right\| ^2_{L^{2}(\Omega )} = \left\| V^{1/2} \vec {c}\right\| ^2_{L^{2}(\Omega )} = 0\) if and only if \(\vec {c} = \vec {0}\).

For any \(\Omega \subseteq \mathbb {R}^n\) open and connected, we use the notation \(L_V^2(\Omega )\) to denote the space of V-weighted square integrable functions. That is,

$$\begin{aligned} L_V^2(\Omega ) = \left\{ \vec {u}: \Omega \rightarrow \mathbb {R}^d: \left\| V^{1/2} \vec {u}\right\| _{L^2\left( \Omega \right) } < \infty \right\} . \end{aligned}$$

For any V as in (26) and any \(\Omega \subseteq \mathbb {R}^n\) open and connected, define each space \(W_{V,0}^{1,2}(\Omega )\) as the completion of \(C_c^\infty (\Omega )\) with respect to the norm \(\left\| \cdot \right\| _{W_V^{1,2}(\Omega )}\). That is,

$$\begin{aligned} W_{V,0}^{1,2}(\Omega ) = \overline{C_c^\infty (\Omega )}^{\left\| \cdot \right\| _{W_V^{1,2}(\Omega )}}. \end{aligned}$$

The following proposition clarifies the meaning of our trace zero Sobolev space.

Proposition 7

Let V be as in (26) and let \(\Omega \subseteq \mathbb {R}^n\) be open and connected. For every sequence \(\{\vec {u}_k\}_{k=1}^\infty \subset C_c ^\infty (\Omega )\) that is Cauchy with respect to the \(W_{V}^{1,2}(\Omega )\)-norm, there exists a \(\vec {u} \in L_V ^2(\Omega ) \cap Y^{1,2}_0(\Omega )\) for which

$$\begin{aligned} \lim _{k \rightarrow \infty } \left\| \vec {u}_k - \vec {u}\right\| _{W^{1,2}_V(\Omega )}^2 = \lim _{k \rightarrow \infty } \left( \int _{\Omega }\left| V^\frac{1}{2} \left( \vec {u}_k - \vec {u} \right) \right| ^2 + \int _{\Omega }\left| D\vec {u}_k - D\vec {u}\right| ^2 \right) = 0. \end{aligned}$$

Proof

Since \(\{\vec {u}_k\} \subset C_c ^\infty (\Omega )\) is Cauchy in the \(W_{V}^{1,2}(\Omega )\) norm, \(\left\{ V^{1/2} \vec {u}_k\right\} \) and \(\left\{ D\vec {u}_k\right\} \) are Cauchy in \(L^2(\Omega )\), so there exist \(\vec {h} \in L^2(\Omega )\) and \(U \in L^2(\Omega )\) so that

$$\begin{aligned}&V^{1/2} \vec {u}_k \rightarrow \vec {h} \quad \text { in } \quad L^2(\Omega ) \end{aligned}$$
(28)
$$\begin{aligned}&D\vec {u}_k \rightarrow U \quad \text { in } \quad L^2(\Omega ) . \end{aligned}$$
(29)

By the Sobolev inequality applied to \(\vec {u}_k-\vec {u}_j\), we have

$$\begin{aligned} \left\| \vec {u}_k-\vec {u}_j\right\| _{L^{2^*}(\Omega )} \le c_n \left\| D\left( \vec {u}_k - \vec {u}_j \right) \right\| _{L^{2}(\Omega )} \le c_n \left\| \vec {u}_k-\vec {u}_j\right\| _{W_V^{1,2}(\Omega )}. \end{aligned}$$

In particular, \(\left\{ \vec {u}_k\right\} \) is also Cauchy in \(L^{2^*}(\Omega )\), and hence there exists \(\vec {u} \in L^{2^*}(\Omega )\) so that

$$\begin{aligned} \vec {u}_k \rightarrow \vec {u} \quad \text { in } \quad L^{2^*}(\Omega ). \end{aligned}$$
(30)

For any \(\Omega ' \Subset \mathbb {R}^n\), observe that an application of Hölder's inequality shows that

$$\begin{aligned} \left\| V^{1/2} \vec {u}_k - V^{1/2} \vec {u} \right\| _{L^{2}\left( \Omega \cap \Omega ' \right) }&\le \left( \int _{\Omega \cap \Omega '} \left| V\right| \left| \vec {u}_k - \vec {u}\right| ^2 \right) ^{\frac{1}{2}} \le \left\| V\right\| _{L^{\frac{n}{2}}\left( \Omega ' \right) }^{\frac{1}{2}} \left\| \vec {u}_k - \vec {u} \right\| _{L^{2^*}\left( \Omega \right) }. \end{aligned}$$

Since \(V \in L^{\frac{n}{2}}_{{{\,\textrm{loc}\,}}}(\mathbb {R}^n)\), then \(\left\| V\right\| _{L^{\frac{n}{2}}\left( \Omega ' \right) } < \infty \) and we conclude that \(V^{1/2} \vec {u}_k \rightarrow V^{1/2} \vec {u}\) in \(L^2\left( \Omega \cap \Omega ' \right) \) for any \(\Omega ' \Subset \mathbb {R}^n\). By comparing this statement with (28), we deduce that \(V^{1/2} \vec {u} = \vec {h}\) in \(L^2(\Omega )\) and therefore a.e., so that (28) holds with \(\vec {h} = V^{1/2} \vec {u}\). Moreover, \(\vec {u} \in L^2_V\left( \Omega \right) \).
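The Hölder step in the preceding display uses the conjugate exponents \(\frac{n}{2}\) and \(\frac{n}{n-2} = \frac{2^*}{2}\); written out,

$$\begin{aligned} \int _{\Omega \cap \Omega '} \left| V\right| \left| \vec {u}_k - \vec {u}\right| ^2 \le \left( \int _{\Omega '} \left| V\right| ^{\frac{n}{2}} \right) ^{\frac{2}{n}} \left( \int _{\Omega } \left| \vec {u}_k - \vec {u}\right| ^{2^*} \right) ^{\frac{2}{2^*}} = \left\| V\right\| _{L^{\frac{n}{2}}\left( \Omega ' \right) } \left\| \vec {u}_k - \vec {u} \right\| _{L^{2^*}\left( \Omega \right) }^2, \end{aligned}$$

and taking square roots gives a bound by \(\left\| V\right\| _{L^{\frac{n}{2}}\left( \Omega ' \right) }^{\frac{1}{2}} \left\| \vec {u}_k - \vec {u} \right\| _{L^{2^*}\left( \Omega \right) }\).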

Next we show that \(D\vec {u} = U\) weakly in \(\Omega \). Let \(\vec {\xi } \in C^\infty _c(\Omega )\). Then for \(j = 1, \ldots , n\), we get from (30) and (29) that

$$\begin{aligned} \int _{\Omega } \left\langle \vec {u} (x), D_j \vec {\xi } (x) \right\rangle \, dx&= \lim _{k \rightarrow \infty } \int _{\Omega } \left\langle \vec {u}_k (x), D_j \vec {\xi } (x) \right\rangle \, dx \\&= -\lim _{k \rightarrow \infty } \int _{\Omega } \left\langle D_j \vec {u}_k (x), \vec {\xi } (x) \right\rangle \, dx = - \int _{\Omega } \left\langle U_j(x), \vec {\xi } (x) \right\rangle \, dx, \end{aligned}$$

where \(U_j\) denotes the \(j^{\text {th}}\) column of U. That is, \(D\vec {u} = U \in L^2(\Omega )\). In particular, (29) holds with \(U = D\vec {u}\). Finally, in combination with (29) and (30), this shows that \(\vec {u} \in Y^{1,2}_0(\Omega )\). \(\square \)

By Proposition 7, associated to each equivalence class of Cauchy sequences \([\{\vec {u}_k\}] \in W_{V, 0} ^{1, 2} (\Omega )\) is a function \(\vec {u} \in L_V ^2(\Omega ) \cap Y^{1,2}_0(\Omega )\) with

$$\begin{aligned} \lim _{k \rightarrow \infty } \left\| \vec {u}_k - \vec {u}\right\| _{W^{1,2}_V(\Omega )} = 0 \end{aligned}$$

so that

$$\begin{aligned} \left\| [\{\vec {u}_k\}]\right\| _{W_{V}^{1, 2} (\Omega )}:= \lim _{k \rightarrow \infty } \left\| \vec {u}_k \right\| _{W^{1,2}_V(\Omega )} = \left\| \vec {u}\right\| _{W^{1,2}_V(\Omega )}. \end{aligned}$$

In fact, this defines a norm on weakly-differentiable vector-valued functions \(\vec {u}\) for which \(\left\| \vec {u}\right\| _{W^{1,2}_V(\Omega )} < \infty \). It follows that the function \(\vec {u}\) is unique and this shows that \(W_{ V, 0} ^{1, 2} (\Omega )\) isometrically imbeds into the space \(L_V^2(\Omega ) \cap Y^{1,2}_0(\Omega )\) equipped with the norm \(\left\| \cdot \right\| _{W^{1,2}_V(\Omega )}\).

Going forward, we will slightly abuse notation and denote each element in \(W_{V, 0}^{1, 2} (\Omega )\) by its unique associated function \(\vec {u} \in L_V ^2(\Omega ) \cap Y^{1,2}_0(\Omega )\). To define the nonzero trace spaces that we need below to prove the existence of fundamental matrices, we use restriction. That is, define the space

$$\begin{aligned} W_{V}^{1,2}(\Omega ) = \left\{ \vec {u}|_\Omega : \vec {u} \in W_{V,0}^{1,2}(\mathbb {R}^n)\right\} \end{aligned}$$

and equip it with the \(W_{V}^{1,2}(\Omega )\)-norm.

Note that \(W_{V}^{1,2}(\mathbb {R}^n) = W_{V, 0}^{1,2}(\mathbb {R}^n)\). Moreover, when \(\Omega \ne \mathbb {R}^n\), \(W_{V}^{1,2}(\Omega )\) may not be complete, so we simply treat it as an inner product space. We stress that in general, \(W_{V}^{1,2}(\Omega )\) should not be thought of as a kind of “Sobolev space,” but should instead be viewed as a convenient collection of functions used in the construction from [11]. Specifically, the construction of fundamental matrices from [11] uses the restrictions of elements from appropriate “trace zero Hilbert–Sobolev spaces” defined on \(\mathbb {R}^n\). For us, \(W_{V,0}^{1,2}(\mathbb {R}^n)\) plays the role of the trace zero Hilbert–Sobolev space. Also, as an immediate consequence of Proposition 7, we have the following.

Corollary 3

Let V be as in (26) and let \(\Omega \subseteq \mathbb {R}^n\) be open and connected. If \(\vec {u} \in W_V ^{1, 2} (\Omega )\), then \(\vec {u} \in L_V ^2(\Omega ) \cap Y^{1,2}(\Omega )\) and there exists \(\{\vec {u}_k \}_{k=1}^\infty \subset C_c ^\infty (\mathbb {R}^n)\) for which

$$\begin{aligned} \lim _{k \rightarrow \infty } \left\| \vec {u}_k - \vec {u}\right\| _{W^{1,2}_V(\Omega )}^2 = 0. \end{aligned}$$

We now formally fix the notation; then we discuss the proper meaning of the operators at hand. For every \(\vec {u} = \left( u^1,\ldots , u^d \right) ^T\) in \(W^{1,2}_{V}(\Omega )\) (and hence in \(Y^{1,2}\left( \Omega \right) \)), we define \(\mathcal {L}_0\vec {u} = - D_\alpha \left( A^{\alpha \beta } D_\beta \vec {u} \right) \). Component-wise, we have \(\left( \mathcal {L}_0\vec {u} \right) ^i = - D_\alpha \left( A_{ij}^{\alpha \beta } D_\beta u^j \right) \) for each \(i = 1, \ldots , d\). The second-order operator is written as

$$\begin{aligned} \mathcal {L}_V= \mathcal {L}_0+ V, \end{aligned}$$

see (27). Component-wise, \(\left( \mathcal {L}_V\vec {u} \right) ^i = -D_\alpha \left( A_{ij}^{\alpha \beta } D_\beta u^j \right) + V_{ij} u^j\) for each \(i = 1, \ldots , d\).

The transpose operator of \(\mathcal {L}_0\), denoted by \(\mathcal {L}_0^*\), is defined by \(\mathcal {L}_0^* \vec {u} = - D_\alpha \left[ \left( A^{\alpha \beta } \right) ^* D_\beta \vec {u}\right] \), where \(\left( A^{\alpha \beta } \right) ^* = \left( A^{\beta \alpha } \right) ^T\), or rather \(\left( A_{ij}^{\alpha \beta } \right) ^* = A_{ji}^{\beta \alpha }\). Note that the adjoint coefficients \(\left( A_{ij}^{\alpha \beta } \right) ^*\) satisfy the same ellipticity and boundedness assumptions as \(A_{ij}^{\alpha \beta }\), given by (24) and (25), respectively. Set \(V^* = V^T = V\), since V is assumed to be symmetric. The adjoint operator to \(\mathcal {L}_V\) is given by

$$\begin{aligned} \mathcal {L}_V^* \vec {u}&:=\, \mathcal {L}_0^* \vec {u} + V^* \vec {u} = -D_\alpha \left[ \left( A^{\beta \alpha } \right) ^T D_\beta \vec {u}\right] + V^T \vec {u}. \end{aligned}$$
(31)

All operators, \(\mathcal {L}_0, \mathcal {L}_0^*, \mathcal {L}_V, \mathcal {L}_V^*\) are understood in the sense of distributions on \(\Omega \). Specifically, for every \(\vec {u} \in W^{1,2}_{V}(\Omega )\) and \(\vec {v}\in C_c^\infty (\Omega )\), we use the naturally associated bilinear form and write the action of the functional \(\mathcal {L}_V\vec {u}\) on \(\vec {v}\) as

$$\begin{aligned} ({\mathcal {L}_V}\vec {u}, \vec {v}) =\mathcal {B}_V\left[ \vec {u}, \vec {v}\right]&= \int _\Omega \left\langle A^{\alpha \beta } D_\beta \vec {u}, D_\alpha \vec {v} \right\rangle + \left\langle V \, \vec {u}, \vec {v} \right\rangle \\&= \int _\Omega A_{ij}^{\alpha \beta } D_\beta u^j D_\alpha v^i + V_{ij} u^j v^i. \end{aligned}$$

It is straightforward to check that for such \(\vec {v}, \vec {u}\) and for the coefficients satisfying (25), the bilinear form above is well-defined and finite since \(V \in L^\frac{n}{2}_{{{\,\textrm{loc}\,}}}\). We explore these details in the next section. Similarly, \(\mathcal {B}_V^*\left[ \cdot , \cdot \right] \) denotes the bilinear operator associated to \(\mathcal {L}_V^*\), given by

$$\begin{aligned} (\mathcal {L}_V^*\vec {u}, \vec {v}) =\mathcal {B}_V^*\left[ \vec {u}, \vec {v}\right]&= \int \left\langle \left( A^{\beta \alpha } \right) ^T D_\beta \vec {u}, D_\alpha \vec {v} \right\rangle + \left\langle V^T \, \vec {u}, \vec {v} \right\rangle \\&= \int A_{ji}^{\beta \alpha } D_\beta u^j D_\alpha v^i + V_{ji} u^j v^i . \end{aligned}$$

Clearly, \(\mathcal {B}_V\left[ \vec {v},\vec {u}\right] =\, \mathcal {B}_V^*\left[ \vec {u},\vec {v}\right] \).
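For the reader's convenience, this identity follows by relabeling the summation indices \(i \leftrightarrow j\) and \(\alpha \leftrightarrow \beta \) in the definition of \(\mathcal {B}_V\):

$$\begin{aligned} \mathcal {B}_V\left[ \vec {v},\vec {u}\right] = \int _\Omega A_{ij}^{\alpha \beta } D_\beta v^j D_\alpha u^i + V_{ij} v^j u^i = \int _\Omega A_{ji}^{\beta \alpha } D_\alpha v^i D_\beta u^j + V_{ji} v^i u^j = \mathcal {B}_V^*\left[ \vec {u},\vec {v}\right] . \end{aligned}$$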

For any vector distribution \(\vec {f}\) on \(\Omega \) and \(\vec {u}\) as above, we always understand \({{\mathcal {L}}}\vec {u}= \vec {f} \) on \(\Omega \) in the sense of distributions; that is, as \(\mathcal {B}\left[ \vec {u},\vec {v}\right] = \vec {f}(\vec {v})\) for all \(\vec {v}\in C_c^\infty (\Omega )\). Typically \(\vec {f}\) will be an element of some \(L^\ell (\Omega )\) space and so the action of \(\vec {f}\) on \(\vec {v}\) is then simply \( \int \vec {f}\cdot \vec {v}.\) The identity \({{\mathcal {L}}}^*\vec {u}= \vec {f}\) is interpreted similarly.

We define the associated local spaces as

$$\begin{aligned} \widetilde{W}^{1,2}_{V, {{\,\textrm{loc}\,}}}(\Omega ) = \{\vec {u} \text { weakly differentiable on } \Omega : \Vert \vec {u}\Vert _{W^{1, 2} _V (\Omega ')} < \infty \text { for every } \Omega ' \Subset \Omega \}, \end{aligned}$$

where the tilde notation here is meant to emphasize that this notion of local differs from the standard one. Note that \( {W}^{1,2}_{V}(\Omega ) \subseteq \widetilde{W}^{1,2}_{V, {{\,\textrm{loc}\,}}}(\Omega )\). Moreover, the operators and bilinear forms described above may all be defined in the sense of distributions for any \(\vec {u} \in \widetilde{W}^{1,2}_{V, {{\,\textrm{loc}\,}}}(\Omega )\).

6 Fundamental matrix constructions

We maintain the assumptions from the previous section. That is, \(A^{\alpha \beta }\) is a coefficient matrix that satisfies boundedness (25) and ellipticity (24), and V is a locally integrable matrix weight that satisfies (26). The elliptic operator \(\mathcal {L}_V\) is defined formally by (27). For any open, connected \(\Omega \subseteq \mathbb {R}^n\), V is used to define the Hilbert spaces \(W^{1,2}_{V,0}\left( \Omega \right) \) and the inner product spaces \(W^{1,2}_{V}\left( \Omega \right) := W^{1,2}_{V,0}\left( \mathbb {R}^n \right) |_\Omega \).

To justify the existence of fundamental matrices associated to our generalized Schrödinger operators, we use the constructions and results presented in [11]. By the fundamental matrix, we mean the following.

Definition 11

(Fundamental Matrix) We say that a matrix function \(\Gamma ^V\left( x,y \right) = \left( \Gamma ^V_{ij}\left( x,y \right) \right) _{i,j=1}^d\) defined on \(\left\{ \left( x,y \right) \in \mathbb {R}^n \times \mathbb {R}^n: x \ne y\right\} \) is the fundamental matrix of \(\mathcal {L}_V\) if it satisfies the following properties:

  1. (a)

    \(\Gamma ^V\left( \cdot , y \right) \) is locally integrable and \(\mathcal {L}_V\Gamma ^V\left( \cdot , y \right) = \delta _y I\) for all \(y \in \mathbb {R}^n\) in the sense that for every \(\vec {\phi } = \left( \phi ^1, \ldots , \phi ^d \right) ^T \in C^\infty _c\left( \mathbb {R}^n \right) ^{d}\),

    $$\begin{aligned}&\int _{\mathbb {R}^n} A_{ij}^{\alpha \beta } D_\beta \Gamma ^V_{jk}\left( \cdot , y \right) D_\alpha \phi ^i + V_{ij} \Gamma ^V_{jk}\left( \cdot , y \right) \phi ^i = \phi ^k(y). \end{aligned}$$
  2. (b)

    For all \(y \in \mathbb {R}^n\) and \(r > 0\), \(\Gamma ^V(\cdot , y) \in Y^{1,2}\left( \mathbb {R}^n {\setminus } B\left( y, r \right) \right) \).

  3. (c)

    For any \(\vec {f} = \left( f^1, \ldots , f^d \right) ^T \in L^\infty _c\left( \mathbb {R}^n \right) \), the function \(\vec {u} = \left( u^1, \ldots , u^d \right) ^T\) given by

    $$\begin{aligned} u^k(y) = \int _{\mathbb {R}^n} \Gamma ^V_{jk}\left( x,y \right) f^j(x) \,dx \end{aligned}$$

    belongs to \(W^{1,2}_{V,0}(\mathbb {R}^n)\) and satisfies \(\mathcal {L}_V^* \vec {u} = \vec {f}\) in the sense that for every \(\phi = \left( \phi ^1, \ldots , \phi ^d \right) ^T \in C^\infty _c\left( \mathbb {R}^n \right) ^{d}\),

    $$\begin{aligned}&\int _{\mathbb {R}^n} A_{ij}^{\alpha \beta } D_\alpha u^i D_\beta \phi ^j + V_{ij} u^i\phi ^j = \int _{\mathbb {R}^n} f^j \phi ^j. \end{aligned}$$

We say that the matrix function \(\Gamma ^V\left( x,y \right) \) is the continuous fundamental matrix if it satisfies the conditions above and is also continuous.

We restate the following theorem from [11]. The stated assumptions and properties will be described below.

Theorem 8

(Theorem 3.6 in [11]) Assume that (A1)–(A7) as well as properties (IB) and (H) hold. Then there exists a unique continuous fundamental matrix, \(\Gamma ^{V}(x,y)=\left( \Gamma ^{V}_{ij}(x,y) \right) _{i,j=1}^d, \,\left\{ x\ne y\right\} \), that satisfies Definition 11. We have \(\Gamma ^V(x,y)= \Gamma ^{V*}(y,x)^T\), where \(\Gamma ^{V*}\) is the unique continuous fundamental matrix associated to \(\mathcal {L}_V^*\) as defined in (31). Furthermore, \(\Gamma ^V(x,y)\) satisfies the following estimates:

$$\begin{aligned}&\left\| \Gamma ^V(\cdot , y)\right\| _{Y^{1,2}\left( \mathbb {R}^n\setminus B(y,r) \right) } + \left\| \Gamma ^V(x, \cdot )\right\| _{Y^{1,2}\left( \mathbb {R}^n\setminus B(x,r) \right) } \le C r^{1-\frac{n}{2}} \end{aligned}$$
(32)
$$\begin{aligned}&\left\| \Gamma ^V(\cdot , y)\right\| _{L^q\left( B(y,r) \right) } + \left\| \Gamma ^V(x, \cdot )\right\| _{L^q\left( B(x,r) \right) } \le C_q r^{2-n+\frac{n}{q}}, \quad \forall q\in \left[ 1, \tfrac{n}{n-2}\right) \end{aligned}$$
(33)
$$\begin{aligned}&\left\| D \Gamma ^V\left( \cdot , y \right) \right\| _{L^{q}\left( B(y,r) \right) } + \left\| D \Gamma ^V\left( x, \cdot \right) \right\| _{L^{q}\left( B(x,r) \right) } \le C_q r^{1-n +\frac{n}{q}}, \quad \forall q \in \left[ 1, \tfrac{n}{n-1}\right) \end{aligned}$$
(34)
$$\begin{aligned}&\left| \left\{ x \in \mathbb {R}^n : \left| \Gamma ^V\left( x,y \right) \right|> \tau \right\} \right| + \left| \left\{ y \in \mathbb {R}^n : \left| \Gamma ^V\left( x,y \right) \right| > \tau \right\} \right| \le C \tau ^{- \frac{n}{n-2}} \end{aligned}$$
(35)
$$\begin{aligned}&\left| \left\{ x \in \mathbb {R}^n : \left| D_x \Gamma ^V\left( x,y \right) \right|> \tau \right\} \right| + \left| \left\{ y \in \mathbb {R}^n : \left| D_y \Gamma ^V\left( x,y \right) \right| > \tau \right\} \right| \le C \tau ^{- \frac{n}{n-1}} \end{aligned}$$
(36)
$$\begin{aligned}&\left| \Gamma ^V\left( x,y \right) \right| \le C \left| x - y\right| ^{2 - n}, \quad \forall x \ne y , \end{aligned}$$
(37)

where (32)–(34) hold for all \(r > 0\), and (35)–(36) hold for all \(\tau > 0\). Each constant depends on \(d, n, \Lambda , \lambda \), and \(C_\mathrm{{IB}}\), and each \(C_q\) depends additionally on q. Moreover, for any \(0<R\le R_0<\left| x - y\right| \),

$$\begin{aligned}&\left| \Gamma ^V\left( x,y \right) - \Gamma ^V\left( z,y \right) \right| \le C_{R_0} C \left( \frac{\left| x-z\right| }{R} \right) ^\eta R^{2-n}\\&\left| \Gamma ^V\left( x,y \right) - \Gamma ^V\left( x,z \right) \right| \le C_{R_0} C \left( \frac{\left| y-z\right| }{R} \right) ^\eta R^{2-n} \end{aligned}$$

whenever \(\left| x-z\right| <\frac{R}{2}\) and \(\left| y-z\right| <\frac{R}{2}\), respectively, where \(C_{R_0}\) and \(\eta =\eta (R_0)\) are the same as in assumption (H).

To justify the existence of \(\Gamma ^V\) satisfying Definition 11 and the results in Theorem 8, it suffices to show that the assumptions (A1)–(A7) from [11] hold for our Hilbert space \(W^{1,2}_{V, 0}\left( \mathbb {R}^n \right) \) (and the associated inner product spaces \(W^{1,2}_V(\Omega )\) for \(\Omega \subseteq \mathbb {R}^n\)), the operators \(\mathcal {L}_V\) and \(\mathcal {L}_V^*\), and the bilinear forms \(\mathcal {B}_V\) and \(\mathcal {B}_V^*\) introduced in the previous section.

In addition to properties (A1)–(A7), we must also assume that we are in a setting where De Giorgi–Nash–Moser theory holds. Therefore, we assume the following interior boundedness (IB) and Hölder continuity (H) conditions:

  1. (IB)

We say that (IB) holds if whenever \(\vec {u} \in W^{1,2}_V(B\left( 0, 4R \right) )\) is a weak solution to \(\mathcal {L} \vec {u} = \vec {f}\) or \(\mathcal {L}^* \vec {u} = \vec {f}\) in \(B(0, 2R)\), for some \(R>0\), where \(\vec {f} \in L^\ell \left( B(0,2R) \right) \) for some \(\ell \in \left( \frac{n}{2}, \infty \right] \), then for any \(q \ge 1\),

    $$\begin{aligned} \left\| \vec {u}\right\| _{L^\infty \left( B(0,R) \right) } \le C_\mathrm{{IB}} \left[ R^{- \frac{n}{q}}\left\| \vec {u}\right\| _{L^q\left( B(0,2R) \right) } + R^{2 - \frac{n}{\ell }} \Vert \vec {f}\Vert _{L^\ell \left( B(0,2R) \right) }\right] , \end{aligned}$$
    (38)

    where the constant \(C_\mathrm{{IB}}>0\) is independent of \(R>0\).

  2. (H)

We say that (H) holds if whenever \(\vec {u} \in W^{1,2}_V(B(0, 2R_0))\) is a weak solution to \(\mathcal {L} \vec {u} = \vec {0}\) or \(\mathcal {L}^* \vec {u} = \vec {0}\) in \(B(0,R_0)\) for some \(R_0>0\), then there exist \(\eta \in \left( 0, 1 \right) \) and \(C_{R_0}>0\), both depending on \(R_0\), so that whenever \(0 < R \le R_0\),

    (39)

For systems of equations, the assumptions (IB) and (H) may actually fail. However, for the class of weakly coupled Schrödinger systems that are introduced in the next section, we prove that these assumptions are valid. To establish (IB) in that setting, it suffices to consider \(V \in L^{\frac{n}{2}}_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \cap \mathcal{N}\mathcal{D}\), while our validation of (H) requires the stronger assumption that \(V \in L^{\frac{n}{2}+}_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \cap \mathcal{N}\mathcal{D}\). For the simpler, scalar setting, we refer the reader to [11, Section 5] for a discussion of validity; for many of the scalar settings discussed there, one must assume, as is standard, that \(V \in L^{\frac{n}{2} +}_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \).

Now we proceed to recall and check that (A1)–(A7) from [11] hold for our setting. Since we are working with fundamental matrices, we only need the following conditions to hold when \(\Omega = \mathbb {R}^n\). However, we’ll show that the assumptions actually hold in the full generality from [11].

Recall that \(V \in L^{\frac{n}{2}}_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \cap \mathcal{N}\mathcal{D}\) and for any \(\Omega \subseteq \mathbb {R}^n\) open and connected, \(W^{1,2}_{V,0}(\Omega ) = \overline{C^\infty _c(\Omega )}^{\left\| \cdot \right\| _{W^{1,2}_V}}\) and \(W^{1,2}_{V}(\Omega )\) is defined via restriction as \(W^{1,2}_{V}(\Omega ) = W^{1,2}_{V,0}(\mathbb {R}^n) |_\Omega \). Moreover, by Proposition 7 and Corollary 3, these spaces consist of weakly-differentiable, vector-valued \(L^1_{{{\,\textrm{loc}\,}}}\) functions.

  1. (A1)

    Restriction property: For any \(U \subseteq \Omega \), if \(\vec {u} \in W^{1,2}_V\left( \Omega \right) \), then \(\vec {u}\vert _U \in W^{1,2}_V(U)\) with \(\left\| \vec {u}\vert _U\right\| _{W^{1,2}_V(U)} \le \left\| \vec {u}\right\| _{W^{1,2}_V\left( \Omega \right) }\).

    The restriction property holds by definition. That is, for any \(U \subseteq \Omega \subseteq \mathbb {R}^n\), if \(\vec {u} \in W_V^{1,2}(\Omega )\), then there exists \(\vec {v} \in W^{1,2}_V(\mathbb {R}^n)\) for which \(\vec {v} |_\Omega = \vec {u}\). Since \(\vec {v} |_U = \vec {u} |_U\), then \(\vec {u} |_U \in W^{1,2}_V(U)\) and \(\left\| \vec {u}|_U\right\| _{W^{1,2}_V(U)} \le \left\| \vec {u}\right\| _{W^{1,2}_V(\Omega )}\).

  2. (A2)

    Containment of smooth compactly supported functions: \(C_c^\infty \left( \Omega \right) \) functions belong to \(W^{1,2}_{V}\left( \Omega \right) \). The space \(W^{1,2}_{V,0}\left( \Omega \right) \), defined as the closure of \(C_c^\infty \left( \Omega \right) \) with respect to the \(W^{1,2}_{V}\left( \Omega \right) \)-norm, is a Hilbert space with respect to some \(\Vert \cdot \Vert _{W^{1,2}_{V,0}(\Omega )}\) such that \( \Vert \vec {u}\Vert _{W^{1,2}_{V,0}(\Omega )} \simeq \Vert \vec {u}\Vert _{W^{1,2}_{V}(\Omega )}\) for all \(\vec {u} \in W^{1,2}_{V,0}(\Omega )\).

    To establish that \(C^\infty _c(\Omega ) \subseteq W^{1,2}_V(\Omega )\), we’ll show that \(C^\infty _c(\Omega ) \subseteq W^{1,2}_{V, 0}(\Omega )\) and \(W^{1,2}_{V,0}(\Omega ) \subseteq W^{1,2}_V(\Omega )\). The first containment follows from the definition: \(W_{V,0}^{1,2}(\Omega )\) is the closure of \(C_c^\infty (\Omega )\) with respect to the \(W_V^{1,2}(\Omega )\)-norm and is a Hilbert space with respect to that same norm. To establish the second containment, let \(\vec {u} \in W^{1,2}_{V, 0}\left( \Omega \right) \). Then there exists \(\left\{ \vec {u}_k\right\} _{k = 1}^\infty \subseteq C^\infty _c\left( \Omega \right) \) such that \(\vec {u}_k \rightarrow \vec {u}\) in \(W^{1,2}_V(\Omega )\). Since \(\left\{ \vec {u}_k\right\} _{k = 1}^\infty \subseteq C^\infty _c\left( \mathbb {R}^n \right) \) and \(\vec {u}_k \rightarrow \vec {u}\) in \(W^{1,2}_V(\mathbb {R}^n)\), it follows that \(\vec {u} \in W^{1,2}_V(\mathbb {R}^n)\). Since \(\vec {u} |_\Omega = \vec {u}\), we conclude that \(\vec {u} \in W^{1,2}_V(\Omega )\), as required.

  3. (A3)

    Embedding in \(Y^{1,2}_0\left( \mathbb {R}^n \right) \): The space \(W^{1,2}_{V,0}\left( \Omega \right) \) is continuously embedded into \(Y^{1,2}_0\left( \Omega \right) \); that is, there exists \(c_0>0\) such that for any \(\vec {u} \in W^{1,2}_{V,0}\left( \Omega \right) \), \(\left\| \vec {u}\right\| _{Y^{1,2}_0{\left( \Omega \right) }} \le c_0 \left\| \vec {u}\right\| _{W^{1,2}_{V}\left( \Omega \right) }\).

    Proposition 7 shows that \(W^{1,2}_{V,0}(\Omega )\) is contained in \(Y^{1,2}_0(\Omega )\). In fact, for any \(\vec {u} \in W^{1,2}_{V,0}(\Omega )\), we have

    $$\begin{aligned} \left\| \vec {u}\right\| _{Y^{1,2}_0{(\Omega )}} \lesssim \left\| \vec {u}\right\| _{W^{1,2}_{V}(\Omega )}, \end{aligned}$$
    (40)

    so \(W^{1,2}_{V,0}(\Omega )\) is continuously embedded into \(Y^{1,2}_0(\Omega )\). Moreover, a Sobolev embedding implies that for any \(\vec {u} \in W^{1,2}_{V,0}(\Omega )\),

    $$\begin{aligned} \left\| \vec {u}\right\| _{L^{2^*}{(\Omega )}} \le c_n \left\| D\vec {u}\right\| _{L^{2}(\Omega )}. \end{aligned}$$
  4. (A4)

    Cutoff properties: For any \(U\subset \mathbb {R}^n\) open and connected,

    $$\begin{aligned}{} & {} \vec {u}\in W^{1,2}_V(\Omega )\, and\, \xi \in C_c^\infty (U) \quad \Longrightarrow \quad \vec {u} \xi \in W^{1,2}_V(\Omega \cap U),\nonumber \\{} & {} \vec {u}\in W^{1,2}_V(\Omega )\, and \,\xi \in C_c^\infty (\Omega \cap U) \quad \Longrightarrow \quad \vec {u} \xi \in W^{1,2}_{V,0}(\Omega \cap U), \nonumber \\{} & {} \vec {u}\in W^{1,2}_{V,0}(\Omega )\, and\, \xi \in C_c^\infty (\mathbb {R}^n) \quad \Longrightarrow \quad \vec {u} \xi \in W^{1,2}_{V,0}(\Omega ). \end{aligned}$$
    (41)

    In the first two cases, we also have \(\Vert \vec {u} \xi \Vert _{W^{1,2}_V(\Omega \cap U)}\le C_\xi \, \Vert \vec {u}\Vert _{W^{1,2}_V(\Omega )}\).

    To establish (41), first let \(\vec {u} \in W_V^{1,2}(\Omega )\). Then there exists \(\vec {v} \in W_{V,0}^{1,2}\left( \mathbb {R}^n \right) \) such that \(\vec {v}\vert _\Omega = \vec {u}\). Moreover, there exists \(\left\{ \vec {v}_k\right\} _{k=1}^\infty \subseteq C^\infty _c\left( \mathbb {R}^n \right) \) such that \(\left\| \vec {v}_k - \vec {v}\right\| _{W^{1,2}_{V}\left( \mathbb {R}^n \right) } \rightarrow 0\). If \(\xi \in C^\infty _c\left( U \right) \), then \(\left\{ \vec {v}_k \xi \right\} \subseteq C^\infty _c\left( \mathbb {R}^n \right) \). We first show that

    $$\begin{aligned} \lim _{k \rightarrow \infty }\left\| \vec {v}_k \xi - \vec {v} \xi \right\| _{W^{1,2}_{V}\left( \mathbb {R}^n \right) } = 0. \end{aligned}$$

    Observe that

    $$\begin{aligned}&\left\| \vec {v}_k \xi - \vec {v} \xi \right\| _{W^{1,2}_{V}\left( \mathbb {R}^n \right) }^2 = \left\| V^{1/2}\left( \vec {v}_k \xi - \vec {v} \xi \right) \right\| _{L^{2}\left( \mathbb {R}^n \right) }^2 + \left\| D\left( \vec {v}_k \xi - \vec {v} \xi \right) \right\| _{L^{2}\left( \mathbb {R}^n \right) }^2 \\&\quad \lesssim \left\| V^{1/2}\left( \vec {v}_k - \vec {v} \right) \xi \right\| _{L^{2}\left( \mathbb {R}^n \right) }^2 + \left\| D\left( \vec {v}_k - \vec {v} \right) \xi \right\| _{L^{2}\left( \mathbb {R}^n \right) }^2 + \left\| \left( \vec {v}_k - \vec {v} \right) D \xi \right\| _{L^{2}\left( \mathbb {R}^n \right) }^2 \\&\quad \lesssim \left\| \vec {v}_k - \vec {v}\right\| _{W^{1,2}_{V}\left( \mathbb {R}^n \right) }^2 + \left\| \vec {v}_k - \vec {v}\right\| _{L^{2}\left( {{{\,\textrm{supp}\,}}\xi } \right) }^2, \end{aligned}$$

    where the constants depend on \(\xi \). An application of Hölder’s inequality shows that

    $$\begin{aligned} \left\| \vec {v}_k - \vec {v}\right\| _{L^{2}\left( {{{\,\textrm{supp}\,}}\xi } \right) }&\le \left\| \vec {v}_k - \vec {v}\right\| _{L^{2^*}\left( \mathbb {R}^n \right) } \left| {{{\,\textrm{supp}\,}}\xi }\right| ^{\frac{2^*-2}{2^*2}} \lesssim \left\| \vec {v}_k - \vec {v}\right\| _{Y^{1,2}_0\left( \mathbb {R}^n \right) }. \end{aligned}$$

    Combining the previous two inequalities, then applying (40), we see that

    $$\begin{aligned} \left\| \vec {v}_k \xi - \vec {v} \xi \right\| _{W^{1,2}_{V}\left( \mathbb {R}^n \right) }^2&\lesssim \left\| \vec {v}_k - \vec {v}\right\| _{W^{1,2}_{V}\left( \mathbb {R}^n \right) }^2 \rightarrow 0. \end{aligned}$$

    In particular, \(\vec {v} \xi \in W^{1,2}_{V,0}\left( \mathbb {R}^n \right) \). Since \(\xi \) is compactly supported on U, then \(\left( \vec {v} \xi \right) |_{\Omega \cap U} = \vec {v}\vert _\Omega \xi = \vec {u} \xi \) and we conclude that \(\vec {u} \xi \in W^{1,2}_V\left( \Omega \cap U \right) \).

    Now assume that \(\xi \in C^\infty _c\left( \Omega \cap U \right) \). For each \(k \in \mathbb {N}\), define \(\vec {u}_k = \vec {v}_k\vert _{\Omega } \in C^\infty (\Omega )\). Then \(\left\{ \vec {u}_k \xi \right\} \subseteq C^\infty _c\left( \Omega \cap U \right) \). Since \(\left\| \vec {v}_k \xi - \vec {v} \xi \right\| _{W^{1,2}_{V}\left( \mathbb {R}^n \right) } \rightarrow 0\), as shown above, then by the restriction property (A1), \(\left\| \vec {u}_k \xi - \vec {u} \xi \right\| _{W^{1,2}_{V}\left( \Omega \cap U \right) } \rightarrow 0\) as well. It follows that \(\vec {u} \xi \in W^{1,2}_{V,0}\left( \Omega \cap U \right) \), as required.

    The third line of (41) follows immediately from the arguments above.
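The constant \(C_\xi \) in the estimate accompanying (41) can be traced through the computations above; a sketch of the bound is

$$\begin{aligned} \left\| \vec {u} \xi \right\| _{W^{1,2}_V}^2&\lesssim \left\| \xi \right\| _{L^\infty }^2 \left( \left\| V^{1/2} \vec {u}\right\| _{L^2}^2 + \left\| D \vec {u}\right\| _{L^2}^2 \right) + \left\| D \xi \right\| _{L^\infty }^2 \left\| \vec {u}\right\| _{L^2\left( {{\,\textrm{supp}\,}}\xi \right) }^2 \le C_\xi ^2 \left\| \vec {u}\right\| _{W^{1,2}_V}^2, \end{aligned}$$

where the last step bounds \(\left\| \vec {u}\right\| _{L^2({{\,\textrm{supp}\,}}\xi )}\) by \(\left\| \vec {u}\right\| _{L^{2^*}} \left| {{\,\textrm{supp}\,}}\xi \right| ^{\frac{2^*-2}{2^* 2}}\) via Hölder's inequality and then applies (40), exactly as in the argument above.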

We require that \(\mathcal {B}\) and \(\mathcal {B}^*\) can be extended to bounded and accretive bilinear forms on \(W^{1,2}_{V,0}(\Omega ) \times W^{1,2}_{V,0}(\Omega )\) so that the Lax–Milgram theorem may be applied in \(W^{1,2}_{V,0}(\Omega )\). The next two assumptions capture this requirement.

  1. (A5)

    Boundedness hypotheses: There exists a constant \(\Gamma > 0\) so that \(\mathcal {B}\left[ \vec {u}, \vec {v}\right] \le \Gamma \left\| \vec {u}\right\| _{W^{1,2}_{V}} \left\| \vec {v}\right\| _{W^{1,2}_{V}}\) for all \(\vec {u}, \vec {v} \in W^{1,2}_{V,0}\left( \Omega \right) \).

    For any \(\vec {u}, \vec {v} \in W^{1,2}_{V,0}(\Omega )\), we may set \(\Gamma = \Lambda + 1\) since it follows from (25) and the Cauchy–Schwarz inequality that

    $$\begin{aligned} \mathcal {B}\left[ \vec {u}, \vec {v}\right] \le \Lambda \int \left| D\vec {u}\right| \left| D\vec {v}\right| + \int \left| V^{1/2} \vec {u}\right| \left| V^{1/2} \vec {v}\right| \le \left( \Lambda + 1 \right) \left\| \vec {u}\right\| _{W^{1,2}_{V}} \left\| \vec {v}\right\| _{W^{1,2}_{V}}. \end{aligned}$$
  2. (A6)

    Coercivity hypotheses: There exists a \(\gamma > 0\) so that \(\gamma \left\| \vec {u}\right\| _{W^{1,2}_V}^2 \le \mathcal {B}\left[ \vec {u}, \vec {u}\right] \) for any \(\vec {u} \in W^{1,2}_{V,0}\left( \Omega \right) \).

    For any \(\vec {u} \in W^{1,2}_{V,0}(\Omega )\), it follows from (24) that \(\min \left\{ \lambda , 1\right\} \left\| \vec {u}\right\| _{W^{1,2}_V}^2 \le \lambda \left\| D \vec {u}\right\| _{L^2}^2 + \left\| V^{1/2} \vec {u}\right\| _{L^2}^2 \le \mathcal {B}\left[ \vec {u}, \vec {u}\right] \), so we may take \(\gamma = \min \left\{ \lambda , 1\right\} \).

Using (A1)–(A6), a standard proof, which we omit, leads to the final assumption from [11]:

  1. (A7)

    The Caccioppoli inequality: If \(\vec {u} \in W^{1,2}_V\left( \Omega \right) \) is a weak solution to \(\mathcal {L} \vec {u} = \vec {0}\) in \(\Omega \) and \(\zeta \in C^\infty (\mathbb {R}^n)\) is such that \(D\zeta \in C_c^\infty (\Omega )\), \(\zeta \vec {u} \in W^{1,2}_{V,0}\left( \Omega \right) \), and \(\partial ^i\zeta \,\vec {u}\in L^2(\Omega )\), \(i=1,\ldots , n\), then there exists \(C = C\left( n, \lambda , \Lambda \right) \) so that

    $$\begin{aligned} \int \left| D \vec {u}\right| ^2 \zeta ^2 \le C \int \left| \vec {u}\right| ^2 \left| D \zeta \right| ^2. \end{aligned}$$

    Note that C is independent of the set on which \(\zeta \) and \(D\zeta \) are supported.
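Although we omit the full proof, we sketch the classical computation for the reader: testing the equation against \(\vec {v} = \zeta ^2 \vec {u}\) and applying the product rule yields

$$\begin{aligned} \int \zeta ^2 \left\langle A^{\alpha \beta } D_\beta \vec {u}, D_\alpha \vec {u} \right\rangle + \int \zeta ^2 \left\langle V \vec {u}, \vec {u} \right\rangle = - 2 \int \zeta \left\langle A^{\alpha \beta } D_\beta \vec {u}, \vec {u}\, D_\alpha \zeta \right\rangle . \end{aligned}$$

Since \(\left\langle V \vec {u}, \vec {u} \right\rangle \ge 0\), ellipticity (24), boundedness (25), and Young's inequality give

$$\begin{aligned} \lambda \int \zeta ^2 \left| D\vec {u}\right| ^2 \le 2 \Lambda \int \zeta \left| D\vec {u}\right| \left| \vec {u}\right| \left| D\zeta \right| \le \frac{\lambda }{2} \int \zeta ^2 \left| D\vec {u}\right| ^2 + \frac{2\Lambda ^2}{\lambda } \int \left| \vec {u}\right| ^2 \left| D\zeta \right| ^2, \end{aligned}$$

and absorbing the first term on the right into the left-hand side gives the stated inequality with \(C = \frac{4\Lambda ^2}{\lambda ^2}\).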

In conclusion, the fundamental solution for the operator \(\mathcal {L}_V\) defined in (27), denoted by \(\Gamma ^V\), exists and satisfies Definition 11 as well as the properties listed in Theorem 8 whenever we assume that assumptions (IB) and (H) hold for \(\mathcal {L}_V\).

In fact, given that assumptions (A1) through (A7) hold for general \(\Omega \subseteq \mathbb {R}^n\), not just \(\Omega = \mathbb {R}^n\), the framework here also allows us to discuss Green’s matrices as defined in [11, Definition 3.9]. That is, whenever we assume that assumptions (IB) and (H) hold for \(\mathcal {L}_V\), the results of [11, Theorem 3.10] hold for the Green’s matrix. As we show in the next section, there are many examples of vector-valued Schrödinger operators that satisfy assumptions (IB) and (H). However, it is not clear to us whether any vector-valued Schrödinger operators satisfy the boundary boundedness assumption (BB) introduced in [11, Section 3.4]. As such, determining whether the global estimates for Green’s matrices described in [11, Corollary 3.12] hold for the operators \(\mathcal {L}_V\) is an interesting question, but one that is beyond the scope of the current investigation.

7 Elliptic theory for weakly coupled systems

In this section, we introduce a class of elliptic systems called weakly coupled Schrödinger operators and show that they satisfy the elliptic theory assumptions from Sect. 6. In particular, these are elliptic systems for which the fundamental matrices that were described in the previous section may be directly proven to exist without having to assume that (IB) and (H) hold. That is, for the class of weakly coupled Schrödinger operators that we introduce in the next paragraph, we prove here that local boundedness and Hölder continuity actually hold.

We now define the class of weakly coupled Schrödinger operators. As above, let the leading coefficients be given by \(A^{\alpha \beta } = A^{\alpha \beta }(x)\), where for each \(\alpha , \beta \in \left\{ 1, \dots , n\right\} \), \(A^{\alpha \beta }\) is a \(d \times d\) matrix with bounded measurable coefficients. Here we impose the condition that \(A^{\alpha \beta }(x) = a^{\alpha \beta }(x) I_d\), where each \(a^{\alpha \beta }\) is scalar-valued and \(I_d\) is the \(d \times d\) identity matrix. That is, \(A^{\alpha \beta }_{ij}(x) = a^{\alpha \beta }(x) \delta _{ij}\). As usual, we assume that there exist constants \(0< \lambda , \Lambda < \infty \) so that \(A^{\alpha \beta }\) satisfies the ellipticity condition described by (24) and the boundedness condition (25). For the zeroth-order term, let V satisfy (26). That is, V is a nondegenerate, symmetric, positive semidefinite \(d \times d\) matrix function in \(L^{\frac{n}{2}}_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \). The equations that we study are formally given by (27). With our additional conditions on the leading coefficients, the operator takes the component-wise form

$$\begin{aligned} \left( \mathcal {L}_V\vec {u} \right) ^i = -D_\alpha \left( A_{ij}^{\alpha \beta } D_\beta u^j \right) + V_{ij} u^j = -D_\alpha \left( a^{\alpha \beta } D_\beta u^i \right) + V_{ij} u^j \end{aligned}$$
(42)

for each \(i = 1, \ldots , d\).

We begin with a lemma that will be applied in the Moser boundedness arguments below.

Lemma 16

(Local boundedness lemma) If \(\vec {u} \in W^{1,2}_V(B_2)\), then for any \(k > 0\), it holds that \(\vec {w} = \vec {w}\left( k \right) := \frac{\vec {u}}{\sqrt{\left| \vec {u}\right| ^2 + k^2}} \in W^{1,2}_V(B_2)\) as well.

Proof

Since \(\vec {u} \in W^{1,2}_V(B_2)\), \(\vec {u}\) is the restriction of an element of \(W^{1,2}_V(\mathbb {R}^n)\), which we continue to denote by \(\vec {u}\), so there exists \(\left\{ \vec {u}_j\right\} _{j=1}^\infty \subseteq C^\infty _c(\mathbb {R}^n)\) such that \(\vec {u}_j \rightarrow \vec {u}\) in \(W^{1,2}_V(\mathbb {R}^n)\). For each \(j \in \mathbb {N}\), define \(\vec {w}_j:= \vec {u}_j \left( \left| \vec {u}_j\right| ^2 + k^2 \right) ^{-\frac{1}{2}} \in C^\infty _c(\mathbb {R}^n)\). We will show that \(\vec {w}_j \rightarrow \vec {w}\) in \(W^{1,2}_V(B_2)\).

To simplify notation, let \(v_j = \left( \left| \vec {u}_j\right| ^2 + k^2 \right) ^{\frac{1}{2}}\) and \(v = \left( \left| \vec {u}\right| ^2 + k^2 \right) ^{\frac{1}{2}}\). Observe that

$$\begin{aligned} \begin{aligned} \vec {w}_j - \vec {w}&= \frac{\vec {u}_j}{ v_j} - \frac{\vec {u}}{v} = v_j^{-1} \left[ \vec {u}_j - \vec {u} + \vec {w} \left\langle \vec {u} - \vec {u}_j, \frac{\vec {u} + \vec {u}_j}{v + v_j} \right\rangle \right] . \end{aligned} \end{aligned}$$
(43)

Since \( \left| \vec {w}\right| , \left| \frac{\vec {u} + \vec {u}_j}{v + v_j}\right| \le 1\) and \(v_j \ge k\) for all \(j \in \mathbb {N}\), then \( \left| \vec {w}_j - \vec {w}\right| \le 2 k^{-1} \left| \vec {u}_j - \vec {u}\right| \) and it follows from a Sobolev embedding that

$$\begin{aligned} \left( \int _{B_2} \left| \vec {w}_j - \vec {w}\right| ^{2^*} \right) ^{\frac{n-2}{n}}&\lesssim k^{-2} \left( \int _{\mathbb {R}^n} \left| \vec {u}_j - \vec {u}\right| ^{2^*} \right) ^{\frac{n-2}{n}} \le k^{-2} c_n \int _{\mathbb {R}^n} \left| D\vec {u}_j - D\vec {u}\right| ^2. \end{aligned}$$

Since \(\vec {u}_j \rightarrow \vec {u}\) in \(W^{1,2}_V(\mathbb {R}^n)\), then \(D\vec {u}_j \rightarrow D\vec {u}\) in \(L^2(\mathbb {R}^n)\), so we deduce that both \(\vec {w}_j \rightarrow \vec {w}\) in \(L^{2^*}(B_2)\) and \(\vec {u}_j \rightarrow \vec {u}\) in \(L^{2^*}(B_2)\). Therefore, there exist subsequences \(\left\{ \vec {w}_{j_i}\right\} _{i = 1}^\infty \) and \(\left\{ \vec {u}_{j_i}\right\} _{i = 1}^\infty \) so that \(\vec {w}_{j_i} \rightarrow \vec {w}\) a.e. and \(\vec {u}_{j_i} \rightarrow \vec {u}\) a.e. After relabeling, we may assume that \(\vec {w}_{j} \rightarrow \vec {w}\) a.e. and \(\vec {u}_{j} \rightarrow \vec {u}\) a.e.

From (43) and that \(k \le v_j\), we have

$$\begin{aligned} k^2 \left| V^{\frac{1}{2}}\left( \vec {w}_j - \vec {w} \right) \right| ^2&\le \left\langle V\left( \vec {u}_j - \vec {u} + \vec {w} \left\langle \vec {u} - \vec {u}_j, \frac{\vec {u} + \vec {u}_j}{v + v_j} \right\rangle \right) , \vec {u}_j - \vec {u} + \vec {w} \left\langle \vec {u} - \vec {u}_j, \frac{\vec {u} + \vec {u}_j}{v + v_j} \right\rangle \right\rangle \\&\le 2 \left| V^{\frac{1}{2}}\left( \vec {u}_j - \vec {u} \right) \right| ^2 + 2 \left| V\right| \left| \vec {u} - \vec {u}_j\right| ^2 \left| \vec {w}\right| ^2 \left| \frac{\vec {u} + \vec {u}_j}{v + v_j}\right| ^2 \\&\le 2 \left| V^{\frac{1}{2}}\left( \vec {u}_j - \vec {u} \right) \right| ^2 + 2 \left| V\right| \left| \vec {u}_j - \vec {u}\right| ^2 , \end{aligned}$$

where we have applied Cauchy–Schwarz and that \( \left| \vec {w}\right| , \left| \frac{\vec {u} + \vec {u}_j}{v + v_j}\right| \le 1\) for all \(j \in \mathbb {N}\). It follows from Hölder and Sobolev inequalities that

$$\begin{aligned}&\int _{B_2} \left\langle V\left( \vec {w}_j - \vec {w} \right) , \vec {w}_j - \vec {w} \right\rangle \le 2 k^{-2} \int _{B_2} \left| V^{\frac{1}{2}}\left( \vec {u}_j - \vec {u} \right) \right| ^2 + 2 k^{-2} \int _{B_2} \left| V\right| \left| \vec {u}_j - \vec {u}\right| ^2 \\&\quad \le 2 k^{-2} \int _{B_2} \left| V^{\frac{1}{2}}\left( \vec {u}_j - \vec {u} \right) \right| ^2 + 2 k^{-2} \left( \int _{B_2} \left| V\right| ^{\frac{n}{2}} \right) ^{\frac{2}{n}} \left( \int _{\mathbb {R}^n} \left| \vec {u}_j - \vec {u}\right| ^{2^*} \right) ^{\frac{n-2}{n}} \\&\quad \le 2 k^{-2} \int _{\mathbb {R}^n} \left| V^{\frac{1}{2}}\left( \vec {u}_j - \vec {u} \right) \right| ^2 + 2 k^{-2} c_n \left\| V\right\| _{L^{\frac{n}{2}}(B_2)} \int _{\mathbb {R}^n} \left| D\vec {u}_j - D\vec {u}\right| ^2. \end{aligned}$$

Since \(\vec {u}_j \rightarrow \vec {u}\) in \(W^{1,2}_V(\mathbb {R}^n)\), this shows that \(V^{\frac{1}{2}}\vec {w}_j \rightarrow V^{\frac{1}{2}}\vec {w}\) in \(L^2(B_2)\), or \(\vec {w}_j \rightarrow \vec {w}\) in \(L^2_V\left( B_2 \right) \).

Now we consider the gradient terms. Since

$$\begin{aligned} D\vec {w}_j&= \frac{D\vec {u}_j}{ v_j} - \frac{\vec {u}_j \left\langle D \vec {u}_j, \vec {u}_j \right\rangle }{ v_j^3} = v_j^{-1}\left[ D\vec {u}_j - \vec {w}_j \left\langle D \vec {u}_j, \vec {w}_j \right\rangle \right] \end{aligned}$$

and analogously for \(D\vec {w}\), then \(D\vec {w}_j - D\vec {w} = A_j + B_j\), where

$$\begin{aligned} A_j&= v_j^{-1}\left[ D\vec {u}_j- D\vec {u} - \vec {w}_j \left\langle D \vec {u}_j - D \vec {u}, \vec {w}_j \right\rangle \right] \\ B_j&= \left( \vec {w}_j \left\langle D \vec {u}, \vec {w}_j \right\rangle - D\vec {u} \right) \left\langle \frac{\vec {u}_j - \vec {u}}{v_j v}, \frac{\vec {u}_j + \vec {u}}{v_j + v} \right\rangle \\&\quad + v^{-1} \left[ \left( \vec {w} - \vec {w}_j \right) \left\langle D \vec {u}, \vec {w} \right\rangle + \vec {w}_j \left\langle D \vec {u}, \vec {w} - \vec {w}_j \right\rangle \right] . \end{aligned}$$

This shows that

$$\begin{aligned} \lim _{j \rightarrow \infty } \int _{B_2} \left| D\vec {w}_j - D\vec {w} \right| ^2&\lesssim \lim _{j \rightarrow \infty } \int _{B_2} \left| A_j\right| ^2 + \lim _{j \rightarrow \infty } \int _{B_2} \left| B_j\right| ^2 . \end{aligned}$$

Since \(v_j, v \ge k\), \(\left| \vec {w}_j\right| , \left| \vec {w}\right| \le 1\) for all \(j \in \mathbb {N}\), and \(D\vec {u}_j \rightarrow D\vec {u}\) in \(L^2(B_2)\), then

$$\begin{aligned} \lim _{j \rightarrow \infty }\int _{B_2} \left| A_j\right| ^2&\lesssim k^{-2} \lim _{j \rightarrow \infty } \int _{B_2} \left| D\vec {u}_j- D\vec {u}\right| ^2 = 0. \end{aligned}$$

On the other hand, since \(\left| B_j\right| \le 8 k^{-1} \left| D\vec {u}\right| \) and \(\left| D\vec {u}\right| ^2 \in L^1(\mathbb {R}^n)\), then the Lebesgue Dominated Convergence Theorem shows that

$$\begin{aligned} \lim _{j \rightarrow \infty }\int _{B_2} \left| B_{j}\right| ^2&= \int _{B_2} \lim _{j \rightarrow \infty } \left| B_{j}\right| ^2 . \end{aligned}$$

Because \(v_j, v \ge k\), \(\left| \vec {w}_j\right| , \left| \vec {w}\right| \le 1\) for all \(j \in \mathbb {N}\), \(\left| D\vec {u}\right| < \infty \) a.e., \(\vec {w}_{j} \rightarrow \vec {w}\) a.e., and \(\vec {u}_{j} \rightarrow \vec {u}\) a.e., then \( \lim _{j \rightarrow \infty } B_{j} = 0\) a.e. and we deduce that

$$\begin{aligned} \lim _{j \rightarrow \infty }\int _{B_2} \left| B_{j}\right| ^2&= 0. \end{aligned}$$

Thus, we conclude that \(D\vec {w}_j \rightarrow D\vec {w}\) in \(L^2(B_2)\). In combination with the fact that \(\vec {w}_j \rightarrow \vec {w}\) in \(L^2_V(B_2)\), we have shown that \(\vec {w}_j \rightarrow \vec {w}\) in \(W^{1,2}_{V}(B_2)\), as required. \(\square \)

With the above lemma, we prove local boundedness of solutions to weakly coupled systems.

Proposition 9

(Local boundedness of vector solutions) With \(B_r = B(0, r)\), assume that \(B_{4R} \subseteq \Omega \). Let \(\mathcal {L}_V\) be as given in (42), where A is bounded and uniformly elliptic as in (25) and (24), and V satisfies (26). Assume that \(\vec {f} \in L^\ell (B_{2R})\) for some \(\ell > \frac{n}{2}\). Let \(\vec {u} \in W_{V} ^{1, 2} (B_{4R})\) satisfy \(\mathcal {L}_V\vec {u} = \vec {f}\) in the weak sense on \(B_{2R}\). That is, for any \(\vec {\phi } \in W^{1,2}_{V,0}(B_{2R})\), it holds that

$$\begin{aligned} \int _{B_{2R}} a^{\alpha \beta } \left\langle D_\beta \vec {u}, D_\alpha \vec {\phi } \right\rangle + \int _{B_{2R}} \left\langle V \vec {u}, \vec {\phi } \right\rangle = \int _{B_{2R}} \left\langle \vec {f}, \vec {\phi } \right\rangle . \end{aligned}$$
(44)

Then, for any \(q \ge 1\), we have

$$\begin{aligned} \left\| \vec {u}\right\| _{L^\infty (B_{R})} \le C \left[ R^{-\frac{n}{q}} \left\| \vec {u}\right\| _{L^q(B_{2R})} + R^{2 - \frac{n}{\ell }} \Vert \vec {f}\Vert _{L^\ell (B_{2R})}\right] , \end{aligned}$$
(45)

where the constant C depends on n, d, \(\lambda \), \(\Lambda \), q and \(\ell \).

Remark 7

Note that the constant C in Proposition 9 is independent of V and R. Therefore, this result establishes that estimate (38) in (IB) holds for our weakly coupled systems.

Remark 8

This statement assumes that \(V \in L^{\frac{n}{2}}_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \cap \mathcal{N}\mathcal{D}\), but the proof only uses that V is positive semidefinite. As described in previous sections, the additional conditions on V ensure that each \(W^{1,2}_V\left( \Omega \right) \) is a well-defined inner product space over which we can talk about weak solutions. As such, we maintain that V satisfies (26). However, if a different class of solution functions were considered, this proof would carry through assuming only that \(V \ge 0\) a.e.

Remark 9

There is nothing special about the choice of R, 2R and 4R here except for the ordering \(R< 2R < 4R\), the scale of differences \(4R - 2R, 2R - R \simeq R\), and that this statement matches that of (38) in (IB). In applications of this result, we may modify the choices of radii while maintaining the ordering and difference properties, keeping in mind that the associated constants will change in turn.

Proof

We assume first that \(R = 1\).

For some \(k > 0\) to be specified below, define the scalar function

$$\begin{aligned} v = v\left( k \right) := \left( \left| \vec {u}\right| ^2 + k^2 \right) ^{\frac{1}{2}} \end{aligned}$$

and the associated vector function

$$\begin{aligned} \vec {w} = \vec {w}\left( k \right) := \vec {u} \, v^{-1}. \end{aligned}$$

Observe that \(v \ge k > 0\) and since \(v > \left| \vec {u}\right| \), then \(\left| \vec {w}\right| \le 1\). In fact, since \(v \le \left| \vec {u}\right| + k\) and \(\vec {u} \in W^{1,2}_V(B_2)\) implies that \(\vec {u} \in L^{2}(B_2)\), then \(v \in L^2(B_2)\). Moreover, since \( D_\beta v = v^{-1} \left\langle D_\beta \vec {u}, \vec {u} \right\rangle = \left\langle D_\beta \vec {u}, \vec {w} \right\rangle \), then \(\left| D_\beta v\right| \le \left| D_\beta \vec {u}\right| \) and we deduce that each \(D_\beta v \in L^2\left( B_2 \right) \). In particular, \(v \in W^{1,2}(B_2)\). An application of Lemma 16 implies that \(\vec {w} \in W^{1,2}_V(B_2)\). That is, since \(\vec {w}\) and \(v^{-1}\) are bounded, then \( D_\alpha \vec {w} = \left[ D_\alpha \vec {u} - \vec {w} \left\langle D_\alpha \vec {u}, \vec {w} \right\rangle \right] v^{-1} \in L^2(B_2)\). Let \(\varphi \in C^\infty _c(B_2)\) satisfy \(\varphi \ge 0\) in \(B_2\) and note that \(D\left( \vec {w} \,\varphi \right) \in L^{2}(B_2)\). Then

$$\begin{aligned} \begin{aligned} \int _{B_2} a^{\alpha \beta } D_\beta v \, D_\alpha \varphi&= \int _{B_2} a^{\alpha \beta } \left\langle D_\beta \vec {u}, \vec {w} \right\rangle \, D_\alpha \varphi \\&= \int _{B_2} a^{\alpha \beta } \left\langle D_\beta \vec {u}, D_\alpha \left( \vec {w} \varphi \right) \right\rangle - \int _{B_2} a^{\alpha \beta } \left\langle D_\beta \vec {u}, v \, D_\alpha \vec {w} \right\rangle \, v^{-1} \varphi . \end{aligned} \end{aligned}$$
(46)

To simplify the last term, observe that

$$\begin{aligned} \left\langle D_\beta \vec {u}, v \, D_\alpha \vec {w} \right\rangle&= \left\langle D_\beta \vec {u}, D_\alpha \vec {u} - \vec {w} \left\langle D_\alpha \vec {u}, \vec {w} \right\rangle \right\rangle = \left\langle D_\beta \vec {u}, D_\alpha \vec {u} \right\rangle - \left\langle D_\beta \vec {u}, \vec {w} \right\rangle \left\langle D_\alpha \vec {u}, \vec {w} \right\rangle , \end{aligned}$$

while

$$\begin{aligned} \left\langle v \, D_\beta \vec {w}, v \, D_\alpha \vec {w} \right\rangle&= \left\langle D_\beta \vec {u} - \vec {w} \left\langle D_\beta \vec {u}, \vec {w} \right\rangle , D_\alpha \vec {u} - \vec {w} \left\langle D_\alpha \vec {u}, \vec {w} \right\rangle \right\rangle \\&= \left\langle D_\beta \vec {u}, D_\alpha \vec {u} \right\rangle - \left( 1+ k^2v^{-2} \right) \left\langle D_\beta \vec {u}, \vec {w} \right\rangle \left\langle D_\alpha \vec {u}, \vec {w} \right\rangle . \end{aligned}$$

By combining the previous two expressions, we see that

$$\begin{aligned} a^{\alpha \beta } \left\langle D_\beta \vec {u}, v \, D_\alpha \vec {w} \right\rangle v^{-1} \varphi&= a^{\alpha \beta } \left[ \left\langle v \, D_\beta \vec {w}, v\, D_\alpha \vec {w} \right\rangle v^{-1} \varphi + \left\langle D_\beta \vec {u}, \vec {w} \right\rangle \left\langle D_\alpha \vec {u}, \vec {w} \right\rangle k^2v^{-3} \varphi \right] , \end{aligned}$$

where all of the terms in these expressions are integrable since \(D \vec {u}, v \, D\vec {w} \in L^2(B_2)\) and \(\vec {w}, v^{-1}, \varphi , a^{\alpha \beta } \in L^\infty (B_2)\). Then (46) becomes

$$\begin{aligned} \begin{aligned} \int _{B_2} a^{\alpha \beta } D_\beta v \, D_\alpha \varphi&= \int _{B_2} a^{\alpha \beta } \left\langle D_\beta \vec {u}, D_\alpha \left( \vec {w} \varphi \right) \right\rangle - \int _{B_2} a^{\alpha \beta } \left\langle D_\beta \vec {w}, D_\alpha \vec {w} \right\rangle \, v \varphi \\&\quad - \int _{B_2} a^{\alpha \beta } \left\langle D_\beta \vec {u}, \vec {w} \right\rangle \left\langle D_\alpha \vec {u}, \vec {w} \right\rangle \, k^2 v^{-3} \varphi \\&\le \int _{B_2} a^{\alpha \beta } \left\langle D_\beta \vec {u}, D_\alpha \left( \vec {w} \varphi \right) \right\rangle , \end{aligned} \end{aligned}$$

where we have used that \(a^{\alpha \beta }\) is elliptic to eliminate the last two terms.

Since \(\varphi \in C^\infty _c(B_2)\) and \(\vec {w} \in W^{1,2}_V(B_2)\), then (A4) implies that \( \vec {\phi }:= \vec {w} \varphi = \frac{\vec {u}\varphi }{v} \in W^{1,2}_{V,0}(B_2)\), so we may use (44) with \( \vec {\phi }\) to get

$$\begin{aligned} \int _{B_2} a^{\alpha \beta } \left\langle D_\beta \vec {u}, D_\alpha \left( \vec {w} \varphi \right) \right\rangle&= \int _{B_2} \left\langle \vec {f}, \vec {w} \right\rangle \varphi - \int _{B_2} \left\langle V \vec {u}, \vec {u} \right\rangle \frac{\varphi }{v} \le \int _{B_2} \left\langle \vec {f}, \vec {w} \right\rangle \varphi , \end{aligned}$$

since V is positive semidefinite. By setting \(F = \left\langle \vec {f}, \vec {w} \right\rangle \in L^\ell (B_2)\) and combining the previous two inequalities, we see that for any \(\varphi \in C^\infty _c(B_2)\) with \(\varphi \ge 0\), it holds that

$$\begin{aligned} \int _{B_2} a^{\alpha \beta } D_\beta v \, D_\alpha \varphi&\le \int _{B_2} F \varphi . \end{aligned}$$
(47)

An application of Lemma 17 implies that (47) holds for any \(\varphi \in W^{1,2}_0\left( B_2 \right) \) with \(\varphi \ge 0\) a.e., and we then have that \(- {{\,\textrm{div}\,}}\left( A \nabla v \right) \le F\) in the standard weak sense on \(B_2\). Then [35, Theorem 4.1], for example, shows that for any \(q \ge 1\),

$$\begin{aligned} \left\| \vec {u}\right\| _{L^\infty (B_1)}&\le \left\| v\right\| _{L^\infty (B_1)} \le C \left[ \left\| v\right\| _{L^q(B_2)} + \left\| F\right\| _{L^\ell (B_2)}\right] \\&\le C \left[ \left\| \vec {u}\right\| _{L^q(B_2)} + \left\| k\right\| _{L^q(B_2)} + \left\| F\right\| _{L^\ell (B_2)}\right] , \end{aligned}$$

where C depends on \(n, d, \lambda , \Lambda , q, \ell \). With \(q = 2\), the right-hand side is finite and therefore \(\vec {u} \in L^\infty (B_1)\). Setting \( k = \frac{\left\| \vec {u}\right\| _{L^\infty (B_1)}}{2C \left| B_2\right| ^{\frac{1}{q}}}\), so that \(C \left\| k\right\| _{L^q(B_2)} = C k \left| B_2\right| ^{\frac{1}{q}} = \frac{1}{2} \left\| \vec {u}\right\| _{L^\infty (B_1)}\) may be absorbed into the left-hand side, and noting that \( \left\| F\right\| _{L^\ell (B_2)} \le \Vert \vec {f}\Vert _{L^\ell (B_2)}\), we get

$$\begin{aligned} \left\| \vec {u}\right\| _{L^\infty (B_1)} \le 2 C \left[ \left\| \vec {u}\right\| _{L^q(B_2)} + \Vert \vec {f}\Vert _{L^\ell (B_2)}\right] , \end{aligned}$$

as required.

For the general case of \(R \ne 1\), we rescale. That is, we apply the result above with \(\vec {u}_R(x) = \vec {u}(Rx)\), \(A_R(x) = A(Rx)\), \(V_R (x) = R^2 V(Rx),\) and \(\vec {f}_R (x) = R^2 \vec {f} (Rx)\) to get (45) in general. \(\square \)
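For completeness, here is the routine scaling computation behind this reduction, writing \(a_R^{\alpha \beta }(x) = a^{\alpha \beta }(Rx)\) for the rescaled coefficients: by the chain rule,

```latex
\begin{aligned}
-D_\alpha \left( a_R^{\alpha \beta} D_\beta \vec{u}_R \right)(x) + V_R(x)\, \vec{u}_R(x)
&= -R^2 \left[ D_\alpha \left( a^{\alpha \beta} D_\beta \vec{u} \right) \right](Rx)
  + R^2\, V(Rx)\, \vec{u}(Rx) \\
&= R^2 \left( \mathcal{L}_V \vec{u} \right)(Rx)
= R^2 \vec{f}(Rx) = \vec{f}_R(x),
\end{aligned}
```

so \(\mathcal {L}_{V_R} \vec {u}_R = \vec {f}_R\) weakly on \(B_2\). Applying the unit-scale estimate to \(\vec {u}_R\) and changing variables back, \(\left\| \vec {u}_R\right\| _{L^q(B_2)} = R^{-\frac{n}{q}} \left\| \vec {u}\right\| _{L^q(B_{2R})}\) and \(\Vert \vec {f}_R\Vert _{L^\ell (B_2)} = R^{2 - \frac{n}{\ell }} \Vert \vec {f}\Vert _{L^\ell (B_{2R})}\), which produces the factors appearing in (45).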

Lemma 17

Let \(C^\infty _c(\Omega )^+ = C^\infty _c(\Omega ) \cap \left\{ \varphi : \varphi \ge 0 \text { in } \Omega \right\} \) and let \(W^{1,2}_0(\Omega )^+ = W^{1,2}_0(\Omega ) \cap \left\{ u: u \ge 0 \text { a.e. in } \Omega \right\} \). For any ball \(B \subseteq \mathbb {R}^n\), \(C^\infty _c(B)^+\) is dense in \(W^{1,2}_0(B)^+\).

Proof

Assume that \(B = B_1\) and let \(u \in W^{1,2}_0(B_1)^+\). Let \(\phi \in C_c^\infty (B_1)\) be a standard mollifier and set \(\phi _t = t^{-n} \phi \left( \frac{\cdot }{t} \right) \in C_c^\infty (B_t)\). For every \(k \in \mathbb {N}\), define \(v_k = \phi _{k^{-1}} *u \in C^\infty _c\left( B_{1 + k^{-1}} \right) \), then set \(u_k = v_k\left( \left( 1 + \frac{1}{k} \right) \cdot \right) \in C^\infty _c\left( B_1 \right) \). Since \(u \in W^{1,2}_0(B_1)^+\), then \(u_k \ge 0\) in \(B_1\) so that \(\left\{ u_k\right\} _{k = 1}^\infty \subseteq C^\infty _c\left( B_1 \right) ^+\). The aim is to show that \(u_k \rightarrow u\) in \(W^{1,2}(B_1)\).

Let \(\varepsilon > 0\) be given. Since u may be extended by zero to a function defined on all of \(\mathbb {R}^n\), we may regard \(u \in L^2\left( \mathbb {R}^n \right) \); then there exists \(g = g_\varepsilon \in C_c\left( \mathbb {R}^n \right) \) such that \(\left\| u - g\right\| _{L^2\left( \mathbb {R}^n \right) } < \varepsilon \). As g is uniformly continuous, there exists \(K \in \mathbb {N}\) so that \(\left\| g\left( \left( 1 + \frac{1}{k} \right) \cdot \right) - g\right\| _{L^2\left( \mathbb {R}^n \right) } < \varepsilon \) whenever \(k \ge K\). By extending all functions to \(\mathbb {R}^n\), we see that

$$\begin{aligned}&\left\| u_k - u\right\| _{L^2(B_1)} \\&\quad \le \left\| v_k\left( \left( 1 + \tfrac{1}{k} \right) \cdot \right) - u\left( \left( 1 + \tfrac{1}{k} \right) \cdot \right) \right\| _{L^2(\mathbb {R}^n)} + \left\| u\left( \left( 1 + \tfrac{1}{k} \right) \cdot \right) - g\left( \left( 1 + \tfrac{1}{k} \right) \cdot \right) \right\| _{L^2(\mathbb {R}^n)} \\&\qquad + \left\| g\left( \left( 1 + \tfrac{1}{k} \right) \cdot \right) - g\right\| _{L^2(\mathbb {R}^n)} + \left\| g - u\right\| _{L^2(\mathbb {R}^n)} \\&\quad = \frac{\left\| v_k- u\right\| _{L^2(\mathbb {R}^n)} + \left\| u- g\right\| _{L^2(\mathbb {R}^n)}}{\left( 1 + \tfrac{1}{k} \right) ^{\frac{n}{2}} } + \left\| g\left( \left( 1 + \tfrac{1}{k} \right) \cdot \right) - g\right\| _{L^2(\mathbb {R}^n)} + \left\| g - u\right\| _{L^2(\mathbb {R}^n)} . \end{aligned}$$

Since \(v_k \rightarrow u\) in \(L^2(\mathbb {R}^n)\), there exists \(M \in \mathbb {N}\) so that \(\left\| v_k - u\right\| _{L^2(\mathbb {R}^n)} < \varepsilon \) whenever \(k \ge M\). In particular, if \(k \ge \max \left\{ K, M\right\} \), then \(\left\| u_k - u\right\| _{L^2(B_1)} < 4 \varepsilon \), so we deduce that \(u_k \rightarrow u\) in \(L^2\left( \mathbb {R}^n \right) \) and hence in \(L^2\left( B_1 \right) \).

Since \(\nabla v_k = \phi _{k^{-1}} *\nabla u\) in \(B_{1 + k^{-1}}\), then an analogous argument shows that \(\nabla u_k \rightarrow \nabla u\) in \(L^2\left( B_1 \right) \), completing the proof. \(\square \)

The next main result of this section is the following Hölder continuity result.

Proposition 10

(Hölder continuity) With \(B_r = B(0, r)\), assume that \(B_{2R_0} \subseteq \Omega \). Let \(\mathcal {L}_V\) be as given in (42), where A is bounded and uniformly elliptic as in (25) and (24), and V \(\in L^{\frac{n}{2} +}_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \cap \mathcal{N}\mathcal{D}\). Assume that \(\vec {u} \in W_{V} ^{1, 2}(B_{2R_0})\) is a weak solution to \(\mathcal {L}_V\vec {u} = 0\) in \(B_{3R_0/2}\). Then there exist constants \(c_1, c_2, c_3 > 0\), all depending on n, p, \(\lambda \), and \(\Lambda \), such that if

$$\begin{aligned} \eta := -c_1 \left[ \log \left( \min \left\{ \frac{c_2}{R_0}\Vert V\Vert _{L^p(B_{R_0})} ^{-\frac{1}{2 - \frac{n}{p}}}, c_3, \frac{1}{2} \right\} \right) \right] ^{-1} \in \left( 0, 1 \right) , \end{aligned}$$

then for any \(R \le R_0\),

$$\begin{aligned} \sup _{\begin{array}{c} x, y \in B_{R/2} \\ x \ne y \end{array}} \frac{\left| \vec {u} (x) - \vec {u}(y)\right| }{\left| x-y\right| ^\eta } \le 4 R^{-\eta } \Vert \vec {u}\Vert _{L^\infty (B_R)}. \end{aligned}$$
(48)

In fact, for any \(q \ge 1\), there exists \(c_4\left( n, q \right) \) so that

$$\begin{aligned} \sup _{\begin{array}{c} x, y \in B_{R/2} \\ x \ne y \end{array}} \frac{\left| \vec {u} (x) - \vec {u}(y)\right| }{\left| x-y\right| ^\eta } \le c_4 R^{-\eta - \frac{n}{q}} \Vert \vec {u}\Vert _{L^q(B_{3R/2})}. \end{aligned}$$
(49)

Remark 10

We point out that the assumption on V in this proposition is stronger than in previous statements. First, we now need \(V \in L^{\frac{n}{2} +}_{{{\,\textrm{loc}\,}}}\), as opposed to \(V \in L^{\frac{n}{2}}_{{{\,\textrm{loc}\,}}}\), in order to apply the Harnack inequality—a crucial step in the proof. Second, the assumption that V is positive semidefinite is used in the application of Lemma 9. Finally, the full power of \(V \in \mathcal{N}\mathcal{D}\) is used to ensure that the spaces \(W_{V} ^{1, 2}\) are well-defined. However, if we were to use the spaces \(W^{1,2}\) or \(Y^{1,2}\) in place of \(W_{V} ^{1, 2}\) to define our weak solutions, it may be possible to drop the requirement that \(V \in \mathcal{N}\mathcal{D}\) and establish (49) by only assuming that V is positive semidefinite. In fact, if we knew a priori that the weak solution is bounded (and therefore did not need to resort to Lemma 9), we could prove (48) by only assuming that \(V \in L^{\frac{n}{2} +}_{{{\,\textrm{loc}\,}}}\) without imposing that V is positive semidefinite anywhere.

Remark 11

Although the radii chosen in this statement do not match those in the statement of the Hölder continuity assumption (39) from (H), this presentation suits the proof well. As usual, the radii may be modified to give precisely (39) from (H), but we will not do that here.

The proof was inspired by the arguments in [19] (see also [36] for a more detailed account of this method), which proves Hölder continuity for very general nonlinear elliptic systems. To prove this result, we will carefully iterate the following lemma.

Lemma 18

(Iteration lemma) Let \(B_r = B(0, r)\). Let \(\rho \le 1\) and \(\vec {\nu }_* \in \mathbb {R}^d\) with \(\left| \vec {\nu }_*\right| \le 2\). Let \(\mathcal {L}_V\) be as given in (42), where A is bounded and uniformly elliptic as in (25) and (24), and \(V \in L^{\frac{n}{2} +}_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \cap \mathcal{N}\mathcal{D}\). Assume that \(\vec {u} \in W_{V} ^{1, 2}(B_{3\rho })\) is a weak solution to \(\mathcal {L}_V\vec {u} = -V \vec {\nu }_*\) in \(B_{2\rho }\). If \(\Vert \vec {u}\Vert _{L^\infty (B_\rho )} \le M \le 1\), then there exist \(\delta = \delta (n, p, \lambda , \Lambda ) \in (0, 1)\) and a universal constant \(c_0 > 0\) so that for any \(0 < \theta \le 1\), it holds that

$$\begin{aligned} \sup _{x \in B_{\theta \rho /2}}\left| \vec {u}(x) - \delta a_{B_{3\theta \rho /4}} \vec {u}\right| \le M (1 - \delta ) + c_0 M^{\frac{1}{2}} \left( \theta \rho \right) ^{1-\frac{n}{2p}} \Vert V\Vert _{L^p(B_1)}^{\frac{1}{2}}. \end{aligned}$$
(50)

Before we prove this lemma, let us briefly discuss its application. To run the arguments in [19], we look at functions of the form \(\vec {u}_k = \vec {u} - \vec {\nu }_k\) for constant vectors \(\vec {\nu }_k\) to be inductively selected, where here \(\mathcal {L}_V\vec {u} = 0\). However, we then have

$$\begin{aligned} \mathcal {L}_V\vec {u}_k&= - {{\,\textrm{div}\,}}\left( A \nabla \vec {u} \right) + V \left( \vec {u} - \vec {\nu }_k \right) = \mathcal {L}_V\vec {u} - V \vec {\nu }_k = - V \vec {\nu }_k. \end{aligned}$$

Proof

For some constant vector \(\hat{\nu } \in {\mathbb {S}}^{n-1} \cup \left\{ \vec {0}\right\} \) to be determined later, set

$$\begin{aligned} h(x) = \frac{1}{2} M^2 + M - \frac{1}{2} \left| \vec {u}(x)\right| ^2 - \left\langle \vec {u}(x), \hat{\nu } \right\rangle \ge 0. \end{aligned}$$
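The nonnegativity of h on \(B_\rho \) is a consequence of the hypotheses \(\Vert \vec {u}\Vert _{L^\infty (B_\rho )} \le M\) and \(\left| \hat{\nu }\right| \le 1\):

```latex
h = \frac{1}{2} \left( M^2 - \left| \vec{u} \right|^2 \right)
  + \left( M - \left\langle \vec{u}, \hat{\nu} \right\rangle \right) \ge 0
\quad \text{in } B_\rho,
\qquad \text{since } \left\langle \vec{u}, \hat{\nu} \right\rangle
  \le \left| \vec{u} \right| \left| \hat{\nu} \right| \le M .
```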

Since \(\vec {u} \in W^{1,2}_V(B_{2\rho }) \cap L^\infty (B_\rho )\) by assumption, then \(h \in W^{1,2}(B_{\rho }) \cap L^\infty (B_\rho )\). For any \(\varphi \in C^\infty _c\left( B_{\rho } \right) ^+\), it holds that

$$\begin{aligned} \int _{B_{\rho }} a^{\alpha \beta } D_\beta h \, D_\alpha \varphi&= - \int _{B_{\rho }} a^{\alpha \beta } \left\langle D_\beta \vec {u}, \vec {u} \right\rangle \, D_\alpha \varphi - \int _{B_{\rho }} a^{\alpha \beta } \left\langle D_\beta \vec {u}, \hat{\nu } \right\rangle \, D_\alpha \varphi \\&= - \int _{B_{\rho }} a^{\alpha \beta } \left\langle D_\beta \vec {u}, D_\alpha \left[ \varphi \left( \vec {u} + \hat{\nu } \right) \right] \right\rangle + \int _{B_{\rho }} a^{\alpha \beta } \left\langle D_\beta \vec {u}, D_\alpha \vec {u} \right\rangle \, \varphi . \end{aligned}$$

Since \(\varphi \left( \vec {u} + \hat{\nu } \right) \in W^{1,2}_{V,0}(B_{\rho })\) by (41) in (A4) and \(\mathcal {L}_V\vec {u} = -V \vec {\nu }_*\) weakly in \(B_\rho \), then

$$\begin{aligned}&\int _{B_{\rho }} a^{\alpha \beta } D_\beta h \, D_\alpha \varphi \\&\quad = \int _{B_{\rho }} \left\langle V \vec {u}, \varphi \left( \vec {u} + \hat{\nu } \right) \right\rangle + \int _{B_{\rho }} \left\langle V \vec {\nu }_*, \varphi \left( \vec {u} + \hat{\nu } \right) \right\rangle + \int _{B_{\rho }} a^{\alpha \beta } \left\langle D_\beta \vec {u}, D_\alpha \vec {u} \right\rangle \, \varphi \\&\quad = \int _{B_{\rho }} \left[ \left\langle V \vec {u}, \vec {u} \right\rangle + \left\langle V \vec {u}, \hat{\nu } \right\rangle + \left\langle V \vec {\nu }_*, \vec {u} \right\rangle + \left\langle V \vec {\nu }_*, \hat{\nu } \right\rangle \right] \varphi + \int _{B_{\rho }} a^{\alpha \beta } \left\langle D_\beta \vec {u}, D_\alpha \vec {u} \right\rangle \, \varphi \\&\quad \ge - 5 \int _{B_{\rho }} \left| V\right| \varphi , \end{aligned}$$

since \(\left| \vec {u}\right| , \left| \hat{\nu }\right| \le 1\), \(\left| \vec {\nu }_*\right| \le 2\), and A is elliptic. An application of Lemma 17 implies that \(-{{\,\textrm{div}\,}}\left( A \nabla h \right) \ge - 5 |V| \) weakly on \(B_{\rho }\). By the weak Harnack inequality [35, Theorem 4.15], since \(\left| V\right| \in L^p\left( B_{\rho } \right) \) for some \(p > \frac{n}{2}\), then there exists \(\delta _1(n, p, \lambda , \Lambda ) > 0\) so that

$$\begin{aligned} \delta _1 a_{B_{3\rho /4}} h \le \inf _{B_{\rho /2}} h + 5 \rho ^{2-\frac{n}{p}} \Vert V\Vert _{L^p(B_\rho )} \le \inf _{B_{\rho /2}} h + 5 \rho ^{2-\frac{n}{p}} \Vert V\Vert _{L^p(B_1)}. \end{aligned}$$

Since \(\frac{1}{2} (M^2 - \left| \vec {u}\right| ^2) \ge 0\), then for any \(x \in B_{\rho /2}\),

$$\begin{aligned}&\delta _1 \left[ M - \left\langle a_{B_{3\rho /4}} \vec {u}, \hat{\nu } \right\rangle \right] \le \delta _1 a_{B_{3\rho /4}} h\\&\quad \le \frac{1}{2} M^2 + M - \frac{1}{2} \left| \vec {u}(x)\right| ^2 - \left\langle \vec {u}(x), \hat{\nu } \right\rangle + 5 \rho ^{2-\frac{n}{p}} \Vert V\Vert _{L^p(B_1)}. \end{aligned}$$

Now we fix \(x \in B_{\rho /2}\) and set \(\hat{\nu } = \left\{ \begin{array}{ll} \vec {u}(x)/\left| \vec {u}(x)\right| &{} \text { if } \vec {u}(x) \ne 0 \\ \vec {0} &{} \text { otherwise} \end{array}\right. \), \(r(x) = \left| \vec {u}(x)\right| /M\), and define \(\theta = \theta (x)\) so that \(\left\langle a_{B_{3\rho /4}} \vec {u}, \vec {u}(x) \right\rangle = \left| \vec {u}(x)\right| \left| a_{B_{3\rho /4}} \vec {u}\right| \cos \theta \). Then the previous inequality may be written as

$$\begin{aligned} M \delta _1 \left( 1 - \frac{\left| a_{B_{3\rho /4}} \vec {u}\right| }{M}\cos \theta \right) \le M \left( 1 - r(x) \right) \left[ \frac{M}{2} \left( 1 + r(x) \right) + 1\right] + 5 \rho ^{2-\frac{n}{p}} \Vert V\Vert _{L^p(B_1)}. \end{aligned}$$

Since \(M\left[ \frac{1}{2} M \left( 1 + r(x) \right) + 1\right] \le M(M + 1)\) and \(M \le 1\), then

$$\begin{aligned} \frac{\delta _1}{2} \left( 1 - \frac{\left| a_{B_{3\rho /4}} \vec {u}\right| }{M}\cos \theta \right)&\le \frac{\delta _1}{M + 1} \left( 1 - \frac{\left| a_{B_{3\rho /4}} \vec {u}\right| }{M}\cos \theta \right) \\&\le 1 - r(x) + \frac{5 \rho ^{2-\frac{n}{p}}}{M\left( M + 1 \right) } \Vert V\Vert _{L^p(B_1)}. \end{aligned}$$

Since \(r(x) \in \left[ 0, 1\right] \) and \(\delta _1 \le 2\), then \(\frac{\delta _1}{2}\left( 1 - r(x) \right) \le 1 - r(x)\) or, equivalently, \(\frac{\delta _1}{2} \le 1 - r(x) + r(x) \frac{\delta _1}{2}\). Therefore, it follows from the previous inequality that

$$\begin{aligned}&\frac{\delta _1}{2} \left( 1 - r(x) \frac{\left| a_{B_{3\rho /4}} \vec {u}\right| }{M}\cos \theta \right) \le 1 - r(x) + r(x) \frac{\delta _1}{2}\left( 1 - \frac{\left| a_{B_{3\rho /4}} \vec {u}\right| }{M}\cos \theta \right) \\&\quad \le 1 - r(x) + r(x) \left[ 1 - r(x) + \frac{5 \rho ^{2-\frac{n}{p}}}{M\left( M + 1 \right) } \Vert V\Vert _{L^p(B_1)}\right] . \end{aligned}$$

Rearranging this inequality and using that \(r(x) \le 1\), we see that

$$\begin{aligned} r(x)^2 - \frac{\delta _1}{2} r(x) \frac{\left| a_{B_{3\rho /4}} \vec {u}\right| }{M} \cos \theta&\le 1 - \frac{\delta _1}{2} + \frac{5 \rho ^{2-\frac{n}{p}}}{M} \Vert V\Vert _{L^p(B_1)}. \end{aligned}$$

Since \( \frac{\left| a_{B_{3\rho /4}} \vec {u}\right| }{M} \le 1\), with \(\delta := \frac{\delta _1}{4}\), this implies that

$$\begin{aligned} \frac{1}{M^2} \left| \vec {u}(x) - \delta a_{B_{3\rho /4}} \vec {u}\right| ^2&= r(x)^2 - 2 \delta r(x) \frac{\left| a_{B_{3\rho /4}} \vec {u} \right| }{M} \cos \theta + \left( \delta \frac{\left| a_{B_{3\rho /4}} \vec {u}\right| }{M} \right) ^2 \\&\le \left( 1 - \delta \right) ^2 + \frac{5 \rho ^{2-\frac{n}{p}}}{M} \Vert V\Vert _{L^p(B_1)}. \end{aligned}$$

As this inequality holds for any \(x \in B_{\rho /2}\), it follows that

$$\begin{aligned} \sup _{x \in B_{\rho /2}}\left| \vec {u}(x) - \delta a_{B_{3\rho /4}} \vec {u}\right| ^2 \le M^2 (1 - \delta )^2 + 5M \rho ^{2-\frac{n}{p}} \Vert V\Vert _{L^p(B_1)}. \end{aligned}$$

Taking a square root gives (50) when \(\theta = 1\). As all of the above arguments still hold with \(\rho \) replaced by \(\theta \rho \) for any \(0< \theta < 1\), we get (50) in general. \(\square \)

This lemma is used to recursively define a sequence of functions and constant vectors and establish bounds for them.

Lemma 19

(Sequence lemma) Let \(B_r = B(0, r)\). Let \(\mathcal {L}_V\) be as given in (42), where A is bounded and uniformly elliptic as in (25) and (24), and \(V\in L^{\frac{n}{2} +}_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \cap \mathcal{N}\mathcal{D}\). Assume that \(\vec {u} \in W_{V} ^{1, 2}(B_{3})\) is a weak solution to \(\mathcal {L}_V\vec {u} = 0\) in \(B_{2}\). Assume further that \(\Vert \vec {u}\Vert _{L^\infty (B_{1})} \le 1\). Let \(\delta = \delta \left( n, p, \lambda , \Lambda \right) \in \left( 0, 1 \right) \) and \(c_0 > 0\) be as in Lemma 18. Recursively define the sequences \(\left\{ \vec {\nu }_k\right\} _{k=0}^\infty \subseteq \mathbb {R}^d\) and \(\left\{ \vec {u}_k\right\} _{k=0}^\infty \subseteq W^{1,2}_V(B_1)\) as follows: Let \(\vec {\nu }_0 = \vec {0}\), \(\vec {u}_0(x) = \vec {u}(x)\), and for all \(k \in \mathbb {Z}_{\ge 0}\), set

$$\begin{aligned}&\vec {\nu }_{k+1} = \vec {\nu }_{k} + \delta a_{B_{3/2 \left( \theta / 2 \right) ^{k+1}}} \vec {u}_k \\&\vec {u}_{k+1}(x) = \vec {u}(x) - \vec {\nu }_{k+1}. \end{aligned}$$

If \(\theta \le \min \left\{ \left( \frac{ \delta }{2 c_0 \left\| V\right\| _{L^p(B_1)}^{1/2}} \right) ^\frac{1}{1-\frac{n}{2p}}, 2\left( 1-\frac{\delta }{2} \right) ^\frac{1}{2-\frac{n}{p}}, 1\right\} \), then for all \(k \in \mathbb {N}\), it holds that

$$\begin{aligned} \begin{aligned}&\left| \vec {\nu }_k\right| \le \delta \sum _{i = 0}^{k - 1} \left( 1 - \frac{\delta }{2} \right) ^i, \qquad \sup _{x \in B_{\left( \theta /2 \right) ^k}}\left| \vec {u}_k (x)\right| \le \left( 1 - \frac{\delta }{2} \right) ^k. \end{aligned} \end{aligned}$$
(51)

Proof

Since \(\theta \le 1\), then Lemma 18 is applicable with \(\rho = 1\), \(M = 1\), and \(\vec {\nu }_* = \vec {0}\), so from (50) we get that

$$\begin{aligned} \sup _{x \in B_{\theta /2}}\left| \vec {u}(x) - \delta a_{B_{3\theta /4}} \vec {u}\right| \le (1 - \delta ) + c_0 {\theta }^{1-\frac{n}{2p}} \Vert V\Vert _{L^p(B_1)}^{\frac{1}{2}}. \end{aligned}$$

Since \( \theta \le \left( \frac{ \delta }{2 c_0 \left\| V\right\| _{L^p(B_1)}^{1/2}} \right) ^\frac{1}{1-\frac{n}{2p}}\) implies that \(c_0 \theta ^{1-\frac{n}{2p}} \Vert V\Vert _{L^p (B_1)} ^\frac{1}{2} \le \frac{\delta }{2}\), then

$$\begin{aligned} \sup _{x \in B_{\theta /2}}\left| \vec {u}(x) - \delta a_{B_{3\theta /4}} \vec {u}\right| \le \left( 1 - \frac{\delta }{2} \right) . \end{aligned}$$

By defining \(\vec {\nu }_1 = \delta a_{B_{3\theta /4}} \vec {u}\) and \(\vec {u}_1(x) = \vec {u}(x) - \vec {\nu }_1\), we get that \(\Vert \vec {u}_1\Vert _{L^\infty (B_{\theta /2})} \le \left( 1 - \frac{\delta }{2} \right) \), \(\left| \vec {\nu }_1\right| \le \delta \), and \(\mathcal {L}_V\vec {u}_1 = -V\vec {\nu }_1\).

We have proved (51) for \(k = 1\), so we now prove it for \(k \ge 2\) via induction. Assume that (51) holds for some \(k \ge 1\); then

$$\begin{aligned} \left| \vec {\nu }_{k+1}\right|&\le \left| \vec {\nu }_{k}\right| + \delta \left| a_{B_{3/2 \left( \theta / 2 \right) ^{k+1}}} \vec {u}_k\right| \le \delta \sum _{i = 0}^{k - 1} \left( 1 - \frac{\delta }{2} \right) ^i + \delta \left\| \vec {u}_k\right\| _{L^\infty \left( B_{3/2 \left( \theta / 2 \right) ^{k+1}} \right) } \\&\le \delta \sum _{i = 0}^{k - 1} \left( 1 - \frac{\delta }{2} \right) ^i + \delta \left\| \vec {u}_k\right\| _{L^\infty \left( B_{\left( \theta / 2 \right) ^{k}} \right) } \le \delta \sum _{i = 0}^{k } \left( 1 - \frac{\delta }{2} \right) ^i. \end{aligned}$$
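In passing from the \(L^\infty \) norm over \(B_{3/2 \left( \theta / 2 \right) ^{k+1}}\) to the norm over \(B_{\left( \theta / 2 \right) ^{k}}\) above, we used the nesting of the balls, which holds because \(\theta \le 1\):

```latex
\frac{3}{2} \left( \frac{\theta}{2} \right)^{k+1}
= \frac{3\theta}{4} \left( \frac{\theta}{2} \right)^{k}
\le \left( \frac{\theta}{2} \right)^{k},
\qquad \text{so } B_{\frac{3}{2} \left( \theta/2 \right)^{k+1}}
  \subseteq B_{\left( \theta/2 \right)^{k}} .
```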

Since

$$\begin{aligned} \left| \vec {\nu }_k\right| < \delta \sum _{i = 0}^{\infty } \left( 1 - \frac{\delta }{2} \right) ^i = 2, \end{aligned}$$

then an application of Lemma 18 with \(\vec {u} = \vec {u}_k\), \(\rho = \left( \theta /2 \right) ^k\), \(M = \left( 1 - \frac{\delta }{2} \right) ^k\), and \(\vec {\nu }_* = \vec {\nu }_k\) gives us

$$\begin{aligned}&\sup _{x \in B_{\left( \theta /2 \right) ^{k+1}}} \left| \vec {u}_k(x) - \delta a_{B_{3/2\left( \theta /2 \right) ^{k+1}}} \vec {u}_k\right| \\&\qquad \le \left( 1 - \frac{\delta }{2} \right) ^k (1 - \delta ) + c_0 \left( 1 - \frac{\delta }{2} \right) ^{\frac{k}{2}} \theta ^{1-\frac{n}{2p}} \left( \frac{\theta }{2} \right) ^{k-\frac{k n}{2p}} \Vert V\Vert _{L^p(B_1)} ^\frac{1}{2}. \end{aligned}$$

The choice of \(\theta \) ensures that \(c_0 \theta ^{1-\frac{n}{2p}} \Vert V\Vert _{L^p(B_1)}^{\frac{1}{2}} \le \frac{\delta }{2}\) and, since \(\frac{1}{2} \left( 2 - \frac{n}{p} \right) = 1 - \frac{n}{2p}\), that \(\left( \frac{\theta }{2} \right) ^{1 - \frac{n}{2p}} \le \left( 1 - \frac{\delta }{2} \right) ^{\frac{1}{2}}\), so it follows from \(\vec {u}_{k+1} = \vec {u}_k - \delta a_{B_{3/2 \left( \theta / 2 \right) ^{k+1}}} \vec {u}_k\) that

$$\begin{aligned} \sup _{x \in B_{\left( \theta /2 \right) ^{k+1}}}\left| \vec {u}_{k+1}(x)\right| \le \left( 1 - \frac{\delta }{2} \right) ^k (1 - \delta ) + \left( 1 - \frac{\delta }{2} \right) ^k \left( \frac{\delta }{2} \right) = \left( 1 - \frac{\delta }{2} \right) ^{k+1}, \end{aligned}$$

which completes the proof of (51). \(\square \)

Using Lemma 19, we give the proof of Proposition 10.

Proof of Proposition 10

Assume first that \(R_0 = 2\). Then \(\vec {u} \in W_{V} ^{1, 2}(B_{4})\) is a weak solution to \(\mathcal {L}_V\vec {u} = 0\) in \(B_{3}\). An application of Proposition 9 with modified radii (see Remark 9) implies that \(\vec {u} \in L^\infty (B_2)\).

For any \(R \le 2\) and \(x_0 \in B\left( 0, \frac{R}{2} \right) \), since \(B\left( x_0, \frac{R}{2} \right) \subseteq B(0, 2)\), then \(\vec {u} \in L^\infty \left( B\left( x_0, \frac{R}{2} \right) \right) \). Define

$$\begin{aligned} \vec {u}_R(x) = \vec {u}\left( x_0 + \frac{R}{2} x \right) / \left\| \vec {u}\right\| _{L^\infty \left( B\left( x_0, \frac{R}{2} \right) \right) } \end{aligned}$$

and note that \(\Vert \vec {u}_R\Vert _{L^\infty (B_1)} = 1\). Since \(\vec {u} \in W_{V} ^{1, 2}(B_4)\) is a weak solution to \(\mathcal {L}_V\vec {u} = 0\) in \(B_3\) by hypothesis, then it holds that \(\vec {u}_R \in W_{V} ^{1, 2}(B_3)\) is a weak solution to \(\mathcal {L}_{V_R} \vec {u}_R = 0\) in \(B_2\), where \(V_R(x) = \left( \frac{R}{2} \right) ^2 V\left( x_0 + \frac{R}{2} x \right) \). Because \(\left\| V_R\right\| _{L^p(B_1)} = \left( \frac{R}{2} \right) ^{2 - \frac{n}{p}} \left\| V\right\| _{L^p\left( B\left( x_0, \frac{R}{2} \right) \right) } \le \left\| V\right\| _{L^p(B_2)}\), then, with \(\delta > 0\) as in Lemma 18, we set

$$\begin{aligned} \theta = \min \left\{ \left( \frac{ \delta }{2 c_0 } \right) ^\frac{1}{1-\frac{n}{2p}} \left\| V\right\| _{L^p(B_2)} ^{-\frac{1}{2-\frac{n}{p}}}, 2\left( 1-\frac{\delta }{2} \right) ^\frac{1}{2-\frac{n}{p}}, 1\right\} . \end{aligned}$$
(52)

It follows that the hypotheses of Lemma 19 are satisfied for any such \(\vec {u}_R\).
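As a quick numerical sanity check (illustrative only; the values of \(n, p, \delta , c_0\), and the stand-in for \(\left\| V\right\| _{L^p(B_2)}\) below are arbitrary sample choices, not constants from the text), one can verify that the choice of \(\theta \) in (52) guarantees both smallness conditions used in the iteration, namely \(\left( \theta /2 \right) ^{1 - \frac{n}{2p}} \le \left( 1 - \frac{\delta }{2} \right) ^{1/2}\) and \(c_0 \theta ^{1 - \frac{n}{2p}} \left\| V\right\| ^{1/2} \le \frac{\delta }{2}\):

```python
import math

# Arbitrary sample parameters (not from the text); p > n/2 is required.
n, p, delta, c0, normV = 3, 2.0, 0.5, 2.0, 5.0

a = 1 - n / (2 * p)   # exponent 1 - n/(2p) > 0 since p > n/2
b = 2 - n / p         # exponent 2 - n/p = 2 * a

# theta as in (52)
theta = min((delta / (2 * c0)) ** (1 / a) * normV ** (-1 / b),
            2 * (1 - delta / 2) ** (1 / b),
            1.0)

# (theta/2)^{1 - n/(2p)} <= (1 - delta/2)^{1/2}
cond1 = (theta / 2) ** a <= math.sqrt(1 - delta / 2) + 1e-12
# c0 * theta^{1 - n/(2p)} * ||V||^{1/2} <= delta/2
cond2 = c0 * theta ** a * math.sqrt(normV) <= delta / 2 + 1e-12
print(cond1, cond2)
```

Both conditions follow from elementary exponent algebra, since \(2 - \frac{n}{p} = 2\left( 1 - \frac{n}{2p} \right) \).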

Define \(\eta = \log \left( 1 - \frac{\delta }{2} \right) \left[ \log \left( \frac{\theta }{2} \right) \right] ^{-1} > 0\) so that \(\left( 1 - \frac{\delta }{2} \right) = \left( \frac{\theta }{2} \right) ^\eta \). Since \(\delta \in \left( 0, 1 \right) \), then \(\theta \le 1 < 2\left( 1-\frac{\delta }{2} \right) \), so that \(\eta \le 1\). Observe that since \(\delta \in (0, 1)\), then \(\left( \frac{2}{\theta } \right) ^\eta = \left( 1 - \frac{\delta }{2} \right) ^{-1}< 2\).

For \(0 < r \le 1\), choose \(k \in \mathbb {Z}_{\ge 0}\) so that

$$\begin{aligned} \left( \frac{\theta }{2} \right) ^{k+1} < r \le \left( \frac{\theta }{2} \right) ^k. \end{aligned}$$

Let \( \underset{r}{{{\,\textrm{osc}\,}}}\ \, \vec {u}_R = \sup _{x, y \in B_r } \left| \vec {u}_R (x) - \vec {u}_R(y)\right| \) and observe that, with the notation from Lemma 19, we have

$$\begin{aligned} \underset{r}{{{\,\textrm{osc}\,}}}\ \, \vec {u}_R&\le \sup _{x, y \in B_{\left( \theta /2 \right) ^k} } \left| \vec {u}_R(x) - \vec {u}_R(y)\right| = \sup _{x, y \in B_{\left( \theta /2 \right) ^k} } \left| \left( \vec {u}_R(x) - \nu _k \right) - \left( \vec {u}_R(y) - \nu _k \right) \right| \\&= \sup _{x, y \in B_{\left( \theta /2 \right) ^k} } \left| \vec {u}_{R,k} (x) - \vec {u}_{R,k}(y)\right| . \end{aligned}$$

It then follows from an application of (51) in Lemma 19 that

$$\begin{aligned} \underset{r}{{{\,\textrm{osc}\,}}}\ \, \vec {u}_R \le 2 \left( 1 - \frac{\delta }{2} \right) ^{k} = 2 \left( \frac{2}{\theta } \right) ^\eta \left( \frac{\theta }{2} \right) ^{\eta (k+1)} \le 4 r ^\eta . \end{aligned}$$
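The dyadic bookkeeping above can be checked numerically (a sketch with arbitrary sample values of \(\theta \) and \(\delta \); they are not constants from the text): for each \(r \in (0, 1]\), choosing k with \(\left( \theta /2 \right) ^{k+1} < r \le \left( \theta /2 \right) ^k\) indeed yields \(2\left( 1 - \frac{\delta }{2} \right) ^k \le 4 r^\eta \):

```python
import math

delta, theta = 0.5, 0.8   # arbitrary sample values: delta in (0,1), theta <= 1
q = theta / 2
eta = math.log(1 - delta / 2) / math.log(q)   # (1 - delta/2) = q^eta, 0 < eta <= 1

ok = True
for i in range(1, 201):
    r = i / 200.0                              # r ranges over (0, 1]
    k = math.floor(math.log(r) / math.log(q))  # so that q^{k+1} < r <= q^k
    if r > q ** k:                             # guard against floating-point rounding
        k -= 1
    ok = ok and 2 * (1 - delta / 2) ** k <= 4 * r ** eta + 1e-12
print(ok)
```

The extra factor of 2 is exactly the price of \(r > \left( \theta /2 \right) ^{k+1}\) rather than \(r > \left( \theta /2 \right) ^{k}\), since \(\left( 2/\theta \right) ^\eta = \left( 1 - \delta /2 \right) ^{-1} < 2\).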

Take \(x, y \in B\left( 0, \frac{R}{2} \right) \) and set \(\tilde{r} = \frac{\left| x - y\right| }{2R} < \frac{1}{2}\). For any \(c > 1\), it holds that \(\pm \frac{x - y}{2R} \in B(0, c\tilde{r})\), so we choose \(c \in (1, 2]\) for which \(c \tilde{r} \le 1\). Then we have

$$\begin{aligned} \left| \vec {u}_R\left( \frac{x - y}{2R} \right) - \vec {u}_R\left( \frac{y - x}{2R} \right) \right| \le \underset{c \tilde{r}}{{{\,\textrm{osc}\,}}}\ \, \vec {u}_R \le 4 (c\tilde{r}) ^\eta \le 4 \left( \frac{\left| x - y\right| }{R} \right) ^\eta . \end{aligned}$$

With \(x_0 = \frac{1}{2} (x + y) \in B\left( 0, \frac{R}{2} \right) \), we have \( \vec {u}_R\left( \frac{x - y}{2R} \right) = \vec {u}(x)/\left\| \vec {u}\right\| _{L^\infty (B(x_0, \frac{R}{2}))}\) and \( \vec {u}_R\left( \frac{y - x}{2R} \right) = \vec {u}(y)/\left\| \vec {u}\right\| _{L^\infty (B(x_0, \frac{R}{2}))}\). Therefore,

$$\begin{aligned} \left| \vec {u}(x) - \vec {u}(y)\right|&= \left| \vec {u}_R\left( \frac{x - y}{2R} \right) - \vec {u}_R\left( \frac{y - x}{2R} \right) \right| \left\| \vec {u}\right\| _{L^\infty (B(x_0, \frac{R}{2}))}\\&\le 4 \left( \frac{\left| x - y\right| }{R} \right) ^\eta \left\| \vec {u}\right\| _{L^\infty (B(0, R))}. \end{aligned}$$

Since \(x, y \in B\left( 0, \frac{R}{2} \right) \) were arbitrary, the proof of (48) is complete for any \(R \le 2 = R_0\). Estimate (49) follows from (48) and another application of Proposition 9 with a modified choice of radii.

As usual, the case of \(R_0 \ne 2\) follows from a scaling argument. With \(V_{R_0}(x) = \left( \frac{R_0}{2} \right) ^2 V\left( \frac{R_0}{2} x \right) \), we have

$$\begin{aligned} \left\| V_{R_0}\right\| _{L^p(B_2)}^{-\frac{1}{2 - \frac{n}{p}}} = \left[ \left( \frac{R_0}{2} \right) ^{2 - \frac{n}{p}} \left\| V\right\| _{L^p(B_{R_0})}\right] ^{-\frac{1}{2 - \frac{n}{p}}} = \frac{2}{R_0} \left\| V\right\| _{L^p(B_{R_0})}^{-\frac{1}{2 - \frac{n}{p}}}, \end{aligned}$$

so the definition of \(\theta \) in (52) changes accordingly. \(\square \)
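The rescaling identity for the norm of \(V_{R_0}\) is elementary exponent algebra, since \(\left[ \left( R_0/2 \right) ^{2 - \frac{n}{p}}\right] ^{-\frac{1}{2 - n/p}} = \frac{2}{R_0}\); a quick numerical sketch (sample values only, with X standing in for \(\left\| V\right\| _{L^p(B_{R_0})}\)):

```python
# Arbitrary sample values (not from the text)
n, p, R0, X = 3, 2.0, 5.0, 4.0
b = 2 - n / p

lhs = ((R0 / 2) ** b * X) ** (-1 / b)   # ||V_{R0}||_{L^p(B_2)}^{-1/(2 - n/p)}
rhs = (2 / R0) * X ** (-1 / b)          # (2/R0) * ||V||_{L^p(B_{R0})}^{-1/(2 - n/p)}
print(abs(lhs - rhs) < 1e-12)
```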

Propositions 9 and 10 (after modifying the choice of radii) show that assumptions (IB) and (H) hold for any operator in the class of weakly coupled Schrödinger systems. Accordingly, the results of Sect. 6 hold for all such elliptic systems. That is, the fundamental matrices associated to these systems exist and satisfy Definition 11 as well as the statements in Theorem 8.

Finally, we point out that many of these results may be extended to weakly coupled elliptic systems with nontrivial first-order terms. As we do not consider such operators in this article, we omit the details.

8 Upper bounds

We now prove an exponential decay upper bound for the fundamental matrix associated to our elliptic operator. Going forward, the elliptic operator \(\mathcal {L}_V\) is given by (27), where the matrix A satisfies ellipticity and boundedness as described by (24) and (25), respectively. For the zeroth order term, we assume now that \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\cap {\mathcal{Q}\mathcal{C}}\) for some \(p \ge \frac{n}{2}\). As pointed out in Remark 6, the assumption that \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) for some \(p \ge \frac{n}{2}\) implies that (26) holds, so this setting is more restrictive than that of the last three sections. Since \(V \in {\mathcal {B}_p}\) for some \(p \ge \frac{n}{2}\), Lemma 2 implies that \(V \in L^{\frac{n}{2}+}_{{{\,\textrm{loc}\,}}}\); in particular, the Hölder continuity results for weakly coupled systems given in Proposition 10 hold in this setting. As such, there is no loss in assuming that \(p > \frac{n}{2}\), and we do so throughout. The additional condition that \(V \in {\mathcal{Q}\mathcal{C}}\) allows us to apply the Fefferman–Phong inequality described by Lemma 15. We also require that assumptions (IB) and (H) hold so that we can meaningfully discuss our fundamental matrices and draw conclusions about them.

We follow the general arguments of [2]. Our first lemma is as follows.

Lemma 20

(Upper bound lemma) Let \(\mathcal {L}_V\) be given by (27), where A satisfies (24) and (25), and \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\cap {\mathcal{Q}\mathcal{C}}\) for some \(p > \frac{n}{2}\). Let \(B \subseteq \mathbb {R}^n\) be a ball. Assume that \(\vec {u} \in W_{V}^{1, 2}(\mathbb {R}^n \backslash B)\) is a weak solution to \(\mathcal {L}_V\vec {u} = 0\) in \(\mathbb {R}^n \backslash B\). Let \(\phi \in C_c ^\infty (\mathbb {R}^n)\) satisfy \(\phi = 0\) on 2B and let \(g \in C^1(\mathbb {R}^n)\) be a nonnegative function satisfying \(\left| \nabla g(x)\right| \lesssim _{(n, p, C_V)} \underline{m}(x, V)\) for every \(x \in \mathbb {R}^n\). Then there exists \(\varepsilon _0\), \(C_0\), both depending on \(d, n, p, C_V, N_V, \lambda , \Lambda \), such that whenever \(\varepsilon \in \left( 0, \varepsilon _0 \right) \), it holds that

$$\begin{aligned} \int _{\mathbb {R}^n}\underline{m}(\cdot , V)^2 \left| \phi \vec {u}\right| ^2 e^{2 \varepsilon g} \, \le C_0 \int _{\mathbb {R}^n}\left| \vec {u}\right| ^2 \left| \nabla \phi \right| ^2 e^{2\varepsilon g} . \end{aligned}$$

The proof is a modification of the proof of Proposition 6.5 in [3].

Proof

Since \(\vec {u} \in W_{V}^{1, 2}(\mathbb {R}^n \backslash B)\), then by the definition of \(W^{1,2}_V(\Omega )\), there exists \(\vec {v} \in W_{V,0}^{1, 2}(\mathbb {R}^n)\) such that \(\vec {v}|_{\mathbb {R}^n {\setminus } B} = \vec {u}\). Set \(f = \phi e^{\varepsilon g}\) and define \(\vec {\psi } = f \vec {v}\); since \(\phi = 0\) on 2B and \(\vec {v} = \vec {u}\) on \(\mathbb {R}^n \setminus B\), we have \(\vec {\psi } = f \vec {u}\). By a modification to the arguments in Sect. 6, since \(\phi \in C_c^\infty (\mathbb {R}^n)\) and \(g \in C^1\left( \mathbb {R}^n \right) \), it holds that \(\vec {\psi } \in W_{V,0}^{1, 2}(\mathbb {R}^n)\). A similar argument shows that \(f^2 \vec {u} \in W^{1,2}_{V,0}\left( \mathbb {R}^n \right) \) as well.

We adopt notation used in [37]: For \(\mathbb {R}^d\)-valued functions \(\vec {u}\) and \(\vec {v}\), and for a scalar function f, we write

$$\begin{aligned} A \, D\vec {u} \, D\vec {v} = A_{ij} ^{\alpha \beta } D_\beta u^i D_\alpha v^j, \qquad (\vec {u} \otimes \nabla f )_{i\beta } = u^i D_\beta f. \end{aligned}$$

By uniform ellipticity (24), we have

$$\begin{aligned} \int _{\mathbb {R}^n}\lambda \left| D \vec {\psi }\right| ^2 + \left\langle V \vec {\psi }, \vec {\psi } \right\rangle \le \int _{\mathbb {R}^n}A D\vec {\psi } D\vec {\psi } + \left\langle V \vec {\psi }, \vec {\psi } \right\rangle . \end{aligned}$$

Using that

$$\begin{aligned} D\vec {\psi } = D(f \vec {u}) = \vec {u} \otimes \nabla f + f D\vec {u}, \end{aligned}$$

we get

$$\begin{aligned} \int _{\mathbb {R}^n}\lambda \left| D \vec {\psi }\right| ^2 + \left\langle V \vec {\psi }, \vec {\psi } \right\rangle&\lesssim \int _{\mathbb {R}^n}A (\vec {u} \otimes \nabla f)(\vec {u} \otimes \nabla f) + A (\vec {u} \otimes \nabla f) (f D \vec {u}) \nonumber \\&\quad + \int _{\mathbb {R}^n} A (f D\vec {u}) (\vec {u} \otimes \nabla f) + A (f D\vec {u}) (f D\vec {u}) + \left\langle V\vec {u}, f^2 \vec {u} \right\rangle . \end{aligned}$$
(53)

Since \(\vec {v} \in W^{1,2}_{V,0}\left( \mathbb {R}^n \right) \), \(\vec {v}|_{\mathbb {R}^n {\setminus } B} = \vec {u} \in W^{1,2}_V\left( \mathbb {R}^n {\setminus } B \right) \) is a weak solution away from B, and \(f^2 \vec {u} \in W^{1,2}_{V,0}\left( \mathbb {R}^n \right) \) is supported away from B, then \(\mathcal {B}\left[ \vec {u}, f^2 \vec {u}\right] = 0.\) That is,

$$\begin{aligned} 0&= \int _{\mathbb {R}^n}A \, D\vec {u} \, D(f^2\vec {u}) + \left\langle V\vec {u}, f^2 \vec {u} \right\rangle \\&=\int _{\mathbb {R}^n}2 A\, D\vec {u} (f \vec {u} \otimes \nabla f ) + A \, D\vec {u} \, f^2 D \vec {u} + \left\langle V\vec {u}, f^2 \vec {u} \right\rangle . \end{aligned}$$

Plugging this into (53) gives

$$\begin{aligned} \int _{\mathbb {R}^n}\left| D \vec {\psi }\right| ^2 + \left\langle V \vec {\psi }, \vec {\psi } \right\rangle&\lesssim _{(\lambda )} \int _{\mathbb {R}^n}A (\vec {u} \otimes \nabla f)(\vec {u} \otimes \nabla f) + A (\vec {u} \otimes \nabla f) (f D \vec {u}) \nonumber \\&\quad - \int _{\mathbb {R}^n}A (f D\vec {u}) (\vec {u} \otimes \nabla f). \end{aligned}$$
(54)

Now we obtain an upper bound for the right hand side of (54). Using the boundedness of A from (25) and Cauchy’s inequality, we get that for any \(\delta ' > 0\),

$$\begin{aligned} \left| A (\vec {u} \otimes \nabla f)(f D\vec {u})\right| \le \Lambda \left| \vec {u}\right| \left| f\right| \left| D\vec {u}\right| \left| \nabla f\right| \le \delta ' \left| f\right| ^2 \left| D\vec {u}\right| ^2 + \frac{C(\Lambda )}{\delta '} \left| \vec {u}\right| ^2 \left| \nabla f\right| ^2 \end{aligned}$$

and similarly

$$\begin{aligned} \left| A (f D\vec {u}) (\vec {u} \otimes \nabla f)\right| \le \delta ' \left| f\right| ^2 \left| D\vec {u}\right| ^2 + \frac{C(\Lambda )}{\delta '} \left| \vec {u}\right| ^2 \left| \nabla f\right| ^2, \end{aligned}$$

while

$$\begin{aligned} \left| A (\vec {u} \otimes \nabla f)(\vec {u} \otimes \nabla f)\right| \le \Lambda \left| \vec {u}\right| ^2 \left| \nabla f\right| ^2. \end{aligned}$$

Then with \(\delta \simeq _{(\lambda )} \delta '\), we see that

$$\begin{aligned} \int _{\mathbb {R}^n}\left| D \vec {\psi }\right| ^2 + \left\langle V \vec {\psi }, \vec {\psi } \right\rangle \le \delta \int _{\mathbb {R}^n}\left| f\right| ^2 \left| D\vec {u}\right| ^2 + C\left( \delta , \lambda , \Lambda \right) \int _{\mathbb {R}^n}\left| \vec {u}\right| ^2 \left| \nabla f\right| ^2. \end{aligned}$$
(55)

Since \(\vec {\psi } = f \vec {u}\), then

$$\begin{aligned} \left| D\vec {\psi }\right| ^2&= \left\langle \vec {u} \otimes \nabla f + f D\vec {u}, \vec {u} \otimes \nabla f + f D\vec {u} \right\rangle _{\text {tr}} \\&= \left| f\right| ^2 \left| D\vec {u}\right| ^2 + 2 f \left\langle D\vec {u}, \vec {u} \otimes \nabla f \right\rangle _{\text {tr}} + \left| \vec {u}\right| ^2 \left| \nabla f\right| ^2. \end{aligned}$$

The Cauchy inequality implies that

$$\begin{aligned} \int _{\mathbb {R}^n}\left| f\right| ^2 \left| D\vec {u}\right| ^2&\le \int _{\mathbb {R}^n}\left| D\vec {\psi }\right| ^2 + \left| \vec {u}\right| ^2 \left| \nabla f\right| ^2 + 2 \left| f\right| \left| D\vec {u}\right| \left| \vec {u} \otimes \nabla f\right| \\&\le \int _{\mathbb {R}^n}\left| D\vec {\psi }\right| ^2 + \left| \vec {u}\right| ^2 \left| \nabla f\right| ^2 + \frac{1}{2} \left| f\right| ^2 \left| D\vec {u}\right| ^2 + 2 \left| \vec {u}\right| ^2 \left| \nabla f\right| ^2. \end{aligned}$$

Since \( \int _{\mathbb {R}^n}\left| f\right| ^2 \left| D\vec {u}\right| ^2 < \infty \), then we can absorb the third term into the left to get

$$\begin{aligned} \int _{\mathbb {R}^n}\left| f\right| ^2 \left| D\vec {u}\right| ^2&\le \int _{\mathbb {R}^n}2 \left| D\vec {\psi }\right| ^2 + 6 \left| \vec {u}\right| ^2 \left| \nabla f\right| ^2. \end{aligned}$$
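The absorption argument above rests on a pointwise inequality for Hilbert–Schmidt norms: writing \(a = \vec {u} \otimes \nabla f\) and \(b = f D\vec {u}\), so that \(D\vec {\psi } = a + b\), the two displays combine to \(\left| b\right| ^2 \le 2\left| a + b\right| ^2 + 6\left| a\right| ^2\). A randomized sanity check of this inequality (illustrative only):

```python
import random

def norm2(v):
    # squared Euclidean (Hilbert-Schmidt) norm of a flattened array
    return sum(x * x for x in v)

random.seed(0)
ok = True
for _ in range(1000):
    m = random.randint(1, 9)
    a = [random.uniform(-10, 10) for _ in range(m)]  # plays the role of u (x) grad f
    b = [random.uniform(-10, 10) for _ in range(m)]  # plays the role of f Du
    s = [ai + bi for ai, bi in zip(a, b)]            # plays the role of D psi
    ok = ok and norm2(b) <= 2 * norm2(s) + 6 * norm2(a) + 1e-9
print(ok)
```

In fact \(\left| b\right| \le \left| a + b\right| + \left| a\right| \) already gives the sharper bound \(\left| b\right| ^2 \le 2\left| a + b\right| ^2 + 2\left| a\right| ^2\); the constant 6 comes from the particular Cauchy splitting used in the text.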

Plugging this expression into (55) shows that

$$\begin{aligned} \int _{\mathbb {R}^n}\left| D \vec {\psi }\right| ^2 + \left\langle V \vec {\psi }, \vec {\psi } \right\rangle \le 2 \delta \int _{\mathbb {R}^n}\left| D\vec {\psi }\right| ^2 + C\left( \delta , \lambda , \Lambda \right) \int _{\mathbb {R}^n}\left| \vec {u}\right| ^2 \left| \nabla f\right| ^2. \end{aligned}$$

Setting \(\delta = \frac{1}{3}\), we see that

$$\begin{aligned} \int _{\mathbb {R}^n}\left| D \vec {\psi }\right| ^2 + \left\langle V \vec {\psi }, \vec {\psi } \right\rangle \le C(\lambda , \Lambda ) \int _{\mathbb {R}^n}\left| \vec {u}\right| ^2 \left| \nabla f\right| ^2. \end{aligned}$$
(56)

To apply Lemma 15 to \(\vec {\psi }\), we require that \(\vec {\psi } \in C^1_0\left( \mathbb {R}^n \right) ^d\), so we use a limiting argument. Since \(\vec {\psi } \in W^{1,2}_{V,0}\left( \mathbb {R}^n \right) \), the arguments of Sect. 6 give a sequence \(\left\{ \vec {\psi }_k\right\} _{k = 1}^\infty \subseteq C^\infty _c\left( \mathbb {R}^n \right) ^d\) for which \(\vec {\psi }_k \rightarrow \vec {\psi }\) in \(W^{1,2}_{V,0}\left( \mathbb {R}^n \right) \). Moreover, since \(\vec {\psi }_k \rightarrow \vec {\psi }\) in \(L^{2^*}\left( \mathbb {R}^n \right) \) (as shown in the proof of Proposition 7), there exists a subsequence that converges a.e. to \(\vec {\psi }\). After relabeling the indices, we may assume that \(\vec {\psi }_k \rightarrow \vec {\psi }\) a.e. and in \(W^{1,2}_{V,0}\left( \mathbb {R}^n \right) \). Fatou's Lemma followed by Lemma 15 applied to each \(\vec {\psi }_k \in C^\infty _c\left( \mathbb {R}^n \right) ^d\) gives

$$\begin{aligned} \int _{\mathbb {R}^n}\left| \vec {\psi }\right| ^2 \underline{m}(\cdot , V)^2&\le \liminf _{k \rightarrow \infty } \int _{\mathbb {R}^n}\left| \vec {\psi _k}\right| ^2 \underline{m}(\cdot , V)^2 \\&\le \liminf _{k \rightarrow \infty } c_1 \left( \int _{\mathbb {R}^n}\left| D\vec {\psi _k}\right| ^2 + \left\langle V \vec {\psi }_k, \vec {\psi }_k \right\rangle \right) \\&= c_1 \left( \int _{\mathbb {R}^n}\left| D\vec {\psi }\right| ^2 + \left\langle V \vec {\psi }, \vec {\psi } \right\rangle \right) , \end{aligned}$$

where the last line uses convergence in \(W^{1,2}_{V,0}\left( \mathbb {R}^n \right) \) and \(c_1 = c_1\left( d, n, p, C_V, N_V \right) \). Combining this inequality with (56) shows that

$$\begin{aligned} \int _{\mathbb {R}^n}\left| \phi \vec {u}\right| ^2 \underline{m}(\cdot , V)^2 e^{2 \varepsilon g}&\le c_1 \left( \int _{\mathbb {R}^n}\left| D\vec {\psi }\right| ^2 + \left\langle V \vec {\psi }, \vec {\psi } \right\rangle \right) \le c_2 \int _{\mathbb {R}^n}\left| \vec {u}\right| ^2 \left| \nabla f\right| ^2 \, dx \\&\le 2 c_2 \varepsilon ^2 \int _{\mathbb {R}^n}\left| \nabla g\right| ^2 e^{2 \varepsilon g} \left| \phi \vec {u}\right| ^2 + 2 c_2 \int _{\mathbb {R}^n}\left| \vec {u}\right| ^2 \left| \nabla \phi \right| ^2 e^{2\varepsilon g} \\&\le 2 c_3 \varepsilon ^2 \int _{\mathbb {R}^n}\underline{m}(\cdot , V)^2 \left| \phi \vec {u}\right| ^2 e^{2 \varepsilon g} + 2 c_2 \int _{\mathbb {R}^n}\left| \vec {u}\right| ^2 \left| \nabla \phi \right| ^2 e^{2\varepsilon g}, \end{aligned}$$

where \(c_2 = c_2\left( d, n, p, C_V, N_V, \lambda , \Lambda \right) \) and \(c_3 = c_3\left( d, n, p, C_V, N_V, \lambda , \Lambda \right) \). If \(\varepsilon \) is sufficiently small, we may absorb the first term on the right into the left, completing the proof. \(\square \)

Remark 12

This proof uses that \(\vec {u}, f^2 \vec {u} \in W^{1,2}_{V,0}\left( \mathbb {R}^n \right) \) in order to make sense of the expression \(\mathcal {B}\left[ \vec {u}, f^2 \vec {u}\right] \). It also uses that \(f \in C^1_0\left( \mathbb {R}^n \right) \) and \(\vec {u} \in W^{1,2}_{V,0}\left( \mathbb {R}^n \right) \) to say that \(f D\vec {u} \in L^2\left( \mathbb {R}^n \right) \). Other assumptions on \(\vec {u}\) would still allow these arguments to carry through. More specifically, we can apply Lemma 20 with \(\vec {u} \in Y^{1,2}_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \) and \(f \in C^1_0\left( \mathbb {R}^n \right) \). To see this, let \({{\,\textrm{supp}\,}}f \subseteq \Omega \) where \(\overline{\Omega } \Subset \mathbb {R}^n\) and observe that since

$$\begin{aligned} \mathcal {B}\left[ \vec {u}, f^2 \vec {u}\right]&= \int _{\mathbb {R}^n} \left\langle A^{\alpha \beta } D_\beta \vec {u}, D_\alpha \left( f^2 \vec {u} \right) \right\rangle + \left\langle V \, \vec {u}, f^2 \vec {u} \right\rangle \\&= \int _\Omega f^2 \left\langle A^{\alpha \beta } D_\beta \vec {u}, D_\alpha \vec {u} \right\rangle + 2 f \left\langle A^{\alpha \beta } D_\beta \vec {u}, \vec {u} \, D_\alpha f \right\rangle + f^2 \left\langle V \, \vec {u}, \vec {u} \right\rangle \end{aligned}$$

then by applications of Hölder’s inequality,

$$\begin{aligned}&\left| \mathcal {B}\left[ \vec {u}, f^2 \vec {u}\right] \right| \le \Lambda \left\| f\right\| _{L^\infty \left( \Omega \right) }^2 \left\| D \vec {u}\right\| _{L^2\left( \Omega \right) }^2 + \left\| f\right\| _{L^\infty \left( \Omega \right) }^2 \left\| V\right\| _{L^{\frac{n}{2}}\left( \Omega \right) } \left\| \vec {u}\right\| _{L^{2^*}\left( \Omega \right) }^2 \\&\qquad + 2 \Lambda \left\| f\right\| _{L^\infty \left( \Omega \right) } \left\| D \vec {u}\right\| _{L^2\left( \Omega \right) } \left\| \vec {u}\right\| _{L^{2^*}\left( \Omega \right) } \left\| D f\right\| _{L^n\left( \Omega \right) } \\&\quad \le 2 \Lambda \left\| f\right\| _{L^\infty \left( \Omega \right) }^2 \left\| D \vec {u}\right\| _{L^2\left( \Omega \right) }^2 + \left( \left\| D f\right\| _{L^n\left( \Omega \right) }^2+ \left\| f\right\| _{L^\infty \left( \Omega \right) }^2 \left\| V\right\| _{L^{\frac{n}{2}}\left( \Omega \right) } \right) \left\| \vec {u}\right\| _{L^{2^*}\left( \Omega \right) }^2\\&\quad \lesssim _{\left( \Lambda , \left\| f\right\| \right) } \left\| \vec {u}\right\| _{Y^{1,2}\left( \Omega \right) }^2. \end{aligned}$$

In particular, this shows that \(\mathcal {B}\left[ \vec {u}, f^2 \vec {u}\right] \) is well-defined and finite. Moreover, since \(D \vec {u} \in L^2_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \right) \) and \(\overline{{{\,\textrm{supp}\,}}f }\Subset \mathbb {R}^n\), then \(f D\vec {u} \in L^2\left( \mathbb {R}^n \right) \).

Remark 13

Going forward, we say that a constant C depends on \(\mathcal {L}_V\) to mean that C has the same dependence as the constants in Theorem 8. That is, \(C = C\left( \mathcal {L}_V \right) \) means that C depends on d, n, \(\lambda \), \(\Lambda \), and \(C_\mathrm{{IB}}\).

We now prove our upper bound.

Theorem 11

(Exponential upper bound) Let \(\mathcal {L}_V\) be given by (27), where A satisfies (24) and (25), and \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\cap {\mathcal{Q}\mathcal{C}}\) for some \(p > \frac{n}{2}\). Assume that (IB) and (H) hold. Let \(\Gamma ^V(x, y)\) denote the fundamental matrix of \(\mathcal {L}_V\) and let \(\varepsilon _0\) be as given in Lemma 20. For any \(\varepsilon < \varepsilon _0\), there exists \(C = C(\mathcal {L}_V, p, C_V, N_V, \varepsilon )\) so that for all \(x, y \in \mathbb {R}^n\),

$$\begin{aligned} \left| \Gamma ^V(x, y)\right| \le \frac{C e^{-\varepsilon \underline{d}(x, y, V)}}{\left| x - y\right| ^{n-2}}. \end{aligned}$$

The following proof is similar to that of [3, Theorem 6.7].

Proof

Fix \(x, y \in \mathbb {R}^n\) with \(x \ne y\). If \(\underline{d}\left( x, y, V \right) \lesssim _{(n, p, C_V)} 1\), then \(e^{-C\varepsilon } \le e^{-\varepsilon \underline{d}\left( x, y, V \right) }\), so the result follows from (37) in Theorem 8. Therefore, we focus on \(x, y \in \mathbb {R}^n\) for which \(\underline{d}\left( x, y, V \right) \gtrsim _{(n, p, C_V)} 1\). By Lemma 11, we can assume \(\left| x-y\right| > \frac{C}{\underline{m}(x, V)}\) since otherwise \(\underline{d}(x, y, V) \lesssim _{(n, p, C_V)} 1\). Likewise, we can assume \(\left| x-y\right| > \frac{C}{\underline{m}(y, V)}\). Finally, we can assume

$$\begin{aligned} B\left( x, \frac{4}{\underline{m}(x, V)} \right) \cap B\left( y, \frac{4}{\underline{m}(y, V)} \right) = \emptyset \end{aligned}$$
(57)

for if not, then the triangle inequality shows that

$$\begin{aligned} \left| x-y\right| \le 8 \max \left\{ \frac{1}{\underline{m}(x, V)}, \frac{1}{\underline{m}(y, V)}\right\} \end{aligned}$$

so that again \(\underline{d}(x, y, V) \lesssim _{(n, p, C_V)} 1\).

Let \(r = \frac{1}{\underline{m}(y, V)}\) and pick \(M > 0\) large enough so that \(B(y, 4r) \subseteq B(0, M)\). Let \(\phi \in C_c ^\infty (\mathbb {R}^n)\) be such that \(\phi \equiv 0\) on \(B(y, 2r), \phi \equiv 1\) on \(B(0, M) \backslash B(y, 4r), \phi \equiv 0\) on \(\mathbb {R}^n \backslash B(0, 2\,M)\),

$$\begin{aligned} \left| \nabla \phi \right| \lesssim \frac{1}{r} \text { on } B(y, 4r) \backslash B(y, 2r) \text { and } \left| \nabla \phi \right| \le \frac{2}{M} \text { on } B(0, 2M) \backslash B(0, M). \end{aligned}$$

The next step is to apply Lemma 20. We take \(B = B(y, r)\), \(\vec {u}\) to be each of the individual columns of \(\Gamma ^V(\cdot , y)\), and \(g = \varphi _{V, j}\left( \cdot , y \right) \), where \(\varphi _{V, j} \in C^\infty (\mathbb {R}^n)\) is as in Lemma 14. Since \(\Gamma ^V\left( \cdot , y \right) \in Y^{1,2}\left( \mathbb {R}^n \setminus B \right) \), then it can be shown that \(\phi ^2 \Gamma ^V\left( \cdot , y \right) e^{2\varepsilon \varphi _{V, j}\left( \cdot , y \right) } \in Y^{1,2}_0\left( \mathbb {R}^n {\setminus } B \right) \). Since \(C^\infty _c\left( \mathbb {R}^n \setminus B \right) \) is dense in \(Y^{1,2}_0\left( \mathbb {R}^n {\setminus } B \right) \), then the expression \(\mathcal {B}\left[ \Gamma ^V\left( \cdot , y \right) , \phi ^2 \Gamma ^V\left( \cdot , y \right) e^{2\varepsilon \varphi _{V, j}\left( \cdot , y \right) }\right] \) is meaningful and equals zero, see Definition 11(a). Moreover, since \(\Gamma ^V\left( \cdot , y \right) \in Y^{1,2}\left( \mathbb {R}^n {\setminus } B \right) \), then \(\phi D \Gamma ^V\left( \cdot , y \right) e^{\varepsilon \varphi _{V, j}\left( \cdot , y \right) } \in L^2\left( \mathbb {R}^n \right) \). In particular, according to Remark 12, we can apply Lemma 20. Doing so, we see that for any \(\varepsilon < \varepsilon _0\),

$$\begin{aligned}&\int _{B(0, M) \backslash B(y, 4r)} \underline{m}(\cdot , V) ^2 \left| {\Gamma ^V}(\cdot , y)\right| ^2 e^{2\varepsilon \varphi _{V, j}\left( \cdot , y \right) }\\&\quad \le \int _{\mathbb {R}^n}\underline{m}(\cdot , V) ^2 \left| \phi \Gamma ^V(\cdot , y)\right| ^2 e^{2\varepsilon \varphi _{V, j}\left( \cdot , y \right) } \\&\quad \le C_0 \int _{\mathbb {R}^n}\left| {\Gamma ^V}(\cdot , y)\right| ^2 \left| \nabla \phi \right| ^2 e^{2\varepsilon \varphi _{V, j}\left( \cdot , y \right) } \\&\quad \le \frac{C_0}{r^2} \int _{B(y, 4r) \backslash B(y, 2r)} \left| {\Gamma ^V}(\cdot , y)\right| ^2 e^{2\varepsilon \varphi _{V, j}\left( \cdot , y \right) } \\&\qquad + \frac{C_0}{M^2} \int _{B(0, 2M) \backslash B(0, M)} \left| {\Gamma ^V}(\cdot , y)\right| ^2 e^{2\varepsilon \varphi _{V, j}\left( \cdot , y \right) }. \end{aligned}$$

For each fixed j, \(\varphi _{V, j}\left( \cdot , y \right) \) is bounded on \(\mathbb {R}^n\). Applying (37) then shows that

$$\begin{aligned} \frac{1}{M^2} \int _{B(0, 2M) \backslash B(0, M)} \left| {\Gamma ^V}(\cdot , y)\right| ^2 e^{2\varepsilon \varphi _{V, j}\left( \cdot , y \right) } \lesssim M^{n-2} (M - \left| y\right| )^{4 - 2n} \rightarrow 0 \text { as } M \rightarrow \infty , \end{aligned}$$

and so

$$\begin{aligned}&\int _{\mathbb {R}^n \backslash B(y, 4r)} \underline{m}(\cdot , V) ^2 \left| {\Gamma ^V}(\cdot , y)\right| ^2 e^{2\varepsilon \varphi _{V, j}\left( \cdot , y \right) } \nonumber \\&\quad \le \frac{C_0}{r^2} \int _{B(y, 4r) \backslash B(y, 2r)} \left| {\Gamma ^V}(\cdot , y)\right| ^2 e^{2\varepsilon \varphi _{V, j}\left( \cdot , y \right) }. \end{aligned}$$
(58)

By Lemma 11 and our choice of r, if \(z \in B(y, 4r) \backslash B(y, 2r)\), then \(\underline{d}(z, y, V) \lesssim _{(n, p, C_V)} 1\). It follows from Lemmas 13 and 14 that \(\varphi _{V, j}(z, y) \le \varphi _{V}(z, y) \lesssim _{(n, p, C_V)} 1\). Combining this observation with (37), (58), Fatou's Lemma, and Lemma 14 shows that

$$\begin{aligned} \int _{\mathbb {R}^n \backslash B(y, 4r)} \underline{m}(z, V)^2 \left| \Gamma ^V(z, y)\right| ^2 e^{2\varepsilon \varphi _{V}(z, y)} dz \lesssim _{(\mathcal {L}_V, p, C_V, N_V)} r^{2-n}. \end{aligned}$$

If we set \(R = \frac{1}{\underline{m}(x, V)}\) then (57) shows that \(B(x, R) \subseteq \mathbb {R}^n \backslash B(y, 4r)\). Consequently,

$$\begin{aligned} \int _{B(x, R)} \underline{m}(z, V) ^2 \left| {\Gamma ^V}(z, y)\right| ^2 e^{2\varepsilon \underline{d}(z, y, V)} \, dz \lesssim _{(\mathcal {L}_V, p, C_V, N_V)} r^{2-n}. \end{aligned}$$

An application of the triangle inequality and Lemma 11 shows that for \(z \in B(x, R)\),

$$\begin{aligned} \underline{d}\left( x, y, V \right)&\le \underline{d}\left( x, z, V \right) + \underline{d}\left( z, y, V \right) \le L + \underline{d}\left( z, y, V \right) , \end{aligned}$$

where \(L = L\left( n, p, C_V \right) \), so that

$$\begin{aligned} e^{2\varepsilon \underline{d}(z, y, V)} \ge C\left( n, p, C_V, \varepsilon \right) e^{2 \varepsilon \underline{d}(x, y, V)}. \end{aligned}$$

Furthermore, Lemma 10(a) shows that \(R^{-1} = \underline{m}(x, V) \simeq _{(n, p, C_V)} \underline{m}(z, V)\) for every \(z \in B(x, R)\), so that

$$\begin{aligned} \int _{B(x, R)} \left| {\Gamma ^V}(z, y)\right| ^2 \, dz \lesssim _{(\mathcal {L}_V, p, C_V, N_V, \varepsilon )} R^2 r^{2-n} e^{-2 \varepsilon \underline{d}(x, y, V)} = R^n \left[ \underline{m}\left( x, V \right) \underline{m}\left( y, V \right) \right] ^{n-2} e^{-2 \varepsilon \underline{d}(x, y, V)}. \end{aligned}$$
(59)

Choose \(\gamma : \left[ 0,1\right] \rightarrow \mathbb {R}^n\) so that \(\gamma \left( 0 \right) = x\), \(\gamma \left( 1 \right) = y\) and

$$\begin{aligned} 2 \underline{d}\left( x, y, V \right) \ge \int _0^1 \underline{m}\left( \gamma \left( t \right) , V \right) \left| \gamma '\left( t \right) \right| dt. \end{aligned}$$

It follows from Lemma 10(c) that

$$\begin{aligned} \underline{d}\left( x, y, V \right) \ge \frac{c}{2} \int _0^1 \frac{\underline{m}\left( x, V \right) \left| \gamma '\left( t \right) \right| dt}{\left[ 1 + \left| \gamma \left( t \right) - x\right| \underline{m}\left( x, V \right) \right] ^{k_0/(k_0+1)}} = \frac{c}{2} \int _0^1 \frac{\left| {\widetilde{\gamma }} \,'\left( t \right) \right| dt}{\left[ 1 + \left| {\widetilde{\gamma }}\left( t \right) \right| \right] ^{\frac{k_0}{(k_0+1)}}}, \end{aligned}$$

where \({\widetilde{\gamma }}: \left[ 0,1\right] \rightarrow \mathbb {R}^n\) is a shifted, rescaled version of \(\gamma \). That is, \({\widetilde{\gamma }}(0) = 0\) and \({\widetilde{\gamma }}(1) = \underline{m}\left( x, V \right) \left( y - x \right) \). This integral is bounded from below by the geodesic distance from 0 to \(\underline{m}\left( x, V \right) \left( y - x \right) \) in the metric

$$\begin{aligned} \frac{dz}{\left( 1 + \left| z\right| \right) ^{k_0/(k_0+1)}}. \end{aligned}$$

A computation shows that the straight line path achieves this minimum. Therefore,

$$\begin{aligned} \underline{d}\left( x, y, V \right)&\ge \frac{c}{2} \int _0^1 \frac{\underline{m}\left( x, V \right) \left| y - x\right| dt}{\left[ 1 + \underline{m}\left( x, V \right) t \left| y - x\right| \right] ^{k_0/(k_0+1)}} \\ {}&= \frac{c(k_0+1)}{2} \left[ \left( 1+\underline{m}\left( x, V \right) \left| y - x\right| \right) ^{\frac{1}{k_0+1}} - 1\right] \ge C \left( \underline{m}\left( x, V \right) \left| y - x\right| \right) ^{\frac{1}{k_0+1}}, \end{aligned}$$

where we have used that \(\left| x - y\right| \ge \frac{4}{\underline{m}\left( x, V \right) }\) to reach the final line. In particular, for any \(\varepsilon ' > 0\), it holds that
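The closed-form evaluation of the integral above follows from the antiderivative \(\frac{d}{dt}\left[ (k_0 + 1)(1 + at)^{\frac{1}{k_0+1}}\right] = a (1 + at)^{-\frac{k_0}{k_0+1}}\) with \(a = \underline{m}\left( x, V \right) \left| y - x\right| \). A numerical comparison against midpoint-rule quadrature (with arbitrary sample values of a and \(k_0\)):

```python
def closed_form(a, k0):
    # (k0 + 1) * [ (1 + a)^{1/(k0+1)} - 1 ]
    return (k0 + 1) * ((1 + a) ** (1 / (k0 + 1)) - 1)

def midpoint(a, k0, N=100000):
    # midpoint rule for int_0^1 a * (1 + a*t)^{-k0/(k0+1)} dt
    h = 1.0 / N
    return h * sum(a * (1 + a * (i + 0.5) * h) ** (-k0 / (k0 + 1))
                   for i in range(N))

a, k0 = 7.3, 2   # arbitrary sample values
err = abs(midpoint(a, k0) - closed_form(a, k0))
print(err < 1e-6)
```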

$$\begin{aligned} \underline{m}\left( x, V \right) \left| y - x\right| \le \frac{1}{C^{k_0 + 1}} \underline{d}(x, y, V)^{k_0 + 1} \le \frac{1}{C^{k_0 + 1}} C_{\varepsilon '} e^{\varepsilon ' \underline{d}(x, y, V)/2}, \end{aligned}$$
(60)

where \(C_{\varepsilon '} > 0\) depends on \(\varepsilon '\). A similar argument shows that

$$\begin{aligned} \underline{m}\left( y, V \right) \left| y - x\right| \le \frac{1}{C^{k_0 + 1}} C_{\varepsilon '} e^{\varepsilon ' \underline{d}(x, y, V)/2}. \end{aligned}$$
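The constant \(C_{\varepsilon '}\) can be made explicit, though only its existence is needed here: by elementary calculus, \(\sup _{t \ge 0} t^{k_0+1} e^{-\varepsilon ' t/2}\) is attained at \(t = \frac{2(k_0+1)}{\varepsilon '}\), so one may take \(C_{\varepsilon '} = \left( \frac{2(k_0+1)}{\varepsilon ' e} \right) ^{k_0+1}\). A numerical check with arbitrary sample values:

```python
import math

k0, eps = 2, 0.3   # arbitrary sample values of k0 and epsilon'
m = k0 + 1

# Claimed supremum of t^m * exp(-eps*t/2), attained at t = 2*m/eps (= 20 here)
C = (2 * m / (eps * math.e)) ** m

# Coarse numerical supremum over a grid containing the maximizer
sup_numeric = max((t / 100.0) ** m * math.exp(-eps * (t / 100.0) / 2)
                  for t in range(20001))
print(sup_numeric <= C + 1e-9, C - sup_numeric < 1e-2 * C)
```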

Multiplying these two bounds gives

$$\begin{aligned} \underline{m}\left( x, V \right) \underline{m}\left( y, V \right) \le C^{-2(k_0 + 1)} C_{\varepsilon '}^2 e^{\varepsilon ' \underline{d}(x, y, V)} \left| y - x\right| ^{-2}. \end{aligned}$$

Define \(\varepsilon ' = \frac{\varepsilon }{n-2}\). We then substitute this upper bound into (59) and simplify to get

$$\begin{aligned} \int _{B(x, R)} \left| {\Gamma ^V}(z, y)\right| ^2 \, dz \lesssim _{(\mathcal {L}_V, p, C_V, N_V, \varepsilon )} R^n \frac{e^{- \varepsilon \underline{d}(x, y, V)}}{\left| x - y\right| ^{2(n-2)}}. \end{aligned}$$
(61)

Finally, since we assume that \(y \not \in B(x, R)\), then \(\mathcal {L}_V\Gamma ^V\left( \cdot , y \right) = 0\) in \(B(x, R)\). In particular, (38) from assumption (IB) is applicable, so that

$$\begin{aligned} \left| \Gamma ^V(x, y)\right| ^2 \lesssim _{(\mathcal {L}_V)} \frac{1}{R^n} \int _{B(x, R)} \left| {\Gamma ^V}(z, y)\right| ^2 \, dz, \end{aligned}$$

and the conclusion follows, after renaming \(\varepsilon \), by combining the previous two inequalities. \(\square \)

Remark 14

As in [3], if instead of assuming (IB) and (H), we assume that \(\Gamma ^V\) exists and satisfies the pointwise bound described by (37), then (61) holds.

Define the diagonal matrix \(\Lambda = \left| V\right| I\), where \(\left| V\right| = \lambda _d \in {\text {B}_p}\) is the largest eigenvalue of V and I denotes the \(d \times d\) identity matrix. Then set \(\mathcal {L}_\Lambda = -D_\alpha \left( A^{\alpha \beta } D_\beta \right) + \Lambda \) to be the associated Schrödinger operator. We let \(\Gamma ^\Lambda \) denote the fundamental matrix for \(\mathcal {L}_\Lambda \). Since the assumptions imposed to make sense of \(\Gamma ^V\) are inherited for \(\Gamma ^\Lambda \), then \(\Gamma ^\Lambda \) exists and satisfies the conclusions of Theorem 8 as well. Because \(\Lambda \) is diagonal, its upper and lower auxiliary functions coincide and are comparable to \(\overline{m}\left( x, V \right) \) by Lemma 9. That is, \(\underline{m}\left( x, \Lambda \right) = \overline{m}\left( x, \Lambda \right) {\simeq } \overline{m}\left( x, V \right) \) so that \(\underline{d}\left( x, y, \Lambda \right) = \overline{d}\left( x, y, \Lambda \right) {\simeq } \overline{d}\left( x, y, V \right) \). As such, we can obtain an upper bound for \(\Gamma ^\Lambda \) without having to assume that \(V \in {\mathcal{Q}\mathcal{C}}\) or even that \(V \in \mathcal{N}\mathcal{D}\), see Remark 2. We accomplish this by applying the following lemma in place of Lemma 20.

Lemma 21

(Upper bound lemma for \(V = \Lambda )\) Let \(\mathcal {L}_\Lambda \) be as defined above, where A satisfies (24) and (25), and \(\left| V\right| \in {\text {B}_p}\) for some \(p > \frac{n}{2}\). Let \(B \subseteq \mathbb {R}^n\) be a ball. Assume that \(\vec {u} \in W_{\left| V\right| I}^{1, 2}(\mathbb {R}^n \backslash B)\) is a weak solution to \(\mathcal {L}_\Lambda \vec {u} = 0\) in \(\mathbb {R}^n \backslash B\). Let \(\phi \in C_c ^\infty (\mathbb {R}^n)\) satisfy \(\phi = 0\) on 2B and let \(g \in C^1(\mathbb {R}^n)\) be a nonnegative function satisfying \(\left| \nabla g(x)\right| \lesssim _{(n, p, C_V)} m(x, \left| V\right| )\) for every \(x \in \mathbb {R}^n\). Then there exist \(\varepsilon _1\) and \(C_1\), both depending on \(d, n, p, C_{\left| V\right| }, \lambda , \Lambda \), such that whenever \(\varepsilon \in \left( 0, \varepsilon _1 \right) \), it holds that

$$\begin{aligned} \int _{\mathbb {R}^n}m(\cdot , \left| V\right| )^2 \left| \phi \vec {u}\right| ^2 e^{2 \varepsilon g} \, \le C_1 \int _{\mathbb {R}^n}\left| \vec {u}\right| ^2 \left| \nabla \phi \right| ^2 e^{2\varepsilon g} . \end{aligned}$$

The proof of this result exactly follows that of Lemma 20 except that the Fefferman–Phong inequality described by Corollary 2 is used in place of Lemma 15. We arrive at the following corollary to Theorem 11.

Corollary 4

(Exponential upper bound for \(V = \Lambda )\) Let \(\mathcal {L}_\Lambda = -D_\alpha \left( A^{\alpha \beta } D_\beta \right) + \left| V\right| I\), where A satisfies (24) and (25), and \(\left| V\right| \in {\text {B}_p}\) for some \(p > \frac{n}{2}\). Assume that (IB) and (H) hold. Let \(\Gamma ^\Lambda (x, y)\) denote the fundamental matrix of \(\mathcal {L}_\Lambda \) and let \(\varepsilon _1\) be as given in Lemma 21. For any \(\varepsilon < \varepsilon _1\), there exists \(C = C(\mathcal {L}_\Lambda , p, C_{\left| V\right| }, \varepsilon )\) so that for all \(x, y \in \mathbb {R}^n\),

$$\begin{aligned} \left| \Gamma ^\Lambda (x, y)\right| \le \frac{C e^{-\varepsilon \overline{d}(x, y, V)}}{\left| x - y\right| ^{n-2}}. \end{aligned}$$

This result will be used to obtain lower bounds in the next section.

9 Lower bounds

Here we prove an exponential decay lower bound for the fundamental matrix associated to our elliptic operator. As before, the elliptic operator \(\mathcal {L}_V\) is given by (27), where the matrix A satisfies ellipticity and boundedness as described by (24) and (25), respectively. For the zeroth order term, we assume that \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) for some \(p > \frac{n}{2}\). In contrast to the upper bound section, we will not require that \(V \in {\mathcal{Q}\mathcal{C}}\). In fact, many of the results in this section hold when we assume that \(\left| V\right| \in {\text {B}_p}\) (instead of \(V \in {\mathcal {B}_p}\)) and accordingly replace all occurrences of \(\overline{m}\left( \cdot , V \right) \) with \(m\left( \cdot , \left| V\right| \right) \). The assumption that \(V \in \mathcal{N}\mathcal{D}\) ensures that the spaces \(W^{1,2}_{V, 0}\left( \mathbb {R}^n \right) \) are Hilbert spaces and we require this for Lemma 22, for example. Moreover, the Hilbert spaces are crucial to the fundamental matrix constructions in Sect. 6.
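The scalar reverse Hölder condition \({\text {B}_p}\) underlying these assumptions can be checked concretely for model potentials. The following sketch is illustrative only and is not part of the paper's argument: for the hypothetical scalar potential \(v(x) = \left| x\right| ^2\) on \(\mathbb {R}^3\), it verifies numerically that the reverse Hölder ratio \(\left( \fint _B v^p \right) ^{1/p} / \fint _B v\) over balls centered at the origin is independent of the radius, consistent with \(v \in {\text {B}_p}\) for every \(p\).

```python
# Illustrative numerical check (not from the paper): for v(x) = |x|^2 on R^3,
# the B_p ratio over centered balls B(0, r) does not depend on r.

def ball_averages(r, p, steps=50000):
    # Averages over B(0, r) in R^3 via radial midpoint sums:
    #   avg_B f = (3 / r^3) * int_0^r f(rho) * rho^2 d rho
    # (the 4*pi surface factor cancels between numerator and denominator).
    h = r / steps
    s1 = s2 = 0.0
    for k in range(steps):
        rho = (k + 0.5) * h
        w = rho * rho * h
        s1 += (rho ** 2) * w          # integrand for the average of v
        s2 += (rho ** (2 * p)) * w    # integrand for the average of v^p
    c = 3.0 / r ** 3
    return c * s1, c * s2

def bp_ratio(r, p):
    a1, ap = ball_averages(r, p)
    return ap ** (1.0 / p) / a1

p = 2.0
ratios = [bp_ratio(r, p) for r in (0.5, 1.0, 4.0)]
# Closed form for this model: avg v = 3r^2/5, avg v^p = 3r^(2p)/(2p+3),
# so the ratio equals (3/(2p+3))^(1/p) * 5/3, independent of r.
exact = (3.0 / (2 * p + 3)) ** (1.0 / p) * 5.0 / 3.0
```

The radius-independence of the ratio reflects the homogeneity of the model potential; for a general \(v \in {\text {B}_p}\) the ratio is merely uniformly bounded, which is all the arguments below use.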

We also assume that conditions (IB) and (H) hold so that we can meaningfully discuss our fundamental matrices and draw conclusions about them. Further on, we will impose a pair of additional assumptions for fundamental matrices. As with (IB) and (H), these assumptions are known to hold in the scalar setting.

Let \({\Gamma }^0(x, y)\) denote the fundamental matrix for the homogeneous operator \(\mathcal {L}_0\) that we get when \(V \equiv 0\). That is, \(\mathcal {L}_0:= -D_\alpha \left( A^{\alpha \beta } D_\beta \right) \). Since the assumptions imposed to make sense of \(\Gamma ^V\) are inherited for \(\Gamma ^0\), the conclusions of Theorem 8 hold for \(\Gamma ^0\). Recall that \(\mathcal {L}_\Lambda = \mathcal {L}_0+ \Lambda \), where \(\Lambda = \left| V\right| I\) and \(\Gamma ^\Lambda \) denotes the associated fundamental matrix.

In [2], a clever presentation of \(\Gamma ^0 - \Gamma ^V\) is used to prove bounds for that difference function. Here, we take a slightly different approach and look at both \(\Gamma ^0 - \Gamma ^\Lambda \) and \(\Gamma ^\Lambda - \Gamma ^V\), then combine the bounds. Using the fundamental matrix associated to the operator with a diagonal matrix as an intermediary allows us to prove the bounds that we require for the lower bound estimates without having to assume that \(V \in {\mathcal{Q}\mathcal{C}}\) or impose other conditions.

We begin with the representation formula. To establish this result, we follow the ideas from [3].

Lemma 22

(Representation formula) Assume that the coefficient matrix A satisfies boundedness (25) and ellipticity (24), and that V is a locally integrable matrix weight that satisfies (26). Assume also that conditions (IB) and (H) hold. Let \(\Gamma ^0\), \(\Gamma ^\Lambda \), and \(\Gamma ^V\) denote the fundamental matrices of \(\mathcal {L}_0\), \(\mathcal {L}_\Lambda \), and \(\mathcal {L}_V\), respectively. Then

$$\begin{aligned} \Gamma ^0(x, y) - \Gamma ^V (x, y)&= \int _{\mathbb {R}^n}\Gamma ^0(x, {\cdot }) \, \Lambda \left( {\cdot } \right) \, \Gamma ^\Lambda ({\cdot }, y) \\&\quad + \int _{\mathbb {R}^n}\Gamma ^\Lambda (x, {\cdot })\left[ V - \Lambda \right] \left( {\cdot } \right) \Gamma ^V({\cdot }, y). \end{aligned}$$

Proof

Let \(\left( W_{V, 0}^{1, 2} (\mathbb {R}^n) \right) '\) denote the dual space to \(W_{V, 0}^{1, 2} (\mathbb {R}^n)\). Given \(\vec {f} \in \left( W_{V, 0}^{1, 2} (\mathbb {R}^n) \right) '\), an application of the Lax–Milgram theorem shows that there exists a unique \(\vec {u} \in W_{V, 0}^{1, 2} (\mathbb {R}^n)\) so that for every \(\vec {v} \in W_{V, 0}^{1, 2} (\mathbb {R}^n)\), \(\mathcal {B}_V\left[ \vec {u}, \vec {v}\right] = \vec {f}(\vec {v})\). We denote \(\vec {u}\) by \(\mathcal {L}_V^{-1} \vec {f}\), so that \(\mathcal {L}_V^{-1}: \left( W_{V, 0}^{1, 2} (\mathbb {R}^n) \right) '\rightarrow W_{V, 0}^{1, 2} (\mathbb {R}^n)\) and

$$\begin{aligned} \mathcal {B}_V\left[ \mathcal {L}_V^{-1} \vec {f}, \vec {v}\right] = \vec {f}(\vec {v}) \quad \text {for every } \vec {v} \in W_{V, 0}^{1, 2} (\mathbb {R}^n). \end{aligned}$$
(62)

Note that the inverse mapping \(\mathcal {L}_V: W_{V, 0}^{1, 2} (\mathbb {R}^n)\rightarrow \left( W_{V, 0}^{1, 2} (\mathbb {R}^n) \right) '\) satisfies \(\left( \mathcal {L}_V\vec {u} \right) (\vec {v}) = \mathcal {B}_V\left[ \vec {u}, \vec {v}\right] \) for every \(\vec {v} \in W_{V, 0}^{1, 2} (\mathbb {R}^n)\). In particular,

$$\begin{aligned} (\mathcal {L}_V\mathcal {L}_V^{-1} \vec {f}) (\vec {v}) = \mathcal {B}_V\left[ \mathcal {L}_V^{-1} \vec {f}, \vec {v}\right] = \vec {f}(\vec {v}) \quad \text {for every } \vec {v} \in W_{V, 0}^{1, 2} (\mathbb {R}^n)\end{aligned}$$

showing that \(\mathcal {L}_V\mathcal {L}_V^{-1}\) acts as the identity on \(\left( W_{V, 0}^{1, 2} (\mathbb {R}^n) \right) '\). On the other hand, if \(\vec {f} = \mathcal {L}_V\vec {u} \), then \(\vec {f}(\vec {v}) = \mathcal {B}_V\left[ \vec {u}, \vec {v}\right] \) for any \(\vec {v} \in W_{V, 0}^{1, 2} (\mathbb {R}^n)\). It follows that

$$\begin{aligned} \vec {u} = \mathcal {L}_V^{-1} \vec {f} = \mathcal {L}_V^{-1} \mathcal {L}_V\vec {u} \end{aligned}$$

and we conclude that \(\mathcal {L}_V^{-1} \mathcal {L}_V\) is the identity on \(W_{V, 0}^{1, 2} (\mathbb {R}^n)\). Since \(\mathcal {B}_V\left[ \vec {v}, \vec {u}\right] = \mathcal {B}_V^*\left[ \vec {u}, \vec {v}\right] \) for every \(\vec {u}, \vec {v} \in W_{V, 0}^{1, 2} (\mathbb {R}^n)\), then analogous statements may be made for \(\mathcal {L}_V^{*}\) and \(\left( \mathcal {L}_V^* \right) ^{-1}\).

Since \(\left\| \vec {u}\right\| _{W_{V, 0}^{1, 2} (\mathbb {R}^n)} \le \left\| \vec {u}\right\| _{W_{\Lambda , 0}^{1, 2} (\mathbb {R}^n)}\), then \(W_{\Lambda , 0}^{1, 2} (\mathbb {R}^n)\subseteq W_{V, 0}^{1, 2} (\mathbb {R}^n)\) so that \(\left( W_{V, 0}^{1, 2} (\mathbb {R}^n) \right) '\subseteq \left( W_{\Lambda , 0}^{1, 2} (\mathbb {R}^n) \right) '\). It follows that for any \(\vec {f} \in \left( W_{V, 0}^{1, 2} (\mathbb {R}^n) \right) '\), \(\mathcal {L}_V\mathcal {L}_\Lambda ^{-1} \vec {f} \in \left( W_{V, 0}^{1, 2} (\mathbb {R}^n) \right) '\). Observe that if \(\vec {u}, \vec {v} \in W_{\Lambda , 0}^{1, 2} (\mathbb {R}^n)\), then
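The norm comparison at the start of this paragraph follows from the pointwise eigenvalue bound \(\left\langle V(x) \vec {\xi }, \vec {\xi } \right\rangle \le \lambda _d(x) \left| \vec {\xi }\right| ^2 = \left\langle \Lambda (x) \vec {\xi }, \vec {\xi } \right\rangle \); assuming the weighted norms take the usual form, the computation is:

```latex
\left\| \vec{u} \right\|_{W_{V, 0}^{1, 2}(\mathbb{R}^n)}^2
  = \int_{\mathbb{R}^n} \left| \nabla \vec{u} \right|^2
    + \int_{\mathbb{R}^n} \left\langle V \vec{u}, \vec{u} \right\rangle
  \le \int_{\mathbb{R}^n} \left| \nabla \vec{u} \right|^2
    + \int_{\mathbb{R}^n} \left| V \right| \left| \vec{u} \right|^2
  = \left\| \vec{u} \right\|_{W_{\Lambda, 0}^{1, 2}(\mathbb{R}^n)}^2 .
```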

$$\begin{aligned} \left[ \left( \mathcal {L}_\Lambda - \mathcal {L}_V \right) \vec {u}\right] (\vec {v}) = \mathcal {B}_{\Lambda }\left[ \vec {u}, \vec {v}\right] - \mathcal {B}_V\left[ \vec {u}, \vec {v}\right] = \left\langle (\Lambda - V) \vec {u}, \vec {v} \right\rangle _{L^2(\mathbb {R}^n)}. \end{aligned}$$

Thus, with \(\vec {f} \in \left( W_{V, 0}^{1, 2} (\mathbb {R}^n) \right) '\), we deduce that

$$\begin{aligned} \vec {f} - \mathcal {L}_V\mathcal {L}_\Lambda ^{-1} \vec {f} = \left( \mathcal {L}_\Lambda - \mathcal {L}_V \right) \mathcal {L}_\Lambda ^{-1} \vec {f} = (\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f}. \end{aligned}$$
(63)

Since \(\vec {f}, \mathcal {L}_V\mathcal {L}_\Lambda ^{-1} \vec {f} \in \left( W_{V, 0}^{1, 2} (\mathbb {R}^n) \right) '\) as noted above, then \((\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f} \in \left( W_{V, 0}^{1, 2} (\mathbb {R}^n) \right) '\) as well. It follows that \(\mathcal {L}_V^{-1} (\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f} \in W_{V, 0}^{1, 2} (\mathbb {R}^n).\) By applying \(\mathcal {L}_V^{-1}\) to both sides of (63), we see that

$$\begin{aligned} \mathcal {L}_V^{-1} \vec {f} = \mathcal {L}_\Lambda ^{-1} \vec {f} + \mathcal {L}_V^{-1} (\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f}. \end{aligned}$$
(64)

For \(\vec {\phi } \in C_c^\infty (\mathbb {R}^n) \subseteq \left( W_{V, 0}^{1, 2} (\mathbb {R}^n) \right) '\) that acts via \( \vec {\phi }(\vec {u}) = \left\langle \vec {u}, \vec {\phi } \right\rangle _{L^2(\mathbb {R}^n)}\), we see from (62) that

$$\begin{aligned}&\mathcal {B}_V\left[ \mathcal {L}_V^{-1} (\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f} , \left( \mathcal {L}_V^* \right) ^{-1} \vec {\phi }\right] = \left( (\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f} \right) \left( \left( \mathcal {L}_V^* \right) ^{-1} \vec {\phi } \right) \\&\quad = \left\langle (\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f}, \left( \mathcal {L}_V^* \right) ^{-1} \vec {\phi } \right\rangle _{L^2(\mathbb {R}^n)}, \end{aligned}$$

and

$$\begin{aligned}&\mathcal {B}_V\left[ \mathcal {L}_V^{-1} (\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f} , \left( \mathcal {L}_V^* \right) ^{-1} \vec {\phi }\right] = \mathcal {B}_V^*\left[ \left( \mathcal {L}_V^* \right) ^{-1} \vec {\phi }, \mathcal {L}_V^{-1} (\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f} \right] \\&\quad = \vec {\phi } \left( \mathcal {L}_V^{-1} (\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f} \right) = \left\langle \mathcal {L}_V^{-1} (\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f}, \vec {\phi } \right\rangle _{L^2(\mathbb {R}^n)}. \end{aligned}$$

Combining these observations shows that

$$\begin{aligned} \left\langle \mathcal {L}_V^{-1} (\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f}, \vec {\phi } \right\rangle _{L^2(\mathbb {R}^n)} = \left\langle (\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f}, \left( \mathcal {L}_V^* \right) ^{-1} \vec {\phi } \right\rangle _{L^2(\mathbb {R}^n)} . \end{aligned}$$
(65)

Pairing (64) with \(\vec {\phi }\) in an inner product, integrating over \(\mathbb {R}^n\), and using (65) then gives

$$\begin{aligned} \left\langle \mathcal {L}_V^{-1} \vec {f}, \vec {\phi } \right\rangle _{L^2(\mathbb {R}^n)} = \left\langle \mathcal {L}_\Lambda ^{-1} \vec {f}, \vec {\phi } \right\rangle _{L^2(\mathbb {R}^n)} + \left\langle (\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f}, \left( \mathcal {L}_V^* \right) ^{-1} \vec {\phi } \right\rangle _{L^2(\mathbb {R}^n)}. \end{aligned}$$
(66)

Recall from Definition 11 that \( \mathcal {L}_V^{-1} \vec {f}(x) = \int _{\mathbb {R}^n} \Gamma ^V\left( x,y \right) \vec {f}(y) dy\) for any \(\vec {f} \in L^\infty _c\left( \mathbb {R}^n \right) ^d\). By taking \(\vec {f}, \vec {\phi } \in C_c ^\infty (\mathbb {R}^n)\) with disjoint supports, it follows that

$$\begin{aligned} \left\langle \mathcal {L}_V^{-1} \vec {f}, \vec {\phi } \right\rangle _{L^2(\mathbb {R}^n)}&= \int _{\mathbb {R}^n} \left\langle \int _{\mathbb {R}^n} \Gamma ^V\left( x,y \right) \vec {f}(y) dy, \vec {\phi }(x) \right\rangle dx\\&= \int _{\mathbb {R}^n}\int _{\mathbb {R}^n} \left\langle \Gamma ^V\left( x,y \right) \vec {f}(y), \vec {\phi }(x) \right\rangle dy dx, \end{aligned}$$

where the application of Fubini is justified by the fact that \(\Gamma ^V\) is locally bounded away from the diagonal. A similar equality holds for the second term in (66). For the last term in (66), observe that

$$\begin{aligned}&\left\langle (\Lambda - V) \mathcal {L}_\Lambda ^{-1} \vec {f}, \left( \mathcal {L}_V^* \right) ^{-1} \vec {\phi } \right\rangle _{L^2(\mathbb {R}^n)}\\&\quad = \int _{\mathbb {R}^n} \left\langle (\Lambda (z) - V(z)) \int _{\mathbb {R}^n} \Gamma ^V\left( z,y \right) \vec {f}(y) dy, \int _{\mathbb {R}^n} \Gamma ^{V*}\left( z,x \right) \vec {\phi }(x) dx \right\rangle dz \\&\quad = \int _{\mathbb {R}^n}\int _{\mathbb {R}^n}\int _{\mathbb {R}^n} \left\langle \Gamma ^{V*}\left( z,x \right) ^T (\Lambda (z) - V(z)) \Gamma ^V\left( z,y \right) \vec {f}(y) , \vec {\phi }(x) \right\rangle dz dy dx\\&\quad = \int _{\mathbb {R}^n}\int _{\mathbb {R}^n} \left\langle \left[ \int _{\mathbb {R}^n} \Gamma ^{V}\left( x,z \right) (\Lambda (z) - V(z)) \Gamma ^V\left( z,y \right) dz\right] \vec {f}(y) , \vec {\phi }(x) \right\rangle dy dx, \end{aligned}$$

where we have used the property that \(\Gamma ^V(x, z) = \Gamma ^{V*}(z, x)^T\). Putting it all together gives

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}^n} \int _{\mathbb {R}^n}\left\langle \left[ \Gamma ^V (x, y) - \Gamma ^{\Lambda } (x, y) \right] \vec {f}(y), \vec {\phi }(x) \right\rangle \, dy \, dx \\&\quad = \int _{\mathbb {R}^n} \int _{\mathbb {R}^n}\left\langle \left[ \int _{\mathbb {R}^n}\Gamma ^V (x, z)(\Lambda (z) -V(z)) \Gamma ^\Lambda (z, y) \, dz\right] \vec {f}(y), \vec {\phi }(x) \right\rangle \, dy \, dx. \end{aligned} \end{aligned}$$
(67)

By (36) in Theorem 8, the functions \( \Gamma ^V (x, y)\) and \( \Gamma ^{\Lambda }(x, y)\) are locally bounded on \(\mathbb {R}^n \times \mathbb {R}^n \setminus \Delta \). As shown in Lemma 23 below, \(\int _{\mathbb {R}^n}\Gamma ^V (x, z)(\Lambda (z) -V(z)) \Gamma ^\Lambda (z, y) \, dz\) is also locally bounded on \(\mathbb {R}^n \times \mathbb {R}^n \setminus \Delta \). It follows that \( \Gamma ^V (x, y) - \Gamma ^{\Lambda } (x, y) - \int _{\mathbb {R}^n}\Gamma ^V (x, z)(\Lambda (z) -V(z)) \Gamma ^\Lambda (z, y) \, dz \in L^1_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \times \mathbb {R}^n {\setminus } \Delta \right) \). As (67) holds for all \(\vec {f}, \vec {\phi } \in C_c ^\infty (\mathbb {R}^n)\) with disjoint supports, then an application of the fundamental lemma of calculus of variations for matrix-valued functions shows that for a.e. \((x, y) \in \mathbb {R}^n\times \mathbb {R}^n\),

$$\begin{aligned} \Gamma ^V (x, y)= \Gamma ^{\Lambda } (x, y) - \int _{\mathbb {R}^n}\Gamma ^V (x, z)\left[ V(z) - \Lambda (z)\right] \Gamma ^\Lambda (z, y) \, dz. \end{aligned}$$
(68)

Since \(\left\| \vec {u}\right\| _{Y_{0}^{1, 2} (\mathbb {R}^n)} \le \left\| \vec {u}\right\| _{W_{\Lambda , 0}^{1, 2} (\mathbb {R}^n)}\) implies that \(W_{\Lambda , 0}^{1, 2} (\mathbb {R}^n)\subseteq Y_{0}^{1, 2} (\mathbb {R}^n)\), then \(\left( Y_{0}^{1, 2} (\mathbb {R}^n) \right) '\subseteq \left( W_{\Lambda , 0}^{1, 2} (\mathbb {R}^n) \right) '\). In particular, all of the arguments from above hold with V replaced by 0, so we get that for a.e. \((x, y) \in \mathbb {R}^n\times \mathbb {R}^n\),

$$\begin{aligned} \Gamma ^0 (x, y)= \Gamma ^{\Lambda } (x, y) + \int _{\mathbb {R}^n}\Gamma ^0 (x, z)\Lambda (z) \Gamma ^\Lambda (z, y) \, dz. \end{aligned}$$
(69)

Subtracting (68) from (69) leads to the conclusion of the lemma. \(\square \)

Next, we establish that the integral functions in Lemma 22 are locally integrable away from the diagonal.

Lemma 23

(Local integrability on \(\mathbb {R}^n \times \mathbb {R}^n \setminus \Delta )\) Assume that A satisfies boundedness (25) and ellipticity (24), and that \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) for some \(p > \frac{n}{2}\). Assume also that (IB) and (H) hold so that \(\Gamma ^0\), \(\Gamma ^\Lambda \), and \(\Gamma ^V\), the fundamental matrices of \(\mathcal {L}_0\), \(\mathcal {L}_\Lambda \), and \(\mathcal {L}_V\), respectively, exist and satisfy the conclusions of Theorem 8. Define \( G(x,y) = \int _{\mathbb {R}^n}\Gamma ^V (x, z)\left[ V(z) - \Lambda (z)\right] \Gamma ^\Lambda (z, y) \, dz\) and \( H(x,y) = \int _{\mathbb {R}^n}\Gamma ^0 (x, z)\Lambda (z)\Gamma ^\Lambda (z, y) \, dz\). Then \(G, H \in L^1_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \times \mathbb {R}^n {\setminus } \Delta \right) \).

Proof

Throughout this proof, we write \(\lesssim \) in place of \(\lesssim _{\left( \ldots \right) }\) since the dependence on constants is not important here. We show that \(G \in L^1_{{{\,\textrm{loc}\,}}}\left( \mathbb {R}^n \times \mathbb {R}^n {\setminus } \Delta \right) \) and note that the argument for H is analogous. Set \(r = \left| x - y\right| \) and let \(\varepsilon = \frac{\varepsilon _1}{2}\), where \(\varepsilon _1 > 0\) is as in Lemma 21. An application of Lemma 22 followed by Corollary 4 along with the bound (37) from Theorem 8 applied to \(\Gamma ^V\) shows that

$$\begin{aligned} \begin{aligned} \left| G\left( x, y \right) \right|&\le \int _{\mathbb {R}^n}\left| \Gamma ^V (x, \cdot )\right| \left| V - \Lambda \right| \left| \Gamma ^\Lambda (\cdot , y)\right| \lesssim \int _{\mathbb {R}^n}\frac{e^{-\varepsilon \overline{d}(x, z, V)}\, \left| V(z) - \Lambda (z)\right| }{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}}dz \\&\lesssim \int _{B(x, \frac{r}{2})} \frac{ \left| V(z)\right| }{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}} dz + \int _{B(y, \frac{r}{2})} \frac{ \left| V(z)\right| }{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}} dz \\&\quad + \int _{\mathbb {R}^n \setminus \left( B(x, \frac{r}{2}) \cup B(y, \frac{r}{2}) \right) } \frac{e^{-\varepsilon \overline{d}(z, x, V)}\, \left| V(z)\right| }{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}} dz. \end{aligned} \end{aligned}$$
(70)

For the first term, an application of Hölder’s inequality shows that

$$\begin{aligned} \begin{aligned}&\int _{B(x, \frac{r}{2})} \frac{ \left| V(z)\right| \, dz}{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}} \lesssim r^{2-n}\int _{B(x, \frac{r}{2})} \frac{\left| V(z)\right| \, dz}{\left| z-x\right| ^{n-2}} \\&\quad \le r^{2-n} \left\| V\right\| _{L^p\left( B(x, \frac{r}{2}) \right) } \left( \int _{0}^{r/2} \rho ^{n-1 + \frac{p\left( 2-n \right) }{p-1}} d\rho \right) ^{\frac{p-1}{p}} \\&\quad \lesssim \left\| V\right\| _{L^p\left( B(x, \frac{r}{2}) \right) } r^{4 - n - \frac{n}{p}}. \end{aligned} \end{aligned}$$
(71)

An analogous bound holds for the second term in (70).

We now turn to the third integral in (70). Observe that with \(R = \frac{1}{\overline{m}\left( x, V \right) }\),

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}^n \setminus \left( B(x, \frac{r}{2}) \cup B(y, \frac{r}{2}) \right) } \frac{e^{-\varepsilon \overline{d}(z, x, V)}\, \left| V(z)\right| }{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}} dz \\&\quad \lesssim \int _{B(x,R) \setminus B(x, \frac{r}{2})} \frac{\left| V(z)\right| }{\left| z-x\right| ^{2n-4}} dz + \int _{\mathbb {R}^n \setminus B(x, R)} \frac{e^{-\varepsilon \overline{d}(z, x, V)}\, \left| V(z)\right| }{\left| z-x\right| ^{2n-4}} dz. \end{aligned} \end{aligned}$$
(72)

Assuming that \(R \ge \frac{r}{2}\), choose \(J \in \mathbb {Z}_{\ge 0}\) so that \(2^{J-1} r \le R \le 2^J r\). Let \(q = p\) if \(n \ge 4\) and \(q \in \left( \frac{3}{2}, \min \left\{ p, 3\right\} \right) \) if \(n = 3\). Since \(q \le p\), then by Lemma 2, \(V \in \mathcal {B}_{q}\) as well with the same uniform \({\mathcal {B}_p}\) constant. Let \(q'\) denote the Hölder conjugate of q and note that \(n- 2q'\left( n-2 \right) < 0\). Therefore, an application of Hölder’s inequality shows that

$$\begin{aligned} \begin{aligned} \int _{B(x, R) \setminus B(x, \frac{r}{2})} \frac{\left| V(z)\right| }{\left| z-x\right| ^{2n-4}} dz&\lesssim \left\| V\right\| _{L^q\left( B(x, R) \right) } r^{4 - n - \frac{n}{q}} \lesssim \frac{\left[ r \overline{m}\left( x, V \right) \right] ^{2 - \frac{n}{q}}}{r^{n-2}}, \end{aligned} \end{aligned}$$
(73)

where we use that \(V \in \mathcal {B}_{q}\) and (19) to get

$$\begin{aligned} \left( \int _{B(x, R)} \left| V(z)\right| ^q dz \right) ^{\frac{1}{q}} \lesssim&R^{\frac{n}{q} -2} \left( \frac{1}{R^{n-2}}\int _{B(x, R)} \left| V(z)\right| dz \right) \lesssim R^{\frac{n}{q} -2}. \end{aligned}$$
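Unwinding the last step of (73): with \(R = \frac{1}{\overline{m}\left( x, V \right) }\) and \(\frac{r}{2} \le R\), the two displays combine as

```latex
\left\| V \right\|_{L^{q}\left( B(x, R) \right)} \, r^{\,4 - n - \frac{n}{q}}
  \lesssim R^{\frac{n}{q} - 2} \, r^{\,4 - n - \frac{n}{q}}
  = \overline{m}\left( x, V \right)^{2 - \frac{n}{q}} r^{\,4 - n - \frac{n}{q}}
  = \frac{\left[ r \, \overline{m}\left( x, V \right) \right]^{2 - \frac{n}{q}}}{r^{\,n - 2}} ,
```

where the middle equality uses the definition of R.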

For the exterior integral, with the notation \(A_j = B(x, 2^{j} R) {\setminus } B(x, 2^{j-1} R)\), we have

(74)

We may repeat the arguments used to reach (60) and conclude that if \(\left| x -z\right| \ge 2^{j-1}R = \frac{2^{j-1}}{\overline{m}\left( x,V \right) }\), then for any \(\varepsilon ' > 0\), \( e^{\varepsilon ' \overline{d}(z, x, V)} \gtrsim \overline{m}\left( x, V \right) \left| x - z\right| \gtrsim 2^{j}\). For \(\varepsilon '\) to be specified below, it follows that with \(c = \frac{\varepsilon }{\varepsilon '} \ln 2\),

where we have used that \(V \in \mathcal {B}_{q}\) and \(\Psi \left( x, R; \left| V\right| \right) \le d^2\). By choosing \(\varepsilon ' \simeq _{(\gamma ,n)} \varepsilon \) sufficiently small, we can ensure that \(c = c\left( \gamma , n \right) \) is large enough for the series to converge when we substitute this expression into (74), and then we get

$$\begin{aligned} \int _{\mathbb {R}^n \setminus B(x, R)} \frac{e^{-\varepsilon \overline{d}(z, x, V)}\, \left| V(z)\right| }{\left| z-x\right| ^{2n-4}} dz&\lesssim \overline{m}\left( x, V \right) ^{n-2}. \end{aligned}$$
(75)

Combining (70) with (71), (72), (73) and (75) then shows that

$$\begin{aligned} \left| G\left( x,y \right) \right|&\lesssim \left\| V\right\| _{L^p\left( B(x, \frac{r}{2}) \cup B(y, \frac{r}{2}) \right) } r^{4 - n - \frac{n}{p}} + r^{2 - n} \left[ r \overline{m}\left( x, V \right) \right] ^{2 - \frac{n}{q}} + \overline{m}\left( x, V \right) ^{n-2}. \end{aligned}$$

A standard argument shows that \(G(x, y)\) is locally integrable away from the diagonal, as required. \(\square \)
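The critical radius \(R = \frac{1}{\overline{m}\left( x, V \right) }\) that organizes the proof above can be computed explicitly in simple model cases. As an illustrative sketch (not from the paper), the following computes \(m(0, v)\) for the hypothetical scalar potential \(v(z) = \left| z\right| ^2\) on \(\mathbb {R}^3\) by bisecting the defining equation \(\Psi \left( 0, r; v \right) = 1\) and compares with the closed form \(r_* = \left( \frac{5}{4\pi } \right) ^{1/4}\).

```python
# Illustrative sketch (not from the paper): compute the auxiliary function
# m(0, v) = 1/r*, where r* solves Psi(0, r; v) = r^(2-n) * int_{B(0,r)} v = 1,
# for the model potential v(z) = |z|^2 on R^3.
import math

def psi(r, steps=5000):
    # Psi(0, r; v) = r^(-1) * int_0^r (rho^2) * 4*pi*rho^2 d rho   (n = 3)
    h = r / steps
    total = 0.0
    for k in range(steps):
        rho = (k + 0.5) * h
        total += 4.0 * math.pi * rho ** 4 * h
    return total / r

def aux_m(lo=1e-6, hi=10.0, iters=60):
    # Psi is increasing in r for this v, so bisect Psi(r) = 1.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if psi(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 1.0 / (0.5 * (lo + hi))

m0 = aux_m()
exact = (5.0 / (4.0 * math.pi)) ** 0.25  # closed-form r* for this model
```

For this potential \(\Psi (0, r; v) = \frac{4 \pi r^4}{5}\), so the bisection should recover \(m(0, v) = r_*^{-1}\); the monotonicity of \(\Psi \) in r is what makes the bisection (and, more generally, the definition of the auxiliary function) meaningful.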

Using the representation formula from Lemma 22 and many arguments from the proof of Lemma 23, we can now bound the difference between \(\Gamma ^V\) and \(\Gamma ^0\). We use \(\Gamma ^\Lambda \) as an intermediary because this allows us to use the upper bound described by Corollary 4 instead of the one for \(\Gamma ^V\) given in Theorem 11. The advantage to this approach is that we do not need to assume that \(V \in {\mathcal{Q}\mathcal{C}}\).

Lemma 24

(Lower bound lemma) Let \(\mathcal {L}_V\) be given by (27), where A satisfies (24) and (25), and \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) for some \(p > \frac{n}{2}\). Assume that (IB) and (H) hold. Let \(\Gamma ^V(x, y)\) denote the fundamental matrix of \(\mathcal {L}_V\). Let \(x, y \in \mathbb {R}^n\) be such that \(\left| x - y\right| \le \frac{1}{\overline{m}(x, V)}\). Set \(\alpha = 2 - \frac{n}{q}\), where \(q = p\) if \(n \ge 4\) and \(q \in \left( \frac{3}{2}, \min \left\{ p, 3\right\} \right) \) if \(n = 3\). Then there exists a constant \(C_2 = C_2\left( \mathcal {L}_V, p, C_V \right) \) for which

$$\begin{aligned} \left| \Gamma ^V (x, y) - \Gamma ^0(x, y)\right| \le C_2 \frac{\left[ \left| x - y\right| \overline{m}(x, V)\right] ^{\alpha } }{\left| x - y\right| ^{n-2}}. \end{aligned}$$

Proof

Set \(r = \left| x - y\right| \) and let \(\varepsilon = \frac{\varepsilon _1}{2}\), where \(\varepsilon _1 > 0\) is as in Lemma 21. An application of Lemma 22 followed by Corollary 4 along with the bound (37) from Theorem 8 applied to \(\Gamma ^0\) and \(\Gamma ^V\) shows that

$$\begin{aligned}&\left| \Gamma ^V (x, y) - \Gamma ^0(x, y)\right| \nonumber \\&\quad \le \int _{\mathbb {R}^n}\left| \Gamma ^0(x, z)\right| \left| \Lambda (z)\right| \left| \Gamma ^\Lambda (z, y)\right| dz + \int _{\mathbb {R}^n}\left| \Gamma ^\Lambda (x, z)\right| \left| V(z) - \Lambda (z)\right| \left| \Gamma ^V(z, y)\right| dz \nonumber \\&\quad \lesssim _{(\mathcal {L}_V, p, C_V)} \int _{\mathbb {R}^n}\frac{e^{-\varepsilon \overline{d}(z, y, V)}\, \left| \Lambda (z)\right| }{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}}dz + \int _{\mathbb {R}^n}\frac{e^{-\varepsilon \overline{d}(x, z, V)}\, \left| V(z) - \Lambda (z)\right| }{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}}dz \nonumber \\&\quad \lesssim \int _{B(x, \frac{r}{2})} \frac{ \left| V(z)\right| }{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}} dz + \int _{B(y, \frac{r}{2})} \frac{ \left| V(z)\right| }{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}} dz \nonumber \\&\qquad + \int _{\mathbb {R}^n \setminus \left( B(x, \frac{r}{2}) \cup B(y, \frac{r}{2}) \right) } \left( \frac{e^{-\varepsilon \overline{d}(z, y, V)}\, \left| V(z)\right| }{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}} + \frac{e^{-\varepsilon \overline{d}(x, z, V)}\, \left| V(z)\right| }{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}} \right) dz. \end{aligned}$$
(76)

For the first term in (76), we get

$$\begin{aligned}&\int _{B(x, \frac{r}{2})} \frac{\left| V(z)\right| }{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}} dz \lesssim _{(n)} r^{2-n}\int _{B(x, \frac{r}{2})} \frac{\left| V(z)\right| }{\left| z-x\right| ^{n-2}} dz \\&\quad = r^{2-n} \sum _{j=1}^\infty \int _{B(x, \frac{r}{2^{j}}) \setminus B(x, \frac{r}{2^{j+1}})} \frac{\left| V(z)\right| }{\left| z-x\right| ^{n-2}} dz \\&\quad \lesssim r^{2-n} \sum _{j=1}^\infty \left( \frac{r}{2^{j}} \right) ^{2-n} \int _{B(x, \frac{r}{2^{j}})} \left| V(z)\right| dz = r^{2-n} \sum _{j=1}^\infty \Psi \left( x, \frac{r}{2^{j}}; \left| V\right| \right) \\&\quad \le r^{2-n} \sum _{j=1}^\infty C_V \left[ \frac{r \, \overline{m}\left( x, V \right) }{2^j}\right] ^{2 - \frac{n}{p}} \Psi \left( x, \frac{1}{\overline{m}\left( x, V \right) }; \left| V\right| \right) , \end{aligned}$$

where we have applied Lemma 3 to reach the last line. (We remark that a version of this inequality was established in [2, Remark 0.13] using a different argument.) By (19) in the proof of Lemma 9, \(\Psi \left( x, \frac{1}{\overline{m}\left( x, V \right) }; \left| V\right| \right) \le d^2 \left| \Psi \left( x, \frac{1}{\overline{m}\left( x, V \right) }; V \right) \right| = d^2\). Since \(p > \frac{n}{2}\), the series converges and we see that

$$\begin{aligned} \int _{B(x, \frac{r}{2})} \frac{\left| V(z)\right| dz}{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}}&\lesssim _{(d, n, p, C_V)} \frac{ \left[ r \, \overline{m}\left( x, V \right) \right] ^{2 - \frac{n}{p}}}{r^{n-2}} \le \frac{\left[ r \, \overline{m}\left( x, V \right) \right] ^{2 - \frac{n}{q}}}{r^{n-2}}, \end{aligned}$$
(77)

since \(q \le p\). An analogous argument shows that the second term in (76) satisfies the same bound since Lemma 10 and the assumption that \(\left| x - y\right| \le \frac{1}{\overline{m}(x, V)}\) imply that \(\overline{m}\left( x, V \right) \simeq _{(d, n, p, C_V)} \overline{m}\left( y, V \right) \).

We now turn to the fourth integral in (76). By the arguments in the proof of Lemma 23, we combine (72) with (73) and (75) to get

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}^n \setminus \left( B(x, \frac{r}{2}) \cup B(y, \frac{r}{2}) \right) } \frac{e^{-\varepsilon \overline{d}(x, z, V)}\, \left| V(z)\right| \, dz}{\left| z-x\right| ^{n-2} \left| z-y\right| ^{n-2}}\\&\quad \lesssim \int _{B(x,R) \setminus B(x, \frac{r}{2})} \frac{\left| V(z)\right| \, dz}{\left| z-x\right| ^{2n-4}} + \int _{\mathbb {R}^n \setminus B(x, R)} \frac{e^{-\varepsilon \overline{d}(z, x, V)}\, \left| V(z)\right| }{\left| z-x\right| ^{2n-4}} \, dz \\&\quad \lesssim _{(d, n, p, C_V)} r^{2 - n} \left[ r \overline{m}\left( x, V \right) \right] ^{2 - \frac{n}{q}} + \overline{m}\left( x, V \right) ^{n-2}. \end{aligned} \end{aligned}$$
(78)

An analogous argument applied to the third integral in (76) gives the same bound where we have again applied Lemma 10 to conclude that \(\overline{m}\left( x, V \right) \simeq _{(d, n, p, C_V)} \overline{m}\left( y, V \right) \).

Substituting (77), (78), and analogous estimates into (76) shows that

$$\begin{aligned} \begin{aligned} \left| \Gamma ^V (x, y) - \Gamma ^0(x, y)\right|&\lesssim _{\left( \mathcal {L}_V, p, C_V \right) } \frac{ \left[ r \overline{m}\left( x, V \right) \right] ^{2 - \frac{n}{q}}}{r^{n-2}} + \overline{m}\left( x, V \right) ^{n-2} \\&\lesssim \frac{ \left[ r \overline{m}\left( x, V \right) \right] ^{2 - \frac{n}{q}}}{r^{n-2}}, \end{aligned} \end{aligned}$$

where we have used that

$$\begin{aligned} \overline{m}\left( x, V \right) ^{n-2}&= r^{2 - n} \left[ r \overline{m}\left( x, V \right) \right] ^{n - 2} \le r^{2-n} \left[ r \overline{m}\left( x, V \right) \right] ^{2 - \frac{n}{q}}, \end{aligned}$$

since \(r \, \overline{m}\left( x, V \right) \le 1\) and \(n\left( 1 + \frac{1}{q} \right) \ge 4\) by definition. The conclusion of the lemma follows. \(\square \)
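The final exponent comparison is elementary: since \(r \, \overline{m}\left( x, V \right) \le 1\), it suffices to check that \(n - 2 \ge 2 - \frac{n}{q}\), equivalently \(n \left( 1 + \frac{1}{q} \right) \ge 4\), in both cases of the definition of q:

```latex
n \ge 4: \quad n\left( 1 + \tfrac{1}{q} \right) > n \ge 4;
\qquad
n = 3: \quad q < 3 \ \Longrightarrow \ \tfrac{1}{q} > \tfrac{1}{3}
  \ \Longrightarrow \ 3\left( 1 + \tfrac{1}{q} \right) > 4 .
```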

We now prove our lower bound. To do this, we assume that the following scale-invariant Harnack inequality holds for the fundamental matrix associated to our operator.

  1. (SIH)

    If U is a \(d \times d\) matrix for which each \(\vec {u}_i \in W^{1,2}_V(2B)\) is a weak solution to \(\mathcal {L}_V\vec {u}_i = \vec {0}\) for \(i = 1, 2, \ldots , d\), then we say that (SIH) holds for U if there exists a small constant \(c_S\) so that whenever \(x_0 \in \mathbb {R}^n\) and \(r \le \frac{c_S}{\overline{m}(x_0, V)}\), with \(B = B(x_0, r)\), it holds that

    $$\begin{aligned} \sup _{x \in B} \left| \left\langle U \vec {e}, \vec {e} \right\rangle \right| \le C_{\text {H}} \inf _{x \in B} \left| \left\langle U \vec {e},\vec {e} \right\rangle \right| , \end{aligned}$$
    (79)

    for every \(\vec {e} \in \mathbb {R}^d\), where the constant \(C_{\text {H}}\) depends only on \(d, n, \lambda , \Lambda \), and V.

The standard Harnack inequality has a constant that typically grows with the size of the domain and the norm of V. Since the constant here is independent of r, we refer to this as the “scale-invariant” version of the inequality.

As observed in [3, p. 4349], the estimate (79) holds for nonnegative solutions to the scalar elliptic equation \(-{{\,\textrm{div}\,}}\left( A \nabla u \right) + v \, u = 0\), where A is a uniformly elliptic matrix and \(v \in {\text {B}_p}\) satisfies \(v \ge 0 \) a.e. However, it is unclear when the fundamental matrix satisfies (79), even in the Schrödinger case \(\mathcal {L}_V= -\Delta + V\) with a matrix potential V. In [14], a Harnack inequality is proved for vector solutions (with nonnegative entries) to (2) under conditions very different from those discussed in this paper. It is reasonable to ask whether the ideas in [14] can be used to provide conditions on \(\mathcal {L}_V\) to ensure that (79) holds for the fundamental matrix \(\Gamma ^V\), and we plan to investigate this question in the future.

Of course, since we are working in a systems setting, there is no guarantee that this estimate, or any of the standard de Giorgi–Nash–Moser results, necessarily hold. As such, we assume that \(\mathcal {L}_V\) is chosen so that (IB), (H), and (SIH) all hold. To convince ourselves that these are reasonable assumptions to make, we refer the reader to [3] and [11], where the validity of these assumptions in the scalar setting is shown.

Finally, we also need to assume the following lower bound on the fundamental matrix of the homogeneous operator.

  1. (LB)

    We say that (LB) holds if there exists a constant \(c_0\) so that for every \(\vec {e} \in \mathbb {S}^{d-1}\),

    $$\begin{aligned} \left| \left\langle \Gamma ^0(x,y) \vec {e}, \vec {e} \right\rangle \right| \ge \frac{c_0}{\left| x-y\right| ^{n-2}}. \end{aligned}$$
    (80)

In [9], the fundamental and Green’s matrices for homogeneous elliptic systems are extensively studied. Although such a bound does not necessarily follow from the collection of results presented in [9], this result is shown to hold in the scalar setting, and therefore also in the case \(\mathcal {L}_0= - \text {div}A\nabla \) when A is uniformly elliptic; see [10, Theorem 1.1].

Theorem 12

(Exponential lower bound) Let \(\mathcal {L}_V\) be given by (27), where A satisfies (24) and (25), and \(V \in {\mathcal {B}_p}\cap \mathcal{N}\mathcal{D}\) for some \(p > \frac{n}{2}\). Assume that (IB), (H), and (LB) hold. Let \(\Gamma ^V(x, y)\) denote the fundamental matrix of \(\mathcal {L}_V\) and assume that (SIH) holds for \(\Gamma ^V\). Then there exist constants \(C = C\left( \mathcal {L}_V, p, C_V, C_{\text {H}}, c_S, c_0 \right) \), \(\varepsilon _2 = \varepsilon _2\left( d, n, p, C_V, C_{\text {H}}, c_S \right) \) so that for every \(\vec {e} \in \mathbb {S}^{d-1}\),

$$\begin{aligned} \left| \left\langle \Gamma ^V (x, y) \vec {e}, \vec {e} \right\rangle \right| \ge C \frac{e^{-\varepsilon _2 \overline{d}(x, y, V)}}{\left| x - y\right| ^{n-2}}. \end{aligned}$$
(81)

Remark 15

If we make the weaker assumption that \(\left| V\right| \in {\text {B}_p}\) (instead of assuming that \(V \in {\mathcal {B}_p}\)), then all of the statements in this section still hold with \(\overline{m}\left( \cdot , V \right) \) replaced by \(m\left( \cdot , \left| V\right| \right) \). Accordingly, the conclusion described by (81) still holds with \(\overline{d}(x, y, V)\) replaced by \(d(x, y, \left| V\right| )\).

Remark 16

Versions of this result still hold with either \(\left| \Gamma ^V (x, y)\right| \) or \(\left| \Gamma ^V (x, y) \vec {e}\right| \) on the left side of (81) in place of \(\left| \left\langle \Gamma ^V (x, y) \vec {e}, \vec {e} \right\rangle \right| \) if we replace the assumptions (SIH) and (LB) accordingly.

We follow the arguments from [2, Theorem 4.15] and [3, Theorem 7.27], with appropriate modifications for our systems setting.

Proof

By Lemma 10 and the proof of [3, Proposition 3.25], there exists \(A = A\left( d, n, p, C_V \right) \) large enough so that

$$\begin{aligned} x \not \in B\left( y, \frac{2}{\overline{m}(y, V)} \right) \text { whenever } \left| x - y\right| \ge \frac{A}{\overline{m}(x, V)}. \end{aligned}$$
(82)

Similarly, with \(c_1 = \min \left\{ \left( \frac{c_0}{2C_2} \right) ^{1/\alpha }, 1\right\} \), where \(c_0\) is from (LB) and \(C_2\left( \mathcal {L}_V, p, C_V \right) \) and \(\alpha \left( n, p \right) \) are from Lemma 24, an analogous argument shows that there exists \(c_2 = c_2\left( d, n, p, C_V, c_1 \right) \) sufficiently small so that

$$\begin{aligned} y \not \in B\left( z, \frac{2 c_2}{\overline{m}(z, V)} \right) \text { whenever } \left| z-y\right| \ge \frac{c_1}{\overline{m}(y, V)}. \end{aligned}$$
(83)

Since \(c_1 = c_1\left( \mathcal {L}_V, p, C_V, c_0 \right) \), it follows that \(c_2 = c_2\left( \mathcal {L}_V, p, C_V, c_0 \right) \) as well.

We prove our bound in three settings: when \(\left| x - y\right| \) is small, medium, and large. The constant A is used to distinguish between the medium and the large settings, while \(c_1\) is used to distinguish between the small and medium settings. The small setting is used as a tool to prove the medium setting, so we start there.

Assume that we are in the small-scale setting where \( \left| z-y\right| \le \frac{c_1}{\overline{m}(z, V)}\). Since \(c_1 \le 1\), we have \(\left| z-y\right| \le \frac{1}{\overline{m}(z, V)}\), so (80) from (LB), the triangle inequality, and Lemma 24 give, for any \(\vec {e} \in \mathbb {S}^{d-1}\),

$$\begin{aligned} \frac{c_0}{\left| z-y\right| ^{n-2}}&\le \left| \left\langle \Gamma ^0\left( z,y \right) \vec {e}, \vec {e} \right\rangle \right| \le \left| \left\langle \left( \Gamma ^0\left( z,y \right) - \Gamma ^V\left( z,y \right) \right) \vec {e}, \vec {e} \right\rangle \right| + \left| \left\langle \Gamma ^V\left( z,y \right) \vec {e}, \vec {e} \right\rangle \right| \\&\le \left| \Gamma ^0\left( z,y \right) - \Gamma ^V\left( z,y \right) \right| + \left| \left\langle \Gamma ^V\left( z,y \right) \vec {e}, \vec {e} \right\rangle \right| \\&\le C_2 \frac{\left[ \left| z-y\right| \overline{m}(z, V)\right] ^{\alpha }}{\left| z-y\right| ^{n-2}} + \left| \left\langle \Gamma ^V\left( z,y \right) \vec {e}, \vec {e} \right\rangle \right| . \end{aligned}$$

Since \(c_1\) was chosen so that the first term may be absorbed into the left-hand side, it follows that for any \(\vec {e} \in \mathbb {S}^{d-1}\),

$$\begin{aligned} \left| \left\langle \Gamma ^V\left( z,y \right) \vec {e}, \vec {e} \right\rangle \right| \ge \frac{c_0}{2\left| z-y\right| ^{n-2}} \quad \text { whenever } \, \left| z-y\right| \le \frac{c_1}{\overline{m}(z, V)}. \end{aligned}$$
(84)
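The absorption step is elementary arithmetic: since \(\left| z-y\right| \overline{m}(z, V) \le c_1 \le \left( \frac{c_0}{2C_2} \right) ^{1/\alpha }\) by the definition of \(c_1\), the perturbation term in the display above satisfies

$$\begin{aligned} C_2 \frac{\left[ \left| z-y\right| \overline{m}(z, V)\right] ^{\alpha }}{\left| z-y\right| ^{n-2}} \le C_2 \frac{c_1^{\alpha }}{\left| z-y\right| ^{n-2}} \le \frac{c_0}{2\left| z-y\right| ^{n-2}}, \end{aligned}$$

and subtracting this bound from both sides yields (84).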

Lemma 10 implies that \(\overline{m}(z, V) \simeq _{(d, n, p, C_V)} \overline{m}(y, V)\), so after redefining \(c_1\left( \mathcal {L}_V, p, C_V, c_0 \right) \) if necessary, we also have that for any \(\vec {e} \in \mathbb {S}^{d-1}\),

$$\begin{aligned} \left| \left\langle \Gamma ^V\left( z,y \right) \vec {e}, \vec {e} \right\rangle \right| \ge \frac{c_0}{2\left| z-y\right| ^{n-2}} \quad \text { whenever } \, \left| z-y\right| \le \frac{c_1}{\overline{m}(y, V)}. \end{aligned}$$
(85)

We now consider the midrange setting where \(\left| x - y\right| \in \left[ \frac{c_1}{\overline{m}(x, V)}, \frac{A}{\overline{m}(x, V)}\right] \). There is no loss in assuming that \(c_2 \le c_S\), where \(c_S\) is the small constant from (SIH). Construct a chain \(\left\{ z_i\right\} _{i=1}^N\) of N points along the straight line connecting x and y so that \(\left| y - z_1\right| = \frac{c_1}{\overline{m}(y, V)}\), \(\left| z_{i+1} - z_i\right| = \frac{c_2}{\overline{m}(z_i, V)}\) for \(i = 1, \ldots , N-1\), and \(\left| x - z_N\right| \le \frac{c_2}{\overline{m}(z_N, V)}\). Since Lemma 10 implies that \(\overline{m}(z, V) \simeq _{(d, n, p, C_V)} \overline{m}(x, V)\) for any point z along the line between x and y, we have \(N \lesssim _{(d, n, p, C_V, c_1, c_2)} A\). Since \(\left| z_j - y\right| \ge \left| y - z_1\right| = \frac{c_1}{\overline{m}(y, V)}\) for all \(j = 1, \ldots , N\), estimate (83) shows that \(y \not \in B\left( z_j, \frac{2 c_2}{\overline{m}(z_j, V)} \right) \). In particular, with \(U = \Gamma ^V(\cdot , y)\), each column \(\vec {u}_i\) satisfies \(\mathcal {L}_V\vec {u}_i = 0\) weakly on each \(B\left( z_j, \frac{2 c_2}{\overline{m}(z_j, V)} \right) \). Repeatedly applying (79) from the scale-invariant Harnack inequality (SIH), we see that for any \(\vec {e} \in \mathbb {S}^{d-1}\),

$$\begin{aligned} \begin{aligned}&\left| \left\langle \Gamma ^V (x, y) \vec {e}, \vec {e} \right\rangle \right| \ge \inf _{B(z_N, \frac{c_2}{\overline{m}(z_N, V)})} \left| \left\langle \Gamma ^V (\cdot , y) \vec {e}, \vec {e} \right\rangle \right| \\&\quad \ge C_{\text {H}}^{-1} \sup _{B(z_N, \frac{c_2}{\overline{m}(z_N, V)})} \left| \left\langle \Gamma ^V (\cdot , y) \vec {e}, \vec {e} \right\rangle \right| \ge C_{\text {H}}^{-1} \left| \left\langle \Gamma ^V (z_N, y) \vec {e}, \vec {e} \right\rangle \right| \\&\quad \ge C_{\text {H}}^{-1} \inf _{B(z_{N-1}, \frac{c_2}{\overline{m}(z_{N-1}, V)})} \left| \left\langle \Gamma ^V (\cdot , y) \vec {e}, \vec {e} \right\rangle \right| \\&\quad \ge C_{\text {H}}^{-2} \sup _{B(z_{N-1}, \frac{c_2}{\overline{m}(z_{N-1}, V)})} \left| \left\langle \Gamma ^V (\cdot , y) \vec {e}, \vec {e} \right\rangle \right| \quad \ge C_{\text {H}}^{-2} \left| \left\langle \Gamma ^V (z_{N-1}, y) \vec {e}, \vec {e} \right\rangle \right| \\&\quad \ge \cdots \ge C_{\text {H}}^{-N} \left| \left\langle \Gamma ^V (z_{1}, y) \vec {e}, \vec {e} \right\rangle \right| \ge \frac{C_{\text {H}}^{-N} c_0}{2 \left| z_1-y\right| ^{n-2}}, \end{aligned} \end{aligned}$$

where the last bound follows from (85). However,

$$\begin{aligned} \left| z_{1} - y\right| = \frac{c_1}{\overline{m}(y, V)} \le C_A \frac{c_1}{\overline{m}(x, V)} \le C_A \left| x-y\right| . \end{aligned}$$

Therefore, for any \(\vec {e} \in \mathbb {S}^{d-1}\),

$$\begin{aligned} \left| \left\langle \Gamma ^V (x, y) \vec {e}, \vec {e} \right\rangle \right| \ge \frac{C_{\text {H}}^{-N} c_0}{2 \left( C_A \left| x-y\right| \right) ^{n-2}} \quad \text {whenever} \, \left| x - y\right| \in \left[ \frac{c_1}{\overline{m}(x, V)}, \frac{A}{\overline{m}(x, V)}\right] . \end{aligned}$$

Combining this observation with (84) shows that for any \(\vec {e} \in \mathbb {S}^{d-1}\),

$$\begin{aligned} \left| \left\langle \Gamma ^V (x, y) \vec {e}, \vec {e} \right\rangle \right| \ge \frac{C_3}{\left| x-y\right| ^{n-2}} \quad \text { whenever } \, \left| x - y\right| \le \frac{A}{\overline{m}(x, V)}. \end{aligned}$$
(86)
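For definiteness, since the chain length satisfies \(N \le N_0\) for some \(N_0 = N_0\left( d, n, p, C_V, c_1, c_2, A \right) \), one admissible (though not optimal) choice of the constant in (86) is

$$\begin{aligned} C_3 = \min \left\{ \frac{c_0}{2}, \, \frac{c_0}{2 C_{\text {H}}^{N_0} C_A^{n-2}} \right\} = \frac{c_0}{2 C_{\text {H}}^{N_0} C_A^{n-2}}, \end{aligned}$$

where the second equality uses \(C_{\text {H}}, C_A \ge 1\); the first entry of the minimum accounts for the small-scale bound (84).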

An application of Lemma 10 implies that for any \(\vec {e} \in \mathbb {S}^{d-1}\),

$$\begin{aligned} \left| \left\langle \Gamma ^V (x, y) \vec {e}, \vec {e} \right\rangle \right| \ge \frac{C_3}{\left| x-y\right| ^{n-2}} \quad \text { whenever } \, \left| x - y\right| \le \frac{A}{\overline{m}(y, V)}, \end{aligned}$$
(87)

where \(C_3 = C_3\left( \mathcal {L}_V, p, C_V, C_{\text {H}}, c_0 \right) \) is possibly redefined. By the proof of Lemma 11, if \(\left| x - y\right| \le \frac{A}{\overline{m}(x, V)}\), then \(\overline{d}(x, y, V) \lesssim _{(d, n, p, C_V)} 1\). In particular, this observation combined with (86) gives the result (81) in the setting where \(\left| x - y\right| \le \frac{A}{\overline{m}(x, V)}\).

Now consider the final (large-scale) setting where \(\left| x - y\right| > \frac{A}{\overline{m}(x, V)}\). Choose \(\gamma : \left[ 0, 1\right] \rightarrow \mathbb {R}^n\) with \(\gamma (0) = x, \gamma (1) = y\), and

$$\begin{aligned} \int _0^1 \overline{m}(\gamma (t), V) \left| \gamma '(t)\right| \, dt \le 2 \overline{d}(x, y, V). \end{aligned}$$

Let

$$\begin{aligned} t_0 = \sup \left\{ t \in [0, 1]: \left| x - \gamma (t)\right| \le \frac{A}{\overline{m}(x, V)}\right\} , \end{aligned}$$

and note that \(t_0 < 1\) since \(\left| x - y\right| > \frac{A}{\overline{m}(x, V)}\).

If \(\left| \gamma (t_0) - y\right| \le \frac{1}{\overline{m}(\gamma (t_0), V)}\), then

$$\begin{aligned} \left| x - y\right| \le \left| x - \gamma (t_0)\right| + \left| \gamma (t_0) - y\right| \le \frac{A}{\overline{m}(x, V)} + \frac{1}{\overline{m}(\gamma (t_0), V)} \le \frac{\tilde{A}}{\overline{m}(x, V)}, \end{aligned}$$

since Lemma 10 implies that \(\overline{m}(\gamma (t_0), V) \simeq _{(d, n, p, C_V)} \overline{m}(x, V)\). In this case, we may repeat the arguments from the previous paragraph to reach the conclusion of the theorem.
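In fact, \(\tilde{A}\) can be taken explicitly: writing \(C_A = C_A\left( d, n, p, C_V \right) \) for the comparability constant from Lemma 10, so that \(\overline{m}(\gamma (t_0), V) \ge C_A^{-1} \overline{m}(x, V)\), we have

$$\begin{aligned} \frac{A}{\overline{m}(x, V)} + \frac{1}{\overline{m}(\gamma (t_0), V)} \le \frac{A}{\overline{m}(x, V)} + \frac{C_A}{\overline{m}(x, V)} = \frac{A + C_A}{\overline{m}(x, V)}, \end{aligned}$$

so \(\tilde{A} = A + C_A\) suffices.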

To proceed, we assume that \(\left| x - y\right| > \frac{A}{\overline{m}(x, V)}\) and \(\left| \gamma (t_0) - y\right| > \frac{1}{\overline{m}(\gamma (t_0), V)}\). Since \(\overline{m}(\cdot , V)\) is locally bounded above and below, we can recursively define a finite sequence \(0< t_0< t_1< \cdots < t_\ell \le 1\) as follows. For \(j = 1, \ldots , \ell \), let

$$\begin{aligned} t_j = \inf \left\{ t \in [t_{j-1}, 1]: \left| \gamma (t) - \gamma (t_{j-1}) \right| \ge \frac{1}{\overline{m}(\gamma (t_{j-1}), V)} \right\} . \end{aligned}$$

Then set \(B_{j} = B\left( \gamma (t_{j}), \frac{1}{\overline{m}(\gamma (t_{j}), V)} \right) \). Define \(I_{j} = [t_{j}, t_{j+1})\) for \(j = 0, 1, \ldots , \ell -1\), and set \(I_{\ell } = \left[ t_\ell , 1\right] \). Observe that for \(j = 0, 1, \ldots , \ell \),

$$\begin{aligned} \gamma (t) \in B_j \text { for all } t \in I_{j}. \end{aligned}$$

In particular, Lemma 10 implies that \(\overline{m}\left( \gamma \left( t \right) , V \right) \simeq _{(d, n, p, C_V)} \overline{m}\left( \gamma \left( t_{j} \right) , V \right) \) whenever \(t \in I_{j}\). Moreover, for \(j = 0, 1, \ldots , \ell -1\),

$$\begin{aligned} \left| \gamma (t_{j+1}) - \gamma (t_{j})\right| = \frac{1}{\overline{m}(\gamma (t_{j}), V)}. \end{aligned}$$

Thus,

$$\begin{aligned}&\int _0^1 \overline{m}(\gamma (t), V) \left| \gamma '(t)\right| \, dt \ge \sum _{j = 0}^{\ell -1} \int _{I_j} \overline{m}(\gamma (t), V) \left| \gamma '(t)\right| \, dt\\&\quad \gtrsim _{(d, n, p, C_V)} \sum _{j = 0}^{\ell -1} \overline{m}(\gamma (t_{j}), V) \int _{I_j} \left| \gamma '(t)\right| \, dt \\&\quad \ge \sum _{j = 0}^{\ell -1} \overline{m}(\gamma (t_{j}), V) \left| \gamma (t_{j+1}) - \gamma (t_{j})\right| = \ell . \end{aligned}$$

Recalling how we defined \(\gamma \), this shows that

$$\begin{aligned} \ell \le C_4 \, \overline{d}(x, y, V), \end{aligned}$$
(88)

where \(C_4 = C_4\left( d, n, p, C_V \right) \).

We defined \(t_0\) so that \(\left| x - \gamma (t)\right| \ge \frac{A}{\overline{m}(x, V)}\) whenever \(t \ge t_0\). Therefore, by the choice of A from (82), \(x \not \in 2B_j\) for each \(j = 0, \ldots , \ell \). This means that if \(U = \Gamma ^V(\cdot , x)\), then each column \(\vec {u}_i\) satisfies \(\mathcal {L}_V\vec {u}_i = 0\) weakly on each \(2B_j\). Thus, repeated applications of the scale-invariant Harnack inequality from (SIH) show that for any \(\vec {e} \in \mathbb {S}^{d-1}\),

$$\begin{aligned} \left| \left\langle U\left( \gamma \left( t_0 \right) \right) \vec {e}, \vec {e} \right\rangle \right|&\le \widetilde{C}_{\text {H}} \left| \left\langle U\left( \gamma \left( t_1 \right) \right) \vec {e}, \vec {e} \right\rangle \right| \le \cdots \le \widetilde{C}_{\text {H}}^\ell \left| \left\langle U\left( \gamma \left( t_\ell \right) \right) \vec {e}, \vec {e} \right\rangle \right| \\&\le \widetilde{C}_{\text {H}}^{\ell +1} \left| \left\langle U\left( \gamma \left( 1 \right) \right) \vec {e}, \vec {e} \right\rangle \right| , \end{aligned}$$

where \(\widetilde{C}_{\text {H}} = C_{\text {H}}^\beta \) and \(\beta \) depends on \(c_S\) from (SIH). Since \(\gamma (1) = y\), we obtain

$$\begin{aligned} \left| \left\langle \Gamma ^V\left( y, x \right) \vec {e}, \vec {e} \right\rangle \right|&\ge \widetilde{C}_{\text {H}}^{-\left( \ell +1 \right) } \left| \left\langle \Gamma ^V\left( \gamma (t_0), x \right) \vec {e}, \vec {e} \right\rangle \right| \ge \widetilde{C}_{\text {H}}^{-\left( \ell +1 \right) } \frac{C_3}{\left| \gamma (t_0) -x\right| ^{n-2}}, \end{aligned}$$

where (87) was applicable since \(\left| \gamma (t_0) - x\right| \le \frac{A}{\overline{m}(x, V)}\). Continuing on, since \(\left| \gamma (t_0) - x\right| < \left| x-y\right| \), we get that for any \(\vec {e} \in \mathbb {S}^{d-1}\),

$$\begin{aligned} \left| \left\langle \Gamma ^V\left( y, x \right) \vec {e}, \vec {e} \right\rangle \right|&\ge \frac{C_3 \exp \left( -\ell \log \widetilde{C}_{\text {H}} \right) }{\widetilde{C}_{\text {H}} \left| x-y\right| ^{n-2}} \ge \frac{C_3}{\widetilde{C}_{\text {H}} } \frac{\exp \left( - C_4 \log \widetilde{C}_{\text {H}} \, \overline{d}(x, y, V) \right) }{\left| x-y\right| ^{n-2}}, \end{aligned}$$

where we have applied (88) in the final step. As this bound is symmetric in x and y, the conclusion (81) follows. \(\square \)

Finally, let us briefly discuss the connection between our upper and lower auxiliary functions and the Landscape functions that were mentioned in the introduction.

Remark 17

For all \(x \in \mathbb {R}^n\), define

$$\begin{aligned} u(x) = \int _{\mathbb {R}^n} \left| \Gamma ^V(x, y)\right| \, dy. \end{aligned}$$

Decomposing \(\mathbb {R}^n\) into the disjoint union of the ball \(B\left( x, \frac{1}{\underline{m}\left( x,V \right) } \right) \) and the annuli \(B\left( x, \frac{2^{j}}{\underline{m}\left( x,V \right) } \right) \backslash B\left( x, \frac{2^{j-1}}{\underline{m}\left( x,V \right) } \right) \) for \(j \in \mathbb {N}\), and assuming the conditions of Theorem 11, we argue as in Lemma 23 to show that \(u(x) \lesssim \underline{m}\left( x,V \right) ^{-2} \) for all \(x \in \mathbb {R}^n\). On the other hand, for all \(x \in \mathbb {R}^n\), Remark 16 tells us that (under appropriate conditions)

$$\begin{aligned} u(x) \ge \int _{B\left( x, \frac{1}{\overline{m}\left( x,V \right) } \right) } \left| \Gamma ^V(x, y)\right| \, dy \gtrsim \int _{B\left( x, \frac{1}{\overline{m}\left( x,V \right) } \right) } \frac{e^{-\varepsilon \overline{d}(x, y, V)}}{\left| x - y\right| ^{n-2}} \, dy \gtrsim \overline{m}\left( x,V \right) ^{-2}. \end{aligned}$$
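The final comparison is a direct computation: on \(B\left( x, \frac{1}{\overline{m}\left( x,V \right) } \right) \) we have \(\overline{d}(x, y, V) \lesssim 1\) (as in the proof of Lemma 11), so \(e^{-\varepsilon \overline{d}(x, y, V)} \gtrsim 1\) there, and with \(r = \frac{1}{\overline{m}\left( x,V \right) }\), integration in polar coordinates gives

$$\begin{aligned} \int _{B\left( x, r \right) } \frac{dy}{\left| x - y\right| ^{n-2}} = \omega _{n-1} \int _0^r \rho ^{2-n} \rho ^{n-1} \, d\rho = \frac{\omega _{n-1}}{2} r^2 = \frac{\omega _{n-1}}{2} \, \overline{m}\left( x,V \right) ^{-2}, \end{aligned}$$

where \(\omega _{n-1}\) denotes the surface measure of the unit sphere \(\mathbb {S}^{n-1}\).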

As mentioned in the introduction, this connection was previously found in [16] for scalar elliptic operators \(\mathcal {L}_v\) with a nonnegative scalar potential v on \(\mathbb {R}^n\). In the scalar setting, it holds that \(\overline{m}\left( x,v \right) = \underline{m}\left( x,v \right) \) for all \(x \in \mathbb {R}^n\). If we denote this common function by \(m(\cdot , v),\) it follows that \(u(x) \simeq m(x, v)^{-2}\) for all \(x \in \mathbb {R}^n\). Moreover, since the fundamental solution of such an operator is positive, we see that u satisfies \(\mathcal {L}_v u = 1\), which means that u inherits desirable qualities that are not satisfied by \(m(\cdot , v)\). We refer the reader to Theorems 1.18 and 1.31 in [16] for additional details.