Abstract
A very important question in geometric measure theory is how geometric features of a set translate into analytic information about it. Reifenberg (Bull Am Math Soc 66:312–313, 1960) proved that if a set is well approximated by planes at every point and at every scale, then the set is a bi-Hölder image of a plane. It is known today that Carleson-type conditions on these approximating planes guarantee a bi-Lipschitz parameterization of the set. In this paper, we consider an n-Ahlfors regular rectifiable set \(M \subset \mathbb {R}^{n+d}\) that satisfies a Poincaré-type inequality involving Lipschitz functions and their tangential derivatives. Then, we show that a Carleson-type condition on the oscillations of the tangent planes of M guarantees that M is contained in a bi-Lipschitz image of an n-plane. We also explore the Poincaré-type inequality considered here and show that it is in fact equivalent to other Poincaré-type inequalities considered on general metric measure spaces.
1 Introduction
Finding bi-Lipschitz parameterizations of sets is a central question in geometric measure theory and geometric analysis. A Lipschitz function on a metric space plays the role played by a smooth function on a manifold, and a bi-Lipschitz function plays the role of a diffeomorphism. Many concepts in metric spaces, such as metric dimensions and Poincaré inequalities, are preserved under bi-Lipschitz mappings. Moreover, a bi-Lipschitz parameterization of a set by Euclidean space leads to its uniform rectifiability. Uniform rectifiability is a quantified version of rectifiability which is well adapted to the study of problems in harmonic analysis on non-smooth sets.
The type of parameterizations discussed in this paper first appeared in 1960 when Reifenberg [15] showed that if a closed set \(M \subset \mathbb {R}^{n+d}\) is well approximated by affine n-planes at every point and every scale, then M is a bi-Hölder image of \(\mathbb {R}^{n}\). Such a set is called a Reifenberg flat set. In recent years, there has been renewed interest in this result and its proof. In particular, Reifenberg type parameterizations have been used to get good parameterizations of many spaces such as chord arc surfaces with small constant (see [16, 17]), and limits of manifolds with Ricci curvature bounded from below (see [4, 5]). Moreover, Reifenberg’s theorem has been refined to get better parameterizations of a set: bi-Lipschitz parameterizations (see [6, 7, 14, 19]). In fact, it is well known today, due to the authors of the latter references, that Carleson-type conditions are the correct conditions to study when seeking necessary and sufficient conditions for bi-Lipschitz parameterizations of sets. For example, in [19], Toro considers a Carleson condition on the Reifenberg flatness of M that guarantees its bi-Lipschitz parameterization. In [7], David and Toro consider a Carleson condition on the Jones beta numbers \(\beta _{\infty }\) and on the (possibly smaller) \(\beta _{1}\) numbers that guarantees the same result. In [14], the author studies a Carleson-type condition on the oscillation of the unit normals to an n-rectifiable set M of co-dimension 1, that guarantees its bi-Lipschitz parameterization. An n-rectifiable set \(M \subset \mathbb {R}^{n+d}\) is a generalization of a smooth n-manifold in \(\mathbb {R}^{n+d}\). Rectifiable sets are characterized by having (approximate) tangent planes (see Definition 2.4) at \(\mathcal {H}^{n}\)-almost every point. Moreover, in the special case when the rectifiable set M has co-dimension 1, then M has an (approximate) unit normal \(\nu \) (see Remark 2.5) at \(\mathcal {H}^{n}\)-almost every point. 
In fact, in [14], the author considers an n-Ahlfors regular rectifiable set \(M \subset \mathbb {R}^{n+1}\), of co-dimension 1, that satisfies the following Poincaré-type inequality for \(d=1\) and \(\lambda =2\):
For all \(x \in M\), \(r > 0\), and f a Lipschitz function on \(\mathbb {R}^{n+d}\), we have
$$ \fint_{B_{r}(x)} |f(y) - f_{x,r}| \, d\mu (y) \le C_{P} \, r \left( \fint_{B_{\lambda r}(x)} |\nabla ^{M}f(y)|^{2} \, d\mu (y) \right) ^{\frac{1}{2}}, \quad\quad (1.1) $$
where \(C_{P}\) denotes the Poincaré constant that appears here, \(\lambda \ge 1\) is the dilation constant, \(\mu = \mathcal {H}^{n} \llcorner M\) is the Hausdorff measure restricted to M, \(f_{x,r}\) is the average of the function f on \(B_{r}(x)\), \(B_{r}(x)\) is the Euclidean ball in the ambient space \(\mathbb {R}^{n+d}\), and \(\nabla ^{M}f(y)\) denotes the tangential derivative of f (see Definition 2.6).
Then, the author shows that a Carleson-type condition on the oscillation of the unit normal \(\nu \) to M guarantees a bi-Lipschitz parameterization of M.
Theorem 1.1
(see [14, Theorem 1.5]) Let \(M \subset B_{2}(0) \subset \mathbb {R}^{n+1}\) be an n-Ahlfors regular rectifiable set containing the origin, and let \(\mu = \mathcal {H}^{n} \llcorner M\) be the Hausdorff measure restricted to M. Assume that M satisfies the Poincaré-type inequality (1.1) with \(d=1\) and \(\lambda = 2\). There exists \(\epsilon _{0} = \epsilon _{0}(n, C_{M}, C_{P})>0\), such that if for some choice of unit normal \(\nu \) to M, we have
$$ \int _{0}^{1} \fint_{B_{r}(x)} |\nu (y) - \nu _{x,r}|^{2} \, d\mu (y) \, \frac{dr}{r} \le \epsilon _{0} \quad \text {for every } x \in M, \quad\quad (1.2) $$
then \(M \cap B_{\frac{1}{10^{4}}}(0)\) is contained in the image of an affine n-plane by a bi-Lipschitz mapping, with bi-Lipschitz constant depending only on n, \(C_{M}\) and \(C_{P}\).
In this paper, we generalize Theorem 1.1 to higher co-dimensions d and arbitrary dilation constants \(\lambda \ge 1\). Before stating the theorem, let us introduce some notation. Suppose that \(M \subset \mathbb {R}^{n+d}\) is an n-Ahlfors regular rectifiable set that satisfies the Poincaré-type inequality (1.1). Fix \(x \in M\) and \(r >0\). Let \(y \in M \cap B_{r}(x)\) be such that the approximate tangent plane \(T_{y}M\) of M at the point y exists, and denote by \(\pi _{T_{y}M}\) the orthogonal projection of \(\mathbb {R}^{n+d}\) on \(T_{y}M\). Using the standard basis of \(\mathbb {R}^{n+d}\), \(\{ e_{1} , \ldots , e_{n+d} \}\), we can view \(\pi _{T_{y}M}\) as an \((n+d) \times (n+d)\) matrix whose jth column is the vector \(\pi _{T_{y}M}(e_{j})\). Thus, we denote \(\pi _{T_{y}M}\) by the matrix \(\big (a_{ij}(y)\big )_{ij}\). Finally, let \(A_{x,r} = ((a_{ij})_{x,r} )_{ij}\) be the matrix whose ijth entry is the average of the function \(a_{ij}\) in the ball \(B_{r}(x)\).
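As a purely numerical illustration of this notation (a Python sketch using numpy; the helper `projection_matrix` and the sample tangent directions are ours, not from the paper), one can realize each \(\pi _{T_{y}M}\) as a matrix and form the entrywise average \(A_{x,r}\):

```python
import numpy as np

def projection_matrix(basis):
    """Orthogonal projection onto the span of the rows of `basis`."""
    q, _ = np.linalg.qr(basis.T)  # columns of q: orthonormal basis of the plane
    return q @ q.T

# Two hypothetical tangent lines in R^2 (n = 1, d = 1) at nearby points y of M.
P1 = projection_matrix(np.array([[1.0, 0.0]]))
P2 = projection_matrix(np.array([[1.0, 0.1]]))

# A_{x,r}: the matrix whose ij-th entry is the average of a_ij over the sample.
A = (P1 + P2) / 2

print(np.allclose(A, A.T))    # True: the average is still symmetric
print(np.allclose(A @ A, A))  # False: but it is no longer a projection
```

As the sketch shows, an average of projections is symmetric but in general no longer a projection itself; this is precisely why the spectral analysis of Lemma 3.1 below is needed.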
Theorem 1.2
Let \(M \subset B_{2}(0) \subset \mathbb {R}^{n+d}\) be an n-Ahlfors regular rectifiable set containing the origin, and let \(\mu = \mathcal {H}^{n} \llcorner M\) be the Hausdorff measure restricted to M. Assume that M satisfies the Poincaré-type inequality (1.1). There exist \(\epsilon _{0}= \epsilon _{0}(n,d, C_{M},C_{P}) >0\) and \(\theta _{0} = \theta _{0}(\lambda ) < 1\), such that if
$$ \int _{0}^{1} \fint_{B_{r}(x)} |\pi _{T_{y}M} - A_{x,r}|^{2} \, d\mu (y) \, \frac{dr}{r} \le \epsilon _{0} \quad \text {for every } x \in M, \quad\quad (1.3) $$
where \(|\pi _{T_{y}M} - A_{x,r}|\) denotes the Frobenius normFootnote 1 of \(\pi _{T_{y}M} - A_{x,r}\), then there exist an onto K-bi-Lipschitz map \(g: \mathbb {R}^{n+d} \rightarrow \mathbb {R}^{n+d}\), where the bi-Lipschitz constant \(K = K(n,d, C_{M},C_{P})\), and an n-dimensional plane \(\Sigma _{0}\), with the following properties:
and
where \(C_{0}= C_{0}(n,d,C_{M},C_{P})\). Moreover,
and
Notice that the conclusion of Theorem 1.2 states that M is (locally) contained in a bi-Lipschitz image of an n-plane, instead of M being exactly a (local) bi-Lipschitz image of an n-plane. This is to be expected, since we do not assume that M is Reifenberg flat, and thus we have to deal with the fact that M might have holes. However, if we assume, in addition to the hypotheses of Theorem 1.2, that M is Reifenberg flat, then we do obtain that M is in fact (locally) a bi-Lipschitz image of an n-plane. We prove this in this paper as a corollary to Theorem 1.2.
A natural question is whether the hypotheses of Theorem 1.2, that is, the Ahlfors regularity of M, the Poincaré inequality (1.1), and the Carleson condition (1.3), imply that M is Reifenberg flat. An affirmative answer to this question would directly imply (by the paragraph above) that the conclusion of Theorem 1.2 should be that M is exactly a bi-Lipschitz image of an n-plane, instead of M being just contained in a bi-Lipschitz image of an n-plane. A negative answer would show that the conclusion of Theorem 1.2 is the best that we can hope for. It is not surprising that the Poincaré inequality (1.1) is the correct condition to explore in order to answer this question (which, as we discuss below, turns out to be negative). In fact, it is already known that (1.1) encodes geometric properties of the set M.
Let \((M, d_{0}, \mu )\) be a metric measure space, where \(M \subset B_{2}(0)\) is an n-Ahlfors regular rectifiable set in \(\mathbb {R}^{n+d}\), \(\mu = \mathcal {H}^{n} \llcorner M\) is the measure that lives on M, and \(d_{0}\) is the metric on M which is the restriction of the standard Euclidean metric on \(\mathbb {R}^{n+d}\). In [14], the author proves that the Poincaré inequality (1.1) implies that M is quasiconvex. More precisely,
Definition 1.3
A metric space (X, d) is \(\kappa _{1}\)-quasiconvex if there exists a constant \(\kappa _{1} \ge 1\) such that for any two points x and y in X, there exists a rectifiable curve \(\gamma \) in X, joining x and y, such that \(\text {length}(\gamma ) \le \kappa _{1} \, d(x,y)\).
Theorem 1.4
(see [14, Theorem 5.5])Footnote 2 Let \((M, d_{0}, \mu )\) be as discussed above. Suppose that M satisfies the Poincaré-type inequality (1.1). Then \((M, d_{0}, \mu )\) is \(\kappa _{1}\)-quasiconvex, with \(\kappa _{1}= \kappa _{1}(n, \lambda , C_{M}, C_{P})\).
There are many Poincaré-type inequalities found in the literature that imply quasiconvexity (see for example [3, 8, 10, 11]). To state a couple of the main ones, let \((X, d, \nu )\) be a measure space endowed with a metric d and a positive complete Borel regular measure \(\nu \) supported on X. Denote by \(B^{X}_{r}(x)\) the metric ball in X with center \(x \in X\) and radius \(r>0\). Moreover, assume that \(0< \nu (B_{r}^{X}(x)) < \infty \) for all \(x \in X\) and \(r>0\).
Definition 1.5
(p-Poincaré inequality) Let \(p \ge 1\). \((X, d, \nu )\) is said to admit a p-Poincaré inequality if there exist constants \(\kappa \ge 1\) and \(\lambda \ge 1\) such that for any measurable function \(u: X \rightarrow \mathbb {R}\) and for any upper gradient \(\rho \) (see Definition 2.12) of u, the following holds
$$ \fint_{B^{X}_{r}(x)} |u - u_{x,r}| \, d\nu \le \kappa \, r \left( \fint_{B^{X}_{\lambda r}(x)} \rho ^{p} \, d\nu \right) ^{\frac{1}{p}}, \quad\quad (1.8) $$
where \(x \in X\), \(r>0\), and \(u_{x,r} = \fint_{B^{X}_{r}(x)} u \, d\nu \) is the average of u on \(B^{X}_{r}(x)\).
Definition 1.6
(Lip-Poincaré inequality) Let \(p \ge 1\). \((X, d, \nu )\) is said to admit a Lip-Poincaré inequality if there exist constants \(\kappa \ge 1\) and \(\lambda \ge 1\) such that for every Lipschitz function f on X, and for every \(x \in X\) and \(r>0\), we have
$$ \fint_{B^{X}_{r}(x)} |f - f_{x,r}| \, d\nu \le \kappa \, r \left( \fint_{B^{X}_{\lambda r}(x)} (\text {Lip}f)^{p} \, d\nu \right) ^{\frac{1}{p}} \quad\quad (1.9) $$
(see Definition 2.15 for the definition of Lipf).
These Poincaré inequalities are a priori different because the right-hand side varies according to the notion of “derivative” used on the metric space. However, Keith has shown (see [10, 11]) that if \((X, d, \nu )\) is a complete metric measure space with \(\nu \) a doubling measure, then (1.8) and (1.9) are equivalent. It turns out that the Poincaré-type inequality (1.1) is also related to (1.8) and (1.9).
In this paper, we take \((M, d_{0}, \mu )\) as described above and prove that in this setting, the Poincaré-type inequalities (1.1) [or a more generalized version of it, see (1.12) below], (1.8), and (1.9) are equivalent.
Theorem 1.7
Let \(p \ge 1\), and let \((M, d_{0}, \mu )\) be a metric measure space, where \(M \subset B_{2}(0)\) is an n-Ahlfors regular rectifiable set in \(\mathbb {R}^{n+d}\), \(\mu = \mathcal {H}^{n} \llcorner M\) is the measure that lives on M, and \(d_{0}\) is the metric on M which is the restriction of the standard Euclidean metric on \(\mathbb {R}^{n+d}\). Then, the following are equivalent:
- (i) There exist constants \(\kappa \ge 1\) and \(\lambda \ge 1\) such that for any measurable function \(u: M \rightarrow \mathbb {R}\), for any upper gradient \(\rho \) of u, and for every \(x \in M\) and \(r>0\), we have
$$ \fint_{B_{r}(x)} |u - u_{x,r}| \, d\mu \le \kappa \, r \left( \fint_{B_{\lambda r}(x)} \rho ^{p} \, d\mu \right) ^{\frac{1}{p}}. \quad\quad (1.10) $$
- (ii) There exist constants \(\kappa \ge 1\) and \(\lambda \ge 1\) such that for every Lipschitz function f on M, and for every \(x \in M\) and \(r>0\), we have
$$ \fint_{B_{r}(x)} |f - f_{x,r}| \, d\mu \le \kappa \, r \left( \fint_{B_{\lambda r}(x)} (\text {Lip}f)^{p} \, d\mu \right) ^{\frac{1}{p}}. \quad\quad (1.11) $$
- (iii) There exist constants \(\kappa \ge 1\) and \(\lambda \ge 1\) such that for every Lipschitz function f on \(\mathbb {R}^{n+d}\), and for every \(x \in M\) and \(r>0\), we have
$$ \fint_{B_{r}(x)} |f - f_{x,r}| \, d\mu \le \kappa \, r \left( \fint_{B_{\lambda r}(x)} |\nabla ^{M}f|^{p} \, d\mu \right) ^{\frac{1}{p}}. \quad\quad (1.12) $$
Theorem 1.7 is interesting in its own right, as it shows that the Poincaré inequality (1.1) [or more generally, (1.12)] is equivalent to the other usual Poincaré-type inequalities on metric spaces that imply quasiconvexity. Moreover, Theorem 1.7 opens the door to many examples of spaces satisfying the Poincaré inequality (1.12), as there are many examples in the literature of spaces satisfying the p-Poincaré and Lip-Poincaré inequalities (see for example [1, 2, 9, 12]). This allows us to get an example of a set that is not Reifenberg flat, and yet satisfies all the hypotheses of Theorem 1.2.
Theorem 1.8
There exists a non-Reifenberg flat, n-Ahlfors regular, rectifiable set \(M \subset B_{2}(0) \subset \mathbb {R}^{n+d}\) that satisfies all the hypotheses of Theorem 1.2.
Theorem 1.8 shows that the hypotheses of Theorem 1.2 on the set M are not strong enough to guarantee its Reifenberg flatness, and thus the conclusion of Theorem 1.2 is optimal.
The paper is structured as follows: in Sect. 2, we introduce some definitions and preliminaries. In Sect. 3, we prove Theorem 1.2. Moreover, we prove that Theorem 1.1 follows as a corollary from Theorem 1.2. Section 4 is dedicated to proving that the Poincaré inequality (1.12) is equivalent to the p-Poincaré and the Lip-Poincaré inequalities. Finally, in the last section, we prove Theorem 1.8 by constructing a concrete example of a set that is not Reifenberg flat, yet satisfies the hypotheses of Theorem 1.2.
2 Preliminaries
Throughout this paper, our ambient space is \(\mathbb {R}^{n+d}\). \(B_{r}(x)\) denotes the open ball with center x and radius r in \(\mathbb {R}^{n+d}\), while \(\bar{B}_{r}(x)\) denotes the corresponding closed ball. \(d(\cdot , \cdot )\) denotes the distance function from a point to a set. \(\mathcal {H}^{n}\) is the n-Hausdorff measure. Finally, constants may vary from line to line, and the parameters they depend on will always be specified in parentheses. For example, C(n, d) is a constant that depends on n and d and that may vary from line to line.
We begin with the definitions needed from Sect. 3 onwards.
Definition 2.1
Let \(M \subset \mathbb {R}^{N_{1}}\). A function \( f: M \rightarrow \mathbb {R}^{N_{2}}\) is called Lipschitz if there exists a constant \(K>0\), such that for all \(x, \, y \in M\) we have
The smallest such constant is called the Lipschitz constant and is denoted by \(L_{f}\).
Definition 2.2
A function \( f: \mathbb {R}^{N_{1}} \rightarrow \mathbb {R}^{N_{2}}\) is called K-bi-Lipschitz if there exists a constant \(K>0\), such that for all \(x, \, y \in \mathbb {R}^{N_{1}}\) we have
\(K^{-1} |x-y| \le |f(x) - f(y)| \le K \, |x-y|.\)
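As a small numerical illustration of this definition (a Python sketch; the sampling-based check and function names are ours), one can compute the best bi-Lipschitz constant of a map over a finite sample of points:

```python
import itertools, math

def bilipschitz_constant(f, points):
    """Smallest K >= 1 with K^-1 |x-y| <= |f(x)-f(y)| <= K |x-y| on the sample."""
    K = 1.0
    for p, q in itertools.combinations(points, 2):
        d = math.dist(p, q)          # |x - y|
        df = math.dist(f(p), f(q))   # |f(x) - f(y)|
        K = max(K, df / d, d / df)
    return K

# f(x, y) = (2x, y) stretches one axis by 2, so K = 2 on this sample.
f = lambda p: (2 * p[0], p[1])
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(bilipschitz_constant(f, pts))  # 2.0
```

On a finite sample this only gives a lower bound for the true bi-Lipschitz constant of the map on all of \(\mathbb {R}^{N_{1}}\).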
Let’s introduce the class of n-rectifiable sets, and the definition of approximate tangent planes.
Definition 2.3
Let \(M \subset \mathbb {R}^{n+d}\) be an \(\mathcal {H}^{n}\)-measurable set. M is said to be countably n-rectifiable if
$$ M \subset M_{o} \cup \bigcup _{i=1}^{\infty } f_{i}(A_{i}), $$
where \( \mathcal {H}^{n}(M_{o}) = 0\), \( f_{i} : A_{i} \rightarrow \mathbb {R}^{n+d}\) is Lipschitz, and \(A_{i} \subset \mathbb {R}^{n}\), for \(i = 1, 2, \ldots \)
Definition 2.4
Let M be an \(\mathcal {H}^{n}\)-measurable subset of \(\mathbb {R}^{n+d}\). We say that the n-dimensional subspace P(x) is the approximate tangent space of M at x if
$$ \lim _{\lambda \rightarrow 0} \frac{1}{\lambda ^{n}} \int _{M} \varphi \left( \frac{y-x}{\lambda } \right) \, d\mathcal {H}^{n}(y) = \int _{P(x)} \varphi \, d\mathcal {H}^{n} \quad \text {for all } \varphi \in C_{c}^{1}(\mathbb {R}^{n+d}). $$
Remark 2.5
Notice that if it exists, P(x) is unique. From now on, we shall denote the tangent space of M at x by \(T_{x}M\). Moreover, in the special case when M has co-dimension 1, then one can define the unit normal \(\nu \) to M at the point \(x \in M\) to be the unit normal to \(T_{x}M\). Thus, the unit normal \(\nu \) exists at every point \(x \in M\) that admits a tangent plane, and of course, there are two choices for the direction of the unit normal.
It is well known (see [18, Theorem 11.6]) that n-rectifiable sets have tangent planes at \(\mathcal {H}^{n}\) almost every point in the set.
Definition 2.6
Let f be a real-valued Lipschitz function on \(\mathbb {R}^{n+d}\). The tangential derivative of f at the point \(y \in M\) is denoted by \(\nabla ^{M}f(y)\) and defined as follows:
$$ \nabla ^{M}f(y) := \nabla (f|_{L})(y), $$
where \(L := y + T_{y}M\), \(f|_{L}\) is the restriction of f on the affine subspace L, and \(\nabla (f|_{L})\) is the usual gradient of \(f|_{L}\).
In the special case when f is a smooth function on \(\mathbb {R}^{n+d}\), we have
$$ \nabla ^{M}f(y) = \pi _{T_{y}M} \big ( \nabla f(y) \big ), $$
where \(\pi _{T_{y}M}\) is the orthogonal projection of \(\mathbb {R}^{n+d}\) on \(T_{y}M\), and \(\nabla f\) is the usual gradient of f.
Note that \(\nabla ^{M}f(y)\) exists at \(\mathcal {H}^{n}\)-almost every point in M.
We also need to define the notion of Reifenberg flatness:
Definition 2.7
Let M be an n-dimensional subset of \(\mathbb {R}^{n+d}\). We say that M is \(\epsilon \)-Reifenberg flat for some \(\epsilon >0\), if for every \(x \in M\) and \(0 < r \le \frac{1}{10^{4}}\), we can find an n-dimensional affine subspace P(x, r) of \(\mathbb {R}^{n+d}\) that contains x such that
$$ \sup _{y \in M \cap B_{r}(x)} d\big (y, P(x,r)\big ) \le \epsilon \, r $$
and
$$ \sup _{y \in P(x,r) \cap B_{r}(x)} d\big (y, M\big ) \le \epsilon \, r. $$
Remark 2.8
Notice that the above definition is only interesting if \(\epsilon \) is small, since any set is 1-Reifenberg flat.
In the proof of our Theorem 1.2, we need to measure the distance between two n-dimensional planes. We do so in terms of normalized local Hausdorff distance:
Definition 2.9
Let x be a point in \(\mathbb {R}^{n+d}\) and let \(r >0\). Consider two closed sets \(E,\,F \subset \mathbb {R}^{n+d}\) such that both sets meet the ball \(B_{r}(x)\). Then,
$$ d_{x,r}(E,F) = \frac{1}{r} \, \max \left\{ \sup _{y \in E \cap B_{r}(x)} d(y,F) , \; \sup _{y \in F \cap B_{r}(x)} d(y,E) \right\} $$
is called the normalized Hausdorff distance between E and F in \(B_{r}(x)\).
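For intuition, here is a discrete Python sketch of this distance (the finite point sets stand in for closed sets, and the helper names are ours; we take the larger of the two one-sided deviations, one common normalization):

```python
import math

def normalized_hausdorff(E, F, x, r):
    """d_{x,r}(E, F) for finite point sets E, F meeting the ball B_r(x)."""
    def d_point_set(p, S):
        return min(math.dist(p, q) for q in S)  # distance from a point to a set
    sup_EF = max(d_point_set(p, F) for p in E if math.dist(p, x) < r)
    sup_FE = max(d_point_set(p, E) for p in F if math.dist(p, x) < r)
    return max(sup_EF, sup_FE) / r

E = [(-1, 0), (0, 0), (1, 0)]        # points on the x-axis
F = [(-1, 0.2), (0, 0.2), (1, 0.2)]  # the same points shifted up by 0.2
print(normalized_hausdorff(E, F, (0, 0), 2.0))  # 0.1
```

The division by r makes the quantity scale-invariant: shrinking both sets and the ball by a common factor leaves the distance unchanged.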
Let us recall the definition of an n-Ahlfors regular measure and an n-Ahlfors regular set:
Definition 2.10
Let \(M \subset \mathbb {R}^{n+d}\) be a closed, \(\mathcal {H}^{n}\)-measurable set, and let \(\mu = \mathcal {H}^{n} \llcorner M\) be the n-Hausdorff measure restricted to M. We say that \(\mu \) is n-Ahlfors regular if there exists a constant \(C_{M} \ge 1\), such that for every \(x \in M\) and \( 0< r \le 1\), we have
$$ C_{M}^{-1} \, r^{n} \le \mu \big (B_{r}(x)\big ) \le C_{M} \, r^{n}. $$
In such a case, the set M is called an n-Ahlfors regular set, and \(C_{M}\) is referred to as the Ahlfors regularity constant.
Let us now move to definitions and notations needed in Sects. 4 and 5. In these sections, (X, d) denotes a space X endowed with a metric d. \(B^{X}_{r}(x)\) denotes the open metric ball of center \(x \in X\) and radius \(r>0\). Moreover, \((X, d, \nu )\) denotes a measure space endowed with a metric d and a positive complete Borel regular measure \(\nu \) supported on X such that \(0< \nu (B_{r}^{X}(x)) < \infty \) for all \(x \in X\) and \(r>0\).
Definition 2.11
Let \((X,d, \nu )\) be a metric measure space. We say that \(\nu \) is a doubling measure if there is a constant \(\kappa _{0} >0\) such that
$$ \nu \big (B^{X}_{2r}(x)\big ) \le \kappa _{0} \, \nu \big (B^{X}_{r}(x)\big ), $$
where \(x \in X\) and \(r>0\).
In Sects. 4 and 5, a curve \(\gamma \) in a metric space (X, d) is a continuous non-constant map from a compact interval \(I \subset \mathbb {R}\) into X. \(\gamma \) is said to be rectifiable if it has finite length, where the latter is denoted by \(l(\gamma )\). Thus, any rectifiable curve can be parametrized by arc length, and we will always assume that it is.
Let us now define the notions of upper gradients, p-weak upper gradients, and the Local Lipschitz constant function.
Definition 2.12
A non-negative Borel function \(\rho : X \rightarrow [0, \infty ]\) is said to be an upper gradient of a function \(u: X \rightarrow \mathbb {R}\) if
$$ |u(\gamma (0)) - u(\gamma (l_{\gamma }))| \le \int _{\gamma } \rho \, ds $$
for any rectifiable curve \(\gamma : [0, l_{\gamma }] \rightarrow X\).
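As a quick sanity check of this definition (a Python sketch; the example function and curve are our choices), take \(u(x,y) = x\) on the plane, for which the constant function \(\rho \equiv 1 \ge |\nabla u|\) is an upper gradient, and test the inequality along one unit-speed circular arc:

```python
import math

def curve(t):
    """Unit-speed parameterization of the unit circle."""
    return (math.cos(t), math.sin(t))

def u(p):
    return p[0]  # u(x, y) = x, so |grad u| = 1 everywhere

a, b = 0.0, math.pi / 2
length = b - a                         # unit speed: length = parameter interval
lhs = abs(u(curve(a)) - u(curve(b)))   # |u(gamma(0)) - u(gamma(l))| = 1
rhs = 1.0 * length                     # integral of rho = 1 over the curve
print(lhs <= rhs)  # True: 1 <= pi/2
```

Along this arc the endpoints differ by 1 in the first coordinate while the curve has length \(\pi /2\), so the upper gradient inequality holds with room to spare.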
Definition 2.13
Let \(p \ge 1\) and let \(\Gamma \) be a family of rectifiable curves on X. We define the p-modulus of \(\Gamma \) by
$$ \text {Mod}_{p}(\Gamma ) = \inf \int _{X} g^{p} \, d\nu , $$
where the infimum is taken over all nonnegative Borel functions g such that \(\int _{\gamma } g \, ds \ge 1\) for all \(\gamma \in \Gamma \).
Definition 2.14
A non-negative measurable function \(\rho : X \rightarrow [0, \infty ]\) is said to be a p-weak upper gradient of a function \(u: X \rightarrow \mathbb {R}\) if
$$ |u(\gamma (0)) - u(\gamma (l_{\gamma }))| \le \int _{\gamma } \rho \, ds $$
for p-a.e. rectifiable curve \(\gamma : [0, l_{\gamma }] \rightarrow X\) (that is, with the exception of a curve family of zero p-modulus).
Definition 2.15
Let f be a Lipschitz function on a metric measure space \((X,d,\nu )\). The local Lipschitz constant function of f is defined as follows
$$ \text {Lip}f(x) := \limsup _{r \rightarrow 0} \; \sup _{y \in B^{X}_{r}(x)} \frac{|f(y) - f(x)|}{r}, $$
where \(B_{r}^{X}(x)\) denotes the metric ball in X, center x, and radius r.
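A one-dimensional numerical sketch (Python; the sampling scheme is ours) approximates Lipf(x) by fixing a small radius r and sampling the supremum in the definition:

```python
def local_lip(f, x, r, samples=1000):
    """Estimate Lip f(x) ~ sup_{|y - x| <= r} |f(y) - f(x)| / r for small r."""
    best = 0.0
    for k in range(1, samples + 1):
        t = -r + 2 * r * k / samples        # sample y = x + t in [x - r, x + r]
        best = max(best, abs(f(x + t) - f(x)) / r)
    return best

f = lambda t: abs(t)   # not differentiable at 0, yet Lip f(0) = 1
print(round(local_lip(f, 0.0, 1e-3), 6))  # 1.0
```

The example \(f(t) = |t|\) shows why Lipf is the right substitute for \(|\nabla f|\) on a metric space: it is finite and meaningful even where f is not differentiable.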
Remark 2.16
Let us note here that for any Lipschitz function f, \(L_{f}\) denotes the usual Lipschitz constant [see sentence below (2.1)], whereas Lipf(.) stands for the local Lipschitz constant function defined above.
3 A bi-Lipschitz parameterization of M
The main goal of this section is to prove Theorem 1.2. We begin with three linear algebra lemmas needed in the proof of the theorem; they can be stated and proved independently.
Lemma 3.1
Let V be an n-dimensional subspace of \(\mathbb {R}^{n+d}\), and denote by \(\pi _{V}\) the orthogonal projection on V. Then, there exists \(\delta _{0}= \delta _{0}(n,d) > 0\), such that for any \(\delta \le \delta _{0}\), and for any linear operator L on \(\mathbb {R}^{n+d}\) such that
$$ \Vert L - \pi _{V} \Vert \le \delta , \quad\quad (3.1) $$
where || . || denotes the induced operator norm, L has exactly n eigenvalues \(\lambda _{1}, \ldots , \lambda _{n}\) such that
and exactly d eigenvalues \(\lambda _{n+1}, \ldots , \lambda _{n+d}\), such that
Proof
Since \(\pi _{V}\) is an orthogonal projection, then there exists an orthonormal basis \(\{ w_{1} , \ldots , w_{n+d} \} \) of \(\mathbb {R}^{n+d}\) such that the matrix representation of \(\pi _{V}\) in this basis is
where \(Id_{n}\) denotes the \(n \times n\) identity matrix.
Let \(\delta < \delta _{0}\) (with \(\delta _{0}\) to be determined later), and suppose L is as in the statement of the lemma. Let \(L = (l_{ij})_{ij}\) be the matrix representation of L in the basis \(\{ w_{1} , \ldots , w_{n+d} \} \). Then, by (3.1), we have
that is,
and
Now, for each \( j \in \{ 1 \ldots n+d\}\), consider the closed disk \(D_{j}\) in the complex plane, of center \((l_{jj}, 0)\) and radius \(R_{j} = \sum _{i \ne j} |l_{ij}|\). Notice that by (3.4), (3.5), and the fact that \(\delta < \delta _{0}\), we have
and
Choosing \(\delta _{0}\) such that \((n+d-1) \delta _{0} \le \frac{1}{8}\), we can guarantee that \( \bigcup _{j=1}^{n}D_{j}\) is disjoint from \( \bigcup _{j=n+1}^{n+d}D_{j}\). Thus, by the Gershgorin circle theorem (see [13, pp. 277–278]), \( \bigcup _{j=1}^{n}D_{j}\) contains exactly n eigenvalues of L, and \( \bigcup _{j=n+1}^{n+d}D_{j}\) contains exactly d eigenvalues of L. The lemma now follows from (3.6)–(3.8). \(\square \)
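The spectral splitting in Lemma 3.1 can be observed numerically (a Python sketch using numpy; the dimensions, perturbation size, and random seed are arbitrary choices of ours): perturbing a projection by a small matrix moves n eigenvalues slightly away from 1 and d eigenvalues slightly away from 0, but the two groups stay well separated:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, delta = 3, 2, 0.01

# Projection onto the first n coordinates, plus a perturbation of size ~delta.
P = np.diag([1.0] * n + [0.0] * d)
L = P + delta * rng.standard_normal((n + d, n + d))

# The (possibly complex) eigenvalues cluster near 1 and near 0.
eigs = np.linalg.eigvals(L)
near_one = sum(abs(ev - 1) < 0.25 for ev in eigs)
near_zero = sum(abs(ev) < 0.25 for ev in eigs)
print(near_one, near_zero)  # 3 2
```

This is exactly the dichotomy the Gershgorin disks encode: n disks centered near 1 and d disks centered near 0, with radii controlled by \(\delta \).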
Notation. Let V be an affine subspace of \(\mathbb {R}^{n+d}\) of dimension k, \( k \in \{ 0, \dots , n-1\}\). Denote by \(N_{\delta }(V)\) the \(\delta \)-neighborhood of V, that is,
$$ N_{\delta }(V) = \{ y \in \mathbb {R}^{n+d} \, : \, d(y, V) < \delta \}. $$
Lemma 3.2
(see [14, Lemma 3.1])Footnote 3 Let M be an n-Ahlfors regular subset of \(\mathbb {R}^{n+d}\), and let \(\mu = \mathcal {H}^{n} \llcorner M\) be the Hausdorff measure restricted to M. There exists a constant \(c_{0} = c_{0}(n,d, C_{M}) \le \displaystyle \frac{1}{2}\) such that the following is true: Fix \(x_{0} \in M\), \(r_{0} < 1\), and let \(r = c_{0} \, r_{0}\). Then, for every affine subspace V of \(\mathbb {R}^{n+d}\) of dimension \(0 \le k \le n-1\), there exists \(x \in M \cap B_{r_{0}}(x_{0})\) such that \(x \notin N_{11 r}(V)\) and \(B_{r}(x) \subset B_{2 r_{0}}(x_{0})\).
Lemma 3.3
(see [14, Lemma 3.3])Footnote 4 Fix \(R>0\), and let \(\{u_{1}, \ldots , u_{n} \}\) be n vectors in \(\mathbb {R}^{n+d}\). Suppose there exists a constant \(K_{0} >0\) such that
Moreover, suppose there exists a constant \(0< k_{0} < K_{0}\), such that
and
Then, every vector \(v \in V:= \text {span}\{u_{1}, \ldots , u_{n}\} \) can be written uniquely as
where
with \(K_{1}\) being a constant depending only on n, \(k_{0}\), and \(K_{0}\).
Throughout the rest of the paper, M denotes an n-Ahlfors regular rectifiable subset of \(\mathbb {R}^{n+d}\), and \(\mu = \mathcal {H}^{n} \llcorner M\) denotes the Hausdorff measure restricted to M. The average of a function f on the ball \(B_{r}(x)\) is denoted by
$$ f_{x,r} := \fint_{B_{r}(x)} f \, d\mu = \frac{1}{\mu (B_{r}(x))} \int _{B_{r}(x)} f \, d\mu . $$
We recall the statement of Theorem 1.2: if M satisfies the Poincaré-type condition (1.1), and if the Carleson-type condition (1.3) on the oscillation of the tangent planes to M is satisfied, then M is contained in a bi-Lipschitz image of an n-dimensional plane.
To prove this theorem, we follow steps similar to those used in [14] to prove the co-dimension 1 case (see Theorem 1.5 in [14]), which is stated as Theorem 1.1 in this paper. First, we define what we call the \(\alpha \)-numbers
$$ \alpha (x,r) := \left( \fint_{B_{r}(x)} |\pi _{T_{y}M} - A_{x,r}|^{2} \, d\mu (y) \right) ^{\frac{1}{2}}, \quad\quad (3.15) $$
where \(x \in M\), \(0 < r \le \displaystyle \frac{1}{10}\), \(\pi _{T_{y}M}\) has \( ( a_{ij}(y) )_{ij}\) as its matrix representation in the standard basis of \(\mathbb {R}^{n+d}\), and \(A_{x,r}= ( (a_{ij})_{x,r} )_{ij}\) is the matrix whose ijth entry is the average of the function \(a_{ij}\) in the ball \(B_{r}(x)\).
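To make the definition concrete, here is a Python sketch (with numpy; the sample tangent directions are hypothetical choices of ours) computing the mean-square Frobenius deviation of the tangent projections from their average, which is the oscillation the \(\alpha \)-numbers quantify:

```python
import numpy as np

def proj(v):
    """Orthogonal projection of R^2 onto the line spanned by v."""
    v = np.asarray(v, float) / np.linalg.norm(v)
    return np.outer(v, v)

# Hypothetical tangent directions of a curve inside B_r(x), sampled at 4 points.
tangents = [(1, 0), (1, 0.1), (1, -0.1), (1, 0.05)]
P = [proj(v) for v in tangents]
A = sum(P) / len(P)   # A_{x,r}: the averaged projection matrix

# L^2 average of the Frobenius deviation |pi_{T_yM} - A_{x,r}|.
alpha = np.sqrt(np.mean([np.sum((Pi - A) ** 2) for Pi in P]))
print(alpha < 0.2)  # small oscillation of the tangents gives a small number
```

When the tangent directions barely oscillate, the deviation is small, matching the heuristic that small \(\alpha \)-numbers mean M is close to flat at that location and scale.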
These numbers are the key ingredient in proving our theorem. In Lemma 3.4, we show that the Carleson condition (1.3) implies that these numbers are small at every point \(x \in M\) and every scale \(0< r < \frac{1}{10}\). Moreover, for every point \(x \in M\), the series \( \sum _{j=1}^{\infty } \alpha ^{2}(x, 10^{-j})\) is finite. Then, in Theorem 3.5, we use the Poincaré-type inequality to get an n-plane \(P_{x,r}\) at every point \(x \in M\) and every scale \(0<r \le \frac{1}{10 \lambda }\) such that the distance (in integral form) from \(M \cap B_{r}(x)\) to \(P_{x,r}\) is bounded by \(\alpha (x,\lambda r)\). This means, by Lemma 3.4, that those distances are small and that, for a fixed point x, the sum of these distances over the scales \( 10^{-j}\), \(j \in \mathbb {N}\), is finite.Footnote 5 Theorem 3.5 is the key point that allows us to use the bi-Lipschitz parameterization that G. David and T. Toro construct in [7]. In fact, what they do is construct approximating n-planes and prove that, for any two points that are close together, the planes associated to these points at the same scale, or at two consecutive scales, are close in the Hausdorff distance sense. From there, they construct a bi-Hölder parameterization for M. They then show that if the sum of these distances at the scales \( 10^{-j}\), \(j \in \mathbb {N}\), is finite (uniformly for every \(x \in M\)), the parameterization is in fact bi-Lipschitz (see Theorem 3.7 below and the definition before it). Thus, the rest of the proof is devoted to using Theorem 3.5 in order to prove the compatibility conditions between the approximating planes mentioned above.
Note that, in the process of proving Theorem 1.2, we find several parts of the proof very similar to the proof of the co-dimension 1 case found in [14] (see Theorem 1.5 in [14] or Theorem 1.1 in this paper). In fact, most of the differences in the proof appear in Lemma 3.4 and Theorem 3.5, with the most important difference being in the latter. The rest of the proof follows closely the proof of the co-dimension 1 case. Thus, in this paper we proceed as follows: first, we prove Lemma 3.4 and Theorem 3.5 and include all the details. Then, for the rest of the proof (that is, introducing the David and Toro bi-Lipschitz construction, and proving the compatibility conditions between the approximating planes that allow us to use this construction), we only give an outline of the main ideas, and leave the smaller details and tedious calculations out. However, in each place where details are omitted, we refer the reader to the parts of the proof of Theorem 1.5 in [14] where they can be found. That being said, this part of the proof of Theorem 1.2 still has enough detail for the reader to understand all the steps needed to get the bi-Lipschitz parameterization of M, and the intuition behind them. Moreover, the proof as presented here includes all the information that we need from the construction of the bi-Lipschitz parameterization of M in order to prove the corollaries that follow from Theorem 1.2.
Let us begin with Lemma 3.4 that decodes the Carleson condition (1.3).
Lemma 3.4
Let \(M \subset B_{2}(0)\) be an n-Ahlfors regular rectifiable set containing the origin, and let \(\mu = \mathcal {H}^{n} \llcorner M\) be the Hausdorff measure restricted to M. Let \(\epsilon > 0\), and suppose that
$$ \int _{0}^{1} \fint_{B_{r}(x)} |\pi _{T_{y}M} - A_{x,r}|^{2} \, d\mu (y) \, \frac{dr}{r} \le \epsilon \quad \text {for every } x \in M. \quad\quad (3.16) $$
Then, for every \(x \in M\), we have
$$ \sum _{j=1}^{\infty } \alpha ^{2}(x, 10^{-j}) \le C \, \epsilon , \quad\quad (3.17) $$
where the \(\alpha \)-numbers are as defined in (3.15) and \(C = C(n, C_{M})\). Moreover, for every \(x \in M\) and \(0 < r \le \displaystyle \frac{1}{10}\), we have
$$ \alpha (x,r) \le C \, \sqrt{\epsilon }, \quad\quad (3.18) $$
where \(C = C(n, C_{M})\).
Proof
Let \(\epsilon > 0\) and suppose that (3.16) holds. By the definition of the Frobenius norm, (3.16) becomes
where \(\pi _{T_{y}M} = \big (a_{ij}(y)\big )_{ij}\) and \(A_{x,r} = \big ((a_{ij})_{x,r}\big )_{ij}\).
Fix \(x \in M\), and fix \(i, \, j \in \{ 1, \ldots n+d\}\). For all \(a \in \mathbb {R}\), and for all \(0 < r_{0} \le 1\), we have
since the average \((a_{ij})_{x,r_{0}}\) of \(a_{ij}\) in the ball \(B_{r_{0}}(x)\) minimizes the integrand on the right hand side of (3.20).
To prove (3.17), we note that
This is a straightforward computation that uses (3.20) and the Ahlfors regularity of \(\mu \), and is carried out in detail in [14] (see [14], Lemma 4.1, proof of inequality (4.6)). Moreover, it is trivial to check that
Thus, plugging (3.22) in (3.21), we get
Since (3.23) is true for every \( i, \, j \in \{ 1, \ldots n+d\}\), we can take the sum over i and j on both sides of (3.23), and using (3.15) and (3.19), we get
which is exactly (3.17).
To prove inequality (3.18), fix \(x \in M\) and \(0 < r \le \displaystyle \frac{1}{10}\). Then, there exists \(k \ge 1\) such that
Now, fix \(i, \, j \in \{ 1, \ldots n+d\}\). Using inequality (3.20) for \(a = (a_{ij})_{x, 10^{-k}}\) and \(r_{0} = r\), (3.24), and the fact that \(\mu \) is Ahlfors regular, we get that
Summing over i and j on both sides of (3.25), and using the definition of the Frobenius norm together with (3.15), we get
Taking the square root on both sides of (3.26) and using (3.17) finishes the proof of (3.18). \(\square \)
Next, we use the Poincaré inequality to get good approximating n-planes for M at every point \(x \in M\) and at every scale \(0< r< \frac{1}{10 \lambda }\). In this context, a good approximating n-plane at the point \(x \in M\) and radius r, is a plane \(P_{x,r}\) such that the distance (in integral form) from \(M \cap B_{r}(x)\) to \(P_{x,r}\) is small.
Theorem 3.5
Let \(M \subset B_{2}(0)\) be an n-Ahlfors regular rectifiable set containing the origin, and let \(\mu = \mathcal {H}^{n} \llcorner M\) be the Hausdorff measure restricted to M. Assume that M satisfies the Poincaré-type inequality (1.1). There exists \(\epsilon _{1} = \epsilon _{1}(n,d,C_{M}) > 0\) such that for every \(0 < \epsilon \le \epsilon _{1}\), if
then for every \(x \in M\) and \(0< r \le \displaystyle \frac{1}{10 \lambda }\), there exists an affine n-dimensional plane \(P_{x,r}\) such that
where \(C= C(n,d,C_{P})\).
Proof
Fix \(x \in M\) and \(r \le \displaystyle \frac{1}{10 \lambda }\). Let \(\epsilon \le \epsilon _{1}\) (with \(\epsilon _{1}\) to be determined later) such that (3.27) is satisfied. By (3.15), (3.18) from Lemma 3.4, and the fact that \(\lambda r \le \displaystyle \frac{1}{10}\), we have
From (3.29) and the fact that M is rectifiable (so approximate tangent planes exist \(\mu \)-a.e.), it is easy to check that there exists \(y_{0} \in B_{\lambda r}(x) \cap M\) such that \(T_{y_{0}}M\) exists, and
where \(C_{1}\) is a (fixed) constant depending only on n and \(C_{M}\). Comparing the operator norm with the Frobenius norm (the operator norm is at most the Frobenius norm), we get
Let \(\delta _{0}\) be the constant from Lemma 3.1, and choose \(\epsilon _{1} \le \displaystyle \frac{\delta _{0}}{C_{1}}\). Then (3.30) becomes
and by Lemma 3.1 (with \(\delta = \alpha (x,\lambda r)\), \(V = T_{y_{0}}M\), and \(L = A_{x,\lambda r}\)), we deduce that \(A_{x,\lambda r}\) has exactly n eigenvalues \(\lambda ^{1}_{x,\lambda r}, \ldots , \lambda ^{n}_{x,\lambda r}\) such that \( |\lambda ^{i}_{x,\lambda r}| \ge 1 - c \, \alpha (x,\lambda r)\) for all \(i \in \{ 1, \ldots , n\}\), and exactly d eigenvalues \(\lambda ^{n+1}_{x,\lambda r}, \ldots , \lambda ^{n+d}_{x,\lambda r}\) such that
Since \(A_{x,\lambda r}\) is a real symmetric matrix, \(n+d\) eigenvectors of the matrix \(A_{x,\lambda r}\), say \(v^{1}_{x,\lambda r} , \ldots , v^{n+d}_{x,\lambda r}\) (each corresponding to exactly one of the \(n+d\) eigenvalues mentioned above), can be chosen to be orthonormal. Thus, \(v^{1}_{x,\lambda r} , \ldots , v^{n+d}_{x,\lambda r}\) are unit, linearly independent vectors such that
Let us now turn our attention to the last d eigenvectors and eigenvalues. For \(i \in \{ n+1 , \ldots , n+d \}\), consider the function \(f_{i}\) on \(\mathbb {R}^{n+d}\) defined by
Notice that \(f_{i}\) is a smooth function on \(\mathbb {R}^{n+d}\), and for every point \(y \in M\) where the tangent plane \(T_{y}M\) exists (which, again, is \(\mu \)-almost every point of M), we have
In fact,
Thus, using the definition of the operator norm, the fact that \(v^{i}_{x,\lambda r}\) is a unit vector, (3.32), and the fact that the operator norm of a matrix is at most its Frobenius norm, we get
Now, applying the Poincaré inequality to the function \(f_{i}\) and the ball \(B_{r}(x)\), and using (3.33), we get
But \(v^{i}_{x, \lambda r}\) is a constant vector, so (3.34) can be rewritten as
that is,
Using (3.31) and (3.15), (3.36) becomes
Since (3.37) is true for every \( i \in \{n+1, \ldots , n+d \}\), we can take the sum over i on both sides of (3.37) to get
We are now ready to choose our plane \(P_{x,r}\). Take \(P_{x,r}\) to be the n-plane passing through the point \(c_{x,r}\), the centre of mass of \(\mu \) in the ball \(B_{r}(x)\), and such that \(P_{x,r} - c_{x,r} = \text {span} \{ v^{1}_{x,\lambda r}, \ldots , v^{n}_{x,\lambda r} \}\). In other words, \((P_{x,r} - c_{x,r})^{\perp } = \text {span} \{ v^{n+1}_{x,\lambda r}, \ldots , v^{n+d}_{x,\lambda r} \}\). Here \((P_{x,r} - c_{x,r})^{\perp }\) denotes the d-plane of \(\mathbb {R}^{n+d}\) perpendicular to the n-plane \(P_{x,r} - c_{x,r}\).
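The choice of \(P_{x,r}\), namely the plane through the centre of mass spanned by the n dominant eigenvectors of an averaged symmetric matrix, has a familiar numerical analogue. The sketch below fits a plane to a point cloud in \(\mathbb {R}^{3}\), using the sample covariance as a stand-in for \(A_{x,r}\); this is a PCA-style illustration under hypothetical data, not the paper's exact construction.

```python
import numpy as np

# Sample points near a 2-plane in R^3 (the plane x3 = 0), with small noise.
rng = np.random.default_rng(2)
n, d, N = 2, 1, 500
flat = rng.uniform(-1, 1, (N, n))            # coordinates inside the true plane
noise = 0.01 * rng.standard_normal((N, d))   # small deviation in the normal direction
pts = np.hstack([flat, noise])

# Plane through the centre of mass, spanned by the top-n eigenvectors of a
# symmetric averaged matrix (here: the sample covariance, a stand-in for A_{x,r}).
c = pts.mean(axis=0)                         # centre of mass, playing the role of c_{x,r}
cov = (pts - c).T @ (pts - c) / N
vals, vecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
V = vecs[:, -n:]                             # span of the n dominant eigenvectors

# Average distance from the samples to the fitted affine plane c + span(V).
dist = np.linalg.norm((pts - c) - (pts - c) @ V @ V.T, axis=1)
assert dist.mean() < 0.05                    # small, in the spirit of (3.28)
```

The fitted plane nearly coincides with the true plane, so the mean distance is of the order of the noise level.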
For \(y \in B_{r}(x)\), we have that
Dividing by r and taking the average over \(B_{r}(x)\) on both sides of (3.39), and using the definition of \(c_{x,r}\), we get
where the last inequality comes from (3.38).
Thus, by the definition of \(\alpha (x,\lambda r)\) (see (3.15)), we get (3.28) and the proof is done. \(\square \)
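Two elementary linear-algebra facts underlie the eigenvalue argument above: the operator norm of any matrix is at most its Frobenius norm, and, by Weyl's perturbation inequality, a real symmetric matrix within \(\delta \) (in operator norm) of the orthogonal projection onto an n-plane has n eigenvalues within \(\delta \) of 1 and d eigenvalues within \(\delta \) of 0. A small numerical illustration with hypothetical matrices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Operator (spectral) norm <= Frobenius norm, for any matrix.
A = rng.standard_normal((5, 5))
op_norm = np.linalg.norm(A, 2)        # largest singular value
fro_norm = np.linalg.norm(A, "fro")   # (sum of squared entries)^(1/2)
assert op_norm <= fro_norm + 1e-9

# 2) Weyl's inequality: perturbing the projection onto a 2-plane in R^3 by a
#    symmetric E with ||E||_op = 0.01 moves every eigenvalue by at most 0.01,
#    so n = 2 eigenvalues stay near 1 and d = 1 eigenvalue stays near 0.
P = np.diag([1.0, 1.0, 0.0])          # orthogonal projection onto a 2-plane
E = rng.standard_normal((3, 3))
E = (E + E.T) / 2                     # symmetric perturbation
E *= 0.01 / np.linalg.norm(E, 2)      # scaled to operator norm 0.01
eigs = np.sort(np.linalg.eigvalsh(P + E))
assert abs(eigs[0]) <= 0.0101                       # the d = 1 eigenvalue near 0
assert all(abs(e - 1.0) <= 0.0101 for e in eigs[1:])  # the n = 2 eigenvalues near 1
```

This dichotomy of the spectrum is exactly the conclusion used from Lemma 3.1.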
As mentioned earlier, we want to use the construction of the bi-Lipschitz map given by David and Toro in their paper [7]. To do that, we introduce the notion of a coherent collection of balls and planes. Here, we follow the steps given by David and Toro (see [7], chapter 2).
First, let \(l_{0} \in \mathbb {N}\) be such that \(10^{l_{0}} \le \lambda \le 10^{l_{0}+1}\), set \(r_{k} = 10^{-k-l_{0} - 5}\) for \( k \in \mathbb {N}\), and let \(\epsilon \) be a small number (to be chosen later) that depends only on n and d. Choose a collection \(\{x_{jk} \}, \, \, j \in J_{k}\) of points in \(\mathbb {R}^{n+d}\), so that
Set \(B_{jk} := B_{r_{k}}(x_{jk})\) and \(V_{k}^{\lambda } := \bigcup _{j \in J_{k}} \lambda B_{jk} = \bigcup _{j \in J_{k}} B_{\lambda r_{k}}(x_{jk}),\,\) for \(\lambda > 1\).
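At each scale \(r_{k}\), a collection of this kind is a maximal separated net. A minimal sketch of the standard greedy construction of such a net; the function name and the parameters are ours, for illustration only:

```python
import numpy as np

def maximal_net(points, r):
    """Greedy maximal r-separated net: keep a point iff it is at distance
    >= r from every point already kept.  Maximality then means every input
    point lies within distance < r of some net point."""
    net = []
    for p in points:
        if all(np.linalg.norm(p - q) >= r for q in net):
            net.append(p)
    return np.array(net)

rng = np.random.default_rng(3)
pts = rng.uniform(0, 1, (300, 2))
r = 0.1
net = maximal_net(pts, r)

# Separation: distinct net points are at least r apart.
assert all(np.linalg.norm(a - b) >= r
           for i, a in enumerate(net) for b in net[:i])
# Maximality (covering): every sample point is within r of some net point.
assert all(min(np.linalg.norm(p - q) for q in net) < r for p in pts)
```

The two asserted properties are the quantitative analogues of the separation and covering conditions (3.40) and (3.41) imposed on \(\{x_{jk}\}\).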
We also ask for our collection \(\{x_{jk} \}, \, \, j \in J_{k}\) and \(k \ge 1\) to satisfy
Suppose that our initial net \(\{x_{j0} \}\) is close to an n-dimensional plane \(\Sigma _{0}\), that is
For each \(k \ge 0\) and \(j \in J_{k}\), suppose you have an n-dimensional plane \(P_{jk}\), passing through \(x_{jk}\) such that the following compatibility conditions hold:
and
We can now define a coherent collection of balls and planes:
Definition 3.6
A coherent collection of balls and planes (in short, a CCBP) is a triple \((\Sigma _{0}, \{B_{jk} \}, \{P_{jk}\})\) where the properties (3.40) up to (3.45) above are satisfied, with a prescribed \(\epsilon \) that is small enough and depends only on n and d.
Theorem 3.7
(see Theorem 2.4 in [7]) There exists \(\epsilon _{2} > 0\) depending only on n and d, such that the following holds: If \(\epsilon \le \epsilon _{2}\), and \((\Sigma _{0}, \{B_{jk} \}, \{P_{jk}\})\) is a CCBP (with \(\epsilon \)), then there exists a bijection \(g: \mathbb {R}^{n+d} \rightarrow \mathbb {R}^{n+d}\) with the following properties:
and
where \(C^{'}_{0}= C^{'}_{0}(n,d)\). Moreover, \(g(\Sigma _{0})\) is a \(C^{'}_{0} \epsilon \)-Reifenberg flat set that contains the accumulation set
In [7], David and Toro give a sufficient condition for g to be bi-Lipschitz that we want to use in our proof. To state this condition, we need some technical details from the construction of the map g in Theorem 3.7, so let us briefly discuss that construction here: David and Toro defined a mapping f whose goal is to push a small neighborhood of \(\Sigma _{0}\) towards a final set, which they proved to be Reifenberg flat. They obtained f as the limit of the compositions \(f_{k} = \sigma _{k-1} \circ \ldots \circ \sigma _{0}\), where each \(\sigma _{k}\) is a smooth function that moves points near the planes \(P_{jk}\) at the scale \(r_{k}\). More precisely,
where \(\{\theta _{jk}\}_{j \in J_{k}, k\ge 0}\) is a partition of unity with each \(\theta _{jk}\) supported on \(10B_{jk}\), and \(\pi _{jk}\) denotes the orthogonal projection from \(\mathbb {R}^{n+d}\) onto the plane \(P_{jk}\).
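Concretely, in David and Toro's construction the map \(\sigma _{k}\) takes (up to the precise normalization of the partition of unity) the form

```latex
\sigma_{k}(y) \;=\; y \;+\; \sum_{j \in J_{k}} \theta_{jk}(y)\, \big( \pi_{jk}(y) - y \big),
```

so each \(\sigma _{k}\) moves a point towards the planes \(P_{jk}\) by a \(\theta _{jk}\)-weighted average of the projections \(\pi _{jk}\), and leaves the point unchanged outside \(\bigcup _{j \in J_{k}} 10 B_{jk}\).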
Since f in their construction was defined only on \(\Sigma _{0}\), g was defined to be the extension of f to the whole space.
Corollary 3.8
(see Proposition 11.2 in [7]) Suppose we are in the setting of Theorem 3.7. Define the quantity
for \(k \ge 1 \,\, \text {and} \,\,y \in V_{k}^{10}\), and \(\epsilon _{k}^{'}(y)=0 \,\, \text {when}\,\, y \in \mathbb {R}^{n+d} {\setminus } V_{k}^{10}\) (when there are no pairs (j, k) as above). If there exists \(N > 0\) such that
then the map g constructed in Theorem 3.7 is K-bi-Lipschitz, where the bi-Lipschitz constant \(K = K(n,d,N)\).
We are finally ready to prove Theorem 1.2.
Proof of Theorem 1.2
Proof
As mentioned before, from here on, the proof of this theorem is essentially the same as that of its co-dimension 1 analogue in [14] (Theorem 1.5 there). In fact, the essential differences between the proofs of Theorem 1.2 and its co-dimension 1 analogue occur in Lemma 3.4 and Theorem 3.5. Thus, we continue by outlining the main ideas and referring the reader to the proof of Theorem 1.5 in [14] for the details.
Let \(\epsilon _{0} > 0\) (to be determined later), and suppose that (1.3) holds. Let \(\epsilon _{2}\) be the constant from Theorem 3.7. We would like to apply Theorem 3.7 for \(\epsilon = \epsilon _{2}\), and then Corollary 3.8. So our first goal is to construct a CCBP, and we do that in several steps:
Let us start with a collection \(\{\tilde{x}_{jk}\},\, j \in J_{k}\) of points in \(M \cap B_{\frac{1}{10^{l_{0} +4}}}(0)\) that is maximal under the constraint
Of course, we can arrange matters so that the point 0 belongs to our initial maximal set, at scale \(r_{0}\). Thus, \(0 = \tilde{x}_{i_{0},0} \) for some \(i_{0} \in J_{0}\). Notice that for every \(k \ge 0\), we have
Later, we choose
By (3.52) and (3.53), we can see
Using (3.51), (3.53), and (3.54), it is easy to see that the collection \(\{x_{jk} \},\, j \in J_{k}\) satisfies (3.40) and (3.41) (for details, see [14, p. 23]).
Next, we choose our planes \(P_{jk}\) and our collection \(\{ x_{jk} \}\), for \(k \ge 0\) and \( j \in J_{k}\). Fix \(k\ge 0\) and \(j \in J_{k}\). Let \(\epsilon _{1}\) be the constant from Theorem 3.5. For
we apply Theorem 3.5 to the point \(\tilde{x}_{jk}\) (by construction \(\tilde{x}_{jk} \in M\)) and radius \(120r_{k}\) (notice that \(120\,r_{k} \le \frac{1}{10 \lambda }\)) to get an n-plane \(P_{\tilde{x}_{jk},120r_{k}}\), denoted in this proof by \( P^{'}_{jk}\) for simplicity, such that
Thus, by (3.56) and the fact that \(\mu \) is Ahlfors regular, there exists \(x_{jk} \in M \cap B_{ \frac{r_{k}}{6}}(\tilde{x}_{jk})\) such that
Let \(P_{jk}\) be the plane parallel to \(P^{'}_{jk}\) and passing through \(x_{jk}\). From (3.56), (3.57) and the fact that the two planes are parallel, we see that (see [14] p. 24)
To summarize what we did so far, we have chosen n-dimensional planes \(P_{jk}\) for \(k\ge 0\) and \(j \in J_{k}\) where each \(P_{jk}\) passes through \(x_{jk}\), and satisfies (3.58). Notice that (3.58) shows that \(P_{jk}\) is a good approximating plane to M in the ball \(B_{120r_{k}}(\tilde{x}_{jk})\).
We want to get our CCBP with \(\epsilon _{2}\). Thus, we show that (3.42)–(3.45) hold with \(\epsilon = \epsilon _{2}\). Since the proofs of these inequalities are the same as the proofs of their analogue inequalities in the co-dimension 1 case, we only outline their proofs here (see [14, p. 25–p. 31] for a detailed proof of the inequalities).
Outline of the proofs for (3.44) and (3.45)
Inequalities (3.44) and (3.45) can be proved simultaneously. Fix \(k \ge 0\) and \(j \in J_{k}\); let \(m \in \{k, k-1 \}\) and \(i \in J_m\) such that \(|x_{jk} - x_{im}| \le 100r_{m}\). We want to show that \(P_{jk}\) and \(P_{im}\) are close together. To do that, we construct n linearly independent vectors that “effectively” span \(P_{jk}\), (that is, these vectors span \(P_{jk}\), and are far away from each other in a uniform quantitative manner), and that are close to \(P_{im}\). More precisely, using Lemma 3.2 inductively, together with (3.58), we can prove the following claim:
Claim 1 Denote by \(\pi _{jk}\) the orthogonal projection of \(\mathbb {R}^{n+d}\) onto the plane \(P_{jk}\). Let \(r = c_{0} \, r_{k}\), where \(c_{0} \le \frac{1}{2}\) is the constant from Lemma 3.2 depending only on n, d, and \(C_{M}\). There exists \(C_{1} = C_{1}(n,d,C_{M},C_{P})\), such that if \(C_{1} \epsilon _{0} \le 1\), then there exists a sequence of \(n+1\) balls \(\{B_{r}(y_{l})\}_{l=0}^{n}\), such that
- (1)
\(\forall \, l \in \{ 0, \ldots , n\}\), we have \(y_{l} \in M\) and \(B_{r}(y_{l}) \subset B_{2r_{k}}(\tilde{x}_{jk}).\)
- (2)
\(q_{1} - q_{0} \notin B_{5r}(0)\), and \(\forall \, l \in \{ 2, \ldots , n\}\), we have \(q_{l} - q_{0} \notin N_{5r}\big (\text {span} \{q_{1} - q_{0}, \ldots , q_{l-1} - q_{0} \}\big ),\)
where \(q_{l} = \pi _{jk}(p(y_{l}))\) and \(p(y_{l})\) is the centre of mass of \(\mu \) in the ball \(B_{r}(y_{l})\).
Now, on one hand, notice that
On the other hand, by the definition of \(p(y_{l})\), Jensen’s inequality applied on the convex function \(\phi (.) = d(.,P_{jk})\), the fact that \(\mu \) is Ahlfors regular, \(B_{r}(y_{l}) \subset B_{2r_{k}}(\tilde{x}_{jk})\), \(r = c_{0}\, r_{k}\), and (3.58), we have that
Similarly, we have that
Thus, combining (3.60) and (3.61), we directly get
To compute the distance between \(P_{jk}\) and \(P_{im}\), let \(y \in P_{jk} \cap B_{\rho }(x_{im})\) where \(\rho \in \{20r_{m}, 100r_{m}\}\). By (3.59), y can be written uniquely as
Using Lemma 3.3 with \(u_{l} = q_{l} - q_{0}\), \(R = r\), and \(v = y - q_{0}\) to get an upper bound on the \(\beta _{l}\)'s that appear in (3.63), together with (3.62), we get that
Thus,
Now, by Lemma 3.4, we know that \( \alpha (\tilde{x}_{jk},120 \lambda r_{k}) \le C(n,C_{M}) \, \epsilon _{0}\), and \(\alpha (\tilde{x}_{im},120 \lambda r_{m}) \le C(n,C_{M}) \, \epsilon _{0}\). Thus, (3.64) becomes
So, we have shown that there exist two constants \(C_{2}\) and \(C_{3}\), each depending only on n, d, \(C_{M}\), and \(C_{P}\), such that
and
For
Outline of the proofs for (3.42) and (3.43)
We start with (3.43). Recall that \(0 = \tilde{x}_{i_{0},0}\) for some \(i_{0} \in J_{0}\). Choose \(\Sigma _{0}\) to be the plane \(P_{i_{0},0}\) described above (recall that \(P_{i_{0},0}\) passes through \(x_{i_{0},0}\), where \(r_{0} = 10^{-l_{0} - 5}\)). Then, what we need to show is
Fix \(j \in J_{0}\), and take the corresponding \(x_{j0}\). Since by construction \(|\tilde{x}_{j0}| < \displaystyle \frac{1}{10^{l_{0}+4}}\), and since (3.53) says that \( |x_{j0} - \tilde{x}_{j0}| \le \displaystyle \frac{r_{0}}{6}\), we have
Moreover, by (3.53) and the fact that \(0 = \tilde{x}_{i_{0},0}\), we have
Combining (3.70) and (3.71), and using the fact that \(r_{0} = 10^{-l_{0}-5}\), we get
Thus, by (3.44) for \(x_{ik} = x_{j0}\), \(P_{ik} = P_{j0}\), and \(P_{jk} = P_{i_{0},0}\), we get exactly (3.69), hence finishing the proof for (3.43).
It remains to show (3.42) with \(\epsilon = \epsilon _{2}\), that is
However, notice that since \(x_{j0} \in P_{j0}\), (3.42) follows directly from (3.43).
We finally have our CCBP. Now, by the proof of Theorem 3.7 [see paragraph above (3.48)], we get the smooth maps \(\sigma _{k}\) and \(f_{k} = \sigma _{k-1} \circ \ldots \circ \sigma _{0}\) for \(k \ge 0\), then the map \(f = \lim _{k \rightarrow \infty } f_{k}\) defined on \(\Sigma _{0}\), and finally the map g that we want.
Moreover, by Theorem 3.7, we know that \(g: \mathbb {R}^{n+d} \rightarrow \mathbb {R}^{n+d}\) is a bijection with the following properties:
and
Fix \(\epsilon _{0}\) such that (3.55), (3.68), and the hypothesis of Claim 1 are all satisfied. Notice that by the choice of \(\epsilon _{0}\), we can write \(\epsilon _{0} = c_{4}\, \epsilon _{2}\), where \(c_{4} = c_{4}(n,d,C_{M},C_{P})\). Hence, from (3.74) to (3.76), we directly get (1.4)–(1.6).
Next, we show that
Fix \(x \in M \cap B_{\frac{1}{10^{l_{0}+4}}}(0) \). Then, by (3.54), we see that for all \(k \ge 0\), there exists a point \(x_{jk}\) such that \(|x - x_{jk}| \le \displaystyle \frac{3r_{k}}{2}\), and hence \(x \in E_{\infty } \subset g(\Sigma _{0})\) (\(E_{\infty }\) is the set defined in Theorem 3.7). Since x was an arbitrary point in \(M \cap B_{\frac{1}{10^{l_{0}+4}}}(0)\), (3.77) is proved. This shows that (1.7) holds for \(\theta _{0} := \frac{1}{10^{l_{0}+4}}\).
We still need to show that g is bi-Lipschitz. By Corollary 3.8, it suffices to show (3.50). To do that, we need the following inequality from [7] (see inequality (6.8), page 27 in [7]):
Let \(z \in \Sigma _{0}\), and choose \(\bar{z} \in M \cap B_{\frac{1}{10^{l_{0}+4}}}(0)\) such that
Fix \(k \ge 0\), and consider the index \(m \in \{k,k-1\}\) and the indices \(j \in J_{k}\) and \(i \in J_{m}\) such that \(f_{k}(z) \in 10B_{jk} \cap 11B_{im}\). We show that
In fact, by (3.79) and (3.78), and since \(\tilde{x}_{jk} \in M \cap B_{\frac{1}{10^{l_{0}+4}}}(0)\), \(|\tilde{x}_{jk} - x_{jk}| \le \displaystyle \frac{r_{k}}{6}\), and \(f_{k}(z) \in 10B_{jk}\), one can show that (see [14, p. 32–33] for detailed proof)
Now, writing \(\pi _{T_{y}M} = (a_{pq}(y) )_{pq}\), and using the definition of the Frobenius norm, together with (3.20) for \( a = (a_{pq})_{\bar{z},r_{k-l_{0}-5}}\), (3.81), and the fact that \(\mu \) is Ahlfors regular, we get
and thus,
Similarly, we can show that
Plugging (3.82) and (3.83) in (3.64) for \(\rho = 100r_{m}\), we get
This finishes the proof of (3.80).
Hence, we have shown that \(\epsilon ^{'}_{k}(f_{k}(z)) \le C(n,d,C_{M},C_{P}) \, \alpha (\bar{z},r_{k-l_{0}-5})\) for every \(k\ge 1\), that is
Summing both sides of (3.85) over \(k \ge 0\), and using (3.17) in Lemma 3.4 together with the fact that \(\bar{z} \in M\cap B_{\frac{1}{10^{l_{0}+4}}}(0)\), we get
Inequality (3.50) is proved, and our theorem follows. \(\square \)
As mentioned in the introduction, in the special case when M has co-dimension 1, (1.3) translates into a Carleson-type condition on the oscillation of the unit normals to M.
Proof that Theorem 1.1 follows from Theorem 1.2
Proof
Suppose that (1.2) holds for some choice of unit normal \(\nu \) to M. We show that (1.2) is in fact exactly inequality (1.3). Fix \(x \in M\) and \( 0< r < 1\), and let \(y \in M \cap B_{r}(x)\) be a point where the approximate tangent plane \(T_{y}M\) [and thus the unit normal \(\nu (y)\)] exists. Denote by \( {T_{y}M}^{\perp } \) the subspace perpendicular to \(T_{y}M\). Then, using the matrix representation of \( \pi _{T_{y}M} \) in the standard basis of \(\mathbb {R}^{n+1}\), and the fact that \( \pi _{{T_{y}M}^{\perp }} = Id_{n+1} - \pi _{T_{y}M}\), where \(Id_{n+1}\) is the \((n+1) \times (n+1)\) identity matrix, one can easily see that
where \( \pi _{{T_{y}M}^{\perp }} = ( b_{ij}(y) )_{ij}\) and \(B_{x,r} = Id_{n+d} - A_{x,r} = ( (b_{ij})_{x,r} )_{ij}\).
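The identity (3.87) ultimately rests on the observation that \(\pi _{{T_{y}M}^{\perp }} - B_{x,r} = -(\pi _{T_{y}M} - A_{x,r})\), since \(\pi _{{T_{y}M}^{\perp }} = Id - \pi _{T_{y}M}\) and \(B_{x,r} = Id - A_{x,r}\), so the two differences have equal Frobenius norms. A quick numerical check with hypothetical matrices, for illustration only:

```python
import numpy as np

def proj(v):
    """Orthogonal projection onto the line spanned by v (a rank-1 projection)."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v)

rng = np.random.default_rng(4)
I = np.eye(3)                            # n + 1 = 3, i.e. n = 2, co-dimension 1
P_perp = proj(rng.standard_normal(3))    # pi_{T_yM^perp} = nu(y) nu(y)^T
P = I - P_perp                           # pi_{T_yM}
B = 0.9 * proj(rng.standard_normal(3))   # stand-in for the averaged matrix B_{x,r}
A = I - B                                # the corresponding A_{x,r}

# P - A = B - P_perp, so the Frobenius norms of the two differences agree.
assert np.isclose(np.linalg.norm(P - A, "fro"),
                  np.linalg.norm(P_perp - B, "fro"))
```

The same cancellation holds in any dimension, since only the two complementary identities are used.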
Now, we want to express the right-hand side of (3.87) using a different basis than the standard basis of \(\mathbb {R}^{n+1}\). For any choice of orthonormal basis \(\{ \nu _{1}(y), \ldots , \nu _{n}(y) \}\) of \(T_{y}M\), we have that \(\{ \nu _{1}(y), \ldots , \nu _{n}(y), \nu (y)\}\) is an orthonormal basis for \(\mathbb {R}^{n+1}\). The matrix representation of \( \pi _{{T_{y}M}^{\perp }}\) with \(\{ \nu _{1}(y), \ldots , \nu _{n}(y), \nu (y)\}\) as a basis for the domain \(\mathbb {R}^{n+1}\) and the standard basis for the range \(\mathbb {R}^{n+1}\) is the \((n+1) \times (n+1)\) matrix whose last column is \(\nu (y)\) while the other columns are all zero. Thus, with this choice of bases and matrix representations, \(B_{x,r}\) becomes the matrix whose last column is \(\nu _{x,r}\) while the other columns are all zero. Hence, using (3.87), we get that
Since (3.88) is true for any \(y \in B_{r}(x)\), and since x and r are arbitrary, then,
and the proof is done \(\square \)
We now show that if we assume, in addition to the hypothesis of Theorem 1.2, that M is Reifenberg flat, then (locally) M is exactly the bi-Lipschitz image of an n-plane. In other words, the containment in (1.7) becomes an equality.
Corollary 3.9
Let \(M \subset B_{2}(0)\) be an n-Ahlfors regular rectifiable set containing the origin, and let \(\mu \) be the Hausdorff measure restricted to M. Assume that M satisfies the Poincaré-type inequality (1.1). There exist \(\epsilon _{3} = \epsilon _{3}(n,d,C_{M},C_{P})>0\) and \(\theta _{1} = \theta _{1}(\lambda )\) such that if (1.3) is satisfied with \(\epsilon _{3}\) instead of \(\epsilon _{0}\), and if for every \(x \in M\) and \(r < 1\) there is an n-plane \(Q_{x,r}\) passing through x such that
and
then there exists an onto K-bi-Lipschitz map \(g: \mathbb {R}^{n+d} \rightarrow \mathbb {R}^{n+d}\), where the bi-Lipschitz constant \(K=K(n,d,C_{M},C_{P})\), and an n-dimensional plane \(\Sigma _{0}\), such that (1.4) holds, (1.5) holds with \(\epsilon _{3}\) instead of \(\epsilon _{0}\) and with \(C_{0}'' = C_{0}''(n,d,C_{M},C_{P})\) instead of \(C_{0}\), and
Proof
Let \(\epsilon _{2}\) be as in Theorem 3.7, and let \(\epsilon _{3} \le \epsilon \le \epsilon _{2}\) (\(\epsilon _{3}\) and \(\epsilon \) to be determined later). Going through the exact same steps as in the proof of Theorem 1.2, but with \(\epsilon \) instead of \(\epsilon _{2}\), and \(\epsilon _{3}\) instead of \(\epsilon _{0}\), we get a bijective map \(g: \mathbb {R}^{n+d} \rightarrow \mathbb {R}^{n+d}\) such that (1.4) holds,
and
Note that we have not fixed \(\epsilon _{3}\) and \(\epsilon \) yet. However, we know that the above holds for \(\epsilon _{3} \le \epsilon \le \epsilon _{2}\) while inequality (3.55) is satisfied with \(\epsilon _{3}\) instead of \(\epsilon _{0}\), (3.68) is satisfied with \(\epsilon \) instead of \(\epsilon _{2}\) and \(\epsilon _{3}\) instead of \(\epsilon _{0}\), and the hypothesis of Claim 1 is satisfied with \(\epsilon _{3}\) instead of \(\epsilon _{0}\). Now, we want to show that
We first show that for every \(k \ge 0\) and for every \(j \in J_{k}\), \(M \cap B_{120 r_{k}}(\tilde{x}_{jk})\) is close to \(P_{jk}\) and that the n-planes \(P_{jk}\) and \(Q_{jk} := Q_{x_{jk}, r_{k}}\) are close to each other (in the Hausdorff distance sense). Let us begin by showing that for every \(k \ge 0\) and for every \(j \in J_{k}\),
By Markov’s inequality, we know that
Using (3.58) with the fact that \(\mu \) is Ahlfors regular, and (1.3) with (3.18) from Lemma 3.4 and the fact that \(120 \lambda r_{k} \le \frac{1}{10}\), we get
Now, take a point \(z \in M \cap B_{120r_{k}}(\tilde{x}_{jk})\). We consider two cases: Either
or
In the first case, combining (3.96) with (1.3) and (3.18), we get
In case of (3.97), let \(\rho \) be the biggest radius such that
Now, since \(z \in M\) and \(\mu \) is Ahlfors regular, we get using (3.96) that
Thus, after relabelling, (3.99) becomes
On the other hand, since \(\rho \) is the biggest radius such that \(B_{\rho }(z) \subset \Big \{ x \in B_{120r_{k}}(\tilde{x}_{jk}) ; \frac{d(x,P_{jk})}{120r_{k}} > \alpha ^{\frac{1}{2}}\left( \tilde{x}_{jk}, 120 \lambda r_{k}\right) \Big \}\), then there exists \(x_{0} \in \partial B_{\rho }(z)\) such that
Thus, by (3.101), (3.100) and (1.3) together with (3.18), we get
Combining (3.98) and (3.102), we get that
where \(C_{5}= C_{5}(n,d,C_{M},C_{P})\). Thus, for \( C_{5} \, \epsilon _{3}^{\frac{1}{2n}} \le \epsilon \), we get (3.95), which is the desired inequality.
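For the reader's convenience, the Markov inequality step at the beginning of this argument can be written out as follows (a sketch consistent with the threshold \(\alpha ^{\frac{1}{2}}\) used above; write \(\alpha := \alpha (\tilde{x}_{jk}, 120 \lambda r_{k})\)):

```latex
\mu \left( \left\{ x \in B_{120 r_{k}}(\tilde{x}_{jk}) \; ; \;
    \frac{d(x, P_{jk})}{120 r_{k}} > \alpha^{\frac{1}{2}} \right\} \right)
\;\le\; \alpha^{-\frac{1}{2}}
    \int_{B_{120 r_{k}}(\tilde{x}_{jk})}
    \frac{d(x, P_{jk})}{120 r_{k}} \, d\mu(x).
```

Combined with the integral bound coming from (3.58), this makes the exceptional set on the left small compared to \(\mu (B_{120 r_{k}}(\tilde{x}_{jk}))\).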
Now, let us show that \(P_{jk}\) and \(Q_{jk}\) are close together, that is
Since \(P_{jk}\) and \(Q_{jk}\) are n-planes, it is enough to show
Let \(y \in Q_{jk} \cap B_{5r_{k}}(x_{jk})\). By (3.90), we get that \(d(y,M) \le \epsilon _{0} r_{k}\), and thus, there exists \(y' \in M\) such that \(|y - y'| \le 2 \, \epsilon _{0} \, r_{k}\). Recalling that \( x_{jk} \in M \cap B_{ \frac{r_{k}}{6}}(\tilde{x}_{jk})\) (see (3.53)), we get
that is \(y' \in B_{120 r_{k}}(\tilde{x}_{jk})\). Hence, by (3.95), we get that \(d(y', P_{jk}) \le \epsilon \,r_{k}\), and using the fact that \(\epsilon _{3} \le \epsilon \), we get
which finishes the proof of (3.105) and in particular (3.104).
Before starting the proof of (3.94), let us briefly recall how the map g was defined. In the proof of Theorem 3.7 [see paragraph above (3.48)], David and Toro constructed the smooth maps \(\sigma _{k}\) and \(f_{k}\), where \(f_{0} = Id\) and \(f_{k} = \sigma _{k-1} \circ \ldots \circ \sigma _{0}\) for \(k \ge 1\); they then defined the map \(f = \lim _{k \rightarrow \infty } f_{k}\) on \(\Sigma _{0}\), and finally g as the extension of f to the whole space.
In order to prove (3.94), we will need the following inequality from [7] (see Proposition 5.1, page 19 in [7]):
We are finally ready to prove (3.94). Let \(w \in g(\Sigma _{0}) \cap B_{\frac{1}{10^{l_{0}+8}}}(0)\), and let \(d_{0} := d(w, M)\). We would like to prove that \(d_{0}=0\) (recall that M is closed by assumption). Let \(z \in \Sigma _{0}\) such that \( w = g(z)\). Notice that by (3.78) (with \(\epsilon \) instead of \(\epsilon _{2}\)), the definition of \(f_{0}\), and the fact that g and f agree on \(\Sigma _{0}\), we have
Recalling that \(\Sigma _{0} = P_{i_{0}0}\), \(\tilde{x}_{i_{0}0} = 0\) , \(r_{0} = \frac{1}{10^{l_{0}+5}}\), and that \( x_{jk} \in B_{ \frac{r_{k}}{6}}(\tilde{x}_{jk})\) (see (3.53)), we get
for \(\epsilon \) such that \(C_{6} \epsilon \le 1\), where \(C_{6} = C_{6}(n,d)\). Thus, \(z \in P_{i_{0}0} \cap B_{5 r_{0}}(x_{i_{0}0})\), and by (3.104), there is a point \(z' \in Q_{i_{0}0}\) such that \( |z - z'| \le 6 \epsilon \, r_{0}\). Moreover,
for \(\epsilon < 1\). Thus, \(z' \in Q_{i_{0}0} \cap B_{10 r_{0}}(x_{i_{0}0})\), and by (3.90), we get that \(d(z', M) \le \epsilon _{3} \, r_{0}.\)
Combining (3.107), the line after (3.108), the line before and the line after (3.109), and the fact that \(\epsilon _{3} \le \epsilon \), we get
for \(\epsilon \) such that \((C_{6} + 7) \, \epsilon \le \frac{1}{10}\), where \(C_{6} = C_{6}(n,d)\).
We proceed by contradiction. Suppose \(d_{0} > 0\), then there exists \(k \ge 0\) such that \(r_{k+1} < d_{0} \le r_{k}\). Notice that since \(w = g(z)\), \(z \in \Sigma _{0}\), and the maps g and f agree on \(\Sigma _{0}\), then by (3.78), we have
Now, by the definition of \(d_{0}\), there exists \(\xi \in M\) such that \( |\xi - w| \le \frac{3}{2} d_{0}\). Using (3.110) and the fact that \(r_{0} = \frac{1}{10^{l_{0}+5}}\), we get
and thus by (3.54), there exists \(j \in J_{k}\) such that \( \xi \in B_{\frac{3}{2}r_{k}}(x_{jk})\).
Since both k and j are now fixed, consider the n-plane \(P_{jk}\) and the point \(x_{jk}\). By the line under (3.112), the line under (3.111), (3.111), and the fact that \(d_{0} \le r_{k}\), we have
for \(\epsilon \) such that \(C_{7} \epsilon \le 1\), where \(C_{7} = C_{7}(n,d)\). Thus, inequality (3.106) tell us that \(d(f_{k}(z), P_{jk}) \le C(n,d) \, \epsilon \, r_{k}\). Let \(y \in P_{jk}\) such that \(| y - f_{k}(z) | \le C(n,d) \, \epsilon \, r_{k}\). Then, by (3.111), the line below it, the line below (3.112), and recalling that \(d_{0} \le r_{k}\), we get
for \(\epsilon \) such that \(C_{8} \, \epsilon \le 1\), where \(C_{8} = C_{8}(n,d)\). Thus, \(y \in P_{jk} \cap B_{5 r_{k}}(x_{jk})\), and by (3.104) there exists \(y' \in Q_{jk}\) such that \(| y - y'| \le 3 \epsilon \, r_{k}\). But then, \(| y' - x_{jk}| \le |y - y'| + |y - x_{jk}| \le 10 \, r_{k}\); thus \( y' \in Q_{jk} \cap B_{10 r_{k}}(x_{jk})\) and by (3.90) we get that \(d(y',M) \le \epsilon _{3} \, r_{k}\).
Finally, using (3.111), the two lines before (3.114), and the three lines below it, we get
for \(\epsilon \) such that \(C_{9} \epsilon \le \frac{1}{10}\), where \(C_{9}= C_{9}(n,d)\), which contradicts the fact that \(d_{0} > r_{k+1}\). This finishes the proof of (3.94).
Fix \(\epsilon < \epsilon _{2} < 1\) such that the lines after (3.108), (3.110), (3.113), (3.114), and (3.115) hold, and then fix \(\epsilon _{3} \le \epsilon \) such that inequality (3.55) is satisfied with \(\epsilon _{3}\) instead of \(\epsilon _{0}\), (3.68) is satisfied with \(\epsilon \) instead of \(\epsilon _{2}\) and \(\epsilon _{3}\) instead of \(\epsilon _{0}\), the hypothesis of Claim 1 is satisfied with \(\epsilon _{3}\) instead of \(\epsilon _{0}\), and the line below (3.103) is satisfied. Writing \(\epsilon _{3} = c_{10} \, \epsilon \), where \(c_{10} = c_{10}(n,d,C_{M},C_{P})\), and replacing in (3.92), we get (1.5). The proof that g is bi-Lipschitz is the same as that for Theorem 1.2. \(\square \)
4 The Poincaré Inequality (1.12) is equivalent to the p-Poincaré inequality
Let \((M, d_{0}, \mu )\) be the metric measure space where \(M \subset B_{2}(0)\) is an n-Ahlfors regular rectifiable set in \(\mathbb {R}^{n+d}\), \(\mu \) is the Hausdorff measure restricted to M, and \(d_{0}\) is the restriction of the standard Euclidean distance in \(\mathbb {R}^{n+d}\) to M. In this section, we prove Theorem 1.7, which states that in the setting described above, the Poincaré inequality (1.12) is equivalent to the p-Poincaré inequality (1.10) and the Lip-Poincaré inequality (1.11).
We prove that \(\mathrm{(iii)} \implies \mathrm{(ii)} \implies \mathrm{(i)} \implies \mathrm{(iii)}\). In fact, \(\mathrm{(iii)} \implies \mathrm{(ii)}\) is proved in [14]. The fact that \(\mathrm{(ii)} \implies \mathrm{(i)}\) follows from a theorem in [10] where Keith proves the equivalence between p-Poincaré inequalities and Lip-Poincaré inequalities. Finally, to prove \((i) \implies (iii)\), we use the well known fact that X supporting a p-Poincaré inequality is equivalent to having inequality (1.8) hold for all measurable functions u on X and all p-weak upper gradients \(\rho \) of u. Then, we show that \(|\nabla ^{M}f|\) is a p-weak upper gradient of f, when f is a Lipschitz function on \(\mathbb {R}^{n+d}\).
Let us start with stating the theorems that we need, as mentioned in the paragraph above.
Theorem 4.1
(see [10, Theorem 2]) Let \(p \ge 1\), and let \((X,d,\nu )\) be a complete metric measure space, with \(\nu \) a doubling measure. Then, the following are equivalent:
\((X,d,\nu )\) admits a p-Poincaré inequality for all measurable functions u on X.
\((X,d,\nu )\) admits a Lip-Poincaré inequality for all Lipschitz functions f on X.
Theorem 4.2
(see [1, Proposition 4.13]) Let \(p \ge 1\), and let \((X,d,\nu )\) be a metric measure space. Then, the following are equivalent:
Inequality (1.8) holds for all measurable (resp. Lipschitz) functions u on X and all upper gradients \(\rho \) of u.
Inequality (1.8) holds for all measurable (resp. Lipschitz) functions u on X and all p-weak upper gradients \(\rho \) of u.
Before stating the theorem we need from [14], let us make a remark on what the metric balls look like in the metric measure space \((M, d_{0}, \mu )\). Fix \(x \in M\) and \(r>0\). It is easy to see that
where \(B_{r}(x)\) denotes the Euclidean ball in \(\mathbb {R}^{n+d}\) of center x and radius r.
Theorem 4.3
(see [14, Corollary 5.8]) Let \((M, d_{0}, \mu )\) be as above. Assume that M satisfies (iii). Then, M satisfies (ii).
To show that \(|\nabla ^{M}f|\) is a p-weak upper gradient of f, when f is a Lipschitz function on \(\mathbb {R}^{n+d}\), we need the following lemma from [1]:
Lemma 4.4
(see [1, Lemma 1.42]) Let \(p \ge 1\) and let \((M, d_{0}, \mu )\) be as above. Suppose that \(E \subset M\), with \(\mu (E) = 0\). Denote by \(\Gamma (M)\) the set of all rectifiable curves in M, and let
where \(\mathcal {L}^{*}_{1}\) denotes the Lebesgue outer measure on \(\mathbb {R}\). Then, \( \text {Mod}_{p}(\Gamma _{E}) = 0\).
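Consistent with its use in the proof of Proposition 4.5 below (where curves \(\gamma \notin \Gamma _{E}\) satisfy \(\mathcal {L}_{1}(\gamma ^{-1}(E)) = 0\)), the family \(\Gamma _{E}\) is the collection of curves spending positive length in E:

```latex
\Gamma_{E} \;=\; \big\{ \gamma \in \Gamma(M) \; ; \;
    \mathcal{L}^{*}_{1}\big( \gamma^{-1}(E) \big) > 0 \big\}.
```

The lemma thus says that the curves meeting a \(\mu \)-null set in positive length form a family of p-modulus zero.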
Proposition 4.5
Let \((M, d_{0}, \mu )\) be as above, and let f be a Lipschitz function on \(\mathbb {R}^{n+d}\). Then, \(|\nabla ^{M}f|\) (or more precisely, any non-negative extension of \(|\nabla ^{M}f|\) to the whole space M) is a p-weak upper gradient of \(f|_{M}\), the restriction of f to M.
Proof
Since f is Lipschitz on \(\mathbb {R}^{n+d}\), we know that \(\nabla ^{M}f\) exists \(\mu \)-almost everywhere. Let
Then, \(\mu (E)=0\), and by Lemma 4.4, we know that Mod\(_{p}(\Gamma _{E}) = 0\). Now, let \(\gamma \) be a rectifiable curve in M, parametrized by arc length, such that \(\gamma \notin \Gamma _{E}\). Then, \(\mathcal {L}_{1}(\gamma ^{-1}(E)) = 0\). Moreover, since \(f \circ \gamma \) is Lipschitz, and thus absolutely continuous on \([0, l_{\gamma }]\), we have
Let \(t \in [0, l_{\gamma }]\) such that \(\gamma (t) \notin E\). Then, \(T_{\gamma (t)}M\) exists, and \(\nabla ^{M}f(\gamma (t)) \in T_{\gamma (t)}M\). We first show that
Since \(\gamma '(t) \in T_{\gamma (t)}M\) is a unit vector, by Rademacher's Theorem we have
Now, for any \(-t< h < l_{\gamma } - t\), we have
where in the last step, we used the fact that f is Lipschitz on \(\mathbb {R}^{n+d}\).
Taking the limit as \(h \rightarrow 0\) on both sides of (4.5), and using (4.4) and the fact that \(\gamma '(t)\) is a unit vector, we get
which is exactly (4.3). Replacing (4.3) in (4.2), we get
Now, define the map \(G : M \rightarrow [0, \infty ]\) to be any non-negative extension of \(|\nabla ^{M}f|\) to the whole space M (that is, \(G(x) = |\nabla ^{M}f(x)|\) on \(M {\setminus } E\), which means that \(G = |\nabla ^{M}f|\, \mu \)-a.e.). Plugging back in (4.5), we get
This finishes the proof that G is a p-weak upper gradient of \(f|_{M}\). \(\square \)
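In summary, the proof combines the following chain, valid for a.e. \(t \in [0, l_{\gamma }]\) along any rectifiable curve \(\gamma \notin \Gamma _{E}\) parametrized by arc length (with G as above):

```latex
(f \circ \gamma)'(t)
  \;=\; \big\langle \nabla f(\gamma(t)),\, \gamma'(t) \big\rangle
  \;=\; \big\langle \nabla^{M} f(\gamma(t)),\, \gamma'(t) \big\rangle,
\qquad\text{so}\qquad
\big| f(\gamma(l_{\gamma})) - f(\gamma(0)) \big|
  \;\le\; \int_{0}^{l_{\gamma}} \big| \nabla^{M} f(\gamma(t)) \big|\, dt
  \;=\; \int_{\gamma} G \, ds,
```

where the second equality uses that \(\gamma '(t) \in T_{\gamma (t)}M\), so the normal component of \(\nabla f\) does not contribute; the last equality holds because \(G = |\nabla ^{M}f|\) off E and \(\gamma ^{-1}(E)\) has measure zero. This is precisely the upper gradient inequality for the pair \((f|_{M}, G)\).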
We are finally ready to prove Theorem 1.7:
Proof of Theorem 1.7
Proof
We prove \(\mathrm{(iii)} \implies \mathrm{(ii)} \implies \mathrm{(i)} \implies \mathrm{(iii)}\):
\(\mathrm{(iii)} \implies \mathrm{(ii)}\):
This is exactly Theorem 4.3.
\(\mathrm{(ii)} \implies \mathrm{(i)}\):
Notice that by using (4.1), we will be done once we apply Theorem 4.1 to the metric measure space \((M, d_{0}, \mu )\). In fact, M is complete since it is closed and bounded. Moreover, the fact that \(\mu \) is doubling follows from (4.1) and the Ahlfors regularity of \(\mu \). Hence, we can apply Theorem 4.1 to \((M, d_{0}, \mu )\).
\(\mathrm{(i)} \implies \mathrm{(iii)}\): Notice that by Theorem 4.2, we know that (i) implies that inequality (1.8) holds for all measurable functions u on M and all p-weak upper gradients \(\rho \) of u. Let f be a Lipschitz function on \(\mathbb {R}^{n+d}\), and fix \(x \in M\) and \(r>0\). Then, \(f|_{M}\) is a Lipschitz function on M, and by Proposition 4.5, \(|\nabla ^{M}f|\) agrees \(\mu \)-almost everywhere with G, a p-weak upper gradient of \(f|_{M}\). Applying (1.8) for \(u = f|_{M}\), \(\rho = G\), and the ball \(B = B_{r}(x) \cap M\), we get
hence finishing the proof. \(\square \)
5 The conclusion of Theorem 1.2 is optimal
In this section, we prove Theorem 1.8 by giving an example of a non-Reifenberg flat, 2-Ahlfors regular rectifiable set \(M \subset \mathbb {R}^{3}\) that satisfies the Carleson condition (1.3) and the Poincaré-type inequality (1.1).
To construct this example, we use the well known fact that Lipschitz domains support a p-Poincaré-type inequality, together with Theorem 1.7 that allows us to go from a p-Poincaré inequality to the Poincaré inequality (1.12).
In order to keep track of where the balls live, \(B^{2}_{r}(x)\) denotes the Euclidean ball in \(\mathbb {R}^{2}\) of center x and radius r, whereas \(B^{3}_{r}(x)\) denotes the corresponding ball in \(\mathbb {R}^{3}\). Moreover, \(\text {diam}(A)\) denotes the diameter of a set A.
Definition 5.1
We say that a bounded set \(A \subset \mathbb {R}^{2}\) satisfies the corkscrew condition if there exists \(\delta >0\) such that for all \(x \in \bar{A}\) and \(0<r \le \text {diam}(A)\), the set \(B^{2}_{r}(x) \cap A\) contains a ball with radius \(\delta r\).
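As a concrete illustration of this definition, the following sketch (not from the paper; all names are illustrative) numerically estimates the corkscrew constant \(\delta \) of the open unit square \(A = (0,1)^{2}\) at a given point and scale, by searching a grid in \(B^{2}_{r}(x)\) for the center of the largest ball contained in \(B^{2}_{r}(x) \cap A\):

```python
import math

def dist_to_complement(y):
    """Distance from y to the complement of A = (0,1)^2 (0 if y lies outside A)."""
    return max(0.0, min(y[0], y[1], 1.0 - y[0], 1.0 - y[1]))

def corkscrew_estimate(x, r, n=80):
    """Lower estimate of delta at (x, r): largest rho/r with some ball
    B_rho(y) contained in B_r(x) ∩ A, found by a grid search."""
    best = 0.0
    for i in range(n + 1):
        for j in range(n + 1):
            y = (x[0] - r + 2.0 * r * i / n, x[1] - r + 2.0 * r * j / n)
            # largest rho such that B_rho(y) sits inside both A and B_r(x)
            rho = min(dist_to_complement(y),
                      r - math.hypot(y[0] - x[0], y[1] - x[1]))
            best = max(best, rho)
    return best / r

# Even centered at a corner of the square, a definite fraction of the
# scale survives, as the corkscrew condition requires.
print(corkscrew_estimate((0.0, 0.0), 1.0) > 0.2)   # True
```

The estimate is only a lower bound for \(\delta \) at one point and scale; the definition asks for a uniform constant over all \(x \in \bar{A}\) and \(0 < r \le \text {diam}(A)\).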
Definition 5.2
We say that an open, bounded set \(A \subset \mathbb {R}^{2}\) is a Lipschitz domain if the boundary \(\partial A\) of A can be written, locally, as the graph of a Lipschitz function. More precisely, A is a Lipschitz domain if for every point \(x \in \partial A\) there exist a radius \(r>0\) and a bijective map \(h_{x}: B^{2}_{r}(x) \rightarrow B^{2}_{1}(0)\) such that the following hold:
\(h_{x}\) and \(h_{x}^{-1}\) are Lipschitz continuous,
\(h_{x}(\partial A \cap B^{2}_{r}(x)) = Q_{0}\), and
\(h_{x}(A \cap B^{2}_{r}(x)) = Q_{1}\),
where \(Q_{0} = \{ (x_{1}, x_{2}) \in B^{2}_{1}(0); x_{2} = 0 \}\) and \(Q_{1} = \{ (x_{1}, x_{2}) \in B^{2}_{1}(0); x_{2} > 0 \}\).
In [2], Björn and Shanmugalingam prove that Lipschitz domains support p-Poincaré-type inequalities:
Theorem 5.3
(see [2, Theorem 4.4]) Consider the Hausdorff measure \(\mathcal {H}^{2}\) on \(\mathbb {R}^{2}\), and let \(\Omega \) be a Lipschitz domain in \(\mathbb {R}^{2}\). Then \(\Omega \) supports a 2-Poincaré-type inequality; that is, there exist constants \(\kappa \ge 1\) and \(\lambda \ge 1\) such that for every \(x \in \bar{\Omega }\) and \(r>0\), every Lipschitz function \(u: \Omega \rightarrow \mathbb {R}\), and every upper gradient \(\rho \) of u in \(\Omega \), the following holds
\(\fint _{B^{2}_{r}(x) \cap \Omega } |u - u_{x,r}| \, d\mathcal {H}^{2} \le \kappa \, r \left( \fint _{B^{2}_{\lambda r}(x) \cap \Omega } \rho ^{2} \, d\mathcal {H}^{2} \right) ^{\frac{1}{2}},\)  (5.1)
where \(u_{x,r} = \fint _{B^{2}_{r}(x) \cap \Omega } u \, d\mathcal {H}^{2}\).
We are now ready to construct our example. Let \(\Omega := B^{2}_{1}(0) {\setminus } Q\) where Q is the closed square of center \((\frac{1}{2},0)\), and side \(l = \frac{1}{10}\). Since \(\Omega \) is a Lipschitz domain, by Theorem 5.3, it supports the 2-Poincaré-type inequality (5.1).
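The set \(\Omega \) is elementary to describe computationally. The following minimal sketch (illustrative names, not part of the proof) encodes membership in \(\Omega = B^{2}_{1}(0) {\setminus } Q\):

```python
SIDE = 0.1  # side length l of the removed closed square Q

def in_Q(x, y):
    # Q is the closed square of center (1/2, 0) and side l = 1/10;
    # boundary points belong to Q since Q is closed.
    return abs(x - 0.5) <= SIDE / 2 and abs(y) <= SIDE / 2

def in_Omega(x, y):
    # Omega is the open unit disk with the closed square Q removed.
    return x * x + y * y < 1.0 and not in_Q(x, y)

print(in_Omega(0.0, 0.0))    # True: the origin lies in Omega
print(in_Omega(0.5, 0.0))    # False: center of the removed square
print(in_Omega(0.5, 0.05))   # False: the boundary of Q belongs to Q
print(in_Omega(0.5, 0.06))   # True: just above the square
```

Removing the closed square punches a genuine hole in the disk, which is what destroys Reifenberg flatness while leaving \(\Omega \) a Lipschitz domain.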
Proof of Theorem 1.8
Let \(\Omega \) be as in the construction above, and let \(M := \bar{\Omega } \times \{0\} \subset \mathbb {R}^{3}\). We prove the theorem for \(n=2\), \(d = 1\), and \(\mu = \mathcal {H}^{2} \llcorner M\). However, with a similar construction,Footnote 10 the theorem holds for any \(n \ge 2\) and \(d \ge 1\).
It is easy to see that M is a rectifiable set that is not Reifenberg flat. To see that M is 2-Ahlfors regular, first note that M is closed by construction. So, it remains to show that there exists a constant \(C_{M} \ge 1\) such that for every \(x \in M\) and \(0<r \le 1\), we have
\(C_{M}^{-1} \, r^{2} \le \mu (B^{3}_{r}(x)) \le C_{M} \, r^{2}.\)  (5.2)
By the definition of \(\mu \) and the construction of M, proving (5.2) translates to proving that for every \(\bar{x} \in \bar{\Omega }\) and \(0<r \le 1\),
\(C_{M}^{-1} \, r^{2} \le \mathcal {H}^{2}(\bar{\Omega } \cap B^{2}_{r}(\bar{x})) \le C_{M} \, r^{2}.\)  (5.3)
The right hand side of (5.3) is trivial since \(\mathcal {H}^{2}(\bar{\Omega } \cap B^{2}_{r}(\bar{x})) \le \mathcal {H}^{2}(B^{2}_{r}(\bar{x})) = \omega _{2} \, r^{2}\). For the left hand side, notice that since \(\Omega \) is a Lipschitz domain, it automatically satisfies the corkscrew condition, and thus there exists a \(\delta >0\) such that for every \(\bar{x} \in \bar{\Omega }\) and every \(0<r \le 1 \le \text {diam}(\Omega )\), there is a ball \(B^{2}_{\delta r}(\bar{y}) \subset \bar{\Omega } \cap B^{2}_{r}(\bar{x})\). So, \(\omega _{2} \, \delta ^{2}r^{2} = \mathcal {H}^{2}( B^{2}_{\delta r}(\bar{y})) \le \mathcal {H}^{2}(\bar{\Omega } \cap B^{2}_{r}(\bar{x}))\), which finishes the proof of (5.3).
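The two-sided bound (5.3) can be sanity-checked numerically. The following Monte Carlo sketch (illustrative, not part of the proof; since \(\mathcal {H}^{2}(\bar{\Omega } {\setminus } \Omega ) = 0\) we may sample \(\Omega \) itself) estimates the ratio \(\mathcal {H}^{2}(\Omega \cap B^{2}_{r}(\bar{x})) / (\omega _{2} r^{2})\) at two sample points:

```python
import random

SIDE = 0.1  # side length of the removed closed square Q

def in_Omega(x, y):
    in_Q = abs(x - 0.5) <= SIDE / 2 and abs(y) <= SIDE / 2
    return x * x + y * y < 1.0 and not in_Q

def area_ratio(xbar, r, n=40000, seed=1):
    """Estimate H^2(Omega ∩ B_r(xbar)) / (omega_2 r^2) by rejection sampling."""
    rng = random.Random(seed)
    hits = total = 0
    while total < n:
        u, v = rng.uniform(-r, r), rng.uniform(-r, r)
        if u * u + v * v <= r * r:          # uniform sample of B_r(xbar)
            total += 1
            if in_Omega(xbar[0] + u, xbar[1] + v):
                hits += 1
    return hits / n

# On the edge of the removed square roughly half the ball survives;
# away from the square, almost the whole ball does. Both ratios stay
# bounded below, as (5.3) asserts.
print(0.3 < area_ratio((0.5, 0.05), 0.05) < 0.7)   # True
print(area_ratio((0.0, 0.0), 0.5) > 0.9)           # True
```

The corkscrew constant \(\delta \) of \(\Omega \) is what keeps these ratios away from 0 uniformly in \(\bar{x}\) and r.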
Let us now prove that the Carleson-type condition (1.3) holds. Let \(\epsilon _{0}\) be the constant from the statement of Theorem 1.2. Since M has co-dimension 1, (1.3) can be written as (1.2), and thus proving (1.3) translates to proving
where \(\nu \) denotes the unit normal to M and \(\nu _{x,r} = \fint _{B_{r}(x) \cap M} \nu \, d\mu \). But for \(\mu \)-almost every y, \(\nu (y)\) exists and \(\nu (y) = \langle 0,0,1 \rangle \). Thus, the left hand side of (5.4) is identically 0, and (5.4) is satisfied.
Finally, let us prove that M satisfies the following Poincaré inequality:
\(\fint _{B_{r}(x) \cap M} |f - f_{x,r}| \, d\mu \le \kappa \, r \left( \fint _{B_{\lambda r}(x) \cap M} |\nabla ^{M} f|^{2} \, d\mu \right) ^{\frac{1}{2}},\)  (5.5)
for some \(\kappa \ge 1\) and \(\lambda \ge 1\), where \(x \in M\), \(r >0\), f is a Lipschitz function on \(\mathbb {R}^{3}\), and \(f_{x,r} = \fint _{B_{r}(x) \cap M} f \, d\mu \). By Theorem 1.7, it suffices to show that
\(\fint _{B_{r}(x) \cap M} |f - f_{x,r}| \, d\mu \le \kappa \, r \left( \fint _{B_{\lambda r}(x) \cap M} \rho ^{2} \, d\mu \right) ^{\frac{1}{2}},\)  (5.6)
for some \(\kappa \ge 1\) and \(\lambda \ge 1\), where \(x \in M\), \(r >0\), f is a LipschitzFootnote 11 function on M, \(\rho \) is an upper gradient of f in M, and \(f_{x,r} = \fint _{B_{r}(x) \cap M} f \, d\mu \).
Let f be a Lipschitz function on M, and \(\rho \) an upper gradient of f in M. Fix \(x \in M\) and \(r>0\). Let \(\tilde{x} \in \bar{\Omega }\) be such that \((\tilde{x},0) = x\), and define the functions \(\tilde{f} : \Omega \rightarrow \mathbb {R}\) and \(\tilde{\rho } : \Omega \rightarrow [0, \infty ]\) by \(\tilde{f}(a,b) = f(a,b,0)\) and \(\tilde{\rho }(a,b) = \rho (a,b,0)\). It is easy to see that \(\tilde{f}\) is a Lipschitz function on \(\Omega \) and that \(\tilde{\rho }\) is an upper gradient of \(\tilde{f}\) in \(\Omega \). Thus, by the definition of \(\mu \), the construction of M, the fact that \(\mathcal {H}^{2}(\bar{\Omega } \setminus \Omega ) = 0\), and (5.1) (applied with \(x = \tilde{x}\), \(u = \tilde{f}\), and \(\rho = \tilde{\rho }\)), we get
which is exactly (5.6), hence finishing the proof of the theorem. \(\square \)
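The lifting step in the proof above is simple because the identification \((a,b) \mapsto (a,b,0)\) is an isometry, so Lipschitz constants and upper gradients pass back and forth unchanged. A minimal sketch (illustrative names, not the paper's notation):

```python
def lift(f):
    """From f : M -> R on M = closure(Omega) x {0}, build
    f~ : Omega -> R via f~(a, b) = f(a, b, 0)."""
    return lambda a, b: f(a, b, 0.0)

f = lambda x, y, z: abs(x - y) + z   # a Lipschitz function on R^3, hence on M
f_tilde = lift(f)

# f~ agrees with f along M, so averages over B_r(x) ∩ M and over the
# corresponding planar ball B^2_r(x~) ∩ Omega coincide.
print(f_tilde(0.25, -0.5) == f(0.25, -0.5, 0.0))   # True
print(f_tilde(0.25, -0.5))                          # 0.75
```

The same one-line identification carries \(\rho \) to \(\tilde{\rho }\), which is why (5.1) for \((\tilde{f}, \tilde{\rho })\) immediately yields (5.6) for \((f, \rho )\).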
Remark 5.4
Notice that one could remove more than one square Q from the ball \(B^{2}_{1}(0)\) and still get the same result. The important point about the construction above is that \(\Omega \) is a Lipschitz domain; thus, if we want to construct a set with m holes that satisfies the hypotheses of Theorem 1.2, all we need to do is make sure that the squares we remove from the ball \(B^{2}_{1}(0)\) are far away from each other (that is, they do not accumulate). That way, \(B^{2}_{1}(0) \setminus \bigcup _{i=1}^{m} Q_{i}\) remains a Lipschitz domain, and the rest of the argument follows directly.
As mentioned in the introduction, the example constructed in Theorem 1.8 proves that the conclusion of Theorem 1.2 is optimal.
Notes
\(|\pi _{T_{y}M} - A_{x,r}|^{2} = \text {trace} ( (\pi _{T_{y}M} - A_{x,r})^{2} ) = \sum _{i,j=1}^{n+d} |a_{ij}(y) - (a_{ij})_{x,r}|^{2}\).
Notice that Theorem 5.5 in [14] is stated and proved in the ambient space \(\mathbb {R}^{n+1}\) (so \(d=1\)) and for \(\lambda = 2\). However, the proof of Theorem 5.5 in [14] is independent from the co-dimension d of M. Thus the exact same statement holds here in the higher co-dimension case, and the quasiconvexity constant \(\kappa _{1}\) stays independent of d. Moreover, it is very easy to see that Theorem 5.5 in [14] still holds with arbitrary \(\lambda \ge 1\), and in that case, \(\kappa _{1}\) would also depend on \(\lambda \).
Notice that Lemma 3.1 in [14] is stated and proved in the ambient space \(\mathbb {R}^{n+1}\), whereas Lemma 3.2 here has \(\mathbb {R}^{n+d}\) as the ambient space. However, one can very easily adapt the same proof of Lemma 3.1 in [14] to this higher co-dimension case here, while noticing that \(c_{0}\) in the latter case should also depend on the co-dimension d.
Notice that Lemma 3.3 in [14] is stated and proved in the ambient space \(\mathbb {R}^{n+1}\), whereas Lemma 3.3 here has \(\mathbb {R}^{n+d}\) as the ambient space. However, the proof of Lemma 3.3 in [14] is in fact independent from the co-dimension d of M. Thus the exact same proof holds here, and the constant \(K_{1}\) stays independent of d.
A note for the interested reader: Theorem 3.5 implies that the series \(\sum _{j=1}^{\infty } \beta _{1}^{2}(x, 10^{-j})\) is finite. See [14] for how this relates to the \(\beta _{1}\)-numbers, and the theorems in [7] that involve a Carleson condition on the \(\beta _{1}\)-numbers guaranteeing a bi-Lipschitz parameterization of the set.
Note that considering this choice of bases and matrix representations is only valid in co-dimension 1, as otherwise \(B_{x,r}\) will not be well defined. This is because in higher co-dimensions, one will have infinitely many choices for the unit normals that span the normal plane, instead of the one choice (modulo direction) in co-dimension 1.
Notice that Corollary 5.8 in [14] is stated and proved in the ambient space \(\mathbb {R}^{n+1}\). However, the proof of Corollary 5.8 in [14] is independent from the co-dimension d of M. Thus the exact same statement holds here in the higher co-dimension case. Moreover, notice that in Corollary 5.8, the Poincaré inequality assumed is (1.12) but for \(p= \lambda = 2\). This results in getting the Poincaré inequality (1.11) but also for \(p=\lambda = 2\). However, it is easy to see that one can assume the Poincaré inequality (1.12) for any \(p \ge 1\) and \(\lambda \ge 1\), and get inequality (1.11) for the same p and \(\lambda \) that one started with.
This follows directly from the facts that for any sequence \(r \rightarrow 0\), we have \(\gamma '(t) = \lim _{r \rightarrow 0} \displaystyle \frac{\gamma (t+r) - \gamma (t)}{r}\) and \(\lim _{r \rightarrow 0} \sup _{y \in \frac{M - \gamma (t)}{r}} d(y, T_{\gamma (t)}M) = 0\).
The function G defined here is clearly measurable. However, since any non-negative measurable function coincides \(\mu \)-almost everywhere with a non-negative Borel function (see [1, Proposition 1.2]), we can assume, without any loss of generality that G is Borel. In this case, \(\int _{\gamma }G \, ds\) is well defined for any rectifiable curve \(\gamma \) in M, and we do not need to worry about the last step in (4.6).
In general, we take \(\Omega := B^{n}_{1}(0) {\setminus } Q\) where Q is the closed n-cube of center \((\frac{1}{2}, \underbrace{0, \ldots ,0}_{(n-1)\text {-times}})\) and side \(l = \frac{1}{10}\). Then, \(M := \bar{\Omega } \times \{\underbrace{(0, \ldots , 0)}_{d\text {-times}}\}\).
References
Björn, A., Björn, J.: Nonlinear Potential Theory on Metric Spaces, vol. 17. European Mathematical Society (EMS), Zürich (2011)
Björn, J., Shanmugalingam, N.: Poincaré inequalities, uniform domains and extension properties for Newton–Sobolev functions in metric spaces. J. Math. Anal. Appl. 332(1), 190–208 (2007)
Cheeger, J.: Differentiability of Lipschitz functions on metric measure spaces. Geom. Funct. Anal. 9(3), 428–517 (1999)
Cheeger, J., Colding, T.H.: On the structure of spaces with Ricci curvature bounded below I. J. Differ. Geom. 46(3), 406–480 (1997)
Colding, T.H., Naber, A.: Lower Ricci curvature, branching and the bilipschitz structure of uniform Reifenberg spaces. Adv. Math. 249, 348–358 (2013)
David, G., Semmes, S.: Singular integrals and rectifiable sets in \({ R}^n\): beyond Lipschitz graphs. Astérisque 193, 152 (1991)
David, G., Toro, T.: Reifenberg parameterizations for sets with holes. Mem. Am. Math. Soc. 215(1012), vi+102 (2012)
Durand-Cartagena, E., Jaramillo, J.A., Shanmugalingam, N.: First order Poincaré inequalities in metric measure spaces. Ann. Acad. Sci. Fenn. Math. 38(1), 287–308 (2013)
Hajłasz, P., Koskela, P.: Sobolev met Poincaré. Mem. Am. Math. Soc. 145(688), x+101 (2000)
Keith, S.: Modulus and the Poincaré inequality on metric measure spaces. Math. Z. 245(2), 255–292 (2003)
Keith, S.: Measurable differentiable structures and the Poincaré inequality. Indiana Univ. Math. J. 53(4), 1127–1150 (2004)
Laakso, T.J.: Ahlfors \(Q\)-regular spaces with arbitrary \(Q>1\) admitting weak Poincaré inequality. Geom. Funct. Anal. 10(1), 111–123 (2000)
LeVeque, R.J.: Finite Difference Methods for Ordinary and Partial Differential Equations. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (2007)
Merhej, J.: On the geometry of rectifiable sets with Carleson and Poincaré-type conditions. Indiana Univ. Math. J. 66(5), 1659–1706 (2017). https://doi.org/10.1512/iumj.2017.66.6161
Reifenberg, E.R.: Solution of the Plateau problem for \(m\)-dimensional surfaces of varying topological type. Bull. Am. Math. Soc. 66, 312–313 (1960)
Semmes, S.: Chord-arc surfaces with small constant. I. Adv. Math. 85(2), 198–223 (1991)
Semmes, S.: Chord-arc surfaces with small constant. II. Good parameterizations. Adv. Math. 88(2), 170–199 (1991)
Simon, L.: Lectures on geometric measure theory. In: Proceedings of the Centre for Mathematical Analysis, vol. 3. Australian National University, Centre for Mathematical Analysis, Canberra (1983)
Toro, T.: Doubling and flatness: geometry of measures. Not. Am. Math. Soc. 44(9), 1087–1094 (1997)
Acknowledgements
The author would like to thank T. Toro for her supervision, direction, and numerous insights into the subject of this project. The author was partially supported by the National Science Foundation DMS-0856687 and DMS-1361823 Grants, and by Notre Dame University.
Merhej, J. Poincaré-type inequalities and finding good parameterizations. Math. Z. 294, 17–49 (2020). https://doi.org/10.1007/s00209-019-02256-2
Keywords
- Rectifiable set
- Carleson-type condition
- Poincaré-type condition
- p-Poincaré inequality
- Ahlfors regular
- Bi-Lipschitz image