1 Introduction

Finding bi-Lipschitz parameterizations of sets is a central question in geometric measure theory and geometric analysis. A Lipschitz function on a metric space plays the role played by a smooth function on a manifold, and a bi-Lipschitz function plays the role of a diffeomorphism. Many concepts in metric spaces, such as metric dimensions and Poincaré inequalities, are preserved under bi-Lipschitz mappings. Moreover, a bi-Lipschitz parameterization of a set by Euclidean space leads to its uniform rectifiability. Uniform rectifiability is a quantified version of rectifiability which is well adapted to the study of problems in harmonic analysis on non-smooth sets.

The type of parameterizations discussed in this paper first appeared in 1960 when Reifenberg [15] showed that if a closed set \(M \subset \mathbb {R}^{n+d}\) is well approximated by affine n-planes at every point and every scale, then M is a bi-Hölder image of \(\mathbb {R}^{n}\). Such a set is called a Reifenberg flat set. In recent years, there has been renewed interest in this result and its proof. In particular, Reifenberg type parameterizations have been used to get good parameterizations of many spaces such as chord arc surfaces with small constant (see [16, 17]), and limits of manifolds with Ricci curvature bounded from below (see [4, 5]). Moreover, Reifenberg’s theorem has been refined to get better parameterizations of a set: bi-Lipschitz parameterizations (see [6, 7, 14, 19]). In fact, it is well known today, due to the authors of the latter references, that Carleson-type conditions are the correct conditions to study when seeking necessary and sufficient conditions for bi-Lipschitz parameterizations of sets. For example, in [19], Toro considers a Carleson condition on the Reifenberg flatness of M that guarantees its bi-Lipschitz parameterization. In [7], David and Toro consider a Carleson condition on the Jones beta numbers \(\beta _{\infty }\) and on the (possibly smaller) \(\beta _{1}\) numbers that guarantees the same result. In [14], the author studies a Carleson-type condition on the oscillation of the unit normals to an n-rectifiable set M of co-dimension 1 that guarantees its bi-Lipschitz parameterization. An n-rectifiable set \(M \subset \mathbb {R}^{n+d}\) is a generalization of a smooth n-manifold in \(\mathbb {R}^{n+d}\). Rectifiable sets are characterized by having (approximate) tangent planes (see Definition 2.4) at \(\mathcal {H}^{n}\)-almost every point. Moreover, in the special case when the rectifiable set M has co-dimension 1, M has an (approximate) unit normal \(\nu \) (see Remark 2.5) at \(\mathcal {H}^{n}\)-almost every point.
In fact, in [14], the author considers an n-Ahlfors regular rectifiable set \(M \subset \mathbb {R}^{n+1}\), of co-dimension 1, that satisfies the following Poincaré-type inequality for \(d=1\) and \(\lambda =2\):

For all \(x \in M\), \(r > 0\), and f a Lipschitz function on \(\mathbb {R}^{n+d}\), we have

$$\begin{aligned} \fint _{B_{r}(x)} |f(y) - f_{x,r}| \, d\mu (y) \le C_{P} \, r \left( \fint _{B_{\lambda r}(x)} |\nabla ^{M}f(y)|^{2} \, d\mu (y) \right) ^{\frac{1}{2}}, \end{aligned}$$
(1.1)

where \(C_{P}\) denotes the Poincaré constant that appears here, \(\lambda \ge 1\) is the dilation constant, \(\mu = \mathcal {H}^{n} \llcorner M\) is the Hausdorff measure restricted to M, \(f_{x,r} = \fint _{B_{r}(x)} f \, d\mu \) is the average of the function f on \(B_{r}(x)\), \(B_{r}(x)\) is the Euclidean ball in the ambient space \(\mathbb {R}^{n+d}\), and \(\nabla ^{M}f(y)\) denotes the tangential derivative of f (see Definition 2.6).

Then, the author shows that a Carleson-type condition on the oscillation of the unit normal \(\nu \) to M guarantees a bi-Lipschitz parameterization of M.

Theorem 1.1

(see [14, Theorem 1.5]) Let \(M \subset B_{2}(0) \subset \mathbb {R}^{n+1}\) be an n-Ahlfors regular rectifiable set containing the origin, and let \(\mu = \mathcal {H}^{n} \llcorner M\) be the Hausdorff measure restricted to M. Assume that M satisfies the Poincaré-type inequality (1.1) with \(d=1\) and \(\lambda = 2\). There exists \(\epsilon _{0} = \epsilon _{0}(n, C_{M}, C_{P})>0\), such that if for some choice of unit normal \(\nu \) to M, we have

$$\begin{aligned} \sup _{x \in M} \int _{0}^{1} \fint _{B_{r}(x)} |\nu (y) - \nu _{x,r}|^{2} \, d\mu (y) \, \frac{dr}{r} \le \epsilon _{0}^{2}, \end{aligned}$$
(1.2)

where \(\nu _{x,r}\) denotes the average of \(\nu \) on \(B_{r}(x)\),

then \(M \cap B_{\frac{1}{10^{4}}}(0)\) is contained in the image of an affine n-plane by a bi-Lipschitz mapping, with bi-Lipschitz constant depending only on n, \(C_{M}\) and \(C_{P}\).

In this paper, we generalize Theorem 1.1 to higher co-dimensions d and arbitrary dilation constants \(\lambda \ge 1\). Before stating the theorem, let us introduce some notation. Suppose that \(M \subset \mathbb {R}^{n+d}\) is an n-Ahlfors regular rectifiable set that satisfies the Poincaré-type inequality (1.1). Fix \(x \in M\) and \(r >0\). Let \(y \in M \cap B_{r}(x)\) be such that the approximate tangent plane \(T_{y}M\) of M at the point y exists, and denote by \(\pi _{T_{y}M}\) the orthogonal projection of \(\mathbb {R}^{n+d}\) on \(T_{y}M\). Using the standard basis of \(\mathbb {R}^{n+d}\), \(\{ e_{1} , \ldots , e_{n+d} \}\), we can view \(\pi _{T_{y}M}\) as an \((n+d) \times (n+d)\) matrix whose jth column is the vector \(\pi _{T_{y}M}(e_{j})\). Thus, we denote \(\pi _{T_{y}M}\) by the matrix \(\big (a_{ij}(y)\big )_{ij}\). Finally, let \(A_{x,r} = ((a_{ij})_{x,r} )_{ij}\) be the matrix whose ijth entry is the average of the function \(a_{ij}\) in the ball \(B_{r}(x)\).
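To make this notation concrete, here is a small numerical sketch (our illustration, not from the paper; the subspaces and tilt angle are invented) of how the orthogonal projection onto an n-plane in \(\mathbb {R}^{n+d}\) is recorded as an \((n+d)\times (n+d)\) matrix whose jth column is \(\pi (e_{j})\), and how such matrices are averaged entrywise, as in the definition of \(A_{x,r}\):

```python
import math

# Sketch (illustrative): the projection onto a plane V in R^3 as a 3x3
# matrix, and the entrywise average of two such projection matrices.

def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(A, c):
    return [[c * a for a in row] for row in A]

def projection(onb):
    """Projection matrix onto span(onb); onb is an orthonormal basis."""
    P = [[0.0] * 3 for _ in range(3)]
    for w in onb:
        P = mat_add(P, outer(w, w))
    return P

# V = the xy-plane, with orthonormal basis e1, e2.
P = projection([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])

# The jth column of P is P(e_j); here P(e3) = 0 since e3 is normal to V.
col3 = [P[i][2] for i in range(3)]
trace_P = sum(P[i][i] for i in range(3))  # trace of a projection = n = 2

# Entrywise average of projections onto two nearby planes, as for A_{x,r}.
t = 0.1  # small tilt angle
P_tilted = projection([[1.0, 0.0, 0.0],
                       [0.0, math.cos(t), math.sin(t)]])
A = mat_scale(mat_add(P, P_tilted), 0.5)
```

Note that the entrywise average A is symmetric but generally no longer an exact projection; it is only close to one when the sampled tangent planes oscillate little, which is precisely what the condition (1.3) below quantifies.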

Theorem 1.2

Let \(M \subset B_{2}(0) \subset \mathbb {R}^{n+d}\) be an n-Ahlfors regular rectifiable set containing the origin, and let \(\mu = \mathcal {H}^{n} \llcorner M\) be the Hausdorff measure restricted to M. Assume that M satisfies the Poincaré-type inequality (1.1). There exist \(\epsilon _{0}= \epsilon _{0}(n,d, C_{M},C_{P}) >0\) and \(\theta _{0} = \theta _{0}(\lambda ) < 1\), such that if

$$\begin{aligned} \sup _{x \in M} \int _{0}^{1} \fint _{B_{r}(x)} |\pi _{T_{y}M} - A_{x,r}|^{2} \, d\mu (y) \, \frac{dr}{r} \le \epsilon _{0}^{2}, \end{aligned}$$
(1.3)

where \(|\pi _{T_{y}M} - A_{x,r}|\) denotes the Frobenius norm of \(\pi _{T_{y}M} - A_{x,r}\), then there exist an onto K-bi-Lipschitz map \(g: \mathbb {R}^{n+d} \rightarrow \mathbb {R}^{n+d}\), with bi-Lipschitz constant \(K = K(n,d, C_{M},C_{P})\), and an n-dimensional plane \(\Sigma _{0}\), with the following properties:

$$\begin{aligned} g(z)= z \quad \text {when } d(z, \Sigma _{0}) \ge 2, \end{aligned}$$
(1.4)

and

$$\begin{aligned} |g(z)-z| \le C_{0} \epsilon _{0} \quad \text {for } z \in \mathbb {R}^{n+d}, \end{aligned}$$
(1.5)

where \(C_{0}= C_{0}(n,d,C_{M},C_{P})\). Moreover,

$$\begin{aligned} g(\Sigma _{0})\,\, \text {is a} \,\,\, C_{0} \epsilon _{0} \text {-Reifenberg flat set}, \end{aligned}$$
(1.6)

and

$$\begin{aligned} M \cap B_{\theta _{0}}(0) \subset g(\Sigma _{0}). \end{aligned}$$
(1.7)

Notice that the conclusion of Theorem 1.2 states that M is (locally) contained in a bi-Lipschitz image of an n-plane instead of M being exactly a (local) bi-Lipschitz image of an n-plane. This is very much expected, since we do not assume that M is Reifenberg flat, and thus we have to deal with the fact that M might have holes. However, if we assume, in addition to the hypotheses of Theorem 1.2, that M is Reifenberg flat, then we do obtain that M is in fact (locally) a bi-Lipschitz image of an n-plane. We show this in this paper as a corollary to Theorem 1.2.

A natural question is whether the hypotheses of Theorem 1.2, that is, the Ahlfors regularity of M, the Poincaré inequality (1.1), and the Carleson condition (1.3), imply that M is Reifenberg flat. An affirmative answer to this question would directly imply (by the paragraph above) that the conclusion of Theorem 1.2 should be that M is exactly a bi-Lipschitz image of an n-plane instead of M being just contained in a bi-Lipschitz image of an n-plane. A negative answer would show that the conclusion of Theorem 1.2 is the best that we can hope for. It is not surprising that the Poincaré inequality (1.1) is the correct condition to explore in order to answer this question (which, as we discuss below, turns out to be negative). In fact, it is already known that (1.1) encodes geometric properties of the set M.

Let \((M, d_{0}, \mu )\) be a metric measure space, where \(M \subset B_{2}(0)\) is an n-Ahlfors regular rectifiable set in \(\mathbb {R}^{n+d}\), \(\mu = \mathcal {H}^{n} \llcorner M\) is the measure that lives on M, and \(d_{0}\) is the metric on M which is the restriction of the standard Euclidean metric on \(\mathbb {R}^{n+d}\). In [14], the author proves that the Poincaré inequality (1.1) implies that M is quasiconvex. More precisely,

Definition 1.3

A metric space (X, d) is \(\kappa _{1}\)-quasiconvex, for some constant \(\kappa _{1} \ge 1\), if for any two points x and y in X, there exists a rectifiable curve \(\gamma \) in X, joining x and y, such that \(\text {length}(\gamma ) \le \kappa _{1} \, d(x,y)\).

Theorem 1.4

(see [14, Theorem 5.5]) Let \((M, d_{0}, \mu )\) be as discussed above. Suppose that M satisfies the Poincaré-type inequality (1.1). Then \((M, d_{0}, \mu )\) is \(\kappa _{1}\)-quasiconvex, with \(\kappa _{1}= \kappa _{1}(n, \lambda , C_{M}, C_{P})\).

There are many Poincaré-type inequalities found in the literature that imply quasiconvexity (see for example [3, 8, 10, 11]). To state a couple of the main ones, let \((X, d, \nu )\) be a measure space endowed with a metric d and a positive complete Borel regular measure \(\nu \) supported on X. Denote by \(B^{X}_{r}(x)\) the metric ball in X, with center \(x \in X\) and radius \(r>0\). Moreover, assume that \(0< \nu (B_{r}^{X}(x)) < \infty \) for all \(x \in X\) and \(r>0\).

Definition 1.5

(p-Poincaré inequality) Let \(p \ge 1\). \((X, d, \nu )\) is said to admit a p-Poincaré inequality if there exist constants \(\kappa \ge 1\) and \(\lambda \ge 1\) such that for any measurable function \(u: X \rightarrow \mathbb {R}\) and for any upper gradient \(\rho \) (see Definition 2.12) of u, the following holds

$$\begin{aligned} \fint _{B^{X}_{r}(x)} |u - u_{x,r}| \, d\nu \le \kappa \, r \left( \fint _{B^{X}_{\lambda r}(x)} \rho ^{p} \, d\nu \right) ^{\frac{1}{p}}, \end{aligned}$$
(1.8)

where \(x \in X\), \(r>0\), and \(u_{x,r} = \fint _{B^{X}_{r}(x)} u \, d\nu \) is the average of u on \(B^{X}_{r}(x)\).

Definition 1.6

(Lip-Poincaré inequality) Let \(p \ge 1\). \((X, d, \nu )\) is said to admit a Lip-Poincaré inequality if there exist constants \(\kappa \ge 1\) and \(\lambda \ge 1\) such that for every Lipschitz function f on X, and for every \(x \in X\) and \(r>0\), we have

$$\begin{aligned} \fint _{B^{X}_{r}(x)} |f - f_{x,r}| \, d\nu \le \kappa \, r \left( \fint _{B^{X}_{\lambda r}(x)} (Lip f)^{p} \, d\nu \right) ^{\frac{1}{p}} \end{aligned}$$
(1.9)

(see Definition 2.15 for the definition of Lipf).

These Poincaré inequalities are a priori different because the right hand side varies according to the notion of “derivative” used on the metric space. However, Keith has shown (see [10, 11]) that if \((X, d, \nu )\) is a complete metric measure space with \(\nu \) a doubling measure, then (1.8) and (1.9) are equivalent. It turns out that the Poincaré-type inequality (1.1) is also related to (1.8) and (1.9).

In this paper, we take \((M, d_{0}, \mu )\) as described above and prove that in this setting, the Poincaré-type inequalities (1.1) [or a more generalized version of it, see (1.12) below], (1.8), and (1.9) are equivalent.

Theorem 1.7

Let \(p \ge 1\), and let \((M, d_{0}, \mu )\) be a metric measure space, where \(M \subset B_{2}(0)\) is an n-Ahlfors regular rectifiable set in \(\mathbb {R}^{n+d}\), \(\mu = \mathcal {H}^{n} \llcorner M\) is the measure that lives on M, and \(d_{0}\) is the metric on M which is the restriction of the standard Euclidean metric on \(\mathbb {R}^{n+d}\). Then, the following are equivalent:

  (i)

    There exist constants \(\kappa \ge 1\) and \(\lambda \ge 1\) such that for any measurable function \(u: M \rightarrow \mathbb {R}\), for any upper gradient \(\rho \) of u, and for every \(x \in M\) and \(r>0\), we have

    $$\begin{aligned} \fint _{B_{r}(x)} |u - u_{x,r}| \, d\mu \le \kappa \, r \left( \fint _{B_{\lambda r}(x)} \rho ^{p} \, d\mu \right) ^{\frac{1}{p}}; \end{aligned}$$
    (1.10)
  (ii)

    There exist constants \(\kappa \ge 1\) and \(\lambda \ge 1\), such that for every Lipschitz function f on M, and for every \(x \in M\) and \(r>0\), we have

    $$\begin{aligned} \fint _{B_{r}(x)} |f - f_{x,r}| \, d\mu \le \kappa \, r \left( \fint _{B_{\lambda r}(x)} (Lip f)^{p} \, d\mu \right) ^{\frac{1}{p}}; \end{aligned}$$
    (1.11)
  (iii)

    There exist constants \(\kappa \ge 1\) and \(\lambda \ge 1\), such that for every Lipschitz function f on \(\mathbb {R}^{n+d}\), and for every \(x \in M\) and \(r>0\), we have

    $$\begin{aligned} \fint _{B_{r}(x)} |f - f_{x,r}| \, d\mu \le \kappa \, r \left( \fint _{B_{\lambda r}(x)} |\nabla ^{M} f|^{p} \, d\mu \right) ^{\frac{1}{p}}. \end{aligned}$$
    (1.12)

Theorem 1.7 is interesting in its own right, as it shows that the Poincaré inequality (1.1) [or more generally, (1.12)] is equivalent to the other usual Poincaré-type inequalities on metric spaces that imply quasiconvexity. Moreover, Theorem 1.7 opens the door to many examples of spaces satisfying the Poincaré inequality (1.12), as there are many examples in the literature of spaces satisfying the p-Poincaré and Lip-Poincaré inequalities (see for example [1, 2, 9, 12]). This allows us to get an example of a set that is not Reifenberg flat, and yet satisfies all the hypotheses of Theorem 1.2.

Theorem 1.8

There exists a non-Reifenberg flat, n-Ahlfors regular, rectifiable set \(M \subset B_{2}(0) \subset \mathbb {R}^{n+d}\) that satisfies all the hypotheses of Theorem 1.2.

Theorem 1.8 shows that the hypotheses of Theorem 1.2 on the set M are not strong enough to guarantee its Reifenberg flatness, and thus the conclusion of Theorem 1.2 is optimal.

The paper is structured as follows: in Sect. 2, we introduce some definitions and preliminaries. In Sect. 3, we prove Theorem 1.2. Moreover, we prove that Theorem 1.1 follows as a corollary from Theorem 1.2. Section 4 is dedicated to proving that the Poincaré inequality (1.12) is equivalent to the p-Poincaré and the Lip-Poincaré inequalities. Finally, in the last section, we prove Theorem 1.8 by constructing a concrete example of a set that is not Reifenberg flat, yet satisfies the hypotheses of Theorem 1.2.

2 Preliminaries

Throughout this paper, our ambient space is \(\mathbb {R}^{n+d}\). \(B_{r}(x)\) denotes the open ball with center x and radius r in \(\mathbb {R}^{n+d}\), while \(\bar{B}_{r}(x)\) denotes the closed ball with center x and radius r in \(\mathbb {R}^{n+d}\). \(d(\cdot , \cdot )\) denotes the distance function from a point to a set. \(\mathcal {H}^{n}\) is the n-Hausdorff measure. Finally, constants may vary from line to line, and the parameters they depend on will always be specified in brackets. For example, C(n, d) will be a constant that depends on n and d that may vary from line to line.

We begin with the definitions needed from Sect. 3 onwards.

Definition 2.1

Let \(M \subset \mathbb {R}^{N_{1}}\). A function \( f: M \rightarrow \mathbb {R}^{N_{2}}\) is called Lipschitz if there exists a constant \(K>0\), such that for all \(x, \, y \in M\) we have

$$\begin{aligned} |f(x) - f(y)| \le K \, |x-y|. \end{aligned}$$
(2.1)

The smallest such constant is called the Lipschitz constant and is denoted by \(L_{f}\).

Definition 2.2

A function \( f: \mathbb {R}^{N_{1}} \rightarrow \mathbb {R}^{N_{2}}\) is called K-bi-Lipschitz if there exists a constant \(K>0\), such that for all \(x, \, y \in \mathbb {R}^{N_{1}}\) we have

\(K^{-1} |x-y| \le |f(x) - f(y)| \le K \, |x-y|.\)

Let us introduce the class of n-rectifiable sets, and the definition of approximate tangent planes.

Definition 2.3

Let \(M \subset \mathbb {R}^{n+d}\) be an \(\mathcal {H}^{n}\)-measurable set. M is said to be countably n-rectifiable if

$$\begin{aligned} M \subset M_{o} \cup \left( \displaystyle \bigcup _{i=1}^{\infty }f_{i}(A_{i})\right) , \end{aligned}$$

where \( \mathcal {H}^{n}(M_{o}) = 0\), \( f_{i} : A_{i} \rightarrow \mathbb {R}^{n+d}\) is Lipschitz, and \(A_{i} \subset \mathbb {R}^{n}\), for \(i = 1, 2, \ldots \)

Definition 2.4

Let M be an \(\mathcal {H}^{n}\)-measurable subset of \(\mathbb {R}^{n+d}\). We say that the n-dimensional subspace P(x) is the approximate tangent space of M at x if

$$\begin{aligned} \lim _{h \rightarrow 0} h^{-n} \int _{M} {f \left( h^{-1}(y-x)\right) } \, d \mathcal {H}^{n}(y) = \int _{P(x)} f(y) \, d\mathcal {H}^{n}(y) \quad \forall f \in C^1_c(\mathbb {R}^{n+d}, \mathbb {R}). \end{aligned}$$
(2.2)

Remark 2.5

Notice that if it exists, P(x) is unique. From now on, we shall denote the tangent space of M at x by \(T_{x}M\). Moreover, in the special case when M has co-dimension 1, then one can define the unit normal \(\nu \) to M at the point \(x \in M\) to be the unit normal to \(T_{x}M\). Thus, the unit normal \(\nu \) exists at every point \(x \in M\) that admits a tangent plane, and of course, there are two choices for the direction of the unit normal.

It is well known (see [18, Theorem 11.6]) that n-rectifiable sets have tangent planes at \(\mathcal {H}^{n}\) almost every point in the set.

Definition 2.6

Let f be a real valued Lipschitz function on \(\mathbb {R}^{n+d}\). The tangential derivative of f at the point \(y \in M\) is denoted by \(\nabla ^{M}f(y)\) and defined as follows:

$$\begin{aligned} \nabla ^{M}f(y) = \nabla (f|_{L}) (y) \end{aligned}$$
(2.3)

where \(L := y + T_{y}M\), \(f|_{L}\) is the restriction of f on the affine subspace L, and \(\nabla (f|_{L})\) is the usual gradient of \(f|_{L}\).

In the special case when f is a smooth function on \(\mathbb {R}^{n+d}\), we have

$$\begin{aligned} \nabla ^{M}f(y) = \pi _{T_{y}M} (\nabla f (y)), \end{aligned}$$
(2.4)

where \(\pi _{T_{y}M}\) is the orthogonal projection of \(\mathbb {R}^{n+d}\) on \(T_{y}M\), and \(\nabla f\) is the usual gradient of f.

Note that \(\nabla ^{M}f(y)\) exists at \(\mathcal {H}^{n}\)-almost every point in M.

We also need to define the notion of Reifenberg flatness:

Definition 2.7

Let M be an n-dimensional subset of \(\mathbb {R}^{n+d}\). We say that M is \(\epsilon \)-Reifenberg flat for some \(\epsilon >0\), if for every \(x \in M\) and \(0 < r \le \frac{1}{10^{4}}\), we can find an n-dimensional affine subspace P(xr) of \(\mathbb {R}^{n+d}\) that contains x such that

$$\begin{aligned} d(y, P(x,r)) \le \epsilon r \quad \text {for } y \in M \cap B_{r}(x), \end{aligned}$$

and

$$\begin{aligned} d(y, M) \le \epsilon r \quad \text {for } y \in P(x,r) \cap B_{r}(x). \end{aligned}$$

Remark 2.8

Notice that the above definition is only interesting if \(\epsilon \) is small, since any set is 1-Reifenberg flat.
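For intuition, the two one-sided conditions in Definition 2.7 can be estimated numerically on a sampled set. The following sketch (our invented example: the graph of \(y = 0.01 \sin x\) in \(\mathbb {R}^{2}\), with the x-axis as candidate plane) approximates both suprema on a finite grid:

```python
import math

# Sketch (illustrative, not from the paper): estimate the Reifenberg
# flatness of M = graph of y = 0.01*sin(x) in B_r(x0), taking the
# candidate plane P(x0, r) to be the x-axis.

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

x0, r = (0.0, 0.0), 0.5
ts = [k / 100.0 for k in range(-200, 201)]
M = [(t, 0.01 * math.sin(t)) for t in ts]   # sampled set
P = [(s, 0.0) for s in ts]                  # sampled candidate plane

M_ball = [p for p in M if dist(p, x0) <= r]
P_ball = [p for p in P if dist(p, x0) <= r]

# sup_{y in M ∩ B_r} d(y, P)  and  sup_{p in P ∩ B_r} d(p, M),
# normalized by r as in Definition 2.7.
d1 = max(min(dist(y, q) for q in P) for y in M_ball)
d2 = max(min(dist(q, y) for y in M_ball) for q in P_ball)
eps_est = max(d1, d2) / r
```

On this sample the estimate comes out far below, say, 0.05, reflecting that a graph with small oscillation is Reifenberg flat with small constant.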

In the proof of our Theorem 1.2, we need to measure the distance between two n-dimensional planes. We do so in terms of normalized local Hausdorff distance:

Definition 2.9

Let x be a point in \(\mathbb {R}^{n+d}\) and let \(r >0\). Consider two closed sets \(E,\,F \subset \mathbb {R}^{n+d}\) such that both sets meet the ball \(B_{r}(x)\). Then,

$$\begin{aligned} d_{x,r}(E,F) = \frac{1}{r} \, \text {Max} \left\{ \sup _{y \in E \cap B_{r}(x)} \text {dist}(y,F) \,\,; \sup _{y \in F \cap B_{r}(x)} \text {dist}(y,E) \right\} \end{aligned}$$

is called the normalized Hausdorff distance between E and F in \(B_{r}(x)\).
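As a quick numerical illustration (ours, not from the paper), the normalized local Hausdorff distance between two lines through the origin at angle \(\theta \), computed from point samples, is close to \(\sin \theta \):

```python
import math

# Sketch (illustrative): normalized local Hausdorff distance d_{x,r}
# between two sampled lines through the origin in R^2.

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_xr(E, F, x, r):
    E_ball = [p for p in E if dist(p, x) <= r]
    F_ball = [p for p in F if dist(p, x) <= r]
    sup_EF = max(min(dist(y, q) for q in F) for y in E_ball)
    sup_FE = max(min(dist(y, q) for q in E) for y in F_ball)
    return max(sup_EF, sup_FE) / r

theta = 0.2
ts = [k / 100.0 for k in range(-300, 301)]
E = [(t, 0.0) for t in ts]                                   # x-axis
F = [(t * math.cos(theta), t * math.sin(theta)) for t in ts]  # tilted line

d = d_xr(E, F, (0.0, 0.0), 1.0)   # expected close to sin(0.2) ~ 0.199
```

Note that \(\text {dist}(y,F)\) is taken to the whole set F, not just to \(F \cap B_{r}(x)\), matching the definition above.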

Let us recall the definition of an n-Ahlfors regular measure and an n-Ahlfors regular set:

Definition 2.10

Let \(M \subset \mathbb {R}^{n+d}\) be a closed, \(\mathcal {H}^{n}\)-measurable set, and let \(\mu = \mathcal {H}^{n} \llcorner M\) be the n-Hausdorff measure restricted to M. We say that \(\mu \) is n-Ahlfors regular if there exists a constant \(C_{M} \ge 1\), such that for every \(x \in M\) and \( 0< r \le 1\), we have

$$\begin{aligned} C_{M}^{-1} \, r^{n} \le \mu (B_{r}(x)) \le C_{M} \, r^{n}. \end{aligned}$$
(2.5)

In such a case, the set M is called an n-Ahlfors regular set, and \(C_{M}\) is referred to as the Ahlfors regularity constant.
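As a simple sanity check (our example, not the paper's): the unit circle in \(\mathbb {R}^{2}\) is 1-Ahlfors regular, since for x on the circle and \(0 < r \le 1\) the arc inside \(B_{r}(x)\) has length \(4 \arcsin (r/2)\), which is comparable to r:

```python
import math

# Sketch (illustrative): 1-Ahlfors regularity of the unit circle.
# For x on the circle, the arc in B_r(x) subtends half-angle
# phi = 2*arcsin(r/2) on each side, so mu(B_r(x)) = 4*arcsin(r/2).

def mu_ball(r):
    return 4.0 * math.asin(r / 2.0)

C_M = 3.0
radii = [k / 100.0 for k in range(1, 101)]
ok = all(r / C_M <= mu_ball(r) <= C_M * r for r in radii)
```

Here C_M = 3 works because \(2r \le 4\arcsin (r/2) \le \frac{2\pi }{3} r\) on \(0 < r \le 1\).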

Let us now move to definitions and notations needed in Sects. 4 and 5. In these sections, (Xd) denotes a space X endowed with a metric d. \(B^{X}_{r}(x)\) denotes the open metric ball of center \(x \in X\) and radius \(r>0\). Moreover, \((X, d, \nu )\) denotes a measure space endowed with a metric d and a positive complete Borel regular measure \(\nu \) supported on X such that \(0< \nu (B_{r}^{X}(x)) < \infty \) for all \(x \in X\) and \(r>0\).

Definition 2.11

Let \((X,d, \nu )\) be a metric measure space. We say that \(\nu \) is a doubling measure if there is a constant \(\kappa _{0} >0\) such that

$$\begin{aligned} \nu (B^{X}_{2r}(x) )\le \kappa _{0} \, \nu (B^{X}_{r}(x) ), \end{aligned}$$

for all \(x \in X\) and \(r>0\).

In Sects. 4 and 5, a curve \(\gamma \) in a metric space (X, d) is a continuous non-constant map from a compact interval \(I \subset \mathbb {R}\) into X. \(\gamma \) is said to be rectifiable if it has finite length, denoted by \(l(\gamma )\). Any rectifiable curve can thus be parametrized by arc length, and we will always assume that it is.

Let us now define the notions of upper gradients, p-weak upper gradients, and the Local Lipschitz constant function.

Definition 2.12

A non-negative Borel function \(\rho : X \rightarrow [0, \infty ]\) is said to be an upper gradient of a function \(u: X \rightarrow \mathbb {R}\) if

$$\begin{aligned} |u(\gamma (0)) - u(\gamma (l_{\gamma }))| \le \int _{\gamma } \rho \, ds, \end{aligned}$$

for any rectifiable curve \(\gamma : [0, l_{\gamma }] \rightarrow X\).

Definition 2.13

Let \(p \ge 1\) and let \(\Gamma \) be a family of rectifiable curves on X. We define the p-modulus of \(\Gamma \) by

$$\begin{aligned} \text {Mod}_{p}(\Gamma ) = \inf \int _{X} g^{p} \, d \nu \end{aligned}$$

where the infimum is taken over all nonnegative Borel functions g such that \(\int _{\gamma } g \, ds \ge 1\) for all \(\gamma \in \Gamma \).

Definition 2.14

A non-negative measurable function \(\rho : X \rightarrow [0, \infty ]\) is said to be a p-weak upper gradient of a function \(u: X \rightarrow \mathbb {R}\) if

$$\begin{aligned} |u(\gamma (0)) - u(\gamma (l_{\gamma }))| \le \int _{\gamma } \rho \, ds, \end{aligned}$$

for p-a.e. rectifiable curve \(\gamma : [0, l_{\gamma }] \rightarrow X\) (that is, with the exception of a curve family of zero p-modulus).

Definition 2.15

Let f be a Lipschitz function on a metric measure space \((X,d,\nu )\). The local Lipschitz constant function of f is defined as follows

$$\begin{aligned} Lipf(x) = \lim _{r \rightarrow 0} \sup _{y \in B^{X}_{r}(x), \,y \ne x} \frac{|f(y) - f(x)|}{d(y,x)}, \quad x \in X, \end{aligned}$$
(2.6)

where \(B_{r}^{X}(x)\) denotes the metric ball in X, center x, and radius r.
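Numerically (our toy example, not from the paper), the limit in (2.6) can be approximated by evaluating the sup over shrinking radii; for \(f(x) = x^{2}\) on the real line one recovers \(Lip f(x_{0}) = |2x_{0}|\):

```python
# Sketch (illustrative): approximate the local Lipschitz constant
# Lip f(x0) of f(x) = x^2 on R; it equals |2*x0|.

def lip_approx(f, x0, r, samples=1000):
    """sup over a grid of 0 < |y - x0| <= r of |f(y)-f(x0)| / |y-x0|."""
    best = 0.0
    for k in range(1, samples + 1):
        h = r * k / samples
        for y in (x0 - h, x0 + h):
            best = max(best, abs(f(y) - f(x0)) / h)
    return best

f = lambda x: x * x
x0 = 1.5
# As r shrinks, the sup decreases toward Lip f(x0) = 3.
vals = [lip_approx(f, x0, r) for r in (1.0, 0.1, 0.01)]
```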

Remark 2.16

Let us note here that for any Lipschitz function f, \(L_{f}\) denotes the usual Lipschitz constant [see sentence below (2.1)], whereas Lipf(.) stands for the local Lipschitz constant function defined above.

3 A bi-Lipschitz parameterization of M

The main goal in this section is to prove Theorem 1.2. We begin with three linear algebra lemmas needed to prove the theorem, as they can be stated and proved independently.

Lemma 3.1

Let V be an n-dimensional subspace of \(\mathbb {R}^{n+d}\). Denote by \(\pi _{V}\) the orthogonal projection on V. Then, there exists a \(\delta _{0}= \delta _{0}(n,d) > 0\), such that for any \(\delta \le \delta _{0}\), and for any linear operator L on \(\mathbb {R}^{n+d}\) such that

$$\begin{aligned} || \pi _{V} - L || \le \delta , \end{aligned}$$
(3.1)

where \(|| . ||\) denotes the induced operator norm, the operator L has exactly n eigenvalues \(\lambda _{1}, \ldots , \lambda _{n}\) such that

$$\begin{aligned} |\lambda _{j}| \ge 1 - (n+d) \, \delta \ge \frac{3}{4}, \quad \forall \, j \in \{ 1 , \ldots , n\}, \end{aligned}$$
(3.2)

and exactly d eigenvalues \(\lambda _{n+1}, \ldots , \lambda _{n+d}\), such that

$$\begin{aligned} |\lambda _{j}| \le (n+d) \, \delta \le \frac{1}{4}, \quad \forall \, j \in \{ n+1 , \ldots , n+d\}. \end{aligned}$$
(3.3)

Proof

Since \(\pi _{V}\) is an orthogonal projection, then there exists an orthonormal basis \(\{ w_{1} , \ldots , w_{n+d} \} \) of \(\mathbb {R}^{n+d}\) such that the matrix representation of \(\pi _{V}\) in this basis is

$$\begin{aligned} \pi _{V} = \begin{pmatrix} Id_{n} &{} 0 \\ 0 &{} 0 \end{pmatrix} \end{aligned}$$

where \(Id_{n}\) denotes the \(n \times n\) identity matrix.

Let \(\delta < \delta _{0}\) (with \(\delta _{0}\) to be determined later), and suppose L is as in the statement of the lemma. Let \(L = (l_{ij})_{ij}\) be the matrix representation of L in the basis \(\{ w_{1} , \ldots , w_{n+d} \} \). Then, by (3.1), we have

$$\begin{aligned} |\pi _{V} w_{j} - L w_{j} |^{2} \le \delta ^{2}, \quad \forall \, j \in \{ 1 \ldots n+d \}, \end{aligned}$$

that is,

$$\begin{aligned} |1 - l_{jj}|^{2} + \sum _{i\ne j} |l_{ij}|^{2} \le \delta ^{2}, \quad \forall \, j \in \{ 1 \ldots n \}, \end{aligned}$$
(3.4)

and

$$\begin{aligned} \sum _{i=1} ^{n+d} |l_{ij}|^{2} \le \delta ^{2}, \quad \forall \, j \in \{ n+1 \ldots n+d \}. \end{aligned}$$
(3.5)

Now, for each \( j \in \{ 1 \ldots n+d\}\), consider the closed disk \(D_{j}\) in the complex plane, of center \((l_{jj}, 0)\) and radius \(R_{j} = \sum _{i \ne j} |l_{ij}|\). Notice that by (3.4), (3.5), and the fact that \(\delta < \delta _{0}\), we have

$$\begin{aligned}&|1 - l_{jj} | \le \delta \le \delta _{0}, \quad \forall \, j \in \{ 1 \ldots n \}, \end{aligned}$$
(3.6)
$$\begin{aligned}&|l_{jj}| \le \delta \le \delta _{0}, \quad \forall \, j \in \{ n+1 \ldots n+d \}, \end{aligned}$$
(3.7)

and

$$\begin{aligned} R_{j} \le (n+d-1) \delta \le (n+d-1) \, \delta _{0}, \quad \forall \, j \in \{1 \ldots n+d \}. \end{aligned}$$
(3.8)

Choosing \(\delta _{0}\) such that \((n+d-1) \delta _{0} \le \frac{1}{8}\), we can guarantee that \( \bigcup _{j=1}^{n}D_{j}\) is disjoint from \( \bigcup _{j=n+1}^{n+d}D_{j}\). Thus, by the Gershgorin circle theorem (see [13, pp. 277–278]), \( \bigcup _{j=1}^{n}D_{j}\) contains exactly n eigenvalues of L, and \( \bigcup _{j=n+1}^{n+d}D_{j}\) contains exactly d eigenvalues of L. The lemma follows from (3.6)–(3.8). \(\square \)
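The disk-separation argument in the proof can be illustrated numerically (our toy matrix, with n = 2 and d = 1; the perturbation entries are invented): perturb the projection onto the xy-plane in \(\mathbb {R}^{3}\) by entries of size at most \(\delta \), and check that the Gershgorin disks (with column radii, as in the proof) split into a cluster near 1 and a cluster near 0:

```python
# Sketch (toy example, n = 2, d = 1): Gershgorin disk separation for a
# small perturbation L of the projection pi_V onto the xy-plane in R^3.

delta = 0.02
pi_V = [[1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0]]
# Invented perturbation with all entries bounded by delta.
E = [[0.01, -0.005, 0.008],
     [0.002, -0.01, 0.0],
     [-0.004, 0.006, 0.01]]
L = [[pi_V[i][j] + E[i][j] for j in range(3)] for i in range(3)]

# Disk centers l_jj and column radii R_j = sum_{i != j} |l_ij|,
# as in the proof (eigenvalues of L equal those of its transpose).
centers = [L[j][j] for j in range(3)]
radii = [sum(abs(L[i][j]) for i in range(3) if i != j) for j in range(3)]

near_one = [j for j in range(3) if centers[j] + radii[j] >= 0.5]
near_zero = [j for j in range(3) if centers[j] + radii[j] < 0.5]
separated = (all(centers[j] - radii[j] > 0.25 for j in near_one)
             and all(centers[j] + radii[j] < 0.25 for j in near_zero))
```

By Gershgorin's theorem, the separation of the two disk unions forces exactly n = 2 eigenvalues near 1 and d = 1 eigenvalue near 0, as the lemma asserts.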

Notation Let V be an affine subspace of \(\mathbb {R}^{n+d}\) of dimension k, \( k \in \{ 0, \dots , n-1\}\). Denote by \(N_{\delta }(V)\), the \(\delta \)-neighborhood of V, that is,

$$\begin{aligned} N_{\delta }(V) = \{ x \in \mathbb {R}^{n+d} \text { such that } d(x,V) < \delta \}. \end{aligned}$$

Lemma 3.2

(see [14, Lemma 3.1]) Let M be an n-Ahlfors regular subset of \(\mathbb {R}^{n+d}\), and let \(\mu = \mathcal {H}^{n} \llcorner M\) be the Hausdorff measure restricted to M. There exists a constant \(c_{0} = c_{0}(n,d, C_{M}) \le \displaystyle \frac{1}{2}\) such that the following is true: Fix \(x_{0} \in M\), \(r_{0} < 1\) and let \(r = c_{0} \, r_{0}\). Then, for every V, an affine subspace of \(\mathbb {R}^{n+d}\) of dimension \(0 \le k \le n-1\), there exists \(x \in M \cap B_{r_{0}}(x_{0})\) such that \(x \notin N_{11 r}(V)\) and \(B_{r}(x) \subset B_{2 r_{0}}(x_{0})\).

Lemma 3.3

(see [14, Lemma 3.3]) Fix \(R>0\), and let \(\{u_{1}, \ldots u_{n} \}\) be n vectors in \(\mathbb {R}^{n+d}\). Suppose there exists a constant \(K_{0} >0\) such that

$$\begin{aligned} |u_{j}| \le K_{0} \, R \quad \forall j \in \{1,\ldots , n\}. \end{aligned}$$
(3.9)

Moreover, suppose there exists a constant \(0< k_{0} < K_{0}\), such that

$$\begin{aligned} |u_{1}|\ge k_{0} \, R, \end{aligned}$$
(3.10)

and

$$\begin{aligned} u_{j} \notin N_{k_{0}R} (span\{u_{1}, \ldots u_{j-1}\} ) \quad \forall j \in \{2,\ldots , n\}. \end{aligned}$$
(3.11)

Then, every vector \(v \in V:= span\{u_{1}, \ldots , u_{n}\} \) can be written uniquely as

$$\begin{aligned} v = \sum _{j=1}^{n} \beta _{j}u_{j}, \end{aligned}$$
(3.12)

where

$$\begin{aligned} |\beta _{j}| \,\le K_{1}\frac{1}{R} \, |v|, \quad \forall j \in \{1,\ldots , n\} \end{aligned}$$
(3.13)

with \(K_{1}\) being a constant depending only on n, \(k_{0}\), and \(K_{0}\).
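Numerically (our toy vectors in \(\mathbb {R}^{3}\) with n = 2; the values of R, \(k_{0}\), \(K_{0}\) are invented), the conclusion of Lemma 3.3 can be checked by recovering the coefficients \(\beta _{j}\) from the Gram system and comparing \(|\beta _{j}|\) with \(|v|/R\):

```python
import math

# Sketch (toy example): u1, u2 in R^3 satisfy (3.9)-(3.11) with R = 2,
# K0 = 2, k0 = 0.5. Recover the coefficients beta_j of v in {u1, u2}
# via the 2x2 Gram system (Cramer's rule) and form |beta_j| * R / |v|.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

R = 2.0
u1 = [R, 0.0, 0.0]       # |u1| = R >= k0 * R
u2 = [0.5 * R, R, 0.0]   # dist(u2, span{u1}) = R >= k0 * R

beta = (0.3, -0.7)       # build v = beta1*u1 + beta2*u2
v = [beta[0] * x + beta[1] * y for x, y in zip(u1, u2)]

# Solve G b = (u_j . v) where G is the Gram matrix of u1, u2.
g11, g12, g22 = dot(u1, u1), dot(u1, u2), dot(u2, u2)
b1, b2 = dot(u1, v), dot(u2, v)
det = g11 * g22 - g12 * g12
beta_rec = ((b1 * g22 - b2 * g12) / det, (g11 * b2 - g12 * b1) / det)

norm_v = math.sqrt(dot(v, v))
ratio = max(abs(b) for b in beta_rec) * R / norm_v  # a lower bound for K1
```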

Throughout the rest of the paper, M denotes an n-Ahlfors regular rectifiable subset of \(\mathbb {R}^{n+d}\) and \(\mu = \mathcal {H}^{n} \llcorner M\) denotes the Hausdorff measure restricted to M. The average of a function f on the ball \(B_{r}(x)\) is denoted by

$$\begin{aligned} f_{x,r} = \fint _{B_{r}(x)} f \, d\mu = \frac{1}{\mu (B_{r}(x))} \int _{B_{r}(x)} f \, d\mu . \end{aligned}$$
(3.14)

We recall the statement of Theorem 1.2: if M satisfies the Poincaré-type condition (1.1), and the Carleson-type condition (1.3) on the oscillation of the tangent planes to M is satisfied, then M is contained in a bi-Lipschitz image of an n-dimensional plane.

To prove this theorem, we follow steps similar to those used in [14] to prove the co-dimension 1 case (see Theorem 1.5 in [14]) which is stated as Theorem 1.1 in this paper. First, we define what we call the \(\alpha \)-numbers

$$\begin{aligned} \alpha (x,r) = \left( \fint _{B_{r}(x)} |\pi _{T_{y}M} - A_{x,r}|^{2} \, d\mu (y) \right) ^{\frac{1}{2}}, \end{aligned}$$
(3.15)

where \(x \in M\), and \(0 < r \le \displaystyle \frac{1}{10}\), \(\pi _{T_{y}M}\) has \( ( a_{ij}(y) )_{ij}\) as its matrix representation in the standard basis of \(\mathbb {R}^{n+d}\), and \(A_{x,r}= ( (a_{ij})_{x,r} )_{ij}\) is the matrix whose ijth entry is the average of the function \(a_{ij}\) in the ball \(B_{r}(x)\).
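Concretely (with invented sample data), on a finite sample of points the \(\alpha \)-number at (x, r) is computed by averaging the projection matrices entrywise to get \(A_{x,r}\) and then averaging the squared Frobenius deviations:

```python
import math

# Sketch (invented sample data): compute an alpha-number from a finite
# family of projection matrices pi_{T_y M} sampled in a ball B_r(x).
# A = entrywise average; alpha^2 = average squared Frobenius deviation.

def frob2(A, B):
    return sum((a - b) ** 2 for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def line_proj(t):
    """Projection in R^2 onto the line at angle t."""
    c, s = math.cos(t), math.sin(t)
    return [[c * c, c * s], [c * s, s * s]]

# Tangent lines tilting slightly from point to point (small oscillation).
samples = [line_proj(t) for t in (0.0, 0.05, -0.05, 0.1)]
m = len(samples)
A = [[sum(P[i][j] for P in samples) / m for j in range(2)]
     for i in range(2)]
alpha = math.sqrt(sum(frob2(P, A) for P in samples) / m)
```

Small tilts produce a small alpha, consistent with Lemma 3.4 below: the Carleson condition forces these quantities to be small at every point and scale.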

These numbers are the key ingredient to proving our theorem. In Lemma 3.4, we show that the Carleson condition (1.3) implies that these numbers are small at every point \(x \in M\) and every scale \(0< r < \frac{1}{10}\). Moreover, for every point \(x \in M\), the series \( \sum _{j=1}^{\infty } \alpha ^2(x, 10^{-j})\) is finite. Then, in Theorem 3.5, we use the Poincaré-type inequality to get an n-plane \(P_{x,r}\) at every point \(x \in M\) and every scale \(0<r \le \frac{1}{10 \lambda }\) such that the distance (in integral form) from \(M \cap B_{r}(x)\) to \(P_{x,r}\) is bounded by \(\alpha (x,\lambda r)\). This means, by Lemma 3.4, that those distances are small, and for a fixed point x, the sum of these distances over the scales \( 10^{-j}\), \(j \in \mathbb {N}\), is finite. Theorem 3.5 is the key point that allows us to use the bi-Lipschitz parameterization that G. David and T. Toro construct in [7]. In fact, what they do is construct approximating n-planes, and prove that at any two points that are close together, the two planes associated to these points at the same scale, or at two consecutive scales, are close in the Hausdorff distance sense. From there, they construct a bi-Hölder parameterization for M. Then, they show that the sum of these distances at scales \( 10^{-j}\) for \(j \in \mathbb {N}\) is finite (uniformly for every \(x \in M\)). This is what is needed for their parameterization to be bi-Lipschitz (see Theorem 3.7 below and the definition before it). Thus, the rest of the proof is devoted to using Theorem 3.5 in order to prove the compatibility conditions between the approximating planes mentioned above.

Note that, in the process of proving Theorem 1.2, several parts of the proof are very similar to the proof of the co-dimension 1 case found in [14] (see Theorem 1.5 in [14] or Theorem 1.1 in this paper). In fact, most of the differences occur in Lemma 3.4 and Theorem 3.5, with the most important difference being in the latter. The rest of the proof follows closely the proof of the co-dimension 1 case. Thus, in this paper we proceed as follows: first, we prove Lemma 3.4 and Theorem 3.5 and include all the details. Then, for the rest of the proof (that is, introducing the David and Toro bi-Lipschitz construction, and proving the compatibility conditions between the approximating planes that allow us to use this construction), we only give an outline of the main ideas, and leave the smaller details and tedious calculations out. However, in each place where the details are omitted, we refer the reader to the parts of the proof of Theorem 1.5 in [14] where they can be found. That being said, this part of the proof of Theorem 1.2 still has enough detail for the reader to understand all the steps needed to get the bi-Lipschitz parameterization of M, and the intuition behind them. Moreover, the way the proof is presented here includes all the information that we need from the construction of the bi-Lipschitz parameterization of M to prove the corollaries that follow from Theorem 1.2.

Let us begin with Lemma 3.4 that decodes the Carleson condition (1.3).

Lemma 3.4

Let \(M \subset B_{2}(0)\) be an n-Ahlfors regular rectifiable set containing the origin, and let \(\mu = \mathcal {H}^{n} \llcorner M\) be the Hausdorff measure restricted to M. Let \(\epsilon > 0\), and suppose that

$$\begin{aligned} \sup _{x \in M} \int _{0}^{1} \fint _{B_{r}(x)} |\pi _{T_{y}M} - A_{x,r}|^{2} \, d\mu (y) \, \frac{dr}{r} \le \epsilon ^{2}. \end{aligned}$$
(3.16)

Then, for every \(x \in M\), we have

$$\begin{aligned} \sum _{k=1}^{\infty } \alpha ^{2}(x, 10^{-k}) \le C \, \epsilon ^{2} , \end{aligned}$$
(3.17)

where the \(\alpha \)-numbers are as defined in (3.15) and \(C = C(n, C_{M})\). Moreover, for every \(x \in M\) and \(0 < r \le \displaystyle \frac{1}{10}\), we have

$$\begin{aligned} \alpha (x,r) \le C \, \epsilon , \end{aligned}$$
(3.18)

where \(C = C(n, C_{M})\).

Proof

Let \(\epsilon > 0\) and suppose that (3.16) holds. By the definition of the Frobenius norm, (3.16) becomes

$$\begin{aligned} \sup _{x \in M} \int _{0}^{1} \fint _{B_{r}(x)} \sum _{i,j=1}^{n+d} |a_{ij}(y) - (a_{ij})_{x,r}|^{2} \, d\mu (y) \, \frac{dr}{r} \le \epsilon ^{2}, \end{aligned}$$
(3.19)

where \(\pi _{T_{y}M} = \big (a_{ij}(y)\big )_{ij}\) and \(A_{x,r} = \big ((a_{ij})_{x,r}\big )_{ij}\).

Fix \(x \in M\), and fix \(i, \, j \in \{ 1, \ldots n+d\}\). For all \(a \in \mathbb {R}\), and for all \(0 < r_{0} \le 1\), we have

(3.20)

since the average \((a_{ij})_{x,r_{0}}\) of \(a_{ij}\) in the ball \(B_{r_{0}}(x)\) minimizes the integrand on the right hand side of (3.20).
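The minimizing property invoked for (3.20) is the elementary variance identity: for any constant a, the mean squared deviation of \(a_{ij}\) from a exceeds that from the average by exactly the square of the distance from a to the average. As an informal aside (not part of the proof), this can be sketched numerically, with random samples standing in for the \(\mu \)-average over \(B_{r_{0}}(x)\):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=1000)          # samples standing in for a_ij on B_{r0}(x)
mean = f.mean()                     # the average (a_ij)_{x,r0}

def msd(a):
    """Mean squared deviation of the samples f from the constant a."""
    return np.mean((f - a) ** 2)

# Variance identity: msd(a) = msd(mean) + (mean - a)^2, so the mean minimizes msd.
for a in (-1.0, 0.0, 0.5, 2.0):
    assert abs(msd(a) - (msd(mean) + (mean - a) ** 2)) < 1e-12
    assert msd(a) >= msd(mean)
```

The identity shows in particular that replacing \((a_{ij})_{x,r_{0}}\) by any other constant can only increase the integral, which is exactly how (3.20) is used below.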

To prove (3.17), we note that

(3.21)

This is a straightforward computation that uses (3.20) and the Ahlfors regularity of \(\mu \); it is carried out in detail in [14] (see [14], Lemma 4.1, proof of inequality (4.6)). Moreover, it is trivial to check that

(3.22)

Thus, plugging (3.22) in (3.21), we get

(3.23)

Since (3.23) is true for every \( i, \, j \in \{ 1, \ldots , n+d\}\), we can take the sum over i and j on both sides of (3.23), and using (3.15) and (3.19), we get

$$\begin{aligned} \sum _{k=1}^{\infty } \alpha ^{2}(x, 10^{-k}) \le C(n,C_{M}) \, \epsilon ^{2} , \end{aligned}$$

which is exactly (3.17).

To prove inequality (3.18), fix \(x \in M\) and \(0 < r \le \displaystyle \frac{1}{10}\). Then, there exists \(k \ge 1\) such that

$$\begin{aligned} 10^{-k-1}< r \le 10^{-k}, \quad \text {that is} \quad 10^{k} \le \frac{1}{r} < 10^{k+1}. \end{aligned}$$
(3.24)

Now, fix \(i, \, j \in \{ 1, \ldots , n+d\}\). Using inequality (3.20) for \(a = (a_{ij})_{x, 10^{-k}}\) and \(r_{0} = r\), (3.24), and the fact that \(\mu \) is Ahlfors regular, we get that

(3.25)

Summing over i and j on both sides of (3.25), and using the definition of the Frobenius norm together with (3.15), we get

$$\begin{aligned} \alpha ^{2}(x,r) \le C(n,C_{M}) \, \alpha ^{2}(x, 10^{-k}). \end{aligned}$$
(3.26)

Taking the square root on both sides of (3.26) and using (3.17) finishes the proof of (3.18). \(\square \)
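The scale-selection step (3.24) used in the proof above is elementary: every radius \(0 < r \le \frac{1}{10}\) sits in exactly one dyadic decade \((10^{-k-1}, 10^{-k}]\). As an informal aside, here is a sketch (the function name `dyadic_scale` is ours, purely for illustration):

```python
def dyadic_scale(r):
    """Return the unique k >= 1 with 10**-(k+1) < r <= 10**-k, for 0 < r <= 1/10."""
    assert 0 < r <= 0.1
    k = 1
    while 10.0 ** -(k + 1) >= r:   # increase k until 10^{-k-1} < r
        k += 1
    return k

assert dyadic_scale(0.1) == 1      # 10^-2 < 0.1  <= 10^-1
assert dyadic_scale(0.05) == 1     # 10^-2 < 0.05 <= 10^-1
assert dyadic_scale(0.003) == 2    # 10^-3 < 0.003 <= 10^-2
```

Once k is located this way, (3.25) compares the average at the arbitrary scale r with the average at the dyadic scale \(10^{-k}\), at the cost of a constant depending only on the Ahlfors regularity.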

Next, we use the Poincaré inequality to get good approximating n-planes for M at every point \(x \in M\) and at every scale \(0< r \le \frac{1}{10 \lambda }\). In this context, a good approximating n-plane at the point \(x \in M\) and radius r is a plane \(P_{x,r}\) such that the distance (in integral form) from \(M \cap B_{r}(x)\) to \(P_{x,r}\) is small.

Theorem 3.5

Let \(M \subset B_{2}(0)\) be an n-Ahlfors regular rectifiable set containing the origin, and let \(\mu = \mathcal {H}^{n} \llcorner M\) be the Hausdorff measure restricted to M. Assume that M satisfies the Poincaré-type inequality (1.1). There exists \(\epsilon _{1} = \epsilon _{1}(n,d,C_{M}) > 0\) such that for every \(0 < \epsilon \le \epsilon _{1}\), if

(3.27)

then for every \(x \in M\) and \(0< r \le \displaystyle \frac{1}{10 \lambda }\), there exists an affine n-dimensional plane \(P_{x,r}\) such that

(3.28)

where \(C= C(n,d,C_{P})\).

Proof

Fix \(x \in M\) and \(0 < r \le \displaystyle \frac{1}{10 \lambda }\). Let \(\epsilon \le \epsilon _{1}\) (with \(\epsilon _{1}\) to be determined later) be such that (3.27) is satisfied. By (3.15), (3.18) from Lemma 3.4, and the fact that \(\lambda r \le \displaystyle \frac{1}{10}\), we have

(3.29)

From (3.29) and the fact that M is rectifiable (so approximate tangent planes exist \(\mu \)-a.e.), it is easy to check that there exists \(y_{0} \in B_{\lambda r}(x) \cap M\) such that \(T_{y_{0}}M\) exists, and

$$\begin{aligned} |\pi _{T_{y_{0}}M} - A_{x,\lambda r}| \le \alpha (x,\lambda r) \le C_{1} \, \epsilon , \end{aligned}$$

where \(C_{1}\) is a (fixed) constant depending only on n and \(C_{M}\). Comparing the operator norm with the Frobenius norm (the operator norm is at most the Frobenius norm), we get

$$\begin{aligned} ||\pi _{T_{y_{0}}M} - A_{x,\lambda r}|| \le \alpha (x,\lambda r) \le C_{1} \, \epsilon \le C_{1} \epsilon _{1}. \end{aligned}$$
(3.30)

Let \(\delta _{0}\) be the constant from Lemma 3.1, and choose \(\epsilon _{1} \le \displaystyle \frac{\delta _{0}}{C_{1}}\). Then (3.30) becomes

$$\begin{aligned} ||\pi _{T_{y_{0}}M} - A_{x,\lambda r}|| \le \alpha (x,\lambda r) \le \delta _{0}, \end{aligned}$$

and by Lemma 3.1 (with \(\delta = \alpha (x,\lambda r)\), \(V = T_{y_{0}}M\), and \(L = A_{x,\lambda r}\)), we deduce that \(A_{x,\lambda r}\) has exactly n eigenvalues \(\lambda ^{1}_{x,\lambda r}, \ldots , \lambda ^{n}_{x,\lambda r}\) satisfying \( |\lambda ^{i}_{x,\lambda r}| \ge 1 - c \, \alpha (x,\lambda r)\) for all \(i \in \{ 1, \ldots , n\}\), and exactly d eigenvalues \(\lambda ^{n+1}_{x,\lambda r}, \ldots , \lambda ^{n+d}_{x,\lambda r}\) such that

$$\begin{aligned} |\lambda ^{i}_{x,\lambda r}| \le C(n,d) \, \alpha (x,\lambda r) \quad \forall \, i \in \{ n+1, \ldots , n+d\} . \end{aligned}$$
(3.31)

Since \(A_{x,\lambda r}\) is a real symmetric matrix, \(n+d\) eigenvectors of the matrix \(A_{x,\lambda r}\), say \(v^{1}_{x,\lambda r} , \ldots , v^{n+d}_{x,\lambda r}\) (each corresponding to exactly one of the \(n+d\) eigenvalues mentioned above), can be chosen to be orthonormal. In particular, \(v^{1}_{x,\lambda r} , \ldots , v^{n+d}_{x,\lambda r}\) are unit, linearly independent vectors such that

$$\begin{aligned} A_{x,\lambda r}v^{i}_{x,\lambda r} = \lambda ^{i}_{x,\lambda r}v^{i}_{x,\lambda r} \quad \forall \, i \in \{ 1, \ldots n+d\} . \end{aligned}$$
(3.32)
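As an informal numerical illustration of this eigenvalue dichotomy (not part of the proof), one can average projection matrices onto slightly tilted copies of a fixed n-plane and inspect the spectrum of the resulting symmetric matrix. The simulated average below stands in for \(A_{x,\lambda r}\); the dimensions, tilt size, and sample count are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2, 3                        # co-dimension d > 1, as in the present setting
N = n + d

def projection(basis):
    """Orthogonal projection matrix onto the span of the rows of `basis`."""
    q, _ = np.linalg.qr(basis.T)   # orthonormalize the spanning vectors
    return q @ q.T

# Average of projections onto slightly tilted copies of the plane span{e_1, e_2}.
base = np.eye(N)[:n]
projs = [projection(base + 0.05 * rng.normal(size=(n, N))) for _ in range(200)]
A = np.mean(projs, axis=0)         # stand-in for A_{x, lambda r}

eig = np.sort(np.linalg.eigvalsh(A))   # A is symmetric, so eigvalsh applies
# n eigenvalues close to 1 and d eigenvalues close to 0, as Lemma 3.1 predicts.
assert np.all(eig[:d] < 0.1) and np.all(eig[d:] > 0.9)
```

The closer the tilted planes stay to the fixed plane (small \(\alpha (x,\lambda r)\)), the closer the large eigenvalues are to 1 and the small ones to 0.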

Let us now turn our attention to the last d eigenvectors and eigenvalues. For \(i \in \{ n+1 , \ldots , n+d \}\), consider the function \(f_{i}\) on \(\mathbb {R}^{n+d}\) defined by

$$\begin{aligned} f_{i}(y) = \langle y, v^{i}_{x,\lambda r} \rangle , \quad y \in \mathbb {R}^{n+d}. \end{aligned}$$

Notice that \(f_{i}\) is a smooth function on \(\mathbb {R}^{n+d}\), and for every point \(y \in M\) where the tangent plane \(T_{y}M\) exists (which, again, is the case at \(\mu \)-almost every point of M), we have

$$\begin{aligned} |\nabla ^{M}f_{i}(y)| \le | \pi _{T_{y}M} - A_{x,\lambda r}| + |\lambda ^{i}_{x,\lambda r}| . \end{aligned}$$
(3.33)

In fact,

$$\begin{aligned} \nabla ^{M}f_{i}(y) = \pi _{T_{y}M} (\nabla f_{i}(y) ) = \pi _{T_{y}M}(v^{i}_{x,\lambda r}) = (\pi _{T_{y}M} - A_{x,\lambda r})(v^{i}_{x,\lambda r}) + A_{x,\lambda r}v^{i}_{x,\lambda r}. \end{aligned}$$

Thus, using the definition of the operator norm, the fact that \(v^{i}_{x,\lambda r}\) is a unit vector, (3.32), and the fact that the operator norm of a matrix is at most its Frobenius norm, we get

$$\begin{aligned} | \nabla ^{M}f_{i}(y)|\le & {} |(\pi _{T_{y}M} - A_{x,\lambda r})(v^{i}_{x,\lambda r})| + |A_{x,\lambda r}v^{i}_{x,\lambda r}| \\\le & {} ||\pi _{T_{y}M} - A_{x,\lambda r}|| + |\lambda ^{i}_{x,\lambda r}| \le |\pi _{T_{y}M} - A_{x,\lambda r}| + |\lambda ^{i}_{x,\lambda r}| . \end{aligned}$$

Now, applying the Poincaré inequality to the function \(f_{i}\) and the ball \(B_{r}(x)\), and using (3.33), we get

(3.34)

But \(v^{i}_{x, \lambda r}\) is a constant vector, so (3.34) can be rewritten as

(3.35)

that is,

(3.36)

Using (3.31) and (3.15), (3.36) becomes

(3.37)

Since (3.37) is true for every \( i \in \{n+1, \ldots , n+d \}\), we can take the sum over i on both sides of (3.37) to get

(3.38)

We are now ready to choose our plane \(P_{x,r}\). Take \(P_{x,r}\) to be the n-plane passing through the point \(c_{x,r}\), the centre of mass of \(\mu \) in the ball \(B_{r}(x)\), and such that \(P_{x,r} - c_{x,r} = \text {span} \{ v^{1}_{x,\lambda r}, \ldots , v^{n}_{x,\lambda r} \}\). In other words, \((P_{x,r} - c_{x,r})^{\perp } = \text {span} \{ v^{n+1}_{x,\lambda r}, \ldots , v^{n+d}_{x,\lambda r} \}\). Here \((P_{x,r} - c_{x,r})^{\perp }\) denotes the d-plane of \(\mathbb {R}^{n+d}\) perpendicular to the n-plane \(P_{x,r} - c_{x,r}\).

For \(y \in B_{r}(x)\), we have that

$$\begin{aligned} d(y, P_{x,r})= & {} d(y - c_{x,r}, P_{x,r} - c_{x,r}) = \left| \sum _{i=n+1}^{n+d} \langle y - c_{x,r} , v^{i}_{x,\lambda r} \rangle v^{i}_{x,\lambda r}\right| \nonumber \\\le & {} \sum _{i=n+1}^{n+d} \left| \langle y - c_{x,r} , v^{i}_{x,\lambda r}\rangle \right| \end{aligned}$$
(3.39)

Dividing by r and taking the average over \(B_{r}(x)\) on both sides of (3.39), and using the definition of \(c_{x,r}\), we get

where the last inequality comes from (3.38).

Thus, by the definition of \(\alpha (x,\lambda r)\) (see (3.15)), we get (3.28) and the proof is done. \(\square \)
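As an informal numerical companion to the choice of \(P_{x,r}\) above (not part of the proof), the following sketch replaces the averaged tangent projection by an empirical covariance matrix (a PCA-style stand-in, an assumption made only for this illustration), takes the plane through the centre of mass spanned by the top-n eigenvectors, and checks the elementary bound (3.39) on a sampled, nearly flat set:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 2, 2
N = n + d

# A nearly flat sample set (a stand-in for M ∩ B_r(x)): a slightly bent graph
# over the plane spanned by e_1, e_2 in R^{n+d}.
u = rng.uniform(-1, 1, size=(500, n))
pts = np.hstack([u, 0.05 * np.sin(2 * u[:, :1]), 0.05 * np.cos(3 * u[:, 1:2])])

c = pts.mean(axis=0)                        # centre of mass c_{x,r}
cov = (pts - c).T @ (pts - c) / len(pts)    # empirical stand-in for A_{x,λr}
w, V = np.linalg.eigh(cov)                  # eigenvalues in ascending order
normal = V[:, :d]                           # bottom-d eigenvectors span (P_{x,r} - c_{x,r})^⊥

# (3.39): d(y, P_{x,r}) = |Σ_i ⟨y - c, v^i⟩ v^i| ≤ Σ_i |⟨y - c, v^i⟩|.
comps = (pts - c) @ normal                  # components along v^{n+1}, ..., v^{n+d}
dist = np.linalg.norm(comps, axis=1)
assert np.all(dist <= np.abs(comps).sum(axis=1) + 1e-12)
assert dist.mean() < 0.1                    # the sample set hugs the chosen plane
```

The exact distance to the plane is the Euclidean norm of the d orthogonal components, which (3.39) bounds by their sum; averaging that sum over the ball is what produces the \(\alpha \)-number bound in (3.28).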

As mentioned earlier, we want to use the construction of the bi-Lipschitz map given by David and Toro in their paper [7]. To do that, we introduce the notion of a coherent collection of balls and planes. Here, we follow the steps given by David and Toro (see [7], Chapter 2).

First, let \(l_{0} \in \mathbb {N}\) be such that \(10^{l_{0}} \le \lambda \le 10^{l_{0}+1}\), set \(r_{k} = 10^{-k-l_{0} - 5}\) for \( k \in \mathbb {N}\), and let \(\epsilon \) be a small number (to be chosen later) that depends only on n and d. Choose a collection \(\{x_{jk} \}, \, \, j \in J_{k}\) of points in \(\mathbb {R}^{n+d}\), so that

$$\begin{aligned} |x_{jk} - x_{ik}| \ge r_{k} \quad \text {for } i,j \in J_{k}, \, i \ne j. \end{aligned}$$
(3.40)

Set \(B_{jk} := B_{r_{k}}(x_{jk})\) and \(V_{k}^{\lambda } := \bigcup _{j \in J_{k}} \lambda B_{jk} = \bigcup _{j \in J_{k}} B_{\lambda r_{k}}(x_{jk}),\,\) for \(\lambda > 1\).

We also require our collection \(\{x_{jk} \}, \, \, j \in J_{k}\), \(k \ge 1\), to satisfy

$$\begin{aligned} x_{jk} \in V_{k-1}^{2} \quad \text {for } k \ge 1 \quad \text {and} \quad j \in J_{k}. \end{aligned}$$
(3.41)
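Nets satisfying separation conditions such as (3.40) (and (3.51) below) can be produced greedily. As an informal aside, the following sketch (with hypothetical random data, not part of the actual construction) verifies both the separation and the maximality of a greedy net:

```python
import numpy as np

def maximal_net(points, r):
    """Greedily select a maximal subset whose pairwise distances are >= r."""
    net = []
    for p in points:
        if all(np.linalg.norm(p - q) >= r for q in net):
            net.append(p)
    return np.array(net)

rng = np.random.default_rng(3)
cloud = rng.uniform(-1, 1, size=(400, 2))
net = maximal_net(cloud, r=0.25)

# Separation, as in (3.40): distinct net points are at least r apart.
dists = np.linalg.norm(net[:, None, :] - net[None, :, :], axis=-1)
assert np.all(dists[~np.eye(len(net), dtype=bool)] >= 0.25)
# Maximality: every point of the cloud lies within r of some net point.
assert np.all(np.min(np.linalg.norm(cloud[:, None] - net[None], axis=-1), axis=1) < 0.25)
```

Maximality is what yields covering statements such as (3.52) below: any point not within r of the net could have been added, contradicting maximality.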

Suppose that our initial net \(\{x_{j0} \}\) is close to an n-dimensional plane \(\Sigma _{0}\), that is

$$\begin{aligned} d(x_{j0},\Sigma _{0}) \le \epsilon \quad \forall \, j \in J_{0}. \end{aligned}$$
(3.42)

For each \(k \ge 0\) and \(j \in J_{k}\), suppose we are given an n-dimensional plane \(P_{jk}\), passing through \(x_{jk}\), such that the following compatibility conditions hold:

$$\begin{aligned}&d_{x_{i0},100r_{0}}(P_{i0}, \Sigma _{0}) \le \epsilon \,\,\,\, \text {for} \,\, i \in J_{0}, \end{aligned}$$
(3.43)
$$\begin{aligned}&d_{x_{ik},100r_{k}}(P_{ik},P_{jk}) \le \epsilon \,\,\,\, \text {for}\, k\ge 0 \,\,\,\text {and} \,\,\, i,j \in J_{k} \,\,\, \text {such that} \,\,\, |x_{ik} - x_{jk}| \le 100r_{k},\nonumber \\ \end{aligned}$$
(3.44)

and

$$\begin{aligned} d_{x_{ik},20r_{k}}(P_{ik},P_{j,k+1}) \le \epsilon \,\,\, \text {for}\, k\ge 0 \,\,\,\text {and} \,\,\, i \in J_{k},\,j \in J_{k+1} \,\, \text {such that} \,\,\, |x_{ik} - x_{j,k+1}| \le 2r_{k}.\nonumber \\ \end{aligned}$$
(3.45)

We can now define a coherent collection of balls and planes:

Definition 3.6

A coherent collection of balls and planes (in short, a CCBP) is a triple \((\Sigma _{0}, \{B_{jk} \}, \{P_{jk}\})\) for which properties (3.40)–(3.45) above are satisfied, with a prescribed \(\epsilon \) that is small enough and depends only on n and d.

Theorem 3.7

(see Theorem 2.4 in [7]) There exists \(\epsilon _{2} > 0\) depending only on n and d, such that the following holds: if \(\epsilon \le \epsilon _{2}\), and \((\Sigma _{0}, \{B_{jk} \}, \{P_{jk}\})\) is a CCBP (with \(\epsilon \)), then there exists a bijection \(g: \mathbb {R}^{n+d} \rightarrow \mathbb {R}^{n+d}\) with the following properties:

$$\begin{aligned} g(z)= z \quad \text {when} \,\,\, d(z, \Sigma _{0}) \ge 2, \end{aligned}$$
(3.46)

and

$$\begin{aligned} |g(z)-z| \le C^{'}_{0} \epsilon \quad \text {for} \,\,\, z \in \mathbb {R}^{n+d}, \end{aligned}$$
(3.47)

where \(C^{'}_{0}= C^{'}_{0}(n,d)\). Moreover, \(g(\Sigma _{0})\) is a \(C^{'}_{0} \epsilon \)-Reifenberg flat set that contains the accumulation set

$$\begin{aligned} E_{\infty }= & {} \Bigg \{x \in \mathbb {R}^{n+d}; \,\, x\,\, \text {can be written as} \\&x = \lim _{m \rightarrow \infty } x_{j(m),k(m)}, \,\, \text {with}\,\,k(m) \in \mathbb {N}, \\&\text {and} \,\, j(m) \in J_{k(m)} \,\, \text {for}\,\, m \ge 0 \,\, \text {and} \,\, \lim _{m \rightarrow \infty } k(m) = \infty \Bigg \} . \end{aligned}$$

In [7], David and Toro give a sufficient condition for g to be bi-Lipschitz that we want to use in our proof. To state this condition, we need some technical details from the construction of the map g from Theorem 3.7. So, let us briefly discuss the construction here: David and Toro defined a mapping f whose goal is to push a small neighborhood of \(\Sigma _{0}\) towards a final set, which they proved to be Reifenberg flat. They obtained f as a limit of the composed functions \(f_{k} = \sigma _{k-1} \circ \cdots \circ \sigma _{0}\) where each \(\sigma _{k}\) is a smooth function that moves points near the planes \(P_{jk}\) at the scale \(r_{k}\). More precisely,

$$\begin{aligned} \sigma _{k} (y) = y + \sum _{j \in J_{k}} \theta _{jk}(y)[\pi _{jk}(y)-y], \end{aligned}$$
(3.48)

where \(\{\theta _{jk}\}_{j \in J_{k}, k\ge 0}\) is a partition of unity with each \(\theta _{jk}\) supported on \(10B_{jk}\), and \(\pi _{jk}\) denotes the orthogonal projection from \(\mathbb {R}^{n+d}\) onto the plane \(P_{jk}\).

Since f in their construction was defined on \(\Sigma _{0}\), g was defined to be the extension of f to the whole space.
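As an informal sketch of one step of the form (3.48) (not the actual David–Toro maps): the function below moves a point toward nearby planes, weighted by compactly supported bumps. For simplicity we take co-dimension 1 planes, and we normalize crude bump functions into a partition of unity, which is a simplification of the actual construction:

```python
import numpy as np

def sigma_step(y, centers, normals, bump_radius):
    """One step in the spirit of (3.48): move y toward nearby planes, weighted
    by bump functions. Plane j passes through centers[j] with unit normal
    normals[j] (co-dimension 1, purely for illustration)."""
    move = np.zeros_like(y)
    weights = 0.0
    for c, nu in zip(centers, normals):
        t = np.linalg.norm(y - c) / bump_radius
        theta = max(0.0, 1.0 - t) ** 2          # crude compactly supported bump
        pi_y = y - np.dot(y - c, nu) * nu       # orthogonal projection onto the plane
        move += theta * (pi_y - y)
        weights += theta
    return y + move / weights if weights > 0 else y

# Two horizontal planes through nearby centers: the step flattens the point.
centers = [np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.0, 0.0])]
normals = [np.array([0.0, 0.0, 1.0])] * 2
y = np.array([0.2, 0.1, 0.3])
z = sigma_step(y, centers, normals, bump_radius=2.0)
assert abs(z[2]) < abs(y[2])        # moved toward the planes
assert np.allclose(z[:2], y[:2])    # tangential coordinates untouched
```

Iterating such steps at the scales \(r_{k}\) is what produces \(f_{k}\), and the compatibility conditions (3.43)–(3.45) are what keep consecutive steps from undoing each other.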

Corollary 3.8

(see Proposition 11.2 in [7]) Suppose we are in the setting of Theorem 3.7. Define the quantity

$$\begin{aligned} \epsilon ^{'}_{k}(y)= & {} \sup \{d_{x_{im},100 r_{m}}(P_{jk},P_{im}); \,\,\, j \in J_{k}, \,\, i \in J_{m},\,\,\, m \in \{k,k-1\},\nonumber \\&\,\,\text {and}\,\, y \in 10B_{jk} \cap 11B_{im} \} \end{aligned}$$
(3.49)

for \(k \ge 1 \,\, \text {and} \,\,y \in V_{k}^{10}\), and \(\epsilon _{k}^{'}(y)=0 \,\, \text {when}\,\, y \in \mathbb {R}^{n+d} {\setminus } V_{k}^{10}\) (that is, when there are no pairs (j, k) as above). If there exists \(N > 0\) such that

$$\begin{aligned} \sum _{k=0}^{\infty } \epsilon ^{'}_{k}(f_{k}(z))^{2} < N, \end{aligned}$$
(3.50)

then the map g constructed in Theorem 3.7 is K-bi-Lipschitz, where the bi-Lipschitz constant \(K = K(n,d,N)\).

We are finally ready to prove Theorem 1.2.

Proof of Theorem 1.2

Proof

As mentioned before, from here on, the proof of this theorem is essentially the same as that of its co-dimension 1 analogue found in [14] (Theorem 1.5 in [14]). In fact, the essential differences between the proofs of Theorem 1.2 and its co-dimension 1 analogue occur in Lemma 3.4 and Theorem 3.5. Thus, we continue this proof by outlining the main ideas and referring the reader to the proof of Theorem 1.5 in [14] for a more detailed proof.

Let \(\epsilon _{0} > 0\) (to be determined later), and suppose that (1.3) holds. Let \(\epsilon _{2}\) be the constant from Theorem 3.7. We would like to apply Theorem 3.7 for \(\epsilon = \epsilon _{2}\), and then Corollary 3.8. So our first goal is to construct a CCBP, and we do that in several steps:

Let us start with a collection \(\{\tilde{x}_{jk}\},\, j \in J_{k}\) of points in \(M \cap B_{\frac{1}{10^{l_{0} +4}}}(0)\) that is maximal under the constraint

$$\begin{aligned} |\tilde{x}_{jk} - \tilde{x}_{ik}| \ge \displaystyle \frac{4r_{k}}{3} \quad \text {when } i,j \in J_{k}\quad \text {and}\quad i \ne j. \end{aligned}$$
(3.51)

Of course, we can arrange matters so that the point 0 belongs to our initial maximal set, at scale \(r_{0}\). Thus, \(0 = \tilde{x}_{i_{0},0} \) for some \(i_{0} \in J_{0}\). Notice that for every \(k \ge 0\), we have

$$\begin{aligned} M \cap B_{\frac{1}{10^{l_{0} +4}}}(0)\subset \displaystyle \bigcup _{j \in J_{k}}\bar{B}_{\frac{4r_{k}}{3}}(\tilde{x}_{jk}). \end{aligned}$$
(3.52)

Later, we choose

$$\begin{aligned} x_{jk} \in M \cap B_{ \frac{r_{k}}{6}}(\tilde{x}_{jk}), \quad j \in J_{k} . \end{aligned}$$
(3.53)

By (3.52) and (3.53), we can see

$$\begin{aligned} M \cap B_{\frac{1}{10^{l_{0} +4}}}(0)\subset \displaystyle \bigcup _{j \in J_{k}}\bar{B}_{\frac{4r_{k}}{3}}(\tilde{x}_{jk}) \subset \displaystyle \bigcup _{j \in J_{k}}B_{\frac{3r_{k}}{2}}(x_{jk}) . \end{aligned}$$
(3.54)

Using (3.51), (3.53), and (3.54), it is easy to see that the collection \(\{x_{jk} \},\,\,\, j \in J_{k}\) satisfies (3.40) and (3.41) (for details, see [14, p. 23]).

Next, we choose our planes \(P_{jk}\) and our collection \(\{ x_{jk} \}\), for \(k \ge 0\) and \( j \in J_{k}\). Fix \(k\ge 0\) and \(j \in J_{k}\). Let \(\epsilon _{1}\) be the constant from Theorem 3.5. For

$$\begin{aligned} \epsilon _{0} \le \epsilon _{1}, \end{aligned}$$
(3.55)

we apply Theorem 3.5 to the point \(\tilde{x}_{jk}\) (by construction \(\tilde{x}_{jk} \in M\)) and radius \(120r_{k}\) (notice that \(120\,r_{k} \le \frac{1}{10 \lambda }\)) to get an n-plane \(P_{\tilde{x}_{jk},120r_{k}}\), denoted in this proof by \( P^{'}_{jk}\) for simplicity, such that

(3.56)

Thus, by (3.56) and the fact that \(\mu \) is Ahlfors regular, there exists \(x_{jk} \in M \cap B_{ \frac{r_{k}}{6}}(\tilde{x}_{jk})\) such that

(3.57)

Let \(P_{jk}\) be the plane parallel to \(P^{'}_{jk}\) and passing through \(x_{jk}\). From (3.56), (3.57), and the fact that the two planes are parallel, we see that (see [14], p. 24)

(3.58)

To summarize what we did so far, we have chosen n-dimensional planes \(P_{jk}\) for \(k\ge 0\) and \(j \in J_{k}\) where each \(P_{jk}\) passes through \(x_{jk}\), and satisfies (3.58). Notice that (3.58) shows that \(P_{jk}\) is a good approximating plane to M in the ball \(B_{120r_{k}}(\tilde{x}_{jk})\).

We want to get our CCBP with \(\epsilon _{2}\). Thus, we show that (3.42)–(3.45) hold with \(\epsilon = \epsilon _{2}\). Since the proofs of these inequalities are the same as the proofs of their analogue inequalities in the co-dimension 1 case, we only outline their proofs here (see [14, p. 25–p. 31] for a detailed proof of the inequalities).

Outline of the proofs for (3.44) and (3.45)

Inequalities (3.44) and (3.45) can be proved simultaneously. Fix \(k \ge 0\) and \(j \in J_{k}\); let \(m \in \{k, k-1 \}\) and \(i \in J_m\) such that \(|x_{jk} - x_{im}| \le 100r_{m}\). We want to show that \(P_{jk}\) and \(P_{im}\) are close together. To do that, we construct n linearly independent vectors that “effectively” span \(P_{jk}\), (that is, these vectors span \(P_{jk}\), and are far away from each other in a uniform quantitative manner), and that are close to \(P_{im}\). More precisely, using Lemma 3.2 inductively, together with (3.58), we can prove the following claim:

Claim 1 Denote by \(\pi _{jk}\) the orthogonal projection of \(\mathbb {R}^{n+d}\) onto the plane \(P_{jk}\). Let \(r = c_{0} \, r_{k}\), where \(c_{0} \le \frac{1}{2}\) is the constant from Lemma 3.2 depending only on n, d, and \(C_{M}\). There exists \(C_{1} = C_{1}(n,d,C_{M},C_{P})\), such that if \(C_{1} \epsilon _{0} \le 1\), then there exists a sequence of \(n+1\) balls \(\{B_{r}(y_{l})\}_{l=0}^{n}\), such that

  (1)

    \(\forall \, l \in \{ 0, \ldots n\}\), we have \(y_{l} \in M\) and \(B_{r}(y_{l}) \subset B_{2r_{k}}(\tilde{x}_{jk}).\)

  (2)

    \(q_{1} - q_{0} \notin B_{5r}(0)\), and \(\forall \, l \in \{ 2, \ldots n\}\), we have \(q_{l} - q_{0} \notin N_{5r}\big (span \{q_{1} - q_{0}, \ldots , q_{l-1} - q_{0} \}\big ),\)

where \(q_{l} = \pi _{jk}(p(y_{l}))\) and \(p(y_{l})\) is the centre of mass of \(\mu \) in the ball \(B_{r}(y_{l})\).

Now, on one hand, notice that

$$\begin{aligned} P_{jk} - q_{0} = span \{q_{1} - q_{0} , \ldots , q_{n} - q_{0} \}. \end{aligned}$$
(3.59)

On the other hand, by the definition of \(p(y_{l})\), Jensen's inequality applied to the convex function \(\phi (\cdot ) = d(\cdot ,P_{jk})\), the fact that \(\mu \) is Ahlfors regular, \(B_{r}(y_{l}) \subset B_{2r_{k}}(\tilde{x}_{jk})\), \(r = c_{0}\, r_{k}\), and (3.58), we have that

$$\begin{aligned} d (p(y_{l}),P_{jk} ) \le C(n,d, C_{M}, C_{P}) \, \alpha (\tilde{x}_{jk},120 \lambda \,r_{k}) \, r_{k}, \quad \forall \, l \in \{ 0, \ldots n\}.\end{aligned}$$
(3.60)

Similarly, we have that

$$\begin{aligned} d (p(y_{l}),P_{im} ) \le C(n,d, C_{M}, C_{P}) \, \alpha (\tilde{x}_{im},120 \lambda \,r_{m}) \, r_{m}, \quad \forall \, l \in \{ 0, \ldots n\}. \end{aligned}$$
(3.61)

Thus, combining (3.60) and (3.61), we directly get

$$\begin{aligned} d (q_{l},P_{im}) \le C(n,d,C_{M},C_{P}) \, ( \alpha (\tilde{x}_{jk},120 \lambda r_{k})\,r_{k} + \alpha (\tilde{x}_{im}, 120 \lambda r_{m}) \, r_{m} ), \,\, \forall \, l \in \{ 0, \ldots n\}.\nonumber \\ \end{aligned}$$
(3.62)

To compute the distance between \(P_{jk}\) and \(P_{im}\), let \(y \in P_{jk} \cap B_{\rho }(x_{im})\) where \(\rho \in \{20r_m, 100r_{m}\}\). By (3.59), y can be written uniquely as

$$\begin{aligned} y = q_{0} + \sum _{l=1}^{n} \beta _{l}(q_{l} - q_{0}). \end{aligned}$$
(3.63)

Using Lemma 3.3 (with \(u_{l} = q_{l} - q_{0}\), \(R = r\), and \(v = y - q_{0}\)) to get an upper bound on the \(\beta _{l}\)'s that show up in (3.63), together with (3.62), we get that

$$\begin{aligned} d\big (y, P_{im}\big ) \le C(n,d,C_{M},C_{P}) ( \alpha (\tilde{x}_{jk}, 120\lambda r_{k})\,r_{k} + \alpha (\tilde{x}_{im}, 120 \lambda r_{m}) \, r_{m} ) \end{aligned}$$

Thus,

$$\begin{aligned} d_{x_{im}, \rho } (P_{jk}, P_{im}) \le c \, ( \alpha (\tilde{x}_{jk},120 \lambda r_{k}) + \alpha (\tilde{x}_{im},120 \lambda r_{m}) )\quad \rho \in \{20r_m, 100r_{m}\}.\qquad \end{aligned}$$
(3.64)

Now, by Lemma 3.4, we know that \( \alpha (\tilde{x}_{jk},120 \lambda r_{k}) \le C(n,C_{M}) \, \epsilon _{0}\), and \(\alpha (\tilde{x}_{im},120 \lambda r_{m}) \le C(n,C_{M}) \, \epsilon _{0}\). Thus, (3.64) becomes

$$\begin{aligned} d_{x_{im}, \rho } (P_{jk}, P_{im}) \le C(n,d,C_{M},C_{P}) \epsilon _{0} \quad \rho \in \{20r_m, 100r_{m}\}. \end{aligned}$$
(3.65)

So, we have shown that there exist two constants \(C_{2}\) and \(C_{3}\), each depending only on n, d, \(C_{M}\), and \(C_{P}\), such that

$$\begin{aligned} d_{x_{ik},100r_{k}}(P_{ik},P_{jk}) \le C_{2} \,\epsilon _{0} \quad \text {for } k\ge 0 \text { and } i,j \in J_{k} \text { such that } |x_{ik} - x_{jk}| \le 100r_{k},\nonumber \\ \end{aligned}$$
(3.66)

and

$$\begin{aligned}&d_{x_{ik},20r_{k}}(P_{ik},P_{j,k+1}) \le C_{3} \,\epsilon _{0} \quad \text {for } k\ge 0 \text { and } i \in J_{k},\nonumber \\&\quad j \in J_{k+1} \text { such that } |x_{ik} - x_{j,k+1}| \le 2r_{k}. \end{aligned}$$
(3.67)

For

$$\begin{aligned} C_{2} \,\epsilon _{0} \le \epsilon _{2} \quad \text {and} \quad C_{3} \,\epsilon _{0} \le \epsilon _{2}, \end{aligned}$$
(3.68)

we get (3.44) and (3.45).

Outline of the proofs for (3.42) and (3.43)

We start with (3.43). Recall that \(0 = \tilde{x}_{i_{0},0}\) for some \(i_{0} \in J_{0}\). Choose \(\Sigma _{0}\) to be the plane \(P_{i_{0},0}\) described above (recall that \(P_{i_{0},0}\) passes through \(x_{i_{0},0}\), where \(r_{0} = 10^{-l_{0} - 5}\)). Then, we need to show that

$$\begin{aligned} d_{x_{j0},100r_{0}}(P_{j0}, P_{i_{0},0}) \le \epsilon _{2} \quad \text {for } j \in J_{0}. \end{aligned}$$
(3.69)

Fix \(j \in J_{0}\), and take the corresponding \(x_{j0}\). Since by construction \(|\tilde{x}_{j0}| < \displaystyle \frac{1}{10^{l_{0}+4}}\), and since (3.53) says that \( |x_{j0} - \tilde{x}_{j0}| \le \displaystyle \frac{r_{0}}{6}\), we have

$$\begin{aligned} |x_{j0}| \le \frac{r_{0}}{6} + \frac{1}{10^{l_{0}+4}},\quad j \in J_{0}. \end{aligned}$$
(3.70)

Moreover, by (3.53) and the fact that \(0 = \tilde{x}_{i_{0},0}\), we have

$$\begin{aligned} |x_{i_{0},0} - \tilde{x}_{i_{0},0}| = |x_{i_{0},0}| \le \frac{r_{0}}{6}. \end{aligned}$$
(3.71)

Combining (3.70) and (3.71), and using the fact that \(r_{0} = 10^{-l_{0}-5}\) (so that \(\frac{1}{10^{l_{0}+4}} = 10 r_{0}\)), we get

$$\begin{aligned} |x_{j0} - x_{i_{0},0} | \le \frac{r_{0}}{6} + \frac{1}{10^{l_{0}+4}} + \frac{r_{0}}{6} \le \frac{r_{0}}{6} + 10 r_{0} + \frac{r_{0}}{6} \le 100r_{0}. \end{aligned}$$
(3.72)

Thus, by (3.44) for \(x_{ik} = x_{j0}\), \(P_{ik} = P_{j0}\), and \(P_{jk} = P_{i_{0},0}\), we get exactly (3.69), hence finishing the proof of (3.43).

It remains to show (3.42) with \(\epsilon = \epsilon _{2}\), that is

$$\begin{aligned} d(x_{j0}, P_{i_{0},0}) \le \epsilon _{2} ,\quad \text {for } j \in J_{0}. \end{aligned}$$
(3.73)

Notice that since \(x_{j0} \in P_{j0}\), (3.42) follows directly from (3.43).

We finally have our CCBP. Now, by the proof of Theorem 3.7 [see paragraph above (3.48)] we get the smooth maps \(\sigma _{k} \,\, \text {and} \,\, f_{k} = \sigma _{k-1} \circ \cdots \circ \sigma _{0} \,\, \text {for} \,\, k \ge 0\), and then the map \(f = \lim _{k \rightarrow \infty } f_{k}\) defined on \(\Sigma _{0}\), and finally the map g that we want.

Moreover, by Theorem 3.7, we know that \(g: \mathbb {R}^{n+d} \rightarrow \mathbb {R}^{n+d}\) is a bijection with the following properties:

$$\begin{aligned}&g(z)= z \quad \text {when } d(z, \Sigma _{0}) \ge 2, \end{aligned}$$
(3.74)
$$\begin{aligned}&|g(z)-z| \le C^{'}_{0} \epsilon _{2} \quad \text {for } z \in \mathbb {R}^{n+d}, \end{aligned}$$
(3.75)

and

$$\begin{aligned} g(\Sigma _{0}) \quad \text {is a } C^{'}_{0} \epsilon _{2} \text {-Reifenberg flat set}. \end{aligned}$$
(3.76)

Fix \(\epsilon _{0}\) such that (3.55), (3.68), and the hypothesis of Claim 1 are all satisfied. Notice that by the choice of \(\epsilon _{0}\), we can write \(\epsilon _{0} = c_{4}\, \epsilon _{2}\), where \(c_{4} = c_{4}(n,d,C_{M},C_{P})\). Hence, from (3.74)–(3.76), we directly get (1.4)–(1.6).

Next, we show that

$$\begin{aligned} M \cap B_{\frac{1}{10^{l_{0}+4}}}(0) \subset g(\Sigma _{0}). \end{aligned}$$
(3.77)

Fix \(x \in M \cap B_{\frac{1}{10^{l_{0}+4}}}(0) \). Then, by (3.54), we see that for all \(k \ge 0\), there exists a point \(x_{jk}\) such that \(|x - x_{jk}| \le \displaystyle \frac{3r_{k}}{2}\), and hence \(x \in E_{\infty } \subset g(\Sigma _{0})\) (\(E_{\infty }\) is the set defined in Theorem 3.7). Since x was an arbitrary point in \(M \cap B_{\frac{1}{10^{l_{0}+4}}}(0)\), (3.77) is proved. This shows that (1.7) holds for \(\theta _{0} := \frac{1}{10^{l_{0}+4}}\).

We still need to show that g is bi-Lipschitz. By Corollary 3.8, it suffices to show (3.50). To do that, we need the following inequality from [7] (see inequality (6.8), p. 27 in [7]):

$$\begin{aligned} |f(z) - f_{k}(z)| \le C(n,d) \epsilon _{2} \, r_{k} \quad \text {for } k \ge 0 \text { and } z \in \Sigma _{0}. \end{aligned}$$
(3.78)

Let \(z \in \Sigma _{0}\), and choose \(\bar{z} \in M \cap B_{\frac{1}{10^{l_{0}+4}}}(0)\) such that

$$\begin{aligned} |\bar{z} - f(z)| \le 2 \, d(f(z),M \cap B_{\frac{1}{10^{l_{0}+4}}}(0)). \end{aligned}$$
(3.79)

Fix \(k \ge 0\), and consider the index \(m \in \{k,k-1\}\) and the indices \(j \in J_{k}\) and \(i \in J_{m}\) such that \(f_{k}(z) \in 10B_{jk} \cap 11B_{im}\). We show that

$$\begin{aligned} d_{x_{im},100 r_{m}}(P_{jk},P_{im}) \le C(n,d,C_{M},C_{P}) \, \alpha (\bar{z},r_{k-l_{0}-5}) \quad \text {for } k \ge 1. \end{aligned}$$
(3.80)

In fact, by (3.79) and (3.78), and since \(\tilde{x}_{jk} \in M \cap B_{\frac{1}{10^{l_{0}+4}}}(0)\), \(|\tilde{x}_{jk} - x_{jk}| \le \displaystyle \frac{r_{k}}{6}\), and \(f_{k}(z) \in 10B_{jk}\), one can show that (see [14, p. 32–33] for a detailed proof)

$$\begin{aligned} B_{120 \lambda r_{m}}(\tilde{x}_{im}) \cup B_{120 \lambda r_{k}}(\tilde{x}_{jk}) \subset B_{r_{k-l_{0}-5}}(\bar{z}). \end{aligned}$$
(3.81)

Now, writing \(\pi _{T_{y}M} = (a_{pq}(y) )_{pq}\), and using the definition of the Frobenius norm, together with (3.20) for \( a = (a_{pq})_{\bar{z},r_{k-l_{0}-5}}\), (3.81), and the fact that \(\mu \) is Ahlfors regular, we get

and thus,

$$\begin{aligned} \alpha (\tilde{x}_{jk}, 120 \lambda r_{k}) \le C(n,C_{M})\, \alpha (\bar{z},r_{k-l_{0}-5}). \end{aligned}$$
(3.82)

Similarly, we can show that

$$\begin{aligned} \alpha (\tilde{x}_{im}, 120 \lambda r_{m}) \le C(n,C_{M})\, \alpha (\bar{z},r_{k-l_{0}-5}). \end{aligned}$$
(3.83)

Plugging (3.82) and (3.83) in (3.64) for \(\rho = 100r_{m}\), we get

$$\begin{aligned} d_{x_{im},100 r_{m}}(P_{jk},P_{im}) \le C(n,d,C_{M},C_{P}) \, \alpha (\bar{z},r_{k-l_{0}-5}), \quad \forall k \ge 1. \end{aligned}$$
(3.84)

This finishes the proof of (3.80).

Hence, we have shown that \(\epsilon ^{'}_{k}(f_{k}(z)) \le C(n,d,C_{M},C_{P}) \, \alpha (\bar{z},r_{k-l_{0}-5})\) for every \(k\ge 1\), that is

$$\begin{aligned} \epsilon ^{'}_{k}(f_{k}(z))^{2} \le C(n,d,C_{M},C_{P}) \, \alpha ^{2}(\bar{z},r_{k-l_{0}-5}), \quad \forall \, k\ge 1 \end{aligned}$$
(3.85)

Summing both sides of (3.85) over \(k \ge 1\), bounding the \(k=0\) term in (3.50) trivially by 1, and using (3.17) in Lemma 3.4 together with the fact that \(\bar{z} \in M\cap B_{\frac{1}{10^{l_{0}+4}}}(0)\), we get

$$\begin{aligned} \sum _{k=0}^{\infty } \epsilon ^{'}_{k}(f_{k}(z))^{2}\le & {} 1 + C(n,d,C_{M},C_{P}) \, \sum _{k=1}^{\infty } \alpha ^{2}(\bar{z},r_{k-l_{0}-5})\nonumber \\\le & {} 1 + C(n,d,C_{M},C_{P}) \, \epsilon ^{2}_{0} \,\, := N. \end{aligned}$$
(3.86)

Inequality (3.50) is proved, and our theorem follows. \(\square \)

As mentioned in the introduction, in the special case when M has co-dimension 1, (1.3) translates into a Carleson-type condition on the oscillation of the unit normals to M.

Proof that Theorem 1.1 follows from Theorem 1.2

Proof

Suppose that (1.2) holds for some choice of unit normal \(\nu \) to M. We show that (1.2) is in fact exactly inequality (1.3). Fix \(x \in M\) and \( 0< r < 1\) and let \(y \in M \cap B_{r}(x)\) be a point where the approximate tangent plane \(T_{y}M\) [and thus the unit normal \(\nu (y)\)] exists. Denote by \( {T_{y}M}^{\perp } \) the subspace perpendicular to \(T_{y}M\). Then, using the matrix representation of \( \pi _{T_{y}M} \) in the standard basis of \(\mathbb {R}^{n+1}\), and the fact that \( \pi _{{T_{y}M}^{\perp }} = Id_{n+1} - \pi _{T_{y}M}\) where \(Id_{n+1}\) is the \((n+1) \times (n+1)\) identity matrix, one can easily see that

$$\begin{aligned} |\pi _{T_{y}M} - A_{x,r}|^{2} = |\pi _{{T_{y}M}^{\perp }}- B_{x,r}|^{2}, \end{aligned}$$
(3.87)

where \( \pi _{{T_{y}M}^{\perp }} = ( b_{ij}(y) )_{ij}\) and \(B_{x,r} = Id_{n+1} - A_{x,r} = ( (b_{ij})_{x,r} )_{ij}\).

Now, we want to express the right hand side of (3.87) using a different basis than the standard basis of \(\mathbb {R}^{n+1}\). For any choice of orthonormal basis \(\{ \nu _{1}(y), \ldots , \nu _{n}(y) \}\) of \(T_{y}M\), we have that \(\{ \nu _{1}(y), \ldots , \nu _{n}(y), \nu (y)\}\) is an orthonormal basis for \(\mathbb {R}^{n+1}\). The matrix representation of \( \pi _{{T_{y}M}^{\perp }}\) with \(\{ \nu _{1}(y), \ldots , \nu _{n}(y), \nu (y)\}\) as a basis for the domain \(\mathbb {R}^{n+1}\) and the standard basis for the range \(\mathbb {R}^{n+1}\), is the \((n+1) \times (n+1)\) matrix whose last column is \(\nu (y)\) while the other columns are all zero. Thus, with this choice of bases and matrix representations, \(B_{x,r}\) becomes the matrix whose last column is \(\nu _{x,r}\) while the other columns are all zero. Hence, using (3.87), we get that

$$\begin{aligned} |\pi _{T_{y}M} - A_{x,r}|^{2} = |\pi _{{T_{y}M}^{\perp }}- B_{x,r}|^{2} = | \nu (y) - \nu _{x,r}|^{2}. \end{aligned}$$
(3.88)

Since (3.88) holds for \(\mu \)-almost every \(y \in M \cap B_{r}(x)\), and since x and r are arbitrary, we get

and the proof is done. \(\square \)
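The first equality (3.87) is a genuine matrix identity, since \(\pi _{{T_{y}M}^{\perp }} - B_{x,r} = -(\pi _{T_{y}M} - A_{x,r})\). As an informal aside, it can be checked numerically; the sampled normals below are purely illustrative, and note that the second equality in (3.88) additionally uses the adapted-basis representation described above, so only (3.87) is verified here:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 3                                  # co-dimension 1 in R^{n+1}, with n = 2

def tangent_projection(nu):
    """Projection onto the hyperplane with unit normal nu."""
    return np.eye(N) - np.outer(nu, nu)

# Unit normals at sampled points, and the entrywise averages A_{x,r}, B_{x,r}.
nus = rng.normal(size=(50, N))
nus /= np.linalg.norm(nus, axis=1, keepdims=True)
A = np.mean([tangent_projection(nu) for nu in nus], axis=0)
B = np.eye(N) - A                      # = average of the normal projections

for nu in nus[:5]:
    lhs = np.linalg.norm(tangent_projection(nu) - A)     # Frobenius norm
    rhs = np.linalg.norm(np.outer(nu, nu) - B)
    assert abs(lhs - rhs) < 1e-12      # the identity (3.87)
```

This is exactly why, in co-dimension 1, controlling the oscillation of the tangent projections is equivalent to controlling the oscillation of the normal projections.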

We now show that if we assume, in addition to the hypothesis of Theorem 1.2, that M is Reifenberg flat, then (locally) M is exactly the bi-Lipschitz image of an n-plane. In other words, the containment in (1.7) becomes an equality.

Corollary 3.9

Let \(M \subset B_{2}(0)\) be an n-Ahlfors regular rectifiable set containing the origin, and let \(\mu = \mathcal {H}^{n} \llcorner M\) be the Hausdorff measure restricted to M. Assume that M satisfies the Poincaré-type inequality (1.1). There exist \(\epsilon _{3} = \epsilon _{3}(n,d,C_{M},C_{P})>0\) and \(\theta _{1} = \theta _{1}(\lambda )\) such that if (1.3) is satisfied with \(\epsilon _{3}\) instead of \(\epsilon _{0}\), and if for every \(x \in M\) and \(r < 1\) there is an n-plane \(Q_{x,r}\), passing through x, such that

$$\begin{aligned} d(y, Q_{x,r}) \le \epsilon _{3} \, r \quad \forall \, y \in M \cap B_{10r}(x) \end{aligned}$$
(3.89)

and

$$\begin{aligned} d(y, M) \le \epsilon _{3} \, r \quad \forall \, y \in Q_{x,r} \cap B_{10r}(x), \end{aligned}$$
(3.90)

then there exists an onto K-bi-Lipschitz map \(g: \mathbb {R}^{n+d} \rightarrow \mathbb {R}^{n+d}\), where the bi-Lipschitz constant \(K=K(n,d,C_{M},C_{P})\), and an n-dimensional plane \(\Sigma _{0}\), such that (1.4) holds, (1.5) holds with \(\epsilon _{3}\) instead of \(\epsilon _{0}\) and with \(C_{0}'' = C_{0}''(n,d,C_{M},C_{P})\) instead of \(C_{0}\), and

$$\begin{aligned} M \cap B_{\theta _{1}}(0) = g(\Sigma _{0}) \cap B_{\theta _{1}}(0). \end{aligned}$$
(3.91)

Proof

Let \(\epsilon _{2}\) be as in Theorem 3.7, and let \(\epsilon _{3} \le \epsilon \le \epsilon _{2}\) (\(\epsilon _{3}\) and \(\epsilon \) to be determined later). Going through the exact same steps as in the proof of Theorem 1.2, but with \(\epsilon \) instead of \(\epsilon _{2}\), and \(\epsilon _{3}\) instead of \(\epsilon _{0}\), we get a bijective map \(g: \mathbb {R}^{n+d} \rightarrow \mathbb {R}^{n+d}\) such that (1.4) holds,

$$\begin{aligned} |g(z)-z| \le C^{'}_{0} \epsilon , \quad \text {for } z \in \mathbb {R}^{n+d}, \end{aligned}$$
(3.92)

and

$$\begin{aligned} M \cap B_{\frac{1}{10^{l_{0}+4}}}(0) \subset g(\Sigma _{0}). \end{aligned}$$
(3.93)

Note that we have not fixed \(\epsilon _{3}\) and \(\epsilon \) yet. However, we know that the above holds for \(\epsilon _{3} \le \epsilon \le \epsilon _{2}\) provided that inequality (3.55) is satisfied with \(\epsilon _{3}\) instead of \(\epsilon _{0}\), (3.68) is satisfied with \(\epsilon \) instead of \(\epsilon _{2}\) and \(\epsilon _{3}\) instead of \(\epsilon _{0}\), and the hypothesis of Claim 1 is satisfied with \(\epsilon _{3}\) instead of \(\epsilon _{0}\). Now, we want to show that

$$\begin{aligned} g(\Sigma _{0}) \cap B_{\frac{1}{10^{l_{0}+8}}}(0) \subset M. \end{aligned}$$
(3.94)

We first show that for every \(k \ge 0\) and for every \(j \in J_{k}\), \(M \cap B_{120 r_{k}}(\tilde{x}_{jk})\) is close to \(P_{jk}\) and that the n-planes \(P_{jk}\) and \(Q_{jk} := Q_{x_{jk}, r_{k}}\) are close to each other (in the Hausdorff distance sense). Let us begin by showing that for every \(k \ge 0\) and for every \(j \in J_{k}\),

$$\begin{aligned} d(z, P_{jk}) \le \epsilon \, r_{k} \quad \forall \, z \in M \cap B_{120 r_{k}}(\tilde{x}_{jk}). \end{aligned}$$
(3.95)

By Markov’s inequality, we know that

$$\begin{aligned} \begin{aligned}&\mu \bigg ( x \in B_{120r_{k}}(\tilde{x}_{jk}); \frac{d(x,P_{jk})}{120r_{k}}\ge \alpha ^{\frac{1}{2}} (\tilde{x}_{jk}, 120 \lambda r_{k}) \bigg ) \\&\quad \le \frac{1}{\alpha ^{\frac{1}{2}}(\tilde{x}_{jk}, 120 \lambda r_{k})} \int _{B_{120r_{k}}(\tilde{x}_{jk})} \frac{d(y,P_{jk})}{120r_{k}} \, d \mu \end{aligned} \end{aligned}$$

Using (3.58) with the fact that \(\mu \) is Ahlfors regular, and (1.3) with (3.18) from Lemma 3.4 and the fact that \(120 \lambda r_{k} \le \frac{1}{10}\), we get

$$\begin{aligned} \mu \bigg ( x \in B_{120r_{k}}(\tilde{x}_{jk}); \frac{d(x,P_{jk})}{120r_{k}}\ge \alpha ^{\frac{1}{2}} (\tilde{x}_{jk}, 120 \lambda r_{k}) \bigg ) \le C(n,d,C_{M},C_{P}) \, r_{k}^{n} \, \epsilon _{3}^{\frac{1}{2}}. \end{aligned}$$

Now, take a point \(z \in M \cap B_{120r_{k}}(\tilde{x}_{jk})\). We consider two cases: Either

$$\begin{aligned} \frac{d(z ,P_{jk})}{120r_{k}} \le \alpha ^{\frac{1}{2}} (\tilde{x}_{jk}, 120 \lambda r_{k} ) \end{aligned}$$
(3.96)

or

$$\begin{aligned} \frac{d(z ,P_{jk})}{120r_{k}} > \alpha ^{\frac{1}{2}} (\tilde{x}_{jk}, 120 \lambda r_{k}). \end{aligned}$$
(3.97)

In the first case, combining (3.96) with (1.3) and (3.18), we get

$$\begin{aligned} d(z ,P_{jk}) \le C(n, C_{M}) \, r_{k} \, \epsilon _{3}^{\frac{1}{2}}. \end{aligned}$$
(3.98)

In case of (3.97), let \(\rho \) be the largest radius such that

$$\begin{aligned} B_{\rho }(z) \subset \left\{ x \in B_{120r_{k}}(\tilde{x}_{jk}) ; \,\,\, \frac{d(x,P_{jk})}{120r_{k}} > \alpha ^{\frac{1}{2}}\left( \tilde{x}_{jk}, 120 \lambda r_{k}\right) \right\} . \end{aligned}$$

Now, since \(z \in M\) and \(\mu \) is Ahlfors regular, we get from the Markov estimate above that

$$\begin{aligned} C_{M}^{-1} \, \rho ^{n} \le \mu (B_{\rho }(z)) \le C(n,d, C_{M}, C_{P}) \, r_{k}^{n} \,\epsilon _{3}^{\frac{1}{2}}. \end{aligned}$$
(3.99)

Thus, after relabelling the constant, (3.99) becomes

$$\begin{aligned} \rho \le C(n, C_{M}, C_{P}) \,r_{k} \, \epsilon _{3}^{\frac{1}{2n}}. \end{aligned}$$
(3.100)

On the other hand, since \(\rho \) is the biggest radius such that \(B_{\rho }(z) \subset \Big \{ x \in B_{120r_{k}}(\tilde{x}_{jk}) ; \frac{d(x,P_{jk})}{120r_{k}} > \alpha ^{\frac{1}{2}}\left( \tilde{x}_{jk}, 120 \lambda r_{k}\right) \Big \}\), then there exists \(x_{0} \in \partial B_{\rho }(z)\) such that

$$\begin{aligned} \frac{d(x_{0} ,P_{jk})}{120r_{k}} \le \alpha ^{\frac{1}{2}}\left( \tilde{x}_{jk}, 120 \lambda r_{k}\right) . \end{aligned}$$
(3.101)

Thus, by (3.101), (3.100) and (1.3) together with (3.18), we get

$$\begin{aligned} d(z ,P_{jk})\le & {} |z-x_{0}| + d(x_{0} ,P_{jk}) \nonumber \\= & {} \rho + d(x_{0} ,P_{jk}) \le C(n,d, C_{M}, C_{P}) \, r_{k} \, \epsilon _{3}^{\frac{1}{2n}}+ 120r_{k} \, \alpha ^{\frac{1}{2}}\left( \tilde{x}_{jk}, 120 \lambda r_{k}\right) \nonumber \\\le & {} C(n,d,C_{M}, C_{P}) \,r_{k} \, \epsilon _{3}^{\frac{1}{2n}}. \end{aligned}$$
(3.102)

Combining (3.98) and (3.102), we get that

$$\begin{aligned} d(z ,P_{jk}) \le C_{5} \,r_{k} \, \epsilon _{3}^{\frac{1}{2n}} \quad \text {for}\,\, z \in M \cap B_{120r_{k}}(\tilde{x}_{jk}), \end{aligned}$$
(3.103)

where \(C_{5}= C_{5}(n,d,C_{M},C_{P})\). Thus, for \( C_{5} \, \epsilon _{3}^{\frac{1}{2n}} \le \epsilon \), we get (3.95), which is the desired inequality.
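The measure-theoretic engine of this step is Markov's inequality: the set where the (normalized) distance to the plane exceeds the threshold \(\alpha ^{1/2}\) has small measure, and Ahlfors regularity then converts that measure bound into the radius bound (3.100). The following sketch is purely illustrative and not part of the proof: a synthetic point cloud stands in for M, the coordinate plane \(\{z=0\}\) stands in for \(P_{jk}\), and the empirical (counting) measure stands in for \(\mu \); it only checks the inequality \(\mu (\{d \ge t\}) \le t^{-1} \int d \, d\mu \).

```python
import random

# Synthetic stand-in for (M, mu): N sample points near the plane {z = 0},
# with mu approximated by the empirical (counting) measure.
# The distance to the plane P = {z = 0} is d(x, P) = |z|.
random.seed(0)
N = 10_000
pts = [(random.uniform(-1, 1), random.uniform(-1, 1), random.gauss(0, 0.05))
       for _ in range(N)]

dist = [abs(z) for (_, _, z) in pts]
mean_dist = sum(dist) / N          # plays the role of (1/mu(B)) * int d(y,P) dmu

t = 0.1                            # threshold, plays the role of alpha^(1/2)
bad_fraction = sum(1 for d in dist if d >= t) / N   # mu({d >= t}) / mu(B)

# Markov's inequality: mu({d >= t}) <= (1/t) * int d dmu
assert bad_fraction <= mean_dist / t
```

The same mechanism appears verbatim in the proof: the "bad" set has measure at most a constant times \(\epsilon _{3}^{1/2} r_{k}^{n}\), so any ball contained in it must have radius at most a constant times \(\epsilon _{3}^{1/(2n)} r_{k}\).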

Now, let us show that \(P_{jk}\) and \(Q_{jk}\) are close together, that is

$$\begin{aligned} d_{x_{jk}, 5 r_{k}}(P_{jk}, Q_{jk}) \le 3 \epsilon \, r_{k}. \end{aligned}$$
(3.104)

Since \(P_{jk}\) and \(Q_{jk}\) are n-planes, it is enough to show

$$\begin{aligned} \sup _{y \in Q_{jk} \cap B_{5r_{k}}(x_{jk})} d(y, P_{jk}) \le 3 \epsilon \, r_{k}. \end{aligned}$$
(3.105)

Let \(y \in Q_{jk} \cap B_{5r_{k}}(x_{jk})\). By (3.90), we get that \(d(y,M) \le \epsilon _{3} \, r_{k}\), and thus there exists \(y' \in M\) such that \(|y - y'| \le 2 \, \epsilon _{3} \, r_{k}\). Recalling that \( x_{jk} \in M \cap B_{ \frac{r_{k}}{6}}(\tilde{x}_{jk})\) (see (3.53)), we get

$$\begin{aligned} |y' - \tilde{x}_{jk}| \le |y' - y| + |y - x_{jk}| + |x_{jk} - \tilde{x}_{jk}| \le 2 \epsilon _{3} \, r_{k} + 5 r_{k} + \frac{r_{k}}{6} \le 120 r_{k}, \end{aligned}$$

that is \(y' \in B_{120 r_{k}}(\tilde{x}_{jk})\). Hence, by (3.95), we get that \(d(y', P_{jk}) \le \epsilon \,r_{k}\), and using the fact that \(\epsilon _{3} \le \epsilon \), we get

$$\begin{aligned} d(y, P_{jk}) \le |y - y'| + d(y', P_{jk}) \le 3 \epsilon \, r_{k} , \end{aligned}$$

which finishes the proof of (3.105) and in particular (3.104).

Before starting the proof of (3.94), let us briefly recall how the map g was defined. In the proof of Theorem 3.7 [see the paragraph above (3.48)], David and Toro constructed smooth maps \(\sigma _{k}\) and \(f_{k}\), where \(f_{0} = Id\) and \(f_{k} = \sigma _{k-1} \circ \cdots \circ \sigma _{0}\) for \(k \ge 1\); they then defined the map \(f = \lim _{k \rightarrow \infty } f_{k}\) on \(\Sigma _{0}\), and finally g as the extension of f to the whole space.

In order to prove (3.94), we will need the following inequality from [7] (see [7, Proposition 5.1, p. 19]):

$$\begin{aligned} d(f_{k}(z), P_{jk}) \le C(n,d) \, \epsilon \, r_{k}, \quad \forall \, z \in \Sigma _{0}, \, k\ge 0 \text { and } j \in J_{k} , \text { such that } f_{k}(z) \in B_{5r_{k}}(x_{jk}).\nonumber \\ \end{aligned}$$
(3.106)

We are finally ready to prove (3.94). Let \(w \in g(\Sigma _{0}) \cap B_{\frac{1}{10^{l_{0}+8}}}(0)\), and let \(d_{0} := d(w, M)\). We would like to prove that \(d_{0}=0\) (recall that M is closed by assumption). Let \(z \in \Sigma _{0}\) such that \( w = g(z)\). Notice that by (3.78) (with \(\epsilon \) instead of \(\epsilon _{2}\)), the definition of \(f_{0}\), and the fact that g and f agree on \(\Sigma _{0}\), we have

$$\begin{aligned} |w - z| = |g(z)-z| = |f(z) - f_{0}(z)| \le C(n,d) \epsilon \, r_{0}. \end{aligned}$$
(3.107)

Recalling that \(\Sigma _{0} = P_{i_{0}0}\), \(\tilde{x}_{i_{0}0} = 0\) , \(r_{0} = \frac{1}{10^{l_{0}+5}}\), and that \( x_{jk} \in B_{ \frac{r_{k}}{6}}(\tilde{x}_{jk})\) (see (3.53)), we get

$$\begin{aligned} |z - x_{i_{0}0}|\le & {} |z - w| + | w - \tilde{x}_{i_{0}0}| + |\tilde{x}_{i_{0}0} - x_{i_{0}0}| \nonumber \\\le & {} C(n,d) \epsilon \, r_{0}+ \frac{1}{10^{l_{0}+8}} + \frac{r_{0}}{6} \le C_{6} \epsilon \, r_{0} + 2 r_{0} \le 3 r_{0}, \end{aligned}$$
(3.108)

for \(\epsilon \) such that \(C_{6} \epsilon \le 1\), where \(C_{6} = C_{6}(n,d)\). Thus, \(z \in P_{i_{0}0} \cap B_{5 r_{0}}(x_{i_{0}0})\), and by (3.104), there is a point \(z' \in Q_{i_{0}0}\) such that \( |z - z'| \le 6 \epsilon \, r_{0}\). Moreover,

$$\begin{aligned} |z' - x_{i_{0}0}| \le |z' - z| + |z - x_{i_{0}0}| \le 6 \epsilon \, r_{0} + 3 r_{0} \le 10 r_{0}, \end{aligned}$$
(3.109)

for \(\epsilon < 1\). Thus, \(z' \in Q_{i_{0}0} \cap B_{10 r_{0}}(x_{i_{0}0})\), and by (3.90), we get that \(d(z', M) \le \epsilon _{3} \, r_{0}.\)

Combining (3.107), the line after (3.108), the line before and the line after (3.109), and the fact that \(\epsilon _{3} \le \epsilon \), we get

$$\begin{aligned} d_{0}= & {} d(w, M) \le |w - z| + |z - z'| + d(z', M) \le C_{6} \epsilon \, r_{0}+ 6 \epsilon r_{0} + \epsilon _{3} \, r_{0}\nonumber \\\le & {} (C_{6} +7) \, \epsilon \, r_{0} \le \frac{r_{0}}{10}, \end{aligned}$$
(3.110)

for \(\epsilon \) such that \((C_{6} + 7) \, \epsilon \le \frac{1}{10}\), where \(C_{6} = C_{6}(n,d)\).

We proceed by contradiction. Suppose \(d_{0} > 0\), then there exists \(k \ge 0\) such that \(r_{k+1} < d_{0} \le r_{k}\). Notice that since \(w = g(z)\), \(z \in \Sigma _{0}\), and the maps g and f agree on \(\Sigma _{0}\), then by (3.78), we have

$$\begin{aligned} |w - f_{k}(z)| \le C(n,d) \, \epsilon \, r_{k}. \end{aligned}$$
(3.111)

Now, by the definition of \(d_{0}\), there exists \(\xi \in M\) such that \( |\xi - w| \le \frac{3}{2} d_{0}\). Using (3.110) and the fact that \(r_{0} = \frac{1}{10^{l_{0}+5}}\), we get

$$\begin{aligned} |\xi | \le |\xi - w| + |w| \le \frac{3}{2} \frac{r_{0}}{10} + \frac{1}{10^{l_{0}+8}} \le \frac{1}{10^{l_{0}+4}}, \end{aligned}$$
(3.112)

and thus by (3.54), there exists \(j \in J_{k}\) such that \( \xi \in B_{\frac{3}{2}r_{k}}(x_{jk})\).

Since both k and j are now fixed, consider the n-plane \(P_{jk}\) and the point \(x_{jk}\). By the line under (3.112), the line under (3.111), (3.111), and the fact that \(d_{0} \le r_{k}\), we have

$$\begin{aligned} |x_{jk} - f_{k}(z)|\le & {} |x_{jk} - \xi | + |\xi - w| + |w - f_{k}(z)|\nonumber \\\le & {} \frac{3}{2}r_{k} + \frac{3}{2} d_{0} + C(n,d) \, \epsilon \, r_{k} \le 3 r_{k} + C_{7} \, \epsilon \, r_{k} \le 4r_{k}, \end{aligned}$$
(3.113)

for \(\epsilon \) such that \(C_{7} \epsilon \le 1\), where \(C_{7} = C_{7}(n,d)\). Thus, inequality (3.106) tells us that \(d(f_{k}(z), P_{jk}) \le C(n,d) \, \epsilon \, r_{k}\). Let \(y \in P_{jk}\) be such that \(| y - f_{k}(z) | \le C(n,d) \, \epsilon \, r_{k}\). Then, by (3.111), the line below it, the line below (3.112), and recalling that \(d_{0} \le r_{k}\), we get

$$\begin{aligned} | y - x_{jk}| \le |y - f_{k}(z)| + |f_{k}(z) - w| + |w - \xi | + |\xi - x_{jk}| \le C_{8} \, \epsilon \, r_{k} + 3 r_{k} \le 5 r_{k}\nonumber \\ \end{aligned}$$
(3.114)

for \(\epsilon \) such that \(C_{8} \, \epsilon \le 1\), where \(C_{8} = C_{8}(n,d)\). Thus, \(y \in P_{jk} \cap B_{5 r_{k}}(x_{jk})\), and by (3.104) there exists \(y' \in Q_{jk}\) such that \(| y - y'| \le 3 \epsilon \, r_{k}\). But then, \(| y' - x_{jk}| \le |y - y'| + |y - x_{jk}| \le 10 \, r_{k}\); thus \( y' \in Q_{jk} \cap B_{10 r_{k}}(x_{jk})\) and by (3.90) we get that \(d(y',M) \le \epsilon _{3} \, r_{k}\).

Finally, using (3.111), the two lines before (3.114), and the three lines below it, we get

$$\begin{aligned} d_{0}= & {} d(w,M) \le |w - f_{k}(z)| + |f_{k}(z) - y| + |y - y'| + d(y',M)\nonumber \\\le & {} C(n,d) \, \epsilon \, r_{k} = C_{9} \epsilon \, r_{k} \le r_{k+1} \end{aligned}$$
(3.115)

for \(\epsilon \) such that \(C_{9} \epsilon \le \frac{1}{10}\), where \(C_{9}= C_{9}(n,d)\), which contradicts the fact that \(d_{0} > r_{k+1}\). This finishes the proof of (3.94).

Fix \(\epsilon \le \epsilon _{2} < 1\) such that the conditions after (3.108), (3.110), (3.113), (3.114), and (3.115) hold, and then fix \(\epsilon _{3} \le \epsilon \) such that inequality (3.55) is satisfied with \(\epsilon _{3}\) instead of \(\epsilon _{0}\), (3.68) is satisfied with \(\epsilon \) instead of \(\epsilon _{2}\) and \(\epsilon _{3}\) instead of \(\epsilon _{0}\), the hypothesis of Claim 1 is satisfied with \(\epsilon _{3}\) instead of \(\epsilon _{0}\), and the line below (3.103) is satisfied. Writing \(\epsilon _{3} = c_{10} \, \epsilon \), where \(c_{10} = c_{10}(n,d,C_{M},C_{P})\), and replacing in (3.92), we get (1.5). The proof that g is bi-Lipschitz is the same as in Theorem 1.2. \(\square \)

4 The Poincaré Inequality (1.12) is equivalent to the p-Poincaré inequality

Let \((M, d_{0}, \mu )\) be the metric measure space where \(M \subset B_{2}(0)\) is an n-Ahlfors regular rectifiable set in \(\mathbb {R}^{n+d}\), \(\mu = \mathcal {H}^{n} \llcorner M\) is the Hausdorff measure restricted to M, and \(d_{0}\) is the restriction of the standard Euclidean distance in \(\mathbb {R}^{n+d}\) to M. In this section, we prove Theorem 1.7, which states that in this setting, the Poincaré inequality (1.12) is equivalent to the p-Poincaré inequality (1.10) and the Lip-Poincaré inequality (1.11).

We prove that \(\mathrm{(iii)} \implies \mathrm{(ii)} \implies \mathrm{(i)} \implies \mathrm{(iii)}\). In fact, \(\mathrm{(iii)} \implies \mathrm{(ii)}\) is proved in [14]. The fact that \(\mathrm{(ii)} \implies \mathrm{(i)}\) follows from a theorem in [10], where Keith proves the equivalence between p-Poincaré inequalities and Lip-Poincaré inequalities. Finally, to prove \(\mathrm{(i)} \implies \mathrm{(iii)}\), we use the well-known fact that a metric measure space X supports a p-Poincaré inequality if and only if inequality (1.8) holds for all measurable functions u on X and all p-weak upper gradients \(\rho \) of u. Then, we show that \(|\nabla ^{M}f|\) is a p-weak upper gradient of f whenever f is a Lipschitz function on \(\mathbb {R}^{n+d}\).

Let us start by stating the theorems we need.

Theorem 4.1

(see [10, Theorem 2]) Let \(p \ge 1\), and let \((X,d,\nu )\) be a complete metric measure space, with \(\nu \) a doubling measure. Then, the following are equivalent:

  • \((X,d,\nu )\) admits a p-Poincaré inequality for all measurable functions u on X.

  • \((X,d,\nu )\) admits a Lip-Poincaré inequality for all Lipschitz functions f on X.

Theorem 4.2

(see [1, Proposition 4.13]) Let \(p \ge 1\), and let \((X,d,\nu )\) be a metric measure space. Then, the following are equivalent:

  • Inequality (1.8) holds for all measurable (resp. Lipschitz) functions u on X and all upper gradients \(\rho \) of u.

  • Inequality (1.8) holds for all measurable (resp. Lipschitz) functions u on X and all p-weak upper gradients \(\rho \) of u.

Before stating the theorem we need from [14], let us make a remark on what the metric balls look like in the metric measure space \((M, d_{0}, \mu )\). Fix \(x \in M\) and \(r>0\). It is easy to see that

$$\begin{aligned} B^{M}_{r}(x) = B_{r}(x) \cap M, \end{aligned}$$
(4.1)

where \(B_{r}(x)\) denotes the Euclidean ball in \(\mathbb {R}^{n+d}\) of center x and radius r.

Theorem 4.3

(see [14, Corollary 5.8])Footnote 7 Let \((M, d_{0}, \mu )\) be as above. Assume that M satisfies (iii). Then, M satisfies (ii).

To show that \(|\nabla ^{M}f|\) is a p-weak upper gradient of f, when f is a Lipschitz function on \(\mathbb {R}^{n+d}\), we need the following lemma from [1]:

Lemma 4.4

(see [1, Lemma 1.42]) Let \(p \ge 1\) and let \((M, d_{0}, \mu )\) be as above. Suppose that \(E \subset M\), with \(\mu (E) = 0\). Denote by \(\Gamma (M)\) the set of all rectifiable curves in M, and let

$$\begin{aligned} \Gamma _{E} = \left\{ \gamma \in \Gamma (M), \text { such that } \mathcal {L}^{*}_{1}(\gamma ^{-1}(E)) \ne 0 \right\} , \end{aligned}$$

where \(\mathcal {L}^{*}_{1}\) denotes the Lebesgue outer measure on \(\mathbb {R}\). Then, \( \text {Mod}_{p}(\Gamma _{E}) = 0\).

Proposition 4.5

Let \((M, d_{0}, \mu )\) be as above, and let f be a Lipschitz function on \(\mathbb {R}^{n+d}\). Then, \(|\nabla ^{M}f|\) (or more precisely, any non-negative extension of \(|\nabla ^{M}f|\) to the whole space M) is a p-weak upper gradient of \(f|_{M}\), the restriction of f to M.

Proof

Since f is Lipschitz on \(\mathbb {R}^{n+d}\), we know that \(\nabla ^{M}f\) exists \(\mu \)-almost everywhere. Let

$$\begin{aligned} E = \left\{ x \in M \text { such that } \nabla ^{M}f(x) \text { does not exist } \right\} . \end{aligned}$$

Then, \(\mu (E)=0\), and by Lemma 4.4, we know that \(\text {Mod}_{p}(\Gamma _{E}) = 0\). Now, let \(\gamma \) be a rectifiable curve in M, parametrized by arc length, such that \(\gamma \notin \Gamma _{E}\). Then, \(\mathcal {L}_{1}(\gamma ^{-1}(E)) = 0\). Moreover, since \(f \circ \gamma \) is Lipschitz, and thus absolutely continuous on \([0, l_{\gamma }]\), we have

$$\begin{aligned} \big | f|_{M}(\gamma (0)) - f|_{M}(\gamma (l_{\gamma })) \big |= & {} | f(\gamma (0)) - f(\gamma (l_{\gamma }))| \nonumber \\= & {} \left| \int _{0}^{l_{\gamma }} (f \circ \gamma )'(t) \, dt \right| \nonumber \\= & {} \left| \int _{t \in [0, l_{\gamma }]; \, \gamma (t) \notin E} (f \circ \gamma )'(t) \, dt \right| \nonumber \\\le & {} \int _{t \in [0, l_{\gamma }]; \, \gamma (t) \notin E} |(f \circ \gamma )'(t)| \, dt \end{aligned}$$
(4.2)

Let \(t \in [0, l_{\gamma }]\) such that \(\gamma (t) \notin E\). Then, \(T_{\gamma (t)}M\) exists, and \(\nabla ^{M}f(\gamma (t)) \in T_{\gamma (t)}M\). We first show that

$$\begin{aligned} | (f \circ \gamma )'(t)| \le | \nabla ^{M} f (\gamma (t))|. \end{aligned}$$
(4.3)

Since \(\gamma '(t) \in T_{\gamma (t)}M\)Footnote 8 is a unit vector, then by Rademacher’s Theorem, we have

$$\begin{aligned} \lim _{h \rightarrow 0} \frac{| f \big (\gamma (t) +h \gamma '(t) \big ) - f \big (\gamma (t)\big ) - h <\nabla ^{M} f (\gamma (t)), \gamma '(t) >|}{h} = 0. \end{aligned}$$
(4.4)

Now, for any \(-t< h < l_{\gamma } - t\), we have

$$\begin{aligned} \begin{aligned}&\frac{|f \big (\gamma (t+h) \big ) - f \big (\gamma (t)\big )|}{h} \\&\quad \le \frac{|f \big (\gamma (t+h) \big ) - f \big (\gamma (t) + h \gamma '(t) \big )|}{h} + \frac{|f \big (\gamma (t) + h \gamma '(t) \big ) - f\big (\gamma (t)\big )|}{h} \\&\quad \le L_{f}\, \frac{| \gamma (t+h) - \gamma (t) - h \gamma '(t) |}{h} + \frac{|f \big (\gamma (t) + h \gamma '(t) \big ) - f\big (\gamma (t)\big )|}{h} \, , \end{aligned} \end{aligned}$$

where in the last step, we used the fact that f is Lipschitz on \(\mathbb {R}^{n+d}\).

Taking the limit as \(h \rightarrow 0\) on both sides of the inequality above, and using (4.4) and the fact that \(\gamma '(t)\) is a unit vector, we get

$$\begin{aligned} | (f \circ \gamma )'(t)| \le | \nabla ^{M} f(\gamma (t)) \cdot \gamma ' (t)| \le | \nabla ^{M} f(\gamma (t))| \end{aligned}$$

which is exactly (4.3). Replacing (4.3) in (4.2), we get

$$\begin{aligned} \big | f|_{M}(\gamma (0)) - f|_{M}(\gamma (l_{\gamma }))\big | \le \int _{t \in [0, l_{\gamma }]; \, \gamma (t) \notin E} | \nabla ^{M} f (\gamma (t))| \, dt. \end{aligned}$$
(4.5)

Now, define the map \(G : M \rightarrow [0, \infty ]\) to be any non-negative extension of \(|\nabla ^{M}f|\) to the whole space M (that is, \(G(x) = |\nabla ^{M}f(x)|\) on \(M {\setminus } E\), which means that \(G = |\nabla ^{M}f|\, \mu \)-a.e.). Plugging back in (4.5), we get

$$\begin{aligned} | f|_{M}(\gamma (0)) - f|_{M}(\gamma (l_{\gamma }))|\le & {} \int _{t \in [0, l_{\gamma }]; \, \gamma (t) \notin E} G(\gamma (t)) \, dt \nonumber \\= & {} \int _{t \in [0, l_{\gamma }]; \, \gamma (t) \notin E} G(\gamma (t)) \, dt + \int _{t \in [0, l_{\gamma }]; \, \gamma (t) \in E} G(\gamma (t)) \, dt \nonumber \\= & {} \int _{0}^{l_{\gamma }} G\big (\gamma (t)\big ) dt = \int _{\gamma } G \, ds. \end{aligned}$$
(4.6)

This finishes the proof that G is a p-weak upper gradient of \(f|_{M}\). \(\square \)
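As a concrete illustration of the upper-gradient inequality (4.6), consider the simplest case where M is (a piece of) the plane \(\{z = 0\}\). The sketch below is not part of the proof; the function f and the curve \(\gamma \) are hypothetical choices, picked only to verify numerically that \(|f(\gamma (0)) - f(\gamma (L))| \le \int _{\gamma } |\nabla ^{M} f| \, ds\) for an arc-length-parametrized curve.

```python
import math

# M = plane {z = 0}; for f(x, y, z) = x^2 + y, the tangential gradient on M
# is grad^M f = (2x, 1, 0), so |grad^M f| = sqrt(4x^2 + 1).
def f(x, y, z):
    return x * x + y

def grad_norm(x, y):
    return math.sqrt(4 * x * x + 1)

# Arc-length parametrization of a circular arc of radius R inside the plane:
# gamma(t) = (R cos(t/R), R sin(t/R), 0), so |gamma'(t)| = 1.
R, L = 2.0, 3.0
def gamma(t):
    return (R * math.cos(t / R), R * math.sin(t / R), 0.0)

# Midpoint Riemann-sum approximation of the upper-gradient integral.
n = 100_000
dt = L / n
integral = sum(grad_norm(*gamma((i + 0.5) * dt)[:2]) * dt for i in range(n))

lhs = abs(f(*gamma(0)) - f(*gamma(L)))
assert lhs <= integral + 1e-6   # the upper-gradient inequality (4.6)
```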

We are finally ready to prove Theorem 1.7:

Proof of Theorem 1.7

Proof

We prove \(\mathrm{(iii)} \implies \mathrm{(ii)} \implies \mathrm{(i)} \implies \mathrm{(iii)}\):

\(\mathrm{(iii)} \implies \mathrm{(ii)}\):

This is exactly Theorem 4.3.

\(\mathrm{(ii)} \implies \mathrm{(i)}\):

Notice that by using (4.1), we will be done once we apply Theorem 4.1 to the metric measure space \((M, d_{0}, \mu )\). In fact, M is complete since it is closed and bounded. Moreover, the fact that \(\mu \) is doubling follows from (4.1) and the Ahlfors regularity of \(\mu \). Hence, we can apply Theorem 4.1 to \((M, d_{0}, \mu )\).
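For completeness, the doubling property invoked here is a one-line consequence of Ahlfors regularity and (4.1): for \(x \in M\) and \(0 < 2r \le 1\),

$$\begin{aligned} \mu \big (B^{M}_{2r}(x)\big ) \le C_{M} \, (2r)^{n} = 2^{n} C_{M} \, r^{n} \le 2^{n} C_{M}^{2} \, \mu \big (B^{M}_{r}(x)\big ), \end{aligned}$$

so \(\mu \) is doubling with constant \(2^{n} C_{M}^{2}\).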

\(\mathrm{(i)} \implies \mathrm{(iii)}\): Notice that by Theorem 4.2, (i) implies that inequality (1.8) holds for all measurable functions u on M and all p-weak upper gradients \(\rho \) of u. Let f be a Lipschitz function on \(\mathbb {R}^{n+d}\), and fix \(x \in M\) and \(r>0\). Then, \(f|_{M}\) is a Lipschitz function on M, and by Proposition 4.5, \(|\nabla ^{M}f|\) agrees \(\mu \)-almost everywhere with G, a p-weak upper gradient of \(f|_{M}\). Applying (1.8) with \(u = f|_{M}\), \(\rho = G\), and the ball \(B = B_{r}(x) \cap M\), we get

hence finishing the proof. \(\square \)

5 The conclusion of Theorem 1.2 is optimal

In this section, we prove Theorem 1.8 by giving an example of a non-Reifenberg flat, 2-Ahlfors regular rectifiable set \(M \subset \mathbb {R}^{3}\) that satisfies the Carleson condition (1.3) and the Poincaré-type inequality (1.1).

To construct this example, we use the well known fact that Lipschitz domains support a p-Poincaré-type inequality, together with Theorem 1.7 that allows us to go from a p-Poincaré inequality to the Poincaré inequality (1.12).

In order to keep track of where the balls live, \(B^{2}_{r}(x)\) will denote the Euclidean ball in \(\mathbb {R}^{2}\) of center x and radius r, whereas \(B^{3}_{r}(x)\) will be that in \(\mathbb {R}^{3}\). Moreover, diam(A) denotes the diameter of a set A.

Definition 5.1

We say that a bounded set \(A \subset \mathbb {R}^{2}\) satisfies the corkscrew condition if there exists \(\delta >0\) such that for all \(x \in \bar{A}\) and \(0<r \le \text {diam}(A)\), the set \(B^{2}_{r}(x) \cap A\) contains a ball with radius \(\delta r\).

Definition 5.2

We say that an open, bounded set \(A \subset \mathbb {R}^{2}\) is a Lipschitz domain if its boundary \(\partial A\) can be written, locally, as the graph of a Lipschitz function. More precisely, A is a Lipschitz domain if for every point \(x \in \partial A\) there exist a radius \(r>0\) and a bijective map \(h_{x}: B^{2}_{r}(x) \rightarrow B^{2}_{1}(0)\) such that the following holds:

  • \(h_{x}\) and \(h_{x}^{-1}\) are Lipschitz continuous,

  • \(h_{x}(\partial A \cap B^{2}_{r}(x)) = Q_{0}\), and

  • \(h_{x}(A \cap B^{2}_{r}(x)) = Q_{1}\),

where \(Q_{0} = \{ (x_{1}, x_{2}) \in B^{2}_{1}(0); x_{2} = 0 \}\) and \(Q_{1} = \{ (x_{1}, x_{2}) \in B^{2}_{1}(0); x_{2} > 0 \}\).

In [2], Björn and Shanmugalingam prove that Lipschitz domains support p-Poincaré-type inequalities:

Theorem 5.3

(see [2, Theorem 4.4]) Consider the Hausdorff measure \(\mathcal {H}^{2}\) on \(\mathbb {R}^{2}\). Let \(\Omega \) be any Lipschitz domain in \(\mathbb {R}^{2}\). Then, \(\Omega \) supports a 2-Poincaré-type inequality; that is, there exist constants \(\kappa \ge 1\) and \(\lambda \ge 1\) such that for every \(x \in \bar{ \Omega }\) and \(r>0\), and for every Lipschitz function \(u: \Omega \rightarrow \mathbb {R}\) and any upper gradient \(\rho \) of u in \(\Omega \), the following holds

$$\begin{aligned} \frac{1}{\mathcal {H}^{2}(B^{2}_{r}(x) \cap \Omega )} \int _{B^{2}_{r}(x) \cap \Omega } |u - u_{x,r}| \, d\mathcal {H}^{2} \le \kappa \, r \left( \frac{1}{\mathcal {H}^{2}(B^{2}_{\lambda r}(x) \cap \Omega )} \int _{B^{2}_{\lambda r}(x) \cap \Omega } \rho ^{2} \, d\mathcal {H}^{2}\right) ^{\frac{1}{2}}, \end{aligned}$$
(5.1)

where \(u_{x,r}\) denotes the average of u over \(B^{2}_{r}(x) \cap \Omega \).

We are now ready to construct our example. Let \(\Omega := B^{2}_{1}(0) {\setminus } Q\) where Q is the closed square of center \((\frac{1}{2},0)\), and side \(l = \frac{1}{10}\). Since \(\Omega \) is a Lipschitz domain, by Theorem 5.3, it supports the 2-Poincaré-type inequality (5.1).

Proof of Theorem 1.8

Proof

Let \(\Omega \) be as in the construction above, and let \(M: = \bar{\Omega } \times \{0\} \subset \mathbb {R}^{3}\). We prove the theorem for \(n=2\), \(d = 1\), and \(\mu = \mathcal {H}^{2} \llcorner M\). However, with a similar constructionFootnote 10, the theorem holds for any \(n \ge 2\) and \(d \ge 1\).

It is easy to see that M is a rectifiable, non-Reifenberg flat set. To see that M is 2-Ahlfors regular, first note that M is closed by construction. It remains to show that there exists a constant \(C_{M} \ge 1\) such that for every \(x \in M\) and \(0<r \le 1\), we have

$$\begin{aligned} C_{M}^{-1} \, r^{2} \le \mu (M\cap B^{3}_{r}(x)) \le C_{M} \, r^{2}. \end{aligned}$$
(5.2)

By the definition of \(\mu \) and the construction of M, proving (5.2) translates to proving that for every \(\bar{x} \in \bar{\Omega }\) and \(0<r \le 1\),

$$\begin{aligned} C_{M}^{-1} \, r^{2} \le \mathcal {H}^{2} (\bar{\Omega } \cap B^{2}_{r}(\bar{x})) \le C_{M} \, r^{2}. \end{aligned}$$
(5.3)

The right-hand side of (5.3) is trivial since \(\mathcal {H}^{2}(\bar{\Omega } \cap B^{2}_{r}(\bar{x})) \le \mathcal {H}^{2}(B^{2}_{r}(\bar{x})) = \omega _{2} \, r^{2}\). For the left-hand side, notice that since \(\Omega \) is a Lipschitz domain, it is automatically a corkscrew domain, and thus there exists a \(\delta >0\) such that for every \(\bar{x} \in \bar{\Omega }\) and every \(0<r \le 1 \le \text {diam}(\Omega )\), the set \(\Omega \cap B^{2}_{r}(\bar{x})\) contains a ball \(B^{2}_{\delta r}(\bar{y})\). So, \(\omega _{2} \, \delta ^{2}r^{2} =\mathcal {H}^{2}( B^{2}_{\delta r}(\bar{y})) \le \mathcal {H}^{2}(\bar{\Omega } \cap B^{2}_{r}(\bar{x}))\), and the proof of (5.3) is done.
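The two-sided bound (5.3) can also be checked numerically. The sketch below is illustrative only (the constants are not the sharp corkscrew constants): it Monte Carlo estimates \(\mathcal {H}^{2}(\bar{\Omega } \cap B^{2}_{r}(\bar{x}))\) for a center chosen at a corner of the removed square Q, where the lower bound is least obvious, and confirms the estimate is comparable to \(r^{2}\).

```python
import random

# Omega = open unit disk minus the closed square Q of center (1/2, 0), side 1/10.
def in_closure_omega(x, y):
    in_disk = x * x + y * y <= 1.0
    in_open_square = abs(x - 0.5) < 0.05 and abs(y) < 0.05
    return in_disk and not in_open_square

random.seed(1)
cx, cy = 0.45, 0.05      # a corner of Q
r = 0.2
N = 200_000
hits = 0
for _ in range(N):
    # uniform sample in [cx-r, cx+r] x [cy-r, cy+r]; keep points in B_r
    x = random.uniform(cx - r, cx + r)
    y = random.uniform(cy - r, cy + r)
    if (x - cx) ** 2 + (y - cy) ** 2 <= r * r and in_closure_omega(x, y):
        hits += 1

area = hits / N * (2 * r) ** 2   # Monte Carlo estimate of H^2(closure(Omega) ∩ B_r)
# Ahlfors regularity: c * r^2 <= area <= pi * r^2
assert 0.1 * r * r <= area <= 3.15 * r * r
```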

Let us now prove that the Carleson-type condition (1.3) holds. Let \(\epsilon _{0}\) be the constant from the statement of Theorem 1.2. Since M has co-dimension 1, (1.3) can be written as (1.2), and thus proving (1.3) translates to proving

(5.4)

where \(\nu \) denotes the unit normal to M. But for \(\mu \)-almost every y, \(\nu (y)\) exists and \(\nu (y) = \langle 0,0,1\rangle \). Thus, the left-hand side of (5.4) is identically 0, and (5.4) is satisfied.

Finally, let us prove that M satisfies the following Poincaré inequality

(5.5)

for some \(\kappa \ge 1\) and \(\lambda \ge 1\), and where \(x \in M\), \(r >0\), and f is a Lipschitz function on \(\mathbb {R}^{3}\). By Theorem 1.7, it suffices to show that

(5.6)

for some \(\kappa \ge 1\) and \(\lambda \ge 1\), and where \(x \in M\), \(r >0\), f is a LipschitzFootnote 11 function on M, and \(\rho \) is an upper gradient of f in M.

Let f be a Lipschitz function on M, and \(\rho \) an upper gradient of f on M. Fix \(x \in M\) and \(r>0\). Let \(\tilde{x} \in \bar{\Omega }\) be such that \((\tilde{x},0) = x\), and define \(\tilde{f} : \Omega \rightarrow \mathbb {R}\) and \(\tilde{\rho } : \Omega \rightarrow [0, \infty ]\) by \(\tilde{f}(a,b) = f(a,b,0)\) and \(\tilde{\rho }(a,b) = \rho (a,b,0)\). It is easy to see that \(\tilde{f}\) is a Lipschitz function on \(\Omega \), and that \(\tilde{\rho }\) is an upper gradient of \(\tilde{f}\) in \(\Omega \). Thus, by the definition of \(\mu \), the construction of M, the fact that \(\mathcal {H}^{2}(\bar{\Omega } \setminus \Omega ) = 0\), and using (5.1) (for \(x = \tilde{x}\), \(u = \tilde{f}\), and \(\rho = \tilde{\rho }\)), we get

which is exactly (5.6), hence finishing the proof of this theorem. \(\square \)

Remark 5.4

Notice that one could take away more than one square from the ball \(B^{2}_{1}(0)\) and still get the same result. The important point in the construction above is that \(\Omega \) is a Lipschitz domain; thus, if we want to construct a set with m holes that satisfies the hypotheses of Theorem 1.2, all we need to do is make sure that the squares \(Q_{1}, \ldots , Q_{m}\) we take away from the ball \(B^{2}_{1}(0)\) are far away from each other (that is, they do not accumulate). That way, \(B^{2}_{1}(0) \setminus \bigcup _{i=1}^{m} Q_{i}\) remains a Lipschitz domain and the rest of the argument goes through directly.

As mentioned in the introduction, the example constructed in Theorem 1.8 proves that the conclusion of Theorem 1.2 is optimal.