1 Introduction

Given a subset \(E\) of the real line, and real numbers \(\alpha \in [0, 1]\) and \( \epsilon > 0\), consider all coverings of \(E\) by intervals \((I_n: n =1, 2, \ldots )\) of diameter \(\vert I_n \vert \le \epsilon \) and the associated sums \(\sum _{n\ge 1} \vert I_n \vert ^\alpha \). The infimum of these sums over all such coverings increases to a limit (possibly infinite) as \(\epsilon \) decreases to zero. This limit is called the Hausdorff measure of \(E\) in dimension \(\alpha \), or the \(\alpha \)-Hausdorff measure of \(E\), and is denoted \(H_{\alpha }E\). The \(\alpha \)-Hausdorff content of \(E\) (denoted \(H_\alpha ^\infty (E)\)) is defined as the infimum of the sums \(\sum _{n\ge 1} \vert A_n \vert ^\alpha \) over all coverings \((A_n)\) of \(E\) by arbitrary sets. It is well known that \(H_\alpha ^\infty (E)>0\) is equivalent to \(H_\alpha (E) >0\) [15, p. 100]. The Hausdorff dimension of \(E\) is defined as \(\dim E = \sup \{\alpha : H_{\alpha }(E) = \infty \}\), which turns out to coincide with \(\inf \{\beta : H_{\beta }(E) = 0 \}\). General properties of Hausdorff measures and dimensions can be found in [3, 8, 12, 16]. Frostman [6] proved that for any compact \(E \subset \mathbf {R}\) and any \(0<\alpha \le 1\), \(H_{\alpha }(E) > 0\) if and only if \(E\) carries a probability measure \(\mu \) such that \(\mu (I) \le C \vert I \vert ^{\alpha }\) for every interval \(I\) and some constant \(C\). This result, called Frostman’s lemma, is fundamental: many key results of fractal geometry rest on it.

In studying the Hausdorff dimension of a set, it is often easier to construct a probability measure and apply Frostman’s lemma than to compute the Hausdorff measure directly. This is the standard procedure for studying the dimensions of images of compact sets under various functions, in particular stochastic processes [8, 10, 11].

Frostman’s lemma has been extended to analytic subsets of \(\mathbf {R}^n\) [7], but it is now known that it cannot hold for all subsets. A separable metric space \(X\) is called universally measure zero if \(\mu (X) = 0\) for each finite non-atomic Borel measure \(\mu \) on \(X\) (a measure vanishing on singletons), or equivalently, if each non-atomic Borel measure on \(X\) is degenerate, in the sense that for each Borel set \(B \subset X\), \(\mu (B) = 0\) or \(\mu (B)= \infty \). Zindulka [19] proved the existence of universally measure zero subsets of \(\mathbf {R}^n\) with positive Hausdorff dimension (the case \(n = 2\) is also given in [5, p. 439G]). He also proved in [18] that any analytic subset of \(\mathbf {R}^n\) contains a universally measure zero subset with the same Hausdorff dimension. In particular, there exist subsets \(E\) of \(\mathbf {R}^n\) such that \(H_\alpha (E)>0\) for some \(\alpha >0\), but \(H_\beta (K) = 0\) for every compact subset \(K \subset E\) and every \(0 < \beta \le \alpha .\) For such sets, Frostman’s lemma obviously fails, which makes it difficult to study their fractal properties. For example, it is well known that if \(E\) is a compact subset of \([0, 1]\), then its image under a one-dimensional Brownian motion has Hausdorff dimension \(\min (1, 2\dim E)\) [13], but it is not known whether this property holds for more general subsets. (Kaufman [9] proved that this property holds for Brownian motion in dimension \(\ge \)2 simultaneously for all subsets of \([0, 1]\).) Many interesting Fourier-analytic properties of sets generated by stochastic processes are established using Frostman’s lemma, and it is therefore not known whether these properties still hold for sets for which the lemma is not valid [8, 17].

In this paper, we study a possible extension of Frostman’s lemma to all subsets of \([0, 1]\). Given a subset \(E\) of \([0, 1]\) and \(\alpha \ge 0\), we consider the class \(\fancyscript{D}\) of all dyadic intervals \(D_n^j = [j 2^{-n}, (j+1) 2^{-n})\) for \(n = 0, 1, 2, \ldots \) and \(j = 0, 1, \ldots , 2^n - 2\), together with the closed intervals \(D_n^j = [j 2^{-n}, (j+1) 2^{-n}]\) for \(j = 2^n - 1\).

Definition 1.1

The restricted \(\alpha \)-Hausdorff content on the set \(E\) is the set function

$$\begin{aligned} h(A) = \inf \left\{ \sum _{n\ge 0} \vert I_n \vert ^\alpha : E \cap A \subset \cup _{n\ge 0}I_n \text{ and } I_n \in \fancyscript{D} \text{ for all } n\right\} , \quad \text{for } A \subset \mathbf {R}. \end{aligned}$$
(1.1)

This function satisfies \(h(\emptyset ) = 0\), is monotone (\(h(A) \le h(B)\) for \(A \subset B\)), and is countably sub-additive (\(h(\cup _{n\ge 1} A_n) \le \sum _{n\ge 1} h(A_n)\)). (See for example Lemma 1.5.4 in [2, p. 17] for general results on outer measures.)
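When \(E\) is (an approximation by) a finite union of dyadic intervals at some depth \(N\), the infimum in (1.1) can be computed exactly by recursion on the dyadic tree: over dyadic covers and for \(\alpha \le 1\), the content of \(E\) inside an interval \(D\) is the minimum of \(\vert D \vert ^\alpha \) (covering by \(D\) itself) and the sum of the contents of its two halves. A minimal computational sketch (the function names and the depth-\(N\) discretisation are illustrative, not from the paper):

```python
# Restricted alpha-Hausdorff content h(E) over dyadic coverings, for a set E
# given as a union of depth-N dyadic intervals (encoded by their indices j).
# Over dyadic covers and for alpha <= 1, h satisfies the recursion
#   h(D) = min(|D|^alpha, h(D_left) + h(D_right)),
# with h(D) = |D|^alpha at level N when D lies in E, and h(D) = 0 otherwise.

def dyadic_content(E_indices, N, alpha):
    """h(E) for E = union of [j 2^-N, (j+1) 2^-N), j in E_indices."""
    def h(n, j):
        cost = (2.0 ** -n) ** alpha      # cost of covering by D = D_n^j itself
        if n == N:
            return cost if j in E_indices else 0.0
        lo, hi = j << (N - n), (j + 1) << (N - n)
        if not any(lo <= k < hi for k in E_indices):
            return 0.0                   # D misses E: the empty cover suffices
        return min(cost, h(n + 1, 2 * j) + h(n + 1, 2 * j + 1))
    return h(0, 0)

# Example: a "middle-half" Cantor-type set at depth 4 (keep the two outer
# quarters at each of two refinement steps); its similarity dimension is 1/2.
def cantor_indices(N):
    idx = {0}
    for _ in range(N // 2):              # one refinement step = two dyadic levels
        idx = {4 * j for j in idx} | {4 * j + 3 for j in idx}
    return idx

E = cantor_indices(4)                    # {0, 3, 12, 15}
print(dyadic_content(E, 4, 0.5))         # content at the critical exponent
print(dyadic_content(E, 4, 1.0))         # total length of the four pieces
```

The recursion mirrors Lemma 2.1 below: an optimal dyadic cover either uses \(D\) itself or splits into independent covers of the two halves of \(D\).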

We shall prove the following result:

Theorem 1.1

For any subset \(E\) of \([0, 1]\) and \(\alpha >0\), if \(H_\alpha E >0\), then there exists a non-zero Borel measure \(\mu \) on \([0, 1]\) and a non-empty subset \(E_0\) of \(E\) such that

  1. (i)

    \(\mu (I) \le 3 \vert I \vert ^{\alpha }\) for any interval \(I\),

  2. (ii)

    for every open interval \(I, \mu (I) >0\) if and only if \(I \cap E_0 \ne \emptyset \),

  3. (iii)

    for any open interval \(I\) of \([0, 1]\), if \(I \cap {{\mathrm{supp}}}(\mu ) \ne \emptyset \), then \(H_\alpha (I \cap E) >0\),

  4. (iv)

    if \(h(E_0) = h(E)\), then \(\mu (B) = \mu [0,1]\) for every Borel set \(B\) containing \(E\).

We observe that property (iii) extends the requirement, in the original Frostman lemma, that the support of \(\mu \) be contained in \(E\) (for closed \(E\)): any interval that intersects the support of \(\mu \) contains a subset of \(E\) of positive \(\alpha \)-Hausdorff measure. We conjecture that the condition \(h(E_0) = h(E)\) holds for a large class of sets \(E\). If this is the case, then the measure \(\mu \) should be an important tool for analysing the fractal geometry of \(E\), in the same way Frostman’s measure is used for compact sets.

We further explore the notion of restricted Hausdorff content to extend the notion of capacitarian dimension to non-compact sets. Consider a compact subset \(E\) of \([0, 1]\) and \(0 < \alpha \le 1\). For a Borel probability measure \(\mu \) supported by \(E\), the energy integral of \(\mu \) with respect to the kernel \(k(x)= \vert x \vert ^{-\alpha }\) is given by

$$\begin{aligned} I_\alpha (\mu ) = \int \int \frac{\mathrm{d}\mu (x) \mathrm{d}\mu (y)}{\vert x-y\vert ^\alpha }. \end{aligned}$$

The measure \(\mu \) is said to have finite energy with respect to \(k\) if \(I_\alpha (\mu ) < \infty \). The set \(E\) has positive capacity with respect to \(k\) (written \(\mathrm{Cap}_\alpha (E) > 0\)) if \(E\) carries a Borel probability measure of finite energy with respect to \(k\). If there is no such measure, \(E\) has capacity zero with respect to the kernel \(k\) and we write \(\mathrm{Cap}_\alpha (E) = 0\). The capacitarian dimension of \(E\), as introduced by Pólya and Szegő, is defined by

$$\begin{aligned} \sup \left\{ \alpha : \mathrm{Cap}_\alpha (E) > 0\right\} = \inf \left\{ \alpha :\mathrm{Cap}_\alpha (E) = 0\right\} . \end{aligned}$$

(See [8, p. 133].) The following result is the well-known Frostman theorem: for any compact subset \(E\) of \(\mathbf {R}\) and \(0 < \alpha < \beta < 1\), (1) if \(H_\beta (E) > 0\), then \(\mathrm{Cap}_\alpha (E) > 0\), and if \(\mathrm{Cap}_\alpha (E) > 0\), then \(H_\alpha (E) > 0\); (2) \(\sup \{\alpha : \mathrm{Cap}_\alpha (E) > 0\} = \inf \{\beta : \mathrm{Cap}_\beta (E) = 0 \} = \dim E\). This theorem implies in particular that for a compact set \(E\), the Hausdorff and capacitarian dimensions coincide (see for example [8, p. 133]). It also establishes a bridge between fractal geometry and potential theory. We now discuss a possible extension to all subsets of \(\mathbf {R}\).
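The dichotomy behind Frostman’s theorem can be observed numerically. The following sketch samples independent points from the natural (uniform) measure \(\mu \) on the middle-thirds Cantor set, whose Hausdorff dimension is \(\log 2/\log 3 \approx 0.631\), and estimates \(I_\alpha (\mu )\) by the empirical mean of \(\vert X - Y \vert ^{-\alpha }\). The estimate stays moderate for \(\alpha \) well below the dimension and grows as \(\alpha \) approaches it. (The sampling scheme and all names are our own illustration, not part of any proof.)

```python
import random

# Monte-Carlo estimate of the energy integral I_alpha(mu) for the natural
# measure mu on the middle-thirds Cantor set (dimension log 2 / log 3).
# A sample from mu is a random base-3 expansion with digits in {0, 2}.

def cantor_point(rng, digits=30):
    return sum(rng.choice((0, 2)) * 3.0 ** -k for k in range(1, digits + 1))

def empirical_energy(alpha, n_pairs=5000, seed=0):
    rng = random.Random(seed)
    total, used = 0.0, 0
    for _ in range(n_pairs):
        x, y = cantor_point(rng), cantor_point(rng)
        if x != y:                        # the kernel is infinite on the diagonal
            total += abs(x - y) ** -alpha
            used += 1
    return total / used

print(empirical_energy(0.40))   # alpha well below the dimension: small
print(empirical_energy(0.62))   # alpha just below the dimension: larger
```

With the same random pairs the estimate is monotone in \(\alpha \), since \(\vert x - y \vert \le 1\); for \(\alpha \) above the dimension the energy is infinite and the empirical mean keeps growing with the sample size instead of stabilising.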

Definition 1.2

A set function \(\mu \) defined on all subsets of \(\mathbf {R}\) is called a capacity if \(\mu (\emptyset ) = 0\), it is monotone, countably sub-additive, and satisfies \(\mu (\mathbf {R}) =1\). A subset \(E\) of \(\mathbf {R}\) is of positive capacity with respect to the kernel \(k(x) = \vert x \vert ^{-\alpha }\) if there exists a capacity \(\mu \) defined on \(\mathbf {R}\) such that \(\mu (E)=1\) and

$$\begin{aligned} I_\alpha (\mu ) = \int \left( \int \frac{\mathrm{d}\mu (x)}{\vert x-y\vert ^\alpha }\right) \mathrm{d}\mu (y) <\infty , \end{aligned}$$

where the involved integrals are Choquet integrals.

We shall prove the following extension of Frostman’s theorem to arbitrary subsets of \(\mathbf {R}\).

Theorem 1.2

For any subset \(E\) of \(\mathbf {R}\) and \(0 < \alpha < \beta < 1\),

  1. (1)

    if \(H_\beta (E) > 0\), then \(\mathrm{Cap}_\alpha (E) > 0\),

  2. (2)

    if \(\mathrm{Cap}_\alpha (E) > 0\), then \(H_\alpha (E) > 0\),

  3. (3)

    \(\sup \{\alpha : \mathrm{Cap}_\alpha (E) > 0\} = \inf \{\beta : \mathrm{Cap}_\beta (E) = 0 \} = \dim E \).

One can canonically define the notion of Fourier transform for a capacity. We ask whether the Fourier-analytic representation of the energy integral

$$\begin{aligned} I_\alpha (\mu ) = C \int _\mathbf {R} \vert \hat{\mu }(u) \vert ^2 \vert u \vert ^{\alpha -1} \mathrm{d}u \end{aligned}$$

remains valid in the general case of capacities. This formula is critical in studying the fractal geometry of images of compact subsets under stochastic processes.

In the next section, we prove the extended version of Frostman’s lemma (Theorem 1.1) and in Sect. 3, we discuss the notion of Choquet integrals and prove Theorem 1.2.

2 Generalisation of Frostman’s Lemma

The proof of Theorem 1.1 is inspired by a beautiful proof of Frostman’s lemma given by Mörters and Peres [15, pp. 111–113], which is based on the max-flow min-cut theorem of Ford and Fulkerson [4] from graph theory. They considered the canonical graph associated to a closed set and showed that if \(H_\alpha E >0\), then the maximum flow of the graph is positive. The measure is then generated by that flow using the Carathéodory extension theorem. We will show how one can modify the construction to obtain a measure on general sets using different results of measure theory. Before proving the theorem, we give some simple properties of the restricted Hausdorff content that will be useful in the sequel.

Lemma 2.1

If \(A\) and \(B\) are subsets of \([0, 1]\) and \(J\) is an interval of \(\fancyscript{D}\) such that \(A \subset J_1\) and \(B \subset J_2\), where \(J_1\) and \(J_2\) are the two immediate sub-intervals of \(J\) of length \(\vert J \vert /2\), then \(h(A \cup B) = \vert J \vert ^\alpha \) or \(h(A \cup B) = h(A) + h(B)\).

Proof

The only intervals of \(\fancyscript{D}\) that intersect both \(A\) and \(B\) are those containing \(J\). Hence if \(h(A \cup B) < \vert J \vert ^\alpha \), then

$$\begin{aligned} h(A \cup B)&= \inf \left\{ \sum _{n\ge 0} \vert I_n \vert ^\alpha : E \cap A \subset \cup _{n\ge 0}I_n,\ I_n \in \fancyscript{D},\ I_n \subset J_1\right\} \\&\quad +\, \inf \left\{ \sum _{n\ge 0} \vert I_n \vert ^\alpha : E \cap B \subset \cup _{n\ge 0}I_n,\ I_n \in \fancyscript{D},\ I_n \subset J_2\right\} \\&= h(A) + h(B). \end{aligned}$$

\(\square \)

Lemma 2.2

\(H_\alpha (E) >0\) if and only if \(h(E) >0\).

Proof

Since in the definition of \(h(E)\) we only use dyadic intervals to cover \(E\), while for \(H_\alpha ^\infty (E)\) arbitrary subsets are used, it is clear that \(h(E) \ge H_\alpha ^\infty (E)\). Suppose \(H_\alpha ^\infty (E) = 0\). Then for any \(\delta >0\), there exists a covering of \(E\) by sets \((A_n)\) with \(\sum _{n\ge 1} |A_n|^\alpha < \delta \). We can cover each \(A_n\) by three dyadic intervals \(J_{n,1}, J_{n,2}, J_{n,3}\), each of length \(\le \vert A_n\vert \). The resulting covering \((J_{n,k})\) is such that \(\sum _{n, k} \vert J_{n,k}\vert ^\alpha \le 3 \delta .\) It follows that \(h(E) \le 3 \delta \) and hence \(h(E) = 0\). The lemma now follows from the fact that \(H_\alpha ^\infty (A) >0\) is equivalent to \(H_\alpha (A) >0\) (see [15, p. 100]).\(\square \)
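The covering step above can also be checked computationally: a set of diameter \(d\) lies inside an interval \([a, a+d]\), which meets at most three consecutive dyadic intervals of length \(2^{-n}\) once \(2^{-n} \le d < 2^{-n+1}\). A sketch (function names are illustrative):

```python
import math
import random

# The covering step of Lemma 2.2: a set of diameter d lies in an interval
# [a, a + d], which meets at most three consecutive dyadic intervals of
# length 2^{-n} once 2^{-n} <= d < 2^{-n+1}.

def dyadic_cover(a, d):
    """Dyadic intervals of length 2^-n <= d covering [a, a + d] in [0, 1]."""
    n = math.ceil(-math.log2(d))         # smallest n with 2^-n <= d
    j_lo = math.floor(a * 2 ** n)
    j_hi = math.floor((a + d) * 2 ** n)
    return [(j * 2.0 ** -n, (j + 1) * 2.0 ** -n) for j in range(j_lo, j_hi + 1)]

# Randomised check: never more than three intervals, and they do cover.
rng = random.Random(1)
for _ in range(1000):
    d = rng.uniform(1e-6, 0.5)
    a = rng.uniform(0.0, 1.0 - d)
    cover = dyadic_cover(a, d)
    assert len(cover) <= 3
    assert cover[0][0] <= a and cover[-1][1] >= a + d
print("at most three dyadic intervals of length <= d always suffice")
```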

Lemma 2.3

If \(h(A)>0\) and \(G\) is the subset of \(A\) consisting of the points \(x \in A\) such that \(h(I)>0\) for every \(I\in \fancyscript{D}\) containing \(x\), then \(h(G) = h(A)\).

Proof

If \(x\notin G\), then there exists \(J_x \in \fancyscript{D}\) containing \(x\) such that \(h(J_x) = 0\). Then \(G^c\), the complement of \(G\), is contained in the union \(\cup _{x} J_x\). Since the class \(\fancyscript{D}\) is countable, there exists a countable subclass \((J_n)\) of \((J_x)\) that covers \(G^c\). Therefore, \(h(G^c) \le h(\cup _n J_n) \le \sum _{n} h(J_n) = 0.\) Then \(h(A) \le h(G \cup G^c) \le h(G) + h(G^c) = h(G)\); since \(h(G) \le h(A)\) by the monotonicity of \(h\), it follows that \(h(G) = h(A)\).\(\square \)

Proof of Theorem 1.1

Let \(f = h(E)\). By Lemma 2.2, \(f>0\). We will define a set function \(\nu : \fancyscript{D} \rightarrow [0, \infty )\) by a recursive procedure. The procedure will also yield a sequence \((x_n)\) of non-dyadic elements of \(E\) that will be contained in the support of the measure to be constructed. We let \(\nu (\emptyset ) = 0\) and \(\nu [0, 1] = f\). For any interval \(I \in \fancyscript{D}\), if \(\nu (I) = 0\), then \(\nu (J) = 0\) for any subinterval \(J\) of \(I\). By Lemma 2.3, we fix a (non-dyadic) point \(x_1\) of \(E\) such that \(h(I)>0\) for any interval \(I \in \fancyscript{D}\) containing \(x_1\). Let \(X_1 = \{x_1\}\). By the sub-additivity of \(h\), we have that \(f = h[0, 1] \le h[0, 1/2) + h[1/2, 1],\) and therefore there exist \(0\le f_1 \le h[0, 1/2)\) and \(0 \le f_2 \le h[1/2, 1]\) such that \(f = f_1 + f_2\). If \(X_1 \cap [0, 1/2) \ne \emptyset \), then take \(f_1 = h[0, 1/2)\), \(f_2 = f - f_1\), and if \(f_2>0\), fix a non-dyadic point \(x_2\) of \(E\cap [1/2, 1]\) such that \(h(I)>0\) for any interval \(I\) containing \(x_2\). Otherwise, if \(X_1 \cap [0, 1/2) = \emptyset \), then by definition of \(x_1\), \(X_1 \cap [1/2, 1] \ne \emptyset \). Take \(f_2 = h[1/2, 1]\), \(f_1 = f -f_2\), and the point \(x_2\) is taken in \([0, 1/2)\) in the case where \(f_1>0\). Now we take \(X_2 = X_1 \cup \{x_2\}.\) (Note that if one of the numbers \(f_1, f_2\) is zero, then \(X_2 = X_1\).)

We let \(\nu [0, 1/2) = f_1 \text{ and } \nu [1/2, 1] = f_2\) and hence \(\nu [0, 1] = \nu [0, 1/2) + \nu [1/2, 1].\) If \(f_1>0\), then we define \(f_{11} = \nu [0, 1/4)\) and \(f_{12} = \nu [1/4, 1/2)\) as follows. We start with \(f_1 \le h[0, 1/2) \le h[0, 1/4) + h[1/4, 1/2)\). If \(X_2 \cap [0, 1/4) \ne \emptyset \), then take \(f_{11} = \min \{f_1,\, h[0, 1/4)\}\) and \(f_{12} = f_1- f_{11}\) and fix \(x_3 \in [1/4, 1/2)\) if \(f_{12} >0\). Otherwise, if \(X_2 \cap [1/4, 1/2) \ne \emptyset \) take \(f_{12} = \min \{f_1,\, h[1/4, 1/2)\}\), \(f_{11} = f_1 - f_{12}\) and fix \(x_3 \in [0, 1/4)\) if \(f_{11}>0\). We take \(\nu [0,1/4) = f_{11}\) and \(\nu [1/4, 1/2) = f_{12}\). The same procedure applies to define \(\nu [1/2,3/4)\) and \(\nu [3/4, 1]\).

In general, assume that we have defined \(\nu \) on all dyadic intervals \(I\) with \(\vert I \vert = 2^{-n}\) and that \(X_k = \{x_1, x_2, \ldots , x_m\}\) is the set of points fixed so far. Then for any such interval \(I\) with \(\nu (I)>0\), we have \(I\cap X_k \ne \emptyset \). If \(I_1\) and \(I_2\) are the two disjoint subintervals of \(I\) of length \(2^{-n-1}\), then one of the intersections \(X_k \cap I_1, X_k \cap I_2 \) is empty while the other is non-empty. If \(X_k \cap I_1\ne \emptyset \), then set \(\nu (I_1) = \min \{\nu (I), h(I_1)\}\), \(\nu (I_2) = \nu (I) - \nu (I_1)\), and fix a point in \(I_2\) if \(\nu (I_2) >0\). Otherwise, if \(X_k \cap I_2 \ne \emptyset \), set \(\nu (I_2) = \min \{\nu (I), h(I_2)\}\), \(\nu (I_1) = \nu (I) - \nu (I_2)\), and fix a point in \(I_1\) if \(\nu (I_1)>0\). In either case, the set \(X_k\) is updated accordingly.
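The recursive procedure is effectively a greedy flow down the dyadic tree: the half of \(I\) containing the previously fixed point receives as much of \(\nu (I)\) as its content \(h\) allows, and the remainder flows to the other half, where a new point is then fixed. A self-contained sketch (the content \(h\) is the dyadic recursion for a toy depth-4 set; choosing the left half whenever it has positive content stands in for the choice of fixed points, which is legitimate since by Lemma 2.3 a fixed point always lies in a half of positive content; all names are illustrative):

```python
from functools import lru_cache

# Toy data: E is a union of depth-N dyadic intervals, encoded by indices.
E, N, ALPHA = {0, 3, 12, 15}, 4, 0.5

@lru_cache(maxsize=None)
def h(n, j):
    """Restricted content of E inside the dyadic interval [j 2^-n, (j+1) 2^-n)."""
    cost = (2.0 ** -n) ** ALPHA
    if n == N:
        return cost if j in E else 0.0
    lo, hi = j << (N - n), (j + 1) << (N - n)
    if not any(lo <= k < hi for k in E):
        return 0.0
    return min(cost, h(n + 1, 2 * j) + h(n + 1, 2 * j + 1))

def build_nu(depth):
    """One run of the recursive splitting: nu as a dict {(n, j): mass}."""
    nu = {(0, 0): h(0, 0)}               # nu[0, 1] = f = h(E) > 0
    for n in range(depth):
        for j in range(2 ** n):
            m = nu.get((n, j), 0.0)
            if m == 0.0:
                continue
            L, R = (n + 1, 2 * j), (n + 1, 2 * j + 1)
            # the half "holding the fixed point" is capped by its content h;
            # the remainder flows to the other half
            first, second = (L, R) if h(*L) > 0 else (R, L)
            nu[first] = min(m, h(*first))
            nu[second] = m - nu[first]
    return nu

nu = build_nu(N)
# the two invariants used in the proof: additivity down the tree, and the
# Frostman-type bound nu(I) <= h(I) <= |I|^alpha
for (n, j), m in nu.items():
    assert m <= h(n, j) + 1e-12
print(nu[(0, 0)], sum(m for (n, j), m in nu.items() if n == N))
```

Sub-additivity of \(h\) is what keeps the second half below its own content: if \(\nu (I_1) = h(I_1)\), then \(\nu (I_2) = \nu (I) - h(I_1) \le h(I) - h(I_1) \le h(I_2)\), which is the inductive step behind property (i).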

The sequence \((x_n)\) of fixed elements is such that \(\nu (I)>0\) if and only if \(I\) contains some \(x_n\). We denote by \(E_0\) the subset of \(E\) defined by

$$\begin{aligned} x\in E_0 \text{ if } \text{ and } \text{ only } \text{ if } \nu (I) >0 \text{ for } \text{ any } I \in \fancyscript{D} \text{ and } x\in I. \end{aligned}$$

Then \(E_0\) contains in particular the sequence \((x_n)\). It is also clear that the recursive procedure yields, for any interval \(I\) with \(\nu (I)>0\), a sequence \((I_n)\) of subintervals such that \(\nu (I) = \sum _{n} \nu (I_n)\) and \(\nu (I_n) = h(I_n) >0\) for every \(n\). In particular, \(x\in E_0\) if and only if every interval \(I\) containing \(x\) has a sub-interval \(J\) containing \(x\) such that \(\nu (J) = h(J) >0\). Such intervals \(J\) will be called “optimal” intervals. From any covering \((I_n)\) of \(E_0\), we can extract another covering of \(E_0\) by optimal intervals \((J_{n,i})\) such that \(J_{n,i} \subset I_n\) for all \(n, i\) and \(\sum _{n} \nu (I_n) = \sum _{n,i} \nu (J_{n, i}).\)

Using the structure of \(\fancyscript{D}\), it is clear that \(\nu \) is countably additive on \(\fancyscript{D}\) in the sense that, if \((I_n)\) is any sequence of pairwise disjoint intervals of \(\fancyscript{D}\) such that \(\cup _n I_n\in \fancyscript{D}\), then \(\nu (\cup _n I_n) = \sum _n \nu (I_n)\). We extend \(\nu \) to the algebra \(a(\fancyscript{D})\) spanned by \(\fancyscript{D}\) by

$$\begin{aligned} \nu (I_1 \cup I_2 \cup \ldots \cup I_n) = \nu (I_1) + \nu (I_2) + \cdots + \nu (I_n) \end{aligned}$$

for every finite family \((I_1, \ldots , I_n)\) of pairwise disjoint elements of \(\fancyscript{D}\). Because \(\nu \) is countably additive on \(\fancyscript{D}\), then it is also countably additive on \(a(\fancyscript{D})\) (see for example, [2, Proposition 1.3.10, p. 12]).

Consider the outer measure defined by \(\nu \):

$$\begin{aligned} \nu ^*(A) = \inf \left\{ \sum _{n\ge 0} \nu (I_n): A \subset \cup _{n\ge 0}I_n, \text{ and } I_n \in a(\fancyscript{D}) \text{ for } \text{ all } n\right\} . \end{aligned}$$

We first prove that \(\nu ^*\) can be obtained by using the semi-algebra \(\fancyscript{D}\) instead of \(a(\fancyscript{D})\), that is, if

$$\begin{aligned} k(A) = \inf \left\{ \sum _{n\ge 0} \nu (I_n): A \subset \cup _{n\ge 0}I_n, \text{ and } I_n \in \fancyscript{D} \text{ for } \text{ all } n\right\} , \end{aligned}$$

then \(\nu ^*(A) = k(A)\). Let \(\fancyscript{C}\) be the class of all coverings \((I_n)\) of \(A\) by intervals \(I_n \in \fancyscript{D}\) and \(\fancyscript{B}\) be the class of such coverings by elements of \(a(\fancyscript{D})\). Then

$$\begin{aligned} \nu ^*(A) = \inf \left\{ \sum _{n\ge 0} \nu (I_n): (I_n) \in \fancyscript{B}\right\} \quad \text{ and } \quad k(A) = \inf \left\{ \sum _{n\ge 0} \nu (A_n): (A_n) \in \fancyscript{C}\right\} . \end{aligned}$$

Clearly, \(\fancyscript{C}\) is contained in \(\fancyscript{B}\) and hence \(\nu ^*(A) \le k(A)\). Let \((A_n) \in \fancyscript{B}\). Then \(A_n \in a(\fancyscript{D})\) for all \(n\), and therefore each \(A_n\) is a union of a finite family of pairwise disjoint elements of \(\fancyscript{D}\). Replacing every \(A_n\) by the corresponding family, we obtain a covering \((H_m)\) of \(A\) by elements of \(\fancyscript{D}\). By definition, \(\sum _{n\ge 0} \nu (A_n) = \sum _{m\ge 0} \nu (H_m)\). Then \(k(A) \le \nu ^*(A)\), and hence

$$\begin{aligned} \nu ^*(A) = \inf \left\{ \sum _{n\ge 0} \nu (I_n): A \subset \cup _{n\ge 0}I_n, \text{ and } I_n \in \fancyscript{D} \text{ for } \text{ all } n \right\} . \end{aligned}$$

We now apply the Carathéodory extension theorem to \(\nu ^*\) to obtain a measure \(\mu \) on the \(\sigma \)-algebra \(\fancyscript{D}_{\nu }\) of all \(\nu ^*\)-measurable subsets of \([0, 1]\). (See for example, Theorem 1.5.6 in [2, p. 18]). The \(\sigma \)-algebra \(\fancyscript{D}_{\nu }\) contains \(a(\fancyscript{D})\) and hence in particular the Borel \(\sigma \)-algebra of \([0, 1]\). Also \(\nu ^*\) coincides with \(\nu \) on \(a(\fancyscript{D})\).

Let us show that the measure \(\mu \) satisfies all the conditions of the theorem.

  1. (i)

    For any \(I \in \fancyscript{D}\), we have by construction of \(\nu \) that \(\nu ^*(I) = \nu (I)\le h(I) \le \vert I \vert ^\alpha \). For a general interval \(I\), assume that \(2^{-n} < \vert I \vert \le 2^{-n+1}\). Then \(I\) can be covered by at most three pairwise disjoint intervals \(I_1, I_2, I_3\) of \(\fancyscript{D}\) of length \(2^{-n}\). Then

    $$\begin{aligned} \mu (I)&= \nu ^*(I) \le \nu ^*(I_1 \cup I_2\cup I_3) \le \nu ^*(I_1) + \nu ^*(I_2) + \nu ^*(I_3) \\&= \nu (I_1) + \nu (I_2) + \nu (I_3) \le h(I_1) + h(I_2) + h(I_3) \\&\le \vert I_1 \vert ^\alpha + \vert I_2 \vert ^\alpha + \vert I_3 \vert ^\alpha \le 3 \vert I \vert ^\alpha . \end{aligned}$$
  2. (ii)

    The second condition is obvious by definition of \(E_0\). Indeed, if \(\mu (I) >0\) for an open interval \(I\), then there exists a dyadic sub-interval \(J\) of \(I\) such that \(\mu (J)>0.\) This implies that \(J \cap E_0 \ne \emptyset \). Conversely, if \( x\in I \cap E_0\), then there exists \(J\in \fancyscript{D}\) such that \(x\in J \subset I\). Hence \(\mu (I) \ge \mu (J)>0\) by definition of \(E_0\).

  3. (iii)

If \(I\) is an open interval such that \(I \cap {{\mathrm{supp}}}(\mu ) \ne \emptyset \), then \(\mu (I)>0\), and there exists \(J\in \fancyscript{D}\) contained in \(I\) with \(\mu (J)>0\). Since \(\mu (J) \le h(J)\) and \(h(J) = h(J \cap E)\) by definition of \(h\), we get \(h(J \cap E) > 0\). By Lemma 2.2, this is equivalent to \(H_\alpha (E \cap J)>0\).

  4. (iv)

Let us show that if \(h(E_0) = f\), then \(\nu ^*(E_0) = f\). This will imply that for any Borel subset \(B\) containing \(E\), \(\mu (B) = \nu ^*(B) \ge \nu ^*(E) \ge \nu ^*(E_0) = f = \mu [0, 1]\); since also \(\mu (B) \le \mu [0, 1]\), we get \(\mu (B) = \mu [0, 1]\). Consider any covering \((I_n)\) of \(E_0\) by elements of \(\fancyscript{D}\). As already discussed, we may assume that all the intervals \(I_n\) are optimal. Then \(\sum _n \nu (I_n) = \sum _n h(I_n) \ge h(\cup _n I_n) \ge h(E_0)= f.\) Hence in particular \(\nu ^*(E_0) \ge f\), and it follows that \(\nu ^*(E_0) = f = \nu [0, 1]\).\(\square \)

Remark 2.1

One can observe that if for any decreasing sequence \((K_n)\) of compact subsets of \([0, 1]\),

$$\begin{aligned} h(\cap _{n} K_n) = \lim _{n\rightarrow \infty } h(K_n), \end{aligned}$$
(2.1)

then the hypothesis \(h(E_0) = h(E)\) of condition (iv) in Theorem 1.1 is satisfied. Indeed, if we denote by \(F_n\) the closure of the union of the intervals \(I \in \fancyscript{D}\) of length \(2^{-n}\) such that \(\nu (I)>0\), then by definition of \(E_0\), we have that \(E_0 = \cap _{n\ge 1} F_n \cap E.\) Clearly, \(\nu (F_n) = h(F_n) = h(E)\) for any \(n\ge 1\). Then \(h(E_0) = h(\cap _{n\ge 1} F_n \cap E) = h(\cap _{n\ge 1} F_n ) = \lim _{n\rightarrow \infty } h(F_n) = h(E).\)

Condition (2.1) holds for Choquet capacities (see for example [1]). An open question is to determine for which class of sets \(E\) the set function \(h\) is a Choquet capacity.

3 Choquet Integrals and Sets of Positive Capacity

Definition 3.1

Let \(\mu \) be a capacity on \(\mathbf {R}\) and let \(f: \mathbf {R} \rightarrow \mathbf {R}\) be a function. The Choquet integral of \(f\) with respect to \(\mu \) is defined as

$$\begin{aligned} \int f \mathrm{d}\mu = \int _{-\infty }^0 [\mu \left( x\in \mathbf {R}: f(x)\ge t\right) - 1]\mathrm{d}t + \int _0^\infty \mu \left( x\in \mathbf {R}: f(x)\ge t\right) \mathrm{d}t. \end{aligned}$$

(Here the integrals on the right-hand side are classical Riemann integrals.) If \(f\) is a non-negative function, then

$$\begin{aligned} \int f \mathrm{d}\mu = \int _0^\infty \mu \left( x\in \mathbf {R}: f(x)\ge t\right) \mathrm{d}t. \end{aligned}$$

For any subset \(A\) of \(\mathbf {R}\) and \(f\ge 0\),

$$\begin{aligned} \int _A f \mathrm{d}\mu = \int f 1_A \mathrm{d}\mu =\int _0^\infty \mu \left( x\in A: f(x)\ge t\right) \mathrm{d}t. \end{aligned}$$

The Choquet integral has the following elementary properties (see for example [14, p. 71]): for any non-negative functions \(f\) and \(g\) and a positive constant \(c\),

  1. (1)

    \(\int (c f)\mathrm{d}\mu = c \int f \mathrm{d}\mu \),

  2. (2)

    \(\int (f + c) \mathrm{d}\mu = c + \int f \mathrm{d}\mu \),

  3. (3)

    \(\int (f + g) \mathrm{d}\mu \le \int f \mathrm{d}\mu + \int g \mathrm{d}\mu ,\)

  4. (4)

    if \(f \le g\), then \(\int f \mathrm{d}\mu \le \int g\mathrm{d}\mu \),

  5. (5)

    if \(A \subset \mathbf {R}\), then \(\int _A \mathrm{d}\mu = \mu (A).\)
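On a finite ground set, the layer-cake formula of Definition 3.1 becomes a finite sum: sort the values of \(f\) in decreasing order and weight each layer by the capacity of the corresponding super-level set. A sketch with the toy capacity \(\mu (A) = (\vert A \vert /n)^\gamma \) (our own choice, not from the paper; it is monotone, sub-additive for \(\gamma \le 1\), normalised, and non-additive for \(\gamma < 1\)):

```python
# Choquet integral of a non-negative f on a finite ground set, via the
# layer-cake formula  int f dmu = int_0^infty mu({f >= t}) dt: for values
# sorted in decreasing order v_1 >= ... >= v_n (with v_{n+1} = 0), it equals
#   sum_k (v_k - v_{k+1}) * mu({indices of the k largest values}).

def choquet(values, capacity):
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    v = [values[i] for i in order] + [0.0]
    return sum((v[k] - v[k + 1]) * capacity(frozenset(order[:k + 1]))
               for k in range(len(order)))

n = 4
mu_sqrt = lambda A: (len(A) / n) ** 0.5   # monotone, sub-additive, mu(full) = 1
mu_add = lambda A: len(A) / n             # the additive (uniform) comparison

f = [3.0, 1.0, 2.0, 0.0]
print(choquet(f, mu_add))    # additive capacity: the ordinary mean -> 1.5
print(choquet(f, mu_sqrt))   # concave capacity weights high layers more
```

For an additive \(\mu \), the Choquet integral coincides with the ordinary expectation, which is a standard sanity check on the layer-cake formula.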

We can now prove Theorem 1.2.

Proof of Theorem 1.2

Our argument is inspired by a proof of Frostman’s theorem given in [8, p. 133].

  1. (1)

Assume that \(H_\beta (E) > 0\) and consider the restricted \(\beta \)-Hausdorff content \(h\) on \(E\) [see relation (1.1), with \(\alpha \) replaced by \(\beta \)]. Let \(\mu (A) = h(A)/c\), where \(c = h(E)\). It is clear that \(\mu \) is a capacity on \(\mathbf {R}\) and \(\mu (E) = 1.\) Let us show that \(I_\alpha (\mu ) < \infty .\) For any fixed real number \(y\), we can partition the real line into the subsets

    $$\begin{aligned} A_j&= \left\{ x \in \mathbf {R}: \frac{1}{2^{j+1}} < \vert x -y\vert \le \frac{1}{2^j}\right\} , \quad j = 1, 2, \ldots \\ A_0&= \left\{ x \in \mathbf {R}: \vert x -y\vert > \frac{1}{2} \right\} . \end{aligned}$$

    Then

    $$\begin{aligned} \int \frac{\mathrm{d}\mu (x)}{\vert x - y \vert ^\alpha }&= \int \left( \sum _{j\ge 0} 1_{A_j}(x) \frac{1}{\vert x - y \vert ^\alpha }\right) \mathrm{d}\mu (x) \le \sum _{j \ge 0} \int 1_{A_j}(x) \frac{1}{\vert x - y \vert ^\alpha }\mathrm{d}\mu (x) \\&= \int _{A_0} \frac{\mathrm{d}\mu (x)}{\vert x - y \vert ^\alpha } + \sum _{j = 1}^\infty \int _{A_j} \frac{\mathrm{d}\mu (x)}{\vert x - y \vert ^\alpha }. \end{aligned}$$

    The first integral on the right-hand side is bounded by \(2^\alpha \mu (A_0) \le 2^\alpha \). For any \(j \ge 1, A_j = \big [y-2^{-j}, y-2^{-j-1}\big ) \cup \big (y+2^{-j-1}, y+2^{-j}\big ]\). As discussed in the proof of Theorem 1.1, \(h(I) \le 3 \vert I \vert ^\beta \) for any interval \(I\). Then \(h(A_j) \le h\big [y-2^{-j}, y-2^{-j-1}\big ) + h \big (y+2^{-j-1}, y+2^{-j}\big ] \le 6 \times 2^{-(j+1)\beta }\). Therefore, \(\mu (A_j) \le 2^{-(j+1)\beta } c_1\), where \(c_1 = 6/c\) and \(c= h(E)\). Then

    $$\begin{aligned} \int _{A_j} \frac{\mathrm{d}\mu (x)}{\vert x - y \vert ^\alpha } \le \int _{A_j} 2^{(j+1)\alpha } \mathrm{d}\mu (x) = 2^{(j+1)\alpha } \mu (A_j) \le {2^{-(j+1)(\beta - \alpha )}}c_1. \end{aligned}$$

    Then

    $$\begin{aligned} \int \frac{\mathrm{d}\mu (x)}{\vert x - y \vert ^\alpha } \le 2^\alpha + c_1 \sum _{j\ge 1} 2^{-(j+1)(\beta -\alpha )} = c_2 < \infty \end{aligned}$$

    because \(\beta >\alpha .\) Therefore,

    $$\begin{aligned} I_\alpha (\mu ) = \int \left( \int \frac{\mathrm{d}\mu (x)}{\vert x-y\vert ^\alpha }\right) \mathrm{d}\mu (y) \le c_2 \int \mathrm{d}\mu (y) = c_2. \end{aligned}$$

    We conclude that \(\mathrm{Cap}_\alpha E > 0\).

  2. (2)

    Let us assume that \(\mathrm{Cap}_\alpha E > 0\) and show that \(H_\alpha E > 0\). Since \(\mathrm{Cap}_\alpha E > 0\), there exists a capacity \(\mu \) on \(\mathbf {R}\) such that \(\mu (E) = 1\) and \(I_\alpha (\mu ) < \infty \). For any \(t > 0\), define

    $$\begin{aligned} E_t = \left\{ y \in E: \int \frac{\mathrm{d}\mu (x)}{\vert x - y\vert ^\alpha } \le t \right\} . \end{aligned}$$

We can choose \(t\) so large that \(\mu (E_t) >0\). Indeed, suppose that \(\mu (E_t) = 0\) for every \(t > 0\). Because \(1 = \mu (\mathbf {R}) \le \mu (E_t) + \mu (E_t^c) = \mu (E_t^c) \le \mu (\mathbf {R}) = 1\), it follows that \(\mu (E_t^c) = 1.\) Then we find that

    $$\begin{aligned} I_\alpha (\mu )&\ge \int _{E_t^c} \left( \int \frac{\mathrm{d}\mu (x)}{\vert x - y\vert ^\alpha }\right) \mathrm{d}\mu (y) \ge \int _{E_t^c} t \,\mathrm{d}\mu (y) \quad \text{(by } \text{ definition } \text{ of } E_t) \\&= t \mu (E_t^c) = t. \end{aligned}$$

    It follows that \(I_\alpha (\mu ) = \infty \), a contradiction. Let us now fix \(t > 0\) such that \(\mu (E_t) >0\) and consider a covering \((I_n)\) of \(E_t\) by intervals. We want to estimate \(\sum _{n}\vert I_n \vert ^\alpha \) from below. For this purpose, we may assume that \(I_n \cap E_t \ne \emptyset \), for all \(n\). Select \(y_n \in I_n \cap E_t, \, n = 1, 2 ,\ldots \) Since \(\vert I_n \vert ^\alpha \ge \vert x - y_n \vert ^\alpha \) for any \(x \in I_n\), we have that

    $$\begin{aligned} \int _{I_n} \frac{\mathrm{d}\mu (x)}{\vert x - y_n\vert ^\alpha } \ge \int _{I_n} \frac{\mathrm{d}\mu (x)}{\vert I_n\vert ^\alpha } = \frac{\mu (I_n)}{\vert I_n\vert ^\alpha }. \end{aligned}$$

    Then

    $$\begin{aligned} \mu (I_n) \le \vert I_n \vert ^\alpha \int _{I_n} \frac{\mathrm{d}\mu (x)}{\vert x - y_n\vert ^\alpha } \le t\vert I_n \vert ^\alpha \quad (\text{ since } y_n \in E_t). \end{aligned}$$

    Therefore

    $$\begin{aligned} \sum \vert I_n \vert ^\alpha \ge \frac{1}{t} \sum _{n} \mu (I_n) \ge \frac{1}{t} \mu (\cup _n I_n)\ge \frac{1}{t} \mu (E_t)> 0 \end{aligned}$$

Since \((I_n)\) is an arbitrary covering of \(E_t\), it follows that \(H_\alpha (E_t) >0 \) and in particular \(H_\alpha (E) > 0\).

  3. (3)

The first equality is obvious by definition. The second is a consequence of (1) and (2). Indeed, denote \(\gamma = \sup \{\alpha : \mathrm{Cap}_\alpha (E) > 0\}\). For any \(\alpha < \gamma \), we have that \(\mathrm{Cap}_\alpha (E) > 0\) and then, by (2), \(H_\alpha (E) >0\). Thus \(\dim E \ge \alpha \), and therefore \(\dim E \ge \gamma \). If \(\dim E > \gamma ,\) then choose \(\gamma < \alpha < \beta < \dim E\); we have \(H_{\beta }(E) > 0\), and then, by (1), \(\mathrm{Cap}_\alpha (E) > 0\), which contradicts the definition of \(\gamma \). It follows that \(\dim E = \gamma \).\(\square \)
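The annulus decomposition in part (1) can be tested numerically in the simplest case: Lebesgue measure on \([0, 1]\) is a capacity with \(\mu (I) \le \vert I \vert ^\beta \) for \(\beta = 1\), and the estimate bounds \(\sup _y \int \mathrm{d}\mu (x)/\vert x - y \vert ^\alpha \) by \(2^\alpha + c_1 \sum _{j\ge 1} 2^{-(j+1)(\beta - \alpha )}\). A sketch comparing the bound with direct numerical integration (Lebesgue measure and the constant \(c_1 = 6\) are our stand-ins for the general \(h\)-based capacity):

```python
# Numeric check of the annulus estimate in the proof of part (1), for the
# simplest capacity satisfying mu(I) <= |I|^beta with beta = 1: Lebesgue
# measure on [0, 1].

def direct_integral(y, alpha, steps=100000):
    # midpoint rule for int_0^1 |x - y|^{-alpha} dx (integrable for alpha < 1)
    dx = 1.0 / steps
    return sum(abs((i + 0.5) * dx - y) ** -alpha for i in range(steps)) * dx

def annulus_bound(alpha, beta=1.0, c1=6.0, terms=200):
    # 2^alpha from the outer annulus A_0, plus the geometric series over j
    return 2.0 ** alpha + c1 * sum(2.0 ** (-(j + 1) * (beta - alpha))
                                   for j in range(1, terms + 1))

alpha = 0.5
worst = max(direct_integral(y, alpha) for y in (0.0, 0.25, 0.5))
print(worst, "<=", annulus_bound(alpha))
```

The bound is far from sharp (that is irrelevant for the proof); what matters is that the geometric series converges precisely because \(\beta > \alpha \).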