1 Introduction and main results

In this paper we investigate the multifractal properties of continuous convex functions defined on \([0,1]^{d}\). This paper is a natural continuation of our papers [3, 4], where generic multifractal properties of measures and of functions monotone increasing in several variables (in short: MISV) were studied. It is interesting that all these natural objects supported on \([0,1]^d\) have very different typical multifractal behaviors.

Let us first recall that the pointwise Hölder exponent and the singularity spectrum for a locally bounded function are defined as follows.

Definition 1

Let \(f \in L^\infty ( { [0,1]^ { d } })\). For \(h\ge 0\) and \({\mathbf {x}}\in { [0,1]^ { d } }\), the function f belongs to \(C^h({\mathbf {x}})\) if there are a polynomial P of degree at most \([h]\) and a constant \(C>0\) such that, for all \({\mathbf {x}}'\) close to \({\mathbf {x}}\),

$$\begin{aligned} |f({\mathbf {x}}') - P({\mathbf {x}}'-{\mathbf {x}})| \le C |{\mathbf {x}}' -{\mathbf {x}}|^h. \end{aligned}$$
(1)

The pointwise Hölder exponent of f at \({\mathbf {x}}\) is

$$\begin{aligned} h_f({\mathbf {x}}) = \sup \left\{ h\ge 0: \ f\in C^h({\mathbf {x}})\right\} . \end{aligned}$$
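
For instance (a simple illustration added here for concreteness), fix a point \({\mathbf {x}}_0\in (0,1)^d\) and consider \(f({\mathbf {x}})=|{\mathbf {x}}-{\mathbf {x}}_0|^{3/2}\). Taking \(P\equiv 0\), one has

$$\begin{aligned} |f({\mathbf {x}}') - P({\mathbf {x}}'-{\mathbf {x}}_0)| = |{\mathbf {x}}' -{\mathbf {x}}_0|^{3/2} \le |{\mathbf {x}}' -{\mathbf {x}}_0|^{h} \quad \text{ for every } h\le 3/2 \text{ and } |{\mathbf {x}}'-{\mathbf {x}}_0|\le 1, \end{aligned}$$

while no polynomial can improve the exponent beyond \(3/2\), so \(h_f({\mathbf {x}}_0)=3/2\); at every other point f is \( {{C^ {\infty }}}\) and \(h_f({\mathbf {x}})=+\infty \).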

In the following, \(\dim =\dim _{H}\) denotes the Hausdorff dimension.

Definition 2

The singularity spectrum of f is the mapping

$$\begin{aligned} d_f(h)=\dim E_f({h}) , \ \ \text{ where } E_f({h}) =\left\{ {\mathbf {x}}: h_f({\mathbf {x}})=h\right\} . \end{aligned}$$

By convention \(\dim \emptyset = -\infty \). We will also use the sets

$$\begin{aligned} E_f^{\le }(h) =\left\{ {\mathbf {x}}: h_{f}({\mathbf {x}})\le h \right\} \supset E_f({h}) . \end{aligned}$$
(2)

We denote by \( {{\mathcal {CC}^{d}}}\) the set of continuous convex functions \(f: {[0,1]^d}\rightarrow {\mathbb {R}}\). Equipped with the supremum norm \(\Vert \cdot \Vert \), \( {\mathcal {CC}^{d}}\) is a separable complete metric space. An open ball in \( {\mathcal {CC}^{d}}\) with center \(f\in \mathcal {CC}^{d}\) and radius \(r\ge 0\) is written \(B_{\Vert \cdot \Vert }(f,r)\), and the corresponding closed ball is \(\overline{ B}_{\Vert \cdot \Vert }(f,r)\).

In this paper, we first prove an upper bound for the multifractal spectrum of all functions in \(\mathcal {CC}^{d}\).

Theorem 1

For any function \(f\in \mathcal {CC}^{d}\), one has

$$\begin{aligned} d_f(h) \le {\left\{ \begin{array}{ll} \ \ \, d-1 &{} \text{ if } h \in [0,1)\\ d+h-2 &{} \text{ if } h \in [1,2]\\ \ \ \ \ \ d &{} \text{ if } h > 2. \end{array}\right. } \end{aligned}$$

Then we compute the multifractal spectrum of typical functions in \(\mathcal {CC}^{d}\). Recall that a property is typical, or generic, in a complete metric space E when it holds on a residual set, i.e. a set whose complement is of first Baire category.

Theorem 2

For typical functions \(f\in \mathcal {CC}^{d}\), one has

$$\begin{aligned} d_f(h) = {\left\{ \begin{array}{ll} \ \ \, d-1 &{} \text{ if } h =0\\ d+h-2 &{} \text{ if } h \in [1,2]\\ \ \ -\infty &{} \text{ otherwise }. \end{array}\right. } \end{aligned}$$

More precisely, one has \(E_{f}({0}) = {\partial } ({[0,1]^d})\).

It is interesting to compare Theorem 2 with the regularity of other typical objects naturally defined on the cube \([0,1]^d\).

When \(d=1\), generic properties of continuous functions on \([0,1]\) have been studied for a long time (see for instance [5, 6], and the other references we mention in the present paper). It was proved that typical continuous functions satisfy \(h_f(x)=0\) for every \(x\in [0,1]\). In [2], typical monotone continuous functions on the interval \([0,1]\) were proved to have a much more interesting multifractal behavior: such functions \(f:[0,1]\rightarrow \mathbb {R}\) satisfy

$$\begin{aligned} d_f(h) = \dim E^{\le }_f(h) = {\left\{ \begin{array}{ll} \ \ h &{} \text{ if } h \in [0,1]\\ -\infty &{} \text{ otherwise }. \end{array}\right. } \end{aligned}$$
(3)

The same holds true for typical monotone functions (not necessarily continuous). We remark that it also follows, for example from results in [2], that for arbitrary monotone functions there is an upper estimate

$$\begin{aligned} \dim E^{\le }_f(h) \le h \text{ for } h \in [0,1]. \end{aligned}$$
(4)

It is interesting to extend these results to higher dimensions.

The first natural way is to consider Borel measures on the cube. The local regularity of a positive measure \(\mu \) at a given \({\mathbf {x}}\in { [0,1] }^d\) is given by a local dimension (or a local Hölder exponent) \(h_\mu ({\mathbf {x}})\), defined as

$$\begin{aligned} h_\mu ({\mathbf {x}})=\liminf _{r\rightarrow 0^+} \frac{\log \mu (B({\mathbf {x}},r) )}{ \log r}, \end{aligned}$$

where \(B({\mathbf {x}},r)\) denotes the ball with center \({\mathbf {x}}\) and radius r. The singularity spectrum of \(\mu \) is the map

$$\begin{aligned} d_\mu : h\ge 0 \mapsto \dim \, E_\mu (h), \end{aligned}$$

where \(E_\mu (h):=\{{\mathbf {x}}\in { [0,1]^d }: h_\mu ({\mathbf {x}})=h\}.\)
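
Two elementary examples (added here for illustration): for the Lebesgue measure \(\mu \) on \([0,1]^d\) one has \(\mu (B({\mathbf {x}},r))\asymp r^{d}\) for every \({\mathbf {x}}\in [0,1]^d\) and small \(r\), so

$$\begin{aligned} h_\mu ({\mathbf {x}})=\liminf _{r\rightarrow 0^+} \frac{\log \mu (B({\mathbf {x}},r) )}{ \log r} = d \quad \text{ for every } {\mathbf {x}}\in [0,1]^d, \end{aligned}$$

whereas for a Dirac mass \(\mu =\delta _{{\mathbf {x}}_0}\) one has \(\mu (B({\mathbf {x}}_0,r))=1\) for all \(r>0\), hence \(h_\mu ({\mathbf {x}}_0)=0\).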

In [3] (see also [1] for a nice generalization to all compact sets in \(\mathbb {R}^d\)), it is proved that typical measures \(\mu \) supported on \([0,1]^d\) satisfy a multifractal formalism, and

$$\begin{aligned} d_\mu (h) = {\left\{ \begin{array}{ll} \ \ h &{} \text{ if } h \in [0,d]\\ -\infty &{} \text{ otherwise }. \end{array}\right. } \end{aligned}$$

Fig. 1 Typical spectra for measures, for MISV and for convex functions

Another interesting class consists of the functions that are continuous and monotone increasing in several variables (in short: MISV). These functions extend the one-dimensional monotone functions to higher dimensions in a different direction. A function \(f: { [0,1]^ { d } } \rightarrow { \mathbb {R} }\) is MISV when for all \(i\in \{1,\ldots ,d\}\), the functions

$$\begin{aligned} f^{(i)}(t)= f(x_{1},\ldots ,x_{i-1},t,x_{i+1},\ldots ,x_{d}) \end{aligned}$$
(5)

are continuous and monotone increasing, for every fixed choice of the other coordinates \(x_{j}\), \(j\ne i\). We use the notation

$$\begin{aligned} { { \mathcal M } ^d } =\left\{ f\in C( { [0,1]^ { d } }) : f \text { MISV} \right\} . \end{aligned}$$

The space \( { { \mathcal M } ^d }\) is a separable complete metric space when equipped with the supremum \(L^{ { \infty }}\) norm for functions. Typical MISV functions satisfy (see [2, 4])

$$\begin{aligned} d_f(h) = {\left\{ \begin{array}{ll} d-1+h &{} \text{ if } h \in [0,1]\\ \ \ -\infty &{} \text{ otherwise }. \end{array}\right. } \end{aligned}$$
(6)

In Fig. 1 we compare our new results about generic continuous convex functions with the earlier results we mentioned.

Remark 1

One cannot directly infer Theorems 1 and 2 by integrating MISV functions or measures on \([0,1]^d\). For instance, letting \(f(x_{1},x_{2})=10(x_1^2+x_2^2)+x_{1}^{2}x_{2}^{2}\), its second differential \(d^{2}f(x_{1},x_{2},h_{1},h_{2})=(20+2x_{2}^{2})h_{1}^{2} +(20+2x_{1}^{2})h_{2}^{2}+8x_{1}x_{2}h_{1}h_{2}\) is positive definite for any \((x_{1},x_{2})\in [-1,1]^{2}\). Hence this function f is strictly convex on \([-1,1]^{2}\), but \( {\partial }_{1}f=20x_{1}+2x_{1}x_{2}^{2}\) is monotone only in \(x_{1}\) and \( {\partial }_{2}f=20x_{2}+2x_{2}x_{1}^{2}\) is monotone only in \(x_{2}\).
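
For the reader's convenience, here is a direct verification of the positive definiteness claimed in Remark 1 (a short computation added here). The Hessian of f is

$$\begin{aligned} H_f(x_1,x_2)=\begin{pmatrix} 20+2x_{2}^{2} & 4x_{1}x_{2}\\ 4x_{1}x_{2} & 20+2x_{1}^{2} \end{pmatrix}, \end{aligned}$$

whose first leading principal minor is \(20+2x_{2}^{2}\ge 20>0\) and whose determinant equals

$$\begin{aligned} \det H_f(x_1,x_2)= 400+40x_{1}^{2}+40x_{2}^{2}-12x_{1}^{2}x_{2}^{2}\ \ge \ 400-12 \ >\ 0 \quad \text{ on } [-1,1]^{2}, \end{aligned}$$

so f is indeed strictly convex there; on the other hand \({\partial }_{2}( {\partial }_{1}f)=4x_{1}x_{2}\) changes sign on \([-1,1]^{2}\), which is exactly why \( {\partial }_{1}f\) fails to be monotone in \(x_{2}\).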

2 Preliminary results

2.1 Basic notation

If it is not stated otherwise we work in \( {\mathbb {R}}^{d}\). Points in the space are denoted by \( {\mathbf {x}}=(x_{1},\ldots ,x_{d})\). The j’th unit vector is denoted by

$$\begin{aligned} {{\mathbf {e}}}_{j}=(0,\ldots ,0,\underset{\underset{j}{\uparrow }}{1},0,\ldots ,0). \end{aligned}$$

Open balls in \(\mathbb {R}^d\) are denoted by \(B({\mathbf {x}},r)\) (to be distinguished from the balls \(B_{\Vert \cdot \Vert }(f,\varepsilon ) \) in \(\mathcal {CC}^{d}\)).

2.2 Local regularity results of continuous convex functions

First, recall that \( {{C^ {\infty }}}\) functions are dense in \( {\mathcal {CC}^{d}}\).

Remark 2

To see this, take \( {\psi }\ge 0\) a \( {{C^ {\infty }}}\) function, which is 0 outside \(\mathbf{B}( {{\mathbf {0}}},1)\), such that \( \displaystyle \int _{{\mathbb {R}}^{d}} {\psi }=1\). Put \( {\psi }_{\lambda }( {\mathbf {x}})=\lambda ^{-d} {\psi }(\frac{1}{\lambda } {\mathbf {x}})\). If f is convex and continuous on \( {[-1,1]^d}\), then the convolution \({{{\overline{f}}}_{\lambda }}( {\mathbf {x}})=\int _{ {[-1,1]^d}}f( {{\mathbf {y}}}) {\psi }_{\lambda }( {\mathbf {x}}- {{\mathbf {y}}})d {{\mathbf {y}}}\) is convex on \([-1+\lambda ,1-\lambda ]^{d}\) and \(f_{\lambda }( {\mathbf {x}})={{{\overline{f}}}_{\lambda }} ((1-\lambda ) {\mathbf {x}})\) is convex on \( {[-1,1]^d}\). Using the uniform continuity of f on \( {[-1,1]^d}\), one easily sees that \(||f-f_{\lambda }||\rightarrow 0\) as \(\lambda \rightarrow 0.\) One concludes by using an obvious linear transformation mapping \([0,1]^{d}\) onto \([-1,1]^{d}\).
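
A standard concrete choice for \( {\psi }\) (one admissible example, stated here for completeness) is the bump function

$$\begin{aligned} {\psi }( {\mathbf {x}})= c_{d}\, \exp \Big ( \frac{-1}{1-| {\mathbf {x}}|^{2}}\Big ) \ \text{ if } | {\mathbf {x}}|<1, \qquad {\psi }( {\mathbf {x}})=0 \ \text{ if } | {\mathbf {x}}|\ge 1, \end{aligned}$$

where \(c_{d}>0\) is chosen so that \( \int _{{\mathbb {R}}^{d}} {\psi }=1\); this function is \( {{C^ {\infty }}}\), non-negative, and vanishes outside \(\mathbf{B}( {{\mathbf {0}}},1)\), as required above.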

A first lemma allows one to control the left and right partial derivatives of all functions in a neighborhood of a convex differentiable function in \(\mathcal {CC}^{d}\). The notations \({\partial }_{j,+}f\) and \({\partial }_{j,-}f\) are used for the right and left partial j-th derivatives of a convex function f.

Lemma 3

Suppose \(f\in {\mathcal {CC}^{d}}\cap C^{1} ([0,1]^d)\) and \(\varepsilon >0\). There exists \(\varrho _{f,{\varepsilon }}>0\), such that for all \(j\in {\{1,\ldots ,d \}}\), if \(g\in B_{\Vert \cdot \Vert }(f,\varrho _{f,{\varepsilon }})\), then for every \(x_{j}\in [ {\varepsilon },1- {\varepsilon }]\), \(x_{i}\in [0,1]\), \(i\in {\{1,\ldots ,d \} {\setminus } \{j \}}\) we have

$$\begin{aligned} | {\partial }_{j,\pm } g( {{x_ {1} ,\ldots ,x_ {d}}})- {\partial }_{j}f( {{x_ {1} ,\ldots ,x_ {d}}})|< {\varepsilon }. \end{aligned}$$
(7)

Proof

It is enough to fix one \(j \in \{1,\ldots ,d\}\). Recall that \(f\in {\mathcal {CC}^{d}}\cap C^{1} ({[0,1]^d})\) implies that \( {\partial }_{j}f\) is non-decreasing in \(x_{j}\) and is uniformly continuous on \( {[0,1]^d}\).

Using this uniform continuity, there exists a partition \(0=x_{j,0}<x_{j,1}<\cdots <x_{j,K}=1\) such that \(x_{j,1}<\varepsilon \) and for every \(x_{i}\in [0,1]\) with \(i\in {\{1,\ldots ,d \} {\setminus } \{j \}}\), one has for every \(l=1,\ldots ,K\),

$$\begin{aligned} {\partial }_{j}f(x_{1},\ldots ,x_{j-1}, x_{j,l},x_{j+1},\ldots ,x_{d}) - {\partial }_{j}f(x_{1},\ldots ,x_{j-1}, x_{j,l-1},x_{j+1},\ldots ,x_{d}) <\frac{{\varepsilon }}{4}.\nonumber \\ \end{aligned}$$
(8)

Set

$$\begin{aligned} \varrho _{f, \varepsilon ,j}=\frac{{\varepsilon }}{4}\min _{l=2,\ldots , K}(x_{j,l}-x_{j,l-1}). \end{aligned}$$
(9)

Consider any function \(g\in B_{\Vert \cdot \Vert }(f,\varrho _{f, \varepsilon ,j})\), and fix \( {\mathbf {x}}=( {{x_ {1} ,\ldots ,x_ {d}}}) \in [0,1]^d\), so that \(x_{j}\in [ {\varepsilon },1- {\varepsilon }]\).

There exists an integer \(l\in \{1,\ldots , K-1\}\) such that \(x_{j}\in [x_{j,l},x_{j,l+1}].\)

Set \( {\mathbf {x}}(l')=(x_{1},\ldots ,x_{j-1}, x_{j,l'},x_{j+1},\ldots ,x_{d}).\)

Since the two functions \( t\mapsto g(x_{1},\ldots ,x_{j-1}, t ,x_{j+1},\ldots ,x_{d})\) and \( t\mapsto f(x_{1},\ldots ,x_{j-1}, t ,x_{j+1},\ldots ,x_{d})\) are convex functions of one real variable, one has

$$\begin{aligned} {\partial }_{j,-} g( {\mathbf {x}})\ge & {} {\partial }_{j,-} g( {\mathbf {x}}(l))\ge \frac{g( {\mathbf {x}}(l))-g( {\mathbf {x}}(l-1))}{x_{j,l}-x_{j,l-1}}\nonumber \\\ge & {} \frac{f( {\mathbf {x}}(l))-f( {\mathbf {x}}(l-1))-2\varrho _{f,{\varepsilon },j}}{x_{j,l}-x_{j,l-1}}\nonumber \\\ge & {} \frac{f( {\mathbf {x}}(l))-f( {\mathbf {x}}(l-1))}{x_{j,l}-x_{j,l-1}}-\frac{{\varepsilon }}{2}\nonumber \\\ge & {} {\partial }_{j} f( {\mathbf {x}}(l-1))-\frac{{\varepsilon }}{2} . \end{aligned}$$
(10)

Thanks to the convexity of \( t\mapsto f(x_{1},\ldots ,x_{j-1}, t ,x_{j+1},\ldots ,x_{d})\), Eq. (8) gives

$$\begin{aligned} {\partial }_{j}f( {\mathbf {x}})\le {\partial }_{j}f( {\mathbf {x}}(l+1))< {\partial }_{j}f ( {\mathbf {x}}(l-1))+2\frac{{\varepsilon }}{4}. \end{aligned}$$

Hence, one can continue (10) to obtain

$$\begin{aligned} {\partial }_{j,-} g( {\mathbf {x}})> {\partial }_{j}f( {\mathbf {x}})-\frac{{\varepsilon }}{2}-\frac{{\varepsilon }}{2}= {\partial }_{j}f ( {\mathbf {x}})- {\varepsilon }. \end{aligned}$$

Moreover, a similar argument can show that

$$\begin{aligned} {\partial }_{j,+}g( {\mathbf {x}})< {\partial }_{j}f ( {\mathbf {x}})+ {\varepsilon }. \end{aligned}$$

Since \({\partial }_{j,-}g( {\mathbf {x}})\le {\partial }_{j,+}g( {\mathbf {x}})\), this proves (7) for the fixed index j.

The conclusion follows by taking \(\varrho _{f,{\varepsilon }} = \min (\varrho _{f,{\varepsilon },j}: j=1,\ldots ,d)\). \(\square \)

The one-dimensional version of Lemma 3 is stated as follows.

Lemma 4

Suppose \(f\in {\mathcal {C}\mathcal {C}^{1}}\cap C^{1}([0,1])\). For every \( {\varepsilon }>0\) there exists \(\varrho _{{\varepsilon },f}>0\) such that for any \(g\in B_{\Vert \cdot \Vert }(f,\varrho _{{\varepsilon },f})\) and \(x\in [ {\varepsilon },1- {\varepsilon }]\) we have

$$\begin{aligned} |g'_{\pm } (x)-f'(x)|< {\varepsilon }. \end{aligned}$$
(11)

Next, one compares the pointwise exponents of a differentiable convex function f and its derivative \(f'\). It is a general property that \(h_{f'}(x) \le h_f(x)-1\) for every differentiable f. A surprising property is that equality necessarily holds when f is convex and \(h_f(x)\in [1,2)\).

Lemma 5

If f is convex and differentiable on (a, b) and \(h_f(x)\in [1,2)\) for some \(x\in (a,b)\), then \(h_{f}(x)= h_{f'}(x)+1\).

Proof

It is enough to prove that \(h_{f}(x)\le h_{f'}(x)+1\). Since \(h_f(x)\in [1,2)\), necessarily \(h=h_{f'}(x)<1\). Hence, there exists a sequence \((x_{n})_{n\ge 1}\) converging to x such that \(|f'(x_{n})-f'(x)|>|x_{n}-x|^{h+\frac{1}{n}}.\) Without limiting generality, suppose that \(x_{n}>x\). By the monotonicity of \(f'\), one has

$$\begin{aligned} f'(x_{n})>f'(x)+(x_{n}-x)^{h+\frac{1}{n}}. \end{aligned}$$

By convexity of f,

$$\begin{aligned} f(x_{n})\ge f(x)+f'(x)(x_{n}-x). \end{aligned}$$

Setting \(x_{n}'=x+2(x_{n}-x)\), and using again the convexity of f at \(x_{n}\), one gets

$$\begin{aligned} f(x_{n}')\ge & {} f(x_{n})+f'(x_{n})\left( x_{n}'-x_{n}\right) \\\ge & {} f(x)+f'(x)(x_{n}-x)+\left( f'(x)+(x_{n}-x)^{h+\frac{1}{n}}\right) \left( x_{n}'-x_{n}\right) \\\ge & {} f(x)+f'(x)\left( x_{n}'-x\right) +(x_{n}-x)^{h+\frac{1}{n}}\frac{1}{2}\left( x_{n}'-x\right) \\= & {} f(x)+f'(x)\left( x_{n}'-x\right) +\frac{1}{2^{h+1+\frac{1}{n}}}\left( x_{n}'-x\right) ^{h+1+\frac{1}{n}}. \end{aligned}$$

This implies \(h_{f}(x)\le h+1\), hence the result. \(\square \)
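
A simple one-dimensional example illustrating Lemma 5 (added here for concreteness): for \(\alpha \in (0,1)\), the convex function \(f(x)=|x|^{1+\alpha }\) satisfies \(h_{f}(0)=1+\alpha \in (1,2)\), its derivative is \(f'(x)=(1+\alpha )\,\mathrm {sgn}(x)|x|^{\alpha }\), and

$$\begin{aligned} h_{f'}(0)=\alpha =h_{f}(0)-1, \end{aligned}$$

so equality indeed holds in \(h_{f'}(x)\le h_{f}(x)-1\).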

One investigates what happens for non-differentiable convex functions.

Lemma 6

If f is convex on \((a,b) \subset \mathbb {R}\) and \(h_f(x)\in [1,2)\) for some \(x\in (a,b)\), then \(\min (h_{f'_+}(x), h_{f'_-}(x)) \le h_f(x)-1\).

Proof

When \(h_{f}(x)=1\) the lemma is obvious. Set \(h=h_f(x)>1\), and let \(\varepsilon >0\) so that \(h-\varepsilon >0\). By definition, there exists \(M\in \mathbb {R}\) such that one has

$$\begin{aligned} |f(x+y)-f(x) - M y|\le |y|^{h-\varepsilon } \end{aligned}$$

for every small y, and there exists a sequence \((y_n)_{n\ge 1}\) converging to zero such that

$$\begin{aligned} |f(x+y_n)-f(x) -M y_n|\ge |y_n|^{h+\varepsilon }. \end{aligned}$$

Hence,

$$\begin{aligned} |y_n|^{h+\varepsilon -1} \le \left| \frac{f(x+y_n)-f(x)}{y_n} -M \right| \le |y_n|^{h-\varepsilon -1}. \end{aligned}$$

By convexity, the left and right derivatives \(f'_+(x)\) and \(f'_-(x)\) both exist; the previous inequalities show that they both equal M, so f is differentiable at x with \(f'(x)=M\).

Assume, without loss of generality, that there are infinitely many positive \(y_n\)’s. For every \(y_n\),

$$\begin{aligned} f'_+(x+y_n) - f'_+(x) \ge \frac{f(x+y_n)-f(x)}{y_n} - f'_+(x) \ge |y_n|^{h+\varepsilon -1}, \end{aligned}$$

thus \(h_{f'_+}(x) \le h+\varepsilon -1\), which gives the result. \(\square \)

We also prove the following proposition, which somehow asserts that a convex function cannot have exceptional isolated directional pointwise regularity.

For this, consider the d-dimensional unit sphere \(S_d=\{{\mathbf {x}}\in \mathbb {R}^d: \Vert {\mathbf {x}}\Vert =1\}\). Then, we select a finite set of pairwise distinct points \((\mathbf {z}_1, \mathbf {z}_2, \ldots \mathbf {z}_N) \in (S_d)^N \) for some integer \(N\ge 1\) such that the convex hull of \(\{\mathbf {z}_1, \mathbf {z}_2, \ldots \mathbf {z}_N\}\) contains the d-dimensional ball \(B(\mathbf{0}, 1/2)\).

Let us choose \(\varepsilon _c>0\) so small that the following two conditions hold (a concrete choice in dimension \(d=2\) is described after the list):

  • \(0<\varepsilon _c \le \frac{1}{1000}\min (\Vert \mathbf {z}_i-\mathbf {z}_j\Vert , \ i\ne j, \ i,j\in \{1,\ldots ,N\} )\).

  • Setting for every \(i\in \{1,\ldots ,N\}\)

    $$\begin{aligned} C_i = S_d \cap B(\mathbf {z}_i,\varepsilon _c), \end{aligned}$$
    (12)

    then for any choice of \(\mathbf {z}'_i \in C_i\), the convex hull of \(\{\mathbf {z}'_1, \mathbf {z}'_2, \ldots \mathbf {z}'_N\}\) contains the ball \(B(\mathbf{0}, 1/4)\).
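
For instance (a concrete choice added for illustration, not needed in the sequel), in dimension \(d=2\) one may take \(N=8\) and \(\mathbf {z}_k=(\cos (2\pi k/8),\sin (2\pi k/8))\), \(k=1,\ldots ,8\): the convex hull of these points is a regular octagon whose inscribed radius is \(\cos (\pi /8)\approx 0.92>1/2\), so it contains \(B(\mathbf{0},1/2)\); moreover, if each \(\mathbf {z}_k\) is replaced by any \(\mathbf {z}'_k\in C_k\), the support function of the new hull decreases by at most \(\varepsilon _c\), so the perturbed hull still contains \(B(\mathbf{0},\cos (\pi /8)-\varepsilon _c)\supset B(\mathbf{0},1/4)\).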

Proposition 7

If \(h_f({\mathbf {x}}) = h\), then there exists \(i\in \{1,\ldots ,N\}\) such that for every \(\mathbf {z}'_i \in C_i\) (see (12)), the restriction of f to the straight line passing through \({\mathbf {x}}\) parallel to the vector \(\mathbf {z}'_i \) has a pointwise Hölder exponent equal to h.

Proof

Let n be such that \(n\le h<n+1\). We assume without loss of generality that \({\mathbf {x}}=\mathbf{0}\), \(f(\mathbf{0}) =0\), \(D^{k}f(\mathbf{0},\ldots ,\mathbf{0})=0\) for every \(k\in \{1,\ldots , n\}\). Let \(\varepsilon >0\). By definition, for every \({\mathbf {x}}\) close to \(\mathbf{0}\), \(|f({\mathbf {x}})| \le |{\mathbf {x}}|^{h-\varepsilon }\), and there exists a sequence \(({\mathbf {x}}_n = (x_{n,1},\ldots ,x_{n,d}))_{n\ge 1}\) of elements in \(\mathbb {R}^d\), converging to \(\mathbf{0}\), such that

$$\begin{aligned} |{\mathbf {x}}_n|^{h+\varepsilon } \le | f({\mathbf {x}}_n) | \le |{\mathbf {x}}_n|^{h-\varepsilon }. \end{aligned}$$
(13)

Consider such an element \({\mathbf {x}}_n\), and the sets \((C_{n,i}:= 4\Vert {\mathbf {x}}_n\Vert \cdot C_i)_{i=1,\ldots , N}\). Let us prove that for every \(n\in \mathbb {N}\) there exists \(i_n\in \{1,\ldots , N\}\) such that for every \(\mathbf {z}'_{i_n } \in C_{n,i_n}\), \(|f(\mathbf {z}'_{i_n})|\ge (|\mathbf {z}'_{i_n}|/4)^{h+\varepsilon }\).

Assume first that for every \(i\in \{1,\ldots , N\}\), there exists \(\mathbf {z}'_i \in C_{n,i}\) such that \(|f(\mathbf {z}'_i)|< |f({\mathbf {x}}_n)|\). By construction, the ball \(B(\mathbf{0}, \Vert {\mathbf {x}}_n\Vert )\) is included in the convex hull of these points \((\mathbf {z}'_i)_{i=1,\ldots , N}\). By convexity, this would imply that \(|f({\mathbf {x}}_n)| \le \max ( |f(\mathbf {z}'_i)|: i=1,\ldots ,N)\), hence a contradiction.

Hence, there exists an \(i_n\in \{1,\ldots ,N\}\) such that for every \(\mathbf {z}'_{i_n} \in C_{n,{i_n}}\), \(|f(\mathbf {z}'_{i_n})|\ge |f({\mathbf {x}}_n)| \ge |{\mathbf {x}}_n|^{h+\varepsilon } = (|\mathbf {z}'_{i_n}|/4)^{h+\varepsilon }\).

Passing to a subsequence, one can assume that \(i_n\) is constant, equal to some \(i\in \{1,\ldots ,N\}\).

Now, as a consequence of what precedes, for every vector \(\mathbf {z}'_i\in C_i\), there exists an infinite number of values \((r_n=|{\mathbf {x}}_n|)_{n\ge 1}\) such that \(|f(r_n \mathbf {z}'_i)|\ge (|r_n \mathbf {z}'_i|/4)^{h+\varepsilon }\). Hence, the restriction of f to the straight line passing through \(\mathbf{0}\) parallel to \(\mathbf {z}'_i\) has a pointwise Hölder exponent less than \(h+\varepsilon \). Since this holds for every \(\varepsilon >0\), and obviously this exponent is bounded below by h (i.e. the exponent of f), one concludes that this restriction has exactly exponent h. \(\square \)

3 First typical properties of continuous convex functions

Based on Lemma 3 it is very easy to see that the typical function in \( {\mathcal {CC}^{d}}\) is continuously differentiable on \( {(0,1)^d}\). This was proved in [5, 6] for instance. We give another proof for completeness.

Proposition 8

There is a dense \(G_{{\delta }}\) set \( {{\mathcal {G}}} \) in \( {\mathcal {CC}^{d}}\) such that every \(f\in {{\mathcal {G}}}\) is continuously differentiable on \( {(0,1)^d}\).

Proof

By convexity, the partial derivatives \( {\partial }_{j,\pm } f( {\mathbf {x}})\) exist for any \(f\in {\mathcal {CC}^{d}}\), \( {\mathbf {x}}\in {(0,1)^d}\) and \(j\in {\{1,\ldots ,d \}}.\)

Since \(\mathcal {CC}^{d}\) is separable, one can choose a sequence of convex functions \(\{f_{m}:m=1,\ldots \}\) dense in \(\mathcal {CC}^{d}\). In addition, by Remark 2, one can assume that all these functions \(f_m\) are \( {{C^ {\infty }}} ([0,1]^d ) \) functions.

By uniform continuity of all the partial derivatives of \(f_m\), there is a \( {\delta }_{n,m}>0\) such that for every j, for every \( {\mathbf {x}}, {\mathbf {x}}'\in {[0,1]^d} \),

$$\begin{aligned} | {\partial }_{j}f_{m}( {\mathbf {x}})- {\partial }_{j}f _{m}( {\mathbf {x}}')|<\frac{1}{n}, \ \ \text{ when } | {\mathbf {x}}- {\mathbf {x}}'|< {\delta }_{n,m}. \end{aligned}$$
(14)

Applying Lemma 3, it is possible to choose \(0< {\varrho }_{n,m}<\min \big ( {\delta }_{n,m},\frac{1}{n+m}\big )\) such that if \(f\in B_{\Vert \cdot \Vert }(f_{m}, {\varrho }_{n,m})\) then for every \(j\in {\{1,\ldots ,d \}}\), if \({\mathbf {x}}=( {{x_ {1} ,\ldots ,x_ {d}}}) \in [0,1]^d\) with \(x_{j}\in [\frac{1}{n},1-\frac{1}{n}]\), then

$$\begin{aligned} | {\partial }_{j,\pm } f( {\mathbf {x}})- {\partial }_{j}f _{m}( {\mathbf {x}})|< \frac{1}{n}. \end{aligned}$$
(15)

Let us introduce the sets

$$\begin{aligned} {\mathcal {G}}_{n}=\bigcup _{m=1}^{{\infty }}B_{\Vert \cdot \Vert }(f_{m} , {\varrho }_{n,m}) \ \text{ and } \ {\mathcal {G}} =\bigcap _{n=1}^{{\infty }}{\mathcal {G}}_{n} . \end{aligned}$$

It is clear that \({\mathcal {G}} \) is a dense \(G_{{\delta }}\) set in \( {\mathcal {CC}^{d}}\).

We prove that any \(f\in {\mathcal {G}} \) is continuously differentiable.

By definition, there exists an infinite sequence of integers \((m_{n})_{n\ge 1}\) such that \(f\in B_{\Vert \cdot \Vert }(f_{m_{n}}, {\varrho }_{n,m_{n}}).\)

Fix \(j\in \{1,\ldots ,d\}\), and focus on the j-th partial derivatives. Combining inequalities (14) and (15), if \( {\mathbf {x}}, {\mathbf {x}}' \in {[0,1]^d}\), with \(x_{j},x_{j}'\in [\frac{1}{n},1-\frac{1}{n}]\) and \(|| {\mathbf {x}}- {\mathbf {x}}'||< {\varrho }_{n,m_{n}}\), then

$$\begin{aligned} | {\partial }_{j,\pm } f( {\mathbf {x}})- {\partial }_{j,\pm } f( {\mathbf {x}}')|<| {\partial }_{j}{f_{m_{n}}}( {\mathbf {x}})- {\partial }_{j}{f_{m_{n}}}( {\mathbf {x}}')|+\frac{2}{n}<\frac{3}{n}. \end{aligned}$$

The ± is the above inequality means that any choice of left or right derivative can be made.

From this, letting \(n\rightarrow {\infty }\), it follows easily that \( {\partial }_{j,+} f( {\mathbf {x}})= {\partial }_{j,-} f( {\mathbf {x}})\), hence \( {\partial }_{j} f\) is continuous on \( {(0,1)^d}\). \(\square \)

However, the next proposition shows that typical functions f in \( {\mathcal {CC}^{d}}\) are not differentiable on \( {[0,1]^d}\). The problem comes from the boundary of the domain.

Proposition 9

There is a dense \(G_{{\delta }}\) set \({\mathcal {G}}_{{\infty }}\) in \( {{\mathcal {G}}}\) such that every \(f\in {\mathcal {G}}_{{\infty }}\) satisfies the following: for every \(j\in {\{1,\ldots ,d \}}\), for every \(x_{i}\in [0,1]\) with \(i\in {\{1,\ldots ,d \} {\setminus } \{j \}}\), one has

$$\begin{aligned} {\partial }_{j,+}f(x_{1},\ldots ,x_{j-1}, 0 ,x_{j+1},\ldots ,x_{d}) =-\, {\infty } \end{aligned}$$
(16)

and

$$\begin{aligned} {\partial }_{j,-}f(x_{1},\ldots ,x_{j-1}, 1 ,x_{j+1},\ldots ,x_{d}) =+\, {\infty }. \end{aligned}$$
(17)

Moreover,

$$\begin{aligned} h_{f}(x_{1},\ldots ,x_{j-1}, 0,x_{j+1},\ldots ,x_{d})=0= h_{f}(x_{1},\ldots ,x_{j-1}, 1 ,x_{j+1},\ldots ,x_{d}). \end{aligned}$$
(18)

Proof

We are going to show that for a fixed j, there is a dense \(G_{{\delta }}\) set \({\mathcal {G}}^{0,j}\) in \( {{\mathcal {CC}^{d}}}\) such that if \(f\in {\mathcal {G}}^{0,j}\) then (16) and the first equality in (18) hold.

Without loss of generality, one considers \(j=1\). As in the proof of Proposition 8, one chooses a sequence of \(C^\infty \) functions \((f_{m})_{m\ge 1}\) which is dense in \(\mathcal {CC}^{d}\). We also select integer constants \(M_{m}\ge 1\) such that \(| {\partial }_{1}f_{m}|\le M_{m}\) on \( {[0,1]^d}\).

For every integer \(l\ge 1\), let us introduce the mapping \( {\varphi }_{l}\) defined as

$$\begin{aligned} {\varphi }_{l}( {{x_ {1} ,\ldots ,x_ {d}}})= {\left\{ \begin{array}{ll} -l^{l-1}x_{1} &{} \text{ if } 0\le x_{1}\le l^{-l},\\ -l^{-1} &{} \text{ if } l^{-l}\le x_{1}\le 1. \end{array}\right. } \end{aligned}$$

Clearly, \( {\varphi }_{l}\) is convex and for any fixed n the functions \(f_{m}+ {\varphi }_{n\cdot m\cdot M_{m}}\), \(m=1,\ldots \) are dense in \( {\mathcal {CC}^{d}}\). Set

$$\begin{aligned} {\mathcal {G}}_{n}^{0,1}=\bigcup _{m=1}^{{\infty }}B_{\Vert \cdot \Vert }(f_{m}+ {\varphi }_{n\cdot m\cdot M_{m}}, (n\cdot m\cdot M_{m})^{-n\cdot m\cdot M_{m}}), \end{aligned}$$

and

$$\begin{aligned} {\mathcal {G}}^{0,1}=\bigcap _{n=1}^{{\infty }} {\mathcal {G}}_{n}^{0,1}. \end{aligned}$$

We prove that if \(f\in {\mathcal {G}}^{0,1}\), then (16) and hence the first equality in (18) hold. By definition, there exists an infinite sequence of integers \((m_{n})_{n\ge 1}\) such that

$$\begin{aligned} f\in B_{\Vert \cdot \Vert } \big (f_{m_{n}}+ {\varphi }_{n\cdot m_n\cdot M_{m_n}},\big (n\cdot m_n\cdot M_{m_n}\big )^{-n\cdot m_n\cdot M_{m_n}}\big ). \end{aligned}$$

For ease of notation, set \(l_{n}=n\cdot m_n\cdot M_{m_n}\), that is

$$\begin{aligned} f\in B_{\Vert \cdot \Vert }\left( f_{m_{n}}+ {\varphi }_{l_{n}},l_{n}^{-l_{n}}\right) . \end{aligned}$$

Consider \({\mathbf {x}}\in [0,1]^d\), with \(x_1=0\). Then for every \(n\ge 5\),

$$\begin{aligned}&f(0,x_{2},\ldots ,x_{d})-f\big (l_{n}^{-l_{n}},x_{2},\ldots ,x_{d}\big ) \\&\quad \ge \big (f_{m_{n}} + {\varphi }_{l_{n}}\big )(0,x_{2},\ldots ,x_{d}))-\big (f_{m_{n}} + {\varphi }_{l_{n}}\big )\big (l_{n}^{-l_{n}},x_{2},\ldots ,x_{d}\big )) -2\cdot l_{n}^{-l_{n}}\\&\quad \ge -M_{m_{n}} l_{n}^{-l_{n}} + l_n^{-1} -2\cdot l_{n}^{-l_{n}}\\&\quad = l_{n}^{-1}\Big (1-\frac{M_{m_{n}}}{l_{n}}l_{n}^{-l_{n}+2} -\frac{2}{l_{n}}l_{n}^{-l_{n}+1}\Big ) \\&\quad > \frac{1}{2l_{n}}, \end{aligned}$$

where we have used the boundedness of \(\partial _1 f_{m_{n}}\) by \(M_{m_n}\) and the fact that \(l_n\gg M_{m_n} \). Hence, for any \( {\alpha }>0\) we have

$$\begin{aligned} \frac{f(0,x_{2},\ldots ,x_{d})-f\big (l_{n}^{-l_{n}},x_{2},\ldots ,x_{d}\big )}{l_{n}^{-l_{n} {\alpha }}}>\frac{1}{2}l_{n}^{l_{n} {\alpha }-1} , \end{aligned}$$

which tends to infinity when n goes to infinity. Hence (16) holds true, and one also deduces that \(h_f(0,x_{2},\ldots ,x_{d}) = 0\).

Similar arguments yield \(G_\delta \) sets \({\mathcal {G}}^{0,j}\) for \(j>1\) and \({\mathcal {G}}^{1,j}\) for \(j=1,\ldots ,d\) such that the functions f in these sets satisfy the corresponding equalities in (16)–(18).

Finally, the set

$$\begin{aligned} {{\mathcal {G}}}_\infty = \bigcap _{j= 1}^d {\mathcal {G}}^{0,j}\cap {\mathcal {G}}^{1,j} \end{aligned}$$

satisfies the conditions of Proposition 9. \(\square \)

We conclude this section by proving that typical convex functions have no pointwise exponent larger than 2.

Proposition 10

There exists a \(G_\delta \)-set \(\mathcal {G}^2\) such that for every \(f\in \mathcal {G}^2\), \(E_f(h) = \emptyset \) for every \(h>2\).

Before proving Proposition 10, we introduce some perturbation functions already used in Sect. 4 of [4].

Definition 3

For every \(l\in {\mathbb {N}}\), the function \(\gamma _{l}: {[0,1]} \rightarrow {[0,1]}\) is defined as follows:

  • \(\gamma _l\) is continuous,

  • For every integer \(j=0,\ldots ,2^{l^{2}}-1\), if \(x_{1}\in [j2^{-l^{2}},(j+1)2^{-l^{2}}-2^{-l^{4}}]\), one sets \(\gamma _{l}(x_{1})=j2^{-l^{2}-l}\). So \(\gamma _l\) is constant on these intervals,

  • \(\gamma _{l}(1)=2^{-l}\),

  • For every integer \(j=0,\ldots ,2^{l^{2}}-1\), the mapping \(\gamma _{l}\) is affine on the intervals \([ (j+1)2^{-l^{2}}-2^{-l^{4}},(j+1)2^{-l^{2}}]\),

These functions \(\gamma _l\) are continuous, ranging from 0 to \(2^{-l}\), and are strictly increasing only on the \(2^{l^2}\) many very small, uniformly distributed, intervals of length \(2^{-l^4}\).
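
Equivalently (an explicit formula, deduced here from Definition 3 for the reader's convenience), for every integer \(j=0,\ldots ,2^{l^{2}}-1\),

$$\begin{aligned} \gamma _{l}(x_{1})= {\left\{ \begin{array}{ll} j2^{-l^{2}-l} &{} \text{ if } x_{1}\in [j2^{-l^{2}},(j+1)2^{-l^{2}}-2^{-l^{4}}],\\ j2^{-l^{2}-l}+2^{l^{4}-l^{2}-l}\big (x_{1}-(j+1)2^{-l^{2}}+2^{-l^{4}}\big ) &{} \text{ if } x_{1}\in [(j+1)2^{-l^{2}}-2^{-l^{4}},(j+1)2^{-l^{2}}], \end{array}\right. } \end{aligned}$$

so each affine piece has slope \(2^{l^{4}-l^{2}-l}\) and \(\gamma _{l}\) increases by exactly \(2^{-l^{2}-l}\) across it; in particular \(\gamma _{l}(1)=2^{l^{2}}\cdot 2^{-l^{2}-l}=2^{-l}\).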

Proof

We start by selecting a set of \( {{C^ {\infty }}}\) functions \(\{ \displaystyle f_{m}:m=1,2,\ldots \}\) which is dense in \( {\mathcal {CC}^{d}}\). We choose an integer \(M_{m,3}\ge 1\) such that the second and third partial derivatives of \(f_m\) with respect to the first variable \(x_1\) satisfy \(|\partial _{1}^{2}f_{m}|+|\partial _{1}^{3}f_{m}|\le M_{m,3}\) on \([0,1]^d\).

Observe that the \(x_{1}\)-partial derivatives \( {\partial }_{1} f_{m}\) of these functions are monotone in the first variable.

Then, one introduces the auxiliary functions, depending only on the first variable: for \(l\ge 0\),

$$\begin{aligned} {\overline{\gamma }}_{l}(x_{1}, x_{2},\ldots ,x_{d})=\gamma _{l}(x_{1}). \end{aligned}$$
(19)

The perturbation functions are defined for \(l\ge 0\) by

$$\begin{aligned} {\overline{f}}_{l}( {\mathbf {x}})={\overline{f}}_{l}(x_{1}, x_{2},\ldots ,x_{d})=\int _{0}^{x_{1}}{\overline{\gamma }}_{l}(t,x_{2},\ldots ,x_{d})dt=\int _{0}^{x_{1}}\gamma _{l}(t)dt. \end{aligned}$$
(20)
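
Note (an observation added here, used implicitly below) that each \({\overline{f}}_{l}\) depends only on \(x_{1}\) and has non-decreasing partial derivative \( {\partial }_{1}{\overline{f}}_{l}=\gamma _{l}\), hence is convex, and that

$$\begin{aligned} 0\le {\overline{f}}_{l}( {\mathbf {x}})=\int _{0}^{x_{1}}\gamma _{l}(t)dt\le \gamma _{l}(1)=2^{-l}, \end{aligned}$$

so \(\Vert {\overline{f}}_{l}\Vert \le 2^{-l}\); consequently the perturbed functions \(f_{m}+{\overline{f}}_{l}\) are still convex and remain uniformly close to \(f_{m}\) when l is large.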

Next we apply Lemma 3 to the functions \(f_{m}+ {\overline{f}}_{ m+M_{m,3}+n}\) with \( {\varepsilon }= {\varepsilon }_{m,n}:= 2^{-( m+M_{m,3}+n) ^8}\) and \(j=1\). There exists a constant, denoted by \(\varrho _{m,n}>0\), such that if \(f\in B_{\Vert \cdot \Vert }(f_{m}+{\overline{f}}_{ m+M_{m,3}+n},\varrho _{m,n})\), then for any \(x_{1}\in [{\varepsilon }_{m,n},1- {\varepsilon }_{m,n} ]\) and \(x_{j}\in [0,1]\) for \(j=2,\ldots ,d\), one has

$$\begin{aligned} | {\partial }_{1,\pm }f({\mathbf {x}})- {\partial }_{1}(f_{m}+{\overline{f}}_{ m+M_{m,3}+n})({\mathbf {x}})|< {\varepsilon }_{m,n} . \end{aligned}$$
(21)

Without limiting generality one can assume that \(\varrho _{m,n}\le {\varepsilon }_{m,n}\).

Set

$$\begin{aligned} {{\mathcal {R}}}_{n}=\bigcup _{m=1}^{{\infty }} B_{\Vert \cdot \Vert }(f_{m}+{\overline{f}}_{ m+M_{m,3}+n},\varrho _{m,n}) . \end{aligned}$$

It is not difficult to see that \( {{\mathcal {R}}}_{n}\) is open and dense in \( {\mathcal {CC}^{d}}\). Let \(\mathcal {G}_{\infty }\) be the dense \(G_{\delta }\) set from Proposition 9. Since \(\mathcal {G}_{\infty }\) is a subset of the set \(\mathcal {G}\) from Proposition 8, all \(f\in \mathcal {G}_{\infty } \) are continuously differentiable on \((0,1)^{d}\). Moreover, the Hölder exponent of these functions is zero on the boundary of \([0,1]^{d}\).

Finally, set

$$\begin{aligned} {{\mathcal {G}}}^2= {{\mathcal {G}}_\infty }\bigcap \left( \bigcap _{n=1}^{{\infty }} {{\mathcal {R}}}_{n}\right) . \end{aligned}$$

Let \(f\in {{\mathcal {G}}}^2\). By construction, there exists a sequence of integers \((m_{n})_{n\ge 1}\) such that \(f\in B_{\Vert \cdot \Vert }(f_{m_{n}}+{\overline{f}}_{ m_n+M_{m_n,3}+n},\varrho _{m_{n},n})\) for every n.

For simplification, we set \(l_{n} =m_{n}+M_{m_n,3}+n\), \(\rho _n:= \rho _{m_n,n}\) and \( \varepsilon _n:= {\varepsilon }_{m_n,n} \), so that for every \(n\ge 1\), \(\rho _n \le \varepsilon _n = 2^{-(l_n)^8}\), \(f\in B_{\Vert \cdot \Vert }(f_{m_{n}}+ { {\overline{f}}}_{l_n},\varrho _{n})\), and for any \(x_{1}\in [{\varepsilon }_{n},1- {\varepsilon }_{n} ]\) and \(x_{j}\in [0,1]\) for \(j=2,\ldots ,d\)

$$\begin{aligned} | {\partial }_{1,\pm }f({\mathbf {x}})- {\partial }_{1}\big (f_{m_n}+ {{\overline{f}}}_{l_n} \big )({\mathbf {x}})|< {\varepsilon }_{n} . \end{aligned}$$
(22)

Proceeding towards a contradiction, suppose that there exists \({\mathbf {x}}=(x_{1},x_{2},\ldots ,x_{d})\in [0,1]^{d}\) where \(h_{f}({\mathbf {x}})>2\). Since the Hölder exponent of f is zero on the boundary of \([0,1]^{d}\), necessarily \({\mathbf {x}}\in (0,1)^{d}\).

Since \(h_{f}({{\mathbf {x}}})>2\), one can find \(\varepsilon >0\), \(D_{2}, C_{{{\mathbf {x}}}}\in \mathbb {R}\) such that for every small h,

$$\begin{aligned} |f({\mathbf {x}}+h{\mathbf {e}}_1)-f({\mathbf {x}})-\partial _1 f({\mathbf {x}})h-D_{2}h^{2}| \le C_{{\mathbf {x}}} |h|^{2+\varepsilon }. \end{aligned}$$
(23)

Without limiting generality we can suppose that \(\varepsilon <1/2\) holds as well.

Consider the unique integer \(j_{n}\in \mathbb {N}\) such that \(x_1\in [j_{n} 2^{-l_n^{2}},(j_{n}+1)2^{-l_{n}^{2}})\). Next we consider two cases depending on whether \(x_1\in [j_{n} 2^{-l_n^{2}},(j_{n}+1)2^{-l_{n}^{2}}-2^{-l_n^{4}}]\), or \(x_1\in [(j_{n}+1)2^{-l_{n}^{2}}-2^{-l_n^{4}},(j_{n}+1)2^{-l_{n}^{2}}]\).

Case 1. Assume that \(x_1 \in [j_n2^{-l_n^{2}},(j_n+1)2^{-l_n^{2}}-2^{-l_n^{4}}] \) for infinitely many integers \(n\ge 1\).

We set \({\mathbf {h}}_n=h_{n}{\mathbf {e}}_1\) with \(|{\mathbf {h}}_n|=|h_{n}|=2^{-l_n^{2}}/4\), such that the first coordinates of \({\mathbf {x}}\) and \({\mathbf {x}}+{\mathbf {h}}_n\) both belong to \( [j_n2^{-l_n^{2}},(j_n+1)2^{-l_n^{2}}-2^{-l_n^{4}}]\).

Combining (22), (23) and the fact that \(f\in B_{\Vert \cdot \Vert }(f_{m_{n}}+{ {\overline{f}}}_{l_n},\varrho _{n})\), we obtain

$$\begin{aligned}&\Big | \Big (f_{m_n}({\mathbf {x}}+\mathbf {h}_{n}) +{ {\overline{f}}}_{l_n}({\mathbf {x}}+\mathbf {h}_{n}) \Big ) - \Big ( f_{m_n}({\mathbf {x}})+{ {\overline{f}}}_{l_n} ({\mathbf {x}})\Big )\nonumber \\&\quad - h_{n}\big (\partial _1f_{m_n} +\partial _1 { {\overline{f}}}_{l_n}\big )({\mathbf {x}}) -D_{2}h_{n}^{2} \ \Big |\nonumber \\&\quad \le \,C_{{\mathbf {x}}}|h_{n}|^{2+\varepsilon }+2\varrho _{n }+\varepsilon _{n }|h_{n}|\nonumber \\&\quad \le \,C_{{\mathbf {x}}}|h_{n}|^{2+\varepsilon }+2 \cdot 2^{-(l_n)^8} + 2^{-(l_n)^8} |h_{n}|\nonumber \\&\quad \le \, (C_{{\mathbf {x}}}+1)|h_{n}|^{2+ \varepsilon }, \end{aligned}$$
(24)

where the last inequality holds since \(\rho _n\le \varepsilon _n \le 2^{-(l_n)^8}<\!\!< |h_n|^3\) for large n.

By using the Taylor polynomial estimate of the \(C^{\infty }\) function \(f_{m_n}\), one deduces that

$$\begin{aligned} \Big |f_{m_n}({\mathbf {x}}+\mathbf {h}_{n}) -f_{m_n}({\mathbf {x}})-\partial _1f_{m_n}({\mathbf {x}})h_{n} -\frac{\partial _1^{2}f_{m_n}({\mathbf {x}})}{2!}h_{n}^{2} \Big |\le \frac{M_{m_n,3}}{3!}|h_{n}|^{3}. \end{aligned}$$
(25)

Since by its definition \(\partial _1 { {\overline{f}}}_{l_n} (\mathbf {y})= \gamma _{l_n}(x_{1}) \) (i.e. it is constant) for any \(\mathbf {y}\) on the line segment connecting \({\mathbf {x}}\) and \( {\mathbf {x}} +\mathbf {h}_{n}\), we also have

$$\begin{aligned} \overline{f}_{l_n}({\mathbf {x}}+\mathbf {h}_{n}) -\overline{f}_{l_n}({\mathbf {x}})-\partial _1\overline{f}_{l_n}({\mathbf {x}})h_{n} = 0. \end{aligned}$$
(26)

Using (24), (25) and (26) we infer

$$\begin{aligned}&\Big |\frac{\partial _1^{2}f_{m_n}({\mathbf {x}})}{2!}h_{n}^{2}-D_{2}h_{n}^{2}\Big |\nonumber \\&\quad \le (C_{{\mathbf {x}}}+1)|h_{n}|^{2+\varepsilon } +\frac{M_{m_n,3}}{3!}|h_{n}|^{3} < (C_{{\mathbf {x}}}+2)|h_{n}|^{2+\varepsilon } \end{aligned}$$
(27)

where, using the fact that \(\varepsilon <1/2\), the last inequality holds if n is sufficiently large since \(M_{m_n,3}\le l_n \le 2^{l_n} \le |h_n|^{-1/2}\) for n large.

Now take \(\overline{{\mathbf {h}}}_n=8|h_{n}|{\mathbf {e}}_1.\) Then (24) and (25) used with \(\overline{{\mathbf {h}}}_n\) instead of \({\mathbf {h}}_n \) for sufficiently large n yield

$$\begin{aligned}&\Big |\overline{f}_{l_n}({\mathbf {x}}+\overline{{\mathbf {h}}}_n) -\overline{f}_{l_n}({\mathbf {x}}) -\partial _1\overline{f}_{l_n} ({\mathbf {x}})|\overline{{\mathbf {h}}}_n|+ \frac{\partial _1^{2}f_{m_n}({\mathbf {x}})}{2!} |\overline{{\mathbf {h}}}_n|^{2}-D_{2}|\overline{{\mathbf {h}}}_n|^{2}\Big | \nonumber \\&\quad \le \frac{M_{m_n,3}}{3!}|\overline{{\mathbf {h}}}_n|^{3}+(C_{{\mathbf {x}}}+1 )|\overline{{\mathbf {h}}}_n|^{2+\varepsilon }\nonumber \\&\quad = |\overline{{\mathbf {h}}}_n|^{2+\varepsilon }\big (\frac{M_{m_n,3}}{3!} |\overline{{\mathbf {h}}}_n|^{1-\varepsilon }+C_{{\mathbf {x}}}+1 \big )\nonumber \\&\quad \le |\overline{{\mathbf {h}}}_n|^{2+\varepsilon }(C_{{\mathbf {x}}}+2 ). \end{aligned}$$
(28)

Now for large n, it follows from (27) and \(|h_{n}|<|\overline{{\mathbf {h}}}_n|\) that

$$\begin{aligned}&\Big | \overline{f}_{l_n} ({\mathbf {x}}+\overline{{\mathbf {h}}}_n)-\overline{f}_{l_n} ({\mathbf {x}})-\partial _1\overline{f}_{l_n}({\mathbf {x}})|\overline{{\mathbf {h}}}_n| \Big | \le |\overline{{\mathbf {h}}}_n|^{2+\varepsilon }(2C_{{\mathbf {x}}}+4 ). \end{aligned}$$
(29)

Next we obtain a contradiction by using a lower estimate of the left-hand side of (29). By convexity of \(\overline{f}_{l_n} \) we have \(\partial _1 \overline{f}_{l_n} (\mathbf {y})\ge \partial _1 \overline{f}_{l_n} ({\mathbf {x}}) \) when \(\mathbf {y}\) is on the line segment connecting \({\mathbf {x}}\) and \({\mathbf {x}}+\overline{{\mathbf {h}}}_n\). Even more, this interval contains a subinterval of length larger than \(|\overline{{\mathbf {h}}}_n|/8=|h_{n}|\) where

$$\begin{aligned} \partial _1 \overline{f}_{l_n} (\mathbf {y})={\overline{\gamma }}_{l_n}(\mathbf {y}) = \partial _1\overline{f}_{l_n} ({\mathbf {x}})+ 2^{-l_n^{2}-l_n}=\gamma _{l_n}(x_{1})+2^{-l_n^{2}-l_n}. \end{aligned}$$

Thus,

$$\begin{aligned} \overline{f}_{l_n} ({\mathbf {x}}+\overline{{\mathbf {h}}}_n)-\overline{f}_{l_n} ({\mathbf {x}})\ge \partial _1\overline{f}_{l_n} ({\mathbf {x}})|\overline{{\mathbf {h}}}_n|+\frac{|\overline{{\mathbf {h}}}_n|}{8}\cdot 2^{-l_n^{2}-l_n}. \end{aligned}$$

By (29), one should have

$$\begin{aligned} \frac{|\overline{{\mathbf {h}}}_n|}{8}\cdot 2^{-l_n^{2}-l_n} \le |\overline{{\mathbf {h}}}_n|^{2+\varepsilon }(2C_{{\mathbf {x}}}+4 ). \end{aligned}$$

Since \(8|h_{n}|=|\overline{{\mathbf {h}}}_n|=2\cdot 2^{-l_n^{2}}\) we would obtain that

$$\begin{aligned} 2^{-l_n^{2}-l_n}\le \big (2\cdot 2^{-l_n^{2}}\big )^{1+\varepsilon }(2C_{{\mathbf {x}}}+4 ), \end{aligned}$$

a contradiction when n is large.

Case 2. Suppose that \(x_1 \in [(j_{n}+1)2^{-l_{n}^{2}}-2^{-l_n^{4}},(j_{n}+1)2^{-l_{n}^{2}}]\) for infinitely many \(n\ge 1\).

We set \({\mathbf {h}}_n=h_{n}{\mathbf {e}}_1\) with \(|{\mathbf {h}}_n|=|h_{n}|=2^{-l_n^{4}}/2\), such that the first coordinates of \({\mathbf {x}}\) and \({\mathbf {x}}+{\mathbf {h}}_n\) both belong to \([(j_{n}+1)2^{-l_{n}^{2}}-2^{-l_n^{4}},(j_{n}+1)2^{-l_{n}^{2}}] \).

Since \(\rho _n \) and \(\varepsilon _n\) are still much smaller than \(|h_n|\), Eqs. (24) and (25) still hold, but now (26) is replaced by

$$\begin{aligned} \overline{f}_{l_n}({\mathbf {x}}+\mathbf {h}_{n}) -\overline{f}_{l_n}({\mathbf {x}})-\partial _1\overline{f}_{l_n}({\mathbf {x}})h_{n} = \frac{\partial _{1}^2\overline{f}_{l_n}({\mathbf {x}})}{2!} h_{n} ^2 = 2^{l_n^4-l_n-l_n^2}\frac{h_{n} ^2}{2} . \end{aligned}$$
(30)

Using the same arguments as before but with (30), we deduce that

$$\begin{aligned} \Big |\frac{\partial _1^{2}f_{m_n}({\mathbf {x}})}{2!}h_{n}^{2}-D_{2}h_{n}^{2} +2^{l_n^4-l_n-l_n^2}\frac{h_{n} ^2}{2}\Big | < (C_{{\mathbf {x}}}+2)|h_{n}|^{2+\varepsilon }. \end{aligned}$$

This last inequality becomes impossible when n becomes large, since \(|\frac{\partial _1^{2}f_{m_n}({\mathbf {x}})}{2}|\le M_{m_n ,3}\le l_{n} \le 2^{l_n^3}\). Hence a contradiction. \(\square \)

4 The upper estimate

We start with the upper bound for the Hausdorff dimensions of the sets \(E^{\le }_f(h)\).

Proposition 11

If \(1< h\le 2\) and \(f\in {\mathcal {CC}^{d}}\), then \(\dim {E_ {f} ^ {\le }}(h) \le d+h-2.\)

Proof

It is sufficient to treat the case \(h\in (1,2)\), since for \(h=2\) the claimed bound is simply \(\dim E_ {f} ^ {\le }(2)\le d\), which always holds.

Assume, by contradiction, that \(\dim E_ {f} ^ {\le }(h) > d+h-2\).

For every \({\mathbf {x}}\in E_ {f} ^ {\le }(h)\), by Proposition 7, there exists an index \(i_x\in \{1,\ldots ,N\}\) such that for every \(\mathbf {z}\in C_{i_x}\), the restriction of f to the straight line passing through \({\mathbf {x}}\) parallel to \(\mathbf {z}\) has pointwise Hölder exponent at most h.

Let us call \(E_i\) the set of elements of \(E_ {f} ^ {\le }(h)\) satisfying this property with \(i_x = i\in \{1,\ldots ,N\}\). Obviously, \( E_ {f} ^ {\le }(h) = \bigcup _{i=1}^N E_i\), so there exists at least one \(i\in \{1,\ldots ,N\}\) such that \(\dim E_i > d+h-2\).

Let us recall the following special case of Marstrand’s slicing theorem, Theorem 10.10 in Chapter 10 of [7]. Recall that \(S_d\) is the unit sphere in \(\mathbb {R}^d\).

Theorem 12

Let \(E\subset [0,1]^d\) be a Borel set with Hausdorff dimension \(\alpha \in (d-1,d)\). Then for almost every \(\mathbf {z}\in S_d\) (in the sense of \((d-1)\)-dimensional “surface” measure), there exists a set \(E_\mathbf {z}\) of positive \((d-1)\)-dimensional Hausdorff measure in the hyperplane orthogonal to \(\mathbf {z}\) such that for every \({\mathbf {x}}\in E_\mathbf {z}\), \(\dim E\cap ({\mathbf {x}}+ \mathbb {R}\mathbf {z}) =\alpha -(d-1)\).

Each \(C_i\) has non-empty interior in the subspace topology of \(S_{d}\), hence it is of positive \((d-1)\)-dimensional measure. Applying Theorem 12 to \(E_i\), one can find \(\mathbf {z}\in C_i\) and \({\overline{{\mathbf {x}}}} \in [0,1]^d\) such that if \(\mathcal {D} = \overline{{\mathbf {x}}} + \mathbb {R}\mathbf {z}\), then \(\dim E_i\cap \mathcal {D} > d+h-2-(d-1)= h-1\).

Let us call g the restriction of f to \(\mathcal {D}\). Then g is still a convex function of one variable.

By definition of \(E_i\), every \({\mathbf {x}}\in \mathcal {D} \cap E_i\) satisfies \(h_g({\mathbf {x}}) \le h\).

Next, applying Lemma 6 to g, we deduce that \(\min (h_{g'_+}({\mathbf {x}}), h_{g'_-}({\mathbf {x}})) \le h -1\), for every \({\mathbf {x}}\in \mathcal {D} \cap E_i\).

Hence, at least one of the two sets \(E^{\le }_{g'_+}(h-1)\) and \(E^{\le }_{g'_-}(h-1)\) has Hausdorff dimension strictly greater than \(h-1\).

But this is impossible, since both functions \(g'_+\) and \(g'_-\) are monotone, and for such functions, by (4), the Hausdorff dimension of \(E^{\le }_{g'_+}(h-1) \) and \(E^{\le }_{g'_-}(h-1)\) is at most \(h-1 \in [0,1]\). Hence a contradiction, and the conclusion that \(\dim E_ {f} ^ {\le }(h) \le d+h-2\). \(\square \)

Proposition 13

If \(0 \le h \le 1\), \(f\in {\mathcal {CC}^{d}}\), then \(\dim {E_ {f} ^ {\le }}(h) \le d-1.\)

Proof

The proof is immediate: if \(f\in {\mathcal {CC}^{d}}\), the pointwise exponent of f at any \({\mathbf {x}}\in (0,1)^d\) is necessarily at least 1, since convex functions are locally Lipschitz in the interior of their domain. The remaining points are located on the boundary, whose dimension is \(d-1\). Moreover, for every \(h\in [0,1]\), it is easy to build convex functions such that \(h_f({\mathbf {x}})=h\) for every \({\mathbf {x}}\) satisfying \(x_1=0\) (see the example after this proof), so the upper bound \(d-1\) for the Hausdorff dimension of \(E_ {f} ^ {\le }(h) \) is optimal. \(\square \)
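
To illustrate the optimality claim in the proof of Proposition 13, here is an explicit example (added for concreteness) for \(h\in (0,1)\): consider \(f({\mathbf {x}})=-x_{1}^{h}\) on \([0,1]^d\). Since \(t\mapsto -t^{h}\) has second derivative \(h(1-h)t^{h-2}>0\) on \((0,1]\), the function f belongs to \(\mathcal {CC}^{d}\), and for every point \({\mathbf {x}}\) with \(x_{1}=0\),

$$\begin{aligned} |f({\mathbf {x}}') - f({\mathbf {x}})| = (x'_{1})^{h} \le |{\mathbf {x}}' -{\mathbf {x}}|^{h}, \end{aligned}$$

with equality when \({\mathbf {x}}'={\mathbf {x}}+\delta {\mathbf {e}}_{1}\), so \(h_f({\mathbf {x}})=h\) on the whole face \(\{x_{1}=0\}\), which has dimension \(d-1\).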

5 The lower estimate

Using Lemma 4 it is rather easy to “integrate” the result about functions in \({\mathcal {M}}^{1}= {{\mathcal {M}}}\) to obtain the one-dimensional result.

Theorem 14

There is a dense \(G_{{\delta }}\) set \( {{\mathcal {G}}}\) in \( {{\mathcal {CC}^1}}\) such that for any \(f\in {{\mathcal {G}}}\), and for any \(1\le h\le 2\), \(\dim {E_ {f} ^ { }}(h)= h-1.\)

Proof

Suppose \(0< {\delta }_{0} <1/2\) is fixed.

Recalling the result on typical monotone continuous functions, there exists a \(G_\delta \) set of functions \({\mathcal {G}}^{{\mathcal {M}}, {\delta }_{0}} \) in the set \(\mathcal {M}^{1,\delta _0} := \{f:[\delta _0,1-\delta _0] \rightarrow \mathbb {R}, \ f \text{ monotone }\}\) such that every \(f\in {\mathcal {G}}^{{\mathcal {M}}, {\delta }_{0}} \) satisfies (3).

Let us write \({\mathcal {G}}^{{\mathcal {M}}, {\delta }_{0}} = \bigcap _{n\ge 1} {\mathcal {G}}^{{\mathcal {M}}, {\delta }_{0}}_n\), where \({\mathcal {G}}^{{\mathcal {M}}, {\delta }_{0}}_n\) is a dense open set in \(\mathcal {M}^{1,\delta _0}\).

Let us choose a dense sequence \((g_{n,m})_{m=1}^{{\infty }}\) in \({\mathcal {G}}^{{\mathcal {M}}, {\delta }_{0}}_{n}\).

By taking antiderivatives of the elements of this sequence and by a suitable definition on the intervals \([0,{\delta }_{0})\cup (1-{\delta }_{0},1]\), one can choose a sequence of convex functions \((f_{n,m,k})_{m,k=1}^{+\infty } \) which is dense in \( \mathcal {CC}^1 \) such that for every \(m,k\ge 1\), \(f_{n,m,k}'(x)=g_{n,m}(x)\) for \(x\in [ {\delta }_{0},1- {\delta }_{0}]\). These functions \(f_{n,m,k}\) are continuously differentiable on \([\delta _0,1-\delta _0]\).

Now, choose \( {\varepsilon }_{n,m}>0\) such that

$$\begin{aligned} B_{\Vert \cdot \Vert }(g_{n,m}, {\varepsilon }_{n,m}) {\subset } {\mathcal {G}}^{{\mathcal {M}}, {\delta }_{0}}_{n}, \end{aligned}$$

where the ball \(B_{\Vert \cdot \Vert }\) is taken in the set \({{{\mathcal {M}}}^{1, {\delta }_{0}}}\) using the \(L^\infty \)-norm.

Lemma 4 gives the existence of \({\varrho }_{n,m,k}>0\) such that for every \(f\in B_{\Vert \cdot \Vert }(f_{n,m,k}, {\varrho }_{n,m,k}) \subset \mathcal {CC}^1\), the inequality

$$\begin{aligned} |f'_{\pm }(x)-g_{n,m}(x)|< {\varepsilon }_{n,m} \end{aligned}$$

holds for all \(x\in [ {\delta }_{0},1- {\delta }_{0}]\).

Let us now introduce

$$\begin{aligned} {{\mathcal {G}}}_{n}=\bigcup _{m,k}B_{\Vert \cdot \Vert }(f_{n,m,k}, {\varrho }_{n,m,k}). \end{aligned}$$

By construction, \( {{\mathcal {G}}}_{n}\) is dense in \( {{\mathcal {CC}}^1}.\)

Proposition  8 yields the existence of a dense \(G_{{\delta }}\) set \( {{\mathcal {D}}}\) in \( \mathcal {CC}^1\) consisting of functions f continuously differentiable on (0, 1).

We finally set

$$\begin{aligned} {{\mathcal {G}}}= {{\mathcal {D}}}\bigcap \left( \bigcap _{n=1}^{{\infty }} {{\mathcal {G}}}_{n} \right) . \end{aligned}$$

Clearly, \( {{\mathcal {G}}}\) is a dense \(G_{{\delta }}\) set in \( {{\mathcal {CC}^1}}.\)

Suppose that \(f\in {{\mathcal {G}}}\), and set \(g=f'\) on (0, 1) and \(g_{0}=g|_{[ {\delta }_{0},1- {\delta }_{0}]}\).

Then \(g_{0}\in \bigcap _{n=1}^{{\infty }}{\mathcal {G}}^{{\mathcal {M}}, {\delta }_{0}}_{n}\) and Eq. (3) implies that for any \(1\le h\le 2\), \(\dim E_{g_{0}}(h-1)=h-1\).

But Lemma 5 yields \(E_{g_{0}}(h-1)= E_{f|_{[ {\delta }_{0},1- {\delta }_{0}]}}({h})\), hence \(\dim E_{f}(h)=h-1\) for any \(1\le h\le 2\).

Since \( {\delta }_{0}\) can be chosen arbitrarily small, taking a sequence of \( {\delta }_{0}\)'s tending to zero (and intersecting the corresponding dense \(G_{\delta }\) sets) concludes the proof of the theorem. \(\square \)

Next we turn to the higher dimensional case.

Theorem 15

There is a dense \(G_{{\delta }}\) set \( {{\mathcal {G}}}\) in \( {\mathcal {CC}^{d}}\) such that for any \(f\in {{\mathcal {G}}}\) and \(1\le h \le 2\) , one has \(\dim E_{f}(h)\ge h+d-2\). In addition, \(E_{f}(h)= {\emptyset }\) for \(h>2\), \(E_{f}(h)\cap {[0,1]^d}= {\emptyset }\) for \(0<h<1\), and \( {\partial } ({[0,1]^d})=E_{f}(0)\).

Proof

Instead of integrating the one-dimensional functions themselves, we now "integrate" the proof used in Sect. 4 of [4]. The idea is again to reduce the problem to the one-dimensional case. We select one coordinate direction, for ease of notation the first one, the \(x_{1}\)-axis. In our proof we use the perturbation functions of Definition 3, already used in this paper, which are constant in the directions of the coordinate axes \(x_{j}\), \(j=2,\ldots ,d\).

Let us select a set of \( {{C^ {\infty }}}\) functions \(\{ \displaystyle f_{m}:m=1,2,\ldots \}\) which is dense in \( {\mathcal {CC}^{d}}\). The \(x_{1}\)-partial derivatives, \( {\partial }_{1} f_{m}\), of these functions will be denoted by \(g_{m}\). The important feature of these functions \(g_{m}\) is the fact that they are monotone increasing in the \(x_{1}\)-variable. As our example at the beginning of the paper shows (see Remark 1), these functions are not necessarily monotone in the other variables, and this is why one cannot simply "integrate" the MISV genericity results.

Now, as in [4] and in the proof of Proposition 10, we use the functions \({\overline{\gamma }}_{l}\) and the perturbations \({\overline{f}}_{l}\) defined in (19) and (20).

Next, apply Lemma 3 to the functions \(f_{m}+ {\overline{f}}_{m+n}\) with \( {\varepsilon }= \varepsilon _{n+m}\) and \(j=1\). There exists a constant, denoted by \(\varrho _{m,n}>0\), such that if \(f\in B_{\Vert \cdot \Vert }(f_{m}+{\overline{f}}_{m+n},\varrho _{m,n})\), then for any \(x_{1}\in [ {\varepsilon }_{n+m},1- {\varepsilon }_{n+m}]\) and \(x_{j}\in [0,1]\) for \(j=2,\ldots ,d\), one has

$$\begin{aligned} | {\partial }_{1,\pm }f({\mathbf {x}})- {\partial }_{1}(f_{m}+{\overline{f}}_{m+n})({\mathbf {x}})|< {\varepsilon }_{n+m}, \end{aligned}$$

that is,

$$\begin{aligned} | {\partial }_{1,\pm } f({\mathbf {x}})-(g_{m}+ {\overline{\gamma }}_{m+n}({\mathbf {x}}))|< {\varepsilon }_{n+m} \end{aligned}$$
(31)

holds. Without limiting generality one can assume that \(\varrho _{m,n}\rightarrow 0\) if n is fixed and \(m\rightarrow {\infty }\).

Set

$$\begin{aligned} {{\mathcal {R}}}_{n}=\bigcup _{m=1}^{{\infty }} B_{\Vert \cdot \Vert }(f_{m}+{\overline{f}}_{m+n},\varrho _{m,n}) . \end{aligned}$$

It is not difficult to see that \( {{\mathcal {R}}}_{n}\) is open and dense in \( {\mathcal {CC}^{d}}\). Denote by \( {{\mathcal {D}}}\) a dense \(G_{{\delta }}\) set in \( {\mathcal {CC}^{d}}\) consisting of functions which are differentiable on \( {(0,1)^d}\) and whose pointwise exponents, according to Proposition 10, never exceed 2.

Finally, set

$$\begin{aligned} {{\mathcal {G}}}= {{\mathcal {D}}}\bigcap \left( \bigcap _{n=1}^{{\infty }} {{\mathcal {R}}}_{n}\right) . \end{aligned}$$

Let \(f\in {{\mathcal {G}}}\), and \(1\le h \le 2\). We are going to prove that \(\dim E_{f}(h)\ge h+d-2\), by reducing the argument to a situation already totally taken care of in [4].

By construction, there exists a sequence of integers \((m_{n})_{n\ge 1}\) such that \(f\in B_{\Vert \cdot \Vert }(f_{m_{n}}+{\overline{f}}_{m_{n}+n},\varrho _{m_{n},n})\) for every n.

Set \(g= {\partial }_{1} f.\)

By (31), when \(x_{1} \in [ \varepsilon _{n+m_{n}},1- \varepsilon _{n+m_{n}}]\) and \(x_{j}\in [0,1]\) for \(j=2,\ldots ,d\), one has for \({\mathbf {x}}=(x_1,x_2,\ldots ,x_d)\)

$$\begin{aligned} |g({\mathbf {x}})-(g_{m_{n}}+{\overline{\gamma }}_{m_{n}+n})({\mathbf {x}})|< \varepsilon _{n+m_{n}}. \end{aligned}$$
(32)

Given \(0< {\delta }_{0}<1/10\), choose an integer \(n_{1}\) such that \( {\varepsilon }_{n_{1}}< {\delta }_{0}/100.\)

Select an increasing subsequence \((n_{k})_{k\ge 1}\) such that the following conditions are fulfilled: put \(l_{k}=m_{n_{k}}+n_{k}\). One assumes that

$$\begin{aligned} l_k> 2^ k , ((l_k)^2 + l_k)k + 1 < (l_k)^4, \ 2 ^{-((l_{k-1})^2+l_{k-1})(k-1)-1} > 100 \cdot 2^{-(l_k)^2} \end{aligned}$$

and if \(D_k = 2^{(l_k)^2} \cdot 2^{-((l_k)^2+l_k)k-2} < 1 \), then one also assumes that k is so large that

$$\begin{aligned} D_1 \cdots D_{k-1} > 2^{-l_k}. \end{aligned}$$

These conditions are Eqs. (27) and (28) in [4].

Denote by \( {\varphi }_{k}\) the restriction of \(g_{m_{n_{k}}}\) onto \([ {\delta }_{0},1- {\delta }_{0}]\times [0,1]^{d-1}\) and by \({\widetilde{g}}_{l_{k}}\) the restriction of \({\overline{\gamma }} _{m_{n_{k}}+n_{k}}\) onto \([ {\delta }_{0},1- {\delta }_{0}]\times [0,1]^{d-1}\). For ease of notation for the restriction of g onto \([ {\delta }_{0},1- {\delta }_{0}]\times [0,1]^{d-1}\) we will still use the notation g.

Then \(g\in B_{\Vert \cdot \Vert }( {\varphi }_{k}+{\widetilde{g}}_{l_{k}}, {\varepsilon }_{l_{k}})\) for all \(k=1,\ldots \), where the ball is taken with respect to the supremum norm in the space of continuous functions monotone in the first variable.

Now, we are exactly in the context of our previous article [4], in which we proved the following sequence of propositions (cf Propositions 13-18 of [4]). We reproduce the definitions given in [4] and the associated propositions.

For every \(h\in (1,2)\) and \(k\ge 2\), let

$$\begin{aligned} F_{h-1,k}=\bigcup _{j=0}^{2^{(l_k)^2}-1} \left[ (j+1)2^{-(l_k)^2} - 2^{-\frac{(l_k)^2+l_k}{h-1}}, (j+1)2^{-(l_k)^2} - \frac{1}{2}\, 2^{-\frac{(l_k)^2+l_k}{h-1}}\right] . \end{aligned}$$

For \(h=1\), set \( F_{h-1,k}=F_{0,k}\) as

$$\begin{aligned} F_{0,k}= \bigcup _{j=0}^{2^{(l_k)^2}-1} \left[ (j+1)2^{-(l_k)^2} - 2^{-k((l_k)^2+l_k)}, (j+1)2^{-(l_k)^2} - \frac{1}{2}\, 2^{-k((l_k)^2+l_k)}\right] . \end{aligned}$$

One has [4]:

Proposition 16

For \(h\in [1,2)\), let \(k_h = \max (3, [1/h]+2)\) and

$$\begin{aligned} F_{h-1} = \bigcap _{k\ge k_h } F_{h-1,k} \ \subset \ [{\delta }_{0},1- {\delta }_{0}] . \end{aligned}$$

For every \({\mathbf {x}}\in F_{h-1}\times [0,1]^{d-1}\), \(h_g({\mathbf {x}})\le h-1.\)

In particular, \(\dim E_g(0)=d-1\).

Proposition 17

For \(h\in [1,2)\), there exists a probability measure \(\mu _{h-1}\) such that \(\mu _{h-1}( F_{h-1}\times [0,1]^{d-1}) =1\) and for every \({\mathbf {x}}\in F_{h-1}\times [0,1]^{d-1}\),

$$\begin{aligned} \liminf _{r\rightarrow 0+}\frac{\log {{ {\mu }}_{h-1}}\big (B({\mathbf {x}},r) \big )}{\log r}\ge d+h-2. \end{aligned}$$
(33)

Hence, using the mass distribution principle, one deduces from (33) that \(\dim (F_{h-1}\times [0,1]^{d-1} ) \ge d+h-2\). From the last two propositions, one deduces that for every \(h\in [1,2]\),

$$\begin{aligned} \dim E_g^{\le } (h-1) \ge d+h-2. \end{aligned}$$

Finally, Proposition 18 of [4] gives:

Proposition 18

For every \(h>2\), \( E_g(h-1) = \emptyset \).

Finally, we use the fact that the functions f have nowhere a pointwise Hölder exponent greater than 2 (by Proposition 10), and we finish the proof of Theorem 15 which, combined with Theorem 1, yields Theorem 2.

If \({\mathbf {x}}\in F_{h-1}\times [0,1]^{d-1} \), then \(h_g({\mathbf {x}}) \le h -1 \in [0,1]\). Necessarily, \( h_g({\mathbf {x}}) +1\le h_f({\mathbf {x}}) \le 2\). By Lemma 5, the only possibility is \(h_f({\mathbf {x}}) = h_g({\mathbf {x}})+1 \le (h-1)+1 = h\). Hence \( \dim E^{\le }_f(h) = \dim E^{\le }_g(h-1) \ge d+h-2\). Even more, the same argument as above (Proposition 16 combined with Proposition 17) gives \(\mu _{h-1} (E^{\le }_f(h))>0\).

Using the upper bound of Proposition 11, one knows that for every large integer \(n\ge 1\), \(\dim (E^{\le }_f(h-1/n)) \le h-1/n+d-2\), hence \(\mu _{h-1} \big ( E^{\le }_f(h-1/n) \big ) =0\). Since

$$\begin{aligned} E_f(h) =E_f^{\le } (h) {\setminus } \bigcup _{n\ge 1} E^{\le }_f(h-1/n) , \end{aligned}$$

one deduces that \(\mu _{h-1} (E_f(h)) >0\), which implies that \(\dim E_f(h) \ge d+h-2\). Since the converse inequality also holds true, one concludes that \(\dim E_f(h) = d+h-2\). \(\square \)