1 Introduction

The multifractal formalism is a formula used to derive the spectrum of singularities of a signal f from average quantities extracted from f. It has proved important in mathematics, physics and signal analysis (see, for example, [32, 39] and references therein). Usually, the spectrum consists of the Hausdorff dimensions of the sets of points where the regularity of f takes a given value. It allows one to understand the fractal geometry of pointwise regularity fluctuations. The multifractal formalism was first introduced in the context of the statistical study of fully developed turbulence in the mid 1980s by Frisch and Parisi [26], following the pioneering works [41,42,43,44] of Mandelbrot, who had associated fractals to measures (or functions) by introducing multiplicative cascades for the dissipation of energy in turbulent flows. The scope of the mathematical validity of the multifractal formalism has become an important issue (see Ref. [32] and references therein). The most commonly used notion of pointwise regularity for functions is Hölder regularity.

Definition 1

Let \(\alpha >0\), \(\alpha \notin {\mathbb {N}}_0\), \({\mathbf {x}}=(x_1, \ldots ,x_d) \in {\mathbb {R}}^d\) and \(f: {\mathbb {R}}^d \rightarrow {\mathbb {R}}\). We say that \(f \in C^\alpha ({\mathbf {x}})\) if there exists \(C>0\), a neighborhood \(V({\mathbf {x}})\) of \({\mathbf {x}}\), and a polynomial \(P({\mathbf {y}}-{\mathbf {x}})= \displaystyle \sum _{I=(i_1, \ldots , i_d) \in {\mathbb {N}}_0^d} a_{I}({\mathbf {y}}-{\mathbf {x}})^{I} = \displaystyle \sum _{I=(i_1, \ldots , i_d)\in {\mathbb {N}}_0^d} a_{I} (y_1-x_1)^{i_1} \cdots (y_d-x_d)^{i_d}\) of degree \(\ d^o P < \alpha \), such that

$$\begin{aligned} \forall \; {\mathbf {y}}\in V({\mathbf {x}}) \qquad |f({\mathbf {y}})-P({\mathbf {y}}-{\mathbf {x}})| \le C |{\mathbf {y}}-{\mathbf {x}}|^\alpha . \end{aligned}$$
(1)

We say that \(f \in C_{log}^\alpha ({\mathbf {x}})\) if (1) is replaced by

$$\begin{aligned} \forall \; {\mathbf {y}}\in V({\mathbf {x}}){\setminus }\{{\mathbf {x}}\} \qquad |f({\mathbf {y}})-P({\mathbf {y}}-{\mathbf {x}})| \le C |{\mathbf {y}}-{\mathbf {x}}|^\alpha \log (1/|{\mathbf {y}}-{\mathbf {x}}|). \end{aligned}$$
(2)

We say that f is uniformly Hölder if there exists \(\alpha >0\) such that \(f \in C^\alpha ({\mathbb {R}}^d)\) in the sense that (1) holds for all \({\mathbf {x}}, {\mathbf {y}}\in {\mathbb {R}}^d\) with C a uniform constant.

Wavelet analysis plays a key role since it characterizes Hölder regularity and many isotropic functional spaces. Standard orthonormal wavelet bases consist of tensor products of 1-D wavelets with the same dilation factor \(2^j\) at scale j along all coordinate axes. Nonetheless, they are not optimal for analyzing directional or anisotropic features such as edges. Many signals on \({\mathbb {R}}^d\), \(d \ge 2\), present anisotropies, quantified through regularity characteristics and features that strongly differ when measured in different directions [4, 15, 16, 18, 20, 33, 36, 48,49,50]. Such signals belong to classes of functions that have different degrees of smoothness along different directions (see [1,2,3, 5, 7, 9, 10, 18, 20, 33, 50, 53,54,55,56] and the references therein; see also the recent book [23] for some historical comments on the classical mixed smoothness theory, without pointwise issues). These directional behaviors are important for edge detection, efficient image compression, texture classification, etc. Extensions of wavelet bases elongated in particular directions were considered (see [17, 22, 29, 33, 37, 40, 47, 54]). Useful decompositions of anisotropic function spaces into simple anisotropic or hyperbolic building blocks (atoms, quarks, splines, wavelets, Littlewood-Paley blocks, ...) were also obtained (see [8, 10, 11, 13, 14, 21, 25, 27, 28, 30, 31, 35, 55, 57, 58]), depending on the precise definition of those spaces.

The pointwise Hölder regularity \(C^\alpha ({\mathbf {x}})\) does not take into account possible directional regularity behaviors along the coordinate axes. For example, if \(\varvec{\alpha }= (\alpha _1, \ldots ,\alpha _d) \in (0,1)^d\), the functions \(f(y_1, \ldots ,y_d) = \displaystyle \sum _{i=1}^d |y_i-x_i|^{\alpha _i}\) and \(g(y_1, \ldots ,y_d) =\displaystyle \prod _{i=1}^d |y_i-x_i|^{\alpha _i} \) have Hölder regularities \(\displaystyle \min \{ \alpha _i \}\) and \(\displaystyle \sum _{i=1}^d \alpha _i\), respectively, at the point \({\mathbf {x}}=(x_1, \ldots ,x_d) \in {\mathbb {R}}^d\). There exist various ways to measure the directional regularity of a function around a given point. In order to capture directional behavior of the form \( \displaystyle \sum _{i=1}^d |y_i-x_i|^{\alpha _i}\), Jaffard [33] defined the following d-parameter directional regularity \(C_J^{(\alpha _1, \ldots , \alpha _d)}({\mathbf {x}})\) for \(\alpha _1>0, \ldots , \alpha _d>0\).

Definition 2

Let \(f: {\mathbb {R}}^d \rightarrow {\mathbb {R}}\). Let \(\varvec{\alpha } = (\alpha _1, \ldots , \alpha _d) \in (0,\infty )^d\) (we will write \(\varvec{\alpha } > {\mathbf {0}}\) where \({\mathbf {0}}=(0, \ldots ,0)\) in \({\mathbb {R}}^d\)). We say that \(f \in C_J^{\varvec{\alpha }}({\mathbf {x}})\) if there exist a constant \(C>0\), a neighborhood \(V({\mathbf {x}})\) of \({\mathbf {x}}\), and a polynomial \(P({\mathbf {y}}-{\mathbf {x}}) = \displaystyle \sum _{I=(i_1, \ldots , i_d) \in {\mathbb {N}}_0^d} a_{I}({\mathbf {y}}-{\mathbf {x}})^{I} \) of degree less than \(\varvec{\alpha }\) in the sense that

$$\begin{aligned} \max \left\{ \sum _{n=1}^d \frac{i_n}{\alpha _n} \,: \,a_I \ne 0\right\} < 1, \end{aligned}$$
(3)

such that

$$\begin{aligned} \forall \; {\mathbf {y}}\in V({\mathbf {x}}) \qquad \left| f({\mathbf {y}}) - P({\mathbf {y}}-{\mathbf {x}})\right| \le C \sum _{n=1}^d |x_n-y_n|^{\alpha _n}. \end{aligned}$$
(4)

Actually, Definition 2 is an extension of the notion of anisotropic regularity which was already introduced in [10]; let \({\mathbf {u}}=(u_1, \ldots , u_d)\) satisfy

$$\begin{aligned} 0<u_1, \ldots , u_d < d \ \ \text{ and } \ \displaystyle \sum _{n=1}^d u_n=d\;. \end{aligned}$$
(5)

For \(I=(i_1,\cdots ,i_d) \in {\mathbb {N}}_0^d\), we set \(d_{{\mathbf {u}}}(I) = \displaystyle \sum _{n=1}^d \;u_n\; i_n\). If \(P=\displaystyle \sum _{I\in {\mathbb {N}}_0^d} a_{I} {\mathbf {y}}^{I}\) is a polynomial we define its \({\mathbf {u}}\)-homogeneous degree by

$$\begin{aligned} d_{\mathbf {u}}(P) = \max \left\{ d_{\mathbf {u}}(I)\,: \,a_I \ne 0\right\} . \end{aligned}$$

Definition 3

Let \(f: {\mathbb {R}}^d \rightarrow {\mathbb {R}}\). Let \(s >0\). We say that \(f \in C_{{\mathbf {u}}}^{s}({\mathbf {x}})\) if there exist a constant \(C>0\), a neighborhood \(V({\mathbf {x}})\) of \({\mathbf {x}}\) and a polynomial P of \({\mathbf {u}}\)-homogeneous degree less than s such that

$$\begin{aligned} \forall \; {\mathbf {y}}\in V({\mathbf {x}}) \qquad \left| f ({\mathbf {y}}) - P({\mathbf {y}}-{\mathbf {x}})\right| \le C \sum _{n=1}^{d} |y_n-x_n|^{s/u_n}. \end{aligned}$$
(6)

In order to capture directional behavior of the form \(\displaystyle \prod _{i=1}^d |y_i-x_i|^{\alpha _i}\), Ayache et al. [6] and Kamont [35] defined the following pointwise rectangular Lipschitz regularity \(Lip^{(\alpha _1, \ldots , \alpha _d)}({\mathbf {x}})\) for \(0< \alpha _1, \ldots , \alpha _d <1\). Denote by \({\mathcal {D}}\) the set \(\{1, \cdots ,d\}\). For \(i \in {\mathcal {D}}\), let \(e_i= ( \delta _{1,i}, \ldots , \delta _{d,i})\) denote the i-th coordinate vector in \({\mathbb {R}}^d\). Let \({\mathbf {n}}=(n_{1},\ldots ,n_{d})\in {\mathbb {N}}_0^{d}\). For \(f: {\mathbb {R}}^d \rightarrow {\mathbb {R}}\), \({\mathbf {x}}=(x_{1}, \cdots ,x_{d})\in {\mathbb {R}}^{d}\) and \({\mathbf {h}}=(h_{1}, \ldots ,h_{d})\in {\mathbb {R}}^{d}\), the anisotropic differences of order \( {\mathbf {n}}\) were first defined by Kamont [34] as

$$\begin{aligned} {\varDelta }_{{\mathbf {h}}}^{{\mathbf {n}}}f({\mathbf {x}})={\varDelta }_{h_{1},1}^{n_{1}}\circ \cdots \circ {\varDelta }_{h_{d},d}^{n_{d}}f({\mathbf {x}}), \end{aligned}$$
(7)

where for \(n\in {\mathbb {N}}_0\),

$$\begin{aligned} {\varDelta }_{h,i}^{n}f({\mathbf {x}})=\sum _{l=0}^{n}(-1)^{n+l}\left( {\begin{array}{c}n\\ l\end{array}}\right) f\left( {\mathbf {x}}+l h e_{i}\right) . \end{aligned}$$
(8)
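
As a concrete illustration, here is a minimal numerical sketch of the anisotropic differences (7)–(8) in Python; the routine name delta_aniso and the separable test function are arbitrary illustrative choices, and the composition of the one-dimensional differences is expanded into a d-fold binomial sum (licit since they act on distinct coordinates).

```python
import itertools
from math import comb

import numpy as np


def delta_aniso(f, x, h, n):
    """Anisotropic difference Delta_h^n f(x) of (7)-(8).

    Composing the 1-D differences Delta_{h_i,i}^{n_i} along the d coordinate
    directions expands into a d-fold binomial sum, coded here directly:
    sum over l = (l_1,...,l_d), 0 <= l_i <= n_i, of
    prod_i (-1)^(n_i+l_i) * C(n_i, l_i) * f(x + (l_1 h_1, ..., l_d h_d)).
    """
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    total = 0.0
    for ls in itertools.product(*(range(ni + 1) for ni in n)):
        coeff = 1.0
        for ni, li in zip(n, ls):
            coeff *= (-1) ** (ni + li) * comb(ni, li)
        total += coeff * f(x + np.asarray(ls) * h)
    return total


# Example: for a separable function the difference factorizes (cf. Remark 2 below).
f = lambda y: abs(y[0] - 0.5) ** 0.3 * abs(y[1] - 0.5) ** 0.6
print(delta_aniso(f, x=(0.4, 0.4), h=(0.01, 0.02), n=(1, 1)))
```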

Definition 4

Let \(\varvec{\alpha } = (\alpha _1, \ldots , \alpha _d) \in (0,1)^d\). Put \({\mathbf {1}}= (1, \ldots ,1) \in {\mathbb {R}}^d\). We say that \(f \in Lip^{\varvec{\alpha }}({\mathbf {x}})\), if there exists \(C>0\) such that

$$\begin{aligned} \forall \; {\mathbf {h}}= (h_{1}, \ldots , h_{d}) \in {\mathbb {R}}^d \quad \left| {\varDelta }_{{\mathbf {h}}}^{{\mathbf {1}}}f({\mathbf {x}})\right| \le C \prod _{n=1}^d \left| h_{n}\right| ^{\alpha _{n}}\;. \end{aligned}$$
(9)

Actually, the rectangular Lipschitz regularity was introduced in a global sense for any \({\mathbf {x}}\) and \({\mathbf {x}}+ {\mathbf {h}}\) in a cube \({\mathcal {Q}}\) in order to prove that realizations of fractional Brownian sheets on \({\mathcal {Q}}\) are uniform rectangular Lipschitz.

If \(\varvec{\alpha }> {\mathbf {0}}\), define the average regularity \({\tilde{\alpha }}\) as the harmonic mean of the \(\alpha _n\), i.e.,

$$\begin{aligned} \frac{1}{{\tilde{\alpha }}} = \frac{1}{d} \; \sum _{n=1}^d \frac{1}{\alpha _n}. \end{aligned}$$
(10)

Then the anisotropy indices \({\mathbf {v}}=(v_1, \ldots , v_d)\) defined by

$$\begin{aligned} v_n = \frac{{\tilde{\alpha }}}{\alpha _n}, \end{aligned}$$
(11)

satisfy

$$\begin{aligned} 0<v_1, \ldots , v_d < d \ \ \text{ and } \ \displaystyle \sum _{i=1}^d v_i=d. \end{aligned}$$
(12)
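
For instance, choosing \(d=2\) and the illustrative value \(\varvec{\alpha }=(1/2,3/2)\), (10) and (11) give

$$\begin{aligned} \frac{1}{{\tilde{\alpha }}} = \frac{1}{2}\left( 2+\frac{2}{3}\right) = \frac{4}{3}, \qquad {\tilde{\alpha }} = \frac{3}{4}, \qquad {\mathbf {v}} = \left( \frac{3}{2}, \frac{1}{2}\right) , \end{aligned}$$

so that \(0<v_1, v_2<2\) and \(v_1+v_2=2=d\), in accordance with (12).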

In Ref. [10], it is proved that

$$\begin{aligned} C_J^{\varvec{\alpha }}({\mathbf {x}})=C_{{\mathbf {v}}}^{{\tilde{\alpha }}}({\mathbf {x}}). \end{aligned}$$
(13)

An almost characterization of \(C_{{\mathbf {v}}}^{{\tilde{\alpha }}}({\mathbf {x}})\) in terms of a \({\mathbf {v}}\)-anisotropic Triebel wavelet basis was also obtained in Ref. [10]. The functions of the latter basis, and more generally anisotropic blocks, are tensor products of 1-D functions with dilation factors of about \(2^{j v_1}, \ldots , 2^{j v_d}\) along the coordinate axes. Note that for several 2-D signals, the study was restricted to parabolic anisotropy, i.e., a contraction by \(\lambda \) along the \(x_1\) axis and by \(\lambda ^2\) along the \(x_2\) axis (see [37, 47, 54] and references therein).

However, for signals where no a priori anisotropy is prescribed, hyperbolic blocks are tensor products of 1-D wavelets that allow different dilation factors \(2^{j_1}, \ldots , 2^{j_d}\) along the coordinate axes (see [1, 21, 52, 59, 60]). The latter have the advantage of containing all possible anisotropies. Recently, in [12], we provided a characterization of \(Lip^{\varvec{\alpha }}({\mathbf {x}})\) in terms of the hyperbolic Schauder (spline) system. We also proved that fractional Brownian sheets are pointwise rectangular monofractal. By contrast, we constructed a class of Sierpinski selfsimilar functions that are pointwise rectangular multifractal.

In this paper, we first propose a substitute for the rectangular pointwise Lipschitz regularity \(Lip^{\varvec{\alpha }}({\mathbf {x}})\) that has the advantage of covering any \(\varvec{\alpha }> {\mathbf {0}}\). We define the rectangular pointwise regularity \({\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) through local oscillations of the function over rectangular parallelepipeds. Our definition is reminiscent of Kamont's anisotropic Besov spaces \(B^{\varvec{\alpha }}_{\infty , \infty }(I^d)\) on the unit cube (see [34]).

For \(\varvec{\epsilon }=(\epsilon _{1}, \ldots , \epsilon _{d}) > {\mathbf {0}}\), we denote by \(B({\mathbf {x}},\varvec{\epsilon })\) the rectangular parallelepiped

$$\begin{aligned} B({\mathbf {x}},\varvec{\epsilon })=\prod _{i=1}^d [x_{i}-\epsilon _{i},x_{i}+\epsilon _{i}]. \end{aligned}$$
(14)

For \({\mathbf {n}} \in {\mathbb {N}}_0^{d}\), the anisotropic (or hyperbolic) \({\mathbf {n}}\)-oscillation of f in \(B({\mathbf {x}},\varvec{\epsilon })\) is defined by

$$\begin{aligned} \omega _{f}({\mathbf {n}},\varvec{\epsilon })({\mathbf {x}})=\sup _{[{\mathbf {y}},{\mathbf {y}}+{\mathbf {n}}{\mathbf {h}}]\subset B({\mathbf {x}},\varvec{\epsilon })}\left| {\varDelta }_{{\mathbf {h}}}^{{\mathbf {n}}}f({\mathbf {y}})\right| , \end{aligned}$$
(15)

where

$$\begin{aligned} {\mathbf {n}}{\mathbf {h}}=(n_{1}h_{1}, \ldots ,n_{d}h_{d}) \;\; \text{ and } \;\; [{\mathbf {y}},{\mathbf {y}}+{\mathbf {n}}{\mathbf {h}}]=\prod _{i=1}^d [y_{i},y_{i}+n_{i}h_{i}]\;. \end{aligned}$$
(16)
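
The oscillation (15) can be probed numerically by brute force; the sketch below reuses the routine delta_aniso from the sketch following (8), and the Monte-Carlo sampling, the restriction to \({\mathbf {h}}\ge {\mathbf {0}}\) and the sample size are simplifying assumptions, so only a lower estimate of the supremum is returned.

```python
import numpy as np


def hyperbolic_oscillation(f, x, n, eps, n_samples=20000, seed=0):
    """Lower estimate of the n-oscillation (15) of f on B(x, eps).

    Boxes [y, y + n h] contained in B(x, eps) are sampled at random (with h >= 0
    for simplicity) and the largest anisotropic difference |Delta_h^n f(y)| found
    is returned; delta_aniso is the routine of the previous sketch.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    eps = np.asarray(eps, dtype=float)
    n_arr = np.asarray(n, dtype=int)
    best = 0.0
    for _ in range(n_samples):
        h = rng.uniform(0.0, 2.0 * eps) / np.maximum(n_arr, 1)  # ensures n*h <= 2*eps
        y = rng.uniform(x - eps, x + eps - n_arr * h)            # keeps the box inside B(x, eps)
        best = max(best, abs(delta_aniso(f, y, h, tuple(n))))
    return best
```

Regressing \(\log \omega _{f}({\mathbf {n}},\varvec{\epsilon })({\mathbf {x}})\) against \(\log \epsilon _{1}, \ldots , \log \epsilon _{d}\) over dyadic choices of \(\varvec{\epsilon }\) then provides numerical candidates for admissible exponents \(\varvec{\alpha }\) in Definition 5 below.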

For \({\mathbf {x}}=(x_{1}, \ldots ,x_{d})\in {\mathbb {R}}^{d}\) and \(\mathbf {x'}=(x'_{1}, \cdots ,x'_{d})\in {\mathbb {R}}^{d}\), we will write \({\mathbf {x}}\le \mathbf {x'}\) if \(x_{i}\le x'_{i} \; \forall \; i \in {\mathcal {D}}\), \({\mathbf {x}} < \mathbf {x'}\) if \(x_{i} < x'_{i} \; \forall \; i \in {\mathcal {D}}\), \({\mathbf {x}}\ge \mathbf {x'}\) if \(x_{i}\ge x'_{i} \; \forall \; i \in {\mathcal {D}}\), and \({\mathbf {x}} > \mathbf {x'}\) if \(x_{i} > x'_{i} \; \forall \; i \in {\mathcal {D}}\). Symbols \(\nleq , \nless , \ngeq \) and \( \ngtr \) are the negations.

Definition 5

Let \(\varvec{\alpha }=(\alpha _{1}, \ldots , \alpha _{d}) > {\mathbf {0}}\) and \({\mathbf {x}}=(x_{1}, \ldots ,x_{d})\in {\mathbb {R}}^{d}\). We say that f is rectangular pointwise regular with exponent \(\varvec{\alpha }\) at the point \({\mathbf {x}}\), and we write \(f \in {\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\), if

$$\begin{aligned} \forall \; {\mathbf {n}}> \varvec{\alpha } \; \exists \; C>0 \;\; \forall \;\varvec{\epsilon }> {\mathbf {0}}\qquad \omega _{f}({\mathbf {n}},\varvec{\epsilon })({\mathbf {x}})\le C \prod _{i=1}^d \epsilon _{i}^{\alpha _{i}}. \end{aligned}$$
(17)

Remark 1

Definition 5 is an extension of the following anisotropic rectangular regularity.

Definition 6

Let \(f: {\mathbb {R}}^d \rightarrow {\mathbb {R}}\). Let \(s >0\) and \({\mathbf {u}}\) be as in (5). We say that \(f \in R^s_{\mathbf {u}}({\mathbf {x}})\) if

$$\begin{aligned} \forall \; {\mathbf {n}}> (s/u_1,\ldots ,s/u_d) \; \exists \; C>0 \;\; \forall \;\varvec{\epsilon }> {\mathbf {0}}\qquad \omega _{f}({\mathbf {n}},\varvec{\epsilon })({\mathbf {x}})\le C \prod _{i=1}^d \epsilon _{i}^{s/u_i}. \end{aligned}$$
(18)

If \(\varvec{\alpha }=(\alpha _{1}, \ldots , \alpha _{d}) > {\mathbf {0}}\) and \({\tilde{\alpha }}\) and \({\mathbf {v}}\) are as in (10) and (11) then

$$\begin{aligned} {\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}}) = R^{{\tilde{\alpha }}}_{\mathbf {v}}({\mathbf {x}}). \end{aligned}$$
(19)

Another starting point which led us to Definition 5 is the almost characterization of Hölder regularity by means of isotropic local oscillations (see [32] and Remark 2 below); if \(n \in {\mathbb {N}}_0\), define the difference of order n of f by

$$\begin{aligned} {\varDelta }_{{\mathbf {h}}}^{n}f({\mathbf {x}})=\sum _{l=0}^{n}(-1)^{n+l}\left( {\begin{array}{c}n\\ l\end{array}}\right) f \left( {\mathbf {x}}+l {\mathbf {h}}\right) . \end{aligned}$$
(20)

Define the oscillation of order n of f on the Euclidean ball \(B({\mathbf {x}}, \varepsilon )\) of center \({\mathbf {x}}\) and radius \(\varepsilon \) by

$$\begin{aligned} OS_f^nB({\mathbf {x}}, \varepsilon ) = \sup _{[{\mathbf {y}},{\mathbf {y}}+n{\mathbf {h}}]\subset B({\mathbf {x}},\epsilon )}\left| {\varDelta }_{{\mathbf {h}}}^{n}f({\mathbf {y}})\right| , \end{aligned}$$
(21)

where

$$\begin{aligned} {[}{\mathbf {y}},{\mathbf {y}}+n{\mathbf {h}}]=\prod _{i=1}^d [y_{i},y_{i}+nh_{i}]. \end{aligned}$$
(22)

In Ref. [32, Proposition 6], it is proved that if \(\alpha \in (0,\infty )\) and \(f \in C^\alpha ({\mathbf {x}})\) then

$$\begin{aligned} \forall \; n> \alpha \;\; \forall \; \epsilon > 0 \quad OS_f^nB({\mathbf {x}}, \varepsilon ) \le C \epsilon ^\alpha , \end{aligned}$$
(23)

and conversely, if f is uniformly Hölder and (23) holds then \(f \in C_{log}^{\alpha }({\mathbf {x}})\).

Remark 2

Clearly, if \(u_1, \cdots , u_d\) are functions on \({\mathbb {R}}\) and f is separable, i.e., \(f({\mathbf {x}})=\prod _{i=1}^d u_i(x_{i})\), then

$$\begin{aligned} {\varDelta }_{{\mathbf {h}}}^{{\mathbf {n}}}f({\mathbf {x}})= \prod _{i=1}^d {\varDelta }_{h_{i}}^{n_{i}}u_i(x_{i}). \end{aligned}$$

It follows that

$$\begin{aligned} \omega _{f}({\mathbf {n}},\varvec{\epsilon })({\mathbf {x}})= \prod _{i=1}^d OS_{u_i}^{n_i}B(x_i, \varepsilon _i). \end{aligned}$$

If \(u_i \in C^{\alpha _i}(x_i)\) for all i, then applying (23) to each \(u_i\) instead of f, we deduce that \(f \in {\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\).

On the other hand, although Hausdorff dimension provides information on the fullness of a set when viewed at fine scales, two sets of the same dimension may have very different appearances. If C is the Cantor 'middle half' set, which has Hausdorff dimension 1/2, then the product set \(C \times C \subset {\mathbb {R}}^2\) has the same Hausdorff dimension as a line segment in \({\mathbb {R}}^2\). Various quantities, such as lacunarity and porosity, have been introduced to complement dimension (see [45]). In 1988, Rogers [51] proposed dimension prints, based on measures of Hausdorff type, to provide more information about the local affine structure of subsets of \({\mathbb {R}}^d\), for \(d \ge 2\); they rest on a d-parameter family of measures \({\mathcal {H}}^{(t_1, \ldots ,t_d)}\) on subsets of \({\mathbb {R}}^d\), defined similarly to the Hausdorff measures \({\mathcal {H}}^{s}\). Let \({\mathcal {B}}\) be the set of rectangular parallelepipeds in \({\mathbb {R}}^d\). For \(B \in {\mathcal {B}}\), we denote by \(l_1(B), \ldots , l_d(B)\) the edge-lengths of B, written in the order \(l_d(B) \le \cdots \le l_1(B)\). For a subset A of \({\mathbb {R}}^d\) and \(\delta >0\), we say that \((B_n)_{n \in {\mathbb {N}}} \subset {\mathcal {B}} \) is a \(\delta \)-cover of A, and we write \((B_n)_{n \in {\mathbb {N}}} \in {\mathcal {C}}_\delta (A)\), if \(\forall \; n \;\; l_1(B_n) \le \delta \) and \( A \subset \bigcup _{n} B_n\).

Define for all \(\delta >0\) and \({\mathbf {t}}= (t_1, \ldots ,t_d) \ge {\mathbf {0}}\) the quantity

$$\begin{aligned} {\mathcal {H}}_\delta ^{\mathbf {t}}(A) = \inf \left\{ \sum _{n \in {\mathbb {N}}} \prod _{i=1}^d (l_i(B_n))^{t_i} \;\;:\; (B_n)_n \in {\mathcal {C}}_\delta (A) \right\} . \end{aligned}$$
(24)

Then the map

$$\begin{aligned} A \mapsto {\mathcal {H}}^{\mathbf {t}}(A) = \sup _{\delta > 0} {\mathcal {H}}_\delta ^{\mathbf {t}}(A), \end{aligned}$$
(25)

is a measure of Hausdorff type. The dimension print of A is not a number but the set defined as

$$\begin{aligned} print A = \{ {\mathbf {t}}\ge {\mathbf {0}}\;:\; {\mathcal {H}}^{{\mathbf {t}}}(A) >0 \}. \end{aligned}$$
(26)

As each d-parameter measure weights the side lengths of the rectangular parallelepipeds differently, it is possible to distinguish between sets that are easily covered by long thin rectangular parallelepipeds, such as a line segment L, and sets which are not, such as the product of two regular Cantor sets in \({\mathbb {R}}\) (see [51], Example 2, p. 3 and [24], pp. 50–52).

In the next section, we characterize both \({\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) and \(Lip^{\varvec{\alpha }}({\mathbf {x}})\) in terms of hyperbolic wavelet coefficients (see Theorems 1 and 2). We deduce that \({\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) is a good substitute for \( Lip^{\varvec{\alpha }}({\mathbf {x}})\). We also give the characterization of \({\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) in terms of hyperbolic wavelet leaders (see Theorem 3). Using Remark 1, we also deduce that hyperbolic wavelets are able to analyze all anisotropies. This confirms that the universality of hyperbolic wavelets, which holds for global regularity (see [1, 55]), also holds for pointwise regularity.

Section 3 aims at obtaining a numerical procedure that permits extracting information on the dimension print of the sets of level rectangular pointwise regularities, expressed in terms of hyperbolic wavelet leaders (see Theorem 4 and Corollary 1).

Finally, in Sect. 4, we apply our results to some selfsimilar cascade wavelet series, written as the superposition of similar anisotropic structures at different scales, reminiscent of some possible models of turbulence or cascades.

2 Characterization in Terms of Hyperbolic Wavelets

2.1 Characterization with Hyperbolic Wavelet Coefficients

We will characterize both \({\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) and \(Lip^{\varvec{\alpha }}({\mathbf {x}})\) in terms of hyperbolic wavelet coefficients. We will then deduce that \({\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) is a good substitute for \( Lip^{\varvec{\alpha }}({\mathbf {x}})\) that has the advantage of covering any \( \varvec{\alpha } > {\mathbf {0}}\). We will also give the characterization of \({\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) in terms of hyperbolic wavelet leaders.

Let \(\psi _{-1}\) and \(\psi _{1}\) be the Daubechies [19] r-smooth father and mother wavelets (resp. the \(\infty \)-smooth Lemarié-Meyer wavelets [38, 46]), compactly supported and r times continuously differentiable for some large value of r (resp. in the Schwartz class). It is known that \(\displaystyle \int _{{\mathbb {R}}}\psi _{-1}(t) dt=1\) and \(\displaystyle \int t^n \psi _{1}(t) \;dt =0\) for \(0 \le n < r\) (resp. for all \(n \in {\mathbb {N}}_0\)).

Put

$$\begin{aligned} {[}j] =\left\{ \begin{array}{ll} j &{}\hbox { if } \;j \in {\mathbb {N}}_0\;,\\ 0 &{}\hbox { if } \;j=-1. \end{array}\right. \end{aligned}$$
(27)

Put \({\mathbb {N}}_{-1} = {\mathbb {N}}_0 \cup \{-1\}\). For \(j \in {\mathbb {N}}_{-1}\) and \(k \in {\mathbb {Z}}\), put

$$\begin{aligned} \psi _{j,k}(t) =\left\{ \begin{array}{ll} \psi _1(2^{j}t-k) &{}\hbox { if } \;j \in {\mathbb {N}}_0\;,\\ \psi _{-1}(t-k) &{}\hbox { if } \;j=-1. \end{array}\right. \end{aligned}$$
(28)

Then the collection \(\Big (2^{[j]/2}\psi _{j,k}(t)\Big )_{j\in {\mathbb {N}}_{-1}, k\in {\mathbb {Z}}}\) is an orthonormal basis of \(L^2({\mathbb {R}})\).

For \( {\mathbf {j}}=(j_1, \ldots , j_d) \in {\mathbb {N}}_{-1}^d\), put \( [{\mathbf {j}}] =([j_1], \ldots , [j_d]) \) and \(|{\mathbf {j}}| = \displaystyle \sum _{i=1}^d j_i\).

For \( {\mathbf {k}}=(k_1, \ldots , k_d) \in {\mathbb {Z}}^d\) and \({\mathbf {x}}=(x_1,\ldots ,x_d)\in {\mathbb {R}}^d\), put

$$\begin{aligned} {\varPsi }_{{\mathbf {j}}, {\mathbf {k}}}({\mathbf {x}})= \prod _{i=1}^d \psi _{j_i,k_i}(x_i). \end{aligned}$$
(29)

Then the collection \(\{2^{|[{\mathbf {j}}]|/2}\; {\varPsi }_{{\mathbf {j}},{\mathbf {k}}}\;:\;{\mathbf {j}}\in {\mathbb {N}}_{-1}^d, {\mathbf {k}}\in {\mathbb {Z}}^d\}\) is an orthonormal basis of \(L^2({\mathbb {R}}^d)\) called hyperbolic wavelet basis [1, 21, 52, 59, 60]. Thus any function \(f\in L^2({\mathbb {R}}^d)\) can be written as

$$\begin{aligned} f=\sum _{{\mathbf {j}}\in {\mathbb {N}}_{-1}^d} \sum _{{\mathbf {k}}\in {\mathbb {Z}}^d} C_{{\mathbf {j}},{\mathbf {k}}} {\varPsi }_{{\mathbf {j}},{\mathbf {k}}}, \end{aligned}$$
(30)

with

$$\begin{aligned} C_{{\mathbf {j}},{\mathbf {k}}} = 2^{|[{\mathbf {j}}]|}\; \int _{{\mathbb {R}}^d} f({\mathbf {y}}) {\varPsi }_{{\mathbf {j}},{\mathbf {k}}}({\mathbf {y}}) \;d {\mathbf {y}}. \end{aligned}$$
(31)
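
As an illustration of (29)–(31), here is a minimal Riemann-sum sketch of the hyperbolic coefficients in dimension \(d=2\) for the Haar wavelet (which is not r-smooth in the above sense, so it falls outside the assumptions used below, but it keeps the tensor structure transparent); the routine name, the grid size N and the truncation depths J1, J2 are arbitrary choices.

```python
import numpy as np


def haar_psi(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    t = np.asarray(t, dtype=float)
    return (np.where((t >= 0.0) & (t < 0.5), 1.0, 0.0)
            - np.where((t >= 0.5) & (t < 1.0), 1.0, 0.0))


def hyperbolic_haar_coeffs(f, J1, J2, N=512):
    """Riemann-sum approximation of the L^1-normalized coefficients (31),
    C_{(j1,j2),(k1,k2)} = 2^(j1+j2) * integral of f(y) psi_{j1,k1}(y1) psi_{j2,k2}(y2),
    for f on [0,1]^2, with 0 <= j_i < J_i and 0 <= k_i < 2^{j_i}."""
    t = (np.arange(N) + 0.5) / N                       # midpoint grid on [0, 1]
    y1, y2 = np.meshgrid(t, t, indexing="ij")
    F = f(y1, y2)
    C = {}
    for j1 in range(J1):
        for j2 in range(J2):
            for k1 in range(2 ** j1):
                for k2 in range(2 ** j2):
                    psi = haar_psi(2.0 ** j1 * y1 - k1) * haar_psi(2.0 ** j2 * y2 - k2)
                    C[(j1, j2, k1, k2)] = 2.0 ** (j1 + j2) * np.mean(F * psi)
    return C
```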

Remark 3

Since we focus on the local behavior \(f \in {\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) at \({\mathbf {x}}\in {\mathbb {R}}^d\), what happens far from \({\mathbf {x}}\) should not interfere with our questions. This is why we can assume that \({\mathbf {x}} \in (0,1)^d\) and f is supported on the unit cube \([0, 1]^d\), and, since we deal with wavelets, we can assume that f is a wavelet series of the form

$$\begin{aligned} f=\sum _{{\mathbf {j}}\in {\mathbb {N}}_{-1}^d} \sum _{{\mathbf {k}}\in \prod _{i=1}^d [0,2^{[j_i]}-1]} C_{{\mathbf {j}},{\mathbf {k}}} {\varPsi }_{{\mathbf {j}},{\mathbf {k}}}. \end{aligned}$$
(32)

Moreover, it is very easy to analyse \({\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) for low frequency terms; indeed, without any loss of generality, for \(d=2\)

$$\begin{aligned} f({\mathbf {y}})= & {} C_{(-1,-1),(0,0)} \psi _{-1}(y_1) \psi _{-1}(y_2) + \psi _{-1}(y_1) \sum _{j_2 \in {\mathbb {N}}_0} \sum _{k_2=0}^{2^{j_2}-1} C_{(-1,j_2),(0,k_2)} \psi _{j_2,k_2}(y_2)\\&+ \left( \sum _{j_1 \in {\mathbb {N}}_0} \sum _{k_1=0}^{2^{j_1}-1} C_{(j_1,-1),(k_1,0)} \psi _{j_1,k_1}(y_1) \right) \psi _{-1}(y_2)\\&+ \sum _{(j_1,j_2) \in {\mathbb {N}}_{0}^2} \sum _{k_1=0}^{2^{j_1}-1} \sum _{k_2=0}^{2^{j_2}-1} C_{j_1,j_2,k} {\varPsi }_{j_1,j_2,k}({\mathbf {y}}), \end{aligned}$$

using Remark 2, the first term belongs to \({\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) for all \(\varvec{\alpha } < (r,r)\) (recall that r is the regularity of \(\psi _{-1}\)). The second and third terms are products of functions of separate variables, so their rectangular pointwise regularities can also be deduced from Remark 2. For these reasons, it suffices to deal with the fourth term, a series which from now on we call F.

The following theorems almost characterize the rectangular pointwise regularity of the wavelet series

$$\begin{aligned} F= \sum _{{\mathbf {j}}\in {\mathbb {N}}_{0}^d} \sum _{{\mathbf {k}}\in {\mathbb {Z}}^d} C_{{\mathbf {j}},{\mathbf {k}}} {\varPsi }_{{\mathbf {j}},{\mathbf {k}}}. \end{aligned}$$
(33)

They also show that \({\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) is an alternative substitute for \( Lip^{\varvec{\alpha }}({\mathbf {x}})\).

Theorem 1

Let \(\varvec{\alpha }=(\alpha _{1}, \ldots , \alpha _{d}) > {\mathbf {0}}\) and \({\mathbf {x}}=(x_{1}, \ldots ,x_{d})\in {\mathbb {R}}^{d}\).

  1. 1.

    If \(F \in {\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) then there exists \(C>0\) such that

    $$\begin{aligned}&\forall \; {\mathbf {j}}=(j_{1},\ldots , j_{d}) \in {\mathbb {N}}_{0}^d\; \forall \; {\mathbf {k}}=(k_{1}, \ldots ,k_{d}) \quad \nonumber \\&\quad \left| C_{{\mathbf {j}},{\mathbf {k}}} \right| \le C \prod _{i=1}^d (2^{-j_{i}}+\left| k_{i}2^{-j_{i}}-x_i\right| )^{\alpha _{i}}. \end{aligned}$$
    (34)
  2. 2.

    Conversely if F is uniformly Hölder and if (34) holds then \(F \in {\mathfrak {L}}^{\varvec{\alpha '}}({\mathbf {x}})\) for all \(\varvec{\alpha '} < \varvec{\alpha }\).
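
As a hedged numerical illustration of the necessary condition (34), the snippet below reuses hyperbolic_haar_coeffs from the sketch following (31) on the separable test function \(f(y_1,y_2)=|y_1-1/2|^{0.3}\,|y_2-1/2|^{0.6}\), which belongs to \({\mathfrak {L}}^{(0.3,0.6)}((1/2,1/2))\) by Remark 2; the Haar wavelet and the chosen depths do not meet the smoothness assumptions of Theorem 1, but the decay (34) is already visible in this setting.

```python
import numpy as np

# Numerical check of (34) for f(y1, y2) = |y1 - 1/2|^0.3 * |y2 - 1/2|^0.6 at x = (1/2, 1/2),
# using the Haar coefficient sketch above; the depths and grid size are arbitrary.
alpha, x = (0.3, 0.6), (0.5, 0.5)
C = hyperbolic_haar_coeffs(
    lambda y1, y2: np.abs(y1 - 0.5) ** 0.3 * np.abs(y2 - 0.5) ** 0.6, J1=5, J2=5, N=512
)
ratios = []
for (j1, j2, k1, k2), c in C.items():
    bound = ((2.0 ** -j1 + abs(k1 * 2.0 ** -j1 - x[0])) ** alpha[0]
             * (2.0 ** -j2 + abs(k2 * 2.0 ** -j2 - x[1])) ** alpha[1])
    ratios.append(abs(c) / bound)
print("sup_{j,k} |C_{j,k}| / bound =", max(ratios))   # stays of order one
```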

Theorem 2

Let \({\mathbf {0}}< \varvec{\alpha } < {\mathbf {1}}\) and \({\mathbf {x}}\in {\mathbb {R}}^{d}\).

  1. 1.

    If \(F \in Lip^{\varvec{\alpha }}({\mathbf {x}})\) then there exists \(C>0\) such that

    $$\begin{aligned} \forall \; {\mathbf {j}} \in {\mathbb {N}}_{0}^d \;\; \forall \; {\mathbf {k}}\in {\mathbb {Z}}^{d} \quad \left| C_{{\mathbf {j}},{\mathbf {k}}}\right| \le C \prod _{i=1}^d \left( 2^{-j_{i}}+\left| k_{i}2^{-j_{i}}-x_{i}\right| \right) ^{\alpha _i}. \end{aligned}$$
    (35)
  2. 2.

    If F is uniformly Hölder and (35) is satisfied, then

    $$\begin{aligned} \forall \; \varvec{\alpha '} < \varvec{\alpha } \qquad F \in Lip^{\varvec{\alpha '}}({\mathbf {x}}). \end{aligned}$$
    (36)

Remark 4

The converse parts in Theorems 1 and 2 remain valid if (34) is satisfied when \({\mathbf {k}}\) belongs to some well-chosen subset of \({\mathbb {Z}}^d\) depending on \({\mathbf {j}}\). In fact, without any loss of generality, assume that F is supported on the unit cube.

  • Assume that \(\psi _{-1}\) is compactly supported in \([-A,A]\). If there exists \(i \in {\mathcal {D}}\) such that \(|k_i| > A + 2^{j_i}\) then \(C_{{\mathbf {j}},{\mathbf {k}}} = 0\).

  • Assume that \(\psi _{-1}\) belongs to the Schwartz class. Let \(i \in {\mathcal {D}}\). For all \(N \in {\mathbb {N}}\) there exists \(C>0\) such that

    $$\begin{aligned} \forall y_i \quad |\psi _{j_i,k_i}(y_i)| \le \frac{C}{(1+|2^{j_i}y_i-k_i|)^N}. \end{aligned}$$

    If \(|k_i| > 2^{j_i+1}\) and \(0 \le y_i \le 1\) then \( |\psi _{j_i,k_i}(y_i)| \le C 2^{-Nj_i}\). It follows that \(|C_{{\mathbf {j}},{\mathbf {k}}}| \le C \prod _{i=1}^d 2^{-N\theta _ij_i}\) for all \({\mathbf {k}}> (2^{j_1+1}, \ldots , 2^{j_d+1})\) and all \((\theta _1, \ldots , \theta _d) \in (0,1)^{d}\) with \(\theta _1+ \cdots + \theta _d=1\).

Proof of Theorem 1

Since only the wavelet \(\psi _{1}\) is involved, we will write \(\psi \).

  1. 1.

    The proof of this part is reminiscent of that of the converse part of Proposition 6 in [32]. If \(\psi \) is r-smooth and \(r>n\), there exists a function \(\theta _{n}\) with fast decay if \(r = \infty \) (resp. compactly supported) such that \( \psi ={\varDelta }_{\frac{1}{2}}^{n}\theta _{n}\) (see Lemma 2 in [32]).

    If u and v belong to \(L^2({\mathbb {R}})\) then using (20)

    $$\begin{aligned} \int u(t)\; {\varDelta }_{h}^{n} v(t) \; d t=(-1)^{n} \int {\varDelta }_{-h}^{n} u (t)\; v(t) \;dt. \end{aligned}$$
    (37)

    For \({\mathbf {j}}=(j_{1}, \ldots ,j_{d})\in {\mathbb {N}}_0^{d}\) and \({\mathbf {k}}=(k_{1}, \ldots ,k_{d})\in {\mathbb {Z}}^{d}\)

    $$\begin{aligned} C_{{\mathbf {j}},{\mathbf {k}}}&=2^{|{\mathbf {j}}|} \int _{{\mathbb {R}}^{d}} F\left( {\mathbf {y}}\right) \prod _{i=1}^d \psi \left( 2^{j_{i}} y_{i}-k_{i}\right) \mathrm {d}{\mathbf {y}}\\&=\int _{{\mathbb {R}}^{d}}{\widetilde{F}}({\mathbf {y}}) \prod _{i=1}^d \psi (y_{i})\mathrm {d}{\mathbf {y}}, \end{aligned}$$

    where \( \displaystyle {\widetilde{F}}({\mathbf {y}})=F\left( \frac{y_{1}+k_{1}}{2^{j_{1}}},\ldots ,\frac{y_{d}+k_{d}}{2^{j_{d}}}\right) \). Using property (37)

    $$\begin{aligned} C_{{\mathbf {j}},{\mathbf {k}}}&=\int _{{\mathbb {R}}^{d}}{\widetilde{F}}({\mathbf {y}}) \prod _{i=1}^d {\varDelta }_{\frac{1}{2}}^{n_{i}}\theta _{n_{i}}(y_{i}) \mathrm {d}{\mathbf {y}}\\&= (-1)^{n_1+\cdots +n_d} \int _{{\mathbb {R}}^{d}} {\varDelta }_{-\frac{1}{2},1}^{n_{1}} \circ \cdots \circ {\varDelta }_{-\frac{1}{2},d}^{n_{d}}{\widetilde{F}}({\mathbf {y}}) \prod _{i=1}^d \theta _{n_{i}}(y_{i}) \mathrm {d}{\mathbf {y}}\\&= (-1)^{n_1+\cdots +n_d}\sum _{(l_{1}, \cdots , l_d) \in {\mathbb {N}}_0^d} \int _{\prod _{i=1}^d K_{l_{i}}} {\varDelta }_{-\frac{1}{2},1}^{n_{1}}\circ \cdots \circ {\varDelta }_{-\frac{1}{2},d}^{n_{d}}{\widetilde{F}}({\mathbf {y}}) \prod _{i=1}^d \theta _{n_{i}}(y_{i}) \mathrm {d}{\mathbf {y}}, \end{aligned}$$

    where \(K_{0}=\left\{ t \in {\mathbb {R}}\;:\; |t|<1\right\} \) and for \(l\ge 1\), \(K_{l}=\left\{ t\in {\mathbb {R}}\;:\; 2^{l-1}\le |t|<2^{l}\right\} \).

    If \({\mathbf {y}} \in \prod _{i=1}^d K_{l_{i}}\) then

    $$\begin{aligned} \left| \frac{y_{i}+k_{i}}{2^{j_{i}}}-\frac{n_{i}}{2^{(j_{i}+1)}}- x_i\right| \le \left| \frac{k_{i}}{2^{j_{i}}}-x_i\right| +n_{i}2^{l_{i}-j_{i}}. \end{aligned}$$

    For \(i \in {\mathcal {D}}\), put \((\varvec{\epsilon }_{l_{1}, \cdots ,l_{d}})_i= \left| \frac{k_{i}}{2^{j_{i}}}-x_i\right| +n_{i}2^{l_{i}-j_{i}}\).

    Set \(\varvec{\epsilon }_{l_{1},\cdots ,l_{d}}=((\varvec{\epsilon }_{l_{1},\ldots ,l_{d}})_1, \cdots , (\varvec{\epsilon }_{l_{1},\ldots ,l_{d}})_d)\). Therefore

    $$\begin{aligned} \left| C_{{\mathbf {j}},{\mathbf {k}}}\right|&\le \sum _{(l_{1},\ldots ,l_{d}) \in {\mathbb {N}}_0^d} \int _{\prod _{i=1}^d K_{l_{i}}} \omega _{F}({\mathbf {n}},\varvec{\epsilon }_{l_{1},\ldots ,l_{d}})({\mathbf {x}}) \prod _{i=1}^d |\theta _{n_{i}}(y_{i})| \; \mathrm {d}{\mathbf {y}}\\&\le C \sum _{(l_{1},\cdots ,l_{d}) \in {\mathbb {N}}_0^d} \; \int _{\prod _{i=1}^d K_{l_{i}}} \prod _{i=1}^d \frac{\left( \left| \frac{k_{i}}{2^{j_{i}}}-x_i\right| +n_{i}2^{l_{i}-j_{i}}\right) ^{\alpha _{i}}}{(1+|y_{i}|)^{\alpha _{i}+2}} \; \mathrm {d}{\mathbf {y}}\\&\le C \prod _{i=1}^d \left( \sum _{l_{i}=0}^{\infty }\int _{K_{l_{i}}}\frac{\left( \left| \frac{k_{i}}{2^{j_{i}}}-x_i\right| +n_{i}2^{l_{i}-j_{i}}\right) ^{\alpha _{i}}}{(1+|y_{i}|)^{\alpha _{i}+2}} \mathrm {d}y_{i}\right) . \end{aligned}$$

    But for \(y_i \in K_{l_i}\), \(\frac{\left( \left| \frac{k_i}{2^{j_i}}-x_i\right| +n_i2^{l_i-j_i}\right) ^{\alpha _i}}{(1+|y_i|)^{\alpha _i+2}}\) is bounded by \(C \frac{\left| \frac{k_i}{2^{j_i}}-x_i\right| ^{\alpha _i}}{2^{l_i(\alpha _i+1)}}\) if \(n_i 2^{l_i-j_i}\le \left| \frac{k_i}{2^{j_i}}-x_i\right| \), and otherwise by \(C \frac{2^{-\alpha _i j_i}}{2^{l_i}}\). Thus, summing over \((l_{1}, \ldots ,l_{d})\), we get (34).

  2. 2.

    Let us prove the converse part. Since there exists \(\delta >0\) such that \(F \in C^\delta ({\mathbb {R}}^d)\), for all \({\mathbf {j}} \in {\mathbb {N}}_0^{d}\) and all \({\mathbf {k}}\in {\mathbb {Z}}^{d}\)

    $$\begin{aligned} |C_{{\mathbf {j}},{\mathbf {k}}}|= & {} 2^{|{\mathbf {j}}|} |\int (F({\mathbf {y}})-F(k_{1}2^{-j_{1}}, y_2, \ldots ,y_d)) {\varPsi }_{{\mathbf {j}},{\mathbf {k}}}({\mathbf {y}}) \;d{\mathbf {y}}| \\\le & {} C 2^{|{\mathbf {j}}|} \int |y_1-k_{1}2^{-j_{1}}|^\delta |{\varPsi }_{{\mathbf {j}},{\mathbf {k}}}({\mathbf {y}})| \;d{\mathbf {y}}\\\le & {} C 2^{j_1} \int |y_1-k_{1}2^{-j_{1}}|^\delta |\psi _{j_1,k_1}(y_1)| \;dy_1 \;\; \prod _{i=2}^{d} 2^{j_i} \int |\psi _{j_i,k_i}(y_i)| \;dy_i\;. \end{aligned}$$

Using the localization of the wavelet \(\psi \)

$$\begin{aligned} |C_{{\mathbf {j}},{\mathbf {k}}}| \le C 2^{-\delta j_{1}}. \end{aligned}$$
(38)

Analogously, for all \(i \in \{2,\ldots ,d\}\)

$$\begin{aligned} |C_{{\mathbf {j}},{\mathbf {k}}}| \le C 2^{-\delta j_{i}}. \end{aligned}$$
(39)

Relations (38) and (39) imply that

$$\begin{aligned} \left| C_{{\mathbf {j}},{\mathbf {k}}}\right| \le C 2^{-\frac{\delta |{\mathbf {j}}|}{d}}. \end{aligned}$$
(40)

Put \(\delta ' = \delta /d\). Relations (34) and (40) yield

$$\begin{aligned} \forall \; \sigma \in [0,1] \; \forall {\mathbf {j}}\in {\mathbb {N}}_0^{d} \; \forall {\mathbf {k}}\in {\mathbb {Z}}^{d} \quad \left| C_{{\mathbf {j}},{\mathbf {k}}}\right| \le C \prod _{i=1}^d 2^{-(1-\sigma )\delta ' j_{i}} \left( 2^{-j_{i}}+\left| k_{i}2^{-j_{i}}-x_i\right| \right) ^{\sigma \alpha _{i}}. \end{aligned}$$

Let \(\varvec{\epsilon }=(\epsilon _{1},\ldots ,\epsilon _{d}) > {\mathbf {0}}\). Let \({\mathbf {y}}\) and \({\mathbf {h}}=(h_{1}, \cdots ,h_{d})\in {\mathbb {R}}^{d}\) be such that \([{\mathbf {y}},{\mathbf {y}}+{\mathbf {n}}{\mathbf {h}}]\subset B({\mathbf {x}},\varvec{\epsilon })\). Then

$$\begin{aligned} \left| {\varDelta }_{{\mathbf {h}}}^{{\mathbf {n}}}F({\mathbf {y}})\right|\le & {} C \sum _{{\mathbf {j}}\in {\mathbb {N}}_0^{d}}\sum _{{\mathbf {k}} \in {\mathbb {Z}}^{d}} \prod _{i=1}^d 2^{-(1-\sigma )\delta ' j_{i}} \left( 2^{-j_{i}}+\left| k_{i}2^{-j_{i}}-x_i\right| \right) ^{\sigma \alpha _{i}} \left| {\varDelta }_{{\mathbf {h}}}^{{\mathbf {n}}} {\varPsi }_{j,k}({\mathbf {y}}) \right| \\\le & {} C \sum _{{\mathbf {j}}\in {\mathbb {N}}_0^{d}}\sum _{{\mathbf {k}} \in {\mathbb {Z}}^{d}} \prod _{i=1}^d 2^{-(1-\sigma )\delta ' j_{i}} \left( 2^{-j_{i}}+\left| k_{i}2^{-j_{i}}-x_i\right| \right) ^{\sigma \alpha _{i}} \left| {\varDelta }_{h_{i}}^{n_{i}}\psi _{j_{i},k_{i}}(y_{i})\right| . \end{aligned}$$

It follows that

$$\begin{aligned} \left| {\varDelta }_{{\mathbf {h}}}^{{\mathbf {n}}}F({\mathbf {y}})\right| \le C \prod _{i=1}^d R_{\alpha _{i},n_{i}}[x_i](h_{i},y_{i}), \end{aligned}$$
(41)

where for \(x,h,y \in {\mathbb {R}}\)

$$\begin{aligned} R_{\alpha ,n}[x](h,y)=\sum _{j=0}^{\infty }\sum _{k\in {\mathbb {Z}}} 2^{-(1-\sigma )\delta ' j}(2^{-j}+\left| k2^{-j}-x\right| )^{\sigma \alpha }\left| {\varDelta }_{h}^{n}\psi _{j,k}(y)\right| . \end{aligned}$$

Let us show that there exists \(C>0\) such that

$$\begin{aligned} R_{\alpha ,n}[x](h,y)\le C (\left| y-x\right| +|h|)^{\sigma \alpha }. \end{aligned}$$
(42)

Let J be the unique integer such that \(2^{-J}\le \left| h\right| < 2^{-J+1}\). Split \(R_{\alpha ,n}[x](h,y)\) as

$$\begin{aligned}&\sum _{j=0}^{J}\sum _{k\in {\mathbb {Z}}} 2^{-(1-\sigma )\delta ' j}\left( 2^{-j}+\left| k2^{-j}-x\right| \right) ^{\sigma \alpha }\left| {\varDelta }_{h}^{n}\psi _{j,k}(y)\right| \end{aligned}$$
(43)
$$\begin{aligned}&\quad +\sum _{j=J+1}^{\infty }\sum _{k\in {\mathbb {Z}}} 2^{-(1-\sigma )\delta ' j}\left( 2^{-j}+\left| k2^{-j}-x\right| \right) ^{\sigma \alpha }\left| {\varDelta }_{h}^{n}\psi _{j,k}(y)\right| . \end{aligned}$$
(44)

Let us first bound (44). Since \(\psi \) is r-smooth, for N and r large enough

$$\begin{aligned} \sum _{k\in {\mathbb {Z}}} \left( 2^{-j}+\left| k2^{-j}-x\right| \right) ^{\sigma \alpha }\left| {\varDelta }_{h}^{n}\psi _{j,k}(y)\right| \le C \sum _{k\in {\mathbb {Z}}} \left( \sum _{i=0}^{n}\frac{(2^{-\sigma \alpha j}+\left| k2^{-j}-x\right| ^{\sigma \alpha })}{\left( 1+\left| 2^{j} (y+ih)-k\right| \right) ^{N}} \right) . \end{aligned}$$

For \(i\in \left\{ 0,\ldots ,n\right\} \)

$$\begin{aligned}&\frac{(2^{-\sigma \alpha j}+\left| k2^{-j}-x\right| ^{\sigma \alpha })}{\left( 1+\left| 2^{j} (y+ih)-k\right| \right) ^{N}} \nonumber \\&\quad \le C \left( \frac{2^{-\sigma \alpha j}+\left| k2^{-j}-(y+ih) \right| ^{\sigma \alpha } + |y-x|^{\sigma \alpha } + (i|h|)^{\sigma \alpha }}{\left( 1+\left| 2^{j} (y+ih)-k\right| \right) ^{N}}\right) \nonumber \\&\quad \le C \left( \frac{2^{-\sigma \alpha j}}{\left( 1+\left| 2^{j} (y+ih)-k\right| \right) ^{N-\sigma \alpha }} +\frac{ |y-x|^{\sigma \alpha } + |h|^{\sigma \alpha }}{\left( 1+\left| 2^{j} (y+ih)-k\right| \right) ^{N}}\right) . \end{aligned}$$
(45)

It follows that

$$\begin{aligned} \sum _{k\in {\mathbb {Z}}} \left( 2^{-j}+\left| k2^{-j}-x\right| \right) ^{\sigma \alpha }\left| {\varDelta }_{h}^{n}\psi _{j,k}(y)\right| \le C \left( 2^{-\sigma \alpha j}+|y-x|^{\sigma \alpha }+|h|^{\sigma \alpha }\right) .\qquad \end{aligned}$$
(46)

Summing up over \(j\ge J+1\), (44) is bounded by \(C(|y-x|^{\sigma \alpha }+|h|^{\sigma \alpha })\).

Let us now bound (43). Since \({\varDelta }_{h}^{n}\psi _{j,k}(y)={\varDelta }_{2^{j}h}^{n}\psi (2^{j}y-k)\), property 5.1 of [34] gives

$$\begin{aligned} \left| {\varDelta }_{h}^{n}\psi _{j,k}(y)\right| \le 2^{jn}\left| h\right| ^{n}\sup _{u\in [2^{j}y-k,2^{j}(y+nh)-k]}\left| \psi ^{(n)}(u)\right| . \end{aligned}$$
  • If \(k\notin [2^{j}y,2^{j}(y+nh)]\) and \(u\in [2^{j}y-k,2^{j}(y+nh)-k]\) then \(\left| u\right| \ge \min \left\{ \left| 2^{j} y-k\right| ,\left| 2^{j}(y+nh)-k\right| \right\} \). Since \(\psi \) is r-smooth, for N and r large enough

    $$\begin{aligned}&\sum _{k\notin [2^{j}y,2^{j}(y+nh)]}(2^{-j}+\left| k2^{-j}-x\right| )^{\sigma \alpha }\left| {\varDelta }^n_{h}\psi _{j,k}(y)\right| \\&\qquad \le C 2^{nj}|h|^{n} \sum _{k\notin [2^{j}y,2^{j}(y+nh)]} \left( \frac{(2^{-\sigma \alpha j}+\left| k2^{-j}-x\right| ^{\sigma \alpha })}{\left( 1+\left| 2^{j} y-k\right| \right) ^{N}}+\frac{(2^{-\sigma \alpha j}+\left| k2^{-j}-x\right| ^{\sigma \alpha })}{\left( 1+\left| 2^{j} (y+nh)-k\right| \right) ^{N}}\right) \\&\qquad \le C 2^{nj}|h|^{n} \sum _{k\in {\mathbb {Z}}} \left( \frac{(2^{-\sigma \alpha j}+\left| k2^{-j}-x\right| ^{\sigma \alpha })}{\left( 1+\left| 2^{j} y-k\right| \right) ^{N}} +\frac{(2^{-\sigma \alpha j}+\left| k2^{-j}-x\right| ^{\sigma \alpha })}{\left( 1+\left| 2^{j} (y+nh)-k\right| \right) ^{N}}\right) \\&\qquad \le C 2^{nj}|h|^{n}(2^{-\sigma \alpha j}+|y-x|^{\sigma \alpha }+|h|^{\sigma \alpha }) \quad \text{(as } \text{ in } \text{(45) } \text{ and } \text{(46)) }. \end{aligned}$$

    Since \(j\le J\) and \(2^{-J } \le |h| < 2^{-J+1}\),

    $$\begin{aligned}&\sum _{k\notin [2^{j}y,2^{j}(y+nh)]}(2^{-j}+\left| k2^{-j}-x\right| )^{\sigma \alpha }\left| {\varDelta }^n_{h}\psi _{j,k}(y)\right| \\&\quad \le C 2^{nj}|h|^{n}(2^{-\sigma \alpha j}+|y-x|^{\sigma \alpha }). \end{aligned}$$
  • If \(k\in [2^{j}y,2^{j}(y+nh)]\) then \(\left| k2^{-j}-x\right| \le \left| k2^{-j}-y\right| + \left| y-x\right| \le n|h| + \left| y-x\right| \). It follows that

    $$\begin{aligned}&\sum _{k\in [2^{j}y,2^{j}(y+nh)]}(2^{-j}+\left| k2^{-j}-x\right| )^{\sigma \alpha }\left| {\varDelta }_{h}^{n}\psi _{j,k}(y)\right| \\&\qquad \le C \sum _{k\in [2^{j}y,2^{j}(y+nh)]}(2^{-j\sigma \alpha }+(n|h|)^{\sigma \alpha }+\left| y-x\right| ^{\sigma \alpha })2^{nj}|h|^{n}. \end{aligned}$$

    But there are about \(n2^{j}\left| h\right| \) integers k in \([2^{j}y,2^{j}(y+nh)]\). Since \(j\le J\) and \(2^{-J } \le |h| < 2^{-J+1}\),

    $$\begin{aligned}&\sum _{k\in [2^{j}y,2^{j}(y+nh)]}(2^{-j}+\left| k2^{-j}-x\right| )^{\sigma \alpha }\left| {\varDelta }_{h}^{n}\psi _{j,k}(y)\right| \\&\quad \le C \left( 2^{-j\sigma \alpha }+\left| y-x\right| ^{\sigma \alpha }\right) 2^{nj}|h|^{n}. \end{aligned}$$

Thus

$$\begin{aligned}&\sum _{j=0}^{J} \sum _{k\in {\mathbb {Z}}} 2^{-(1-\sigma )\delta ' j} \left( 2^{-j}+\left| k2^{-j}-x\right| \right) ^{\sigma \alpha } \left| {\varDelta }_{h}^{n}\psi _{j,k}(y)\right| \\&\quad \le C \sum _{j=0}^{J} 2^{nj} |h|^{n} ( 2^{-\sigma \alpha j} + |y-x|^{\sigma \alpha } )\\&\quad \le C |h|^{n} (\sum _{j=0}^{J} 2^{j(n-\sigma \alpha )} + 2^{nJ} |y-x|^{\sigma \alpha }) \\&\quad \le C |h|^{n} ( 2^{J(n-\sigma \alpha )} + 2^{nJ}|y-x|^{\sigma \alpha }) \\&\quad \le C(|h|^{\sigma \alpha }+|y-x|^{\sigma \alpha }). \end{aligned}$$

The two sums (43) and (44) are bounded by \(C(|y-x|^{\sigma \alpha }+|h|^{\sigma \alpha })\). This yields (42).

By (41)

$$\begin{aligned} \forall \; [{\mathbf {y}},{\mathbf {y}}+{\mathbf {n}}{\mathbf {h}}]\subset B({\mathbf {x}},\varvec{\epsilon }) \quad \left| {\varDelta }_{{\mathbf {h}}}^{{\mathbf {n}}}F({\mathbf {y}})\right| \le C \prod _{i=1}^d (\left| y_{i}-x_i\right| +|h_{i}|)^{\sigma \alpha _{i}}\le C \prod _{i=1}^d \epsilon _{i}^{\sigma \alpha _{i}}. \end{aligned}$$

Consequently

$$\begin{aligned} \omega _{F}({\mathbf {n}},\varvec{\epsilon })({\mathbf {x}})\le C \prod _{i=1}^d \epsilon _{i}^{\sigma \alpha _{i}}, \end{aligned}$$

and \(F \in {\mathfrak {L}}^{\varvec{\alpha '}}({\mathbf {x}})\) for all \(\varvec{\alpha '} < \varvec{\alpha }\). \(\square \)

Proof of Theorem 2

For \(d=2\)

$$\begin{aligned} {\varDelta }_{{\mathbf {y}}-{\mathbf {x}}}^{{\mathbf {1}}}F({\mathbf {x}})=F({\mathbf {y}})+F({\mathbf {x}})-F(x_1,y_2)-F(y_1,x_2). \end{aligned}$$
(47)

For \(d=3\)

$$\begin{aligned} {\varDelta }_{{\mathbf {y}}-{\mathbf {x}}}^{{\mathbf {1}}}F({\mathbf {x}})= & {} F({\mathbf {y}})-F(y_1,y_2,x_3)+F(x_1,x_2,y_3) -F({\mathbf {x}}) \nonumber \\&- F(x_1,y_2,y_3) + F(x_1,y_2,x_3)-F(y_1,x_2,y_3)+F(y_1,x_2,x_3).\nonumber \\ \end{aligned}$$
(48)

By induction, for any \(d \ge 2\), the quantity \({\varDelta }_{{\mathbf {y}}-{\mathbf {x}}}^{{\mathbf {1}}}F({\mathbf {x}}) - F({\mathbf {y}})\) is a linear combination of values of F at points of \({\mathbb {R}}^d\) having at least one component that coincides with the corresponding component of \({\mathbf {x}}\). Since \(\int _{\mathbb {R}}\psi (t) dt =0\), it follows that

$$\begin{aligned} \forall \; {\mathbf {j}} \in {\mathbb {N}}_{0}^d \qquad C_{{\mathbf {j}},{\mathbf {k}}} = 2^{|{\mathbf {j}}|} \int {\varDelta }_{{\mathbf {y}}-{\mathbf {x}}}^{{\mathbf {1}}}F({\mathbf {x}}) {\varPsi }_{{\mathbf {j}},{\mathbf {k}}}({\mathbf {y}}) \;d{\mathbf {y}}. \end{aligned}$$
(49)

Since \(F \in Lip^{\varvec{\alpha }}({\mathbf {x}})\),

$$\begin{aligned} |C_{{\mathbf {j}},{\mathbf {k}}}|\le & {} C 2^{|{\mathbf {j}}|} \int |{\varPsi }_{{\mathbf {j}},{\mathbf {k}}}({\mathbf {y}})| \left( \prod _{i=1}^d |y_i-x_i|^{\alpha _i}\right) \;d{\mathbf {y}}\\= & {} C \prod _{i=1}^d \left( 2^{j_i} \int |y_i-x_i|^{\alpha _i} |\psi _{j_i,k_i}(y_i)| \;dy_i \right) \\\le & {} C \prod _{i=1}^d \left( 2^{j_i} \int \left( |y_i- k_{i}2^{-j_{i}}|^{\alpha _i} + |k_{i}2^{-j_{i}}- x_i|^{\alpha _i} \right) |\psi _{j_i,k_i}(y_i)| \;dy_i \right) . \end{aligned}$$

The localization of the wavelet \(\psi _{1}\) yields (35).

The proof of the converse part of Theorem 2 is contained in the proof of the converse part of Theorem 1. \(\square \)

2.2 Characterization with Hyperbolic Wavelet Leaders

We will now see that the \({\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) regularity of the function F is closely related to the rate of decay of its hyperbolic wavelet leaders. For any \({\mathbf {j}},{\mathbf {k}}\), let \(\varvec{\lambda }({\mathbf {j}},{\mathbf {k}})\) denote the dyadic rectangular parallelepiped

$$\begin{aligned} \varvec{\lambda } = \varvec{\lambda }({\mathbf {j}},{\mathbf {k}})= \prod _{i=1}^d \left[ k_i2^{-j_i},(k_i+1)2^{-j_i}\right] . \end{aligned}$$

For \({\mathbf {j}}\in {\mathbb {N}}_0^d\), set

$$\begin{aligned} \varvec{{\varLambda }}_{\mathbf {j}}= \{ \varvec{\lambda }({\mathbf {j}},{\mathbf {k}}) \;:\; {\mathbf {k}}\in {\mathbb {Z}}^d\}. \end{aligned}$$
(50)

The hyperbolic wavelet leader \(d_{\varvec{\lambda }}\) associated with \(\varvec{\lambda }\) is defined as

$$\begin{aligned} d_{\varvec{\lambda }} = \sup _{\varvec{\lambda }' \subset \varvec{\lambda }} |C_{\varvec{\lambda }'}|, \end{aligned}$$

where \(\varvec{\lambda }' \in \varvec{{\varLambda }}_{{\mathbf {j}}'}\) with \({\mathbf {j}}' \ge {\mathbf {j}}\).

Set

$$\begin{aligned} 3 \varvec{\lambda } = \prod _{i=1}^d \left[ (k_i-1)2^{-j_i},(k_i+2)2^{-j_i}\right] . \end{aligned}$$
(51)

If \({\mathbf {x}}\in {\mathbb {R}}^d\), denote by \(\varvec{\lambda }_{{\mathbf {j}}}({\mathbf {x}})\) the unique dyadic rectangular parallelepiped in \(\varvec{{\varLambda }}_{\mathbf {j}}\) that contains \({\mathbf {x}}\).

Set

$$\begin{aligned} d_{{\mathbf {j}}}({\mathbf {x}}) = \sup _{\varvec{\lambda }' \subset 3\varvec{\lambda }_{{\mathbf {j}}}({\mathbf {x}})} d_{\varvec{\lambda }'}. \end{aligned}$$
(52)
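
A minimal sketch of (50)–(52), operating on a finite dictionary of coefficients such as the one produced by the sketch following (31); the supremum in the leader is necessarily truncated to the stored scales, and \(d_{{\mathbf {j}}}({\mathbf {x}})\) is approximated by the maximum of the leaders of the \(3^d\) dyadic neighbours of \(\varvec{\lambda }_{{\mathbf {j}}}({\mathbf {x}})\) at scale \({\mathbf {j}}\), i.e. the supremum in (52) is restricted to scales \({\mathbf {j}}' \ge {\mathbf {j}}\); both restrictions are simplifying assumptions.

```python
import numpy as np


def inside(jp, kp, j, k):
    """Is the dyadic rectangle lambda(j', k') contained in lambda(j, k)?  (d = 2)"""
    return all(jp[i] >= j[i] and (kp[i] >> (jp[i] - j[i])) == k[i] for i in range(2))


def leader(C, j, k):
    """Hyperbolic wavelet leader d_lambda: sup of |C_{lambda'}| over the stored
    coefficients whose rectangle lambda' is contained in lambda = lambda(j, k)."""
    vals = [abs(c) for (j1, j2, k1, k2), c in C.items()
            if inside((j1, j2), (k1, k2), j, k)]
    return max(vals, default=0.0)


def leader_at_x(C, j, x):
    """Approximation of d_j(x) in (52): maximum of the leaders of the 3^2 dyadic
    neighbours of lambda_j(x) at scale j (supremum restricted to scales j' >= j)."""
    k = tuple(int(np.floor(x[i] * 2 ** j[i])) for i in range(2))
    return max(leader(C, j, (k[0] + d1, k[1] + d2))
               for d1 in (-1, 0, 1) for d2 in (-1, 0, 1))
```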

The almost characterization of the \({\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) regularity of the function F by decay conditions on hyperbolic wavelet leaders is the following.

Theorem 3

Let \(\varvec{\alpha }=(\alpha _{1}, \ldots , \alpha _{d}) > {\mathbf {0}}\) and \({\mathbf {x}}=(x_{1}, \ldots ,x_{d})\in {\mathbb {R}}^{d}\).

  1. 1.

    If \(F \in {\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) then there exists \(C>0\) such that

    $$\begin{aligned} \forall \; {\mathbf {j}}=(j_{1},\ldots , j_{d}) \in {\mathbb {N}}_0^d \qquad d_{{\mathbf {j}}}({\mathbf {x}}) \le C 2^{-\displaystyle \sum _{i=1}^d \alpha _i j_{i}}. \end{aligned}$$
    (53)
  2. 2.

    Conversely if F is uniformly Hölder and if (53) holds then \(F \in {\mathfrak {L}}^{\varvec{\alpha '}}({\mathbf {x}})\) for all \(\varvec{\alpha '} < \varvec{\alpha }\).

Proof of Theorem 3

The proof follows, in all its steps and arguments, the corresponding proof of [12].

  1. 1.

    Let \(F \in {\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) and let \({\mathbf {j}}\ge {\mathbf {1}}\). If \(\varvec{\lambda }' \subset 3 \varvec{\lambda }_{{\mathbf {j}}}({\mathbf {x}})\) then \({\mathbf {j}}' \ge {\mathbf {j}}-{\mathbf {1}}\) and \(\left| k_i'2^{-j_i'}-x_{i}\right| \le C 2^{-j_i}\) for all \(i \in \{1,\ldots ,d\}\). Hence (34) yields (53).

  2. 2.

    Conversely, assume that F is uniformly Hölder and (53) holds. Let \(j_i' \ge 0\) be given. If \(\lambda '_i=[k_i'2^{-j_i'}, (k_i'+1)2^{-j_i'})\), denote by \(\lambda _i=[k_i 2^{-j_i}, (k_i+1)2^{-j_i})\) the dyadic interval defined by

    • if \(\lambda _i' \subset 3 \lambda _{j_i'}(x_i)\), then \(\lambda _i=\lambda _{j_i'}(x_i)\) and \(j_i=j_i'\)

    • else, if \(j_i = \sup \{ l : \lambda _i' \subset 3 \lambda _{l}(x_i)\}\), then \(\lambda _i = \lambda _{j_i}(x_i)\) and it follows that there exists \(C>0\) such that \(\displaystyle \frac{1}{C} 2^{-j_i} \le |k_i'2^{-j_i'}-x_i| \le C 2^{-j_i}\).

    If \({\mathbf {j}}' \ge {\mathbf {0}}\), relation (53) implies that

    $$\begin{aligned} |C_{{\mathbf {j}}',{\mathbf {k}}'}| \le C \prod _{i=1}^d \left( 2^{-j'_{i} \alpha _i}+|k'_i2^{-j'_i}-x_i|^{\alpha _i}\right) . \end{aligned}$$

This bound is equivalent to (34) up to a constant, so the converse part of Theorem 1 yields \(F \in {\mathfrak {L}}^{\varvec{\alpha '}}({\mathbf {x}})\) for all \(\varvec{\alpha '} < \varvec{\alpha }\). \(\square \)
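
In practice, Theorem 3 suggests estimating \(\varvec{\alpha }\) by a least-squares regression of \(-\log _2 d_{{\mathbf {j}}}({\mathbf {x}})\) against \((j_1, \ldots , j_d)\). The hedged snippet below does so for the coefficient dictionary C of the Haar test function built after Theorem 1, reusing the leader routines of the previous sketch; the small truncation depth limits the accuracy of the estimate.

```python
import numpy as np

# Regression of -log2 d_j(x) on (j1, j2), cf. (53), for the coefficient dictionary C of
# the test function f(y1, y2) = |y1 - 1/2|^0.3 * |y2 - 1/2|^0.6 built after Theorem 1.
x = (0.5, 0.5)
rows, rhs = [], []
for j1 in range(1, 5):
    for j2 in range(1, 5):
        d = leader_at_x(C, (j1, j2), x)
        if d > 0.0:
            rows.append([j1, j2, 1.0])   # intercept column absorbs the multiplicative constant in (53)
            rhs.append(-np.log2(d))
coef, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
print("estimated (alpha_1, alpha_2) ~", coef[:2])   # roughly (0.3, 0.6)
```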

3 Dimension Prints of Sets of Level Rectangular Pointwise Regularities

We will obtain a numerical procedure that yields information on the dimension print of sets of level rectangular pointwise regularities, expressed in terms of hyperbolic wavelet leaders.

Definition 7

Let F be as in (33). Let \(p<0\) and \(\varvec{\beta }=(\beta _1, \ldots , \beta _d) \in {\mathbb {R}}^d\). We say that F belongs to the anisotropic oscillation space \(O_p^{\varvec{\beta }}\) if \((C_\lambda )\) is bounded and

$$\begin{aligned} \forall \; \varepsilon>0 \;\; \exists \; C>0 \;\; \exists \; J \ge 0\;\; \forall \; |{\mathbf {j}}| \ge J \qquad 2^{- |{\mathbf {j}}| + p \displaystyle \sum _{i=1}^d \beta _i j_i } \sum _{\varvec{\lambda } \in \varvec{{\varLambda }}_{{\mathbf {j}}}} d_{\varvec{\lambda }}^{p} \le C 2^{\varepsilon |{\mathbf {j}}|}. \end{aligned}$$
(54)

The presence of \(\varepsilon \) is mandatory for functions which have very small \(d_{\varvec{\lambda }}\)'s. Note that the zero function does not belong to \(O_p^{\varvec{\beta }}\) (since then \(d_{\varvec{\lambda }}^{p} = \infty \)).
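
The quantity controlled in (54) can be probed numerically; the following minimal sketch reuses the routine leader of the sketch in Sect. 2.2 and, as a numerical convenience, skips the rectangles with vanishing (truncated) leader, which would otherwise make the sum infinite for \(p<0\).

```python
def structure_function(C, j, p):
    """The quantity 2^{-|j|} * sum over lambda in Lambda_j of d_lambda^p appearing in (54),
    restricted to the dyadic rectangles of [0,1]^2 and to the stored coefficients;
    rectangles with zero (truncated) leader are skipped."""
    j1, j2 = j
    total = 0.0
    for k1 in range(2 ** j1):
        for k2 in range(2 ** j2):
            d = leader(C, (j1, j2), (k1, k2))
            if d > 0.0:
                total += d ** p
    return 2.0 ** (-(j1 + j2)) * total
```

Membership of F in \(O_p^{\varvec{\beta }}\) then amounts to checking that \(\log _2\) of this quantity plus \(p(\beta _1 j_1 + \cdots + \beta _d j_d)\) grows at most like \(\varepsilon |{\mathbf {j}}|\) for large \(|{\mathbf {j}}|\).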

For \( \varvec{\alpha }> {\mathbf {0}}\), the \(\varvec{\alpha }\)-level set of rectangular pointwise regularity of F is defined as

$$\begin{aligned} {\mathcal {B}}_{\varvec{\alpha }} = \{ {\mathbf {x}}: F \in {\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}}) \}. \end{aligned}$$
(55)

Theorem 4

Let F be as in (33). Let \(p<0\) and \(\varvec{\beta }=(\beta _1, \ldots , \beta _d) \in {\mathbb {R}}^d\). Assume that F belongs to the anisotropic oscillation space \(O_p^{\varvec{\beta }}\). Let \(\varvec{\alpha }> {\mathbf {0}}\). Then \(print \; {\mathcal {B}}_{\varvec{\alpha }}\) is included in the set \(G_{\varvec{\alpha },\varvec{\beta },p}\) of \({\mathbf {t}}\ge {\mathbf {0}}\) such that either \(t_d \le \displaystyle \max _{n \in {\mathcal {D}}} \xi _{n}\), or \( t_{d-1} + t_d \le \displaystyle \max _{n_1\ne n_2} (\xi _{n_1} + \xi _{n_2}), \ldots \), or \( t_2 + \cdots + t_{d} \le \displaystyle \max _{n_1\ne \cdots \ne n_{d-1}} (\xi _{n_1} + \cdots + \xi _{n_{d-1}})\) or \( t_1 + \cdots + t_{d} \le \xi _{1} + \cdots + \xi _{d}\), where \(\xi _i= (\alpha _i-\beta _i)p+1\).

Proof

Let \(\varvec{\alpha }> {\mathbf {0}}\). If \(F \in {\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) then

$$\begin{aligned} \exists \; C >0 \;\; \forall \; {\mathbf {j}}\ge {\mathbf {0}}\quad d_{{\mathbf {j}}} ({\mathbf {x}}) \le C 2^{-\sum _{i=1}^d \alpha _ij_i}. \end{aligned}$$

Let \(p<0\). Then

$$\begin{aligned} \forall \; {\mathbf {j}}\ge {\mathbf {0}}\quad d^p_{{\mathbf {j}}} ({\mathbf {x}}) \ge C^p 2^{-p\sum _{i=1}^d \alpha _ij_i}. \end{aligned}$$
(56)

Let \({\varOmega }_{p, C,\varvec{\alpha }}\) be the set of points \({\mathbf {x}}\) satisfying (56). Then

$$\begin{aligned} {\mathcal {B}}_{\varvec{\alpha }} \subset \bigcup _{ C >0} {\varOmega }_{p, C,\varvec{\alpha }}, \end{aligned}$$

where the union can be taken over a countable set of values of C. Thanks to the countable stability of the dimension print

$$\begin{aligned} print \; {\mathcal {B}}_{\varvec{\alpha }} \subset \bigcup _{ C>0} print \; {\varOmega }_{p, C,\varvec{\alpha }}. \end{aligned}$$
(57)

Clearly

$$\begin{aligned} {\varOmega }_{p, C,\varvec{\alpha }}\subset \bigcap _{{\mathbf {j}}\ge {\mathbf {0}}} \bigcup _{\varvec{\lambda } \in {\varOmega }_{p, C, {\mathbf {j}},\varvec{\alpha }}} \varvec{\lambda }, \end{aligned}$$

with

$$\begin{aligned} {\varOmega }_{p, C, {\mathbf {j}},\varvec{\alpha }} = \left\{ \varvec{\lambda } \in \varvec{{\varLambda }}_{{\mathbf {j}}}\;:\; \;\; d^p_{\varvec{\lambda }} \ge C^p 2^{-p\sum _{i=1}^d \alpha _ij_i} \right\} ; \end{aligned}$$

indeed, if \({\mathbf {x}}\) satisfies (56) then \(d_{\varvec{\lambda }_{{\mathbf {j}}}({\mathbf {x}})} \le d_{{\mathbf {j}}}({\mathbf {x}})\), so that \(\varvec{\lambda }_{{\mathbf {j}}}({\mathbf {x}}) \in {\varOmega }_{p, C, {\mathbf {j}},\varvec{\alpha }}\) (recall that \(p<0\)).

For \({\mathbf {j}}\ge {\mathbf {0}}\), let \( N_{p, C,{\mathbf {j}}, \varvec{\alpha }}\) be the cardinality of \({\varOmega }_{p, C, {\mathbf {j}},\varvec{\alpha }}\).

Since \(F \in O_p^{\varvec{\beta }}\), (54) implies that

$$\begin{aligned} \forall \; \varepsilon>0 \;\; \exists \; C>0 \;\; \exists \; J \ge 0\;\; \forall \; |{\mathbf {j}}| \ge J \qquad N_{p, C,{\mathbf {j}}, \varvec{\alpha }} \le C 2^{\sum _{i=1}^d (\xi _i+\varepsilon )j_i}. \end{aligned}$$
(58)

Let \(\varepsilon >0\). Let \(0<\delta <1\) with \( \delta < 2^{- J+1}\). Let \( j_0 \ge J\) be the unique integer such that \(2^{-j_0} \le \delta < 2^{-j_0+1}\). Choose a \(\delta \)-cover \({\mathcal {C}}_\delta \) of \({\varOmega }_{p, C, \varvec{\alpha }}\) by rectangular parallelepipeds \(\varvec{\lambda } \in {\varOmega }_{p, C, {\mathbf {j}},\varvec{\alpha }}\) with \({\mathbf {j}}\ge j_0 {\mathbf {1}}\).

Let \({\mathbf {t}}=(t_1, \ldots ,t_d) \ge {\mathbf {0}}\). Clearly

$$\begin{aligned} {\mathcal {H}}_\delta ^{\mathbf {t}}({\varOmega }_{p, C,\varvec{\alpha }}) \le \sum _{\varvec{\lambda } \in {\mathcal {C}}_\delta } \prod _{i=1}^d (l_i(\varvec{\lambda }))^{t_i}. \end{aligned}$$
(59)

Clearly the latter sum splits into sums over all arrangements of the form \(j_{i_d} \ge j_{i_{d-1}} \ge \cdots \ge j_{i_2} \ge j_{i_1}\). For such a fixed arrangement, \(l_n(\varvec{\lambda }) = 2^{-j_{i_n}}\) for all \(n \in {\mathcal {D}}\), and by (58)

$$\begin{aligned}&\sum _{\varvec{\lambda } \in {\mathcal {C}}_\delta \;:\; j_{i_d} \ge \cdots \ge j_{i_1}} \prod _{i=1}^d (l_i(\varvec{\lambda }))^{t_i}\\&\qquad \le C \sum _{j_{i_1} \ge j_0} 2^{j_{i_1} (-t_1 + \xi _{i_1}+\varepsilon )} \left( \sum _{j_{i_{2}} \ge j_{i_1}} 2^{j_{i_{2}} (-t_{2} + \xi _{i_{2}}+\varepsilon )} \cdots \left( \sum _{j_{i_d} \ge j_{i_{d-1}}} 2^{j_{i_d} (-t_d + \xi _{i_d}+\varepsilon )} \right) \right) \\&\qquad \le C 2^{j_0 \displaystyle \sum _{n \in {\mathcal {D}}} (-t_n+\xi _{i_n}+\varepsilon )} \\&\qquad \le C \delta ^{ \left( \displaystyle \sum _{n \in {\mathcal {D}}} (t_n-\xi _{i_n}-\varepsilon )\right) }, \end{aligned}$$

whenever \(t_d> \xi _{i_d}, t_{d-1} + t_d> \xi _{i_{d-1}} + \xi _{i_d}, \ldots , t_2 + \cdots + t_{d} > \xi _{i_2} + \cdots + \xi _{i_{d}}\) and \( t_1 + \cdots +t_d > \xi _{1} + \cdots + \xi _{d}\).

Therefore by (59)

$$\begin{aligned} {\mathcal {H}}_\delta ^{\mathbf {t}}({\varOmega }_{p, C,\varvec{\alpha }}) \le C \delta ^{ \left( \displaystyle \sum _{n \in {\mathcal {D}}} (t_n-\xi _n-\varepsilon )\right) } \quad \text{ and } \text{ so } \;\; {\mathcal {H}}^{\mathbf {t}}({\varOmega }_{p, C,\varvec{\alpha }}) = 0, \end{aligned}$$

whenever \(t_d> \displaystyle \max _{n \in {\mathcal {D}}} \xi _{n}, t_{d-1} + t_d> \displaystyle \max _{n_1\ne n_2} (\xi _{n_1} + \xi _{n_2}), \cdots , t_2 + \cdots + t_{d} > \displaystyle \max _{n_1\ne \cdots \ne n_{d-1}} (\xi _{n_1} + \cdots + \xi _{n_{d-1}})\) and \( t_1 + \cdots + t_{d} > \xi _{1} + \cdots + \xi _{d}\).

The information we have obtained about \(print \; {\varOmega }_{p, C,\varvec{\alpha }}\) does not depend on C. Hence (57) completes the proof. \(\square \)

For \(p<0\), denote by \(HD_p\) the p-hyperbolic domain of F, that is

$$\begin{aligned} HD_p = \left\{ \varvec{\beta }\;:\quad F \in O_p^{\varvec{\beta }}\right\} . \end{aligned}$$
(60)

Corollary 1

$$\begin{aligned} print \; {\mathcal {B}}_{\varvec{\alpha }} \subset \bigcap _{p<0, \varvec{\beta }\in HD_p} G_{\varvec{\alpha },\varvec{\beta },p}. \end{aligned}$$
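
For \(d=2\), the bounding set of Theorem 4 is easily evaluated pointwise; the following test, with arbitrary naming, can be intersected over any numerically accessible family of pairs \((p,\varvec{\beta })\) with \(\varvec{\beta }\in HD_p\), in the spirit of Corollary 1.

```python
def in_G(t, alpha, beta, p):
    """Membership of t = (t1, t2) >= 0 in the bounding set G_{alpha,beta,p} of Theorem 4
    for d = 2: with xi_i = (alpha_i - beta_i) * p + 1, the set consists of the t with
    t2 <= max(xi_1, xi_2) or t1 + t2 <= xi_1 + xi_2."""
    xi = [(a - b) * p + 1.0 for a, b in zip(alpha, beta)]
    return t[1] <= max(xi) or t[0] + t[1] <= sum(xi)
```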

4 Examples of Anisotropic Selfsimilar Cascade Wavelet Series

We will apply Theorem 4 and Corollary 1 to some examples of anisotropic selfsimilar cascade wavelet series, written as the superposition of similar anisotropic structures at different scales, reminiscent of some possible models of turbulence or cascades. Here the anisotropy corresponds to a Sierpinski carpet K. In Ref. [8], it was proved that the classical multifractal formalism may fail: contrary to the Besov smoothness, the Hölder spectrum depends on the geometric disposition of the selected boxes \((\mathfrak {R}_\omega )_{\omega \in A}\) in the construction of K (see below). To avoid this drawback, we will prove that both the p-hyperbolic domain \(HD_p\) of F and the dimension prints of the sets of level rectangular pointwise regularities depend on the geometric disposition of the selected boxes \((\mathfrak {R}_\omega )_{\omega \in A}\).

4.1 Construction of Anisotropic Selfsimilar Cascade Wavelet Series

Let \(S_1\) and \(S_2\) be two positive integers such that \( S_1<S_2\). Put \(s_1=2^{S_1}\) and \(s_2=2^{S_2}\). We divide the unit square \({\mathfrak {R}}=[0,1]^2\) into a uniform grid of rectangles of sides \(1/s_1\) and \(1/s_2\). Let A be a subset of \( \{0,1,\ldots ,s_1-1\} \times \{0,1,\ldots ,s_2-1\}\) that contains at least two elements. For \(\omega =(a,b) \in A\), the contraction \(\displaystyle D_{\omega }(x_{1},x_{2})= \left( \frac{x_1+a}{s_1}, \frac{x_2+b}{s_2}\right) \) maps the unit square \({\mathfrak {R}}\) into the rectangle

$$\begin{aligned} {\mathfrak {R}}_{\omega }= \left[ \frac{a}{s_1},\frac{a+1}{s_1}\right] \times \left[ \frac{b}{s_2},\frac{b+1}{s_2}\right] . \end{aligned}$$
(61)

A Sierpinski carpet K (see [8, 36, 48] and references therein) is the unique non-empty compact set (see [24]) that satisfies

$$\begin{aligned} K=\bigcup _{\omega \in A}D_{\omega }(K). \end{aligned}$$
(62)

Let \(L \in {\mathbb {N}}\) and \(\varvec{\omega }= (\omega _{1},\ldots ,\omega _{L}) \in A^{L}\), with \(\omega _{r}=(a_{r},b_{r})\). Put

$$\begin{aligned} D_{\varvec{\omega }}= D_{\omega _{1}}\circ \cdots \circ D_{\omega _{L}}\;\; \text{ and }\;\; {\mathfrak {R}}_{\varvec{\omega }}=D_{\varvec{\omega }} (\mathfrak {R}). \end{aligned}$$
(63)

Then

$$\begin{aligned} {\mathfrak {R}}_{\varvec{\omega }}= \overline{\varvec{\lambda }_{\varvec{\omega }}} = \overline{\lambda _{1}(\varvec{\omega })}\times \overline{\lambda _{2}(\varvec{\omega })} \end{aligned}$$
(64)

where

$$\begin{aligned} \lambda _{1}(\varvec{\omega })=\left[ \frac{k_{1}}{s_1^{L}},\frac{k_{1}+1}{s_1^{L}}\right) \; \; \text{ and }\;\; \lambda _{2}(\varvec{\omega })=\left[ \frac{k_{2}}{s_2^{L}},\frac{k_{2}+1}{s_2^{L}}\right) , \end{aligned}$$
(65)

with

$$\begin{aligned} \frac{k_{1}}{s_1^{L}}=\sum _{r=1}^{L}\frac{a_{r}}{s_1^{r}}\; \;\text{ and }\;\;\frac{k_{2}}{s_2^{L}}=\sum _{r=1}^{L}\frac{b_{r}}{s_2^{r}} . \end{aligned}$$
(66)

The Sierpinski carpet is given by

$$\begin{aligned} K = \displaystyle \bigcap _{L \in {\mathbb {N}}}\left( \bigcup _{\varvec{\omega }\in A^{L}} {\mathfrak {R}}_{\varvec{\omega }} \right) . \end{aligned}$$
(67)
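As an illustration, the following short Python sketch (ours; the parameters \(S_1\), \(S_2\) and the set A below are arbitrary choices) enumerates the rectangles \({\mathfrak {R}}_{\varvec{\omega }}\), \(\varvec{\omega }\in A^{L}\), whose union is the generation-L approximation of K in (67).

```python
# Illustrative sketch (not from the paper): enumerate the rectangles R_omega,
# omega in A^L, whose union is the L-th approximation of the carpet K in (67).
from itertools import product

S1, S2 = 1, 2                 # arbitrary choice with S1 < S2, so s1 = 2 and s2 = 4
s1, s2 = 2 ** S1, 2 ** S2
A = [(0, 0), (1, 2)]          # subset of {0,...,s1-1} x {0,...,s2-1} with at least two boxes

def rectangle(word):
    """R_word = [k1/s1^L, (k1+1)/s1^L] x [k2/s2^L, (k2+1)/s2^L], cf. (64)-(66)."""
    L = len(word)
    k1 = sum(a * s1 ** (L - r) for r, (a, b) in enumerate(word, start=1))
    k2 = sum(b * s2 ** (L - r) for r, (a, b) in enumerate(word, start=1))
    return (k1 / s1 ** L, (k1 + 1) / s1 ** L), (k2 / s2 ** L, (k2 + 1) / s2 ** L)

def approximation(L):
    """The #A^L rectangles covering K at generation L."""
    return [rectangle(word) for word in product(A, repeat=L)]

if __name__ == "__main__":
    for rect in approximation(2):
        print(rect)
```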

Let \((\gamma _\omega )_{\omega \in A}\) be scalars with \(0<|\gamma _\omega | <1\). For \({\mathbf {x}}=(x_1,x_2) \in {\mathbb {R}}^2\), put \({\varPsi }({\mathbf {x}})=\psi (x_1) \psi (x_2)\) where \(\psi =\psi _1\) is r-smooth. The Sierpinski cascade function adapted to the subdivision A satisfies the selfsimilar equation

$$\begin{aligned} \forall \; {\mathbf {x}}\in {\mathfrak {R}} \qquad F({\mathbf {x}}) = {\varPsi }({\mathbf {x}}) \,\,+\,\,\sum _{\omega \in A} \gamma _{\omega } F(D_{\omega }^{-1}({\mathbf {x}})). \end{aligned}$$
(68)

Define

$$\begin{aligned} |\gamma |_{\max }= & {} \displaystyle \max _{\omega \in A} |\gamma _{\omega }| \;,\; |\gamma |_{\min } = \displaystyle \min _{\omega \in A} |\gamma _{\omega }|\;,\; H_{\min }= \displaystyle -\frac{\log |\gamma |_{\max }}{ \log s_2 } \;,\; \; \nonumber \\ \text{ and } \; H_{\max }= & {} \displaystyle -\frac{\log |\gamma |_{\min }}{ \log s_2 }. \end{aligned}$$
(69)
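For orientation, here is a concrete illustrative choice (ours, not taken from the references): \(S_1=1\), \(S_2=2\) (so \(s_1=2\), \(s_2=4\)), \(A=\{(0,0),(1,2)\}\), \(\gamma _{(0,0)}=1/2\) and \(\gamma _{(1,2)}=1/4\). Then

$$\begin{aligned} |\gamma |_{\max }=\frac{1}{2}, \quad |\gamma |_{\min }=\frac{1}{4}, \quad H_{\min }=\frac{\log 2}{\log 4}=\frac{1}{2} \quad \text{ and } \quad H_{\max }=\frac{\log 4}{\log 4}=1. \end{aligned}$$

This choice satisfies the configuration of Case 1 below (each row and each column of the grid contains at most one box \({\mathfrak {R}}_\omega \)).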

For \(L \in {\mathbb {N}}\) and \(\varvec{\omega }= (\omega _{1},\ldots ,\omega _{L}) \in A^{L}\), put

$$\begin{aligned} \gamma _{\varvec{\omega }} = \gamma _{\omega _1}\cdots \gamma _{\omega _L} . \end{aligned}$$
(70)

The following result can be deduced from Proposition 1 in [8].

Proposition 1

The series

$$\begin{aligned} F({\mathbf {x}}) = {\varPsi }({\mathbf {x}}) + \sum _{L=1}^{\infty } \sum _{\varvec{\omega } \in A^{L}} \gamma _{\varvec{\omega }}\,\, {\varPsi } \left( D_{\varvec{\omega }}^{-1}({\mathbf {x}})\right) . \end{aligned}$$
(71)

is the unique solution in \(L^{1}({\mathfrak {R}})\) of Eq. (68).

If furthermore \( s_2^{-r}< |\gamma |_{\max } \) then F is uniformly Hölder.
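The truncated series (71) is easy to evaluate numerically. In the following minimal sketch (ours), a compactly supported \(C^{\infty }\) bump merely stands in for the r-smooth wavelet \(\psi \); the parameters and the truncation level are illustrative choices.

```python
# Minimal numerical sketch of the truncated series (71).  The bump `psi` is only a
# stand-in for the r-smooth wavelet psi_1; all parameters are illustrative choices.
import math
from itertools import product

S1, S2 = 1, 2
s1, s2 = 2 ** S1, 2 ** S2
A = [(0, 0), (1, 2)]
gamma = {(0, 0): 0.5, (1, 2): 0.25}

def psi(t):
    """Smooth bump supported in (0, 1), standing in for the 1-D wavelet."""
    return math.exp(-1.0 / (t * (1.0 - t))) if 0.0 < t < 1.0 else 0.0

def Psi(x1, x2):
    return psi(x1) * psi(x2)

def F(x1, x2, Lmax=8):
    """Partial sum Psi(x) + sum_{L<=Lmax} sum_{omega in A^L} gamma_omega Psi(D_omega^{-1}(x))."""
    total = Psi(x1, x2)
    for L in range(1, Lmax + 1):
        for word in product(A, repeat=L):
            g = 1.0
            y1, y2 = x1, x2
            for (a, b) in word:        # D_word^{-1} = D_{omega_L}^{-1} o ... o D_{omega_1}^{-1}
                g *= gamma[(a, b)]
                y1, y2 = s1 * y1 - a, s2 * y2 - b
            total += g * Psi(y1, y2)
    return total

if __name__ == "__main__":
    print(F(0.3, 0.7))
```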

Clearly, if \(\varvec{\omega }=(\omega _{1},\ldots ,\omega _{L}) \in A^{L}\) with \(\omega _l=(a_l,b_l)\), then

$$\begin{aligned} {\varPsi } \left( D_{\varvec{\omega }}^{-1}({\mathbf {x}})\right)= & {} \psi (s_1^{L}x_1 - s_1^{L-1} a_1 - \cdots - s_1 a_{L-1} - a_L) \; \psi (s_2^{L}x_2 \\&- s_2^{L-1} b_1 - \cdots - s_2 b_{L-1} - b_L). \end{aligned}$$

Consider the separated open set condition “SOSC”

$$\begin{aligned} \forall \; (\omega ,\omega ') \in A^2 \qquad \qquad \omega \ne \omega ' \Rightarrow {\mathfrak {R}}_{\omega } \cap {\mathfrak {R}}_{\omega '}= \emptyset . \end{aligned}$$
(72)

If \( {\mathbf {x}}\notin K \) then there exists a neighborhood \(\vartheta ({\mathbf {x}})\) of \({\mathbf {x}}\) and \(L \in {\mathbb {N}}\) such that

$$\begin{aligned} \forall \; {\mathbf {y}}\in \vartheta ({\mathbf {x}}) \qquad F({\mathbf {y}}) = {\varPsi }({\mathbf {y}}) + \sum _{n=1}^{L} \sum _{\varvec{\omega } \in A^{n}} \gamma _{\varvec{\omega }}\,\, {\varPsi } \left( D_{\varvec{\omega }}^{-1}({\mathbf {y}})\right) . \end{aligned}$$
(73)

It follows that \(F \in {\mathfrak {L}}^{\varvec{\alpha }}({\mathbf {x}})\) for all \( \varvec{\alpha } < r {\mathbf {1}}\).

On the other hand, from the “SOSC” (72), any \({\mathbf {x}}\in K\) has a unique expansion

$$\begin{aligned} {\mathbf {x}}= \displaystyle \left( \sum _{l=1}^{\infty } \frac{a_l}{s_1^l}, \sum _{l=1}^{\infty } \frac{b_l}{s_2^l}\right) \;\; \text{ with } \; (a_l,b_l)=(a_l({\mathbf {x}}),b_l({\mathbf {x}}))=\omega _l=\omega _l({\mathbf {x}}) \in A. \end{aligned}$$
(74)

4.2 Estimation of Hyperbolic Wavelet Leaders

For \(j \in {\mathbb {N}}\) and \(k\in \left\{ 0,\ldots ,2^{j}-1\right\} \), put \(\lambda _{j,k}=\displaystyle \left[ \frac{k}{2^{j}},\frac{k+1}{2^{j}}\right) \). Denote by \({\varLambda }_{j}\) the set of dyadic intervals \(\lambda _{j,k}\) with \(k\in \left\{ 0,\ldots ,2^{j}-1\right\} \). For \(\lambda =\lambda _{j,k}\in {\varLambda }_{j}\) there exists a unique \((\epsilon _{i}(\lambda ))_{1\le i\le j}\) in \(\left\{ 0,1\right\} ^j\) such that

$$\begin{aligned} \frac{k}{2^{j}}=\sum _{i=1}^{j}\frac{\epsilon _{i}(\lambda )}{2^{i}}. \end{aligned}$$
(75)

Let \(S \in {\mathbb {N}}\) and \(s =2^S\). For \(j \in {\mathbb {N}}\) put

$$\begin{aligned} m = \left[ \frac{j}{ S}\right] , \end{aligned}$$
(76)

where \([\cdot ]\) denotes the integer part. Regrouping the binary digits in (75) into blocks of length S, there exists a unique \((\alpha _{r}(\lambda ,S))_{1\le r\le m}\) in \(\left\{ 0,\ldots ,s-1\right\} ^{m}\) such that

$$\begin{aligned} \frac{k}{2^{j}}=\left( \sum _{r=1}^{m}\frac{\alpha _{r}(\lambda ,S)}{s^{r}}\right) +R(\lambda ,S), \end{aligned}$$
(77)

where

$$\begin{aligned} R(\lambda ,S)=\sum _{i=mS+1}^{j}\frac{\epsilon _{i}(\lambda )}{2^{i}}. \end{aligned}$$
(78)

For \(a\in \left\{ 0,\ldots ,s-1\right\} \) and \(r\ge 1\), denote by \((\epsilon _{i}(a,r,S))_{ i\in \left\{ (r-1)S+1,\ldots ,rS\right\} }\) the unique element of \(\left\{ 0,1\right\} ^S\) satisfying

$$\begin{aligned} \frac{a}{s^{r}}=\sum _{i=(r-1)S+1}^{rS}\frac{\epsilon _{i}(a,r,S)}{2^{i}}, \end{aligned}$$
(79)

and for \((r-1)S+1\le j\le rS\), put

$$\begin{aligned} R(a,j,r,S)=\sum _{i=(r-1)S+1}^{j}\frac{\epsilon _{i}(a,r,S)}{2^{i}}. \end{aligned}$$
(80)
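The decompositions (75)–(80) amount to regrouping binary digits into blocks of length S. A small Python sketch (ours, for illustration) of (75)–(78):

```python
# Sketch of the digit-block decomposition (75)-(78): for lambda = lambda_{j,k},
# regroup the binary digits epsilon_i(lambda) into blocks of length S to obtain the
# base-s digits alpha_r(lambda, S) and the remainder R(lambda, S).
def binary_digits(k, j):
    """epsilon_1,...,epsilon_j with k/2^j = sum_i epsilon_i / 2^i, cf. (75)."""
    return [(k >> (j - i)) & 1 for i in range(1, j + 1)]

def block_decomposition(k, j, S):
    """Return (alpha_1,...,alpha_m) and R(lambda, S), with m = [j/S], cf. (76)-(78)."""
    eps = binary_digits(k, j)
    m = j // S
    alphas = [sum(eps[(r - 1) * S + i] * 2 ** (S - 1 - i) for i in range(S))
              for r in range(1, m + 1)]
    remainder = sum(eps[i - 1] / 2 ** i for i in range(m * S + 1, j + 1))
    return alphas, remainder

if __name__ == "__main__":
    # Example: j = 5, k = 13, S = 2, so m = 2 and 13/32 = 1/4 + 2/16 + 1/32.
    print(block_decomposition(13, 5, 2))   # ([1, 2], 0.03125)
```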

Let \(\lambda \in {\varLambda }_{j}\) and \(\lambda '\in {\varLambda }_{j'}\) with \(j\le j'\). Clearly

$$\begin{aligned} \lambda '\subset \lambda \Longleftrightarrow \forall i\in \left\{ 1,\ldots , j\right\} \; \epsilon _{i}(\lambda ')=\epsilon _{i}(\lambda ). \end{aligned}$$

Then, from above, the following result holds.

Lemma 1

Let \(\lambda \in {\varLambda }_{j}\), \(\lambda ' \in {\varLambda }_{j'}\) with \(j\le j'\), and \(m=\displaystyle [\frac{j}{ S} ]\). Then \(\lambda ' \subset \lambda \) if and only if

$$\begin{aligned} \forall r\in \left\{ 1,\ldots ,m\right\} \; \alpha _{r}(\lambda ',S)= \alpha _{r}(\lambda ,S) \; \text{ and }\;\forall i\in \left\{ mS+1,\ldots ,j\right\} \; \epsilon _{i}(\lambda ')=\epsilon _{i}(\lambda ). \end{aligned}$$

Let \({\mathbf {j}}=(j_1,j_2) \in {\mathbb {N}}^2\). Let \(\varvec{\lambda }=\lambda _{1}\times \lambda _{2}\in \varvec{{\varLambda }}_{{\mathbf {j}}}\). Let \(\varvec{\omega }= (\omega _{1},\dots ,\omega _{L}) \in A^{L}\), \(\omega _{l}=(a_{l},b_{l})\). For \(i \in \{1,2\}\), let \(m_{i}=\displaystyle [\frac{j_i}{ S_i} ]\). Then Lemma 1 yields the following result.

Lemma 2

$$\begin{aligned}&{\mathfrak {R}}_{\varvec{\omega }}\subset \varvec{\lambda } \nonumber \\&\Longleftrightarrow \nonumber \\&{\left\{ \begin{array}{ll} L\ge \max \left\{ \frac{j_{1}}{S_1},\frac{j_{2}}{S_2}\right\} \\ \forall n\le m_{1} \; \alpha _{n}(\lambda _{1},S_1)=a_{n}\;,\; \forall i\in \left\{ m_{1}S_1+1,\ldots ,j_{1}\right\} \; \epsilon _{i}(\lambda _{1})=\epsilon _{i}(a_{m_{1}+1},m_{1}+1,S_1)\\ \forall n \le m_{2} \; \alpha _{n}(\lambda _{2},S_2)=b_{n}\;,\; \forall i\in \left\{ m_{2}S_2+1,\ldots ,j_{2}\right\} \; \epsilon _{i}(\lambda _{2})=\epsilon _{i}(b_{m_{2}+1},m_{2}+1,S_2). \end{array}\right. }\nonumber \\ \end{aligned}$$
(81)

Put

$$\begin{aligned} A^{*}=\bigcup _{j\ge 1}A^{j}. \end{aligned}$$
(82)

Definition 8

We say that \(\varvec{\lambda }\) is \(A^{*}-\)admissible if there exists \(j\ge 1\) and \(\varvec{\omega } \in A^{j}\) such that \({\mathfrak {R}}_{\varvec{\omega }}\subset \varvec{\lambda }\).

Lemma 2 yields the following result.

Proposition 2

Let \(\varvec{\lambda } \in {\varLambda }_{j_{1}}\times {\varLambda }_{j_{2}}\). Put \(m=\inf \left\{ m_{1},m_{2}\right\} \). Suppose that \(\varvec{\lambda }\) is \(A^{*}-\)admissible. Put

$$\begin{aligned} a_{r}=\alpha _{r}(\lambda _{1},S_1) \; \forall r\le m_{1}\;,\; b_{r}=\alpha _{r}(\lambda _{2},S_2)\; \forall r\le m_{2}\;, \;\omega _{r}=(a_{r},b_{r}) \; \forall r\le m. \end{aligned}$$

Let j be the smallest integer satisfying \(j\ge \max \left\{ \frac{j_{1}}{S_1},\frac{j_{2}}{S_2}\right\} \). Then

$$\begin{aligned} d_{\varvec{\lambda }} \approx {\left\{ \begin{array}{ll} \left| \gamma _{\omega _{1}}\ldots \gamma _{\omega _{m}}\right| \left| \gamma _{(a_{m_{2}+1},\cdot )}\right| _{\max }\cdots \left| \gamma _{(a_{m_{1}},\cdot )}\right| _{\max } &{}\text{ if } \; \frac{j_{1}}{S_1}\ge \frac{j_{2}}{S_2}\\ \\ \left| \gamma _{\omega _{1}}\cdots \gamma _{\omega _{m}}\right| \left| \gamma _{(\cdot ,b_{m_{1}+1})}\right| _{\max }\cdots \left| \gamma _{(\cdot ,b_{m_{2}})}\right| _{\max } &{}\text{ if } \; \frac{j_{1}}{S_1}\le \frac{j_{2}}{S_2}, \end{array}\right. } \end{aligned}$$

where \(M \approx N\) means that there exists a constant \(C \ge 1\) such that \(N/C \le M \le CN\).

4.3 Two Geometric Cases

Let \(A_{1}=\left\{ a \;:\; \exists b \;;\; (a,b)\in A\right\} \) and \(B_{1}=\left\{ b \;:\; \exists a \;;\; (a,b)\in A\right\} \).

If \(\frac{j_{1}}{S_1}\ge \frac{j_{2}}{S_2}\) then using Proposition 2

$$\begin{aligned} \sum _{\varvec{\lambda }\in {\varLambda }_{j_{1}}\times {\varLambda }_{j_{2}}} d_{\varvec{\lambda }}^{p}\approx \left( \sum _{\omega \in A}\left| \gamma _{\omega }\right| ^{p}\right) ^{m_{2}}\left( \sum _{a\in A_{1}}\left| \gamma _{(a,\cdot )}\right| _{\max }^{p}\right) ^{m_{1}-m_{2}}. \end{aligned}$$
(83)

If \(\frac{j_{1}}{S_1}\le \frac{j_{2}}{S_2}\) then using Proposition 2

$$\begin{aligned} \sum _{\varvec{\lambda }\in {\varLambda }_{j_{1}}\times {\varLambda }_{j_{2}}} d_{\varvec{\lambda }}^{p}\approx \left( \sum _{\omega \in A}\left| \gamma _{\omega }\right| ^{p}\right) ^{m_{1}}\left( \sum _{b\in B_{1}}\left| \gamma _{(\cdot ,b)}\right| _{\max }^{p}\right) ^{m_{2}-m_{1}}. \end{aligned}$$
(84)

We will prove that the p-hyperbolic domain \(HD_p\) of F and the dimension prints of sets of level rectangular pointwise regularities depend on the geometric disposition of the boxes \(\mathfrak {R}_\omega \), \(\omega \in A\). For that, it suffices to consider two geometric choices: first, each row and each column of the grid contains at most one box \(\mathfrak {R}_\omega \) with \(\omega \in A\); second, only one column contains all boxes \(\mathfrak {R}_\omega \) with \(\omega \in A\).

4.4 Case 1: Each Column and Each Row of the Grid Contains at Most One Box \(\mathfrak {R}_\omega \) with \(\omega \in A\).

4.4.1 p-Hyperbolic Domain

For \(q \in {\mathbb {R}}\), define \(\tau (q)\) by    \( \displaystyle \sum _{\omega \in A} |\gamma _{\omega }|^{q} =s_1^{-\tau (q)}\). Let \(\sigma = S_1/S_2\).
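The scaling function \(\tau \) is easily evaluated numerically; a minimal sketch (ours, with the same illustrative parameters as before):

```python
# Sketch of the scaling function tau defined by sum_{omega in A} |gamma_omega|^q = s1^{-tau(q)},
# i.e. tau(q) = -log(sum_{omega} |gamma_omega|^q) / log(s1).  Parameters are illustrative.
import math

S1, S2 = 1, 2
s1 = 2 ** S1
sigma = S1 / S2
gamma = {(0, 0): 0.5, (1, 2): 0.25}      # one box per row and per column (Case 1)

def tau(q):
    return -math.log(sum(abs(g) ** q for g in gamma.values())) / math.log(s1)

if __name__ == "__main__":
    print(sigma, tau(0.0), tau(-1.0))    # tau(0) = -log_2(#A) = -1 here; tau(p) < 0 for p < 0
```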

Proposition 3

Assume that each column and each row of the grid contains at most one box \(\mathfrak {R}_\omega \) with \(\omega \in A\). Let \(p<0\). Then \(\varvec{\beta }=(\beta _1,\beta _2)\) belongs to the p-hyperbolic domain \(HD_p\) of F iff

$$\begin{aligned} \beta _1p-1 - \tau (p) \le 0, \end{aligned}$$
(85)

and

$$\begin{aligned} \beta _2p-1 - \sigma \tau (p) \le 0. \end{aligned}$$
(86)

Proof

Let \({\mathbf {j}}= (j_1,j_2) \ge {\mathbf {0}}\). For \(i \in \{1,2\}\), let \(m_{i}=\displaystyle [\frac{j_i}{ S_i} ]\).

  • Case \(\frac{j_{1}}{S_1}\le \frac{j_{2}}{S_2}\). Since each row of the grid contains at most one box, \(\displaystyle \sum _{b\in B_{1}}\left| \gamma _{(\cdot ,b)}\right| _{\max }^{p}=\sum _{\omega \in A}\left| \gamma _{\omega }\right| ^{p}=s_1^{-\tau (p)}\), so (84) gives

    $$\begin{aligned} \sum _{\varvec{\lambda }\in {\varLambda }_{j_{1}}\times {\varLambda }_{j_{2}}} d_{\varvec{\lambda }}^{p} \approx s_1^{-\tau (p) m_2}, \end{aligned}$$

    and

    $$\begin{aligned} 2^{- |{\mathbf {j}}| + p \displaystyle \sum _{i=1}^2 \beta _i j_i } \sum _{\varvec{\lambda }\in {\varLambda }_{j_{1}}\times {\varLambda }_{j_{2}}} d_{\varvec{\lambda }}^{p}\approx 2^{-S_1m_1-S_2m_2+S_1m_1\beta _1p+S_2m_2 \beta _2p-S_1\tau (p) m_2}. \end{aligned}$$

    We have

    $$\begin{aligned} \forall \; m_1 \le m_2 \quad 2^{-S_1m_1-S_2m_2+S_1m_1\beta _1p+S_2m_2 \beta _2p-S_1\tau (p) m_2} \le C, \end{aligned}$$

    if and only if

    $$\begin{aligned} \forall \; m_1 \le m_2 \quad -S_1m_1-S_2m_2+S_1m_1\beta _1p+S_2m_2 \beta _2p-S_1\tau (p) m_2 \le \log _2 C. \end{aligned}$$

    Fix \(m_1\). Boundedness in \(m_2\) forces the coefficient of \(m_2\) to be nonpositive, that is,

    $$\begin{aligned} -S_2+S_2 \beta _2p-S_1\tau (p) \le 0. \end{aligned}$$

    Hence (86) holds.

    In that case

    $$\begin{aligned}&\sup _{m_2 \ge m_1} -S_1m_1-S_2m_2+S_1m_1\beta _1p+S_2m_2 \beta _2p-S_1\tau (p) m_2 \\&\qquad = -S_1m_1-S_2m_1+S_1m_1\beta _1p+S_2m_1 \beta _2p-S_1\tau (p) m_1. \end{aligned}$$

    So the coefficient of \(m_1\) must be nonpositive, i.e.,

    $$\begin{aligned} -S_1-S_2+S_1\beta _1p+S_2\beta _2p-S_1\tau (p) \le 0, \end{aligned}$$

    which holds for every \(p<0\) under (85) and (86), since \(\tau (p)<0\) for \(p<0\).
  • Case \(\frac{j_{1}}{S_1}\ge \frac{j_{2}}{S_2}\). Since each column of the grid contains at most one box, \(\displaystyle \sum _{a\in A_{1}}\left| \gamma _{(a,\cdot )}\right| _{\max }^{p}=\sum _{\omega \in A}\left| \gamma _{\omega }\right| ^{p}=s_1^{-\tau (p)}\), so (83) gives

    $$\begin{aligned} \sum _{\varvec{\lambda }\in {\varLambda }_{j_{1}}\times {\varLambda }_{j_{2}}} d_{\varvec{\lambda }}^{p} \approx s_1^{-\tau (p) m_1}, \end{aligned}$$

    and

    $$\begin{aligned} 2^{- |{\mathbf {j}}| + p \displaystyle \sum _{i=1}^2 \beta _i j_i } \sum _{\varvec{\lambda }\in {\varLambda }_{j_{1}}\times {\varLambda }_{j_{2}}} d_{\varvec{\lambda }}^{p} \approx 2^{-S_1m_1-S_2m_2+S_1m_1\beta _1p+S_2m_2 \beta _2p -S_1\tau (p) m_1}. \end{aligned}$$

    We have

    $$\begin{aligned} \forall \; m_1 \ge m_2 \quad 2^{-S_1m_1-S_2m_2+S_1m_1\beta _1p+S_2m_2 \beta _2p -S_1\tau (p) m_1} \le C, \end{aligned}$$

    if and only if

    $$\begin{aligned} \forall \; m_1 \ge m_2 \quad -S_1m_1-S_2m_2+S_1m_1\beta _1p+S_2m_2 \beta _2p -S_1\tau (p) m_1 \le \log _2 C. \end{aligned}$$

    Fix \(m_2\). Boundedness in \(m_1\) forces the coefficient of \(m_1\) to be nonpositive, that is,

    $$\begin{aligned} -S_1+S_1\beta _1p-S_1\tau (p) \le 0. \end{aligned}$$

    Hence (85) holds.

    In that case

    $$\begin{aligned}&\sup _{m_1 \ge m_2} -S_1m_1-S_2m_2+S_1m_1\beta _1p+S_2m_2 \beta _2p -S_1\tau (p) m_1 \\&\quad = -S_1m_2-S_2m_2+S_1m_2\beta _1p+S_2m_2 \beta _2p -S_1\tau (p) m_2. \end{aligned}$$

    So the coefficient of \(m_2\) must be nonpositive, i.e.,

    $$\begin{aligned} -S_1-S_2+S_1\beta _1p+S_2 \beta _2p -S_1\tau (p) \le 0, \end{aligned}$$

    which again holds for every \(p<0\) under (85) and (86).

    \(\square \)
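Conditions (85) and (86) are straightforward to test numerically. A minimal sketch (ours), taking as input any routine `tau` for the scaling function (for instance the illustrative one sketched above) and `sigma` \(=S_1/S_2\):

```python
# Sketch: membership test for the p-hyperbolic domain HD_p in Case 1,
# i.e. conditions (85)-(86) of Proposition 3.
def in_HD_p(beta1, beta2, p, tau, sigma):
    """True iff beta = (beta1, beta2) satisfies (85) and (86) for a given p < 0."""
    assert p < 0
    return (beta1 * p - 1 - tau(p) <= 0) and (beta2 * p - 1 - sigma * tau(p) <= 0)
```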

4.4.2 \(print \; {\mathcal {B}}_{\varvec{\alpha }}\)

Let \(\xi _{1},\xi _{2}\in {\mathbb {R}}\). Then, by Proposition 3, there exists \(\varvec{\beta }=(\beta _1,\beta _2)\) in the p-hyperbolic domain \(HD_p\) of F, for \(p<0\), such that \(\xi _{i}=(\alpha _i-\beta _i)p+1\), \(i=1,2\), if and only if

$$\begin{aligned} \alpha _{1}p-\tau (p)\le \xi _{1}\; \text{ and } \; \alpha _{2}p-\sigma \tau (p)\le \xi _{2}. \end{aligned}$$

For \(p< 0\), define

$$\begin{aligned} D_{\varvec{\alpha },p}=\left\{ \varvec{\xi }=(\xi _{1},\xi _{2})\in {\mathbb {R}}^{2}\;;\; \left\{ \begin{array}{l}\alpha _{1}p-\tau (p)\le \xi _{1}\\ \alpha _{2}p-\sigma \tau (p)\le \xi _{2} \end{array}\right. \right\} . \end{aligned}$$

Write \(G(\varvec{\xi })\) instead of \(G_{\varvec{\alpha },\varvec{\beta },p}\) in Theorem 4, i.e.,

$$\begin{aligned} G(\varvec{\xi })=\{{\mathbf {t}}\ge {\mathbf {0}}\;;\;t_1+t_2 \le \xi _{1}+\xi _{2}\}\cup \{{\mathbf {t}}\ge {\mathbf {0}}\;;\; t_1 > \min _{i=1,2} \xi _{i}\; and\; t_2 \le \max _{i=1,2} \xi _{i}\}. \end{aligned}$$

Then thanks to Corollary 1

$$\begin{aligned} print \; {\mathcal {B}}_{\varvec{\alpha }} \subset \bigcap _{p< 0\;,\;\varvec{\xi }\in D_{\varvec{\alpha },p}}G(\varvec{\xi })\;. \end{aligned}$$
(87)

Clearly, if \(\xi _1 < 0\) and \(\xi _2 < 0\) then

$$\begin{aligned} G(\varvec{\xi })= \emptyset , \end{aligned}$$

if \(\xi _1 \le 0\) then

$$\begin{aligned} G(\varvec{\xi })= \{{\mathbf {t}}\ge {\mathbf {0}}\;;\; t_2 \le \xi _{2}\}= G(0,\xi _2), \end{aligned}$$

and if \(\xi _2 \le 0\) then

$$\begin{aligned} G(\varvec{\xi })= \{{\mathbf {t}}\ge {\mathbf {0}}\;;\; t_2 \le \xi _{1}\}=G(0,\xi _1). \end{aligned}$$
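The set \(G(\varvec{\xi })\) and the three special cases above can be checked directly from the displayed formula; a small illustrative predicate (ours):

```python
# Sketch: pointwise membership test for G(xi), transcribed from the displayed formula.
def in_G(t, xi):
    """True iff t = (t1, t2) >= 0 lies in G(xi1, xi2)."""
    t1, t2 = t
    xi1, xi2 = xi
    if t1 < 0 or t2 < 0:
        return False
    return (t1 + t2 <= xi1 + xi2) or (t1 > min(xi1, xi2) and t2 <= max(xi1, xi2))

if __name__ == "__main__":
    print(in_G((0.2, 0.3), (-1.0, -1.0)))   # False: G(xi) is empty when xi1, xi2 < 0
    print(in_G((5.0, 0.3), (-1.0, 0.5)))    # True:  for xi1 <= 0, G(xi) = {t >= 0 : t2 <= xi2}
```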

Recall that \(\tau \) is concave and strictly increasing. The following lemma is straightforward.

Lemma 3

The function \(k:p\mapsto \frac{\tau (p)}{p}\) is strictly increasing on \((-\infty ,0)\). Its range on \((-\infty ,0)\) is \(\left( \displaystyle \frac{H_{\max }}{\sigma }, \infty \right) \).
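For completeness, here is a short verification sketch (ours; the argument is standard). Since \(\#A\ge 2\) and \(0<|\gamma _\omega |<1\), we have \(\tau (0)=-\log _{s_1}(\#A)<0\), and the concavity of \(\tau \) gives \(\tau (0)\le \tau (p)-p\,\tau '(p)\), so that

$$\begin{aligned} k'(p)=\frac{p\,\tau '(p)-\tau (p)}{p^{2}}\ge \frac{-\tau (0)}{p^{2}}>0 \qquad \text{ for } \; p<0. \end{aligned}$$

Moreover \(k(p)\rightarrow +\infty \) as \(p\rightarrow 0^{-}\) (since \(\tau (0)<0\)), while \(k(p)=\displaystyle \frac{\tau (p)}{p}\rightarrow -\frac{\log |\gamma |_{\min }}{\log s_1}=\frac{H_{\max }}{\sigma }\) as \(p\rightarrow -\infty \), which gives the stated range.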

Corollary 2

If \(\alpha _{1} > \displaystyle \frac{H_{\max }}{\sigma }\) and \(\alpha _{2} > H_{\max }\) then \(print \; {\mathcal {B}}_{\varvec{\alpha }}=\emptyset \).

Proof

If \(\alpha _{1} > \displaystyle \frac{H_{\max }}{\sigma }\) and \(\alpha _{2} > H_{\max }\), then Lemma 3 implies that \(\alpha _{1}p-\tau (p)<0\) and \(\alpha _{2}p-\sigma \tau (p)<0\) for some \(p<0\). Thus there exist \(p<0\) and \(\varvec{\xi }\in D_{\varvec{\alpha },p}\) such that \(\varvec{\xi }< {\mathbf {0}}\), so that \(G(\varvec{\xi })=\emptyset \). Hence (87) yields \(print \; {\mathcal {B}}_{\varvec{\alpha }}=\emptyset \). \(\square \)

Assume now that \(\varvec{\alpha }\ngtr \left( \displaystyle \frac{H_{\max }}{\sigma }, H_{\max }\right) \). Unfortunately, it is not easy to identify \(\displaystyle \bigcap _{p< 0\;,\;\varvec{\xi }\in D_{\varvec{\alpha },p}}G(\varvec{\xi })\) explicitly.

  • Assume that \(\alpha _{1} > \displaystyle \frac{H_{\max }}{\sigma }\) and \(\alpha _{2} \le H_{\max }\). Using Lemma 3, write \(\alpha _{1}= k ({\widetilde{p}}_{1})\), with \({\widetilde{p}}_{1} <0\). Then

    $$\begin{aligned} \alpha _1 p - \tau (p) \le 0 \; \Leftrightarrow \; p \le {\widetilde{p}}_{1}. \end{aligned}$$

    Thus for any \(p \le {\widetilde{p}}_{1}\)

    $$\begin{aligned} \bigcap _{\varvec{\xi }\in D_{\varvec{\alpha },p},\; \xi _{1}\le 0}G(0,\xi _{2})=G\left( 0,\alpha _{2}p-\sigma \tau (p)\right) . \end{aligned}$$

    Thus

    $$\begin{aligned} \bigcap _{p< 0\;,\;\varvec{\xi }\in D_{\varvec{\alpha },p}}G(\varvec{\xi })&\subset \bigcap _{p \le {\widetilde{p}}_{1}}G\left( 0,\alpha _{2}p-\sigma \tau (p)\right) \\&=\left\{ {\mathbf {t}} \ge {\mathbf {0}}\;;\; t_{2}\le \displaystyle \inf _{ p \le {\widetilde{p}}_{1} } \left( \alpha _{2}p-\sigma \tau (p)\right) \right\} . \end{aligned}$$
  • Assume that \(\alpha _{1} \le \displaystyle \frac{H_{\max }}{\sigma }\) and \(\alpha _{2} > H_{\max }\). Using Lemma 3, write \(\alpha _{2} = \sigma k ({\widetilde{p}}_{2})\), with \({\widetilde{p}}_{2} <0\). Then

    $$\begin{aligned} \alpha _2 p - \sigma \tau (p) \le 0 \; \Leftrightarrow \; p \le {\widetilde{p}}_{2}. \end{aligned}$$

    Thus for any \(p \le {\widetilde{p}}_{2}\)

    $$\begin{aligned} \bigcap _{\varvec{\xi }\in D_{\varvec{\alpha },p},\; \xi _{2}\le 0}G(\xi _{1},0)=G\left( \alpha _{1}p-\tau (p),0\right) . \end{aligned}$$

    It follows that

    $$\begin{aligned} \bigcap _{p< 0\;,\;\varvec{\xi }\in D_{\varvec{\alpha },p}}G(\varvec{\xi })&\subset \bigcap _{p \le {\widetilde{p}}_{2}}G\left( \alpha _{1}p-\tau (p),0\right) \\&=\left\{ {\mathbf {t}} \ge {\mathbf {0}}\;;\; t_{2}\le \displaystyle \inf _{p \le {\widetilde{p}}_{2}} (\alpha _{1}p-\tau (p))\right\} . \end{aligned}$$
  • If \(\alpha _{1} \le \displaystyle \frac{H_{\max }}{\sigma }\) and \(\alpha _{2} \le H_{\max }\) then \( \alpha _1 p - \tau (p) > 0\) and \( \alpha _2 p - \sigma \tau (p) > 0\) for all \(p<0\). We propose to give information on \(\displaystyle \bigcap _{p< 0\;,\;\varvec{\xi }\in D_{\varvec{\alpha },p}}G(\varvec{\xi })\) using \(G({\widetilde{\varvec{\xi }}})\), where \(\widetilde{\varvec{\xi }}= ({\tilde{\xi }}_1, {\tilde{\xi }}_2) \in D_{\varvec{\alpha },{\tilde{p}}}\), with \({\tilde{p}} \le 0\), solves the minimization problem (a numerical sketch is given after this list)

    $$\begin{aligned} {\tilde{\xi }}_1+ {\tilde{\xi }}_2=\inf _{p< 0\;,\;\varvec{\xi }\in D_{\varvec{\alpha },p} } (\xi _{1}+\xi _{2}). \end{aligned}$$
    (88)

    Two steps are needed to solve the previous optimisation problem.

    • We first fix \(p <0\) and minimize

      $$\begin{aligned} {\tilde{\xi }}_1(p)+ {\tilde{\xi }}_2(p)=\inf _{\varvec{\xi }\in D_{\varvec{\alpha },p} } (\xi _{1}+\xi _{2}). \end{aligned}$$
      (89)

      Since the map \((\xi _1,\xi _2) \mapsto \xi _{1}+\xi _{2}\) is affine and increasing in each variable, the infimum is attained at the corner of \(D_{\varvec{\alpha },p} \).

    • Next, we minimize

      $$\begin{aligned} {\tilde{\xi }}_1+ {\tilde{\xi }}_2=\inf _{p <0}{\tilde{\xi }}_1(p)+ {\tilde{\xi }}_2(p). \end{aligned}$$
      (90)

    We can easily check that \(({\tilde{\xi }}_1(p), {\tilde{\xi }}_2(p)) =(\alpha _{1}p-\tau (p),\alpha _{2}p-\sigma \tau (p))\) and \(\widetilde{\varvec{\xi }}=(\alpha _{1}p_3-\tau (p_3),\alpha _{2}p_3-\sigma \tau (p_3))\), where \(p_{3}\) satisfies

    $$\begin{aligned} \alpha _{1}p_{3}+\alpha _{2}p_{3}-(1+\sigma )\tau (p_{3})=\displaystyle \inf _{p<0} (\alpha _{1}p+\alpha _{2}p-(1+\sigma )\tau (p)). \end{aligned}$$

    Since \(\alpha _{1}+\alpha _{2}\le (1+\sigma )\frac{H_{\max }}{\sigma }\) and \(\frac{H_{\max }}{\sigma }=\displaystyle \lim _{p\rightarrow -\infty }\tau '(p)=\displaystyle \sup _{p<0}\tau '(p)\), only two cases arise:

    • If \(\alpha _{1}+\alpha _{2}\le (1+\sigma )\tau '(0)\) then \(p_{3}=0\).

    • If \(\alpha _{1}+\alpha _{2}> (1+\sigma )\tau '(0)\) then \(p_{3}<0\) and \(\alpha _{1}+\alpha _{2}=(1+\sigma )\tau '(p_3)\).
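A numerical sketch (ours) of the two-step minimization (88)–(90) for this case; a coarse grid in p replaces the exact first-order condition determining \(p_{3}\), and the parameters are illustrative.

```python
# Numerical sketch of the two-step minimization (88)-(90) in Case 1.
import math

S1, S2 = 1, 2
s1 = 2 ** S1
sigma = S1 / S2
gammas = [0.5, 0.25]                     # the |gamma_omega|, one box per row and per column

def tau(q):
    return -math.log(sum(g ** q for g in gammas)) / math.log(s1)

def xi_tilde(p, alpha1, alpha2):
    """Corner of D_{alpha,p}: the minimizer (xi_1(p), xi_2(p)) of step (89)."""
    return alpha1 * p - tau(p), alpha2 * p - sigma * tau(p)

def minimize(alpha1, alpha2):
    """Step (90): minimize xi_1(p) + xi_2(p) over p < 0 (coarse grid search)."""
    p_grid = [-k / 100 for k in range(1, 2001)]      # p in [-20, -0.01]
    return min((sum(xi_tilde(p, alpha1, alpha2)), p) for p in p_grid)

if __name__ == "__main__":
    # alpha chosen with alpha1 <= H_max / sigma = 2 and alpha2 <= H_max = 1
    best_value, best_p = minimize(1.5, 0.9)
    print(best_value, best_p)
```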

4.5 Case 2: Only One Column Contains All Boxes \(\mathfrak {R}_\omega \) with \(\omega \in A\)

4.5.1 p-Hyperbolic Domain

Proposition 4

Assume that only one column contains all boxes \(\mathfrak {R}_\omega \) with \(\omega \in A\). Let \(p<0\). Then \(\varvec{\beta }=(\beta _1,\beta _2)\) belongs to the p-hyperbolic domain \(HD_p\) of F iff

$$\begin{aligned} \beta _1 p-1-\frac{H_{\min } }{\sigma }p\le 0, \end{aligned}$$
(91)

and

$$\begin{aligned} \beta _2 p-1 - \sigma \tau (p) \le 0\;. \end{aligned}$$
(92)

4.5.2 \(print \; {\mathcal {B}}_{\varvec{\alpha }}\)

Let \(\xi _{1},\xi _{2}\in {\mathbb {R}}\). Then there exists \(\varvec{\beta }=(\beta _1,\beta _2)\) in the p-hyperbolic domain \(HD_p\) of F, for \(p<0\), such that \(\xi _{i}=(\alpha _i-\beta _i)p+1\), \(i=1,2\), if and only if the following two conditions are fulfilled:

$$\begin{aligned} \alpha _{1}p-\frac{H_{\min } }{\sigma }p\le \xi _{1}, \end{aligned}$$

and

$$\begin{aligned} \alpha _{2}p-\sigma \tau (p)\le \xi _{2}. \end{aligned}$$

For \(p< 0\), define

$$\begin{aligned} D_{\varvec{\alpha },p}=\left\{ \varvec{\xi }=(\xi _{1},\xi _{2})\in {\mathbb {R}}^{2}\;;\; \left\{ \begin{array}{l} \alpha _{1}p-\frac{H_{\min } }{\sigma }p\le \xi _{1}\\ \alpha _{2}p-\sigma \tau (p)\le \xi _{2} \end{array}\right. \right\} . \end{aligned}$$

We have

$$\begin{aligned} print \; {\mathcal {B}}_{\varvec{\alpha }} \subset \bigcap _{p<0\;,\;\varvec{\beta }\in HD_p} G_{\varvec{\alpha },\varvec{\beta },p}=\bigcap _{p< 0\;,\;\varvec{\xi }\in D_{\varvec{\alpha },p}}G(\varvec{\xi }). \end{aligned}$$

Corollary 3

If \(\alpha _{1}>\frac{H_{\min } }{\sigma }\) and \(\alpha _{2}>H_{\max }\) then \(print \; {\mathcal {B}}_{\varvec{\alpha }}=\emptyset \).

Proof

If \(\alpha _{1}>\frac{H_{\min } }{\sigma }\) and \(\alpha _{2}>H_{\max }\), then for some \(p<0\) we have \(\alpha _{1}p-\frac{H_{\min } }{\sigma }p<0\) and \(\alpha _{2}p-\sigma \tau (p)<0\). Thus there exists \(\varvec{\xi }\in D_{\varvec{\alpha },p}\) such that \(\varvec{\xi }< {\mathbf {0}}\), so that \(G(\varvec{\xi })=\emptyset \). Hence \(print \; {\mathcal {B}}_{\varvec{\alpha }}=\emptyset \). \(\square \)

Assume now that \(\varvec{\alpha }\ngtr ( \displaystyle \frac{H_{\min }}{\sigma }, H_{\max })\).

  • If \(\alpha _{1}>\frac{H_{\min } }{\sigma }\) and \(\alpha _{2}\le H_{\max }\), then for all \(p<0\) we have \(\alpha _{1}p-\frac{H_{\min } }{\sigma }p<0\) and

    $$\begin{aligned} \bigcap _{\varvec{\xi }\in D_{\varvec{\alpha },p},\; \xi _{1}\le 0}G(0,\xi _{2})=G(0,\alpha _{2}p-\sigma \tau (p)). \end{aligned}$$

    It follows that

    $$\begin{aligned} \bigcap _{p< 0\;,\;\varvec{\xi }\in D_{\varvec{\alpha },p}}G(\varvec{\xi })&\subset \bigcap _{p<0 }G(0,\alpha _{2}p-\sigma \tau (p))\\&=\left\{ {\mathbf {t}}\ge {\mathbf {0}}\;;\; t_{2}\le \displaystyle \inf _{p<0} (\alpha _{2}p-\sigma \tau (p)) \right\} . \end{aligned}$$
  • Assume that \(\alpha _{1}\le \frac{H_{\min } }{\sigma }\) and \(\alpha _{2}>H_{\max }\). Using Lemma 3, write \(\alpha _{2}=\sigma k ({\widetilde{p}}_{2})\), with \({\widetilde{p}}_{2} <0\). Then

    $$\begin{aligned} \alpha _2 p - \sigma \tau (p) \le 0 \; \Leftrightarrow \; p \le {\widetilde{p}}_{2}. \end{aligned}$$

    Thus for any \(p \le {\widetilde{p}}_{2}\)

    $$\begin{aligned} \bigcap _{\varvec{\xi }\in D_{\varvec{\alpha },p},\; \xi _{2}\le 0}G(\xi _{1},0)=G(\alpha _{1}p-\frac{H_{\min } }{\sigma }p,0). \end{aligned}$$

    Therefore

    $$\begin{aligned} \bigcap _{p< 0\;,\;\varvec{\xi }\in D_{\varvec{\alpha },p}}G(\varvec{\xi })&\subset \bigcap _{p \le {\widetilde{p}}_{2}}G(\alpha _{1}p-\frac{H_{\min } }{\sigma }p,0)\\&=\left\{ {\mathbf {t}}\ge {\mathbf {0}}\;;\; t_{2}\le \displaystyle \inf _{p \le {\widetilde{p}}_{2}} (\alpha _{1}p-\frac{H_{\min } }{\sigma }p) \right\} . \end{aligned}$$
  • If \(\alpha _{1}\le \frac{H_{\min } }{\sigma }\) and \(\alpha _{2}\le H_{\max }\) then \( \alpha _1 p - \frac{H_{\min } }{\sigma }p \ge 0\) and \( \alpha _2 p - \sigma \tau (p) > 0\) for all \(p<0\). We proceed as in (88) with steps (89) and (90). We can easily check that \(({\tilde{\xi }}_1(p), {\tilde{\xi }}_2(p)) =\left( \alpha _{1}p-\frac{H_{\min } }{\sigma }p,\alpha _{2}p-\sigma \tau (p)\right) \) and \(\widetilde{\varvec{\xi }}=\left( \alpha _{1}p_{3}-\frac{H_{\min } }{\sigma }p_{3},\alpha _{2}p_3-\sigma \tau (p_3)\right) \), where \(p_{3}\) satisfies

    $$\begin{aligned} \alpha _{1}p_{3}+\alpha _{2}p_{3}-\frac{H_{\min } }{\sigma }p_{3}-\sigma \tau (p_{3})=\displaystyle \inf _{p<0} \left( \alpha _{1}p+\alpha _{2}p-\frac{H_{\min } }{\sigma }p-\sigma \tau (p)\right) . \end{aligned}$$

    Since \(\alpha _{1}+\alpha _{2}\le \frac{H_{\min } }{\sigma }+H_{\max }\), only two cases arise:

    • If \(\alpha _{1}+\alpha _{2}\le \frac{H_{\min } }{\sigma }+\sigma \tau '(0)\), then \(p_{3}=0\).

    • If \(\alpha _{1}+\alpha _{2}> \frac{H_{\min } }{\sigma }+\sigma \tau '(0)\), then \(p_{3}<0\) and \(\alpha _{1}+\alpha _{2}= \frac{H_{\min } }{\sigma }+\sigma \tau '(p_{3})\).