1 Introduction

Extremal bases, originally introduced in [3, 7], have been established as a useful tool in the study of the properties of so-called \(\mathbb {C}\)-convex domains D. On the one hand, they induce a natural orthonormal coordinate system around any given point z in the interior of the domain D. On the other hand, under certain assumptions, see below, every extremal basis B at a point z enjoys the following inequalities:

$$\begin{aligned} \left( \sum _{b\in B} \frac{|\left\langle {b,v}\right\rangle |}{d_D(z;b)}\right) ^{-1}\lesssim d_D(z;v) \lesssim \left( \sum _{b\in B} \frac{|\left\langle {b,v}\right\rangle |}{d_D(z;b)}\right) ^{-1}, \end{aligned}$$
(1)

where \(d_D(z;v)\) corresponds to the distance from z to the boundary of D in direction v, i.e.:

$$\begin{aligned} d_D(z;v) = \sup \{r\in \mathbb {R}^+|\ z+\lambda v\in D\text { whenever }|\lambda | < r\}. \end{aligned}$$

This means that the extremal bases provide a convenient linear approximation of the structure of the body D in a neighbourhood of any given point z in the interior of the body.

The property (1) of extremal bases has facilitated the construction of plurisubharmonic functions with bounded Hessians and the derivation of estimates for the Bergman kernels, [7, 8]. In [3, 4], Hefer states and uses the estimates (1) to obtain Hölder and \(L^p\) estimates for the solutions of the Cauchy-Riemann equations on smooth bounded pseudoconvex domains D of finite type. The estimates (1) have also been applied in the study of the Kobayashi and Bergman metrics, [7, 9,10,11] and, more recently, [12]. For a survey on the geometry of extremal bases and their applications we refer to [2].

The construction of extremal bases can be described as a greedy procedure. One starts with an arbitrary point z in the interior of a \(\mathbb {C}\)-convex domain D and an empty set of vectors/directions \(B_0\). Next, inductively, the current set \(B_k\) is extended to \(B_{k+1}\) by adding an extremal direction \(v_{k+1}\) that is orthogonal to the subspace spanned by \(B_k\). The process terminates once the set \(B:=B_n\) spans the entire space, in which case B is the desired basis.

The notion of extremity can be specified as maximal, [7, 8], or as minimal, [3, 4], and the difference consists in whether one selects:

$$\begin{aligned} v_{k+1}\in & {} \arg \max \{d_{D}(z;v)\,|\, v\perp span(B_k)\} \text { or } \\ v_{k+1}\in & {} \arg \min \{d_{D}(z;v)\,|\, v\perp span(B_k)\}. \end{aligned}$$
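To make the greedy procedure concrete, here is a minimal numerical sketch for a hypothetical ellipsoidal domain in \(\mathbb {R}^2\); the domain, its semi-axes and the grid resolution are illustrative assumptions, not part of the text:

```python
import math

# Hypothetical domain: the ellipse D = {(x, y) : (x/A)^2 + (y/B)^2 < 1},
# viewed from z = 0.  For a unit direction v the distance to the boundary is
# d_D(0; v) = 1 / sqrt((v_x/A)^2 + (v_y/B)^2).
A, B = 1.0, 3.0

def d_D(v):
    vx, vy = v
    return 1.0 / math.sqrt((vx / A) ** 2 + (vy / B) ** 2)

def greedy_basis(select):
    """One greedy step over a grid of unit directions; in R^2 the
    orthogonal complement of the first direction is a single line."""
    grid = [(math.cos(math.pi * k / 1800), math.sin(math.pi * k / 1800))
            for k in range(1800)]
    b1 = select(grid, key=d_D)          # extremal direction (argmax/argmin)
    b2 = (-b1[1], b1[0])                # its orthogonal complement
    return [b1, b2]

maximal_basis = greedy_basis(max)       # v_1 in argmax d_D(z; .)
minimal_basis = greedy_basis(min)       # v_1 in argmin d_D(z; .)
```

For these semi-axes the maximal choice picks the long axis first (where \(d_D=3\)) and the minimal choice the short axis (where \(d_D=1\)).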

Though important for the applications, the known proofs of the estimates (1), departing from geometric or analytical viewpoints, depend on some kind of smoothness condition on the domain D, [7, 8, 10], and often provide only implicit or rough estimates for the hidden constant. In particular, it is not evident if and how it depends on the domain D.

In this paper, we answer the above questions by proving that the estimates (1) are valid with constant \(2^{n}-1\), where n is the dimension of the space in which the (so-called weakly linearly) convex domain D resides. In this sense the constant is independent of D. Furthermore, it turns out that the constant \(2^n-1\) is sharp, that is, it cannot be improved.

Our approach is algebraic. It departs from the observation that for a weakly linearly convex domain D and any particular point z in the interior of D, the function:

$$\begin{aligned} f(v)=\frac{1}{d_D(z;v)} \end{aligned}$$

is a semi-norm. If, furthermore, the domain D contains no complex lines, then f is a norm; see, however, Remark 1, which suggests that this assumption is not essential and that it is only for technical reasons that we restrict our considerations to norms. Thus the estimates (1) can be restated as:

$$\begin{aligned} \sum _{b\in B} |\left\langle {b,v}\right\rangle |f(b) \gtrsim f(v) \gtrsim \sum _{b\in B} |\left\langle {b,v}\right\rangle | f(b) \end{aligned}$$
(2)

and whereas the first inequality is satisfied with constant 1, as follows easily from the triangle inequality, the second inequality is more challenging.
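The easy direction can be checked mechanically. Below is a small sketch verifying that \(f(v)\le \sum _{b\in B} |\left\langle {b,v}\right\rangle | f(b)\), i.e. the triangle-inequality direction with constant 1; the set U defining the norm and the basis are arbitrary illustrative choices:

```python
import math

# f is a polyhedral norm f(v) = max_{u in U} |<v, u>|; U is an arbitrary
# illustrative set whose span is R^2, so f is a norm.
U = [(1.0, 0.0), (0.7, 0.7), (0.0, 2.0)]

def f(v):
    return max(abs(v[0] * u[0] + v[1] * u[1]) for u in U)

basis = [(1.0, 0.0), (0.0, 1.0)]        # any orthonormal basis will do

# v = sum_i <v, b_i> b_i, hence f(v) <= sum_i |<v, b_i>| f(b_i)
for k in range(360):
    t = math.pi * k / 180
    v = (math.cos(t), math.sin(t))
    coords = [v[0] * b[0] + v[1] * b[1] for b in basis]
    assert f(v) <= sum(abs(c) * f(b) for c, b in zip(coords, basis)) + 1e-12
```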

To keep the outline self-contained, in Sect. 2 we prove that every (bounded on the unit sphere) norm can be represented as:

$$\begin{aligned} f(v)=\sup _{u\in U} |\left\langle {v,u}\right\rangle | \end{aligned}$$

for an appropriate set of vectors U. The geometric interpretation of U is that the vectors \(u\in U\) define the supporting hyperplanes to the body D centred at z.

In Sect. 3 we state and prove that for any extremal basis B:

$$\begin{aligned} (2^n-1) f(v) \ge \sum _{b\in B} |\left\langle {b,v}\right\rangle | f(b). \end{aligned}$$

We should stress that the definitions of maximal and minimal in our notation, see Definition 2, are reciprocal to the notions used for bodies. The reason is that maximising \(d_D(z;.)\) is equivalent to minimising f and vice versa. Thus, Theorem 4 proves the statement for maximal bases w.r.t. \(d_D(z;.)\), that is, for minimal bases w.r.t. f, whereas Theorem 5 proves the statement for minimal bases w.r.t. \(d_D(z;.)\), that is, for maximal bases w.r.t. f.

In Sect. 4 we prove that \(2^n-1\) is the best possible bound. Namely, we show that for every \(\varepsilon >0\) there are norms whose maximal, resp. minimal, bases violate the inequality:

$$\begin{aligned} (2^n-1-\varepsilon ) f(v) \ge \sum _{b\in B} |\left\langle {b,v}\right\rangle | f(b) \end{aligned}$$

for at least one vector v. Since every norm gives rise to a convex body \(D'=\{v\,|\, f(v)\le 1\}\) such that \(d_{D'}(0;v)=\frac{1}{f(v)}\), the results translate immediately to convex bodies. Again, Proposition 6 handles the case of a maximal basis w.r.t. \(d_D(z;.)\), that is, a minimal basis w.r.t. f, whereas Proposition 7 handles the case of a minimal basis w.r.t. \(d_D(z;.)\), that is, a maximal basis w.r.t. f.

In Sect. 5 we show that minimal and maximal bases are equivalent in their norms. Whereas a similar result has previously been proven in [10], the bounds in Sect. 5 are based on a more accurate analysis of the algebraic structure and improve the estimate for the constants from \(2^nn!\) to \(2^n\), thus removing a factor of n!.

We conclude in Sect. 6 with some open problems.

2 Preliminaries

In what follows we assume that V is a linear space over \(\mathbb {F}=\mathbb {R}\) or \(\mathbb {F}=\mathbb {C}\) and that V is equipped with a scalar product \(\left\langle {\cdot ,\cdot }\right\rangle :V\times V\rightarrow \mathbb {F}\). We denote by \(\Vert .\Vert :V\rightarrow \mathbb {R}^+\) and \(\mathbb {S}_1\) the induced norm and the unit sphere in V, respectively, i.e.:

$$\begin{aligned} \Vert u\Vert= & {} \sqrt{\left\langle {u,u}\right\rangle } \text { for } u\in V\\ \mathbb {S}_1= & {} \{u\in V\,|\, \left\langle {u,u}\right\rangle =1\}. \end{aligned}$$

Lemma 1

Let \(f:V\rightarrow \mathbb {R}^+\) be a norm. Assume that f is bounded on the unit sphere \(\mathbb {S}_1\subset V\).

  1. (1)

    There is a set of vectors \(U\subseteq V\) such that:

    $$\begin{aligned} f(v)=\sup _{u\in U} |\left\langle {v,u}\right\rangle |. \end{aligned}$$
  2. (2)

    If \(v_0\in V\) is a unit vector such that \(f(v_0)=\sup _{v\in \mathbb {S}_1} f(v)\), then for all \(u\in V\) it holds that:

    $$\begin{aligned} f(u) \ge |\left\langle {v_0,u}\right\rangle |f(v_0). \end{aligned}$$

Proof

  1. (1)

The first part is an immediate consequence of the Hahn-Banach Theorem. Indeed, let \(v\in V\). Then, denoting by \(V_0=span(v)\), we define \(L_0:V_0\rightarrow \mathbb {F}\) as \(L_0(\alpha v)=\alpha f(v)\). Clearly, \(L_0\) is a linear functional on \(V_0\) and \(|L_0(\alpha v)| =|\alpha |f(v)=f(\alpha v)\). By the Hahn-Banach Theorem, since f is a norm, we can extend \(L_0\) to a linear functional \(L_v:V\rightarrow \mathbb {F}\) such that \(|L_v(u)|\le f(u)\) for all \(u\in V\). Since \(L_v\) is linear on V and \(\sup _{u\in \mathbb {S}_1} L_v(u)\) is bounded above by \(\sup _{u\in \mathbb {S}_1} f(u)<\infty \), it follows that \(L_v\) is a bounded linear functional and consequently there is a vector \(v^*\in V\) such that:

    $$\begin{aligned} \left\langle {u,v^*}\right\rangle =L_v(u) \text { for all } u\in V. \end{aligned}$$

    Let \(U=\{v^*\,|\, v\in V\}\). It is straightforward that for any \(u\in V\) and \(v^*\in U\), \(|\left\langle {u,v^*}\right\rangle |=|L_v(u)|\le f(u)\). On the other hand \(f(u)=L_u(u)=|\left\langle {u,u^*}\right\rangle |\). Therefore:

    $$\begin{aligned} f(v)=\sup _{u\in U} |\left\langle {v,u}\right\rangle |. \end{aligned}$$
  2. (2)

    For the second part, consider the linear functional \(L=L_{v_0}\) induced by \(v_0\). Since L is linear, it is determined by its values on an orthonormal basis on V. Without loss of generality we may and we do assume that \((e_i)_{i\in I}\) is such a basis with \(e_1=v_0\). Assume that \(L(e_i)\ne 0\) for some \(i>1\). Then we consider the vector:

    $$\begin{aligned} u= \overline{L(e_1)} v_0 + \overline{L(e_i)} e_i=f(v_0) v_0 + \overline{L(e_i)} e_i. \end{aligned}$$

    Then we have that \(\Vert u\Vert =\sqrt{f^2(v_0) + |L(e_i)|^2}\) and:

    $$\begin{aligned} L(u) = f(v_0) L(v_0)+\overline{L(e_i)}L(e_i)=f(v_0)^2 + |L(e_i)|^2. \end{aligned}$$

    Hence \(u'=\frac{u}{\Vert u\Vert }\) is a unit vector and \(L(u')= \sqrt{f(v_0)^2 + |L(e_i)|^2}\) implying that \(L(u')> f(v_0)\) since \(L(e_i)\ne 0\). However, \(f(u')\ge L(u')>f(v_0)\) and this contradicts the maximality of \(v_0\). Consequently, \(L(e_i)=0\) for any \(i\ne 1\). This proves that \(L(u)=\left\langle {u,e_1}\right\rangle L(e_1)=\left\langle {u,v_0}\right\rangle f(v_0)\) and therefore:

    $$\begin{aligned} |\left\langle {u,v_0}\right\rangle |f(v_0)=|L(u) | \le f(u) \end{aligned}$$

    as required. \(\square \)

Remark 1

We can view the first part of the above lemma as a characterisation of (semi)norms. Indeed, if \(U\subseteq V\), then \(f_U:V\rightarrow \mathbb {R}^+\) defined as:

$$\begin{aligned} f_U(v)=\sup _{u\in U} |\left\langle {v,u}\right\rangle | \end{aligned}$$

is a semi-norm. Actually, it is a norm iff \(U^{\perp }=\{v\in V\,|\, \forall u\in U(\left\langle {u,v}\right\rangle =0)\}\) is the trivial set \(\{0\}\). Since \(U^{\perp }\) is a subspace of V, the properties of \(f_U\) are uniquely determined by the behaviour of \(f_U\) on the orthogonal complement of \(U^{\perp }\). Indeed, if \(v=v'+u\) with \(u\in U^{\perp }\) and \(v'\perp U^{\perp }\), then:

$$\begin{aligned} f_U(v') \le f_U(v) + f_U(-u)=f_U(v) \text { and } f_U(v)\le f_U(v') + f_U(u)=f_U(v'), \end{aligned}$$

that is \(f_U(v')=f_U(v)\).
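The characterisation in Remark 1 can be illustrated numerically; in the sketch below the sets U are arbitrary illustrative choices:

```python
# f_U(v) = max_{u in U} |<v, u>| is always a semi-norm; it is a norm
# exactly when U-perp = {0}.
def f_U(v, U):
    return max(abs(v[0] * u[0] + v[1] * u[1]) for u in U)

U_full = [(1.0, 0.0), (0.0, 1.0)]   # spans R^2 -> U-perp = {0} -> a norm
U_thin = [(1.0, 0.0)]               # U-perp is the y-axis -> only a semi-norm

# f_{U_thin} vanishes on the non-zero vector (0, 1):
degenerate = f_U((0.0, 1.0), U_thin)

# triangle inequality spot-check for f_{U_full}
v, w = (1.0, 2.0), (-0.5, 0.3)
lhs = f_U((v[0] + w[0], v[1] + w[1]), U_full)
rhs = f_U(v, U_full) + f_U(w, U_full)
```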

3 Minimal and maximal bases

In this section we assume that n is a positive integer, and V is an n-dimensional linear space supplied with a scalar product \(\left\langle {\cdot ,\cdot }\right\rangle :V\times V\rightarrow \mathbb {F}\) where \(\mathbb {F}\in \{\mathbb {R},\mathbb {C}\}\).

Definition 2

Let \(f:V\rightarrow \mathbb {R}^+\) be a norm. We define an f-minimal (f-maximal, resp.) (orthonormal) basis for V inductively as follows:

  • \(n=1\), then for any \(b\in \mathbb {S}_1\), (b) is an f-minimal (f-maximal, resp.) basis.

  • \(n>1\), then let:

    $$\begin{aligned} b_1\in & {} \arg \min \{f(b)\,|\, b\in \mathbb {S}_1\} (b_1\in \arg \max \{f(b) \,|\, b\in \mathbb {S}_1\}, \text { resp.})\\ V_1= & {} \{u\in V\,|\, \left\langle {u,b_1}\right\rangle =0\}\\ f_1= & {} f\upharpoonright V_1 \text { be the restriction of } f \text { to } V_1. \end{aligned}$$

    If \((b_2,b_3,\dots ,b_n)\) is an \(f_1\)-minimal (\(f_1\)-maximal, resp.) basis for \(V_1\), then \((b_1,b_2,\dots ,b_n)\) is an f-minimal (f-maximal, resp.) basis for V.

Definition 3

We say that two orthonormal bases \((b_1,\dots ,b_n)\) and \((e_1,\dots ,e_n)\) of V are equivalent if for every \(i\le n\), \(b_i\) and \(e_i\) are collinear, i.e. there is an element \(\alpha _i\in \mathbb {F}\) with \(|\alpha _i|=1\) such that \(e_i=\alpha _i b_i\).

Theorem 4

For any norm \(f:V\rightarrow \mathbb {R}^+\) and any f-minimal basis \((b_1,b_2,\dots ,b_n)\) it holds that:

$$\begin{aligned} (2^n-1)f(v)\ge \sum _{i=1}^n |\left\langle {v,b_i}\right\rangle | f(b_i). \end{aligned}$$

Proof

First note that for any \(v\in V\) there are unique \(\alpha \in \mathbb {F}\) and vector \(u\in span(b_2,\dots ,b_n)\) such that:

$$\begin{aligned} v=u+\alpha b_1. \end{aligned}$$

The statement being obvious for \(v=0\), we assume that \(v\ne 0\) and set \(v'= \frac{v}{\Vert v\Vert }\). Then \(v'\in \mathbb {S}_1\) and by the definition of \(b_1\), we get that:

$$\begin{aligned} f(v')\ge f(b_1)\ge \frac{|\left\langle {v,b_1}\right\rangle |}{\Vert v\Vert }f(b_1)=\frac{|\alpha |}{\Vert v\Vert }f(b_1), \end{aligned}$$

where we used the Cauchy-Schwarz inequality. So far we have that:

$$\begin{aligned} f(v)=\Vert v\Vert f(v') \ge |\alpha | f(b_1). \end{aligned}$$

This settles the case \(n=1\). Otherwise, i.e. for \(n\ge 2\), we use the triangle inequality for \(u=v - \alpha b_1\) to conclude that:

$$\begin{aligned} f(u) = f(v-\alpha b_1) \le f(v) + |\alpha | f(b_1) \le f(v) + f(v)=2f(v). \end{aligned}$$

With this inequality at hand we can conclude the proof of the theorem by induction on n. As we noticed, the case \(n=1\) is settled. Assume that the conclusion of the theorem holds for any \((n-1)\)-dimensional vector space \(V'\) and consider the vector space V of dimension n. Let \(V'=span(b_2,\dots ,b_n)\) and \(f'=f\upharpoonright V'\). It follows, by definition, that \((b_2,\dots ,b_n)\) is an \(f'\)-minimal basis for \(V'\) and therefore by the induction hypothesis:

$$\begin{aligned} (2^{n-1}-1)f'(u) \ge \sum _{i=2}^n |\left\langle {u,b_i}\right\rangle | f(b_i) \text { for any } u\in V'. \end{aligned}$$

Finally, for any non-zero \(v=u+\alpha b_1\) with \(\alpha \in \mathbb {F}\) and \(u\in V'\) we have proven that:

$$\begin{aligned} 2f(v) \ge f(u) \text { and } f(v) \ge |\alpha | f(b_1). \end{aligned}$$

Summing up, and using that \(\left\langle {v,b_i}\right\rangle =\left\langle {u,b_i}\right\rangle \) for \(i\ge 2\), we conclude that:

$$\begin{aligned} (2^n-1) f(v)= & {} 2(2^{n-1}-1) f(v) + f(v) \\\ge & {} (2^{n-1}-1) f(u) + |\alpha |f(b_1)\\\ge & {} |\alpha | f(b_1) + \sum _{i=2}^n |\left\langle {u,b_i}\right\rangle |f(b_i)\\= & {} |\left\langle {v,b_1}\right\rangle |f(b_1) + \sum _{i=2}^n |\left\langle {v,b_i}\right\rangle | f(b_i)\\= & {} \sum _{i=1}^n |\left\langle {v,b_i}\right\rangle | f(b_i). \end{aligned}$$

\(\square \)
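As a sanity check of Theorem 4 in the plane (where the constant is \(2^2-1=3\)), the following sketch uses a hypothetical norm whose f-minimal basis is known exactly; the norm and the grid are illustrative choices:

```python
import math

# Illustrative norm: f(x, y) = sqrt((x/3)^2 + y^2).  On the unit circle f
# is minimised at (1, 0), and the orthogonal direction is (0, 1), so
# ((1, 0), (0, 1)) is an f-minimal basis (exactly, no search needed).
def f(v):
    return math.sqrt((v[0] / 3) ** 2 + v[1] ** 2)

b1, b2 = (1.0, 0.0), (0.0, 1.0)

# Theorem 4 with n = 2: 3 f(v) >= |<v, b1>| f(b1) + |<v, b2>| f(b2)
worst = 0.0
for k in range(720):
    t = math.pi * k / 360
    v = (math.cos(t), math.sin(t))
    rhs = abs(v[0]) * f(b1) + abs(v[1]) * f(b2)
    worst = max(worst, rhs / f(v))
```

For this particular norm the worst observed ratio stays well below the guaranteed bound 3.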

Theorem 5

For any norm \(f:V\rightarrow \mathbb {R}^+\) and any f-maximal basis \((b_1,b_2,\dots ,b_n)\) it holds that:

$$\begin{aligned} (2^n-1) f(v) \ge \sum _{i=1}^n |\left\langle {v,b_i}\right\rangle | f(b_i). \end{aligned}$$

Proof

We proceed by induction on \(n=\dim V\). For \(n=1\) there is nothing to prove. Assume that the statement of the theorem holds for vector spaces of dimension \(n-1\). Let \(f:V\rightarrow \mathbb {R}^+\) be a norm. Since \(f(b_1)=\max _{v\in \mathbb {S}_1} f(v)\), by Lemma 1 we have that:

$$\begin{aligned} f(u)\ge |\left\langle {u,b_1}\right\rangle |f(b_1). \end{aligned}$$

Let \(V_1=span(b_2,\dots ,b_n)\) and consider an arbitrary vector \(u=\alpha b_1 + v\) with \(v\in V_1\). Then, we have:

$$\begin{aligned} f(u) \ge |\alpha | f(b_1). \end{aligned}$$

Furthermore, by the triangle inequality we have:

$$\begin{aligned} f(u) +|\alpha |f(b_1)=f(\alpha b_1 + v) + f(-\alpha b_1)\ge f(v). \end{aligned}$$

Summing up we obtain that \(2f(u) \ge f(u) + |\alpha | f(b_1)\ge f(v)\). To complete the inductive step, we use that \(\left\langle {u,b_i}\right\rangle =\left\langle {v,b_i}\right\rangle \) for \(i\ge 2\) and the inductive hypothesis for \(V_1\) and \(f_1=f\upharpoonright V_1\). Thus, we compute:

$$\begin{aligned} (2^n-1) f(u)\ge & {} f(u) + 2(2^{n-1}-1) f(u)\\\ge & {} |\alpha | f(b_1) + (2^{n-1}-1) f(v)\\\ge & {} |\alpha |f(b_1) +\sum _{i=2}^n |\left\langle {v,b_i}\right\rangle |f(b_i)\\= & {} |\alpha |f(b_1) +\sum _{i=2}^n |\left\langle {u,b_i}\right\rangle |f(b_i)\\= & {} \sum _{i=1}^n |\left\langle {u,b_i}\right\rangle |f(b_i), \end{aligned}$$

where the second line follows by the fact \(f(u)\ge |\alpha |f(b_1)\) and \(2f(u)\ge f(v)\), and the third line follows by the inductive hypothesis. \(\square \)
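Theorem 5 can likewise be checked numerically. In this sketch the polyhedral norm is an arbitrary illustrative choice whose f-maximal basis can be identified exactly (the maximiser on the circle is collinear with the longer defining vector):

```python
import math

# Illustrative polyhedral norm f(v) = max(|<v, u1>|, |<v, u2>|) with
# u1 = (1, 0), u2 = (0.8, 0.8); f is a norm since u1, u2 span R^2.
U = [(1.0, 0.0), (0.8, 0.8)]

def f(v):
    return max(abs(v[0] * u[0] + v[1] * u[1]) for u in U)

# f-maximal basis: b1 maximises f on the unit circle (here b1 is the unit
# vector collinear with u2, with f(b1) = 0.8 * sqrt(2) > 1), b2 is the
# orthogonal direction.
b1 = (math.cos(math.pi / 4), math.sin(math.pi / 4))
b2 = (-b1[1], b1[0])

# Theorem 5 with n = 2: 3 f(v) >= |<v, b1>| f(b1) + |<v, b2>| f(b2)
worst = 0.0
for k in range(720):
    t = math.pi * k / 360
    v = (math.cos(t), math.sin(t))
    c1 = v[0] * b1[0] + v[1] * b1[1]
    c2 = v[0] * b2[0] + v[1] * b2[1]
    worst = max(worst, (abs(c1) * f(b1) + abs(c2) * f(b2)) / f(v))
```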

4 Lower bounds

In this section we prove that the results from Theorems 4 and 5 are sharp. Our constructions use the characterisation from Lemma 1 and Remark 1 as a basic tool to define norms. In both cases given an \(\varepsilon >0\), we construct a finite set \(U\subseteq \mathbb {R}^n\) that defines a norm:

$$\begin{aligned} f:\mathbb {C}^n\rightarrow \mathbb {R}^+ \text { s.t. } f(v) =\max _{u\in U} |\left\langle {v,u}\right\rangle |. \end{aligned}$$

The set U is tailored in such a way that it admits:

  1. (1)

As f-minimal (f-maximal, resp.) an orthonormal basis

    $$(e_1,\dots ,e_n)\subset \mathbb {R}^n$$

    and

  2. (2)

    A witness \(v_0\in \mathbb {R}^n\) such that:

    $$\begin{aligned} (2^n-1-\varepsilon ) f(v_0)<\sum _{i=1}^n |\left\langle {v_0,e_i}\right\rangle | f(e_i). \end{aligned}$$

Since \(U\subseteq \mathbb {R}^n\), \((e_1,\dots ,e_n)\subseteq \mathbb {R}^n\) and \(v_0\in \mathbb {R}^n\), the restriction \(f_R\) of f to \(\mathbb {R}^n\) provides a norm:

$$\begin{aligned} f_R:\mathbb {R}^n\rightarrow \mathbb {R}^+ \text { with } f_R(v) = f(v) \end{aligned}$$

that admits as \(f_R\)-minimal (\(f_R\)-maximal, resp.) basis again \((e_1,\dots ,e_n)\), and \(v_0\) is a witness that:

$$\begin{aligned} (2^n-1-\varepsilon ) f_R(v_0)<\sum _{i=1}^n |\left\langle {v_0,e_i}\right\rangle | f_R(e_i). \end{aligned}$$

Proposition 6

Let \(n\ge 1\) be an integer and \(V=\mathbb {C}^n\). Then for any \(\varepsilon >0\), there is a norm \(f:V\rightarrow \mathbb {R}^+\) and a nonzero vector \(v\in \mathbb {R}^n\) such that:

  1. (1)

    f admits a unique, up to equivalence, f-minimal basis

    $$(e_1,\dots ,e_n)\subseteq \mathbb {R}^n$$

    and

  2. (2)
    $$\begin{aligned} (2^n-1-\varepsilon )f(v)\le \sum _{i=1}^n |\left\langle {v,e_i}\right\rangle | f(e_i). \end{aligned}$$

Proof

Let \((e_1,e_2,\dots ,e_n)\subseteq \mathbb {R}^n\) be a fixed orthonormal basis for V and \(\varepsilon >0\). We set \(V_i=span(e_i,\dots ,e_n)\). Finally, let \(s\in (0,1)\) be a parameter whose value will be specified appropriately later. We set out to inductively construct sets \(U_i\subseteq V_i\cap \mathbb {R}^n\), norms \(f_i:V_i\rightarrow \mathbb {R}^+\) and witnesses \(v^{(i)}\in V_i\cap \mathbb {R}^n\) with the following properties:

  1. (1)

    \(f_i(v)=\max \{|\left\langle {v,u}\right\rangle |\,|\, u\in U_i\}\),

  2. (2)

    \(f_i(e_i)=1=\min _{v\in \mathbb {S}_1\cap V_i} f_i(v)\),

  3. (3)

    \(f_{i}(v)=\frac{1}{s^2(2s+1)}f_{i+1}(v)\) for all \(v\in V_{i+1}\),

  4. (4)

    \(\left\langle {v^{(i)},e_j}\right\rangle =(2s^2)^{j-i}\) is such that \(f_i(v^{(i)})\in [1,1+2s)\) and:

$$\begin{aligned} (2^{n-i}-1)f_i(v^{(i)})<\sum _{j=i}^n |\left\langle {v^{(i)},e_j}\right\rangle |(1+2s)^{2(j-i)+1} f_i(e_j). \end{aligned}$$

We start with \(U_n=\{e_n\}\) and \(f_n(\alpha e_n)=|\alpha |\), \(v^{(n)}=e_n\). It is straightforward to see that these objects satisfy the above properties. Assume that for some \(i>1\) the set \(U_i\), the norm \(f_i\) and the witness \(v^{(i)}\) are defined and have the above properties. We define \(U_{i-1}\), \(f_{i-1}\) and \(v^{(i-1)}\) as follows. Let

$$\begin{aligned} U_i^+ =\{u\in U_i \,|\, \left\langle {v^{(i)},u}\right\rangle \ge 0\}\text { and }U_i^- =\{u\in U_i \,|\, \left\langle {v^{(i)},u}\right\rangle < 0\}. \end{aligned}$$

Next we define:

$$\begin{aligned} U_{i-1}&= \left\{ e_{i-1} + \frac{1}{s(2s+1)}u\,|\, u\in U^+_i\right\} \cup \left\{ e_{i-1} - \frac{1}{s^2(2s+1)}u\,|\, u\in U^+_i\right\} \\&\cup \left\{ e_{i-1} - \frac{1}{s(2s+1)}u\,|\, u\in U^-_i\right\} \cup \left\{ e_{i-1} + \frac{1}{s^2(2s+1)}u\,|\, u\in U^-_i\right\} . \end{aligned}$$

Now, we define \(f_{i-1}\) and \(v^{(i-1)}\) as:

$$\begin{aligned} f_{i-1}= & {} \max \{|\left\langle {v,u}\right\rangle |\, |\, u\in U_{i-1}\}\\ v^{(i-1)}= & {} e_{i-1} + 2s^2 v^{(i)}. \end{aligned}$$

Note that since, by assumption, \(U_i\subseteq \mathbb {R}^n\) and \(v^{(i)}\in \mathbb {R}^n\), the sets \(U_i^+\) and \(U_i^{-}\) are well-defined. Hence, it is clear that \(U_{i-1}\subseteq \mathbb {R}^n\) and \(v^{(i-1)}\in \mathbb {R}^n\).

Now we verify that \(U_{i-1}\), \(f_{i-1}\) and \(v^{(i-1)}\) possess the desired properties:

  1. (1)

    The first property is satisfied by definition.

  2. (2)

    The third property is also clear for \(s\in (0,1)\).

  3. (3)

    To see that the second property holds, first note that:

    $$\begin{aligned} \left\langle {e_{i-1},u}\right\rangle =1 \text { for all } u\in U_{i-1}. \end{aligned}$$

Next, consider an arbitrary element \(v\in V_{i-1}\cap \mathbb {S}_1\). It has a unique representation \(v=\alpha e_{i-1} + v'\) with \(v'\in V_i\) and \(|\alpha |^2 + \Vert v'\Vert ^2=1\). By the induction hypothesis we have that \(f_i(v')\ge \Vert v'\Vert \). Since \(U_i\) is finite, the value \(f_i(v')=|\left\langle {v',u'}\right\rangle |\) is attained for some \(u'\in U_i\). Let us fix such a \(u'\) and set \(a,b\in \mathbb {R}\) such that:

    $$\begin{aligned} \left\langle {v',u'}\right\rangle = a+ ib. \end{aligned}$$

    We also set \(c,d\in \mathbb {R}\) such that \(\alpha =c+id\). Thus, for \(\sigma \in \{s^{-1},s^{-2},-s^{-1},-s^{-2}\}\) we have:

    $$\begin{aligned} \left\langle {v,e_{i-1}+\sigma u'}\right\rangle = c+id +\sigma (a+ib)=(c+\sigma a) + i (d+ \sigma b). \end{aligned}$$

    Consequently:

$$\begin{aligned} |\left\langle {v,e_{i-1}+\sigma u'}\right\rangle |^2= & {} (c+\sigma a)^2 + (d+\sigma b)^2\\= & {} c^2 + d^2 + \sigma ^2 (a^2+b^2) + 2\sigma (ac + bd) \\= & {} |\alpha |^2 + \sigma ^2|\left\langle {v',u'}\right\rangle |^2 + 2\sigma (ac + bd)\\= & {} |\alpha |^2+ \sigma ^2 f^2_i(v') + 2\sigma (ac+bd) \\\ge & {} |\alpha |^2 + \sigma ^2 \Vert v'\Vert ^2 + 2\sigma (ac+bd). \end{aligned}$$

    By construction, we can always choose the sign of \(\sigma \), i.e. we have either the option \(\sigma \in \{s^{-1},-s^{-2}\}\) or \(\sigma \in \{-s^{-1},s^{-2}\}\). Now, choosing \(\sigma \) such that the sign of \(\sigma \) is the same as the sign of \((ac+bd)\) we conclude that:

$$\begin{aligned} |\left\langle {v,e_{i-1}+\sigma u'}\right\rangle |^2\ge & {} |\alpha |^2 + \sigma ^2 \Vert v'\Vert ^2 + 2\sigma (ac+bd)\\\ge & {} |\alpha |^2 +\sigma ^2 \Vert v'\Vert ^2\\\ge & {} |\alpha |^2 + \Vert v'\Vert ^2\\= & {} 1, \end{aligned}$$

where the first inequality follows by the choice of \(\sigma \) and the second by the fact that \(|\sigma |>1\). Furthermore, the last inequality turns into equality if and only if \(v'=0\). This proves that for any vector \(v\in \mathbb {S}_1\cap V_{i-1}\) which is not collinear with \(e_{i-1}\), it holds that \(f_{i-1}(v)>1\). Consequently:

    $$\begin{aligned} f_{i-1}(e_{i-1})=\min _{v\in V_{i-1}\cap \mathbb {S}_1} f_{i-1}(v) \end{aligned}$$

    and the minimum is attained only for vectors of the form \(\alpha e_{i-1}\) with \(|\alpha |=1\). Hence w.l.o.g. \(e_{i-1}\) belongs to the minimal basis.

  4. (4)

    Finally, we check the last condition. Let \(v^{(i-1)}\in V_{i-1}\) be such that:

    $$\begin{aligned} v^{(i-1)}=e_{i-1} + 2s^2 v^{(i)}. \end{aligned}$$

    First, let \(u\in U_i^+\). In particular, \(\left\langle {u,v^{(i)}}\right\rangle \ge 0\). Therefore:

    $$\begin{aligned} \left\langle {v^{(i-1)},e_{i-1}-\frac{1}{s^2(2s+1)}u}\right\rangle= & {} \left\langle {e_{i-1} + 2s^2v^{(i)},e_{i-1}-\frac{1}{s^2(2s+1)}u}\right\rangle \\= & {} 1- \frac{2}{2s+1}\left\langle {v^{(i)},u}\right\rangle \\= & {} 1- \frac{2}{2s+1}|\left\langle {v^{(i)},u}\right\rangle |\le 1. \end{aligned}$$

Since \(u\in U_i\), we have that \(f_i(v^{(i)})\ge |\left\langle {v^{(i)},u}\right\rangle |\) and since, by assumption, \(f_i(v^{(i)})< 1+2s\) we conclude that:

    $$\begin{aligned} \left\langle {v^{(i-1)},e_{i-1}-\frac{1}{s^2(2s+1)}u}\right\rangle =1- \frac{2}{2s+1}|\left\langle {v^{(i)},u}\right\rangle |>1-2=-1. \end{aligned}$$

    Hence \(|\left\langle {v^{(i-1)},e_{i-1}-\frac{1}{s^2(2s+1)}u}\right\rangle |\le 1\) for all \(u\in U_i^+\). On the other hand:

    $$\begin{aligned} \left\langle {v^{(i-1)},e_{i-1}+\frac{1}{s(2s+1)}u}\right\rangle= & {} 1+ \frac{2s^2}{s(2s+1)}\left\langle {v^{(i)},u}\right\rangle \\= & {} 1+ \frac{2s}{1+2s}|\left\langle {v^{(i)},u}\right\rangle |\in [1,1+2s). \end{aligned}$$

    Similarly, for \(u\in U_i^-\) we have that:

    $$\begin{aligned} \left\langle {v^{(i-1)},e_{i-1}+\frac{1}{s^2(2s+1)}u}\right\rangle= & {} 1- \frac{2}{2s+1}|\left\langle {v^{(i)},u}\right\rangle |\in (-1,1]\\ \left\langle {v^{(i-1)},e_{i-1}-\frac{1}{s(2s+1)}u}\right\rangle= & {} 1+ \frac{2s}{2s+1}|\left\langle {v^{(i)},u}\right\rangle |\in [1,1+2s). \end{aligned}$$

    Hence the maximum value of \(\left\langle {v^{(i-1)},u'}\right\rangle \) when \(u'\in U_{i-1}\) is attained for some \(u'\) of the form

    $$\begin{aligned} u'= & {} e_{i-1}+\frac{1}{s(2s+1)}u \text { with } u\in U_i^+ \text { or}\\ u'= & {} e_{i-1}-\frac{1}{s(2s+1)}u \text { with } u\in U_i^- \end{aligned}$$

    such that \(|\left\langle {v^{(i)},u}\right\rangle |\) is maximised. This shows that

    $$f_{i-1}(v^{(i-1)})=1+\frac{2s}{2s+1}f_{i}(v^{(i)}).$$

So far we have that \(f_{i-1}(v^{(i-1)})\in [1,1+2s)\). We proceed to show that the last property holds. To this end, first note:

    $$\begin{aligned} \left\langle {v^{(i-1)},e_{i-1}}\right\rangle= & {} 1\\ \left\langle {v^{(i-1)},e_{j}}\right\rangle= & {} 2s^2 \left\langle {v^{(i)},e_j}\right\rangle \text { for } j\ge i. \end{aligned}$$

    Recalling that \(f_{i-1}(e_{i-1})=1\) and \(f_{i}(e_j)=s^2(2s+1)f_{i-1}(e_j)\) for \(j\ge i\), we conclude that:

    $$\begin{aligned} (1+2s)|\left\langle {v^{(i-1)},e_{j}}\right\rangle |f_{i-1}(e_j)=2|\left\langle {v^{(i)},e_j}\right\rangle | f_i(e_j) \text { for } j\ge i. \end{aligned}$$

    Therefore, using that \(f_{i-1}(v^{(i-1)})< 1+2s\le (1+2s)f_i(v^{(i)})\), we compute:

$$\begin{aligned} (2^{n-i+1}-1) f_{i-1}(v^{(i-1)})= & {} f_{i-1}(v^{(i-1)})+ 2(2^{n-i}-1)f_{i-1}(v^{(i-1)}) \\< & {} 1+2s + 2(2^{n-i}-1)f_{i-1}(v^{(i-1)})\\\le & {} (1+2s)(1+2f_i(v^{(i)})(2^{n-i}-1))\\ \text {(inductive hypothesis) }\le & {} (1+2s) \left( 1 + 2\sum _{j=i}^n |\left\langle {v^{(i)},e_j}\right\rangle |(1+2s)^{2(j-i)+1} f_i(e_j)\right) \\= & {} (1+2s)\left( |\left\langle {v^{(i-1)},e_{i-1}}\right\rangle | f_{i-1}(e_{i-1}) +\sum _{j=i}^n (1+2s)^{2(j-i)+2}|\left\langle {v^{(i-1)},e_j}\right\rangle |f_{i-1}(e_j)\right) \\ (\text { setting }i'=i-1)= & {} \sum _{j=i'}^n(1+2s)^{2(j-i')+1}|\left\langle {v^{(i')},e_j}\right\rangle |f_{i'}(e_j) \end{aligned}$$

    as required.

Now, letting s tend to zero, we see that the norm \(f_1\) and the witness \(v^{(1)}\) satisfy the conclusion of the proposition. \(\square \)
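To see the recursion in action, here is a sketch of the smallest non-trivial instance, n = 2; the value s = 0.01 is an arbitrary illustrative choice, and the resulting ratio \((2s+3)/(4s+1)\) tends to \(3=2^2-1\) as \(s\rightarrow 0\):

```python
# One step of the construction for n = 2 with s = 0.01 (illustrative value).
s = 0.01
e1, e2 = (1.0, 0.0), (0.0, 1.0)

# U_2 = {e2}, v^(2) = e2; the recursion gives U_1 and the witness v^(1):
U1 = [(1.0, 1.0 / (s * (2 * s + 1))),          # e1 + u/(s(2s+1)), u in U2+
      (1.0, -1.0 / (s * s * (2 * s + 1)))]     # e1 - u/(s^2(2s+1))
v1 = (1.0, 2 * s * s)                           # v^(1) = e1 + 2 s^2 v^(2)

def f1(v):
    return max(abs(v[0] * u[0] + v[1] * u[1]) for u in U1)

ratio = (abs(v1[0]) * f1(e1) + abs(v1[1]) * f1(e2)) / f1(v1)
# analytically ratio = (2s + 3) / (4s + 1)
```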

Proposition 7

Let \(n\ge 1\) be an integer and \(V=\mathbb {C}^n\). For any real number \(\varepsilon >0\), there is a norm \(f:V\rightarrow \mathbb {R}^+\) and a non-zero vector \(v\in \mathbb {R}^n\) such that:

  1. (1)

    f admits a unique up to equivalence f-maximal basis

    $$(e_1,e_2,\dots ,e_n)\subseteq \mathbb {R}^n,$$
  2. (2)
    $$\begin{aligned} (2^n-1-\varepsilon ) f(v)<\sum _{i=1}^n |\left\langle {v,e_i}\right\rangle |f(e_i). \end{aligned}$$

Proof

Let \(\varepsilon >0\) be fixed and \(e_1,\dots ,e_n\in \mathbb {R}^n\) be an orthonormal basis of V. We are going to construct a norm \(f:V\rightarrow \mathbb {R}^+\) whose unique f-maximal basis is \((e_1,\dots ,e_n)\) and satisfies the conclusion of the proposition. To this end consider a real number \(c\in (0,1)\) and an angle \(\alpha \in (0,\pi )\) whose precise values will be determined appropriately.

Given the constants c and \(\alpha \), we define the vectors \(u'_k\in V\) for \(k=1,2,\dots ,n\) as follows:

$$\begin{aligned} u'_1= & {} e_1\\ u'_{k+1}= & {} \sum _{j=1}^k e_j \sin ^{j-1}\alpha \cos \alpha + e_{k+1}\sin ^k \alpha . \end{aligned}$$

We set \(u_k=c^{k-1}u_k'\) and define the norm \(f:V\rightarrow \mathbb {R}^+\) as:

$$\begin{aligned} f(v) =\sup \{|\left\langle {v,u_k}\right\rangle | \,|\, k\le n\}. \end{aligned}$$

By Remark 1 we know that f is a semi-norm. Since \(f(e_k)\ge |\left\langle {e_k,u_k}\right\rangle |>0\), we see, again by Remark 1, that f is a norm. Further, by Lemma 1, we know that the maximum value of f on \(\mathbb {S}_1\) is attained at a vector that is collinear with one of the vectors \(u_k\) and equals \(\Vert u_k\Vert \). Since \(\Vert u_k\Vert =c^{k-1}\), we conclude that the first vector of the maximal basis is \(e_1=u_1\). Next, the subspace, \(V_1\), of V that is orthogonal to \(e_1\) is spanned by \((e_2,\dots ,e_n)\) and therefore \(f_1=f\upharpoonright V_1\) is actually:

$$\begin{aligned} f_1(v) = \sup \{|\left\langle {v,u_k-c^{k-1}e_1\cos \alpha }\right\rangle |\,|\, 2\le k\le n\}. \end{aligned}$$

Since \(\Vert u_k-c^{k-1}e_1\cos \alpha \Vert =c^{k-1}\sin \alpha \) for \(k\ge 2\), applying again Lemma 1, we conclude that the maximal value of \(f_1\) on \(\mathbb {S}_1\cap V_1\) is attained at \(e_2\). Proceeding inductively, we may prove that \((e_1,\dots ,e_n)\) is the unique, up to equivalence, f-maximal basis. Note that:

$$\begin{aligned} f(e_i)=\left\langle {e_i,u_i}\right\rangle =c^{i-1} \sin ^{i-1} \alpha . \end{aligned}$$

Let \(w_1>0\) and \(w_i=w_1\textrm{tg}^{i-1}\frac{\alpha }{2}\). It should be clear that:

$$\begin{aligned} w_i \cos {\alpha }+w_{i+1} \sin {\alpha }= & {} w_i (\cos {\alpha } +\sin {\alpha }\textrm{tg}\frac{\alpha }{2})\\= & {} w_i(2\cos ^2{\alpha /2}-1 + 2\sin ^2{\alpha /2})=w_i. \end{aligned}$$

Therefore, setting \(w=\sum _{i=1}^n w_i e_i\), we obtain that

$$\left\langle {w,u_k}\right\rangle =c^{k-1}\left\langle {w,u_k'}\right\rangle =c^{k-1}w_1$$

and thus \(|\left\langle {w,u_k}\right\rangle |=c^{k-1}w_1\le w_1\) with equality if and only if \(k=1\). Hence \(f(w)=w_1\).

On the other hand:

$$\begin{aligned} w_i f(e_i)= & {} c^{i-1}w_1 \textrm{tg}^{i-1} \frac{\alpha }{2}\sin ^{i-1}{\alpha }\\= & {} c^{i-1}w_1 \left( 2\sin ^2\frac{\alpha }{2}\right) ^{i-1}\\= & {} 2^{i-1} c^{i-1}w_1\sin ^{(2(i-1))}\frac{\alpha }{2}. \end{aligned}$$

It follows that:

$$\begin{aligned} \sum _{i=1}^n w_i f(e_i)\ge w_1 \sum _{i=1}^n 2^{i-1} [c^{i-1} \sin ^{2(i-1)}\frac{\alpha }{2}]. \end{aligned}$$

Clearly, letting c tend to 1 and \(\alpha \) tend to \(\pi -0\), the right hand side tends to \(w_1\sum _{i=1}^{n}2^{i-1}=(2^n-1)w_1=(2^n-1)f(w)\). Thus, for any \(\varepsilon >0\), we can find appropriate \(c\in (0,1)\) and \(\alpha \in (0,\pi )\) such that \((2^{n}-1-\varepsilon )f(w)<\sum _{i=1}^n w_i f(e_i)\). \(\square \)
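The construction is easy to instantiate numerically. In the sketch below, n = 3, c = 0.999 and \(\alpha =\pi -0.01\) are arbitrary illustrative choices; with these values the ratio already approaches \(2^3-1=7\):

```python
import math

n, c, alpha = 3, 0.999, math.pi - 0.01     # illustrative parameters

def u_prime(k):                # u'_k as defined in the proof, k = 1..n
    v = [0.0] * n
    for j in range(1, k):
        v[j - 1] = math.sin(alpha) ** (j - 1) * math.cos(alpha)
    v[k - 1] = math.sin(alpha) ** (k - 1)
    return v

u = [[c ** (k - 1) * x for x in u_prime(k)] for k in range(1, n + 1)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def f(v):
    return max(abs(dot(v, uk)) for uk in u)

e = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
w = [math.tan(alpha / 2) ** i for i in range(n)]     # w_1 = 1, w_i = tg^(i-1)(a/2)

# f(w) = w_1 = 1 by the telescoping identity; the ratio tends to 2^n - 1
ratio = sum(abs(w[i]) * f(e[i]) for i in range(n)) / f(w)
```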

5 Equivalence of bases

Definition 8

Let \(f:V\rightarrow \mathbb {R}^+\) be a norm. Let \((e_1,e_2,\dots ,e_n)\) be an orthonormal basis for V arranged in increasing order w.r.t. f, i.e.:

$$\begin{aligned} f(e_1)\le f(e_2)\le \dots \le f(e_n). \end{aligned}$$
  1. (1)

    For a constant \(c\in \mathbb {R}^+\), we say that \((e_1,e_2,\dots ,e_n)\) satisfies the property \(P_f(c)\) if for every \(\beta _1,\beta _2,\dots ,\beta _n\in \mathbb {F}\) it holds:

    $$\begin{aligned} c f\left( \sum _{i=1}^n \beta _i e_i\right) \ge \sum _{i=1}^n |\beta _i|f(e_i). \end{aligned}$$
  2. (2)

    More generally, for constants \(c_1,c_2,\dots ,c_n\in \mathbb {R}^+\), we say that \((e_1,\dots ,e_n)\) satisfies the property \(HP_f(c_1,c_2,\dots ,c_n)\) if for every i and every \(\beta _i,\beta _{i+1},\dots ,\beta _n\) it holds that:

$$\begin{aligned} c_i f\left( \sum _{j=i}^n \beta _j e_j\right) \ge \sum _{j=i}^n |\beta _j|f(e_j). \end{aligned}$$

Remark 2

With these notions, Theorem 5 states that there is an f-maximal basis that satisfies \(P_f(2^n-1)\). Furthermore, since every prefix of an f-maximal basis, \((b_1,b_2,\dots ,b_n)\), is a maximal basis for its linear span, it follows that the f-maximal basis from Theorem 5 actually satisfies \(HP_f(2^n-1,2^{n-1}-1,\dots ,1)\).

On the other hand, Theorem 4 states that every f-minimal basis satisfies \(P_f(2^n-1)\).

It is also obvious that if a basis \((e_1,\dots ,e_n)\) satisfies \(P_f(c)\), then it satisfies \(HP_f(c,c,\dots ,c)\). Conversely, if an orthonormal basis \((e_1,\dots ,e_n)\) satisfies \(HP_f(c_1,\dots ,c_n)\), then it satisfies \(P_f(c_1)\).

Lemma 9

Let \((b_1,b_2,\dots ,b_n)\) and \((e_1,e_2,\dots ,e_n)\) be orthonormal bases on V and \(f:V\rightarrow \mathbb {R}^+\) be a norm such that:

$$\begin{aligned} f(b_1)\le f(b_2)\le \dots \le f(b_n)&\text { and } \\ f(e_1)\le f(e_2)\le \dots \le f(e_n).&\end{aligned}$$

Then:

  (1)

    If \((b_1,\dots ,b_n)\) is f-minimal, then for every \(i\le n\), \( f(b_i)\le \sqrt{i} f(e_i)\),

  (2)

    If \((e_1,e_2,\dots ,e_n)\) satisfies \(HP_f(c_1,c_2,\dots ,c_n)\), then for every \(i\le n\) it holds that \(f(e_i)\le c_i\sqrt{i} f(b_i)\).

Proof

Let \(i\le n\) and note that \((b_1,b_2,\dots ,b_{i-1})\) spans an \((i-1)\)-dimensional subspace of V, whereas \((e_1,e_2,\dots ,e_{i})\) spans an i-dimensional subspace of V. Hence there is a non-zero vector \(b'\in span(e_1,\dots ,e_i)\) that is orthogonal to all the vectors \(b_1,b_2,\dots ,b_{i-1}\). Indeed, it is straightforward that the system:

$$\begin{aligned} \sum _{j=1}^i \alpha _j \left\langle {e_j,b_k}\right\rangle = 0 \text { for } k\le i-1 \end{aligned}$$

is underdetermined: it consists of \(i-1\) homogeneous equations in the i unknowns \(\alpha _1,\dots ,\alpha _i\). Hence it admits a non-zero solution \(\alpha =(\alpha _1,\dots ,\alpha _i)\) and, since \(e_1,\dots ,e_i\) are linearly independent, \(b'=\sum _{j=1}^i \alpha _j e_j\) is non-zero. Next, without loss of generality, we may and do assume that \(\Vert b'\Vert =1\). Thus \(b'\in \mathbb {S}_1\) and \(b'\) is orthogonal to all the vectors \(b_1,\dots ,b_{i-1}\). By the definition of \(b_i\), it follows that:

$$\begin{aligned} f(b_i)\le f(b')=f\left( \sum _{j=1}^i \alpha _j e_j\right) \le \sum _{j=1}^i |\alpha _j| f(e_j), \end{aligned}$$

where the last inequality follows by the triangle inequality. Finally, by the arrangement of the vectors \(e_1,\dots ,e_i\) we have \(f(e_j)\le f(e_i)\) for every \(j\le i\), and by the Cauchy-Schwarz inequality we have \(\sum _{j=1}^i |\alpha _j|\le \sqrt{i}\sqrt{\sum _{j=1}^i |\alpha _j|^2}=\sqrt{i}\). Summing up we obtain:

$$\begin{aligned} f(b_i)\le \sum _{j=1}^i |\alpha _j| f(e_j)\le \sum _{j=1}^i |\alpha _j| f(e_i)\le \sqrt{i} f(e_i). \end{aligned}$$

For the second part of the statement, we proceed similarly. Let \(i\le n\). Then \((b_1,\dots ,b_i)\) spans a linear space of dimension i, whereas \((e_1,\dots ,e_{i-1})\) spans a linear space of dimension \(i-1\). Then, as above, there is a non-zero vector \(v\in span(b_1,\dots ,b_i)\) that is orthogonal to all the vectors \(e_1,\dots ,e_{i-1}\). Without loss of generality we may and do assume that v is a unit vector. Since v is orthogonal to \(e_1,\dots ,e_{i-1}\), it belongs to the linear space spanned by \((e_i,\dots ,e_n)\). Hence v can be written as \(v=\sum _{j=i}^n \alpha _j e_j\). By property \(HP_f(c_1,\dots ,c_n)\) of the basis \((e_1,\dots ,e_n)\), we conclude that:

$$\begin{aligned} c_i f(v) = c_if\left( \sum _{j=i}^n \alpha _j e_j\right) \ge \sum _{j=i}^n |\alpha _j| f(e_j)\ge \sum _{j=i}^n |\alpha _j| f(e_i)\ge f(e_i), \end{aligned}$$

where the last but one inequality follows by the ordering of the vectors \((e_1,\dots ,e_n)\) and the last inequality follows from \(\sum _{j=i}^n|\alpha _j|\ge \sqrt{\sum _{j=i}^n|\alpha _j|^2}=1\), as v is a unit vector.

On the other hand, \(v\in span(b_1,\dots ,b_i)\) and hence \(v=\sum _{j=1}^i \beta _j b_j\). Since \(\Vert v\Vert =1\), we have that \(\sum _{j=1}^i |\beta _j|^2=1\). Therefore, applying the triangle inequality, we obtain:

$$\begin{aligned} f(v)=f\left( \sum _{j=1}^i \beta _j b_j\right) \le \sum _{j=1}^i |\beta _j| f(b_j)\le \sum _{j=1}^i |\beta _j|f(b_i) \le \sqrt{i} f(b_i), \end{aligned}$$

where the last but one inequality follows by the order of \((b_1,\dots ,b_n)\) and the last one is an application of the Cauchy-Schwarz inequality.

Summing up we get:

$$\begin{aligned} f(e_i) \le c_i f(v) \le c_i \sqrt{i} f(b_i) \end{aligned}$$

as claimed. \(\square \)
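The auxiliary vector \(b'\) used in the proof can be written down explicitly in the simplest non-trivial case \(i=2\): given \(e_1,e_2\) and \(b_1\), the vector \(\left\langle {e_2,b_1}\right\rangle e_1-\left\langle {e_1,b_1}\right\rangle e_2\) lies in \(span(e_1,e_2)\) and is orthogonal to \(b_1\). A minimal numerical sketch in the real case follows; the function names are our own, and the sketch assumes \(b_1\) is not orthogonal to \(span(e_1,e_2)\) (otherwise any unit vector of the span works and the formula degenerates to zero):

```python
import math

def dot(u, v):
    """Real inner product of two coordinate vectors."""
    return sum(a * b for a, b in zip(u, v))

def orth_in_span(e1, e2, b1):
    """A unit vector in span(e1, e2) orthogonal to b1 (real case).
    The homogeneous system <a1*e1 + a2*e2, b1> = 0 is underdetermined
    (one equation, two unknowns), so a non-zero solution always exists;
    here (a1, a2) = (<e2,b1>, -<e1,b1>) is such a solution."""
    a1, a2 = dot(e2, b1), -dot(e1, b1)
    v = [a1 * x + a2 * y for x, y in zip(e1, e2)]
    norm = math.sqrt(dot(v, v))  # non-zero under the stated assumption
    return [x / norm for x in v]
```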

Corollary 10

Let \(f:V\rightarrow \mathbb {R}^+\) be a norm and \((b_1,b_2,\dots ,b_n)\) and \((e_1,e_2,\dots ,e_n)\) be orthonormal bases such that:

$$\begin{aligned} f(b_1)\le f(b_2)\le \dots \le f(b_n)&\text { and } \\ f(e_1)\le f(e_2)\le \dots \le f(e_n).&\end{aligned}$$
  (1)

    If \((b_1,\dots ,b_n)\) and \((e_1,\dots ,e_n)\) are f-minimal, then

    $$\frac{1}{\sqrt{i}}\le \frac{f(b_i)}{f(e_i)}\le \sqrt{i}.$$
  (2)

    If \((b_1,\dots ,b_n)\) is an f-minimal and \((e_1,\dots ,e_n)\) is an f-maximal basis, then \(\frac{1}{\sqrt{i}(2^{n-i+1}-1)}\le \frac{f(b_i)}{f(e_i)}\le \sqrt{i}\).

  (3)

    If \((b_1,\dots ,b_n)\) and \((e_1,\dots ,e_n)\) are f-maximal, then

    $$\frac{1}{\sqrt{i}(2^{n-i+1}-1)}\le \frac{f(b_i)}{f(e_i)}\le \sqrt{i}(2^{n-i+1}-1).$$

Under certain additional assumptions, Lemma 9 can be inverted as follows:

Proposition 11

Assume that \(f:V\rightarrow \mathbb {R}^+\) is a norm, \(c\ge 1\) is a constant, and \((b_1,\dots ,b_n)\) and \((e_1,\dots ,e_n)\) are orthonormal bases for V such that:

$$\begin{aligned} f(b_1)\le f(b_2)\le \dots \le f(b_n) \\ f(e_1)\le f(e_2)\le \dots \le f(e_n) \\ \forall i (f(e_i)/c\le f(b_i)\le c f(e_i)). \end{aligned}$$

If \((b_1,\dots ,b_n)\) satisfies \(P_f(c_1)\) and additionally there is \(\alpha \ge 1\) such that:

$$\begin{aligned} \forall i\ne j \exists \alpha _{i,j}(|\alpha _{i,j}|<\alpha \text { and } \left\langle {e_i-b_i,e_j-\alpha _{i,j} b_j}\right\rangle =0), \end{aligned}$$

then \((e_1,\dots ,e_n)\) satisfies \(P_f(\alpha c^2 c_1^2)\).

Proof

Since \((b_1,\dots ,b_n)\) satisfies \(P_f(c_1)\), applying the defining inequality to \(e_i=\sum _{j=1}^n \left\langle {e_i,b_j}\right\rangle b_j\) yields:

$$\begin{aligned} c_1 f(e_i)\ge \sum _{j=1}^n |\left\langle {e_i,b_j}\right\rangle | f(b_j). \end{aligned}$$

Hence, using \(f(b_i)\ge f(e_i)/c\) and \(f(b_j)\ge f(e_j)/c\):

$$\begin{aligned} c_1 f(b_i)\ge & {} c_1 f(e_i)/c \\\ge & {} \frac{1}{c}\sum _{j=1}^n |\left\langle {e_i,b_j}\right\rangle | f(b_j)\\\ge & {} \frac{1}{c^2}\sum _{j=1}^n |\left\langle {e_i,b_j}\right\rangle | f(e_j). \end{aligned}$$

Note that the condition \(\left\langle {e_i-b_i,e_j-\alpha _{i,j} b_j}\right\rangle =0\) can be rewritten as:

$$\begin{aligned} \left\langle {e_i,e_j}\right\rangle + \overline{\alpha _{i,j}}\left\langle {b_i,b_j}\right\rangle - \left\langle {b_i,e_j}\right\rangle - \overline{\alpha _{i,j}}\left\langle {e_i,b_j}\right\rangle =0. \end{aligned}$$

For \(i\ne j\) it holds that \(\left\langle {b_i,b_j}\right\rangle =\left\langle {e_i,e_j}\right\rangle =0\) and therefore for \(i\ne j\) we have:

$$\begin{aligned} |\left\langle {b_i,e_j}\right\rangle |= |\alpha _{i,j}| |\left\langle {e_i,b_j}\right\rangle |\le \alpha |\left\langle {e_i,b_j}\right\rangle |. \end{aligned}$$

Hence the above inequality implies (for \(j=i\) the bound \(|\left\langle {b_i,e_i}\right\rangle |\le \alpha |\left\langle {e_i,b_i}\right\rangle |\) holds trivially, as \(\alpha \ge 1\)) that:

$$\begin{aligned} c_1 f(b_i) \ge \frac{1}{c^2}\sum _{j=1}^n |\left\langle {e_i,b_j}\right\rangle | f(e_j) \ge \frac{1}{\alpha c^2}\sum _{j=1}^n |\left\langle {b_i,e_j}\right\rangle | f(e_j). \end{aligned}$$

Therefore:

$$\begin{aligned} \alpha c^2 c_1 f(b_i) \ge \sum _{j=1}^n |\left\langle {b_i,e_j}\right\rangle | f(e_j). \end{aligned}$$

Finally, for arbitrary v, the above inequality and the validity of \(P_f(c_1)\) for the basis \((b_1,\dots ,b_n)\) imply:

$$\begin{aligned} \alpha c^2 c_1^2 f(v)\ge & {} \alpha c^2 c_1 \sum _{i=1}^n |\left\langle {v,b_i}\right\rangle | f(b_i)\\\ge & {} \sum _{i=1}^n\sum _{j=1}^n |\left\langle {v,b_i}\right\rangle ||\left\langle {b_i,e_j}\right\rangle | f(e_j)\\= & {} \sum _{j=1}^n f(e_j) \sum _{i=1}^n |\left\langle {v,b_i}\right\rangle \left\langle {b_i,e_j}\right\rangle |\\\ge & {} \sum _{j=1}^n f(e_j)\left| \sum _{i=1}^n \left\langle {v,b_i}\right\rangle \left\langle {b_i,e_j}\right\rangle \right| \\= & {} \sum _{j=1}^n |\left\langle {v,e_j}\right\rangle | f(e_j). \end{aligned}$$

This proves that \((e_1,\dots ,e_n)\) satisfies \(P_f(\alpha c^2 c_1^2)\). \(\square \)

6 Open problems

The definitions of f-minimal and f-maximal bases of a norm suggest a simple greedy strategy to find an orthonormal basis \((e_1,\dots ,e_n)\) which satisfies the inequality:

$$\begin{aligned} f\left( \sum _{i=1}^n \alpha _i e_i\right) \ge \frac{1}{2^n-1}\sum _{i=1}^n |\alpha _i|f(e_i). \end{aligned}$$

Of course, the feasibility of this approach depends on whether one can efficiently solve the optimisation problems:

$$\begin{aligned} \max _{v\in \mathbb {S}_1} f(v) \text { or } \min _{v\in \mathbb {S}_1} f(v). \end{aligned}$$

Yet, since every (semi)norm f is convex and \(\mathbb {S}_1\) is compact, the minimisation problem in particular is well studied, and efficient methods for its solution are readily available.
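For illustration, the greedy strategy can be sketched in \(\mathbb {R}^2\), with a simple grid search over directions standing in for the optimisation step. This is a minimal sketch under our own naming (`greedy_minimal_basis_2d` and the sample norm are illustrations, not from the paper); a real implementation would use a proper optimisation routine:

```python
import math

def greedy_minimal_basis_2d(f, steps=3600):
    """Greedy f-minimal orthonormal basis of R^2: first pick a unit
    vector (approximately) minimising f by a grid search over half the
    circle (which suffices, since f(-v) = f(v) for a norm), then take
    a unit vector spanning its orthogonal complement."""
    b1, best = (1.0, 0.0), f((1.0, 0.0))
    for k in range(1, steps):
        t = math.pi * k / steps
        v = (math.cos(t), math.sin(t))
        if f(v) < best:
            b1, best = v, f(v)
    b2 = (-b1[1], b1[0])  # unit vector orthogonal to b1
    return b1, b2
```

For instance, for \(f(x)=|x_1|+3|x_2|\) this recovers the standard basis, and the guaranteed inequality then holds with the constant \(2^2-1=3\) (in fact, for this particular f, even with constant 1).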

However, a drawback of the greedy approach is that the constant \(2^n-1\) grows exponentially with n, which makes it difficult to obtain precise bounds in general. As we have proven in Proposition 6 and Proposition 7, the constant \(2^n-1\) cannot be improved under the suggested greedy strategy. Thus, the natural question that arises is how this constant can be improved while preserving the clear structure of the bases that it implies. In this respect, we consider the following theoretical problems.

First, for a natural number \(n\ge 1\) and an inner product space V, where \(V=\mathbb {C}^n\) or \(V=\mathbb {R}^n\), we define \(c_n^{\perp }:=c_n^{\perp }(V)\) to be the least real number such that for every norm \(f:V\rightarrow \mathbb {R}^+\) there is an orthonormal basis \((b_1,\dots ,b_n)\) such that:

$$\begin{aligned} f\left( \sum _{i=1}^n \alpha _i b_i\right) \ge \frac{1}{c_n^{\perp }}\sum _{i=1}^n |\alpha _i|f(b_i) \text { for all } \alpha _1,\dots ,\alpha _n \in \mathbb {F}. \end{aligned}$$

It is known that for \(V=\mathbb {R}^2\), \(c_2^{\perp }=2\), [6]. The construction in [6] relies upon defining appropriate areas and a continuity principle to show existence. Is there a more explicit way to define such a basis? To the best of the authors' knowledge, the techniques for \(n=2\) do not extend to higher dimensions.

Secondly, for a natural number \(n\ge 1\) and an inner product space V, where \(V=\mathbb {C}^n\) or \(V=\mathbb {R}^n\), we define \(c_n^{\angle }=c_n^{\angle }(V)\) to be the least real number such that for every norm \(f:V\rightarrow \mathbb {R}^+\) there is a basis \((b_1,\dots ,b_n)\) of unit vectors such that:

$$\begin{aligned} f\left( \sum _{i=1}^n \alpha _i b_i\right) \ge \frac{1}{c_n^{\angle }}\sum _{i=1}^n |\alpha _i|f(b_i) \text { for all } \alpha _1,\dots ,\alpha _n \in \mathbb {F}. \end{aligned}$$

It is known that for \(V=\mathbb {R}^2\), \(c_2^{\angle }=\frac{3}{2}\), [1]. This question is tightly related to John's Theorem [5], which relies on volume optimisation.

Both questions can be uniformly stated as follows. Let \(n\ge 1\), let \(V=\mathbb {C}^n\) or \(V=\mathbb {R}^n\), and let \(\alpha \in [0,1]\). Define \(c_n^{\alpha }:=c_n^{\alpha }(V)\) to be the least real number such that for every norm \(f:V\rightarrow \mathbb {R}^+\) there is a basis \((b_1,\dots ,b_n)\) of unit vectors such that (we write the coefficients as \(\beta _i\) to avoid a clash with the parameter \(\alpha \)):

$$\begin{aligned} f\left( \sum _{i=1}^n \beta _i b_i\right)\ge & {} \frac{1}{c_n^{\alpha }}\sum _{i=1}^n |\beta _i|f(b_i) \text { for all } \beta _1,\dots ,\beta _n \in \mathbb {F}\\ \text {subject to}: |\left\langle {b_i,b_j}\right\rangle |\le & {} \alpha \text { for all } i\ne j. \end{aligned}$$

In this framework, \(c_n^{\perp }(V)=c_n^{0}(V)\) and \(c_n^{\angle }(V)=c_n^{1}(V)\). We consider that the freedom to vary \(\alpha \) may be useful in applications where this kind of inequality is to be combined with other classical inequalities in which the scalar products of the basis vectors must be controlled.