1 Introduction

The determination of the best constants in important inequalities of harmonic analysis, or equivalently of the norms of important operators of harmonic analysis on appropriate spaces, has been an area of persistent investigation for decades. For instance, the norm of various variants of the Hardy–Littlewood maximal operator has attracted much attention. Indeed, the question of what this norm is for the centered Hardy–Littlewood maximal operator on \(L^p({\mathbb {R}}^n)\) for \(p>1\) remains open even in dimension 1, even though Stein [59] proved that these norms are uniformly bounded as the dimension grows. As for the operator corresponding to the centered weak type (1,1) inequalities whose domain is \(L^1({\mathbb {R}}^n)\), the question of Stein and Stromberg [60] about whether the norms are uniformly bounded in dimension is still unanswered, although Melas [44] determined the norm of this operator in dimension 1 in a culmination of years of effort by several authors. In a different line of investigation, the norm of the uncentered Hardy–Littlewood maximal operator was determined by Bernal [3] for the weak type (1,1) inequality in dimension 1, and by Grafakos and Montgomery-Smith [20] on \(L^p({\mathbb {R}}^n)\).

Arguably the most basic operator in harmonic analysis is the Fourier operator, i.e. the operator that takes a function to its Fourier transform; it is the common element of both real-variable harmonic analysis and the abstract theory of harmonic analysis on locally compact groups. Because of the nature of the Fourier transform, it is natural to ask not just about the boundedness of the Fourier operator on a given \(L^p\) space, but about its boundedness as an operator from \(L^p\) to \(L^q\). The boundedness of the Fourier operator from \(L^p({\mathbb {R}}^n)\) to \(L^{p'}({\mathbb {R}}^n)\), with \(p'\) the dual index to p and \(p\in (1,2]\), is precisely the content of the Hausdorff–Young inequality, and Beckner [2] obtained the norm of this operator in a celebrated paper. The more general question of the (p, q)-norm of the Fourier operator on \({\mathbb {R}}^n\) can be deduced from general results of Lieb [34] about best constants in a wide class of inequalities; indeed, it turns out that for (p, q) either in the set \(\{1< p\le 2, 1<q<\infty \}\) or in the set \(\{1<p<\infty , 2\le q<\infty \}\), the (p, q)-norm is infinite unless \(q= p'\). Somewhat surprisingly, however, it does not appear that the norm of the Fourier operator had been explored in the abstract group setting until recently, even though the question continues to make perfect sense there (even though maximal functions do not). The only work we are aware of beyond \({\mathbb {R}}^n\) is that of Gilbert and Rzeszotnik [18], who settled the determination of the (p, q)-norm of the Fourier operator for arbitrary finite abelian groups.

Our goal in this paper is to determine the (p, q)-norm of the Fourier operator for the larger class of compact or discrete abelian groups. Part of the motivation for this comes from the fact that the extremal functions obtained by Gilbert and Rzeszotnik for the case of finite abelian groups do not easily extend to cases where one does not have a discrete topology (e.g., the natural analogue of some of their extremal functions would be something akin to a Dirac delta function, but these fail to be in the nice function spaces of interest, and are also not straightforward to define and develop in the group setting, where one cannot work as concretely as in \({\mathbb {R}}^n\) or finite groups). As a consequence, our proof techniques, while building on those of Gilbert and Rzeszotnik, are necessarily more involved, and for example rely on explicit constructions even in cases where Gilbert and Rzeszotnik were able to make existence arguments suffice.

For finite abelian groups X, the norm of the Fourier transform as an operator from \(L^p(X)\) to \(L^q({\hat{X}})\) is clearly finite for all positive p and q, and this number is computed, as mentioned earlier, in [18]. Perhaps the main surprise in the more general situations of compact or discrete abelian groups is that, provided the group is not finite, there is a region in which the (p, q)-norm of the Fourier transform is infinite. More precisely, if X is compact and infinite, the (p, q)-norm of the Fourier transform is finite only in the region \(R_1\) described in Theorem 3.1, while if X is discrete and infinite, it is finite only in the region \(R'_2\) described in Theorem 4.1. The regions \(R_i\) that appear in Theorem 3.1 are almost the same as the regions \(R'_i\) that appear in Theorem 4.1; they differ only on their boundaries. In other words, denoting the interior of a set A by \(A^\circ \), we have that \(R_i^\circ =R_i^{'\circ }\) for \(i=1, 2, 3\); these regions are shown in Fig. 5. A subset of our main results was developed earlier by Fournier [16]: specifically, he showed that the norm of the Fourier transform is infinite in the region \(\{1\le p\le 2, \frac{1}{p}+\frac{1}{q}>1\}\) for X compact, and in the region \(\{1\le p\le 2, \frac{1}{p}+\frac{1}{q}<1\}\) for X discrete. We emphasize that our results cover all pairs (p, q) in the positive quadrant of the extended plane (appropriately interpreted when p or q is less than 1 and we are not dealing with a Banach space). In this sense, the range of values we consider in this paper is more general than the range considered in [18] for finite abelian groups, where only the usual (norm) case \(p\ge 1, q\ge 1\) is considered.

Our proofs of the compact and discrete cases are distinct. It is conceivable that one may be able to use an argument based on duality to derive one from the other; however, we found it more convenient to develop them separately, especially because, for the question to be well defined, we sometimes need to work with a subspace of \(L^p(X)\) rather than the whole space (as explained in the second paragraph of Sect. 2). Let us note in passing that Theorems 3.1 and 4.1 of course contain the Hausdorff–Young inequality (see, e.g., [2]) for compact or discrete LCA groups with the sharp constant \(C_{p,q}=1\) for \(\frac{1}{p}+\frac{1}{q}=1\).
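For a finite cyclic group, which is both compact and discrete, the sharp Hausdorff–Young inequality \(\Vert {\hat{f}}\Vert _{p'}\le \Vert f\Vert _p\) can be checked numerically. The following sketch is only an illustration; the normalization (probability Haar measure on \(X={\mathbb {Z}}/N\), counting measure on the dual) and the random test functions are our choices:

```python
import numpy as np

def fourier(f):
    # \hat f(k) = (1/N) sum_x f(x) e^{-2 pi i k x / N}: probability Haar measure on X
    N = len(f)
    x = np.arange(N)
    return np.array([(f * np.exp(-2j * np.pi * k * x / N)).mean() for k in range(N)])

def norm_X(f, p):      # L^p norm w.r.t. the probability Haar measure on X
    return (np.abs(f) ** p).mean() ** (1 / p)

def norm_dual(F, q):   # L^q norm w.r.t. counting measure on the (discrete) dual group
    return (np.abs(F) ** q).sum() ** (1 / q)

rng = np.random.default_rng(0)
N = 32
for p in (1.25, 1.5, 2.0):
    q = p / (p - 1)    # the dual index p'
    f = rng.normal(size=N) + 1j * rng.normal(size=N)
    assert norm_dual(fourier(f), q) <= norm_X(f, p) + 1e-9
```

At \(p=2\) the two sides agree up to floating-point error (Plancherel); for \(p<2\) the inequality is generically strict.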

As is well known, the Hausdorff–Young inequality for \({{\mathbb {R}}}^n\), used together with the associated sharp constant, yields on differentiation a sharp uncertainty principle for the Fourier transform on \({{\mathbb {R}}}^n\) expressed in terms of entropy; in particular, the Heisenberg uncertainty principle for position and momentum observables can be extracted as a consequence. For some parameter ranges, our results can analogously be interpreted in terms of certain uncertainty principles for the Fourier transform on LCA groups.

Section 2 contains some preliminary material on abstract harmonic analysis as well as probability that we will need to set notation and for our proofs. Section 3 states and proves the main result for compact abelian groups, while Sect. 4 states and proves the main result for discrete abelian groups.

There is a large literature on uncertainty principles in both Euclidean and abstract settings. There are many different ways to express the intuition that both a function and its Fourier transform cannot be simultaneously too concentrated. For the standard setting of the Euclidean spaces \({{\mathbb {R}}}^n\), these include the following formulations:

1. Hardy-type uncertainty principles: The simplest forms of these assert that if both f and \({\hat{f}}\) are non-zero and sub-gaussian, then the constants in the bounding Gaussian functions must be constrained. See, e.g., [4, 7, 21, 26].

2. Heisenberg-type uncertainty principles: These assert that the product of the variances of conjugate densities must be bounded from below by a positive constant. See, e.g., [5, 9, 31, 33, 53, 64].

3. Uncertainty principles à la Amrein–Berthier or Logvinenko–Sereda: These assert that if the energy of f is largely concentrated in a “small” set E and that of \({\hat{f}}\) is largely concentrated in a “small” set F, then f itself must have small energy (\(L^2\)-norm). See, e.g., [23, 32, 35, 47, 48] for statements where “small” means compact, and [1, 27, 45] for statements where “small” means of finite Lebesgue measure.

4. Entropic uncertainty principles: These assert that the sum of entropies of a density and its conjugate density is bounded from below by a constant. See [2, 11, 24, 58] for various statements of this type involving Shannon entropy, [10] for an exploration of extremals, and [29, 65] for extensions from Shannon entropy to the more general Rényi entropies (the papers [10, 65] also discuss finite cyclic groups and the integers).

Efforts have been made to find more general expressions of each of these forms of uncertainty principles that can apply to settings more general than Euclidean spaces or to more general objects than the Fourier transform; a detailed discussion of the literature is beyond the scope of this paper but the references given above together with the surveys [13, 15, 22, 55, 62] provide a starting point.

In Sect. 5, we discuss the implications of our results for entropic uncertainty principles on compact or discrete abelian groups (expressed using the general class of Rényi entropies), and compare with results that can be deduced directly from the Hausdorff–Young inequality. Once again it is particularly interesting to note that there is a region of (p, q)-space in which the natural weighted uncertainty inequality involving Rényi entropies of orders p and q does not hold.

Other papers that explore uncertainty principles in the setting of LCA groups include [25, 42, 57], which study Amrein–Berthier-type phenomena, and [46, 50], which study entropic uncertainty principles expressed in terms of Shannon entropy. Note that it is unclear how to directly extend Heisenberg-type inequalities involving the variance from \({{\mathbb {R}}}^n\) to LCA groups, where the moments of a group-valued random variable are not well defined; this is one advantage of entropic uncertainty principles, which imply Heisenberg-type uncertainty principles in the Euclidean case, but can be freely formulated for groups since the entropy is defined only in terms of the density and not its moments.

We mention in passing some other directions of interest that we do not address in this work. It is, of course, of great interest to obtain norms of Fourier transforms, and sharp constants in uncertainty principles, for nonabelian locally compact groups (starting with the most common Lie groups) or other general structures such as metric measure spaces, but this is a much harder project that we do not address at all. For the current state of knowledge on such questions, the reader may consult [6, 8, 30, 41, 49] and references therein. In another direction, the so-called entropy power inequality due to Shannon [56] and Stam [58] (see, e.g., [36] and references therein for recent developments) for convolutions of probability measures on a Euclidean space is known to be one route to obtaining uncertainty principles since the work of Stam [58]; there has been much recent work on developing generalizations of this inequality both to the broader class of Rényi entropies (see, e.g., [11, 37, 38, 63]) and to groups beyond the additive group \({{\mathbb {R}}}^n\) (see, e.g., [28, 39, 40]), but we do not discuss these further in this work.

2 Preliminaries

Since this section contains basic material on the Fourier transform on LCA groups that can be found in textbooks (see, e.g., [54]), we do not include most of the proofs.

Let X be an LCA group with a Haar measure \(\alpha \). Let \({\hat{X}}\) be its dual group (i.e. the set of continuous group homomorphisms from X to the unit circle \({\mathbb {T}}\) in the complex plane). Consider \(1\le p,q\le \infty \) and the spaces \(L^p(X)\) and \(L^q({\hat{X}})\) with the corresponding Haar measures. For an integrable function \(f:X\rightarrow {\mathbb {C}}\), we define its Fourier transform by

$$\begin{aligned} {\hat{f}}(\gamma )=\int _Xf(x)\gamma (-x)\alpha (dx). \end{aligned}$$
(1)

Note that the Fourier transform is a linear transform and can be extended, in a unique manner, to an isometry of \(L^2(X)\) onto \(L^2({\hat{X}})\). If X is a compact LCA group (in which case \({\hat{X}}\) is discrete and the Haar measure \(\hat{\alpha }\) on \({\hat{X}}\) is a multiple of the counting measure), define the norm of the Fourier transform from \(L^1(X)\cap L^p(X)\) to \(L^q({\hat{X}})\) for \(0< p\le \infty \) by

$$\begin{aligned} C_{p,q}:=\sup _{\Vert f\Vert _p=1}\Vert {\hat{f}}\Vert _q. \end{aligned}$$
(2)

Note that if \(p\ge 1\), then \(L^1(X)\cap L^p(X)=L^p(X)\), while if \(0<p<1\), then \(L^1(X)\cap L^p(X)=L^1(X)\). The reason we define the Fourier transform on \(L^1(X)\cap L^p(X)\) rather than on \(L^p(X)\) for \(0<p<1\) is that there is no direct definition of the Fourier transform for functions in \(L^p(X)\) that are not integrable. Similarly, if X is a discrete LCA group (in which case \({\hat{X}}\) is compact), define the norm of the Fourier transform from \(L^2(X)\cap L^p(X)\) to \(L^q({\hat{X}})\) for \(0< p\le \infty \) by

$$\begin{aligned} C_{p,q}:=\sup _{\Vert f\Vert _p=1}\Vert {\hat{f}}\Vert _q. \end{aligned}$$
(3)

Note that if \(0< p\le 2\), then \(L^2(X)\cap L^p(X)=L^p(X)\), while if \(p>2\), then \(L^2(X)\cap L^p(X)=L^2(X)\). The goal of this paper is to determine the explicit value of the norm of the Fourier operator for arbitrary \(0< p,q\le \infty \).

Definition 2.1

Suppose that X is an LCA group with a Haar measure \(\alpha \). For any f and g in \(L^2(X)\), define the inner product on \(L^2(X)\) by

$$\begin{aligned} \langle f,g\rangle :=\int _X f(x)\overline{g(x)}\alpha (dx). \end{aligned}$$
(4)

Proposition 2.2

Suppose that \((X,+)\) is a compact abelian group. Then the elements of \(({\hat{X}},\cdot )\) form an orthogonal basis of \(L^2(X)\); we call this basis the frequency basis of X. If \((X,\cdot )\) is a discrete abelian group, then the delta functions

$$\begin{aligned} {\mathbb {I}}_{x_0}(x):={\left\{ \begin{array}{ll} 1 &{}\text{ if } x=x_0\\ 0 &{}\text{ otherwise } \end{array}\right. } \end{aligned}$$
(5)

form an orthogonal basis of \(L^2(X)\); we call this basis the time basis of \(L^2(X)\).

Definition 2.3

(Normalize the Haar Measures) Throughout this paper, we will normalize the Haar measures in the following way:

  • Let X be a compact LCA group with a Haar measure \(\alpha \). Suppose \({\hat{X}}\) is the dual group of X with the dual Haar measure \(\hat{\alpha }\), then \(\hat{\alpha }\) can be normalized so that for every \(y\in {\hat{X}}\), \(\hat{\alpha }(y)=\alpha (X)^{-1}\).

  • Let X be a discrete LCA group with a Haar measure \(\alpha \). Suppose \({\hat{X}}\) is the compact dual group of X with the Haar measure \(\hat{\alpha }\) normalized so that for every \(x\in X\), \(\alpha (x)=\hat{\alpha }({\hat{X}})^{-1}\).

Proposition 2.4

Suppose that X is either a compact or a discrete LCA group with a Haar measure \(\alpha \). Then, under the normalization of \(\hat{\alpha }\) in Definition 2.3, the Plancherel identity holds for every function \(f\in L^2(X)\):

$$\begin{aligned} \Vert f\Vert _2=\Vert {\hat{f}}\Vert _2. \end{aligned}$$
(6)
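For \(X={\mathbb {Z}}/N\) (finite, hence both compact and discrete), the normalization of Definition 2.3 and the identity (6) can be verified directly. A minimal numerical sketch, where the choice \(\alpha (\{x\})=1\) (so \(\alpha (X)=N\) and \(\hat{\alpha }(\{y\})=1/N\)) is ours:

```python
import numpy as np

N = 16
alpha_point = 1.0             # Haar mass of each point of X = Z/N, so alpha(X) = N
alpha_hat_point = 1.0 / N     # dual mass prescribed by Definition 2.3: alpha(X)^{-1}

def fourier(f):
    # \hat f(k) = sum_x f(x) e^{-2 pi i k x / N} * alpha({x}), as in (1)
    x = np.arange(N)
    return np.array([(f * np.exp(-2j * np.pi * k * x / N)).sum() * alpha_point
                     for k in range(N)])

rng = np.random.default_rng(1)
f = rng.normal(size=N) + 1j * rng.normal(size=N)
lhs = np.sqrt((np.abs(f) ** 2 * alpha_point).sum())               # ||f||_2
rhs = np.sqrt((np.abs(fourier(f)) ** 2 * alpha_hat_point).sum())  # ||f^||_2
assert abs(lhs - rhs) < 1e-9   # Plancherel identity (6)
```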

Theorem 2.5

Suppose that X is a compact LCA group with a Haar measure \(\alpha \), and let \({\hat{X}}\) be its dual group with the Haar measure \(\hat{\alpha }\) normalized so that the Plancherel identity is valid. If a function f on X has Fourier transform \({\hat{f}}\in L^{p}({\hat{X}})\) for some \(p\in (0,2]\), then \(\hat{{\hat{f}}}(x)=f(-x)\). On the other hand, if X is a discrete LCA group and f is a function whose Fourier transform \({\hat{f}}\in L^p({\hat{X}})\) for some \(p\in [2,\infty ]\), then \(\hat{{\hat{f}}}(x)=f(-x)\).

The crucial tool for calculating all the finite operator norms in this paper, as in the paper of Gilbert and Rzeszotnik [18], is the Riesz–Thorin theorem.

Theorem 2.6

(Riesz–Thorin Convexity Theorem) Define \(K(1/p,1/q):=C_{p,q}\), where \(C_{p,q}\) is the norm of the Fourier operator defined in (2). Then \(\log K(x,y)\) is a convex function on \([0,\infty )\times [0,1]\).

As in [18, Corollary 2.2], one has the following consequence of the Riesz–Thorin theorem.

Corollary 2.7

If f is an affine function and \(\log K(p)\le f(p)\) for all p in a finite set \(P\subset [0,\infty )\times [0,1]\), then \(\log K(p)\le f(p)\) for all \(p\in \text{ hull }(P)\), where

$$\begin{aligned}&P:=\{p_1,p_2,\ldots ,p_l\}\\&\text{ hull }(P):=\Big \{\sum _{p_i\in P} \lambda _ip_i:\lambda _i\ge 0~\text{ for } \text{ all }~i~\text{ and }~\sum _{i}\lambda _i=1\Big \}. \end{aligned}$$

We need some facts about lacunary series (see, e.g., [66]), which are series in which the terms that differ from zero are “very sparse”.

Definition 2.8

A lacunary trigonometric series is a series that can be written in the form

$$\begin{aligned} \sum _{k=1}^\infty (a_k\cos n_kx+b_k\sin n_kx)=:\sum _{k=1}^\infty A_{n_k}(x) , \end{aligned}$$
(7)

where the sequence \((n_k:k\in {\mathbb {N}})\) satisfies

$$\begin{aligned} \frac{n_{k+1}}{n_k} \ge \Lambda >1 \end{aligned}$$

for each \(k\in {\mathbb {N}}\).

Theorem 2.9

[66, Vol. 2, p. 264] Consider a lacunary trigonometric series

$$\begin{aligned} \sum _{k=1}^\infty (a_k\cos n_kx+b_k\sin n_kx)=\sum r_k\cos (n_kx+x_k)=:\sum A_{n_k}(x) \end{aligned}$$
(8)

where \(n_{k+1}/n_k\ge \Lambda >1\) and \(r_k\ge 0\) for all k. Write

$$\begin{aligned} S_k(x)&:=\sum _{j=1}^k(a_j\cos n_jx+b_j\sin n_jx),\\ A_k&:=\left( \frac{1}{2}\sum _{j=1}^k(a_j^2+b_j^2)\right) ^{1/2}. \end{aligned}$$

Suppose that \(A_k\) and \(r_k\) satisfy

$$\begin{aligned} (i)~A_k\rightarrow \infty ,~~(ii)~r_k/A_k\rightarrow 0. \end{aligned}$$
(9)

Under the hypothesis (9), the functions \(S_k(x)/A_k\) are asymptotically Gaussian distributed on each set \(E\subset (0,2\pi )\) of positive measure, that is,

$$\begin{aligned} \frac{\lambda \left\{ x\in E:~S_k(x)/A_k\ge y\right\} }{\lambda (E)}\rightarrow \frac{1}{\sqrt{2\pi }} \int _y^{\infty }e^{-x^2/2}dx \end{aligned}$$
(10)

where \(\lambda \) is the Lebesgue measure on \([0,2\pi ]\).
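The limit theorem can be illustrated empirically. The sketch below (with our choices \(n_k=3^k\), \(a_k=1\), \(b_k=0\), \(E=(0,2\pi )\)) checks only the first two moments and the median of \(S_K/A_K\) against the standard normal limit, not the full distributional statement:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 20
n = 3 ** np.arange(1, K + 1)              # n_k = 3^k, so n_{k+1}/n_k = 3 = Lambda > 1
x = rng.uniform(0, 2 * np.pi, size=100_000)

S = np.cos(np.outer(x, n)).sum(axis=1)    # S_K(x) with a_k = 1, b_k = 0
A = np.sqrt(K / 2)                        # A_K = ((1/2) sum a_k^2)^{1/2}
Z = S / A

# first two moments and the median of the standard normal limit
assert abs(Z.mean()) < 0.05
assert abs(Z.std() - 1.0) < 0.1
assert abs((Z >= 0).mean() - 0.5) < 0.02
```

The choice \(n_k=3^k\) makes the summands exactly orthogonal, so \(\mathrm{Var}(S_K/A_K)=1\) at every finite K, while the lacunarity drives the Gaussian limit.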

Finally we need the Lyapunov Central Limit Theorem.

Theorem 2.10

Suppose \(\{Y_1, Y_2, \ldots \}\) is a sequence of independent random variables, each with finite expected value \(\mu _k\) and variance \(\sigma _k^2\). Define

$$\begin{aligned} s_n^2:=\sum _{k=1}^n\sigma _k^2. \end{aligned}$$

If for some \(\delta >0\), the Lyapunov condition

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{s_n^{2+\delta }}\sum _{k=1}^n{\mathbb {E}}(|Y_k-\mu _k|^{2+\delta })=0 \end{aligned}$$
(11)

is satisfied, then as \(n\rightarrow \infty \),

$$\begin{aligned} \frac{1}{s_n}\sum _{k=1}^n(Y_k-\mu _k)\xrightarrow {d}{\mathcal {N}}(0,1) \end{aligned}$$
(12)

where \({\mathcal {N}}(0,1)\) is the standard normal distribution.
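A quick empirical sanity check of the theorem; the choice of \(Y_k\) uniform on [0, k], which satisfies the Lyapunov condition with \(\delta =1\), is ours:

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 200, 50_000
k = np.arange(1, n + 1)

# Y_k uniform on [0, k]: mu_k = k/2, sigma_k^2 = k^2/12 (our choice of sequence;
# E|Y_k - mu_k|^3 = k^3/32 grows slowly enough for (11) with delta = 1)
Y = rng.uniform(0, k, size=(trials, n))
mu = k / 2
s_n = np.sqrt((k ** 2 / 12).sum())

Z = (Y - mu).sum(axis=1) / s_n            # the standardized sum in (12)
assert abs(Z.mean()) < 0.02
assert abs(Z.std() - 1.0) < 0.02
assert abs((np.abs(Z) <= 1.96) .mean() - 0.95) < 0.01   # ~95% within +-1.96
```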

3 The Compact Case

From now on, we always assume that our LCA groups are non-finite, and we write the group operation as “\(+\)” when the group is compact and as “\(\cdot \)” when it is discrete.

Theorem 3.1

If X is a compact non-finite LCA group, we consider three regions as in Fig. 1:

$$\begin{aligned}&R_1:=\Big \{\Big (\frac{1}{p},\frac{1}{q}\Big )\in [0,\infty )^2:\frac{1}{p}+\frac{1}{q}\le 1, \frac{1}{q}\le \frac{1}{2}\Big \} \end{aligned}$$
(13)
$$\begin{aligned}&R_2:=\Big \{\Big (\frac{1}{p},\frac{1}{q}\Big )\in [0,\infty )^2:\frac{1}{p}+\frac{1}{q}> 1, \frac{1}{p}> \frac{1}{2}\Big \} \end{aligned}$$
(14)
$$\begin{aligned}&R_3:=\Big \{\Big (\frac{1}{p},\frac{1}{q}\Big )\in [0,\infty )^2:\frac{1}{p}\le \frac{1}{2}, \frac{1}{q}>\frac{1}{2}\Big \}. \end{aligned}$$
(15)

Then the norm of the Fourier operator from \(L^p(X)\) to \(L^q({\hat{X}})\) satisfies

$$\begin{aligned} C_{p,q}={\left\{ \begin{array}{ll} \alpha (X)^{1-1/p-1/q} &{} \text{ if }~\Big (\frac{1}{p},\frac{1}{q}\Big )\in R_1\\ \infty &{} \text{ if }~\Big (\frac{1}{p},\frac{1}{q}\Big )\in R_2\cup R_3. \end{array}\right. } \end{aligned}$$

Proposition 3.2

If \((1/p,1/q)\in R_1\), then \(C_{p,q}= \alpha (X)^{1-1/p-1/q}\).

Proof of Proposition 3.2

We first prove that \(C_{p,q}\le \alpha (X)^{1-1/p-1/q}\). Since the region \(R_1\) is convex, with

$$\begin{aligned} R_1=\text{ hull }\Big ((0,0),(1,0),(1/2,1/2),(0,1/2)\Big ), \end{aligned}$$

and \(\log \alpha (X)^{1-1/p-1/q}\) is affine in (1/p, 1/q), it suffices by Corollary 2.7 to check that \(C_{p,q}\le \alpha (X)^{1-1/p-1/q}\) holds at the points (0, 0), (1, 0), (1/2, 1/2), (0, 1/2).

Fig. 1

The three regions demarcated by the lines in the figure are the interiors of the regions that show up in Theorems 3.1 and 4.1. For an infinite LCA group X, the (p, q)-norm of the Fourier transform is finite precisely on \(R_1\) when X is compact and precisely on \(R'_2\) when X is discrete; in particular, it is infinite on the third region in either case.

At the point (0, 0), we have

$$\begin{aligned} \Vert {\hat{f}}\Vert _\infty \le \Vert f\Vert _1\le \alpha (X)\Vert f\Vert _\infty \end{aligned}$$
(16)

so \(C_{\infty ,\infty }\le \alpha (X)\).

At the point (1, 0), we have

$$\begin{aligned} \Vert {\hat{f}}\Vert _\infty \le \Vert f\Vert _1 \end{aligned}$$
(17)

so \(C_{1,\infty }\le 1\).

At the point (1/2, 1/2), we have \(C_{2,2}=1\) by the fact that the Fourier transform is unitary.

At the point (0, 1/2), we have

$$\begin{aligned} \Vert {\hat{f}}\Vert _2=\Vert f\Vert _2=\Big (\int _X|f|^2\alpha (dx)\Big )^{1/2}\le \alpha (X)^{1/2}\Vert f\Vert _\infty \end{aligned}$$
(18)

so \(C_{\infty ,2}\le \alpha (X)^{1/2}\). This completes the proof of the upper bound \(C_{p,q}\le \alpha (X)^{1-1/p-1/q}\) on \(R_1\).
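These four endpoint bounds can be spot-checked numerically on \(X={\mathbb {Z}}/N\); the normalization (counting measure on X, so \(\alpha (X)=N\), and dual point mass 1/N as in Definition 2.3) and the random test functions are our choices:

```python
import numpy as np

N = 8
x = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(x, x) / N)   # \hat f = W @ f, with alpha({x}) = 1

def norm_X(f, p):      # L^p norm on X, each point of Haar mass 1
    return np.abs(f).max() if np.isinf(p) else (np.abs(f) ** p).sum() ** (1 / p)

def norm_dual(F, q):   # L^q norm on the dual, each point of mass 1/N
    return np.abs(F).max() if np.isinf(q) else ((np.abs(F) ** q).sum() / N) ** (1 / q)

rng = np.random.default_rng(4)
corners = [(np.inf, np.inf), (1, np.inf), (2, 2), (np.inf, 2)]  # (p, q) at the four points
for p, q in corners:
    bound = N ** (1 - (0 if np.isinf(p) else 1 / p) - (0 if np.isinf(q) else 1 / q))
    for _ in range(100):
        f = rng.normal(size=N) + 1j * rng.normal(size=N)
        assert norm_dual(W @ f, q) <= bound * norm_X(f, p) + 1e-9
```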

Further, the upper bound \(\alpha (X)^{1-1/p-1/q}\) is attained by any frequency basis element \(\gamma \in {\hat{X}}\). Indeed, since \(|\gamma |\equiv 1\), we have \(\Vert \gamma \Vert _p=\alpha (X)^{1/p}\). Moreover, every \(\gamma \in {\hat{X}}\) is a continuous homomorphism from X to \({\mathbb {T}}\), and the Fourier transform \(\hat{\gamma }\) of \(\gamma \) is a delta function on \({\hat{X}}\):

$$\begin{aligned} \hat{\gamma }(y)={\left\{ \begin{array}{ll} \alpha (X) &{}\text{ if }~y=\gamma \\ 0 &{}\text{ if }~y\ne \gamma . \end{array}\right. } \end{aligned}$$
(19)

By (19) and Definition 2.3, \(\Vert \hat{\gamma }\Vert _q=\alpha (X)^{1-1/q}\). This yields \(C_{p,q}\ge \alpha (X)^{1-1/p-1/q}\), which ends the proof. \(\square \)
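The extremal computation above can be reproduced on the finite cyclic group \(X={\mathbb {Z}}/N\) (an illustration only; the normalization \(\alpha (\{x\})=1\), so \(\alpha (X)=N\) and dual point mass 1/N, is our choice):

```python
import numpy as np

N, m = 16, 3
x = np.arange(N)
gamma = np.exp(2j * np.pi * m * x / N)   # a nontrivial character of Z/N

def fourier(f):
    # alpha({x}) = 1, so alpha(X) = N; each dual point then has mass 1/N
    return np.array([(f * np.exp(-2j * np.pi * k * x / N)).sum() for k in range(N)])

def norm_X(f, p):
    return (np.abs(f) ** p).sum() ** (1 / p)

def norm_dual(F, q):
    return ((np.abs(F) ** q).sum() / N) ** (1 / q)

# \hat gamma equals alpha(X) = N at gamma itself and 0 elsewhere, as in (19)
F = fourier(gamma)
assert abs(F[m] - N) < 1e-8
assert max(abs(F[k]) for k in range(N) if k != m) < 1e-8

for p, q in [(2, 2), (1.5, 3), (4, 8)]:   # points of R_1
    ratio = norm_dual(F, q) / norm_X(gamma, p)
    assert abs(ratio - N ** (1 - 1 / p - 1 / q)) < 1e-9
```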

Proposition 3.3

If \((1/p,1/q)\in R_2\), then \(C_{p,q}= \infty \).

We will actually prove a stronger statement: if \((1/p,1/q)\in [0,\infty )^2\) with \(1/p+1/q>1\), then \(C_{p,q}=\infty \). Since X is compact, the Haar measure \(\alpha \) on X is finite; without loss of generality, we assume that \(\alpha \) is a probability measure \({\mathbb {P}}\). On the other hand, since \({\hat{X}}\) is discrete, we have three possible cases:

  • Case 1. \({\hat{X}}\) has an element of infinite order.

  • Case 2. Every element of \({\hat{X}}\) has finite order, and the set of orders is unbounded.

  • Case 3. Every element of \({\hat{X}}\) has finite order, and the set of orders is uniformly bounded.

We will prove Proposition 3.3 through these three cases after the following lemmas, which will be used repeatedly.

Lemma 3.4

Suppose that X is a compact LCA group. Then for every element \(g\in {\hat{X}}\), the image g(X) is a closed subgroup of \({\mathbb {T}}\):

  • If the order of g is infinite, then \(g(X)={\mathbb {T}}\).

  • If the order of g is \(m<\infty \), then \(g(X)=[e^{2\pi i/m}]:=\{e^{2\pi ik/m},~k=0,1,2,\ldots ,m-1\}\).

Lemma 3.4 is a well-known result; we omit the proof.

Lemma 3.5

Suppose X is a compact LCA group and \(g\in {\hat{X}}\). Treating g as a random variable from X to \({\mathbb {T}}\), we have:

  • If the order of g is infinite, denote by \(\lambda '\) the probability measure on \({\mathbb {T}}\) induced by the distribution of the random variable g, i.e. for every measurable set \(M\subset {\mathbb {T}}\), \(\lambda '(M):={\mathbb {P}}(x\in X:~g(x)\in M)\). Denote by \(\lambda \) the Lebesgue measure on \({\mathbb {T}}\). Then \(\lambda '=\lambda /2\pi \) and \(\lambda (g(X))=2\pi \).

  • If the order of g is \(m<\infty \), then \(g: X\rightarrow [e^{2\pi i/m}]\) is uniformly distributed on \([e^{2\pi i/m}]\).

Proof of Lemma 3.5

We have two claims:

Claim 1. For any \(e^{ia}\in S:=g(X)\) and any measurable subset E of \({\mathbb {T}}\), we have

$$\begin{aligned} \lambda '(E)=\lambda '(e^{ia}E) \end{aligned}$$
(20)

where \(\lambda '\) is the induced probability measure and \(e^{ia}E:=\{e^{ia+ix}:e^{ix}\in E\}\).

To see this, note that if \(g(x_0)=e^{ia}\), then

$$\begin{aligned} \{x_0x:g(x)\in E\}=\{y:g(y)\in e^{ia}E\}. \end{aligned}$$
(21)

In fact, for every \(x_0x\) contained in the left hand side of (21), we have

$$\begin{aligned} g(x_0x)=g(x_0)g(x)=e^{ia}g(x)\in \{y:g(y)\in e^{ia}E\}. \end{aligned}$$

On the other hand, for every y contained in the right hand side of (21), we have \(y=x_0x\) for some \(x\in X\), and

$$\begin{aligned} g(x)=g(x_0)^{-1}g(y)\in e^{-ia}e^{ia}E=E \end{aligned}$$

which provides (21) and hence (20) by the fact that \({\mathbb {P}}\) is a Haar measure.

If g has order m, then it follows easily from Claim 1 that g is uniformly distributed on \([e^{2\pi i/m}]\).

Claim 2. If g has infinite order, then the induced probability measure \(\lambda '\) on \({\mathbb {T}}\) is a continuous measure, i.e. the distribution function on \([0,2\pi )\) is continuous.

To see this, suppose that \(\lambda '\) is not a continuous measure. Since the distribution function on \([0,2\pi )\) is monotone, it has at most countably many jumps. Assume \(y_0\in {\mathbb {T}}\) is a jump point, so that \(\lambda '(y_0)>0\) and \(y_0\in g(X)\). By Claim 1, \(\lambda '(1)=\lambda '(y_0)>0\), and therefore \(\lambda '(y)=\lambda '(1)=\lambda '(y_0)>0\) for every \(y\in g(X)\), again by Claim 1. Since g(X) is dense, and in particular infinite, in \({\mathbb {T}}\), this gives \(\lambda '({\mathbb {T}})=\infty \), which contradicts the fact that \(\lambda '\) is a probability measure.

With the help of Claims 1 and 2, \(\lambda '\) is translation invariant on open sets, and thus \(\lambda '\) is the uniform distribution. Therefore \(\lambda '\) coincides with \(\lambda /2\pi \) on every open subset of \({\mathbb {T}}\), and hence on every closed subset, which yields \(\lambda '=\lambda /2\pi \) on \({\mathbb {T}}\). In particular, \(\lambda (g(X))=2\pi \lambda '(g(X))=2\pi \). \(\square \)
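The finite-order case of Lemma 3.5 can be illustrated on \(X={\mathbb {Z}}/N\): a character of order m takes each m-th root of unity equally often under the Haar (uniform) probability measure. The specific N and character below are our choices:

```python
from math import gcd
from collections import Counter

N, k = 12, 4
m = N // gcd(k, N)   # order of the character g(x) = e^{2 pi i k x / N}; here m = 3

# phase of g(x), recorded in units of full turns and rounded to merge float duplicates
values = [round((k * x / N) % 1, 9) for x in range(N)]
counts = Counter(values)

assert len(counts) == m                  # the image is exactly the m-th roots of unity
assert set(counts.values()) == {N // m}  # each value is hit equally often: uniform law
```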

Lemma 3.6

Suppose the orders of all elements in \({\hat{X}}\) are uniformly bounded, and define r to be the minimal integer such that there are infinitely many elements in \({\hat{X}}\) of order r. Then r is prime.

Proof of Lemma 3.6

Assume that r is not prime, so \(r=km\) with an integer \(k>1\) and a prime m. Take a sequence \(\{a_i\}_{i=1}^\infty \) of distinct elements of order r; then every element of the set \(\{a_i^k\}_{i=1}^\infty \) has order \(m<r\). We claim that this set is infinite, which contradicts the minimality of r and proves Lemma 3.6. Indeed, suppose that \(\{a_i^k\}\) is finite. Then there is a subsequence of \(\{a_i\}\), which we again denote by \(\{a_i\}\), such that

$$\begin{aligned} a_1^k =a_2^k =a_3^k =\cdots \end{aligned}$$

so \((a_1 a_i^{-1})^k={\hat{e}}\) (where \({\hat{e}}\) is the identity in \({\hat{X}}\)) for every \(i\ge 2\). Further, \(a_1 a_i^{-1}\ne {\hat{e}}\) since \(a_1 \ne a_i\), and \(a_1 a_i^{-1}\ne a_1 a_j^{-1}\) for \(i\ne j\). Therefore the set \(\{a_1 a_i^{-1}\}_{i=2}^\infty \) is infinite, and the order of each of its elements divides k and is therefore less than r; by the pigeonhole principle, some order less than r occurs infinitely often, again contradicting the minimality of r. \(\square \)

Lemma 3.7

Suppose the orders of all elements in \({\hat{X}}\) are uniformly bounded, and let r be as in Lemma 3.6, so that r is prime. We construct a sequence \(\{g_k\}_{k=1}^\infty \) in \({\hat{X}}\) such that

  • Every \(g_k\) has order r.

  • For every \(k\ge 2\), \(g_k\) is not in the subgroup generated by \(g_1,g_2,\ldots ,g_{k-1}\).

This is possible because there are infinitely many elements in \({\hat{X}}\) of order r. If we treat each \(g_k\) as a random variable from X to \({\mathbb {T}}\), then the \(g_k\) are mutually independent and identically distributed.

Proof of Lemma 3.7

We first prove two claims:

Claim 1. For every k and n with \(n\ge k\), \(g_k\) is not in the subgroup generated by \(A_{n,k}:=\{g_1,\ldots ,g_{n}\}\setminus \{g_k\}\).

To see this, note that it suffices to prove the claim for \(n>k\). Assume that \(g_k\in [A_{n,k}]\); then \(g_k\) can be written as

$$\begin{aligned} g_k=\prod _{g_i\in A_{n,k}}g_i^{a_i} \end{aligned}$$
(22)

with each \(a_i\in [0,r)\cap {\mathbb {Z}}\). Take the largest index i in (22) such that \(a_i\ne 0\) and denote it by \(i_0\); then \(i_0>k\) by the choice of \(g_k\). So we have that

$$\begin{aligned} g_{i_0}^{a_{i_0}}=g_k\prod _{g_i\in A_{n,k}, ~i<i_0}g_i^{r-a_i}. \end{aligned}$$
(23)

Since r is prime, there exists an integer \(b_0\) such that \(g_{i_0}^{b_0a_{i_0}}=g_{i_0}\), which by (23) means that \(g_{i_0}\) is in the subgroup generated by \(\{g_1,\ldots ,g_{i_0-1}\}\), contradicting the choice of \(g_{i_0}\). This proves Claim 1.

Claim 2. The events \(g_k^{-1}(\{1\})\) are mutually independent, i.e.

$$\begin{aligned} {\mathbb {P}}\Big (\bigcap _{k=1}^n g_k^{-1}(\{1\})\Big )= \frac{1}{r^n}. \end{aligned}$$
(24)

To see this, note that every \(g_k\) is a continuous homomorphism from X to \({\mathbb {T}}\), so \(H_n:=\bigcap _{k=1}^n g_k^{-1}(\{1\})\) is a closed subgroup of X. Thus

$$\begin{aligned} X=\bigsqcup _{x_i+H_n\in X/{H_n}} (x_i+H_n) \end{aligned}$$

where the \(x_i +H_n\) are the cosets in \(X/H_n\) and “\(\sqcup \)” denotes disjoint union. Since \({\mathbb {P}}\) is a Haar measure, we have

$$\begin{aligned} {\mathbb {P}}(H_n)=\frac{1}{\text{ number } \text{ of } \text{ cosets }}. \end{aligned}$$

Moreover, the vector \((g_1,g_2,\ldots ,g_n)\) is constant on each coset \(x_i+H_n\), with

$$\begin{aligned} (g_1(x_i+H_n),g_2(x_i+H_n),\ldots ,g_n(x_i+H_n) )\equiv (g_1(x_i),g_2(x_i),\ldots ,g_n(x_i) ) \end{aligned}$$

Furthermore, if \(x_i+H_n\ne x_j+ H_n\), we claim that

$$\begin{aligned} (g_1(x_i),g_2(x_i),\ldots ,g_n(x_i) )\ne (g_1(x_j),g_2(x_j),\ldots ,g_n(x_j) ). \end{aligned}$$
(25)

To see this, suppose equality holds in (25); then

$$\begin{aligned} (g_1(x_i-x_j),g_2(x_i-x_j),\ldots ,g_n(x_i-x_j) )=(1,1,\ldots ,1) \end{aligned}$$

so \(x_i-x_j\in H_n\), contradicting the fact that \(x_i+H_n\ne x_j+H_n\). Hence the mapping \(G:X/H_n\rightarrow [e^{2\pi i/r}]^n\cong {\mathbb {F}}_r^n\) (since r is prime) defined by

$$\begin{aligned} G(x_i+H_n):=\big (g_1(x_i+H_n),g_2(x_i+H_n),\ldots ,g_n(x_i+H_n)\big ) \end{aligned}$$
(26)

is an injective group homomorphism. It suffices to prove that G is surjective, i.e.

$$\begin{aligned} G(X/H_n)={\mathbb {F}}_r^n. \end{aligned}$$

Suppose for contradiction that \(G(X/H_n)\) is a proper subgroup of \({\mathbb {F}}_r^n\), and therefore a proper linear subspace of \({\mathbb {F}}_r^n\) of dimension \(n'<n\). Let l be the number of cosets \(\{x_i+H_n\}_{i=1}^{l}\) in X, and consider the matrix

$$\begin{aligned} P_n:=\begin{pmatrix} g_1(x_1+H_n) &{} g_2(x_1+H_n) &{} \cdots &{} g_n(x_1+H_n) \\ g_1(x_2+H_n) &{} g_2(x_2+H_n) &{} \cdots &{} g_n(x_2+H_n) \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ g_1(x_l+H_n) &{} g_2(x_l+H_n) &{} \cdots &{} g_n(x_l+H_n) \end{pmatrix} \end{aligned}$$

Since the matrix \(P_n\) has rank \(n'<n\), there exists a column, with index \(k\le n\), that can be written as a linear combination of the other columns:

$$\begin{aligned} \begin{pmatrix} g_k(x_1+H_n) \\ g_k(x_2+H_n) \\ \vdots \\ g_k(x_l+H_n) \end{pmatrix} =\sum _{i\in [1,n],~i\ne k}a_i \begin{pmatrix} g_i(x_1+H_n) \\ g_i(x_2+H_n) \\ \vdots \\ g_i(x_l+H_n) \end{pmatrix} \end{aligned}$$

for some \(a_1, a_2,\ldots , a_n\in {\mathbb {F}}_r\). Since \(X=\bigsqcup _{i=1}^l (x_i+H_n)\), we have

$$\begin{aligned} g_k=\prod _{g_i\in A_{n,k}}g_i^{a_i} \end{aligned}$$

where \(A_{n,k}\) is defined in Claim 1. This means that \(g_k\) lies in the group generated by \(A_{n,k}\), contradicting Claim 1, which ends the proof of Claim 2.

Next, note that all the \(g_k\)'s share the same image \([e^{2\pi i/r}]\) by Lemma 3.4; denote this image by J. So it suffices to prove that for every \((a_1,\ldots ,a_n)\in J^n\),

$$\begin{aligned} {\mathbb {P}}\big ((g_1,g_2,\ldots ,g_n)=(a_1,a_2,\ldots ,a_n)\big )=\frac{1}{r^n}. \end{aligned}$$
(27)

On the other hand, we have proven that the mapping \(G: X/H_n\rightarrow J^n\) defined by (26) is a group isomorphism, so every \((a_1,\ldots ,a_n)\in J^n\) lies in the image of the random vector \((g_1,g_2,\ldots ,g_n)\). Then (27) follows from Claim 2 and the fact that \({\mathbb {P}}\) is a Haar measure, which ends the proof of Lemma 3.7. \(\square \)
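The uniformity asserted in (27) can be illustrated in a finite toy model (illustrative only, not part of the proof; the group \({\mathbb {Z}}_3\times {\mathbb {Z}}_9\) and the specific characters are our choice): take \(X={\mathbb {Z}}_3\times {\mathbb {Z}}_9\) with uniform probability measure and the two order-3 characters \(g_1(x,y)=\omega ^x\), \(g_2(x,y)=\omega ^{y\bmod 3}\) with \(\omega =e^{2\pi i/3}\); neither lies in the group generated by the other, and the value pair should be uniform over the \(r^2=9\) possibilities.

```python
from collections import Counter

# Finite toy model of Lemma 3.7: X = Z_3 x Z_9 with uniform probability
# (Haar) measure.  The order-3 characters g1(x, y) = omega^x and
# g2(x, y) = omega^(y mod 3), omega = e^(2 pi i / 3), are encoded below by
# their exponents (x mod 3, y mod 3).  The lemma predicts the value vector
# (g1, g2) is uniform: each of the r^2 = 9 pairs has probability 1/9.
r = 3
counts = Counter((x % 3, y % 3) for x in range(3) for y in range(9))

assert len(counts) == r ** 2                    # all of J^2 is attained
assert set(counts.values()) == {27 // r ** 2}   # each pair equally often
print("each of the", r ** 2, "value pairs has probability 1/9")
```

Each pair occurs exactly 3 times among the 27 group elements, i.e. with probability \(1/9=1/r^2\), matching (27).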

Lemma 3.8

Suppose the orders of all elements in \({\hat{X}}\) are uniformly bounded. Define r and \(\{g_k\}_{k=1}^\infty \) as in Lemma 3.7, and let \(M_n\) be the subgroup of \({\hat{X}}\) generated by \(g_1,\ldots , g_n\), i.e.

$$\begin{aligned} M_n:=\{g_1^{i_1}g_2^{i_2}\ldots g_n^{i_n}:~0\le i_l\le r-1~~\text{ for }~~l=1,2,\ldots ,n\}. \end{aligned}$$

Then the cardinality of \(M_n\) is \(r^n\).

Proof of Lemma 3.8

It is equivalent to prove that any two elements \(g_1^{i_1}g_2^{i_2}\ldots g_n^{i_n}\) and \(g_1^{j_1}g_2^{j_2}\ldots g_n^{j_n}\) are distinct whenever \((i_1,\ldots ,i_n)\ne (j_1,\ldots ,j_n)\) (i.e. \(M_n\) is a linear space with basis \(g_1,\ldots ,g_n\)). This is equivalent to

$$\begin{aligned} g_1^{m_1}g_2^{m_2}\ldots g_n^{m_n}\ne {\hat{e}}~~\text{ as } \text{ long } \text{ as }~~(m_1,m_2,\ldots ,m_n)\ne \text{ zero } \text{ vector } \text{ in } {\mathbb {F}}_r^n \end{aligned}$$

where \({\hat{e}}\) is the identity in \({\hat{X}}\). Suppose equality holds for some \((m_1,m_2,\ldots ,m_n)\ne \text{ zero } \text{ vector }\); without loss of generality, assume that \(m_n\ne 0\). Thus we have

$$\begin{aligned} g_n^{r-m_n}=g_1^{m_1}g_2^{m_2}\ldots g_{n-1}^{m_{n-1}}. \end{aligned}$$

Therefore \(g_n^{r-m_n}\) lies in the group generated by \(\{g_1,g_2,\ldots ,g_{n-1}\}\). Since r is prime, \(r-m_n\) and r are coprime, so there exist integers s and t such that

$$\begin{aligned} s(r-m_n)+tr=1. \end{aligned}$$

Therefore \(g_n\) lies in the group generated by \(\{g_1,g_2,\ldots ,g_{n-1}\}\), contradicting the choice of \(\{g_k\}\). \(\square \)
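The count \(|M_n|=r^n\) can be checked in a finite toy model (our illustration, not the compact group of the lemma): the dual of \({\mathbb {Z}}_3\times {\mathbb {Z}}_9\) is again \({\mathbb {Z}}_3\times {\mathbb {Z}}_9\), written additively, and the order-3 characters \(g_1=(1,0)\), \(g_2=(0,3)\) generate a subgroup of exactly \(3^2\) elements.

```python
# Finite toy check of Lemma 3.8 (illustrative only): in the dual of
# Z_3 x Z_9 (itself Z_3 x Z_9, written additively), the order-3 characters
# g1 = (1, 0) and g2 = (0, 3) are independent in the sense of the lemma,
# so the subgroup they generate should have exactly r^n = 3^2 = 9 elements.
r = 3
g1, g2 = (1, 0), (0, 3)

M = {((i * g1[0] + j * g2[0]) % 3, (i * g1[1] + j * g2[1]) % 9)
     for i in range(r) for j in range(r)}

assert len(M) == r ** 2
print("generated subgroup has", len(M), "elements")
```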

Lemma 3.9

Under the conditions of Lemmas 3.6, 3.7 and 3.8, let \(H_n^\perp \) denote the set of all elements of \({\hat{X}}\) whose restriction to \(H_n\) is trivial. Then

$$\begin{aligned} H_n^{\perp }=M_n. \end{aligned}$$
(28)

Proof of Lemma 3.9

Clearly \(H_n^{\perp }\supset M_n\). Assume for contradiction that there exists \(g'\in H_n^{\perp }\) with \(g'\notin M_n\). Without loss of generality assume that \(g'\) has prime order \(r'\) (if not, replace \(g'\) by a suitable power). Thus \(g'\) is not in the group generated by \(g_1,\ldots ,g_n\), and \(H_n\) is a subgroup of \((g')^{-1}(\{1\})\). Hence \({\mathbb {P}}(H_n)\) divides \({\mathbb {P}}((g')^{-1}(\{1\}))\), so by Lemma 3.5, \(1/r^n\) divides \(1/r'\), which means that \(r'|r^n\), thus \(r'=r\) and we have

$$\begin{aligned} H_n=\Big (\bigcap _{k=1}^n g_k^{-1}(\{1\})\Big )\cap (g')^{-1}(\{1\}). \end{aligned}$$

Then apply Lemma 3.7 to the last equality of the following equation:

$$\begin{aligned} \frac{1}{r^n}={\mathbb {P}}(H_n)={\mathbb {P}}\Bigg (\Big (\bigcap _{k=1}^n g_k^{-1}(\{1\})\Big )\cap (g')^{-1}(\{1\}) \Bigg )=\frac{1}{r^{n+1}} \end{aligned}$$

which yields a contradiction. \(\square \)

The following is a direct consequence of Lemma 3.9.

Lemma 3.10

The cardinality of \({\hat{X}}/H_n^{\perp }\) is infinite.

Lemma 3.11

Suppose that X is a compact LCA group whose Haar measure is scaled to a probability measure \({\mathbb {P}}\). If \(H<X\) is a closed subgroup with \({\mathbb {P}}(H)>0\) and \({\hat{x}}\in {\hat{X}}\) is such that the restriction \({\hat{x}}\big |_H\) is non-trivial (i.e. \({\hat{x}}\big |_H\not \equiv 1\)), then we have

$$\begin{aligned} \int _H{\hat{x}}(x){\mathbb {P}}(dx)=0. \end{aligned}$$

Proof of Lemma 3.11

Note that the restriction \({\hat{x}}\big |_H\) of \({\hat{x}}\) to H is an element of the dual group \({\hat{H}}\), and it is easy to verify that the restriction of \({\mathbb {P}}\) to H is still a Haar measure on H. Then the orthogonality of the elements of \({\hat{H}}\) (see Proposition 2.2) yields the lemma. \(\square \)
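The orthogonality used here is easy to see numerically in a finite toy model (our illustration; the group \({\mathbb {Z}}_{12}\) and subgroup are chosen for concreteness): for \(X={\mathbb {Z}}_{12}\) with uniform probability measure and \(H=\{0,3,6,9\}\), the average of \(\chi _a(x)=e^{2\pi i a x/12}\) over H vanishes exactly when \(\chi _a\big |_H\) is non-trivial (here: when 4 does not divide a).

```python
import cmath

# Numerical check of Lemma 3.11 in a finite model: X = Z_12 with uniform
# probability measure, H = {0, 3, 6, 9}.  The average of chi_a over H is 1
# when chi_a restricted to H is trivial (4 | a), and 0 otherwise.
H = [0, 3, 6, 9]

def mean_over_H(a):
    return sum(cmath.exp(2j * cmath.pi * a * x / 12) for x in H) / len(H)

for a in range(12):
    m = mean_over_H(a)
    if a % 4 == 0:               # chi_a is trivial on H
        assert abs(m - 1) < 1e-12
    else:                        # chi_a non-trivial on H: orthogonality
        assert abs(m) < 1e-12
print("orthogonality verified for all 12 characters of Z_12")
```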

Proof of Proposition 3.3, Case 1

Assume that g is an element of \({\hat{X}}\) of infinite order. Let G be the subgroup of \({\hat{X}}\) generated by g. Define, for \(k\ge 1\),

$$\begin{aligned} U_k:=\{x\in X:~g(x)\in (e^{-\pi i/(3k)},e^{\pi i/(3k)})\}. \end{aligned}$$

Thus \(U_k\) is decreasing (i.e. \(U_{k+1}\subset U_k\)) and for every \(1\le j\le k\), \(g^j(U_k)=(g(U_k))^j\subset (e^{-\pi i/3},e^{\pi i/3})\). Moreover \(U_k=-U_k\) by the fact that \(g(-x)=(g(x))^{-1}\). Hence we have

$$\begin{aligned} \text{ For } \text{ every } \text{ element } x\in U_k,~Re(g^j(-x))>1/2~\text{ for }~ 1\le j\le k. \end{aligned}$$
(29)

Moreover, by Lemma 3.4, we have \({\mathbb {P}}(U_k)=\frac{1}{3k}\). Define \(f_k(x):=\frac{1}{{\mathbb {P}}(U_k)}{\mathbb {I}}_{U_k}(x)\), where \({\mathbb {I}}_{U_k}(x)\) is the indicator function of \(U_k\), then \(\Vert f_k\Vert _p=(3k)^{1-1/p}\). On the other hand, by Definition 2.3,

$$\begin{aligned} \Vert \widehat{f_k}\Vert _q= & {} \Big (\int _{{\hat{X}}}|\widehat{f_k}({\hat{x}})|^q\hat{\alpha }(d{\hat{x}})\Big )^{1/q} \ge \Big (\sum _{j=1}^k |\widehat{f_k}(g^j)|^q \Big )^{1/q}\\\ge & {} \Big ( \sum _{j=1}^k \Big |\int _{U_k}\frac{1}{{\mathbb {P}}(U_k)}g^j(-x)\alpha (dx)\Big |^q \Big )^{1/q}\\\ge & {} \Big ( \sum _{j=1}^k \Big |\int _{U_k}\frac{1}{{\mathbb {P}}(U_k)}Re\big (g^j(-x)\big )\alpha (dx)\Big |^q \Big )^{1/q}\\\ge & {} \frac{k^{1/q}}{2}~~(\text{ by }\,(29)) \end{aligned}$$

which yields

$$\begin{aligned} \frac{\Vert \widehat{f_k}\Vert _q }{\Vert f_k\Vert _p}\ge \frac{3^{1/p-1}}{2}k^{1/p+1/q-1}\rightarrow \infty \end{aligned}$$

as \(k\rightarrow \infty \), which ends the proof of Case 1. \(\square \)

Proof of Proposition 3.3, Case 2

Take a sequence \(\{g_n\}\) in \({\hat{X}}\) such that each \(g_n\) has order \(m_n\) and \(m_n\nearrow \infty \). Define

$$\begin{aligned} U_n:=\{x\in X:~g_n(x)=1\}. \end{aligned}$$

Thus \(U_n=-U_n\) by the fact that \(U_n\) is a subgroup of X. Hence we have

$$\begin{aligned} \text{ For } \text{ every } x\in U_n,~g_n^j(-x)=1~\text{ for } 1\le j\le m_n. \end{aligned}$$
(30)

Then by Lemma 3.5, \({\mathbb {P}}(U_n)= 1/m_n\). Define \(f_n:=\frac{1}{{\mathbb {P}}(U_n)}{\mathbb {I}}_{U_n}\). We have \(\Vert f_n\Vert _p=(m_n)^{1-1/p}\). On the other hand

$$\begin{aligned} \Vert \widehat{f_n}\Vert _q= & {} \Big (\int _{{\hat{X}}}|\widehat{f_n}({\hat{x}})|^q\hat{\alpha }(d{\hat{x}})\Big )^{1/q} \ge \Big ( \sum _{j=1}^{m_n} |\widehat{f_n}(g_n^j)|^q \Big )^{1/q}\\\ge & {} \Big ( \sum _{j=1}^{m_n} \Big |\int _{U_n}\frac{1}{{\mathbb {P}}(U_n)}g_n^{j}(-x)\alpha (dx)\Big |^q \Big )^{1/q}\\= & {} \Big ( \sum _{j=1}^{m_n} \Big |\int _{U_n}\frac{1}{{\mathbb {P}}(U_n)}\alpha (dx)\Big |^q \Big )^{1/q}~~(\text{ by }\,(30))\\= & {} m_n^{1/q} \end{aligned}$$

which yields

$$\begin{aligned} \frac{\Vert \widehat{f_n}\Vert _q}{\Vert f_n\Vert _p}\ge (m_n)^{1/p+1/q-1}\rightarrow \infty \end{aligned}$$

as \(n\rightarrow \infty \), which ends the proof of Case 2. \(\square \)
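The mechanism of Case 2 can be reproduced in a finite toy model (an illustration of the computation, not the proof itself): on \({\mathbb {Z}}_m\) with uniform probability measure, take g a character of order m, so \(U=g^{-1}(\{1\})=\{0\}\) and \(f=m\,{\mathbb {I}}_{U}\); then all m Fourier coefficients along the powers of g equal 1 and the ratio is exactly \(m^{1/p+1/q-1}\).

```python
import cmath

# Finite model of the Case 2 mechanism: on Z_m with uniform probability
# measure, f = m * indicator of {0} (the kernel of an order-m character).
# All m Fourier coefficients equal 1, so ||f^||_q / ||f||_p = m^(1/p+1/q-1),
# which blows up as m grows whenever 1/p + 1/q > 1.
def ratio(m, p, q):
    f = [m if x == 0 else 0 for x in range(m)]
    # Fourier coefficient at chi_j, with Haar mass 1/m per point of Z_m
    fhat = [sum(f[x] * cmath.exp(-2j * cmath.pi * j * x / m) for x in range(m)) / m
            for j in range(m)]
    norm_f = (sum(abs(v) ** p for v in f) / m) ** (1 / p)
    norm_fhat = sum(abs(v) ** q for v in fhat) ** (1 / q)  # counting measure on duals
    return norm_fhat / norm_f

p, q = 1.0, 1.0   # 1/p + 1/q = 2 > 1
for m in (4, 16, 64):
    assert abs(ratio(m, p, q) - m ** (1 / p + 1 / q - 1)) < 1e-9
assert ratio(64, p, q) > ratio(16, p, q) > ratio(4, p, q)
print("ratio grows like m^(1/p+1/q-1):", [round(ratio(m, p, q), 6) for m in (4, 16, 64)])
```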

Proof of Proposition 3.3, Case 3

Suppose the orders of all elements in \({\hat{X}}\) are uniformly bounded. Define r, \(\{g_k\}_{k=1}^\infty \), \(H_n\) and \(M_n\) as in Lemmas 3.6, 3.7, 3.8 and 3.9. Define a sequence of functions \(f_n:=\frac{1}{{\mathbb {P}}(H_n)}{\mathbb {I}}_{H_n}\); then \(\Vert f_n\Vert _p={\mathbb {P}}(H_n)^{1/p-1}=(r^n)^{1-1/p}\) by Lemma 3.7. On the other hand, by Definition 2.3,

$$\begin{aligned} \Vert \widehat{f_n}\Vert _q= \Big (\int _{{\hat{X}}}|\widehat{f_n}({\hat{x}})|^q\hat{{\mathbb {P}}}(d{\hat{x}})\Big )^{1/q}\ge \Big (\sum _{g\in M_n}|\widehat{f_n}(g)|^q\Big )^{1/q}. \end{aligned}$$
(31)

Then by Lemma 3.9, for every \(g\in M_n\), g is trivial on \(H_n\). So we have that

$$\begin{aligned} \widehat{f_n}(g)=\frac{1}{{\mathbb {P}}(H_n)}\int _{H_n}g(-x){\mathbb {P}}(dx)=1. \end{aligned}$$

Therefore by (31) and Lemma 3.8, we have

$$\begin{aligned} \Vert \widehat{f_n}\Vert _q\ge \Big (\sum _{g\in M_n}|\widehat{f_n}(g)|^q\Big )^{1/q}=(r^n)^{1/q} \end{aligned}$$

and

$$\begin{aligned} \frac{\Vert \widehat{f_n}\Vert _q}{\Vert f_n\Vert _p}\ge (r^n)^{1/p+1/q-1}\rightarrow \infty \end{aligned}$$

as \(n\rightarrow \infty \), which ends the proof of Case 3 and completes the proof of Proposition 3.3. \(\square \)

Proposition 3.12

If \((1/p,1/q)\in R_3\), then \(C_{p,q}= \infty \).

We will actually prove a stronger conclusion: \(C_{p,q}= \infty \) for all \(0<q<2\) and \(p\ge 1\). We still consider the three cases provided in the proof of Proposition 3.3.

Proof of Proposition 3.12, Case 1

Assume that g is an element of \({\hat{X}}\) of infinite order. Set \(G:=\{g^n,n\in {\mathbb {Z}}\}\), the subgroup of \({\hat{X}}\) generated by g. Define for every \(x\in X\),

$$\begin{aligned} g(x):=\exp \{ib(x)\} \end{aligned}$$

where \(b:X\rightarrow [0,2\pi ]\) is a continuous function. Therefore \(g^n(x)=\exp \{inb(x)\}\). By [66], Volume 1, p. 199, Theorem (4-9), for \(\beta >1\) and \(c>0\), the function

$$\begin{aligned} h(y):=\sum _{n=2}^\infty \frac{e^{icn\log n}}{n^{1/2}(\log n)^\beta }e^{iny} \end{aligned}$$
(32)

converges uniformly for \(y\in [0,2\pi ]\), while its Fourier coefficients are not in \(l_q\) for any \(q<2\). Thus f(x) defined by

$$\begin{aligned} f(x):=h(b(x))=\sum _{n=2}^\infty \frac{e^{icn\log n}}{n^{1/2}(\log n)^\beta }e^{inb(x)}=\sum _{n=2}^\infty \frac{e^{icn\log n}}{n^{1/2}(\log n)^\beta }g^n(x) \end{aligned}$$

is well defined, continuous, and bounded on X. Therefore \(f\in L_p\) for all \(p\ge 1\). On the other hand, \(\{{\hat{f}}(g^n)\}\) is not in \(l_q\) for any \(q<2\), which ends the proof of Case 1. \(\square \)
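The coefficient behavior behind this argument is elementary to see numerically: the coefficients of (32) have modulus \(|c_n|=n^{-1/2}(\log n)^{-\beta }\), so \(\sum |c_n|^q\) diverges for \(q<2\) while it converges for \(q=2\) and \(\beta >1/2\). A crude numeric look (an illustration; the cutoffs and \(\beta =1.5\) are our choices):

```python
import math

# Partial l_q sums of |c_n| = n^(-1/2) (log n)^(-beta), the moduli of the
# coefficients in the Zygmund example (32).  For q = 1 the partial sums keep
# growing (divergence), while for q = 2 the tail contribution is tiny.
beta = 1.5

def partial_lq(q, N):
    return sum((n ** -0.5 * math.log(n) ** -beta) ** q for n in range(2, N))

grow = [partial_lq(1.0, N) for N in (10 ** 3, 10 ** 4, 10 ** 5)]
assert grow[2] > grow[1] > grow[0]
assert grow[2] - grow[1] > 1.0          # still adding mass far out in the tail
flat = [partial_lq(2.0, N) for N in (10 ** 3, 10 ** 4, 10 ** 5)]
assert flat[2] - flat[1] < 0.01         # q = 2 sum has essentially converged
print("q=1 partial sums:", [round(g, 2) for g in grow])
```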

Proof of Proposition 3.12, Case 2

Suppose that the orders of the elements of \({\hat{X}}\) are not bounded, so there is a sequence of elements \(g_n\) in \({\hat{X}}\) such that the order of \(g_n\) is \(m_n\nearrow \infty \). Similarly we define \(g_n(x):=\exp \{ib_n(x)\}\) for some continuous function \(b_n:X\rightarrow [0,2\pi ]\), and define

$$\begin{aligned} f_{n}(x):=\sum _{k=2}^{m_n-1}\frac{e^{ick\log k}}{k^{1/2}(\log k)^\beta }e^{ikb_n(x)}=\sum _{k=2}^{m_n-1}\frac{e^{ick\log k}}{k^{1/2}(\log k)^\beta }g_n^k(x). \end{aligned}$$

Thus \(f_n(x)\) is uniformly bounded in n and \(x\in X\) by the fact that h defined in (32) converges uniformly. Hence \(\Vert f_n\Vert _p\) is uniformly bounded in n, while \(\Vert \widehat{f_n}\Vert _q\rightarrow \infty \) as \(n \rightarrow \infty \) for any \(q<2\). In other words, \(C_{p,q}=\infty \) in this case. \(\square \)

Proof of Proposition 3.12, Case 3

Suppose that the orders of the elements of \({\hat{X}}\) are uniformly bounded. Then define r, \(\{g_k\}_{k=1}^\infty \), \(\{H_n\}\) and \(\{M_n\}\) as in the proof of Proposition 3.3, Case 3. Applying Lemma 3.10, we can find \(r^n\) cosets in \({\hat{X}}/H_n^{\perp }\); denote by \(\{\gamma _k\}_{k=1}^{r^n}\) a set of representatives of these cosets. Define

$$\begin{aligned} f_n(x):=\sum _{k=1}^{r^n}\gamma _k(x){\mathbb {I}}_{x_k+H_n}(x) \end{aligned}$$

where \(x_k+H_n\) are the cosets of \(X/H_n\). Then it is easy to see that \(|f_n|\equiv 1\), therefore \(\Vert f_n\Vert _p=1\). On the other hand, for any \(\gamma \in \gamma _{k_0}H_n^{\perp }\) with some \(k_0\),

$$\begin{aligned} \widehat{f_n}(\gamma )=\sum _{k=1}^{r^n}\int _X\gamma _k(x){\mathbb {I}}_{x_k+H_n}(x)\gamma (-x){\mathbb {P}}(dx) \end{aligned}$$
(33)

For \(k=k_0\), we have

$$\begin{aligned}&\int _X\gamma _{k_0}(x){\mathbb {I}}_{x_{k_0}+H_n}(x)\gamma (-x){\mathbb {P}}(dx)\\&\quad =\int _{x_{k_0}+H_n}\big (\gamma _{k_0}\gamma ^{-1}\big )(x){\mathbb {P}}(dx)\\&\quad =\big (\gamma _{k_0}\gamma ^{-1}\big )(x_{k_0})\int _{H_n}\big (\gamma _{k_0}\gamma ^{-1}\big )(x){\mathbb {P}}(dx)\\&\quad =\big (\gamma _{k_0}\gamma ^{-1}\big )(x_{k_0})\frac{1}{r^n} \end{aligned}$$

where the last equality is by the fact that \(\gamma \in \gamma _{k_0}H_n^{\perp }\) implies \(\gamma _{k_0}\gamma ^{-1}\in H_n^{\perp }\). Further, if \(k\ne k_0\), we have

$$\begin{aligned}&\int _X\gamma _{k}(x){\mathbb {I}}_{x_{k}+H_n}(x)\gamma (-x){\mathbb {P}}(dx)\\&\quad =\int _{x_{k}+H_n}\big (\gamma _{k}\gamma ^{-1}\big )(x){\mathbb {P}}(dx)\\&\quad =\big (\gamma _{k}\gamma ^{-1}\big )(x_{k})\int _{H_n}\big (\gamma _{k}\gamma ^{-1}\big )(x){\mathbb {P}}(dx)=0. \end{aligned}$$

where the last equality holds because \(\gamma _k\gamma ^{-1}\) is non-trivial on \(H_n\), so Lemma 3.11 applies. Thus (33) equals

$$\begin{aligned} \widehat{f_n}(\gamma )={\left\{ \begin{array}{ll} (\gamma _{k}\gamma ^{-1})(x_k)\frac{1}{r^n} &{} \text{ if }~\gamma \in \gamma _kH_n^{\perp }~\text{ for } \text{ some } k\\ 0 &{} \text{ otherwise }. \end{array}\right. } \end{aligned}$$
(34)

Denote \(Q_n:=\bigsqcup _{k=1}^{r^n} \gamma _k H_n^{\perp }\), therefore \(|Q_n|=r^{2n}\) by Lemmas 3.8 and 3.9. Then by (34)

$$\begin{aligned} |\widehat{f_n}(\gamma )|={\left\{ \begin{array}{ll} \frac{1}{r^n} &{} \text{ if }~\gamma \in Q_n\\ 0 &{} \text{ otherwise } \end{array}\right. } \end{aligned}$$
(35)

which yields

$$\begin{aligned} \Vert \widehat{f_n}\Vert _q\ge & {} \Big (\sum _{\gamma \in Q_n}\frac{1}{r^{nq}}\Big )^{1/q} =r^{\frac{(2-q)n}{q}}\rightarrow \infty \end{aligned}$$

as \(n\rightarrow \infty \), which ends the proof of Case 3 and provides Proposition 3.12. Propositions 3.2, 3.3 and 3.12 together provide Theorem 3.1. \(\square \)

Remark 3.13

We have a conclusion similar to Lemma 3.7. Although we will not use it, it is interesting in its own right, and its proof is stronger than that of Lemma 3.7. Suppose X is a compact LCA group with a probability Haar measure \({\mathbb {P}}\) and \({\hat{X}}\) is the discrete dual group, and suppose that we can take a sequence \(\{g_k\}_{k=1}^\infty \) whose orders are distinct primes; then the \(g_k\)'s are mutually independent.

Proof of Remark 3.13

We assume that each \(g_k\) has prime order \(r_k\), with the \(r_k\)'s distinct. We first prove that the events \(g_k^{-1}(\{1\})\) are mutually independent. It suffices to prove that

$$\begin{aligned} {\mathbb {P}}\Big (\bigcap _{k=1}^n g_k^{-1}(\{1\})\Big )= \frac{1}{\prod _{k=1}^nr_k}. \end{aligned}$$
(36)

We proceed by induction. Suppose that (36) holds for n; then for \(n+1\), define the subgroup \(H_n\) of X by

$$\begin{aligned} H_n:=\bigcap _{k=1}^n g_k^{-1}(\{1\}). \end{aligned}$$
(37)

Therefore \(H_n\) is a compact subgroup of X. We define \({\tilde{g}}_{n+1}:=g_{n+1}\big |_{H_n}\), the restriction of \(g_{n+1}\) to \(H_n\). Then \({\tilde{g}}_{n+1}^{r_{n+1}}\) is trivial, so the order of \({\tilde{g}}_{n+1}\) divides \(r_{n+1}\); hence the order of \({\tilde{g}}_{n+1}\) is either 1 or \(r_{n+1}\), since \(r_{n+1}\) is prime. If the order of \({\tilde{g}}_{n+1}\) is 1, then \({\tilde{g}}_{n+1}={\tilde{e}}\), where \({\tilde{e}}\) is the identity of the dual group of \(H_n\). So we have that

$$\begin{aligned} H_n\subset \left( g_1g_2\ldots g_ng_{n+1}\right) ^{-1}(\{1\}). \end{aligned}$$
(38)

On the other hand, the order of \(g_1g_2\ldots g_ng_{n+1}\) is \(\prod _{k=1}^{n+1}r_k\), so by Lemma 3.5,

$$\begin{aligned} {\mathbb {P}} \left\{ \big (g_1g_2\ldots g_ng_{n+1}\big )^{-1}(\{1\})\right\} =\frac{1}{\prod _{k=1}^{n+1}r_k}. \end{aligned}$$
(39)

However, by the induction hypothesis,

$$\begin{aligned} {\mathbb {P}}(H_n)=\frac{1}{\prod _{k=1}^{n}r_k}. \end{aligned}$$
(40)

So by (38), (39) and (40) we have

$$\begin{aligned} \frac{1}{\prod _{k=1}^{n}r_k}\le \frac{1}{\prod _{k=1}^{n+1}r_k} \end{aligned}$$

which is a contradiction. So the order of \({\tilde{g}}_{n+1}\) is \(r_{n+1}\). Next, we need to verify that the probability measure on \(H_n\) defined by the conditional probability \({\mathbb {P}}(\cdot |H_n)\) is still a Haar measure, in order to use Lemma 3.5. To see this, it suffices to verify that \({\mathbb {P}}(\cdot |H_n)\) is translation invariant. For any measurable set \(M\subset H_n\) and any element \({\hat{a}}\in H_n\), we have

$$\begin{aligned} {\mathbb {P}}({\hat{a}}M|H_n)=\frac{{\mathbb {P}}({\hat{a}}M\cap H_n)}{{\mathbb {P}}(H_n)}=\frac{{\mathbb {P}}({\hat{a}}(M\cap H_n))}{{\mathbb {P}}(H_n)}=\frac{{\mathbb {P}}(M\cap H_n)}{{\mathbb {P}}(H_n)}={\mathbb {P}}(M|H_n) \end{aligned}$$

where the second equality is by the fact that \({\hat{a}}\in H_n\Rightarrow {\hat{a}}H_n=H_n\), and the third equality is by the fact that \({\mathbb {P}}\) is a Haar measure. Therefore, applying Lemma 3.5 again to \(H_n\), \({\mathbb {P}}(\cdot |H_n)\) and \({\tilde{g}}_{n+1}\), we obtain that \({\mathbb {P}}\left( {\tilde{g}}_{n+1}^{-1}(\{1\})|H_n\right) \) equals \(1/r_{n+1}\). So we have that

$$\begin{aligned} {\mathbb {P}}(H_{n+1})={\mathbb {P}}(H_n){\mathbb {P}}\big ({\tilde{g}}_{n+1}^{-1}(\{1\})\big |H_n\big )=\frac{1}{\prod _{k=1}^{n+1}r_k} \end{aligned}$$
(41)

which provides conclusion (36). Next, to prove that the \(g_k\)'s are mutually independent, denote by \(J_k\) the range of \(g_k\) in \({\mathbb {T}}\). It suffices to prove that for every \((a_1,\ldots ,a_n)\in \prod _{k=1}^{n}J_k\),

$$\begin{aligned} {\mathbb {P}}\big ((g_1,g_2,\ldots ,g_n)=(a_1,a_2,\ldots ,a_n)\big )=\frac{1}{\prod _{k=1}^{n}r_k}. \end{aligned}$$
(42)

Note that if \((a_1,\ldots ,a_n)\) is in the image of the random vector \((g_1,g_2,\ldots ,g_n)\), then (42) holds by the fact that \({\mathbb {P}}\) is a Haar measure. On the other hand, if the image of \((g_1,g_2,\ldots ,g_n)\) is a proper subset of \(\prod _{k=1}^{n}J_k\), then the probability

$$\begin{aligned} {\mathbb {P}}(X)={\mathbb {P}}\big ((g_1,g_2,\ldots ,g_n)\in \prod _{k=1}^{n}J_k\big )<1 \end{aligned}$$

which yields a contradiction and ends the proof (Fig. 2). \(\square \)
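The independence in Remark 3.13 is visible already in a finite toy model (our illustration, not part of the remark): on \(X={\mathbb {Z}}_{15}\) with uniform probability measure, \(\chi _5\) (order 3) is determined by \(x\bmod 3\) and \(\chi _3\) (order 5) by \(x\bmod 5\), and by the Chinese remainder theorem the pair of values is uniform over all \(3\cdot 5\) combinations.

```python
from collections import Counter

# Finite toy model of Remark 3.13: on X = Z_15 with uniform probability
# measure, the characters chi_5 (order 3) and chi_3 (order 5) have distinct
# prime orders; their values are encoded by (x mod 3, x mod 5).  The remark
# predicts the pair is uniform over all 15 combinations, i.e. independence.
counts = Counter((x % 3, x % 5) for x in range(15))

assert len(counts) == 15            # every value pair occurs (CRT)
assert set(counts.values()) == {1}  # each with probability 1/(r_1 r_2) = 1/15
print("chi_5 and chi_3 are independent on Z_15")
```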

4 The discrete case

Theorem 4.1

If X is a discrete infinite LCA group, we consider the three regions as in Fig. 1:

$$\begin{aligned}&R_1':=\Big \{\Big (\frac{1}{p},\frac{1}{q}\Big )\in [0,\infty )^2:\frac{1}{p}+\frac{1}{q}< 1, \frac{1}{q}< \frac{1}{2}\Big \} \end{aligned}$$
(43)
$$\begin{aligned}&R_2':=\Big \{\Big (\frac{1}{p},\frac{1}{q}\Big )\in [0,\infty )^2:\frac{1}{p}+\frac{1}{q}\ge 1, \frac{1}{p}\ge \frac{1}{2}\Big \} \end{aligned}$$
(44)
$$\begin{aligned}&R_3':=\Big \{\Big (\frac{1}{p},\frac{1}{q}\Big )\in [0,1]^2:\frac{1}{p}< \frac{1}{2}, \frac{1}{q}\ge \frac{1}{2}\Big \}. \end{aligned}$$
(45)

Then the norm of the Fourier operator from \(L^p(X)\) to \(L^q({\hat{X}})\) satisfies

$$\begin{aligned} C_{p,q}={\left\{ \begin{array}{ll} \infty &{} \text{ if }~\Big (\frac{1}{p},\frac{1}{q}\Big )\in R_1'\cup R_3'\\ \hat{\alpha }({\hat{X}})^{1/p+1/q-1} &{} \text{ if }~\Big (\frac{1}{p},\frac{1}{q}\Big )\in R_2'. \end{array}\right. } \end{aligned}$$

Proposition 4.2

If \((1/p,1/q)\in R_2'\), then \(C_{p,q}=\hat{\alpha }({\hat{X}})^{1/p+1/q-1}\).

Proof of Proposition 4.2

We divide \(R_2'\) into two parts: \(R_{21}':= R_2'\cap \{1/q\le 1\}\) and \(R_{22}':=R_2'\setminus R_{21}'\) as in Fig. 3.

We first prove that \(C_{p,q}\le \hat{\alpha }({\hat{X}})^{1/p+1/q-1}\) for all \((1/p,1/q)\in R_2'\), and then prove that this upper bound is attained. First consider the region \(R_{21}'\), which is a subset of \([0,\infty )\times [0,1]\), so we can use the Riesz–Thorin theorem and Corollary 2.7. We begin by proving that \(C_{p,q}\le \hat{\alpha }({\hat{X}})^{1/p+1/q-1}\) for all \((1/p,1/q)\in R_{21}'\). Note that it suffices to prove that for any \(p_0<1\), we have \(C_{p,q}\le \hat{\alpha }({\hat{X}})^{1/p+1/q-1}\) for all \((1/p,1/q)\in R_{21,p_0}':=R_{21}'\cap \{1/p\le 1/{p_0}\}\) as in Fig. 4.

Fig. 2: X discrete LCA group

Fig. 3: X discrete LCA group

Since the region \(R_{21,p_0}'\) is convex,

$$\begin{aligned} R_{21,p_0}'=\text{ hull }\Big ((1,0),(1/{p_0},0),(1/{p_0},1),(1/2,1),(1/2,1/2)\Big ) \end{aligned}$$

and \(\log \hat{\alpha }({\hat{X}})^{1/p+1/q-1}\) is affine in (1/p, 1/q), it suffices by Corollary 2.7 to check that \(C_{p,q}\le \hat{\alpha }({\hat{X}})^{1/p+1/q-1}\) holds at the points \((1,0),(1/{p_0},0),(1/{p_0},1),(1/2,1),(1/2,1/2)\).

At the point (1, 0), we have

$$\begin{aligned} \Vert {\hat{f}}\Vert _\infty\le & {} \sup _{\gamma \in {\hat{X}}}|{\hat{f}}(\gamma )|=\sup _{\gamma \in {\hat{X}}}|\int _Xf(x)\gamma (-x)\alpha (dx)| \le \int _X|f(x)|\alpha (dx)\\= & {} \Vert f\Vert _1 \end{aligned}$$

so \(C_{1,\infty }\le 1\).

At the point \((1/{p_0},0)\), without loss of generality, we assume that \(\Vert f\Vert _{p_0}=\hat{\alpha }({\hat{X}})^{-1/{p_0}}\), therefore \(|f(x)|\le 1\) for all \(x\in X\). We have,

$$\begin{aligned} \Vert {\hat{f}}\Vert _{\infty }\le \Vert f\Vert _1&=\frac{1}{\hat{\alpha }({\hat{X}})}\sum _{x\in X}|f(x)|\\&\le \frac{1}{\hat{\alpha }({\hat{X}})}\sum _{x\in X}|f(x)|^{p_0}=\Vert f\Vert _{p_0}^{p_0}=\hat{\alpha }({\hat{X}})^{1/{p_0}-1}\Vert f\Vert _{p_0} \end{aligned}$$

where the inequality is by the fact that \(|f(x)|\le 1\) for all \(x\in X\) and the fact that \(p_0<1\). So \(C_{p_0,\infty }\le \hat{\alpha }({\hat{X}})^{1/{p_0}-1}\).

At the point \((1/{p_0},1)\), we have

$$\begin{aligned} \Vert {\hat{f}}\Vert _1\le \hat{\alpha }({\hat{X}})\Vert {\hat{f}}\Vert _\infty \le \hat{\alpha }({\hat{X}})C_{p_0,\infty }\Vert f\Vert _{p_0}=\hat{\alpha }({\hat{X}})^{1/{p_0}}\Vert f\Vert _{p_0} \end{aligned}$$

therefore \(C_{p_0,1}\le \hat{\alpha }({\hat{X}})^{1/{p_0}}\).

At the point (1/2, 1), we have, by the Cauchy–Schwarz inequality and Proposition 2.4,

$$\begin{aligned} \Vert {\hat{f}}\Vert _1\le & {} \hat{\alpha }({\hat{X}})^{1/2}\Vert {\hat{f}}\Vert _2=\hat{\alpha }({\hat{X}})^{1/2}\Vert f\Vert _2. \end{aligned}$$

So \(C_{2,1}\le \hat{\alpha }({\hat{X}})^{1/2}\).

At the point (1/2, 1/2), we have that \(C_{2,2}=1\) by Proposition 2.4, which provides \(C_{p,q}\le \hat{\alpha }({\hat{X}})^{1/p+1/q-1}\) for all \((1/p,1/q)\in R_{21}'\).
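Two of these endpoint bounds are easy to sanity-check numerically in the finite model \(X={\mathbb {Z}}_N\) (a toy experiment, not the proof; the choice N = 16 and the random vectors are ours), with Haar mass 1/N per point of X and counting measure on the N characters, so \(\hat{\alpha }({\hat{X}})=N\):

```python
import cmath, random

# Sanity check of the endpoint bounds C_{1,infty} <= 1 and C_{2,1} <= N^(1/2)
# on X = Z_N, for random vectors f.  Normalization: Haar mass 1/N per point
# of X, counting measure on the N characters.
random.seed(0)
N = 16

def dft(f):
    # f^(chi_j) = (1/N) * sum_x f(x) chi_j(-x)
    return [sum(f[x] * cmath.exp(-2j * cmath.pi * j * x / N) for x in range(N)) / N
            for j in range(N)]

for _ in range(50):
    f = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
    fh = dft(f)
    norm1 = sum(abs(v) for v in f) / N
    norm2 = (sum(abs(v) ** 2 for v in f) / N) ** 0.5
    assert max(abs(v) for v in fh) <= norm1 + 1e-9             # C_{1,infty} <= 1
    assert sum(abs(v) for v in fh) <= N ** 0.5 * norm2 + 1e-9  # C_{2,1} <= N^(1/2)
print("endpoint bounds hold for 50 random vectors on Z_16")
```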

Next, consider the region \(R_{22}'\). For any \((1/p,1/q)\in R_{22}'\), we have, by the Cauchy–Schwarz inequality,

$$\begin{aligned} \Vert {\hat{f}}\Vert _q\le \hat{\alpha }({\hat{X}})^{1/q-1}\Vert {\hat{f}}\Vert _1\le \hat{\alpha }({\hat{X}})^{1/q-1}C_{p,1}\Vert f\Vert _p. \end{aligned}$$
(46)

We have that \((1/p,1)\in R_{21}'\), hence by our conclusion in \(R_{21}'\), \(C_{p,1}\le \hat{\alpha }({\hat{X}})^{1/p}\), therefore by (46), we have

$$\begin{aligned} \Vert {\hat{f}}\Vert _q\le \hat{\alpha }({\hat{X}})^{1/p+1/q-1}\Vert f\Vert _p \end{aligned}$$

which provides that \(C_{p,q}\le \hat{\alpha }({\hat{X}})^{1/p+1/q-1}\) for all \((1/p,1/q)\in R_2'\).

Further, the upper bound \(\hat{\alpha }({\hat{X}})^{1/p+1/q-1}\) is attained by any time-basis element (i.e. a delta function on X). Let \(f(x):={\mathbb {I}}_{x_0}(x)\) for some \(x_0\in X\); then \(\Vert f\Vert _p=\hat{\alpha }({\hat{X}})^{-1/p}\), and for any \(\gamma \in {\hat{X}}\),

$$\begin{aligned} {\hat{f}}(\gamma )=\frac{1}{\hat{\alpha }({\hat{X}})}\gamma (-x_0) \end{aligned}$$

which yields \(\Vert {\hat{f}}\Vert _q=\hat{\alpha }({\hat{X}})^{1/q-1}\). This provides \(C_{p,q}\ge \hat{\alpha }({\hat{X}})^{1/p+1/q-1}\) for all \((1/p,1/q)\in R_2'\) and ends the proof of Proposition 4.2. \(\square \)
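The delta-function computation can be replayed on the finite model \(X={\mathbb {Z}}_N\) (an illustration with the same normalization as the sketch above; N, \(x_0\), p, q are arbitrary choices of ours):

```python
import cmath

# On X = Z_N (Haar mass 1/N per point, alpha^(X^) = N), the delta function
# f = indicator of {x0} should attain ||f^||_q / ||f||_p = N^(1/p + 1/q - 1).
N, x0, p, q = 12, 5, 1.5, 0.5

f = [1.0 if x == x0 else 0.0 for x in range(N)]
fhat = [sum(f[x] * cmath.exp(-2j * cmath.pi * j * x / N) for x in range(N)) / N
        for j in range(N)]

norm_p = (sum(abs(v) ** p for v in f) / N) ** (1 / p)   # = N^(-1/p)
norm_q = sum(abs(v) ** q for v in fhat) ** (1 / q)      # = N^(1/q - 1)
assert abs(norm_p - N ** (-1 / p)) < 1e-12
assert abs(norm_q / norm_p - N ** (1 / p + 1 / q - 1)) < 1e-6
print("ratio:", norm_q / norm_p, "= N^(1/p+1/q-1) =", N ** (1 / p + 1 / q - 1))
```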

Proposition 4.3

If \((1/p,1/q)\in R_1'\), then \(C_{p,q}=\infty \).

Since X is discrete, as in the proof of Proposition 3.3, we have three cases:

  • Case 1. X has an element with order infinity.

  • Case 2. Every element of X has a finite order, and the order set is not bounded.

  • Case 3. Every element of X has a finite order, and the order set is uniformly bounded.

Similarly, we assume that the Haar measure on \({\hat{X}}\) is a probability measure \({\mathbb {P}}\).

Proof of Proposition 4.3, Case 1

Assume that g is an element of X of infinite order, set \(G:=[g]\), and define:

$$\begin{aligned} U_k:=\{{\hat{x}}\in {\hat{X}}:~{\hat{x}}(g)\in (e^{-\pi i/(3k)},e^{\pi i/(3k)})\}. \end{aligned}$$

Then similarly \(U_k\) is decreasing. We treat g as an element of \(\hat{{\hat{X}}}\); note that \(g({\hat{x}}):={\hat{x}}(g)\), where \(g\in X\simeq \hat{{\hat{X}}}\). Hence Lemmas 3.4 and 3.5 remain valid with \({\hat{X}}\) in place of X. Then we define a sequence of functions \(f_k\) on X by

$$\begin{aligned} f_k:=\sum _{j=1}^k {\mathbb {I}}_{g^j}. \end{aligned}$$

Thus \(\Vert f_k\Vert _p=k^{1/p}\). On the other hand, for every \({\hat{x}}\in {\hat{X}}\),

$$\begin{aligned} \widehat{f_k}({\hat{x}})=\sum _{j=1}^k {\hat{x}}(g^{-j})=\sum _{j=1}^k (g^j({\hat{x}}))^{-1}. \end{aligned}$$

Moreover, we have for every \({\hat{x}}\in U_k\), and \(1\le j\le k\),

$$\begin{aligned} Re\left( \left( g^j({\hat{x}})\right) ^{-1}\right) =Re\left( {\hat{x}}(g^{-j})\right) =Re\left( \left( {\hat{x}}(g)\right) ^{-j}\right) >1/2 \end{aligned}$$
(47)

which yields

$$\begin{aligned} \Vert \widehat{f_k} \Vert _q= & {} \left( \int _{{\hat{X}}} \Big | \sum _{j=1}^k \left( g^j({\hat{x}})\right) ^{-1}\Big |^q \hat{\alpha }(d{\hat{x}})\right) ^{1/q} \ge \left( \int _{U_k} \Big | \sum _{j=1}^k \left( g^j({\hat{x}})\right) ^{-1}\Big |^q \hat{\alpha }(d{\hat{x}})\right) ^{1/q}\\\ge & {} \left( \int _{U_k} \Big | \sum _{j=1}^k Re\left( (g^j({\hat{x}}))^{-1}\right) \Big |^q \hat{\alpha }(d{\hat{x}})\right) ^{1/q}\\\ge & {} \frac{k{\mathbb {P}}(U_k)^{1/q}}{2}~~(\text{ by }\,(47))\\= & {} \frac{k^{1-1/q}}{2\cdot 3^{1/q}}~~(\text{ by } \text{ Lemma }\,3.4\,\text{ and } \text{ Lemma }\,3.5). \end{aligned}$$

Therefore

$$\begin{aligned} \frac{\Vert \widehat{f_k} \Vert _q}{\Vert f_k\Vert _p}\ge \frac{k^{1-1/p-1/q}}{2\cdot 3^{1/q}}\rightarrow \infty \end{aligned}$$

as \(k\rightarrow \infty \), which ends the proof of Case 1. \(\square \)
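In the concrete model \(X={\mathbb {Z}}\), \({\hat{X}}\cong {\mathbb {T}}\) (a numeric illustration of the bound, not the proof), \(\widehat{f_k}(\theta )=\sum _{j=1}^k e^{-ij\theta }\) is a Dirichlet-type kernel, and the lower bound \(\Vert \widehat{f_k}\Vert _q\ge k^{1-1/q}/(2\cdot 3^{1/q})\) can be checked by a Riemann sum (the grid size and the value q = 1.5 are our choices):

```python
import cmath, math

# Numeric check of the Case 1 lower bound in the model X = Z, X^ = circle
# with probability measure: f_k^(theta) = sum_{j=1}^k e^(-ij theta), and the
# proof gives ||f_k^||_q >= k^(1 - 1/q) / (2 * 3^(1/q)).
def lq_norm(k, q, M=4000):
    total = 0.0
    for m in range(M):                      # Riemann sum over [0, 2 pi)
        theta = 2 * math.pi * m / M
        s = sum(cmath.exp(-1j * j * theta) for j in range(1, k + 1))
        total += abs(s) ** q
    return (total / M) ** (1 / q)

q = 1.5
for k in (5, 10, 20):
    bound = k ** (1 - 1 / q) / (2 * 3 ** (1 / q))
    assert lq_norm(k, q) >= bound
print("lower bound k^(1-1/q)/(2*3^(1/q)) verified for k = 5, 10, 20")
```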

Proof of Proposition 4.3, Case 2

If the orders of the elements of X are not bounded, then take a sequence \(g_n\) in X such that each \(g_n\) has order \(m_n\) and \(m_n\nearrow \infty \). Define

$$\begin{aligned} U_n:=\{{\hat{x}}\in {\hat{X}}:~{\hat{x}}(g_n)=1 \}. \end{aligned}$$

Hence we have that for every \({\hat{x}}\in U_n\) and every \(1\le j\le m_n\),

$$\begin{aligned} \big (g_n^j({\hat{x}})\big )^{-1}=\big ({\hat{x}}(g_n)\big )^{-j}=1. \end{aligned}$$
(48)

Define

$$\begin{aligned} f_n:=\sum _{j=1}^{m_n}{\mathbb {I}}_{g_n^j}. \end{aligned}$$

Hence \(\Vert f_n\Vert _p={m_n}^{1/p}\). On the other hand, for every \({\hat{x}}\in {\hat{X}}\),

$$\begin{aligned} \widehat{f_n}({\hat{x}})=\sum _{j=1}^{m_n}\big (g_n^j({\hat{x}})\big )^{-1}. \end{aligned}$$

Thus the \(L_q\) norm of \(\widehat{f_n}\) satisfies

$$\begin{aligned} \Vert \widehat{f_n}\Vert _q= & {} \left( \int _{{\hat{X}}}\Big |\sum _{j=1}^{m_n}\left( g_n^j({\hat{x}})\right) ^{-1}\Big |^q\hat{\alpha }(d{\hat{x}})\right) ^{1/q} \ge \left( \int _{U_n}\Big |\sum _{j=1}^{m_n}\left( g_n^j({\hat{x}})\right) ^{-1}\Big |^q\hat{\alpha }(d{\hat{x}})\right) ^{1/q}\\= & {} m_n\hat{\alpha }(U_n)^{1/q}~~(\text{ by }\,(48))\\= & {} (m_n)^{1-1/q}~~(\text{ by } \text{ Lemma }\,3.4\,\text{ and } \text{ Lemma }\,3.5). \end{aligned}$$

So we have

$$\begin{aligned} \frac{\Vert \widehat{f_n}\Vert _q}{\Vert f_n\Vert _p}\ge (m_n)^{1-1/p-1/q}\rightarrow \infty \end{aligned}$$

as \(n\rightarrow \infty \), which ends the proof of Case 2. \(\square \)

Proof of Proposition 4.3, Case 3

Suppose that the orders of all elements in X are uniformly bounded. Treating X as \(\hat{{\hat{X}}}\), Lemmas 3.4 through 3.11 remain valid. Define r, \(\{g_k\}\), \(\{M_n\}\), \(\{H_n\}\) as in the proof of Proposition 3.3, Case 3, with X in place of \({\hat{X}}\). Define

$$\begin{aligned} f_n:={\mathbb {I}}_{M_n}. \end{aligned}$$

Then we have \(\Vert f_n\Vert _p=(r^n)^{1/p}\) by Lemma 3.8. On the other hand, for every \({\hat{x}}\in {\hat{X}}\),

$$\begin{aligned} \widehat{f_n}({\hat{x}})=\sum _{g\in M_n}{\hat{x}}(-g). \end{aligned}$$

Thus we have

$$\begin{aligned} \Vert \widehat{f_n}\Vert _q =\left( \int _{{\hat{x}}\in {\hat{X}}}\Big |\sum _{g\in M_n}{\hat{x}}(-g)\Big |^q{\mathbb {P}}(d{\hat{x}})\right) ^{1/q} \ge \left( \int _{{\hat{x}}\in H_n}\Big |\sum _{g\in M_n}{\hat{x}}(-g)\Big |^q{\mathbb {P}}(d{\hat{x}})\right) ^{1/q}.\nonumber \\ \end{aligned}$$
(49)

Since every \(g\in M_n\) is trivial on \(H_n\), by Lemmas 3.7, 3.8 and (49),

$$\begin{aligned} \Vert \widehat{f_n}\Vert _q \ge \left( {\mathbb {P}}(H_n)\right) ^{1/q}r^n= (r^n)^{1-1/q} \end{aligned}$$

which yields

$$\begin{aligned} \frac{\Vert \widehat{f_n}\Vert _q}{\Vert f_n\Vert _p}\ge (r^n)^{1-1/p-1/q}\rightarrow \infty \end{aligned}$$

as \(n\rightarrow \infty \), which ends the proof of Case 3 and provides Proposition 4.3. \(\square \)

Proposition 4.4

If \((1/p,1/q)\in R_3'\), then \(C_{p,q}=\infty \).

We will actually prove a stronger conclusion: \(C_{p,q}=\infty \) for all \(p>2\) and \(q>0\). We still consider the three cases provided in the proof of Proposition 4.3, and assume that the Haar measure on \({\hat{X}}\) is a probability measure.

Proof of Proposition 4.4, Case 1

Assume that g is an element of X of infinite order. We treat g as an element of \(\hat{{\hat{X}}}\); then Lemmas 3.4 and 3.5 remain valid with \({\hat{X}}\) in place of X. Define for every \({\hat{x}}\in {\hat{X}}\),

$$\begin{aligned} g({\hat{x}}):=\exp \{ib({\hat{x}})\} \end{aligned}$$
(50)

where \(b:{\hat{X}}\rightarrow [0,2\pi ]\) is a continuous random variable with uniform distribution by Lemmas 3.4 and 3.5. Define a sequence of functions on X:

$$\begin{aligned} f_n:=\sum _{k=1}^n\frac{1}{\sqrt{k}} {\mathbb {I}}_{g^{-2^k}}. \end{aligned}$$

Thus \(\Vert f_n\Vert _p\) is uniformly bounded, by the fact that \(\sum (1/n)^{p/2}<\infty \) for \(p>2\), while \(\Vert \widehat{f_n}\Vert _2=\Vert f_n\Vert _2\rightarrow \infty \). We have, noting that \(g^{2^k}({\hat{x}}):={\hat{x}}(g^{2^k})\),

$$\begin{aligned} \widehat{f_n}({\hat{x}})=\sum _{k=1}^n\frac{1}{\sqrt{k}}g^{2^k}({\hat{x}})=\sum _{k=1}^n\frac{1}{\sqrt{k}}e^{i2^kb({\hat{x}})}. \end{aligned}$$

Consider the real part:

$$\begin{aligned} Re\left( \widehat{f_n}({\hat{x}})\right) =\sum _{k=1}^n\frac{1}{\sqrt{k}}\cos \left( 2^kb({\hat{x}})\right) =:P_n\left( b({\hat{x}})\right) \end{aligned}$$

where \(P_n(y):=\sum _{k=1}^n\frac{1}{\sqrt{k}}\cos \left( {2^ky}\right) \) is a lacunary series with \(\Lambda =2\), and \(P_n\), \(A_n:=\left( \frac{1}{2}\sum _{k=1}^n\frac{1}{k}\right) ^{1/2}\) satisfy condition (9) in Theorem 2.9. Thus, applying Theorem 2.9 with \(E:=(0,2\pi )\), we have that

$$\begin{aligned} \frac{\lambda \left\{ y\in (0,2\pi ):~P_n(y)/A_n\ge 1\right\} }{2\pi }\rightarrow \frac{1}{\sqrt{2\pi }} \int _1^{\infty }e^{-x^2/2}dx>\frac{1}{2\sqrt{2\pi }}e^{-1/2}. \end{aligned}$$
(51)

Therefore, by Lemmas 3.4 and 3.5, the probability measure \(\lambda '\) induced by g is \(\lambda '=\lambda /2\pi \); hence we have

$$\begin{aligned} {\mathbb {P}}\left( {\hat{x}}\in {\hat{X}}:~b({\hat{x}})\in (0,2\pi ),~P_n\left( b({\hat{x}})\right) \ge A_n\right)= & {} \lambda '\left\{ y\in (0,2\pi ):~P_n(y)\ge A_n\right\} \\\rightarrow & {} \frac{1}{\sqrt{2\pi }} \int _1^{\infty }e^{-x^2/2}dx>\frac{1}{2\sqrt{2\pi }}e^{-1/2}. \end{aligned}$$

This means that there exists N large enough such that for any \(n\ge N\),

$$\begin{aligned} {\mathbb {P}}\left( {\hat{x}}\in {\hat{X}}:~|\widehat{f_n}({\hat{x}})|{\ge } A_n\right) {\ge } {\mathbb {P}}\left( {\hat{x}}\in {\hat{X}}:~P_n\left( b({\hat{x}})\right) {\ge } A_n\right) {\ge } \frac{1}{2\sqrt{2\pi }}e^{-1/2}>0.\quad \end{aligned}$$
(52)

Thus, since \(A_n\rightarrow \infty \), inequality (52) shows that there is always a subset of \({\hat{X}}\) of measure at least \(\frac{1}{2\sqrt{2\pi }}e^{-1/2}\) on which \(|\widehat{f_n}|\) is arbitrarily large as \(n\rightarrow \infty \). So \(\Vert \widehat{f_n}\Vert _q\rightarrow \infty \), which ends the proof for Case 1. \(\square \)
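The threshold behavior in (51) and (52) can be illustrated numerically. The following sketch (an illustration only, not part of the proof; n is kept moderate so the frequencies \(2^k\) stay well inside double precision, and the grid size is odd so that dyadic frequencies equidistribute on it) estimates the fraction of \((0,2\pi )\) on which \(P_n\ge A_n\):

```python
import math

# Numerical illustration of (51)-(52) (not part of the proof): estimate the
# fraction of (0, 2*pi) on which the lacunary sum P_n(y) >= A_n.  We keep n
# moderate so that the frequencies 2^k stay well inside double precision,
# and use an odd grid size so the dyadic frequencies equidistribute on it.

def P(n, y):
    return sum(math.cos((1 << k) * y) / math.sqrt(k) for k in range(1, n + 1))

n, M = 15, 120_001
A_n = math.sqrt(0.5 * sum(1.0 / k for k in range(1, n + 1)))
fraction = sum(1 for j in range(M) if P(n, 2.0 * math.pi * j / M) >= A_n) / M
# Limit predicted by Theorem 2.9: P(N(0,1) >= 1) ~ 0.1587; bound in (51):
lower_bound = math.exp(-0.5) / (2.0 * math.sqrt(2.0 * math.pi))
print(round(fraction, 3), round(lower_bound, 3))
```

The observed fraction is close to \(P({\mathcal {N}}(0,1)\ge 1)\approx 0.1587\), comfortably above the bound \(\frac{1}{2\sqrt{2\pi }}e^{-1/2}\approx 0.1210\).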

Proof of Proposition 4.4, Case 2

For this case, Lemmas 3.4 and 3.5 are valid. Choose a sequence \(g_n\) in X such that each \(g_n\) has order \(m_n\) with \(m_n\nearrow \infty \). It is then easy to verify that the distribution functions of the \(g_n\) on \([0,2\pi ]\) converge uniformly to \(F(x)=x/2\pi \), which means that \(g_n\xrightarrow {d} U\), where U is a random variable from some probability space to \([0,2\pi ]\) with uniform distribution \({\mathbb {P}}_U\). We first claim that for

$$\begin{aligned} P_n(y):= & {} \sum _{k=1}^n\frac{1}{\sqrt{k}}\cos \left( 2^ky\right) \end{aligned}$$
(53)
$$\begin{aligned} A_n:= & {} \left( \frac{1}{2}\sum _{k=1}^n\frac{1}{k}\right) ^{1/2} \end{aligned}$$
(54)

we have that

$$\begin{aligned} {\mathbb {P}}_U(P_n(U)\ge A_n)\rightarrow \frac{1}{\sqrt{2\pi }} \int _1^{\infty }e^{-x^2/2}dx>\frac{1}{2\sqrt{2\pi }}e^{-1/2}. \end{aligned}$$
(55)

In fact, it is easy to verify that \(P_n\) is a lacunary series with \(\Lambda =2\) in the sense of Definition 2.8 and that the \(A_n\) satisfy condition (9) in Theorem 2.9. Thus (55) holds, which means that

$$\begin{aligned} {\mathbb {P}}_U(P_{n_0}(U)\ge A_{n_0})\ge \frac{1}{2\sqrt{2\pi }}e^{-1/2}~\text{ for } \text{ all } n_0~\text{ large } \text{ enough }. \end{aligned}$$
(56)

Fix this \(n_0\). On the other hand, we set \(g_n({\hat{x}}):=e^{ib_n({\hat{x}})}\) with \(b_n:{\hat{X}}\rightarrow [0,2\pi )\) a continuous random variable; thus \(b_n\xrightarrow {d} U\) as \(n\rightarrow \infty \). Define, for n large enough that \(m_n>2^{n_0}\), a sequence of functions on X:

$$\begin{aligned} f_{n,n_0}:=\sum _{k=1}^{n_0}\frac{1}{\sqrt{k}}{\mathbb {I}}_{g_n^{-2^k}} \end{aligned}$$

thus for \(p>2\), \(\Vert f_{n,n_0}\Vert _p\) is uniformly bounded and

$$\begin{aligned} \widehat{f_{n,n_0}}({\hat{x}})=\sum _{k=1}^{n_0}\frac{1}{\sqrt{k}} g_n^{2^k}({\hat{x}})=\sum _{k=1}^{n_0}\frac{1}{\sqrt{k}}e^{2^k ib_n({\hat{x}})}. \end{aligned}$$

Consider the real part of \(\widehat{f_{n,n_0}}({\hat{x}})\):

$$\begin{aligned} Re\left( \widehat{f_{n,n_0}}({\hat{x}})\right) =\sum _{k=1}^{n_0}\frac{1}{\sqrt{k}} \cos \left( 2^k b_n({\hat{x}})\right) =P_{n_0}\left( b_n({\hat{x}})\right) \end{aligned}$$
(57)

where \(P_{n_0}\) is defined as in (53). Therefore, for fixed \(n_0\), since \(P_{n_0}\) is continuous, \(P_{n_0}\left( b_n({\hat{x}})\right) \xrightarrow {d}P_{n_0}(U)\) as \(n\rightarrow \infty \). Therefore

$$\begin{aligned} {\mathbb {P}}\left( P_{n_0}\left( b_n({\hat{x}})\right) \ge A_{n_0}\right) \rightarrow {\mathbb {P}}_U(P_{n_0}(U)\ge A_{n_0})\ge \frac{1}{2\sqrt{2\pi }}e^{-1/2} \end{aligned}$$
(58)

as \(n\rightarrow \infty \). This means that for any \(n_0\) large enough, there is an \(n(n_0)\) large enough such that for all \(n\ge n(n_0)\),

$$\begin{aligned} {\mathbb {P}}\left( P_{n_0}\left( b_n({\hat{x}})\right) \ge A_{n_0}\right) \ge \frac{1}{4\sqrt{2\pi }}e^{-1/2}>0. \end{aligned}$$
(59)

Hence, for any \(n_0\) large enough, there is an \(n(n_0)\in {\mathbb {Z}}^+\) such that

$$\begin{aligned} {\mathbb {P}}\left( |\widehat{f_{n(n_0),n_0}}|\ge A_{n_0}\right) \ge {\mathbb {P}}\left( P_{n_0}\left( b_{n(n_0)}({\hat{x}})\right) \ge A_{n_0}\right) \ge \frac{1}{4\sqrt{2\pi }}e^{-1/2}>0. \end{aligned}$$
(60)

Therefore by (60), for any \(q>0\),

$$\begin{aligned} \Vert \widehat{f_{n(n_0),n_0}}\Vert _q\ge \left( \frac{(A_{n_0})^q}{4\sqrt{2\pi }}e^{-1/2} \right) ^{1/q} \end{aligned}$$

which can be made arbitrarily large since \(A_{n_0}\rightarrow \infty \) as \(n_0\rightarrow \infty \). This ends the proof of Case 2. \(\square \)
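The approximation step behind (58), replacing the continuous uniform variable U by the essentially discrete angle variable \(b_n\), can also be illustrated numerically: the probability computed under the uniform distribution on m discrete angles stabilizes as m grows. A sketch with a hypothetical fixed cutoff \(n_0=10\):

```python
import math

# Sketch of the approximation step behind (58): when g_n has order m, the
# angle b_n is uniform on m discrete values, and the probability that
# P_{n0}(b_n) >= A_{n0} stabilizes, as m grows, at its value under the
# continuous uniform distribution.  n0 = 10 is a hypothetical fixed cutoff.

n0 = 10

def P(y):
    return sum(math.cos((1 << k) * y) / math.sqrt(k) for k in range(1, n0 + 1))

A = math.sqrt(0.5 * sum(1.0 / k for k in range(1, n0 + 1)))

def prob(m):  # probability under the uniform distribution on m discrete angles
    return sum(1 for j in range(m) if P(2.0 * math.pi * j / m) >= A) / m

p_coarse, p_fine = prob(10_001), prob(80_001)
print(round(p_coarse, 3), round(p_fine, 3))
```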

Remark 4.5

The method we used for the proofs of Cases 1 and 2 is valid for all \(q>0\). For \(q\ge 1\), there is an even better method. The crucial idea is to use the equivalence of the \(L^p\) (\(p\ge 1\)) norms of lacunary Fourier series mentioned in [19], p. 240, Theorem 3.7.4, which says that for \(q\ge 1\), the \(L^q\) norms of lacunary Fourier series with \(\Lambda \) as in Definition 2.8 are equivalent, with bounds between the \(L^q\) norms depending only on q and \(\Lambda \). For Case 1, we define a sequence of functions on X:

$$\begin{aligned} f_n:=\sum _{k=1}^n\frac{1}{\sqrt{k}} {\mathbb {I}}_{g^{-2^k}}. \end{aligned}$$

Thus the norms \(\Vert f_n\Vert _p\) are uniformly bounded by the fact that \(\sum (1/n)^{p/2}<\infty \) for \(p>2\), while \(\Vert \widehat{f_n}\Vert _2=\Vert f_n\Vert _2\rightarrow \infty \). Noting that \(g^{2^k}({\hat{x}}):={\hat{x}}(g^{2^k})\), we have

$$\begin{aligned} \widehat{f_n}({\hat{x}})=\sum _{k=1}^n\frac{1}{\sqrt{k}}g^{2^k}({\hat{x}})=\sum _{k=1}^n\frac{1}{\sqrt{k}}e^{i2^kb({\hat{x}})}=:P_n(b({\hat{x}})) \end{aligned}$$

where \(P_n(y):=\sum _{k=1}^n\frac{1}{\sqrt{k}}e^{i2^ky}\). It is then easy to check that \(P_n\) is a lacunary Fourier series whose \(L^2\) norms tend to infinity. On the other hand, by Lemmas 3.4 and 3.5 and Theorem 3.7.4 in [19], p. 240, there exists a constant \(C_2(2)\) such that

$$\begin{aligned} \Vert \widehat{f_n}\Vert _1&={\mathbb {E}}\left( \Big |\sum _{k=1}^n\frac{1}{\sqrt{k}}g^{2^k}\Big |\right) ={\mathbb {E}}\left( \big |P_n(b)\big |\right) =\Vert P_n\Vert _{L^1({\mathbb {T}})}\\&\ge \left( C_2(2)\right) ^{-1}\Vert P_n\Vert _{L^2({\mathbb {T}})} =\left( C_2(2)\right) ^{-1}\left( {\mathbb {E}}\left( \big |P_n(b)\big |^2\right) \right) ^{1/2}=\left( C_2(2)\right) ^{-1}\Vert \widehat{f_n}\Vert _2\rightarrow \infty \end{aligned}$$

as \(n\rightarrow \infty \); moreover, by Hölder's inequality, \(\Vert \widehat{f_n}\Vert _q\rightarrow \infty \) for all \(q\ge 1\), which provides the proof for Case 1.

For Case 2, we take a sequence \(g_n\) in X such that each \(g_n\) has order \(m_n\) with \(m_n\nearrow \infty \). Define a sequence of functions, for n large enough that \(m_n>2^N\),

$$\begin{aligned} f_{n,N}:=\sum _{k=1}^N\frac{1}{\sqrt{k}} {\mathbb {I}}_{g_n^{-2^k}} \end{aligned}$$
(61)

for some fixed \(N\in {\mathbb {Z}}^+\) and

$$\begin{aligned} \widehat{f_{n,N}}({\hat{x}})=\sum _{k=1}^N\frac{1}{\sqrt{k}} g_n^{2^k}({\hat{x}})=:P_N(g_n) \end{aligned}$$
(62)

where

$$\begin{aligned} P_N(y):=\sum _{k=1}^N\frac{1}{\sqrt{k}} y^{2^k}. \end{aligned}$$

Therefore \(\Vert f_{n,N}\Vert _p\) for \(p>2\) is uniformly bounded (independently of n and N) and \(\Vert f_{n,N}\Vert _2\sim \sqrt{\log N}\). Note that it is easy to verify that the distribution functions of the \(g_n\) on \([0, 2\pi ]\) converge uniformly to the distribution function of the uniform distribution U, which means that \(g_n\) converges to U in distribution. Therefore, for fixed N in (61) and (62), the Fourier transform \(P_N(g_n)\xrightarrow {d} P_N(U)\) by the continuity of \(P_N\). Furthermore, for fixed N, \(|P_N(g_n)|\) and \(|P_N(U)|\) are uniformly bounded by \(\sum _{k=1}^N 1/\sqrt{k}\), so \({\mathbb {E}}\left( |P_N(g_n)|\right) \rightarrow {\mathbb {E}} \left( |P_N(U)|\right) \) as \(n\rightarrow \infty \). On the other hand, \(P_N(e^{2\pi ix})\) is a lacunary Fourier series on \({\mathbb {T}}\), so by Theorem 3.7.4 in [19], p. 240,

$$\begin{aligned} \Vert \widehat{f_{n,N}}\Vert _1&={\mathbb {E}}\left( |P_N(g_n)|\right) \rightarrow {\mathbb {E}} \left( |P_N(U)|\right) =\Vert P_N\Vert _{L^1({\mathbb {T}})}\\&\ge \left( C_2(2)\right) ^{-1}\Vert P_N\Vert _{L^2({\mathbb {T}})}=\left( C_2(2)\right) ^{-1}\left( \sum _{k=1}^N\frac{1}{k}\right) ^{1/2}\sim \sqrt{\log N}. \end{aligned}$$

This means that for every N, \(\Vert \widehat{f_{n,N}}\Vert _1\), and hence \(\Vert \widehat{f_{n,N}}\Vert _q\) for \(q\ge 1\), is at least of order \(\sqrt{\log N}\) for n large enough, while \(\Vert f_{n,N}\Vert _p\) for \(p>2\) remains uniformly bounded. This provides the proof of Case 2.
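The norm equivalence invoked from [19] can be observed numerically: the ratio \(\Vert P_N\Vert _{L^2({\mathbb {T}})}/\Vert P_N\Vert _{L^1({\mathbb {T}})}\) stays bounded as N grows, while both norms increase. A rough sketch (an illustration only) approximating both norms by grid averages:

```python
import cmath, math

# Numerical look at the norm equivalence from [19] used above: for the
# lacunary series P_N(y) = sum_{k=1}^N exp(i 2^k y)/sqrt(k), the ratio
# ||P_N||_2 / ||P_N||_1 over the circle stays bounded while both norms grow.
# Norms are approximated by averages over an odd M-point grid.

def norms(N, M=40_001):
    l1 = l2 = 0.0
    for j in range(M):
        y = 2.0 * math.pi * j / M
        v = abs(sum(cmath.exp(1j * ((1 << k) * y)) / math.sqrt(k)
                    for k in range(1, N + 1)))
        l1 += v
        l2 += v * v
    return l1 / M, math.sqrt(l2 / M)

results = {N: norms(N) for N in (4, 8, 12)}
ratios = [l2 / l1 for (l1, l2) in results.values()]
print([round(r, 3) for r in ratios])
```

The ratios stay near 1.1 (the complex-Gaussian heuristic predicts \(2/\sqrt{\pi }\approx 1.13\)), while \(\Vert P_N\Vert _2\approx (\sum _{k\le N}1/k)^{1/2}\) grows.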

Proof of Proposition 4.4, Case 3

Suppose the orders of all elements of X are uniformly bounded; then Lemmas 3.4 through 3.11 are all valid. We define r, \(\{g_k\}\) and \(H_n\) as in the proof of Proposition 4.3, Case 3, except that \({\hat{X}}\) is replaced by X. Then by Lemma 3.7, the \(g_k\) are mutually independent and identically distributed. Define a sequence of functions on X:

$$\begin{aligned} f_n:=\sum _{k=1}^n\frac{1}{\sqrt{k}}{\mathbb {I}}_{g_k^{-1}}. \end{aligned}$$
(63)

Thus the norms \(\Vert f_n\Vert _p\) are uniformly bounded by the fact that \(\sum (1/n)^{p/2}<\infty \) for \(p>2\), and

$$\begin{aligned} \widehat{f_n}({\hat{x}})=\sum _{k=1}^n\frac{1}{\sqrt{k}}g_k({\hat{x}}) \end{aligned}$$
(64)

by noting that \(g_k({\hat{x}}):={\hat{x}}(g_k)\). We will use the Lyapunov Central Limit Theorem (Theorem 2.10) to prove that there exists a function h(n) with \(h(n)\nearrow \infty \) as \(n\rightarrow \infty \) such that, for every n large enough,

$$\begin{aligned} {\mathbb {P}}\big (Re(\widehat{f_n})\ge h(n)\big )\ge \frac{1}{2\sqrt{2\pi }}e^{-1/2}. \end{aligned}$$
(65)

This means that for every n large enough, we have a measurable set in \({\hat{X}}\) of fixed positive probability on which the real part of the partial sum on the right-hand side of (64) exceeds \(h(n)\rightarrow \infty \). Therefore \(\Vert \widehat{f_n}\Vert _q\rightarrow \infty \) for any \(q\ge 1\), which provides the proof of Case 3. Define \(X_k:=Re(g_k)\); then the \(X_k\) are i.i.d. by Lemma 3.7, with \({\mathbb {E}}(X_k)=0\) by the fact that \({\mathbb {E}}(g_k)=0\). Denote \(\sigma ^2:=\text{ Var }(X_k)\), which is positive since \(X_k\) is not identically 0. Define h(n) by

$$\begin{aligned} \big (h(n)\big )^2:=\sum _{k=1}^n\text{ Var }(\frac{X_k}{\sqrt{k}})=\sigma ^2\sum _{k=1}^n\frac{1}{k}\sim \sigma ^2 \log n. \end{aligned}$$
(66)

Therefore we have that \(h(n)\nearrow \infty \) as \(n\rightarrow \infty \). Define \(Y_k:=X_k/\sqrt{k}\), hence \({\mathbb {E}}(Y_k)=0\) and \(\text{ Var }(Y_k)=\sigma ^2/k\) and

$$\begin{aligned} Re\big (\widehat{f_n}\big )=\sum _{k=1}^nY_k. \end{aligned}$$
(67)

We now verify the Lyapunov condition (11). Substitute \(\mu _k=0\), \(s_n=h(n)\) and \(\sigma _k^2=\sigma ^2/k\), and denote \(\sigma ':={\mathbb {E}}(|X_k|^{2+\delta })\) for some \(\delta >0\); then \(0<\sigma '<\infty \) by the fact that \(X_k\) is bounded. We have

$$\begin{aligned} \frac{1}{s_n^{2+\delta }}\sum _{k=1}^n{\mathbb {E}}\left( |Y_k-\mu _k|^{2+\delta }\right)&=\frac{1}{\big (h(n)\big )^{2+\delta }}\sum _{k=1}^n{\mathbb {E}}\Big (\Big |\frac{X_k}{\sqrt{k}}\Big |^{2+\delta }\Big )\\&= \frac{\sigma '}{\big (h(n)\big )^{2+\delta }}\sum _{k=1}^n\frac{1}{k^{1+\delta /2}}\rightarrow 0~\text{ as }~n\rightarrow \infty \end{aligned}$$

by the fact that \(h(n)\nearrow \infty \) and that the series \(\sum _{k}\frac{1}{k^{1+\delta /2}}\) converges for any \(\delta >0\); this establishes (11). Thus, applying the Lyapunov CLT (12), we have

$$\begin{aligned} {\mathbb {P}}\big (Re(\widehat{f_n})\ge h(n)\big )&={\mathbb {P}}\Big (\sum _{k=1}^n Y_k\ge h(n)\Big )\\&= {\mathbb {P}}\Big (\frac{1}{h(n)}\sum _{k=1}^n Y_k\ge 1 \Big )\rightarrow {\mathbb {P}}\big ({\mathcal {N}}(0,1)\ge 1\big )~~~(\text{ by }~(12))\\&> \frac{1}{2\sqrt{2\pi }}e^{-1/2} \end{aligned}$$

where the last inequality follows from the standard lower bound for the normal tail:

$$\begin{aligned} {\mathbb {P}}\big ({\mathcal {N}}(0,1)> t\big )>\frac{1}{\sqrt{2\pi }}\frac{t}{t^2+1}e^{-t^2/2} \end{aligned}$$

which establishes (65) and completes the proof of Case 3 and of Proposition 4.4. Propositions 4.2, 4.3 and 4.4 together prove Theorem 4.1. \(\square \)
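The Gaussian tail bound used in the last inequality is easy to check numerically against the exact tail \(\frac{1}{2}\,\mathrm {erfc}(t/\sqrt{2})\):

```python
import math

# Check of the Gaussian tail bound used in the last inequality:
# P(N(0,1) > t) > (1/sqrt(2*pi)) * t/(t^2 + 1) * exp(-t^2/2),
# where the exact tail is 0.5 * erfc(t/sqrt(2)).

def gauss_tail(t):
    return 0.5 * math.erfc(t / math.sqrt(2.0))

def lower(t):
    return t / (t * t + 1.0) * math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

for t in (0.5, 1.0, 2.0, 3.0):
    print(t, round(gauss_tail(t), 5), round(lower(t), 5))
```

At \(t=1\) the bound gives exactly \(\frac{1}{2\sqrt{2\pi }}e^{-1/2}\approx 0.1210\), against the true tail \(\approx 0.1587\).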

5 Implications for Uncertainty Principles

Suppose X is an LCA group associated with a Haar measure \(\alpha \). A probability density function f on X is a non-negative measurable function on X with

$$\begin{aligned} \int _{x\in X}f(x)\alpha (dx)=1. \end{aligned}$$
(68)

Suppose we have a probability density function f on an LCA group X associated with a Haar measure \(\alpha \). For \(p\in (0,1)\cup (1,\infty )\), define the Rényi entropy of order p of f by

$$\begin{aligned} h_{p}(f):=\frac{1}{1-p}\log \left( \int _{x\in X}|f(x)|^p \alpha (dx)\right) . \end{aligned}$$
(69)
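For a discrete group with counting measure, (69) reduces to a finite sum. A minimal sketch, which also illustrates that the Shannon entropy is recovered in the limit \(p\rightarrow 1\) (approximated here by \(p=0.999\)):

```python
import math

# A discrete sketch of definition (69) with counting measure: the Renyi
# entropy of a probability mass function, recovering the Shannon entropy
# in the limit p -> 1 (here approximated by p = 0.999).

def renyi(f, p):
    return math.log(sum(v ** p for v in f if v > 0)) / (1.0 - p)

def shannon(f):
    return -sum(v * math.log(v) for v in f if v > 0)

f = [0.5, 0.25, 0.125, 0.125]  # an arbitrary example distribution
print(round(renyi(f, 0.999), 4), round(shannon(f), 4))
```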

We have the following direct corollaries from Theorem 3.1 and Theorem 4.1:

Corollary 5.1

Suppose X is a compact or a discrete LCA group associated with a Haar measure \(\alpha \), and \({\hat{X}}\) the dual group associated with the dual Haar measure \(\hat{\alpha }\) normalized as in Definition 2.3. Define the following two regions (see Fig. 4):

$$\begin{aligned} U_C&:=\Big \{\Big (\frac{1}{p},\frac{1}{q}\Big )\in [0,\infty )^2:\frac{1}{p}+\frac{1}{q}\le 1, \frac{1}{p}> \frac{1}{2}\Big \},\\ U_D&:=\Big \{\Big (\frac{1}{p},\frac{1}{q}\Big )\in [0,\infty )^2:\frac{1}{p}+\frac{1}{q}\ge 1, \frac{1}{q}< \frac{1}{2}\Big \}. \end{aligned}$$

For any probability density function \(|\psi |^2\) on X (which means \(|\hat{\psi }|^2\) is also a probability density function on \({\hat{X}}\)) we have the following weighted Rényi entropy uncertainty principle that holds for \((1/p,1/q)\in U_C\) if X is compact and for \((1/p,1/q)\in U_D\) if X is discrete:

$$\begin{aligned} \left( \frac{1}{p}-\frac{1}{2}\right) h_{p/2}(|\psi |^2)+\left( \frac{1}{2}-\frac{1}{q}\right) h_{q/2}(|\hat{\psi }|^2)\ge -\log C_{p,q} \end{aligned}$$
(70)

where \(C_{p,q}\) is the norm of the Fourier operator as in Theorems 3.1 and 4.1.

Fig. 4: Weighted uncertainty principle regions

Corollary 5.2

Suppose X is a compact or a discrete LCA group associated with a Haar measure \(\alpha \), and \({\hat{X}}\) the dual group associated with the dual Haar measure \(\hat{\alpha }\) normalized as in Definition 2.3. Suppose also that X is not finite. Define the following two regions (see Fig. 5):

$$\begin{aligned} U_{CN}&:=\Big \{\Big (\frac{1}{p},\frac{1}{q}\Big )\in [0,\infty )^2:\frac{1}{p}+\frac{1}{q}> 1, \frac{1}{q}\le \frac{1}{2}\Big \},\\ U_{DN}&:=\Big \{\Big (\frac{1}{p},\frac{1}{q}\Big )\in [0,\infty )^2:\frac{1}{p}+\frac{1}{q}< 1, \frac{1}{p}\ge \frac{1}{2}\Big \}. \end{aligned}$$

For any constant \(C<0\) and any pair (p, q) satisfying \((1/p,1/q)\in U_{CN}\) if X is compact and \((1/p,1/q)\in U_{DN}\) if X is discrete, there exists a probability density function \(|\psi |^2\) on X (which means \(|\hat{\psi }|^2\) is also a probability density function on \({\hat{X}}\)), depending on C, p and q, such that:

$$\begin{aligned} \left( \frac{1}{p}-\frac{1}{2}\right) h_{p/2}(|\psi |^2)+\left( \frac{1}{2}-\frac{1}{q}\right) h_{q/2}(|\hat{\psi }|^2)<C. \end{aligned}$$
(71)
Fig. 5: Regions such that the weighted uncertainty principle does not exist

Corollary 5.2 is particularly interesting because it identifies regions where a natural weighted uncertainty principle fails to hold for infinite groups that are compact or discrete.

The reason why we restrict the parameters (p, q) to the region \(p\le 2\) and \(q\ge 2\) is that we would like the two coefficients \(\frac{1}{p}-\frac{1}{2}\) and \(\frac{1}{2}-\frac{1}{q}\) in (70) and (71) to be non-negative, which yields a genuine “uncertainty” principle.

Corollaries 5.1 and 5.2 are the most that can be extracted directly from our results on the (p, q)-norms of the Fourier transform. However, it is possible to obtain other expressions of the entropic uncertainty principle by exploiting the monotonicity property of the Rényi entropies. Indeed, if X is \({\mathbb {R}}^n\), \({\mathbb {T}}^n\), \({\mathbb {Z}}^n\) or \({\mathbb {Z}}/m{\mathbb {Z}}\), Zozor, Portesi and Vignat [65] used this idea to obtain an unweighted Rényi entropic uncertainty principle of the form

$$\begin{aligned} h_{\alpha }(|\psi |^2)+h_{\beta }(|\hat{\psi }|^2)\ge B_{\alpha ,\beta } \end{aligned}$$
(72)

for corresponding domains of \(\alpha \) and \(\beta \) with the corresponding sharp constants \(B_{\alpha ,\beta }\). We now observe that a similar result holds for general LCA groups.
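The monotonicity property just mentioned, namely that \(p\mapsto h_p(f)\) is non-increasing in the order p, is easy to verify numerically for a probability mass function:

```python
import math

# The monotonicity used in the sequel: for a fixed probability mass
# function, the Renyi entropy p |-> h_p is non-increasing in the order p,
# which is what allows passing from a pair (p, q) to a conjugate pair.

def renyi(f, p):
    return math.log(sum(v ** p for v in f if v > 0)) / (1.0 - p)

f = [0.4, 0.3, 0.2, 0.1]
orders = [0.25, 0.5, 0.9, 1.1, 2.0, 4.0]
values = [renyi(f, p) for p in orders]
print([round(v, 4) for v in values])
```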

Theorem 5.3

Let X be an LCA group isomorphic to a direct product of a finite number of copies of \({\mathbb {R}}\) and an LCA group B which contains an open compact subgroup: \(X={\mathbb {R}}^n\times B\), and let X be equipped with a Haar measure \(\alpha \). Let \({\hat{X}}\) be the dual group with the Haar measure \(\hat{\alpha }\) normalized so that the Plancherel identity is valid. Then for any probability density function \(|\psi |^2\) on X and any \(p\ge 0\), \(q\ge 2\) such that \(1/p+1/q\ge 1\), we have the Rényi entropic uncertainty principle

$$\begin{aligned} h_{p/2}(|\psi |^2)+h_{q/2}(|\hat{\psi }|^2)&\ge n\log C(p,q) , \end{aligned}$$
(73)

where

$$\begin{aligned} C(p,q)=\max \left( q^{1/(q-2)}q'^{1/(q'-2)},\,p^{1/(p-2)}p'^{1/(p'-2)}\right) , \end{aligned}$$

where \(p'\) and \(q'\) are the Hölder duals of p and q respectively. Further, if X is a discrete or compact abelian group equipped with a Haar measure \(\alpha \), let \({\hat{X}}\) be the dual group with the Haar measure \(\hat{\alpha }\) normalized so that the Plancherel identity is valid. Then for any probability density function \(|\psi |^2\) on X and any \(p\ge 0\), \(q\ge 0\) such that \(1/p+1/q\ge 1\), we have the sharp Rényi entropic uncertainty principle

$$\begin{aligned} h_{p/2}(|\psi |^2)+h_{q/2}(|\hat{\psi }|^2)&\ge 0 . \end{aligned}$$
(74)

Proof

We first note that (73) holds for all \(p\ge 0\), \(q\ge 2\) with \(1/p+1/q=1\) by the Hausdorff–Young inequality. Thus, for any pair \(p\ge 0\) and \(q\ge 2\) with \(1/p+1/q> 1\), consider \(({\tilde{p}},{\tilde{q}}):=(p,p')\) or \(({\tilde{p}},{\tilde{q}}):=(q',q)\); by the monotonicity of the Rényi entropy, we have

$$\begin{aligned} h_{p/2}(|\psi |^2)+h_{q/2}(|\hat{\psi }|^2)\ge h_{{\tilde{p}}/2}(|\psi |^2)+h_{{\tilde{q}}/2}(|\hat{\psi }|^2)\ge n\log C(p,q) \end{aligned}$$

which provides (73).

Next we prove (74). Similarly, (74) holds for \(1/p+1/q=1\) with \(q\ge 2\) by the Hausdorff–Young inequality. For \(q<2\) and \(p> 2\), we treat \(\hat{\psi }\) as a function on \({\hat{X}}\) and apply the Hausdorff–Young inequality again to obtain

$$\begin{aligned} h_{p/2}(|\hat{\hat{\psi }}|^2)+h_{q/2}(|\hat{\psi }|^2)\ge 0. \end{aligned}$$
(75)

Since \(\hat{\psi }\in L^2({\hat{X}})\), we have \(\hat{\hat{\psi }}\in L^2(X)\) and \(\hat{\hat{\psi }}(x)=\psi (-x)\) by Theorem 2.5. So by (75), we have

$$\begin{aligned} h_{p/2}(|\psi |^2)+h_{q/2}(|\hat{\psi }|^2)=h_{p/2}(|\hat{\hat{\psi }}|^2)+h_{q/2}(|\hat{\psi }|^2)\ge 0 \end{aligned}$$
(76)

which provides the desired inequality when \(1/p+1/q=1\).

For any pair (p, q) with \(1/p+1/q> 1\), consider \(({\tilde{p}},{\tilde{q}}):=(p,p')\) or \(({\tilde{p}},{\tilde{q}}):=(q',q)\); by the monotonicity of the Rényi entropy, we have

$$\begin{aligned} h_{p/2}(|\psi |^2)+h_{q/2}(|\hat{\psi }|^2)\ge h_{{\tilde{p}}/2}(|\psi |^2)+h_{{\tilde{q}}/2}(|\hat{\psi }|^2)\ge 0 \end{aligned}$$

which provides (74). Moreover, this bound can be attained by any normalized frequency basis if X is compact (i.e., functions of the form \(\gamma /\sqrt{\alpha (X)}\) with any \(\gamma \in {\hat{X}}\)), and by any time basis if X is discrete (i.e., normalized delta functions on X). \(\square \)

Not surprisingly, given that a key step in the proof was rewriting the Hausdorff–Young inequality as an entropy inequality, Theorem 5.3 is a direct generalization of Hirschman's entropic uncertainty principle (as improved by Beckner through his determination of the sharp constant),

$$\begin{aligned} h(|\psi |^2)+h(|\hat{\psi }|^2)&\ge n\log (e/2) . \end{aligned}$$
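As a consistency check of the constant in (73) (under the parenthesized reading of the exponents in C(p, q)): along the conjugate line \(q=p'\), \(\log C(p,p')\) should tend to \(\log (e/2)\) as \(p\rightarrow 2\), matching the Hirschman–Beckner constant above. Working in log space avoids overflow:

```python
import math

# Consistency check for the constant in (73), under the parenthesized
# reading of the exponents in C(p, q): along the conjugate line q = p',
# log C(p, p') = log(p)/(p - 2) + log(p')/(p' - 2) should tend to
# log(e/2) as p -> 2, matching the Hirschman-Beckner constant.

def log_C(p):
    q = p / (p - 1.0)  # Holder conjugate of p
    return math.log(p) / (p - 2.0) + math.log(q) / (q - 2.0)

target = 1.0 - math.log(2.0)  # log(e/2)
vals = [log_C(p) for p in (1.9, 1.99, 1.999)]
print([round(v, 5) for v in vals], round(target, 5))
```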

On the other hand, by applying inequality (74) in Theorem 5.3 and letting p and q go to 0, we have an uncertainty principle related to the support of a density function.

Corollary 5.4

If X is a compact or discrete abelian group, then under the assumptions of Theorem 5.3, for any probability density function \(|\psi |^2\) on X, we have

$$\begin{aligned} \alpha \left( \text{ supp }(\psi )\right) \hat{\alpha }\left( \text{ supp }\left( \hat{\psi }\right) \right) \ge 1 \end{aligned}$$
(77)

Moreover, the equality can be attained by any normalized frequency basis if X is compact and by any normalized time basis if X is discrete.

For finite abelian groups, a series of works have focused on statements involving the cardinalities of supports of a function and its Fourier transform. Donoho and Stark [12] proved a lower bound for the product of support sizes and identified extremals for cyclic groups and [43] gave a simple proof of the extension for finite abelian groups (although the same fact for general LCA groups was already obtained by Matolcsi and Szűcs [42] and Smith [57]). We observe that if X is a finite abelian group, then Corollary 5.4 reduces immediately to an uncertainty principle of Donoho-Stark-type.

Corollary 5.5

Suppose that X is a finite abelian group of cardinality N, equipped with the Haar measure assigning mass \(1/\sqrt{N}\) to each point, and let \({\hat{X}}\) be its dual group with the same Haar measure. Then for any function \(\psi \) on X, denoting by \(N_t\) and \(N_w\) the numbers of nonzero entries of \(\psi \) and of \(\hat{\psi }\) respectively (here we use the same notation as in [12]), we have

$$\begin{aligned} N_t\cdot N_w\ge N. \end{aligned}$$
(78)

Moreover, the inequality (78) is sharp and the equality can be attained by any frequency basis and time basis of X.

Proof

Note that it suffices to treat the case where \(|\psi |^2\) is a probability mass function, because normalization changes neither the support of \(\psi \) nor that of \(\hat{\psi }\). By (77), we have that

$$\begin{aligned} \alpha \left( \text{ supp }(\psi )\right) \hat{\alpha }\left( \text{ supp }\left( \hat{\psi }\right) \right) \ge 1\Rightarrow \frac{N_t}{\sqrt{N}}\cdot \frac{N_w}{\sqrt{N}}\ge 1\Rightarrow N_tN_w\ge N \end{aligned}$$

which provides (78). \(\square \)
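Inequality (78) and its equality case can be verified concretely on a small example, say \(X={\mathbb {Z}}/12{\mathbb {Z}}\) with \(\psi \) the indicator of the subgroup \(\{0,3,6,9\}\) (an illustrative choice, using a direct DFT):

```python
import cmath

# A finite sanity check of (78) on X = Z/12 (an illustrative choice): psi is
# the indicator of the subgroup {0, 3, 6, 9}; its DFT is supported on the
# multiples of 4, so N_t * N_w = 4 * 3 = 12 = N, attaining equality.

def dft(psi):
    N = len(psi)
    return [sum(psi[x] * cmath.exp(-2j * cmath.pi * m * x / N)
                for x in range(N)) for m in range(N)]

N = 12
psi = [1 if x % 3 == 0 else 0 for x in range(N)]
psi_hat = dft(psi)
N_t = sum(1 for v in psi if abs(v) > 1e-9)
N_w = sum(1 for v in psi_hat if abs(v) > 1e-9)
print(N_t, N_w, N_t * N_w)
```

Subgroup indicators are exactly the extremals described in Corollary 5.4: the support of \(\hat{\psi }\) is the annihilator of the subgroup, and the product of the support sizes equals N.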

A nice refinement of the Donoho–Stark bound was recently obtained in [14] for finite abelian groups; it states that \(N_t\cdot N_w\ge N + N_t -N_H\), where \(N_H\) is the cardinality of the stabilizer of the support of \(\psi \) (which is clearly not greater than \(N_t\)). A different refinement, for functions on finite fields of prime order that possess certain symmetries, may be found in [17].

A trivial application of the AM-GM inequality to Corollary 5.5 implies that

$$\begin{aligned} N_t+N_w\ge 2\sqrt{N_tN_w}\ge 2\sqrt{N} , \end{aligned}$$

constraining the sum of the support sizes of a function and its Fourier transform for a finite abelian group. As one might expect, however, this is not sharp. A sharp inequality for the sum of supports was found for cyclic groups of prime order by Tao [61] (the correct bound turns out to be \(N+1\) rather than \(2\sqrt{N}\)), with a generalization to finite cyclic groups obtained by Ram Murty and Whang [52].

It is worth noting that Dembo, Cover and Thomas [11] obtained a Hirschman-type inequality for finite cyclic groups (i.e., a Shannon entropy version of inequality (78), which is stronger than the inequality on product of support sizes), and that Przebinda, DeBrunner and Özaydın [51] obtained a characterization of extremals for this inequality.