1 Introduction

Let \(\mu \) be any probability measure on \(\mathbb {R}\). Denote by \((S_n)\) a random walk with step distribution \(\mu \), such that \(S_0=0\), a.s. Define the first ladder times associated to \((S_n)\) by

$$\begin{aligned} \tau _-=\inf \{n\ge 1:S_n<0\},\quad \tau _+=\inf \{n\ge 1:S_n\ge 0\}. \end{aligned}$$

Then the Wiener–Hopf factorization of the characteristic function \(\varphi (t)=\int e^{itx}\mu (\mathrm{d}x)\) of \(\mu \) can be written as,

$$\begin{aligned} 1-s\varphi (t)=[1-\chi _-(s,it)][1-\chi _+(s,it)],\quad |s|\le 1,\,t\in \mathbb {R}, \end{aligned}$$
(1.1)

where \(\chi _-\) and \(\chi _+\) are the downward and upward space–time Wiener–Hopf factors,

$$\begin{aligned} \chi _-(s,it)=\mathbb {E}(s^{\tau _-}e^{itS_{\tau _-}}1_{\{\tau _-<\infty \}})\quad \text{ and }\quad \chi _+(s,it)= \mathbb {E}(s^{\tau _+}e^{itS_{\tau _+}}1_{\{\tau _+<\infty \}}). \end{aligned}$$

To paraphrase Feller [3], XVIII.3, the remarkable feature of the factorization (1.1) is that it represents an arbitrary characteristic function \(\varphi \) in terms of two (possibly defective) distributions, one concentrated on the half line \((-\,\infty ,0)\) and the other on the half line \([0,\infty )\). However, this feature only exploits identity (1.1) for fixed \(s\ne 0\) and reflects the fact that \(\mu \) is determined by the knowledge of the distributions of both \(S_{\tau _-}\) and \(S_{\tau _+}\). But one may wonder about the extra information carried by the joint distributions of \((\tau _-,S_{\tau _-})\) and \((\tau _+,S_{\tau _+})\). In particular, is it true in general that \(\mu \) is determined by only one of these joint distributions? Or, equivalently, is it true that \(\varphi \) is determined by only one of its space–time Wiener–Hopf factors?
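Identity (1.1) lends itself to a quick numerical sanity check. The sketch below (the step distribution, the truncation level `nmax`, and the values of s and t are our own illustrative choices, not taken from the paper) computes truncated versions of \(\chi _-\) and \(\chi _+\) for an integer-valued walk by dynamic programming over the not-yet-stopped paths, and compares the product \([1-\chi _-][1-\chi _+]\) with \(1-s\varphi (t)\).

```python
import cmath

def chi(mu, s, t, downward, nmax=120):
    """Truncated chi_-(s, it) (downward=True) or chi_+(s, it): the
    expectation E[s^tau exp(i*t*S_tau)] over paths stopped on first
    entrance into {x < 0}, resp. {x >= 0}.  mu maps integer steps to
    probabilities."""
    alive = {0: 1.0}   # mass of the paths that have not stopped yet
    total = 0j
    for n in range(1, nmax + 1):
        nxt = {}
        for x, p in alive.items():
            for step, q in mu.items():
                y, w = x + step, p * q
                if (y < 0) if downward else (y >= 0):
                    total += w * s ** n * cmath.exp(1j * t * y)
                else:
                    nxt[y] = nxt.get(y, 0.0) + w
        alive = nxt
    return total

mu = {1: 0.6, -1: 0.4}   # a simple asymmetric lattice walk (our choice)
s, t = 0.5, 0.7
phi = sum(q * cmath.exp(1j * t * x) for x, q in mu.items())
lhs = 1 - s * phi
rhs = (1 - chi(mu, s, t, True)) * (1 - chi(mu, s, t, False))
# lhs and rhs agree up to the geometric truncation error O(s^nmax)
```

Since every path stopped after time n contributes at most \(s^n\), the truncation error is bounded by \(\sum _{n>n_{\max }}s^n\), so the two sides agree here to near machine precision.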

The aim of this paper is an attempt to answer the latter question. We will actually show that \(\mu \) is determined by \(\chi _+(s,it)\) within some quite large classes of distributions, including the case where \(\mu \) has some positive exponential moments, the case where \(t\mapsto \mu (t,\infty )\) is completely monotone on \((a,\infty )\), for some \(a\ge 0\), and the case where this function satisfies a property slightly stronger than analyticity. Obviously, all these assumptions can be verified from the sole data of \(\chi _+(s,it)\). These cases cover a sufficiently large range of distributions for us to raise the following conjecture. Let \(\mathcal {M}_1\) be the set of probability measures on \(\mathbb {R}\).

Conjecture C

Any distribution \(\mu \in \mathcal {M}_1\) whose support is not included in \((-\infty ,0)\) is determined by its upward space–time Wiener–Hopf factor \(\chi _+(s,it)\), \(|s|<1\), \(t\in \mathbb {R}\).

Note that if the support of \(\mu \) is contained in \([0,\infty )\), then this measure is clearly determined by \(\chi _+\), since \(S_{\tau _+}=S_1\) in this case.

A crucial step in the proof of (1.1) is the following expansion of the factor \(\chi _+(s,it)\), for \(|s|<1\) and \(t\in \mathbb {R}\),

$$\begin{aligned} \log \frac{1}{1-\chi _+(s,it)}=\sum _{n=1}^\infty \frac{s^n}{n}\int _{[0,\infty )}e^{itx}\mu ^{*n}(\mathrm{d}x), \end{aligned}$$
(1.2)

see [3], XVIII.3, where \(\mu ^{*n}\) is the n-fold convolution product of \(\mu \) with itself. We will refer to \(\mu ^{*n}\), \(n\ge 0\), as the convolution powers of \(\mu \). This identity shows that the data of \(\chi _+\) is equivalent to the knowledge of the measures \(\mu ^{*n}\), \(n\ge 1\), on \([0,\infty )\) and leads to the following equivalent conjecture.
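The expansion (1.2) can be checked numerically in the same spirit as (1.1). In the sketch below (step law, parameters and truncation level are again our own illustrative choices), the right-hand side is computed from the exact laws of \(S_n\), and \(\chi _+\) from the paths stopped on first entrance into \([0,\infty )\); the two sides then agree up to truncation error.

```python
import cmath

mu = {1: 0.6, -1: 0.4}        # illustrative step distribution (our choice)
s, t, nmax = 0.5, 0.7, 120

# Right-hand side of (1.2): sum_n s^n/n * E[exp(i*t*S_n); S_n >= 0],
# with the law of S_n computed by exact convolution.
dist, series = {0: 1.0}, 0j
for n in range(1, nmax + 1):
    nxt = {}
    for x, p in dist.items():
        for step, q in mu.items():
            nxt[x + step] = nxt.get(x + step, 0.0) + p * q
    dist = nxt
    series += s ** n / n * sum(p * cmath.exp(1j * t * x)
                               for x, p in dist.items() if x >= 0)

# Left-hand side: chi_+ accumulated over paths stopped on first
# entrance into [0, infinity).
alive, chi_plus = {0: 1.0}, 0j
for n in range(1, nmax + 1):
    nxt = {}
    for x, p in alive.items():
        for step, q in mu.items():
            y, w = x + step, p * q
            if y >= 0:
                chi_plus += w * s ** n * cmath.exp(1j * t * y)
            else:
                nxt[y] = nxt.get(y, 0.0) + w
    alive = nxt

lhs = cmath.log(1 / (1 - chi_plus))
# lhs and series agree up to the truncation error of both expansions
```

Here \(|\chi _+|\le s<1\), so \(1-\chi _+\) stays in the right half plane and the principal branch of the logarithm is the correct one.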

Conjecture C’

Any distribution \(\mu \in \mathcal {M}_1\) whose support is not included in \((-\,\infty ,0)\) is determined by its convolution powers \(\mu ^{*n}\), \(n\ge 1\) restricted to \([0,\infty )\).

In particular, Conjecture C is satisfied by distributions whose support is included in \([0,\infty )\). Each of the next sections corresponds to a class of probability distributions for which Conjecture C holds. For the first one, in Sect. 2, we prove that distributions having some particular exponential moments satisfy Conjecture C. Then in Sect. 3 we consider three other classes for which a much stronger result than Conjectures C and C\('\) is true. We will see that there are actually many distributions which are determined by the sole data of \(\mu \) and \(\mu ^{*2}\) on \([0,\infty )\). This is the case when the function \(t\mapsto \mu (t,\infty )\) is smooth enough. In Sect. 3.2, we will consider the case where the function \(t\mapsto \mu (t,\infty )\) is completely monotone on \((a,\infty )\), for some \(a\ge 0\), and in Sect. 3.3 we will make a slightly stronger assumption than analyticity on this function. We will also present the discrete counterpart of the completely monotone case in Sect. 3.4. Finally, in Sect. 4, we will consider Conjecture C in the restricted class of infinitely divisible distributions and show that if the upper tail of the Lévy measure is completely monotone, then \(\mu \) is determined by its upward Wiener–Hopf factor. Then we end this paper in Sect. 5 with some important remarks on the possibility of extending the classes of distributions studied.

Throughout this paper, we will denote by \(\mathscr {C}\) the set of distributions satisfying Conjecture C (or equivalently Conjecture C\('\)). Let us give a proper definition of this set.

Definition 1.1

Let \(\mathscr {C}\) be the set of distributions \(\mu \in \mathcal {M}_1\) which are determined by their upward Wiener–Hopf factor \(\chi _+(s,it)\), for \(|s|\le 1\) and \(t\in \mathbb {R}\) or equivalently by the data of their convolution powers \(\mu ^{*n}\), \(n\ge 1\) restricted to \([0,\infty )\). More formally,

$$\begin{aligned} \mathscr {C}= & {} \{\mu \in \mathcal {M}_1:\text{ if } \mu _1\in \mathcal {M}_1 \text{ satisfies } \mu ^{*n}=\mu _1^{*n},\\&\quad \text{ on } [0,\infty ), \text{ for } \text{ all } n\ge 1, \text{ then } \mu =\mu _1\}. \end{aligned}$$

Then Conjectures C and \(\hbox {C}'\) can be rephrased as follows: Any distribution \(\mu \in \mathcal {M}_1\) whose support is not included in \((-\infty ,0)\) belongs to \(\mathscr {C}\).

The problem we investigate here originates from a result in Vigon’s Ph.D. thesis [10], see Section 4.5 therein, where a question equivalent to Conjecture C is raised in the setting of Lévy processes. Our question is actually more general since it concerns distributions which are not necessarily infinitely divisible. In particular, a positive answer to Conjecture C would imply that the law of any Lévy process \((X_t,t\ge 0)\) is determined by one of its space–time Wiener–Hopf factors or equivalently by the marginals of the process \((X_t^+,t\ge 0)\), see Sect. 4.

2 When \(\mu \) Admits Exponential Moments

2.1 Recovering the Characteristic Function and the Moment Generating Function

In this paper, we will always assume that the support of the measure \(\mu \) is not included in \((-\,\infty ,0)\). Let us observe that from the data of the upward Wiener–Hopf factor \(\chi _+(s,it)\), for \(|s|\le 1\) and \(t\in \mathbb {R}\), or equivalently from the data of the measures \(\mu ^{*n}\), \(n\ge 1\), restricted to \([0,\infty )\), we know the sequences \(\mathbb {P}(S_n<0)\) and \(\mathbb {P}(S_n\ge 0)\), \(n\ge 0\), as well as the distributions of both \(\tau _-\) and \(\tau _+\). In particular, we know whether \((S_n)\) oscillates, drifts to \(-\,\infty \), or drifts to \(\infty \). The next result shows that provided \(n\mapsto \mathbb {P}(S_n<0)\) tends to 0 sufficiently fast along some subsequence, it is possible to recover the characteristic function \(\varphi \) of \(\mu \) on some interval containing 0, from the measures \(\mu ^{*n}\) restricted to \([0,\infty )\).

Lemma 2.1

Assume that there is \(\alpha >0\) and a sequence of integers \((n_j)_{j\ge 1}\) going to \(\infty \) as \(j\rightarrow \infty \) such that

$$\begin{aligned} \mathbb {P}(S_{n_j}<0)\le e^{-\alpha n_j},\quad \text{ for } \text{ all } j\ge 1. \end{aligned}$$
(2.1)

Then for all t such that \(|\varphi (t)|>e^{-\alpha }\),

$$\begin{aligned} \lim _{j\rightarrow \infty }\frac{1}{\varphi (t)^{n_j}}\int _{[0,\infty )}e^{itx}\mu ^{*n_j}(\mathrm{d}x)=1. \end{aligned}$$

In particular, if (2.1) holds then \(\varphi \) can be determined on some neighborhood of 0.

Proof

Recall that \(\varphi (t)\) tends to 1 as t tends to 0. Then let t be sufficiently small so that \(\varphi (t)\ne 0\) and let us write,

$$\begin{aligned} \varphi (t)^{n}=\mathbb {E}\left( e^{itS_{n}}\right) =\int _{[0,\infty )}e^{itx}\mu ^{*n}(\mathrm{d}x)+\mathbb {E}\left( e^{itS_{n}}1_{\{S_{n}<0\}}\right) . \end{aligned}$$
(2.2)

Whenever t is such that \(|\varphi (t)|>e^{-\alpha }\), we obtain from (2.1), for all \(j\ge 1\),

$$\begin{aligned} \left| \frac{\mathbb {E}\left( e^{itS_{n_j}}1_{\{S_{n_j}<0\}}\right) }{\varphi (t)^{n_j}}\right| \le \frac{\mathbb {P}(S_{n_j}<0)}{|\varphi (t)|^{n_j}}\le \left( \frac{e^{-\alpha }}{|\varphi (t)|}\right) ^{n_j}. \end{aligned}$$

Therefore, the left-hand side of the above inequality tends to 0 as \(j\rightarrow \infty \) and this yields the result thanks to Eq. (2.2). \(\square \)
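For a walk satisfying (2.1), this convergence can be observed numerically. The sketch below (step law, t and the horizon are our illustrative choices) computes the law of \(S_n\) exactly for a walk with positive drift, for which \(\mathbb {P}(S_n<0)\) decays geometrically; the ratio \(\int _{[0,\infty )}e^{itx}\mu ^{*n}(\mathrm{d}x)/\varphi (t)^{n}\) is then close to 1 for large n.

```python
import cmath

mu = {1: 0.7, -1: 0.3}   # drifts to +infinity, so P(S_n < 0) decays geometrically
t, nmax = 0.2, 200
phi = sum(q * cmath.exp(1j * t * x) for x, q in mu.items())

dist = {0: 1.0}           # exact law of S_n, by repeated convolution
for n in range(1, nmax + 1):
    nxt = {}
    for x, p in dist.items():
        for step, q in mu.items():
            nxt[x + step] = nxt.get(x + step, 0.0) + p * q
    dist = nxt

restricted = sum(p * cmath.exp(1j * t * x) for x, p in dist.items() if x >= 0)
ratio = restricted / phi ** nmax   # close to 1 for large nmax
```

Here \(|\varphi (t)|^{n}\) decays much more slowly than \(\mathbb {P}(S_n<0)\), which is exactly the regime covered by Lemma 2.1.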

Since, for all \(n\ge 0\),

$$\begin{aligned} \mathbb {P}(S_1<0,S_2-S_1<0,\dots ,S_n-S_{n-1}<0)=\mathbb {P}(S_1<0)^n\le \mathbb {P}(S_n<0), \end{aligned}$$

inequality (2.1) cannot hold for all \(\alpha >0\), unless \(\mathbb {P}(S_1\ge 0)=1\). Note also that if (2.1) holds, then the random walk \((S_n)\) cannot drift to \(-\infty \). Moreover, if it holds for all sufficiently large n, then \((S_n)\) necessarily drifts to \(\infty \), thanks to Spitzer's criterion, which asserts that this happens if and only if \(\sum _n n^{-1}\mathbb {P}(S_n<0)<\infty \).

Lemma 2.2

Let \(\mu _1,\mu _2\in \mathcal {M}_1\). Denote by \(\varphi _1\) and \(\varphi _2\) their characteristic functions and by \(\phi _1\) and \(\phi _2\) their moment generating functions, that is \(\varphi _j(u):=\int _{\mathbb {R}}e^{iu x}\,\mu _j(\mathrm{d}x)\) and \(\phi _j(v):=\int _{\mathbb {R}}e^{v x}\,\mu _j(\mathrm{d}x)\), \(u,v\in \mathbb {R}\), \(j=1,2\). Assume that there exists \(\lambda \ne 0\) such that \(\phi _1(\lambda )<\infty \) and \(\phi _2(\lambda )<\infty \) and that there is an open interval I such that \(\varphi _1(u)=\varphi _2(u)\), for all \(u\in I\). Then \(\mu _1=\mu _2\).

Proof

Assume without loss of generality that \(\lambda >0\). Let \(D=\{z=u+iv\in \mathbb {C}:u\in \mathbb {R},\,-\lambda<v<0\}\). From the assumptions, the function \(f:=\varphi _1-\varphi _2\) admits an analytic continuation to the open domain D. Then let \(O_+=\{z=u+iv\in \mathbb {C}:u\in I,\,0<v<\lambda \}\) and \(O_-=\{z=u+iv\in \mathbb {C}:u\in I,\,-\lambda<v<0\}\). From the Schwarz reflection principle, f admits an analytic continuation to the open domain \(O_+\cup I \cup O_-\). Since f vanishes on I, the principle of isolated zeros implies that f vanishes on \(O_+\cup I \cup O_-\), and from the same principle, it vanishes on D. By continuity, \(f(u)=0\) for all \(u\in \mathbb {R}\) and the result follows from the injectivity of the Fourier transform. \(\square \)

We will say that a distribution \(\mu \in \mathcal {M}_1\) admits an exponential moment if there is \(\lambda \in \mathbb {R}\setminus \{0\}\) such that \(\phi (\lambda ):=\int _{\mathbb {R}}e^{\lambda x}\,\mu (\mathrm{d}x)<\infty \).

Lemma 2.3

The characteristic function of a distribution having an exponential moment cannot vanish identically in an interval.

Proof

Let \(\varphi \) be the characteristic function of \(\mu \in \mathcal {M}_1\). If \(\varphi \) vanishes in an interval and if there is \(\lambda \in \mathbb {R}\setminus \{0\}\) such that \(\phi (\lambda )<\infty \), then we conclude from the same arguments as in the proof of Lemma 2.2 (replacing f by \(\varphi \)) that \(\varphi (u)=0\) for all \(u\in \mathbb {R}\), which is absurd since \(\varphi \) is a characteristic function. \(\square \)

Lemma 2.3 was noticed in [9] for nonnegative random variables.

Theorem 2.1

If a measure \(\mu \) satisfies (2.1), then it belongs to the class \(\mathscr {C}\).

Proof

First observe that if we know \(\mu ^{*n}\) on \([0,\infty )\) for all \(n\ge 1\), then we can determine whether (2.1) holds, since \(1-\mu ^{*n}[0,\infty )=\mathbb {P}(S_{n}<0)\). Now if (2.1) holds, then from Lemma 2.1, the characteristic function \(\varphi \) of \(\mu \) can be determined on some neighborhood of 0. Since the function \(t\mapsto \chi _+(1,it)\) is known, it follows from (1.1) that the function \(t\mapsto \chi _-(1,it)\) can be determined on the same neighborhood. But \(t\mapsto \chi _-(1,it)/\chi _-(1,0)\) is the characteristic function of the non-positive random variable \(S_{\tau _-}\) under \(\mathbb {P}(\cdot \,|\,\tau _-<\infty )\). Since \(\mathbb {E}(e^{\lambda S_{\tau _-}}\,|\,\tau _-<\infty )\le 1\), for all \(\lambda \ge 0\), we derive from Lemma 2.2 that \(\chi _-(1,it)\) is determined for all \(t\in \mathbb {R}\). Therefore, from (1.1), \(\varphi (t)\) is determined for all \(t\in \mathbb {R}\) and the result follows. \(\square \)

Remark 2.1

Distributions having negative exponential moments provide examples for which (2.1) is satisfied. More specifically, assume that there is \(\lambda <0\) such that \(\phi (\lambda )<1\), then by Markov's inequality,

$$\begin{aligned} \mathbb {P}(S_n<0)=\mathbb {P}\left( e^{\lambda S_n}>1\right) \le \mathbb {E}\left( e^{\lambda S_n}\right) =\phi (\lambda )^n, \end{aligned}$$

which implies (2.1) with \(e^{-\alpha }=\phi (\lambda )\). The question of finding an example of a distribution with no exponential moment which satisfies (2.1) remains open.
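The bound behind Remark 2.1 is Markov's inequality applied to \(e^{\lambda S_n}\). A small exact computation (step law and \(\lambda \) are our own illustrative choices) confirms \(\mathbb {P}(S_n<0)\le \phi (\lambda )^n\) along a range of n.

```python
import math

mu = {1: 0.7, -1: 0.3}   # a walk with positive drift (our choice)
lam = -0.5
phi_lam = sum(q * math.exp(lam * x) for x, q in mu.items())   # ~0.919 < 1

dist, ok = {0: 1.0}, True
for n in range(1, 61):
    nxt = {}
    for x, p in dist.items():
        for step, q in mu.items():
            nxt[x + step] = nxt.get(x + step, 0.0) + p * q
    dist = nxt
    p_neg = sum(p for x, p in dist.items() if x < 0)
    ok = ok and p_neg <= phi_lam ** n
# ok stays True: P(S_n < 0) <= phi(lam)^n for every n checked
```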

Recall that \(\phi \) is a convex function on the interval \(\{\alpha :\phi (\alpha )<\infty \}\). Moreover, since the support of \(\mu \) is not included in \((-\,\infty ,0)\), \(\phi (\alpha )\) is nondecreasing for \(\alpha \) large enough. If \(\lambda \in \mathbb {R}\) is such that \(\lambda =\inf \{\alpha :\phi (\alpha )<\infty \}\), then \(\phi '(\lambda )\) will be understood as the right derivative of \(\phi \) at \(\lambda \). Similarly, if \(\lambda =\sup \{\alpha :\phi (\alpha )<\infty \}\), then \(\phi '(\lambda )\) will be the left derivative of \(\phi \) at \(\lambda \).

Lemma 2.4

For all \(\lambda \in \mathbb {R}\) such that \(\phi (\lambda )<\infty \) and \(\phi '(\lambda )>0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\left( \mathbb {E}\left( e^{\lambda S_n}1_{\{S_n\ge 0\}}\right) \right) ^{1/n}=\phi (\lambda ). \end{aligned}$$

Proof

Let \((S^{(\lambda )}_n)\) be a random walk with step distribution \(\displaystyle \mu _\lambda (\mathrm{d}x):=\frac{e^{\lambda x}}{\phi (\lambda )}\mu (\mathrm{d}x)\). Since

$$\begin{aligned} \mathbb {E}\left( S^{(\lambda )}_1\right) =\int _{\mathbb {R}}\frac{xe^{\lambda x}}{\phi (\lambda )}\,\mu (\mathrm{d}x)=\frac{\phi '(\lambda )}{\phi (\lambda )}>0\,, \end{aligned}$$

the random walk \((S^{(\lambda )}_n)\) drifts to \(\infty \), so that \(\lim _{n\rightarrow \infty }\mathbb {P}(S^{(\lambda )}_n\ge 0)=1\). Then the result follows from the identity

$$\begin{aligned} \mathbb {E}\left( e^{\lambda S_n}1_{\{S_n\ge 0\}}\right) =\phi (\lambda )^n\,\mathbb {P}\left( S^{(\lambda )}_n\ge 0\right) . \end{aligned}$$

\(\square \)
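Lemma 2.4 can also be observed numerically. Below (step law, \(\lambda \) and the horizon are our illustrative choices), the walk drifts to \(-\infty \) while \(\phi (\lambda )>1\) and \(\phi '(\lambda )>0\); the n-th root of \(\mathbb {E}(e^{\lambda S_n}1_{\{S_n\ge 0\}})\), computed exactly, is close to \(\phi (\lambda )\).

```python
import math

mu = {1: 0.4, -1: 0.6}   # a walk drifting to -infinity (our choice)
lam = 0.5
phi_lam = sum(q * math.exp(lam * x) for x, q in mu.items())        # ~1.023
dphi_lam = sum(x * q * math.exp(lam * x) for x, q in mu.items())   # > 0

nmax, dist = 300, {0: 1.0}   # exact law of S_n, by repeated convolution
for n in range(1, nmax + 1):
    nxt = {}
    for x, p in dist.items():
        for step, q in mu.items():
            nxt[x + step] = nxt.get(x + step, 0.0) + p * q
    dist = nxt

est = sum(p * math.exp(lam * x) for x, p in dist.items() if x >= 0) ** (1 / nmax)
# est is close to phi_lam, as predicted by Lemma 2.4
```

The exponential tilting in the proof explains the speed of convergence: the error comes only from \(\mathbb {P}(S^{(\lambda )}_n\ge 0)^{1/n}\), which tends to 1 very fast for the tilted walk.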

The following theorem shows that distributions whose moment generating function is smaller than 1 at some negative point, or finite and larger than 1 at some positive point, belong to the class \(\mathscr {C}\).

Theorem 2.2

Assume that the moment generating function \(\phi \) of the measure \(\mu \) satisfies (at least) one of the two following conditions:

  1. (a)

    There exists \(\lambda <0\) such that \(\phi (\lambda )<1\).

  2. (b)

    There exists \(\lambda >0\) such that \(\phi (\lambda )\in (1,\infty )\).

Then \(\mu \) belongs to the class \(\mathscr {C}\).

Proof

Let us show that from the knowledge of \(\mu ^{*n}\) restricted to \([0,\infty )\), for all \(n\ge 1\), we can determine if (a) is satisfied. Assume that (2.1) holds, then from Theorem 2.1 the measure \(\mu \) belongs to the class \(\mathscr {C}\) so that we can determine if (a) holds. If (2.1) does not hold, then from Remark 2.1, (a) is not satisfied.

Now let us deal with condition (b). From the data of \(\mu ^{*n}\) restricted to \([0,\infty )\), for all \(n\ge 1\), the expression \(\mathbb {E}(e^{\lambda S_n}1_{\{S_n\ge 0\}})=\int _{[0,\infty )}e^{\lambda x}\mu ^{*n}(\mathrm{d}x)\) is known, for all \(\lambda >0\) and \(n\ge 1\). Assume that there is \(\lambda >0\) such that

$$\begin{aligned} \limsup _{n\rightarrow \infty }\left( \mathbb {E}\left( e^{\lambda S_n}1_{\{S_n\ge 0\}}\right) \right) ^{1/n}>1. \end{aligned}$$

Since \(\mathbb {E}(e^{\lambda S_n}1_{\{S_n\ge 0\}})\le \phi (\lambda )^n\), we have actually \(\phi (\lambda )>1\). Moreover, our data clearly allows us to know if \(\phi (\lambda )<\infty \). Then since \(\phi (0)=1\), by convexity of the function \(\lambda \mapsto \phi (\lambda )\), it is clear that \(\phi '(\lambda )>0\), so that from Lemma 2.4, \(\lim _{n\rightarrow \infty }(\mathbb {E}(e^{\lambda S_n}1_{\{S_n\ge 0\}}))^{1/n}=\phi (\lambda )\). This means that from our data, we can determine if there is \(\lambda >0\) such that \(\phi (\lambda )\in (1,\infty )\), which is condition (b), and in this case \(\phi (\lambda )=\lim _{n\rightarrow \infty }(\mathbb {E}(e^{\lambda S_n}1_{\{S_n\ge 0\}}))^{1/n}\), so that \(\phi (\lambda )\) is known. Moreover, from the continuity of \(\phi \) on the set \(\{x:\phi (x)<\infty \}\), there is an interval I containing \(\lambda \) such that for all \(x\in I\), \(\phi (x)\in (1,\infty )\) and from the same reasoning as above, \(\phi (x)\) is known for all \(x\in I\), so that the measure \(\mu \) is determined and we conclude that it belongs to the class \(\mathscr {C}\). \(\square \)

We can easily check that condition (b) of Theorem 2.2 is satisfied in the two following situations:

\((b_1)\):

\(\phi (\lambda )<\infty \) for all \(\lambda >0\).

\((b_2)\):

\(\mu \) is absolutely continuous on \([0,\infty )\) and its density f satisfies \(\ln (f(x))\sim -\lambda _0 x\), as \(x\rightarrow \infty \), for some \(\lambda _0\in (0,\infty )\).

Indeed, in case \((b_1)\), if \(\mu \ne \delta _0\) then since \(\phi \) is a nondecreasing convex function such that \(\phi (0)=1\) and since the support of \(\mu \) is not included in \((-\infty ,0)\), \(\lim _{\lambda \rightarrow \infty }\phi (\lambda )=\infty \). In case \((b_2)\), it is clear that \(\lim _{\lambda \rightarrow \lambda _0-}\phi (\lambda )=\infty \).

2.2 Skip Free Distributions

A distribution \(\mu \) whose support is included in \(\mathbb {Z}\) is said to be downward (resp. upward) skip free if \(\mu (n)=0\) for all \(n\le -2\) (resp. for all \(n\ge 2\)). Clearly, skip free distributions possess exponential moments. Moreover, upward skip free distributions belong to class \(\mathscr {C}\) from Theorem 2.2 (b) and the note following its proof. Then in this subsection, we shall see that the case of downward skip free distributions allows us to go a little beyond the cases encompassed by Theorems 2.1 and 2.2. We first need to make sure that our data allows us to determine if the support of a distribution is included in \(\mathbb {Z}\).

Lemma 2.5

The support of the measure \(\mu \) is included in \(\mathbb {Z}\) if and only if the support of the measures \(\mu ^{*n}\), \(n\ge 1\) restricted to \([0,\infty )\) is included in \(\mathbb {Z}_+\).

Proof

The direct implication is obvious. Then assume that the support of \(\mu ^{*n}\), \(n\ge 1\) restricted to \([0,\infty )\) is included in \(\mathbb {Z}_+\), whereas the support of \(\mu \) restricted to \((-\infty ,0]\) is not included in \(\mathbb {Z}_-\). Then there is an interval \(I\subset (-\infty ,0]\setminus \mathbb {Z}_-\) such that \(\mu (I)>0\). Let \(n\in \mathbb {Z}_+\setminus \{0\}\) such that \(\mu (\{n\})>0\) and \(h\in \mathbb {Z}_+\setminus \{0\}\) such that \(hn+\inf I>0\). Then

$$\begin{aligned} 0< \mu (I)\mu (\{n\})^h=\mathbb {P}(S_1\in I,S_{i+1}-S_i=n,i=1,\dots ,h)\le \mathbb {P}(S_{h+1}\in hn+I)\,. \end{aligned}$$

This implies that \(\mu ^{*(h+1)}(hn+I)>0\), where \(hn+I\subset [0,\infty )\setminus \mathbb {Z}_+\), which contradicts the assumption. \(\square \)

Theorem 2.3

Downward skip free distributions belong to class \(\mathscr {C}\).

Proof

Let \(\mu \in \mathcal {M}_1\) whose convolution powers \(\mu ^{*n}\), \(n\ge 1\) restricted to \([0,\infty )\) are known. Then from Lemma 2.5, we can determine if the support of \(\mu \) is included in \(\mathbb {Z}\) or not. Let us assume that it is the case.

As already noticed at the beginning of this section, we can determine if \((S_n)\) drifts to \(\infty \) or not. Assume first that \((S_n)\) drifts to \(\infty \).

Let us observe that under this assumption, if \(\mu \) is a downward skip free distribution, then there exists \(\lambda <0\) such that \(\phi (\lambda )<1\). Indeed, in this case it is clear that \(\phi (\lambda )<\infty \), for all \(\lambda \le 0\). Moreover, \(\mathbb {E}(S_1)\) exists and is positive. Since \(\mathbb {E}(S_1)=\lim _{\lambda \uparrow 0}(\phi (\lambda )-1)/\lambda \), for \(\lambda <0\) close enough to 0 we have \((\phi (\lambda )-1)/\lambda >0\), hence \(\phi (\lambda )<1\).

From Theorem 2.2 (a), we can determine if there exists \(\lambda <0\) such that \(\phi (\lambda )<1\). If this is not the case, then \(\mu \) cannot be downward skip free. On the contrary, if there exists \(\lambda <0\) such that \(\phi (\lambda )<1\), then from Theorem 2.2, \(\mu \) can be determined. In particular, we know if \(\mu \) is downward skip free or not.

Now assume that \((S_n)\) does not drift to \(\infty \) and write for \(n\ge 0\),

$$\begin{aligned} \mathbb {P}(S_{\tau _+}>n,\tau _+<\infty )= & {} \sum _{k\ge 1}\mathbb {P}(S_1<0,\dots ,S_{k-1}<0,S_k-S_{k-1}>n-S_{k-1})\nonumber \\= & {} \sum _{k\ge 1}\sum _{r\le -1}\mathbb {P}(S_1<0,\dots ,S_{k-2}<0,S_{k-1}=r)\mathbb {P}(S_1>n-r)\nonumber \\= & {} \sum _{r\ge 1} v(r)\mathbb {P}(S_1>n+r)\,, \end{aligned}$$
(2.3)

where \(v(r)=\sum _{k\ge 1}\mathbb {P}(S_1\le 0,\dots ,S_{k-1}\le 0,S_{k}=-r)\) is the mass function of the renewal measure on \(\{1,2,\dots \}\) of the (strict) downward ladder height process of \((S_n)\), see Chap. XII.2 in [3]. In particular, this renewal measure satisfies \(v(r)\le 1\) for all \(r\ge 1\). Moreover, \((S_n)\) is downward skip free and does not drift to \(\infty \) if and only if \(v(r)=1\) for all \(r\ge 1\). Since v is the only unknown in Eq. (2.3), we can determine whether this is the case or not. Finally, knowing that \(\mu \) is downward skip free, we immediately determine this distribution on \(\mathbb {R}\) from its knowledge on \([0,\infty )\). \(\square \)

It appears in the proof of Theorem 2.3 that downward skip free distributions which drift to \(\infty \) actually satisfy condition (a) of Theorem 2.2. Hence the only additional case in this subsection is that of downward skip free distributions which do not drift to \(\infty \).

We will denote by \(\mathscr {E}\) the set of measures \(\mu \) satisfying the assumptions of Theorem 2.2 or those of Theorem 2.3, that is the set of measures satisfying (a) or (b) or downward skip free distributions. It will be called the exponential class. From Theorems 2.2 and 2.3, we have \(\mathscr {E}\subset \mathscr {C}\). Note that from Theorem 2.1, we have determined a subclass of \(\mathscr {C}\) which is presumably bigger than \(\mathscr {E}\).

3 When \(\mu \) Is Characterized by \(\mu \) and \(\mu ^{*2}\) on \([0,\infty )\)

3.1 Preliminary Results

We will show that in many cases, the sole data of \(\mu \) and \(\mu *\mu \) on \([0,\infty )\) actually suffices to determine \(\mu \). Let us define the following class of measures:

Definition 3.1

Let \(\mathscr {C}^*\) be the set of distributions \(\mu \in \mathcal {M}_1\) which are determined by the data of \(\mu \) and \(\mu ^{*2}\) restricted to \([0,\infty )\). More formally,

$$\begin{aligned} \mathscr {C}^*=\{\mu \in \mathcal {M}_1:\text{ if } \mu _1\in \mathcal {M}_1 \text{ satisfies } \mu =\mu _1 \text{ and } \mu ^{*2} =\mu _1^{*2} \text{ on } [0,\infty ), \text{ then } \mu =\mu _1\}. \end{aligned}$$

We emphasize here the obvious fact that \(\mathscr {C}^*\subset \mathscr {C}\). In this subsection, we give a theoretical condition for a measure to belong to the class \(\mathscr {C}^*\).

In the rest of this article, we will often use the absolute continuity of the distributions under consideration. We first observe that in Conjectures C and C', there is no loss of generality in assuming that \(\mu \) is absolutely continuous with respect to the Lebesgue measure on \(\mathbb {R}\).

Conjecture C”

Any absolutely continuous distribution \(\mu \in \mathcal {M}_1\) whose support is not included in \((-\,\infty ,0)\) belongs to the class \(\mathscr {C}\).

Lemma 3.1

Conjectures C, C\('\) and C\(''\) are equivalent.

Proof

We already know from Sect. 1 that Conjectures C and C’ are equivalent. Then clearly, it suffices to prove that if Conjecture C” is true, then Conjecture C’ is true.

Let \(\mu ,\mu _1\in \mathcal {M}_1\) be any two distributions such that the measures \(\mu _1^{*n}\) and \(\mu ^{*n}\) agree on \([0,\infty )\), for all \(n\ge 1\). Let g be any probability density function on \([0,\infty )\), i.e., \(\int _{0}^\infty g(x)\,\mathrm{d}x=1\) and let \(\bar{\mu },\bar{\mu }_1\in \mathcal {M}_1\) be the absolutely continuous measures whose respective densities are \(h(x)=\int _{\mathbb {R}} g(y-x)\,\mu (\mathrm{d}y)\) and \(h_1(x)=\int _{\mathbb {R}} g(y-x)\,\mu _1(\mathrm{d}y)\), \(x\in \mathbb {R}\).

Denoting by \(g^{*n}\) the n-fold convolution of the function g with itself, it is plain that for all \(x\ge 0\),

$$\begin{aligned} \bar{\mu }^{*n}[x,\infty )= & {} \int _0^\infty \mu ^{*n}[x+y,\infty )g^{*n}(y)\,\mathrm{d}y\quad \text{ and } \\ \bar{\mu }_1^{*n}[x,\infty )= & {} \int _0^\infty \mu _1^{*n}[x+y,\infty )g^{*n}(y)\,\mathrm{d}y. \end{aligned}$$

Therefore since the measures \(\mu ^{*n}\) and \(\mu _1^{*n}\) agree on \([0,\infty )\), for all \(n\ge 1\), the measures \(\bar{\mu }^{*n}\) and \(\bar{\mu }_1^{*n}\) also agree on \([0,\infty )\), for all \(n\ge 1\) and from the assumption that conjecture C” is true, we conclude that the measures \(\bar{\mu }\) and \(\bar{\mu }_1\) agree on \(\mathbb {R}\). Then we can identify both characteristic functions:

$$\begin{aligned} \int _{\mathbb {R}}e^{itx}\,\bar{\mu }(\mathrm{d}x)= & {} \int _{\mathbb {R}}e^{itx}\,\mu (\mathrm{d}x) \int _{0}^\infty e^{-itx}g(x)\,\mathrm{d}x\quad \text{ and }\\ \int _{\mathbb {R}}e^{itx}\,\bar{\mu }_1(\mathrm{d}x)= & {} \int _{\mathbb {R}}e^{itx}\,\mu _1(\mathrm{d}x)\int _{0}^\infty e^{-itx}g(x)\,\mathrm{d}x. \end{aligned}$$

But from Lemma 2.3, the characteristic function of g cannot vanish identically in an interval. By continuity of characteristic functions, this implies that \(\int _{\mathbb {R}}e^{itx}\,\mu (\mathrm{d}x)=\int _{\mathbb {R}}e^{itx}\,\mu _1(\mathrm{d}x)\), for all \(t\in \mathbb {R}\). Then we conclude that \(\mu =\mu _1\) on \(\mathbb {R}\), from the injectivity of the Fourier transform. \(\square \)

We derive from Lemma 3.1 that there is no loss of generality in assuming that \(\mu \) is absolutely continuous on \(\mathbb {R}\). We will sometimes make this assumption and denote the density of \(\mu \) by f.

Lemma 3.2

For any probability density function f on \(\mathbb {R}\) and for all \(t\ge 0\),

$$\begin{aligned} \int _0^\infty f(t+s)\bar{f}(s)\,\mathrm{d}s=\frac{1}{2}\left( f*f(t)-\int _0^tf(t-s)f(s)\,\mathrm{d}s\right) , \end{aligned}$$
(3.1)

where \(\bar{f}(s)=f(-s)\).

Proof

It suffices to decompose \(f*f\) as

$$\begin{aligned} f*f(t)= & {} \int _{\mathbb {R}}f(t-s)f(s)\,\mathrm{d}s\\= & {} \int _{-\infty }^0f(t-s)f(s)\,\mathrm{d}s+\int _0^tf(t-s)f(s)\,\mathrm{d}s+\int _t^\infty f(t-s)f(s)\,\mathrm{d}s. \end{aligned}$$

Then from a change of variables, we obtain \(\int _t^\infty f(t-s)f(s)\,\mathrm{d}s=\int _{-\infty }^0f(t-s)f(s)\,\mathrm{d}s=\int _0^\infty f(t+s)\bar{f}(s)\,\mathrm{d}s\), which proves our identity. \(\square \)
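Identity (3.1) is easy to test numerically for a density whose convolution square is explicit. The sketch below (the mean m, the evaluation point t and the quadrature parameters are our own choices) takes f to be a normal density, for which \(f*f\) is the \(N(2m,2)\) density in closed form, and compares the two sides by midpoint quadrature.

```python
import math

m = 0.3                      # arbitrary mean, our illustrative choice
f = lambda x: math.exp(-(x - m) ** 2 / 2) / math.sqrt(2 * math.pi)
# (f*f)(t) in closed form: the N(2m, 2) density
conv = lambda t: math.exp(-(t - 2 * m) ** 2 / 4) / math.sqrt(4 * math.pi)

def riemann(g, a, b, n=200000):
    """Midpoint-rule quadrature of g over [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

t = 1.2
lhs = riemann(lambda s: f(t + s) * f(-s), 0.0, 20.0)
rhs = 0.5 * (conv(t) - riemann(lambda s: f(t - s) * f(s), 0.0, t))
# lhs equals rhs up to quadrature error, as identity (3.1) asserts
```

The truncation of the outer integral at 20 is harmless here because the Gaussian integrand is negligible far from the origin.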

The main idea of this section is to exploit identity (3.1) in order to characterize the function \(\bar{f}\) on \([0,\infty )\) (or equivalently f on \((-\,\infty ,0]\)) from the sole data of f and \(f*f\) on \([0,\infty )\). More specifically, assume that f restricted to \([0,\infty )\) fulfills the following property: for any two nonnegative Borel functions \(g_1\) and \(g_2\) defined on \([0,\infty )\) such that \(\int _0^\infty f(t+s)g_1(s)\,\mathrm{d}s<\infty \), \(\int _0^\infty f(t+s)g_2(s)\,\mathrm{d}s<\infty \), for all \(t\ge 0\), the following implication is satisfied,

$$\begin{aligned} \int _{[0,\infty )} f(t+s)g_1(s)\,\mathrm{d}s=\int _{[0,\infty )} f(t+s)g_2(s)\,\mathrm{d}s,\quad \text{ for } \text{ all } t\ge 0\;\,\Rightarrow \;\, g_1\equiv g_2,\quad \text{ a.e. } \end{aligned}$$
(3.2)

Then clearly, the map \(t\mapsto \int _{[0,\infty )} f(t+s)\bar{f}(s)\,\mathrm{d}s\) characterizes \(\bar{f}\) on \([0,\infty )\) and therefore from (3.1), \(\mu \) is determined on \(\mathbb {R}\) by the sole data of \(\mu \) and \(\mu ^{*2}\) on \([0,\infty )\), that is \(\mu \in \mathscr {C}^*\).

Remark 3.1

There are density functions f which do not satisfy (3.2). For instance with \(f(s)=\frac{1}{2}e^{-s}\), \(s\ge 0\), the function \(t\mapsto \int _{0}^\infty f(t+s)\bar{f}(s)\,\mathrm{d}s=\frac{1}{2}e^{-t}\int _{0}^\infty e^{-s}\bar{f}(s)\,\mathrm{d}s\) provides very little information on \(\bar{f}\) and certainly cannot characterize the function \(\bar{f}\) on \([0,\infty )\). Also if \(\mu \) has a bounded support in \([0,\infty )\), then clearly the function \(t\mapsto \int _{0}^\infty f(t+s)\bar{f}(s)\,\mathrm{d}s\) cannot characterize \(\bar{f}\) outside this support.

Note also that from (3.1), the knowledge of \(t\mapsto \int _{0}^\infty f(t+s)\bar{f}(s)\,\mathrm{d}s\) and f(t), for \(t\ge 0\), is equivalent to that of the functions f and \(f^{*2}\) on \([0,\infty )\). Therefore, in the above examples, f is not determined by the data of f and \(f^{*2}\) on \([0,\infty )\).

The following proposition gives a sufficient condition for (3.2) to hold.

Proposition 3.1

Assume that \(\mu \) is absolutely continuous on \(\mathbb {R}\) with density f. Let us introduce the following set of functions defined on \([0,\infty )\),

$$\begin{aligned} \mathcal {H}:=\left\{ \sum _{k=1}^n\alpha _kf(t_k+\cdot ):n\ge 1,\alpha _k\in \mathbb {R},\,t_k\ge 0\right\} \,. \end{aligned}$$

If the restriction of f to \([0,\infty )\) belongs to \(L^\infty ([0,\infty ))\) and if \(\mathcal {H}\) is dense in \(L^\infty ([0,\infty ))\), then for any \(g\in L^1([0,\infty ))\), the following implication is satisfied,

$$\begin{aligned} \int _{0}^\infty f(t+s)g(s)\,\mathrm{d}s=0,\quad \text{ for } \text{ all } t\ge 0\quad \Rightarrow \quad g\equiv 0,\quad \text{ a.e. } \end{aligned}$$
(3.3)

If (3.3) holds, then \(\mu \in \mathscr {C}^*\).

Proof

If \(\int _{0}^\infty f(t+s)g(s)\,\mathrm{d}s=0\), for all \(t\ge 0\), then clearly, since \(\mathcal {H}\) is dense in \(L^\infty ([0,\infty ))\), \(\int _{0}^\infty h(s)g(s)\,\mathrm{d}s=0\) for all \(h\in L^\infty ([0,\infty ))\) and this implies that \(g\equiv 0\), a.e.

Assume now that the restriction of the measures \(\mu \) and \(\mu ^{*2}\) are known on \([0,\infty )\). Recall the notation \(\bar{f}\) from Lemma 3.2 and observe that the right-hand side of identity (3.1) is known for all \(t\ge 0\). From (3.3), this determines \(\bar{f}\) on \([0,\infty )\) and the measure \(\mu \) is determined. \(\square \)

Unfortunately, we do not know any example of a function satisfying the condition of Proposition 3.1, and finding a simple criterion on f for it to satisfy this condition remains an open problem. More specifically, we may wonder if the converse of Proposition 3.1 holds, that is, if assertion (3.3) implies that \(\mathcal {H}\) is dense in \(L^\infty ([0,\infty ))\). The latter problem can be compared with Wiener's approximation theorem, which asserts that for a function f in \(L^1(\mathbb {R})\), the set \(\mathcal {H}\) (thought of as a set of functions defined on \(\mathbb {R}\)) is dense in \(L^1(\mathbb {R})\) if and only if the Fourier transform of f does not vanish, see [7].

In Sect. 3.2, we give a class of density functions such that (3.2) holds and in Sect. 3.3, we give a class of density functions which are bounded on \([0,\infty )\) and such that (3.3) holds.

3.2 The Completely Monotone Class

In this subsection, we assume that \(\mu \) is absolutely continuous with respect to the Lebesgue measure on \(\mathbb {R}\) and we denote by f its density.

We will show that if f restricted to \((a,\infty )\), for some \(a\ge 0\), is a completely monotone function satisfying a mild additional assumption, then \(\mu \) is characterized by the data of \(\mu \) and \(\mu ^{*2}\) on \([0,\infty )\). Let us first recall that from Bernstein's Theorem, the function f is completely monotone on \((a,\infty )\), for \(a\ge 0\), if and only if there is a positive Borel measure \(\nu \) on \((0,\infty )\) such that for all \(t>a\),

$$\begin{aligned} f(t)=\int _0^\infty e^{-ut}\nu (\mathrm{d}u). \end{aligned}$$
(3.4)

Theorem 3.1

Assume that there is \(a\ge 0\) such that the restriction of f to \((a,\infty )\) is completely monotone. Assume moreover that the support of the measure \(\nu \) in (3.4) contains an increasing sequence \((a_n)\) such that \(\sum _n a_n^{-1}=+\infty \). Then (3.2) holds and \(\mu \in \mathscr {C}^*\).

Proof

Let g be any nonnegative Borel function defined on \([0,\infty )\) such that \(\int _0^{\infty }f(t+s)g(s)\,\mathrm{d}s<\infty \), for all \(t>a\). Then from Fubini’s Theorem, for all \(t>a\),

$$\begin{aligned} \int _0^{\infty }f(t+s)g(s)\,\mathrm{d}s= & {} \int _0^{\infty }\int _0^\infty e^{-u(t+s)}\nu (\mathrm{d}u)g(s)\,\mathrm{d}s\\= & {} \int _0^{\infty }e^{- ut}\int _0^\infty e^{-us}g(s)\,\mathrm{d}s\,\nu (\mathrm{d}u). \end{aligned}$$

This expression is the Laplace transform of the measure \(\theta (\mathrm{d}u):=\int _0^\infty e^{-us}g(s)\,\mathrm{d}s\,\nu (\mathrm{d}u)\). The knowledge of this Laplace transform for all \(t>a\) characterizes the measure \(\theta (\mathrm{d}u)\), so that a version of the density function \(u\mapsto \int _0^\infty e^{-us}g(s)\,\mathrm{d}s\) is known on a Borel set \(B\subset (0,\infty )\) such that \(\nu (B^c)=0\). Since this density function is continuous, it is known on \(\overline{B}\) and hence everywhere on the support of \(\nu \). Therefore, from the assumption on \(\nu \), we can find an increasing sequence \((a_n)\) such that \(\sum _n a_n^{-1}=+\infty \) and such that the Laplace transform \(\int _0^\infty e^{-a_ns}g(s)\,\mathrm{d}s\) of the function g at \(a_n\) is known for each n. From a result in [4], this is enough to determine the function g, so (3.2) holds; see also [3, p. 430].

Then we derive from (3.2) and Lemma 3.2 that the function \(\bar{f}\) is determined on \([0,\infty )\) by the restrictions to \([0,\infty )\) of \(\mu \) and \(\mu ^{*2}\). Therefore, the measure \(\mu \) is determined. \(\square \)
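The exchange of integration in the proof can be illustrated numerically. In the sketch below (Python, with toy choices of our own: \(\nu =\delta _1+\delta _2\), so \(f(t)=e^{-t}+e^{-2t}\) up to normalization, and a test function \(g(s)=e^{-3s}\)), the quantity \(\int _0^\infty f(t+s)g(s)\,\mathrm{d}s\) computed by quadrature matches the Laplace-transform expression over the support of \(\nu \):

```python
import math

# Toy choices: nu = delta_1 + delta_2, so f(t) = e^{-t} + e^{-2t}
# (unnormalized; only the Fubini identity is being checked), g(s) = e^{-3s}.
def f(t):
    return math.exp(-t) + math.exp(-2 * t)

def g(s):
    return math.exp(-3 * s)

def simpson(h, a, b, n):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    step = (b - a) / n
    s = h(a) + h(b)
    for i in range(1, n):
        s += h(a + i * step) * (4 if i % 2 else 2)
    return s * step / 3

t = 0.7
lhs = simpson(lambda s: f(t + s) * g(s), 0.0, 40.0, 20000)
# Laplace-transform side: e^{-t} * (Lg)(1) + e^{-2t} * (Lg)(2)
#                       = e^{-t}/4 + e^{-2t}/5.
rhs = math.exp(-t) / 4 + math.exp(-2 * t) / 5
assert abs(lhs - rhs) < 1e-8
```

The quadrature truncates the integral at 40, where the exponential tail is negligible.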

We will denote by \(\mathscr {M}\) the set of absolutely continuous measures \(\mu \) whose density f satisfies the assumption of Theorem 3.1. This class will be called the completely monotone class. Theorem 3.1 shows that \(\mathscr {M}\subset \mathscr {C}^*\).

Remark 3.2

Note that since f is a density function, the measure \(\nu \) in (3.4) should also satisfy

$$\begin{aligned} \int _a^\infty f(t) \, \mathrm{d}t=\int _0^\infty \frac{e^{-au}}{u}\nu (\mathrm{d}u)\le 1. \end{aligned}$$
(3.5)

Remark 3.3

Clearly class \(\mathscr {E}\) is not included in class \(\mathscr {M}\). Moreover, it is easy to find an example of a measure in class \(\mathscr {M}\) which does not belong to class \(\mathscr {E}\). Indeed, we readily check that whenever the support S of \(\nu \) is such that \(S\cap (0,\varepsilon )\ne \emptyset \), for all \(\varepsilon >0\), then \(\mu \) has no positive exponential moments. Let us take for instance

$$\begin{aligned} f(t)=\left\{ \begin{array}{l@{\quad }l} \int _0^1u^2e^{-ut}\,\mathrm{d}u&{}t\ge 0,\\ \int _0^1u^2e^{ut}\,\mathrm{d}u&{}t\le 0. \end{array}\right. \end{aligned}$$

Then f satisfies the assumption of Theorem 3.1, for \(a=0\) and \(\nu (\mathrm{d}u)=u^21_{(0,1)}(u)\,\mathrm{d}u\), so it belongs to class \(\mathscr {M}\). Moreover, the probability measure with density f has no positive exponential moments, so it does not belong to class \(\mathscr {E}\).
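As a numerical check of this example (the closed form below is a routine integration by parts, our own computation), the density satisfies \(f(t)=(2-(t^2+2t+2)e^{-t})/t^3\) for \(t>0\), and the symmetric extension has total mass 1:

```python
import math

# Closed form of f(t) = int_0^1 u^2 e^{-ut} du for t > 0 (integration by parts),
# with a Taylor branch near t = 0 to avoid catastrophic cancellation.
def f(t):
    if t < 1e-4:
        return 1.0 / 3.0 - t / 4.0 + t * t / 10.0
    return (2.0 - (t * t + 2.0 * t + 2.0) * math.exp(-t)) / t ** 3

def simpson(h, a, b, n):
    step = (b - a) / n
    s = h(a) + h(b)
    for i in range(1, n):
        s += h(a + i * step) * (4 if i % 2 else 2)
    return s * step / 3

# Integrating in u first: int_0^T f(t) dt = 1/2 - (1 - (T+1) e^{-T}) / T^2,
# so the symmetric density has total mass 2 * 1/2 = 1.
T = 100.0
mass = simpson(f, 0.0, T, 20000)
exact = 0.5 - (1.0 - (T + 1.0) * math.exp(-T)) / T ** 2
assert abs(mass - exact) < 1e-7
assert abs(2.0 * exact - 1.0) < 1e-3
```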

Remark 3.4

Theorem 3.1 excludes completely monotone functions of the type \(f(t)=\sum _{k=1}^n\alpha _ke^{-\beta _k t}\), \(t>a\), for some \(\alpha _k,\beta _k>0\) and some finite n, since in this case the measure \(\nu (\mathrm{d}u)=\sum _{k=1}^n\alpha _k\delta _{\beta _k}(\mathrm{d}u)\) does not satisfy the condition required by this theorem. It also excludes functions f whose support is bounded, since completely monotone functions on \((a,\infty )\) are analytic on \((a,\infty )\) and hence cannot vanish on an interval unless they vanish identically. This remark is consistent with Remark 3.1.

3.3 The Analytic Class

Let us assume again that \(\mu \) has density f on \(\mathbb {R}\). We will now exploit the same kind of arguments as in the previous subsection, by assuming that f is the Fourier transform of some complex-valued function.

Theorem 3.2

Assume that there is a complex-valued Borel function k such that for all \(t\ge 0\),

$$\begin{aligned} f(t)=\int _{\mathbb {R}}e^{iut}k(u)\,\mathrm{d}u. \end{aligned}$$
(3.6)

Assume moreover that

  1. 1.

    the absolute moments \(M_n=\int _{\mathbb {R}}|u|^n|k(u)|\,\mathrm{d}u\), \(n\ge 0\), are finite and satisfy

    $$\begin{aligned} \limsup _n\frac{M_n^{1/n}}{n}<\infty , \end{aligned}$$
  2. 2.

    the function k does not vanish on any interval of \(\mathbb {R}\).

Then f is bounded on \([0,\infty )\) and (3.3) holds, in particular \(\mu \in \mathscr {C}^*\). Moreover f admits an analytic continuation on \(\mathbb {R}\).

Proof

The fact that f is bounded follows directly from (3.6) and 1. Let \(g\in L^1([0,\infty ))\), then from (3.6) and Fubini’s theorem, we can write for all \(t\ge 0\),

$$\begin{aligned} \int _{0}^\infty f(t+s)g(s)\,\mathrm{d}s= & {} \int _0^{\infty }\int _{\mathbb {R}}e^{iu(t+s)}k(u)\,\mathrm{d}u\, g(s)\,\mathrm{d}s\\= & {} \int _{\mathbb {R}}e^{iut}\int _0^{\infty }e^{ius}g(s)\,\mathrm{d}s\,k(u)\,\mathrm{d}u. \end{aligned}$$

Assume that this expression vanishes for all \(t\ge 0\) and set \(\varphi (u)=\int _0^{\infty }e^{ius}g(s)\,\mathrm{d}s\). This means that the Fourier transform

$$\begin{aligned} \Psi (t):=\int _{\mathbb {R}}e^{iut}\varphi (u)\,k(u)\,\mathrm{d}u, \end{aligned}$$

of the function \(u\mapsto \varphi (u)k(u)\), \(u\in \mathbb {R}\), vanishes for all \(t\ge 0\). Let us then show that, under our assumptions, the function \(\Psi \) is analytic on the whole real axis. First note that since \(|\varphi (u)|\le \Vert g\Vert _{L^1}\), for all \(u\in \mathbb {R}\), and since all the moments \(M_n\) are finite, \(\Psi \) is infinitely differentiable on \(\mathbb {R}\) and

$$\begin{aligned} \Psi ^{(n)}(t)=\int _{\mathbb {R}}(iu)^ne^{iut}\varphi (u)\,k(u)\,\mathrm{d}u,\quad t\in \mathbb {R}. \end{aligned}$$
(3.7)

Then notice that for all \(t,u,x\in \mathbb {R}\),

$$\begin{aligned} \left| e^{iux}\left( e^{itu}-1-\frac{itu}{1!}-\cdots -\frac{(itu)^{n-1}}{(n-1)!}\right) \right| \le \frac{|tu|^n}{n!}. \end{aligned}$$

We derive from this inequality and (3.7) that

$$\begin{aligned} \left| \Psi (x+t)-\Psi (x)-\frac{t}{1!}\Psi '(x)-\cdots -\frac{t^{n-1}}{(n-1)!}\Psi ^{(n-1)}(x)\right| \le \frac{M_n}{n!}|t|^n\,. \end{aligned}$$
(3.8)
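The elementary bound used to obtain (3.8) can be tested numerically; the sketch below (Python, with sample values of our own choosing) checks it on a grid:

```python
import cmath, math

# |e^{iux} (e^{itu} - sum_{j<n} (itu)^j / j!)| <= |tu|^n / n!  for real t, u, x.
def remainder(t, u, x, n):
    partial = sum((1j * t * u) ** j / math.factorial(j) for j in range(n))
    return abs(cmath.exp(1j * u * x) * (cmath.exp(1j * t * u) - partial))

for n in (1, 2, 5, 8):
    for t in (-2.0, 0.3, 1.7):
        for u in (-1.5, 0.4, 3.0):
            for x in (-1.0, 2.0):
                bound = abs(t * u) ** n / math.factorial(n)
                assert remainder(t, u, x, n) <= bound + 1e-9
```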

Set \(c=\limsup _nM_n^{1/n}/n\); then from Stirling’s formula, for \(|t|<1/(3c)\), the right-hand side of (3.8) tends to 0, as \(n\rightarrow +\infty \); hence, the Taylor series of \(\Psi \) converges to \(\Psi \) in some interval around x, for all \(x\in \mathbb {R}\). It follows that \(\Psi \) is analytic on \(\mathbb {R}\). As a consequence, \(\Psi \) is determined by its expression on the positive half line. Hence the Fourier transform of the continuous function \(u\mapsto \varphi (u)k(u)\) vanishes on \(\mathbb {R}\), which means that this function vanishes a.e. on \(\mathbb {R}\). Since k does not vanish on any interval of \(\mathbb {R}\) and \(\varphi \) is continuous, it implies that \(\varphi (u)=0\), for all \(u\in \mathbb {R}\), and we conclude that \(g(t)=0\), for almost every \(t\in [0,\infty )\). We have proved that (3.3) holds and, from the second part of Proposition 3.1, \(\mu \) is characterized by the restrictions of \(\mu \) and \(\mu ^{*2}\) to \([0,\infty )\).

We have proved above that \(\Psi (t)=\int _{\mathbb {R}}e^{iut}\varphi (u)\,k(u)\,\mathrm{d}u\) is analytic on \(\mathbb {R}\). It follows from the same arguments that the continuation of f to \(\mathbb {R}\), which is defined in a natural way by \(f(t)=\int _{\mathbb {R}}e^{iut}k(u)\,\mathrm{d}u\), \(t\in \mathbb {R}\), is analytic, which proves the last assertion of the theorem. \(\square \)

We will denote by \(\mathscr {A}\) the class of distributions which satisfy the assumptions of Theorem 3.2. It will be called the analytic class. From Theorem 3.2, \(\mathscr {A}\subset \mathscr {C}^*\).

It is very easy to construct examples of distributions in class \(\mathscr {A}\), simply by choosing any symmetric function k which satisfies assumptions 1 and 2 in Theorem 3.2. Let us consider for instance \(k(u)=\frac{1}{4}e^{-|u|}\), for \(u\in \mathbb {R}\). Then

$$\begin{aligned} f(t)=\frac{1}{4}\int _{\mathbb {R}}e^{iut}e^{-|u|}\,\mathrm{d}u=\frac{1}{2(1+t^2)},\quad t\ge 0. \end{aligned}$$

Any extension of this function to \(\mathbb {R}\) as a density function determines a distribution of \(\mathscr {A}\) which does not belong to classes \(\mathscr {E}\) and \(\mathscr {M}\). Conversely, neither of the classes \(\mathscr {E}\) and \(\mathscr {M}\) is included in \(\mathscr {A}\). This is straightforward for \(\mathscr {E}\). Then let us consider

$$\begin{aligned} f(t)=\frac{1}{4}\left( \int _0^1 e^{-ut}\,u^{1/2}\mathrm{d}u+\int _1^\infty e^{-ut}\,u^{-1}\mathrm{d}u\right) ,\quad t>0. \end{aligned}$$

This measure satisfies the conditions of Theorem 3.1, with \(a=0\), so that \(\mu \in \mathscr {M}\). However, \(\lim _{t\rightarrow 0+}f(t)=\infty \); hence f does not admit an analytic continuation on \(\mathbb {R}\), so that \(\mu \) does not belong to class \(\mathscr {A}\), from Theorem 3.2. (Note also that since the support of \(\nu \) intersects any interval \((0,\varepsilon )\), \(\varepsilon >0\), the measure \(\mu \) has no positive exponential moments, see Remark 3.3.)
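The Fourier transform computation in the example of class \(\mathscr {A}\) above can be checked numerically. The sketch below (Python) uses the normalization \(k(u)=\frac{1}{4}e^{-|u|}\), for which the integral equals \(1/(2(1+t^2))\); by symmetry, the integral reduces to a cosine integral:

```python
import math

def simpson(h, a, b, n):
    step = (b - a) / n
    s = h(a) + h(b)
    for i in range(1, n):
        s += h(a + i * step) * (4 if i % 2 else 2)
    return s * step / 3

# f(t) = int_R e^{iut} (1/4) e^{-|u|} du; the imaginary part cancels by
# symmetry, leaving f(t) = (1/2) int_0^inf e^{-u} cos(ut) du = 1/(2(1+t^2)).
for t in (0.0, 0.5, 2.0, 7.0):
    val = 0.5 * simpson(lambda u: math.exp(-u) * math.cos(u * t), 0.0, 60.0, 60000)
    assert abs(val - 1.0 / (2.0 * (1.0 + t * t))) < 1e-8
```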

Here is a consequence of Theorem 3.2 for stable distributions.

Corollary 3.1

Let S be a stable distribution on \(\mathbb {R}\) with index \(\alpha \in [1,2]\). (When \(\alpha =1\), we assume that S is the symmetric Cauchy distribution.) If a measure \(\mu \) satisfies \(\mu =S\) and \(\mu *\mu =S*S\) on \([0,\infty )\) then \(\mu =S\) on \(\mathbb {R}\).

Proof

Let f be the density of \(\mu \) on \([0,\infty )\). From the expression of the characteristic function of stable distributions and Fourier inversion, for all \(t\ge 0\),

$$\begin{aligned} f(t)=\frac{1}{2\pi }\int _{\mathbb {R}}e^{-itu}e^{-c|u|^\alpha (1-i\beta \text{ sgn }(u)\tan (\pi \alpha /2))}\,\mathrm{d}u, \end{aligned}$$

where \(\beta \in [-1,1]\) and c is some positive constant. Then we can easily check that if \(\alpha \in [1,2]\), f satisfies the conditions of Theorem 3.2 and the result follows. \(\square \)

Corollary 3.1 should be compared with a result of Rossberg and Jesiak [6], which asserts that if \(F_1\) and \(F_2\) are the distribution functions of two stable distributions and if \(F_1(x)=F_2(x)\) for x belonging to a set which contains at least three accumulation points, \(l_1\), \(l_2\) and \(l_3\), such that \(F_i(l_j)\ne 0,1\) for \(i=1,2\) and \(j=1,2,3\), then \(F_1\equiv F_2\) on \(\mathbb {R}\). We stress that in Corollary 3.1 it is not even known a priori that \(\mu \) is an infinitely divisible distribution. It is actually conjectured in [6] that if F is the distribution function of an infinitely divisible distribution and satisfies \(F(x)=S(x)\), for all \(x\ge 0\), then \(F\equiv S\) on \(\mathbb {R}\). This problem is solved (and the conjecture proved to be true) only in the case \(\alpha =2\).

3.4 A Class of Discrete Distributions

In this section, we present the discrete counterpart of class \(\mathscr {M}\). More specifically, we will consider distributions whose support is included in \(\mathbb {Z}\).

First we need the following analogue of Lemma 3.2 for discrete distributions. Its proof is straightforward, so we omit it.

Lemma 3.3

Let \((q_n)_{n\in \mathbb {Z}}\) be any probability on \(\mathbb {Z}\). Define \(q^{*2}_n=\sum _{k\in \mathbb {Z}}q_{n-k}q_k\). Then for all \(n\ge 1\)

$$\begin{aligned} \sum _{k=-\infty }^0q_{n-k}q_k=\frac{1}{2}\left( q^{*2}_n-\sum _{k=1}^{n-1}q_{n-k}q_k\right) \,, \end{aligned}$$
(3.9)

where we set \(\sum _{k=1}^{n-1}q_{n-k}q_k=0\) if \(n=1\).
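Identity (3.9) is purely algebraic and can be checked on any finitely supported probability on \(\mathbb {Z}\); the sketch below (Python, with a toy probability of our own choosing) does so:

```python
from collections import defaultdict

# A toy probability on Z and its convolution square.
q = {-2: 0.1, -1: 0.2, 0: 0.25, 1: 0.15, 3: 0.3}
q2 = defaultdict(float)
for j, pj in q.items():
    for k, pk in q.items():
        q2[j + k] += pj * pk

def Q(n):
    """q_n, with the convention q_n = 0 off the support."""
    return q.get(n, 0.0)

# Identity (3.9): sum_{k<=0} q_{n-k} q_k = (q2_n - sum_{k=1}^{n-1} q_{n-k} q_k)/2.
for n in range(1, 8):
    lhs = sum(Q(n - k) * Q(k) for k in range(-5, 1))
    rhs = 0.5 * (q2[n] - sum(Q(n - k) * Q(k) for k in range(1, n)))
    assert abs(lhs - rhs) < 1e-12
```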

A sequence \((a_k)_{k\ge 0}\) of nonnegative real numbers is called completely monotone if for all \(k\ge 0\) and \(n\ge 1\),

$$\begin{aligned} \Delta ^na_k:=\Delta ^{n-1}a_k-\Delta ^{n-1}a_{k+1}\ge 0, \end{aligned}$$

where \(\Delta ^0a_k=a_k\). A theorem of Hausdorff asserts that \((a_k)_{k\ge 0}\) is completely monotone if and only if there is a finite measure \(\nu \) on [0, 1] such that for all \(k\ge 0\),

$$\begin{aligned} a_k=\int _0^1t^k\,\nu (\mathrm{d}t). \end{aligned}$$
(3.10)
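Hausdorff's characterization can be illustrated numerically: for the moment sequence \(a_k=1/(k+1)\) of the uniform measure on [0, 1], all iterated differences are nonnegative (indeed \(\Delta ^na_k=\int _0^1t^k(1-t)^n\,\mathrm{d}t\), our own computation). A short sketch:

```python
# Moment sequence of the uniform measure on [0,1]: a_k = 1/(k+1).
a = [1.0 / (k + 1) for k in range(25)]

def differences(seq, n):
    """Iterated forward differences: Delta^n a_k = Delta^{n-1} a_k - Delta^{n-1} a_{k+1}."""
    for _ in range(n):
        seq = [seq[i] - seq[i + 1] for i in range(len(seq) - 1)]
    return seq

# Complete monotonicity: Delta^n a_k >= 0 for all n and k.
for n in range(1, 10):
    assert all(d >= 0 for d in differences(a, n))
```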

Let us set \(\mu (\{n\})=\mu _n\), for \(n\in \mathbb {Z}\). Then by assuming that \((\mu _n)_{n\ge 0}\) is completely monotone, we obtain a new class of distributions satisfying conjecture \(\mathscr {C}\), as the following theorem shows.

Theorem 3.3

Assume that the support of \(\mu \) is included in \(\mathbb {Z}\). Assume moreover that \((\mu _n)_{n\ge 0}\) is completely monotone and that the support of the measure \(\nu \) in the representation (3.10) contains a decreasing sequence \((c_n)\) such that \(\sum (-\ln c_n)^{-1}=\infty \). Then \(\mu \in \mathscr {C}^*\).

Proof

First let us observe that we can derive from the data of \(\mu \) and \(\mu ^{*2}\) restricted to \([0,\infty )\) that the support of \(\mu \) is included in \(\mathbb {Z}\). Indeed, assume that the support of \(\mu \) restricted to \((-\infty ,0]\) is not included in \(\mathbb {Z}_-\). Then there is an interval \(I\subset (-\infty ,0]\setminus \mathbb {Z}_-\) such that \(\mu (I)>0\). From (3.10) and the assumption on the support of \(\nu \), \(\mu (\{n\})>0\) for all \(n\ge 1\). Let \(n\in \mathbb {Z}_+\setminus \{0\}\) be such that \(n+\inf I>0\); then

$$\begin{aligned} 0< \mu (I)\mu (\{n\})=\mathbb {P}(S_1\in I,S_{2}-S_1=n)\le \mathbb {P}(S_{2}\in n+I). \end{aligned}$$

This implies that \(\mu ^{*2}(n+I)>0\), where \(n+I\subset [0,\infty )\setminus \mathbb {Z}_+\), which contradicts the assumption.

From the Hausdorff representation recalled above, there is a unique finite measure \(\nu \) on [0,1] such that \(\mu _k=\int _0^1t^k\,\nu (\mathrm{d}t)\). Using this representation and Fubini’s theorem, we can write for all \(n\ge 0\),

$$\begin{aligned} \sum _{k=0}^\infty \mu _{-k}\mu _{n+k}=\int _0^1t^n\left( \sum _{k=0}^\infty \mu _{-k}t^k\right) \,\nu (\mathrm{d}t). \end{aligned}$$

From (3.9) in Lemma 3.3 applied to \((\mu _k)\), we derive that this expression is determined from the knowledge of \(\mu \) and \(\mu ^{*2}\) on \([0,\infty )\). This means that we know the moments of the measure \(\left( \sum _{k=0}^\infty \mu _{-k}t^k\right) \cdot \nu (\mathrm{d}t)\). This measure is finite and its support is included in [0, 1]; hence, it is determined by its moments. Then we know the generating function \(t\mapsto \sum _{k=0}^\infty \mu _{-k}t^k\) of the sequence \((\mu _{-k})_{k\ge 0}\) on the support of \(\nu \), since this function is continuous. From the assumption, we know this generating function on a sequence \((c_n)\) such that \(\sum (-\ln c_n)^{-1}=\infty \). This is enough to determine the sequence \((\mu _{-k})_{k\ge 0}\), from [4].

We conclude that the measures \(\mu \) and \(\mu ^{*2}\) restricted to \([0,\infty )\) allow us to determine \(\mu \) on \(\mathbb {Z}\). \(\square \)

The set of measures satisfying the assumptions of Theorem 3.3 will be called the discrete monotone class and will be denoted by \(\mathscr {M}_d\). Theorem 3.3 shows that \(\mathscr {M}_d\subset \mathscr {C}^*\). Moreover, it is clear that none of the classes \(\mathscr {E}\), \(\mathscr {M}\) and \(\mathscr {A}\) is included in \(\mathscr {M}_d\), and that \(\mathscr {M}_d\) is not included in any of them.

4 When \(\mu \) is Infinitely Divisible

The aim of this section is to present a problem equivalent to conjecture \(\mathscr {C}\) in the framework of infinitely divisible distributions. When \(\mu \) is infinitely divisible, the Wiener–Hopf factorization can be understood in two different ways: we can either factorize the characteristic function \(\varphi \) as in (1.1) or we can factorize the characteristic exponent \(\psi \), which is defined by

$$\begin{aligned} \varphi (t)=e^{-\psi (t)},\quad t\in \mathbb {R}. \end{aligned}$$

Then let us recall the Wiener–Hopf factorization in the latter context. Let \((X_t,\,t\ge 0)\) be a real Lévy process issued from 0 under the probability \(\mathbb {P}\) and such that \(X_1\) has law \(\mu \) under this probability, that is \(\mathbb {E}(e^{iuX_t})=e^{-t\psi (u)}\), for all \(t\ge 0\). The characteristic exponent of \(\mu \) is given explicitly according to the Lévy–Khintchine formula by

$$\begin{aligned} \psi (u)=iau+\frac{\sigma ^2}{2}u^2+\int _{\mathbb {R}\setminus \{0\}}(1-e^{iux}+iux1_{\{|x|\le 1\}})\,\Pi (\mathrm{d}x), \end{aligned}$$

where \(a\in \mathbb {R}\), \(\sigma \ge 0\) and \(\Pi \) is a measure on \(\mathbb {R}\setminus \{0\}\), such that \(\int (x^2\wedge 1)\,\Pi (\mathrm{d}x)<\infty \). Then the Wiener–Hopf factorization of \(\psi \) has the following form:

$$\begin{aligned} s+\psi (u)=\kappa _+(s,-iu)\kappa _-(s,iu),\quad u\in \mathbb {R},\quad s\ge 0, \end{aligned}$$
(4.1)

where \(\kappa _+\) and \(\kappa _-\) are the Laplace exponents of the upward and downward ladder processes \((\tau ^+,H^+)\) and \((\tau ^-,H^-)\) of X, that is \(\mathbb {E}(e^{-\alpha \tau _t^{+/-}-\beta H_t^{+/-}})=e^{-t\kappa _{+/-}(\alpha ,\beta )}\). These exponents are given explicitly for \(\alpha ,\beta \ge 0\) by the identities,

$$\begin{aligned} \kappa _-(\alpha ,\beta )= & {} k_-\exp \left( \int _0^\infty \int _{(-\infty ,0)}(e^{-t}-e^{-\alpha t+\beta x})\frac{1}{t}\mathbb {P}(X_t\in \mathrm{d}x)\,\mathrm{d}t\right) \end{aligned}$$
(4.2)
$$\begin{aligned} \kappa _+(\alpha ,\beta )= & {} k_+\exp \left( \int _0^\infty \int _{[0,\infty )}(e^{-t}-e^{-\alpha t-\beta x})\frac{1}{t}\mathbb {P}(X_t\in \mathrm{d}x)\,\mathrm{d}t\right) , \end{aligned}$$
(4.3)

where \(k_-\) and \(k_+\) are positive constants depending on the normalization of the local times at the infimum and at the supremum of X. The joint law of \((\tau _1^{+},H_1^{+})\) is the continuous time counterpart of the joint law of \((\tau _{+},S_{\tau _{+}})\) defined in Sect. 1, in the setup of random walks. We refer to Chap. VI of [1], Chap. IV of [5] or Chap. IV of [2] for complete definitions of these notions. Note that our formulation of the Wiener–Hopf factorization (4.1) includes compound Poisson processes, since expression (4.3) takes into account a possible mass at 0 for the measure \(\mathbb {P}(X_t\in \mathrm{d}x)\). This slight extension can be derived from pp. 24–25 of [10]; see also the end of Section 6.4, p. 183 in [5].

For an infinitely divisible probability measure \(\mu \) with Lévy measure \(\Pi \), we will set \(\overline{\Pi }(t)=\Pi (t,\infty )\), for \(t>0\) and denote by \(\mu _t\) the law of \(X_t\), where \((X_t,\,t\ge 0)\) is a Lévy process such that \(X_1\) has law \(\mu \) (in particular \(\mu =\mu _1\)).

Lemma 4.1

Let \(\mu ^{(1)}\) and \(\mu ^{(2)}\) be two infinitely divisible probability measures with respective Lévy measures \(\Pi ^{(1)}\) and \(\Pi ^{(2)}\) and Wiener–Hopf factors \(\kappa _+^{(1)}\) and \(\kappa _+^{(2)}\). Then \(\kappa _+^{(1)}=\kappa _+^{(2)}\) if and only if \(\mu ^{(1)}_t=\mu ^{(2)}_t\) on \([0,\infty )\), for all \(t\ge 0\). Moreover, if \(\kappa _+^{(1)}=\kappa _+^{(2)}\), then \(\overline{\Pi }^{(1)}(t)=\overline{\Pi }^{(2)}(t)\), for all \(t>0\).

Proof

Let \(\mu \) be an infinitely divisible probability measure. Then from the identity

$$\begin{aligned} \frac{1}{t}\mathbb {P}(X_t\in \mathrm{d}x)\,\mathrm{d}t=\int _0^\infty \mathbb {P}(\tau _u\in dt,H_u\in \mathrm{d}x)\,\frac{\mathrm{d}u}{u},\quad x\ge 0,t>0, \end{aligned}$$
(4.4)

which can be found in Section 5.2 of [2], we see that the law of \(X_t\) on \([0,\infty )\), for all \(t\ge 0\), is determined by the law of \((\tau ,H)\) and hence by \(\kappa _+\). (Note that equation (4.4) is also valid for compound Poisson processes.) Conversely, it follows directly from formula (4.3) that \(\kappa _+\) is determined by the data of the measure \(\mu _t\) on \([0,\infty )\), for all \(t\ge 0\).

The second assertion is a consequence of the first one and Exercise 1 of chap. I in [1], which asserts that the family of measures \(\frac{1}{t}\mathbb {P}(X_t\in \mathrm{d}x)\) converges vaguely toward \(\Pi \), as \(t\rightarrow 0\). \(\square \)

The above lemma enables us to make the connection between the two Wiener–Hopf factorizations (1.1) and (4.1). Let us state it more specifically in the following proposition.

Proposition 4.1

Let \(\mu ^{(1)}\) and \(\mu ^{(2)}\) be two infinitely divisible probability measures with respective Wiener–Hopf factors \(\kappa _+^{(1)}\), \(\kappa _+^{(2)}\) and \(\chi _+^{(1)}\), \(\chi _+^{(2)}\). If \(\kappa _+^{(1)}=\kappa _+^{(2)}\), then \(\chi _+^{(1)}=\chi _+^{(2)}\).

Proof

The result is straightforward from Lemma 4.1. Indeed, knowing \(\kappa _+\) we can determine \(\mu _n=\mu ^{*n}\) restricted to \([0,\infty )\), for all \(n\ge 1\), and we know from Sect. 1 that this data is equivalent to that of \(\chi _+\). \(\square \)

Definition 4.1

We will denote by \(\mathscr {C}_i\) the class of infinitely divisible distributions \(\mu \) which are determined by the data of their upward Wiener–Hopf factor \(\kappa _+(s,t)\), for \(s,t\ge 0\) or equivalently by the data of the measures \(\mu _t\), \(t>0\) restricted to \([0,\infty )\). More formally,

$$\begin{aligned} \mathscr {C}_i= & {} \{\mu \in \mathcal {M}_1,\;\text{ infinitely } \text{ divisible }:\text{ if } \mu ^{(1)}\in \mathcal {M}_1, \text{ infinitely } \text{ divisible }\\&\text{ satisfies } \mu ^{(1)}_t=\mu _t, \text{ on } [0,\infty ), \text{ for } \text{ all } t>0, \text{ then } \mu ^{(1)}=\mu \}. \end{aligned}$$

Let us denote by \(\mathscr {I}\) the set of infinitely divisible distributions. Then it is straightforward that \(\mathscr {I}\cap \mathscr {C}\subset \mathscr {C}_i\). In particular if Conjecture C is true, then \(\mathscr {C}_i=\mathscr {I}\). It was proved in Chapter 4 of [10] that infinitely divisible distributions having some exponential moments belong to class \(\mathscr {C}_i\), which is a consequence of our results. The latter work uses a different technique based on the analytical continuation of the Wiener–Hopf factors \(\kappa _+\) and \(\kappa _-\).

Let \(k_-,\delta _-,\gamma _-\) and \(k_+,\delta _+,\gamma _+\) be the killing rate, the drift and the Lévy measure of the subordinators \(H^-\) and \(H^+\), respectively, and let us set \(\bar{\gamma }_+(x)=\gamma _+(x,\infty )\) and \(\bar{\gamma }_-(x)=\gamma _-(x,\infty )\). Let also \(U_-\) be the renewal measure of the downward ladder height process \(H^-\), that is \(U_-(\mathrm{d}x)=\int _0^\infty \mathbb {P}(H_t^-\in \mathrm{d}x)\,\mathrm{d}t\).

Theorem 4.1

Assume that the function \(t\in (a,\infty )\mapsto \overline{\Pi }(t)\) is completely monotone, for some \(a\ge 0\), that is, there exists a Borel measure \(\nu \) on \((0,\infty )\) such that for all \(t>a\),

$$\begin{aligned} \overline{\Pi }(t)=\int _0^\infty e^{-ut}\,\nu (\mathrm{d}u). \end{aligned}$$
(4.5)

Assume moreover that the support of \(\nu \) contains an increasing sequence \((a_n)\) such that \(\sum _{n}a_n^{-1}=+\infty \). Then the measure \(\mu \) belongs to the class \(\mathscr {C}_i\).

Proof

The proof relies on Vigon’s équation amicale inversée, see [10], p. 71, or (5.3.4), p. 44 in [2], which can be written as

$$\begin{aligned} \bar{\gamma }_+(x)=\int _{[0,\infty )}U_-(\mathrm{d}y)\overline{\Pi }(x+y),\quad x>0. \end{aligned}$$
(4.6)

Note that (4.6) is analogous to (2.3). From Lemma 4.1, given \(\kappa _+\), we know both \(\bar{\gamma }_+(x)\), for \(x>0\), and \(\overline{\Pi }(t)\), for \(t>0\). Then we will show that under our assumption, equation (4.6) allows us to determine the renewal measure \(U_-(\mathrm{d}y)\), so that the law of X will be entirely determined, thanks to the relation:

$$\begin{aligned} \hat{U}_-(z)= & {} \int _{\mathbb {R}_+}e^{-yz}U_-(\mathrm{d}y)\nonumber \\= & {} \frac{1}{\kappa _-(0,z)},\quad z>0, \end{aligned}$$
(4.7)

and the Wiener–Hopf factorization (4.1).

From (4.5), (4.6) and Fubini’s Theorem, we can write for all \(x>0\),

$$\begin{aligned} \bar{\gamma }_+(x)= & {} \int _{[0,\infty )}U_-(\mathrm{d}y)\int _0^\infty e^{-(x+y)z}\,\nu (\mathrm{d}z)\nonumber \\= & {} \int _0^\infty e^{-xz}\hat{U}_-(z)\,\nu (\mathrm{d}z). \end{aligned}$$
(4.8)

Then the left-hand side of Eq. (4.8) determines the measure \(\hat{U}_-(z)\,\nu (\mathrm{d}z)\). Since \(z\mapsto \hat{U}_-(z)\) is a continuous function, it is determined on the support of \(\nu \). From our assumption on this support and [4], we derive that \(\hat{U}_-(z)\), and hence \(\kappa _-(0,z)=-\log \mathbb {E}(e^{-z H^-_1})\), is determined for all \(z>0\). \(\square \)
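The determination step based on (4.8) can be illustrated with toy data (our own choices, not the objects of the theorem): take \(\nu =\delta _1\), so that \(\overline{\Pi }(t)=e^{-t}\), and a renewal-type measure \(U_-(\mathrm{d}y)=e^{-2y}\,\mathrm{d}y\), whose Laplace transform is \(\hat{U}_-(z)=1/(z+2)\); then \(\int _{[0,\infty )}U_-(\mathrm{d}y)\overline{\Pi }(x+y)=e^{-x}\hat{U}_-(1)\):

```python
import math

def simpson(h, a, b, n):
    step = (b - a) / n
    s = h(a) + h(b)
    for i in range(1, n):
        s += h(a + i * step) * (4 if i % 2 else 2)
    return s * step / 3

# Toy data: nu = delta_1, i.e. Pi-bar(t) = e^{-t}, and U_-(dy) = e^{-2y} dy,
# whose Laplace transform is U-hat(z) = 1/(z+2).
def pibar(t):
    return math.exp(-t)

for x in (0.3, 1.0, 2.5):
    lhs = simpson(lambda y: math.exp(-2 * y) * pibar(x + y), 0.0, 40.0, 20000)
    rhs = math.exp(-x) / 3.0          # e^{-x} * U-hat(1) = e^{-x} / (1 + 2)
    assert abs(lhs - rhs) < 1e-8
```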

Note that a result analogous to Theorem 4.1 holds when \(\mu \) has support in \(\mathbb {Z}\) and the sequence \(\Pi (\{n\})\), \(n\ge 1\), satisfies the same assumptions as \((\mu _n)_{n\ge 0}\) in Theorem 3.3. One may also wonder if an assumption such as (3.6) for \(\overline{\Pi }(t)\) would lead to a result similar to Theorem 3.2. However, in order to use the same argument as in the proof of this theorem together with equation (4.6), we need \(\hat{U}_-(z)\) to be bounded, which is not the case in general.

Remark 4.1

As already observed above, the class \(\mathscr {C}_i\) contains at least all probability measures in the set \(\mathscr {I}\cap (\mathscr {E}\cup \mathscr {M}\cup \mathscr {A}\cup \mathscr {M}_d)\), but Theorem 4.1 shows that there are other distributions in \(\mathscr {C}_i\). Indeed, it is easy to construct an example of a compound Poisson process \((X_t,\,t\ge 0)\) with intensity 1, whose Lévy measure \(\Pi \) satisfies the conditions of Theorem 4.1 but such that the law \(\mu (\mathrm{d}x)=e^{-1}\sum _{n\ge 0}\Pi ^{*n}(\mathrm{d}x)/{n!}\) of \(X_1\) does not belong to any of the classes \(\mathscr {E}\), \(\mathscr {M}\), \(\mathscr {A}\) and \(\mathscr {M}_d\).

An infinitely divisible distribution is said to be downward skip free (respectively upward skip free) if the support of the measure \(\Pi \) is included in \([0,\infty )\) (respectively in \((-\infty ,0]\)), that is, if the corresponding Lévy process is spectrally positive (respectively spectrally negative). Upward skip free distributions clearly belong to class \(\mathscr {C}_i\) from the Wiener–Hopf factorization (4.1). Then here is a counterpart of Theorem 2.3.

Theorem 4.2

Downward skip free infinitely divisible distributions belong to the class \(\mathscr {C}_i\).

Proof

The proof relies on Vigon’s équation amicale, p. 71 in [10]. See also equation (5.3.3), p. 44 in [2]. If \(\delta _->0\), then from [10], the Lévy measure \(\gamma _+\) is absolutely continuous and we will denote by \(\gamma _+(x)\) its density. Then Vigon’s équation amicale can be written as

$$\begin{aligned} \overline{\Pi }(x)=\int _0^\infty \gamma _+(x+\mathrm{d}u)\bar{\gamma }_-(u)+\delta _-\gamma _+(x)+k_-\bar{\gamma }_+(x),\quad x>0. \end{aligned}$$

It is plain that in the right-hand side, the term \(\int _0^\infty \gamma _+(x+\mathrm{d}u)\bar{\gamma }_-(u)\) is identically 0 if and only if the Lévy process X is spectrally positive, that is \(\mu \) is downward skip free. Moreover, (4.1) for \(u=0\) entails that the knowledge of \(\kappa _+\) implies that of \(\kappa _-(s,0)\), for all \(s\ge 0\). In particular, we know the killing rate of the subordinator \((\tau _t^-,t\ge 0)\), and that killing rate is the same as that of \((H_t^-,t\ge 0)\), that is \(k_-\). Then we conclude that X is spectrally positive if and only if there is a constant \(\delta _-\) such that

$$\begin{aligned} \overline{\Pi }(x)=\delta _-\gamma _+(x)+k_-\bar{\gamma }_+(x),\quad x>0, \end{aligned}$$

and this can be determined, since from our data, we know \(k_-\), \(\overline{\Pi }(x)\) and \(\bar{\gamma }_+(x)\), for \(x>0\). \(\square \)

5 More Classes of Distributions

In the previous sections, we have highlighted the subclasses \(\mathscr {E}\), \(\mathscr {M}\), \(\mathscr {M}_d\) and \(\mathscr {A}\) of \(\mathscr {C}\) and proved that these sets of distributions are distinct from each other. More specifically, none of them is included in another one. The aim of this section is then to show that some of these classes can be substantially enlarged through simple arguments.

Actually for most of the subclasses investigated in this paper, we imposed conditions bearing only on \(\mu \) restricted to \([0,\infty )\), but one is also allowed to make assumptions on \(\mu ^{*n}\) restricted to \([0,\infty )\). In order to move in this direction, let us mention the following straightforward extension of results of Sect. 3.

Proposition 5.1

Let \(\mu \in \mathcal {M}_1\) be absolutely continuous with density f. If there is \(n\ge 1\) such that the density function \(f^{*n}\) satisfies the same conditions as f in Theorem 3.1 or Theorem 3.2, then \(\mu \) is determined by \(\mu ^{*n}\) and \(\mu ^{*2n}\) restricted to \([0,\infty )\). In particular, \(\mu \) belongs to class \(\mathscr {C}\).

It is plain that an analogous extension of Theorem 3.3 is satisfied. Then here is a more powerful result allowing us to extend our classes of distributions.

Theorem 5.1

Let \(\mu \in \mathcal {M}_1\). If there is \(\nu \in \mathcal {M}_1\) whose support is included in \((-\,\infty ,0]\) and such that \(\mu *\nu \in \mathscr {C}\), then \(\mu \in \mathscr {C}\).

Proof

Let \(\mu ,\mu _1\in \mathcal {M}_1\) be such that for each \(n\ge 1\), the measures \(\mu ^{*n}\) and \(\mu _1^{*n}\) restricted to \([0,\infty )\) coincide. Set \(\bar{\mu }=\mu *\nu \) and \(\bar{\mu }_1=\mu _1*\nu \). Then from commutativity of the convolution product, \(\bar{\mu }^{*n}=\mu ^{*n}*\nu ^{*n}\) and \(\bar{\mu }^{*n}_1=\mu ^{*n}_1*\nu ^{*n}\). Since the support of \(\nu ^{*n}\) is included in \((-\infty ,0]\) and the measures \(\mu ^{*n}\) and \(\mu _1^{*n}\) restricted to \([0,\infty )\) coincide, the measures \(\bar{\mu }^{*n}\) and \(\bar{\mu }^{*n}_1\) restricted to \([0,\infty )\) coincide. Since \(\bar{\mu }\in \mathscr {C}\), the measures \(\bar{\mu }\) and \(\bar{\mu }_1\) are equal. Finally, from Lemma 2.3, the characteristic function of \(\nu \) does not vanish on any interval of \(\mathbb {R}\), and the identity \(\mu =\mu _1\) follows from continuity and injectivity of the Fourier transform. \(\square \)

Theorem 5.1 entails in particular that Conjecture C’ is equivalent to the following one: Any distribution \(\mu \in \mathcal {M}_1\) whose support is not included in \((-\infty ,0)\) is determined by its convolution powers \(\mu ^{*n}\), \(n\ge 1\), restricted to \([a,\infty )\), for some \(a\ge 0\). Indeed, it suffices to choose \(\nu =\delta _{-a}\) in Theorem 5.1. Finding more general examples illustrating this result is an open problem. In order to do so, one needs for instance to find the characteristic function \(\varphi \) of a distribution which belongs to class \(\mathscr {C}\) and the characteristic function \(\varphi _Y\) of a nonnegative random variable Y such that the ratio \(\varphi (t)/\varphi _Y(-t)\) is the characteristic function of some random variable X. Then since the law of \(X-Y\) belongs to class \(\mathscr {C}\), so does the law of X from Theorem 5.1.

Note that neither Proposition 5.1 nor Theorem 5.1 allows us to enlarge class \(\mathscr {E}\). In order to do so in the same spirit as in Theorem 5.1, one needs to find an invertible transformation \(T(\mu )\in \mathcal {M}_1\) of a distribution \(\mu \in \mathcal {M}_1\setminus \mathscr {E}\), such that \(T(\mu )^{*n}\), \(n\ge 1\) restricted to \([0,\infty )\) would be known and such that \(T(\mu )\) belongs to class \(\mathscr {E}\).

Let us end this paper with an example of a distribution which satisfies conjecture \(\mathscr {C}\), although it does not belong to any of the classes studied here. Assume that the support of \(\mu \) is included in \(\mathbb {Z}\) and recall that according to Lemma 2.5, this assumption can be checked from the data of the measures \(\mu ^{*n}\), \(n\ge 1\), restricted to \([0,\infty )\). Assume moreover that there are positive integers a and b such that

$$\begin{aligned} \qquad \left\{ \begin{array}{l} \mu (n)>0, \text{ for } \text{ all } n\ge a+b \text{ and } \mu (n)=0, \text{ for } \text{ all } n=0,\ldots ,a+b-1,\\ \mu ^{*2}(n)=0, \text{ for } \text{ all } n=0,\ldots ,a. \end{array}\right. \end{aligned}$$
(5.1)

Then we can determine \(\mu \) on \(\mathbb {Z}_-\), so that \(\mu \in \mathscr {C}\). Let us first show that \(\mu (n)=0\), for all \(n\le -b\). Assume that there is \(n\le -b\) such that \(\mu (n)>0\), and let \(k\in \{0,\ldots ,a\}\) be such that \(k-n\ge a+b\) (such a k exists since \(n\le -b\)). By definition of the convolution product, \(0\le \mu (k-n)\mu (n) \le \mu ^{*2}(k)\), but from our assumptions \(\mu (k-n)\mu (n)>0\) and \(\mu ^{*2}(k)=0\), which is contradictory; hence \(\mu (n)=0\), for all \(n\le -b\). On the other hand, assumptions (5.1) entail that for all \(k=a+1,\dots ,a+b-1\),

$$\begin{aligned} \mu ^{*2}(k)=2\sum _{i=a+b}^{k+b-1}\mu (k-i)\mu (i), \end{aligned}$$

that is \(\mu ^{*2}(a+1)=2\mu (-b+1)\mu (a+b)\), \(\mu ^{*2}(a+2)=2[\mu (-b+2)\mu (a+b)+\mu (-b+1)\mu (a+b+1)]\),.... Therefore, this triangular system allows us to determine \(\mu (n)\), for \(n=-b+1,-b+2,\ldots ,-1\), and the conclusion follows.
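The triangular system can be solved mechanically; the sketch below (Python, on a finitely supported toy measure with \(a=1\), \(b=3\), our own choice) recovers \(\mu (-2)\) and \(\mu (-1)\) from the convolution square, keeping track of the fact that each product \(\mu (k-i)\mu (i)\) appears twice in \(\mu ^{*2}(k)\), once for each ordered pair:

```python
from collections import defaultdict

# Example with a = 1, b = 3: mu vanishes on {0,...,3} and on (-inf,-3],
# and mu(-2), mu(-1) are to be recovered from the convolution square.
mu = {-2: 0.30, -1: 0.25, 4: 0.20, 5: 0.15, 6: 0.10}

mu2 = defaultdict(float)          # convolution square, the "known" data
for i, pi in mu.items():
    for j, pj in mu.items():
        mu2[i + j] += pi * pj

# Each product mu(k-i) mu(i) contributes twice to mu^{*2}(k) (ordered pairs),
# hence the triangular system below.
rec_m2 = mu2[2] / (2 * mu[4])                       # from mu^{*2}(a+1)
rec_m1 = (mu2[3] / 2 - rec_m2 * mu[5]) / mu[4]      # from mu^{*2}(a+2)
assert abs(rec_m2 - 0.30) < 1e-12
assert abs(rec_m1 - 0.25) < 1e-12
```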

Let us consider for instance \(a=1\), \(b=3\) and

$$\begin{aligned} \mu (-2)=\mu (-1)=\frac{1-c}{2}\quad \text{ and }\quad \mu (n)=\frac{1}{n^3},\quad n\ge 4, \end{aligned}$$

where \(c=\sum _{n\ge 4}n^{-3}\). Clearly, such a distribution does not belong to any of the classes \(\mathscr {A}\), \(\mathscr {M}\) or \(\mathscr {M}_d\). Then let us check that it does not belong to class \(\mathscr {E}\). The mean of \(\mu \) satisfies

$$\begin{aligned} \sum _{k\ge -2}k\mu (k)=-\frac{3}{2}(1-c)+\sum _{n\ge 4}\frac{1}{n^2}<0\,, \end{aligned}$$

so that (2.1) does not hold. Moreover, \(\mu \) has no positive exponential moments. Therefore the conditions of Theorems 2.1 and 2.2 are not satisfied and, since \(\mu \) is not downward skip free, we obtain the conclusion. Finally, it cannot be proved that \(\mu \) belongs to class \(\mathscr {C}\) by applying Proposition 5.1 or Theorem 5.1. However, from the above arguments, \(\mu \) does belong to class \(\mathscr {C}\).
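The arithmetic behind this last example can be checked numerically (truncating the series; the neglected tails are of order \(N^{-1}\) at most):

```python
# a = 1, b = 3, mu(-2) = mu(-1) = (1 - c)/2, mu(n) = n^{-3} for n >= 4,
# where c = sum_{n >= 4} n^{-3}; series truncated at N (tail O(1/N)).
N = 10 ** 5
c = sum(n ** -3 for n in range(4, N))
mass = 2 * (1 - c) / 2 + c
mean = -1.5 * (1 - c) + sum(n ** -2 for n in range(4, N))
assert abs(mass - 1.0) < 1e-12
assert -1.17 < mean < -1.14        # in particular, the mean is negative
```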