1 Introduction

A probabilistic theory, known as noncommutative probability theory (also called quantum probability theory), has been established and developed to include both classical probability theory and quantum theory. The reader is referred to [18] and references therein for basic ideas of probabilistic modeling in quantum mechanics.

Exponential martingale inequalities and tail probabilities of sums of independent random variables have extensive applications in probability theory. Many classical martingale and concentration inequalities have been reformulated to include noncommutative martingales, and noncommutative probability spaces have attracted great interest in recent years. The reader is referred to [1, 4, 15] and references therein for some interesting results.

In probability theory, the classical Bernstein inequality gives an upper bound on the probability that the sum of independent random variables deviates from its mean. Some noncommutative Bernstein inequalities and other similar inequalities are obtained in [5, 19]. In classical probability, the Freedman inequality is a martingale extension of the Bernstein inequality.

Using a powerful stopping time argument, Freedman obtained a discrete-time supermartingale inequality. More precisely, Freedman proved that if \(\{Y_n: Y_0=0, n \ge 0\}\) is a real-valued martingale with martingale difference sequence \(\{X_n: X_0=0, n \ge 0\}\) such that \(X_n=Y_n-Y_{n-1}\le 1\) for all n, then, for real numbers \(c\ge 0\) and \(h > 0\), it holds that

$$\begin{aligned} {{\mathrm{Prob}}}\left( Y_n\ge c ~\text {and}~ Z_n \le h ~{{{{\mathrm{for~some}}}}}~ n\ge 0 \right) \le \left( \frac{h}{c + h}\right) ^{c + h} e^c, \end{aligned}$$
(1.1)

in which \(Z_n = \sum _{k=1}^n {\mathbb {E}}_{k-1}(X_k^2)\) is the predictable quadratic variation and \({\mathbb {E}}_{k-1}\) denotes the conditional expectation with respect to the \((k-1)\)st \(\sigma \)-algebra in the underlying filtration. Recently, Tropp [21] nicely extended Freedman’s result to the case of matrix martingales by utilizing a deep theorem due to Lieb; see also [3, 22].
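
For readers who wish to experiment, the following Python sketch (our own illustration; the toy martingale and all parameter values are arbitrary choices) estimates the left-hand side of (1.1) by Monte Carlo for a martingale with increments bounded by 1 and compares it with the right-hand side.

```python
import numpy as np

rng = np.random.default_rng(0)

def freedman_bound(c, h):
    # Right-hand side of (1.1).
    return (h / (c + h)) ** (c + h) * np.exp(c)

# Toy martingale: Y_n is a sum of i.i.d. increments uniform on [-1, 1],
# so X_n <= 1 and Z_n = sum_{k <= n} E_{k-1}(X_k^2) = n / 3 here.
c, h, n_steps, n_trials = 4.0, 10.0, 60, 100_000
z = np.arange(1, n_steps + 1) / 3.0
hits = 0
for _ in range(n_trials):
    y = np.cumsum(rng.uniform(-1.0, 1.0, size=n_steps))
    # Event of (1.1): Y_n >= c and Z_n <= h for some n.
    hits += np.any((y >= c) & (z <= h))

print("empirical probability:", hits / n_trials)       # roughly 0.2 for these choices
print("Freedman bound       :", freedman_bound(c, h))  # about 0.49
```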

The starting point of exploring noncommutative versions of classical martingale inequalities goes back to the noncommutative Burkholder–Gundy inequalities. It is remarkable that several noncommutative inequalities are applied in noncommutative harmonic analysis. Hence, it is interesting to study Freedman inequalities in the context of noncommutative martingales. Another motivation for our work comes from a very natural question concerning the existence of an appropriate version of the Freedman inequality, which we study in the language of noncommutative martingales. We should, however, emphasize that most of the stopping time arguments are no longer applicable and that transferring classical martingale inequalities to the noncommutative probability setting is often nontrivial and requires additional operator algebraic techniques. In fact, certain constructions with projections play an important role in providing a noncommutative candidate for the notion of stopping time.

In this paper, we apply a trace Jensen inequality due to Harada and Kosaki to establish a version of the Freedman inequality in the context of martingales/supermartingales on a tracial von Neumann algebra, in which the left hand side of (1.1) is replaced by the value of \(\tau \) on a countable sum of orthogonal projections. Such estimates have important consequences for quantum probability. As an application, we derive a noncommutative analogue of Bernstein’s inequality, which, in the classical set-up, is known to be a consequence of Freedman’s inequality. We also deduce classical versions of Freedman’s inequality from the established noncommutative one. Furthermore, we present a result corresponding to Levy’s form of the law of the iterated logarithm (see [11]). We also give some examples illustrating our conditions.

The techniques and tools used in the paper make effective use of the spectral calculus for (possibly unbounded) self-adjoint operators. A key point is to replace the classical notion of a stopping time \(\sigma \) by a sequence of mutually orthogonal projections \((p_n)\). In the classical set-up these projections would correspond to the indicator functions of the disjoint events \(\{\sigma =n\},\,\,n=0, 1, 2, \ldots \).

In what follows, \(({\mathfrak {M}}, \tau )\) denotes a noncommutative probability space consisting of a von Neumann algebra \({\mathfrak {M}}\) acting on a Hilbert space \({\mathscr {H}}\) with unit element 1, equipped with a normal faithful tracial state \(\tau :{\mathfrak {M}}\rightarrow {\mathbb {C}}\). A closed densely defined linear operator \(x:{\mathcal {D}}(x)\subseteq {\mathscr {H}}\rightarrow {\mathscr {H}}\) is called affiliated with \({\mathfrak {M}}\) when \(ux=xu\) for every unitary u in the commutant \({\mathfrak {M}}^{\prime }\) of \({\mathfrak {M}}\). It is easy to see that if x is in the algebra \({\mathbb {B}}({\mathscr {H}})\) of all bounded linear operators on the Hilbert space \({\mathscr {H}}\), then x is affiliated with \({\mathfrak {M}}\) if and only if \(x \in {\mathfrak {M}}\). A closed densely defined operator x affiliated with \({\mathfrak {M}}\) is called \(\tau \)-measurable if there exists a number \(\lambda \ge 0\) such that \(\tau \left( e^{|x|}(\lambda ,\infty )\right) <\infty \), in which \(|x|=(x^*x)^{1/2}\) and \(e^{|x|}\) denotes the spectral measure of the self-adjoint operator |x|.

For a von Neumann subalgebra \({\mathfrak {N}}\) of \({\mathfrak {M}}\), the conditional expectation of \({\mathfrak {M}}\) with respect to \({\mathfrak {N}}\) is denoted by \({\mathcal {E}}_{{\mathfrak {N}}}\). For \(p\ge 1\), the noncommutative \(L^p\)-space \(L^p({\mathfrak {M}})\) is defined as the completion of \({\mathfrak {M}}\) with respect to the \(L^p\)-norm \(\Vert x\Vert _p:=\left( \tau (|x|^p)\right) ^{1/p}\). The space \(L^p({\mathfrak {M}})\) may be regarded as the subspace of the p-integrable operators in the set of all \(\tau \)-measurable operators.

A filtration of \({\mathfrak {M}}\) is an increasing sequence \(({\mathfrak {M}}_j, {\mathcal {E}}_j)_{j\ge 0}\) of von Neumann subalgebras of \({\mathfrak {M}}\) together with the conditional expectations \( {\mathcal {E}}_j\) of \({\mathfrak {M}}\) with respect to \({\mathfrak {M}}_j\) such that \(\bigcup _j{\mathfrak {M}}_j\) is dense in \({\mathfrak {M}}\) with respect to the \(w^*\)-topology. A sequence \((x_j)_{j\ge 0}\) in \(L^1({\mathfrak {M}})\) is said to be a martingale (supermartingale, respectively) with respect to the filtration \(({\mathfrak {M}}_j)_{j\ge 0}\) if \(x_j \in L^1({\mathfrak {M}}_j)\) and \({\mathcal {E}}_j(x_{j+1})=x_j\) \(\left( {\mathcal {E}}_j(x_{j+1}) \le x_j, ~ \text {respectively} \right) \) for every \(j\ge 0\).

Put \(d_j=x_j-x_{j-1}\,\,(j\ge 0)\) with the convention that \(x_{-1}=0\). Then the sequence \(dx=(d_j)_{j\ge 0}\) is called the martingale difference of \((x_j)\). The reader is referred to [10] for more information on noncommutative analysis elements.

Standing Notation. Throughout the paper, let \((x_n)_{n\ge 0}\) be a self-adjoint martingale in \({\mathfrak {M}}\) with respect to a filtration \(({\mathfrak {M}}_n,{\mathcal {E}}_n)_{n\ge 0}\) with \(x_0=0\) such that the martingale difference sequence \((d_n)_{n\ge 1}\) is uniformly bounded by 1, that is,

$$\begin{aligned} d_n \le 1\quad \text {for all} ~~ n \ge 1. \end{aligned}$$

Put \(z_0 = 0\) and \(z_n = \sum _{k=1}^n {\mathcal {E}}_{k-1}(d_k^2)\). For any positive number t, set

$$\begin{aligned} u_n^{(t)} := \exp \left\{ t x_n - (e^t - 1 - t)z_n\right\} . \end{aligned}$$

Besides this introductory section, the rest of this paper is organized as follows. In Sect. 2, we establish some noncommutative Freedman inequalities for martingales under some mild conditions. In Sect. 3, we give some applications of our main results including a noncommutative Bernstein-type inequality.

2 Noncommutative Freedman Inequality

A useful inequality involving the trace of matrices, the so-called Lieb concavity, was proved in [13]. Araki [2] subsequently presented a generalization in the setting of von Neumann algebras as follows.

Theorem 2.1

(Lieb–Araki concavity [2]) If \(v \in {\mathfrak {M}}\) is a self-adjoint operator, then the function

$$\begin{aligned} \phi : w \mapsto \tau \left( \exp (v + \log (w)) \right) \end{aligned}$$

is concave on the strictly positive part of \({\mathfrak {M}}\).

The trace Jensen inequality established by Harada and Kosaki in [9] states that if \(a \in {\mathfrak {M}}\) is a nonzero contraction and x is a bounded above (that is, \(x \le m\) for some \(m \in {\mathbb {R}}^+\)) \(\tau \)-measurable operator, then, for any continuous convex function f with \(f(0)=0\), it holds that

$$\begin{aligned} \tau (f(a^*xa)) \le \tau (a^*f(x)a). \end{aligned}$$

If p is a projection such that pxp is bounded, then

$$\begin{aligned} \sum _{k=1}^n \frac{\tau \left( (pxp)^k\right) }{k!} \le \sum _{k=1}^n \frac{\tau \left( px^kp \right) }{k!} \le \tau (pe^xp) - \tau (p). \end{aligned}$$

Note that, since pxp is bounded, the series \(\sum \nolimits _{n=1}^\infty \frac{\left( pxp \right) ^n}{n!}\) is absolutely convergent. Hence, the series \(\sum \nolimits _{n=1}^\infty \frac{\tau \left( (pxp)^n\right) }{n!}\) is convergent. Thanks to the Taylor formula for the exponential function, we have

$$\begin{aligned} \tau (pe^{pxp}p) = \tau (p) + \sum _{n=1}^\infty \frac{\tau \left( (pxp)^n\right) }{n!} \le \tau (pe^xp). \end{aligned}$$
(2.1)
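
As a numerical sanity check of (2.1) (our own illustration, not part of the argument), one can test the inequality for a random Hermitian matrix and a coordinate projection under the normalized trace; a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6

def ntr(a):                      # normalized trace, playing the role of tau
    return np.trace(a).real / d

def expm_h(a):                   # exponential of a Hermitian matrix via eigh
    w, v = np.linalg.eigh(a)
    return (v * np.exp(w)) @ v.conj().T

g = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
x = (g + g.conj().T) / 2                         # self-adjoint, hence bounded above
p = np.diag([1, 1, 1, 0, 0, 0]).astype(complex)  # a projection

lhs = ntr(p @ expm_h(p @ x @ p) @ p)             # tau(p e^{pxp} p)
rhs = ntr(p @ expm_h(x) @ p)                     # tau(p e^{x} p)
print(lhs, "<=", rhs, ":", lhs <= rhs + 1e-12)
```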

The following lemma is a variant of a known result of [16, Proposition 1] in the setting of noncommutative probability spaces.

Lemma 2.2

If x and y are either self-adjoint bounded above operators or positive \(\tau \)-measurable operators, and \(x\le y\), then for any continuous monotone increasing function f defined on a large enough interval containing the spectra of x and y, it holds that

$$\begin{aligned} \tau (f(x))\le \tau (f(y)). \end{aligned}$$

Proof

If x and y are \(\tau \)-measurable operators and \(0\le x\le y\), then the assertion is deduced from Lemmas 2.5 and 2.9 of [7].

Now, let x and y be self-adjoint (not necessarily positive) bounded above operators with \(x \le y \le m\) for some positive real number m. Applying the positive \(\tau \)-measurable case to the operators \(0 \le -y + m \le -x +m \) and the function \(t \mapsto -f(-t + m)\) with f as given in the lemma we arrive at the result. \(\square \)
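
For matrices, the conclusion of Lemma 2.2 follows from the familiar fact that \(x \le y\) forces each ordered eigenvalue of x to lie below the corresponding eigenvalue of y; the following small NumPy check (our own illustration) tests the conclusion for an increasing function.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5

def ntr_f(a, f):
    # tau(f(a)) computed from the eigenvalues of the Hermitian matrix a.
    return np.mean(f(np.linalg.eigvalsh(a)))

g = rng.standard_normal((d, d))
x = (g + g.T) / 2                       # self-adjoint
b = rng.standard_normal((d, d))
y = x + b @ b.T                         # y - x >= 0, i.e. x <= y

f = np.exp                              # a continuous monotone increasing function
print(ntr_f(x, f), "<=", ntr_f(y, f), ":", ntr_f(x, f) <= ntr_f(y, f) + 1e-12)
```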

Remark 2.3

The Lieb–Araki concavity ensures that if \(v \in {\mathfrak {M}}\) is self-adjoint and \(u \in {\mathfrak {M}}\) is a positive operator, then the continuous function \(g: (0, \infty ) \rightarrow (0, \infty )\) defined by \(g(t) = \tau \left( \exp (v + \log (t + u)) \right) \) is concave. It follows from the proof of Jensen’s inequality for positive contractions [17, Theorem A] that if \(\alpha \) is a unital positive map on \({\mathfrak {M}}\) and f is a real concave function on \([0, \infty )\), then for any self-adjoint element \(a \in {\mathfrak {M}}\) , it holds that \(\tau \left( \alpha (f(a)) \right) \le \tau \left( f(\alpha (a))\right) \), where f(a) is defined by the functional calculus.

Therefore, if w is a strictly positive operator in \({\mathfrak {M}}\), then, by applying the Jensen inequality above to any conditional expectation \({\mathcal {E}}_{\mathfrak {N}}\) corresponding to a von Neumann subalgebra \({\mathfrak {N}}\) of \({\mathfrak {M}}\) and to the concave function \(g_{\varepsilon }(t) = \tau \left( \exp \left\{ v + \log (t + \varepsilon u) \right\} \right) \), in which \(\varepsilon >0\) is an arbitrary real number, we have

$$\begin{aligned} \tau \left( \exp \left\{ v + \log w \right\} \right)&\le \tau \left( \exp \left\{ v + \log (w + \varepsilon u) \right\} \right) \quad (\text {by Lemma}~2.2)\\&= \tau \left( g_{\varepsilon }(w) \right) \\&= \tau \left( {\mathcal {E}}_{\mathfrak {N}} (g_{\varepsilon }(w)) \right) \\&\le \tau \left( g_{\varepsilon } \left( {\mathcal {E}}_{\mathfrak {N}}(w) \right) \right) \\&= \tau \left( \exp \left\{ v + \log ({\mathcal {E}}_{\mathfrak {N}} (w) + \varepsilon u) \right\} \right) . \end{aligned}$$

Letting \(\varepsilon \) tend to zero, we deduce that

$$\begin{aligned} \tau \left( \exp \left\{ v + \log w \right\} \right) \le \tau \left( \exp \left\{ v + \log {\mathcal {E}}_{\mathfrak {N}}(w) \right\} \right) \end{aligned}$$
(2.2)

for any self-adjoint operator v in \({\mathfrak {M}}\).
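
Inequality (2.2) can also be examined numerically in a matrix algebra. The sketch below (our own illustration) takes \({\mathcal {E}}_{\mathfrak {N}}\) to be the conditional expectation onto diagonal matrices (a pinching) and, for simplicity, takes v inside that diagonal subalgebra, which is the situation arising in the proof of Proposition 2.4 below.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 5

def ntr(a):
    return np.trace(a).real / d

def funm_h(a, f):                     # f(a) for a Hermitian matrix a, via eigh
    w, v = np.linalg.eigh(a)
    return (v * f(w)) @ v.conj().T

def diag_expectation(a):              # conditional expectation onto diagonal matrices
    return np.diag(np.diag(a))

v = np.diag(rng.standard_normal(d)).astype(complex)    # self-adjoint, diagonal
b = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
w = b @ b.conj().T + 0.1 * np.eye(d)                   # strictly positive

lhs = ntr(funm_h(v + funm_h(w, np.log), np.exp))       # tau(exp(v + log w))
rhs = ntr(funm_h(v + funm_h(diag_expectation(w), np.log), np.exp))
print(lhs, "<=", rhs, ":", lhs <= rhs + 1e-10)
```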

Our main result is inspired by some ideas from the commutative case; see [21].

Proposition 2.4

Let the sequence \((x_n)_{n\ge 0}\), fixed as in the last part of the Introduction, be a martingale of positive elements in \({\mathfrak {M}}\). Then the sequence \((u_n^{(t)})_{n\ge 0}\) is trace-decreasing, that is, \(\tau (u_{n+1}^{(t)}) \le \tau (u_n^{(t)}) \) for all \(n \ge 0\). Furthermore, \(\tau (u_n^{(t)}) \le 1\) for all \(n\ge 0\).

Proof

The function \(f(s) = e^s - 1 - s\) is monotone increasing on \([0,\infty )\). Let us fix a real number \(t > 0\). To prove the trace-decreasing property of \((u_n^{(t)})_{n\ge 0}\), we apply (2.2) with \(v= t x_{n} - f(t)z_{n+1}\) and \(w= e^{t d_{n+1}}\). We have

$$\begin{aligned} \tau (u_{n+1}^{(t)})= & {} \tau (\exp \left\{ t x_{n+1} - f(t)z_{n+1}\right\} )\\= & {} \tau (\exp \left\{ t x_{n} - f(t)z_{n+1} + t d_{n+1}\right\} )\\\le & {} \tau \left( \exp \left\{ t x_{n} - f(t)z_{n+1} + \log {\mathcal {E}}_{n}(e^{t d_{n+1}})\right\} \right) \\\le & {} \tau \left( \exp \left\{ t x_{n} - f(t)z_{n+1} + f(t) {\mathcal {E}}_{n}( d_{n+1}^2)\right\} \right) \\= & {} \tau \left( \exp \left\{ t x_{n} - f(t)z_{n} \right\} \right) = \tau (u_{n}^{(t)}). \end{aligned}$$

To establish the last inequality above, we use the continuous functional calculus to identify \(d_{n+1}\) with the function \(h(s) = s\) in \(C\left( {{\mathrm{sp}}}(d_{n+1})\right) \), where \({{\mathrm{sp}}}(d_{n+1})\) stands for the spectrum of \(d_{n+1}\). We aim to show that

$$\begin{aligned} e^{t d_{n+1}} \le 1 + t d_{n+1} + f(t) d_{n+1}^2. \end{aligned}$$
(2.3)

It is enough to verify that

$$\begin{aligned} e^{t s} \le 1 + t s + f(t) s^2 \end{aligned}$$

for every \(s \le 1\). To this end, we first assume that \(0 \le s \le 1\). Then

$$\begin{aligned} e^{t s} = 1 + t s + \sum _{k=2}^\infty \frac{t^k s^k}{k!} \le 1 + t s + \sum _{k=2}^\infty \frac{t^k s^2}{k!} = 1 + t s + f(t) s^2\qquad (0 \le s \le 1). \end{aligned}$$

Second, we assume that \(s \le 0\). Let us define \(g(s) := 1 + t s + f(t) s^2 - e^{s t}\) for \(s \le 0\). Since \(g(0) = 0\), it suffices to establish that g is monotone decreasing on \((-\infty , 0]\), or equivalently, that \(g^\prime (s) = t + 2f(t) s - te^{st} \le 0\) for every \(s \le 0\). Since \(g^\prime (0) = 0\), it is in turn enough to show that \(g^{\prime \prime } \ge 0\) on \((-\infty , 0]\). The latter holds true, since

$$\begin{aligned} g^{\prime \prime } (s) = 2f(t) - t^2 e^{st} \ge 2e^t - 2t - 2 - t^2 \ge 0 \end{aligned}$$

as \(s \le 0\).

Since the conditional expectation operator is positive and \({\mathcal {E}}_{n}(d_{n+1}) = 0\), we have

$$\begin{aligned} {\mathcal {E}}_{n}(e^{t d_{n+1}}) \le 1 + {\mathcal {E}}_{n}\left( f(t) d_{n+1}^2\right) \le \exp \left\{ {\mathcal {E}}_{n}\left( f(t) d_{n+1}^2 \right) \right\} . \end{aligned}$$

By the operator monotonicity of the function \(\log \), we then arrive at

$$\begin{aligned} \log \left( {\mathcal {E}}_{n}(\exp (td_{n+1}))\right) \le {\mathcal {E}}_{n}\left( f(t)d_{n+1}^2\right) , \end{aligned}$$

Now use Lemma 2.2 to conclude the argument.

Finally, the last assertion follows from \(\tau (u_{0}^{(t)}) = 1\). \(\square \)
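
The elementary scalar inequality \(e^{ts} \le 1 + ts + f(t)s^2\) for \(s \le 1\), which drives the estimate above, is easy to double-check numerically (our own illustration):

```python
import numpy as np

for t in (0.3, 1.0, 2.5):                 # a few positive values of t
    f_t = np.exp(t) - 1 - t
    s = np.linspace(-20.0, 1.0, 200_001)
    gap = 1 + t * s + f_t * s**2 - np.exp(t * s)
    print(t, gap.min() >= -1e-9)          # e^{ts} <= 1 + ts + f(t) s^2 on s <= 1
```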

Remark 2.5

An investigation of the proof shows that if \({\mathfrak {M}}=L^\infty \) is a commutative probability space, then \((u_{n}^{(t)})\) is a supermartingale.

One of the problems that needs to be handled is the lack of stopping-time arguments, which are used extensively in the classical case. We propose a Cuculescu-type approach and model these arguments by constructing inductively certain classes of projections.

Lemma 2.6

If the sequence \((x_n)_{n\ge 0}\), fixed as in the last part of the Introduction, is a martingale in \({\mathfrak {M}}\), then for any nonnegative number c and positive number h, there exists a sequence \((p_n)_{n\ge 1}\) of mutually orthogonal projections such that

$$\begin{aligned}&\max _{1 \le n \le m} \frac{\tau \left( \mathbf{1 }_{[c, \infty )} (x_n) \wedge \mathbf{1 }_{[0, h]} (z_n) \right) }{2^{n-1}} \le \left\| \sum _{n=1}^m p_n \right\| _1 \nonumber \\&\quad \le \inf _{t>0}\left\{ e^{-tc + (e^t - 1 - t)h}~ \left\| \sum _{n=1}^m p_n\,u_n^{(t)}\,p_n \right\| _1 \right\} , \end{aligned}$$
(2.4)

for any \(m \in {\mathbb {N}}\). Moreover, if there exists a number \(n \in {\mathbb {N}}\) such that \(\mathbf{1 }_{[c, \infty )} (x_n) \wedge \mathbf{1 }_{[0, h]} (z_n)\) is a nonzero projection, then \(\sum \nolimits _{n=1}^\infty p_n\) can be chosen to be nonzero.

Proof

Let \(c \ge 0\) and let \(h > 0\) be real numbers. Set

$$\begin{aligned} e_0 := 0, ~~\text {and} \quad e_n := e_n^{c,h} = \mathbf{1 }_{[c, \infty )} (x_n) \wedge \mathbf{1 }_{[0, h]} (z_n) \end{aligned}$$

for every \(n \ge 1\). Moreover, define

$$\begin{aligned} p_0 := 0, ~~\text {and} \quad p_n := p_n^{c,h} = e_n \wedge \bigwedge _{i=1}^{n-1} e_{i}^\perp \quad \text {for }n \ge 1 \end{aligned}$$

Therefore, the \(p_n\)’s are mutually orthogonal projections. In fact, if \(k < j\), then

$$\begin{aligned} \Vert p_kp_j \Vert ^2 = \Vert p_jp_kp_j \Vert \le \Vert p_j e_kp_j \Vert = \Vert p_je_k \Vert ^2 = \Vert e_k p_j e_k \Vert \le \Vert e_k e_k^\perp e_k \Vert = 0. \end{aligned}$$

Moreover, \(\mathbf{1 }_{[c, \infty )} (x_n) x_n \ge c \mathbf{1 }_{[c, \infty )} (x_n)\), and by the definition of \(p_n\), we have \(p_n \le \mathbf{1 }_{[c, \infty )} (x_n)\). Hence

$$\begin{aligned} p_nx_np_n = \left( p_n \mathbf{1 }_{[c, \infty )} (x_n)\right) x_n p_n \ge c p_n \mathbf{1 }_{[c, \infty )} (x_n) p_n = c p_n. \end{aligned}$$
(2.5)

Similarly, one can show that \(p_nz_np_n \le hp_n\). Indeed, \(\mathbf{1 }_{[0, h]} (z_n) \, z_n \le h\, \mathbf{1 }_{[0, h]} (z_n)\) and due to \(p_n \le \mathbf{1 }_{[0, h]} (z_n)\), we deduce

$$\begin{aligned} p_nz_np_n = \left( p_n \mathbf{1 }_{[0, h]} (z_n) \right) z_n p_n \le h p_n \mathbf{1 }_{[0, h]} (z_n) p_n = h p_n. \end{aligned}$$

Now, fix \(t>0\). Then

$$\begin{aligned} tcp_n - (e^{t} - 1 - t)hp_n \le tp_nx_np_n - (e^{t} - 1 - t)p_nz_np_n. \end{aligned}$$

The two operators \(\exp \left\{ t p_nx_np_n - (e^t - 1 - t)p_nz_np_n\right\} \) and \(\exp \left\{ (tc - (e^t - 1 - t)h)p_n\right\} \) commute. Hence

$$\begin{aligned} \exp \left\{ (tc - (e^t - 1 - t)h)p_n\right\} \le \exp \left\{ t p_nx_np_n - (e^t - 1 - t)p_nz_np_n\right\} , \end{aligned}$$

and so

$$\begin{aligned} \tau \left( p_n\,\exp \left\{ tc - (e^t - 1 - t)h\right\} \right)&= \tau \left( p_n\exp \left\{ (tc - (e^t - 1 - t)h)p_n\right\} \right) \nonumber \\&\le \tau \left( p_n \exp \left\{ t p_nx_np_n - (e^t - 1 - t)p_nz_np_n\right\} \right) , \end{aligned}$$
(2.6)

where to get the first equality, we write down the Taylor expansion as

$$\begin{aligned} p_n\exp \left\{ (tc - (e^t - 1 - t)h)p_n\right\}= & {} p_n \sum _{j=0}^\infty \frac{\left( (tc - (e^t - 1 - t)h)p_n \right) ^j}{j!} \\= & {} p_n \left( 1 + \sum _{j=1}^\infty \frac{(tc - (e^t - 1 - t)h)^j p_n}{j!} \right) \\= & {} p_n\,e^{tc - (e^t - 1 - t)h}. \end{aligned}$$

On the other hand, by the trace Jensen inequality (2.1), we get

$$\begin{aligned} \tau \left( p_n \exp \left\{ t p_nx_np_n - (e^t - 1 - t)p_nz_np_n\right\} p_n\right) \le \tau (p_nu_n^{(t)}p_n). \end{aligned}$$
(2.7)

Combining inequalities (2.6) and (2.7), and summing over n, we obtain

$$\begin{aligned} e^{tc - (e^t - 1 - t)h}\sum _{n=1}^m \tau (p_n) \le \sum _{n=1}^m \tau (p_nu_n^{(t)}p_n) \end{aligned}$$

for any natural number m. Taking the infimum over \(t> 0\), we get the right hand side of inequality (2.4).

It remains to show the left hand side of inequality (2.4). To this end, we prove that

$$\begin{aligned} \tau \left( \bigwedge \limits _{n=1}^m p_n^\perp \right) \le 1 - \frac{1}{2^{n-1}}\tau \left( e_n\right) \quad (1 \le n \le m) \end{aligned}$$
(2.8)

We use induction on n. Since \(\tau \left( \bigwedge \nolimits _{n=1}^m p_n^\perp \right) \le \tau (p_1^{\perp }) = \tau (e_1^{\perp }) = 1 - \tau (e_1)\), the case \(n=1\) holds. Assume now that inequality (2.8) holds for every index \(i < m\); we verify it for \(n = m\), the argument for an arbitrary \(1 < n \le m\) being identical. Then

$$\begin{aligned} \tau \left( \bigwedge \limits _{n=1}^m p_n^\perp \right)\le & {} \tau \left( p_m^{\perp } \right) = \tau \left( e_m^{\perp } \vee \bigvee _{i=1}^{m-1} e_{i} \right) \\\le & {} \tau \left( e_{m}^{\perp } \right) + \sum _{i=1}^{m-1} \tau \left( e_{i} \right) \\\le & {} \tau \left( e_{m}^{\perp } \right) + \left( 1 - \tau \left( \bigwedge \limits _{n=1}^m p_n^\perp \right) \right) \sum _{i=1}^{m-1} 2^{i-1}\\&\quad \left( \text {by the inductive hypothesis}\right) \\= & {} \tau \left( e_{m}^{\perp } \right) + \left( 1 - \tau \left( \bigwedge \limits _{n=1}^m p_n^\perp \right) \right) \left( 2^{m-1} - 1\right) , \end{aligned}$$

in which, to reach the second inequality, we use the fact that if \((p_{\lambda })_{\lambda \in \Lambda }\) is a family of projections in \({\mathfrak {M}}\), then \(\tau \left( \vee _{\lambda \in \Lambda }p_{\lambda }\right) \le \sum _{\lambda \in \Lambda }\tau (p_{\lambda })\); see [20, Page 58]. Therefore,

$$\begin{aligned} \tau \left( \bigwedge \limits _{n=1}^m p_n^\perp \right) \le \frac{2^{m-1} - 1 + \tau \left( e_{m}^{\perp } \right) }{2^{m-1}} = 1 - \frac{1}{2^{m-1}}\tau \left( e_m\right) , \end{aligned}$$

and (2.8) follows.

Now \(\left\| \sum \nolimits _{n=1}^m p_n \right\| _1 = \tau \left( \bigvee \nolimits _{n=1}^m p_n \right) = 1 - \tau \left( \bigwedge \nolimits _{n=1}^m p_n^\perp \right) \ge \frac{1}{2^{n-1}}\tau \left( e_n\right) \) for all \(1 \le n \le m\). \(\square \)
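
For intuition, the spectral projections and meets used in the preceding proof can be computed explicitly in a matrix algebra. The following NumPy sketch (our own illustration; the helper names are ours) builds \(\mathbf{1 }_{[c, \infty )} (x)\), \(\mathbf{1 }_{[0, h]} (z)\) and the meet \(p \wedge q\) as the projection onto the intersection of ranges.

```python
import numpy as np

def spectral_projection(a, predicate):
    """Spectral projection 1_S(a) of a Hermitian matrix a; the set S is
    described by a boolean predicate acting on the eigenvalues."""
    w, v = np.linalg.eigh(a)
    cols = v[:, predicate(w)]
    return cols @ cols.conj().T

def meet(p, q, tol=1e-10):
    """p wedge q: projection onto the intersection of range(p) and range(q),
    i.e. onto ker(1-p) intersected with ker(1-q)."""
    d = p.shape[0]
    w, v = np.linalg.eigh((np.eye(d) - p) + (np.eye(d) - q))
    cols = v[:, w < tol]
    return cols @ cols.conj().T

def e_projection(x_n, z_n, c, h):
    """e_n = 1_{[c, infinity)}(x_n) wedge 1_{[0, h]}(z_n), as in the proof above."""
    return meet(spectral_projection(x_n, lambda w: w >= c),
                spectral_projection(z_n, lambda w: (w >= 0) & (w <= h)))

# p_n = e_n wedge e_1^perp wedge ... wedge e_{n-1}^perp can then be built by
# iterating `meet` with the complements np.eye(d) - e_i.
```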

Now, we deduce the classical Freedman inequality from Lemma 2.6.

Corollary 2.7

Let \(\{X_n: X_0=0, n \ge 0\}\) be a real-valued martingale of bounded random variables with martingale difference sequence \(\{\Delta _n: \Delta _0=0, n \ge 0\}\) such that \(\Delta _n \le 1\) for all n. Then, for all numbers \(c \ge 0\) and \(h > 0\), it holds that

$$\begin{aligned} {{\mathrm{Prob}}}\left( X_n\ge c ~\text {and}~ Z_n \le h ~{{\mathrm{for~some}}}~ n\ge 0 \right) \le \left( \frac{h}{c + h}\right) ^{c + h} e^c, \end{aligned}$$
(2.9)

in which \(Z_n = \sum \nolimits _{k=1}^n {\mathbb {E}}_{k-1}(\Delta _k^2)\).

Proof

Notice that in the commutative case, due to the well-ordering principle of \({\mathbb {N}}\), the projection \(\sum \nolimits _{n=1}^\infty p_n = \bigvee \nolimits _{n=1}^\infty p_n = \bigvee \nolimits _{n=1}^\infty \left( e_n \wedge \bigwedge \nolimits _{i=1}^{n-1} e_i^\perp \right) \), which appeared in the proof of Lemma 2.6, is the indicator variable of

$$\begin{aligned} A= & {} \left\{ \omega : X_n(\omega )\ge c ~\text {and}~ Z_n(\omega ) \le h ~{{\mathrm{for~some}}}~ n\ge 0 \right\} \\= & {} \bigcup _{n=1}^\infty \big (\left\{ \omega : X_n(\omega ) \ge c \right\} \cap \left\{ \omega : Z_n(\omega ) \le h \right\} \big ), \end{aligned}$$

where \(e_n := \chi _{\{X_n \ge c \}} \chi _{\{Z_n \le h \}}\). Furthermore, the sequence \((p_n)\) in the proof of the noncommutative Freedman inequality (Lemma 2.6) corresponds to the stopping time \(\sigma = \inf \{n \in {\mathbb {N}}: X_n\ge c ~\text {and}~ Z_n \le h \}\) with \(\inf \emptyset = \infty \), and hence \(\sum \nolimits _{n\in {\mathbb {N}}} p_n\,u_n^{(t)}\,p_n = u_\sigma ^{(t)} \chi _{\{\sigma < \infty \}}\). More precisely, we may assume that \(A \ne \emptyset \), since otherwise \({{\mathrm{Prob}}}(A) = 0\) and the desired inequality clearly holds. Now, if \(\omega \in A\), then there exists a least number \(j \in {\mathbb {N}}\) such that \(e_j(\omega ) = 1\); then \(e_i(\omega ) = 0\) for every \(i < j\), and hence, by the definition of \(p_j\), we have \(p_j(\omega ) = 1\). We therefore obtain \(\left( u_\sigma ^{(t)} \chi _{\{\sigma< \infty \}}\right) (\omega ) = u_{\sigma (\omega )}^{(t)}(\omega ) \chi _{\{\sigma (\omega ) < \infty \}} = u_j^{(t)}(\omega ).\) On the other hand, \(\left( \sum \nolimits _{n\in {\mathbb {N}}} p_n\,u_n^{(t)}\,p_n\right) (\omega ) = p_j(\omega )\,u_j^{(t)}(\omega )\,p_j(\omega ) = u_j^{(t)}(\omega )\), since the \(p_n\)'s are mutually orthogonal. By employing Tropp’s argument [21, Theorem 2.3], since the stopped process \((u_{n \wedge \sigma }^{(t)})_n\) is a positive supermartingale, in which \(n \wedge \sigma = \min \{n, \sigma \}\), it follows from Fatou’s lemma that

$$\begin{aligned} 1 \ge \liminf _{n \rightarrow \infty }{\mathbb {E}}\left( u_{n \wedge \sigma }^{(t)} \right) \ge \liminf _{n \rightarrow \infty }{\mathbb {E}}\left( u_{n \wedge \sigma }^{(t)} \chi _{\{\sigma< \infty \}} \right)\ge & {} {\mathbb {E}}\left( \liminf _{n \rightarrow \infty }u_{n \wedge \sigma }^{(t)}\chi _{\{\sigma< \infty \}} \right) \\= & {} {\mathbb {E}}\left( u_\sigma ^{(t)} \chi _{\{\sigma < \infty \}}\right) . \end{aligned}$$

Since \(u_\sigma ^{(t)} \ge e^{tc - (e^t - 1 - t)h}\) on \(A = \{\sigma < \infty \}\), we conclude that \({{\mathrm{Prob}}}(A) \le \inf _{t>0}\left\{ e^{-tc + (e^t - 1 - t)h}\right\} \). It is easy to verify that the infimum of \(e^{-tc + (e^t - 1 - t)h}\) is attained at \(t_0 = \log \left( \frac{c + h}{h}\right) \), which gives

$$\begin{aligned} \inf _{t > 0} e^{-tc + (e^t - 1 - t)h} = e^{-t_0c + (e^{t_0} - 1 - t_0)h} = \left( \frac{h}{c + h}\right) ^{c + h} e^c. \end{aligned}$$
(2.10)

Thus, the desired bound is obtained. \(\square \)

If we assume that the martingale \((x_n)_{n \ge 0}\) satisfies an extra condition, we are able to provide a noncommutative Freedman-type inequality as follows.

Theorem 2.8

Let c and h be two positive numbers, and suppose that the inequality \(x_n \le \varphi (c/h) z_n\) holds for all \(n \ge 1\), where \(\varphi \) is a real-valued function satisfying \(\varphi (s) \le \frac{s}{\log (1 + s)} - 1\,\,(s >0)\). Then there exists a sequence \((p_n)_{n\ge 1}\) of mutually orthogonal projections such that

$$\begin{aligned} \sup _{n \ge 1} \frac{\tau \left( \mathbf{1 }_{[c, \infty )} (x_n) \wedge \mathbf{1 }_{[0, h]} (z_n) \right) }{2^{n-1}} \, \le \, \left\| \sum _{n=1}^\infty p_n \right\| _1 \le \left( \frac{h}{c + h}\right) ^{c + h} e^c. \end{aligned}$$
(2.11)

Moreover, if there exists \(n \in {\mathbb {N}}\) such that \(\mathbf{1 }_{[c, \infty )} (x_n) \wedge \mathbf{1 }_{[0, h]} (z_n)\) is a nonzero projection, then \(\sum \nolimits _{n=1}^\infty p_n\) can be taken to be nonzero.

Proof

It is enough to assume that \(\varphi (s) = \frac{s}{\log (1 + s)} - 1\,\,(s >0)\). As in the proof of Lemma 2.6 (and with its notation), we have \(p_nx_np_n \ge cp_n\) and \(p_nz_np_n \le hp_n\), and consequently, for any positive real number t,

$$\begin{aligned} tcp_n - (e^{t} - 1 - t)hp_n \le p_n\left( tx_n - (e^{t} - 1 - t)z_n\right) p_n. \end{aligned}$$

Applying the trace Jensen inequality (2.1) we obtain

$$\begin{aligned} \sum _{n=1}^m e^{tc - (e^{t} - 1 - t)h} \tau (p_n) \le \sum _{n=1}^m \tau (p_nu_n^{(t)}p_n) \end{aligned}$$

for any \(m \ge 1\), whence

$$\begin{aligned} \sum _{n=1}^m \tau (p_n) \le e^{(e^{t} - 1 - t)h - tc} \sum _{n=1}^m \tau (p_nu_n^{(t)}p_n) \quad (m \ge 1) \end{aligned}$$
(2.12)

for any \(t > 0\). Now, if we show that \(\sum \nolimits _{n=1}^m \tau (p_nu_n^{(t_0)}p_n) \le 1\), where \(t_0 = \log \left( \frac{c + h}{h}\right) \), then the right hand side of the desired inequality (2.11) follows from inequalities (2.12) and (2.10). Note that the assumption yields \(\log (1 + \frac{c}{h})x_n \le (\frac{c}{h} - \log (1 + \frac{c}{h}))z_n\), which implies that \(t_0x_n \le (e^{t_0} - 1 - t_0) z_n\) and hence \(u_n^{(t_0)} \le e^0 = 1\) for all n. Thus \(\sum \nolimits _{n=1}^m \tau (p_nu_n^{(t_0)}p_n) \le \sum \nolimits _{n=1}^m \tau (p_n) = \tau \left( \bigvee \nolimits _{n=1}^m p_n \right) \le 1 ~ (m \ge 1)\), as desired.

The left hand side of (2.11) can be shown by a similar argument as in Lemma 2.6. \(\square \)

In order to find a more appropriate noncommutative version of the Freedman inequality, the main difficulty that one encounters is the lack of a noncommutative analogue of stopped martingales, on which both Freedman’s original proof and Tropp’s approach are based. Under some mild conditions (see Example 2.10 for the supermartingale condition), we overcome this problem as follows.

Proposition 2.9

For every nonnegative number c and every positive number h, if \((u_n^{(t_0)})_{n\ge 0}\) is a supermartingale in \({\mathfrak {M}}\), where \(t_0 = \log \left( \frac{c + h}{h}\right) \), then there exists a sequence \((p_n)_{n\ge 1}\) of mutually orthogonal projections such that

$$\begin{aligned} \sup _{n \ge 1} \frac{\tau \left( \mathbf{1 }_{[c, \infty )} (x_n) \wedge \mathbf{1 }_{[0, h]} (z_n) \right) }{2^{n-1}} \, \le \, \left\| \sum _{n=1}^\infty p_n \right\| _1 \le \left( \frac{h}{c + h}\right) ^{c + h} e^c. \end{aligned}$$
(2.13)

Moreover, if there exists \(n \in {\mathbb {N}}\) such that \(\mathbf{1 }_{[c, \infty )} (x_n) \wedge \mathbf{1 }_{[0, h]} (z_n)\) is a nonzero projection, then \(\sum \nolimits _{n=1}^\infty p_n\) can be taken to be nonzero.

Proof

Using the notation introduced in Lemma 2.6 and the same argument, we have

$$\begin{aligned} \sum _{k=0}^N e^{t_0c - (e^{t_0} - 1 - t_0)h} \tau (p_k) \le \sum _{k=0}^N \tau (p_ku_k^{(t_0)}p_k) \end{aligned}$$
(2.14)

for any natural number N.

We claim that \(\sum \nolimits _{k=1}^\infty \tau (p_ku_k^{(t_0)}p_k) \le 1\); the right hand side of the desired inequality (2.13) then follows from inequalities (2.10) and (2.14). To prove the claim, set

$$\begin{aligned} \lambda _n := \sum _{k=0}^n \tau (p_k u_k^{(t_0)} p_k) + \tau \left( \left( 1 - \sum _{k=0}^n p_k \right) u_n^{(t_0)} \left( 1 - \sum _{k=0}^n p_k \right) \right) . \end{aligned}$$

Evidently, the operators

$$\begin{aligned} p_k u_k^{(t_0)} p_k \qquad \text {and} \qquad \left( 1 - \sum _{k=0}^n p_k \right) u_n^{(t_0)} \left( 1 - \sum _{k=0}^n p_k \right) \end{aligned}$$

belong to \(L^1({\mathfrak {M}})\) for all k, and

$$\begin{aligned} \sum _{k=0}^n \tau (p_ku_k^{(t_0)}p_k) \le \lambda _n. \end{aligned}$$

We intend to establish that \((\lambda _n)_{n \ge 0}\) is a decreasing sequence. From the mutual orthogonality of the \(p_n\)'s, it follows that

$$\begin{aligned} \lambda _{n+1} - \lambda _n&= \tau (p_{n+1} u_{n+1}^{(t_0)} p_{n+1}) + \tau \left( \left( 1 - \sum _{k=0}^n p_k \right) \left( u_{n+1}^{(t_0)} - u_n^{(t_0)}\right) \left( 1 - \sum _{k=0}^n p_k \right) \right) \\&\quad - \tau (p_{n+1} u_{n+1}^{(t_0)} p_{n+1}) \\&= \tau \left( \left( 1 - \sum _{k=0}^n p_k \right) \left( u_{n+1}^{(t_0)} - u_n^{(t_0)}\right) \left( 1 - \sum _{k=0}^n p_k \right) \right) \\&= \tau \left( {\mathcal {E}}_n \left[ \left( 1 - \sum _{k=0}^n p_k \right) \left( u_{n+1}^{(t_0)} - u_n^{(t_0)}\right) \left( 1 - \sum _{k=0}^n p_k \right) \right] \right) \\&= \tau \left( \left( 1 - \sum _{k=0}^n p_k \right) {\mathcal {E}}_n \left( u_{n+1}^{(t_0)} - u_n^{(t_0)}\right) \left( 1 - \sum _{k=0}^n p_k \right) \right) \le 0, \end{aligned}$$

where the last inequality follows since \((u_n^{(t_0)})_{n \ge 0}\) is a supermartingale. Consequently,

$$\begin{aligned} \sum _{n=0}^\infty \tau (p_nu_n^{(t_0)}p_n) \le \lim _{n \rightarrow \infty } \lambda _n \le \lambda _0 = \tau (u_0^{(t_0)}) = 1. \end{aligned}$$

The left hand side of (2.13) can be observed by the same argument as in Theorem 2.8. \(\square \)

In what follows, we present an example satisfying the hypotheses of the assertions of this section, in particular the assumption that \((u_n^{(t)})\) is a supermartingale. The numerical computations are carried out with MATLAB; they serve as an illustration, not a proof.

Example 2.10

Let \({\mathfrak {M}}= {\mathbb {M}}_2({\mathbb {C}})\) be the von Neumann algebra of all \(2\times 2\) matrices with entries in \({\mathbb {C}}\) with the identity \(I_2\) and equipped with the normalized trace \(\tau := {\frac{1}{2} {{\mathrm{tr}}}}\). Let \({\mathfrak {N}}\) stand for the subalgebra of diagonal matrices and

$$\begin{aligned} {\mathcal {E}}_{{\mathfrak {N}}}\left( \begin{pmatrix} a &{} b\\ c &{} d \end{pmatrix}\right) = \begin{pmatrix} a &{} 0\\ 0 &{} d \end{pmatrix}\,. \end{aligned}$$

Let us consider the filtration \(({\mathfrak {M}}_n, {\mathcal {E}}_n)_{n \ge 1}\) such that

$$\begin{aligned}&{\mathfrak {M}}_0 = {\mathbb {C}}I_2,\,\, {\mathcal {E}}_0(x) =\tau (x)I_2, \,\, {\mathfrak {M}}_1 = {\mathfrak {N}},\,\, {\mathcal {E}}_1 = {\mathcal {E}}_{{\mathfrak {N}}}, \,\,\text {and}\\&{\mathfrak {M}}_n = {\mathfrak {M}},\,\, {\mathcal {E}}_n = {{\mathrm{id}}}_{{\mathfrak {M}}} \,\, (n \ge 2). \end{aligned}$$

If we set

$$\begin{aligned} x_0 :=0, ~ x_1 := \begin{pmatrix} 1 &{} 0\\ 0 &{} -1 \end{pmatrix}, ~ x_2 := \begin{pmatrix} 1 &{} i\\ -i &{} -1 \end{pmatrix}, \quad \text {and} \quad x_n := x_2 \quad \text {for every }n \ge 2, \end{aligned}$$

then clearly \((x_n)_{n \ge 0}\) is a self-adjoint martingale and \(x_1x_2\ne x_2x_1\). In addition,

$$\begin{aligned} d_1= \begin{pmatrix} 1 &{} 0\\ 0 &{} -1 \end{pmatrix} \le 1, \,\, d_2 = \begin{pmatrix} 0 &{} i\\ -i &{} 0 \end{pmatrix} \le 1,\,\, \text {and~} d_n = 0 \le 1 ~~ (n \ge 3), \end{aligned}$$

is the corresponding difference sequence. Moreover,

$$\begin{aligned}&z_0 = 0,\\&z_1 = {\mathcal {E}}_0(d_1^2) = \tau \left( \begin{pmatrix} 1 &{} 0\\ 0 &{} 1 \end{pmatrix}\right) I_2 = I_2\\&z_2 = {\mathcal {E}}_0(d_1^2) + {\mathcal {E}}_1(d_2^2) = \begin{pmatrix} 2 &{} 0\\ 0 &{} 2 \end{pmatrix}\\&z_n = z_2\,\, (n \ge 3). \end{aligned}$$

Assuming \(t=2\) and setting \(\lambda = e^2 - 3\), where e denotes Euler’s number, one can check that

$$\begin{aligned}&u_0^{(2)} = \exp \{2x_0 - \lambda z_0 \} = \begin{pmatrix} 1 &{} 0\\ 0 &{} 1 \end{pmatrix},\\&u_1^{(2)} = \exp \{2x_1 - \lambda z_1 \} = \begin{pmatrix} e^{5 - e^2} &{} 0\\ 0 &{} e^{1 - e^2} \end{pmatrix} \simeq \begin{pmatrix} 0.0917 &{} 0\\ 0 &{} 0.0017 \end{pmatrix}\\&u_2^{(2)} = \exp \{ 2x_2 - \lambda z_2\} = \exp \begin{pmatrix} 8 - 2e^2 &{} 2i\\ -2i &{} 4 - 2e^2 \end{pmatrix} \simeq \begin{pmatrix} 0.0022 &{} 0.0009i\\ -0.0009i &{} 0.0004 \end{pmatrix}\\&u_n^{(2)} = u_2^{(2)} \quad (n \ge 3). \end{aligned}$$

The exponential matrices above were computed in MATLAB. In order to show that \((u_n^{(2)})_{n \ge 0}\) is a supermartingale, it is enough to verify the inequalities \({\mathcal {E}}_0(u_1^{(2)}) \le u_0^{(2)}\) and \({\mathcal {E}}_1 \left( u_2^{(2)}\right) \le u_1^{(2)}\). We have

$$\begin{aligned}&{\mathcal {E}}_0(u_1^{(2)}) \simeq 0.0467\,I_2\le u_0^{(2)} \\ \text {and}&{\mathcal {E}}_1 \left( u_2^{(2)}\right) \simeq \begin{pmatrix} 0.0022 &{} 0\\ 0 &{} 0.0004 \end{pmatrix} \le \begin{pmatrix} 0.0917 &{} 0\\ 0 &{} 0.0017 \end{pmatrix}\simeq u_1^{(2)}. \end{aligned}$$
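
These computations can also be reproduced, for instance, with the following Python/SciPy sketch (our own re-computation of the same matrices).

```python
import numpy as np
from scipy.linalg import expm

lam = np.exp(2) - 3                            # lambda = e^2 - 3
x1 = np.array([[1, 0], [0, -1]], dtype=complex)
x2 = np.array([[1, 1j], [-1j, -1]], dtype=complex)
z1, z2 = np.eye(2), 2 * np.eye(2)

u0 = np.eye(2)
u1 = expm(2 * x1 - lam * z1)                   # diag(e^{5-e^2}, e^{1-e^2})
u2 = expm(2 * x2 - lam * z2)

tau = lambda a: np.trace(a).real / 2           # normalized trace
E0 = lambda a: tau(a) * np.eye(2)              # E_0(a) = tau(a) I_2
E1 = lambda a: np.diag(np.diag(a))             # conditional expectation onto diagonals

def leq(a, b, tol=1e-12):                      # test a <= b in the operator order
    return bool(np.all(np.linalg.eigvalsh(b - a) >= -tol))

print(np.round(u1, 4))                         # approx diag(0.0917, 0.0017)
print(np.round(u2, 4))                         # approx [[0.0022, 0.0009i], [-0.0009i, 0.0004]]
print(leq(E0(u1), u0), leq(E1(u2), u1))        # supermartingale checks: True True
```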

3 Applications

In this section, we provide a noncommutative Bernstein-type inequality for bounded operators under a mild condition. As defined in [12], a sequence \(x_1, x_2, \ldots , x_n\) is said to be successively independent if \(\tau (ab) = \tau (a)\tau (b)\) for every \(a \in {\mathfrak {N}}(x_j)\) and \(b \in {\mathfrak {N}}(x_1, x_2, \ldots , x_{j-1})\) \(~~ (1 < j \le n)\), where \({\mathfrak {N}}(A)\) denotes the von Neumann algebra generated by \(A \subseteq {\mathfrak {M}}\). Note that, in this case, if \({\mathcal {E}}_{j-1}\) denotes the conditional expectation of \({\mathfrak {M}}\) with respect to \({\mathfrak {N}}(x_1, x_2, \ldots , x_{j-1})\), then

$$\begin{aligned} {\mathcal {E}}_{j-1}(a) = \tau (a) \end{aligned}$$
(3.1)

for any \(a \in {\mathfrak {N}}(x_j)\). Indeed, if \(b \in {\mathfrak {N}}(x_1, x_2, \ldots , x_{j-1})\), then

$$\begin{aligned} \tau \left( {\mathcal {E}}_{j-1}(a) b \right) = \tau \left( {\mathcal {E}}_{j-1}(ab) \right) = \tau \left( ab \right) = \tau \left( a \right) \tau (b) = \tau \left( \tau \left( a \right) b \right) . \end{aligned}$$

Proposition 3.1

Let \(d_1, d_2, \ldots , d_n \in {\mathfrak {M}}\) be self-adjoint and successively independent such that

(i) \(\tau (d_j)=0\),

(ii) \(\tau (d_j^2)\le \sigma ^2\),

(iii) \(d_j ^2\le 1\),

for some \(\sigma >0\) and all \(1\le j\le n\). Let \(c > 0\) be a real number. With \(x_k := \sum \nolimits _{j=1}^k d_j\) and \(\alpha _k := \left( \frac{c}{n\sigma ^2} - \log (1 + \frac{c}{n\sigma ^2})\right) \sum \nolimits _{j=1}^k \tau (d_j^2)\), if \(x_k \le \frac{1}{\log (1 + \frac{c}{n\sigma ^2})} \alpha _k\) for all \(1 \le k \le n\), then there exists a projection p in \({\mathfrak {M}}\) such that

$$\begin{aligned} \max _{1 \le k \le n} \frac{\tau \left( \mathbf{1 }_{[c, \infty )} (x_k) \right) }{2^{k-1}} \, \le \, \tau (p) \le \exp \left( -\frac{c^2}{2n\sigma ^2 + 2c}\right) . \end{aligned}$$

Moreover, if \(\mathbf{1 }_{[c, \infty )} (x_k)\) is a nonzero projection for some \(1 \le k \le n\), then p can be taken to be nonzero.

Proof

Set \(x_0:= 0\) and \(x_j := \sum \nolimits _{k=1}^j d_k\). Then \({\mathfrak {N}}(d_1, \ldots , d_j) = {\mathfrak {N}}(x_0, x_1, \ldots , x_j)\), and \((x_j)_{0 \le j \le n}\) is a martingale with respect to \(\left( {\mathfrak {N}}(x_0, x_1, \ldots , x_j), {\mathcal {E}}_j \right) _{0 \le j \le n}\). In fact, by (3.1) (applied to \(a = d_j \in {\mathfrak {N}}(d_j)\)) and assumption (i), we have

$$\begin{aligned} {\mathcal {E}}_{j-1}(x_j) = x_{j-1} + {\mathcal {E}}_{j-1}(d_j) = x_{j-1} + \tau (d_j) = x_{j-1}. \end{aligned}$$

Again from (3.1), we obtain

$$\begin{aligned} z_n = \sum _{k=1}^n {\mathcal {E}}_{k-1}(d_k^2) = \sum _{k=1}^n \tau (d_k^2) \le n \sigma ^2, \end{aligned}$$

and so, by taking \(h = n\sigma ^2\), we get \(\mathbf{1 }_{[0, h]} (z_n) = 1\). If \(x_k \le \frac{1}{\log (1 + \frac{c}{n\sigma ^2})} \alpha _k ~ (1 \le k \le n)\), then the desired inequalities can be deduced from Theorem 2.8 by putting \(p:= \sum \nolimits _{k=1}^n p_k\) and noting that, by elementary calculus,

$$\begin{aligned} \left( \frac{n\sigma ^2}{c + n\sigma ^2}\right) ^{c + n\sigma ^2} e^c \le \exp \left( -\frac{c^2}{2n\sigma ^2 + 2c}\right) . \end{aligned}$$
(3.2)

\(\square \)
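
The elementary estimate (3.2), namely that the Freedman bound with \(h = n\sigma ^2\) is dominated by the Bernstein-type bound, can be checked numerically over a grid of parameters (our own verification sketch):

```python
import numpy as np

def freedman_bound(c, h):
    return (h / (c + h)) ** (c + h) * np.exp(c)

def bernstein_bound(c, h):            # right-hand side of (3.2), with h = n * sigma^2
    return np.exp(-c**2 / (2 * h + 2 * c))

C, H = np.meshgrid(np.linspace(0.01, 50, 500), np.linspace(0.01, 50, 500))
print(bool(np.all(freedman_bound(C, H) <= bernstein_bound(C, H) + 1e-12)))   # True
```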

Remark 3.2

Notice that, under the same assumptions as in Proposition 3.1, if \(x_j = \sum \nolimits _{k=1}^j d_k\) is positive for all \(1 \le j \le n\), then one can check that the projection p satisfies the following Cuculescu weak type (1, 1) inequality [6]:

$$\begin{aligned} \tau (p) \le \frac{1}{c} \sup _{1 \le j \le n} \Vert x_j\Vert _1. \end{aligned}$$

To be more precise,

$$\begin{aligned} \tau (x_n)\ge & {} \sum _{k=1}^n \tau (p_k x_n) ~~~\qquad \quad \left( \text {as }\sum _{k=1}^n p_k \le 1,\text { and }x_n\text { is positive}\right) \\= & {} \sum _{k=1}^n \tau ({\mathcal {E}}_k(p_k x_n)) \qquad (\text {since }{\mathcal {E}}_k\text { is trace-preserving})\\= & {} \sum _{k=1}^n \tau (p_k x_k) \qquad \qquad (\text {as }(x_j)_j\text { is a martingale})\\\ge & {} c\sum _{k=1}^n \tau (p_k). ~\qquad \qquad (\text {by}~ (2.5)) \end{aligned}$$

Hence, we get the desired inequality; see also [20].

In the classical set-up the projection p in Proposition 3.1 corresponds to the characteristic function of the subset \(\{ \max \nolimits _{1 \le j \le n} X_j \ge c\}\). Hence, we arrive at the following Bernstein-type inequality for classical random variables (see [14, 7.5]).

Corollary 3.3

Let \(\Delta _1, \Delta _2, \ldots , \Delta _n\) be independent real-valued mean-zero random variables such that \(|\Delta _j| \le 1\) for all j, and set \(\sigma ^2 = \frac{1}{n} \sum \nolimits _{j=1}^n {{\mathrm{Var}}}(\Delta _j)\). Then for every \(c \ge 0\), it follows that

$$\begin{aligned} {{\mathrm{Prob}}}\left( \sum _{k=1}^n\Delta _k \ge c \right) \le {{\mathrm{Prob}}}\left( \max _{1 \le j \le n} \sum _{k=1}^j\Delta _k \ge c \right) \le \exp \left( -\frac{c^2}{2n\sigma ^2 + 2c}\right) . \end{aligned}$$
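
A Monte Carlo illustration of Corollary 3.3 (our own sketch; the distribution of the \(\Delta _j\) and all parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, c, trials = 50, 8.0, 100_000
deltas = rng.uniform(-1.0, 1.0, size=(trials, n))   # independent, mean zero, |Delta_j| <= 1
sigma2 = 1.0 / 3.0                                   # variance of Uniform[-1, 1]

empirical = np.mean(np.max(np.cumsum(deltas, axis=1), axis=1) >= c)
bound = np.exp(-c**2 / (2 * n * sigma2 + 2 * c))
print(empirical, "<=", bound)                        # roughly 0.05 <= 0.27
```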

The law of the iterated logarithm for martingales can be deduced from the Freedman inequality; see [8, Theorem 6.1]. In the next corollary, we obtain a result corresponding to the law of the iterated logarithm due to Levy; see [11].

Corollary 3.4

Suppose that \(\lambda >e^e\). If the sequence \((p_n^{c_k, h_k})_{n\ge 1}\) is defined as in the proof of Lemma 2.6, then

$$\begin{aligned} \left\| \sum _{k=1}^\infty \sum _{n=1}^\infty p_n^{c_k, h_k} \right\| _1 < \infty , \end{aligned}$$
(3.3)

where \(c_k = \lambda ^2 \sqrt{ 2\lambda ^{k} \log \log \lambda ^k}\) and \(h_k = \lambda ^{k+1}\), and the convergence is in the \(L^1\)-norm.

Proof

It follows from the noncommutative Freedman inequality (Theorem 2.8) that

$$\begin{aligned} \left\| \sum _{n=1}^\infty p_n^{c_k, h_k}\right\| _1 \le \left( \frac{h_k}{c_k + h_k}\right) ^{c_k + h_k} e^{c_k} \le \exp \left\{ \frac{-c_k^2}{2c_k + 2h_k}\right\} , \end{aligned}$$

in which the last inequality follows from (3.2). Summing over k, we arrive at

$$\begin{aligned} \left\| \sum _{k=1}^m \sum _{n=1}^\infty p_n^{c_k, h_k} \right\| _1 = \sum _{k=1}^m \left\| \sum _{n=1}^\infty p_n^{c_k, h_k}\right\| _1 \le \sum _{k=1}^m \exp \left\{ \frac{-c_k^2}{2c_k + 2h_k}\right\} . \end{aligned}$$

There exists some \(k_0\) such that for all \(k>k_0\), we have \(c_k + h_k \le \lambda ^{k+3}\). Hence

$$\begin{aligned} \sum _{k=k_0}^\infty \exp \left\{ \frac{-c_k^2}{2c_k + 2h_k}\right\} \le \sum _{k=k_0}^\infty \exp \left\{ \frac{-2\lambda ^4 \lambda ^k \log \log \lambda ^k}{2\lambda ^{k+3}}\right\} \le \sum _{k=k_0}^\infty \frac{1}{k^\lambda }. \end{aligned}$$

The last series is a p-series with exponent \(\lambda > 1\), hence convergent. Therefore, the desired inequality follows by letting m tend to infinity. \(\square \)
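
For concreteness, the exponents appearing in the bounding series can be computed numerically; the sketch below (our own illustration, with an arbitrary choice of \(\lambda > e^e\)) confirms that they dominate \(\lambda \log k\), so that the terms are summable.

```python
import numpy as np

lam = 16.0                                         # any lambda > e^e (about 15.15)
k = np.arange(1, 40, dtype=float)
c_k = lam**2 * np.sqrt(2 * lam**k * np.log(np.log(lam**k)))
h_k = lam**(k + 1)

exponents = c_k**2 / (2 * c_k + 2 * h_k)           # the k-th term is exp(-exponents[k])
print(exponents[:4])                               # grows rapidly with k
print(bool(np.all(exponents >= lam * np.log(k))))  # True: term_k <= k^{-lambda}, a convergent series
```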

In the commutative setting, inequality (3.3) means

$$\begin{aligned} \sum _{k=1}^\infty {{\mathrm{Prob}}}\left( X_n\ge c_k ~\text {and}~ Z_n \le h_k ~{{\mathrm{for~some}}}~ n\ge 0 \right) < \infty . \end{aligned}$$

The Borel–Cantelli lemma therefore ensures that

$$\begin{aligned} {{\mathrm{Prob}}}\left( \limsup _k E_k \right) = 0, \end{aligned}$$

where \(E_k = \left\{ X_n\ge c_k ~\text {and}~ Z_n \le h_k {{\mathrm{~for~some~}}} n\ge 0 \right\} \) and then, as shown in [8], it also entails that if \(\phi (t) = \sqrt{\max \{2t \log \log t, 1 \}}\), then

$$\begin{aligned}\limsup _n \frac{X_n}{\phi (Z_n)} \le 1\end{aligned}$$

on \(\{\sum _{n=1}^\infty Z_n = \infty \}\). Note that the events \(E_k\) need not be disjoint in k.