Abstract
According to Tsallis’ seminal book on complex systems, “an entropy of a system is extensive if, for a large number n of its elements, the entropy is (asymptotically) proportional to n”. Depending on whether the focus is on the system or on the entropy, one says that an entropy is extensive for a given system, or that a system is extensive for a given entropy. Yet, exhibiting the right classes of random sequences that are extensive for the right entropy is far from trivial, and is mostly a new area for generalized entropies. This paper aims at giving some examples or classes of random walks that are extensive for Tsallis entropy.
1 Phi-Entropy Functionals and Extensivity
In classical information theory, sources, identified with random sequences, are assumed to be ergodic or stationary. For such sources, the Asymptotic Equipartition Property (AEP) holds, stating that Shannon entropy asymptotically increases linearly with the number of elements of the source, a consequence of the strong additivity of Shannon entropy; see [3] for precise statements of AEPs for various types of sources. For more complex, non-ergodic systems, these asymptotics can be highly non-linear, calling for the investigation of alternative behaviors or the consideration of other entropy functionals.
The \(\varphi \)-entropy functionals (also called trace entropies) have now been widely used and studied in numerous scientific fields. The \(\varphi \)-entropy of a random variable X with finite or countable state space E and distribution \(P_X\) is defined as \(\mathbb {S}_\varphi (X) = \sum _{x \in E} \varphi (P_X(x))\), with \(\varphi \) some smooth function. Classical examples include Shannon with \(\varphi (x) = -x \log (x)\), Taneja with \(\varphi (x) = -x^s\log (x)\), and Tsallis with \(\mathbb {T}_s(X) = \frac{1}{s-1} [1 - \varLambda (X;s)], \) where \( \varLambda (X;s) = \sum _{x \in E} P_X(x)^s \) is the so-called Dirichlet series associated to X. Here, we will focus on Tsallis entropy, and suppose that \(s >0.\)
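As a quick numerical illustration (the Python helpers below are ours, not part of the formal development), these functionals are straightforward to evaluate for a finite distribution:

```python
import math

def dirichlet_series(p, s):
    """Dirichlet series Lambda(X;s) = sum_x P_X(x)^s of a distribution p."""
    return sum(px ** s for px in p if px > 0)

def tsallis(p, s):
    """Tsallis entropy T_s(X) = (1 - Lambda(X;s)) / (s - 1)."""
    return (1.0 - dirichlet_series(p, s)) / (s - 1.0)

def shannon(p):
    """Shannon entropy, phi(x) = -x log x (natural logarithm)."""
    return -sum(px * math.log(px) for px in p if px > 0)

uniform4 = [0.25] * 4
print(dirichlet_series(uniform4, 2.0))  # 4 * 0.25^2 = 0.25
print(tsallis(uniform4, 2.0))           # (1 - 0.25) / (2 - 1) = 0.75
print(shannon(uniform4))                # log 4
```

Note that \(\varLambda (X;1) = 1\) for every distribution, so Tsallis entropy vanishes at \(s=1\) in the limiting sense and reduces to Shannon entropy there.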
Extensivity of a complex system is introduced in [10] as follows: “an entropy of a system is extensive if, for a large number n of its elements (probabilistically independent or not), the entropy is (asymptotically) proportional to n”.
Precisely, a \(\varphi \)-entropy is extensive for a random sequence \(\textbf{X} = (X_n)_{n \in \mathbb {N}^*}\), with \(X_{1:n} = (X_1, \dots , X_n)\), if some \(c>0\) exists such that \(\mathbb {S}_\varphi (X_{1:n}) \sim _{n \rightarrow \infty } c\, n;\) the constant c is the \(\varphi \)-entropy rate of the sequence. Intuitively, all variables contribute equally to the global information of the sequence, an appealing property in connection with the AEP in the theory of stochastic processes and complex systems; see, e.g., [2]. Extensivity is a two-way relationship of compatibility between an entropy functional and a complex system: the entropy is extensive for a given system, or the system is extensive for a given entropy, depending on whether the focus is on the system or on the entropy. Yet, exhibiting the right class of random sequences that are extensive for the right entropy is far from trivial, and is mostly a new area for generalized entropies. This paper, as a first step, aims at giving some examples or classes of random walks that are extensive for Tsallis entropy, which is widely used in complex systems theory; see [10].
For ergodic systems, Shannon entropy is well known to be extensive while Tsallis entropy is non-extensive; see, e.g., [7]. More generally, [4] establishes that Shannon entropy is the unique extensive \(\varphi \)-entropy for a large class of random sequences called quasi-power (QP) sequences (see the definition given by (2) below), among the class of the so-called quasi-power-log (QPL) entropies introduced in [1], satisfying
$$ \varphi (x) \mathop {\sim }_{0} a\, x^s (\log x)^\delta + b, \qquad (1) $$
for some \(a,b \in \mathbb {R}\), \(s >0\), \(\delta \in \{0,1\}\). QPL entropies are considered in [9, Eq. (6.60), p. 356] and [1] as the simplest expression of generalized entropies for studying the asymptotic behavior of entropy for random sequences, on which the present paper focuses. Indeed, the asymptotic behavior of the marginal QPL entropy of a random sequence is closely linked to the behavior of its Dirichlet series, characterized for QP sequences by the quasi-power property
$$ \varLambda (X_{1:n};s) = c(s)\, \lambda (s)^{n} + R_n(s), \qquad s > \sigma _0, \qquad (2) $$
where \(0< \sigma _0<1\), c and \(\lambda \) are strictly positive analytic functions, \(\lambda \) is strictly decreasing and \(\lambda (1)=c(1)=1\), and \(R_n\) is an analytic function such that \( |R_n(s)|=O (\rho (s)^{n}\lambda (s)^{n} )\) for some \(\rho (s) \in ]0,1[\). Thanks to the Perron–Frobenius theorem, the QP property is satisfied by ergodic Markov chains, including independent and identically distributed (i.i.d.) sequences. It is also satisfied by a large variety of dynamical systems, including continued fraction expansions; see [11].
In another perspective on the characterization of the asymptotic behavior of entropy, [9] studies uniformly distributed systems, in which each \(X_n\) is drawn from a uniform distribution on a state space that may depend on n; see also [5] and the references therein. The entropies are classified according to the two parameters \(0<c\le 1\) and \(d\in \mathbb {R}\) given by
$$ \frac{1}{1-c} = \lim _{n \rightarrow \infty } \frac{n\, \varOmega '(n)}{\varOmega (n)}, \qquad d = \lim _{n \rightarrow \infty } \log \varOmega (n) \left( \frac{\varOmega (n)}{n\, \varOmega '(n)} + c - 1\right) , \qquad (3) $$
depending only on the asymptotics of the size \(\varOmega (n) = |E_{1:n}|\) of the state space \(E_{1:n}\) of \(X_{1:n}\). In the context of [5], the asymptotic behavior of \(\varOmega (n)\) is assumed to be a smooth expression of n, e.g., \(n^\beta \) with \(\beta >0\); then, \(\varOmega '(n)\) denotes the derivative of this expression at n. The asymptotic classification includes QPL entropies plus Quasi-Power Exponential (QPE) entropies, given by \( \varphi (x) \mathop {\sim }_0 a x^s \exp (-\gamma x) +b, \) with \(a,b \in \mathbb {R}^*\), \(s\in \mathbb {R}\), and \( \gamma \in \mathbb {R}_+^*\), which are all asymptotically equivalent to the Tsallis one. Linear combinations of such cases may also be considered, but they are asymptotically dominated by one of the terms. Therefore, the present paper will focus exclusively on the asymptotic behavior of QPL entropies, for which \((c,d)=(s,\delta ) \) in (1); see [1] and [9, Table 6.2].
All in all, in [5, 9] and the references therein, one can identify the following non-exhaustive types of growth, each tagged with the type of maximum-entropy distribution it leads to:
\(\varOmega (n)\sim n^b\) is power-law leading to \(c=1-1/b, d=0,\) and Tsallis entropy;
\(\varOmega (n)\sim L^{n^{1-\beta }}\) is sub-exponential and leads to \(c=1,\) \( d=1/(1-\beta )\), with \(0<\beta <1\);
\(\varOmega (n)\sim \exp (\ell n)\) is exponential with \(c=d=1, \) and hence is extensive for Shannon entropy;
\(\varOmega (n)\sim \exp (\ell n^g)\) is stretched exponential with \(c=1, d=1/g,\) with \(g>1\), and extensive for QPE entropies, asymptotically equal to Tsallis.
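The limits defining (c, d) in [9] — \(1/(1-c) = \lim _n n\varOmega '(n)/\varOmega (n)\) and \(d = \lim _n \log \varOmega (n)\,\left( \varOmega (n)/(n \varOmega '(n)) + c - 1\right) \), in our transcription — can be checked numerically against the growth types above (helper names and test values are ours):

```python
import math

def c_at(omega, domega, n):
    # 1/(1-c) = n * Omega'(n) / Omega(n), evaluated at a large finite n
    return 1.0 - omega(n) / (n * domega(n))

def d_at(omega, domega, n, c):
    # d = log(Omega(n)) * (Omega(n)/(n Omega'(n)) + c - 1), with c the limit above
    return math.log(omega(n)) * (omega(n) / (n * domega(n)) + c - 1.0)

# power law Omega(n) = n^3: c = 1 - 1/3, d = 0
print(c_at(lambda n: n ** 3, lambda n: 3 * n ** 2, 1e6))
# exponential Omega(n) = exp(2n): c -> 1, d -> 1
print(d_at(lambda n: math.exp(2 * n), lambda n: 2 * math.exp(2 * n), 300.0, 1.0))
# Omega(n) = exp(n^2), i.e. g = 2: c -> 1, d -> 1/g = 1/2
print(d_at(lambda n: math.exp(n ** 2), lambda n: 2 * n * math.exp(n ** 2), 25.0, 1.0))
```

The finite-n evaluations already agree with the tabulated (c, d) pairs to high accuracy.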
The paper aims at showing through examples that various simple systems are extensive for Tsallis entropy, by using the growth rate of both the size of the state space and the behavior of the Dirichlet series. This amounts to using the physics approach in [9] to supplement and clarify the mathematics approach in [1, 4]–and other works along the same lines. The approach developed in [9] focuses on the complex systems and the induced maximum entropy distribution, and involves random sequences only via the size of the state space, while we are here interested in entropy as a function of a random sequence. Indeed, we focus on the random variables, together with their distributions, involved in the – asymptotic – behavior of a system and its entropy, as reflected in the Dirichlet series.
Section 2 begins by considering classical random walks, which are non-extensive for Tsallis entropy but constitute a good starting point for constructing extensive ones. Then some examples of Tsallis-extensive systems are given in the context of complex systems, in terms of restricted or autocorrelated random walks. Still, the conditions on these systems appear difficult to express simply in terms of statistical inference, construction, and simulation of random sequences. Therefore, the framework is broadened in Sect. 3 by considering non-identically distributed increments, that is, delayed random walks. Tuning the marginal distributions of the increments leads to Tsallis-extensive sequences, with explicit probabilistic conditions allowing for the effective construction of such systems. Precisely, the main result, Theorem 1, gives a procedure for building random walks that are Tsallis-extensive, through an opening to non-uniform systems.
2 Random Walks
Let \(\textbf{X} = (X_n)_{n \in \mathbb {N}^*}\) be a sequence of independent random variables such that, for each \(n \in \mathbb {N}^*\), \(X_n\) takes values in a finite or countable subset E(n) of \(\mathbb {Z}^{\mathbb {N}}\). Let \(\textbf{W} = (W_n)_{n \in \mathbb {N}^*}\) be the random walk on \(\mathbb {Z}^{\mathbb {N}}\) associated to the increments \(\textbf{X}\) through \(W_n = \sum _{k = 1}^n X_k\), for \(n \in \mathbb {N}^*\).
We will derive the asymptotic behavior of the classical and extended random walks thanks to the following properties satisfied by the Dirichlet series; see, e.g., [8].
Properties 1
Let X be a discrete random variable taking values in E. Let \(\mathcal {E}= |E|\) denote the number of states, possibly infinite. Let \(s>0\). Then:
1. \(\varLambda (X;1) = 1\), and if X is deterministic, then \(\varLambda (X;s) = 1\) for all s.
2. \( m_X^{1-s} \le \varLambda (X;s) \le \mathcal {E}^{1-s}\) for \(s \in (0;1)\), where \(m_X\) is the number of modes of X.
3. \(s \mapsto \varLambda (X;s)\) is a smooth decreasing function.
4. If \(X_{1},\dots , X_n\) are independent variables, then \(\varLambda (X_{1:n};s) = \prod _{k=1}^n\varLambda (X_k;s)\).
Classical isotropic random walks \((W_n)\) on \(\mathbb {Z}^I\) are associated to sequences \(\textbf{X}\) of i.i.d. random variables with common uniform distribution, say \(X_n \sim \mathcal {U}( E)\), on \(E = \{ \pm e_i, i \in [\![1,I]\!] \}\), with \(I\in \mathbb {N}^*\), where the \(e_i\) are the canonical vectors of \(\mathbb {R}^I\). Property 1.4 and the i.i.d. assumption yield
$$ \varLambda (W_{1:n};s) = \prod _{k=1}^n \varLambda (X_k;s), \qquad (4) $$
that is to say \(\varLambda (W_{1:n};s)= (2I)^{(1-s)n},\) so that \( \mathbb {S}(W_{1:n}) = n \log 2I\) and \( \mathbb {T}_s(W_{1:n}) = \frac{1}{s-1}\left[ 1-(2I)^{(1-s)n} \right] ; \) hence Shannon entropy is extensive while Tsallis entropy grows exponentially.
Note that (4) still holds for non-identically distributed random variables. Clearly, alternative choices for \(\varLambda (X_k;s)\) yield alternative behaviors for \(\varLambda (X_{1:n};s)\) and hence for Tsallis entropy. Let us give two examples, where the state space of \(X_n\) grows with n.
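The product formula can be verified by brute force on a small case (the enumeration below, with \(I=2\) and \(n=3\), is ours): since the map \(X_{1:n} \mapsto W_{1:n}\) is one-to-one, the Dirichlet series of \(W_{1:n}\) can be computed by enumerating all \((2I)^n\) equally likely paths.

```python
import itertools

# isotropic walk on Z^I with I = 2: increments uniform on E = {±e_i}, |E| = 2I
I, n, s = 2, 3, 0.5
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
prob = (2 * I) ** (-n)   # every path X_{1:n} is equally likely
# X_{1:n} <-> W_{1:n} is one-to-one, so this is also Lambda(W_{1:n};s)
lam = sum(prob ** s for _ in itertools.product(steps, repeat=n))
print(lam)                        # 64 * (1/64)^0.5 = 8.0
print((2 * I) ** ((1 - s) * n))   # closed form (2I)^{(1-s)n} = 8.0
```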
Example 1
1. The state space of \(X_n\) is linearly expanding if \(X_n \sim \mathcal {U} (\{ \pm e_i,1\le i \le n \})\), since then \(\mathcal {E}(n) = |E(n)| = 2n\) and \(\varOmega (n)=2^n\, n!\). We compute \(\varLambda (W_{1:n};s) = (2^n\, n!)^{1-s},\) \(\mathbb {S}(W_{1:n}) \sim _{\infty } n\log n,\) and \( \mathbb {T}_s (W_{1:n}) = \frac{1}{s-1}\left[ 1- (2^n\, n!)^{1-s} \right] ,\) making the random walk \(\textbf{W}\) over-extensive for both Shannon and Tsallis entropies.
2. The state space of \(X_n\) is exponentially expanding if \(X_n \sim \mathcal {U}(\{ \pm 1\}^n)\), since then \(\mathcal {E}(n) = 2^n\) and \(\varOmega (n)=2^{n(n+1)/2}\), a stretched exponential growth leading to a QPE entropy with \(c=1,\ d=1/2\), asymptotically equal to Tsallis. We compute \(\varLambda (W_{1:n};s) = 2^{(1-s)n(n+1)/2},\) \(\mathbb {S}(W_{1:n}) \sim _{\infty } (\log 2)\, n^2/2, \) and \(\mathbb {T}_s (W_{1:n}) = \frac{1}{s-1}\left[ 1- 2^{ (1-s)n(n+1)/2} \right] .\)
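The two computations above reduce, via Property 1.4, to products of marginal Dirichlet series \(|E(k)|^{1-s}\); a short numerical check (values and helper names are ours):

```python
import math

def lam_product(cardinals, s):
    # Property 1.4 with uniform increments: Lambda(X_k;s) = |E(k)|^(1-s)
    out = 1.0
    for card in cardinals:
        out *= card ** (1 - s)
    return out

n, s = 6, 0.5
# item 1: |E(k)| = 2k, so Lambda(W_{1:n};s) = (2^n n!)^(1-s)
lam1 = lam_product([2 * k for k in range(1, n + 1)], s)
print(lam1, (2 ** n * math.factorial(n)) ** (1 - s))
# item 2: |E(k)| = 2^k, so Lambda(W_{1:n};s) = 2^((1-s) n(n+1)/2)
lam2 = lam_product([2 ** k for k in range(1, n + 1)], s)
print(lam2, 2.0 ** ((1 - s) * n * (n + 1) / 2))
```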
Both (4) and Example 1 show that the marginal Tsallis entropy of random walks with such inflating state spaces increases at least exponentially fast. Obtaining sequences extensive for Tsallis entropy in this way would require the state spaces to contract, which is impossible. The approach of [5, 6], with either restricted state spaces or autocorrelated random variables, presented next, will pave the way to possible solutions.
The following restricted binary random walks, with \(E=\{0,1\}\), are heuristically described in [5]. If, asymptotically, the proportion of 1s is the same whatever the length of the sequence, then \(W_n/n \) converges to a constant limit \(L\in (0;1)\) and \(\varOmega (n)\) grows exponentially, and hence \(\textbf{W}\) is extensive for Shannon entropy.
If \(W_n \) goes to infinity more slowly than \(L n\), its growth is sub-extensive for Shannon entropy, and over-extensive otherwise. Such behaviors require restricting in some way the number of either 0s or 1s that the system can produce in n steps. For a power-law growth, \(W_n\) converges to a constant \(g>0\) and \(\varOmega (n)\sim n^g\), leading to extensivity for Tsallis entropy with \(s =1-1/g\); see [7]. A rigorous presentation of such a sequence will be obtained in Example 5 below.
Further, autocorrelated random walks (RW) are considered in [6]; see also [7]. Suppose here that \(E(n)=\{-1,1\}\). In the classical symmetric uncorrelated RW, \(\mathbb {P}(X_m=-1)=\mathbb {P}(X_m=1)=1/2\), \(\mathbb {E}\,X_m=0\) and \(\mathbb {E}\,X_nX_{m}=\delta _{nm}\). Then \(\varOmega (n)=2^n\), and hence \((c,d)=(1,1)\) leads to extensivity for Shannon entropy, as seen above. Suppose now that the \(X_n\) are correlated random variables, with \( \mathbb {E}\,X_nX_m= 1\) if \(\alpha n^\gamma (\log n)^\beta <z\le \alpha m^\gamma (\log m)^\beta \) and 0 otherwise, for some fixed integer z and real numbers \(\alpha , \beta , \gamma \). Taking \(\gamma =0\) and \(\beta \ne 0\) leads to extensivity for Tsallis entropy. [6] conjectures that all choices of \((\gamma ,\beta )\) lead to all choices of (c, d).
Instead of autocorrelated RW, the somewhat less artificial (sic, [6]) ageing RW can be considered, with \(X_n=\eta _n X_{n-1}\) where \((\eta _n) \) is a sequence of binary random variables taking values \(\pm 1\); see [6] and [9, Chapter 6]. The ensuing (c, d) depends on the distribution of \(\eta _{n+1}\) conditional on the number of \(0\le m\le n\) such that \(\eta _m=1\). A suitable choice leads, for instance, to the stretched exponential growth and extensivity for a QPE entropy, asymptotically equal to Tsallis.
Applied systems involving Tsallis entropy are given in [5, 9]. For instance, spin systems with a constant network connectivity lead to extensivity for Shannon entropy, while random networks growing with constant connectedness require Tsallis entropy; see [5]. See also [9, p371] for a social network model leading to Tsallis entropy.
Still, both restricted and autocorrelated systems are difficult to express in terms of the behavior, statistical inference, or simulation of random variables. The delayed RW that we propose in Sect. 3 will be more tractable from these perspectives.
3 Delayed Random Walks
A super-diffusive random walk model in porous media is considered in [5]. Each time a direction is drawn, \(\lfloor n^{\beta }\rfloor \) steps occur in this direction before another is drawn, where \(\beta \in [0,1[\) is fixed. More precisely, a first direction \(X_0\) is chosen at random between two possibilities. Then the \(\lfloor 2^{\beta }\rfloor \) following steps are equal to it: \(X_1 = \dots = X_{\lfloor 2^{\beta }\rfloor } = X_0\). At time \(\lfloor 2^{\beta }\rfloor \), again a direction is chosen at random and repeated for the following \(\lfloor 3^{\beta }\rfloor \) steps, and so on. The number of random choices after n steps, of order \(n^{1-\beta }\), decreases in time, and hence \(\varOmega (n)\simeq 2^{n^{1-\beta }}\), and \((c,d)=(1,1/(1-\beta ))\); Shannon entropy is no longer extensive.
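The counting argument can be reproduced by simulation; the sketch below is our reading of the repetition scheme (the direction drawn at step t is repeated for the next \(\lfloor t^{\beta }\rfloor \) steps) and recovers the \(n^{1-\beta }\) order of the number of draws:

```python
def draw_count(n, beta):
    """Number of direction draws up to step n when the direction drawn at
    step t is repeated for the next floor(t^beta) steps."""
    t, draws = 1, 0
    while t <= n:
        draws += 1
        t += max(1, int(t ** beta))
    return draws

beta = 0.5
d_small, d_large = draw_count(10 ** 4, beta), draw_count(10 ** 6, beta)
# of order n^{1-beta}: multiplying n by 100 multiplies the count by about 10
print(d_small, d_large, d_large / d_small)
```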
This example leads to the notion of delayed random walks, which we develop here in order to construct classes of random sequences that are extensive for Tsallis entropy. Precisely, we will say that \(\textbf{W}\) is a delayed random walk (DRW) if, for identified indices \(n \in D \subseteq \mathbb {N}^*\), the behavior of \(W_n\) is deterministic conditionally on \(W_{1:{n-1}}\). In other words, all \(X_n\) are deterministic for these n.
Let us first give three examples where we assume that the random increments \(X_n\), for \(n \in R=\mathbb {N}^*\! \setminus \! D\), are drawn uniformly in a finite set E with cardinality \(\mathcal {E}\).
Example 2
1. A constant delay \(\kappa \in \mathbb {N}^*\) between random steps leads to \(R = \kappa \mathbb {N}^*\) and \(\varOmega (n)=\mathcal {E}^{\lfloor \frac{n}{\kappa } \rfloor }\), an exponential growth leading to Shannon entropy. We compute \( \varLambda (W_{1:n};s) = \mathcal {E}^{\lfloor \frac{n}{\kappa } \rfloor (1-s)}\).
2. A linearly increasing delay, say \(R = \{1+ {n(n+1)}/{2}, n \in \mathbb {N}^* \}\), leads to \( \varOmega (n)=\mathcal {E}^{\left\lfloor {(-1 + \sqrt{1+8n})}/{2} \right\rfloor }\), a stretched exponential growth leading to a QPE entropy, asymptotically equal to Tsallis. We compute \(\varLambda (W_{1:n};s) = \mathcal {E}^{\lfloor {(-1 + \sqrt{1+8n})}/{2} \rfloor (1-s)}\), and \(\mathbb {T}_s (W_{1:n}) = \frac{1}{s-1}\left[ 1- \mathcal {E}^{\lfloor {(-1 + \sqrt{1+8n})}/{2}\rfloor (1-s)} \right] . \)
3. An exponentially increasing delay, say \(R = \{2^n, n \in \mathbb {N}^* \}\), leads to \(\varOmega (n)=\mathcal {E}^{\lfloor \log _2 n \rfloor } \), a power-law growth leading to Tsallis entropy. We compute \(\varLambda (W_{1:n};s) = \mathcal {E}^{\lfloor \log _2 n \rfloor (1-s)},\) and \(\mathbb {T}_s (W_{1:n}) = \frac{1}{s-1}\left[ 1- \mathcal {E}^{\lfloor \log _2 n \rfloor (1-s)} \right] , \) from which we immediately derive that
$$ \frac{1}{s-1}\left( 1-n^{(1-s) \ln (\mathcal {E})/\ln (2)}\, \mathcal {E}^{s-1}\right) < \mathbb {T}_s(W_{1:n}) \le \frac{1}{s-1}\left( 1- n^{(1-s) \ln (\mathcal {E})/\ln (2)}\right) . $$
In other words, \(\mathbb {T}_s(W_{1:n})\) essentially increases as a power of n. For random increments occurring at times of order \(2^{n{(1-s)}}\) instead of \(2^n\) and if \(\mathcal {E}= 2\), we similarly derive that \(W_{1:n}\) is extensive for \(\mathbb {T}_s\); this will be rigorously stated in Example 3 below.
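The closed form of item 3 and the power-law bound above can be checked numerically (helper names and the choices \(\mathcal {E}=3\), \(s=0.5\) are ours):

```python
import math

def lam_drw(n, E, s, random_times):
    # deterministic steps contribute a factor 1 (Property 1.1), so only
    # the random-step times count: Lambda = E^{(1-s) * #{t in R : t <= n}}
    m = sum(1 for t in random_times if t <= n)
    return E ** ((1 - s) * m)

E, s = 3, 0.5
R = [2 ** k for k in range(1, 12)]   # exponentially increasing delays
for n in (10, 100, 1000):
    lam = lam_drw(n, E, s, R)
    closed = E ** (math.floor(math.log2(n)) * (1 - s))
    upper = n ** ((1 - s) * math.log(E) / math.log(2))
    print(lam, closed, lam <= upper)   # closed form and power-law bound
```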
Examples 1 and 2 illustrate how the Dirichlet series of DRW are affected by state space expansion and delays. On the one hand, the Dirichlet series increase with the expansion of the system while on the other hand, the faster the delay lengths increase between random increments, the slower the Dirichlet series and \(\varOmega (n)\) increase. More generally, one can generate–theoretically–any prescribed asymptotic behavior for the Dirichlet series and \(\varOmega (n)\) by suitably balancing between the introduction of delays and the ability to control the Dirichlet series of the random increments.
Precisely, Properties 1.1 and 1.4 yield the following relation between the Dirichlet series of the DRW and the Dirichlet series \(l_n = \varLambda (X_{r_n};s)\) of the increments, where \(r_1< r_2 < \dots \) enumerate the random-step times R:
$$ \varLambda (W_{1:n};s) = \prod _{k \,:\, r_k \le n} l_k. \qquad (5) $$
Let us now exhibit different types of DRW that are either strictly extensive for Tsallis entropy, meaning that \(\lim _n \mathbb {T}_s(W_{1:n})/n\) exists and is finite and non-zero, or weakly extensive, in the sense that both \(\liminf _{n} \frac{1}{n} \mathbb {T}_s(W_{1:n})\) and \( \limsup _{n} \frac{1}{n} \mathbb {T}_s(W_{1:n})\) are finite and non-zero.
Theorem 1
Let \(s \in (0;1)\). Let \((l_n)_{n \in \mathbb {N}^*}\) be a real sequence such that \(l_n > 1\) and \(\left\lfloor \prod _{k=1}^n l_k \right\rfloor \ge n\) for all n. Let \(\textbf{W} = (W_{n})_{n \in \mathbb {N}^*}\) be the DRW associated to increments \((X_n)_{n \in \mathbb {N}^*}\) and delays \(r_n = \max \left\{ \left\lfloor \prod _{k=1}^n l_k \right\rfloor , r_{n-1}+1 \right\} \), where \(l_n = \varLambda (X_{r_n};s)\). Then \(\limsup _{n \rightarrow \infty } \frac{1}{n}\mathbb {T}_s(W_{1:n}) ={1}/{(1-s)}\).
Moreover, if \(l_n\) converges to \(L \ge 1\), then \(\liminf _{n \rightarrow \infty } \frac{1}{n}\mathbb {T}_s(W_{1:n}) = {1}/{(1-s)L}\), and \(\textbf{W}\) is weakly extensive for \(\mathbb {T}_s\). If \(l_n\) converges to \(L=1\), then the extensivity is strict.
Proof
Assume that the sequence \((\left\lfloor \prod _{k=1}^n l_k \right\rfloor )\) is strictly increasing so that \(r_n = \left\lfloor \prod _{k=1}^n l_k \right\rfloor \). Otherwise, simply discard the first components of \(\textbf{W}\) to fit this assumption.
Using (5), \(\varLambda (W_{1:m};s) = \prod _{k \,:\, r_k \le m} l_k\), which is piecewise constant and non-decreasing with respect to m. Its supremum limit is attained along the subsequence \((W_{1:r_n})_{n \in \mathbb {N}^*}\), for which \(\varLambda (W_{1:r_n} ;s) = \prod _{k=1}^n l_k\). Since \(r_n = \left\lfloor \prod _{k=1}^n l_k \right\rfloor \), we have \(r_n \le \varLambda (W_{1:r_n} ;s) \le r_n +1,\) so that \(\mathbb {T}_s(W_{1:r_n}) = \frac{1}{1-s}\left[ \varLambda (W_{1:r_n} ;s) - 1\right] \sim _{n \rightarrow \infty } \frac{r_n}{1-s},\)
and the limsup result holds.
Similarly, as soon as \(l_n\) converges (to \(L\ge 1\)), the infimum limit exists and is obtained along the subsequence \((W_{1:r_{n+1}-1})_{n \in \mathbb {N}^*}\), which finishes the proof. \(\Box \)
Note that Theorem 1 is based on the existence of a random variable X whose Dirichlet series \(\varLambda (X;s)\) takes any prescribed value \(\ell >1\). Thanks to Property 1.2, this can be achieved in various ways, by choosing X in a parametric model with state space E and number of modes \(m_X\) as soon as \(\ell \in (m_X^{1-s}; \mathcal {E}^{1-s})\); see [8]. Tuning the parameters of the distribution leads to specific values for which \(\varLambda (X;s) = \ell \). See Example 4 below for a Bernoulli model, where \(\ell \in (1; 2^{1-s})\).
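For the Bernoulli model, the tuning reduces to one-dimensional root finding, since \(p \mapsto p^s + (1-p)^s\) is increasing on \((0, 1/2]\) for \(s \in (0,1)\); a minimal bisection sketch (ours):

```python
def bernoulli_p_for_dirichlet(ell, s, tol=1e-12):
    """Find p in (0, 1/2] with p^s + (1-p)^s = ell; the left-hand side is
    increasing in p on (0, 1/2] for s in (0,1), with range (1, 2^(1-s))."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** s + (1 - mid) ** s < ell:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

s = 0.5
p = bernoulli_p_for_dirichlet(0.25 ** s + 0.75 ** s, s)
print(p)   # recovers p = 0.25
```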
The following example illustrates how to generate simple random sequences that are weakly extensive for Tsallis entropy by suitably introducing delays. Still, the infimum and supremum limits cannot be equal, hindering strict extensivity.
Example 3
Let \(s \in (0,1)\). Let \(\textbf{W} \) be a DRW with exponential delays of order \(2^{1-s}\), say \( r_n = \max \left\{ \left\lfloor 2^{n(1-s)}\right\rfloor , r_{n-1} +1 \right\} \) for \( n \ge 2,\) with \(r_1 = 1\). Random increments \(X_{r_n}\) are drawn according to a uniform distribution \(\mathcal {U}(\{-1, 1\})\) so that \(l_n = \varLambda (X_{r_n};s) = 2^{1-s}\).
Then, \(\varOmega (n) \sim 2^{\left\lfloor \frac{1}{1-s} \log _2 n \right\rfloor }\), and Theorem 1 yields
$$ \limsup _{n \rightarrow \infty } \frac{1}{n}\mathbb {T}_s(W_{1:n}) = \frac{1}{1-s}, \qquad \liminf _{n \rightarrow \infty } \frac{1}{n}\mathbb {T}_s(W_{1:n}) = \frac{2^{s-1}}{1-s}, $$
so that \(\textbf{W}\) is weakly extensive for \(\mathbb {T}_s\).
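A numerical check of Example 3 (implementation choices, such as \(s=0.5\) and the horizon \(N=4000\), are ours):

```python
import math

s, N = 0.5, 4000
# random-step times r_n = max(floor(2^{n(1-s)}), r_{n-1} + 1), as in Example 3
times, r, n = [], 0, 1
while True:
    r = max(math.floor(2 ** (n * (1 - s))), r + 1)
    if r > N:
        break
    times.append(r)
    n += 1

# Lambda(W_{1:m};s) = 2^{(1-s) * #random steps <= m}; T_s = (Lambda - 1)/(1-s)
ratios, count = [], 0
for m in range(1, N + 1):
    if count < len(times) and times[count] == m:
        count += 1
    lam = 2 ** ((1 - s) * count)
    ratios.append(((lam - 1) / (1 - s)) / m)

# limsup of T_s/m is 1/(1-s) = 2; liminf is 2^{s-1}/(1-s) = sqrt(2) ~ 1.414
print(max(ratios[100:]), min(ratios[100:]))
```

The running ratios oscillate between the two limits, as expected for weak extensivity.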
The last example will consider anisotropic random walks, in which the \(X_{r_n}\) are drawn according to an asymmetric binary distribution with probabilities depending on n.
Example 4
Let \(r_n = \max \{ \lfloor \prod _{k=1}^n (1 + {1}/{(k+1)} ) \rfloor , r_{n-1} +1\}\) for \( n \ge 2,\) with \(r_1 = 1\). Let \(\textbf{X}\) be a sequence of independent variables such that \(\mathbb {P}(X_{r_n} = 1) = 1- \mathbb {P}(X_{r_n} = -1) = p_n\), with \(p_n\) the solution of \((p_n)^s + (1-p_n)^s = 1 + 1/{(n+1)},\) while all other \(X_n\) are deterministic. By construction, the Dirichlet series associated with \(X_{r_n}\) is \(l_n =\varLambda (X_{r_n};s)= 1+{1}/{(n+1)}\), which converges to \(L=1\). Theorem 1 then yields extensivity for Tsallis entropy.
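The construction of Example 4 can be carried out effectively; the sketch below (our implementation, with \(s=0.3\) so that every \(l_n = 1+1/(n+1)\) lies in the admissible range \((1; 2^{1-s})\) of the Bernoulli model) solves for the \(p_n\) and checks that \(\prod _{k\le n} l_k\) telescopes to \((n+2)/2\):

```python
def bernoulli_p(ell, s, tol=1e-13):
    # p in (0, 1/2] with p^s + (1-p)^s = ell (increasing in p for 0 < s < 1)
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** s + (1 - mid) ** s < ell:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

s = 0.3                                      # ensures l_n < 2^(1-s) for all n >= 1
ls = [1 + 1 / (k + 1) for k in range(1, 41)]
ps = [bernoulli_p(ell, s) for ell in ls]
residual = max(abs(p ** s + (1 - p) ** s - ell) for p, ell in zip(ps, ls))
print(residual)       # each p_n solves its defining equation

prod = 1.0
for ell in ls:
    prod *= ell
print(prod)           # telescopes: prod_{k<=n} (k+2)/(k+1) = (n+2)/2, here 21
```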
Further, the random walks of Example 1 become Tsallis-extensive by introducing the respective delays \(R = \left\{ \left\lfloor (2^{n}\, n!)^{1-s} \right\rfloor , \ n>0 \right\} \) and \(R = \left\{ \left\lfloor 2^{n(n+1)(1-s)/2} \right\rfloor , \ n>0 \right\} \) and applying Theorem 1.
Note that large classes of Tsallis-extensive DRWs can be built from Theorem 1, a construction that was the main aim of the paper.
References
Ciuperca, G., Girardin, V., Lhote, L.: Computation and estimation of generalized entropy rates for denumerable Markov chains. IEEE Trans. Inf. Theory 57(7), 4026–4034 (2011). https://doi.org/10.1109/TIT.2011.2133710
Cover, T.M., Thomas, J.A.: Elements of Information Theory, 2nd edn. Wiley (2006). https://onlinelibrary.wiley.com/doi/book/10.1002/047174882X
Girardin, V.: On the different extensions of the ergodic theorem of information theory. In: Baeza-Yates, R., Glaz, J., Gzyl, H., Hüsler, J., Palacios, J.L. (eds.) Recent Advances in Applied Probability, pp. 163–179. Springer, US (2005). https://doi.org/10.1007/0-387-23394-6_7
Girardin, V., Regnault, P.: Linear \((h,\varphi )\)-entropies for quasi-power sequences with a focus on the logarithm of Taneja entropy. Phys. Sci. Forum 5(1), 9 (2022). https://doi.org/10.3390/psf2022005009
Hanel, R., Thurner, S.: When do generalized entropies apply? How phase space volume determines entropy. Europhys. Lett. 96(50003) (2011). https://doi.org/10.1209/0295-5075/96/50003
Hanel, R., Thurner, S.: Generalized (c, d)-entropy and aging random walks. Entropy 15(12), 5324–5337 (2013). https://doi.org/10.3390/e15125324, https://www.mdpi.com/1099-4300/15/12/5324
Marsh, J.A., Fuentes, M.A., Moyano, L.G., Tsallis, C.: Influence of global correlations on central limit theorems and entropic extensivity. Phys. A 372(2), 183–202 (2006). https://doi.org/10.1016/j.physa.2006.08.009
Regnault, P.: Différents problèmes liés à l’estimation de l’entropie de Shannon d’une loi, d’un processus de Markov (2011). https://www.theses.fr/2011CAEN2042
Thurner, S., Klimek, P., Hanel, R.: Introduction to the Theory of Complex Systems. Oxford University Press (2018). https://doi.org/10.1093/oso/9780198821939.001.0001
Tsallis, C.: Introduction to Nonextensive Statistical Mechanics. Springer (2009). https://doi.org/10.1007/978-0-387-85359-8
Vallée, B.: Dynamical sources in information theory: Fundamental intervals and word prefixes. Algorithmica 29(1), 262–306 (2001). https://doi.org/10.1007/BF02679622
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Girardin, V., Regnault, P. (2023). Extensive Entropy Functionals and Non-ergodic Random Walks. In: Nielsen, F., Barbaresco, F. (eds) Geometric Science of Information. GSI 2023. Lecture Notes in Computer Science, vol 14071. Springer, Cham. https://doi.org/10.1007/978-3-031-38271-0_12