1 Introduction

A discrete q-uniform distribution first emerged as a congruence class distribution, modulo n, of Bernoulli-generated numbers, in a probabilistic number theory paper of Rawlings [5]. Kupershmidt [4] discussed a discrete q-uniform distribution, starting with a nonnegative q-function defined on the set \(\{0,1,\ldots ,n\}\) and summing to one. Charalambides [1] extensively presented properties and applications of discrete q-uniform distributions.

The most important multivariate discrete uniform distributions are defined on the Fermi–Dirac and Bose–Einstein stochastic models, with probability (mass) functions:

$$\begin{aligned} P(X_1=x_1,X_2=x_2,\ldots ,X_k=x_k)=1\bigg /\left( {\begin{array}{c}k+1\\ n\end{array}}\right) , \end{aligned}$$

for \(x_j=0,1\) and \(j=1,2,\ldots ,k\), with \(n-1\le \sum _{j=1}^k x_j\le n\), and

$$\begin{aligned} P(X_1=x_1,X_2=x_2,\ldots ,X_k=x_k)=1\bigg /\left( {\begin{array}{c}k+n\\ n\end{array}}\right) , \end{aligned}$$

for \(x_j=0,1,\ldots ,n\) and \(j=1,2,\ldots ,k\), with \(\sum _{j=1}^k x_j\le n\), respectively. In both models (or statistics in the physicist’s terminology), a mechanical system of n particles is considered and \(X_j\) is the number of particles in the jth energy level, \(j=1,2,\ldots ,k\), of the system. In the Fermi–Dirac model, the particles obey the Pauli exclusion principle. These distributions are obtained by assuming that any particle is equally likely to move from the zeroth energy level to any of the \(k+1\) energy levels [3, p. 40]. In the present article, allowing the probability of a particle to move from the zeroth energy level to one of the \(k+1\) energy levels to vary geometrically, with rate q, multivariate discrete q-uniform distributions are introduced and studied. Section 2 is devoted to the presentation of multivariate q-hypergeometric sums, which are used in the study of multivariate discrete q-uniform distributions of the first and second kind. In Sect. 3, a stochastic model of a sequence of successive q-distributions of n indistinguishable balls into distinguishable urns (cells) is presented. Then, assuming that the urns are of limited capacity, a multivariate discrete q-uniform distribution of the first kind (q-Fermi–Dirac statistic) is defined on this model and its properties are thoroughly examined. In Sect. 4, supposing that the urns are of unlimited capacity, a multivariate discrete q-uniform distribution of the second kind (q-Bose–Einstein statistic) is defined on this model and its properties are extensively studied.

2 Multivariate q-Hypergeometric Sums

Two multivariate q-hypergeometric sums, over all partitions into a specific number of unequal parts and into (any) parts, respectively, none of which is greater than another specific number, emerge in the study of q-analogues of the Fermi–Dirac and Bose–Einstein stochastic models (statistics). They are presented in the following corollary of Theorem 1.2 in the book of Charalambides [1].

Corollary 1

Let k and n be positive integers, and q be a real number, with \(q\ne 1\). Then,

$$\begin{aligned} \underset{n-1\le r_1+r_2+\cdots +r_k\le n}{\sum _{r_j=0,1,\; j=1,2,\ldots ,k,}}q^{r_1+2r_2+\cdots +kr_k-\left( {\begin{array}{c}n\\ 2\end{array}}\right) }=\genfrac[]{0.0pt}{}{k+1}{n}_q, \ \ n\le k+1, \end{aligned}$$
(1)

and

$$\begin{aligned} \underset{r_1+r_2+\cdots +r_k\le n}{\sum _{r_j=0,1,\ldots ,n,\;j=1,2,\ldots ,k,}}q^{r_1+2r_2+\cdots +kr_k}=\genfrac[]{0.0pt}{}{k+n}{n}_q. \end{aligned}$$
(2)

Proof

The q-binomial coefficients \(\genfrac[]{0.0pt}{}{k+1}{n}_q\) and \(\genfrac[]{0.0pt}{}{k+n}{n}_q\), according to Theorem 1.2 in Charalambides [1], may be expressed as:

$$\begin{aligned} {\sum _{1\le i_1<i_2<\cdots <i_n\le k+1}}q^{i_1+i_2+\cdots +i_n-\left( {\begin{array}{c}n+1\\ 2\end{array}}\right) }=\genfrac[]{0.0pt}{}{k+1}{n}_q \end{aligned}$$
(3)

and

$$\begin{aligned} {\sum _{1\le i_1\le i_2\le \cdots \le i_n\le k+1}}q^{i_1+i_2+\cdots +i_n-n}=\genfrac[]{0.0pt}{}{k+n}{n}_q. \end{aligned}$$
(4)

Let \(r_j\) be the number of variables \(i_1,i_2,\ldots ,i_n\) that are equal to \(j+1\), for \(j=0,1,\ldots ,k\). Note that \(r_j=0,1\), for \(1\le i_1<i_2<\cdots <i_n\le k+1\), and \(r_j=0,1,\ldots ,n\), for \(1\le i_1\le i_2\le \cdots \le i_n\le k+1\). In the first case, since \(r_0=n-(r_1+r_2+\cdots +r_k)\) is itself \(0\) or \(1\), it follows that \(n-1\le r_1+r_2+\cdots +r_k\le n\). Then,

$$\begin{aligned} i_1+i_2+\cdots +i_n=r_0+2r_1+\cdots +(k+1)r_k, \ \ \text {with}\ \ r_0+r_1+\cdots +r_k=n. \end{aligned}$$

Thus,

$$\begin{aligned} i_1+i_2+\cdots +i_n-\left( {\begin{array}{c}n+1\\ 2\end{array}}\right) =r_1+2r_2+\cdots +kr_k-\left( {\begin{array}{c}n\\ 2\end{array}}\right) , \; \text {with}\; n-1\le r_1+r_2+\cdots +r_k\le n, \end{aligned}$$

and

$$\begin{aligned} i_1+i_2+\cdots +i_n-n=r_1+2r_2+\cdots +kr_k, \ \ \text {with}\ \ r_1+r_2+\cdots +r_k\le n. \end{aligned}$$

Consequently, (3) and (4) may be expressed as (1) and (2), respectively.

It is interesting to note an alternative evaluation of these multiple sums,

$$\begin{aligned} a_{n,k}(q)=\underset{n-1\le r_1+r_2+\cdots +r_k\le n}{\sum _{r_j=0,1,\;j=1,2,\ldots ,k,}}q^{r_1+2r_2+\cdots +kr_k-\left( {\begin{array}{c}n\\ 2\end{array}}\right) }, \ \ n=1,2,\ldots ,k+1,\ \ k=1,2,\ldots , \end{aligned}$$

and

$$\begin{aligned} b_{n,k}(q)=\underset{r_1+r_2+\cdots +r_k\le n}{\sum _{r_j=0,1,\ldots ,n,\;j=1,2,\ldots ,k,}}q^{r_1+2r_2+\cdots +kr_k}, \ \ n=1,2,\ldots ,\ \ k=1,2,\ldots , \end{aligned}$$

which may be carried out inductively by using the relations

$$\begin{aligned} a_{n,k}(q)=\sum _{r_1=0}^1q^{(k-n+r_1)r_1-\left( {\begin{array}{c}r_1\\ 2\end{array}}\right) }a_{n-r_1,k-1}(q), \end{aligned}$$

and

$$\begin{aligned} b_{n,k}(q)=\sum _{r_1=0}^nq^{kr_1}b_{n-r_1,k-1}(q). \end{aligned}$$

\(\square \)
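
These evaluations are easy to confirm numerically. The sketch below (plain Python with a float q; the helper names `qint`, `qbinom`, `a` and `b` are ad hoc, not from any library) brute-forces the sums in (1) and (2) and the recurrence for \(a_{n,k}(q)\); the summation in (1) is taken over \(n-1\le r_1+\cdots +r_k\le n\), since the multiplicity \(r_0\) of the indices equal to one is itself 0 or 1.

```python
from itertools import product
from math import comb, isclose

def qint(n, q):          # q-number [n]_q = 1 + q + ... + q^(n-1)
    return sum(q**i for i in range(n))

def qbinom(m, j, q):     # q-binomial coefficient [m, j]_q; zero outside 0 <= j <= m
    if j < 0 or j > m:
        return 0.0
    num = den = 1.0
    for i in range(1, j + 1):
        num *= qint(m - j + i, q)
        den *= qint(i, q)
    return num / den

def a(n, k, q):          # sum in (1): r_j in {0,1}, n-1 <= r_1+...+r_k <= n
    return sum(q**(sum(j * r[j - 1] for j in range(1, k + 1)) - comb(n, 2))
               for r in product((0, 1), repeat=k) if n - 1 <= sum(r) <= n)

def b(n, k, q):          # sum in (2): r_j in {0,...,n}, r_1+...+r_k <= n
    return sum(q**sum(j * r[j - 1] for j in range(1, k + 1))
               for r in product(range(n + 1), repeat=k) if sum(r) <= n)

q = 0.7
ok1 = all(isclose(a(n, k, q), qbinom(k + 1, n, q))
          for k in range(1, 6) for n in range(1, k + 2))
ok2 = all(isclose(b(n, k, q), qbinom(k + n, n, q))
          for k in range(1, 5) for n in range(1, 5))
# recurrence a_{n,k}(q) = a_{n,k-1}(q) + q^(k-n+1) a_{n-1,k-1}(q)
ok3 = all(isclose(a(n, k, q), a(n, k - 1, q) + q**(k - n + 1) * a(n - 1, k - 1, q))
          for k in range(2, 6) for n in range(2, k))
```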

Two more general multivariate q-hypergeometric sums are evaluated in the following theorem.

Theorem 1

Let k and n be positive integers, and q be a real number, with \(q\ne 1\). Also, let \(m_i\), \(i=1,2,\ldots ,\nu \), be positive integers, with \(\sum _{i=1}^\nu m_i=k\), and set \(n_j=\sum _{i=1}^j m_i\), \(j=1,2,\ldots ,\nu \). Then,

$$\begin{aligned} \underset{n-1\le r_1+r_2+\cdots +r_\nu \le n}{\sum _{r_j=0,1,\ldots ,m_j,\;j=1,2,\ldots ,\nu ,}} q^{\sum _{j=1}^\nu (k-n_j-n+s_j+1)r_j}\prod _{j=1}^\nu \genfrac[]{0.0pt}{}{m_j}{r_j}_q=\genfrac[]{0.0pt}{}{k+1}{n}_q, \end{aligned}$$
(5)

where \(s_j=\sum _{i=1}^jr_i\), and

$$\begin{aligned} \underset{r_1+r_2+\cdots +r_\nu \le n}{\sum _{r_j=0,1,\ldots ,n,\;j=1,2,\ldots ,\nu ,}} q^{\sum _{j=1}^\nu (k-n_j+1)r_j}\prod _{j=1}^\nu \genfrac[]{0.0pt}{}{m_j+r_j-1}{r_j}_q=\genfrac[]{0.0pt}{}{k+n}{n}_q. \end{aligned}$$
(6)

Proof

According to the q-Cauchy formula, it holds that

$$\begin{aligned} \sum _{r_j=0}^{n-s_{j-1}}q^{(k-n_j-n+s_j+1)r_j}\genfrac[]{0.0pt}{}{m_j}{r_j}_q\genfrac[]{0.0pt}{}{k-n_j+1}{n-s_j}_q =\genfrac[]{0.0pt}{}{k-n_{j-1}+1}{n-s_{j-1}}_q, \end{aligned}$$

for \(j=1,2,\ldots ,\nu \). Starting with the first expression, \(j=1\),

$$\begin{aligned} \sum _{r_1=0}^nq^{(k-n_1-n+s_1+1)r_1}\genfrac[]{0.0pt}{}{m_1}{r_1}_q\genfrac[]{0.0pt}{}{k-n_1+1}{n-s_1}_q =\genfrac[]{0.0pt}{}{k+1}{n}_q, \end{aligned}$$

and replacing the second factor of the general term of the sum by

$$\begin{aligned} \genfrac[]{0.0pt}{}{k-n_1+1}{n-s_1}_q =\sum _{r_2=0}^{n-{r_1}}q^{(k-n_2-n+s_2+1)r_2}\genfrac[]{0.0pt}{}{m_2}{r_2}_q\genfrac[]{0.0pt}{}{k-n_2+1}{n-s_2}_q, \end{aligned}$$

and, continuing in this manner, at the last step replacing the second factor of the general term of the sum by

$$\begin{aligned} \genfrac[]{0.0pt}{}{k-n_{\nu -1}+1}{n-s_{\nu -1}}_q&=\sum _{r_\nu =0}^{n-s_{\nu -1}}q^{(k-n_\nu -n+s_\nu +1)r_\nu }\genfrac[]{0.0pt}{}{m_\nu }{r_\nu }_q\genfrac[]{0.0pt}{}{k-n_\nu +1}{n-s_\nu }_q\\&=\sum _{r_\nu =0}^{n-s_{\nu -1}}q^{(k-n_\nu -n+s_\nu +1)r_\nu }\genfrac[]{0.0pt}{}{m_\nu }{r_\nu }_q, \end{aligned}$$

expression (5) is deduced. For the last equality, note that \(n_\nu =k\) implies \(\genfrac[]{0.0pt}{}{k-n_\nu +1}{n-s_\nu }_q=\genfrac[]{0.0pt}{}{1}{n-s_\nu }_q\), which equals one for \(n-s_\nu =0,1\) and vanishes otherwise; this is what restricts the summation in (5) to \(n-1\le r_1+r_2+\cdots +r_\nu \le n\). Similarly, according to the q-Cauchy formula, it holds that

$$\begin{aligned} \sum _{r_j=0}^{n-s_{j-1}}q^{(k-n_j+1)r_j}\genfrac[]{0.0pt}{}{m_j\!+r_j-1}{r_j}_q\genfrac[]{0.0pt}{}{k\!-n_j\!+n\!-s_j}{n-s_j}_q =\genfrac[]{0.0pt}{}{k\!-n_{j-1}\!+n\!-s_{j-1}}{n-s_{j-1}}_q, \end{aligned}$$

for \(j=1,2,\ldots ,\nu \). Starting with the first expression, \(j=1\),

$$\begin{aligned} \sum _{r_1=0}^nq^{(k-n_1+1)r_1}\genfrac[]{0.0pt}{}{m_1+r_1-1}{r_1}_q\genfrac[]{0.0pt}{}{k-n_1+n-s_1}{n-s_1}_q =\genfrac[]{0.0pt}{}{k\!+n}{n}_q, \end{aligned}$$

and replacing the second factor of the general term of the sum by

$$\begin{aligned} \genfrac[]{0.0pt}{}{k-n_1+n-s_1}{n-s_1}_q =\sum _{r_2=0}^{n-r_{1}}q^{(k-n_2+1)r_2}\genfrac[]{0.0pt}{}{m_2+r_2-1}{r_2}_q\genfrac[]{0.0pt}{}{k-n_2+n-s_2}{n-s_2}_q, \end{aligned}$$

and, continuing in this manner, at the last step replacing the second factor of the general term of the sum by

$$\begin{aligned} \genfrac[]{0.0pt}{}{k\!-n_{\nu -1}\!+n\!-s_{\nu -1}}{n-s_{\nu -1}}_q&=\sum _{r_\nu =0}^{n-s_{\nu -1}}q^{(k-n_\nu +1)r_\nu }\genfrac[]{0.0pt}{}{m_\nu \!+r_\nu -1}{r_\nu }_q\genfrac[]{0.0pt}{}{k\!-n_\nu \!+n\!-s_\nu }{n-s_\nu }_q\\&=\sum _{r_\nu =0}^{n-s_{\nu -1}}q^{(k-n_\nu +1)r_\nu }\genfrac[]{0.0pt}{}{m_\nu \!+r_\nu -1}{r_\nu }_q, \end{aligned}$$

expression (6) is deduced; here the last equality is a direct consequence of \(n_\nu =k\), since then \(\genfrac[]{0.0pt}{}{k-n_\nu +n-s_\nu }{n-s_\nu }_q=\genfrac[]{0.0pt}{}{n-s_\nu }{n-s_\nu }_q=1\). \(\square \)
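
Both identities lend themselves to a brute-force numerical check. The following sketch (plain Python; the helper names `qint`, `qbinom`, `lhs5`, `lhs6` and the values of q, \(m_j\), n are arbitrary choices, not from the source) verifies (5) and (6) for \(\nu =2\); the summation in (5) is restricted to \(n-1\le r_1+\cdots +r_\nu \le n\), as dictated by the vanishing factor \(\genfrac[]{0.0pt}{}{1}{n-s_\nu }_q\) in the proof.

```python
from itertools import product
from math import isclose

def qint(n, q):                      # q-number [n]_q
    return sum(q**i for i in range(n))

def qbinom(m, j, q):                 # q-binomial coefficient; zero outside 0 <= j <= m
    if j < 0 or j > m:
        return 0.0
    num = den = 1.0
    for i in range(1, j + 1):
        num *= qint(m - j + i, q)
        den *= qint(i, q)
    return num / den

def lhs5(m, n, q):                   # left-hand side of (5); assumes sum(m) = k
    k = sum(m)
    nn = [sum(m[:j + 1]) for j in range(len(m))]          # partial sums n_j
    total = 0.0
    for r in product(*(range(mj + 1) for mj in m)):
        if n - 1 <= sum(r) <= n:
            s = [sum(r[:j + 1]) for j in range(len(r))]   # partial sums s_j
            e = sum((k - nn[j] - n + s[j] + 1) * r[j] for j in range(len(m)))
            w = 1.0
            for mj, rj in zip(m, r):
                w *= qbinom(mj, rj, q)
            total += q**e * w
    return total

def lhs6(m, n, q):                   # left-hand side of (6); assumes sum(m) = k
    k = sum(m)
    nn = [sum(m[:j + 1]) for j in range(len(m))]
    total = 0.0
    for r in product(range(n + 1), repeat=len(m)):
        if sum(r) <= n:
            e = sum((k - nn[j] + 1) * r[j] for j in range(len(m)))
            w = 1.0
            for mj, rj in zip(m, r):
                w *= qbinom(mj + rj - 1, rj, q)
            total += q**e * w
    return total

q, m, n = 0.7, (2, 3), 3
k = sum(m)
ok5 = isclose(lhs5(m, n, q), qbinom(k + 1, n, q))
ok6 = isclose(lhs6(m, n, q), qbinom(k + n, n, q))
```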

3 q-Fermi–Dirac Stochastic Model (Statistic)

A random distribution (placement) of balls into distinguishable urns (cells) is a simple and very useful stochastic model. Among its most striking and useful applications, the Bose–Einstein and Fermi–Dirac stochastic models (statistics) deserve special attention.

A random q-distribution (placement) of a ball into r distinguishable urns (cells) \(\{c_1,c_2,\ldots ,c_r\}\) may be introduced as follows. Assume that r numbered balls \(\{1,2,\ldots ,r\}\), representing the r urns, are forced to pass through a random mechanism, one after the other, in the order \((1,2,\ldots ,r)\) or in the reverse order \((r,r-1,\ldots ,2,1)\). Also, suppose that each passing ball may or may not be caught by the mechanism, with probabilities \(p=1-q\) and q, respectively. In the case that all r balls pass through the mechanism and no ball is caught, the ball-passing procedure is repeated, in the same order. Then, the number on the first caught ball determines the urn (cell) in which the ball is placed. Clearly, the probability that a ball is placed in the jth in order urn is given by

$$\begin{aligned} p_j=\sum _{k=0}^\infty (1-q)q^{(j-1)+kr}=\frac{q^{j-1}}{[r]_q}, \ \ j=1,2,\ldots ,r, \end{aligned}$$

or by

$$\begin{aligned} p_j=\sum _{k=0}^\infty (1-q)q^{(r-j)+kr}=\frac{q^{r-j}}{[r]_q}, \ \ j=1,2,\ldots ,r, \end{aligned}$$

where \(0<q<1\), according to whether the ball passing order is \((1,2,\ldots ,r)\) or \((r,r-1,\ldots ,2,1)\). These probabilities, on using the expression \(q^{j-1}/[r]_q=q^{-(r-j)}/[r]_{q^{-1}}\), may be written in a single formula as:

$$\begin{aligned} p_j=\frac{q^{r-j}}{[r]_q}, \ \ j=1,2,\ldots ,r, \end{aligned}$$
(7)

where \(0<q<1\) or \(1<q<\infty \). Note that this is the probability function of a discrete q-uniform distribution on the set \(\{1,2,\ldots ,r\}\). It is worth mentioning that, in a quite close analogy, in Combinatorics, Chung and Kang [2] introduced the notion of a q-selection of an element from the set \(C=\{c_1,c_2,\ldots ,c_r\}\) by considering a weight \(q^{i-1}\) as the payment for the \(i-1\) jumps made in traveling from the left to the right of the permutation \(p_r=(c_1,c_2,\ldots ,c_r)\), with \(c_1<c_2<\cdots <c_r\), before selecting the element \(c_i\in C\).
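
As a quick numerical sanity check, the geometric-series evaluation of \(p_j\), the normalization \(\sum _{j=1}^rp_j=1\), and the identity linking the two passing orders can be verified as follows (plain Python sketch; the values q = 0.6 and r = 5 are arbitrary choices):

```python
from math import isclose

def qint(r, q):                      # q-number [r]_q = 1 + q + ... + q^(r-1)
    return sum(q**i for i in range(r))

q, r = 0.6, 5
# p_j for the reverse passing order (r, r-1, ..., 2, 1):
# geometric series over repeated rounds, sum_t (1-q) q^((r-j)+t*r)
p = [(1 - q) * q**(r - j) / (1 - q**r) for j in range(1, r + 1)]
ok_formula = all(isclose(p[j - 1], q**(r - j) / qint(r, q)) for j in range(1, r + 1))
ok_sum = isclose(sum(p), 1.0)
# the two orders are linked by q^(j-1)/[r]_q = q^(-(r-j))/[r]_{q^(-1)}
ok_link = all(isclose(q**(j - 1) / qint(r, q), q**(-(r - j)) / qint(r, 1 / q))
              for j in range(1, r + 1))
```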

Furthermore, assume that n indistinguishable balls are randomly q-distributed, one after the other, into \(r=k+1\) distinguishable urns (cells) \(\{c_1,c_2,\ldots ,c_{k+1}\}\), each with capacity limited to one ball, with \(n\le k+1\). Let \(X_j\) be the number of balls placed in urn \(c_j\), for \(j=1,2,\ldots ,k+1\). Note that \(X_{k+1}=n-X_1-X_2-\cdots -X_k\). The distribution of the random vector \((X_1, X_2,\ldots ,X_k)\) is called Multivariate Discrete q-Uniform Distribution of the first kind, with parameters n and q. Its probability function is derived in the following theorem.

Theorem 2

The probability (mass) function of the multivariate discrete q-uniform distribution of the first kind, with parameters n and q, is given by

$$\begin{aligned} P(X_1=x_1,X_2=x_2,\ldots ,X_k=x_k)=q^{\sum _{j=1}^k(k-j+1)x_j-\left( {\begin{array}{c}n\\ 2\end{array}}\right) }\bigg /\genfrac[]{0.0pt}{}{k+1}{n}_q, \end{aligned}$$
(8)

for \(x_j=0,1\) and \(j=1,2,\ldots ,k\), with \(n-1\le \sum _{j=1}^k x_j\le n\), and \(0<q<1\) or \(1<q<\infty \).

Proof

A random q-distribution of n indistinguishable balls into \(k+1\) distinguishable urns, of capacity limited to one ball, may be represented by the collection of n q-selected urns \(\{c_{i_1},c_{i_2},\ldots ,c_{i_n}\}\), where the q-selection of an urn x times corresponds to the placement of x balls into it, for \(x=0,1\). Notice that, after the q-selection of an urn and the placement of a ball in it, the next q-selection is made among the remaining urns, because its capacity is limited to one ball. Therefore, the probability for such a q-distribution, on using successively (7), with \(r=k+1,k,\ldots ,k-n+2\), is given by

$$\begin{aligned} c\,q^{k-i_1+1}q^{k-i_2}\cdots q^{k-i_n-n+2}=c\,q^{(k+1)n-(i_1+i_2+\cdots +i_n)-\left( {\begin{array}{c}n\\ 2\end{array}}\right) }, \end{aligned}$$

with \(1\le i_1<i_2<\cdots <i_n\le k+1\). Clearly, the number \(x_j\) of balls q-distributed into urn \(c_j\) equals the number of variables \(i_1,i_2,\ldots ,i_n\) that are equal to j, for \(j=1,2,\ldots ,k+1\), with \(x_{k+1}=n-\sum _{j=1}^kx_j\). Also, the exponent of q in the expression of the preceding random q-distribution may be expressed as:

$$\begin{aligned} (k+1)n-\sum _{r=1}^ni_r-\left( {\begin{array}{c}n\\ 2\end{array}}\right) =\sum _{j=1}^{k+1}(k+1)x_j-\sum _{j=1}^{k+1}jx_j-\left( {\begin{array}{c}n\\ 2\end{array}}\right) =\sum _{j=1}^k(k-j+1)x_j-\left( {\begin{array}{c}n\\ 2\end{array}}\right) \end{aligned}$$

and so

$$\begin{aligned} P(X_1=x_1,X_2=x_2,\ldots ,X_k=x_k)=c\,q^{\sum _{j=1}^k(k-j+1)x_j-\left( {\begin{array}{c}n\\ 2\end{array}}\right) }, \end{aligned}$$

for \(x_j=0,1\) and \(j=1,2,\ldots ,k\), with \(n-1\le \sum _{j=1}^k x_j\le n\), since \(x_{k+1}=n-\sum _{j=1}^kx_j\) must be \(0\) or \(1\). Summing these probabilities, using (1), and equating this sum to one, we get the expression \(c=1/\genfrac[]{0.0pt}{}{k+1}{n}_q\), which completes the derivation of (8). \(\square \)
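
A minimal numerical check that (8) sums to one over its support, which, since \(x_{k+1}=n-\sum _{j=1}^kx_j\) is \(0\) or \(1\), is \(n-1\le x_1+\cdots +x_k\le n\) (plain Python sketch; helper names and the values of k, n, q are arbitrary choices):

```python
from itertools import product
from math import comb, isclose

def qint(n, q):                      # q-number [n]_q
    return sum(q**i for i in range(n))

def qbinom(m, j, q):                 # q-binomial coefficient; zero outside 0 <= j <= m
    if j < 0 or j > m:
        return 0.0
    num = den = 1.0
    for i in range(1, j + 1):
        num *= qint(m - j + i, q)
        den *= qint(i, q)
    return num / den

def fd_pmf(x, n, k, q):              # probability function (8)
    e = sum((k - j + 1) * xj for j, xj in enumerate(x, start=1)) - comb(n, 2)
    return q**e / qbinom(k + 1, n, q)

def fd_support(n, k):                # x_j in {0,1}, n-1 <= sum <= n
    return [x for x in product((0, 1), repeat=k) if n - 1 <= sum(x) <= n]

k, n = 5, 3
ok_norm = all(isclose(sum(fd_pmf(x, n, k, q) for x in fd_support(n, k)), 1.0)
              for q in (0.7, 1.4))   # both 0 < q < 1 and q > 1
```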

The multivariate discrete q-uniform distribution of the first kind may be obtained as the conditional distribution of k independent q-Bernoulli random variables of the first kind, given the value of their sum with an additional independent q-Bernoulli random variable of the first kind, according to the following theorem.

Theorem 3

Consider a sequence of independent Bernoulli trials and assume that the probability of success at the ith trial is given by

$$\begin{aligned} p_i=\frac{\theta q^{i-1}}{1+\theta q^{i-1}}, \quad i=1,2,\ldots ,\ \ 0<q<1 \ \ \text {or}\ \ 1<q<\infty . \end{aligned}$$

Let \(X_j\) be the number of successes at the jth trial, for \(j=1,2,\ldots ,k+1\). Then, the conditional probability function of the random vector \((X_1,X_2,\ldots ,X_k)\), given that \(X_1+X_2+\cdots +X_{k+1}=n\), is the multivariate discrete q-uniform distribution of the first kind with probability function (8).

Proof

The random variables \(X_j\), \(j=1,2,\ldots ,k+1\), are independent, with probability function given, according to Theorem 2.1 in Charalambides [1], by

$$\begin{aligned} P(X_j=x_j)=\frac{\theta ^{x_j} q^{(j-1)x_j}}{1+\theta q^{j-1}}, \quad x_j=0,1, \ \ j=1,2,\ldots ,k+1. \end{aligned}$$

Similarly, the probability function of the sum \(Y_{k+1}=X_1+X_2+\cdots +X_{k+1}\), which is the number of successes in \(k+1\) trials, is

$$\begin{aligned} P(Y_{k+1}=n)=\genfrac[]{0.0pt}{}{k+1}{n}_q\frac{\theta ^n q^{\left( {\begin{array}{c}n\\ 2\end{array}}\right) }}{\prod _{i=1}^{k+1}(1+\theta q^{i-1})},\quad n=0,1,\ldots ,k+1. \end{aligned}$$

Then, the joint conditional probability function of the random vector \((X_1,X_2,\ldots ,X_k)\), given that \(Y_{k+1}=n\),

$$\begin{aligned} P(X_1\!=\!x_1,\ldots ,X_k\!=\!x_k | Y_{k+1}\!=\!n)\!=\!\frac{P(X_1\!=\!x_1)\cdots P(X_k\!=\!x_k)P(X_{k+1}\!=\!n\!-\!y_k)}{P(Y_{k+1}=n)}, \end{aligned}$$

with \(y_k=\sum _{j=1}^kx_j\), on using these expressions, is obtained as:

$$\begin{aligned} P(X_1=x_1,X_2=x_2,\ldots ,X_k=x_k|Y_{k+1}=n)=q^{c_{n,k}(x_1,x_2,\ldots ,x_k)}\bigg /\genfrac[]{0.0pt}{}{k+1}{n}_{q}, \end{aligned}$$

where

$$\begin{aligned} c_{n,k}(x_1,x_2,\ldots ,x_k)&=\sum _{j=1}^k(j-1)x_j-\sum _{j=1}^kkx_j+nk-\left( {\begin{array}{c}n\\ 2\end{array}}\right) \\&=-\sum _{j=1}^k(k-j+1)x_j+\left( {\begin{array}{c}n\\ 2\end{array}}\right) +n(k-n+1). \end{aligned}$$

Thus, since

$$\begin{aligned} q^{-n(k-n+1)}\genfrac[]{0.0pt}{}{k+1}{n}_{q}=\genfrac[]{0.0pt}{}{k+1}{n}_{q^{-1}}, \end{aligned}$$

it reduces to

$$\begin{aligned} P(X_1\!=\!x_1,X_2\!=\!x_2,\ldots ,X_k\!=\!x_k|Y_{k+1}\!=\!n) \!=\!q^{-\sum _{j=1}^k(k-j+1)x_j+\left( {\begin{array}{c}n\\ 2\end{array}}\right) }\bigg /\genfrac[]{0.0pt}{}{k+1}{n}_{q^{-1}}, \end{aligned}$$

which is expression (8) with q replaced by \(q^{-1}\). \(\square \)
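
Theorem 3 can be checked by direct enumeration. In the sketch below (plain Python; the values of k, n, q and θ are arbitrary choices), the conditional law of the Bernoulli trials given \(Y_{k+1}=n\) is compared with (8) at base \(q^{-1}\); the parameter θ cancels in the ratio, as the theorem asserts.

```python
from itertools import product
from math import comb, isclose

def qint(n, q):                      # q-number [n]_q
    return sum(q**i for i in range(n))

def qbinom(m, j, q):                 # q-binomial coefficient; zero outside 0 <= j <= m
    if j < 0 or j > m:
        return 0.0
    num = den = 1.0
    for i in range(1, j + 1):
        num *= qint(m - j + i, q)
        den *= qint(i, q)
    return num / den

k, n, q, theta = 4, 2, 0.7, 1.3
# success probability at the i-th trial, i = 1, ..., k+1
p = [theta * q**(i - 1) / (1 + theta * q**(i - 1)) for i in range(1, k + 2)]

def joint(x):                        # independent Bernoulli trials
    out = 1.0
    for pi, xi in zip(p, x):
        out *= pi if xi else 1 - pi
    return out

states = [x for x in product((0, 1), repeat=k + 1) if sum(x) == n]
z = sum(joint(x) for x in states)    # P(Y_{k+1} = n)
qq = 1 / q                           # the conditional law is (8) with q replaced by 1/q
ok = all(isclose(joint(x) / z,
                 qq**(sum((k - j + 1) * xj
                          for j, xj in enumerate(x[:k], start=1)) - comb(n, 2))
                 / qbinom(k + 1, n, qq))
         for x in states)
```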

Certain marginal and conditional distributions of the multivariate q-uniform distribution of the first kind are derived in the following theorem.

Theorem 4

Assume that the random vector \((X_1,X_2,\ldots ,X_k)\) obeys a multivariate discrete q-uniform distribution of the first kind. Then, the probability function of

(a) the marginal distribution of \((X_1,X_2,\ldots ,X_r)\), for \(1\le r<k\), is given by

$$\begin{aligned} P(X_1=x_1,\ldots ,X_r=x_r)=q^{\sum _{j=1}^r(k-j-n+y_r+1)x_j-\left( {\begin{array}{c}y_r\\ 2\end{array}}\right) } \genfrac[]{0.0pt}{}{k-r+1}{n-y_r}_q\bigg /\genfrac[]{0.0pt}{}{k+1}{n}_q, \end{aligned}$$
(9)

for \(x_j=0,1\), \(j=1,2,\ldots ,r\), with \(\sum _{j=1}^rx_j\le n\), where \(y_r=\sum _{j=1}^rx_j\), and

(b) the conditional distribution of the random vector \((X_{r+1}, X_{r+2},\ldots ,X_{m})\), given that \((X_1, X_2,\ldots ,X_r)=(x_1, x_2,\ldots ,x_r)\), for \(1\le r<m\le k\), is given by

$$\begin{aligned} P(X_{r+1}&=x_{r+1},\ldots ,X_m=x_m|X_1=x_1,\ldots ,X_r=x_r)\nonumber \\&=q^{\sum _{j=r+1}^m(k-j-n+y_m+1)x_j-\left( {\begin{array}{c}y_m-y_r\\ 2\end{array}}\right) } \genfrac[]{0.0pt}{}{k-m+1}{n-y_m}_q\bigg /\genfrac[]{0.0pt}{}{k-r+1}{n-y_r}_q, \end{aligned}$$
(10)

for \(x_j=0,1\), \(j=r+1,r+2,\ldots ,m\), with \(\sum _{j=r+1}^mx_j\le n-y_r\), where \(y_j=\sum _{i=1}^jx_i\).

Proof

(a) Summing the probability function of the multivariate discrete q-uniform distribution of the first kind, for \(x_j=0,1\), \(j=r+1,r+2,\ldots ,k\), with \(n-y_r-1\le \sum _{j=r+1}^kx_j\le n-y_r\), and using the relation

$$\begin{aligned} \left( {\begin{array}{c}n\\ 2\end{array}}\right) =\left( {\begin{array}{c}n-y_r\\ 2\end{array}}\right) +\left( {\begin{array}{c}y_r\\ 2\end{array}}\right) +(n-y_r)y_r, \end{aligned}$$

we get, for the marginal probability function of \((X_1,X_2,\ldots ,X_r)\), the expression

$$\begin{aligned} P(X_1=x_1,\ldots ,&\,X_r=x_r)=q^{\sum _{j=1}^r(k-j-n+y_r+1)x_j-\left( {\begin{array}{c}y_r\\ 2\end{array}}\right) }\\&\times \underset{n-y_r-1\le x_{r+1}+x_{r+2}+\cdots +x_k\le n-y_r}{\sum _{x_{r+j}=0,1,\;j=1,2,\ldots ,k-r}} q^{\sum _{j=1}^{k-r}(k-r-j+1)x_{r+j}-\left( {\begin{array}{c}n-y_r\\ 2\end{array}}\right) }\bigg /\genfrac[]{0.0pt}{}{k+1}{n}_q. \end{aligned}$$

Since the multiple sum, by (1), equals

$$\begin{aligned} \underset{n-y_r-1\le x_{r+1}+x_{r+2}+\cdots +x_k\le n-y_r}{\sum _{x_{r+j}=0,1,\;j=1,2,\ldots ,k-r}} q^{\sum _{j=1}^{k-r}(k-r-j+1)x_{r+j}-\left( {\begin{array}{c}n-y_r\\ 2\end{array}}\right) }=\genfrac[]{0.0pt}{}{k-r+1}{n-y_r}_q, \end{aligned}$$

the last expression of probability function reduces to (9).

(b) The conditional probability function of \((X_{r+1}, X_{r+2},\ldots ,X_{m})\), given that \((X_1, X_2,\ldots ,X_r)=(x_1, x_2,\ldots ,x_r)\), is given by

$$\begin{aligned} P(X_{r+1}\!=x_{r+1},\dots ,X_{m}\!=x_{m}|&\,X_1\!=x_1,\ldots ,X_r\!=x_r)\\&=\frac{P(X_1\!=x_1,X_2\!=x_2\ldots ,X_m\!=x_m)}{P(X_1\!=x_1,X_2\!=x_2,\ldots ,X_r\!=x_r)}. \end{aligned}$$

Then, using the result of part (a), together with the relation

$$\begin{aligned} \left( {\begin{array}{c}y_m-y_r\\ 2\end{array}}\right) =\left( {\begin{array}{c}y_m\\ 2\end{array}}\right) -\left( {\begin{array}{c}y_r\\ 2\end{array}}\right) -(y_m-y_r)y_r, \end{aligned}$$

we conclude that

$$\begin{aligned} P(X_{r+1}=x_{r+1},&\ldots ,X_m=x_m|X_1=x_1,\ldots ,X_r=x_r)\\&=q^{\sum _{j=r+1}^m(k-j-n+y_m+1)x_j-\left( {\begin{array}{c}y_m-y_r\\ 2\end{array}}\right) } \genfrac[]{0.0pt}{}{k-m+1}{n-y_m}_q\bigg /\genfrac[]{0.0pt}{}{k-r+1}{n-y_r}_q, \end{aligned}$$

for \(x_j=0,1\), \(j=r+1,r+2,\ldots ,m\), with \(\sum _{j=r+1}^mx_j\le n-y_r\), where \(y_j=\sum _{i=1}^jx_i\). \(\square \)
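
The marginal formula (9) can likewise be confirmed by summing (8) directly (plain Python sketch; the values of k, n, q and the choice r = 2 are arbitrary, and the helper names are ad hoc):

```python
from itertools import product
from math import comb, isclose

def qint(n, q):                      # q-number [n]_q
    return sum(q**i for i in range(n))

def qbinom(m, j, q):                 # q-binomial coefficient; zero outside 0 <= j <= m
    if j < 0 or j > m:
        return 0.0
    num = den = 1.0
    for i in range(1, j + 1):
        num *= qint(m - j + i, q)
        den *= qint(i, q)
    return num / den

k, n, q, r = 5, 3, 0.7, 2

def fd_pmf(x):                       # probability function (8)
    e = sum((k - j + 1) * xj for j, xj in enumerate(x, start=1)) - comb(n, 2)
    return q**e / qbinom(k + 1, n, q)

# support of (8): x_j in {0,1} with n-1 <= sum <= n
support = [x for x in product((0, 1), repeat=k) if n - 1 <= sum(x) <= n]

def marginal(head):                  # brute-force marginal of (X_1, ..., X_r)
    return sum(fd_pmf(x) for x in support if x[:r] == head)

def formula9(head):                  # right-hand side of (9)
    y = sum(head)
    e = sum((k - j - n + y + 1) * xj for j, xj in enumerate(head, start=1)) - comb(y, 2)
    return q**e * qbinom(k - r + 1, n - y, q) / qbinom(k + 1, n, q)

ok9 = all(isclose(marginal(h), formula9(h))
          for h in product((0, 1), repeat=r) if sum(h) <= n)
```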

Let us now turn to the moments of a multivariate q-uniform distribution of the first kind. Aiming primarily at the derivation of its q-power moments, and especially the q-means, q-variances, and q-covariances, the attention may be restricted to the marginal distribution of the random vector \((X_1, X_2)\). Its probability function is given by

$$\begin{aligned} P(X_1\!=\!x_1,X_2\!=\!x_2)\!=\!q^{(k-n+x_1)x_1+(k-1-n+x_1+x_2)x_2} \genfrac[]{0.0pt}{}{k-1}{n\!-\!x_1\!-\!x_2}_q\bigg /\genfrac[]{0.0pt}{}{k\!+\!1}{n}_q, \end{aligned}$$
(11)

for \(x_1=0,1\) and \(x_2=0,1\), with \(x_1+x_2\le n\). The \(q^{-1}\)-power moments of the random vector \((X_1, X_2)\) are derived in the following theorem. These moments may be suitably rephrased as conditional q-power moments of \((X_{\nu +1}, X_{\nu +2})\), given that \((X_1, X_2,\ldots ,X_\nu )=(x_1, x_2,\ldots ,x_\nu )\), with \(1\le \nu \le k-2\).

Theorem 5

Suppose that the probability function of the random vector \((X_1, X_2)\) is given by (11). Then,

$$\begin{aligned} E\left( [X_1]^{i_1}_{q^{-1}}\right) =\frac{[n]_{q^{-1}}}{[k+1]_{q^{-1}}},\ \ i_1=1,2,\ldots , \end{aligned}$$
(12)

and

$$\begin{aligned} V\left( [X_1]_{q^{-1}}\right) =\frac{[n]_{q^{-1}}[k+1-n]_{q^{-1}}q^{-n}}{[k+1]^2_{q^{-1}}}. \end{aligned}$$
(13)

Also,

$$\begin{aligned} E\big (q^{-X_1}[X_2]^{i_2}_{q^{-1}}\big ) =\frac{[n]_{q^{-1}}q^{-1}}{[k+1]_{q^{-1}}},\ \ i_2=1,2,\ldots , \end{aligned}$$
(14)

and

$$\begin{aligned} C\big ([X_1]_{q^{-1}},q^{-X_1}[X_2]_{q^{-1}}\big ) =-\frac{[n]_{q^{-1}}[k+1-n]_{q^{-1}}q^{-n}}{[k+1]^2_{q^{-1}}[k]_{q^{-1}}}. \end{aligned}$$
(15)

Proof

The marginal probability function of \(X_1\),

$$\begin{aligned} P(X_1=x_1)=q^{(k-n+x_1)x_1}\genfrac[]{0.0pt}{}{k}{n-x_1}_q\bigg / \genfrac[]{0.0pt}{}{k+1}{n}_q, \quad x_1=0,1, \end{aligned}$$

as the interest is focused on \(q^{-1}\)-power moments, may be written, equivalently and in a more manageable form, as:

$$\begin{aligned} P(X_1=x_1)=q^{-n(1-x_1)}\genfrac[]{0.0pt}{}{k}{n-x_1}_{q^{-1}}\bigg / \genfrac[]{0.0pt}{}{k+1}{n}_{q^{-1}}, \quad x_1=0,1. \end{aligned}$$

Clearly, the \(q^{-1}\)-power moments of \(X_1\) are readily obtained as:

$$\begin{aligned} E\left( [X_1]^{i_1}_{q^{-1}}\right) =\sum _{x_1=0}^1q^{-n(1-x_1)}[x_1]^{i_1}_{q^{-1}}\frac{\genfrac[]{0.0pt}{}{k}{n-x_1}_{q^{-1}}}{\genfrac[]{0.0pt}{}{k+1}{n}_{q^{-1}}} =\frac{\genfrac[]{0.0pt}{}{k}{n-1}_{q^{-1}}}{\genfrac[]{0.0pt}{}{k+1}{n}_{q^{-1}}}, \end{aligned}$$

and since

$$\begin{aligned} \genfrac[]{0.0pt}{}{k+1}{n}_{q^{-1}}=\frac{[k+1]_{q^{-1}}}{[n]_{q^{-1}}}\genfrac[]{0.0pt}{}{k}{n-1}_{q^{-1}}, \end{aligned}$$

it reduces to (12). Also, the \(q^{-1}\)-variance of \(X_1\) is deduced as:

$$\begin{aligned} V\left( [X_1]_{q^{-1}}\right)&=E\left( [X_1]^2_{q^{-1}}\right) -\big [E([X_1]_{q^{-1}})\big ]^2 =\frac{[n]_{q^{-1}}}{[k+1]_{q^{-1}}}\bigg (1-\frac{[n]_{q^{-1}}}{[k+1]_{q^{-1}}}\bigg )\\&=\frac{[n]_{q^{-1}}[k+1-n]_{q^{-1}}q^{-n}}{[k+1]^2_{q^{-1}}}. \end{aligned}$$

Furthermore, the expected value of \(q^{-X_1}[X_2]^{i_2}_{q^{-1}}\) may be derived by using the relation

$$\begin{aligned} E\big (q^{-X_1}[X_2]^{i_2}_{q^{-1}}\big )=E\big [E\big (q^{-X_1}[X_2]^{i_2}_{q^{-1}}|X_1\big )\big ]. \end{aligned}$$

Since the conditional probability function of \(X_2\), given that \(X_1=x_1\),

$$\begin{aligned} P(X_2=x_2|X_1=x_1)=q^{-(n-x_1)(1-x_2)}\genfrac[]{0.0pt}{}{k-1}{n-x_1-x_2}_{q^{-1}}\bigg / \genfrac[]{0.0pt}{}{k}{n-x_1}_{q^{-1}}, \ \ x_2=0,1, \end{aligned}$$

is of the same form as the probability function of \(X_1\), with the parameters k and n replaced by \(k-1\) and \(n-x_1\), respectively, it follows that

$$\begin{aligned} E\left( [X_2]^{i_2}_{q^{-1}}|X_1=x_1\right) =\frac{[n-x_1]_{q^{-1}}}{[k]_{q^{-1}}}. \end{aligned}$$

Also, the expected value of \(q^{-X_1}[n-X_1]_{q^{-1}}\) is given by

$$\begin{aligned} E\big (q^{-X_1}[n-X_1]_{q^{-1}}\big )&=\sum _{x_1=0}^1q^{-n(1-x_1)-x_1}[n-x_1]_{q^{-1}} \genfrac[]{0.0pt}{}{k}{n-x_1}_{q^{-1}}\bigg /\genfrac[]{0.0pt}{}{k+1}{n}_{q^{-1}}\\&=[k]_{q^{-1}}\sum _{x_1=0}^1q^{-n(1-x_1)-x_1} \genfrac[]{0.0pt}{}{k-1}{n-x_1-1}_{q^{-1}}\bigg /\genfrac[]{0.0pt}{}{k+1}{n}_{q^{-1}}\\&=[k]_{q^{-1}}q^{-1}\bigg (q^{-n+1}\genfrac[]{0.0pt}{}{k-1}{n-1}_{q^{-1}} +\genfrac[]{0.0pt}{}{k-1}{n-2}_{q^{-1}}\bigg )\bigg /\genfrac[]{0.0pt}{}{k+1}{n}_{q^{-1}}. \end{aligned}$$

Thus, using the triangular recurrence relation of the q-binomial coefficients, it reduces to

$$\begin{aligned} E\big (q^{-X_1}[n-X_1]_{q^{-1}}\big )=[k]_{q^{-1}}q^{-1} \genfrac[]{0.0pt}{}{k}{n-1}_{q^{-1}}\bigg /\genfrac[]{0.0pt}{}{k+1}{n}_{q^{-1}} =\frac{[n]_{q^{-1}}[k]_{q^{-1}}q^{-1}}{[k+1]_{q^{-1}}} \end{aligned}$$

and so

$$\begin{aligned} E\big (q^{-X_1}[X_2]^{i_2}_{q^{-1}}\big )&=E\big [E\big (q^{-X_1}[X_2]^{i_2}_{q^{-1}}|X_1\big )\big ] =\frac{1}{[k]_{q^{-1}}}E\big (q^{-X_1}[n-X_1]_{q^{-1}}\big )\\&=\frac{1}{[k]_{q^{-1}}}\cdot \frac{[n]_{q^{-1}}[k]_{q^{-1}}q^{-1}}{[k+1]_{q^{-1}}} =\frac{[n]_{q^{-1}}q^{-1}}{[k+1]_{q^{-1}}}. \end{aligned}$$

Similarly, the expected value of the q-function \(q^{-X_1}[X_1]^{i_1}_{q^{-1}}[X_2]^{i_2}_{q^{-1}}\) may be evaluated by using the relation:

$$\begin{aligned} E\left( q^{-X_1}[X_1]^{i_1}_{q^{-1}}[X_2]^{i_2}_{q^{-1}}\right) =E\big [E\big (q^{-X_1}[X_1]^{i_1}_{q^{-1}}[X_2]^{i_2}_{q^{-1}}|X_1\big )\big ]. \end{aligned}$$

Clearly,

$$\begin{aligned} E\big (q^{-X_1}[X_1]^{i_1}_{q^{-1}}[n-X_1]_{q^{-1}}\big )&=\sum _{x_1=0}^1q^{-n(1-x_1)-x_1}[x_1]^{i_1}_{q^{-1}}[n-x_1]_{q^{-1}} \genfrac[]{0.0pt}{}{k}{n-x_1}_{q^{-1}}\bigg /\genfrac[]{0.0pt}{}{k+1}{n}_{q^{-1}}\\&=[k]_{q^{-1}}\sum _{x_1=0}^1q^{-n(1-x_1)-x_1}[x_1]^{i_1}_{q^{-1}} \genfrac[]{0.0pt}{}{k-1}{n-x_1-1}_{q^{-1}}\bigg /\genfrac[]{0.0pt}{}{k+1}{n}_{q^{-1}}\\&=[k]_{q^{-1}}q^{-1}\genfrac[]{0.0pt}{}{k-1}{n-2}_{q^{-1}} \bigg /\genfrac[]{0.0pt}{}{k+1}{n}_{q^{-1}} \end{aligned}$$

and

$$\begin{aligned} E\big (q^{-X_1}[X_1]^{i_1}_{q^{-1}}[n\!-\!X_1]_{q^{-1}}\big )=\frac{[n]_{2,q^{-1}}q^{-1}}{[k+1]_{q^{-1}}}, \end{aligned}$$

whence

$$\begin{aligned} E\left( q^{-X_1}[X_1]^{i_1}_{q^{-1}}[X_2]^{i_2}_{q^{-1}}\right)&=E\big [E\big (q^{-X_1}[X_1]^{i_1}_{q^{-1}}[X_2]^{i_2}_{q^{-1}}|X_1\big )\big ]\\&=\frac{1}{[k]_{q^{-1}}}E\big (q^{-X_1}[X_1]^{i_1}_{q^{-1}}[n-X_1]_{q^{-1}}\big )=\frac{[n]_{2,q^{-1}}q^{-1}}{[k+1]_{2,q^{-1}}}. \end{aligned}$$

The covariance of \([X_1]_{q^{-1}}\) and \(q^{-X_1}[X_2]_{q^{-1}}\) is given by

$$\begin{aligned} C\big ([X_1]_{q^{-1}},q^{-X_1}[X_2]_{q^{-1}}\big )&=E\big (q^{-X_1}[X_1]_{q^{-1}}[X_2]_{q^{-1}}\big ) -E\big ([X_1]_{q^{-1}}\big )E\big (q^{-X_1}[X_2]_{q^{-1}}\big )\\&=\frac{[n]_{q^{-1}}[n-1]_{q^{-1}}q^{-1}}{[k+1]_{q^{-1}}[k]_{q^{-1}}}-\frac{[n]^2_{q^{-1}}q^{-1}}{[k+1]^2_{q^{-1}}}\\&=\frac{[n]_{q^{-1}}q^{-1}\big ([n-1]_{q^{-1}}[k+1]_{q^{-1}}-[n]_{q^{-1}}[k]_{q^{-1}}\big )}{[k+1]^2_{q^{-1}}[k]_{q^{-1}}}. \end{aligned}$$

Using the relations \([n-1]_{q^{-1}}=[n]_{q^{-1}}-q^{-n+1}\) and \([k]_{q^{-1}}=[k+1]_{q^{-1}}-q^{-k}\), we get

$$\begin{aligned}{}[n-1]_{q^{-1}}[k+1]_{q^{-1}}-[n]_{q^{-1}}[k]_{q^{-1}}&=[n]_{q^{-1}}[k+1]_{q^{-1}}-q^{-n+1}[k+1]_{q^{-1}}\\&\quad -[n]_{q^{-1}}[k+1]_{q^{-1}}+[n]_{q^{-1}}q^{-k}\\&=-[k+1-n]_{q^{-1}}q^{-n+1} \end{aligned}$$

and the last expression reduces to (15). \(\square \)
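
The moment formulas (12)–(15) can be verified numerically from the bivariate probability function (11). In the sketch below (plain Python; the values of k, n, q are arbitrary choices), all q-numbers are taken at base \(p=q^{-1}\); at \(q=1\) the covariance reduces to the classical value \(-n(k+1-n)/((k+1)^2k)\) of sampling without replacement.

```python
from math import isclose

def qint(n, q):                      # q-number [n]_q
    return sum(q**i for i in range(n))

def qbinom(m, j, q):                 # q-binomial coefficient; zero outside 0 <= j <= m
    if j < 0 or j > m:
        return 0.0
    num = den = 1.0
    for i in range(1, j + 1):
        num *= qint(m - j + i, q)
        den *= qint(i, q)
    return num / den

k, n, q = 5, 3, 0.7
p = 1 / q                            # all moments below are at base q^(-1)

def pmf(x1, x2):                     # probability function (11)
    e = (k - n + x1) * x1 + (k - 1 - n + x1 + x2) * x2
    return q**e * qbinom(k - 1, n - x1 - x2, q) / qbinom(k + 1, n, q)

pts = [(x1, x2) for x1 in (0, 1) for x2 in (0, 1) if x1 + x2 <= n]
m1  = sum(pmf(x1, x2) * qint(x1, p) for x1, x2 in pts)                 # E[X_1]_p
m11 = sum(pmf(x1, x2) * qint(x1, p)**2 for x1, x2 in pts)              # E[X_1]_p^2
m2  = sum(pmf(x1, x2) * q**(-x1) * qint(x2, p) for x1, x2 in pts)      # E q^{-X_1}[X_2]_p
m12 = sum(pmf(x1, x2) * qint(x1, p) * q**(-x1) * qint(x2, p) for x1, x2 in pts)

ok12 = isclose(m1, qint(n, p) / qint(k + 1, p))
ok13 = isclose(m11 - m1**2,
               qint(n, p) * qint(k + 1 - n, p) * p**n / qint(k + 1, p)**2)
ok14 = isclose(m2, qint(n, p) * p / qint(k + 1, p))
ok15 = isclose(m12 - m1 * m2,
               -qint(n, p) * qint(k + 1 - n, p) * p**n
               / (qint(k + 1, p)**2 * qint(k, p)))
```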

In the following theorem the probabilistic behaviour of groups of successive urns (energy levels) is examined.

Theorem 6

Suppose that the random vector \((X_1,X_2,\ldots ,X_k)\) obeys a multivariate discrete q-uniform distribution of the first kind and consider the random variables

$$\begin{aligned} Y_j=\sum _{i=s_{j-1}+1}^{s_j}X_i=\sum _{i=1}^{m_j}X_{s_{j-1}+i},\quad j=1,2,\ldots ,r, \end{aligned}$$

where \(m_i\), \(i=1,2,\ldots ,r\), are positive integers and \(s_j=\sum _{i=1}^jm_i\), \(j=1,2,\ldots ,r\), with \(s_r=k\), and \(s_0=0\). Then, the probability function of

(a) the distribution of the random vector \((Y_1,Y_2,\ldots ,Y_r)\) is given by

$$\begin{aligned} P(Y_1=y_1,\ldots ,Y_r=y_r) =q^{\sum _{j=1}^r(k-s_j-n+z_j+1)y_j}\prod _{j=1}^r\genfrac[]{0.0pt}{}{m_j}{y_j}_q\bigg /\genfrac[]{0.0pt}{}{k+1}{n}_q, \end{aligned}$$
(16)

for \(y_j=0,1,\ldots ,m_j\), where \(z_j=\sum _{i=1}^jy_i\), \(j=1,2,\ldots ,r\), with \(n-1\le \sum _{j=1}^ry_j\le n\).

(b) the marginal distribution of the random vector \((Y_1,Y_2,\ldots ,Y_\nu )\), for \(1\le \nu <r\), is given by

$$\begin{aligned} P(Y_1\!=\!y_1,\ldots ,Y_\nu \!=\!y_\nu ) \!=\!q^{\sum _{j=1}^\nu (k-s_j-n+z_j+1)y_j}\prod _{j=1}^\nu \genfrac[]{0.0pt}{}{m_j}{y_j}_q \genfrac[]{0.0pt}{}{k\!-\!s_\nu \!+\!1}{n-z_\nu }_q \bigg /\genfrac[]{0.0pt}{}{k\!+\!1}{n}_q, \end{aligned}$$
(17)

for \(y_j=0,1,\ldots ,m_j\), where \(z_j=\sum _{i=1}^j y_i\), \(j=1,2,\ldots ,\nu \), with \(\sum _{j=1}^\nu y_j\le n\),

(c) the conditional distribution of the random vector \((Y_{\nu +1},Y_{\nu +2},\ldots ,Y_\kappa )\), given that \((Y_1,Y_2,\ldots ,Y_\nu )=(y_1,y_2,\ldots ,y_\nu )\), for \(1\le \nu <\kappa \le r\), is given by

$$\begin{aligned} P(Y_{\nu +1}\!=\!y_{\nu +1},\ldots ,Y_\kappa \!=\!y_\kappa |Y_1\!=\!y_1,&\ldots ,Y_\nu \!=\!y_\nu )\!=\!q^{\sum _{j=\nu +1}^\kappa (k-s_j-n+z_j+1)y_j}\nonumber \\&\times \prod _{j=\nu +1}^\kappa \genfrac[]{0.0pt}{}{m_j}{y_j}_q\genfrac[]{0.0pt}{}{k\!-\!s_\kappa \!+\!1}{n-z_\kappa }_q \bigg /\genfrac[]{0.0pt}{}{k\!-\!s_\nu \!+\!1}{n-z_\nu }_q, \end{aligned}$$
(18)

for \(y_j=0,1,\ldots ,m_j\), \(j=\nu +1,\nu +2,\ldots ,\kappa \), with \(\sum _{j=\nu +1}^\kappa y_j\le n-z_\nu \) and \(z_j=\sum _{i=1}^j y_i\), \(j=\nu ,\nu +1,\ldots ,\kappa \).

Proof

(a) The probability function of the random vector \((Y_1,Y_2,\ldots ,Y_r)\) is derived from the probability function

$$\begin{aligned} P(X_1=x_1,X_2=x_2,\ldots ,X_k=x_k)=q^{\sum _{j=1}^k(k-j+1)x_j-\left( {\begin{array}{c}n\\ 2\end{array}}\right) }\bigg /\genfrac[]{0.0pt}{}{k+1}{n}_q, \end{aligned}$$

by inserting into it the r new variables \((y_1,y_2,\ldots ,y_r)\) and summing the resulting expression over all the remaining \(k-r\) old variables. Note first that

$$\begin{aligned} \sum _{j=1}^ry_j=\sum _{j=1}^r\sum _{i=s_{j-1}+1}^{s_j}x_i=\sum _{i=1}^kx_i. \end{aligned}$$

Clearly, the sum in the exponent of q may be expressed as:

$$\begin{aligned} \sum _{j=1}^k(k-j+1)x_j=\sum _{j=1}^r\sum _{i=s_{j-1}+1}^{s_j}(k-i+1)x_i=\sum _{j=1}^r\sum _{i=s_{j-1}+1}^{s_{j-1}+m_j}(k-i+1)x_i. \end{aligned}$$

Furthermore, replacing in the last inner sum the variable i by \(s_{j-1}+i\) and inserting into the resulting expression the variables \((y_1,y_2,\ldots ,y_r)\), we get

$$\begin{aligned} \sum _{j=1}^k(k-j+1)x_j&=\sum _{j=1}^r\sum _{i=1}^{m_j}(k-s_{j-1}-i+1)x_{s_{j-1}+i}\\&=\sum _{j=1}^r(k-s_j+1)\sum _{i=1}^{m_j}x_{s_{j-1}+i}+\sum _{j=1}^r\sum _{i=1}^{m_j-1}(m_j-i)x_{s_{j-1}+i}\\&=\sum _{j=1}^r(k-s_j+1)y_j+\sum _{j=1}^r\sum _{i=1}^{m_j-1}(m_j-i)x_{s_{j-1}+i}. \end{aligned}$$

Then, the probability function of the random vector \((Y_1,Y_2,\ldots ,Y_r)\) is given by

$$\begin{aligned} P(Y_1\!=\!y_1,Y_2\!=\!y_2,\ldots ,Y_r\!=\!y_r)&\!=\!\frac{q^{\sum _{j=1}^r(k-s_j+1)y_j-\left( {\begin{array}{c}n\\ 2\end{array}}\right) }}{\genfrac[]{0.0pt}{}{k+1}{n}_q} \sum q^{\sum _{j=1}^r\sum _{i=1}^{m_j-1}(m_j-i)x_{s_{j-1}+i}}\\&\!=\!\frac{q^{\sum _{j=1}^r(k-s_j+1)y_j-\left( {\begin{array}{c}n\\ 2\end{array}}\right) }}{\genfrac[]{0.0pt}{}{k+1}{n}_q} \prod _{j=1}^r\sum q^{\sum _{i=1}^{m_j-1}(m_j-i)x_{s_{j-1}+i}}, \end{aligned}$$

where the summation, in the last sum, is extended over all \(x_{s_{j-1}+i}=0,1\), for \(i=1,2,\ldots ,m_j-1\), with \(y_j-1\le \sum _{i=1}^{m_j-1}x_{s_{j-1}+i}\le y_j\), since the remaining variable \(x_{s_{j-1}+m_j}=y_j-\sum _{i=1}^{m_j-1}x_{s_{j-1}+i}\) must equal 0 or 1; in addition to these values, the summation in the first sum is extended to all \(j=1,2,\ldots ,r\). Furthermore, by (1),

$$\begin{aligned} \underset{y_j-1\le x_{s_{j-1}+1}+\cdots +x_{s_{j-1}+m_j-1}\le y_j}{\sum _{x_{s_{j-1}+i}=0,1,\;i=1,2,\ldots ,m_j-1,}} q^{\sum _{i=1}^{m_j-1}(m_j-i)x_{s_{j-1}+i}}=q^{\left( {\begin{array}{c}y_j\\ 2\end{array}}\right) }\genfrac[]{0.0pt}{}{m_j}{y_j}_q. \end{aligned}$$

Also, summing the relations

$$\begin{aligned} \left( {\begin{array}{c}n-z_j\\ 2\end{array}}\right) =\left( {\begin{array}{c}n-z_{j-1}\\ 2\end{array}}\right) -\left( {\begin{array}{c}y_j\\ 2\end{array}}\right) -y_j(n-z_j), \quad j=1,2,\ldots ,r, \end{aligned}$$

where \(z_j=\sum _{i=1}^jy_i\), \(j=1,2,\ldots ,r\), \(z_0=0\), and since

$$\begin{aligned} \left( {\begin{array}{c}n-z_r\\ 2\end{array}}\right)&=\left( {\begin{array}{c}n-y_1-y_2-\cdots -y_r\\ 2\end{array}}\right) =\left( {\begin{array}{c}n-x_1-x_2-\cdots -x_k\\ 2\end{array}}\right) \\&=\left( {\begin{array}{c}x_{k+1}\\ 2\end{array}}\right) =0,\quad \text {for} \ \ x_{k+1}=0,1, \end{aligned}$$

we get

$$\begin{aligned} \left( {\begin{array}{c}n\\ 2\end{array}}\right) -\sum _{j=1}^r\left( {\begin{array}{c}y_j\\ 2\end{array}}\right) =\sum _{j=1}^ry_j(n-z_j). \end{aligned}$$

Introducing into the last expression of \(P(Y_1=y_1,Y_2=y_2,\ldots ,Y_r=y_r)\) these two expressions, it reduces to the required formula (16).

(b) Summing the probability function of the random vector \((Y_1,Y_2,\ldots ,Y_r)\), for \(y_j=0,1,\ldots ,m_j\), \(j=\nu +1,\nu +2,\ldots ,r\), with \(\sum _{j=\nu +1}^ry_j\le n-z_\nu \),

$$\begin{aligned} P(Y_1=y_1,&\ldots ,Y_\nu =y_\nu )=q^{\sum _{j=1}^\nu (k-s_j-n+z_j+1)y_j}\prod _{j=1}^\nu \genfrac[]{0.0pt}{}{m_j}{y_j}_q \bigg /\genfrac[]{0.0pt}{}{k+1}{n}_q\\&\times \underset{y_{\nu +1}+\cdots +y_{r}\le n-z_\nu }{\sum _{y_{\nu +j}=0,1,\ldots ,m_{\nu +j},\;j=1,2,\ldots ,r-\nu ,}} q^{\sum _{j=\nu +1}^r(k-s_j-n+z_j+1)y_j}\prod _{j=\nu +1}^r\genfrac[]{0.0pt}{}{m_j}{y_j}_q, \end{aligned}$$

and using (5),

$$\begin{aligned} \underset{y_{\nu +1}+\cdots +y_{r}\le n-z_\nu }{\sum _{y_{\nu +j}=0,1,\ldots ,m_{\nu +j},\;j=1,2,\ldots ,r-\nu ,}} q^{\sum _{j=\nu +1}^r(k-s_j-n+z_j+1)y_j}\prod _{j=\nu +1}^r\genfrac[]{0.0pt}{}{m_j}{y_j}_q=\genfrac[]{0.0pt}{}{k-s_\nu +1}{n-z_\nu }_q, \end{aligned}$$

the probability function (17) is obtained.

(c) The conditional probability function of \((Y_{\nu +1}, Y_{\nu +2},\ldots ,Y_{\kappa })\), given that \((Y_1, Y_2,\ldots ,Y_\nu )=(y_1, y_2,\ldots ,y_\nu )\), is given by

$$\begin{aligned} P(Y_{\nu +1}\!=y_{\nu +1},\dots ,Y_\kappa \!=y_\kappa |&\,Y_1\!=y_1,\ldots ,Y_\nu \!=y_\nu )\\&=\frac{P(Y_1\!=y_1,Y_2\!=y_2,\ldots ,Y_\kappa \!=y_\kappa )}{P(Y_1\!=y_1,Y_2\!=y_2,\ldots ,Y_\nu \!=y_\nu )}. \end{aligned}$$

Then, using parts (a) and (b), formula (18) is readily deduced. \(\square \)
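As an illustrative numerical check of the grouped probability function just derived (not part of the proof; the helper names `qint` and `qbinom` are ours), the sketch below enumerates all Fermi–Dirac configurations, with urn \(c_{k+1}\) also holding 0 or 1 balls, and compares the induced distribution of \((Y_1,\ldots ,Y_r)\) with the product formula, using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product
from collections import defaultdict

def qint(m, q):
    # q-number [m]_q = 1 + q + ... + q^{m-1}
    return sum(q**i for i in range(m))

def qbinom(a, b, q):
    # Gaussian (q-binomial) coefficient
    out = Fraction(1)
    for i in range(1, b + 1):
        out *= Fraction(qint(a - b + i, q), qint(i, q))
    return out

k, n, q = 4, 3, Fraction(1, 3)
m = [2, 1, 1]                        # group sizes m_1, m_2, m_3 with s_r = k
svals = [sum(m[:j + 1]) for j in range(len(m))]   # s_1, ..., s_r
norm = qbinom(k + 1, n, q)
comb2 = n * (n - 1) // 2             # ordinary binomial coefficient (n choose 2)

dist = defaultdict(Fraction)
for x in product((0, 1), repeat=k):
    if not 0 <= n - sum(x) <= 1:     # urn c_{k+1} must also hold 0 or 1 balls
        continue
    e = sum((k - j) * x[j] for j in range(k))     # sum_{j=1}^k (k-j+1) x_j
    s, y = 0, []
    for mj in m:                     # group the x's into the y's
        y.append(sum(x[s:s + mj])); s += mj
    dist[tuple(y)] += q**(e - comb2) / norm

for y, p in dist.items():            # compare with the grouped formula
    z, e, w = 0, 0, Fraction(1)
    for j, yj in enumerate(y):
        z += yj                      # z_j = y_1 + ... + y_j
        e += (k - svals[j] - n + z + 1) * yj
        w *= qbinom(m[j], yj, q)
    assert p == q**e * w / norm
```

The choice \(k=4\), \(n=3\), \(m=(2,1,1)\) is arbitrary; any grouping with \(s_r=k\) can be checked the same way.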

4 q-Bose–Einstein Stochastic Model (Statistic)

Suppose now that n indistinguishable balls are randomly q-distributed, one after the other, into \(r=k+1\) distinguishable urns (cells) \(\{c_1,c_2,\ldots ,c_{k+1}\}\), with unlimited capacity. Let \(X_j\) be the number of balls placed in urn \(c_j\), for \(j=1,2,\ldots ,k+1\). Note that \(X_{k+1}=n-X_1-X_2-\cdots -X_k\). The distribution of the random vector \((X_1, X_2,\ldots ,X_k)\) is called Multivariate Discrete q-Uniform Distribution of the second kind, with parameters n and q. Its probability function is derived in the following theorem.

Theorem 7

The probability function of the multivariate discrete q-uniform distribution of the second kind, with parameters n and q, is given by

$$\begin{aligned} P(X_1=x_1,X_2=x_2,\ldots ,X_k=x_k)=q^{\sum _{j=1}^k(k-j+1)x_j}\bigg /\genfrac[]{0.0pt}{}{k+n}{n}_q, \end{aligned}$$
(19)

for \(x_j=0,1,\ldots ,n\) and \(j=1,2,\ldots ,k\), with \(\sum _{j=1}^k x_j\le n\), and \(0<q<1\) or \(1<q<\infty \).

Proof

A random q-distribution of n indistinguishable balls into the \(k+1\) distinguishable urns may be represented by the collection of n q-selected urns \(\{c_{i_1},c_{i_2},\ldots ,c_{i_n}\}\), with repetition, where the q-selection of an urn x times corresponds to the placement of x balls into it. The probability for such a q-distribution, on using (7) with \(r=k+1\), is given by

$$\begin{aligned} c\,q^{k-i_1+1}q^{k-i_2+1}\cdots q^{k-i_n+1}=c\,q^{(k+1)n-(i_1+i_2+\cdots +i_n)}, \end{aligned}$$

with \(1\le i_1\le i_2\le \cdots \le i_n\le k+1\). Clearly, the number \(x_j\) of balls q-distributed into urn \(c_j\) equals the number of variables \(i_1,i_2,\ldots ,i_n\) that are equal to j, for \(j=1,2,\ldots ,k+1\), with \(x_{k+1}=n-\sum _{j=1}^kx_j\). Also, the exponent of q in the expression of the preceding random q-distribution may be expressed as:

$$\begin{aligned} (k+1)n-\sum _{r=1}^ni_r=\sum _{j=1}^{k+1}(k+1)x_j-\sum _{j=1}^{k+1}jx_j=\sum _{j=1}^k(k-j+1)x_j \end{aligned}$$

and so

$$\begin{aligned} P(X_1=x_1,X_2=x_2,\ldots ,X_k=x_k)=c\,q^{\sum _{j=1}^k(k-j+1)x_j}, \end{aligned}$$

for \(x_j=0,1,\ldots ,n\) and \(j=1,2,\ldots ,k\), with \(\sum _{j=1}^k x_j\le n\). Summing these probabilities, using (2), and equating this sum to one, we get the expression \(c=1/\genfrac[]{0.0pt}{}{k+n}{n}_q\), which completes the derivation of (19). \(\square \)
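The normalization in this proof can be confirmed by brute-force enumeration. The following Python sketch is illustrative only (the helpers `qint` and `qbinom` are ours); q is kept rational so the sum is computed exactly:

```python
from fractions import Fraction
from itertools import product

def qint(m, q):
    # q-number [m]_q = 1 + q + ... + q^{m-1}
    return sum(q**i for i in range(m))

def qbinom(a, b, q):
    # Gaussian (q-binomial) coefficient
    out = Fraction(1)
    for i in range(1, b + 1):
        out *= Fraction(qint(a - b + i, q), qint(i, q))
    return out

def pmf19(x, k, n, q):
    # probability function (19); x = (x_1,...,x_k) as a 0-based tuple
    e = sum((k - j) * x[j] for j in range(k))   # sum_{j=1}^k (k-j+1) x_j
    return q**e / qbinom(k + n, n, q)

k, n, q = 3, 4, Fraction(1, 2)
total = sum(pmf19(x, k, n, q)
            for x in product(range(n + 1), repeat=k) if sum(x) <= n)
assert total == 1                    # the probabilities sum exactly to one
```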

The multivariate discrete q-uniform distribution of the second kind may also be obtained as the conditional distribution of k independent q-geometric random variables of the second kind, given the value of their sum with an additional, independent q-geometric random variable of the second kind, according to the following theorem.

Theorem 8

Consider a sequence of independent Bernoulli trials and assume that the conditional probability of success at a trial, given that \(j-1\) successes occur in the previous trials, is given by

$$\begin{aligned} p_j=1-\theta q^{j-1}, \ \ j=1,2,\ldots ,\ \ 0<\theta<1, \ \ 0<q<1 \ \ \text {or}\ \ 1<q<\infty , \end{aligned}$$

where, for \(1<q<\infty \), the number j of successes is restricted by \(j\le m=-\log \theta /\log q\). Let \(W_j\) be the number of failures after the \((j-1)\)th success and until the occurrence of the jth success, for \(j=1,2,\ldots ,k+1\), where \(k+1\le m\) in the case \(1<q<\infty \). Then, the conditional probability function of the random vector \((W_1,W_2,\ldots ,W_k)\), given that \(W_1+W_2+\cdots +W_{k+1}=n\), is the multivariate discrete q-uniform distribution of the second kind with probability function (19).

Proof

Clearly, the random variables \(W_j\), \(j=1,2,\ldots ,k+1\), are independent, with probability function,

$$\begin{aligned} P(W_j=w_j)=\big (\theta q^{j-1}\big )^{w_j}\big (1-\theta q^{j-1}\big ), \ \ w_j=0,1,\ldots ,\ \ j=1,2,\ldots ,k+1. \end{aligned}$$

Also, the probability function of the sum \(U_{k+1}=W_1+W_2+\cdots +W_{k+1}\), which is the number of failures until the occurrence of the \((k+1)\)th success, is

$$\begin{aligned} P(U_{k+1}=n)=\genfrac[]{0.0pt}{}{k+n}{n}_q\theta ^n \prod _{i=1}^{k+1}\big (1-\theta q^{i-1}\big ),\quad n=0,1,\ldots \,. \end{aligned}$$
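This expression for \(P(U_{k+1}=n)\) can be checked by exact convolution of the \(k+1\) geometric probability functions. The sketch below is illustrative only (the helpers `qint`, `qbinom`, and `pw` are ours); \(\theta \) and q are kept rational:

```python
from fractions import Fraction
from itertools import product

def qint(m, q):
    # q-number [m]_q = 1 + q + ... + q^{m-1}
    return sum(q**i for i in range(m))

def qbinom(a, b, q):
    # Gaussian (q-binomial) coefficient
    out = Fraction(1)
    for i in range(1, b + 1):
        out *= Fraction(qint(a - b + i, q), qint(i, q))
    return out

k, n = 2, 4
q, th = Fraction(1, 2), Fraction(1, 3)

def pw(j, w):
    # P(W_j = w) = (theta q^{j-1})^w (1 - theta q^{j-1})
    return (th * q**(j - 1))**w * (1 - th * q**(j - 1))

# convolve: sum the joint pmf over all (w_1,...,w_{k+1}) with total n
pu = Fraction(0)
for w in product(range(n + 1), repeat=k + 1):
    if sum(w) == n:
        t = Fraction(1)
        for j, wj in enumerate(w, start=1):
            t *= pw(j, wj)
        pu += t

closed = qbinom(k + n, n, q) * th**n
for i in range(1, k + 2):
    closed *= 1 - th * q**(i - 1)
assert pu == closed                  # matches the stated closed form
```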

Then, the joint conditional probability function of the random vector \((W_1,W_2,\ldots ,W_k)\), given that \(U_{k+1}=n\),

$$\begin{aligned} P(W_1\!=\!w_1,\ldots ,W_k\!=\!w_k|U_{k+1}\!=\!n)\!=\!\frac{P(W_1\!=\!w_1)\cdots P(W_k\!=\!w_k)P(W_{k+1}\!=\!n\!-\!u_k)}{P(U_{k+1}=n)}, \end{aligned}$$

with \(u_k=\sum _{j=1}^kw_j\), on using these expressions, is obtained as:

$$\begin{aligned} P(W_1=w_1,\ldots ,W_k=w_k|U_{k+1}=n)=q^{c_{n,k}(w_1,w_2,\ldots ,w_k)}\bigg /\genfrac[]{0.0pt}{}{k+n}{n}_q, \end{aligned}$$

where

$$\begin{aligned} c_{n,k}(w_1,w_2,\ldots ,w_k)=\sum _{j=1}^k(j-1)w_j-\sum _{j=1}^kkw_j+kn=-\sum _{j=1}^k(k-j+1)w_j+kn. \end{aligned}$$

Thus, since

$$\begin{aligned} q^{-kn}\genfrac[]{0.0pt}{}{k+n}{n}_q=\genfrac[]{0.0pt}{}{k+n}{n}_{q^{-1}}, \end{aligned}$$

it reduces to

$$\begin{aligned} P(W_1=w_1,\ldots ,W_k=w_k|U_{k+1}=n)=q^{-\sum _{j=1}^k(k-j+1)w_j} \bigg /\genfrac[]{0.0pt}{}{k+n}{n}_{q^{-1}} \end{aligned}$$

which is expression (19) with q replaced by \(q^{-1}\). \(\square \)
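Theorem 8 can also be verified numerically. The following illustrative Python sketch (helper names are ours) computes the conditional law of \((W_1,\ldots ,W_k)\) given \(U_{k+1}=n\) by exact enumeration and compares it, term by term, with (19) at \(q^{-1}\):

```python
from fractions import Fraction
from itertools import product

def qint(m, q):
    # q-number [m]_q = 1 + q + ... + q^{m-1}
    return sum(q**i for i in range(m))

def qbinom(a, b, q):
    # Gaussian (q-binomial) coefficient
    out = Fraction(1)
    for i in range(1, b + 1):
        out *= Fraction(qint(a - b + i, q), qint(i, q))
    return out

k, n = 2, 3
q, th = Fraction(1, 2), Fraction(1, 4)

def pw(j, w):
    # P(W_j = w) = (theta q^{j-1})^w (1 - theta q^{j-1})
    return (th * q**(j - 1))**w * (1 - th * q**(j - 1))

def joint(w):
    # P(W_1 = w_1, ..., W_{k+1} = w_{k+1}), by independence
    t = Fraction(1)
    for j, wj in enumerate(w, start=1):
        t *= pw(j, wj)
    return t

den = sum(joint(w) for w in product(range(n + 1), repeat=k + 1)
          if sum(w) == n)            # P(U_{k+1} = n)
qi = 1 / q                           # compare with (19) at q^{-1}
pairs = []
for w in product(range(n + 1), repeat=k):
    u = sum(w)
    if u <= n:
        cond = joint(w + (n - u,)) / den
        e = sum((k - j) * w[j] for j in range(k))   # sum (k-j+1) w_j
        pairs.append((cond, qi**e / qbinom(k + n, n, qi)))
assert all(a == b for a, b in pairs)
```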

Certain marginal and conditional distributions of the multivariate q-uniform distribution of the second kind are derived in the following theorem.

Theorem 9

Assume that the random vector \((X_1,X_2,\ldots ,X_k)\) obeys a multivariate discrete q-uniform distribution of the second kind. Then, the probability function of

(a) the marginal distribution of \((X_1,X_2,\ldots ,X_r)\), for \(1\le r<k\), is given by

$$\begin{aligned} P(X_1=x_1,\ldots ,X_r=x_r)=q^{\sum _{j=1}^r(k-j+1)x_j} \genfrac[]{0.0pt}{}{k-r+n-y_r}{n-y_r}_q\bigg /\genfrac[]{0.0pt}{}{k+n}{n}_q, \end{aligned}$$
(20)

for \(x_j=0,1,\ldots ,n\), \(j=1,2,\ldots ,r\), with \(\sum _{j=1}^rx_j\le n\), where \(y_r=\sum _{j=1}^rx_j\),

(b) the conditional distribution of the random vector \((X_{r+1}, X_{r+2},\ldots ,X_{m})\), given that \((X_1, X_2,\ldots ,X_r)=\) \((x_1, x_2,\ldots ,x_r)\), for \(1\le r<m\le k\), is given by

$$\begin{aligned} P(X_{r+1}=x_{r+1},&\ldots ,X_m=x_m|X_1=x_1,\ldots ,X_r=x_r)\nonumber \\&=q^{\sum _{j=r+1}^m(k-j+1)x_j} \genfrac[]{0.0pt}{}{k-m+n-y_m}{n-y_m}_q\bigg /\genfrac[]{0.0pt}{}{k-r+n-y_r}{n-y_r}_q, \end{aligned}$$
(21)

for \(x_j=0,1,\ldots ,n-y_r\), \(j=r+1,r+2,\ldots ,m\), with \(\sum _{j=r+1}^mx_j\le n-y_r\), where \(y_j=\sum _{i=1}^jx_i\), \(j=r,m\).

Proof

(a) Summing the probability function of the multivariate discrete q-uniform distribution of the second kind, for \(x_j=0,1,\ldots ,n-y_r\), \(j=r+1,r+2\), \(\ldots ,k\), with \(\sum _{j=r+1}^kx_j\le n-y_r\), we get the expression

$$\begin{aligned} P(X_1\!=x_1,\ldots ,&\,X_r\!=x_r)\!=\!q^{\sum _{j=1}^r(k-j+1)x_j}\\&\times \underset{x_{r+1}+x_{r+2}+\cdots +x_k\le n-y_r}{\sum _{x_{r+j}=0,1,\ldots ,n-y_r\; j=1,2,\ldots ,k-r,}} q^{\sum _{j=1}^{k-r}(k-r-j+1)x_{r+j}}\bigg /\genfrac[]{0.0pt}{}{k+n}{n}_q. \end{aligned}$$

Then, the multiple sum, using (2), equals

$$\begin{aligned} \underset{x_{r+1}+x_{r+2}+\cdots +x_k\le n-y_r}{\sum _{x_{r+j}=0,1,\ldots ,n-y_r\; j=1,2,\ldots ,k-r,}} q^{\sum _{j=1}^{k-r}(k-r-j+1)x_{r+j}}=\genfrac[]{0.0pt}{}{k-r+n-y_r}{n-y_r}_q, \end{aligned}$$

and the last expression of the probability function reduces to (20).

(b) The conditional probability function of \((X_{r+1}, X_{r+2},\ldots ,X_m)\), given that \((X_1, X_2,\ldots ,X_r)=\) \((x_1, x_2,\ldots ,x_r)\), is given by

$$\begin{aligned} P(X_{r+1}\!=x_{r+1},\dots ,X_{m}\!=x_{m}|&\,X_1\!=x_1,\ldots ,X_r\!=x_r)\\&=\frac{P(X_1\!=x_1,X_2\!=x_2,\ldots ,X_m\!=x_m)}{P(X_1\!=x_1,X_2\!=x_2,\ldots ,X_r\!=x_r)}. \end{aligned}$$

Then, using the result of part (a), we conclude that

$$\begin{aligned} P(X_{r+1}=x_{r+1},&\ldots ,X_m=x_m|X_1=x_1,\ldots ,X_r=x_r)\\&=q^{\sum _{j=r+1}^m(k-j+1)x_j} \genfrac[]{0.0pt}{}{k-m+n-y_m}{n-y_m}_q\bigg /\genfrac[]{0.0pt}{}{k-r+n-y_r}{n-y_r}_q, \end{aligned}$$

for \(x_j=0,1,\ldots ,n-y_r\), \(j=r+1,r+2,\ldots ,m\), with \(\sum _{j=r+1}^mx_j\le n-y_r\), where \(y_j=\sum _{i=1}^jx_i\). \(\square \)
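The marginal formula (20) lends itself to the same kind of brute-force verification. The sketch below (illustrative; helper names are ours) sums the joint probability function (19) over the trailing coordinates and compares the result with (20), exactly:

```python
from fractions import Fraction
from itertools import product

def qint(m, q):
    # q-number [m]_q = 1 + q + ... + q^{m-1}
    return sum(q**i for i in range(m))

def qbinom(a, b, q):
    # Gaussian (q-binomial) coefficient
    out = Fraction(1)
    for i in range(1, b + 1):
        out *= Fraction(qint(a - b + i, q), qint(i, q))
    return out

k, n, r, q = 3, 3, 2, Fraction(2, 3)
norm = qbinom(k + n, n, q)

def pmf19(x):                        # probability function (19)
    return q**sum((k - j) * x[j] for j in range(k)) / norm

pairs = []
for head in product(range(n + 1), repeat=r):
    yr = sum(head)
    if yr > n:
        continue
    marg = sum(pmf19(head + tail)    # sum out x_{r+1}, ..., x_k
               for tail in product(range(n + 1), repeat=k - r)
               if sum(tail) <= n - yr)
    e = sum((k - j) * head[j] for j in range(r))    # sum (k-j+1) x_j
    pairs.append((marg, q**e * qbinom(k - r + n - yr, n - yr, q) / norm))
assert all(a == b for a, b in pairs)
```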

As regards the derivation of the q-factorial moments of the multivariate q-uniform distribution of the second kind, and especially the q-means, q-variances, and q-covariances, attention may be restricted to the marginal distribution of the random vector \((X_1, X_2)\). Its probability function is given by

$$\begin{aligned} P(X_1=x_1,X_2=x_2)=q^{kx_1+(k-1)x_2} \genfrac[]{0.0pt}{}{k-2+n-x_1-x_2}{n-x_1-x_2}_q\bigg /\genfrac[]{0.0pt}{}{k+n}{n}_q, \end{aligned}$$
(22)

for \(x_1=0,1,\ldots ,n\) and \(x_2=0,1,\ldots ,n\), with \(x_1+x_2\le n\). The q-factorial moments of the random vector \((X_1, X_2)\) are derived in the following theorem. These moments may be suitably rephrased as conditional q-factorial moments of \((X_{\nu +1}, X_{\nu +2})\), given that \((X_1, X_2,\ldots ,X_\nu )=(x_1, x_2,\ldots ,x_\nu )\), with \(1\le \nu \le k-2\).

Theorem 10

Suppose that the probability function of the random vector \((X_1, X_2)\) is given by (22). Then,

$$\begin{aligned} E([X_1]_{i_1,q})=\frac{[n]_{i_1,q}[i_1]_q!q^{ki_1}}{[k+i_1]_{i_1,q}},\quad i_1=1,2,\ldots ,n, \end{aligned}$$
(23)

and

$$\begin{aligned} V([X_1]_{q})=\frac{[n]_{q}[k]_q[n+k+1]_{q}q^{k}}{[k+1]^2_{q}[k+2]_{q}}. \end{aligned}$$
(24)

Also,

$$\begin{aligned} E\big (q^{i_2X_1}[X_2]_{i_2,q}\big ) =\frac{[n]_{i_2,q}[i_2]_q!q^{(k-1)i_2}}{[k+i_2]_{i_2,q}},\quad i_2=0,1,\ldots ,n, \end{aligned}$$
(25)

and

$$\begin{aligned} C\big ([X_1]_{q},q^{X_1}[X_2]_{q}\big ) =-\frac{[n]_q[n+k+1]_{q}q^{2k-1}}{[k+1]^2_{q}[k+2]_{q}}. \end{aligned}$$
(26)

Proof

The marginal probability function of \(X_1\) is

$$\begin{aligned} P(X_1=x_1)=q^{kx_1}\genfrac[]{0.0pt}{}{k-1+n-x_1}{n-x_1}_q\bigg / \genfrac[]{0.0pt}{}{k+n}{n}_q, \ \ x_1=0,1,\ldots ,n, \end{aligned}$$

and its q-factorial moments are given by

$$\begin{aligned} E([X_1]_{i_1,q})&=\sum _{x_1=i_1}^nq^{kx_1}[x_1]_{i_1,q}\genfrac[]{0.0pt}{}{k-1+n-x_1}{n-x_1}_q\bigg / \genfrac[]{0.0pt}{}{k+n}{n}_q\\&=[i_1]_q!\sum _{x_1=i_1}^nq^{kx_1}\genfrac[]{0.0pt}{}{x_1}{x_1-i_1}_q\genfrac[]{0.0pt}{}{k-1+n-x_1}{n-x_1}_q\bigg / \genfrac[]{0.0pt}{}{k+n}{n}_q. \end{aligned}$$

Setting \(r=x_1-i_1\) and using the q-Cauchy’s formula,

$$\begin{aligned} \sum _{r=0}^nq^{(k-m)r}\genfrac[]{0.0pt}{}{m+r}{r}_q\genfrac[]{0.0pt}{}{k-m+n-r-1}{n-r}_q =\genfrac[]{0.0pt}{}{k\!+n}{n}_q, \end{aligned}$$

with \(m=i_1\) and k and n replaced by \(k+i_1\) and \(n-i_1\), respectively, the last expression reduces to (23). Also, the q-variance of \(X_1\),

$$\begin{aligned} V([X_1]_{q})=qE([X_1]_{2,q})+E([X_1]_{q})-\big [E([X_1]_{q})\big ]^2, \end{aligned}$$

using (23) is obtained as:

$$\begin{aligned} V([X_1]_{q})&=\frac{[n]_{2,q}[2]_qq^{2k+1}}{[k+2]_{2,q}}+\frac{[n]_qq^k}{[k+1]_q}-\frac{[n]^2_qq^{2k}}{[k+1]^2_q}\\&=\frac{[n]_qq^k\big ([n\!-\!1]_q[k+1]_q(1\!+q)q^{k+1}\!+\![k\!+1]_q[k+2]_q\!-\![n]_q[k\!+\!2]_qq^k\big )}{[k\!+\!1]^2_q[k\!+\!2]_q}. \end{aligned}$$

Since

$$\begin{aligned}{}[n-1]_q[k+1]_qq^{k+1}-[n]_q[k+2]_qq^k=\frac{[n+k+1]_qq^{k+1}-[n+k+1]_qq^{k}}{1-q} \end{aligned}$$

and

$$\begin{aligned}{}[n-1]_q[k+1]_qq^{k+2}+[k+1]_q[k+2]_q=\frac{[n+k+1]_q-[n+k+1]_qq^{k+1}}{1-q}, \end{aligned}$$

the last expression reduces to (24).
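The q-Cauchy’s formula invoked in this proof is itself easy to confirm numerically. The following illustrative Python sketch (helper names are ours) checks it exactly, with rational q, over a small grid of parameters:

```python
from fractions import Fraction

def qint(m, q):
    # q-number [m]_q = 1 + q + ... + q^{m-1}
    return sum(q**i for i in range(m))

def qbinom(a, b, q):
    # Gaussian (q-binomial) coefficient
    out = Fraction(1)
    for i in range(1, b + 1):
        out *= Fraction(qint(a - b + i, q), qint(i, q))
    return out

q = Fraction(1, 2)
cases = []
for k in range(1, 5):
    for n in range(5):
        for m in range(k):           # m = 0, 1, ..., k-1
            lhs = sum(q**((k - m) * r) * qbinom(m + r, r, q)
                      * qbinom(k - m + n - r - 1, n - r, q)
                      for r in range(n + 1))
            cases.append((lhs, qbinom(k + n, n, q)))
assert all(l == r for l, r in cases)
```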

Furthermore, the expected value of \(q^{i_2X_1}[X_2]_{i_2, q}\) may be derived by using the relation:

$$\begin{aligned} E\big (q^{i_2X_1}[X_2]_{i_2, q}\big )=E\big [E\big (q^{i_2X_1}[X_2]_{i_2, q}|X_1\big )\big ]. \end{aligned}$$

Since the conditional probability function of \(X_2\), given that \(X_1=x_1\),

$$\begin{aligned} P(X_2=x_2|X_1=x_1)=q^{(k-1)x_2}\genfrac[]{0.0pt}{}{k-2+n-x_1-x_2}{n-x_1-x_2}_{q}\bigg / \genfrac[]{0.0pt}{}{k-1+n-x_1}{n-x_1}_{q}, \end{aligned}$$

\(x_2=0,1,\ldots ,n-x_1\), is of the same form as the probability function of \(X_1\), with the parameters k and n replaced by \(k-1\) and \(n-x_1\), respectively, it follows that

$$\begin{aligned} E\big ([X_2]_{i_2,q}|X_1=x_1\big )=\frac{[n-x_1]_{i_2,q}[i_2]_q!q^{(k-1)i_2}}{[k+i_2-1]_{i_2, q}}. \end{aligned}$$

Also, the expected value of \(q^{i_2X_1}[n-X_1]_{i_2,q}\) is given by

$$\begin{aligned} E\big (q^{i_2X_1}[n-X_1]_{i_2,q}\big )&=\sum _{x_1=0}^{n-i_2}q^{(k+i_2)x_1}[n-x_1]_{i_2,q} \genfrac[]{0.0pt}{}{k-1+n-x_1}{n-x_1}_{q}\bigg /\genfrac[]{0.0pt}{}{k+n}{n}_{q}\\&=[k+i_2-1]_{i_2,q}\sum _{x_1=0}^{n-i_2}q^{(k+i_2)x_1} \genfrac[]{0.0pt}{}{k-1+n-x_1}{n-i_2-x_1}_{q}\bigg /\genfrac[]{0.0pt}{}{k+n}{n}_{q}. \end{aligned}$$

Thus, using the above q-Cauchy’s formula with \(m=0\) and k and n replaced by \(k+i_2\) and \(n-i_2\), respectively, the last expression reduces to

$$\begin{aligned} E\big (q^{i_2X_1}[n-X_1]_{i_2,q}\big )=\frac{[n]_{i_2,q}[k+i_2-1]_{i_2,q}}{[k+i_2]_{i_2,q}} \end{aligned}$$

and so

$$\begin{aligned} E\big (q^{i_2X_1}[X_2]_{i_2,q}\big )&=E\big [E\big (q^{i_2X_1}[X_2]_{i_2,q}|X_1\big )\big ] =\frac{[i_2]_q!q^{(k-1)i_2}}{[k+i_2-1]_{i_2,q}}E\big (q^{i_2X_1}[n-X_1]_{i_2,q}\big )\\&=\frac{[i_2]_q!q^{(k-1)i_2}}{[k+i_2-1]_{i_2,q}}\cdot \frac{[n]_{i_2,q}[k+i_2-1]_{i_2,q}}{[k+i_2]_{i_2,q}} =\frac{[n]_{i_2,q}[i_2]_q!q^{(k-1)i_2}}{[k+i_2]_{i_2,q}}. \end{aligned}$$

Similarly, the expected value of the q-function \(q^{i_2X_1}[X_1]_{i_1,q}[X_2]_{i_2,q}\) may be evaluated by using the relation:

$$\begin{aligned} E\big (q^{i_2X_1}[X_1]_{i_1,q}[X_2]_{i_2,q}\big )=E\big [E\big (q^{i_2X_1}[X_1]_{i_1,q}[X_2]_{i_2,q}|X_1\big )\big ]. \end{aligned}$$

Clearly,

$$\begin{aligned} E\big (q^{i_2X_1}[X_1]_{i_1,q}\,[n\!-\!X_1]_{i_2,q}\big )&\!=\!\sum _{x_1=i_1}^{n-i_2}q^{(k+i_2)x_1}[x_1]_{i_1,q}[n\!-\!x_1]_{i_2,q} \frac{\genfrac[]{0.0pt}{}{k\!-\!1\!+\!n\!-\!x_1}{n\!-\!x_1}_{q}}{\genfrac[]{0.0pt}{}{k\!+\!n}{n}_{q}}\\&=[i_1]_q![k\!+\!i_2\!-\!1]_{i_2,q}\sum _{x_1=i_1}^{n-i_2}q^{(k+i_2)x_1}\frac{\genfrac[]{0.0pt}{}{x_1}{x_1-i_1}_{q} \genfrac[]{0.0pt}{}{k\!-\!1\!+\!n\!-\!x_1}{n\!-\!i_2\!-\!x_1}_{q}}{\genfrac[]{0.0pt}{}{k\!+\!n}{n}_{q}} \end{aligned}$$

and using the above q-Cauchy’s formula with \(m=i_1\) and k and n replaced by \(k+i_1+i_2\) and \(n-i_1-i_2\), respectively, the last expression reduces to

$$\begin{aligned} E\big (q^{i_2X_1}[X_1]_{i_1,q}[n-X_1]_{i_2,q}\big ) =\frac{[n]_{i_1+i_2,q}[i_1]_q![k+i_2-1]_{i_2,q}q^{(k+i_2)i_1}}{[k+i_1+i_2]_{i_1+i_2,q}}, \end{aligned}$$

whence

$$\begin{aligned} E\big (q^{i_2X_1}[X_1]_{i_1,q}[X_2]_{i_2,q}\big )&=E\big [E\big (q^{i_2X_1}[X_1]_{i_1,q}[X_2]_{i_2,q}|X_1\big )\big ]\\&=\frac{[i_2]_q!q^{(k-1)i_2}}{[k+i_2-1]_{i_2,q}}E\big (q^{i_2X_1}[X_1]_{i_1,q}[n-X_1]_{i_2,q}\big )\\&=\frac{[n]_{i_1+i_2,q}[i_1]_q![i_2]_q!q^{k(i_1+i_2)+(i_1-1)i_2}}{[k+i_1+i_2]_{i_1+i_2,q}}. \end{aligned}$$

The covariance of \([X_1]_q\) and \(q^{X_1}[X_2]_q\) is given by

$$\begin{aligned} C\big ([X_1]_{q},q^{X_1}[X_2]_{q}\big )&=E\big (q^{X_1}[X_1]_{q}[X_2]_{q}\big ) -E\big ([X_1]_{q}\big )E\big (q^{X_1}[X_2]_{q}\big )\\&=\frac{[n]_{q}[n-1]_{q}q^{2k}}{[k+2]_{q}[k+1]_{q}}-\frac{[n]^2_{q}q^{2k-1}}{[k+1]^2_{q}}\\&=\frac{[n]_{q}q^{2k-1}\big ([n-1]_{q}[k+1]_{q}q-[n]_{q}[k+2]_{q}\big )}{[k+1]^2_{q}[k+2]_{q}}. \end{aligned}$$

Using the relations \([n-1]_{q}=[n]_{q}-q^{n-1}\) and \([k+1]_{q}q=[k+2]_{q}-1\), we get

$$\begin{aligned}{}[n-1]_{q}[k+1]_{q}q-[n]_{q}[k+2]_{q}&=[n]_q[k+2]_q-[n-1]_{q}-[k+2]_{q}q^{n-1}-[n]_q[k+2]_q\\&=-[n+k+1]_q \end{aligned}$$

and the last expression of the covariance reduces to (26). \(\square \)
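The closed forms (23)–(26) can be verified directly against the probability function (22). The following Python sketch (illustrative only, small parameters, exact rational q; the helpers `qint`, `qff`, `qfact`, `qbinom` are ours) computes each expectation by enumeration:

```python
from fractions import Fraction

def qint(m, q):
    # q-number [m]_q = 1 + q + ... + q^{m-1}
    return sum(q**i for i in range(m))

def qff(x, i, q):
    # q-falling factorial [x]_{i,q} = [x]_q [x-1]_q ... [x-i+1]_q
    out = Fraction(1)
    for t in range(i):
        out *= qint(x - t, q)
    return out

def qfact(i, q):
    # q-factorial [i]_q!
    return qff(i, i, q)

def qbinom(a, b, q):
    # Gaussian (q-binomial) coefficient
    out = Fraction(1)
    for i in range(1, b + 1):
        out *= Fraction(qint(a - b + i, q), qint(i, q))
    return out

k, n, q = 3, 4, Fraction(1, 2)
norm = qbinom(k + n, n, q)
pts = [(x1, x2) for x1 in range(n + 1) for x2 in range(n + 1 - x1)]

def p22(x1, x2):                     # probability function (22)
    return (q**(k * x1 + (k - 1) * x2)
            * qbinom(k - 2 + n - x1 - x2, n - x1 - x2, q) / norm)

def E(f):                            # expectation under (22)
    return sum(f(x1, x2) * p22(x1, x2) for x1, x2 in pts)

for i1 in range(1, n + 1):           # check (23)
    assert E(lambda a, b: qff(a, i1, q)) == \
        qff(n, i1, q) * qfact(i1, q) * q**(k * i1) / qff(k + i1, i1, q)

m1 = E(lambda a, b: qint(a, q))      # E([X_1]_q)
v = q * E(lambda a, b: qff(a, 2, q)) + m1 - m1**2   # q-variance, check (24)
assert v == qint(n, q) * qint(k, q) * qint(n + k + 1, q) * q**k \
    / (qint(k + 1, q)**2 * qint(k + 2, q))

for i2 in range(n + 1):              # check (25)
    assert E(lambda a, b: q**(i2 * a) * qff(b, i2, q)) == \
        qff(n, i2, q) * qfact(i2, q) * q**((k - 1) * i2) / qff(k + i2, i2, q)

cov = E(lambda a, b: q**a * qint(a, q) * qint(b, q)) \
    - m1 * E(lambda a, b: q**a * qint(b, q))        # check (26)
assert cov == -qint(n, q) * qint(n + k + 1, q) * q**(2 * k - 1) \
    / (qint(k + 1, q)**2 * qint(k + 2, q))
```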

The interesting probabilistic behaviour of groups of successive urns (energy levels) is presented in the following theorem.

Theorem 11

Suppose that the random vector \((X_1,X_2,\ldots ,X_k)\) obeys a multivariate discrete q-uniform distribution of the second kind and consider the random variables

$$\begin{aligned} Y_j=\sum _{i=s_{j-1}+1}^{s_j}X_i=\sum _{i=1}^{m_j}X_{s_{j-1}+i},\ \ j=1,2,\ldots ,r, \end{aligned}$$

where \(m_i\), \(i=1,2,\ldots ,r\), are positive integers and \(s_j=\sum _{i=1}^jm_i\), \(j=1,2,\ldots ,r\), with \(s_r=k\), and \(s_0=0\). Then, the probability function of

(a) the distribution of the random vector \((Y_1,Y_2,\ldots ,Y_r)\) is given by

$$\begin{aligned} P(Y_1=y_1,\ldots ,Y_r=y_r) =q^{\sum _{j=1}^r(k-s_j+1)y_j}\prod _{j=1}^r\genfrac[]{0.0pt}{}{m_j+y_j-1}{y_j}_q\bigg /\genfrac[]{0.0pt}{}{k+n}{n}_q, \end{aligned}$$
(27)

for \(y_j=0,1,\ldots ,n\), \(j=1,2,\ldots ,r\), with \(\sum _{j=1}^ry_j\le n\),

(b) the marginal distribution of the random vector \((Y_1,Y_2,\ldots ,Y_\nu )\), for \(1\le \nu <r\), is given by

$$\begin{aligned} P(Y_1\!=\!y_1,\ldots ,Y_\nu \!=\!y_\nu ) \!=\!q^{\sum _{j=1}^\nu (k-s_j+1)y_j}\frac{\displaystyle \prod _{j=1}^\nu \genfrac[]{0.0pt}{}{m_j\!+\!y_j\!-\!1}{y_j}_q \genfrac[]{0.0pt}{}{k\!-\!s_\nu \!+\!n\!-\!z_\nu }{n\!-\!z_\nu }_q}{\displaystyle \genfrac[]{0.0pt}{}{k+n}{n}_q}, \end{aligned}$$
(28)

for \(y_j=0,1,\ldots ,n\), \(j=1,2,\ldots ,\nu \), with \(\sum _{j=1}^\nu y_j\le n\) and \(z_\nu =\sum _{i=1}^\nu y_i\),

(c) the conditional distribution of the random vector \((Y_{\nu +1},Y_{\nu +2},\ldots ,Y_\kappa )\), given that \((Y_1,Y_2,\ldots ,Y_\nu )=(y_1,y_2,\ldots ,y_\nu )\), for \(1\le \nu <\kappa \le r\), is given by

$$\begin{aligned} P(Y_{\nu +1}\!=\!y_{\nu +1},&\ldots ,Y_\kappa \!=\!y_\kappa |Y_1\!=\!y_1,\ldots ,Y_\nu \!=\!y_\nu )\!=\!q^{\sum _{j=\nu +1}^\kappa (k-s_j+1)y_j}\nonumber \\&\times \prod _{j=\nu +1}^\kappa \genfrac[]{0.0pt}{}{m_j\!+\!y_j\!-\!1}{y_j}_q \genfrac[]{0.0pt}{}{k\!-\!s_\kappa \!+\!n\!-\!z_\kappa }{n\!-\!z_\kappa }_q \bigg /\genfrac[]{0.0pt}{}{k\!-\!s_\nu \!+\!n\!-\!z_\nu }{n\!-\!z_\nu }_q, \end{aligned}$$
(29)

for \(y_j=0,1,\ldots ,n-z_\nu \), \(j=\nu +1,\nu +2,\ldots ,\kappa \), with \(\sum _{j=\nu +1}^\kappa y_j\le n-z_\nu \) and \(z_j=\sum _{i=1}^j y_i\), \(j=\nu ,\nu +1,\ldots ,\kappa \).

Proof

(a) The probability function of the random vector \((Y_1,Y_2,\ldots ,Y_r)\) is derived from the probability function

$$\begin{aligned} P(X_1=x_1,X_2=x_2,\ldots ,X_k=x_k)=q^{\sum _{j=1}^k(k-j+1)x_j}\bigg /\genfrac[]{0.0pt}{}{k+n}{n}_q, \end{aligned}$$

by inserting into it the r new variables \((y_1,y_2,\ldots ,y_r)\) and summing the resulting expression over all the remaining \(k-r\) old variables. Clearly, the exponent of q may be expressed as:

$$\begin{aligned} \sum _{j=1}^k(k-j+1)x_j=\sum _{j=1}^r\sum _{i=s_{j-1}+1}^{s_j}(k-i+1)x_i=\sum _{j=1}^r\sum _{i=s_{j-1}+1}^{s_{j-1}+m_j}(k-i+1)x_i. \end{aligned}$$

Furthermore, replacing in the last inner sum the variable i by \(s_{j-1}+i\) and inserting into the resulting expression the variables \((y_1,y_2,\ldots ,y_r)\), we get

$$\begin{aligned} \sum _{j=1}^k(k-j+1)x_j&=\sum _{j=1}^r\sum _{i=1}^{m_j}(k-s_{j-1}-i+1)x_{s_{j-1}+i}\\&=\sum _{j=1}^r(k-s_j+1)\sum _{i=1}^{m_j}x_{s_{j-1}+i}+\sum _{j=1}^r\sum _{i=1}^{m_j-1}(m_j-i)x_{s_{j-1}+i}\\&=\sum _{j=1}^r(k-s_j+1)y_j+\sum _{j=1}^r\sum _{i=1}^{m_j-1}(m_j-i)x_{s_{j-1}+i}. \end{aligned}$$

Then, the probability function of the random vector \((Y_1,Y_2,\ldots ,Y_r)\) is given by

$$\begin{aligned} P(Y_1\!=y_1,Y_2\!=y_2,\ldots ,Y_r\!=y_r)&\!=\!\frac{q^{\sum _{j=1}^r(k-s_j+1)y_j}}{\genfrac[]{0.0pt}{}{k+n}{n}_q} \sum q^{\sum _{j=1}^r\sum _{i=1}^{m_j-1}(m_j-i)x_{s_{j-1}+i}}\\&\!=\!\frac{q^{\sum _{j=1}^r(k-s_j+1)y_j}}{\genfrac[]{0.0pt}{}{k+n}{n}_q} \prod _{j=1}^r\sum q^{\sum _{i=1}^{m_j-1}(m_j-i)x_{s_{j-1}+i}}, \end{aligned}$$

where the summation, in the last sum, is extended over all \(x_{s_{j-1}+i}=0,1,\ldots ,y_j\), for \(i=1,2,\ldots ,m_j-1\), with \(\sum _{i=1}^{m_j-1}x_{s_{j-1}+i}\le y_j\); in addition to these values, the summation in the first sum is extended to all \(j=1,2,\ldots ,r\). Since, by (2),

$$\begin{aligned} \underset{x_{s_{j-1}+1}+\cdots +x_{s_{j-1}+m_j-1}\le y_j}{\sum _{x_{s_{j-1}+i}=0,1,\ldots ,y_j\;i=1,2,\ldots ,m_j-1,}} q^{\sum _{i=1}^{m_j-1}(m_j-i)x_{s_{j-1}+i}}=\genfrac[]{0.0pt}{}{m_j+y_j-1}{y_j}_q, \end{aligned}$$

the last expression reduces to (27).

(b) Summing the probability function of the random vector \((Y_1,Y_2,\ldots ,Y_r)\), for \(y_j=0,1,\ldots ,n-z_\nu \), \(j=\nu +1,\nu +2,\ldots ,r\), with \(\sum _{j=\nu +1}^ry_j\le n-z_\nu \),

$$\begin{aligned} P(Y_1=y_1,\ldots ,Y_\nu&=y_\nu )=q^{\sum _{j=1}^\nu (k-s_j+1)y_j}\prod _{j=1}^\nu \genfrac[]{0.0pt}{}{m_j+y_j-1}{y_j}_q \bigg /\genfrac[]{0.0pt}{}{k+n}{n}_q\\&\times \underset{y_{\nu +1}+\cdots +y_{r}\le n-z_\nu }{\underset{j=1,2,\ldots ,r-\nu ,}{\sum _{y_{\nu +j}=0,1,\ldots ,n-z_\nu ,}}} q^{\sum _{j=\nu +1}^r(k-s_j+1)y_j}\prod _{j=\nu +1}^r\genfrac[]{0.0pt}{}{m_j\!+\!y_j\!-\!1}{y_j}_q, \end{aligned}$$

and using (6),

$$\begin{aligned} \underset{y_{\nu +1}+\cdots +y_{r}\le n-z_\nu }{\underset{j=1,2,\ldots ,r-\nu ,}{\sum _{y_{\nu +j}=0,1,\ldots ,n-z_\nu ,}}} q^{\sum _{j=\nu +1}^r(k-s_j+1)y_j}\prod _{j=\nu +1}^r\genfrac[]{0.0pt}{}{m_j\!+\!y_j\!-\!1}{y_j}_q \!=\!\genfrac[]{0.0pt}{}{k\!-\!s_\nu \!+\!n\!-\!z_\nu }{n-z_\nu }_q, \end{aligned}$$

the probability function (28) is obtained.

(c) The conditional probability function of \((Y_{\nu +1}, Y_{\nu +2},\ldots ,Y_{\kappa })\), given that \((Y_1, Y_2,\ldots ,Y_\nu )=(y_1, y_2,\ldots ,y_\nu )\), is given by

$$\begin{aligned} P(Y_{\nu +1}=y_{\nu +1},\dots ,Y_{\kappa }=y_{\kappa }|&\,Y_1=y_1,\ldots ,Y_\nu =y_\nu )\\&=\frac{P(Y_1=y_1,Y_2=y_2,\ldots ,Y_\kappa =y_\kappa )}{P(Y_1=y_1,Y_2=y_2,\ldots ,Y_\nu =y_\nu )}. \end{aligned}$$

Then, using parts (a) and (b), the required formula (29) is readily deduced. \(\square \)
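As with the first kind, the grouped probability function (27) can be verified by brute force. The illustrative Python sketch below (helper names are ours; exact rational q) enumerates all Bose–Einstein configurations and compares the induced distribution of \((Y_1,\ldots ,Y_r)\) with (27):

```python
from fractions import Fraction
from itertools import product
from collections import defaultdict

def qint(m, q):
    # q-number [m]_q = 1 + q + ... + q^{m-1}
    return sum(q**i for i in range(m))

def qbinom(a, b, q):
    # Gaussian (q-binomial) coefficient
    out = Fraction(1)
    for i in range(1, b + 1):
        out *= Fraction(qint(a - b + i, q), qint(i, q))
    return out

k, n, q = 4, 3, Fraction(1, 2)
m = [2, 1, 1]                        # group sizes with s_r = k
svals = [sum(m[:j + 1]) for j in range(len(m))]   # s_1, ..., s_r
norm = qbinom(k + n, n, q)

dist = defaultdict(Fraction)
for x in product(range(n + 1), repeat=k):
    if sum(x) > n:
        continue
    p = q**sum((k - j) * x[j] for j in range(k)) / norm   # pmf (19)
    s, y = 0, []
    for mj in m:                     # group the x's into the y's
        y.append(sum(x[s:s + mj])); s += mj
    dist[tuple(y)] += p

for y, p in dist.items():            # compare with formula (27)
    e = sum((k - svals[j] + 1) * y[j] for j in range(len(m)))
    w = Fraction(1)
    for j, yj in enumerate(y):
        w *= qbinom(m[j] + yj - 1, yj, q)
    assert p == q**e * w / norm
```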