1 Introduction

We consider the question of how to obtain an arbitrary two-dimensional Dirichlet distribution as the limit of a sequence of discrete distributions constructed by means of multiplicative functions. The Dirichlet distribution is a multivariate generalization of the Beta distribution, and the two-dimensional Dirichlet distribution is usually called the bivariate Beta distribution [1].

Let \(a, b, c\) be positive constants and

The bivariate Beta distribution concentrated on the triangle E(1, 1) is defined by the distribution function

where \( \Gamma \) denotes the Gamma function. The one-dimensional Dirichlet distribution is the well-known Beta law with distribution function
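The displayed formulas above did not survive extraction, but both laws are classical. As a non-authoritative numerical companion, the following sketch evaluates the standard bivariate Beta density \( \frac{\Gamma (a+b+c)}{\Gamma (a)\Gamma (b)\Gamma (c)}\, u^{a-1}v^{b-1}(1-u-v)^{c-1} \) on the triangle \(E(1,1)=\{(u,v): u,v\geqslant 0,\ u+v\leqslant 1\}\) and samples from the corresponding Dirichlet law; the parameter values are arbitrary test choices.

```python
# A minimal sketch (not from the paper): the classical bivariate Beta density
# on the triangle E(1,1) and Dirichlet sampling; parameters are test values.
import numpy as np
from math import gamma

def bivariate_beta_pdf(u, v, a, b, c):
    """Gamma(a+b+c)/(Gamma(a)Gamma(b)Gamma(c)) * u^(a-1) v^(b-1) (1-u-v)^(c-1)."""
    if u <= 0 or v <= 0 or u + v >= 1:
        return 0.0  # the density vanishes off the open triangle
    const = gamma(a + b + c) / (gamma(a) * gamma(b) * gamma(c))
    return const * u**(a - 1) * v**(b - 1) * (1 - u - v)**(c - 1)

# Sampling: if (U, V, W) is Dirichlet(a, b, c), then (U, V) has the density above.
rng = np.random.default_rng(0)
uv = rng.dirichlet([2.0, 3.0, 1.5], size=10_000)[:, :2]
print(bivariate_beta_pdf(0.3, 0.4, 2.0, 3.0, 1.5), uv.mean(axis=0))
```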

We note that Beta distributions appear as finite-dimensional distributions of the Poisson–Dirichlet process (see [19]). In this paper we generate two-dimensional Dirichlet vectors using arguments of probabilistic number theory. We construct a sequence of two-dimensional vectors, defined via multiplicative functions, whose average distribution functions converge to a two-dimensional Dirichlet distribution, and we estimate the rate of convergence.

In the sequel we will use the following notation: p denotes a prime, \(d, q, k, m, n\in {\mathbb {N}}\). In asymptotic relations it is assumed that \( x\rightarrow \infty \). The letters c and C, with or without subscripts, denote constants.

Let \( f_i:{\mathbb {N}}\rightarrow {\mathbb {R}}\), \( i=1, \dots , k-1 \), be non-negative multiplicative functions. Define the multiplicative function \(T_k\) by

where the sum is taken over all ordered collections \((j_1,j_2,\dots , j_{k-1})\). If all the multiplicative functions \( f_i\equiv 1 \), this function coincides with the classical function \(\tau _k\), which counts the number of ordered factorisations of \( n\in {\mathbb {N}}\) into k factors
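To make the combinatorics concrete, here is a small sketch (our illustration, not the authors' code) of the weighted function \(T_k\) and its specialization \(\tau _k\); the recursion simply peels off one factor at a time.

```python
# Sketch of T_k(n) = sum over ordered factorisations n = j_1 ... j_k of
# f_1(j_1) * ... * f_{k-1}(j_{k-1}); with all f_i = 1 it reduces to tau_k(n).
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def T(n, fs):
    """fs = [f_1, ..., f_{k-1}]; the last factor j_k carries no weight."""
    if not fs:               # k = 1: the only ordered factorisation is (n)
        return 1
    return sum(fs[0](d) * T(n // d, fs[1:]) for d in divisors(n))

tau_3 = lambda n: T(n, [lambda d: 1, lambda d: 1])
print([tau_3(n) for n in range(1, 11)])  # [1, 3, 3, 6, 3, 9, 3, 10, 6, 9]
```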

The first attempt to simulate the Arcsine law, that is, the Beta law with distribution function \( \tfrac{2}{\pi }\arcsin \sqrt{u} \), by means of the divisor function was made in [12]. Later other authors considered this problem on various subsets of the natural numbers, for example, on the set of numbers free of large prime factors (see [7, 18]), in short intervals (see [2, 8, 9, 13, 15]), and on the set of square-free natural numbers in short intervals [14].

In [16] it was pointed out that, using sequences of discrete distributions constructed by multiplicative functions from certain classes, one can simulate the Beta distribution. This idea was realized in the papers [4, 6, 10, 11].

The bivariate Beta distribution as a limit of a sequence of discrete distributions defined via multiplicative functions was first considered by Nyandwi and Smati. They proved in [17] that using the divisor function \( \tau _3(n)\) one can model a distribution which, as was noted in [5], turned out to be the two-dimensional Dirichlet distribution. In the paper [11] it was shown that using the divisor function \(\tau _2(n)\) one can model the Dirichlet distribution. In [5] we showed that by means of multiplicative functions one can model one-parameter Dirichlet distributions, \( 0<a<1/2 \).

In this paper we show that, taking a different construction of the distribution function and using some ideas of [5, 10], we can model the bivariate Beta distribution for any collection of positive parameters \(a, b, c\). We note that new and interesting questions arise if this problem is extended to special subsets of the natural numbers.

In the following we will need the multiplicative function \( T_3 \). Note that

$$\begin{aligned} T_3(p^m)= \sum \limits _{i=0}^mf_1(p^i)\sum \limits _{k=i}^mf_2(p^{m-k}). \end{aligned}$$
(1.1)
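Identity (1.1) follows by grouping the ordered factorisations \(p^m = p^i p^j p^{m-i-j}\) according to the exponent of the first factor. The short check below (with hypothetical test weights of our own choosing, indexed by the exponent e of \(p^e\)) confirms it numerically against the direct double sum over \(i+j\leqslant m\).

```python
# Numerical sanity check of (1.1) at prime powers; f1, f2 below are arbitrary
# test weights (value at p^e as a function of the exponent e), not the paper's.
f1 = lambda e: 1.0 / (e + 1)
f2 = lambda e: 2.0 ** (-e)

def T3_via_formula(m):
    # right-hand side of (1.1): sum_{i=0}^{m} f1(p^i) sum_{k=i}^{m} f2(p^{m-k})
    return sum(f1(i) * sum(f2(m - k) for k in range(i, m + 1)) for i in range(m + 1))

def T3_direct(m):
    # direct sum of f1(p^i) f2(p^j) over ordered factorisations p^i p^j p^{m-i-j}
    return sum(f1(i) * f2(j) for i in range(m + 1) for j in range(m + 1 - i))

assert all(abs(T3_via_formula(m) - T3_direct(m)) < 1e-12 for m in range(10))
```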

Let us introduce the random vectors \((X_n; Y_n)\), which take values

when \(d_1, d_2\) run through all divisors of n with uniform probability \( 1/T_3(n) \). The distribution function of the vector \((X_n; Y_n)\) is
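The display with the values of \((X_n; Y_n)\) did not survive extraction. In the classical case \( f_1=f_2\equiv 1 \) (so \( T_3=\tau _3 \)), the constructions in the literature cited above take the values \((\ln d_1/\ln n,\ \ln d_2/\ln n)\) over pairs \(d_1, d_2\) with \(d_1 d_2 \mid n\), each with probability \(1/\tau _3(n)\). Under that reading, a sketch of the empirical \(F_n\):

```python
# Sketch of F_n(u, v) for f_1 = f_2 = 1, assuming the values
# (ln d1 / ln n, ln d2 / ln n) over pairs with d1*d2 | n -- our reading of the
# construction (the paper's display is missing here); each pair has weight 1/tau_3(n).
from math import log

def F_n(n, u, v):
    pairs = [(d1, d2)
             for d1 in range(1, n + 1) if n % d1 == 0
             for d2 in range(1, n // d1 + 1) if (n // d1) % d2 == 0]
    hits = sum(1 for d1, d2 in pairs
               if log(d1) <= u * log(n) and log(d2) <= v * log(n))
    return hits / len(pairs)  # len(pairs) = tau_3(n)

print(F_n(360, 0.5, 0.5), F_n(360, 1.0, 1.0))  # F_n(1, 1) = 1 always
```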

It is easy to check that the sequence of distribution functions \(F_n\) does not converge pointwise (see [17]). Therefore, following [4], we consider the corresponding Cesàro mean

(1.2)

here g is some multiplicative function and

In this paper we show that if the multiplicative functions \( g, f_1, f_2 \) satisfy certain regularity conditions, then the corresponding Cesàro mean (1.2) approaches a Dirichlet distribution function \(D(u, v; a, b, c)\). We note that any Dirichlet distribution can be modeled by a suitable choice of multiplicative functions.

2 Results

Definition 2.1

We say that a multiplicative function \(g :{\mathbb {N}}\rightarrow [0; \infty )\) belongs to the class , for some constants \( \varkappa , \delta \geqslant 0\), if the function

for some \(0<c\leqslant 1/2\), has an analytic continuation P(s) into the region

where P(s) is holomorphic and for some \(c_0\geqslant 0\).

Definition 2.2

We say that a pair of non-negative multiplicative functions \((\varphi , g)\) belongs to the class if and for some \(C>0\) and all integers \(0\leqslant j\leqslant k\).

Note

We say that a multiplicative function if .

The aim of this paper is to prove the following result.

Theorem 2.3

Let the multiplicative functions \( g, f_1, f_2 :{\mathbb {N}}\rightarrow [0, \infty )\) be such that , , for some \(\beta , \gamma >0\) and \(\beta +\gamma < \alpha \), \(0\leqslant \delta _1+\delta _2+ \delta _3<1\). Then for all \(u, v\in [0, 1]\),

Here

and

Unless otherwise indicated, here and in what follows we assume that the implicit constants in \( \ll \) or depend at most on the parameters and constants involved in the definitions of the corresponding classes and .

Example 2.4

Consider a Dirichlet distribution with arbitrary positive parameters \(a, b, c\). Let us find multiplicative functions \( g, f_1, f_2\) such that as \(x\rightarrow \infty \). Assume that these functions are strongly multiplicative with non-negative constant values on the primes, say \( g(p)=z_0\), \(f_1(p)= z_1\), \(f_2(p)= z_2\). Then,

By Theorem 2.3 the limit distribution of \(S_x\) becomes provided \(z_0= a+b+c\), \( z_1= a/c\), \(z_2= b/c \).
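Since the display (1.2) is not reproduced above, the following sketch rests on explicit assumptions: we read \(S_x(u,v)\) as the g-weighted mean \(\sum _{n\leqslant x} g(n)F_n(u,v)\big /\sum _{n\leqslant x} g(n)\), with each pair \((d_1,d_2)\), \(d_1d_2\mid n\), carrying weight \(f_1(d_1)f_2(d_2)/T_3(n)\) (for \(f_i\equiv 1\) this reduces to the uniform \(1/T_3(n)\) of the text). With strongly multiplicative functions taking the values \(z_0=a+b+c\), \(z_1=a/c\), \(z_2=b/c\) on the primes, \(S_x(u,v)\) should then drift towards \(D(u,v;a,b,c)\) as x grows, although only at a logarithmic rate.

```python
# Hedged sketch of Example 2.4; the exact form of (1.2) is our assumption.
from math import log

def omega(n):  # number of distinct prime factors of n
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

def S_x(x, u, v, a, b, c):
    z0, z1, z2 = a + b + c, a / c, b / c          # parameter choice of Example 2.4
    g  = lambda n: z0 ** omega(n)                 # strongly multiplicative: value
    f1 = lambda n: z1 ** omega(n)                 # z on every prime power p^m
    f2 = lambda n: z2 ** omega(n)
    num = den = 0.0
    for n in range(2, x + 1):                     # start at 2 to avoid ln 1 = 0
        pairs = [(d1, d2) for d1 in range(1, n + 1) if n % d1 == 0
                 for d2 in range(1, n // d1 + 1) if (n // d1) % d2 == 0]
        T3 = sum(f1(d1) * f2(d2) for d1, d2 in pairs)
        Fn = sum(f1(d1) * f2(d2) for d1, d2 in pairs
                 if log(d1) <= u * log(n) and log(d2) <= v * log(n)) / T3
        num += g(n) * Fn
        den += g(n)
    return num / den  # expected to approach D(u, v; a, b, c) slowly (log rate)

print(S_x(500, 0.5, 0.5, 2.0, 3.0, 1.5))
```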

Example 2.5

Suppose that \( g(n)= \mu ^2(n) \) and \( f_1(n)= f_2(n)\equiv 1 \). In this case we have that \( T_3(n)\equiv \tau _3(n) \), and , , . By the classical formula for the number of square-free integers (see e.g. [20, Theorem 3.10])

Then Theorem 2.3 yields
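As a quick illustration of the classical ingredient in this example, the number of square-free \(n\leqslant x\) equals \((6/\pi ^2)\,x + O(\sqrt{x})\); the direct check below is our sketch (the paper only cites [20, Theorem 3.10]).

```python
# Numerical check of the square-free count Q(x) = (6/pi^2) x + O(sqrt(x))
# used in Example 2.5.
from math import pi, isqrt

def is_squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

x = 100_000
Q = sum(1 for n in range(1, x + 1) if is_squarefree(n))
print(Q, 6 * x / pi**2, (Q - 6 * x / pi**2) / isqrt(x))  # last ratio stays bounded
```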

3 Preliminaries

For \(\varkappa >0 \) and any multiplicative function \(\theta \) set

Note that \(A(\varkappa ,\theta )>0 \) when , \( 0\leqslant \delta <1\).

Lemma 3.1 in [3] and Lemma 1 in [5] imply the following.

Lemma 3.1

Assume that , \(\varkappa >0\) and \(0\leqslant \delta <1\). Then, uniformly for all \(x\geqslant 1\) and \(d\in {\mathbb {N}}\),

where the multiplicative functions \({\widetilde{h}}\) and \({\widehat{h}}\) are defined by

(3.1)

Here \(\sigma _0=\sigma (0)\) and \(c_1\geqslant 0\) is a constant depending on the parameters \(c, \varkappa \) and C of the classes and . Moreover

(3.2)

Remark 3.2

If , then

(3.3)

for any \( k\in {\mathbb {N}}\). Hence and . In the sequel we will often use this property.

For \( 0\leqslant u\leqslant w\leqslant 1\), \( x\geqslant 1\), \( b\in {\mathbb {R}}\) we set

This sum may be evaluated in terms of the integral

(3.4)

provided some information about the behaviour of the sum

is given.

For any \(y>0\) and \(a\in {\mathbb {R}}\) let us define

In addition, we assume that \(\lambda (0,a)=\infty \) when \(a\geqslant 1\), and \(\lambda (0,a)=0\) otherwise.

For \( x\geqslant \mathrm {e}\), \( 0\leqslant w\leqslant 1\), , \( a_1, a_2\in {\mathbb {R}}\) we set ,

Note that \(\lambda (\eta _x, a)=l_x(a) \ln ^{a-1} x \).

The following consequence of [4, Lemmas 3 and 4] will be applied to evaluate the sum \({\mathfrak {S}}_x(0,w,b)\).

Lemma 3.3

Assume that \(x\geqslant \mathrm {e}\) and

$$\begin{aligned} \biggl | M(v)- \frac{Av}{\ln ^a (\mathrm {e}v)}\biggr |\leqslant \frac{Bv}{\ln ^{a+1} (\mathrm {e}v)} \end{aligned}$$

for some \(A, a\in {\mathbb {R}}\), \( B\geqslant 0 \), and all \(1\leqslant v\leqslant x \). Then

(3.5)

The implicit constant in (3.5) depends at most on a and b.

We will need some estimates of the integrals

It is easy to see that

(3.6)

The following four lemmas can be proved by repeating the corresponding arguments in the proofs of [5, Lemmas 4–7].

Lemma 3.4

If \(a,b,c \in (-\infty ; 1)\) and \( 0\leqslant \eta \leqslant 1 \), then

(3.7)

uniformly for \(u,v\in [0, 1]\), \( u+v\leqslant 1\). The constants in \(\ll \) depend on \(a, b, c\) only.

Lemma 3.5

Let \(0\leqslant \varepsilon \leqslant 1/4\), \(0 \leqslant \eta \leqslant 1\), \( \lambda (a) =\lambda (\varepsilon +\eta ,a)\). If \( c< 2 \), then

uniformly for \(u,v\in [\varepsilon , 1]\), \( u+v\leqslant 1\). The constants in \(\ll \) depend at most on \(a, b, c\).

Lemma 3.6

Let \(0\leqslant \varepsilon \leqslant 1/4\), \(0 \leqslant \eta \leqslant 1\), \( \lambda (a)=\lambda (\varepsilon +\eta ,a)\). Then

uniformly for \(u \in [\varepsilon , 1-2\varepsilon ]\). The constant in \(\ll \) depends at most on \(a, b, c\).

Lemma 3.7

Let \( 0 \leqslant \eta \leqslant 1\) and \(a,b,c \in (-\infty , 1) \). Then

(3.8)

uniformly for \(u,v\in [0, 1]\), \( u+v\leqslant 1\). The constants in \(\ll \) depend at most on \(a, b, c\).

Note

Evaluating in Lemmas 3.5 and 3.6 we assume that .

Combining Lemmas 3.3 and 3.1 we obtain the following result.

Lemma 3.8

Assume that \( b\in {\mathbb {R}}\) and for some \( a<1\), \( 0\leqslant \delta <1 \). Then for \(q\in {\mathbb {N}}\), \( x\geqslant \mathrm {e}\), \( \eta _x\leqslant w\leqslant 1\), \( 0<t\leqslant x^{1-w} \), we have

Moreover,

(3.9)

The multiplicative functions \({\widetilde{h}}\) and \({\widehat{h}}\) are defined in (3.1). The implicit constants in \( \ll \) and depend at most on b and the parameters of class .

Proof

Using the notation of Lemma 3.3 and taking , we can write

(3.10)

where \( z= w\frac{\ln x}{\ln ({x}/{t})} \). By Lemma 3.1 with \( d=q\), having in mind that , we get

Therefore we may evaluate \( {\mathfrak {S}}_{{x}/{t}}( 0, z, b) \) by means of Lemma 3.3. Then (3.10) becomes

where the integral I is defined in (3.4). Changing the integration variable gives

This proves the first relation of the lemma.

It remains to prove the estimate (3.9). By (3.2) we have

Therefore (3.9) follows from (3.10) and (3.5) by taking \(A=0\) and choosing \(a-1\) instead of a. \(\square \)

In the next two lemmas we consider a triplet of non-negative multiplicative functions \((\varphi _1, \varphi _2, \theta )\) satisfying the conditions

(3.11)

for some \( a<1\), \( d<1\), ;

(3.12)

for some \(C_1>0\) and all non-negative integers \(i, j, k\) such that \(i+j\leqslant k\).

Remark 3.9

Note that (3.1), (3.3), (3.11) and (3.12) imply

The same relations hold for \({\widehat{h}}\) instead of \({\widetilde{h}}\).

Lemma 3.10

Assume that \(x\geqslant \mathrm {e}\), \( u, v\geqslant 0\), \( u+v\leqslant 1 \) and the multiplicative functions \((\varphi _1, \varphi _2, \theta )\) satisfy (3.11) and (3.12).

If \(b<2\), then

(3.13)

Moreover, if \(b<1\), then

$$\begin{aligned} E_x(u,v,b;\varphi _1 , \varphi _2, \theta ) = A^*\, \frac{J_1(0,\eta _x,u,v,d,a,b)}{(\ln x ) ^{a+b +d-2}} + R_E, \end{aligned}$$
(3.14)

here , and

(3.15)

The implicit constants in \( \ll \) depend at most on \(b, C_1\) and the parameters of classes .

Proof

Let \(b\in {\mathbb {R}}\). Firstly assume that \( u\leqslant \eta _x \). Then using (3.9), (3.3) and (3.12) we have

If \( v\leqslant \eta _x \), then

$$\begin{aligned} E_x(u,v,b; \varphi _1, \varphi _2 , \theta ) \ll \rho _x(1,1; d-1, b ) , \end{aligned}$$

since \( E_x(u,v,b; \varphi _1, \varphi _2 , \theta )= E_x(v,u,b; \varphi _2, \varphi _1 , \theta ) \). Thus

$$\begin{aligned} E_x(u,v,b; \varphi _1, \varphi _2 , \theta ) \ll \rho _x(1,1; a-1, b )+ \rho _x(1,1; d-1, b ) \end{aligned}$$
(3.16)

uniformly in .

Assume that . Then \( u,v\in (\eta _x; 1-\eta _x) \), since \(u+v\leqslant 1\).

It is easy to see that

Having this in mind and using Lemma 3.8 we get

(3.17)

By Remark 3.9, . Then using Lemma 3.8 once again we get

Therefore the term \(S_1\) in (3.17) becomes

(3.18)

where ,

(3.19)

We have

$$\begin{aligned} \int \limits _{0}^{v}\frac{\mathrm {d}s}{(\eta _x +s)^a (1-s) ^{\gamma _0} }\ll 1+\lambda (\eta _x , \gamma _0 ). \end{aligned}$$
(3.20)

Similarly,

$$\begin{aligned} \int \limits _{0}^v \frac{ r_{x^{1-s}}(d,b)\, \mathrm {d}s}{(\eta _x + s)^a}\ll \frac{l_x(d+1,b)}{\ln ^{d+b}x}\,(1+ \lambda (\eta _x,d+b)). \end{aligned}$$

This estimate, together with (3.20), (3.19) and (3.18), yields

(3.21)

Since (see Remark 3.9), we can employ (3.9) to estimate the remainder term \( R_1 \) in (3.17). We have

(3.22)
(3.23)

and

(3.24)

Taking into account the last three estimates we obtain that the remainder term in (3.17) can be estimated by

(3.25)

Note that . When , from (3.17), (3.21) and (3.25) we deduce that the remainder term in (3.14) is

(3.26)

Therefore the estimate (3.13) follows from (3.26) by means of (3.16) and Lemma 3.5 provided \(b<2\).

When \(b<1\) the estimate (3.26) implies (3.15). Note that in this case (3.15) easily follows from (3.16) and (3.7) if . \(\square \)

Lemma 3.11

Assume that \( b\in {\mathbb {R}}\) and the multiplicative functions \((\varphi _1, \varphi _2, \theta )\) satisfy (3.11) and (3.12). Then uniformly for \( 0\leqslant u\leqslant 1-\eta _x, \)

The implicit constant in depends at most on \(b, C_1\) and the parameters of classes .

Proof

By Lemma 3.8 we get

(3.27)

We have that (see Remark 3.9). Applying Lemma 3.8 and relations (3.22), (3.24) we obtain

Set and

where \(\omega (t)= 1-{\ln t}/{\ln x} \). The main term in (3.27) can be written as

(3.28)

Partial summation yields

(3.29)

For \( 0\leqslant u\leqslant 1-\eta _x \) and \(a<1\) we have

$$\begin{aligned} (1-u)^{a+b-1}I_3(x^u)=I\biggl (0,1,a,b,\frac{\eta _x}{1-u}\biggr ) \ll 1+\lambda \biggl (\frac{\eta _x}{1-u}, b\biggr ). \end{aligned}$$

It follows from Remark 3.9 that Therefore the last estimate and Lemma 3.1 yield

If \(b\ne 1\), then this estimate becomes

If \(b= 1\), then similarly

Estimating the last summand separately for \(u\in [0, 1/2]\) and \(u\in [1/2, 1-\eta _x]\), we obtain

Thus for any \(b\in {\mathbb {R}}\),

$$\begin{aligned} S_{21} \ll (\ln x) ^{a+b-d-1} +(1+\ln ^{-d}x)(1+l_x(b)\ln ^{b-1}x). \end{aligned}$$
(3.30)

Evaluating \(I_3'(s)\) and using Lemma 3.1 we deduce

$$\begin{aligned} S_{23}\ll I(0, u, d, a, \eta _x) \ln ^{b-d} x+ b J_2(0,\eta _x, u,d,a,b+1) \ln ^{-d}x. \end{aligned}$$

In view of Lemma 3.6 we have

$$\begin{aligned} bJ_2(0,\eta _x,u,d,a,b+1) \ll 1+\ln ^{b} x . \end{aligned}$$

Therefore

(3.31)

To evaluate the second term in (3.29) we use Lemma 3.1 once again. Since we get

Using Lemma 3.6 we obtain

(3.32)

Combining the estimates (3.30), (3.31) and (3.32) with (3.29), (3.28) and (3.27), and having in mind the estimate of \(R_2 \), we obtain the assertion of the lemma. \(\square \)

4 Proof of the main theorem

Let us start the proof of Theorem 2.3 with the following

Remark 4.1

The conditions of Theorem 2.3 and (1.1) imply and ; moreover, there exists a constant \( C_2\geqslant 0 \) such that

$$\begin{aligned} f_1(p^i)f_2(p^j)\,\frac{g}{T_3}(p^k) \leqslant C_2, \end{aligned}$$
(4.1)

for all \(i,j,k \geqslant 0\), \(i+j\leqslant k \).

Setting

we have

(4.2)

For \(i=1, 2\) set

Then

(4.3)

Similarly,

(4.4)

By Remark 4.1 we have . Then using Lemma 3.1 we get

where

Remarks 3.9 and 4.1 give us . According to Lemma 3.1 we have

(4.5)

where the multiplicative function \(h_2\) is defined by . We note that (3.3) and (4.1) imply

Then, having in mind the assumptions of the theorem, one can show that \( g/T_3\) and . Moreover, by Lemma 3.1,

(4.6)

since . Thus (4.5) becomes

$$\begin{aligned} R_1(u, v)\ll ( \ln x)^{1-\alpha } (1 +u\ln x)^{\beta -1}Z_{x^{1-u}}(1,1,1,1-\gamma ; 1, h_2). \end{aligned}$$
(4.7)

Therefore, taking \( a=1-\beta \), \( b=1-\gamma \), \( d=1-\alpha +\beta +\gamma \) in (3.23), we obtain

$$\begin{aligned} R_1(u, v)\ll \ln ^{\beta -\alpha } x + \ln ^{-\beta } x + \ln ^{-1} x \end{aligned}$$
(4.8)

uniformly for \( 0\leqslant u \leqslant 1- \eta _x \).

If \( 1-\eta _x < u \leqslant 1 \), then we estimate in (4.7) using (3.9) with \( a=1-\alpha +\beta +\gamma \), \(b=1-\gamma \) and get

$$\begin{aligned} R_1(u,v)\ll \ln ^{\beta -\alpha } x. \end{aligned}$$

Thus (4.8) is valid uniformly for \( u,v\in [0, 1] \). Similarly,

$$\begin{aligned} R_2(u, v)\ll \ln ^{\gamma -\alpha } x + \ln ^{-\gamma } x +\ln ^{-1} x \end{aligned}$$

uniformly for \( u,v\in [0, 1] \).

Taking into account that \(\alpha -\beta -\gamma >0\), the estimates for \(R_1\) and \(R_2 \), together with (4.2), (4.3) and (4.4), yield

(4.9)

uniformly for \( u,v\in [0, 1] \). Here .

Consider the first summand of (4.2). Changing the order of summation we have

(4.10)

Since applying Lemma 3.1 we obtain

(4.11)

where

By Remark 3.2 we see that .

Let us split the unit square into two parts, \( K=K_1\cup K_2\), where and .

1. Firstly we consider the case where \((u,v)\in K_1\). Then (4.11) and (4.10) yield

here and below . Now applying (4.6), (3.8) and Lemma 3.10 with \( a=1-\gamma \), \( d=1-\beta \), \(b=1-\alpha +\beta +\gamma \) we deduce

(4.12)

uniformly for \((u,v)\in K_1\). Here

Taking into account (1.1) we get

Thus

This, together with (4.12) and (4.9), completes the proof of Theorem 2.3 in the region \( K_1\).

2. Let \((u,v)\in K_2\). If , then, taking into account Remarks 3.2, 3.9 and 4.1, from (4.10), (4.11), (3.9) and (4.6) we obtain

(4.13)

Consider the case \( u > \eta _x \) and \( v > \eta _x \). For any \( t\in [0,1] \) we define

Then

$$\begin{aligned} H(u,v)= H(1,1)-V_1 (v)+H(1-v,v)-V_2(u)+H(u,1-u). \end{aligned}$$

By definition, \( S_x(1,1)=1 \). Hence from (4.9) it follows that

(4.14)

Since , by Lemma 3.1,

where \(E_x^*\) is defined in Lemma 3.11. Therefore, having in mind (4.6) and using Lemmas 3.11, 3.6 and 3.7, we obtain

where . Analogously,

From this, (4.12) and (4.14) we get

(4.15)

From (3.6) we have

$$\begin{aligned} BJ_2(0,0,1,1-\beta ,1-\gamma , b)=1. \end{aligned}$$

Hence the main term in (4.15) is equal to \(D(u,v;\beta ,\gamma ,1-b)\). Moreover, by Lemma 3.4,

Thus (4.13) and (4.15) yield

uniformly for \( (u,v)\in K_2 \).

The proof of Theorem 2.3 now follows from this estimate, (4.12) and (4.9).