1. Introduction

In recent years, various degenerate versions of many special numbers and polynomials have been studied, not only for their number-theoretic and combinatorial interest but also for their applications to other areas, including probability, quantum mechanics and differential equations. This exploration of degenerate versions began with the work of Carlitz on the degenerate Bernoulli and degenerate Euler polynomials in [4]. It is remarkable that many different tools, such as generating functions, combinatorial methods, \(p\)-adic analysis, umbral calculus, operator theory, differential equations, special functions, probability theory and analytic number theory, are employed in this quest (see [5, 9, 11-19] and the references therein).

Let \(Y\) be a random variable satisfying the moment conditions (see (15)). The aim of this paper is to introduce and study probabilistic versions of the degenerate Stirling numbers of the second kind and the degenerate Bell polynomials, namely the probabilistic degenerate Stirling numbers of the second kind associated with \(Y\) and the probabilistic degenerate Bell polynomials associated with \(Y\), which are also degenerate versions of the probabilistic Stirling numbers of the second kind and the probabilistic Bell polynomials considered in [2]. The definitions of those numbers and polynomials in (20) and (23) are very natural, as one can readily see. Then we derive some properties, explicit expressions, certain identities and recurrence relations for those numbers and polynomials. In addition, we consider the special cases where \(Y\) is the Poisson random variable with parameter \(\alpha (>0)\) and the Bernoulli random variable with probability of success \(p\), and derive several identities.

The outline of this paper is as follows. In Section 1, we recall the degenerate exponentials and degenerate logarithms. We remind the reader of the partial and complete Bell polynomials. We recall the degenerate Stirling polynomials and numbers of the second kind together with their explicit expressions. We also remind the reader of the degenerate Bell polynomials \(\phi_{n,\lambda}(x)\). Assume that \(Y\) is a random variable such that the moment generating function of \(Y\), \(E[e^{tY}]=\sum_{n=0}^{\infty}\frac{t^{n}}{n!}E[Y^{n}], \quad (|t| <r)\), exists for some \(r >0\). Let \((Y_{j})_{j\ge 1}\) be a sequence of mutually independent copies of the random variable \(Y\), and let \(S_{k}=Y_{1}+Y_{2}+\cdots+Y_{k},\,\, (k \ge 1)\), with \(S_{0}=0\). Then we recall the probabilistic Stirling numbers of the second kind associated with \(Y\), \(S_{Y}(n,m)\), which are defined in terms of the \(n\)th moments of \(S_{k},\,\,(k=0,1,\dots,m)\), (see [2]). Section 2 contains the main results of this paper. Let \((Y_{j})_{j \ge1}\) and \(S_{k},\,\, (k=0,1,\dots)\), be as above. We first define the probabilistic degenerate Stirling numbers of the second kind associated with the random variable \(Y\), \({n \brace k}_{Y,\lambda}\), as a degenerate version of \(S_{Y}(n,k)\). We derive for \({n \brace k}_{Y,\lambda}\) an explicit expression in Theorem 2.1 and an expression in terms of the partial Bell polynomials in Theorem 2.10. Next, we define the probabilistic degenerate Bell polynomials associated with the random variable \(Y\), \(\phi_{n,\lambda}^{Y}(x)\), as a natural extension of the numbers \({n \brace k}_{Y,\lambda}\). Then we derive for \(\phi_{n,\lambda}^{Y}(x)\) a Dobinski-like formula in Theorem 2.2 and an expression in terms of the partial Bell polynomials in Theorem 2.3. We obtain for \(\phi_{n,\lambda}^{Y}(x)\) a recurrence formula in Theorem 2.4 and a binomial identity in Theorem 2.5.
In Theorem 2.6, three identities all related to the partial Bell polynomials are obtained. An expression for the \(k\)th derivative of \(\phi_{n,\lambda}^{Y}(x)\) is derived in Theorem 2.7. In Theorems 2.8 and 2.9, some identities involving the degenerate Stirling polynomials of the second kind and the generalized falling factorials are obtained. Let \(Y\) be the Poisson random variable with parameter \(\alpha (>0)\). Then we derive an identity involving \(\phi_{n,\lambda}(k\alpha)\) and \({m \brace k}_{Y,\lambda},\,\,(k=0,1,\dots,m)\), in Theorem 2.11 and an expression of \(\phi_{n,\lambda}^{Y}(x)\) in terms of \({n \brace k}_{\lambda}\) and \(\phi_{k}(x)\) in Theorem 2.12. Let \(Y\) be the Bernoulli random variable with probability of success \(p\). Then we show \(\phi_{n,\lambda}^{Y}(x)=\phi_{n,\lambda}(px)\) in Theorem 2.13. Also, we find an integral representation for the finite sum \(\frac{1}{k+1}\Big((1)_{n,\lambda}+(2)_{n,\lambda}+\cdots+(k)_{n,\lambda}\Big)\) in Theorem 2.14.

For any nonzero \(\lambda\in\mathbb{R}\), the degenerate exponentials are defined by

$$e_{\lambda}^{x}(t)=(1+\lambda t)^{\frac{x}{\lambda}}=\sum_{k=0}^{\infty}\frac{(x)_{k,\lambda}}{k!}t^{k},\quad (\mathrm{see}\ [4,10,19]), $$
(1)

where the generalized falling factorials are given by

$$(x)_{0,\lambda}=1,\quad (x)_{n,\lambda}=x(x-\lambda)(x-2\lambda)\cdots(x-(n-1)\lambda),\quad (n\ge 1). $$
(2)

For \(x=1\), we write \(e_{\lambda}(t)=e_{\lambda}^{1}(t)\). Let \(\log_{\lambda}(t)\) be the degenerate logarithm which is the compositional inverse of \(e_{\lambda}(t)\) satisfying

$$e_{\lambda}\big(\log_{\lambda}(t)\big)=\log_{\lambda}(e_{\lambda}(t))=t.$$

Then we have

$$\log_{\lambda}(1+t)=\sum_{k=1}^{\infty}\frac{t^{k} \lambda^{k-1}}{k!}(1)_{k,1/\lambda},\quad (\mathrm{see}\ [9]). $$
(3)

For any integer \(k \ge 0\), the partial Bell polynomials are given by

$$\frac{1}{k!}\bigg(\sum_{m=1}^{\infty}x_{m}\frac{t^{m}}{m!}\bigg)^{k}=\sum_{n=k}^{\infty}B_{n,k}(x_{1},x_{2},\dots,x_{n-k+1})\frac{t^{n}}{n!},\quad(\mathrm{see}\ [6,10,18]), $$
(4)

where

$$\begin{aligned} \, &B_{n,k}(x_{1},x_{2},\dots,x_{n-k+1})\\ &\quad =\sum_{\substack{l_{1}+\cdots+l_{n-k+1}=k\\ l_{1}+2l_{2}+\cdots+(n-k+1)l_{n-k+1}=n}}\frac{n!}{l_{1}!l_{2}!\cdots l_{n-k+1}!}\bigg(\frac{x_{1}}{1!}\bigg)^{l_{1}} \bigg(\frac{x_{2}}{2!}\bigg)^{l_{2}}\cdots \bigg(\frac{x_{n-k+1}}{(n-k+1)!}\bigg)^{l_{n-k+1}}. \end{aligned} $$
(5)

The complete Bell polynomials are defined by

$$\exp\bigg(\sum_{i=1}^{\infty}x_{i}\frac{t^{i}}{i!}\bigg)=\sum_{n=0}^{\infty}B_{n}(x_{1},x_{2},\dots,x_{n})\frac{t^{n}}{n!},\quad (\mathrm{see}\ [6,10,18]). $$
(6)

The Stirling numbers of the second kind are given by

$$\frac{1}{k!}(e^{t}-1)^{k}=\sum_{n=k}^{\infty}{n \brace k}\frac{t^{n}}{n!}.$$

From (5) and (6), we note that

$$B_{n,k}(1,1,\dots,1)={n \brace k},\quad (\mathrm{see}\ [1,6,18]), $$
(7)

and

$$B_{n}(x,x,\dots,x)=\phi_{n}(x),\quad (n\ge 0), $$
(8)

where \(\phi_{n}(x)\) are the Bell polynomials given by

$$e^{x(e^{t}-1)}=\sum_{n=0}^{\infty}\phi_{n}(x)\frac{t^{n}}{n!},\quad (\mathrm{see}\ [1\text{-}24]). $$

The degenerate Stirling numbers of the second kind are defined by

$$(x)_{n,\lambda}=\sum_{k=0}^{n}{n \brace k}_{\lambda}(x)_{k},\quad (n\ge 0),\quad (\mathrm{see}\ [9]). $$
(9)

From (9), we note that

$$\frac{1}{k!}(e_{\lambda}(t)-1)^{k}=\sum_{n=k}^{\infty}{n \brace k}_{\lambda}\frac{t^{n}}{n!},\quad (k\ge 0),\quad (\mathrm{see}\ [9]). $$
(10)

Thus, by (10), we easily get

$${n \brace k}_{\lambda}=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}(j)_{n,\lambda},\quad (n\ge k\ge 0),\quad (\mathrm{see}\ [9,12,13,16]). $$
(11)

Note that \(\lim_{\lambda\rightarrow 0}{n \brace k}_{\lambda}={n \brace k},\ (n,k\ge 0)\). In [11], the degenerate Stirling polynomials of the second kind are introduced by Kim as

$$\frac{1}{k!}(e_{\lambda}(t)-1)^{k}e_{\lambda}^{x}(t)=\sum_{n=k}^{\infty}S_{2,\lambda}(n,k|x)\frac{t^{n}}{n!},\quad (k\ge 0). $$
(12)

Note that \({n \brace k}_{\lambda}=S_{2,\lambda}(n,k|0),\ (n,k\ge 0)\). From (12), we have

$$S_{2,\lambda}(n,k|x)=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}(x+j)_{n,\lambda},\quad (n\ge k\ge 0),\quad (\mathrm{see}\ [11]). $$
(13)

Recently, Kim-Kim introduced the degenerate Bell polynomials given by

$$e^{x(e_{\lambda}(t)-1)}=\sum_{n=0}^{\infty}\phi_{n,\lambda}(x)\frac{t^{n}}{n!},\quad (\mathrm{see}\ [8,13,15,16]). $$
(14)

Note that \(\lim_{\lambda\rightarrow 0}\phi_{n,\lambda}(x)=\phi_{n}(x),\ (n\ge 0)\).

We assume that \(Y\) is a random variable satisfying the moment conditions

$$E\big[|Y|^{n}\big]<\infty,\quad n\in\mathbb{N}\cup\{0\},\quad \lim_{n\rightarrow\infty}\frac{|t|^{n}E[|Y|^{n}]}{n!}=0,\quad |t|<r, $$
(15)

for some \(r>0\), where \(E\) stands for the mathematical expectation. The equation (15) guarantees the existence of the moment generating function of \(Y\) given by

$$E\big[e^{tY}\big]=\sum_{n=0}^{\infty}\frac{t^{n}}{n!}E[Y^{n}],\quad (\mathrm{see}\ [2,21,22]). $$
(16)

Let \((Y_{j})_{j\ge 1}\) be a sequence of mutually independent copies of the random variable \(Y\), and let

$$S_{k}=Y_{1}+Y_{2}+\cdots+Y_{k},\quad k\in\mathbb{N},\quad (S_{0}=0). $$
(17)

The probabilistic Stirling numbers of the second kind associated with the random variable \(Y\) are defined by

$$S_{Y}(n,m)=\frac{1}{m!}\sum_{k=0}^{m}\binom{m}{k}(-1)^{m-k} E[S_{k}^{n}],\quad (0 \le m\le n),\quad (\mathrm{see}\ [2]). $$
(18)
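Definition (18) is easy to test in the simplest case. In the Python sketch below, the moment function passed as a callable is a hypothetical helper introduced only for illustration; for the constant random variable \(Y=1\) one has \(S_{k}=k\) and \(E[S_{k}^{n}]=k^{n}\), so \(S_{Y}(n,m)\) should reduce to the classical Stirling numbers of the second kind:

```python
from math import comb, factorial

def stirling2(n, k):
    # classical Stirling number of the second kind
    return sum((-1) ** (k - j) * comb(k, j) * j ** n
               for j in range(k + 1)) // factorial(k)

def S_Y(n, m, moment):
    # eq. (18); moment(k, n) should return the n-th moment E[S_k^n]
    return sum((-1) ** (m - k) * comb(m, k) * moment(k, n)
               for k in range(m + 1)) // factorial(m)

# for Y = 1 almost surely, E[S_k^n] = k^n and S_Y(n, m) = {n brace m}
print(S_Y(6, 3, lambda k, n: k ** n), stirling2(6, 3))
```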

2. Probabilistic degenerate Bell polynomials associated with random variables

Let \((Y_{j})_{j\ge 1}\) be a sequence of mutually independent copies of the random variable \(Y\), and let

$$S_{0}=0,\quad S_{k}=Y_{1}+Y_{2}+\cdots+Y_{k},\quad k\in\mathbb{N}. $$
(19)

Now, we consider the probabilistic degenerate Stirling numbers of the second kind associated with the random variable \(Y\) given by

$$\frac{1}{k!}\Big(E\big[e_{\lambda}^{Y}(t)\big]-1 \Big)^{k}=\sum_{n=k}^{\infty}{n \brace k}_{Y,\lambda}\frac{t^{n}}{n!},\quad (k\ge 0). $$
(20)

On the other hand, by (1), we get

$$\begin{aligned} \, &\frac{1}{k!}\Big(E\big[e_{\lambda}^{Y}(t)\big]-1\Big)^{k}=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}\Big(E\big[e_{\lambda}^{Y}(t)\big]\Big)^{j} \\ &=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}\underbrace{E\big[e_{\lambda}^{Y}(t)\big]\cdots E\big[e_{\lambda}^{Y}(t)\big]}_{j-times}=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\big[e_{\lambda}^{Y_{1}+\cdots+Y_{j}}(t)\big]\nonumber \\ &=\sum_{n=0}^{\infty}\bigg(\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\big[\big(Y_{1}+Y_{2}+\cdots+Y_{j}\big)_{n,\lambda}]\bigg)\frac{t^{n}}{n!} \nonumber \\ &=\sum_{n=0}^{\infty}\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\big[(S_{j})_{n,\lambda}\big]\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(21)

Thus, by (20) and (21), we get

$${n \brace k}_{Y,\lambda}=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\big[(S_{j})_{n,\lambda}\big],\quad (n\ge k \ge 0). $$
(22)

By (11), we note that if \(Y=1\), then \({n \brace k}_{Y,\lambda}={n \brace k}_{\lambda}\). Therefore, by (22), we obtain the following theorem.

Theorem 2.1.

For \(n,k\) with \(n\ge k \ge 0\), we have

$${n \brace k}_{Y,\lambda}=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\big[(S_{j})_{n,\lambda}\big].$$
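For a Bernoulli random variable the expectations in Theorem 2.1 are finite sums, so the formula can be checked exactly. The following Python sketch (the parameter values \(\lambda=1/3\), \(p=1/2\), \(n=5\), \(k=2\) are illustrative assumptions) compares the result against \(p^{k}{n \brace k}_{\lambda}\), the closed form established later in (68):

```python
from fractions import Fraction
from math import comb, factorial

def falling_deg(x, n, lam):
    # generalized falling factorial (x)_{n,lambda} from (2)
    out = Fraction(1)
    for i in range(n):
        out *= Fraction(x) - i * lam
    return out

def E_falling_Sj(j, n, lam, p):
    # for Y Bernoulli(p), S_j ~ Binomial(j, p), so E[(S_j)_{n,lambda}] is a finite sum
    return sum(comb(j, i) * p ** i * (1 - p) ** (j - i) * falling_deg(i, n, lam)
               for i in range(j + 1))

def braces_Y(n, k, lam, p):
    # the formula of Theorem 2.1
    return sum((-1) ** (k - j) * comb(k, j) * E_falling_Sj(j, n, lam, p)
               for j in range(k + 1)) / factorial(k)

def stirling2_deg(n, k, lam):
    # {n brace k}_lambda via eq. (11)
    return sum((-1) ** (k - j) * comb(k, j) * falling_deg(j, n, lam)
               for j in range(k + 1)) / factorial(k)

lam, p = Fraction(1, 3), Fraction(1, 2)
n, k = 5, 2
print(braces_Y(n, k, lam, p) == p ** k * stirling2_deg(n, k, lam))
```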

We define the probabilistic degenerate Bell polynomials associated with the random variable \(Y\) by

$$\phi_{n,\lambda}^{Y}(x)=\sum_{k=0}^{n}{n \brace k}_{Y,\lambda}x^{k},\quad (n\ge 0). $$
(23)

Note that if \(Y=1\), then \(\phi_{n,\lambda}^{Y}(x)=\phi_{n,\lambda}(x),\ (n\ge 0)\). For \(x=1\), \(\phi_{n,\lambda}^{Y}=\phi_{n,\lambda}^{Y}(1)\) are called the probabilistic degenerate Bell numbers associated with the random variable \(Y\). From (23), we note that

$$\begin{aligned} \, e^{x(E[e_{\lambda}^{Y}(t)]-1)}&=\sum_{k=0}^{\infty}x^{k}\frac{1}{k!}\Big(E\big[e_{\lambda}^{Y}(t)\big]-1\Big)^{k} \\ &=\sum_{k=0}^{\infty}x^{k}\sum_{n=k}^{\infty}{n\brace k}_{Y,\lambda}\frac{t^{n}}{n!} \nonumber \\ &=\sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}x^{k}{n\brace k}_{Y,\lambda}\bigg)\frac{t^{n}}{n!}=\sum_{n=0}^{\infty}\phi_{n,\lambda}^{Y}(x)\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(24)

By (24), we get

$$\begin{aligned} \, \sum_{n=0}^{\infty}\phi_{n,\lambda}^{Y}(x)\frac{t^{n}}{n!}&=e^{x(E[e_{\lambda}^{Y}(t)]-1)}=e^{-x}e^{xE[e_{\lambda}^{Y}(t)]} \\ &=e^{-x}\sum_{k=0}^{\infty}x^{k}\frac{1}{k!}\Big(E[e_{\lambda}^{Y}(t)]\Big)^{k}=e^{-x} \sum_{k=0}^{\infty}x^{k}\frac{1}{k!}E[e_{\lambda}^{Y_{1}+\cdots+Y_{k}}(t)]\nonumber \\ &=\sum_{n=0}^{\infty}e^{-x}\sum_{k=0}^{\infty}x^{k}\frac{1}{k!}E[\big(Y_{1}+Y_{2}+\cdots+Y_{k})_{n,\lambda}\big]\frac{t^{n}}{n!} \nonumber \\ &=\sum_{n=0}^{\infty}e^{-x}\sum_{k=0}^{\infty}x^{k}\frac{1}{k!}E\big[(S_{k})_{n,\lambda}\big]\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(25)

Therefore, by comparing the coefficients on both sides of (25), we obtain the following Dobinski-like formula.

Theorem 2.2.

For \(n\ge 0\), we have

$$\phi_{n,\lambda}^{Y}(x)=e^{-x}\sum_{k=0}^{\infty}x^{k}\frac{1}{k!}E\big[(S_{k})_{n,\lambda}\big].$$
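The Dobinski-like series converges rapidly, so Theorem 2.2 can be compared numerically with definition (23). The Python sketch below (a Bernoulli \(Y\) with \(p=0.5\), \(\lambda=0.25\), \(x=1.5\) and a truncation at 60 terms are illustrative assumptions) checks the two evaluations in floating point:

```python
from math import comb, exp, factorial

lam, p, x = 0.25, 0.5, 1.5  # illustrative parameters: Y Bernoulli(p)

def falling_deg(v, n):
    # generalized falling factorial (v)_{n,lambda}
    out = 1.0
    for i in range(n):
        out *= v - i * lam
    return out

def E_falling_Sk(k, n):
    # S_k ~ Binomial(k, p), so the expectation is a finite sum over the pmf
    return sum(comb(k, i) * p ** i * (1 - p) ** (k - i) * falling_deg(i, n)
               for i in range(k + 1))

def braces_Y(n, k):
    # Theorem 2.1
    return sum((-1) ** (k - j) * comb(k, j) * E_falling_Sk(j, n)
               for j in range(k + 1)) / factorial(k)

def phi_def(n):
    # definition (23)
    return sum(braces_Y(n, k) * x ** k for k in range(n + 1))

def phi_dobinski(n, terms=60):
    # Dobinski-like formula of Theorem 2.2, truncated
    return exp(-x) * sum(x ** k / factorial(k) * E_falling_Sk(k, n)
                         for k in range(terms))

n = 4
print(abs(phi_def(n) - phi_dobinski(n)) < 1e-9)
```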

From (24), we have

$$\begin{aligned} \, \sum_{n=0}^{\infty}\phi_{n,\lambda}^{Y}(x)\frac{t^{n}}{n!}&=e^{x(E[e_{\lambda}^{Y}(t)]-1)}=e^{x\sum_{j=1}^{\infty}E[(Y)_{j,\lambda}]\frac{t^{j}}{j!}} \\ &=\sum_{k=0}^{\infty}\frac{1}{k!}\bigg(x\sum_{j=1}^{\infty}E[(Y)_{j,\lambda}]\frac{t^{j}}{j!}\bigg)^{k} \nonumber \\ &=\sum_{k=0}^{\infty}\sum_{n=k}^{\infty}B_{n,k}\Big(xE[(Y)_{1,\lambda}],\dots,xE[(Y)_{n-k+1,\lambda}]\Big) \frac{t^{n}}{n!} \nonumber \\ &=\sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}B_{n,k} \Big(xE[(Y)_{1,\lambda}],\dots,xE[(Y)_{n-k+1,\lambda}]\Big)\bigg) \frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(26)

Therefore, by comparing the coefficients on both sides of (26), we obtain the following theorem.

Theorem 2.3.

For \(n\ge 0\), we have

$$\phi_{n,\lambda}^{Y}(x)=\sum_{k=0}^{n}B_{n,k}\Big(xE[(Y)_{1,\lambda}], xE[(Y)_{2,\lambda}],\dots, xE[(Y)_{n-k+1,\lambda}]\Big).$$

From (24), we have

$$\begin{aligned} \, &\sum_{n=0}^{\infty}\phi_{n+1,\lambda}^{Y}(x)\frac{t^{n}}{n!} =\frac{d}{dt}\sum_{n=0}^{\infty}\phi_{n,\lambda}^{Y}(x)\frac{t^{n}}{n!}=\frac{d}{dt}e^{x(E[e_{\lambda}^{Y}(t)]-1)} \\ &=xE[Ye_{\lambda}^{Y-\lambda}(t)]e^{x(E[e_{\lambda}^{Y}(t)]-1)}=x\sum_{k=0}^{\infty}E[(Y)_{k+1,\lambda}]\frac{t^{k}}{k!}\sum_{m=0}^{\infty}\phi_{m,\lambda}^{Y}(x)\frac{t^{m}}{m!} \nonumber \\ &=\sum_{n=0}^{\infty}\bigg(x\sum_{k=0}^{n}\binom{n}{k}E[(Y)_{k+1,\lambda}]\phi_{n-k,\lambda}^{Y}(x)\bigg)\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(27)

Therefore, by comparing the coefficients on both sides of (27), we obtain the following recurrence relation.

Theorem 2.4.

For \(n\ge 0\), we have

$$\phi_{n+1,\lambda}^{Y}(x)=x\sum_{k=0}^{n}\binom{n}{k}E[(Y)_{k+1,\lambda}]\phi_{n-k,\lambda}^{Y}(x).$$
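Since all ingredients of the recurrence are rational for rational parameters, Theorem 2.4 admits an exact check. In the Python sketch below, a Bernoulli \(Y\) with \(p=1/3\), together with \(\lambda=1/4\), \(x=2\), \(n=5\), is an illustrative choice; \(E[(Y)_{k+1,\lambda}]\) is obtained as the \(j=1\) case of the moments of \(S_{j}\):

```python
from fractions import Fraction
from math import comb, factorial

lam, p, x = Fraction(1, 4), Fraction(1, 3), Fraction(2)  # illustrative values

def falling_deg(v, n):
    # generalized falling factorial (v)_{n,lambda}
    out = Fraction(1)
    for i in range(n):
        out *= Fraction(v) - i * lam
    return out

def E_falling_Sk(k, n):
    # Y Bernoulli(p): S_k ~ Binomial(k, p)
    return sum(comb(k, i) * p ** i * (1 - p) ** (k - i) * falling_deg(i, n)
               for i in range(k + 1))

def braces_Y(n, k):
    # Theorem 2.1
    return sum((-1) ** (k - j) * comb(k, j) * E_falling_Sk(j, n)
               for j in range(k + 1)) / factorial(k)

def phi(n):
    # definition (23)
    return sum(braces_Y(n, k) * x ** k for k in range(n + 1))

n = 5
lhs = phi(n + 1)
# E[(Y)_{k+1,lambda}] = E[(S_1)_{k+1,lambda}]
rhs = x * sum(comb(n, k) * E_falling_Sk(1, k + 1) * phi(n - k) for k in range(n + 1))
print(lhs == rhs)
```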

Now, we observe that

$$\begin{aligned} \, &\sum_{n=0}^{\infty}\phi_{n,\lambda}^{Y}(x+y)\frac{t^{n}}{n!}=e^{(x+y)(E[e_{\lambda}^{Y}(t)]-1)}=e^{x(E[e_{\lambda}^{Y}(t)]-1)}e^{y(E[e_{\lambda}^{Y}(t)]-1)} \\ &=\sum_{k=0}^{\infty}\phi_{k,\lambda}^{Y}(x)\frac{t^{k}}{k!}\sum_{m=0}^{\infty}\phi_{m,\lambda}^{Y}(y)\frac{t^{m}}{m!} \nonumber \\ &=\sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}\binom{n}{k}\phi_{k,\lambda}^{Y}(x)\phi_{n-k,\lambda}^{Y}(y)\bigg) \frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(28)

Therefore, by (28), we obtain the following binomial identity.

Theorem 2.5.

For \(n\ge 0\), we have

$$\phi_{n,\lambda}^{Y}(x+y)=\sum_{k=0}^{n}\binom{n}{k}\phi_{k,\lambda}^{Y}(x)\phi_{n-k,\lambda}^{Y}(y).$$
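The binomial identity can likewise be tested with exact arithmetic. The Python sketch below (a Bernoulli \(Y\) with \(p=1/3\), \(\lambda=1/4\), and the evaluation points \(x=1/2\), \(y=3/2\), \(n=4\) are illustrative assumptions) compares both sides of Theorem 2.5:

```python
from fractions import Fraction
from math import comb, factorial

lam, p = Fraction(1, 4), Fraction(1, 3)  # illustrative values

def falling_deg(v, n):
    # generalized falling factorial (v)_{n,lambda}
    out = Fraction(1)
    for i in range(n):
        out *= Fraction(v) - i * lam
    return out

def E_falling_Sj(j, n):
    # Y Bernoulli(p): S_j ~ Binomial(j, p)
    return sum(comb(j, i) * p ** i * (1 - p) ** (j - i) * falling_deg(i, n)
               for i in range(j + 1))

def braces_Y(n, k):
    # Theorem 2.1
    return sum((-1) ** (k - j) * comb(k, j) * E_falling_Sj(j, n)
               for j in range(k + 1)) / factorial(k)

def phi(n, v):
    # definition (23), evaluated at v
    return sum(braces_Y(n, k) * v ** k for k in range(n + 1))

xv, yv = Fraction(1, 2), Fraction(3, 2)
n = 4
lhs = phi(n, xv + yv)
rhs = sum(comb(n, k) * phi(k, xv) * phi(n - k, yv) for k in range(n + 1))
print(lhs == rhs)
```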

From (24), we note that

$$\begin{aligned} \, \sum_{n=0}^{\infty}\phi_{n,\lambda}^{Y}(x)\frac{t^{n}}{n!}&=e^{x(E[e_{\lambda}^{Y}(t)]-1)}=\Big(e^{E[e_{\lambda}^{Y}(t)]-1}-1+1\Big)^{x} \\ &=\sum_{k=0}^{\infty}\binom{x}{k}\Big(e^{E[e_{\lambda}^{Y}(t)]-1}-1\Big)^{k}=\sum_{k=0}^{\infty}\binom{x}{k}k!\frac{1}{k!}\bigg(\sum_{j=1}^{\infty}\phi_{j,\lambda}^{Y}\frac{t^{j}}{j!}\bigg)^{k} \nonumber \\ &=\sum_{k=0}^{\infty}\binom{x}{k}k!\sum_{n=k}^{\infty}B_{n,k}\Big(\phi_{1,\lambda}^{Y},\phi_{2,\lambda}^{Y},\dots,\phi_{n-k+1,\lambda}^{Y}\Big)\frac{t^{n}}{n!}\nonumber \\ &=\sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}\binom{x}{k}k!B_{n,k}\Big(\phi_{1,\lambda}^{Y},\phi_{2,\lambda}^{Y},\dots,\phi_{n-k+1,\lambda}^{Y}\Big)\bigg)\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(29)

Thus, by comparing the coefficients on both sides of (29), we get

$$\phi_{n,\lambda}^{Y}(x)=\sum_{k=0}^{n}\binom{x}{k}k!B_{n,k}\Big(\phi_{1,\lambda}^{Y},\phi_{2,\lambda}^{Y},\dots,\phi_{n-k+1,\lambda}^{Y}\Big),\quad (n\ge 0). $$
(30)

Now, we observe that

$$te^{x(E[e_{\lambda}^{Y}(t)]-1)}=t\sum_{j=0}^{\infty}\phi_{j,\lambda}^{Y}(x)\frac{t^{j}}{j!}=\sum_{j=1}^{\infty}j\phi_{j-1,\lambda}^{Y}(x)\frac{t^{j}}{j!}. $$
(31)

By (31), we get

$$\begin{aligned} \, &\bigg(\sum_{j=1}^{\infty}j\phi_{j-1,\lambda}^{Y}(x)\frac{t^{j}}{j!}\bigg)^{k}=t^{k}\Big(e^{x(E[e_{\lambda}^{Y}(t)]-1)}\Big)^{k} \\ &=t^{k}\sum_{j=0}^{\infty}k^{j}x^{j}\frac{1}{j!}\Big(E[e_{\lambda}^{Y}(t)]-1\Big)^{j}=t^{k}\sum_{j=0}^{\infty}k^{j}x^{j}\sum_{n=j}^{\infty}{n \brace j}_{Y,\lambda}\frac{t^{n}}{n!} \nonumber \\ &=\sum_{n=0}^{\infty}\bigg(\sum_{j=0}^{n}k^{j}x^{j}{n \brace j}_{Y,\lambda}\bigg)\frac{t^{n+k}}{n!}=\sum_{n=k}^{\infty}\bigg(\sum_{j=0}^{n-k}k^{j}x^{j}{n-k \brace j}_{Y,\lambda}\bigg)\frac{t^{n}}{(n-k)!} \nonumber \\ &=\sum_{n=k}^{\infty}\bigg(k!\sum_{j=0}^{n-k}\binom{n}{k}k^{j}x^{j}{n-k \brace j}_{Y,\lambda}\bigg)\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(32)

From (4) and (32), we note that

$$\begin{aligned} \, &\sum_{n=k}^{\infty}\bigg(\sum_{j=0}^{n-k}\binom{n}{k}k^{j}x^{j}{n-k \brace j}_{Y,\lambda}\bigg)\frac{t^{n}}{n!}=\frac{1}{k!}\bigg(\sum_{j=1}^{\infty}j\phi_{j-1,\lambda}^{Y}(x)\frac{t^{j}}{j!}\bigg)^{k} \\ &=\sum_{n=k}^{\infty}B_{n,k}\Big(\phi_{0,\lambda}^{Y}(x), 2\phi_{1,\lambda}^{Y}(x), 3\phi_{2,\lambda}^{Y}(x),\dots, (n-k+1)\phi_{n-k,\lambda}^{Y}(x)\Big)\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(33)

Thus, by (33), we get

$$\sum_{j=0}^{n-k}\binom{n}{k}k^{j}x^{j}{n-k \brace j}_{Y,\lambda}=B_{n,k}\Big(\phi_{0,\lambda}^{Y}(x),2\phi_{1,\lambda}^{Y}(x),\dots,(n-k+1)\phi_{n-k,\lambda}^{Y}(x)\Big). $$
(34)

From (4), we note that

$$\begin{aligned} \, &\sum_{n=k}^{\infty}B_{n,k}\Big(\phi_{1,\lambda}^{Y}(x),\phi_{2,\lambda}^{Y}(x),\dots,\phi_{n-k+1,\lambda}^{Y}(x)\Big)\frac{t^{n}}{n!}=\frac{1}{k!}\bigg(\sum_{j=1}^{\infty}\phi_{j,\lambda}^{Y}(x)\frac{t^{j}}{j!}\bigg)^{k} \\ &=\frac{1}{k!}\Big(e^{x(E[e_{\lambda}^{Y}(t)]-1)}-1\Big)^{k}=\sum_{j=k}^{\infty}{j \brace k}\frac{x^{j}}{j!}\Big(E[e_{\lambda}^{Y}(t)]-1\Big)^{j}\nonumber \\ &=\sum_{j=k}^{\infty}{j \brace k}x^{j}\sum_{n=j}^{\infty}{n \brace j}_{Y,\lambda}\frac{t^{n}}{n!}=\sum_{n=k}^{\infty}\bigg(\sum_{j=k}^{n}{j \brace k}{n \brace j}_{Y,\lambda}x^{j}\bigg)\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(35)

By comparing the coefficients on both sides of (35), we have

$$B_{n,k}\Big(\phi_{1,\lambda}^{Y}(x),\phi_{2,\lambda}^{Y}(x),\dots,\phi_{n-k+1,\lambda}^{Y}(x)\Big)=\sum_{j=k}^{n}{j \brace k}{n \brace j}_{Y,\lambda}x^{j}, $$
(36)

where \(n\ge k\ge 0\). Therefore, by (30), (34), and (36), we obtain the following theorem.

Theorem 2.6.

The following identities hold true.

$$\begin{aligned} \, &\phi_{n,\lambda}^{Y}(x)=\sum_{k=0}^{n}\binom{x}{k}k!B_{n,k}\Big(\phi_{1,\lambda}^{Y},\phi_{2,\lambda}^{Y},\dots,\phi_{n-k+1,\lambda}^{Y}\Big),\quad (n \ge 0),\\ &\sum_{j=0}^{n-k}\binom{n}{k}k^{j}x^{j}{n-k \brace j}_{Y,\lambda}=B_{n,k}\Big(\phi_{0,\lambda}^{Y}(x),2\phi_{1,\lambda}^{Y}(x),3\phi_{2,\lambda}^{Y}(x),\dots,(n-k+1)\phi_{n-k,\lambda}^{Y}(x)\Big),\quad (n \ge k), \end{aligned}$$

and

$$B_{n,k}\Big(\phi_{1,\lambda}^{Y}(x),\phi_{2,\lambda}^{Y}(x),\dots,\phi_{n-k+1,\lambda}^{Y}(x)\Big)=\sum_{j=k}^{n}{j \brace k}{n \brace j}_{Y,\lambda}x^{j},\quad (n \ge k).$$

From (24), we note that

$$\begin{aligned} \, \sum_{n=0}^{\infty}\bigg(\frac{d}{dx}\bigg)^{k}\phi_{n,\lambda}^{Y}(x)\frac{t^{n}}{n!}&=\bigg(\frac{d}{dx}\bigg)^{k}e^{x(E[e_{\lambda}^{Y} (t)]-1)} \\ &=k!\frac{1}{k!}\Big(E[e_{\lambda}^{Y}(t)]-1\Big)^{k} e^{x(E[e_{\lambda}^{Y} (t)]-1)}\nonumber \\ &=k!\sum_{l=k}^{\infty}{l \brace k}_{Y,\lambda}\frac{t^{l}}{l!}\sum_{j=0}^{\infty}\phi_{j,\lambda}^{Y}(x)\frac{t^{j}}{j!} \nonumber \\ &=\sum_{n=k}^{\infty}\bigg(k!\sum_{j=0}^{n-k}\phi_{j,\lambda}^{Y}(x){n-j \brace k}_{Y,\lambda}\binom{n}{j}\bigg)\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(37)

Thus, by (37), we get

$$\bigg(\frac{d}{dx}\bigg)^{k}\phi_{n,\lambda}^{Y}(x)=k!\sum_{j=0}^{n-k}\phi_{j,\lambda}^{Y}(x){n-j \brace k}_{Y,\lambda}\binom{n}{j}. $$
(38)

In particular, for \(k=1\), we have

$$\begin{aligned} \, \sum_{n=1}^{\infty}\frac{d}{dx}\phi_{n,\lambda}^{Y}(x)\frac{t^{n}}{n!}&=\Big(E[e_{\lambda}^{Y}(t)]-1\Big)e^{x(E[e_{\lambda}^{Y}(t)]-1)} \\ &=\sum_{l=1}^{\infty}E[(Y)_{l,\lambda}]\frac{t^{l}}{l!}\sum_{j=0}^{\infty}\phi_{j,\lambda}^{Y}(x)\frac{t^{j}}{j!} \nonumber \\ &=\sum_{n=1}^{\infty}\bigg(\sum_{j=0}^{n-1}E[(Y)_{n-j,\lambda}]\phi_{j,\lambda}^{Y}(x)\binom{n}{j}\bigg)\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(39)

Thus, by comparing the coefficients on both sides of (39), we get

$$\frac{d}{dx}\phi_{n,\lambda}^{Y}(x)= \sum_{j=0}^{n-1}E[(Y)_{n-j,\lambda}]\phi_{j,\lambda}^{Y}(x)\binom{n}{j},\quad (n\in\mathbb{N}). $$
(40)

Therefore, by (38) and (40), we obtain the following theorem.

Theorem 2.7.

For \(n,k\in\mathbb{N}\) with \(n\ge k\), we have

$$\bigg(\frac{d}{dx}\bigg)^{k}\phi_{n,\lambda}^{Y}(x)=k!\sum_{j=0}^{n-k}\phi_{j,\lambda}^{Y}(x){n-j \brace k}_{Y,\lambda}\binom{n}{j}.$$

In particular, for \(k=1\), we have

$$\frac{d}{dx}\phi_{n,\lambda}^{Y}(x)=\sum_{j=0}^{n-1}E[(Y)_{n-j,\lambda}]\phi_{j,\lambda}^{Y}(x)\binom{n}{j}.$$

Let \(f\) be a real-valued function. The forward difference operator \(\triangle\) is defined by \(\triangle f(x)=f(x+1)-f(x)\). For \(y\in\mathbb{R}\), the operator \(\triangle_{y}\) is defined by

$$\triangle_{y}f(x)=f(x+y)-f(x),\quad (\mathrm{see}\ [2]), $$
(41)

together with the iterates

$$\triangle_{y_{1},y_{2},\dots,y_{m}}f(x)= \triangle_{y_{1}}\circ \triangle_{y_{2}}\circ \triangle_{y_{3}}\circ\cdots\circ \triangle_{y_{m}}f(x),\quad (y_{1},y_{2},\dots,y_{m})\in\mathbb{R}^{m}. $$
(42)

From (41) and (42), we note that

$$\triangle_{\underbrace{1,1,\dots,1}_{m\ \mathrm{times}}}f(x)=\triangle^{m}f(x)=\sum_{j=0}^{m}\binom{m}{j}(-1)^{m-j}f(x+j),\quad (m\in\mathbb{N}\cup\{0\}). $$
(43)

Thus, by (42), we get

$$\triangle_{y_{1},y_{2},\dots,y_{m}}f(x)=(-1)^{m}f(x)+\sum_{k=1}^{m}(-1)^{m-k}\sum_{I_{m}(k)}f(x+y_{i_{1}}+y_{i_{2}}+\cdots+y_{i_{k}}), $$
(44)

where

$$I_{m}(k)=\big\{(i_{1},i_{2},\dots,i_{k}) : i_{j}\in\{1,2,\dots,m\},\ i_{r}\ne i_{s}\ \text{for}\ r\ne s\big\},\quad (\mathrm{see}\ [2]).$$

From (44), we note that

$$\begin{aligned} \, E\Big[\triangle_{Y_{1},Y_{2},\dots,Y_{m}}f(x)\Big]&=\sum_{k=0}^{m}\binom{m}{k}(-1)^{m-k}E\Big[f(x+S_{k})\Big] \\ &=(-1)^{m}f(x)+\sum_{k=1}^{m}\binom{m}{k}(-1)^{m-k}E\Big[f(x+S_{k})\Big],\nonumber \end{aligned}$$
(45)

where \(m\) is a positive integer.

By inversion, from (45), we have

$$\sum_{j=0}^{m}\binom{m}{j}E\Big[\triangle_{Y_{1},Y_{2},\dots,Y_{j}} f(x)\Big]=E\Big[f(x+S_{m})\Big]. $$
(46)

From (42), we note that

$$\begin{aligned} \, \sum_{k=0}^{N}(x+k)_{n,\lambda}&=\sum_{k=0}^{N}(1+ \triangle)^{k}(x)_{n,\lambda}=\sum_{k=0}^{N}\sum_{m=0}^{k}\binom{k}{m} \triangle^{m}(x)_{n,\lambda} \\ &=\sum_{m=0}^{N} \triangle^{m}(x)_{n,\lambda}\sum_{k=m}^{N}\binom{k}{m}=\sum_{m=0}^{N} \triangle^{m}(x)_{n,\lambda}\sum_{k=m}^{N}\bigg(\binom{k+1}{m+1}-\binom{k}{m+1}\bigg)\nonumber \\ &=\sum_{m=0}^{N} \binom{N+1}{m+1}\triangle^{m}(x)_{n,\lambda},\nonumber \end{aligned}$$
(47)

where \(N\) is a nonnegative integer. Thus, by (13) and (43), we have

$$S_{2,\lambda}(n,k|x)=\frac{1}{k!}\sum_{m=0}^{k}\binom{k}{m}(-1)^{k-m}(x+m)_{n,\lambda}=\frac{1}{k!} \triangle^{k}(x)_{n,\lambda}, $$
(48)

where \(n,k\) are nonnegative integers with \(n\ge k\).

From (47) and (48), we note that

$$\sum_{k=0}^{N}(x+k)_{n,\lambda}=\sum_{m=0}^{N}\binom{N+1}{m+1} \triangle^{m}(x)_{n,\lambda}=\sum_{m=0}^{N}\binom{N+1}{m+1}m!S_{2,\lambda}(n,m|x). $$
(49)

The equation (49) can be rewritten as

$$\begin{aligned} \, \sum_{m=0}^{N}\binom{N+1}{m+1}m!S_{2,\lambda}(n,m|x)&=\sum_{m=0}^{N}\binom{N+1}{m+1} \triangle^{m}(x)_{n,\lambda} \\ &=\sum_{m=0}^{N}\binom{N+1}{m+1}\sum_{k=0}^{m}\binom{m}{k}(-1)^{m-k}(x+k)_{n,\lambda} \nonumber\\ &=\sum_{k=0}^{N}(x+k)_{n,\lambda}\sum_{m=k}^{N}\binom{N+1}{m+1}\binom{m}{k}(-1)^{m-k},\quad (N\ge 0). \nonumber \end{aligned}$$
(50)

Therefore, by (50), we obtain the following theorem.

Theorem 2.8.

For \(N\ge 0\), we have

$$\sum_{m=0}^{N}\binom{N+1}{m+1}m!S_{2,\lambda}(n,m|x) =\sum_{k=0}^{N}(x+k)_{n,\lambda}A(N,k),$$

where

$$A(N,k)=\sum_{m=k}^{N}\binom{N+1}{m+1}\binom{m}{k}(-1)^{m-k}. $$
(51)

In view of (12), we define the probabilistic degenerate Stirling polynomials of the second kind associated with the random variable \(Y\) by

$$\frac{1}{k!}\Big(E[e_{\lambda}^{Y}(t)]-1\Big)^{k}e_{\lambda}^{x}(t)=\sum_{n=k}^{\infty}S_{Y,\lambda}(n,k|x)\frac{t^{n}}{n!},\quad (k\ge 0). $$
(52)

On the other hand, by binomial expansion, we get

$$\begin{aligned} \, &\frac{1}{k!}\Big(E\big[e_{\lambda}^{Y}(t)\big]-1\Big)^{k}e_{\lambda}^{x}(t)=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}\Big(E\big[e_{\lambda}^{Y}(t)\big]\Big)^{j}e_{\lambda}^{x}(t) \\ &=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j} E\big[e_{\lambda}^{Y_{1}+Y_{2}+\cdots+Y_{j}+x}(t)\big]\nonumber\\ &=\sum_{n=0}^{\infty}\bigg(\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\big[(Y_{1}+Y_{2}+\cdots+Y_{j}+x)_{n,\lambda}\big]\bigg)\frac{t^{n}}{n!} \nonumber \\ &=\sum_{n=0}^{\infty}\bigg(\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\big[(S_{j}+x)_{n,\lambda}\big]\bigg)\frac{t^{n}}{n!},\quad (k\ge 0). \nonumber \end{aligned}$$
(53)

By (52) and (53), we get

$$S_{Y,\lambda}(n,k|x)=\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E[(x+S_{j})_{n,\lambda}],\quad S_{Y,\lambda}(n,k|0)={n \brace k}_{Y,\lambda},\quad (n\ge k\ge 0). $$
(54)

From (45), we note that

$$\begin{aligned} \, E\Big[\triangle_{Y_{1},Y_{2},\dots,Y_{k}}(x)_{n,\lambda}\Big] &=\sum_{j=0}^{k}\binom{k}{j}(-1)^{k-j}E\Big[(x+S_{j})_{n,\lambda}\Big] \\ &=k!S_{Y,\lambda}(n,k|x),\quad (n\ge k\ge 0). \nonumber \end{aligned}$$
(55)

By (46) and (55), we get

$$\begin{aligned} \, &\sum_{k=0}^{m}\binom{m}{k}E\big[(x+S_{k})_{n,\lambda}\big]=\sum_{k=0}^{m}\binom{m}{k}\sum_{j=0}^{k}\binom{k}{j}E\Big[\triangle_{Y_{1},\dots,Y_{j}}(x)_{n,\lambda}\Big] \\ &=\sum_{k=0}^{m}\binom{m}{k}\sum_{j=0}^{k}\binom{k}{j}j!S_{Y,\lambda}(n,j|x)=\sum_{j=0}^{m}j!S_{Y,\lambda}(n,j|x)\sum_{k=j}^{m}\binom{m}{k}\binom{k}{j} \nonumber \\ &=\sum_{j=0}^{m}j! S_{Y,\lambda}(n,j|x)\binom{m}{j}\sum_{k=0}^{m-j}\binom{m-j}{k}=\sum_{j=0}^{m}j!S_{Y,\lambda}(n,j|x)\binom{m}{j}2^{m-j}.\nonumber \end{aligned}$$
(56)

Therefore, by (56), we obtain the following theorem.

Theorem 2.9.

For \(m,n\ge 0\), we have

$$\sum_{k=0}^{m}\binom{m}{k}E\big[(x+S_{k})_{n,\lambda}\big]=\sum_{j=0}^{m}j!S_{Y,\lambda}(n,j|x)\binom{m}{j}2^{m-j}.$$

The Cauchy numbers of order \(r\) are given by

$$ \bigg(\frac{t}{\log(1+t)}\bigg)^{r}=\sum_{n=0}^{\infty}C_{n}^{(r)}\frac{t^{n}}{n!},\quad (\mathrm{see}\ [6,19,20]).\nonumber$$

Let \(Y\) be a continuous uniform random variable on \((0,1)\), and let \((Y_{j})_{j\ge 1}\) be a sequence of mutually independent copies of \(Y\). Then we have

$$\begin{aligned} \, E\Big[e_{\lambda}^{Y_{1}+\cdots+Y_{k}}(t)\Big]&=\int_{0}^{1}\cdots\int_{0}^{1}e_{\lambda}^{y_{1}}(t)\cdots e_{\lambda}^{y_{k}}(t)dy_{1}\cdots dy_{k}=\bigg(\frac{\lambda}{\log(1+\lambda t)}\bigg)^{k}\big(e_{\lambda}(t)-1\big)^{k} \\ &=\frac{k!}{t^{k}}\sum_{l=0}^{\infty}\lambda^{l}C_{l}^{(k)}\frac{t^{l}}{l!}\sum_{m=k}^{\infty}{m \brace k}_{\lambda}\frac{t^{m}}{m!} \nonumber \\ &=\frac{k!}{t^{k}}\sum_{n=k}^{\infty}\bigg(\sum_{m=k}^{n}{m\brace k}_{\lambda}C_{n-m}^{(k)}\lambda^{n-m}\binom{n}{m}\bigg)\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(57)

Thus, by (57), we get

$$\begin{aligned} \, &\sum_{n=k}^{\infty}\bigg(\sum_{m=k}^{n}{m\brace k}_{\lambda}C_{n-m}^{(k)}\lambda^{n-m}\binom{n}{m}\bigg)\frac{t^{n}}{n!}=\frac{t^{k}}{k!}E\Big[e_{\lambda}^{Y_{1}+\cdots+Y_{k}}(t)\Big] \\ &=\frac{t^{k}}{k!}\sum_{n=0}^{\infty}E\Big[(Y_{1}+\cdots+Y_{k})_{n,\lambda}\Big]\frac{t^{n}}{n!}=\sum_{n=k}^{\infty}\binom{n}{k}E\Big[(Y_{1}+\cdots+Y_{k})_{n-k,\lambda}\Big]\frac{t^{n}}{n!} \nonumber \\ &=\sum_{n=k}^{\infty}\binom{n}{k}E\Big[(S_{k})_{n-k,\lambda}\Big]\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(58)

By comparing the coefficients on both sides of (58), we get

$$\sum_{m=k}^{n }{m \brace k}_{\lambda}C_{n-m}^{(k)}\lambda^{n-m}\binom{n}{m}=\binom{n}{k}E\Big[(S_{k})_{n-k,\lambda}\Big], $$
(59)

where \(n\ge k\ge 0\).

From (46) and (55), we have

$$\begin{aligned} \, &\sum_{k=0}^{m}\binom{m}{k}E\Big[(x+S_{k})_{m-k,\lambda}\Big]=\sum_{k=0}^{m}\binom{m}{k}\sum_{j=0}^{k}\binom{k}{j}E\Big[\triangle_{Y_{1},Y_{2},\dots,Y_{j}}(x)_{m-k,\lambda}\Big] \\ &=\sum_{k=0}^{m}\binom{m}{k}\sum_{j=0}^{k}\binom{k}{j}j!S_{Y,\lambda}(m-k,j|x)=\sum_{j=0}^{m}j!\sum_{k=j}^{m}\binom{m}{k}\binom{k}{j}S_{Y,\lambda}(m-k,j|x) \nonumber\\ &=\sum_{j=0}^{m}j!\binom{m}{j}\sum_{k=j}^{m}\binom{m-j}{m-k}S_{Y,\lambda}(m-k,j|x), \nonumber \end{aligned}$$
(60)

where \(m\) is a nonnegative integer. Then, from (59) and (60), we have

$$\begin{aligned} \, &\sum_{j=0}^{m}j!\binom{m}{j}\sum_{k=j}^{m}\binom{m-j}{m-k}{m-k \brace j}_{Y,\lambda}=\sum_{k=0}^{m}\binom{m}{k}E\Big[(S_{k})_{m-k,\lambda}\Big] \\ &=\sum_{k=0}^{m}\sum_{l=k}^{m}{l \brace k}_{\lambda}C_{m-l}^{(k)}\lambda^{m-l}\binom{m}{l}.\nonumber \end{aligned}$$
(61)

From (20), we note that

$$\begin{aligned} \, \sum_{n=k}^{\infty}{n\brace k}_{Y,\lambda}\frac{t^{n}}{n!}&=\frac{1}{k!}\Big(E\big[e_{\lambda}^{Y}(t)\big]-1\Big)^{k}=\frac{1}{k!}\bigg(\sum_{j=1}^{\infty}E\big[(Y)_{j,\lambda}\big]\frac{t^{j}}{j!}\bigg)^{k} \\ &=\sum_{n=k}^{\infty}B_{n,k}\Big(E[(Y)_{1,\lambda}],E[(Y)_{2,\lambda}],\dots,E[(Y)_{n-k+1,\lambda}]\Big)\frac{t^{n}}{n!}.\nonumber \end{aligned}$$
(62)

Thus, we get

$${n \brace k}_{Y,\lambda}=B_{n,k}\Big(E[(Y)_{1,\lambda}],E[(Y)_{2,\lambda}],\dots,E[(Y)_{n-k+1,\lambda}]\Big), $$
(63)

where \(n\ge k\ge 0\). Therefore, by (63), we obtain the following theorem.

Theorem 2.10.

For \(n\ge k\ge 0\), we have

$${n \brace k}_{Y,\lambda}=B_{n,k}\Big(E[(Y)_{1,\lambda}],E[(Y)_{2,\lambda}],\dots,E[(Y)_{n-k+1,\lambda}]\Big).$$
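Theorem 2.10 can be tested against Theorem 2.1 with exact arithmetic. The Python sketch below evaluates the partial Bell polynomials through the standard recurrence \(B_{n,k}=\sum_{j}\binom{n-1}{j-1}x_{j}B_{n-j,k-1}\), which is not stated in the text; a Bernoulli \(Y\) with \(p=1/3\), \(\lambda=1/4\), \(n=6\), \(k=3\) is an illustrative choice:

```python
from fractions import Fraction
from math import comb, factorial

lam, p = Fraction(1, 4), Fraction(1, 3)  # illustrative values

def falling_deg(v, n):
    # generalized falling factorial (v)_{n,lambda}
    out = Fraction(1)
    for i in range(n):
        out *= Fraction(v) - i * lam
    return out

def bell_partial(n, k, xs):
    # partial Bell polynomial B_{n,k}(xs[1], ..., xs[n-k+1]) via the
    # standard recurrence B_{n,k} = sum_j C(n-1, j-1) xs[j] B_{n-j,k-1}
    if n == 0 and k == 0:
        return Fraction(1)
    if n == 0 or k == 0:
        return Fraction(0)
    return sum(comb(n - 1, j - 1) * xs[j] * bell_partial(n - j, k - 1, xs)
               for j in range(1, n - k + 2))

def E_falling_Sj(j, n):
    # Y Bernoulli(p): S_j ~ Binomial(j, p)
    return sum(comb(j, i) * p ** i * (1 - p) ** (j - i) * falling_deg(i, n)
               for i in range(j + 1))

def braces_Y(n, k):
    # Theorem 2.1
    return sum((-1) ** (k - j) * comb(k, j) * E_falling_Sj(j, n)
               for j in range(k + 1)) / factorial(k)

n, k = 6, 3
moments = {m: E_falling_Sj(1, m) for m in range(1, n - k + 2)}  # E[(Y)_{m,lambda}]
print(bell_partial(n, k, moments) == braces_Y(n, k))
```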

Let \(Y\) be the Poisson random variable with parameter \(\alpha(>0)\). Then we have

$$\begin{aligned} \, \sum_{n=0}^{\infty}E[(S_{k})_{n,\lambda}]\frac{t^{n}}{n!}&=E[e_{\lambda}^{S_{k}}(t)]=E[e_{\lambda}^{Y_{1}+\cdots+Y_{k}}(t)] \\ &=E[e_{\lambda}^{Y_{1}}(t)]E[e_{\lambda}^{Y_{2}}(t)]\cdots E[e_{\lambda}^{Y_{k}}(t)]=e^{k\alpha(e_{\lambda}(t)-1)} \nonumber \\ &=\sum_{n=0}^{\infty}\phi_{n,\lambda}(k\alpha)\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(64)

By comparing the coefficients on both sides (64), we get

$$E[(S_{k})_{n,\lambda}]=E[(Y_{1}+Y_{2}+\cdots+Y_{k})_{n,\lambda}]=\phi_{n,\lambda}(k\alpha),\quad (n\ge 0). $$
(65)

Therefore, by (56) and (65), we obtain the following theorem.

Theorem 2.11.

Let \(Y\) be the Poisson random variable with parameter \(\alpha(>0)\). Then we have

$$\sum_{k=0}^{m}\binom{m}{k}\phi_{n,\lambda}(k \alpha)=\sum_{j=0}^{m}j!{n\brace j}_{Y,\lambda}\binom{m}{j}2^{m-j},\quad (m\ge 0).$$
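Because (65) expresses the moments \(E[(S_{j})_{n,\lambda}]\) as \(\phi_{n,\lambda}(j\alpha)\), both sides of Theorem 2.11 are finite rational expressions. The Python sketch below (the values \(\lambda=1/5\), \(\alpha=2/3\), \(m=4\), \(n=5\) are illustrative assumptions) checks the identity exactly, computing \({n \brace j}_{Y,\lambda}\) from (22) and (65):

```python
from fractions import Fraction
from math import comb, factorial

lam, alpha = Fraction(1, 5), Fraction(2, 3)  # illustrative values

def falling_deg(v, n):
    # generalized falling factorial (v)_{n,lambda}
    out = Fraction(1)
    for i in range(n):
        out *= Fraction(v) - i * lam
    return out

def s2_deg(n, k):
    # {n brace k}_lambda via eq. (11)
    return sum((-1) ** (k - j) * comb(k, j) * falling_deg(j, n)
               for j in range(k + 1)) / factorial(k)

def phi_deg(n, v):
    # degenerate Bell polynomial phi_{n,lambda}(v) = sum_k {n brace k}_lambda v^k
    return sum(s2_deg(n, k) * v ** k for k in range(n + 1))

def braces_Y(n, k):
    # eq. (22), using E[(S_j)_{n,lambda}] = phi_{n,lambda}(j alpha) from (65)
    return sum((-1) ** (k - j) * comb(k, j) * phi_deg(n, j * alpha)
               for j in range(k + 1)) / factorial(k)

m, n = 4, 5
lhs = sum(comb(m, k) * phi_deg(n, k * alpha) for k in range(m + 1))
rhs = sum(factorial(j) * braces_Y(n, j) * comb(m, j) * 2 ** (m - j)
          for j in range(m + 1))
print(lhs == rhs)
```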

Let \(Y\) be the Poisson random variable with parameter \(\alpha(>0)\). Then, from (24), we note that

$$\begin{aligned} \, &\sum_{n=0}^{\infty}\phi_{n,\lambda}^{Y}(x)\frac{t^{n}}{n!}=e^{x(E[e_{\lambda}^{Y}(t)]-1)}=e^{x(e^{\alpha (e_{\lambda}(t)-1)}-1)} \\ &=\sum_{k=0}^{\infty}\phi_{k}(x)\alpha^{k}\frac{1}{k!}(e_{\lambda}(t)-1)^{k}=\sum_{k=0}^{\infty}\phi_{k}(x)\alpha^{k}\sum_{n=k}^{\infty}{n\brace k}_{\lambda}\frac{t^{n}}{n!} \nonumber \\ &=\sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}{n\brace k}_{\lambda}\phi_{k}(x)\alpha^{k}\bigg)\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(66)

Therefore, by (66), we obtain the following theorem.

Theorem 2.12.

Let \(Y\) be the Poisson random variable with parameter \(\alpha(>0)\). Then we have

$$\phi_{n,\lambda}^{Y}(x)= \sum_{k=0}^{n}{n\brace k}_{\lambda}\phi_{k}(x)\alpha^{k},\quad (n\ge 0).$$
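Theorem 2.12 also admits an exact check, since the Poisson values \({n \brace k}_{Y,\lambda}\) are finite rational expressions by (22) and (65). The Python sketch below (the values \(\lambda=1/5\), \(\alpha=2/3\), \(x=3/2\), \(n=5\) are illustrative assumptions) compares both sides:

```python
from fractions import Fraction
from math import comb, factorial

lam, alpha, x = Fraction(1, 5), Fraction(2, 3), Fraction(3, 2)  # illustrative values

def falling_deg(v, n):
    # generalized falling factorial (v)_{n,lambda}
    out = Fraction(1)
    for i in range(n):
        out *= Fraction(v) - i * lam
    return out

def s2_deg(n, k):
    # {n brace k}_lambda via eq. (11)
    return sum((-1) ** (k - j) * comb(k, j) * falling_deg(j, n)
               for j in range(k + 1)) / factorial(k)

def phi_deg(n, v):
    # degenerate Bell polynomial phi_{n,lambda}(v) = sum_k {n brace k}_lambda v^k
    return sum(s2_deg(n, k) * v ** k for k in range(n + 1))

def braces_Y(n, k):
    # {n brace k}_{Y,lambda} via (22), using E[(S_j)_{n,lambda}] = phi_{n,lambda}(j alpha)
    return sum((-1) ** (k - j) * comb(k, j) * phi_deg(n, j * alpha)
               for j in range(k + 1)) / factorial(k)

def s2(n, k):
    # classical Stirling number of the second kind
    return sum((-1) ** (k - j) * comb(k, j) * j ** n
               for j in range(k + 1)) // factorial(k)

def bell(k, v):
    # classical Bell polynomial phi_k(v)
    return sum(s2(k, j) * v ** j for j in range(k + 1))

n = 5
lhs = sum(braces_Y(n, k) * x ** k for k in range(n + 1))  # phi^Y_{n,lambda}(x), eq. (23)
rhs = sum(s2_deg(n, k) * bell(k, x) * alpha ** k for k in range(n + 1))
print(lhs == rhs)
```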

Let \(Y\) be the Bernoulli random variable with probability of success \(p\). Then we have

$$E\Big[(Y)_{n,\lambda}\Big]=\sum_{i=0}^{1}(i)_{n,\lambda}p(i)=(0)_{n,\lambda}p(0)+(1)_{n,\lambda}p(1)=(1)_{n,\lambda}p,\quad(n \ge 1). $$
(67)

Thus, by (63) and (67), we get

$$\begin{aligned} \, {n \brace k}_{Y,\lambda}&=B_{n,k}\Big((1)_{1,\lambda}p,(1)_{2,\lambda}p,\dots,(1)_{n-k+1,\lambda}p\Big)\\ &=p^{k}B_{n,k}\Big((1)_{1,\lambda},(1)_{2,\lambda},\dots,(1)_{n-k+1,\lambda}\Big)=p^{k}{n \brace k}_{\lambda}, \end{aligned} $$
(68)

where \(n\ge k\ge 0\). From (68), we have

$$\phi_{n,\lambda}^{Y}(x) =\sum_{k=0}^{n}{n\brace k}_{Y,\lambda}x^{k}=\sum_{k=0}^{n}(px)^{k}{n\brace k}_{\lambda}=\phi_{n,\lambda}(px),\quad (n\ge 0). $$
(69)

Therefore, by (68) and (69), we obtain the following theorem.

Theorem 2.13.

Let \(Y\) be the Bernoulli random variable with probability of success \(p\). Then we have

$$\phi_{n,\lambda}^{Y}(x)=\phi_{n,\lambda}(px),\quad (n\ge 0).$$

In particular, for \(n\ge k\ge 0\), we get

$${n \brace k}_{Y,\lambda}=p^{k}{n \brace k}_{\lambda}.$$
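Relation (68) and Theorem 2.13 also admit a direct numerical check. In the sketch below (ours; hypothetical names), \(S_{j}=Y_{1}+\cdots+Y_{j}\) is Binomial\((j,p)\) for Bernoulli \(Y\), so \(E[(S_{j})_{n,\lambda}]\) is a finite sum; \({n\brace k}_{Y,\lambda}\) is then obtained by binomial inversion, assuming the explicit expression of Theorem 2.1, and compared with \(p^{k}{n\brace k}_{\lambda}\).

```python
from math import comb, factorial

def falling_deg(x, n, lam):
    """Degenerate falling factorial (x)_{n,lam}."""
    prod = 1.0
    for i in range(n):
        prod *= x - i * lam
    return prod

def stirling2_deg(n, k, lam):
    """Degenerate Stirling number of the second kind {n brace k}_lam."""
    return sum((-1) ** (k - j) * comb(k, j) * falling_deg(j, n, lam)
               for j in range(k + 1)) / factorial(k)

def bell_deg(n, x, lam):
    """Degenerate Bell polynomial phi_{n,lam}(x)."""
    return sum(stirling2_deg(n, k, lam) * x ** k for k in range(n + 1))

def stirling2_Y_bernoulli(n, k, lam, p):
    """{n brace k}_{Y,lam} for Bernoulli Y: S_j ~ Binomial(j, p), so the
    moment E[(S_j)_{n,lam}] is a finite binomial sum."""
    def moment(j):
        return sum(falling_deg(m, n, lam) * comb(j, m) * p ** m
                   * (1 - p) ** (j - m) for m in range(j + 1))
    return sum((-1) ** (k - j) * comb(k, j) * moment(j)
               for j in range(k + 1)) / factorial(k)

def phi_Y_bernoulli(n, x, lam, p):
    """phi_{n,lam}^Y(x) = sum_k {n brace k}_{Y,lam} x^k; by Theorem 2.13
    this should equal phi_{n,lam}(p*x)."""
    return sum(stirling2_Y_bernoulli(n, k, lam, p) * x ** k
               for k in range(n + 1))
```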

We observe that, for \(n,k \ge 1\), we have

$$\begin{aligned} \, (1)_{n,\lambda}+(2)_{n,\lambda}+\cdots+(k)_{n,\lambda}&=\sum_{j=1}^{k}(j)_{n,\lambda}=\sum_{j=1}^{k}\sum_{l=1}^{n}{n\brace l}_{\lambda}(j)_{l} \\ &=\sum_{l=1}^{n}{n\brace l}_{\lambda}l!\sum_{j=1}^{k}\binom{j}{l}=\sum_{l=1}^{n}{n \brace l}_{\lambda}l!\sum_{j=1}^{k}\bigg(\binom{j+1}{l+1}-\binom{j}{l+1}\bigg) \nonumber \\ &=\sum_{l=1}^{n}{n \brace l}_{\lambda}l!\binom{k+1}{l+1}.\nonumber \end{aligned}$$
(70)

By (70), we get

$$(1)_{n,\lambda}+(2)_{n,\lambda}+\cdots+(k)_{n,\lambda}= \sum_{l=1}^{n}{n \brace l}_{\lambda}l!\binom{k+1}{l+1},\quad (n\in\mathbb{N}). $$
(71)
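Identity (71) is easy to confirm numerically. The short sketch below (ours, with hypothetical function names) compares both sides using the explicit expression for \({n\brace l}_{\lambda}\).

```python
from math import comb, factorial

def falling_deg(x, n, lam):
    """Degenerate falling factorial (x)_{n,lam}."""
    prod = 1.0
    for i in range(n):
        prod *= x - i * lam
    return prod

def stirling2_deg(n, k, lam):
    """Degenerate Stirling number of the second kind {n brace k}_lam."""
    return sum((-1) ** (k - j) * comb(k, j) * falling_deg(j, n, lam)
               for j in range(k + 1)) / factorial(k)

def lhs_71(n, k, lam):
    """(1)_{n,lam} + (2)_{n,lam} + ... + (k)_{n,lam}."""
    return sum(falling_deg(j, n, lam) for j in range(1, k + 1))

def rhs_71(n, k, lam):
    """sum_{l=1}^n {n brace l}_lam l! C(k+1, l+1)."""
    return sum(stirling2_deg(n, l, lam) * factorial(l) * comb(k + 1, l + 1)
               for l in range(1, n + 1))
```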

Let \(Y\) be the Bernoulli random variable with probability of success \(p\), and let \((Y_{j})_{j\ge 1}\) be mutually independent copies of \(Y\). Then we have

$$\begin{aligned} \, E\Big[e_{\lambda}^{Y_{1}+\cdots+Y_{k}}(t)\Big]&=E\Big[e_{\lambda}^{Y_{1}}(t)\Big] E\Big[e_{\lambda}^{Y_{2}}(t)\Big]\cdots E\Big[e_{\lambda}^{Y_{k}}(t)\Big] \\ &=\Big(p(e_{\lambda}(t)-1)+1\Big)^{k}=\sum_{i=0}^{k}\binom{k}{i}p^{i}(e_{\lambda}(t)-1)^{i}\nonumber\\ &=\sum_{i=0}^{k}\binom{k}{i}p^{i}i!\sum_{n=i}^{\infty}{n\brace i}_{\lambda}\frac{t^{n}}{n!},\quad (k\in\mathbb{N}). \nonumber \end{aligned}$$
(72)

Thus, by (72), we get

$$\begin{aligned} \, \sum_{n=0}^{\infty}E\Big[(Y_{1}+Y_{2}+\cdots+Y_{k})_{n,\lambda}\Big]\frac{t^{n}}{n!}&=\sum_{n=0}^{\infty}E\Big[(S_{k})_{n,\lambda}\Big]\frac{t^{n}}{n!} \\ &=\sum_{n=0}^{\infty}\bigg(\sum_{i=0}^{n}\binom{k}{i}p^{i}i!{n\brace i}_{\lambda}\bigg)\frac{t^{n}}{n!}. \nonumber \end{aligned}$$
(73)

From (46) and (73), we note that

$$\begin{aligned} \, \sum_{i=1}^{n}\binom{k}{i}p^{i}i!{n\brace i}_{\lambda}&=E\Big[(S_{k})_{n,\lambda}\Big]=\sum_{j=0}^{k}\binom{k}{j}E\Big[\triangle_{Y_{1},Y_{2},\dots,Y_{j}}(0)_{n,\lambda}\Big] \\ &=\sum_{j=0}^{k}\binom{k}{j}j!{n\brace j}_{Y,\lambda},\quad (n\in\mathbb{N}). \nonumber \end{aligned}$$
(74)

For \(n\in\mathbb{N}\), by (74), we get

$$\int_{0}^{1}\sum_{i=1}^{n}\binom{k}{i}p^{i}i!{n\brace i}_{\lambda}dp=\int_{0}^{1}\sum_{j=0}^{k}\binom{k}{j}j!{n\brace j}_{Y,\lambda}dp. $$
(75)

Thus, we see that

$$\frac{1}{k+1}\sum_{i=1}^{n}\binom{k+1}{i+1}i!{n \brace i}_{\lambda}=\int_{0}^{1}\sum_{j=0}^{k}\binom{k}{j}j!{n\brace j}_{Y,\lambda}dp. $$
(76)

By (71) and (76), we get

$$\begin{aligned} \, \frac{1}{k+1}\Big((1)_{n,\lambda}+(2)_{n,\lambda}+\cdots+(k)_{n,\lambda}\Big)&=\frac{1}{k+1}\sum_{i=1}^{n}\binom{k+1}{i+1}i!{n\brace i}_{\lambda} \\ &=\int_{0}^{1}\sum_{j=0}^{k}\binom{k}{j}j!{n\brace j}_{Y,\lambda}dp. \nonumber \end{aligned}$$
(77)

Therefore, by (77), we obtain the following theorem.

Theorem 2.14.

Let \(Y\) be the Bernoulli random variable with probability of success \(p\). For \(n,k\in\mathbb{N}\), we have

$$\frac{1}{k+1}\Big((1)_{n,\lambda}+(2)_{n,\lambda}+\cdots+(k)_{n,\lambda}\Big)= \int_{0}^{1}\sum_{j=0}^{k}\binom{k}{j}j!{n\brace j}_{Y,\lambda}dp.$$
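Since \({n\brace j}_{Y,\lambda}=p^{j}{n\brace j}_{\lambda}\) by (68), the integral in Theorem 2.14 can be evaluated termwise using \(\int_{0}^{1}p^{j}\,dp=\frac{1}{j+1}\), and the theorem reduces to a finite identity. The sketch below (ours; hypothetical names) checks it numerically.

```python
from math import comb, factorial

def falling_deg(x, n, lam):
    """Degenerate falling factorial (x)_{n,lam}."""
    prod = 1.0
    for i in range(n):
        prod *= x - i * lam
    return prod

def stirling2_deg(n, k, lam):
    """Degenerate Stirling number of the second kind {n brace k}_lam."""
    return sum((-1) ** (k - j) * comb(k, j) * falling_deg(j, n, lam)
               for j in range(k + 1)) / factorial(k)

def thm214_lhs(n, k, lam):
    """(1/(k+1)) * ((1)_{n,lam} + ... + (k)_{n,lam})."""
    return sum(falling_deg(m, n, lam) for m in range(1, k + 1)) / (k + 1)

def thm214_rhs(n, k, lam):
    """int_0^1 sum_j C(k,j) j! p^j {n brace j}_lam dp, evaluated termwise
    with int_0^1 p^j dp = 1/(j+1)."""
    return sum(comb(k, j) * factorial(j) * stirling2_deg(n, j, lam) / (j + 1)
               for j in range(k + 1))
```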

3. Conclusion

In this paper, we used generating functions to study the probabilistic degenerate Stirling numbers of the second kind associated with \(Y\) and the probabilistic degenerate Bell polynomials associated with \(Y\), as degenerate versions of the probabilistic Stirling numbers of the second kind associated with \(Y\) and the probabilistic Bell polynomials associated with \(Y\), respectively (see [2]). Here \(Y\) is a random variable satisfying the moment conditions in (15), so that the moment generating function of \(Y\) exists. In more detail, we derived some expressions and related identities for \({n \brace k}_{Y,\lambda}\) (see Theorems 2.1, 2.6, 2.10), an identity involving \(S_{Y,\lambda}(n,k|x)\) (see Theorem 2.9), some expressions for \(\phi_{n,\lambda}^{Y}(x)\) (see Theorems 2.2, 2.3, 2.6), a recurrence relation (see Theorem 2.4) and some properties (see Theorems 2.5, 2.7) for \(\phi_{n,\lambda}^{Y}(x)\), an identity (see Theorem 2.11) and an expression for \(\phi_{n,\lambda}^{Y}(x)\) (see Theorem 2.12) when \(Y\) is a Poisson random variable with parameter \(\alpha (>0)\), and an expression for \(\phi_{n,\lambda}^{Y}(x)\) (see Theorem 2.13) and an integral representation for \(\frac{1}{k+1}\Big((1)_{n,\lambda}+(2)_{n,\lambda}+\cdots+(k)_{n,\lambda}\Big)\) (see Theorem 2.14) when \(Y\) is a Bernoulli random variable with probability of success \(p\).

As one of our future projects, we would like to continue to study degenerate versions, \(\lambda\)-analogues and probabilistic versions of many special polynomials and numbers and to find their applications to physics, science and engineering as well as to mathematics.