# Explicit expressions and integral representations for the Stirling numbers. A probabilistic approach


## Abstract

We show that the Stirling numbers of the first and second kind can be represented in terms of moments of appropriate random variables. This allows us to obtain, in a unified way, a variety of old and new explicit expressions and integral representations of such numbers.

## Introduction

Let $$\mathbb {N}$$ be the set of positive integers and $$\mathbb {N}_{0}= \mathbb {N}\cup\{0\}$$. The Stirling numbers of the first and second kind, respectively denoted by $$s(n+k,k)$$ and $$S(n+k,k)$$, $$n,k\in \mathbb {N}_{0}$$, can be defined in various equivalent ways (see, for instance, Abramowitz and Stegun [1, p. 824] or Comtet [2, Chap. 5]). Here, we consider the following definitions in terms of their respective generating functions:

$$\frac{1}{k!} \biggl( \frac{\log(1+z)}{z} \biggr)^{k}= \sum_{n=0}^{ \infty}s(n+k,k)\frac{z^{n}}{(n+k)!}, \quad z\in \mathbb {C}, \vert z \vert < 1$$
(1)

and

$$\frac{1}{k!} \biggl( \frac{e^{z}-1}{z} \biggr)^{k}= \sum_{n=0}^{ \infty}S(n+k,k)\frac{z^{n}}{(n+k)!}, \quad z\in \mathbb {C}.$$
(2)

The Stirling numbers play an important role in many branches of mathematics and physics as ingredients in the computation of diverse quantities. For this reason, one can find in the literature many different explicit expressions and integral representations of such numbers (see the references in Sects. 3 and 4).

In this paper, we give a probabilistic perspective by showing that the Stirling numbers of both kinds are, in fact, the moments of appropriate random variables. This allows us to obtain in a unified way a variety of old and new explicit expressions and integral representations of these numbers.

It is worth noting that, very recently, various authors have established identities between several kinds of special numbers and moments of suitable random variables by using the generating functions of such random variables. In this respect, we mention that Kim et al. [3] connected extended Stirling polynomials and extended Bell polynomials with moments of Poisson random variables, and Kim et al. [4] showed that type 2 Bernoulli and Euler polynomials can be written as moments of sums of independent random variables having the Laplace distribution. In particular, the present paper is close in spirit to that by Kim et al. [5], where the authors use uniform and gamma distributed random variables to give probabilistic representations of classical and degenerate Stirling numbers, derangement numbers, higher-order Bernoulli numbers, and Bernoulli numbers of the second kind (see also Sect. 3).

More precisely, we will consider throughout the paper three sequences $$(U_{j})_{j\geq1}$$, $$(X_{j})_{j\geq1}$$, and $$(Y_{j})_{j\geq1}$$ of independent identically distributed random variables such that $$U_{1}$$ has the uniform distribution on $$[0,1]$$, and $$X_{1}$$ and $$Y_{1}$$ have the exponential density $$\rho(\theta)=e^{-\theta}$$, $$\theta\geq0$$. We assume that these three sequences are mutually independent and denote

$$S_{k}=U_{1}+\cdots+U_{k},\qquad T_{k}=X_{1}+\cdots+X_{k},\qquad W_{k}=Y _{1}U_{1}+\cdots+Y_{k}U_{k},\quad k\in \mathbb {N}_{0},$$
(3)

where it is understood that $$S_{0}=T_{0}=W_{0}=0$$. It is well known that the random variable $$T_{k}$$ has the gamma probability density (cf. Johnson et al. [6, p. 340])

$$\rho_{k}(\theta)=\frac{1}{(k-1)!} \theta ^{k-1}e^{-\theta},\quad\theta\geq0, k\in \mathbb {N}.$$
(4)

The probability densities of the random variables $$S_{k}$$ and $$W_{k}$$ are more involved and will be given in Lemma 2.1 below. For any $$x\in \mathbb {R}$$, we denote the rising factorial as

$$\langle x \rangle _{j}=x(x+1)\cdots(x+j-1),\quad j\in \mathbb {N}, \qquad\langle x \rangle _{0}=1.$$

Let $$z\in \mathbb {C}$$ and $$k\in \mathbb {N}_{0}$$. From (4), it is readily seen that

$$\mathbb {E}T_{k}^{n}=\langle k \rangle _{n},\quad n\in \mathbb {N}_{0}, \qquad \mathbb {E}e^{zT_{k}}= \frac{1}{(1-z)^{k}}, \quad \vert z \vert < 1,$$
(5)

where $$\mathbb {E}$$ stands for mathematical expectation. Using the independence and identical distribution of the random variables involved, we have from (2) and (3)

$$\sum_{n=0}^{\infty} \frac{\mathbb {E}S_{k}^{n}}{n!}z^{n}=\mathbb {E}e ^{zS_{k}}= \bigl( \mathbb {E}e^{zU_{1}} \bigr)^{k}= \biggl( \frac{e ^{z}-1}{z} \biggr)^{k}= \sum_{n=0}^{\infty}S(n+k,k) \frac{z^{n}}{\langle k+1 \rangle _{n}}.$$
(6)

On the other hand, by Fubini’s theorem, we have for $$|z|<1$$

\begin{aligned} \mathbb {E}e^{-zU_{1}Y_{1}}&= \int_{0}^{\infty} \mathbb {E}e^{-zU_{1} \theta}e^{-\theta} \,d\theta= \mathbb {E} \int_{0}^{\infty}e^{- \theta(1+zU_{1})} \,d\theta \\ &=\mathbb {E}\frac{1}{1+zU_{1}}=\frac{\log(1+z)}{z}, \end{aligned}
(7)

thus implying, by virtue of (1) and (3),

\begin{aligned} \sum_{n=0}^{\infty} \frac{(-1)^{n} \mathbb {E}W_{k}^{n}}{n!}z^{n}&= \mathbb {E}e^{-zW_{k}}= \bigl( \mathbb {E}e^{-zU_{1}Y_{1}} \bigr)^{k} \\ &= \biggl( \frac{\log(1+z)}{z} \biggr)^{k}= \sum _{n=0}^{\infty}s(n+k,k)\frac{z ^{n}}{\langle k+1 \rangle _{n}},\quad \vert z \vert < 1. \end{aligned}
(8)

Identifying coefficients in (6) and (8), we can write the Stirling numbers of the first and second kind in terms of moments of appropriate random variables as

$$S(n+k,k)=\binom{n+k}{k}\mathbb {E}S_{k}^{n}, \qquad s(n+k,k)=(-1)^{n}\binom{n+k}{k}\mathbb {E}W_{k}^{n}, \quad n,k\in \mathbb {N}_{0}.$$
(9)

The first equality in (9) was obtained by Sun [7], whereas the second one can be found in [8].
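The representations in (9) are easy to probe numerically. The following sketch (ours, not part of the paper) compares a Monte Carlo estimate of $$\mathbb {E}S_{k}^{n}$$ with $$S(n+k,k)/\binom{n+k}{k}$$, where $$S(n+k,k)$$ is computed via the standard recurrence; the function names are our own.

```python
import random
from math import comb

def stirling2(n, k):
    # Stirling numbers of the second kind via the standard
    # recurrence S(n,k) = k*S(n-1,k) + S(n-1,k-1).
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def mc_uniform_sum_moment(n, k, trials=200_000, seed=0):
    # Monte Carlo estimate of E[S_k^n], where S_k = U_1 + ... + U_k
    # is a sum of k independent Uniform(0,1) random variables.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += sum(rng.random() for _ in range(k)) ** n
    return total / trials

n, k = 3, 4
exact = stirling2(n + k, k) / comb(n + k, k)  # S(7,4)/C(7,4) = 350/35 = 10
print(exact, mc_uniform_sum_moment(n, k))
```

With a few hundred thousand samples the estimate agrees with the exact value to two or three decimal places.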

The probabilistic approach developed in this paper is motivated by the following considerations. Denote by i the imaginary unit. Replacing z by $$itT_{k+1}$$, $$t\in \mathbb {R}$$, in (6) and taking into account (5), we get, at least formally,

$$\mathbb {E}e^{itS_{k}T_{k+1}}=\sum_{n=0}^{\infty}S(n+k,k) \frac{(it)^{n} \mathbb {E}T_{k+1}^{n}}{\langle k+1 \rangle _{n}}=\sum_{n=0}^{\infty}S(n+k,k) (it)^{n}.$$
(10)

A similar procedure in Eq. (8) leads us to

$$\mathbb {E}e^{-itW_{k}T_{k+1}}= \sum_{n=0}^{\infty}s(n+k,k) \frac{(it)^{n} \mathbb {E}T_{k+1}^{n}}{\langle k+1 \rangle _{n}}=\sum_{n=0}^{ \infty}s(n+k,k) (it)^{n}.$$
(11)

Since $$0< S_{k}< k$$, Eq. (10) is a true power series for $$|t|<1/k$$, whereas Eq. (11) is only a formal power series. In any case, the left-hand sides in (10) and (11) always make sense and constitute the first ingredient of our approach. The second ingredient is the generalized difference operator defined in the following section. As shown in Sects. 3 and 4, Eqs. (10) and (11) give us a variety of explicit expressions and integral representations for the Stirling numbers.
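Both ingredients above rest on the gamma moments in (5). As a minimal numerical check (ours, not in the paper), the identity $$\mathbb {E}T_{k}^{n}=\langle k \rangle _{n}$$ can be tested by simulating sums of standard exponential variables:

```python
import random

def rising(x, j):
    # Rising factorial <x>_j = x(x+1)...(x+j-1), with <x>_0 = 1.
    p = 1
    for i in range(j):
        p *= x + i
    return p

def mc_gamma_moment(k, n, trials=200_000, seed=1):
    # Monte Carlo estimate of E[T_k^n], where T_k is a sum of k
    # independent standard exponentials, i.e. gamma(k, 1).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += sum(rng.expovariate(1.0) for _ in range(k)) ** n
    return total / trials

k, n = 3, 2
print(rising(k, n), mc_gamma_moment(k, n))  # 12 and roughly 12
```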

## Technical results

For any function $$f:\mathbb {R}\to \mathbb {R}$$, we consider the difference operator

$$\Delta_{y}^{1}f(x)=f(x+y)-f(x), \qquad \bigl(\Delta _{y}^{0}f(x)=f(x) \bigr),\quad x,y\in \mathbb {R},$$

together with the iterates

$$\Delta_{y_{1},\ldots,y_{k}}^{k} f(x)= \bigl( \Delta_{y_{1}}^{1} \circ\cdots\circ\Delta_{y_{k}}^{1} \bigr) f(x),\quad x,y_{1}, \ldots,y_{k} \in \mathbb {R}, k\in \mathbb {N}.$$

Such iterates were considered by Mrowiec et al. [9] in connection with Wright-convex functions of order k and by Dilcher and Vignat [10] in the context of convolution identities for Bernoulli polynomials. He [11] used these iterates for $$y_{1}=\cdots=y_{k}=\beta$$ to deal with Stirling numbers.

Let $$x,y_{1},\ldots, y_{k}\in \mathbb {R}$$ and $$k\in \mathbb {N}$$. It was shown in [12] that

$$\Delta_{y_{1},\ldots,y_{k}}^{k} f(x)=(-1)^{k}f(x)+ \sum_{j=1}^{k} (-1)^{k-j} \sum _{I_{k}(j)} f(x+y_{i_{1}}+\cdots+y_{i_{j}}),$$
(12)

where

$$I_{k}(j)= \bigl\{ (i_{1},\ldots,i_{j}): 1\leq i_{1}< \cdots< i_{j}\leq k \bigr\} ,\quad j=1, \ldots ,k.$$

In particular, the usual kth forward difference of f is defined as

$$\Delta^{k} f(x):=\Delta_{1,\ldots,1}^{k}f(x)= \sum_{j=0}^{k} \binom{k}{j}(-1)^{k-j}f(x+j).$$
(13)
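A short sketch (our illustration, not part of the paper) of the operator $$\Delta_{y_{1},\ldots,y_{k}}^{k}$$: the recursive definition and the subset expansion (12) must agree, and with $$y_{1}=\cdots=y_{k}=1$$ one recovers the forward difference (13).

```python
from itertools import combinations

def gen_diff(f, x, ys):
    # Iterated difference operator: apply Δ^1_y f(x) = f(x+y) - f(x)
    # successively in y_1, ..., y_k.
    if not ys:
        return f(x)
    return gen_diff(f, x + ys[0], ys[1:]) - gen_diff(f, x, ys[1:])

def gen_diff_expansion(f, x, ys):
    # Inclusion-exclusion expansion of Eq. (12): a signed sum of f
    # over all subset sums of {y_1, ..., y_k}.
    k = len(ys)
    total = (-1) ** k * f(x)
    for j in range(1, k + 1):
        for idx in combinations(range(k), j):
            total += (-1) ** (k - j) * f(x + sum(ys[i] for i in idx))
    return total

f = lambda t: t ** 5
print(gen_diff(f, 0.5, (0.3, 1.7, 2.0)))
print(gen_diff_expansion(f, 0.5, (0.3, 1.7, 2.0)))  # same value
print(gen_diff(lambda t: t ** 3, 0, (1, 1)))        # Δ² t³ at 0, i.e. 6
```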

If f is k times differentiable, we have (cf. [12])

$$\Delta_{y_{1},\ldots,y_{k}}^{k} f(x)=y_{1}\cdots y_{k} \mathbb {E}f ^{(k)}(x+y_{1}U_{1}+ \cdots+ y_{k} U_{k}).$$
(14)

Denote by $$1_{A}$$ the indicator function of the set A, by $$x_{+}=\max(0,x)$$, and by

$$g_{k,\theta}(y)=(y-\theta)_{+}^{k-1},\quad \theta, y\in \mathbb {R},$$
(15)

the truncated power function. The following auxiliary result gives us in a unified way the probability densities of the random variables $$S_{k}$$ and $$W_{k}$$ defined in (3).

### Lemma 2.1

Let $$k\in \mathbb {N}$$ and $$y_{1},\ldots,y_{k}\in \mathbb {R}\setminus \{0\}$$. The random variable $$y_{1}U_{1}+\cdots+y_{k}U_{k}$$ has probability density

$$d_{k}(\theta)=\frac{1}{(k-1)!y_{1}\cdots y_{k}} \Delta _{y_{1},\ldots,y_{k}}^{k} g_{k,\theta}(0),\quad\theta\in \mathbb {R}.$$
(16)

In particular, the probability density of $$S_{k}$$ is given by

$$\tau_{k}(\theta)=\frac{1}{(k-1)!}\sum _{j=1}^{k} \binom{k}{j}(-1)^{k-j}(j- \theta)_{+}^{k-1},\quad0\leq\theta\leq k.$$
(17)

Moreover, the probability density of $$W_{k}$$ is given by

$$\nu_{k}(\theta)=\frac{1}{(k-1)!} \int_{(0,\infty)^{k}} \Delta_{y_{1},\ldots,y_{k}}^{k} g_{k,\theta}(0)\frac{e^{-(y_{1}+ \cdots+y_{k})}}{y_{1}\cdots y_{k}} \,dy_{1}\cdots \,dy_{k},\quad\theta\geq0.$$
(18)

### Proof

Let $$\theta\in \mathbb {R}$$ and set $$f^{(k)}(y)=1_{(\theta,\infty)}(y)$$, $$y\in \mathbb {R}$$. Obviously,

$$f(y)=\frac{(y-\theta)_{+}^{k}}{k!}, \quad y\in \mathbb {R}.$$
(19)

We therefore have from (14), with $$x=0$$,

$$P(y_{1}U_{1}+\cdots+y_{k}U_{k}>\theta )=\mathbb {E}1_{(\theta,\infty )}(y_{1}U_{1}+\cdots +y_{k}U_{k})=\frac{1}{y_{1}\cdots y_{k}} \Delta_{y_{1},\ldots, y_{k}}^{k} f(0).$$

Differentiating this expression with respect to θ and taking into account (12) and (19), we obtain (16). Choosing $$y_{1}=\cdots=y_{k}=1$$ in (16) and recalling (13) and (15), we get (17). Finally, since the sequences $$(Y_{j})_{j\geq1}$$ and $$(U_{j})_{j\geq1}$$ are independent, we have from (16) and Fubini’s theorem

\begin{aligned} & P(Y_{1}U_{1}+\cdots+Y_{k}U_{k}> \theta) \\ &\quad= \int_{(0,\infty)^{k}} P(y_{1}U_{1}+\cdots +y_{k}U_{k}>\theta)e ^{-(y_{1}+\cdots+y_{k})} \,dy_{1} \cdots\,dy_{k} \\ &\quad= \int_{\theta}^{\infty} \biggl( \int_{(0,\infty)^{k}} d_{k}(z) e^{-(y _{1}+\cdots+y_{k})} \,dy_{1}\cdots\,dy_{k} \biggr) \,dz. \end{aligned}

Differentiating this expression with respect to θ and using (16), we show (18). The proof is complete. □

We point out that Eq. (17) is well known (see, for instance, Feller [13, p. 27]).
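As a numerical sanity check (ours, not part of the paper), the density (17) can be evaluated directly: it should integrate to 1 over $$[0,k]$$ and have mean $$k/2$$, and for $$k=2$$ it is the triangular density.

```python
from math import comb, factorial

def tau(k, theta):
    # Irwin-Hall density from Eq. (17): density of a sum of k
    # independent Uniform(0,1) random variables.
    if theta < 0 or theta > k:
        return 0.0
    total = 0.0
    for j in range(1, k + 1):
        total += comb(k, j) * (-1) ** (k - j) * max(j - theta, 0.0) ** (k - 1)
    return total / factorial(k - 1)

k, steps = 3, 3000
h = k / steps
mids = [(i + 0.5) * h for i in range(steps)]       # midpoint rule on [0, k]
mass = h * sum(tau(k, t) for t in mids)            # should be close to 1
mean = h * sum(t * tau(k, t) for t in mids)        # should be close to k/2
print(mass, mean)
```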

On the other hand, recall that the characteristic function or Fourier transform of a random variable X is defined as

$$\phi(t)=\mathbb {E}e^{itX},\quad t\in \mathbb {R}.$$

It is well known that $$\phi(t)$$ univocally determines the law of X (see, for instance, Billingsley [14, p. 346]). If, in addition, $$\phi(t)$$ is absolutely integrable, then X has the probability density $$\rho(\theta)$$ given by (cf. Billingsley [14, p. 347])

$$\rho(\theta)=\frac{1}{2\pi} \int_{-\infty}^{\infty}e^{-it\theta }\phi(t) \,dt, \quad \theta \in \mathbb {R}.$$
(20)

### Lemma 2.2

Let $$k\in \mathbb {N}_{0}$$ and $$t\in \mathbb {R}$$. Then

$$\mathbb {E}e^{itS_{k}T_{k+1}}=\mathbb {E}e^{it (X_{1}+2X_{2}+\cdots+kX _{k})}=\frac{1}{(1-it)(1-i2t)\cdots(1-ikt)}.$$

### Proof

Using Fubini’s theorem, (4), and (6), we have

\begin{aligned} \mathbb {E}e^{itS_{k}T_{k+1}} &=\frac{1}{k!} \int_{0}^{\infty} \mathbb {E}e^{it\theta S_{k}}\theta ^{k} e^{-\theta} \,d\theta= \frac{1}{(it)^{k}k!} \int_{0}^{\infty} \bigl( e^{it\theta}-1 \bigr) ^{k}e^{-\theta} \,d\theta \\ &=\frac{1}{(1-it)(1-i2t)\cdots(1-ikt)}, \end{aligned}

where the last equality follows by applying successively integration by parts. On the other hand, since $$(X_{j})_{j\geq1}$$ is a sequence of independent identically distributed random variables, we have from (4)

$$\mathbb {E}e^{it(X_{1}+2X_{2}+\cdots+kX_{k})}=\mathbb {E}e^{itX_{1}} \mathbb {E}e^{i2tX_{2}} \cdots \mathbb {E}e^{iktX_{k}}=\frac{1}{(1-it)(1-i2t) \cdots(1-ikt)}.$$

The proof is complete. □
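Lemma 2.2 states a distributional identity that can be probed by simulation. In the following sketch (ours, not the paper's), a Monte Carlo estimate of the characteristic function of $$S_{k}T_{k+1}$$ is compared with the closed form on the right-hand side.

```python
import cmath
import random

def phi_closed(k, t):
    # Right-hand side of Lemma 2.2.
    val = complex(1.0)
    for j in range(1, k + 1):
        val /= 1 - 1j * j * t
    return val

def phi_mc(k, t, trials=100_000, seed=2):
    # Monte Carlo estimate of E[exp(i t S_k T_{k+1})].
    rng = random.Random(seed)
    acc = complex(0.0)
    for _ in range(trials):
        s = sum(rng.random() for _ in range(k))               # S_k
        g = sum(rng.expovariate(1.0) for _ in range(k + 1))   # T_{k+1}
        acc += cmath.exp(1j * t * s * g)
    return acc / trials

k, t = 3, 0.4
print(phi_closed(k, t))
print(phi_mc(k, t))  # close to the closed form
```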

### Lemma 2.3

Let $$k\in \mathbb {N}_{0}$$ and $$t\in \mathbb {R}$$. Then

$$\mathbb {E}e^{-it W_{k} T_{k+1}}= \mathbb {E}e^{-it(Y_{1}T_{1}+\cdots+Y _{k}T_{k})}=\mathbb {E} \frac{1}{(1+itT_{1})(1+itT_{2})\cdots(1+itT _{k})}.$$

### Proof

Observe that Eq. (7) also holds for $$z=it\theta$$, $$t,\theta\in \mathbb {R}$$. Therefore, by Fubini’s theorem and (3), we have

\begin{aligned} \mathbb {E}e^{-it W_{k}T_{k+1}}&=\frac{1}{k!} \int_{0}^{\infty} \mathbb {E}e^{-it \theta W_{k}}\theta ^{k} e^{-\theta} \,d\theta \\ &=\frac{1}{k!} \int_{0}^{\infty} \bigl( \mathbb {E}e^{-it\theta U _{1}Y_{1}} \bigr)^{k} \theta^{k}e^{-\theta} \,d\theta= \frac{1}{(it)^{k}k!} \int_{0}^{\infty}\log^{k} (1+it\theta )e^{- \theta} \,d\theta. \end{aligned}
(21)

Applying successive integration by parts and interchanging integral and expectation signs, we obtain

\begin{aligned} & \frac{1}{(it)^{k}k!} \int_{0}^{\infty}\log^{k} (1+it\theta )e^{- \theta} \,d\theta \\ &\quad=\frac{1}{(it)^{k-1}(k-1)!} \int_{0}^{\infty} \log^{k-1}(1+it \theta) \frac{e^{-\theta}}{1+it\theta} \,d\theta \\ &\quad=\frac{1}{(it)^{k-1}(k-1)!} \mathbb {E} \int_{0}^{\infty}\log^{k-1}(1+it \theta )e^{-\theta(1+itT_{1})} \,d\theta \\ &\quad=\frac{1}{(it)^{k-2}(k-2)!}\mathbb {E}\frac{1}{1+itT_{1}} \int_{0} ^{\infty}\log^{k-2}(1+it\theta) \frac{e^{-\theta(1+itT_{1})}}{1+it \theta} \,d\theta \\ &\quad=\frac{1}{(it)^{k-2}(k-2)!}\mathbb {E}\frac{1}{1+itT_{1}} \int_{0} ^{\infty}\log^{k-2}(1+it\theta )e^{-\theta(1+itT_{2})} \,d\theta \\ &\quad=\cdots= \mathbb {E}\frac{1}{(1+itT_{1})(1+itT_{2})\cdots(1+itT _{k})}. \end{aligned}

This, together with (21), shows the first equality in Lemma 2.3. On the other hand, since the random variables $$(Y_{j})_{j\geq1}$$ are independent and exponentially distributed, we have for any $$\theta_{1},\ldots, \theta_{k}\in \mathbb {R}$$

$$\mathbb {E}e^{-it(Y_{1}\theta_{1}+\cdots +Y_{k}\theta_{k})}=\mathbb {E}e ^{-it\theta_{1}Y_{1}}\cdots \mathbb {E}e^{-it\theta_{k}Y_{k}}=\frac{1}{(1+it \theta_{1})\cdots(1+it\theta_{k})}.$$
(22)

Replacing $$\theta_{j}$$ by $$T_{j}$$, $$j=1,\ldots, k$$, in (22) and then taking expectations, we obtain the second equality in Lemma 2.3. The proof is complete. □

## Explicit expressions

The following result gives probabilistic representations and explicit expressions for the Stirling numbers of the second kind. Denote $$I_{n}(x)=x^{n}$$, $$x\in \mathbb {R}$$, $$n\in \mathbb {N}_{0}$$.

### Theorem 3.1

For any $$n,k\in \mathbb {N}_{0}$$, we have

\begin{aligned} S(n+k,k)&=\binom{n+k}{k}\mathbb {E}S_{k}^{n} \\ &= \frac{1}{n!} \mathbb {E} ( X_{1}+2X_{2}+ \cdots+ kX_{k} )^{n} \\ & = \frac{(n+k)!}{k!}\sum_{j_{1}+\cdots+j_{k}=n} \frac{1}{(j_{1}+1)! \cdots(j_{k}+1)!} \\ &=\sum_{j_{1}+\cdots+j_{k}=n}1^{j_{1}}2^{j_{2}} \cdots k^{j_{k}}= \frac{1}{k!}\Delta^{k} I_{n+k}(0). \end{aligned}
(23)

### Proof

Let $$k\in \mathbb {N}_{0}$$. The first equality was shown in (9). By Lemma 2.2, the random variables $$S_{k}T_{k+1}$$ and $$X_{1}+2X_{2}+ \cdots+kX_{k}$$ have the same characteristic function and, therefore, the same law and the same moments. Hence, we have from (5)

$$\frac{1}{n!}\mathbb {E}(X_{1}+2X_{2}+\cdots +kX_{k})^{n}= \frac{1}{n!}\mathbb {E}S_{k}^{n} \mathbb {E}T_{k+1}^{n}=\binom{n+k}{k} \mathbb {E}S_{k}^{n}.$$

Since $$\mathbb {E}U_{1}^{j}=1/(j+1)$$, $$j\in \mathbb {N}_{0}$$, we have from the independence and identical distribution of the random variables involved

$$\mathbb {E}S_{k}^{n}=\sum_{j_{1}+\cdots+j_{k}=n} \frac{n!}{j_{1}! \cdots j_{k}!} \mathbb {E}U_{1}^{j_{1}}\cdots \mathbb {E}U_{k}^{j_{k}}=n! \sum_{j_{1}+\cdots+j_{k}=n} \frac{1}{(j_{1}+1)!\cdots(j_{k}+1)!},$$

thus showing the third equality in (23). On the other hand, choosing $$k=1$$ in (5) we get $$\mathbb {E}X_{1}^{n}=\langle1 \rangle _{n}=n!$$, $$n\in \mathbb {N}$$. Since the random variables $$(X_{j})_{j \geq1}$$ are independent and identically distributed, we thus have

\begin{aligned} & \mathbb {E}(X_{1}+2X_{2}+\cdots +kX_{k})^{n} \\ &\quad= \sum_{j_{1}+\cdots+j _{k}=n} \frac{n!}{j_{1}!\cdots j_{k}!} 1^{j_{1}}\mathbb {E}X_{1}^{j_{1}}2^{j _{2}} \mathbb {E}X_{2}^{j_{2}} \cdots k^{j_{k}} \mathbb {E}X_{k}^{j_{k}} \\ &\quad=n! \sum_{j_{1}+\cdots+j_{k}=n} 1^{j_{1}}2^{j_{2}} \cdots k^{j_{k}}. \end{aligned}

Finally, choosing $$f(x)=I_{n+k}(x)$$ in (13) and (14), we obtain

$$\frac{1}{k!}\Delta^{k} I_{n+k}(0)=\frac{1}{k!} \mathbb {E}I_{n+k}^{(k)}(S _{k})=\binom{n+k}{k} \mathbb {E}S_{k}^{n}.$$

This shows the last equality in (23) and completes the proof. □

As already mentioned in the Introduction, the first equality in (23) was shown by Sun [7]. The second and third ones seem to be new. The fourth is well known and can be found in Comtet [2, Chap. 5] (see also Belbachir et al. [15] for a generalization of this formula to r-Stirling numbers of the second kind). The last one is often used to define the Stirling numbers of the second kind (see, for instance, Abramowitz and Stegun [1, p. 824]). For other explicit expressions we refer the reader to Cakić et al. [16], He [11], and Sun [17]. We finally mention that the moments

$$\mathbb {E}(X_{1}+2X_{2}+\cdots+kX_{k}-k)^{n}$$

can be used to describe higher-order convolutions of derangement polynomials, as shown by Kim et al. [5].
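The explicit sums in (23) are straightforward to cross-check in code. The sketch below (ours, not part of the paper) evaluates the last three formulas of (23); `compositions` enumerates the weak compositions $$j_{1}+\cdots+j_{k}=n$$.

```python
from fractions import Fraction
from math import comb, factorial

def compositions(n, k):
    # All tuples (j_1, ..., j_k) of nonnegative integers summing to n.
    if k == 0:
        if n == 0:
            yield ()
        return
    for first in range(n + 1):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def S_delta(n, k):
    # Last formula in (23): (1/k!) * Δ^k I_{n+k}(0).
    return sum((-1) ** (k - j) * comb(k, j) * j ** (n + k)
               for j in range(k + 1)) // factorial(k)

def S_powers(n, k):
    # Fourth formula in (23): sum of 1^{j_1} 2^{j_2} ... k^{j_k}.
    total = 0
    for js in compositions(n, k):
        p = 1
        for base, e in enumerate(js, start=1):
            p *= base ** e
        total += p
    return total

def S_factorials(n, k):
    # Third formula in (23), computed exactly with rationals.
    total = Fraction(0)
    for js in compositions(n, k):
        denom = 1
        for j in js:
            denom *= factorial(j + 1)
        total += Fraction(1, denom)
    return int(Fraction(factorial(n + k), factorial(k)) * total)

n, k = 3, 4
print(S_delta(n, k), S_powers(n, k), S_factorials(n, k))  # all equal 350
```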

The analogous result for the Stirling numbers of the first kind is the following.

### Theorem 3.2

For any $$n,k\in \mathbb {N}_{0}$$, we have

\begin{aligned} s(n+k,k)={}&(-1)^{n}\binom{n+k}{k} \mathbb {E}W_{k}^{n} \\ ={}& \frac{(-1)^{n}}{n!} \mathbb {E}(Y_{1}T_{1}+ \cdots+Y_{k}T_{k})^{n} \\ ={}&(-1)^{n} \frac{(n+k)!}{k!}\sum_{j_{1}+\cdots+j_{k}=n} \frac{1}{(j _{1}+1)\cdots(j_{k}+1)} \\ ={}&(-1)^{n}\sum_{j_{1}+\cdots+j_{k}=n}\langle 1 \rangle _{j_{1}} \langle 2+j_{1} \rangle _{j_{2}} \cdots\langle k+j_{1}+\cdots+j_{k-1} \rangle _{j_{k}} \\ ={}&\frac{(-1)^{n}}{k!} \int_{(0,\infty)^{k}}\Delta^{k}_{y_{1},\ldots ,y_{k}}I_{n+k}(0) \frac{e^{-(y_{1}+\cdots+y_{k})}}{y_{1}\cdots y_{k}} \,dy_{1}\cdots\,dy_{k}. \end{aligned}
(24)

### Proof

Let $$k\in \mathbb {N}_{0}$$. The first equality was shown in (9). As follows from Lemma 2.3, the random variables $$W_{k}T_{k+1}$$ and $$Y_{1}T_{1}+\cdots+Y_{k}T_{k}$$ have the same characteristic function and, therefore, the same law and the same moments. Thus, we have from (5)

$$\frac{(-1)^{n}}{n!} \mathbb {E}(Y_{1}T_{1}+\cdots +Y_{k}T_{k})^{n}= \frac{(-1)^{n}}{n!} \mathbb {E}W_{k}^{n}\mathbb {E}T_{k+1}^{n}= (-1)^{n} \binom{n+k}{k}\mathbb {E}W_{k}^{n}.$$

As in the proof of Theorem 3.1, $$\mathbb {E}Y_{1}^{j}=\mathbb {E}X _{1}^{j}=j!$$, $$j\in \mathbb {N}_{0}$$. We therefore have from (3)

\begin{aligned} \mathbb {E}W_{k}^{n}&=\sum _{j_{1}+\cdots+j_{k}=n}\frac{n!}{j_{1}! \cdots j_{k}!}\mathbb {E}U_{1}^{j_{1}} \mathbb {E}Y_{1}^{j_{1}}\cdots \mathbb {E}U_{k}^{j_{k}} \mathbb {E}Y_{k}^{j_{k}} \\ &=n! \sum_{j_{1}+\cdots+j_{k}=n} \frac{1}{(j_{1}+1)\cdots(j_{k}+1)}, \end{aligned}

thus showing the third equality in (24). As for the fourth equality, we condition on $$(T_{1},\ldots,T_{k})$$ and use the independence of the sequences $$(X_{j})_{j\geq1}$$ and $$(Y_{j})_{j\geq1}$$, together with $$\mathbb {E}Y_{i}^{j_{i}}=j_{i}!$$, to obtain

\begin{aligned} \mathbb {E}(Y_{1}T_{1}+\cdots +Y_{k}T_{k})^{n}&= \sum _{j_{1}+\cdots+j_{k}=n} \frac{n!}{j_{1}!\cdots j_{k}!} \mathbb {E}Y_{1}^{j_{1}}\cdots \mathbb {E}Y_{k}^{j_{k}}\, \mathbb {E}\bigl(T_{1}^{j_{1}}\cdots T_{k}^{j_{k}}\bigr) \\ &=n! \sum_{j_{1}+\cdots+j_{k}=n} \mathbb {E}\bigl(T_{1}^{j_{1}}\cdots T_{k}^{j_{k}}\bigr). \end{aligned}

Since the random vector $$(T_{1},\ldots,T_{k})$$ has the joint density $$e^{-t_{k}}$$ on the set $$\{0< t_{1}< \cdots< t_{k}\}$$, integrating successively with respect to $$t_{1},\ldots,t_{k}$$ yields

$$\mathbb {E}\bigl(T_{1}^{j_{1}}\cdots T_{k}^{j_{k}}\bigr)=\langle 1 \rangle _{j_{1}}\langle 2+j_{1} \rangle _{j_{2}}\cdots\langle k+j _{1}+\cdots+j_{k-1} \rangle _{j_{k}},$$

which shows the fourth equality in (24). Finally, choosing $$f(y)=I_{n+k}(y)$$ and $$x=0$$ in (14), we have

\begin{aligned} &(-1)^{n} \binom{n+k}{k}\mathbb {E}(y_{1}U_{1}+ \cdots+y_{k}U_{k})^{n} \\ &\quad=\frac{(-1)^{n}}{k! y_{1}\cdots y_{k}} \Delta_{y_{1},\ldots,y _{k}}^{k} I_{n+k}(0),\quad y_{1},\ldots, y_{k}>0. \end{aligned}

Replacing in this formula $$y_{j}$$ by $$Y_{j}$$, $$j=1,\ldots,k$$, and then taking expectations, we obtain from (3)

$$(-1)^{n} \binom{n+k}{k} \mathbb {E}W_{k}^{n}= \frac{(-1)^{n}}{k!} \int_{(0,\infty)^{k}} \Delta^{k}_{y_{1},\ldots,y_{k}}I_{n+k}(0) \frac{e ^{-(y_{1}+\cdots+y_{k})}}{y_{1}\cdots y_{k}} \,dy_{1}\cdots\,dy_{k}.$$

This shows the last equality in (24) and completes the proof. □

The first probabilistic representation in (24) was given in [8]. Kim et al. [5] extended such a representation to degenerate Stirling numbers of the first kind. The second equality in (24) seems to be new. The third expression in (24) was obtained by Adamchik [18] (see also Qi [19]), and a multitude of expressions of similar type can be found in Katriel [20]. Formulas analogous to the fourth one in (24) were given by Cakić et al. [16] and Sun [17]. The last formula in (24) is new. In contrast to the last expression in (23), this formula is not useful for computations; however, it shows that the Stirling numbers of both kinds can be written in terms of generalized difference operators. Finally, other explicit expressions are given in He [11], Qi [23], and Blagouchine [21].
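The second equality in (24), expanded by the multinomial theorem, reduces $$s(n+k,k)$$ to the joint moments $$\mathbb {E}(T_{1}^{j_{1}}\cdots T_{k}^{j_{k}})=\langle 1 \rangle _{j_{1}}\langle 2+j_{1} \rangle _{j_{2}}\cdots\langle k+j_{1}+\cdots+j_{k-1} \rangle _{j_{k}}$$ of the partial sums $$T_{i}$$, obtained by iterated integration of the joint density $$e^{-t_{k}}$$ on $$\{0< t_{1}< \cdots< t_{k}\}$$. The following sketch (ours, not the paper's) checks the resulting sum against the recurrence $$s(n,k)=s(n-1,k-1)-(n-1)s(n-1,k)$$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s1(n, k):
    # Signed Stirling numbers of the first kind via the recurrence
    # s(n,k) = s(n-1,k-1) - (n-1)*s(n-1,k).
    if n == k:
        return 1
    if k <= 0 or k > n:
        return 0
    return s1(n - 1, k - 1) - (n - 1) * s1(n - 1, k)

def rising(x, j):
    # Rising factorial <x>_j = x(x+1)...(x+j-1).
    p = 1
    for i in range(j):
        p *= x + i
    return p

def compositions(n, k):
    # Weak compositions of n into k nonnegative parts.
    if k == 0:
        if n == 0:
            yield ()
        return
    for first in range(n + 1):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def s1_moment_sum(n, k):
    # (-1)^n times the sum over compositions of the joint moments
    # E[T_1^{j_1} ... T_k^{j_k}] = <1>_{j_1} <2+j_1>_{j_2} ...
    total = 0
    for js in compositions(n, k):
        p, shift = 1, 0
        for i, j in enumerate(js, start=1):
            p *= rising(i + shift, j)
            shift += j
        total += p
    return (-1) ** n * total

n, k = 3, 3
print(s1(n + k, k), s1_moment_sum(n, k))  # both equal -225
```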

## Integral representations

The integral representations for the Stirling numbers will be mainly based on Eq. (20), which requires the absolute integrability of the characteristic function $$\phi(t)$$ under consideration. This is the reason why we consider integral representations for $$S(n+k,k)$$ and $$s(n+k,k)$$ for $$k=2,3,\ldots$$ .

### Theorem 4.1

For any $$n\in \mathbb {N}_{0}$$ and $$k=2,3,\ldots$$ , we have

\begin{aligned} S(n+k,k)&=\binom{n+k}{k} \int_{0}^{k} \theta^{n} \tau _{k}(\theta) \,d \theta \\ &=\frac{1}{2\pi}\binom{n+k}{k} \int_{0}^{k} \theta^{n} \,d\theta \int_{-\infty}^{\infty}e^{-it\theta} \biggl( \frac{e^{it}-1}{it} \biggr) ^{k} \,dt \\ & =\frac{1}{n!2\pi} \int_{0}^{\infty}\theta^{n} \,d \theta \int_{-\infty}^{\infty} \frac{e^{-it\theta}}{(1-it)(1-i2t)\cdots (1-ikt)} \,dt, \end{aligned}
(25)

where $$\tau_{k}(\theta)$$ is defined in (17).

### Proof

The first equality in (25) readily follows from (17) and the first equality in Theorem 3.1. As seen in (6), the characteristic function of $$S_{k}$$ is given by

$$\tilde{\phi}_{k}(t)=\mathbb {E}e^{itS_{k}}= \biggl( \frac{e^{it}-1}{it} \biggr)^{k},\quad t\in \mathbb {R}.$$

Thus, the second equality in (25) is a consequence of the first one and Eq. (20). Finally, denote by $$h_{k}(\theta)$$, $$\theta\geq0$$, the probability density of the random variable $$X_{1}+2X_{2}+\cdots+kX_{k}$$. By (20) and Lemma 2.2, we have

$$h_{k}(\theta)=\frac{1}{2\pi} \int_{-\infty}^{\infty}\frac{e^{-it \theta}}{(1-it)(1-i2t)\cdots(1-ikt)} \,dt.$$
(26)

On the other hand, the second equality in Theorem 3.1 gives us

$$S(n+k,k)=\frac{1}{n!} \int_{0}^{\infty}\theta^{n}h_{k}( \theta) \,d \theta.$$

This, together with (26), shows the last integral representation in (25) and completes the proof. □

The first and third integral expressions in (25) appear to be new, while an integral representation very similar to the second one was considered by Wegner [22] and used to locate the maximum of $$S(n+k,k)$$.
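The first representation in (25) is easy to evaluate numerically. The following sketch (ours, not part of the paper) approximates the integral with a midpoint rule and compares the result with $$S(7,4)=350$$.

```python
from math import comb, factorial

def tau(k, theta):
    # Irwin-Hall density from Eq. (17).
    if theta < 0 or theta > k:
        return 0.0
    total = 0.0
    for j in range(1, k + 1):
        total += comb(k, j) * (-1) ** (k - j) * max(j - theta, 0.0) ** (k - 1)
    return total / factorial(k - 1)

def stirling2_integral(n, k, steps=20_000):
    # First representation in (25): C(n+k,k) * ∫_0^k θ^n τ_k(θ) dθ,
    # approximated with a midpoint-rule quadrature.
    h = k / steps
    acc = sum(((i + 0.5) * h) ** n * tau(k, (i + 0.5) * h)
              for i in range(steps))
    return comb(n + k, k) * h * acc

print(stirling2_integral(3, 4))  # close to S(7,4) = 350
```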

Recalling (3) and Lemma 2.3, the characteristic function of $$Y_{1}T_{1}+\cdots+Y_{k}T_{k}$$ is given by

\begin{aligned} \phi_{k}(t)&=\mathbb {E}e^{it(Y_{1}T_{1}+\cdots+Y_{k}T_{k})} \\ &= \int_{(0,\infty)^{k}}\frac{e^{-(x_{1}+\cdots+x_{k})} \,dx_{1} \cdots\,dx_{k}}{(1-itx_{1})(1-it(x_{1}+x_{2}))\cdots(1-it(x_{1}+ \cdots+x_{k}))}, \end{aligned}
(27)

where $$t\in \mathbb {R}$$ and $$k\in \mathbb {N}$$.

### Theorem 4.2

For any $$n\in \mathbb {N}_{0}$$ and $$k=2,3,\ldots$$ , we have

\begin{aligned} s(n+k,k)&=(-1)^{n}\binom{n+k}{k} \int_{0}^{\infty}\theta^{n}\nu _{k}( \theta) \,d\theta \\ &=\frac{(-1)^{n+k}}{2\pi}\binom{n+k}{k} \int_{0}^{\infty}\theta^{n} \,d\theta \int_{-\infty}^{\infty} e^{-it\theta} \biggl( \frac{\log(1-it)}{it} \biggr)^{k} \,dt \\ & =\frac{(-1)^{n}}{n! 2\pi} \int_{0}^{\infty}\theta^{n} \,d\theta \int_{-\infty}^{\infty} e^{-it\theta}\phi_{k}(t) \,dt, \end{aligned}
(28)

where $$\nu_{k}(\theta)$$ and $$\phi_{k}(t)$$ are defined in (18) and (27), respectively.

### Proof

The first equality in (28) is a direct consequence of the first expression in Theorem 3.2 and (18). As in (8), the characteristic function of $$W_{k}$$ is given by

$$\mathbb {E}e^{itW_{k}}=(-1)^{k} \biggl( \frac{\log(1-it)}{it} \biggr) ^{k},\quad t\in \mathbb {R}.$$

Hence, the second equality in (28) follows from the first equality in Theorem 3.2 and (20). Analogously, the third equality in (28) follows from the third equality in Theorem 3.2, (20), and (27). This concludes the proof. □

The first and third representations in Theorem 4.2 are new. The second one was shown in [8]. For other integral expressions of a different character, we refer the reader to Qi [23] and Agoh and Dilcher [24].

A comparison between Theorems 4.1 and 4.2 reveals that the integral representations for the Stirling numbers of the first kind are, in general, more involved than those for the Stirling numbers of the second kind. This is explained by the more involved probability densities of the random variables describing the Stirling numbers of the first kind.

## References

1. Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards Applied Mathematics Series, vol. 55. U.S. Government Printing Office, Washington (1964)
2. Comtet, L.: Advanced Combinatorics: The Art of Finite and Infinite Expansions, enlarged edn. Reidel, Dordrecht (1974)
3. Kim, T., Kim, D.S., Jang, G.-W.: Extended Stirling polynomials of the second kind and extended Bell polynomials. Proc. Jangjeon Math. Soc. 20(3), 365–376 (2017)
4. Kim, D.S., Dolgy, D.V., Kwon, J., Kim, T.: Note on type 2 degenerate q-Bernoulli polynomials. Symmetry 11(7), 914 (2019). https://doi.org/10.3390/sym11070914
5. Kim, T., Yao, Y., Kim, D.S., Kwon, H.-I.: Some identities involving special numbers and moments of random variables. Rocky Mt. J. Math. 49(2), 521–538 (2019). https://doi.org/10.1216/RMJ-2019-49-2-521
6. Johnson, N.L., Kotz, S., Balakrishnan, N.: Continuous Univariate Distributions, Vol. 1, 2nd edn. Wiley Series in Probability and Mathematical Statistics. Wiley, New York (1994)
7. Sun, P.: Product of uniform distribution and Stirling numbers of the first kind. Acta Math. Sin. Engl. Ser. 21(6), 1435–1442 (2005). https://doi.org/10.1007/s10114-005-0631-4
8. Adell, J.A., Lekuona, A.: Closed form expressions for the Stirling numbers of the first kind. Integers 17, 26 (2017)
9. Mrowiec, J., Rajba, T., Wąsowicz, S.: On the classes of higher-order Jensen-convex functions and Wright-convex functions, II. J. Math. Anal. Appl. 450(2), 1144–1147 (2017). https://doi.org/10.1016/j.jmaa.2017.01.083
10. Dilcher, K., Vignat, C.: General convolution identities for Bernoulli and Euler polynomials. J. Math. Anal. Appl. 435(2), 1478–1498 (2016). https://doi.org/10.1016/j.jmaa.2015.11.006
11. He, T.-X.: Expression and computation of generalized Stirling numbers. J. Comb. Math. Comb. Comput. 86, 239–268 (2013)
12. Adell, J.A., Lekuona, A.: Binomial convolution and transformations of Appell polynomials. J. Math. Anal. Appl. 456(1), 16–33 (2017). https://doi.org/10.1016/j.jmaa.2017.06.077
13. Feller, W.: An Introduction to Probability Theory and Its Applications, Vol. II, 2nd edn. Wiley, New York (1971)
14. Billingsley, P.: Probability and Measure, 3rd edn. Wiley Series in Probability and Mathematical Statistics. Wiley, New York (1995)
15. Belbachir, H., Boutiche, M.A., Medjerredine, A.: Enumerating some stable partitions involving Stirling and r-Stirling numbers of the second kind. Mediterr. J. Math. 15(3), 87 (2018). https://doi.org/10.1007/s00009-018-1130-z
16. Cakić, N.P., El-Desouky, B.S., Milovanović, G.V.: Explicit formulas and combinatorial identities for generalized Stirling numbers. Mediterr. J. Math. 10(1), 57–72 (2013). https://doi.org/10.1007/s00009-011-0169-x
17. Sun, Z.-H.: Some inversion formulas and formulas for Stirling numbers. Graphs Comb. 29(4), 1087–1100 (2013). https://doi.org/10.1007/s00373-012-1155-1
18. Adamchik, V.: On Stirling numbers and Euler sums. J. Comput. Appl. Math. 79(1), 119–130 (1997). https://doi.org/10.1016/S0377-0427(96)00167-7
19. Qi, F.: Explicit formulas for computing Bernoulli numbers of the second kind and Stirling numbers of the first kind. Filomat 28(2), 319–327 (2014). https://doi.org/10.2298/FIL1402319O
20. Katriel, J.: A multitude of expressions for the Stirling numbers of the first kind. Integers 10, 273–297 (2010). https://doi.org/10.1515/INTEG.2010.023
21. Blagouchine, I.V.: Two series expansions for the logarithm of the gamma function involving Stirling numbers and containing only rational coefficients for certain arguments related to $$\pi^{-1}$$. J. Math. Anal. Appl. 442(2), 404–434 (2016). https://doi.org/10.1016/j.jmaa.2016.04.032
22. Wegner, H.: On the location of the maximum Stirling number(s) of the second kind. Results Math. 54(1–2), 183–198 (2009). https://doi.org/10.1007/s00025-009-0361-5
23. Qi, F.: Integral representations and properties of Stirling numbers of the first kind. J. Number Theory 133(7), 2307–2319 (2013). https://doi.org/10.1016/j.jnt.2012.12.015
24. Agoh, T., Dilcher, K.: Representations of Stirling numbers of the first kind by multiple integrals. Integers 15, 8–9 (2015)

### Acknowledgements

The authors would like to thank the referees and the editors for their careful reading of the manuscript and for their suggestions, which greatly improved the final outcome.


### Funding

This work was supported by research grants MTM2015-67006-P and DGA (E-64).

## Author information

Both authors read and approved the final manuscript. Both authors contributed equally.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests. 