
On a generalized Gerber-Shiu function in a compound Poisson model perturbed by diffusion

Abstract

In the spirit of a publication by Cheung in 2013, we study generalized Gerber-Shiu functions in the compound Poisson risk model perturbed by diffusion. These generalized Gerber-Shiu functions can be used to analyze the moments of the total discounted claim costs until ruin. Integral equations for the generalized Gerber-Shiu functions are derived, and a solution procedure is provided. Explicit expressions are obtained when the claim size density is a combination of exponentials, and numerical illustrations are also given.

1 Introduction

In this paper, we describe the surplus process of an insurance company by the following compound Poisson risk model perturbed by diffusion:

$$ U(t)=u+ct-\sum_{i=1}^{N(t)}X_{i}+ \sigma B(t), $$
(1.1)

where \(u\geq0\) is the initial surplus and \(c>0\) is the premium rate per unit time. The counting process

$$N(t)=\max\{n: V_{1}+\cdots+ V_{n}\leq t\} $$

is a homogeneous Poisson process with intensity \(\lambda>0\), where \(V_{1}\) is the time until the first claim arrival, and for \(i\geq2\), \(V_{i}\) is the inter-claim time between the \((i-1)\)th claim and the ith claim. For \(k\geq1\), let \(T_{k}=\sum_{i=1}^{k}V_{i}\) be the kth claim arrival time. The claim sizes \(\{X_{i}\}_{i=1}^{\infty}\) are positive continuous random variables which form an i.i.d. sequence distributed like a generic variable X with density function \(f_{X}\) and Laplace transform \(\widehat{f}_{X}(s)=\int_{0}^{\infty}e^{-sx} f_{X}(x)\,dx\). In addition, \(\sigma>0\) is the diffusion coefficient and \(\{B(t)\}_{t\geq 0}\) is a standard Brownian motion. Finally, we assume that \(\{N(t)\}\), \(\{X_{i}\}\), and \(\{B(t)\}\) are mutually independent.

The perturbed compound Poisson risk model, first proposed by Gerber [1], extends the classical risk model by adding a diffusion process that models small fluctuations in the surplus. Since then, many contributions have been made to this model and its extensions. See, e.g., Tsai and Willmot [2] for the compound Poisson risk model, Li and Garrido [3] for a Sparre Andersen risk model with Erlang inter-claim times, Zhang and Yang [4] for a perturbed compound Poisson model with dependence between inter-claim times and claim sizes, and Zhang et al. [5] for a Sparre Andersen risk model with time-dependent claim sizes.

In ruin theory, one of the most effective tools for studying ruin-related problems is the expected discounted penalty function (also known as the Gerber-Shiu function) proposed by Gerber and Shiu [6]. It provides a unified approach to the study of the time to ruin, the surplus before ruin, and the deficit at ruin. Recently, several generalized Gerber-Shiu functions have been proposed to study other ruin-related quantities. For example, Cai et al. [7] propose a generalized discounted penalty function that can be used to study the total discounted operating costs up to ruin; Cheung [8] considers a generalized Gerber-Shiu type function that can be used to analyze the moments of the total discounted costs up to ruin in a Sparre Andersen risk model with general inter-claim times; Cheung and Woo [9] study the discounted aggregate claim costs in a class of dependent Sparre Andersen risk models.

For risk model (1.1), we define the ruin time by

$$\tau=\inf\bigl\{ t\geq0: U(t)\leq0\bigr\} , $$

where \(\tau=\infty\) if \(U(t)>0\) for all \(t\geq0\). In this paper, we study the following generalized Gerber-Shiu type function proposed by Cheung [8]:

$$ \phi_{n}(u)=E \Biggl[ e^{-\alpha\tau} \Biggl( \sum _{k=1}^{N(\tau)}e^{-\delta T_{k}}\theta(X_{k}) \Biggr)^{n}w\bigl(\bigl|U(\tau)\bigr|\bigr)I(\tau< \infty) \bigg|U(0)=u \Biggr], $$
(1.2)

where \(\alpha, \delta\geq0\) are the forces of interest; n is a nonnegative integer; w is a nonnegative penalty function that depends only on the deficit at ruin; and \(I(A)\) is the indicator function of the event A. The random variable \(\sum_{k=1}^{N(\tau)}e^{-\delta T_{k}}\theta(X_{k})\) can be interpreted as the total discounted claim costs until ruin, where \(\theta(\cdot)\) is the 'cost' function applied to each claim size. We remark that the expected total discounted claim costs have been studied by Cai et al. [7] in the compound Poisson model. For higher order moments (\(n>1\)), we refer the interested reader to Cheung [8]. To the best of our knowledge, there has been no study of this generalized Gerber-Shiu function in the diffusion-perturbed risk model.
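Although the rest of the paper proceeds analytically, a crude Monte Carlo sketch of (1.2) may help fix ideas. The Euler step size, time horizon, path count, and the default cost, penalty, and claim-sampling functions below are illustrative assumptions only; in particular, ruin caused by the Brownian component between claims is detected only approximately on the grid.

```python
import numpy as np

# A crude Monte Carlo sketch of phi_n(u) in (1.2).  All numerical choices are
# illustrative assumptions; the Euler grid only approximates ruin caused by the
# Brownian component between claim arrivals.
def phi_n_monte_carlo(u, n, c=1.2, lam=1.0, sigma2=0.5, alpha=0.0, delta=0.0,
                      theta=lambda x: x, w=lambda y: 1.0,
                      claim_sampler=lambda rng: rng.exponential(1.0),
                      dt=1e-3, t_max=200.0, n_paths=10_000, seed=1):
    rng, sigma, total = np.random.default_rng(seed), np.sqrt(sigma2), 0.0
    for _ in range(n_paths):
        U, t, S = float(u), 0.0, 0.0         # S = running discounted claim costs
        next_claim = rng.exponential(1.0 / lam)
        while t < t_max:
            h = min(dt, next_claim - t)       # diffuse up to the next claim epoch
            U += c * h + sigma * np.sqrt(h) * rng.standard_normal()
            t += h
            if U <= 0:                        # ruin by oscillation: U(tau) = 0
                total += np.exp(-alpha * t) * S**n * w(0.0)
                break
            if t >= next_claim:               # a claim occurs at T_k = t
                X = claim_sampler(rng)
                S += np.exp(-delta * t) * theta(X)   # claim k <= N(tau) counts
                U -= X
                next_claim = t + rng.exponential(1.0 / lam)
                if U < 0:                     # ruin caused by this claim
                    total += np.exp(-alpha * t) * S**n * w(-U)
                    break
        # paths with no ruin before t_max contribute 0 (truncating I(tau<infty))
    return total / n_paths
```

With \(\theta\equiv1\), \(\delta=\alpha=0\), and \(w\equiv1\), such an estimate corresponds to the moments of the number of claims until ruin considered in Section 4.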

Due to the existence of Brownian motion, ruin can be caused by claims or diffusion oscillation. Hence, we decompose \(\phi_{n}(u)\) as follows:

$$\begin{aligned}& \phi_{n,w}(u)=E \Biggl[ e^{-\alpha\tau} \Biggl( \sum _{k=1}^{N(\tau)}e^{-\delta T_{k}}\theta(X_{k}) \Biggr)^{n}w\bigl(\bigl|U(\tau)\bigr|\bigr)I \bigl(\tau< \infty, U(\tau)<0\bigr) \bigg|U(0)=u \Biggr],\\& \phi_{n,d}(u)=w(0)E \Biggl[ e^{-\alpha\tau} \Biggl( \sum _{k=1}^{N(\tau)}e^{-\delta T_{k}}\theta(X_{k}) \Biggr)^{n}I \bigl(\tau<\infty, U(\tau)=0\bigr) \bigg|U(0)=u \Biggr]. \end{aligned}$$

Here \(\phi_{n,w}(u)\) is the generalized Gerber-Shiu function when ruin is caused by a claim. In particular, when \(n=0\),

$$\phi_{0,w}(u)=E \bigl[ e^{-\alpha\tau} w\bigl(\bigl|U(\tau)\bigr|\bigr)I \bigl( \tau< \infty, U(\tau)<0\bigr)|U(0)=u \bigr] $$

is the traditional discounted penalty function of the deficit at ruin when ruin is caused by a claim, which has been studied extensively in ruin theory. In the remainder of this paper, whenever we talk about \(\phi_{n,d}(u)\), we suppose that \(w(0)=1\), and for the case \(n=0\), we mean

$$\phi_{0,d}(u)=E \bigl[ e^{-\alpha\tau}I \bigl(\tau< \infty, U(\tau)=0 \bigr) |U(0)=u \bigr], $$

which is the Laplace transform of the ruin time when ruin is caused by oscillation. For applications of the above generalized discounted penalty functions, we refer the interested reader to Cheung [8].

The paper is organized as follows. In Section 2, we derive integral equations for \(\phi_{n,w}(u)\) and \(\phi_{n,d}(u)\), and propose a recursive approach to solve the integral equations. In Section 3, we derive some explicit expressions when the claim size density is a combination of exponentials. Some numerical examples are illustrated in Section 4.

2 Integral equations and the solutions

In order to derive integral equations for \(\phi_{n,w}(u)\) and \(\phi _{n,d}(u)\), we need to condition on the surplus level immediately before the first claim arrival time. To this end, for \(q\geq0\) we introduce the following q-potential measure:

$$\mathcal{R}^{(q)}(u,dx)=\int_{0}^{\infty}e^{-qt}\operatorname{Pr}\bigl(u+ct+\sigma B(t)\in dx, t< \tau_{0}^{-}\bigr)\,dt, $$

where \(\tau_{0}^{-}=\inf\{t\geq0: u+ct+\sigma B(t)\leq0\}\). Let

$$s_{1,q}=\frac{-c+\sqrt{c^{2}+2\sigma^{2}q}}{\sigma^{2}},\qquad s_{2,q}=\frac{-c-\sqrt{c^{2}+2\sigma^{2}q}}{\sigma^{2}} $$

be the roots of the quadratic equation (in s)

$$\frac{1}{2}\sigma^{2}s^{2}+cs-q=0. $$

It follows from Theorem 8.7 and Corollary 8.8 in Kyprianou [10] that \(\mathcal{R}^{(q)}(u,dx)\) admits a density \(r^{(q)}(u,x)\), i.e. \(\mathcal{R}^{(q)}(u,dx)=r^{(q)}(u,x)\,dx\), which is given by

$$r^{(q)}(u,x)=e^{-s_{1,q}x}W^{(q)}(u)-W^{(q)}(u-x), $$

where \(W^{(q)}\) is a q-scale function defined as \(W^{(q)}(x)=0\) for \(x<0\) and

$$W^{(q)}(x)=\frac{e^{s_{1,q}x}-e^{s_{2,q}x} }{\frac{\sigma ^{2}}{2}(s_{1,q}-s_{2,q}) } $$

for \(x\geq0\). More explicitly, we have

$$ r^{(q)}(u,x)=\left \{ \begin{array}{@{}l@{\quad}l} \frac{e^{s_{2,q}(u-x)}-e^{s_{2,q}u-s_{1,q}x} }{\frac{\sigma ^{2}}{2}(s_{1,q}-s_{2,q}) },& 0\leq x\leq u,\\ \frac{e^{s_{1,q}(u-x)}-e^{s_{2,q}u-s_{1,q}x} }{\frac{\sigma ^{2}}{2}(s_{1,q}-s_{2,q}) },& x>u. \end{array} \right . $$
(2.1)
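To make (2.1) concrete, the following sketch evaluates \(s_{1,q}\), \(s_{2,q}\), the scale function \(W^{(q)}\), and the potential density \(r^{(q)}(u,x)\) exactly as written above; the parameter values and the quadrature check are placeholders for illustration.

```python
import numpy as np
from scipy.integrate import quad

# Direct evaluation of s_{1,q}, s_{2,q}, W^{(q)} and r^{(q)}(u, x) as in (2.1).
def scale_function(x, q, c=1.2, sigma2=0.5):
    if x < 0:
        return 0.0
    disc = np.sqrt(c**2 + 2.0 * sigma2 * q)
    s1, s2 = (-c + disc) / sigma2, (-c - disc) / sigma2
    return (np.exp(s1 * x) - np.exp(s2 * x)) / (0.5 * sigma2 * (s1 - s2))

def potential_density(u, x, q, c=1.2, sigma2=0.5):
    disc = np.sqrt(c**2 + 2.0 * sigma2 * q)          # equals (sigma^2/2)(s1 - s2)
    s1, s2 = (-c + disc) / sigma2, (-c - disc) / sigma2
    if 0 <= x <= u:
        return (np.exp(s2 * (u - x)) - np.exp(s2 * u - s1 * x)) / disc
    return (np.exp(s1 * (u - x)) - np.exp(s2 * u - s1 * x)) / disc

# Quick check: r^{(q)}(u, x) should be nonnegative and integrable in x.
u, q = 2.0, 1.0
mass = (quad(lambda x: potential_density(u, x, q), 0.0, u)[0]
        + quad(lambda x: potential_density(u, x, q), u, np.inf)[0])
print(mass)
```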

Now we are ready to derive integral equations for the generalized Gerber-Shiu functions. First, we consider \(\phi_{n,w}(u)\). Conditioning on the time of the first claim arrival and distinguishing whether or not ruin occurs due to the first claim, by binomial expansion we obtain

$$\begin{aligned} \phi_{n,w}(u) =&E \bigl[ e^{-\alpha V_{1}} \bigl(e^{-\delta V_{1}}\theta(X_{1})\bigr)^{n}w \bigl(\bigl|U(V_{1})\bigr|\bigr)I(\tau=V_{1}) |U(0)=u \bigr] \\ &{}+E \Biggl[ e^{-\alpha[V_{1}+(\tau-V_{1})]} \Biggl( e^{-\delta V_{1}}\theta(X_{1})+e^{-\delta V_{1}} \sum_{k=2}^{N(\tau)}e^{-\delta (T_{k}-V_{1})}\theta (X_{k}) \Biggr)^{n} \\ &{} \times w\bigl(\bigl|U(\tau)\bigr|\bigr)I(V_{1}< \tau<\infty) \bigg|U(0)=u \Biggr] \\ =&\int_{0}^{\infty}\lambda e^{-(\lambda+\alpha+n\delta)t} \int _{0}^{\infty}\int_{y}^{\infty}\theta^{n}(x)w(x-y)f_{X}(x)\,dx \\ &{}\times\operatorname{Pr}\bigl(u+ct+\sigma B(t)\in dy, t<\tau_{0}^{-}\bigr)\,dt \\ &{}+\sum_{j=0}^{n} \binom{n}{j} \int _{0}^{\infty}\lambda e^{-(\lambda+\alpha+n\delta)t} \int _{0}^{\infty}\int_{0}^{y} \theta^{n-j}(x)\phi_{j,w}(y-x)f_{X}(x)\,dx \\ &{}\times \operatorname{Pr}\bigl(u+ct+\sigma B(t)\in dy, t<\tau_{0}^{-}\bigr)\,dt \\ =&\lambda\int_{0}^{\infty}\int_{y}^{\infty}\theta^{n}(x)w(x-y)f_{X}(x) r^{(\lambda+\alpha+n\delta)}(u,y)\,dx\,dy \\ &{}+\lambda\sum_{j=0}^{n} \binom{n}{j}\int _{0}^{\infty}\int_{0}^{y} \theta ^{n-j}(x)\phi_{j,w}(y-x)f_{X}(x) r^{(\lambda+\alpha+n\delta)}(u,y)\,dx\,dy \\ =&\sum_{j=0}^{n} \binom{n}{j}\int _{0}^{\infty}\int_{0}^{y} \phi_{j,w}(y-x)\gamma_{n-j}(x)\,dx r^{(\lambda+\alpha+n\delta)}(u,y)\,dy \\ &{}+\int _{0}^{\infty}\beta_{n}(y) r^{(\lambda+\alpha+n\delta)}(u,y)\,dy, \end{aligned}$$
(2.2)

where

$$\begin{aligned}& \beta_{n}(y)=\lambda\int_{y}^{\infty}\theta^{n}(x)w(x-y)f_{X}(x)\,dx,\\& \gamma_{j}(x)=\lambda\theta^{j}(x)f_{X}(x),\quad j=0,1,2,\ldots. \end{aligned}$$

Similarly, we have

$$\begin{aligned} \phi_{n,d}(u)=\sum_{j=0}^{n} \binom{n}{j}\int_{0}^{\infty}\int_{0}^{y} \phi_{j,d}(y-x)\gamma_{n-j}(x)\,dx r^{(\lambda+\alpha+n\delta)}(u,y)\,dy. \end{aligned}$$
(2.3)

We use the Laplace transform method to solve the integral equations (2.2) and (2.3). In the remainder of this paper, we denote the Laplace transform of a function by adding a hat to the corresponding letter. For example,

$$\begin{aligned}& \widehat{\phi}_{n,w}(s)= \int_{0}^{\infty}e^{-su}\phi_{n,w}(u)\,du,\qquad \widehat{\phi }_{n,d}(s)=\int _{0}^{\infty}e^{-su}\phi_{n,d}(u)\,du, \\& \widehat{\beta}_{n}(s)=\int_{0}^{\infty}e^{-sy}\beta_{n}(y)\,dy, \qquad \widehat{\gamma}_{j}(s)= \int_{0}^{\infty}e^{-sx}\gamma_{j}(x)\,dx. \end{aligned}$$

For notational convenience, we set

$$s_{1,n}=s_{1,\lambda+\alpha+n\delta},\qquad s_{2,n}=s_{2,\lambda+\alpha +n\delta}. $$

For \(\operatorname{Re}(s)>0\), multiplying both sides of (2.2) by \(e^{-su}\) and then performing integration from 0 to ∞ gives

$$\begin{aligned} \widehat{\phi}_{n,w}(s) =&\sum _{j=0}^{n} \binom{n}{j}\int_{0}^{\infty}e^{-su}\int_{0}^{\infty}\int _{0}^{y} \phi_{j,w}(y-x) \gamma_{n-j}(x)\,dx r^{(\lambda+\alpha+n\delta)}(u,y)\,dy\,du \\ &{}+\int_{0}^{\infty}e^{-su}\int _{0}^{\infty}\beta_{n}(y) r^{(\lambda+\alpha+n\delta)}(u,y)\,dy\,du. \end{aligned}$$
(2.4)

It follows from (2.1) that

$$\begin{aligned} &\int_{0}^{\infty}e^{-su}r^{(\lambda+\alpha+n\delta)}(u,y)\,du \\ &\quad=\frac{1}{\frac{\sigma^{2}}{2}(s_{1,n}-s_{2,n})} \biggl( \int_{0}^{y} e^{-su+s_{1,n}(u-y)}\,du+\int_{y}^{\infty}e^{-su+s_{2,n}(u-y)}\,du-\int_{0}^{\infty}e^{-su+s_{2,n}u-s_{1,n}y}\,du \biggr) \\ &\quad=\frac{1}{\frac{\sigma^{2}}{2}(s_{1,n}-s_{2,n})} \biggl( \frac{e^{-s_{1,n}y}-e^{-sy} }{s-s_{1,n} }+\frac {e^{-sy}}{s-s_{2,n}}- \frac{e^{-s_{1,n} y}}{s-s_{2,n}} \biggr) \\ &\quad=\frac{e^{-s_{1,n}y}-e^{-sy} }{\frac{\sigma^{2}}{2} (s-s_{1,n})(s-s_{2,n}) }=\frac{e^{-s_{1,n}y}-e^{-sy} }{\frac{1}{2}\sigma ^{2}s^{2}+cs-(\lambda +\alpha+n\delta) }. \end{aligned}$$

Hence, by changing the order of integration we obtain

$$\begin{aligned} &\int_{0}^{\infty}e^{-su}\int _{0}^{\infty}\beta_{n}(y) r^{(\lambda+\alpha+n\delta)}(u,y)\,dy\,du \\ &\quad=\frac{\int_{0}^{\infty}[e^{-s_{1,n}y}-e^{-sy}]\beta_{n}(y)\,dy }{\frac {1}{2}\sigma^{2}s^{2}+cs-(\lambda +\alpha+n\delta) }= \frac{ \widehat{\beta}_{n}(s_{1,n})-\widehat{\beta}_{n}(s) }{\frac{1}{2}\sigma^{2}s^{2}+cs-(\lambda +\alpha+n\delta) }. \end{aligned}$$
(2.5)

Similarly, for \(j=0,1,\ldots, n\)

$$\begin{aligned} &\int_{0}^{\infty}e^{-su}\int _{0}^{\infty}\int_{0}^{y} \phi_{j,w}(y-x)\gamma_{n-j}(x)\,dx r^{(\lambda+\alpha+n\delta)}(u,y)\,dy\,du \\ &\quad= \frac{\widehat{\gamma}_{n-j}(s_{1,n})\widehat{\phi}_{j,w}(s_{1,n}) -\widehat{\gamma}_{n-j}(s)\widehat{\phi}_{j,w}(s) }{\frac{1}{2}\sigma^{2}s^{2}+cs-(\lambda +\alpha+n\delta) }. \end{aligned}$$
(2.6)

Substituting (2.5) and (2.6) back into (2.4) we find that

$$\begin{aligned} \widehat{\phi}_{n,w}(s) =&\frac{1 }{\frac{1}{2}\sigma^{2}s^{2}+cs-(\lambda +\alpha+n\delta) }\\ &{}\times \Biggl( \sum _{j=0}^{n} \binom{n}{j}\bigl[\widehat{ \gamma}_{n-j}(s_{1,n}) \widehat{\phi}_{j,w}(s_{1,n}) -\widehat{\gamma}_{n-j}(s)\widehat{\phi}_{j,w}(s)\bigr] +\bigl[ \widehat{\beta}_{n}(s_{1,n})-\widehat{\beta}_{n}(s) \bigr] \Biggr). \end{aligned}$$

Rearranging terms in the above equation gives rise to

$$\begin{aligned} &\biggl(\frac{1}{2}\sigma^{2}s^{2}+cs-( \lambda+\alpha+n\delta)+\lambda \widehat{f}_{X}(s) \biggr)\widehat{ \phi}_{n,w}(s) \\ &\quad= a_{n,w}-\sum_{j=0}^{n-1} \binom{n}{j}\widehat{\gamma }_{n-j}(s)\widehat{\phi}_{j,w}(s)- \widehat{\beta}_{n}(s), \end{aligned}$$
(2.7)

where

$$a_{n,w}=\sum_{j=0}^{n} \binom{n}{j} \widehat{\gamma}_{n-j}(s_{1,n}) \widehat{\phi}_{j,w}(s_{1,n})+ \widehat{\beta}_{n}(s_{1,n}). $$

Similarly, we can obtain from (2.3)

$$\begin{aligned} \biggl(\frac{1}{2}\sigma^{2}s^{2}+cs-( \lambda+\alpha+n\delta)+\lambda \widehat{f}_{X}(s) \biggr)\widehat{ \phi}_{n,d}(s)= a_{n,d}-\sum_{j=0}^{n-1} \binom{n}{j}\widehat{\gamma}_{n-j}(s) \widehat{\phi}_{j,d}(s), \end{aligned}$$
(2.8)

where \(a_{n,d}=\sum_{j=0}^{n} \binom{n}{j}\widehat{\gamma}_{n-j}(s_{1,n}) \widehat{\phi}_{j,d}(s_{1,n}) \).

To make the following analysis more transparent, we introduce the Dickson-Hipp operator \(\mathcal{T}_{s}\) (see e.g. Dickson and Hipp [11]), which for any integrable function f on \((0, \infty)\) and any complex number s with \(\operatorname{Re}(s)\geq0\) is defined as

$$\mathcal{T}_{s}f(y)=\int_{y}^{\infty}e^{-s(x-y)}f(x)\,dx=\int_{0}^{\infty}e^{-sx}f(x+y)\,dx,\quad y\geq0. $$

The following commutative property will be used in the sequel:

$$\mathcal{T}_{s}\mathcal{T}_{r}f(y)=\mathcal{T}_{r} \mathcal{T}_{s}f(y)=\frac{\mathcal{T}_{s}f(y) -\mathcal{T}_{r}f(y) }{r-s },\quad y\geq0, $$

for complex numbers \(s\neq r\).
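As a quick aside (not part of the paper), for an exponential density \(f(x)=\mu e^{-\mu x}\) one has \(\mathcal{T}_{s}f(y)=\mu e^{-\mu y}/(s+\mu)\), and both this and the commutative property can be checked numerically; the quadrature-based helper below is only a sketch.

```python
import numpy as np
from scipy.integrate import quad

# Numerical Dickson-Hipp operator: T_s f(y) = int_0^infty e^{-s x} f(x + y) dx.
def dickson_hipp(f, s, y):
    return quad(lambda x: np.exp(-s * x) * f(x + y), 0.0, np.inf)[0]

mu, s, r, y = 1.5, 0.7, 0.3, 2.0
f = lambda x: mu * np.exp(-mu * x)

# Exponential density: T_s f(y) = mu e^{-mu y} / (s + mu).
print(dickson_hipp(f, s, y), mu * np.exp(-mu * y) / (s + mu))

# Commutative property: T_s T_r f(y) = (T_s f(y) - T_r f(y)) / (r - s).
lhs = dickson_hipp(lambda t: dickson_hipp(f, r, t), s, y)
rhs = (dickson_hipp(f, s, y) - dickson_hipp(f, r, y)) / (r - s)
print(lhs, rhs)
```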

By Gerber and Landry [12] we know that the generalized Lundberg equation

$$ \frac{1}{2}\sigma^{2}s^{2}+cs- (\lambda+ \alpha+n\delta)+\lambda\widehat{f}_{X}(s)=0 $$
(2.9)

has a unique nonnegative root, say \(\rho_{\alpha+n\delta}\). Since \(\rho_{\alpha+n\delta}\) is a root of (2.9), subtracting the left-hand side of (2.9) evaluated at \(s=\rho_{\alpha+n\delta}\) gives

$$ \begin{aligned}[b] &\frac{1}{2}\sigma^{2}s^{2}+cs-( \lambda+\alpha+n\delta)+\lambda\widehat{f}_{X}(s)\\ &\quad=\frac{1}{2}\sigma^{2}(s-\rho_{\alpha+n \delta}) (s+ \rho_{\alpha+n\delta}) +c(s-\rho_{\alpha+n\delta}) +\lambda\bigl[\widehat{f}_{X}(s)- \widehat{f}_{X}( \rho_{\alpha+n\delta})\bigr]\\ &\quad=(s-\rho_{\alpha+n\delta}) \biggl(\frac{1}{2} \sigma^{2}(s+ \rho_{\alpha+n\delta})+c - \lambda\mathcal{T}_{s}\mathcal{T}_{\rho_{\alpha+n\delta}} f_{X}(0) \biggr). \end{aligned} $$
(2.10)
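For a concrete claim distribution, the nonnegative root \(\rho_{\alpha+n\delta}\) of (2.9) can be located numerically. The sketch below assumes exponential claims, so that \(\widehat{f}_{X}(s)=\mu/(\mu+s)\); the bracketing strategy and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Locate the nonnegative root rho of (2.9) for exponential claims,
# where hat{f}_X(s) = mu / (mu + s) and q plays the role of alpha + n*delta.
def lundberg_root(c=1.2, sigma2=0.5, lam=1.0, mu=1.0, q=0.05):
    ell = lambda s: 0.5 * sigma2 * s**2 + c * s - (lam + q) + lam * mu / (mu + s)
    if q == 0.0:
        return 0.0            # ell(0) = 0, so rho = 0 when alpha + n*delta = 0
    upper = 1.0               # ell(0) = -q < 0; enlarge until the sign changes
    while ell(upper) < 0:
        upper *= 2.0
    return brentq(ell, 0.0, upper)

print(lundberg_root(q=0.05))
```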

Setting \(s=\rho_{\alpha+n\delta}\) in (2.7) gives

$$a_{n,w}=\sum_{j=0}^{n-1} \binom{n}{j} \widehat{\gamma}_{n-j} (\rho_{\alpha+n\delta}) \widehat{ \phi}_{j,w}(\rho_{\alpha+n\delta})+\widehat{\beta}_{n}(\rho _{\alpha+n\delta}). $$

Hence,

$$\begin{aligned} &a_{n,w}-\sum_{j=0}^{n-1} \binom{n}{j}\widehat{\gamma }_{n-j}(s)\widehat{\phi}_{j,w}(s)- \widehat{\beta}_{n}(s) \\ &\quad=\sum_{j=0}^{n-1} \binom{n}{j}\bigl[ \widehat{\gamma}_{n-j} (\rho_{\alpha+n\delta}) \widehat{ \phi}_{j,w}(\rho_{\alpha+n\delta})- \widehat{\gamma}_{n-j}(s) \widehat{\phi}_{j,w}(s)\bigr] +\bigl[\widehat{\beta}_{n}( \rho_{\alpha+n\delta}) -\widehat{\beta}_{n}(s)\bigr] \\ &\quad=(s-\rho_{\alpha+n\delta})\!\sum_{j=0}^{n-1} \binom{n}{j} \biggl(\widehat{\gamma}_{n-j} (\rho_{\alpha+n\delta}) \frac{ \widehat{\phi}_{j,w}(\rho_{\alpha+n\delta}) -\widehat{\phi}_{j,w}(s)}{s-\rho_{\alpha+n\delta}}+ \frac{\widehat{\gamma}_{n-j}(\rho_{\alpha+n\delta})- \widehat{\gamma}_{n-j}(s)}{s-\rho_{\alpha+n\delta}}\widehat{\phi }_{j,w}(s) \biggr) \\ &\qquad{}+(s-\rho_{\alpha+n\delta})\frac{\widehat{\beta}_{n}(\rho_{\alpha +n\delta}) -\widehat{\beta}_{n}(s)}{s-\rho_{\alpha+n\delta}} \\ &\quad=(s-\rho_{\alpha+n\delta})\sum_{j=0}^{n-1} \binom{n}{j} \bigl( \widehat{\gamma}_{n-j}(\rho_{\alpha+n\delta}) \mathcal{T}_{s}\mathcal {T}_{\rho_{\alpha+n\delta}}\phi_{j,w}(0) + \widehat{\phi}_{j,w}(s)\mathcal{T}_{s}\mathcal{T}_{\rho_{\alpha+n\delta }} \gamma_{n-j}(0) \bigr) \\ &\qquad{}+(s-\rho_{\alpha+n\delta})\mathcal{T}_{s}\mathcal{T}_{\rho_{\alpha +n\delta}} \beta_{n}(0). \end{aligned}$$
(2.11)

Plugging (2.10) and (2.11) into (2.7) gives

$$\begin{aligned} &\biggl(\frac{1}{2} \sigma^{2}(s+\rho_{\alpha+n\delta})+c - \lambda\mathcal{T}_{s}\mathcal{T}_{\rho_{\alpha+n\delta}} f_{X}(0) \biggr)\widehat{\phi}_{n,w}(s) \\ &\quad=\sum_{j=0}^{n-1} \binom{n}{j} \bigl( \widehat{\gamma}_{n-j}(\rho_{\alpha+n\delta})\mathcal{T}_{s} \mathcal {T}_{\rho_{\alpha+n\delta}}\phi_{j,w}(0) +\widehat{\phi}_{j,w}(s) \mathcal{T}_{s}\mathcal{T}_{\rho_{\alpha+n\delta }}\gamma_{n-j}(0) \bigr)+\mathcal{T}_{s}\mathcal{T}_{\rho_{\alpha+n\delta}}\beta_{n}(0). \end{aligned}$$
(2.12)

Similarly, for \(\widehat{\phi}_{n,d}(s)\) we have

$$\begin{aligned} & \biggl(\frac{1}{2} \sigma^{2}(s+\rho_{\alpha+n\delta})+c - \lambda\mathcal{T}_{s}\mathcal{T}_{\rho_{\alpha+n\delta}} f_{X}(0) \biggr)\widehat{\phi}_{n,d}(s) \\ &\quad=\sum_{j=0}^{n-1} \binom{n}{j} \bigl( \widehat{\gamma}_{n-j}(\rho_{\alpha+n\delta})\mathcal{T}_{s} \mathcal {T}_{\rho_{\alpha+n\delta}}\phi_{j,d}(0) +\widehat{\phi}_{j,d}(s) \mathcal{T}_{s}\mathcal{T}_{\rho_{\alpha+n\delta }}\gamma_{n-j}(0) \bigr). \end{aligned}$$
(2.13)

Rearranging the above two equations yields

$$\begin{aligned} \widehat{\phi}_{n,w}(s) =&\frac{\lambda\mathcal{T}_{s}\mathcal{T}_{\rho _{\alpha+n\delta}} f_{X}(0)}{\frac{1}{2} \sigma^{2}(s+\rho_{\alpha+n\delta})+c }\widehat{ \phi}_{n,w}(s) \\ &{}+ \sum_{j=0}^{n-1} \binom{n}{j} \biggl( \frac{\widehat{\gamma}_{n-j}(\rho_{\alpha+n\delta})}{ \frac{1}{2} \sigma^{2}(s+\rho_{\alpha+n\delta})+c}\mathcal{T}_{s}\mathcal{T}_{\rho _{\alpha+n\delta}} \phi_{j,w}(0) +\frac{\mathcal{T}_{s}\mathcal{T}_{\rho_{\alpha+n\delta}}\gamma_{n-j}(0) }{\frac{1}{2} \sigma^{2}(s+\rho_{\alpha+n\delta})+c}\widehat{\phi}_{j,w}(s) \biggr) \\ &{}+\frac{\mathcal{T}_{s}\mathcal{T}_{\rho_{\alpha+n\delta}}\beta_{n}(0) }{\frac{1}{2}\sigma^{2}(s+\rho_{\alpha+n\delta})+c} \end{aligned}$$
(2.14)

and

$$\begin{aligned} \widehat{\phi}_{n,d}(s) =&\frac{\lambda\mathcal{T}_{s}\mathcal{T}_{\rho _{\alpha+n\delta}} f_{X}(0)}{\frac{1}{2} \sigma^{2}(s+\rho_{\alpha+n\delta})+c }\widehat{ \phi}_{n,d}(s) + \sum_{j=0}^{n-1} \binom{n}{j} \biggl( \frac{\widehat{\gamma}_{n-j}(\rho_{\alpha+n\delta})}{ \frac{1}{2} \sigma^{2}(s+\rho_{\alpha+n\delta})+c}\mathcal{T}_{s}\mathcal{T}_{\rho _{\alpha+n\delta}} \phi_{j,d}(0) \\ &{}+\frac{\mathcal{T}_{s}\mathcal{T}_{\rho_{\alpha+n\delta}}\gamma_{n-j}(0) }{\frac{1}{2} \sigma^{2}(s+\rho_{\alpha+n\delta})+c}\widehat{\phi}_{j,d}(s) \biggr). \end{aligned}$$
(2.15)

Define

$$\begin{aligned}& g_{n}(x)=\frac{2\lambda}{\sigma^{2}}\int_{0}^{x} e^{-(\rho_{\alpha+n\delta }+\frac{2c}{\sigma^{2}})(x-y)} \mathcal{T}_{\rho_{\alpha+n\delta}}f_{X}(y)\,dy, \\& \chi_{1,n,j}(x)=\frac{2}{\sigma^{2}}\widehat{\gamma}_{n-j}( \rho_{\alpha +n\delta}) e^{-(\rho_{\alpha+n\delta}+\frac{2c}{\sigma^{2}})x},\quad j=0,1,\ldots, n-1, \\& \chi_{2,n,j}(x)=\frac{2}{\sigma^{2}}\int_{0}^{x} e^{-(\rho_{\alpha+n\delta }+\frac{2c}{\sigma^{2}})(x-y)} \mathcal{T}_{\rho_{\alpha+n\delta}}\gamma_{n-j}(y)\,dy, \quad j=0,1, \ldots , n-1, \\& \zeta_{n}(x)=\frac{2}{\sigma^{2}}\int_{0}^{x} e^{-(\rho_{\alpha+n\delta}+\frac {2c}{\sigma^{2}})(x-y)} \mathcal{T}_{\rho_{\alpha+n\delta}}\beta_{n}(y)\,dy. \end{aligned}$$

Inverting the Laplace transforms in (2.14) and (2.15) we obtain the following result.

Proposition 1

The generalized Gerber-Shiu functions \(\phi_{n,w}(u)\) and \(\phi _{n,d}(u)\) satisfy the following integral equations:

$$\begin{aligned}& \begin{aligned}[b] \phi_{n,w}(u)={}&\int_{0}^{u} \phi_{n,w}(u-x)g_{n}(x)\,dx+\sum_{j=0}^{n-1} \binom{n}{j} \biggl( \int _{0}^{u}\chi_{1,n,j}(u-x) \mathcal{T}_{\rho_{\alpha+n\delta}} \phi_{j,w}(x)\,dx\\ &{}+\int_{0}^{u} \chi_{2,n,j}(u-x)\phi_{j,w}(x)\,dx \biggr)+\zeta_{n}(u), \end{aligned} \end{aligned}$$
(2.16)
$$\begin{aligned}& \begin{aligned}[b] \phi_{n,d}(u)={}&\int_{0}^{u} \phi_{n,d}(u-x)g_{n}(x)\,dx\\ &{}+\sum_{j=0}^{n-1} \binom{n}{j} \biggl( \int _{0}^{u}\chi_{1,n,j}(u-x) \mathcal{T}_{\rho_{\alpha+n\delta}} \phi_{j,d}(x)\,dx+\int_{0}^{u} \chi_{2,n,j}(u-x)\phi_{j,d}(x)\,dx \biggr). \end{aligned} \end{aligned}$$
(2.17)

The integral equations (2.16) and (2.17) are defective renewal equations since \(\int_{0}^{\infty}g_{n}(x)\,dx<1\) under the net profit condition \(c>\lambda EX\) (see Tsai and Willmot [2]). Define \(S_{n}(x)=\sum_{k=1}^{\infty}g_{n}^{*k}(x)\), where \(g_{n}^{*1}(x)=g_{n}(x)\) and for \(k\geq2\), \(g_{n}^{*k}(x)=\int_{0}^{x} g_{n}(x-y)g_{n}^{*(k-1)}(y)\,dy\). By renewal theory, we can express the solutions to (2.16) and (2.17) as follows:

$$\begin{aligned} \phi_{n,w}(u) =&\sum_{j=0}^{n-1} \binom{n}{j} \biggl( \int_{0}^{u}\chi_{1,n,j}(u-x) \mathcal{T}_{\rho_{\alpha+n\delta}} \phi_{j,w}(x)\,dx+\int_{0}^{u} \chi_{2,n,j}(u-x)\phi_{j,w}(x)\,dx \biggr)+\zeta_{n}(u) \\ &{}+\sum_{j=0}^{n-1} \binom{n}{j}\int_{0}^{u} S_{n}(u-y) \biggl( \int_{0}^{y} \chi_{1,n,j}(y-x)\mathcal{T}_{\rho_{\alpha+n\delta}} \phi_{j,w}(x)\,dx \\ &{}+\int _{0}^{y}\chi_{2,n,j}(y-x) \phi_{j,w}(x)\,dx \biggr)\,dy +\int_{0}^{u} S_{n}(u-y) \zeta_{n}(y)\,dy \\ =&\sum_{j=0}^{n-1} \binom{n}{j}\int _{0}^{u}\bigl[p_{1,n,j}(u-x) \mathcal{T}_{\rho_{\alpha+n\delta}}\phi_{j,w}(x) +p_{2,n,j}(u-x) \phi_{j,w}(x)\bigr]\,dx +p_{n}(u) \end{aligned}$$
(2.18)

and

$$\begin{aligned} \phi_{n,d}(u)=\sum_{j=0}^{n-1} \binom{n}{j}\int_{0}^{u}\bigl[p_{1,n,j}(u-x) \mathcal{T}_{\rho_{\alpha+n\delta}}\phi_{j,d}(x) +p_{2,n,j}(u-x) \phi_{j,d}(x)\bigr]\,dx, \end{aligned}$$
(2.19)

where

$$\begin{aligned}& p_{n}(u)=\zeta_{n}(u)+\int_{0}^{u} S_{n}(u-y)\zeta_{n}(y)\,dy, \\& p_{i,n,j}(u)= \chi_{i,n,j}(u)+\int_{0}^{u} S_{n}(u-y)\chi_{i,n,j}(y)\,dy,\quad i=1,2. \end{aligned}$$

It follows from (2.18) and (2.19) that we can compute \(\phi_{n,w}(u)\) and \(\phi_{n,d}(u)\) recursively with the starting points \(\phi_{0,w}(u)\) and \(\phi_{0,d}(u)\). For \(\phi_{0,w}(u)\), setting \(n=0\) in (2.18) gives

$$ \phi_{0,w}(u)=p_{0}(u)=\zeta_{0}(u)+ \int_{0}^{u} S_{0}(u-y) \zeta_{0}(y)\,dy. $$
(2.20)
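Numerically, defective renewal equations of this type can also be solved directly on a grid instead of through the Neumann series \(S_{n}\). The discretization below is a sketch under an assumed step size; with \(g=g_{0}\) and \(h=\zeta_{0}\) it reproduces \(\phi_{0,w}\) in (2.20) approximately.

```python
import numpy as np

# Grid-based sketch for a defective renewal equation
#   phi(u) = int_0^u phi(u - x) g(x) dx + h(u).
# A right-endpoint rule is used so that each phi[i] depends only on phi[0..i-1];
# the step size and grid length are illustrative assumptions.
def solve_renewal(g, h, u_max=20.0, step=0.01):
    grid = np.arange(0.0, u_max + step, step)
    gvals = np.array([g(x) for x in grid])
    phi = np.empty_like(grid)
    phi[0] = h(grid[0])
    for i in range(1, len(grid)):
        phi[i] = h(grid[i]) + step * np.dot(gvals[1:i + 1], phi[i - 1::-1])
    return grid, phi
```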

For \(\phi_{0,d}(u)\), it follows from Tsai and Willmot [2] that the following defective renewal equation holds:

$$ \phi_{0,d}(u)=\int_{0}^{u} \phi_{0,d}(u-x)g_{0}(x)\,dx+e^{-(\rho_{\alpha}+\frac {2c}{\sigma^{2}})u}. $$
(2.21)

Hence,

$$ \phi_{0,d}(u)=e^{-(\rho_{\alpha}+\frac{2c}{\sigma^{2}})u} +\int_{0}^{u}S_{0}(u-x)e^{-(\rho_{\alpha}+\frac{2c}{\sigma^{2}})x}\,dx. $$
(2.22)
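With the starting values in place, one step of the recursion (2.18) can be sketched on a grid as follows. The callables p_n, p1, and p2 stand for \(p_{n}\), \(p_{1,n,j}\), and \(p_{2,n,j}\) (explicit expressions follow in Section 3); the rectangle rule and the truncation of the Dickson-Hipp transform at the end of the grid are numerical assumptions, not part of the paper's method.

```python
import numpy as np
from math import comb

# Truncated Dickson-Hipp transform on a grid:
#   T_rho phi(x) ~ step * sum_{t >= x, t in grid} e^{-rho (t - x)} phi(t).
def dickson_hipp_grid(phi_vals, grid, rho):
    step = grid[1] - grid[0]
    weights = np.exp(-rho * (grid - grid[0]))
    return np.array([step * np.sum(weights[:len(grid) - k] * phi_vals[k:])
                     for k in range(len(grid))])

# One step of the recursion (2.18): assemble phi_{n,w} on `grid` from
# phi_{j,w}, j < n (the list phi_lower), and the kernels p_n(u), p1(u, j),
# p2(u, j), which are placeholders for p_n, p_{1,n,j}, p_{2,n,j}.
def next_phi(n, phi_lower, p_n, p1, p2, grid, rho):
    step = grid[1] - grid[0]
    phi_n = np.array([p_n(u) for u in grid], dtype=float)
    for j, phi_j in enumerate(phi_lower):                 # j = 0, ..., n-1
        T_phi_j = dickson_hipp_grid(phi_j, grid, rho)
        for i, u in enumerate(grid):
            x = grid[:i + 1]
            k1 = np.array([p1(u - xv, j) for xv in x])
            k2 = np.array([p2(u - xv, j) for xv in x])
            phi_n[i] += comb(n, j) * step * (
                np.sum(k1 * T_phi_j[:i + 1]) + np.sum(k2 * phi_j[:i + 1]))
    return phi_n
```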

3 Explicit results

In this section, we assume that the individual claim size is distributed as a combination of exponentials with density

$$ f_{X}(x)=\sum_{i=1}^{m} \eta_{i} \mu_{i} e^{-\mu_{i}x},\quad x>0, $$
(3.1)

where \(\mu_{1},\ldots, \mu_{m}>0\) are distinct and \(\eta_{1}, \ldots, \eta_{m}\) are nonzero constants such that \(\sum_{i=1}^{m}\eta_{i}=1\). It is well known that the class of combinations of exponentials can be used to approximate any absolutely continuous distribution on \((0, \infty)\) (see e.g. Dufresne [13]). We assume that the cost function is a linear function of the following form:

$$ \theta(x)=\theta_{0}+\theta_{1}x, $$
(3.2)

where \(\theta_{0}, \theta_{1}\geq0\), and \(\theta_{0}+\theta_{1}>0\). Hence, each claim cost comprises a fixed cost \(\theta_{0}\) and a proportional part with coefficient \(\theta_{1}\). It follows from (2.18) and (2.19) that we need to determine the functions \(p_{n}(u)\), \(p_{1,n,j}(u)\), \(p_{2,n,j}(u)\) as well as the starting values \(\phi_{0,w}(u)\) and \(\phi_{0,d}(u)\). We will use the Laplace transform method to determine these functions.

3.1 Determination of \(p_{n}(u)\)

By Laplace transform we have

$$\begin{aligned} \widehat{p}_{n}(s)=\bigl(1+\widehat{S}_{n}(s) \bigr)\widehat{\zeta}_{n}(s)= \frac{\widehat{\zeta}_{n}(s)}{1-\widehat{g}_{n}(s)}. \end{aligned}$$
(3.3)

For \(f_{X}\) given by (3.1) we have

$$\begin{aligned} \widehat{g}_{n}(s)=\frac{\frac{2\lambda}{\sigma^{2}}}{s+\rho_{\alpha+n\delta }+\frac{2c}{\sigma^{2}}} \frac{\widehat{f}_{X}(s)-\widehat{f}_{X}(\rho_{\alpha+n\delta}) }{\rho _{\alpha+n\delta}-s }= \frac{\frac{2\lambda}{\sigma^{2}}}{s+\rho_{\alpha+n\delta}+\frac {2c}{\sigma^{2}}} \sum_{i=1}^{m} \frac{\eta_{i}\mu_{i}}{(s+\mu_{i})(\rho_{\alpha+n\delta}+\mu_{i})}. \end{aligned}$$

Since \(\rho_{\alpha+n\delta}\) is the root of (2.9), we have

$$\begin{aligned} &(s-\rho_{\alpha+n\delta}) \biggl(s+\rho_{\alpha+n\delta}+ \frac {2c}{\sigma^{2}} \biggr) \bigl(1-\widehat{g}_{n}(s)\bigr) \\ &\quad=(s-\rho_{\alpha+n\delta}) \biggl(s+\rho_{\alpha+n\delta}+\frac {2c}{\sigma^{2}} \biggr) +\frac{2\lambda}{\sigma^{2}}\bigl(\widehat{f}_{X}(s)- \widehat{f}_{X}(\rho_{\alpha +n\delta})\bigr) \\ &\quad=\frac{2}{\sigma^{2}} \biggl( \frac{1}{2}\sigma^{2}s^{2}+cs-( \lambda+\alpha+n\delta)+\lambda\widehat{f}_{X}(s) \biggr) \\ &\quad=\frac{(s-\rho_{\alpha+n\delta})\prod_{i=1}^{m+1}(s+R_{n,i}) }{\prod_{i=1}^{m}(s+\mu_{i}) }, \end{aligned}$$
(3.4)

where \(-R_{n,1}, \ldots, -R_{n,m+1}\) are the roots of the generalized Lundberg equation (2.9) with negative real parts. In the remainder of this paper, it is assumed that \(-R_{n,1}, \ldots, -R_{n,m+1}\) are distinct.
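In practice, \(\rho_{\alpha+n\delta}\) and \(-R_{n,1},\ldots,-R_{n,m+1}\) can be computed by clearing denominators in (2.9): multiplying by \(\prod_{i=1}^{m}(s+\mu_{i})\) leaves a polynomial of degree \(m+2\) whose roots they are, as in (3.4). A numpy-based sketch follows; the parameter values are placeholders, and complex roots may occur in conjugate pairs.

```python
import numpy as np

# Roots of (2.9) for a combination-of-exponentials claim density: clearing the
# denominator prod_i (s + mu_i) leaves a polynomial of degree m + 2 whose roots
# are rho_{alpha+n delta} and -R_{n,1}, ..., -R_{n,m+1}.  q = alpha + n*delta.
def lundberg_roots(c, sigma2, lam, q, etas, mus):
    etas, mus = np.asarray(etas, float), np.asarray(mus, float)
    poly = np.poly1d([0.5 * sigma2, c, -(lam + q)])
    for mu in mus:
        poly *= np.poly1d([1.0, mu])                  # times prod_i (s + mu_i)
    for k, (eta, mu) in enumerate(zip(etas, mus)):
        term = np.poly1d([lam * eta * mu])            # lam*eta_i*mu_i*prod_{k!=i}(s+mu_k)
        for kk, mu_k in enumerate(mus):
            if kk != k:
                term *= np.poly1d([1.0, mu_k])
        poly += term
    roots = np.roots(poly.coeffs)
    rho = roots[np.isclose(roots.imag, 0.0) & (roots.real >= -1e-10)].real.max()
    R = -roots[roots.real < 0]                        # the R_{n,i}, possibly complex
    return rho, R

rho, R = lundberg_roots(c=1.2, sigma2=0.5, lam=1.0, q=0.01,
                        etas=[0.7, 0.3], mus=[1.0, 0.8])
print(rho, R)
```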

For \(\widehat{\zeta}_{n}(s)\), we consider two cases. First, we consider the case \(\theta_{0}\neq0\) and \(\theta_{1}=0\). Since

$$\beta_{n}(y)=\lambda\int_{y}^{\infty}\theta^{n}_{0}w(x-y)f_{X}(x)\,dx=\lambda\theta _{0}^{n}\sum_{i=1}^{m} \eta_{i} \mu_{i} e^{-\mu_{i}y}\int_{0}^{\infty}w(x) e^{-\mu_{i}x}\,dx, $$

we have

$$\begin{aligned} \widehat{\zeta}_{n}(s)=\frac{2}{\sigma^{2}}\frac{1}{s+\rho_{\alpha+n\delta} +\frac{2c}{\sigma^{2}}} \frac{\widehat{\beta}_{n}(s) -\widehat{\beta}_{n}(\rho_{\alpha+n\delta})}{\rho_{\alpha+n\delta} -s}=\frac{\frac{2\lambda}{\sigma^{2}}}{s+ \rho_{\alpha+n\delta}+\frac{2c}{\sigma^{2}}} \sum_{i=1}^{m} \frac{\theta_{0}^{n}\eta_{i}\mu_{i}\widehat{w} (\mu_{i})}{(s+\mu_{i})(\rho_{\alpha+n\delta}+\mu_{i})}. \end{aligned}$$

Then by (3.3) and (3.4) we have

$$\begin{aligned} \widehat{p}_{n}(s)=\frac{\widehat{\zeta}_{n}(s)}{1-\widehat{g}_{n}(s)} = \frac{(s-\rho_{\alpha+n\delta})(s+\rho_{\alpha+n\delta}+\frac{2c}{\sigma ^{2}})\widehat{\zeta}_{n}(s) }{\frac{(s-\rho_{\alpha+n\delta}) \prod_{i=1}^{m+1}(s+R_{n,i}) }{\prod_{i=1}^{m}(s+\mu_{i}) } }= \frac{ L_{1,n}(s) }{\prod_{i=1}^{m+1}(s+R_{n,i}) }, \end{aligned}$$

where

$$L_{1,n}(s)= \biggl(s+\rho_{\alpha+n\delta} +\frac{2c}{\sigma^{2}} \biggr) \widehat{\zeta}_{n}(s) \prod_{i=1}^{m}(s+ \mu_{i}) $$

is a polynomial with degree \(m-1\). By partial fraction we have

$$\widehat{p}_{n}(s)=\sum_{i=1}^{m+1} \frac{L_{1,n,i}}{ s+R_{n,i}}, $$

where

$$L_{1,n,i}=\frac{L_{1,n}(-R_{n,i})}{\prod_{j=1,j\neq i}^{m+1}(R_{n,j}-R_{n,i})},\quad i=1,2,\ldots, m+1. $$

By Laplace inversion we obtain the following result.

Proposition 2

Suppose that \(-R_{n,1}, \ldots, -R_{n,m+1}\) are distinct. If \(\theta _{0}\neq0\) and \(\theta_{1}=0\), then

$$ p_{n}(u)=\sum_{i=1}^{m+1}L_{1,n,i} e^{-R_{n,i}u}. $$
(3.5)
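Coefficients such as \(L_{1,n,i}\) follow the usual residue formula for simple poles; a small generic helper (a sketch assuming simple, distinct poles) is:

```python
import numpy as np

# Residue-style partial fraction coefficients for N(s) / prod_i (s + R_i) with
# simple poles, as used for L_{1,n,i} (and later for K_{n,j,i} and W_{n,j,i}).
def pf_coefficients(numerator, R):
    R = np.asarray(R, dtype=float)
    return np.array([numerator(-Ri) / np.prod(np.delete(R, i) - Ri)
                     for i, Ri in enumerate(R)])

# Then, as in (3.5), p_n(u) = sum_i coeffs[i] * exp(-R[i] * u).
```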

Next, we consider the case \(\theta_{1}\neq0\). By binomial expansion we have

$$\begin{aligned} \beta_{n}(y) =&\lambda\int_{y}^{\infty}( \theta_{0}+\theta_{1}x)^{n}w(x-y)f_{X}(x)\,dx \\ =&\lambda\int_{0}^{\infty}(\theta_{0}+ \theta_{1}x +\theta_{1}y)^{n}w(x)f_{X}(x+y)\,dx \\ =&\lambda\sum_{i=1}^{m}\sum _{k=0}^{n} \binom{n}{k} \eta_{i}\mu _{i}(\theta_{1}y)^{k} e^{-\mu_{i}y}\int _{0}^{\infty}(\theta_{0}+\theta _{1}x)^{n-k}w(x)e^{-\mu_{i}x}\,dx. \end{aligned}$$

Then

$$\begin{aligned} \widehat{\beta}_{n}(s) =&\lambda\sum_{i=1}^{m} \sum_{k=0}^{n} \binom{n}{k} \eta_{i}\int_{0}^{\infty}\mu_{i}( \theta_{1}y)^{k} e^{-\mu_{i}y}e^{-sy}\,dy\int _{0}^{\infty}(\theta_{0}+ \theta_{1}x)^{n-k}w(x)e^{-\mu_{i}x}\,dx \\ =&\lambda\sum_{i=1}^{m}\sum _{k=0}^{n}\eta_{i} \biggl( \frac{\theta_{1}}{\mu _{i}} \biggr)^{k} \frac{n!}{(n-k)!}\int _{0}^{\infty}\frac{\mu_{i}^{k+1}y^{k}}{k!} e^{-\mu_{i}y} e^{-sy}\,dy\\ &{}\times\int_{0}^{\infty}( \theta_{0}+\theta_{1}x)^{n-k}w(x)e^{-\mu_{i}x}\,dx \\ =&\lambda\sum_{i=1}^{m}\sum _{k=0}^{n}\frac{n!}{(n-k)!} \frac{\eta_{i}\mu_{i}\theta_{1}^{k} }{(s+\mu_{i})^{k+1} }\int _{0}^{\infty}(\theta _{0}+ \theta_{1}x)^{n-k}w(x)e^{-\mu_{i}x}\,dx \\ =&\lambda\sum_{i=1}^{m}\sum _{k=0}^{n}\frac{z_{n,i,k}}{(s+\mu_{i})^{k+1}}, \end{aligned}$$

where

$$z_{n,i,k}= \eta_{i}\mu_{i}\theta_{1}^{k} \frac{n!}{(n-k)!} \int_{0}^{\infty}( \theta_{0}+\theta _{1}x)^{n-k}w(x)e^{-\mu_{i}x}\,dx. $$

Consequently,

$$\begin{aligned} \widehat{\zeta}_{n}(s) =&\frac{2}{\sigma^{2}}\frac{1}{s+\rho_{\alpha+n\delta} +\frac{2c}{\sigma^{2}}} \frac{\widehat{\beta}_{n}(s) -\widehat{\beta}_{n}(\rho_{\alpha+n\delta})}{\rho_{\alpha+n\delta} -s} \\ =&\frac{\frac{2\lambda}{\sigma^{2}}}{s+\rho_{\alpha+n\delta} +\frac{2c}{\sigma^{2}}}\sum_{i=1}^{m}\sum _{k=0}^{n}\frac{ (\rho_{\alpha +n\delta}+\mu_{i})^{k+1}-(s+\mu_{i})^{k+1}}{ \rho_{\alpha+n\delta}-s } \frac{z_{n,i,k}}{(s+\mu_{i})^{k+1}(\rho_{\alpha+n\delta} +\mu_{i})^{k+1}}. \end{aligned}$$

By (3.3) and (3.4) we have

$$\begin{aligned} \widehat{p}_{n}(s)=\frac{\widehat{\zeta}_{n}(s)}{1-\widehat{g}_{n}(s)}= \frac{(s-\rho_{\alpha+n\delta})(s+\rho_{\alpha+n\delta}+\frac{2c}{\sigma ^{2}})\widehat{\zeta}_{n}(s) }{\frac{(s-\rho_{\alpha+n\delta})\prod_{i=1}^{m+1}(s+R_{n,i}) }{\prod_{i=1}^{m}(s+\mu_{i}) } }= \frac{ L_{2,n}(s) }{[\prod_{i=1}^{m+1}(s+R_{n,i})][ \prod_{i=1}^{m}(s+\mu_{i})^{n} ] }, \end{aligned}$$

where

$$L_{2,n}(s)= \biggl(s+\rho_{\alpha+n\delta}+\frac{2c}{\sigma^{2}} \biggr) \widehat{\zeta}_{n}(s)\prod_{i=1}^{m}(s+ \mu_{i})^{n+1} $$

is a polynomial with degree \(m(n+1)\). It is easily seen that \(-\mu_{1},\ldots, -\mu_{m}\) are not zeros of \(L_{2,n}(s)\). Hence, by partial fraction we have

$$ \widehat{p}_{n}(s)=\sum_{i=1}^{m+1} \frac{L_{2,n,i}}{s+R_{n,i}} +\sum_{i=1}^{m}\sum _{j=1}^{n}L_{2,n,i,j} \biggl( \frac{\mu_{i}}{s+\mu_{i}} \biggr)^{j}, $$
(3.6)

where

$$\begin{aligned}& L_{2,n,i}=\frac{L_{2,n}(-R_{n,i})}{ [\prod_{j=1,j\neq i}^{m+1}(R_{n,j}-R_{n,i}) ][ \prod_{j=1}^{m}(\mu_{j}-R_{n,i})^{n} ]}, \quad i=1,\ldots, m+1, \\& L_{2,n,i,j}=\frac{1}{(n-j)!\mu_{i}^{j}}\frac{d^{n-j}}{ds^{n-j}} \frac{ L_{2,n}(s) }{[\prod_{k=1}^{m+1}(s+R_{n,k})][ \prod_{k=1,k\neq i}^{m}(s+\mu_{k})^{n} ] }\Big|_{s=-\mu_{i}},\\& \quad i=1,\ldots, m; j=1,\ldots, n. \end{aligned}$$

Inverting the Laplace transforms in (3.6) yields the following result.

Proposition 3

Suppose that \(-R_{n,1}, \ldots, -R_{n,m+1}\) are distinct. If \(\theta _{1}\neq0\), then

$$ p_{n}(u)=\sum_{i=1}^{m+1}L_{2,n,i} e^{-R_{n,i}u}+\sum_{i=1}^{m}\sum _{j=1}^{n}L_{2,n,i,j} \frac{\mu_{i}^{j} u^{j-1} e^{-\mu_{i}u}}{(j-1)!}. $$
(3.7)

3.2 Determination of \(p_{1,n,j}(u)\) and \(p_{2,n,j}(u)\)

The Laplace transform of \(p_{1,n,j}(u)\) is given by

$$\begin{aligned} \widehat{p}_{1,n,j}(s)&=\bigl(1+\widehat{S}_{n}(s)\bigr) \widehat{\chi}_{1,n,j}(s)=\frac{\widehat{\chi}_{1,n,j}(s)}{1-\widehat{g}_{n}(s)} =\frac{ \frac{2}{\sigma^{2}}\widehat{\gamma}_{n-j}(\rho_{\alpha+n\delta}) \frac{1}{s+\rho_{\alpha+n\delta}+\frac{2c}{\sigma^{2}}}}{1-\widehat {g}_{n}(s) }\\ &= \frac{\frac{2}{\sigma^{2}}\widehat{\gamma}_{n-j}(\rho_{\alpha +n\delta}) \prod_{i=1}^{m}(s+\mu_{i}) }{\prod_{i=1}^{m+1}(s+R_{n,i}) }, \end{aligned}$$

where the last step follows from (3.4). By partial fraction we have

$$\widehat{p}_{1,n,j}(s)=\sum_{i=1}^{m+1} \frac{K_{n,j,i}}{s+R_{n,i}}, $$

where

$$K_{n,j,i}=\frac{\frac{2}{\sigma^{2}}\widehat{\gamma}_{n-j}(\rho_{\alpha +n\delta}) \prod_{k=1}^{m}(\mu_{k}-R_{n,i}) }{\prod_{k=1, k\neq i}^{m+1}(R_{n,k}-R_{n,i}) },\quad i=1,2,\ldots, m+1. $$

Then Laplace inversion gives the following result.

Proposition 4

Suppose that \(-R_{n,1}, \ldots, -R_{n,m+1}\) are distinct. Then

$$ p_{1,n,j}(u)=\sum_{i=1}^{m+1}K_{n,j,i}e^{-R_{n,i}u}. $$
(3.8)

Now we consider \(p_{2,n,j}(u)\), which has the Laplace transform

$$ \widehat{p}_{2,n,j}(s)=\bigl(1+\widehat{S}_{n}(s) \bigr) \widehat{\chi}_{2,n,j}(s)= \frac{\widehat{\chi}_{2,n,j}(s)}{1-\widehat{g}_{n}(s)} . $$
(3.9)

First, for the case \(\theta_{0}\neq0\) and \(\theta_{1}=0\), we have

$$\begin{aligned} \widehat{\chi}_{2,n,j}(s) =&\frac{2}{\sigma^{2}}\frac{1}{s+\rho_{\alpha +n\delta} +\frac{2c}{\sigma^{2}}} \mathcal{T}_{s}\mathcal{T}_{\rho_{\alpha+n\delta }}\gamma_{n-j}(0) \\ =&\frac{\frac{2\lambda}{\sigma^{2}}\theta_{0}^{n-j}}{s+\rho_{\alpha+n\delta} +\frac{2c}{\sigma^{2}}}\frac{\widehat{f}_{X}(s)-\widehat{f}_{X}(\rho_{\alpha +n\delta})}{ \rho_{\alpha+n\delta}-s} \\ =&\frac{\frac{2\lambda}{\sigma^{2}}\theta_{0}^{n-j}}{s+\rho_{\alpha+n\delta} +\frac{2c}{\sigma^{2}}}\sum_{i=1}^{m} \frac{\eta_{i}\mu_{i}}{ (s+\mu_{i})(\rho_{\alpha+n\delta}+\mu_{i})}. \end{aligned}$$

Hence, by (3.4) and (3.9) we have

$$\begin{aligned} \widehat{p}_{2,n,j}(s)=\frac{W_{n,j}(s)}{\prod_{i=1}^{m+1} (s+R_{n,i})}, \end{aligned}$$

where

$$W_{n,j}(s)= \biggl(s+\rho_{\alpha+n\delta}+\frac{2c}{\sigma^{2}} \biggr) \widehat{\chi}_{2,n,j}(s)\prod_{i=1}^{m}(s+ \mu_{i}) $$

is a polynomial with degree \(m-1\). By partial fraction we can obtain

$$\widehat{p}_{2,n,j}(s)=\sum_{i=1}^{m+1} \frac{W_{n,j,i}}{ s+R_{n,i}}, $$

where

$$W_{n,j,i}=\frac{W_{n,j}(-R_{n,i})}{\prod_{k=1,k\neq i}^{m+1}(R_{n,k}-R_{n,i})},\quad i=1,2,\ldots, m+1. $$

Then Laplace inversion gives the following result.

Proposition 5

Suppose that \(-R_{n,1}, \ldots, -R_{n,m+1}\) are distinct. If \(\theta _{0}\neq0\) and \(\theta_{1}=0\), then

$$ p_{2,n,j}(u)=\sum_{i=1}^{m+1}W_{n,j,i} e^{-R_{n,i}u}. $$
(3.10)

Next, we consider the case \(\theta_{1}\neq0\). For \(\theta_{0}=0\), we have

$$\gamma_{n-j}(x)=\lambda(\theta_{1}x)^{n-j}f_{X}(x), $$

which has the Laplace transform

$$\begin{aligned} \widehat{\gamma}_{n-j}(s) =&\lambda\sum_{i=1}^{m} \eta_{i}\mu_{i} \int_{0}^{\infty}( \theta_{1} x)^{n-j} e^{-\mu_{i}x} e^{-sx}\,dx= \lambda\sum_{i=1}^{m}\eta_{i} \mu_{i} \theta_{1}^{n-j} \frac{(n-j)!}{(s+\mu_{i})^{n-j+1}}, \end{aligned}$$

then

$$\begin{aligned} \widehat{\chi}_{2,n,j}(s) ={}&\frac{2}{\sigma^{2}}\frac{1}{s+\rho_{\alpha+n\delta} +\frac{2c}{\sigma^{2}}}\frac{\widehat{\gamma}_{n-j}(s) -\widehat{\gamma}_{n-j}(\rho_{\alpha+n\delta})}{ \rho_{\alpha+n\delta}-s} \\ ={}&\frac{\frac{2\lambda}{\sigma^{2}}}{ s+\rho_{\alpha+n\delta}+\frac{2c}{\sigma^{2}}} \sum_{i=1}^{m} \frac{(\rho_{\alpha+n\delta}+\mu_{i})^{n-j+1} -(s+\mu_{i})^{n-j+1}}{\rho_{\alpha+n\delta}-s } \\ &{}\times \frac{\eta_{i}\mu_{i}\theta_{1}^{n-j}(n-j)!}{(s+\mu_{i})^{n-j+1}(\rho_{\alpha +n\delta} +\mu_{i})^{n-j+1}}. \end{aligned}$$
(3.11)

For \(\theta_{0}\neq0\), by binomial expansion we have

$$\gamma_{n-j}(x)=\lambda(\theta_{0}+\theta_{1}x)^{n-j}f_{X}(x) =\lambda\sum_{k=0}^{n-j} \binom{n-j}{k} \theta_{0}^{n-j-k}(\theta _{1}x)^{k}f_{X}(x), $$

which has Laplace transform

$$\begin{aligned} \widehat{\gamma}_{n-j}(s) =&\lambda\sum_{i=1}^{m} \sum_{k=0}^{n-j} \binom{n-j}{k} \eta_{i}\mu_{i}\theta_{0}^{n-j-k}\int _{0}^{\infty}(\theta_{1}x)^{k} e^{-\mu_{i}x}e^{-sx}\,dx \\ =&\lambda\sum_{i=1}^{m}\sum _{k=0}^{n-j} \eta_{i}\mu_{i} \theta_{0}^{n-j-k}\theta_{1}^{k} \frac{(n-j)!}{(n-j-k)!} \frac{1}{(s+\mu_{i})^{k+1}}, \end{aligned}$$

then

$$\begin{aligned} \widehat{\chi}_{2,n,j}(s) ={}&\frac{2}{\sigma^{2}}\frac{1}{s+\rho_{\alpha+n\delta} +\frac{2c}{\sigma^{2}}}\frac{\widehat{\gamma}_{n-j}(s) -\widehat{\gamma}_{n-j}(\rho_{\alpha+n\delta})}{ \rho_{\alpha+n\delta}-s} \\ ={}&\frac{\frac{2\lambda}{\sigma^{2}}}{ s+\rho_{\alpha+n\delta}+\frac{2c}{\sigma^{2}}} \sum_{i=1}^{m}\sum _{k=0}^{n-j}\eta_{i} \mu_{i}\theta_{0}^{n-j-k} \theta_{1}^{k} \frac{(n-j)!}{(n-j-k)!} \\ &{} \times\frac {(\rho_{\alpha+n\delta}+\mu_{i})^{k+1} -(s+\mu_{i})^{k+1} }{\rho_{\alpha+n\delta}-s}\frac{1}{(s+\mu_{i})^{k+1}(\rho _{\alpha +n\delta}+\mu_{i})^{k+1}}. \end{aligned}$$
(3.12)

By (3.4) and (3.9) we have

$$\begin{aligned} \widehat{p}_{2,n,j}(s)=\frac{G_{n,j}(s)}{ [\prod_{i=1}^{m+1} (s+R_{n,i})][\prod_{i=1}^{m}(s+\mu_{i})^{n-j}]}, \end{aligned}$$

where by (3.11) and (3.12) we find that

$$G_{n,j}(s)= \biggl(s+\rho_{\alpha+n\delta}+\frac{2c}{\sigma^{2}} \biggr) \widehat{\chi}_{2,n,j}(s)\prod_{i=1}^{m}(s+ \mu_{i})^{n-j+1} $$

is a polynomial with degree \(m(n-j+1)-1\). By partial fraction we obtain

$$\widehat{p}_{2,n,j}(s)=\sum_{i=1}^{m+1} \frac{G_{n,j,i}}{ s+R_{n,i}}+\sum_{i_{1}=1}^{m}\sum _{i_{2}=1}^{n-j} G_{n,j,i_{1},i_{2}} \biggl( \frac{\mu_{i_{1}}}{s+\mu_{i_{1}}} \biggr)^{i_{2}}, $$

where

$$\begin{aligned}& G_{n,j,i}=\frac{G_{n,j}(-R_{n,i}) }{ [\prod_{k=1, k\neq i}^{m+1}(R_{n,k}-R_{n,i}) ] [ \prod_{k=1}^{m}(\mu _{k}-R_{n,i})^{n-j} ]},\quad i=1,\ldots, m+1,\\& G_{n,j,i_{1},i_{2}}=\frac{1}{(n-j-i_{2})!\mu_{i_{1}}^{i_{2}} }\frac {d^{n-j-i_{2}}}{ds^{n-j-i_{2}}}\frac{G_{n,j}(s)}{ [\prod_{i=1}^{m+1}(s+R_{n,i}) ][\prod_{i=1, i\neq i_{1}}^{m}(s+\mu _{i})^{n-j} ] }\Big|_{s=-\mu_{i_{1}}},\\& \quad i_{1}=1,\ldots, m; i_{2}=1,\ldots, n-j. \end{aligned}$$

Finally, Laplace inversion gives the following result.

Proposition 6

Suppose that \(-R_{n,1}, \ldots, -R_{n,m+1}\) are distinct. If \(\theta _{1}\neq0\), then

$$ p_{2,n,j}(u)=\sum_{i=1}^{m+1}G_{n,j,i} e^{-R_{n,i}u}+\sum_{i_{1}=1}^{m}\sum _{i_{2}=1}^{n-j} G_{n,j,i_{1},i_{2}} \frac{\mu_{i_{1}}^{i_{2}} u^{i_{2}-1} e^{-\mu_{i_{1}}u} }{(i_{2}-1)! }. $$
(3.13)

3.3 Determination of \(\phi_{0,w}(u)\) and \(\phi_{0,d}(u)\)

First, note that \(\phi_{0,w}(u)\) does not depend on \(\theta(x)\). By (2.20) and (3.5) we have

$$ \phi_{0,w}(u)=p_{0}(u)= \sum _{i=1}^{m+1}L_{1,0,i} e^{-R_{0,i}u}. $$
(3.14)

Next, for \(\phi_{0,d}(u)\), taking Laplace transform in (2.21) and using (3.4) we obtain

$$\begin{aligned} \widehat{\phi}_{0,d}(s)=\frac{\frac{1}{s+\rho_{\alpha}+\frac{2c}{\sigma ^{2}}} }{1-\widehat{g}_{0}(s) }=\frac{\prod_{i=1}^{m}(s+\mu_{i}) }{\prod_{i=1}^{m+1}(s+R_{0,i}) }=\sum _{i=1}^{m+1}\frac{A_{d,i} }{s+R_{0,i} }, \end{aligned}$$

where

$$A_{d,i}=\frac{\prod_{j=1}^{m}(\mu_{j}-R_{0,i}) }{ \prod_{j=1,j\neq i}^{m+1}(R_{0,j}-R_{0,i}) },\quad i=1,\ldots, m+1. $$

Hence, Laplace inversion gives

$$ \phi_{0,d}(u)=\sum_{i=1}^{m+1}A_{d,i}e^{-R_{0,i}u}. $$
(3.15)
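Given the roots \(-R_{0,1},\ldots,-R_{0,m+1}\) (for example from the root-finding sketch in Section 3.1) and the rates \(\mu_{i}\), formula (3.15) is straightforward to evaluate; the helper below is a sketch with placeholder inputs.

```python
import numpy as np

# Evaluate phi_{0,d}(u) from (3.15), given the roots R_{0,i} of the Lundberg
# equation (e.g. from a root-finding sketch) and the rates mu_i of f_X.
def phi_0d(u, R0, mus):
    R0, mus = np.asarray(R0, float), np.asarray(mus, float)
    A = np.array([np.prod(mus - Ri) / np.prod(np.delete(R0, i) - Ri)
                  for i, Ri in enumerate(R0)])               # the A_{d,i}
    return float(np.sum(A * np.exp(-R0 * u)))
```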

4 Numerical illustrations

In this section, we present some numerical examples. First, we compute the expected number of claims until ruin. We set \(c=1.2\), \(\lambda=1\), \(\delta=\alpha=0\), \(\sigma^{2}=0.5\), \(w\equiv1\), \(\theta_{0}=1\), and \(\theta _{1}=0\). We consider the exponential claim size density \(f_{X}(x)=e^{-x}\). In Figure 1(a), we depict the curves of

$$\begin{aligned}& \operatorname{Mean}_{n,d}(u):=E\bigl[N(\tau)I\bigl(\tau< \infty, U( \tau)=0\bigr) |U(0)=u\bigr],\\& \operatorname{Mean}_{n,w}(u):=E\bigl[N(\tau)I\bigl(\tau<\infty, U( \tau)<0\bigr) |U(0)=u\bigr],\\& \operatorname{Mean}_{n}(u)=\operatorname{Mean}_{n,d}(u)+ \operatorname{Mean}_{n,w}(u). \end{aligned}$$

We observe that \(\operatorname{Mean}_{n,d}(u)\), \(\operatorname{Mean}_{n,w}(u)\) and \(\operatorname{Mean}_{n}(u)\) start from zero, then increase in u, and finally decrease to zero as \(u\rightarrow\infty\). This is because ruin occurs immediately (before any claim) for zero initial surplus, while ruin is unlikely to occur at all for large initial surplus. In contrast, if we consider the conditional means

$$\begin{aligned}& \operatorname{CMean}_{n,d}(u):=E\bigl[N(\tau)I\bigl( U(\tau)=0\bigr) |\tau< \infty, U(0)=u\bigr], \\& \operatorname{CMean}_{n,w}(u):=E\bigl[N(\tau)I\bigl( U(\tau)<0\bigr) |\tau<\infty, U(0)=u\bigr], \\& \operatorname{CMean}_{n}(u)=\operatorname{CMean}_{n,d}(u)+ \operatorname{CMean}_{n,w}(u), \end{aligned}$$

it follows from Figure 1(b) that they are increasing functions of u and approximately linear for large enough u.

Figure 1

The expected numbers of claims until ruin. (a) \(\operatorname{Mean}_{n,d}(u)\) (the red curve), \(\operatorname{Mean}_{n,w}(u)\) (the green curve), \(\operatorname{Mean}_{n}(u)\) (the blue curve); (b) \(\operatorname{CMean}_{n,d}(u)\) (the red curve), \(\operatorname{CMean}_{n,w}(u)\) (the green curve), \(\operatorname{CMean}_{n}(u)\) (the blue curve).

We also study variances of the number of claims until ruin conditional on ruin occurring, which are defined as

$$\begin{aligned}& \operatorname{CVar}_{n,d}(u)=\operatorname{Var} \bigl[N(\tau) I \bigl(U(\tau)=0\bigr) |\tau< \infty, U(0)=u \bigr], \\& \operatorname{CVar}_{n,w}(u)=\operatorname{Var} \bigl[ N(\tau)I\bigl( U(\tau)<0\bigr) |\tau<\infty, U(0)=u \bigr], \\& \operatorname{CVar}_{n}(u)=\operatorname{Var} \bigl[ N(\tau) |\tau< \infty, U(0)=u \bigr]. \end{aligned}$$
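The conditional quantities are obtained from the unconditional ones by dividing by the ruin probability \(\operatorname{Pr}(\tau<\infty|U(0)=u)=\phi_{0,d}(u)+\phi_{0,w}(u)\) (computed with \(\alpha=0\) and \(w\equiv1\)), and the conditional variances then follow from the first two moments since the indicators are idempotent. A sketch of this assembly, where phi_nd(u, n) and phi_nw(u, n) are hypothetical placeholders for an implementation of the recursion of Section 2:

```python
# Assemble the conditional means and variances of this section from the
# recursively computed phi_{n,d} and phi_{n,w} (here delta = alpha = 0, w = 1
# and theta = 1, so the discounted claim costs reduce to N(tau)).  The helper
# names phi_nd(u, n) and phi_nw(u, n) are hypothetical placeholders.
def conditional_moments(u, phi_nd, phi_nw):
    ruin_prob = phi_nd(u, 0) + phi_nw(u, 0)        # Pr(tau < infty | U(0) = u)
    cmean_d = phi_nd(u, 1) / ruin_prob
    cmean_w = phi_nw(u, 1) / ruin_prob
    cvar_d = phi_nd(u, 2) / ruin_prob - cmean_d**2
    cvar_w = phi_nw(u, 2) / ruin_prob - cmean_w**2
    cvar_total = (phi_nd(u, 2) + phi_nw(u, 2)) / ruin_prob - (cmean_d + cmean_w)**2
    return cmean_d, cmean_w, cvar_d, cvar_w, cvar_total
```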

We plot the conditional variances in Figure 2. We observe that the conditional variances \(\operatorname{CVar}_{n,d}(u)\) and \(\operatorname{CVar}_{n,w}(u)\) behave like quadratic functions of u as the initial surplus becomes large, while the total conditional variance \(\operatorname{CVar}_{n}(u)\) behaves like a linear function of u for large initial surplus.

Figure 2

The conditional variances of the number of claims until ruin. \(\operatorname{CVar}_{n,d}(u)\) (the red curve), \(\operatorname{CVar}_{n,w}(u)\) (the green curve) and \(\operatorname{CVar}_{n}(u)\) (the blue curve).

Next, we consider the expected total discounted claims until ruin. With the other settings as above, we set \(\delta=0.01\), \(\theta_{0}=0\), and \(\theta_{1}=1\). For the following functions:

$$\begin{aligned}& \operatorname{Mean}_{X,d}(u)=E \Biggl[\sum _{k=1}^{N(\tau)} e^{-\delta T_{k}}X_{k} I\bigl( \tau < \infty, U(\tau)=0\bigr) \bigg| U(0)=u \Biggr],\\& \operatorname{Mean}_{X,w}(u)=E \Biggl[\sum _{k=1}^{N(\tau)} e^{-\delta T_{k}}X_{k} I\bigl( \tau <\infty, U(\tau)<0\bigr) \bigg| U(0)=u \Biggr],\\& \operatorname{Mean}_{X}(u)=\operatorname{Mean}_{X,d}(u)+ \operatorname{Mean}_{X,w}(u), \end{aligned}$$

it follows from Figure 3 that they have the same behavior as the expected number of claims until ruin, which can be explained as above. Furthermore, we study the mean and variance conditional on ruin occurring, which are defined as

$$\begin{aligned}& \operatorname{CMean}_{X,d}(u)=E \Biggl[\sum _{k=1}^{N(\tau)} e^{-\delta T_{k}}X_{k} I\bigl(U( \tau )=0\bigr) \bigg|\tau< \infty, U(0)=u \Biggr], \\& \operatorname{CMean}_{X,w}(u)=E \Biggl[\sum _{k=1}^{N(\tau)} e^{-\delta T_{k}}X_{k} I\bigl( U( \tau)<0\bigr) \bigg|\tau<\infty, U(0)=u \Biggr], \\& \operatorname{CMean}_{X}(u)=\operatorname{CMean}_{X,d}(u)+ \operatorname{CMean}_{X,w}(u), \\& \operatorname{CVar}_{X,d}(u)=\operatorname{Var} \Biggl[\sum _{k=1}^{N(\tau)} e^{-\delta T_{k}}X_{k} I\bigl(U( \tau)=0\bigr) \bigg|\tau<\infty, U(0)=u \Biggr], \\& \operatorname{CVar}_{X,w}(u)=\operatorname{Var} \Biggl[\sum _{k=1}^{N(\tau)} e^{-\delta T_{k}}X_{k} I\bigl( U( \tau)<0\bigr) \bigg|\tau<\infty, U(0)=u \Biggr], \\& \operatorname{CVar}_{X}(u)=\operatorname{Var} \Biggl[\sum _{k=1}^{N(\tau)} e^{-\delta T_{k}}X_{k} \bigg|\tau< \infty, U(0)=u \Biggr]. \end{aligned}$$

We find from Figure 4 that the conditional means and variances are increasing functions of u and converge to some finite constants as \(u\rightarrow\infty\). The finite limit behavior has also been found by Cheung [8].

Figure 3

The expected total discounted claims until ruin. \(\operatorname{Mean}_{X,d}(u)\) (the red curve), \(\operatorname{Mean}_{X,w}(u)\) (the green curve) and \(\operatorname{Mean}_{X}(u)\) (the blue curve).

Figure 4

The mean and variance of the total discounted claims until ruin, conditional on ruin occurring. (a) \(\operatorname{CMean}_{X,d}(u)\) (the red curve), \(\operatorname{CMean}_{X,w}(u)\) (the green curve), \(\operatorname{CMean}_{X}(u)\) (the blue curve); (b) \(\operatorname{CVar}_{X,d}(u)\) (the red curve), \(\operatorname{CVar}_{X,w}(u)\) (the green curve), \(\operatorname{CVar}_{X}(u)\) (the blue curve).

Finally, we consider the case when \(f_{X}\) is a combination of exponentials with the following setting:

$$f_{X}(x)=0.7 e^{-x}+0.24 e^{-0.8x}. $$
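As a quick check (an aside), this corresponds to \(\eta_{1}=0.7\), \(\mu_{1}=1\) and \(\eta_{2}=0.3\), \(\mu_{2}=0.8\) in (3.1), so that \(\eta_{1}+\eta_{2}=1\) and the density integrates to one:

```python
import numpy as np
from scipy.integrate import quad

# 0.7 = eta_1 * mu_1 with mu_1 = 1, and 0.24 = eta_2 * mu_2 with mu_2 = 0.8,
# so eta_1 = 0.7, eta_2 = 0.3 and eta_1 + eta_2 = 1.
total = quad(lambda x: 0.7 * np.exp(-x) + 0.24 * np.exp(-0.8 * x), 0.0, np.inf)[0]
print(total)   # approximately 1.0
```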

Furthermore, we set \(c=1.5\), \(\lambda=1\), \(\sigma^{2}=0.5\), \(\delta=0.01\), \(\alpha =0\), and the cost function \(\theta(x)=x+0.2\). We consider the following conditional means:

$$\begin{aligned}& \operatorname{CMean}_{\theta,d}(u)=E \Biggl[\sum _{k=1}^{N(\tau)} e^{-\delta T_{k}}\theta (X_{k}) I \bigl(U(\tau)=0\bigr) \bigg|\tau< \infty, U(0)=u \Biggr], \\& \operatorname{CMean}_{\theta,w}(u)=E \Biggl[\sum _{k=1}^{N(\tau)} e^{-\delta T_{k}}\theta (X_{k}) I \bigl( U(\tau)<0\bigr) \bigg|\tau<\infty, U(0)=u \Biggr], \\& \operatorname{CMean}_{\theta}(u)=\operatorname{CMean}_{\theta,d}(u)+ \operatorname{CMean}_{\theta,w}(u). \end{aligned}$$

In Figure 5, we plot the means and conditional means of the total discounted claim costs until ruin. Again, we observe that the means start from zero, then increase, and finally decrease to zero; the conditional means, however, increase to some finite constants.

Figure 5

The mean and conditional mean of the total discounted claim costs until ruin. (a) \(\phi_{1,d}(u)\) (the red curve), \(\phi_{1,w}(u)\) (the green curve), \(\phi_{1,d}(u)+\phi_{1,w}(u)\) (the blue curve); (b) \(\operatorname{CMean}_{\theta,d}(u)\) (the red curve), \(\operatorname{CMean}_{\theta,w}(u)\) (the green curve), \(\operatorname{CMean}_{\theta}(u)\) (the blue curve).

References

  1. Gerber, HU: An extension of the renewal equation and its application in the collective theory of risk. Skand. Aktuarietidskr. 1970, 205-210 (1970)


  2. Tsai, CCL, Willmot, GE: A generalized defective renewal equation for the surplus process perturbed by diffusion. Insur. Math. Econ. 30, 51-66 (2002)


  3. Li, S, Garrido, J: The Gerber-Shiu function in a Sparre Andersen risk process perturbed by diffusion. Scand. Actuar. J. 2005, 161-186 (2005)


  4. Zhang, Z, Yang, H: Gerber-Shiu analysis in a perturbed risk model with dependence between claim sizes and interclaim times. J. Comput. Appl. Math. 235, 1189-1204 (2011)


  5. Zhang, Z, Yang, H, Yang, H: On a Sparre Andersen risk model with time-dependent claim sizes and jump-diffusion perturbation. Methodol. Comput. Appl. Probab. 14, 973-995 (2012)


  6. Gerber, HU, Shiu, ESW: On the time value of ruin. N. Am. Actuar. J. 2, 48-72 (1998)


  7. Cai, J, Feng, R, Willmot, GE: On the total discounted operating costs up to default and its applications. Adv. Appl. Probab. 41, 495-522 (2009)


  8. Cheung, ECK: Moments of discounted aggregate claim costs until ruin in a Sparre Andersen risk model with general interclaim times. Insur. Math. Econ. 53, 343-354 (2013)


  9. Cheung, ECK, Woo, JK: On the discounted aggregate claim costs until ruin in dependent Sparre Andersen risk processes. Scand. Actuar. J. (2014, in press). http://www.tandfonline.com/doi/abs/10.1080/03461238.2014.900519#.VMjdztKl8oE

  10. Kyprianou, AE: Introductory Lectures on Fluctuations of Lévy Processes with Applications. Springer, Berlin (2006)


  11. Dickson, DCM, Hipp, C: On the time to ruin for Erlang (2) risk processes. Insur. Math. Econ. 29, 333-344 (2001)


  12. Gerber, HU, Landry, B: On the discounted penalty at ruin in a jump-diffusion and the perpetual put option. Insur. Math. Econ. 1998, 263-276 (1998)


  13. Dufresne, D: Fitting combinations of exponentials to probability distributions. Appl. Stoch. Models Bus. Ind. 23, 23-48 (2007)



Acknowledgements

The authors would like to thank two anonymous referees for their helpful comments and suggestions, which improved an earlier version of the paper. This work is supported by the National Natural Science Foundation of China (11101451, 11471058) and the Natural Science Foundation Project of CQ CSTC of China (cstc2014jcyjA00007).

Author information


Corresponding author

Correspondence to Chaolin Liu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.



Cite this article

Liu, C., Zhang, Z. On a generalized Gerber-Shiu function in a compound Poisson model perturbed by diffusion. Adv Differ Equ 2015, 34 (2015). https://doi.org/10.1186/s13662-015-0378-x
