
Theory and Modern Applications

On the averaging principle for stochastic delay differential equations with jumps

Abstract

In this paper, we investigate the averaging principle for stochastic delay differential equations (SDDEs) and SDDEs with pure jumps. By the Itô formula, the Taylor formula, and the Burkholder-Davis-Gundy inequality, we show that the solution of the averaged SDDEs converges to that of the standard SDDEs in the sense of pth moment and also in probability. Finally, two examples are provided to illustrate the theory.

1 Introduction

The averaging principle for dynamical systems is important in problems of mechanics, control, and many other areas. It was first put forward by Krylov and Bogolyubov [1] and later developed by Gikhman [2] and Volosov [3] for nonlinear ordinary differential equations. With the development of the theory of stochastic analysis, many authors began to study the averaging principle for stochastic differential equations (SDEs); see, for instance, Khasminskii [4], Khasminskii and Yin [5], Golec and Ladde [6], Veretennikov [7, 8], Givon et al. [9], and Tan and Lei [10].

On the other hand, many real systems are perturbed by abrupt pulses or extreme events, and models driven by white noise alone are not always appropriate for interpreting real data in a reasonable way. For such phenomena, a more natural mathematical framework goes beyond purely Brownian perturbations, and it is well recognized that SDEs with jumps are well suited to describing such discontinuous systems. Meanwhile, the averaging method for stochastic equations with jumps has received much attention, and there is some literature [11–14] concerned with the averaging method for SDEs with jumps.

Motivated by the above discussion, in this paper we study the averaging principle for a class of stochastic delay differential equations (SDDEs) with variable delays and jumps. To the best of our knowledge, the published papers on the averaging method concentrate on SDEs, and there are few results concerning the averaging principle for SDDEs with jumps. Moreover, most authors focus only on the mean-square convergence between the solutions of the averaged and the standard stochastic equations and do not consider the general pth (\(p>2\)) moment convergence. To close this gap, the main aim of this paper is to show that the solution of the averaged SDDEs converges to that of the standard SDDEs in the sense of pth moment. Using the Itô formula, the Taylor formula, and stochastic inequalities, we prove the pth moment convergence results. It should be pointed out that the proofs of [11–14] do not yield pth moment convergence, so we develop several new techniques to handle the pth moment case and the term with the Poisson random measure. The results obtained generalize and improve some results in [11–14].

The rest of this paper is organized as follows. In Section 2, we prove that the solution of the averaged SDDEs converges to that of the standard SDDEs in the sense of pth moment and also in probability; in Section 3 we establish the corresponding results for the pure jump case. Finally, we give two examples to illustrate the theory in Section 4.

2 Averaging principle for Brownian motion case

Let \((\Omega,\mathcal{F},P)\) be a complete probability space equipped with a filtration \((\mathcal{F}_{t})_{t\ge0}\) satisfying the usual conditions, and let \(w(t)\) be an m-dimensional Brownian motion defined on \((\Omega,\mathcal{F},P)\) and adapted to \((\mathcal{F}_{t})_{t\ge0}\). Let \(\tau>0\) and let \(C([-\tau,0];R^{n})\) denote the family of all continuous functions φ from \([-\tau,0]\) to \(R^{n}\), equipped with the norm \(\|\varphi\|=\sup_{-\tau\le t\le0}|\varphi(t)|\). \(C_{\mathcal{F}_{0}}^{b}([-\tau,0];R^{n})\) denotes the family of all almost surely bounded, \(\mathcal{F}_{0}\)-measurable, \(C([-\tau,0];R^{n})\)-valued random variables \(\xi=\{\xi(\theta):-\tau\le\theta\le0\}\). For \(p\ge2\), let \(\mathcal{L}_{\mathcal{F}_{0}}^{p}([-\tau,0];R^{n})\) denote the family of all \(\mathcal{F}_{0}\)-measurable, \(C([-\tau,0];R^{n})\)-valued random variables \(\varphi=\{\varphi(\theta):-\tau\le\theta\le0\}\) such that \(E\sup_{-\tau\le\theta\le0}|\varphi(\theta)|^{p}<\infty\).

Consider the following SDDEs:

$$\begin{aligned} dx(t)=f\bigl(t,x(t),x\bigl(\delta(t)\bigr)\bigr)\,dt+g\bigl(t,x(t),x \bigl(\delta(t)\bigr)\bigr)\,dw_{t}, \end{aligned}$$
(1)

where \(f:[0,T]\times R^{n}\times R^{n} \to R^{n} \) and \(g:[0,T]\times R^{n}\times R^{n} \to R^{n\times m} \) are both Borel-measurable functions. The function \(\delta: [0,T]\to R\) is the time delay which satisfies \(-\tau\le\delta (t)\le t\). The initial condition \(x_{0}\) is defined by

$$x_{0}=\xi=\bigl\{ \xi(t):-\tau\le t\le0\bigr\} \in\mathcal{L}_{\mathcal {F}_{0}}^{p} \bigl([-\tau,0];R^{n}\bigr), $$

that is, ξ is an \(\mathcal{F}_{0}\)-measurable \(C([-\tau,0];R^{n})\)-valued random variable and \(E\|\xi\|^{p}<\infty\).

To study the averaging method of (1), we need the following assumptions.

(H2.1) For all \(x_{1},y_{1},x_{2},y_{2}\in R^{n}\) and \(t\in[0,T]\), there exist two positive constants \(k_{1}\) and \(k_{0}\) such that

$$\begin{aligned} \bigl|f(t,x_{1},y_{1})-f(t,x_{2},y_{2})\bigr|^{2} \vee\bigl|g(t,x_{1},y_{1})-g(t,x_{2},y_{2})\bigr|^{2} \le k_{1}\bigl(|x_{1}-x_{2}|^{2}+|y_{1}-y_{2}|^{2} \bigr) \end{aligned}$$
(2)

and

$$\begin{aligned} \bigl|f(t,0,0)\bigr|^{2}\vee\bigl|g(t,0,0)\bigr|^{2}\le k_{0}. \end{aligned}$$
(3)

Clearly, condition (2) together with (3) implies the linear growth condition

$$\begin{aligned} \bigl|f(t,x,y)\bigr|^{2}\vee\bigl|g(t,x,y)\bigr|^{2}\le K \bigl(1+|x|^{2}+|y|^{2}\bigr), \end{aligned}$$
(4)

where \(K=2\max\{k_{1},k_{0}\}\).

In fact, by the elementary inequality \(|a+b|^{2}\le2|a|^{2}+2|b|^{2}\), we have, for any \(x,y\in R^{n}\),

$$\begin{aligned} \bigl|f(t,x,y)\bigr|^{2} \le& 2\bigl|f(t,x,y)-f(t,0,0)\bigr|^{2}+2\bigl|f(t,0,0)\bigr|^{2}\\ \le& 2k_{1}\bigl(|x|^{2}+|y|^{2} \bigr)+2k_{0} \le 2\max\{k_{1},k_{0}\} \bigl(1+|x|^{2}+|y|^{2}\bigr). \end{aligned}$$

Similar to the above derivation, we have

$$\begin{aligned} \bigl|g(t,x,y)\bigr|^{2} \le 2\max\{k_{1},k_{0}\} \bigl(1+|x|^{2}+|y|^{2}\bigr), \end{aligned}$$

which implies condition (4).
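As a quick numerical sanity check, the sketch below verifies the linear growth bound (4) on random samples for a pair of hypothetical scalar coefficients (our own illustrative choices of f, g, \(k_{1}\), \(k_{0}\); they are not from the paper), using the conservative constant \(K=2\max\{k_{1},k_{0}\}\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar coefficients (n = m = 1) satisfying (H2.1):
# both are globally Lipschitz with k1 = 2, and |f(t,0,0)|^2 v |g(t,0,0)|^2 <= k0 = 1.
def f(t, x, y):
    return np.sin(x) + np.cos(y)

def g(t, x, y):
    return 0.5 * (x + y)

k1, k0 = 2.0, 1.0
K = 2.0 * max(k1, k0)  # constant in the linear growth bound (4)

# Check |f|^2 v |g|^2 <= K(1 + |x|^2 + |y|^2) on random samples.
t = rng.uniform(0.0, 1.0, 10_000)
x = rng.normal(0.0, 5.0, 10_000)
y = rng.normal(0.0, 5.0, 10_000)
lhs = np.maximum(f(t, x, y) ** 2, g(t, x, y) ** 2)
rhs = K * (1.0 + x ** 2 + y ** 2)
print(bool(np.all(lhs <= rhs)))  # True: the bound holds at every sample
```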

By arguments similar to those in [15], we have the following existence result.

Theorem 2.1

Under condition (H2.1), (1) has a unique solution in \(L^{p}\), \(p\ge2\). Moreover, we have

$$E\sup_{0\le t\le T}\bigl|x(t)\bigr|^{p}< C. $$

The proof of Theorem 2.1 is given in the Appendix.

Next, let us consider the standard form of SDDEs (1),

$$\begin{aligned} x_{\varepsilon}(t) =&\xi(t)+\varepsilon\int_{0}^{t}f \bigl(s,x_{\varepsilon }(s),x_{\varepsilon}\bigl(\delta(s)\bigr)\bigr)\,ds +\sqrt{ \varepsilon} \int_{0}^{t}g\bigl(s,x_{\varepsilon}(s),x_{\varepsilon} \bigl(\delta (s)\bigr)\bigr)\,dw_{s}, \end{aligned}$$
(5)

where the coefficients f, g satisfy the same conditions as in (2) and (4), and \(\varepsilon\in(0,\varepsilon_{0}]\) is a small positive parameter, with \(\varepsilon_{0}\) a fixed number.

Let \(\bar{f}(x,y): R^{n}\times R^{n} \to R^{n} \) and \(\bar{g}(x,y): R^{n}\times R^{n} \to R^{n\times m} \) be measurable functions, satisfying condition (H2.1). We also assume that the following conditions are satisfied.

(H2.2) For any \(x,y\in R^{n}\), there exist two positive bounded functions \(\varphi_{i}(T_{1})\), \(i=1,2\), such that

$$\begin{aligned} \frac{1}{T_{1}}\int_{0}^{T_{1}}\bigl|a(s,x,y)- \bar{a}(x,y)\bigr|^{p}\,ds \le\varphi _{i}(T_{1}) \bigl(1+|x|^{p}+|y|^{p}\bigr), \end{aligned}$$

where \(a=f,g\) and \(\lim_{T_{1}\to\infty}\varphi_{i}(T_{1})=0\).

Then we have the averaging form of the corresponding standard SDDEs

$$\begin{aligned} y_{\varepsilon}(t) =&\xi(t)+\varepsilon\int_{0}^{t} \bar{f}\bigl(y_{\varepsilon }(s),y_{\varepsilon}\bigl(\delta(s)\bigr)\bigr)\,ds+ \sqrt{\varepsilon}\int_{0}^{t}\bar {g} \bigl(y_{\varepsilon}(s),y_{\varepsilon}\bigl(\delta(s)\bigr) \bigr)\,dw_{s}. \end{aligned}$$
(6)

Obviously, under condition (H2.1), the standard SDDEs (5) and the averaged SDDEs (6) each have a unique solution on \([0,T]\).
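To see the two equations side by side, here is a minimal Euler-Maruyama sketch comparing the standard equation (5) with its averaged form (6) along one shared Brownian path. The coefficients, the delay \(\delta(t)=t-\tau\), and the initial segment are hypothetical choices of ours, picked so that f̄ and ḡ are the natural long-time averages of f and g:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coefficients, for illustration only; the e^{-t} parts
# average out in time, so (H2.2) holds with phi_i(T) of order 1/T.
tau, T, dt, eps = 0.5, 2.0, 1e-3, 0.01
f     = lambda t, x, y: -x + np.exp(-t) * np.sin(y)
g     = lambda t, x, y: 0.3 + np.exp(-t) * np.cos(x)
f_bar = lambda x, y: -x
g_bar = lambda x, y: 0.3

n   = int(T / dt)
lag = int(tau / dt)
xi  = 1.0                              # constant initial segment on [-tau, 0]
xs  = np.full(n + 1, xi)               # standard solution x_eps
ys  = np.full(n + 1, xi)               # averaged solution y_eps
dW  = rng.normal(0.0, np.sqrt(dt), n)  # one shared Brownian path

for k in range(n):
    t  = k * dt
    xd = xs[k - lag] if k >= lag else xi   # x_eps(delta(t)), delta(t) = t - tau
    yd = ys[k - lag] if k >= lag else xi
    xs[k + 1] = xs[k] + eps * f(t, xs[k], xd) * dt + np.sqrt(eps) * g(t, xs[k], xd) * dW[k]
    ys[k + 1] = ys[k] + eps * f_bar(ys[k], yd) * dt + np.sqrt(eps) * g_bar(ys[k], yd) * dW[k]

err = np.max(np.abs(xs - ys))
print(err)  # small for small eps
```

With ε small the two trajectories remain close on the whole time grid, which is the qualitative content of Theorem 2.2 below.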

Now, we present and prove our main results, which are used for revealing the relationship between the \(x_{\varepsilon}(t)\) and \(y_{\varepsilon}(t)\).

Theorem 2.2

Let the conditions (H2.1), (H2.2) hold. For a given arbitrarily small number \(\delta_{1}>0\), there exist \(L>0\), \(\varepsilon_{1}\in(0,\varepsilon_{0}]\), and \(\beta\in(0,1)\) such that

$$\begin{aligned} E\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p}\le \delta_{1}, \quad \forall t\in\bigl[0,L\varepsilon^{-\beta}\bigr], \end{aligned}$$
(7)

for all \(\varepsilon\in(0,\varepsilon_{1}]\).

Proof

For simplicity, denote the difference \(e_{\varepsilon}(t)=x_{\varepsilon}(t)-y_{\varepsilon}(t)\). From (5) and (6), we have

$$\begin{aligned} e_{\varepsilon}(t) =&\varepsilon\int_{0}^{t} \bigl[f\bigl(s,x_{\varepsilon }(s),x_{\varepsilon}\bigl(\delta(s)\bigr)\bigr)- \bar{f}\bigl(y_{\varepsilon }(s),y_{\varepsilon}\bigl(\delta(s)\bigr)\bigr) \bigr]\,ds\\ &{}+\sqrt{\varepsilon}\int_{0}^{t}\bigl[g \bigl(s,x_{\varepsilon}(s),x_{\varepsilon }\bigl(\delta(s)\bigr)\bigr)-\bar{g} \bigl(y_{\varepsilon}(s),y_{\varepsilon}\bigl(\delta(s)\bigr)\bigr) \bigr]\,dw_{s}. \end{aligned}$$

By the Itô formula (see [15]), we have, for \(p\ge2\),

$$\begin{aligned} & \bigl|e_{\varepsilon}(t)\bigr|^{p} \\ &\quad=p\varepsilon\int_{0}^{t}\bigl|e_{\varepsilon}(s)\bigr|^{p-2} e_{\varepsilon}(s)^{\top}\bigl[f\bigl(s,x_{\varepsilon}(s),x_{\varepsilon} \bigl(\delta (s)\bigr)\bigr)-\bar{f}\bigl(y_{\varepsilon}(s),y_{\varepsilon} \bigl(\delta(s)\bigr)\bigr)\bigr]\,ds \\ &\qquad {}+\frac{p\varepsilon}{2}\int_{0}^{t}\bigl|e_{\varepsilon}(s)\bigr|^{p-2} \bigl|g\bigl(s,x_{\varepsilon}(s),x_{\varepsilon}\bigl(\delta(s)\bigr)\bigr)-\bar {g}\bigl(y_{\varepsilon}(s),y_{\varepsilon}\bigl(\delta(s)\bigr) \bigr)\bigr|^{2}\,ds \\ &\qquad{}+\frac{p(p-2)\varepsilon}{2}\int_{0}^{t}\bigl|e_{\varepsilon}(s)\bigr|^{p-4}\bigl|e_{\varepsilon}(s)^{\top}\bigl[g \bigl(s,x_{\varepsilon}(s),x_{\varepsilon }\bigl(\delta(s)\bigr)\bigr)-\bar{g } \bigl(y_{\varepsilon}(s),y_{\varepsilon}\bigl(\delta(s)\bigr)\bigr) \bigr]\bigr|^{2}\,ds \\ &\qquad{}+p\sqrt{\varepsilon}\int_{0}^{t}\bigl|e_{\varepsilon}(s)\bigr|^{p-2}e_{\varepsilon}(s)^{\top}\bigl[g\bigl(s,x_{\varepsilon}(s),x_{\varepsilon}\bigl( \delta(s)\bigr)\bigr)-\bar {g}\bigl(y_{\varepsilon}(s),y_{\varepsilon}\bigl( \delta(s)\bigr)\bigr)\bigr]\,dw_{s}. \end{aligned}$$
(8)

Using the basic inequality \(2ab\le a^{2}+b^{2}\) and taking expectation on both sides of (8), it follows that, for any \(u\in[0,T]\),

$$\begin{aligned} &E\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p} \\ &\quad\le p\varepsilon E\int_{0}^{u}\bigl|e_{\varepsilon}(t)\bigr|^{p-1} \bigl|f\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)-\bar {f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\bigr|\,dt \\ &\qquad{}+\frac{p(p-1)\varepsilon}{2}\int_{0}^{u}\bigl|e_{\varepsilon}(t)\bigr|^{p-2} \bigl|g\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)-\bar {g}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr) \bigr)\bigr|^{2}\,dt \\ &\qquad{}+E\sup_{0\le t\le u}\int_{0}^{t}p \sqrt{\varepsilon}\bigl|e_{\varepsilon}(s)\bigr|^{p-2}e_{\varepsilon}(s)^{\top} \bigl[g\bigl(s,x_{\varepsilon}(s),x_{\varepsilon} \bigl(\delta(s)\bigr)\bigr) -\bar{g}\bigl(y_{\varepsilon}(s),y_{\varepsilon}\bigl(\delta (s)\bigr) \bigr)\bigr]\,dw_{s} \\ &\quad=p\varepsilon I_{1}+\frac{p(p-1)\varepsilon}{2}I_{2}+I_{3}. \end{aligned}$$
(9)

By the Young inequality, it follows that for any \(\epsilon_{1}>0\)

$$\begin{aligned} I_{1} \le& E\int_{0}^{u} \biggl[\frac{(p-1)\epsilon_{1}}{p}\bigl|e_{\varepsilon}(t)\bigr|^{p} \\ &{} + \frac{1}{p\epsilon_{1}^{p-1}}\bigl|f\bigl(t,x_{\varepsilon}(t),x_{\varepsilon }\bigl( \delta(t)\bigr)\bigr)-\bar{f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl( \delta(t)\bigr)\bigr)\bigr|^{p}\biggr]\,dt, \end{aligned}$$
(10)

where the second term of (10) can be written as

$$\begin{aligned} &\bigl|f\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)- \bar {f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr) \bigr)\bigr|^{p}\\ &\quad=\bigl|f\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta (t)\bigr)\bigr)-f \bigl(t,y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\\ &\qquad{}+f\bigl(t,y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)- \bar {f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr) \bigr)\bigr|^{p}. \end{aligned}$$

For any \(\epsilon_{2}>0\), we derive that

$$\begin{aligned} &\bigl|f\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t) \bigr)\bigr)-\bar {f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t) \bigr)\bigr)\bigr|^{p} \\ &\quad\le\bigl[1+\epsilon_{2}^{\frac{1}{p-1}}\bigr]^{p-1}\biggl( \frac{1}{\epsilon_{2}} \bigl|f\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t) \bigr)\bigr)-f\bigl(t,y_{\varepsilon }(t),y_{\varepsilon}\bigl(\delta(t)\bigr) \bigr)\bigr|^{p} \\ &\qquad{} +\bigl|f\bigl(t,y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)- \bar {f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr) \bigr)\bigr|^{p}\biggr). \end{aligned}$$
(11)
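The prefactor in (11) comes from an elementary consequence of the Hölder inequality: for \(a,b\ge0\), \(p>1\), and \(\epsilon>0\),

$$\begin{aligned} (a+b)^{p}\le(1+\epsilon)^{p-1}\biggl(\frac{a^{p}}{\epsilon^{p-1}}+b^{p}\biggr), \end{aligned}$$

applied with \(\epsilon=\epsilon_{2}^{\frac{1}{p-1}}\), so that \((1+\epsilon)^{p-1}=[1+\epsilon_{2}^{\frac{1}{p-1}}]^{p-1}\) and \(\epsilon^{p-1}=\epsilon_{2}\).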

By (H2.1), we have

$$\begin{aligned} & \bigl|f\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)-\bar{f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\bigr|^{p}\\ &\quad\le\bigl[1+\epsilon_{2}^{\frac{1}{p-1}}\bigr]^{p-1}\biggl[\frac{1}{\epsilon_{2}}k_{1}^{\frac{p}{2}} \bigl(\bigl|e_{\varepsilon}(t)\bigr|^{2}+\bigl|e_{\varepsilon}\bigl(\delta(t)\bigr)\bigr|^{2}\bigr)^{\frac{p}{2}}\\ &\qquad{}+\bigl|f\bigl(t,y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)-\bar{f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\bigr|^{p}\biggr]\\ &\quad\le\bigl[1+\epsilon_{2}^{\frac{1}{p-1}}\bigr]^{p-1}\biggl[\frac{1}{\epsilon_{2}}(2k_{1})^{\frac{p}{2}} \sup_{0\le\sigma\le t}\bigl|e_{\varepsilon}(\sigma)\bigr|^{p}\\ &\qquad{}+\bigl|f\bigl(t,y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)-\bar{f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\bigr|^{p}\biggr]. \end{aligned}$$

Letting \(\epsilon_{2}=(\sqrt{2k_{1}})^{p-1}\) yields

$$\begin{aligned} &\bigl|f\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t) \bigr)\bigr)-\bar {f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t) \bigr)\bigr)\bigr|^{p} \\ &\quad\le(1+\sqrt{2k_{1}})^{p-1}\Bigl[\sqrt{2k_{1}} \sup_{0\le\sigma\le t}\bigl|e_{\varepsilon}(\sigma)\bigr|^{p} \\ &\qquad{} +\bigl|f\bigl(t,y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)- \bar {f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr) \bigr)\bigr|^{p}\Bigr]. \end{aligned}$$
(12)

Inserting (12) into (10), we have

$$\begin{aligned} I_{1} \le& E\int_{0}^{u} \biggl\{ \frac{(p-1)\epsilon_{1}}{p}\bigl|e_{\varepsilon}(t)\bigr|^{p} + \frac{1}{p\epsilon_{1}^{p-1}}(1+\sqrt{2k_{1}})^{p-1}\Bigl[ \sqrt{2k_{1}} \sup_{0\le\sigma\le t}\bigl|e_{\varepsilon}( \sigma)\bigr|^{p} \\ &{} +\bigl|f\bigl(t,y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)- \bar {f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr) \bigr)\bigr|^{p}\Bigr]\biggr\} \,dt. \end{aligned}$$
(13)

By setting \(\epsilon_{1}=1+\sqrt{2k_{1}}\), we get

$$\begin{aligned} I_{1} \le& 2(1+\sqrt{2k_{1}})E\int_{0}^{u} \sup_{0\le\sigma\le t}\bigl|e_{\varepsilon}(\sigma)\bigr|^{p}\,dt\\ &{} +(1+\sqrt{2k_{1}})u E\int_{0}^{u} \frac{1}{u}\bigl|f\bigl(t,y_{\varepsilon }(t),y_{\varepsilon}\bigl( \delta(t) \bigr)\bigr)-\bar{f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t) \bigr)\bigr)\bigr|^{p}\,dt. \end{aligned}$$

From condition (H2.2), we then see that

$$\begin{aligned} I_{1} \le& 2(1+\sqrt{2k_{1}})E\int_{0}^{u} \sup_{0\le\sigma\le t}\bigl|e_{\varepsilon}(\sigma)\bigr|^{p}\,dt \\ &{} +(1+\sqrt{2k_{1}})u\varphi_{1}(u) \Bigl(1+E\sup_{-\tau\le t\le0}\bigl|y_{\varepsilon}(t)\bigr|^{p}+2E\sup_{0\le t\le u}\bigl|y_{\varepsilon}(t)\bigr|^{p}\Bigr). \end{aligned}$$
(14)

Next, we will estimate \(I_{2}\) of (9). Using the Young inequality again, it follows that for any \(\epsilon_{3}>0\)

$$\begin{aligned} I_{2} \le& E\int_{0}^{u} \biggl[\frac{(p-2)\epsilon_{3}}{p}\bigl|e_{\varepsilon}(t)\bigr|^{p} \\ &{} + \frac{2}{p\epsilon_{3}^{(p-2)/2}}\bigl|g\bigl(t,x_{\varepsilon}(t),x_{\varepsilon }\bigl( \delta(t)\bigr)\bigr)-\bar{g}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl( \delta(t)\bigr)\bigr)\bigr|^{p}\biggr]\,dt. \end{aligned}$$
(15)

Similar to the computation of (12), we can obtain

$$\begin{aligned} & \bigl|g\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t) \bigr)\bigr)-\bar {g}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t) \bigr)\bigr)\bigr|^{p} \\ &\quad\le(1+\sqrt{2k_{1}})^{p-1}\Bigl[\sqrt{2k_{1}} \sup_{0\le\sigma\le t}\bigl|e_{\varepsilon}(\sigma)\bigr|^{p} \\ &\qquad{} +\bigl|g\bigl(t,y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)- \bar {g}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr) \bigr)\bigr|^{p}\Bigr]. \end{aligned}$$
(16)

Substituting (16) into (15),

$$\begin{aligned} I_{2} \le& E\int_{0}^{u}\biggl\{ \frac{(p-2)\epsilon_{3}}{p}\bigl|e_{\varepsilon}(t)\bigr|^{p}\\ &{} + \frac{2}{p\epsilon_{3}^{(p-2)/2}}(1+\sqrt{2k_{1}})^{p-1}\Bigl[ \sqrt{2k_{1}} \sup_{0\le\sigma\le t}\bigl|e_{\varepsilon}( \sigma)\bigr|^{p}\\ &{} +\bigl|g\bigl(t,y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)- \bar {g}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr) \bigr)\bigr|^{p}\Bigr]\biggr\} \,dt. \end{aligned}$$

Letting \(\epsilon_{3}=(1+\sqrt{2k_{1}})^{2}\), we get

$$\begin{aligned} I_{2} \le& (1+\sqrt{2k_{1}})^{2}E\int _{0}^{u}\Bigl[2 \sup_{0\le\sigma\le t}\bigl|e_{\varepsilon}( \sigma)\bigr|^{p}\\ &{} +\bigl|g\bigl(t,y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)- \bar {g}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr) \bigr)\bigr|^{p}\Bigr]\,dt. \end{aligned}$$

From condition (H2.2), it follows that

$$\begin{aligned} I_{2} \le& 2(1+\sqrt{2k_{1}})^{2}E\int_{0}^{u} \sup_{0\le\sigma\le t}\bigl|e_{\varepsilon}(\sigma)\bigr|^{p}\,dt \\ &{} +(1+\sqrt{2k_{1}})^{2}u\varphi_{2}(u) \Bigl(1+E\sup_{-\tau\le t\le0}\bigl|y_{\varepsilon}(t)\bigr|^{p}+2E\sup_{0\le t\le u}\bigl|y_{\varepsilon}(t)\bigr|^{p}\Bigr). \end{aligned}$$
(17)

On the other hand, by the Burkholder-Davis-Gundy inequality, we have

$$\begin{aligned} I_{3} \le& 3 E\biggl[\int_{0}^{u}p^{2}\varepsilon\bigl|e_{\varepsilon}(t)\bigr|^{2p-2}\bigl|g\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)-\bar{g}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\bigr|^{2}\,dt\biggr]^{\frac{1}{2}}\\ \le& 3 E\biggl[\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p} \int_{0}^{u}p^{2}\varepsilon\bigl|e_{\varepsilon}(t)\bigr|^{p-2}\bigl|g\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\\ &{}-\bar{g}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\bigr|^{2}\,dt\biggr]^{\frac{1}{2}}\\ \le& \frac{1}{2} E\Bigl(\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p}\Bigr)+Kp^{2}\varepsilon E\int_{0}^{u}\bigl|e_{\varepsilon}(t)\bigr|^{p-2}\bigl|g\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\\ &{}-\bar{g}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\bigr|^{2}\,dt. \end{aligned}$$

Similar to the estimation of \(I_{2}\), we derive that

$$\begin{aligned} I_{3} \le& \frac{1}{2} E\Bigl(\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p}\Bigr)+2Kp^{2}\varepsilon(1+\sqrt{2k_{1}})^{2}E\int_{0}^{u}\sup_{0\le\sigma\le t}\bigl|e_{\varepsilon}(\sigma)\bigr|^{p}\,dt \\ &{} +Kp^{2}\varepsilon(1+\sqrt{2k_{1}})^{2}u\varphi_{2}(u)\Bigl(1+E\sup_{-\tau\le t\le0}\bigl|y_{\varepsilon}(t)\bigr|^{p}+2E\sup_{0\le t\le u}\bigl|y_{\varepsilon}(t)\bigr|^{p}\Bigr). \end{aligned}$$
(18)

Hence, combining (14), (17), and (18),

$$\begin{aligned} E\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p} \le&c_{1}\varepsilon E\int_{0}^{u} \sup_{0\le\sigma\le t}\bigl|e_{\varepsilon}(\sigma)\bigr|^{p}\,dt\\ &{} +c_{2}\varepsilon u\Bigl(1+E\sup_{-\tau\le t\le0}\bigl|y_{\varepsilon}(t)\bigr|^{p}+2E\sup_{0\le t\le u}\bigl|y_{\varepsilon}(t)\bigr|^{p}\Bigr), \end{aligned}$$

where

$$\begin{aligned}& c_{1}=4(1+\sqrt{2k_{1}})p\bigl[1+(Kp+p-1) (1+ \sqrt{2k_{1}})\bigr], \\& c_{2}=2(1+\sqrt{2k_{1}})p\bigl[\varphi_{1}(u)+(Kp+p-1) (1+\sqrt{2k_{1}})\varphi_{2}(u)\bigr]. \end{aligned}$$

By Theorem 2.1, we have the following fact: for each \(t\ge0\), if \(E\|\xi\|^{p}<\infty\), then \(E|y_{\varepsilon }(t)|^{p}<\infty\). Hence condition (H2.2) implies that

$$\begin{aligned} E\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p} \le&c_{2}\varepsilon u\bigl(1+E\|\xi\|^{p}+2C\bigr) +c_{1}\varepsilon E\int_{0}^{u} \sup _{0\le\sigma\le t}\bigl|e_{\varepsilon}(\sigma)\bigr|^{p}\,dt. \end{aligned}$$

Finally, by the Gronwall inequality, we have

$$\begin{aligned} E\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p} \le& c_{2}\varepsilon u\bigl(1+E\|\xi\|^{p}+2C \bigr)e^{c_{1}\varepsilon u}. \end{aligned}$$

Choose \(\beta\in(0,1)\) and \(L>0\) such that for every \(t\in [0,L\varepsilon^{-\beta}]\subseteq[0,T]\),

$$\begin{aligned} E\sup_{t\in[0,L\varepsilon^{-\beta}]}\bigl|x_{\varepsilon}(t)-y_{\varepsilon }(t)\bigr|^{p} \le cL\varepsilon^{1-\beta}, \end{aligned}$$

where \(c=c_{2}(1+E\|\xi\|^{p}+2C)e^{c_{1}L\varepsilon^{1-\beta}}\). Consequently, given any number \(\delta_{1}>0\), we can choose \(\varepsilon_{1}\in(0,\varepsilon_{0}]\) such that for each \(\varepsilon\in(0,\varepsilon_{1}]\) and for \(t\in[0,L\varepsilon^{-\beta}]\),

$$\begin{aligned} E\sup_{t\in[0,L\varepsilon^{-\beta}]}\bigl|x_{\varepsilon}(t)-y_{\varepsilon }(t)\bigr|^{p} \le\delta_{1}. \end{aligned}$$

The proof is completed. □

With Theorem 2.2, it is easy to show the convergence in probability between \(x_{\varepsilon}(t)\) and \(y_{\varepsilon}(t)\).

Corollary 2.1

Let the conditions (H2.1) and (H2.2) hold. For a given arbitrarily small number \(\delta_{2}>0\), there exists \(\varepsilon_{2}\in(0,\varepsilon_{0}]\) such that for all \(\varepsilon\in(0,\varepsilon_{2}]\), we have

$$\begin{aligned} \lim_{\varepsilon\to0}P\Bigl(\sup_{0< t\le L\varepsilon^{-\beta }}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|> \delta_{2}\Bigr)=0, \end{aligned}$$

where L and β are defined in Theorem 2.2.

Proof

By Theorem 2.2 and the Chebyshev inequality, for any given number \(\delta_{2}>0\), we can obtain

$$P\Bigl(\sup_{0< t\le L\varepsilon^{-\beta}}\bigl|x_{\varepsilon}(t)-y_{\varepsilon }(t)\bigr|> \delta_{2}\Bigr)\le\frac{1}{\delta_{2}^{p}}E\Bigl(\sup_{0<t\le L\varepsilon ^{-\beta}}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p} \Bigr)\le\frac {cL\varepsilon^{1-\beta}}{\delta_{2}^{p}}. $$

Let \(\varepsilon\to0\), and the required result follows. □
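The rate \(\varepsilon^{1-\beta}\) behind Theorem 2.2 and Corollary 2.1 can be probed by Monte Carlo. The sketch below estimates \(E\sup_{0\le t\le T}|x_{\varepsilon}(t)-y_{\varepsilon}(t)|^{p}\) by Euler-Maruyama for a hypothetical model (our own illustrative coefficients, with f̄, ḡ the long-time averages) and several values of ε; the estimate shrinks as ε decreases:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical model, for illustration only.
tau, T, dt, p = 0.5, 1.0, 2e-3, 4
f     = lambda t, x, y: -x + np.exp(-t) * np.sin(y)
g     = lambda t, x, y: 0.3 + np.exp(-t) * np.cos(x)
f_bar = lambda x, y: -x
g_bar = lambda x, y: 0.3

def sup_moment_error(eps, n_paths=200):
    """Monte-Carlo estimate of E sup_{0<=t<=T} |x_eps(t) - y_eps(t)|^p."""
    n, lag, xi = int(T / dt), int(tau / dt), 1.0
    sups = np.empty(n_paths)
    for j in range(n_paths):
        xs = np.full(n + 1, xi)
        ys = np.full(n + 1, xi)
        dW = rng.normal(0.0, np.sqrt(dt), n)       # shared Brownian path per sample
        for k in range(n):
            t  = k * dt
            xd = xs[k - lag] if k >= lag else xi   # delayed state, delta(t) = t - tau
            yd = ys[k - lag] if k >= lag else xi
            xs[k+1] = xs[k] + eps*f(t, xs[k], xd)*dt + np.sqrt(eps)*g(t, xs[k], xd)*dW[k]
            ys[k+1] = ys[k] + eps*f_bar(ys[k], yd)*dt + np.sqrt(eps)*g_bar(ys[k], yd)*dW[k]
        sups[j] = np.max(np.abs(xs - ys)) ** p
    return sups.mean()

errs = [sup_moment_error(eps) for eps in (0.4, 0.04, 0.004)]
print(errs)  # decreasing as eps -> 0
```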

3 Averaging principle for pure jump case

In this section we turn to the counterpart results for SDDEs with jumps. We first need some further notation.

Let \(\tau>0\), and let \(D([-\tau,0];R^{n})\) denote the family of all right-continuous functions with left-hand limits φ from \([-\tau,0]\) to \(R^{n}\), equipped with the norm \(\|\varphi\|=\sup_{-\tau\le t\le0}|\varphi(t)|\). \(D_{\mathcal{F}_{0}}^{b}([-\tau,0];R^{n})\) denotes the family of all almost surely bounded, \(\mathcal{F}_{0}\)-measurable, \(D([-\tau,0];R^{n})\)-valued random variables \(\xi=\{\xi(\theta):-\tau\le\theta\le0\}\). For \(p\ge2\), let \(\mathcal{L}_{\mathcal{F}_{0}}^{p}([-\tau,0];R^{n})\) denote the family of all \(\mathcal{F}_{0}\)-measurable, \(D([-\tau,0];R^{n})\)-valued random variables \(\varphi=\{\varphi(\theta):-\tau\le\theta\le0\}\) such that \(E\sup_{-\tau\le\theta\le0}|\varphi(\theta)|^{p}<\infty\).

Let \((Z,\mathcal{B}(Z))\) be a measurable space and \(\pi(dv)\) a σ-finite measure on it. Let \(\{\bar{p}=\bar{p}(t), t\ge0\}\) be a stationary \(\mathcal{F}_{t}\)-Poisson point process on Z with characteristic measure π. Then, for \(A\in\mathcal{B}(Z-\{0\})\) with \(0\notin\mbox{the closure of }A\), the Poisson counting measure N is defined by

$$N\bigl((0,t]\times A\bigr):=\sum_{0< s\le t}I_{A}\bigl({\bar{p}}(s)\bigr)=\#\bigl\{ s\in(0,t]:{\bar{p}}(s)\in A\bigr\} . $$

By [16], we find that there exists a σ-finite measure π such that

$$E\bigl[N(t,A)\bigr]=\pi(A)t,\qquad P\bigl(N(t,A)=n\bigr)=\frac{e^{-t\pi(A)}(\pi(A)t)^{n}}{n!}, $$

where \(N(t,A):=N((0,t]\times A)\). This measure π is called the Lévy measure. The compensated Poisson random measure \(\tilde{N}\) is then defined by

$$\tilde{N}\bigl([0,t],A\bigr):=N\bigl([0,t],A\bigr)-t\pi(A),\quad t>0. $$

We refer to Ikeda and Watanabe [16] and Applebaum [17] for details on Poisson point processes and Lévy processes.
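The identity \(E[N(t,A)]=\pi(A)t\) and the centering of \(\tilde{N}\) are easy to check by simulation. The sketch below builds the counting measure from exponential inter-arrival times for a hypothetical set A with \(\pi(A)=\lambda\) (so \(N(t,A)\) is Poisson with mean \(\lambda t\)):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical choice: a set A bounded away from 0 with pi(A) = lam, so the
# points of the Poisson point process falling in A arrive at rate lam.
lam, t, n_samples = 2.0, 1.5, 100_000

# 30 inter-arrival times per sample path is far more than ever land in (0, t].
gaps = rng.exponential(1.0 / lam, size=(n_samples, 30))
arrivals = np.cumsum(gaps, axis=1)
counts = np.sum(arrivals <= t, axis=1)   # samples of N(t, A)

print(counts.mean())                 # approximately pi(A) * t = 3.0
print((counts - lam * t).mean())     # compensated measure N-tilde is centered, ~ 0.0
```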

In this section, we consider the SDDEs with pure jumps:

$$\begin{aligned} dx(t)=f\bigl(t,x(t),x\bigl(\delta(t)\bigr)\bigr)\,dt+\int _{Z}h\bigl(t,x(t),x\bigl(\delta(t)\bigr),v\bigr) \tilde{N}(ds,dv), \end{aligned}$$
(19)

where \(f:[0,T]\times R^{n}\times R^{n} \to R^{n} \) and \(h:[0,T]\times R^{n}\times R^{n} \times Z \to R^{n} \) are both Borel-measurable functions. The initial condition \(x_{0}\) is defined by

$$x_{0}=\xi=\bigl\{ \xi(t):-\tau\le t\le0\bigr\} \in\mathcal{L}_{\mathcal {F}_{0}}^{p} \bigl([-\tau,0];R^{n}\bigr). $$

To guarantee the existence and uniqueness of the solution, we introduce the following conditions on the jump term.

(H3.1) For all \(x_{1},y_{1},x_{2},y_{2}\in R^{n}\) and \(v\in Z\), there exist two positive constants \(k_{2}\) and \(k_{3}\) such that

$$\begin{aligned}& \int_{Z}\bigl|h(t,x_{1},y_{1},v)-h(t,x_{2},y_{2},v)\bigr|^{2} \pi(dv)\le k_{2}\bigl(|x_{1}-x_{2}|^{2}+|y_{1}-y_{2}|^{2} \bigr), \end{aligned}$$
(20)
$$\begin{aligned}& \int_{Z}\bigl|h(t,x,y,v)\bigr|^{2}\pi(dv)\le k_{3}\bigl(1+|x|^{2}+|y|^{2}\bigr). \end{aligned}$$
(21)

(H3.2) For all \(x_{1},x_{2},y_{1},y_{2}\in R^{n}\), \(v\in Z\), and \(p>2\), there exists a constant \(k_{4}>0\) such that

$$\begin{aligned} \bigl|h(t,x_{1},y_{1},v)-h(t,x_{2},y_{2},v)\bigr|^{2} \le k_{4}\bigl(|x_{1}-x_{2}|^{2}+|y_{1}-y_{2}|^{2} \bigr)|v|^{2}, \end{aligned}$$
(22)

with \(\int_{Z}|v|^{p}\pi(dv)<\infty\).

Theorem 3.1

Under conditions (H2.1), (H3.1), and (H3.2), (19) has a unique solution in \(L^{p}\), \(p\ge2\). Moreover, we have

$$E\sup_{0\le t\le T}\bigl|x(t)\bigr|^{p}< C. $$

Proof

Similar to the proof of [18], we find that (19) has a unique solution in \(L^{p}\). □

Let us consider the standard form of SDDEs with pure jumps (19),

$$\begin{aligned} x_{\varepsilon}(t) =&\xi(t)+\varepsilon\int_{0}^{t}f \bigl(s,x_{\varepsilon }(s),x_{\varepsilon}\bigl(\delta(s)\bigr)\bigr)\,ds \\ &{}+\sqrt{\varepsilon}\int_{0}^{t}\int _{Z}h\bigl(s,x_{\varepsilon }(s),x_{\varepsilon}\bigl( \delta(s)\bigr),v\bigr)\tilde{N}(ds,dv), \end{aligned}$$
(23)

where the coefficients f, h satisfy the same conditions as in (H2.1), (H3.1), and (H3.2), and \(\varepsilon\in(0,\varepsilon_{0}]\) is a small positive parameter, with \(\varepsilon_{0}\) a fixed number.

Let \(\bar{f}(x,y): R^{n}\times R^{n} \to R^{n} \) and \(\bar{h}(x,y,v):R^{n}\times R^{n}\times Z \to R^{n}\) be measurable functions, satisfying conditions (H2.1), (H3.1), and (H3.2). We also assume that the following inequalities are satisfied.

(H3.3) For any \(x,y\in R^{n}\) and \(v\in Z\), there exists a positive bounded function \(\varphi_{3}(T_{1})\), such that

$$\begin{aligned} \frac{1}{T_{1}}\int_{0}^{T_{1}}\int _{Z}\bigl|h(t,x,y,v)-\bar{h}(x,y,v)\bigr|^{p}\pi(dv)\,dt \le \varphi_{3}(T_{1}) \bigl(1+|x|^{p}+|y|^{p} \bigr), \end{aligned}$$

where \(\lim_{T_{1}\to\infty}\varphi_{3}(T_{1})=0\).

Then the averaging form of (19) is given by

$$\begin{aligned} y_{\varepsilon}(t) =&\xi(t)+\varepsilon\int_{0}^{t} \bar{f}\bigl(y_{\varepsilon }(s),y_{\varepsilon}\bigl(\delta(s)\bigr)\bigr)\,ds \\ &{} +\sqrt{\varepsilon}\int_{0}^{t}\int _{Z}\bar{h}\bigl(y_{\varepsilon }(s),y_{\varepsilon}\bigl( \delta(s)\bigr),v\bigr)\tilde{N}(ds,dv). \end{aligned}$$
(24)

Obviously, under conditions (H2.1), (H3.1), and (H3.2), the standard SDDEs with pure jumps (23) and the averaged SDDEs with pure jumps (24) each have a unique solution on \([0,T]\).
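As in Section 2, the standard and averaged jump equations can be compared along one simulated path. In the sketch below, Z is collapsed to a single atom with \(\pi(\{1\})=\lambda\), so \(\tilde{N}\) reduces to a compensated Poisson process; all coefficients are hypothetical illustrations of ours, chosen in the spirit of (H2.1) and (H3.1)-(H3.3):

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative pure-jump model: at the single atom v = 1,
# h(t, x, y, 1) = 0.2 + e^{-t} sin(x), whose long-time average is h_bar = 0.2.
tau, T, dt, eps, lam = 0.5, 2.0, 1e-3, 0.01, 3.0
f     = lambda t, x, y: -x + np.exp(-t) * np.sin(y)
h     = lambda t, x, y: 0.2 + np.exp(-t) * np.sin(x)
f_bar = lambda x, y: -x
h_bar = lambda x, y: 0.2

n, lag, xi = int(T / dt), int(tau / dt), 1.0
xs = np.full(n + 1, xi)                # standard solution x_eps of (23)
ys = np.full(n + 1, xi)                # averaged solution y_eps of (24)
dN = rng.poisson(lam * dt, n)          # one shared Poisson path, rate lam

for k in range(n):
    t  = k * dt
    xd = xs[k - lag] if k >= lag else xi
    yd = ys[k - lag] if k >= lag else xi
    # compensated jump increment: dN_k minus the compensator lam * dt
    xs[k+1] = xs[k] + eps*f(t, xs[k], xd)*dt + np.sqrt(eps)*h(t, xs[k], xd)*(dN[k] - lam*dt)
    ys[k+1] = ys[k] + eps*f_bar(ys[k], yd)*dt + np.sqrt(eps)*h_bar(ys[k], yd)*(dN[k] - lam*dt)

err = np.max(np.abs(xs - ys))
print(err)  # small for small eps
```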

Now, we present and prove our main results.

Theorem 3.2

Let the conditions (H2.1) and (H3.1)-(H3.3) hold. For a given arbitrarily small number \(\delta_{3}>0\) and \(p\ge2\), there exist \(L>0\), \(\varepsilon_{3}\in(0,\varepsilon_{0}]\), and \(\beta\in(0,1)\) such that

$$\begin{aligned} E\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p}\le \delta_{3}, \quad\forall t\in\bigl[0,L\varepsilon^{-\beta}\bigr], \end{aligned}$$
(25)

for all \(\varepsilon\in(0,\varepsilon_{3}]\).

Proof

For simplicity, denote the difference \(e_{\varepsilon}(t)=x_{\varepsilon}(t)-y_{\varepsilon}(t)\). From (23) and (24), we have

$$\begin{aligned} e_{\varepsilon}(t) =&\varepsilon\int_{0}^{t} \bigl[f\bigl(s,x_{\varepsilon }(s),x_{\varepsilon}\bigl(\delta(s)\bigr)\bigr)- \bar{f}\bigl(y_{\varepsilon }(s),y_{\varepsilon}\bigl(\delta(s)\bigr)\bigr) \bigr]\,ds \\ &{}+\sqrt{\varepsilon}\int_{0}^{t}\int _{Z}\bigl[h\bigl(s,x_{\varepsilon }(s),x_{\varepsilon} \bigl(\delta(s)\bigr),v\bigr)-\bar{h}\bigl(y_{\varepsilon}(s),y_{\varepsilon} \bigl(\delta (s)\bigr),v\bigr)\bigr]\tilde{N}(ds,dv). \end{aligned}$$
(26)

By the Itô formula (see [17, 19]), we obtain

$$\begin{aligned} &\bigl|e_{\varepsilon}(t)\bigr|^{p} \\ &\quad=p\varepsilon\int_{0}^{t}\bigl|e_{\varepsilon}(s)\bigr|^{p-2} e_{\varepsilon}(s)^{\top}\bigl[f\bigl(s,x_{\varepsilon}(s),x_{\varepsilon} \bigl(\delta (s)\bigr)\bigr)-\bar{f}\bigl(y_{\varepsilon}(s),y_{\varepsilon} \bigl(\delta(s)\bigr)\bigr)\bigr]\,ds \\ &\qquad {}+\int_{0}^{t}\int_{Z}\bigl\{ \bigl|e_{\varepsilon}(s)+\sqrt{\varepsilon }\bigl[h\bigl(s,x_{\varepsilon}(s),x_{\varepsilon} \bigl(\delta(s)\bigr),v\bigr)-\bar{h}\bigl(y_{\varepsilon}(s),y_{\varepsilon} \bigl(\delta (s)\bigr),v\bigr)\bigr]\bigr|^{p} \\ &\qquad{} -\bigl|e_{\varepsilon}(s)\bigr|^{p}-p\sqrt{\varepsilon}\bigl|e_{\varepsilon}(s)\bigr|^{p-2} e_{\varepsilon}(s)^{\top}\bigl[h\bigl(s,x_{\varepsilon}(s),x_{\varepsilon} \bigl(\delta(s)\bigr),v\bigr) \\ &\qquad{}-\bar{h}\bigl(y_{\varepsilon}(s),y_{\varepsilon}\bigl(\delta(s)\bigr),v \bigr)\bigr]\bigr\} \pi (dv)\,ds \\ &\qquad{}+\int_{0}^{t}\int_{Z} \bigl\{ \bigl|e_{\varepsilon}(s)+\sqrt{\varepsilon } \bigl[h\bigl(s,x_{\varepsilon}(s),x_{\varepsilon} \bigl(\delta(s)\bigr),v\bigr)-\bar{h}\bigl(y_{\varepsilon}(s),y_{\varepsilon} \bigl(\delta (s)\bigr),v\bigr)\bigr]\bigr|^{p} \\ &\qquad{} -\bigl|e_{\varepsilon}(s)\bigr|^{p}\bigr\} \tilde{N}(ds,dv). \end{aligned}$$
(27)

Using the basic inequality \(2ab\le a^{2}+b^{2}\) and taking expectations on both sides of (27), it follows that

$$\begin{aligned} & E\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p} \\ &\quad\le p\varepsilon E\int_{0}^{u}\bigl|e_{\varepsilon}(t)\bigr|^{p-1} \bigl|f\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)-\bar {f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\bigr|\,dt \\ &\qquad{}+p\sqrt{\varepsilon}E\sup_{0\le t\le u}\int_{0}^{t} \int_{Z}\bigl|e_{\varepsilon}(s)\bigr|^{p-2} e_{\varepsilon}(s)^{\top}\bigl[h\bigl(s,x_{\varepsilon}(s),x_{\varepsilon} \bigl(\delta(s)\bigr),v\bigr) \\ &\qquad{}-\bar{h}\bigl(y_{\varepsilon}(s),y_{\varepsilon}\bigl(\delta(s)\bigr),v \bigr)\bigr]\tilde {N}(ds,dv) \\ &\qquad{}+E\sup_{0\le t\le u}\int_{0}^{t}\int _{Z}\bigl\{ \bigl|e_{\varepsilon}(s)+\sqrt {\varepsilon}\bigl[h \bigl(s,x_{\varepsilon}(s),x_{\varepsilon} \bigl(\delta(s)\bigr),v\bigr)-\bar{h} \bigl(y_{\varepsilon}(s),y_{\varepsilon}\bigl(\delta (s)\bigr),v\bigr) \bigr]\bigr|^{p} \\ &\qquad{} -\bigl|e_{\varepsilon}(s)\bigr|^{p}-p\sqrt{\varepsilon}\bigl|e_{\varepsilon}(s)\bigr|^{p-2} e_{\varepsilon}(s)^{\top}\bigl[h\bigl(s,x_{\varepsilon}(s),x_{\varepsilon} \bigl(\delta(s)\bigr),v\bigr) \\ &\qquad{}-\bar{h}\bigl(y_{\varepsilon}(s),y_{\varepsilon}\bigl(\delta(s)\bigr),v \bigr)\bigr]\bigr\} N(ds,dv) \\ &\quad=p\varepsilon I_{1}+J_{2}+J_{3}. \end{aligned}$$
(28)

By the Burkholder-Davis-Gundy inequality, there exists a positive constant \(c_{p}\) such that

$$\begin{aligned} J_{2} \le& c_{p} E\biggl[\int_{0}^{u} \int_{Z}p^{2}\varepsilon\bigl|e_{\varepsilon}(t)\bigr|^{2p-2}\bigl|h\bigl(t,x_{\varepsilon}(t),x_{\varepsilon} \bigl(\delta(t) \bigr),v\bigr)\\ &{}-\bar{h}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr),v \bigr)\bigr|^{2}\pi (dv)\,dt\biggr]^{\frac{1}{2}}\\ \le& c_{p} E\biggl[\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p} \int_{0}^{u}\int_{Z}p^{2} \varepsilon \bigl|e_{\varepsilon}(t)\bigr|^{p-2}\bigl|h\bigl(t,x_{\varepsilon}(t),x_{\varepsilon} \bigl(\delta(t)\bigr),v\bigr)\\ &{}-\bar{h}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr),v \bigr)\bigr|^{2}\pi (dv)\,dt\biggr]^{\frac{1}{2}}. \end{aligned}$$

Next, the Young inequality implies that

$$\begin{aligned} J_{2} \le&c_{p}\Bigl[\epsilon_{4}E\sup _{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p}\Bigr]^{\frac {1}{2}} \biggl[{\frac{1}{\epsilon_{4}}}E\int_{0}^{u}\int _{Z}p^{2}\varepsilon \bigl|e_{\varepsilon}(t)\bigr|^{p-2}\bigl|h \bigl(t,x_{\varepsilon}(t),x_{\varepsilon} \bigl(\delta(t)\bigr),v\bigr)\\ &{}-\bar{h}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr),v \bigr)\bigr|^{2}\pi (dv)\,dt\biggr]^{\frac{1}{2}}\\ \le&\frac{c_{p}\epsilon_{4}}{2}E\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p}+\frac{c_{p}p^{2}\varepsilon}{2\epsilon_{4}}E\int_{0}^{u} \int_{Z}\bigl|e_{\varepsilon}(t)\bigr|^{p-2}\bigl|h \bigl(t,x_{\varepsilon}(t),x_{\varepsilon} \bigl(\delta(t)\bigr),v\bigr)\\ &{}-\bar{h}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr),v \bigr)\bigr|^{2}\pi(dv)\,dt, \end{aligned}$$

where \(\epsilon_{4}>0\). By setting \(\epsilon_{4}=c_{p}^{-1}\), we get

$$\begin{aligned} J_{2} \le&\frac{1}{2}E\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p}+ \frac {c_{p}^{2}p^{2}\varepsilon}{2}E\int_{0}^{u}\int _{Z}\bigl|e_{\varepsilon}(t)\bigr|^{p-2}\bigl|h \bigl(t,x_{\varepsilon}(t),x_{\varepsilon} \bigl(\delta(t)\bigr),v\bigr)\\ &{}-\bar{h}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr),v \bigr)\bigr|^{2}\pi(dv)\,dt. \end{aligned}$$

Similar to the estimate of \(I_{2}\), we have

$$\begin{aligned} J_{2} \le&\frac{1}{2}E\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p}+ c_{3}\varepsilon\biggl[\int_{Z} \bigl(1+|v|^{p}\bigr)\pi(dv)\biggr]\int_{0}^{u}E \sup_{0\le\sigma\le t}\bigl|e_{\varepsilon}(\sigma)\bigr|^{p}\,dt \\ &{} +\frac{c_{3}}{2}\varepsilon E\int_{0}^{u} \int_{Z}\bigl|h\bigl(t,y_{\varepsilon}(t),y_{\varepsilon} \bigl( \delta(t)\bigr),v\bigr)-\bar{h}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl( \delta (t)\bigr),v\bigr)\bigr|^{p}\pi(dv)\,dt, \end{aligned}$$

where \(c_{3}=c_{p}^{2}p^{2}(1+\sqrt{2L})^{2}\). Condition (H3.3) implies that

$$\begin{aligned} J_{2} \le&\frac{1}{2}E\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p}+ c_{3}\varepsilon\biggl[\int_{Z}\bigl(1+|v|^{p} \bigr)\pi(dv)\biggr]\int_{0}^{u}E\sup _{0\le\sigma\le t}\bigl|e_{\varepsilon}(\sigma)\bigr|^{p}\,dt \\ &{} +\frac{c_{3}}{2}\varepsilon u\varphi_{3}(u) \Bigl(1+E\sup _{-r\le t\le0}\bigl|y_{\varepsilon}(t)\bigr|^{p}+2E\sup _{0\le t\le u}\bigl|y_{\varepsilon}(t)\bigr|^{p}\Bigr). \end{aligned}$$
(29)

Finally, let us estimate \(J_{3}\). Since \(\tilde{N}(dt,dv)\) is a martingale measure and \(N(dt,dv)=\tilde{N}(dt,dv)+\pi(dv)\,dt\), we have

$$\begin{aligned} J_{3} =&E\int_{0}^{u}\int _{Z}\bigl\{ \bigl|e_{\varepsilon}(t)+\sqrt{\varepsilon }\bigl[h \bigl(t,x_{\varepsilon}(t),x_{\varepsilon} \bigl(\delta(t)\bigr),v\bigr)-\bar{h} \bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta (t)\bigr),v\bigr) \bigr]\bigr|^{p} \\ &{} -\bigl|e_{\varepsilon}(t)\bigr|^{p}-p\sqrt{\varepsilon}\bigl|e_{\varepsilon}(t)\bigr|^{p-2} e_{\varepsilon}(t)^{\top}\bigl[h\bigl(t,x_{\varepsilon}(t),x_{\varepsilon} \bigl(\delta(t)\bigr),v\bigr) \\ &{}-\bar{h}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr),v \bigr)\bigr]\bigr\} N(dt,dv). \end{aligned}$$

Since the stochastic integral with respect to \(\tilde{N}(dt,dv)\) has zero expectation, \(J_{3}\) has the form

$$\begin{aligned} E\int_{0}^{u}\int_{Z}\bigl\{ f\bigl(e_{\varepsilon}(t)+\tilde{h}_{\varepsilon }^{v}(t)\bigr)-f \bigl(e_{\varepsilon}(t)\bigr)-f'\bigl(e_{\varepsilon}(t)\bigr) \tilde{h}_{\varepsilon }^{v}(t)\bigr\} \pi(dv)\,dt, \end{aligned}$$

where \(f(x)=|x|^{p}\) and \(\tilde{h}_{\varepsilon}^{v}(t)=\sqrt{\varepsilon }[h(t,x_{\varepsilon}(t),x_{\varepsilon} (\delta(t)),v)-\bar{h}(y_{\varepsilon}(t),y_{\varepsilon}(\delta(t)),v)]\). By the Taylor formula, there exists a positive constant \(M(p)\) such that

$$\begin{aligned} & f\bigl(e_{\varepsilon}(t)+\tilde{h}_{\varepsilon}^{v}(t)\bigr)-f \bigl(e_{\varepsilon}(t)\bigr)-f'\bigl(e_{\varepsilon}(t)\bigr) \tilde{h}_{\varepsilon}^{v}(t)\\ &\quad=\bigl|e_{\varepsilon}(t)+\tilde{h}_{\varepsilon}^{v}(t)\bigr|^{p}-\bigl|e_{\varepsilon}(t)\bigr|^{p}-p\bigl|e_{\varepsilon}(t)\bigr|^{p-2}e_{\varepsilon}(t)^{\top} \tilde {h}_{\varepsilon}^{v}(t)\\ &\quad\le M(p)\bigl|e_{\varepsilon}(t)+\tilde{h}_{\varepsilon}^{v}(t)\bigr|^{p-2}\bigl| \tilde {h}_{\varepsilon}^{v}(t)\bigr|^{2}. \end{aligned}$$

Applying the basic inequality \(|a+b|^{p-2}\le 2^{p-3}(|a|^{p-2}+|b|^{p-2})\), we have

$$\begin{aligned} J_{3} \le&M(p)2^{p-3}E\int_{0}^{u} \int_{Z}\bigl[\bigl|e_{\varepsilon}(t) \bigr|^{p-2}\bigl| \tilde{h}_{\varepsilon}^{v}(t)\bigr|^{2}+\bigl|\tilde{h}_{\varepsilon }^{v}(t)\bigr|^{p} \bigr]\pi(dv)\,dt. \end{aligned}$$

Similar to \(J_{2}\), we derive that

$$\begin{aligned} J_{3} \le&c_{4}\varepsilon\biggl[\int _{Z}\bigl(1+|v|^{p}\bigr)\pi(dv)\biggr]\int _{0}^{u}E\sup_{0\le \sigma\le t}\bigl|e_{\varepsilon}( \sigma)\bigr|^{p}\,dt \\ &{} +c_{4}\varepsilon u\varphi_{3}(u) \Bigl(1+E\sup _{-r\le t\le0}\bigl|y_{\varepsilon}(t)\bigr|^{p}+2E\sup _{0\le t\le u}\bigl|y_{\varepsilon}(t)\bigr|^{p}\Bigr), \end{aligned}$$
(30)

where \(c_{4}=2M(p)2^{p-3}(1+\sqrt{2L})^{p}\). Combining the estimate of \(I_{1}\) with (29) and (30), we have

$$\begin{aligned} E\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p} \le&c_{5}\varepsilon\int_{0}^{u}E\sup _{0\le\sigma\le t}\bigl|e_{\varepsilon}(\sigma)\bigr|^{p}\,dt\\ &{}+c_{6}\varepsilon\Bigl(1+E\sup_{-r\le t\le0}\bigl|y_{\varepsilon}(t)\bigr|^{p}+2E \sup_{0\le t\le u}\bigl|y_{\varepsilon}(t)\bigr|^{p}\Bigr), \end{aligned}$$

where

$$\begin{aligned}& c_{5}=4p(1+\sqrt{2k_{1}})+2(c_{3}+c_{4}) \biggl[\int_{Z}\bigl(1+|v|^{p}\bigr)\pi(dv)\biggr], \\& c_{6}=2p(1+\sqrt{2k_{1}})u\varphi_{1}(u)+(c_{3}+2c_{4})u \varphi_{3}(u). \end{aligned}$$

By Theorem 3.1, we have the following fact: for each \(t\ge0\), if \(E\|\xi\|^{p}<\infty\), then \(E|y_{\varepsilon }(t)|^{p}<\infty\). Hence, condition (H3.3) implies that

$$\begin{aligned} E\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p} \le& c_{6}\varepsilon\bigl(1+E\|\xi\|^{p}+2C\bigr) +c_{5}\varepsilon \int_{0}^{u} E\sup _{0\le\sigma\le t}\bigl|e_{\varepsilon}(\sigma)\bigr|^{p}\,dt. \end{aligned}$$

By the Gronwall inequality, we obtain

$$\begin{aligned} E\sup_{0\le t\le u}\bigl|e_{\varepsilon}(t)\bigr|^{p} \le& c_{6}\varepsilon\bigl(1+E\|\xi\|^{p}+2C\bigr)e^{c_{5}\varepsilon u}. \end{aligned}$$

Consequently, given any number \(\delta_{3}>0\), we can choose \(\varepsilon_{3}\in(0,\varepsilon_{0}]\) such that for each \(\varepsilon\in(0,\varepsilon_{3}]\),

$$\begin{aligned} E\sup_{t\in[0,L\varepsilon^{-\beta}]}\bigl|x_{\varepsilon}(t)-y_{\varepsilon }(t)\bigr|^{p} \le\delta_{3}. \end{aligned}$$

The proof is completed. □

Similarly, we have the following result on the convergence in probability of \(x_{\varepsilon}(t)\) to \(y_{\varepsilon}(t)\).

Corollary 3.1

Let the conditions (H2.1) and (H3.1)-(H3.3) hold. For any arbitrarily small number \(\delta_{4}>0\), there exists \(\varepsilon_{4}\in(0,\varepsilon_{0}]\) such that for all \(\varepsilon\in(0,\varepsilon_{4}]\), we have

$$\begin{aligned} \lim_{\varepsilon\to0}P\Bigl(\sup_{0< t\le L\varepsilon^{-\beta }}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|> \delta_{4}\Bigr)=0, \end{aligned}$$
(31)

where L and β are defined in Theorem 3.2.

Remark 3.1

When the time delay satisfies \(\delta(t)=t\), (19) reduces to SDEs with jumps, which have been studied in [11–14]. In particular, if \(p=2\) in (25), then we obtain mean-square convergence between the solution of the standard equation (23) and that of the averaged equation (24). Thus the corresponding results in [11–14] are generalized and improved.

Remark 3.2

In [10], Tan and Lei studied the averaging method for SDDEs under non-Lipschitz conditions. Note that the Lipschitz condition is a special case of the non-Lipschitz conditions studied by many scholars [20–23]. By adapting the proof of Theorem 3.2, one can show that the standard solution of (23) converges to the averaged solution of (24) in the pth moment under non-Lipschitz conditions as well; in other words, a more general averaging principle for SDDEs with jumps than Theorem 3.2 holds.

4 Examples

In this section, we construct two examples to demonstrate the averaging principle results.

Example 4.1

Let \(\tilde{N}(dt,dv)\) be a compensated Poisson random measure whose intensity measure is given by \(\pi(dv)\,dt=\lambda f(v)\,dv\,dt\), where \(\lambda>0\) is a constant and

$$f(v)=\frac{1}{\sqrt{2\pi}v}e^{-\frac{(\ln v)^{2}}{2}}, \quad 0< v< \infty, $$

is the density function of a lognormal random variable. Consider the following SDEs with pure jumps:

$$\begin{aligned} dx_{\varepsilon}(t) =&\varepsilon f\bigl(t,x_{\varepsilon}(t) \bigr)\,dt+\sqrt {\varepsilon} \int_{0}^{\infty}\phi(v)h \bigl(t,x_{\varepsilon}(t)\bigr)\tilde{N}(dt,dv), \end{aligned}$$
(32)

with initial data \(x_{\varepsilon}(0)=x_{0}\), where \(\delta(t)=t\). Here \(f(t,x_{\varepsilon}(t))=x_{\varepsilon}(t)\sin t\) and \(h(t,x_{\varepsilon}(t))=-x_{\varepsilon}(t)\log x_{\varepsilon}(t)\). Let

$$\bar{f}\bigl(y_{\varepsilon}(t)\bigr)=\frac{1}{\pi}\int _{0}^{\pi }f\bigl(t,y_{\varepsilon}(t)\bigr)\,dt= \frac{2}{\pi}y_{\varepsilon}(t) $$

and

$$\bar{h}\bigl(y_{\varepsilon}(t),v\bigr)= \frac{1}{\pi}\int _{0}^{\pi}\phi (v)h\bigl(t,y_{\varepsilon}(t) \bigr)\,dt=-\phi(v)y_{\varepsilon}(t)\log y_{\varepsilon}(t). $$

Hence, we have the corresponding averaged SDEs with pure jumps

$$\begin{aligned} dy_{\varepsilon}(t) =&\varepsilon\frac{2}{\pi}y_{\varepsilon }(t)\,dt- \sqrt{\varepsilon} \int_{0}^{\infty} \phi(v)y_{\varepsilon}(t)\log y_{\varepsilon}(t)\tilde{N}(dt,dv). \end{aligned}$$
(33)
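The averaged drift coefficient above can be checked numerically (this check is ours, not part of the paper): \(\frac{1}{\pi}\int_{0}^{\pi}\sin t\,dt=\frac{2}{\pi}\). A minimal Python sketch using the trapezoidal rule:

```python
import math

def time_average(fn, a=0.0, b=math.pi, n=100001):
    # (1/(b-a)) * integral of fn over [a, b] via the trapezoidal rule.
    h = (b - a) / (n - 1)
    s = 0.5 * (fn(a) + fn(b)) + sum(fn(a + i * h) for i in range(1, n - 1))
    return s * h / (b - a)

print(time_average(math.sin))  # ≈ 2/pi ≈ 0.63662
```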

Now, we impose the non-Lipschitz condition on (32).

(H4.1) For all \(x,y\in R^{n}\), \(v\in Z\), and \(p\ge2\),

$$\begin{aligned} \bigl|f(t,x)-f(t,y)\bigr|^{p}\vee\int_{Z}\bigl|h(t,x,v)-h(t,y,v)\bigr|^{p} \pi(dv) \le\rho\bigl(|x-y|^{p}\bigr), \end{aligned}$$
(34)

where \(\rho(\cdot)\) is a concave nondecreasing function from \(R_{+}\) to \(R_{+}\) such that \(\rho(0)=0\), \(\rho(u)>0\) for \(u>0\) and \(\int_{0}^{1} \frac{du}{\rho (u)}=\infty\).

Let us return to (32). The function \(u\mapsto-u\log u\) is positive, concave, and nondecreasing for small \(u>0\), with value 0 at \(u=0\). Moreover, by a direct computation, we have

$$\begin{aligned} \int_{0}^{1}\frac{1}{h(t,u)}\,du=-\int _{0}^{1}\frac{1}{u\log u}\,du=-\lim _{\varepsilon \to0}\int_{\varepsilon}^{1} \frac{1}{u\log u}\,du=-\lim_{\varepsilon\to0}\int_{\log\varepsilon}^{0} \frac{1}{v}\,dv=\infty. \end{aligned}$$
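This divergence can also be confirmed numerically (an illustration of ours, not part of the argument): substituting \(w=\log u\) turns \(\int_{\epsilon}^{1/2}\frac{du}{-u\log u}\) into \(\int_{\log\epsilon}^{-\log 2}\frac{dw}{-w}=\log\bigl(\log(1/\epsilon)/\log 2\bigr)\), which grows without bound as \(\epsilon\to0\). A Python sketch:

```python
import math

def osgood_numeric(eps, n=200000):
    # Trapezoidal value of \int_eps^{1/2} du / (-u log u); the substitution
    # w = log u turns it into \int_{log eps}^{-log 2} dw / (-w).
    a, b = math.log(eps), -math.log(2.0)
    h = (b - a) / n
    s = 0.5 * (1.0 / (-a) + 1.0 / (-b))
    s += sum(1.0 / (-(a + i * h)) for i in range(1, n))
    return s * h

vals = [osgood_numeric(e) for e in (1e-2, 1e-6, 1e-18)]
print(vals)  # increases without bound as eps -> 0
```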

Hence, the coefficients of (32) and (33) satisfy condition (H4.1). Similar to the proofs in [20–23], we find that (32) and (33) each have a unique solution in \(L^{p}\), \(p\ge2\).
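A further numerical sanity check (ours, not in the paper): the lognormal mark density has finite moments of every order, \(\int_{0}^{\infty}v^{p}f(v)\,dv=e^{p^{2}/2}\), so \(\int_{Z}(1+|v|^{p})\pi(dv)<\infty\) for every \(p\ge2\). A sketch, assuming \(\lambda=1\):

```python
import math

def lognormal_moment(p, lo=-12.0, hi=12.0, n=20001):
    # E[V^p] for V = e^X with X ~ N(0,1): substituting v = e^x turns the
    # moment into \int e^{p x} phi(x) dx, phi the standard normal density.
    # Closed form: e^{p^2/2}.
    h = (hi - lo) / (n - 1)
    s = 0.0
    for i in range(n):
        x = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0  # trapezoidal end weights
        s += w * math.exp(p * x - 0.5 * x * x)
    return s * h / math.sqrt(2.0 * math.pi)

print(lognormal_moment(0.0))  # total mass ≈ 1
print(lognormal_moment(4.0))  # ≈ e^8
```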

Similar to the proof of Theorem 3.2, we find that the standard solution of (32) converges to the averaged solution of (33) in the sense of the pth moment.

Corollary 4.1

Let the conditions (H2.2), (H3.3), and (H4.1) hold and \(\delta(t)=t\). For any arbitrarily small number \(\delta_{5}>0\) and any \(p\ge2\), there exist \(L>0\), \(\varepsilon_{5}\in(0,\varepsilon_{0}]\), and \(\beta\in(0,1)\) such that

$$\begin{aligned} E\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p}\le \delta_{5}, \quad \forall t\in\bigl[0,L\varepsilon^{-\beta}\bigr], \end{aligned}$$
(35)

for all \(\varepsilon\in(0,\varepsilon_{5}]\).

The proof of Corollary 4.1 is given in the Appendix.

Remark 4.1

Similarly to Corollary 4.1, we can show convergence in probability between the standard solution of (32) and the averaged solution of (33).

Example 4.2

Let \(w_{t}\) be a one-dimensional Brownian motion and let the compensated Poisson random measure \(\tilde{N}(dt,dv)\) be as in Example 4.1; \(\tilde{N}(dt,dv)\) and \(w_{t}\) are assumed to be independent. Consider the following linear SDDEs with jumps:

$$\begin{aligned} dx_{\varepsilon}(t) =&\varepsilon\bigl[ax_{\varepsilon}(t)+bx_{\varepsilon }(t- \tau)\bigr]\,dt+2\sqrt{\varepsilon}\sin^{2}tx_{\varepsilon}(t-\tau )\,dw_{t} \\ &{}+c\sqrt{\varepsilon} \int_{0}^{\infty}vx_{\varepsilon}(t) \tilde{N}(dt,dv), \end{aligned}$$
(36)

with initial data \(x_{\varepsilon}(t)=\xi(t)\) for \(t\in[-\tau,0]\), where \(\tau>0\) is a fixed delay, \(\delta(t)=t-\tau\), \(T=N\tau\) with \(N\in Z^{+}\), and \(a,b,c\in R\). Here

$$f\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}(t-\tau)\bigr)=ax_{\varepsilon }(t)+bx_{\varepsilon}(t- \tau),\qquad g\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}(t-\tau)\bigr)=2 \sin^{2}t\, x_{\varepsilon}(t-\tau) $$

and

$$h\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}(t-\tau),v\bigr)=cvx_{\varepsilon}(t). $$

Let

$$\begin{aligned}& \bar{f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}(t-\tau)\bigr)= \frac{1}{\pi}\int_{0}^{\pi}f \bigl(t,y_{\varepsilon}(t),y_{\varepsilon}(t-\tau)\bigr)\,dt =ay_{\varepsilon}(t)+by_{\varepsilon}(t- \tau), \\& \bar{g}\bigl(y_{\varepsilon}(t),y_{\varepsilon}(t-\tau)\bigr)= \frac{1}{\pi}\int_{0}^{\pi}g \bigl(t,y_{\varepsilon}(t),y_{\varepsilon}(t-\tau)\bigr)\,dt =y_{\varepsilon}(t- \tau), \\& \bar{h}\bigl(y_{\varepsilon}(t),y_{\varepsilon}(t-\tau),v\bigr)= \frac{1}{\pi}\int_{0}^{\pi}h \bigl(t,y_{\varepsilon}(t),y_{\varepsilon}(t-\tau),v\bigr)\,dt =cvy_{\varepsilon}(t). \end{aligned}$$

Hence, we have the corresponding averaged SDDEs with jumps

$$\begin{aligned} dy_{\varepsilon}(t) =&\varepsilon \bigl[ay_{\varepsilon}(t)+by_{\varepsilon }(t- \tau)\bigr]\,dt+\sqrt{\varepsilon}y_{\varepsilon}(t-\tau)\,dw_{t} \\ &{}+c\sqrt{\varepsilon} \int_{0}^{\infty}vy_{\varepsilon}(t) \tilde{N}(dt,dv). \end{aligned}$$
(37)
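The averaged diffusion coefficient can be checked the same way (a numerical aside of ours): \(\frac{1}{\pi}\int_{0}^{\pi}2\sin^{2}t\,dt=1\), which is why \(\bar{g}\) carries coefficient 1. A minimal Python sketch:

```python
import math

def time_average(fn, a=0.0, b=math.pi, n=100001):
    # (1/(b-a)) * integral of fn over [a, b] via the trapezoidal rule.
    h = (b - a) / (n - 1)
    s = 0.5 * (fn(a) + fn(b)) + sum(fn(a + i * h) for i in range(1, n - 1))
    return s * h / (b - a)

print(time_average(lambda t: 2.0 * math.sin(t) ** 2))  # ≈ 1.0
```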

When \(t\in[0,\tau]\), the explicit solution of (37) is given by

$$\begin{aligned} y_{\varepsilon}(t) =&\phi_{0}(t)\biggl\{ \xi(0)+ \varepsilon b\int_{0}^{t}\phi _{0}^{-1}(s) \xi(s-\tau)\,ds +\sqrt{\varepsilon}\int_{0}^{t} \phi_{0}^{-1}(s)\xi(s-\tau)\,dw_{s}\biggr\} , \end{aligned}$$
(38)

where

$$\begin{aligned} \phi_{0}(t) =&\exp\biggl\{ \varepsilon at+\int_{0}^{t} \int_{0}^{\infty}\bigl[\ln(1+c\sqrt{\varepsilon }v)-c \sqrt{\varepsilon}v\bigr]\pi(dv)\,ds\\ &{}+ \int_{0}^{t}\int_{0}^{\infty}\ln(1+c \sqrt{\varepsilon}v)\tilde{N}(ds,dv)\biggr\} . \end{aligned}$$

When \(t\in[\tau,2\tau]\), the explicit solution of (37) is given by

$$\begin{aligned} y_{\varepsilon}(t) =&\phi_{\tau}(t)\biggl\{ y_{\varepsilon}(\tau)+\varepsilon b\int_{\tau}^{t} \phi_{\tau}^{-1}(s)y_{\varepsilon}(s-\tau)\,ds +\sqrt{\varepsilon}\int_{\tau}^{t} \phi_{\tau}^{-1}(s)y_{\varepsilon }(s-\tau)\,dw_{s} \biggr\} , \end{aligned}$$
(39)

where

$$\begin{aligned} \phi_{\tau}(t) =&\exp\biggl\{ \varepsilon a(t-\tau)+\int_{\tau}^{t} \int_{0}^{\infty }\bigl[\ln(1+c\sqrt{\varepsilon}v)-c \sqrt{\varepsilon}v\bigr]\pi(dv)\,ds \\ &{}+\int_{\tau}^{t}\int_{0}^{\infty}\ln(1+c \sqrt{\varepsilon}v)\tilde {N}(ds,dv)\biggr\} . \end{aligned}$$

Repeating this procedure over the intervals \([2\tau,3\tau]\), \([3\tau,4\tau]\), etc., we obtain the explicit solution \(y_{\varepsilon}(t)\) on the entire interval \([0,T]\). On the other hand, it is easy to verify that the conditions of Theorems 2.2 and 3.2 are satisfied, so the solution of the standard SDDEs with jumps (36) converges to that of the averaged SDDEs with jumps (37) in the sense of the pth moment and in probability.
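To see the convergence numerically, one can run an Euler-Maruyama discretization of (36) and (37) driven by the same noise and compare the paths. This is an informal sketch of ours, not from the paper: the parameter values, the unit initial segment, lognormal jump marks with \(\lambda=1\), and the at-most-one-jump-per-step approximation are all our own assumptions.

```python
import math, random

def simulate_em(a, b, c, eps, tau, T, dt, lam=1.0, seed=0):
    # Euler-Maruyama sketch for the standard SDDE (36) and the averaged
    # SDDE (37), driven by the same Brownian increments and the same
    # compensated jumps, so the two paths can be compared directly.
    rng = random.Random(seed)
    n = int(round(T / dt))
    lag = int(round(tau / dt))
    x = [1.0] * (lag + 1)  # constant initial segment xi = 1 on [-tau, 0]
    y = list(x)
    mean_v = math.exp(0.5)  # E[V] for a lognormal(0, 1) jump mark
    for k in range(n):
        t = k * dt
        dw = rng.gauss(0.0, math.sqrt(dt))
        # at most one jump per step (crude thinning approximation)
        v = math.exp(rng.gauss(0.0, 1.0)) if rng.random() < lam * dt else 0.0
        dj = v - lam * mean_v * dt  # compensated jump increment
        xk, xd = x[-1], x[-1 - lag]
        yk, yd = y[-1], y[-1 - lag]
        x.append(xk + eps * (a * xk + b * xd) * dt
                 + 2.0 * math.sqrt(eps) * math.sin(t) ** 2 * xd * dw
                 + c * math.sqrt(eps) * xk * dj)
        y.append(yk + eps * (a * yk + b * yd) * dt
                 + math.sqrt(eps) * yd * dw
                 + c * math.sqrt(eps) * yk * dj)
    return x, y

x, y = simulate_em(a=-1.0, b=0.5, c=0.1, eps=0.01, tau=1.0, T=5.0, dt=0.01)
err = max(abs(p - q) for p, q in zip(x, y))
print(err)  # should be small for small eps
```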

5 Conclusion

In this paper, we study the averaging method for SDDEs and SDDEs with pure jumps. By applying the Itô formula, the Taylor formula, and the BDG inequality, we prove that the solution of the averaged SDDEs converges to that of the standard SDDEs in the sense of the pth moment and also in probability. Finally, two examples are provided to demonstrate the proposed results.

References

  1. Krylov, NM, Bogolyubov, NN: Les propriétés ergodiques des suites des probabilités en chaîne. C. R. Math. Acad. Sci. 204, 1454-1546 (1937)


  2. Gikhman, II: On a theorem of N.N. Bogoliubov. Ukr. Mat. Zh. 4, 215-219 (1952)


  3. Volosov, VM: Averaging in systems of ordinary differential equations. Russ. Math. Surv. 17, 1-126 (1962)


  4. Khasminskii, RZ: On the principle of averaging the Itô stochastic differential equations. Kibernetika 4, 260-279 (1968)


  5. Khasminskii, RZ, Yin, G: On averaging principles: an asymptotic expansion approach. SIAM J. Math. Anal. 35, 1534-1560 (2004)


  6. Golec, J, Ladde, G: Averaging principle and systems of singularly perturbed stochastic differential equations. J. Math. Phys. 31, 1116-1123 (1990)


  7. Veretennikov, AY: On the averaging principle for systems of stochastic differential equations. Math. USSR Sb. 69, 271-284 (1991)


  8. Veretennikov, AY: On large deviations in the averaging principle for SDEs with full dependence. Ann. Probab. 27, 284-296 (1999)


  9. Givon, D, Kevrekidis, IG, Kupferman, R: Strong convergence of projective integration schemes for singular perturbed stochastic differential systems. Commun. Math. Sci. 4, 707-729 (2006)


  10. Tan, L, Lei, D: The averaging method for stochastic differential delay equations under non-Lipschitz conditions. Adv. Differ. Equ. 2013, 38 (2013)


  11. Stoyanov, IM, Bainov, DD: The averaging method for a class of stochastic differential equations. Ukr. Math. J. 26, 186-194 (1974)


  12. Kolomiets, VG, Melnikov, AI: Averaging of stochastic systems of integral-differential equations with Poisson noise. Ukr. Math. J. 43, 242-246 (1991)


  13. Givon, D: Strong convergence rate for two-time-scale jump-diffusion stochastic differential systems. SIAM J. Multiscale Model. Simul. 6, 577-594 (2007)


  14. Xu, Y, Duan, JQ, Xu, W: An averaging principle for stochastic dynamical systems with Lévy noise. Physica D 240, 1395-1401 (2011)


  15. Mao, X: Stochastic Differential Equations and Their Applications. Ellis Horwood, Chichester (1997)


  16. Ikeda, N, Watanable, S: Stochastic Differential Equations and Diffusion Processes. North-Holland, Amsterdam (1989)


  17. Applebaum, D: Lévy Process and Stochastic Calculus. Cambridge University Press, Cambridge (2009)


  18. Yuan, C, Bao, J: On the exponential stability of switching-diffusion processes with jumps. Q. Appl. Math. 2, 311-329 (2013)


  19. Peszat, S, Zabczyk, J: Stochastic Partial Differential Equations with Lévy Noise: An Evolution Equation Approach. Cambridge University Press, Cambridge (2007)


  20. Taniguchi, T: Successive approximations to solutions of stochastic differential equations. J. Differ. Equ. 96, 152-169 (1992)


  21. Mao, X: Adapted solutions of backward stochastic differential equations with non-Lipschitz coefficients. Stoch. Process. Appl. 58, 281-292 (1995)


  22. Vinodkumar, A: Existence, uniqueness and stability results of impulsive stochastic semilinear functional differential equations with infinite delays. J. Nonlinear Sci. Appl. 4, 236-246 (2011)


  23. Negrea, R, Preda, C: Fixed point technique for a class of backward stochastic differential equations. J. Nonlinear Sci. Appl. 6, 41-50 (2013)



Acknowledgements

This paper is completed when the first author visits the Department of Mathematics and Statistics in the University of Strathclyde, whose hospitality is highly appreciated. The authors would like to thank the Royal Society of Edinburgh, the National Natural Science Foundation of China under NSFC grant (No. 11401261), Qing Lan Project of Jiangsu Province (2012), the NSF of Higher Education Institutions of Jiangsu Province (13KJB110005), the grant of Jiangsu Second Normal University (JSNU-ZY-02) and the Jiangsu Government Overseas Study Scholarship for their financial support.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Wei Mao.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the manuscript and typed, read and approved the final manuscript.

Appendix

Appendix

Proof of Theorem 2.1

Let \(x^{0}(t)=\xi(0)\) for \(t\in[{0},T]\) and define the sequence of successive approximations to (1)

$$\begin{aligned} x^{n}(t) =&\xi(0)+\int_{0}^{t}f \bigl(s,x^{n-1}(s),x^{n-1}\bigl(\delta(s)\bigr)\bigr)\,ds +\int_{0}^{t}g\bigl(s,x^{n-1}(s),x^{n-1} \bigl(\delta(s)\bigr)\bigr)\,dw_{s}. \end{aligned}$$
(40)

The proof will be split into the following steps.

Step 1. Let us show that \(\{x^{n}(t)\}_{n\ge 1}\) is bounded. Let \(f_{t}^{n}=f(t,x^{n}(t),x^{n}(\delta(t)))\), \(g_{t}^{n}=g(t,x^{n}(t), x^{n}(\delta(t)))\). From (40), by the inequality \(|a+b+c|^{p}\le3^{p-1}[|a|^{p}+|b|^{p}+|c|^{p}]\), we have

$$\begin{aligned} E\sup_{{0} \le s\le t}\bigl|x^{n}(s)\bigr|^{p} \le 3^{p-1}E\|\xi\|^{p}+3^{p-1}E\biggl|\int _{0}^{t}f_{s}^{n-1}\,ds\biggr|^{p}+3^{p-1}E \sup_{{0} \le s\le t}\biggl|\int_{0}^{s}g_{\sigma}^{n-1}\,dw_{\sigma}\biggr|^{p}. \end{aligned}$$

Using the Hölder inequality and the BDG inequality, we get

$$\begin{aligned} E\sup_{{0} \le s\le t}\bigl|x^{n}(s)\bigr|^{p} \le 3^{p-1}E\|\xi\|^{p}+3^{p-1}t^{p-1}E\int _{0}^{t}\bigl|f_{s}^{n-1}\bigr|^{p}\,ds +3^{p-1}c_{p}t^{\frac{p}{2}-1}E\int_{0}^{t}\bigl|g_{s}^{n-1}\bigr|^{p}\,ds. \end{aligned}$$

By condition (H2.1), we obtain

$$\begin{aligned} E\sup_{0 \le s\le t}\bigl|x^{n}(s)\bigr|^{p} \le& c_{1}+c_{2}\int_{0}^{t}E\sup _{{0} \le\sigma\le s}\bigl|x^{n-1}(\sigma)\bigr|^{p}\,ds. \end{aligned}$$

For any \(r\ge1\), we have

$$\begin{aligned} \max_{1\le n\le r}E\sup_{{0} \le s\le t}\bigl|x^{n}(s)\bigr|^{p} \le& c_{1}+c_{2}\int_{0}^{t} \Bigl(E\|\xi\|^{p}+\max_{1\le n\le r}E\sup _{{0} \le\sigma\le s}\bigl|x^{n}(\sigma)\bigr|^{p}\Bigr)\,ds. \end{aligned}$$

From the Gronwall inequality, we derive that

$$\begin{aligned} \max_{1\le n\le r}E\sup_{0 \le t\le T}\bigl|x^{n}(t)\bigr|^{p} \le& \bigl(c_{1}+c_{2}TE\|\xi\|^{p} \bigr)e^{c_{2}T}. \end{aligned}$$

Step 2. Let us show that \(\{x^{n}(t)\}_{n\ge 1}\) is Cauchy. For \(n\ge1\) and \(t\in[{0},T]\), we derive that, from (40),

$$\begin{aligned} x^{n+1}(t)-x^{n}(t)=\int_{0}^{t}f_{s}^{n,n-1}\,ds+ \int_{0}^{t}g_{s}^{n,n-1}\,dw_{s}, \end{aligned}$$

where

$$\begin{aligned}& f_{s}^{n,n-1}=f\bigl(s,x^{n}(s),x^{n} \bigl(\delta (s)\bigr)\bigr)-f\bigl(s,x^{n-1}(s),x^{n-1}\bigl( \delta(s)\bigr)\bigr),\\& g_{s}^{n,n-1}=g\bigl(s,x^{n}(s),x^{n} \bigl(\delta (s)\bigr)\bigr)-g\bigl(s,x^{n-1}(s),x^{n-1}\bigl( \delta(s)\bigr)\bigr). \end{aligned}$$

By the Hölder inequality and the BDG inequality, we have

$$\begin{aligned}& E\Bigl(\sup_{{0} \le s\le t}\bigl|x^{n+1}(s)-x^{n}(s)\bigr|^{p} \Bigr)\le c_{3}\int_{0}^{t}E\Bigl(\sup _{{0} \le\sigma \le s}\bigl|x^{n}(\sigma)-x^{n-1}( \sigma)\bigr|^{p}\Bigr)\,ds. \end{aligned}$$

Setting \(\varphi_{n}(t)=E\sup_{{0} \le s\le t}|x^{n+1}(s)-x^{n}(s)|^{p}\), we have

$$\begin{aligned} \varphi_{n}(t) \le& c_{3}\int _{0}^{t}\varphi_{n-1}(s_{1})\,ds_{1} \le c_{3}^{2}\int_{0}^{t}\,ds_{1} \int_{0}^{s_{1}}\varphi_{n-2}(s_{2})\,ds_{2} \\ \le& \cdots \\ \le&c_{3}^{n}\int_{0}^{t}\,ds_{1} \int_{0}^{s_{1}}\,ds_{2}\cdots\int _{0}^{s_{n-1}}\varphi_{0}(s_{n})\,ds_{n}. \end{aligned}$$
(41)

By (41) and \(\varphi_{0}(t)\le c_{4}KE\|\xi\|^{p}=\tilde{c}_{4}\), we obtain

$$\begin{aligned} E\Bigl(\sup_{{0} \le s\le t}\bigl|x^{n+1}(s)-x^{n}(s)\bigr|^{p} \Bigr)\le\tilde{c}_{4} \frac{(c_{3}t)^{n}}{n!}. \end{aligned}$$
(42)

Hence (42) implies that for each t, \(\{x^{n}(t)\}_{n=1,2,\ldots}\) is a Cauchy sequence on \([{0},T]\).
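The factorial decay in (42) is the classical Picard-iteration mechanism. As a deterministic toy illustration (ours, not part of the proof), the same successive-approximation scheme for \(x'=x\), \(x(0)=1\), exhibits exactly this \(t^{n}/n!\) decay between consecutive iterates:

```python
def picard_iterates(t_end=1.0, n_iter=6, steps=2000):
    # Successive approximations x^{n+1}(t) = 1 + \int_0^t x^{n}(s) ds for the
    # toy equation x'(t) = x(t), x(0) = 1, on a uniform grid; the sup-distance
    # between consecutive iterates decays like t^n / n!, mirroring (42).
    h = t_end / steps
    cur = [1.0] * (steps + 1)  # x^0 = 1
    diffs = []
    for _ in range(n_iter):
        acc = 0.0
        new = [1.0]
        for k in range(steps):
            acc += cur[k] * h  # left-endpoint quadrature of the integral
            new.append(1.0 + acc)
        diffs.append(max(abs(p - q) for p, q in zip(new, cur)))
        cur = new
    return diffs

print(picard_iterates())  # ≈ [1, 1/2, 1/6, 1/24, ...]
```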

Step 3. Uniqueness. Let \(x(t)\) and \(y(t)\) be two solutions of (1). Then, for \(t\in[{0},T]\), we have

$$\begin{aligned}& E\sup_{{0} \le s\le t}\bigl|x(s)-y(s)\bigr|^{p}\le c\int _{0}^{t}E\sup_{{0} \le u\le s}\bigl|x(u)-y(u)\bigr|^{p}\,ds. \end{aligned}$$

Therefore, the Gronwall inequality implies

$$E\sup_{{0} \le s\le t}\bigl|x(s)-y(s)\bigr|^{p}=0,\quad t\in[{0},T]. $$

The above expression means that \(x(t)=y(t)\) for all \(t\in[0,T]\).

Existence. We derive from (42) that \(\{x^{n}(t)\}_{n=1,2,\ldots}\) is a Cauchy sequence. Hence there exists a process \(x(t)\) such that \(x^{n}(t) \to x(t)\) as \(n\to\infty\). For all \(t\in[{0},T]\), letting \(n\to\infty\) on both sides of (40), we can show that \(x(t)\) is a solution of (1). So the proof of Theorem 2.1 is complete. □

Proof of Corollary 4.1

The key technique needed to prove this corollary has already been presented in the proofs of Theorems 2.2 and 3.2, so here we only highlight the parts that need to be modified. We use the same notation as in the proofs of Theorems 2.2 and 3.2. It is easy to see that inequality (11) should become

$$\begin{aligned} \bigl|f\bigl(t,x_{\varepsilon}(t)\bigr)-\bar{f}\bigl(y_{\varepsilon}(t) \bigr)\bigr|^{p} \le&\bigl[1+\epsilon_{2}^{\frac{1}{p-1}} \bigr]^{p-1}\biggl(\frac{1}{\epsilon_{2}} \rho\bigl(\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p} \bigr) \\ &{} +\bigl|f\bigl(t,y_{\varepsilon}(t)\bigr)-\bar{f}\bigl(y_{\varepsilon}(t) \bigr)\bigr|^{p}\biggr). \end{aligned}$$
(43)

In fact, since the function \(\rho(\cdot)\) is concave and increasing, there must exist a positive number \(k^{p}\) such that

$$\rho(x)\le k^{p}(1+x) \quad \mbox{for } x\ge0. $$
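For completeness (this justification is ours, not in the original), the linear bound follows from concavity: the graph of a concave function lies below its supporting line at \(u=1\), so

```latex
\rho(x)\le\rho(1)+\rho'_{+}(1)(x-1)\le\bigl(\rho(1)+\rho'_{+}(1)\bigr)(1+x),
\qquad x\ge 0,
```

where \(\rho'_{+}(1)\ge0\) denotes the right derivative at 1, which exists by concavity and is nonnegative since ρ is nondecreasing; one may therefore take \(k^{p}=\rho(1)+\rho'_{+}(1)\).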

Hence,

$$\begin{aligned} \bigl|f\bigl(t,x_{\varepsilon}(t)\bigr)-\bar{f}\bigl(y_{\varepsilon}(t) \bigr)\bigr|^{p} \le&\bigl[1+\epsilon_{2}^{\frac{1}{p-1}} \bigr]^{p-1}\biggl(\frac{k^{p}}{\epsilon_{2}} \bigl(1+\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p} \bigr) \\ &{} +\bigl|f\bigl(t,y_{\varepsilon}(t)\bigr)-\bar{f}\bigl(y_{\varepsilon}(t) \bigr)\bigr|^{p}\biggr). \end{aligned}$$

Letting \(\epsilon_{2}=k^{p-1}\), we get

$$\begin{aligned} \bigl|f\bigl(t,x_{\varepsilon}(t)\bigr)-\bar{f}\bigl(y_{\varepsilon}(t) \bigr)\bigr|^{p} \le&(1+k)^{p-1}\bigl[k \bigl(1+\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p}\bigr) \\ &{} +\bigl|f\bigl(t,y_{\varepsilon}(t)\bigr)-\bar{f}\bigl(y_{\varepsilon}(t) \bigr)\bigr|^{p}\bigr]. \end{aligned}$$
(44)

Inserting (44) into the estimate of \(I_{1}\), it follows that

$$\begin{aligned} I_{1} \le& E\int_{0}^{u}\biggl\{ \frac{(p-1)\epsilon_{1}}{p}\bigl|e_{\varepsilon}(t)\bigr|^{p} + \frac{1}{p\epsilon_{1}^{p-1}}(1+k)^{p-1} \bigl[k\bigl(1+\bigl|e_{\varepsilon }(t)\bigr|^{p}\bigr) \\ &{} +\bigl|f\bigl(t,y_{\varepsilon}(t)\bigr)-\bar{f}\bigl(y_{\varepsilon}(t) \bigr)\bigr|^{p}\bigr]\biggr\} \,dt. \end{aligned}$$

By setting \(\epsilon_{1}=1+k\), we have

$$\begin{aligned} I_{1} \le& 2(1+k)E\int_{0}^{u} \bigl(1+\bigl|e_{\varepsilon}(t)\bigr|^{p}\bigr)\,dt +E\int_{0}^{u}\bigl|f \bigl(t,y_{\varepsilon}(t)\bigr)-\bar{f}\bigl(y_{\varepsilon}(t) \bigr)\bigr|^{p}\,dt. \end{aligned}$$

Similarly, \(J_{2}\) and \(J_{3}\) can be estimated in the same way as \(I_{1}\). Finally, all of the required assertions can be obtained as in the proofs of Theorems 2.2 and 3.2. The proof is therefore complete. □

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.


About this article


Cite this article

Mao, W., You, S., Wu, X. et al. On the averaging principle for stochastic delay differential equations with jumps. Adv Differ Equ 2015, 70 (2015). https://doi.org/10.1186/s13662-015-0411-0
