Open Access

An averaging principle for neutral stochastic functional differential equations driven by Poisson random measure

Advances in Difference Equations 2016, 2016:77

https://doi.org/10.1186/s13662-016-0802-x

Received: 11 January 2016

Accepted: 6 March 2016

Published: 15 March 2016

Abstract

In this paper, we study the averaging principle for neutral stochastic functional differential equations (SFDEs) with Poisson random measure. By a stochastic inequality, the Burkholder-Davis-Gundy inequality, and Kunita's inequality, we prove that the solution of the averaged neutral SFDEs with Poisson random measure converges to that of the standard one in the \(L^{p}\) sense and also in probability. Some illustrative examples are presented to demonstrate this theory.

Keywords

averaging principle; neutral SFDEs; \(L^{p}\) convergence; convergence in probability; Poisson random measure

1 Introduction

Since Krylov and Bogolyubov [1] put forward the averaging principle for dynamical systems in 1937, averaging principles have received great attention, and many researchers have devoted their efforts to studying them for nonlinear dynamical systems. For example, averaging principles for nonlinear ordinary differential equations (ODEs) can be found in [2, 3]. For averaging principles for nonlinear partial differential equations (PDEs), we refer to [4, 5].

With the development of stochastic analysis theory, many authors began to study the averaging principle for stochastic differential equations (SDEs). Khasminskii [6] first extended the averaging theory for ODEs to SDEs and studied the averaging principle for SDEs driven by Brownian motion. After that, an extensive literature on averaging principles for SDEs grew up. Freidlin and Wentzell [7] provided a mathematically rigorous overview of the fundamental stochastic averaging method. Golec and Ladde [8], Veretennikov [9], Khasminskii and Yin [10], and Givon et al. [11] studied the averaging principle for stochastic differential systems in the sense of mean square and of probability. On the other hand, Stoyanov and Bainov [12], Kolomiets and Melnikov [13], Givon [14], and Xu et al. [15] established the averaging principle for stochastic differential equations with Lévy jumps. They proved that the solutions of the averaged systems converge to those of the original systems in mean square under Lipschitz conditions.

On the other hand, Yin and Ramachandran [16] studied the asymptotic properties of stochastic differential delay equations (SDDEs) with wideband noise perturbations. By adopting martingale averaging techniques and the method of weak convergence, they showed that the underlying process \(y^{\varepsilon}(t)\) converges weakly to a random process \(x^{\varepsilon}(t)\) of SDDEs as \(\varepsilon\to0\). Tan and Lei [17] investigated the averaging method for a class of SDDEs with constant delay. Under non-Lipschitz conditions, they showed the convergence between the standard form and the averaged form of SDDEs. Furthermore, Xu et al. [18] and Mao et al. [19] extended the convergence results of [9, 12, 13, 15] to the cases of stochastic functional differential equations (SFDEs) and of SDDEs with variable delays, respectively.

SDDEs and SFDEs are well known to model problems from many areas of science and engineering in which the future state is determined by the present and past states. In fact, many stochastic systems depend not only on the present and past states but also involve delayed derivatives as well as the function itself. In this case, neutral SDDEs (SFDEs) have been used to describe such systems. In the past few years, the theory of neutral SDDEs (SFDEs) has attracted more and more attention (see e.g. [20–25]). However, to the best of our knowledge, there is no research on using averaging methods to obtain approximate solutions to neutral SDDEs (SFDEs). In order to fill this gap, we study the averaging principle for neutral SFDEs with Poisson random measure. By using the averaging method, we give the averaged form of the neutral SFDEs (1) and show that the pth moment of the solution to equation (7) is bounded. Then, applying a stochastic inequality, the Burkholder-Davis-Gundy inequality, and Kunita's inequality, we prove that the solution of the averaged neutral SFDEs with Poisson random measure (7) converges to that of the standard one (6) in the \(L^{p}\) sense and also in probability under the Lipschitz conditions. Meanwhile, we relax the Lipschitz condition and obtain the averaging principle for neutral SFDEs with Poisson random measure (1) under non-Lipschitz conditions. It should be pointed out that the previous works [6, 9, 12–15, 17, 18] on the averaging principle mainly discussed \(L^{2}\) strong convergence for stochastic differential equations, which does not imply \(L^{p}\) (\(p > 2\)) strong convergence. Moreover, since the neutral term is involved, the proofs of the main results are much more technical. The results obtained in this paper generalize and improve some results in [6, 9, 12, 13, 15, 17, 18].

The rest of this paper is organized as follows. In Section 2, we introduce some preliminaries and establish our main results. In Section 3, some lemmas will be given which will be crucial in the proof of the main results, Theorems 2.2 and 2.4. Section 4 is devoted to the proof of the main results. Finally, two illustrative examples will be given in Section 5.

2 Averaging principle and main results

Throughout this paper, let \((\Omega,\mathcal{F},P)\) be a complete probability space equipped with a filtration \((\mathcal{F}_{t})_{t\ge0}\) satisfying the usual conditions. Here \(w(t)\) is an m-dimensional Brownian motion defined on the probability space \((\Omega,\mathcal{F},P)\) and adapted to the filtration \((\mathcal{F}_{t})_{t\ge0}\). Let \(\tau>0\), and let \(D([-\tau,0];R^{n})\) denote the family of all right-continuous functions with left-hand limits φ from \([-\tau,0]\) to \(R^{n}\). The space \(D([-\tau,0];R^{n})\) is equipped with the norm \(\|\varphi\|=\sup_{-\tau\le t\le0}|\varphi(t)|\). \(D_{\mathcal{F}_{0}}^{b}([-\tau,0];R^{n})\) denotes the family of all almost surely bounded, \(\mathcal{F}_{0}\)-measurable, \(D([-\tau,0];R^{n})\)-valued random variables \(\xi=\{\xi(\theta):-\tau\le\theta\le0\}\). For any \(p\ge2\), let \(\mathcal{L}_{\mathcal{F}_{0}}^{p}([-\tau,0];R^{n})\) denote the family of all \(\mathcal{F}_{0}\)-measurable, \(D([-\tau,0];R^{n})\)-valued random variables \(\varphi=\{\varphi(\theta):-\tau\le\theta\le0\}\) such that \(E\sup_{-\tau\le\theta\le0}|\varphi(\theta)|^{p}<\infty\).

Let \(\{\bar{p}=\bar{p}(t), t\ge0\}\) be a stationary \(\mathcal {F}_{t}\)-adapted and \(R^{n}\)-valued Poisson point process. Then, for \(A\in\mathcal{B}(R^{n}-\{0\})\) with \(0\notin\bar{A}\), where denotes the closure of A, we define the Poisson counting measure N associated with \(\bar{p}\) by
$$N\bigl((0,t]\times A\bigr):=\#\bigl\{ {0}< s\le t,{\bar{p}}(s)\in A\bigr\} =\sum _{{0}< s\le t}I_{A}\bigl({\bar{p}}(s)\bigr), $$
where # denotes the cardinality of the set \(\{\cdot\}\). For simplicity, we denote \(N(t,A):=N(({0},t]\times A)\). It is well known that there exists a σ-finite measure π such that
$$E\bigl[N(t,A)\bigr]=\pi(A)t, \qquad P\bigl(N(t,A)=n\bigr)=\frac{\exp(-t\pi(A))(\pi(A)t)^{n}}{n!}. $$
This measure π is called the Lévy measure. Moreover, by Doob-Meyer’s decomposition theorem, there exists a unique \(\{\mathcal{F}_{t}\}\)-adapted martingale \(\tilde{N}(t,A)\) and a unique \(\{\mathcal{F}_{t}\}\)-adapted natural increasing process \(\hat{N}(t,A)\) such that
$$N(t,A)=\tilde{N}(t,A)+\hat{N}(t,A),\quad t>0. $$
Here \(\tilde{N}(t,A)\) is called the compensated Poisson random measure and \(\hat{N}(t,A)=\pi(A)t\) is called the compensator. For more details on Poisson point process and Lévy jumps, see [2628].
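The identity \(E[N(t,A)]=\pi(A)t\) can be checked by simulation. The following Python sketch is purely illustrative: it assumes a hypothetical set A with finite intensity \(\pi(A)=\lambda\), so that the jump times of the point process landing in A form a Poisson process of rate λ, and it compares the empirical mean of \(N(t,A)\) with \(\pi(A)t\).

```python
import numpy as np

def simulate_N(t, lam, rng):
    """Sample N(t, A): count the jump times in (0, t] of a Poisson point
    process whose jumps land in A at rate lam = pi(A)."""
    count, s = 0, 0.0
    while True:
        # Inter-arrival times are exponential with mean 1 / lam.
        s += rng.exponential(1.0 / lam)
        if s > t:
            return count
        count += 1

rng = np.random.default_rng(0)
t, lam, n_paths = 2.0, 3.0, 20000
samples = np.array([simulate_N(t, lam, rng) for _ in range(n_paths)])

# E[N(t, A)] = pi(A) * t; the empirical mean should be close to 6.0.
print(samples.mean())
```

The counting-measure definition above is exactly what the inner loop implements: each accepted arrival before time t adds one unit of mass to \(N((0,t]\times A)\).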
Consider the following neutral SFDEs with Poisson random measure
$$\begin{aligned} d\bigl[x(t)-D(x_{t})\bigr]=f(t,x_{t})\,dt+g(t,x_{t})\,dw(t)+ \int_{Z}h(t,x_{t},v)N(dt,dv), \end{aligned}$$
(1)
where \(x_{t}=\{x(t+\theta):-\tau\le\theta\le0\}\) is regarded as a \(D([-\tau,0];R^{n})\)-valued stochastic process. \(f:[0,T]\times D([-\tau,0];R^{n}) \to R^{n}\), \(g:[0,T]\times D([-\tau,0];R^{n}) \to R^{n\times m}\) and \(h:[0,T]\times D([-\tau,0];R^{n})\times Z \to R^{n} \) are all Borel-measurable functions. The initial condition \(x_{0}\) is defined by
$$x_{0}=\xi=\bigl\{ \xi(t):-\tau\le t\le0\bigr\} \in\mathcal{L}_{\mathcal {F}_{0}}^{2} \bigl([-\tau,0];R^{n}\bigr), $$
that is, ξ is an \(\mathcal{F}_{0}\)-measurable \(D([-\tau,0];R^{n})\)-valued random variable and \(E\|\xi\|^{2}<\infty\).

To study the averaging method of equation (1), we need the following assumptions.

Assumption 2.1

Assume that \(D(0)=0\) and that there exists a constant \(k_{0}\in(0,1)\) such that, for all \(\varphi,\psi\in D([-\tau ,0];R^{n})\),
$$\begin{aligned} \bigl|D(\varphi)-D(\psi)\bigr|\le k_{0}\|\varphi-\psi\|. \end{aligned}$$
(2)

Assumption 2.2

For all \(\varphi,\psi\in D([-\tau,0];R^{n})\) and \(t\in[0,T]\), there exist two positive constants \(k_{1}\), \(k_{2}\) such that
$$\bigl|f(t,\varphi)-f(t,\psi)\bigr|^{2}\vee\bigl|g(t,\varphi)-g(t,\psi)\bigr|^{2} \le k_{1}\|\varphi -\psi\|^{2} $$
and
$$\begin{aligned} \int_{Z}\bigl|h(t,\varphi,v)-h(t,\psi,v)\bigr|^{p}\pi(dv) \le k_{2}\|\varphi-\psi\|^{p},\quad p\ge2. \end{aligned}$$
(3)

Assumption 2.3

For all \(\varphi\in D([-\tau,0];R^{n})\) and \(t\in [0,T]\), there exist two positive constants \(k_{3}\), \(k_{4}\) such that
$$\bigl|f(t,\varphi)\bigr|^{2}\vee\bigl|g(t,\varphi)\bigr|^{2}\le k_{3} \bigl(1+\|\varphi\|^{2}\bigr) $$
and
$$\begin{aligned} \int_{Z}\bigl|h(t,\varphi,v)\bigr|^{p}\pi(dv)\le k_{4}\bigl(1+\|\varphi\|^{p}\bigr),\quad p\ge2. \end{aligned}$$
(4)
Let \(C^{2,1}( [-\tau,T]\times R^{n}; R_{+})\) denote the family of all nonnegative functions \(V(t,x)\) defined on \([-\tau,T]\times R^{n} \) which are twice continuously differentiable in x and once continuously differentiable in t. For each \(V\in C^{2,1}( [-\tau ,T]\times R^{n}; R_{+})\), define an operator LV by
$$\begin{aligned} LV(t,x,y) =& V_{t}\bigl(t,x-D(y)\bigr)+V_{x} \bigl(t,x-D(y)\bigr) f(t,y) \\ &{}+\frac{1}{2}\operatorname{trace}\bigl[g^{\top}(t,y)V_{xx} \bigl(t,x-D(y)\bigr)g(t,y)\bigr] \\ &{}+ \int_{Z}\bigl[V\bigl(t,x-D(y)+h(t,y,v)\bigr)-V\bigl(t,x-D(y) \bigr)\bigr]\pi(dv), \end{aligned}$$
(5)
where
$$\begin{aligned}& V_{t}(t,x) = \frac{\partial V(t,x) }{\partial t}, \qquad V_{x}(t,x) = \biggl( \frac{\partial V(t,x) }{\partial x_{1}}, \ldots, \frac{\partial V(t,x) }{\partial x_{n}} \biggr), \\& V_{xx}(t,x) = \biggl( \frac{\partial^{2} V(t,x) }{\partial x_{i}\,\partial x_{j}} \biggr)_{n\times n}. \end{aligned}$$

Similar to the proof in [29], we have the following existence result.

Theorem 2.1

If Assumptions 2.1-2.3 hold, equation (1) has a unique solution in the sense of \(L^{p}\).

Now, we study the averaging principle for neutral SFDEs with Poisson random measure. Let us consider the standard form of equation (1)
$$\begin{aligned} x_{\varepsilon}(t) =&x(0)+D(x_{\varepsilon,t})-D(x_{0})+ \varepsilon \int _{0}^{t}f(s,x_{\varepsilon,s})\,ds \\ &{}+\sqrt{\varepsilon} \int_{0}^{t}g(s,x_{\varepsilon,s})\,dw(s)+\sqrt{ \varepsilon} \int_{0}^{t} \int_{Z}h(s,x_{\varepsilon,s},v)N(ds,dv), \end{aligned}$$
(6)
where the coefficients f, g, and h satisfy the same assumptions as in (3) and (4), and \(\varepsilon\in(0,\varepsilon_{0}]\) is a small positive parameter, with \(\varepsilon_{0}\) a fixed number.

Let \(\bar{f}(x): D([-\tau,0];R^{n}) \to R^{n} \), \(\bar{g}(x): D([-\tau,0];R^{n}) \to R^{n\times m} \) and \(\bar{h}(x,v): D([-\tau,0];R^{n})\times Z \to R^{n}\) be measurable functions, satisfying Assumptions 2.2 and 2.3. We also assume that the following condition is satisfied.

Assumption 2.4

For any \(\varphi\in D([-\tau,0];R^{n})\) and \(p\ge2\), there exist three positive bounded functions \(\psi_{i}(T_{1})\), \(i=1,2,3\), such that
$$\begin{aligned}& \frac{1}{T_{1}} \int_{0}^{T_{1}}\bigl|f(t,\varphi)-\bar{f}( \varphi)\bigr|^{p}\,dt \le\psi _{1}(T_{1}) \bigl(1+\| \varphi\|^{p}\bigr), \\& \frac{1}{T_{1}} \int_{0}^{T_{1}}\bigl|g(t,\varphi)-\bar{g}( \varphi)\bigr|^{p}\,dt \le\psi _{2}(T_{1}) \bigl(1+\| \varphi\|^{p}\bigr), \end{aligned}$$
and
$$\begin{aligned} \frac{1}{T_{1}} \int_{0}^{T_{1}} \int_{Z}\bigl|h(t,\varphi,v)-\bar{h}(\varphi ,v)\bigr|^{p} \pi(dv)\,dt \le\psi_{3}(T_{1}) \bigl(1+\|\varphi \|^{p}\bigr), \end{aligned}$$
where \(\lim_{T_{1}\to\infty}\psi_{i}(T_{1})=0\).
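As a concrete (and purely illustrative, not taken from this paper) instance of Assumption 2.4, consider the hypothetical scalar drift \(f(t,\varphi)=\varphi(0)(\frac{1}{2}+e^{-t})\) with averaged coefficient \(\bar{f}(\varphi)=\frac{1}{2}\varphi(0)\). Then \(|f(t,\varphi)-\bar{f}(\varphi)|^{p}=|\varphi(0)|^{p}e^{-pt}\), so one may take \(\psi_{1}(T_{1})=\frac{1}{pT_{1}}\to0\). The Python sketch below evaluates \(\psi_{1}(T_{1})\) numerically for \(p=2\) and \(\varphi(0)=1\):

```python
import numpy as np

# Hypothetical scalar coefficients (illustrative only, not from the paper):
#   f(t, phi) = phi(0) * (0.5 + exp(-t)),   bar_f(phi) = phi(0) / 2,
# so that |f(t, phi) - bar_f(phi)|^p = |phi(0)|^p * exp(-p * t).
p = 2.0

def psi1(T1, n=200_001):
    """Trapezoidal evaluation of (1/T1) * int_0^T1 |f - bar_f|^p dt at phi(0) = 1."""
    t = np.linspace(0.0, T1, n)
    y = np.exp(-p * t)
    dt = t[1] - t[0]
    return float(np.sum((y[1:] + y[:-1]) * 0.5) * dt / T1)

vals = [psi1(T1) for T1 in (1.0, 10.0, 100.0, 1000.0)]
print(vals)  # decays roughly like 1/(2 * T1), so psi_1(T1) -> 0
```

The exact value here is \(\psi_{1}(T_{1})=\frac{1-e^{-2T_{1}}}{2T_{1}}\), which the quadrature reproduces and which indeed vanishes as \(T_{1}\to\infty\).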
Then we have the averaging form of the standard neutral SFDEs with Poisson random measure
$$\begin{aligned} y_{\varepsilon}(t) =&y(0)+D(y_{\varepsilon,t})-D(y_{0})+ \varepsilon \int _{0}^{t}\bar{f}(y_{\varepsilon,s})\,ds+\sqrt{ \varepsilon} \int_{0}^{t}\bar{g}(y_{\varepsilon,s})\,dw(s) \\ &{}+\sqrt{\varepsilon} \int_{0}^{t} \int_{Z}\bar{h}(y_{\varepsilon,s},v)N(ds,dv), \end{aligned}$$
(7)
where \(y(0)=x(0)\), \(y_{0}=x_{0}\).

Obviously, under Assumptions 2.1-2.3, the standard neutral SFDEs with Poisson random measure (6) and the averaged one (7) each have a unique solution in \(L^{p}\).
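Although the analysis below is purely analytic, the closeness of the two solutions can be visualized numerically. The following Python sketch uses a minimal Euler-type scheme for a hypothetical scalar example (not from this paper) with \(D(\varphi)=0.1\varphi(-\tau)\), \(f(t,\varphi)=\varphi(0)(\frac{1}{2}+e^{-t})\), \(\bar{f}(\varphi)=\frac{1}{2}\varphi(0)\), constant diffusion \(g=\bar{g}=0.3\), and the jump term omitted for brevity; it integrates \(z(t)=x(t)-D(x_{t})\), as in (6)-(7), and drives both paths by the same Brownian increments.

```python
import numpy as np

# Toy scalar neutral SFDE (hypothetical coefficients, jump term omitted):
#   d[x(t) - D(x_t)] = eps * f(t, x_t) dt + sqrt(eps) * g dW(t),
# with D(phi) = 0.1 * phi(-tau), f(t, phi) = phi(0) * (0.5 + exp(-t)),
# averaged drift bar_f(phi) = phi(0) / 2, constant diffusion g = 0.3.
eps, tau, h, T = 0.05, 1.0, 0.01, 5.0
K, N = int(tau / h), int(T / h)
rng = np.random.default_rng(42)
dW = rng.normal(0.0, np.sqrt(h), N)   # shared Brownian increments

def euler(drift):
    x = np.ones(K + N + 1)            # constant initial segment on [-tau, 0]
    for n in range(K, K + N):
        t = (n - K) * h
        z = x[n] - 0.1 * x[n - K]     # z(t) = x(t) - D(x_t)
        z += eps * drift(t, x[n]) * h + np.sqrt(eps) * 0.3 * dW[n - K]
        x[n + 1] = z + 0.1 * x[n + 1 - K]
    return x

x_eps = euler(lambda t, x0: x0 * (0.5 + np.exp(-t)))  # standard drift
y_eps = euler(lambda t, x0: x0 * 0.5)                  # averaged drift

# With shared noise the paths differ only through the drifts, so the
# gap stays of order eps on the finite horizon.
print(np.max(np.abs(x_eps - y_eps)))
```

Because the diffusion coefficients coincide, the stochastic integrals cancel in the difference, and the observed gap reflects only the \(O(\varepsilon)\) drift discrepancy, in line with Theorem 2.2 below.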

Now, we present our main results, which reveal the relationship between the processes \(x_{\varepsilon}(t)\) and \(y_{\varepsilon}(t)\).

Theorem 2.2

Let Assumptions 2.1-2.4 hold. For a given arbitrarily small number \(\delta_{1}>0\) and \(p\ge2\), there exist \(L>0\), \(\varepsilon_{1}\in(0,\varepsilon_{0}]\), and \(\beta\in(0,1)\) such that
$$\begin{aligned} E\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p}\le \delta_{1}, \quad \forall t\in\bigl[0,L\varepsilon^{-\beta} \bigr], \end{aligned}$$
(8)
for all \(\varepsilon\in(0,\varepsilon_{1}]\).

The proof of this theorem will be shown in Section 4.

Remark 2.1

In particular, when \(p=2\), we see that the solution of the averaged neutral SFDEs with Poisson random measure converges to that of the standard one in second moment.

With Theorem 2.2, it is easy to show the convergence in probability between the processes \(x_{\varepsilon}(t)\) and \(y_{\varepsilon}(t)\).

Corollary 2.1

Let Assumptions 2.1-2.4 hold. For a given arbitrarily small number \(\delta_{2}>0\), there exists \(\varepsilon_{2}\in(0,\varepsilon_{0}]\) such that for all \(\varepsilon\in (0,\varepsilon_{2}]\), we have
$$\begin{aligned} \lim_{\varepsilon\to0}P\Bigl(\sup_{0< t\le L\varepsilon^{-\beta }}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|> \delta_{2}\Bigr)=0, \end{aligned}$$
where L and β are defined by Theorem 2.2.

Proof

By Theorem 2.2 and the Chebyshev inequality, for any given number \(\delta_{2}>0\), we can obtain
$$P\Bigl(\sup_{0< t\le L\varepsilon^{-\beta}}\bigl|x_{\varepsilon}(t)-y_{\varepsilon }(t)\bigr|> \delta_{2}\Bigr)\le\frac{1}{\delta_{2}^{p}}E\Bigl(\sup_{0< t\le L\varepsilon ^{-\beta}}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p} \Bigr)\le\frac {cL\varepsilon^{1-\beta}}{\delta_{2}^{p}}. $$
Let \(\varepsilon\to0\), and the required result follows. □

Next, we extend the averaging principle for neutral SFDEs with Poisson random measure to the case of non-Lipschitz condition.

Assumption 2.5

Let \(k(\cdot)\), \(\rho(\cdot)\) be two concave nondecreasing functions from \(R_{+}\) to \(R_{+}\) such that \(k(0)=\rho(0)=0\) and \(\int_{0^{+}} \frac{u^{p-1}}{k^{p}(u)+\rho ^{p}(u)}\,du=\infty\). For all \(\varphi,\psi\in D([-\tau,0];R^{n})\), \(t\in [0,T]\), and \(p\ge2\),
$$ \begin{aligned} &\bigl|f(t,\varphi)-f(t,\psi)\bigr|\vee\bigl|g(t,\varphi)-g(t,\psi)\bigr| \le k\bigl(\| \varphi -\psi\|\bigr),\\ &{\biggl[ \int_{Z}\bigl|h(t,\varphi,v)-h(t,\psi,v)\bigr|^{p}\pi(dv) \biggr]}^{\frac{1}{p}} \le \rho\bigl(\|\varphi-\psi\|\bigr). \end{aligned} $$
(9)

Remark 2.2

As is well known, the existence and uniqueness of solutions for neutral SFDEs under the above assumptions were proved by Bao and Hou [30], Ren and Xia [31], and Wei and Cai [32]. If \(k(u)=\rho(u)=Lu\), then Assumption 2.5 reduces to the Lipschitz conditions (3). In other words, Assumption 2.5 is much weaker than Assumption 2.2.
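Indeed, in the Lipschitz case \(k(u)=\rho(u)=Lu\) the integrability condition of Assumption 2.5 holds automatically, since for any \(p\ge2\)
$$\int_{0^{+}} \frac{u^{p-1}}{k^{p}(u)+\rho^{p}(u)}\,du= \int_{0^{+}} \frac{u^{p-1}}{2L^{p}u^{p}}\,du=\frac{1}{2L^{p}} \int_{0^{+}} \frac{du}{u}=\infty, $$
so the linear case is admissible.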

Theorem 2.3

If Assumptions 2.1 and 2.5 hold, then there exists a unique solution to equation (1) in the sense of \(L^{p}\).

Proof

The proof is similar to those of Ren and Xia [31] and Wei and Cai [32], and we thus omit it here. □

Theorem 2.4

Let Assumptions 2.1, 2.4, and 2.5 hold. For a given arbitrarily small number \(\delta_{3}>0\), there exist \(L>0\), \(\varepsilon_{3}\in(0,\varepsilon_{0}]\), and \(\beta\in(0,1)\) such that
$$\begin{aligned} E\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{2}\le \delta_{3}, \quad \forall t\in\bigl[0,L\varepsilon^{-\beta} \bigr], \end{aligned}$$
(10)
for all \(\varepsilon\in(0,\varepsilon_{3}]\).

Proof

The proof of this theorem will be shown in Section 4. □

Similarly, with Theorem 2.4, we can show the convergence in probability of the standard solution of equation (6) to the averaged solution of equation (7).

Corollary 2.2

Let Assumptions 2.1, 2.4, and 2.5 hold. For a given arbitrarily small number \(\delta_{4}>0\), there exists \(\varepsilon_{4}\in(0,\varepsilon_{0}]\) such that for all \(\varepsilon\in (0,\varepsilon_{4}]\), we have
$$\begin{aligned} \lim_{\varepsilon\to0}P\Bigl(\sup_{0< t\le L\varepsilon^{-\beta }}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|> \delta_{4}\Bigr)=0, \end{aligned}$$
where L and β are defined by Theorem 2.4.

Remark 2.3

If the jump terms \(h=\tilde{h}=0\), then equations (1) and (36) become neutral SFDEs (SDDEs), which have been investigated in [21–25]. Under our assumptions, we can show that the solution of the averaged neutral SFDEs (SDDEs) converges to that of the standard one in pth moment and in probability.

Remark 2.4

If the neutral terms \(D(\cdot)=0\) and \(\tilde{D}(\cdot)=0\), then equations (1) and (36) reduce to SFDEs (SDDEs) with jumps, which have been studied in [18, 19]. Hence, the corresponding results in [18, 19] are generalized and improved.

3 Some useful lemmas

In order to prove our main results, we need to introduce the following lemmas.

Lemma 3.1

Let \(p>1\) and \(a,b\in R^{n}\). Then, for \(\epsilon>0\),
$$|a+b|^{p}\le\bigl[1+\epsilon^{\frac{1}{p-1}}\bigr]^{p-1} \biggl(|a|^{p}+\frac {|b|^{p}}{\epsilon}\biggr). $$

Lemma 3.2

Let \(p>2\) and \(a,b>0\). Then, for \(\epsilon>0\),
$$a^{p-1}b\le\frac{\epsilon(p-1)}{p}a^{p}+\frac{1}{p\epsilon^{p-1}}b^{p}, \qquad a^{p-2}b^{2}\le\frac{\epsilon(p-2)}{p}a^{p}+ \frac{2}{p\epsilon^{\frac {p-2}{2}}}b^{p}. $$
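Both bounds are instances of the weighted Young inequality; note that the \(b^{p}\) term of the second bound carries the coefficient \(\frac{2}{p\epsilon^{(p-2)/2}}\), which is the form used later in the proof of Lemma 3.5. A quick numerical spot-check (illustrative only):

```python
import numpy as np

# Numerical spot-check of the Young-type bounds of Lemma 3.2:
#   a^{p-1} b   <= eps*(p-1)/p * a^p +     b^p / (p * eps^{p-1}),
#   a^{p-2} b^2 <= eps*(p-2)/p * a^p + 2 * b^p / (p * eps^{(p-2)/2}).
rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    a, b, eps = rng.uniform(0.01, 10.0, 3)
    p = rng.uniform(2.1, 6.0)
    rhs1 = eps * (p - 1) / p * a ** p + b ** p / (p * eps ** (p - 1))
    rhs2 = eps * (p - 2) / p * a ** p + 2.0 * b ** p / (p * eps ** ((p - 2) / 2))
    ok = ok and a ** (p - 1) * b <= rhs1 * (1 + 1e-9)
    ok = ok and a ** (p - 2) * b ** 2 <= rhs2 * (1 + 1e-9)
print(ok)  # True: both bounds hold on every sampled triple
```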

Lemma 3.3

Let \(p>2\) and \(a,b\in R^{n}\). Then, for any \(\delta\in(0,1)\),
$$|a+b|^{p}\le\frac{|a|^{p}}{(1-\delta)^{p-1}}+\frac{|b|^{p}}{\delta ^{p-1}}. $$

Lemma 3.4

Let \(\phi:R_{+}\times Z\to R^{n}\) and assume that
$$\int_{0}^{t} \int_{Z}\bigl|\phi(s,v)\bigr|^{p}\pi(dv)\,ds< \infty,\quad p\ge2. $$
Then there exists \(D_{p}>0\) such that
$$\begin{aligned} E\biggl(\sup_{0\le t\le u}\biggl| \int_{0}^{t} \int_{Z}\phi(s,v)\tilde{N}(ds,dv)\biggr|^{p}\biggr) \le& D_{p}\biggl\{ E\biggl( \int_{0}^{u} \int_{Z}\bigl|\phi(s,v)\bigr|^{2}\pi(dv)\,ds \biggr)^{\frac{p}{2}} \\ &{}+E \int_{0}^{u} \int_{Z}\bigl|\phi(s,v)\bigr|^{p}\pi(dv)\,ds\biggr\} . \end{aligned}$$

The proofs of Lemma 3.1 and Lemma 3.2 can be found in [33], and the proof of Lemma 3.4 can be found in [26, 28]; Lemma 3.3 follows from Lemma 3.1 by putting \(\epsilon=(\frac{\delta}{1-\delta})^{p-1}\). The following lemma shows that if the initial data are in \(L^{p}\) (\(p\ge 2\)), then the solution of the averaged neutral SFDEs with Poisson random measure is also in \(L^{p}\).
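For completeness, we record the deduction of Lemma 3.3: choosing \(\epsilon=(\frac{\delta}{1-\delta})^{p-1}\) in Lemma 3.1 gives \(\epsilon^{\frac{1}{p-1}}=\frac{\delta}{1-\delta}\), hence \([1+\epsilon^{\frac{1}{p-1}}]^{p-1}=(1-\delta)^{-(p-1)}\), so that
$$|a+b|^{p}\le\frac{1}{(1-\delta)^{p-1}} \biggl(|a|^{p}+ \biggl(\frac{1-\delta}{\delta} \biggr)^{p-1}|b|^{p} \biggr) =\frac{|a|^{p}}{(1-\delta)^{p-1}}+\frac{|b|^{p}}{\delta^{p-1}}. $$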

Lemma 3.5

Let Assumptions 2.1 and 2.3 hold. If the initial data \(\xi\in\mathcal{L}_{\mathcal{F}_{0}}^{p}([-\tau,0];R^{n})\) for some \(p\ge2\), then for any \(t\ge0\), the unique solution \(y_{\varepsilon}(t)\) of equation (7) has the property that
$$\begin{aligned} E\sup_{-\tau\le s\le t}\bigl|y_{\varepsilon}(s)\bigr|^{p}\le C, \end{aligned}$$
(11)
where \(C=[(1+\tilde{C})E\|\xi\|^{p}+\frac{2\bar{C}}{(1-k_{0})^{p}}T]e^{\frac {2\bar{C}}{(1-k_{0})^{p}}T}\).

Proof

Applying the Itô formula to \(V(t,y_{\varepsilon }(t)-D(y_{\varepsilon,t}))=|y_{\varepsilon}(t)-D(y_{\varepsilon ,t})|^{p}\), we obtain
$$\begin{aligned} &\bigl|y_{\varepsilon}(t)-D(y_{\varepsilon,t})\bigr|^{p} \\ &\quad=\bigl|y_{\varepsilon}(0)-D(y_{0})\bigr|^{p}+ \int_{0}^{t}LV\bigl(y_{\varepsilon }(s),y_{\varepsilon,s},s \bigr)\,ds \\ &\qquad{}+p\sqrt{\varepsilon} \int_{0}^{t}\bigl|y_{\varepsilon}(s)-D(y_{\varepsilon ,s})\bigr|^{p-2} \bigl[y_{\varepsilon}(s)-D(y_{\varepsilon,s})\bigr]^{\top}\bar {g}(y_{\varepsilon,s})\,dw(s) \\ &\qquad{}+ \int_{0}^{t} \int_{Z}\bigl\{ \bigl|y_{\varepsilon}(s)-D(y_{\varepsilon,s})+\sqrt { \varepsilon}\bar{h}(y_{\varepsilon,s},v)\bigr|^{p} \\ &\qquad{}-\bigl|y_{\varepsilon}(s) -D(y_{\varepsilon,s})\bigr|^{p}\bigr\} \tilde{N}(ds,dv), \end{aligned}$$
(12)
where
$$\begin{aligned} LV(x,y,t) =&p\varepsilon\bigl|x-D(y)\bigr|^{p-2}\bigl[x-D(y)\bigr]^{\top} \bar{f}(y) \\ &{}+\frac{p(p-1)}{2}\varepsilon\bigl|x-D(y)\bigr|^{p-2}\bigl|\bar{g}(y)\bigr|^{2} \\ &{}+ \int_{Z}\bigl[\bigl|x-D(y)+\sqrt{\varepsilon}\bar{h}(y,v)\bigr|^{p}-\bigl|x-D(y)\bigr|^{p} \bigr]\pi (dv). \end{aligned}$$
Taking the supremum and then the expectation on both sides of (12), one gets
$$\begin{aligned} &E\sup_{{0} \le s\le t}\bigl|y_{\varepsilon}(s)-D(y_{\varepsilon ,s})\bigr|^{p} \\ &\quad\le E\sup_{0 \le s\le t}\bigl|y_{\varepsilon}(0)-D(y_{0})\bigr|^{p}+E \sup_{0 \le s\le t} \int_{0}^{s}p\varepsilon\bigl|y_{\varepsilon}(\sigma )-D(y_{\varepsilon,\sigma})\bigr|^{p-2} \\ &\qquad{}\times\bigl[y_{\varepsilon}(\sigma)-D(y_{\varepsilon,\sigma}) \bigr]^{\top}\bar{f}(y_{\varepsilon,\sigma})\,d\sigma \\ &\qquad{}+E\sup_{0 \le s\le t} \int_{0}^{s}\frac{p(p-1)}{2}\varepsilon \bigl|y_{\varepsilon}(\sigma)-D(y_{\varepsilon,\sigma})\bigr| ^{p-2}\bigl| \bar{g}(y_{\varepsilon,\sigma})\bigr|^{2}\,d\sigma \\ &\qquad{}+E\sup_{0 \le s\le t} \int_{0}^{s} p\sqrt{\varepsilon}\bigl|y_{\varepsilon}( \sigma)-D(y_{\varepsilon,\sigma })\bigr|^{p-2}\bigl[y_{\varepsilon}( \sigma)-D(y_{\varepsilon,\sigma})\bigr]^{\top}\bar {g}(y_{\varepsilon,\sigma})\,dw( \sigma) \\ &\qquad{}+E\sup_{0 \le s\le t} \int_{0}^{s} \int_{Z}\bigl\{ \bigl|y_{\varepsilon }(\sigma)-D(y_{\varepsilon,\sigma})+ \sqrt{\varepsilon}\bar {h}(y_{\varepsilon,\sigma},v)\bigr|^{p} \\ &\qquad{}-\bigl|y_{\varepsilon}(\sigma) -D(y_{\varepsilon,\sigma})\bigr|^{p}\bigr\} N(d \sigma,dv) \\ &\quad=E\sup_{0 \le s\le t}\bigl|y_{\varepsilon}(0)-D(y_{0})\bigr|^{p}+ \sum_{i=1}^{4}I_{i}. \end{aligned}$$
(13)
By Lemma 3.1 and Assumption 2.1, we get
$$\begin{aligned} E\sup_{0 \le s\le t}\bigl|y_{\varepsilon}(0)-D(y_{0})\bigr|^{p} \le&\bigl[1+\epsilon_{1}^{\frac{1}{p-1}}\bigr]^{p-1} \biggl(\bigl|y_{\varepsilon}(0)\bigr|^{p}+\frac {|D(y_{0})|^{p}}{\epsilon_{1}}\biggr) \\ \le&\bigl[1+\epsilon_{1}^{\frac{1}{p-1}}\bigr]^{p-1} \biggl(\bigl|y_{\varepsilon}(0)\bigr|^{p}+\frac {k_{0}^{p}\|y_{0}\|^{p}}{\epsilon_{1}}\biggr). \end{aligned}$$
Letting \(\epsilon_{1}=k_{0}^{p-1} \),
$$\begin{aligned} E\sup_{0 \le s\le t}\bigl|y_{\varepsilon}(0)-D(y_{0})\bigr|^{p} \le&(1+k_{0})^{p}E\|\xi\|^{p}. \end{aligned}$$
(14)
Recalling Lemma 3.2, there exists an \(\epsilon_{2}>0\) such that
$$\begin{aligned} I_{1} \le&p\varepsilon E \int_{0}^{t}\biggl[\frac{\epsilon_{2}(p-1)}{p}\bigl|y_{\varepsilon }(s)-D(y_{\varepsilon,s})\bigr|^{p}+ \frac{1}{p\epsilon_{2}^{p-1}}\bigl|\bar {f}(y_{\varepsilon,s})\bigr|^{p}\biggr]\,ds \\ \le&p\varepsilon E \int_{0}^{t}\biggl[\frac{\epsilon _{2}(p-1)}{p}(1+k_{0})^{p} \|y_{\varepsilon,s}\|^{p}+\frac{1}{p\epsilon _{2}^{p-1}}\bigl|\bar{f}(y_{\varepsilon,s})\bigr|^{p} \biggr]\,ds. \end{aligned}$$
(15)
By Assumption 2.3 and the basic inequality, we get
$$\begin{aligned} \bigl|\bar{f}(y_{\varepsilon,s})\bigr|^{p}\le(2k_{3})^{\frac {p}{2}} \bigl(1+\|y_{\varepsilon,s}\|^{p}\bigr). \end{aligned}$$
(16)
Letting \(\epsilon_{2}=\frac{\sqrt{2k_{3}}}{1+k_{0}}\), it follows from (15) and (16) that
$$ I_{1}\le p\varepsilon\sqrt{2k_{3}}(1+k_{0})^{p-1}E \int _{0}^{t}\bigl(1+\|y_{\varepsilon,s} \|^{p}\bigr)\,ds. $$
(17)
By Lemma 3.2 and Assumption 2.3, we obtain
$$\begin{aligned} I_{2} \le&\frac{p(p-1)}{2}\varepsilon E \int_{0}^{t}\biggl[\frac{\epsilon _{3}(p-2)}{p}|y_{\varepsilon}(s)-D(y_{\varepsilon,s})|^{p}+ \frac {2}{p\epsilon_{3}^{\frac{p-2}{2}}}\bigl|\bar{g}(y_{\varepsilon ,s})\bigr|^{p}\biggr]\,ds \\ \le&\frac{p(p-1)}{2}\varepsilon E \int_{0}^{t}\biggl[\frac{\epsilon _{3}(p-2)}{p}(1+k_{0})^{p} \|y_{\varepsilon,s}\|^{p}+\frac{(2k_{3})^{\frac {p}{2}}}{p\epsilon_{3}^{\frac{p-2}{2}}}\bigl(1+\|y_{\varepsilon,s} \|^{p}\bigr)\biggr]\,ds. \end{aligned}$$
Letting \(\epsilon_{3}=\frac{2k_{3}}{(1+k_{0})^{2}}\),
$$\begin{aligned} I_{2} \le& (p-1)^{2}\varepsilon k_{3}(1+k_{0})^{p-2}E \int _{0}^{t}\bigl(1+\|y_{\varepsilon,s} \|^{p}\bigr)\,ds. \end{aligned}$$
(18)
For the estimation of \(I_{3}\), by the Burkholder-Davis-Gundy inequality there exists a positive constant \(C_{p}\) such that
$$\begin{aligned} I_{3} \le&\sqrt{\varepsilon}C_{p}E\biggl[ \int_{0}^{t}\bigl|y_{\varepsilon }(s)-D(y_{\varepsilon,s})\bigr|^{2p-2}\bigl| \bar{g}(y_{\varepsilon ,s})\bigr|^{2}\,ds\biggr]^{\frac{1}{2}}. \end{aligned}$$
Further, by the Young inequality and Assumption 2.3, we deduce that
$$\begin{aligned} I_{3} \le&\frac{1}{2} E\sup_{{0} \le s\le t}\bigl|y_{\varepsilon }(s)-D(y_{\varepsilon,s})\bigr|^{p} \\ &{}+\varepsilon C_{p}^{2}k_{3}(1+k_{0})^{p-2}E \int _{0}^{t}\bigl(1+\|y_{\varepsilon,s} \|^{p}\bigr)\,ds. \end{aligned}$$
(19)
Finally, we estimate \(I_{4}\). Noting that \(N(dt,dv)=\tilde{N}(dt,dv)+\pi(dv)\,dt\) and that \(\tilde{N}(dt,dv)\) is a martingale, one has
$$I_{4}\le E \int_{0}^{t} \int_{Z}\bigl[\bigl|y_{\varepsilon}(s)-D(y_{\varepsilon ,s})+\sqrt{ \varepsilon}\bar{h}(y_{\varepsilon,s},v)\bigr|^{p}-\bigl|y_{\varepsilon}(s) -D(y_{\varepsilon,s})\bigr|^{p}\bigr]\pi(dv)\,ds. $$
By the mean value theorem, we obtain
$$I_{4}\le pE \int_{0}^{t} \int_{Z}\bigl[\bigl|y_{\varepsilon}(s)-D(y_{\varepsilon ,s})+\theta \sqrt{\varepsilon}\bar{h}(y_{\varepsilon,s},v)\bigr|^{p-1}\bigl|\sqrt{\varepsilon} \bar{h}(y_{\varepsilon ,s},v)\bigr|\bigr]\pi(dv)\,ds, $$
where \(|\theta|\le1\). This, together with the basic inequality \(|a+b|^{p-1}\le 2^{p-2}(|a|^{p-1}+|b|^{p-1})\), implies that
$$I_{4}\le pCE \int_{0}^{t} \int_{Z}\bigl[\bigl|y_{\varepsilon}(s)-D(y_{\varepsilon ,s})\bigr|^{p-1}\bigl| \sqrt{\varepsilon}\bar{h}(y_{\varepsilon,s},v)\bigr|+\bigl|\sqrt {\varepsilon} \bar{h}(y_{\varepsilon,s},v)\bigr|^{p}\bigr]\pi(dv)\,ds. $$
By Lemma 3.2 and Assumptions 2.1 and 2.3, it follows that
$$\begin{aligned} I_{4}\le pC\bigl[\bigl(k_{4}+\pi(Z)\bigr) (1+k_{0})^{p-1}\sqrt{\varepsilon}+k_{4}\sqrt { \varepsilon}^{p}\bigr]E \int_{0}^{t}\bigl(1+\|y_{\varepsilon,s} \|^{p}\bigr)\,ds. \end{aligned}$$
(20)
Combining (14), (17)-(20), we obtain
$$E\sup_{{0} \le s\le t}\bigl|y_{\varepsilon}(s)-D(y_{\varepsilon ,s})\bigr|^{p} \le2(1+k_{0})^{p}E\|\xi\|^{p}+2\bar{C} \int _{0}^{t}E\bigl(1+\|y_{\varepsilon,s} \|^{p}\bigr)\,ds, $$
where
$$\begin{aligned} \bar{C} =&\bigl[(p-1)^{2}+C_{p}^{2}\bigr] \varepsilon k_{3}(1+k_{0})^{p-2}+p\varepsilon \sqrt{2k_{3}}(1+k_{0})^{p-1} \\ &{}+pC\bigl[\bigl(k_{4}+\pi(Z)\bigr) (1+k_{0})^{p-1} \sqrt{\varepsilon}+k_{4}\sqrt{\varepsilon }^{p}\bigr]. \end{aligned}$$
On the other hand, by Lemma 3.1, we have
$$\begin{aligned} E\sup_{0 \le s\le t}\bigl|y_{\varepsilon}(s)\bigr|^{p} \le& \frac{k_{0}}{1-k_{0}}E\|\xi\|^{p}+\frac {1}{(1-k_{0})^{p}}E\sup _{{0} \le s\le t}\bigl(\bigl|y_{\varepsilon }(s)-D(y_{\varepsilon,s})\bigr|^{p} \bigr) \\ \le&\tilde{C}E\|\xi\|^{p}+\frac{2\bar{C}}{(1-k_{0})^{p}}T+\frac{2\bar {C}}{(1-k_{0})^{p}} \int_{0}^{t}E\|y_{\varepsilon,s}\|^{p}\,ds, \end{aligned}$$
where \(\tilde{C}=\frac{k_{0}}{1-k_{0}}+\frac{2(1+k_{0})^{p}}{(1-k_{0})^{p}}\). Consequently,
$$\begin{aligned} E\sup_{{-\tau} \le s\le t}\bigl|y_{\varepsilon}(s)\bigr|^{p} \le&(1+ \tilde{C})E\|\xi\|^{p}+\frac{2\bar{C}}{(1-k_{0})^{p}}T +\frac{2\bar{C}}{(1-k_{0})^{p}} \int_{0}^{t}E\Bigl(\sup_{{-\tau} \le \sigma\le s}\bigl|y_{\varepsilon}( \sigma)\bigr|^{p}\Bigr)\,ds. \end{aligned}$$
Therefore, we apply the Gronwall inequality to get
$$E\sup_{{-\tau} \le s\le t}\bigl|y_{\varepsilon}(s)\bigr|^{p} \le \biggl[(1+ \tilde{C})E\|\xi\|^{p}+\frac{2\bar{C}}{(1-k_{0})^{p}}T\biggr]e^{\frac {2\bar{C}}{(1-k_{0})^{p}}T}. $$
The proof is complete. □

4 Proof of main results

Proof of Theorem 2.2

By Lemma 3.3, it follows that
$$\begin{aligned} \bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p} =&\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)-\bigl[D(x_{\varepsilon,t})-D(y_{\varepsilon,t})\bigr]+\bigl[D(x_{\varepsilon ,t})-D(y_{\varepsilon,t}) \bigr]\bigr|^{p} \\ \le&\frac{|x_{\varepsilon}(t)-y_{\varepsilon}(t)-[D(x_{\varepsilon ,t})-D(y_{\varepsilon,t})]|^{p}}{ (1-\delta)^{p-1}} \\ &{}+\frac{|D(x_{\varepsilon,t})-D(y_{\varepsilon,t})|^{p}}{ \delta^{p-1}}. \end{aligned}$$
(21)
Letting \(\delta=k_{0}\) and taking the expectation on both sides of (21), we have
$$\begin{aligned} E\sup_{0\le t\le u}|x_{\varepsilon}(t)-y_{\varepsilon}(t)|^{p} \le&\frac{E\sup_{0\le t\le u}|x_{\varepsilon}(t)-y_{\varepsilon}(t)-[D(x_{\varepsilon,t})-D(y_{\varepsilon ,t})]|^{p}}{(1-k_{0})^{p-1}} \\ &{}+ k_{0}E\sup_{0\le t\le u}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p}. \end{aligned}$$
Consequently,
$$\begin{aligned} E\sup_{0\le t\le u}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p} \le\frac{E\sup_{0\le t\le u}|x_{\varepsilon}(t)-y_{\varepsilon}(t)-[D(x_{\varepsilon,t})-D(y_{\varepsilon ,t})]|^{p}}{(1-k_{0})^{p}}, \end{aligned}$$
(22)
where \(k_{0}\in(0,1)\). Next, we will estimate \(E\sup_{0\le t\le u}|x_{\varepsilon}(t)-y_{\varepsilon}(t)-[D(x_{\varepsilon,t})-D(y_{\varepsilon,t})]|^{p}\). From (6) and (7), we have
$$\begin{aligned} &x_{\varepsilon}(t)-y_{\varepsilon}(t)-\bigl[D(x_{\varepsilon ,t})-D(y_{\varepsilon,t}) \bigr] \\ &\quad=\varepsilon \int_{0}^{t}\bigl[f(s,x_{\varepsilon,s})- \bar{f}(y_{\varepsilon ,s})\bigr]\,ds \\ &\qquad{}+\sqrt{\varepsilon} \int_{0}^{t}\bigl[g(s,x_{\varepsilon,s})-\bar {g}(y_{\varepsilon,s})\bigr]\,dw(s) \\ &\qquad{}+\sqrt{\varepsilon} \int_{0}^{t} \int_{Z}\bigl[h(s,x_{\varepsilon,s},v)-\bar {h}(y_{\varepsilon,s},v)\bigr]N(ds,dv). \end{aligned}$$
Using the elementary inequality \(|a+b+c|^{p}\le 3^{p-1}(|a|^{p}+|b|^{p}+|c|^{p})\), it follows that for any \(u\in[0,T]\),
$$\begin{aligned} &E\sup_{0\le t\le u}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)-\bigl[D(x_{\varepsilon,t})-D(y_{\varepsilon,t})\bigr]\bigr|^{p} \\ &\quad\le3^{p-1}\varepsilon^{p}E\sup_{0\le t\le u}\biggl| \int_{0}^{t}\bigl[f(s,x_{\varepsilon ,s})- \bar{f}(y_{\varepsilon,s})\bigr]\,ds\biggr|^{p} \\ &\qquad{}+3^{p-1}{\varepsilon}^{\frac{p}{2}} E\sup _{0\le t\le u} \biggl| \int _{0}^{t}\bigl[g(s,x_{\varepsilon,s})- \bar{g}(y_{\varepsilon,s})\bigr]\,dw(s)\biggr|^{p} \\ &\qquad{}+3^{p-1}{\varepsilon}^{\frac{p}{2}}E\sup_{0\le t\le u}\biggl| \int_{0}^{t} \int _{Z}\bigl[h(s,x_{\varepsilon,s},v)-\bar{h}(y_{\varepsilon ,s},v) \bigr]N(ds,dv)\biggr|^{p} \\ &\quad=J_{1}+J_{2}+J_{3}. \end{aligned}$$
(23)
By the Hölder inequality, we get
$$J_{1}\le 3^{p-1}\varepsilon^{p}u^{p-1}E \int_{0}^{u}\bigl|f(s,x_{\varepsilon ,s})- \bar{f}(y_{\varepsilon,s})\bigr|^{p}\,ds. $$
By Lemma 3.1 and Assumption 2.2, it follows that for any \(\epsilon_{4}>0\)
$$\begin{aligned} J_{1} \le& 3^{p-1}\varepsilon^{p}u^{p-1}E \int_{0}^{u}\bigl|f(s,x_{\varepsilon ,s})-f(s,y_{\varepsilon,s})+f(s,y_{\varepsilon,s})- \bar {f}(y_{\varepsilon,s})\bigr|^{p}\,ds \\ \le&3^{p-1}\varepsilon^{p}u^{p-1}\bigl[1+ \epsilon_{4}^{\frac {1}{p-1}}\bigr]^{p-1}E \int_{0}^{u}\biggl(\frac{\sqrt{k_{1}}^{p}\|x_{\varepsilon ,s}-y_{\varepsilon,s}\|^{p}}{ \epsilon_{4}} +\bigl|f(s,y_{\varepsilon,s})-\bar{f}(y_{\varepsilon,s})\bigr|^{p} \biggr)\,ds. \end{aligned}$$
Letting \(\epsilon_{4}=\sqrt{k_{1}}^{p-1}\),
$$\begin{aligned} J_{1} \le&3^{p-1}\varepsilon^{p}u^{p-1}(1+ \sqrt{k_{1}})^{p} \int_{0}^{u}E \sup_{0\le\sigma\le s}\bigl|x_{\varepsilon}( \sigma)-y_{\varepsilon}(\sigma )\bigr|^{p}\,ds \\ &{}+ 3^{p-1}\varepsilon^{p}u^{p}(1+ \sqrt{k_{1}})^{p}E\frac{1}{u} \int _{0}^{u}\bigl|f(s,y_{\varepsilon,s})- \bar{f}(y_{\varepsilon,s})\bigr|^{p}\,ds. \end{aligned}$$
Then Assumption 2.4 implies that
$$\begin{aligned} J_{1} \le&3^{p-1}\varepsilon^{p}u^{p-1}(1+ \sqrt{k_{1}})^{p} \int_{0}^{u}E \sup_{0\le\sigma\le s}\bigl|x_{\varepsilon}( \sigma)-y_{\varepsilon}(\sigma )\bigr|^{p}\,ds \\ &{}+ 3^{p-1}\varepsilon^{p}u^{p}(1+ \sqrt{k_{1}})^{p}\psi_{1}(u) \Bigl(1+E\sup _{0\le t\le u} \|y_{\varepsilon,t}\|^{p}\Bigr). \end{aligned}$$
(24)
For the second term \(J_{2}\) of (23): by the Burkholder-Davis-Gundy inequality, there exists a \(C_{p}>0\) such that
$$\begin{aligned} J_{2} \le&3^{p-1}{\varepsilon}^{\frac{p}{2}} C_{p}E\biggl( \int _{0}^{u}\bigl|g(s,x_{\varepsilon,s})- \bar{g}(y_{\varepsilon,s})\bigr|^{2}\,ds\biggr)^{\frac {p}{2}} \\ \le&3^{p-1}{\varepsilon}^{\frac{p}{2}} C_{p}u^{\frac{p}{2}-1} E \int _{0}^{u}\bigl|g(s,x_{\varepsilon,s})- \bar{g}(y_{\varepsilon,s})\bigr|^{p}\,ds. \end{aligned}$$
Arguing as for \(J_{1}\), we get
$$\begin{aligned} J_{2} \le&3^{p-1}{\varepsilon}^{\frac{p}{2}}C_{p}u^{\frac{p}{2}-1}(1+ \sqrt {k_{1}})^{p} \int_{0}^{u}E\sup_{0\le\sigma\le s}\bigl|x_{\varepsilon}( \sigma )-y_{\varepsilon}(\sigma)\bigr|^{p}\,ds \\ &{}+ 3^{p-1}{\varepsilon}^{\frac{p}{2}} C_{p}u^{\frac{p}{2}}(1+ \sqrt {k_{1}})^{p}\psi_{2}(u) \Bigl(1+E\sup _{0\le t\le u} \|y_{\varepsilon,t}\|^{p}\Bigr). \end{aligned}$$
(25)
Since \(N(dt,dv)=\tilde{N}(dt,dv)+\pi(dv)\,dt\), the basic inequality \(|a+b|^{p}\le2^{p-1}(|a|^{p}+|b|^{p})\) yields
$$ \begin{aligned}[b] J_{3}\le{}&6^{p-1}{\varepsilon}^{\frac{p}{2}}E \sup_{0\le t\le u}\biggl| \int _{0}^{t} \int_{Z}\bigl[h(s,x_{\varepsilon,s},v)-\bar{h}(y_{\varepsilon ,s},v) \bigr]\tilde{N}(ds,dv)\biggr|^{p}\\ &{}+6^{p-1}{\varepsilon}^{\frac{p}{2}} E\sup_{0\le t\le u}\biggl| \int_{0}^{t} \int _{Z}\bigl[h(s,x_{\varepsilon,s},v)-\bar{h}(y_{\varepsilon,s},v) \bigr]\pi (dv)\,ds\biggr|^{p}\\ ={}&6^{p-1}{\varepsilon}^{\frac{p}{2}} (L_{1}+L_{2}). \end{aligned} $$
(26)
By Lemma 3.4, there exists a \(D_{p}\) such that
$$\begin{aligned} L_{1} \le&D_{p}\biggl\{ E\biggl( \int_{0}^{u} \int_{Z}\bigl|h(s,x_{\varepsilon,s},v)-\bar {h}(y_{\varepsilon,s},v)\bigr|^{2} \pi(dv)\,ds\biggr)^{\frac{p}{2}} \\ &{}+E \int_{0}^{u} \int_{Z}\bigl|h(s,x_{\varepsilon,s},v)-\bar{h}(y_{\varepsilon ,s},v)\bigr|^{p} \pi(dv)\,ds\biggr\} . \end{aligned}$$
(27)
By Assumptions 2.2 and 2.4, we have
$$\begin{aligned} &E\biggl( \int_{0}^{u} \int_{Z}\bigl|h(s,x_{\varepsilon,s},v)-\bar{h}(y_{\varepsilon ,s},v)\bigr|^{2} \pi(dv)\,ds\biggr)^{\frac{p}{2}} \\ &\quad\le E\biggl(2k_{2} \int_{0}^{u}\|x_{\varepsilon,s}-y_{\varepsilon,s} \|^{2}\,ds \\ &\qquad{}+2u\frac{1}{u} \int_{0}^{u} \int_{Z} \bigl|h(s,y_{\varepsilon,s},v)-\bar{h}(y_{\varepsilon,s},v)\bigr|^{2} \pi (dv)\,ds\biggr)^{\frac{p}{2}} \\ &\quad\le E\biggl[2k_{2} \int_{0}^{u}\|x_{\varepsilon,s}-y_{\varepsilon,s} \|^{2}\,ds+2u\psi _{3}(u) \bigl(1+\|y_{\varepsilon,s} \|^{2}\bigr)\biggr]^{\frac{p}{2}}. \end{aligned}$$
(28)
Using the basic inequality and the Hölder inequality, we obtain
$$\begin{aligned} &E\biggl( \int_{0}^{u} \int_{Z}\bigl|h(s,x_{\varepsilon,s},v)-\bar{h}(y_{\varepsilon ,s},v)\bigr|^{2} \pi(dv)\,ds\biggr)^{\frac{p}{2}} \\ &\quad\le 3^{\frac{p}{2}-1}\biggl\{ E\biggl[2k_{2} \int_{0}^{u}\|x_{\varepsilon,s}-y_{\varepsilon ,s}\|^{2}\,ds \biggr]^{\frac{p}{2}}+\bigl[2u\psi_{3}(u)\bigr]^{\frac{p}{2}}+ \bigl[2u\psi_{3}(u)\bigr]^{\frac{p}{2}}E\|y_{\varepsilon,s}\|^{p}\biggr\} \\ &\quad\le 3^{\frac{p}{2}-1}\biggl\{ (2k_{2})^{\frac{p}{2}}u^{\frac{p}{2}-1} \int_{0}^{u}E \sup_{0\le\sigma\le s}\bigl|x_{\varepsilon}( \sigma)-y_{\varepsilon}(\sigma )\bigr|^{p}\,ds \\ &\qquad{}+\bigl[2u\psi_{3}(u)\bigr]^{\frac{p}{2}}+\bigl[2u \psi_{3}(u)\bigr]^{\frac{p}{2}}E\|y_{\varepsilon,s}\|^{p} \biggr\} . \end{aligned}$$
(29)
Arguing as in the estimation of \(J_{1}\), we derive that
$$\begin{aligned} &E \int_{0}^{u} \int_{Z}\bigl|h(s,x_{\varepsilon,s},v)-\bar{h}(y_{\varepsilon ,s},v)\bigr|^{p} \pi(dv)\,ds \\ &\quad\le (1+k_{2})E \int_{0}^{u}\|x_{\varepsilon,s}-y_{\varepsilon ,s} \|^{p}\,ds +(1+k_{2})E \int_{0}^{u} \int_{Z}\bigl|h(s,y_{\varepsilon,s} ,v)-\bar{h}(y_{\varepsilon,s},v)\bigr|^{p} \pi(dv)\,ds \\ &\quad\le (1+k_{2}) \int_{0}^{u}E \sup_{0\le\sigma\le s}\bigl|x_{\varepsilon}( \sigma)-y_{\varepsilon}(\sigma )\bigr|^{p}\,ds \\ &\qquad{}+(1+k_{2})u\psi_{3}(u) \Bigl(1+E\sup _{0\le t\le u}\|y_{\varepsilon,t}\|^{p}\Bigr). \end{aligned}$$
(30)
On the other hand, by the Hölder inequality,
$$\begin{aligned} L_{2} \le& E\sup_{0\le t\le u}\biggl\{ \biggl( \int_{0}^{t}\,ds\biggr)^{p-1}\biggl( \int_{0}^{t}\biggl| \int _{Z}\bigl[h(s,x_{\varepsilon,s},v)-\bar{h}(y_{\varepsilon,s},v) \bigr]\pi (dv)\biggr|^{p}\,ds\biggr)\biggr\} \\ \le&\bigl[u\pi(Z)\bigr]^{p-1}E \int_{0}^{u} \int_{Z}\bigl|h(s,x_{\varepsilon,s},v)-\bar {h}(y_{\varepsilon,s},v)\bigr|^{p} \pi(dv)\,ds \\ \le&(1+k_{2})\bigl[u\pi(Z)\bigr]^{p-1}\biggl[ \int_{0}^{u}E \sup_{0\le\sigma\le s}\bigl|x_{\varepsilon}( \sigma)-y_{\varepsilon}(\sigma )\bigr|^{p}\,ds \\ &{}+u\psi_{3}(u) \Bigl(1+E\sup_{0\le t\le u} \|y_{\varepsilon,t}\|^{p}\Bigr)\biggr]. \end{aligned}$$
(31)
Hence, substituting (27) and (29)-(31) into (26), we get
$$\begin{aligned} J_{3} \le&6^{p-1}{\varepsilon}^{\frac{p}{2}} \bigl[D_{p}3^{\frac {p}{2}-1}(2k_{2})^{\frac{p}{2}}u^{\frac{p}{2}-1}+(1+k_{2}) \bigl(D_{p}+\bigl(u\pi (Z)\bigr)^{p-1}\bigr)\bigr] \\ &{}\times \int_{0}^{u}E\sup_{0\le\sigma\le s}\bigl|x_{\varepsilon}( \sigma )-y_{\varepsilon}(\sigma)\bigr|^{p}\,ds+ 6^{p-1}{ \varepsilon}^{\frac{p}{2}}\bigl[D_{p}3^{\frac{p}{2}-1}\bigl(2u\psi _{3}(u)\bigr)^{\frac{p}{2}}u^{\frac{p}{2}-1} \\ &{}+(1+k_{2}) \bigl(D_{p}+\bigl(u\pi(Z) \bigr)^{p-1}\bigr)u\psi_{3}(u)\bigr]\Bigl(1+E\sup _{0\le t\le u} \|y_{\varepsilon,t}\|^{p}\Bigr). \end{aligned}$$
(32)
Combining (24), (25), and (32),
$$\begin{aligned} &E\sup_{0\le t\le u}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)-\bigl[D(x_{\varepsilon,t})-D(y_{\varepsilon,t})\bigr]\bigr|^{p} \\ &\quad\le C_{1}\varepsilon E \int_{0}^{u} \sup_{0\le\sigma\le t}\bigl|x_{\varepsilon}( \sigma)-y_{\varepsilon}(\sigma)\bigr|^{p}\,dt +C_{2}\varepsilon u \Bigl(1+E\sup_{-\tau\le t\le u}\bigl|y_{\varepsilon}(t)\bigr|^{p}\Bigr), \end{aligned}$$
(33)
where \(C_{1}=3^{p-1}(1+\sqrt{k_{1}})^{p}(\varepsilon ^{p-1}u^{p-1}+{\varepsilon}^{\frac{p}{2}-1}C_{p}u^{\frac{p}{2}-1})+ 6^{p-1}{\varepsilon}^{\frac{p}{2}-1}[D_{p}3^{\frac{p}{2}-1}(2k_{2})^{\frac {p}{2}}u^{\frac{p}{2}-1}+(1+k_{2})(D_{p}+(u\pi(Z))^{p-1})]\), \(C_{2}=3^{p-1}(1+\sqrt{k_{1}})^{p}(\varepsilon^{p-1}u^{p-1}\psi _{1}(u)+{\varepsilon}^{\frac{p}{2}-1} C_{p}u^{\frac{p}{2}-1}\psi _{2}(u))+6^{p-1}{\varepsilon}^{\frac{p}{2}-1} [D_{p}3^{\frac {p}{2}-1}(2u\psi_{3}(u))^{\frac{p}{2}}u^{\frac{p}{2}-2}+ (1+k_{2})(D_{p}+(u\pi(Z))^{p-1})\psi_{3}(u)]\). Hence Assumption 2.4 and Lemma 3.5 imply that
$$\begin{aligned} &E\sup_{0\le t\le u}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)-\bigl[D(x_{\varepsilon,t})-D(y_{\varepsilon,t})\bigr]\bigr|^{p} \\ &\quad\le C_{1}\varepsilon \int_{0}^{u}E \sup_{0\le\sigma\le t}\bigl|x_{\varepsilon}( \sigma)-y_{\varepsilon}(\sigma )\bigr|^{p}\,dt \\ &\qquad{}+C_{2}\varepsilon u(1+C). \end{aligned}$$
(34)
Inserting (34) into (22),
$$\begin{aligned} E\sup_{0\le t\le u}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p} \le &\frac{C_{1}\varepsilon}{(1-k_{0})^{p}} \int_{0}^{u}E \sup_{0\le\sigma\le t}\bigl|x_{\varepsilon}( \sigma)-y_{\varepsilon}(\sigma )\bigr|^{p}\,dt \\ &{}+\frac{C_{2}\varepsilon u(1+C)}{(1-k_{0})^{p}}. \end{aligned}$$
Finally, by the Gronwall inequality, we have
$$\begin{aligned} E\sup_{0\le t\le u}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p} \le& \frac{C_{2}\varepsilon u(1+C)}{(1-k_{0})^{p}}e^{\frac{C_{1}\varepsilon u}{(1-k_{0})^{p}}}. \end{aligned}$$
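For the reader's convenience, the Gronwall step can be made explicit (a routine reformulation): writing
$$\varphi(u):=E\sup_{0\le t\le u}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p},\qquad A(u):=\frac{C_{2}\varepsilon u(1+C)}{(1-k_{0})^{p}},\qquad B:=\frac{C_{1}\varepsilon}{(1-k_{0})^{p}}, $$
the preceding inequality reads \(\varphi(u)\le A(u)+B\int_{0}^{u}\varphi(t)\,dt\); since \(A(\cdot)\) is nondecreasing, the Gronwall inequality gives \(\varphi(u)\le A(u)e^{Bu}\), which is exactly the bound displayed above.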
Choose \(\beta\in(0,1)\) and \(L>0\) such that \([0,L\varepsilon^{-\beta}]\subseteq[0,T]\); then
$$E\sup_{t\in[0,L\varepsilon^{-\beta}]}\bigl|x_{\varepsilon}(t)-y_{\varepsilon }(t)\bigr|^{p} \le cL\varepsilon^{1-\beta}, $$
where \(c=\frac{C_{2}(1+C)}{(1-k_{0})^{p}}e^{\frac{C_{1}}{(1-k_{0})^{p}}L\varepsilon ^{1-\beta}}\). Consequently, given any number \(\delta_{1}>0\), we can choose \(\varepsilon_{1}\in(0,\varepsilon_{0}]\) such that for each \(\varepsilon\in (0,\varepsilon_{1}]\) and for \(t\in[0,L\varepsilon^{-\beta}]\),
$$E\sup_{t\in[0,L\varepsilon^{-\beta}]}\bigl|x_{\varepsilon}(t)-y_{\varepsilon }(t)\bigr|^{p} \le\delta_{1}. $$
The proof is complete. □

Proof of Theorem 2.4

The key technique for proving this theorem has already been presented in the proof of Theorem 2.2, so here we only highlight the parts that need to be modified. By Assumption 2.5, the estimate for \(J_{1}\) in (23) becomes
$$\begin{aligned} J_{1} \le&3^{p-1}\varepsilon^{p}u^{p-1}2^{p-1}E \int_{0}^{u}\bigl(k^{p}\bigl(\|x_{\varepsilon ,s}-y_{\varepsilon,s}\|\bigr) +\bigl|f(s,y_{\varepsilon,s})- \bar{f}(y_{\varepsilon,s})\bigr|^{p} \bigr)\,ds. \end{aligned}$$
In fact, since the function \(k(\cdot)\) is concave and increasing, there must exist a positive number \(c_{p}\) such that
$$k^{p}\bigl(|x|\bigr)\le c_{p}\bigl(1+|x|^{p}\bigr), \quad \mbox{for all } p\ge2. $$
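This elementary bound can be justified as follows (assuming, as is standard for such moduli of continuity, that \(k(0)=0\) and \(k(1)>0\)): concavity with \(k(0)\ge0\) gives \(k(x)\le k(1)x\) for \(x\ge1\), while monotonicity gives \(k(x)\le k(1)\) for \(0\le x\le1\), so
$$k\bigl(|x|\bigr)\le k(1)\bigl(1+|x|\bigr)\quad\Longrightarrow\quad k^{p}\bigl(|x|\bigr)\le k^{p}(1) \bigl(1+|x|\bigr)^{p}\le2^{p-1}k^{p}(1)\bigl(1+|x|^{p}\bigr), $$
and one may take \(c_{p}=2^{p-1}k^{p}(1)\).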
Hence,
$$\begin{aligned} J_{1} \le&c_{p}C_{3} \int_{0}^{u}\Bigl(1+E\sup_{0\le\sigma\le s}\bigl|x_{\varepsilon }(\sigma)-y_{\varepsilon}(\sigma)\bigr|^{p} \Bigr)\,ds+c_{p}C_{3}u \\ &{}+ uC_{3}\psi_{1}(u) \Bigl(1+E\sup_{0\le t\le u} \|y_{\varepsilon,t}\|^{p}\Bigr), \end{aligned}$$
(35)
where \(C_{3}=3^{p-1}\varepsilon^{p}u^{p-1}2^{p}\). The terms \(J_{2}\) and \(J_{3}\) can be estimated in the same way as \(J_{1}\). Finally, all of the required assertions follow in the same way as in the proof of Theorem 2.2. The proof is complete. □

5 Examples

Example 5.1

Consider the following neutral stochastic differential delay equations:
$$\begin{aligned} \,d\bigl[x(t)-\tilde{D}\bigl(x(t-\tau)\bigr)\bigr] =&\tilde{f} \bigl(t,x(t),x(t-\tau)\bigr)\,dt+\tilde {g}\bigl(t,x(t),x(t-\tau)\bigr)\,dw(t) \\ &{}+ \int_{Z}\tilde{h}\bigl(t,x(t),x(t-\tau),v\bigr)N(dt,dv), \end{aligned}$$
(36)
where \(\tau>0\) is a constant delay and the coefficients of equation (36) satisfy Assumptions 2.1-2.3. Obviously, if we define
$$D(\varphi)=\tilde{D}\bigl(\varphi(-\tau)\bigr), \qquad f(t,\varphi)=\tilde {f} \bigl(t,\varphi(0),\varphi(-\tau)\bigr), $$
and
$$g(t,\varphi)=\tilde{g}\bigl(t,\varphi(0),\varphi(-\tau)\bigr), \qquad h(t, \varphi ,v)=\tilde{h}\bigl(t,\varphi(0),\varphi(-\tau),v\bigr), $$
then equation (36) becomes equation (1). Consequently, equation (36) has a unique solution in the \(L^{p}\) sense. Meanwhile, similarly to (6) and (7), we obtain the standard form of equation (36)
$$\begin{aligned} x_{\varepsilon}(t) =&x(0)+D\bigl(x_{\varepsilon}(t-\tau)\bigr)-D \bigl(x(-\tau)\bigr)+ \int _{0}^{t}f\bigl(s,x_{\varepsilon}(s),x_{\varepsilon}(s- \tau)\bigr)\,ds \\ &{}+\sqrt{\varepsilon} \int_{0}^{t}g\bigl(s,x_{\varepsilon}(s),x_{\varepsilon}(s- \tau )\bigr)\,dw(s) \\ &{}+\sqrt{\varepsilon} \int_{0}^{t} \int_{Z}h \bigl(s,x_{\varepsilon}(s),x_{\varepsilon}(s- \tau),v\bigr)N(ds,dv), \end{aligned}$$
(37)
and the averaging form of equation (36)
$$\begin{aligned} y_{\varepsilon}(t) =&x(0)+D\bigl(y_{\varepsilon}(t-\tau)\bigr)-D \bigl(y(-\tau)\bigr)+ \int _{0}^{t}\bar{f}\bigl(y_{\varepsilon}(s),y_{\varepsilon}(s- \tau)\bigr)\,ds \\ &{}+\sqrt{\varepsilon} \int_{0}^{t}\bar{g}\bigl(y_{\varepsilon}(s),y_{\varepsilon}(s-\tau)\bigr)\,dw(s) \\ &{}+\sqrt{\varepsilon} \int_{0}^{t} \int_{Z}\bar{h}\bigl(y_{\varepsilon}(s),y_{\varepsilon}(s- \tau),v\bigr)N(ds,dv). \end{aligned}$$
(38)
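The passage from the point-delay coefficients \(\tilde{f},\tilde{g},\tilde{h}\) to the functional ones defined above can be sketched in code. This is purely illustrative: the helper `make_functional` is invented for this sketch, and segments are modelled as plain callables on \([-\tau,0]\).

```python
def make_functional(point_coeff, tau):
    # Wrap a point-delay coefficient f~(t, x(t), x(t - tau)) into a
    # functional coefficient f(t, phi) acting on the segment
    # phi : [-tau, 0] -> R, exactly as in the definitions above.
    return lambda t, phi: point_coeff(t, phi(0.0), phi(-tau))
```

For instance, with \(\tilde{f}(t,x,y)=x+2y\), the wrapped coefficient evaluates the segment at \(0\) and \(-\tau\) and combines the two values.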
Similarly to the proof of Theorem 2.2 and Corollary 2.1, we can show that the solution of the averaged equation (38) converges to that of the standard equation (37) in the \(p\)th moment and in probability.

Example 5.2

Let \(N(t)\) be a scalar Poisson process. Consider the neutral SFDE with Poisson process of the form
$$\begin{aligned} d\bigl[x_{\varepsilon}(t)-D(x_{\varepsilon,t})\bigr]=\varepsilon f(t,x_{\varepsilon ,t})\,dt+\sqrt{\varepsilon}h(t,x_{\varepsilon,t})\,dN(t), \end{aligned}$$
(39)
with initial data \(x_{\varepsilon}(t)=\xi(t)\) for \(-\tau\le t\le 0\). Here
$$D(x)=0.1x, \qquad f(t,x)=x\cos^{2}t, $$
and
$$\begin{aligned} h(t,x)=\rho(x)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} 0, & \mbox{if } x=0, \\ cx(\log x^{-1})^{\alpha}, & \mbox{if } 0< x\le\delta, \\ c\delta(\log\delta^{-1})^{\alpha}, &\mbox{if } x> \delta, \end{array}\displaystyle \right . \end{aligned}$$
where \(\alpha\le\frac{1}{2}\), \(c>0\), and \(\delta\in(0,1)\) is sufficiently small. Let
$$\bar{f}(y_{\varepsilon,t})=\frac{1}{\pi} \int_{0}^{\pi}f(t,y_{\varepsilon ,t})\,dt= \frac{1}{2}y_{\varepsilon,t} $$
and
$$\bar{h}(y_{\varepsilon,t})= \frac{1}{\pi} \int_{0}^{\pi}h(t,y_{\varepsilon ,t})\,dt= \rho(y_{\varepsilon,t}). $$
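The first time average can be cross-checked numerically; a minimal sketch using the composite midpoint rule (the step count `n` is an arbitrary choice):

```python
import math

# Midpoint-rule approximation of (1/pi) * integral_0^pi cos^2(t) dt,
# the averaged drift coefficient computed above.
n = 100000
h = math.pi / n
avg = sum(math.cos((k + 0.5) * h) ** 2 for k in range(n)) * h / math.pi
# avg agrees with the exact value 1/2 to within quadrature error.
```

The value agrees with \(\frac{1}{2}\), matching \(\bar{f}\); the second average is immediate since \(h(t,x)=\rho(x)\) does not depend on \(t\).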
Hence, we have the corresponding averaged equation
$$\begin{aligned} d\bigl[y_{\varepsilon}(t)-0.1y_{\varepsilon,t}\bigr]= \frac{1}{2}\varepsilon y_{\varepsilon,t}\,dt+\sqrt{\varepsilon} \rho(y_{\varepsilon,t})\,dN(t). \end{aligned}$$
(40)
Clearly, the coefficient \(\rho(\cdot)\) does not satisfy the Lipschitz condition. It is a concave, nondecreasing, continuous function on \([0,\infty)\) with \(\rho(0)=0\) and
$$\begin{aligned}& \begin{aligned}[b] \int_{0^{+}}\frac{x}{\rho^{2}(x)}\,dx&=-\frac{1}{c^{2}} \int_{0^{+}}\frac {1}{x\log x}\,dx=-\frac{1}{c^{2}} \int_{0^{+}}\frac{1}{\log x}d(\log x) \\ &=-\frac{1}{c^{2}}\log|\log x|\Big|_{0^{+}}=\infty, \quad \mbox{if } \alpha =\frac{1}{2}, \end{aligned} \\& \begin{aligned}[b] \int_{0^{+}}\frac{x}{\rho^{2}(x)}\,dx&=\frac{1}{c^{2}} \int_{0^{+}}\frac {1}{x(-\log x)^{2\alpha}}\,dx=-\frac{1}{c^{2}} \int_{0^{+}}\frac {1}{(-\log x)^{2\alpha}}d(-\log x) \\ &=-\frac{1}{c^{2}}\frac{1}{-2\alpha+1}(-\log x)^{-2\alpha+1}\Big|_{0^{+}}= \infty, \quad \mbox{if } \alpha< \frac{1}{2}. \end{aligned} \end{aligned}$$
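This divergence can also be checked numerically. The sketch below is illustrative only: the values \(c=0.5\), \(\delta=0.1\) and the helper `tail_integral` are assumptions made for the check, and the substitution \(u=\log(1/x)\) turns the integrand \(x/\rho^{2}(x)\) into \(1/(c^{2}u^{2\alpha})\).

```python
import math

def tail_integral(a, delta=0.1, c=0.5, alpha=0.5, n=200000):
    # Midpoint-rule value of integral_a^delta x / rho(x)^2 dx, computed
    # after substituting u = log(1/x), which maps [a, delta] to
    # [log(1/delta), log(1/a)] and the integrand to 1/(c^2 * u^(2*alpha)).
    u0, u1 = math.log(1.0 / delta), math.log(1.0 / a)
    h = (u1 - u0) / n
    return sum(h / (c * c * (u0 + (k + 0.5) * h) ** (2 * alpha))
               for k in range(n))
```

For \(\alpha=\frac{1}{2}\) the partial integrals grow by the same amount, \(\frac{1}{c^{2}}\log2\), each time \(1/a\) is squared, consistent with the \(\log|\log x|\) antiderivative computed above.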
Therefore, Assumption 2.5 is satisfied. Consequently, by Theorem 2.4 and Corollary 2.2, the solution of the averaged equation (40) converges to that of the standard equation (39) in the \(L^{2}\) sense and in probability.
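To illustrate this convergence numerically, one can discretize (39) and (40) with an explicit Euler scheme driven by one common Poisson path. This is only a sketch: the delay interpretation \(D(x_{\varepsilon,t})=0.1x_{\varepsilon}(t-\tau)\), the extension of \(\rho\) to negative arguments via \(|x|\), the parameter values, the constant initial segment, and the helper `simulate` are all assumptions made for the illustration.

```python
import math
import random

def rho(x, c=0.5, alpha=0.5, delta=0.1):
    # Jump coefficient from Example 5.2, applied to |x| (an assumption
    # made so the simulation is defined for negative states too).
    ax = abs(x)
    if ax == 0.0:
        return 0.0
    if ax <= delta:
        return c * ax * (math.log(1.0 / ax)) ** alpha
    return c * delta * (math.log(1.0 / delta)) ** alpha

def simulate(eps=0.01, tau=0.1, T=1.0, dt=1e-3, rate=1.0, seed=42):
    # Explicit Euler scheme for (39) and (40), driven by one common
    # Poisson path. The neutral structure is resolved through
    # z(t) = x(t) - 0.1 * x(t - tau): z is stepped forward, then x is
    # recovered from z and the delayed value of x.
    rng = random.Random(seed)
    n = int(round(T / dt))
    lag = int(round(tau / dt))
    x = [0.05] * (lag + 1)       # constant initial segment on [-tau, 0]
    y = [0.05] * (lag + 1)
    zx = x[-1] - 0.1 * x[0]      # z(0) for the standard equation (39)
    zy = y[-1] - 0.1 * y[0]      # z(0) for the averaged equation (40)
    for i in range(n):
        t = i * dt
        dN = 1 if rng.random() < rate * dt else 0  # Poisson increment
        xc, yc = x[-1], y[-1]
        zx += eps * (math.cos(t) ** 2) * xc * dt + math.sqrt(eps) * rho(xc) * dN
        zy += eps * 0.5 * yc * dt + math.sqrt(eps) * rho(yc) * dN
        x.append(zx + 0.1 * x[-lag])  # x(t+dt) = z(t+dt) + 0.1*x(t+dt-tau)
        y.append(zy + 0.1 * y[-lag])
    return x, y
```

With these (arbitrary) parameters the two paths remain visibly close, consistent with the averaging principle for small \(\varepsilon\).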

Declarations

Acknowledgements

The authors would like to thank the referees for their valuable comments and suggestions. The authors would also like to thank the National Natural Science Foundation of China under NSFC grant (11401261) and the NSF of Higher Education Institutions of Jiangsu Province (13KJB110005) for their financial support.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Mathematics and Information Technology, Jiangsu Second Normal University
(2)
Department of Mathematics and Statistics, University of Strathclyde

References

  1. Krylov, NM, Bogolyubov, NN: Les propriétés ergodiques des suites des probabilités en chaîne. C. R. Math. Acad. Sci. 204, 1454-1546 (1937)
  2. Besjes, JG: On the asymptotic methods for non-linear differential equations. J. Méc. 8, 357-373 (1969)
  3. Volosov, VM: Averaging in systems of ordinary differential equations. Russ. Math. Surv. 17, 1-126 (1962)
  4. Matthies, K: Time-averaging under fast periodic forcing of parabolic partial differential equations: exponential estimates. J. Differ. Equ. 174, 133-180 (2001)
  5. Mierczynski, J, Shen, W: Time averaging for non-autonomous random linear parabolic equations. Discrete Contin. Dyn. Syst., Ser. B 9, 661-699 (2008)
  6. Khasminskii, RZ: On the principle of averaging the Ito stochastic differential equations. Kibernetika 4, 260-279 (1968)
  7. Freidlin, M, Wentzell, A: Random Perturbations of Dynamical Systems. Springer, New York (1998)
  8. Golec, J, Ladde, G: Averaging principle and systems of singularly perturbed stochastic differential equations. J. Math. Phys. 31, 1116-1123 (1990)
  9. Veretennikov, AY: On the averaging principle for systems of stochastic differential equations. Math. USSR Sb. 69, 271-284 (1991)
  10. Khasminskii, RZ, Yin, G: On averaging principles: an asymptotic expansion approach. SIAM J. Math. Anal. 35, 1534-1560 (2004)
  11. Givon, D, Kevrekidis, IG, Kupferman, R: Strong convergence of projective integration schemes for singularly perturbed stochastic differential systems. Commun. Math. Sci. 4, 707-729 (2006)
  12. Stoyanov, IM, Bainov, DD: The averaging method for a class of stochastic differential equations. Ukr. Math. J. 26, 186-194 (1974)
  13. Kolomiets, VG, Melnikov, AI: Averaging of stochastic systems of integral-differential equations with Poisson noise. Ukr. Math. J. 43, 242-246 (1991)
  14. Givon, D: Strong convergence rate for two-time-scale jump-diffusion stochastic differential systems. SIAM J. Multiscale Model. Simul. 6, 577-594 (2007)
  15. Xu, Y, Duan, JQ, Xu, W: An averaging principle for stochastic dynamical systems with Lévy noise. Physica D 240, 1395-1401 (2011)
  16. Yin, G, Ramachandran, KM: A differential delay equation with wideband noise perturbation. Stoch. Process. Appl. 35, 231-249 (1990)
  17. Tan, L, Lei, D: The averaging method for stochastic differential delay equations under non-Lipschitz conditions. Adv. Differ. Equ. 2013, 38 (2013)
  18. Xu, Y, Pei, B, Li, Y: Approximation properties for solutions to non-Lipschitz stochastic differential equations with Lévy noise. Math. Methods Appl. Sci. 38, 2120-2131 (2015)
  19. Mao, W, You, S, Wu, X, Mao, X: On the averaging principle for stochastic delay differential equations with jumps. Adv. Differ. Equ. 2015, 70 (2015)
  20. Bao, J, Yuan, C: Large deviations for neutral functional SDEs with jumps. Stoch. Int. J. Probab. Stoch. Process. 87, 48-70 (2015)
  21. Jankovic, S, Randjelovic, J, Jovanovic, M: Razumikhin-type exponential stability criteria of neutral stochastic functional differential equations. J. Math. Anal. Appl. 355, 811-820 (2009)
  22. Mao, X: Exponential stability in mean square of neutral stochastic differential functional equations. Syst. Control Lett. 26, 245-251 (1995)
  23. Mao, X: Razumikhin-type theorems on exponential stability of neutral stochastic differential equations. SIAM J. Math. Anal. 28, 389-401 (1997)
  24. Wu, F, Mao, X: Numerical solutions of neutral stochastic functional differential equations. SIAM J. Numer. Anal. 46, 1821-1841 (2008)
  25. Zong, X, Wu, F, Huang, C: Exponential mean square stability of the theta approximations for neutral stochastic differential delay equations. J. Comput. Appl. Math. 286, 172-185 (2015)
  26. Applebaum, D: Lévy Processes and Stochastic Calculus. Cambridge University Press, Cambridge (2009)
  27. Gikhman, II, Skorokhod, AV: Stochastic Differential Equations. Springer, Berlin (1972)
  28. Kunita, H: Stochastic differential equations based on Lévy processes and stochastic flows of diffeomorphisms. In: Real and Stochastic Analysis. New Perspectives, pp. 305-373. Birkhäuser, Basel (2004)
  29. Mao, W, Zhu, Q, Mao, X: Existence, uniqueness and almost surely asymptotic estimations of the solutions to neutral stochastic functional differential equations driven by pure jumps. Appl. Math. Comput. 254, 252-265 (2015)
  30. Bao, J, Hou, Z: Existence of mild solutions to stochastic neutral partial functional differential equations with non-Lipschitz coefficients. Comput. Math. Appl. 59, 207-214 (2010)
  31. Ren, Y, Xia, N: Existence, uniqueness and stability of the solutions to neutral stochastic functional differential equations with infinite delay. Appl. Math. Comput. 210, 72-79 (2009)
  32. Wei, F, Cai, Y: Existence, uniqueness and stability of the solution to neutral stochastic functional differential equations with infinite delay under non-Lipschitz conditions. Adv. Differ. Equ. 2013, 151 (2013)
  33. Mao, X: Stochastic Differential Equations and Applications. Ellis Horwood, Chichester (2008)

Copyright

© Mao and Mao 2016