
Theory and Modern Applications

Neutral stochastic functional differential equations with Lévy jumps under the local Lipschitz condition

Abstract

In this paper, a general class of neutral stochastic functional differential equations with infinite delay and Lévy jumps (NSFDEwLJs) is studied. We investigate the existence and uniqueness of solutions to NSFDEwLJs in the phase space \(C_{g}\) under local Carathéodory type conditions. Meanwhile, we also give exponential estimates and almost sure asymptotic estimates of solutions to NSFDEwLJs.

1 Introduction

Many dynamical systems depend not only on present and past states but also involve derivatives with delays as well as the function itself. Deterministic neutral functional differential equations (NFDEs) are often used to describe such systems. The theory of NFDEs has been studied by many authors, e.g., Hale [1, 2]. Motivated by chemical engineering systems as well as the theory of aeroelasticity, Kolmanovskii and Myshkis [3, 4] introduced neutral stochastic functional differential equations (NSFDEs) and gave their applications in chemical engineering and aeroelasticity. Since then, the theory of NSFDEs has attracted more and more attention. For example, the existence, uniqueness, and stability of solutions to NSFDEs can be found in [5–15]. Various efficient computational methods for NSFDEs have been obtained, and their convergence and stability have been studied by many authors; one can see Jiang [16], Liu [17], Mo [18], Wu [19], Wang [20], Yu [21], Zhou [22], Zong [23, 24].

However, the global Lipschitz condition imposed in [5, 7, 8, 10, 11, 13, 15, 25, 26] seems to be rather strong when one considers various applications in the real world. For instance, Cox [27] proposed the Cox-Ingersoll-Ross process for describing short-term interest rates

$$ dx(t)=k\bigl[\lambda-x(t)\bigr]\,dt+\theta\sqrt{x(t)}\,dw(t), \quad x(0)=x_{0}, $$
(1.1)

where \(k,\lambda\ge0\), \(\theta>0\), and \(x_{0}>0\), and \(w(t)\) is a one-dimensional Brownian motion. It is well known that the diffusion coefficient of equation (1.1) is not globally Lipschitz. In this case, it is necessary to find other conditions to replace the Lipschitz condition. In the past few decades, much attention has been paid to the existence and uniqueness of solutions to stochastic differential systems under weaker conditions (see [28–30]). Compared with the global Lipschitz condition, the non-Lipschitz condition is a much weaker sufficient condition with a wider range of applications. Recently, this condition has been used by many scholars to study the existence and uniqueness of solutions to NSFDEs. For example, Ren [31, 32] extended the result of [15] and derived the existence and uniqueness of the solution to NSFDEs in the phase space \(BC((-\infty, 0];R^{d})\) under the non-Lipschitz conditions of [30]; Bao [33] established the existence and uniqueness theorem of mild solutions to a class of stochastic neutral partial functional differential equations under non-Lipschitz conditions; Boufoussi [34] and Luo [35] studied the existence and uniqueness of mild solutions to neutral stochastic partial differential equations with jumps and non-Lipschitz coefficients, respectively. In addition, Ren [36] and Wei [37] extended the phase space \(BC((-\infty, 0];R^{d})\) of [31, 32] to the phase spaces \(\mathfrak{B}\) and \(C_{g}\), respectively, and obtained existence and uniqueness theorems for solutions to NSFDEs under non-Lipschitz conditions.
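
To see this failure of the Lipschitz condition explicitly (a standard elementary observation, added here for illustration), note that the square root is only Hölder continuous of order \(\frac{1}{2}\):

$$ \bigl\vert \sqrt{x}-\sqrt{y}\bigr\vert =\frac{|x-y|}{\sqrt{x}+\sqrt{y}}\le\sqrt{|x-y|},\quad x,y\ge0,\ x+y>0, $$

while the difference quotient \(|\sqrt{x}-\sqrt{0}|/|x-0|=x^{-1/2}\) is unbounded as \(x\downarrow0\), so no global Lipschitz constant for the diffusion coefficient \(\theta\sqrt{x}\) can exist.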

Motivated by the aforementioned work, in this paper we aim to study the existence and uniqueness of solutions to NSFDEs with infinite delay and Lévy jumps in the phase space \(C_{g}\), which was proposed in [38]. Meanwhile, we establish the exponential estimates and almost sure asymptotic estimates of solutions to NSFDEs with infinite delay and Lévy jumps under non-linear growth conditions. Unlike the condition imposed by Wei [37], we prove that equation (2.1) has a unique solution under some Carathéodory type conditions, and we extend the existence results of [15, 31, 32] to the phase space \(C_{g}\). To the best of our knowledge, under non-linear growth conditions, the exponential estimates and almost sure asymptotic estimates of solutions to NSFDEs with infinite delay and Lévy jumps have scarcely been investigated.

The rest of this paper is organized as follows. In Section 2, we introduce some basic preliminaries and assumptions on equation (2.1). In Section 3, we show that equation (2.1) has a unique solution on \([0,T]\) under the Carathéodory conditions. In Section 4, we prove that the pth moment of the solution grows at most exponentially with exponent \(\frac{M_{2}}{(1-k_{0})^{p}}\) and show that the exponential estimate implies an almost sure asymptotic estimate. Finally, we give two examples to illustrate the theory in Section 5.

2 Preliminaries and some assumptions

Throughout this paper, unless otherwise specified, we use the following notation. Let \(|x|\) be the Euclidean norm of a vector \(x\in R^{n}\). If A is a matrix, its trace norm is denoted by \(|A|=\sqrt{\operatorname{trace} (A^{\top}A)}\). Let \((\Omega , {\mathcal {F}}, \{{\mathcal {F}}_{t}\}_{t\ge0}, P)\) be a complete probability space with a filtration \(\{{\mathcal {F}}_{t}\}_{t\ge 0}\) satisfying the usual conditions (i.e. it is increasing and right continuous while \({\mathcal {F}}_{0}\) contains all P-null sets). Let \(w(t)= (w_{1}(t), \ldots, w_{m}(t))^{T}\) be an m-dimensional Brownian motion defined on the probability space \((\Omega , {\mathcal {F}}, P)\).

Let \(\{\bar{p}=\bar{p}(t), t\ge0\}\) be a stationary \(\mathcal {F}_{t}\)-adapted and \(R^{n}\)-valued Poisson point process. Then, for \(A\in\mathcal{B}(R^{n}-\{0\})\) with \(0\notin\bar{A}\), where \(\mathcal{B}(R^{n}-\{0\})\) denotes the Borel σ-field on \(R^{n}-\{0\}\) and \(\bar{A}\) denotes the closure of A, we define the Poisson counting measure N associated with \(\bar{p}\) by

$$N\bigl((0,t]\times A\bigr):=\#\bigl\{ {0}< s\le t,{\bar{p}}(s)\in A\bigr\} =\sum _{0< s\le t}I_{A}\bigl({\bar{p}}(s)\bigr), $$

where # denotes the cardinality of the set \(\{\cdot\}\). For simplicity, we denote \(N(t,A):=N(({0},t]\times A)\). It is well known that there exists a σ-finite measure π such that

$$E\bigl[N(t,A)\bigr]=\pi(A)t, \qquad P\bigl(N(t,A)=n\bigr)=\frac{\exp(-t\pi(A))(\pi(A)t)^{n}}{n!}. $$

This measure π is called the Lévy measure. Moreover, by Doob-Meyer’s decomposition theorem, there exists a unique \(\{\mathcal{F}_{t}\}\)-adapted martingale \(\tilde{N}(t,A)\) and a unique \(\{\mathcal{F}_{t}\}\)-adapted natural increasing process \(\hat{N}(t,A)\) such that

$$N(t,A)=\tilde{N}(t,A)+\hat{N}(t,A),\quad t>0. $$

Here \(\tilde{N}(t,A)\) is called the compensated Poisson random measure (compensated Lévy jumps) and \(\hat {N}(t,A)=\pi(A)t\) is called the compensator.
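
The identity \(E[N(t,A)]=\pi(A)t\) can be checked empirically. The following sketch is our own illustration under an assumed finite Lévy measure (π taken as λ times a standard normal law, with \(A=(0.5,\infty)\), both arbitrary choices): it simulates the point process \(\bar{p}\), counts the marks falling in A, and compares the sample mean of \(N(t,A)\) with the compensator \(\pi(A)t\).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Assumed (illustrative) finite Levy measure: pi(dv) = lam * N(0,1)(dv),
# so the point process p_bar has intensity lam and i.i.d. standard normal marks.
lam = 2.0          # total mass pi(R^n - {0})
t = 3.0            # time horizon
a_left = 0.5       # we count jumps landing in A = (0.5, +inf)

def sample_N_tA():
    """One sample of N((0, t] x A) = #{0 < s <= t : p_bar(s) in A}."""
    n_points = rng.poisson(lam * t)          # number of points of p_bar on (0, t]
    marks = rng.standard_normal(n_points)    # marks distributed as pi / lam
    return int(np.count_nonzero(marks > a_left))

samples = np.array([sample_N_tA() for _ in range(20000)])
pi_A = lam * norm.sf(a_left)                 # pi(A) for the assumed measure

print("empirical mean of N(t,A):", samples.mean())
print("compensator pi(A)*t     :", pi_A * t)
```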

Let \(C=C((-\infty, 0],R^{n})\) denote the family of continuous functions from \((-\infty,0]\) to \(R^{n}\), and define

$$C_{g}=\biggl\{ \phi\in C:\frac{\phi}{g} \mbox{ is uniformly continuous on } (-\infty,0] \mbox{ and } \sup_{s\le0} \frac{|\phi (s)|}{g(s)}< \infty\biggr\} , $$

where \(g:(-\infty,0]\to[1,\infty)\) is a continuous and non-increasing function such that \(g(-\infty)=+\infty\), \(g(0)=1\). For any \(\phi\in C_{g}\), define the norm \(|\phi|_{g}=\sup_{s\le0}\frac {|\phi(s)|}{g(s)}\); then the space \((C_{g},|\cdot|_{g})\) is a Banach space, which was proved by Arion [38].
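
As a concrete illustration of the weighted norm (our own example, not taken from [38]), one may take the admissible weight \(g(s)=e^{-s}\), which is continuous, non-increasing on \((-\infty,0]\), and satisfies \(g(0)=1\), \(g(-\infty)=+\infty\); the sketch below approximates \(|\phi|_{g}\) for the history \(\phi(s)=\sin s\) on a truncated grid.

```python
import numpy as np

def g(s):
    """Illustrative weight: continuous, non-increasing on (-inf, 0],
    with g(0) = 1 and g(s) -> +inf as s -> -inf."""
    return np.exp(-s)

def phi(s):
    """An illustrative bounded continuous history function."""
    return np.sin(s)

# Approximate |phi|_g = sup_{s <= 0} |phi(s)| / g(s) on a truncated grid.
# Because g blows up in the far past, truncating at s = -50 is harmless here.
s = np.linspace(-50.0, 0.0, 200001)
print("approximate |phi|_g =", np.max(np.abs(phi(s)) / g(s)))
```

Since \(\phi/g\) tends to zero in the far past for this choice, \(\phi\) indeed belongs to \(C_{g}\).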

Let \(p\ge2\) and let \(\mathcal{M}^{p}([a,b];R^{n})\) denote the family of \(\mathcal {F}_{t}\)-measurable, \(R^{n}\)-valued processes \(f(t)=\{f(t,\omega)\}\), \(t\in [a,b]\), such that \(E\int_{a}^{b}|f(t)|^{p}\,dt<\infty\). In this paper, we assume that the Poisson random measure N is independent of the Brownian motion w. For \(Z\in\mathcal{B}(R^{n}-\{0\})\), consider the following NSFDEs with Lévy jumps:

$$\begin{aligned}& d\bigl[x(t)-D(t,x_{t})\bigr] \\& \quad =f(t,x_{t})\,dt+g(t,x_{t})\,dw(t)+ \int _{Z}h(t,x_{t-},v)N(dt,dv),\quad 0\le t\le T, \end{aligned}$$
(2.1)

where \(x_{t}=\{x(t+\theta):-\infty< \theta\le0\}\) and \(x_{t-}\) denotes the left limit of \(x_{t}\). \(D:[0,T]\times C_{g}\to R^{n}\), \(f:[0,T]\times C_{g}\to R^{n}\), \(g:[0,T]\times C_{g} \to R^{n\times m}\), and \(h : [0,T]\times C_{g} \times Z\to R^{n}\) are all Borel-measurable functions. The initial function \(x_{0}\) is given as

$$x_{0}=\xi=\bigl\{ \xi(\theta):-\infty< \theta\le0\bigr\} . $$

That is, ξ is an \(\mathcal{F}_{0}\)-measurable \(C_{g}\)-valued random variable such that \(\xi\in\mathcal{M}^{2}((-\infty,0];R^{n})\).

In order to obtain the main results, we give the following conditions.

Assumption 2.1

Let \(D(t,0)=0\) and for any \(\varphi,\psi\in C_{g}\), there exists a constant \(k_{0}\in(0,1)\) such that

$$ \bigl\vert D(t,\varphi)-D(t,\psi)\bigr\vert \le k_{0}|\varphi-\psi|_{g}. $$
(2.2)

Here, \(D(t,\varphi)\) is continuous in t for each fixed \(\varphi\in C_{g}\).

Assumption 2.2

For any \(\varphi,\psi\in C_{g}\) and \(t\in[0,T]\), there exists a function \(k(t,u): R_{+}\times R_{+}\to R_{+}\) such that

$$\begin{aligned}& \bigl\vert f(t,\varphi) - f(t,\psi)\bigr\vert ^{2} \vee \bigl\vert g(t,\varphi) - g(t,\psi )\bigr\vert ^{2}\vee \int_{Z} \bigl\vert h(t,\varphi,v) - h(t,\psi,v)\bigr\vert ^{2}\pi(dv) \\& \quad \le k\bigl(t,\vert \varphi - \psi \vert _{g}^{2}\bigr). \end{aligned}$$
(2.3)

Here \(k(t,u)\) is locally integrable in t for each fixed \(u\in [0,\infty)\), it is continuous, nondecreasing, and concave in u for each fixed \(t\in[0,T]\). Moreover, \(k(t,0)=0\) and for any constant \(C_{1}>0\), if a non-negative continuous function \(z(t), t\in[0,T]\), satisfies \(z(0)=0\) and

$$ z(t)\le C_{1} \int_{0}^{t}k\bigl(s,z(s)\bigr)\,ds, $$
(2.4)

then \(z(t)=0\) for all \(t\in[0,T]\).

Assumption 2.3

For any constant \(C_{2}>0\), the deterministic ordinary differential equation

$$\frac{du}{dt}=C_{2}k(t,u), \quad 0\le t\le T, $$

has a global solution for any initial value \(u_{0}\).

Assumption 2.4

For any \(t\in[0,T]\), there exists a constant K such that

$$ \bigl\vert f(t,0)\bigr\vert ^{2}\vee \bigl\vert g(t,0)\bigr\vert ^{2}\vee \int_{Z} \bigl\vert h(t,0,v)\bigr\vert ^{2} \pi(dv)\le K. $$
(2.5)

Assumption 2.5

For any integer \(N>0\), there exists a function \(k_{N}(t,u):R_{+}\times R_{+}\to R_{+}\), such that

$$\begin{aligned}& \bigl\vert f(t,\varphi) - f(t,\psi)\bigr\vert ^{2}\vee \bigl\vert g(t,\varphi) - g(t,\psi )\bigr\vert ^{2}\vee \int_{Z} \bigl\vert h(t,\varphi,v) - h(t,\psi,v)\bigr\vert ^{2}\pi(dv) \\& \quad\le k_{N}\bigl(t,\vert \varphi - \psi \vert _{g}^{2}\bigr), \end{aligned}$$
(2.6)

for any \(\varphi,\psi\in C_{g}\) with \(|\varphi|_{g},|\psi|_{g}\le N\) and \(t\in[0,T]\). Here \(k_{N}(t,u)\) is nondecreasing and locally integrable in t for each fixed \(u\in[0,\infty)\), and it is continuous, nondecreasing, and concave in u for each fixed \(t\in[0,T]\). Moreover, \(k_{N}(t,0)=0\) and for any constant \(C_{3}>0\), if a non-negative continuous function \(z(t)\) satisfies \(z(0)=0\) and

$$ z(t)\le C_{3} \int_{0}^{t}k_{N}\bigl(s,z(s)\bigr)\,ds, $$
(2.7)

then \(z(t)=0\) for all \(t\in[0,T]\).

Remark 2.1

Clearly, the conditions (2.3) and (2.5) imply the growth condition. That is, for any \(\varphi\in C_{g}\) and \(t\in[0,T]\), there exists a function \(k(t,u): R_{+}\times R_{+}\to R_{+}\) such that

$$ \bigl\vert f(t,\varphi)\bigr\vert ^{2}\vee \bigl\vert g(t,\varphi) \bigr\vert ^{2}\vee \int_{Z} \bigl\vert h(t,\varphi ,v)\bigr\vert ^{2}\pi(dv) \le2k\bigl(t,\vert \varphi \vert _{g}^{2} \bigr)+2K, $$
(2.8)

where \(k(t,u)\) is defined as in Assumption 2.2.
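
Indeed, (2.8) follows from the elementary inequality \(|a|^{2}\le2|a-b|^{2}+2|b|^{2}\) together with (2.3) (taking \(\psi=0\)) and (2.5); for the drift coefficient, for example,

$$ \bigl\vert f(t,\varphi)\bigr\vert ^{2}\le2\bigl\vert f(t,\varphi)-f(t,0)\bigr\vert ^{2}+2\bigl\vert f(t,0)\bigr\vert ^{2}\le2k\bigl(t,\vert \varphi \vert _{g}^{2}\bigr)+2K, $$

and the same argument applies to g and to the jump coefficient h.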

Remark 2.2

By using the Carathéodory theorem (see [39]), it follows that the deterministic ordinary differential equation

$$\frac{du}{dt}=C_{2}k(t,u), \quad 0\le t\le T, $$

has a global solution for any initial value \(u_{0}\). In addition, by applying Lemma 3 in [29], we see that \(z(t)\) of (2.7) is identically zero on \([0,T]\).

3 Existence and uniqueness theorem

In this section, we establish the existence and uniqueness theorem to equation (2.1) under the Carathéodory type conditions.

In order to prove our main results, we need to introduce the following lemmas.

Lemma 3.1

Let \(p\ge2\) and \(a,b\in R^{n}\). Then, for \(\epsilon>0\),

$$\vert a+b \vert ^{p}\le\bigl[1+\epsilon^{\frac{1}{p-1}} \bigr]^{p-1}\biggl(\vert a\vert ^{p}+\frac {|b|^{p}}{\epsilon} \biggr). $$

Lemma 3.2

Let \(\phi:R_{+}\times Z\to R^{n}\) and assume that

$$\int_{0}^{t} \int_{Z} \bigl\vert \phi(s,v)\bigr\vert ^{p} \pi(dv)\,ds< \infty,\quad p\ge2. $$

Then there exists \(D_{p}>0\) such that

$$\begin{aligned} E\biggl(\sup_{0\le t\le u}\biggl\vert \int_{0}^{t} \int_{Z}\phi(s,v)\tilde {N}(ds,dv)\biggr\vert ^{p} \biggr) \le& D_{p}\biggl\{ E\biggl( \int_{0}^{u} \int_{Z}\bigl\vert \phi(s,v)\bigr\vert ^{2}\pi (dv)\,ds\biggr)^{\frac{p}{2}} \\ &{}+E \int_{0}^{u} \int_{Z}\bigl\vert \phi(s,v)\bigr\vert ^{p} \pi(dv)\,ds\biggr\} . \end{aligned}$$

The proofs of Lemma 3.1 and Lemma 3.2 can be found in [9] and [40, 41]; we omit them here.

Lemma 3.3

Let \(p\ge2\) and \(a,b\in R^{n}\). Then, for any \(\delta \in(0,1)\),

$$\vert a+b\vert ^{p}\le\frac{|a|^{p}}{(1-\delta)^{p-1}}+\frac{|b|^{p}}{\delta ^{p-1}}. $$

The proof of Lemma 3.3 can be obtained from Lemma 3.1 by putting \(\epsilon=(\frac{\delta}{1-\delta})^{p-1}\).
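
Indeed, with \(\epsilon=(\frac{\delta}{1-\delta})^{p-1}\) one has

$$ \bigl[1+\epsilon^{\frac{1}{p-1}}\bigr]^{p-1}=\biggl(1+\frac{\delta}{1-\delta}\biggr)^{p-1}=\frac{1}{(1-\delta)^{p-1}},\qquad \frac{[1+\epsilon^{\frac{1}{p-1}}]^{p-1}}{\epsilon}=\frac{1}{\delta^{p-1}}, $$

so Lemma 3.1 yields exactly the bound stated in Lemma 3.3.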

Lemma 3.4

For any \(f\in\mathcal{M}^{2}([0,T];R^{n})\), \(g\in \mathcal{M}^{2}([0,T];R^{n\times d})\) and \(h\in\mathcal{M}^{2}([0,T]\times Z;R^{n})\), the following equation:

$$ \textstyle\begin{cases} d[x(t)-D(t,x_{t})]=f(t)\,dt+g(t)\,dw(t)+\int_{Z}h(t,v)N(dt,dv), & t\in [0,T], \\ x(0)=\xi, \end{cases} $$
(3.1)

has a unique solution \(x(t)\) on \([0,T]\) under Assumption  2.1.

Proof

Define the operator Φ,

$$\begin{aligned} (\Phi x) (t) =&x_{0}+D(t,x_{t})-D(0,x_{0})+ \int_{0}^{t}f(s)\,ds \\ &{}+ \int_{0}^{t}g(s)\,dw(s)+ \int_{0}^{t} \int_{Z}h(s,v)N(ds,dv), \end{aligned}$$

and \((\Phi x)(0)=\xi\), then we write equation (3.1) as \(x(t)=(\Phi x)(t)\). Clearly, Φx is an \(R^{n}\)-valued measurable \(\{\mathcal{F}_{t}\} \)-adapted process and continuous in \(t\in[0,T]\). By the Hölder inequality and the Doob martingale inequality, we obtain

$$\begin{aligned}& E\Bigl(\sup_{0 \le t\le T} \bigl\vert (\Phi x) (t)\bigr\vert ^{2}\Bigr)\\& \quad\le 5E\sup_{0 \le t\le T}\vert x_{0} \vert ^{2}+5E\sup_{0 \le t\le T}\bigl\vert D(t,x_{t})-D(0,x_{0})\bigr\vert ^{2}+ 5E\sup _{0 \le t\le T}\biggl\vert \int_{0}^{t}f(s)\,ds\biggr\vert ^{2} \\& \qquad{}+ 5E\sup_{0 \le t\le T}\biggl\vert \int_{0}^{t}g(s)\,dw(s)\biggr\vert ^{2}+5E\sup_{0 \le t\le T}\biggl\vert \int_{0}^{t} \int_{Z}h(s,v)N(ds,dv)\biggr\vert ^{2} \\& \quad\le 5 \vert x_{0}\vert ^{2}+5k_{0}^{2}E \sup_{0 \le t\le T}\vert x_{t}-x_{0}\vert _{g}^{2}+5TE \int_{0}^{ T}\bigl\vert f(s)\bigr\vert ^{2}\,ds \\& \qquad{}+20E \int_{0}^{T}\bigl\vert g(s)\bigr\vert ^{2}\,ds+12E \int_{0}^{ T} \int_{R^{n}}\bigl\vert h(s,u)\bigr\vert ^{2} \pi(du) \,ds. \end{aligned}$$

Noting that \(|x_{t}|_{g}^{2}=\sup_{-\infty\le\sigma\le0}\bigl(\frac{|x(t+\sigma)|}{g(\sigma)}\bigr)^{2} \), we obtain

$$E\sup_{0\le s\le t}\vert x_{s}\vert _{g}^{2}\le E\sup_{0\le s\le t}\sup _{-\infty\le \sigma\le0}\biggl(\frac{|x(s+\sigma)|}{g(\sigma)}\biggr)^{2}\le E \vert \xi \vert _{g}^{2}+E\sup_{0\le s \le t} \bigl\vert x(s)\bigr\vert ^{2}. $$

Therefore, by applying the basic inequality \(|a+b|^{2}\le2|a|^{2}+2|b|^{2}\), we have

$$\begin{aligned} E\Bigl(\sup_{0 \le t\le T}\bigl\vert (\Phi x) (t)\bigr\vert ^{2}\Bigr) \le&\bigl(5+20k_{0}^{2}\bigr) E \vert \xi \vert _{g}^{2}+10k_{0}^{2}E\sup _{0 \le t\le T}\bigl\vert x(t)\bigr\vert ^{2}+5TE \int_{0}^{ T}\bigl\vert f(s)\bigr\vert ^{2}\,ds \\ &{}+20E \int_{0}^{T}\bigl\vert g(s)\bigr\vert ^{2}\,ds+12E \int_{0}^{ T} \int_{R^{n}}\bigl\vert h(s,u)\bigr\vert ^{2} \pi(du) \,ds. \end{aligned}$$

Since \(f\in\mathcal{M}^{2}([0,T];R^{n})\), \(g\in\mathcal {M}^{2}([0,T];R^{n\times d})\) and \(h\in\mathcal{M}^{2}([0,T]\times Z;R^{n})\), if \(E\sup_{0 \le t\le T}|x(t)|^{2}<\infty\), then we get

$$ E\Bigl(\sup_{0 \le t\le T} \bigl\vert (\Phi x) (t)\bigr\vert ^{2}\Bigr)< \infty. $$
(3.2)

Hence, (3.2) implies Φ is an operator from \(\mathcal {M}^{2}([0,T];R^{n})\) to itself and we conclude that Φ is well defined. Now, we prove that Φ has a unique fixed point. For any \(x,y\in\mathcal{M}^{2}([0,T];R^{n})\), we have

$$\begin{aligned} E\sup_{0 \le t\le T}\bigl\vert (\Phi x) (t)-(\Phi y) (t)\bigr\vert ^{2} =&E\sup_{0 \le t\le T}\bigl\vert D(t,x_{t})-D(t,y_{t}) \bigr\vert ^{2} \\ \le& k_{0}^{2}E\sup_{0 \le t\le T}\bigl\vert x(t)-y(t)\bigr\vert ^{2}. \end{aligned}$$

From \(0< k_{0}^{2}<1\), it follows that Φ is a contractive mapping. Thus, by the Banach fixed point theorem, the operator Φ has a unique fixed point in \(\mathcal{M}^{2}([0,T];R^{n})\), i.e., there exists a unique stochastic process \(x=x(t)\) satisfying

$$E\sup_{0 \le t\le T}\bigl\vert x(t)-(\Phi x) (t)\bigr\vert ^{2}=0. $$

So \(x(t)\) is the unique solution of equation (3.1) on \([0,T]\). The proof is complete. □

Theorem 3.1

Let Assumptions  2.1-2.4 hold. Then there exists a unique \(\mathcal{F}_{t}\)-adapted solution \(\{x(t)\}_{t\ge0}\) to equation (2.1) such that \(E(\sup_{0\le t\le T}|x(t)|^{2})<\infty\) for all \(T>0\).

Proof

We construct the sequence of successive approximations defined as follows:

$$\begin{aligned}& x^{0}(t)=\xi(0),\quad t\in(-\infty, T] \quad \mbox{and}\quad x_{0}^{n}=\xi,\quad n\ge1, \\& x^{n}(t)-D\bigl(t,x_{t}^{n}\bigr)=\xi(0)-D(0, \xi)+ \int _{0}^{t}f\bigl(s,x_{s}^{n-1} \bigr)\,ds+ \int_{0}^{t}g\bigl(s,x_{s}^{n-1} \bigr)\,dw(s) \\& \phantom{x^{n}(t)-D\bigl(t,x_{t}^{n}\bigr)=} {}+ \int_{0}^{t} \int_{Z}h\bigl(s,x_{s-}^{n-1},v \bigr)N(ds,dv), \quad t\in[0,T], n\ge1. \end{aligned}$$
(3.3)

The solution \(x^{n}(t)\) of the above equation exists according to Lemma 3.4. The proof will be split into the following three steps.
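
To give a concrete feel for the scheme (3.3), the following sketch is our own schematic illustration and not part of the proof: it fixes one discretized Brownian path and one compound Poisson jump path, replaces the segment dependence by a single bounded delay τ, chooses toy coefficients D, f, g, h, and iterates an Euler-type analogue of (3.3) until successive iterates agree.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- toy problem data (all choices below are our own illustrations) ---
T, n_steps = 1.0, 400
dt = T / n_steps
tau = 0.1                         # a single bounded delay standing in for x_t
lag = int(round(tau / dt))
lam = 5.0                         # intensity of the driving Poisson process

xi0 = 1.0                                   # constant initial history xi(theta) = 1
D = lambda x_del: 0.1 * x_del               # neutral term, contraction constant 0.1 < 1
f = lambda x, x_del: -x + 0.5 * np.sin(x_del)
g = lambda x: 0.3 * np.cos(x)
h = lambda v: 0.2 * v                       # jump coefficient, linear in the mark

# --- one fixed realization of the driving noise, shared by every iterate ---
dW = rng.normal(0.0, np.sqrt(dt), n_steps)
jump_sums = np.array([rng.normal(size=rng.poisson(lam * dt)).sum()
                      for _ in range(n_steps)])    # sum of marks per step

def delayed(path, k):
    """Path value at grid point k shifted back by `lag` steps (history = xi0)."""
    return path[k - lag] if k - lag >= 0 else xi0

def picard_step(x_prev):
    """Build x^n from x^{n-1}: a discrete analogue of (3.3)."""
    x = np.empty(n_steps + 1)
    x[0] = xi0
    m = xi0 - D(xi0)                         # xi(0) - D(0, xi)
    for k in range(n_steps):
        m += (f(x_prev[k], delayed(x_prev, k)) * dt
              + g(x_prev[k]) * dW[k]
              + h(jump_sums[k]))
        # x(t_{k+1}) = m + D(x(t_{k+1} - tau)); explicit because tau >= dt
        x[k + 1] = m + D(delayed(x, k + 1))
    return x

x_n = np.full(n_steps + 1, xi0)              # x^0(t) = xi(0)
for _ in range(30):
    x_next = picard_step(x_n)
    gap = float(np.max(np.abs(x_next - x_n)))
    x_n = x_next
print("sup distance between the last two iterates:", gap)
```

For such a toy configuration the gap shrinks rapidly, mirroring the contraction-type estimates carried out in the three steps below.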

Step 1. Let us show that \(\{x^{n}(t)\}_{n\ge 1}\) is bounded. Taking \(\epsilon\in(0,\frac{1}{k_{0}^{2}}-1)\), by applying the elementary inequality \(|a+b|^{2}\le(1+\epsilon )|a|^{2}+(1+\epsilon^{-1})|b|^{2}\) and Assumption 2.1, we derive that

$$\begin{aligned} \bigl|x^{n}(t)\bigr|^{2} \le&(1+\epsilon)\bigl\vert D \bigl(t,x_{t}^{n}\bigr)\bigr\vert ^{2}+\bigl(1+ \epsilon ^{-1}\bigr)\bigl\vert x^{n}(t)-D \bigl(t,x_{t}^{n}\bigr)\bigr\vert ^{2} \\ \le& (1+\epsilon)k_{0}^{2} \bigl\vert x_{t}^{n} \bigr\vert _{g}^{2}+\bigl(1+ \epsilon ^{-1}\bigr)\bigl\vert x^{n}(t)-D \bigl(t,x_{t}^{n}\bigr)\bigr\vert ^{2}. \end{aligned}$$

In particular, taking \(\epsilon=\frac{1}{2}(\frac{1}{k_{0}^{2}}-1)\), we get

$$ \bigl\vert x^{n}(t)\bigr\vert ^{2} \le \frac{1+k_{0}^{2}}{2} \bigl\vert x_{t}^{n} \bigr\vert _{g}^{2}+\frac {1+k_{0}^{2}}{1-k_{0}^{2}} \bigl\vert x^{n}(t)-D\bigl(t,x_{t}^{n}\bigr)\bigr\vert ^{2}. $$
(3.4)

Taking the expectation on both sides of (3.4), we have

$$E\sup_{0\le s\le t} \bigl\vert x^{n}(s)\bigr\vert ^{2} \le \frac{1+k_{0}^{2}}{2}E\sup_{0\le s\le t}\bigl\vert x_{s}^{n}\bigr\vert _{g}^{2}+ \frac {1+k_{0}^{2}}{1-k_{0}^{2}}E\sup_{0\le s\le t}\bigl\vert x^{n}(s)-D \bigl(s,x_{s}^{n}\bigr)\bigr\vert ^{2}. $$

Similarly, noting that \(|x_{t}^{n}|_{g}^{2}=\sup_{-\infty\le\sigma\le0}\bigl(\frac{|x^{n}(t+\sigma )|}{g(\sigma)}\bigr)^{2} \), we obtain

$$E\sup_{0\le s\le t}\bigl\vert x_{s}^{n}\bigr\vert _{g}^{2}\le E\sup_{0\le s\le t}\sup _{-\infty \le\sigma\le0}\biggl(\frac{|x^{n}(s+\sigma)|}{g(\sigma)}\biggr)^{2}\le E\vert \xi \vert _{g}^{2}+E\sup_{0\le s \le t} \bigl\vert x^{n}(s)\bigr\vert ^{2}. $$

Since \(k_{0}\in(0,1)\), we have \(\frac{1+k_{0}^{2}}{2}<1\), and hence

$$ E\sup_{0\le s\le t} \bigl\vert x^{n}(s) \bigr\vert ^{2} \le \frac{1+k_{0}^{2}}{1-k_{0}^{2}}E \vert \xi \vert _{g}^{2}+\frac {2(1+k_{0}^{2})}{(1-k_{0}^{2})^{2}}E\sup_{0\le s\le t} \bigl\vert x^{n}(s)-D\bigl(s,x_{s}^{n}\bigr)\bigr\vert ^{2}. $$
(3.5)

Next, we will estimate the second term of (3.5). Using the elementary inequality \(|a+b+c+d|^{2}\le 4(|a|^{2}+|b|^{2}+|c|^{2}+|d|^{2})\), it follows from (3.3) that

$$\begin{aligned}& E\sup_{0\le s\le t} \bigl\vert x^{n}(t)-D \bigl(t,x_{t}^{n}\bigr)\bigr\vert ^{2} \\ & \quad \le 4E\sup_{0\le s\le t} \bigl\vert \xi(0)-D(0,\xi)\bigr\vert ^{2}+4E\sup_{0\le s\le t}\biggl\vert \int_{0}^{s}f\bigl(\sigma,x_{\sigma}^{n-1} \bigr)\,d\sigma\biggr\vert ^{2} \\ & \qquad{}+4E\sup_{0\le s\le t}\biggl\vert \int_{0}^{s}g\bigl(\sigma,x_{\sigma }^{n-1} \bigr)\,dw({\sigma})\biggr\vert ^{2}+4E\sup_{0\le s\le t} \biggl\vert \int_{0}^{s} \int _{Z}h\bigl({\sigma},x_{\sigma-}^{n-1},v \bigr)N(d{\sigma},dv)\biggr\vert ^{2}. \end{aligned}$$
(3.6)

By Lemma 3.1 with \(p=2\), we have

$$\begin{aligned} E\sup_{0\le s\le t}\bigl\vert \xi(0)-D(0,\xi)\bigr\vert ^{2} \le&(1+k_{0}) \biggl(E\bigl\vert \xi (0)\bigr\vert ^{2}+\frac{E|D(0,\xi)|^{2}}{k_{0}}\biggr) \\ \le& (1+k_{0})^{2} E \vert \xi \vert _{g}^{2}. \end{aligned}$$
(3.7)

Using the Hölder inequality and Doob’s martingale inequality, we get

$$ E\sup_{0\le s\le t}\biggl\vert \int_{0}^{s}f\bigl({\sigma},x_{\sigma}^{n-1} \bigr)\,d{\sigma }\biggr\vert ^{2} \le TE \int_{0}^{t} \bigl\vert f\bigl(s,x_{s}^{n-1} \bigr)\bigr\vert ^{2}\,ds $$
(3.8)

and

$$ E\sup_{0\le s\le t}\biggl\vert \int_{0}^{s}g\bigl({\sigma},x_{\sigma}^{n-1} \bigr)\,dw({\sigma })\biggr\vert ^{2}\le4E \int_{0}^{t} \bigl\vert g\bigl(s,x_{s}^{n-1} \bigr)\bigr\vert ^{2}\,ds. $$
(3.9)

Now for the fourth term of (3.6). By using the basic inequality \(|a+b|^{2}\le2(|a|^{2}+|b|^{2})\), we have

$$\begin{aligned} E\sup_{0\le s\le t}\biggl\vert \int_{0}^{s} \int_{Z}h\bigl({\sigma},x_{\sigma -}^{n-1},v \bigr)N(d{\sigma},dv)\biggr\vert ^{2} \le& 2E\sup _{0\le s\le t}\biggl\vert \int_{0}^{s} \int _{Z}h\bigl({\sigma},x_{\sigma-}^{n-1},v \bigr)\tilde{N}(d{\sigma },dv)\biggr\vert ^{2} \\ & {}+2E\sup_{0\le s\le t}\biggl\vert \int_{0}^{s} \int_{Z}h\bigl({\sigma },x_{\sigma-}^{n-1},v \bigr)\pi(dv)\,d{\sigma}\biggr\vert ^{2}, \end{aligned}$$

where \(N(dt,dv)=\tilde{N}(dt,dv)+\pi(dv)\,dt\). By Lemma 3.2 with \(p=2\) and the Hölder inequality, we derive that

$$E\sup_{0\le s\le t}\biggl\vert \int_{0}^{s} \int_{Z}h\bigl({\sigma},x_{\sigma -}^{n-1},v \bigr)\pi(dv)\,d{\sigma}\biggr\vert ^{2} \le\bigl[T\pi(Z)\bigr]E \int_{0}^{t} \int _{Z} \bigl\vert h\bigl(s,x_{s-}^{n-1},v \bigr)\bigr\vert ^{2}\pi(dv)\,ds $$

and

$$E\sup_{0\le s\le t}\biggl\vert \int_{0}^{s} \int_{Z}h\bigl({\sigma},x_{\sigma -}^{n-1},v \bigr)\tilde{N}(d{\sigma},dv)\biggr\vert ^{2}\le D_{2}E \int_{0}^{t} \int _{Z}\bigl\vert h\bigl({s},x_{s-}^{n-1},v \bigr)\bigr\vert ^{2}\pi(dv)\,d{s}. $$

Therefore,

$$\begin{aligned}& E\sup_{0\le s\le t}\biggl\vert \int_{0}^{s} \int_{Z} h\bigl({\sigma},x_{\sigma -}^{n-1},v \bigr)N(d{\sigma},dv)\biggr\vert ^{2} \\ & \quad \le 2\bigl[D_{2}+ T \pi(Z)\bigr]E \int_{0}^{t} \int_{Z} \bigl\vert h\bigl(s,x_{s-}^{n-1},v \bigr)\bigr\vert ^{2}\pi(dv)\,ds. \end{aligned}$$
(3.10)

Combining with (3.6)-(3.10), it follows that

$$\begin{aligned}& E\sup_{0\le s\le t}\bigl\vert x^{n}(s)-D \bigl(t,x_{s}^{n}\bigr)\bigr\vert ^{2} \\& \quad\le4(1+k_{0})^{2}E \vert \xi \vert _{g}^{2}+4TE \int_{0}^{t}\bigl\vert f\bigl(s,x_{s}^{n-1} \bigr)\bigr\vert ^{2}\,ds+16E \int _{0}^{t}\bigl\vert g\bigl(s,x_{s}^{n-1} \bigr)\bigr\vert ^{2}\,ds \\& \qquad{}+8\bigl[D_{2}+T\pi(Z)\bigr]E \int_{0}^{t} \int_{Z} \bigl\vert h\bigl(s,x_{s-}^{n-1},v \bigr)\bigr\vert ^{2}\pi (dv)\,ds. \end{aligned}$$

Then the condition (2.8) implies that

$$\begin{aligned}& E\sup_{0\le s\le t} \bigl\vert x^{n}(s)-D \bigl(t,x_{s}^{n}\bigr)\bigr\vert ^{2} \\& \quad\le 4(1+k_{0})^{2}E \vert \xi \vert _{g}^{2}+8TE \int _{0}^{t} \bigl\vert f\bigl(s,x_{s}^{n-1} \bigr)-f(s,0)\bigr\vert ^{2}\,ds+8T \int_{0}^{t} \bigl\vert f(s,0)\bigr\vert ^{2}\,ds \\& \qquad{}+32E \int_{0}^{t}\bigl\vert g\bigl(s,x_{s}^{n-1} \bigr)-g(s,0)\bigr\vert ^{2}\,ds+32 \int _{0}^{t} \bigl\vert g(s,0)\bigr\vert ^{2}\,ds \\& \qquad {}+16\bigl[D_{2}+T\pi(Z)\bigr]E \int_{0}^{t} \int _{Z}\bigl\vert h\bigl(s,x_{s-}^{n-1},v \bigr)-h(s,0,v)\bigr\vert ^{2}\pi(dv)\,ds \\& \qquad {}+16\bigl[D_{2}+T\pi(Z)\bigr] \int_{0}^{t} \int_{Z} \bigl\vert h(s,0,v)\bigr\vert ^{2} \pi(dv)\,ds \\& \quad\le c_{1}E \vert \xi \vert _{g}^{2}+c_{2}K+(8T+32)E \int _{0}^{t}k\bigl(s,\bigl\vert x_{s}^{n-1}\bigr\vert _{g}^{2}\bigr) \,ds\\& \qquad{}+\bigl[16D_{2}+16T\pi(Z)\bigr]E \int _{0}^{t}k\bigl(s,\bigl\vert x_{s-}^{n-1}\bigr\vert _{g}^{2}\bigr) \,ds, \end{aligned}$$

where \(c_{1}=4(1+k_{0})^{2}, c_{2}=(8T+32+16D_{2}+16T\pi(Z))\). By applying the Jensen inequality, we have

$$\begin{aligned}& E\sup_{0\le s\le t}\bigl\vert x^{n}(s)-D \bigl(t,x_{s}^{n}\bigr)\bigr\vert ^{2} \\& \quad\le c_{1}E \vert \xi \vert _{g}^{2}+c_{2}K+(8T+32) \int _{0}^{t}k\bigl(s,E \bigl\vert x_{s}^{n-1}\bigr\vert _{g}^{2}\bigr) \,ds \\& \qquad{}+\bigl[16D_{2}+16T\pi(Z)\bigr] \int _{0}^{t}k\bigl(s,E \bigl\vert x_{s-}^{n-1}\bigr\vert _{g}^{2}\bigr) \,ds \\& \quad \le c_{1}E \vert \xi \vert _{g}^{2}+c_{2}K+c_{2} \int_{0}^{t}k\Bigl(s,E\sup_{0\le{\sigma}\le s} \bigl\vert x_{\sigma}^{n-1}\bigr\vert _{g}^{2} \Bigr)\,ds. \end{aligned}$$
(3.11)

Then, inserting (3.11) into (3.5), we get

$$\begin{aligned} E\sup_{0\le s\le t} \bigl\vert x^{n}(s)\bigr\vert ^{2} \le& c_{3}E \vert \xi \vert _{g}^{2}+c_{4}K+c_{4} \int_{0}^{t}k\Bigl(s,E\sup_{0\le{\sigma}\le s} \bigl\vert x_{\sigma}^{n-1}\bigr\vert _{g}^{2} \Bigr)\,ds \\ \le&c_{3}E\vert \xi \vert _{g}^{2}+c_{4}K+c_{4} \int_{0}^{t}k\Bigl(s,E \vert \xi \vert _{g}^{2}+E\sup_{0\le {\sigma}\le s}\bigl\vert x^{n-1}({\sigma})\bigr\vert ^{2}\Bigr)\,ds, \end{aligned}$$

where \(c_{3}=\frac{(1+k_{0})^{2}(5-4k_{0}^{2})}{1-k_{0}^{2}}\) and \(c_{4}=\frac {2(1+k_{0})^{2}}{(1-k_{0})^{2}}\). This indicates that

$$\begin{aligned}& E \vert \xi \vert _{g}^{2}+E\sup_{0\le s\le t} \bigl\vert x^{n}(s)\bigr\vert ^{2} \\& \quad\le(1+c_{3})E \vert \xi \vert _{g}^{2}+c_{4}K+c_{4} \int_{0}^{t}k\Bigl(s,E\vert \xi \vert _{g}^{2}+E\sup_{0\le{\sigma}\le s}\bigl\vert x^{n-1}({\sigma})\bigr\vert ^{2}\Bigr)\,ds. \end{aligned}$$
(3.12)

By Assumption 2.3, it follows that \(u(t)\) is a global solution of the equation

$$u(t)=u_{0}+c_{4} \int_{0}^{t} k\bigl(s,u(s)\bigr)\,ds $$

with the initial condition \(u_{0}>(1+c_{3})E|\xi|_{g}^{2}+c_{4}K\). Now, we will prove the following inequality:

$$ E\vert \xi \vert _{g}^{2}+E\sup _{0\le s\le t}\bigl\vert x^{n}(s)\bigr\vert ^{2}\le u(t), $$
(3.13)

holding for all \(n\ge0\). For \(n=0\), the inequality (3.13) holds by the definition of u. Now suppose that, for \(n=k-1\),

$$ E \vert \xi \vert _{g}^{2}+E\sup _{0\le s\le t} \bigl\vert x^{k-1}(s) \bigr\vert ^{2}\le u(t) $$
(3.14)

holds. Then, by (3.12),

$$\begin{aligned}& u(t)-E \vert \xi \vert _{g}^{2}-E\sup _{0\le s\le t}\bigl\vert x^{k}(s)\bigr\vert ^{2} \\& \quad\ge u_{0}-(1+c_{3})E \vert \xi \vert _{g}^{2}-c_{4}K +c_{4} \int_{0}^{t}\Bigl[k\bigl(s,u(s)\bigr)-k\Bigl(s,E \vert \xi \vert _{g}^{2}+E\sup_{0\le{\sigma}\le s} \bigl\vert x^{k-1}({\sigma})\bigr\vert ^{2}\Bigr)\Bigr]\,ds \\& \quad\ge0. \end{aligned}$$

Hence, by mathematical induction, we have

$$E \vert \xi \vert _{g}^{2}+E\sup_{0\le s\le t} \bigl\vert x^{k}(s)\bigr\vert ^{2}\le u(t). $$

Since \(k(t,u)\) is continuous and nondecreasing in u for each fixed \(t\ge0\), we obtain

$$ E\sup_{0\le t\le T}\bigl\vert x^{n}(t)\bigr\vert ^{2}\le u(T)< \infty $$
(3.15)

for all \(n\ge0\). This proves the boundedness of \(\{x^{n}(t), n\ge1\}\).

Step 2. Let us show that \(\{x^{n}(t)\}_{n\ge 1}\) is a Cauchy sequence. Since the sequence \(\{x^{n}(t)\}_{n\ge 1}\) is bounded by (3.15), we obtain a positive constant C such that

$$E\sup_{0\le t\le T} \bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2}\le2E\sup_{0\le t\le T}\bigl\vert x^{n}(t)\bigr\vert ^{2}+2E\sup_{0\le t\le T}\bigl\vert x^{m}(t)\bigr\vert ^{2}\le C. $$

For any \(t\in[0,T]\) and \(m,n\ge0\), it follows from Lemma 3.3 with \(p=2\) and Assumption 2.1 that

$$\begin{aligned} \bigl\vert x^{n}(t)-x^{m}(t)\bigr\vert ^{2} \le& \frac {|x^{n}(t)-x^{m}(t)-[D(t,x_{t}^{n})-D(t,x_{t}^{m})]|^{2}}{1-k_{0}}+\frac {|D(t,x_{t}^{n})-D(t,x_{t}^{m})|^{2}}{k_{0}} \\ \le&\frac {|x^{n}(t)-x^{m}(t)-[D(t,x_{t}^{n})-D(t,x_{t}^{m})]|^{2}}{1-k_{0}}+k_{0}\bigl\vert x_{t}^{n}-x_{t}^{m}\bigr\vert _{g}^{2}, \end{aligned}$$
(3.16)

where \(\delta=k_{0}\). Taking the expectation on both sides of (3.16), it follows that for any \(t\in[0,T]\)

$$\begin{aligned} E\sup_{0\le s\le t}\bigl\vert x^{n}(t)-x^{m}(t) \bigr\vert ^{2} \le&\frac{E\sup_{0\le s\le t}|x^{n}(s)-x^{m}(s)-[D(s,x_{s}^{n})-D(s,x_{s}^{m})]|^{2}}{1-k_{0}} \\ &{}+k_{0}E\sup_{0\le s\le t}\bigl\vert x^{n}(s)-x^{m}(s)\bigr\vert ^{2}. \end{aligned}$$

Consequently,

$$ E\sup_{0\le s\le t}\bigl\vert x^{n}(s)-x^{m}(s) \bigr\vert ^{2} \le\frac{E\sup_{0\le s\le t}|x^{n}(s)-x^{m}(s)-[D(s,x_{s}^{n})-D(s,x_{s}^{m})]|^{2}}{(1-k_{0})^{2}}. $$
(3.17)

Now, we will estimate \(E\sup_{0\le s\le t}|x^{n}(s)-x^{m}(s)-[D(s,x_{s}^{n})-D(s,x_{s}^{m})]|^{2}\). Using the elementary inequality \(|a+b+c|^{2}\le3(|a|^{2}+|b|^{2}+|c|^{2})\), we have

$$\begin{aligned}& E\sup_{0\le s\le t} \bigl\vert x^{n}(s)-x^{m}(s)- \bigl[D\bigl(s,x_{s}^{n}\bigr)-D\bigl(s,x_{s}^{m} \bigr)\bigr]\bigr\vert ^{2} \\& \quad\le 3E\sup_{0\le s\le t}\biggl\vert \int_{0}^{t}\bigl[f\bigl({\sigma},x_{\sigma }^{n-1} \bigr)-f\bigl({\sigma},x_{\sigma}^{m-1}\bigr)\bigr]\,d{\sigma} \biggr\vert ^{2} \\& \qquad{}+3E\sup_{0\le s\le t} \biggl\vert \int_{0}^{s}\bigl[g\bigl({\sigma},x_{\sigma }^{n-1} \bigr)-g\bigl({\sigma},x_{\sigma}^{m-1}\bigr)\bigr]\,dw({\sigma}) \biggr\vert ^{2} \\& \qquad{}+3E\sup_{0\le s\le t}\biggl\vert \int_{0}^{s} \int_{Z}\bigl[h\bigl({\sigma},x_{\sigma -}^{n-1},v \bigr)-h\bigl({\sigma},x_{\sigma-}^{m-1},v\bigr)\bigr]N(d{ \sigma},dv)\biggr\vert ^{2}. \end{aligned}$$
(3.18)

Using the Hölder inequality and Doob’s martingale inequality again, we get

$$ E\sup_{0\le s\le t}\biggl\vert \int_{0}^{t}\bigl[f\bigl({\sigma},x_{\sigma}^{n-1} \bigr)-f\bigl({\sigma },x_{\sigma}^{m-1}\bigr)\bigr]\,d{\sigma} \biggr\vert ^{2}\le TE \int _{0}^{t}\bigl\vert f\bigl(s,x_{s}^{n-1} \bigr)-f\bigl(s,x_{s}^{m-1}\bigr)\bigr\vert ^{2} \,ds $$
(3.19)

and

$$ E\sup_{0\le s\le t} \biggl\vert \int_{0}^{s}\bigl[g\bigl({\sigma},x_{\sigma }^{n-1} \bigr)-g\bigl({\sigma},x_{\sigma}^{m-1}\bigr)\bigr]\,dw({\sigma}) \biggr\vert ^{2}\le4E \int _{0}^{t}\bigl\vert g\bigl(s,x_{s}^{n-1} \bigr)-g\bigl(s,x_{s}^{m-1}\bigr)\bigr\vert ^{2} \,ds. $$
(3.20)

In the meantime, by Lemma 3.2 with \(p=2\) and the Hölder inequality, we have

$$\begin{aligned}& E\sup_{0\le s\le t}\biggl\vert \int_{0}^{s} \int_{Z}\bigl[h\bigl({\sigma},x_{\sigma -}^{n-1},v \bigr)-h\bigl({\sigma},x_{\sigma-}^{m-1},v\bigr)\bigr]N(d{\sigma },dv)\biggr\vert ^{2} \\& \quad\le 2E\sup_{0\le s\le t}\biggl\vert \int_{0}^{t} \int_{Z}\bigl[h\bigl({\sigma},x_{\sigma -}^{n-1},v \bigr)-h\bigl({\sigma},x_{\sigma-}^{m-1},v\bigr)\bigr]\tilde{N}(d{ \sigma },dv)\biggr\vert ^{2} \\& \qquad{}+2E\sup_{0\le s\le t}\biggl\vert \int_{0}^{t} \int_{Z}\bigl[h\bigl({\sigma},x_{\sigma -}^{n-1},v \bigr)-h\bigl({\sigma},x_{\sigma-}^{m-1},v\bigr)\bigr]\pi(dv)\,d{ \sigma }\biggr\vert ^{2} \\& \quad\le 2\bigl[D_{2}+T\pi(Z)\bigr]E \int_{0}^{t} \int _{Z}\bigl\vert h\bigl(s,x_{s-}^{n-1},v \bigr)-h\bigl(s,x_{s-}^{m-1},v\bigr)\bigr\vert ^{2} \pi(dv)\,ds. \end{aligned}$$
(3.21)

Combining with (3.18)-(3.21), it follows from Assumption 2.2 that

$$\begin{aligned}& E\sup_{0\le s\le t} \bigl\vert x^{n}(s)-x^{m}(s)- \bigl[D\bigl(s,x_{s}^{n}\bigr)-D\bigl(s,x_{s}^{m} \bigr)\bigr]\bigr\vert ^{2} \\& \quad\le 3(T+4)E \int _{0}^{t}k\bigl(s, \bigl\vert x_{s}^{n-1}-x_{s}^{m-1}\bigr\vert _{g}^{2}\bigr)\,ds+6\bigl[D_{2}+T\pi(Z)\bigr]E \int _{0}^{t}k\bigl(s,\bigl\vert x_{s-}^{n-1}-x_{s-}^{m-1}\bigr\vert _{g}^{2}\bigr)\,ds \\& \quad\le 3(T+4) \int _{0}^{t}k\bigl(s,E\bigl\vert x_{s}^{n-1}-x_{s}^{m-1}\bigr\vert _{g}^{2}\bigr)\,ds+6\bigl[D_{2}+T\pi(Z)\bigr] \int _{0}^{t}k\bigl(s,E \bigl\vert x_{s-}^{n-1}-x_{s-}^{m-1}\bigr\vert _{g}^{2}\bigr)\,ds \\& \quad\le \bigl[3T+12+6D_{2}+6T\pi(Z)\bigr] \int_{0}^{t}k\Bigl(s,E\sup_{0\le\sigma\le s} \bigl\vert x^{n-1}(\sigma)-x^{m-1}(\sigma)\bigr\vert _{g}^{2}\Bigr)\,ds. \end{aligned}$$
(3.22)

Substituting (3.22) into (3.17), we get

$$\begin{aligned}& E\sup_{0\le s\le t}\bigl\vert x^{n}(s)-x^{m}(s) \bigr\vert ^{2} \\& \quad\le \frac{3T+12+6D_{2}+6T\pi(Z)}{(1-k_{0})^{2}} \int_{0}^{t}k\Bigl(s,E\sup_{0\le\sigma\le s} \bigl\vert x^{n-1}(\sigma)-x^{m-1}(\sigma )\bigr\vert _{g}^{2}\Bigr)\,ds. \end{aligned}$$

Then, by the Fatou lemma and (3.15), it is easily seen that

$$\begin{aligned}& \lim_{n,m \to\infty}E\sup_{0\le s\le t}\bigl\vert x^{n}(s)-x^{m}(s)\bigr\vert ^{2} \\& \quad\le \frac{3T+12+6D_{2}+6T\pi(Z)}{(1-k_{0})^{2}} \int_{0}^{t}k\Bigl(s,\lim_{n,m\to \infty}E \sup_{0\le\sigma\le s}\bigl\vert x^{n}(\sigma)-x^{m}( \sigma )\bigr\vert ^{2}\Bigr)\,ds. \end{aligned}$$

Hence, Assumption 2.2 implies that

$$ \lim_{n,m \to\infty}E\sup_{0\le s\le t}\bigl\vert x^{n}(s)-x^{m}(s)\bigr\vert ^{2}=0. $$
(3.23)

This shows that \(\{x^{n}(t)\}\) is a Cauchy sequence in \(\mathcal {M}^{2}([0,T];R^{n})\).

Step 3. According to (3.23), it follows that there exists \(x(t)\in\mathcal {M}^{2}([0,T];R^{n})\) such that \(\lim_{n \to\infty}E\sup_{0\le s\le t}|x^{n}(s)-x(s)|^{2}=0\). For any \(\delta>0\), by the Chebyshev inequality, we have

$$ \lim_{n \to\infty}P\Bigl(\sup_{0\le s\le t} \bigl\vert x^{n}(s)-x(s)\bigr\vert \ge\delta\Bigr)=0. $$
(3.24)

Hence, there exists a subsequence \(\{n_{i}\}_{ i=1,2,3, \ldots,\infty}\) satisfying

$$ P\biggl(\sup_{0\le s\le t}\bigl\vert x^{n_{i}}(s)-x(s)\bigr\vert \ge\frac{1}{i}\biggr)\le \frac {1}{2^{i}}, \quad i\ge1. $$
(3.25)

The Borel-Cantelli lemma shows that \(x^{n_{i}}(t)\) converges to \(x(t)\) almost surely uniformly on \([0,T]\) as \(n_{i}\to\infty\). Taking the limits on both sides of (3.3) and letting \(n\to\infty\), we see that \(x(t)\) is a solution of equation (2.1). In addition, similar to the proof of (3.15), we obtain \(E\sup_{0\le t\le T}|x(t)|^{2}<\infty \) for any \(T> 0\).

Now we prove the uniqueness of the solution to equation (2.1). Let \(x(t)\) and \(y(t)\) be any two solutions of equation (2.1). By the same procedure as in Step 2, we can prove that

$$E\sup_{0\le s\le t}\bigl\vert x(s)-y(s)\bigr\vert ^{2} \le C \int_{0}^{t}k\Bigl(s,E\Bigl(\sup _{0\le \sigma\le s}\bigl\vert x(\sigma)-y(\sigma)\bigr\vert ^{2}\Bigr)\Bigr)\,ds $$

for all \(t\in[0,T]\). By Assumption 2.2, we obtain

$$E\sup_{0 \le s\le t}\bigl\vert x(s)-y(s)\bigr\vert ^{2}=0, $$

i.e., for any \(t\in[0,T]\), \(x(t)\equiv y(t)\) a.s. The proof is completed. □

Remark 3.1

Let \(k(t,u)=c(t)k(u), t\in[0,T]\), where \(c(t)\ge0\) is locally integrable and \(k(u)\) is a continuous, nondecreasing, and concave function with \(k(0)=0\) such that \(\int_{0^{+}}\frac{1}{k(u)}\,du=\infty\). Then by the comparison theorem of differential dynamic systems we know that Assumptions 2.2 and 2.3 hold.

Remark 3.2

Now let us give some concrete examples of the function \(k(\cdot)\). Let \(L>0\) and let \(\delta\in(0,1)\) be sufficiently small. Define \(k_{1}(u)=Lu\) and

$$k_{2}(u)= \textstyle\begin{cases} u\sqrt{\log(\frac{1}{u})}, & u\in[0,\delta], \\ \delta\sqrt{\log(\frac{1}{\delta})}+k_{2}'(\delta -)(u-\delta), & u\in[\delta,+\infty), \end{cases} $$

where \(k_{2}'\) denotes the derivative of the function \(k_{2}\). They are all concave nondecreasing functions satisfying \(\int_{0^{+}} \frac {du}{k_{i}(u)}=+\infty\) (\(i=1,2\)).
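
A quick numerical sanity check of these properties for \(k_{2}\) (our own illustration, with the arbitrary choice \(\delta=0.1\)): on \((0,\delta]\) the function is nondecreasing and concave, and the partial integrals \(\int_{\varepsilon}^{\delta}\frac{du}{k_{2}(u)}\) grow without bound as \(\varepsilon\downarrow0\).

```python
import numpy as np

delta = 0.1
k2 = lambda u: u * np.sqrt(np.log(1.0 / u))       # branch of k_2 on (0, delta]

# monotonicity and concavity on (0, delta] via first and second differences
u = np.linspace(1e-6, delta, 100001)
v = k2(u)
print("nondecreasing:", bool(np.all(np.diff(v) >= 0.0)))
print("concave      :", bool(np.all(np.diff(v, 2) <= 1e-15)))

# partial integrals int_eps^delta du / k_2(u); substituting w = log(1/u)
# turns the integrand into w^{-1/2}, whose integral diverges as eps -> 0+
for eps in [1e-2, 1e-4, 1e-8, 1e-16, 1e-32]:
    w = np.linspace(np.log(1.0 / delta), np.log(1.0 / eps), 200001)
    fw = 1.0 / np.sqrt(w)
    integral = float(np.sum(0.5 * (fw[1:] + fw[:-1]) * np.diff(w)))
    print(f"eps = {eps:g}: integral of du/k2(u) over [eps, delta] ~ {integral:.2f}")
```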

Next, we will prove the existence and uniqueness of solutions to equation (2.1) under the local Carathéodory conditions.

Theorem 3.2

If Assumptions 2.1, 2.3, 2.4, and 2.5 hold, then there exists a unique local solution \(\{x(t)\}_{t\ge0}\) to equation (2.1).

Proof

Let \(T_{0}\in(0,T)\). For each \(N\ge1\), we define the truncation function \(f_{N}(t,\varphi)\) as follows:

$$f_{N}(t,\varphi)= \textstyle\begin{cases} f(t,\varphi) , & |\varphi|_{g}\le N,\\ f(t,N\frac{\varphi}{|\varphi|_{g}}), & |\varphi|_{g}>N, \end{cases} $$

and define \(g_{N}(t,\varphi)\) and \(h_{N}(t,\varphi,v)\) similarly. Then \(f_{N}\), \(g_{N}\), and \(h_{N}\) satisfy Assumption 2.2, since

$$\begin{aligned}& \bigl\vert f_{N}(t,\varphi)-f_{N}(t,\psi)\bigr\vert ^{2}\vee \bigl\vert g_{N}(t,\varphi)-g_{N}(t, \psi )\bigr\vert ^{2} \le 2k_{N}\bigl(t,\vert \varphi-\psi \vert _{g}^{2}\bigr), \\& \int_{Z}\bigl\vert h_{N}(t,\varphi,v)-h_{N}(t, \psi,v)\bigr\vert ^{2}\pi(dv) \le 2k_{N}\bigl(t,\vert \varphi -\psi \vert _{g}^{2}\bigr), \end{aligned}$$

where \(\varphi,\psi\in C_{g}\) and \(t\in[0,T_{0}]\). Therefore, by Theorem 3.1, there exist unique solutions \(x_{N}(t)\) and \(x_{N+1}(t)\), respectively, of the following stochastic systems:

$$\begin{aligned}& x_{N}(t)=\xi(0)+D\bigl(t,(x_{N})_{t} \bigr)-D(0,\xi)+ \int _{0}^{t}f_{N}\bigl(s,(x_{N})_{s} \bigr)\,ds+ \int_{0}^{t} g_{N}\bigl(s,(x_{N})_{s} \bigr)\,dw(s) \\& \phantom{x_{N}(t)=}{}+ \int_{0}^{t} \int_{Z}h_{N}\bigl(s,(x_{N})_{s-},v \bigr)N(ds,dv), \\& x_{N+1}(t)=\xi(0)+D\bigl(t,(x_{N+1})_{t} \bigr)-D(0,\xi)+ \int _{0}^{t}f_{N+1}\bigl(s,(x_{N+1})_{s} \bigr)\,ds \\& \phantom{x_{N+1}(t)=}{}+ \int_{0}^{t} g_{N+1}\bigl(s,(x_{N+1})_{s} \bigr)\,dw(s)+ \int_{0}^{t} \int _{Z}h_{N+1}\bigl(s,(x_{N+1})_{s-},v \bigr)N(ds,dv). \end{aligned}$$

By Lemma 3.3 with \(p=2\) and Assumption 2.1, it follows that

$$\begin{aligned}& E\sup_{0\le s\le t}\bigl\vert x_{N+1}(s)-x_{N}(s) \bigr\vert ^{2} \\ & \quad\le\frac{E\sup_{0\le s\le t}|x_{N+1}(s)-x_{N}(s)-[D(s,(x_{N+1})_{s})-D(s,(x_{N})_{s})]|^{2}}{(1-k_{0})^{2}}. \end{aligned}$$
(3.26)

Define the stopping times

$$\begin{aligned}& \sigma_{N} := T_{0}\wedge \operatorname{inf}\bigl\{ t \in[0,T]:\bigl\vert x_{N}(t)\bigr\vert \ge N\bigr\} , \\ & \sigma_{N+1} := T_{0}\wedge \operatorname{inf}\bigl\{ t \in[0,T]:\bigl\vert x_{N+1}(t)\bigr\vert \ge N+1\bigr\} , \\ & \tau_{N} := \sigma_{N}\wedge\sigma_{N+1}. \end{aligned}$$

Again the Hölder inequality, the Burkholder-Davis-Gundy inequality, and Lemma 3.2 with \(p=2\) imply that

$$\begin{aligned}& E\sup_{0\le s\le t\wedge\tau _{N}}\bigl\vert x_{N+1}(s)-x_{N}(s)- \bigl[D\bigl(s,(x_{N+1})_{s}\bigr)-D\bigl(s,(x_{N})_{s} \bigr)\bigr]\bigr\vert ^{2} \\& \quad\le 3TE \int_{0}^{t\wedge\tau _{N}}\bigl\vert f_{N+1} \bigl(s,(x_{N+1})_{s}\bigr)-f_{N} \bigl(s,(x_{N})_{s}\bigr)\bigr\vert ^{2}\,ds \\& \qquad{}+12E \int_{0}^{t\wedge\tau _{N}}\bigl\vert g_{N+1} \bigl(s,(x_{N+1})_{s}\bigr)-g_{N} \bigl(s,(x_{N})_{s}\bigr)\bigr\vert ^{2}\,ds \\& \qquad{}+6\bigl[D_{2}+T\pi(Z)\bigr]E \int_{0}^{t\wedge\tau_{N}} \int _{Z}\bigl\vert h_{N+1}\bigl(s,(x_{N+1})_{s-},v \bigr)-h_{N}\bigl(s,(x_{N})_{s-},v\bigr)\bigr\vert ^{2}\pi (dv)\,ds. \end{aligned}$$

Noting that, for any \(0\le t\le\tau_{N}\),

$$\begin{aligned}& f_{N+1}\bigl(s,(x_{N})_{s}\bigr) = f_{N}\bigl(s,(x_{N})_{s}\bigr),\qquad g_{N+1}\bigl(s,(x_{N})_{s}\bigr)=g_{N} \bigl(s,(x_{N})_{s}\bigr), \\ & h_{N+1}\bigl(s,(x_{N})_{s-},v\bigr) = h_{N}\bigl(s,(x_{N})_{s-},v\bigr), \end{aligned}$$

we derive that

$$\begin{aligned}& E\sup_{0\le s\le t\wedge\tau _{N}}\bigl\vert x_{N+1}(s)-x_{N}(s)- \bigl[D\bigl(s,(x_{N+1})_{s}\bigr)-D\bigl(s,(x_{N})_{s} \bigr)\bigr]\bigr\vert ^{2} \\ & \quad\le 3TE \int_{0}^{t\wedge\tau _{N}}\bigl\vert f_{N+1} \bigl(s,(x_{N+1})_{s}\bigr)-f_{N+1} \bigl(s,(x_{N})_{s}\bigr)\bigr\vert ^{2}\,ds \\ & \qquad{}+12E \int_{0}^{t\wedge\tau _{N}}\bigl\vert g_{N+1} \bigl(s,(x_{N+1})_{s}\bigr)-g_{N+1} \bigl(s,(x_{N})_{s}\bigr)\bigr\vert ^{2}\,ds \\ & \qquad {}+6\bigl[D_{2}+T\pi(Z)\bigr]E \\& \qquad{}\times\int_{0}^{t\wedge\tau_{N}} \int _{Z}\bigl\vert h_{N+1}\bigl(s,(x_{N+1})_{s-},v \bigr) - h_{N+1}\bigl(s,(x_{N})_{s-},v\bigr)\bigr\vert ^{2}\pi(dv)\,ds. \end{aligned}$$
(3.27)

Substituting (3.27) into (3.26), it follows from Assumption 2.5 that

$$\begin{aligned}& E\sup_{0\le s\le t}\bigl\vert x_{N+1}(s\wedge \tau_{N})-x_{N}(s\wedge\tau _{N})\bigr\vert ^{2} \\& \quad\le\frac{3T+12+6[D_{2}+T\pi(Z)]}{(1-k_{0})^{2}} \int_{0}^{t\wedge\tau _{N}}k_{N+1}\bigl(s,E\bigl\vert (x_{N+1})_{{s}-}-(x_{N})_{{s}-}\bigr\vert _{g}^{2}\bigr)\,ds. \end{aligned}$$
(3.28)

If \(t\le\tau_{N}\), then we have

$$\begin{aligned}& \int_{0}^{t\wedge\tau _{N}}k_{N+1}\bigl(s,E\bigl\vert (x_{N+1})_{{s}-}-(x_{N})_{{s}-}\bigr\vert _{g}^{2}\bigr)\,ds\\& \quad= \int _{0}^{t}k_{N+1}\bigl(s\wedge \tau_{N},E\bigl\vert (x_{N+1})_{ {s\wedge\tau_{N}}-}-(x_{N})_{{s\wedge\tau_{N}}-} \bigr\vert _{g}^{2}\bigr)\,ds. \end{aligned}$$

If \(t>\tau_{N}\), then we have

$$\begin{aligned}& \int_{0}^{t\wedge\tau _{N}}k_{N+1}\bigl(s,E\bigl\vert (x_{N+1})_{{s}-}-(x_{N})_{{s}-}\bigr\vert _{g}^{2}\bigr)\,ds\\& \quad\le \int _{0}^{t}k_{N+1}\bigl(s\wedge \tau_{N},E\bigl\vert (x_{N+1})_{ {s\wedge\tau_{N}}-}-(x_{N})_{{s\wedge\tau_{N}}-} \bigr\vert _{g}^{2}\bigr)\,ds. \end{aligned}$$

For any fixed \(u\in[0,\infty)\), \(k_{N}(t,u)\) is nondecreasing in t, so we obtain

$$\begin{aligned}& E\sup_{0\le s\le t}\bigl\vert x_{N+1}(s\wedge \tau_{N})-x_{N}(s\wedge\tau _{N})\bigr\vert ^{2} \\& \quad\le\frac{3T+12+6[D_{2}+T\pi(Z)]}{(1-k_{0})^{2}} \int _{0}^{t}k_{N+1}\bigl(s\wedge \tau_{N},E\bigl\vert (x_{N+1})_{{s\wedge\tau _{N}}-}-(x_{N})_{{s\wedge\tau_{N}}-} \bigr\vert _{g}^{2}\bigr)\,ds \\& \quad\le\frac{3T+12+6[D_{2}+T\pi(Z)]}{(1-k_{0})^{2}} \\& \qquad{}\times \int _{0}^{t}k_{N+1}\Bigl(s\wedge \tau_{N},E\sup_{0\le\sigma\le s}\bigl\vert x_{N+1}({ \sigma\wedge\tau_{N}})-x_{N}({\sigma\wedge\tau _{N}})\bigr\vert ^{2}\Bigr)\,ds \\& \quad\le\frac{3T+12+6[D_{2}+T\pi(Z)]}{(1-k_{0})^{2}} \int _{0}^{t}k_{N+1}\Bigl(s,E\sup _{0\le\sigma\le s}\bigl\vert x_{N+1}({\sigma\wedge\tau _{N}})-x_{N}({\sigma\wedge\tau_{N}})\bigr\vert ^{2}\Bigr)\,ds. \end{aligned}$$

From Assumption 2.5, one sees that

$$E\sup_{0\le s\le t}\bigl\vert x_{N+1}(s\wedge \tau_{N})-x_{N}(s\wedge\tau _{N})\bigr\vert ^{2}=0. $$

Therefore, we obtain

$$x_{N+1}(t)=x_{N}(t), \quad \mbox{for } t \in[0,T_{0}\wedge\tau _{N}]. $$

For each \(\omega\in\Omega\), there exists an \(N_{0}(\omega)>0\) such that \(0< T_{0}\le\tau_{N_{0}}\). Now define \(x(t)\) by \(x(t)=x_{N_{0}}(t)\) for \(t\in[0,T_{0}]\). Since \(x({t\wedge\tau _{N}})=x_{N}({t\wedge \tau_{N}})\), it follows that

$$\begin{aligned} x({t\wedge\tau_{N}}) =&\xi(0)+D\bigl(t\wedge\tau_{N},(x_{N})_{t\wedge\tau _{N}} \bigr)-D(0,\xi)+ \int_{0}^{t\wedge\tau_{N}}f_{N}\bigl(s,(x_{N})_{s} \bigr) \,ds \\ &{}+ \int_{0}^{t\wedge\tau_{N}} g_{N}\bigl(s,(x_{N})_{s} \bigr)\,dw(s)+ \int_{0}^{t\wedge\tau_{N}} \int_{Z} h_{N}\bigl(s,(x_{N})_{s-},v \bigr)N(ds,dv) \\ =&\xi(0)+D(t\wedge\tau_{N},x_{t\wedge\tau_{N}})-D(0,\xi)+ \int _{0}^{t\wedge\tau_{N}} f(s,x_{s})\,ds \\ &{}+ \int_{0}^{t\wedge\tau_{N}} g(s,x_{s})\,dw(s)+ \int_{0}^{t\wedge\tau_{N}} \int_{Z} h(s,x_{s-},v)N(ds,dv). \end{aligned}$$

Letting \(N\to\infty\) then yields

$$\begin{aligned} x({t}) =&\xi(0)+D(t,x_{t})-D(0,\xi)+ \int_{0}^{t} f(s,x_{s})\,ds \\ &{}+ \int_{0}^{t} g(s,x_{s})\,dw(s)+ \int_{0}^{t} \int_{Z} h(s,x_{s-},v)N(ds,dv), \quad t \in[0,T_{0}]. \end{aligned}$$

We can see that \(x(t)\) is the solution of equation (2.1). □

Remark 3.3

From Theorem 3.2, we obtain the existence and uniqueness of the solution to equation (2.1) under local Carathéodory conditions, with the non-Lipschitz conditions in [32, 35, 36] regarded as special cases; this makes the conditions on equation (2.1) easier to satisfy.

If \(k_{N}(t,u)\) is independent of t, i.e. \(k_{N}(t,u)=k_{N}(u)\), we obtain the following corollary.

Corollary 3.1

Let Assumptions 2.1 and 2.4 hold. Assume that, for any integer \(N>0\) and \(t\in[0,T]\), there exists a function \(k_{N}(\cdot)\) such that, for any \(\varphi,\psi\in C_{g}\) with \(|\varphi|_{g},|\psi |_{g}\le N\), it follows that

$$\begin{aligned}& \bigl\vert f(t,\varphi)-f(t,\psi)\bigr\vert ^{2} \vee\bigl|g(t,\varphi)-g(t,\psi )\bigr|^{2} \le k_{N} \bigl(\vert \varphi-\psi \vert _{g}^{2}\bigr), \end{aligned}$$
(3.29)
$$\begin{aligned}& \int_{Z}\bigl\vert h(t,\varphi,v)-h(t,\psi,v)\bigr\vert ^{2}\pi(dv)\le k_{N}\bigl(\vert \varphi-\psi \vert _{g}^{2}\bigr), \end{aligned}$$
(3.30)

where \(k_{N}(u)\) is a concave and nondecreasing function such that \(k_{N}(0)=0\) and \(\int_{0^{+}}\frac{1}{k_{N}(u)}\,du=\infty\). Then equation (2.1) has a unique local solution \(x(t)\) on \([0,T]\).

Proof

In fact, if the conditions of Corollary 3.1 hold, then Assumption 2.5 also holds. Thus, from Theorem 3.2, we can prove that equation (2.1) has a unique local solution \(x(t)\) on \([0,T]\). □

Remark 3.4

If we let \(k_{N}(u)\equiv k_{N} u\), \(k_{N},u\ge0\), then conditions (3.29) and (3.30) imply the local Lipschitz condition which was investigated by [5, 9, 13]. Therefore, the corresponding results of [5, 9, 13] are improved and generalized by Theorem 3.2 and Corollary 3.1.

4 Exponential estimations for solutions

In this section, we give the pth moment exponential estimates and almost sure asymptotic estimates of solutions to equation (2.1).

Assumption 4.1

Non-linear growth condition

For any \(\varphi\in C_{g}\) and \(t\in[0,T]\),

$$\begin{aligned}& \bigl\vert f(t,\varphi)\bigr\vert ^{2}\vee \bigl\vert g(t, \varphi)\bigr\vert ^{2}\le \rho\bigl(1+\vert \varphi \vert _{g}^{2}\bigr), \\& \biggl( \int_{Z} \bigl\vert h(t,\varphi,v)\bigr\vert ^{p}\pi(dv)\biggr)^{\frac{2}{p}}\le \rho \bigl(1+\vert \varphi \vert _{g}^{2}\bigr), \end{aligned}$$

where \(\rho(\cdot)\) is a concave and nondecreasing function from \(R^{+}\) to \(R^{+}\) such that \(\rho(0)=0\) and \(\rho(u)>0\) for \(u>0\).

Since \(\rho(\cdot)\) is concave and \(\rho(0)=0\), one can find a pair of positive constants a and b such that \(\rho(u)\le a+bu\) for all \(u\ge0\).
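
For instance, one may take \(a=b=\rho(1)\): since ρ is concave with \(\rho(0)=0\), the map \(u\mapsto\rho(u)/u\) is nonincreasing on \((0,\infty)\), and ρ is nondecreasing, so

$$ \rho(u)\le\rho(1)\quad(0\le u\le1),\qquad \rho(u)=u\cdot\frac{\rho(u)}{u}\le\rho(1)u\quad(u\ge1), $$

whence \(\rho(u)\le\rho(1)+\rho(1)u\) for all \(u\ge0\).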

Remark 4.1

In particular, we see clearly that if we let \(\rho(u)=Lu\) with \(L>0\), then Assumption 4.1 reduces to the linear growth condition. In other words, Assumption 4.1 is much weaker than the linear growth condition.

Suppose that equation (2.1) has a unique solution \(x(t)\), \(t\in (-\infty,T]\). Under the non-linear growth condition, we first establish the pth moment exponential estimates.

Theorem 4.1

Let \(E|\xi|_{g}^{p}<\infty\) and Assumption  4.1 hold. Then, for any \(t\in[0,T]\) and \(p\ge2\),

$$ E\sup_{0 \le t\le T}\bigl\vert x(t)\bigr\vert ^{p} \le \biggl[\frac{k_{0}}{1-k_{0}}E \vert \xi \vert _{g}^{p}+\frac{M_{1}}{(1-k_{0})^{p}}\biggr]e^{\frac {M_{2}}{(1-k_{0})^{p}}T}, $$
(4.1)

where \(M_{1}\), \(M_{2}\) are two positive constants.

Proof

Without loss of generality, we assume that \(x(t)\) is bounded. Otherwise, for each integer \(n\ge1\), define the stopping time

$$\tau_{n}=\operatorname{inf}\bigl\{ t\in[0,T]:\bigl\vert x(t)\bigr\vert \ge n\bigr\} . $$

If we can show (4.1) for the stopped processes \(x(t\wedge\tau_{n})\), then the general case follows upon letting \(n\to\infty\). By the Itô formula, we derive that

$$\begin{aligned}& \bigl\vert x(t)-D(t,x_{t})\bigr\vert ^{p} \\& \quad= \bigl\vert x(0)-D(0,x_{0})\bigr\vert ^{p} + \int_{0}^{t}p \bigl\vert x(s)-D(s,x_{s}) \bigr\vert ^{p-2}\bigl[x(s)-D(s,x_{s})\bigr]^{\top }f(s,x_{s}) \,ds \\& \qquad{}+ \int_{0}^{t}\frac {p}{2}\bigl\vert x(s)-D(s,x_{s})\bigr\vert ^{p-2}\bigl\vert g(s,x_{s})\bigr\vert ^{2}\,ds \\& \qquad{}+ \int_{0}^{t}\frac {p(p-2)}{2}\bigl\vert x(s)-D(s,x_{s})\bigr\vert ^{p-4}\bigl\vert \bigl[x(s)-D(s,x_{s})\bigr]^{\top }g(s,x_{s})\bigr\vert ^{2}\,ds \\& \qquad{}+ \int_{0}^{t}p \bigl\vert x(s)-D(s,x_{s}) \bigr\vert ^{p-2}\bigl[x(s)-D(s,x_{s})\bigr]^{\top }g(s,x_{s}) \,dw(s) \\& \qquad{}+ \int_{0}^{t} \int _{Z}\bigl(\bigl\vert x(s)-D(s,x_{s})+h(s,x_{s-},v) \bigr\vert ^{p}-\bigl\vert x(s)-D(s,x_{s})\bigr\vert ^{p}\bigr)N(ds,dv). \end{aligned}$$
(4.2)

Taking the expectation on both sides of (4.2), one gets

$$\begin{aligned}& E\sup_{{0} \le s\le t}\bigl(\bigl\vert x(s)-D(s,x_{s})\bigr\vert ^{p}\bigr) \\& \quad\le E\sup_{{0} \le s\le t}\bigl\vert x(0)-D(0,x_{0}) \bigr\vert ^{p}+E \int _{0}^{t}p\bigl\vert x(s)-D(s,x_{s}) \bigr\vert ^{p-1}\bigl\vert f(s,x_{s})\bigr\vert \,ds \\& \qquad{}+E \int_{0}^{t}\frac {p(p-1)}{2}\bigl\vert x(s)-D(s,x_{s})\bigr\vert ^{p-2}\bigl\vert g(s,x_{s})\bigr\vert ^{2}\,ds \\& \qquad{}+E\sup_{0 \le s\le t} \int_{0}^{s}p\bigl\vert x(\sigma)-D(\sigma ,x_{\sigma})\bigr\vert ^{p-2}\bigl[x(\sigma)-D( \sigma,x_{\sigma})\bigr]^{\top}g(\sigma ,x_{\sigma})\,dw( \sigma) \\& \qquad{}+E\sup_{0 \le s\le t} \int_{0}^{s} \int_{Z}\bigl(\bigl\vert x(\sigma) - D(\sigma,x_{\sigma})+h( \sigma,x_{\sigma-},v)\bigr\vert ^{p} \\& \qquad{}- \bigl\vert x(\sigma)- D( \sigma,x_{\sigma})\bigr\vert ^{p}\bigr)N(d\sigma,dv). \end{aligned}$$
(4.3)

By Lemma 3.1 and Assumption 2.1, we get

$$\begin{aligned} E\sup_{{0} \le s\le t}\bigl\vert x(0)-D(0,x_{0})\bigr\vert ^{p} \le& \bigl[1+\epsilon _{1}^{\frac{1}{p-1}} \bigr]^{p-1}\biggl(E\bigl\vert x(0)\bigr\vert ^{p}+ \frac{E|D(0,x_{0})|^{p}}{\epsilon _{1}}\biggr) \\ \le&\bigl[1+\epsilon_{1}^{\frac{1}{p-1}}\bigr]^{p-1}\biggl(E \bigl\vert x(0)\bigr\vert ^{p}+k_{0}^{p}\frac {E|x_{0}|_{g}^{p}}{\epsilon_{1}} \biggr) \\ \le& (1+k_{0})^{p}E \vert \xi \vert ^{p}_{g}, \end{aligned}$$
(4.4)

where \(\epsilon_{1}=k_{0}^{p-1}\). Using the basic inequality

$$a^{p-1}b\le\frac{\epsilon(p-1)}{p}a^{p}+\frac{1}{p\epsilon ^{p-1}}b^{p}, \quad a,b>0, p\ge2, \mbox{ and } \epsilon >0, $$

we have

$$\begin{aligned}& E \int_{0}^{t}p\bigl\vert x(s)-D(s,x_{s}) \bigr\vert ^{p-1} \bigl\vert f(s,x_{s})\bigr\vert \,ds \\& \quad\le E \int_{0}^{t}p\biggl[\frac{\epsilon _{2}(p-1)}{p}\bigl\vert x(s)-D(s,x_{s})\bigr\vert ^{p}+\frac{1}{p\epsilon _{2}^{p-1}}\bigl\vert f(s,x_{s})\bigr\vert ^{p}\biggr]\,ds \\& \quad\le\epsilon_{2}(p-1) \int_{0}^{t}E\sup_{0\le\sigma\le s}\bigl\vert x(\sigma )-D(\sigma,x_{\sigma})\bigr\vert ^{p}\,ds+ \frac{1}{\epsilon_{2}^{p-1}} \int _{0}^{t}E \bigl\vert f(s,x_{s}) \bigr\vert ^{p}\,ds, \end{aligned}$$
(4.5)

where \(\epsilon_{2}>0\). By Lemma 3.1, it follows that

$$\begin{aligned}& E\sup_{0\le\sigma\le s}\bigl\vert x(\sigma)-D(\sigma,x_{\sigma}) \bigr\vert ^{p}\,ds \\& \quad\le (1+k_{0})^{p-1}\Bigl(E\sup_{0\le\sigma\le s} \bigl\vert x(\sigma)\bigr\vert ^{p}+k_{0}E\sup _{0\le\sigma\le s}\vert x_{\sigma} \vert _{g}^{p} \Bigr) \\& \quad\le (1+k_{0})^{p}\biggl[E\sup_{0\le\sigma\le s} \bigl\vert x(\sigma)\bigr\vert ^{p}+E\sup_{0\le\sigma\le s} \sup_{-\infty\le u\le0}\biggl(\frac {|x(s+u)|}{g(u)}\biggr)^{p}\biggr] \\& \quad\le(1+k_{0})^{p}\Bigl(E\vert \xi \vert ^{p}_{g}+2E\sup_{0\le\sigma\le s}\bigl\vert x(\sigma)\bigr\vert ^{p}\Bigr). \end{aligned}$$
(4.6)

From Assumption 4.1, we obtain

$$\begin{aligned} E \int_{0}^{t}\bigl\vert f(s,x_{s})\bigr\vert ^{p}\,ds \le&E \int_{0}^{t}\bigl[\rho\bigl(1+\vert x_{s}\vert _{g}^{2}\bigr)\bigr]^{\frac{p}{2}} \,ds, \\ \le&E \int_{0}^{t}\bigl[a+b\bigl(1+\vert x_{s}\vert _{g}^{2}\bigr)\bigr]^{\frac{p}{2}} \,ds. \end{aligned}$$

Applying the basic inequality \(|a+b|^{\frac{p}{2}}\le2^{\frac {p}{2}-1}(|a|^{\frac{p}{2}}+|b|^{\frac{p}{2}})\), it is easy to see that

$$\begin{aligned}& E \int_{0}^{t}\bigl\vert f(s,x_{s})\bigr\vert ^{p}\,ds \\& \quad\le 2^{\frac{p}{2}-1}E \int_{0}^{t}\bigl[(a+b)^{\frac{p}{2}}+b^{\frac {p}{2}} \vert x_{s}\vert _{g}^{p}\bigr]\,ds \\& \quad\le 2^{\frac{p}{2}-1}(a+b)^{\frac{p}{2}}\bigl(1+E \vert \xi \vert ^{p}_{g}\bigr)T+2^{\frac {p}{2}-1}(a+b)^{\frac{p}{2}} \int_{0}^{t}E\sup_{0\le\sigma\le s}\bigl\vert x(\sigma)\bigr\vert ^{p}\,ds. \end{aligned}$$
(4.7)

Inserting (4.6) and (4.7) into (4.5) and letting \(\epsilon_{2}=\sqrt {2(a+b)}\), we obtain

$$ E \int_{0}^{t}p \bigl\vert x(s)-D(s,x_{s}) \bigr\vert ^{p-1}\bigl\vert f(s,x_{s})\bigr\vert \,ds \le \bar{c}_{1}\bigl(1+E \vert \xi \vert ^{p}_{g} \bigr)T+\bar{c}_{1} \int_{0}^{t}E\sup_{0\le \sigma\le s} \bigl\vert x(\sigma)\bigr\vert ^{p}\,ds, $$
(4.8)

where \(\bar{c}_{1}=2\sqrt{2(a+b)}(p-1)(1+k_{0})^{p}+\sqrt{\frac {a+b}{2}}\). For the third term of the inequality (4.3), by applying the basic inequality

$$a^{p-2}b^{2}\le\frac{\epsilon(p-2)}{p}a^{p}+ \frac{1}{p\epsilon ^{\frac{p-2}{2}}}b^{p}, $$

valid for \(a,b>0\), \(p\ge2\), and \(\epsilon>0\), it follows that

$$\begin{aligned}& E \int_{0}^{t}\frac {p(p-1)}{2}\bigl\vert x(s)-D(s,x_{s})\bigr\vert ^{p-2}\bigl\vert g(s,x_{s})\bigr\vert ^{2}\,ds \\& \quad\le \frac{p(p-1)}{2} E \int_{0}^{t}\biggl[\frac{\epsilon _{3}(p-2)}{p}\bigl\vert x(s)-D(s,x_{s})\bigr\vert ^{p}+\frac{2}{p\epsilon_{3}^{\frac {p-2}{2}}}\bigl\vert g(s,x_{s})\bigr\vert ^{p}\biggr]\,ds \\& \quad\le\frac{p(p-1)}{2} \int_{0}^{t}\biggl[\frac{\epsilon _{3}(p-2)}{p}(1+k_{0})^{p} \Bigl(E\vert \xi \vert ^{p}_{g}+2E\sup _{0\le\sigma\le s}\bigl\vert x(\sigma )\bigr\vert ^{p}\Bigr) \biggr]\,ds \\& \qquad{}+\frac{p(p-1)}{2}\frac{2}{p\epsilon_{3}^{\frac{p-2}{2}}}2^{\frac {p}{2}-1}(a+b)^{\frac{p}{2}} \biggl[\bigl(1+E\vert \xi \vert ^{p}_{g}\bigr)T+ \int_{0}^{t}E\sup_{0\le \sigma\le s}\bigl\vert x(\sigma)\bigr\vert ^{p}\,ds\biggr]. \end{aligned}$$

Letting \(\epsilon_{3}=2(a+b)\),

$$\begin{aligned}& E \int_{0}^{t}\frac {p(p-1)}{2}\bigl\vert x(s)-D(s,x_{s})\bigr\vert ^{p-2}\bigl\vert g(s,x_{s})\bigr\vert ^{2}\,ds \\& \quad\le\bar{c}_{2}\bigl(1+E \vert \xi \vert ^{p}_{g}\bigr)T+\bar{c}_{2} \int_{0}^{t}E\sup_{0\le\sigma \le s} \bigl\vert x(\sigma)\bigr\vert ^{p}\,ds, \end{aligned}$$
(4.9)

where \(\bar{c}_{2}=(p-1)(a+b)[1+2(p-2)(1+k_{0})^{p}]\). Noting that the fourth term of (4.3) is a martingale, we easily obtain

$$ E\sup_{0 \le s\le t} \int_{0}^{s}p \bigl\vert x(\sigma)-D(\sigma ,x_{\sigma})\bigr\vert ^{p-2}\bigl[x(\sigma)-D( \sigma,x_{\sigma})\bigr]^{\top}g(\sigma ,x_{\sigma})\,dw( \sigma)=0. $$
(4.10)

Finally, noting that the compensated Lévy measure \(\tilde{N}(dt,dv)\) is a martingale measure and \(N(dt,dv)=\tilde{N}(dt,dv)+\pi(dv)\,dt\), we derive that

$$\begin{aligned}& E\sup_{0 \le s\le t} \int_{0}^{s} \int_{Z}\bigl(\bigl\vert x(\sigma )-D(\sigma,x_{\sigma})+h( \sigma,x_{\sigma-},v)\bigr\vert ^{p}-\bigl\vert x(\sigma )-D( \sigma,x_{\sigma})\bigr\vert ^{p}\bigr)N(d\sigma,dv) \\ & \quad\le E\sup_{0 \le s\le t} \int_{0}^{s} \int_{Z}\bigl(\bigl\vert x(\sigma )-D(\sigma,x_{\sigma})+h( \sigma,x_{\sigma-},v)\bigr\vert ^{p}-\bigl\vert x(\sigma )-D( \sigma,x_{\sigma})\bigr\vert ^{p}\bigr)\pi(dv)\,ds. \end{aligned}$$

By the mean value theorem, we obtain, for any \(|\theta|<1\),

$$\begin{aligned}& E\sup_{0 \le s\le t} \int_{0}^{s} \int_{Z}\bigl(\bigl\vert x(\sigma )-D(\sigma,x_{\sigma})+h( \sigma,x_{\sigma-},v)\bigr\vert ^{p}-\bigl\vert x(\sigma )-D( \sigma,x_{\sigma})\bigr\vert ^{p}\bigr)N(d\sigma,dv) \\ & \quad\le pE \int_{0}^{t} \int_{Z}\bigl[\bigl\vert x(s)-D(s,x_{s})+\theta h(s,x_{s-},v)\bigr\vert ^{p-1}\bigl\vert h(s,x_{s-},v)\bigr\vert \bigr]\pi(dv)\,ds. \end{aligned}$$

Together with the basic inequality \(|a+b|^{p-1}\le 2^{p-2}(|a|^{p-1}+|b|^{p-1})\), this implies that

$$\begin{aligned}& E\sup_{0 \le s\le t} \int_{0}^{s} \int_{Z}\bigl(\bigl\vert x(\sigma )-D(\sigma,x_{\sigma})+h( \sigma,x_{\sigma-},v)\bigr\vert ^{p}-\bigl\vert x(\sigma )-D( \sigma,x_{\sigma})\bigr\vert ^{p}\bigr)N(d\sigma,dv) \\ & \quad\le p2^{p-2}E \int_{0}^{t} \int _{Z}\bigl[\bigl\vert x(s)-D(s,x_{s})\bigr\vert ^{p-1}\bigl\vert h(s,x_{s-},v)\bigr\vert + \vert \theta \vert ^{p-1} \bigl\vert h(s,x_{s-},v)\bigr\vert ^{p}\bigr]\pi(dv)\,ds. \end{aligned}$$

By Assumption 4.1, we get

$$\begin{aligned} E \int_{0}^{t} \int_{Z}\bigl\vert h(s,x_{s-},v)\bigr\vert ^{p}\pi(dv)\,ds \le & E \int _{0}^{t}\bigl[\rho\bigl(1+\vert x_{s-}\vert _{g}^{2}\bigr)\bigr]^{\frac{p}{2}} \,ds \\ \le&2^{\frac{p}{2}-1}(a+b)^{\frac{p}{2}}\biggl(E\vert \xi \vert ^{p}_{g}T+ \int _{0}^{t}E\sup_{0\le\sigma\le s}\bigl\vert x(\sigma)\bigr\vert ^{p}\,ds\biggr). \end{aligned}$$

Similar to the computation of (4.8), it follows that

$$\begin{aligned}& E \int_{0}^{t} \int _{Z}p2^{p-2}\bigl\vert x(s)-D(s,x_{s}) \bigr\vert ^{p-1}\bigl\vert h(s,x_{s-},v)\bigr\vert \pi(dv)\,ds \\ & \quad\le \biggl[(p-1)2^{p}(1+k_{0})^{p}\pi(Z)+ \frac{\sqrt{2}}{4}\sqrt {a+b}\biggr]\bigl(1+E\vert \xi \vert ^{p}_{g}\bigr)T \\ & \qquad{}+\biggl[(p-1)2^{p}(1+k_{0})^{p}\pi(Z)+ \frac{\sqrt{2}}{4}\sqrt{a+b}\biggr] \int _{0}^{t}E\sup_{0\le\sigma\le s}\bigl\vert x(\sigma)\bigr\vert ^{p}\,ds. \end{aligned}$$

Hence, we have

$$\begin{aligned}& E\sup_{0 \le s\le t} \int_{0}^{s} \int_{Z}\bigl(\bigl\vert x(\sigma )-D(\sigma,x_{\sigma})+h( \sigma,x_{\sigma-},v)\bigr\vert ^{p}-\bigl\vert x(\sigma )-D( \sigma,x_{\sigma})\bigr\vert ^{p}\bigr)N(d\sigma,dv) \\ & \quad\le \bar{c}_{3}\bigl(1+E \vert \xi \vert ^{p}_{g}\bigr)T+\bar{c}_{3} \int_{0}^{t}E\sup_{0\le \sigma\le s} \bigl\vert x(\sigma)\bigr\vert ^{p}\,ds, \end{aligned}$$
(4.11)

where \(\bar{c}_{3}=p2^{p-2}2^{\frac{p}{2}-1}(a+b)^{\frac {p}{2}}+(p-1)2^{p}(1+k_{0})^{p}\pi(Z)+\frac{\sqrt{2}}{4}\sqrt{a+b}\). Combining (4.3), (4.4), and (4.8)-(4.11), we obtain

$$ E\sup_{{0} \le s\le t}\bigl(\bigl\vert x(s)-D(s,x_{s})\bigr\vert ^{p}\bigr) \le M_{1}+M_{2} \int _{0}^{t}E\sup_{0\le\sigma\le s}\bigl\vert x(\sigma)\bigr\vert ^{p}\,ds, $$
(4.12)

where \(M_{1}=(1+k_{0})^{p}E|\xi|^{p}_{g}+\sum_{k=1}^{3}\bar{c}_{k}(1+E|\xi |^{p}_{g})T\) and \(M_{2}=2\sum_{k=1}^{3}\bar{c}_{k}\). On the other hand, by Lemma 3.3 and Assumption 2.1, it follows that

$$\begin{aligned} E\sup_{0 \le s\le t} \bigl\vert x(s)\bigr\vert ^{p} \le& k_{0}E\sup_{0 \le s\le t}\vert x_{s}\vert _{g}^{p}+\frac{1}{(1-k_{0})^{p-1}}E\sup_{0 \le s\le t} \bigl(\bigl\vert x(s)-D(s,x_{s})\bigr\vert ^{p}\bigr) \\ \le& k_{0}E \vert \xi \vert _{g}^{p}+k_{0}E \sup_{0 \le s\le t}\bigl|x(s)\bigr|^{p}+\frac{1}{(1-k_{0})^{p-1}}E\sup _{0 \le s\le t}\bigl(\bigl\vert x(s)-D(s,x_{s})\bigr\vert ^{p}\bigr). \end{aligned}$$

Therefore,

$$\begin{aligned} E\sup_{0 \le s\le t}\bigl\vert x(s)\bigr\vert ^{p} \le& \frac{k_{0}}{1-k_{0}}E \vert \xi \vert _{g}^{p}+ \frac{1}{(1-k_{0})^{p}}E\sup_{0 \le s\le t}\bigl(\bigl\vert x(s)-D(s,x_{s})\bigr\vert ^{p}\bigr) \\ \le & \frac{k_{0}}{1-k_{0}}E \vert \xi \vert _{g}^{p}+ \frac{M_{1}}{(1-k_{0})^{p}}+\frac {M_{2}}{(1-k_{0})^{p}} \int_{0}^{t}E\sup_{0\le\sigma\le s}\bigl\vert x(\sigma )\bigr\vert ^{p}\,ds. \end{aligned}$$
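
For the final step we use the Gronwall inequality in its classical form: if \(u\) is a nonnegative, bounded, measurable function on \([0,T]\) satisfying \(u(t)\le C+M\int_{0}^{t}u(s)\,ds\) for all \(t\in[0,T]\) with constants \(C,M\ge0\), then \(u(t)\le Ce^{Mt}\le Ce^{MT}\). Here it is applied with

$$u(t)=E\sup_{0 \le s\le t}\bigl\vert x(s)\bigr\vert ^{p}, \qquad C=\frac{k_{0}}{1-k_{0}}E \vert \xi \vert _{g}^{p}+\frac{M_{1}}{(1-k_{0})^{p}}, \qquad M=\frac{M_{2}}{(1-k_{0})^{p}}. $$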

Hence the Gronwall inequality implies that

$$E\sup_{0 \le s\le t} \bigl|x(s) \bigr|^{p} \le \biggl[ \frac{k_{0}}{1-k_{0}}E \vert \xi \vert _{g}^{p}+ \frac{M_{1}}{(1-k_{0})^{p}}\biggr]e^{\frac {M_{2}}{(1-k_{0})^{p}}T}. $$

The proof is complete. □

Remark 4.2

From Theorem 4.1, we see that the \(p\)th moment grows at most exponentially with exponent \(\frac{M_{2}}{(1-k_{0})^{p}}\). In particular, (4.1) implies that

$$ \limsup_{t\to\infty}\frac{1}{t}\log\bigl(E \bigl\vert x(t) \bigr\vert ^{p}\bigr) \le \frac{M_{2}}{(1-k_{0})^{p}}. $$
(4.13)

Inequality (4.13) shows that the \(p\)th moment Lyapunov exponent of the solution is at most \(\frac{M_{2}}{(1-k_{0})^{p}}\).
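
Indeed, applying Theorem 4.1 on the interval \([0,t]\) and writing the resulting bound as \(E\vert x(t)\vert ^{p}\le E\sup_{0\le s\le t}\vert x(s)\vert ^{p}\le C(t)e^{\lambda t}\), where \(\lambda=\frac{M_{2}}{(1-k_{0})^{p}}\) and \(C(t)=\frac{k_{0}}{1-k_{0}}E\vert \xi \vert _{g}^{p}+\frac{M_{1}(t)}{(1-k_{0})^{p}}\) with \(M_{1}(t)\) denoting \(M_{1}\) with \(T\) replaced by \(t\) (so that \(C(t)\) grows only linearly in \(t\)), we get

$$\frac{1}{t}\log\bigl(E\bigl\vert x(t)\bigr\vert ^{p}\bigr)\le\frac{\log C(t)}{t}+\lambda \quad\mbox{and}\quad \lim_{t\to\infty}\frac{\log C(t)}{t}=0, $$

which yields (4.13).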

The next theorem shows that the \(p\)th moment exponential estimate implies an almost sure asymptotic estimate, and it provides an upper bound for the sample Lyapunov exponent.

Theorem 4.2

Under Assumption  4.1, we have

$$ \limsup_{t\to\infty}\frac{1}{t}\log \bigl\vert x(t)\bigr\vert \le\frac{9\sqrt {a+b}+3(a+b)+16\pi(Z)}{(1-k_{0})^{2}}, \quad \textit{a.s.} $$
(4.14)

That is, the sample Lyapunov exponent of the solution is at most \(\frac{9\sqrt{a+b}+3(a+b)+16\pi(Z)}{(1-k_{0})^{2}}\).

Proof

For each \(n=1,2,\ldots\), it follows from Theorem 4.1 (taking \(p=2\)) that

$$E\Bigl(\sup_{n-1 \le t\le n }\bigl\vert x(t)\bigr\vert ^{2} \Bigr) \le \beta e^{\gamma n}, $$

where \(\beta=\frac{1+3k_{0}}{(1-k_{0})^{2}}E|\xi|^{2}_{g}+\frac{9\sqrt{a+b}+3(a+b)+16\pi(Z)}{(1-k_{0})^{2}}(1+E|\xi|^{2}_{g})T\) and \(\gamma=\frac{18\sqrt{a+b}+6(a+b)+32\pi(Z)}{(1-k_{0})^{2}}\). Hence, for any \(\varepsilon>0\), the Chebyshev inequality gives

$$P\Bigl\{ \omega:\sup_{n-1 \le t\le n}\bigl\vert x(t)\bigr\vert ^{2}>e^{(\gamma+\varepsilon)n}\Bigr\} \le e^{-(\gamma+\varepsilon)n}E\Bigl(\sup_{n-1 \le t\le n}\bigl\vert x(t)\bigr\vert ^{2}\Bigr)\le \beta e^{-\varepsilon n}. $$

Since \(\sum_{n=0}^{\infty}\beta e^{-\varepsilon n}<\infty\), the Borel-Cantelli lemma implies that, for almost all \(\omega\in\Omega\), there exists an integer \(n_{0}(\omega)\) such that

$$\sup_{n-1 \le t\le n} \bigl\vert x(t)\bigr\vert ^{2}\le e^{(\gamma+\varepsilon)n} \quad \mbox{for all } n\ge n_{0}(\omega). $$

Thus, for such \(\omega\), if \(n-1 \le t\le n\) and \(n\ge n_{0}(\omega)\), then

$$ \frac{1}{t}\log \bigl\vert x(t)\bigr\vert = \frac{1}{2t}\log\bigl(\bigl\vert x(t)\bigr\vert ^{2}\bigr)\le \frac{(\gamma +\varepsilon)n}{2(n-1)}. $$
(4.15)

Letting \(t\to\infty\) (and hence \(n\to\infty\)) in (4.15) yields the almost sure exponential estimate

$$\limsup_{t\to\infty}\frac{1}{t}\log \bigl\vert x(t)\bigr\vert \le\frac{\gamma+\varepsilon}{2}, \quad \mbox{a.s.} $$

Since \(\varepsilon>0\) is arbitrary and \(\frac{\gamma}{2}=\frac{9\sqrt{a+b}+3(a+b)+16\pi(Z)}{(1-k_{0})^{2}}\), the required assertion (4.14) follows. □

5 Examples

Example 5.1

Let us return to equation (1.1),

$$ dx(t)=k\bigl[\lambda-x(t)\bigr]\,dt+\theta\sqrt{x(t)}\,dw(t), \quad x(0)=x_{0}, $$
(5.1)

where \(k,\lambda\ge0, \theta>0\), and \(x_{0}>0\). \(w(t)\) is a one-dimensional Brownian motion.

Clearly, the diffusion coefficient of equation (5.1) does not satisfy the Lipschitz condition. Taking \(k(t,u)=\theta k(u)\) with \(k(u)=\sqrt{u}\), we see that \(k(u)\) is a nondecreasing, positive, and concave function on \([0,\infty)\) with \(k(0)=0\) and

$$\int_{0^{+}}\frac{du}{k(u)}=\lim_{\varepsilon\to0^{+}} \int _{\varepsilon}^{+\infty}\frac{1}{\sqrt{u}}\,du= \lim _{\varepsilon\to0^{+}} \int_{\varepsilon}^{+\infty}2\,d{\sqrt {u}}=2\lim _{\varepsilon\to0^{+}}{\sqrt{u}}|_{\varepsilon}^{+\infty }=\infty. $$

Then, by the comparison theorem for differential dynamical systems, Assumptions 2.2 and 2.3 hold for equation (5.1).
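
To illustrate equation (5.1) numerically, the following short simulation sketch (not part of the original analysis) uses a plain Euler-Maruyama discretization with full truncation of the square-root argument; the function name and all parameter values are arbitrary choices made only for illustration.

import numpy as np

def simulate_cir(k=0.5, lam=0.04, theta=0.2, x0=0.03, T=1.0, n_steps=1000, seed=0):
    # Euler-Maruyama sketch for dx = k*(lam - x) dt + theta*sqrt(x) dw, x(0) = x0.
    # The square-root argument is truncated at zero ("full truncation") because a
    # discretized path, unlike the exact solution, may become slightly negative.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))   # Brownian increment over one step
        xp = max(x[i], 0.0)                 # truncation keeps sqrt well defined
        x[i + 1] = x[i] + k * (lam - xp) * dt + theta * np.sqrt(xp) * dw
    return x

path = simulate_cir()
print(path[-1])                             # terminal value of one simulated path

The truncation is only a numerical safeguard: the exact solution of (5.1) stays nonnegative (and strictly positive when \(2k\lambda\ge\theta^{2}\) and \(x_{0}>0\)), but an Euler step carries no such guarantee.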

Example 5.2

Consider the semi-linear NSFDE with pure jumps

$$ d\bigl[x(t)-0.1x_{t}\bigr]=ax_{t}\,dt+ \int_{Z}b(x_{t-})\sigma(t,x_{t-})vN(dt,dv), \quad t\in[0,T]. $$
(5.2)

Here \(D(t,x_{t})=0.1x_{t}\) and \(a\) is a constant. Assume that \(b(\cdot)\) satisfies the following local Lipschitz condition: for each \(N>0\), there exists a positive constant \(J_{N}\) such that, for all \(\varphi,\psi\in C_{g}\) with \(|\varphi|,|\psi|\le N\) and all \(t\in[0,T]\),

$$\bigl\vert b(\varphi)-b(\psi)\bigr\vert ^{2}\le J_{N} \vert \varphi-\psi \vert _{g}^{2}. $$

Moreover, \(\sigma(t,\cdot)\) satisfies Assumptions 2.2-2.4 and there exists a positive constant \(\bar{C}\) such that

$$\sup_{\varphi\in C_{g}, t\in[0,T]} \bigl\vert \sigma(t,\varphi)\bigr\vert \le \bar{C}. $$

It is easily seen that equation (5.2) does not satisfy the non-Lipschitz conditions of [32, 34], so the results in [32, 34] do not apply to equation (5.2).

We assume that \(\int_{Z}|v|^{2}\pi(dv)<\infty\) and that \(b(u)\) and \(\sigma(t,u)\) are continuous in \(u\) on \([0,\infty)\). Let

$$A_{N}(x)=\biggl[a^{2}+2J_{N} \bar{C}^{2} \int_{Z} \vert v\vert ^{2}\pi(dv) \biggr]x,\qquad B_{N}(t,x)=2\biggl[\sup_{|x|\le N}\bigl\vert b(x)\bigr\vert ^{2} \int_{Z} \vert v\vert ^{2}\pi(dv)\biggr]k(t,x), $$

where \(k(t,x)\) satisfies Assumptions 2.2 and 2.3. Obviously, the coefficients \(0.1x\), \(ax\), and \(b(x)\sigma(t,x)v\) satisfy Assumptions 2.2-2.4, and \(k_{N}(t,x)=A_{N}(x)+B_{N}(t,x)\) satisfies Assumption 2.5. Hence, by Theorem 3.2, equation (5.2) has a unique solution \(x(t)\) on \([0,T]\).
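
To indicate how \(A_{N}\) and \(B_{N}\) are assembled, here is a sketch of the local estimate for the jump coefficient; it assumes, consistently with the way Assumptions 2.2-2.3 are used above, that \(\vert \sigma(t,\varphi)-\sigma(t,\psi)\vert ^{2}\le k(t,\vert \varphi-\psi \vert _{g}^{2})\) for all \(\varphi,\psi\in C_{g}\) with \(\vert \varphi \vert ,\vert \psi \vert \le N\):

$$\begin{aligned} \int_{Z}\bigl\vert b(\varphi)\sigma(t,\varphi)v-b(\psi)\sigma(t,\psi)v\bigr\vert ^{2}\pi(dv) \le&2 \int_{Z}\bigl(\bigl\vert b(\varphi)-b(\psi)\bigr\vert ^{2}\bigl\vert \sigma(t,\varphi)\bigr\vert ^{2}+\bigl\vert b(\psi)\bigr\vert ^{2}\bigl\vert \sigma(t,\varphi)-\sigma(t,\psi)\bigr\vert ^{2}\bigr)\vert v\vert ^{2}\pi(dv) \\ \le&2\biggl[J_{N}\bar{C}^{2} \int_{Z}\vert v\vert ^{2}\pi(dv)\biggr]\vert \varphi-\psi \vert _{g}^{2}+2\biggl[\sup_{|x|\le N}\bigl\vert b(x)\bigr\vert ^{2} \int_{Z}\vert v\vert ^{2}\pi(dv)\biggr]k\bigl(t,\vert \varphi-\psi \vert _{g}^{2}\bigr). \end{aligned}$$

Adding the drift contribution \(\vert a\varphi-a\psi \vert _{g}^{2}=a^{2}\vert \varphi-\psi \vert _{g}^{2}\) then gives a bound of exactly the form \(A_{N}(\vert \varphi-\psi \vert _{g}^{2})+B_{N}(t,\vert \varphi-\psi \vert _{g}^{2})=k_{N}(t,\vert \varphi-\psi \vert _{g}^{2})\).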

References

  1. Hale, J, Meyer, K: A class of functional equations of neutral type. Mem. Am. Math. Soc. 76, 1-65 (1967)

  2. Hale, J, Lunel, S: Introduction to Functional Differential Equations. Springer, New York (1991)

  3. Kolmanovskii, V, Myshkis, A: Applied Theory of Functional Differential Equations. Kluwer Academic Publishers, Norwell (1992)

  4. Kolmanovskii, V, Nosov, V: Stability of Functional Differential Equations. Academic Press, New York (1986)

  5. Bao, H, Cao, J: Existence and uniqueness of solutions to neutral stochastic functional differential equations with infinite delay. Appl. Comput. Math. 215, 1732-1743 (2009)

  6. Huang, L, Deng, F: Razumikhin-type theorems on stability of neutral stochastic functional differential equations. IEEE Trans. Autom. Control 53, 1718-1723 (2008)

  7. Jankovic, S, Randjelovic, J, Jovanovic, M: Razumikhin-type exponential stability criteria of neutral stochastic functional differential equations. J. Math. Anal. Appl. 355, 811-820 (2009)

  8. Liu, K, Xia, X: On the exponential stability in mean square of neutral stochastic functional differential equations. Syst. Control Lett. 37, 207-215 (1999)

  9. Mao, X: Stochastic Differential Equations and Applications, 2nd edn. Horwood Publishing Limited, Chichester (2008)

  10. Mao, X: Exponential stability in mean square of neutral stochastic differential functional equations. Syst. Control Lett. 26, 245-251 (1995)

  11. Mao, X: Razumikhin-type theorems on exponential stability of neutral stochastic differential equations. SIAM J. Math. Anal. 28, 389-401 (1997)

  12. Wu, F, Hu, S, Mao, X: Razumikhin-type theorem for neutral stochastic functional differential equations with unbounded delay. Acta Math. Sci. 31B, 1245-1258 (2011)

  13. Xu, Y, Hu, S: The existence and uniqueness of the solution for neutral stochastic functional differential equations with infinite delay in abstract space. Acta Appl. Math. 110, 627-638 (2010)

  14. Yang, X, Zhu, Q: Existence, uniqueness, and stability of stochastic neutral functional differential equations of Sobolev-type. J. Math. Phys. 56, no. 12, 122701, 16 pp. (2015)

  15. Zhou, S, Xue, M: The existence and uniqueness of the solutions for neutral stochastic functional differential equations with infinite delay. Math. Appl. 21, 75-83 (2008)

  16. Jiang, F, Shen, Y, Wu, F: A note on order of convergence of numerical method for neutral stochastic functional differential equations. Commun. Nonlinear Sci. Numer. Simul. 17, 1194-1200 (2012)

  17. Liu, L, Zhu, Q: Mean square stability of two classes of theta method for neutral stochastic differential delay equations. J. Comput. Appl. Math. 305, 55-67 (2016)

  18. Mo, H, Zhao, X, Deng, F: Exponential mean-square stability of the θ-method for neutral stochastic delay differential equations with jumps. International Journal of Systems Science, 1-9 (2016)

  19. Wu, F, Mao, X: Numerical solutions of neutral stochastic functional differential equations. SIAM J. Numer. Anal. 46, 1821-1841 (2008)

  20. Wang, W, Chen, Y: Mean-square stability of semi-implicit Euler method for nonlinear neutral stochastic delay differential equations. Appl. Numer. Math. 61, 696-701 (2011)

  21. Yu, Z: Almost sure and mean square exponential stability of numerical solutions for neutral stochastic functional differential equations. Int. J. Comput. Math. 92, 132-150 (2015)

  22. Zhou, S, Wu, F: Convergence of numerical solutions to neutral stochastic delay differential equations with Markovian switching. J. Comput. Appl. Math. 229, 85-96 (2009)

  23. Zong, X, Wu, F, Huang, C: Exponential mean square stability of the theta approximations for neutral stochastic differential delay equations. J. Comput. Appl. Math. 286, 172-185 (2015)

  24. Zong, X, Wu, F: Exponential stability of the exact and numerical solutions for neutral stochastic delay differential equations. Appl. Math. Model. 40, 19-30 (2016)

  25. Zhu, Q: Pth moment exponential stability of impulsive stochastic functional differential equations with Markovian switching. J. Franklin Inst. 351, 3965-3986 (2014)

  26. Zhu, Q: Razumikhin-type theorem for stochastic functional differential equations with Lévy noise and Markov switching. Int. J. Control, 1-10 (2016)

  27. Cox, J: Notes on option pricing I: Constant elasticity of variance diffusions. Working paper, Stanford University (1975)

  28. Mao, X: Adapted solutions of backward stochastic differential equations with non-Lipschitz coefficients. Stoch. Process. Appl. 58, 281-292 (1995)

  29. Taniguchi, T: Successive approximations to solutions of stochastic differential equations. J. Differ. Equ. 96, 152-169 (1992)

  30. Yamada, T: On the successive approximation of solutions of stochastic differential equations. J. Math. Kyoto Univ. 21, 501-515 (1981)

  31. Ren, Y, Xia, N: Existence, uniqueness and stability of the solutions to neutral stochastic functional differential equations with infinite delay. Appl. Comput. Math. 210, 72-79 (2009)

  32. Ren, Y, Xia, N: A note on the existence and uniqueness of the solution to neutral stochastic functional differential equations with infinite delay. Appl. Comput. Math. 214, 457-461 (2009)

  33. Bao, J, Hou, Z: Existence of mild solutions to stochastic neutral partial functional differential equations with non-Lipschitz coefficients. Comput. Appl. Math. 59, 207-214 (2010)

  34. Boufoussi, B, Hajji, S: Successive approximation of neutral functional stochastic differential equations with jumps. Stat. Probab. Lett. 80, 324-332 (2010)

  35. Luo, J, Taniguchi, T: The existence and uniqueness for non-Lipschitz stochastic neutral delay evolution equations driven by Poisson jumps. Stoch. Dyn. 9, 135-152 (2009)

  36. Ren, Y, Chen, L: A note on the neutral stochastic functional differential equation with infinite delay and Poisson jumps in an abstract space. J. Math. Phys. 50, 1-8 (2009)

  37. Wei, F, Cai, Y: Existence, uniqueness and stability of the solution to neutral stochastic functional differential equations with infinite delay under non-Lipschitz conditions. Adv. Differ. Equ. 2013, 151 (2013)

  38. Arino, O, Burton, T, Haddock, J: Periodic solutions to functional differential equations. Proc. R. Soc. Edinb. 101, 253-271 (1985)

  39. Coddington, EA, Levinson, N: Theory of Ordinary Differential Equations. McGraw-Hill, New York (1955)

  40. Kunita, H: Stochastic differential equations based on Lévy processes and stochastic flows of diffeomorphisms. In: Real and Stochastic Analysis, pp. 305-373. Birkhäuser, Boston (2004)

  41. Zhu, Q: Asymptotic stability in the pth moment for stochastic differential equations with Lévy noise. J. Math. Anal. Appl. 416, 126-142 (2014)

Acknowledgements

The authors would like to thank the Edinburgh Mathematical Society (RKES130172) and the National Natural Science Foundation of China under NSFC grant (11401261, 11471071) for their financial support.

Author information

Corresponding author

Correspondence to Wei Mao.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the manuscript and typed, read, and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Mao, W., Hu, L. & Mao, X. Neutral stochastic functional differential equations with Lévy jumps under the local Lipschitz condition. Adv Differ Equ 2017, 57 (2017). https://doi.org/10.1186/s13662-017-1102-9


Keywords