
The existence and uniqueness of mild solutions to stochastic differential equations with Lévy noise

Abstract

In this paper, we study a class of neutral stochastic differential equations (NSDEs) driven by a cylindrical Brownian motion and Lévy noise in an infinite-dimensional Hilbert space. The existence and uniqueness of mild solutions to these stochastic differential equations are discussed under a linear growth assumption on the coefficients. The results of Taniguchi (J. Math. Anal. Appl. 360:245-253, 2009) are generalized and improved, being recovered as a special case of our theory.

1 Introduction

Neutral stochastic differential equations have attracted much attention because of their practical applications in many areas such as physics, population dynamics, electrical engineering, medicine, biology, ecology and other areas of science and engineering [1-7]. It is very important to find the solutions of stochastic differential equations (SDEs) with additive noise on infinite-dimensional state spaces, so there has been an increasing interest in the investigation of the existence and uniqueness of mild solutions for a class of neutral stochastic differential equations [8-12]. In particular, neutral stochastic differential equations driven by Poisson jump processes have been studied in [13-18]. By introducing the Itô stochastic calculus with G-Brownian motion, Revathi and Sakthivel extended the existence and uniqueness results to a class of nonautonomous stochastic neutral differential equations with infinite delay in real separable Hilbert spaces [19, 20]. In addition, Benchaabane studied a class of nonlinear fractional Sobolev-type stochastic differential equations in Hilbert spaces [21].

However, it should be mentioned that only a few papers have discussed the existence and uniqueness of mild solutions of stochastic differential equations driven by both Brownian motion and Lévy noise. Cao established the existence and uniqueness of mild solutions to semilinear backward stochastic evolution equations driven by the cylindrical Brownian motion and a Poisson point process in a Hilbert space with non-Lipschitzian coefficients by the method of successive approximations [22]. Luo considered the existence and uniqueness of mild solutions to stochastic neutral delay evolution equations with finite delay and Poisson jumps by the Banach fixed point theorem [23]. Furthermore, Albeverio discussed the existence of mild solutions for stochastic differential equations and semilinear equations with non-Gaussian Lévy noise [24]. Lately, Cui and Yan proved the existence and uniqueness of mild solutions of neutral stochastic evolution equations with infinite delay and Poisson jumps under non-Lipschitz conditions [25]. Very recently, Mao established the existence and uniqueness theorem of mild solutions to general neutral stochastic functional differential equations with infinite delay and Lévy jumps under local Carathéodory-type conditions [26]. Note that these works established the existence and uniqueness of the solution under non-Lipschitz conditions, and their methods depend heavily on the given hypotheses. It is therefore important to find applicable and flexible conditions that ensure such existence results. Motivated by the aforementioned works, we aim to study the existence and uniqueness of mild solutions for SDEs with the cylindrical Brownian motion and Lévy noise in an infinite-dimensional Hilbert space of the form

$$ dx(t)= \bigl[Tx(t)+A\bigl(t,x(t)\bigr)\bigr]\, dt+g\bigl(t,x(t) \bigr)\,dW_{t}+ \int_{H\setminus\{0\}} h\bigl(t,u,x(t)\bigr) \tilde{N}(dt,du), $$
(1.1)

where T is the infinitesimal generator of a pseudo-contraction semigroup \((S_{t})_{t\geq0}\), and \(A: {\mathbb {R}}^{+}\times D({\mathbb {R}}^{+},H)\rightarrow H\), \(g: {\mathbb {R}}^{+}\times H\to L_{2}(H)\) and \(h: {\mathbb {R}}^{+}\times H\setminus\{0\}\times D({ \mathbb {R}}^{+},H)\rightarrow H\) are jointly measurable functions; \(W_{t}\) and \(\tilde{N}(t,u)\) denote a cylindrical Brownian motion and a compensated Poisson random measure, respectively. Our main result is established with the aid of pseudo-contraction semigroup theory and a successive approximation argument; in this way the results of Taniguchi [11] are generalized.

This paper is organized as follows. In Section 2 we define the fundamental concepts and notations for stochastic integration with respect to Wiener processes and Lévy random measures. In Section 3 we provide, for the sake of completeness, existence and uniqueness results for Hilbert space-valued SDEs. Finally, we give an example to illustrate the theory in Section 4.

2 Preliminaries

Let E and H be separable Hilbert spaces with the norms \(\| \cdot\|_{E}\) and \(\|\cdot\|_{H}\), respectively. The inner products in E and H are denoted by \((\cdot,\cdot)_{E} \) and \(\langle\cdot,\cdot\rangle_{H} \), respectively. Let \(L(E, H)\) denote the space of all bounded linear operators from E to H.

Let \((\Omega, \mathbf{P}, {\mathbb {F}})\) be a complete probability space on which an increasing and right-continuous family \(({\mathbb {F}}_{t} )_{t\in[0,\infty]}\) of complete sub-σ-algebras of \({\mathbb {F}}\) is defined. The filtered probability space \((\Omega, {\mathbb {F}}, ({\mathbb {F}}_{t})_{t\in[0,\infty]},\mathbf{P})\) satisfies the ‘usual hypotheses’:

  1. (i)

    \({\mathbb {F}}_{t}\) contains all null sets of \({\mathbb {F}}\) for all t such that \(0\leq t< \infty\).

  2. (ii)

    \({\mathbb {F}}_{t}={\mathbb {F}}^{+}_{t}\), where \({\mathbb {F}}^{+}_{t}=\bigcap_{ u>t}{\mathbb {F}}_{u}\) for all t such that \(0\leq t< \infty\), i.e., the filtration is right continuous.

Let \((L(t):t > 0)\) be a Lévy process with values in a separable Banach space H and define

$$ N(t,\Lambda):=\sum_{s\in[0,t]}I_{\Lambda}\bigl( \triangle L(s)\bigr) $$

for every \(\Lambda\in \mathcal {B}(H\setminus\{0\})\). Together with the Lévy measure \(\nu(\Lambda):={\mathbb {E}}[N(1,\Lambda)]\), the compensated Poisson random measure is defined by

$$ \widetilde{N}(t,\Lambda):=N(t,\Lambda)-t\nu(\Lambda) $$

with \({\mathbb {E}}[\widetilde{N}(t,\Lambda)^{2}]=t\nu(\Lambda)\).
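To make the objects N, ν and Ñ concrete, the following short numerical sketch (not part of the paper; the choice of a real-valued compound Poisson process with an exponential jump law and all variable names are illustrative assumptions) counts the jumps falling in a set Λ and checks that the compensated count \(N(t,\Lambda)-t\nu(\Lambda)\) has mean approximately zero and variance approximately \(t\nu(\Lambda)\).

import numpy as np

# Illustrative sketch only: a real-valued compound Poisson process with
# intensity `rate` and Exp(1) jump sizes, so its (finite) Levy measure is
# nu(du) = rate * exp(-u) du on (0, infinity).
rng = np.random.default_rng(0)
rate, t = 2.0, 5.0                          # jump intensity and time horizon
a = 0.5                                     # Lambda = [a, infinity)

n_paths = 20000
counts = np.zeros(n_paths)
for i in range(n_paths):
    n_jumps = rng.poisson(rate * t)                 # jumps of L on [0, t]
    jumps = rng.exponential(1.0, size=n_jumps)      # jump sizes Delta L(s)
    counts[i] = np.sum(jumps >= a)                  # N(t, Lambda)

nu_Lambda = rate * np.exp(-a)                       # nu(Lambda) for Exp(1) jumps
compensated = counts - t * nu_Lambda                # N(t, Lambda) - t nu(Lambda)
print(compensated.mean(), compensated.var(), t * nu_Lambda)   # ~0, both ~ t nu(Lambda)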

A stochastic process X is said to be càdlàg if its sample paths are almost surely right continuous with left limits. Let \(\Omega=D({\mathbb {R}}^{+},H)\) be the space of càdlàg functions defined on \({\mathbb {R}}^{+}\) with values in H, equipped with the norm \(\Vert x\Vert_{\infty} :=\sup_{t\in[0,T]}\Vert x(t) \Vert_{H}\).

Let \(Q\in L(E)\) be a nonnegative self-adjoint operator, and let \(L_{2}^{0}(E, H)\) denote the space of all \(\xi\in L(E, H)\) such that \(\xi\sqrt{Q}\) is a Hilbert-Schmidt operator, with norm \(\lVert\xi \rVert_{L_{2}^{0}}^{2}:=\operatorname{tr}(\xi Q \xi^{*})<\infty\). Such a ξ is called a Q-Hilbert-Schmidt operator from E to H. Let \(\beta_{n}(t)\) (\(n =1, 2, 3,\dots\)) be a sequence of mutually independent real-valued standard Brownian motions on \((\Omega, \mathbf{P}, \mathbb{F})\), and let \(\{e_{n}\}\) (\(n=1, 2, 3,\ldots\)) be a complete orthonormal basis in E. Let Q be a nonnegative, linear and bounded covariance operator such that \(\operatorname {tr}(Q)<\infty\), and assume that there exists a bounded sequence of nonnegative real numbers \(\{\lambda_{n}\}\) such that \(Qe_{n}=\lambda_{n}e_{n}\), \(n=1, 2,\ldots\) . Thus we consider an E-valued stochastic process W(t) given formally by the following series:

$$ W(t):=\sum_{n=1}^{\infty} \beta_{n}(t)\sqrt{Q} e_{n}\quad(t\geqslant0), Q\in L(E). $$
(2.1)

From now on, we suppose that the operator \(Q \in L(E)\) is a nonnegative self-adjoint trace-class operator; then this series converges in \(L_{2}(\Omega, E)\), that is, \(W(t)\in L_{2}(\Omega, E)\). For \(v \in E\), the inner product \((W(t),v)_{E}\) given by the above E-valued stochastic process \(W(t)\) satisfies the conditions of a cylindrical Wiener process, since

$$ \bigl(W(t),v\bigr)_{E}=\sum_{n=1}^{\infty} \beta_{n}(t) \bigl(\sqrt{Q} e_{n},v\bigr)_{E},\quad v\in E, t\geqslant0. $$
(2.2)

Hence, when the operator \(Q\in L(E)\) is a nonnegative self-adjoint trace-class operator, we call the above \(W(t)\) the E-valued Q-cylindrical Wiener process with covariance operator Q.
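As a numerical illustration of the series (2.1), one can truncate the sum and check that \({\mathbb E}\lVert W(T)\rVert^{2}\approx T\operatorname{tr}(Q)\). The sketch below rests on assumptions not made in the paper: the eigenvalues are taken to be \(\lambda_{n}=n^{-2}\) so that \(\operatorname{tr}(Q)<\infty\), and the basis \(e_{n}\) is kept abstract by working with coefficients only.

import numpy as np

# Hedged sketch of a truncated Q-Wiener process: W(T) is represented by its
# coefficients sqrt(lambda_n) * beta_n(T) in the orthonormal basis {e_n}.
rng = np.random.default_rng(1)
n_modes, T = 50, 1.0
lam = 1.0 / np.arange(1, n_modes + 1) ** 2       # hypothetical eigenvalues of Q

n_paths = 5000
# beta_n(T) ~ N(0, T), independent over n and over paths
WT = rng.normal(0.0, np.sqrt(T), size=(n_paths, n_modes)) * np.sqrt(lam)
print(np.mean(np.sum(WT ** 2, axis=1)), T * np.sum(lam))   # both close to T * tr(Q)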

Next, let \(\varphi(s)\) be an \(L_{2}^{0}(E, H)\)-valued \({\mathbb {F}}_{t}\)-adapted stochastic process with

$$ {\mathbb {E}}\biggl[ \int_{0}^{t}\bigl\lVert \varphi(s)\bigr\rVert _{L_{2}^{0}}^{2}\,ds \biggr]< +\infty. $$
(2.3)

Then we define the stochastic integral

$$ \int_{0}^{t}\varphi(s)\,dW(s)\in H,\quad t\geqslant0 $$
(2.4)

of φ with respect to the E-valued Q-cylindrical Wiener process \(W(t)\) by

$$ \biggl\langle \int_{0}^{t}\varphi(s)\,dW(s),v\biggr\rangle _{H} :=\sum_{n=1}^{\infty} \int_{0}^{t}\bigl(\varphi(s)\sqrt{Q}e_{n},v \bigr)\,d\beta_{n}(s) $$

for any \(v \in H\) using the Itô integral with respect to \(\beta_{n}(s)\).
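For a fixed, deterministic φ the definition above reduces, mode by mode, to a sum of one-dimensional Itô integrals. The sketch below is an illustration only; the matrix φ, the truncation levels and the eigenvalues \(\lambda_{n}=n^{-2}\) are all invented for the example and are not fixed by the paper.

import numpy as np

# Hedged sketch: a Riemann-Ito discretisation of int_0^t phi dW(s) when phi is
# a constant operator, represented by a matrix in the bases of E and H.
rng = np.random.default_rng(4)
n_E, n_H, n_steps, t_end = 20, 15, 500, 1.0
dt = t_end / n_steps
lam = 1.0 / np.arange(1, n_E + 1) ** 2              # hypothetical eigenvalues of Q
phi = rng.normal(size=(n_H, n_E)) / 10.0            # a fixed Hilbert-Schmidt matrix

integral = np.zeros(n_H)
for _ in range(n_steps):
    dbeta = rng.normal(0.0, np.sqrt(dt), size=n_E)  # increments d beta_n(s)
    integral += phi @ (np.sqrt(lam) * dbeta)        # phi sqrt(Q) e_n d beta_n, summed over n
print(integral[:5])                                 # coefficients of the integral in H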

Throughout this paper, we assume that the filtration is the augmented filtration generated by the E-valued Q-cylindrical Wiener process \(W(t)\) and the Lévy process, that is, \({\mathbb {F}}_{t}:=\sigma\{W(s); s\leq t\}\vee\sigma\{N(s,\Lambda); s\leq t, \Lambda\in \mathcal {B}(H\setminus\{0\})\} \vee \mathcal {N}\), where \(\mathcal {N}\) denotes the null sets of \({\mathbb {F}}\). For more details, one can see [13, 27] and the references therein.

Definition 2.1

[24]

A strongly continuous semigroup \((S_{t})_{t\geq0}\) on H is called a pseudo-contraction semigroup on H if it satisfies

$$ \lVert S_{t}\rVert_{H}\leqslant\exp{(\alpha t)}, \quad\forall t>0 $$
(2.5)

for some constant \(\alpha>0\), where \(\Vert\cdot\Vert_{H}\) denotes the operator norm on H.

Definition 2.2

Let \(T>0\) be fixed. The stochastic process \(x(t):=(x_{t}(\omega))_{t\in [0,T]}\) is a mild solution of (1.1) with the initial condition \(x_{0}=x_{0}(\omega)\) if it is a solution of the following convolution equation:

$$ \begin{aligned}[b]x(t)={}& S_{t}x_{0}+ \int_{0}^{t}S_{t-s}A\bigl(s, x(s)\bigr)\,ds+ \int_{0}^{t}S_{t-s}g\bigl(s,x(s) \bigr)\,dW_{s} \\ &+ \int_{0}^{t} \int_{H\setminus\{0\}}S_{t-s} h\bigl(s,u ,x(s)\bigr) \tilde{N}(ds,du) \end{aligned} $$
(2.6)

P-almost surely, for each \(t\in[0,T]\).

Lemma 2.3

[27]

Let \(p>2\), \(T>0\) and suppose \(\varphi(t)\) is an \(L^{0}_{2}\)-valued predictable process such that \({\mathbb {E}}(\int_{0}^{T}\lVert\varphi(s)\rVert_{L_{2}^{0}}^{p}\,ds )<+\infty\). Let

$$ W_{S}^{\varphi}(t) :=\int_{0}^{t}S_{t-s}\varphi(s)\,dW(s), \quad t \in[0,T]. $$

Then there exists \(C(p,S)\geq0\) such that

$$ {\mathbb {E}}\Bigl[\sup_{0\leqslant s\leqslant T}\bigl\lVert W_{S}^{\varphi}(s)\bigr\rVert ^{p} \Bigr]\leqslant C(p,S) \sup_{0\leqslant t\leqslant T}\lVert S_{t}\rVert^{p}\cdot {\mathbb {E}}\biggl[ \int_{0}^{T}\bigl\lVert \varphi(s)\bigr\rVert ^{p}\,ds \biggr]. $$
(2.7)

Moreover, if inequality (2.3) is satisfied, then there exists a continuous version of the process \(W_{S}^{\varphi}\), \(t\geq0\). If \((S_{t})_{t\geq0}\) is a contraction semigroup, then the above result is true for \(p\geqslant2\).

Lemma 2.4

[28]

Let \(\phi: {\mathbb {R}}^{+}\times H\setminus\{0\}\times \Omega\to H\) be a predictable function satisfying

$$ \int_{0}^{t} \int_{H\setminus\{0\}}\bigl\lVert \phi(s,u)\bigr\rVert ^{2}\nu (du)\,ds< +\infty $$

for all \(t\geqslant0\) P-almost surely. Let

$$ Z(t):=\int_{0}^{t} \int_{H\setminus\{0\}}S_{t-s}\phi(s,u)\widetilde{N}(ds,du). $$

If \((S_{t})_{t\geq0}\) is a contraction semigroup, then for all \(0< p\leqslant2\) there exists a constant \(C_{p,T}\geqslant0\) such that

$$ {\mathbb {E}}\Bigl[\sup_{0\leqslant t\leqslant T}\bigl\lVert Z(t)\bigr\rVert ^{p} \Bigr] \leqslant C_{p,T}\cdot {\mathbb {E}}\biggl[ \int_{0}^{T} \int_{H\setminus\{0\}}\bigl\lVert \phi(s,u)\bigr\rVert ^{2} \nu(du)\,ds \biggr]^{\frac{p}{2}}. $$
(2.8)

3 Existence and uniqueness of mild solutions

In this section, we discuss the existence and uniqueness of mild solutions to the stochastic equation (1.1) in a Hilbert space.

For each \(t\in{\mathbb {R}}^{+}\), we define the function \(\theta_{t}: D({\mathbb {R}}^{+},H)\rightarrow D({\mathbb {R}}^{+},H)\) by

$$ \theta_{t}(x) (s)=\left \{ \textstyle\begin{array}{l@{\quad}l} x_{s}, & \hbox{if } 0\leqslant s< t ;\\ x_{t}, & \hbox{if } s\geqslant t. \end{array}\displaystyle \right . $$
(3.1)
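The operator \(\theta_{t}\) simply freezes a path at time t. A minimal discrete analogue (purely illustrative; the grid, step size and sample path are assumptions made for the example) is the following.

import numpy as np

def theta(path, t, dt):
    """Discrete analogue of (3.1): keep x_s for s < t, then hold the value x_t."""
    k = int(np.floor(t / dt))
    frozen = path.copy()
    frozen[k + 1:] = path[k]
    return frozen

dt = 0.1
path = np.sin(np.linspace(0.0, 3.0, 31))    # a sample discretized path on [0, 3]
print(theta(path, t=1.0, dt=dt)[8:14])      # values for s >= t stay equal to x_t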

Starting from here, we suppose that the following additional assumptions hold.

Assumption 3.1

  1. (1)

    \(h(t, u, x)\) is jointly measurable, and for all \(u\in H\) and \(t\in{\mathbb {R}}^{+}\) fixed, \(h(t, u, \cdot)\) is \({\mathbb {F}}_{t}\)-adapted;

  2. (2)

    \(A(t, x)\) is jointly measurable, and for all \(t\in {\mathbb {R}}^{+}\) fixed, \(A(t,\cdot)\) is \({\mathbb {F}}_{t}\)-adapted;

  3. (3)

    assume that \(h(t, u, x)=h(t, u, \theta_{t}(x))\) and \(A(t, x)=A(t,\theta_{t}(x))\).

Assumption 3.2

There exists a constant \(K>0 \) such that for any \(t\in{ \mathbb {R}}^{+} \) the following inequality is satisfied:

$$ \bigl\lVert A(t,x)\bigr\rVert _{H}^{2}+\bigl\lVert g(t,x)\bigr\rVert _{H}^{2}+ \int_{H\setminus\{0\}}\bigl\lVert h(t,u,x)\bigr\rVert _{H}^{2} \nu(du) \leqslant K\bigl[1+\bigl\lVert \theta_{t}(x)\bigr\rVert _{\infty}^{2}\bigr]. $$
(3.2)

Assumption 3.3

There exists a constant \(K\geqslant0 \) such that for any \(t\in{\mathbb {R}}^{+}\) and any fixed \(x, y\in D({\mathbb {R}}^{+},H)\),

$$ \begin{aligned}[b] &\bigl\lVert A(t,x)-A(t,y)\bigr\rVert ^{2}+\bigl\lVert g(t,x)-g(t,y)\bigr\rVert ^{2} + \int_{H\setminus\{0\}}\bigl\lVert h(t,u,x)-h(t,u,y)\bigr\rVert ^{2} \nu(du) \\ &\quad\leqslant K\bigl\lVert \theta_{t}(x)-\theta_{t}(y) \bigr\rVert _{\infty}^{2} \end{aligned} $$
(3.3)

P-almost surely.

For simplicity, \(C_{K,\alpha,T}>0\) denotes a suitable positive constant which may change from line to line. For \(x\in D({\mathbb {R}}^{+},H)\) and \(t\in[0,T]\), define

$$ \begin{aligned}[b] I(t,x):={} & \int_{0}^{t}S_{t-s}A\bigl(s, x(s)\bigr)\,ds+ \int_{0}^{t}S_{t-s}g\bigl(s,x(s) \bigr)\,dW_{s} \\ &+ \int_{0}^{t} \int_{H\setminus\{0\}}S_{t-s} h\bigl(s,u ,x(s)\bigr) \tilde{N}(ds,du). \end{aligned} $$
(3.4)

Theorem 3.4

There exists a constant \(C_{K,T,\alpha}\) such that for any \(({\mathbb {F}}_{t})\)-stopping time τ

$$ {\mathbb {E}}\Bigl(\sup_{0\leqslant s\leqslant t\wedge\tau}\bigl\lVert I(s,x)\bigr\rVert ^{2}_{H} \Bigr)\leqslant C_{K,T,\alpha} \biggl[t+ \int_{0}^{t}{\mathbb {E}}\Bigl(\sup_{0\leqslant\nu\leqslant s\wedge\tau} \lVert x_{\nu}\rVert^{2} \Bigr)\,ds \biggr],\quad t\in[0,T]. $$

Proof

From (3.4), we have

$$ \begin{aligned}[b] {\mathbb {E}}\Bigl[\sup_{0\leqslant s\leqslant t\wedge\tau}\bigl\lVert I(s,x)\bigr\rVert ^{2}_{H} \Bigr] \leqslant{}&3 {\mathbb {E}}\biggl[\sup _{0\leqslant s\leqslant t\wedge\tau }\biggl\lVert \int_{0}^{s}S_{s-r}A\bigl(r, x(r)\bigr)\,dr \biggr\rVert ^{2} \biggr] \\ & +3 {\mathbb {E}}\biggl[\sup_{0\leqslant s\leqslant t\wedge\tau}\biggl\lVert \int_{0}^{s} \int_{H\setminus\{0\}}S_{s-r} h\bigl(r,u ,x(r)\bigr) \tilde{N}(dr,du)\biggr\rVert ^{2} \biggr] \\ & +3 {\mathbb {E}}\biggl[\sup_{0\leqslant s\leqslant t\wedge\tau}\biggl\lVert \int_{0}^{s}S_{s-r}g\bigl(r,x(r) \bigr)\,dW_{r}\biggr\rVert ^{2} \biggr]. \end{aligned} $$
(3.5)

By Definition 2.1, we have

$$ \begin{aligned} \biggl\lVert \int_{0}^{s}S_{s-r}A\bigl(r, x(r)\bigr)\,dr \biggr\rVert ^{2} &\leqslant\exp{(2\alpha s )} \biggl[ \int_{0}^{s}\bigl\lVert A\bigl(r,x(r)\bigr)\bigr\rVert \,dr \biggr]^{2} \\ &\leqslant s\cdot\exp{(2\alpha s )} \biggl[ \int_{0}^{s}\bigl\lVert A\bigl(r,x(r)\bigr)\bigr\rVert ^{2}\,dr \biggr]. \end{aligned} $$

Using this relation, together with Assumption 3.2 and Assumption 3.3, we obtain

$$ \begin{aligned}[b] &{\mathbb {E}}\biggl[\sup_{0\leqslant s\leqslant t\wedge\tau}\biggl\lVert \int_{0}^{s} S_{s-r}A\bigl(r, x(r)\bigr)\,dr \biggr\rVert ^{2} \biggr] \\ &\quad\leqslant C_{K,\alpha,T}\cdot {\mathbb {E}}\biggl[\sup_{0\leqslant s\leqslant t\wedge\tau} \biggl( \int_{0}^{s} \bigl(1+ \bigl\lVert \theta_{r}(x)\bigr\rVert _{\infty}^{2}\bigr)\,dr \biggr) \biggr] \\ &\quad \leqslant C_{K,\alpha,T} \biggl[ t+ \int_{0}^{t}{\mathbb {E}}\Bigl(\sup_{0\leqslant r\leqslant s\wedge\tau} \lVert x_{r}\rVert^{2} \Bigr)\,ds \biggr]. \end{aligned} $$
(3.6)

Using Lemma 2.3, Assumption 3.2 and Definition 2.1 yields

$$ \begin{aligned}[b] &{\mathbb {E}}\biggl[\sup_{0\leqslant s\leqslant t\wedge\tau}\biggl\lVert \int_{0}^{s}S_{s-r}g\bigl(r,x(r) \bigr)\,dW_{r}\biggr\rVert ^{2} \biggr] \\ &\quad\leqslant C(2,t)\sup_{0\leqslant s\leqslant t\wedge\tau}\bigl\lVert S(s)\bigr\rVert ^{2}\cdot {\mathbb {E}}\biggl[ \int_{0}^{t\wedge\tau}\bigl\lVert g\bigl(r,x(r)\bigr)\bigr\rVert ^{2}\,dr \biggr] \\ &\quad \leqslant C_{K,\alpha, t} \biggl[{t}+ \int_{0}^{t}{\mathbb {E}}\Bigl(\sup_{0\leqslant r\leqslant s\wedge\tau} \bigl\lVert x(r)\bigr\rVert ^{2} \Bigr)\,ds \biggr]. \end{aligned} $$
(3.7)

Moreover, from Lemma 2.4, it follows that

$$ \begin{aligned}[b] &{\mathbb {E}}\biggl[\sup_{0\leqslant s\leqslant t\wedge\tau}\biggl\lVert \int_{0}^{s} \int_{H\setminus\{0\}}S_{s-r} h\bigl(r,u ,x(r)\bigr) \tilde{N}(dr,du)\biggr\rVert ^{2} \biggr] \\ &\quad\leqslant C_{K,\alpha, t } \biggl[t+ \int_{0}^{t}{\mathbb {E}}\Bigl(\sup_{0\leqslant r\leqslant s\wedge\tau} \bigl\lVert x(r)\bigr\rVert ^{2} \Bigr)\,ds \biggr]. \end{aligned} $$
(3.8)

Substituting (3.6), (3.7) and (3.8) into (3.5), we obtain

$$ {\mathbb {E}}\Bigl[\sup_{0\leqslant s\leqslant t\wedge\tau}\bigl\lVert I(s,x)\bigr\rVert ^{2}_{H} \Bigr] \leqslant C_{K,\alpha,T} \biggl(t+ \int_{0}^{t}{\mathbb {E}}\Bigl[\sup_{0\leqslant\nu \leqslant s\wedge\tau} \lVert x_{\nu}\rVert^{2}\Bigr]\,ds \biggr),\quad t\in[0,T]. $$

Let \(T>0\), and define

$$ \mathcal {H}_{2}^{T}:=\bigl\{ \xi=( \xi_{s})_{s\in[0,T]}\colon {\mathbb {E}}\lVert\xi \rVert_{\infty}^{2}< +\infty\bigr\} , $$
(3.9)

where ξ is jointly measurable and \({\mathbb {F}}_{t}\)-adapted. It follows from Theorem 3.4 that the map

$$ I: \mathcal {H}_{2}^{T}\longrightarrow \mathcal {H}_{2}^{T}, \quad x\longmapsto I(\cdot, x) $$

is well defined, where \(I(\cdot, x)\) is defined in Eq. (3.4). □

Theorem 3.5

Assume that Assumption  3.3 is satisfied. Then the map \(I: \mathcal {H}_{2}^{T}\longrightarrow \mathcal {H}_{2}^{T}\) is continuous, and there exists a constant \(C_{K,T,\alpha}\), depending on K, T and α, such that

$$ {\mathbb {E}}\Bigl[\sup_{0\leqslant s\leqslant T}\bigl\lVert I(s,x)-I(s,y)\bigr\rVert ^{2}_{H} \Bigr] \leqslant C_{K,T,\alpha} \int_{0}^{T}{\mathbb {E}}\Bigl[\sup_{0\leqslant s\leqslant t} \lVert x_{s}-y_{s}\rVert^{2} \Bigr]\,dt. $$
(3.10)

Proof

By using (3.4), we have

$$ \begin{aligned} &\bigl\lVert I(t,x)-I(t,y)\bigr\rVert \\ & \quad\leqslant\biggl\lVert \int_{0}^{t}S_{t-s}\bigl[A\bigl(s,x(s) \bigr)-A\bigl(s, y(s)\bigr)\bigr]\,ds\biggr\rVert +\biggl\lVert \int_{0}^{t}S_{t-s}\bigl[g\bigl(s,x(s) \bigr)-g\bigl(s,y(s)\bigr)\bigr]\,dW_{s}\biggr\rVert \\ & \quad\quad+\biggl\lVert \int_{0}^{t} \int_{H\setminus\{0\}}S_{t-s} \bigl[ h\bigl(s,u ,x(s)\bigr)- h \bigl(s,u ,y(s)\bigr) \bigr]\tilde{N}(ds,du)\biggr\rVert , \end{aligned} $$

so that

$$ \begin{aligned}[b] &{\mathbb {E}}\Bigl[\sup_{0\leqslant t\leqslant T}\bigl\lVert I(t,x)-I(t,y)\bigr\rVert ^{2} \Bigr] \\ &\quad\leqslant3 {\mathbb {E}}\biggl[\sup_{0\leqslant t\leqslant T}\biggl\lVert \int_{0}^{t}S_{t-s}\bigl[A\bigl(s, x(s) \bigr)-A\bigl(s, y(s)\bigr)\bigr]\,ds\biggr\rVert ^{2} \biggr] \\ &\quad\quad+3 {\mathbb {E}}\biggl[\sup_{0\leqslant t\leqslant T}\biggl\lVert \int_{0}^{t}S_{t-s}\bigl[g\bigl(s,x(s) \bigr)-g\bigl(s,y(s)\bigr)\bigr]\,dW_{s}\biggr\rVert ^{2} \biggr] \\ &\quad\quad+3 {\mathbb {E}}\biggl[\sup_{0\leqslant t\leqslant T}\biggl\lVert \int_{0}^{t} \int_{H\setminus\{0\}}S_{t-s}\bigl[ h\bigl(s,u ,x(s)\bigr)- h \bigl(s,u,y(s)\bigr)\bigr]\tilde{N}(ds,du)\biggr\rVert ^{2} \biggr]. \end{aligned} $$
(3.11)

By Assumption 3.3, we have

$$ \begin{aligned}[b] &{\mathbb {E}}\biggl[\sup_{0\leqslant t\leqslant T}\biggl\lVert \int_{0}^{t}S_{t-s}\bigl[A\bigl(s, x(s) \bigr)-A\bigl(s, y(s)\bigr)\bigr]\,ds\biggr\rVert ^{2} \biggr] \\ & \quad\leqslant K\cdot T\exp{(2\alpha T)} \int_{0}^{T} {\mathbb {E}}\Bigl[\sup_{0\leqslant s\leqslant t} \bigl\lVert \theta_{s}(x)-\theta_{s}(y)\bigr\rVert _{\infty}^{2} \Bigr]\,dt \\ &\quad\leqslant C_{K,\alpha,T} \int_{0}^{T}{\mathbb {E}}\Bigl[\sup_{0\leqslant s\leqslant t} \lVert x_{s}-y_{s}\rVert^{2} \Bigr]\,dt. \end{aligned} $$
(3.12)

From Lemma 2.3, Assumption 3.1 and Assumption 3.3, we obtain

$$ \begin{aligned}[b] &{\mathbb {E}}\biggl[\sup_{0\leqslant t\leqslant T}\biggl\lVert \int_{0}^{t}S_{t-s}\bigl[g\bigl(s,x(s) \bigr)-g\bigl(s,y(s)\bigr)\bigr]\,dW_{s}\biggr\rVert ^{2} \biggr] \\ &\quad\leqslant C(2,T)\sup_{0\leqslant s\leqslant T}\bigl\lVert S(s)\bigr\rVert ^{2}\cdot {\mathbb {E}}\biggl[ \int_{0}^{T}\bigl\lVert g\bigl(s,x(s)\bigr)-g \bigl(s,y(s)\bigr)\bigr\rVert ^{2}\,ds \biggr] \\ &\quad\leqslant C_{K,\alpha,T} \int_{0}^{T}{\mathbb {E}}\Bigl(\sup_{0\leqslant s\leqslant t} \lVert x_{s}-y_{s}\rVert^{2} \Bigr)\,dt. \end{aligned} $$
(3.13)

Similarly, we get

$$ \begin{aligned} &{\mathbb {E}}\biggl[\sup_{0\leqslant t\leqslant T}\biggl\lVert \int_{0}^{t} \int_{H\setminus\{0\}}S_{t-s}\bigl[ h\bigl(s,u ,x(s)\bigr)- h \bigl(s,u ,y(s)\bigr)\bigr]\tilde{N}(ds,du)\biggr\rVert ^{2} \biggr] \\ &\quad\leqslant C_{K,\alpha, T } \int_{0}^{T}{\mathbb {E}}\Bigl(\sup_{0\leqslant s \leqslant t} \bigl\lVert x(s)-y(s)\bigr\rVert ^{2} \Bigr)\,dt. \end{aligned} $$
(3.14)

Substituting (3.12), (3.13) and (3.14) into (3.11) completes the proof of Theorem 3.5. □

Theorem 3.6

Let \(T>0\), \(x_{0}\in H\) and suppose that Assumption  3.1, Assumption  3.2 and Assumption  3.3 are satisfied. Then there exists a unique mild solution \(X=(X_{s})_{s\in[0,T]}\) in \(\mathcal {H}_{2}^{T}\) satisfying

$$ \begin{aligned}[b] X(t)={}& S_{t}x_{0}+ \int_{0}^{t}S_{t-s}A\bigl(s, X(s)\bigr)\,ds+ \int_{0}^{t}S_{t-s}g\bigl(s,X(s) \bigr)\,dW_{s} \\ &+ \int_{0}^{t} \int_{H\setminus\{0\}}S_{t-s} h\bigl(s,u ,X(s)\bigr) \tilde{N}(ds,du). \end{aligned} $$
(3.15)

Proof

Set \(X^{0}_{s}:=S_{s}x_{0}\) and \(X^{n+1}_{s}:=I(s, X^{n})\) P-almost surely. Then from Eq. (3.4) it follows that \((X^{n+1}_{s})_{s\in[0,T]}\) is \({\mathbb {F}}_{t}\)-adapted. Let

$$ z^{n}_{t}:={\mathbb {E}}\Bigl[\sup_{0\leqslant s\leqslant t} \bigl\lVert X^{n+1}_{s}-X^{n}_{s}\bigr\rVert _{\mathcal {H}}^{2} \Bigr]. $$
(3.16)

Thus, by (3.16), Theorem 3.4 and the definition of \(X^{0}_{s}=S_{s}(x_{0})\), it follows that

$$ \begin{aligned} z^{0}_{t}&={\mathbb {E}}\Bigl[\sup_{0\leqslant s\leqslant t} \bigl\lVert X^{1}_{s}-X^{0}_{s}\bigr\rVert _{\mathcal {H}}^{2} \Bigr] \\ &\leqslant C_{K,\alpha, t} \biggl[t+ \int_{0}^{t}{\mathbb {E}}\Bigl(\sup_{0\leqslant r\leqslant s} \bigl\lVert X^{0}_{r}\bigr\rVert ^{2} \Bigr)\,ds+{\mathbb {E}}\bigl\lVert X^{0}_{s}\bigr\rVert _{\mathcal {H}}^{2} \biggr]. \end{aligned} $$
(3.17)

Note that

$$ \bigl\lVert X^{0}_{s}\bigr\rVert _{\mathcal {H}}^{2}= \lVert S_{s}x_{0}\rVert_{\mathcal {H}}^{2}\leqslant \exp(2\alpha s)\lVert x_{0}\rVert_{\mathcal {H}}^{2}. $$
(3.18)

Then from inequalities (3.17) and (3.18), there exists a constant L, depending on α, t and \(x_{0}\), such that

$$ \begin{aligned}[b] z^{0}_{t}&={\mathbb {E}}\Bigl[\sup_{0\leqslant s\leqslant t} \bigl\lVert X^{1}_{s}-X^{0}_{s}\bigr\rVert _{\mathcal {H}}^{2} \Bigr] \\ &\leqslant C_{K,\alpha, t } \biggl[t+\frac{1}{2\alpha} \bigl(2 \alpha-1+\exp{(2\alpha t)} \bigr){\mathbb {E}}\lVert x_{0}\rVert_{\mathcal {H}}^{2} \biggr] \\ &=C_{K,\alpha, t }\cdot L. \end{aligned} $$
(3.19)

Similarly, combining inequality (3.19) and Theorem 3.5, by mathematical induction, we have

$$ z^{n}_{t}={\mathbb {E}}\Bigl[\sup_{0\leqslant s\leqslant t} \bigl\lVert X^{n+1}_{s}-X^{n}_{s}\bigr\rVert _{\mathcal {H}}^{2} \Bigr]\leqslant\frac{1}{n!}[T\cdot C_{K,\alpha, t }]^{n+1}L. $$
(3.20)

It is easy to see that

$$ \sum_{n=1}^{\infty}\sqrt{\frac{1}{n!}[T \cdot C_{K,\alpha, t }]^{n+1}} $$

is convergent, so that \(\{X^{n}\}\) is a Cauchy sequence in \(\mathcal {H}_{2}^{T}\). Then, by the Chebyshev inequality and (3.20), we have

$$ {\mathbb {P}}\biggl[\sup_{0\leqslant s\leqslant t}\bigl\lVert X^{n+1}_{s}-X^{n}_{s}\bigr\rVert _{\mathcal {H}}^{2}\geqslant\frac {1}{2^{n}} \biggr] \leqslant \frac{1}{n!}[2T\cdot C_{K,\alpha, t }]^{n+1}L. $$
(3.21)

By the Borel-Cantelli lemma, we get that P-almost surely there exists \(k_{0}\in {\mathbb {N}}\) such that

$$ \sup_{0\leqslant s \leqslant t}\bigl\lVert X^{k}_{s}-X^{k+1}_{s} \bigr\rVert ^{2}\leqslant2^{-k}\quad\text{for each } k \geqslant k_{0}. $$
(3.22)

Note that

$$ X^{n}_{s}(\omega)=X^{0}_{s}(\omega)+ \sum_{k=0}^{n-1}\bigl[X^{k+1}_{s}( \omega)-X^{k}_{s}(\omega)\bigr]. $$
(3.23)

Then, by (3.22), \(X^{n}_{s}\) converges P-almost surely uniformly on \([0, T]\). Let

$$ X_{s}(\omega)=\lim_{n\to\infty}X^{n}_{s}( \omega). $$

Since the processes \(\{X^{n}_{s}(\omega)\}_{s\in[0,T]}\) are càdlàg and the limit is taken in the sup norm, \(X_{s}(\omega)\) is càdlàg and \({\mathbb {F}}_{t}\)-adapted, with \(X_{\cdot}(\omega)\in D([0,T],H)\). Moreover,

$$ \begin{aligned}[b] {\mathbb {E}}\bigl[\bigl\lVert X_{s}-X^{n}_{s} \bigr\rVert _{\infty}^{2} \bigr] &={\mathbb {E}}\Bigl[\lim _{m\to\infty}\sup_{0\leqslant s\leqslant t}\bigl\lVert X^{m}_{s}-X^{n}_{s}\bigr\rVert ^{2} \Bigr] \\ &={\mathbb {E}}\Biggl[\lim_{m\to\infty}\sup_{0\leqslant s\leqslant t} \Biggl\lVert \sum_{k=n}^{n+m-1}\bigl(X^{k+1}_{s}-X^{k}_{s}\bigr) \Biggr\rVert ^{2} \Biggr] \\ &\leqslant {\mathbb {E}}\Biggl[\lim_{m\to\infty} \Biggl(\sum _{k=n}^{n+m-1}\sup_{0\leqslant s\leqslant t}\bigl\lVert X^{k+1}_{s}-X^{k}_{s}\bigr\rVert \Biggr)^{2} \Biggr]. \end{aligned} $$
(3.24)

By the Cauchy-Schwarz inequality, it follows that

$$ \lim_{m\to\infty} \Biggl[\sum _{k=n}^{n+m-1}\sup_{0\leqslant s\leqslant t}\bigl\lVert X^{k+1}_{s}-X^{k}_{s}\bigr\rVert \Biggr]^{2}\leqslant\sum _{k=n}^{\infty}\sup_{0\leqslant s\leqslant t}\bigl\lVert X^{k+1}_{s}-X^{k}_{s}\bigr\rVert ^{2}k^{2}\cdot\sum_{k=n}^{\infty} \frac{1}{k^{2}}. $$
(3.25)

According to (3.22), we see that \(\sum_{k=n}^{\infty}\sup_{0\leqslant s\leqslant t}\lVert X^{k+1}_{s}-X^{k}_{s}\rVert^{2}k^{2}\) is convergent, together with (3.20) this implies

$$ {\mathbb {E}}\Biggl[\lim_{m\to\infty} \Biggl(\sum _{k=n}^{n+m-1}\sup_{0\leqslant s\leqslant t}\bigl\lVert X^{k+1}_{s}-X^{k}_{s}\bigr\rVert \Biggr)^{2} \Biggr]\leqslant\sum_{k=n}^{\infty}\frac{1}{k!}[Tk \cdot C_{K,\alpha, t }]^{k+1}L\cdot\sum_{k=n}^{\infty} \frac{1}{k^{2}}. $$

Therefore, we obtain that

$$ \lim_{n\to\infty} {\mathbb {E}}\Bigl[\sup_{0\leqslant s\leqslant T} \bigl\lVert X_{s}-X^{n}_{s}\bigr\rVert ^{2} \Bigr]=0, $$
(3.26)

and \(X_{s}\in \mathcal {H}_{2}^{T}\). From Theorem 3.5 it follows that \(X_{s}\) is a mild solution to Eq. (3.15).

Next, we prove the uniqueness of Eq. (3.15). Suppose that \(X_{s}\) and \(\overline{X}_{s}\) are two solutions of (3.15). Let

$$ G(t):={\mathbb {E}}\Bigl[\sup_{0\leqslant s\leqslant t}\lVert X_{s}-\overline{X}_{s}\rVert_{\mathcal {H}}^{2} \Bigr]. $$
(3.27)

Then, similarly as for (3.20), it follows that

$$ G(t)\leqslant\frac{1}{n!}[T\cdot C_{K,\alpha, t }]^{n+1} {\mathbb {E}}\Bigl[\sup_{0\leqslant s\leqslant t}\lVert X_{s}-\overline{X}_{s} \rVert_{\mathcal {H}}^{2} \Bigr]\longrightarrow0 $$
(3.28)

as \(n\to+\infty\). Hence \(G(t)\equiv0\), which completes the proof of the theorem. □
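To visualise the successive approximation scheme used in the proof, the following rough numerical sketch iterates \(X^{n+1}=I(\cdot, X^{n})\) for a scalar toy version of (1.1) with \(S_{t}=e^{at}\), \(A(t,x)=\sin(x)\), \(g(t,x)=0.5x\) and with the jump term dropped; all constants and coefficient choices are invented for illustration and are not part of the theorem.

import numpy as np

rng = np.random.default_rng(2)
a, T, n_steps = -1.0, 1.0, 200
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)
dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)
x0 = 1.0

def picard_step(X):
    """One successive approximation: evaluate the convolution equation (3.15) on the grid."""
    Y = np.empty_like(X)
    for k in range(n_steps + 1):
        S = np.exp(a * (t[k] - t[:k]))              # S_{t-s} at the grid points s < t_k
        drift = np.sum(S * np.sin(X[:k])) * dt      # int_0^t S_{t-s} A(s, X(s)) ds
        diff = np.sum(S * 0.5 * X[:k] * dW[:k])     # int_0^t S_{t-s} g(s, X(s)) dW_s
        Y[k] = np.exp(a * t[k]) * x0 + drift + diff
    return Y

X = np.exp(a * t) * x0                              # X^0_s = S_s x_0
for n in range(6):
    X_new = picard_step(X)
    print(n, np.max(np.abs(X_new - X)))             # sup-norm gap should shrink
    X = X_new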

Remark 3.7

In a special case, when \(h\equiv0\), Eq. (1.1) reduces to

$$ dx(t)= \bigl[Tx(t)+A\bigl(t,x(t)\bigr)\bigr]\, dt+g\bigl(t,x(t) \bigr)\,dW_{t}, $$
(3.29)

which was studied in [11].

4 Examples

Example 4.1

Let \(H=L^{2}(0,\pi)\) and \(e_{n}:=\sqrt{\frac{2}{\pi}}\sin (nx)\), \(n=1,2,\ldots\) . Then \(\{e_{n}\}\) is a complete orthonormal basis in H. Let \(W_{t}:=\sum_{n=1}^{\infty}\sqrt{\lambda_{n}}\beta_{n}(t) e_{n}\), \(\lambda_{n}>0\), where \(\{\beta_{n}(t)\}\) are mutually independent one-dimensional standard Brownian motions on a complete probability space satisfying the usual hypotheses. Let \(N(t,\Lambda)\) be the counting measure of a stationary Poisson point process on H with σ-finite characteristic measure ν, and let \(\widetilde{N}(t,\Lambda)=N(t,\Lambda)-t\nu(\Lambda)\) be the associated compensated Poisson random measure. Define the operator \(Q :H \longrightarrow H\) by setting \(Q e_{n}=\lambda_{n} e_{n}\) (\({n=1,2,3,\ldots}\)) and assume that \(\operatorname{tr}(Q)=\sum_{n=1}^{\infty}\lambda_{n}<+\infty\). Let \(T:=\frac{\partial ^{2}}{\partial x^{2}}\) with the domain \(D(T)=H_{0}^{1}(0,\pi )\cap H^{2}(0,\pi)\).

Consider the following stochastic evolution equation:

$$\begin{aligned} \left \{ \textstyle\begin{array}{l@{\quad}l} dX_{t}=[\frac{\partial ^{2}}{\partial x^{2}}X_{t}+\lambda X_{t}]\,dt+ \gamma X(t)\,dW_{t}+\int_{H\setminus\{0\}} X_{t} \tilde{N}(dt,du), & \gamma >0, \lambda\in{\mathbb {R}},\\ X(0,\xi)= x_{0}(\xi)\in H, & \xi\in[0,\pi],\\ X(t,0)=X(t,\pi)=0, & t\in[0,+\infty), \end{array}\displaystyle \right . \end{aligned}$$
(4.1)

where \(T:=\frac{\partial ^{2}}{\partial x^{2}}\), \(A(t,X_{t})=\lambda X_{t}\), \(g(t,X_{t})=\gamma X_{t}\), \(h(t, u, X_{t})=X_{t}\). One can verify that T is the infinitesimal generator of a strongly continuous semigroup \((S(t))_{t\geq0}\) of bounded linear operators on H with \({\lVert S(t)\rVert\leq e^{-t}}\), \(t\geq0\), and that Assumption 3.1, Assumption 3.2 and Assumption 3.3 are satisfied. Therefore, by Theorem 3.6, Eq. (4.1) has a unique mild solution.
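A possible way to simulate Eq. (4.1) numerically is a spectral Galerkin truncation combined with an Euler-Maruyama step. The sketch below is only an illustration: the eigenvalues \(\lambda_{n}=n^{-2}\) of Q, the parameter values, the replacement of the jump integral by a single compensated compound Poisson factor, and the mode-wise application of the multiplicative noise are all simplifying assumptions that do not come from the paper.

import numpy as np

rng = np.random.default_rng(3)
n_modes, n_steps, T_end = 30, 2000, 1.0
dt = T_end / n_steps
n = np.arange(1, n_modes + 1)
lam_heat = -(n.astype(float) ** 2)         # eigenvalues of d^2/dx^2 on the sine basis
lam_Q = 1.0 / n.astype(float) ** 2         # hypothetical eigenvalues of Q
lam_par, gamma, jump_rate = 0.5, 0.2, 1.0  # lambda, gamma and a Poisson jump rate

X = rng.normal(size=n_modes)               # Fourier coefficients of x_0
for _ in range(n_steps):
    dW = np.sqrt(lam_Q * dt) * rng.normal(size=n_modes)   # Q-Wiener increments, mode-wise
    dN = rng.poisson(jump_rate * dt)                      # jumps on this time step
    X = X + (lam_heat + lam_par) * X * dt \
          + gamma * X * dW \
          + X * (dN - jump_rate * dt)                     # compensated jump increment
print(np.linalg.norm(X))                                  # L^2(0, pi) norm of X(T_end)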

5 Conclusions

In this paper, we consider a class of stochastic differential equations driven by a cylindrical Brownian motion and Lévy noise in real separable Hilbert spaces, and we derive sufficient conditions for the existence and uniqueness of mild solutions under a linear growth condition on the coefficients.

There are two direct issues which require further study. First, we will investigate the exponential stability of mild solutions for NSDEs in Hilbert spaces. Second, we will devote our efforts to the study of the mean-square stability and the asymptotic mean-square stability of NSDEs driven by the cylindrical Brownian motion and Lévy noises.

References

  1. Barbu, D: Local and global existence for mild solutions of stochastic differential equations. Port. Math. 55, 411-424 (1998)

  2. Mao, XR: Asymptotic properties of neutral stochastic delay differential equations. Stoch. Stoch. Rep. 68, 273-295 (2000)

  3. Fu, X, Ezzinbi, K: Existence of solutions for neutral functional differential evolution equations with nonlocal conditions. Nonlinear Anal. 54, 215-227 (2003)

  4. Cont, R, Tankov, P: Financial Modelling with Jump Processes. Financial Mathematics Series. Chapman & Hall, Boca Raton (2004)

  5. Luo, Q, Mao, XR: New criteria on exponential stability of neutral stochastic differential delay equations. Syst. Control Lett. 55, 826-834 (2006)

  6. Zhang, HM, Gan, SQ: Mean square convergence of one-step methods for neutral stochastic differential delay equations. Appl. Math. Comput. 204, 884-890 (2008)

  7. Yin, BJ, Ma, ZH: Convergence of the semi-implicit Euler method for neutral stochastic delay differential equations with phase semi-Markovian switching. Appl. Math. Model. 35, 2094-2109 (2011)

  8. Barbu, D, Bocsan, G: Approximations to mild solutions of stochastic semilinear equations with non-Lipschitz coefficients. Czechoslov. Math. J. 52(127), 87-95 (2002)

  9. Li, F, Gaston, MN: Existence and uniqueness of mild solution for fractional integrodifferential equations. Adv. Differ. Equ. 2012, 34 (2012)

  10. Taniguchi, T, Liu, K, Truman, A: Existence, uniqueness and asymptotic behavior of mild solutions to stochastic functional differential equations in Hilbert spaces. J. Differ. Equ. 181, 72-91 (2002)

  11. Taniguchi, T: The existence and uniqueness of energy solutions to local non-Lipschitz stochastic evolution equations. J. Math. Anal. Appl. 360, 245-253 (2009)

  12. Brzeźniak, Z, Hausenblas, E: Uniqueness in law of the stochastic convolution process driven by Lévy noise (2010). arXiv:1010.5941

  13. Rüdiger, B: Stochastic integration with respect to compensated Poisson random measures on separable Banach spaces. Stoch. Stoch. Rep. 76(3), 213-242 (2004)

  14. Royer, M: Backward stochastic differential equations with jumps and related non-linear expectations. Stoch. Process. Appl. 116, 1358-1376 (2006)

  15. Albeverio, S, Brzeźniak, Z, Wu, J: Existence of global solutions and invariant measures for stochastic differential equations driven by Poisson type noise with non-Lipschitz coefficients. J. Math. Anal. Appl. 371, 309-322 (2010)

  16. Mandrekar, V, Rüdiger, B: Existence and uniqueness of path wise solutions for stochastic integral equations driven by Lévy noise on separable Banach spaces. Stochastics 78(4), 189-212 (2006)

  17. Balasubramaniam, P, Park, JY, Antony Kumar, AV: Existence of solutions for semilinear neutral stochastic functional differential equations with nonlocal conditions. Nonlinear Anal. 71, 1049-1058 (2009)

  18. Boufoussi, B, Hajji, S: Successive approximation of neutral functional stochastic differential equations with jumps. Stat. Probab. Lett. 80, 324-332 (2010)

  19. Revathi, P, Sakthivel, R, Ren, Y, Anthoni, SM: Existence of almost automorphic mild solutions to non-autonomous neutral stochastic differential equations. Appl. Math. Comput. 230, 639-649 (2014)

  20. Gu, Y, Ren, Y, Sakthivel, R: Square-mean pseudo almost automorphic mild solutions for stochastic evolution equations driven by G-Brownian motion. Stoch. Process. Appl. 34(3), 528-545 (2016)

  21. Benchaabane, A, Sakthivel, R: Sobolev-type fractional stochastic differential equations with non-Lipschitz coefficients. J. Comput. Appl. Math. 312, 65-73 (2017)

  22. Cao, GL, He, K: Successive approximation of infinite dimensional semilinear backward stochastic evolution equations with jumps. Stoch. Process. Appl. 117, 1251-1264 (2007)

  23. Luo, J, Taniguchi, T: The existence and uniqueness for non-Lipschitz stochastic neutral delay evolution equations driven by Poisson jumps. Stoch. Dyn. 9(1), 135-152 (2009)

  24. Albeverio, S, Mandrekar, V, Rüdiger, B: Existence of mild solutions for stochastic differential equations and semilinear equations with non-Gaussian Lévy noise. Stoch. Process. Appl. 119, 835-863 (2009)

  25. Cui, J, Yan, L: Successive approximation of neutral stochastic evolution equations with infinite delay and Poisson jumps. Appl. Math. Comput. 218, 6776-6784 (2012)

  26. Mao, W, Hu, LJ, Mao, XR: Neutral stochastic functional differential equations with Lévy jumps under the local Lipschitz condition. Adv. Differ. Equ. 2017, 57 (2017)

  27. Da Prato, G, Zabczyk, J: Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge (1992)

  28. Hausenblas, E, Seidler, J: Stochastic convolutions driven by martingales: maximal inequalities and exponential integrability. Stoch. Anal. Appl. 26(1), 98-119 (2008)


Acknowledgements

The authors are deeply grateful to the anonymous referee and the editor for their careful reading, valuable comments and correcting some errors, which have greatly improved the quality of the paper.

Author information

Correspondence to Lasheng Wang.

Additional information

Competing interests

The author declares that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Wang, L. The existence and uniqueness of mild solutions to stochastic differential equations with Lévy noise. Adv Differ Equ 2017, 175 (2017). https://doi.org/10.1186/s13662-017-1224-0
