On the non-Lipschitz stochastic differential equations driven by fractional Brownian motion

Abstract

In this paper, we use a successive approximation method to prove existence and uniqueness theorems for solutions of non-Lipschitz stochastic differential equations (SDEs) driven by fractional Brownian motion (fBm) with Hurst parameter \(H\in(\frac{1}{2},1)\). The non-Lipschitz condition, which is motivated by a wider range of applications, is much weaker than the Lipschitz one. Because the stochastic integral with respect to fBm is no longer a martingale, useful tools such as the Burkholder-Davis-Gundy inequality, which is crucial for SDEs driven by Brownian motion, are no longer available. This point motivates the present study.

1 Introduction

Stochastic differential equations (SDEs) have been greatly developed and are well known to model diverse phenomena, including fluctuating stock prices, physical systems subject to thermal fluctuations, and the growth of populations [1–4]. There is no doubt that mathematical models driven by ‘Gaussian white noise’ have seen rapid development. However, such models are not appropriate for real situations in which stochastic fluctuations with long-range dependence are present. Due to the long-range dependence of fBm, which was originally introduced by Hurst [5], Kolmogorov [6], and Mandelbrot [7], SDEs driven by fBm have been used to model a number of practical problems in various fields, such as queueing theory, telecommunications, and economics [8–10].

On most occasions, the coefficients of SDEs driven by fBm are assumed to satisfy the Lipschitz condition. The existence and uniqueness of solutions of SDEs driven by fBm under the Lipschitz condition have been studied by many scholars [11–14]. However, the Lipschitz condition is often too strong for applications in the real world, for example the hybrid square root process and one-dimensional semi-linear SDEs with Markov switching; such models appear widely in many branches of science, engineering, industry, and finance [15–17]. Therefore, it is important to find conditions weaker than the Lipschitz one under which SDEs still have unique solutions. Fortunately, many researchers have investigated SDEs under non-Lipschitz conditions and have presented many meaningful results [18–22]. But, to the best of our knowledge, the existence and uniqueness of solutions of SDEs driven by fBm under a non-Lipschitz condition have not been considered. Since fBm is neither a semi-martingale nor a Markov process, useful tools such as the Burkholder-Davis-Gundy inequality, which is crucial for SDEs driven by Brownian motion, are not available. It therefore seems far from easy to obtain the existence and uniqueness of solutions to non-Lipschitz SDEs with fBm. This point motivates us to carry out the present study.

In the present paper, we discuss SDEs with fBm under the non-Lipschitz condition. Using the successive approximation method, we prove existence and uniqueness theorems for solutions of the following non-Lipschitz SDE driven by fBm:

$$\begin{aligned} X( t) = X( 0) + \int_{0}^{t} {b\bigl( {s,X( s)} \bigr)}\,ds + \int_{0}^{t} {\sigma\bigl( {s,X( s)} \bigr)} \,d{B^{H}}( s),\quad t \in[ {0,T} ], \end{aligned}$$
(1.1)

where the initial data \(X(0)=\xi\) is a random variable, \(0< T<\infty\), the process \(B^{H}(t)\) is an fBm with Hurst index \(H\in(\frac {1}{2},1)\) defined on a complete probability space \((\Omega,\mathcal {F}, \mathbb{P})\), and \(b( {t,X( t)}):[0,T] \times R \to R\) and \(\sigma( {t,X( t)}):[ {0,T} ] \times R \to R \) are measurable functions; \(\int_{0}^{t} \cdot{\,d{B^{H}}( s)}\) stands for the stochastic integral with respect to fBm.
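
Although the analysis below is purely theoretical, the following minimal numerical sketch may help fix ideas; it is not part of the authors' argument. It samples an fBm path by a Cholesky factorization of the covariance \(\mathbb{E}[B^{H}(t)B^{H}(s)]=\frac{1}{2}(t^{2H}+s^{2H}-\vert t-s\vert ^{2H})\) and runs a left-point Euler scheme for (1.1); the coefficients b and σ in the demo call are hypothetical, chosen only for illustration.

```python
import numpy as np

def fbm_path(n_steps, T, H, rng):
    # Sample B^H on a uniform grid via Cholesky factorization of its covariance:
    # E[B^H(t) B^H(s)] = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2.
    t = np.linspace(T / n_steps, T, n_steps)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov)
    return np.concatenate(([0.0], L @ rng.standard_normal(n_steps)))

def euler_fbm(x0, b, sigma, T, n_steps, H, seed=0):
    # Left-point Euler approximation of X(t) = X(0) + int_0^t b ds + int_0^t sigma dB^H.
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, n_steps + 1)
    B = fbm_path(n_steps, T, H, rng)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = (x[k] + b(t[k], x[k]) * (t[k + 1] - t[k])
                    + sigma(t[k], x[k]) * (B[k + 1] - B[k]))
    return t, x

# Demo with hypothetical coefficients b(t, x) = -x and sigma(t, x) = 0.5 x:
t, x = euler_fbm(x0=1.0, b=lambda t, x: -x, sigma=lambda t, x: 0.5 * x,
                 T=1.0, n_steps=500, H=0.7)
```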

2 Preliminaries

Let \(( {\Omega,\mathcal{F},\mathbb{P}})\) be a complete probability space. SDEs with respect to fBm have been interpreted via various stochastic integrals, such as the Wick integral, the Wiener integral, the Skorohod integral, and path-wise integrals [13, 23–26]. In this paper, we consider the path-wise integrals [27] with respect to fBm.

Let \(\varphi:{{R}_{+} } \times{{R}_{+} } \to{{R}_{+} }\) be defined by

$$\begin{aligned} \varphi( {t,s}) = H( {2H - 1}){\vert {t - s} \vert ^{2H - 2}},\quad t,s \in{{R}_{+} }, \end{aligned}$$

where H is a constant with \(\frac{1}{2} < H < 1\).

Let \(g:{{R}_{+} } \to{R}\) be Borel measurable.

Define

$$\begin{aligned} L_{\varphi}^{2}( {{{R}_{+} }}) = \biggl\{ {g:\Vert g \Vert _{\varphi}^{2} = \int _{{{R}_{+} }} { \int_{{{R}_{+} }} {g( t)g( s)\varphi( {t,s})\,ds\,dt < \infty} } } \biggr\} . \end{aligned}$$

If we equip \(L^{2}_{\varphi}({R}_{+})\) with the inner product

$$\begin{aligned} {\langle{{g_{1}},{g_{2}}} \rangle_{\varphi}} = \int_{{{R}_{+}}} { \int _{{{R}_{+}}} {{g_{1}}( t){g_{2}}( s) \varphi( {t,s})\,ds\,dt} },\quad {g_{1}},{g_{2}} \in L_{\varphi}^{2}( {{R_{+} }}), \end{aligned}$$

then \(L^{2}_{\varphi}({R}_{+})\) becomes a separable Hilbert space.
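
As a simple sanity check (a standard computation, not taken from the paper), the indicator \(g=\mathbf{1}_{[0,T]}\) belongs to \(L_{\varphi}^{2}({R}_{+})\), and its φ-norm reproduces the variance of the fBm at time T:

$$\begin{aligned} \Vert \mathbf{1}_{[0,T]} \Vert _{\varphi}^{2} = \int_{0}^{T} \int_{0}^{T} H( {2H - 1}){\vert {t - s} \vert ^{2H - 2}}\,ds\,dt = \int_{0}^{T} H\bigl[ {{( {T - t})}^{2H - 1}} + {t^{2H - 1}} \bigr]\,dt = {T^{2H}} = \mathbb{E}\bigl[ {B^{H}}( T) \bigr]^{2}. \end{aligned}$$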

Let \(\mathcal{S}\) be the set of smooth and cylindrical random variables of the form

$$\begin{aligned} F(\omega) = f \biggl( \int_{0}^{T}\psi_{1}(t)\,dB^{H}_{t}, \ldots, \int _{0}^{T}\psi_{n}(t)\,dB^{H}_{t} \biggr), \end{aligned}$$

where \(n \ge1\), \(f \in\mathcal{C}_{b}^{\infty}( {{{R}^{n}}})\) (i.e. f and all its partial derivatives are bounded), and \({\psi_{i}} \in\mathcal{H}\), \(i = 1,2,\ldots, n\). Here \(\mathcal{H}\) is the completion of the set of measurable functions ψ such that \(\Vert \psi \Vert _{\varphi}^{2} <\infty\), and \(\{\psi_{n}\}\) is a sequence in \(\mathcal{H}\) such that \(\langle \psi_{i}, \psi_{j}\rangle_{\varphi}=\delta_{ij}\).

The Malliavin derivative \(D_{t}^{H}\) of a smooth and cylindrical random variable \(F\in\mathcal{S}\) is defined as the \(\mathcal{H}\)-valued random variable:

$$\begin{aligned} D_{t}^{H}F = \sum_{i = 1}^{n} {\frac{{\partial f}}{{\partial {x_{i}}}} \biggl( \int_{0}^{T}\psi_{1}(t)\,dB^{H}_{t}, \ldots, \int _{0}^{T}\psi_{n}(t)\,dB^{H}_{t} \biggr)} {\psi_{i}(t)}. \end{aligned}$$

Then, for any \(p\geq1\), the derivative operator \(D_{t}^{H}\) is a closable operator from \(L^{p}(\Omega)\) into \(L^{p}(\Omega;\mathcal{H})\). Next, we introduce the φ-derivative of F:

$$\begin{aligned} D_{t}^{\varphi}F = \int_{{{R}_{+} }} {\varphi( {t,v})} D_{v}^{H} F\,dv. \end{aligned}$$
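
As a concrete illustration (a standard example, not taken from the paper), take \(F = B^{H}( T) = \int_{0}^{T} 1\,dB^{H}_{t}\). Then \(D_{t}^{H}F = \mathbf{1}_{[0,T]}( t)\), and the φ-derivative is

$$\begin{aligned} D_{t}^{\varphi}F = \int_{{{R}_{+} }} {\varphi( {t,v})\mathbf{1}_{[0,T]}( v)}\,dv = \int_{0}^{T} H( {2H - 1}){\vert {t - v} \vert ^{2H - 2}}\,dv = H\bigl[ {{( {T - t})}^{2H - 1}} + {t^{2H - 1}} \bigr],\quad 0 \le t \le T. \end{aligned}$$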

The elements of \(\mathcal{H}\) may be distributions of negative order rather than functions. For this reason, it is convenient to introduce the space \(\vert \mathcal{H} \vert \) of measurable functions h on \([ {0,T} ]\) satisfying

$$\begin{aligned} \Vert h \Vert _{\vert \mathcal{H} \vert }^{2} = \int_{0}^{T} { \int_{0}^{T} {\bigl\vert {h( t)} \bigr\vert \bigl\vert {h( s)}\bigr\vert \varphi( {t,s})\,ds\,dt < \infty}}. \end{aligned}$$

It is not difficult to show that \(\vert \mathcal{H} \vert \) is a Banach space with the norm \(\Vert {\cdot} \Vert _{\vert \mathcal{H} \vert }\).

In addition, we denote by \(D_{t}^{H,k}\) the iteration of the derivative operator for any integer \(k\geq1\). The Sobolev space \({\mathbb {D}}^{k,p}\) is the closure of \(\mathcal{S}\) with respect to the following norm, for any \(p\geq1\) (⊗ denotes the tensor product):

$$\begin{aligned} \Vert F\Vert _{k,p}^{p}=\mathbb{E}\vert F\vert ^{p}+\mathbb{E}\sum_{j=1}^{k}{ \bigl\Vert D_{t}^{H,j}F\bigr\Vert _{\mathcal{H}^{\bigotimes j}}^{p}}. \end{aligned}$$

Similarly, for a Hilbert space U, we denote by \({\mathbb {D}}^{k,p}(U)\) the corresponding Sobolev space of U-valued random variables. For any \(p>0\) we denote by \({\mathbb{D}}^{1,p}(\vert \mathcal{H}\vert )\) the subspace of \({\mathbb{D}}^{1,p}(\mathcal{H})\) formed by the elements h such that \(h\in \vert \mathcal{H}\vert \).

More details on fBm can be found in Biagini et al. [14], Alòs, Mazet and Nualart [24], and Hu and Øksendal [9].

Lemma 1

If \(u(t)\) is a stochastic process in the space \({\mathbb{D}}^{1,2}(\vert \mathcal{H}\vert )\) satisfying

$$\begin{aligned} \int_{0}^{T} { \int_{0}^{T} {\bigl\vert {D_{s}^{H}u( t)} \bigr\vert {{\vert {t - s} \vert }^{2H - 2}}\,ds\,dt} } < \infty, \end{aligned}$$

then the symmetric integral coincides with the forward and backward integrals (see [14], p.159).

Definition 2

The space \(\mathcal{L}_{\varphi}[0,T]\) of integrands is defined as the family of stochastic processes \(u(t)\) on \([0,T]\), such that \(\mathbb{E}\Vert {u( t)} \Vert _{\varphi}^{2} < \infty\), \(u(t)\) is φ-differentiable, the trace of \(D_{s}^{\varphi}u( t)\) exists, \(0 \le s \le T\), \(0 \le t \le T\), and

$$\begin{aligned} \mathbb{E} \int_{0}^{T} { \int_{0}^{T} {{{\bigl[ {D_{t}^{\varphi}u( s)} \bigr]}^{2}}\,ds\,dt} } < \infty, \end{aligned}$$

and for each sequence of partitions \(( {{\pi_{n}},n \in\mathbb{N}})\) such that \(\vert {{\pi_{n}}} \vert \to0\) as \(n \to\infty\),

$$\begin{aligned} \sum_{i,j = 0}^{n - 1} {\mathbb{E} \biggl[ { \int_{t_{i}^{( n)}}^{t_{i + 1}^{( n)}} { \int_{t_{j}^{( n)}}^{t_{j + 1}^{( n)}} {\bigl\vert {D_{s}^{\varphi}u^{\pi}\bigl(t_{i}^{( n)}\bigr) D_{t}^{\varphi}u^{\pi}\bigl(t_{j}^{( n)}\bigr) -D_{s}^{\varphi}{u(t)}D_{t}^{\varphi}{u(s)}} \bigr\vert \,ds\,dt} } } \biggr]} \end{aligned}$$

and

$$\begin{aligned} \mathbb{E}\bigl[ {\bigl\Vert {{u^{\pi}} - u} \bigr\Vert _{\varphi}^{2}} \bigr] \end{aligned}$$

tend to 0 as \(n\to\infty\), where \({\pi_{n}} = t_{0}^{( n)} < t_{1}^{( n)} < \cdots < t_{n - 1}^{( n)} < t_{n}^{( n)} = T\).

Lemma 3

Let \(B^{H}(t)\) be an fBm with \(\frac{1}{2}< H<1\), and let \(u(t)\) be a stochastic process in \({{\mathbb{D}}^{1,2}}( {\vert \mathcal{H} \vert }) \cap {\mathcal{L}_{\varphi}}[ {0,T} ]\). Then, for every \(T<\infty\),

$$\begin{aligned} \mathbb{E} { \biggl[ { \int_{0}^{T} {u( s)\,{d^{\circ}} {B^{H}}( s)} } \biggr]^{2}} \le2H{T^{2H - 1}} \mathbb{E} \biggl[ { \int_{0}^{T} {{{ \bigl\vert {u( s)}\bigr\vert }^{2}}\,ds} } \biggr] + 4T\mathbb{E} \int_{0}^{T} {\bigl[D_{s}^{\varphi}u( s)\bigr]^{2}\,ds}. \end{aligned}$$

The detailed proof of Lemma 3 can be found in the authors' previous work [28–30].
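
As a quick consistency check of Lemma 3 (not part of the proof), take \(u( s) \equiv1\), so that \(D_{s}^{\varphi}u = 0\) and the last term vanishes. Then

$$\begin{aligned} \mathbb{E} { \biggl[ { \int_{0}^{T} {1\,{d^{\circ}} {B^{H}}( s)} } \biggr]^{2}} = \mathbb{E}\bigl[ {B^{H}}( T) \bigr]^{2} = {T^{2H}} \le2H{T^{2H}} = 2H{T^{2H - 1}}\mathbb{E} \biggl[ { \int_{0}^{T} {1\,ds} } \biggr], \end{aligned}$$

and the inequality \({T^{2H}} \le2H{T^{2H}}\) holds precisely because \(H>\frac{1}{2}\).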

In this paper, we always assume that the following non-Lipschitz condition, which was proposed by Yamada and Watanabe [22], is satisfied.

Hypothesis 4

(1) There exists a function \(\kappa( q)>0\), \(q > 0\), with \(\kappa( 0 )=0\), such that \(\kappa( q)\) is a continuous, non-decreasing, concave function and \(\int_{0 + } {\frac{{dq}}{{\kappa( q)}}} = + \infty\);

(2) \(b( {t,0})\) and \(\sigma( {t,0})\) are locally integrable with respect to t;

(3) Furthermore, for all \(t \in[ {0,T} ]\) and \(b( {t,\cdot}),\sigma( {t,\cdot}) \in {\mathcal{L}_{\varphi}}[ {0,T} ] \cap{\mathbb{D}^{1,2}}( {\vert \mathcal{H} \vert } )\), we have

    $$\begin{aligned}& \mathbb{E} {\bigl\vert {b( {t,X}) - b( {t,Y})} \bigr\vert ^{2}} + \mathbb{E} {\bigl\vert {\sigma( {t,X}) - \sigma( {t,Y})} \bigr\vert ^{2}} \\& \quad {} + \mathbb{E} {\bigl\vert {D_{t}^{\varphi}\bigl( {\sigma({t,X}) - \sigma( {t,Y})} \bigr)} \bigr\vert ^{2}} \le\kappa\bigl( {\mathbb{E} {{\vert {X - Y} \vert }^{2}}} \bigr). \end{aligned}$$
    (2.1)

The above-mentioned Hypothesis 4 is the so-called non-Lipschitz condition. Non-Lipschitz conditions come in a variety of forms [31–34]; here we consider one of them. In particular, if we let \(\kappa( q) = K'q\), then the non-Lipschitz condition reduces to the Lipschitz condition. In other words, the non-Lipschitz condition is weaker than the Lipschitz condition.

Now, we give some concrete examples of the function κ. Let \(K'>0\) and let \(\mu\in\mathopen{]}0,1[\) be sufficiently small. Define

$$\begin{aligned}& {\kappa_{1}} ( x) = K'x,\quad x \ge0,\\& {\kappa_{2}} ( x) = \textstyle\begin{cases} x\log({x^{ - 1}}),& 0 \le x \le\mu, \\ \mu\log({\mu^{ - 1}}) + \kappa'_{2} ( {\mu-}) ( {x - \mu}),& x > \mu, \end{cases}\displaystyle \\& {\kappa_{3}} ( x) = \textstyle\begin{cases} x\log({x^{ - 1}})\log\log({x^{ - 1}}),& 0 \le x \le\mu,\\ \mu\log({\mu^{ - 1}})\log\log({\mu^{-1}}) + \kappa'_{3} ( {\mu - }) ( {x - \mu}),& x > \mu, \end{cases}\displaystyle \end{aligned}$$

where \(\kappa'_{i}( {\mu-})\) denotes the left derivative of \(\kappa_{i}\) at μ. They are all concave, non-decreasing functions satisfying \(\int_{0 + } {\frac{1}{{{\kappa_{i}}( { x})}}}\,dx = \infty\) (\(i = 1,2,3\)).
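
For instance, that \(\kappa_{2}\) satisfies the Osgood condition can be checked directly with the substitution \(u = \log( {x^{ - 1}})\): for \(0 < \varepsilon< \mu\),

$$\begin{aligned} \int_{\varepsilon}^{\mu} {\frac{{dx}}{{x\log( {x^{ - 1}})}}} = \int_{\log( {\mu^{ - 1}})}^{\log( {\varepsilon^{ - 1}})} {\frac{{du}}{u}} = \log\log\bigl( {\varepsilon^{ - 1}} \bigr) - \log\log\bigl( {\mu^{ - 1}} \bigr) \to+ \infty\quad \mbox{as } \varepsilon\to0 + , \end{aligned}$$

so \(\int_{0 + } {\frac{{dx}}{{{\kappa_{2}}( x)}}} = + \infty\); the argument for \(\kappa_{3}\) is analogous.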

3 The main theorems

In this section, using an iteration of Picard type, we discuss the solutions of non-Lipschitz SDEs with fBm. Let \({X_{0}}( t) \equiv \xi\) be a random variable with \(\mathbb{E}{\vert \xi \vert ^{2}} < + \infty\), and construct an approximating sequence of stochastic processes \(\{ X_{k}(t)\}_{k \geq1}\) as follows:

$$\begin{aligned} {X_{k}}( t) = \xi + \int_{0}^{t} {b\bigl( {s,{X_{k - 1}}( s)} \bigr)}\,ds + \int _{0}^{t} {\sigma\bigl( {s,{X_{k - 1}}( s)} \bigr)} \,d^{\circ}{B^{H}}( s),\quad k = 1,2, \ldots. \end{aligned}$$
(3.1)
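
The scheme (3.1) can be visualized numerically. The sketch below is illustrative only: the helper fbm_path and the coefficients in the commented example are the hypothetical ones from the sketch in Section 1, and the pathwise integrals are replaced by left-point Riemann sums along one simulated fBm path.

```python
import numpy as np

def picard_iterates(xi, b, sigma, t, B, n_iter):
    # Successive approximations (3.1) along one fBm path B on the grid t,
    # with both integrals replaced by left-point Riemann sums.
    X = np.full_like(t, xi)              # X_0(t) = xi
    iterates = [X.copy()]
    for _ in range(n_iter):
        X_new = np.empty_like(t)
        X_new[0] = xi
        drift = diffusion = 0.0
        for k in range(len(t) - 1):
            drift += b(t[k], X[k]) * (t[k + 1] - t[k])
            diffusion += sigma(t[k], X[k]) * (B[k + 1] - B[k])
            X_new[k + 1] = xi + drift + diffusion
        X = X_new
        iterates.append(X.copy())
    return iterates

# Example (hypothetical coefficients; fbm_path as in the earlier sketch):
# t = np.linspace(0.0, 1.0, 501)
# B = fbm_path(500, 1.0, 0.7, np.random.default_rng(0))
# Xs = picard_iterates(1.0, lambda t, x: -x, lambda t, x: 0.5 * x, t, B, n_iter=5)
```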

Hereafter, we assume that \(1 \le T < + \infty\) without loss of generality.

First, we give the following four key lemmas. The proofs of Lemma 5 and Lemma 6 will be presented in the Appendix.

Lemma 5

Under Hypothesis 4, there exists a positive constant K such that, for all \(b( {t,\cdot}),\sigma( {t,\cdot}) \in{\mathcal{L}_{\varphi}}[ {0,T} ] \cap{\mathbb{D}^{1,2}}( {\vert \mathcal{H} \vert })\) and \(t \in[ {0,T} ]\), we have

$$\begin{aligned} \mathbb{E} {\bigl\vert {b( {t,X})} \bigr\vert ^{2}} + \mathbb{E} {\bigl\vert {\sigma( {t,X})} \bigr\vert ^{2}} + \mathbb{E} {\bigl\vert {D_{t}^{\varphi}\sigma( {t,X})} \bigr\vert ^{2}} \le K\bigl( {1 + \mathbb {E} {{\vert X \vert }^{2}}} \bigr). \end{aligned}$$

Lemma 6

Under the conclusion of Lemma  5, one can get

$$\begin{aligned} \mathbb{E} {\bigl\vert {{X_{k}}( t)} \bigr\vert ^{2}} \le{C_{1}},\quad k = 1,2,\ldots,t \in[0,T], \end{aligned}$$
(3.2)

where \({C_{1}} = 3( {1 + \mathbb{E}{{\vert \xi \vert }^{2}}})\exp( {12K{T^{2}}} )\).

Lemma 7

If \(b(t,X)\) and \(\sigma(t, X)\) satisfy Hypothesis 4, then for \(t \in[0,T]\), \(n \ge1\), and \(k \geq1\), we have

$$\begin{aligned} \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{n}}( s)} \bigr\vert ^{2}} \le{C_{2}} \int_{0}^{t} {\kappa\bigl( {\mathbb{E} {{\bigl\vert {{X_{n + k - 1}}( s) - X_{n - 1}( s)}\bigr\vert }^{2}}} \bigr)}\,ds \end{aligned}$$
(3.3)

and

$$\begin{aligned} \sup _{0 \le s \le t} \mathbb{E} {\bigl\vert {{X_{n+k}}( s) -{X_{n}}( s)} \bigr\vert ^{2}} \le{C_{3}}t, \end{aligned}$$

where \(C_{2}=8T\) and \(C_{3}\) is a constant.

Proof

For \(0 \le s \le t\), we show that

$$\begin{aligned}& \mathbb{E}\bigl\vert X_{n+k}( s) - X_{n}( s)\bigr\vert ^{2} \\& \quad \le2\mathbb{E}\biggl\vert \int_{0}^{s} \bigl(b\bigl( s_{1},X_{n+k - 1}(s_{1}) \bigr) - b\bigl( s_{1},X_{n - 1}(s_{1}) \bigr) \bigr)\,d{s_{1}} \biggr\vert ^{2} \\& \qquad {} + 2\mathbb{E}\biggl\vert \int_{0}^{s} \bigl(\sigma\bigl( s_{1},X_{n+k - 1}(s_{1}) \bigr) - \sigma\bigl( s_{1},X_{n - 1}(s_{1}) \bigr) \bigr) \,d^{\circ}{B^{H}}(s_{1}) \biggr\vert ^{2} \\& \quad \le8T\mathbb{E} \int_{0}^{t} \bigl[\bigl\vert b\bigl( s_{1},X_{n+k - 1}(s_{1}) \bigr) - b\bigl(s_{1},X_{n - 1}(s_{1})\bigr) \bigr\vert ^{2} \\& \qquad {} +\bigl\vert \sigma\bigl( s_{1},X_{n+k - 1}(s_{1}) \bigr) - \sigma\bigl( s_{1},X_{n -1}(s_{1})\bigr)\bigr\vert ^{2} \\& \qquad {}+\bigl\vert D_{s_{1}}^{\varphi}\bigl(\sigma \bigl( s_{1},X_{n+k - 1}(s_{1}) \bigr) - \sigma \bigl(s_{1},X_{n - 1}(s_{1})\bigr)\bigr)\bigr\vert ^{2}\bigr]\,d{s_{1}} \\& \quad \le{C_{2}} \int_{0}^{t} {\kappa\bigl( {\mathbb{E} {{\bigl\vert {{X_{n + k - 1}}( s) -X_{n - 1}( s)} \bigr\vert }^{2}}} \bigr)}\,ds. \end{aligned}$$

where we have used the elementary inequality \((a+b)^{2} \le2( {a^{2} + b^{2}})\), the Cauchy-Schwarz inequality, Lemma 3 (together with \(T \ge1\) and \(H < 1\), so that \(4H{T^{2H - 1}} \le8T\)), and Hypothesis 4. Then it is easy to verify

$$\begin{aligned} \sup _{0 \le s \le t} \mathbb{E} {\bigl\vert {{X_{n+k}}( s) -{X_{n}}( s)} \bigr\vert ^{2}} \le& {C_{2}} \int_{0}^{t} {\kappa\bigl( {\mathbb{E} {{ \bigl\vert {{X_{n+ k - 1}}( s) - X_{n - 1}( s)} \bigr\vert }^{2}}} \bigr)}\,ds \\ \le& {C_{2}} \int_{0}^{t} {\kappa( {4{C_{1}}})}\,ds \le{C_{3}}t. \end{aligned}$$

This completes the proof of Lemma 7. □

Now set \({\kappa_{1}}( q) = {C_{2}}\kappa( q)\) and choose \(0 < {T_{1}} \le T\) such that \({\kappa_{1}}( {{C_{3}}t}) \le{C_{3}}\) for all \(t \in[ {0,{T_{1}}} ]\). We note that in what follows we first prove the main theorem, Theorem 9, on the time interval \([0,{T_{1}}]\), and then extend the result to the whole interval \([0,T]\). Fix \(k \geq1\) arbitrarily and define two sequences of functions \({\{ {{\phi_{n}}( t)} \}_{n = 1,2, \ldots}}\) and \({\{ {{{\tilde{\phi}}_{n,k}}( t)} \}_{n = 1,2, \ldots}}\), where

$$\begin{aligned}& {\phi_{1}}( t)= {C_{3}}t, \\& {\phi_{n + 1}}( t) = \int_{0}^{t} {{\kappa_{1}}\bigl( {{ \phi_{n}}( s)} \bigr)}\,ds, \\& {\tilde{\phi}_{n,k}}( t) = \sup _{0 \le s \le t} \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{n}}( s)} \bigr\vert ^{2}},\quad n = 1,2, \ldots. \end{aligned}$$

Lemma 8

Under Hypothesis 4,

$$\begin{aligned} 0 \le{\tilde{\phi}_{n,k}}( t) \le{\phi_{n}}( t) \le{\phi_{n - 1}}( t) \le \cdots \le{\phi_{1}}( t),\quad t \in[ {0,{T_{1}}} ], \end{aligned}$$
(3.4)

for all positive integers n.

Proof

By Lemma 7, we have

$$\begin{aligned} {\tilde{\phi}_{1,k}}( t) = \sup _{0 \le s \le t} \mathbb{E} { \bigl\vert {{X_{1+k}}( s) - {X_{1}}( s)} \bigr\vert ^{2}} \le{C_{3}}t = {\phi _{1}}( t),\quad t \in[ {0,{T_{1}}} ]. \end{aligned}$$

Then, since \({\kappa_{1}}( q) = {C_{2}}\kappa( q)\), where \(\kappa( q)\) is a concave, non-decreasing function, and since

$$\begin{aligned} \mathbb{E} {\bigl\vert {{X_{k + 1}}( s) - {X_{1}}( s)} \bigr\vert ^{2}} \le\sup _{0 \le s \le t} \mathbb{E} {\bigl\vert {{X_{k + 1}}( s) - {X_{1}}( s)}\bigr\vert ^{2}} = {\tilde{\phi}_{1,k}}( t),\quad 0 \le s \le t, \end{aligned}$$

it is easy to verify

$$\begin{aligned} {{\tilde{\phi}}_{2,k}}( t) =& \sup _{0 \le s \le t} \mathbb{E} {\bigl\vert {{X_{2 + k}}( s) - {X_{2}}( s)} \bigr\vert ^{2}} \\ \le& {C_{2}} \int_{0}^{t} {\kappa\bigl( {\mathbb{E} {{\bigl\vert {{X_{k + 1}}( s) -{X_{1}}( s)} \bigr\vert }^{2}}} \bigr)}\,ds \\ \le& \int_{0}^{t} {{\kappa_{1}}\bigl( {{{ \tilde{\phi}}_{1,k}}( s)} \bigr)}\,ds \le \int_{0}^{t} {{\kappa_{1}}\bigl( {{ \phi_{1}}( s)} \bigr)}\,ds \\ =& {\phi_{2}}( t) = \int_{0}^{t} {{\kappa_{1}}( {{C_{3}}s})}\,ds \\ \le& {C_{3}}t = { \phi_{1}}( t),\quad t \in[ {0,{T_{1}}} ]. \end{aligned}$$

That is to say, for \(n=2\), we have

$$\begin{aligned} {\tilde{\phi}_{2,k}}( t) \le{\phi_{2}}( t) \le{ \phi_{1}}( t),\quad t \in[ {0,{T_{1}}} ]. \end{aligned}$$

Next, assume that (3.4) holds for some \(n \geq2\). Since, by the induction hypothesis,

$$\begin{aligned} \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{n}}( s)} \bigr\vert ^{2}} \le\sup _{0 \le s \le t} \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{n}}( s)}\bigr\vert ^{2}} = {\tilde{\phi}_{n,k}}( t) \le{\phi_{n}}( t), \end{aligned}$$

it is easy to verify that, for \(n+1\),

$$\begin{aligned} {{\tilde{\phi}}_{n + 1,k}}( t) =& \sup _{0 \le s \le t} \mathbb{E} {\bigl\vert {{X_{n + k + 1}}( s) - {X_{n + 1}}( s)} \bigr\vert ^{2}} \\ \le& \int_{0}^{t} {{\kappa_{1}}\bigl( { \mathbb{E} {{\bigl\vert {{X_{n + k}}( s) -{X_{n}}( s)} \bigr\vert }^{2}}} \bigr)}\,ds \\ \le& \int_{0}^{t} {{\kappa_{1}}\bigl( {{{ \tilde{\phi}}_{n,k}}( s)} \bigr)}\,ds \\ \le& \int_{0}^{t} {{\kappa_{1}}\bigl( {{ \phi_{n}}( s)} \bigr)}\,ds = {\phi_{n + 1}}( t) \\ \le& \int_{0}^{t} {{\kappa_{1}}\bigl( {{ \phi_{n - 1}}( s)} \bigr)}\,ds = {\phi _{n}}( t),\quad t \in[ {0,{T_{1}}} ]. \end{aligned}$$

This completes the proof of Lemma 8. □

Theorem 9

Under Hypothesis 4, we have

$$\begin{aligned} \lim _{n,i \to\infty} \sup _{0 \le t \le T} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0. \end{aligned}$$

By Theorem 9, \(\{{X_{k}}(\cdot)\}_{k\geq1}\) is a Cauchy sequence (in the mean-square sense, uniformly in \(t\in[0,T]\)); denote its limit by \(X( \cdot)\). Then, letting \(k\to \infty\) in (3.1), we see that a solution to (1.1) exists.

Proof

Step 1: In this step we shall show

$$\begin{aligned} \lim _{n,i \to\infty} \sup _{0 \le t \le{T_{1}}} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0. \end{aligned}$$

By Lemma 8, we know that \({\phi_{n}}( t)\) decreases monotonically as \(n \to\infty\) and that \({\phi_{n}}( t)\) is a non-negative function on \([ {0,{T_{1}}} ]\). Therefore, we can define the limit function \(\phi( t)\) by \({\phi_{n}}( t) \downarrow\phi( t)\). It is easy to verify that \(\phi( 0) = 0\) and that \(\phi( t)\) is a continuous function on \([ {0,{T_{1}}} ]\) [35]. According to the definitions of \({\phi_{n}}( t)\) and \({\phi}( t)\), we obtain

$$\begin{aligned} \phi( t) = \lim _{n \to\infty} {\phi_{n + 1}}( t) = \lim _{n \to\infty} \int_{0}^{t} {{\kappa_{1}}\bigl( {{ \phi_{n}}( s)} \bigr)}\,ds = \int_{0}^{t} {{\kappa_{1}}\bigl( {\phi( s)} \bigr)}\,ds,\quad t \in[ {0,{T_{1}}} ]. \end{aligned}$$
(3.5)

Since \(\phi( 0) = 0\) and

$$\begin{aligned} \int_{0 + } {\frac{{dq}}{{{\kappa_{1}}( q)}}} = \frac {1}{{{C_{2}}}} \int_{0 + } {\frac{{dq}}{{\kappa( q)}}} = + \infty, \end{aligned}$$

(3.5) implies \(\phi( t) \equiv0\), \(t\in[0,T_{1}]\).
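
For completeness, we recall why (3.5) together with the Osgood condition forces \(\phi\equiv0\); this is the standard Bihari-Osgood argument. Suppose \(\phi( {t_{0}}) > 0\) for some \(t_{0} \in(0,{T_{1}}]\) and set \(\tau = \sup\{ {t \le t_{0}:\phi( t) = 0} \}\); by continuity, \(\phi( \tau) = 0\) and \(\phi> 0\) on \((\tau,t_{0}]\). Since \(\phi' = {\kappa_{1}}( \phi)\) on this interval, for small \(\delta> 0\) we have

$$\begin{aligned} \int_{\phi( {\tau+ \delta})}^{\phi( {t_{0}})} {\frac{{dq}}{{{\kappa_{1}}( q)}}} = \int_{\tau+ \delta}^{t_{0}} {\frac{{\phi'( t)}}{{{\kappa_{1}}( {\phi( t)})}}\,dt} = t_{0} - \tau- \delta\le{T_{1}}, \end{aligned}$$

and letting \(\delta\to0 +\) the left-hand side tends to +∞ because \(\int_{0 + } {\frac{{dq}}{{{\kappa_{1}}( q)}}} = + \infty\), a contradiction.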

Therefore we obtain

$$\begin{aligned} 0 \le\lim _{k,n \to\infty} \sup _{0 \le t \le{T_{1}}} \mathbb{E} { \bigl\vert {{X_{n + k}}( t) - {X_{n}}( t)} \bigr\vert ^{2}} = \lim _{k,n \to\infty} {\tilde{\phi}_{n,k}}( {{T_{1}}}) \le\lim _{n \to\infty} {\phi_{n}}( {{T_{1}}}) = 0, \end{aligned}$$

namely,

$$\begin{aligned} \lim _{n,i \to\infty} \sup _{0 \le t \le{T_{1}}} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0. \end{aligned}$$

Step 2: Define

$$\begin{aligned} T_{2} = \sup \Bigl\{ {\tilde{T} :\tilde{T} \in[ {0,T} ] \mbox{ and } \lim _{n,i \to\infty} \sup _{0 \le t \le\tilde{T}} \mathbb{E} \bigl\vert {X_{n}}( t) - {X_{i}}( t) \bigr\vert ^{2}= 0} \Bigr\} . \end{aligned}$$

Immediately, we can observe \(0 < {T_{1}} \le T_{2} \le T\). Now, we shall show

$$\begin{aligned} \lim _{n,i \to\infty} \sup _{0 \le t \le T_{2}} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0. \end{aligned}$$

Let \(\varepsilon>0\) be an arbitrary positive number. Choose \(S_{0}\) so that \(0 < {S_{0}} < \min( {T_{2},1})\) and

$$\begin{aligned} {C_{4}} {S_{0}} < \frac{\varepsilon}{{10}}, \end{aligned}$$
(3.6)

where \({C_{4}} = 8K({1 + {K_{1}}( {1 + \mathbb{E}{{\vert \xi \vert }^{2}}} )}){S_{0}}\), with \({K_{1}} = 3\exp( {12K{T^{2}}})\) so that \({K_{1}}( {1 + \mathbb{E}{{\vert \xi \vert }^{2}}}) = {C_{1}}\).

From the definition of \(T_{2}\), we have

$$\begin{aligned} \lim _{n,i \to\infty} \sup _{0 \le t \le T_{2} - {S_{0}}} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0. \end{aligned}$$

Then, for large enough N, we observe

$$\begin{aligned} \sup _{0 \le t \le T_{2} - {S_{0}}} \mathbb{E} {\bigl\vert {{X_{n}}(t) - {X_{i}}( t)} \bigr\vert ^{2}} < \frac{\varepsilon}{{10}},\quad n,i \ge N. \end{aligned}$$
(3.7)

On the other hand, one can get

$$\begin{aligned} \sup _{T_{2} - {S_{0}} \le t \le T_{2}} \mathbb{E} {\bigl\vert {{X_{n}}( t) -{X_{i}}( t)} \bigr\vert ^{2}} \le& 3\sup _{T_{2} - {S_{0}} \le t \le T_{2}} \mathbb {E} {\bigl\vert {{X_{n}}( t) - {X_{n}}( {T_{2} - {S_{0}}})} \bigr\vert ^{2}} \\ &{}+3\mathbb{E} {\bigl\vert {{X_{n}}( {T_{2} - {S_{0}}}) - {X_{i}}( {T_{2} - {S_{0}}})}\bigr\vert ^{2}} \\ &{}+3\sup _{T_{2} - {S_{0}} \le t \le T_{2}} \mathbb{E} {\bigl\vert {{X_{i}}( {T_{2} - {S_{0}}}) - {X_{i}}( t)} \bigr\vert ^{2}} \\ =& 3{I_{1}} + 3{I_{2}} + 3{I_{3}}. \end{aligned}$$
(3.8)

Now, using Lemma 3, we obtain

$$\begin{aligned} {I_{1}} =& \sup _{T_{2} - {S_{0}} \le t \le T_{2}} \mathbb {E} {\bigl\vert {{X_{n}}( t) - {X_{n}}( {T_{2} - {S_{0}}})} \bigr\vert ^{2}} \\ \le& 2{S_{0}} \mathbb{E} \int_{T_{2} - {S_{0}}}^{T_{2}} {{{\bigl\vert {b\bigl( s_{1},X_{n -1}(s_{1}) \bigr)} \bigr\vert }^{2}}} \,d{s_{1}} \\ &{}+ 4H{S_{0}}^{2H - 1} \mathbb{E} \int_{T_{2} - {S_{0}}}^{T_{2}} {{{\bigl\vert {\sigma \bigl(s_{1},X_{n - 1}(s_{1}) \bigr)} \bigr\vert }^{2}}} \,d{s_{1}} \\ &{}+ 8{S_{0}}\mathbb{E} \int_{T_{2} - {S_{0}}}^{T_{2}} {{{\bigl\vert {D_{s_{1}}^{\varphi}\sigma\bigl( s_{1},X_{n - 1}(s_{1}) \bigr)} \bigr\vert }^{2}}} \,d{s_{1}} \\ \le& 8{S_{0}} \int_{T_{2} - {S_{0}}}^{T_{2}} {K\bigl({1 + {K_{1}}\bigl( {1 + \mathbb {E} {{\vert \xi \vert }^{2}}} \bigr)} \bigr)} \,d{s_{1}} \\ \le& 8S_{0}^{2}K\bigl( {1 + {K_{1}}\bigl( {1 + \mathbb{E} {{\vert \xi \vert }^{2}}} \bigr)} \bigr). \end{aligned}$$

Therefore by (3.6) we have

$$\begin{aligned} {I_{1}} \le\frac{\varepsilon}{{10}} \end{aligned}$$
(3.9)

and

$$\begin{aligned} {I_{3}} \le\frac{\varepsilon}{{10}}. \end{aligned}$$
(3.10)

Meanwhile, (3.7) implies

$$\begin{aligned} {I_{2}} = \mathbb{E} {\bigl\vert {{X_{n}}( {T_{2} - {S_{0}}}) - {X_{i}}( {T_{2} - {S_{0}}})}\bigr\vert ^{2}} < \frac{\varepsilon}{{10}},\quad n,i \ge N. \end{aligned}$$
(3.11)

Now putting (3.7)-(3.11) together, we have

$$\begin{aligned} \sup _{0 \le t \le T_{2}} \mathbb{E} {\bigl\vert {{X_{n}}( t) -{X_{i}}( t)} \bigr\vert ^{2}} \le& \sup _{0 \le t \le T_{2} - {S_{0}}} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} \\ &{}+ \sup _{T_{2} - {S_{0}} \le t \le T_{2}} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} \\ \le& \frac{\varepsilon}{{10}} + 3{I_{1}} + 3{I_{2}} + 3{I_{3}} < \varepsilon. \end{aligned}$$

That is to say,

$$\lim _{n,i \to\infty} \sup _{0 \le t \le T_{2}} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0. $$

Step 3: Arguing by contradiction, we shall show that \(T_{2}=T\). Assume \(T_{2}< T\). Then we can choose a sequence of numbers \({\{ {{a_{i}}} \} _{i = 1,2, \ldots}}\) such that \({a_{i}} \downarrow0\) (\({i \to + \infty }\)) and, for \(n > i \ge1\),

$$\begin{aligned} \sup _{0 \le t \le T_{2}} \mathbb{E} {\bigl\vert {{X_{n}}( t) -{X_{i}}( t)} \bigr\vert ^{2}} \le{a_{i}}. \end{aligned}$$
(3.12)

We shall divide the step into several sub-steps.

First, for \(n > i \ge1\), we shall show

$$\begin{aligned} \sup _{T_{2} \le s \le T_{2} + t} \mathbb{E} {\bigl\vert {{X_{n}}( s) - {X_{i}}( s)} \bigr\vert ^{2}} \le3{a_{i}} + {C_{5}}t,\quad T_{2} + t \le T, \end{aligned}$$
(3.13)

where \({C_{5}} = 12TK({1 + {K_{1}}( {1 + \mathbb{E}{{\vert \xi \vert }^{2}}})})\).

To show this, set

$$\begin{aligned}& J_{1}^{( i)} = \mathbb{E} {\bigl\vert {{X_{n}}( {T_{2}}) - {X_{i}}( {T_{2}})} \bigr\vert ^{2}}, \\& J_{2}^{( i)}( t) = \sup _{T_{2} \le s \le T_{2} + t} \mathbb{E} {\biggl\vert { \int_{T_{2}}^{s} {\bigl( {b\bigl( s_{1},X_{n - 1}(s_{1}) \bigr) - b\bigl( {{s_{1}},{X_{i - 1}}(s_{1})} \bigr)} \bigr)\,d{s_{1}}} } \biggr\vert ^{2}}, \\& J_{3}^{( i)}( t)= \sup _{T_{2} \le s \le T_{2} + t} \mathbb{E} {\biggl\vert { \int_{T_{2}}^{s} {\bigl( {\sigma\bigl( s_{1},X_{n - 1}(s_{1}) \bigr) - \sigma\bigl( {{s_{1}},{X_{i - 1}}(s_{1})} \bigr)} \bigr)} \,d^{\circ}{B^{H}}(s_{1})} \biggr\vert ^{2}}. \end{aligned}$$

Then (3.12) implies \(J_{1}^{( i)} \le{a_{i}}\) and

$$\begin{aligned} J_{2}^{i}( t) + J_{3}^{i}( t) \le& 4T \mathbb{E} \int_{T_{2}}^{T_{2}+ t} \bigl[ \bigl\vert b \bigl(s_{1},X_{n - 1}(s_{1})\bigr) - b \bigl(s_{1},X_{i - 1}(s_{1})\bigr) \bigr\vert ^{2} \\ &{}+\bigl\vert \sigma\bigl(s_{1},X_{n - 1}(s_{1}) \bigr) - \sigma\bigl(s_{1},X_{i - 1}(s_{1})\bigr) \bigr\vert ^{2} \\ &{}+ \bigl\vert D_{s_{1}}^{\varphi}\bigl(\sigma\bigl(s_{1},X_{n - 1}(s_{1})\bigr) - \sigma\bigl(s_{1},X_{i -1}(s_{1})\bigr) \bigr)\bigr\vert ^{2}\bigr]\,ds_{1} \\ \le& 4TK\bigl(1 + {K_{1}}\bigl( 1 + \mathbb{E}\vert \xi \vert ^{2}\bigr) \bigr)t. \end{aligned}$$

Therefore

$$\begin{aligned} \sup _{T_{2} \le s \le T_{2} + t} \mathbb{E} {\bigl\vert {{X_{n}}( s) - {X_{i}}( s)} \bigr\vert ^{2}} \le& 3J_{1}^{( i)} + 3J_{2}^{( i)}( t) + 3J_{3}^{( i)}( t) \\ \le& 3{a_{i}} + {C_{5}}t,\quad T_{2} + t \le T. \end{aligned}$$

Next, we shall show an assertion which is analogous to Lemma 8. To state the assertion, we need to introduce several notations.

Choose a positive number \(0 < \eta \le T - T_{2}\) and a positive integer \(j \geq1\), so that

$$\begin{aligned} {C_{6}}\kappa( {3{a_{j}} + {C_{5}}t}) \le{C_{5}},\quad t \in[ {0,\eta} ], \end{aligned}$$
(3.14)

where \({\kappa_{2}}( q) = {C_{6}}\kappa( q)\) and \(C_{6}=12T\).

Introduce the sequence of functions \({\{ {{\psi_{k}}( t)} \}_{k = 1,2, \ldots}}\), \(t \in[ {0,\eta} ]\), defined by

$$\begin{aligned}& {\psi_{1}}( t) = 3{a_{j}} + {C_{5}}t, \\ & { \psi_{k + 1}}( t)= 3{a_{j + k}} + \int_{0}^{t} {{\kappa_{2}}\bigl( {{\psi _{k}}( s)} \bigr)\,ds} , \\& {\tilde{\psi}_{k,n}}( t)= \sup _{T_{2} \le s \le T_{2} + t} \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{j + k}}( s)} \bigr\vert ^{2}}. \end{aligned}$$

Now, the assertion to be proved is the following:

$$\begin{aligned} {\tilde{\psi}_{k,n}}( t) \le{\psi_{k}}( t) \le{\psi_{k - 1}}( t) \le \cdots \le{\psi_{1}}( t),\quad t \in[ {0, \eta} ], \end{aligned}$$
(3.15)

for all positive integers k.

Noticing that \({\kappa_{2}}( q)\) is a non-decreasing, concave function and that (3.13) holds, for \(k=1\) we work out

$$\begin{aligned} {{\tilde{\psi}}_{1,n}}( t) =& \sup _{T_{2} \le s \le T_{2} + t} \mathbb{E} {\bigl\vert {{X_{n + 1}}( s) - {X_{j + 1}}( s)} \bigr\vert ^{2}} \\ \le& 3a_{j + 1} + {C_{6}} \mathbb{E} \int_{T_{2}}^{T_{2} + t} \bigl[ \bigl\vert b \bigl({s_{1}},X_{n}(s_{1})\bigr) - b\bigl( s_{1},X_{j}(s_{1}) \bigr) \bigr\vert ^{2} \\ & {}+ \bigl\vert \sigma\bigl( s_{1},X_{n}(s_{1}) \bigr) - \sigma\bigl(s_{1},X_{j}(s_{1})\bigr) \bigr\vert ^{2} \\ &{}+\bigl\vert D_{s_{1}}^{\varphi}\bigl( \sigma\bigl( s_{1},X_{n}(s_{1})\bigr) - \sigma\bigl(s_{1},X_{j}(s_{1})\bigr)\bigr)\bigr\vert ^{2}\bigr]\,d{s_{1}} \\ \le& 3{a_{j + 1}} + \int_{T_{2}}^{T_{2} + t} {{\kappa_{2}}\bigl( { \mathbb {E} {{\bigl\vert {{X_{n}}(s_{1}) - {X_{j}}(s_{1})} \bigr\vert }^{2}}} \bigr)\,d{s_{1}}} \\ \le& 3{a_{j}} + \int_{T_{2}}^{T_{2} + t} {{\kappa_{2}}( {3{a_{j}} + {C_{5}} {s_{1}}} )\,d{s_{1}}} \le{\psi_{1}}( t),\quad t \in[ {0,\eta} ]. \end{aligned}$$

On the other hand, using (3.14) we arrive at

$$\begin{aligned} {{\tilde{\psi}}_{2,n}}( t) \le& \sup _{T_{2} \le s \le T_{2} + t} \mathbb{E} {\bigl\vert {{X_{n + 2}}( s) - {X_{j + 2}}( s)} \bigr\vert ^{2}} \\ \le& 3{a_{j + 2}} + {C_{6}} \int_{T_{2}}^{T_{2} + t} {\kappa\bigl( {\mathbb {E} {{\bigl\vert {{X_{n + 1}}(s_{1}) - {X_{j + 1}}(s_{1})} \bigr\vert }^{2}}} \bigr)\,d{s_{1}}} \\ \le& 3{a_{j + 2}} + \int_{T_{2}}^{T_{2} + t} {{\kappa_{2}}\bigl( {{{ \tilde{\psi}}_{1,n}}( t)} \bigr)\,d{s_{1}}} \\ \le& 3{a_{j + 1}} + \int_{T_{2}}^{T_{2} + t} {{\kappa_{2}}\bigl( {{ \psi_{1}}( t)} \bigr)\,d{s_{1}}} = {\psi_{2}}( t) \\ \le& 3{a_{j}}+{C_{5}}t = {\psi_{1}}( t),\quad t \in[ {0,\eta} ]. \end{aligned}$$

Then we have proved

$$\begin{aligned} {\tilde{\psi}_{2,n}}( t) \le{\psi_{2}}( t) \le{ \psi_{1}}( t). \end{aligned}$$

Now assume that the assertion holds for \(k \geq2\). Then, by an analogous argument, one can obtain

$$\begin{aligned} {{\tilde{\psi}}_{k + 1,n}}( t) \le& 3{a_{j + k + 1}} + \int _{T_{2}}^{T_{2} + t} {{\kappa_{2}}\bigl( { \mathbb{E} {{\bigl\vert {{X_{n + k}}(s_{1})-{X_{j + k}}(s_{1})} \bigr\vert }^{2}}} \bigr)\,d{s_{1}}} \\ \le& 3{a_{j + k + 1}} + \int_{T_{2}}^{T_{2} + t} {{\kappa_{2}}\bigl( {{{ \tilde{\psi}}_{k,n}}(s_{1})} \bigr)\,d{s_{1}}} \\ \le& 3{a_{j + k}} + \int_{T_{2}}^{T_{2} + t} {{\kappa_{2}}\bigl( {{\psi _{k}}(s_{1})} \bigr)\,d{s_{1}}} = { \psi_{k + 1}}( t) \\ \le& 3{a_{j + k - 1}} + \int_{T_{2}}^{T_{2} + t} {{\kappa_{2}}\bigl( {{\psi _{k - 1}}(s_{1})} \bigr)\,d{s_{1}}} \\ =& { \psi_{k}}( t),\quad t \in[ {0,\eta} ]. \end{aligned}$$

Therefore, we obtain (3.15) for all k. In terms of (3.15), we can define the function \(\psi( t)\) by \({\psi_{k}}( t) \downarrow\psi( t)\) (\({k \to\infty}\)). We observe that

$$\begin{aligned} \psi( 0) =& \lim _{k \to\infty} {\psi_{k + 1}}( 0) \\ =& \lim _{k \to\infty} 3{a_{j + k}} = 0. \end{aligned}$$

It is easy to verify that \(\psi( t)\) is a continuous function on \([ {0,\eta} ]\). Now by the definition of \({\psi_{k + 1}}( t)\) and \(\psi( t)\), we have

$$\begin{aligned} \psi( t) =& \lim _{k \to\infty} {\psi_{k + 1}}( t) \\ =& \lim _{k \to\infty} \biggl[ {3{a_{j + k}} + \int_{0}^{t} {{\kappa_{2}}\bigl( {{ \psi_{k}}( s)} \bigr)\,ds} } \biggr] \\ =& \int_{0}^{t} {{\kappa_{2}}\bigl( {\psi( s)} \bigr)\,ds} . \end{aligned}$$
(3.16)

Since \(\psi( 0) = 0\) and

$$\begin{aligned} \int_{0 + } {\frac{{dq}}{{{\kappa_{2}}( q)}}} = \frac {1}{{{C_{6}}}} \int_{0 + } {\frac{{dq}}{{\kappa( q)}}} = + \infty, \end{aligned}$$

(3.16) implies \(\psi( t) = 0\), \(t \in[ {0,\eta} ]\), by the same Osgood-type argument as for (3.5).

Therefore, we obtain

$$\begin{aligned} \lim _{k \to\infty} {{\tilde{\psi}}_{k,n}}( t) =& \lim _{k \to\infty} \sup _{0 \le s \le T_{2} + t } \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{j + k}}( s)}\bigr\vert ^{2}} \\ \le& \lim _{k \to\infty} \sup _{0 \le s \le T_{2}} \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{j + k}}( s)}\bigr\vert ^{2}} \\ &{}+ \lim _{k \to\infty} \sup _{T_{2} \le s \le T_{2} + \eta} \mathbb{E} {\bigl\vert {{X_{n + k}}( s) - {X_{j +k}}( s)} \bigr\vert ^{2}} \\ \le& \lim _{k \to\infty} {\psi_{k}}( \eta) = \psi( \eta) = 0, \end{aligned}$$

namely

$$\begin{aligned} \lim _{n,i \to\infty} \sup _{0 \le t \le T_{2} + \eta} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0. \end{aligned}$$

But this conclusion contradicts the definition of \(T_{2}\). Therefore \(T_{2} = T\) and, by Step 2, we have shown that

$$\begin{aligned} \lim _{n,i \to\infty} \sup _{0 \le t \le T} \mathbb{E} {\bigl\vert {{X_{n}}( t) - {X_{i}}( t)} \bigr\vert ^{2}} = 0. \end{aligned}$$

The proof of the existence of solutions of SDEs (1.1) is complete. □

Theorem 10

Under Hypothesis 4, path-wise uniqueness holds for (1.1) on \([0,T]\).

Proof

Let \(X( t)\) and \(\tilde{X}( t)\) be two solutions of (1.1) on the same probability space with \(X( 0) = \tilde{X}( 0)\). We observe

$$\begin{aligned}& \mathbb{E} {\bigl\vert {X( t) - \tilde{X}( t)} \bigr\vert ^{2}} \\& \quad = \mathbb{E} {\biggl\vert { \int_{0}^{t} {\bigl( {b\bigl( {s,X( s)} \bigr) - b \bigl( {s,\tilde{X}( s)} \bigr)} \bigr)\,ds} + \int_{0}^{t} {\bigl( {\sigma\bigl( {s,X( s)} \bigr) - \sigma\bigl( {s,\tilde{X}( s)} \bigr)} \bigr)} \,d^{\circ}{B^{H}}( s)} \biggr\vert ^{2}} \\& \quad \le2\mathbb{E} {\biggl\vert { \int_{0}^{t} {\bigl({b\bigl( {s,X( s)} \bigr) - b \bigl( {s,\tilde{X}( s)} \bigr)} \bigr)\,ds} } \biggr\vert ^{2}} + 2 \mathbb{E} {\biggl\vert { \int_{0}^{t} {\bigl( {\sigma\bigl( {s,X( s)} \bigr) - \sigma\bigl( {s,\tilde{X}( s)} \bigr)} \bigr)} \,d^{\circ}{B^{H}}( s)} \biggr\vert ^{2}} \\& \quad \le8T\mathbb{E} \int_{0}^{t} \bigl(\bigl\vert {b\bigl( s, X( s) \bigr) - b\bigl( s,\tilde{X}( s) \bigr)\bigr\vert ^{2} + \bigl\vert \sigma\bigl( s,X( s) \bigr) - \sigma\bigl( s,\tilde{X}( s)\bigr)\bigr\vert }^{2} \\& \qquad {} + \bigl\vert D_{s}^{\varphi}\bigl( \sigma \bigl( s,X( s) \bigr) - \sigma\bigl( s,\tilde{X}( s) \bigr) \bigr) \bigr\vert ^{2}\bigr)\,ds. \end{aligned}$$

Combining the above inequalities and Hypothesis 4, one has

$$\begin{aligned} \mathbb{E} {\bigl\vert {X( t) - \tilde{X}( t)} \bigr\vert ^{2}} \le8T \int_{0}^{t} {\kappa\bigl( {\mathbb{E} {{\bigl\vert {X( s) - \tilde{X}( s)} \bigr\vert }^{2}}} \bigr)}\,ds. \end{aligned}$$
(3.17)

Then, noticing that \(\int_{0 + } {\frac{{dq}}{{\kappa( q)}}} = + \infty\) and arguing as for (3.5), the above inequality (3.17) implies

$$\begin{aligned} \mathbb{E} {\bigl\vert {X( t) - \tilde{X}( t)} \bigr\vert ^{2}} = 0,\quad t\in[0,T]. \end{aligned}$$

This yields \(X( t) = \tilde{X}( t)\) almost surely for every \(t \in[0,T]\).

Thus the path-wise uniqueness holds for (1.1). □

References

  1. Øksendal, B: Stochastic Differential Equations. Springer, Berlin (2005)

  2. Arnold, L: Stochastic Differential Equations, Theory and Applications. John Wiley and Sons, New York (1974)

  3. Friedman, A: Stochastic Differential Equations and Applications. Dover Publications, New York (2006)

  4. Gard, T: Introduction to Stochastic Differential Equations. Marcel Dekker, New York (1988)

  5. Hurst, H: Long-term storage capacity in reservoirs. Trans. Am. Soc. Civ. Eng. 116, 400-410 (1951)

  6. Kolmogorov, A: Wienersche spiralen und einige andere interessante kurven im Hilbertschen raum. C. R. (Dokl.) Acad. Sci. URSS 26, 115-118 (1940)

  7. Mandelbrot, B, Van Ness, J: Fractional Brownian motions, fractional noises and applications. SIAM Rev. 10(4), 422-427 (1968)

  8. Chakravarti, N, Sebastian, K: Fractional Brownian motion models for polymers. Chem. Phys. Lett. 267, 9-13 (1997)

  9. Hu, Y, Øksendal, B: Fractional white noise calculus and application to finance. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 6, 1-32 (2003)

  10. Scheffer, R, Maciel, F: The fractional Brownian motion as a model for an industrial airlift reactor. Chem. Eng. Sci. 56, 707-711 (2001)

  11. Lyons, T: Differential equations driven by rough signals. Rev. Mat. Iberoam. 14, 215-310 (1998)

  12. Nualart, D, Rascanu, A: Differential equations driven by fractional Brownian motion. Collect. Math. 53, 55-81 (2002)

  13. Mishura, Y: Stochastic Calculus for Fractional Brownian Motion and Related Processes. Springer, Berlin (2008)

  14. Biagini, F, Hu, Y, Oksendal, B, Zhang, T: Stochastic Calculus for Fractional Brownian Motion and Applications. Springer, London (2008)

  15. Cox, J, Ingersoll, J, Ross, S: A theory of the term structure of interest rate. Econometrica 53, 385-407 (1985)

  16. Taniguchi, T: Successive approximations to solutions of stochastic differential equations. J. Differ. Equ. 96, 152-169 (1992)

  17. Kwok, Y: Pricing multi-asset options with an external barrier. Int. J. Theor. Appl. Finance 1, 523-541 (1998)

  18. Watanabe, S, Yamada, T: On the uniqueness of solution of stochastic differential equations II. J. Math. Kyoto Univ. 11(3), 553-563 (1971)

  19. Barlow, M: One dimensional stochastic differential equations with no strong solution. J. Lond. Math. Soc. 26, 335-347 (1982)

  20. Yamada, T: On a comparison theorem for solutions of stochastic differential equations and its applications. J. Math. Kyoto Univ. 13(3), 497-512 (1973)

  21. Yamada, T: On the successive approximation of solutions of stochastic differential equations. J. Math. Kyoto Univ. 21(3), 501-515 (1981)

  22. Yamada, T, Watanabe, S: On the uniqueness of solutions of stochastic differential equations. J. Math. Kyoto Univ. 11, 155-167 (1971)

  23. Carmona, P, Coutin, L, Montseny, G: Stochastic integration with respect to fractional Brownian motion. Ann. Inst. Henri Poincaré Probab. Stat. 39, 27-68 (2003)

  24. Alòs, E, Mazet, O, Nualart, D: Stochastic calculus with respect to Gaussian process. Ann. Probab. 29, 766-801 (2001)

  25. Duncan, T, Hu, Y, Pasik-Duncan, B: Stochastic calculus for fractional Brownian motion I: theory. SIAM J. Control Optim. 38, 582-612 (2000)

  26. Alòs, E, Nualart, D: Stochastic integration with respect to the fractional Brownian motion. Stoch. Stoch. Rep. 75(3), 129-152 (2003)

  27. Russo, F, Vallois, P: Forward, backward and symmetric stochastic integration. Probab. Theory Relat. Fields 97, 403-421 (1993)

  28. Xu, Y, Pei, B, Guo, R: Stochastic averaging for slow-fast dynamical systems with fractional Brownian motion. Discrete Contin. Dyn. Syst., Ser. B 20, 2257-2267 (2015)

  29. Xu, Y, Guo, R, et al.: Stochastic averaging principle for dynamical systems with fractional Brownian motion. Discrete Contin. Dyn. Syst., Ser. B 19(4), 1197-1212 (2014)

  30. Xu, Y, Pei, B, Li, Y: An averaging principle for stochastic differential delay equations with fractional Brownian motion. Abstr. Appl. Anal. 2014, Article ID 479195 (2014)

  31. Albeverio, S, Brzézniak, Z, Wu, J: Existence of global solutions and invariant measures for stochastic differential equations driven by Poisson type noise with non-Lipschitz coefficients. J. Math. Anal. Appl. 371, 309-322 (2010)

  32. Taniguchi, T: The existence and uniqueness of energy solutions to local non-Lipschitz stochastic evolution equations. J. Math. Anal. Appl. 360, 245-253 (2009)

  33. Barbu, D, Bocsan, G: Approximations to mild solutions of stochastic semilinear equations with non-Lipschitz coefficients. Czechoslov. Math. J. 52, 87-95 (2002)

  34. Xu, Y, Pei, B, Wu, J: Stochastic averaging principle for differential equations with non-Lipschitz coefficients driven by fractional Brownian motion. Stoch. Dyn. (2016). doi:10.1142/S0219493717500137

  35. Xu, Y, Pei, B, Guo, G: Existence and stability of solutions to non-Lipschitz stochastic differential equations driven by Lévy noise. Appl. Math. Comput. 263, 398-409 (2015)


Acknowledgements

This work was supported by the NSF of China (11572247), the Fundamental Research Funds for the Central Universities, and the Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University. The authors would like to thank the referees for their helpful comments.

Author information

Corresponding author: Correspondence to Yong Xu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Appendix

Proof of Lemma 5

Since \(\kappa( q)\) is a concave, non-decreasing and non-negative function, it lies below a supporting line at \(q=1\), so we can choose two positive constants \(a > 0\) and \(b > 0\) such that

$$\kappa(q) \le a + bq,\quad q \ge0, $$

then, by (2.1), we get

$$\begin{aligned}& \mathbb{E} {\bigl\vert {\sigma( {t,X})} \bigr\vert ^{2}} + \mathbb{E} {\bigl\vert {b( {t,X})}\bigr\vert ^{2}} + \mathbb{E} { \bigl\vert {D_{t}^{\varphi}\sigma( {t,X})} \bigr\vert ^{2}} \\& \quad \le2\mathbb{E}\bigl( {{{\bigl\vert {\sigma( {t,0})} \bigr\vert }^{2}} + {{\bigl\vert {b( {t,0})}\bigr\vert }^{2}}} +{ \bigl\vert {D_{t}^{\varphi}\sigma( {t,0})} \bigr\vert ^{2}}\bigr)+ 2\mathbb{E} {\bigl\vert {\sigma( {t,X}) - \sigma( {t,0})} \bigr\vert ^{2}} \\& \qquad {} + 2\mathbb{E} {\bigl\vert {b( {t,X}) - b( {t,0})} \bigr\vert ^{2}} + 2\mathbb{E} {\bigl\vert {D_{t}^{\varphi}(\sigma( {t,X})-\sigma({t,0}))} \bigr\vert ^{2}} \\& \quad \le2\sup _{0 \le t \le T} \mathbb{E}\bigl( {{{\bigl\vert {\sigma( {t,0})} \bigr\vert }^{2}} + {{\bigl\vert {b( {t,0})} \bigr\vert }^{2}} + {{\bigl\vert {D_{t}^{\varphi}\sigma( {t,0})} \bigr\vert }^{2}}} \bigr) + 2\kappa\bigl( {\mathbb{E} {{\vert X \vert }^{2}}} \bigr) \\& \quad \le K\bigl( {1 + \mathbb{E} {{\vert X \vert }^{2}}} \bigr), \end{aligned}$$

where \(K = \max[ {2\sup _{0 \le t \le T} \mathbb {E}( {{{\vert {\sigma( {t,0})} \vert }^{2}} + {{\vert {b( {t,0})} \vert }^{2}} + {{\vert {D_{t}^{\varphi}\sigma( {t,0})} \vert }^{2}}}) + 2a,2b} ] < + \infty\). □

Proof of Lemma 6

We shall prove, by mathematical induction, that

$$\begin{aligned} \mathbb{E} {\bigl\vert {{X_{k}}( t)} \bigr\vert ^{2}} \le3\mathbb{E} {\vert \xi \vert ^{2}}\sum _{l = 0}^{k} {\frac{{{{( {12KT})}^{l}}}}{{l!}}} {t^{l}} + \sum_{l = 1}^{k} {\frac{{{{( {12KT})}^{l}}}}{{l!}}} {t^{l}} \end{aligned}$$
(A.1)

holds for \(t \in[0,T]\), \(k = 1,2, \ldots\) .

Clearly, by Lemma 3 and Lemma 5, we arrive at

$$\begin{aligned} \mathbb{E} {\bigl\vert {{X_{1}}( t)} \bigr\vert ^{2}} \le& 3\mathbb{E} {\vert \xi \vert ^{2}} + 3 \mathbb{E} {\biggl\vert { \int_{0}^{t} {b\bigl( {s,{X_{0}}( s)} \bigr)}\,ds} \biggr\vert ^{2}} + 3\mathbb{E} {\biggl\vert { \int_{0}^{t} {\sigma\bigl( {s,{X_{0}}( s)} \bigr)} \,d^{\circ}{B^{H}}( s)} \biggr\vert ^{2}} \\ \le& 3\mathbb{E} {\vert \xi \vert ^{2}} + 12T\mathbb{E} \int_{0}^{t} {\bigl( {{{ \bigl\vert {b \bigl({s,{X_{0}}( s)} \bigr)} \bigr\vert }^{2}} + {{\bigl\vert {\sigma\bigl( {s,{X_{0}}( s)} \bigr)} \bigr\vert }^{2}} + {{\bigl\vert {D_{s}^{\varphi}\sigma\bigl( {s,{X_{0}}( s)} \bigr)} \bigr\vert }^{2}}} \bigr)\,ds} \\ \le& 3\mathbb{E} {\vert \xi \vert ^{2}} + 12KTt\bigl( {1 + \mathbb{E} {{\vert \xi \vert }^{2}}} \bigr). \end{aligned}$$
(A.2)

Now, assume that (A.1) holds for k; then, for \(k+1\), we have

$$\begin{aligned} \mathbb{E} {\bigl\vert {{X_{k + 1}}( t)} \bigr\vert ^{2}} \le& 3\mathbb{E} {\vert \xi \vert ^{2}} + 3\mathbb{E} {\biggl\vert { \int_{0}^{t} {b\bigl( {s,{X_{k}}( s)} \bigr)}\,ds} \biggr\vert ^{2}} + 3\mathbb{E} {\biggl\vert { \int_{0}^{t} {\sigma\bigl( {s,{X_{k}}( s)} \bigr)} \,d^{\circ}{B^{H}}( s)} \biggr\vert ^{2}} \\ \le& 3\mathbb{E} {\vert \xi \vert ^{2}} + 12T\mathbb{E} \int_{0}^{t} {\bigl( {{{ \bigl\vert {b \bigl({s,{X_{k}}( s)} \bigr)} \bigr\vert }^{2}} + {{\bigl\vert {\sigma\bigl( {s,{X_{k}}( s)} \bigr)} \bigr\vert }^{2}} + {{\bigl\vert {D_{s}^{\varphi}\sigma\bigl( {s,{X_{k}}( s)} \bigr)} \bigr\vert }^{2}}} \bigr)}\,ds \\ \le& 3\mathbb{E} {\vert \xi \vert ^{2}} + 12KT \int_{0}^{t} {\bigl( {1 + \mathbb{E} {{\bigl\vert {{X_{k}}( s)} \bigr\vert }^{2}}} \bigr)}\,ds \\ \le& 3 \mathbb{E} {\vert \xi \vert ^{2}} + 12KT \int_{0}^{t} {\Biggl( {1 + 3\mathbb{E} {{\vert \xi \vert }^{2}}\sum_{l = 0}^{k} { \frac{{{{( {12KT})}^{l}}}}{{l!}}} {s^{l}} + \sum_{l = 1}^{k} {\frac{{{{( {12KT})}^{l}}}}{{l!}}} {s^{l}}} \Biggr)}\,ds \\ =& 3\mathbb{E} {\vert \xi \vert ^{2}} + 12KTt + 3\mathbb{E} {\vert \xi \vert ^{2}}\sum_{l = 1}^{k + 1} { \frac{{{{( {12KT})}^{l}}}}{{l!}}} {t^{l}} + \sum_{l = 2}^{k + 1} {\frac{{{{( {12KT})}^{l}}}}{{l!}}} {t^{l}} \\ =& 3\mathbb{E} {\vert \xi \vert ^{2}}\sum_{l = 0}^{k + 1} { \frac{{{{( {12KT})}^{l}}}}{{l!}}} {t^{l}} + \sum_{l = 1}^{k + 1} {\frac {{{{( {12KT})}^{l}}}}{{l!}}} {t^{l}}. \end{aligned}$$

Therefore, by induction, (A.1) holds for all k.

Finally, since \(\sum_{l = 0}^{k} {\frac{{{{( {12KT})}^{l}}}}{{l!}}} {t^{l}} \le\exp( {12KTt}) \le\exp( {12K{T^{2}}})\) for \(t \in[0,T]\), (A.1) yields \(\mathbb{E}{\vert {{X_{k}}( t)} \vert ^{2}} \le3( {1 + \mathbb{E}{{\vert \xi \vert }^{2}}})\exp( {12K{T^{2}}}) = {C_{1}}\), which is exactly (3.2). □

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Pei, B., Xu, Y. On the non-Lipschitz stochastic differential equations driven by fractional Brownian motion. Adv Differ Equ 2016, 194 (2016). https://doi.org/10.1186/s13662-016-0916-1
