Open Access

Convergence of the compensated split-step θ-method for nonlinear jump-diffusion systems

Advances in Difference Equations 2017, 2017:189

https://doi.org/10.1186/s13662-017-1247-6

Received: 15 August 2016

Accepted: 16 June 2017

Published: 30 June 2017

Abstract

In this paper, our aim is to develop a compensated split-step θ (CSSθ) method for nonlinear jump-diffusion systems. First, we prove the convergence of the proposed method under a one-sided Lipschitz condition on the drift coefficient and a global Lipschitz condition on the diffusion and jump coefficients. Then we further show that the optimal strong convergence rate of the CSSθ method can be recovered if the drift coefficient also satisfies a polynomial growth condition. Finally, a nonlinear test equation is simulated to verify the theoretical results. The results show that the CSSθ method is efficient for simulating nonlinear jump-diffusion systems.

Keywords

jump-diffusion systems; nonlinear; compensated split-step θ-method; convergence rate

1 Introduction

The aim of this paper is to study the strong convergence of the CSSθ method for the following nonlinear jump-diffusion systems:
$$ \mathrm{d}X(t) = f \bigl( X \bigl( t^{-} \bigr) \bigr) \,\mathrm{d}t + g \bigl( X \bigl( t^{-} \bigr) \bigr) \,\mathrm{d}W ( t ) + h \bigl( X \bigl( t^{-} \bigr) \bigr) \,\mathrm{d}N ( t ) $$
(1.1)
for \(t>0\), with \(X(0^{-})=X_{0}\in \mathbb{R}^{n}\), where \(X(t^{-})\) denotes \(\lim_{s\rightarrow t^{-}} X(s)\), \(f:\mathbb{R} ^{n} \to \mathbb{R}^{n}\), \(g:\mathbb{R}^{n} \to \mathbb{R}^{n \times m}\), \(h:\mathbb{R}^{n} \to \mathbb{R}^{n}\), \(W ( t ) \) is an m-dimensional Wiener process, and \(N ( t ) \) is a scalar Poisson process with intensity λ.
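To build intuition for system (1.1), a sample path of a scalar instance can be simulated directly. The sketch below is purely illustrative and not part of the analysis in this paper: the coefficients \(f(x)=x-x^{3}\), \(g(x)=0.5x\), \(h(x)=0.1x\) and all parameter values are hypothetical choices. Wiener increments are drawn from \(N(0,\Delta t)\) and Poisson increments from \(\operatorname{Poisson}(\lambda \Delta t)\):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_path(x0=1.0, T=1.0, n_steps=1000, lam=2.0):
    """Explicit Euler-type path of dX = f dt + g dW + h dN (illustration only)."""
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))   # Wiener increment ~ N(0, dt)
        dN = rng.poisson(lam * dt)          # Poisson increment ~ Poisson(lam*dt)
        x += (x - x**3) * dt + 0.5 * x * dW + 0.1 * x * dN
    return x

print(simulate_path())
```

Note that this naive explicit scheme is exactly what the analysis below improves upon: for drifts such as \(x-x^{3}\) it can behave poorly, which motivates the implicit split-step construction.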

Most studies of numerical methods for stochastic differential equations with jumps (SDEwJs) assume globally Lipschitz continuous coefficients; see, for example, [1-6]. However, such results cannot be applied to many real-world models, such as financial models [7] and biological models [8], which violate the global Lipschitz assumption. Hence, the development of numerical methods for SDEwJs under non-globally Lipschitz conditions has become a focal point.

First, we review some achievements in the numerical analysis of highly nonlinear SDEs. Here, we highlight work by Higham et al. [9], Hutzenthaler et al. [10], Szpruch and Mao [11], Mao and Szpruch [12], Huang [13], and Zong et al. [14, 15].

However, the development of numerical methods for jump-diffusion systems with non-globally Lipschitz continuous coefficients has not kept pace with that for nonlinear SDEs, and only a few results are available for nonlinear SDEwJs. For example, Higham and Kloeden proved the strong convergence, with rates, of the split-step backward Euler (SSBE) method and the compensated split-step backward Euler (CSSBE) method for nonlinear jump-diffusion systems in [16, 17]. Huang applied the split-step θ (SSθ) method to SDEwJs, but studied only its mean-square stability in [13]. To the best of our knowledge, there is no result on the strong convergence of the CSSθ method for SDEwJs with non-globally Lipschitz continuous coefficients. The main difference between this paper and our previous work [5] is that here the drift coefficient f is not required to be globally Lipschitz continuous.

The outline of the paper is as follows. In Section 2, we introduce some notions and assumptions for SDEwJs. In Section 3, we construct the CSSθ method for nonlinear SDEwJs. In Section 4, the strong convergence of the numerical solutions produced by the CSSθ method is investigated. The convergence rate is studied in Section 5. Finally, a nonlinear numerical experiment is given to verify the convergence and efficiency of the proposed method.

2 Conditions on the SDEwJs

Let \(( \Omega ,\mathcal{F},\mathbb{P} ) \) be a complete probability space with a filtration \(\{ \mathcal{F}_{t} \} _{t\geq 0}\) satisfying the usual conditions; that is, the filtration is right-continuous and \(\mathcal{F}_{0}\) contains all \(\mathbb{P}\)-null sets. Let \(\langle \cdot ,\cdot \rangle \) denote the Euclidean scalar product, and let \(\vert \cdot \vert \) denote both the Euclidean vector norm in \(\mathbb{R}^{n}\) and the Frobenius matrix norm in \(\mathbb{R}^{n\times m}\). For simplicity, we also write \(a \wedge b = \min \{ a,b \} \) and \(a \vee b = \max \{ a,b \} \).

Now, we give the following assumptions on the coefficients f, g and h.

Assumption 2.1

The functions f, g, h in (1.1) are \(C^{1}\), and there exist positive constants K, \(L_{g}\) and \(L_{h}\) such that the drift coefficient f satisfies the one-sided Lipschitz condition
$$ \bigl\langle x-y, f ( x ) - f ( y ) \bigr\rangle \le K \vert x - y \vert ^{2}, \quad \forall x, y\in \mathbb{R}^{n}, $$
(2.1)
and the diffusion and jump coefficients satisfy the global Lipschitz conditions,
$$\begin{aligned}& \bigl\vert g ( x ) -g(y) \bigr\vert ^{2}\le L_{g} \vert x-y \vert ^{2} , \quad \forall x, y\in \mathbb{R}^{n}, \end{aligned}$$
(2.2)
$$\begin{aligned}& \bigl\vert h ( x ) -h(y) \bigr\vert ^{2}\le L_{h} \vert x-y \vert ^{2} , \quad \forall x, y\in \mathbb{R}^{n}. \end{aligned}$$
(2.3)
We also assume that all moments of the initial value are bounded; that is, for any \(p\in [1,+\infty )\) there exists a positive constant C such that
$$ \mathbb{E} \vert X_{0} \vert ^{p}\le C. $$
(2.4)

Lemma 2.1

Under Assumption  2.1, equation (1.1) has a unique càdlàg solution on \([0,+\infty )\).

Proof

See [16]; for more relaxed conditions, see [18]. □

From Assumption 2.1, we have the following estimates:
$$\begin{aligned}& \bigl\langle x, f ( x ) \bigr\rangle = \bigl\langle x, f ( x ) -f(0)+f(0) \bigr\rangle \le \biggl(K+\frac{1}{2} \biggr) \vert x \vert ^{2}+ \frac{1}{2} \bigl\vert f(0) \bigr\vert ^{2}, \end{aligned}$$
(2.5)
$$\begin{aligned}& \bigl\vert g ( x ) \bigr\vert ^{2}= \bigl\vert g ( x ) -g(0)+g(0) \bigr\vert ^{2}\le 2L_{g} \vert x \vert ^{2} +2 \bigl\vert g(0) \bigr\vert ^{2}, \end{aligned}$$
(2.6)
$$\begin{aligned}& \bigl\vert h ( x ) \bigr\vert ^{2}= \bigl\vert h ( x ) -h(0)+h(0) \bigr\vert ^{2}\le 2L_{h} \vert x \vert ^{2}+2 \bigl\vert h(0) \bigr\vert ^{2}. \end{aligned}$$
(2.7)
It follows that
$$ \bigl\langle x, f ( x ) \bigr\rangle \vee \bigl\vert g(x) \bigr\vert ^{2}\vee \bigl\vert h(x) \bigr\vert ^{2} \le L \bigl(1+ \vert x \vert ^{2} \bigr), $$
(2.8)
where \(L=\max \{ (K+\frac{1}{2}),2L_{g},2L_{h},\frac{1}{2} \vert f(0) \vert ^{2},2 \vert g(0) \vert ^{2},2 \vert h(0) \vert ^{2} \} \).
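These conditions can be checked numerically for a concrete drift. The snippet below is an illustration only (the drift \(f(x)=x-x^{3}\) is a hypothetical choice, not one used later in the paper): it verifies the one-sided Lipschitz condition (2.1) with \(K=1\) on random samples, and shows that the ordinary Lipschitz ratio is unbounded, so this f is not globally Lipschitz:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return x - x**3   # one-sided Lipschitz with K = 1, but not globally Lipschitz

x = rng.normal(scale=5.0, size=100_000)
y = rng.normal(scale=5.0, size=100_000)

# One-sided Lipschitz condition (2.1) in the scalar case:
# (x - y)(f(x) - f(y)) = (x - y)^2 * (1 - (x^2 + x*y + y^2)) <= (x - y)^2.
one_sided = (x - y) * (f(x) - f(y))
assert np.all(one_sided <= (x - y) ** 2 + 1e-9)

# The ordinary Lipschitz ratio |f(x) - f(y)| / |x - y| grows without bound:
ratio = np.abs(f(x) - f(y)) / np.maximum(np.abs(x - y), 1e-12)
print("max Lipschitz ratio on this sample:", ratio.max())
```

The identity in the comment explains the check: since \(x^{2}+xy+y^{2}\ge 0\), the condition holds with \(K=1\), while the difference quotient itself is cubic in the sample size.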

3 The compensated split-step θ-method

First, defining
$$ f_{\lambda }(x):=f(x)+\lambda h(x), $$
we can rewrite the jump-diffusion system (1.1) in the following form:
$$ \mathrm{d}X(t) = f_{\lambda } \bigl( X \bigl( t^{-} \bigr) \bigr) \,\mathrm{d}t + g \bigl( X \bigl( t^{-} \bigr) \bigr) \,\mathrm{d}W ( t ) + h \bigl( X \bigl( t^{-} \bigr) \bigr) \,\mathrm{d} \tilde{N} ( t ) , $$
(3.1)
where
$$ \tilde{N}(t):=N(t)-\lambda t $$
is the compensated Poisson process.
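The point of the compensation is that \(\tilde{N}(t)\) is a mean-zero martingale, so the deterministic jump trend λt is absorbed into the drift \(f_{\lambda }\). A quick Monte Carlo check of the mean-zero property (the values λ = 3, t = 2 are arbitrary illustrative choices):

```python
import numpy as np

# Monte Carlo check that the compensated Poisson process has mean zero:
# N(t) ~ Poisson(lam * t), so E[N(t) - lam * t] = 0.
rng = np.random.default_rng(4)
lam, t = 3.0, 2.0                                  # hypothetical parameters
compensated = rng.poisson(lam * t, size=1_000_000) - lam * t
print("sample mean of N(t) - lam*t:", compensated.mean())
```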
Note that \(f_{\lambda }\) satisfies a one-sided Lipschitz condition with a larger constant; that is,
$$\begin{aligned} \bigl\langle x-y, f_{\lambda } ( x ) - f_{\lambda } ( y ) \bigr\rangle \le & (K+\lambda \sqrt{L_{h}}) \vert x - y \vert ^{2} \\ :=& K_{\lambda } \vert x - y \vert ^{2}, \quad \forall x, y\in \mathbb{R}^{n}. \end{aligned}$$
(3.2)
Then we can get
$$ \bigl\langle x, f_{\lambda } ( x ) \bigr\rangle \vee \bigl\vert g(x) \bigr\vert ^{2} \vee \bigl\vert h(x) \bigr\vert ^{2} \le L_{\lambda } \bigl(1+ \vert x \vert ^{2} \bigr), $$
(3.3)
where \(L_{\lambda }=\max \{ (K+\lambda \sqrt{L_{h}}+\frac{1}{2}),2L _{g},2L_{h},\frac{1}{2} \vert f_{\lambda }(0) \vert ^{2},2 \vert g(0) \vert ^{2},2 \vert h(0) \vert ^{2} \} \).
Now we define the CSSθ method for the jump-diffusion system (1.1) by \(Y_{0}=X(0)\) and
$$\begin{aligned}& Y_{n}^{*} = Y_{n} + \theta f_{\lambda } \bigl( Y_{n}^{*} \bigr) \Delta t, \end{aligned}$$
(3.4)
$$\begin{aligned}& Y_{n + 1} = Y_{n}+ f_{\lambda } \bigl( Y_{n}^{*} \bigr) \Delta t + g \bigl( Y_{n}^{*} \bigr) \Delta W_{n} + h \bigl( Y_{n}^{*} \bigr) \Delta \tilde{N}_{n}, \end{aligned}$$
(3.5)
where \(\theta \in [ 0,1 ] \), \(\Delta t>0\) is the stepsize, and \(Y_{n}\) is the numerical approximation of \(X ( t_{n} ) \) with \(t_{n} = n \Delta t\). Moreover, \(\Delta W_{n}: = W ( t_{n + 1} ) - W ( t_{n} ) \) and \(\Delta \tilde{N} _{n}: = \tilde{N} ( t_{n + 1} ) - \tilde{N} ( t_{n} ) \) are the increments of the Wiener process and the compensated Poisson process, respectively.

If \(\theta =1\), the CSSθ method reduces to the CSSBE method in [16].

Since the CSSθ method is an implicit scheme, we need to make sure that equation (3.4) has a unique solution \(Y_{n}^{*}\) given \(Y_{n}\).

In fact, under the one-sided Lipschitz condition (3.2), equation (3.4) admits a unique solution whenever \(\theta \Delta t K_{\lambda }< 1\) (see [12]). Moreover, if \(K_{\lambda }\le 0\), then \(\theta \Delta t K_{\lambda }< 1\) holds for any \(\Delta t > 0\). Hence, we define
$$ \Delta = \textstyle\begin{cases} \infty , &\text{if } K_{\lambda }\le 0, \text{ or }\theta =0, \\ \frac{1}{\theta K_{\lambda }},& \text{if } K_{\lambda }>0, \theta \in (0,1]. \end{cases} $$
(3.6)
From now on we always assume that \(\Delta t \le \Delta \).
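A minimal sketch of one step of the scheme (3.4)-(3.5) follows. The scalar coefficients are hypothetical, and the implicit stage (3.4) is solved here by fixed-point iteration, which contracts because \(\theta \Delta t\) times the local Lipschitz constant of \(f_{\lambda }\) is small along this trajectory; a Newton solver would be the more robust general choice:

```python
import numpy as np

lam, theta, dt = 2.0, 0.5, 1e-3

def f(x): return x - x**3                # drift (hypothetical, one-sided Lipschitz)
def g(x): return 0.5 * x                 # diffusion (hypothetical)
def h(x): return 0.1 * x                 # jump coefficient (hypothetical)
def f_lam(x): return f(x) + lam * h(x)   # compensated drift of (3.1)

def css_theta_step(y, dW, dN_tilde):
    """One CSS-theta step: implicit stage (3.4), then explicit update (3.5)."""
    y_star = y
    for _ in range(100):                 # fixed-point iteration for (3.4)
        y_new = y + theta * dt * f_lam(y_star)
        if abs(y_new - y_star) < 1e-14:
            break
        y_star = y_new
    return y + dt * f_lam(y_star) + g(y_star) * dW + h(y_star) * dN_tilde

rng = np.random.default_rng(2)
y = 1.0
for _ in range(1000):                    # integrate to T = 1
    dW = rng.normal(0.0, np.sqrt(dt))
    dN_tilde = rng.poisson(lam * dt) - lam * dt   # compensated increment
    y = css_theta_step(y, dW, dN_tilde)
print("Y at T = 1:", y)
```

Note that the scheme drives the compensated increment \(\Delta \tilde{N}_{n}=\Delta N_{n}-\lambda \Delta t\), matching form (3.1) rather than (1.1).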

4 Strong convergence on finite time interval \([0,T]\)

First, we define the step function
$$ Y ( t ) = \sum_{n=0}^{N_{T}-1} Y_{n}^{*}I_{[n\Delta t, ( n + 1) \Delta t ) } ( t ) , $$
(4.1)
where \(N_{T}\) is the largest integer such that \(N_{T}\Delta t\leq T\), and \(I_{A}\) is the indicator function of the set A, i.e.,
$$I_{A} ( x ) = \textstyle\begin{cases} 1,&x \in A, \\ 0,&x \notin A. \end{cases} $$
Then we define the continuous-time approximations
$$\begin{aligned} \overline{Y} ( t ) =& Y_{n} + f_{\lambda } \bigl( Y_{n} ^{*} \bigr) ( t - t_{n} ) + g \bigl( Y_{n}^{*} \bigr) \bigl( W(t) - W(t_{n}) \bigr) \\ &{}+ h \bigl( Y_{n}^{*} \bigr) \bigl( \tilde{N}(t) - \tilde{N}( t_{n}) \bigr) , \quad t \in [ t_{n},t_{n + 1} ) . \end{aligned}$$
(4.2)
Thus we can rewrite (4.2) in integral form:
$$\begin{aligned} \overline{Y} ( t ) = &Y_{0} + \int_{0}^{t} f_{\lambda } \bigl( Y \bigl( s^{-} \bigr) \bigr) \,\mathrm{d}s + \int_{0}^{t} g \bigl( Y \bigl( s^{-} \bigr) \bigr) \,\mathrm{d}W ( s ) \\ &{} + \int_{0}^{t} h \bigl( Y \bigl( s^{-} \bigr) \bigr) \,\mathrm{d}\tilde{N} ( s ) . \end{aligned}$$
(4.3)
It is easy to verify that \(\overline{Y}(t_{n})=Y_{n}\), that is, \(\overline{Y}(t)\) is a continuous-time extension of the discrete approximation \(\{ Y_{n} \} \).

Now we will prove the strong convergence of the CSSθ method. The main techniques of the following proof are based on the fundamental papers [9, 13, 16]; we refer the reader to them for a fuller description of some of the technical details.

The following two lemmas give moment bounds for the numerical and exact solutions.

Lemma 4.1

Let Assumption  2.1 hold, and let \(0 <\theta \leq 1\), \(p\ge 1\) and \(0 < \Delta t < \min \{ 1,\frac{1}{ 2\theta L_{\lambda }} \} \). Then there exists a positive constant A, independent of \(N_{T}\), such that
$$ \mathbb{E} \Bigl( \sup_{ 0 \le n\Delta t \le T} \vert Y_{n} \vert ^{2p} \Bigr) \vee \mathbb{E} \Bigl( \sup_{ 0 \le n\Delta t \le T} \bigl\vert Y_{n}^{*} \bigr\vert ^{2p} \Bigr) < A, $$
where \(Y_{n}^{*}\) and \(Y_{n}\) are produced by (3.4) and (3.5).

Proof

In the following we assume that M is a positive integer such that \(n\Delta t\leq M \Delta t \le T\).

Squaring both sides of (3.4), we find
$$\begin{aligned} \bigl\vert Y_{n}^{*} \bigr\vert ^{2} =& \bigl\vert Y_{n}+ \theta \Delta t f_{\lambda } \bigl( Y_{n}^{*} \bigr) \bigr\vert ^{2} \\ =& \vert Y_{n} \vert ^{2} +\theta^{2} \Delta t^{2} \bigl\vert f_{\lambda } \bigl( Y_{n}^{*} \bigr) \bigr\vert ^{2} + 2\theta \Delta t \bigl\langle Y_{n}, f_{\lambda } \bigl( Y_{n}^{*} \bigr) \bigr\rangle \end{aligned}$$
(4.4)
and
$$ \bigl\langle Y_{n}, f_{\lambda } \bigl( Y_{n}^{*} \bigr) \bigr\rangle = \bigl\langle Y_{n}^{*}, f_{\lambda } \bigl( Y_{n}^{*} \bigr) \bigr\rangle - \theta \Delta t \bigl\langle f_{\lambda } \bigl( Y_{n}^{*} \bigr) , f _{\lambda } \bigl( Y_{n}^{*} \bigr) \bigr\rangle . $$
(4.5)
Substituting (4.5) into (4.4), we have
$$\begin{aligned} \bigl\vert Y_{n}^{*} \bigr\vert ^{2} =& \vert Y_{n} \vert ^{2} -\theta^{2} \Delta t^{2} \bigl\langle f_{\lambda } \bigl( Y_{n}^{*} \bigr) , f_{\lambda } \bigl( Y_{n}^{*} \bigr) \bigr\rangle + 2\theta \Delta t \bigl\langle Y_{n}^{*}, f_{\lambda } \bigl( Y_{n} ^{*} \bigr) \bigr\rangle \\ \le & \vert Y_{n} \vert ^{2} + 2\theta \Delta t \bigl\langle Y _{n}^{*}, f_{\lambda } \bigl( Y_{n}^{*} \bigr) \bigr\rangle \\ \le & \vert Y_{n} \vert ^{2} + 2\theta \Delta t L_{\lambda } \bigl(1+ \bigl\vert Y _{n}^{*} \bigr\vert ^{2} \bigr), \end{aligned}$$
(4.6)
which gives
$$\begin{aligned} \bigl\vert Y_{n}^{*} \bigr\vert ^{2} \le & \frac{1}{1-2\theta \Delta t L_{\lambda }} \vert Y_{n} \vert ^{2} + \frac{2\theta \Delta t L _{\lambda }}{1-2\theta \Delta t L_{\lambda }} \\ = & \vert Y_{n} \vert ^{2}+\frac{2\theta \Delta t L_{\lambda }}{1-2 \theta \Delta t L_{\lambda }} \vert Y_{n} \vert ^{2} + \frac{2 \theta \Delta t L_{\lambda }}{1-2\theta \Delta t L_{\lambda }} \\ =& \vert Y_{n} \vert ^{2}+ \alpha \vert Y_{n} \vert ^{2} + \alpha \\ = &\beta \vert Y_{n} \vert ^{2} + \alpha , \end{aligned}$$
(4.7)
where \(\alpha =\frac{2\theta \Delta t L_{\lambda }}{1-2\theta \Delta t L_{\lambda }}\), \(\beta =1+\alpha \). By (3.5) we have
$$\begin{aligned} \vert Y_{n + 1} \vert ^{2} =& \bigl\vert Y_{n} +f_{\lambda } \bigl(Y_{n}^{*} \bigr) \Delta t+ g \bigl( Y_{n}^{*} \bigr) \Delta W_{n} + h \bigl( Y_{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\vert ^{2} \\ =& \vert Y_{n} \vert ^{2}+ \bigl\vert f_{\lambda } \bigl(Y_{n}^{*} \bigr) \Delta t \bigr\vert ^{2}+ \bigl\vert g \bigl( Y_{n}^{*} \bigr) \Delta W_{n} \bigr\vert ^{2} + \bigl\vert h \bigl( Y_{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\vert ^{2} \\ &{}+2 \bigl\langle Y_{n}, f_{\lambda } \bigl( Y_{n}^{*} \bigr) \Delta t \bigr\rangle +2 \bigl\langle Y_{n}, g \bigl( Y_{n}^{*} \bigr) \Delta W_{n} \bigr\rangle +2 \bigl\langle Y_{n}, h \bigl( Y_{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\rangle \\ &{}+2 \bigl\langle f_{\lambda } \bigl( Y_{n}^{*} \bigr) \Delta t, g \bigl( Y_{n}^{*} \bigr) \Delta W_{n} \bigr\rangle \\ &{}+ 2 \bigl\langle f_{\lambda } \bigl( Y_{n}^{*} \bigr) \Delta t,h \bigl( Y_{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\rangle \\ &{}+2 \bigl\langle g \bigl( Y_{n}^{*} \bigr) \Delta W_{n},h \bigl( Y _{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\rangle . \end{aligned}$$
(4.8)
Then by (3.3), (3.4) and (4.5), we get
$$\begin{aligned} \vert Y_{n + 1} \vert ^{2} &\le \vert Y_{n} \vert ^{2}+ \bigl\vert g \bigl( Y_{n}^{*} \bigr) \Delta W_{n} \bigr\vert ^{2} + \bigl\vert h \bigl( Y_{n}^{*} \bigr) \Delta \tilde{N} _{n} \bigr\vert ^{2}+2 \bigl\langle Y_{n}^{*}, f_{\lambda } \bigl( Y_{n}^{*} \bigr) \Delta t \bigr\rangle \\ &\quad{} +2 \bigl\langle Y_{n}, g \bigl( Y_{n}^{*} \bigr) \Delta W_{n} \bigr\rangle +2 \bigl\langle Y_{n}, h \bigl( Y_{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\rangle \\ &\quad{} +2 \biggl\langle \frac{Y_{n}^{*}-Y_{n}}{\theta }, g \bigl( Y_{n}^{*} \bigr) \Delta W_{n} \biggr\rangle \\ &\quad{} + 2 \biggl\langle \frac{Y_{n}^{*}-Y_{n}}{\theta },h \bigl( Y_{n}^{*} \bigr) \Delta \tilde{N}_{n} \biggr\rangle \\ &\quad{} +2 \bigl\langle g \bigl( Y_{n}^{*} \bigr) \Delta W_{n},h \bigl( Y _{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\rangle \\ &\le \vert Y_{n} \vert ^{2}+ \bigl\vert g \bigl( Y_{n}^{*} \bigr) \Delta W_{n} \bigr\vert ^{2} + \bigl\vert h \bigl( Y_{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\vert ^{2}+2L_{\lambda } \Delta t \bigl(1+ \bigl\vert Y_{n}^{*} \bigr\vert ^{2} \bigr) \\ &\quad{} +2 \biggl(1-\frac{1}{\theta } \biggr) \bigl\langle Y_{n}, g \bigl( Y_{n}^{*} \bigr) \Delta W_{n} \bigr\rangle \\ &\quad{} +2 \biggl(1-\frac{1}{\theta}\biggr) \bigl\langle Y_{n}, h \bigl( Y_{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\rangle \\ &\quad{} +\frac{2}{\theta } \bigl\langle Y_{n}^{*}, g \bigl( Y_{n}^{*} \bigr) \Delta W_{n} \bigr\rangle + \frac{2}{\theta } \bigl\langle Y_{n}^{*},h \bigl( Y_{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\rangle \\ &\quad{} +2 \bigl\langle g \bigl( Y_{n}^{*} \bigr) \Delta W_{n},h \bigl( Y _{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\rangle . \end{aligned}$$
(4.9)
Hence from (4.7) we have
$$\begin{aligned} \vert Y_{n + 1} \vert ^{2} \le & \vert Y_{n} \vert ^{2}+ 2\beta L_{\lambda }\Delta t \vert Y_{n} \vert ^{2} + 2(\alpha +1) L_{\lambda } \Delta t \\ &{}+ \bigl\vert g \bigl( Y_{n}^{*} \bigr) \Delta W_{n} \bigr\vert ^{2} + \bigl\vert h \bigl( Y _{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\vert ^{2} \\ &{}+2 \biggl(1-\frac{1}{\theta } \biggr) \bigl\langle Y_{n}, g \bigl( Y_{n}^{*} \bigr) \Delta W_{n} \bigr\rangle \\ &{}+2 \biggl(1-\frac{1}{\theta } \biggr) \bigl\langle Y_{n}, h \bigl( Y_{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\rangle \\ &{}+\frac{2}{\theta } \bigl\langle Y_{n}^{*}, g \bigl( Y_{n}^{*} \bigr) \Delta W_{n} \bigr\rangle + \frac{2}{\theta } \bigl\langle Y_{n}^{*},h \bigl( Y_{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\rangle \\ &{}+2 \bigl\langle g \bigl( Y_{n}^{*} \bigr) \Delta W_{n},h \bigl( Y _{n}^{*} \bigr) \Delta \tilde{N}_{n} \bigr\rangle . \end{aligned}$$
(4.10)
Applying (4.10) recursively, we get
$$\begin{aligned} \vert Y_{n } \vert ^{2} &\le \vert Y_{0} \vert ^{2}+ 2\beta L_{\lambda } \Delta t\sum _{j=0}^{n-1} \vert Y_{j} \vert ^{2} + 2( \alpha +1) L_{\lambda }T \\ &\quad{} + \sum_{j=0}^{n-1} \bigl\vert g \bigl( Y_{j}^{*} \bigr) \Delta W _{j} \bigr\vert ^{2} +\sum_{j=0}^{n-1} \bigl\vert h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N}_{j} \bigr\vert ^{2} \\ &\quad{} +2 \biggl(1-\frac{1}{\theta } \biggr)\sum_{j=0}^{n-1} \bigl\langle Y_{ j}, g \bigl( Y_{j}^{*} \bigr) \Delta W_{ j} \bigr\rangle \\ &\quad{} +2 \biggl(1-\frac{1}{\theta } \biggr)\sum_{j=0}^{n-1} \bigl\langle Y_{ j}, h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N} _{j} \bigr\rangle \\ &\quad{} +\frac{2}{\theta }\sum_{j=0}^{n-1} \bigl\langle Y_{j} ^{*}, g \bigl( Y_{j}^{*} \bigr) \Delta W_{j} \bigr\rangle + \frac{2}{\theta }\sum _{j=0}^{n-1} \bigl\langle Y_{ j}^{*},h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N}_{j} \bigr\rangle \\ &\quad{} +2\sum_{j=0}^{n-1} \bigl\langle g \bigl( Y_{j}^{*} \bigr) \Delta W_{j},h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N}_{j} \bigr\rangle . \end{aligned}$$
(4.11)
Raising both sides to the power p, we can obtain
$$\begin{aligned} \vert Y_{n } \vert ^{2p} &\le 10^{p-1} \Biggl\{ \vert Y_{0} \vert ^{2p}+ n ^{p-1}(2\beta L_{\lambda }\Delta t)^{p}\sum_{j=0}^{n-1} \vert Y_{j} \vert ^{2p} + \bigl(2(\alpha +1) L_{\lambda }T \bigr)^{p} \\ &\quad{} + n^{p-1}\sum_{j=0}^{n-1} \bigl\vert g \bigl( Y_{j}^{*} \bigr) \Delta W_{j} \bigr\vert ^{2p} +n^{p-1}\sum_{j=0}^{n-1} \bigl\vert h \bigl( Y_{j} ^{*} \bigr) \Delta \tilde{N}_{j} \bigr\vert ^{2p} \\ &\quad{} +2^{p} \biggl(\frac{1}{\theta }-1 \biggr)^{p} \Biggl\vert \sum_{j=0}^{n-1} \bigl\langle Y_{j}, g \bigl( Y_{j}^{*} \bigr) \Delta W_{j} \bigr\rangle \Biggr\vert ^{p} \\ &\quad{}+2^{p} \biggl(\frac{1}{\theta }-1 \biggr)^{p} \Biggl\vert \sum_{j=0}^{n-1} \bigl\langle Y_{j}, h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N}_{j} \bigr\rangle \Biggr\vert ^{p} \\ &\quad{}+2^{p} \biggl(\frac{2}{\theta } \biggr)^{p} \Biggl\vert \sum_{j=0}^{n-1} \bigl\langle Y_{j}^{*}, g \bigl( Y_{j}^{*} \bigr) \Delta W_{j} \bigr\rangle \Biggr\vert ^{p} \\ &\quad{}+2^{p} \biggl(\frac{2}{\theta } \biggr)^{p} \Biggl\vert \sum_{j=0}^{n-1} \bigl\langle Y_{j}^{*}, h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N}_{j} \bigr\rangle \Biggr\vert ^{p} \\ &\quad{}+ 2^{p} \Biggl\vert \sum_{j=0}^{n-1} \bigl\langle g \bigl( Y_{j}^{*} \bigr) \Delta W_{j},h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N}_{j} \bigr\rangle \Biggr\vert ^{p} \Biggr\} . \end{aligned}$$
(4.12)
Notice that
$$ \mathbb{E}\sup_{0 \leq n \leq M} \Biggl[ \sum _{j=0}^{n-1} \vert Y_{j} \vert ^{2p} \Biggr] =\sum_{j=0}^{M-1} \mathbb{E} \vert Y_{j} \vert ^{2p}. $$
(4.13)
Thus, for \(0\leq M\leq N_{T}\), we obtain
$$\begin{aligned} \mathbb{E}\sup_{0 \leq n \leq M} \vert Y_{n } \vert ^{2p} &\le 10^{p-1} \Biggl\{ \vert Y_{0} \vert ^{2p}+ M^{p-1}(2\beta L _{\lambda }\Delta t)^{p}\sum_{j=0}^{M-1} \mathbb{E} \vert Y_{j} \vert ^{2p} + \bigl(2( \alpha +1) L_{\lambda }T \bigr)^{p} \\ &\quad{}+ M^{p-1}\mathbb{E}\sup_{0 \leq n \leq M} \sum _{j=0}^{n-1} \bigl\vert g \bigl( Y_{j}^{*} \bigr) \Delta W_{j} \bigr\vert ^{2p} \\ &\quad{}+M^{p-1}\mathbb{E}\sup_{0 \leq n \leq M}\sum _{j=0}^{n-1} \bigl\vert h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N} _{j} \bigr\vert ^{2p} \\ &\quad{}+2^{p} \biggl(\frac{1}{\theta }-1 \biggr)^{p}\mathbb{E} \sup_{0 \leq n \leq M} \Biggl\vert \sum_{j=0}^{n-1} \bigl\langle Y_{j}, g \bigl( Y_{j}^{*} \bigr) \Delta W_{j} \bigr\rangle \Biggr\vert ^{p} \\ &\quad{}+2^{p} \biggl(\frac{1}{\theta }-1 \biggr)^{p}\mathbb{E} \sup_{0 \leq n \leq M} \Biggl\vert \sum_{j=0}^{n-1} \bigl\langle Y_{j}, h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N}_{j} \bigr\rangle \Biggr\vert ^{p} \\ &\quad{}+2^{p} \biggl(\frac{2}{\theta } \biggr)^{p}\mathbb{E}\sup_{0 \leq n \leq M} \Biggl\vert \sum_{j=0}^{n-1} \bigl\langle Y_{j}^{*}, g \bigl( Y_{j}^{*} \bigr) \Delta W_{j} \bigr\rangle \Biggr\vert ^{p} \\ &\quad{}+2^{p} \biggl(\frac{2}{\theta } \biggr)^{p}\mathbb{E}\sup_{0 \leq n \leq M} \Biggl\vert \sum_{j=0}^{n-1} \bigl\langle Y_{j}^{*}, h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N} _{j} \bigr\rangle \Biggr\vert ^{p} \\ &\quad{}+ 2^{p}\mathbb{E}\sup_{0 \leq n \leq M} \Biggl\vert \sum _{j=0}^{n-1} \bigl\langle g \bigl( Y_{j}^{*} \bigr) \Delta W_{j},h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N}_{j} \bigr\rangle \Biggr\vert ^{p} \Biggr\} . \end{aligned}$$
(4.14)
To bound the fourth term on the right-hand side of (4.14), note that \(Y_{n}^{*}\in \mathcal{F}_{t_{n}}\), that \(\Delta W_{n}\) is independent of \(\mathcal{F}_{t_{n}}\), and that \(\mathbb{E} \vert \Delta W_{j} \vert ^{2p}\le c _{p} \Delta t^{p}\) for a constant \(c_{p}\). In what follows, \(C=C(p,T,L_{\lambda },\theta )\) denotes a positive constant that may change from line to line. We have
$$\begin{aligned} M^{p-1}\mathbb{E}\sup_{0 \leq n \leq M} \sum _{j=0}^{n-1} \bigl\vert g \bigl( Y_{j}^{*} \bigr) \Delta W_{j} \bigr\vert ^{2p} &=M ^{p-1} \mathbb{E}\sum_{j=0}^{M-1} \bigl\vert g \bigl( Y_{j}^{*} \bigr) \Delta W_{j} \bigr\vert ^{2p} \\ &\leq M^{p-1}\sum_{j=0}^{M-1} \mathbb{E} \bigl\vert g \bigl( Y_{j}^{*} \bigr) \bigr\vert ^{2p}\mathbb{E} \vert \Delta W_{j} \vert ^{2p} \\ &\leq M^{p-1}c_{p}\Delta t^{p}L_{\lambda }^{p} \sum_{j=0}^{M-1} \mathbb{E} \bigl[1+ \bigl\vert Y_{j}^{*} \bigr\vert ^{2} \bigr]^{p} \\ &\leq T^{p-1}c_{p}\Delta tL_{\lambda }^{p}\sum _{j=0}^{M-1} \mathbb{E} \bigl[1+\beta \vert Y_{j} \vert ^{2}+\alpha \bigr]^{p} \\ &\leq 2^{p-1}T^{p-1}c_{p}\Delta t L_{\lambda }^{p}\sum_{j=0} ^{M-1} \mathbb{E} \bigl[\beta^{p}+\beta^{p} \vert Y_{j} \vert ^{2p} \bigr] \\ &\leq (2T)^{p-1}c_{p}\Delta t L_{\lambda }^{p} \beta^{p} \Biggl( M+ \sum_{j=0}^{M-1} \mathbb{E} \vert Y_{j} \vert ^{2p} \Biggr) \\ &\leq C+C\Delta t\sum_{j=0}^{M-1}\mathbb{E} \vert Y_{j} \vert ^{2p}. \end{aligned}$$
(4.15)
Using a similar approach for the fifth term, and noticing that \(\mathbb{E} \vert \Delta \widetilde{N}_{j} \vert ^{2p}\le c_{p} \Delta t^{p}\), we have
$$\begin{aligned} M^{p-1}\mathbb{E}\sum_{j=0}^{M-1} \bigl\vert h \bigl( Y_{j}^{*} \bigr) \Delta \widetilde{N}_{j} \bigr\vert ^{2p} \leq C+C\Delta t\sum _{j=0} ^{M-1}\mathbb{E} \vert Y_{j} \vert ^{2p}. \end{aligned}$$
(4.16)
Now we bound the sixth term in (4.14) using the Burkholder-Davis-Gundy inequality:
$$\begin{aligned} \mathbb{E}\sup_{0 \leq n \leq M} \Biggl\vert \sum _{j=0}^{n-1} \bigl\langle Y_{j}, g \bigl( Y_{j}^{*} \bigr) \Delta W_{j} \bigr\rangle \Biggr\vert ^{p} &\leq C\mathbb{E} \Biggl[ \sum _{j=0} ^{M-1} \vert Y_{j} \vert ^{2} \bigl\vert g \bigl( Y_{j}^{*} \bigr) \bigr\vert ^{2} \Delta t \Biggr] ^{p/2} \\ &\leq C\Delta t^{p/2}M^{p/2-1}L_{\lambda }^{p} \mathbb{E}\sum_{j=0}^{M-1} \vert Y_{j} \vert ^{p} \bigl(1+ \bigl\vert Y_{j}^{*} \bigr\vert ^{2} \bigr)^{p/2} \\ &\leq CT^{p/2-1}\Delta t\mathbb{E}\sum_{j=0}^{M-1} \frac{ \vert Y _{j} \vert ^{2p}+(1+ \vert Y_{j}^{*} \vert ^{2})^{p}}{2} \\ &\leq C\Delta t\mathbb{E}\sum_{j=0}^{M-1} \bigl[ \vert Y_{j} \vert ^{2p}+2^{p-1}+2^{p-1} \bigl\vert Y _{j}^{*} \bigr\vert ^{2p} \bigr] \\ &\leq C\Delta t\mathbb{E}\sum_{j=0}^{M-1} \bigl[ \vert Y_{j} \vert ^{2p}+2^{p-1} \bigl( \beta \vert Y_{j} \vert ^{2}+\alpha \bigr)^{p}+2^{p-1} \bigr] \\ &\leq C+C\Delta t\sum_{j=0}^{M-1}\mathbb{E} \vert Y_{j} \vert ^{2p}. \end{aligned}$$
(4.17)
Similar to the sixth term, we can bound the seventh term
$$\begin{aligned} \mathbb{E}\sup_{0 \leq n \leq M} \Biggl\vert \sum _{j=0}^{n-1} \bigl\langle Y_{j}, h \bigl( Y_{j}^{*} \bigr) \Delta \widetilde{N}_{j} \bigr\rangle \Biggr\vert ^{p} &\leq C+C\Delta t\sum _{j=0}^{M-1}\mathbb{E} \vert Y_{j} \vert ^{2p}. \end{aligned}$$
(4.18)
Also similar to the sixth term, we can bound the eighth term in (4.14),
$$\begin{aligned} \mathbb{E}\sup_{0 \leq n \leq M} \Biggl\vert \sum _{j=0}^{n-1} \bigl\langle Y_{j}^{*}, g \bigl( Y_{j}^{*} \bigr) \Delta W_{j} \bigr\rangle \Biggr\vert ^{p} &\leq C\mathbb{E} \Biggl[ \sum _{j=0}^{M-1} \bigl\vert Y_{j}^{*} \bigr\vert ^{2} \bigl\vert g \bigl( Y_{j}^{*} \bigr) \bigr\vert ^{2} \Delta t \Biggr] ^{p/2} \\ &\leq C\Delta t^{p/2}M^{p/2-1}L_{\lambda }^{p} \mathbb{E}\sum_{j=0}^{M-1} \bigl\vert Y_{j}^{*} \bigr\vert ^{p} \bigl(1+ \bigl\vert Y_{j}^{*} \bigr\vert ^{2} \bigr)^{p/2} \\ &\leq CT^{p/2-1}\Delta t\mathbb{E}\sum_{j=0}^{M-1} \frac{ \vert Y _{j}^{*} \vert ^{2p}+(1+ \vert Y_{j}^{*} \vert ^{2})^{p}}{2} \\ &\leq C\Delta t\mathbb{E}\sum_{j=0}^{M-1} \bigl[ \bigl\vert Y_{j}^{*} \bigr\vert ^{2p}+2^{p-1}+2^{p-1} \bigl\vert Y _{j}^{*} \bigr\vert ^{2p} \bigr] \\ &\leq C\Delta t\mathbb{E}\sum_{j=0}^{M-1} \bigl[ \bigl(1+2^{p-1} \bigr) \bigl(\beta \vert Y _{j} \vert ^{2}+\alpha \bigr)^{p}+2^{p-1} \bigr] \\ &\leq C+C\Delta t\sum_{j=0}^{M-1}\mathbb{E} \vert Y_{j} \vert ^{2p}, \end{aligned}$$
(4.19)
and the ninth term,
$$\begin{aligned} \mathbb{E}\sup_{0 \leq n \leq M} \Biggl\vert \sum _{j=0}^{n-1} \bigl\langle Y_{j}^{*}, h \bigl( Y_{j}^{*} \bigr) \Delta \widetilde{N}_{j} \bigr\rangle \Biggr\vert ^{p} &\leq C+C\Delta t \sum _{j=0}^{M-1}\mathbb{E} \vert Y_{j} \vert ^{2p}. \end{aligned}$$
(4.20)
Finally, we bound the tenth term in (4.14); by (4.15) and (4.16), we have
$$\begin{aligned} &\mathbb{E}\sup_{0 \leq n \leq M} \Biggl\vert \sum _{j=0}^{n-1} \bigl\langle g \bigl( Y_{j}^{*} \bigr) \Delta W_{j},h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N}_{j} \bigr\rangle \Biggr\vert ^{p} \\ &\quad \leq 2^{-p}\mathbb{E}\sup_{0 \leq n \leq M} \Biggl\vert \sum _{j=0}^{n-1} \bigl( \bigl\vert g \bigl( Y_{j}^{*} \bigr) \Delta W_{j} \bigr\vert ^{2}+ \bigl\vert h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N} _{j} \bigr\vert ^{2} \bigr) \Biggr\vert ^{p} \\ &\quad \leq 2^{-p}M^{p-1}\mathbb{E}\sup_{0 \leq n \leq M} \sum_{j=0}^{n-1} \bigl( \bigl\vert g \bigl( Y_{j}^{*} \bigr) \Delta W_{j} \bigr\vert ^{2}+ \bigl\vert h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N} _{j} \bigr\vert ^{2} \bigr) ^{p} \\ &\quad \leq 2^{-1}M^{p-1}\mathbb{E}\sup_{0 \leq n \leq M} \sum_{j=0}^{n-1} \bigl( \bigl\vert g \bigl( Y_{j}^{*} \bigr) \Delta W_{j} \bigr\vert ^{2p}+ \bigl\vert h \bigl( Y_{j}^{*} \bigr) \Delta \tilde{N} _{j} \bigr\vert ^{2p} \bigr) \\ &\quad \leq C+C\Delta t\sum_{j=0}^{M-1}\mathbb{E} \vert Y_{j} \vert ^{2p}. \end{aligned}$$
(4.21)
Substituting (4.15)-(4.21) into (4.14), we obtain
$$\begin{aligned} \mathbb{E}\sup_{0 \leq n \leq M} \vert Y_{n } \vert ^{2p} & \leq C+C\Delta t\sum_{j=0}^{M-1} \mathbb{E} \vert Y_{j} \vert ^{2p} \\ &\leq C+C\Delta t\sum_{j=0}^{M-1}\mathbb{E} \sup_{0 \leq n \leq j} \vert Y_{n} \vert ^{2p}. \end{aligned}$$
(4.22)
Using the discrete-type Gronwall inequality and noting that \(M\Delta t \leq T\), we obtain
$$\begin{aligned} \mathbb{E}\sup_{0 \leq n \leq M} \vert Y_{n } \vert ^{2p} \leq C e^{C \Delta t M}\leq C e^{C T}. \end{aligned}$$
(4.23)
By (4.7), we find that \(\mathbb{E} \sup_{0 \leq n \leq M} \vert Y_{n}^{*} \vert ^{2p}\) is also bounded. □
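The discrete Gronwall step used in (4.22)-(4.23) can be made concrete with a toy recursion (the constants below are arbitrary illustrative choices): if \(u_{M}\le C+C\Delta t\sum_{j<M}u_{j}\) for every M, then \(u_{M}\le Ce^{C\Delta t M}\).

```python
import numpy as np

# Saturate the Gronwall hypothesis with equality:
# u_n = C + C*dt*sum_{j<n} u_j, which gives u_n = C*(1 + C*dt)**n,
# and this indeed stays below the Gronwall bound C*exp(C*dt*n).
C, T, M = 2.0, 1.0, 1000
dt = T / M
u = [C]
for _ in range(M):
    u.append(C + C * dt * sum(u))       # sum(u) = u_0 + ... + u_{n-1}

for n, un in enumerate(u):
    assert un <= C * np.exp(C * dt * n) * (1 + 1e-9)
print("final value:", u[-1], "Gronwall bound:", C * np.exp(C * T))
```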

Lemma 4.2

Let Assumption  2.1 hold, and let \(0 < \theta \le 1\), \(p\ge 1\) and \(0 < \Delta t < \min \{ 1,\frac{1}{ 2\theta L_{\lambda }} \} \). Then the exact solution of (3.1) and the continuous-time extension (4.3) satisfy
$$ \mathbb{E} \Bigl( \sup_{ 0 \le t \le T} \bigl\vert X(t) \bigr\vert ^{2p} \Bigr) \vee \mathbb{E} \Bigl( \sup_{ 0 \le t \le T} \bigl\vert \overline{Y}(t) \bigr\vert ^{2p} \Bigr) < A_{1}, $$
where \(A_{1}\) is a positive constant independent of \(N_{T}\).

Proof

From Lemma 1 in [16], we can see that \(\mathbb{E} ( \sup_{ 0 \le t \le T} \vert X(t) \vert ^{2p} ) \) is bounded. Now we prove that \(\mathbb{E} ( \sup_{ 0 \le t \le T} \vert \overline{Y}(t) \vert ^{2p} ) \) is bounded.

From (4.2), we obtain
$$\begin{aligned} \overline{Y} ( t ) =& Y_{n} + f_{\lambda } \bigl( Y_{n} ^{*} \bigr) ( t - t_{n} ) + g \bigl( Y_{n}^{*} \bigr) \bigl( W(t) - W(t_{n}) \bigr) \\ &{}+ h \bigl( Y_{n}^{*} \bigr) \bigl( \tilde{N}(t) - \tilde{N}( t_{n}) \bigr) , \quad t \in [ t_{n},t_{n + 1} ) . \end{aligned}$$
(4.24)
For \(s\in [0, \Delta t)\), we have
$$\begin{aligned} \overline{Y} ( t_{n}+s ) = Y_{n} + f_{\lambda } \bigl( Y_{n}^{*} \bigr) s + g \bigl( Y_{n}^{*} \bigr) \Delta W_{n}(s) + h \bigl( Y_{n}^{*} \bigr) \Delta \widetilde{ N}_{n}(s), \end{aligned}$$
(4.25)
where
$$\begin{aligned}& \Delta W_{n}(s)=W(t_{n}+s)-W(t_{n}), \\& \Delta \widetilde{N}_{n}(s)=\widetilde{N}(t_{n}+s)- \widetilde{N}(t _{n}). \end{aligned}$$
Recalling that \(Y_{n}^{*} = Y_{n}+\theta \Delta t f_{\lambda }(Y _{n}^{*})\) and setting \(a=s/\Delta t\), we can rewrite equation (4.25) in the following form:
$$\begin{aligned} \overline{Y} ( t_{n}+s ) = \frac{a}{\theta } Y_{n}^{*} + \biggl(1-\frac{a}{\theta } \biggr)Y_{n} + g \bigl( Y_{n}^{*} \bigr) \Delta W_{n}(s) + h \bigl( Y_{n}^{*} \bigr) \Delta \widetilde{ N}_{n}(s). \end{aligned}$$
(4.26)
By (4.7), we have
$$\begin{aligned} \bigl\vert \overline{Y} ( t_{n}+s ) \bigr\vert ^{2}\le C \bigl[1+ \vert Y_{n} \vert ^{2} + \bigl\vert g \bigl( Y_{n}^{*} \bigr) \Delta W_{n}(s) \bigr\vert ^{2} + \bigl\vert h \bigl( Y_{n}^{*} \bigr) \Delta \widetilde{ N}_{n}(s) \bigr\vert ^{2} \bigr]. \end{aligned}$$
(4.27)
Thus
$$\begin{aligned}& \sup_{0 \le t \le T} \bigl\vert \overline{Y} ( t ) \bigr\vert ^{2p} \\& \quad \le \sup_{0 \le n\Delta t \le T} \sup_{0 \le s \le \Delta t} \bigl\vert \overline{Y} ( t_{n}+s ) \bigr\vert ^{2p} \\& \quad \le \sup_{0 \le n\Delta t \le T} \sup_{0 \le s \le \Delta t} C \bigl[1+ \vert Y_{n} \vert ^{2p} + \bigl\vert g \bigl( Y_{n}^{*} \bigr) \Delta W_{n}(s) \bigr\vert ^{2p} + \bigl\vert h \bigl( Y_{n}^{*} \bigr) \Delta \widetilde{ N}_{n}(s) \bigr\vert ^{2p} \bigr] \\& \quad \le C \Biggl[1+\sup_{0 \le n\Delta t \le T} \vert Y_{n} \vert ^{2p} + \sup_{0 \le s \le \Delta t}\sum_{j=0}^{N_{T}} \bigl\vert g \bigl( Y_{j}^{*} \bigr) \Delta W_{j}(s) \bigr\vert ^{2p} \\& \quad\quad{} + \sup_{0 \le s \le \Delta t}\sum_{j=0}^{N_{T}} \bigl\vert h \bigl( Y_{j}^{*} \bigr) \Delta \widetilde{ N}_{j}(s) \bigr\vert ^{2p} \Biggr]. \end{aligned}$$
Now using Doob’s martingale inequality, (3.3) and Lemma 4.1, we have
$$\begin{aligned} \mathbb{E}\sup_{0 \le s \le \Delta t} \bigl\vert g \bigl( Y_{j}^{*} \bigr) \Delta W_{j}(s) \bigr\vert ^{2p} &\le C \mathbb{E} \bigl\vert g \bigl( Y_{j}^{*} \bigr) \Delta W_{j}(\Delta t) \bigr\vert ^{2p} \\ &\le C\mathbb{E} \bigl\vert g \bigl( Y_{j}^{*} \bigr) \bigr\vert ^{2p} \mathbb{E} \bigl\vert \Delta W_{j}( \Delta t) \bigr\vert ^{2p} \\ &\le C \bigl(1+\mathbb{E} \bigl\vert Y_{j}^{*} \bigr\vert ^{2p} \bigr) \Delta t^{p} \\ &\le C \Delta t. \end{aligned}$$
(4.28)
Since \(\Delta \widetilde{N}_{j}(s)\) is also a martingale, a similar argument gives
$$\begin{aligned} \mathbb{E}\sup_{0 \le s \le \Delta t} \bigl\vert h \bigl( Y_{j}^{*} \bigr) \Delta \widetilde{N}_{j}(s) \bigr\vert ^{2p} \le C \Delta t. \end{aligned}$$
(4.29)
Then, by (4.28), (4.29) and Lemma 4.1, together with \(N_{T}\Delta t\le T\), we have
$$ \mathbb{E} \Bigl( \sup_{ 0 \le t \le T} \bigl\vert \overline{Y}(t) \bigr\vert ^{2p} \Bigr) \le A_{1}. $$
This gives the desired result. □

Now we use the above lemmas to prove a strong convergence result.

Remark 4.1

Since \(f(x)\in C^{1}\), i.e. \(f'(x)\) is continuous, \(\vert f'(x) \vert \) is locally bounded. Then, by the mean value inequality, for each \(R>0\) there exists a positive constant \(L_{R}\) such that
$$ \bigl\vert f ( x ) -f(y) \bigr\vert \le \sup_{ \vert z \vert \le R} \bigl\vert f'(z) \bigr\vert \, \vert x-y \vert \le L_{R} \vert x-y \vert , $$
(4.30)
for all \(x, y\in \mathbb{R}^{n}\) with \(\vert x \vert \vee \vert y \vert \le R\) (any intermediate point \(z\) lies on the segment joining \(x\) and \(y\), so \(\vert z \vert \le R\)).

We note that the function \(f_{\lambda }\) in (3.1) automatically inherits this condition, with a possibly larger \(L_{R}\).

Theorem 4.3

Under Assumption 2.1, for \(0 < \theta \le 1\) and \(0 < \Delta t < \min \{ 1,\frac{1}{2\theta L_{\lambda }} \} \), the continuous-time approximate solution \(\overline{Y}(t)\) defined by (4.3) converges to the true solution \(X(t)\) of (3.1) in the mean-square sense, i.e.
$$ \lim_{\Delta t\rightarrow 0}\mathbb{E}\sup_{0 \le t \le T} \bigl\vert \overline{Y} ( t ) - X ( t ) \bigr\vert ^{2} =0. $$
(4.31)

Proof

First, we define
$$ \tau_{d} : = \inf \bigl\{ t \ge 0 : \bigl\vert X(t) \bigr\vert \ge d \bigr\} , \quad\quad \sigma _{d} : = \inf \bigl\{ t \ge 0 : \bigl\vert \overline{Y}(t) \bigr\vert \ge d \bigr\} , \quad\quad \upsilon_{d} : = \tau_{d} \wedge \sigma_{d}, $$
and let
$$ e(t)=\overline{Y}(t)-X(t). $$
Recall the Young inequality: for \(\frac{1}{p} + \frac{1}{q} = 1\) (\(p,q > 1\)), we have
$$ ab = a\delta^{\frac{1}{p}} \frac{b}{\delta^{\frac{1}{p}} } \le \frac{ ( a\delta^{\frac{1}{p}} ) ^{p} }{p} + \frac{b^{q} }{ q\delta^{\frac{q}{p}} } = \frac{a^{p} \delta }{p} + \frac{b^{q} }{ q\delta^{\frac{q}{p}} }, \quad \forall a,b,\delta > 0. $$
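As a quick sanity check, this weighted form of the Young inequality can be verified numerically; the helper name `young_gap` below is ours, purely for illustration.

```python
import random

# Numerical spot check of the Young-type inequality above:
#   a*b <= a**p * delta / p + b**q / (q * delta**(q/p)),  1/p + 1/q = 1.
random.seed(0)

def young_gap(a, b, p, delta):
    q = p / (p - 1.0)                                    # conjugate exponent
    rhs = a**p * delta / p + b**q / (q * delta**(q / p))
    return rhs - a * b                                   # should be >= 0

gaps = [young_gap(random.uniform(0.01, 10.0), random.uniform(0.01, 10.0),
                  random.uniform(1.1, 5.0), random.uniform(0.01, 10.0))
        for _ in range(10_000)]
print(min(gaps))
```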
Thus, for any \(\delta >0\), we have
$$\begin{aligned}& \mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert e ( t ) \bigr\vert ^{2} \Bigr] \\& \quad = \mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert e ( t ) \bigr\vert ^{2} I_{ \{ \tau_{d} > T \text{ and } \sigma_{d} > T \} } \Bigr] + \mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert e ( t ) \bigr\vert ^{2} I_{ \{ \tau_{d} \le T \text{ or } \sigma_{d} \le T \} } \Bigr] \\& \quad = \mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert e ( t ) \bigr\vert ^{2} I_{ \{ \upsilon_{d} > T \} } \Bigr] + \mathbb{E} \Bigl[ \sup _{0 \le t \le T} \bigl\vert e ( t ) \bigr\vert ^{2} I_{ \{ \tau_{d} \le T \text{ or }\sigma_{d} \le T \} } \Bigr] \\& \quad \le \mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert e ( t \wedge \upsilon_{d} ) \bigr\vert ^{2} \Bigr] + \mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert e ( t ) \bigr\vert ^{2} I_{ \{ \tau_{d} \le T \text{ or } \sigma_{d} \le T \} } \Bigr] \\& \quad \le \mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert e ( t \wedge \upsilon_{d} ) \bigr\vert ^{2} \Bigr] + \frac{ \delta }{p} \mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert e ( t ) \bigr\vert ^{2p} \Bigr] \\& \quad\quad{} + \frac{1 - \frac{1}{p}}{\delta^{\frac{1}{p- 1}} }\mathbb{P} \{ \tau_{d}\le T \mbox{ or } \sigma_{d} \le T \} . \end{aligned}$$
(4.32)
Since \(\vert X(\tau_{d}) \vert \ge d\) on \(\{ \tau_{d} \le T \}\), Lemma 4.2 gives
$$ \mathbb{P} \{ \tau_{d} \le T \} \le \mathbb{E} \biggl[ I_{ \{ \tau_{d} \le T \} } \frac{ \vert X(\tau_{d}) \vert ^{2p} }{d^{2p} } \biggr] \le \frac{1}{d^{2p} } \mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert X(t) \bigr\vert ^{2p} \Bigr] \le \frac{A_{1}}{d^{2p} }. $$
A similar bound holds for \(\sigma_{d}\):
$$ \mathbb{P} \{ \sigma_{d} \le T \} \le \mathbb{E} \biggl[ I _{ \{ \sigma_{d} \le T \} } \frac{ \vert \overline{Y}( \sigma_{d} ) \vert ^{2p} }{d^{2p} } \biggr] \le \frac{1}{ d^{2p} } \mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert \overline{Y}(t) \bigr\vert ^{2p} \Bigr] \le \frac{A_{1}}{d ^{2p} }, $$
so that
$$ \mathbb{P} \{ \tau_{d} \le T \mbox{ or } \sigma_{d} \le T \} \le \mathbb{P} \{ \tau_{d} \le T \} + \mathbb{P} \{ \sigma_{d} \le T \} \le \frac{2A_{1}}{d^{2p} }. $$
Using the bounds of \(X(t)\) and \(\overline{Y}(t)\), we have
$$ \mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert e ( t ) \bigr\vert ^{2p} \Bigr] \le 2^{2p-1}\mathbb{E} \Bigl[ \sup _{0 \le t \le T} \bigl( \bigl\vert X(t) \bigr\vert ^{2p} + \bigl\vert \overline{Y}(t) \bigr\vert ^{2p} \bigr) \Bigr] \le 2^{2p}A_{1}. $$
Substituting the above inequality into (4.32) leads to
$$\begin{aligned} &\mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert e ( t ) \bigr\vert ^{2} \Bigr] \\ &\quad \le \mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert \overline{Y} ( t \wedge \upsilon_{d} ) -X ( t \wedge \upsilon_{d} ) \bigr\vert ^{2} \Bigr] +\frac{2^{2p} \delta A_{1}}{p} + \frac{2A_{1}(1 - \frac{1}{p})}{d^{2p} \delta^{\frac{1}{p - 1}}}. \end{aligned}$$
(4.33)
Now we bound the first term on the right-hand side of (4.33). By the definitions of \(X(t)\) and \(\overline{Y}(t)\), together with the fact that \(0<\theta \le 1\), we have
$$\begin{aligned} & \bigl\vert \overline{Y} ( t \wedge \upsilon_{d} ) - X ( t \wedge \upsilon_{d} ) \bigr\vert ^{2} \\ &\quad = \biggl\vert \int_{0}^{t \wedge \upsilon_{d} } \bigl[ f_{\lambda } \bigl(Y(s) \bigr)-f_{ \lambda } \bigl(X(s) \bigr) \bigr]\,\mathrm{d}s + \int _{0}^{t \wedge \upsilon _{d} } \bigl[ g \bigl( Y(s) \bigr) - g \bigl( X ( s ) \bigr) \bigr] \,\mathrm{d}W ( s ) \\ & \quad\quad{} + \int_{0}^{t \wedge \upsilon_{d} } \bigl[ h \bigl( Y(s) \bigr) - h \bigl( X ( s ) \bigr) \bigr] \,\mathrm{d}\tilde{N} ( s ) \biggr\vert ^{2} \\ &\quad \leq 3 \biggl\vert \int_{0}^{t \wedge \upsilon_{d} } \bigl[f_{\lambda } \bigl(Y(s) \bigr)-f _{\lambda } \bigl(X(s) \bigr) \bigr]\,\mathrm{d}s \biggr\vert ^{2} + 3 \biggl\vert \int_{0}^{t \wedge \upsilon_{d} } \bigl[ g \bigl( Y(s) \bigr) - g \bigl( X ( s ) \bigr) \bigr] \,\mathrm{d}W ( s ) \biggr\vert ^{2} \\ &\quad\quad{} + 3 \biggl\vert \int_{0}^{t \wedge \upsilon_{d} } \bigl[ h \bigl( Y(s) \bigr) - h \bigl( X ( s ) \bigr) \bigr] \,\mathrm{d}\tilde{N} ( s ) \biggr\vert ^{2}. \end{aligned}$$
For any \(\tau \in [0,T]\), using the Cauchy-Schwarz inequality and the Doob martingale inequality, we obtain
$$\begin{aligned}& \mathbb{E} \Bigl[ \sup_{0 \le t \le \tau } \bigl\vert \overline{Y} ( t \wedge \upsilon_{d} ) -X ( t \wedge \upsilon_{d} ) \bigr\vert ^{2} \Bigr] \\& \quad \le 3T\mathbb{E} \int_{0}^{\tau \wedge \upsilon_{d} } \bigl\vert f_{ \lambda } \bigl(Y(s) \bigr)-f_{\lambda } \bigl(X(s) \bigr) \bigr\vert ^{2} \,\mathrm{d}s + 12\mathbb{E} \int_{0}^{\tau \wedge \upsilon_{d} } \bigl\vert g \bigl( Y ( s ) \bigr) - g \bigl( X ( s ) \bigr) \bigr\vert ^{2} \,\mathrm{d}s \\& \quad\quad{} + 12\lambda \mathbb{E} \int_{0}^{\tau \wedge \upsilon _{d} } \bigl\vert h \bigl( Y ( s ) \bigr) - h \bigl( X ( s ) \bigr) \bigr\vert ^{2} \,\mathrm{d}s. \end{aligned}$$
Applying the local Lipschitz condition (4.30) and Assumption 2.1, we get
$$\begin{aligned}& \mathbb{E} \Bigl[ \sup_{0 \le t \le \tau } \bigl\vert \overline{Y} ( t \wedge \upsilon_{d} ) -X ( t \wedge \upsilon_{d} ) \bigr\vert ^{2} \Bigr] \\& \quad \le (3TL_{R}+12L_{g}+12L_{h}\lambda ) \mathbb{E} \int_{0}^{\tau \wedge \upsilon_{d} } \bigl\vert Y(s)-X(s) \bigr\vert ^{2} \,\mathrm{d}s \\& \quad \le 2(3TL_{R}+12L_{g}+12L_{h}\lambda ) \biggl[ \mathbb{E} \int_{0}^{\tau \wedge \upsilon_{d} } \bigl\vert Y(s)-\overline{Y}(s) \bigr\vert ^{2} \,\mathrm{d}s \\& \quad\quad{} + \int_{0}^{\tau } \mathbb{E} \sup_{0\le r \le s} \bigl\vert \overline{Y}(r\wedge \upsilon_{d} )-X(r \wedge \upsilon_{d}) \bigr\vert ^{2} \,\mathrm{d}s \biggr]. \end{aligned}$$
(4.34)
To bound the first term inside the brackets of (4.34), we denote by \(n_{s}\) the integer for which \(s\in [t_{n_{s}}, t_{n_{s}+1})\) and note that
$$\begin{aligned} Y(s)-\overline{Y} ( s ) &= - f_{\lambda } \bigl( Y_{n_{s}} ^{*} \bigr) ( s - t_{n_{s}} ) - g \bigl( Y_{n_{s}} ^{*} \bigr) \bigl( W(s) - W(t_{n_{s}}) \bigr) \\ &\quad{} - h \bigl( Y_{n_{s}}^{*} \bigr) \bigl( \tilde{N}(s) - \tilde{N}( t_{n_{s}}) \bigr) , \end{aligned}$$
and hence that
$$\begin{aligned} \bigl\vert Y(s)-\overline{Y} ( s ) \bigr\vert ^{2} \le 3 \bigl[ \bigl\vert f _{\lambda } \bigl( Y_{n_{s}}^{*} \bigr) \Delta t \bigr\vert ^{2}+ \bigl\vert g \bigl( Y_{n_{s}}^{*} \bigr) \Delta W_{n_{s}} \bigr\vert ^{2}+ \bigl\vert h \bigl( Y_{n_{s}}^{*} \bigr) \Delta \widetilde{N}_{n_{s}} \bigr\vert ^{2} \bigr] . \end{aligned}$$
Note that
$$\begin{aligned} \bigl\vert f_{\lambda } \bigl( Y_{n_{s}}^{*} \bigr) \bigr\vert ^{2} &\le 2 \bigl[ \bigl\vert f_{\lambda } \bigl( Y_{n_{s}}^{*} \bigr) -f_{\lambda }(0) \bigr\vert ^{2}+ \bigl\vert f_{\lambda }(0) \bigr\vert ^{2} \bigr] \\ &\le 2L_{R}^{2} \bigl\vert Y_{n_{s}}^{*} \bigr\vert ^{2}+2 \bigl\vert f_{\lambda }(0) \bigr\vert ^{2}. \end{aligned}$$
Thus, using the second moments of the martingale increments and the moment bound on the numerical solution \(Y_{n}^{*}\) from Lemma 4.1, we obtain
$$ \mathbb{E} \int_{0}^{\tau \wedge \upsilon_{d} } \bigl\vert Y(s)- \overline{Y}(s) \bigr\vert ^{2} \,\mathrm{d}s\le C_{1} \Delta t, $$
for a constant \(C_{1}=C_{1}(R, T, A)\). Substituting this bound into (4.34) and applying the continuous Gronwall inequality gives
$$\begin{aligned} \mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert \overline{Y} ( t \wedge \upsilon_{d} ) -X ( t \wedge \upsilon _{d} ) \bigr\vert ^{2} \Bigr] \le C_{2} \Delta t e^{(3TL_{R}+12L _{g}+12L_{h}\lambda )T}, \end{aligned}$$
(4.35)
for a constant \(C_{2}=C_{2}(R, T, A)\).
Now combining (4.35) with (4.33), we have
$$\begin{aligned} \mathbb{E} \Bigl[ \sup_{0 \le t \le T} \bigl\vert e ( t ) \bigr\vert ^{2} \Bigr] \le C_{2}\Delta t e^{(3TL_{R}+12L _{g}+12L_{h}\lambda )T}+ \frac{2^{2p}\delta A_{1}}{p} + \frac{2A_{1}( 1 - \frac{1}{p})}{d^{2p} \delta^{\frac{1}{p - 1}}}. \end{aligned}$$
(4.36)
For any given \(\varepsilon >0\), we can first choose δ sufficiently small that
$$ \frac{2^{2p}\delta A_{1}}{p}\le \frac{\varepsilon }{3}, $$
and then choose d sufficiently large that
$$ \frac{2A_{1}(1 - \frac{1}{p})}{d^{2p}\delta^{\frac{1}{p - 1}}} < \frac{\varepsilon }{3}, $$
and finally choose Δt sufficiently small that
$$ C_{2}\Delta t e^{(3TL_{R}+12L_{g}+12L_{h}\lambda )T} < \frac{\varepsilon }{3}. $$
Thus \(\mathbb{E} [ \sup_{0 \le t \le T} \vert e ( t ) \vert ^{2} ] < \varepsilon \). The proof is completed. □

5 Convergence rate

To prove the convergence rate of the CSSθ method, we give the following assumption.

Assumption 5.1

There exist constants \(D\in \mathbb{R}^{+}\) and \(q\in \mathbb{Z}^{+}\) such that, for all \(a,b \in \mathbb{R}^{n}\),
$$ \bigl\vert f_{\lambda }(a)-f_{\lambda }(b) \bigr\vert ^{2}\le D \bigl(1+ \vert a \vert ^{q}+ \vert b \vert ^{q} \bigr) \vert a-b \vert ^{2}. $$
(5.1)
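As an illustration (not part of the paper's argument), the compensated drift of the test equation in Section 6, \(f_{\lambda }(x)=-3x-x^{3}\) (taking \(f_{\lambda }=f+\lambda h\) with \(\lambda =1\), as in (3.1)), satisfies (5.1): one can check by hand that \(D=27\) and \(q=4\) work, and the sketch below spot-checks these illustrative choices numerically.

```python
import random

# Spot check (not a proof) that f_lambda(x) = -3x - x**3 satisfies the
# polynomial growth condition (5.1). The constants D = 27 and q = 4 are
# our illustrative choices, not taken from the paper.
random.seed(0)

def f_lam(x):
    return -3.0 * x - x**3

def condition_holds(a, b, D=27.0, q=4):
    lhs = (f_lam(a) - f_lam(b))**2
    rhs = D * (1.0 + abs(a)**q + abs(b)**q) * (a - b)**2
    return lhs <= rhs + 1e-9            # small slack for rounding

checks = [condition_holds(random.uniform(-5, 5), random.uniform(-5, 5))
          for _ in range(10_000)]
print(all(checks))
```

The hand computation behind the choice \(D=27\): \(|f_{\lambda }(a)-f_{\lambda }(b)|=|a-b|\,(3+a^{2}+ab+b^{2})\le |a-b|\,(3+\tfrac{3}{2}a^{2}+\tfrac{3}{2}b^{2})\), and squaring gives at most \(27(1+a^{4}+b^{4})|a-b|^{2}\).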

First, we establish Lemma 5.1 under Assumptions 2.1 and 5.1.

Lemma 5.1

Under Assumptions 2.1 and 5.1, for \(0 < \theta \le 1\) and \(0 < \Delta t < \min \{ 1,\frac{1}{2\theta L_{\lambda }} \} \), and for any given integer \(r\geq 2\), there exists a positive constant \(E=E(r)\) such that
$$ \mathbb{E}\sup_{0 \le t \le T} \bigl\vert Y ( t ) - \overline{Y} ( t ) \bigr\vert ^{r} \le E\Delta t^{\frac{r}{2}}. $$
(5.2)

Proof

For any given \(t\in [n\Delta t, (n+1)\Delta t)\), we have \(Y(t)=Y_{n}\), and so, by the continuous-time approximate solution \(\overline{Y}(t)\) defined by (4.3),
$$\begin{aligned} Y(t)-\overline{Y} ( t ) &= - f_{\lambda } \bigl( Y_{n}^{*} \bigr) ( t - t_{n} ) - g \bigl( Y_{n}^{*} \bigr) \bigl( W(t) - W(t_{n}) \bigr) \\ &\quad{} - h \bigl( Y_{n}^{*} \bigr) \bigl( \tilde{N}(t) - \tilde{N}( t_{n}) \bigr) , \end{aligned}$$
and hence, since \(t-t_{n}\le \Delta t\),
$$\begin{aligned} \bigl\vert Y(t)-\overline{Y} ( t ) \bigr\vert ^{r} &\le 3^{r-1} \bigl[\Delta t ^{r} \bigl\vert f_{\lambda } \bigl( Y_{n}^{*} \bigr) \bigr\vert ^{r}+ \bigl\vert g \bigl( Y_{n} ^{*} \bigr) \bigr\vert ^{r} \bigl\vert W(t)-W(t_{n}) \bigr\vert ^{r} \\ &\quad{} + \bigl\vert h \bigl( Y_{n}^{*} \bigr) \bigr\vert ^{r} \bigl\vert \widetilde{N}(t)-\widetilde{N}(t _{n}) \bigr\vert ^{r} \bigr]. \end{aligned}$$
By Assumption 5.1 on \(f_{\lambda }\) and the linear growth conditions (2.6)-(2.7) for \(g\) and \(h\), we have
$$\begin{aligned}& \mathbb{E} \sup_{0 \le t \le T} \bigl\vert Y ( t ) - \overline{Y} ( t ) \bigr\vert ^{r} \\& \quad \le C_{3}(r) \Bigl[\Delta t^{r} \Bigl(1+ \sup _{0\le n\Delta t\le T}\mathbb{E} \bigl\vert Y_{n}^{*} \bigr\vert ^{u} \Bigr)+ \Bigl(1+ \sup_{0\le n\Delta t\le T} \mathbb{E} \bigl\vert Y_{n}^{*} \bigr\vert ^{u} \Bigr) \Delta t^{r/2} \\& \quad\quad{} + \Bigl(1+\sup_{0\le n\Delta t\le T}\mathbb{E} \bigl\vert Y_{n}^{*} \bigr\vert ^{u} \Bigr) \Delta t^{r/2} \Bigr], \end{aligned}$$
(5.3)
where \(C_{3}(r)\) and \(u\) are positive constants depending on \(r\) and on \(q\) from Assumption 5.1. By Lemma 4.1, we obtain
$$ \mathbb{E}\sup_{0 \le t \le T} \bigl\vert Y ( t ) - \overline{Y} ( t ) \bigr\vert ^{r} \le E\Delta t^{\frac{r}{2}}, $$
where \(E=E(r)\) is a positive constant depending on \(r\). □

Theorem 5.2

Under Assumptions 2.1 and 5.1, for \(0 < \theta \le 1\) and \(0 < \Delta t < \min \{ 1,\frac{1}{2\theta L_{\lambda }} \} \), the continuous-time approximate solution \(\overline{Y}(t)\) defined by (4.3) converges to the true solution \(X(t)\) of (3.1) with strong order one half, i.e.
$$ \mathbb{E}\sup_{0 \le t \le T} \bigl\vert \overline{Y} ( t ) - X ( t ) \bigr\vert ^{2} =O(\Delta t). $$
(5.4)

Proof

Let
$$ e(t)=\overline{Y}(t)-X(t). $$
From the identity
$$\begin{aligned} X ( t ) = &X_{0} + \int_{0}^{t} f_{\lambda } \bigl( X \bigl( s^{-} \bigr) \bigr) \,\mathrm{d}s + \int_{0}^{t} g \bigl( X \bigl( s^{-} \bigr) \bigr) \,\mathrm{d}W ( s ) \\ & {}+ \int_{0}^{t} h \bigl( X \bigl( s^{-} \bigr) \bigr) \,\mathrm{d}\tilde{N} ( s ) , \end{aligned}$$
(5.5)
and (4.3), we apply the Itô formula [17] to obtain
$$\begin{aligned} \bigl\vert e(t) \bigr\vert ^{2} & = 2 \int_{0}^{t} \bigl\langle f_{\lambda } \bigl(Y \bigl(s^{-} \bigr) \bigr)-f_{\lambda } \bigl(X \bigl(s^{-} \bigr) \bigr), e \bigl(s^{-} \bigr) \bigr\rangle \,\mathrm{d}s + \int_{0}^{t} \bigl\vert g \bigl( Y \bigl(s^{-} \bigr) \bigr) - g \bigl( X \bigl( s^{-} \bigr) \bigr) \bigr\vert ^{2}\,\mathrm{d}s \\ & \quad{} + \lambda \int_{0}^{t} \bigl\vert h \bigl( Y \bigl(s^{-} \bigr) \bigr) - h \bigl( X \bigl( s^{-} \bigr) \bigr) \bigr\vert ^{2}\,\mathrm{d}s \\ &\quad{} + \int _{0}^{t} 2 \bigl\langle e \bigl(s^{-} \bigr), g \bigl( Y \bigl(s^{-} \bigr) \bigr) - g \bigl( X \bigl( s^{-} \bigr) \bigr) \,\mathrm{d}W ( s ) \bigr\rangle \\ & \quad{} + \int_{0}^{t} 2 \bigl\langle e \bigl(s^{-} \bigr), h \bigl( Y \bigl(s^{-} \bigr) \bigr) - h \bigl( X \bigl( s^{-} \bigr) \bigr) \bigr\rangle \,\mathrm{d} \tilde{N} ( s ) \\ & \quad{} + \int_{0}^{t} \bigl\vert h \bigl( Y \bigl(s^{-} \bigr) \bigr) - h \bigl( X \bigl( s^{-} \bigr) \bigr) \bigr\vert ^{2}\,\mathrm{d}\tilde{N} ( s ) \\ & \le 2 \int_{0}^{t} \bigl\langle f_{\lambda } \bigl(Y \bigl(s^{-} \bigr) \bigr)-f_{\lambda } \bigl( \overline{Y} \bigl(s^{-} \bigr) \bigr), e \bigl(s^{-} \bigr) \bigr\rangle + \bigl\langle f_{\lambda } \bigl( \overline{Y} \bigl(s^{-} \bigr) \bigr)-f_{\lambda } \bigl(X \bigl(s^{-} \bigr) \bigr), e \bigl(s^{-} \bigr) \bigr\rangle \,\mathrm{d}s \\ & \quad{} + \int_{0}^{t} \bigl\vert g \bigl( Y \bigl(s^{-} \bigr) \bigr) - g \bigl( X \bigl( s^{-} \bigr) \bigr) \bigr\vert ^{2}\,\mathrm{d}s \\ & \quad{} + \lambda \int_{0}^{t} \bigl\vert h \bigl( Y \bigl(s^{-} \bigr) \bigr) - h \bigl( X \bigl( s^{-} \bigr) \bigr) \bigr\vert ^{2}\,\mathrm{d}s \\ &\quad{} + M_{1}(t)+ M_{2}(t)+ M_{3}(t), \end{aligned}$$
where
$$\begin{aligned}& M_{1}(t)= \int _{0}^{t} 2 \bigl\langle e \bigl(s^{-} \bigr), g \bigl( Y \bigl(s^{-} \bigr) \bigr) - g \bigl( X \bigl( s^{-} \bigr) \bigr) \,\mathrm{d}W ( s ) \bigr\rangle , \\& M_{2}(t)= \int_{0}^{t} 2 \bigl\langle e \bigl(s^{-} \bigr), h \bigl( Y \bigl(s^{-} \bigr) \bigr) - h \bigl( X \bigl( s^{-} \bigr) \bigr) \bigr\rangle \,\mathrm{d} \tilde{N} ( s ) , \\& M_{3}(t)= \int_{0}^{t} \bigl\vert h \bigl( Y \bigl(s^{-} \bigr) \bigr) - h \bigl( X \bigl( s^{-} \bigr) \bigr) \bigr\vert ^{2}\,\mathrm{d}\tilde{N} ( s ) . \end{aligned}$$
Using Assumptions 2.1 and 5.1, together with (3.2), we have
$$\begin{aligned} \bigl\vert e(t) \bigr\vert ^{2} &\le \int_{0}^{t} 2 \bigl\langle f_{\lambda } \bigl(Y \bigl(s^{-} \bigr) \bigr)-f_{ \lambda } \bigl(\overline{Y} \bigl(s^{-} \bigr) \bigr), e \bigl(s^{-} \bigr) \bigr\rangle +2 K_{\lambda } \bigl\vert e \bigl(s ^{-} \bigr) \bigr\vert ^{2}\,\mathrm{d}s \\ &\quad{} + \int_{0}^{t} (L_{g}+\lambda L_{h}) \bigl\vert Y \bigl(s^{-} \bigr)-X \bigl( s^{-} \bigr) \bigr\vert ^{2}\,\mathrm{d}s \\ & \quad{} + M_{1}(t)+ M_{2}(t)+ M_{3}(t) \\ & \le \int_{0}^{t} \bigl\vert f_{\lambda } \bigl(Y \bigl(s^{-} \bigr) \bigr)-f_{\lambda } \bigl( \overline{Y} \bigl(s^{-} \bigr) \bigr) \bigr\vert ^{2}+ \bigl\vert e \bigl(s^{-} \bigr) \bigr\vert ^{2}\,\mathrm{d}s +2K_{\lambda } \int _{0}^{t} \bigl\vert e \bigl(s^{-} \bigr) \bigr\vert ^{2}\,\mathrm{d}s \\ &\quad{} +2(L_{g}+\lambda L_{h} ) \int_{0}^{t} \bigl\vert e \bigl(s^{-} \bigr) \bigr\vert ^{2}+ \bigl\vert Y \bigl(s^{-} \bigr)- \overline{Y} \bigl( s^{-} \bigr) \bigr\vert ^{2}\,\mathrm{d}s \\ & \quad{} + M_{1}(t)+ M_{2}(t)+ M_{3}(t) \\ &\le \bigl[1+2(K_{\lambda }+L_{g}+\lambda L_{h} ) \bigr] \int_{0}^{t} \bigl\vert e \bigl(s^{-} \bigr) \bigr\vert ^{2} \,\mathrm{d}s \\ &\quad{} +D \int_{0}^{t} \bigl(1+ \bigl\vert Y \bigl(s^{-} \bigr) \bigr\vert ^{q}+ \bigl\vert \overline{Y} \bigl(s^{-} \bigr) \bigr\vert ^{q} \bigr) \bigl\vert Y \bigl(s^{-} \bigr)-\overline{Y} \bigl( s^{-} \bigr) \bigr\vert ^{2}\,\mathrm{d}s \\ &\quad{} +2(L_{g}+\lambda L_{h} ) \int_{0}^{t} \bigl\vert Y \bigl(s^{-} \bigr)-\overline{Y} \bigl( s^{-} \bigr) \bigr\vert ^{2} \,\mathrm{d}s \\ &\quad{} + M_{1}(t)+ M_{2}(t)+ M_{3}(t) \\ &\le \bigl[1+2(K_{\lambda }+L_{g}+\lambda L_{h}) \bigr] \int_{0}^{t} \bigl\vert e \bigl(s^{-} \bigr) \bigr\vert ^{2} \,\mathrm{d}s \\ &\quad{} +D_{1} \Bigl(\sup_{0\le s\le t} \bigl\vert Y \bigl(s^{-} \bigr)-\overline{Y} \bigl( s^{-} \bigr) \bigr\vert ^{2} \Bigr) \int_{0}^{t} \bigl(1+ \bigl\vert Y \bigl(s^{-} \bigr) \bigr\vert ^{q}+ \bigl\vert \overline{Y} \bigl(s ^{-} \bigr) \bigr\vert ^{q} \bigr) 
\,\mathrm{d}s \\ &\quad{} + M_{1}(t)+ M_{2}(t)+ M_{3}(t), \end{aligned}$$
where we use \(D_{1}\) to denote a generic constant (independent of Δt) that may change from line to line.
Taking expectations and using Lemmas 4.1, 4.2 and 5.1, we have
$$\begin{aligned} \mathbb{E}\sup_{0\le s\le t} \bigl\vert e(s) \bigr\vert ^{2} &\le D_{1} \int_{0} ^{t} \mathbb{E} \bigl\vert e \bigl(s^{-} \bigr) \bigr\vert ^{2}\,\mathrm{d}s +D_{1} \Delta t \\ &\quad{} + \mathbb{E}\sup_{0\le s\le t} M_{1}(s)+ \mathbb{E} \sup _{0\le s\le t}M_{2}(s)+ \mathbb{E}\sup_{0\le s\le t}M_{3}(s). \end{aligned}$$
(5.6)
Now, as in the proof of [9], the Burkholder-Davis-Gundy inequality can be used to get the estimate
$$ \mathbb{E}\sup_{0\le s\le t} M_{i}(s)\le \frac{1}{6} \mathbb{E}\sup_{0\le s\le t} \bigl\vert e(s) \bigr\vert ^{2}+D_{1} \int_{0}^{t} \mathbb{E} \bigl\vert e \bigl(s^{-} \bigr) \bigr\vert ^{2}\,\mathrm{d}s +D_{1} \Delta t, \quad i=1,2,3. $$
Using this in (5.6) and rearranging, we obtain
$$\begin{aligned} \mathbb{E}\sup_{0\le s\le t} \bigl\vert e(s) \bigr\vert ^{2} \le D_{1} \int_{0} ^{t} \mathbb{E}\sup_{0\le r\le s} \bigl\vert e(r) \bigr\vert ^{2} \,\mathrm{d}s +D _{1} \Delta t. \end{aligned}$$
The result follows from the continuous Gronwall inequality. □

6 Numerical experiments

We consider the following nonlinear stochastic differential equation with jumps from [16]:
$$ \textstyle\begin{cases} \mathrm{d}X(t) =( -4X ( t^{-} ) -X^{3}(t^{-}))\,\mathrm{d}t + X ( t^{-} ) \,\mathrm{d}W ( t ) + X ( t^{-} ) \,\mathrm{d}N ( t ) , \\ X(0) = 1. \end{cases} $$
(6.1)
Define \(f(x(t))=-4x ( t ) -x^{3}(t)\), \(g(x(t))=x(t)\), \(h(x(t))=x(t)\). It is easy to compute that
$$\begin{aligned} \bigl\langle x-y, f ( x ) - f ( y ) \bigr\rangle &= \bigl\langle x-y, -4(x-y)- \bigl(x^{3}-y^{3} \bigr) \bigr\rangle \\ &=- \vert x-y \vert ^{2} \bigl(4+x^{2}+xy+y^{2} \bigr) \\ &\le -4 \vert x - y \vert ^{2}, \end{aligned}$$
since \(x^{2}+xy+y^{2}\ge 0\). Hence \(f(x)\) satisfies the one-sided Lipschitz condition, while \(g(x)\) and \(h(x)\) clearly satisfy the global Lipschitz condition, so the assumptions of Theorem 5.2 hold. That is, the numerical solution produced by our method converges to the true solution of system (6.1).
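To make the scheme concrete, here is a minimal sketch of the CSSθ method applied to (6.1), assuming the standard split-step form \(Y_{n}^{*}=Y_{n}+\theta \Delta t f_{\lambda }(Y_{n}^{*})\), \(Y_{n+1}=Y_{n}+\Delta t f_{\lambda }(Y_{n}^{*})+g(Y_{n}^{*})\Delta W_{n}+h(Y_{n}^{*})\Delta \widetilde{N}_{n}\) with compensated drift \(f_{\lambda }(x)=f(x)+\lambda h(x)=-3x-x^{3}\); all function names below are ours, for illustration only.

```python
import numpy as np

# Sketch of the CSS-theta scheme for (6.1):
#   dX = (-4X - X^3) dt + X dW + X dN, X(0) = 1, lambda = 1,
# with compensated drift f_lambda(x) = -3x - x**3.

def f_lam(x):
    return -3.0 * x - x**3

def implicit_stage(y, dt, theta, iters=8):
    """Solve y* = y + theta*dt*f_lam(y*) by Newton's method."""
    ystar = y
    for _ in range(iters):
        res = ystar - y - theta * dt * f_lam(ystar)
        dres = 1.0 + theta * dt * (3.0 + 3.0 * ystar**2)  # residual derivative
        ystar -= res / dres
    return ystar

def css_theta_path(x0=1.0, T=2.0, dt=2**-10, theta=0.7, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    y = x0
    for _ in range(int(round(T / dt))):
        ystar = implicit_stage(y, dt, theta)
        dW = rng.normal(0.0, np.sqrt(dt))
        dN = rng.poisson(lam * dt)
        # compensated Poisson increment: dN - lam*dt
        y = y + dt * f_lam(ystar) + ystar * dW + ystar * (dN - lam * dt)
    return y
```

Since \(\theta \Delta t\) is small, the implicit map is close to the identity and a few Newton iterations resolve the cubic stage equation; the strongly dissipative drift keeps the sample paths bounded.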
To show the convergence of the CSSθ method for system (6.1), we fix \(T=2\), \(\lambda =1\), \(\theta =0.7\). Since the exact solution of the nonlinear jump-diffusion system (6.1) is not available, we use the numerical solution produced by the SSBE method with step size \(\Delta t=2^{-14}\) as the ‘referenced exact solution’ (Theorem 2 in [16] guarantees its strong convergence) in Figure 1.
Figure 1

CSSθ solution and ‘referenced exact solution’ for system (6.1).

In Figure 1, we plot the numerical solution produced by the CSSθ method with step size \(\Delta t=2^{-10}\) together with the ‘referenced exact solution’. It is easy to see that the two paths are almost indistinguishable, i.e. the CSSθ method follows the ‘referenced exact solution’ closely. Hence our method is efficient for nonlinear jump-diffusion systems.

To show the strong convergence order of the CSSθ method for different values of the parameter θ, we fix \(T=2\), \(\lambda =1\) and take \(\theta =0.5, 0.7, 0.9\), respectively. The ‘referenced exact solution’ of system (6.1) is again computed by the SSBE method with step size \(\Delta t=2^{-14}\). We simulate the numerical solutions with five different step sizes \(h=2^{p-1}\Delta t\) for \(1\leq p\leq 5\), \(\Delta t=2^{-14}\). The mean-square errors \(\varepsilon = \frac{1}{5{,}000}\sum_{i = 1}^{5{,}000} \vert Y_{n} ( \omega _{i} ) - X ( T ) \vert ^{2}\), all measured at time \(T=2\), are estimated by trajectory averaging. We plot our approximation to \(\sqrt{\varepsilon }\) against Δt on a log-log scale. For reference, a dashed line of slope one half is added.
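The error-versus-step-size procedure can be sketched as follows (an illustration, not the paper's code): coarse CSSθ paths are coupled to a fine path driven by the same noise, by aggregating the fine Brownian and Poisson increments, and the root-mean-square error at \(T\) is estimated by trajectory averaging.

```python
import numpy as np

# Strong-error sketch for (6.1) with the CSS-theta scheme; the helper
# names (css_step, strong_errors) are ours, purely for illustration.

def f_lam(x):
    return -3.0 * x - x**3        # compensated drift of (6.1), lambda = 1

def css_step(y, dt, dW, dN, theta=0.7, lam=1.0):
    ystar = y
    for _ in range(8):            # Newton for y* = y + theta*dt*f_lam(y*)
        ystar -= (ystar - y - theta * dt * f_lam(ystar)) / (
            1.0 + theta * dt * (3.0 + 3.0 * ystar**2))
    return y + dt * f_lam(ystar) + ystar * dW + ystar * (dN - lam * dt)

def strong_errors(M=100, T=2.0, dt_fine=2**-8, ratios=(2, 8), seed=1):
    rng = np.random.default_rng(seed)
    n = int(round(T / dt_fine))
    errs = np.zeros(len(ratios))
    for _ in range(M):
        dW = rng.normal(0.0, np.sqrt(dt_fine), n)
        dN = rng.poisson(dt_fine, n).astype(float)
        x = 1.0                   # fine path, used as the reference
        for i in range(n):
            x = css_step(x, dt_fine, dW[i], dN[i])
        for k, r in enumerate(ratios):
            y = 1.0               # coarse path with step r*dt_fine
            for i in range(0, n, r):
                y = css_step(y, r * dt_fine, dW[i:i+r].sum(), dN[i:i+r].sum())
            errs[k] += (y - x)**2
    return np.sqrt(errs / M)      # root-mean-square errors at T
```

With the coupled increments, halving the step size should roughly halve the error squared, consistent with strong order one half.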

In Figure 2, we see that the slopes of the error curves match the reference line of slope one half well. Therefore, the results verify the strong convergence order of the proposed method.
Figure 2

Errors simulated by the CSSθ method with different θ.

Declarations

Acknowledgements

This research was supported in part by the National Natural Science Foundation of China (Grant Nos. 11501410, 11471243 and 11672207), and in part by the Natural Science Foundation of Tianjin (No. 17JCQNJC03800). We thank the anonymous reviewers for their valuable comments and helpful suggestions, which improved this paper significantly.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Mathematics, Tianjin Polytechnic University
(2)
Department of Sport Culture and Communication, Tianjin University of Sport

References

  1. Bruti-Liberati, N, Platen, E: Strong approximations of stochastic differential equations with jumps. J. Comput. Appl. Math. 205, 982-1001 (2007)
  2. Chalmers, GD, Higham, DJ: Convergence and stability analysis for implicit simulations of stochastic differential equations with random jump magnitudes. Discrete Contin. Dyn. Syst., Ser. B 9, 47-64 (2008)
  3. Higham, DJ, Kloeden, PE: Convergence and stability of implicit methods for jump-diffusion. Int. J. Numer. Anal. Model. 3, 125-140 (2006)
  4. Hu, L, Gan, SQ: Convergence and stability of the balanced methods for stochastic differential equations with jumps. Int. J. Comput. Math. 88, 2089-2108 (2011)
  5. Tan, JG, Mu, ZM, Guo, YF: Convergence and stability of the compensated split-step θ-method for stochastic differential equations with jumps. Adv. Differ. Equ. 2014, 209 (2014)
  6. Wang, XJ, Gan, SQ: Compensated stochastic theta methods for stochastic differential equations with jumps. Appl. Numer. Math. 60, 877-887 (2010)
  7. Jiang, F, Shen, Y, Wu, FK: Jump systems with the mean-reverting γ-process and convergence of the numerical approximation. Stoch. Dyn. 12, 1150018 (2012)
  8. Bao, J, Mao, XR, Yin, G: Competitive Lotka-Volterra population dynamics with jumps. Nonlinear Anal. 74, 6601-6616 (2011)
  9. Higham, DJ, Mao, XR, Stuart, AM: Strong convergence of Euler-type methods for nonlinear stochastic differential equations. SIAM J. Numer. Anal. 40, 1041-1063 (2002)
  10. Hutzenthaler, M, Jentzen, A, Kloeden, PE: Strong and weak divergence in finite time of Euler’s method for stochastic differential equations with non-globally Lipschitz continuous coefficients. Proc. R. Soc. Lond., Ser. A, Math. Phys. Eng. Sci. 467, 1563-1576 (2010)
  11. Mao, XR, Szpruch, L: Strong convergence and stability of implicit numerical methods for stochastic differential equations with non-globally Lipschitz continuous coefficients. J. Comput. Appl. Math. 238, 14-28 (2013)
  12. Mao, XR, Szpruch, L: Strong convergence rates for backward Euler-Maruyama method for non-linear dissipative-type stochastic differential equations with super-linear diffusion coefficients. Stochastics 85, 144-171 (2013)
  13. Huang, CM: Exponential mean square stability of numerical methods for systems of stochastic differential equations. J. Comput. Appl. Math. 236, 4016-4026 (2012)
  14. Zong, XF, Wu, FK, Huang, CM: Exponential mean square stability of the theta approximations for neutral stochastic differential delay equations. J. Comput. Appl. Math. 286, 172-185 (2015)
  15. Zong, XF, Wu, FK, Huang, CM: Theta schemes for SDDEs with non-globally Lipschitz continuous coefficients. J. Comput. Appl. Math. 278, 258-277 (2015)
  16. Higham, DJ, Kloeden, PE: Numerical methods for nonlinear stochastic differential equations with jumps. Numer. Math. 101, 101-119 (2005)
  17. Higham, DJ, Kloeden, PE: Strong convergence rates for backward Euler on a class of nonlinear jump-diffusion problems. J. Comput. Appl. Math. 205, 949-956 (2007)
  18. Gyöngy, I, Krylov, NV: On stochastic equations with respect to semi-martingales I. Stochastics 4, 1-21 (1980)

Copyright

© The Author(s) 2017