  • Research
  • Open Access

\(L^{p }\) (\(p>2\))-strong convergence of multiscale integration scheme for jump-diffusion systems

Advances in Difference Equations 2019, 2019:20

https://doi.org/10.1186/s13662-019-1956-0

  • Received: 14 November 2018
  • Accepted: 13 January 2019
  • Published:

Abstract

In this paper we prove the \(L^{p}\) (\(p>2\))-strong convergence of a multiscale integration scheme for two-time-scale stochastic jump-diffusion systems, which yields a numerical method for the effective dynamical system.

Keywords

  • Fast-slow SDEs with jumps
  • \(L^{p}\) (\(p>2\))-strong convergence
  • Multiscale integration scheme

MSC

  • 60H10
  • 34F05

1 Introduction

This paper focuses on the following two-time-scale jump-diffusion SDEs:
$$\begin{aligned} \textstyle\begin{cases} dx_{t}^{\epsilon }=a(x_{t}^{\epsilon },y_{t}^{\epsilon })\,dt+b(x_{t} ^{\epsilon })\,dB_{t} +c(x_{t}^{\epsilon })\,dP_{t}, & x_{0}^{\epsilon }=x _{0}, \\ dy_{t}^{\epsilon }=\frac{1}{\epsilon }f(x_{t}^{\epsilon },y_{t}^{ \epsilon }) \,dt+\frac{1}{\sqrt{\epsilon }} g(x_{t}^{\epsilon },y_{t} ^{\epsilon })\,dW_{t}+h(x_{t}^{\epsilon }, y_{t}^{\epsilon }) \,dN_{t} ^{\epsilon },& y_{0}^{\epsilon }=y_{0}, \end{cases}\displaystyle \end{aligned}$$
(1)
where \(x_{t}^{\epsilon }\in \mathbb{R}^{n}\) and \(y_{t}^{\epsilon } \in \mathbb{R}^{m}\) are jump-diffusion processes. The functions \(a(x,y)\in \mathbb{R}^{n}\) and \(f(x,y)\in \mathbb{R}^{m}\) are the drift coefficients, the functions \(b(x)\in \mathbb{R}^{n\times d_{1}}\) and \(g(x,y)\in \mathbb{R}^{m\times d_{2}}\) are the diffusion coefficients, and the functions \(c(x)\in \mathbb{R}^{n}\) and \(h(x,y)\in \mathbb{R} ^{m}\) are the jump coefficients; \(B_{t}\) and \(W_{t}\) are independent Wiener processes of dimensions \(d_{1}\) and \(d_{2}\), respectively, \(P_{t}\) is a scalar simple Poisson process with intensity \(\lambda _{1}\), and \(N_{t}^{\epsilon }\) is a scalar simple Poisson process with intensity \(\frac{\lambda _{2}}{\epsilon }\). Here ϵ is a small parameter representing the ratio of time scales between the processes \(x_{t}^{ \epsilon }\) and \(y_{t}^{\epsilon }\). With this time-scale separation, the vector \(x_{t}^{\epsilon }\) is referred to as the “slow component” and \(y_{t}^{\epsilon }\) as the “fast component”. Under suitable assumptions, the authors of [1, 2] proved that, as \(\epsilon \rightarrow 0\), the slow component \(x_{t}^{\epsilon }\) converges in mean square to the solution of an SDE of the following form:
$$\begin{aligned} \textstyle\begin{cases} d\bar{x}_{t}=\bar{a}(\bar{x}_{t})\,dt+b(\bar{x}_{t})\,dB_{t} +c(\bar{x} _{t})\,dP_{t}, \\ \bar{x}_{0}=x_{0}, \end{cases}\displaystyle \end{aligned}$$
(2)
with
$$\begin{aligned} \bar{a}(x)= \int _{\mathbb{R}^{m}}a(x,y)\mu ^{x}(dy), \quad x\in \mathbb{R}^{n}, \end{aligned}$$
and \(\mu ^{x}\) is the invariant, ergodic measure generated by the following equation with frozen x:
$$\begin{aligned} dy_{t}=f(x,y_{t})\,dt+g(x,y_{t})\,dW_{t}+h(x, y_{t})\,dN_{t}, \quad y_{0}=y_{0}. \end{aligned}$$
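Since \(\mu ^{x}\) is ergodic, \(\bar{a}(x)\) can in practice be approximated by a long-run time average of the frozen fast process. The following minimal sketch illustrates this with hypothetical linear coefficients of our own choosing (not taken from the paper): \(f(x,y)=x-y\), \(g\equiv 1\), \(h\equiv c_{0}\), and slow drift \(a(x,y)=y\); for this choice the stationary mean, and hence \(\bar{a}(x)\), equals \(x+\lambda _{2}c_{0}\).

```python
import numpy as np

rng = np.random.default_rng(1)

lam2, c0, x = 0.5, 0.2, 1.0     # hypothetical intensity, jump size, frozen x
dt, nsteps = 0.01, 200_000      # long horizon for the ergodic time average

y, acc = 0.0, 0.0
for _ in range(nsteps):
    dW = rng.normal(0.0, np.sqrt(dt))      # Brownian increment
    dN = rng.poisson(lam2 * dt)            # Poisson increment, intensity lam2
    y = y + (x - y) * dt + dW + c0 * dN    # Euler step of the frozen fast SDE
    acc += y

# ergodic estimate of a_bar(x); exact value for this linear example: x + lam2*c0
a_bar_est = acc / nsteps
```

For nonlinear coefficients no closed form is available, which is exactly why the multiscale integration scheme below estimates this average on the fly.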

Multiscale jump-diffusion stochastic differential equations arise in many applications and have been studied widely. For systems of type (1), what is usually of interest is the time evolution of the slow variable \(x_{t}^{\epsilon }\). Thus a simplified equation that is independent of the fast variable yet retains the essential features of the system is highly desirable. On the one hand, while the averaging principle [1–6] plays an important role in the study of the slow component by providing the reduced equation (2), the difficulty in obtaining the effective equation (2) lies in the fact that the coefficient \(\bar{a}(\cdot )\) is given as an expectation with respect to the measure \(\mu ^{x}(dy)\), which is usually difficult or impossible to compute analytically, especially when the dimension m is large. On the other hand, even when the reduced equation is available, it can rarely be solved explicitly. Therefore, the construction of efficient computational methods is of great importance. The multiscale integration scheme (cf. [7]) overcomes exactly these difficulties: it solves for \(\bar{x}_{t}\) with \(\bar{a}(\cdot )\) estimated on the fly by an empirical average of the original slow drift \(a(\cdot )\) over numerical solutions of the fast process. This is one of our motivations.

A second significant motivation is that a substantial body of work exists on multiscale integration schemes for fast-slow SDEs. Most existing results establish convergence in \(L^{p}\) (\(0< p\leqslant 2\)), or in an even weaker sense [4, 5, 8–10]. Nevertheless, convergence in a stronger sense is what we seek. In 2007, an \(L^{2}\) averaging principle was established in [1] for a system in which the slow and fast dynamics are driven by Brownian and Poisson noises. Subsequently, a multiscale integration scheme for this result was given in [9]. In 2015, Xu and Miao [2] extended the result of [1] to the \(L^{p}\) (\(p>2\)) case under assumptions (H1)–(H5). A natural question is: can the \(L^{p}\) (\(p>2\)) averaging principle also be established for the multiscale integration scheme? It is well known that \(L^{1}\) or \(L^{2}\) convergence does not imply \(L^{p}\) (\(p>2\)) convergence, whereas \(L^{1}\) and \(L^{2}\) convergence follow from \(L^{p}\) (\(p>2\)) convergence. Once \(L^{p}\) (\(p>2\)) convergence has been established, one obtains much greater freedom in the choice of the parameter q when studying \(L^{q}\) (\(0< q< p\)) convergence.

Based on the above discussion, the aim of this paper is to prove the \(L^{p}\) (\(p>2\)) convergence of the multiscale integration scheme under the following assumptions:
(H1): 
The measurable functions a, b, c, f, g, and h satisfy the global Lipschitz conditions, i.e., there is a positive constant L such that
$$ \begin{aligned} & \bigl\vert a(u_{1},v_{1})-a(u_{2},v_{2}) \bigr\vert ^{2}+ \bigl\vert b(u_{1})-b(u_{2}) \bigr\vert ^{2} + \bigl\vert c(u _{1})-c(u_{2}) \bigr\vert ^{2} \\ &\qquad {}+ \bigl\vert f(u_{1},v_{1})-f(u_{2},v_{2}) \bigr\vert ^{2}+ \bigl\vert g(u_{1},v_{1})-g(u_{2}, v _{2}) \bigr\vert ^{2}+ \bigl\vert h(u_{1},v_{1})-h(u_{2},v_{2}) \bigr\vert ^{2} \\ &\quad \leqslant L\bigl( \vert u_{1}-u_{2} \vert ^{2}+ \vert v_{1}-v_{2} \vert ^{2}\bigr) \end{aligned} $$
for all \(u_{i}\in \mathbb{R}^{n}\), \(v_{i}\in \mathbb{R}^{m}\), \(i=1,2\). Here and below we use \(|\cdot |\) to denote both the Euclidean vector norm and the Frobenius matrix norm.

Remark 1.1

With the help of (H1), it immediately follows that there is a positive constant K such that
$$ \begin{aligned} & \bigl\vert a(u,v) \bigr\vert ^{2}+ \bigl\vert b(u) \bigr\vert ^{2}+ \bigl\vert c(u) \bigr\vert ^{2} + \bigl\vert f(u,v) \bigr\vert ^{2}+ \bigl\vert g(u,v) \bigr\vert ^{2}+ \bigl\vert h(u,v) \bigr\vert ^{2} \\ &\quad \leqslant K\bigl(1+ \vert u \vert ^{2}+ \vert v \vert ^{2}\bigr) \end{aligned} $$
for \((u,v)\in \mathbb{R}^{n}\times \mathbb{R}^{m}\). Thus a, b, c, f, g, and h satisfy the linear growth condition.
(H2): 

a, g, and h are globally bounded.

Remark 1.2

By (H1) and (H2), it is easy to derive that ā in (2) is bounded and satisfies the Lipschitz condition [11].

(H3): 
There exist constants \(\beta _{1}>0\) and \(\beta _{j} \in \mathbb{R}\), \(j=2, 3, 4\), which are all independent of \((u_{1},v _{1},v_{2})\), such that
$$\begin{aligned}& \begin{gathered} v_{1}\cdot f(u_{1},v_{1})\leqslant -\beta _{1} \vert v_{1} \vert ^{2}+\beta _{2}, \\ \bigl(f(u_{1},v_{1})-f(u_{1},v_{2}) \bigr) (v_{1}-v_{2})\leqslant \beta _{3} \vert v _{1}-v_{2} \vert ^{2}, \end{gathered} \end{aligned}$$
(3)
and
$$ \bigl(h(u_{1},v_{1})-h(u_{1},v_{2}) \bigr) (v_{1}-v_{2})\leqslant \beta _{4} \vert v _{1}-v_{2} \vert ^{2} $$
(4)
for all \(u_{1}\in \mathbb{R}^{n}\) and \(v_{1},v_{2}\in \mathbb{R}^{m}\).
(H4): 
\(\eta :=-(2\beta _{3}+2\lambda _{2}\beta _{4}+C_{g}+ \lambda _{2}C_{h})>0\), where \(\beta _{3}\) and \(\beta _{4}\) are taken from (3) and (4), \(\lambda _{2}\) is the intensity parameter of \(N_{t}^{\epsilon }\) (with intensity \(\lambda _{2}/\epsilon \)), and \(C_{g}\) and \(C_{h}\) are the Lipschitz coefficients of g and h, respectively, i.e.,
$$ \bigl\vert g(u_{1},v_{1})-g(u_{2},v_{2}) \bigr\vert ^{2}\leqslant C_{g}\bigl( \vert u_{1}-u_{2} \vert ^{2}+ \vert v _{1}-v_{2} \vert ^{2}\bigr) $$
and
$$ \bigl\vert h(u_{1},v_{1})-h(u_{2},v_{2}) \bigr\vert ^{2}\leqslant C_{h}\bigl( \vert u_{1}-u_{2} \vert ^{2}+ \vert v_{1}-v_{2} \vert ^{2}\bigr) $$
for all \(u_{1},u_{2}\in \mathbb{R}^{n}\), \(v_{1},v_{2}\in \mathbb{R}^{m}\).
(H5): 
There exists a constant \(\gamma >0\), which is independent of \((u,v)\), such that
$$ v^{\mathrm{T}}g(u,v)g^{\mathrm{T}}(u,v)v\geqslant \gamma \vert v \vert ^{2} $$
for all \((u,v)\in \mathbb{R}^{n}\times \mathbb{R}^{m}\).

An example that satisfies (H1)–(H5) is \(a(u,v)= \frac{1}{1+(u+v)^{2}}\), \(b(u)=e^{-u^{2}}\), \(c(u)=\sin u\), \(f(u,v)=-1.5( \lambda _{2}+1)v\), \(g(u,v)=\frac{3+\sin u+\sin v}{\sqrt{2}}\), and \(h(u,v)=\frac{\sin u+\sin v}{\sqrt{2}}\).
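As a quick sanity check of the dissipativity condition (H4) for this example, one can verify numerically that \(\eta >0\) for every intensity \(\lambda _{2}\geqslant 0\). The constants below are our own elementary Lipschitz/dissipativity bounds for these coefficients, not values stated in the paper:

```python
import numpy as np

def eta(lam2):
    # (f(u,v1)-f(u,v2))(v1-v2) = -1.5*(lam2+1)*|v1-v2|^2, so beta_3:
    beta3 = -1.5 * (lam2 + 1.0)
    # |sin v1 - sin v2| <= |v1 - v2| gives beta_4 <= 1/sqrt(2) for h:
    beta4 = 1.0 / np.sqrt(2.0)
    # squared Lipschitz constants of g and h (each bounded by 1 here):
    C_g, C_h = 1.0, 1.0
    return -(2 * beta3 + 2 * lam2 * beta4 + C_g + lam2 * C_h)

# eta(lam2) = (2 - sqrt(2)) * lam2 + 2, strictly positive for lam2 >= 0
assert all(eta(l) > 0 for l in np.linspace(0.0, 50.0, 201))
```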

It is worth pointing out that the \(L^{p}\) (\(p>2\)) averaging principle under assumptions (H1)–(H5) was established in [2].

Now, we will introduce the multiscale integration scheme. The scheme is made up of a macro solver to evolve (2) and a micro solver to simulate the fast dynamics in (1):

1. Macro solver. Let Δt be a fixed step size, and let \(X_{n}\) be a numerical approximation to the coarse variable at time \(t_{n}=n\Delta t\). The simplest choice is the Euler–Maruyama scheme
$$\begin{aligned} X_{n+1}=X_{n}+A(X_{n})\Delta t+b(X_{n})\Delta B_{n}+c(X_{n})\Delta P _{n}, \quad X_{0}=x_{0}, \end{aligned}$$
(5)
where \(A(X_{n})\) is estimated by an empirical average
$$ A(X_{n})=\frac{1}{M}\sum _{m=1}^{M}a\bigl(X_{n},Y_{m}^{n} \bigr). $$
(6)
2. Micro solver. To get \(A(X_{n})\) used in the macro solver, we adopt the Euler–Maruyama scheme to generate \(Y_{m}^{n}\):
$$ Y_{m+1}^{n}=Y_{m}^{n}+\frac{1}{\epsilon }f \bigl(X_{n},Y_{m}^{n}\bigr)\delta t+ \frac{1}{\sqrt{ \epsilon }}g\bigl(X_{n},Y_{m}^{n}\bigr)\Delta W_{m}^{n}+h\bigl(X_{n},Y_{m}^{n} \bigr) \Delta N_{m}^{n}, $$
(7)
with fixed \(X_{n}\), and we denote the solution by \(Y_{m}^{n}\), \(m=0, 1, \ldots,M\), where \(\Delta W_{m}^{n}\) are Brownian increments over a time interval δt, and \(\Delta N_{m}^{n}\) are Poisson increments with intensity \(\frac{\lambda _{2}}{\epsilon }\). Owing to the ergodicity [2] of the fast dynamics, we may, among other choices, take \(Y_{0}^{n}=y_{0}\).
Note that the effective dynamics do not rely on ϵ. Meanwhile, since the discrete solution \(Y_{m}^{n}\) obtained by the micro solver is computed with \(X_{n}\) fixed, it depends only on the ratio \(\frac{\delta t}{ \epsilon }\). Thus, without loss of generality, we may take \(\epsilon =1\). Then we have
$$ Y_{m+1}^{n}=Y_{m}^{n}+f \bigl(X_{n},Y_{m}^{n}\bigr)\delta t+g \bigl(X_{n}, Y_{m}^{n}\bigr) \Delta W_{m}^{n}+h\bigl(X_{n},Y_{m}^{n} \bigr)\Delta N_{m}^{n}, $$
(8)
where \(\Delta W_{m}^{n}=W_{(m+1)\delta t}^{n}-W_{m\delta t}^{n}\) are the Brownian increments, and \(\Delta N_{m}^{n}=N_{(m+1) \delta t}^{n}-N _{m\delta t}^{n}\) are the Poisson increments with intensity \(\lambda _{2}\).
Simultaneously, \(Y_{m}^{n}\) are numerically generated discrete solutions of the family of SDEs as well:
$$ dz_{t}^{n}=f\bigl(X_{n},z_{t}^{n} \bigr)\,dt+g\bigl(X_{n},z_{t}^{n} \bigr)\,dW_{t}^{n} +h\bigl(X_{n},z _{t}^{n}\bigr)\,dN_{t}^{n}, $$
(9)
with initial conditions \(z_{0}^{n}=Y_{0}^{n}=y_{0}\) and a time step δt (the choice of a fixed \(Y_{0}^{n}\) for all n simplifies our estimates; in practice, we could take \(Y_{0}^{n}=Y_{M}^{n-1}\) for all \(n>0\)).
We also present a discrete auxiliary process \(\bar{X}_{n}\), the Euler solution to the effective dynamics (2):
$$ \bar{X}_{n+1}=\bar{X}_{n}+\bar{a}( \bar{X}_{n})\Delta t+ b(\bar{X}_{n}) \Delta B_{n}+c( \bar{X}_{n})\Delta P_{n}. $$
(10)
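To make the macro-micro loop concrete, the following minimal Python sketch implements (5), (6), and (8) for the example coefficients given above. The step sizes, horizons, intensities, and initial values are our own hypothetical choices, not values prescribed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

lam1, lam2 = 1.0, 1.0  # jump intensities lambda_1, lambda_2 (hypothetical)

# coefficients of the example satisfying (H1)-(H5)
a = lambda u, v: 1.0 / (1.0 + (u + v) ** 2)
b = lambda u: np.exp(-u ** 2)
c = lambda u: np.sin(u)
f = lambda u, v: -1.5 * (lam2 + 1.0) * v
g = lambda u, v: (3.0 + np.sin(u) + np.sin(v)) / np.sqrt(2.0)
h = lambda u, v: (np.sin(u) + np.sin(v)) / np.sqrt(2.0)

def micro_average(Xn, y0, M, dt):
    """Micro solver (8) with epsilon = 1: evolve the frozen fast variable
    and return the empirical average A(Xn) of a(Xn, .), cf. (6)."""
    Y, acc = y0, 0.0
    for _ in range(M):
        dW = rng.normal(0.0, np.sqrt(dt))
        dN = rng.poisson(lam2 * dt)          # Poisson increment, intensity lam2
        Y = Y + f(Xn, Y) * dt + g(Xn, Y) * dW + h(Xn, Y) * dN
        acc += a(Xn, Y)
    return acc / M

def multiscale(x0, y0, T, Dt, M, dt):
    """Macro solver (5): Euler-Maruyama for the slow variable, with the
    averaged drift estimated on the fly by the micro solver."""
    X = x0
    for _ in range(int(round(T / Dt))):
        A = micro_average(X, y0, M, dt)      # empirical average (6)
        dB = rng.normal(0.0, np.sqrt(Dt))
        dP = rng.poisson(lam1 * Dt)
        X = X + A * Dt + b(X) * dB + c(X) * dP
    return X

XT = multiscale(x0=1.0, y0=0.0, T=1.0, Dt=0.01, M=200, dt=0.01)
```

Note the nested structure: each macro step spends M micro steps regenerating the drift estimate, mirroring the restart \(Y_{0}^{n}=y_{0}\) used in the analysis.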

Concretely, in this paper we concentrate on estimating the \(L^{p}\)-strong error between the solution \(\bar{x}_{t}\) of the effective dynamics (2) and the solution \(X_{n}\) of the multiscale integration scheme (5), (6), (8). Moreover, by Hölder’s inequality and Chebyshev’s inequality, it follows easily that the solution \(X_{n}\) of the multiscale integration scheme also approximates the solution \(\bar{x}_{t}\) of the effective dynamics both in the \(L^{q}\) (\(0< q< p\)) sense and in probability. The proof of the main result is divided into two parts: \((I')\) the difference between the process \(\bar{x}_{t_{n}}\) and the auxiliary process \(\bar{X}_{n}\) (see Lemma 2.4 below); \((\mathit{II}')\) the difference between the process \(X_{n}\) and the auxiliary process \(\bar{X}_{n}\) (see Lemma 3.8 below).

We now describe the structure of the paper. In Sect. 2, we introduce some a priori estimates and use them to bound the error between the process \(\bar{x}_{t_{n}}\) and the auxiliary process \(\bar{X}_{n}\). In Sect. 3, we bound the error between the process \(X_{n}\) and the auxiliary process \(\bar{X}_{n}\). In Sect. 4, combining these two estimates, we derive our main result (see Theorem 4.1).

Throughout this paper, we will denote by C or K a generic positive constant which may change its value from line to line. In chains of inequalities, we will adopt C, \(C^{\prime }\), \(C^{\prime \prime }\), … or \(C_{1}\), \(C_{2}\), \(K_{1}\), \(K_{2}\), … to avoid confusion.

2 Some a priori estimates

In this section, we shall give some a priori estimates in the first three lemmas. Then we can apply the obtained results to estimate the difference between the process \(\bar{x}_{t_{n}}\) and the auxiliary process \(\bar{X}_{n}\).

For convenience, we will extend the discrete numerical solution \(\bar{X}_{n}\) of (10) to continuous time. We first define the ‘step function’
$$ Z(t)=\sum_{k}\bar{X}_{k}1_{[k\Delta t, (k+1)\Delta t)}(t), $$
(11)
where \(1_{G}\) is the indicator function for the set G. Then we define
$$ \bar{X}(t)=x_{0}+ \int _{0}^{t}\bar{a}\bigl(Z(s)\bigr) \,ds+ \int _{0}^{ t}b\bigl(Z(s)\bigr)\,dB _{s}+ \int _{0}^{t}c\bigl(Z(s)\bigr)\,dP_{s}. $$
(12)
(Note that by construction \(Z(t-)=Z(t)\) for \(t\neq k\Delta t\).) It is not difficult to verify that \(Z(t_{k})=\bar{X}(t_{k})=\bar{X}_{k}\). The aim of this section is to prove a convergence result for \(\bar{X}(t)\); since the discrete numerical solution is interpolated by \(\bar{X}(t)\), the convergence result for \(\bar{X}_{k}\) then follows directly.

Firstly, in the next two lemmas we show that the discrete numerical solution \(\bar{X}_{k}\) and the continuous approximation \(\bar{X}(t)\) have bounded 2pth moments.

Lemma 2.1

For any \(p>1\) and \(T>0\), there exist positive constants \(\Delta t^{ \ast }\) and \(C_{1}\) such that, for all \(0<\Delta t\leqslant \Delta t ^{\ast }\),
$$ \mathbb{E} \vert \bar{X}_{k} \vert ^{2p}\leqslant C_{1}\bigl(1+ \vert x_{0} \vert ^{2p}\bigr) $$
(13)
for \(k\Delta t\leqslant T\), where \(C_{1}\) is independent of \((k, \Delta t)\).

Proof

By construction (12), we have
$$ \begin{aligned} \bar{X}_{k+1}= & x_{0}+ \int _{0}^{(k+1)\Delta t}\bar{a}\bigl(Z(s)\bigr) \,ds+ \int _{0}^{(k+1)\Delta t}b\bigl(Z(s)\bigr)\,dB_{s}+ \int _{0}^{(k+1)\Delta t}c\bigl(Z(s)\bigr)\,dP _{s}. \end{aligned} $$
Then we obtain
$$\begin{aligned} \mathbb{E} \vert \bar{X}_{k+1} \vert ^{2p} \leqslant & C \vert x_{0} \vert ^{2p}+C\mathbb{E} \biggl\vert \int _{0}^{(k+1)\Delta t}\bar{a}\bigl(Z(s)\bigr)\,ds \biggr\vert ^{2p} +C\mathbb{E} \biggl\vert \int _{0}^{(k+1)\Delta t}b\bigl(Z(s)\bigr)\,dB_{s} \biggr\vert ^{2p} \\ &{}+C\mathbb{E} \biggl\vert \int _{0}^{(k+1)\Delta t}c\bigl(Z(s)\bigr)\,dP_{s} \biggr\vert ^{2p} \\ :=&C\bigl( \vert x_{0} \vert ^{2p}+I_{1}+I_{2}+I_{3} \bigr) \end{aligned}$$
(14)
for \((k+1)\Delta t\leqslant T\). For \(I_{2}\) and \(I_{3}\), using \(\tilde{P}_{t}:=P_{t}-\lambda _{1}t\), Burkholder’s inequality [12], Hölder’s inequality, Remark 1.1, and (11), we have
$$\begin{aligned} I_{2} \leqslant & C\mathbb{E} \biggl[ \int _{0}^{(k+1)\Delta t} \bigl\vert b\bigl(Z(s)\bigr) \bigr\vert ^{2}\,ds \biggr]^{p} \\ \leqslant &C \int _{0}^{(k+1)\Delta t}\mathbb{E} \bigl\vert b\bigl(Z(s) \bigr) \bigr\vert ^{2p}\,ds \\ \leqslant &C \int _{0}^{(k+1)\Delta t}\mathbb{E}\bigl(1+ \bigl\vert Z(s) \bigr\vert ^{2p}\bigr)\,ds \\ \leqslant &C+C\Delta t\sum_{i=0}^{k} \mathbb{E} \vert \bar{X}_{i} \vert ^{2p} \end{aligned}$$
(15)
and
$$\begin{aligned} I_{3} =&\mathbb{E} \biggl\vert \int _{0}^{(k+1)\Delta t}c\bigl(Z(s)\bigr)\,d \tilde{P}_{s}+ \lambda _{1} \int _{0}^{(k+1)\Delta t}c\bigl(Z(s)\bigr)\,ds \biggr\vert ^{2p} \\ \leqslant & C\mathbb{E} \biggl\vert \int _{0}^{(k+1)\Delta t}c\bigl(Z(s)\bigr)\,d \tilde{P}_{s} \biggr\vert ^{2p}+C \mathbb{E} \biggl\vert \lambda _{1} \int _{0}^{(k+1) \Delta t}c\bigl(Z(s)\bigr)\,ds \biggr\vert ^{2p} \\ \leqslant & C\mathbb{E} \biggl[ \int _{0}^{(k+1)\Delta t} \bigl\vert c\bigl(Z(s)\bigr) \bigr\vert ^{2}\,ds \biggr]^{p}+C\mathbb{E} \biggl\vert \lambda _{1} \int _{0}^{(k+1)\Delta t}c\bigl(Z(s)\bigr)\,ds \biggr\vert ^{2p} \\ \leqslant & C \int _{0}^{(k+1)\Delta t}\mathbb{E} \bigl\vert c\bigl(Z(s) \bigr) \bigr\vert ^{2p}\,ds+C \int _{0}^{(k+1)\Delta t}\mathbb{E} \bigl\vert c\bigl(Z(s) \bigr) \bigr\vert ^{2p}\,ds \\ \leqslant & C \int _{0}^{(k+1)\Delta t}\mathbb{E}\bigl(1+ \bigl\vert Z(s) \bigr\vert ^{2p}\bigr)\,ds \\ \leqslant & C+C\Delta t\sum_{i=0}^{k} \mathbb{E} \vert \bar{X}_{i} \vert ^{2p}. \end{aligned}$$
(16)
Similarly, we may deal with \(I_{1}\) and have
$$\begin{aligned} I_{1} \leqslant & C+C\Delta t\sum _{i=0}^{k}\mathbb{E} \vert \bar{X}_{i} \vert ^{2p}. \end{aligned}$$
(17)
Choosing Δt sufficiently small and by (14)–(17), we have
$$ \mathbb{E} \vert \bar{X}_{k+1} \vert ^{2p}\leqslant C \bigl(1+ \vert x_{0} \vert ^{2p}\bigr)+C\Delta t \sum _{i=0}^{k}\mathbb{E} \vert \bar{X}_{i} \vert ^{2p}, $$
which, with the aid of discrete Gronwall’s inequality, gives the result. □
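For the reader’s convenience, the discrete Gronwall inequality is used here in the following standard form (with the hypothetical sequence \(a_{k}\) standing for \(\mathbb{E}|\bar{X}_{k}|^{2p}\)):
$$ a_{k+1}\leqslant A+B\Delta t\sum_{i=0}^{k}a_{i} \quad (k\geqslant 0), \qquad a_{0}\leqslant A \quad \Longrightarrow \quad a_{k}\leqslant A e^{Bk\Delta t}, $$
which follows by induction from \(a_{k}\leqslant A(1+B\Delta t)^{k}\). Applied with \(A=C(1+|x_{0}|^{2p})\) and \(B=C\), this yields \(\mathbb{E}|\bar{X}_{k}|^{2p}\leqslant C(1+|x_{0}|^{2p})e^{CT}\) for \(k\Delta t\leqslant T\), which is (13).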

Lemma 2.2

For any \(p>1\) and \(T>0\), there exist positive constants \(\Delta t^{ \ast }\) and \(C_{2}\) such that, for all \(0<\Delta t\leqslant \Delta t ^{\ast }\),
$$\begin{aligned} \mathbb{E}\sup_{t\in [0,T]} \bigl\vert \bar{X}(t) \bigr\vert ^{2p}\leqslant C_{2}\bigl(1+ \vert x_{0} \vert ^{2p}\bigr), \end{aligned}$$
(18)
where \(C_{2}\) is independent of Δt.

Proof

From (12), we have
$$ \begin{aligned} \bigl\vert \bar{X}(t) \bigr\vert ^{2p} \leqslant C \vert x_{0} \vert ^{2p}+C \biggl\vert \int _{0}^{ t}\bar{a}\bigl(Z(s)\bigr)\,ds \biggr\vert ^{2p} &+C \biggl\vert \int _{0}^{t}b\bigl(Z(s)\bigr)\,dB_{s} \biggr\vert ^{2p}+C \biggl\vert \int _{0}^{t}c\bigl(Z(s)\bigr)\,dP_{s} \biggr\vert ^{2p}. \end{aligned} $$
Thus, by the definition of \(\tilde{P}_{t}\), we have
$$\begin{aligned} \mathbb{E}\sup_{t\in [0,T]} \bigl\vert \bar{X}(t) \bigr\vert ^{2p} \leqslant & C \vert x_{0} \vert ^{2p}+C \mathbb{E}\sup_{t\in [0,T]} \biggl\vert \int _{0}^{ t}\bar{a}\bigl(Z(s)\bigr)\,ds \biggr\vert ^{2p}+C \mathbb{E}\sup_{t\in [0,T]} \biggl\vert \int _{0}^{t}b\bigl(Z(s)\bigr)\,dB_{s} \biggr\vert ^{2p} \\ &{}+C\mathbb{E}\sup_{t\in [0,T]} \biggl\vert \int _{0}^{t}c\bigl(Z(s)\bigr)\,d\tilde{P} _{s} \biggr\vert ^{2p}+C\mathbb{E}\sup _{t\in [0,T]} \biggl\vert \lambda _{1} \int _{0}^{t}c\bigl(Z(s)\bigr)\,ds \biggr\vert ^{2p}. \end{aligned}$$
(19)
By the same method as in the previous lemma, we obtain
$$ \mathbb{E}\sup_{t\in [0,T]} \bigl\vert \bar{X}(t) \bigr\vert ^{2p}\leqslant C\bigl(1+ \vert x_{0} \vert ^{2p}\bigr)+C \int _{0}^{T}\mathbb{E} \bigl\vert Z(s) \bigr\vert ^{2p}\,ds. $$
Applying (11) and Lemma 2.1 over the interval [0,T], we obtain result (18). □

Secondly, we show that the continuous-time approximation \(\bar{X}(t)\) remains close to the step function \(Z(t)\) in a strong sense.

Lemma 2.3

For any \(p>1\) and \(T>0\), there exist positive constants \(\Delta t^{ \ast }\) and \(C_{3}\) such that, for all \(0<\Delta t\leqslant \Delta t ^{\ast }\),
$$\begin{aligned} \mathbb{E}\sup_{t\in [0,T]} \bigl\vert \bar{X}(t)-Z(t) \bigr\vert ^{2p}\leqslant C_{3} \Delta t^{p}\bigl(1+ \vert x_{0} \vert ^{2p}\bigr), \end{aligned}$$
(20)
where \(C_{3}\) is independent of Δt.

Proof

For \(t\in [k\Delta t,(k+1)\Delta t]\subseteq [0,T]\), we have
$$ \bar{X}(t)-Z(t)=\bar{X}(t)-\bar{X}_{k}= \int _{k\Delta t}^{t}\bar{a}\bigl(Z(s)\bigr)\,ds+ \int _{k\Delta t}^{t}b\bigl(Z(s)\bigr)\,dB_{s}+ \int _{k\Delta t}^{t}c\bigl(Z(s)\bigr)\,dP_{s}. $$
Thus
$$ \bigl\vert \bar{X}(t)- Z(t) \bigr\vert ^{2p}\leqslant C \biggl\vert \int _{k\Delta t}^{t}\bar{a}\bigl(Z(s)\bigr)\,ds \biggr\vert ^{2p}+C \biggl\vert \int _{k\Delta t}^{t}b\bigl(Z(s)\bigr)\,dB_{s} \biggr\vert ^{2p}+C \biggl\vert \int _{k\Delta t}^{t}c\bigl(Z(s)\bigr)\,dP_{s} \biggr\vert ^{2p} $$
for each \(t\in [k\Delta t,(k+1)\Delta t]\). Then
$$\begin{aligned} \sup_{t\in [0,T]} \bigl\vert \bar{X}(t)-Z(t) \bigr\vert ^{2p} \leqslant & \max_{k=0,1,\ldots T/\Delta t-1}\sup _{\tau \in [k\Delta t,(k+1)\Delta t]} \biggl\{ C \biggl\vert \int _{k\Delta t}^{\tau }\bar{a}\bigl(Z(s)\bigr)\,ds \biggr\vert ^{2p} \\ &{}+C \biggl\vert \int _{k\Delta t}^{\tau }b\bigl(Z(s)\bigr)\,dB_{s} \biggr\vert ^{2p}+C \biggl\vert \int _{k\Delta t}^{\tau }c\bigl(Z(s)\bigr)\,d \tilde{P}_{s} \biggr\vert ^{2p} \\ &{}+C \biggl\vert \lambda _{1} \int _{k\Delta t}^{\tau }c\bigl(Z(s)\bigr)\,ds \biggr\vert ^{2p} \biggr\} . \end{aligned}$$
(21)
Now, taking expectations on both sides of (21), applying Burkholder’s inequality to the martingale integrals, and using Hölder’s inequality, we have
$$\begin{aligned} \mathbb{E}\sup_{t\in [0,T]} \bigl\vert \bar{X}(t)- Z(t) \bigr\vert ^{2p} \leqslant & \max_{k=0,1,\ldots T/\Delta t-1} \biggl\{ C\Delta t^{2p-1} \int _{k\Delta t}^{(k+1)\Delta t}\mathbb{E} \bigl\vert \bar{a} \bigl(Z(s)\bigr) \bigr\vert ^{2p}\,ds \\ &{}+C\Delta t^{p-1} \int _{k\Delta t}^{(k+1)\Delta t}\mathbb{E} \bigl\vert b\bigl(Z(s) \bigr) \bigr\vert ^{2p}\,ds \\ &{}+C\Delta t^{p-1} \int _{k\Delta t}^{(k+1)\Delta t}\mathbb{E} \bigl\vert c\bigl(Z(s) \bigr) \bigr\vert ^{2p}\,ds \\ &{}+C\Delta t^{2p-1} \int _{k\Delta t}^{(k+1)\Delta t}\mathbb{E} \bigl\vert c\bigl(Z(s) \bigr) \bigr\vert ^{2p}\,ds \biggr\} . \end{aligned}$$
Applying Remarks 1.1 and 1.2, we have
$$ \mathbb{E}\sup_{t\in [0,T]} \bigl\vert \bar{X}(t)- Z(t) \bigr\vert ^{2p}\leqslant \max_{k=0,1,\ldots T/\Delta t-1} \biggl\{ C\bigl( \Delta t^{p-1}+\Delta t^{2p-1}\bigr) \int _{k\Delta t}^{(k+1)\Delta t}\bigl(1+\mathbb{E} \bigl\vert Z(s) \bigr\vert ^{2p}\bigr)\,ds \biggr\} . $$
But \(Z(s)\equiv \bar{X}_{k}\) on \([k\Delta t,(k+1)\Delta t)\), hence, it follows from Lemma 2.1 that
$$ \mathbb{E}\sup_{t\in [0,T]} \bigl\vert \bar{X}(t)- Z(t) \bigr\vert ^{2p}\leqslant C\bigl(\Delta t^{p}+\Delta t^{2p}\bigr)\bigl[1+C_{1}\bigl(1+ \vert x_{0} \vert ^{2p}\bigr)\bigr], $$
which yields result (20). □

Lastly, we prove a strong convergence result for \(\bar{X}(t)\).

Lemma 2.4

For any \(p>1\) and \(T>0\), there exist positive constants \(\Delta t^{ \ast }\) and \(C_{4}\) such that, for all \(0<\Delta t\leqslant \Delta t ^{\ast }\),
$$\begin{aligned} \mathbb{E}\sup_{t\in [0,T]} \bigl\vert \bar{X}(t)- \bar{x}(t) \bigr\vert ^{2p}\leqslant C _{4}\Delta t^{p}\bigl(1+ \vert x_{0} \vert ^{2p}\bigr), \end{aligned}$$
(22)
where \(C_{4}\) is independent of Δt.

Proof

By construction (12), we get
$$\begin{aligned} \bar{X}(t)-\bar{x}(t) =& \int _{0}^{t}\bigl[\bar{a}\bigl(Z(s)\bigr)-\bar{a} \bigl(\bar{x}(s)\bigr)\bigr]\,ds+ \int _{0}^{t}\bigl[b\bigl(Z(s)\bigr)-b\bigl( \bar{x}(s)\bigr)\bigr]\,dB_{s} \\ &{}+ \int _{0}^{t}\bigl[c\bigl(Z(s)\bigr)-c\bigl( \bar{x}(s)\bigr)\bigr]\,dP_{s}. \end{aligned}$$
Hence, we have
$$\begin{aligned} \mathbb{E}\sup_{t\in [0,t_{1}]} \bigl\vert \bar{X}(t)- \bar{x}(t) \bigr\vert ^{2p} \leqslant & C\mathbb{E}\sup _{t\in [0,t_{1}]} \biggl\vert \int _{0}^{t}\bigl[\bar{a}\bigl(Z(s)\bigr)- \bar{a} \bigl(\bar{x}(s)\bigr)\bigr]\,ds \biggr\vert ^{2p} \\ &{}+C\mathbb{E}\sup_{t\in [0,t_{1}]} \biggl\vert \int _{0}^{t}\bigl[b\bigl(Z(s)\bigr)-b\bigl( \bar{x}(s)\bigr)\bigr]\,dB_{s} \biggr\vert ^{2p} \\ &{}+C\mathbb{E}\sup_{t\in [0,t_{1}]} \biggl\vert \int _{0}^{t}\bigl[c\bigl(Z(s)\bigr)-c\bigl( \bar{x}(s)\bigr)\bigr]\,dP_{s} \biggr\vert ^{2p} \\ :=&C(I_{1}+I_{2}+I_{3}) \end{aligned}$$
(23)
for any \(0\leqslant t_{1}\leqslant T\). By the definition of \(\tilde{P}_{t}\), Burkholder’s inequality, Hölder’s inequality, and (H1), we obtain
$$\begin{aligned}& \begin{aligned}[b] I_{2} &\leqslant C\mathbb{E} \biggl[ \int _{0}^{t_{1}} \bigl\vert b\bigl(Z(s)\bigr)-b \bigl(\bar{x}(s)\bigr) \bigr\vert ^{2} \,ds \biggr]^{p} \\ &\leqslant C \int _{0}^{t_{1}}\mathbb{E} \bigl\vert b\bigl(Z(s) \bigr)-b\bigl(\bar{x}(s)\bigr) \bigr\vert ^{2p} \,ds \\ &\leqslant C \int _{0}^{t_{1}}\mathbb{E} \bigl\vert Z(s)- \bar{x}(s) \bigr\vert ^{2p}\,ds, \end{aligned} \end{aligned}$$
(24)
$$\begin{aligned}& \begin{aligned}[b] I_{3} &\leqslant C\mathbb{E}\sup _{t\in [0,t_{1}]} \biggl\vert \int _{0}^{t}\bigl[c\bigl(Z(s)\bigr)-c\bigl( \bar{x}(s)\bigr)\bigr]\,d\tilde{P}_{s} \biggr\vert ^{2p} \\ &\quad {}+C\mathbb{E}\sup_{t\in [0,t_{1}]} \biggl\vert \lambda _{1} \int _{0}^{t}\bigl[c\bigl(Z(s)\bigr)-c\bigl( \bar{x}(s)\bigr)\bigr]\,ds \biggr\vert ^{2p} \\ &\leqslant C\mathbb{E} \biggl[ \int _{0}^{t_{1}} \bigl\vert c\bigl(Z(s)\bigr)-c \bigl(\bar{x}(s)\bigr) \bigr\vert ^{2} \,ds \biggr]^{p} \\ &\quad {}+C\mathbb{E}\sup_{t\in [0,t_{1}]} \biggl\vert \lambda _{1} \int _{0}^{t}\bigl[c\bigl(Z(s)\bigr)-c\bigl( \bar{x}(s)\bigr)\bigr]\,ds \biggr\vert ^{2p} \\ &\leqslant C \int _{0}^{t_{1}}\mathbb{E} \bigl\vert c\bigl(Z(s) \bigr)-c\bigl(\bar{x}(s)\bigr) \bigr\vert ^{2p}\,ds \\ &\leqslant C \int _{0}^{t_{1}}\mathbb{E} \bigl\vert Z(s)- \bar{x}(s) \bigr\vert ^{2p}\,ds. \end{aligned} \end{aligned}$$
(25)
Dealing with \(I_{1}\) similarly and combining (23)–(25), it follows that
$$\begin{aligned} \mathbb{E}\sup_{t\in [0,t_{1}]} \bigl\vert \bar{X}(t)-\bar{x}(t) \bigr\vert ^{2p} \leqslant & C \int _{0}^{t_{1}}\mathbb{E} \bigl\vert Z(s)- \bar{x}(s) \bigr\vert ^{2p}\,ds \\ \leqslant & C \int _{0}^{t_{1}}\mathbb{E} \bigl\vert \bar{X}(s)- \bar{x}(s) \bigr\vert ^{2p}\,ds+C \int _{0}^{t_{1}}\mathbb{E} \bigl\vert \bar{X}(s)-Z(s) \bigr\vert ^{2p}\,ds. \end{aligned}$$
Applying Lemma 2.3, we obtain
$$ \begin{aligned} \mathbb{E}\sup_{t\in [0,t_{1}]} \bigl\vert \bar{X}(t)-\bar{x}(t) \bigr\vert ^{2p} &\leqslant C _{5} \Delta t^{p}\bigl(1+ \vert x_{0} \vert ^{2p} \bigr)+C_{6} \int _{0}^{t_{1}}\mathbb{E} \sup_{t\in [0,s]} \bigl\vert \bar{X}(t)-\bar{x}(t) \bigr\vert ^{2p}\,ds. \end{aligned} $$
Applying the continuous Gronwall inequality and then taking \(t_{1}=T\), the desired estimate (22) is obtained. □

3 Strong convergence of the scheme

In this section, we provide some a priori estimates in the first seven lemmas. We then use the established estimates to bound the error between the process \(X_{n}\) and the auxiliary process \(\bar{X}_{n}\).

We first show the 2pth moment estimates for the processes \(z_{t}^{n}\), \(X_{n}\), and \(Y_{m}^{n}\).

Lemma 3.1

For any \(p>1\) and \(T>0\), there exists a positive constant \(K_{1}\) such that
$$\begin{aligned} \sup_{0\leqslant t\leqslant T} \mathbb{E} \bigl\vert z_{t}^{n} \bigr\vert ^{2p}\leqslant K _{1}. \end{aligned}$$
(26)

Proof

For \(|z_{t}^{n}|^{2p}\), direct computation with Itô’s formula gives that
$$\begin{aligned} \bigl\vert z_{t}^{n} \bigr\vert ^{2p} =& \vert y_{0} \vert ^{2p}+2p \int _{0}^{t} \bigl\vert z_{s}^{n} \bigr\vert ^{2p-2}\bigl(f\bigl(X _{n},z_{s}^{n} \bigr),z_{s}^{n}\bigr)\,ds+2p \int _{0}^{t} \bigl\vert z_{s}^{n} \bigr\vert ^{2p-2}\bigl(g\bigl(X _{n},z_{s}^{n} \bigr),z_{s}^{n}\bigr)\,dW_{s}^{n} \\ &{}+2p \int _{0}^{t} \bigl\vert z_{s}^{n} \bigr\vert ^{2p-2}\bigl( h\bigl(X_{n},z_{s}^{n} \bigr),z_{s}^{n}\bigr) \,dN _{s}^{n}+2p(p-1) \int _{0}^{t} \bigl\vert z_{s}^{n} \bigr\vert ^{2(p-2)}\bigl( g\bigl(X_{n},z_{s}^{n} \bigr),z _{s}^{n}\bigr)^{2}\,ds \\ &{}+2p(p-1)\lambda _{2} \int _{0}^{t} \bigl\vert z_{s}^{n} \bigr\vert ^{2(p-2)}\bigl( h\bigl(X_{n},z_{s} ^{n}\bigr),z_{s}^{n}\bigr)^{2}\,ds+p \int _{0}^{t} \bigl\vert z_{s}^{n} \bigr\vert ^{2p-2} \bigl\vert g\bigl(X_{n},z_{s} ^{n}\bigr) \bigr\vert ^{2}\,ds \\ &{}+p\lambda _{2} \int _{0}^{t} \bigl\vert z_{s}^{n} \bigr\vert ^{2p-2} \bigl\vert h\bigl(X_{n},z_{s}^{n} \bigr) \bigr\vert ^{2}\,ds. \end{aligned}$$
(27)
We have by (H3)
$$ \bigl(f\bigl(X_{n},z_{s}^{n} \bigr),z_{s}^{n}\bigr)\leqslant -\beta _{1} \bigl\vert z_{s}^{n} \bigr\vert ^{2}+ \beta _{2}. $$
(28)
By Young’s inequality and (H2), we have
$$\begin{aligned}& \begin{aligned} 2\lambda _{2}\bigl( h\bigl(X_{n},z_{s}^{n} \bigr),z_{s}^{n}\bigr)\leqslant \frac{\lambda _{2}^{2}}{\beta _{1}} \bigl\vert h\bigl(X_{n},z_{s}^{n}\bigr) \bigr\vert ^{2}+\beta _{1} \bigl\vert z_{s}^{n} \bigr\vert ^{2} \leqslant C+\beta _{1} \bigl\vert z_{s}^{n} \bigr\vert ^{2}, \end{aligned} \end{aligned}$$
(29)
$$\begin{aligned}& \bigl(g\bigl(X_{n},z_{s}^{n}\bigr),z_{s}^{n} \bigr)^{2}\leqslant C \bigl\vert z_{s}^{n} \bigr\vert ^{2}, \end{aligned}$$
(30)
and
$$ \bigl(h\bigl(X_{n},z_{s}^{n} \bigr),z_{s}^{n}\bigr)^{2}\leqslant C \bigl\vert z_{s}^{n} \bigr\vert ^{2}. $$
(31)
Taking expectations on both sides of (27) and combining (28)–(31), we have
$$ \mathbb{E} \bigl\vert z_{t}^{n} \bigr\vert ^{2p}\leqslant \vert y_{0} \vert ^{2p}-p\beta _{1}\mathbb{E} \int _{0}^{t} \bigl\vert z_{s}^{n} \bigr\vert ^{2p}\,ds+C_{p,\lambda _{2},\beta _{1},\beta _{2}} \mathbb{E} \int _{0}^{t} \bigl\vert z_{s}^{n} \bigr\vert ^{2p-2}\,ds. $$
Moreover, taking \(k>0\) small enough in Young’s inequality in the form \(ab\leqslant k|b|^{m}+C_{k,m}|a|^{m/(m-1)}\), we have
$$ \mathbb{E} \bigl\vert z_{t}^{n} \bigr\vert ^{2p}\leqslant \vert y_{0} \vert ^{2p}-C_{p,\lambda _{2}, \beta _{1},\beta _{2}} \mathbb{E} \int _{0}^{t} \bigl\vert z_{s}^{n} \bigr\vert ^{2p}\,ds+C_{p, \lambda _{2},\beta _{1},\beta _{2}}^{\prime }t, $$
which, with the help of continuous Gronwall’s inequality, yields the result. □

The proof of the following lemma is similar to the arguments in Sect. 2; we omit the details.

Lemma 3.2

For any \(p>1\) and small enough Δt, there exists a positive constant \(K_{2}\) such that
$$ \sup_{0\leqslant n\leqslant T/\Delta t}\mathbb{E} \vert X_{n} \vert ^{2p}\leqslant K_{2}, $$
(32)
where \(K_{2}\) is independent of Δt.

Lemma 3.3

For small enough δt and \(p>1\), there exists a positive constant \(K_{3}\) such that
$$ \sup_{\stackrel{0\leqslant n\leqslant \frac{T}{\Delta t}}{0< m< M}}E \bigl\vert Y _{m}^{n} \bigr\vert ^{2p}\leqslant K_{3}, $$
(33)
where \(K_{3}\) is independent of \((M,\delta t)\).

Proof

We define the continuous interpolation \(Y_{t}^{n}\) by
$$\begin{aligned} Y_{t}^{n}:=y_{0}+ \int _{0}^{t}f\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,ds+ \int _{0}^{t}g\bigl(X _{n}, \hat{Y}_{s}^{n}\bigr)\,dW_{s}^{n}+ \int _{0}^{t}h\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,dN _{s}^{n}, \end{aligned}$$
where \(\hat{Y}_{t}^{n}:=Y_{k}^{n}\) for \(t\in [k\delta t,(k+1)\delta t)\), \(k=0,1,\ldots ,M\), and \(\hat{Y}_{t_{k}}^{n}=Y_{t_{k}}^{n}=Y_{k}^{n}\) (\(t_{k}=k\delta t\)).
Thus we have
$$ \begin{aligned}[b] Y_{k+1}^{n}&=y_{0}+ \int _{0}^{(k+1)\delta t}f\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,ds+ \int _{0}^{(k+1)\delta t}g\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,dW_{s}^{n}\\ &\quad {}+ \int _{0} ^{(k+1)\delta t}h\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,dN_{s}^{n}. \end{aligned} $$
(34)
Taking the 2pth moment and expectations on both sides of (34), we get
$$\begin{aligned} \mathbb{E} \bigl\vert Y_{k+1}^{n} \bigr\vert ^{2p} =&\mathbb{E} \biggl\vert y_{0}+ \int _{0}^{(k+1) \delta t}f\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,ds+ \int _{0}^{(k+1)\delta t}g\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,dW_{s}^{n} \\ &{}+ \int _{0}^{(k+1)\delta t}h\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,dN_{s}^{n} \biggr\vert ^{2p} \\ \leqslant &C \vert y_{0} \vert ^{2p}+C\mathbb{E} \biggl\vert \int _{0}^{(k+1)\delta t}f\bigl(X _{n}, \hat{Y}_{s}^{n}\bigr)\,ds \biggr\vert ^{2p}+C \mathbb{E} \biggl\vert \int _{0}^{(k+1) \delta t}g\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,dW_{s}^{n} \biggr\vert ^{2p} \\ &{}+C\mathbb{E} \biggl\vert \int _{0}^{(k+1)\delta t}h\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,dN _{s}^{n} \biggr\vert ^{2p}. \end{aligned}$$
(35)
Using Hölder’s inequality, Remark 1.1, and the definition of \(\hat{Y}_{t}^{n}\), we have
$$\begin{aligned} \mathbb{E} \biggl\vert \int _{0}^{(k+1)\delta t}f\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,ds \biggr\vert ^{2p} \leqslant &C \int _{0}^{(k+1)\delta t}\mathbb{E} \bigl\vert f \bigl(X_{n}, \hat{Y}_{s}^{n}\bigr) \bigr\vert ^{2p}\,ds \\ \leqslant &C \int _{0}^{(k+1)\delta t}\mathbb{E}\bigl(1+ \vert X_{n} \vert ^{2p}+ \bigl\vert \hat{Y}_{s}^{n} \bigr\vert ^{2p}\bigr)\,ds \\ \leqslant &C+C\mathbb{E} \vert X_{n} \vert ^{2p}+C \delta t\sum_{i=0}^{k} \mathbb{E} \bigl\vert Y_{i}^{n} \bigr\vert ^{2p}. \end{aligned}$$
(36)
By the definition of \(\tilde{N}_{t}\), Burkholder’s inequality, Hölder’s inequality, and (H2), we obtain
$$\begin{aligned} \mathbb{E} \biggl\vert \int _{0}^{(k+1)\delta t}g\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,dW _{s}^{n} \biggr\vert ^{2p} \leqslant &C\mathbb{E} \biggl[ \int _{0}^{(k+1)\delta t} \bigl\vert g\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr) \bigr\vert ^{2}\,ds \biggr]^{p} \\ \leqslant &C \int _{0}^{(k+1)\delta t}\mathbb{E} \bigl\vert g \bigl(X_{n},\hat{Y}_{s} ^{n}\bigr) \bigr\vert ^{2p}\,ds \\ \leqslant &C_{p,T} \end{aligned}$$
(37)
and
$$\begin{aligned}& \mathbb{E} \biggl\vert \int _{0}^{(k+1)\delta t}h\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,dN _{s}^{n} \biggr\vert ^{2p} \\& \quad = \mathbb{E} \biggl\vert \int _{0}^{(k+1)\delta t}h\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,d\tilde{N}_{s}^{n}+\lambda _{2} \int _{0}^{(k+1)\delta t}h\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,ds \biggr\vert ^{2p} \\& \quad \leqslant C\mathbb{E} \biggl[ \int _{0}^{(k+1)\delta t} \bigl\vert h\bigl(X_{n}, \hat{Y} _{s}^{n}\bigr) \bigr\vert ^{2}\,ds \biggr]^{p} \\& \qquad {}+C\mathbb{E} \biggl\vert \lambda _{2} \int _{0}^{(k+1)\delta t}h\bigl(X_{n}, \hat{Y}_{s}^{n}\bigr)\,ds \biggr\vert ^{2p} \\& \quad \leqslant C \int _{0}^{(k+1)\delta t}\mathbb{E} \bigl\vert h \bigl(X_{n},\hat{Y}_{s} ^{n}\bigr) \bigr\vert ^{2p}\,ds \\& \quad \leqslant C_{p,T,\lambda _{2}}. \end{aligned}$$
(38)
Substituting (36)–(38) into (35) gives
$$\begin{aligned} \mathbb{E} \bigl\vert Y_{k+1}^{n} \bigr\vert ^{2p}\leqslant C\bigl(1+ \vert y_{0} \vert ^{2p}\bigr)+C\mathbb{E} \vert X _{n} \vert ^{2p}+C\delta t\sum_{i=0}^{k} \mathbb{E} \bigl\vert Y_{i}^{n} \bigr\vert ^{2p}. \end{aligned}$$
Using Lemma 3.2 and the discrete Gronwall inequality, we get the result. □
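The micro-solver iteration underlying (34), \(Y_{k+1}^{n}=Y_{k}^{n}+f(X_{n},Y_{k}^{n})\delta t+g(X_{n},Y_{k}^{n})\Delta W_{k}^{n}+h(X_{n},Y_{k}^{n})\Delta N_{k}^{n}\), can be sketched as follows. The coefficient functions below are illustrative assumptions (a dissipative drift as required by the ergodicity assumptions), not the paper’s concrete system.

```python
import numpy as np

def micro_solver(x_n, y0, f, g, h, dt, M, lam2, rng):
    """Euler micro-solver for the fast variable with the slow variable
    frozen at x_n: Y_{k+1} = Y_k + f dt + g dW + h dN (cf. (34))."""
    y = np.empty(M + 1)
    y[0] = y0
    for k in range(M):
        dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment over dt
        dN = rng.poisson(lam2 * dt)         # Poisson increment, intensity lam2
        y[k + 1] = y[k] + f(x_n, y[k]) * dt + g(x_n, y[k]) * dW + h(x_n, y[k]) * dN
    return y

# illustrative coefficients (assumptions, not taken from the paper)
f = lambda x, y: -(y - x)   # mean-reverting fast drift
g = lambda x, y: 0.5
h = lambda x, y: 0.1
rng = np.random.default_rng(0)
path = micro_solver(1.0, 0.0, f, g, h, dt=1e-3, M=1000, lam2=2.0, rng=rng)
```

With a dissipative drift like this one, the path stays bounded, consistent with the uniform moment bound (33).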

Next, we bound the 2pth moment of the deviation between two successive iterates of the micro-solver.

Lemma 3.4

For small enough δt and \(p>1\), there exists a positive constant \(K_{4}\) such that
$$ \sup_{\stackrel{0\leqslant n\leqslant \frac{T}{\Delta t}}{0< m< M}} \mathbb{E} \bigl\vert Y_{m+1}^{n}- Y_{m}^{n} \bigr\vert ^{2p}\leqslant K_{4}(\delta t)^{p}, $$
(39)
where \(K_{4}\) is independent of \((M, \delta t)\).

Proof

It is clear that
$$ Y_{m+1}^{n}- Y_{m}^{n}=f \bigl(X_{n},Y_{m}^{n}\bigr)\delta t+g \bigl(X_{n},Y_{m}^{n}\bigr) \Delta W_{m}^{n}+h\bigl(X_{n},Y_{m}^{n} \bigr)\Delta \tilde{N}_{m}^{n}+\lambda _{2}h \bigl(X_{n},Y_{m}^{n}\bigr)\delta t. $$
(40)
Raising both sides of (40) to the 2pth power and taking expectations, we get
$$\begin{aligned}& \mathbb{E} \bigl\vert Y_{m+1}^{n}- Y_{m}^{n} \bigr\vert ^{2p} \\& \quad = \mathbb{E} \bigl\vert f\bigl(X_{n},Y_{m} ^{n}\bigr)\delta t+g\bigl(X_{n},Y_{m}^{n} \bigr)\Delta W_{m}^{n}+h\bigl(X_{n},Y_{m}^{n} \bigr) \Delta \tilde{N}_{m}^{n}+\lambda _{2}h \bigl(X_{n},Y_{m}^{n}\bigr)\delta t \bigr\vert ^{2p} \\& \quad \leqslant C(\delta t)^{2p}\mathbb{E} \bigl\vert f \bigl(X_{n},Y_{m}^{n}\bigr) \bigr\vert ^{2p}+C( \delta t)^{p}\mathbb{E} \bigl\vert g \bigl(X_{n},Y_{m}^{n}\bigr) \bigr\vert ^{2p} \\& \qquad {}+C(\delta t)^{2p}\mathbb{E} \bigl\vert h\bigl(X_{n},Y_{m}^{n} \bigr) \bigr\vert ^{2p}+C(\delta t)^{p} \mathbb{E} \bigl\vert h\bigl(X_{n},Y_{m}^{n}\bigr) \bigr\vert ^{2p}. \end{aligned}$$
By Remark 1.1 and (H2), we have
$$ \mathbb{E} \bigl\vert Y_{m+1}^{n}- Y_{m}^{n} \bigr\vert ^{2p}\leqslant C\delta t^{2p}\bigl(1+ \mathbb{E} \vert X_{n} \vert ^{2p}+\mathbb{E} \bigl\vert Y_{m}^{n} \bigr\vert ^{2p}\bigr)+C\delta t^{p}. $$
Using Lemmas 3.2 and 3.3, for small enough δt, we get
$$ \mathbb{E} \bigl\vert Y_{m+1}^{n}- Y_{m}^{n} \bigr\vert ^{2p}\leqslant K_{4}\delta t^{p}. $$
 □
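The compensated decomposition \(dN = d\tilde{N} + \lambda _{2}\,dt\) used in (38) and (40) can be sanity-checked numerically. A minimal sketch, with illustrative parameter values: the compensated increments should have mean zero (martingale property) and variance \(\lambda _{2}\,\delta t\).

```python
import numpy as np

# Check the decomposition dN = dN~ + lambda_2 * dt over one step of size dt.
rng = np.random.default_rng(2)
lam2, dt, samples = 2.0, 1e-2, 1_000_000

dN = rng.poisson(lam2 * dt, samples)   # raw Poisson increments over dt
dNtilde = dN - lam2 * dt               # compensated (martingale) increments

mean_err = abs(dNtilde.mean())         # should be near 0
var = dNtilde.var()                    # should be near lam2 * dt
```

This mean-zero property is what lets the proofs treat the \(d\tilde{N}\) term with Burkholder’s inequality, while the remaining \(\lambda _{2}\,dt\) drift is handled by Hölder’s inequality.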

Lemma 2.1 in [9] shows that \(z_{t}^{n}\) is statistically equivalent to a shifted and rescaled version of \(y_{t}^{\epsilon }\), with x acting as a parameter; that is, \(z_{t}^{n}\sim y_{t_{n}+\epsilon t}^{\epsilon }\).

It is proved in [2] that the dynamics (9) are ergodic with a unique invariant measure \(\mu ^{X_{n}}\) (under Assumptions (H3)–(H5)), which is exponentially mixing in the following sense. Let \(P^{X_{n}}(t, z, E)\) denote the transition probability of (9). Then there exist positive constants η and \(\alpha <1\) such that
$$\begin{aligned} \bigl\vert P^{X_{n}}(t, z, E)-\mu ^{X_{n}}(E) \bigr\vert < \eta \alpha ^{t} \end{aligned}$$
for every \(E\in \mathcal{B}(\mathbb{R}^{m})\).

We now exploit the mixing properties of the auxiliary processes \(z_{t}^{n}\). Recall that \(\bar{a}(X_{n})\) is the average of \(a(X_{n},y)\) with respect to \(\mu ^{X_{n}}\), the invariant measure induced by \(z_{t}^{n}\). We write \(z_{m}^{n}=z_{m\delta t}^{n}\).

Lemma 3.5

For small enough δt and \(p>1\), there exists a positive constant \(K_{5}\) such that
$$ \mathbb{E} \Biggl\vert \frac{1}{M}\sum _{m=1}^{M}a\bigl(X_{n},z_{m}^{n} \bigr)-\bar{a}(X _{n}) \Biggr\vert ^{2p}\leqslant K_{5} \biggl[\frac{-\log _{\alpha }(M\delta t)+1}{M \delta t}+\frac{1}{M} \biggr], $$
(41)
where \(K_{5}\) is independent of \((M, \delta t)\).

Proof

By (H2) and Remark 1.2, we have
$$\begin{aligned}& \mathbb{E} \Biggl\vert \frac{1}{M}\sum _{m=1}^{M}a\bigl(X_{n},z_{m}^{n} \bigr)-\bar{a}(X _{n}) \Biggr\vert ^{2p} \\& \quad = \mathbb{E} \Biggl[ \Biggl\vert \frac{1}{M}\sum _{m=1}^{M}a\bigl(X_{n},z_{m}^{n} \bigr)- \bar{a}(X_{n}) \Biggr\vert ^{2p-2} \times \Biggl\vert \frac{1}{M}\sum_{m=1}^{M}a \bigl(X _{n},z_{m}^{n}\bigr)-\bar{a}(X_{n}) \Biggr\vert ^{2} \Biggr] \\& \quad \leqslant \mathbb{E} \Biggl[ \Biggl(\frac{1}{M}\sum _{m=1}^{M} \bigl\vert a\bigl(X_{n},z _{m}^{n}\bigr)-\bar{a}(X_{n}) \bigr\vert ^{2p-2} \Biggr)\times \Biggl\vert \frac{1}{M} \sum _{m=1}^{M}a\bigl(X_{n},z_{m}^{n} \bigr)-\bar{a}(X_{n}) \Biggr\vert ^{2} \Biggr] \\& \quad \leqslant C_{a,\bar{a}}\mathbb{E} \Biggl\vert \frac{1}{M}\sum _{m=1}^{M}a\bigl(X _{n},z_{m}^{n} \bigr)-\bar{a}(X_{n}) \Biggr\vert ^{2}. \end{aligned}$$
(42)

It remains to estimate the mean-square term; the proof is similar to that of Lemma 2.6 in [9], so we omit the details. This yields the desired result (41). □
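The quantity controlled by Lemma 3.5 — the gap between a time average along the auxiliary process and the invariant-measure average — can be illustrated numerically. As a simplifying assumption (not the paper’s setting), take a fast Ornstein–Uhlenbeck process without jumps, \(dz=-(z-x)\,dt+\sigma \,dW\), whose invariant measure is \(N(x,\sigma ^{2}/2)\); with the illustrative choice \(a(x,y)=y\) we know \(\bar{a}(x)=x\) exactly.

```python
import numpy as np

# Empirical average (1/M) sum_m a(X_n, z_m^n) versus the exact invariant
# average abar(X_n) = x_n, for a jump-free OU fast process (an assumption).
rng = np.random.default_rng(3)
x_n, sigma, dt, M = 1.5, 0.4, 1e-3, 100_000

z = np.empty(M)
z[0] = x_n                              # start at the invariant mean
for m in range(1, M):
    z[m] = z[m-1] - (z[m-1] - x_n) * dt + sigma * np.sqrt(dt) * rng.normal()

A = z.mean()                            # empirical average, cf. (41)
abar = x_n                              # exact invariant average
err = abs(A - abar)
```

The error decays as the sampling horizon \(M\delta t\) grows, in line with the \((M\delta t)^{-1}\)-type terms on the right-hand side of (41).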

Next, we establish the 2pth moment of the deviation between (9) and its numerical approximation (8).

Lemma 3.6

Let \(z_{t}^{n}\) be the family of processes defined by (9). For small enough δt and \(p>1\), there exists a positive constant \(K_{8}\) such that
$$ \max_{\stackrel{0\leqslant n\leqslant \lfloor \frac{T}{\Delta t} \rfloor }{0\leqslant m\leqslant M}} \mathbb{E} \bigl\vert Y_{m}^{n}-z_{m}^{n} \bigr\vert ^{2p} \leqslant K_{8}\delta t^{p}, $$
(43)
where \(K_{8}\) is independent of \((M, \delta t)\).

Proof

Define \(t_{\delta t}=\lfloor t/\delta t\rfloor \delta t\), and let \(Y_{t}^{n}\) be the continuous interpolation of the Euler approximation \(Y_{m}^{n}\), given by
$$ Y_{t}^{n}=y_{0}+ \int _{0}^{t}f\bigl(X_{n},Y_{s_{\delta t}}^{n} \bigr)\,ds+ \int _{0}^{t}g\bigl(X _{n},Y_{s_{\delta t}}^{n} \bigr)\,dW_{s}^{n}+ \int _{0}^{t}h\bigl(X_{n},Y_{s_{\delta t}}^{n} \bigr)\,dN_{s}^{n}. $$
(44)
Hence, we have
$$\begin{aligned} \mathbb{E}\sup_{t\in [0,t_{1}]} \bigl\vert Y_{t}^{n}-z_{t}^{n} \bigr\vert ^{2p} \leqslant & C\mathbb{E}\sup_{t\in [0,t_{1}]} \biggl\vert \int _{0}^{t}\bigl[f\bigl(X_{n},Y_{s_{ \delta t}}^{n} \bigr)-f\bigl(X_{n},z_{s}^{n}\bigr)\bigr]\,ds \biggr\vert ^{2p} \\ &{}+C\mathbb{E}\sup_{t\in [0,t_{1}]} \biggl\vert \int _{0}^{t}\bigl[g\bigl(X_{n},Y_{s _{\delta t}}^{n} \bigr)-g\bigl(X_{n},z_{s}^{n}\bigr) \bigr]\,dW_{s}^{n} \biggr\vert ^{2p} \\ &{}+C\mathbb{E}\sup_{t\in [0,t_{1}]} \biggl\vert \int _{0}^{t}\bigl[h\bigl(X_{n},Y_{s _{\delta t}}^{n} \bigr)-h\bigl(X_{n},z_{s}^{n}\bigr)\bigr]\,d \tilde{N}_{s}^{n} \biggr\vert ^{2p} \\ &{}+C\mathbb{E}\sup_{t\in [0,t_{1}]} \biggl\vert \int _{0}^{t}\lambda _{2}\bigl[h\bigl(X _{n},Y_{s_{\delta t}}^{n}\bigr)-h\bigl(X_{n},z_{s}^{n} \bigr)\bigr]\,ds \biggr\vert ^{2p} \end{aligned}$$
(45)
for any \(0\leqslant t_{1}\leqslant T\), where we have used the definition of Ñ. Now we apply Burkholder’s inequality and Hölder’s inequality to the two martingale terms to get
$$\begin{aligned}& \mathbb{E}\sup_{t\in [0,t_{1}]} \biggl\vert \int _{0}^{t}\bigl[g\bigl(X_{n},Y_{s_{ \delta t}}^{n} \bigr)-g\bigl(X_{n},z_{s}^{n}\bigr) \bigr]\,dW_{s}^{n} \biggr\vert ^{2p} \\& \quad \leqslant C \mathbb{E} \biggl[ \int _{0}^{t_{1}} \bigl\vert g\bigl(X_{n},Y_{s_{\delta t}}^{n} \bigr)-g\bigl(X _{n},z_{s}^{n}\bigr) \bigr\vert ^{2}\,ds \biggr]^{p} \\& \quad \leqslant C \int _{0}^{t_{1}}\mathbb{E} \bigl\vert g \bigl(X_{n},Y_{s_{\delta t}}^{n}\bigr)-g\bigl(X _{n},z_{s}^{n}\bigr) \bigr\vert ^{2p}\,ds \end{aligned}$$
(46)
and
$$\begin{aligned}& \mathbb{E}\sup_{t\in [0,t_{1}]} \biggl\vert \int _{0}^{t}\bigl[h\bigl(X_{n},Y_{s_{ \delta t}}^{n} \bigr)-h\bigl(X_{n},z_{s}^{n}\bigr)\bigr]\,d \tilde{N}_{s}^{n} \biggr\vert ^{2p} \\& \quad \leqslant C\mathbb{E} \biggl[ \int _{0}^{t_{1}}\lambda _{2} \bigl\vert h \bigl(X_{n},Y _{s_{\delta t}}^{n}\bigr)-h\bigl(X_{n},z_{s}^{n} \bigr) \bigr\vert ^{2}\,ds \biggr]^{p} \\& \quad \leqslant C \int _{0}^{t_{1}}\mathbb{E} \bigl\vert h \bigl(X_{n},Y_{s_{\delta t}}^{n}\bigr)-h\bigl(X _{n},z_{s}^{n}\bigr) \bigr\vert ^{2p}\,ds. \end{aligned}$$
(47)
By Hölder’s inequality, we have
$$\begin{aligned}& \begin{gathered}[b] \mathbb{E}\sup_{t\in [0,t_{1}]} \biggl\vert \int _{0}^{t}\bigl[f\bigl(X_{n},Y_{s_{ \delta t}}^{n} \bigr)-f\bigl(X_{n},z_{s}^{n}\bigr)\bigr]\,ds \biggr\vert ^{2p}\\ \quad\leqslant C \int _{0} ^{t_{1}}\mathbb{E} \bigl\vert f \bigl(X_{n},Y_{s_{\delta t}}^{n}\bigr)-f\bigl(X_{n},z_{s}^{n} \bigr) \bigr\vert ^{2p}\,ds, \end{gathered} \end{aligned}$$
(48)
$$\begin{aligned}& \begin{gathered}[b]\mathbb{E}\sup_{t\in [0,t_{1}]} \biggl\vert \int _{0}^{t}\lambda _{2}\bigl[h \bigl(X_{n},Y _{s_{\delta t}}^{n}\bigr)-h\bigl(X_{n},z_{s}^{n} \bigr)\bigr]\,ds \biggr\vert ^{2p}\\ \quad \leqslant C \int _{0}^{t_{1}}\mathbb{E} \bigl\vert h \bigl(X_{n},Y_{s_{\delta t}}^{n}\bigr)-h\bigl(X_{n},z_{s} ^{n}\bigr) \bigr\vert ^{2p}\,ds. \end{gathered} \end{aligned}$$
(49)
Combining (45)–(49), applying the Lipschitz condition in (H1), and using Lemma 3.4 to bound the increment \(Y_{s_{\delta t}}^{n}-Y_{s}^{n}\), we have
$$\begin{aligned} \mathbb{E}\sup_{t\in [0,t_{1}]} \bigl\vert Y_{t}^{n}-z_{t}^{n} \bigr\vert ^{2p} \leqslant & C \int _{0}^{t_{1}}\mathbb{E} \bigl\vert Y_{s_{\delta t}}^{n}-z_{s}^{n} \bigr\vert ^{2p}\,ds \\ \leqslant & C \int _{0}^{t_{1}}\mathbb{E} \bigl\vert Y_{s_{\delta t}}^{n}-Y_{s} ^{n} \bigr\vert ^{2p}\,ds+C \int _{0}^{t_{1}}\mathbb{E}\sup_{t\in [0,s]} \bigl\vert Y_{t}^{n}-z _{t}^{n} \bigr\vert ^{2p}\,ds \\ \leqslant & C^{\prime }\delta t^{p}+C^{\prime \prime }\mathbb{E} \int _{0}^{t_{1}}\sup_{t\in [0,s]} \bigl\vert Y_{t}^{n}-z_{t}^{n} \bigr\vert ^{2p}\,ds, \end{aligned}$$
which, by the continuous Gronwall inequality, yields the result. □

Lemma 3.7

There exists a positive constant \(K_{6}\) such that, for all \(p>1\) and \(0\leqslant n\leqslant \lfloor \frac{T}{\Delta t}\rfloor \),
$$ \mathbb{E} \bigl\vert \bar{a}(X_{n})-A(X_{n}) \bigr\vert ^{2p}\leqslant K_{6} \biggl(\frac{- \log _{\alpha }(M\delta t)+1}{M\delta t}+ \frac{1}{M}+\delta t^{p} \biggr), $$
(50)
where \(K_{6}\) is independent of \((M, \delta t)\).

Proof

By definition, we have
$$\begin{aligned} \mathbb{E} \bigl\vert \bar{a}(X_{n})-A(X_{n}) \bigr\vert ^{2p} =&\mathbb{E} \Biggl\vert \int _{\mathbb{R}^{m}} a(X_{n},y)\mu ^{X_{n}}(dy)- \frac{1}{M}\sum_{m=1} ^{M}a \bigl(X_{n},Y_{m}^{n}\bigr) \Biggr\vert ^{2p} \\ \leqslant & CI_{1}^{n}+CI_{2}^{n}, \end{aligned}$$
(51)
where
$$\begin{aligned}& I_{1}^{n}:=\mathbb{E} \Biggl\vert \int _{\mathbb{R}^{m}} a(X_{n},y)\mu ^{X_{n}}(dy)- \frac{1}{M}\sum_{m=1}^{M}a \bigl(X_{n},z_{m}^{n}\bigr) \Biggr\vert ^{2p}, \\& I_{2}^{n}:=\mathbb{E} \Biggl\vert \frac{1}{M}\sum _{m=1}^{M}a\bigl(X_{n},z_{m}^{n} \bigr)- \frac{1}{M}\sum_{m=1}^{M}a \bigl(X_{n},Y_{m}^{n}\bigr) \Biggr\vert ^{2p}, \end{aligned}$$
where \(z_{t}^{n}\) is the family of processes defined by (9). Here \(I_{1}^{n}\) is the difference between the ensemble average of \(a(X_{n},\cdot )\) with respect to the (exact) invariant measure of \(z_{t}^{n}\) and its empirical average over M equally spaced sample points, while \(I_{2}^{n}\) is the difference between the empirical averages of \(a(X_{n},\cdot )\) over M equally spaced sample points for the process \(z_{t}^{n}\) and for its Euler approximation \(Y_{m}^{n}\).
The estimate of \(I_{1}^{n}\) is given by Lemma 3.5:
$$\begin{aligned} I_{1}^{n} =&\mathbb{E} \Biggl\vert \int _{\mathbb{R}^{m}} a(X_{n},y)\mu ^{X _{n}}(dy)- \frac{1}{M}\sum_{m=1}^{M}a \bigl(X_{n},z_{m}^{n}\bigr) \Biggr\vert ^{2p} \\ \leqslant &K_{5} \biggl[\frac{-\log _{\alpha }(M\delta t)+1}{M\delta t}+ \frac{1}{M} \biggr]. \end{aligned}$$
(52)
Next we estimate \(I_{2}^{n}\) using the Lipschitz condition in (H1) and Jensen’s inequality:
$$\begin{aligned} I_{2}^{n} =&\mathbb{E} \Biggl\vert \frac{1}{M}\sum _{m=1}^{M}a\bigl(X_{n},z_{m} ^{n}\bigr)-\frac{1}{M}\sum_{m=1}^{M}a \bigl(X_{n},Y_{m}^{n}\bigr) \Biggr\vert ^{2p} \\ \leqslant &\frac{1}{M}\sum_{m=1}^{M}\mathbb{E} \bigl\vert a\bigl(X_{n},z_{m}^{n}\bigr)-a \bigl(X_{n},Y _{m}^{n}\bigr) \bigr\vert ^{2p} \\ \leqslant &C\max_{m\leqslant M}\mathbb{E} \bigl\vert Y_{m}^{n}-z_{m}^{n} \bigr\vert ^{2p}. \end{aligned}$$
Using Lemma 3.6, we obtain
$$ I_{2}^{n}\leqslant CK_{8}\delta t^{p}. $$
(53)
Combining (51)–(53), we get
$$ \mathbb{E} \bigl\vert \bar{a}(X_{n})-A(X_{n}) \bigr\vert ^{2p}\leqslant K_{6} \biggl[\frac{- \log _{\alpha }(M\delta t)+1}{M\delta t}+ \frac{1}{M}+\delta t^{p} \biggr], $$
which is uniform in \(n\leqslant T/\Delta t\). □

Finally, we estimate the difference between the process \(X_{n}\) and the auxiliary process \(\bar{X}_{n}\).

Lemma 3.8

There exist positive constants \(\Delta t^{\ast }\) and \(K_{7}\) such that, for \(p>1\) and \(0<\Delta t\leqslant \Delta t^{\ast }\),
$$ \mathbb{E}\sup_{0\leqslant n\leqslant \lfloor T/\Delta t\rfloor } \vert X _{n}- \bar{X}_{n} \vert ^{2p}\leqslant K_{7} \biggl( \frac{-\log _{\alpha }(M \delta t)+1}{M\delta t}+\frac{1}{M}+\delta t^{p} \biggr), $$
(54)
where \(K_{7}\) is independent of \((M, \delta t, \Delta t)\).

Proof

Set \(E_{n}=\mathbb{E}\sup_{l\leqslant n}|\bar{X}_{l}-X_{l}|^{2p}\). Then
$$\begin{aligned} E_{n} =&\mathbb{E}\sup_{l\leqslant n} \Biggl\vert \sum _{i=0}^{l-1}\bigl[\bar{a}( \bar{X}_{i})-A(X_{i})\bigr]\Delta t+\sum _{i=0}^{l-1}\bigl[b(\bar{X}_{i})-b(X _{i})\bigr]\Delta W_{i}+\sum_{i=0}^{l-1} \bigl[c(\bar{X}_{i})-c(X_{i})\bigr]\Delta P _{i} \Biggr\vert ^{2p} \\ =&\mathbb{E}\sup_{l\leqslant n} \Biggl\vert \sum _{i=0}^{l-1}\bigl[\bar{a}( \bar{X}_{i})-A(X_{i}) \bigr]\Delta t+\sum_{i=0}^{l-1}\bigl[b( \bar{X}_{i})-b(X _{i})\bigr]\Delta W_{i}+\sum _{i=0}^{l-1}\bigl[c(\bar{X}_{i})-c(X_{i}) \bigr]\Delta \tilde{P}_{i} \\ &{}+\lambda _{1}\sum_{i=0}^{l-1} \bigl[c(\bar{X}_{i})-c(X_{i})\bigr]\Delta t \Biggr\vert ^{2p} \\ \leqslant & C\mathbb{E}\sup_{l\leqslant n} \Biggl\vert \sum _{i=0}^{l-1}\bigl[ \bar{a}(\bar{X}_{i})-A(X_{i}) \bigr]\Delta t \Biggr\vert ^{2p}+C\mathbb{E} \sup_{l\leqslant n} \Biggl\vert \sum_{i=0}^{l-1}\bigl[b( \bar{X}_{i})-b(X_{i})\bigr] \Delta W_{i} \Biggr\vert ^{2p} \\ &{}+C\mathbb{E}\sup_{l\leqslant n} \Biggl\vert \sum _{i=0}^{l-1}\bigl[c(\bar{X}_{i})-c(X _{i})\bigr]\Delta \tilde{P}_{i} \Biggr\vert ^{2p}+C\mathbb{E}\sup_{l\leqslant n} \Biggl\vert \lambda _{1}\sum_{i=0}^{l-1}\bigl[c( \bar{X}_{i})-c(X_{i})\bigr]\Delta t \Biggr\vert ^{2p}. \end{aligned}$$
We split the first term on the right-hand side:
$$\begin{aligned} E_{n} \leqslant & C\mathbb{E}\sup_{l\leqslant n} \Biggl\vert \sum_{i=0}^{l-1}\bigl[ \bar{a}( \bar{X}_{i})-\bar{a}(X_{i})\bigr]\Delta t \Biggr\vert ^{2p}+C\mathbb{E} \sup_{l\leqslant n} \Biggl\vert \sum _{i=0}^{l-1}\bigl[\bar{a}(X_{i})-A(X_{i}) \bigr] \Delta t \Biggr\vert ^{2p} \\ &{}+C\mathbb{E}\sup_{l\leqslant n} \Biggl\vert \sum _{i=0}^{l-1}\bigl[b(\bar{X}_{i})-b(X _{i})\bigr]\Delta W_{i} \Biggr\vert ^{2p}+C \mathbb{E}\sup_{l\leqslant n} \Biggl\vert \sum _{i=0}^{l-1}\bigl[c(\bar{X}_{i})-c(X_{i}) \bigr]\Delta \tilde{P}_{i} \Biggr\vert ^{2p} \\ &{}+C\mathbb{E}\sup_{l\leqslant n} \Biggl\vert \lambda _{1}\sum_{i=0}^{l-1}\bigl[c( \bar{X}_{i})-c(X_{i})\bigr]\Delta t \Biggr\vert ^{2p}. \end{aligned}$$
(55)
The first and fifth terms on the right-hand side are estimated using Hölder’s inequality (with \(n\Delta t\leqslant T\)) and the Lipschitz continuity of ā and c:
$$\begin{aligned} \mathbb{E}\sup_{l\leqslant n} \Biggl\vert \sum _{i=0}^{l-1}\bigl[\bar{a}(\bar{X} _{i})- \bar{a}(X_{i})\bigr]\Delta t \Biggr\vert ^{2p} \leqslant & C\Delta t\sum_{i=0}^{n-1} \mathbb{E} \bigl\vert \bar{a}(\bar{X}_{i})-\bar{a}(X_{i}) \bigr\vert ^{2p} \\ \leqslant & C\sum_{i=0}^{n-1}\mathbb{E} \vert \bar{X}_{i}-X_{i} \vert ^{2p} \Delta t \leqslant C\sum_{i=0}^{n-1}E_{i} \Delta t \end{aligned}$$
(56)
and
$$\begin{aligned} \mathbb{E}\sup_{l\leqslant n} \Biggl\vert \lambda _{1}\sum _{i=0}^{l-1}\bigl[c( \bar{X}_{i})-c(X_{i}) \bigr]\Delta t \Biggr\vert ^{2p} \leqslant & C\Delta t\sum_{i=0}^{n-1} \mathbb{E} \bigl\vert c(\bar{X}_{i})-c(X_{i}) \bigr\vert ^{2p} \\ \leqslant & C\sum_{i=0}^{n-1}\mathbb{E} \vert \bar{X}_{i}-X_{i} \vert ^{2p} \Delta t \leqslant C\sum_{i=0}^{n-1}E_{i} \Delta t. \end{aligned}$$
(57)
Now, using Burkholder’s inequality and the Lipschitz condition in (H1) on the two martingale terms, we get
$$\begin{aligned} \mathbb{E}\sup_{l\leqslant n} \Biggl\vert \sum _{i=0}^{l-1}\bigl[c(\bar{X}_{i})-c(X _{i})\bigr]\Delta \tilde{P}_{i} \Biggr\vert ^{2p} \leqslant & C\mathbb{E} \Biggl\vert \sum _{i=0}^{n-1}\lambda _{1}\Delta t\bigl[c( \bar{X}_{i})-c(X_{i})\bigr]^{2} \Biggr\vert ^{p} \\ \leqslant & C\sum_{i=0}^{n-1}\mathbb{E} \vert \bar{X}_{i}-X_{i} \vert ^{2p} \Delta t \leqslant C\sum_{i=0}^{n-1}E_{i} \Delta t \end{aligned}$$
(58)
and
$$\begin{aligned} \mathbb{E}\sup_{l\leqslant n} \Biggl\vert \sum _{i=0}^{l-1}\bigl[b(\bar{X}_{i})-b(X_{i})\bigr]\Delta W_{i} \Biggr\vert ^{2p} \leqslant & C\mathbb{E} \Biggl\vert \sum _{i=0}^{n-1}\Delta t\bigl[b(\bar{X}_{i})-b(X_{i}) \bigr]^{2} \Biggr\vert ^{p} \\ \leqslant & C\sum_{i=0}^{n-1}\mathbb{E} \vert \bar{X}_{i}-X_{i} \vert ^{2p} \Delta t \leqslant C\sum_{i=0}^{n-1}E_{i} \Delta t. \end{aligned}$$
(59)
The second term on the right-hand side can be bounded as follows:
$$ \mathbb{E}\sup_{l\leqslant n} \Biggl\vert \sum _{i=0}^{l-1}\bigl[\bar{a}(X_{i})-A(X _{i})\bigr]\Delta t \Biggr\vert ^{2p}\leqslant C\max _{i< n}\mathbb{E} \bigl\vert \bar{a}(X _{i})-A(X_{i}) \bigr\vert ^{2p}. $$
(60)
Combining (55)–(60) with Lemma 3.7, we obtain the discrete Gronwall-type inequality
$$ E_{n}\leqslant C\sum_{i=0}^{n-1}E_{i} \Delta t+CK_{6} \biggl(\frac{- \log _{\alpha }(M\delta t)+1}{M\delta t}+\frac{1}{M}+\delta t^{p} \biggr), $$
with the initial condition \(E_{0}=0\). It follows that, for sufficiently small Δt,
$$\begin{aligned} E_{n} \leqslant & CK_{6} \biggl(\frac{-\log _{\alpha }(M\delta t)+1}{M \delta t}+ \frac{1}{M}+\delta t^{p} \biggr) (1+C\Delta t)^{n} \\ \leqslant & CK_{6} \biggl(\frac{-\log _{\alpha }(M\delta t)+1}{M\delta t}+ \frac{1}{M}+ \delta t^{p} \biggr)e^{CT}. \end{aligned}$$
This estimate proves the lemma with \(K_{7}=CK_{6}e^{CT}\). □
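The discrete Gronwall step used at the end of this proof can be checked on toy numbers: if \(E_{n}\leqslant C\Delta t\sum_{i<n}E_{i}+R\) with \(E_{0}=0\), then \(E_{n}\leqslant R(1+C\Delta t)^{n}\leqslant Re^{CT}\). A minimal sketch with illustrative constants:

```python
# Saturate the recursion E_n = C*Dt*sum_{i<n} E_i + R (worst case of the
# inequality) and compare against the Gronwall bounds. Constants are
# illustrative, not taken from the paper.
C, Dt, R, N = 3.0, 1e-3, 0.7, 1000     # so T = N * Dt = 1
E = [0.0]
for n in range(1, N + 1):
    E.append(C * Dt * sum(E) + R)      # equality = worst admissible case

bound = R * (1.0 + C * Dt) ** N        # discrete Gronwall bound
worst = max(E)                         # largest E_n actually attained
```

By induction the saturated recursion gives exactly \(E_{n}=R(1+C\Delta t)^{n-1}\), so the computed values sit below both \(R(1+C\Delta t)^{N}\) and \(Re^{CT}\).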

4 Main result

We are now ready to state and prove our main theorem.

Theorem 4.1

Suppose that conditions (H1)–(H5) hold. Then, for \(p>1\), there exist positive constants \(K^{\prime }\) and \(K^{\prime \prime }\) such that
$$ \mathbb{E}\sup_{0\leqslant n\leqslant \lfloor T/\Delta t\rfloor } \bigl\vert X _{n}- \bar{x}(t_{n}) \bigr\vert ^{2p}\leqslant K^{\prime } \Delta t^{p}+K^{\prime \prime } \biggl(\frac{-\log _{\alpha }(M\delta t)+1}{M\delta t}+ \frac{1}{M}+\delta t^{p} \biggr), $$
where \(K^{\prime }\), \(K^{\prime \prime }\) are independent of \((\Delta t, \delta t, M)\).

Proof

We begin the proof by adding and subtracting the term \(\bar{X}_{n}\):
$$\begin{aligned} \mathbb{E}\sup_{0\leqslant n\leqslant \lfloor T/\Delta t\rfloor } \bigl\vert X _{n}- \bar{x}(t_{n}) \bigr\vert ^{2p} \leqslant &C\mathbb{E} \sup _{0\leqslant n\leqslant \lfloor T/\Delta t\rfloor } \vert X_{n}-\bar{X} _{n} \vert ^{2p}+C\mathbb{E} \sup_{0\leqslant n\leqslant \lfloor T/\Delta t\rfloor } \bigl\vert \bar{X}_{n}- \bar{x}(t_{n}) \bigr\vert ^{2p} \\ \leqslant &K^{\prime }\Delta t^{p}+K^{\prime \prime } \biggl( \frac{- \log _{\alpha }(M\delta t)+1}{M\delta t}+\frac{1}{M}+\delta t^{p} \biggr), \end{aligned}$$
where we have used the results of Lemmas 2.4 and 3.8. □
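The multiscale integration scheme analysed above can be sketched end to end: each macro step advances the slow variable with the estimated drift \(A(X_{n})=\frac{1}{M}\sum_{m=1}^{M}a(X_{n},Y_{m}^{n})\), where \(Y^{n}\) is a micro Euler path of the fast equation with X frozen. All coefficient and parameter choices below are illustrative assumptions, not the paper’s concrete system; the fast jumps are switched off purely for simplicity.

```python
import numpy as np

rng = np.random.default_rng(4)

a = lambda x, y: -x + y        # slow drift (illustrative)
b = lambda x: 0.2              # slow diffusion
c = lambda x: 0.05             # slow jump coefficient
f = lambda x, y: -(y - x)      # fast drift, mean-reverting
g = lambda x, y: 0.3           # fast diffusion
h = lambda x, y: 0.0           # fast jumps off for simplicity
lam1, lam2 = 1.0, 2.0
Dt, dt, M, T = 1e-2, 1e-3, 500, 1.0

def estimated_drift(x):
    """Micro-solver time average A(x) approximating abar(x)."""
    y, acc = x, 0.0
    for _ in range(M):
        dW = rng.normal(0.0, np.sqrt(dt))
        dN = rng.poisson(lam2 * dt)
        y = y + f(x, y) * dt + g(x, y) * dW + h(x, y) * dN
        acc += a(x, y)
    return acc / M

X = 1.0
for _ in range(int(T / Dt)):   # macro-solver: Euler step for the slow variable
    dB = rng.normal(0.0, np.sqrt(Dt))
    dP = rng.poisson(lam1 * Dt)
    X = X + estimated_drift(X) * Dt + b(X) * dB + c(X) * dP
```

Theorem 4.1 says that, in this regime, the \(L^{2p}\) error of such a scheme against the averaged dynamics is controlled by \(\Delta t^{p}\), the sampling error \((-\log _{\alpha }(M\delta t)+1)/(M\delta t)+1/M\), and the micro-discretization error \(\delta t^{p}\).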

5 Conclusions

In this paper, the \(L^{p}\) (\(p>2\))-strong convergence of the multiscale integration scheme has been established for two-time-scale jump-diffusion systems; the main result follows from Lemmas 2.4 and 3.8. This extends the results in [2, 9] in two directions. First, we provide a numerical method realizing the \(L^{p}\) (\(p>2\)) averaging principle of [2]; second, whereas [9] studied only the \(L^{2}\) convergence of the multiscale integration scheme, we extend that result to the \(L^{p}\) (\(p>2\)) case.

Declarations

Acknowledgements

The first author is very grateful to Associate Professor Jie Xu for his encouragement and useful discussions.

Availability of data and materials

Not applicable.

Funding

The author acknowledges the support provided by the NSF of China No. U1504620 and the Youth Science Foundation of Henan Normal University Grant No. 2014QK02.

Authors’ contributions

The authors declare that the study was realized in collaboration with the same responsibility. All authors read and approved the final manuscript.

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
College of Mathematics and Information Science, Henan Normal University, Xinxiang, P.R. China

References

  1. Givon, D.: Strong convergence rate for two-time-scale jump-diffusion stochastic differential systems. Multiscale Model. Simul. 6(2), 577–594 (2007)
  2. Xu, J., Miao, Y.: \(L^{p}\) (\(p>2\))-strong convergence of an averaging principle for two-time-scales jump-diffusion stochastic differential equations. Nonlinear Anal. Hybrid Syst. 18, 33–47 (2015)
  3. Khasminskii, R.Z.: On the principle of averaging the Itô’s stochastic differential equations. Kybernetika 4, 260–279 (1968) (in Russian)
  4. E, W., Liu, D., Vanden-Eijnden, E.: Analysis of multiscale methods for stochastic differential equations. Commun. Pure Appl. Math. 58(11), 1544–1585 (2005)
  5. Liu, D.: Strong convergence of principle of averaging for multiscale stochastic dynamical systems. Commun. Math. Sci. 8(4), 999–1020 (2010)
  6. Li, Z., Yan, L.: Stochastic averaging for two-time-scale stochastic partial differential equations with fractional Brownian motion. Nonlinear Anal. Hybrid Syst. 31, 317–333 (2019)
  7. Vanden-Eijnden, E.: Numerical techniques for multi-scale dynamical systems with stochastic effects. Commun. Math. Sci. 1(2), 385–391 (2003)
  8. Givon, D., Kevrekidis, I.G., Kupferman, R.: Strong convergence of projective integration schemes for singularly perturbed stochastic differential systems. Commun. Math. Sci. 4(4), 707–729 (2006)
  9. Givon, D., Kevrekidis, I.G.: Multiscale integration schemes for jump-diffusion systems. Multiscale Model. Simul. 7(2), 495–516 (2008)
  10. Liu, D.: Analysis of multiscale methods for stochastic dynamical systems with multiple time scales. Multiscale Model. Simul. 8(3), 944–964 (2010)
  11. Cerrai, S., Freidlin, M.I.: Averaging principle for a class of stochastic reaction-diffusion equations. Probab. Theory Relat. Fields 144(1–2), 147–177 (2009)
  12. Protter, P.: Stochastic Integration and Differential Equations, 2nd edn. Springer, Berlin (2004)

Copyright

© The Author(s) 2019
