
Equivalence of the mean square stability between the partially truncated Euler–Maruyama method and stochastic differential equations with super-linear growing coefficients

Abstract

For stochastic differential equations (SDEs) whose drift and diffusion coefficients can grow super-linearly, the equivalence of the asymptotic mean square stability between the underlying SDEs and the partially truncated Euler–Maruyama method is studied. Using the finite time convergence as a bridge, a twofold result is proved. More precisely, the mean square stability of the SDEs implies that of the partially truncated Euler–Maruyama method, and the mean square stability of the partially truncated Euler–Maruyama method implies that of the SDEs, provided the step size is carefully chosen.

1 Introduction

In [9], the authors initiated the study of the equivalence of stability between the underlying equations and their numerical methods for stochastic differential equations (SDEs). More precisely, consider the SDE of the Itô type

$$ dy(t) = \mu\bigl(y(t)\bigr)\,dt + \sigma\bigl(y(t)\bigr) \,d B(t) $$

with the initial value \(y(0) = y_{0}\) satisfying \(\mathbb {E}|y_{0}|^{2} < \infty\). One says that the solution is mean square stable if there exist positive constants \(K_{s}\) and \(\lambda_{s}\) such that

$$ \mathbb {E}\bigl\vert y(t) \bigr\vert ^{2} \leq K_{s} \mathbb {E}\bigl\vert y(0) \bigr\vert ^{2} e^{-\lambda_{s} t}. $$
(1.1)

Denote some numerical approximations to \(y(t)\) by \(x_{\Delta}(t)\) with \(x_{\Delta}(0) = y(0)\). One claims that the numerical solution is mean square stable if there exist positive constants \(K_{n}\) and \(\lambda_{n}\) such that

$$ \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{2} \leq K_{n} \mathbb {E}\bigl\vert x_{\Delta}(0) \bigr\vert ^{2} e^{-\lambda_{n} t}. $$
(1.2)
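For instance, for the scalar linear SDE \(dy(t) = \mu y(t)\,dt + \sigma y(t)\,dB(t)\) with real constants μ and σ (the standard test equation, recalled here purely as an illustration), the Itô formula gives

$$ \mathbb {E}\bigl\vert y(t) \bigr\vert ^{2} = \mathbb {E}\bigl\vert y(0) \bigr\vert ^{2} e^{(2\mu+ \sigma^{2})t}, $$

so (1.1) holds with \(K_{s} = 1\) and \(\lambda_{s} = -(2\mu+ \sigma^{2})\) exactly when \(2\mu+ \sigma^{2} < 0\).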

The authors in [9] proved a very general “if and only if” theorem between (1.1) and (1.2). The theorem is twofold:

(A) the mean square stability of the SDE implies the mean square stability of the numerical method;

(B) the mean square stability of the numerical method implies the mean square stability of the SDE if the step size Δ of the numerical method is carefully chosen.

By saying that the theorem is very general, one means that the only conditions required are the second moment boundedness of the numerical method and the finite time convergence of the numerical method to the SDE (see the details in Sect. 3); the structure of the numerical method does not need to be specified.

Since then, many works have been devoted to the equivalence of the mean square stability between different types of SDEs and their numerical methods. The author in [18] investigated stochastic differential delay equations (SDDEs) and the Euler–Maruyama (EM) method. SDDEs with Poisson jumps and Markovian switching and the semi-implicit Euler method were studied in [28]. The authors in [17] analyzed neutral delayed stochastic differential equations and the EM method.

It should be pointed out that, although very general results are obtained in the papers above for different types of SDEs and numerical methods, the global Lipschitz condition is always imposed on the drift and diffusion coefficients when the theorems are applied to specific numerical methods. Meanwhile, SDEs whose coefficients do not obey the global Lipschitz condition have recently been applied extensively in areas such as biology, finance, and epidemiology [1, 4, 19, 23]. Therefore, the first motivation of this paper is as follows.

(M1) For SDEs whose coefficients do not satisfy the global Lipschitz condition, can we find a numerical method that shares the mean square stability with the underlying SDEs, i.e., such that both (A) and (B) hold?

Actually, many interesting works have been devoted to (A), i.e., given that the SDE is stable under certain conditions, some numerical method can reproduce this stability. We mention only some of the works here [2, 3, 7, 10, 11, 14, 22, 26, 27] and refer the readers to the references therein. It is not hard to observe that, when the coefficients of the SDE do not satisfy the global Lipschitz condition, the classical EM method is always abandoned and implicit methods are adopted as alternatives. This phenomenon is explained in [12], where the authors proved that the classical EM method diverges from the SDE if either the drift or the diffusion coefficient grows super-linearly.

However, due to advantages such as their simple structure and lower computational cost per iteration (unlike implicit methods, where a non-linear system of equations needs to be solved in each iteration) [8], explicit Euler-type methods are still attracting a lot of attention. Therefore, in the past several years some modified explicit Euler methods have been developed, such as the tamed Euler method [13, 24, 25, 29] and the truncated EM method [6, 15, 16, 20, 21]. This bloom of explicit methods brings the second motivation of this paper, as follows.

(M2) Can we use some explicit method to answer (M1), i.e., can we find explicit methods for SDEs with super-linearly growing coefficients that share the mean square stability with the underlying SDEs?

Bearing (M1) and (M2) in mind, in this paper we investigate whether the partially truncated EM method shares the mean square stability with the SDEs when both the drift and diffusion coefficients can grow super-linearly. Using the finite time convergence as the bridge, we prove the “if and only if” theorem for the partially truncated EM method. More precisely, we prove that

(a) the mean square stability of the SDE implies the mean square stability of the partially truncated EM method;

(b) the mean square stability of the partially truncated EM method implies the mean square stability of the SDE if the step size Δ of the numerical method is carefully chosen.

To the best of our knowledge, few works have dealt with (B) using explicit methods for SDEs with super-linear coefficients, though some works have studied (A) using explicit methods [6, 29]. As pointed out in [9], (B) is sometimes the more interesting and important direction: if (B) is true for some numerical method, then by carefully conducting numerical simulations one can determine whether the SDE is stable without resorting to the Lyapunov technique.

This paper is organized as follows. In Sect. 2, a general theorem that guarantees the equivalence of the stability between the SDEs and their numerical methods is provided, which is a slight generalization of Theorem 2.6 in [9]. Our main results for the partially truncated EM method are presented in Sect. 3. Section 4 concludes this paper and mentions some future research.

2 A general theorem

Throughout this paper, unless otherwise specified, the following notations are used. The transpose of a vector or matrix A is denoted by \(A^{T}\). \(|y|\) is the Euclidean norm if \(y\in \mathbb {R}^{d}\). If A is a matrix, we let \(|A| = \sqrt{ \operatorname{trace}(A^{T}A)}\) be its trace norm. For two real numbers α and β, we use \(\alpha \vee \beta=\max(\alpha , \beta)\) and \(\alpha \wedge \beta= \min(\alpha, \beta)\). If D is a set, its indicator function is denoted by \(I_{D}\), namely \(I_{D}(x)=1\) if \(x\in D\) and 0 otherwise. Moreover, let \((\Omega, \mathcal{F}, \mathbb {P})\) be a complete probability space with a filtration \(\{\mathcal{F}_{t} \}_{t \ge0}\) satisfying the usual conditions (that is, it is right continuous and increasing while \(\mathcal{F}_{0}\) contains all \(\mathbb {P}\)-null sets), and let \(\mathbb {E}\) denote the expectation corresponding to \(\mathbb {P}\). Let \(B(t)\) be an m-dimensional Brownian motion defined on the space.

In this paper, we study stochastic differential equations of the Itô type

$$ dy(t) = \mu\bigl(y(t)\bigr)\,dt + \sigma\bigl(y(t)\bigr) \,d B(t) $$
(2.1)

with the initial value \(y(0) = y_{0}\), where

$$ \mu: \mathbb {R}^{d} \to \mathbb {R}^{d} \quad\hbox{and}\quad \sigma: \mathbb {R}^{d} \to \mathbb {R}^{d\times m}. $$

Theorem 2.1

Assume that, for all sufficiently small step sizes Δ, a numerical method applied to (2.1) with initial value \(x_{\Delta}(0) = y(0)=y_{0}\) satisfies:

(I) for any \(T > 0\),

    $$ \sup_{0 \leq t \leq T} \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{2} < C(y_{0},T), $$

    where \(C(y_{0},T)\) depends on \(y_{0}\) and T, but not on Δ;

(II) there exists a strictly increasing continuous function \(\alpha(\Delta)\) with \(\alpha(0) = 0\) such that

    $$ \sup_{0 \leq t \leq T} \mathbb {E}\bigl\vert x_{\Delta}(t) - y(t) \bigr\vert ^{2} \leq \Bigl( \sup_{0 \leq t \leq T} \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{2} \Bigr) C_{T} \alpha( \Delta), $$

    where \(C_{T}\) depends on T but not on \(y_{0}\) and Δ.

Then the SDE is mean square exponentially stable if and only if there exists \(\Delta> 0\) such that the numerical method is mean square exponentially stable with rate constant \(\lambda_{n}\), growth constant \(K_{n}\), step size Δ, and the global error constant \(C_{T}\) with \(T:= 1 + (4 \log K_{n})/\lambda_{n}\) satisfying

$$ C_{2T} \bigl( \alpha(\Delta) + \sqrt{\alpha(\Delta)} \bigr)e^{\lambda_{n} T} + 1 + \sqrt{\alpha(\Delta)} \leq e^{(1/4)\lambda _{n} T} \quad\textit{and}\quad C_{T} \Delta\leq1. $$
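To illustrate how this step-size condition may be used in practice, the following is a minimal Python sketch (our own illustration, not part of [9]) that checks the two inequalities above for a candidate Δ; it assumes \(\alpha(\Delta) = \Delta^{1/2}\) and assumes that numerical values of the error constants \(C_{T}\) and \(C_{2T}\) are available.

```python
import math

# Hypothetical helper: check the step-size condition of Theorem 2.1.
# K_n and lam_n are the growth and rate constants of the numerical method;
# C_T and C_2T are the error constants over [0, T] and [0, 2T];
# alpha is the convergence-rate function, here alpha(D) = sqrt(D) by default.
def step_size_admissible(delta, K_n, lam_n, C_T, C_2T,
                         alpha=lambda d: math.sqrt(d)):
    T = 1.0 + 4.0 * math.log(K_n) / lam_n
    a = alpha(delta)
    cond1 = (C_2T * (a + math.sqrt(a)) * math.exp(lam_n * T)
             + 1.0 + math.sqrt(a)) <= math.exp(0.25 * lam_n * T)
    cond2 = C_T * delta <= 1.0
    return cond1 and cond2

# Example: halve a dyadic step size until the condition is met
# (illustrative constants only).
delta = 0.5
while not step_size_admissible(delta, K_n=2.0, lam_n=1.0, C_T=5.0, C_2T=10.0):
    delta /= 2.0
print("admissible step size:", delta)
```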

Remark 2.2

Theorem 2.1 is a general theorem for any numerical method. Compared with Condition 2.3 for Theorem 2.6 in [9], one may notice that a more general function of Δ, i.e., \(\alpha (\Delta)\), is used in Theorem 2.1. This change enables the new theorem to cover numerical methods with different convergence rates. For example, in this paper the partially truncated EM method needs \(\alpha(\Delta) = \Delta^{\epsilon}\) with \(\epsilon\in (0,1)\), which is not covered by Theorem 2.6 in [9].

Remark 2.3

Although Theorem 2.1 could, to some extent, be regarded as a generalization of Theorem 2.6 in [9], the proof proceeds in a similar manner. Therefore, we refer the readers to [9] for the detailed proof. In this paper, we focus on how to fulfill (I) and (II) by using the partially truncated EM method for SDEs with super-linearly growing coefficients.

3 Main results

We start this section by imposing some conditions on the coefficients and introducing the partially truncated EM method in Sect. 3.1. The main results and proofs are presented in Sect. 3.2.

3.1 Partially truncated EM method

We assume that both the drift and diffusion coefficients in (2.1) can be separated into two parts as follows:

$$ \mu(y) = \mu_{1}(y) + \mu_{2}(y) \quad\text{and} \quad\sigma(y) = \sigma_{1}(y) + \sigma_{2}(y). $$

We impose some assumptions on \(\mu_{i}\) and \(\sigma_{i}\) for \(i=1,2\).

Assumption 3.1

Assume that there exist constants \(L_{1} \geq1\) and \(\gamma\geq0\) such that, for any \(x,y \in \mathbb {R}^{d}\),

$$ \bigl\vert \mu_{1} (x) - \mu_{1} (y) \bigr\vert \vee \bigl\vert \sigma_{1} (x) - \sigma_{1} (y) \bigr\vert \leq L_{1} \vert x - y \vert $$

and

$$ \bigl\vert \mu_{2} (x) - \mu_{2} (y) \bigr\vert \vee \bigl\vert \sigma_{2} (x) - \sigma_{2} (y) \bigr\vert \leq L_{1} \bigl( 1 + \vert x \vert ^{\gamma}+ \vert y \vert ^{\gamma}\bigr) \vert x - y \vert . $$

Assumption 3.2

Assume that there is a pair of constants \(\bar{r} > 2\) and \(L_{2}\) such that

$$ {(x - y)^{T} \bigl(\mu_{2} (x) - \mu_{2} (y)\bigr) + \frac{\bar{r}-1}{2} \bigl\vert \sigma_{2} (x) - \sigma_{2} (y) \bigr\vert ^{2} \le L_{2} \vert x - y \vert ^{2}} $$

for all \(x,y\in \mathbb {R}^{d}\).

For the purpose of studying the stability, we require

$$ \mu_{1} (0) = \mu_{2} (0) = \sigma_{1} (0) = \sigma_{2} (0) = 0. $$

Assumption 3.3

Assume that there are constants \(\bar{p} >\bar{r}\) and \(K_{2}>0\) such that

$$ x^{T} \mu_{2}(x) + \frac{\bar{p}-1}{2} \bigl\vert \sigma_{2} (x) \bigr\vert ^{2} \le K_{2} \vert x \vert ^{2} $$

for all \(x\in \mathbb {R}^{d}\).

As pointed out in [6], Assumption 3.3 cannot be deduced from Assumption 3.2 as we need \(\bar {p} >\bar{r}\).

In addition, it can be derived from Assumption 3.1 (together with the condition above) that \(\mu_{1}\) and \(\sigma_{1}\) satisfy the linear growth condition and that \(\mu_{2}\) and \(\sigma_{2}\) satisfy a polynomial growth condition; that is, there exists a constant \(K_{1}>0\) such that, for all \(x\in \mathbb {R}^{d}\),

$$ \bigl\vert \mu_{1}(x) \bigr\vert \vee \bigl\vert \sigma_{1}(x) \bigr\vert \le K_{1} \vert x \vert $$
(3.1)

and

$$ \bigl\vert \mu_{2}(x) \bigr\vert \vee \bigl\vert \sigma_{2}(x) \bigr\vert \le K_{1}\bigl( 1 + \vert x \vert ^{\gamma+1}\bigr). $$
(3.2)

The combination of Assumption 3.3 and (3.1) implies that, for any \(p \in(2,\bar{p})\),

$$ x^{T} \mu(x) + \frac{p-1}{2} \bigl\vert \sigma(x) \bigr\vert ^{2} \le K_{3} \vert x \vert ^{2} $$
(3.3)

for all \(x\in \mathbb {R}^{d}\), where \(K_{3}\) is a constant dependent on \(K_{1}\), \(K_{2}\), p, and \(\bar{p}\).
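Indeed, by (3.1) and the elementary inequality \(|\sigma_{1}(x)+\sigma_{2}(x)|^{2} \leq(1+\varepsilon^{-1})|\sigma_{1}(x)|^{2} + (1+\varepsilon)|\sigma_{2}(x)|^{2}\) with \(\varepsilon= (\bar{p}-p)/(p-1)\),

$$ x^{T} \mu(x) + \frac{p-1}{2} \bigl\vert \sigma(x) \bigr\vert ^{2} \leq \biggl( K_{1} + \frac{(p-1)(1+\varepsilon^{-1})K_{1}^{2}}{2} \biggr) \vert x \vert ^{2} + x^{T}\mu_{2}(x) + \frac{\bar{p}-1}{2} \bigl\vert \sigma_{2}(x) \bigr\vert ^{2}, $$

and Assumption 3.3 bounds the last two terms by \(K_{2}|x|^{2}\).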

It can also be seen that under Assumptions 3.1 and 3.2, for any \(q \in(2,\bar{r})\),

$$ (x - y)^{T} \bigl(\mu(x) - \mu(y)\bigr) + \frac{q-1}{2} \bigl\vert \sigma(x) - \sigma (y) \bigr\vert ^{2} \le L_{3} \vert x - y \vert ^{2}, $$
(3.4)

where \(L_{3}\) is a constant dependent on \(L_{1}\), \(L_{2}\), q, and \(\bar{r}\).

To make the paper self-contained, we provide the definition of the partially truncated EM method here and refer the readers to [5, 6] for the original ideas. Firstly, we choose a strictly increasing continuous function \(\kappa: \mathbb {R}_{+} \rightarrow \mathbb {R}_{+}\) satisfying

$$ \kappa(r) \rightarrow\infty\quad\text{as } r \rightarrow\infty\quad\text{and} \quad\sup _{ \vert x \vert \leq r} \bigl( \bigl\vert \mu_{2}(x) \bigr\vert \vee \bigl\vert \sigma_{2}(x) \bigr\vert \bigr) \leq\kappa(r) \quad\text{for any } r\geq1. $$

It is clear that the inverse function of κ, denoted by \(\kappa^{-1}\), is a strictly increasing continuous function from \([\kappa (0),\infty)\) to \(\mathbb {R}_{+}\).

Next, we choose a strictly decreasing continuous function \(h: (0,1] \rightarrow(0,\infty)\) such that

$$ \lim_{\Delta\rightarrow0} h(\Delta) = \infty\quad\text{and} \quad\Delta ^{1/4}h(\Delta) \leq \vert y_{0} \vert \quad\text{for any } \Delta\in(0,1]. $$
(3.5)

Remark 3.4

Comparing (3.5) with (2.9) in [5], we need \(\Delta^{1/4}h(\Delta)\) to be bounded by the initial value here. It should be noted that this requirement is not hard to satisfy as the initial value \(y_{0}\) is always provided. In addition, such an upper bound is key in the proofs of our results.
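For instance, if \(\mu_{2}(x) = -x^{3}\) and \(\sigma_{2}(x) = x^{2}/2\) (a hypothetical scalar example, which can be checked to satisfy Assumptions 3.1, 3.2, and 3.3), then \(\sup_{|x| \leq r} ( |\mu_{2}(x)| \vee|\sigma_{2}(x)| ) = r^{3}\) for \(r\geq1\), so we may take \(\kappa(r) = r^{3}\) with \(\kappa^{-1}(u) = u^{1/3}\). Moreover, for \(y_{0} \neq0\), the choice \(h(\Delta) = |y_{0}| \Delta^{-1/4}\) is strictly decreasing, tends to infinity as \(\Delta\rightarrow0\), and satisfies \(\Delta^{1/4} h(\Delta) = |y_{0}|\), so (3.5) holds.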

For a given time step \(\Delta\in(0,1] \), we define the truncating function \(\pi_{\Delta}: \mathbb {R}^{d} \rightarrow \mathbb {R}^{d}\) by

$$ \pi_{\Delta}(x) = \bigl( \vert x \vert \wedge\kappa^{-1} \bigl( h(\Delta) \bigr) \bigr) \frac{x}{ \vert x \vert }, $$

where we use the convention \(x/|x| = 0\) when \(x = 0\).

Now, we embed the truncating function into \(\mu_{2}\) and \(\sigma_{2}\) to get

$$ \mu_{2,\Delta}(x) = \mu_{2} \bigl( \pi_{\Delta}(x) \bigr)\quad \text{and} \quad\sigma_{2,\Delta}(x) = \sigma_{2} \bigl( \pi_{\Delta}(x) \bigr) \quad\text{for } x\in \mathbb {R}^{d}. $$

It is easy to see that, for any \(x\in \mathbb {R}^{d}\),

$$ \bigl\vert \mu_{2,\Delta}(x) \bigr\vert \vee \bigl\vert \sigma_{2,\Delta }(x) \bigr\vert \leq\kappa \bigl( \kappa^{-1} \bigl( h(\Delta) \bigr) \bigr) = h(\Delta). $$
(3.6)

Using the notation

$$ \mu_{\Delta}(x) = \mu_{1}(x) + \mu_{2,\Delta}(x) \quad\text{and}\quad \sigma _{\Delta}(x) = \sigma_{1}(x) + \sigma_{2,\Delta}(x), $$

the partially truncated EM method is defined as

$$ x_{\Delta,i+1} = x_{\Delta,i} + \mu_{\Delta}(x_{\Delta,i}) \Delta+ \sigma_{\Delta}(x_{\Delta,i}) \Delta B_{i}\quad \text{with } x_{\Delta,0} = y_{0}, $$
(3.7)

where \(\Delta B_{i} = B((i+1)\Delta) - B(i\Delta)\) is the Brownian motion increment and \(x_{\Delta,i}\) is the numerical approximation to \(y(i\Delta)\) for \(i = 0,1,2,\dots\).

In some cases, it is more convenient to work with the continuous version of the numerical method. Thus, we define the continuous version of (3.7) by

$$ x_{\Delta}(t) = x_{\Delta,0} + \int_{0}^{t} \mu_{\Delta}\bigl( \bar{x}_{\Delta}(s)\bigr) \,ds + \int_{0}^{t} \sigma_{\Delta}\bigl( \bar{x}_{\Delta}(s)\bigr) \,dB(s), $$
(3.8)

where

$$ \bar{x}_{\Delta}(t) = x_{\Delta,i}, \quad\text{when } t \in[i\Delta, (i+1) \Delta). $$
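To close this subsection, we give a minimal simulation sketch of (3.7) in Python for the hypothetical scalar example mentioned after Remark 3.4; all concrete coefficients and the choices of κ and h below are illustrative assumptions rather than part of the method itself.

```python
import numpy as np

# Hypothetical scalar SDE  dy = (-y - y^3) dt + (y/10 + y^2/2) dB, split as
#   mu1(y) = -y,      sigma1(y) = y/10    (globally Lipschitz parts)
#   mu2(y) = -y^3,    sigma2(y) = y^2/2   (super-linearly growing parts).
# For |x| <= r with r >= 1, |mu2(x)| v |sigma2(x)| <= r^3, so kappa(r) = r^3,
# kappa^{-1}(u) = u^(1/3), and h(D) = |y0| * D^(-1/4) satisfies (3.5).

def partially_truncated_em(y0, dt, n_steps, rng):
    """Simulate one path of the partially truncated EM scheme (3.7)."""
    bound = (abs(y0) * dt ** -0.25) ** (1.0 / 3.0)  # kappa^{-1}(h(dt))
    x = np.empty(n_steps + 1)
    x[0] = y0
    for i in range(n_steps):
        xi = x[i]
        trunc = np.clip(xi, -bound, bound)   # pi_Delta(x) in the scalar case
        drift = -xi - trunc ** 3             # mu1(x) + mu2(pi_Delta(x))
        diff = 0.1 * xi + 0.5 * trunc ** 2   # sigma1(x) + sigma2(pi_Delta(x))
        dB = rng.normal(0.0, np.sqrt(dt))    # Brownian increment over one step
        x[i + 1] = xi + drift * dt + diff * dB
    return x

rng = np.random.default_rng(42)
path = partially_truncated_em(y0=2.0, dt=1e-3, n_steps=5000, rng=rng)
print(path[-1])
```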

3.2 (I) and (II) for the partially truncated EM method

To show that the SDE is mean square exponentially stable if and only if the partially truncated EM method is mean square exponentially stable, we need to prove that (I) and (II) in Theorem 2.1 hold for the method.

One may argue that the second moment boundedness and the \(L^{2}\) strong convergence, i.e., requirements (I) and (II) in Theorem 2.1, have already been proved in [5, 6]. But it should be noted that (II) requires the \(L^{2}\) strong error to be linearly related to \(\sup_{0 \leq t \leq T} \mathbb {E}|x_{\Delta}(t)|^{2}\), and this special structure of the error is important to the proof of Theorem 2.1. Looking into [5, 6], one notices that no such structure of the error is provided there.

Therefore, the main task of this part is to prove (II) for the partially truncated EM method by carefully tracking the related constants. We need several lemmas before we proceed to the main theorem.

Firstly, we cite two lemmas from [5]. Roughly speaking, Lemma 3.5 shows that \(\mu_{2,\Delta}\) and \(\sigma_{2,\Delta}\) inherit Assumption 3.3, while Lemma 3.6 shows that \(\mu_{\Delta}\) and \(\sigma_{\Delta}\) inherit Assumption 3.1.

Lemma 3.5

Let Assumption 3.3 hold. Then, for all \(\Delta \in(0,1]\), we have

$$ x^{T} \mu_{2,\Delta}(x) + \frac{\bar{p}-1}{2} \bigl\vert \sigma_{2,\Delta }(x) \bigr\vert ^{2} \le K_{4} \vert x \vert ^{2}, \quad\forall x\in \mathbb {R}^{d}, $$
(3.9)

where \(K_{4} = 2K_{2} (1 \vee [1/\kappa^{-1}(h(1))] )\).

Lemma 3.6

Let Assumption 3.1 hold. Then

$$ \bigl\vert \mu(x)-\mu(y) \bigr\vert \vee \bigl\vert \sigma(x)-\sigma(y) \bigr\vert \le2L_{1}\bigl(1+ \vert x \vert ^{\gamma}+ \vert y \vert ^{\gamma}\bigr) \vert x-y \vert $$
(3.10)

for all \(x,y\in \mathbb {R}^{d}\). Moreover, for any \(\Delta \in(0,1]\),

$$ \bigl\vert \mu_{\Delta}(x)-\mu_{\Delta}(y) \bigr\vert \vee \bigl\vert \sigma_{\Delta}(x)-\sigma _{\Delta}(y) \bigr\vert \le3L_{1}\bigl(1+ \vert x \vert ^{\gamma}+ \vert y \vert ^{\gamma}\bigr) \vert x-y \vert $$
(3.11)

for all \(x,y\in \mathbb {R}^{d}\).

Lemma 3.7

Suppose that (3.5) and Assumptions 3.1, 3.2, and 3.3 hold. Then the partially truncated EM method solution (3.8) satisfies

$$ \sup_{0< \Delta\leq1} \sup_{0 \leq t\leq T} \mathbb {E}\bigl\vert x_{\Delta }(t) \bigr\vert ^{p} < C_{1} \quad\textit{for } p \in(2,\bar{p}], $$

where

$$\begin{aligned} C_{1} ={}& \bigl( \vert y_{0} \vert ^{p} + 2^{p-1} \vert y_{0} \vert ^{p/2}K_{1} \bigl( 1+ \bigl( p(p-2)/8 \bigr)^{p/4} \bigr) T + 2^{p} \vert y_{0} \vert ^{p} T \bigr) \\ & \times \exp \bigl( \bigl( pK_{4} + 2p K_{1} + (p-2) + 2^{p-1} \vert y_{0} \vert ^{p/2}K_{1} \bigl( 1+ \bigl( p(p-2)/8 \bigr)^{p/4} \bigr) \bigr) T \bigr).\end{aligned} $$

Proof

From (3.8), the Itô formula, and the Cauchy–Schwarz inequality \(|x^{T}\sigma|^{2} \leq|x|^{2}|\sigma|^{2}\), we have

$$\begin{aligned}& \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{p} - \bigl\vert x_{\Delta}(0) \bigr\vert ^{p} \\& \quad\leq \mathbb {E}\int_{0}^{t} p \bigl\vert x_{\Delta}(s) \bigr\vert ^{p-2}\\& \qquad{}\times \biggl( x^{T}_{\Delta}(s) \bigl( \mu_{1} \bigl(\bar{x}_{\Delta}(s)\bigr) + \mu_{2,\Delta} \bigl(\bar{x}_{\Delta}(s)\bigr) \bigr) + \frac{p-1}{2} \bigl\vert \sigma_{1}\bigl(\bar{x}_{\Delta}(s)\bigr) + \sigma_{2,\Delta} \bigl(\bar{x}_{\Delta}(s)\bigr) \bigr\vert ^{2} \biggr) \,ds \\& \quad= \mathbb {E}\int_{0}^{t} p \bigl\vert x_{\Delta}(s) \bigr\vert ^{p-2} \\& \qquad{}\times\biggl( \bar{x}^{T}_{\Delta}(s) \bigl( \mu_{1} \bigl(\bar{x}_{\Delta}(s)\bigr) + \mu_{2,\Delta}\bigl(\bar{x}_{\Delta}(s)\bigr) \bigr) + \frac{p-1}{2} \bigl\vert \sigma_{1}\bigl(\bar{x}_{\Delta}(s) \bigr) + \sigma_{2,\Delta}\bigl(\bar{x}_{\Delta}(s)\bigr) \bigr\vert ^{2} \biggr)\,ds \\& \qquad{}+ \mathbb {E}\int_{0}^{t} p \bigl\vert x_{\Delta}(s) \bigr\vert ^{p-2} \bigl( x_{\Delta}(s) - \bar {x}_{\Delta}(s) \bigr)^{T} \bigl( \mu_{1} \bigl( \bar{x}_{\Delta}(s)\bigr) + \mu _{2,\Delta}\bigl(\bar{x}_{\Delta}(s) \bigr) \bigr) \,ds. \end{aligned}$$

By (3.9), we have

$$ \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{p} - \bigl\vert x_{\Delta}(0) \bigr\vert ^{p} \leq J_{1} + J_{2} + J_{3}, $$

where

$$\begin{gathered} J_{1} \leq \mathbb {E}\int_{0}^{t} p K_{4} \bigl\vert x_{\Delta}(s) \bigr\vert ^{p-2} \bigl\vert \bar{x}_{\Delta}(s) \bigr\vert ^{2}\, ds, \\ J_{2} \leq \mathbb {E}\int_{0}^{t} p \bigl\vert x_{\Delta}(s) \bigr\vert ^{p-2} \bigl\vert x_{\Delta}(s) - \bar {x}_{\Delta}(s) \bigr\vert \bigl\vert \mu_{1} \bigl( \bar{x}_{\Delta}(s)\bigr) \bigr\vert \, ds,\end{gathered} $$

and

$$ J_{3} \leq \mathbb {E}\int_{0}^{t} p \bigl\vert x_{\Delta}(s) \bigr\vert ^{p-2} \bigl\vert x_{\Delta}(s) - \bar {x}_{\Delta}(s) \bigr\vert \bigl\vert \mu_{2,\Delta} \bigl( \bar{x}_{\Delta}(s)\bigr) \bigr\vert \,ds. $$

By the Hölder inequality and the Young inequality \(\alpha^{b} \beta ^{1-b} \leq b \alpha+ (1-b) \beta\) for \(\alpha, \beta\geq0 \) and \(b \in(0,1)\), we have

$$\begin{aligned} \mathbb {E}\bigl\vert x_{\Delta}(s) \bigr\vert ^{p-2} \bigl\vert \bar{x}_{\Delta}(s) \bigr\vert ^{2} &\leq \bigl( \mathbb {E}\bigl\vert x_{\Delta}(s) \bigr\vert ^{(p-2)\times\frac{p}{p-2}} \bigr)^{\frac{p-2}{p}} \bigl( \mathbb {E}\bigl\vert \bar{x}_{\Delta}(s) \bigr\vert ^{2\times\frac{p}{2}} \bigr)^{\frac {2}{p}} \\ &\leq \frac{p-2}{p} \mathbb {E}\bigl\vert x_{\Delta}(s) \bigr\vert ^{p} + \frac{2}{p} \mathbb {E}\bigl\vert \bar {x}_{\Delta}(s) \bigr\vert ^{p}.\end{aligned} $$

Then we obtain the estimate of \(J_{1}\)

$$ J_{1} \leq K_{4} \int_{0}^{t} \bigl( (p-2) \mathbb {E}\bigl\vert x_{\Delta}(s) \bigr\vert ^{p} + 2 \mathbb {E}\bigl\vert \bar {x}_{\Delta}(s) \bigr\vert ^{p} \bigr)\, ds. $$

In a similar manner, by (3.1) we have

$$\begin{gathered} \mathbb {E}\bigl( \bigl\vert x_{\Delta}(s) \bigr\vert ^{p-2} \bigl\vert x_{\Delta}(s) - \bar{x}_{\Delta}(s) \bigr\vert \bigl\vert \mu_{1} \bigl(\bar{x}_{\Delta}(s)\bigr) \bigr\vert \bigr) \\ \quad\leq K_{1} \mathbb {E}\bigl( \bigl\vert x_{\Delta}(s) \bigr\vert ^{p-2} \bigl( \bigl\vert x_{\Delta}(s) \bigr\vert + \bigl\vert \bar{x}_{\Delta}(s) \bigr\vert \bigr) \bigl\vert \bar{x}_{\Delta}(s) \bigr\vert \bigr) \\ \quad\leq K_{1} \biggl( \frac{2p-3}{p} \mathbb {E}\bigl\vert x_{\Delta}(s) \bigr\vert ^{p} + \frac{3}{p} \mathbb {E}\bigl\vert \bar{x}_{\Delta}(s) \bigr\vert ^{p} \biggr).\end{gathered} $$

Thus, we have the estimate for \(J_{2}\)

$$ J_{2} \leq p K_{1} \int_{0}^{t} \biggl( \frac{2p-3}{p} \mathbb {E}\bigl\vert x_{\Delta}(s) \bigr\vert ^{p} + \frac{3}{p} \mathbb {E}\bigl\vert \bar{x}_{\Delta}(s) \bigr\vert ^{p} \biggr)\,ds. $$

Now we are going to estimate \(J_{3}\). By the Hölder inequality, the Young inequality, and (3.6), we have

$$\begin{gathered} \mathbb {E}\bigl( \bigl\vert x_{\Delta}(s) \bigr\vert ^{p-2} \bigl\vert x_{\Delta}(s) - \bar{x}_{\Delta}(s) \bigr\vert \bigl\vert \mu_{2,\Delta}\bigl(\bar{x}_{\Delta}(s)\bigr) \bigr\vert \bigr) \\ \quad\leq \frac{p-2}{p} \mathbb {E}\bigl\vert x_{\Delta}(s) \bigr\vert ^{p} + \frac{2}{p} \bigl( h(\Delta) \bigr)^{p/2} \mathbb {E}\bigl\vert x_{\Delta}(s) - \bar{x}_{\Delta}(s) \bigr\vert ^{p/2}.\end{gathered} $$

By the Hölder inequality, the elementary inequality, (3.1), (3.6), and Theorem 7.1 on page 39 of [19], for \(s \in[i\Delta, (i+1)\Delta)\), we have

$$ \mathbb {E}\bigl\vert x_{\Delta}(s) - \bar{x}_{\Delta}(s) \bigr\vert ^{p/2} \leq2^{p-2} \Delta^{p/4} \bigl( \bigl( 1+ \bigl( p(p-2)/8 \bigr)^{p/4} \bigr)K_{1} \mathbb {E}\bigl\vert \bar{x}_{\Delta}(i\Delta) \bigr\vert ^{p/2} + 2 \bigl(h(\Delta) \bigr)^{p/2} \bigr). $$

Combining the two inequalities above and using the fact that \(|\bar {x}_{\Delta}(s)|^{p/2} \leq1 + |\bar{x}_{\Delta}(s)|^{p}\), we have the estimate for \(J_{3}\)

$$\begin{aligned} J_{3} \leq{}& \int_{0}^{t} \bigl( (p-2) \mathbb {E}\bigl\vert x_{\Delta}(s) \bigr\vert ^{p} + 2^{p-1} \bigl( h( \Delta) \bigr)^{p/2} \Delta^{p/4} \\ &\times \bigl[K_{1} \bigl( 1+ \bigl( p(p-2)/8 \bigr)^{p/4} \bigr) \bigl( \mathbb {E}\bigl\vert \bar{x}_{\Delta}(s) \bigr\vert ^{p} + 1 \bigr) + 2 \bigl(h(\Delta)\bigr)^{p/2} \bigr] \bigr) \,ds \\ \leq{}& \int_{0}^{t} \bigl( (p-2) \mathbb {E}\bigl\vert x_{\Delta}(s) \bigr\vert ^{p} + 2^{p-1} \Delta ^{p/8} \vert y_{0} \vert ^{p/2}K_{1} \bigl( 1+ \bigl( p(p-2)/8 \bigr)^{p/4} \bigr) \mathbb {E}\bigl\vert \bar{x}_{\Delta}(s) \bigr\vert ^{p} \bigr) \,ds \\ &+ 2^{p-1} \Delta^{p/8} \vert y_{0} \vert ^{p/2}K_{1} \bigl( 1+ \bigl( p(p-2)/8 \bigr)^{p/4} \bigr) t + 2^{p} \vert y_{0} \vert ^{p} t,\end{aligned} $$

where (3.5) is used. Putting the estimates for \(J_{1}\), \(J_{2}\), and \(J_{3}\) together, we have that

$$\begin{aligned} \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{p} \leq{}& \vert y_{0} \vert ^{p} + 2^{p-1} \vert y_{0} \vert ^{p/2}K_{1} \bigl( 1+ \bigl( p(p-2)/8 \bigr)^{p/4} \bigr) t + 2^{p} \vert y_{0} \vert ^{p} t \\ & + \int_{0}^{t} \Bigl( \bigl[(p-2)K_{4} + (2p-3)K_{1} +(p-2) \bigr] \sup_{0 \leq u \leq s}\mathbb {E}\bigl\vert x_{\Delta}(u) \bigr\vert ^{p} \\ & + \bigl[ 2K_{4} + 3K_{1} + 2^{p-1} \vert y_{0} \vert ^{p/2}K_{1} \bigl( 1+ \bigl( p(p-2)/8 \bigr)^{p/4} \bigr) \bigr] \sup_{0 \leq u \leq s} \mathbb {E}\bigl\vert \bar{x}_{\Delta}(u) \bigr\vert ^{p} \Bigr) \,ds,\end{aligned} $$

where \(\Delta< 1\) is used. Since \(\sup_{0 \leq u \leq s} \mathbb {E}|\bar{x}_{\Delta}(u)|^{p} \leq\sup_{0 \leq u \leq s}\mathbb {E}|x_{\Delta}(u)|^{p}\) and the inequality above holds for any \(t \in[0,T]\) and any \(\Delta\in (0,1]\), the Gronwall inequality gives

$$ \sup_{0< \Delta\leq1} \sup_{0 \leq t\leq T} \mathbb {E}\bigl\vert x_{\Delta }(t) \bigr\vert ^{p} \leq C_{1}, $$

where

$$\begin{aligned} C_{1} ={}& \bigl( \vert y_{0} \vert ^{p} + 2^{p-1} \vert y_{0} \vert ^{p/2}K_{1} \bigl( 1+ \bigl( p(p-2)/8 \bigr)^{p/4} \bigr) T + 2^{p} \vert y_{0} \vert ^{p} T \bigr) \\ & \times \exp \bigl( \bigl( pK_{4} + 2p K_{1} + (p-2) + 2^{p-1} \vert y_{0} \vert ^{p/2}K_{1} \bigl( 1+ \bigl( p(p-2)/8 \bigr)^{p/4} \bigr) \bigr) T \bigr).\end{aligned} $$

 □

The next lemma can be proved in the standard way; we refer the readers to, for example, [19] for details.

Lemma 3.8

Suppose that (3.3) holds. Then the solution to (2.1) satisfies

$$ \sup_{0 \leq t \leq T} \mathbb {E}\bigl\vert y(t) \bigr\vert ^{p} \leq C_{2}, $$

where \(C_{2} = \vert y_{0} \vert ^{p} \exp (pK_{3}T )\).

Lemma 3.9

Suppose that Assumption 3.1 holds. Then

$$ \bigl( \mathbb {E}\bigl\vert x_{\Delta}(t) - \bar{x}_{\Delta}(t) \bigr\vert ^{4} \bigr)^{1/2} \leq C_{3} \Delta^{1/2} \Bigl( \sup_{0 \leq t \leq T} \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{2} \Bigr), $$

where

$$ C_{3} = 98 \bigl(K_{1}^{4} C_{1} + 1 \bigr)^{1/2}. $$

Proof

We see from (3.8) that, for any \(t \in[i\Delta ,(i+1)\Delta)\),

$$\begin{gathered} \mathbb {E}\bigl\vert x_{\Delta}(t) - \bar{x}_{\Delta}(t) \bigr\vert ^{4} \\ \quad\leq 8 \mathbb {E}\biggl\vert \int_{i \Delta}^{t} \bigl( \mu_{1}\bigl(\bar {x}_{\Delta}(s)\bigr) + \mu_{2,\Delta}\bigl(\bar{x}_{\Delta}(s) \bigr) \bigr) \,ds \biggr\vert ^{4} + 8 \mathbb {E}\biggl\vert \int_{i \Delta}^{t} \bigl( \sigma_{1}\bigl( \bar{x}_{\Delta}(s)\bigr) + \sigma_{2,\Delta}\bigl( \bar{x}_{\Delta}(s)\bigr) \bigr) \,dB(s) \biggr\vert ^{4} \\ \quad\leq 8\Delta^{3} \mathbb {E}\int_{i \Delta}^{t} \bigl\vert \mu_{1}\bigl( \bar {x}_{\Delta}(s)\bigr) + \mu_{2,\Delta}\bigl(\bar{x}_{\Delta}(s) \bigr) \bigr\vert ^{4} \,ds \\ \qquad{}+ 288 \Delta \mathbb {E}\int_{i \Delta}^{t} \bigl\vert \sigma_{1} \bigl(\bar{x}_{\Delta}(s)\bigr) + \sigma_{2,\Delta}\bigl( \bar{x}_{\Delta}(s)\bigr) \bigr\vert ^{4} \,ds \\ \quad\leq 64 \Delta\bigl(\Delta^{2} +36\bigr)\\ \qquad{}\times \mathbb {E}\int_{i \Delta}^{t} \bigl( \bigl\vert \mu_{1} \bigl(\bar{x}_{\Delta}(s)\bigr) \bigr\vert ^{4} + \bigl\vert \mu_{2,\Delta }\bigl(\bar{x}_{\Delta}(s)\bigr) \bigr\vert ^{4} + \bigl\vert \sigma_{1}\bigl(\bar{x}_{\Delta}(s) \bigr) \bigr\vert ^{4} + \bigl\vert \sigma _{2,\Delta}\bigl( \bar{x}_{\Delta}(s)\bigr) \bigr\vert ^{4} \bigr)\, ds,\end{gathered} $$

where the Hölder inequality, Theorem 7.1 on page 39 of [19] and the elementary inequality \(|a+b|^{4} \leq8 (|a|^{4} + |b|^{4})\) have been used.

Now, using (3.1) and (3.6), we have

$$\begin{aligned} \mathbb {E}\bigl\vert x_{\Delta}(t) - \bar{x}_{\Delta}(t) \bigr\vert ^{4} &\leq 128 \Delta\bigl(\Delta^{2} +36\bigr) \int_{i \Delta}^{t} \bigl( K_{1}^{4} \mathbb {E}\bigl\vert \bar {x}_{\Delta}(s) \bigr\vert ^{4} + \bigl(h( \Delta)\bigr)^{4} \bigr)\,ds \\ &\leq 128 \Delta^{2} \bigl(\Delta^{2} +36\bigr) \bigl( K_{1}^{4} \mathbb {E}\bigl\vert \bar {x}_{\Delta}(i \Delta) \bigr\vert ^{4} + \bigl(h(\Delta)\bigr)^{4} \bigr) \\ &\leq 128 \Delta^{2} \bigl(\Delta^{2} +36\bigr) \bigl( K_{1}^{4} C_{1} + \bigl(h(\Delta ) \bigr)^{4} \bigr),\end{aligned} $$

where Lemma 3.7 is used for the last inequality.

From (3.5), we derive \(\Delta(h(\Delta))^{4} \leq|y_{0}|^{4}\) and, since we may assume \(h(\Delta) \geq1\) for all \(\Delta\in(0,1]\), also \(\Delta\leq|y_{0}|^{4}\). Then

$$ \mathbb {E}\bigl\vert x_{\Delta}(t) - \bar{x}_{\Delta}(t) \bigr\vert ^{4} \leq256 \Delta\bigl(\Delta^{2} +36\bigr) \bigl(K_{1}^{4} C_{1} + 1\bigr) \vert y_{0} \vert ^{4}. $$

Taking the square root on both sides, we have

$$ \bigl( \mathbb {E}\bigl\vert x_{\Delta}(t) - \bar{x}_{\Delta}(t) \bigr\vert ^{4} \bigr)^{1/2} \leq16 \Delta^{1/2} (1 +36)^{1/2} \bigl(K_{1}^{4} C_{1} + 1 \bigr)^{1/2} \Bigl( \sup_{0 \leq t \leq T} \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{2} \Bigr), $$

where the fact \(|y_{0}|^{2} = |x_{\Delta}(0)|^{2} \leq\sup_{0 \leq t \leq T} \mathbb {E}\vert x_{\Delta}(t) \vert ^{2}\) is used. Since the analysis above holds for any \(t \in[0,T]\), the assertion is obtained. □

For any real number \(R > |y_{0}|\), define the stopping times

$$ \tau_{R} = \inf\bigl\{ t \geq0: \bigl\vert y(t) \bigr\vert \geq R \bigr\} \quad\text{and}\quad \rho_{R} = \inf\bigl\{ t \geq0: \bigl\vert x_{\Delta}(t) \bigr\vert \geq R\bigr\} . $$

Set

$$ \theta_{R} = \tau_{R} \wedge\rho_{R} \quad\text{and}\quad e(t) = y(t) - x_{\Delta}(t). $$

Lemma 3.10

Suppose that Assumptions 3.1, 3.2, and 3.3 hold. Then, for any \(t \in[0,T]\),

$$ \mathbb {E}\bigl\vert y(t \wedge\theta_{R}) - x_{\Delta}(t\wedge \theta_{R}) \bigr\vert ^{2} \leq C_{4} \Delta^{1/2} \Bigl( \sup_{0 \leq t \leq T} \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{2} \Bigr), $$

where

$$ C_{4} = \frac{3^{3/2}(8q - 12)L_{1}^{2} (1+2C_{1})^{1/2} C_{3} T e^{(2L_{3}+1)T}}{q-2}. $$

Proof

For any \(t \in[0,T]\), it is clear that \(t \wedge\theta_{R} \leq\theta _{R}\) a.s. Therefore, we observe from the definitions of \(\mu_{\Delta}\) and \(\sigma_{\Delta}\) that \(\mu_{\Delta}(\bar{x}_{\Delta}(t)) = \mu (\bar{x}_{\Delta}(t))\) and \(\sigma_{\Delta}(\bar{x}_{\Delta}(t)) = \sigma(\bar{x}_{\Delta}(t))\) for \(t \leq\theta_{R}\).

By the Itô formula, we have

$$ \mathbb {E}\bigl\vert e(t \wedge\theta_{R}) \bigr\vert ^{2} = 2 \mathbb {E}\int_{0}^{t \wedge\theta_{R}} \biggl( e^{T}(s) \bigl( \mu \bigl(y(s)\bigr) - \mu\bigl(\bar{x}_{\Delta}(s)\bigr) \bigr) + \frac{1}{2} \bigl\vert \sigma\bigl(y(s)\bigr) - \sigma\bigl( \bar{x}_{\Delta}(s)\bigr) \bigr\vert ^{2} \biggr) \,ds. $$

Now, using the elementary inequality, we can see that

$$\begin{gathered} \bigl\vert \sigma\bigl(y(s)\bigr) - \sigma\bigl(\bar{x}_{\Delta}(s) \bigr) \bigr\vert ^{2} \\ \quad\leq \bigl(1 + (q-2)\bigr) \bigl\vert \sigma\bigl(y(s)\bigr) - \sigma \bigl(x_{\Delta}(s)\bigr) \bigr\vert ^{2} + \bigl(1 + 1/(q-2) \bigr) \bigl\vert \sigma\bigl(x_{\Delta}(s)\bigr) - \sigma\bigl( \bar{x}_{\Delta}(s)\bigr) \bigr\vert ^{2},\end{gathered} $$

where \(q >2\) is used. Thus we obtain

$$ \mathbb {E}\bigl\vert e(t \wedge\theta_{R}) \bigr\vert ^{2} \leq J_{4} + J_{5}, $$

where

$$ J_{4} = 2 \mathbb {E}\int_{0}^{t \wedge\theta_{R}} \biggl( e^{T}(s) \bigl( \mu \bigl(y(s)\bigr) - \mu\bigl(x_{\Delta}(s)\bigr) \bigr) + \frac{q-1}{2} \bigl\vert \sigma\bigl(y(s)\bigr) - \sigma\bigl(x_{\Delta}(s)\bigr) \bigr\vert ^{2} \biggr)\, ds $$

and

$$ J_{5} = 2 \mathbb {E}\int_{0}^{t \wedge\theta_{R}} \biggl( e^{T}(s) \bigl( \mu \bigl(x_{\Delta}(s)\bigr) - \mu\bigl(\bar{x}_{\Delta}(s)\bigr) \bigr) + \frac {q-1}{2(q-2)} \bigl\vert \sigma\bigl(x_{\Delta}(s)\bigr) - \sigma \bigl(\bar{x}_{\Delta}(s)\bigr) \bigr\vert ^{2} \biggr)\, ds. $$

Applying (3.4) yields

$$ J_{4} \leq2 L_{3} \int_{0}^{t} \mathbb {E}\bigl\vert e(s \wedge \theta_{R}) \bigr\vert ^{2}\, ds. $$

Applying the elementary inequality and (3.10) gives

$$\begin{aligned} J_{5} \leq{}& \mathbb {E}\int_{0}^{t \wedge\theta_{R}} \biggl( \bigl\vert e(s) \bigr\vert ^{2} + \bigl\vert \mu\bigl(x_{\Delta}(s)\bigr) - \mu\bigl( \bar{x}_{\Delta}(s)\bigr) \bigr\vert ^{2} + \frac {q-1}{q-2} \bigl\vert \sigma\bigl(x_{\Delta}(s)\bigr) - \sigma\bigl( \bar{x}_{\Delta}(s)\bigr) \bigr\vert ^{2} \biggr)\, ds \\ \leq{}& \int_{0}^{t} \mathbb {E}\bigl\vert e(s \wedge \theta_{R}) \bigr\vert ^{2} \,ds \\ & + 4 \biggl( 1+ \frac{q-1}{q-2} \biggr) L_{1}^{2} \mathbb {E}\int_{0}^{t \wedge \theta_{R}} \bigl( 1 + \bigl\vert x_{\Delta}(s) \bigr\vert ^{\gamma}+ \bigl\vert \bar{x}_{\Delta}(s) \bigr\vert ^{\gamma}\bigr)^{2} \bigl\vert x_{\Delta}(s) - \bar{x}_{\Delta}(s) \bigr\vert ^{2} \,ds \\ \leq{}& \int_{0}^{t} \mathbb {E}\bigl\vert e(s \wedge \theta_{R}) \bigr\vert ^{2} \,ds \\ & + \frac{8q - 12}{q-2} L_{1}^{2} \int_{0}^{T} \bigl( \mathbb {E}\bigl( 1 + \bigl\vert x_{\Delta}(s ) \bigr\vert ^{\gamma}+ \bigl\vert \bar{x}_{\Delta}(s) \bigr\vert ^{\gamma}\bigr)^{4} \bigr)^{1/2} \bigl( \mathbb {E}\bigl\vert x_{\Delta}(s) - \bar{x}_{\Delta}(s) \bigr\vert ^{4} \bigr)^{1/2} \,ds,\end{aligned} $$

where \(t \wedge\theta_{R} \leq T\) a.s. for \(t \in[0,T]\) is used for the last integral. Using Lemmas 3.7 and 3.9, we have

$$ J_{5} \leq \int_{0}^{t} \mathbb {E}\bigl\vert e(s \wedge \theta_{R}) \bigr\vert ^{2} \,ds + \frac{3^{3/2}(8q - 12)L_{1}^{2} (1+2C_{1})^{1/2} C_{3} T}{q-2} \Delta^{1/2} \Bigl( \sup_{0 \leq t \leq T} \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{2} \Bigr), $$

where the elementary inequality \(|a + b + c|^{4} \leq3^{3} (|a|^{4} + |b|^{4} + |c|^{4})\) is used.

Combining the estimates of \(J_{4}\) and \(J_{5}\) and applying the Gronwall inequality completes the proof. □

Theorem 3.11

Suppose that Assumptions 3.1, 3.2, and 3.3 hold. Then, for any \(t \in[0,T]\),

$$ \mathbb {E}\bigl\vert e(t) \bigr\vert ^{2} \leq C_{5} \Delta^{1/2} \Bigl( \sup_{0 \leq t \leq T} \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{2} \Bigr), $$

where

$$ C_{5} = C_{4} + \frac{(C_{1} + C_{2})(2^{p} + p - 2)}{p}. $$

Proof

For any \(t \in[0,T]\), we have

$$ \mathbb {E}\bigl\vert e(t) \bigr\vert ^{2} = \mathbb {E}\bigl( \bigl\vert e(t) \bigr\vert ^{2} I_{\{\theta_{R} > t\}} \bigr) + \mathbb {E}\bigl( \bigl\vert e(t) \bigr\vert ^{2} I_{\{\theta_{R} \leq t\}} \bigr). $$

For any \(\delta> 0\), the Young inequality yields

$$ \mathbb {E}\bigl( \bigl\vert e(t) \bigr\vert ^{2} I_{\{\theta_{R} \leq t\}} \bigr) \leq\frac{2 \delta}{p} \mathbb {E}\bigl\vert e(t) \bigr\vert ^{p} + \frac{p-2}{p\delta^{2/(p-2)}} \mathbb {P}(\theta_{R} \leq t ). $$

Due to Lemmas 3.7 and 3.8, we obtain

$$ \mathbb {E}\bigl\vert e(t) \bigr\vert ^{p} \leq2^{p-1} \mathbb {E}\bigl\vert y(t) \bigr\vert ^{p} + 2^{p-1} \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{p} \leq2^{p-1} (C_{1} + C_{2}). $$

By the definition of \(\tau_{R}\), it is not hard to see

$$ \mathbb {P}(\tau_{R} \leq t ) = \mathbb {E}\biggl( I_{\{ \tau_{R} \leq t \} } \frac{ \vert y(\tau_{R}) \vert ^{p}}{R^{p}} \biggr) \leq\frac{1}{R^{p}} \Bigl( \sup_{0\leq t \leq T} \mathbb {E}\bigl\vert y(t) \bigr\vert ^{p} \Bigr) \leq\frac{C_{2}}{R^{p}}. $$

In a similar way, we have

$$ \mathbb {P}(\rho_{R} \leq t ) \leq\frac{C_{1}}{R^{p}}. $$

Thus, we see that

$$ \mathbb {P}(\theta_{R} \leq t ) \leq \mathbb {P}(\tau_{R} \leq t ) + \mathbb {P}(\rho_{R} \leq t ) \leq\frac{C_{1}+C_{2}}{R^{p}}. $$

Now choosing \(\delta= |y_{0}|^{2} \Delta^{1/2}\) and \(R = ( |y_{0}|^{2} \Delta^{1/2} )^{-1/(p-2)}\) gives

$$\begin{aligned} \mathbb {E}\bigl( \bigl\vert e(t) \bigr\vert ^{2} I_{\{\theta_{R} \leq t\}} \bigr) &\leq \frac {(C_{1} + C_{2})(2^{p} + p - 2)}{p} \vert y_{0} \vert ^{2} \Delta^{1/2} \\ &\leq \Bigl( \sup_{0 \leq t \leq T} \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{2} \Bigr)\frac{(C_{1} + C_{2})(2^{p} + p - 2)}{p} \Delta^{1/2},\end{aligned} $$

where the fact \(|y_{0}|^{2} = |x_{\Delta}(0)|^{2} \leq ( \sup_{0 \leq t \leq T} \mathbb {E}|x_{\Delta}(t)|^{2} )\) is used. Using Lemma 3.10, we have

$$\begin{aligned} \mathbb {E}\bigl\vert e(t) \bigr\vert ^{2} &= \mathbb {E}\bigl( \bigl\vert e(t) \bigr\vert ^{2} I_{\{\theta_{R} > t\}} \bigr) + \mathbb {E}\bigl( \bigl\vert e(t) \bigr\vert ^{2} I_{\{\theta_{R} \leq t\}} \bigr) \\ &\leq \mathbb {E}\bigl\vert e(t \wedge\theta_{R}) \bigr\vert ^{2} + \frac{(C_{1} + C_{2})(2^{p} + p - 2)}{p} \Delta^{1/2} \Bigl( \sup _{0 \leq t \leq T} \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{2} \Bigr) \\ &\leq C_{4} \Delta^{1/2} \Bigl( \sup_{0 \leq t \leq T} \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{2} \Bigr) + \frac{(C_{1} + C_{2})(2^{p} + p - 2)}{p} \Delta^{1/2} \Bigl( \sup_{0 \leq t \leq T} \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{2} \Bigr) \\ &\leq C_{5} \Delta^{1/2} \Bigl( \sup_{0 \leq t \leq T} \mathbb {E}\bigl\vert x_{\Delta}(t) \bigr\vert ^{2} \Bigr).\end{aligned} $$

Hence, the proof is completed. □

We finish this section by providing the theorem of equivalence for the partially truncated Euler–Maruyama method. The proof follows directly from Theorem 2.1, Lemma 3.7, and Theorem 3.11.

Theorem 3.12

Suppose that Assumptions 3.1, 3.2, and 3.3 hold. Then the SDE is mean square exponentially stable if and only if the partially truncated Euler–Maruyama method solution is mean square exponentially stable, provided the step size is small enough and satisfies (3.5).
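In view of the discussion of (B) in Sect. 1, Theorem 3.12 suggests a practical recipe: simulate the partially truncated EM method and inspect the decay of the sample second moment. The following Python sketch (reusing the hypothetical partially_truncated_em simulator given after (3.8), with our own illustrative parameters) estimates the decay rate of \(\mathbb {E}|x_{\Delta}(t)|^{2}\) by a log-linear fit.

```python
import numpy as np

def ms_decay_rate(simulate, y0, dt, n_steps, n_paths, rng):
    """Least-squares slope of log E|x(t)|^2 versus t; a clearly negative
    slope over a long horizon suggests mean square exponential stability."""
    t = dt * np.arange(n_steps + 1)
    second_moment = np.zeros(n_steps + 1)
    for _ in range(n_paths):
        path = simulate(y0, dt, n_steps, rng)   # one sample path of x_Delta
        second_moment += path ** 2
    second_moment /= n_paths                    # Monte Carlo estimate of E|x(t)|^2
    return np.polyfit(t, np.log(second_moment), 1)[0]

# partially_truncated_em is the sketch given after (3.8) in Sect. 3.1.
rng = np.random.default_rng(0)
print(ms_decay_rate(partially_truncated_em, y0=2.0, dt=1e-3,
                    n_steps=5000, n_paths=1000, rng=rng))
```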

4 Conclusion and future research

For stochastic differential equations with super-linearly growing coefficients, this paper studies the equivalence of the mean square stability between the partially truncated Euler–Maruyama method and the underlying SDEs. By carefully tracking the constant terms in the finite time convergence error, the “if and only if” result is obtained.

The equivalence of other types of stability, such as pth moment stability and almost sure stability, will be reported in future works.

References

1. Allen, E.: Modeling with Itô Stochastic Differential Equations. Mathematical Modelling: Theory and Applications, vol. 22. Springer, Dordrecht (2007)

2. Appleby, J.A.D., Berkolaiko, G., Rodkina, A.: Non-exponential stability and decay rates in nonlinear stochastic difference equations with unbounded noise. Stochastics 81(2), 99–127 (2009)

3. Buckwar, E., Kelly, C.: Towards a systematic linear stability analysis of numerical methods for systems of stochastic differential equations. SIAM J. Numer. Anal. 48(1), 298–321 (2010)

4. Gray, A., Greenhalgh, D., Hu, L., Mao, X., Pan, J.: A stochastic differential equation SIS epidemic model. SIAM J. Appl. Math. 71(3), 876–902 (2011)

5. Guo, Q., Liu, W., Mao, X.: A note on the partially truncated Euler–Maruyama method. Appl. Numer. Math. 130, 157–170 (2018)

6. Guo, Q., Liu, W., Mao, X., Yue, R.: The partially truncated Euler–Maruyama method and its stability and boundedness. Appl. Numer. Math. 115, 235–251 (2017)

7. Higham, D.J.: Mean-square and asymptotic stability of the stochastic theta method. SIAM J. Numer. Anal. 38(3), 753–769 (2000)

8. Higham, D.J.: Stochastic ordinary differential equations in applied and computational mathematics. IMA J. Appl. Math. 76(3), 449–474 (2011)

9. Higham, D.J., Mao, X., Stuart, A.M.: Exponential mean-square stability of numerical solutions to stochastic differential equations. LMS J. Comput. Math. 6, 297–313 (2003)

10. Higham, D.J., Mao, X., Yuan, C.: Almost sure and moment exponential stability in the numerical simulation of stochastic differential equations. SIAM J. Numer. Anal. 45(2), 592–609 (2007)

11. Hu, Y., Wu, F., Huang, C.: Stochastic stability of a class of unbounded delay neutral stochastic differential equations with general decay rate. Int. J. Syst. Sci. 43(2), 308–318 (2012)

12. Hutzenthaler, M., Jentzen, A., Kloeden, P.E.: Strong and weak divergence in finite time of Euler’s method for stochastic differential equations with non-globally Lipschitz continuous coefficients. Proc. R. Soc. Lond., Ser. A, Math. Phys. Eng. Sci. 467(2130), 1563–1576 (2011)

13. Hutzenthaler, M., Jentzen, A., Kloeden, P.E.: Strong convergence of an explicit numerical method for SDEs with nonglobally Lipschitz continuous coefficients. Ann. Appl. Probab. 22(4), 1611–1641 (2012)

14. Lamba, H., Mattingly, J.C., Stuart, A.M.: An adaptive Euler–Maruyama scheme for SDEs: convergence and stability. IMA J. Numer. Anal. 27(3), 479–506 (2007)

15. Lan, G., Xia, F.: Strong convergence rates of modified truncated EM method for stochastic differential equations. J. Comput. Appl. Math. 334, 1–17 (2018)

16. Li, X., Mao, X., Yin, G.: Explicit numerical approximations for stochastic differential equations in finite and infinite horizons: truncation methods, convergence in pth moment, and stability. IMA J. Numer. Anal. (2018). https://doi.org/10.1093/imanum/dry015

17. Liu, L., Li, M., Deng, F.: Stability equivalence between the neutral delayed stochastic differential equations and the Euler–Maruyama numerical scheme. Appl. Numer. Math. 127, 370–386 (2018)

18. Mao, X.: Exponential stability of equidistant Euler–Maruyama approximations of stochastic differential delay equations. J. Comput. Appl. Math. 200(1), 297–316 (2007)

19. Mao, X.: Stochastic Differential Equations and Applications, 2nd edn. Horwood, Chichester (2008)

20. Mao, X.: The truncated Euler–Maruyama method for stochastic differential equations. J. Comput. Appl. Math. 290, 370–384 (2015)

21. Mao, X.: Convergence rates of the truncated Euler–Maruyama method for stochastic differential equations. J. Comput. Appl. Math. 296, 362–375 (2016)

22. Mitsui, T., Saito, Y.: MS-stability analysis for numerical solutions of stochastic differential equations—beyond single-step single dim. In: Some Topics in Industrial and Applied Mathematics. Ser. Contemp. Appl. Math. CAM, vol. 8, pp. 181–194. Higher Education Press, Beijing (2007)

23. Platen, E., Bruti-Liberati, N.: Numerical Solution of Stochastic Differential Equations with Jumps in Finance. Stochastic Modelling and Applied Probability, vol. 64. Springer, Berlin (2010)

24. Sabanis, S.: A note on tamed Euler approximations. Electron. Commun. Probab. 18, Article ID 47 (2013)

25. Sabanis, S.: Euler approximations with varying coefficients: the case of superlinearly growing diffusion coefficients. Ann. Appl. Probab. 26(4), 2083–2105 (2016)

26. Schurz, H.: Stability, Stationarity, and Boundedness of Some Implicit Numerical Methods for Stochastic Differential Equations and Applications. Logos Verlag Berlin, Berlin (1997)

27. Wu, F., Mao, X., Szpruch, L.: Almost sure exponential stability of numerical solutions for stochastic delay differential equations. Numer. Math. 115(4), 681–697 (2010)

28. Zhao, G., Song, M., Liu, M.: Numerical solutions of stochastic differential delay equations with jumps. Int. J. Numer. Anal. Model. 6(4), 659–679 (2009)

29. Zong, X., Wu, F., Huang, C.: Convergence and stability of the semi-tamed Euler scheme for stochastic differential equations with non-Lipschitz continuous coefficients. Appl. Math. Comput. 228, 240–250 (2014)

Funding

The authors would like to thank the National Natural Science Foundation of China (11701378, 11871343), the “Chenguang Program” supported by both the Shanghai Education Development Foundation and the Shanghai Municipal Education Commission (16CG50), and the Shanghai Pujiang Program (16PJ1408000) for their financial support.

Author information

Contributions

All authors contributed equally to each part of this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Wei Liu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Jiang, Y., Huang, Z. & Liu, W. Equivalence of the mean square stability between the partially truncated Euler–Maruyama method and stochastic differential equations with super-linear growing coefficients. Adv Differ Equ 2018, 355 (2018). https://doi.org/10.1186/s13662-018-1818-1
