

  • Research
  • Open Access

An improved Milstein method for stiff stochastic differential equations

Advances in Difference Equations 2015, 2015:369

  • Received: 9 August 2015
  • Accepted: 16 November 2015
  • Published:


To solve stiff stochastic differential equations, we propose an improved Milstein method, constructed by adding an error correction term to the Milstein scheme. The correction term is derived from an approximation of the difference between the exact solution of the stochastic differential equation and the Milstein continuous-time extension. The scheme is proved to be strongly convergent with order one; it is as easy to implement as standard explicit schemes, but much more efficient for solving stiff stochastic problems. The efficiency and the advantage of the method lie in its very large stability region. For a linear scalar test equation, it is shown that the mean-square stability domain of the method is much larger than that of the Milstein method. Finally, numerical examples are reported to highlight the accuracy and effectiveness of the method.


  • stochastic differential equations
  • stiffness
  • improved Milstein method
  • strong convergence
  • mean-square stability

1 Introduction

Stochastic differential equations (SDEs) play a prominent role in a range of scientific areas such as biology, chemistry, epidemiology, mechanics, microelectronics, and finance [1–6]. Since explicit solutions are rarely available for nonlinear SDEs, numerical approximations become increasingly important in many applications. Effective numerical methods are therefore a key ingredient and deserve careful investigation. In the present work we contribute in this direction and propose a new efficient scheme with low computational cost for the strong approximation of stiff SDEs.

We consider the following d-dimensional SDE driven by multiplicative noise:
$$ \left \{ \textstyle\begin{array}{@{}l} dX(t)=f(X(t))\,dt+\sum_{j=1}^{m}g^{j}(X(t))\,dW^{j}(t),\quad t\in(0, T],\\ X(0)=X_{0}, \end{array}\displaystyle \right . \tag{1.1} $$
where T is a positive constant, \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) is the drift coefficient and \(g^{j}\), \(j=1,2,\ldots,m:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) are the diffusion coefficients. Moreover, \(W^{j}(t)\), \(j=1,2,\ldots,m\), are independent scalar Wiener processes defined on the complete probability space \((\Omega,\mathcal{F},P)\) with a filtration \(\{\mathcal{F}_{t}\}_{t\geq0}\) satisfying the usual conditions (that is, it is increasing and right continuous while \(\mathcal{F}_{0}\) contains all P-null sets). Furthermore, the initial data \(X_{0}\) is assumed to be independent of the Wiener process satisfying \(\mathbb{E}|X_{0}|^{2}<\infty\).

Over the last decades, much progress has been made in the construction and analysis of various numerical schemes for (1.1) from different numerical points of view; see, e.g., [7–26]. Roughly speaking, the existing literature distinguishes two major types of numerical methods: explicit and implicit. As in the deterministic case, explicit methods [7–9] are easy to implement and are advocated for non-stiff problems. For stiff problems, however, standard explicit methods with poor stability properties suffer severely from step-size restrictions and turn out to be inefficient in terms of overall computational cost. To address this issue, a number of implicit methods, including drift-implicit methods [10, 15] and fully implicit methods [12, 16–19, 21, 22], have been introduced; these possess better stability properties than explicit methods and are thus well adapted to stiff problems. Although implicit methods can usually ease the difficulty arising from stiffness in SDEs, one needs to solve (possibly large) nonlinear algebraic equations at each time step. Hence traditional implicit methods may still be costly when applied to large stiff systems, for instance, SDEs arising from the spatial discretization of stochastic partial differential equations. In this paper, an improved Milstein (IM) method is developed, which avoids solving the nonlinear algebraic equations encountered with the implicit methods mentioned above. More importantly, the proposed scheme admits good mean-square stability (MS-stability) properties and therefore serves as a good candidate for treating stiff SDEs.

For simplicity of presentation, here we restrict ourselves to the special case \(d =m =1\) in (1.1) and refer to Section 2 for the general case. In this setting we introduce the IM method for equation (1.1) as follows:
$$ \left \{ \textstyle\begin{array}{@{}l} Y_{n+1}=\bar{Y}_{n+1}+ (1-hf'(\bar{Y}_{n+1}) )^{-1}h (f(\bar{Y}_{n+1})-f(Y_{n}) ),\\ \bar{Y}_{n+1}=Y_{n}+hf(Y_{n})+g(Y_{n})\Delta W_{n}+\frac {1}{2}g'(Y_{n})g(Y_{n})(\Delta W_{n}^{2}-h), \end{array}\displaystyle \right . \tag{1.2} $$
where \(Y_{n}\) is the approximation of the exact solution \(X(t)\) of (1.1) at time \(t_{n} = nh\), with h being the time step size. Scheme (1.2) can be regarded as a modification of the classical Milstein scheme obtained by adding the error correction term \((1-hf'(\bar {Y}_{n+1}) )^{-1}h (f(\bar{Y}_{n+1})-f(Y_{n}) )\), where \(\bar {Y}_{n+1}\) is produced by the classical Milstein method. The benefits of this error correction term are twofold. On the one hand, the MS-stability of the new scheme is much better than that of the classical Milstein method, and no nonlinear algebraic equations have to be solved during implementation. On the other hand, the computational accuracy of the IM method is, to a certain extent, improved even for non-stiff problems (see Table 1 in Section 5), despite the fact that the strong convergence rate of the IM method remains the same as that of the classical Milstein method. We shall elaborate the derivation of the IM method in the forthcoming section. In short, the derivation of (1.2) is based on a local truncation error analysis, and the error correction term comes from a certain approximation of the difference between the true solution of the SDE and the one-step explicit Milstein approximation. Further, the proposed method (1.2) is justified by proving its strong convergence of order one under standard assumptions (see Theorem 3.4). Also, the linear MS-stability of the numerical scheme is examined, and it turns out that the proposed scheme possesses a much larger stability domain than the classical Milstein method.

In addition, it is worthwhile to point out that the proposed scheme is close to the Rosenbrock type methods in the literature [24, 27] due to the presence of the inverse Jacobian matrices. Despite the similarity, the scheme we develop here does not coincide with any Rosenbrock type method formulated in [24, 27]. Indeed, the new scheme can be regarded as a modified version of a predictor-corrector method: with the classical Milstein method as predictor, a corrector term involving the inverse Jacobian matrix is determined following the approach in Section 2. This is different from the way Rosenbrock type methods are obtained in the literature. Similarly to Rosenbrock methods, the proposed scheme is well suited for stiff problems whose stiffness comes from linear terms in the drift coefficient, and it may lose efficiency in the case of stiff nonlinear terms. Finally, we mention that the idea of error correction methods based on Chebyshev collocation was previously employed in [28] to construct methods for stiff deterministic ordinary differential equations.

The remainder of the paper is organized as follows. In the next section, we present the construction of the IM scheme based on a local truncation error analysis. In Section 3, the strong convergence order in the mean-square sense is analyzed. Section 4 is devoted to the MS-stability of the IM method. Numerical experiments confirming the accuracy and effectiveness of the method are reported in Section 5. At the end of this article, brief conclusions are drawn.

2 Derivation of the IM method

In this section, to illuminate the derivation of the IM method, we first restrict our attention to a scalar SDE driven by a scalar Wiener process, i.e.,
$$ \left \{ \textstyle\begin{array}{@{}l} dX(t)=f(X(t))\,dt+g(X(t))\,dW(t),\quad 0< t\leq T,\\ X(0)=X_{0}. \end{array}\displaystyle \right . \tag{2.1} $$
The extension to a general multi-dimensional case will be presented at the end of this section. For SDE (2.1), we define a uniform mesh on the finite time interval \([0,T ]\) with step size \(h=\frac{T}{N}\), \(N\in\mathbb{N}^{+}\) by
$$\mathcal{T}^{N}:0 = t_{0} < t_{1} < t_{2} < \cdots< t_{N} = T. $$
Based on a local truncation error analysis, we are to devise a one-step approximation of \(X(t_{1})\), denoted by \(Y_{1}\), starting with the initial value \(Y_{0} =X_{0}\). To this end, we introduce an auxiliary one-step approximation \(\bar{Y}_{1}\) defined by
$$ \bar{Y}_{1}=Y_{0}+f(Y_{0})h+g(Y_{0}) \Delta W_{1}+\frac{1}{2}g'(Y_{0})g(Y_{0}) \bigl(\Delta W_{1}^{2}-h\bigr), \tag{2.2} $$
where \(\Delta W_{1}:= W(t_{1}) - W(t_{0})\). Note that \(\bar{Y}_{1}\) can be viewed as a one-step approximation generated by the classical Milstein method [8, 29] starting with the initial value \(Y_{0} = X_{0}\). Moreover, we define the continuous-time extension \(\tilde{Y}(t)\) on the interval \([t_{0},t_{1}]\) such that
$$\begin{aligned} \tilde{Y}(t)={}&Y_{0}+f(Y_{0}) (t-t_{0})+g(Y_{0}) \bigl(W(t)-W(t_{0}) \bigr) \\ &{}+g'(Y_{0})g(Y_{0}) \int_{t_{0}}^{t} \bigl(W(s)-W(t_{0}) \bigr)\,dW(s). \end{aligned} \tag{2.3}$$
Note that \(\tilde{Y}(t_{0})=Y_{0}\) and \(\tilde{Y}(t_{1})=\bar{Y}_{1}\). Now let us examine the difference between \(X(t)\) and \(\tilde{Y}(t)\) defined by
$$ \varphi(t)=X(t)- \tilde{Y}(t),\quad t\in[t_{0},t_{1}]. $$
Further, we attempt to find a proper approximation of \(\varphi(t_{1})\), denoted by \(\bar{\varphi}_{1}\), based on which we introduce a new one-step approximation \(Y_{1}\) given by
$$ Y_{1}=\tilde{Y}(t_{1})+\bar{\varphi}_{1}= \bar{Y}_{1}+\bar{\varphi}_{1}. \tag{2.5} $$
Such a scheme can be regarded as an improved Milstein method, which is expected to reduce the local truncation error compared with the original Milstein scheme (2.2). Bearing this idea in mind, we carry out the following analysis. Recall that \(X(t_{0})=X_{0}=Y_{0}=\tilde{Y}(t_{0})\) and therefore \(\varphi(t_{0})=0\). For \(t\in(t_{0},t_{1}]\), it follows from (2.1) and (2.3) that
$$\begin{aligned} d\varphi(t)={}& \bigl(f\bigl(X(t)\bigr)-f(Y_{0}) \bigr)\,dt+ \bigl(g\bigl(X(t)\bigr)-g(Y_{0}) \bigr)\,dW(t) \\ &{}-g'(Y_{0})g(Y_{0}) \bigl(W(t)-W(t_{0}) \bigr)\,dW(t) \\ ={}& \bigl(f\bigl(X(t)\bigr)-f\bigl( \tilde{Y}(t)\bigr) \bigr)\,dt+ \bigl(f\bigl( \tilde{Y}(t)\bigr)-f(Y_{0}) \bigr)\,dt \\ &{}+ \bigl(g\bigl(X(t)\bigr)-g\bigl( \tilde{Y}(t)\bigr) \bigr)\,dW(t)+ \bigl(g \bigl( \tilde{Y}(t)\bigr)-g(Y_{0}) \\ &{}-g'(Y_{0})g(Y_{0}) \bigl(W(t)-W(t_{0}) \bigr) \bigr)\,dW(t) \\ ={}&m(t)\varphi(t)\,dt+h(t)\varphi(t)\,dW(t)+F(t)\,dt+G(t)\,dW(t), \end{aligned}$$
where we denote
$$\begin{aligned}& m(t)= \int_{0}^{1}{f'\bigl( \tilde{Y}(t)+\xi \varphi(t)\bigr)\,d\xi}, \end{aligned}$$
$$\begin{aligned}& h(t)= \int_{0}^{1}{g'\bigl( \tilde{Y}(t)+ \zeta\varphi(t)\bigr)\,d\zeta}, \end{aligned}$$
$$\begin{aligned}& F(t)=f \bigl( \tilde{Y}(t) \bigr)-f(Y_{0}), \end{aligned}$$
$$\begin{aligned}& G(t)=g \bigl(\tilde{Y}(t) \bigr)-g(Y_{0})-g'(Y_{0})g(Y_{0}) \bigl(W(t)-W(t_{0}) \bigr) \end{aligned}$$
for \(t\in(t_{0},t_{1}]\). To sum up, \(\varphi(t)\) is governed by the following SDE:
$$ \left \{ \textstyle\begin{array}{@{}l} d\varphi(t)= (m(t)\varphi(t)+F(t) )\,dt+ (h(t)\varphi (t)+G(t) )\,dW(t),\quad t\in(t_{0},t_{1}],\\ \varphi(t_{0})=0. \end{array}\displaystyle \right . \tag{2.11} $$
With \(m(t)\) and \(h(t)\) being approximated by
$$m(t)\approx f'\bigl(\tilde{Y}(t)\bigr),\qquad h(t)\approx g'\bigl(\tilde{Y}(t)\bigr) $$
for \(t\in(t_{0},t_{1}]\), we obtain the following approximate version of (2.11):
$$ \left \{ \textstyle\begin{array}{@{}l} d\overline{\varphi}(t)= (f' (\tilde{Y}(t) )\overline{\varphi}(t)+F(t) )\,dt+ (g' (\tilde{Y}(t) )\overline{\varphi}(t)+G(t) )\,dW(t),\\ \overline{\varphi}(t_{0})=0. \end{array}\displaystyle \right . \tag{2.12} $$
Here the process \(\bar{\varphi}(t)\) is an approximation to the exact solution \(\varphi(t)\) of (2.11) for \(t\in(t_{0},t_{1}]\). As mentioned earlier, we are interested in the approximation \(\overline{\varphi}_{1}\) to \(\bar{\varphi}(t_{1})\), which is used to construct a new scheme. To this aim, we apply the semi-implicit Milstein method [15] with \(\theta=1\) to the linear SDE (2.12) to obtain
$$\begin{aligned} \overline{\varphi}_{1}={}& \bigl(1-hf' \bigl(\tilde{Y}(t_{1}) \bigr) \bigr)^{-1} \biggl(\overline{\varphi}_{0}+hF(t_{1})+ \bigl(g' \bigl(\tilde{Y}(t_{0}) \bigr)\overline{\varphi}_{0}+G(t_{0}) \bigr) \\ &{}\times\Delta W_{1}+\frac{1}{2}g' \bigl(\tilde{Y}(t_{0}) \bigr) \bigl(g'\bigl(\tilde{Y}(t_{0}) \bigr)\overline{\varphi}_{0}+G(t_{0}) \bigr) \bigl(\Delta W_{1}^{2}-h\bigr) \biggr) \\ ={}& \bigl(1-hf'(\bar{Y}_{1}) \bigr)^{-1}h \bigl(f(\bar{Y}_{1})-f(Y_{0}) \bigr), \end{aligned}$$
where the second equality holds since \(\tilde{Y}(t_{1})=\bar{Y}_{1}\), \(\overline{\varphi}_{0} = \overline{\varphi} (t_{0})=0\), and
$$G(t_{0})=g \bigl(\tilde{Y}(t_{0}) \bigr)-g(Y_{0})-g'(Y_{0})g(Y_{0}) \bigl(W(t_{0})-W(t_{0}) \bigr)=0. $$
According to (2.5) we construct the following local improved Milstein method:
$$ Y_{1}=\bar{Y}_{1}+\overline{\varphi}_{1} = \bar{Y}_{1}+ \bigl(1-hf' (\bar{Y}_{1} ) \bigr)^{-1}h \bigl(f(\bar{Y}_{1})-f(Y_{0}) \bigr), $$
where \(\bar{Y}_{1}\) is given by (2.2). As a result, we propose the following global scheme for (2.1):
$$ \left \{ \textstyle\begin{array}{@{}l} Y_{n+1}=\bar{Y}_{n+1} + (1-hf'(\bar{Y}_{n+1}) )^{-1}h (f(\bar{Y}_{n+1})-f(Y_{n}) ),\\ \bar{Y}_{n+1}=Y_{n}+hf(Y_{n})+g(Y_{n})\Delta W_{n}+\frac {1}{2}g'(Y_{n})g(Y_{n})(\Delta W_{n}^{2}-h), \end{array}\displaystyle \right . \tag{2.15} $$
where \(\Delta W_{n}=W(t_{n+1})-W(t_{n})\).
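For readers who prefer code, the scalar scheme (2.15) can be sketched in Python as follows. This is a minimal illustration only; the function names (`im_step`, `im_solve`), the use of NumPy, and passing the derivatives \(f'\), \(g'\) as callables are our own choices, not part of the paper.

```python
import numpy as np

def im_step(y, h, dW, f, df, g, dg):
    """One step of the improved Milstein (IM) scheme (2.15) for a scalar SDE.

    y      : current approximation Y_n
    h      : step size
    dW     : Brownian increment over the step
    f, g   : drift and diffusion coefficients
    df, dg : their first derivatives
    """
    # classical Milstein predictor \bar{Y}_{n+1}
    y_bar = y + h * f(y) + g(y) * dW + 0.5 * dg(y) * g(y) * (dW**2 - h)
    # error correction term (1 - h f'(\bar{Y}))^{-1} h (f(\bar{Y}) - f(Y_n))
    return y_bar + h * (f(y_bar) - f(y)) / (1.0 - h * df(y_bar))

def im_solve(x0, T, N, f, df, g, dg, rng):
    """Apply the IM scheme on [0, T] with N uniform steps; returns Y_N."""
    h = T / N
    y = x0
    for _ in range(N):
        dW = rng.normal(0.0, np.sqrt(h))
        y = im_step(y, h, dW, f, df, g, dg)
    return y
```

As a sanity check, one can verify that with \(g\equiv0\) and linear drift \(f(x)=\lambda x\), a single IM step reduces to \(Y_{n+1}=Y_{n}/(1-\lambda h)\), the backward Euler amplification factor, which already hints at the scheme's favorable stability.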
For the general system (1.1) with \(d,m>1\), the above analysis can be adapted without any difficulty. Recall that the classical Milstein method in the multi-dimensional setting is given by
$$ Y_{n+1}=Y_{n}+f(Y_{n})h+\sum _{j=1}^{m}g^{j}(Y_{n}) \Delta W_{n}^{j}+\sum_{j_{2}=1}^{m} \sum_{j_{1}=1}^{m}L^{j_{1}}g^{j_{2}}(Y_{n})I_{(j_{1},j_{2})}, $$
where for \(j_{1},j_{2}=1,2,\ldots,m\) we denote
$$L^{j_{1}}=\sum_{i=1}^{d}g^{i,j_{1}} \frac{\partial}{\partial x^{i}},\qquad I_{(j_{1},j_{2})}= \int_{t_{n}}^{t_{n+1}} \int_{t_{n}}^{s_{2}}\,dW^{j_{1}}(s_{1})\,dW^{j_{2}}(s_{2}) $$
with \(x^{i}\) and \(g^{i,j_{1}}\) being the ith element of the vector functions x and \(g^{j_{1}}\), respectively. Along the same lines as above, we derive the IM method for general system (1.1)
$$ \left \{ \textstyle\begin{array}{@{}l} Y_{n+1}=\bar{Y}_{n+1}+(I-hf'(\bar{Y}_{n+1}))^{-1}h (f(\bar{Y}_{n+1})-f(Y_{n}) ),\\ \bar{Y}_{n+1}=Y_{n}+f(Y_{n})h+\sum_{j=1}^{m}g^{j}(Y_{n})\Delta W_{n}^{j}+\sum_{j_{2}=1}^{m}\sum_{j_{1}=1}^{m}L^{j_{1}}g^{j_{2}}(Y_{n}) I_{(j_{1},j_{2})}, \end{array}\displaystyle \right . \tag{2.17} $$
where I is the d-dimensional identity matrix and \(f'\) stands for Jacobian matrix of the vector-valued function f.
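As an illustration of (2.17), the following sketch implements one step for \(d>1\) driven by a single Wiener process (\(m=1\)), in which case the iterated integral reduces to \(I_{(1,1)}=\frac{1}{2}(\Delta W_{n}^{2}-h)\); sampling the general \(I_{(j_{1},j_{2})}\) for \(m>1\) is beyond this sketch. The helper names and the use of `numpy.linalg.solve` (rather than forming the inverse \((I-hf')^{-1}\) explicitly) are our own choices.

```python
import numpy as np

def im_step_system(y, h, dW, f, Jf, g, Jg):
    """One step of the IM scheme (2.17) for a d-dimensional SDE with a
    single driving Wiener process (m = 1), where I_{(1,1)} = (dW**2 - h)/2.

    f, g   : drift and diffusion, R^d -> R^d
    Jf, Jg : their Jacobian matrices, R^d -> R^{d x d}
    """
    d = y.shape[0]
    # Milstein predictor; for m = 1 the operator L^1 g equals Jg(y) @ g(y)
    y_bar = y + h * f(y) + g(y) * dW + 0.5 * (Jg(y) @ g(y)) * (dW**2 - h)
    # corrector: solve (I - h Jf(y_bar)) z = h (f(y_bar) - f(y))
    z = np.linalg.solve(np.eye(d) - h * Jf(y_bar), h * (f(y_bar) - f(y)))
    return y_bar + z
```

Solving the linear system instead of inverting the matrix is the standard numerically preferable choice; it does not change the scheme.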

3 Mean-square convergence analysis

In this section, we justify the proposed method by proving its strong convergence of order one in the mean-square sense. To this end, we make the following standard assumptions [15].

Assumption 3.1

Assume that \(f(x)\) and \(g(x)\) in (2.1) satisfy a global Lipschitz condition and a linear growth condition, that is, there exist positive constants L and K such that
  1. (1)
    (global Lipschitz condition) for all \(x,y\in\mathbb{R}\)
    $$ \bigl|f(x)-f(y)\bigr|^{2}\vee\bigl|g(x)-g(y)\bigr|^{2}\leq L|x-y|^{2}, \tag{3.1} $$
  2. (2)
    (linear growth condition) for all \(x\in\mathbb{R}\)
    $$ \bigl|f(x)\bigr|^{2}\vee\bigl|g(x)\bigr|^{2}\leq K \bigl(1+|x|^{2}\bigr). $$

Here and throughout this work, we use the convention that K represents a generic positive constant independent of h, whose value may be different for different appearances. This assumption guarantees the existence and uniqueness of the exact solution \(X(t)\) of equation (2.1), and, moreover, the solution \(X(t)\) satisfies \(\sup_{0\leq t\leq T} \mathbb{E}|X(t)|^{2}<\infty\); see, e.g., [30] for more details. In addition, we require the following assumption.

Assumption 3.2

Assume that the functions \(f(x)\) and \(g(x)\) in (2.1) have continuously bounded derivatives up to the required order for the following analysis, and the coefficient functions in Itô-Taylor expansions (up to a sufficient order) are globally Lipschitz and satisfy the linear growth conditions.

Subsequently, we present the fundamental strong convergence theorem [29, 31], which is frequently used in the literature to establish the mean-square convergence orders of various numerical schemes.

Theorem 3.3

([29, 31])

Suppose that a one-step approximation \(\bar{X}_{t,x}(t+h)\) has order of accuracy \(p_{1}\) for the mathematical expectation of the deviation and order of accuracy \(p_{2}\) for the mean-square deviation. More precisely, for arbitrary \(0\leq t\leq T-h\), \(x\in\mathbb{R}^{d}\) the following inequalities hold:
$$\begin{aligned}& \bigl|\mathbb{E} \bigl(X_{t,x}(t+h)-\bar{X}_{t,x}(t+h) \bigr) \bigr|\leq K\bigl(1+|x|^{2}\bigr)^{\frac{1}{2}}h^{p_{1}}, \end{aligned} \tag{3.3}$$
$$\begin{aligned}& \bigl(\mathbb{E} \bigl|X_{t,x}(t+h)-\bar{X}_{t,x}(t+h) \bigr|^{2} \bigr)^{\frac{1}{2}}\leq K\bigl(1+|x|^{2} \bigr)^{\frac{1}{2}}h^{p_{2}}. \end{aligned} \tag{3.4}$$
Also let \(p_{2}\geq\frac{1}{2}\), \(p_{1}\geq p_{2}+\frac{1}{2}\). Then for all \(N \in\mathbb{N}^{+}\) and \(k=0,1,\ldots, N\) the following inequality holds:
$$ \bigl(\mathbb{E} \bigl(\bigl|X(t_{k})-Y_{k} \bigr|^{2} \bigr) \bigr)^{\frac{1}{2}}\leq K\bigl(1+\mathbb{E}|X_{0}|^{2}\bigr)^{\frac{1}{2}}h^{p_{2}-\frac{1}{2}}, $$
i.e., the method is mean-square convergent with order \(p=p_{2}-\frac{1}{2}\).

The notations used in Theorem 3.3 are explained as follows: \(Y_{k}\) generated by the one-step method is an approximation to the exact solution \(X(t_{k})\) of (1.1) with \(t_{k}=kh\), \(X_{t,x}(t+h)\) denotes the exact solution of (1.1) with initial value x at time t and \(\bar{X}_{t,x}(t+h)\) denotes a numerical solution generated by the one-step method with initial value x at time t.

After the above preparations, we now prove rigorously that the IM method is mean-square convergent with order one under Assumption 3.1 and Assumption 3.2. For simplicity of presentation, we focus on the scalar SDE; the extension to the multi-dimensional case is straightforward and hence omitted here.

Theorem 3.4

Assume that all conditions in Assumption 3.1 and Assumption 3.2 are fulfilled. Then there exists a step size \(h_{0}<\frac{1}{\sqrt {L}}\) such that, for any \(h=T/N\leq h_{0}\), \(N\in\mathbb{N}^{+}\) the method (2.15) applied to SDE (2.1) is mean-square convergent with order one, i.e., for all \(N \in\mathbb{N}^{+}\) and \(k=0,1,\ldots , N\), the following inequality holds:
$$ \bigl(\mathbb{E} \bigl|\bigl(X(t_{k})-Y_{k}\bigr) \bigr|^{2} \bigr)^{\frac{1}{2}}\leq K\bigl(1+\mathbb{E}|X_{0}|^{2} \bigr)^{\frac{1}{2}}h. $$


Proof

The proof is divided into two steps.

Step 1. We shall prove that the inequality (3.3) holds for the IM method with \(p_{1}=2\). Let \(\bar{X}_{t,x}^{M}(t+h)\) denote the one-step Milstein approximation defined by
$$ \bar{X}_{t,x}^{M}(t+h)=x+f(x)h+g(x)\Delta W_{h}+\frac {1}{2}g'(x)g(x) \bigl(\Delta W_{h}^{2}-h\bigr), \tag{3.7} $$
and let \(\bar{X}_{t,x}(t+h)\) denote the one-step version of the proposed scheme (2.15):
$$ \bar{X}_{t,x}(t+h)=\bar{X}_{t,x}^{M}(t+h)+h \bigl(1-hf' \bigl(\bar{X}_{t,x}^{M}(t+h) \bigr) \bigr)^{-1} \bigl(f\bigl(\bar{X}_{t,x}^{M}(t+h) \bigr)-f(x) \bigr), \tag{3.8} $$
where \(x\in\mathbb{R}\) and \(\Delta W_{h}=W(t+h)-W(t)\). Analogously, let \(X_{t,x}(t+h)\) denote the one-step exact solution of (2.1). Therefore, one can get
$$\begin{aligned} &X_{t,x}(t+h)-\bar{X}_{t,x}(t+h) \\ &\quad= \int_{t}^{t+h}f\bigl(X_{t,x}(s)\bigr)\,ds-hf \bigl(\bar{X}_{t,x}^{M}(t+h) \bigr) \\ &\qquad{} + \int_{t}^{t+h}g\bigl(X_{t,x}(s)\bigr)\,dW(s)-g(x)\Delta W_{h}-\frac {1}{2}g'(x)g(x) \bigl(\Delta W_{h}^{2}-h\bigr) \\ &\qquad{} - h^{2} \bigl( 1-hf' \bigl(\bar{X}_{t,x}^{M}(t+h) \bigr) \bigr)^{-1} f' \bigl(\bar{X}_{t,x}^{M}(t+h) \bigr) \bigl(f\bigl(\bar{X}_{t,x}^{M}(t+h)\bigr)-f(x) \bigr), \end{aligned}$$
and thus
$$\begin{aligned} & \bigl|\mathbb{E} \bigl(X_{t,x} (t+h)-\bar{X}_{t,x}(t+h) \bigr) \bigr| \\ &\quad\leq \biggl| \mathbb{E} \biggl[ \int_{t}^{t+h}f\bigl(X_{t,x}(s)\bigr)\,ds-hf \bigl(\bar{X}_{t,x}^{M}(t+h) \bigr) \biggr] \biggr| \\ &\qquad{} + h^{2} \bigl| \mathbb{E} \bigl[ \bigl( 1-hf' \bigl( \bar{X}_{t,x}^{M}(t+h) \bigr) \bigr)^{-1} f' \bigl(\bar{X}_{t,x}^{M}(t+h) \bigr) \bigl(f \bigl(\bar{X}_{t,x}^{M}(t+h)\bigr)-f(x) \bigr) \bigr] \bigr| \\ &\quad:=H_{1} + H_{2}. \end{aligned} \tag{3.10}$$
Next, we address the estimates of \(H_{1}\) and \(H_{2}\). First of all, \(H_{1}\) can be split into the following two parts:
$$\begin{aligned} H_{1}\leq{}& \biggl|\mathbb{E} \biggl[ \int_{t}^{t+h}f\bigl(X_{t,x}(s)\bigr)-f \bigl(X_{t,x}(t+h)\bigr)\,ds \biggr] \biggr| +h \bigl|\mathbb{E} \bigl[f\bigl(X_{t,x}(t+h)\bigr)-f\bigl(\bar{X}_{t,x}^{M}(t+h)\bigr) \bigr] \bigr| \\ :={}&H_{11}+H_{12}. \end{aligned} \tag{3.11}$$
To handle the estimate of \(H_{11}\), one can use Itô’s formula under conditions imposed on the coefficient functions in Itô-Taylor expansions (see Assumption 3.2) to get
$$ H_{11}\leq K\bigl(1+|x|^{2} \bigr)^{\frac{1}{2}}h^{2}. \tag{3.12} $$
The estimate of \(H_{12}\) relies on the mean-square deviation of the classical Milstein one-step approximation [29]:
$$ \mathbb{E} \bigl|X_{t,x}(t+h)-\bar{X}_{t,x}^{M}(t+h) \bigr|^{2}\leq K\bigl(1+|x|^{2}\bigr)h^{3}. \tag{3.13} $$
Armed with this, one can readily check that
$$ H_{12}\leq K\bigl(1+|x|^{2} \bigr)^{\frac{1}{2}}h^{\frac{5}{2}}. \tag{3.14} $$
Substituting (3.12) and (3.14) into (3.11) yields
$$ H_{1}\leq H_{11}+H_{12}\leq K \bigl(1+|x|^{2}\bigr)^{\frac{1}{2}}h^{2}. \tag{3.15} $$
At this point, it remains to estimate \(H_{2}\). Note first that
$$ \mathbb{E}\bigl|\bar{X}_{t,x}^{M}(t+h)-x\bigr|^{2} \leq K\bigl(1+|x|^{2}\bigr)h, \tag{3.16} $$
which follows from (3.7), Assumption 3.1, and Assumption 3.2. To guarantee that the denominator in \(H_{2}\) does not vanish, that is, \(|hf' (\bar{X}_{t,x}^{M}(t+h) ) |\neq1\), it suffices to take \(h\leq h_{0}<\frac {1}{\sqrt{L}}\), since \(|f'(x)|\leq\sqrt{L}\) by Assumption 3.1. Therefore, using Assumption 3.2, (3.1), and (3.16), for \(h\leq h_{0}<\frac{1}{\sqrt{L}}\) we obtain
$$ H_{2}\leq K\bigl(1+|x|^{2} \bigr)^{\frac{1}{2}}h^{\frac{5}{2}}. \tag{3.17} $$
Finally, inserting (3.15) and (3.17) into (3.10) implies that
$$ \bigl|\mathbb{E} \bigl(X_{t,x}(t+h)-\bar{X}_{t,x}(t+h) \bigr) \bigr|\leq K\bigl(1+|x|^{2}\bigr)^{\frac{1}{2}}h^{2}. $$
Therefore, the inequality (3.3) with \(p_{1}=2\) is satisfied for the IM method.
Step 2. We prove that the inequality (3.4) with \(p_{2}=\frac{3}{2}\) holds for the IM method. To this end, we split the mean-square deviation into two parts as follows:
$$ \begin{aligned}[b] &\mathbb{E} \bigl|X_{t,x}(t+h)-\bar{X}_{t,x}(t+h) \bigr|^{2}\\ &\quad\leq 2\mathbb{E} \bigl|X_{t,x}(t+h)-\bar{X}_{t,x}^{M}(t+h) \bigr|^{2}+2\mathbb{E} \bigl|\bar{X}_{t,x}^{M}(t+h)- \bar{X}_{t,x}(t+h) \bigr|^{2}. \end{aligned} \tag{3.19} $$
For the second part on the right-hand side of (3.19), by (3.8), Assumption 3.2, (3.1), and (3.16), for \(h\leq h_{0}<\frac{1}{\sqrt{L}}\) one can derive
$$ 2\mathbb{E} \bigl|\bar{X}_{t,x}^{M}(t+h)- \bar{X}_{t,x}(t+h) \bigr|^{2}\leq Kh\mathbb{E}\bigl|\bar{X}_{t,x}^{M}(t+h)-x\bigr|^{2} \leq K\bigl(1+|x|^{2}\bigr)h^{3}. $$
This together with (3.13) enables us to arrive at
$$ \mathbb{E} \bigl|X_{t,x}(t+h)-\bar{X}_{t,x}(t+h) \bigr|^{2}\leq K \bigl(1+|x|^{2} \bigr)h^{3}. $$
Thus the inequality (3.4) with \(p_{2}=\frac{3}{2}\) holds for the IM method.

Now an application of Theorem 3.3 with \(p_{1}=2\) and \(p_{2}=\frac{3}{2}\) shows that the scheme is mean-square convergent with order \(p_{2}-\frac{1}{2}=1\). □

In the same manner, it is not hard to establish mean-square convergence of order one for the IM method (2.17) applied to the general system (1.1).

4 Mean-square stability

For SDEs, two very natural but distinct stability concepts are MS-stability and asymptotic stability. MS-stability measures the stability of moments, whereas asymptotic stability concerns the pathwise behavior of sample functions. In this section, we focus on the MS-stability of the IM method applied to a scalar linear test equation.

Consider the scalar linear test equation
$$ \left \{ \textstyle\begin{array}{@{}l} dX(t)=\lambda X(t)\,dt+\mu X(t)\,dW(t),\quad 0< t\leq T, \\ X(0)=X_{0}, \end{array}\displaystyle \right . \tag{4.1} $$
where \(\lambda,\mu\in\mathbb{C}\) are constants and \(X_{0}\neq0\) with probability 1. The exact solution of (4.1) is given by
$$ X(t)=X_{0}\exp \biggl(\biggl(\lambda-\frac{1}{2} \mu^{2}\biggr)t+\mu W(t) \biggr). $$
It is a classical result [32] that the zero solution of (4.1) is MS-stable if and only if
$$ 2\operatorname{Re}(\lambda)+|\mu|^{2}< 0. \tag{4.3} $$
Suppose that the parameters λ and μ are chosen so that the SDE (4.1) is stable in the mean-square sense. A natural question is for what range of h the numerical solution is stable in an analogous sense. We apply a one-step numerical scheme to equation (4.1) and represent the resulting stochastic difference equation as
$$ Y_{n+1}= R(h,\lambda,\mu,\widehat{N}_{n})Y_{n}, $$
where \(\widehat{N}_{n}\) are independent standard Gaussian random variables \(\widehat{N}_{n}=\frac{\Delta W_{n}}{\sqrt{h}}\sim N(0,1)\). Saito and Mitsui [32] introduced the following definition of MS-stability for a numerical scheme.

Definition 4.1

For fixed h, λ, μ, a numerical method is said to be MS-stable if
$$ \overline{R}(h,\lambda,\mu)=\mathbb{E} \bigl(\bigl|R(h,\lambda,\mu, \widehat{N}_{n})\bigr|^{2} \bigr)< 1, $$
where \(\overline{R}(h,\lambda,\mu)\) is called the MS-stability function of the numerical method.

Theorem 4.2

For fixed h, λ, μ, the IM method is MS-stable if
$$ 2\operatorname{Re}(\lambda)+|\mu|^{2}+ \frac{1}{2}h|\mu|^{4}-h|\lambda|^{2}< 0. \tag{4.6} $$


Proof

Applying the IM method (2.15) to (4.1) results in
$$ \left \{ \textstyle\begin{array}{@{}l} Y_{n+1}=\bar{Y}_{n+1} +\frac{\lambda h}{1-\lambda h}(\bar{Y}_{n+1}-Y_{n}),\\ \bar{Y}_{n+1}=Y_{n}+\lambda hY_{n}+\mu Y_{n}\Delta W_{n}+\frac{1}{2}\mu ^{2}Y_{n}(\Delta W_{n}^{2}-h), \end{array}\displaystyle \right . \tag{4.7} $$
where \(\widehat{N}_{n}=\frac{\Delta W_{n}}{\sqrt{h}}\sim N(0,1)\) and \(1-\lambda h\neq0\). Substituting \(\bar{Y}_{n+1}\) into \(Y_{n+1}\) in (4.7) yields
$$\begin{aligned} Y_{n+1} &=\frac{1}{1-\lambda h} \biggl(Y_{n}+\lambda h Y_{n}+\mu\Delta W_{n} Y_{n}+\frac{1}{2} \mu^{2}\bigl(\Delta W_{n}^{2}-h \bigr)Y_{n} \biggr)-\frac{\lambda h}{1-\lambda h} Y_{n} \\ &=\frac{1-\frac{1}{2}\mu^{2}h+\mu\sqrt{h}\widehat{N}_{n}+\frac{1}{2}\mu ^{2}h\widehat{N}_{n}^{2}}{1-\lambda h}Y_{n}. \end{aligned}$$
Therefore, the function \(R(h,\lambda,\mu,\widehat{N}_{n})\) of the IM method is given by
$$R(h,\lambda,\mu,\widehat{N}_{n})=\frac{1-\frac{1}{2}\mu^{2}h+\mu\sqrt{h}\widehat{N}_{n}+\frac{1}{2}\mu^{2}h\widehat{N}_{n}^{2}}{1-\lambda h}, $$
and thus
$$\begin{aligned} &\bigl|R(h,\lambda,\mu,\widehat{N}_{n})\bigr|^{2} \\ &\quad=R(h,\lambda,\mu,\widehat{N}_{n})\cdot \overline{R(h,\lambda, \mu,\widehat{N}_{n})} \\ &\quad=\frac{1-\frac{1}{2}\mu^{2}h+\mu\sqrt{h}\widehat{N}_{n}+\frac{1}{2}\mu ^{2}h\widehat{N}_{n}^{2}}{1-\lambda h}\cdot\frac{1-\frac{1}{2}\bar{\mu}^{2}h+\bar{\mu}\sqrt{h}\widehat{N}_{n}+\frac{1}{2}\bar{\mu}^{2}h\widehat{N}_{n}^{2}}{1-\bar{\lambda}h} \\ &\quad=\frac{1-h(\operatorname{Re}\mu)^{2}+h(\operatorname{Im}\mu)^{2}+\frac{1}{4}h^{2}|\mu |^{4}+(2\sqrt{h}\operatorname{Re}\mu-h\sqrt{h}|\mu|^{2}\operatorname{Re}\mu)\widehat {N}_{n}}{1-2h\operatorname{Re}\lambda+h^{2}|\lambda|^{2}} \\ &\qquad{}+\frac{ (h(\operatorname{Re}\mu)^{2}-h(\operatorname{Im}\mu)^{2}+h|\mu|^{2}-\frac {1}{2}h^{2}|\mu|^{4} )\widehat{N}_{n}^{2}}{ 1-2h\operatorname{Re}\lambda+h^{2}|\lambda|^{2}} \\ &\qquad{}+\frac{h\sqrt{h}|\mu|^{2}\operatorname{Re}\mu\widehat{N}_{n}^{3}+\frac{1}{4}h^{2}|\mu |^{4}\widehat{N}_{n}^{4}}{1-2h\operatorname{Re}\lambda+h^{2}|\lambda|^{2}}. \end{aligned}$$
Applying \(\mathbb{E}(\widehat{N}_{n})=\mathbb{E}(\widehat{N}_{n}^{3})=0\), \(\mathbb{E}(\widehat{N}_{n}^{2})=1\), and \(\mathbb{E}(\widehat{N}_{n}^{4})=3\) yields
$$\overline{R}(h,\lambda,\mu)=\mathbb{E} \bigl(\bigl|R(h,\lambda,\mu,\widehat {N}_{n})\bigr|^{2} \bigr) =\frac{1+h|\mu|^{2}+\frac{1}{2}h^{2}|\mu|^{4}}{1-2h\operatorname{Re}\lambda +h^{2}|\lambda|^{2}}. $$
This together with Definition 4.1 implies that the method is MS-stable if
$$2\operatorname{Re}\lambda+|\mu|^{2}+\frac{1}{2}h| \mu|^{4}-h|\lambda|^{2}< 0. $$
 □
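The stability function derived above is easy to sanity-check numerically. The following sketch (our own illustration, restricted to real λ and μ) compares the closed-form \(\overline{R}(h,\lambda,\mu)\) of the IM method with a Monte Carlo average of \(|R(h,\lambda,\mu,\widehat{N}_{n})|^{2}\) over samples \(\widehat{N}_{n}\sim N(0,1)\); the function names, sample count, and seed are arbitrary choices.

```python
import numpy as np

def R_bar_im(h, lam, mu):
    """Closed-form MS-stability function of the IM method (real lam, mu)."""
    return (1 + h * mu**2 + 0.5 * h**2 * mu**4) / (1 - lam * h)**2

def R_bar_im_mc(h, lam, mu, n_samples=400_000, seed=0):
    """Monte Carlo estimate of E|R(h, lam, mu, N_hat)|^2 from the
    one-step amplification factor of the IM method."""
    N_hat = np.random.default_rng(seed).standard_normal(n_samples)
    R = (1 - 0.5 * mu**2 * h + mu * np.sqrt(h) * N_hat
         + 0.5 * mu**2 * h * N_hat**2) / (1 - lam * h)
    return np.mean(R**2)
```

For example, with \(\lambda=-20\), \(\mu=5\), \(h=\frac{1}{2}\), both values come out to about 0.76, consistent with MS-stability for this step size.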

It turns out that the proposed scheme is not mean-square A-stable, in the sense that its mean-square stability domain does not contain the stability domain of the exact solution (compare (4.3) and (4.6)). Moreover, the stability condition of Theorem 4.2 is not very convenient in practical applications. As immediate consequences of Theorem 4.2, the following corollaries provide more convenient stability conditions.

Corollary 4.3

Let \(\lambda, \mu\in\mathbb{R}\) such that \(2 \lambda+ \sqrt{2} |\mu|^{2}<0\). Then the test problem (4.1) is MS-stable and the proposed method is MS-stable for any step size \(h >0\).


Proof

The desired assertion follows directly from (4.3) and (4.6). □

Based on the above observations, we believe that the new scheme is well suited for stiff mean-square stable problems with moderate (small) stochastic noise intensity or additive noise, where the drift coefficient plays an essential role in the dynamics.

Corollary 4.4

Suppose that \(2\operatorname{Re}\lambda+|\mu|^{2}<0\) and \(|\operatorname{Re}\lambda|\leq |\operatorname{Im}\lambda|\). Then the IM method is MS-stable for any step size \(h>0\).


Proof

Applying the condition \(2\operatorname{Re}\lambda+|\mu|^{2}<0\) yields
$$ | \mu|^{4}< 4(\operatorname{Re}\lambda)^{2}. \tag{4.10} $$
Using (4.10) and the condition \(|\operatorname{Re}\lambda|\leq|\operatorname {Im}\lambda|\), one can obtain
$$\begin{aligned} 2\operatorname{Re}\lambda+|\mu|^{2}+\tfrac{1}{2}h| \mu|^{4}-h|\lambda|^{2}&< 2\operatorname {Re}\lambda+| \mu|^{2}+2h(\operatorname{Re}\lambda)^{2}-h| \lambda|^{2} \\ &=2\operatorname{Re}\lambda+|\mu|^{2}+h \bigl((\operatorname{Re} \lambda)^{2}-(\operatorname {Im}\lambda)^{2} \bigr) < 0. \end{aligned}$$
By Theorem 4.2, the IM scheme is MS-stable. □

Remark 4.5

For fixed h, λ, μ, it is well known [33] that the usual explicit Milstein method is MS-stable if
$$ 2\operatorname{Re}\lambda+|\mu|^{2}+\frac{1}{2}h| \mu|^{4}+h|\lambda|^{2}< 0. \tag{4.12} $$
Clearly, (4.12) implies (4.6). In other words, the MS-stability region of the IM method contains that of the Milstein method. From Figure 1, one can easily observe that the stability domain of the proposed scheme is much larger than that of the explicit Milstein method.
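Conditions (4.6) and (4.12) are straightforward to compare programmatically for real parameters. The sketch below (our own illustration) encodes both conditions; on the stiff parameters \(\lambda=-20\), \(\mu=5\) used in Section 5, it confirms that for \(h=1\) the IM method is MS-stable while the explicit Milstein method is not.

```python
def ms_stable_im(h, lam, mu):
    """MS-stability condition (4.6) of the IM method (real lam, mu)."""
    return 2 * lam + mu**2 + 0.5 * h * mu**4 - h * lam**2 < 0

def ms_stable_milstein(h, lam, mu):
    """MS-stability condition (4.12) of the explicit Milstein method."""
    return 2 * lam + mu**2 + 0.5 * h * mu**4 + h * lam**2 < 0
```

Since the two conditions differ only in the sign of the \(h|\lambda|^{2}\) term, any \((h,\lambda,\mu)\) satisfying (4.12) also satisfies (4.6), which is exactly the containment of stability regions stated above.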
Figure 1 MS-stability regions of the IM and Milstein methods.

5 Numerical tests

In this section, three numerical experiments are reported to illustrate convergence properties and MS-stability properties of the IM method. In the following numerical experiments, to approximate the mean-square error at time \(T = N h\), given by
$$e_{h}^{\mathrm{strong}}:=\sqrt{\mathbb{E}\bigl|X(T)-Y_{N}\bigr|^{2}}, $$
we use averages over 10,000 paths, i.e., \(e_{h}^{\mathrm{strong}}\approx\sqrt{\tfrac{1}{10{,}000}\sum_{i=1}^{10{,}000} |Y_{N}^{(i)}-X^{(i)}(t_{N})|^{2}}\), where \(Y_{N}^{(i)}\) denotes the numerical approximation to \(X^{(i)}(t_{N})\) at the step point \(t_{N}\) in the ith simulation of all 10,000 paths.
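For the linear test equation \(dX=\lambda X\,dt+\mu X\,dW\) used below, the exact solution is available in closed form, so this error estimator can be sketched directly in Python (our own illustration; the function name, path count, and seed are arbitrary choices, and the IM step is written in the specialized linear form (4.7)). The exact solution and the scheme are driven by the same Brownian increments.

```python
import numpy as np

def strong_error_im(lam, mu, T, N, n_paths=2000, seed=1):
    """Estimate e_h = sqrt(E|X(T) - Y_N|^2) for the IM method applied to
    dX = lam*X dt + mu*X dW, X(0) = 1, averaging over n_paths paths."""
    rng = np.random.default_rng(seed)
    h = T / N
    dW = rng.normal(0.0, np.sqrt(h), size=(n_paths, N))
    Y = np.ones(n_paths)
    for n in range(N):
        dw = dW[:, n]
        # Milstein predictor, then the IM corrector in linear form (4.7)
        y_bar = Y + h * lam * Y + mu * Y * dw + 0.5 * mu**2 * Y * (dw**2 - h)
        Y = y_bar + (lam * h / (1 - lam * h)) * (y_bar - Y)
    # exact solution driven by the same Brownian path
    X_T = np.exp((lam - 0.5 * mu**2) * T + mu * dW.sum(axis=1))
    return np.sqrt(np.mean((X_T - Y)**2))
```

Halving h should roughly halve the estimated error, reflecting strong order one.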

Example 1

Consider the scalar test equation
$$ \left \{ \textstyle\begin{array}{@{}l} dX(t)=\lambda X(t)\,dt+\mu X(t)\,dW(t),\quad 0< t\leq T, \\ X(0)=1, \end{array}\displaystyle \right . \tag{5.1} $$
with two groups of parameters as follows:
  • parameter I: \(\lambda=2\), \(\mu=1\),

  • parameter II: \(\lambda=-20\), \(\mu=5\).

As a test of mean-square convergence, we apply the IM method to equation (5.1) on the interval \([0,1]\) with parameter set I. To visualize the strong convergence order of the IM method, the resulting mean-square errors against h are plotted on a log-log scale in Figure 2; this produces the blue asterisks connected by solid lines. For reference, a dashed red line of slope one is also added. The slopes of the two curves match well: as expected, the IM method gives errors that decrease proportionally to h.
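This slope-one behavior can be reproduced in a few lines. The sketch below estimates the strong order of the classical Milstein scheme on (5.1) with parameter I (as a stand-in for the IM update, which is not restated in this section) by a least-squares fit of \(\log e_{h}\) against \(\log h\):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu, T, n_paths = 2.0, 1.0, 1.0, 5_000
errs, hs = [], []
for k in range(4, 9):                      # h = 2^-4, ..., 2^-8
    N = 2**k
    h = T / N
    dW = rng.normal(0.0, np.sqrt(h), size=(n_paths, N))
    Y = np.ones(n_paths)
    for n in range(N):
        dw = dW[:, n]
        Y += lam * Y * h + mu * Y * dw + 0.5 * mu**2 * Y * (dw**2 - h)
    X_T = np.exp((lam - 0.5 * mu**2) * T + mu * dW.sum(axis=1))   # exact solution
    errs.append(np.sqrt(np.mean((X_T - Y) ** 2)))
    hs.append(h)

# least-squares slope of log(err) against log(h); strong order one gives slope near 1
slope = np.polyfit(np.log(hs), np.log(errs), 1)[0]
```

Plotting `errs` against `hs` on doubly logarithmic axes reproduces the qualitative picture of Figure 2.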
Figure 2: Mean-square convergence order of the IM method.

In Table 1, we list the mean-square errors of the IM and Milstein methods for equation (5.1) with parameter set I. Table 1 shows that the computational accuracy of the IM method is, to some degree, improved even for the non-stiff problem. Table 2 highlights that, for the stiff problem, the Milstein method works unreliably and produces large errors unless the step size is sufficiently small, whereas the IM method works very well. Figure 3 depicts the mean-square error behavior of the IM method for equation (5.1) with parameter set II; note that the vertical axis in Figure 3 is logarithmically scaled.
Table 1: Mean-square errors for (5.1) with \(\lambda=2\), \(\mu=1\) (step sizes \(h=2^{-5},2^{-6},2^{-7},2^{-8},2^{-9}\)).

Figure 3: Mean-square errors of (5.1) (\(\lambda=-20\), \(\mu=5\)) for the IM method with various time step sizes h.

Table 2: Mean-square errors for (5.1) with \(\lambda=-20\), \(\mu=5\) (step sizes \(h=2^{-1},2^{-2},2^{-3},2^{-4}\)).

To test the MS-stability of the IM method, we numerically solve (5.1) over \([0, 20]\) with parameter set II, which satisfies (4.3), so the test problem itself is MS-stable. We apply the IM and Milstein methods over 10,000 discrete Brownian paths for three step sizes \(h=1,\frac{1}{2},\frac{1}{4}\). Figure 4 plots the sample average of \(Y_{j}^{2}\) against \(t_{j}\) for both methods; note that the vertical axis is logarithmically scaled. In the upper picture, the curves decay toward zero for all of \(h=1,\frac{1}{2},\frac{1}{4}\), which demonstrates that the IM scheme is MS-stable for these three step sizes. By contrast, as shown in the lower picture of Figure 4, the Milstein method gives unstable numerical solutions for \(h=1,\frac{1}{2},\frac{1}{4}\).
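The Milstein half of this experiment is easy to reproduce. The following sketch (classical Milstein only, since the IM update is not restated in this section; names are ours) shows the sample average of \(Y_{N}^{2}\) exploding for a step size violating (4.12) while decaying once the step size satisfies it:

```python
import numpy as np

def milstein_second_moment(lam, mu, h, T, n_paths, rng):
    """Sample average of Y_N^2 for the classical Milstein scheme applied to
    dX = lam*X dt + mu*X dW, X(0) = 1, over [0, T]."""
    N = int(round(T / h))
    Y = np.ones(n_paths)
    for _ in range(N):
        dw = rng.normal(0.0, np.sqrt(h), size=n_paths)
        Y = Y + lam * Y * h + mu * Y * dw + 0.5 * mu**2 * Y * (dw**2 - h)
    return np.mean(Y**2)

rng = np.random.default_rng(42)
lam, mu = -20.0, 5.0    # parameter II: the test problem itself is MS-stable
# h = 1/4 violates (4.12) (which requires h < 3/142.5 here): second moment blows up
big = milstein_second_moment(lam, mu, h=0.25, T=5.0, n_paths=2_000, rng=rng)
# h = 1/64 satisfies (4.12): second moment decays toward zero
small = milstein_second_moment(lam, mu, h=1 / 64, T=5.0, n_paths=2_000, rng=rng)
```

The gap between `big` and `small` spans many orders of magnitude, matching the lower picture of Figure 4.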
Figure 4: MS-stability tests for the IM and Milstein methods.

In order to offer further insight into the above stability results, we restrict ourselves to \(\lambda,\mu\in\mathbb{R}\) in (5.1) and plot MS-stability regions of the IM and Milstein methods in Figure 1. As shown there, the MS-stability region of the IM method is much larger than that of the Milstein method.

Example 2

Consider a 2-dimensional stiff linear SDE system
$$ dX(t)=UX(t)\,dt+VX(t)\,dW(t),\qquad X(0)=X_{0},\quad t\in[0,1], $$
where U and V are matrices defined by
$$ U= \begin{pmatrix} -u & u \\ u & -u \end{pmatrix},\qquad V= \begin{pmatrix} v & 0 \\ 0 & v \end{pmatrix}. $$
The exact solution of this equation is given by [15]
$$X(t)=P \begin{pmatrix} \exp(\rho^{+}(t)) & 0 \\ 0 & \exp(\rho^{-}(t)) \end{pmatrix} P^{-1}X_{0},\qquad P= \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, $$
where \(\rho^{\pm}(t)=(-u-\frac{1}{2}v^{2}\pm u)t+vW(t)\), \(P^{-1}=P\), and we set \(X_{0}=(1,2)^{T}\). Note that the Lyapunov exponents of (5.2) are explicitly given by \(L_{1}=-\frac{v^{2}}{2}\), \(L_{2}=-\frac{v^{2}}{2}-2u\), so the stiffness comes from both the deterministic and the stochastic components (u and v, respectively) [15]. We choose the parameters \(u=80\), \(v=1\) in the numerical experiment, which gives \(L_{1}=-0.5\), \(L_{2}=-160.5\); the large gap between the Lyapunov exponents \(L_{1}\) and \(L_{2}\) makes the system (5.2) stiff [15]. The approximation errors obtained by applying the IM and Milstein methods to (5.2) are shown in Table 3. Clearly, the numerical results reported in Table 3 favor the IM method, which achieves higher accuracy and allows larger step sizes.
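The structure of this exact solution is straightforward to verify numerically: P is its own inverse and diagonalizes U with eigenvalues 0 and \(-2u\), and the stated Lyapunov exponents follow directly. A short Python check (names are ours):

```python
import numpy as np

u, v = 80.0, 1.0
U = np.array([[-u, u], [u, -u]])
P = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# P is orthogonal and its own inverse, and diagonalizes U
assert np.allclose(P @ P, np.eye(2))
assert np.allclose(P @ np.diag([0.0, -2 * u]) @ P, U)

def exact_solution(t, w, x0):
    """Exact solution of (5.2) for a prescribed Brownian value W(t) = w."""
    rho_plus = (-u - 0.5 * v**2 + u) * t + v * w
    rho_minus = (-u - 0.5 * v**2 - u) * t + v * w
    return P @ np.diag([np.exp(rho_plus), np.exp(rho_minus)]) @ P @ x0

x0 = np.array([1.0, 2.0])
L1, L2 = -v**2 / 2, -v**2 / 2 - 2 * u    # Lyapunov exponents: -0.5 and -160.5
```

At \(t=0\) the formula returns \(X_{0}\), and for large t the \(\exp(\rho^{-})\) mode dies off at the much faster rate \(L_{2}\), which is precisely the source of the stiffness.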
Table 3: Mean-square errors of the numerical values for (5.2)-(5.3) with \(u=80\), \(v=1\) (step sizes \(h=2^{-3},2^{-4},2^{-5},2^{-6}\)).

Example 3

Consider the following nonlinear problem:
$$ \left \{ \textstyle\begin{array}{@{}l} dX(t)=-\lambda X(t) (1-X(t) )\,dt-\mu X(t) (1-X(t) )\,dW(t),\quad 0< t\leq1,\\ X(0)=X_{0}, \end{array}\displaystyle \right . $$
which is a normalized version of a population dynamics model (see [11]). A linearization about the stationary solution \(X(t)\equiv1\) leads to the linear test problem (4.1). In the experiment, we choose parameters \(\lambda<-1\), \(\mu=-\sqrt{-2(1+\lambda)}\), \(X(0)=0.9\). Increasing \(|\lambda|\) increases the stiffness in both the drift and the diffusion term, while this choice ensures that \(\lambda+\frac{\mu^{2}}{2}=-1<0\), as required by (4.3). We choose \(\lambda=-2\) for the non-stiff case and \(\lambda=-15\) for the stiff case, with stiffness in both the drift and the diffusion coefficients. Because the exact solution \(X(t)\) of (5.4) is not available in analytic form, we solve (5.4) by the Euler-Maruyama (EM) method with a sufficiently small mesh size (here \(h=2^{-11}\)) and take its outcome as the ‘exact solution’ \(X(t)\) for the error comparison. Table 4 and Table 5 give the mean-square errors and the relative errors, respectively, of the numerical solutions for (5.4) with \(\lambda=-2\); the analogous results for \(\lambda=-15\) are listed in Table 6 and Table 7. The numerical results in Tables 4, 5, 6, and 7 indicate that, even for a nonlinear equation, the computational accuracy of the IM method is improved for both the non-stiff and the stiff problem.
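For readers who wish to reproduce the setting, a classical Milstein discretization of (5.4) is sketched below for the non-stiff case \(\lambda=-2\) (the IM update is not restated in this section, so this is only a stand-in; names are ours). Here \(f(x)=-\lambda x(1-x)\), \(g(x)=-\mu x(1-x)\), and \(g'(x)=-\mu(1-2x)\):

```python
import numpy as np

lam = -2.0
mu = -np.sqrt(-2 * (1 + lam))    # = -sqrt(2), so that lam + mu**2/2 = -1 < 0

def f(x):
    return -lam * x * (1 - x)    # drift

def g(x):
    return -mu * x * (1 - x)     # diffusion

def dg(x):
    return -mu * (1 - 2 * x)     # derivative of the diffusion

rng = np.random.default_rng(7)
T, N, n_paths = 1.0, 2**7, 2_000
h = T / N
X = np.full(n_paths, 0.9)        # X(0) = 0.9
for _ in range(N):
    dw = rng.normal(0.0, np.sqrt(h), size=n_paths)
    # Milstein step: Euler terms plus the 0.5*g*g'*(dW^2 - h) correction
    X = X + f(X) * h + g(X) * dw + 0.5 * g(X) * dg(X) * (dw**2 - h)

# the paths are attracted to the stationary solution X = 1
mean_T = X.mean()
```

Since both coefficients vanish at \(X=1\) and the linearization there is mean-square stable, the sample mean at \(T=1\) lies close to, but below, one.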
Table 4: Mean-square errors of the numerical solutions for (5.4) with \(\lambda=-2\) (step sizes \(h=2^{-3},2^{-4},2^{-5},2^{-6}\)).

Table 5: Relative errors of the numerical solutions for (5.4) with \(\lambda=-2\) (step sizes \(h=2^{-3},2^{-4},2^{-5},2^{-6}\)).

Table 6: Mean-square errors of the numerical solutions for (5.4) with \(\lambda=-15\) (step sizes \(h=2^{-6},2^{-7},2^{-8},2^{-9}\)).

Table 7: Relative errors of the numerical solutions for (5.4) with \(\lambda=-15\) (step sizes \(h=2^{-6},2^{-7},2^{-8},2^{-9}\)).

6 Conclusions

This work has proposed the IM method for solving stiff stochastic differential equations. The method is derived by adding a correction term to the classical Milstein method and is easy to implement. Moreover, strong convergence of order one and good MS-stability properties have been established for the scheme; in particular, the IM method has a larger MS-stability region than the classical Milstein method. Numerical results confirm that the IM method is computationally effective and superior to the Milstein method for solving stiff SDE systems.

In this work, we have assumed throughout that the drift and diffusion functions satisfy global Lipschitz conditions (cf. (3.1)), which excludes many important model equations in applications. A natural future direction is therefore to establish the strong convergence rate of the IM scheme for SDEs under a non-global Lipschitz condition, as studied in [34].



Acknowledgements

The authors would like to thank the anonymous referees for their valuable and insightful comments, which have improved the paper. This work is supported by the National Natural Science Foundation of China (No. 11171352, No. 11571373, No. 11371123), the New Teachers’ Specialized Research Fund for the Doctoral Program from Ministry of Education of China (No. 20120162120096) and Mathematics and Interdisciplinary Sciences Project, Central South University.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

School of Mathematics and Statistics, Central South University, Changsha, Hunan, 410083, China
School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang, Henan, 410083, China


References

  1. Bodo, BA, Thompson, ME, Unny, TE: A review on stochastic differential equations for application in hydrology. Stoch. Hydrol. Hydraul. 1(2), 81-100 (1987)
  2. Boucher, DH: The Biology of Mutualism. Oxford University Press, New York (1985)
  3. Beretta, E, Kolmanovskii, VB, Shaikhet, L: Stability of epidemic model with time delays influenced by stochastic perturbations. Math. Comput. Simul. 45(3-4), 269-277 (1998)
  4. Gillespie, DT: Stochastic simulation of chemical kinetics. Annu. Rev. Phys. Chem. 58, 35-55 (2007)
  5. Platen, E, Bruti-Liberati, N: Numerical Solutions of Stochastic Differential Equations with Jumps in Finance. Springer, Berlin (2010)
  6. Wilkinson, DJ: Stochastic modelling for quantitative description of heterogeneous biological systems. Nat. Rev. Genet. 10, 122-133 (2009)
  7. Maruyama, G: Continuous Markov processes and stochastic equations. Rend. Circ. Mat. Palermo 4(1), 48-90 (1955)
  8. Milstein, GN: Approximate integration of stochastic differential equations. Theory Probab. Appl. 19(3), 557-562 (1975)
  9. Rümelin, W: Numerical treatment of stochastic differential equations. SIAM J. Numer. Anal. 19(3), 604-613 (1982)
  10. Kloeden, PE, Platen, E, Schurz, H: The numerical solution of nonlinear stochastic dynamical systems: a brief introduction. Int. J. Bifurc. Chaos 1(2), 277-286 (1991)
  11. Gard, TC: Introduction to Stochastic Differential Equations. Dekker, New York (1988)
  12. Kahl, C, Schurz, H: Balanced Milstein methods for ordinary SDEs. Monte Carlo Methods Appl. 12(2), 143-164 (2006)
  13. Buckwar, E, Kelly, C: Towards a systematic linear stability analysis of numerical methods for systems of stochastic differential equations. SIAM J. Numer. Anal. 48(1), 298-321 (2010)
  14. Buckwar, E, Sickenberger, T: A comparative linear mean-square stability analysis of Maruyama- and Milstein-type methods. Math. Comput. Simul. 81(6), 1110-1127 (2011)
  15. Kloeden, PE, Platen, E: The Numerical Solution of Stochastic Differential Equations. Springer, Berlin (1999)
  16. Milstein, GN, Platen, E, Schurz, H: Balanced implicit methods for stiff stochastic systems. SIAM J. Numer. Anal. 35(3), 1010-1019 (1998)
  17. Alcock, J, Burrage, K: A note on the balanced method. BIT Numer. Math. 46(4), 689-710 (2006)
  18. Omar, MA, Aboul-Hassan, A, Rabia, SI: The composite Milstein methods for the numerical solution of Stratonovich stochastic differential equations. Appl. Math. Comput. 215(2), 727-745 (2009)
  19. Burrage, K, Tian, T: The composite Euler method for stiff stochastic differential equations. J. Comput. Appl. Math. 131(1-2), 407-426 (2001)
  20. Burrage, K, Tian, T: Implicit stochastic Runge-Kutta methods for stochastic differential equations. BIT Numer. Math. 44(1), 21-39 (2004)
  21. Wang, X, Gan, S, Wang, D: A family of fully implicit Milstein methods for stiff stochastic differential equations with multiplicative noise. BIT Numer. Math. 52(3), 741-772 (2012)
  22. Tian, T, Burrage, K: Implicit Taylor methods for stiff stochastic differential equations. Appl. Numer. Math. 38(1-2), 167-185 (2001)
  23. Abdulle, A, Cirilli, S: Stabilized methods for stiff stochastic systems. C. R. Math. Acad. Sci. Paris 345(10), 593-598 (2007)
  24. Schurz, H: A brief introduction to numerical analysis of (ordinary) stochastic differential equations without tears. Institute for Mathematics and its Applications, Minneapolis, 1-161 (1999)
  25. Abdulle, A, Cirilli, S: S-ROCK: Chebyshev methods for stiff stochastic differential equations. SIAM J. Sci. Comput. 30(2), 997-1014 (2008)
  26. Abdulle, A, Li, T: S-ROCK methods for stiff Itô SDEs. Commun. Math. Sci. 6(4), 845-868 (2008)
  27. Artemiev, S, Averina, A: Numerical Analysis of Systems of Ordinary and Stochastic Differential Equations. de Gruyter, Berlin (1997)
  28. Kim, P, Piao, X, Kim, SD: An error corrected Euler method for solving stiff problems based on Chebyshev collocation. SIAM J. Numer. Anal. 48, 1759-1780 (2011)
  29. Milstein, GN, Tretyakov, MV: Stochastic Numerics for Mathematical Physics. Springer, Berlin (2004)
  30. Mao, X: Stochastic Differential Equations and Their Applications. Horwood Publishing, Chichester (1997)
  31. Milstein, GN: A theorem on the order of convergence of mean square approximations of solutions of systems of stochastic differential equations. Theory Probab. Appl. 32(4), 738-741 (1988)
  32. Saito, Y, Mitsui, T: Stability analysis of numerical schemes for stochastic differential equations. SIAM J. Numer. Anal. 33(6), 2254-2267 (1996)
  33. Higham, DJ: A-stability and stochastic mean-square stability. BIT Numer. Math. 40(2), 404-409 (2000)
  34. Neuenkirch, A, Szpruch, L: First order strong approximations of scalar SDEs defined in a domain. Numer. Math. 128(1), 103-136 (2014)


© Yin and Gan 2015