
Continuous stage stochastic Runge–Kutta methods

Abstract

In this work, a version of continuous stage stochastic Runge–Kutta (CSSRK) methods is developed for stochastic differential equations (SDEs). First, a general order theory for these methods is established via stochastic B-series and multicolored rooted trees. The proposed CSSRK methods are then applied to three special classes of SDEs, and the corresponding order conditions are derived. In particular, for single integrand SDEs and SDEs with additive noise, we construct specific CSSRK methods of high order. Moreover, it is proved that, with the help of suitable numerical quadrature formulas, CSSRK methods generate corresponding stochastic Runge–Kutta (SRK) methods of the same order. In this way, some efficient SRK methods are obtained. Finally, numerical experiments are presented to confirm the theoretical results.

Introduction

Stochastic differential equations (SDEs) have wide applications in many disciplines such as biology, economics, medicine, engineering and finance (see, e.g., [1–3]). However, most SDEs arising in practice are nonlinear and cannot be solved explicitly. There has been tremendous interest in developing effective and reliable numerical methods for SDEs during the last few decades; see, for example, [4–14]. Runge–Kutta (RK) methods with continuous stage were first presented by Butcher in the 1970s [15], and they have recently been investigated by several authors because of their great advantages in conserving symplecticity [16], preserving energy [17], and so on. Constructing continuous stage stochastic Runge–Kutta (CSSRK) methods for SDEs is therefore a valuable task. This paper mainly aims to construct CSSRK methods for SDEs, which can be written in integral form as

$$ \mathrm{d}x(t)=g_{0} \bigl(x(t) \bigr)\, \mathrm{d}t+ \sum_{k=1}^{d} g_{k} \bigl(x(t) \bigr)\circ \mathrm{d}W_{k}(t),\quad x(0)=x_{0}\in \mathbb{R}^{m},$$
(1.1)

with a d-dimensional Wiener process \((W(t) )_{t\geq 0}=((W_{1}(t),\ldots ,W_{d}(t))^{T})_{t\geq 0}\). Here \(g_{k}:\mathbb{R}^{m}\rightarrow \mathbb{R}^{m}\), \(k=0,1,\ldots ,d\), are Borel measurable functions satisfying a Lipschitz condition and a linear growth condition, and \(x(t)\) is the stochastic process denoting the solution of (1.1).
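
For concreteness, Stratonovich SDEs of the form (1.1) with scalar noise can be integrated by the classical Euler–Heun predictor–corrector scheme, which serves as a simple baseline for the CSSRK methods constructed below. The following sketch is ours and not part of the paper; the function name is illustrative, and the scheme is driven by a prescribed sequence of Wiener increments:

```python
def euler_heun_path(f, g, x0, dWs, h):
    """Euler-Heun scheme for dx = f(x) dt + g(x) o dW (Stratonovich, scalar
    noise), driven by a list of Wiener increments dWs with step size h."""
    x = x0
    for dW in dWs:
        # Euler predictor, then trapezoidal corrector in both f and g
        x_pred = x + h * f(x) + dW * g(x)
        x = x + 0.5 * h * (f(x) + f(x_pred)) + 0.5 * dW * (g(x) + g(x_pred))
    return x
```

For the linear test equation \(\mathrm{d}x=x\circ \mathrm{d}W\), the exact Stratonovich solution is \(x(t)=x_{0}e^{W(t)}\), which this scheme reproduces to first order.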

In engineering applications, the convergence order of the numerical method employed plays an important role in efficiency, and B-series are a fundamental tool for error analysis on a wide range of problems. Therefore, a general order theory for CSSRK methods based on stochastic B-series and multicolored rooted trees is developed. Then, for the SDE (1.1) with \(d=1\), we discuss the order conditions and construct CSSRK methods of order 1.0. Furthermore, CSSRK methods are also constructed for problems with special structure, such as single integrand SDEs and SDEs with additive noise.

Single integrand SDEs are given by

$$ \mathrm{d}x(t)=\lambda g(x)\,\mathrm{d}t+\sum _{k=1}^{d}\sigma _{k}g(x) \circ \mathrm{d}W_{k}(t),\quad x(0)=x_{0}\in \mathbb{R}^{m}, $$
(1.2)

where \(\lambda \in \{0,1\}\) and \(\sigma _{k}\in \mathbb{R}\), \(k=1,2,\ldots ,d\), are given constants. Some well-known examples of single integrand SDEs are the SDE describing fatigue cracking [18] and the stochastic Van der Pol equation [19]. A theory for the derivation of B-series of the numerical approximation by CSSRK methods is given. Due to the single integrand, it is similar to the ordinary differential equation (ODE) case [20]. Deterministic continuous stage Runge–Kutta (DCSRK) methods of deterministic order \(p_{d}\) correspond to CSSRK methods of mean-square order \(p_{\mu }=\lfloor p_{d}/2\rfloor \), which can be viewed as the stochastic generalization of the DCSRK methods.

We also consider problems with additive noise, that is,

$$ \mathrm{d}x(t)=f \bigl(x(t) \bigr)\,\mathrm{d}t+\sum _{k=1}^{d}\sigma _{k} \circ \mathrm{d}W_{k}(t),\quad x(0)=x_{0}\in \mathbb{R}^{m}, $$
(1.3)

where \(\sigma _{k}\), \(k=1,2,\ldots ,d\), are constants. Based on stochastic B-series, we propose a class of completely derivative-free CSSRK methods for (1.3), which can attain mean-square order 1.5. We then focus on special second-order systems with additive noise of the following form:

$$ \ddot{x}(t)=f(x)+\sum_{k=1}^{d} \sigma _{k}\circ \dot{ W_{k}}(t),\quad x(0)=x_{0} \in \mathbb{R}^{m},\ \dot{x}(0)=y_{0} \in \mathbb{R}^{m}, $$
(1.4)

for which the corresponding CSSRK methods can attain mean-square order 2.0.

Interestingly, it is verified that a great number of SRK methods can be derived from the CSSRK methods by adopting different numerical quadrature formulas. Furthermore, conditions on the algebraic precision of the quadrature formulas and on the degree of the coefficient polynomials of the CSSRK methods are analyzed which guarantee that the resulting SRK methods share the order of the underlying CSSRK methods. In this way, some efficient SRK methods are obtained.

The paper is organized as follows. In the next section, a family of CSSRK methods is proposed, and the general order conditions are derived based on multicolored rooted trees and stochastic B-series. SRK methods of strong global order up to 1.0 are proposed in Sect. 3. In Sect. 4, we consider a particular class of Stratonovich SDEs, the single integrand SDEs. Section 5 is devoted to CSSRK methods for SDEs with additive noise. Finally, numerical experiments are reported in Sect. 6.

CSSRK methods and their order conditions

In this section, we first introduce the definition of CSSRK methods and then present some results that are useful for deriving the order theory.

CSSRK methods

We consider the following methods. Given \(y_{0}=x_{0}\) and a step size \(h>0\), let \(A_{\tau ,\xi }^{(i,k)}\) be a polynomial in τ and ξ, and let \(B_{\tau }^{(i,k)}\) be a polynomial in τ. The one-step method \(\Phi _{h}:y_{0}\rightarrow y_{1}\) is given by

$$\begin{aligned}& Y_{\tau }=y_{0}+ \sum _{k=0}^{d} \int _{0}^{1} \sum_{i=0} ^{\hat{r}}\bar{\phi }_{i,k}A_{\tau ,\xi }^{(i,k)}g_{k}(Y_{\xi }) \,\mathrm{d}\xi , \end{aligned}$$
(2.1a)
$$\begin{aligned}& y_{1}=y_{0}+ \sum _{k=0}^{d} \int _{0}^{1} \sum_{i=0}^{\hat{r}} \bar{\phi }_{i,k}B_{\tau }^{(i,k)} g_{k}(Y_{\tau })\,\mathrm{d}\tau , \end{aligned}$$
(2.1b)

with \(\bar{\phi }_{0,0}=h\), where the remaining \(\bar{\phi }_{i,k}\), \(k=0, \ldots ,d\), \(i=0,\ldots ,\hat{r}\), are random variables. Setting \(Z_{\tau ,\xi }^{(k)}= \sum_{i=0}^{\hat{r}}\bar{\phi }_{i,k}A_{ \tau ,\xi }^{(i,k)}\) and \(z_{\tau }^{(k)}=\sum_{i=0}^{\hat{r}}\bar{\phi }_{i,k}B_{ \tau }^{(i,k)}\), another representation is obtained,

$$\begin{aligned}& Y_{\tau }=y_{0}+ \sum _{k=0}^{d} \int _{0}^{1}Z_{ \tau ,\xi }^{(k)}g_{k}(Y_{\xi }) \,\mathrm{d}\xi , \end{aligned}$$
(2.2a)
$$\begin{aligned}& y_{1}=y_{0}+ \sum _{k=0}^{d} \int _{0}^{1}z_{ \tau }^{(k)}g_{k}(Y_{\tau }) \,\mathrm{d}\tau , \end{aligned}$$
(2.2b)

where the coefficients \(Z_{\tau ,\xi }^{(k)}\) and \(z_{\tau }^{(k)}\) contain random variables that depend on the step size h. This defines an m-dimensional approximation process \(y_{n}\) with \(y_{n}\approx x(t_{n})\), which is called a CSSRK method.

Order theory

The basic tool for constructing our numerical methods is the multicolored rooted tree theory of [21]. We therefore briefly list some definitions and theorems used later in constructing the numerical schemes. Our first goal is to find B-series representations of (2.2a)–(2.2b), where a B-series is written as

$$ B(\phi ,x_{0};h)=\sum_{t\in T}\alpha (t) \phi (t) (h)F(t) (x_{0}), $$

where T is the set of multicolored rooted trees, \(\alpha (t)\) are combinatorial coefficients, the elementary weight functions \(\phi (t)(h)\) are stochastic integrals or random variables, and \(F(t)(x_{0})\) are the elementary differentials.

Definition 2.1

(Trees and combinatorial coefficients)

The set of multicolored rooted trees

$$ T=\{\emptyset \}\cup T_{0}\cup T_{1}\cup \cdots \cup T_{d}, $$

is recursively defined as follows:

  1. (a)

    The graph \(\bullet _{k}=[\emptyset ]_{k}\) with only one vertex of color k belongs to \(T_{k}\). Let \(t=[t_{1},t_{2},\ldots ,t_{l}]_{k}\) be the tree formed by joining the subtrees \(t_{1}, t_{2}, \ldots , t_{l}\) each by a single branch to a common root of color k.

  2. (b)

    If \(t_{1}, t_{2}, \ldots , t_{l}\in T\), then \(t=[t_{1},t_{2},\ldots ,t_{l}]_{k}\in T_{k}\).

Thus, \(T_{k}\) is the set of trees with a k-colored root, and T is the union of these sets. Further, we define \(\alpha (t)\) as

$$ \alpha (\emptyset )=1, \qquad \alpha (\bullet _{k})=1,\qquad \alpha (\circ )=1,\qquad \alpha \bigl(t=[t_{1},t_{2},\ldots ,t_{l}]_{k} \bigr)= \frac{1}{r_{1}!r_{2}!\cdots r_{q}!}\prod _{j=1}^{l} \alpha (t_{j}), $$

where \(r_{1}, r_{2}, \ldots , r_{q}\) count equal trees among \(t_{1}, t_{2}, \ldots , t_{l}\).
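
As an illustration (ours, not part of the paper; the nested-tuple tree encoding is an assumption), the coefficient \(\alpha (t)\) of Definition 2.1 can be computed recursively for trees represented as `(color, children)` pairs, with equal subtrees detected via a canonical form:

```python
from fractions import Fraction
from math import factorial
from collections import Counter

def canon(tree):
    """Canonical form of a multicolored rooted tree (color, children-tuple),
    so that equal trees compare equal regardless of child ordering."""
    k, children = tree
    return (k, tuple(sorted(canon(c) for c in children)))

def alpha(tree):
    """Combinatorial coefficient alpha(t) of Definition 2.1, exact rational."""
    k, children = tree
    a = Fraction(1)
    for c in children:
        a *= alpha(c)
    # divide by r_1! r_2! ... r_q! for multiplicities of equal subtrees
    for r in Counter(canon(c) for c in children).values():
        a /= factorial(r)
    return a
```

For instance, \(\alpha ([\bullet _{0},\bullet _{0}]_{0})=1/2!=1/2\), since the two subtrees are equal.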

Definition 2.2

(Elementary differentials)

For a tree \(t\in T\), the elementary differential is a mapping \(F(t):\mathbb{R}^{m}\rightarrow \mathbb{R}^{m}\) defined recursively by

  1. (a)

    \(F(\emptyset )(x_{0})=x_{0}\).

  2. (b)

    \(F (\bullet _{k} )(x_{0})=F ([\emptyset ]_{k} )(x_{0})=g_{k}(x_{0})\).

  3. (c)

    If \(t=[t_{1},t_{2},\ldots ,t_{l}]_{k}\in T_{k}\), then \(F(t)(x_{0})=g_{k}^{(l)}(x_{0}) (F(t_{1})(x_{0}),F(t_{2})(x_{0}),\ldots ,F(t_{l})(x_{0}) )\).

The next lemma shows that if \(Y(h)\) can be written as a B-series, then \(f (Y(h) )\) can be written as a similar series. The lemma is fundamental for deriving B-series of the exact and numerical solutions.

Lemma 2.1

([21])

Assume that \(Y(h)=B(\phi ,x_{0};h)\) is a stochastic B-series and \(f\in C^{\infty }(\mathbb{R}^{m}, \mathbb{R}^{\hat{m}})\). Then \(f (Y(h) )\) can be written as

$$ f \bigl(Y(h) \bigr)= \sum_{u\in U_{f}} \beta (u)\Psi _{\phi (u)}(h)G(u) (x_{0}), $$
(2.3)

where \(U_{f}\) is a set of trees derived from T as follows.

  1. (a)

    \([{\emptyset }]_{f}\in U_{f}\), and if \(t_{1},t_{2}, \ldots ,t_{l}\in T\), then \([t_{1},t_{2},\ldots ,t_{l}]_{f}\in U_{f}\);

  2. (b)

    \(G ([\emptyset ]_{f} )(x_{0})=f(x_{0})\), \(G (u=[t_{1},t_{2},\ldots ,t_{l}]_{f} )(x_{0})=f^{(l)}(x_{0}) (F(t_{1})(x_{0}),\ldots ,F(t_{l})(x_{0}) )\);

  3. (c)

    \(\beta (u=[t_{1},t_{2},\ldots ,t_{l}]_{f} )= \frac{1}{r_{1}!r_{2}! \cdots r_{q}!}\prod_{i=1}^{l}\alpha (t_{i})\);

  4. (d)

    \(\Psi _{\phi } ([\emptyset ]_{f} )(h)=1\), \(\Psi _{\phi } (u=[t_{1},t_{2},\ldots ,t_{l}]_{f} )(h)= \prod_{i=1}^{l}\phi (t_{i})(h)\).

Lemma 2.2

([21])

The exact solution \(x(h)\) of (1.1) can be written as a B-series \(x(h)=B(\phi ,x_{0};h)\) with

$$\begin{aligned}& \phi (\emptyset ) (h)=1, \qquad \phi (\bullet _{k}) (h)=W_{k}(h), \\& \phi \bigl(t=[t_{1},t_{2},\ldots ,t_{l}]_{k} \bigr) (h)= \int _{0}^{h} \prod _{i=1}^{l}\phi (t_{i}) (s)\circ \mathrm{d}W_{k}(s). \end{aligned}$$
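
The elementary weights above are iterated Stratonovich integrals; for the tree \(t=[\bullet _{1}]_{1}\), for example, \(\phi (t)(h)=\int _{0}^{h}W_{1}\circ \mathrm{d}W_{1}=W_{1}(h)^{2}/2\). As an illustration (ours, not part of the paper), the midpoint (Stratonovich) Riemann sums for this integrand telescope, so the identity already holds exactly at the discrete level:

```python
def stratonovich_sum(W):
    """Midpoint (Stratonovich) approximation of \\int W o dW along a discrete
    path W = [W_0, W_1, ..., W_N]; each term is (W_i + W_{i+1})/2 * dW_i."""
    return sum(0.5 * (W[i] + W[i + 1]) * (W[i + 1] - W[i])
               for i in range(len(W) - 1))
```

Since each term equals \((W_{i+1}^{2}-W_{i}^{2})/2\), the sum is \(W_{N}^{2}/2\) for any path, in contrast to the Itô (left-point) sum.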

A similar result can be found for the numerical solution of (1.1) by the CSSRK methods (2.2a)–(2.2b).

Theorem 2.1

The numerical solution \(y_{1}\) as well as the continuous stage values \(Y_{\tau }\) can be written in terms of B-series

$$ Y_{\tau }=B(\Phi _{\tau },y_{0};h),\qquad y_{1}=B(\varphi ,y_{0};h), $$

for \(\tau \in [0,1]\), with

$$\begin{aligned}& \begin{aligned} &\Phi _{\tau }({\emptyset }) (h)=1, \qquad \Phi _{\tau }(\bullet _{k}) (h)= \int _{0}^{1}Z_{ \tau ,\xi }^{(k)} \,\mathrm{d}\xi , \\ &\Phi _{\tau } \bigl(t=[t_{1},t_{2},\ldots ,t_{l}]_{k} \bigr) (h)= \int _{0}^{1} Z_{\tau ,\xi }^{(k)} \prod_{i=1}^{l}\Phi _{\xi }(t_{i}) (h)\,\mathrm{d}\xi , \end{aligned} \end{aligned}$$
(2.4)
$$\begin{aligned}& \begin{aligned} &\varphi ({\emptyset }) (h)=1,\qquad \varphi ( \bullet _{k}) (h)= \int _{0}^{1}z_{\tau }^{(k)} \,\mathrm{d}\tau , \\ &\varphi \bigl(t=[t_{1},t_{2},\ldots ,t_{l}]_{k} \bigr) (h)= \int _{0}^{1} z_{\tau }^{(k)} \prod_{i=1}^{l}\Phi _{\tau }(t_{i}) (h)\,\mathrm{d}\tau . \end{aligned} \end{aligned}$$
(2.5)

Proof

Write \(Y_{\tau }\) as a B-series, that is,

$$ Y_{\tau }= \sum_{t\in T}\alpha (t)\Phi _{\tau }(t) (h)F(t) (y_{0}), $$

for \(\tau \in [0,1]\). Using the definition of the method (2.2a), Lemma 2.1, and a term-by-term comparison, we obtain

$$ \begin{aligned} Y_{\tau }&=y_{0}+ \sum _{k=0}^{d} \int _{0}^{1}Z_{\tau ,\xi }^{(k)}g_{k}(Y_{\xi }) \,\mathrm{d}\xi \\ &=y_{0}+ \sum_{k=0}^{d} \int _{0}^{1}Z_{\tau ,\xi }^{(k)}g_{k} \bigl(B(\Phi _{\xi },y_{0};h) \bigr)\,\mathrm{d}\xi \\ &=y_{0}+ \sum_{k=0}^{d} \int _{0}^{1}Z_{\tau ,\xi }^{(k)} \sum_{t\in T_{k}}\alpha (t)\Phi ^{\prime }_{\xi ,k}(t) (h)F(t) (y_{0}) \,\mathrm{d}\xi \\ &=y_{0}+ \sum_{k=0}^{d} \int _{0}^{1} \sum_{t \in T_{k}} \alpha (t) \bigl(Z_{\tau ,\xi }^{(k)}\Phi _{\xi ,k}^{\prime }(t) (h) \bigr)\,\mathrm{d}\xi F(t) (y_{0}) \\ &=y_{0}+ \sum_{t\in T/\{{\emptyset }\}}\alpha (t) \Biggl( \int _{0}^{1}Z_{\tau ,\xi }^{(k)} \prod_{i=1}^{l}\Phi _{\xi }(t_{i}) (h)\,\mathrm{d}\xi \Biggr)F(t) (y_{0}) \\ &= \sum_{t\in T}\alpha (t)\Phi _{\tau }(t) (h)F(t) (y_{0}), \end{aligned} $$

where \(t=[t_{1},t_{2},\ldots ,t_{l}]_{k}\in T_{k}\). The proof of (2.5) is similar. □

Now the local order of accuracy of the CSSRK methods can be determined by comparing the B-series of the exact and numerical solutions. First, we need to define the order of a tree.

Definition 2.3

The order of a tree \(t\in T\) is defined by

$$ \rho (\emptyset )=0,\qquad \rho \bigl(t=[t_{1},t_{2}, \ldots ,t_{l}]_{k} \bigr)= \sum _{j=1}^{l} \rho (t_{j})+ \textstyle\begin{cases} 1,& \text{$k=0$}, \\ \frac{1}{2},& \text{$k\neq 0$}. \end{cases} $$
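
Using the same kind of nested-tuple encoding `(color, children)` as an illustration (ours, not the paper's notation), the tree order of Definition 2.3 is a short recursion: a deterministic node (color 0) contributes 1 and a stochastic node contributes 1/2:

```python
from fractions import Fraction

def rho(tree):
    """Order rho(t) of Definition 2.3 for a tree (color, children-tuple)."""
    k, children = tree
    own = Fraction(1) if k == 0 else Fraction(1, 2)
    return own + sum((rho(c) for c in children), Fraction(0))
```

For example, the tree \([\bullet _{1}]_{1}\) behind \(\int _{0}^{h}W_{1}\circ \mathrm{d}W_{1}\) has order \(1/2+1/2=1\).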

The following lemma relates the global order of accuracy to the local order. Here we assume that (2.5) is constructed such that \(\varphi (t)(h)=O(h^{\rho (t)})\) for all \(t\in T\), in the mean-square sense.

Lemma 2.3

([21])

The method has mean square global order p if

$$\begin{aligned}& \bigl(E \bigl\| \varphi (t) (h)-\phi (t) (h) \bigr\| ^{2} \bigr)^{\frac{1}{2}}=O \bigl(h^{p+ \frac{1}{2}} \bigr), \quad \forall t\in T, \rho (t) \leq p, \end{aligned}$$
(2.6)
$$\begin{aligned}& E \bigl(\varphi (t) (h) \bigr)=E \bigl(\phi (t) (h) \bigr)+O \bigl(h^{p+1} \bigr),\quad \forall t\in T, \rho (t)\leq p+ \frac{1}{2}. \end{aligned}$$
(2.7)

Here, the \(O(\cdot )\)-notation refers to \(h\rightarrow 0\) and, in (2.6), is understood in the \(L^{2}\)-norm.

Construction of CSSRK methods for SDEs with one random variable

Without loss of generality, we restrict our consideration in this part to autonomous systems, whose coefficients do not depend on t explicitly. We write

$$ \mathrm{d}x(t)=f \bigl(x(t) \bigr)\,\mathrm{d}t+g \bigl(x(t) \bigr)\circ \mathrm{d}W(t),\quad x(0)=x_{0}, $$
(3.1)

then the CSSRK methods for system (3.1) are given by

$$\begin{aligned}& Y_{\tau }=y_{0}+h \int _{0}^{1}A_{\tau ,\xi }^{(0)}f(Y_{\xi }) \,\mathrm{d} \xi +J_{1} \int _{0}^{1}A_{\tau ,\xi }^{(1)}g(Y_{\xi }) \,\mathrm{d}\xi , \end{aligned}$$
(3.2a)
$$\begin{aligned}& y_{1}=y_{0}+h \int _{0}^{1}B_{\tau }^{(0)}f(Y_{\tau }) \,\mathrm{d}\tau +J_{1} \int _{0}^{1}B_{\tau }^{(1)}g(Y_{\tau }) \,\mathrm{d}\tau , \end{aligned}$$
(3.2b)

where \(J_{1}=\bigtriangleup W_{n}=W(t_{n+1})-W(t_{n})\) is an \(N(0,h)\)-distributed Gaussian random variable. Based on the multicolored rooted tree theory of Sect. 2, we can obtain a set of order conditions guaranteeing that the CSSRK methods (3.2a)–(3.2b) are of mean-square order 1.0, as detailed below. All trees with \(\rho (t)\leq 1.5\) and their corresponding functions are listed in Table 1.

Table 1 Multicolored rooted trees for (3.2a)–(3.2b) with order less than or equal to 1.5

First of all, the following conditions need to be satisfied:

  1. (a)

    \(\int _{0}^{1}z_{\tau }^{(0)}\,\mathrm{d}\tau =h+O(h^{ \frac{3}{2}})\);

  2. (b)

    \(\int _{0}^{1}z_{\tau }^{(1)}\,\mathrm{d}\tau =W_{1}(h)+O(h^{ \frac{3}{2}})\);

  3. (c)

    \(\int _{0}^{1}z_{\tau }^{(1)} \int _{0}^{1}Z_{\tau ,\xi }^{(1)}\,\mathrm{d}\xi \,\mathrm{d}\tau =\int _{0}^{h}W_{1}(s)\circ \mathrm{d}W_{1}(s)+O(h^{ \frac{3}{2}})\);

  4. (d)

    \(E (\int _{0}^{1}z_{\tau }^{(0)}\,\mathrm{d}\tau )=E(h)+O(h^{2})\);

  5. (e)

    \(E (\int _{0}^{1}z_{\tau }^{(1)}\,\mathrm{d}\tau )=E (W_{1}(h) )+O(h^{2})\);

  6. (f)

    \(E (\int _{0}^{1}z_{\tau }^{(1)} \int _{0}^{1}Z_{\tau ,\xi }^{(1)}\,\mathrm{d}\xi \,\mathrm{d}\tau )=E (\int _{0}^{h}W_{1}(s)\circ \mathrm{d}W_{1}(s) )+O(h^{2})\).

A direct calculation shows that if the coefficients of the CSSRK methods (3.2a)–(3.2b) satisfy the conditions

$$ \int _{0}^{1}B_{\tau }^{(0)} \,\mathrm{d}\tau =1,\qquad \int _{0}^{1}B_{\tau }^{(1)} \,\mathrm{d}\tau =1,\qquad \int _{0}^{1}B_{\tau }^{(1)} \int _{0}^{1}A_{\tau ,\xi }^{(1)} \,\mathrm{d}\xi \,\mathrm{d}\tau =\frac{1}{2}, $$
(3.3)

then the method is of mean-square order 1.0. We can choose polynomials satisfying (3.3), for example

$$ B_{\tau }^{(0)}=1, \qquad B_{\tau }^{(1)}=1,\qquad A_{\tau ,\xi }^{(0)}= \gamma ^{(0)},\qquad A_{ \tau ,\xi }^{(1)}= \frac{1}{2}, $$
(3.4)

obtained with the ansatz \(B_{\tau }^{(i)}=\lambda ^{(i)}\), \(A_{\tau ,\xi }^{(i)}=\gamma ^{(i)}\), \(i=0,1\);

$$ \begin{aligned} &B_{\tau }^{(0)}=1,\qquad B_{\tau }^{(1)}=1,\qquad A_{\tau ,\xi }^{(0)}= \gamma _{1}^{(0)} \tau +\gamma _{2}^{(0)} \xi +\gamma _{3}^{(0)}, \\ &A_{\tau ,\xi }^{(1)}=\gamma _{1}^{(1)} \biggl(\tau -\frac{1}{2} \biggr)+\gamma _{2}^{(1)} \biggl( \xi -\frac{1}{2} \biggr)+\frac{1}{2}, \end{aligned} $$
(3.5)

obtained with the ansatz \(B_{\tau }^{(i)}=\lambda ^{(i)}\), \(A_{\tau ,\xi }^{(i)}=\gamma _{1}^{(i)}\tau +\gamma _{2}^{(i)}\xi + \gamma _{3}^{(i)}\), \(i=0,1\);

$$ B_{\tau }^{(0)}=\lambda _{1}^{(0)} \biggl(\tau -\frac{1}{2} \biggr)+1, \qquad B_{\tau }^{(1)}= \lambda _{1}^{(1)} \biggl(\tau -\frac{1}{2} \biggr)+1,\qquad A_{\tau ,\xi }^{(0)}= \gamma ^{(0)},\qquad A_{\tau ,\xi }^{(1)}= \frac{1}{2}, $$
(3.6)

obtained with the ansatz \(B_{\tau }^{(i)}=\lambda _{1}^{(i)}\tau +\lambda _{2}^{(i)}\), \(A_{\tau ,\xi }^{(i)}=\gamma ^{(i)}\), \(i=0,1\);

$$ \begin{aligned} &B_{\tau }^{(0)}=\lambda _{1}^{(0)} \biggl(\tau -\frac{1}{2} \biggr)+1, \qquad B_{\tau }^{(1)}= \lambda _{1}^{(1)} \biggl(\tau -\frac{1}{2} \biggr)+1, \\ &A_{\tau ,\xi }^{(0)}=\gamma _{1}^{(0)} \tau +\gamma _{2}^{(0)}\xi + \gamma _{3}^{(0)}, \qquad A_{\tau ,\xi }^{(1)}=\gamma _{1}^{(1)} \biggl(\tau - \frac{2}{3} \biggr)+\gamma _{2}^{(1)} \biggl(\xi -\frac{1}{2} \biggr)+\frac{1}{2}, \end{aligned} $$
(3.7)

obtained with the ansatz \(A_{\tau ,\xi }^{(i)}=\gamma _{1}^{(i)}\tau +\gamma _{2}^{(i)}\xi + \gamma _{3}^{(i)}\), \(B_{\tau }^{(i)}=\lambda _{1}^{(i)}\tau +\lambda _{2}^{(i)}\), \(i=0,1\).
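
The order conditions (3.3) only involve integrals of polynomials over \([0,1]\) and \([0,1]^{2}\), so such coefficient families can be checked with exact rational arithmetic. The following sketch (ours; helper names are illustrative) verifies (3.3) for sample parameter values in the constant- and linear-coefficient families above:

```python
from fractions import Fraction

def integrate_unit_square(p):
    """Exact integral over [0,1]^2 of a polynomial {(i, j): coeff} in (tau, xi)."""
    return sum(Fraction(c) / ((i + 1) * (j + 1)) for (i, j), c in p.items())

def poly_mul(p, q):
    """Product of two polynomials in the same dict representation."""
    out = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            key = (i1 + i2, j1 + j2)
            out[key] = out.get(key, Fraction(0)) + Fraction(c1) * Fraction(c2)
    return out

def satisfies_33(B0, B1, A1):
    """Check the mean-square order-1.0 conditions (3.3) exactly."""
    return (integrate_unit_square(B0) == 1
            and integrate_unit_square(B1) == 1
            and integrate_unit_square(poly_mul(B1, A1)) == Fraction(1, 2))
```

The checks below use \(B_{\tau }^{(0)}=1\) throughout (i.e., \(\lambda _{1}^{(0)}=0\) where applicable) and arbitrary sample values for the free parameters.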

In practice, the implementation of the CSSRK methods (3.2a)–(3.2b) almost always relies on a numerical quadrature formula. Applying the quadrature formula \((b_{i},c_{i})_{i=1}^{s}\) to (3.2a)–(3.2b) and using the notation \(Y_{i}=Y_{c_{i}}\), we derive the SRK methods

$$\begin{aligned}& Y_{i}=y_{0}+h\sum _{j=1}^{s}b_{j}A_{c_{i},c_{j}}^{(0)}f(Y_{j})+J_{1} \sum_{j=1}^{s}b_{j}A_{c_{i},c_{j}}^{(1)}g(Y_{j}), \quad i=1,2,\ldots ,s, \end{aligned}$$
(3.8a)
$$\begin{aligned}& y_{1}=y_{0}+h\sum _{i=1}^{s}b_{i}B_{c_{i}}^{(0)}f(Y_{i})+J_{1} \sum_{i=1}^{s}b_{i}B_{c_{i}}^{(1)}g(Y_{i}), \end{aligned}$$
(3.8b)

where

$$\begin{aligned}& A_{c_{i},c_{j}}^{(0)}=A_{\tau ,\xi }^{(0)} |_{\tau =c_{i},\xi =c_{j}},\qquad A_{c_{i},c_{j}}^{(1)}=A_{ \tau ,\xi }^{(1)} |_{\tau =c_{i},\xi =c_{j}}, \\& B_{c_{i}}^{(0)}=B_{\tau }^{(0)} |_{\tau =c_{i}},\qquad B_{c_{i}}^{(1)}=B_{ \tau }^{(1)} |_{\tau =c_{i}}, \end{aligned}$$

which can be formulated by the following Butcher tableau:

$$ \textstyle\begin{array}{c@{\quad}|c@{\quad}c} & \ (b_{j}A_{c_{i},c_{j}}^{(0)} )_{s\times s} & (b_{j}A_{c_{i},c_{j}}^{(1)} )_{s\times s} \\ \hline & \ (b_{i}B_{c_{i}}^{(0)} )_{1\times s} & (b_{i}B_{c_{i}}^{(1)} )_{1\times s} \end{array}\displaystyle . $$
(3.9)
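
The tableau (3.9) is obtained purely by evaluating the coefficient polynomials at the quadrature nodes and weighting by \(b_{i}\), \(b_{j}\); a small helper (ours; the name is illustrative) makes this explicit for a quadrature rule \((b_{i},c_{i})_{i=1}^{s}\) on \([0,1]\):

```python
def srk_tableau(A0, A1, B0, B1, b, c):
    """Assemble the Butcher tableau (3.9): entries b_j * A^{(k)}(c_i, c_j)
    and b_i * B^{(k)}(c_i) from coefficient functions and a rule (b, c)."""
    s = len(b)
    A0_tab = [[b[j] * A0(c[i], c[j]) for j in range(s)] for i in range(s)]
    A1_tab = [[b[j] * A1(c[i], c[j]) for j in range(s)] for i in range(s)]
    b0 = [b[i] * B0(c[i]) for i in range(s)]
    b1 = [b[i] * B1(c[i]) for i in range(s)]
    return A0_tab, A1_tab, b0, b1
```

With the trapezium rule \(b=(\frac{1}{2},\frac{1}{2})\), \(c=(0,1)\) and linear \(B_{\tau }^{(i)}\), constant \(A_{\tau ,\xi }^{(i)}\), this reproduces the entries of the tableau (3.13) below.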

The following result shows that, with the help of suitable numerical quadrature formulas, we can construct SRK methods of the same order as the CSSRK methods.

Theorem 3.1

Suppose that the CSSRK methods (3.2a)–(3.2b) satisfy the order conditions (3.3), that \(A_{\tau ,\xi }^{(1)}\) is a polynomial of degree \(m_{1}\) in τ and ξ, and that \(B_{\tau }^{(0)}\), \(B_{\tau }^{(1)}\) are polynomials of degree \(m_{2}\), \(m_{3}\) in τ, respectively. Then the associated SRK methods (3.9), derived by using quadrature formulas \((b_{i},c_{i})_{i=1}^{s}\) that are exact for polynomials of degree up to \(\max \{m_{2},m_{1}+m_{3}\}\), are also of order 1.0.

Proof

Recall that \(A_{\tau ,\xi }^{(1)}\) is a polynomial of degree \(m_{1}\) and that \(B_{\tau }^{(0)}\), \(B_{\tau }^{(1)}\) are polynomials of degree \(m_{2}\), \(m_{3}\), respectively. Hence the integrands in (3.3) are polynomials of degree at most \(\max \{m_{2},m_{1}+m_{3}\}\), so a quadrature formula \((b_{i},c_{i})_{i=1}^{s}\) that is exact up to this degree integrates the conditions (3.3) exactly. Then we have

$$ \sum_{i=1}^{s}b_{i}B_{c_{i}}^{(0)}=1, \qquad \sum_{i=1}^{s}b_{i}B_{c_{i}}^{(1)}=1, \qquad \sum_{i=1}^{s}\sum _{j=1}^{s}b_{i}b_{j}B_{c_{i}}^{(1)}A_{c_{i},c_{j}}^{(1)}= \frac{1}{2}, $$

thus, the associated SRK methods (3.9) are of order 1.0. □

As an application, starting from the constructed CSSRK methods of mean-square order 1.0 and using more quadrature points, we obtain many new SRK methods that cannot be found in the literature. For example, this gives the SRK schemes shown in the following.

An SRK method of order 1.0, derived with \(B_{\tau }^{(i)}=\lambda ^{(i)}\), \(A_{\tau ,\xi }^{(i)}=\gamma ^{(i)}\), \(i=0,1\), using the left rectangle quadrature formula:

$$ \textstyle\begin{array}{c@{\quad}|c@{\quad}c} &\ \gamma ^{(0)} &\frac{1}{2} \\ \hline &\ 1 &1 \end{array}\displaystyle . $$
(3.10)
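
As an illustration (ours, not from the paper), the one-stage scheme (3.10) reads \(Y=y_{0}+h\gamma ^{(0)}f(Y)+\frac{J_{1}}{2}g(Y)\), \(y_{1}=y_{0}+hf(Y)+J_{1}g(Y)\); the implicit stage can be solved by fixed-point iteration. The sketch below uses the sample choice \(\gamma ^{(0)}=\frac{1}{2}\) and a fixed iteration count, both assumptions of ours:

```python
def srk_310_step(f, g, y0, h, J1, gamma=0.5, iters=50):
    """One step of the 1-stage SRK scheme (3.10):
    stage  Y  = y0 + h*gamma*f(Y) + (J1/2)*g(Y)   (fixed-point iteration),
    update y1 = y0 + h*f(Y) + J1*g(Y)."""
    Y = y0
    for _ in range(iters):
        Y = y0 + h * gamma * f(Y) + 0.5 * J1 * g(Y)
    return y0 + h * f(Y) + J1 * g(Y)
```

On the Stratonovich test equation \(\mathrm{d}x=ax\,\mathrm{d}t+bx\circ \mathrm{d}W\) with exact solution \(x_{0}e^{at+bW(t)}\), the scheme tracks the exact path closely for small steps.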

An SRK method of order 1.0, derived with \(B_{\tau }^{(i)}=\lambda ^{(i)}\), \(A_{\tau ,\xi }^{(i)}=\gamma _{1}^{(i)}\tau +\gamma _{2}^{(i)}\xi + \gamma _{3}^{(i)}\), \(i=0,1\), using the trapezium quadrature formula:

$$ \begin{aligned} &\textstyle\begin{array}{c@{\quad}|c@{\quad}c} &\ \frac{1}{2}\gamma _{3}^{(0)} &\frac{1}{2} (\gamma _{2}^{(0)}+\gamma _{3}^{(0)} ) \\ &\ \frac{1}{2} (\gamma _{1}^{(0)}+\gamma _{3}^{(0)} ) &\frac{1}{2} (\gamma _{1}^{(0)}+\gamma _{2}^{(0)}+ \gamma _{3}^{(0)} ) \\ \hline & \frac{1}{2} &\frac{1}{2} \end{array}\displaystyle \\ &\textstyle\begin{array}{c@{\quad}|c@{\quad}c} &\ \frac{1}{2} ( \frac{1}{2}-\frac{1}{2}\gamma _{1}^{(1)}- \frac{1}{2}\gamma _{2}^{(1)} ) & \frac{1}{2}-\frac{1}{2}\gamma _{1}^{(1)} \\ &\ \frac{1}{2} (\frac{1}{2}+\frac{1}{2} \gamma _{1}^{(1)}-\frac{1}{2}\gamma _{2}^{(1)} ) &\frac{1}{2} ( \frac{1}{2}+\frac{1}{2}\gamma _{1}^{(1)}+ \frac{1}{2}\gamma _{2}^{(1)} ) \\ \hline &\frac{1}{2} &\frac{1}{2} \end{array}\displaystyle \end{aligned} . $$
(3.11)

An SRK method of order 1.0, derived with \(B_{\tau }^{(i)}=\lambda ^{(i)}\), \(A_{\tau ,\xi }^{(i)}=\gamma _{1}^{(i)}\tau +\gamma _{2}^{(i)}\xi + \gamma _{3}^{(i)}\), \(i=0,1\), using the Lobatto quadrature formula:

$$ \begin{aligned} &\textstyle\begin{array}{c@{\quad}|c@{\quad}c@{\quad}c} &\ \frac{1}{6}\gamma _{3}^{(0)} &\frac{2}{3} (\frac{1}{2}\gamma _{2}^{(0)}+ \gamma _{3}^{(0)} ) &\frac{1}{6} ( \gamma _{2}^{(0)}+\gamma _{3}^{(0)} ) \\ &\ \frac{1}{12}\gamma _{1}^{(0)}+ \frac{1}{6}\gamma _{3}^{(0)} & \frac{2}{3} (\frac{1}{2}\gamma _{2}^{(0)}+ \frac{1}{2}\gamma _{1}^{(0)}+\gamma _{3}^{(0)} ) &\frac{1}{6} ( \frac{1}{2}\gamma _{1}^{(0)}+\gamma _{2}^{(0)}+\gamma _{3}^{(0)} ) \\ &\ \frac{1}{6} (\gamma _{1}^{(0)}+\gamma _{3}^{(0)} ) &\frac{2}{3} ( \frac{1}{2}\gamma _{2}^{(0)}+\gamma _{1}^{(0)}+\gamma _{3}^{(0)} ) &\frac{1}{6} (\gamma _{1}^{(0)}+ \gamma _{2}^{(0)}+\gamma _{3}^{(0)} ) \\ \hline &\frac{1}{6} &\frac{2}{3} &\frac{1}{6} \end{array}\displaystyle \\ &\textstyle\begin{array}{c@{\quad}|c@{\quad}c@{\quad}c} &\ \frac{1}{12} &- \frac{1}{3}\gamma _{1}^{(1)}+ \frac{1}{3} &-\frac{1}{12}\gamma _{1}^{(1)}+ \frac{1}{12}\gamma _{2}^{(1)}+ \frac{1}{12} \\ &\ -\frac{1}{12}\gamma _{2}^{(1)}+ \frac{1}{12} &\frac{1}{3} &\frac{1}{12}\gamma _{2}^{(1)}+\frac{1}{12} \\ &\ \frac{1}{12}\gamma _{1}^{(1)}- \frac{1}{12}\gamma _{2}^{(1)}+ \frac{1}{12} &\frac{1}{3}\gamma _{1}^{(1)}+ \frac{1}{3} &\frac{1}{12}\gamma _{1}^{(1)}+ \frac{1}{12}\gamma _{2}^{(1)}+ \frac{1}{12} \\ \hline &\frac{1}{6} &\frac{2}{3} &\frac{1}{6} \end{array}\displaystyle \end{aligned} . $$
(3.12)

An SRK method of order 1.0, derived with \(B_{\tau }^{(i)}=\lambda _{1}^{(i)}\tau +\lambda _{2}^{(i)}\), \(A_{\tau ,\xi }^{(i)}=\gamma ^{(i)}\), \(i=0,1\), using the trapezium quadrature formula:

$$ \textstyle\begin{array}{c@{\quad}|c@{\quad}c@{\quad}c@{\quad}c} &\ \frac{1}{2} \gamma ^{(0)} &\frac{1}{2}\gamma ^{(0)} & \frac{1}{4} &\frac{1}{4} \\ &\ \frac{1}{2}\gamma ^{(0)} &\frac{1}{2}\gamma ^{(0)} &\frac{1}{4} &\frac{1}{4} \\ \hline &\ \frac{1}{2} (1-\frac{1}{2}\lambda _{1}^{(0)} ) &\frac{1}{2} (1+ \frac{1}{2}\lambda _{1}^{(0)} ) & \frac{1}{2} (1-\frac{1}{2}\lambda _{1}^{(1)} ) &\frac{1}{2} (1+\frac{1}{2}\lambda _{1}^{(1)} ) \end{array}\displaystyle . $$
(3.13)

An SRK method of order 1.0, derived with \(B_{\tau }^{(i)}=\lambda _{1}^{(i)}\tau +\lambda _{2}^{(i)}\), \(A_{\tau ,\xi }^{(i)}=\gamma _{1}^{(i)}\tau +\gamma _{2}^{(i)}\xi + \gamma _{3}^{(i)}\), \(i=0,1\), using the Gaussian quadrature formula:

$$ \begin{aligned} &\textstyle\begin{array}{c@{\quad}|c@{\quad}c} &\ \frac{1}{2} (1-\frac{1}{2\sqrt{3}} (\gamma _{1}^{(0)}+\gamma _{2}^{(0)} )+\gamma _{3}^{(0)} ) &\frac{1}{2} (1-\frac{1}{2\sqrt{3}} (\gamma _{1}^{(0)}- \gamma _{2}^{(0)} )+\gamma _{3}^{(0)} ) \\ &\ \frac{1}{2} (1+\frac{1}{2\sqrt{3}} (\gamma _{1}^{(0)}-\gamma _{2}^{(0)} )+\gamma _{3}^{(0)} ) &\frac{1}{2} (1+\frac{1}{2\sqrt{3}} (\gamma _{1}^{(0)}+ \gamma _{2}^{(0)} )+\gamma _{3}^{(0)} ) \\ \hline &\ \frac{1}{2} (1-\frac{1}{2\sqrt{3}}\lambda _{1}^{(0)} ) &\frac{1}{2} (1+ \frac{1}{2\sqrt{3}}\lambda _{1}^{(0)} ) \end{array}\displaystyle \\ &\textstyle\begin{array}{c@{\quad}|c@{\quad}c} &\ \frac{1}{4} (1-\frac{1}{6}\gamma _{1}^{(1)}\lambda _{1}^{(1)}-\frac{1}{\sqrt{3}} (\gamma _{1}^{(1)}+\gamma _{2}^{(1)} ) ) &\frac{1}{4} (1-\frac{1}{6}\gamma _{1}^{(1)}\lambda _{1}^{(1)}- \frac{1}{\sqrt{3}} (\gamma _{1}^{(1)}-\gamma _{2}^{(1)} ) ) \\ &\ \frac{1}{4} (1-\frac{1}{6}\gamma _{1}^{(1)}\lambda _{1}^{(1)}+ \frac{1}{\sqrt{3}} (\gamma _{1}^{(1)}-\gamma _{2}^{(1)} ) ) &\frac{1}{4} (1-\frac{1}{6}\gamma _{1}^{(1)}\lambda _{1}^{(1)}+\frac{1}{\sqrt{3}} (\gamma _{1}^{(1)}+\gamma _{2}^{(1)} ) ) \\ \hline &\ \frac{1}{2} (1-\frac{1}{2\sqrt{3}}\lambda _{1}^{(1)} ) &\frac{1}{2} (1+ \frac{1}{2\sqrt{3}}\lambda _{1}^{(1)} ) \end{array}\displaystyle \end{aligned} . $$
(3.14)

An SRK method of order 1.0, derived with \(A_{\tau ,\xi }^{(i)}=\gamma _{1}^{(i)}\tau +\gamma _{2}^{(i)}\xi + \gamma _{3}^{(i)}\), \(B_{\tau }^{(i)}=\lambda _{1}^{(i)}\tau +\lambda _{2}^{(i)}\), \(i=0,1\), using the Lobatto quadrature formula:

$$ \begin{aligned} &\textstyle\begin{array}{c@{\quad}|c@{\quad}c@{\quad}c} &\ \frac{1}{6}\gamma _{3}^{(0)} &\ \frac{2}{3} (\frac{1}{2}\gamma _{2}^{(0)}+ \gamma _{3}^{(0)} ) &\frac{1}{6} ( \gamma _{2}^{(0)}+\gamma _{3}^{(0)} ) \\ &\ \frac{1}{12}\gamma _{1}^{(0)}+ \frac{1}{6}\gamma _{3}^{(0)} & \frac{2}{3} (\frac{1}{2}\gamma _{2}^{(0)}+ \frac{1}{2}\gamma _{1}^{(0)}+\gamma _{3}^{(0)} ) &\frac{1}{6} ( \frac{1}{2}\gamma _{1}^{(0)}+\gamma _{2}^{(0)}+\gamma _{3}^{(0)} ) \\ &\ \frac{1}{6} (\gamma _{1}^{(0)}+\gamma _{3}^{(0)} ) &\frac{2}{3} ( \frac{1}{2}\gamma _{2}^{(0)}+\gamma _{1}^{(0)}+\gamma _{3}^{(0)} ) &\frac{1}{6} (\gamma _{1}^{(0)}+ \gamma _{2}^{(0)}+\gamma _{3}^{(0)} ) \\ \hline &\ \frac{1}{6} (1-\frac{1}{2}\lambda _{1}^{(0)} ) &\frac{2}{3} & \frac{1}{6} (1+\frac{1}{2}\lambda _{1}^{(0)} ) \end{array}\displaystyle \\ &\textstyle\begin{array}{c@{\quad}|c@{\quad}c@{\quad}c} &\ \frac{1}{6}\gamma _{3}^{(1)} &\frac{2}{3} ( \frac{1}{2}\gamma _{2}^{(1)}+\gamma _{3}^{(1)} ) &\frac{1}{6} (\gamma _{2}^{(1)}+\gamma _{3}^{(1)} ) \\ &\ \frac{1}{12}\gamma _{1}^{(1)}+ \frac{1}{6}\gamma _{3}^{(1)} & \frac{2}{3} (\frac{1}{2}\gamma _{2}^{(1)}+ \frac{1}{2}\gamma _{1}^{(1)}+\gamma _{3}^{(1)} ) &\frac{1}{6} ( \frac{1}{2}\gamma _{1}^{(1)}+\gamma _{2}^{(1)}+\gamma _{3}^{(1)} ) \\ &\ \frac{1}{6} (\gamma _{1}^{(1)}+\gamma _{3}^{(1)} ) &\frac{2}{3} ( \frac{1}{2}\gamma _{2}^{(1)}+\gamma _{1}^{(1)}+\gamma _{3}^{(1)} ) &\frac{1}{6} (\gamma _{1}^{(1)}+ \gamma _{2}^{(1)}+\gamma _{3}^{(1)} ) \\ \hline &\ \frac{1}{6} (1-\frac{1}{2}\lambda _{1}^{(1)} ) &\frac{2}{3} & \frac{1}{6} (1+\frac{1}{2}\lambda _{1}^{(1)} ) \end{array}\displaystyle \end{aligned} . $$
(3.15)

High order CSSRK methods for single integrand SDEs

In this section, we consider a particular class of Stratonovich SDEs, the single integrand SDEs (1.2). All results in this section can straightforwardly be reduced to the case of a one-dimensional Wiener process,

$$ \mathrm{d}x(t)=\lambda g(x)\,\mathrm{d}t+\sigma g(x)\circ \mathrm{d}W(t), \quad x(0)=x_{0} \in \mathbb{R}, $$
(4.1)

with

$$ W(t):=\frac{1}{\sigma }\sum_{k=1}^{d} \sigma _{k}W_{k}(t),\quad \sigma = \sqrt{\sum _{k=1}^{d}\sigma _{k}^{2}}. $$

Solving (4.1) by (3.2a)–(3.2b), we obtain

$$\begin{aligned}& Y_{\tau }=y_{0}+\triangle \mu \int _{0}^{1}A_{\tau ,\xi }g(Y_{\xi }) \,\mathrm{d}\xi , \end{aligned}$$
(4.2a)
$$\begin{aligned}& y_{1}=y_{0}+\triangle \mu \int _{0}^{1}B_{\tau }g(Y_{\tau }) \,\mathrm{d} \tau , \end{aligned}$$
(4.2b)

where \(\triangle \mu =\mu (t+h)-\mu (t)=\lambda h+\sigma (W(t+h)-W(t) )\) and \(h>0\).
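
In this form, a CSSRK method for (4.1) is just a deterministic continuous stage method driven by the increment Δμ. As an illustration (ours, not from the paper), the constant-coefficient choice \(A_{\tau ,\xi }=\frac{1}{2}\), \(B_{\tau }=1\) in (4.2a)–(4.2b) gives a midpoint-type scheme, one step of which can be sketched as:

```python
def midpoint_dmu_step(g, y0, dmu, iters=50):
    """One step of the implicit midpoint rule driven by the increment
    dmu = lambda*h + sigma*dW, applied to dx = g(x) o dmu; this is (4.2a)-(4.2b)
    with the constant coefficients A = 1/2, B = 1 (sample choice)."""
    Y = y0
    for _ in range(iters):
        Y = y0 + 0.5 * dmu * g(Y)   # midpoint stage, fixed-point iteration
    return y0 + dmu * g(Y)
```

For \(g(x)=x\) the exact solution of (4.1) is \(x_{0}e^{\mu (t)}\), which the scheme tracks closely for small steps.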

Owing to the following lemmas, the main result is that the B-series of the exact and numerical solutions are exactly as in the ODE case, except that integration is now performed with respect to μ instead of t.

Lemma 4.1

([20])

For \(n\in \mathbb{N}\) and all \(h>0\), we have

$$ \int _{0}^{h}\mu ^{n}\circ \mathrm{d}\mu =\frac{1}{1+n}\mu (h)^{n+1}. $$

Lemma 4.2

([20])

For \(n\in \mathbb{N}\), we have

$$ E\mu (h)^{n}= \textstyle\begin{cases} O(h^{n/2}),& \textit{if } n \textit{ is even}, \\ O(h^{(n+1)/2}),& \textit{if } n \textit{ is odd}. \end{cases} $$

Lemma 4.3

([20])

Let \(\gamma :T\rightarrow \mathbb{R}\) be given by

$$ \gamma (\emptyset )=1,\qquad \gamma (\bullet )=1, \qquad \gamma \bigl([t_{1},t_{2}, \ldots ,t_{l}] \bigr)= \hat{\rho } \bigl([t_{1},t_{2}, \ldots ,t_{l}] \bigr) \prod_{i=1}^{l} \gamma (t_{i}), $$

where \(\hat{\rho }(t)\) denotes the number of nodes in a tree t. Then the solution of (4.1) can be written as a B-series \(B(\phi ,x_{0};h)\) with

$$ \phi (t) (h)=\frac{\mu (h)^{\hat{\rho }(t)}}{\gamma (t)}, \quad t\in T. $$

For the numerical approximation (4.2a)–(4.2b), the following results hold.

Theorem 4.1

The numerical solution \(y_{1}\) as well as the continuous stage values \(Y_{\tau }\) can be written in terms of B-series

$$ Y_{\tau }=B(\Phi _{\tau },y_{0};h),\qquad y_{1}=B( \varphi ,y_{0};h), $$

where \(t=[t_{1},t_{2},\ldots ,t_{l}]\in T\), and

$$\begin{aligned}& \Phi _{\tau }(t)=\triangle \mu ^{\hat{\rho }(t)}\hat{\Phi }_{\tau }(t),\qquad \hat{\Phi }_{\tau }(\emptyset )=1,\qquad \varphi (t)= \triangle \mu ^{ \hat{\rho }(t)}\hat{\varphi }(t),\qquad \hat{\varphi }(\emptyset )=1, \\& \hat{\Phi }_{\tau }(\bullet )= \int _{0}^{1}A_{\tau ,\xi }\,\mathrm{d}\xi , \qquad \hat{\Phi }_{\tau }(t)= \int _{0}^{1}A_{\tau ,\xi }\prod _{i=1}^{l} \hat{\Phi }_{\xi }(t_{i}) \,\mathrm{d}\xi , \\& \hat{\varphi }(\bullet )= \int _{0}^{1}B_{\tau }\,\mathrm{d}\tau , \qquad \hat{\varphi }(t)= \int _{0}^{1}B_{\tau }\prod _{i=1}^{l}\hat{\Phi }_{\tau }(t_{i}) \,\mathrm{d}\tau . \end{aligned}$$

Proof

Write \(Y_{\tau }\) as a B-series, that is,

$$ Y_{\tau }=B(\Phi _{\tau },y_{0};h)=\sum _{t\in T}\alpha (t)\Phi _{\tau }(t) (h)F(t) (y_{0}). $$

Use (4.2a) to obtain

$$ \begin{aligned} Y_{\tau }&=y_{0}+\triangle \mu \int _{0}^{1}A_{\tau ,\xi }g(Y_{\xi }) \,\mathrm{d}\xi \\ &=y_{0}+\triangle \mu \int _{0}^{1}A_{\tau ,\xi } \sum _{t\in T_{g}}\alpha (t)\Phi '_{\xi }(t) (h)F(t) (y_{0}) \,\mathrm{d}\xi \\ &=y_{0}+ \sum_{t\in T_{g}}\alpha (t) \biggl( \triangle \mu \int _{0}^{1}A_{\tau ,\xi }\Phi '_{\xi }(t) (h)\,\mathrm{d}\xi \biggr)F(t) (y_{0}), \end{aligned} $$

where

$$\begin{aligned}& \Phi '_{\xi }(\bullet )=1,\qquad \Phi _{\tau }( \emptyset )=1, \qquad \Phi '_{\xi }(t) (h)= \prod_{i=1}^{l}\Phi _{\xi }(t_{i}) (h), \\& \Phi _{\tau }(t) (h)=\triangle \mu \int _{0}^{1} A_{\tau ,\xi }\Phi '_{\xi }(t) (h)\,\mathrm{d}\xi =\triangle \mu \int _{0}^{1}A_{\tau ,\xi } \prod _{i=1}^{l}\Phi _{\xi }(t_{i}) (h)\,\mathrm{d}\xi , \\& \hat{\Phi }_{\tau }(t) (h)= \int _{0}^{1} A_{\tau ,\xi } \prod _{i=1}^{l}\hat{\Phi }_{\xi }(t_{i}) (h)\,\mathrm{d}\xi ,\quad t=[t_{1},t_{2}, \ldots ,t_{l}]_{g}\in T_{g}\in T, \end{aligned}$$

thus \(\Phi _{\tau }(t)(h)=\triangle \mu ^{\hat{\rho }(t)}\hat{\Phi }_{\tau }(t)(h)\). The claim for \(y_{1}\) follows analogously. □

In the following, the main results of this section are presented.

Theorem 4.2

A DCSRK method of deterministic order \(p_{d}\) corresponds to a CSSRK method (4.2a)–(4.2b) of mean-square order \(p_{\mu }=\lfloor p_{d}/2\rfloor \), under the condition that μ is either sampled from the exact distribution or chosen such that at least its first \(2p_{\mu }+1\) moments coincide with those of the exact distribution.

Proof

With all the B-series in place, we can now state the order conditions: the methods attain mean-square global order \(p_{\mu }\) if and only if

$$\begin{aligned}& E \bigl(\phi (t) (h)-\varphi (t) (h) \bigr)^{2}=O \bigl(h^{2p_{\mu }+1} \bigr),\quad t\in T, \end{aligned}$$
(4.3)
$$\begin{aligned}& E \bigl(\phi (t) (h)-\varphi (t) (h) \bigr)=O \bigl(h^{p_{\mu }+1} \bigr), \quad t\in T, \end{aligned}$$
(4.4)

and all elementary differentials \(F(t)\) fulfill the linear growth condition. Assume that μ is either sampled from the exact distribution or chosen such that at least its first \(2p_{\mu }+1\) moments coincide with those of the exact distribution. Since \(E (\mu (h)^{2\hat{\rho }(t)} )=O(h^{\hat{\rho }(t)})\) by Lemma 4.3, condition (4.3) is, by Lemma 4.2 and Theorem 4.1, automatically fulfilled for all \(t\in T\) with \(\hat{\rho }(t)\geq 2p_{\mu }+1\), and it holds for the remaining trees if and only if

$$ \varphi (t)=\frac{1}{\gamma (t)},\quad t\in T,\qquad \hat{\rho }(t)\leq 2p_{\mu }. $$
(4.5)

Note that (4.5) is precisely the order condition for the DCSRK methods of order \(p_{d}=2p_{\mu }\) applied to a deterministic system \((\sigma =0)\). Similarly, (4.4) is automatically fulfilled for all \(t\in T\) with

$$ \hat{\rho }(t)\geq \textstyle\begin{cases} 2p_{\mu }+2, & \text{if $\hat{\rho }(t) $ is even}, \\ 2p_{\mu }, & \text{if $\hat{\rho }(t)$ is odd}, \end{cases} $$

and it holds for the remaining trees if and only if

$$ \varphi (t)=\frac{1}{\gamma (t)},\quad t\in T,\qquad \hat{\rho }(t)\leq \textstyle\begin{cases} 2p_{\mu }+1, & \text{if $\hat{\rho }(t) $ is even}, \\ 2p_{\mu }, & \text{if $\hat{\rho }(t)$ is odd}. \end{cases} $$

Thus, the methods will be mean-square consistent of order \(p_{\mu }\) if the deterministic order is

$$ p_{d}= \textstyle\begin{cases} 2p_{\mu }, & \text{$p_{\mu }\in \mathrm{N} $}, \\ 2p_{\mu }+1, & \text{$p_{\mu }+\frac{1}{2}\in \mathrm{N}$}, \end{cases} $$

or, vice versa, the methods of deterministic order \(p_{d}\) will converge with mean-square order of \(\lfloor \frac{p_{d}}{2}\rfloor \). □

As is well known, when solving a deterministic differential equation, the DCSRK method with the polynomial coefficients

$$ A_{\tau ,\xi }=\tau +\frac{\sqrt{3}}{2} \bigl(6\tau ^{2}-6\tau +1 \bigr) (2\tau -1)- \xi +\frac{1}{2},\qquad B_{\tau }=1, $$
(4.6)

is of order 4.0 [22]. Hence, the corresponding CSSRK method is of mean-square order 2.0. We can also choose different numerical quadrature formulas to obtain different SRK methods of high order.
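As a quick sanity check, the two lowest-order conditions for the coefficients (4.6), \(\int _{0}^{1}B_{\tau }\,\mathrm{d}\tau =1\) and \(\int _{0}^{1}B_{\tau }\int _{0}^{1}A_{\tau ,\xi }\,\mathrm{d}\xi \,\mathrm{d}\tau =\frac{1}{2}\), can be verified numerically; a minimal Python sketch (the higher-order conditions can be checked in the same way):

```python
import numpy as np

# Coefficients of the DCSRK method (4.6): B_tau = 1 and
# A(tau, xi) = tau + (sqrt(3)/2)(6 tau^2 - 6 tau + 1)(2 tau - 1) - xi + 1/2
def A(tau, xi):
    return tau + (np.sqrt(3) / 2) * (6 * tau**2 - 6 * tau + 1) * (2 * tau - 1) - xi + 0.5

def B(tau):
    return np.ones_like(tau)

# Gauss-Legendre nodes mapped to [0, 1] (exact for polynomials of degree <= 2n-1)
n = 8
x, w = np.polynomial.legendre.leggauss(n)
t, wt = (x + 1) / 2, w / 2

# c(tau) = int_0^1 A(tau, xi) dxi
c = np.array([np.sum(wt * A(ti, t)) for ti in t])

cond1 = np.sum(wt * B(t))        # int_0^1 B dtau, should be 1
cond2 = np.sum(wt * B(t) * c)    # int_0^1 B c dtau, should be 1/2
print(cond1, cond2)
```

Since the integrands are polynomials, 8-node Gauss–Legendre quadrature evaluates both integrals exactly up to rounding.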

CSSRK methods for SDEs with additive noise

For the specific case of problems with additive noise, (1.3) is solved by the methods (2.1a)–(2.1b), which take the form

$$\begin{aligned}& \begin{aligned} Y_{\tau }={}&y_{0}+h \int _{0}^{1}A_{\tau ,\xi }^{(0)}f(Y_{\xi }) \,\mathrm{d} \xi + \sum_{k=1}^{d}J_{k} \int _{0}^{1}A_{\tau ,\xi }^{(1)} \sigma _{k} \,\mathrm{d}\xi \\ &{}+ \sum_{k=1}^{d} \frac{J_{k0}}{h} \int _{0}^{1}A_{\tau ,\xi }^{(2)} \sigma _{k}\,\mathrm{d}\xi , \end{aligned} \end{aligned}$$
(5.1a)
$$\begin{aligned}& \begin{aligned} y_{1}={}&y_{0}+h \int _{0}^{1}B_{\tau }^{(0)}f(Y_{\tau }) \,\mathrm{d}\tau + \sum_{k=1}^{d}J_{k} \int _{0}^{1}B_{\tau }^{(1)} \sigma _{k}\,\mathrm{d} \tau \\ &{}+ \sum_{k=1}^{d} \frac{J_{k0}}{h} \int _{0}^{1}B_{\tau }^{(2)} \sigma _{k} \,\mathrm{d}\tau , \end{aligned} \end{aligned}$$
(5.1b)

where \(J_{k}=\triangle W_{k}\), \(J_{k0}=\int _{0}^{h}\int _{0}^{s}\,\mathrm{d}W_{k}(u) \,\mathrm{d}s\). Because the noise terms are additive, an elementary differential vanishes if its multicolored rooted tree contains a node directly following a stochastic node, except when the deterministic node \(t_{0}\) is the only succeeding end node. Table 2 lists the multicolored rooted trees whose elementary differentials are non-zero. From these we obtain a set of order conditions guaranteeing that the CSSRK methods (5.1a)–(5.1b) attain mean-square order 1.5, as detailed below.

Table 2 Multicolored rooted trees for (5.1a)–(5.1b) with order less than or equal to 2.0

Theorem 5.1

Suppose that (1.3) with d independent additive noises is approximated by the CSSRK methods (5.1a)–(5.1b). If the coefficients \(B_{\tau }^{(0)}\), \(B_{\tau }^{(1)}\), \(B_{\tau }^{(2)}\), \(A_{\tau ,\xi }^{(0)}\), \(A_{\tau ,\xi }^{(1)}\), \(A_{\tau ,\xi }^{(2)}\) of the CSSRK methods (5.1a)–(5.1b) satisfy the conditions

$$ \begin{aligned} & \int _{0}^{1}B_{\tau }^{(0)} \,\mathrm{d}\tau =1,\qquad \int _{0}^{1}B_{\tau }^{(1)} \,\mathrm{d}\tau =1,\qquad \int _{0}^{1}B_{\tau }^{(2)}\,\mathrm{d}\tau =0, \\ & \int _{0}^{1}B_{\tau }^{(0)} \int _{0}^{1}A_{\tau ,\xi }^{(0)} \,\mathrm{d}\xi \,\mathrm{d}\tau =\frac{1}{2}, \\ & \int _{0}^{1}B_{\tau }^{(0)} \int _{0}^{1}A_{\tau ,\xi }^{(1)} \,\mathrm{d} \xi \,\mathrm{d}\tau =0, \\ & \int _{0}^{1}B_{\tau }^{(0)} \int _{0}^{1}A_{\tau ,\xi }^{(2)} \,\mathrm{d} \xi \,\mathrm{d}\tau =1, \\ & \int _{0}^{1}B_{\tau }^{(0)} \biggl( \biggl( \int _{0}^{1}A_{\tau ,\xi }^{(1)} \,\mathrm{d}\xi \biggr)^{2}+\frac{1}{3} \biggl( \int _{0}^{1}A_{\tau ,\xi }^{(2)} \,\mathrm{d}\xi \biggr)^{2} + \int _{0}^{1}A_{\tau ,\xi }^{(1)} \,\mathrm{d} \xi \int _{0}^{1}A_{\tau ,\xi }^{(2)} \,\mathrm{d}\xi \biggr) \,\mathrm{d} \tau =\frac{1}{2}, \end{aligned} $$
(5.2)

then they are of mean-square order 1.5.

Proof

First, we have these facts for the increments:

$$\begin{aligned}& E(J_{i})=E(J_{i0})=0,\qquad E \bigl(J_{i}^{2} \bigr)=h, \\& E \bigl(J_{i0}^{2} \bigr)=\frac{h^{3}}{3}, \qquad E(J_{i}J_{i0})=\frac{h^{2}}{2},\qquad E(J_{i}J_{j0})=0,\quad i \neq j. \end{aligned}$$
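These moments can be confirmed by Monte Carlo: \((J_{i},J_{i0})\) is jointly Gaussian with covariance entries \(h\), \(h^{2}/2\), \(h^{3}/3\). A small sketch (sample size and seed are arbitrary choices of ours):

```python
import numpy as np

# Sample (J_i, J_i0) = (W(h), int_0^h W(s) ds) from their joint Gaussian law
# with covariance [[h, h^2/2], [h^2/2, h^3/3]] and confirm the stated moments.
rng = np.random.default_rng(0)
h, M = 0.5, 400_000
cov = np.array([[h, h**2 / 2], [h**2 / 2, h**3 / 3]])
L = np.linalg.cholesky(cov)
J, J0 = L @ rng.standard_normal((2, M))

print(np.mean(J**2))        # ~ h
print(np.mean(J0**2))       # ~ h^3/3
print(np.mean(J * J0))      # ~ h^2/2
```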

In order to attain mean-square order 1.5, we need \(\phi (t)=\varphi (t)\) for trees 1–3 (\(\rho (t)\leq 1.5 \)), and \(E (\phi (t) )=E (\varphi (t) )\) for trees 4–6 (\(\rho (t)=2 \)), respectively, in Table 2. To be specific,

  1. (a)

    \(J_{i}=\int _{0}^{1}J_{i}B_{\tau }^{(1)}\,\mathrm{d}\tau + \int _{0}^{1}\frac{J_{i0}}{h}B_{\tau }^{(2)}\,\mathrm{d}\tau \), we have \(\int _{0}^{1}B_{\tau }^{(1)}\,\mathrm{d}\tau =1\), \(\int _{0}^{1}B_{\tau }^{(2)}\,\mathrm{d}\tau =0\);

  2. (b)

\(J_{0}= h\int _{0}^{1}B_{\tau }^{(0)}\,\mathrm{d}\tau \), we have \(\int _{0}^{1}B_{\tau }^{(0)}\,\mathrm{d}\tau =1\);

  3. (c)

\(J_{i0}= h\int _{0}^{1}B_{\tau }^{(0)} ( \int _{0}^{1}J_{i}A_{\tau ,\xi }^{(1)}\,\mathrm{d}\xi + \int _{0}^{1}\frac{J_{i0}}{h} A_{\tau ,\xi }^{(2)} \,\mathrm{d}\xi )\,\mathrm{d}\tau \), we have \(\int _{0}^{1}B_{\tau }^{(0)} \int _{0}^{1} A_{\tau ,\xi }^{(1)}\,\mathrm{d}\xi \,\mathrm{d}\tau =0\) and \(\int _{0}^{1}B_{\tau }^{(0)} \int _{0}^{1} A_{\tau ,\xi }^{(2)}\,\mathrm{d}\xi \,\mathrm{d}\tau =1\);

  4. (d)

    \(\int _{0}^{1}B_{\tau }^{(0)}\int _{0}^{1}A_{\tau ,\xi }^{(0)} \,\mathrm{d}\xi \,\mathrm{d}\tau =\frac{1}{2} \);

  5. (e)

\(E (\int _{0}^{h}W_{i}(s)W_{j}(s)\,\mathrm{d}s )=0\), \(i\neq j\);

  6. (f)

\(E (\int _{0}^{h}W_{i}(s)^{2}\,\mathrm{d}s ) = \frac{1}{2}h^{2}=E (h\int _{0}^{1}B_{\tau }^{(0)} ( \int _{0}^{1}J_{i}A_{\tau ,\xi }^{(1)}\,\mathrm{d}\xi + \int _{0}^{1}\frac{J_{i0}}{h}A_{\tau ,\xi }^{(2)} \,\mathrm{d}\xi )^{2}\,\mathrm{d}\tau )\).

The proof is completed. □

We can choose some polynomials satisfying (5.2), for example

$$ \begin{aligned} &B_{\tau }^{(0)}=1,\qquad B_{\tau }^{(1)}=1,\qquad B_{\tau }^{(2)}=0, \\ &A_{\tau ,\xi }^{(0)}=\gamma _{1}^{(0)} \biggl(\tau -\frac{1}{2} \biggr)+\gamma _{2}^{(0)} \biggl( \xi -\frac{1}{2} \biggr)+\frac{1}{2},\qquad A_{\tau ,\xi }^{(1)}=\gamma _{1}^{(1)} \biggl( \tau -\frac{1}{2} \biggr)+\gamma _{2}^{(1)} \biggl(\xi -\frac{1}{2} \biggr), \\ &A_{\tau ,\xi }^{(2)}= \biggl(\frac{-3}{2}\gamma _{1}^{(1)}\pm 18\sqrt{ \biggl( \frac{1}{54}- \frac{(\gamma _{1}^{(1)})^{2}}{432} \biggr)} \biggr) \biggl(\tau - \frac{1}{2} \biggr)+ \gamma _{2}^{(2)}\xi -\frac{1}{2}\gamma _{2}^{(2)}+1, \end{aligned} $$
(5.3)

These polynomials are obtained from the linear ansatz \(A_{\tau ,\xi }^{(i)}=\gamma _{1}^{(i)}\tau +\gamma _{2}^{(i)}\xi + \gamma _{3}^{(i)}\), \(B_{\tau }^{(i)}=\lambda _{1}^{(i)}\tau +\lambda _{2}^{(i)}\), \(i=0,1,2\).
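The family (5.3) can be checked against the conditions (5.2) numerically; a sketch at an arbitrary choice of the free parameters (any admissible values work, and the three conditions on \(B_{\tau }^{(i)}\) hold trivially here since \(B_{\tau }^{(0)}=B_{\tau }^{(1)}=1\), \(B_{\tau }^{(2)}=0\)):

```python
import numpy as np

# Verify the remaining order-1.5 conditions in (5.2) for the family (5.3)
# at one (arbitrary) choice of the free parameters.
g10, g20 = 0.3, -0.7          # gamma_1^(0), gamma_2^(0)
g11, g21 = 0.2, 1.1           # gamma_1^(1), gamma_2^(1)
g22 = 0.4                     # gamma_2^(2)
a2 = -1.5 * g11 + 18 * np.sqrt(1 / 54 - g11**2 / 432)   # slope in A^(2), "+" branch

A0 = lambda t, x: g10 * (t - 0.5) + g20 * (x - 0.5) + 0.5
A1 = lambda t, x: g11 * (t - 0.5) + g21 * (x - 0.5)
A2 = lambda t, x: a2 * (t - 0.5) + g22 * x - 0.5 * g22 + 1
B0 = lambda t: 1.0

# Gauss-Legendre quadrature on [0, 1]; all integrands are low-degree polynomials
x, w = np.polynomial.legendre.leggauss(6)
t, wt = (x + 1) / 2, w / 2
avg = lambda A: np.array([np.sum(wt * A(ti, t)) for ti in t])  # int_0^1 A(tau, .) dxi

a0, a1s, a2s = avg(A0), avg(A1), avg(A2)
print(np.sum(wt * B0(t) * a0))                                    # 1/2
print(np.sum(wt * B0(t) * a1s))                                   # 0
print(np.sum(wt * B0(t) * a2s))                                   # 1
print(np.sum(wt * B0(t) * (a1s**2 + a2s**2 / 3 + a1s * a2s)))     # 1/2
```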

Similarly, by applying the quadrature formula \((b_{i},c_{i})_{i=1}^{s}\) to (5.1a)–(5.1b), utilizing the notations \(Y_{i}=Y_{c_{i}}\), \(i=1,2,\ldots ,s\), we derive the corresponding SRK methods,

$$\begin{aligned}& Y_{i}=y_{0}+h\sum _{j=1}^{s}b_{j}A_{c_{i},c_{j}}^{(0)}f(Y_{j})+ \sum_{k=1}^{d}J_{k} \sum _{j=1}^{s}b_{j}A_{c_{i},c_{j}}^{(1)}\sigma _{k}+\sum_{k=1}^{d} \frac{J_{k0}}{h}\sum_{j=1}^{s}b_{j}A_{c_{i},c_{j}}^{(2)} \sigma _{k}, \end{aligned}$$
(5.4a)
$$\begin{aligned}& y_{1}=y_{0}+h\sum _{i=1}^{s}b_{i}B_{c_{i}}^{(0)}f(Y_{i})+ \sum_{k=1}^{d}J_{k} \sum _{i=1}^{s}b_{i}B_{c_{i}}^{(1)} \sigma _{k}+\sum_{k=1}^{d} \frac{J_{k0}}{h}\sum_{i=1}^{s}b_{i}B_{c_{i}}^{(2)} \sigma _{k}, \end{aligned}$$
(5.4b)

where

$$\begin{aligned}& A_{c_{i},c_{j}}^{(0)}=A_{\tau ,\xi }^{(0)} |_{\tau =c_{i},\xi =c_{j}},\qquad A_{c_{i},c_{j}}^{(1)}=A_{ \tau ,\xi }^{(1)} |_{\tau =c_{i},\xi =c_{j}},\qquad A_{c_{i},c_{j}}^{(2)}=A_{ \tau ,\xi }^{(2)} |_{\tau =c_{i},\xi =c_{j}}, \\& B_{c_{i}}^{(0)}=B_{\tau }^{(0)} |_{\tau =c_{i}},\qquad B_{c_{i}}^{(1)}=B_{ \tau }^{(1)} |_{\tau =c_{i}},\qquad B_{c_{i}}^{(2)}=B_{\tau }^{(2)} |_{ \tau =c_{i}}, \end{aligned}$$

which can be formulated by the following Butcher tableau:

$$ \textstyle\begin{array}{c@{\quad}|c@{\quad}c@{\quad}c} &\ (b_{j}A_{c_{i},c_{j}}^{(0)} )_{s\times s} &(b_{j}A_{c_{i},c_{j}}^{(1)} )_{s\times s} & (b_{j}A_{c_{i},c_{j}}^{(2)} )_{s\times s} \\ \hline &\ (b_{i}B_{c_{i}}^{(0)} )_{1\times s} & (b_{i}B_{c_{i}}^{(1)} )_{1\times s} & (b_{i}B_{c_{i}}^{(2)} )_{1\times s} \end{array}\displaystyle . $$
(5.5)
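The discretization step from continuous coefficients to the Butcher blocks of (5.5) is mechanical; a small sketch (the helper name `srk_tableau` and the parameter values are ours, not from the paper):

```python
import numpy as np

# Build the Butcher blocks (5.5) of the induced SRK method from continuous
# coefficients A(tau, xi), B(tau) and a quadrature rule (b_i, c_i).
def srk_tableau(A, B, b, c):
    b, c = np.asarray(b), np.asarray(c)
    Ablk = b[None, :] * A(c[:, None], c[None, :])   # (b_j A(c_i, c_j))_{s x s}
    brow = b * B(c)                                  # (b_i B(c_i))_{1 x s}
    return Ablk, brow

# Example: the A^(0) coefficient of the family (5.3) with Simpson's rule
g1, g2 = 0.3, -0.7
A0 = lambda t, x: g1 * (t - 0.5) + g2 * (x - 0.5) + 0.5
B0 = lambda t: np.ones_like(t)
Ablk, brow = srk_tableau(A0, B0, [1 / 6, 2 / 3, 1 / 6], [0.0, 0.5, 1.0])
print(brow.sum())        # consistency: sum_i b_i B(c_i) = 1
```

Since Simpson's rule is exact for the linear coefficient in ξ, the row sums of `Ablk` reproduce \(\int _{0}^{1}A^{(0)}_{c_{i},\xi }\,\mathrm{d}\xi \) exactly.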

The following result implies that we can also construct SRK methods of the same order via quadrature formulas.

Theorem 5.2

If the CSSRK methods (5.1a)–(5.1b) satisfy the order conditions (5.2), and \(B_{\tau }^{(0)}\), \(B_{\tau }^{(1)}\), \(B_{\tau }^{(2)}\), \(A_{\tau ,\xi }^{(0)}\), \(A_{\tau ,\xi }^{(1)}\), \(A_{\tau ,\xi }^{(2)}\) are, respectively, polynomials of degree \(m_{1}\), \(m_{2}\), \(m_{3}\), \(m_{4}\), \(m_{5}\), \(m_{6}\), then the associated SRK methods (5.5), derived by using quadrature formulas \((b_{i},c_{i})_{i=1}^{s}\) with degree of exactness at least \(\max \{m_{5},m_{6},m_{4}+m_{1},2m_{2}+m_{4},2m_{3}+m_{4},m_{2}+m_{3}+m_{4} \}\), are also of order 1.5.

Proof

Similar to Theorem 3.1. □

By using polynomials of different degrees together with any numerical quadrature formula whose degree of exactness satisfies Theorem 5.2, we can obtain classical SRK methods of order 1.5.

An SRK method of order 1.5, derived from \(A_{\tau ,\xi }^{(i)}=\gamma _{1}^{(i)}\tau +\gamma _{2}^{(i)}\xi + \gamma _{3}^{(i)}\), \(B_{\tau }^{(i)}=\lambda ^{(i)}\), \(i=0,1,2\), using Simpson's rule, is

$$\begin{aligned} &\textstyle\begin{array}{c@{\quad}|c@{\quad}c@{\quad}c} &\ \frac{1}{6}\gamma _{3}^{(0)} &\ \frac{2}{3} (\frac{1}{2}\gamma _{2}^{(0)}+ \gamma _{3}^{(0)} ) &\frac{1}{6} ( \gamma _{2}^{(0)}+\gamma _{3}^{(0)} ) \\ &\ \frac{1}{12}\gamma _{1}^{(0)}+ \frac{1}{6}\gamma _{3}^{(0)} & \frac{2}{3} (\frac{1}{2}\gamma _{2}^{(0)}+ \frac{1}{2}\gamma _{1}^{(0)}+\gamma _{3}^{(0)} ) &\frac{1}{6} ( \frac{1}{2}\gamma _{1}^{(0)}+\gamma _{2}^{(0)}+\gamma _{3}^{(0)} ) \\ &\ \frac{1}{6} (\gamma _{1}^{(0)}+\gamma _{3}^{(0)} ) &\frac{2}{3} ( \frac{1}{2}\gamma _{2}^{(0)}+\gamma _{1}^{(0)}+\gamma _{3}^{(0)} ) &\frac{1}{6} (\gamma _{1}^{(0)}+ \gamma _{2}^{(0)}+\gamma _{3}^{(0)} ) \\ \hline &\ \frac{1}{6} (1-\frac{1}{2}\lambda ^{(0)} ) &\frac{2}{3} &\frac{1}{6} (1+\frac{1}{2}\lambda ^{(0)} ) \end{array}\displaystyle \\ &\textstyle\begin{array}{c@{\quad}|c@{\quad}c@{\quad}c} &\ \frac{1}{6}\gamma _{3}^{(1)} &\frac{2}{3} ( \frac{1}{2}\gamma _{2}^{(1)}+\gamma _{3}^{(1)} ) &\frac{1}{6} (\gamma _{2}^{(1)}+\gamma _{3}^{(1)} ) \\ &\ \frac{1}{12}\gamma _{1}^{(1)}+ \frac{1}{6}\gamma _{3}^{(1)} & \frac{2}{3} (\frac{1}{2}\gamma _{2}^{(1)}+ \frac{1}{2}\gamma _{1}^{(1)}+\gamma _{3}^{(1)} ) &\frac{1}{6} ( \frac{1}{2}\gamma _{1}^{(1)}+\gamma _{2}^{(1)}+\gamma _{3}^{(1)} ) \\ &\ \frac{1}{6} (\gamma _{1}^{(1)}+\gamma _{3}^{(1)} ) &\frac{2}{3} ( \frac{1}{2}\gamma _{2}^{(1)}+\gamma _{1}^{(1)}+\gamma _{3}^{(1)} ) &\frac{1}{6} (\gamma _{1}^{(1)}+ \gamma _{2}^{(1)}+\gamma _{3}^{(1)} ) \\ \hline &\ \frac{1}{6} (1-\frac{1}{2}\lambda ^{(1)} ) &\frac{2}{3} &\frac{1}{6} (1+\frac{1}{2}\lambda ^{(1)} ) \end{array}\displaystyle 
\\ &\textstyle\begin{array}{c@{\quad}|c@{\quad}c@{\quad}c} &\ \frac{1}{6} (1-\frac{1}{2}\gamma _{1}^{(2)}- \frac{1}{2}\gamma _{2}^{(2)} ) & \frac{2}{3} (1-\frac{1}{2}\gamma _{1}^{(2)} ) &\frac{1}{6} (-\frac{1}{2}\gamma _{1}^{(2)}+\frac{1}{2}\gamma _{2}^{(2)}+1 ) \\ &\ \frac{1}{6} (-\frac{1}{2}\gamma _{2}^{(2)}+1 ) &\frac{2}{3} & \frac{1}{6} (\frac{1}{2}\gamma _{2}^{(2)}+1 ) \\ &\ \frac{1}{6} (1+\frac{1}{2}\gamma _{1}^{(2)}- \frac{1}{2}\gamma _{2}^{(2)} ) & \frac{2}{3} ( \frac{1}{2}\gamma _{1}^{(2)}+1 ) & \frac{1}{6} (\frac{1}{2}\gamma _{1}^{(2)}+ \frac{1}{2}\gamma _{2}^{(2)}+1 ) \\ \hline &0 &0 &0 \end{array}\displaystyle \end{aligned}$$
(5.6)

An SRK method of order 1.5, derived from \(A_{\tau ,\xi }^{(i)}=\gamma _{1}^{(i)}\tau +\gamma _{2}^{(i)}\xi + \gamma _{3}^{(i)}\), \(B_{\tau }^{(i)}=\lambda ^{(i)}\), \(i=0,1,2\), with \(\gamma _{1}^{(2)}=-3/2\gamma _{1}^{(1)}\pm 18(1/54-(\gamma _{1}^{(1)})^{2}/432)^{1/2}\), using Gaussian quadrature with 2 nodes, is

$$ \begin{aligned} &\textstyle\begin{array}{c@{\quad}|c@{\quad}c} &\ \frac{1}{2} (-\frac{1}{2\sqrt{3}} \gamma _{1}^{(0)}-\frac{1}{2\sqrt{3}}\gamma _{2}^{(0)}+\frac{1}{2} ) & \frac{1}{2} (-\frac{1}{2\sqrt{3}}\gamma _{1}^{(0)}+ \frac{1}{2\sqrt{3}}\gamma _{2}^{(0)}+ \frac{1}{2} ) \\ &\ \frac{1}{2} (\frac{1}{2\sqrt{3}}\gamma _{1}^{(0)}- \frac{1}{2\sqrt{3}}\gamma _{2}^{(0)}+ \frac{1}{2} ) &\frac{1}{2} ( \frac{1}{2\sqrt{3}} \gamma _{1}^{(0)}+ \frac{1}{2\sqrt{3}}\gamma _{2}^{(0)}+ \frac{1}{2} ) \\ \hline &\ \frac{1}{2} &\frac{1}{2} \end{array}\displaystyle \\ &\textstyle\begin{array}{c@{\quad}|c@{\quad}c} &\ \frac{1}{2} (-\frac{1}{2\sqrt{3}}\gamma _{1}^{(1)}- \frac{1}{2\sqrt{3}}\gamma _{2}^{(1)} ) & \frac{1}{2} (-\frac{1}{2\sqrt{3}}\gamma _{1}^{(1)}+ \frac{1}{2\sqrt{3}}\gamma _{2}^{(1)} ) \\ &\ \frac{1}{2} (\frac{1}{2\sqrt{3}}\gamma _{1}^{(1)}-\frac{1}{2\sqrt{3}}\gamma _{2}^{(1)} ) &\frac{1}{2} ( \frac{1}{2\sqrt{3}}\gamma _{1}^{(1)}+ \frac{1}{2\sqrt{3}}\gamma _{2}^{(1)} ) \\ \hline &\ \frac{1}{2} &\frac{1}{2} \end{array}\displaystyle \\ &\textstyle\begin{array}{c@{\quad}|c@{\quad}c} &\ \frac{1}{2} (-\frac{1}{2\sqrt{3}}\gamma _{1}^{(2)}- \frac{1}{2\sqrt{3}}\gamma _{2}^{(2)}+1 ) & \frac{1}{2} (-\frac{1}{2\sqrt{3}}\gamma _{1}^{(2)}+ \frac{1}{2\sqrt{3}}\gamma _{2}^{(2)}+1 ) \\ &\ \frac{1}{2} (\frac{1}{2\sqrt{3}}\gamma _{1}^{(2)}- \frac{1}{2\sqrt{3}}\gamma _{2}^{(2)}+1 ) & \frac{1}{2} (\frac{1}{2\sqrt{3}}\gamma _{1}^{(2)}+ \frac{1}{2\sqrt{3}}\gamma _{2}^{(2)}+1 ) \\ \hline &\ 0 &0 \end{array}\displaystyle \end{aligned} . $$
(5.7)

The special second-order system with additive noise (1.4) can be regarded as a 2m-dimensional system by setting \(y(t)=\dot{x}(t)\), \(y(0)=\dot{x}(0)\):

$$ \textstyle\begin{cases} \mathrm{d}x(t)=y(t)\,\mathrm{d}t, \quad x(0)=x_{0}\in \mathrm{R}^{m}, \\ \mathrm{d}y(t)=f(x)\,\mathrm{d}t+ \sum_{k=1}^{d}\sigma _{k} \circ \mathrm{d}W_{k}(t), \quad y(0)=y_{0}\in \mathrm{R}^{m}. \end{cases} $$
(5.8)

Obviously, this is a special example of (1.3), and the mean-square order conditions of the foregoing CSSRK methods are also available here. However, according to the specific features of (5.8), we are able to simplify some conditions and get a higher mean-square order without more effort.

Theorem 5.3

If the CSSRK methods (5.1a)–(5.1b) for the second-order system (5.8) possess coefficients satisfying

$$ \begin{aligned} & \int _{0}^{1}B_{\tau }^{(0)} \,\mathrm{d}\tau =1,\qquad \int _{0}^{1}B_{\tau }^{(1)} \,\mathrm{d}\tau =1, \\ & \int _{0}^{1}B_{\tau }^{(2)}\,\mathrm{d}\tau =0 ,\qquad \int _{0}^{1}B_{\tau }^{(0)} \int _{0}^{1}A_{\tau ,\xi }^{(0)} \,\mathrm{d}\xi \,\mathrm{d}\tau = \frac{1}{2}, \\ & \int _{0}^{1}B_{\tau }^{(0)} \int _{0}^{1}A_{\tau ,\xi }^{(1)} \,\mathrm{d} \xi \,\mathrm{d}\tau =0,\qquad \int _{0}^{1}B_{\tau }^{(0)} \int _{0}^{1}A_{ \tau ,\xi }^{(2)} \,\mathrm{d}\xi \,\mathrm{d}\tau =1, \end{aligned} $$
(5.9)

then they are of mean-square order 2.0.

Proof

The conditions of the theorem above are similar to those of Theorem 5.1, except that condition (f) in Theorem 5.1 is unnecessary here. In order to obtain mean-square order 2.0, we need to analyze two additional colored rooted trees \((\rho (t)=2.5)\) listed in Table 3. Condition (f) in Theorem 5.1 is derived from tree 6 in Table 2, which is essential for a general system. However, for system (5.8), the corresponding elementary differential of tree 6 is

$$ F(t) (x)=f^{(2)}(x) \bigl(F(t_{i}) (x),F(t_{i}) (x) \bigr)=0, $$

because the nth component of it is

$$ \bigl(F(t) (x) \bigr)_{n}= \textstyle\begin{cases} \sum_{j_{1},j_{2}=1}^{m} \frac{\partial ^{2} (f(x)_{n} )}{\partial x^{j_{1}}\partial x^{j_{2}}}(0,0),& \text{$n=1,2,\ldots ,\frac{m}{2}$}, \\ \sum_{j_{1},j_{2}=1}^{m} \frac{\partial ^{2} ((y)_{n} )}{\partial y^{j_{1}}\partial y^{j_{2}}}( \sigma _{j_{1}},\sigma _{j_{2}}),& \text{$n=\frac{m}{2}+1,\ldots ,m$} \end{cases}\displaystyle =0, $$

where \((f(x) )_{n}\) means the nth element of the vector function \(f(x)\). Thus tree 6 is unnecessary in this situation. With the same analysis, the elementary differential of tree 5 in Table 2 vanishes as well. Moreover, for the additional trees 7, 8 in Table 3, we can check that

$$ \phi (t)=\varphi (t),\qquad \rho (t)\leq 2, \qquad E \bigl(\phi (t) \bigr)=E \bigl( \varphi (t) \bigr),\qquad \rho (t)=2.5. $$
Table 3 Colored rooted tree for (5.8) with order equal to 2.5

 □

Numerical experiments

In this section, we perform numerical tests to verify the mean-square convergence orders established in Sects. 3, 4 and 5. The first group consists of the methods (3.4)–(3.7) and (3.11), (3.14) with different free parameters. The second group consists of the method (4.6), which has deterministic order \(p_{d}=4.0\) when applied to a deterministic differential equation, so the predicted stochastic order is \(p_{\mu }=2.0\). Finally, we consider methods satisfying (5.9) for SDEs with additive noise. In each example, the solution is approximated with step sizes \(2^{-5},\ldots ,2^{-9}\), and the sample average over \(M=2000\) independent simulated realizations of the absolute error at the terminal time \(T=1\) is calculated in order to estimate the expectation.
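The empirical convergence order reported below is the least-squares slope of the error against the step size in a log-log plot; a minimal sketch (the illustrative data are synthetic, not from the tables):

```python
import numpy as np

# Estimate the empirical convergence order as the least-squares slope of
# log2(error) against log2(h).
def observed_order(hs, errs):
    return np.polyfit(np.log2(hs), np.log2(errs), 1)[0]

# Illustrative data: errors decaying exactly like h^1.5
hs = 2.0 ** -np.arange(5, 10)
errs = 0.3 * hs ** 1.5
print(observed_order(hs, errs))   # 1.5
```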

Linear stochastic oscillator

The one-dimensional linear stochastic oscillator in the Stratonovich sense is described by

$$ \mathrm{d}x(t)=ax(t)\,\mathrm{d}t+bx(t)\circ \mathrm{d}W(t),\quad x(0)=x_{0}. $$
(6.1)

Here a and b are constants. The exact solution of (6.1) is given by

$$ x(t)=x_{0}\exp \bigl(at+bW(t) \bigr). $$
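Since the exact solution is available path-wise, the mean-square error at \(T=1\) can be estimated directly. The sketch below uses the classical stochastic Heun scheme for Stratonovich SDEs as a baseline (not one of the CSSRK schemes above), sharing the Brownian path between the numerical and exact solutions:

```python
import numpy as np

# Monte Carlo error test for (6.1) with a = 0.5, b = 1, x0 = 1, using the
# standard stochastic Heun (predictor-corrector) scheme for Stratonovich SDEs.
def heun_error(h, M, a=0.5, b=1.0, x0=1.0, T=1.0, seed=0):
    rng = np.random.default_rng(seed)
    N = int(round(T / h))
    x = np.full(M, x0)
    W = np.zeros(M)
    for _ in range(N):
        dW = np.sqrt(h) * rng.standard_normal(M)
        W += dW
        pred = x + a * x * h + b * x * dW                      # Euler predictor
        x = x + 0.5 * h * a * (x + pred) + 0.5 * dW * b * (x + pred)
    exact = x0 * np.exp(a * T + b * W)                         # exact solution on the same path
    return np.sqrt(np.mean((x - exact) ** 2))                  # mean-square error at T

print(heun_error(2**-5, 2000), heun_error(2**-6, 2000))
```

Halving the step size should roughly halve the error, consistent with strong order 1.0 of the Heun scheme for single-noise Stratonovich SDEs.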

In this experiment, we select \(\gamma ^{(0)}=0\) in (3.4), \(\gamma _{2}^{(0)}=\gamma _{3}^{(0)}=\gamma _{2}^{(1)}=0\), \(\gamma _{1}^{(0)}=\gamma _{1}^{(1)}=1\) in (3.5), \(\lambda _{1}^{(0)}=\lambda _{1}^{(1)}=2\), \(\gamma ^{(0)}=0\) in (3.6), \(\lambda _{1}^{(0)}=\lambda _{1}^{(1)}=2\), \(\gamma _{1}^{(0)}=\gamma _{1}^{(1)}=3\), \(\gamma _{2}^{(0)}=\gamma _{2}^{(1)}=-3\), \(\gamma _{3}^{(0)}=1\) in (3.7), and \(\gamma _{1}^{(0)}=\gamma _{1}^{(1)}=\gamma _{2}^{(1)}=\gamma _{2}^{(0)}=1\), \(\gamma _{3}^{(0)}=0\) in (3.11), \(\gamma _{1}^{(0)}=\gamma _{1}^{(1)}=1\), \(\gamma _{2}^{(0)}=0\), \(\gamma _{3}^{(0)}=1/(2\sqrt{3})\), \(\gamma _{2}^{(1)}=-1\), \(\lambda _{1}^{(0)}=\lambda _{1}^{(1)}=2\sqrt{3}\) in (3.14), then we have the methods, respectively, of

$$\begin{aligned}& B_{\tau }^{(0)}=1,\qquad B_{\tau }^{(1)}=1,\qquad A_{\tau ,\xi }^{(0)}=0,\qquad A_{\tau ,\xi }^{(1)}= \frac{1}{2}, \end{aligned}$$
(6.2)
$$\begin{aligned}& B_{\tau }^{(0)}=1,\qquad B_{\tau }^{(1)}=1,\qquad A_{\tau ,\xi }^{(0)}=\tau ,\qquad A_{\tau , \xi }^{(1)}= \tau , \end{aligned}$$
(6.3)
$$\begin{aligned}& B_{\tau }^{(0)}=2\tau ,\qquad B_{\tau }^{(1)}=2\tau ,\qquad A_{\tau ,\xi }^{(0)}=0,\qquad A_{ \tau ,\xi }^{(1)}=\frac{1}{2}, \end{aligned}$$
(6.4)
$$\begin{aligned}& B_{\tau }^{(0)}=2\tau ,\qquad B_{\tau }^{(1)}=2\tau ,\qquad A_{\tau ,\xi }^{(0)}=3 \tau -3\xi +1,\qquad A_{\tau ,\xi }^{(1)}=3\tau -3\xi , \end{aligned}$$
(6.5)
$$\begin{aligned}& \textstyle\begin{array}{c@{\quad}|c@{\quad}c@{\quad}c@{\quad}c} &\ 0 & \frac{1}{2} & -\frac{1}{4} &\frac{1}{4} \\ &\ \frac{1}{2} &1 & \frac{1}{4} &\frac{3}{4} \\ \hline &\ \frac{1}{2} &\frac{1}{2}& \frac{1}{2} &\frac{1}{2} \end{array}\displaystyle , \end{aligned}$$
(6.6)
$$\begin{aligned}& \textstyle\begin{array}{c@{\quad}|c@{\quad}c@{\quad}c@{\quad}c} &\ \frac{1}{4} &\frac{1}{4} & \frac{1}{4}-\frac{1}{4\sqrt{3}} & \frac{1}{4}-\frac{\sqrt{3}}{4} \\ &\ \frac{1}{4}+\frac{1}{2\sqrt{3}} &\frac{1}{4}+ \frac{1}{2\sqrt{3}} & \frac{1}{4}+\frac{1}{4\sqrt{3}} & \frac{1}{4}+\frac{1}{4\sqrt{3}} \\ \hline &\ 0 &1& 0 &1 \end{array}\displaystyle . \end{aligned}$$
(6.7)

We choose the coefficients of (6.1) as \(a=0.5\), \(b=1\) with initial value \(x_{0}=1\). We present the average sample errors \(\sum_{i=1}^{2000}\sqrt{|x_{N}(\omega _{i})-x(1,\omega _{i})|^{2}}/2000\) for methods (6.2)–(6.7) in Table 4. Figure 1 shows the results of Table 4 in a log-log plot.

Figure 1 The convergence rates of methods (6.2)–(6.7)

Table 4 The endpoint average sample errors of method (6.2)–(6.7) for solving (6.1)

Kubo stochastic oscillator

The Kubo stochastic oscillator in the Stratonovich sense is described by

$$ \textstyle\begin{cases} \mathrm{d}p=-aq\,\mathrm{d}t-bq\circ \mathrm{d}B(t),\quad t\in [0,1], \\ \mathrm{d}q=ap\,\mathrm{d}t+bp\circ \mathrm{d}B(t),\quad t\in [0,1], \\ p(0)=p_{0}\in \mathbb{R}, \qquad q(0)=q_{0}\in \mathbb{R}. \end{cases} $$
(6.8)

Here a and b are constants. The general solution of (6.8) is given by

$$\begin{aligned}& p(t)=p_{0}\cos \bigl(at+bB(t) \bigr)-q_{0}\sin \bigl(at+bB(t) \bigr), \\& q(t)=p_{0}\sin \bigl(at+bB(t) \bigr)+q_{0}\cos \bigl(at+bB(t) \bigr). \end{aligned}$$
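The exact flow of (6.8) preserves \(p^{2}+q^{2}\). As a side check on the test problem, the implicit midpoint rule (solvable in closed form here since the system is linear, and not one of the schemes of this section) shares this property exactly, being a Cayley transform:

```python
import numpy as np

# Midpoint discretization of the Kubo oscillator (6.8): each step is the
# Cayley transform z1 = (I - th*J)^{-1} (I + th*J) z0 with th = (a*h + b*dW)/2
# and J the rotation generator, so p^2 + q^2 is preserved exactly.
def kubo_midpoint(p, q, a, b, h, N, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(N):
        th = 0.5 * (a * h + b * np.sqrt(h) * rng.standard_normal())
        d = 1 + th**2
        p, q = ((1 - th**2) * p - 2 * th * q) / d, (2 * th * p + (1 - th**2) * q) / d
    return p, q

p, q = kubo_midpoint(0.5, 0.0, a=0.0, b=1.0, h=2**-7, N=2**7)
print(p**2 + q**2)   # 0.25 up to rounding
```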

We choose the coefficients of (6.8) as \(a=0\), \(b=1\) with initial values \(p_{0}=0.5\), \(q_{0}=0\). By using an appropriate numerical quadrature formula, we can obtain classical SRK methods of high order. For (4.6), we can choose Gaussian quadrature with 3 nodes to get the following method of order 2.0:

$$ \textstyle\begin{array}{c@{\quad}|c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} &\ \frac{5}{36}- \frac{\sqrt{15}}{90} &\frac{2}{9}-\frac{2\sqrt{15}}{45} & \frac{5}{36}-\frac{2\sqrt{15}}{45} & \frac{5}{36}- \frac{\sqrt{15}}{90} &\frac{2}{9}-\frac{2\sqrt{15}}{45} & \frac{5}{36}-\frac{2\sqrt{15}}{45} \\ &\ \frac{5}{36}+\frac{\sqrt{15}}{24} &\frac{2}{9} & \frac{5}{36}-\frac{\sqrt{15}}{24} & \frac{5}{36}+ \frac{\sqrt{15}}{24} &\frac{2}{9} & \frac{5}{36}- \frac{\sqrt{15}}{24} \\ &\ \frac{5}{36}+\frac{2\sqrt{15}}{90} &\frac{2}{9}+ \frac{2\sqrt{15}}{45} & \frac{2}{9}+\frac{2\sqrt{15}}{45} & \frac{5}{36}+\frac{2\sqrt{15}}{90} &\frac{2}{9}+ \frac{2\sqrt{15}}{45} & \frac{2}{9}+\frac{2\sqrt{15}}{45} \\ \hline &\ \frac{5}{18} &\frac{4}{9} & \frac{5}{18} & \frac{5}{18} &\frac{4}{9} & \frac{5}{18} \end{array}\displaystyle . $$
(6.9)

The average sample errors at terminal time \(T=1\) are expressed by

$$\sum_{i=1}^{2000}\sqrt{\bigl|p(1,\omega _{i})-p_{N}(\omega _{i})\bigr|^{2}+\bigl|q(1, \omega _{i})-q_{N}(\omega _{i})\bigr|^{2}}/2000 $$

for the methods listed in Table 5. Figure 2 shows the results of Table 5 in a log-log plot.

Figure 2 The convergence rates of methods (4.6) and (6.9)

Table 5 The endpoint average sample errors of method (4.6) and (6.9) for solving (6.8)

Linear stochastic oscillator with additive noise

Then we consider the following linear stochastic oscillator with additive noise:

$$ \textstyle\begin{cases} \mathrm{d}y(t)=-x(t)\,\mathrm{d}t+\mathrm{d}B(t), \quad t\geq 0, \\ \mathrm{d}x(t)=y(t)\,\mathrm{d}t, \quad t\geq 0, \\ y(0)=y_{0}\in \mathbb{R},\qquad x(0)=x_{0}\in \mathbb{R}, \end{cases} $$
(6.10)

where x is the position and y is the velocity of a particle under the simple harmonic restoring force. Since (6.10) is an SDE with additive noise, its Itô and Stratonovich forms coincide; we consider it as an SDE of Stratonovich type. Choose the free parameters of (5.3) and method (5.7) as \(\gamma _{1}^{(0)}=\gamma _{1}^{(1)}=\gamma _{2}^{(0)}=\gamma _{2}^{(1)}= \gamma _{2}^{(2)}=0\) (other choices are also possible). Then we have the methods

$$\begin{aligned}& \begin{aligned} &B_{\tau }^{(0)}=1,\qquad B_{\tau }^{(1)}=1,\qquad B_{\tau }^{(2)}=0, \\ &A_{\tau ,\xi }^{(0)}=\frac{1}{2},\qquad A_{\tau ,\xi }^{(1)}=0,\qquad A_{\tau ,\xi }^{(2)}= \biggl(\pm 18\sqrt{\frac{1}{54}} \biggr) \biggl(\tau -\frac{1}{2} \biggr)+1, \end{aligned} \end{aligned}$$
(6.11)
$$\begin{aligned}& \textstyle\begin{array}{c@{\quad}|c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} &\ \frac{1}{4} & \frac{1}{4} & 0 & 0 &-\frac{\sqrt{2}}{4}+ \frac{1}{2} & -\frac{\sqrt{2}}{4}+\frac{1}{2} \\ &\ \frac{1}{4} &\frac{1}{4} & 0 & 0 & \frac{\sqrt{2}}{4}+\frac{1}{2} & \frac{\sqrt{2}}{4}+ \frac{1}{2} \\ \hline &\ \frac{1}{2} &\frac{1}{2} & \frac{1}{2} & \frac{1}{2} &0 & 0 \end{array}\displaystyle . \end{aligned}$$
(6.12)

To check the order of methods (6.11), (6.12), the average sample errors at terminal time \(T=1\) (i.e., \(\sum_{i=1}^{2000}\sqrt{|x(1,\omega _{i})-x_{N}(\omega _{i})|^{2}+|y(1, \omega _{i})-y_{N}(\omega _{i})|^{2}}/2000\)) are listed in Table 6. Figure 3 shows the results of Table 6 in a log-log plot; the slope of the reference line is 2.0.
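As a sanity check on the test problem itself, Itô's formula applied to (6.10) gives \(\mathrm{d}(x^{2}+y^{2})=\mathrm{d}t+2y\,\mathrm{d}B\), so \(E(x^{2}+y^{2})(t)=x_{0}^{2}+y_{0}^{2}+t\). A quick Euler–Maruyama simulation (a baseline scheme, not (6.11) or (6.12); its small bias is visible but negligible at this step size) reproduces this linear growth:

```python
import numpy as np

# Euler-Maruyama for (6.10): dx = y dt, dy = -x dt + dB, with x0 = 1, y0 = 0.
# The second moment of the "energy" x^2 + y^2 grows linearly in t.
rng = np.random.default_rng(0)
h, N, M = 2**-7, 2**7, 5000
x = np.ones(M)        # x0 = 1
y = np.zeros(M)       # y0 = 0
for _ in range(N):
    dB = np.sqrt(h) * rng.standard_normal(M)
    x, y = x + h * y, y - h * x + dB
print(np.mean(x**2 + y**2))   # ~ 2.0 at T = 1
```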

Figure 3 The convergence rates of methods (6.11) and (6.12)

Table 6 The endpoint average sample errors of method (6.11) and (6.12) for solving (6.10)

Conclusions

This paper extends DCSRK methods to their stochastic counterpart. For general autonomous SDEs in the Stratonovich sense, a class of CSSRK methods is proposed, and the general order conditions are obtained by means of colored rooted trees and stochastic B-series. The order conditions up to strong global order 2.0 of the CSSRK methods for single integrand SDEs are simplified, and for SDEs with additive noise, CSSRK methods of strong global order 1.5 and 2.0 are given. It is proved that a family of SRK methods of the same order as a given CSSRK method can be obtained by a suitable application of numerical quadrature formulas, so that some efficient SRK methods are induced. The numerical experiments verify the theoretical analysis, show that the schemes are useful in long-time numerical simulation of SDEs, and exhibit convergence orders in agreement with the theoretical predictions.

Availability of data and materials

Not applicable.

References

  1. Mao, X.: Stochastic Differential Equations and Applications. Horwood, New York (1997)
  2. Liu, M., Zhu, Y.: Stability of a budworm growth model with random perturbations. Appl. Math. Lett. 79, 13–19 (2018)
  3. Liu, M., Yu, L.: Stability of a stochastic logistic model under regime switching. Adv. Differ. Equ. (2015). https://doi.org/10.1186/s13662-015-0666-5
  4. Li, X., Ma, Q., Yang, H., Yuan, C.: The numerical invariant measure of stochastic differential equations with Markovian switching. SIAM J. Numer. Anal. 56(3), 1435–1455 (2018)
  5. Huang, C.: Exponential mean square stability of numerical methods for systems of stochastic differential equations. J. Comput. Appl. Math. 236(16), 4016–4026 (2012)
  6. Li, X., Zhang, C., Ma, Q., Ding, X.: Discrete gradient methods and linear projection methods for preserving a conserved quantity of stochastic differential equations. Int. J. Comput. Math. 95(12), 2511–2524 (2018)
  7. Zong, X., Wu, F., Huang, C.: Preserving exponential mean square stability and decay rates in two classes of theta approximations of stochastic differential equations. J. Differ. Equ. Appl. 20(7), 1091–1111 (2014)
  8. Mao, W., Hu, L., Mao, X.: Approximate solutions for a class of doubly perturbed stochastic differential equations. Adv. Differ. Equ. (2018). https://doi.org/10.1186/s13662-018-1490-5
  9. Yin, Z., Gan, S.: Chebyshev spectral collocation method for stochastic delay differential equations. Adv. Differ. Equ. (2015). https://doi.org/10.1186/s13662-015-0447-1
  10. Wang, X., Gan, S., Wang, D.: A family of fully implicit Milstein methods for stiff stochastic differential equations with multiplicative noise. BIT Numer. Math. 52(3), 741–771 (2012)
  11. Tan, J., Yang, H., Men, W., Guo, Y.: Construction of positivity preserving numerical method for jump-diffusion option pricing models. J. Comput. Appl. Math. 320, 96–100 (2017)
  12. Hu, L., Li, X., Mao, X.: Convergence rate and stability of the truncated Euler–Maruyama method for stochastic differential equations. J. Comput. Appl. Math. 337, 274–289 (2018)
  13. Tan, J., Mu, Z., Guo, Y.: Convergence and stability of the compensated split-step θ-method for stochastic differential equations with jumps. Adv. Differ. Equ. (2014). https://doi.org/10.1186/1687-1847-2014-209
  14. Zhou, W., Zhang, J., Hong, J., Song, S.: Stochastic symplectic Runge–Kutta methods for the strong approximation of Hamiltonian systems with additive noise. J. Comput. Appl. Math. 325, 134–148 (2017)
  15. Butcher, J.C.: An algebraic theory of integration methods. Math. Comput. 26, 79–106 (1972)
  16. Tang, W., Lang, G., Luo, X.: Construction of symplectic (partitioned) Runge–Kutta methods with continuous stage. Appl. Math. Comput. 286, 279–287 (2016)
  17. Tang, W.: A note on continuous-stage Runge–Kutta methods for solving ordinary differential equations. Appl. Math. Comput. 339, 231–241 (2018)
  18. Kloeden, P.E., Platen, E.: Numerical Solution of Stochastic Differential Equations. Springer, Berlin (1992)
  19. Tian, T., Burrage, K.: Implicit Taylor methods for stiff stochastic differential equations. Appl. Numer. Math. 38, 167–185 (2001)
  20. Debrabant, K.: Cheap arbitrary high order methods for single integrand SDEs. BIT Numer. Math. 57, 153–168 (2017)
  21. Debrabant, K.: B-series analysis of stochastic Runge–Kutta methods that use an iterative scheme to compute their integral stage values. SIAM J. Numer. Anal. 47, 181–203 (2008)
  22. Miyatake, Y., Butcher, J.C.: A characterization of energy-preserving methods and the construction of parallel integrators for Hamiltonian systems. SIAM J. Numer. Anal. 54, 1993–2013 (2016)


Acknowledgements

The authors would like to express their appreciation to the editors and the anonymous referees for their many valuable suggestions and for carefully reading the preliminary version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (No. 11701124) and the Natural Science Foundation of Shandong Province of China (No. ZR2017PA006).

Author information


Contributions

All authors contributed equally to this work. They all read and approved the final version of the manuscript.

Corresponding author

Correspondence to Wendi Qin.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Xin, X., Qin, W. & Ding, X. Continuous stage stochastic Runge–Kutta methods. Adv Differ Equ 2021, 61 (2021). https://doi.org/10.1186/s13662-021-03221-2


Keywords

  • Stochastic differential equations
  • Continuous stage stochastic Runge–Kutta methods
  • B-series