Open Access

Mean-square numerical approximations to random periodic solutions of stochastic differential equations

Advances in Difference Equations 2015, 2015:292

https://doi.org/10.1186/s13662-015-0626-0

Received: 7 April 2015

Accepted: 31 August 2015

Published: 17 September 2015

Abstract

This paper is devoted to mean-square numerical approximations of random periodic solutions of dissipative stochastic differential equations. The existence and an expression of the random periodic solutions are established. We also prove that the random periodic solutions are mean-square uniformly asymptotically stable, which makes the numerical approximations feasible. Mean-square convergence of the numerical approximations obtained by the random Romberg algorithm is also proved. A numerical example is presented to show the effectiveness of the proposed method.

Keywords

stochastic differential equation; random periodic solutions; random Romberg algorithm; pullback; forward infinite horizon stochastic integral equations

1 Introduction

Stochastic differential equations (SDEs) occupy an important position in both theory and applications; for more details we refer the reader to [1] and [2]. In recent years there has been increasing interest in random periodic solutions of SDEs. Random periodic solutions describe many physical phenomena and play an important role in aeronautics, electronics, biology, and so on [3, 4]. The existence of random periodic solutions was established by Feng et al. [5]. However, random periodic solutions have not yet been constructed explicitly. Numerical approximation is therefore an important method for studying their dynamical behavior. There are, however, few numerical studies in this field: the main difficulties lie in determining the initial value at the starting time and in simulating improper integrals efficiently. In this paper we are therefore concerned with the possibility of mean-square numerical approximation and with the corresponding convergence analysis.

There are two main motivations for this work. It is well known that in the deterministic case extensive results have been obtained, including numerical approximations to periodic solutions; we refer the reader to [6], [7], and the references therein. In the random case, however, few studies exist. Yevik and Zhao [8] treated numerical stationary solutions of SDEs. Liu et al. [3] investigated square-mean almost periodic solutions for a class of stochastic integro-differential equations. To the best of our knowledge, no investigations of mean-square numerical approximations to random periodic solutions of SDEs exist in the literature, although numerical approximation remains an attractive method for studying random periodicity in random dynamical systems.

Because the initial value at the starting time carries errors, and random periodic solutions are sensitive to the initial value, we can only treat SDEs whose random periodic solutions are mean-square uniformly asymptotically stable. Our main results are the numerical approximation of random periodic solutions of dissipative SDEs and the proof of its mean-square convergence. They show that the mean-square numerical approximations are in fact close to the exact solutions and that the iterative error can be kept within a presupposed error tolerance.

This paper is organized as follows. Section 2 deals with some preliminaries intended to clarify the presentation of concepts and norms used later. In Section 3 we present theoretical results on random periodic solutions of dissipative SDEs. This is the main conclusion of the article, which contains the existence and stability of random periodic solutions, the numerical implementation method and the mean-square convergence theorem. Section 4 is devoted to numerical experiments, which demonstrate that these algorithms can be applied to simulate random periodic solutions of dissipative SDEs. Finally, Section 5 gives some brief conclusions.

2 Preliminaries

Let \(W(t)\), \(t\in R\), be a k-dimensional Brownian motion, and \((\Omega, \mathcal{F}, P)\) be the filtered Wiener space. Here \(\mathcal{F}_{s}^{t}:=\sigma(W_{u}-W_{v},s\leq v\leq u\leq t)\), and \(\mathcal {F}^{t}:=\bigvee_{s\leq t}\mathcal{F}_{s}^{t}\), where \(s\in R\) is any given time [9]. We consider a class of Itô SDEs of the form
$$ dX_{t}=-AX_{t}\,dt+f(t,X_{t}) \,dt+g(t)\,dW_{t},\quad X(s)=x_{0}\in R^{d}, $$
(1)
where \(X_{t}:\Omega\rightarrow R^{d}\), \(f:R\times R^{d}\rightarrow R^{d}\), \(g:R\rightarrow R^{d\times k}\), A is a hyperbolic \(d\times d\) matrix all of whose eigenvalues are positive, and we define \(T_{t}=e^{-At}\), the hyperbolic linear flow induced by −A.
We define
$$ \theta:(-\infty,+\infty)\times \Omega\rightarrow\Omega,\qquad \theta_{t}\omega (s)=\omega(t+s)-\omega(t) $$
(2)
and \(\Delta:=\{(s,t)\in R^{2},s\leq t\}\). By the conclusions in [9], SDE (1) generates a stochastic flow \(\varphi :\Delta\times R^{d}\times\Omega\rightarrow R^{d}\) when the solution of SDE (1) exists uniquely, which is usually written as \(\varphi (s,t,x_{0},\omega):=\varphi(s, t,\omega)x_{0}\) on the metric dynamical systems \((\Omega, \mathcal{F}, P,\theta_{t})\). The stochastic flow φ is given by
$$ \varphi(s,t,\omega)x_{0}=x_{0}+ \int _{s}^{t}\bigl(-A\varphi(s,r,\omega )x_{0}+f\bigl(r,\varphi(s,r,\omega)x_{0}\bigr)\bigr)\,dr+\int _{s}^{t}g(r)\,dW_{r},\quad t \geq s. $$
(3)

Throughout the rest of this paper, we use the following notation.

Let \(L^{2}(\Omega,P)\) be the space of all square-integrable random variables \(x:\Omega\rightarrow R^{d}\). For any random vector \(x=({x_{1}},{ x_{2}},\ldots,{ x_{d}}) \in R^{d}\), the norm of x is defined by
$$ \|x\|_{2}= \biggl[\int_{\Omega}\bigl[\bigl|x_{1}(\omega)\bigr|^{2}+\bigl|x_{2}( \omega)\bigr|^{2}+\cdots+\bigl|x_{d}(\omega )\bigr|^{2}\bigr]\,dP \biggr]^{\frac{1}{2}}< \infty. $$
(4)
For any stochastic process \(x(t,\omega) \in R^{d}\), the norm of \(x(t,\omega)\) is defined as follows:
$$\bigl\| x(t,\omega)\bigr\| _{2}=\sup_{t\in R}\bigl\| x_{t}( \omega)\bigr\| _{2}< \infty. $$
We define the norm of random matrices as follows:
$$ \| G \|_{L^{2}(\Omega,P)} = \bigl[E\bigl(|G|^{2}\bigr) \bigr]^{\frac{1}{2}}, $$
(5)
where G is a random matrix and \(|\cdot|\) is the operator norm.

For simplicity of notation, the norms \(\|\cdot\|_{2}\) and \(\| \cdot\| _{L^{2}(\Omega,P)}\) are both written as \(\|\cdot\|\).
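As an aside, the mean-square norm (4) can be estimated by Monte Carlo sampling. A minimal sketch for a scalar random variable follows; the sample size, the seed, and the standard normal test case (with the helper name `ms_norm`) are illustrative assumptions, not taken from the paper:

```python
import math
import random

def ms_norm(samples):
    """Estimate ||x|| = (E|x|^2)^{1/2} from i.i.d. samples of x."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

random.seed(0)
# Test case: for x ~ N(0, 1) we have E|x|^2 = 1, so ||x|| = 1.
xs = [random.gauss(0.0, 1.0) for _ in range(200000)]
est = ms_norm(xs)
```

For a vector-valued x one would sum the squared components inside the average, exactly as in (4).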

The following hypotheses are made for the theoretical analysis.

Hypothesis 2.1

  1. (i)

    There exists a constant \(K^{*}>0\) such that \(\|x_{0}\|\leq K^{*}\).

     
  2. (ii)
    The mapping \(f:R\times R^{d}\rightarrow R^{d} \) is continuous, and there exist positive constants \(J_{1}\) and \(K_{1}\) such that \(f(t,0)\) is globally bounded with \(|f(t,0)|\leq J_{1}\) and for any \(X_{1},X_{2}\in R^{d}\), the following inequality holds:
    $$ \bigl|f(t,X_{1})-f(t,X_{2})\bigr|\leq K_{1}|X_{1}-X_{2}|. $$
    (6)
     
  3. (iii)

The mapping \(g:R\rightarrow R^{d\times k} \) is continuous, and there exists a positive constant \(J_{2}\) such that \(g(t)\) is globally bounded with \(|g(t)|\leq J_{2}\).

     

3 Theoretical results

3.1 Existence of random periodic solutions

The following result guarantees the existence of random periodic solutions for dissipative SDEs, and is a direct consequence of Theorem 3.2.4 in [4].

Lemma 3.1

For any \(-\infty< s\leq t<+\infty\), \(x_{0},\hat{x}_{0}\in B\), if the following conditions hold:
  1. (i)

    \(\varphi(s,t,\omega)\cdot:B\rightarrow B\) is a.s. continuous;

     
  2. (ii)

    \(\varphi(s+\tau,t+\tau,\omega)x_{0}=\varphi (s,t,\theta_{\tau}\omega)x_{0}\);

     
  3. (iii)

    there exist constants \(c\in(0,1)\) and \(M>0\) such that \(\|\varphi(s,t,\omega)x_{0}-\varphi(s,t,\omega)\hat{x}_{0}\|\leq c^{t-s}\| x_{0}-\hat{x}_{0}\|\) and \(\|\varphi(s,t,\omega)x_{0}\|\leq M\), where M may depend on \((t-s)\),

     
then there exists a unique random τ-periodic solution \(Y(t,\omega)\) of φ. Moreover, \(\varphi(t-m\tau,t,\theta_{-m\tau }\omega)x_{0}\rightarrow Y(t,\omega)\in L^{p}(\Omega,B)\) as \(m \rightarrow +\infty\), where \(B\subset R^{d}\) and m is a positive integer.

Lemma 3.2

Suppose that A is a hyperbolic \(d\times d\) matrix whose eigenvalues \(\{\lambda _{j},j=1,2,\ldots,d\}\) satisfy \(0< \lambda_{1} \leq\lambda_{2} \leq \cdots\leq\lambda_{d}\). Then the function \(e^{-At}\) tends to zero as \(t\rightarrow+\infty\), that is,
$$\lim_{t\rightarrow+\infty}e^{-At}=0. $$

Proof

We start with the one-dimensional case \(d=1\). If λ is the eigenvalue of A, that is, \(A=\lambda\) with \(\lambda>0\), we obtain
$$\lim_{t\rightarrow+\infty}e^{-At}=\lim_{t\rightarrow+\infty }e^{-\lambda t}=0. $$

So the claim is valid in the one-dimensional case.

Now we consider the case \(d>1\). Since A is diagonalizable, \(A=QDQ^{-1}\), where D is diagonal with the eigenvalues of A as its spectrum, the matrix exponential satisfies \(e^{A}=Qe^{D}Q^{-1}\) [8]. Then we get
$$e^{-At}=Q\bigl(e^{-Dt}\bigr)Q^{-1} $$
and
$$e^{-Dt}= \begin{pmatrix} e^{-\lambda_{1}t} & 0 & 0 &\cdots &0\\ 0 & e^{-\lambda_{2}t} & 0 &\cdots &0\\ \vdots &\vdots &\vdots&\vdots& \vdots\\ 0 &0 &0 &\cdots & e^{-\lambda_{d}t} \end{pmatrix}. $$
It follows from the result of the one-dimensional case that it is also valid for the d-dimensional case. This completes the proof. □
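As an illustration, Lemma 3.2 can be checked numerically on a small diagonalizable example. In the sketch below, the \(2\times2\) eigenvector matrix Q, its inverse, and the eigenvalues 1 and 2 are illustrative choices, not taken from the paper; the code evaluates \(e^{-At}=Qe^{-Dt}Q^{-1}\) at a large time t:

```python
import math

def mat_mul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_neg_At(Q, Qinv, eigs, t):
    """e^{-At} = Q e^{-Dt} Q^{-1} for a diagonalizable A = Q D Q^{-1}."""
    eD = [[math.exp(-eigs[0] * t), 0.0],
          [0.0, math.exp(-eigs[1] * t)]]
    return mat_mul(mat_mul(Q, eD), Qinv)

Q = [[1.0, 1.0], [0.0, 1.0]]      # eigenvector matrix (illustrative)
Qinv = [[1.0, -1.0], [0.0, 1.0]]  # its inverse
eigs = (1.0, 2.0)                 # positive eigenvalues of A

M = exp_neg_At(Q, Qinv, eigs, 20.0)  # every entry decays at least like e^{-t}
```

Every entry of M is of size \(e^{-\lambda_{1}t}\) or smaller, in agreement with the lemma.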

From the conclusions of Lemmas 3.1 and 3.2, we obtain the following theorem.

Theorem 3.3

Assume that there exists a constant \(\tau>0\) such that for any \(t \in R\) and any \(X\in R^{d}\), the following equalities hold:
$$ f(t,X)=f(t+\tau,X),\qquad g(t)=g(t+\tau). $$
(7)
Suppose that A satisfies the assumptions of Lemma  3.2. Moreover, suppose that SDE (1) satisfies Hypothesis 2.1 and that the global Lipschitz constant of f satisfies \(K_{1}\in[0,\sqrt{2}\lambda_{d})\).
Then SDE (1) has a unique random periodic solution \(Y(t,\omega):(-\infty,+\infty)\times\Omega\rightarrow R^{d}\), that is,
$$ \varphi(t,t+\tau,\omega)Y(t,\omega)=Y(t+\tau,\omega)=Y(t, \theta_{\tau}\omega), \quad\textit{for all } t\in R \textit{ a.s}., $$
(8)
and \(Y(t,\omega)\) is a solution of the forward infinite horizon integral equation
$$ Y(t,\omega)=\int^{t}_{-\infty}T_{t-r}f \bigl(r,Y(r,\omega)\bigr)\,dr+(\omega)\int^{t}_{-\infty}T_{t-r}g(r) \,dW_{r}. $$
(9)

Proof

In order to apply Lemma 3.1 to this problem, we need only check that the assumptions of this theorem imply its three hypotheses.

First and foremost, by the assumptions on SDE (1), hypothesis (i) obviously holds.

Secondly, utilizing (3) and Duhamel’s formula, we obtain
$$\begin{aligned} &\varphi(s,t,\omega)x_{0} \\ &\quad=e^{-A(t-s)}x_{0}+ \int_{s}^{t}e^{-A(t-r)}f \bigl(r,\varphi(s,r,\omega )x_{0}\bigr)\,dr+(\omega)\int _{s}^{t}e^{-A(t-r)}g(r)\,dW_{r}. \end{aligned}$$
(10)
Using (2) and (7), we have
$$\begin{aligned} &\varphi(s+\tau,t+\tau,\omega)x_{0} \\ &\quad=e^{-A(t-s)}x_{0}+ \int_{s+\tau}^{t+\tau}e^{-A(t+\tau-r)}f \bigl(r,\varphi (s+\tau,r,\omega)x_{0}\bigr)\,dr+(\omega)\int _{s+\tau}^{t+\tau}e^{-A(t+\tau-r)}g(r)\,dW_{r} \\ &\quad=e^{-A(t-s)}x_{0}+ \int_{s}^{t}e^{-A(t-r)}f \bigl(r+\tau,\varphi(s+\tau,r+\tau ,\omega)x_{0}\bigr)\,dr\\ &\qquad{}+( \theta_{\tau}\omega)\int_{s}^{t}e^{-A(t-r)}g(r+ \tau)\,dW_{r} \\ &\quad=e^{-A(t-s)}x_{0}+ \int_{s}^{t}e^{-A(t-r)}f \bigl(r,\varphi(s+\tau,r+\tau,\omega )x_{0}\bigr)\,dr+( \theta_{\tau}\omega)\int_{s}^{t}e^{-A(t-r)}g(r) \,dW_{r}. \end{aligned}$$
Putting \(\bar{\omega}:=\theta_{\tau}\omega\) and \(\bar{\varphi}(s,r,\bar {\omega})x_{0}:=\varphi(s+\tau,r+\tau,\omega)x_{0}\), we have found that
$$\bar{\varphi}(s,t,\bar{\omega})x_{0}=e^{-A(t-s)}x_{0}+ \int_{s}^{t}e^{-A(t-r)}f\bigl(r,\bar{ \varphi}(s,r,\bar{\omega})x_{0}\bigr)\,dr+(\bar {\omega})\int _{s}^{t}e^{-A(t-r)}g(r)\,dW_{r}. $$
By the uniqueness of the solution, we obtain
$$\bar{\varphi}(s,t,\bar{\omega})x_{0}=\varphi(s+\tau,t+\tau, \omega)x_{0}. $$
Therefore, we conclude that
$$\begin{aligned} &\varphi(s+\tau,t+\tau,\omega)x_{0} \\ &\quad=e^{-A(t-s)}x_{0}+ \int_{s}^{t}e^{-A(t-r)}f \bigl(r,\varphi(s,r,\theta_{\tau}\omega)x_{0}\bigr)\,dr+( \theta_{\tau}\omega)\int_{s}^{t}e^{-A(t-r)}g(r) \,dW_{r} \\ &\quad=\varphi(s,t,\theta_{\tau}\omega)x_{0}. \end{aligned}$$
(11)
This completes the check of the second hypothesis.
Last but not least, for any \(x_{0},\hat{x}_{0}\in R^{d}\), it follows from (10) that
$$\begin{aligned} &E\bigl|\varphi(s,t,\omega)x_{0}-\varphi(s,t,\omega) \hat{x}_{0}\bigr|^{2} \\ &\quad:=I_{1} \leq E\biggl|e^{-A(t-s)}(x_{0}- \hat{x}_{0}) \\ &\qquad{}+\int_{s}^{t}e^{-A(t-r)}\bigl(f \bigl(r,\varphi(s,r,\omega)x_{0}\bigr)-f\bigl(r,\varphi(s,r,\omega ) \hat{x}_{0}\bigr)\bigr)\,dr\biggr|^{2}. \end{aligned}$$
(12)
From the fact \((a+b)^{2}\leq2a^{2}+2b^{2}\), \(a,b\in R\), and the Cauchy-Schwarz inequality, we obtain
$$\begin{aligned} I_{1}\leq{}&2e^{-2A(t-s)}\cdot E|x_{0}- \hat{x}_{0}|^{2} \\ &{}+2\int_{s}^{t}e^{-2A(t-r)}\,dr\cdot E \biggl[\int_{s}^{t}\bigl|f\bigl(r,\varphi(s,r,\omega )x_{0}\bigr)-f\bigl(r,\varphi(s,r,\omega)\hat{x}_{0} \bigr)\bigr|^{2}\,dr \biggr]. \end{aligned}$$
By the condition (6) we have
$$I_{1}\leq2K_{2}E|x_{0}-\hat{x}_{0}|^{2}+2K_{3}K_{1}^{2} \int_{s}^{t}E\bigl|\varphi(s,r,\omega )x_{0}- \varphi(s,r,\omega)\hat{x}_{0}\bigr|^{2}\,dr, $$
where
$$K_{2}=e^{-2A(t-s)} \quad \mbox{and}\quad K_{3}=\int _{s}^{t}e^{-2A(t-r)}\,dr. $$
By the Gronwall inequality, there exists a number \(M_{1}\) such that
$$\bigl\| \varphi(s,t,\omega)x_{0}-\varphi(s,t,\omega)\hat{x}_{0}\bigr\| \leq M_{1}, $$
where
$$M_{1}=\|x_{0}-\hat{x}_{0}\|\sqrt {2K_{2}\cdot\exp\bigl(2K_{3}K_{1}^{2}(t-s) \bigr)}. $$
Note that \(M_{1}\) tends to zero as \(s\rightarrow-\infty\). Therefore there exists \(0< c<1\) such that the inequality \(M_{1}\leq c^{t-s}\|x_{0}-\hat{x}_{0}\|\) holds.
By a method similar to that of [8], we obtain the estimate of \(\|\varphi(s, t,\omega)x_{0}\|\). From (10) and the fact \((a+b+c)^{2}\leq 3a^{2}+3b^{2}+3c^{2}\), \(a,b,c\in R\), we obtain
$$\begin{aligned} E\bigl|\varphi(s,t,\omega)x_{0}\bigr|^{2} \leq{}&3E\bigl|e^{-A(t-s)} x_{0}\bigr|^{2}+3E\biggl|\int _{s}^{t}e^{-A(t-r)} f\bigl(r,\varphi(s,r, \omega)x_{0}\bigr)\,dr\biggr|^{2}\\ &{}+3E\biggl|(\omega)\int_{s}^{t}e^{-A(t-r)} g(r)\,dW_{r}\biggr|^{2}. \end{aligned}$$
Using the Cauchy-Schwarz inequality and the Itô isometry we have
$$\begin{aligned} E\bigl|\varphi(s,t,\omega)x_{0}\bigr|^{2} \leq{}&3e^{-2A(t-s)}E|x_{0}|^{2} +3\int _{s}^{t}e^{-2A(t-r)}\,dr \cdot E \biggl[\int _{s}^{t} \bigl|f\bigl(r,\varphi(s,r,\omega )x_{0}\bigr)\bigr|^{2}\,dr \biggr] \\ &{}+3\int_{s}^{t}e^{-2A(t-r)} g^{2}(r)\,dr. \end{aligned}$$
From the global Lipschitz condition of the function f it follows that for any \(X\in R^{d}\), the linear growth condition also holds:
$$ \bigl|f(t,X)\bigr|\leq K_{1}|X|+J_{1}. $$
(13)
From the global boundedness of the function g and the boundedness of the initial value in Hypothesis 2.1, we obtain
$$E\bigl|\varphi(s,t,\omega)x_{0}\bigr|^{2} \leq3K_{2} \bigl(K^{*}\bigr)^{2}+6K_{3} J_{1}^{2}(t-s)+3K_{3}J_{2}^{2}(t-s) +6K_{3} K_{1}^{2}\cdot\int_{s}^{t} E\bigl| \varphi(s,r,\omega )x_{0}\bigr|^{2}\,dr. $$
By the Gronwall inequality, there exists a number \(M_{2}\) such that
$$\bigl\| \varphi(s,t,\omega)x_{0}\bigr\| \leq M_{2}, $$
where
$$M_{2}=\sqrt{3\bigl[K_{2}\bigl(K^{*} \bigr)^{2}+2K_{3} J_{1}^{2}(t-s)+K_{3}J_{2}^{2}(t-s) \bigr]\cdot\exp\bigl(6K_{3} K_{1}^{2}(t-s)\bigr)}. $$
Here \(M_{2}\) may depend on \(t-s\), which is allowed by the assumption.

This completes the check of the third hypothesis.

Moreover, the pullback method only works for dissipative, that is, contractive, systems. The pullback of SDE (1) is
$$\begin{aligned} &dX(t-m\tau,t,\theta_{-m\tau}\omega, x_{0}) \\ &\quad=-AX(t-m\tau,t,\theta_{-m\tau }\omega, x_{0})\,dt+f \bigl(t,X(t-m\tau,t,\theta_{-m\tau}\omega, x_{0})\bigr)\,dt \\ &\qquad{}+g(t)\,dW_{t+m\tau}, \quad X\bigl(-m\tau,0,\theta_{-m\tau} \omega, x_{0}(\theta _{-m\tau}\omega)\bigr)=x_{0}( \theta_{-m\tau}\omega)\in R^{d}, \end{aligned}$$
(14)
where \(x_{0}\) is \(\mathcal{F}_{-m\tau}\)-measurable, and \(X(t-m\tau ,t,\theta_{-m\tau}\omega, x_{0})\) denotes the solution at the time t with the initial condition \(x_{0}(\theta_{-m\tau}\omega)\) at the time \(t-m\tau\).
Therefore, the exact solution of SDE (14) at the time t has the form
$$\begin{aligned} \varphi(t-m\tau,t,\theta_{-m\tau}\omega)x_{0} ={}&e^{-Am\tau}x_{0}+\int_{t-m\tau}^{t}e^{-A(t-r)}f \bigl(r,\varphi(r,t,\theta_{-m\tau}\omega )x_{0}\bigr)\,dr \\ &{}+(\omega)\int_{t-m\tau}^{t}e^{-A(t-r)}g(r) \,dW_{(r+m\tau)}. \end{aligned}$$
(15)
It follows from Lemma 3.2 and the periodic property of f and g that
$$\begin{aligned} &\lim_{m \rightarrow+\infty}\varphi(t-m\tau,t,\theta_{-m\tau} \omega)x_{0} \\ &\quad=\lim_{m\rightarrow+\infty} \biggl[e^{-Am\tau}x_{0}+ \int _{t-m\tau }^{t}e^{-A(t-r)}f\bigl(r,\varphi(r,t, \theta_{-m\tau}\omega)x_{0}\bigr)\,dr\\ &\qquad{}+(\omega)\int _{t-m\tau}^{t}e^{-A(t-r)}g(r)\,dW_{(r+m\tau)} \biggr] \\ &\quad=\lim_{m\rightarrow+\infty} \biggl[\int_{t-m\tau}^{t}e^{-A(t-r)}f \bigl(r+m\tau,\varphi(r,r+m\tau,\omega )x_{0}\bigr)\,dr\\ &\qquad{}+(\omega)\int _{t-m\tau}^{t}e^{-A(t-r)}g(r+m\tau) \,dW_{(r+m\tau)} \biggr] \\ &\quad= \int_{-\infty}^{t}e^{-A(t-r)}f\bigl(r,Y(r, \omega)\bigr)\,dr+(\omega)\int_{-\infty }^{t}e^{-A(t-r)}g(r) \,dW_{r}= Y(t,\omega). \end{aligned}$$
Therefore, the conclusion follows from Lemma 3.1 and Theorem 4.2.2 in [4]. The proof is finished. □

3.2 Stability

In this section we investigate the mean-square uniformly asymptotic stability of the random periodic solution \(Y(t,\omega)\) of SDE (1). The pullback method is a powerful tool in the proof of uniformly asymptotic stability. To be precise, let us introduce some related definitions [10].

Definition 3.1

(i) The random periodic solution \(Y(t,\omega)\) of SDE (1) is said to be mean-square asymptotically stable if for any given \(\epsilon>0\), every other random periodic solution \(\hat{Y}(t,\omega)\) of SDE (1) satisfies
$$\lim_{t\rightarrow+\infty} \bigl\| Y(t,\omega)-\hat{Y}(t,\omega)\bigr\| =0 $$
for any bounded \(\mathcal{F}_{s}\)-measurable initial values \(x_{0}\) and \(\hat{x}_{0}\), respectively, with \(\|x_{0}-\hat{x}_{0}\|<\epsilon\), where \(s=t-m\tau\).

(ii) The random periodic solution \(Y(t,\omega)\) of SDE (1) is said to be mean-square uniformly stable if for any given \(\epsilon>0\) and every other random periodic solution \(\hat{Y}(t,\omega )\) of SDE (1), there exists \(\delta=\delta(\epsilon)\) such that \(\|x_{0}-\hat{x}_{0}\|\leq\delta\) implies the inequality \(\|Y(t,\omega )-\hat{Y}(t,\omega)\|<\epsilon\) holds for any \(t\geq s\), where \(s=t-m\tau\).

(iii) The random periodic solution \(Y(t,\omega)\) of SDE (1) is said to be mean-square uniformly asymptotically stable if it is mean-square uniformly stable and mean-square asymptotically stable.

Theorem 3.4

Assume that for any initial values \(x_{0} \) and \(\hat{x}_{0}\in L^{2}(\Omega,P)\), the coefficients of SDE (1) satisfy Theorem  3.3, then the random periodic solution \(Y(t,\omega)\) of SDE (1) is mean-square uniformly asymptotically stable.

Proof

First and foremost, let \(\varphi(t-m\tau,t,\theta_{-m\tau}\omega)\hat {x}_{0}\) be another solution of SDE (1) and let \(\epsilon>0\) be an arbitrary constant. If \(\|x_{0}-\hat{x}_{0}\|\leq\epsilon\), it follows from (15) and the method used to estimate (12) that
$$\begin{aligned} &E\bigl|\varphi(t-m\tau,t,\theta_{-m\tau}\omega)x_{0}-\varphi(t-m \tau,t,\theta _{-m\tau}\omega)\hat{x}_{0}\bigr|^{2} \\ &\quad\leq2K_{4}E|x_{0}-\hat{x}_{0}|^{2} +2K_{5}K_{1}^{2}\int _{t-m\tau}^{t}E\bigl|\varphi(r,t,\theta_{-m\tau}\omega )x_{0}-\varphi(r,t,\theta_{-m\tau}\omega)\hat{x}_{0}\bigr|^{2} \,dr, \end{aligned}$$
where
$$K_{4}=e^{-2Am\tau} \quad\mbox{and}\quad K_{5}=\int _{t-m\tau}^{t}e^{-2A(t-r)}\,dr. $$
By the Gronwall inequality, there exists a number \(M_{3}\) such that
$$\bigl\| \varphi(t-m\tau,t,\theta_{-m\tau}\omega)x_{0}-\varphi(t-m \tau,t,\theta _{-m\tau}\omega)\hat{x}_{0}\bigr\| \leq M_{3}, $$
where
$$M_{3}=\|x_{0}-\hat{x}_{0}\|\sqrt {2K_{4}\cdot\exp\bigl(2K_{5}K_{1}^{2}m \tau\bigr)}. $$
Therefore, by the fact that \(M_{3}\rightarrow0\) as \(m\rightarrow+\infty \), we obtain
$$\lim_{m\rightarrow+\infty}\bigl\| \varphi(t-m\tau,t,\theta_{-m\tau}\omega )x_{0}-\varphi(t-m\tau,t,\theta_{-m\tau}\omega) \hat{x}_{0}\bigr\| =0. $$
Fatou’s lemma implies that
$$\begin{aligned}[b] E\bigl|Y(t,\omega)-\hat{Y}(t,\omega)\bigr|^{2} &=E\Bigl[\lim_{m\rightarrow+\infty}\bigl|\varphi (t-m\tau,t, \theta_{-m\tau}\omega)x_{0}-\varphi(t-m\tau,t, \theta_{-m\tau }\omega)\hat{x}_{0}\bigr|^{2}\Bigr] \\ &\leq\lim_{m\rightarrow+\infty}E\bigl|\varphi(t-m\tau,t,\theta_{-m\tau } \omega)x_{0}-\varphi(t-m\tau,t,\theta_{-m\tau}\omega) \hat{x}_{0}\bigr|^{2}. \end{aligned} $$
Then we have
$$\lim_{t\rightarrow+\infty}\bigl\| Y(t,\omega)-\hat{Y}(t,\omega)\bigr\| =0. $$

Then by Definition 3.1(i), it is mean-square asymptotically stable.

Secondly, let \(V(s,t,\omega)\bar{x}_{0}=Y(t,\omega)-\hat{Y}(t,\omega)\), where \(\bar{x}_{0}=(x_{0},\hat{x}_{0})\). Note that \(V(s,t,\omega )\bar{x}_{0}\) is also a random periodic solution of SDE (1). Without loss of generality, we only consider the case \(s\geq0\); the other case is similar, since the transformation \(\breve {s}=s+m'\tau\), with \(m'\) a positive integer, reduces the case \(s\leq0\) to the case \(\breve{s}\geq0\). Let \(\bar{x}'_{0}\) be the initial value at the starting time \(s=0\). From the mean-square asymptotic stability established above, it follows that for any given \(\epsilon>0\) there exists \(\delta_{0}=\delta_{0}(\epsilon )>0\) such that \(\|x'_{0}-\hat{x}'_{0}\|\leq\delta_{0}\) implies \(\|V(0,t,\omega)\bar{x}'_{0}\|<\epsilon\) for \(t\geq0\).

For the first case \(s\in[0,\tau]\), by the fact that \(V(s,t,\omega)\bar {x}_{0}\) is continuous with respect to \((s,\bar{x}_{0})\) and uniformly continuous with respect to s for \(s\in[0,\tau]\), there exists \(\delta=\delta(\epsilon)>0\) such that \(\|x_{0}-\hat{x}_{0}\| \leq\delta\) implies the inequality \(\|V(s,0,\omega)\bar{x}_{0}\|<\delta _{0} \) holds for \(s\in[0,\tau]\).

Put \(\bar{x}''_{0}=V(s,0,\omega)\bar{x}_{0}\), and we obtain \(V(s,t,\omega )\bar{x}_{0}=V(0,t,\theta_{s}\omega)\bar{x}''_{0}\) for any \(t\geq0\). Therefore if \(\|x_{0}-\hat{x}_{0}\|\leq\delta\) and \(s\in[0,\tau]\), the inequality \(\|V(s,t,\omega)\bar{x}_{0}\|<\epsilon\) holds for any \(t \geq s\), that is,
$$\bigl\| Y(t,\omega)-\hat{Y}(t,\omega)\bigr\| < \epsilon. $$
For the second case \(s>\tau\), there exists a positive integer \(m''\) such that \(s\in[m''\tau, (m''+1)\tau]\). It follows from the random periodicity that \(V(s-m''\tau,t-m''\tau,\theta_{-m''\tau}\omega)\bar{x}_{0}\) is also a random periodic solution of SDE (1) and
$$V(s,t,\omega)\bar{x}_{0}=V\bigl(s-m'' \tau,t-m''\tau,\theta_{-m''\tau}\omega \bigr) \bar{x}_{0}. $$

Then \(\|x_{0}-\hat{x}_{0}\|\leq\delta\) implies the inequality \(\|Y(t,\omega )-\hat{Y}(t,\omega)\|<\epsilon\) holds for any \(s\geq0\) and \(t\geq s\).

Therefore it follows from Definition 3.1(ii) that it is mean-square uniformly stable, and the conclusion follows from Definition 3.1(iii). This completes the proof. □

3.3 Numerical implementation method of random periodic solutions

It follows from Theorem 3.3 that the forward infinite horizon integral equation (9) gives the random periodic solution of SDE (1). However, applying a numerical method to the improper integral (9) yields only approximations to (9). Approximating the random periodic solution therefore requires that it be mean-square uniformly asymptotically stable, and this is guaranteed by Theorem 3.4. It follows that the numerical solution of the initial value problem is also a numerical solution of the random periodic solution.

Therefore, a numerical implementation method is as follows. We obtain from (9),
$$ Y(0,\omega)=\int^{0}_{-\infty}T_{-r}f \bigl(r,Y(r,\omega)\bigr)\,dr+(\omega)\int^{0}_{-\infty}T_{-r}g(r) \,dW_{r}, $$
(16)
which can be viewed as the initial value at the time \(t=0\) of random periodic solutions of SDE (1). The finite time interval \([0,t]\) is divided into N subintervals with the length \(\Delta t:=\frac{t}{N}\). For any given presupposed error tolerance \(\delta\in(0, \Delta t]\), if \(s'<0\) is chosen such that
$$\begin{aligned} &\biggl\| Y(0,\omega)-\int^{0}_{s'}T_{-r}f \bigl(r,Y(r,\omega)\bigr)\,dr-(\omega)\int^{0}_{s'}T_{-r}g(r) \,dW_{r}\biggr\| \\ &\quad=\biggl\| \int_{-\infty}^{s'}T_{-r}f \bigl(r,Y(r,\omega)\bigr)\,dr+(\omega)\int_{-\infty }^{s'}T_{-r}g(r) \,dW_{r}\biggr\| \leq\delta, \end{aligned}$$
(17)
then the improper integral (16) can be approximated by the Itô integral \(\bar{Y}(0,\omega)\), where
$$\bar{Y}(0,\omega)=\int^{0}_{s'}T_{-r}f \bigl(r,\bar{Y}(r,\omega)\bigr)\,dr+(\omega)\int^{0}_{s'}T_{-r}g(r) \,dW_{r}. $$
Therefore the improper integral (9) on the finite time interval \([0,t]\) can be approximated by the Itô integral
$$ \tilde{Y}(t,\omega)=\int_{0}^{t}T_{t-r}f \bigl(r,\tilde{Y}(r,\omega)\bigr)\,dr+(\omega )\int_{0}^{t}T_{t-r}g(r) \,dW_{r}, $$
(18)
with initial value \(\bar{Y}(0,\omega)\) at the time \(t=0\).

By means of reselecting the corresponding starting time and \(s'\), we can simulate a random periodic solution in an arbitrary finite time interval with any given presupposed error tolerance.
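Since the integrands in (16) are damped by \(T_{-r}=e^{Ar}\) as \(r\to-\infty\) (Lemma 3.2), a sufficient truncation point \(s'\) can be read off from the tolerance. The sketch below assumes a tail bound of the form \((C/\lambda_{1})e^{\lambda_{1} s'}\le\delta\); the constant C bounding the integrands is an illustrative assumption, not derived in the paper, which simply requires (17) to hold:

```python
import math

def truncation_point(lam, C, delta):
    """s' < 0 such that the tail bound (C/lam) * exp(lam * s') equals delta.

    lam:   smallest eigenvalue lambda_1 of A (the decay rate from Lemma 3.2)
    C:     assumed bound on the integrands (illustrative, not from the paper)
    delta: presupposed error tolerance, as in (17)
    """
    return math.log(delta * lam / C) / lam

s_prime = truncation_point(1.0, 10.0, 0.01)    # roughly log(0.001), about -6.9
tail = (10.0 / 1.0) * math.exp(1.0 * s_prime)  # equals delta by construction
```

Any \(s'\) below this value keeps the discarded tail within the tolerance δ.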

In order to improve the accuracy of the integration, the random Romberg algorithm is applied to (18) and to \(\bar{Y}(0,\omega)\). The method applied to (18) is described in detail as follows.

Let \(\tilde{Y}_{n}(t,\omega)\) be the approximation of \(\tilde{Y}(0,n\Delta t,\omega)\), where
$$\tilde{Y}(0,n\Delta t,\omega)=\int_{0}^{n\Delta t}T_{n\Delta t-r}f \bigl(r,\tilde{Y}(r,\omega)\bigr)\,dr+(\omega)\int_{0}^{n\Delta t}T_{n\Delta t-r}g(r) \,dW_{r}, $$
then we obtain the iterative relation
$$ \tilde{Y}_{n}(t,\omega)=\int_{0}^{n\Delta t}T_{n\Delta t-r}f \bigl(r,\tilde {Y}_{n-1}(r,\omega)\bigr)\,dr+(\omega)\int _{0}^{n\Delta t}T_{n\Delta t-r}g(r)\,dW_{r}, $$
(19)
where \(n=1,\ldots,N\) and \(\tilde{Y}_{N}(t,\omega)\) is the numerical approximation of \(\tilde{Y}(t,\omega)\).
The sequence of time steps and the increments of the Brownian motion are defined in the form
$$\begin{aligned}& t_{n}^{1}=n\Delta t,\qquad t_{n}^{2}= \frac{1}{2}(n\Delta t),\qquad t_{n}^{3}= \frac{1}{4}(n\Delta t),\qquad\ldots,\qquad t_{n}^{j}= \frac{1}{2^{j-1}}(n\Delta t), \\& \Delta W\bigl(t_{n}^{1}\bigr)=W(n\Delta t)-W(0),\qquad \Delta W\bigl(t_{n}^{2}\bigr)=\frac{1}{2}\bigl(W(n \Delta t)-W(0)\bigr), \\& \Delta W\bigl(t_{n}^{3}\bigr)=\frac{1}{4}\bigl(W(n \Delta t)-W(0)\bigr),\qquad\ldots, \qquad \Delta W\bigl(t_{n}^{j} \bigr)=\frac{1}{2^{j-1}}\bigl(W(n\Delta t)-W(0)\bigr). \end{aligned}$$
Let
$$\begin{aligned} R_{11}= \frac{t_{n}^{1}}{2}\bigl[f\bigl(0,\tilde{Y}_{n-1}(0, \omega)\bigr)+f\bigl(n\Delta t,\tilde {Y}_{n-1}(n\Delta t,\omega)\bigr) \bigr] +\frac{\Delta W(t_{n}^{1})}{2}\bigl[g(0)+g(n \Delta t)\bigr], \end{aligned}$$
then we obtain
$$R_{21}=\frac{1}{2}R_{11}+t_{n}^{2}T_{\frac{n\Delta t}{2}}f \biggl(\frac{n \Delta t}{2},\tilde{Y}_{n-1}\biggl(\frac{n\Delta t}{2},\omega \biggr)\biggr)+\Delta W\bigl(t_{n}^{2}\bigr)T_{\frac{n\Delta t}{2}}g \biggl(\frac{n\Delta t}{2}\biggr). $$
By the induction principle, it follows that
$$\begin{aligned} R_{j1}={}&\frac{1}{2}R_{j-1,1}+t_{n}^{j} \sum_{i=1}^{2^{j-2}} T_{n\Delta t-(2i-1)t_{n}^{j}}f \bigl((2i-1)t_{n}^{j},\tilde{Y}_{n-1} \bigl((2i-1)t_{n}^{j},\omega\bigr)\bigr) \\ &{}+\Delta W\bigl(t_{n}^{j}\bigr)\sum _{i=1}^{2^{j-2}} T_{n\Delta t-(2i-1)t_{n}^{j}}g\bigl((2i-1)t_{n}^{j} \bigr),\quad j=2,3,\ldots. \end{aligned}$$
Utilizing the extrapolation method, we obtain the element
$$ R_{jk}=\frac{4^{k-1}R_{j,k-1}-R_{j-1,k-1}}{4^{k-1}-1},\quad k=2,\ldots,j. $$
(20)
For any presupposed error tolerance \(\varepsilon\in[0, \delta]\), if the following inequality holds:
$$\|R_{j,j-1}-R_{jj}\|\leq\varepsilon, $$
then the computation ends and \(R_{jj}\) is taken as the approximation of (19). That is,
$$\tilde{Y}_{n}(t,\omega)=R_{jj}. $$
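The deterministic backbone of this recursion — trapezoid halving for \(R_{j1}\), the extrapolation (20), and the stopping rule \(\|R_{j,j-1}-R_{jj}\|\leq\varepsilon\) — can be sketched as follows. In this sketch the stochastic term is omitted; in the algorithm above it is handled analogously via the scaled increments \(\Delta W(t_{n}^{j})\), and the function name `romberg` is illustrative:

```python
def romberg(f, a, b, max_j=12, tol=1e-10):
    """Romberg table: trapezoid halving R_{j1} plus the extrapolation (20)."""
    h = b - a
    R = [[0.5 * h * (f(a) + f(b))]]                    # R_{11}
    for j in range(1, max_j):
        h *= 0.5
        # R_{j1}: halve the step, reuse the previous row, add new midpoints.
        mids = sum(f(a + (2 * i - 1) * h) for i in range(1, 2 ** (j - 1) + 1))
        row = [0.5 * R[-1][0] + h * mids]
        # R_{jk} = (4^{k-1} R_{j,k-1} - R_{j-1,k-1}) / (4^{k-1} - 1), k = 2..j
        for k in range(1, j + 1):
            row.append((4 ** k * row[k - 1] - R[-1][k - 1]) / (4 ** k - 1))
        R.append(row)
        if abs(row[-1] - row[-2]) <= tol:              # stopping rule
            break
    return R[-1][-1]

val = romberg(lambda x: x * x, 0.0, 1.0)               # exact value is 1/3
```

For polynomial integrands the extrapolated diagonal converges after very few halvings, which is exactly what makes the Romberg table attractive inside the iteration (19).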

The method applied to \(\bar{Y}(0,\omega)\), which is similar to the former, is described in detail as follows.

Let \(\Delta h:=\frac{-s'}{N'}\) and \(\bar{Y}_{n}(0,\omega)\) be the approximation of \(\bar{Y}(0,-n\Delta h,\omega)\), where
$$\bar{Y}(0,-n\Delta h,\omega)=\int^{0}_{-n\Delta h}T_{-r}f \bigl(r,\bar {Y}(r,\omega)\bigr)\,dr+(\omega)\int^{0}_{-n\Delta h}T_{-r}g(r) \,dW_{r}, $$
then we obtain the iterative relation
$$ \bar{Y}_{n}(0,\omega)=\int^{0}_{-n\Delta h}T_{-r}f \bigl(r,\bar{Y}_{n-1}(r,\omega )\bigr)\,dr+(\omega)\int ^{0}_{-n\Delta h}T_{-r}g(r)\,dW_{r}, $$
(21)
where \(n=1,\ldots,N'\) and \(\bar{Y}_{N'}(0,\omega)\) is the numerical approximation of \(\bar{Y}(0,\omega)\).
The sequence of time steps and the increment of the Brownian motion are defined in the form
$$\begin{aligned}& h_{n}^{1}=n\Delta h,\qquad h_{n}^{2}= \frac{1}{2}(n\Delta h),\qquad h_{n}^{3}= \frac{1}{4}(n\Delta h),\qquad\ldots,\qquad h_{n}^{j}= \frac{1}{2^{j-1}}(n\Delta h), \\& \Delta W\bigl(h_{n}^{1}\bigr)=W(0)-W(-n\Delta h),\qquad \Delta W\bigl(h_{n}^{2}\bigr)=\frac {1}{2} \bigl(W(0)-W(-n\Delta h)\bigr), \\& \Delta W\bigl(h_{n}^{3}\bigr)=\frac{1}{4} \bigl(W(0)-W(-n\Delta h)\bigr),\qquad\ldots, \qquad \Delta W\bigl(h_{n}^{j} \bigr)=\frac{1}{2^{j-1}}\bigl(W(0)-W(-n\Delta h)\bigr). \end{aligned}$$
Let
$$\begin{aligned} R'_{11}={}&\frac{h_{n}^{1}}{2}\bigl[f\bigl(0, \bar{Y}_{0}(0,\omega)\bigr)+T_{n\Delta h}f\bigl(-n\Delta h, \bar{Y}_{n-1}(-n\Delta h,\omega)\bigr)\bigr] \\ &{}+\frac{\Delta W(h_{n}^{1})}{2}\bigl[g(0)+T_{n\Delta h}g(-n \Delta h)\bigr], \end{aligned}$$
then we obtain
$$R'_{21}=\frac{1}{2}R'_{11}+h_{n}^{2}T_{\frac{n\Delta h}{2}}f \biggl(\frac{-n \Delta h}{2},\bar{Y}_{n-1}\biggl(\frac{-n\Delta h}{2},\omega \biggr)\biggr)+\Delta W\bigl(h_{n}^{2}\bigr)T_{\frac{n\Delta h}{2}}g \biggl(\frac{-n\Delta h}{2}\biggr). $$
By a similar method to (20), we can obtain \(R'_{j1}\), \(j=2,3,\ldots\) , by the induction principle and \(R'_{jk}\), \(k=2,\ldots,j\) by the extrapolation method. For any presupposed error tolerance \(\varepsilon' \in[0, \delta]\), if the following inequality holds:
$$\bigl\| R'_{j,j-1}-R'_{jj}\bigr\| \leq \varepsilon', $$
then the computation ends and \(R'_{jj}\) is taken as the approximation of (21). That is,
$$\bar{Y}_{n}(0,\omega)=R'_{jj}. $$

3.4 Convergence

The finite time interval \([0,t]\) is divided into N subintervals with the length Δt. The exact solution of SDE (1) in \([0,t]\) has the form
$$ \check{Y}(t,\omega)= \int^{N \Delta t}_{0}e^{-A(N\Delta t-r)}f \bigl(r,\check{Y}(r,\omega )\bigr)\,dr+(\omega)\int^{N \Delta t}_{0}e^{-A(N\Delta t-r)}g(r) \,dW_{r}. $$
(22)
The following result shows that the numerical approximation \(\tilde{Y}_{N}(t,\omega)\) to random periodic solutions is mean-square convergent to the exact solution (22) under some conditions.

Theorem 3.5

Assume that for any initial value \(x_{0}\in L^{2}(\Omega,P)\) the coefficients of SDE (1) satisfy the assumptions of Theorems 3.3 and 3.4. Then the numerical approximation \(\tilde{Y}_{N}(t,\omega)\) to random periodic solutions of SDE (1) by the random Romberg algorithm is mean-square convergent.

Proof

From the expression of \(\tilde{Y}_{N}(t,\omega)\), we obtain
$$\tilde{Y}_{N}(t,\omega)=\int^{N \Delta t}_{0}e^{-A(N\Delta t-r)}f \bigl(r,\tilde {Y}_{N-1}(r,\omega)\bigr)\,dr+(\omega)\int ^{N \Delta t}_{0}e^{-A(N\Delta t-r)}g(r)\,dW_{r}. $$
Then it implies that
$$E\bigl|\check{Y}(t,\omega)-\tilde{Y}_{N}(t,\omega)\bigr|^{2}=E \biggl[\int^{N \Delta t}_{0}e^{-A(N\Delta t-r)}\bigl[f\bigl(r, \check{Y}(r,\omega)\bigr)-f\bigl(r,\tilde {Y}_{N-1}(r,\omega)\bigr)\bigr] \,dr \biggr]^{2}:=I_{2}. $$
From the Cauchy-Schwarz inequality and the global Lipschitz condition (6) of f, we obtain
$$\begin{aligned} I_{2} &\leq\int^{N \Delta t}_{0}e^{-2A(N\Delta t-r)} \,dr\cdot\int^{N \Delta t}_{0} K_{1}^{2}E\bigl| \check{Y}(r,\omega)-\tilde{Y}_{N-1}(r,\omega)\bigr|^{2}\,dr \\ &\leq K_{6}K_{1}^{2}\int^{N \Delta t}_{0} E\bigl|\check{Y}(r,\omega)-\tilde {Y}_{N-1}(r,\omega)\bigr|^{2}\,dr, \end{aligned}$$
where
$$K_{6}=\int^{t}_{0}e^{-2A(t-r)}\,dr. $$
Using the fact that \((a+b)^{2}\leq2a^{2}+2b^{2}\) for \(a,b \in R\), we obtain
$$I_{2}\leq2K_{6}K_{1}^{2}\int ^{N \Delta t}_{0} \bigl[E\bigl|\check{Y}(r,\omega)-\tilde {Y}_{N}(r,\omega)\bigr|^{2}+E\bigl|\tilde{Y}_{N}(r,\omega)- \tilde{Y}_{N-1}(r,\omega)\bigr|^{2}\bigr]\,dr. $$
By the random Romberg algorithm in Section 3.3 and the mean-square uniform asymptotic stability, we obtain
$$I_{2}\leq2K_{6}\delta^{2}K_{1}^{2}t+2K_{6}K_{1}^{2} \int^{N \Delta t}_{0} E\bigl|\check {Y}(r,\omega)- \tilde{Y}_{N}(r,\omega)\bigr|^{2}\,dr. $$
It follows from the Gronwall inequality that there exists a number \(M_{4}\) such that
$$\bigl\| \check{Y}(t,\omega)-\tilde{Y}_{N}(t,\omega)\bigr\| \leq M_{4}, $$
where
$$M_{4}=\sqrt{\frac{2K_{6}K_{1}^{2} t^{3}}{N^{2}}\cdot\exp\bigl(2K_{6}K_{1}^{2}t \bigr)}. $$
Since \(M_{4}\) tends to zero as \(N\rightarrow+\infty\), we obtain
$$\lim_{N\rightarrow+\infty}\bigl\| \check{Y}(t,\omega)-\tilde{Y}_{N}(t, \omega)\bigr\| =0. $$
Therefore the numerical approximation is mean-square convergent. The proof is complete. □
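The bound \(M_{4}\) decays like \(1/N\) for fixed t, which is exactly what drives the limit in the proof. A quick numerical check of this scaling, with illustrative placeholder constants (\(K_{6}\), \(K_{1}\), and t below are not values derived in the paper):

```python
import math

def error_bound(N, K6=1.0, K1=0.5, t=35.0):
    """M_4 = sqrt(2*K6*K1^2*t^3/N^2 * exp(2*K6*K1^2*t)).

    K6, K1, t are illustrative constants, not values from the paper.
    """
    c = 2.0 * K6 * K1 ** 2
    return math.sqrt(c * t ** 3 / N ** 2 * math.exp(c * t))

# each tenfold refinement of N shrinks the bound tenfold (M_4 is O(1/N))
bounds = [error_bound(N) for N in (350, 3500, 35000)]
print(bounds)
```

The absolute size of the bound depends strongly on the constants through the exponential factor, but the \(1/N\) rate is independent of them.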

4 Numerical experiments

We provide a numerical example to illustrate the effectiveness of the algorithm used to simulate random periodic solutions of dissipative SDEs. Working in one dimension, consider the following SDE:
$$ dX_{t}=-X_{t}\,dt+\biggl(1+\frac{X_{t}}{2} \cos t\biggr)\,dt+\sin t\,dW_{t},\qquad X(0)=x_{0}, $$
(23)
that is,
$$A=1,\qquad f(t,X_{t})=1+\frac{X_{t}}{2}\cos t,\qquad g(t)=\sin t . $$
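For this drift, \(|f(t,x)-f(t,y)| = \frac{1}{2}|\cos t|\,|x-y| \leq \frac{1}{2}|x-y|\), so the global Lipschitz condition (6) holds with constant \(K_{1}=\frac{1}{2}\). A quick numerical spot check of this bound (the sampling ranges are arbitrary):

```python
import math
import random

def f(t, x):
    """Drift term of SDE (23): f(t, x) = 1 + (x/2) cos t."""
    return 1.0 + 0.5 * x * math.cos(t)

random.seed(0)
K1 = 0.5
for _ in range(1000):
    t = random.uniform(-10.0, 10.0)
    x, y = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    # |f(t,x) - f(t,y)| = 0.5*|cos t|*|x - y| <= K1*|x - y|
    assert abs(f(t, x) - f(t, y)) <= K1 * abs(x - y) + 1e-12
print("Lipschitz bound holds on all samples")
```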
It follows from Theorem 3.3 that SDE (23) has random periodic solutions with period \(\tau=2\pi\). Choosing \(\delta=0.01\), we take \(s'=-30\) so that inequality (17) holds. One can check that the conditions of Theorem 3.4 are satisfied, so the random periodic solution of (23) is mean-square uniformly asymptotically stable; therefore the numerical approximations stay within the prescribed initial error tolerance. To obtain the Brownian trajectory for negative time, we construct the positive-time path and reflect it through the origin. We run the simulation with the following mesh [2, 11, 12]:
$$s'=-30,\qquad t=35,\qquad\Delta t=0.1,\qquad N=350 $$
to construct a random periodic solution with the starting point \(x_{0}=0.1\). We generate the Brownian trajectory as follows:
$$W_{0}=0,\qquad W_{(i+1)\Delta t}=W_{i\Delta t}+\psi_{i+1}, $$
where
$$\psi_{i}\sim N(0,\Delta t),\quad i=1,2,\ldots,N, $$
that is, each increment is Gaussian with mean zero and variance Δt (standard deviation \(\sqrt{\Delta t}\)).
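The path construction above can be sketched as follows. The grid values are those of the experiment; the reflection convention \(W(-t) := -W(t)\) is one reading of "reflect it against point zero" and is labeled as an assumption in the code:

```python
import numpy as np

rng = np.random.default_rng(42)

dt = 0.1
n = 350                                  # N steps, covering t in [0, 35]

# W_0 = 0, W_{(i+1)dt} = W_{i dt} + psi_{i+1}, with psi_i ~ N(0, dt)
psi = rng.normal(0.0, np.sqrt(dt), size=n)
W_pos = np.concatenate(([0.0], np.cumsum(psi)))      # W on {0, dt, ..., 35}

# negative-time path by point reflection through the origin:
# W(-t) := -W(t)  (assumed reading of "reflect it against point zero")
W_neg = -W_pos[n:0:-1]                               # W on {-35, ..., -dt}
times = np.arange(-n, n + 1) * dt
W = np.concatenate((W_neg, W_pos))                   # W on [-35, 35]

print(W[n])  # W(0) = 0.0
```

Under this convention the negative-time path reuses the positive-time increments rather than drawing fresh ones, which keeps the pullback simulation on a single realization of the driving noise.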

Similarly, we can choose another prescribed initial error tolerance \(\delta=0.011\), which determines \(s'=-35\) and \(x_{0}=0.15\); these also satisfy the conditions of Theorem 3.3.

We then obtain the graphs of the numerical approximations to the random periodic solutions on the time interval \([0,35]\), shown in Figures 1 and 2, respectively. As we see, random periodic phenomena with period \(\tau=2\pi\) appear. Unlike the deterministic case, a random periodic solution started from the same point differs slightly from one period to the next. Nevertheless, the approximate shape of the graphs of the random periodic solutions with different starting points is preserved over each period. These phenomena are caused by random oscillation in the phase space, driven by the noise constantly pumped into the system.
Figure 1

Random periodic solutions with the starting point \(\pmb{x_{0}=0.1}\) .

Figure 2

Random periodic solutions with the starting point \(\pmb{x_{0}=0.15}\) .

To check the convergence of the numerical approximations, we plot the curves from different starting points at time \(t=0\) in the same graph. As Figure 3 shows (starting points \(x_{0}=0.1\) and \(x_{0}=0.15\)), the trajectories become asymptotically close as time progresses. Figure 4 (starting points \(x_{0}=0.12\) and \(x_{0}=0.08\)) likewise shows that, whatever starting points we choose, the numerical random periodic solutions approach the exact trajectories as time moves forward. These trajectories depend on \(\omega\in\Omega\); that is, random periodic solutions are stochastic processes and differ for each \(\omega\in\Omega\). These results confirm that the numerical method is efficient.
Figure 3

Convergence of random periodic solutions with different starting points.

Figure 4

Convergence of random periodic solutions with different starting points.
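The pathwise contraction seen in Figures 3 and 4 can be reproduced with a plain Euler-Maruyama discretization of (23) driven by one common Brownian path. This is an illustrative sketch only: the paper's algorithm additionally uses the pullback and random Romberg machinery, and the scheme and step size below are our choices.

```python
import numpy as np

def euler_maruyama(x0, n_steps, dt, dW, t0=0.0):
    """Euler-Maruyama for dX = (-X + 1 + (X/2) cos t) dt + sin t dW_t."""
    x, t = x0, t0
    for i in range(n_steps):
        drift = -x + 1.0 + 0.5 * x * np.cos(t)
        x = x + drift * dt + np.sin(t) * dW[i]
        t += dt
    return x

rng = np.random.default_rng(1)
dt, n = 0.1, 350                            # t in [0, 35], as in the experiment
dW = rng.normal(0.0, np.sqrt(dt), size=n)   # one common Brownian path

xa = euler_maruyama(0.10, n, dt, dW)
xb = euler_maruyama(0.15, n, dt, dW)
# the noise is additive, so the gap evolves deterministically:
# e_{k+1} = e_k * (1 + dt*(-1 + 0.5*cos t_k)), a product of factors < 1
print(abs(xa - xb))  # far smaller than the initial gap 0.05
```

Because the diffusion coefficient sin t does not depend on x, the difference of two solutions on the same path satisfies a contracting linear recursion, which is the discrete counterpart of the dissipativity that Theorem 3.4 exploits.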

5 Conclusion

Finally, we summarize conclusions and future work. In this paper we discussed the possibility of mean-square numerical approximation to random periodic solutions of SDEs and presented the random Romberg algorithm in detail. The results show that the method is effective and general; the numerical experiments match the theoretical analysis. In future work we will consider simpler and more practical methods for simulating a broader class of SDEs whose diffusion coefficient is a function of both t and x.

Declarations

Acknowledgements

The author would like to express his gratitude to Prof. Jialin Hong for his helpful discussion. This work is supported by NSFC (Nos. 11021101, 11290142, and 91130003).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
College of Computer and Information Science, Fujian Agriculture and Forestry University
(2)
Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences

References

  1. Mao, X: Stochastic Differential Equations and Applications, 2nd edn. Ellis Horwood, Chichester (2008)
  2. Milstein, G: Numerical Integration of Stochastic Differential Equations. Kluwer Academic, Dordrecht (1995)
  3. Liu, B, Han, Y, Sun, X: Square-mean almost periodic solutions for a class of stochastic integro-differential equations. J. Jilin Univ. Sci. Ed. 51(3), 393-397 (2013)
  4. Luo, Y: Random periodic solutions of stochastic functional differential equations. PhD thesis, Loughborough University, Department of Mathematical Sciences (2014)
  5. Feng, C, Zhao, H, Zhou, B: Pathwise random periodic solutions of stochastic differential equations. J. Differ. Equ. 251, 119-149 (2011)
  6. Hong, J, Liu, Y: Numerical simulation of periodic and quasiperiodic solutions for nonautonomous Hamiltonian systems via the scheme preserving weak invariance. Comput. Phys. Commun. 131, 86-94 (2000)
  7. Liu, Y, Hong, J: Numerical method of almost periodic solutions for Lotka-Volterra system. J. Tsinghua Univ. (Sci. Technol.) 40(5), 111-113 (2000)
  8. Yevik, A, Zhao, H: Numerical approximations to the stationary solutions of stochastic differential equations. SIAM J. Numer. Anal. 49(4), 1397-1416 (2011)
  9. Arnold, L: Random Dynamical Systems, 2nd edn. Springer, Berlin (2003)
  10. Khasminskii, R: Stochastic Stability of Differential Equations, 2nd edn. Springer, Berlin (2011)
  11. Wang, P: A-stable Runge-Kutta methods for stiff stochastic differential equations with multiplicative noise. Comput. Appl. Math. 34, 773-792 (2015)
  12. Wang, T: Optimal point-wise error estimate of a compact difference scheme for the coupled Gross-Pitaevskii equations in one dimension. J. Sci. Comput. 59(1), 158-186 (2014)

Copyright

© Zhan 2015