
The stochastic θ method for stationary distribution of stochastic differential equations with Markovian switching

Abstract

In this paper, the stationary distribution of stochastic differential equations (SDEs) with Markovian switching is approximated by the numerical solutions generated by the stochastic θ method. We first prove the existence and uniqueness of the stationary distribution of the numerical solutions. Then the convergence of the numerical stationary distribution to the underlying one is discussed. Numerical simulations are conducted to support the theoretical results.

Introduction

In recent years, much research has been devoted to the stationary distributions of SDEs. Since SDEs can rarely be solved explicitly, studying the stationary distributions of their numerical solutions has become essential, and the stationary distributions of numerical solutions are widely used to approximate those of the underlying equations. Two books are worth mentioning here: [9] presents the theory of numerical solutions of SDEs, while [13] introduces the basic theory of SDEs in detail. There are already plenty of papers in this area. In [20], the almost sure and moment exponential stability of Euler–Maruyama discretizations for hybrid stochastic differential equations was studied. The Milstein method was discussed in [28]. The truncated Euler–Maruyama method was used to approximate stationary distributions of SDEs with and without Markovian switching in [11]. The numerical stationary distribution and its convergence obtained by the semi-implicit Euler–Maruyama method were studied in [12]. The stationary distribution of the stochastic θ method for nonlinear SDEs was then discussed in [8]. Building on that work, we consider equations that, taken individually, need not have stationary distributions but that, after a period of switching, still possess a unique stationary distribution.

When sudden events such as earthquakes, debris flows, and other natural disasters affect the rate of population growth, plain SDEs are inadequate to model these phenomena. Therefore, SDEs with Markovian switching have been applied to handle such problems. So far, various dynamical properties of SDEs with Markovian switching, such as moment boundedness, stability, and ergodicity, have been investigated extensively, see for example [2, 4, 6, 14, 18, 19, 21, 24–26, 32, 33, 36] and the references therein. There are two monographs providing a detailed introduction to SDEs with Markovian switching [15, 29].

In a series of papers [3, 7, 16, 17, 27, 30, 31], approximations of stationary distributions of SDEs with different types of Markovian switching were investigated by the Euler–Maruyama method. Both the drift and diffusion coefficients of the SDEs in these papers satisfy the global Lipschitz condition. Although the classical Euler–Maruyama method is easy to compute and implement, the numerical solutions of SDEs with super-linearly growing coefficients may diverge to infinity in finite time. To overcome this drawback, Li used the backward Euler–Maruyama method to establish the existence of a numerical invariant measure under a one-sided Lipschitz condition for SDEs with Markovian switching, see [10].

The stochastic θ method is an extension of both the Euler–Maruyama method and the backward Euler–Maruyama method, and its analysis is more involved. In [23], the stability of the stochastic θ method with non-random variable step sizes for bilinear, nonautonomous, homogeneous test equations was investigated. Its ability to preserve the almost sure and mean square exponential stability of hybrid SDEs was discussed in [5] and [34]. In [22], the asymptotic boundedness of the stochastic θ method was studied. To the best of the authors' knowledge, however, the stochastic θ method has not yet been used to study the stationary distributions of SDEs with Markovian switching. In this paper we investigate the numerical stationary distribution generated by the stochastic θ method.

Throughout the paper, we mainly study stationary distributions, and the required assumptions on the coefficients depend on the choice of θ. The rest of this paper is organized as follows. Section 2 introduces mathematical preliminaries. We obtain the existence and uniqueness of the stationary distribution of the numerical solutions in Sect. 3. Section 4 presents numerical simulations to verify our theoretical results. Section 5 concludes the paper.

We highlight some main contributions of this paper as follows:

  • When \(\theta \in [0, 1/2)\), both the drift and diffusion coefficients satisfy the global Lipschitz condition; when \(\theta \in [1/2, 1]\), the drift coefficient only needs to satisfy a one-sided Lipschitz condition. Under these assumptions, it is difficult to prove the existence of stationary distributions of SDEs with Markovian switching by the Euler–Maruyama method or the backward Euler–Maruyama method.

  • We study the stationary distributions of SDEs with Markovian switching by the stochastic θ method, which generalizes both the Euler–Maruyama method and the backward Euler–Maruyama method.

  • Compared to the classical Euler–Maruyama method, the numerical results of the stochastic θ method are more accurate. Compared to the backward Euler–Maruyama method, the stochastic θ method can effectively save computational time and cost. In conclusion, the stochastic θ method is a more efficient numerical method to approximate the stationary distribution of the exact solutions.

Let us begin to develop our new theory on the stationary distributions of SDEs with Markovian switching.

Mathematical preliminaries

In this paper, let \((\Omega , \mathcal{F}, \mathbb {P})\) be a complete probability space with filtration \(\{ \mathcal{F}_{t} \} _{t \ge 0}\) satisfying the usual conditions that it is right-continuous and increasing, while \(\mathcal{F}_{0}\) contains all \(\mathbb {P}\)-null sets. Let \(|\cdot |\) denote the Euclidean norm in \(\mathbb {R}^{n}\). The transpose of a vector or matrix M is denoted by \(M^{T}\) and the trace norm of a matrix M is denoted by \(|M| = \sqrt{\operatorname{trace}(M^{T} M)}\).

To keep notation simple, let \(B(t)=(B_{t}^{1},\ldots,B_{t}^{d})^{T}\), \(t\geq 0\), be a d-dimensional Brownian motion. Let \(R(t)\), \(t\geq 0\), denote a continuous-time Markov chain on the probability space taking values in a finite state space \(S=\{1,2,\ldots,N\}\) with the generator \(\Gamma =(\gamma _{ij})_{N\times N}\) given by

$$ P\bigl\{ R(t+\Delta )=j|R(t)=i\bigr\} = \textstyle\begin{cases} \gamma _{ij}\Delta +o(\Delta ), & i\neq j, \\ 1+\gamma _{ij}\Delta +o(\Delta ), & i=j, \end{cases} $$

where \(\Delta >0\). Here, \(\gamma _{ij}\geq 0\) is the transition rate from i to j if \(i\neq j\), while

$$ \gamma _{ii}=-\sum_{j\neq i} \gamma _{ij}. $$

We assume that the generator \(Q=\Gamma \) is irreducible and conservative (see [1]). Then the Markov chain \(\{R(t)\}_{t\geq 0}\) has a unique stationary distribution \(\eta :=(\eta _{1},\eta _{2},\ldots , \eta _{N})\gg 0 \in \mathbb {R}^{1\times N}\), which can be determined by solving the linear equation

$$ \eta Q=0, \quad \text{subject to } \sum_{i=1}^{N} \eta _{i}=1. $$
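As an illustration, the linear system \(\eta Q=0\), \(\sum_{i=1}^{N}\eta _{i}=1\) can be solved numerically. The sketch below (with a hypothetical two-state generator, for illustration only) appends the normalization constraint as an extra row and uses a least-squares solve:

```python
import numpy as np

def stationary_distribution(Q):
    """Solve eta Q = 0 subject to sum(eta) = 1 for an irreducible generator Q.

    eta Q = 0 is equivalent to Q^T eta^T = 0; we stack the normalization
    constraint as one extra row and solve in the least-squares sense.
    """
    N = Q.shape[0]
    A = np.vstack([Q.T, np.ones(N)])
    b = np.zeros(N + 1)
    b[-1] = 1.0
    eta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return eta

# Hypothetical two-state generator: rate 1 out of state 1, rate 2 out of state 2.
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
eta = stationary_distribution(Q)  # approximately (2/3, 1/3)
```

For this generator, balancing \(\eta _{1}\gamma _{12}=\eta _{2}\gamma _{21}\) gives \(\eta =(2/3,1/3)\), which the solver reproduces.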

We consider the n-dimensional stochastic differential equation with Markovian switching (SDEwMS) of the Itô type

$$ dX(t) = f\bigl(X(t),R(t)\bigr)\,dt + g\bigl(X(t),R(t)\bigr)\,dB(t),\quad t\geq 0, $$
(2.1)

with the initial value \(X(0) = x_{0}\in \mathbb {R}^{n}\) and \(R(0)=i\in S\), where \(f:\mathbb {R}^{n}\times S \rightarrow \mathbb {R}^{n}\) and \(g:\mathbb {R}^{n}\times S \rightarrow \mathbb {R}^{n\times d}\).

Given a stepsize \(h>0\), let \(R_{k}=R(kh)\), \(k\geq 0\); then \(\{R_{k}, k=0,1,\ldots \}\) is a discrete-time Markov chain with one-step transition matrix \(P(h)=(P_{ij}(h))_{N\times N}=\exp (hQ)\), where Q is the generator. The chain \(\{R_{k}, k=0,1,\ldots \}\) can be simulated as follows: let \(R_{0}=i\) and generate a pseudorandom number \(\tau _{1}\) from the uniform \((0,1)\) distribution. Define

$$ R_{1}= \textstyle\begin{cases} i_{1} ,& \text{if } i_{1}\in S-\{N\} \text{ such that } \sum_{j=1}^{i_{1}-1}P_{ij}(h) \leq \tau _{1}< \sum_{j=1}^{i_{1}}P_{ij}(h), \\ N, &\text{if } \sum_{j=1}^{N-1}P_{ij}(h) \leq \tau _{1}, \end{cases} $$

where we adopt the convention \(\sum_{j=1}^{0}P_{ij}(h)=0\). In other words, the probability of state s being chosen is \(\mathbb {P}(R_{1}=s)=P_{is}(h)\). Generally, having computed \(R_{0}, R_{1},\ldots ,R_{k-1}\), generate a pseudorandom number \(\tau _{k}\) from the uniform \((0,1)\) distribution and define \(R_{k}\) by

$$ R_{k}= \textstyle\begin{cases} i_{k} ,& \text{if } i_{k}\in S-\{N\} \text{ such that } \sum_{j=1}^{i_{k}-1}P_{R_{k-1}j}(h) \leq \tau _{k}< \sum_{j=1}^{i_{k}}P_{R_{k-1}j}(h), \\ N, &\text{if } \sum_{j=1}^{N-1}P_{R_{k-1}j}(h) \leq \tau _{k}. \end{cases} $$
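The recursive construction of \(\{R_{k}\}\) above can be sketched in code. The following is a minimal illustration (states indexed from 0, a truncated Taylor series for \(\exp (hQ)\), and a hypothetical two-state generator); it inverts the cumulative row sums of \(P(h)\) exactly as in the display:

```python
import numpy as np

def transition_matrix(Q, h, terms=30):
    """P(h) = exp(h*Q) via a truncated Taylor series (adequate for small h*||Q||)."""
    P = np.eye(Q.shape[0])
    term = np.eye(Q.shape[0])
    for k in range(1, terms):
        term = term @ (h * Q) / k
        P = P + term
    return P

def simulate_chain(Q, h, n_steps, r0, rng):
    """Generate R_0, ..., R_{n_steps} by inverting the CDF of each row of P(h)."""
    cdf = np.cumsum(transition_matrix(Q, h), axis=1)
    path = [r0]
    for _ in range(n_steps):
        tau = rng.random()  # pseudorandom number from the uniform (0,1) distribution
        # smallest state i with cdf[i] > tau; the clamp guards against rounding at 1.0
        path.append(min(int(np.searchsorted(cdf[path[-1]], tau, side="right")),
                        Q.shape[0] - 1))
    return path

rng = np.random.default_rng(0)
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])  # hypothetical generator, for illustration only
path = simulate_chain(Q, h=0.1, n_steps=1000, r0=0, rng=rng)
```

Since each row of Q sums to zero, each row of \(P(h)\) sums to one, so the cumulative sums form a valid CDF for sampling.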

This procedure can be carried out independently to obtain more trajectories. Now we can define the stochastic θ method for SDEs with Markovian switching.

The stochastic θ method to SDEwMS (2.1) is defined by

$$ X_{k+1} = X_{k} + \theta f(X_{k+1},R_{k})h +(1-\theta ) f(X_{k},R_{k})h+ g(X_{k},R_{k}) \Delta B_{k},\quad k\geq 0, $$
(2.2)

where \(\Delta B_{k} = B(t_{k+1}) - B(t_{k})\) is the Brownian motion increment and, given a step size \(h>0\), \(t_{k} = kh\) for \(k \geq 0\) and \(R_{k}=R(t_{k})\). Set \(Z_{k}=(X_{k},R_{k})\). Then \(\{Z_{k}\} _{k \geq 0}\) is a Markov process. We will use the notation \(Z_{k}^{x,i}=(X_{k}^{x,i},R_{k}^{i})\) to highlight the initial values. Let \(P_{k}((x,i),\cdot \times \cdot )\) be the probability measure induced by \(Z_{k}^{x,i}\), namely

$$ P_{k}\bigl((x,i),A\times B\bigr)=P\bigl\{ Z_{k}^{x,i} \in A\times B\bigr\} , \quad \forall A \in \mathscr{B} {\bigl(\mathbb {R}^{n}\bigr)}, B\subset S. $$

Denote the family of all probability measures on \(\mathbb {R}^{n}\times S\) by \(\mathcal{P}(\mathbb {R}^{n}\times S)\). Denote by \(\mathbb {L}\) the family of mappings \(F: \mathbb {R}^{n}\times S \rightarrow \mathbb {R}\) satisfying

$$ \bigl\vert F(x,j) - F(y,l) \bigr\vert \leq \vert x-y \vert + \vert j-l \vert \quad \text{and}\quad \bigl\vert F(x,j) \bigr\vert \leq 1, $$

for any \(x,y \in \mathbb {R}^{n}\), \(j,l\in S\). For \(\mathbb {P}_{1},\mathbb {P}_{2} \in \mathcal{P}(\mathbb {R}^{n}\times S)\), define metric \(d_{\mathbb {L}}\) by

$$ d_{\mathbb {L}}(\mathbb {P}_{1},\mathbb {P}_{2}) = \sup _{F \in \mathbb {L}} \Biggl\vert \sum_{j=1}^{N} \int _{\mathbb {R}^{n}} F(x,j)\mathbb {P}_{1}(dx,j) - \sum _{j=1}^{N} \int _{\mathbb {R}^{n}} F(x,j)\mathbb {P}_{2}(dx,j) \Biggr\vert . $$

The weak convergence of probability measures can be described in terms of the metric \(d_{\mathbb {L}}\). That is, a sequence of probability measures \(\{\mathbb {P}_{k}\}_{k \geq 1}\) in \(\mathcal{P}(\mathbb {R}^{n}\times S)\) converges weakly to a probability measure \(\mathbb {P}\in \mathcal{P}(\mathbb {R}^{n}\times S)\) if and only if

$$ \lim_{k \rightarrow \infty } d_{\mathbb {L}}(\mathbb {P}_{k}, \mathbb {P}) = 0. $$

Then we define the stationary distribution for \(\{X_{k}\}_{k \geq 0}\) by using the concept of weak convergence.

Definition 2.1

For any initial value \((x,i) \in \mathbb {R}^{n}\times S\) and a given step size \(h > 0\), the numerical solution \(Z_{k}=(X_{k},R_{k})\) is said to have a stationary distribution \(\Pi _{h}(\cdot \times \cdot ) \in \mathcal{P}(\mathbb {R}^{n}\times S)\) if the k-step transition probability measure \(\mathbb {P}_{k}((x,i),\cdot \times \cdot )\) converges weakly to \(\Pi _{h}(\cdot \times \cdot )\) as \(k \rightarrow \infty \) for every \((x,i) \in \mathbb {R}^{n}\times S\), that is,

$$ \lim_{k \rightarrow \infty } \Bigl( \sup_{F \in \mathbb {L}} \bigl\vert \mathbb {E}\bigl(F\bigl(Z_{k}^{x,i}\bigr)\bigr) - E_{\Pi _{h}} (F) \bigr\vert \Bigr) = 0, $$

where

$$ E_{\Pi _{h}} (F) = \sum_{j=1}^{N} \int _{\mathbb {R}^{n}} F(x,j) \Pi _{h}(dx,j). $$

In [31], the authors presented a very general theory on the existence and uniqueness of the stationary distribution for any one step numerical methods. We adapt it here and state the theory for the stochastic θ method as follows.

Theorem 2.2

Assume that the following three requirements are fulfilled.

  • For any \(\varepsilon >0\) and \((x,i) \in \mathbb {R}^{n}\times S\), there exists a constant \(R = R(\varepsilon ,x,i) > 0\) such that

    $$ \mathbb {P}\bigl( \bigl\vert X_{k}^{x,i} \bigr\vert \geq R\bigr) < \varepsilon \quad \textit{for any } k \geq 0. $$
    (2.3)
  • For any \(\varepsilon >0\) and any compact subset K of \(\mathbb {R}^{n}\), there exists a positive integer \(k^{*} = k^{*}(\varepsilon ,K)\) such that

    $$ \mathbb {P}\bigl( \bigl\vert X_{k}^{x,i} - X_{k}^{y,i} \bigr\vert < \varepsilon \bigr) \geq 1 - \varepsilon \quad \textit{for any } k \geq k^{*} \textit{ and any } (x,y,i) \in K \times K \times S. $$
    (2.4)
  • For any \(\varepsilon >0\), \(n \geq 1\) and any compact subset K of \(\mathbb {R}^{n}\), there exists \(R = R(\varepsilon ,n,K) > 0\) such that

    $$ \mathbb {P}\Bigl(\sup_{0 \leq k \leq n} \bigl\vert X_{k}^{x,i} \bigr\vert \leq R \Bigr) > 1 - \varepsilon \quad \textit{for any } (x,i) \in K\times S. $$
    (2.5)

Then the numerical solutions \(\{X_{k}\}_{k \geq 0}\) generated by the stochastic θ method have a unique stationary distribution \(\Pi _{h}\).

Remark 2.3

Although the theorem is very general, its conditions are stated in terms of probability and are not easy to check directly. In this paper, we give some coefficient-related assumptions and prove the existence and uniqueness of the stationary distribution of the solutions generated by the stochastic θ method under those assumptions.

Now, we present the assumptions on the drift and diffusion coefficients.

Assumption 2.4

Assume that there exists a constant \(\mu _{i}\) such that, for any \(x,y \in \mathbb {R}^{n} \) and \(i\in S \),

$$ \bigl\langle x-y, f(x,i) - f(y,i) \bigr\rangle \leq \mu _{i} \vert x-y \vert ^{2}. $$

Assumption 2.5

Assume that there exists a constant \(\sigma >0\) such that, for any \(x,y \in \mathbb {R}^{n}\) and \(i\in S\),

$$ \bigl\vert g(x,i)-g(y,i) \bigr\vert ^{2}\leq \sigma \vert x-y \vert ^{2}. $$

We give two new assumptions as follows.

Assumption 2.6

There exist constants \(\mu _{i}\) and \(a>0\) such that, for any \(x \in \mathbb {R}^{n}\) and \(i\in S\),

$$ \bigl\langle x,f(x,i)\bigr\rangle \leq \mu _{i} \vert x \vert ^{2}+a. $$

Assumption 2.7

There exist positive constants σ and b such that, for any \(x\in \mathbb{R}^{n}\) and \(i\in S\),

$$ \bigl\vert g(x,i) \bigr\vert ^{2} \leq \sigma \vert x \vert ^{2}+b. $$
(2.6)

Lemma 2.8

Let Assumptions 2.4 and 2.5 hold and \(\max_{i\in S}\theta h\mu _{i}<1\). Then the stochastic θ method (2.2) is well defined.

Proof

It is useful to write (2.2) as

$$ X_{k+1}-\theta f(X_{k+1},R_{k})h = X_{k} +(1-\theta ) f(X_{k},R_{k})h+ g(X_{k},R_{k}) \Delta B_{k}. $$

For fixed \(i\in S\), define the function \(G:\mathbb{R}^{n} \rightarrow \mathbb{R}^{n}\) by \(G(x)=x-\theta h f(x,i)\). Since

$$\begin{aligned} \bigl\langle x-y,G(x)-G(y)\bigr\rangle &= \bigl\langle x-y,x-y-\theta h \bigl(f(x,i)-f(y,i)\bigr) \bigr\rangle \\ &\geq \vert x-y \vert ^{2}-\theta h\mu _{i} \vert x-y \vert ^{2} \\ &=(1-\theta h\mu _{i}) \vert x-y \vert ^{2}>0, \end{aligned}$$

whenever \(\max_{i\in S}\theta h\mu _{i}<1\), the function G is uniformly monotone and hence has an inverse \(G^{-1}:\mathbb{R}^{n} \rightarrow \mathbb{R}^{n}\). The stochastic θ method (2.2) can therefore be written as

$$ X_{k+1} = G^{-1}\bigl(X_{k} +(1-\theta ) f(X_{k},R_{k})h+ g(X_{k},R_{k}) \Delta B_{k}\bigr). $$
(2.7)

Thus, the stochastic θ method (2.2) is well defined. □
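In practice \(G^{-1}\) is rarely available in closed form, and each step of (2.2) is resolved iteratively. The sketch below uses a fixed-point iteration, which contracts when \(\theta h\) times the Lipschitz constant of \(f(\cdot ,i)\) is less than 1 (a Newton solve would be preferred for stiff drifts); the linear regime-dependent coefficients are hypothetical, chosen only so the result can be checked against a closed-form solution:

```python
import numpy as np

def theta_step(x, r, h, theta, f, g, dB, tol=1e-12, max_iter=200):
    """One step of scheme (2.2): solve
        y = x + theta*h*f(y, r) + (1 - theta)*h*f(x, r) + g(x, r)*dB
    for y by fixed-point iteration."""
    rhs = x + (1.0 - theta) * h * f(x, r) + g(x, r) * dB  # explicit part
    y = x.copy()  # initial guess
    for _ in range(max_iter):
        y_new = rhs + theta * h * f(y, r)
        if np.max(np.abs(y_new - y)) < tol:
            break
        y = y_new
    return y_new

# Hypothetical linear coefficients f(x, r) = -a_r * x, g(x, r) = b_r (illustration only).
a = np.array([1.0, 3.0])
b = np.array([0.5, 0.2])
f = lambda x, r: -a[r] * x
g = lambda x, r: b[r]

rng = np.random.default_rng(1)
h, theta = 0.01, 0.75
x, r = np.array([1.0]), 0
dB = rng.normal(0.0, np.sqrt(h))  # Brownian increment over one step
x_next = theta_step(x, r, h, theta, f, g, dB)
```

For this linear drift the implicit equation has the closed form \(X_{k+1} = (X_{k} + (1-\theta )h f(X_{k},r) + g(X_{k},r)\Delta B_{k})/(1+\theta h a_{r})\), which the iteration reproduces.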

Lemma 2.9

For any \(x\in\mathbb{R}^{n}\), any Borel set \(A\subset \mathbb{R}^{n}\), and any \(j\in S\),

$$ \mathbb{P}\bigl((X_{k+1},R_{k+1})\in A\times \{j\}|X_{k}=x,R_{k}=i\bigr)=\mathbb{P}\bigl((X_{1}, R_{1}) \in A\times \{j\}|X_{0}=x,R_{0}=i\bigr). $$
(2.8)

Proof

If \(X_{k} = x\), \(R_{k}=i\) and \(X_{0} = x\), \(R_{0}=i\), by (2.2) we see

$$ X_{k+1} - \theta f(X_{k+1},R_{k})h = x + (1- \theta )f(x,i)h+ g(x,i) \Delta B_{k} $$

and

$$ X_{1} - \theta f(X_{1},R_{0} )h = x + (1- \theta )f(x,i)h+ g(x,i) \Delta B_{0}. $$

Because \(\Delta B_{k}\) and \(\Delta B_{0}\) are identical in probability law, comparing the two equations above shows that \(X_{k+1} - \theta f(X_{k+1},R_{k})h\) and \(X_{1} - \theta f(X_{1},R_{0})h\) have the same probability law. Then, due to Lemma 2.8, \((X_{k+1},R_{k+1})\) and \((X_{1},R_{1})\) are identical in probability law under \(X_{k} = x\), \(R_{k}=i\) and \(X_{0} = x\), \(R_{0}=i\). Therefore, the assertion holds. For any \(x \in \mathbb {R}^{n}\), \(i\in S\), and any Borel set \(A \subset \mathbb {R}^{n}\), define

$$ \mathbb {P}_{m,k}\bigl((x,i),A\times \{j\}\bigr) := \mathbb {P}\bigl((X_{k},R_{k}) \in A\times \{j \} \vert (X_{m},R_{m}) = (x,i) \bigr),\quad k\geq m\geq 0. $$

 □

To prove Theorem 2.11, we cite the following classical result (see, for example, Lemma 9.2 on page 87 of [13]).

Lemma 2.10

Let \(h(x,\omega )\) be a scalar bounded measurable random function of x, independent of \(\mathcal{F}_{s}\). Let ζ be an \(\mathcal{F}_{s}\)-measurable random variable. Then

$$ \mathbb {E}\bigl(h(\zeta ,\omega ) \vert \mathcal{F}_{s}\bigr) = H(\zeta ), $$

where \(H(x) = \mathbb {E}h(x,\omega )\).

Theorem 2.11

The solution \(\{Z_{k}\}_{k\geq 0}\) generated by the stochastic θ method (2.2) is a homogeneous Markov process with transition probability kernel \(\mathbb {P}((x,i),A\times \{j\})\).

Proof

The homogeneity follows from Lemma 2.9, so we only need to show the Markov property. Define

$$ Y_{k+1}^{x,i} = G^{-1} \bigl(x + (1 - \theta )f(x,i)h + g(x,i) \Delta B_{k}\bigr) $$

for \(x \in \mathbb {R}^{n}\) and \(k \geq 0\), \(i\in S\). By (2.7) we know that \(X_{k+1} = Y_{k+1}^{X_{k},R_{k}}\) and \(R_{k+1}=\zeta _{k+1}^{R_{k}}\). Let \(\mathcal{G}_{t_{k+1}} = \sigma \{ B(t_{k+1}) - B(t_{k}) \}\). Clearly, \(\mathcal{G}_{t_{k+1}}\) is independent of \(\mathcal{F}_{t_{k}}\). Moreover, \(Y_{k+1}^{x,i}\) depends completely on the increment \(B(t_{k+1}) - B(t_{k})\), so it is \(\mathcal{G}_{t_{k+1}}\)-measurable. Hence, \(Y_{k+1}^{x,i}\) is independent of \(\mathcal{F}_{t_{k}}\). Applying Lemma 2.10 with \(h((x,i),\omega ) = I_{A}(Y_{k+1}^{x,i})\) and \(h(i,\omega )=I_{\{j\}}(R_{k+1}^{i})\), we compute that

$$\begin{aligned}& \mathbb {P}\bigl(Z_{k+1} \in A\times \{j\} \vert \mathcal{F}_{t_{k}}\bigr) \\& \quad = \mathbb {E}\bigl(I_{A\times \{j\}}(Z_{k+1}) \vert \mathcal{F}_{t_{k}}\bigr) = \mathbb {E}\bigl( I_{A\times \{j\}}\bigl(Y_{k+1}^{X_{k},R_{k}}, \zeta _{k+1}^{R_{k}}\bigr) \vert \mathcal{F}_{t_{k}} \bigr) \\& \quad = \mathbb {E}\bigl( I_{A}\bigl(Y_{k+1}^{X_{k},R_{k}} \bigr)|\mathcal{F}_{t_{k}}\bigr)\mathbb {E}\bigl( I_{ \{j\}}\bigl(\zeta _{k+1}^{R_{k}}\bigr) \vert \mathcal{F}_{t_{k}}\bigr)= \mathbb {E}\bigl( I_{A}\bigl(Y_{k+1}^{x,i} \bigr)|_{x=X_{k},i=R_{k}}\bigr) \mathbb {E}\bigl( I_{\{j\}}\bigl(\zeta _{k+1}^{i}\bigr)|_{i=R_{k}}\bigr) \\& \quad = \mathbb {P}\bigl(\bigl(Y_{k+1}^{x,i}\in A \bigr)|_{x=X_{k},i=R_{k}}\bigr)\mathbb {P}\bigl(\bigl(\zeta _{k+1}^{i}=j \bigr)|_{i=R_{k}}\bigr)= \mathbb {P}\bigl(\bigl(Y_{k+1}^{x,i}, \zeta _{k+1}^{i}\bigr)\in A\times \{j\}|_{x=X_{k},i=R_{k}} \bigr) \\& \quad = \mathbb {P}\bigl(Z_{k+1} \in A \times \{j\} \vert Z_{k} \bigr). \end{aligned}$$

The proof is complete. □

Therefore, we see that \(\mathbb {P}(\cdot ,\cdot )\) is the one-step transition probability and \(\mathbb {P}_{k}(\cdot ,\cdot )\) is the k-step transition probability.

We state a simple version of the discrete-type Gronwall inequality in the next lemma (see, for example, [15, Theorem 2.5 on page 56]).

Lemma 2.12

Let \(\{u_{n}\}\) and \(\{w_{n}\}\) be nonnegative sequences and α be a nonnegative constant. If

$$ u_{n} \leq \alpha + \sum_{k=0}^{n-1} u_{k} w_{k} \quad \textit{for } n\geq 0, $$

then

$$ u_{n} \leq \alpha \exp \Biggl(\sum_{k=0}^{n-1}w_{k} \Biggr). $$
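A quick numerical sanity check of Lemma 2.12 (with hypothetical sequences, for illustration only): build \(u_{n}\) satisfying the hypothesis with equality, \(u_{n} = \alpha + \sum_{k=0}^{n-1} u_{k} w_{k}\), and compare it with the exponential bound:

```python
import numpy as np

def gronwall_bound(alpha, w):
    """The bounds alpha * exp(sum_{k=0}^{n-1} w_k) for n = 0, ..., len(w)."""
    return alpha * np.exp(np.concatenate([[0.0], np.cumsum(w)]))

rng = np.random.default_rng(2)
alpha = 1.0
w = rng.uniform(0.0, 0.3, size=20)  # hypothetical nonnegative weights

# Worst case of the hypothesis: u_n = alpha + sum_{k<n} u_k w_k (with equality).
u = [alpha]
for n in range(1, len(w) + 1):
    u.append(alpha + sum(u[k] * w[k] for k in range(n)))
u = np.array(u)

bound = gronwall_bound(alpha, w)  # u[n] <= bound[n] for every n
```

The equality case satisfies \(u_{n}=\alpha \prod_{k<n}(1+w_{k})\), which lies below \(\alpha \exp (\sum_{k<n}w_{k})\) since \(1+x\leq e^{x}\).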

Main results

In this section, we present the main results of this paper. Since different choices of the parameter θ in (2.2) require different conditions on the coefficients f and g, we divide this section into three parts. First, we discuss the case \(\theta \in [1/2,1]\) in Sect. 3.1. The convergence of the numerical stationary distribution to its underlying counterpart is discussed in Sect. 3.2. The situation when \(\theta \in [0,1/2)\) is presented in Sect. 3.3.

\(\theta \in [1/2,1]\)

To prove Lemma 3.3 and Lemma 3.4, let us introduce two assumptions.

Assumption 3.1

Assume that, for any \(i\in S\), there exists a constant \(\alpha _{i}\in \mathbb {R}\) such that the inequality

$$\begin{aligned} &(p-2)\bigl\langle x-y, \bigl(f(x,i)-f(y,i)\bigr)h\bigr\rangle \bigl\vert g(x,i)-g(y,i) \bigr\vert ^{2} \\ &\quad {}-\frac{2(p-2)(1-\theta )}{\theta ^{2}}\bigl\langle x-y,g(x,i)-g(y,i) \bigr\rangle \bigl\langle F_{k},g(x,i)-g(y,i) \bigr\rangle \leq \alpha _{i} \vert F_{k} \vert ^{4} \end{aligned}$$

holds, where

$$ F_{k+1}= \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i} \bigr\vert -\theta \bigl\vert f\bigl(X_{k+1}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k+1}^{y,i},R_{k}^{i} \bigr) \bigr\vert h. $$

Assumption 3.2

Assume that, for any \(i\in S\), there exist constants \(\alpha _{i}\in \mathbb {R}\) and \(\beta >0\) such that the inequality

$$ (p-2)\bigl\langle x, f(x,i)h\bigr\rangle \bigl\vert g(x,i) \bigr\vert ^{2}- \frac{2(p-2)(1-\theta )}{\theta ^{2}}\bigl\langle x,g(x,i)\bigr\rangle \bigl\langle F_{k},g(x,i) \bigr\rangle \leq \alpha _{i} \vert F_{k} \vert ^{4}+\beta \vert F_{k} \vert ^{2} $$

holds, where

$$\begin{aligned} F_{k+1}= X_{k+1}-\theta f(X_{k+1},R_{k})h. \end{aligned}$$

For simplicity, define

$$ \xi _{i}=2\mu _{i}+ \alpha _{i},\qquad \xi =(\xi _{1},\ldots , \xi _{N})^{T},\qquad \varepsilon = \vert \eta \xi \vert , $$
(3.1)

where we assume \(\eta \xi <0\).

Now we are ready to present the two main lemmas in this subsection.

Lemma 3.3

Let Assumptions 2.6, 2.7, and 3.2 hold. Then, for \(h\in (0,h_{1})\), the solutions generated by the stochastic θ method (2.2) obey

$$ \mathbb {E}\vert X_{k} \vert ^{p}\leq C\bigl(1+ \vert F_{0} \vert ^{p}\bigr),\quad k=1,2,3,\ldots , $$

where C is a constant independent of k.

Proof

From (2.2), we have

$$\begin{aligned} &\bigl\vert X_{k+1}-\theta f(X_{k+1},R_{k})h \bigr\vert ^{2} \\ &\quad = \bigl\vert X_{k}-\theta f(X_{k},R_{k})h \bigr\vert ^{2}+2 \bigl\langle X_{k}, f(X_{k},R_{k})h\bigr\rangle +(1-2\theta ) \bigl\vert f(X_{k},R_{k}) \bigr\vert ^{2}h^{2} \\ &\qquad {}+ \bigl\vert g(X_{k},R_{k})\Delta B_{k} \bigr\vert ^{2}+\frac{2}{\theta }\bigl\langle X_{k},g(X_{k},R_{k}) \Delta B_{k}\bigr\rangle \\ &\qquad {}-\frac{2(1-\theta )}{\theta } \bigl\langle X_{k}-\theta f(X_{k},R_{k})h,g(X_{k},R_{k}) \Delta B_{k}\bigr\rangle . \end{aligned}$$

Since \(\theta \in [\frac{1}{2},1]\) implies \(1-2\theta \leq 0\), dropping the nonpositive term gives

$$\begin{aligned} & \bigl\vert X_{k+1}-\theta f(X_{k+1},R_{k})h \bigr\vert ^{2} \\ &\quad \leq \bigl\vert X_{k}-\theta f(X_{k},R_{k})h \bigr\vert ^{2}+2\bigl\langle X_{k}, f(X_{k},R_{k})h \bigr\rangle + \bigl\vert g(X_{k},R_{k})\Delta B_{k} \bigr\vert ^{2} \\ &\qquad {}+\frac{2}{\theta }\bigl\langle X_{k},g(X_{k},R_{k}) \Delta B_{k}\bigr\rangle - \frac{2(1-\theta )}{\theta } \bigl\langle X_{k}-\theta f(X_{k},R_{k})h,g(X_{k},R_{k}) \Delta B_{k}\bigr\rangle . \end{aligned}$$

Denote

$$ F_{k+1}= X_{k+1}-\theta f(X_{k+1},R_{k})h. $$

Then we have

$$\begin{aligned}& \begin{aligned} \vert F_{k+1} \vert ^{2}\leq {}& \vert F_{k} \vert ^{2}+2\bigl\langle X_{k}, f(X_{k},R_{k})h \bigr\rangle + \bigl\vert g(X_{k},R_{k})\Delta B_{k} \bigr\vert ^{2} \\ &{}+\frac{2}{\theta }\bigl\langle X_{k},g(X_{k},R_{k}) \Delta B_{k}\bigr\rangle - \frac{2(1-\theta )}{\theta } \bigl\langle F_{k},g(X_{k},R_{k})\Delta B_{k} \bigr\rangle , \end{aligned} \\& \begin{aligned} 1+ \vert F_{k+1} \vert ^{2}\leq {}&\bigl(1+ \vert F_{k} \vert ^{2}\bigr)\biggl\{ 1+\frac{1}{1+ \vert F_{k} \vert ^{2}} \biggl[2 \bigl\langle X_{k}, f(X_{k},R_{k})h \bigr\rangle + \bigl\vert g(X_{k},R_{k})\Delta B_{k} \bigr\vert ^{2} \\ &{}+\frac{2}{\theta }\bigl\langle X_{k},g(X_{k},R_{k}) \Delta B_{k}\bigr\rangle - \frac{2(1-\theta )}{\theta } \bigl\langle F_{k},g(X_{k},R_{k})\Delta B_{k} \bigr\rangle \biggr]\biggr\} \\ ={}&\bigl(1+ \vert F_{k} \vert ^{2}\bigr)\bigl\{ 1+\vartheta _{k}(R_{k},\theta )\bigr\} , \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \vartheta _{k}(R_{k},\theta )={}&\frac{1}{1+ \vert F_{k} \vert ^{2}} \biggl[2\bigl\langle X_{k}, f(X_{k},R_{k})h \bigr\rangle + \bigl\vert g(X_{k},R_{k})\Delta B_{k} \bigr\vert ^{2} \\ &{}+\frac{2}{\theta }\bigl\langle X_{k},g(X_{k},R_{k}) \Delta B_{k}\bigr\rangle - \frac{2(1-\theta )}{\theta } \bigl\langle F_{k},g(X_{k},R_{k})\Delta B_{k} \bigr\rangle \biggr]. \end{aligned}$$

Noting that

$$ (1+u)^{\frac{p}{2}}\leq 1+\frac{p}{2}u+\frac{p(p-2)}{8}u^{2}+ \frac{p(p-2)(p-4)}{48} u^{3},\quad u \geq -1, $$

and that \(\vartheta _{k}(R_{k},\theta )>-1\), we have

$$ \begin{aligned} &\mathbb {E}\bigl(\bigl(1+ \vert F_{k+1} \vert ^{2}\bigr)^{\frac{p}{2}}|\mathcal{F}_{t_{k}}\bigr) \\ &\quad \leq \bigl(1+ \vert F_{k} \vert ^{2}\bigr)^{ \frac{p}{2}} \mathbb {E}\biggl(1+\frac{p}{2}\vartheta _{k}(R_{k}, \theta )+ \frac{p(p-2)}{8}\vartheta _{k}^{2}(R_{k}, \theta ) \\ &\qquad {}+ \frac{p(p-2)(p-4)}{48}\vartheta _{k}^{3}(R_{k}, \theta )\Big|\mathcal{F}_{t_{k}}\biggr). \end{aligned} $$
(3.2)

First, we compute

$$ \mathbb {E}\bigl(\vartheta _{k}(R_{k},\theta )|\mathcal{F}_{t_{k}}\bigr)= \frac{2\langle X_{k}, f(X_{k},R_{k})h\rangle + \vert g(X_{k},R_{k}) \vert ^{2}h}{1+ \vert F_{k} \vert ^{2}}, $$

where \(\mathbb {E}(\Delta B_{k}|\mathcal{F}_{t_{k}}) = 0\) and \(\mathbb {E}(|\Delta B_{k}|^{2}|\mathcal{F}_{t_{k}})= h\) are used. Noting that \(\mathbb {E}(|\Delta B_{k}|^{2i}|\mathcal{F}_{t_{k}}) = Ch^{i}\) and \(\mathbb {E}(|\Delta B_{k}|^{2i-1}|\mathcal{F}_{t_{k}})= Ch^{i-\frac{1}{2}}\) for \(i=2,3,\ldots \) , we also compute

$$ \begin{aligned} &\mathbb {E}\bigl(\vartheta _{k}^{2}(R_{k},\theta )|\mathcal{F}_{t_{k}}\bigr) \\ &\quad \geq \frac{4\langle X_{k}, f(X_{k},R_{k})h\rangle \vert g(X_{k},R_{k}) \vert ^{2}h-\frac{8(1-\theta )}{\theta ^{2}}\langle X_{k},g(X_{k},R_{k})\rangle \langle F_{k},g(X_{k},R_{k}) \rangle h}{(1+ \vert F_{k} \vert ^{2})^{2}}, \\ &\mathbb {E}\bigl(\vartheta _{k}^{3}(R_{k}, \theta )|\mathcal{F}_{t_{k}}\bigr)\leq \overline{C}h^{2}. \end{aligned} $$
(3.3)

Substituting these three estimates into (3.2), we get

$$\begin{aligned}& \mathbb {E}\bigl(\bigl(1+ \vert F_{k+1} \vert ^{2} \bigr)^{\frac{p}{2}}|\mathcal{F}_{t_{k}}\bigr) \\& \quad \leq \bigl(1+ \vert F_{k} \vert ^{2} \bigr)^{\frac{p}{2}}\biggl\{ 1+\frac{p}{2}\biggl[ \frac{2\langle X_{k}, f(X_{k},R_{k})h\rangle + \vert g(X_{k},R_{k}) \vert ^{2}h}{1+ \vert F_{k} \vert ^{2}}\biggr] \\& \qquad {}+\frac{p(p-2)}{8}\biggl[ \frac{4\langle X_{k}, f(X_{k},R_{k})h\rangle \vert g(X_{k},R_{k}) \vert ^{2}h-\frac{8(1-\theta )}{\theta ^{2}} \langle X_{k},g(X_{k},R_{k})\rangle \langle F_{k},g(X_{k},R_{k}) \rangle h}{(1+ \vert F_{k} \vert ^{2})^{2}}\biggr] \\& \qquad {}+\frac{p(p-2)(p-4)}{48}\overline{C}h^{2}\biggr\} \\& \quad \leq \bigl(1+ \vert F_{k} \vert ^{2} \bigr)^{\frac{p}{2}}\Bigl\{ 1+\frac{p}{2}\bigl[(2\mu _{r_{k}}+ \sigma )h+\alpha _{r_{k}}h+C_{1}h^{2} \bigr]\Bigr\} +C_{2}h \\& \quad \leq \bigl(1+ \vert F_{k} \vert ^{2} \bigr)^{\frac{p}{2}}\Bigl\{ 1+\frac{p}{2}\bigl[(2\mu _{r_{k}}+ \alpha _{r_{k}})h+\sigma h+C_{1}h^{2} \bigr]\Bigr\} +C_{2}h. \end{aligned}$$

Letting \(h_{1}\) be a constant such that \(h_{1}\in (0,h]\), \(C_{1}h_{1}\leq \frac{3\varepsilon }{4}\), \(\sigma \leq \frac{1}{8}\varepsilon \) and \((|\xi |+\frac{7}{8}\varepsilon )h_{1}<1\) (where \(|\xi |=\max_{i\in \mathbb {S}}|\xi _{i}|\)), we arrive at, for \(h\in (0,h_{1}]\),

$$ \mathbb {E}\bigl(\bigl(1+ \vert F_{k+1} \vert ^{2} \bigr)^{\frac{p}{2}}|\mathcal{F}_{t_{k}}\bigr)\leq \bigl(1+ \vert F_{k} \vert ^{2}\bigr)^{ \frac{p}{2}}\biggl[1+ \frac{p}{2}\biggl(\xi _{r_{k}}+\frac{7}{8}\varepsilon \biggr)h\biggr]+C_{2}h. $$

Since this holds for all \(k\geq 0\), we also compute

$$\begin{aligned} &\mathbb {E}\bigl(\bigl(1+ \vert F_{k+1} \vert ^{2} \bigr)^{p/2}|\mathcal{F}_{t_{k-1}}\bigr) \\ &\quad \leq \mathbb {E}\bigl( \bigl(1+ \vert F_{k} \vert ^{2}\bigr)^{p/2}| \mathcal{F}_{t_{k-1}}\bigr)\biggl[1+\frac{p}{2}\biggl(\xi _{r_{k}}+\frac{7}{8}\varepsilon \biggr)h\biggr]+C_{2}h \\ &\quad \leq \bigl(1+ \vert F_{k} \vert ^{2} \bigr)^{p/2}\prod_{i=k-1}^{k} \biggl[1+\frac{p}{2}\biggl(\xi _{r_{i}}+ \frac{7}{8} \varepsilon \biggr)h\biggr]+C_{2}h\biggl[1+\frac{p}{2} \biggl(\xi _{r_{k}}+ \frac{7}{8}\varepsilon \biggr)h \biggr]+C_{2}h. \end{aligned}$$

Repeating this procedure yields

$$\begin{aligned} \mathbb {E}\bigl(\bigl(1+ \vert F_{k+1} \vert ^{2} \bigr)^{p/2}|\mathcal{F}_{0}\bigr)\leq {}&\bigl(1+ \vert F_{0} \vert ^{2}\bigr)^{p/2} \prod _{i=0}^{k}\biggl[1+\frac{p}{2} \biggl(\xi _{r_{i}}+\frac{7}{8}\varepsilon \biggr)h\biggr] \\ &{}+C_{2}h\sum_{j=1}^{k} \mathbb {E}\Biggl[\mathbb {E}\Biggl(\prod_{i=k-j+1}^{k} \biggl(1+ \frac{p}{2}\biggl(\xi _{r_{i}}+\frac{7}{8} \varepsilon \biggr)h\biggr)\Big|\mathcal{F}_{k-j}\Biggr)\Biggr]+C_{2}h \\ \leq{}& \bigl(1+ \vert F_{0} \vert ^{2} \bigr)^{p/2}\prod_{i=0}^{k} \biggl[1+\frac{p}{2}\biggl(\xi _{r_{i}}+ \frac{7}{8} \varepsilon \biggr)h\biggr] \\ &{}+C_{2}h\sum_{j=1}^{k} \mathbb {E}\Biggl[\prod_{i=1}^{j}\biggl(1+ \frac{p}{2}\biggl(\xi _{r_{i}}+ \frac{7}{8} \varepsilon \biggr)h\biggr)\Biggr]+C_{2}h. \end{aligned}$$

Then we have

$$\begin{aligned} \mathbb {E}\bigl(\bigl(1+ \vert F_{k+1} \vert ^{2} \bigr)^{p/2}|\mathcal{F}_{0}\bigr)\leq {}& \bigl(1+ \vert F_{0} \vert ^{2}\bigr)^{p/2} \mathbb {E}\Biggl[\exp \Biggl(\sum_{i=0}^{k}\log \biggl(1+\frac{p}{2}\biggl(\xi _{r_{i}}+ \frac{7}{8} \varepsilon \biggr)h\biggr)\Biggr)\Biggr] \\ &{}+C_{2}h\sum_{j=1}^{k} \mathbb {E}\Biggl[\exp \Biggl(\sum_{i=1}^{j} \log \biggl(1+\frac{p}{2}\biggl( \xi _{r_{i}}+ \frac{7}{8}\varepsilon \biggr)h\biggr)\Biggr)\Biggr]+C_{2}h \\ ={}&A_{1}+A_{2}+C_{2}h. \end{aligned}$$

If necessary, we reduce h further to ensure that

$$ \frac{p}{2}\biggl(\xi _{i}+\frac{7}{8} \varepsilon \biggr)h >-1,\quad i\in S. $$

With the inequality

$$ \log (1+x)\leq x, \quad x> -1, $$

we derive that

$$\begin{aligned} {\lim_{j\rightarrow \infty }}\frac{1}{j}\sum _{i=1}^{j}\log \biggl(1+ \frac{p}{16}(8 \xi _{r_{i}}+7\varepsilon )h\biggr)&=\sum_{i\in S} \eta _{i}\log \biggl(1+ \frac{p}{16}(8\xi _{i}+7\varepsilon )h\biggr) \\ &\leq \frac{ph}{16}\sum_{i\in S}\eta _{i}(8\xi _{i}+7 \varepsilon )=-\frac{\varepsilon ph}{16}, \quad \text{a.s.} \end{aligned}$$

which implies

$$ {\lim_{j\rightarrow \infty }} \exp \Biggl(\frac{\varepsilon phj}{32}+\sum _{i=1}^{j}\log \biggl(1+ \frac{p}{16}(8 \xi _{r_{i}}+7\varepsilon )h\biggr)\Biggr)=0, \quad \text{a.s.} $$

By virtue of Fatou's lemma, we have

$$ \limsup_{j\rightarrow \infty }\mathbb {E}\Biggl[ \exp \Biggl( \frac{\varepsilon phj}{32}+\sum_{i=1}^{j}\log \biggl(1+\frac{p}{16}(8\xi _{r_{i}}+7 \varepsilon )h\biggr) \Biggr)\Biggr]=0. $$

Hence there is a positive integer N such that

$$ \mathbb {E}\Biggl[\exp \Biggl(\sum_{i=1}^{j} \log \biggl(1+\frac{p}{16}(8\xi _{r_{i}}+7 \varepsilon )h\biggr) \Biggr)\Biggr]\leq \exp \biggl(-\frac{\varepsilon ph}{32}j\biggr), \quad \forall j>N. $$
(3.4)

So

$$ A_{1}\leq \bigl(1+ \vert F_{0} \vert ^{2}\bigr)^{\frac{p}{2}}\biggl(1+\frac{p}{16}(8 \xi _{r_{0}}+7 \varepsilon )h\biggr)\exp \biggl(-\frac{\varepsilon ph}{32}k \biggr),\quad \forall k>N. $$
(3.5)

Then we know

$$ \sum_{j=1}^{N}\mathbb {E}\Biggl[\exp \Biggl(\sum_{i=1}^{j}\log \biggl(1+ \frac{p}{2}\biggl(\xi _{r_{i}}+ \frac{7}{8} \varepsilon \biggr)h\biggr)\Biggr)\Biggr]\leq \sum_{j=1}^{N} \biggl(1+\frac{p}{2}\biggl( \vert \xi \vert + \frac{7}{8} \varepsilon \biggr)h\biggr)^{j}\leq C_{2}. $$

Together with (3.4), it implies

$$ \begin{aligned} A_{2}&\leq C_{2}h\sum_{j=N+1}^{\infty } \mathbb {E}\Biggl[\exp \Biggl(\sum_{i=1}^{j} \log \biggl(1+ \frac{p}{2}\biggl(\xi _{r_{i}}+ \frac{7}{8}\varepsilon \biggr)h\biggr)\Biggr)\Biggr] \\ &\leq C_{2}h\sum_{j=N+1}^{k} \exp \biggl(-\frac{\varepsilon ph}{32}j\biggr),\quad \forall k>N. \end{aligned} $$
(3.6)

Using (3.6) and (3.5), we obtain

$$\begin{aligned} \mathbb {E}\bigl(\bigl(1+ \vert F_{k+1} \vert ^{2} \bigr)^{p/2}\bigr)\leq {}&C_{2}h+C_{2}\bigl(1+ \vert F_{0} \vert ^{2}\bigr)^{p/2}\exp \biggl(- \frac{\varepsilon ph}{32}\bigl(k\vee (N+1)\bigr)\biggr) \\ &{}+C_{2}h\sum_{j=N+1}^{k\vee (N+1)} \exp \biggl(-\frac{\varepsilon ph}{32}j\biggr) \\ \leq{}& C_{3}\bigl(1+ \vert F_{0} \vert ^{p}\bigr),\quad \forall k>0. \end{aligned}$$

Then, since \(a\geq 0\), we have

$$\begin{aligned} \vert F_{k+1} \vert ^{2}&= \bigl\vert X_{k+1}-\theta f(X_{k+1},R_{k})h \bigr\vert ^{2}= \vert X_{k+1} \vert ^{2}-2\theta h \bigl\langle X_{k+1}, f(X_{k+1},R_{k})\bigr\rangle +\theta ^{2}h^{2} \bigl\vert f(X_{k+1},R_{k}) \bigr\vert ^{2} \\ &\geq \vert X_{k+1} \vert ^{2}-2\theta h\bigl(\mu _{r_{k}} \vert X_{k+1} \vert ^{2}+a\bigr) \\ &= (1-2\theta h \mu _{r_{k}}) \vert X_{k+1} \vert ^{2}-2a\theta h. \end{aligned}$$

Assuming h is small enough that \(2\theta h \vert \widetilde{\mu } \vert <1\) and \(2a\theta h\leq 1\), where \(|\widetilde{\mu }|: =\max_{i\in S}|\mu _{i}|\), we obtain \(1+ \vert F_{k+1} \vert ^{2}\geq (1-2\theta h \mu _{r_{k}}) \vert X_{k+1} \vert ^{2}\), and hence

$$\begin{aligned} \mathbb {E}\bigl( \vert X_{k+1} \vert ^{p}\bigr)&\leq \frac{\mathbb {E}((1+ \vert F_{k+1} \vert ^{2})^{p/2})}{(1-2\theta h \mu _{r_{k}})^{p/2}} \\ &\leq \frac{\mathbb {E}((1+ \vert F_{k+1} \vert ^{2})^{p/2})}{(1-2\theta h \vert \widetilde{\mu } \vert )^{p/2}}. \end{aligned}$$

Therefore

$$\begin{aligned} \mathbb {E}\vert X_{k} \vert ^{p}&\leq C_{3}\bigl(1-2\theta h \vert \widetilde{\mu } \vert \bigr)^{-p/2}\bigl(1+ \vert F_{0} \vert ^{p}\bigr) \\ &\leq C\bigl(1+ \vert F_{0} \vert ^{p}\bigr). \end{aligned}$$

The proof is complete. □
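In practice, for \(\theta >0\) each step of the scheme (2.2) is drift-implicit: one must solve \(X_{k+1}-\theta f(X_{k+1},R_{k})h=Y\) for \(X_{k+1}\). For a scalar dissipative drift such as \(f(x)=-0.5x-0.5x^{3}\) from equation (4.2) below, the left-hand side is strictly increasing in \(X_{k+1}\), so a Newton iteration converges. A minimal Python sketch (our own illustration of the implicit solve, not the authors' code; all names are ours):

```python
def theta_step_root(y, theta, h, f, df, tol=1e-12, max_iter=50):
    """Solve x - theta*h*f(x) = y by Newton's method (scalar case).

    Well posed when 1 - theta*h*df(x) > 0, e.g. for dissipative drifts."""
    x = y  # start the iteration from the right-hand side y
    for _ in range(max_iter):
        g = x - theta * h * f(x) - y
        dg = 1.0 - theta * h * df(x)
        step = g / dg
        x -= step
        if abs(step) < tol:
            break
    return x

# Drift of equation (4.2): f(x) = -0.5x - 0.5x^3, with derivative df.
f = lambda x: -0.5 * x - 0.5 * x ** 3
df = lambda x: -0.5 - 1.5 * x ** 2

# Implicit part of one theta step (theta = 1/2, h = 0.01) from y = 2.
x = theta_step_root(2.0, 0.5, 0.01, f, df)
print(x)
```

Here \(1-\theta h f'(x)=1+\theta h(0.5+1.5x^{2})>0\) for all x, so the root is unique and the iteration is well conditioned for any step size.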

Lemma 3.4

Let Assumptions 2.4, 2.5, and 3.1 hold. Then, for \(h\in (0,h_{1})\) and any two initial values \(x,y\in \mathbb{R}^{n}\) with \(x \neq y\), the solutions generated by the stochastic θ method (2.2) satisfy

$$ \mathbb{E} \bigl\vert X_{k}^{x,i}-X_{k}^{y,i} \bigr\vert ^{p} \leq C\bigl(1+ \vert F_{0} \vert ^{p}\bigr)e^{- \frac{pkh\varepsilon }{16}}, $$

where ε is defined in (3.1).

Proof

From (2.2), we have

$$\begin{aligned}& \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i}-\theta h \bigl(f\bigl(X_{k+1}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k+1}^{y,i},R_{k}^{i} \bigr)\bigr) \bigr\vert ^{2} \\& \quad = \bigl\vert X_{k}^{x,i}-X_{k}^{y,i}-\theta h \bigl(f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr) \bigr\vert ^{2} +2\bigl\langle X_{k}^{x,i}-X_{k}^{y,i}, \bigl(f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr)h \bigr\rangle \\& \qquad {}+ (1-2\theta ) \bigl\vert f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \bigr\vert ^{2}h^{2} + \bigl\vert \bigl(g \bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr)\Delta B_{k} \bigr\vert ^{2} \\& \qquad {}+ \frac{2}{\theta }\bigl\langle X_{k}^{x,i}-X_{k}^{y,i},\bigl(g \bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr) \Delta B_{k}\bigr\rangle \\& \qquad {}- \frac{2(1-\theta )}{\theta } \bigl\langle X_{k}^{x,i}-X_{k}^{y,i}-\theta h\bigl(f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr),\bigl(g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr) \Delta B_{k}\bigr\rangle . \end{aligned}$$

Due to the fact that \(\theta \in [\frac{1}{2},1]\), \(1-2\theta \leq 0\), we get

$$\begin{aligned}& \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i}-\theta h \bigl(f\bigl(X_{k+1}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k+1}^{y,i},R_{k}^{i} \bigr)\bigr) \bigr\vert ^{2} \\& \quad \leq \bigl\vert X_{k}^{x,i}-X_{k}^{y,i}-\theta h \bigl(f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr) \bigr\vert ^{2} \\& \qquad {}+2 \bigl\langle X_{k}^{x,i}-X_{k}^{y,i}, \bigl(f \bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr)h \bigr\rangle \\& \qquad {}+ \bigl\vert \bigl(g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr)\Delta B_{k} \bigr\vert ^{2} + \frac{2}{\theta }\bigl\langle X_{k}^{x,i}-X_{k}^{y,i},\bigl(g \bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr) \Delta B_{k}\bigr\rangle \\& \qquad {}- \frac{2(1-\theta )}{\theta } \bigl\langle X_{k}^{x,i}-X_{k}^{y,i}- \theta h \bigl(f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr), \\& \qquad \bigl(g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr) \Delta B_{k}\bigr\rangle . \end{aligned}$$

Denote

$$\begin{aligned}& F_{k+1}= X_{k+1}^{x,i}-X_{k+1}^{y,i}-\theta \bigl(f\bigl(X_{k+1}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k+1}^{y,i},R_{k}^{i} \bigr)\bigr) h, \\& \begin{aligned} \vert F_{k+1} \vert ^{2}\leq {}& \vert F_{k} \vert ^{2}+2\bigl\langle X_{k}^{x,i}-X_{k}^{y,i}, \bigl(f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr)h \bigr\rangle \\ &{}+ \bigl\vert \bigl(g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr)\Delta B_{k} \bigr\vert ^{2} \\ &{}+\frac{2}{\theta }\bigl\langle X_{k}^{x,i}-X_{k}^{y,i},\bigl(g \bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr) \Delta B_{k}\bigr\rangle \\ &{}-\frac{2(1-\theta )}{\theta } \bigl\langle F_{k},\bigl(g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\bigr) \Delta B_{k}\bigr\rangle , \end{aligned} \\& 1+ \vert F_{k+1} \vert ^{2} \leq \bigl(1+ \vert F_{k} \vert ^{2}\bigr)\bigl\{ 1+\vartheta _{k}(R_{k},\theta )\bigr\} , \end{aligned}$$

where

$$\begin{aligned} \vartheta _{k}(R_{k},\theta )={}&\frac{1}{1+ \vert F_{k} \vert ^{2}} \biggl[2\bigl\langle X_{k}^{x,i}-X_{k}^{y,i}, f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)h\bigr\rangle \\ &{}+ \bigl\vert g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \Delta B_{k} \bigr\vert ^{2} \\ &{}+\frac{2}{\theta }\bigl\langle X_{k}^{x,i}-X_{k}^{y,i},g \bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \Delta B_{k}\bigr\rangle \\ &{}-\frac{2(1-\theta )}{\theta } \bigl\langle F_{k},g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \Delta B_{k}\bigr\rangle \biggr]. \end{aligned}$$

Noting that

$$ (1+u)^{\frac{p}{2}}\leq 1+\frac{p}{2}u+\frac{p(p-2)}{8}u^{2}+ \frac{p(p-2)(p-4)}{48} u^{3},\quad u \geq -1, $$

and \(\vartheta _{k}(R_{k},\theta )>-1\), we have

$$ \begin{aligned} &\mathbb {E}\bigl(\bigl(1+ \vert F_{k+1} \vert ^{2}\bigr)^{\frac{p}{2}}|\mathcal{F}_{t_{k}}\bigr) \\ &\quad \leq \bigl(1+ \vert F_{k} \vert ^{2}\bigr)^{ \frac{p}{2}} \mathbb {E}\biggl(1+\frac{p}{2}\vartheta _{k}(R_{k}, \theta )+ \frac{p(p-2)}{8}\vartheta _{k}^{2}(R_{k}, \theta ) \\ &\qquad {}+ \frac{p(p-2)(p-4)}{48}\vartheta _{k}^{3}(R_{k}, \theta )\Big|\mathcal{F}_{t_{k}}\biggr). \end{aligned} $$
(3.7)

Taking conditional expectations, we have

$$ \mathbb {E}\bigl(\vartheta _{k}(R_{k},\theta )|\mathcal{F}_{t_{k}}\bigr)= \frac{2\langle X_{k}^{x,i}-X_{k}^{y,i}, f(X_{k}^{x,i},R_{k}^{i})-f(X_{k}^{y,i},R_{k}^{i})h\rangle + \vert g(X_{k}^{x,i},R_{k}^{i})-g(X_{k}^{y,i},R_{k}^{i}) \vert ^{2}h}{1+ \vert F_{k} \vert ^{2}}, $$

where \(\mathbb {E}(\Delta B_{k}|\mathcal{F}_{t_{k}}) = 0\) and \(\mathbb {E}(|\Delta B_{k}|^{2}|\mathcal{F}_{t_{k}})= h\) are used, and we have \(\mathbb {E}(|\Delta B_{k}|^{2i}|\mathcal{F}_{t_{k}}) = Ch^{i}\), \(\mathbb {E}(|\Delta B_{k}|^{2i-1}|\mathcal{F}_{t_{k}})= Ch^{i-\frac{1}{2}}\), \(i=2,3,\ldots \) . We compute

$$ \begin{aligned} &\mathbb {E}\bigl(\vartheta _{k}^{2}(R_{k},\theta )|\mathcal{F}_{t_{k}}\bigr) \\ &\quad \geq \biggl\{ 4\bigl\langle X_{k}^{x,i}-X_{k}^{y,i}, f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)h \bigr\rangle \bigl\vert g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \bigr\vert ^{2}h \\ &\qquad {}-\frac{8(1-\theta )}{\theta ^{2}}\bigl\langle X_{k}^{x,i}-X_{k}^{y,i},g \bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \bigr\rangle \bigl\langle F_{k},g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \bigr\rangle h\biggr\} \\ &\qquad {}\times\frac{1}{(1+ \vert F_{k} \vert ^{2})^{2}}, \\ &\mathbb {E}\bigl(\vartheta _{k}^{3}(R_{k}, \theta )|\mathcal{F}_{t_{k}}\bigr)\leq \overline{C}h^{2}. \end{aligned} $$
(3.8)
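The conditional moment bounds for the Brownian increments used here are standard Gaussian moment facts; for a scalar increment \(\Delta B_{k}\sim N(0,h)\) one has \(\mathbb {E}|\Delta B_{k}|^{2i}=(2i-1)!!\,h^{i}\). A quick Monte Carlo sanity check in Python (an illustrative aside with our own variable names, not part of the proof):

```python
import random, math

random.seed(1)
h = 0.01
n = 200_000
# Draw n one-dimensional Brownian increments dB ~ N(0, h).
incs = [random.gauss(0.0, math.sqrt(h)) for _ in range(n)]

m2 = sum(b * b for b in incs) / n   # second moment, close to h
m4 = sum(b ** 4 for b in incs) / n  # fourth moment, close to 3*h**2

print(m2, m4)
```

With this sample size the estimates land within a fraction of a percent of \(h\) and \(3h^{2}\), illustrating the \(Ch^{i}\) scaling invoked in the proof.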

Substituting these three estimates into (3.7), we get

$$ \begin{aligned} &\mathbb {E}\bigl(\bigl(1+ \vert F_{k+1} \vert ^{2}\bigr)^{\frac{p}{2}}|\mathcal{F}_{t_{k}}\bigr) \\ &\quad \leq \bigl(1+ \vert F_{k} \vert ^{2} \bigr)^{\frac{p}{2}}\biggl\{ 1+\frac{p}{2}\bigl[(2\mu _{r_{k}}+ \sigma )h+\alpha _{r_{k}}h+C_{1}h^{2} \bigr]\biggr\} \\ &\quad \leq \bigl(1+ \vert F_{k} \vert ^{2} \bigr)^{\frac{p}{2}}\biggl\{ 1+\frac{p}{2}\bigl[(2\mu _{r_{k}}+ \alpha _{r_{k}})h+\sigma h+C_{1}h^{2} \bigr]\biggr\} . \end{aligned} $$
(3.9)

Let \(h_{1}\) be a positive constant such that \(C_{1}h_{1}\leq \frac{3\varepsilon }{4}\), \(\sigma \leq \frac{1}{8}\varepsilon \), and \((|\xi |+\frac{7}{8}\varepsilon )h_{1}<1\) (where \(|\xi |=\max_{i\in S}|\xi _{i}|\)). Then, for \(h\in (0,h_{1}]\), we arrive at

$$ \mathbb {E}\bigl(\bigl(1+ \vert F_{k+1} \vert ^{2} \bigr)^{\frac{p}{2}}|\mathcal{F}_{t_{k}}\bigr)\leq \bigl(1+ \vert F_{k} \vert ^{2}\bigr)^{ \frac{p}{2}}\biggl[1+ \frac{p}{2}\biggl(\xi _{r_{k}^{i}}+\frac{7}{8}\varepsilon \biggr)h\biggr]. $$

Iterating this estimate and taking expectations, we obtain

$$\begin{aligned} \mathbb {E}\bigl(\bigl(1+ \vert F_{k} \vert ^{2} \bigr)^{\frac{p}{2}}\bigr)&\leq \bigl(1+ \vert F_{0} \vert ^{2}\bigr)^{ \frac{p}{2}}\mathbb {E}\Biggl[\prod_{n=0}^{k-1}\biggl(1+ \frac{p}{2}h\biggl(\xi _{r_{n}^{i}}+ \frac{7}{8} \varepsilon \biggr)\biggr)\Biggr] \\ &\leq \bigl(1+ \vert F_{0} \vert ^{2} \bigr)^{\frac{p}{2}}\mathbb {E}\Biggl[\exp \Biggl(\sum _{n=0}^{k-1}\log \biggl(1+ \frac{p}{2}h \biggl(\xi _{r_{n}^{i}}+\frac{7}{8}\varepsilon \biggr)\biggr) \Biggr)\Biggr]. \end{aligned}$$

We further reduce h to ensure that

$$ \frac{p}{2}h\biggl(\xi _{i}+\frac{7}{8} \varepsilon \biggr)>-1, \quad i\in S. $$

Then we can apply the inequality

$$ \log (1+x)\leq x,\quad x>-1. $$

Combining this with the ergodic property of the Markov chain, we derive that

$$ \lim_{k\rightarrow \infty }\frac{1}{k}\sum _{n=0}^{k-1}\log \biggl[1+ \frac{p}{2}h \biggl(\xi _{r_{n}^{i}}+\frac{7}{8}\varepsilon \biggr)\biggr]=\sum _{i\in S} \eta _{i}\log \biggl[1+ \frac{p}{2}h\biggl(\xi _{i}+\frac{7}{8} \varepsilon \biggr)\biggr] \leq -\frac{ph\varepsilon }{16}, \quad \text{a.s.} $$

Therefore we have

$$ \lim_{k\rightarrow \infty }\Biggl[\frac{pkh\varepsilon }{16}+\sum _{n=0}^{k-1}\log \biggl(1+ \frac{p}{2}h \biggl(\xi _{r_{n}^{i}}+\frac{7}{8}\varepsilon \biggr)\biggr) \Biggr]=-\infty , \quad \text{a.s.} $$

By the Fatou lemma, we obtain

$$ \lim_{k\rightarrow \infty }\mathbb {E}\Biggl[\exp \Biggl(\frac{pkh\varepsilon }{16}+ \sum_{n=0}^{k-1}\log \biggl(1+ \frac{p}{2}h\biggl(\xi _{r_{n}^{i}}+\frac{7}{8} \varepsilon \biggr)\biggr)\Biggr)\Biggr]=0. $$

Hence there exists a constant \(C_{2}\) such that

$$ \mathbb {E}\bigl(\bigl(1+ \vert F_{k} \vert ^{2} \bigr)^{\frac{p}{2}}\bigr) \leq C_{2} \bigl(1+ \vert F_{0} \vert ^{2}\bigr)^{ \frac{p}{2}}e^{-\frac{pkh\varepsilon }{16}},\quad \forall k>0. $$

Moreover, by the one-sided Lipschitz condition in Assumption 2.4,

$$ \begin{aligned} \vert F_{k+1} \vert ^{2}&= \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i}- \theta h\bigl(f\bigl(X_{k+1}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k+1}^{y,i},R_{k}^{i} \bigr)\bigr) \bigr\vert ^{2} \\ &\geq \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i} \bigr\vert ^{2}-2\theta h \mu _{r_{k}} \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i} \bigr\vert ^{2} \\ &= (1-2\theta h \mu _{r_{k}}) \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i} \bigr\vert ^{2}. \end{aligned} $$
(3.10)

Assuming \(2\theta h \vert \widetilde{\mu } \vert <1\), where \(|\widetilde{\mu }|: =\max_{i\in S}|\mu _{i}|\), we have

$$\begin{aligned} \mathbb {E}\bigl( \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i} \bigr\vert ^{p}\bigr)&\leq \frac{\mathbb {E}((1+ \vert F_{k+1} \vert ^{2})^{p/2})}{(1-2\theta h \mu _{r_{k}})^{p/2}} \\ &\leq \frac{\mathbb {E}((1+ \vert F_{k+1} \vert ^{2})^{p/2})}{(1-2\theta h \vert \widetilde{\mu } \vert )^{p/2}}. \end{aligned}$$

We obtain

$$ \mathbb{E} \bigl\vert X_{k}^{x,i}-X_{k}^{y,i} \bigr\vert ^{p} \leq C\bigl(1+ \vert F_{0} \vert ^{p}\bigr)e^{- \frac{pkh\varepsilon }{16}}. $$

The proof is complete. □

Lemma 3.5

Given Assumptions 2.6, 2.7, and 3.2, the solutions generated by the stochastic θ method (2.2) obey

$$ \mathbb {E}\Bigl(\sup_{0 \leq k \leq n} \vert X_{k} \vert ^{p} \Bigr)\leq C,\quad n=1,2,3, \ldots , $$

where C is a constant that may depend on n.

Combining Lemmas 3.3, 3.4, 3.5 and using Chebyshev’s inequality, we derive the existence and uniqueness of the stationary distribution of the stochastic θ method with \(\theta \in [1/2, 1]\) from Theorem 2.2.

The convergence

Given Assumptions 2.4 to 3.2, the convergence of the numerical stationary distribution to the underlying stationary distribution is discussed in this subsection.

Recall that the probability measure induced by the numerical solution \(X_{k}^{x,i}\) is denoted by \(\mathbb {P}_{k}((x,i),\cdot \times \cdot )\); similarly, we denote the probability measure induced by the underlying solution \(x(t)\) by \(\bar{\mathbb {P}}_{t}((x,i),\cdot \times \cdot )\).

Lemma 3.6

Let Assumptions 2.4 to 3.2 hold and fix any initial data \((x,i)\in \mathbb {R}^{n}\times S\). Then, for any given \(T_{1}>0\) and \(\varepsilon >0\), there exists sufficiently small \(h^{*} >0\) such that

$$ d_{\mathbb {L}}\bigl(\bar{\mathbb {P}}_{k h}\bigl((x,i),\cdot \times \cdot \bigr),\mathbb {P}_{k}\bigl((x,i), \cdot \times \cdot \bigr)\bigr) < \varepsilon $$

provided that \(h < h^{*}\) and \(k h \leq T_{1}\).

The result can be derived from the finite-time strong convergence of the stochastic θ method [35].

Now we are ready to show that the numerical stationary distribution converges to the underlying stationary distribution as time step diminishes.

Theorem 3.7

Given Assumptions 2.4 to 3.2, we have

$$ \lim_{h \rightarrow 0} d_{\mathbb {L}} \bigl(\Pi _{h}( \cdot \times \cdot ), \pi (\cdot \times \cdot )\bigr) = 0. $$

Proof

Fix any initial value \((x,i) \in \mathbb {R}^{n}\times S\) and let \(\varepsilon >0\) be arbitrary. Due to the existence and uniqueness of the stationary distribution of the underlying equation, there exists \(\Theta ^{*} >0\) such that, for any \(t > \Theta ^{*}\),

$$ d_{\mathbb {L}} \bigl(\bar{\mathbb {P}}_{t}\bigl((x,i),\cdot \times \cdot \bigr),\pi (\cdot \times \cdot )\bigr) < \varepsilon /3. $$

Similarly, by Theorem 2.2, there exists a pair of \(h^{**}>0\) and \(\Theta ^{**} >0\) such that

$$ d_{\mathbb {L}} \bigl(\mathbb {P}_{k}\bigl((x,i),\cdot \times \cdot \bigr),\Pi _{h}(\cdot \times \cdot )\bigr) < \varepsilon /3 $$

for all \(h < h^{**}\) and \(k h > \Theta ^{**}\). Let \(\Theta = \max (\Theta ^{*},\Theta ^{**})\). By Lemma 3.6, there exists \(h^{*}>0\) such that, for any \(h < h^{*}\) and \(k h < \Theta + 1\),

$$ d_{\mathbb {L}} \bigl(\bar{\mathbb {P}}_{k h}\bigl((x,i),\cdot \times \cdot \bigr),\mathbb {P}_{k}\bigl((x,i), \cdot \times \cdot \bigr) \bigr) < \varepsilon /3. $$

Therefore, for any \(h < \min (h^{*},h^{**})\), setting \(k = [\Theta /h] + 1\) (so that \(\Theta < kh < \Theta +1\)), we see that the assertion holds by the triangle inequality. □

\(\theta \in [0,1/2)\)

We need to add the global Lipschitz and linear growth conditions on the drift coefficient. It is worth mentioning that only the one-sided Lipschitz condition was needed in Sect. 3.1.

Assumption 3.8

Assume that there exists a constant \(K>0\) such that, for any \(x,y \in \mathbb {R}^{n}\) and \(i\in S\),

$$ \bigl\vert f(x,i)-f(y,i) \bigr\vert ^{2}\leq K \vert x-y \vert ^{2}. $$

Assumption 3.9

There exist positive constants κ and c such that, for any \(x\in \mathbb{R}^{n}\) and \(i\in S\),

$$ \bigl\vert f(x,i) \bigr\vert ^{2} \leq \kappa \vert x \vert ^{2}+c. $$

In addition, we require the following two assumptions.

Assumption 3.10

For each \(i\in S\), there exists a constant \(\alpha _{i}\in \mathbb {R}\) such that the inequality

$$ \vert x-y \vert ^{2} \bigl\vert g(x,i)-g(y,i) \bigr\vert ^{2}-(p-2) \bigl\vert \bigl\langle x-y, g(x,i) - g(y,i) \bigr\rangle \bigr\vert ^{2}\leq \alpha _{i} \vert x-y \vert ^{4} $$

holds.

For simplicity, define

$$ \xi _{i}=2\mu _{i}+ \alpha _{i},\qquad \xi =(\xi _{1},\ldots , \xi _{N})^{T},\qquad \varepsilon = \vert \eta \xi \vert , $$
(3.11)

where we assume \(\eta \xi <0\).

The next assumption can be derived from Assumption 3.10.

Assumption 3.11

For each \(i\in S\), there exist constants \(\alpha _{i}\in \mathbb {R}\) and \(d>0\) such that the inequality

$$ \vert x \vert ^{2} \bigl\vert g(x,i) \bigr\vert ^{2}+(p-2) \bigl\vert x^{T}g(x,i) \bigr\vert ^{2}\leq \alpha _{i} \vert x \vert ^{4}+d \vert x \vert ^{2} $$

holds.

Lemma 3.12

Let Assumptions 2.6, 3.9, and 3.11 hold. Then, for \(h\in (0,\overline{h})\), the solutions generated by the stochastic θ method (2.2) obey

$$ \mathbb {E}\vert X_{k} \vert ^{p}\leq C\bigl(1+ \vert X_{0} \vert ^{p}\bigr),\quad k=0,1,2,\ldots , $$

where C is a constant that does not rely on k.

The proof is the same as that of Lemma 3.3.

Lemma 3.13

Let Assumptions 2.4, 3.8, and 3.10 hold. Then, for \(h\in (0,h^{*})\) and any two initial values \(x,y\in \mathbb{R}^{n}\) with \(x \neq y\), the solutions generated by the stochastic θ method (2.2) satisfy

$$ \mathbb{E} \bigl\vert X_{k}^{x,i}-X_{k}^{y,i} \bigr\vert ^{p} \leq C \vert x-y \vert ^{p}e^{- \frac{pkh\varepsilon }{16}}, $$

where C is a constant that does not rely on k and ε is defined in (3.11).

The proof is the same as that of Lemma 3.4.

Lemma 3.14

Let Assumptions 2.6, 3.9, and 3.11 hold. Then the solutions generated by the stochastic θ method (2.2) obey

$$ \mathbb {E}\Bigl(\sup_{0 \leq k \leq n} \vert X_{k} \vert ^{p} \Bigr)\leq C,\quad n=1,2,3, \ldots , $$

where C is a constant that may depend on n.

The proof is the same as that of Lemma 3.5.

Combining Lemmas 3.12, 3.13, 3.14 and using Chebyshev’s inequality, we derive the existence and uniqueness of the stationary distribution of the stochastic θ method with \(\theta \in [0, 1/2)\) from Theorem 2.2.

Simulations

We present two numerical examples in this section to support our theoretical results.

Example 4.1

Consider the SDEs

$$ dX(t)=-2X(t)\,dt+2\,dB(t) $$
(4.1)

and

$$ dX(t)=\bigl(-0.5X(t)-0.5X^{3}(t)\bigr)\,dt+dB(t). $$
(4.2)

Let \(R(t)\) be a Markov chain with state space \(S=\{1,2\}\) and generator

$$ Q= \begin{pmatrix} -5 & 5 \\ 1 & -1 \end{pmatrix}, $$

with initial values \(X(0)=2\), \(R(0)=1\).

Figures 1 and 2 show the empirical probability density functions of equation (4.1) and equation (4.2). We choose step size \(h=0.001\), \(T=10\), \(\theta = 1/2\), and simulate \(10\text{,}000\) paths in MATLAB. It can be seen from Figs. 1 and 2 that, as time advances, the density functions tend to a stable one, which indicates the existence of the stationary distribution. Figure 3 shows the empirical probability density functions after switching according to the transition probability matrix; the initial value, step size, and number of paths are the same as above. This figure shows that the switched system still has a stationary distribution. These three pictures look very similar, but the distribution trends are different. Figure 4 shows the difference of the three empirical probability density functions at the termination time \(T=10\).
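Equation (4.1) is an Ornstein–Uhlenbeck process whose stationary distribution is \(N(0,1)\) (variance \(\sigma ^{2}/(2\lambda )=4/4=1\)), which makes it a convenient check. The following Python sketch mirrors the experiment on a reduced scale (the paper's simulations were run in MATLAB; the step size and path count below are our own choices, made for speed). Since the drift is linear, the implicit part of the stochastic θ step is solved in closed form:

```python
import random, math

random.seed(0)
theta, h = 0.5, 0.01
lam, sigma = 2.0, 2.0            # drift -lam*x, diffusion sigma, as in (4.1)
n_steps, n_paths = 500, 2000     # terminal time T = n_steps*h = 5

# Stochastic theta step for the linear drift f(x) = -lam*x:
#   X_{k+1} = X_k + [theta*f(X_{k+1}) + (1-theta)*f(X_k)]*h + sigma*dB_k,
# solved explicitly for X_{k+1}:
a = (1.0 - (1.0 - theta) * lam * h) / (1.0 + theta * lam * h)
c = sigma / (1.0 + theta * lam * h)

finals = []
for _ in range(n_paths):
    x = 2.0                      # initial value X(0) = 2
    for _ in range(n_steps):
        x = a * x + c * random.gauss(0.0, math.sqrt(h))
    finals.append(x)

mean = sum(finals) / n_paths
var = sum((v - mean) ** 2 for v in finals) / n_paths
print(mean, var)                 # stationary law of (4.1) is N(0, 1)
```

For \(\theta =1/2\) the discrete chain \(X_{k+1}=aX_{k}+c\,\Delta B_{k}\) has stationary variance \(c^{2}h/(1-a^{2})\), which for this equation equals 1 exactly, for every step size.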

Figure 1

Equation (4.1)

Figure 2

Equation (4.2)

Figure 3

Empirical probability density functions after switching

Figure 4

Empirical probability density functions when \(T=10\)

We use the Kolmogorov–Smirnov (K–S) test to measure the differences between the empirical distributions. Figure 5 shows the differences between successive empirical density functions for the SDEs with Markovian switching. We can see from Fig. 5 that the difference between successive empirical distributions gradually decreases, which shows that they converge quickly to a stationary distribution.
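The two-sample K–S statistic behind Fig. 5 is simply the largest gap between two empirical distribution functions. A minimal stdlib Python implementation (our own sketch; MATLAB's kstest2 or SciPy's ks_2samp compute the same quantity):

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: sup_x |F_a(x) - F_b(x)|."""
    a, b = sorted(sample_a), sorted(sample_b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:                    # advance both past tied values
            v = a[i]
            while i < na and a[i] == v:
                i += 1
            while j < nb and b[j] == v:
                j += 1
        d = max(d, abs(i / na - j / nb))
    return d

print(ks_statistic([0, 1, 2], [10, 11, 12]))  # disjoint samples give 1.0
```

Feeding the samples from successive time points into ks_statistic produces a sequence of distances; its decay mirrors the behaviour reported in Fig. 5.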

Figure 5

The difference between two adjacent empirical probability density functions after switching

Example 4.2

Consider a two-dimensional SDE

$$ d \begin{pmatrix} X_{1}(t) \\ X_{2}(t) \end{pmatrix} = \begin{pmatrix} 2X_{1}(t)+X_{1}^{3}(t)+X_{1}(t)X_{2}(t) \\ 1+X_{2}(t)+X_{2}^{3}(t)+X_{2}(t)X_{1}(t) \end{pmatrix} \,dt+ \begin{pmatrix} X_{1}(t) \\ X_{2}(t) \end{pmatrix} \,dB(t), $$
(4.3)

$$ d \begin{pmatrix} X_{1}(t) \\ X_{2}(t) \end{pmatrix} = \begin{pmatrix} -X_{1}^{3}(t)-5X_{1}(t)+X_{2}(t)+5 \\ -X_{2}^{3}(t)-X_{1}(t)-5X_{2}(t)+5 \end{pmatrix} \,dt+ \begin{pmatrix} X_{1}(t)X_{2}(t)+3 \\ X_{1}(t)X_{2}(t)+3 \end{pmatrix} \,dB(t). $$
(4.4)

Let the initial value \(X_{1}(0)=2\), \(X_{2}(0)=3\), and the generator is

$$ Q= \begin{pmatrix} -1 & 1 \\ 6 & -6 \end{pmatrix}. $$

We set the step size to 0.01, \(T=2\), \(\theta =1/2\), and simulate \(10\text{,}000\) paths. Equation (4.3) has no stationary distribution, and its numerical solutions tend to infinity. In contrast, the second equation (4.4) has a stationary distribution. Besides, Fig. 6 shows the differences between successive empirical density functions for the SDEs with Markovian switching. It shows that the empirical distribution tends to a stationary one quite fast. That is, even though one of the equations has no stationary distribution, the switched system reaches a stable distribution quickly.
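The ergodic averaging that stabilizes the switched system is governed by the stationary distribution η of the Markov chain. For a two-state chain with generator Q, \(\eta =(q_{21},q_{12})/(q_{12}+q_{21})\), which gives \(\eta =(6/7,1/7)\) for the generator above. A Python sketch (our own illustration) simulating the discrete chain with one-step transition matrix \(I+hQ\) and checking the occupation frequencies:

```python
import random

random.seed(2)
h = 0.01
Q = [[-1.0, 1.0], [6.0, -6.0]]           # generator from Example 4.2
# One-step transition matrix of the discretized chain: P = I + h*Q.
P = [[1.0 + h * Q[0][0], h * Q[0][1]],
     [h * Q[1][0], 1.0 + h * Q[1][1]]]

eta = (Q[1][0] / (Q[0][1] + Q[1][0]),    # stationary distribution (6/7, 1/7)
       Q[0][1] / (Q[0][1] + Q[1][0]))

n_steps = 200_000
state = 0                                 # R(0) = 1, i.e. the first state
counts = [0, 0]
for _ in range(n_steps):
    counts[state] += 1
    if random.random() < P[state][1 - state]:
        state = 1 - state

occ = [c / n_steps for c in counts]
print(occ, eta)                           # occupation frequencies vs eta
```

The chain spends roughly 6/7 of its time in the first regime, so the dissipative equation (4.4) dominates the long-run average, consistent with the observed stationary behaviour of the switched system.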

Figure 6

The difference between two adjacent empirical probability density functions for the two-dimensional equations after switching

Conclusion

This paper studies the numerical stationary distributions generated by the stochastic θ method. We show that even when one or more equations in the switching system do not have a stationary distribution, the switched system still has a unique stationary distribution after a period of switching. Both the drift and diffusion coefficients are required to satisfy the global Lipschitz condition when \(\theta \in [0, 1/2)\), while some super-linear terms are allowed in the drift coefficient when \(\theta \in [1/2, 1]\). Two numerical examples are given to show the convergence of the numerical stationary distributions to their true counterparts. The figures also support our theoretical results.

Availability of data and materials

Not applicable.

References

1. Anderson, W.J.: Continuous-Time Markov Chain. Springer, New York (1991)
2. Bakhtin, Y., Hurth, T.: Invariant densities for dynamical systems with random switching. Nonlinearity 25, 2937–2952 (2012)
3. Bao, J., Shao, J., Yuan, C.: Approximation of invariant measures for regime-switching diffusions. Potential Anal. 44(4), 707–727 (2016)
4. Bardet, J.B., Guerin, H., Malrieu, F.: Long time behavior of diffusion with Markov switching. ALEA Lat. Am. J. Probab. Math. Stat. 7, 151–170 (2010)
5. Chen, L., Wu, F.: Almost sure exponential stability of the θ-method for stochastic differential equations. Stat. Probab. Lett. 82(9), 1669–1676 (2012)
6. Higham, D.J., Mao, X., Yuan, C.: Preserving exponential mean-square stability in the simulation of hybrid stochastic differential equations. Numer. Math. 108, 295–325 (2007)
7. Hutzenthaler, M., Jentzen, A., Kloeden, P.: Strong and weak divergence in finite time of Euler's method for stochastic differential equations with non-globally Lipschitz continuous coefficients. Proc. R. Soc. Lond., Ser. A, Math. Phys. Eng. Sci. 467, 1563–1576 (2011)
8. Jiang, Y., Liu, W., Weng, L.: Stationary distribution of the stochastic theta method for nonlinear stochastic differential equations. Numer. Algorithms 83, 1531–1553 (2020)
9. Kloeden, P.E., Platen, E.: Numerical Solution of Stochastic Differential Equations. Springer, New York (1992)
10. Li, X., Ma, Q., Yang, H., Yuan, C.: The numerical invariant measure of stochastic differential equations with Markovian switching. SIAM J. Numer. Anal. 56, 1435–1455 (2018)
11. Li, X., Mao, X., Yin, G.: Corrigendum to: Explicit numerical approximations for stochastic differential equations in finite and infinite horizons: truncation methods, convergence in pth moment and stability. IMA J. Numer. Anal. 39(4), 2168 (2019)
12. Liu, W., Mao, X.: Numerical stationary distribution and its convergence for nonlinear stochastic differential equations. J. Comput. Appl. Math. 276, 16–29 (2015)
13. Mao, X.: Stochastic Differential Equations and Applications, 2nd edn. Horwood, Chichester (2007)
14. Mao, X., Shen, Y., Gray, A.: Almost sure exponential stability of backward Euler–Maruyama discretization for hybrid stochastic differential equation. J. Comput. Appl. Math. 235, 1213–1226 (2011)
15. Mao, X., Yuan, C.: Stochastic Differential Equations with Markovian Switching. Imperial College Press, London (2006)
16. Mao, X., Yuan, C., Yin, G.: Numerical method for stationary distribution of stochastic differential equations with Markovian switching. J. Comput. Appl. Math. 174, 1–27 (2005)
17. Mattingly, J.C., Stuart, A.M., Higham, D.J.: Ergodicity for SDEs and approximations: locally Lipschitz vector fields and degenerate noise. Stoch. Process. Appl. 101(2), 185–232 (2002)
18. Meyn, S.P., Tweedie, R.L.: Stability of Markovian processes. I. Criteria for discrete-time chains. Adv. Appl. Probab. 24(3), 542–574 (1992)
19. Meyn, S.P., Tweedie, R.L.: Stability of Markovian processes. II. Continuous-time processes and sampled chains. Adv. Appl. Probab. 25(3), 487–517 (1993)
20. Pang, S., Deng, F., Mao, X.: Almost sure and moment exponential stability of Euler–Maruyama discretizations for hybrid stochastic differential equations. J. Comput. Appl. Math. 213, 127–141 (2008)
21. Pinsky, M., Scheutzow, M.: Some remarks and examples concerning the transience and recurrence of random diffusions. Ann. Inst. Henri Poincaré 28, 519–536 (1992)
22. Qiu, Q., Liu, W., Hu, L.: Asymptotic moment boundedness of the stochastic theta method and its application for stochastic differential equations. Adv. Differ. Equ. 2014, 310 (2014)
23. Rodkina, A., Schurz, H.: Almost sure asymptotic stability of drift-implicit θ-methods for bilinear ordinary stochastic differential equations in \(R^{1}\). J. Comput. Appl. Math. 180(1), 13–31 (2005)
24. Rosenblatt, M.: Markov Processes, Structure and Asymptotic Behavior. Springer, New York (1971)
25. Shao, J.: Ergodicity of regime-switching diffusions in Wasserstein distances. Stoch. Process. Appl. 125, 739–758 (2015)
26. Shao, J., Xi, F.: Strong ergodicity of the regime-switching diffusion processes. Stoch. Process. Appl. 123, 3903–3918 (2013)
27. Shardlow, T., Stuart, A.M.: A perturbation theory for ergodic Markov chains and application to numerical approximations. SIAM J. Numer. Anal. 37(4), 1120–1137 (2000)
28. Weng, L., Liu, W.: Invariant measures of the Milstein method for stochastic differential equations with commutative noise. Appl. Math. Comput. 358, 169–176 (2019)
29. Yin, G., Zhu, C.: Hybrid Switching Diffusions: Properties and Applications. Springer, New York (2010)
30. Yuan, C., Mao, X.: Stability in distribution of numerical solutions for stochastic differential equations. Stoch. Anal. Appl. 22(5), 1133–1150 (2004)
31. Yuan, C., Mao, X.: Stationary distributions of Euler–Maruyama-type stochastic difference equations with Markovian switching and their convergence. J. Differ. Equ. Appl. 11(1), 29–48 (2005)
32. Zhou, S.: Strong convergence and stability of backward Euler–Maruyama scheme for highly nonlinear hybrid stochastic differential delay equation. Calcolo 52, 445–473 (2015)
33. Zhu, C., Yin, G.: Asymptotic properties of hybrid diffusion systems. SIAM J. Control Optim. 46, 1155–1179 (2007)
34. Zong, X., Wu, F.: Choice of θ and mean-square exponential stability in the stochastic theta method of stochastic differential equations. J. Comput. Appl. Math. 255, 837–847 (2014)
35. Zong, X., Wu, F., Huang, C.: Theta schemes for SDDEs with non-globally Lipschitz continuous coefficients. J. Comput. Appl. Math. 278, 258–277 (2015)
36. Zong, X., Wu, F., Huang, C.: The moment exponential stability criterion of nonlinear hybrid stochastic differential equations and its discrete approximations. Proc. R. Soc. Edinb., Sect. A 146(6), 1303–1328 (2016)


Acknowledgements

Not applicable.

Funding

The authors would like to thank the Fundamental Research Funds for the Central Universities, 2232020D-37, for the financial support.

Author information

Contributions

YJ made the main contribution to this manuscript. LH and JL supplied a lot of suggestions in theoretical research and writing improvement. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jianqiu Lu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Appendix

Proof of Lemma 3.3

The proof of equation (3.3) is the following:

$$\begin{aligned}& \mathbb {E}\bigl(\vartheta _{k}^{2}(R_{k}, \theta )|\mathcal{F}_{t_{k}}\bigr) \\& \quad \geq \biggl\{ 4\bigl\langle X_{k}, f(X_{k},R_{k})h \bigr\rangle ^{2}+ \bigl\vert g(X_{k},R_{k}) \Delta B_{k} \bigr\vert ^{4}+\frac{4}{\theta ^{2}} \bigl\langle X_{k},g(X_{k},R_{k}) \Delta B_{k}\bigr\rangle ^{2} \\& \qquad {}+ \frac{4(1-\theta )^{2}}{\theta ^{2}}\bigl\langle F_{k},g(X_{k},R_{k}) \Delta B_{k}\bigr\rangle ^{2}+4\bigl\langle X_{k}, f(X_{k},R_{k})h\bigr\rangle \bigl\vert g(X_{k},R_{k}) \Delta B_{k} \bigr\vert ^{2} \\& \qquad {}+ \frac{8}{\theta }\bigl\langle X_{k}, f(X_{k},R_{k})h \bigr\rangle \bigl\langle X_{k},g(X_{k},R_{k}) \Delta B_{k}\bigr\rangle \\& \qquad {}- \frac{8(1-\theta )}{\theta }\bigl\langle X_{k}, f(X_{k},R_{k})h \bigr\rangle \bigl\langle F_{k},g(X_{k},R_{k}) \Delta B_{k}\bigr\rangle \\& \qquad {}+\frac{4}{\theta } \bigl\langle X_{k},g(X_{k},R_{k})\Delta B_{k}\bigr\rangle \bigl\vert g(X_{k},R_{k}) \Delta B_{k} \bigr\vert ^{2} \\& \qquad {}- \frac{4(1-\theta )}{\theta }\bigl\langle F_{k},g(X_{k},R_{k}) \Delta B_{k} \bigr\rangle \bigl\vert g(X_{k},R_{k}) \Delta B_{k} \bigr\vert ^{2} \\& \qquad {}- \frac{8(1-\theta )}{\theta ^{2}}\bigl\langle X_{k},g(X_{k},R_{k}) \Delta B_{k} \bigr\rangle \bigl\langle F_{k},g(X_{k},R_{k}) \Delta B_{k} \bigr\rangle \biggr\} \frac{1}{(1+ \vert F_{k} \vert ^{2})^{2}} \\& \quad \geq \frac{4\langle X_{k}, f(X_{k},R_{k})h\rangle \vert g(X_{k},R_{k}) \vert ^{2}h-\frac{8(1-\theta )}{\theta ^{2}}\langle X_{k},g(X_{k},R_{k})\rangle \langle F_{k},g(X_{k},R_{k}) \rangle h}{(1+ \vert F_{k} \vert ^{2})^{2}}. \end{aligned}$$

 □

Proof of Lemma 3.4

The proof of equation (3.8) is the following:

$$\begin{aligned}& \mathbb {E}\bigl(\vartheta _{k}^{2}(R_{k}, \theta )|\mathcal{F}_{t_{k}}\bigr) \\& \quad \geq \biggl\{ 4\bigl\langle X_{k}^{x,i}-X_{k}^{y,i}, f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)h \bigr\rangle ^{2}+ \bigl\vert g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \Delta B_{k} \bigr\vert ^{4} \\& \qquad {}+ \frac{4}{\theta ^{2}}\bigl\langle X_{k}^{x,i}-X_{k}^{y,i},g \bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \Delta B_{k}\bigr\rangle ^{2} \\& \qquad {}+ \frac{4(1-\theta )^{2}}{\theta ^{2}} \bigl\langle F_{k},g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \Delta B_{k}\bigr\rangle ^{2} \\& \qquad {}+ 4\bigl\langle X_{k}^{x,i}-X_{k}^{y,i}, f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)h \bigr\rangle \bigl\vert g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)\Delta B_{k} \bigr\vert ^{2} \\& \qquad {}+ \frac{8}{\theta }\bigl\langle X_{k}^{x,i}-X_{k}^{y,i}, f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)h \bigr\rangle \bigl\langle X_{k}^{x,i}-X_{k}^{y,i},g \bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \Delta B_{k}\bigr\rangle \\& \qquad {}- \frac{8(1-\theta )}{\theta }\bigl\langle X_{k}^{x,i}-X_{k}^{y,i}, f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)h \bigr\rangle \bigl\langle F_{k},g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \Delta B_{k}\bigr\rangle \\& \qquad {}+ \frac{4}{\theta }\bigl\langle X_{k}^{x,i}-X_{k}^{y,i},g \bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \Delta B_{k}\bigr\rangle \bigl\vert g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \Delta B_{k} \bigr\vert ^{2} \\& \qquad {}- \frac{4(1-\theta )}{\theta }\bigl\langle F_{k},g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \Delta B_{k}\bigr\rangle \bigl\vert 
g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \Delta B_{k} \bigr\vert ^{2} \\& \qquad {}- \frac{8(1-\theta )}{\theta ^{2}}\bigl\langle X_{k}^{x,i}-X_{k}^{y,i},g \bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \Delta B_{k}\bigr\rangle \\& \qquad {}\times\bigl\langle F_{k},g \bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \Delta B_{k} \bigr\rangle \biggr\} \frac{1}{(1+ \vert F_{k} \vert ^{2})^{2}} \\& \quad \geq \biggl\{ 4\bigl\langle X_{k}^{x,i}-X_{k}^{y,i}, f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)h \bigr\rangle \bigl\vert g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \bigr\vert ^{2}h \\& \qquad {}- \frac{8(1-\theta )}{\theta ^{2}}\bigl\langle X_{k}^{x,i}-X_{k}^{y,i},g \bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \bigr\rangle \bigl\langle F_{k},g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \bigr\rangle h\biggr\} \\& \qquad {}\times\frac{1}{(1+ \vert F_{k} \vert ^{2})^{2}}. \end{aligned}$$
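Taking the conditional expectation eliminates most of the cross terms in the expansion above: the Brownian increment $\Delta B_{k}$ is independent of $\mathcal{F}_{t_{k}}$, so the terms odd in $\Delta B_{k}$ have zero conditional expectation, while the quadratic terms contribute a factor $h$. Stated for a scalar Brownian motion, the moment identities being used are

$$\mathbb {E}(\Delta B_{k}|\mathcal{F}_{t_{k}})=0,\qquad \mathbb {E}\bigl( \vert \Delta B_{k} \vert ^{2}|\mathcal{F}_{t_{k}}\bigr)=h,\qquad \mathbb {E}\bigl( \vert \Delta B_{k} \vert ^{4}|\mathcal{F}_{t_{k}}\bigr)=3h^{2},$$

so the fourth-moment terms are of order $h^{2}$ and only the two $O(h)$ terms kept in the last inequality survive at leading order.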

The proof of equation (3.9) is as follows:

$$\begin{aligned}& \mathbb {E}\bigl(\bigl(1+ \vert F_{k+1} \vert ^{2} \bigr)^{\frac{p}{2}}|\mathcal{F}_{t_{k}}\bigr) \\& \quad \leq \bigl(1+ \vert F_{k} \vert ^{2} \bigr)^{\frac{p}{2}} \\& \qquad {}\times \biggl\{ 1+\frac{p}{2}\biggl[ \frac{2\langle X_{k}^{x,i}-X_{k}^{y,i}, f(X_{k}^{x,i}, R_{k}^{i})-f(X_{k}^{y,i},R_{k}^{i})h\rangle + \vert g(X_{k}^{x,i},R_{k}^{i})-g(X_{k}^{y,i},R_{k}^{i}) \vert ^{2}h}{1+ \vert F_{k} \vert ^{2}}\biggr] \\& \qquad {}+ \frac{p(p-2)}{8(1+ \vert F_{k} \vert ^{2})^{2}} \biggl[4\bigl\langle X_{k}^{x,i}-X_{k}^{y,i}, f\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k}^{y,i},R_{k}^{i} \bigr)h\bigr\rangle \bigl\vert g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \bigr\vert ^{2}h \\& \qquad {}- \frac{8(1-\theta )}{\theta ^{2}}\bigl\langle X_{k}^{x,i}-X_{k}^{y,i},g \bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \bigr\rangle \bigl\langle F_{k},g\bigl(X_{k}^{x,i},R_{k}^{i} \bigr)-g\bigl(X_{k}^{y,i},R_{k}^{i} \bigr) \bigr\rangle h\biggr] \\& \qquad {}+\frac{p(p-2)(p-4)}{48}\overline{C}h^{2} \biggr\} \\& \quad \leq \bigl(1+ \vert F_{k} \vert ^{2} \bigr)^{\frac{p}{2}}\biggl\{ 1+\frac{p}{2}\bigl[(2\mu _{r_{k}}+ \sigma )h+\alpha _{r_{k}}h+C_{1}h^{2} \bigr]\biggr\} \\& \quad \leq \bigl(1+ \vert F_{k} \vert ^{2} \bigr)^{\frac{p}{2}}\biggl\{ 1+\frac{p}{2}\bigl[(2\mu _{r_{k}}+ \alpha _{r_{k}})h+\sigma h+C_{1}h^{2} \bigr]\biggr\} . \end{aligned}$$
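The coefficients $\frac{p}{2}$, $\frac{p(p-2)}{8}$, and $\frac{p(p-2)(p-4)}{48}$ in the first inequality come from the Taylor expansion of $u\mapsto (1+u)^{\frac{p}{2}}$ about $u=0$:

$$(1+u)^{\frac{p}{2}}=1+\frac{p}{2}u+\frac{p(p-2)}{8}u^{2}+ \frac{p(p-2)(p-4)}{48}u^{3}+\cdots,$$

applied, informally, with $u=(\vert F_{k+1} \vert ^{2}- \vert F_{k} \vert ^{2})/(1+ \vert F_{k} \vert ^{2})$; the higher-order remainder is absorbed into the $\overline{C}h^{2}$ term.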

The proof of equation (3.10) is as follows:

$$\begin{aligned} \vert F_{k+1} \vert ^{2}={}& \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i}-\theta h \bigl(f\bigl(X_{k+1}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k+1}^{y,i},R_{k}^{i} \bigr)\bigr) \bigr\vert ^{2} \\ ={}& \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i} \bigr\vert ^{2}-2\theta h\bigl\langle X_{k+1}^{x,i}-X_{k+1}^{y,i}, f\bigl(X_{k+1}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k+1}^{y,i},R_{k}^{i} \bigr) \bigr\rangle \\ &{}+\theta ^{2}h^{2} \bigl\vert f\bigl(X_{k+1}^{x,i},R_{k}^{i} \bigr)-f\bigl(X_{k+1}^{y,i},R_{k}^{i} \bigr) \bigr\vert ^{2} \\ \geq {}& \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i} \bigr\vert ^{2}-2\theta h\bigl(\mu _{r_{k}} \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i} \bigr\vert ^{2}+a\bigr) \\ \geq {}& \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i} \bigr\vert ^{2}-2\theta h \mu _{r_{k}} \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i} \bigr\vert ^{2}-2a \theta h \\ \geq {}& \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i} \bigr\vert ^{2}-2\theta h \mu _{r_{k}} \bigl\vert X_{k+1}^{x,i}-X_{k+1}^{y,i} \bigr\vert ^{2}. \end{aligned}$$
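The expansion above is the elementary identity $\vert A-\theta h B \vert ^{2}= \vert A \vert ^{2}-2\theta h\langle A,B\rangle +\theta ^{2}h^{2} \vert B \vert ^{2}$, written with $A=X_{k+1}^{x,i}-X_{k+1}^{y,i}$ and $B=f(X_{k+1}^{x,i},R_{k}^{i})-f(X_{k+1}^{y,i},R_{k}^{i})$. Since $\theta ^{2}h^{2} \vert B \vert ^{2}\geq 0$,

$$\vert A-\theta h B \vert ^{2}\geq \vert A \vert ^{2}-2\theta h\langle A,B \rangle \geq \vert A \vert ^{2}-2\theta h\bigl(\mu _{r_{k}} \vert A \vert ^{2}+a \bigr),$$

where the last step uses the one-sided Lipschitz-type bound $\langle A,B\rangle \leq \mu _{r_{k}} \vert A \vert ^{2}+a$ that appears in the display.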

 □

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Jiang, Y., Hu, L. & Lu, J. The stochastic θ method for stationary distribution of stochastic differential equations with Markovian switching. Adv Differ Equ 2020, 667 (2020). https://doi.org/10.1186/s13662-020-03129-3

MSC

  • 65C30
  • 60H10

Keywords

  • Stochastic θ method
  • SDEs with Markovian switching
  • Numerical solutions
  • Stationary distribution