The stochastic θ method for stationary distribution of stochastic differential equations with Markovian switching

*Correspondence: jianqiulu@dhu.edu.cn. 2Department of Statistics, Donghua University, Shanghai, China. Full list of author information is available at the end of the article.

Abstract
In this paper, the stationary distribution of stochastic differential equations (SDEs) with Markovian switching is approximated by the numerical solutions generated by the stochastic θ method. We first prove the existence and uniqueness of the stationary distribution of the numerical solutions. Then, the convergence of the numerical stationary distribution to the underlying one is discussed. Numerical simulations are conducted to support the theoretical results.


Introduction
In recent years, a lot of research has been devoted to the stationary distributions of SDEs. Since SDEs can rarely be solved explicitly, studying the stationary distributions of their numerical solutions has become essential: stationary distributions of numerical solutions are widely used to approximate those of the underlying equations. Two books provide the relevant background: [9] presents the theory of numerical solutions of SDEs, and [13] introduces the basic theory of SDEs in detail. There are plenty of papers on this topic. In [20], almost sure and moment exponential stability of Euler-Maruyama discretizations for hybrid stochastic differential equations was studied. The Milstein method was discussed in [28]. The truncated Euler-Maruyama method was used to approximate stationary distributions of SDEs with and without Markovian switching in [11]. The numerical stationary distribution and its convergence for the semi-implicit Euler-Maruyama method were studied in [12]. The stationary distribution of the stochastic θ method for nonlinear SDEs was then discussed in [8]. Building on that work, we consider the case where some equations in the switching system have no stationary distribution of their own, yet, after a period of switching, the system still has a unique stationary distribution.
When emergencies such as earthquakes, debris flows, and other natural disasters affect the rate of population growth, simple SDEs are not sufficient to model these problems. Therefore, SDEs with Markovian switching were introduced to handle them.
In a series of papers [3,7,16,17,27,30,31], approximations of stationary distributions of SDEs with different types of Markovian switching were investigated by the Euler-Maruyama method. Both the drift and diffusion coefficients of the SDEs in these papers satisfy the global Lipschitz condition. Although the classical Euler-Maruyama method is easy to compute and implement, the numerical solutions of SDEs with super-linear coefficients may diverge to infinity in finite time. To tackle this drawback, Li used the backward Euler-Maruyama method to establish a numerical invariant measure under a one-sided Lipschitz condition for SDEs with Markovian switching, see [10].
The stochastic θ method is an extension of both the Euler-Maruyama method and the backward Euler-Maruyama method, and the corresponding proofs are more involved. In [23], the stability of the stochastic θ method with non-random variable step sizes for bilinear, nonautonomous, homogeneous test equations was investigated. Its ability to preserve the almost sure and mean square exponential stability of hybrid SDEs was discussed in [5] and [34]. In [22], the asymptotic boundedness of the stochastic θ method was studied. To the best of the authors' knowledge, the stochastic θ method has not yet been used to study the stationary distributions of SDEs with Markovian switching. We investigate the numerical stationary distribution obtained by the stochastic θ method.
Throughout the paper, we mainly study the stationary distributions, and the required assumptions on the coefficients depend on the choice of θ. The rest of this paper is organized as follows. Section 2 introduces mathematical preliminaries. We obtain the existence and uniqueness of the stationary distribution of the numerical solutions in Sect. 3. Section 4 shows numerical simulations to verify our theoretical results. Section 5 concludes the paper.
We highlight the main contributions of this paper as follows:
• When θ ∈ [0, 1/2), both the drift and diffusion coefficients satisfy the global Lipschitz condition; when θ ∈ [1/2, 1], the drift coefficient only satisfies a one-sided Lipschitz condition. Under these assumptions, it is difficult to prove the existence of stationary distributions of SDEs with Markovian switching by the Euler-Maruyama method or the backward Euler-Maruyama method.
• We study the stationary distributions of SDEs with Markovian switching by the stochastic θ method, which generalizes both the Euler-Maruyama method and the backward Euler-Maruyama method.
• Compared to the classical Euler method, the numerical results of the stochastic θ method are more precise. Compared to the backward Euler-Maruyama method, the stochastic θ method can effectively save computational time and cost. In conclusion, the stochastic θ method is a more efficient numerical method to approximate the stationary distribution of the exact solutions.
Let us begin to develop our new theory on the stationary distributions of SDEs with Markovian switching.

Mathematical preliminaries
In this paper, let (Ω, F, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions, that is, it is right-continuous and increasing while F_0 contains all P-null sets. Let |·| denote the Euclidean norm in R^n. The transpose of a vector or matrix M is denoted by M^T, and the trace norm of a matrix M is denoted by |M| = sqrt(trace(M^T M)).
To keep symbols simple, let B(t) = (B^1_t, ..., B^d_t)^T, t ≥ 0, be a d-dimensional Brownian motion. Let R(t), t ≥ 0, denote a continuous-time Markov chain on the probability space taking values in a finite state space S = {1, 2, ..., N} with the generator Q = (γ_ij)_{N×N} given by

P(R(t + Δ) = j | R(t) = i) = γ_ij Δ + o(Δ) if i ≠ j, and 1 + γ_ii Δ + o(Δ) if i = j,

as Δ → 0, where γ_ij ≥ 0 for i ≠ j and γ_ii = -Σ_{j≠i} γ_ij. We assume that the generator Q is irreducible and conservative (see [1]). So the Markov chain {R(t)}_{t≥0} has a unique stationary distribution η := (η_1, η_2, ..., η_N) ∈ R^{1×N}, which can be determined by solving the linear equation

ηQ = 0 subject to Σ_{j=1}^N η_j = 1 and η_j > 0 for each j ∈ S.

We consider the n-dimensional stochastic differential equation with Markovian switching (SDEwMS) of the Itô type

dx(t) = f(x(t), R(t)) dt + g(x(t), R(t)) dB(t), t ≥ 0, (2.1)

with the initial value x(0) = x_0 ∈ R^n and R(0) = i ∈ S, where f: R^n × S → R^n and g: R^n × S → R^{n×d}.

Given a step size h > 0, let R_k = R(kh), k ≥ 0. Then {R_k, k = 0, 1, ...} is a discrete Markov chain with one-step transition probability matrix

P(h) = (P_ij(h))_{N×N} = exp(hQ),

where Q is the generator of R_k, k ≥ 0. The chain {R_k, k = 0, 1, ...} can be simulated as follows: let R_0 = i and generate a pseudo-random number τ_1 from the uniform (0, 1) distribution. Define

R_1 = s if τ_1 ∈ [Σ_{j=1}^{s-1} P_ij(h), Σ_{j=1}^{s} P_ij(h)) for some s ∈ S,

where we set Σ_{j=1}^{0} P_ij(h) = 0. In other words, the probability of state s being chosen is given by P(R_1 = s) = P_is(h). Generally, after the computations of R_0, R_1, ..., R_{k-1}, generate a pseudo-random number τ_k from the uniform (0, 1) distribution and define R_k in the same way. This procedure can be carried out independently to obtain more trajectories.

Now we can define the stochastic θ method for SDEs with Markovian switching. Given a step size h > 0, let t_k = kh and R_k = R(t_k) for k ≥ 0. The stochastic θ method applied to SDEwMS (2.1) is defined by

X_{k+1} = X_k + [θ f(X_{k+1}, R_k) + (1 - θ) f(X_k, R_k)] h + g(X_k, R_k) ΔB_k, (2.2)

where ΔB_k = B(t_{k+1}) - B(t_k) is the Brownian motion increment and θ ∈ [0, 1] is a fixed parameter. Set Z_k = (X_k, R_k). Then {Z_k}_{k≥0} is a Markov process. We use the notation Z^{x,i}_k = (X^{x,i}_k, R^i_k) to highlight the initial values. Let P_k((x, i), · × ·) be the probability measure induced by Z^{x,i}_k, namely

P_k((x, i), A × B) = P(Z^{x,i}_k ∈ A × B) for any Borel set A ⊂ R^n and any B ⊂ S.

Denote the family of all probability measures on R^n × S by P(R^n × S).
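The simulation procedure above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the 2-state generator Q, the two scalar regimes f and g, the initial data, and the fixed-point solver for the implicit part of (2.2) are all assumptions made for the example.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Hypothetical 2-state generator Q (off-diagonal entries nonnegative, rows sum to zero)
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
h = 0.01
P = expm(h * Q)  # one-step transition matrix P(h) = exp(hQ)

def next_state(i, u):
    """R_{k+1} = s when u falls in [sum_{j<s} P[i, j], sum_{j<=s} P[i, j])."""
    s = int(np.searchsorted(np.cumsum(P[i]), u, side="right"))
    return min(s, P.shape[0] - 1)  # guard against floating-point round-off near u = 1

# Assumed scalar drift/diffusion for the two regimes (not from the paper's examples)
def f(x, i):
    return (-0.5 * x - 0.5 * x**3) if i == 0 else -x

def g(x, i):
    return 1.0 if i == 0 else 0.5

def theta_step(x, i, theta, h, dB, iters=50):
    """One step of (2.2); the implicit drift part is solved by fixed-point iteration."""
    rhs = x + (1 - theta) * h * f(x, i) + g(x, i) * dB
    y = x
    for _ in range(iters):
        y = rhs + theta * h * f(y, i)
    return y

x, i = 1.0, 0
for _ in range(1000):
    dB = rng.normal(0.0, np.sqrt(h))
    x = theta_step(x, i, 0.5, h, dB)
    i = next_state(i, rng.random())
```

For θ = 0 the step is explicit and no iteration is needed; when θh is small, the fixed-point map is a contraction, so a few iterations suffice.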
Define by L the family of mappings F: R^n × S → R satisfying

|F(x, j) - F(y, l)| ≤ |x - y| + |j - l| and |F(x, j)| ≤ 1

for any x, y ∈ R^n and j, l ∈ S. For P_1, P_2 ∈ P(R^n × S), define the metric d_L by

d_L(P_1, P_2) = sup_{F ∈ L} | Σ_{j∈S} ∫_{R^n} F(x, j) P_1(dx, j) - Σ_{j∈S} ∫_{R^n} F(x, j) P_2(dx, j) |.

The weak convergence of probability measures can be described in terms of the metric d_L. That is, a sequence of probability measures {P_k}_{k≥1} in P(R^n × S) converges weakly to a probability measure P ∈ P(R^n × S) if and only if lim_{k→∞} d_L(P_k, P) = 0.
Then we define the stationary distribution for {X k } k≥0 by using the concept of weak convergence.

Definition 2.1
For any initial value (x, i) ∈ R^n × S and a given step size h > 0, the numerical solutions {X_k}_{k≥0} are said to have a stationary distribution Π_h ∈ P(R^n × S) if the k-step transition probability measures P_k((x, i), · × ·) converge weakly to Π_h as k → ∞ for every (x, i), that is,

lim_{k→∞} d_L(P_k((x, i), · × ·), Π_h) = 0.

In [31], the authors presented a very general theory on the existence and uniqueness of the stationary distribution for any one-step numerical method. We adapt it here and state the theory for the stochastic θ method as follows.

Theorem 2.2 Assume that the following three requirements are fulfilled.
• For any ε > 0 and any initial value (x, i) ∈ R^n × S, there exists a constant R = R(ε, x, i) > 0 such that

P(|X^{x,i}_k| ≥ R) < ε for any k ≥ 0.

• For any ε > 0 and any compact subset K of R^n, there exists a positive integer k* = k*(ε, K) such that

P(|X^{x,i}_k - X^{y,i}_k| < ε) ≥ 1 - ε for any k ≥ k* and any (x, y, i) ∈ K × K × S.

• For any ε > 0, any integer k ≥ 1, and any compact subset K of R^n, there exists δ = δ(ε, k, K) > 0 such that

d_L(P_k((x, i), · × ·), P_k((y, i), · × ·)) < ε whenever |x - y| < δ and (x, y, i) ∈ K × K × S.

Then the numerical solutions {X_k}_{k≥0} generated by the stochastic θ method have a unique stationary distribution Π_h.
Remark 2.3 Although the theory is very general, its conditions are stated in terms of probability and are not easy to check directly. In this paper, we impose some coefficient-related assumptions and prove the existence and uniqueness of the stationary distribution of the solutions generated by the stochastic θ method under those assumptions. Now we present the assumptions on the drift and diffusion coefficients.

Assumption 2.4
Assume that there exists a constant μ_i such that, for any x, y ∈ R^n and i ∈ S,

⟨x - y, f(x, i) - f(y, i)⟩ ≤ μ_i |x - y|^2.

Assumption 2.5 Assume that there exists a constant σ > 0 such that, for any x, y ∈ R^n and i ∈ S,

|g(x, i) - g(y, i)|^2 ≤ σ |x - y|^2.

We give two new assumptions as follows.

Assumption 2.6
There exist constants μ_i and a > 0 such that, for any x ∈ R^n and i ∈ S,

⟨x, f(x, i)⟩ ≤ μ_i |x|^2 + a.

Assumption 2.7 There exist positive constants σ and b such that, for any x ∈ R^n and i ∈ S,

|g(x, i)|^2 ≤ σ |x|^2 + b.

Under Assumption 2.4, for θ ∈ [1/2, 1] and any step size h with θhμ_i < 1 for every i ∈ S, the scheme (2.2) can be written as

X_{k+1} - θ h f(X_{k+1}, R_k) = X_k + (1 - θ) h f(X_k, R_k) + g(X_k, R_k) ΔB_k,

and the one-sided Lipschitz condition guarantees that this implicit equation has a unique solution X_{k+1}. Thus, the stochastic θ method (2.2) is well defined.
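The well-definedness of the implicit step can be illustrated numerically for a scalar example. In this sketch the drift f, the parameters θ and h, and the bracketing interval are assumptions made for illustration: since G(y) = y - θhf(y) is strictly increasing when θhμ_i < 1, the implicit equation has exactly one root, which a bracketing solver finds.

```python
from scipy.optimize import brentq

theta, h = 1.0, 0.1  # backward Euler case, assumed step size

def f(x):
    """One-sided Lipschitz drift with mu = -0.5 (example, not from the paper)."""
    return -0.5 * x - 0.5 * x**3

def implicit_step(rhs):
    """Solve X_{k+1} - theta*h*f(X_{k+1}) = rhs; G is strictly increasing, so the root is unique."""
    G = lambda y: y - theta * h * f(y) - rhs
    return brentq(G, -1e6, 1e6)

x_next = implicit_step(1.0)  # unique solution of y + 0.05*y + 0.05*y**3 = 1
```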

Lemma 2.9 For any Borel set A ⊂ R^n, any x ∈ R^n, and any i, j ∈ S,

P(X^{x,i}_1 ∈ A, R_1 = j) = P(X_{k+1} ∈ A, R_{k+1} = j | X_k = x, R_k = i).

Proof Because B_k and B_0 are identical in probability law, comparing the defining equations of X_{k+1} and X_1 and using Lemma 2.8, we see that (X_{k+1}, R_{k+1}) under X_k = x, R_k = i and (X_1, R_1) under X_0 = x, R_0 = i are identical in probability law. Therefore, the assertion holds.

For any x ∈ R^n, i ∈ S, and any Borel set A ⊂ R^n, define the one-step transition probability

P((x, i), A × {j}) = P(Z^{x,i}_1 ∈ A × {j}).

To prove Theorem 2.11, we cite the following classical result (see, for example, Lemma 9.2 on page 87 of [13]).

Lemma 2.10
Let h(x, ω) be a scalar bounded measurable random function of x, independent of F_s. Let ζ be an F_s-measurable random variable. Then

E[h(ζ, ω) | F_s] = H(ζ),

where H(x) = E h(x, ω).

Theorem 2.11
The solution {Z k } k≥0 generated by the stochastic θ method (2.2) is a homogeneous Markov process with transition probability kernel P((x, i), A × {j}).
Proof The homogeneity follows from Lemma 2.9, so we only need to show the Markov property; this is a standard application of Lemma 2.10. The proof is complete.
Therefore, we see that P(·, ·) is the one-step transition probability and P k (·, ·) is the k-step transition probability.
We state a simple version of the discrete-type Gronwall inequality in the next lemma (see, for example, [15, Theorem 2.5 on page 56]).
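For the reader's convenience, a commonly used simple form of the discrete Gronwall inequality is the following (this is the standard textbook statement and may differ in detail from the exact formulation in [15]):

```latex
\text{If } u_k \ge 0 \text{ and } u_k \le C + M \sum_{j=0}^{k-1} u_j \ \text{ for all } k \ge 0,
\quad \text{then} \quad u_k \le C\,(1+M)^k \ \text{ for all } k \ge 0.
```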

Main results
In this section, we present the main results of this paper. Since different choices of the parameter θ in (2.2) impose different requirements on the coefficients f and g, we divide this section into three parts. First, we discuss the case θ ∈ [1/2, 1] in Sect. 3.1. The convergence of the numerical stationary distribution to its underlying counterpart is discussed in Sect. 3.2. The situation θ ∈ [0, 1/2) is presented in Sect. 3.3.

θ ∈ [1/2, 1]
To prove Lemma 3.3 and Lemma 3.4, let us introduce two assumptions.

Assumption 3.1 For any i ∈ S, there exists a constant α_i ∈ R such that the inequality ... holds.

Assumption 3.2 For any i ∈ S, there exist a constant α_i ∈ R and a constant β > 0 such that the inequality ... holds.

For simplicity, define ξ := (α_1, α_2, ..., α_N)^T, where we assume ηξ < 0. Now we are ready to present the two main lemmas in this subsection.
where C is a constant that does not depend on k.

The convergence
Given Assumptions 2.4 to 3.2, the convergence of the numerical stationary distribution to the underlying stationary distribution is discussed in this subsection.
Recall that the probability measure induced by the numerical solution X^{x,i}_k is denoted by P_k((x, i), · × ·); similarly, we denote the probability measure induced by the underlying solution x(t) by P_t((x, i), · × ·).

Lemma 3.6 Let Assumptions 2.4 to 3.2 hold and fix any initial value (x, i) ∈ R^n × S. Then, for any given T_1 > 0 and ε > 0, there exists a sufficiently small h* > 0 such that

d_L(P_{kh}((x, i), · × ·), P_k((x, i), · × ·)) < ε

provided that h < h* and kh ≤ T_1.
The result can be derived from the finite time strong convergence of the stochastic θ method [35]. Now we are ready to show that the numerical stationary distribution converges to the underlying stationary distribution as the time step diminishes.

Proof Fix any initial value (x, i) ∈ R^n × S and let ε > 0 be an arbitrary real number. Due to the existence and uniqueness of the stationary distribution π of the underlying equation, there exists T* > 0 such that, for any t > T*,

d_L(P_t((x, i), · × ·), π) < ε/3.

Similarly, by Theorem 2.2, there exists a pair of h** > 0 and T** > 0 such that

d_L(P_k((x, i), · × ·), Π_h) < ε/3

for all h < h** and kh > T**. Let T = max(T*, T**). By Lemma 3.6 there exists h* > 0 such that, for any h < h* and kh ≤ T + 1,

d_L(P_{kh}((x, i), · × ·), P_k((x, i), · × ·)) < ε/3.

Therefore, for any h < min(h*, h**), set k = [T/h] + 1, so that T < kh ≤ T + 1; the assertion then follows from the triangle inequality.
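The concluding triangle-inequality step can be written out explicitly; here π denotes the underlying stationary distribution and Π_h the numerical one, with each of the three terms made small by the preceding estimates:

```latex
d_L(\Pi_h, \pi)
  \le d_L\bigl(\Pi_h,\, P_k((x,i),\cdot\times\cdot)\bigr)
    + d_L\bigl(P_k((x,i),\cdot\times\cdot),\, P_{kh}((x,i),\cdot\times\cdot)\bigr)
    + d_L\bigl(P_{kh}((x,i),\cdot\times\cdot),\, \pi\bigr)
  < \varepsilon .
```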

θ ∈ [0, 1/2)
In this case, we need to add global Lipschitz and linear growth conditions on the drift coefficient; recall that in Sect. 3.1 only one-sided Lipschitz conditions were needed.

Assumption 3.8
Assume that there exists a constant K > 0 such that, for any x, y ∈ R^n and i ∈ S,

|f(x, i) - f(y, i)| ≤ K |x - y|.

Assumption 3.9 There exist positive constants κ and c such that, for any x ∈ R^n and i ∈ S,

|f(x, i)|^2 ≤ κ |x|^2 + c.

In addition, we require the following two assumptions.
Assumption 3.10 For any i ∈ S, there exists a constant α_i ∈ R such that the inequality ... holds.
For simplicity, define ξ := (α_1, α_2, ..., α_N)^T, where we assume ηξ < 0. The next assumption can be derived from Assumption 3.10.

Assumption 3.11
For any i ∈ S, there exist a constant α_i ∈ R and a constant d > 0 such that the inequality ... holds.
where C is a constant that does not depend on k.
The proof is the same as that of Lemma 3.3.

Lemma 3.13
Let Assumptions 2.4, 3.8, and 3.10 hold. Then, for h ∈ (0, h*) and any two initial values x, y ∈ R^n with x ≠ y, the solutions generated by the stochastic θ method (2.2) satisfy ... where C is a constant that does not depend on k and ε is defined in (3.11).
The proof is the same as that of Lemma 3.4.
Lemma 3.14 Let Assumptions 2.6, 3.9, and 3.11 hold. Then the solutions generated by the stochastic θ method (2.2) obey ... where C is a constant that does not depend on k.
The proof is the same as that of Lemma 3.5. Combining Lemmas 3.12, 3.13, 3.14 and using Chebyshev's inequality, we derive the existence and uniqueness of the stationary distribution of the stochastic θ method with θ ∈ [0, 1/2) from Theorem 2.2.

Simulations
We present two numerical examples in this section to support our theoretical results.
and

dX(t) = (-0.5X(t) - 0.5X^3(t)) dt + dB(t).

We use the Kolmogorov-Smirnov test (K-S test) to measure the differences between the empirical distributions. Figure 5 shows the differences between successive empirical density functions for the SDEs with Markovian switching. We can see from Fig. 5 that the difference between the empirical distributions gradually decreases, which shows that the empirical distributions converge quickly to a stationary distribution. Figure 5: The difference between two adjacent empirical probability density functions after switching.
We set the step size to 0.01, T = 2, θ = 1/2, and use 10,000 paths. It is obvious that the first equation (4.3) has no stationary distribution and its numerical solutions tend to infinity. Compared with the first two-dimensional equation, the second one has a stationary distribution. Besides, Fig. 6 shows the differences between successive empirical density functions for the SDEs with Markovian switching. It shows that the empirical distribution tends to a stationary one quite fast. That is to say, even for equations without a stationary distribution of their own, the distribution of the switching system can reach a stable state quickly after switching.
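The K-S comparison of successive empirical distributions can be reproduced in outline as follows. This is a sketch under stated assumptions: the drift matches the cubic equation displayed above, while the common initial value 2.0 and the snapshot spacing are chosen arbitrarily; θ = 1/2, h = 0.01, T = 2, and 10,000 paths follow the text.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
h, T, theta, M = 0.01, 2.0, 0.5, 10_000  # step size, horizon, theta, number of paths

def f(x):
    return -0.5 * x - 0.5 * x**3  # the cubic drift displayed above

X = np.full(M, 2.0)               # assumed common initial value
snapshots = []
for k in range(int(T / h)):
    dB = rng.normal(0.0, np.sqrt(h), size=M)
    rhs = X + (1 - theta) * h * f(X) + dB
    Y = X.copy()
    for _ in range(30):           # fixed-point solve of the implicit drift part
        Y = rhs + theta * h * f(Y)
    X = Y
    if (k + 1) % 50 == 0:         # keep a snapshot every 0.5 time units
        snapshots.append(X.copy())

# K-S statistics between successive empirical distributions; they shrink
# as the empirical law settles toward the stationary distribution
ks = [ks_2samp(a, b).statistic for a, b in zip(snapshots, snapshots[1:])]
```

Plotting the entries of `ks` against time gives a decreasing curve of the kind shown in Figs. 5 and 6.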

Conclusion
This paper studies the numerical stationary distributions generated by the stochastic θ method. We show that when one or more equations in the switching system do not have a stationary distribution, the system still has a unique stationary distribution after a period of switching. Both the drift and diffusion coefficients are required to satisfy the global Lipschitz condition when θ ∈ [0, 1/2), but some super-linear terms are allowed in the drift coefficient when θ ∈ [1/2, 1]. Two numerical examples are given to show the convergence of the numerical stationary distributions to their true counterparts. The figures also support our theoretical results.