
Theory and Modern Applications

Numerical solutions of neutral stochastic functional differential equations with Markovian switching

Abstract

To date, the convergence analysis and the almost sure and mean square exponential stability of numerical solutions for neutral stochastic functional differential equations with Markovian switching (NSFDEwMSs) have been well studied, but very few works concentrate on the stability in distribution of the numerical solution. This paper focuses on the stability in distribution of the numerical solution of NSFDEwMSs. The strong mean square convergence analysis is also discussed.

1 Introduction

Neutral functional differential equations (NFDEs) are a class of differential equations in which the state depends not only on the past and current values but also on derivatives with delays [6]. Since NFDEs have extensive applications in chemical processes, aeroelasticity, Lotka–Volterra systems, steam and water pipes, heat exchangers, and partial element equivalent circuits [7], many excellent studies (see, e.g., [15, 18] and the references therein) have developed the basic theory of NFDEs. When NFDEs are subject to external environmental disturbances, they can be characterized by neutral stochastic functional differential equations (NSFDEs) [10, 17]. Such equations have been applied widely in science and engineering. The existence-and-uniqueness theorem and the asymptotic behavior of NSFDEs have been discussed extensively; see [9, 10] and the references cited therein. Since explicit solutions of most NSFDEs are difficult to obtain, numerical solutions become a very useful tool, and studying the numerical solution of NSFDEs is increasingly important. The most common difference scheme is the Euler–Maruyama (EM) method, owing to its convenient computation and implementation; see, e.g., [11, 12] and the references therein. In [21], influenced by Mao's work [12], Wu et al. analyzed the EM scheme for NSFDEs and established the strong convergence of the numerical solution to the exact solution.

In many practical systems, component failures, changes in subsystem interconnections, and sudden environmental disturbances may cause abrupt changes in structure and parameters. To model this phenomenon, hybrid systems driven by continuous-time Markov chains have been introduced. These systems share a typical feature: the state takes continuous values while the jumping parameter takes discrete values [1, 22], so such systems can be regarded as a special case of hybrid systems [20]. Using the Lyapunov function approach, the exponential stability in moment, the almost sure exponential stability, and the almost sure asymptotic stability of neutral stochastic delay differential equations with Markovian switching (NSDDEwMSs) were discussed in [5, 13], respectively. Basic theoretical results and useful approaches for numerical solutions of stochastic delay differential equations with Markovian switching (SDDEwMSs) were systematically introduced in [14]. In [8, 26], the EM method and the convergence of numerical solutions for NSDDEwMSs were developed under a local Lipschitz condition. Recently, the exponential stability of the EM method for NSFDEs with jumps was analyzed in [16]. The almost sure and mean square exponential stability of numerical solutions for NSFDEs were considered in [23, 27]. In [28], Zong et al. analyzed the mean square exponential stability of numerical solutions for NSFDEs.

Most of the papers [16, 23, 27, 28] are concerned with asymptotic stability in mean square or in probability, which means that the numerical solution tends to zero in mean square or in probability as time t approaches infinity. Nevertheless, such asymptotic behavior is sometimes too strong a requirement, and in these circumstances it is very useful to know whether the numerical solution converges in distribution. This property is referred to as asymptotic stability in distribution. The asymptotic stability in distribution of the exact and numerical solutions of stochastic functional differential equations with Markovian switching (SFDEwMSs) was presented in [2, 3, 25] and the references cited therein. Some results on the asymptotic stability in distribution of the exact solution of NSFDEwMSs were given in [4]. Although the stability in distribution of numerical solutions of stochastic differential equations was considered in [24], the asymptotic stability in distribution of the numerical solution of NSFDEwMSs has not yet been developed, owing to the difficulty stemming from the simultaneous presence of the neutral term and the Markovian switching. This paper fills this gap.

In this paper, we study the EM numerical solutions of the following NSFDEwMSs:

$$ d\bigl[y(t)-\mathcal{D}\bigl(y_{t}, r(t)\bigr)\bigr]=f \bigl(y_{t},r(t)\bigr)\,dt + g\bigl(y_{t},r(t)\bigr)\,dw(t), \quad t\geq 0, $$

with the initial data \(y_{0}= \xi \in L_{\mathcal{{F}}_{0}}^{p}([- \tau ,0]; \Re ^{n})\) and \(r(0)=i_{0}\in \mathcal{S}\), where \(\mathcal{D}(\cdot ,\cdot )\) and \(f(\cdot ,\cdot ): \mathcal{C}([-\tau , 0]; \Re ^{n}) \times \mathcal{S} \to \Re ^{n}\), \(g( \cdot ,\cdot ): \mathcal{C}([-\tau , 0]; \Re ^{n}) \times \mathcal{S} \to \Re ^{n\times m}\), \(y(t) \in \Re ^{n} \) for each t, and \(y_{t}=\{y(t+ \theta ): -\tau \le \theta \le 0\}\). Our primary objective is to extend the method developed in [19, 21], and [24] to NSFDEwMSs and to study the stability in distribution as well as the strong convergence of the numerical approximations when \(f(\cdot ,\cdot )\) and \(g(\cdot ,\cdot )\) satisfy both a global Lipschitz condition and a monotonicity condition, and \(\mathcal{D}( \cdot ,\cdot )\) is a contractive mapping.

We should point out that although the EM method used in this paper is borrowed from [21], where convergence on a finite time horizon was studied, we impose conditions different from those in [21] and investigate the long-term behavior of the numerical solutions. Moreover, the difficulty stemming from the co-occurrence of the neutral term and the Markovian switching is overcome in this paper. To keep the paper self-contained, the strong mean square convergence analysis of the numerical solution of NSFDEwMSs is also presented under our assumptions.

This paper is organized as follows. In Sect. 2, some necessary results are introduced and the EM approximate solution of NSFDEwMSs is defined. We present the main results and give the technical proofs in Sect. 3. We show the relationship between the invariant measure of the numerical solution and that of the exact solution in Sect. 4. In Sect. 5, we give an example to illustrate the results. To keep the paper self-contained, the strong convergence of the approximate solutions of NSFDEwMSs under our assumptions is discussed in the Appendix.

2 Euler–Maruyama method: numerical schemes and preparatory lemmas

Throughout this paper, let \((\varOmega , \mathcal{F}, \{\mathcal{F}_{t} \}_{t\ge 0}, P)\) be a complete probability space with a filtration \(\{\mathcal{F}_{t}\}_{t\ge 0}\) satisfying the usual conditions (i.e., it is increasing and right-continuous, and \(\mathcal{F}_{0}\) contains all P-null sets). Let \(w(t)=(w_{1}(t),w_{2}(t), \ldots , w_{m}(t))^{T}\), \(t \geq 0\), be an m-dimensional Brownian motion defined on this probability space, where \(\tilde{v}^{T}\) denotes the transpose of a vector or matrix ṽ. Let \(|\cdot |\) denote the Euclidean norm in \(\Re ^{n}\) and, for a matrix A, the trace norm \(|A|=\sqrt{\operatorname{trace}(A^{T}A)}\). Denote by \(\mathcal{C}=\mathcal{C}([-\tau ,0]; \Re ^{n})\) the family of continuous functions φ from \([-\tau , 0]\) to \(\Re ^{n}\) with the norm \(\|\varphi \|=\sup_{-\tau \le \theta \le 0}|\varphi (\theta )|\). Let \(L_{\mathcal{{F}}_{0}}^{p}([-\tau , 0]; \Re ^{n})\) denote the family of \(\mathcal{F}_{0}\)-measurable \(\mathcal{C}([-\tau ,0]; \Re ^{n})\)-valued random variables ξ such that \(E\|\xi \|^{p}:=E( \sup_{-\tau \le t\le 0}|\xi (t)|^{p}) <\infty \) (\(p\geq 2\)). If \(x(t)\) is an \(\Re ^{n}\)-valued stochastic process on \(t \in [- \tau , \infty )\), denote \(x_{t}=\{x(t+\theta ): -\tau \le \theta \le 0\}\) for any \(t\ge 0\). Let \(r(t)\) (\(t \geq 0\)) be a right-continuous Markov chain on the probability space taking values in a finite state space \(\mathcal{S}=\{1,2, \ldots , N\}\) with the generator \(\varGamma =( \gamma _{ij})_{N \times N}\) given by

$$ P \bigl\{ r(t+ \delta )=j|r(t)=i \bigr\} = \textstyle\begin{cases} \gamma _{ij} \delta +o(\delta ) & \text{if } i \neq j, \\ 1+\gamma _{ij} \delta +o(\delta ) & \text{if } i = j, \end{cases} $$

where \(\delta >0\). Here, \(\gamma _{ij}\) is the transition rate from i to j and \(\gamma _{ij}\geq 0\) if \(i \neq j\), while

$$ \gamma _{ii}= - \sum_{j \neq i}\gamma _{ij}. $$

In this paper, it is always assumed that the Markov chain \(r(\cdot )\) is independent of the Brownian motion \(w(\cdot )\). It is well known that almost every sample path of \(r(\cdot )\) is a right-continuous step function with a finite number of simple jumps in any finite subinterval of \(\Re _{+} :=[0,\infty )\).
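As an illustration of this setup, the discrete chain \(r_{k}^{\Delta }=r(k\Delta )\) used later in Sect. 2 can be sampled from the one-step transition matrix \(P(\Delta )=e^{\Delta \varGamma }\). The following sketch, with a hypothetical two-state generator and 0-based state labels (the paper labels states 1 to N), is an assumption-laden illustration rather than part of the paper:

```python
import numpy as np

def transition_matrix(Gamma, dt, terms=30):
    # P(dt) = exp(dt * Gamma) via a truncated Taylor series, adequate for a
    # small step dt; rows of P sum to one because rows of Gamma sum to zero.
    P = np.eye(Gamma.shape[0])
    term = np.eye(Gamma.shape[0])
    for k in range(1, terms):
        term = term @ (dt * Gamma) / k
        P = P + term
    return P

def simulate_chain(Gamma, i0, steps, dt, rng):
    # Sample r_k = r(k*dt) by repeated draws from the rows of P(dt).
    P = transition_matrix(Gamma, dt)
    P = np.clip(P, 0.0, None)
    P = P / P.sum(axis=1, keepdims=True)   # guard against rounding error
    path = [i0]
    for _ in range(steps):
        path.append(int(rng.choice(P.shape[0], p=P[path[-1]])))
    return path

# hypothetical generator on S = {0, 1}: off-diagonal rates >= 0 and
# gamma_ii = -sum_{j != i} gamma_ij, as required above
Gamma = np.array([[-1.0, 1.0], [2.0, -2.0]])
rng = np.random.default_rng(0)
path = simulate_chain(Gamma, 0, 1000, 0.01, rng)
```

For small Δ the truncated series agrees with \(e^{\Delta \varGamma }\) to machine precision; a library matrix exponential could be substituted.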

In this paper, we consider the following n-dimensional NSFDEwMSs:

$$ d\bigl[y(t)-\mathcal{D}\bigl(y_{t}, r(t)\bigr)\bigr]=f \bigl(y_{t},r(t)\bigr)\,dt + g\bigl(y_{t},r(t)\bigr)\,dw(t), \quad t\geq 0, $$
(1)

with the initial data \(y_{0}= \xi \in L_{\mathcal{{F}}_{0}}^{p}([- \tau ,0]; \Re ^{n})\) and \(r(0)=i_{0}\in \mathcal{S}\), where

$$ \mathcal{D}(\cdot ,\cdot )\text{ and } f(\cdot ,\cdot ): \mathcal{C} \times \mathcal{S} \to \Re ^{n}, \quad \text{and} \quad g(\cdot ,\cdot ): \mathcal{C} \times \mathcal{S} \to \Re ^{n\times m}. $$

Assumption 1

For any \(\varphi , \phi \in \mathcal{C} \), there exist \(\lambda _{1}>0\), \(\lambda _{2}>0\), \(\lambda _{3}>0\), \(\kappa \in (0,1) \), and a probability measure \(\rho (\cdot )\) on \([-\tau ,0]\) such that

$$\begin{aligned}& \begin{aligned}[b] &2 \bigl\langle \bigl(\varphi (0)- \phi (0) \bigr)- \bigl(\mathcal{D}(\varphi ,i)-\mathcal{D}(\phi ,i) \bigr),f( \varphi ,i)-f(\phi ,i) \bigr\rangle + \bigl\vert g(\varphi ,i)-g(\phi ,i) \bigr\vert ^{2} \\ &\quad \leq -\lambda _{1} \bigl\vert \varphi (0)-\phi (0) \bigr\vert ^{2}+\lambda _{2} \int _{- \tau }^{0} \bigl\vert \varphi (\theta )-\phi ( \theta ) \bigr\vert ^{2}\rho (d\theta ), \end{aligned} \end{aligned}$$
(2)
$$\begin{aligned}& \begin{aligned}[b] & \bigl\vert f(\varphi ,i)-f(\phi ,i) \bigr\vert ^{2}\vee \bigl\vert g(\varphi ,i)-g(\phi ,i) \bigr\vert ^{2} \\ &\quad \leq \lambda _{3} \biggl( \bigl\vert \varphi (0)-\phi (0) \bigr\vert ^{2}+ \int _{-\tau }^{0} \bigl\vert \varphi (\theta )-\phi ( \theta ) \bigr\vert ^{2}\rho (d\theta ) \biggr), \end{aligned} \end{aligned}$$
(3)

and

$$\begin{aligned} \begin{aligned}[b] & \bigl\vert \mathcal{D}(\varphi ,i)-\mathcal{D}(\phi ,i) \bigr\vert \leq \kappa \int _{- \tau }^{0} \bigl\vert \varphi (\theta )-\phi ( \theta ) \bigr\vert \rho (d\theta ), \quad \mathcal{D}(0,i)=0. \end{aligned} \end{aligned}$$
(4)

Assumption 1 can guarantee the existence and uniqueness of the solution for (1). It is seen from (4) that

$$\begin{aligned} \begin{aligned}[b] & \bigl\vert \mathcal{D}(\varphi ,i) \bigr\vert ^{2}\leq \kappa ^{2} \int _{-\tau }^{0} \bigl\vert \varphi (\theta ) \bigr\vert ^{2}\rho (d\theta ). \end{aligned} \end{aligned}$$
(5)

By (3), we have

$$\begin{aligned} \begin{aligned}[b] \bigl\vert f(\varphi ,i) \bigr\vert ^{2}\vee \bigl\vert g(\varphi ,i) \bigr\vert ^{2} \leq 2\lambda _{3} \biggl( \bigl\vert \varphi (0) \bigr\vert ^{2}+ \int _{-\tau }^{0} \bigl\vert \varphi (\theta ) \bigr\vert ^{2}\rho (d \theta ) \biggr)+a, \end{aligned} \end{aligned}$$
(6)

where \(a=2|f(0,i)|^{2}\vee 2|g(0,i)|^{2}\).
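Indeed, (6) follows from (3) with \(\phi =0\) together with the elementary bound \(|u|^{2}\leq 2|u-v|^{2}+2|v|^{2}\):

$$ \bigl\vert f(\varphi ,i) \bigr\vert ^{2}\leq 2 \bigl\vert f(\varphi ,i)-f(0,i) \bigr\vert ^{2}+2 \bigl\vert f(0,i) \bigr\vert ^{2} \leq 2\lambda _{3} \biggl( \bigl\vert \varphi (0) \bigr\vert ^{2}+ \int _{-\tau }^{0} \bigl\vert \varphi (\theta ) \bigr\vert ^{2}\rho (d\theta ) \biggr)+a, $$

and the same estimate holds for \(g(\varphi ,i)\).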

For any \(\epsilon >0\), from (2), (3), and (4), we have

$$\begin{aligned} \begin{aligned}[b] &2 \bigl\langle \varphi (0)- \mathcal{D}(\varphi ,i),f(\varphi ,i) \bigr\rangle + \bigl\vert g(\varphi ,i) \bigr\vert ^{2} \\ &\quad \le (-\lambda _{1}+2\epsilon +\epsilon \lambda _{3}) \bigl\vert \varphi (0) \bigr\vert ^{2}+\bigl( \lambda _{2}+2\epsilon \kappa ^{2}+\epsilon \lambda _{3} \bigr) \int _{-\tau } ^{0} \bigl\vert \varphi (\theta ) \bigr\vert ^{2}\rho (d\theta )+\frac{(2+\epsilon )a}{2 \epsilon }. \end{aligned} \end{aligned}$$
(7)

Assumption 2

For any \(\xi \in L_{\mathcal{{F}}_{0}}^{p}([- \tau , 0]; \Re ^{n})\), there exists a nondecreasing function \(\alpha (\cdot )\) such that, for any \(p\geq 2\),

$$\begin{aligned} \begin{aligned}[b] &E\Bigl(\sup_{-\tau \leq s\leq t\leq 0} \bigl\vert \xi (t)-\xi (s) \bigr\vert ^{p}\Bigr)\leq \alpha (t-s) \end{aligned} \end{aligned}$$
(8)

with the property \(\alpha (u)\rightarrow 0\) as \(u\rightarrow 0^{+}\).

Theorem 2.1

Under Assumptions 1 and 2, NSFDEwMSs (1) has a unique continuous solution \(y(t)\) for \(t\geq -\tau \). Moreover, the solution has the property that

$$\begin{aligned} \begin{aligned}[b] &E\Bigl(\sup_{-\tau \leq t\leq T} \bigl\vert y(t) \bigr\vert ^{p}\Bigr)\leq (\bar{C}+1) \bigl(1+E \Vert \xi \Vert ^{p}\bigr)e^{\bar{C} T} \end{aligned} \end{aligned}$$

for any \(T>0\), where C̄ is a positive constant depending only on τ, κ, T, and \(\lambda _{3}\). In other words, the pth (\(p\geq 2\)) moment of the solution is finite.

Proof

The proof is similar to that of Theorem 2.4 in [5] and is therefore omitted. □

Let \(r_{k}^{\Delta }=r(k\Delta )\) for \(k\ge 0\). The discrete Markov chain \(\{r_{k}^{\Delta }, k=0, 1, 2, \ldots \}\) can be simulated using the one-step transition probability matrix \(P(\Delta ) =( P_{ij}(\Delta ))_{N \times N} = e^{\Delta \varGamma } \); the details can be found in [14]. We can now define the EM approximate solution of NSFDEwMSs (1). Let the step size \(\Delta \in (0,1)\) satisfy \(\Delta =\frac{T }{N}=\frac{ \tau }{M}\) for some integers \(N>T\) and \(M>\tau \) (\(T>0\)). The discrete EM approximate solution \(\bar{X}(k\Delta )\), \(-M\le k\le N\), is defined as follows:

$$\begin{aligned} \begin{aligned}[b] \textstyle\begin{cases} \bar{X}(k \Delta )=\xi (k \Delta ), &-M \le k\le 0, \\ \bar{X}((k+1)\Delta )=\bar{X}(k \Delta )+\mathcal{D}( \bar{X}_{k \Delta },r^{\Delta }_{k})-\mathcal{D}( \bar{X}_{(k-1)\Delta },r^{ \Delta }_{k-1}) \\ \hphantom{\bar{X}((k+1)\Delta )=}{}+f( \bar{X}_{k \Delta }, r_{k}^{\Delta })\Delta +g( \bar{X}_{k \Delta }, r_{k}^{\Delta })\Delta w_{k}, &0 \le k \le N-1, \end{cases}\displaystyle \end{aligned} \end{aligned}$$
(9)

where \(\Delta w_{k}=w((k+1)\Delta )-w(k\Delta )\) and \(\bar{X}_{k \Delta }=\{\bar{X}_{k\Delta }(\theta ): -\tau \le \theta \le 0\}\) is a \(\mathcal{C}\)-valued random variable defined as follows:

$$ \bar{X}_{k\Delta }(\theta )=\bar{X}\bigl((k+i)\Delta \bigr)+ \frac{\theta -i \Delta }{\Delta } \bigl[\bar{X}\bigl((k+i+1)\Delta \bigr)-\bar{X}\bigl((k+i) \Delta \bigr) \bigr] $$
(10)

for \(i\Delta \le \theta \le (i+1)\Delta \), \(i=-M, \ldots , -1\), where in order for \(\bar{X}_{-\Delta }\) to be well defined, we set \(\bar{X}(-(M+1) \Delta )=\xi (-M\Delta )\). Let \(t_{k}=k \Delta \), \(k=0, 1, \ldots ,N-1\), and define

$$ \bar{X}_{t}= \bar{X}_{k\Delta },\qquad \bar{r}(t) = r_{k}^{\Delta } \quad \text{for } t\in [t_{k}, t_{k+1}). $$
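To make scheme (9) concrete, the following sketch implements it for a scalar equation with a single point delay τ, taking the illustrative coefficients \(\mathcal{D}(y_{t},i)=\kappa _{i}y(t-\tau )\), \(f(y_{t},i)=a_{i}y(t)+b_{i}y(t-\tau )\), \(g(y_{t},i)=c_{i}y(t)\); these coefficients and the first-order approximation of \(P(\Delta )\) are assumptions for illustration only:

```python
import numpy as np

def em_nsfdewms(xi, kappa, a, b, c, Gamma, tau, T, dt, i0, rng):
    # Discrete EM scheme (9) for the scalar point-delay equation with the
    # illustrative (assumed) coefficients
    #   D(y_t, i) = kappa[i]*y(t - tau),
    #   f(y_t, i) = a[i]*y(t) + b[i]*y(t - tau),
    #   g(y_t, i) = c[i]*y(t).
    # xi : callable giving the initial segment on [-tau, 0].
    M = int(round(tau / dt))            # dt = tau / M
    N = int(round(T / dt))              # dt = T / N
    S = len(a)
    # first-order approximation of the one-step matrix P(dt) = exp(dt*Gamma)
    P = np.eye(S) + dt * np.asarray(Gamma)
    P = np.clip(P, 0.0, None)
    P = P / P.sum(axis=1, keepdims=True)

    X = np.empty(M + N + 2)             # X[q] approximates y((q - M - 1)*dt)
    for q in range(M + 2):              # fill the initial segment, with the
        X[q] = xi(max(-tau, (q - M - 1) * dt))  # convention X(-(M+1)dt)=xi(-M dt)
    r_prev = r = i0                     # take r_{-1} = r_0 at the first step
    for k in range(N):
        q = k + M + 1                   # array index of time k*dt
        dW = rng.normal(0.0, np.sqrt(dt))
        # neutral difference D(X_{k dt}, r_k) - D(X_{(k-1) dt}, r_{k-1})
        neutral = kappa[r] * X[q - M] - kappa[r_prev] * X[q - M - 1]
        drift = (a[r] * X[q] + b[r] * X[q - M]) * dt
        diffusion = c[r] * X[q] * dW
        X[q + 1] = X[q] + neutral + drift + diffusion
        r_prev, r = r, int(rng.choice(S, p=P[r]))
    return X[M + 1:]                    # approximations at 0, dt, ..., N*dt

rng = np.random.default_rng(1)
out = em_nsfdewms(lambda t: 1.0 + t, [0.2, 0.1], [-1.0, -0.5],
                  [0.3, 0.2], [0.2, 0.3], [[-1.0, 1.0], [2.0, -2.0]],
                  tau=1.0, T=5.0, dt=0.01, i0=0, rng=rng)
```

For a general functional coefficient one would pass the whole linearly interpolated segment \(\bar{X}_{k\Delta }\) of (10) to D, f, g instead of the single delayed value.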

In our analysis, it will be more convenient to use continuous-time approximations. We first introduce the \(\mathcal{C}\)-valued step process

$$\begin{aligned} \begin{aligned}[b] &\bar{X}_{t}=\sum _{k=0}^{\infty }\bar{X}_{k\Delta }1_{[k\Delta ,(k+1) \Delta )}(t), \quad t\geq 0, \end{aligned} \end{aligned}$$

and then we define the continuous EM approximate solution as follows: let \(X(t)=\xi (t)\) for \(-\tau \leq t \leq 0\), and let \([\frac{t}{ \Delta }]\) be the integer part of \(\frac{t}{\Delta }\), while for \(t\in [[\frac{t}{\Delta }]\Delta ,([\frac{t}{\Delta }]+1)\Delta )\),

$$ \begin{aligned}[b] X(t) &=\xi (0)+ \int _{0}^{t} f\bigl(\bar{X}_{s}, \bar{r}(s)\bigr)\,ds+ \int _{0}^{t} g\bigl(\bar{X}_{s}, \bar{r}(s)\bigr)\,dw(s) \\ &\quad{}+\bar{\mathcal{D}}(t)-\mathcal{D} \bigl(\bar{X}_{-\Delta }, \bar{r}(0) \bigr), \end{aligned} $$
(11)

where \(\bar{\mathcal{D}}(t)=\mathcal{D} (\bar{X}_{([\frac{t}{ \Delta }]-1)\Delta }+\frac{t-[\frac{t}{\Delta }]\Delta }{\Delta }( \bar{X}_{[\frac{t}{\Delta }]\Delta }-\bar{X}_{([\frac{t}{\Delta }]-1) \Delta }),\bar{r}(t) )\).

Obviously, (11) can be written as

$$\begin{aligned} \begin{aligned}[b] X(t)&= \bar{X}(t_{[\frac{t}{\Delta }]})+ \int _{t_{[\frac{t}{\Delta }]}} ^{t} f\bigl(\bar{X}_{s}, \bar{r}(s)\bigr)\,ds+ \int _{t_{[\frac{t}{\Delta }]}}^{t} g\bigl( \bar{X}_{s}, \bar{r}(s)\bigr)\,dw(s) \\ &\quad{}+\bar{\mathcal{D}}(t)-\mathcal{D} \biggl(\bar{X}_{([\frac{t}{\Delta }]-1)\Delta },\bar{r} \biggl(\biggl(\biggl[\frac{t}{\Delta }\biggr]-1\biggr)\Delta \biggr) \biggr). \end{aligned} \end{aligned}$$
(12)

Observe that \(X(t_{[\frac{t}{\Delta }]})=\bar{X}(t_{[\frac{t}{\Delta }]})\), that is, the discrete and continuous EM approximate solutions coincide at the gridpoints.

Rewriting (10), we have

$$\begin{aligned} \bar{X}_{[\frac{t}{\Delta }]\Delta }(\theta )=\frac{\Delta -(\theta -i \Delta )}{\Delta }\bar{X} \biggl( \biggl(\biggl[\frac{t}{\Delta }\biggr]+i\biggr)\Delta \biggr) +\frac{ \theta -i\Delta }{\Delta } \bar{X} \biggl(\biggl(\biggl[\frac{t}{\Delta }\biggr]+i+1\biggr)\Delta \biggr), \end{aligned}$$
(13)

which yields

$$ \bigl\vert \bar{X}_{[\frac{t}{\Delta }]\Delta }(\theta ) \bigr\vert \le \biggl\vert \bar{X} \biggl(\biggl(\biggl[\frac{t}{\Delta }\biggr]+i\biggr)\Delta \biggr) \biggr\vert \vee \biggl\vert \bar{X} \biggl(\biggl(\biggl[\frac{t}{\Delta } \biggr]+i+1\biggr)\Delta \biggr) \biggr\vert . $$

Moreover, by [21], we have

$$ \Vert \bar{X}_{[\frac{t}{\Delta }]\Delta } \Vert =\max_{-M\le i \le 0} \biggl\vert \bar{X} \biggl(\biggl(\biggl[\frac{t}{\Delta }\biggr]+i\biggr)\Delta \biggr) \biggr\vert , \qquad \Vert \bar{X} _{-\Delta } \Vert \le \Vert \bar{X}_{0} \Vert , \qquad \Vert \bar{X}_{[\frac{t}{\Delta }] \Delta } \Vert \le \Vert X_{[\frac{t}{\Delta }]\Delta } \Vert , $$

where

$$ X_{[\frac{t}{\Delta }]\Delta }= \biggl\{ X(t): \biggl[\frac{t}{\Delta }\biggr]\Delta - \tau \le t\le \biggl[\frac{t}{\Delta }\biggr]\Delta \biggr\} \quad \text{and}\quad \sup _{0\le t\le T} \Vert \bar{X}_{t} \Vert \le \sup _{-\tau \le s \le T} \bigl\vert X(s) \bigr\vert . $$

Moreover, the same way as [2, Lemma 5.1], we have the following result.

Proposition 2.2

\((X_{k\Delta }, r_{k}^{\Delta })\), \(k\ge 0\), is a homogeneous Markov chain, that is,

$$\begin{aligned} &P\bigl(\bigl(X_{(k+1)\Delta }, r_{k+1}^{\Delta }\bigr)\in A \times \{j\}|\bigl(X_{k \Delta }, r_{k}^{\Delta }\bigr)=(\xi , i)\bigr) \\ &\quad =P\bigl(\bigl(X_{\Delta }, r_{1}^{\Delta }\bigr)\in A \times \{j\}|\bigl(X_{0}, r_{0} ^{\Delta }\bigr)=(\xi , i)\bigr). \end{aligned}$$

3 Stability in distribution

In this section, we establish two sufficient criteria for the stability in distribution of the Markov chain \((X_{k\Delta }, r_{k}^{\Delta })\) with initial data \((\xi , i)\). It should be pointed out that the continuous EM approximate solution \(X(t)\) is a point in \(\Re ^{n}\), whereas \(X_{t}\) is a continuous function on the interval \([-\tau ,0]\). For the Markov chain \((X_{k\Delta }, r_{k}^{\Delta })\), we define the k-step transition probability, for any Borel set A in \(\mathcal{C}\), \(\xi \in \mathcal{C}\), and \(i, j\in \mathcal{S}\), by

$$ P_{k}^{\Delta }\bigl((\xi , i); A\times \{j\}\bigr)=P\bigl( \bigl(X_{k\Delta }, r_{k}^{ \Delta }\bigr)\in A\times \{j\}| \bigl(X_{0}, r_{0}^{\Delta }\bigr)=(\xi , i)\bigr). $$

Let \(\mathscr{P}(\mathcal{C} \times \mathcal{S})\) denote the family of all probability measures on \(\mathcal{C} \times \mathcal{S}\). In order to characterize the stability in distribution, we introduce the following metric on the space \(\mathscr{P}(\mathcal{C} \times \mathcal{S})\). For any \(P_{1},P_{2}\in \mathscr{P}(\mathcal{C} \times \mathcal{S})\), define the metric \(d_{\mathbb{L}}\) by

$$ d_{\mathbb{L}}(P_{1},P_{2})=\sup _{f\in \mathbb{L}} \Biggl\vert \sum_{i=1}^{N} \int _{\mathcal{C}} f(\xi , i)P_{1}(d\xi , i)-\sum _{i=1}^{N} \int _{ \mathcal{C}} f(\xi , i)P_{2}(d\xi , i) \Biggr\vert $$

and

$$ \mathbb{L}=\bigl\{ f: \mathcal{C} \times \mathcal{S} \rightarrow R: \bigl\vert f( \xi , i)-f(\eta , j) \bigr\vert \leq \Vert \xi -\eta \Vert + \vert i-j \vert , \bigl\vert f(\cdot , \cdot ) \bigr\vert \leq 1\bigr\} . $$

Definition 3.1

The Markov chain \((X_{k\Delta }, r_{k}^{\Delta })\) is said to be stable in distribution if there exists a probability measure \(\pi ^{\Delta } \in \mathscr{P}(\mathcal{C} \times \mathcal{S})\) such that \(P_{k}^{ \Delta }((\xi , i); \cdot \times \cdot )\) converges weakly to \(\pi ^{\Delta }(\cdot \times \cdot )\) as \(k\longrightarrow \infty \), for any \(\xi \in \mathcal{C}\), \(i \in \mathcal{S}\), that is,

$$ \lim_{k\longrightarrow \infty }d_{\mathbb{L}}\bigl(P_{k}^{\Delta } \bigl((\xi , i); \cdot \times \cdot \bigr), \pi ^{\Delta }(\cdot \times \cdot )\bigr)=0, \quad \forall \xi \in \mathcal{C}, i \in \mathcal{S}. $$

In order to highlight the initial value ξ, we may write \(y(t)\) and \(X(t)\) by \(y^{\xi }(t)\) and \(X^{\xi }(t)\), respectively.

Definition 3.2

Let \(p\geq 2\). The segment process \(X_{k\Delta }\) is said to have Property (P1) if

$$ \sup_{k\geq 0}\sup_{\xi \in K}E \bigl\Vert X_{k\Delta }^{\xi } \bigr\Vert ^{p}< \infty . $$
(14)

Moreover, the segment process \(X_{t}\) is said to have Property (P2) if

$$ \lim_{k\rightarrow \infty }\sup_{\xi ,\eta \in K}E \bigl\Vert X_{k\Delta }^{ \xi }-X_{k\Delta }^{\eta } \bigr\Vert ^{p}=0 $$
(15)

uniformly in \((\xi ,\eta )\in K\times K\), where K in (14) and (15) denotes any compact subset of \(\mathcal{C}\).

Theorem 3.1

If \(X_{k\Delta }\) has Properties (P1) and (P2), then \((X_{k\Delta }, r _{k}^{\Delta })\) is stable in distribution.

Since the proof of Theorem 3.1 is similar to that of [14, Theorem 5.43], we omit it here to save space. Next, we show that, under Assumption 1, \(X_{k\Delta }\) has Properties (P1) and (P2) as long as the stepsize Δ is sufficiently small. The following result is therefore immediate.

Theorem 3.2

Assume that Assumption 1 holds. If

$$ \lambda _{1}>4\lambda _{3}+\lambda _{2}+4\kappa ^{2}, $$

then \((X_{k\Delta }, r_{k}^{\Delta })\) is stable in distribution when the stepsize Δ is sufficiently small.

Therefore, in order to show that \((X_{k\Delta }, r_{k}^{\Delta })\) is stable in distribution, we only need to prove that \(X_{k\Delta }\) satisfies Properties (P1) and (P2).

Lemma 3.3

Under Assumptions 1 and 2, if

$$ \lambda _{1}>4\lambda _{3}+\lambda _{2}+2\kappa ^{2}, $$
(16)

and Δ is sufficiently small, then there exists a constant \(C>0\), which may depend on the initial value ξ, such that the EM approximate solution has the property

$$ \sup_{t\geq 0}\sup_{\xi \in K}E\bigl[ \bigl\Vert X_{t}^{\xi } \bigr\Vert ^{2}\bigr] \leq C, $$

where K is a compact set in \(\mathcal{C}\). In other words, \(X_{k\Delta }\) has Property (P1) provided the stepsize Δ is sufficiently small.

Proof

Recall the elementary inequality: for any \(p\geq 2\) and \(\beta >0\),

$$ \vert x+y \vert ^{p}\leq (1+\beta )^{p-1} \bigl( \vert x \vert ^{p}+\beta ^{1-p} \vert y \vert ^{p}\bigr), $$
(17)
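This inequality follows from the convexity of \(u\mapsto |u|^{p}\): for any \(\theta \in (0,1)\),

$$ \vert x+y \vert ^{p}= \biggl\vert (1-\theta )\frac{x}{1-\theta }+\theta \frac{y}{\theta } \biggr\vert ^{p}\leq (1-\theta )^{1-p} \vert x \vert ^{p}+\theta ^{1-p} \vert y \vert ^{p}, $$

and the choice \(\theta =\frac{\beta }{1+\beta }\) gives \((1-\theta )^{1-p}=(1+\beta )^{p-1}\) and \(\theta ^{1-p}=(1+\beta )^{p-1}\beta ^{1-p}\), which is exactly (17).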

which yields

$$\begin{aligned} E \bigl\vert X(t) \bigr\vert ^{2}\leq (1+\beta )E \bigl\vert X(t)-\bar{\mathcal{D}}( t) \bigr\vert ^{2}+ \frac{1+ \beta }{\beta }E \bigl\vert \bar{\mathcal{D}}( t) \bigr\vert ^{2}. \end{aligned}$$
(18)

From (5), (13), and \(\|\bar{X}_{-\Delta }\|\leq \| \bar{X}_{0}\|\), we can see that

$$\begin{aligned} \begin{aligned}[b] & \biggl\Vert \bar{X}_{([\frac{t}{\Delta }]-1)\Delta }+\frac{t-[\frac{t}{ \Delta }]\Delta }{\Delta }(\bar{X}_{[\frac{t}{\Delta }]\Delta }- \bar{X}_{([\frac{t}{\Delta }]-1)\Delta }) \biggr\Vert ^{2} \leq \sup _{-\tau \leq s\leq t} \bigl\vert X(s) \bigr\vert ^{2}, \\ &E \bigl\vert \bar{\mathcal{D}}(t) \bigr\vert ^{2}\le \kappa ^{2}E \Vert \xi \Vert ^{2}+\kappa ^{2}E\Bigl( \sup_{0\le s \le t} \bigl\vert X(s) \bigr\vert ^{2}\Bigr). \end{aligned} \end{aligned}$$
(19)

Application of the Itô formula yields

$$\begin{aligned} \begin{aligned}[b] &e^{\lambda t}E \bigl\vert X(t)-\bar{\mathcal{D}}(t) \bigr\vert ^{2} \\ &\quad = E \bigl\vert \xi -\mathcal{D}\bigl(\bar{X}_{-\Delta },\bar{r}(0)\bigr) \bigr\vert ^{2} +\lambda E \int _{0}^{t}e^{\lambda s} \bigl\vert X(s)- \bar{\mathcal{D}}(s) \bigr\vert ^{2}\,ds \\ &\qquad{}+E \int _{0}^{t}e^{\lambda s} \bigl[2 \bigl\langle X(s)- \bar{\mathcal{D}}(s),f\bigl(\bar{X}_{s},\bar{r}(s)\bigr) \bigr\rangle + \bigl\vert g\bigl( \bar{X}_{s},\bar{r}(s)\bigr) \bigr\vert ^{2} \bigr]\,ds \\ &\quad = E \bigl\vert \xi -\mathcal{D}\bigl(\bar{X}_{-\Delta },\bar{r}(0)\bigr) \bigr\vert ^{2}+\lambda E \int _{0}^{t}e^{\lambda s} \bigl\vert X(s)- \bar{\mathcal{D}}(s) \bigr\vert ^{2}\,ds \\ &\qquad{}+E \int _{0}^{t}e^{\lambda s} \bigl[2 \bigl\langle X(s)-\bar{X}_{s}(0),f\bigl( \bar{X}_{s},\bar{r}(s) \bigr) \bigr\rangle \bigr]\,ds \\ &\qquad{}+E \int _{0}^{t}e^{\lambda s} \bigl[2 \bigl\langle \bar{X}_{s}(0)- \mathcal{D}\bigl(\bar{X}_{s},\bar{r}(s) \bigr),f\bigl(\bar{X}_{s},\bar{r}(s)\bigr) \bigr\rangle + \bigl\vert g\bigl(\bar{X}_{s},\bar{r}(s)\bigr) \bigr\vert ^{2} \bigr]\,ds \\ &\qquad{}+E \int _{0}^{t}e^{\lambda s} \bigl[2 \bigl\langle \mathcal{D}\bigl(\bar{X} _{s},\bar{r}(s)\bigr)-\bar{\mathcal{D}}(s),f \bigl(\bar{X}_{s},\bar{r}(s)\bigr) \bigr\rangle \bigr]\,ds \\ &\quad = : I_{1}+I_{2}+I_{3}+I_{4}+I_{5}. \end{aligned} \end{aligned}$$
(20)

It is clear that \(I_{1}\) depends only on the initial data ξ. We now estimate \(I_{i}\) (\(i=2,\ldots ,5\)) in turn. Before doing so, we record one useful inequality:

$$\begin{aligned} \begin{aligned}[b] \int _{0}^{t}e^{\gamma s} \int _{-\tau }^{0} \bigl\vert \bar{X}_{s}( \theta ) \bigr\vert ^{2} \rho (d\theta )\,ds &= \int _{-\tau }^{0} \biggl( \int _{0}^{t}e^{\gamma s} \bigl\vert \bar{X}_{s}(\theta ) \bigr\vert ^{2}\,ds \biggr)\rho (d\theta ) \\ &\leq e^{\gamma \tau } \int _{-\tau }^{0} \biggl( \int _{-\tau }^{t}e^{ \gamma s} \bigl\vert \bar{X}(s) \bigr\vert ^{2}\,ds \biggr)\rho (d\theta ) \\ &\leq \frac{e^{\gamma \tau }}{\gamma } \Vert \xi \Vert ^{2}+e^{\gamma \tau } \int _{0}^{t}e^{\gamma s} \bigl\vert \bar{X}(s) \bigr\vert ^{2}\,ds. \end{aligned} \end{aligned}$$
(21)

From Assumption 1 and (21), we have

$$\begin{aligned} \begin{aligned}[b] I_{2}&\leq \lambda E \int _{0}^{t}e^{\lambda s} \bigl\vert X(s) \bigr\vert ^{2}\,ds+2 \lambda E \int _{0}^{t}e^{\lambda s} \bigl\vert X(s) \bigr\vert \bigl\vert \bar{\mathcal{D}}(s) \bigr\vert \,ds+\lambda E \int _{0}^{t}e^{\lambda s} \bigl\vert \bar{ \mathcal{D}}(s) \bigr\vert ^{2}\,ds \\ &\leq \bigl(2\kappa +\kappa ^{2}\bigr)E \Vert \xi \Vert ^{2}e^{\lambda \tau }+ \bigl[ \lambda +\bigl(2\lambda \kappa +\lambda \kappa ^{2}\bigr)e^{\lambda \tau } \bigr] \int _{0}^{t}e^{\lambda s}E\Bigl(\sup _{0\le u \le s} \bigl\vert X(u) \bigr\vert ^{2}\Bigr)\,ds \end{aligned} \end{aligned}$$
(22)

and

$$ \begin{aligned}[b] I_{3}&\leq \Delta ^{-\frac{1}{2}}E \int _{0}^{t}e^{\lambda s} \bigl\vert X(s)- \bar{X}(s) \bigr\vert ^{2}\,ds+\Delta ^{\frac{1}{2}}E \int _{0}^{t}e^{\lambda s} \bigl\vert f\bigl( \bar{X}_{s},\bar{r}(s)\bigr) \bigr\vert ^{2}\,ds \\ &\le \Delta ^{-\frac{1}{2}}E \int _{0}^{t}e^{\lambda s} \bigl\vert X(s)- \bar{X}(s) \bigr\vert ^{2}\,ds+\frac{\lambda _{3}\Delta ^{\frac{1}{2}}e^{\lambda \tau } }{\lambda }E \Vert \xi \Vert ^{2}+a\Delta ^{\frac{1}{2}}E \int _{0}^{t}e ^{\lambda s}\,ds \\ &\quad{}+\Delta ^{\frac{1}{2}}\lambda _{3} \bigl(1+e^{\lambda \tau } \bigr) \int _{0}^{t}e^{\lambda s}E\Bigl(\sup _{0\le u \le s} \bigl\vert X(u) \bigr\vert ^{2}\Bigr)\,ds. \end{aligned} $$
(23)

It follows from (12) that, for any \(t\in [t_{[\frac{t}{ \Delta }]},t_{[\frac{t}{\Delta }]+1} )\) (\(t\geq 0\)),

$$ \begin{aligned}[b] &E \bigl\vert X(t)-\bar{X}(t) \bigr\vert ^{2} \\ &\quad \leq 3E \biggl( \biggl\vert \int _{t_{[\frac{t}{\Delta }]}}^{t}f\bigl(\bar{X} _{s}, \bar{r}(s)\bigr)\,ds \biggr\vert ^{2} \biggr)+3E \biggl( \biggl\vert \int _{t_{[\frac{t}{\Delta }]}}^{t}g\bigl(\bar{X}_{s}, \bar{r}(s)\bigr)\,dB(s) \biggr\vert ^{2} \biggr) \\ &\qquad{}+3E \biggl(\biggl|\bar{\mathcal{D}}(t)-\mathcal{D}\biggl(\bar{X}_{([\frac{t}{ \Delta }]-1)\Delta }, \bar{r}\biggl(\biggl(\biggl[\frac{t}{\Delta }\biggr]-1\biggr)\Delta \biggr)\biggr)\biggr|^{2} \biggr) \\ &\quad \leq \bigl[12\lambda _{3}(M+2) E \bigl( \bigl\vert \bar{X}(t) \bigr\vert ^{2} \bigr)+6a\bigr]\Delta \\ &\qquad {}+6E \biggl( \biggl\vert \bar{\mathcal{D}}(t)\\ &\qquad{}-\mathcal{D} \biggl( \bar{X}_{([\frac{t}{ \Delta }]-1)\Delta }+\frac{t-[\frac{t}{\Delta }]\Delta }{\Delta }( \bar{X}_{[\frac{t}{\Delta }]\Delta }- \bar{X}_{([\frac{t}{\Delta }]-1) \Delta }), \bar{r}\biggl(\biggl(\biggl[\frac{t}{\Delta }\biggr]-1 \biggr)\Delta \biggr) \biggr) \biggr\vert ^{2} \biggr) \\ &\qquad{}+6E \biggl( \biggl\vert \mathcal{D} \biggl(\bar{X}_{([\frac{t}{\Delta }]-1) \Delta }+ \frac{t-[\frac{t}{\Delta }]\Delta }{\Delta }(\bar{X}_{[\frac{t}{ \Delta }]\Delta }-\bar{X}_{([\frac{t}{\Delta }]-1)\Delta }),\bar{r}\biggl( \biggl(\biggl[\frac{t}{ \Delta }\biggr]-1\biggr)\Delta \biggr) \biggr) \\ &\qquad {}-\mathcal{D}\biggl(\bar{X}_{([\frac{t}{\Delta }]-1)\Delta },\bar{r}\biggl(\biggl( \biggl[\frac{t}{ \Delta }\biggr]-1\biggr)\Delta \biggr)\biggr) \biggr\vert ^{2} \biggr)\\ &\quad =: I_{31}+I_{32}+I_{33}, \end{aligned} $$
(24)

where \(I_{31}= [12\lambda _{3}(M+2) E (|\bar{X}(t)|^{2} )+6a] \Delta \).

We now compute \(I_{32}\) and \(I_{33}\), respectively. From Assumptions 1 and 2, we have

$$\begin{aligned} I_{32} &\leq 12E \biggl( \biggl\vert \bar{\mathcal{D}}(t) \\ &\quad {}-\mathcal{D} \biggl( \bar{X}_{([\frac{t}{\Delta }]-1)\Delta }+ \frac{t-[\frac{t}{\Delta }] \Delta }{\Delta }(\bar{X}_{[\frac{t}{\Delta }]\Delta }-\bar{X}_{([\frac{t}{ \Delta }]-1)\Delta }),\bar{r}\biggl( \biggl(\biggl[\frac{t}{\Delta }\biggr]-1\biggr)\Delta \biggr) \biggr) \biggr\vert ^{2}\\ &\quad {}\times I_{\{\bar{r}(t)\neq \bar{r}(([\frac{t}{\Delta }]-1)\Delta )\}} \biggr) \\ &\leq 24\kappa ^{2}E \biggl( \int _{-\tau }^{0} \biggl\vert \bar{X}_{([\frac{t}{ \Delta }]-1)\Delta }( \theta )+\frac{t-[\frac{t}{\Delta }]\Delta }{ \Delta } \bigl(\bar{X}_{[\frac{t}{\Delta }]\Delta }(\theta )-\bar{X} _{([\frac{t}{\Delta }]-1)\Delta }(\theta ) \bigr) \biggr\vert ^{2}\rho (d\theta ) \biggr) \\ &\quad {}\times E [I_{\{\bar{r}(t)\neq \bar{r}(([\frac{t}{\Delta }]-1) \Delta )\}} ] \\ &\leq 24\kappa ^{2}E\Bigl(\sup_{-\tau \leq s\leq t} \bigl\vert X(s) \bigr\vert ^{2}\Bigr)E [I _{\{\bar{r}(t)\neq \bar{r}(([\frac{t}{\Delta }]-1)\Delta )\}} ] \\ &\le 24\kappa ^{2}\Bigl(N\max_{1\leq i\leq N}(-\gamma _{ii})\Delta +o( \Delta )\Bigr)E\Bigl(\sup_{-\tau \leq s\leq t} \bigl\vert X(s) \bigr\vert ^{2}\Bigr) \\ &\le C_{1}\Delta E \Vert \xi \Vert ^{2}+C_{1} \Delta E\Bigl(\sup_{0\leq s\leq t} \bigl\vert X(s) \bigr\vert ^{2}\Bigr)+o( \Delta ) \end{aligned}$$

and

$$\begin{aligned} \begin{aligned}[b] I_{33} &\leq 6\kappa ^{2}E \biggl( \int _{-\tau }^{0} \biggl\vert \frac{t-[\frac{t}{ \Delta }]\Delta }{\Delta } \bigl( \bar{X}_{[\frac{t}{\Delta }]\Delta }( \theta )-\bar{X}_{([\frac{t}{\Delta }]-1)\Delta }(\theta ) \bigr) \biggr\vert ^{2}\rho (d \theta ) \biggr) \\ &\leq 6\kappa ^{2}E \biggl(\sup_{t-\tau \leq s\leq t} \biggl\vert \bar{X}\biggl(\biggl[\frac{s}{ \Delta }\biggr]\Delta \biggr)-\bar{X}\biggl( \biggl(\biggl[\frac{s}{\Delta }\biggr]-1\biggr)\Delta \biggr) \biggr\vert ^{2} \biggr) \\ &\leq 6\kappa ^{2}E \biggl(\sup_{-\tau \leq s\leq t} \biggl\vert \bar{X}\biggl(\biggl[\frac{s}{ \Delta }\biggr]\Delta \biggr)-\bar{X}\biggl( \biggl(\biggl[\frac{s}{\Delta }\biggr]-1\biggr)\Delta \biggr) \biggr\vert ^{2} \biggr) \\ &\leq 6\kappa ^{2} \biggl[ \biggl[C_{1}\Delta +D(l)\Delta ^{ \frac{l-1}{l}}+\frac{8\lambda _{3}\Delta ^{2}}{1-\kappa } \biggr]\frac{E \Vert \xi \Vert ^{2}}{1-2\kappa } +\frac{\alpha (\Delta )}{1-2\kappa }+ \frac{2a \Delta ^{2}+4aD(l)\Delta ^{\frac{l-1}{l}}}{(1-\kappa )(1-2\kappa )} \\ &\quad{}+o(\Delta )+\frac{1}{1-2\kappa } \biggl[C_{1}\Delta +D(l) \Delta ^{\frac{l-1}{l}}+\frac{8\lambda _{3}\Delta ^{2}}{1-\kappa } \biggr]E\Bigl(\sup_{0\leq s\leq t} \bigl\vert X(s) \bigr\vert ^{2}\Bigr) \biggr], \end{aligned} \end{aligned}$$
(25)

where (4), (8), and the proof of Lemma 3.2 in [21] are used, \(D(l)=\frac{8\lambda _{3}}{1-\kappa } [(2l-1)!!N ]^{\frac{1}{l}}\), and \(l>1\) is an arbitrary integer.

In particular, when \(l=3\), (25) can be rewritten as

$$\begin{aligned} \begin{aligned}[b] I_{33} &\leq \biggl[C_{1}+D(3)+\frac{8\lambda _{3}}{1-\kappa } \biggr]\frac{6\kappa ^{2}E \Vert \xi \Vert ^{2}\Delta ^{\frac{2}{3}}}{1-2\kappa } + \frac{6 \kappa ^{2}\alpha (\Delta )}{1-2\kappa }+\frac{6\kappa ^{2} [2a+4aD(3) ]}{(1-\kappa )(1-2\kappa )}\Delta ^{\frac{2}{3}} \\ &\quad{}+o(\Delta )+ \biggl[C_{1}+D(3)+\frac{8\lambda _{3}}{1-\kappa } \biggr] \frac{6 \kappa ^{2}\Delta ^{\frac{2}{3}}}{1-2\kappa }E\Bigl(\sup_{0\leq s\leq t} \bigl\vert X(s) \bigr\vert ^{2}\Bigr) \\ &\leq C_{2} \bigl[E \Vert \xi \Vert ^{2}\Delta ^{\frac{2}{3}}+\Delta ^{ \frac{2}{3}}+\alpha \bigl(\Delta ^{\frac{2}{3}}\bigr) \bigr]+o(\Delta )+C_{2} \Delta ^{\frac{2}{3}}E\Bigl(\sup _{0\leq s\leq t} \bigl\vert X(s) \bigr\vert ^{2}\Bigr), \end{aligned} \end{aligned}$$
(26)

where \(C_{2}=\max \{ [C_{1}+D(3)+\frac{8\lambda _{3}}{1- \kappa } ]\frac{6\kappa ^{2}}{1-2\kappa }, \frac{6\kappa ^{2}}{1-2 \kappa }, \frac{6\kappa ^{2}[2a+4aD(3)]}{(1-\kappa )(1-2\kappa )} \}\).

From (24) and (26), we have

$$\begin{aligned} \begin{aligned}[b]&E \bigl\vert X(t)-\bar{X}(t) \bigr\vert ^{2} \\ &\quad \leq 12\lambda _{3}(M+2)\Delta E \bigl( \bigl\vert \bar{X}(t) \bigr\vert ^{2} \bigr)+6a \Delta \\ &\qquad{}+C_{1}\Delta E \Vert \xi \Vert ^{2}+C_{1} \Delta E\Bigl(\sup_{0\leq s\leq t} \bigl\vert X(s) \bigr\vert ^{2}\Bigr)+o( \Delta ) \\ &\qquad{}+C_{2} \bigl[E \Vert \xi \Vert ^{2}\Delta ^{\frac{2}{3}}+\Delta ^{\frac{2}{3}}+ \alpha \bigl(\Delta ^{\frac{2}{3}}\bigr) \bigr]+o(\Delta )+C_{2}\Delta ^{ \frac{2}{3}}E\Bigl(\sup _{0\leq s\leq t} \bigl\vert X(s) \bigr\vert ^{2}\Bigr) \\ &\quad \le (C_{1}+C_{2})E \Vert \xi \Vert ^{2} \Delta ^{\frac{2}{3}}+(6a+C_{2}) \Delta ^{\frac{2}{3}}+C_{2} \alpha \bigl(\Delta ^{\frac{2}{3}}\bigr)+o(\Delta ) \\ &\qquad{}+ \bigl[12\lambda _{3}(M+2)+C_{1}+C_{2} \bigr]\Delta ^{\frac{2}{3}}E\Bigl( \sup_{0\leq s\leq t} \bigl\vert X(s) \bigr\vert ^{2}\Bigr) \\ &\quad \le C_{3} \bigl[E \Vert \xi \Vert ^{2}\Delta ^{\frac{2}{3}}+\Delta ^{ \frac{2}{3}}+\alpha \bigl(\Delta ^{\frac{2}{3}}\bigr) \bigr]+o(\Delta )+C_{3} \Delta ^{\frac{2}{3}}E\Bigl(\sup _{0\leq s\leq t} \bigl\vert X(s) \bigr\vert ^{2}\Bigr), \end{aligned} \end{aligned}$$
(27)

where \(C_{3}=\max \{6a+C_{2},12\lambda _{3}(M+2)+C_{1}+C_{2} \}\).

Substituting (27) into (23) yields

$$\begin{aligned} \begin{aligned}[b] I_{3}&\leq C_{3} \bigl[E \Vert \xi \Vert ^{2}\Delta ^{\frac{1}{6}}+ \Delta ^{\frac{1}{6}}+\alpha \bigl(\Delta ^{\frac{1}{6}}\bigr) \bigr]E \int _{0}^{t}e ^{\lambda s}\,ds+C_{3} \Delta ^{\frac{1}{6}} \int _{0}^{t}e^{\lambda s}E\Bigl( \sup _{0\leq u\leq s} \bigl\vert X(u) \bigr\vert ^{2}\Bigr)\,ds \\ &\quad{}+o(\Delta )+\frac{\lambda _{3}e^{\lambda \tau } }{\lambda }E \Vert \xi \Vert ^{2}\Delta ^{\frac{1}{6}}+a\Delta ^{\frac{1}{6}}E \int _{0}^{t}e^{\lambda s}\,ds \\ &\quad{}+\lambda _{3} \bigl(1+e^{\lambda \tau } \bigr)\Delta ^{\frac{1}{6}}E \int _{0}^{t}e^{\lambda s}\sup _{0\le u \le s} \bigl\vert X(u) \bigr\vert ^{2}\,ds \\ &\le o(\Delta )+ \bigl[C_{3}E \Vert \xi \Vert ^{2} \Delta ^{\frac{1}{6}}+(a+C _{3})\Delta ^{\frac{1}{6}}+C_{3} \alpha \bigl(\Delta ^{\frac{1}{6}}\bigr) \bigr]\frac{e ^{\lambda t}}{\lambda }+ \frac{\lambda _{3}e^{\lambda \tau }-C_{3} }{ \lambda }E \Vert \xi \Vert ^{2}\Delta ^{\frac{1}{6}} \\ &\quad{}-\frac{C_{3}+a}{\lambda }\Delta ^{\frac{1}{6}}-\frac{C_{3}}{\lambda }\alpha \bigl(\Delta ^{\frac{1}{6}}\bigr)+ \bigl[C_{3}+\lambda _{3} \bigl(1+e^{\lambda \tau }\bigr) \bigr]\Delta ^{\frac{1}{6}}E \int _{0}^{t}e^{\lambda s} \sup _{0\leq u\leq s} \bigl\vert X(u) \bigr\vert ^{2}\,ds \\ &\le o(\Delta )+C_{4} \bigl[E \Vert \xi \Vert ^{2} \Delta ^{\frac{1}{6}}+ \Delta ^{\frac{1}{6}}+\alpha \bigl(\Delta ^{\frac{1}{6}} \bigr) \bigr]e^{\lambda t}+C _{4}E \Vert \xi \Vert ^{2}\Delta ^{\frac{1}{6}} \\ &\quad{}+C_{4}\Delta ^{\frac{1}{6}} \int _{0}^{t}e^{\lambda s}E\Bigl( \sup _{0\leq u\leq s} \bigl\vert X(u) \bigr\vert ^{2}\Bigr)\,ds, \end{aligned} \end{aligned}$$
(28)

where \(C_{4}=\max \{\frac{a+C_{3}}{\lambda },\frac{\lambda _{3}e ^{\lambda \tau }-C_{3}}{\lambda },C_{3}+\lambda _{3}(1+e^{\lambda \tau }) \}\).

From Assumption 1 and (21), we obtain

$$\begin{aligned} \begin{aligned}[b] I_{4}&\leq \frac{(\lambda _{2}+2\epsilon \kappa ^{2}+\epsilon \lambda _{3})E \Vert \xi \Vert ^{2}e^{\lambda \tau }}{\lambda }+ \frac{(2+\epsilon )a}{2 \epsilon }E \int _{0}^{t}e^{\lambda s}\,ds \\ &\quad{}+ \bigl[(-\lambda _{1}+2\epsilon +\epsilon \lambda _{3})+\bigl(\lambda _{2}+2 \epsilon \kappa ^{2}+ \epsilon \lambda _{3}\bigr)e^{\lambda \tau } \bigr] \int _{0}^{t}e^{\lambda s}E\Bigl(\sup _{0\le u \le s} \bigl\vert X(u) \bigr\vert ^{2}\Bigr)\,ds \end{aligned} \end{aligned}$$
(29)

and

$$\begin{aligned} \begin{aligned}[b] I_{5}&\leq E \int _{0}^{t}e^{\lambda s} \bigl\vert \mathcal{D}\bigl(\bar{X}_{s}, \bar{r}(s)\bigr)-\bar{\mathcal{D}}(s) \bigr\vert ^{2}\,ds+E \int _{0}^{t}e^{\lambda s} \bigl\vert f\bigl( \bar{X}_{s},\bar{r}(s)\bigr) \bigr\vert ^{2}\,ds \\ &\le \frac{2(\kappa ^{2}+\lambda _{3})E \Vert \xi \Vert ^{2}e^{\lambda \tau }}{\lambda }+2\lambda _{3}aE \int _{0}^{t}e^{\lambda s}\,ds \\ &\quad{}+ \bigl[2\kappa ^{2}e^{\lambda \tau }+2\lambda _{3}+2\lambda _{3}e^{ \lambda \tau } \bigr] \int _{0}^{t}e^{\lambda s}E\Bigl(\sup _{0\le u \le s} \bigl\vert X(u) \bigr\vert ^{2}\Bigr)\,ds. \end{aligned} \end{aligned}$$
(30)

Substituting (22), (28)–(30) into (20) gives

$$\begin{aligned} &e^{\lambda t}E \bigl\vert X(t)-\bar{\mathcal{D}}( t) \bigr\vert ^{2} \\ &\quad \leq E \bigl\vert \xi -\mathcal{D}\bigl(\bar{X}_{-\Delta },\bar{r}(0) \bigr) \bigr\vert ^{2} \\ &\qquad{}+\bigl(2\kappa +\kappa ^{2}\bigr)E \Vert \xi \Vert ^{2}e^{\lambda \tau }+ \bigl[\lambda +\bigl(2 \lambda \kappa +\lambda \kappa ^{2}\bigr)e^{\lambda \tau } \bigr] \int _{0}^{t}e ^{\lambda s}E\Bigl(\sup _{0\le u \le s} \bigl\vert X(u) \bigr\vert ^{2}\Bigr)\,ds \\ &\qquad{}+o(\Delta )+C_{4} \bigl[E \Vert \xi \Vert ^{2} \Delta ^{\frac{1}{6}}+ \Delta ^{\frac{1}{6}}+\alpha \bigl(\Delta ^{\frac{1}{6}} \bigr) \bigr]e^{\lambda t}+C _{4}E \Vert \xi \Vert ^{2}\Delta ^{\frac{1}{6}} \\ &\qquad{}+C_{4}\Delta ^{\frac{1}{6}} \int _{0}^{t}e^{\lambda s}E\Bigl( \sup _{0\leq u\leq s} \bigl\vert X(u) \bigr\vert ^{2}\Bigr)\,ds \\ &\qquad{}+\frac{(\lambda _{2}+2\epsilon \kappa ^{2}+\epsilon \lambda _{3})E \Vert \xi \Vert ^{2}e^{\lambda \tau }}{\lambda }+\frac{(2+\epsilon )a}{2\epsilon }E \int _{0}^{t}e^{\lambda s}\,ds \\ &\qquad{}+ \bigl[(-\lambda _{1}+2\epsilon +\epsilon \lambda _{3})+\bigl(\lambda _{2}+2 \epsilon \kappa ^{2}+ \epsilon \lambda _{3}\bigr)e^{\lambda \tau } \bigr]E \int _{0}^{t}e^{\lambda s}\sup _{0\le u \le s} \bigl\vert X(u) \bigr\vert ^{2}\,ds \\ &\qquad{}+\frac{2(\kappa ^{2}+\lambda _{3})E \Vert \xi \Vert ^{2}e^{\lambda \tau }}{ \lambda }+2\lambda _{3}aE \int _{0}^{t}e^{\lambda s}\,ds \\ &\qquad{}+ \bigl[2\kappa ^{2}e^{\lambda \tau }+2\lambda _{3}+2\lambda _{3}e^{ \lambda \tau } \bigr] \int _{0}^{t}e^{\lambda s}E\Bigl(\sup _{0\le u \le s} \bigl\vert X(u) \bigr\vert ^{2}\Bigr)\,ds \\ &\quad \le E \bigl\vert \xi -\mathcal{D}\bigl(\bar{X}_{-\Delta },\bar{r}(0) \bigr) \bigr\vert ^{2}+\frac{ \lambda (2\kappa +\kappa ^{2})+(\lambda _{2}+2\kappa ^{2}+2\lambda _{3})}{ \lambda }E \Vert \xi \Vert ^{2}e^{\lambda \tau } \\ &\qquad{}+C_{4} \bigl[E \Vert \xi \Vert ^{2}\Delta ^{\frac{1}{6}}+\Delta ^{\frac{1}{6}}+ \alpha \bigl(\Delta ^{\frac{1}{6}}\bigr) \bigr]e^{\lambda t}+2\lambda 
_{3}ae^{ \lambda t} \\ &\qquad{}+\bar{\lambda } \int _{0}^{t}e^{\lambda s}E\Bigl(\sup _{0\le u \le s} \bigl\vert X(u) \bigr\vert ^{2}\Bigr)\,ds, \end{aligned}$$

where \(\bar{\lambda }=-\lambda _{1}+\lambda +C_{4}\Delta ^{\frac{1}{6}}+2 \epsilon +\epsilon \lambda _{3}+2\lambda _{3}+(2\lambda \kappa +\lambda \kappa ^{2}+\lambda _{2}+2\epsilon \kappa ^{2}+\epsilon \lambda _{3}+2 \kappa ^{2}+2\lambda _{3})e^{\lambda \tau }\).

Using (16), we can choose a suitable positive constant λ such that \(-\lambda _{1}+\lambda +2\lambda _{3}+(2\lambda \kappa +\lambda \kappa ^{2}+\lambda _{2}+2\kappa ^{2}+2\lambda _{3})e^{ \lambda \tau }<0\). Then, for sufficiently small \(\Delta >0\) and \(\epsilon >0\), we have \(\bar{\lambda }<0\). Hence

$$\begin{aligned} \begin{aligned}[b]&E \bigl\vert X(t)-\bar{ \mathcal{D}}(t) \bigr\vert ^{2} \\ &\quad \le 2\lambda _{3}a+ \biggl[E \bigl\vert \xi -\mathcal{D} \bigl(\bar{X}_{-\Delta }, \bar{r}(0)\bigr) \bigr\vert ^{2}\\ &\qquad {}+ \frac{(\kappa ^{2}\lambda +2\kappa \lambda +\lambda _{2}+2\kappa ^{2}+2\lambda _{3})E \Vert \xi \Vert ^{2}e^{\lambda \tau }}{\lambda } \biggr]e^{-\lambda t}. \end{aligned} \end{aligned}$$
(31)

Substituting (31) into (18), we have

$$\begin{aligned} \begin{aligned}[b]&E\sup_{0\le s\le t} \bigl\vert X(s) \bigr\vert ^{2} \\ &\quad \leq \frac{(1+\beta )(2\lambda _{3}a\beta +\kappa ^{2}E \Vert \xi \Vert ^{2})}{\beta -(1+\beta )\kappa ^{2}}+e^{-\lambda t}\frac{\beta (1+ \beta )}{\beta -(1+\beta )\kappa ^{2}} \biggl[E \bigl\vert \xi -\mathcal{D}\bigl( \bar{X}_{-\Delta },\bar{r}(0)\bigr) \bigr\vert ^{2} \\ &\qquad{}+\frac{(\kappa ^{2}\lambda +2\kappa \lambda +\lambda _{2}+2\kappa ^{2}+2 \lambda _{3})E \Vert \xi \Vert ^{2}e^{\lambda \tau }}{\lambda } \biggr]. \end{aligned} \end{aligned}$$

Choosing an appropriate positive constant β such that \(\beta -(1+\beta )\kappa ^{2}>0\), we obtain, for all \(t\geq -\tau \),

$$\begin{aligned} E \bigl\vert X(t) \bigr\vert ^{2}\leq cE \Vert \xi \Vert ^{2}+ce^{-\lambda t}E \Vert \xi \Vert ^{2}< \infty . \end{aligned}$$
(32)

We now estimate the moment of the segment process \(X_{t}\). According to the Itô formula, for any \(t\geq \tau \) and \(-\tau \leq \theta \leq 0\), we have

$$\begin{aligned} \begin{aligned}[b]& \bigl\vert X(t+\theta )- \bar{\mathcal{D}}(t+\theta ) \bigr\vert ^{2} \\ &\quad = \bigl\vert X(t-\tau )-\bar{\mathcal{D}}(t-\tau ) \bigr\vert ^{2}+M(t,\theta ) \\ &\qquad{}+ \int _{t-\tau }^{t+\theta } \bigl[2 \bigl\langle X(s)- \bar{ \mathcal{D}}(s),f\bigl(\bar{X}_{s},\bar{r}(s)\bigr) \bigr\rangle + \bigl\vert g\bigl( \bar{X}_{s},\bar{r}(s)\bigr) \bigr\vert ^{2} \bigr]\,ds \\ &\quad \leq \bigl\vert X(t-\tau )-\bar{\mathcal{D}}(t-\tau ) \bigr\vert ^{2}+M(t,\theta )+ \lambda _{2} \int _{t-\tau }^{t+\theta } \int _{-\tau }^{0} \sup_{-\tau \leq r\leq s} \bigl\vert X(r+\theta ) \bigr\vert ^{2}\rho (d\theta )\,ds, \end{aligned} \end{aligned}$$
(33)

where

$$\begin{aligned} M(t,\theta )= \int _{t-\tau }^{t+\theta }2\bigl\langle X(s)- \bar{ \mathcal{D}}(s),g\bigl(\bar{X}_{s},\bar{r}(s)\bigr)\,dB(s)\bigr\rangle . \end{aligned}$$

By the Burkholder–Davis–Gundy inequality (see [10]), one obtains

$$\begin{aligned} \begin{aligned}[b]&E \Bigl(\sup_{-\tau \leq \theta \leq 0} \bigl\vert M(t,\theta ) \bigr\vert \Bigr) \\ &\quad \leq c E \biggl(\sup_{t-\tau \leq s\leq t} \bigl\vert X(s)-\bar{ \mathcal{D}}(s) \bigr\vert ^{2} \int _{t-\tau }^{t} \bigl\Vert g\bigl( \bar{X}_{s},\bar{r}(s)\bigr) \bigr\Vert ^{2}\,ds \biggr)^{ \frac{1}{2}} \\ &\quad \leq \frac{1}{2}E \Bigl(\sup_{-\tau \leq \theta \leq 0} \bigl\vert X(t+ \theta )-\bar{\mathcal{D}}(t+\theta ) \bigr\vert ^{2} \Bigr)+cE \int _{t-\tau }^{t} \bigl\vert g\bigl( \bar{X}_{s},\bar{r}(s)\bigr) \bigr\vert ^{2}\,ds \\ &\quad \leq \frac{1}{2}E \Bigl(\sup_{-\tau \leq \theta \leq 0} \bigl\vert X(t+ \theta )-\bar{\mathcal{D}}(t+\theta ) \bigr\vert ^{2} \Bigr)+cE \int _{t-\tau }^{t} \bigl\vert X(s) \bigr\vert ^{2}\,ds \\ &\qquad{}+cE \int _{t-\tau }^{t} \int _{-\tau }^{0} \bigl\vert X(s+\theta ) \bigr\vert ^{2}\rho (d \theta )\,ds. \end{aligned} \end{aligned}$$
(34)

Substituting (34) into (33), we have

$$\begin{aligned} \begin{aligned}[b]&E \Bigl(\sup_{-\tau \leq \theta \leq 0} \bigl\vert X^{\xi }(t+\theta )-\bar{ \mathcal{D}}(t+\theta ) \bigr\vert ^{2} \Bigr) \\ &\quad \leq 2E \bigl\vert X(t-\tau )-\bar{\mathcal{D}}(t-\tau ) \bigr\vert ^{2}+cE \int _{t- \tau }^{t} \bigl\vert X(s) \bigr\vert ^{2}\,ds \\ &\qquad{}+cE \int _{t-\tau }^{t} \int _{-\tau }^{0} \bigl\vert X(s+\theta ) \bigr\vert ^{2}\rho (d \theta )\,ds \\ &\quad \leq 4E \bigl\vert X(t-\tau ) \bigr\vert ^{2}+4\kappa ^{2}+cE \int _{t-2\tau }^{t} \bigl\vert X(s) \bigr\vert ^{2}\,ds \\ &\qquad{}+4\kappa ^{2}E \int _{-\tau }^{0}\sup_{-\tau \leq s\leq t} \bigl\vert X(s-\tau + \theta ) \bigr\vert ^{2}\rho (d\theta ). \end{aligned} \end{aligned}$$
(35)

Therefore, (32) and (35) lead to

$$\begin{aligned} \begin{aligned}[b] E \bigl\Vert X_{t}^{\xi } \bigr\Vert ^{2}&=E \Bigl(\sup_{-\tau \leq \theta \leq 0} \bigl\vert X(t+ \theta ) \bigr\vert ^{2} \Bigr) \\ &\leq (1+\beta )E \Bigl(\sup_{-\tau \leq \theta \leq 0} \bigl\vert X(t+\theta )- \bar{ \mathcal{D}}(t+\theta ) \bigr\vert ^{2} \Bigr) \\ &\quad{}+\frac{(1+\beta )\kappa ^{2}}{\beta }E \int _{-\tau }^{0} \sup_{-\tau \leq \theta \leq 0} \bigl\vert X(t+2\theta ) \bigr\vert ^{2}\rho (d\theta ) \\ &\leq cE \Vert \xi \Vert ^{2}+ce^{-\lambda t}E \Vert \xi \Vert ^{2}, \quad t\geq \tau . \end{aligned} \end{aligned}$$
(36)

Hence, the required assertion follows. The proof is therefore completed. □

Lemma 3.4

Under Assumptions 1 and 2, if

$$\begin{aligned} \lambda _{1}>\lambda _{2}+2\lambda _{3}+4\kappa ^{2} \end{aligned}$$
(37)

and Δ is sufficiently small, then the EM approximate solution has Property (P2), that is,

$$ \lim_{t\rightarrow \infty }\sup_{\xi ,\eta \in K}E\bigl[ \bigl\Vert X_{t}^{\xi }-X _{t}^{\eta } \bigr\Vert ^{2}\bigr]=0, $$

where K is a compact subset of \(\mathcal{C}\).

Proof

Consider the difference between two approximate solutions starting from different initial values ξ and η. It follows from (12) that

$$\begin{aligned}& X(t;\xi )-X(t;\eta )-\bigl[\bar{\mathcal{D}} (t;\xi )-\bar{\mathcal{D}} (t; \eta )\bigr] \\& \quad =\xi (0)-\eta (0)-\bigl[\mathcal{D}\bigl(\bar{X}_{-\Delta }^{\xi }, \bar{r}(0)\bigr)- \mathcal{D}\bigl(\bar{X}_{-\Delta }^{\eta }, \bar{r}(0)\bigr)\bigr]+ \int _{0}^{t}\bigl[f\bigl( \bar{X}_{s}^{\xi }, \bar{r}(s)\bigr)-f\bigl(\bar{X}_{s}^{\eta },\bar{r}(s)\bigr) \bigr]\,ds \\& \qquad{}+ \int _{0}^{t}\bigl[g\bigl(\bar{X}_{s}^{\xi }, \bar{r}(s)\bigr)-g\bigl(\bar{X}_{s}^{ \eta },\bar{r}(s)\bigr) \bigr]\,dB(s). \end{aligned}$$

By using the Itô formula, for any \(\lambda >0\),

$$\begin{aligned} \begin{aligned}[b]&e^{\lambda t}E \bigl\vert X(t; \xi )-X(t;\eta )-\bigl[\bar{\mathcal{D}}(t; \xi )-\bar{\mathcal{D}}(t;\eta )\bigr] \bigr\vert ^{2} \\ &\quad =E \bigl\vert \xi (0)-\eta (0)-\bigl[\mathcal{D}\bigl(\bar{X}_{-\Delta }^{\xi }, \bar{r}(0)\bigr)-\mathcal{D}\bigl(\bar{X}_{-\Delta }^{\eta }, \bar{r}(0)\bigr)\bigr] \bigr\vert ^{2} \\ &\qquad{}+\lambda E \int _{0}^{t}e^{\lambda s} \bigl\vert X(s;\xi )-X(s;\eta )-\bigl[\bar{ \mathcal{D}}(s;\xi )-\bar{\mathcal{D}}(s;\eta )\bigr] \bigr\vert ^{2}\,ds \\ &\qquad{}+E \int _{0}^{t}e^{\lambda s}\bigl[2\bigl\langle X(s;\xi )-X(s;\eta )-\bigl[\bar{ \mathcal{D}}(s;\xi )-\bar{\mathcal{D}}(s;\eta ) \bigr],f\bigl(\bar{X}_{s}^{ \xi },\bar{r}(s)\bigr)-f\bigl( \bar{X}_{s}^{\eta },\bar{r}(s)\bigr)\bigr\rangle \\ &\qquad{}+ \bigl\vert g\bigl(\bar{X}_{s}^{\xi },\bar{r}(s) \bigr)-g\bigl(\bar{X}_{s}^{\eta },\bar{r}(s)\bigr) \bigr\vert ^{2}\bigr]\,ds \\ &\quad \leq E \bigl\vert \xi (0)-\eta (0)-\bigl[\mathcal{D}\bigl( \bar{X}_{-\Delta }^{ \xi },\bar{r}(0)\bigr)-\mathcal{D}\bigl( \bar{X}_{-\Delta }^{\eta },\bar{r}(0)\bigr)\bigr] \bigr\vert ^{2} \\ &\qquad{}+\lambda E \int _{0}^{t}e^{\lambda s} \bigl\vert X(s;\xi )-X(s;\eta )-\bigl[\bar{ \mathcal{D}}(s;\xi )-\bar{\mathcal{D}}(s;\eta )\bigr] \bigr\vert ^{2}\,ds \\ &\qquad{}+E \int _{0}^{t}e^{\lambda s}\bigl[2\bigl\langle \bar{X}^{\xi }(s)-\bar{X}^{ \eta }(s)-\bigl[\mathcal{D}\bigl( \bar{X}_{s}^{\xi },s\bigr)-\mathcal{D}\bigl( \bar{X}_{s} ^{\eta },s\bigr)\bigr],f\bigl(\bar{X}_{s}^{\xi }, \bar{r}(s)\bigr)-f\bigl(\bar{X}_{s}^{\eta }, \bar{r}(s)\bigr) \bigr\rangle \\ &\qquad{}+ \bigl\vert g\bigl(\bar{X}_{s}^{\xi },\bar{r}(s) \bigr)-g\bigl(\bar{X}_{s}^{\eta },\bar{r}(s)\bigr) \bigr\vert ^{2}\bigr]\,ds \\ &\qquad{}+E \int _{0}^{t}e^{\lambda s}\bigl[2\bigl\langle X(s;\xi )-X(s;\eta )-\bigl[\bar{X} ^{\xi }(s)-\bar{X}^{\eta }(s) \bigr],f\bigl(\bar{X}_{s}^{\xi },\bar{r}(s)\bigr)-f\bigl( \bar{X}_{s}^{\eta },\bar{r}(s)\bigr)\bigr\rangle \bigr]\,ds \\ &\qquad{}+E \int 
_{0}^{t}e^{\lambda s}\bigl[2\bigl\langle \mathcal{D}\bigl(\bar{X}_{s}^{ \xi },\bar{r}(s)\bigr)- \mathcal{D}\bigl(\bar{X}_{s}^{\eta },\bar{r}(s)\bigr)-\bigl[ \bar{ \mathcal{D}}(s;\xi )-\bar{\mathcal{D}}(s;\eta )\bigr], \\ &f\bigl(\bar{X}_{s}^{\xi },\bar{r}(s)\bigr)-f\bigl( \bar{X}_{s}^{\eta },\bar{r}(s)\bigr) \bigr\rangle \bigr]\,ds \\ &\quad = : J_{1}+J_{2}+J_{3}+J_{4}+J_{5}. \end{aligned} \end{aligned}$$
(38)

Before estimating \(J_{i}\) (\(i=1,2,\ldots ,5\)), we note that

$$\begin{aligned} & \biggl\Vert \bar{X}_{([\frac{t}{\Delta }]-1)\Delta }^{\xi }+\frac{t-[\frac{t}{\Delta }]\Delta }{\Delta }\bigl( \bar{X}_{[\frac{t}{\Delta }]\Delta }^{\xi }-\bar{X}_{([\frac{t}{\Delta }]-1)\Delta }^{\xi }\bigr) \\ &\qquad{}-\biggl[\bar{X}_{([\frac{t}{\Delta }]-1)\Delta }^{\eta }+\frac{t-[\frac{t}{\Delta }]\Delta }{\Delta }\bigl( \bar{X}_{[\frac{t}{\Delta }]\Delta }^{\eta }-\bar{X}_{([\frac{t}{\Delta }]-1)\Delta }^{\eta }\bigr) \biggr] \biggr\Vert ^{2} \\ &\quad \leq \biggl[\frac{([\frac{t}{\Delta }]+1)\Delta -t}{\Delta } \bigl\Vert \bar{X}_{([\frac{t}{\Delta }]-1)\Delta }^{\xi }- \bar{X}_{([\frac{t}{\Delta }]-1)\Delta }^{\eta } \bigr\Vert +\frac{t-[\frac{t}{\Delta }]\Delta }{\Delta } \bigl\Vert \bar{X}_{[\frac{t}{\Delta }]\Delta }^{\xi }- \bar{X}_{[\frac{t}{\Delta }]\Delta }^{\eta } \bigr\Vert \biggr]^{2} \\ &\quad \leq \biggl[\frac{([\frac{t}{\Delta }]+1)\Delta -t}{\Delta }\Bigl( \sup_{-\tau \leq s\leq t} \bigl\vert X(s;\xi )-X(s;\eta ) \bigr\vert \Bigr) \\ &\qquad{}+\frac{t-[\frac{t}{\Delta }]\Delta }{\Delta }\Bigl(\sup_{-\tau \leq s \leq t} \bigl\vert X(s; \xi )-X(s;\eta ) \bigr\vert \Bigr) \biggr]^{2} \\ &\quad \leq \sup_{-\tau \leq s\leq t} \bigl\vert X(s;\xi )-X(s;\eta ) \bigr\vert ^{2} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned}[b]&E \bigl\vert X(t;\xi )-X(t;\eta )-\bigl[\bar{X}^{\xi }(t)-\bar{X}^{\eta }(t)\bigr] \bigr\vert ^{2} \\ &\quad \le C_{5} \bigl[\Delta ^{\frac{2}{3}}+\alpha \bigl(\Delta ^{\frac{2}{3}}\bigr)+ \Delta ^{\frac{2}{3}}e^{-\lambda t} \bigr]+o(\Delta )+12\lambda _{3} \Delta E\Bigl(\sup_{0\le s\le t} \bigl\vert X(s;\xi )-X(s;\eta ) \bigr\vert ^{2}\Bigr), \end{aligned} \end{aligned}$$
(39)

where the derivation process of (39) is similar to the one in (27), and \(C_{5}=\max \{(C_{1}+C_{2})(1+c)(E\|\xi \|^{2}+E\| \eta \|^{2}),12\lambda _{3}E\|\xi -\eta \|^{2},2C_{2},c(C_{1}+C_{2})(E \|\xi \|^{2}+E\|\eta \|^{2})\}\).

For two different given initial values ξ and η, it is easily seen that \(J_{1}\) is a constant. By Assumption 1,

$$\begin{aligned} \begin{aligned}[b] J_{2}&\leq \lambda E \int _{0}^{t}e^{\lambda s} \bigl\vert X(s;\xi )-X(s;\eta ) \bigr\vert ^{2}\,ds+ \lambda E \int _{0}^{t}e^{\lambda s} \bigl\vert \bar{ \mathcal{D}}(s;\xi )-\bar{ \mathcal{D}}(s;\eta ) \bigr\vert ^{2}\,ds \\ &\leq \lambda E \int _{0}^{t}e^{\lambda s} \bigl\vert X(s;\xi )-X(s;\eta ) \bigr\vert ^{2}\,ds+ \kappa ^{2}e^{\lambda \tau }E \Vert \xi -\eta \Vert ^{2} \\ &\quad{}+\lambda \kappa ^{2}e^{\lambda \tau } \int _{0}^{t}e^{\lambda s}E\Bigl( \sup _{0\le u\le s} \bigl\vert X(u;\xi )-X(u;\eta ) \bigr\vert ^{2}\Bigr)\,ds \\ &\leq \kappa ^{2}e^{\lambda \tau }E \Vert \xi -\eta \Vert ^{2}+\bigl[\lambda + \lambda \kappa ^{2}e^{\lambda \tau } \bigr] \int _{0}^{t}e^{\lambda s}E\Bigl( \sup _{0\le u\le s} \bigl\vert X(u;\xi )-X(u;\eta ) \bigr\vert ^{2}\Bigr)\,ds \end{aligned} \end{aligned}$$
(40)

and

$$\begin{aligned} \begin{aligned}[b] J_{3}&\leq -\lambda _{1}E \int _{0}^{t}e^{\lambda s} \bigl\vert \bar{X}(s;\xi )- \bar{X}(s;\eta ) \bigr\vert ^{2}\,ds+\lambda _{2}E \int _{0}^{t}e^{\lambda s} \int _{- \tau }^{0} \bigl\vert \bar{X}_{s}^{\xi }- \bar{X}_{s}^{\eta } \bigr\vert ^{2}\rho (d\theta )\,ds \\ &\leq \frac{\lambda _{2}E \Vert \xi -\eta \Vert ^{2}e^{\lambda \tau }}{ \lambda }+\bigl[-\lambda _{1}+\lambda _{2}e^{\lambda \tau }\bigr] \int _{0}^{t}e ^{\lambda s}E\Bigl(\sup _{0\le u\le s} \bigl\vert X(u;\xi )-X(u;\eta ) \bigr\vert ^{2}\Bigr)\,ds. \end{aligned} \end{aligned}$$
(41)

From (39), we have

$$\begin{aligned} J_{4}&\leq \Delta ^{-\frac{1}{2}}E \int _{0}^{t}e^{\lambda s} \bigl\vert X(s; \xi )-X(s;\eta )-\bigl[\bar{X}^{\xi }(s)-\bar{X}^{\eta }(s)\bigr] \bigr\vert ^{2}\,ds \\ &\quad{}+\Delta ^{\frac{1}{2}}E \int _{0}^{t}e^{\lambda s} \bigl\vert f\bigl( \bar{X}^{\xi } _{s},\bar{r}(s)\bigr)-f\bigl( \bar{X}^{\eta }_{s},\bar{r}(s)\bigr) \bigr\vert ^{2}\,ds \\ &\leq \frac{\Delta ^{\frac{1}{2}}\lambda _{3}e^{\lambda \tau }}{ \lambda }+C_{5}\bigl[\Delta ^{\frac{1}{6}}+\alpha \bigl(\Delta ^{\frac{1}{6}}\bigr)\bigr]\frac{e ^{\lambda t}}{\lambda }+C_{5}\Delta ^{\frac{1}{6}}t+o(\Delta ) \\ &\quad{}+\bigl(13+e^{\lambda \tau }\bigr)\lambda _{3}\Delta ^{\frac{1}{6}} \int _{0}^{t}e ^{\lambda s}E\Bigl(\sup _{0\le u\le s} \bigl\vert X(u;\xi )-X(u;\eta ) \bigr\vert ^{2}\Bigr)\,ds \end{aligned}$$
(42)

and

$$\begin{aligned} \begin{aligned}[b] J_{5}&\leq E \int _{0}^{t}e^{\lambda s} \bigl\vert \mathcal{D}\bigl(\bar{X}_{s}^{ \xi },\bar{r}(s)\bigr)- \mathcal{D}\bigl(\bar{X}_{s}^{\eta },\bar{r}(s)\bigr)-\bigl[ \bar{ \mathcal{D}}(s;\xi )-\bar{\mathcal{D}}(s;\eta )\bigr] \bigr\vert ^{2}\,ds \\ &\quad{}+E \int _{0}^{t}e^{\lambda s} \bigl\vert f\bigl( \bar{X}_{s}^{\xi },\bar{r}(s)\bigr)-f\bigl( \bar{X}_{s}^{\eta },\bar{r}(s)\bigr) \bigr\vert ^{2}\,ds \\ &\leq \frac{4\kappa ^{2}e^{\lambda \tau }}{\lambda }E \Vert \xi -\eta \Vert ^{2}+4\kappa ^{2}e^{\lambda \tau } \int _{0}^{t}e^{\lambda s}E\Bigl( \sup _{0\le u\le s} \bigl\vert X(u;\xi )-X(u;\eta ) \bigr\vert ^{2}\Bigr)\,ds \\ &\quad{}+\frac{\lambda _{3}e^{\lambda \tau }}{\lambda }E \Vert \xi -\eta \Vert ^{2}+ \lambda _{3}\bigl(1+e^{\lambda \tau }\bigr) \int _{0}^{t}e^{\lambda s}E\Bigl( \sup _{0\le u\le s} \bigl\vert X(u;\xi )-X(u;\eta ) \bigr\vert ^{2}\Bigr)\,ds \\ &=\bigl(\lambda _{3}+4\kappa ^{2}e^{\lambda \tau }+\lambda _{3}e^{\lambda \tau }\bigr) \int _{0}^{t}e^{\lambda s}E\Bigl(\sup _{0\le u\le s} \bigl\vert X(u;\xi )-X(u; \eta ) \bigr\vert ^{2}\Bigr)\,ds \\ &\quad{}+\frac{(4\kappa ^{2}+\lambda _{3})e^{\lambda \tau }}{\lambda }E \Vert \xi -\eta \Vert ^{2}. \end{aligned} \end{aligned}$$
(43)

Then, substituting (40)–(43) into (38), we have

$$\begin{aligned} \begin{aligned}[b]&e^{\lambda t}E \bigl\vert X(t; \xi )-X(t;\eta )-\bigl[\bar{\mathcal{D}}(t; \xi )-\bar{\mathcal{D}}(t;\eta )\bigr] \bigr\vert ^{2} \\ &\quad \le E \bigl\vert \xi -\eta -\bigl[\mathcal{D}\bigl(\bar{X}_{-\Delta }^{\xi }, \bar{r}(0)\bigr)-\mathcal{D}\bigl(\bar{X}_{-\Delta }^{\eta }, \bar{r}(0)\bigr)\bigr] \bigr\vert ^{2} \\ &\qquad{}+\kappa ^{2}e^{\lambda \tau }E \Vert \xi -\eta \Vert ^{2}+\bigl[\lambda +\lambda \kappa ^{2}e^{\lambda \tau }\bigr] \int _{0}^{t}e^{\lambda s}E\Bigl(\sup _{0\le u \le s} \bigl\vert X(u;\xi )-X(u;\eta ) \bigr\vert ^{2}\Bigr)\,ds \\ &\qquad{}+\frac{\lambda _{2}E \Vert \xi -\eta \Vert ^{2}e^{\lambda \tau }}{\lambda }+\bigl[- \lambda _{1}+\lambda _{2}e^{\lambda \tau }\bigr] \int _{0}^{t}e^{\lambda s}E\Bigl( \sup _{0\le u\le s} \bigl\vert X(u;\xi )-X(u;\eta ) \bigr\vert ^{2}\Bigr)\,ds \\ &\qquad{}+\frac{\Delta ^{\frac{1}{2}}\lambda _{3}e^{\lambda \tau }}{\lambda }+C _{5}\bigl[\Delta ^{\frac{1}{6}}+ \alpha \bigl(\Delta ^{\frac{1}{6}}\bigr)\bigr]\frac{e^{ \lambda t}}{\lambda }+C_{5} \Delta ^{\frac{1}{6}}t+o(\Delta ) \\ &\qquad{}+\bigl(13+e^{\lambda \tau }\bigr)\lambda _{3}\Delta ^{\frac{1}{6}} \int _{0}^{t}e ^{\lambda s}E\Bigl(\sup _{0\le u\le s} \bigl\vert X(u;\xi )-X(u;\eta ) \bigr\vert ^{2}\Bigr)\,ds \\ &\qquad{}+\bigl(\lambda _{3}+4\kappa ^{2}e^{\lambda \tau }+ \lambda _{3}e^{\lambda \tau }\bigr) \int _{0}^{t}e^{\lambda s}E\Bigl(\sup _{0\le u\le s} \bigl\vert X(u;\xi )-X(u; \eta ) \bigr\vert ^{2}\Bigr)\,ds \\ &\qquad{}+\frac{(4\kappa ^{2}+\lambda _{3})e^{\lambda \tau }}{\lambda }E \Vert \xi -\eta \Vert ^{2} \\ &\quad =: E \bigl\vert \xi -\eta -\bigl[\mathcal{D}\bigl(\bar{X}_{-\Delta }^{\xi }, \bar{r}(0)\bigr)-\mathcal{D}\bigl(\bar{X}_{-\Delta }^{\eta }, \bar{r}(0)\bigr)\bigr] \bigr\vert ^{2} \\ &\qquad{}+\kappa ^{2}e^{\lambda \tau }E \Vert \xi -\eta \Vert ^{2}+\frac{\lambda _{2}E \Vert \xi -\eta \Vert ^{2}e^{\lambda \tau }}{\lambda }+\frac{(4\kappa ^{2}+ \lambda _{3})e^{\lambda \tau }}{\lambda }E \Vert \xi 
-\eta \Vert ^{2} \\ &\qquad{}+\frac{\Delta ^{\frac{1}{2}}\lambda _{3}e^{\lambda \tau }}{\lambda }+C _{5}\bigl[\Delta ^{\frac{1}{6}}+ \alpha \bigl(\Delta ^{\frac{1}{6}}\bigr)\bigr]\frac{e^{ \lambda t}}{\lambda }+C_{5} \Delta ^{\frac{1}{6}}t+o(\Delta ) \\ &\qquad{}+\tilde{\lambda } \int _{0}^{t}e^{\lambda s}E\Bigl(\sup _{0\le u\le s} \bigl\vert X(u; \xi )-X(u;\eta ) \bigr\vert ^{2}\Bigr)\,ds, \end{aligned} \end{aligned}$$

where \(\tilde{\lambda }=-\lambda _{1}+\lambda +\lambda _{3}+13\lambda _{3}\Delta ^{\frac{1}{6}}+(\lambda \kappa ^{2}+\lambda _{2}+\lambda _{3} \Delta ^{\frac{1}{6}}+4\kappa ^{2}+\lambda _{3})e^{\lambda \tau }\).

Using (37), we can take Δ sufficiently small such that \(\tilde{\lambda }<0\), and then we obtain

$$\begin{aligned} \begin{aligned}[b]&E \bigl\vert X(t;\xi )-X(t;\eta )-\bigl[\bar{\mathcal{D}}(t;\xi )-\bar{ \mathcal{D}}(t;\eta )\bigr] \bigr\vert ^{2} \\ &\quad \le E \bigl\vert \xi -\eta -\bigl[\mathcal{D}\bigl(\bar{X}_{-\Delta }^{\xi }, \bar{r}(0)\bigr)-\mathcal{D}\bigl(\bar{X}_{-\Delta }^{\eta }, \bar{r}(0)\bigr)\bigr] \bigr\vert ^{2}e ^{-\lambda t} \\ &\qquad{}+ \biggl[\kappa ^{2}+\frac{\lambda _{2}+4\kappa ^{2}+\lambda _{3}}{ \lambda } \biggr]e^{\lambda \tau }e^{-\lambda t}E \Vert \xi -\eta \Vert ^{2}+e ^{-\lambda t}o(\Delta ) \\ &\quad \le ce^{-\lambda t}\bigl(E \Vert \xi -\eta \Vert ^{2}+o(\Delta )\bigr). \end{aligned} \end{aligned}$$
(44)

Then, from (44), (17), and (4), it follows that

$$\begin{aligned} \begin{aligned}[b] E \Bigl[\sup_{-\tau \le s\le t} \bigl\vert X(s;\xi )-X(s;\eta ) \bigr\vert ^{2} \Bigr] &\leq \frac{ \beta (1+\beta )}{\beta -(1+\beta )\kappa ^{2}}ce^{-\lambda t}E \Vert \xi -\eta \Vert ^{2} \\ &\le ce^{-\lambda t}E \Vert \xi -\eta \Vert ^{2}, \end{aligned} \end{aligned}$$

which implies

$$\begin{aligned} \lim_{t\rightarrow \infty }\sup_{\xi ,\eta \in K}E\bigl[ \bigl\Vert X_{t} ^{\xi }-X_{t}^{\eta } \bigr\Vert ^{2}\bigr]=0, \end{aligned}$$

as required. □

Note that combining Lemma 3.3 and Lemma 3.4 with Theorem 3.2 yields Theorem 3.1. In fact, \(\pi ^{\Delta }\) is the invariant measure of the Markov chain \((X_{k\Delta }, r_{k}^{ \Delta })\) when Δ is sufficiently small.

4 Convergence of the numerical invariant measures

The previous section shows that \(\{(X_{k\Delta }, r_{k}^{\Delta })\} _{k\ge 0}\) has a unique invariant measure \(\pi ^{\Delta }( \cdot \times \cdot )\) for every sufficiently small Δ. In this section we will show that the invariant measure sequence \(\pi ^{\Delta }(\cdot \times \cdot )\) converges weakly in \({\mathscr{P}}( \mathcal{C} \times \mathcal{S})\), as \(\Delta \to 0\), to the invariant measure \(\pi (\cdot \times \cdot )\) of the exact solution of Eq. (1).

Let \(y(t)\) be the (exact) solution of Eq. (1), and let \(Y(t)=(y(t),r(t))\). Then \(Y(t)\) is a time homogeneous Markov process. If the process starts from \((\xi ,i)\in \mathcal{C}\times \mathcal{S}\), we denote the process by \(Y^{\xi ,i}(t)=(y^{\xi ,i}(t),r^{i}(t))\). Let \(P_{t}((\xi ,i),\cdot \times \cdot )\) be the probability measure induced by \(Y^{\xi ,i}(t)\), namely

$$ P_{t}\bigl((\xi ,i), A \times B\bigr) = P\bigl\{ Y^{\xi ,i}(t) \in A \times B\bigr\} , \quad \forall A\times B\subset \mathcal{C}\times \mathcal{S}. $$

Clearly, \(P_{t}((\xi ,i),\cdot \times \cdot )\) is also the transition probability measure of the Markov process \(Y(t)\). The process \(Y(t)\) is said to be stable in distribution if there exists \(\pi (\cdot \times \cdot ) \in {\mathscr{P}}(\mathcal{C}\times \mathcal{S})\) such that the probability measure \(P_{t}((\xi ,i), \cdot \times \cdot )\) converges weakly to \(\pi (\cdot \times \cdot )\) as \(t \to \infty \) for every \((\xi , i) \in \mathcal{C}\times \mathcal{S}\), that is,

$$ \lim_{t\to \infty } d_{\mathbb{L}}\bigl(P_{t}\bigl((\xi ,i), \cdot \times \cdot \bigr),\pi (\cdot \times \cdot )\bigr) = 0. $$

It is easily seen that if \(Y(t)\) is stable in distribution, then \(\pi (\cdot \times \cdot )\) is the unique invariant measure of \(Y(t)\). By arguments similar to those in [2, 19], we have the following.

Theorem 4.1

If Assumptions 1 and 2 hold, then the Markov process \(Y(t)\) is stable in distribution.
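Stability in distribution can also be probed numerically. The sketch below is purely illustrative and not part of the paper's framework: it discretizes the toy scalar equation \(dx=-x\,dt+0.2\,dB(t)\) (our own choice, with no switching or neutral term) by the EM method from two different initial points and compares the empirical laws at a large time through the one-dimensional Wasserstein-1 distance, which dominates \(d_{\mathbb{L}}\) because the test functions of \(d_{\mathbb{L}}\) are 1-Lipschitz.

```python
import math
import random

def terminal_samples(x0, n_paths=1000, n_steps=500, dt=0.01, seed=4):
    """Terminal values x(T), T = n_steps*dt, of the EM scheme for the toy
    equation dx = -x dt + 0.2 dB (an illustrative choice, not from the paper)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += -x * dt + 0.2 * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def w1(u, v):
    """Wasserstein-1 distance between two equal-size one-dimensional empirical
    measures: the mean absolute difference of the sorted samples. It upper
    bounds the bounded-Lipschitz metric d_L."""
    return sum(abs(a - b) for a, b in zip(sorted(u), sorted(v))) / len(u)

d0 = abs(2.0 - (-1.0))                  # distance between the initial points
dT = w1(terminal_samples(2.0), terminal_samples(-1.0, seed=5))
```

After \(T=5\) the two empirical laws are almost indistinguishable (`dT` is two orders of magnitude below `d0`), which is the numerical signature of stability in distribution.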

To reveal the important relationship between \(\pi ^{\Delta }\) and π, let us establish another lemma.

Lemma 4.2

Let Assumptions 1 and 2 hold, and fix any \((\xi , i)\in \mathcal{C} \times \mathcal{S}\). Then, for any given \(T>0\) and \(\chi >0\), there exists a sufficiently small scalar \(\Delta ^{*}>0\) such that

$$\begin{aligned} d_{\mathbb{L}}\bigl(P_{k}^{\Delta }\bigl((\xi , i), \cdot \times \cdot \bigr), P_{k \Delta } \bigl((\xi , i), \cdot \times \cdot \bigr) \bigr)< \chi \end{aligned}$$
(45)

provided \(\Delta < \Delta ^{*}\) and \(k\Delta \le T\).

Proof

Let \(X^{(\xi , i), \Delta }(t)\) be the continuous EM approximate solution. Under Assumptions 1 and 2, we show in the Appendix that

$$\begin{aligned} \lim_{\Delta \to 0}E \Bigl[\sup_{0\le t \le T} \bigl\vert X^{(\xi , i), \Delta }(t)-y^{(\xi ,i)}(t) \bigr\vert ^{2} \Bigr]=0. \end{aligned}$$
(46)

Hence, there exists a sufficiently small scalar \(\Delta ^{*}>0\) such that

$$\begin{aligned} E\bigl( \bigl\vert X_{k}^{(\xi , i), \Delta }-y^{(\xi , i)}(k\Delta ) \bigr\vert \bigr)< \chi \end{aligned}$$
(47)

provided \(\Delta < \Delta ^{*}\) and \(k\Delta \le T\).

Therefore, for any \(f \in \mathbb{L}\),

$$\begin{aligned} & \bigl\vert Ef\bigl(X_{k}^{(\xi , i), \Delta }, r_{k}^{i, \Delta } \bigr)-Ef\bigl(y^{(\xi , i)}(k \Delta ), r^{i}(k \Delta ) \bigr) \bigr\vert \\ &\quad \le E\bigl( \bigl\vert X_{k}^{(\xi , i), \Delta }-y^{(\xi , i)}(k \Delta ) \bigr\vert \bigr) < \chi . \end{aligned}$$
(48)

The required assertion follows. □

We can now show that the numerical invariant measure sequence will weakly converge to the invariant measure of the exact solution.

Theorem 4.3

Under Assumptions 1 and 2, we have

$$\begin{aligned} \lim_{\Delta \to 0} d_{ \mathbb{L}} \bigl(\pi ^{\Delta }(\cdot \times \cdot ), \pi (\cdot \times \cdot )\bigr) = 0. \end{aligned}$$
(49)

Proof

Fix any \((\xi ,i) \in \mathcal{C}\times \mathcal{S}\), and let \(\chi >0\) be arbitrary. By Theorem 4.1, there exists \(T_{1}>0\) such that

$$\begin{aligned} d_{\mathbb{L}}\bigl(P_{T_{1}}\bigl((\xi , i), \cdot \times \cdot \bigr),\pi (\cdot \times \cdot )\bigr)\le \frac{\chi }{3}. \end{aligned}$$
(50)

By Theorem 3.2, there exist \(\Delta _{0}>0\) and \(T_{2}>0\) such that, for any \(\Delta <\Delta _{0}\) and \(k\Delta \ge T _{2}\),

$$\begin{aligned} d_{\mathbb{L}}\bigl(P_{k}^{\Delta }\bigl( (\xi , i), \cdot \times \cdot \bigr), \pi ^{\Delta }(\cdot \times \cdot )\bigr)\le \frac{\chi }{3}. \end{aligned}$$
(51)

Set \(T=T_{1}\vee T_{2}\). By Lemma 4.2, there exists a constant \(\Delta ^{*} >0\) such that

$$\begin{aligned} d_{\mathbb{L}}\bigl(P_{k}^{\Delta }\bigl( (\xi ,i), \cdot \times \cdot \bigr), P_{k \Delta }\bigl( (\xi ,i), \cdot \times \cdot \bigr) \bigr) < \frac{\chi }{3} \end{aligned}$$
(52)

provided \(\Delta < \Delta ^{*}\) and \(k\Delta \le T+1\). Now, for any \(\Delta <\Delta _{0} \wedge \Delta ^{*}\), letting \(k=[T/\Delta ]+1\) and using (50), (51), and (52), we derive

$$\begin{aligned} &d_{\mathbb{L}}\bigl(\pi ^{\Delta }(\cdot \times \cdot ), \pi (\cdot \times \cdot )\bigr) \\ &\quad \le d_{\mathbb{L}}\bigl(P_{k}^{\Delta }\bigl( (\xi ,i), \cdot \times \cdot \bigr), \pi ^{\Delta }(\cdot \times \cdot )\bigr) +d_{\mathbb{L}}\bigl(P_{k\Delta }\bigl(( \xi ,i), \cdot \times \cdot \bigr),\pi (\cdot \times \cdot )\bigr) \\ &\qquad{}+d_{\mathbb{L}}\bigl(P_{k}^{\Delta }\bigl( (\xi ,i), \cdot \times \cdot \bigr), P _{k\Delta }\bigl((\xi ,i), \cdot \times \cdot \bigr)\bigr) \\ &\quad < \frac{\chi }{3}+\frac{\chi }{3}+\frac{\chi }{3}=\chi , \end{aligned}$$

as required. □

Let us close this section with a remark. Theorem 4.3 not only establishes the existence of an invariant measure for the numerical solution to Eq. (1), but also shows that the numerical invariant measures \(\pi ^{\Delta }\) converge weakly to the invariant measure π of the exact solution as \(\Delta \to 0\). In other words, the theorem provides a numerical method for approximating the invariant measure π of NSFDEwMSs (1).
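The remark above suggests a practical recipe: run the EM scheme for a long time and take the empirical law of the iterates as an approximation of the invariant measure. The following sketch illustrates the idea on a simple scalar switching SDE (not Eq. (1) itself); the equation, coefficients, step size and switching rates are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative switching SDE: dx = -x dt + sigma(r) dw, r in {0, 1},
# with switching rates q01 = q10 = 1, so the chain spends half its
# time in each regime.  All parameter values are assumptions.
sigma = [0.5, 1.0]
dt, n_steps, burn_in = 0.01, 400_000, 50_000

x, r = 0.0, 0
samples = []
for k in range(n_steps):
    # Markov switching: one-step jump probability ~ rate * dt
    if rng.random() < 1.0 * dt:
        r = 1 - r
    # Euler-Maruyama step for the diffusion in the current regime
    x += -x * dt + sigma[r] * np.sqrt(dt) * rng.normal()
    if k >= burn_in:
        samples.append(x)

samples = np.asarray(samples)
# The empirical law of (x, r) over one long run approximates the
# invariant measure; here we report two of its moments.
print(round(samples.mean(), 3), round(samples.var(), 3))
```

This is the Krylov–Bogolyubov idea in its crudest form: after a burn-in period, time averages along a single trajectory stand in for averages against the invariant measure.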

5 Example

In this section, a numerical example is provided to illustrate the theoretical results established in the previous sections.

Example 5.1

Let \(w(t)\) be a scalar Brownian motion. Let \(r(t)\) be a right-continuous Markov chain taking values in \(\mathcal{S}=\{1,2\}\) with generator

$$ \Gamma = \begin{bmatrix} -a & a \\ b & -b \end{bmatrix}, $$

where a, b are positive numbers such that \(\pi =(\frac{1}{2}, \frac{1}{2})\) is the stationary distribution of the Markov chain (which forces \(a=b\)). Assume that \(w(t)\) and \(r(t)\) are independent. Consider a scalar neutral stochastic functional differential equation
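Since the generator above has stationary distribution \((b/(a+b), a/(a+b))\), the requirement \(\pi =(\frac{1}{2},\frac{1}{2})\) forces \(a=b\). A minimal sketch of the chain, taking \(a=b=1\) and a step size of 0.01 purely for illustration, confirms the equal occupation frequencies:

```python
import numpy as np

rng = np.random.default_rng(1)

# For the generator [[-a, a], [b, -b]] the stationary distribution is
# (b/(a+b), a/(a+b)), so pi = (1/2, 1/2) forces a = b.  We take
# a = b = 1 purely for illustration.
a = b = 1.0
dt, n_steps = 0.01, 200_000

r = 0  # states coded 0 and 1 (standing for S = {1, 2})
time_in_0 = 0
for _ in range(n_steps):
    rate = a if r == 0 else b
    # First-order approximation of P(jump in [t, t+dt)) = rate*dt + o(dt)
    if rng.random() < rate * dt:
        r = 1 - r
    time_in_0 += (r == 0)

frac0 = time_in_0 / n_steps
print(round(frac0, 2))  # close to 1/2
```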

$$ \begin{aligned}[b] \,d\bigl[x(t)-\mathcal{D} \bigl(x_{t},r(t)\bigr)\bigr]= f\bigl(x_{t},r(t)\bigr)\,dt+g \bigl(x_{t},r(t)\bigr)\,dw(t), \end{aligned} $$
(53)

where

$$\begin{aligned}& \mathcal{D}\bigl(x_{t},r(t)\bigr)= \textstyle\begin{cases} 0.025\int ^{0}_{-\tau }x(t+\theta )\,d\theta , & r(t)=1, \\ 0.05\int ^{0}_{-\tau }x(t+\theta )\,d\theta , & r(t)=2, \end{cases}\displaystyle \\& f\bigl(x_{t},r(t)\bigr)= \textstyle\begin{cases} -0.1x(t)+0.05\int ^{0}_{-\tau }x(t+\theta )\,d\theta , & r(t)=1, \\ -0.2x(t)+0.1\int ^{0}_{-\tau }x(t+\theta )\,d\theta , & r(t)=2, \end{cases}\displaystyle \end{aligned}$$

and

$$ g\bigl(x_{t},r(t)\bigr)= \textstyle\begin{cases} 0.025\int ^{0}_{-\tau }x(t+\theta )\,d\theta , & r(t)=1, \\ 0.05\int ^{0}_{-\tau }x(t+\theta )\,d\theta , & r(t)=2. \end{cases} $$

It is easy to verify that, when \(r(t)=1\),

$$\begin{aligned}& 2 \bigl\langle \bigl(\varphi (0)-\psi (0)\bigr)-\bigl[\mathcal{D}(\varphi ,1)- \mathcal{D}(\psi ,1)\bigr],f(\varphi ,1)-f(\psi ,1) \bigr\rangle + \bigl\vert g( \varphi ,1)-g(\psi ,1) \bigr\vert ^{2} \\& \quad \leq -0.2 \bigl\vert \varphi (0)-\psi (0) \bigr\vert ^{2}+0.1 \int ^{0}_{-\tau } \bigl\vert \varphi ( \theta )-\psi ( \theta ) \bigr\vert ^{2}\rho (d\theta ), \\& \bigl\vert f(\varphi ,1)-f(\psi ,1) \bigr\vert ^{2}\vee \bigl\vert g(\varphi ,1)-g(\psi ,1) \bigr\vert ^{2} \\& \quad \leq 0.01 \biggl( \bigl\vert \varphi (0)-\psi (0) \bigr\vert ^{2}+ \int ^{0}_{-\tau } \bigl\vert \varphi (\theta )-\psi ( \theta ) \bigr\vert ^{2}\rho (d\theta ) \biggr), \end{aligned}$$

and

$$ \bigl\vert \mathcal{D}(\varphi ,1)-\mathcal{D}(\psi ,1) \bigr\vert \leq 0.025 \int ^{0}_{- \tau } \bigl\vert \varphi (\theta )-\psi ( \theta ) \bigr\vert \rho (d\theta ). $$

Therefore, \(\lambda _{1}=0.2\), \(\lambda _{2}=0.1\), \(\lambda _{3}=0.01\), and \(\kappa =0.025\). Consequently,

$$ \begin{aligned}[b] &\lambda _{1}=0.2>4\lambda _{3}+\lambda _{2}+2\kappa ^{2}=0.14125, \\ &\lambda _{1}=0.2>\lambda _{2}+2\lambda _{3}+4 \kappa ^{2}=0.1225. \end{aligned} $$

Thus, for Eq. (53) when \(r(t)=1\), the conditions in Lemma 3.3 and Lemma 3.4 are fulfilled.
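The two bounds above are simple arithmetic and can be checked directly; the snippet below recomputes both right-hand sides from the state-1 constants.

```python
# Quick arithmetic check of the state-1 constants read off above.
lam1, lam2, lam3, kappa = 0.2, 0.1, 0.01, 0.025

rhs_a = 4 * lam3 + lam2 + 2 * kappa**2
rhs_b = lam2 + 2 * lam3 + 4 * kappa**2

print(round(rhs_a, 5), round(rhs_b, 5))  # 0.14125 0.1225
assert lam1 > rhs_a and lam1 > rhs_b
```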

When \(r(t)=2\), similarly, we have

$$\begin{aligned}& 2 \bigl\langle \bigl(\varphi (0)-\psi (0)\bigr)-\bigl[\mathcal{D}(\varphi ,2)- \mathcal{D}(\psi ,2)\bigr],f(\varphi ,2)-f(\psi ,2) \bigr\rangle + \bigl\vert g( \varphi ,2)-g(\psi ,2) \bigr\vert ^{2} \\& \quad \leq -0.4 \bigl\vert \varphi (0)-\psi (0) \bigr\vert ^{2}+0.22 \int ^{0}_{-\tau } \bigl\vert \varphi (\theta )-\psi ( \theta ) \bigr\vert ^{2}\rho (d\theta ), \\& \bigl\vert f(\varphi ,2)-f(\psi ,2) \bigr\vert ^{2}\vee \bigl\vert g(\varphi ,2)-g(\psi ,2) \bigr\vert ^{2} \\& \quad \leq 0.04 \biggl( \bigl\vert \varphi (0)-\psi (0) \bigr\vert ^{2}+ \int ^{0}_{-\tau } \bigl\vert \varphi (\theta )-\psi ( \theta ) \bigr\vert ^{2}\rho (d\theta ) \biggr), \end{aligned}$$

and

$$ \bigl\vert \mathcal{D}(\varphi ,2)-\mathcal{D}(\psi ,2) \bigr\vert \leq 0.05 \int ^{0}_{- \tau } \bigl\vert \varphi (\theta )-\psi ( \theta ) \bigr\vert \rho (d\theta ). $$

Therefore \(\lambda _{1}=0.4\), \(\lambda _{2}=0.22\), \(\lambda _{3}=0.04\), and \(\kappa =0.05\), which implies that

$$ \begin{aligned}[b] &\lambda _{1}=0.4>4\lambda _{3}+\lambda _{2}+2\kappa ^{2}=0.385, \\ &\lambda _{1}=0.4>\lambda _{2}+2\lambda _{3}+4 \kappa ^{2}=0.31. \end{aligned} $$

Thus, for Eq. (53) when \(r(t)=2\), the conditions in Lemma 3.3 and Lemma 3.4 are satisfied. By Theorem 3.2, there exists a sufficiently small \(\Delta \in (0,1)\) such that the EM approximate solution \((X_{k\Delta },r^{\Delta }_{k})\) is stable in distribution.
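To make the example concrete, the following is a minimal sketch of one possible EM discretization of Eq. (53). The step size, delay length, initial segment and chain rates are illustrative assumptions, and the distributed delay integrals are approximated by Riemann sums over the stored segment; this is a sketch of the scheme, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (assumptions, not taken from the paper):
tau, dt, T = 1.0, 0.01, 50.0
a = b = 1.0                       # chain rates, so pi = (1/2, 1/2)
m = int(tau / dt)                 # grid points per delay interval
n = int(T / dt)

# Coefficients of Eq. (53); I below stands for the distributed delay
# integral int_{-tau}^{0} x(t + theta) d theta.
c_D = {1: 0.025, 2: 0.05}
c_f = {1: (-0.1, 0.05), 2: (-0.2, 0.1)}
c_g = {1: 0.025, 2: 0.05}

def delay_int(seg):
    # Left Riemann sum over the stored segment (x(t-tau), ..., x(t-dt))
    return seg[:-1].sum() * dt

x = np.ones(m + 1)                # initial segment: x(theta) = 1 on [-tau, 0]
r = 1
for _ in range(n):
    I = delay_int(x)
    D_old = c_D[r] * I
    drift = c_f[r][0] * x[-1] + c_f[r][1] * I
    dN = drift * dt + c_g[r] * I * np.sqrt(dt) * rng.normal()
    if rng.random() < (a if r == 1 else b) * dt:
        r = 3 - r                 # toggle between states 1 and 2
    # Neutral EM step: advance N(t) = x(t) - D(x_t, r(t)), then recover x;
    # the new D uses only already-known grid points.
    new_seg = np.append(x[1:], 0.0)
    D_new = c_D[r] * delay_int(new_seg)
    x = np.append(x[1:], (x[-1] - D_old) + dN + D_new)

print(np.isfinite(x[-1]))  # True: the trajectory stays finite
```

With these contractive coefficients the simulated trajectory decays toward zero, consistent with the stability in distribution asserted by Theorem 3.2.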

6 Conclusion

In this paper, the stability in distribution of the numerical solution of neutral stochastic functional differential equations with Markovian switching has been studied via the Euler–Maruyama method. Sufficient conditions for this stability have been presented, generalizing some existing results, for example [8, 21, 23–28]. Meanwhile, the strong convergence between the exact solution and the numerical solution of such equations has also been established.

References

  1. Anderson, W.J.: Continuous-Time Markov Chains. Springer, Berlin (1991)

  2. Bao, J., Shao, J., Yuan, C.: Invariant measures for path-dependent random diffusions (2017). arXiv:1706.05638

  3. Du, N.H., Dang, N.H., Dieu, N.T.: On stability in distribution of stochastic differential delay equations with Markovian switching. Syst. Control Lett. 65, 43–49 (2014)

  4. Hu, G., Wang, K.: Stability in distribution of neutral stochastic functional differential equations with Markovian switching. J. Math. Anal. Appl. 385, 757–769 (2012)

  5. Kolmanovskii, V., Koroleva, N., Maizenberg, T., Mao, X., Matasov, A.: Neutral stochastic differential delay equations with Markovian switching. Stoch. Anal. Appl. 21, 819–847 (2003)

  6. Kolmanovskii, V.B., Nosov, V.R.: Stability of Functional Differential Equations. Academic Press, San Diego (1986)

  7. Kuang, Y.: Delay Differential Equations with Applications in Population Dynamics. Academic Press, San Diego (1993)

  8. Mao, W., Mao, X.: On the approximations of solutions to neutral SDEs with Markovian switching and jumps under non-Lipschitz conditions. Appl. Math. Comput. 230, 104–119 (2014)

  9. Mao, X.: Exponential stability in mean square of neutral stochastic functional differential equations. Syst. Control Lett. 26, 245–251 (1995)

  10. Mao, X.: Stochastic Differential Equations and Applications. Horwood, Chichester (1997)

  11. Mao, X.: Stochastic functional differential equations with Markovian switching. Funct. Differ. Equ. 6, 375–396 (1999)

  12. Mao, X.: Numerical solutions of stochastic functional differential equations. LMS J. Comput. Math. 6, 141–161 (2003)

  13. Mao, X., Shen, Y., Yuan, C.: Almost surely asymptotic stability of neutral stochastic differential delay equations with Markovian switching. Stoch. Process. Appl. 34, 1385–1406 (2008)

  14. Mao, X., Yuan, C.: Stochastic Differential Equations with Markovian Switching. Imperial College Press, London (2006)

  15. Mazenc, F.: Stability analysis of time-varying neutral time-delay systems. IEEE Trans. Autom. Control 60, 540–546 (2015)

  16. Mo, H., Li, M., Deng, F., Mao, X.: Exponential stability of the Euler–Maruyama method for neutral stochastic functional differential equations with jumps. Sci. China Inf. Sci. 61, 70214 (2018)

  17. Mohammed, S.-E.A.: Stochastic Functional Differential Equations. Pitman, London (1984)

  18. Ngoc, P.H.A.: Exponential stability of coupled linear delay time-varying differential difference equations. IEEE Trans. Autom. Control 63, 843–848 (2018)

  19. Tan, L., Jin, W., Suo, Y.: Stability in distribution of neutral stochastic functional differential equations. Stat. Probab. Lett. 107, 27–36 (2015)

  20. Teel, A.R., Subbaraman, A., Sferlazza, A.: Stability analysis for stochastic hybrid systems: a survey. Automatica 50, 2435–2456 (2014)

  21. Wu, F., Mao, X.: Numerical solutions of neutral stochastic functional differential equations. SIAM J. Numer. Anal. 46, 1821–1841 (2008)

  22. Yin, G., Zhu, C.: Hybrid Switching Diffusions: Properties and Applications. Springer, Berlin (2010)

  23. Yu, Z.H.: Almost sure and mean square exponential stability of numerical solution for neutral stochastic functional differential equations. Int. J. Comput. Math. 92, 132–150 (2015)

  24. Yuan, C., Mao, X.: Stability in distribution of numerical solutions for stochastic differential equations. Stoch. Anal. Appl. 22, 1133–1150 (2004)

  25. Yuan, C., Zou, J., Mao, X.: Stability in distribution of stochastic differential delay equations with Markovian switching. Syst. Control Lett. 50, 195–207 (2003)

  26. Zhou, S., Wu, F.: Convergence of numerical solutions to neutral stochastic delay differential equations with Markovian switching. Int. J. Comput. Appl. Math. 229, 85–96 (2009)

  27. Zhou, S.B.: Exponential stability of numerical solution to neutral stochastic functional differential equation. Appl. Math. Comput. 266, 441–461 (2015)

  28. Zong, X., Wu, F., Huang, C.: Exponential mean square stability of the θ approximations for neutral stochastic differential delay equations. Int. J. Comput. Appl. Math. 286, 172–185 (2015)

Acknowledgements

The authors sincerely thank the associate editor and the reviewers for their valuable suggestions and comments, which have greatly improved the quality of this work.

Availability of data and materials

Not applicable.

Authors’ information

Mrs. Yuru Hu is currently working towards her Master's degree in the Department of Mathematics, School of Sciences, Nanchang University, China. Her research interests include the numerical solution of stochastic differential equations. Dr Huabin Chen is an associate professor in the Department of Mathematics, School of Sciences, Nanchang University, China. His research fields are stochastic differential equations, stochastic systems and control, and hybrid systems. Dr Chenggui Yuan is a full professor in the Department of Mathematics, Swansea University, Swansea, UK. His research fields are stochastic differential equations, deviation principles, ergodicity, and dynamical systems.

Funding

This work is partially supported by the National Natural Science Foundation of China (61364005, 11401292, 61773401, 11561027, 11661039, 71371193), the Natural Science Foundation of Jiangxi Province of China (20161BAB211018, 20171BAB201007, 20171BCB23001), and the Foundation of Jiangxi Provincial Educations of China (GJJ150444, GJJ160061, GJJ14155).

Author information

Contributions

Mrs. YH, the first author, mainly wrote this paper. Dr HC made some revisions. Professor CY provided the idea for this work and gave some valuable suggestions on the revision. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Huabin Chen.

Ethics declarations

Competing interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Theorem A.1

Under Assumptions 1 and 2, we have

$$ \lim_{\Delta \rightarrow 0}E\Bigl(\sup_{0\leq t\leq T} \bigl\vert y(t)-X(t) \bigr\vert ^{2}\Bigr)=0, \quad \forall T>0. $$

Proof

We first note from Theorem 2.1 and Lemma 3.3 that there exists a positive constant H̄ such that

$$ E\Bigl(\sup_{-\tau \leq t\leq T} \bigl\vert y(t) \bigr\vert ^{2}\Bigr)\vee E\Bigl(\sup_{-\tau \leq t\leq T} \bigl\vert X(t) \bigr\vert ^{2}\Bigr) \leq \bar{H}. $$
(54)

Let j be a sufficiently large integer and define the stopping times

$$ u_{j}:=\inf \bigl\{ t\geq 0: \Vert y_{t} \Vert \geq j \bigr\} , \qquad v_{j}:=\inf \bigl\{ t\geq 0: \Vert X_{t} \Vert \geq j\bigr\} , \qquad \rho _{j}:=u_{j}\wedge v_{j}, $$

where we set \(\inf \emptyset =\infty \) as usual. Letting \(e(t)=y(t)-X(t)\), it follows that

$$\begin{aligned} \begin{aligned}[b] \bigl\vert e(t\wedge \rho _{j}) \bigr\vert ^{2}&= \bigl\vert y(t\wedge \rho _{j})-X(t \wedge \rho _{j}) \bigr\vert ^{2} \\ &= \biggl\vert \mathcal{D}\bigl(y_{t},r(t)\bigr)-\bar{ \mathcal{D}}(t)+ \int _{0}^{t \wedge \rho _{j}}\bigl[f\bigl(y_{s},r(s) \bigr)-f\bigl(\bar{X}_{s},\bar{r}(s)\bigr)\bigr]\,ds \\ &\quad{}+ \int _{0}^{t\wedge \rho _{j}}\bigl[g\bigl(y_{s},r(s) \bigr)-g\bigl(\bar{X}_{s},\bar{r}(s)\bigr)\bigr]\,dB(s) \biggr\vert ^{2} \\ &\le 3 \bigl\vert \mathcal{D}\bigl(y_{t},r(t)\bigr)-\bar{ \mathcal{D}}(t) \bigr\vert ^{2} \\ &\quad{}+3T \int _{0}^{t\wedge \rho _{j}} \bigl\vert f\bigl(y_{s},r(s) \bigr)-f\bigl(\bar{X}_{s}, \bar{r}(s)\bigr) \bigr\vert ^{2} \,ds \\ &\quad{}+3 \biggl\vert \int _{0}^{t\wedge \rho _{j}}\bigl[g\bigl(y_{s},r(s) \bigr)-g\bigl(\bar{X}_{s}, \bar{r}(s)\bigr)\bigr]\,dB(s) \biggr\vert ^{2}. \end{aligned} \end{aligned}$$

By the Doob martingale inequality (see [10]), we have, for any \(t_{1}\leq T\),

$$\begin{aligned} \begin{aligned}[b]&E\Bigl[\sup_{0\leq t\leq t_{1}} \bigl\vert e(t\wedge \rho _{j}) \bigr\vert ^{2}\Bigr] \\ &\quad \leq 3E\Bigl[\sup_{0\leq t\leq t_{1}} \bigl\vert \mathcal{D} \bigl(y_{t},r(t)\bigr)-\bar{ \mathcal{D}}(t) \bigr\vert ^{2}\Bigr] \\ &\qquad{}+3TE \int _{0}^{t_{1}\wedge \rho _{j}} \bigl\vert f\bigl(y_{s},r(s) \bigr)-f\bigl(\bar{X}_{s}, \bar{r}(s)\bigr) \bigr\vert ^{2} \,ds \\ &\qquad{}+12E \int _{0}^{t_{1}\wedge \rho _{j}} \bigl\vert g\bigl(y_{s},r(s) \bigr)-g\bigl(\bar{X}_{s}, \bar{r}(s)\bigr) \bigr\vert ^{2} \,ds. \end{aligned} \end{aligned}$$
(55)

For any \(s\in (0,t_{1}\wedge \rho _{j}]\), we derive

$$\begin{aligned}& \bigl\vert f\bigl(y_{s}, r(s)\bigr)-f\bigl(\bar{X}_{s}, \bar{r}(s)\bigr) \bigr\vert ^{2} \\& \quad \le 2 \bigl\vert f\bigl(\bar{X}_{s}, \bar{r}(s)\bigr)-f\bigl( \bar{X}_{s}, r(s)\bigr) \bigr\vert ^{2}+2 \bigl\vert f \bigl( y _{s},r(s)\bigr)-f\bigl(\bar{X}_{s}, r(s)\bigr) \bigr\vert ^{2}. \end{aligned}$$

Let \(\bar{n}=[s/\Delta ]\), the integer part of \(s/\Delta \). Then

$$\begin{aligned}& E \int _{0}^{s} \bigl\vert f\bigl( \bar{X}_{u}, \bar{r}(u)\bigr)- f\bigl(\bar{X}_{u}, r(u) \bigr) \bigr\vert ^{2}\,du \\& \quad = \sum_{k=0}^{\bar{n}} E \int _{t_{k}}^{t_{k+1}} \bigl\vert f\bigl( \bar{X}_{t_{k}}, r(t_{k})\bigr)- f\bigl(\bar{X}_{t_{k}}, r(u)\bigr) \bigr\vert ^{2}\,du, \end{aligned}$$
(56)

where \(t_{\bar{n}+1}\) is now set to be T.

Let \(I_{G}\) be the indicator function of the set G. Moreover, in the remainder of the proof, C is a positive constant depending only on s, \(\lambda _{3}\), ξ, and \(\max_{1 \le i \le N}(-\gamma _{ii})\) but independent of Δ; in particular, it may change from line to line. With these notations, we observe that

$$\begin{aligned}& E \int _{t_{k}}^{t_{k+1}}|f\bigl({\bar{X}}_{t_{k}}, r(t_{k})\bigr)- f\bigl({\bar{X}} _{t_{k}}, r(u) \bigr)|^{2}\,du \\& \quad \le 2 E \int _{t_{k}}^{t_{k+1}} \bigl[ \bigl\vert f\bigl({ \bar{X}}_{t_{k}}, r(t_{k})\bigr) \bigr\vert ^{2} + \bigl\vert f\bigl({\bar{X}}_{t_{k}}, r(u)\bigr) \bigr\vert ^{2} \bigr] I_{\{r(u) \neq r(t_{k}) \}}\,du \\& \quad \le C E \int _{t_{k}}^{t_{k+1}} \Vert {\bar{X}}_{t_{k}} \Vert ^{2}I_{\{r(u) \neq r(t_{k}) \}}\,du \\& \quad \le C \int _{t_{k}}^{t_{k+1} }E \bigl[ E \bigl[ \Vert { \bar{X}}_{t_{k}} \Vert ^{2} I_{\{r(u) \neq r(t_{k}) \}} | r(t_{k}) \bigr] \bigr]\,du \\& \quad =C \int _{t_{k}}^{t_{k+1} } E \bigl[ E \bigl[ \Vert { \bar{X}}_{t_{k}} \Vert ^{2}|r(t _{k}) \bigr] E \bigl[ I_{\{r(u) \neq r(t_{k}) \}} | r(t_{k}) \bigr] \bigr]\,du, \end{aligned}$$

where in the last step we use the fact that \({\bar{X}}_{t_{k}}\) and \(I_{\{r(u) \neq r(t_{k})\}}\) are conditionally independent with respect to the σ-algebra generated by \(r(t_{k})\).

By (54) and Assumption 1, we have

$$\begin{aligned}& E \int _{t_{k}}^{t_{k+1}} \bigl\vert f\bigl( \bar{X}(t_{k}), r(t_{k})\bigr)- f\bigl(\bar{X}(t _{k}), r(u)\bigr) \bigr\vert ^{2}\,du \\& \quad \le \bigl(C \Delta +o(\Delta )\bigr) \int _{t_{k}}^{t_{k+1}} E \bigl\Vert \bar{X}(t_{k}) \bigr\Vert ^{2}\,du \\& \quad \le \Delta \bigl(C\Delta +o(\Delta )\bigr), \end{aligned}$$
(57)

where C denotes a positive constant independent of t, which may change from line to line.

Substituting (57) into (56) gives

$$ E \int _{0}^{s} \bigl\vert f\bigl( \bar{X}_{u}, \bar{r}(u)\bigr)- f\bigl(\bar{X}_{u}, r(u) \bigr) \bigr\vert ^{2}\,du \le C\Delta +o(\Delta ). $$
(58)

On the other hand, by using the techniques developed in [12] and Assumption 1, we have

$$\begin{aligned}& E \int _{0}^{s} \bigl\vert f\bigl( y_{u}, r(u)\bigr)-f\bigl(\bar{X}_{u}, r(u)\bigr) \bigr\vert ^{2}\,du \\& \quad \le 2E \int _{0}^{s} \bigl\vert f\bigl( y_{u}, r(u)\bigr)-f\bigl( X_{u}, r(u)\bigr) \bigr\vert ^{2}\,du+2E \int _{0}^{s} \bigl\vert f\bigl( X_{u}, r(u)\bigr)-f\bigl(\bar{X}_{u}, r(u)\bigr) \bigr\vert ^{2}\,du \\& \quad \le CE \int _{0}^{s} \int _{-\tau }^{0} \bigl\vert y(u+\theta )-X(u+\theta ) \bigr\vert ^{2} \rho (d\theta ) \,du\\& \qquad{}+CE \int _{0}^{s} \int _{-\tau }^{0} \bigl\vert X(u+\theta )- \bar{X}_{u}(\theta ) \bigr\vert ^{2}\rho (d\theta )\,du \\& \quad \le CE \int _{0}^{s} \Bigl[\sup_{0\le r\le u} \bigl\vert y(r)-X(r) \bigr\vert ^{2} \Bigr]\,du +CE \int _{0}^{s} \int _{-\tau }^{0}E \bigl\vert X(u+\theta )- \bar{X}_{u}(\theta ) \bigr\vert ^{2} \rho (d\theta )\,du \\& \quad \le CE \int _{0}^{s} \Bigl[\sup_{0\le r\le u} \bigl\vert y(r)-X(r) \bigr\vert ^{2} \Bigr]\,du+C \beta (\Delta ), \end{aligned}$$

where \(\beta (\Delta )\) is dependent on Δ as defined in [12].

Therefore,

$$\begin{aligned}& E \int _{0}^{s} \bigl\vert f\bigl(y_{u}, r(u)\bigr)-f\bigl(\bar{X}_{u}, \bar{r}(u)\bigr) \bigr\vert ^{2}\,du \\& \quad \le CE \int _{0}^{s} \Bigl[\sup_{0\le r\le u} \bigl\vert y(r)-X(r) \bigr\vert ^{2} \Bigr]\,du+C \beta (\Delta )+C \Delta +o(\Delta ), \end{aligned}$$
(59)

and the estimate for \(E\int _{0}^{s}|g(y_{u}, r(u))-g(\bar{X}_{u}, \bar{r}(u))|^{2}\,du\) can be obtained similarly; the details are omitted to save space.

Then, by (10) and Assumption 1, we have

$$\begin{aligned}& E \bigl\vert \mathcal{D}\bigl(y_{s},r(s)\bigr)-\bar{\mathcal{D}}(s) \bigr\vert ^{2} \\& \quad \le 2E \bigl\vert \mathcal{D}\bigl(y_{s},r(s)\bigr)- \mathcal{D}\bigl(\bar{X}_{s},\bar{r}(s)\bigr) \bigr\vert ^{2}+2E \bigl\vert \mathcal{D}\bigl(\bar{X}_{s},\bar{r}(s) \bigr)-\bar{\mathcal{D}}(s) \bigr\vert ^{2} \\& \quad \le 2E \bigl\vert \mathcal{D}\bigl(y_{s},r(s)\bigr)- \mathcal{D}\bigl(\bar{X}_{s},\bar{r}(s)\bigr) \bigr\vert ^{2}+2 \kappa ^{2}E \int _{-\tau }^{0} \vert \bar{X}_{[\frac{s}{\Delta }]\Delta }- \bar{X}_{([\frac{s}{\Delta }]-1)\Delta } \vert ^{2}\rho (d\theta ) \\& \quad \le 2\kappa ^{2}E\Bigl[\sup_{0\le r\le s} \bigl\vert y(r)-X(r) \bigr\vert ^{2}\Bigr]+ C\beta (\Delta )+ C\Delta + \alpha (\Delta ). \end{aligned}$$
(60)

Substituting (59) and (60) into (55) yields

$$\begin{aligned}& E\Bigl[\sup_{0\leq t\leq t_{1}} \bigl\vert e(t\wedge \rho _{j}) \bigr\vert ^{2}\Bigr] \\& \quad \le C\beta (\Delta )+ C\Delta +\alpha (\Delta )+o(\Delta )+CE \int _{0}^{t_{1}}\Bigl[\sup_{0\le u\le s} \bigl\vert y(u)-X(u) \bigr\vert ^{2}\Bigr]\,ds. \end{aligned}$$

By the Gronwall inequality,

$$\begin{aligned} E\Bigl[\sup_{0\leq t\leq t_{1}} \bigl\vert e(t\wedge \rho _{j}) \bigr\vert ^{2}\Bigr]\le \bigl[ C\beta ( \Delta )+ C\Delta +\alpha (\Delta )+o(\Delta )\bigr]e^{Ct_{1}}. \end{aligned}$$
(61)

Letting \(j \rightarrow \infty \), we obtain

$$\begin{aligned} E\Bigl[\sup_{0\leq t\leq T} \bigl\vert e(t) \bigr\vert ^{2}\Bigr]\leq \bigl[ C\beta (\Delta )+ C\Delta + \alpha (\Delta )+o(\Delta )\bigr]e^{CT}, \end{aligned}$$
(62)

as required. □
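The strong convergence asserted by Theorem A.1 can be illustrated numerically on an equation with a known closed-form solution. The sketch below uses geometric Brownian motion, chosen only because its exact solution is available so the strong error is directly computable (it is not Eq. (1)), and checks that halving the step size shrinks the mean pathwise error when both schemes are driven by the same Brownian paths.

```python
import numpy as np

rng = np.random.default_rng(3)

# dx = mu*x dt + sigma*x dw has the exact solution
# x(t) = x0 * exp((mu - sigma^2/2) t + sigma W(t)).
mu, sigma, x0, T = 0.1, 0.5, 1.0, 1.0
n_paths, n_fine = 200, 256        # fine step T/256, coarse step T/128

dt = T / n_fine
dW = np.sqrt(dt) * rng.normal(size=(n_paths, n_fine))
W = np.cumsum(dW, axis=1)
exact_T = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W[:, -1])

def em_endpoint(dW_steps, h):
    # Euler-Maruyama along the given Brownian increments.
    x = np.full(dW_steps.shape[0], x0)
    for k in range(dW_steps.shape[1]):
        x = x + mu * x * h + sigma * x * dW_steps[:, k]
    return x

err_fine = np.abs(em_endpoint(dW, dt) - exact_T).mean()
# Coarse scheme on the SAME Brownian paths: sum increments pairwise.
dW_coarse = dW[:, 0::2] + dW[:, 1::2]
err_coarse = np.abs(em_endpoint(dW_coarse, 2 * dt) - exact_T).mean()

print(err_fine < err_coarse)  # True: halving the step reduces the error
```

Since EM has strong order 1/2, the error ratio between the two step sizes should be roughly \(\sqrt{2}\) on average.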

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Hu, Y., Chen, H. & Yuan, C. Numerical solutions of neutral stochastic functional differential equations with Markovian switching. Adv Differ Equ 2019, 81 (2019). https://doi.org/10.1186/s13662-019-1957-z
