Numerical method of highly nonlinear and nonautonomous neutral stochastic differential delay equations with Markovian switching
Advances in Difference Equations volume 2020, Article number: 688 (2020)
Abstract
In this paper, we establish a partially truncated Euler–Maruyama scheme for highly nonlinear and nonautonomous neutral stochastic differential delay equations with Markovian switching. We investigate the strong convergence rate and almost sure exponential stability of the numerical solutions under the generalized Khasminskii-type condition.
1 Introduction
Stochastic differential equations play an important role in various fields, such as biology, chemistry, and finance [3, 20, 27]. In practice, the parameters and even the structure of a stochastic system may change abruptly when unexpected events occur; such systems can be modeled by stochastic differential equations with Markovian switching. Mao and Yuan [24] studied stochastic differential equations with Markovian switching in depth. Many stochastic systems depend not only on the present and past states but also on the derivatives of the past states; such systems can be described by neutral stochastic differential delay equations (NSDDEs) [20]. Kolmanovskii et al. [12] established a fundamental theory for neutral stochastic differential delay equations with Markovian switching (NSDDEwMSs) and discussed some important properties of the solutions.
In many cases the true solutions of the equations cannot be found explicitly, so it is very useful to study numerical solutions in explicit form. The Euler–Maruyama (EM) method for stochastic differential delay equations with Markovian switching (SDDEwMSs) was investigated in [25] and [37]. Wu and Mao [34] showed the convergence of the EM method for neutral stochastic functional differential equations. However, Hutzenthaler et al. [9] showed that the pth moments of the EM approximations diverge to infinity for any \(p\in [1,\infty )\) when the coefficients grow superlinearly. Many implicit methods were established to approximate the solutions of equations with superlinearly growing coefficients [2, 4, 8, 11, 26, 30, 32, 33]. Owing to the advantages of explicit schemes, such as lower computational cost, plenty of modified EM methods have been developed to approximate the solutions of superlinear stochastic differential equations. The tamed EM scheme was proposed in [10] to approximate the solutions of stochastic differential equations with a one-sided Lipschitz drift coefficient and a globally Lipschitz diffusion coefficient. Sabanis [28, 29] developed tamed EM schemes for nonlinear stochastic differential equations. More details on other explicit numerical methods can be found in [1, 16, 18]. In addition, Mao introduced the truncated EM method in [21] and obtained its convergence rate in [22]. Guo et al. [7] then discussed the convergence rate of the truncated EM method for stochastic differential delay equations. The truncated EM method for time-changed nonautonomous stochastic differential equations was studied in [19]. To obtain the asymptotic behaviors more easily, Guo et al. [6] proposed the partially truncated EM method. In [38], the partially truncated EM method for stochastic differential delay equations was proposed. Cong et al. [5] used the partially truncated EM method to obtain the convergence rate and almost sure exponential stability of highly nonlinear SDDEwMSs.
Tan and Yuan [33] obtained convergence rates of the theta-method for nonlinear neutral stochastic differential delay equations driven by Brownian motion and Poisson jumps, but the stability as time goes to infinity was not analyzed. In [39], the convergence of the EM method for NSDDEwMSs was proved, but the convergence rate was not given. To the best of our knowledge, few papers concern numerical solutions of highly nonlinear and nonautonomous NSDDEwMSs. Therefore, in this paper, we give the strong convergence rate of the partially truncated EM method for highly nonlinear and nonautonomous NSDDEwMSs.
Moreover, many scholars are interested in the asymptotic behaviors of stochastic systems [3, 5, 6, 20, 24, 31]. The almost sure asymptotic stability of NSDDEwMSs was discussed in [23]. Li and Mao [15] then established a LaSalle-type stability theorem for NSDDEwMSs. Liu et al. [17] showed the mean square polynomial stability of the EM method and the backward EM method for stochastic differential equations. The almost sure exponential stability of EM approximations for stochastic differential delay equations was investigated by means of the semimartingale convergence theorem in [36]. The exponential mean square stability of the split-step theta method for NSDDEs was investigated in [40]. Lan and Yuan [14] studied the exponential stability of the exact solutions and θ-EM (\(1/2< \theta \leq 1 \)) approximations to NSDDEwMSs. Lan [13] gave the asymptotic mean-square and almost sure exponential stability of the modified truncated EM method for NSDDEs under a local Lipschitz condition and a nonlinear growth condition. However, there is little literature studying the almost sure exponential stability of the partially truncated EM method for highly nonlinear and nonautonomous NSDDEwMSs. The second goal of this paper is to fill this gap.
This paper is organized as follows. We introduce some useful notations and establish the partially truncated EM scheme for NSDDEwMSs in Sect. 2. In Sect. 3, we discuss the strong convergence rate. In Sect. 4, we show the almost sure exponential stability of numerical solutions. Section 5 contains two examples to illustrate that our main result covers a large class of highly nonlinear and nonautonomous NSDDEwMSs.
2 Mathematical preliminaries
Unless otherwise specified, we use the following notation. If A is a vector or matrix, its transpose is denoted by \(A^{T}\). For \(x\in \mathbb{R}^{n}\), let \(|x|\) denote its Euclidean norm. If A is a matrix, denote by \(|A| = \sqrt{\operatorname{trace}(A^{T}A)}\) its trace norm. By \(A \le 0\) and \(A<0\) we mean that A is nonpositive and negative definite, respectively. For real numbers a, b, we denote \(a\wedge b = \min\{a, b\}\) and \(a\vee b=\max\{a,b\}\). Let \(\lfloor a \rfloor \) be the largest integer that does not exceed a. Let \(\mathbb{R} _{+} = [0,+\infty )\) and \(\tau > 0\). By \(\mathscr{C}([-\tau ,0];\mathbb{R}^{n})\) we denote the family of continuous functions ν from \([-\tau ,0] \) to \(\mathbb{R}^{n}\) with the norm \(\|\nu \|=\sup_{-\tau \le \tilde{\theta } \le 0}|\nu ( \tilde{\theta })|\). If H is a set, then \(\mathbb{I}_{H}\) denotes its indicator function, that is, \(\mathbb{I}_{H}(\omega )=1\) if \(\omega \in H\) and \(\mathbb{I}_{H}(\omega )=0\) if \(\omega \notin H\). Let C stand for a generic positive real constant different in different cases.
Let \(( \varOmega , \mathcal{F}, \{\mathcal{F}_{t}\}_{t\ge 0}, \mathbb{P} )\) be a complete probability space with a filtration \(\{\mathcal{F}_{t}\}_{t\ge 0}\) satisfying the usual conditions (i.e., it is increasing and right continuous, and \(\mathcal{F}_{0}\) contains all \(\mathbb{P}\)-null sets). Let \(\mathbb{E}\) denote the expectation with respect to \(\mathbb{P}\). For \(p>0\), let \(\mathscr{L}_{\mathcal{F}_{0}}^{p}([-\tau ,0];\mathbb{R}^{n})\) denote the family of all \(\mathcal{F}_{0}\)-measurable \(\mathscr{C}([-\tau ,0];\mathbb{R}^{n})\)-valued random variables ξ such that \(\mathbb{E} \|\xi \|^{p} <\infty \). Let \(B(t) = (B_{1}(t),\ldots ,B_{m}(t) ) ^{T}\) be an m-dimensional Brownian motion defined on the probability space.
Let \(r(t)\) (\(t\ge 0\)) be a right-continuous Markov chain on the probability space taking values in a finite state space \(\mathbb{S}=\{1, 2, \ldots , N\}\) with generator \(\varGamma =(\gamma _{ij})_{N \times N}\) given by
where \(\varDelta >0\), and \(\gamma _{ij}\) is the transition rate from i to j with \(\gamma _{ij}>0\) if \(i\neq j\), whereas \(\gamma _{ii} = -\sum_{j\neq i} \gamma _{ij}\). We suppose that the Markov chain r is independent of the Brownian motion B. As is well known [31], almost every sample path of r is a right-continuous step function with a finite number of simple jumps in any finite subinterval of \(\mathbb{R}_{+}\), that is, there is a sequence of stopping times \(0=\tau _{0}<\tau _{1}<\tau _{2}<\cdots <\tau _{k} \rightarrow \infty \) almost surely such that
where \(\mathbb{I}\) is the indicator function defined as before. Hence r is constant on each interval \([\tau _{k} , \tau _{k+1} )\):
In this paper, we consider highly nonlinear and nonautonomous neutral stochastic differential delay equations with Markovian switching of the form
with initial data
where \(r_{0}\) is an \(\mathbb{S}\)-valued \(\mathcal{F}_{0}\)-measurable random variable. Here \(f: \mathbb{R}_{+}\times \mathbb{R}^{n} \times \mathbb{R}^{n} \times \mathbb{S}\rightarrow \mathbb{R}^{n}\), \(g: \mathbb{R}_{+}\times \mathbb{R}^{n} \times \mathbb{R}^{n} \times \mathbb{S} \rightarrow \mathbb{R}^{n \times m}\), and \(D: \mathbb{R}^{n} \times \mathbb{S} \rightarrow \mathbb{R}^{n}\) are all Borel-measurable functions. We suppose that the drift and diffusion coefficients can be decomposed as
To estimate the partially truncated EM method for (2.1), we need the following lemma [24].
Lemma 2.1
Given \(\varDelta >0\), let \(r_{k}^{\varDelta }=r(k\varDelta )\) for \(k\geq 0\). Then \(\{r_{k}^{\varDelta }, k=0,1,2,\ldots \}\) is a discrete Markov chain with the one-step transition probability matrix
We now impose two standard hypotheses on the initial data and the neutral term.
Assumption 2.2
There exist constants \(K_{1} >0\) and \(\alpha \in (0,1]\) such that
Assumption 2.3
(The contractive mapping)
\(D(0,i)=0\), and there exists a constant \(K_{2} \in (0,1)\) such that
for all \(x,y \in \mathbb{R} ^{n} \) and \(i\in \mathbb{S}\).
By Assumption 2.3 we have \(|D(x,i)|\leq K_{2} |x|\) for all \(x \in \mathbb{R} ^{n} \) and \(i\in \mathbb{S}\).
Since \(\gamma _{ij}\) is independent of x, the paths of r can be generated before x is approximated. The discrete Markov chain \(\{r_{k}^{\varDelta }, k=0,1,2,\ldots \}\) can be generated as follows. Compute the one-step transition probability matrix \(\mathbb{P}(\varDelta )\). Let \(r_{0}^{\varDelta }=i_{0}\) and generate a random number \(\xi _{1}\) uniformly distributed on \([0,1]\). Define
where we set \(\sum_{j=1}^{0} \mathbb{P}_{i_{0},j}(\varDelta )=0\) as usual. Then independently generate a new random number \(\xi _{2}\) uniformly distributed on \([0,1]\). Define
Repeating this procedure, we can obtain a trajectory of \(\{r_{k}^{\varDelta }, k=1,2,\ldots \}\). The procedure can be applied independently to get more trajectories. After generating the discrete Markov chain \(\{r_{k}^{\varDelta }, k=0,1,2,\ldots \}\), we can now define the partially truncated EM approximate solution for NSDDEwMSs (2.1) with initial data (2.2).
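The sampling procedure above can be sketched in code. The following is a minimal Python/NumPy illustration (not from the paper): the helper names `matrix_exp` and `simulate_chain` are ours, the one-step matrix \(\mathbb{P}(\varDelta )=e^{\varDelta \varGamma }\) of Lemma 2.1 is computed by a truncated Taylor series, and states are indexed from 0.

```python
import numpy as np

def matrix_exp(A, terms=40):
    """Truncated Taylor series for e^A (adequate here since Delta*Gamma is small)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def simulate_chain(Gamma, i0, n_steps, dt, rng):
    """Sample r_0^Delta, ..., r_n^Delta by inverting the cumulative rows of P(Delta)."""
    P = matrix_exp(dt * np.asarray(Gamma, dtype=float))  # one-step transition matrix
    states = [i0]
    for _ in range(n_steps):
        u = rng.random()                    # uniform xi_{k+1} on [0, 1)
        cum = np.cumsum(P[states[-1]])      # cumulative transition probabilities
        # pick j with cum[j-1] <= u < cum[j]; clip guards against rounding in cum[-1]
        j = min(int(np.searchsorted(cum, u, side='right')), P.shape[0] - 1)
        states.append(j)
    return np.array(states)

# Two-state generator in the style of Sect. 5; initial state i0 = 0 (i.e., state "1").
rng = np.random.default_rng(42)
Gamma = np.array([[-1.0, 1.0], [2.0, -2.0]])
path = simulate_chain(Gamma, 0, 200, 0.01, rng)
```

Repeating the call with independent generators produces independent trajectories, as described above.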
To define the partially truncated EM scheme, we first choose a strictly increasing continuous function \(\varphi :\mathbb{R} _{+} \rightarrow \mathbb{R}_{+}\) such that \(\varphi (w) \to \infty \) as \(w \rightarrow \infty \) and
Let \(\varphi ^{-1}\) denote the inverse function of φ. Hence \(\varphi ^{-1}\) is a strictly increasing continuous function from \([\varphi (1),\infty )\) to \(\mathbb{R}_{+}\). Then we also choose \(K_{0} \geq 1\vee \varphi (1)\) and a strictly decreasing function \(h: (0,1] \rightarrow (0,\infty )\) such that
For a given step size \(\varDelta \in (0,1]\), define the truncated mapping \(\pi _{\varDelta }\) from \(\mathbb{R} ^{n}\) to the closed ball \(\{ x\in \mathbb{R} ^{n}: |x|\leq \varphi ^{-1}(h(\varDelta ))\}\) by
where we let \(\frac{x}{|x|} =0\) for \(x=0\). Then we can define the truncated functions
for \(x,y \in \mathbb{R} ^{n}\). Thus we obtain that
Moreover, we can easily get that for any \(x,y \in \mathbb{R}^{n}\),
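For concreteness, the truncated mapping \(\pi _{\varDelta }\) can be sketched as follows, using \(\varphi (w)=4w^{5}\) and \(h(\varDelta )=K_{0}\varDelta ^{-1/8}\) from Example 5.1 with \(K_{0}=4\). This is an illustrative sketch; the function names are ours.

```python
import numpy as np

K0 = 4.0                                    # satisfies K0 >= 1 v phi(1) = 4

def phi_inv(u):
    """Inverse of phi(w) = 4 w^5 (Example 5.1)."""
    return (u / 4.0) ** 0.2

def h(dt):
    """h(Delta) = K0 * Delta^{-1/8}, strictly decreasing on (0, 1]."""
    return K0 * dt ** (-0.125)

def pi_delta(x, dt):
    """Project x onto the closed ball |x| <= phi^{-1}(h(Delta)),
    with the convention x/|x| = 0 when x = 0."""
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(x)
    if r == 0.0:
        return x
    bound = phi_inv(h(dt))
    return (min(r, bound) / r) * x

inside = pi_delta(np.array([0.3, 0.4]), 0.5)   # |x| = 0.5 < bound: unchanged
clipped = pi_delta(np.array([3.0, 4.0]), 1.0)  # bound = phi_inv(4) = 1: norm clipped to 1
```

Note that \(\pi _{\varDelta }\) only rescales the norm, so the direction of x is preserved; this is exactly what makes the truncated coefficients inherit the structure of F and G.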
Let us now establish the discrete-time partially truncated EM numerical solutions to approximate the true solution. For some positive integer M, we take the step size \(\varDelta =\tau /M\); clearly, Δ can be made arbitrarily small by choosing M sufficiently large. Define \(t_{k} = k\varDelta \) for \(k=-M, -M+1, -M+2, \ldots , -1, 0, 1, 2, \ldots \) . Set \(X_{\varDelta }(t_{k})=\xi (t_{k})\) for \(k=-M, -M+1, -M+2, \ldots , -1, 0\) and then form
for \(k=0, 1, 2, \ldots \) , where \(\varDelta B_{k} = B(t_{k+1})-B(t_{k})\). To form continuous-time step approximations, define
where \(\mathbb{I}\) is the indicator function. As usual, there are two kinds of continuous-time step approximations. The first one, whose sample paths are not continuous, is
The other one with continuous sample paths is
It is easy to see that \(X_{\varDelta }(t_{k})=\bar{x}_{\varDelta }(t_{k})=x_{\varDelta }(t_{k})\); that is, the three approximations coincide at the grid points \(t_{k}\).
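As an illustration, one sample path of the scheme can be computed as follows. This is a scalar sketch under simplifying assumptions that are ours, not the paper's: the Markov chain is frozen in state 1, the coefficients are those of Remark 3.8, the delay is taken as \(\tau =0.1\), and φ, h are as in Example 5.1 with \(K_{0}=4\). Only the superlinear parts F and G are evaluated at truncated arguments; the linearly growing part F̃ (and the absent G̃) are not.

```python
import numpy as np

rng = np.random.default_rng(7)
tau, M, T = 0.1, 10, 1.0                 # delay, steps per delay, horizon (tau is our choice)
dt = tau / M                             # step size Delta = tau / M
bound = dt ** (-0.025)                   # phi^{-1}(h(Delta)) for phi(w)=4w^5, h(D)=4*D^{-1/8}

def D(y):        return -y / 6.0                                    # neutral term D(y, 1)
def F(t, x, y):  return -2.0 * y ** 3                               # superlinear drift part
def Ftil(t, x, y): return np.cbrt(t * (1.0 - t)) * y - 10.0 * x + 2.0 * y  # linear drift part
def G(t, x, y):  return np.cbrt(t * (1.0 - t)) * abs(y) ** 1.5      # superlinear diffusion

trunc = lambda v: np.clip(v, -bound, bound)     # scalar version of pi_Delta

n = int(round(T / dt))
X = np.empty(n + M + 1)
X[:M + 1] = 1.0                                 # initial segment xi = 1 on [-tau, 0]
for k in range(n):
    j = k + M                                   # array index of t_k
    t, x, y = k * dt, X[j], X[j - M]
    fD = F(t, trunc(x), trunc(y)) + Ftil(t, x, y)   # f_Delta = F_Delta + F~
    gD = G(t, trunc(x), trunc(y))                   # g_Delta = G_Delta (G~ = 0 here)
    dB = rng.normal(0.0, np.sqrt(dt))               # Brownian increment Delta B_k
    # X_{k+1} - D(X_{k+1-M}) = X_k - D(X_{k-M}) + f_Delta * dt + g_Delta * dB
    X[j + 1] = D(X[j + 1 - M]) + x - D(y) + fD * dt + gD * dB
```

The truncation keeps the cubic drift and the 3/2-power diffusion bounded on each step, which is precisely what prevents the moment explosion observed for the plain EM method.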
3 Strong convergence rate
In this section, we estimate the strong convergence rate of the partially truncated EM method for (2.1). Now, to achieve this goal, we have to impose the following assumptions on the coefficients.
Assumption 3.1
There exist constants \(K_{3} >0\) and \(\beta \geq 0\) such that
and
for all \(t\in [0,T]\), \(x,y,\bar{x},\bar{y} \in \mathbb{R} ^{n} \), and \(i\in \mathbb{S}\).
By Assumption 3.1 we get that there exists a constant \(\bar{K}_{3}>0\) such that
and
for all \(t\in [0,T]\), \(x,y \in \mathbb{R} ^{n} \), and \(i\in \mathbb{S}\), where \(\bar{K}_{3}=4K_{3} + \sup_{t\in [0,T],i\in \mathbb{S}} [\tilde{F}(t,0,0,i)+ \tilde{G}(t,0,0,i)+F(t,0,0,i)+G(t,0,0,i)]\). We also derive from Assumption 3.1 that
for all \(t\in [0,T]\), \(x,y,\bar{x},\bar{y} \in \mathbb{R} ^{n} \), and \(i\in \mathbb{S}\).
Before stating the next assumption, we introduce functions \(\bar{V}_{i}\), \(i=1,2,3\), such that for any \(x,y \in \mathbb{R} ^{n}\),
for some \(K_{\bar{V}i}>0\) and \(l_{i}\geq 1\). Denote \(l_{v}=\max \{l_{1},l_{2},l_{3}\}\).
Assumption 3.2
There exist constants \(K_{4} >0\) and \(\bar{q} > 2\) such that
for all \(t\in [0,T]\), \(x,y,\bar{x},\bar{y} \in \mathbb{R} ^{n} \), and \(i\in \mathbb{S}\).
By Assumption 3.2 we obtain that for any \(q \in (2,\bar{q})\),
where \(\bar{K}_{4}=2K_{3} +\frac{K_{3}^{2} (q -1)(\bar{q} -1)}{\bar{q} -q}\). The proof is trivial, so we omit it.
Assumption 3.3
There exist constants \(K_{5} >0\) and \(\bar{p} > \bar{q}\) such that
for all \(t\in [0,T]\), \(x,y \in \mathbb{R} ^{n} \), and \(i\in \mathbb{S}\).
By Assumption 3.3 we derive that for any \(p \in [2,\bar{p})\),
where \(\bar{K}_{5}=3\bar{K}_{3} + \frac{3\bar{K}_{3}^{2} (p -1)(\bar{p} -1)}{2(\bar{p} -p)}\).
Assumption 3.4
There exist constants \(K_{6} >0\), \(K_{7}>0\), \(\theta \in (0,1]\), and \(\sigma \in (0,1]\) such that
for all \(t_{1}, t_{2}\in [0,T]\), \(x,y \in \mathbb{R} ^{n} \), and \(i\in \mathbb{S}\), where β is as in Assumption 3.1.
The following lemma shows that the pth moment of the true solution is bounded. It can be proved similarly to Theorem 2.4 in [12] by means of the technique used in Theorem 2.1 of [35].
Lemma 3.5
Let Assumptions 3.1 and 3.3 hold. Then the neutral stochastic differential delay equation with Markovian switching (2.1) with initial data (2.2) has a unique solution \(x(t)\) on \(t\geq -\tau \). In addition, this solution has the property that
To get the strong convergence rate, we impose another assumption.
Assumption 3.6
There exist constants \(K_{8} >0\) and \(\bar{p} > \bar{q} \) such that
for all \(t\in [0,T]\), \(x,y \in \mathbb{R} ^{n} \), and \(i\in \mathbb{S}\).
By Assumption 3.6 we can show that for any \(p \in [2,\bar{p})\),
where \(\bar{K}_{8}=3\bar{K}_{3} + \frac{3\bar{K}_{3}^{2} (p -1)(\bar{p} -1)}{2(\bar{p} -p)}\).
Remark 3.7
When \(D(\cdot ,\cdot )=0\), we can derive that for any functions satisfying Assumption 3.3,
for all \(t\in [0,T]\), \(x,y \in \mathbb{R} ^{n} \), and \(i\in \mathbb{S}\), where \(\tilde{K}_{8}=2K_{5}([1/\varphi ^{-1}(h(1))]\vee 1)\). In other words, Assumption 3.6 can be eliminated if there is no neutral term.
Remark 3.8
In fact, there are plenty of functions such that \(D(y,i)\), \(F(t,x,y,i)\), and \(G(t,x,y,i)\) satisfy Assumption 3.3 and the corresponding \(F_{\varDelta }(t,x,y,i)\) and \(G_{\varDelta }(t,x,y,i)\) satisfy Assumption 3.6. For example, when \(i=1\), define \(D(y,1)=-\frac{1}{6} y\), \(f(t,x,y,1)=-2y^{3}+(t(1-t))^{\frac{1}{3}}y -10x +2y\), \(g(t,x,y,1)=(t(1-t))^{\frac{1}{3}}|y|^{\frac{3}{2}}\) for \(t\in [0,1]\) and \(x,y \in \mathbb{R} ^{1} \). Thus \(F(t,x,y,1)=-2y^{3} \) and \(G(t,x,y,1)=(t(1-t))^{\frac{1}{3}}|y|^{\frac{3}{2}}\). We can easily prove that Assumptions 3.3 and 3.6 are satisfied. A detailed proof is presented in Sect. 5.
Lemma 3.9
Let Assumptions 2.3, 3.1, and 3.6 hold. Then for any \(p \in [2,\bar{p})\), we have
Proof
For any \(\varDelta \in ( 0,1 ]\) and \(t \in [0,T]\), by Itô’s formula we derive that
Let us first estimate \(A_{1}\):
By Assumptions 2.3 and 3.1 and Young’s inequality we derive that
Moreover, for any \(t\in [0,T]\), there always is an integer \(k\geq 0\) such that \(t\in [t_{k},t_{k+1})\). By Hölder's inequality and the Burkholder–Davis–Gundy (BDG) inequality we have
Thus, by (2.8), (2.10), and (3.19) and Young’s inequality we get
Now, we are handling \(A_{2}\). By Assumptions 2.3 and 3.6 and Hölder’s inequality we get
where \(l_{v*}=l_{v}\vee 2\). Inserting (3.17), (3.18), (3.20), and (3.21) into (3.16) yields that
Therefore
Moreover, for any \(c_{0} >0\),
Since \(K_{2} \in (0,1)\), we can take \(c_{0}\) large enough that \((\frac{1+c_{0}}{c_{0}})^{p-1}K_{2}^{p} <1\). Thus
where
An application of Gronwall’s inequality yields that
The following technique is similar to that in Theorem 2.1 of [35]. Define
We can observe that
By (3.26) and \(\xi \in \mathscr{L}_{\mathcal{F}_{0}}^{p}([-\tau ,0];\mathbb{R}^{n})\) we derive that
Then Hölder’s inequality leads to
The desired result follows by repeating this procedure. We complete the proof. □
Lemma 3.10
Let Assumptions 2.3, 3.1, and 3.6 hold. Then for any \(\varDelta \in (0,1] \) and \(t\in [0,T]\), we have
Therefore
Proof
Fix any \(\varDelta \in (0,1] \). For any \(t\in [0,T]\), there is an integer \(k\geq 0\) such that \(t\in [t_{k},t_{k+1})\). In the same way as in the proof of (3.19), we have
Then Lemma 3.9 gives that
We complete the proof. □
Lemma 3.11
Let Assumptions 2.3, 3.1, and 3.6 hold. For any real number \(L> \|\xi \|\), define the stopping time
Then we have
Proof
By Itô’s formula and Assumption 3.6 we get
Note that
and
where (2.8), (2.10), (3.3), Young’s inequality, and Lemma 3.9 were used. Then we obtain that
where \(l_{v*}=l_{v}\vee 2\). Using the same technique as in Lemma 3.9 gives that
We can get from (2.6) that
Hence we derive from (3.31) and (3.32) that
Then the desired result follows. We complete the proof. □
The following lemma can be proved in the same way as Lemma 3.11, so we omit the proof.
Lemma 3.12
Let Assumptions 2.3, 3.1, and 3.3 hold. For any real number \(L> \|\xi \|\), define the stopping time
Then we have
Lemma 3.13
Let Assumptions 2.2, 2.3, 3.1–3.4, and 3.6 hold. Assume that \(q\in [2,\bar{q})\) and \(p> (\beta +l_{v}+2)q\). Let \(L> \|\xi \|\) be a real number, and let \(\varDelta \in (0,1]\) be sufficiently small such that \(\varphi ^{-1} (h(\varDelta )) \geq L\). Then we have
where \(\rho _{\varDelta ,L}:=\tau _{L}\wedge \tau _{\varDelta ,L}\) with \(\tau _{L}\), \(\tau _{\varDelta ,L}\) defined as before.
Proof
For simplicity, we write \(\rho _{\varDelta ,L}=\rho \). Denote \(e_{\varDelta }(t)=x(t)-D(x(t-\tau ),r(t))-x_{\varDelta }(t)+D(\bar{x}_{\varDelta }(t- \tau ),\bar{r}(t))\). For \(0\leq s \leq t \wedge \rho \), we can observe that
Recalling the definition of \(F_{\varDelta }\) and \(G_{\varDelta }\), we have
for \(0\leq s \leq t \wedge \rho \). Hence we derive that
Similarly,
By Itô’s formula we get
Note that
Hence
By Hölder’s inequality, Assumptions 2.2 and 3.2, and Lemmas 3.9 and 3.10 we get
As for \(B_{2}\), we derive from Assumptions 2.3 and 3.4 that
From (3.38) we get
and we have
Moreover, let j be the integer part of \(T/\varDelta \). Then
where in the last step, we used the fact that \(\bar{x}_{\varDelta }(s)\) and \(\bar{x}_{\varDelta }(s-\tau )\) are conditionally independent of \(\mathbb{I}_{\{r(s)\neq r(t_{k})\}}\) with respect to the σ-algebra generated by \(r(t_{k})\). Applying the Markov property yields that
By Lemma 3.9 we have
Inserting (3.40), (3.41), and (3.44) into (3.39) gives that
In addition, we obtain from Assumptions 2.2 and 3.1 and Lemmas 3.5, 3.9, and 3.10 that
Furthermore, let j be the integer part of \(T/\varDelta \). Then by Assumption 2.3 and Lemma 3.9 we have
By (3.43) we get
Hence, for any \(s\in [0,T ]\), we derive that
Inserting (3.47) into (3.46) gives that
Similarly to \(B_{2}\) and \(B_{3}\), we easily derive that
and
Substituting (3.38), (3.45), (3.48), (3.49), and (3.50) into (3.37) yields that
Using Gronwall’s inequality gives that
Therefore
Let \(y(t)=x(t)-D(x(t-\tau ),r(t))\) and \(y_{\varDelta }(t)=x_{\varDelta }(t)-D(\bar{x}_{\varDelta }(t-\tau ),\bar{r}(t))\). Thus \(e_{\varDelta }(t)=y(t)-y_{\varDelta }(t)\). Then for any \(c_{3}, c_{4}, c_{5} > 0\), we have
Choose \(c_{3}\) sufficiently large and choose \(c_{4}\), \(c_{5}\) sufficiently small such that \(c_{6}:= (\frac{(1+c_{3})(1+c_{4})(1+c_{5})}{c_{3}} )^{q-1}K_{2}^{q} <1\). Then let \(c_{7}= (\frac{(1+c_{3})(1+c_{4})(1+c_{5})}{c_{3} c_{5}} )^{q-1}K_{2}^{q}\) and \(c_{8}= (\frac{(1+c_{3})(1+c_{4})}{c_{3} c_{4}} )^{q-1}\). Hence we derive from (3.47) that
Therefore
Then we have
An application of Gronwall’s inequality gives that
where \(\varDelta _{f}=\varDelta ^{\frac{1}{2}} h (\varDelta ) \vee \varDelta ^{(\alpha \wedge \theta \wedge \sigma )}\). Then we use the same technique as in Lemma 3.9 to get the convergence rate. Define
We find that
Note that \(|x(s-\tau )-x_{\varDelta }(s-\tau )|=0\) for \(s\in [0,\tau ]\). Then we derive that
Then by Hölder’s inequality we obtain that
By induction we obtain the desired result. We complete the proof. □
Theorem 3.14
Let Assumptions 2.2, 2.3, 3.1–3.4, and 3.6 hold. Let \(q\in [2,\bar{q})\) and \(p> (\beta +l_{v}+2)q\). For any sufficiently small \(\varDelta \in (0,1]\), assume that
Then for every such small Δ, we have
and
for any \(T>0\).
Proof
Let \(\tau _{L}\), \(\tau _{\varDelta ,L}\), and \(\rho _{\varDelta ,L}\) be as before. Denote \(z_{\varDelta }(t)=x(t)-x_{\varDelta }(t)\). We write \(\rho _{\varDelta ,L}=\rho \) for simplicity. Obviously,
Let \(\delta >0\) be arbitrary. By Young’s inequality we get
Hence
Applying Lemmas 3.5 and 3.9 gives that
By Lemmas 3.11 and 3.12 we have
Inserting (3.58) and (3.59) into (3.57) yields that
Choose \(\delta = \varDelta ^{\frac{q}{2}} h^{q} (\varDelta ) \vee \varDelta ^{q( \alpha \wedge \theta \wedge \sigma )}\) and \(L=(\varDelta ^{\frac{q}{2}} h^{q} (\varDelta ) \vee \varDelta ^{q(\alpha \wedge \theta \wedge \sigma )})^{\frac{-1}{p-q}}\). Then we have
By condition (3.53) we obtain that
We derive from Lemma 3.13 that
Hence we get the desired result (3.54). Then combining Lemma 3.10 and (3.54) gives (3.55). We complete the proof. □
4 Stability
In this section, we investigate the almost sure exponential stability of the partially truncated EM method for neutral stochastic differential delay equations with Markovian switching. In order to achieve this aim, we need to assume that Assumption 3.1 holds on \(t\in [0,\infty )\). Let \(\tilde{F}(t,0,0,i)=F(t,0,0,i)=0\) and \(\tilde{G}(t,0,0,i)=G(t,0,0,i)=0\) for all \(t\in [0,\infty )\) and \(i\in \mathbb{S}\), which means \(f(t,0,0,i)=g(t,0,0,i)=0\).
Assumption 4.1
There exist constants \(\varLambda \geq 0\) and \(\lambda _{1}, \lambda _{2}, \lambda _{3}, \lambda _{4} \geq 0\) satisfying \(\lambda _{1}> \lambda _{2}+\lambda _{3}+\lambda _{4}\) such that
and
for all \(t\in [0,\infty )\), \(x,y \in \mathbb{R} ^{n} \), and \(i\in \mathbb{S}\).
Remark 4.2
In fact, there are many functions such that \(D(y,i)\), \(\tilde{F}(t,x,y,i)\), and \(\tilde{G}(t,x,y,i)\) satisfy (4.1) and the corresponding \(F_{\varDelta }(t,x,y,i)\) and \(G_{\varDelta }(t,x,y,i)\) satisfy (4.2). An example and its verification are given in Sect. 5.
In the rest of this paper, we set \(\varLambda =0\) and \(\varLambda ^{-1}|G_{\varDelta }(t,x,y,i)|^{2}=0\) if there is no term \(G_{\varDelta }(t,x,y,i)\). Also, when the linearly growing term \(\tilde{G}(t,x,y,i)\) is absent, we set \(\varLambda =\infty \) and \(\varLambda |\tilde{G}(t,x,y,i)|^{2}=0\).
By Assumption 4.1 we obtain that
for all \(t\in [0,\infty )\), \(x,y \in \mathbb{R} ^{n} \), and \(i\in \mathbb{S}\).
Theorem 4.3
Let Assumptions 2.3, 3.1, and 4.1 hold on \(t\in [0,\infty )\). Then the partially truncated EM numerical solution (2.11) is almost surely exponentially stable. Precisely, let \(\lambda >0\) be the unique root of
and let \(\varepsilon \in (0,\frac{\lambda }{2})\) be arbitrary. Then there exists a \(\varDelta ^{*}>0\) such that for any \(\varDelta <\varDelta ^{*}\), we have
Proof
Define
Then (2.11) becomes
We write \(Y_{k}=Y(t_{k},X_{\varDelta }(t_{k}),X_{\varDelta }(t_{k-M}),r_{k}^{\varDelta })\), \(f_{\varDelta ,k} =f_{\varDelta }(t_{k},X_{\varDelta }(t_{k}),X_{\varDelta }(t_{k-M}),r_{k}^{\varDelta })\), and \(g_{\varDelta ,k} =g_{\varDelta }(t_{k},X_{\varDelta }(t_{k}),X_{\varDelta }(t_{k-M}),r_{k}^{\varDelta })\) for simplicity. Hence we have
where
By (3.2) we have
and
Thus
Using (4.3) yields that
Let
Therefore, for any positive constant \(J>1\), we derive that
which means that
Note that
Thus
where
Let us now introduce the function
Define
When \(\varDelta < \varDelta _{1}^{*}\), we can observe that
Moreover, choose \(\varDelta _{2}^{*}>0\) such that for any \(\varDelta < \varDelta _{2}^{*}\),
Hence we can derive that for any \(J>1\),
Therefore there exists a unique constant \(J_{\varDelta }^{*} >1\) such that
for any \(\varDelta < \varDelta _{1}^{*}\wedge \varDelta _{2}^{*}\). Choosing \(J=J_{\varDelta }^{*}\) for any \(\varDelta < \varDelta _{1}^{*}\wedge \varDelta _{2}^{*}\) yields that
Note that the initial sequence \(X_{\varDelta }(t_{i})<\infty \) for any \(i=-M,-M+1,\ldots ,0\) and that \(\sum_{i=0}^{k-1}J^{(i+1)\varDelta } m_{\varDelta ,i}\) is a martingale. Applying the discrete-type semimartingale convergence theorem gives that for any \(\varDelta < \varDelta _{1}^{*}\wedge \varDelta _{2}^{*}\),
By (4.12) we obtain that
In addition, for any \(c_{0}^{*} >0\),
Then we take \(c_{0}^{*}\) sufficiently large such that \(\frac{1+c_{0}^{*}}{c_{0}^{*}}K_{2}^{2}{J_{\varDelta }^{*}}^{\tau }<1\). Hence
where
Therefore
By (4.13) we get that
Choose the constant ϑ such that \(J=e^{\vartheta }\). Hence \(1-J^{-\varDelta }=1-e^{-\vartheta \varDelta }\). Define
Let \(\vartheta _{\varDelta }^{*}=\log J_{\varDelta }^{*}\). Then we have
Since
we derive that
By the definition of λ we get from (4.20) and (4.21) that
which means that for any \(\varepsilon \in (0,\frac{\lambda }{2})\), there exists \(\varDelta _{3}^{*} >0\) such that for any \(\varDelta < \varDelta _{3}^{*}\), we have
We derive from (4.17) and the definition of \(\vartheta _{\varDelta }^{*}\) that
Then for any \(\varDelta < \varDelta _{1}^{*}\wedge \varDelta _{2}^{*}\wedge \varDelta _{3}^{*}=: \varDelta ^{*}\), we have
which is the desired result. We complete the proof. □
5 Example
Example 5.1
Consider the nonlinear and nonautonomous neutral stochastic differential delay equation with Markovian switching
with the initial data \(x_{0}\) satisfying Assumption 2.2. Here \(B(t)\) is a scalar Brownian motion. Moreover, r is a Markov chain on the state space \(\mathbb{S}=\{1,2\}\) with generator
In addition, for all \(t\in [0,1]\), \(x,y \in \mathbb{R} ^{1} \), and \(i\in \mathbb{S}\), let
We easily see that
Obviously, Assumptions 2.3 and 3.1 hold with \(K_{3}=20\) and \(\beta =4\). Now we verify Assumptions 3.2–3.4 and 3.6. For Assumption 3.2, we get
Therefore Assumption 3.2 is satisfied. For Assumption 3.3, we derive that
Hence Assumption 3.3 is satisfied. Moreover, Assumption 3.4 holds with \(\theta =\sigma =\frac{1}{3}\wedge \frac{1}{4}=\frac{1}{4}\) for \(i\in \mathbb{S}\). To verify Assumption 3.6, we need to consider four cases.
Case 1: If \((|x|\vee |y|)\leq \varphi ^{-1}(h(\varDelta ))\), then we have
Case 2: If \((|x|\wedge |y|)> \varphi ^{-1}(h(\varDelta ))\), then we have
Case 3: If \(|y|>\varphi ^{-1}(h(\varDelta ))\) and \(|x|< \varphi ^{-1}(h(\varDelta ))\), then we derive that
Case 4: If \(|y|<\varphi ^{-1}(h(\varDelta ))\) and \(|x|> \varphi ^{-1}(h(\varDelta ))\), then the proof is similar to the previous case.
Combining the four cases, we get that Assumption 3.6 is satisfied as well. Then we choose \(\varphi (\cdot )\) and \(h(\cdot )\). We can observe that
which means that \(\varphi (w)=4w^{5}\). Let \(h(\varDelta )=K_{0}\varDelta ^{-\frac{1}{8}}\). Then by Theorem 3.14, when \(\alpha =\frac{1}{4}\), we obtain that
Since the explicit solution of (5.1) cannot be calculated, we regard the partially truncated EM approximation with step size \(2^{-14}\) as the true solution in the numerical experiments. Figure 1 presents the \(\mathscr{L}^{2}\)-errors defined by
with step sizes \(2^{-11}\), \(2^{-10}\), \(2^{-9}\), \(2^{-8}\), \(2^{-7}\) at \(T=1\); 1000 sample paths were simulated in the numerical experiments. We can observe that the convergence order of the partially truncated EM method for (5.1) is approximately \(\frac{1}{4}\), which is consistent with our theoretical result.
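The empirical order reported above is the slope of \(\log \) error against \(\log \varDelta \). A minimal sketch of this fit follows; the error values here are synthetic, manufactured with exact order \(\frac{1}{4}\) purely to illustrate the procedure, and are not the measured errors of Fig. 1.

```python
import numpy as np

dts = np.array([2.0 ** -k for k in (11, 10, 9, 8, 7)])  # the step sizes used in Fig. 1
errs = 0.8 * dts ** 0.25        # synthetic L2-errors with exact order 1/4 (illustration only)
order = np.polyfit(np.log(dts), np.log(errs), 1)[0]     # least-squares slope = empirical order
```

With real simulation data, `errs` would be the sample \(\mathscr{L}^{2}\)-errors against the reference solution, and `order` would come out close to \(\frac{1}{4}\) rather than exactly.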
Example 5.2
Consider the nonlinear and nonautonomous neutral stochastic differential delay equation with Markovian switching
with the initial data \(x_{0}\) satisfying Assumption 2.2. Here \(B(t)\) and the Markov chain are the same as in Example 5.1. In addition, for all \(t\in [0,\infty )\), \(x,y \in \mathbb{R} ^{1} \), and \(i\in \mathbb{S}\), let
It is easy to see that
Obviously, \(D(y,i)\) satisfies Assumption 2.3 for \(i\in \mathbb{S}\). Now let us check Assumption 4.1. There is no \(\tilde{G}(t,x,y,i)\) term, so we have \(\varLambda =\infty \). Then we derive that
Then, as in the verification of Assumption 3.6, we consider the second inequality in four cases.
Case 1: If \((|x|\vee |y|)\leq \varphi ^{-1}(h(\varDelta ))\), then we get
Case 2: If \((|x|\wedge |y|)> \varphi ^{-1}(h(\varDelta ))\), then we have
Case 3: If \(|x|>\varphi ^{-1}(h(\varDelta ))\) and \(|y|< \varphi ^{-1}(h(\varDelta ))\), then we derive that
Case 4: If \(|x|<\varphi ^{-1}(h(\varDelta ))\) and \(|y|> \varphi ^{-1}(h(\varDelta ))\), then the proof is similar to the previous case. Therefore Assumption 4.1 holds. Moreover, it is easy to see that Assumption 3.1 is satisfied on \(t\in [0,\infty )\). Then by Theorem 4.3 the partially truncated EM numerical solution is almost surely exponentially stable. Figure 2 shows the almost sure exponential stability of the partially truncated EM method for (5.2) with 10 sample paths.
Availability of data and materials
Not applicable.
References
Anderson, D.F., Higham, D.J., Sun, Y.: Multilevel Monte Carlo for stochastic differential equations with small noise. SIAM J. Numer. Anal. 54, 505–529 (2016)
Appleby, J.A.D., Guzowska, M., Kelly, C., Rodkina, A.: Preserving positivity in solutions of discretised stochastic differential equations. Appl. Math. Comput. 217, 763–774 (2010)
Arnold, L.: Stochastic Differential Equations, Theory and Applications. Wiley, New York (1974)
Burrage, K., Tian, T.: Predictor–corrector methods of Runge–Kutta type for stochastic differential equations. SIAM J. Numer. Anal. 40, 1516–1537 (2002)
Cong, Y., Zhan, W., Guo, Q.: The partially truncated Euler–Maruyama method for highly nonlinear stochastic delay differential equations with Markovian switching. Int. J. Comput. Methods (2019). https://doi.org/10.1142/S0219876219500142
Guo, Q., Liu, W., Mao, X., Yue, R.: The partially truncated Euler–Maruyama method and its stability and boundedness. Appl. Numer. Math. 115, 235–251 (2017)
Guo, Q., Mao, X., Yue, R.: The truncated Euler–Maruyama method for stochastic differential delay equations. Numer. Algorithms 78, 599–624 (2018)
Higham, D.J., Mao, X., Stuart, A.M.: Strong convergence of Euler-type methods for nonlinear stochastic differential equations. SIAM J. Numer. Anal. 40, 1041–1063 (2002)
Hutzenthaler, M., Jentzen, A., Kloeden, P.E.: Strong and weak divergence in finite time of Euler’s method for stochastic differential equations with non-globally Lipschitz continuous coefficients. Proc. R. Soc. A 467, 1563–1576 (2011)
Hutzenthaler, M., Jentzen, A., Kloeden, P.E.: Strong convergence of an explicit numerical method for SDEs with nonglobally Lipschitz continuous coefficients. Ann. Appl. Probab. 22, 1611–1641 (2012)
Kloeden, P.E., Platen, E.: Numerical Solution of Stochastic Differential Equations. Springer, Berlin (1992)
Kolmanovskii, V., Koroleva, N., Maizenberg, T., Mao, X., Matasov, A.: Neutral stochastic differential delay equation with Markovian switching. Stoch. Anal. Appl. 21, 819–847 (2003)
Lan, G.: Asymptotic exponential stability of modified truncated EM method for neutral stochastic differential delay equations. J. Comput. Appl. Math. 340, 334–341 (2018)
Lan, G., Yuan, C.: Exponential stability of the exact solutions and θ-EM approximations to neutral SDDEs with Markov switching. J. Comput. Appl. Math. 285, 230–242 (2015)
Li, X., Mao, X.: A note on almost sure asymptotic stability of neutral stochastic delay differential equations with Markovian switching. Automatica 48, 2329–2334 (2012)
Li, X., Mao, X., Yin, G.: Explicit numerical approximations for stochastic differential equations in finite and infinite horizons: truncation methods, convergence in pth moment and stability. IMA J. Numer. Anal. 39, 847–892 (2019)
Liu, W., Foondun, M., Mao, X.: Mean square polynomial stability of numerical solutions to a class of stochastic differential equations. Stat. Probab. Lett. 92, 173–182 (2014)
Liu, W., Mao, X.: Strong convergence of the stopped Euler–Maruyama method for nonlinear stochastic differential equations. Appl. Math. Comput. 223, 389–400 (2013)
Liu, W., Mao, X., Tang, J., Wu, Y.: Truncated Euler–Maruyama method for classical and time-changed non-autonomous stochastic differential equations. Appl. Numer. Math. 153, 66–81 (2020)
Mao, X.: Stochastic Differential Equations and Applications. Horwood, Chichester (2007)
Mao, X.: The truncated Euler–Maruyama method for stochastic differential equations. J. Comput. Appl. Math. 290, 370–384 (2015)
Mao, X.: Convergence rates of the truncated Euler–Maruyama method for stochastic differential equations. J. Comput. Appl. Math. 296, 362–375 (2016)
Mao, X., Shen, Y.: Almost surely asymptotic stability of neutral stochastic differential delay equations with Markovian switching. Stoch. Process. Appl. 118, 1385–1406 (2008)
Mao, X., Yuan, C.: Stochastic Differential Equations with Markovian Switching. Imperial College Press, London (2006)
Mao, X., Yuan, C., Yin, G.: Approximations of Euler–Maruyama type for stochastic differential equations with Markovian switching, under non-Lipschitz conditions. J. Comput. Appl. Math. 205, 936–948 (2007)
Milstein, G.N., Platen, E., Schurz, H.: Balanced implicit methods for stiff stochastic systems. SIAM J. Numer. Anal. 35, 1010–1019 (1998)
Øksendal, B.: Stochastic Differential Equations: An Introduction with Applications. Springer, Berlin (2000)
Sabanis, S.: A note on tamed Euler approximations. Electron. Commun. Probab. 18, 47 (2013)
Sabanis, S.: Euler approximations with varying coefficients: the case of superlinearly growing diffusion coefficients. Ann. Appl. Probab. 26, 2083–2105 (2016)
Saito, Y., Mitsui, T.: T-stability of numerical scheme for stochastic differential equations. World Sci. Ser. Appl. Anal. 2, 333–344 (1993)
Skorohod, A.V.: Asymptotic Methods in the Theory of Stochastic Differential Equations. Am. Math. Soc., Providence (1989)
Szpruch, L., Mao, X., Higham, D., Pan, J.: Numerical simulation of a strongly nonlinear Ait-Sahalia-type interest rate model. BIT Numer. Math. 51, 405–425 (2011)
Tan, L., Yuan, C.: Convergence rates of theta-method for NSDDEs under non-globally Lipschitz continuous coefficients. Bull. Math. Sci. 9, 3231–3243 (2019)
Wu, F., Mao, X.: Numerical solutions of neutral stochastic functional differential equations. SIAM J. Numer. Anal. 46, 1821–1841 (2008)
Wu, F., Mao, X., Chen, K.: The Cox–Ingersoll–Ross model with delay and strong convergence of its Euler–Maruyama approximate solutions. Appl. Numer. Math. 59, 2641–2658 (2009)
Wu, F., Mao, X., Szpruch, L.: Almost sure exponential stability of numerical solutions for stochastic delay differential equations. Numer. Math. 115, 681–697 (2010)
Yuan, C., Glover, W.: Approximate solutions of stochastic differential delay equations with Markovian switching. J. Comput. Appl. Math. 194, 207–226 (2006)
Zhang, W., Song, M., Liu, M.: Strong convergence of the partially truncated Euler–Maruyama method for a class of stochastic differential delay equations. J. Comput. Appl. Math. 335, 114–128 (2018)
Zhou, S., Wu, F.: Convergence of numerical solutions to neutral stochastic delay differential equations with Markovian switching. J. Comput. Appl. Math. 229, 85–96 (2009)
Zong, X., Wu, F., Huang, C.: Exponential mean square stability of the theta approximations for neutral stochastic differential delay equations. J. Comput. Appl. Math. 286, 172–185 (2015)
Acknowledgements
The authors would like to thank the anonymous reviewers for their work and constructive comments, which improved the manuscript.
Funding
This work is supported by the National Natural Science Foundation of China (Grant Nos. 61876192 and 62076106) and the Fundamental Research Funds for the Central Universities of South-Central University for Nationalities (Grant Nos. KTZ20051, CZT20020, and CZT20022).
Author information
Authors and Affiliations
Contributions
Both authors contributed equally to each part of this work. Both authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Gao, S., Hu, J. Numerical method of highly nonlinear and nonautonomous neutral stochastic differential delay equations with Markovian switching. Adv Differ Equ 2020, 688 (2020). https://doi.org/10.1186/s13662-020-03113-x