
Taylor approximation of stochastic functional differential equations with the Poisson jump

Advances in Difference Equations 2013, 2013:230

https://doi.org/10.1186/1687-1847-2013-230

Received: 14 May 2013

Accepted: 23 July 2013

Published: 6 August 2013

Abstract

In the present paper, we are concerned with a class of stochastic functional differential delay equations with Poisson jump whose coefficients are general Taylor expansions of the coefficients of the initial equation. Taylor approximations are a useful tool for approximating, analytically or numerically, the coefficients of stochastic differential equations. The aim of this paper is to investigate the rate of approximation between the true solution and the numerical solution in the sense of the $L^p$-norm when the drift and diffusion coefficients are Taylor approximations.

Keywords

  • stochastic functional differential delay equations
  • Poisson jump
  • Taylor approximations
  • numerical solution

1 Introduction

Stochastic differential equations [1-3] have attracted a lot of attention, because the problems involved are not only academically challenging but also of practical importance, playing an important role in many fields such as option pricing and forecasting population growth (see, e.g., [1]). Recently, much work has been done on stochastic differential equations; here, we highlight the great contribution of Mao et al. (see [3-9] and the references therein). Svishchuk and Kazmerchuk [10] studied the exponential stability of solutions of linear stochastic differential equations with Poisson jumps [11-13] and Markovian switching [4, 12, 14].

In many applications, one assumes that the system under consideration is governed by a principle of causality, that is, the future states of the system are independent of the past states and are determined solely by the present. Under closer scrutiny, however, it becomes apparent that the principle of causality is often only a first approximation to the true situation, and that a more realistic model would include some of the past states of the system. Stochastic functional differential equations [9] provide a mathematical description of such systems.

Unfortunately, in general, it is impossible to find an explicit solution of a stochastic functional differential equation with Poisson jump. Even when such a solution can be found, it may be available only in an implicit form or be too complicated to visualize and evaluate numerically. Therefore, many approximate schemes have been proposed, for example, the Euler-Maruyama (EM) scheme, time-discrete approximations, stochastic Taylor expansions [15], and so on.

Meanwhile, the rate of approximation of the true solution by the numerical solution differs from one numerical scheme to another. Janković et al. investigated the following stochastic functional differential equations (see [15]):
$$
dx(t) = f(x_t, t)\,dt + g(x_t, t)\,dW(t), \quad t_0 \le t \le T.
$$
In this paper, we develop approximate methods for stochastic functional differential equations driven additionally by a Poisson process, that is,
$$
dx(t) = f(x_t, t)\,dt + g(x_t, t)\,dW(t) + h(x_t, t)\,dN(t), \quad t_0 \le t \le T.
$$
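For intuition only, the following minimal sketch simulates a scalar instance of such a jump equation, with no delay and with toy coefficients of our own choosing; the function names, intensity and parameter values are illustrative assumptions, not part of the paper.

```python
import numpy as np

# Illustrative simulation of a scalar jump SDE of the above form,
#   dx(t) = f(x,t) dt + g(x,t) dW(t) + h(x,t) dN(t),
# using a simple Euler-Maruyama-type step.  The coefficients f, g, h,
# the jump intensity lam and all parameters are toy choices.

rng = np.random.default_rng(0)

def f(x, t):   # drift (illustrative)
    return -0.5 * x

def g(x, t):   # diffusion (illustrative)
    return 0.2 * x

def h(x, t):   # jump coefficient (illustrative)
    return 0.1 * x

def simulate(x0=1.0, t0=0.0, T=1.0, n=1000, lam=2.0):
    dt = (T - t0) / n
    x = np.empty(n + 1)
    x[0] = x0
    t = t0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment over [t_k, t_{k+1}]
        dN = rng.poisson(lam * dt)          # Poisson increment with intensity lam
        x[k + 1] = x[k] + f(x[k], t) * dt + g(x[k], t) * dW + h(x[k], t) * dN
        t += dt
    return x

path = simulate()
print(path[-1])
```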

The rate of $L^p$-closeness between the approximate solution and the solution of the initial equation increases with the degree of the Taylor approximations of the coefficients. Although a Poisson jump is involved, the rate of approximation of the true solution by the numerical solution is the same as for the equation in [15]. Even when the Poisson process is replaced by a Poisson random measure, the rate remains the same.

2 Approximate scheme and hypotheses

Throughout this paper, let $\{\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P\}$ be a complete probability space with a filtration satisfying the usual conditions, i.e., the filtration is right-continuous and $\mathcal{F}_0$ contains all $P$-null sets. Let $W(t) = (w_1(t), w_2(t), \ldots, w_m(t))^T$ be an $m$-dimensional Brownian motion defined on this probability space. For $a, b \in \mathbb{R}$ with $a < b$, denote by $D([a, b]; \mathbb{R}^n)$ the family of functions $\varphi$ from $[a, b]$ to $\mathbb{R}^n$ that are right-continuous with left limits. $D([a, b]; \mathbb{R}^n)$ is equipped with the norm $\|\varphi\| = \sup_{a \le s \le b} |\varphi(s)|$, where $|\cdot|$ is the Euclidean norm in $\mathbb{R}^n$, i.e., $|x| = \sqrt{x^T x}$ for $x \in \mathbb{R}^n$. If $A$ is a vector or matrix, its trace norm is denoted by $|A| = \sqrt{\operatorname{trace}(A^T A)}$, while its operator norm is denoted by $\|A\| = \sup\{|Ax| : \|x\| = 1\}$. Denote by $D^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ the family of all bounded, $\mathcal{F}_0$-measurable, $D([-\tau, 0]; \mathbb{R}^n)$-valued random variables.

We consider the following Itô stochastic functional differential equation with Poisson jump:
$$
dx(t) = f(x_t, t)\,dt + g(x_t, t)\,dW(t) + h(x_t, t)\,dN(t), \quad t_0 \le t \le T
$$
(1)

with the initial condition $x_{t_0} = \{\xi(t) : t \in [-\tau, 0]\}$, where $x_t = \{x(t+\theta) : \theta \in [-\tau, 0]\} \in D^b_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n)$, and $x_{t_0}$ is independent of $W(\cdot)$ and $N(\cdot)$.

Assume that
$$
\begin{aligned}
&f : D^b_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n) \times [t_0, T] \to \mathbb{R}^n, \\
&g : D^b_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n) \times [t_0, T] \to \mathbb{R}^{n \times m}, \\
&h : D^b_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n) \times [t_0, T] \to \mathbb{R}^n,
\end{aligned}
$$
where
$$
\int_{t_0}^{T} |f(x_t, t)|\,dt < \infty, \qquad \int_{t_0}^{T} |g(x_t, t)|^2\,dt < \infty, \qquad \int_{t_0}^{T} |h(x_t, t)|^2\,dt < \infty.
$$

To guarantee the existence and uniqueness of the solution of Eq. (1) (see [3], Theorem 5.2.5), we impose the following rather general assumptions.

(H1) f, g and h satisfy the following Lipschitz and linear growth conditions: for any $t \in [t_0, T]$ there exists a constant $L_1 > 0$ such that
$$
\begin{aligned}
|f(\varphi, t) - f(\psi, t)| \vee |g(\varphi, t) - g(\psi, t)| \vee |h(\varphi, t) - h(\psi, t)| &\le L_1 \|\varphi - \psi\|, \\
|f(\varphi, t)| \vee |g(\varphi, t)| \vee |h(\varphi, t)| &\le L_1 (1 + \|\varphi\|),
\end{aligned}
$$

where $\varphi, \psi \in D^b_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n)$.

(H2) (The Hölder continuity of the initial data.) There exist constants $K \ge 0$ and $\gamma \in (0, 1]$ such that for all $-\tau \le s < t \le 0$,
$$
E|\xi(t) - \xi(s)| \le K (t - s)^{\gamma}.
$$

(H3) The functions f, g and h have Taylor expansions in the argument x up to the $m_1$th, $m_2$th and $m_3$th Fréchet derivatives, respectively [16].

(H4) The derivatives $f(x, t)^{(m_1+1)}$, $g(x, t)^{(m_2+1)}$ and $h(x, t)^{(m_3+1)}$ are uniformly bounded, i.e., there exists a positive constant $L_2$ such that
$$
\begin{aligned}
\sup_{(x, t) \in D^b_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n) \times [t_0, T]} \bigl\|f(x, t)^{(m_1+1)}(h, h, \ldots, h)\bigr\| &\le L_2 \|h\|^{m_1+1}, \\
\sup_{(x, t) \in D^b_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n) \times [t_0, T]} \bigl\|g(x, t)^{(m_2+1)}(h, h, \ldots, h)\bigr\| &\le L_2 \|h\|^{m_2+1}, \\
\sup_{(x, t) \in D^b_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n) \times [t_0, T]} \bigl\|h(x, t)^{(m_3+1)}(h, h, \ldots, h)\bigr\| &\le L_2 \|h\|^{m_3+1}.
\end{aligned}
$$
For sufficiently large $n \in \mathbb{N}$, set the step size $\Delta = \frac{T - t_0}{n}$, where $0 < \Delta \le 1$. Let $t_0 < t_1 < \cdots < t_n = T$ be the equidistant partition of the interval $[t_0, T]$, that is, the partition points are $t_k = t_0 + \frac{k}{n}(T - t_0)$, $k = 0, 1, \ldots, n$. The explicit discrete approximation scheme is defined as follows:
$$
\left\{
\begin{aligned}
x^n_{t_0} &= \xi(t), \quad -\tau \le t \le 0; \\
x^n(t_{k+1}) &= x^n(t_k) + \int_{t_k}^{t_{k+1}} \sum_{i=0}^{m_1} \frac{f(x^n_{t_k}, s)^{(i)}(x^n_s - x^n_{t_k}, \ldots, x^n_s - x^n_{t_k})}{i!}\,ds \\
&\quad + \int_{t_k}^{t_{k+1}} \sum_{i=0}^{m_2} \frac{g(x^n_{t_k}, s)^{(i)}(x^n_s - x^n_{t_k}, \ldots, x^n_s - x^n_{t_k})}{i!}\,dW(s) \\
&\quad + \int_{t_k}^{t_{k+1}} \sum_{i=0}^{m_3} \frac{h(x^n_{t_k}, s)^{(i)}(x^n_s - x^n_{t_k}, \ldots, x^n_s - x^n_{t_k})}{i!}\,dN(s), \quad k = 0, 1, \ldots, n-1,
\end{aligned}
\right.
$$
(2)
Then the continuous approximate solution is defined by
$$
\begin{aligned}
x^n(t) &= x^n(t_k) + \int_{t_k}^{t} \sum_{i=0}^{m_1} \frac{f(x^n_{t_k}, s)^{(i)}(x^n_s - x^n_{t_k}, \ldots, x^n_s - x^n_{t_k})}{i!}\,ds \\
&\quad + \int_{t_k}^{t} \sum_{i=0}^{m_2} \frac{g(x^n_{t_k}, s)^{(i)}(x^n_s - x^n_{t_k}, \ldots, x^n_s - x^n_{t_k})}{i!}\,dW(s) \\
&\quad + \int_{t_k}^{t} \sum_{i=0}^{m_3} \frac{h(x^n_{t_k}, s)^{(i)}(x^n_s - x^n_{t_k}, \ldots, x^n_s - x^n_{t_k})}{i!}\,dN(s), \quad t \in [t_k, t_{k+1}],
\end{aligned}
$$
(3)

satisfying the initial condition $x^n_{t_0} = \xi$, where $x^n_{t_k} = \{x^n(t_k + \theta) : \theta \in [-\tau, 0]\}$, $k = 1, 2, \ldots, n-1$.
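To make the structure of scheme (2)-(3) concrete, the sketch below implements its simplest instance, $m_1 = m_2 = m_3 = 0$ (each sum then reduces to the coefficient frozen at $t_k$, i.e., an Euler-Maruyama-type step), for a scalar equation with a single discrete delay. All coefficient choices, parameter values and function names are our own illustrative assumptions, not taken from the paper; higher-order versions of the scheme would add the Fréchet-derivative correction terms of (2).

```python
import numpy as np

# Zeroth-order (m1 = m2 = m3 = 0) instance of scheme (2) for a scalar delay
# jump SDE  dx(t) = a*x(t - tau) dt + b*x(t - tau) dW(t) + c*x(t - tau) dN(t).
# The coefficients a, b, c, the initial segment xi and all parameters are
# illustrative assumptions only.

rng = np.random.default_rng(1)

def taylor_m0_delay(xi, a=-1.0, b=0.3, c=0.1, lam=2.0,
                    tau=0.5, t0=0.0, T=2.0, n=400):
    dt = (T - t0) / n
    d = int(round(tau / dt))            # steps covering the delay (assumes tau ~ multiple of dt)
    # history on [t0 - tau, t0] taken from the initial segment xi
    hist = np.array([xi(t0 - tau + i * dt) for i in range(d + 1)])
    x = np.concatenate([hist, np.empty(n)])
    for k in range(n):
        j = d + k                        # index of the current time t_k
        x_delay = x[j - d]               # x^n(t_k - tau)
        dW = rng.normal(0.0, np.sqrt(dt))
        dN = rng.poisson(lam * dt)
        x[j + 1] = x[j] + a * x_delay * dt + b * x_delay * dW + c * x_delay * dN
    return x

# constant initial segment xi(t) = 1 on [t0 - tau, t0]
path = taylor_m0_delay(lambda t: 1.0)
print(path[-1])
```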

Besides the hypotheses mentioned above, we will need one more:

(H5) There exists a positive constant Q, which is independent of n, such that for $r \ge 2$,
$$
E\Bigl[\sup_{t \in [t_0 - \tau, T]} |x(t)|^r\Bigr] < \infty, \qquad E\Bigl[\sup_{t \in [t_0 - \tau, T]} |x^n(t)|^r\Bigr] \le Q.
$$

Moreover, in what follows, C denotes a generic positive constant independent of $\Delta$, whose value may vary from line to line.

3 Preparatory lemmas and the main result

Since the proof of the main result is rather technical, we begin by presenting several lemmas which will play an important role in the subsequent section.

Lemma 1 Let conditions (H1), (H3), (H4) and (H5) be satisfied. Then, for any $r \ge 2$,
$$
E\Bigl[\sup_{s \in [t_k, t]} |x^n(s) - x^n(t_k)|^r\Bigr] \le C\Delta^{r/2}, \quad t \in [t_k, t_{k+1}],\ k = 0, 1, \ldots, n-1.
$$
(4)
Proof For convenience, we denote
$$
\begin{aligned}
F(x^n_t, t; x^n_{t_k}) &= \sum_{i=0}^{m_1} \frac{f(x^n_{t_k}, t)^{(i)}(x^n_t - x^n_{t_k}, \ldots, x^n_t - x^n_{t_k})}{i!}, \\
G(x^n_t, t; x^n_{t_k}) &= \sum_{i=0}^{m_2} \frac{g(x^n_{t_k}, t)^{(i)}(x^n_t - x^n_{t_k}, \ldots, x^n_t - x^n_{t_k})}{i!}, \\
H(x^n_t, t; x^n_{t_k}) &= \sum_{i=0}^{m_3} \frac{h(x^n_{t_k}, t)^{(i)}(x^n_t - x^n_{t_k}, \ldots, x^n_t - x^n_{t_k})}{i!}.
\end{aligned}
$$
Then, in view of (H3), for $t \in [t_k, t_{k+1}]$, $k = 0, 1, \ldots, n-1$, and some $\beta \in (0, 1)$,
$$
\begin{aligned}
f(x^n_t, t) &= F(x^n_t, t; x^n_{t_k}) + \frac{f(x^n_{t_k} + \beta(x^n_t - x^n_{t_k}), t)^{(m_1+1)}(x^n_t - x^n_{t_k}, \ldots, x^n_t - x^n_{t_k})}{(m_1+1)!}, \\
g(x^n_t, t) &= G(x^n_t, t; x^n_{t_k}) + \frac{g(x^n_{t_k} + \beta(x^n_t - x^n_{t_k}), t)^{(m_2+1)}(x^n_t - x^n_{t_k}, \ldots, x^n_t - x^n_{t_k})}{(m_2+1)!}, \\
h(x^n_t, t) &= H(x^n_t, t; x^n_{t_k}) + \frac{h(x^n_{t_k} + \beta(x^n_t - x^n_{t_k}), t)^{(m_3+1)}(x^n_t - x^n_{t_k}, \ldots, x^n_t - x^n_{t_k})}{(m_3+1)!}.
\end{aligned}
$$
Obviously, for any $t \in [t_k, t_{k+1}]$, $k = 0, 1, \ldots, n-1$,
$$
x^n(t) - x^n(t_k) = \int_{t_k}^{t} F(x^n_s, s; x^n_{t_k})\,ds + \int_{t_k}^{t} G(x^n_s, s; x^n_{t_k})\,dW(s) + \int_{t_k}^{t} H(x^n_s, s; x^n_{t_k})\,dN(s).
$$
Making use of the elementary inequality $|a + b + c|^r \le 3^{r-1}(|a|^r + |b|^r + |c|^r)$, $a, b, c \ge 0$, $r \in \mathbb{N}$, the Hölder inequality for the Lebesgue integral, and the Burkholder-Davis-Gundy inequality for the Itô integral with $r \ge 2$, we obtain
$$
\begin{aligned}
E\Bigl[\sup_{s \in [t_k, t]} |x^n(s) - x^n(t_k)|^r\Bigr]
&\le 3^{r-1}\biggl[(t - t_k)^{r-1}\int_{t_k}^{t} E\bigl|F(x^n_s, s; x^n_{t_k})\bigr|^r ds \\
&\quad + \Bigl(\tfrac{r^3}{2(r-1)}\Bigr)^{r/2}(t - t_k)^{r/2-1}\int_{t_k}^{t} E\bigl|G(x^n_s, s; x^n_{t_k})\bigr|^r ds \\
&\quad + E\Bigl[\sup_{t_k \le s \le t}\Bigl|\int_{t_k}^{s} H(x^n_u, u; x^n_{t_k})\,dN(u)\Bigr|^r\Bigr]\biggr] \\
&\le 3^{r-1}\Bigl[(t - t_k)^{r-1} J_1(t) + \Bigl(\tfrac{r^3}{2(r-1)}\Bigr)^{r/2}(t - t_k)^{r/2-1} J_2(t) + J_3(t)\Bigr].
\end{aligned}
$$
We now estimate $J_1(t)$, $J_2(t)$ and $J_3(t)$ in turn. By (H1), (H4) and (H5),
$$
\begin{aligned}
J_1(t) &= \int_{t_k}^{t} E\bigl|F(x^n_s, s; x^n_{t_k})\bigr|^r ds \\
&= \int_{t_k}^{t} E\Bigl|f(x^n_s, s) - \frac{f(x^n_{t_k} + \beta(x^n_s - x^n_{t_k}), s)^{(m_1+1)}(x^n_s - x^n_{t_k}, \ldots, x^n_s - x^n_{t_k})}{(m_1+1)!}\Bigr|^r ds \\
&\le 2^{r-1}\biggl[L_1^r \int_{t_k}^{t} E(1 + \|x^n_s\|)^r ds + \frac{L_2^r}{[(m_1+1)!]^r}\int_{t_k}^{t} E\bigl[\|x^n_s - x^n_{t_k}\|^{(m_1+1)r}\bigr] ds\biggr] \\
&\le C(t - t_k).
\end{aligned}
$$

Similarly, by repeating the above procedure, we see that $J_2(t) \le C(t - t_k)$.

Noting that $\{N(s), s \in [t_k, t]\}$ is a Poisson process with intensity $\lambda$, we will use the compensated Poisson process $\{\widetilde{N}(s) = N(s) - \lambda s, s \in [t_k, t]\}$, which is a martingale. Then we obtain
$$
\begin{aligned}
J_3(t) &= E\Bigl[\sup_{t_k \le s \le t}\Bigl|\int_{t_k}^{s} H(x^n_u, u; x^n_{t_k})\,d\widetilde{N}(u) + \lambda \int_{t_k}^{s} H(x^n_u, u; x^n_{t_k})\,du\Bigr|^r\Bigr] \\
&\le 2^{r-1} E\Bigl[\sup_{t_k \le s \le t}\Bigl|\int_{t_k}^{s} H(x^n_u, u; x^n_{t_k})\,d\widetilde{N}(u)\Bigr|^r\Bigr] + 2^{r-1}\lambda^r E\Bigl[\sup_{t_k \le s \le t}\Bigl|\int_{t_k}^{s} H(x^n_u, u; x^n_{t_k})\,du\Bigr|^r\Bigr] \\
&\le 2^{r-1}\lambda^{r/2}\Bigl(\tfrac{r^3}{2(r-1)}\Bigr)^{r/2} E\Bigl(\int_{t_k}^{t} \bigl|H(x^n_s, s; x^n_{t_k})\bigr|^2 ds\Bigr)^{r/2} + 2^{r-1}\lambda^r \int_{t_k}^{t} E\bigl|H(x^n_s, s; x^n_{t_k})\bigr|^r ds \\
&\le 2^{r-1}\lambda^{r/2}\Bigl(\tfrac{r^3}{2(r-1)}\Bigr)^{r/2}(t - t_k)^{r/2-1}\int_{t_k}^{t} E\bigl|H(x^n_s, s; x^n_{t_k})\bigr|^r ds + 2^{r-1}\lambda^r \int_{t_k}^{t} E\bigl|H(x^n_s, s; x^n_{t_k})\bigr|^r ds \\
&\le C(t - t_k)^{r/2} + C(t - t_k).
\end{aligned}
$$
Combining the estimates of $J_1(t)$, $J_2(t)$ and $J_3(t)$, we obtain
$$
E\Bigl[\sup_{s \in [t_k, t]} |x^n(s) - x^n(t_k)|^r\Bigr] \le C(t - t_k)^{r/2} + C(t - t_k) \le C(t - t_k)^{r/2} \le C\Delta^{r/2}.
$$

 □

Lemma 2 Under conditions (H1), (H3), (H4) and (H5), for any $r \ge 2$,
$$
E\bigl[\|x^n_t - x^n_{t_k}\|^r\bigr] \le C\Delta^{r/2}, \quad t \in [t_k, t_{k+1}],\ k = 0, 1, \ldots, n-1.
$$
(5)

The proof of this lemma is similar to Proposition 2 in [4].

Then, by Lemmas 1 and 2, we can prove the following main result.

Theorem 1 Let conditions (H1)-(H5) be satisfied. Then, for any $r \ge 2$,
$$
E\Bigl[\sup_{t \in [t_0 - \tau, T]} |x(t) - x^n(t)|^r\Bigr] \le C\Delta^{(m+1)r/2},
$$
(6)

where $m = \min\{m_1, m_2, m_3\}$.
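As a concrete reading of (6): if the coefficients are expanded to first order ($m_1 = m_2 = m_3 = 1$, hence $m = 1$) and the error is measured in mean square ($r = 2$), the bound becomes
$$
E\Bigl[\sup_{t \in [t_0 - \tau, T]} |x(t) - x^n(t)|^2\Bigr] \le C\Delta^2,
$$
whereas the zeroth-order expansion ($m = 0$), which corresponds to an Euler-Maruyama-type step, gives only $C\Delta$ from the same bound.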

Proof For arbitrary $t \in [t_0, T]$, it follows that
$$
\begin{aligned}
x(t) - x^n(t) &= \int_{t_0}^{t} \sum_{k : t_k \le t} \bigl[f(x_s, s) - F(x^n_s, s; x^n_{t_k})\bigr] I_{[t_k, t_{k+1})}(s)\,ds \\
&\quad + \int_{t_0}^{t} \sum_{k : t_k \le t} \bigl[g(x_s, s) - G(x^n_s, s; x^n_{t_k})\bigr] I_{[t_k, t_{k+1})}(s)\,dW(s) \\
&\quad + \int_{t_0}^{t} \sum_{k : t_k \le t} \bigl[h(x_s, s) - H(x^n_s, s; x^n_{t_k})\bigr] I_{[t_k, t_{k+1})}(s)\,dN(s).
\end{aligned}
$$
Since $x(t)$ and $x^n(t)$ satisfy the same initial condition, we obtain
$$
\begin{aligned}
E\Bigl[\sup_{s \in [t_0 - \tau, t]} |x(s) - x^n(s)|^r\Bigr]
&\le E\Bigl[\sup_{s \in [t_0 - \tau, t_0]} |x(s) - x^n(s)|^r\Bigr] + E\Bigl[\sup_{s \in [t_0, t]} |x(s) - x^n(s)|^r\Bigr] \\
&= E\Bigl[\sup_{s \in [t_0, t]} |x(s) - x^n(s)|^r\Bigr] \\
&\le 3^{r-1} E\Bigl[\sup_{s \in [t_0, t]}\Bigl|\int_{t_0}^{s} \sum_{k : t_k \le t} \bigl[f(x_u, u) - F(x^n_u, u; x^n_{t_k})\bigr] I_{[t_k, t_{k+1})}(u)\,du\Bigr|^r\Bigr] \\
&\quad + 3^{r-1} E\Bigl[\sup_{s \in [t_0, t]}\Bigl|\int_{t_0}^{s} \sum_{k : t_k \le t} \bigl[g(x_u, u) - G(x^n_u, u; x^n_{t_k})\bigr] I_{[t_k, t_{k+1})}(u)\,dW(u)\Bigr|^r\Bigr] \\
&\quad + 3^{r-1} E\Bigl[\sup_{s \in [t_0, t]}\Bigl|\int_{t_0}^{s} \sum_{k : t_k \le t} \bigl[h(x_u, u) - H(x^n_u, u; x^n_{t_k})\bigr] I_{[t_k, t_{k+1})}(u)\,dN(u)\Bigr|^r\Bigr] \\
&\le 3^{r-1}(t - t_0)^{r-1} E\Bigl[\int_{t_0}^{t}\Bigl|\sum_{k : t_k \le t} \bigl[f(x_s, s) - F(x^n_s, s; x^n_{t_k})\bigr] I_{[t_k, t_{k+1})}(s)\Bigr|^r ds\Bigr] \\
&\quad + 3^{r-1} C (t - t_0)^{r/2-1} E\Bigl[\int_{t_0}^{t}\Bigl|\sum_{k : t_k \le t} \bigl[g(x_s, s) - G(x^n_s, s; x^n_{t_k})\bigr] I_{[t_k, t_{k+1})}(s)\Bigr|^r ds\Bigr] \\
&\quad + 3^{r-1} C (t - t_0)^{r/2-1} E\Bigl[\int_{t_0}^{t}\Bigl|\sum_{k : t_k \le t} \bigl[h(x_s, s) - H(x^n_s, s; x^n_{t_k})\bigr] I_{[t_k, t_{k+1})}(s)\Bigr|^r ds\Bigr] \\
&\quad + 3^{r-1}\lambda^r E\Bigl[\int_{t_0}^{t}\Bigl|\sum_{k : t_k \le t} \bigl[h(x_s, s) - H(x^n_s, s; x^n_{t_k})\bigr] I_{[t_k, t_{k+1})}(s)\Bigr|^r ds\Bigr].
\end{aligned}
$$
(7)
Let $j = \max\{i \in \{0, 1, 2, \ldots, n-1\} : t_i \le t \le T\}$. Denote
$$
\begin{aligned}
J_1(t_k, t, u) &= \bigl[f(x_u, u) - F(x^n_u, u; x^n_{t_k})\bigr] I_{[t_k, t)}(u), \\
J_2(t_k, t, u) &= \bigl[g(x_u, u) - G(x^n_u, u; x^n_{t_k})\bigr] I_{[t_k, t)}(u), \\
J_3(t_k, t, u) &= \bigl[h(x_u, u) - H(x^n_u, u; x^n_{t_k})\bigr] I_{[t_k, t)}(u).
\end{aligned}
$$
Then we can rewrite (7) as
$$
\begin{aligned}
E\Bigl[\sup_{s \in [t_0 - \tau, t]} |x(s) - x^n(s)|^r\Bigr]
&\le 3^{r-1}(t - t_0)^{r-1}\int_{t_0}^{t} E\Bigl|\sum_{k=0}^{j-1} J_1(t_k, t_{k+1}, u) + J_1(t_j, t, u)\Bigr|^r du \\
&\quad + 3^{r-1} C (t - t_0)^{r/2-1}\int_{t_0}^{t} E\Bigl|\sum_{k=0}^{j-1} J_2(t_k, t_{k+1}, u) + J_2(t_j, t, u)\Bigr|^r du \\
&\quad + 3^{r-1} C (t - t_0)^{r/2-1}\int_{t_0}^{t} E\Bigl|\sum_{k=0}^{j-1} J_3(t_k, t_{k+1}, u) + J_3(t_j, t, u)\Bigr|^r du \\
&\quad + 3^{r-1}\lambda^r \int_{t_0}^{t} E\Bigl|\sum_{k=0}^{j-1} J_3(t_k, t_{k+1}, u) + J_3(t_j, t, u)\Bigr|^r du.
\end{aligned}
$$
(8)
On the other hand, for $k = 0, 1, \ldots, j-1$,
$$
\begin{aligned}
\sum_{k=0}^{j-1} J_1(t_k, t_{k+1}, u) + J_1(t_j, t, u) &=
\begin{cases}
f(x_u, u) - F(x^n_u, u; x^n_{t_k}), & u \in [t_k, t_{k+1}), \\
f(x_u, u) - F(x^n_u, u; x^n_{t_j}), & u \in [t_j, t),
\end{cases} \\
\sum_{k=0}^{j-1} J_2(t_k, t_{k+1}, u) + J_2(t_j, t, u) &=
\begin{cases}
g(x_u, u) - G(x^n_u, u; x^n_{t_k}), & u \in [t_k, t_{k+1}), \\
g(x_u, u) - G(x^n_u, u; x^n_{t_j}), & u \in [t_j, t),
\end{cases} \\
\sum_{k=0}^{j-1} J_3(t_k, t_{k+1}, u) + J_3(t_j, t, u) &=
\begin{cases}
h(x_u, u) - H(x^n_u, u; x^n_{t_k}), & u \in [t_k, t_{k+1}), \\
h(x_u, u) - H(x^n_u, u; x^n_{t_j}), & u \in [t_j, t).
\end{cases}
\end{aligned}
$$
Relation (8) then becomes
$$
\begin{aligned}
E\Bigl[\sup_{s \in [t_0 - \tau, t]} |x(s) - x^n(s)|^r\Bigr]
&\le 3^{r-1}(T - t_0)^{r-1}\sum_{k=0}^{j-1}\int_{t_k}^{t_{k+1}} E\bigl|f(x_u, u) - F(x^n_u, u; x^n_{t_k})\bigr|^r du \\
&\quad + 3^{r-1}(T - t_0)^{r-1}\int_{t_j}^{t} E\bigl|f(x_u, u) - F(x^n_u, u; x^n_{t_j})\bigr|^r du \\
&\quad + 3^{r-1} C (T - t_0)^{r/2-1}\sum_{k=0}^{j-1}\int_{t_k}^{t_{k+1}} E\bigl|g(x_u, u) - G(x^n_u, u; x^n_{t_k})\bigr|^r du \\
&\quad + 3^{r-1} C (T - t_0)^{r/2-1}\int_{t_j}^{t} E\bigl|g(x_u, u) - G(x^n_u, u; x^n_{t_j})\bigr|^r du \\
&\quad + 3^{r-1} C (T - t_0)^{r/2-1}\sum_{k=0}^{j-1}\int_{t_k}^{t_{k+1}} E\bigl|h(x_u, u) - H(x^n_u, u; x^n_{t_k})\bigr|^r du \\
&\quad + 3^{r-1} C (T - t_0)^{r/2-1}\int_{t_j}^{t} E\bigl|h(x_u, u) - H(x^n_u, u; x^n_{t_j})\bigr|^r du \\
&\quad + 3^{r-1}\lambda^r \sum_{k=0}^{j-1}\int_{t_k}^{t_{k+1}} E\bigl|h(x_u, u) - H(x^n_u, u; x^n_{t_k})\bigr|^r du \\
&\quad + 3^{r-1}\lambda^r \int_{t_j}^{t} E\bigl|h(x_u, u) - H(x^n_u, u; x^n_{t_j})\bigr|^r du.
\end{aligned}
$$
Using (H1), (H4) and (5) yields
$$
\begin{aligned}
\int_{t_k}^{t} E\bigl|f(x_u, u) - F(x^n_u, u; x^n_{t_k})\bigr|^r du
&\le 2^{r-1}\Bigl[\int_{t_k}^{t} E\bigl|f(x_u, u) - f(x^n_u, u)\bigr|^r du + \int_{t_k}^{t} E\bigl|f(x^n_u, u) - F(x^n_u, u; x^n_{t_k})\bigr|^r du\Bigr] \\
&\le 2^{r-1} L_1^r \int_{t_k}^{t} E\|x_u - x^n_u\|^r du \\
&\quad + 2^{r-1}\int_{t_k}^{t} E\Bigl|\frac{f(x^n_{t_k} + \beta(x^n_u - x^n_{t_k}), u)^{(m_1+1)}(x^n_u - x^n_{t_k}, \ldots, x^n_u - x^n_{t_k})}{(m_1+1)!}\Bigr|^r du \\
&\le 2^{r-1} L_1^r \int_{t_k}^{t} E\|x_u - x^n_u\|^r du + \frac{2^{r-1} L_2^r}{[(m_1+1)!]^r}\int_{t_k}^{t} E\|x^n_u - x^n_{t_k}\|^{(m_1+1)r} du \\
&\le 2^{r-1} L_1^r \int_{t_k}^{t} E\|x_u - x^n_u\|^r du + \frac{2^{r-1} L_2^r C}{[(m_1+1)!]^r}\,\Delta^{(m_1+1)r/2}(t - t_k),
\end{aligned}
$$
where $k = 0, 1, \ldots, j$ and $t \in [t_k, t_{k+1}]$. Similarly,
$$
\begin{aligned}
\int_{t_k}^{t} E\bigl|g(x_u, u) - G(x^n_u, u; x^n_{t_k})\bigr|^r du &\le C\int_{t_k}^{t} E\|x_u - x^n_u\|^r du + C\Delta^{(m_2+1)r/2}(t - t_k), \\
\int_{t_k}^{t} E\bigl|h(x_u, u) - H(x^n_u, u; x^n_{t_k})\bigr|^r du &\le C\int_{t_k}^{t} E\|x_u - x^n_u\|^r du + C\Delta^{(m_3+1)r/2}(t - t_k).
\end{aligned}
$$
Altogether,
$$
E\Bigl[\sup_{s \in [t_0 - \tau, t]} |x(s) - x^n(s)|^r\Bigr] \le C\int_{t_0}^{t} E\|x_u - x^n_u\|^r du + C\Delta^{(m+1)r/2}(t - t_0),
$$
where $m = \min\{m_1, m_2, m_3\}$. In order to estimate $E\|x_u - x^n_u\|^r$, we distinguish two cases:
(1) When $u - \tau < t_0$,
$$
\begin{aligned}
E\|x_u - x^n_u\|^r &\le E\Bigl[\sup_{\theta \in [-\tau, 0]} |x(u + \theta) - x^n(u + \theta)|^r\Bigr] = E\Bigl[\sup_{\gamma \in [u - \tau, u]} |x(\gamma) - x^n(\gamma)|^r\Bigr] \\
&\le E\Bigl[\sup_{\gamma \in [u - \tau, t_0]} |x(\gamma) - x^n(\gamma)|^r\Bigr] + E\Bigl[\sup_{\gamma \in [t_0, u]} |x(\gamma) - x^n(\gamma)|^r\Bigr] \\
&= E\Bigl[\sup_{\gamma \in [t_0, u]} |x(\gamma) - x^n(\gamma)|^r\Bigr] \le E\Bigl[\sup_{\gamma \in [t_0 - \tau, u]} |x(\gamma) - x^n(\gamma)|^r\Bigr];
\end{aligned}
$$
(2) When $u - \tau \ge t_0$,
$$
E\|x_u - x^n_u\|^r \le E\Bigl[\sup_{\gamma \in [u - \tau, u]} |x(\gamma) - x^n(\gamma)|^r\Bigr] \le E\Bigl[\sup_{\gamma \in [t_0 - \tau, u]} |x(\gamma) - x^n(\gamma)|^r\Bigr].
$$
So,
$$
E\Bigl[\sup_{s \in [t_0 - \tau, t]} |x(s) - x^n(s)|^r\Bigr] \le C\int_{t_0}^{t} E\Bigl[\sup_{\gamma \in [t_0 - \tau, u]} |x(\gamma) - x^n(\gamma)|^r\Bigr] du + C\Delta^{(m+1)r/2}(t - t_0).
$$
By the Gronwall inequality, we obtain the desired result
$$
E\Bigl[\sup_{s \in [t_0 - \tau, t]} |x(s) - x^n(s)|^r\Bigr] \le C\Delta^{(m+1)r/2}(T - t_0)\,e^{C(T - t_0)} = C\Delta^{(m+1)r/2},
$$

which completes the proof. □

Remark From the proof, it is easy to see that the approximate solution converges to the true solution of Eq. (1) faster than the Euler-Maruyama approximation does.
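One way to examine such rates numerically (not carried out in the paper) is to generate a reference path on a fine grid, rebuild coarser approximations from the same Brownian and Poisson increments, and regress the logarithm of the strong error on the logarithm of the step size. The sketch below does this for the Euler-Maruyama-type step of a scalar jump equation without delay; the coefficients, sample sizes and grid levels are our own illustrative choices, and the same machinery applies to scheme (2) by swapping in the corresponding stepping function.

```python
import numpy as np

# Illustrative strong-rate check (not from the paper): simulate a reference
# path on a fine grid, rebuild coarser approximations from the same
# Brownian/Poisson increments, and fit the slope of log(error) vs log(step).

rng = np.random.default_rng(2)

def em_path(x0, f, g, h, dW, dN, dt):
    """One Euler-Maruyama-type path built from given increments."""
    x, t = x0, 0.0
    for w, jumps in zip(dW, dN):
        x = x + f(x, t) * dt + g(x, t) * w + h(x, t) * jumps
        t += dt
    return x

f = lambda x, t: -x          # toy drift
g = lambda x, t: 0.3 * x     # toy diffusion
h = lambda x, t: 0.1 * x     # toy jump coefficient
lam, T, x0 = 2.0, 1.0, 1.0

n_fine = 2 ** 12
dt_fine = T / n_fine
levels = (2 ** 4, 2 ** 6, 2 ** 8)            # coarse grids
steps = [T / lvl for lvl in levels]

errors = []
for _ in range(200):                          # Monte Carlo samples
    dW = rng.normal(0.0, np.sqrt(dt_fine), n_fine)
    dN = rng.poisson(lam * dt_fine, n_fine)
    x_ref = em_path(x0, f, g, h, dW, dN, dt_fine)     # fine-grid reference
    row = []
    for lvl in levels:
        m = n_fine // lvl
        dW_c = dW.reshape(lvl, m).sum(axis=1)         # aggregated increments
        dN_c = dN.reshape(lvl, m).sum(axis=1)
        row.append(abs(em_path(x0, f, g, h, dW_c, dN_c, T / lvl) - x_ref))
    errors.append(row)

mean_err = np.mean(errors, axis=0)
slope = np.polyfit(np.log(steps), np.log(mean_err), 1)[0]
print("estimated strong order:", slope)
```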

Declarations

Acknowledgements

This work was partly supported by the Scientific Research Fund of the Guangxi Hall of Science and Technology, No. 201106LX407.

Authors’ Affiliations

(1)
College of Science, Guangxi University of Technology, Liuzhou, China
(2)
Department of Mathematics, Swansea University, Swansea, UK

References

  1. Øksendal B, Sulem A: Applied Stochastic Control of Jump Diffusions. Springer, Berlin; 2006.
  2. Gikhman II, Skorohod AV: Stochastic Differential Equations. Springer, New York; 1972.
  3. Mao X: Stochastic Differential Equations and Applications. Horwood, New York; 1997.
  4. Yuan C, Mao X: Robust stability and controllability of stochastic differential delay equations with Markovian switching. Automatica 2004, 40: 343-354. doi:10.1016/j.automatica.2003.10.012
  5. Kolmanovskii V, Koroleva N, Maizenberg T, Mao X, Matasov A: Neutral stochastic differential delay equations with Markovian switching. Stoch. Anal. Appl. 2003, 21: 819-847. doi:10.1081/SAP-120022865
  6. Mao X: Razumikhin-type theorems on exponential stability of neutral stochastic functional-differential equations. SIAM J. Math. Anal. 1997, 28: 389-401. doi:10.1137/S0036141095290835
  7. Mao X: Stability of stochastic differential equations with Markovian switching. Stoch. Process. Appl. 1999, 79: 45-67. doi:10.1016/S0304-4149(98)00070-2
  8. Mao X, Sabanis S: Numerical solutions of stochastic differential delay equations under local Lipschitz condition. J. Comput. Appl. Math. 2003, 151: 215-227. doi:10.1016/S0377-0427(02)00750-1
  9. Mao X: Numerical solutions of stochastic functional differential equations. LMS J. Comput. Math. 2003, 6: 141-161.
  10. Svishchuk AV, Kazmerchuk YI: Stability of stochastic delay equations of Itô form with jumps and Markovian switchings, and their applications in finance. Theory Probab. Math. Stat. 2002, 64: 167-178.
  11. Higham DJ, Kloeden PE: Numerical methods for nonlinear stochastic differential equations with jumps. Numer. Math. 2005, 101: 101-119. doi:10.1007/s00211-005-0611-8
  12. Bao J, Hou Z: An analytic approximation of solutions of stochastic differential delay equations with Markovian switching. Math. Comput. Model. 2009, 50: 1379-1384. doi:10.1016/j.mcm.2009.07.006
  13. Bao J, Truman A, Yuan C: Almost sure asymptotic stability of stochastic partial differential equations with jumps. SIAM J. Control Optim. 2011, 49(2): 771-787. doi:10.1137/100786812
  14. Ji Y, Chizeck HJ: Controllability, stabilizability and continuous-time Markovian jump linear quadratic control. IEEE Trans. Autom. Control 1990, 35: 777-788. doi:10.1109/9.57016
  15. Milošević M, Jovanović M, Janković S: An approximate method via Taylor series for stochastic functional differential equations. J. Math. Anal. Appl. 2010, 363: 128-137. doi:10.1016/j.jmaa.2009.07.061
  16. Mariton M: Jump Linear Systems in Automatic Control. Dekker, New York; 1990.

Copyright

© Wang et al.; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
