Open Access

The averaging method for stochastic differential delay equations under non-Lipschitz conditions

Advances in Difference Equations 2013, 2013:38

https://doi.org/10.1186/1687-1847-2013-38

Received: 5 December 2012

Accepted: 31 January 2013

Published: 19 February 2013

Abstract

Under a non-Lipschitz condition, the averaging principle for general stochastic differential delay equations (SDDEs) is established. Convergence in mean square and convergence in probability between the solutions of the standard SDDEs and those of the corresponding averaged SDDEs are proved.

MSC:34K50, 34C29.

Keywords

averaging method; stochastic differential delay equations; non-Lipschitz

1 Introduction

The averaging principle plays an important role in dynamical systems in problems of mechanics, physics, control and many other areas. The rigorous results on averaging principles were firstly put forward by Krylov and Bogolyubov [1]. After that, Khasminskii [2, 3] considered averaging principles of Itô’s stochastic differential equations, parabolic and elliptic differential equations.

Most systems in science and industry are perturbed by random environmental effects, which are often described by Gaussian noise. As a special non-Gaussian Lévy noise, Poisson noise also arises frequently in the study of stochastic systems. Stoyanov and Bainov [4] investigated the averaging method for a class of stochastic differential equations with Poisson noise. They considered the connections between the solutions of systems in standard form and the solutions of the corresponding averaged systems, and proved that, under some conditions, the solutions of the averaged systems converge to the solutions of the original systems in mean square and in probability. Following the Bogolyubov theorem (cf. [5]), Kolomiets and Mel’nikov [6] gave a theorem concerning averaging, on a finite time interval, of a system of integro-differential equations with Poisson noise. Going beyond Poisson noise, Xu et al. [7] established an averaging principle for stochastic differential equations with general non-Gaussian Lévy noise.

Stochastic differential equations (SDEs) model systems whose future state is independent of the past and determined solely by the present. For real systems, however, the future state often depends not only on the present state but also on past states. Thus, SDDEs have attracted great attention recently [8, 9], but averaging principles for SDDEs have not been considered so far. Motivated by the papers above, we consider averaging principles for general SDEs and SDDEs under a non-Lipschitz condition.

The rest of the paper is organized as follows. In Section 2 we give a detailed description of the averaging process for general SDEs under a non-Lipschitz condition. Section 3 establishes the averaging principle for SDDEs under a non-Lipschitz condition.

2 Stochastic differential equation case

Let $(\Omega, \mathcal{F}, P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e., it is right continuous and $\mathcal{F}_0$ contains all $P$-null sets), and let $B(t)$ be a given $m$-dimensional Brownian motion defined on this space. Let $0 < T < \infty$ and let $X_0$ be an $\mathcal{F}_0$-measurable $\mathbb{R}^d$-valued random variable such that $E|X_0|^2 < \infty$. Consider the following $d$-dimensional stochastic differential equation:
$dX(t) = f(t, X(t))\,dt + g(t, X(t))\,dB(t), \quad 0 \le t \le T,$
(1)
with initial data $X_0$, where $f : [0,T] \times \mathbb{R}^d \to \mathbb{R}^d$ and $g : [0,T] \times \mathbb{R}^d \to \mathbb{R}^{d \times m}$ are both continuous and Borel measurable. By the definition of the stochastic differential, this equation is equivalent to the following stochastic integral equation:
$X(t) = X_0 + \int_0^t f(s, X(s))\,ds + \int_0^t g(s, X(s))\,dB(s), \quad t \in [0, T].$
(2)

In order to guarantee the existence and uniqueness of the solution to (1), we impose a condition on the coefficient functions.

(A1) Non-Lipschitz condition: for any $x, y \in \mathbb{R}^d$ and $t \in [0, T]$,
$|f(t,x) - f(t,y)|^2 \vee |g(t,x) - g(t,y)|^2 \le \kappa(|x - y|^2),$
where $\kappa(\cdot)$ is a continuous increasing concave function from $\mathbb{R}_+$ to $\mathbb{R}_+$ such that $\kappa(0) = 0$, $\kappa(x) > 0$ for $x > 0$ and
$\int_{0+} \frac{dx}{\kappa(x)} = \infty.$
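To get a feel for which functions $\kappa$ pass this Osgood-type test, one can approximate $\int_\epsilon^1 dx/\kappa(x)$ numerically for shrinking $\epsilon$ and check whether it blows up. The sketch below is our own illustration, not part of the paper; the function name, grid size, and sample choices of $\kappa$ are arbitrary.

```python
import math

def osgood_integral(kappa, eps, upper=1.0, n=100_000):
    """Midpoint-rule approximation of the integral of 1/kappa(x) over
    [eps, upper]; condition (A1) requires it to diverge as eps -> 0+."""
    h = (upper - eps) / n
    return sum(h / kappa(eps + (i + 0.5) * h) for i in range(n))

# kappa(x) = x is concave, increasing, kappa(0) = 0, and the integral
# behaves like log(1/eps): it blows up, so (A1) admits it.
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, osgood_integral(lambda x: x, eps))

# kappa(x) = sqrt(x) is also concave and increasing, but the integral
# stays near 2*(1 - sqrt(eps)) <= 2: it does not diverge, so the
# Osgood-type condition fails for it.
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, osgood_integral(lambda x: math.sqrt(x), eps))
```

Note that $\kappa(x) = Kx$ recovers the classical Lipschitz case, so (A1) is genuinely weaker than a Lipschitz condition.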

It is known from [10, Theorem 6.5] that under the condition (A1) there exists a unique solution $X(t)$ to (1) with initial data $X_0$.

The standard form of (2) is
$X_\epsilon(t) = X_0 + \epsilon \int_0^t f(s, X_\epsilon(s))\,ds + \sqrt{\epsilon} \int_0^t g(s, X_\epsilon(s))\,dB(s),$
(3)

where $X_0$ and the coefficients satisfy the same conditions as in (1), and $\epsilon \in (0, \epsilon_0]$ is a small positive parameter with $\epsilon_0$ a fixed number.

According to the existence and uniqueness theorem, (3) also has a unique solution $X_\epsilon(t)$, $t \in [0, T]$, for every fixed $\epsilon \in (0, \epsilon_0]$. In order to determine whether, for small $\epsilon$, the solution $X_\epsilon(t)$ can be approximated by some simpler process, we impose further conditions on the coefficients.

Let $\bar{f}(x) : \mathbb{R}^d \to \mathbb{R}^d$ and $\bar{g}(x) : \mathbb{R}^d \to \mathbb{R}^{d \times m}$ be measurable functions satisfying the same non-Lipschitz condition with respect to $x$ as $f(t,x)$ and $g(t,x)$ do. Moreover, we assume that the following inequalities hold:

For $x \in \mathbb{R}^d$ and $T_1 \in (0, T]$,

(A2)
$\frac{1}{T_1} \int_0^{T_1} |f(s,x) - \bar{f}(x)|\,ds \le \psi_1(T_1)(1 + |x|),$
(A3)
$\frac{1}{T_1} \int_0^{T_1} |g(s,x) - \bar{g}(x)|^2\,ds \le \psi_2(T_1)(1 + |x|^2),$

where $\psi_i(T_1)$, $i = 1, 2$, are positive bounded functions with $\lim_{T_1 \to \infty} \psi_i(T_1) = 0$.

We now consider the following averaged stochastic equation which corresponds to the original standard form (3):
$Y_\epsilon(t) = X_0 + \epsilon \int_0^t \bar{f}(Y_\epsilon(s))\,ds + \sqrt{\epsilon} \int_0^t \bar{g}(Y_\epsilon(s))\,dB(s).$
(4)

Obviously, (4) also has a unique solution $Y_\epsilon(t)$ under conditions similar to those guaranteeing the solution $X_\epsilon(t)$ of (3). We now consider the connections between the processes $X_\epsilon(t)$ and $Y_\epsilon(t)$; in particular, convergence in mean square and convergence in probability between the standard form and the averaged form of (2).

The following two theorems give the connections between the processes X ϵ ( t ) and Y ϵ ( t ) .

Theorem 2.1 Suppose that the conditions (A1)-(A3) are satisfied. Then for a given arbitrarily small number $\delta_1 > 0$ and constants $L > 0$, $\alpha \in (0, 1)$, there exists a number $\epsilon_1 \in (0, \epsilon_0]$ such that for all $\epsilon \in (0, \epsilon_1]$, we have
$E\Big( \sup_{t \in [0, L\epsilon^{-\alpha}]} |X_\epsilon(t) - Y_\epsilon(t)|^2 \Big) \le \delta_1.$
Proof Consider the difference $X_\epsilon(t) - Y_\epsilon(t)$. By (3) and (4), we have
$X_\epsilon(t) - Y_\epsilon(t) = \epsilon \int_0^t [f(s, X_\epsilon(s)) - \bar{f}(Y_\epsilon(s))]\,ds + \sqrt{\epsilon} \int_0^t [g(s, X_\epsilon(s)) - \bar{g}(Y_\epsilon(s))]\,dB(s).$
By the elementary inequality $|x_1 + x_2|^2 \le 2(|x_1|^2 + |x_2|^2)$, for $u \in [0, T]$, we obtain
$\sup_{0 \le t \le u} |X_\epsilon(t) - Y_\epsilon(t)|^2 \le 2\epsilon^2 \sup_{0 \le t \le u} \Big| \int_0^t [f(s, X_\epsilon(s)) - \bar{f}(Y_\epsilon(s))]\,ds \Big|^2 + 2\epsilon \sup_{0 \le t \le u} \Big| \int_0^t [g(s, X_\epsilon(s)) - \bar{g}(Y_\epsilon(s))]\,dB(s) \Big|^2.$
Denote
$J_1^2 = 2\epsilon^2 \sup_{0 \le t \le u} \Big| \int_0^t [f(s, X_\epsilon(s)) - \bar{f}(Y_\epsilon(s))]\,ds \Big|^2, \qquad J_2^2 = 2\epsilon \sup_{0 \le t \le u} \Big| \int_0^t [g(s, X_\epsilon(s)) - \bar{g}(Y_\epsilon(s))]\,dB(s) \Big|^2.$
Thus, using the elementary inequality again, we get
$J_1^2 = 2\epsilon^2 \sup_{0 \le t \le u} \Big| \int_0^t [f(s, X_\epsilon(s)) - f(s, Y_\epsilon(s)) + f(s, Y_\epsilon(s)) - \bar{f}(Y_\epsilon(s))]\,ds \Big|^2 \le 4\epsilon^2 \sup_{0 \le t \le u} \Big| \int_0^t [f(s, X_\epsilon(s)) - f(s, Y_\epsilon(s))]\,ds \Big|^2 + 4\epsilon^2 \sup_{0 \le t \le u} \Big| \int_0^t [f(s, Y_\epsilon(s)) - \bar{f}(Y_\epsilon(s))]\,ds \Big|^2 := J_{11}^2 + J_{12}^2.$
By the Cauchy-Schwarz inequality and the condition (A1), taking expectations of $J_{11}^2$ yields
$E|J_{11}|^2 \le 4\epsilon^2 E\Big( \sup_{0 \le t \le u} t \int_0^t |f(s, X_\epsilon(s)) - f(s, Y_\epsilon(s))|^2\,ds \Big) \le 4\epsilon^2 u \int_0^u E[\kappa(|X_\epsilon(s) - Y_\epsilon(s)|^2)]\,ds.$
For $|J_{12}|^2$, we take expectations and use the condition (A2) to get
$E|J_{12}|^2 \le 4\epsilon^2 E\Big( \sup_{0 \le t \le u} t^2 \Big| \frac{1}{t} \int_0^t [f(s, Y_\epsilon(s)) - \bar{f}(Y_\epsilon(s))]\,ds \Big|^2 \Big) \le 4\epsilon^2 \sup_{0 \le t \le u} \Big\{ t^2 \psi_1(t)^2 \Big[ 1 + E\Big( \sup_{0 \le s \le t} |Y_\epsilon(s)| \Big) \Big]^2 \Big\} \le 8\epsilon^2 u^2 \psi_1(u)^2 \Big[ 1 + E\Big( \sup_{0 \le t \le u} |Y_\epsilon(t)|^2 \Big) \Big].$
By the properties of solutions to stochastic differential equations, we know that if $E|X_0|^2 < \infty$, then $E|X(t)|^2 < \infty$ for each $t \ge 0$ (cf. [11]). Following the discussion of [4], this property, combined with the fact that $\lim_{T_1 \to \infty} \psi_1(T_1) = 0$, allows us to conclude that there exists a constant $C_1$ such that
$E|J_{12}|^2 \le 8\epsilon^2 u^2 C_1.$
Consequently,
$E|J_1|^2 \le 4\epsilon^2 u \int_0^u E[\kappa(|X_\epsilon(s) - Y_\epsilon(s)|^2)]\,ds + 8\epsilon^2 u^2 C_1.$
(5)
On the other hand, taking expectations of $J_2^2$ and using the Burkholder-Davis-Gundy inequality [10] together with the elementary inequality again, we get
$E|J_2|^2 \le 8\epsilon E\Big( \int_0^u |g(s, X_\epsilon(s)) - \bar{g}(Y_\epsilon(s))|^2\,ds \Big) \le 16\epsilon E\Big( \int_0^u |g(s, X_\epsilon(s)) - g(s, Y_\epsilon(s))|^2\,ds \Big) + 16\epsilon E\Big( \int_0^u |g(s, Y_\epsilon(s)) - \bar{g}(Y_\epsilon(s))|^2\,ds \Big) := E|J_{21}|^2 + E|J_{22}|^2.$
By the condition (A1), we obtain
$E|J_{21}|^2 \le 16\epsilon \int_0^u E[\kappa(|X_\epsilon(s) - Y_\epsilon(s)|^2)]\,ds.$
As to $E|J_{22}|^2$, we use the condition (A3) to get
$E|J_{22}|^2 = 16\epsilon E\Big( \int_0^u |g(s, Y_\epsilon(s)) - \bar{g}(Y_\epsilon(s))|^2\,ds \Big) \le 16\epsilon u \psi_2(u) \Big[ 1 + E\Big( \sup_{0 \le t \le u} |Y_\epsilon(t)|^2 \Big) \Big].$
In the same way as for $E|J_{12}|^2$, there exists a constant $C_2$ such that
$E|J_{22}|^2 \le 16\epsilon u C_2.$
Then
$E|J_2|^2 \le 16\epsilon \int_0^u E[\kappa(|X_\epsilon(s) - Y_\epsilon(s)|^2)]\,ds + 16\epsilon u C_2.$
(6)
Putting (5) and (6) together, we see that
$E\Big( \sup_{0 \le t \le u} |X_\epsilon(t) - Y_\epsilon(t)|^2 \Big) \le 4\epsilon(\epsilon u + 4) \int_0^u E[\kappa(|X_\epsilon(s) - Y_\epsilon(s)|^2)]\,ds + 8\epsilon^2 u^2 C_1 + 16\epsilon u C_2.$
(7)
From the condition (A1), we know that $\kappa$ is concave and $\kappa(0) = 0$; therefore, we can find a pair of positive constants $a$ and $b$ such that
$\kappa(x) \le a + bx \quad \text{for all } x \ge 0.$
(8)
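The existence of such constants follows from monotonicity and concavity alone; a short justification (the normalization point $x_0 = 1$ is our arbitrary choice, not from the paper):

```latex
% kappa is increasing and concave on [0, infinity) with kappa(0) = 0, so its
% right derivative kappa'_+(1) exists and is finite, and:
%   for 0 <= x <= 1:  kappa(x) <= kappa(1)                       (monotonicity)
%   for x > 1:        kappa(x) <= kappa(1) + kappa'_+(1)(x - 1)  (concavity)
% Hence, with a = kappa(1) and b = kappa'_+(1),
\kappa(x) \;\le\; \kappa(1) + \kappa'_+(1)\,x \;=\; a + b\,x, \qquad x \ge 0 .
```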
Substituting this into (7) gives
$E\Big( \sup_{0 \le t \le u} |X_\epsilon(t) - Y_\epsilon(t)|^2 \Big) \le 4b\epsilon(\epsilon u + 4) \int_0^u E(|X_\epsilon(s) - Y_\epsilon(s)|^2)\,ds + 4\epsilon u(\epsilon u a + 2\epsilon u C_1 + 4a + 4C_2) \le 4\epsilon u(\epsilon u a + 2\epsilon u C_1 + 4a + 4C_2) + 4b\epsilon(\epsilon u + 4) \int_0^u E\Big( \sup_{0 \le s_1 \le s} |X_\epsilon(s_1) - Y_\epsilon(s_1)|^2 \Big)\,ds.$
The Gronwall inequality then yields
$E\Big( \sup_{0 \le t \le u} |X_\epsilon(t) - Y_\epsilon(t)|^2 \Big) \le 4\epsilon u(\epsilon u a + 2\epsilon u C_1 + 4a + 4C_2) \exp\{4b\epsilon u(\epsilon u + 4)\}.$
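This step is the integral form of the Gronwall inequality. Writing $\varphi(u) = E(\sup_{0 \le t \le u} |X_\epsilon(t) - Y_\epsilon(t)|^2)$ and fixing the endpoint $u$ (the prefactors below are nondecreasing in $u$, so they may be frozen at $u$), the identification is as follows (this annotation is ours, spelling out the step):

```latex
% Gronwall: phi(u) <= A + B \int_0^u phi(s) ds  implies  phi(u) <= A e^{B u}.
\varphi(u) \le \underbrace{4\epsilon u\,(\epsilon u a + 2\epsilon u C_1 + 4a + 4C_2)}_{A}
  \;+\; \underbrace{4b\epsilon(\epsilon u + 4)}_{B} \int_0^u \varphi(s)\,ds
  \quad\Longrightarrow\quad \varphi(u) \le A\, e^{B u} .
```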
Choose $\alpha \in (0, 1)$ and $L > 0$ such that for every $t \in [0, L\epsilon^{-\alpha}] \subseteq [0, T]$,
$E\Big( \sup_{0 \le t \le L\epsilon^{-\alpha}} |X_\epsilon(t) - Y_\epsilon(t)|^2 \Big) \le C L \epsilon^{1-\alpha},$

where $C = 4(aL\epsilon^{1-\alpha} + 2C_1 L\epsilon^{1-\alpha} + 4a + 4C_2) \exp\{4bL\epsilon^{1-\alpha}(L\epsilon^{1-\alpha} + 4)\}$ is a constant.

That is, given any number $\delta_1 > 0$, we can choose $\epsilon_1 \in (0, \epsilon_0]$ such that for each $\epsilon \in (0, \epsilon_1]$ and for every $t \in [0, L\epsilon^{-\alpha}]$,
$E\Big( \sup_{t \in [0, L\epsilon^{-\alpha}]} |X_\epsilon(t) - Y_\epsilon(t)|^2 \Big) \le \delta_1.$

This completes the proof. □

Besides the convergence in mean square between (3) and (4), we can also obtain convergence in probability.

Theorem 2.2 Suppose that the conditions (A1)-(A3) are satisfied. Then for a given arbitrarily small number $\delta_2 > 0$ and constants $L > 0$, $\alpha \in (0, 1)$, there exists a number $\epsilon_1 \in (0, \epsilon_0]$ such that for all $\epsilon \in (0, \epsilon_1]$, we have
$\lim_{\epsilon \to 0} P\Big( \sup_{t \in [0, L\epsilon^{-\alpha}]} |X_\epsilon(t) - Y_\epsilon(t)| > \delta_2 \Big) = 0.$
Proof By the result of Theorem 2.1 and the Chebyshev-Markov inequality, for any given number $\delta_2 > 0$, we have
$P\Big( \sup_{t \in [0, L\epsilon^{-\alpha}]} |X_\epsilon(t) - Y_\epsilon(t)| > \delta_2 \Big) \le \frac{1}{\delta_2^2} E\Big( \sup_{t \in [0, L\epsilon^{-\alpha}]} |X_\epsilon(t) - Y_\epsilon(t)|^2 \Big) \le \frac{1}{\delta_2^2} C L \epsilon^{1-\alpha}.$

Letting $\epsilon \to 0$ on both sides of the inequality, we get the required result. □

In order to illustrate the averaging process of stochastic differential equations under non-Lipschitz conditions, we give the following example.

Example 2.1

Consider the one-dimensional SDE
$dX(t) = X(t)\sin t\,dt + X(t)\,dB(t), \quad t \ge 0,$
with initial condition $X(0) = X_0$, $E|X_0|^2 < \infty$, where $B(t)$ is a scalar Brownian motion. The corresponding standard form of the above SDE is
$dX_\epsilon = \epsilon X_\epsilon \sin t\,dt + \sqrt{\epsilon}\, X_\epsilon\,dB(t).$
Denote
$f(t, X_\epsilon) = X_\epsilon \sin t, \qquad g(t, X_\epsilon) = X_\epsilon.$
Then
$\bar{f}(X_\epsilon) = \frac{1}{\pi} \int_0^\pi f(t, X_\epsilon)\,dt = \frac{2}{\pi} X_\epsilon, \qquad \bar{g}(X_\epsilon) = \frac{1}{\pi} \int_0^\pi g(t, X_\epsilon)\,dt = X_\epsilon.$
Define the averaged SDE
$dY_\epsilon = \frac{2\epsilon}{\pi} Y_\epsilon\,dt + \sqrt{\epsilon}\, Y_\epsilon\,dB(t).$
Obviously, the explicit solution of this equation is
$Y_\epsilon(t) = X_0 \exp\Big\{ \Big( \frac{2\epsilon}{\pi} - \frac{\epsilon}{2} \Big) t + \sqrt{\epsilon}\, B(t) \Big\}.$
For $\kappa(x) = kx$, where $k$ is a positive constant, it is easy to see that the conditions (A1)-(A3) are satisfied, and thus Theorems 2.1 and 2.2 hold. That is,
$E\Big( \sup_{t \in [0, L\epsilon^{-\alpha}]} |X_\epsilon(t) - Y_\epsilon(t)|^2 \Big) \le \delta_1$

and $X_\epsilon(t) \to Y_\epsilon(t)$ in probability as $\epsilon \to 0$.
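The convergence in this example can be probed numerically with Euler-Maruyama, driving the standard and averaged equations with the same Brownian increments. The sketch below is our own illustration, not part of the paper; note that $2/\pi$ is the mean of $|\sin t|$ over its period $\pi$ (the mean of $\sin t$ over a full period $[0, 2\pi]$ is $0$), so for a clean long-horizon illustration we take the drift $x|\sin t|$; the step size, horizon, seeds, and path count are arbitrary choices of ours.

```python
import math
import random

def sup_sq_gap(eps, L=1.0, alpha=0.5, dt=0.01, x0=1.0, seed=0):
    """Euler-Maruyama for dX = eps*|sin t|*X dt + sqrt(eps)*X dB and the
    averaged dY = (2/pi)*eps*Y dt + sqrt(eps)*Y dB on [0, L*eps**(-alpha)],
    driving both with the same Brownian increments; returns sup_t |X-Y|^2."""
    rng = random.Random(seed)
    n = int(L * eps ** (-alpha) / dt)
    x = y = x0
    gap = 0.0
    for i in range(n):
        t = i * dt
        dB = rng.gauss(0.0, math.sqrt(dt))
        x += eps * abs(math.sin(t)) * x * dt + math.sqrt(eps) * x * dB
        y += (2.0 / math.pi) * eps * y * dt + math.sqrt(eps) * y * dB
        gap = max(gap, (x - y) ** 2)
    return gap

def mean_gap(eps, paths=200):
    """Monte Carlo estimate of E(sup_t |X_eps(t) - Y_eps(t)|^2)."""
    return sum(sup_sq_gap(eps, seed=s) for s in range(paths)) / paths

# Theorem 2.1 predicts a bound of order eps^(1-alpha) on [0, L*eps^(-alpha)],
# so the estimate should shrink as eps decreases.
print(mean_gap(0.1), mean_gap(0.01))
```

Because both equations see the same noise, the diffusion terms largely cancel and the estimate isolates the drift-averaging error, which is what Theorem 2.1 controls.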

By a procedure similar to the stochastic differential equation case, we can derive the averaging principle for SDEs with delay.

3 Stochastic differential delay equation case

Let $\tau > 0$ and denote by $C([-\tau, 0]; \mathbb{R}^d)$ the family of continuous functions $\phi$ from $[-\tau, 0]$ to $\mathbb{R}^d$ with the norm $\|\phi\| = \sup_{-\tau \le \theta \le 0} |\phi(\theta)|$. Let $0 < T < \infty$. We now consider the following stochastic differential delay equation in $\mathbb{R}^d$:
$dX(t) = f(t, X(t), X(t - \tau))\,dt + g(t, X(t), X(t - \tau))\,dB(t), \quad 0 \le t \le T,$
(9)

where $f : [0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ and $g : [0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^{d \times m}$ are both continuous and Borel measurable. The initial data $X(0) = \xi = \{\xi(\theta) : -\tau \le \theta \le 0\}$ is an $\mathcal{F}_0$-measurable $C([-\tau, 0]; \mathbb{R}^d)$-valued random variable such that $E\|\xi\|^2 < \infty$.

We impose the following condition:

(A1′) Non-Lipschitz condition: for any $x, \hat{x}, y, \hat{y} \in \mathbb{R}^d$ and $t \in [0, T]$,
$|f(t,x,y) - f(t,\hat{x},\hat{y})|^2 \vee |g(t,x,y) - g(t,\hat{x},\hat{y})|^2 \le \kappa(|x - \hat{x}|^2) + K|y - \hat{y}|^2,$
(10)
where $K$ is a positive constant and $\kappa(\cdot)$ is a continuous increasing concave function from $\mathbb{R}_+$ to $\mathbb{R}_+$ such that $\kappa(0) = 0$, $\kappa(x) > 0$ for $x > 0$ and
$\int_{0+} \frac{dx}{\kappa(x)} = \infty.$

Under the condition (A1′), (9) has a unique solution for t [ 0 , T ] .

Consider the standard form of the SDDE in $\mathbb{R}^d$:
$X_\epsilon(t) = X(0) + \epsilon \int_0^t f(s, X_\epsilon(s), X_\epsilon(s - \tau))\,ds + \sqrt{\epsilon} \int_0^t g(s, X_\epsilon(s), X_\epsilon(s - \tau))\,dB(s),$
(11)

where $X(0)$ and the coefficients satisfy the same conditions as in (9), and $\epsilon \in (0, \epsilon_0]$ is a small positive parameter with $\epsilon_0$ a fixed number. Obviously, (11) also has a unique solution $X_\epsilon(t)$, $t \in [0, T]$, for every fixed $\epsilon \in (0, \epsilon_0]$.

Let $\bar{f}(x, y) : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ and $\bar{g}(x, y) : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^{d \times m}$ be measurable functions satisfying the condition (A1′). Moreover, we assume that the following inequalities hold:

For $x, y \in \mathbb{R}^d$ and $T_1 \in (0, T]$,

(A2′)
$\frac{1}{T_1} \int_0^{T_1} |f(s,x,y) - \bar{f}(x,y)|\,ds \le \psi_3(T_1)(1 + |x| + |y|),$
(12)
(A3′)
$\frac{1}{T_1} \int_0^{T_1} |g(s,x,y) - \bar{g}(x,y)|^2\,ds \le \psi_4(T_1)(1 + |x|^2 + |y|^2),$
(13)

where $\psi_i(T_1)$, $i = 3, 4$, are positive bounded functions with $\lim_{T_1 \to \infty} \psi_i(T_1) = 0$.

The averaged form of (11) is
$Z_\epsilon(t) = X(0) + \epsilon \int_0^t \bar{f}(Z_\epsilon(s), Z_\epsilon(s - \tau))\,ds + \sqrt{\epsilon} \int_0^t \bar{g}(Z_\epsilon(s), Z_\epsilon(s - \tau))\,dB(s).$
(14)

Obviously, (14) also has a unique solution $Z_\epsilon(t)$ under conditions similar to those guaranteeing the solution $X_\epsilon(t)$ of (11). In the rest of the paper, we consider the connections between the processes $X_\epsilon(t)$ and $Z_\epsilon(t)$.

Theorem 3.1 Suppose that the conditions (A1′)-(A3′) are satisfied. Then for a given arbitrarily small number $\delta_3 > 0$ and constants $L > 0$, $\alpha \in (0, 1)$, there exists a number $\epsilon_1 \in (0, \epsilon_0]$ such that for all $\epsilon \in (0, \epsilon_1]$, we have
$E\Big( \sup_{t \in [0, L\epsilon^{-\alpha}]} |X_\epsilon(t) - Z_\epsilon(t)|^2 \Big) \le \delta_3.$
Proof Considering the difference $X_\epsilon(t) - Z_\epsilon(t)$, we have
$X_\epsilon(t) - Z_\epsilon(t) = \epsilon \int_0^t [f(s, X_\epsilon(s), X_\epsilon(s-\tau)) - \bar{f}(Z_\epsilon(s), Z_\epsilon(s-\tau))]\,ds + \sqrt{\epsilon} \int_0^t [g(s, X_\epsilon(s), X_\epsilon(s-\tau)) - \bar{g}(Z_\epsilon(s), Z_\epsilon(s-\tau))]\,dB(s).$
For $u \in [0, T]$, it is easy to obtain by elementary inequalities that
$\sup_{0 \le t \le u} |X_\epsilon(t) - Z_\epsilon(t)|^2 \le 2\epsilon^2 \sup_{0 \le t \le u} \Big| \int_0^t [f(s, X_\epsilon(s), X_\epsilon(s-\tau)) - \bar{f}(Z_\epsilon(s), Z_\epsilon(s-\tau))]\,ds \Big|^2 + 2\epsilon \sup_{0 \le t \le u} \Big| \int_0^t [g(s, X_\epsilon(s), X_\epsilon(s-\tau)) - \bar{g}(Z_\epsilon(s), Z_\epsilon(s-\tau))]\,dB(s) \Big|^2 := K_1^2 + K_2^2.$
By elementary computation, we get
$K_1^2 \le 4\epsilon^2 \sup_{0 \le t \le u} \Big| \int_0^t [f(s, X_\epsilon(s), X_\epsilon(s-\tau)) - f(s, Z_\epsilon(s), Z_\epsilon(s-\tau))]\,ds \Big|^2 + 4\epsilon^2 \sup_{0 \le t \le u} \Big| \int_0^t [f(s, Z_\epsilon(s), Z_\epsilon(s-\tau)) - \bar{f}(Z_\epsilon(s), Z_\epsilon(s-\tau))]\,ds \Big|^2 := K_{11}^2 + K_{12}^2.$
By the Cauchy-Schwarz inequality and the condition (A1′), taking expectations of $K_{11}^2$ yields
$E|K_{11}|^2 \le 4\epsilon^2 E\Big( \sup_{0 \le t \le u} t \int_0^t |f(s, X_\epsilon(s), X_\epsilon(s-\tau)) - f(s, Z_\epsilon(s), Z_\epsilon(s-\tau))|^2\,ds \Big) \le 4\epsilon^2 u \int_0^u E[\kappa(|X_\epsilon(s) - Z_\epsilon(s)|^2)]\,ds + 4\epsilon^2 u K\, E \int_0^u |X_\epsilon(s-\tau) - Z_\epsilon(s-\tau)|^2\,ds.$
Compute
$E \int_0^u |X_\epsilon(s-\tau) - Z_\epsilon(s-\tau)|^2\,ds \le E \int_{-\tau}^0 |X_\epsilon(s) - Z_\epsilon(s)|^2\,ds + E \int_0^u |X_\epsilon(s) - Z_\epsilon(s)|^2\,ds.$
Substituting this into $K_{11}^2$, we get
$E|K_{11}|^2 \le 4\epsilon^2 u \int_0^u E[\kappa(|X_\epsilon(s) - Z_\epsilon(s)|^2)]\,ds + 4\epsilon^2 u K\, E \int_0^u |X_\epsilon(s) - Z_\epsilon(s)|^2\,ds + 4\epsilon^2 u K\, E \int_{-\tau}^0 |X_\epsilon(s) - Z_\epsilon(s)|^2\,ds.$
Taking expectations of $|K_{12}|^2$, using the condition (A2′) and the inequality $|x_1 + x_2 + x_3|^2 \le 3(|x_1|^2 + |x_2|^2 + |x_3|^2)$, we obtain
$E|K_{12}|^2 \le 4\epsilon^2 E\Big( \sup_{0 \le t \le u} t^2 \Big| \frac{1}{t} \int_0^t [f(s, Z_\epsilon(s), Z_\epsilon(s-\tau)) - \bar{f}(Z_\epsilon(s), Z_\epsilon(s-\tau))]\,ds \Big|^2 \Big) \le 4\epsilon^2 E\Big( \sup_{0 \le t \le u} \Big[ t^2 \psi_3(t)^2 \sup_{0 \le s \le t} (1 + |Z_\epsilon(s)| + |Z_\epsilon(s-\tau)|)^2 \Big] \Big) \le 12\epsilon^2 u^2 \psi_3(u)^2 \Big[ 1 + E\Big( \sup_{-\tau \le t \le 0} |Z_\epsilon(t)|^2 \Big) + 2E\Big( \sup_{0 \le t \le u} |Z_\epsilon(t)|^2 \Big) \Big].$
Taking expectations of $K_2^2$ and using the Burkholder-Davis-Gundy inequality, we get
$E|K_2|^2 \le 8\epsilon E\Big( \int_0^u |g(s, X_\epsilon(s), X_\epsilon(s-\tau)) - \bar{g}(Z_\epsilon(s), Z_\epsilon(s-\tau))|^2\,ds \Big) \le 16\epsilon E\Big( \int_0^u |g(s, X_\epsilon(s), X_\epsilon(s-\tau)) - g(s, Z_\epsilon(s), Z_\epsilon(s-\tau))|^2\,ds \Big) + 16\epsilon E\Big( \int_0^u |g(s, Z_\epsilon(s), Z_\epsilon(s-\tau)) - \bar{g}(Z_\epsilon(s), Z_\epsilon(s-\tau))|^2\,ds \Big) := E|K_{21}|^2 + E|K_{22}|^2.$
With the condition (A1′),
$E|K_{21}|^2 \le 16\epsilon E \int_0^u [\kappa(|X_\epsilon(s) - Z_\epsilon(s)|^2) + K|X_\epsilon(s-\tau) - Z_\epsilon(s-\tau)|^2]\,ds \le 16\epsilon \int_0^u E[\kappa(|X_\epsilon(s) - Z_\epsilon(s)|^2)]\,ds + 16\epsilon K \int_0^u E|X_\epsilon(s) - Z_\epsilon(s)|^2\,ds + 16\epsilon K \int_{-\tau}^0 E|X_\epsilon(s) - Z_\epsilon(s)|^2\,ds.$
Applying the condition (A3′) to $E|K_{22}|^2$, we get
$E|K_{22}|^2 \le 16\epsilon E\Big( \sup_{0 \le t \le u} \Big[ t \psi_4(t) \sup_{0 \le s \le t} (1 + |Z_\epsilon(s)|^2 + |Z_\epsilon(s-\tau)|^2) \Big] \Big) \le 16\epsilon u \psi_4(u) \Big[ 1 + E\Big( \sup_{-\tau \le t \le 0} |Z_\epsilon(t)|^2 \Big) + 2E\Big( \sup_{0 \le t \le u} |Z_\epsilon(t)|^2 \Big) \Big].$
Putting $|K_1|^2$ and $|K_2|^2$ together, we see that
$E\Big( \sup_{0 \le t \le u} |X_\epsilon(t) - Z_\epsilon(t)|^2 \Big) \le 4\epsilon(\epsilon u + 4) \int_0^u E[\kappa(|X_\epsilon(s) - Z_\epsilon(s)|^2)]\,ds + 4\epsilon K(\epsilon u + 4) \int_0^u E|X_\epsilon(s) - Z_\epsilon(s)|^2\,ds + 4\epsilon K(\epsilon u + 4) \int_{-\tau}^0 E|X_\epsilon(s) - Z_\epsilon(s)|^2\,ds + 4\epsilon u [3\epsilon u \psi_3(u)^2 + 4\psi_4(u)] \Big[ 1 + E\Big( \sup_{-\tau \le t \le 0} |Z_\epsilon(t)|^2 \Big) + 2E\Big( \sup_{0 \le t \le u} |Z_\epsilon(t)|^2 \Big) \Big].$
Since $E\|\xi\|^2 < \infty$, we have $E|X(t)|^2 < \infty$ for each $t \ge 0$. Combining this with the fact that $\lim_{T_1 \to \infty} \psi_i(T_1) = 0$, $i = 3, 4$, and noting that $X_\epsilon(s) = Z_\epsilon(s) = \xi(s)$ for $s \in [-\tau, 0]$, so that the integral over $[-\tau, 0]$ vanishes, we can estimate that there exists a constant $C_3$ such that
$E\Big( \sup_{0 \le t \le u} |X_\epsilon(t) - Z_\epsilon(t)|^2 \Big) \le C_3 + 4\epsilon(\epsilon u + 4) \int_0^u E[\kappa(|X_\epsilon(s) - Z_\epsilon(s)|^2)]\,ds + 4\epsilon K(\epsilon u + 4) \int_0^u E|X_\epsilon(s) - Z_\epsilon(s)|^2\,ds.$
By (8), we can further derive
$E\Big( \sup_{0 \le t \le u} |X_\epsilon(t) - Z_\epsilon(t)|^2 \Big) \le C_3 + 4a\epsilon u(\epsilon u + 4) + 4\epsilon(\epsilon u + 4)(b + K) \int_0^u E\Big( \sup_{0 \le s_1 \le s} |X_\epsilon(s_1) - Z_\epsilon(s_1)|^2 \Big)\,ds.$
The Gronwall inequality then yields
$E\Big( \sup_{0 \le t \le u} |X_\epsilon(t) - Z_\epsilon(t)|^2 \Big) \le [C_3 + 4a\epsilon u(\epsilon u + 4)] \exp\{4\epsilon u(\epsilon u + 4)(b + K)\} \le [C_4 + 4a(\epsilon u + 4)]\, \epsilon u \exp\{4\epsilon u(\epsilon u + 4)(b + K)\},$

where C 4 is a constant.

Choose $\alpha \in (0, 1)$ and $L > 0$ such that for every $t \in [0, L\epsilon^{-\alpha}] \subseteq [0, T]$,
$E\Big( \sup_{0 \le t \le L\epsilon^{-\alpha}} |X_\epsilon(t) - Z_\epsilon(t)|^2 \Big) \le C L \epsilon^{1-\alpha},$

where $C = [C_4 + 4a(L\epsilon^{1-\alpha} + 4)] \exp\{4L\epsilon^{1-\alpha}(L\epsilon^{1-\alpha} + 4)(b + K)\}$ is a constant.

That is, given any number $\delta_3 > 0$, we can choose $\epsilon_1 \in (0, \epsilon_0]$ such that for each $\epsilon \in (0, \epsilon_1]$ and for every $t \in [0, L\epsilon^{-\alpha}]$,
$E\Big( \sup_{t \in [0, L\epsilon^{-\alpha}]} |X_\epsilon(t) - Z_\epsilon(t)|^2 \Big) \le \delta_3.$

This completes the proof. □

The next theorem gives convergence in probability between X ϵ ( t ) and Z ϵ ( t ) .

Theorem 3.2 Suppose that the conditions (A1′)-(A3′) are satisfied. Then for a given arbitrarily small number $\delta_4 > 0$ and constants $L > 0$, $\alpha \in (0, 1)$, there exists a number $\epsilon_1 \in (0, \epsilon_0]$ such that for all $\epsilon \in (0, \epsilon_1]$, we have
$\lim_{\epsilon \to 0} P\Big( \sup_{t \in [0, L\epsilon^{-\alpha}]} |X_\epsilon(t) - Z_\epsilon(t)| > \delta_4 \Big) = 0.$
Proof By the result of Theorem 3.1 and the Chebyshev-Markov inequality, for any given number $\delta_4 > 0$, we have
$P\Big( \sup_{t \in [0, L\epsilon^{-\alpha}]} |X_\epsilon(t) - Z_\epsilon(t)| > \delta_4 \Big) \le \frac{1}{\delta_4^2} E\Big( \sup_{t \in [0, L\epsilon^{-\alpha}]} |X_\epsilon(t) - Z_\epsilon(t)|^2 \Big) \le \frac{1}{\delta_4^2} C L \epsilon^{1-\alpha}.$

Letting $\epsilon \to 0$ on both sides of the inequality, we get the required result. □

The following example gives the averaging process of stochastic differential delay equations under non-Lipschitz conditions.

Example 3.1

Consider the following one-dimensional SDDE:
$dX_\epsilon(t) = 2\epsilon \sin^2 t\,(a X_\epsilon(t) + b X_\epsilon(t-1))\,dt + \sqrt{\epsilon}\,(c X_\epsilon(t) + d X_\epsilon(t-1))\,dB(t), \quad t \ge 0,$

with initial condition $X_\epsilon(t) = t + 1$, $t \in [-1, 0]$, where $a$, $b$, $c$, $d$ are constants and $B(t)$ is a one-dimensional Wiener process. Obviously, $f(t, x, y) = 2\sin^2 t\,(ax + by)$ and $g(t, x, y) = cx + dy$.

Let
$\bar{f}(X_\epsilon(t), X_\epsilon(t-1)) = \frac{1}{\pi} \int_0^\pi f(t, X_\epsilon(t), X_\epsilon(t-1))\,dt = a X_\epsilon(t) + b X_\epsilon(t-1), \qquad \bar{g}(X_\epsilon(t), X_\epsilon(t-1)) = \frac{1}{\pi} \int_0^\pi g(t, X_\epsilon(t), X_\epsilon(t-1))\,dt = c X_\epsilon(t) + d X_\epsilon(t-1).$
Define the corresponding averaged SDDE as follows:
$dZ_\epsilon(t) = \epsilon (a Z_\epsilon(t) + b Z_\epsilon(t-1))\,dt + \sqrt{\epsilon}\,(c Z_\epsilon(t) + d Z_\epsilon(t-1))\,dB(t), \quad t \ge 0.$
On $t \in [0, 1]$, since $Z_\epsilon(t-1) = (t-1) + 1 = t$ by the initial condition, the linear SDDE becomes a linear SDE:
$dZ_\epsilon(t) = \epsilon(a Z_\epsilon(t) + bt)\,dt + \sqrt{\epsilon}\,(c Z_\epsilon(t) + dt)\,dB(t).$
The explicit solution of this SDE is
$Z_\epsilon(t) = \Phi(t) \Big( X(0) + \int_0^t \epsilon(b - cd)\,\Phi^{-1}(s)\,s\,ds + \int_0^t \sqrt{\epsilon}\,d\,\Phi^{-1}(s)\,s\,dB(s) \Big),$
where
$\Phi(t) = \exp\Big( \Big( \epsilon a - \frac{\epsilon c^2}{2} \Big) t + \sqrt{\epsilon}\,c\,B(t) \Big).$

Repeating this procedure over the intervals [1, 2], [2, 3], etc., we can obtain the explicit solution on any finite interval.

For $\kappa(x) = kx$, where $k$ is a positive constant, it is easy to see that the conditions (A1′)-(A3′) are satisfied, and thus Theorems 3.1 and 3.2 hold. That is,
$E\Big( \sup_{t \in [0, L\epsilon^{-\alpha}]} |X_\epsilon(t) - Z_\epsilon(t)|^2 \Big) \le \delta_3,$

and $X_\epsilon(t) \to Z_\epsilon(t)$ in probability as $\epsilon \to 0$.
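As in the SDE case, the delay example can be probed numerically with Euler-Maruyama using the method of steps: the history on $[-1, 0]$ seeds a buffer, and the delayed value at time $t$ is read one delay-length back in the buffer. The sketch below is our own illustration, not part of the paper; the parameter values $a, b, c, d$, step size, horizon, seeds, and path count are arbitrary choices.

```python
import math
import random

def sdde_sup_sq_gap(eps, a=-1.0, b=0.5, c=0.5, d=0.5,
                    L=1.0, alpha=0.5, dt=0.01, seed=0):
    """Euler-Maruyama for the SDDE of Example 3.1 (tau = 1, history
    X(t) = t + 1 on [-1, 0]) and for its averaged form, driven by the same
    Brownian increments; returns sup_t |X_eps(t) - Z_eps(t)|^2 on
    [0, L * eps**(-alpha)]."""
    rng = random.Random(seed)
    n = int(L * eps ** (-alpha) / dt)
    lag = int(round(1.0 / dt))               # grid steps per unit delay
    hist = [k * dt for k in range(lag + 1)]  # X(t) = t + 1 for t in [-1, 0]
    x, z = list(hist), list(hist)
    gap = 0.0
    for i in range(n):
        t = i * dt
        dB = rng.gauss(0.0, math.sqrt(dt))
        xd, zd = x[i], z[i]                  # delayed values at time t - 1
        xn = (x[-1] + 2 * eps * math.sin(t) ** 2 * (a * x[-1] + b * xd) * dt
              + math.sqrt(eps) * (c * x[-1] + d * xd) * dB)
        zn = (z[-1] + eps * (a * z[-1] + b * zd) * dt
              + math.sqrt(eps) * (c * z[-1] + d * zd) * dB)
        x.append(xn)
        z.append(zn)
        gap = max(gap, (xn - zn) ** 2)
    return gap

def mean_sdde_gap(eps, paths=100):
    """Monte Carlo estimate of E(sup_t |X_eps(t) - Z_eps(t)|^2)."""
    return sum(sdde_sup_sq_gap(eps, seed=s) for s in range(paths)) / paths

# The mean-square gap should shrink as eps -> 0 (Theorem 3.1).
print(mean_sdde_gap(0.1), mean_sdde_gap(0.01))
```

Here the averaging is exact in the sense that $2\sin^2 t = 1 - \cos 2t$ has time-average $1$ over any long horizon, so the drift mismatch stays bounded along the whole interval and the gap is driven purely by the oscillatory part.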

Declarations

Acknowledgements

The authors are very grateful to the editor and the anonymous referees for their insightful and constructive comments and suggestions, which have led to an improved version of this paper. The authors would like to thank Central South University for the Postdoctoral Funds and the Fundamental Research Funds.

Authors’ Affiliations

(1)
School of Traffic and Transportation Engineering, School of Mathematics, Central South University

References

  1. Krylov NM, Bogolyubov NN: Les propriétés ergodiques des suites des probabilités en chaîne. C. R. Math. Acad. Sci. Paris 1937, 204: 1454-1546.
  2. Khasminskii RZ: Principle of averaging of parabolic and elliptic differential equations for Markov process with small diffusion. Theory Probab. Appl. 1963, 8: 1-21. doi:10.1137/1108001
  3. Khasminskii RZ: On the averaging principle for Itô stochastic differential equations. Kybernetika 1968, 4: 260-279.
  4. Stoyanov IM, Bainov DD: The averaging method for a class of stochastic differential equations. Ukr. Math. J. 1974, 26(2): 186-194.
  5. Da Prato G, Zabczyk J: Ergodicity for Infinite-Dimensional Systems. London Mathematical Society Lecture Note Series 229. Cambridge University Press, Cambridge; 1996.
  6. Kolomiets VG, Mel’nikov AI: Averaging of stochastic systems of integral-differential equations with Poisson noise. Ukr. Math. J. 1991, 43(2): 242-246. doi:10.1007/BF01060515
  7. Xu Y, Duan JQ, Xu W: An averaging principle for stochastic dynamical systems with Lévy noise. Physica D 2011, 240: 1395-1401. doi:10.1016/j.physd.2011.06.001
  8. Taniguchi T, Liu K, Truman A: Existence, uniqueness and asymptotic behavior of mild solutions to stochastic functional differential equations in Hilbert spaces. J. Differ. Equ. 2002, 181: 72-91. doi:10.1006/jdeq.2001.4073
  9. Mao XR: Numerical solutions of stochastic functional differential equations. LMS J. Comput. Math. 2003, 6: 141-161.
  10. Mao XR: Stochastic Differential Equations and Applications. Woodhead Publishing, Cambridge; 2007.
  11. Applebaum D: Lévy Processes and Stochastic Calculus. 2nd edition. Cambridge University Press, Cambridge; 2009.

Copyright

© Tan and Lei; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.