
On pth moment exponential stability of stochastic fuzzy cellular neural networks with time-varying delays and impulses

Advances in Difference Equations 2013, 2013:172

https://doi.org/10.1186/1687-1847-2013-172

Received: 13 March 2013

Accepted: 28 May 2013

Published: 17 June 2013

Abstract

This paper is concerned with the pth moment exponential stability of fuzzy cellular neural networks with time-varying delays under impulsive perturbations and stochastic noise. Based on a Lyapunov function, stochastic analysis and the differential inequality technique, a set of novel sufficient conditions for pth moment exponential stability of the system is derived. These results generalize and improve some existing ones. Moreover, an illustrative example is given to demonstrate the effectiveness of the obtained results.

Keywords

Unique Equilibrium; Cellular Neural Network; Unique Positive Solution; Stochastic Functional Differential Equation; Fuzzy Cellular Neural Network

1 Introduction

In recent decades, cellular neural networks [1, 2] have been extensively studied and applied in many different fields such as associative memory, signal processing and optimization problems. In such applications, it is of prime importance to ensure that the designed neural networks are stable. In practice, due to the finite speed of the switching and transmission of signals, time delays do exist in a working network and thus should be incorporated into the model equation. In recent years, the dynamical behaviors of cellular neural networks with constant, time-varying or distributed delays have been studied; see, for example, [3-11] and the references therein.

In addition to delay effects, studies have recently focused intensively on stochastic models. It has been realized that synaptic transmission is a noisy process brought on by random fluctuations from the release of neurotransmitters and other probabilistic causes, and it is of great significance to consider stochastic effects on the stability of neural networks or dynamical systems described by stochastic functional differential equations (see [12-23]). On the other hand, most neural networks can be classified as either continuous or discrete, and most investigations have accordingly focused on continuous or discrete systems. However, many real-world systems and neural processes behave in a piecewise continuous style interlaced with instantaneous and abrupt changes (impulses). Motivated by this fact, several new neural networks with impulses have recently been proposed and studied (see [24-33]).

In this paper, we would like to integrate fuzzy operations into cellular neural networks. Yang and Yang [34] first introduced fuzzy cellular neural networks (FCNNs) by combining fuzzy operations with cellular neural networks. Researchers have since found that FCNNs are useful in image processing, and some results have been reported on the stability and periodicity of FCNNs [34-40]. However, to the best of our knowledge, few authors have investigated the stability of stochastic fuzzy cellular neural networks with time-varying delays and impulses.

Motivated by the above discussions, in this paper we consider the following stochastic fuzzy cellular neural networks with time-varying delays and impulses:
$$\begin{cases}
dx_i(t)=\Big[-c_ix_i(t)+\sum_{j=1}^na_{ij}f_j(x_j(t))+\bigwedge_{j=1}^n\alpha_{ij}g_j(x_j(t-\tau_j(t)))+\bigvee_{j=1}^n\beta_{ij}g_j(x_j(t-\tau_j(t)))+I_i\Big]dt\\
\qquad\qquad+\sum_{j=1}^n\sigma_{ij}(t,x_i(t),x_i(t-\tau_i(t)))\,d\omega_j(t),\quad t\ne t_k,\\
\Delta x_i(t_k)=I_{ik}(x(t_k))=x_i(t_k^+)-x_i(t_k^-),\quad k\in\mathbb{Z}_+,\ i=1,2,\ldots,n,
\end{cases}\tag{1}$$

where $n$ corresponds to the number of units in the neural network. For $i=1,2,\ldots,n$, $x_i(t)$ corresponds to the state of the $i$th neuron; $f_j(\cdot)$, $g_j(\cdot)$ are signal transmission functions; $c_i>0$ denotes the rate at which cell $i$ resets its potential to the resting state when isolated from other cells and inputs; $\tau_j(t)$ corresponds to the transmission delay; $a_{ij}$ represents the elements of the feedback template; $I_i=\tilde I_i+\bigwedge_{j=1}^nT_{ij}u_j+\bigvee_{j=1}^nH_{ij}u_j$; $\alpha_{ij}$, $\beta_{ij}$, $T_{ij}$ and $H_{ij}$ are elements of the fuzzy feedback MIN template, fuzzy feedback MAX template, fuzzy feed-forward MIN template and fuzzy feed-forward MAX template, respectively; $\bigwedge$ and $\bigvee$ denote the fuzzy AND and fuzzy OR operations, respectively; $u_j$ denotes the external input of the $j$th neuron; $\tilde I_i$ is the external bias of the $i$th unit; $\tau_i(t)$ is a transmission delay satisfying $0\le\tau_i(t)\le\tau$; $\sigma(t,x,y)=(\sigma_{ij}(t,x_i,y_i))_{n\times n}\in\mathbb{R}^{n\times n}$ is the diffusion coefficient matrix and $\sigma_i(t,x_i,y_i)=(\sigma_{i1}(t,x_i,y_i),\sigma_{i2}(t,x_i,y_i),\ldots,\sigma_{in}(t,x_i,y_i))$ is the $i$th row vector of $\sigma(t,x,y)$; $\omega(t)=(\omega_1(t),\omega_2(t),\ldots,\omega_n(t))^T$ is an $n$-dimensional Brownian motion defined on a complete probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},P)$ with a filtration $\{\mathcal{F}_t\}_{t\ge0}$ satisfying the usual conditions (i.e., it is right continuous and $\mathcal{F}_0$ contains all $P$-null sets); $\Delta x_i(t_k)=x_i(t_k^+)-x_i(t_k^-)$ is the impulse at moment $t_k$; the fixed moments of time $t_k$ satisfy $0=t_0<t_1<t_2<\cdots$, $\lim_{k\to+\infty}t_k=+\infty$; $x_t(s)=x(t+s)$, $s\in[-\tau,0]$.

System (1) is supplemented with the initial condition given by
$$x_{t_0}(s)=\varphi(s),\quad s\in[-\tau,0],\tag{2}$$

where $\varphi(s)$ is $\mathcal{F}_0$-measurable and continuous everywhere except at a finite number of points $t_k$, at which $\varphi(t_k^+)$ and $\varphi(t_k^-)$ exist and $\varphi(t_k^+)=\varphi(t_k)$.
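For readers who wish to experiment with models of the form (1)-(2), the following Python sketch generates one sample path with an Euler-Maruyama step between impulse instants and a jump of the type (A3) at each $t_k$. All numerical values, the constant-delay lookup and the helper names (`drift`, `diffusion`) are illustrative assumptions rather than data from this paper; the fuzzy AND and OR are realized as componentwise min and max over $j$, as in [34].

```python
import numpy as np

# Illustrative 2-neuron data (assumed values, not taken from the paper)
rng = np.random.default_rng(0)
n = 2
c = np.array([2.0, 3.0])                     # self-feedback rates c_i > 0
A = np.array([[0.1, -0.2], [0.3, 0.1]])      # feedback template a_ij
alpha = np.array([[0.2, -0.1], [0.1, 0.2]])  # fuzzy feedback MIN template
beta = np.array([[-0.1, 0.2], [0.2, -0.1]])  # fuzzy feedback MAX template
I = np.array([0.0, 0.0])                     # bias plus fuzzy feed-forward inputs
tau = 0.4                                    # delay bound
gamma = np.array([0.5, 0.8])                 # impulse strengths, 0 < gamma_ik <= 2 as in (A3)
f = g = np.tanh                              # Lipschitz activations with f(0) = g(0) = 0

def drift(x, x_delay):
    """-c_i x_i + sum_j a_ij f_j(x_j) + AND_j alpha_ij g_j(.) + OR_j beta_ij g_j(.) + I_i."""
    fuzzy_and = np.min(alpha * g(x_delay), axis=1)   # fuzzy AND as a min over j
    fuzzy_or = np.max(beta * g(x_delay), axis=1)     # fuzzy OR as a max over j
    return -c * x + A @ f(x) + fuzzy_and + fuzzy_or + I

def diffusion(x, x_delay):
    """A diagonal noise intensity satisfying a bound of the type (A2) (assumed form)."""
    return np.diag(0.1 * x + 0.05 * x_delay)

dt, T = 1e-3, 5.0
impulse_times = np.arange(1.0, T, 1.0)       # assumed impulse instants t_k
steps, hist = int(T / dt), int(tau / dt)
x = np.zeros((steps + hist, n))
x[:hist + 1] = 0.5                           # constant initial function phi on [-tau, 0]

for k in range(hist, steps + hist - 1):
    t = (k - hist) * dt
    xd = x[k - hist]                         # crude constant-delay lookup x(t - tau)
    dw = rng.normal(scale=np.sqrt(dt), size=n)
    x[k + 1] = x[k] + drift(x[k], xd) * dt + diffusion(x[k], xd) @ dw
    if np.any(np.isclose(t, impulse_times, atol=dt / 2)):
        x[k + 1] -= gamma * x[k + 1]         # jump: x(t_k^+) = (1 - gamma_k) x(t_k^-), here x* = 0

print("state near t = T:", x[-1])
```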

Let $PC^{1,2}([t_k,t_{k+1})\times\mathbb{R}^n;\mathbb{R}_+)$ denote the family of all nonnegative functions $V(t,x)$ on $[t_k,t_{k+1})\times\mathbb{R}^n$ that are continuous, once differentiable in $t$ and twice differentiable in $x$. If $V(t,x)\in PC^{1,2}([t_k,t_{k+1})\times\mathbb{R}^n;\mathbb{R}_+)$, define an operator $\mathcal{L}V$ associated with (1) as
$$\mathcal{L}V(t,x)=V_t(t,x)+\sum_{i=1}^nV_{x_i}(t,x)\Big[-c_ix_i(t)+\sum_{j=1}^na_{ij}f_j(x_j(t))+\bigwedge_{j=1}^n\alpha_{ij}g_j(x_j(t-\tau_j(t)))+\bigvee_{j=1}^n\beta_{ij}g_j(x_j(t-\tau_j(t)))+I_i\Big]+\frac12\operatorname{trace}\big[\sigma^TV_{xx}(t,x)\sigma\big],\tag{3}$$
where
$$V_t(t,x)=\frac{\partial V(t,x)}{\partial t},\qquad V_{x_i}(t,x)=\frac{\partial V(t,x)}{\partial x_i},\qquad V_{xx}(t,x)=\Big(\frac{\partial^2V(t,x)}{\partial x_i\,\partial x_j}\Big)_{n\times n}.$$

For convenience, we introduce several notations. $x=(x_1,x_2,\ldots,x_n)^T\in\mathbb{R}^n$ denotes a column vector, and $\|x\|$ denotes the vector norm defined by $\|x\|=(\sum_{i=1}^n|x_i|^p)^{1/p}$. $C[X,Y]$ denotes the space of continuous mappings from the topological space $X$ to the topological space $Y$. $C^b_{\mathcal{F}_0}[[-\tau,0],\mathbb{R}^n]$ denotes the family of all bounded, $\mathcal{F}_0$-measurable, $C[[-\tau,0],\mathbb{R}^n]$-valued random variables $\phi$ satisfying $\|\phi\|_{L^p}=\sup_{-\tau\le\theta\le0}E\|\phi(\theta)\|^p<+\infty$, where $E(\cdot)$ denotes the expectation of a stochastic process.

Throughout the paper, we give the following assumptions.

(A1) The signal transmission functions $f_j(\cdot)$, $g_j(\cdot)$ ($j=1,2,\ldots,n$) are Lipschitz continuous on $\mathbb{R}$ with Lipschitz constants $\mu_j$ and $\nu_j$; namely, for any $u,v\in\mathbb{R}$,
$$|f_j(u)-f_j(v)|\le\mu_j|u-v|,\qquad|g_j(u)-g_j(v)|\le\nu_j|u-v|,\qquad f_j(0)=g_j(0)=0.$$
(A2) There exist nonnegative numbers $s_i$, $w_i$ such that, for all $x,y,x',y'\in\mathbb{R}$ and $i=1,2,\ldots,n$,
$$\big[\sigma_i(t,x,y)-\sigma_i(t,x',y')\big]\big[\sigma_i(t,x,y)-\sigma_i(t,x',y')\big]^T\le s_i|x-x'|^2+w_i|y-y'|^2.$$

(A3) $I_{ik}(x_i(t_k))=-\gamma_{ik}(x_i(t_k)-x_i^*)$, where $x_i^*$ is the $i$th component of the equilibrium point of (1) with the initial condition (2), and $\gamma_{ik}$ satisfies $0<\gamma_{ik}\le2$.

Definition 1.1 The equilibrium point $x^*=(x_1^*,x_2^*,\ldots,x_n^*)^T$ of system (1) is said to be $p$th moment exponentially stable if there exist positive constants $M>0$ and $\lambda>0$ such that
$$E\big(\|x(t)-x^*\|^p\big)\le M\|\varphi-x^*\|^pe^{-\lambda(t-t_0)},\quad t\ge t_0,$$

where $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^T$ is any solution of system (1) with initial value $x_i(t_0+s)=\varphi_i(s)\in PC([-\tau,0],\mathbb{R})$, $i=1,2,\ldots,n$.

When $p=2$, it is usually said to be exponentially stable in mean square.

Lemma 1.1 [34]

Suppose that $x$ and $y$ are two states of system (1). Then we have
$$\Big|\bigwedge_{j=1}^n\alpha_{ij}g_j(x)-\bigwedge_{j=1}^n\alpha_{ij}g_j(y)\Big|\le\sum_{j=1}^n|\alpha_{ij}|\,|g_j(x)-g_j(y)|$$
and
$$\Big|\bigvee_{j=1}^n\beta_{ij}g_j(x)-\bigvee_{j=1}^n\beta_{ij}g_j(y)\Big|\le\sum_{j=1}^n|\beta_{ij}|\,|g_j(x)-g_j(y)|.$$
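Lemma 1.1 can be checked numerically in a few lines. In the sketch below the fuzzy AND and OR are taken to be min and max over $j$ (as in [34]), a single activation $g=\tanh$ is used for every $g_j$, and the template rows and test states are random; all of these choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
alpha = rng.normal(size=n)      # one row of a fuzzy MIN template (arbitrary)
beta = rng.normal(size=n)       # one row of a fuzzy MAX template (arbitrary)
g = np.tanh                     # activation with g(0) = 0

for _ in range(1000):
    x, y = rng.normal(size=2)   # two scalar states fed to every g_j
    lhs_min = abs(np.min(alpha * g(x)) - np.min(alpha * g(y)))
    lhs_max = abs(np.max(beta * g(x)) - np.max(beta * g(y)))
    rhs_min = np.sum(np.abs(alpha)) * abs(g(x) - g(y))
    rhs_max = np.sum(np.abs(beta)) * abs(g(x) - g(y))
    assert lhs_min <= rhs_min + 1e-12 and lhs_max <= rhs_max + 1e-12

print("Lemma 1.1 inequalities held on all random samples")
```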
Lemma 1.2 If $a_i\ge0$ ($i=1,2,\ldots,p$) denote $p$ nonnegative real numbers, then
$$a_1a_2\cdots a_p\le\frac{a_1^p+a_2^p+\cdots+a_p^p}{p},\tag{4}$$
where $p\ge1$ denotes an integer. A particular form of (4) is
$$a_1^{p-1}a_2\le\frac{(p-1)a_1^p}{p}+\frac{a_2^p}{p}.$$
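The particular form of (4) used repeatedly below (a Young-type inequality) is easy to verify on random data; the sample sizes and range in this sketch are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
for p in (2, 3, 4, 5):
    a1, a2 = rng.uniform(0.0, 10.0, size=(2, 100_000))   # nonnegative samples
    lhs = a1 ** (p - 1) * a2
    rhs = (p - 1) / p * a1 ** p + a2 ** p / p             # right-hand side of the particular form of (4)
    assert np.all(lhs <= rhs + 1e-9)

print("a1^(p-1) a2 <= (p-1)/p a1^p + 1/p a2^p held for p = 2, 3, 4, 5")
```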

2 Main results

In this section, we consider the existence and global $p$th moment exponential stability of the equilibrium of system (1).

Lemma 2.1 For two positive real numbers $a$ and $b$, assume that there exists a constant $0\le\eta<1$ such that $0<b\le\eta a$. Then the equation
$$\lambda=a-be^{\lambda\tau}\tag{5}$$

has a unique solution $\lambda>0$.

Proof Let $F(\lambda)=\lambda-a+be^{\lambda\tau}$. It is easy to see that $F(0)=-a+b<0$, $F(a)=be^{a\tau}>0$ and $F'(\lambda)=1+b\tau e^{\lambda\tau}>0$. Thus, $F(\lambda)$ is strictly increasing on $[0,\infty)$. Therefore, Eq. (5) has a unique positive solution $\lambda>0$. □
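Since $F$ is strictly increasing with $F(0)<0$ and $F(a)>0$, the root of (5) can be bracketed on $[0,a]$ and computed by bisection. The helper below is only an illustrative sketch (its name, tolerance and the sample values are assumptions).

```python
import math

def decay_rate(a: float, b: float, tau: float, tol: float = 1e-12) -> float:
    """Unique positive root of F(lam) = lam - a + b * exp(lam * tau), assuming 0 < b < a."""
    F = lambda lam: lam - a + b * math.exp(lam * tau)
    lo, hi = 0.0, a                  # F(0) = b - a < 0 and F(a) = b * exp(a * tau) > 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(decay_rate(a=1.0, b=0.3, tau=0.5))   # arbitrary values with 0 < b < a
```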

Lemma 2.2 [18]

For two positive real numbers $a$ and $b$, assume that there exists a constant $0\le\eta<1$ such that $0<b\le\eta a$. Assume that $z(t)$ is a nonnegative continuous function on $[t_0-\tau,t_0]$ and satisfies the following inequality for $t>t_0$:
$$D^+z(t)\le-az(t)+b\bar z_t,\tag{6}$$

where $\bar z_t=\sup_{t-\tau\le s\le t}z(s)$. Then $z(t)\le\bar z_{t_0}e^{-\lambda(t-t_0)}$, where $\lambda$ is the solution of (5) and the upper right Dini derivative of $z(t)$ is defined as
$$D^+z(t)=\limsup_{\delta\to0^+}\frac{z(t+\delta)-z(t)}{\delta}.$$
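To see Lemma 2.2 at work, one can integrate the worst case of (6) (taken with equality) by an explicit Euler scheme and compare the trajectory with the bound $\bar z_{t_0}e^{-\lambda(t-t_0)}$. The constants, step size and horizon below are arbitrary illustrations, and $\lambda$ is obtained by the same bisection idea as in the sketch after Lemma 2.1.

```python
import math
import numpy as np

a, b, tau = 1.0, 0.3, 0.5                      # arbitrary constants with 0 < b < a
lo, hi = 0.0, a                                # bisection for lambda = a - b * exp(lambda * tau)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mid - a + b * math.exp(mid * tau) < 0 else (lo, mid)
lam = 0.5 * (lo + hi)

dt, T = 1e-3, 10.0
hist = int(tau / dt)
z = np.ones(hist + int(T / dt))                # z = 1 on the initial interval [-tau, 0]
for k in range(hist, len(z) - 1):
    z_bar = z[k - hist:k + 1].max()            # sup of z over [t - tau, t]
    z[k + 1] = z[k] + dt * (-a * z[k] + b * z_bar)   # Euler step for (6) taken with equality

t = np.arange(len(z) - hist) * dt
bound = z[:hist + 1].max() * np.exp(-lam * t)  # Lemma 2.2 bound with bar z_{t_0} = 1
print("bound respected:", bool(np.all(z[hist:] <= bound + 1e-6)))
```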
Theorem 2.1 Under conditions (A1)-(A3), if there exist constants $K_i>0$ ($i=1,2$) and $0<\eta<1$ such that
$$0<K_2<\eta K_1,\tag{7}$$
where
$$\begin{aligned}K_1&=\min_{1\le i\le n}\Big\{pc_i-(p-1)\sum_{j=1}^n\big(\mu_j|a_{ij}|+\nu_j(|\alpha_{ij}|+|\beta_{ij}|)\big)-\sum_{j=1}^n|a_{ji}|\mu_i-\frac{p(p-1)}{2}s_i-\frac{(p-1)(p-2)}{2}w_i\Big\},\\
K_2&=\max_{1\le i\le n}\Big\{\nu_i\sum_{j=1}^n(|\alpha_{ji}|+|\beta_{ji}|)+(p-1)s_i\Big\},\end{aligned}$$

then $x^*=(x_1^*,x_2^*,\ldots,x_n^*)^T$ is the unique equilibrium of system (1), and it is globally $p$th moment exponentially stable.

Proof The proof of the existence and uniqueness of the equilibrium for the system is similar to that in [39], so we omit it. Suppose that $x^*=(x_1^*,x_2^*,\ldots,x_n^*)^T$ is the unique equilibrium of system (1).

Let
$$y_i(t)=x_i(t)-x_i^*,\qquad\tilde\sigma_{ij}(t,y_i(t),y_i(t-\tau_i(t)))=\sigma_{ij}(t,y_i(t)+x_i^*,y_i(t-\tau_i(t))+x_i^*)-\sigma_{ij}(t,x_i^*,x_i^*);$$
then system (1) can be transformed into the following equation, for $i=1,2,\ldots,n$:
$$\begin{cases}
dy_i(t)=\Big[-c_iy_i(t)+\sum_{j=1}^na_{ij}\big(f_j(y_j(t)+x_j^*)-f_j(x_j^*)\big)+\Big(\bigwedge_{j=1}^n\alpha_{ij}g_j(y_j(t-\tau_j(t))+x_j^*)-\bigwedge_{j=1}^n\alpha_{ij}g_j(x_j^*)\Big)\\
\qquad\qquad+\Big(\bigvee_{j=1}^n\beta_{ij}g_j(y_j(t-\tau_j(t))+x_j^*)-\bigvee_{j=1}^n\beta_{ij}g_j(x_j^*)\Big)\Big]dt\\
\qquad\qquad+\sum_{j=1}^n\tilde\sigma_{ij}(t,y_i(t),y_i(t-\tau_i(t)))\,d\omega_j(t),\quad t\ne t_k,\ k=1,2,\ldots,\\
\Delta y_i(t_k)=\tilde I_{ik}(y(t_k))=I_{ik}(y(t_k)+x^*)-I_{ik}(x^*).
\end{cases}\tag{8}$$
We define a Lyapunov function $V(t,y(t))=\sum_{i=1}^n|y_i(t)|^p$. For $t\ge t_0$ with $t\in[t_{k-1},t_k)$, the operator $\mathcal{L}V(t,y(t))$ associated with system (8) takes the following form:
$$\begin{aligned}
\mathcal{L}V(t,y(t))={}&p\sum_{i=1}^n|y_i(t)|^{p-1}\operatorname{sgn}(y_i(t))\Big[-c_iy_i(t)+\sum_{j=1}^na_{ij}\big(f_j(y_j(t)+x_j^*)-f_j(x_j^*)\big)\\
&+\Big(\bigwedge_{j=1}^n\alpha_{ij}g_j(y_j(t-\tau_j(t))+x_j^*)-\bigwedge_{j=1}^n\alpha_{ij}g_j(x_j^*)\Big)+\Big(\bigvee_{j=1}^n\beta_{ij}g_j(y_j(t-\tau_j(t))+x_j^*)-\bigvee_{j=1}^n\beta_{ij}g_j(x_j^*)\Big)\Big]\\
&+\frac{p(p-1)}{2}\sum_{i=1}^n|y_i(t)|^{p-2}\sum_{j=1}^n\tilde\sigma_{ij}^2(t,y_i(t),y_i(t-\tau_i(t)))\\
\le{}&-p\sum_{i=1}^nc_i|y_i(t)|^p+p\sum_{i=1}^n\sum_{j=1}^n|a_{ij}|\mu_j|y_i(t)|^{p-1}|y_j(t)|+p\sum_{i=1}^n\sum_{j=1}^n(|\alpha_{ij}|+|\beta_{ij}|)\nu_j|y_i(t)|^{p-1}|y_j(t-\tau_j(t))|\\
&+\frac{p(p-1)}{2}\sum_{i=1}^n|y_i(t)|^{p-2}\big(s_i|y_i(t)|^2+w_i|y_i(t-\tau_i(t))|^2\big)\\
\le{}&-p\sum_{i=1}^nc_i|y_i(t)|^p+\sum_{i=1}^n\sum_{j=1}^n|a_{ij}|\mu_j\big((p-1)|y_i(t)|^p+|y_j(t)|^p\big)\\
&+\sum_{i=1}^n\sum_{j=1}^n(|\alpha_{ij}|+|\beta_{ij}|)\nu_j\big((p-1)|y_i(t)|^p+|y_j(t-\tau_j(t))|^p\big)\\
&+\frac{p(p-1)}{2}\sum_{i=1}^ns_i|y_i(t)|^p+\frac{p-1}{2}\sum_{i=1}^nw_i\big((p-1)|y_i(t)|^p+2|y_i(t-\tau_i(t))|^p\big)\\
={}&-\sum_{i=1}^n\Big[pc_i-(p-1)\sum_{j=1}^n\big(\mu_j|a_{ij}|+\nu_j(|\alpha_{ij}|+|\beta_{ij}|)\big)-\sum_{j=1}^n|a_{ji}|\mu_i-\frac{p(p-1)}{2}s_i-\frac{(p-1)(p-2)}{2}w_i\Big]|y_i(t)|^p\\
&+\sum_{i=1}^n\Big[\nu_i\sum_{j=1}^n(|\alpha_{ji}|+|\beta_{ji}|)+(p-1)s_i\Big]|y_i(t-\tau_i(t))|^p\\
\le{}&-K_1V(t,y(t))+K_2\sup_{t-\tau\le s\le t}V(s,y(s)),
\end{aligned}\tag{9}$$
where
$$\begin{aligned}K_1&=\min_{1\le i\le n}\Big\{pc_i-(p-1)\sum_{j=1}^n\big(\mu_j|a_{ij}|+\nu_j(|\alpha_{ij}|+|\beta_{ij}|)\big)-\sum_{j=1}^n|a_{ji}|\mu_i-\frac{p(p-1)}{2}s_i-\frac{(p-1)(p-2)}{2}w_i\Big\},\\
K_2&=\max_{1\le i\le n}\Big\{\nu_i\sum_{j=1}^n(|\alpha_{ji}|+|\beta_{ji}|)+(p-1)s_i\Big\}.\end{aligned}$$
Firstly, for $t\in[t_0,t_1)$, applying the Itô formula, we obtain
$$V(t+\delta,y(t+\delta))-V(t,y(t))=\int_t^{t+\delta}\mathcal{L}V(s,y(s))\,ds+\int_t^{t+\delta}V_y(s,y(s))\,\sigma(s,y(s),y(s-\tau(s)))\,d\omega(s).\tag{10}$$
Since $E\big[\int_t^{t+\delta}V_y(s,y(s))\,\sigma(s,y(s),y(s-\tau(s)))\,d\omega(s)\big]=0$, taking expectations on both sides of equality (10) and applying inequality (9) yields
$$E\big(V(t+\delta,y(t+\delta))\big)-E\big(V(t,y(t))\big)\le\int_t^{t+\delta}\Big[-K_1E\big(V(s,y(s))\big)+K_2E\Big(\sup_{s-\tau\le\theta\le s}V(\theta,y(\theta))\Big)\Big]ds.\tag{11}$$
Since the upper right Dini derivative $D^+$ is given by
$$D^+E\big(V(t,y(t))\big)=\limsup_{\delta\to0^+}\frac{E(V(t+\delta,y(t+\delta)))-E(V(t,y(t)))}{\delta},\tag{12}$$
denoting $z(t)=E(V(t,y(t)))$, the preceding inequality (11) leads directly to
$$D^+z(t)\le-K_1z(t)+K_2\bar z_t.\tag{13}$$
Hence, from Lemma 2.2, we have
$$z(t)\le\bar z_{t_0}e^{-\lambda(t-t_0)}.$$
Namely,
$$E\big[\|x(t)-x^*\|^p\big]\le M\|\varphi-x^*\|^pe^{-\lambda(t-t_0)},$$
where $M=1$ and $\lambda$ is the unique positive solution of the equation
$$\lambda=K_1-K_2e^{\lambda\tau}.$$
Next, suppose that for $k=1,2,\ldots,m$, the inequality
$$E\big(V(t,y(t))\big)\le M\|\varphi\|_\tau^pe^{-\lambda(t-t_0)},\quad t\in[t_{k-1},t_k),\ k=1,2,\ldots,m,\tag{14}$$
holds. From (A3), we get
$$E\big(V(t_m,y(t_m))\big)=\sum_{i=1}^nE\big|y_i(t_m^-)+\tilde I_{im}(y_i(t_m^-))\big|^p=E\sum_{i=1}^n|1-\gamma_{im}|^p|y_i(t_m^-)|^p\le E\sum_{i=1}^n|y_i(t_m^-)|^p=E\big(V(t_m^-,y(t_m^-))\big)\le M\|\varphi\|_\tau^pe^{-\lambda(t_m-t_0)}.$$
This, together with (14), leads to
$$E\big(V(t,y(t))\big)\le M\|\varphi\|_\tau^pe^{-\lambda(t-t_0)},\quad t\in[t_0-\tau,t_m].\tag{15}$$
On the other hand, for $t\in[t_m,t_{m+1})$, applying the Itô formula, we get
$$V(t,y(t))-V(t_m,y(t_m))=\int_{t_m}^t\mathcal{L}V(s,y(s))\,ds+\int_{t_m}^tV_y(s,y(s))\,\sigma(s,y(s),y(s-\tau(s)))\,d\omega(s).$$
Then we have
$$EV(t,y(t))-EV(t_m,y(t_m))=\int_{t_m}^tE\mathcal{L}V(s,y(s))\,ds.\tag{16}$$
So, for sufficiently small $\delta>0$, we have
$$EV(t+\delta,y(t+\delta))-EV(t_m,y(t_m))=\int_{t_m}^{t+\delta}E\mathcal{L}V(s,y(s))\,ds.\tag{17}$$
From (16) and (17), we have
$$EV(t+\delta,y(t+\delta))-EV(t,y(t))=\int_t^{t+\delta}E\mathcal{L}V(s,y(s))\,ds\le\int_t^{t+\delta}\Big[-K_1E\big(V(s,y(s))\big)+K_2E\Big(\sup_{s-\tau\le\theta\le s}V(\theta,y(\theta))\Big)\Big]ds.\tag{18}$$
Similarly, we obtain
$$E\big(V(t,y(t))\big)\le M\|\varphi\|_\tau^pe^{-\lambda(t-t_0)},\quad t\in[t_m,t_{m+1}).\tag{19}$$
Hence, by mathematical induction, for any $k=1,2,\ldots$, we conclude that
$$E\big(V(t,y(t))\big)\le M\|\varphi\|_\tau^pe^{-\lambda(t-t_0)},\quad t\ge t_0,$$

which implies that the equilibrium point of the impulsive system (1) is $p$th moment exponentially stable. This completes the proof of the theorem. □
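Since $K_1$ and $K_2$ in Theorem 2.1 are explicit functions of the network data, they can be evaluated mechanically before applying the theorem. The helper below is a direct transcription of the two formulas (the function and argument names are our own); it does not address the existence of the equilibrium.

```python
import numpy as np

def stability_constants(p, c, A, alpha, beta, mu, nu, s, w):
    """K1 and K2 of Theorem 2.1 for exponent p, rates c_i, templates A = (a_ij),
    alpha = (alpha_ij), beta = (beta_ij), Lipschitz constants mu_j, nu_j,
    and noise bounds s_i, w_i (vectors of length n, matrices n x n)."""
    A, alpha, beta = np.abs(A), np.abs(alpha), np.abs(beta)
    K1_terms = (p * c
                - (p - 1) * ((A * mu).sum(axis=1) + ((alpha + beta) * nu).sum(axis=1))
                - A.sum(axis=0) * mu
                - p * (p - 1) / 2 * s
                - (p - 1) * (p - 2) / 2 * w)
    K2_terms = nu * (alpha + beta).sum(axis=0) + (p - 1) * s
    return K1_terms.min(), K2_terms.max()
```

Condition (7) then amounts to checking that the returned values satisfy $0<K_2<K_1$, with $\eta$ any number in $(K_2/K_1,1)$.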

3 Comparisons and remarks

It can be easily seen that many neural networks are special cases of system (1). Thus, in this section, we give some comparisons and remarks.

Suppose that $I_{ik}(x)=x$ for all $x\in\mathbb{R}^n$; then system (1) becomes the following stochastic fuzzy cellular neural network with time-varying delays:
$$\begin{cases}
dx_i(t)=\Big[-c_ix_i(t)+\sum_{j=1}^na_{ij}f_j(x_j(t))+\bigwedge_{j=1}^n\alpha_{ij}g_j(x_j(t-\tau_j(t)))+\bigvee_{j=1}^n\beta_{ij}g_j(x_j(t-\tau_j(t)))+I_i\Big]dt\\
\qquad\qquad+\sum_{j=1}^n\sigma_{ij}(t,x_i(t),x_i(t-\tau_i(t)))\,d\omega_j(t),\quad t\ge t_0,\\
x_i(t_0+s)=\varphi_i(s),\quad-\tau\le s\le0,\ i=1,2,\ldots,n.
\end{cases}\tag{20}$$

For (20), we obtain the following corollary from Theorem 2.1.

Corollary 3.1 Assume that (A1)-(A2) hold. If there exist constants $K_i>0$ ($i=1,2$) and $0<\eta<1$ such that
$$0<K_2<\eta K_1,$$
where
$$\begin{aligned}K_1&=\min_{1\le i\le n}\Big\{pc_i-(p-1)\sum_{j=1}^n\big(\mu_j|a_{ij}|+\nu_j(|\alpha_{ij}|+|\beta_{ij}|)\big)-\sum_{j=1}^n|a_{ji}|\mu_i-\frac{p(p-1)}{2}s_i-\frac{(p-1)(p-2)}{2}w_i\Big\}>0,\\
K_2&=\max_{1\le i\le n}\Big\{\nu_i\sum_{j=1}^n(|\alpha_{ji}|+|\beta_{ji}|)+(p-1)s_i\Big\},\end{aligned}$$

then the unique equilibrium $x^*=(x_1^*,x_2^*,\ldots,x_n^*)^T$ of system (20) is globally $p$th moment exponentially stable.

If we do not consider the fuzzy AND and fuzzy OR operations and take $I_{ik}(x)=x$ for all $x\in\mathbb{R}^n$ in system (1), then system (1) becomes the following stochastic cellular neural network with time-varying delays:
$$\begin{cases}
dx_i(t)=\Big[-c_ix_i(t)+\sum_{j=1}^na_{ij}f_j(x_j(t))+\sum_{j=1}^n\alpha_{ij}g_j(x_j(t-\tau_j(t)))+\sum_{j=1}^n\beta_{ij}g_j(x_j(t-\tau_j(t)))+I_i\Big]dt\\
\qquad\qquad+\sum_{j=1}^n\sigma_{ij}(t,x_i(t),x_i(t-\tau_i(t)))\,d\omega_j(t),\quad t\ge t_0,\\
x_i(t_0+s)=\varphi_i(s),\quad-\tau\le s\le0,\ i=1,2,\ldots,n.
\end{cases}\tag{21}$$

Remark 3.1 The stability of system (21) has been investigated in [31]. In [31], the authors required the differentiability and monotonicity of the time-delay functions, namely $\dot\tau_j(t)\le\xi<1$. This assumption may impose a very strict constraint on the model because time delays sometimes vary dramatically with time in real circuits. Obviously, Theorem 2.1 does not require these conditions.

Remark 3.2 In Theorem 2.1, if we do not consider the fuzzy AND and fuzzy OR operations, the model reduces to a traditional cellular neural network, and the results in [32] become a corollary of Theorem 2.1. Therefore, the results of this paper extend previously known publications.

4 An example

Example 4.1 Consider the following stochastic fuzzy neural network with time-varying delays and impulses:
$$\begin{cases}
dx_1(t)=\Big[-11x_1(t)+0.1f_1(x_1(t))+0.7f_1(x_2(t))+\bigwedge_{j=1}^2\alpha_{1j}f_j(x_j(t-\tau_j(t)))+\bigvee_{j=1}^2\beta_{1j}f_j(x_j(t-\tau_j(t)))\\
\qquad\qquad+\bigwedge_{j=1}^2T_{1j}u_j+\bigvee_{j=1}^2H_{1j}u_j\Big]dt+\sigma_{11}(t,x_1(t),x_1(t-\tau_1(t)))\,d\omega_1+\sigma_{12}(t,x_1(t),x_1(t-\tau_1(t)))\,d\omega_2,\quad t\ne t_k,\\
dx_2(t)=\Big[-17x_2(t)-0.6f_2(x_1(t))+0.3f_2(x_2(t))+\bigwedge_{j=1}^2\alpha_{2j}f_j(x_j(t-\tau_j(t)))+\bigvee_{j=1}^2\beta_{2j}f_j(x_j(t-\tau_j(t)))\\
\qquad\qquad+\bigwedge_{j=1}^2T_{2j}u_j+\bigvee_{j=1}^2H_{2j}u_j\Big]dt+\sigma_{21}(t,x_2(t),x_2(t-\tau_2(t)))\,d\omega_1+\sigma_{22}(t,x_2(t),x_2(t-\tau_2(t)))\,d\omega_2,\quad t\ne t_k,\\
\Delta x_1(t_k)=-\big(1+0.3\sin(1+k^2)\big)x_1(t_k),\qquad\Delta x_2(t_k)=-\big(1+0.6\sin(1+k)\big)x_2(t_k),
\end{cases}\tag{22}$$
where $t_0=0$, $t_k=t_{k-1}+0.2k$, $k=1,2,\ldots$, and
$$\begin{gathered}
f_i(r)=g_i(r)=\tfrac12\big(|r+1|-|r-1|\big),\qquad\tau_j(t)=0.3|\sin t|+0.1\le0.4,\quad i,j=1,2;\\
\alpha_{11}=\tfrac53,\quad\alpha_{21}=\tfrac13,\quad\alpha_{12}=\tfrac14,\quad\alpha_{22}=\tfrac34;\qquad\beta_{11}=\tfrac13,\quad\beta_{21}=\tfrac23,\quad\beta_{12}=\tfrac14,\quad\beta_{22}=\tfrac34;\\
\sigma_{11}(x,y)=0.2x-0.1y,\quad\sigma_{12}(x,y)=0.3x+0.1y,\quad\sigma_{21}(x,y)=0.1x+0.2y,\quad\sigma_{22}(x,y)=0.2x+0.1y,
\end{gathered}$$

and $T_{ij}=H_{ij}=S_{ij}=L_{ij}=u_i=u_j=1$ ($i,j=1,2$).

Let $p=2$. Obviously, $\mu_i=\nu_i=1$, $i=1,2$. By simple computation, we can easily get $K_1=\min\{19.4,31.6\}=19.4$ and $K_2=\max\{6,2\}=6$. Letting $\eta=0.4$, we have
$$\eta K_1=7.76>K_2=6>0.\tag{23}$$

Thus, system (22) satisfies assumptions (A1)-(A3) and condition (7). It follows from Theorem 2.1 that system (22) is exponentially stable in mean square.
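Using the constants reported above ($K_1=19.4$, $K_2=6$, $\tau\le0.4$, $\eta=0.4$), condition (7) and the decay rate $\lambda$ of (5) (with $a=K_1$, $b=K_2$) can be checked in a few lines; the bisection mirrors the sketch given after Lemma 2.1, and the loop count and rounding are arbitrary.

```python
import math

K1, K2, tau, eta = 19.4, 6.0, 0.4, 0.4   # constants reported for Example 4.1
assert 0 < K2 < eta * K1                 # condition (7): 6 < 0.4 * 19.4 = 7.76

lo, hi = 0.0, K1                         # bisection for lambda = K1 - K2 * exp(lambda * tau)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mid - K1 + K2 * math.exp(mid * tau) < 0:
        lo = mid
    else:
        hi = mid

print("decay rate lambda of (5):", round(0.5 * (lo + hi), 4))
```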

Declarations

Acknowledgements

The authors would like to thank the editor and the referees for their helpful comments and valuable suggestions regarding this paper. This work is partially supported by 973 program (2009CB326202), the National Natural Science Foundation of China (11071060, 60835004, 11101133), Fundamental Research Fund for the Central Universities of China (531107040222) and Scientific Research Fund of Hunan Provincial Education Department (Grant No. 12A206).

Authors’ Affiliations

(1)
College of Mathematics and Econometrics, Hunan University, Changsha, Hunan, P.R. China
(2)
College of Mathematics and Computer Science, Hunan City University, Yiyang, Hunan, P.R. China

References

1. Chua LO, Yang L: Cellular neural networks: theory. IEEE Trans. Circuits Syst. 1988, 35: 1257-1272. 10.1109/31.7600
2. Chua LO, Yang L: Cellular neural networks: applications. IEEE Trans. Circuits Syst. 1988, 35: 1273-1290. 10.1109/31.7601
3. Zhao H, Cao J: New conditions for global exponential stability of cellular neural networks with delays. Neural Netw. 2005, 18: 1332-1340. 10.1016/j.neunet.2004.11.010
4. Cao J, Chen T: Globally exponentially robust stability and periodicity of delayed neural networks. Chaos Solitons Fractals 2004, 4: 957-963.
5. Chen Y: Network of neurons with delayed feedback: periodical switching of excitation and inhibition. Dyn. Contin. Discrete Impuls. Syst., Ser. B, Appl. Algorithms 2007, 14: 113-122.
6. Huang C, Huang L: Dynamics of a class of Cohen-Grossberg neural networks with time-varying delays. Nonlinear Anal., Real World Appl. 2007, 8: 40-52. 10.1016/j.nonrwa.2005.04.008
7. Wu Y, Li T, Wu Y: Improved exponential stability criteria for recurrent neural networks with time-varying discrete and distributed delays. Int. J. Autom. Comput. 2010, 7: 199-204. 10.1007/s11633-010-0199-z
8. Liu Z, Chen A, Cao J, Huang L: Existence and global exponential stability of periodic solution for BAM neural networks with periodic coefficients and time-varying delays. IEEE Trans. Circuits Syst. I 2003, 50: 1162-1173. 10.1109/TCSI.2003.816306
9. Jiang H, Teng Z: Global exponential stability of cellular neural networks with time-varying coefficients and delays. Neural Netw. 2004, 17: 1415-1425. 10.1016/j.neunet.2004.03.002
10. Huang T, Cao J, Li C: Necessary and sufficient condition for the absolute exponential stability of a class of neural networks with finite delay. Phys. Lett. A 2006, 352: 94-98. 10.1016/j.physleta.2005.11.038
11. Li X, Huang L, Zhu H: Global stability of cellular neural networks with constant and variable delays. Nonlinear Anal. 2003, 53: 319-333. 10.1016/S0362-546X(02)00176-1
12. Huang C, He Y, Huang L, Zhu W: pth moment stability analysis of stochastic recurrent neural networks with time-varying delays. Inf. Sci. 2008, 178: 2194-2203. 10.1016/j.ins.2008.01.008
13. Huang C, Cao J: Almost sure exponential stability of stochastic cellular neural networks with unbounded distributed delays. Neurocomputing 2009, 72: 3352-3356. 10.1016/j.neucom.2008.12.030
14. Sun Y, Cao J: pth moment exponential stability of stochastic recurrent neural networks with time-varying delays. Nonlinear Anal., Real World Appl. 2007, 8: 1171-1185. 10.1016/j.nonrwa.2006.06.009
15. Wu S, Li C: Exponential stability of impulsive discrete systems with time delay and applications in stochastic neural networks: a Razumikhin approach. Neurocomputing 2012, 82: 29-36.
16. Li X, Fu X: Synchronization of chaotic delayed neural networks with impulsive and stochastic perturbations. Commun. Nonlinear Sci. Numer. Simul. 2011, 16: 885-894. 10.1016/j.cnsns.2010.05.025
17. Park J, Kwon O: Analysis on global stability of stochastic neural networks of neutral type. Mod. Phys. Lett. B 2008, 20: 3159-3170.
18. Huang C, He Y, Chen P: Dynamic analysis of stochastic recurrent neural networks. Neural Process. Lett. 2008, 27: 267-276. 10.1007/s11063-008-9075-z
19. Shatyrko A, Diblik J, Khusainov D, Ruzickova M: Stabilization of Lur'e-type nonlinear control systems by Lyapunov-Krasovskii functionals. Adv. Differ. Equ. 2012, 2012: Article ID 229.
20. Diblik J, Dzhalladova I, Ruzickova M: The stability of nonlinear differential systems with random parameters. Abstr. Appl. Anal. 2012, 2012: Article ID 924107. 10.1155/2012/924107
21. Dzhalladova I, Bastinec J, Diblik J, Khusainov D: Estimates of exponential stability for solutions of stochastic control systems with delay. Abstr. Appl. Anal. 2011, 2011: Article ID 920412. 10.1155/2011/920412
22. Diblik J, Khusainov D, Grytsay I, Smarda Z: Stability of nonlinear autonomous quadratic discrete systems in the critical case. Discrete Dyn. Nat. Soc. 2010, 2010: Article ID 539087. 10.1155/2010/539087
23. Khusainov D, Diblik J, Svoboda Z, Smarda Z: Instable trivial solution of autonomous differential systems with quadratic right-hand sides in a cone. Abstr. Appl. Anal. 2011, 2011: Article ID 154916. 10.1155/2011/154916
24. Bai C: Stability analysis of Cohen-Grossberg BAM neural networks with delays and impulses. Chaos Solitons Fractals 2008, 35: 263-267. 10.1016/j.chaos.2006.05.043
25. Guan Z, James L, Chen G: On impulsive auto-associative neural networks. Neural Netw. 2000, 13: 63-69. 10.1016/S0893-6080(99)00095-7
26. Guan Z, Chen G: On delayed impulsive Hopfield neural networks. Neural Netw. 1999, 12: 273-280. 10.1016/S0893-6080(98)00133-6
27. Li Y: Global exponential stability of BAM neural networks with delays and impulses. Chaos Solitons Fractals 2005, 24: 279-285.
28. Zhang Y, Sun J: Stability of impulsive neural networks with time delays. Phys. Lett. A 2005, 348: 44-50. 10.1016/j.physleta.2005.08.030
29. Chen Z, Ruan J: Global dynamic analysis of general Cohen-Grossberg neural networks with impulse. Chaos Solitons Fractals 2007, 32: 1830-1837. 10.1016/j.chaos.2005.12.018
30. Song Q, Zhang J: Global exponential stability of impulsive Cohen-Grossberg neural network with time-varying delays. Nonlinear Anal., Real World Appl. 2008, 9: 500-510. 10.1016/j.nonrwa.2006.11.015
31. Sun Y, Cao J: pth moment exponential stability of stochastic recurrent neural networks with time-varying delays. Nonlinear Anal., Real World Appl. 2007, 8: 1171-1185. 10.1016/j.nonrwa.2006.06.009
32. Li X, Zou J, Zhu E: The pth moment exponential stability of stochastic cellular neural networks with impulses. Adv. Differ. Equ. 2013, 2013: Article ID 6.
33. Song Q, Wang Z: Stability analysis of impulsive stochastic Cohen-Grossberg neural networks with mixed time delays. Physica A 2008, 387: 3314-3326. 10.1016/j.physa.2008.01.079
34. Yang T, Yang L: The global stability of fuzzy cellular neural networks. IEEE Trans. Circuits Syst. I 1996, 43: 880-883. 10.1109/81.538999
35. Yang T, Yang L, Wu C, Chua L: Fuzzy cellular neural networks: theory. In Proc. IEEE Int. Workshop Cellular Neural Networks Appl. IEEE Press, New York; 1996: 181-186.
36. Yang T, Yang L, Wu C, Chua L: Fuzzy cellular neural networks: applications. In Proc. IEEE Int. Workshop Cellular Neural Networks Appl. IEEE Press, New York; 1996: 225-230.
37. Huang T: Exponential stability of fuzzy cellular neural networks with distributed delay. Phys. Lett. A 2006, 351: 48-52. 10.1016/j.physleta.2005.10.060
38. Huang T: Exponential stability of delayed fuzzy cellular neural networks with diffusion. Chaos Solitons Fractals 2007, 31: 658-664. 10.1016/j.chaos.2005.10.015
39. Zhang Q, Yang L: Exponential p-stability of impulsive stochastic fuzzy cellular neural networks with mixed delays. WSEAS Trans. Math. 2011, 12: 490-499.
40. Yuan K, Cao J, Deng J: Exponential stability and periodic solutions of fuzzy cellular neural networks with time-varying delays. Neurocomputing 2006, 69: 1619-1627. 10.1016/j.neucom.2005.05.011

Copyright

© Xiong and Huang; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.