
Synchronization of a class of uncertain stochastic discrete-time delayed neural networks

Abstract

This paper discusses the global asymptotic synchronization problem for a general class of uncertain stochastic discrete-time neural networks with time delays. The delays include both a time-varying delay and a distributed delay. Based on the drive-response concept and the Lyapunov stability theorem, a linear matrix inequality (LMI) approach is used to establish sufficient conditions under which the considered neural networks are globally asymptotically synchronized in the mean square. Consequently, the global asymptotic synchronization of the stochastic discrete-time neural networks can easily be checked with the numerically efficient Matlab LMI toolbox. Moreover, the obtained results depend not only on the lower bound but also on the upper bound of the time-varying delay; that is, they are delay-dependent. Finally, a simulation example is given to illustrate the effectiveness of the proposed synchronization scheme.

1 Introduction

Since Chua and Yang in [1, 2] proposed the theory and applications of cellular neural networks, the dynamical behaviors of neural networks have attracted a great deal of research interest in the past two decades. This attention has mainly concentrated on the stability and the synchronization problems of neural networks (see [3–28]). Especially after the synchronization problems of chaotic systems were studied by Pecora and Carroll in [29, 30], where they proposed the drive-response concept, the control and synchronization problems of chaotic systems have been thoroughly investigated [5–8, 14–18, 31–35]. Many applications of such systems have been found in different areas, particularly in engineering fields such as secure communication systems (see [33–35]). As long as we can reasonably design the receiver so that the state evolution of the response system synchronizes to that of the drive system, the message can be hidden in a chaotic signal and recovered by the receiver; hence, secure communication can be implemented.

It is well known that neural networks, including Hopfield neural networks (HNNs) and cellular neural networks (CNNs), are large-scale, complex, nonlinear, high-dimensional systems composed of a large number of interconnected neurons. They have also found effective applications in many areas such as image processing, optimization problems, pattern recognition, and so on. Therefore, it is not easy to achieve the control and synchronization of these systems. In [8, 16, 19, 22], by the drive-response method, some results are given for different types of neural networks to guarantee the synchronization of the drive and response systems in the models discussed. It is easy to apply those results to real neural networks. In [5–7, 18], synchronization in an array of linearly or nonlinearly coupled networks has been analyzed in detail. The authors studied the global asymptotic or exponential synchronization of complex dynamical neural networks by constructing a synchronous manifold and showed that it is globally asymptotically or exponentially stable. To the best of our knowledge, up till now, most of the synchronization methods for chaotic systems (especially neural networks) are of drive-response type (also called master-slave systems).

At the same time, most of the papers mentioned above are concerned with continuous-time neural networks. When implementing these networks for practical use, discrete-time models should be formulated. The reader may refer to [3, 23] for more details on the significance of investigating discrete-time neural networks. Therefore, it is important to study the dynamical behaviors of discrete-time neural networks. On the other hand, because synaptic transmission is probably a noisy process brought about by random fluctuations in the release of neurotransmitters, a stochastic disturbance must be considered when formulating real artificial neural networks. Recently, stability and synchronization analysis problems for stochastic or discrete-time neural networks have been investigated; see e.g. [3, 10–13, 19–21, 23] and references therein. So, in this paper, based on the drive-response concept and the Lyapunov functional method, some decentralized control laws will be given for the global asymptotic synchronization of a general class of discrete-time delayed chaotic neural networks with stochastic disturbance. In the neural network model, the parameter uncertainties are norm-bounded, the neural networks are subjected to stochastic disturbances described in terms of a Brownian motion, and the delay includes a time-varying delay and a distributed delay. Up to now, to the best of our knowledge, there are few works on the synchronization problem of discrete-time neural networks with distributed delay, and the master-slave synchronization problem for uncertain stochastic discrete-time neural networks with distributed delay has scarcely been investigated.

This paper is organized as follows. In Section 2, the model formulation and some preliminaries are presented for our main results. In Section 3, based on the drive-response concept and the Lyapunov functional method, we discuss global asymptotic synchronization in the mean square for uncertain stochastic discrete-time neural networks with mixed delays. A numerical example is given in Section 4 to illustrate the effectiveness and feasibility of our results. Finally, Section 5 gives the conclusions.

Notations Throughout this paper, ℝ, ℝⁿ, and ℝ^{n×m} are used to denote, respectively, the real number field, the real vector space of dimension n, and the set of all n×m real matrices. Eₙ denotes an n-dimensional identity matrix. The set of all integers in the closed interval [a,b] is denoted by N[a,b], where a and b are integers with a < b. We use ℕ to denote the set of all positive integers. Also, C(N[a,b], ℝ) represents the set of all functions ϕ: N[a,b] → ℝ. The superscript 'T' represents the transpose of a matrix or a vector, and the notation X ≥ Y (respectively, X > Y) means that X − Y is a positive semi-definite matrix (respectively, a positive definite matrix), where X and Y are symmetric matrices. The notation |⋅| denotes the Euclidean norm of a vector, and Eₘ refers to an m-dimensional identity matrix. Let (Ω, F, P) be a complete probability space with a natural filtration {Fₜ}_{t≥0} satisfying the usual conditions (i.e., the filtration contains all P-null sets and is right continuous) and generated by the Brownian motion {ω(s): 0 ≤ s ≤ t}. E{⋅} stands for the mathematical expectation operator with respect to the given probability measure P. The asterisk ∗ in a matrix is used to denote a term that is induced by symmetry. Matrices, if not explicitly specified, are assumed to have compatible dimensions.

2 Model formulation and preliminaries

It is well known that most of the synchronization methods for chaotic systems are of the master-slave (drive-response) type. One system, called the slave (or response) system, is driven by another, called the master (or drive) system, so that the behavior of the slave system is influenced by the master system; that is, the master system is independent of the slave system, while the slave system is driven by the master system. In this paper, our aim is to design the controller such that the behavior of the slave system synchronizes to that of the master system.

Now, let us consider a general class of n-neuron discrete-time neural networks with time-varying and distributed delays which is described by the following difference equations:

$$\begin{aligned} x(k+1) = {} & (A+\Delta A(k))x(k) + (\tilde{A}+\Delta\tilde{A}(k))x(k-\tau(k)) + (B+\Delta B(k))\tilde{f}(x(k)) \\ & + (\tilde{B}+\Delta\tilde{B}(k))\tilde{g}(x(k-\tau(k))) + (C+\Delta C(k))\sum_{s=1}^{\tau(k)}\tilde{h}(x(k-s)) + I(k), \end{aligned}$$
(1)

that is,

$$\begin{aligned} x_i(k+1) = {} & (a_i+\Delta a_i(k))x_i(k) + (\tilde{a}_i+\Delta\tilde{a}_i(k))x_i(k-\tau(k)) + \sum_{j=1}^{n}(b_{ij}+\Delta b_{ij}(k))\tilde{f}_j(x_j(k)) \\ & + \sum_{j=1}^{n}(\tilde{b}_{ij}+\Delta\tilde{b}_{ij}(k))\tilde{g}_j(x_j(k-\tau(k))) + \sum_{s=1}^{\tau(k)}\sum_{j=1}^{n}(c_{ij}+\Delta c_{ij}(k))\tilde{h}_j(x_j(k-s)) + I_i(k), \\ & \qquad i = 1, 2, \ldots, n, \end{aligned}$$

where x(k) = (x₁(k), x₂(k), …, xₙ(k))ᵀ ∈ ℝⁿ is the state vector associated with the n neurons. The positive integer τ(k) corresponds to the time-varying delay satisfying

$$1 \le \tau_m \le \tau(k) \le \tau_M, \quad k \in \mathbb{N},$$
(2)

where τ m and τ M are known positive integers. A=diag( a 1 , a 2 ,, a n ) and A ˜ =diag( a ˜ 1 , a ˜ 2 ,, a ˜ n ) are real diagonal constant matrices (corresponding to the state feedback and the delayed state-feedback coefficient matrices, respectively) with | a i |<1, | a ˜ i |<1, i=1,2,,n, B= [ b i j ] n × n , B ˜ = [ b ˜ i j ] n × n , and C= [ c i j ] n × n are the connection weight matrix, the discretely delayed connection weight matrix and the distributively delayed connection weight matrix, respectively. The functions f ˜ (x(k))= ( f ˜ 1 ( x 1 ( k ) ) , f ˜ 2 ( x 2 ( k ) ) , , f ˜ n ( x n ( k ) ) ) T , g ˜ (x(kτ(k)))= ( g ˜ 1 ( x 1 ( k τ ( k ) ) ) , g ˜ 2 ( x 2 ( k τ ( k ) ) ) , , g ˜ n ( x n ( k τ ( k ) ) ) ) T and h ˜ (x(k))= ( h ˜ 1 ( x 1 ( k ) ) , h ˜ 2 ( x 2 ( k ) ) , , h ˜ n ( x n ( k ) ) ) T in R n denote the neuron activation functions. The real vector I(k)= ( I 1 ( k ) , I 2 ( k ) , , I n ( k ) ) T is the exogenous input.

Remark 1 The distributed-delay term $\sum_{s=1}^{\tau(k)}\tilde{h}(x(k-s))$, in discrete-time form, is included in the model (1). It can be interpreted as the discrete analog of the following well-studied continuous-time network with time-varying and distributed delays:

$$\dot{x}(t) = -Ax(t) + Bf(x(t)) + Cg(x(t-\tau(t))) + D\int_{t-\tau(t)}^{t} h(x(s))\,ds + I(t).$$

Obviously, such a distributed delay term will bring about an additional difficulty in our analysis.
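To make the model concrete, here is a minimal simulation sketch (ours, not from the paper) of a nominal version of system (1), with all uncertainty terms Δ(⋅) set to zero and I(k) = 0. The matrices, tanh activations, and delay law below are illustrative placeholders that echo Example 1 of Section 4.

```python
import numpy as np

# Minimal sketch: simulate a nominal version of master system (1),
# with Delta A(k) = ... = Delta C(k) = 0 and I(k) = 0 (illustrative data).
n = 3
rng = np.random.default_rng(0)
A  = np.diag([0.6, 0.2, 0.4])            # state-feedback matrix A
At = np.diag([0.3, 0.5, 0.1])            # delayed state-feedback matrix A~
B  = 0.1 * rng.standard_normal((n, n))   # connection weights (illustrative)
Bt = 0.1 * rng.standard_normal((n, n))   # discretely delayed weights
C  = 0.1 * rng.standard_normal((n, n))   # distributively delayed weights
f = g = h = np.tanh                      # sector-bounded activations
tau = lambda k: 4 + (-1) ** k            # time-varying delay, 3 <= tau(k) <= 5
tau_M = 5

# History buffer holding x(k - tau_M), ..., x(k); the current state is last.
x_hist = [0.1 * rng.standard_normal(n) for _ in range(tau_M + 1)]
for k in range(100):
    x, x_d = x_hist[-1], x_hist[-1 - tau(k)]
    dist = sum(h(x_hist[-1 - s]) for s in range(1, tau(k) + 1))  # distributed-delay sum
    x_next = A @ x + At @ x_d + B @ f(x) + Bt @ g(x_d) + C @ dist
    x_hist = x_hist[1:] + [x_next]
print(x_hist[-1])   # state after 100 steps
```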

For the activation functions in the model (1), we have the following assumptions:

Assumption 1 The activation functions f̃ᵢ(⋅), g̃ᵢ(⋅), and h̃ᵢ(⋅) (i = 1, 2, …, n) in model (1) are all continuous and bounded.

Assumption 2 [10, 21, 23, 26]

The activation functions f̃ᵢ(⋅), g̃ᵢ(⋅), and h̃ᵢ(⋅) (i = 1, 2, …, n) in model (1) satisfy

$$\theta_i^{-} \le \frac{\tilde{f}_i(s_1)-\tilde{f}_i(s_2)}{s_1-s_2} \le \theta_i^{+}, \quad \forall s_1, s_2 \in \mathbb{R},\ s_1 \ne s_2,$$
(3)
$$\iota_i^{-} \le \frac{\tilde{g}_i(s_1)-\tilde{g}_i(s_2)}{s_1-s_2} \le \iota_i^{+}, \quad \forall s_1, s_2 \in \mathbb{R},\ s_1 \ne s_2,$$
(4)
$$\delta_i^{-} \le \frac{\tilde{h}_i(s_1)-\tilde{h}_i(s_2)}{s_1-s_2} \le \delta_i^{+}, \quad \forall s_1, s_2 \in \mathbb{R},\ s_1 \ne s_2,$$
(5)

where θᵢ⁻, θᵢ⁺, ιᵢ⁻, ιᵢ⁺, δᵢ⁻, δᵢ⁺ are some constants.

Remark 2 The constants θᵢ⁻, θᵢ⁺, ιᵢ⁻, ιᵢ⁺, δᵢ⁻, δᵢ⁺ in Assumption 2 are allowed to be positive, negative, or zero. Hence, this assumption on the activation functions is less conservative than the usual sigmoid-type conditions.
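For instance, the tanh-type activations used later in Example 1 satisfy (3) with θᵢ⁻ = 0 and θᵢ⁺ equal to the slope at the origin. A quick numerical check (ours, not from the paper):

```python
import numpy as np

# Difference quotient of f(s) = tanh(0.8 s) stays in the sector [0, 0.8],
# i.e., theta^- = 0 and theta^+ = 0.8 in condition (3).
rng = np.random.default_rng(1)
s1, s2 = rng.standard_normal(100000), rng.standard_normal(100000)
quot = (np.tanh(0.8 * s1) - np.tanh(0.8 * s2)) / (s1 - s2)
print(quot.min(), quot.max())   # expected: close to 0 and 0.8, inside [0, 0.8]
```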

The time-varying matrices ΔA(k), ΔÃ(k), ΔB(k), ΔB̃(k), and ΔC(k) in model (1) represent the parameter uncertainties, which are generated by modeling errors and are assumed to satisfy the following admissible condition:

$$\begin{bmatrix} \Delta A(k) & \Delta\tilde{A}(k) & \Delta B(k) & \Delta\tilde{B}(k) & \Delta C(k) \end{bmatrix} = MH(k)\begin{bmatrix} W_1 & W_2 & W_3 & W_4 & W_5 \end{bmatrix},$$
(6)

in which M and Wᵢ (i = 1, 2, …, 5) are known real constant matrices, and H(k) is an unknown time-varying matrix-valued function subject to the following condition:

$$H^{T}(k)H(k) \le E_n, \quad \forall k \in \mathbb{N}.$$
(7)

Also, the initial conditions of model (1) are given by

$$x_i(s) = \phi_i(s) \in C(N[-\tau_M, 0], \mathbb{R}) \quad (i = 1, 2, \ldots, n).$$
(8)

In this paper, we consider model (1) as the master system, and the response system is

$$\begin{aligned} y(k+1) = {} & (A+\Delta A(k))y(k) + (\tilde{A}+\Delta\tilde{A}(k))y(k-\tau(k)) + (B+\Delta B(k))\tilde{f}(y(k)) \\ & + (\tilde{B}+\Delta\tilde{B}(k))\tilde{g}(y(k-\tau(k))) + (C+\Delta C(k))\sum_{s=1}^{\tau(k)}\tilde{h}(y(k-s)) + I(k) \\ & + u(k) + \sigma(k,e(k),e(k-\tau(k)))\omega(k), \end{aligned}$$
(9)

namely,

$$\begin{aligned} y_i(k+1) = {} & (a_i+\Delta a_i(k))y_i(k) + (\tilde{a}_i+\Delta\tilde{a}_i(k))y_i(k-\tau(k)) + \sum_{j=1}^{n}(b_{ij}+\Delta b_{ij}(k))\tilde{f}_j(y_j(k)) \\ & + \sum_{j=1}^{n}(\tilde{b}_{ij}+\Delta\tilde{b}_{ij}(k))\tilde{g}_j(y_j(k-\tau(k))) + \sum_{s=1}^{\tau(k)}\sum_{j=1}^{n}(c_{ij}+\Delta c_{ij}(k))\tilde{h}_j(y_j(k-s)) + I_i(k) \\ & + u_i(k) + \sum_{j=1}^{n}\sigma_{ij}(k,e_i(k),e_i(k-\tau(k)))\omega_j(k), \qquad i = 1, 2, \ldots, n, \end{aligned}$$

where the related parameters and the activation functions are all the same as in model (1), and u(k) is the controller to be designed later, which is our main aim. e(k) = (e₁(k), e₂(k), …, eₙ(k))ᵀ ∈ ℝⁿ is the error state, and ω(k) = (ω₁(k), ω₂(k), …, ωₙ(k))ᵀ is an n-dimensional Brownian motion defined on a complete probability space (Ω, F, P) with

$$E\{\omega(k)\} = 0, \qquad E\{\omega^{2}(k)\} = 1, \qquad E\{\omega(i)\omega(j)\} = 0 \quad (i \ne j),$$
(10)

and σ: ℝ × ℝⁿ × ℝⁿ → ℝⁿ is a continuous function with σ(⋅, 0, 0) = 0 and

$$\sigma^{T}(k,x,y)\sigma(k,x,y) \le \rho_1 x^{T}x + \rho_2 y^{T}y, \quad \forall x, y \in \mathbb{R}^{n},$$
(11)

where ρ 1 >0 and ρ 2 >0 are known constant scalars.

The initial conditions of the slave system model (9) are given by

$$y_i(s) = \psi_i(s) \in L^{2}_{F_0}(N[-\tau_M, 0], \mathbb{R});$$
(12)

here L²_{F₀}(N[−τ_M, 0], ℝ) denotes the family of all F₀-measurable C(N[−τ_M, 0], ℝ)-valued random variables satisfying sup_{−τ_M ≤ s ≤ 0} E{|ψᵢ(s)|²} < ∞.

Now let us define the error state as e(k) = y(k) − x(k). Subtracting (1) from (9) yields the following error dynamical system:

$$\begin{aligned} e(k+1) = {} & (A+\Delta A(k))e(k) + (\tilde{A}+\Delta\tilde{A}(k))e(k-\tau(k)) + (B+\Delta B(k))f(e(k)) \\ & + (\tilde{B}+\Delta\tilde{B}(k))g(e(k-\tau(k))) + (C+\Delta C(k))\sum_{s=1}^{\tau(k)} h(e(k-s)) + u(k) \\ & + \sigma(k,e(k),e(k-\tau(k)))\omega(k), \end{aligned}$$
(13)

where f(e(k)) = f̃(e(k)+x(k)) − f̃(x(k)), g(e(k−τ(k))) = g̃(e(k−τ(k))+x(k−τ(k))) − g̃(x(k−τ(k))), and h(e(k−τ(k))) = h̃(e(k−τ(k))+x(k−τ(k))) − h̃(x(k−τ(k))). Denote the error state vector by e(k) = (e₁(k), e₂(k), …, eₙ(k))ᵀ ∈ ℝⁿ. Correspondingly, from (8) and (12), the initial condition of the error system (13) is φ(s) = ψ(s) − ϕ(s) = (ψ₁(s)−ϕ₁(s), …, ψₙ(s)−ϕₙ(s))ᵀ. Here it is necessary to assume that φ(s) ∈ L²_{F₀}(N[−τ_M, 0], ℝ).

Now we first give the definition of global robust asymptotic synchronization in the mean square of the master system (1) and the slave system (9).

Definition 1 System (1) and system (9) are said to be globally asymptotically synchronized in the mean square if, for all parameter uncertainties satisfying the admissible conditions (6) and (7), the trajectories of system (1) and system (9) satisfy

$$\lim_{k \to +\infty} E\{|y(k) - x(k)|^{2}\} = 0.$$

That is, if the error system (13) is globally robustly asymptotically stable in the mean square, then system (1) and system (9) are globally robustly asymptotically synchronized in the mean square.

Remark 3 From Assumptions 1 and 2 it can be derived that the error system (13) has at least one equilibrium point. Our main aim is to design the controller u(k) such that the equilibrium point of the error system (13) is globally robustly asymptotically stable in the mean square.

In many real applications, we are interested in designing a memoryless state-feedback controller as

u(k)=Ge(k),
(14)

where G R n × n is a constant gain matrix.

Moreover, as a special case where information on the size of the time-varying delay τ(k) is available, we can also consider a discretely delayed-feedback controller of the following form:

$$u(k) = Ge(k) + G_{\tau}e(k-\tau(k)).$$
(15)

Moreover, we can design a more general form of a delayed-feedback controller as

$$u(k) = Ge(k) + G_{\tau}e(k-\tau(k)) + G_{z}\sum_{s=1}^{\tau(k)} e(k-s).$$
(16)

Although the memoryless controller (14) has the advantage of easy implementation, its performance cannot be better than that of a delayed-feedback controller, which utilizes the available information on the size of the time-varying delay. Therefore, in this respect, the controller (16) can be regarded as a compromise between performance improvement and implementation simplicity; the three feedback laws are sketched in code below.
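The following sketch (ours, not from the paper) shows how the three feedback laws (14)-(16) could be implemented once the gains G, G_τ, G_z have been found, e.g., from the LMI conditions of Section 3. The buffer e_hist is an assumed data structure holding past error states.

```python
import numpy as np

# Sketch of the feedback laws (14)-(16). G, G_tau, G_z are gain matrices
# assumed given; e_hist holds e(k - tau_M), ..., e(k), current error last.
def controller(e_hist, k, tau, G, G_tau=None, G_z=None):
    u = G @ e_hist[-1]                                    # memoryless term, (14)
    if G_tau is not None:
        u = u + G_tau @ e_hist[-1 - tau(k)]               # delayed term, (15)
    if G_z is not None:                                   # distributed term, (16)
        u = u + G_z @ sum(e_hist[-1 - s] for s in range(1, tau(k) + 1))
    return u
```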

Now let u(k) = Ge(k) + G_τe(k−τ(k)) + G_z ∑_{s=1}^{τ(k)} e(k−s). Substituting it into (13) and denoting

$$\begin{gathered} A(k) = G + A + \Delta A(k), \qquad \tilde{A}(k) = G_{\tau} + \tilde{A} + \Delta\tilde{A}(k), \qquad B(k) = B + \Delta B(k), \\ \tilde{B}(k) = \tilde{B} + \Delta\tilde{B}(k), \qquad C(k) = C + \Delta C(k), \end{gathered}$$
(17)

it follows that

$$\begin{aligned} e(k+1) = {} & A(k)e(k) + \tilde{A}(k)e(k-\tau(k)) + B(k)f(e(k)) + \tilde{B}(k)g(e(k-\tau(k))) \\ & + G_{z}\sum_{s=1}^{\tau(k)} e(k-s) + C(k)\sum_{s=1}^{\tau(k)} h(e(k-s)) + \sigma(k,e(k),e(k-\tau(k)))\omega(k). \end{aligned}$$
(18)

To address this problem, we still need several lemmas that will be used later.

Lemma 1 Let P, Q, and H be real matrices of appropriate dimensions, with H satisfying HᵀH ≤ E. Then the following inequality:

$$PHQ + (PHQ)^{T} \le \mu^{-1}PP^{T} + \mu Q^{T}Q$$

holds for any scalar μ>0.

Lemma 2 (Schur complement)

Given constant matrices P, Q, and R, where Pᵀ = P and Qᵀ = Q, the matrix inequality

$$\begin{bmatrix} P & R \\ R^{T} & -Q \end{bmatrix} < 0$$

is equivalent to the following conditions:

$$Q > 0, \qquad P + RQ^{-1}R^{T} < 0.$$
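Lemma 2 can be spot-checked numerically; the construction below (ours, not from the paper) forces Q > 0 and P + RQ⁻¹Rᵀ < 0 on a random instance and then verifies that the block matrix is negative definite.

```python
import numpy as np

# Numerical spot check of Lemma 2 (Schur complement).
rng = np.random.default_rng(2)
n = 4
S = rng.standard_normal((n, n))
Q = S.T @ S + np.eye(n)                          # Q > 0
R = rng.standard_normal((n, n))
P = -(R @ np.linalg.inv(Q) @ R.T + np.eye(n))    # P + R Q^{-1} R^T = -I < 0
block = np.block([[P, R], [R.T, -Q]])
print(np.linalg.eigvalsh(block).max())           # expected: negative
```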

Lemma 3 [27, 28]

Let M ∈ ℝ^{n×n} be a positive semi-definite matrix. If the series concerned are convergent, then the inequality

$$\Biggl(\sum_{i=1}^{m}\alpha_i x_i\Biggr)^{T} M \Biggl(\sum_{i=1}^{m}\alpha_i x_i\Biggr) \le \Biggl(\sum_{i=1}^{m}\alpha_i\Biggr)\sum_{i=1}^{m}\alpha_i x_i^{T}Mx_i$$

holds for any xᵢ ∈ ℝⁿ and αᵢ ≥ 0 (i = 1, 2, …, m).
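Lemma 3 likewise admits a quick numerical spot check (ours, not from the paper) on a random instance:

```python
import numpy as np

# Numerical spot check of Lemma 3.
rng = np.random.default_rng(3)
n, m = 4, 7
S = rng.standard_normal((n, n))
M = S.T @ S                                    # positive semi-definite M
alpha = rng.random(m)                          # alpha_i >= 0
X = rng.standard_normal((m, n))                # rows are the vectors x_i
v = (alpha[:, None] * X).sum(axis=0)           # sum_i alpha_i x_i
lhs = v @ M @ v
rhs = alpha.sum() * sum(a * (x @ M @ x) for a, x in zip(alpha, X))
print(lhs <= rhs + 1e-9)                       # expected: True
```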

3 Main results and proofs

In this section, some sufficient criteria are presented for the global asymptotic synchronization in the mean square of the neural networks (1) and (9).

Before our main work, for presentation convenience, in the following, we denote

$$\Theta_1 = \operatorname{diag}\bigl(\theta_1^{-}\theta_1^{+}, \theta_2^{-}\theta_2^{+}, \ldots, \theta_n^{-}\theta_n^{+}\bigr), \qquad \Theta_2 = \operatorname{diag}\Bigl(\tfrac{\theta_1^{-}+\theta_1^{+}}{2}, \tfrac{\theta_2^{-}+\theta_2^{+}}{2}, \ldots, \tfrac{\theta_n^{-}+\theta_n^{+}}{2}\Bigr),$$
(19)
$$\Upsilon_1 = \operatorname{diag}\bigl(\iota_1^{-}\iota_1^{+}, \iota_2^{-}\iota_2^{+}, \ldots, \iota_n^{-}\iota_n^{+}\bigr), \qquad \Upsilon_2 = \operatorname{diag}\Bigl(\tfrac{\iota_1^{-}+\iota_1^{+}}{2}, \tfrac{\iota_2^{-}+\iota_2^{+}}{2}, \ldots, \tfrac{\iota_n^{-}+\iota_n^{+}}{2}\Bigr),$$
(20)
$$\Sigma_1 = \operatorname{diag}\bigl(\delta_1^{-}\delta_1^{+}, \delta_2^{-}\delta_2^{+}, \ldots, \delta_n^{-}\delta_n^{+}\bigr), \qquad \Sigma_2 = \operatorname{diag}\Bigl(\tfrac{\delta_1^{-}+\delta_1^{+}}{2}, \tfrac{\delta_2^{-}+\delta_2^{+}}{2}, \ldots, \tfrac{\delta_n^{-}+\delta_n^{+}}{2}\Bigr).$$
(21)

Then, along the same lines as in [21, 23], from Assumption 2 we can easily get, for i = 1, 2, …, n,

$$\begin{aligned} &\bigl(f(e(k)) - \theta_i^{-}e(k)\bigr)^{T}\bigl(f(e(k)) - \theta_i^{+}e(k)\bigr) \le 0, \\ &\bigl(g(e(k-\tau(k))) - \iota_i^{-}e(k-\tau(k))\bigr)^{T}\bigl(g(e(k-\tau(k))) - \iota_i^{+}e(k-\tau(k))\bigr) \le 0, \\ &\bigl(h(e(k)) - \delta_i^{-}e(k)\bigr)^{T}\bigl(h(e(k)) - \delta_i^{+}e(k)\bigr) \le 0, \end{aligned}$$

which are equivalent to

$$\begin{bmatrix} e(k) \\ f(e(k)) \end{bmatrix}^{T} \begin{bmatrix} \theta_i^{-}\theta_i^{+}e_ie_i^{T} & -\frac{\theta_i^{-}+\theta_i^{+}}{2}e_ie_i^{T} \\ -\frac{\theta_i^{-}+\theta_i^{+}}{2}e_ie_i^{T} & e_ie_i^{T} \end{bmatrix} \begin{bmatrix} e(k) \\ f(e(k)) \end{bmatrix} \le 0,$$
(22)
$$\begin{bmatrix} e(k-\tau(k)) \\ g(e(k-\tau(k))) \end{bmatrix}^{T} \begin{bmatrix} \iota_i^{-}\iota_i^{+}e_ie_i^{T} & -\frac{\iota_i^{-}+\iota_i^{+}}{2}e_ie_i^{T} \\ -\frac{\iota_i^{-}+\iota_i^{+}}{2}e_ie_i^{T} & e_ie_i^{T} \end{bmatrix} \begin{bmatrix} e(k-\tau(k)) \\ g(e(k-\tau(k))) \end{bmatrix} \le 0,$$
(23)
$$\begin{bmatrix} e(k) \\ h(e(k)) \end{bmatrix}^{T} \begin{bmatrix} \delta_i^{-}\delta_i^{+}e_ie_i^{T} & -\frac{\delta_i^{-}+\delta_i^{+}}{2}e_ie_i^{T} \\ -\frac{\delta_i^{-}+\delta_i^{+}}{2}e_ie_i^{T} & e_ie_i^{T} \end{bmatrix} \begin{bmatrix} e(k) \\ h(e(k)) \end{bmatrix} \le 0,$$
(24)

where i=1,2,,n and e i represents the unit column vector having ‘1’ as the element on its i th row and zeros elsewhere.

Multiplying both sides of (22), (23), and (24) by λᵢ, γᵢ, and νᵢ, respectively, and summing from i = 1 to n, it follows that

$$\begin{bmatrix} e(k) \\ f(e(k)) \end{bmatrix}^{T} \begin{bmatrix} \Lambda\Theta_1 & -\Lambda\Theta_2 \\ -\Lambda\Theta_2 & \Lambda \end{bmatrix} \begin{bmatrix} e(k) \\ f(e(k)) \end{bmatrix} \le 0,$$
(25)
$$\begin{bmatrix} e(k-\tau(k)) \\ g(e(k-\tau(k))) \end{bmatrix}^{T} \begin{bmatrix} \Gamma\Upsilon_1 & -\Gamma\Upsilon_2 \\ -\Gamma\Upsilon_2 & \Gamma \end{bmatrix} \begin{bmatrix} e(k-\tau(k)) \\ g(e(k-\tau(k))) \end{bmatrix} \le 0,$$
(26)
$$\begin{bmatrix} e(k) \\ h(e(k)) \end{bmatrix}^{T} \begin{bmatrix} \Xi\Sigma_1 & -\Xi\Sigma_2 \\ -\Xi\Sigma_2 & \Xi \end{bmatrix} \begin{bmatrix} e(k) \\ h(e(k)) \end{bmatrix} \le 0.$$
(27)

The main results are as follows.

Theorem 1 Under Assumptions 1 and 2, the discrete-time neural networks (1) and (9) are globally robustly asymptotically synchronized in the mean square if there exist three positive definite matrices P, Q, and R, three diagonal matrices Λ = diag{λ₁, λ₂, …, λₙ} > 0, Γ = diag{γ₁, γ₂, …, γₙ} > 0, and Ξ = diag{ν₁, ν₂, …, νₙ} > 0, and two scalars λ* > 0 and μ > 0 such that the following LMIs hold:

$$P < \lambda^{*}E_n,$$
(28)

and

$$\Phi := \begin{bmatrix} \Pi_1 & \mu W_1^{T}W_2 & 0 & \Omega_1 & \mu W_1^{T}W_4 & \Xi\Sigma_2 & \mu W_1^{T}W_5 & (G+A)^{T}P & 0 \\ * & \Pi_2 & 0 & \mu W_2^{T}W_3 & \Omega_2 & 0 & \mu W_2^{T}W_5 & (G_{\tau}+\tilde{A})^{T}P & 0 \\ * & * & 0 & 0 & 0 & 0 & 0 & G_z^{T}P & 0 \\ * & * & * & \Pi_3 & \mu W_3^{T}W_4 & 0 & \mu W_3^{T}W_5 & B^{T}P & 0 \\ * & * & * & * & \Pi_4 & 0 & \mu W_4^{T}W_5 & \tilde{B}^{T}P & 0 \\ * & * & * & * & * & \Pi_5 & 0 & 0 & 0 \\ * & * & * & * & * & * & \Pi_6 & C^{T}P & 0 \\ * & * & * & * & * & * & * & -P & PM \\ * & * & * & * & * & * & * & * & -\mu E_n \end{bmatrix} < 0,$$
(29)

where

$$\begin{aligned} \Pi_1 &= -P + (\tau_M-\tau_m+1)Q + \lambda^{*}\rho_1 E_n - \Lambda\Theta_1 - \Xi\Sigma_1 + \mu W_1^{T}W_1, \\ \Pi_2 &= -Q + \lambda^{*}\rho_2 E_n - \Gamma\Upsilon_1 + \mu W_2^{T}W_2, \qquad \Pi_3 = -\Lambda + \mu W_3^{T}W_3, \qquad \Pi_4 = -\Gamma + \mu W_4^{T}W_4, \\ \Pi_5 &= -\Xi + \tau_M(\tau_M-\tau_m+1)R, \qquad \Pi_6 = -\frac{1}{\tau_M}R + \mu W_5^{T}W_5, \\ \Omega_1 &= \Lambda\Theta_2 + \mu W_1^{T}W_3, \qquad \Omega_2 = \Gamma\Upsilon_2 + \mu W_2^{T}W_4. \end{aligned}$$
(30)

Proof To verify that the neural networks (1) and (9) are globally asymptotically synchronized in the mean square, a Lyapunov-Krasovskii functional V is defined as follows:

V(k)= V 1 (k)+ V 2 (k)+ V 3 (k)+ V 4 (k)+ V 5 (k),
(31)

where

$$V_1(k) = e^{T}(k)Pe(k),$$
(32)
$$V_2(k) = \sum_{i=k-\tau(k)}^{k-1} e^{T}(i)Qe(i),$$
(33)
$$V_3(k) = \sum_{j=\tau_m+1}^{\tau_M}\sum_{i=k-j+1}^{k-1} e^{T}(i)Qe(i),$$
(34)
$$V_4(k) = \sum_{j=1}^{\tau(k)}\sum_{i=k-j}^{k-1} h^{T}(e(i))Rh(e(i)),$$
(35)
$$V_5(k) = \sum_{s=\tau_m+1}^{\tau_M}\sum_{j=k-\tau_M+1}^{k-1}\sum_{i=j}^{k-1} h^{T}(e(i))Rh(e(i)).$$
(36)

Calculating the difference of V(k) along the trajectory of the model (18) and taking the mathematical expectation, one obtains

$$E\{\Delta V(k)\} = E\{\Delta V_1(k)\} + E\{\Delta V_2(k)\} + E\{\Delta V_3(k)\} + E\{\Delta V_4(k)\} + E\{\Delta V_5(k)\},$$
(37)

where

$$\begin{aligned} E\{\Delta V_1(k)\} = {} & E\{V_1(k+1)-V_1(k)\} \\ = {} & E\Biggl\{\Bigl[A(k)e(k) + \tilde{A}(k)e(k-\tau(k)) + B(k)f(e(k)) + \tilde{B}(k)g(e(k-\tau(k))) \\ & + G_z\sum_{s=1}^{\tau(k)}e(k-s) + C(k)\sum_{s=1}^{\tau(k)}h(e(k-s))\Bigr]^{T} P \Bigl[A(k)e(k) + \tilde{A}(k)e(k-\tau(k)) \\ & + B(k)f(e(k)) + \tilde{B}(k)g(e(k-\tau(k))) + G_z\sum_{s=1}^{\tau(k)}e(k-s) + C(k)\sum_{s=1}^{\tau(k)}h(e(k-s))\Bigr] \\ & + \sigma^{T}(k,e(k),e(k-\tau(k)))P\sigma(k,e(k),e(k-\tau(k))) - e^{T}(k)Pe(k)\Biggr\}, \end{aligned}$$
(38)
$$\begin{aligned} E\{\Delta V_2(k)\} = {} & E\{V_2(k+1)-V_2(k)\} = E\Biggl\{\sum_{i=k+1-\tau(k+1)}^{k} e^{T}(i)Qe(i) - \sum_{i=k-\tau(k)}^{k-1} e^{T}(i)Qe(i)\Biggr\} \\ = {} & E\Biggl\{e^{T}(k)Qe(k) - e^{T}(k-\tau(k))Qe(k-\tau(k)) + \sum_{i=k+1-\tau(k+1)}^{k-1} e^{T}(i)Qe(i) - \sum_{i=k+1-\tau(k)}^{k-1} e^{T}(i)Qe(i)\Biggr\} \\ = {} & E\Biggl\{e^{T}(k)Qe(k) - e^{T}(k-\tau(k))Qe(k-\tau(k)) + \sum_{i=k+1-\tau_m}^{k-1} e^{T}(i)Qe(i) \\ & + \sum_{i=k+1-\tau(k+1)}^{k-\tau_m} e^{T}(i)Qe(i) - \sum_{i=k+1-\tau(k)}^{k-1} e^{T}(i)Qe(i)\Biggr\} \\ \le {} & E\Biggl\{e^{T}(k)Qe(k) - e^{T}(k-\tau(k))Qe(k-\tau(k)) + \sum_{i=k-\tau_M+1}^{k-\tau_m} e^{T}(i)Qe(i)\Biggr\}, \end{aligned}$$
(39)
$$\begin{aligned} E\{\Delta V_3(k)\} = {} & E\{V_3(k+1)-V_3(k)\} = E\Biggl\{\sum_{j=\tau_m+1}^{\tau_M}\sum_{i=k-j+2}^{k} e^{T}(i)Qe(i) - \sum_{j=\tau_m+1}^{\tau_M}\sum_{i=k-j+1}^{k-1} e^{T}(i)Qe(i)\Biggr\} \\ = {} & E\Biggl\{\sum_{j=\tau_m+1}^{\tau_M}\bigl[e^{T}(k)Qe(k) - e^{T}(k-j+1)Qe(k-j+1)\bigr]\Biggr\} \\ = {} & E\Biggl\{(\tau_M-\tau_m)e^{T}(k)Qe(k) - \sum_{i=\tau_m+1}^{\tau_M} e^{T}(k-i+1)Qe(k-i+1)\Biggr\} \\ = {} & E\Biggl\{(\tau_M-\tau_m)e^{T}(k)Qe(k) - \sum_{i=k-\tau_M+1}^{k-\tau_m} e^{T}(i)Qe(i)\Biggr\}, \end{aligned}$$
(40)
$$\begin{aligned} E\{\Delta V_4(k)\} = {} & E\{V_4(k+1)-V_4(k)\} = E\Biggl\{\sum_{j=1}^{\tau(k+1)}\sum_{i=k-j+1}^{k} h^{T}(e(i))Rh(e(i)) - \sum_{j=1}^{\tau(k)}\sum_{i=k-j}^{k-1} h^{T}(e(i))Rh(e(i))\Biggr\} \\ \le {} & E\Biggl\{\sum_{j=1}^{\tau_M}\sum_{i=k-j+1}^{k} h^{T}(e(i))Rh(e(i)) - \sum_{j=1}^{\tau(k)}\sum_{i=k-j}^{k-1} h^{T}(e(i))Rh(e(i))\Biggr\} \\ \le {} & E\Biggl\{\sum_{j=1}^{\tau_M} h^{T}(e(k))Rh(e(k)) + \sum_{j=\tau_m+1}^{\tau_M}\sum_{i=k-j+1}^{k-1} h^{T}(e(i))Rh(e(i)) - \sum_{j=1}^{\tau(k)} h^{T}(e(k-j))Rh(e(k-j))\Biggr\} \\ \le {} & E\Biggl\{\tau_M h^{T}(e(k))Rh(e(k)) + \sum_{j=\tau_m+1}^{\tau_M}\sum_{i=k-\tau_M+1}^{k-1} h^{T}(e(i))Rh(e(i)) \\ & - \frac{1}{\tau_M}\Biggl(\sum_{j=1}^{\tau(k)} h(e(k-j))\Biggr)^{T} R \Biggl(\sum_{j=1}^{\tau(k)} h(e(k-j))\Biggr)\Biggr\}, \end{aligned}$$
(41)

and

$$\begin{aligned} E\{\Delta V_5(k)\} = {} & E\{V_5(k+1)-V_5(k)\} \\ = {} & E\Biggl\{\sum_{s=\tau_m+1}^{\tau_M}\sum_{j=k-\tau_M+2}^{k}\sum_{i=j}^{k} h^{T}(e(i))Rh(e(i)) - \sum_{s=\tau_m+1}^{\tau_M}\sum_{j=k-\tau_M+1}^{k-1}\sum_{i=j}^{k-1} h^{T}(e(i))Rh(e(i))\Biggr\} \\ = {} & E\Biggl\{\sum_{s=\tau_m+1}^{\tau_M}\sum_{j=k-\tau_M+1}^{k-1}\sum_{i=j+1}^{k} h^{T}(e(i))Rh(e(i)) - \sum_{s=\tau_m+1}^{\tau_M}\sum_{j=k-\tau_M+1}^{k-1}\sum_{i=j}^{k-1} h^{T}(e(i))Rh(e(i))\Biggr\} \\ = {} & E\Biggl\{\sum_{j=\tau_m+1}^{\tau_M}\sum_{i=k-\tau_M+1}^{k-1}\bigl[h^{T}(e(k))Rh(e(k)) - h^{T}(e(i))Rh(e(i))\bigr]\Biggr\} \\ = {} & E\Biggl\{\tau_M(\tau_M-\tau_m)h^{T}(e(k))Rh(e(k)) - \sum_{j=\tau_m+1}^{\tau_M}\sum_{i=k-\tau_M+1}^{k-1} h^{T}(e(i))Rh(e(i))\Biggr\}. \end{aligned}$$
(42)

The last inequality in (41) follows from Lemma 3.

On the other hand, from (11) and (28), we have

$$\begin{aligned} \sigma^{T}(k,e(k),e(k-\tau(k)))P\sigma(k,e(k),e(k-\tau(k))) &\le \lambda_{\max}(P)\,\sigma^{T}(k,e(k),e(k-\tau(k)))\sigma(k,e(k),e(k-\tau(k))) \\ &\le \lambda^{*}\bigl(\rho_1 e^{T}(k)e(k) + \rho_2 e^{T}(k-\tau(k))e(k-\tau(k))\bigr). \end{aligned}$$
(43)

Substituting (38)-(43) into (37) yields

$$E\{\Delta V(k)\} \le E\bigl\{\eta^{T}(k)\Phi_1\eta(k) + \eta^{T}(k)L^{T}(k)PL(k)\eta(k)\bigr\},$$
(44)

where

$$\begin{gathered} \eta(k) = \Biggl[e^{T}(k),\ e^{T}(k-\tau(k)),\ \Biggl(\sum_{j=1}^{\tau(k)}e(k-j)\Biggr)^{T},\ f^{T}(e(k)),\ g^{T}(e(k-\tau(k))),\ h^{T}(e(k)),\ \Biggl(\sum_{j=1}^{\tau(k)}h(e(k-j))\Biggr)^{T}\Biggr]^{T}, \\ L(k) = \bigl[A(k),\ \tilde{A}(k),\ G_z,\ B(k),\ \tilde{B}(k),\ 0,\ C(k)\bigr], \\ \Phi_1 = \operatorname{diag}\Bigl(\hat{\Pi}_1,\ \hat{\Pi}_2,\ 0,\ 0,\ 0,\ \hat{\Pi}_3,\ -\tfrac{1}{\tau_M}R\Bigr), \end{gathered}$$

with Π̂₁ = −P + (τ_M − τ_m + 1)Q + λ*ρ₁Eₙ, Π̂₂ = −Q + λ*ρ₂Eₙ, Π̂₃ = τ_M(τ_M − τ_m + 1)R.

From (25), (26), and (27), it follows that

$$\begin{aligned} E\{\Delta V(k)\} \le {} & E\Biggl\{\eta^{T}(k)\Phi_1\eta(k) + \eta^{T}(k)L^{T}(k)PL(k)\eta(k) - \begin{bmatrix} e(k) \\ f(e(k)) \end{bmatrix}^{T}\begin{bmatrix} \Lambda\Theta_1 & -\Lambda\Theta_2 \\ -\Lambda\Theta_2 & \Lambda \end{bmatrix}\begin{bmatrix} e(k) \\ f(e(k)) \end{bmatrix} \\ & - \begin{bmatrix} e(k) \\ h(e(k)) \end{bmatrix}^{T}\begin{bmatrix} \Xi\Sigma_1 & -\Xi\Sigma_2 \\ -\Xi\Sigma_2 & \Xi \end{bmatrix}\begin{bmatrix} e(k) \\ h(e(k)) \end{bmatrix} \\ & - \begin{bmatrix} e(k-\tau(k)) \\ g(e(k-\tau(k))) \end{bmatrix}^{T}\begin{bmatrix} \Gamma\Upsilon_1 & -\Gamma\Upsilon_2 \\ -\Gamma\Upsilon_2 & \Gamma \end{bmatrix}\begin{bmatrix} e(k-\tau(k)) \\ g(e(k-\tau(k))) \end{bmatrix}\Biggr\} \\ = {} & E\bigl\{\eta^{T}(k)\bigl[\Phi_2 + L^{T}(k)PL(k)\bigr]\eta(k)\bigr\}, \end{aligned}$$
(45)

where

$$\Phi_2 = \begin{bmatrix} \varkappa_1 & 0 & 0 & \Lambda\Theta_2 & 0 & \Xi\Sigma_2 & 0 \\ * & \varkappa_2 & 0 & 0 & \Gamma\Upsilon_2 & 0 & 0 \\ * & * & 0 & 0 & 0 & 0 & 0 \\ * & * & * & -\Lambda & 0 & 0 & 0 \\ * & * & * & * & -\Gamma & 0 & 0 \\ * & * & * & * & * & \varkappa_3 & 0 \\ * & * & * & * & * & * & -\frac{1}{\tau_M}R \end{bmatrix}$$

with ϰ₁ = −P + (τ_M − τ_m + 1)Q + λ*ρ₁Eₙ − ΛΘ₁ − ΞΣ₁, ϰ₂ = −Q + λ*ρ₂Eₙ − Γϒ₁, and ϰ₃ = −Ξ + τ_M(τ_M − τ_m + 1)R.

Denote

$$\hat{\Phi}_1(k) = \Phi_3 + \Delta\Phi_3(k),$$
(46)

which, by Lemma 2 (the Schur complement), satisfies Φ̂₁(k) < 0 if and only if Φ₂ + Lᵀ(k)PL(k) < 0, and where

$$\Phi_3 = \begin{bmatrix} \varkappa_1 & 0 & 0 & \Lambda\Theta_2 & 0 & \Xi\Sigma_2 & 0 & (G+A)^{T}P \\ * & \varkappa_2 & 0 & 0 & \Gamma\Upsilon_2 & 0 & 0 & (G_{\tau}+\tilde{A})^{T}P \\ * & * & 0 & 0 & 0 & 0 & 0 & G_z^{T}P \\ * & * & * & -\Lambda & 0 & 0 & 0 & B^{T}P \\ * & * & * & * & -\Gamma & 0 & 0 & \tilde{B}^{T}P \\ * & * & * & * & * & \varkappa_3 & 0 & 0 \\ * & * & * & * & * & * & -\frac{1}{\tau_M}R & C^{T}P \\ * & * & * & * & * & * & * & -P \end{bmatrix}, \qquad \Delta\Phi_3(k) = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & \Delta A^{T}(k)P \\ * & 0 & 0 & 0 & 0 & 0 & 0 & \Delta\tilde{A}^{T}(k)P \\ * & * & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & 0 & 0 & 0 & 0 & \Delta B^{T}(k)P \\ * & * & * & * & 0 & 0 & 0 & \Delta\tilde{B}^{T}(k)P \\ * & * & * & * & * & 0 & 0 & 0 \\ * & * & * & * & * & * & 0 & \Delta C^{T}(k)P \\ * & * & * & * & * & * & * & 0 \end{bmatrix}.$$
(47)

Let

$$\mathcal{P} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & P \end{bmatrix}^{T}, \qquad \mathcal{L}(k) = \begin{bmatrix} \Delta A(k) & \Delta\tilde{A}(k) & 0 & \Delta B(k) & \Delta\tilde{B}(k) & 0 & \Delta C(k) & 0 \end{bmatrix}, \qquad \mathcal{W} = \begin{bmatrix} W_1 & W_2 & 0 & W_3 & W_4 & 0 & W_5 & 0 \end{bmatrix}.$$

It follows easily from (6) and Lemma 1 that

$$\Delta\Phi_3(k) = \mathcal{P}\mathcal{L}(k) + \mathcal{L}^{T}(k)\mathcal{P}^{T} = \mathcal{P}MH(k)\mathcal{W} + \mathcal{W}^{T}H^{T}(k)M^{T}\mathcal{P}^{T} \le \mu\mathcal{W}^{T}\mathcal{W} + \mu^{-1}\mathcal{P}MM^{T}\mathcal{P}^{T}.$$
(48)

Substituting (47) and (48) into (46), one obtains

$$\hat{\Phi}_1(k) \le \hat{\Phi}_2 + \mu^{-1}\mathcal{P}MM^{T}\mathcal{P}^{T},$$
(49)

where

$$\hat{\Phi}_2 = \begin{bmatrix} \Pi_1 & \mu W_1^{T}W_2 & 0 & \Omega_1 & \mu W_1^{T}W_4 & \Xi\Sigma_2 & \mu W_1^{T}W_5 & (G+A)^{T}P \\ * & \Pi_2 & 0 & \mu W_2^{T}W_3 & \Omega_2 & 0 & \mu W_2^{T}W_5 & (G_{\tau}+\tilde{A})^{T}P \\ * & * & 0 & 0 & 0 & 0 & 0 & G_z^{T}P \\ * & * & * & \Pi_3 & \mu W_3^{T}W_4 & 0 & \mu W_3^{T}W_5 & B^{T}P \\ * & * & * & * & \Pi_4 & 0 & \mu W_4^{T}W_5 & \tilde{B}^{T}P \\ * & * & * & * & * & \Pi_5 & 0 & 0 \\ * & * & * & * & * & * & \Pi_6 & C^{T}P \\ * & * & * & * & * & * & * & -P \end{bmatrix},$$

and Π 1 , Π 2 , Π 3 , Π 4 , Π 5 , Π 6 , Ω 1 , Ω 2 are defined in (30).

By Lemma 2 (the Schur complement), we have

$$\hat{\Phi}_2 + \mu^{-1}\mathcal{P}MM^{T}\mathcal{P}^{T} < 0 \quad \Longleftrightarrow \quad \Phi < 0.$$
(50)

Therefore, it follows from (29), (45), (46), (49), and (50) that

$$E\{\Delta V(k)\} \le E\bigl\{\eta^{T}(k)\Phi\eta(k)\bigr\} \le \lambda_{\max}(\Phi)E\bigl\{|e(k)|^{2}\bigr\},$$
(51)

with λ max (Φ)<0.

Let N be a positive integer. Summing up both sides of (51) from 1 to N with respect to k, it easily follows that

$$E\{V(N) - V(0)\} \le \lambda_{\max}(\Phi)\sum_{k=1}^{N} E\bigl\{|e(k)|^{2}\bigr\},$$

which implies that

$$-\lambda_{\max}(\Phi)\sum_{k=1}^{N} E\bigl\{|e(k)|^{2}\bigr\} \le E\{V(0)\}.$$

Letting N → +∞, it can be seen that the series $\sum_{k=1}^{+\infty} E\{|e(k)|^{2}\}$ is convergent, and therefore we have

$$\lim_{k \to +\infty} E\bigl\{|e(k)|^{2}\bigr\} = \lim_{k \to +\infty} E\bigl\{|y(k)-x(k)|^{2}\bigr\} = 0.$$

According to Definition 1, it can be deduced that the master system (1) and the slave system (9) are globally robustly asymptotically synchronized in the mean square, and the proof is then completed. □

In the following, we will consider four special cases. First, we can consider the state-feedback controller u(k) = Ge(k) + G_τe(k−τ(k)), and the slave system (9) can then be rewritten as

$$\begin{aligned} y(k+1) = {} & (A+\Delta A(k))y(k) + (\tilde{A}+\Delta\tilde{A}(k))y(k-\tau(k)) + (B+\Delta B(k))\tilde{f}(y(k)) \\ & + (\tilde{B}+\Delta\tilde{B}(k))\tilde{g}(y(k-\tau(k))) + (C+\Delta C(k))\sum_{s=1}^{\tau(k)}\tilde{h}(y(k-s)) + I(k) \\ & + Ge(k) + G_{\tau}e(k-\tau(k)) + \sigma(k,e(k),e(k-\tau(k)))\omega(k). \end{aligned}$$
(52)

Corollary 1 Under Assumptions 1 and 2, the discrete-time neural networks (1) and (52) are globally robustly asymptotically synchronized in the mean square if there exist three positive definite matrices P, Q, and R, three diagonal matrices Λ = diag{λ₁, λ₂, …, λₙ} > 0, Γ = diag{γ₁, γ₂, …, γₙ} > 0, and Ξ = diag{ν₁, ν₂, …, νₙ} > 0, and two scalars λ* > 0 and μ > 0 such that the following LMIs hold:

$$P < \lambda^{*}E_n,$$
(53)

and

$$\begin{bmatrix} \Pi_1 & \mu W_1^{T}W_2 & \Omega_1 & \mu W_1^{T}W_4 & \Xi\Sigma_2 & \mu W_1^{T}W_5 & (G+A)^{T}P & 0 \\ * & \Pi_2 & \mu W_2^{T}W_3 & \Omega_2 & 0 & \mu W_2^{T}W_5 & (\tilde{A}+G_{\tau})^{T}P & 0 \\ * & * & \Pi_3 & \mu W_3^{T}W_4 & 0 & \mu W_3^{T}W_5 & B^{T}P & 0 \\ * & * & * & \Pi_4 & 0 & \mu W_4^{T}W_5 & \tilde{B}^{T}P & 0 \\ * & * & * & * & \Pi_5 & 0 & 0 & 0 \\ * & * & * & * & * & \Pi_6 & C^{T}P & 0 \\ * & * & * & * & * & * & -P & PM \\ * & * & * & * & * & * & * & -\mu E_n \end{bmatrix} < 0,$$
(54)

where Π 1 , Π 2 , Π 3 , Π 4 , Π 5 , Π 6 , Ω 1 , Ω 2 are defined in (30).

This corollary follows directly from Theorem 1.

Second, if the considered model is free of stochastic disturbances, the response system (9) specializes to

$$\begin{aligned} y(k+1) = {} & (A+\Delta A(k))y(k) + (\tilde{A}+\Delta\tilde{A}(k))y(k-\tau(k)) + (B+\Delta B(k))\tilde{f}(y(k)) \\ & + (\tilde{B}+\Delta\tilde{B}(k))\tilde{g}(y(k-\tau(k))) + (C+\Delta C(k))\sum_{s=1}^{\tau(k)}\tilde{h}(y(k-s)) + I(k) \\ & + Ge(k) + G_{\tau}e(k-\tau(k)) + G_{z}\sum_{s=1}^{\tau(k)} e(k-s). \end{aligned}$$
(55)

Corollary 2 Under Assumptions 1 and 2, the discrete-time neural networks (1) and (55) are globally robustly asymptotically synchronized if there exist three positive definite matrices P, Q, and R, three diagonal matrices Λ = diag{λ₁, λ₂, …, λₙ} > 0, Γ = diag{γ₁, γ₂, …, γₙ} > 0, and Ξ = diag{ν₁, ν₂, …, νₙ} > 0, and two scalars λ* > 0 and μ > 0 such that the following LMIs hold:

$$P < \lambda^{*}E_n,$$
(56)

and

$$\begin{bmatrix} \tilde{\Pi}_1 & \mu W_1^{T}W_2 & 0 & \Omega_1 & \mu W_1^{T}W_4 & \Xi\Sigma_2 & \mu W_1^{T}W_5 & (G+A)^{T}P & 0 \\ * & \tilde{\Pi}_2 & 0 & \mu W_2^{T}W_3 & \Omega_2 & 0 & \mu W_2^{T}W_5 & (G_{\tau}+\tilde{A})^{T}P & 0 \\ * & * & 0 & 0 & 0 & 0 & 0 & G_z^{T}P & 0 \\ * & * & * & \Pi_3 & \mu W_3^{T}W_4 & 0 & \mu W_3^{T}W_5 & B^{T}P & 0 \\ * & * & * & * & \Pi_4 & 0 & \mu W_4^{T}W_5 & \tilde{B}^{T}P & 0 \\ * & * & * & * & * & \Pi_5 & 0 & 0 & 0 \\ * & * & * & * & * & * & \Pi_6 & C^{T}P & 0 \\ * & * & * & * & * & * & * & -P & PM \\ * & * & * & * & * & * & * & * & -\mu E_n \end{bmatrix} < 0,$$
(57)

where Π̃₁ = −P + (τ_M − τ_m + 1)Q − ΛΘ₁ − ΞΣ₁ + μW₁ᵀW₁, Π̃₂ = −Q − Γϒ₁ + μW₂ᵀW₂, and Π₃, Π₄, Π₅, Π₆, Ω₁, Ω₂ are defined in (30).

Thirdly, let us consider the uncertainty-free case, that is, there are no parameter uncertainties in the models. Then the master system (1) and the response system (9) can be reduced, respectively, to the following models:

$$x(k+1) = Ax(k) + \tilde{A}x(k-\tau(k)) + B\tilde{f}(x(k)) + \tilde{B}\tilde{g}(x(k-\tau(k))) + C\sum_{s=1}^{\tau(k)}\tilde{h}(x(k-s)) + I(k)$$
(58)

and

$$\begin{aligned} y(k+1) = {} & Ay(k) + \tilde{A}y(k-\tau(k)) + B\tilde{f}(y(k)) + \tilde{B}\tilde{g}(y(k-\tau(k))) + C\sum_{s=1}^{\tau(k)}\tilde{h}(y(k-s)) + I(k) \\ & + Ge(k) + G_{\tau}e(k-\tau(k)) + G_{z}\sum_{s=1}^{\tau(k)} e(k-s) + \sigma(k,e(k),e(k-\tau(k)))\omega(k). \end{aligned}$$
(59)

Corollary 3 Under Assumptions 1 and 2, the discrete-time neural networks (58) and (59) are globally robustly asymptotically synchronized in the mean square if there exist three positive definite matrices P, Q, and R, three diagonal matrices Λ = diag{λ₁, λ₂, …, λₙ} > 0, Γ = diag{γ₁, γ₂, …, γₙ} > 0, and Ξ = diag{ν₁, ν₂, …, νₙ} > 0, and a scalar λ* > 0 such that the following LMIs hold:

$$P < \lambda^{*}E_n,$$
(60)

and

$$\begin{bmatrix} \Pi_1 & 0 & 0 & \Lambda\Theta_2 & 0 & \Xi\Sigma_2 & 0 & (G+A)^{T}P \\ * & \Pi_2 & 0 & 0 & \Gamma\Upsilon_2 & 0 & 0 & (G_{\tau}+\tilde{A})^{T}P \\ * & * & 0 & 0 & 0 & 0 & 0 & G_z^{T}P \\ * & * & * & -\Lambda & 0 & 0 & 0 & B^{T}P \\ * & * & * & * & -\Gamma & 0 & 0 & \tilde{B}^{T}P \\ * & * & * & * & * & \Pi_5 & 0 & 0 \\ * & * & * & * & * & * & -\frac{1}{\tau_M}R & C^{T}P \\ * & * & * & * & * & * & * & -P \end{bmatrix} < 0,$$
(61)

with Π₁ = −P + (τ_M − τ_m + 1)Q + λ*ρ₁Eₙ − ΛΘ₁ − ΞΣ₁, Π₂ = −Q + λ*ρ₂Eₙ − Γϒ₁, and Π₅ defined as in (30).

Moreover, in this case, if the stochastic disturbance in the response system (59) satisfies σᵢ(k, e(k), e(k−τ(k))) = 0 (i = 1, 2, …, n), then we need only rewrite Π₁ and Π₂ as Π₁ = −P + (τ_M − τ_m + 1)Q − ΛΘ₁ − ΞΣ₁ and Π₂ = −Q − Γϒ₁, and the corollary remains true.

The proofs of Corollary 2 and Corollary 3 are similar to that of Theorem 1 and are therefore omitted.

Finally, we consider the systems without the distributed delay. The master system (1) and the response system (9) become, respectively, the following difference equations:

$$\begin{aligned} x(k+1) = {} & (A+\Delta A(k))x(k) + (\tilde{A}+\Delta\tilde{A}(k))x(k-\tau(k)) + (B+\Delta B(k))\tilde{f}(x(k)) \\ & + (\tilde{B}+\Delta\tilde{B}(k))\tilde{g}(x(k-\tau(k))) + I(k) \end{aligned}$$
(62)

and

$$\begin{aligned} y(k+1) = {} & (A+\Delta A(k))y(k) + (\tilde{A}+\Delta\tilde{A}(k))y(k-\tau(k)) + (B+\Delta B(k))\tilde{f}(y(k)) \\ & + (\tilde{B}+\Delta\tilde{B}(k))\tilde{g}(y(k-\tau(k))) + Ge(k) + G_{\tau}e(k-\tau(k)) + I(k) \\ & + \sigma(k,e(k),e(k-\tau(k)))\omega(k). \end{aligned}$$
(63)

Then the error system is

$$\begin{aligned} e(k+1) = {} & (G+A+\Delta A(k))e(k) + (G_{\tau}+\tilde{A}+\Delta\tilde{A}(k))e(k-\tau(k)) + (B+\Delta B(k))f(e(k)) \\ & + (\tilde{B}+\Delta\tilde{B}(k))g(e(k-\tau(k))) + \sigma(k,e(k),e(k-\tau(k)))\omega(k). \end{aligned}$$
(64)

In this case, we will show that the neural networks (62) and (63) are not only globally robustly asymptotically synchronized in the mean square, but also globally robustly exponentially synchronized in the mean square. The definition of global robust exponential synchronization in the mean square is given first.

Definition 2 Systems (62) and (63) are said to be globally exponentially synchronized in the mean square if, for all parameter uncertainties satisfying the admissible conditions (6) and (7), there exist two constants β > 0 and 0 < ε < 1 and a sufficiently large positive integer N such that the following inequality:

$$E\bigl\{|y(k)-x(k)|^{2}\bigr\} \le \beta\varepsilon^{k}\max_{s \in N[-\tau_M, 0]} E\bigl\{|\psi(s)-\phi(s)|^{2}\bigr\}$$

holds for φ(s) = ψ(s) − ϕ(s) ∈ L²_{F₀}(N[−τ_M, 0], ℝ) and all k > N.

Then we have the following theorem.

Theorem 2 Under Assumptions 1 and 2, the discrete-time neural networks (62) and (63) are globally robustly exponentially synchronized in the mean square if there exist two positive definite matrices P and Q, two diagonal matrices Λ = diag{λ₁, λ₂, …, λₙ} > 0 and Γ = diag{γ₁, γ₂, …, γₙ} > 0, and two scalars λ* > 0 and μ > 0 such that the following LMIs hold:

$$P < \lambda^{*}E_n,$$
(65)

and

$$\Psi := \begin{bmatrix} \Pi_7 & \mu W_1^{T}W_2 & \Omega_1 & \mu W_1^{T}W_4 & (G+A)^{T}P & 0 \\ * & \Pi_2 & \mu W_2^{T}W_3 & \Omega_2 & (G_{\tau}+\tilde{A})^{T}P & 0 \\ * & * & \Pi_3 & \mu W_3^{T}W_4 & B^{T}P & 0 \\ * & * & * & \Pi_4 & \tilde{B}^{T}P & 0 \\ * & * & * & * & -P & PM \\ * & * & * & * & * & -\mu E_n \end{bmatrix} < 0,$$
(66)

where Π₇ = −P + (τ_M − τ_m + 1)Q + λ*ρ₁Eₙ − ΛΘ₁ + μW₁ᵀW₁, and Π₂, Π₃, Π₄, Ω₁, Ω₂ are defined in (30).
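As a hedged illustration of how conditions like (65)-(66) can be checked in practice, the sketch below assembles Ψ with cvxpy, an open-source alternative to the Matlab LMI toolbox used in Section 4. The gains G, G_τ and all numerical data are illustrative assumptions, not the paper's; with G, G_τ fixed, every block of Ψ is affine in the decision variables, so the feasibility test is a semidefinite program.

```python
import numpy as np
import cvxpy as cp

# Hedged sketch (not the authors' code): feasibility check of LMIs (65)-(66).
# G, Gt and all matrix data below are illustrative assumptions.
n = 3
A, At = np.diag([0.6, 0.2, 0.4]), np.diag([0.3, 0.5, 0.1])
B = Bt = 0.1 * np.eye(n)
G, Gt = -0.8 * np.eye(n), -0.4 * np.eye(n)             # gains assumed given
M = np.array([[0.1], [0.1], [0.2]])
W1 = W3 = W4 = np.array([[0.2, 0.1, 0.1]]); W2 = np.zeros((1, n))
Th1, Th2 = np.zeros((n, n)), np.diag([0.4, 0.2, 0.3])  # sector data, cf. (19)
Up1, Up2 = Th1, Th2
rho1 = rho2 = 0.1; tm, tM = 3, 5; I = np.eye(n)

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
Lm = cp.diag(cp.Variable(n, nonneg=True))              # Lambda
Gm = cp.diag(cp.Variable(n, nonneg=True))              # Gamma
lam = cp.Variable(nonneg=True)                         # lambda*
mu = cp.Variable(nonneg=True)

Pi7 = -P + (tM - tm + 1) * Q + lam * rho1 * I - Lm @ Th1 + mu * (W1.T @ W1)
Pi2 = -Q + lam * rho2 * I - Gm @ Up1 + mu * (W2.T @ W2)
Pi3, Pi4 = -Lm + mu * (W3.T @ W3), -Gm + mu * (W4.T @ W4)
Om1, Om2 = Lm @ Th2 + mu * (W1.T @ W3), Gm @ Up2 + mu * (W2.T @ W4)
z = np.zeros((n, 1))

Psi = cp.bmat([
    [Pi7,              mu * (W1.T @ W2), Om1,              mu * (W1.T @ W4), (G + A).T @ P,   z],
    [mu * (W2.T @ W1), Pi2,              mu * (W2.T @ W3), Om2,              (Gt + At).T @ P, z],
    [Om1.T,            mu * (W3.T @ W2), Pi3,              mu * (W3.T @ W4), B.T @ P,         z],
    [mu * (W4.T @ W1), Om2.T,            mu * (W4.T @ W3), Pi4,              Bt.T @ P,        z],
    [P @ (G + A),      P @ (Gt + At),    P @ B,            P @ Bt,           -P,              P @ M],
    [z.T,              z.T,              z.T,              z.T,              (P @ M).T,       -mu * np.ones((1, 1))],
])
Psi = (Psi + Psi.T) / 2                                # enforce exact symmetry
eps, dim = 1e-6, 5 * n + 1
prob = cp.Problem(cp.Minimize(0), [
    P >> eps * np.eye(n), Q >> eps * np.eye(n),
    P << lam * np.eye(n),                              # LMI (65): P < lambda* E_n
    Psi << -eps * np.eye(dim),                         # LMI (66): Psi < 0
])
prob.solve(solver=cp.SCS)
print(prob.status)   # 'optimal' here means the LMIs are feasible
```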

Proof To show that the neural networks (62) and (63) are globally exponentially synchronized in the mean square, consider the Lyapunov-Krasovskii functional

V(k)= V 1 (k)+ V 2 (k)+ V 3 (k),
(67)

where V₁(k), V₂(k), and V₃(k) are defined as in (32), (33), and (34).

Then, along a similar line to the proof of Theorem 1, one can obtain

$$E\{\Delta V(k)\} \le E\bigl\{\zeta^{T}(k)\Psi\zeta(k)\bigr\} \le \lambda_{\max}(\Psi)E\bigl\{|e(k)|^{2}\bigr\},$$
(68)

where λ max (Ψ)<0, and

$$\zeta(k) = \bigl[e^{T}(k),\ e^{T}(k-\tau(k)),\ f^{T}(e(k)),\ g^{T}(e(k-\tau(k)))\bigr]^{T}.$$

Now, we are in a position to establish the robust global exponential stability in the mean square of the error system (64).

First, from the definition of function V(k), it is easy to see that

$$E\{V(k)\} \le E\Biggl\{\bar{\lambda}_P|e(k)|^{2} + \bar{\lambda}_Q\sum_{i=k-\tau_M}^{k-1}|e(i)|^{2}\Biggr\},$$
(69)
$$E\{V(k)\} \ge \lambda_{\min}(P)E\bigl\{|e(k)|^{2}\bigr\},$$
(70)

where

$$\bar{\lambda}_P = \lambda_{\max}(P), \qquad \bar{\lambda}_Q = (\tau_M-\tau_m+1)\lambda_{\max}(Q).$$

For any ε>1, inequalities (68) and (69) imply that

$$\begin{aligned} E\bigl\{\Delta(\varepsilon^{k}V(k))\bigr\} &= E\bigl\{\varepsilon^{k+1}V(k+1) - \varepsilon^{k}V(k)\bigr\} = E\bigl\{\varepsilon^{k+1}\Delta V(k) + \varepsilon^{k}(\varepsilon-1)V(k)\bigr\} \\ &\le E\Biggl\{\omega_1(\varepsilon)\varepsilon^{k}|e(k)|^{2} + \omega_2(\varepsilon)\sum_{i=k-\tau_M}^{k-1}\varepsilon^{k}|e(i)|^{2}\Biggr\}, \end{aligned}$$
(71)

where

$$\omega_1(\varepsilon) = \lambda_{\max}(\Psi)\varepsilon + (\varepsilon-1)\bar{\lambda}_P, \qquad \omega_2(\varepsilon) = (\varepsilon-1)\bar{\lambda}_Q.$$

Let N be a sufficiently large positive integer satisfying N > τ_M + 1. Summing both sides of inequality (71) from 0 to N − 1 with respect to k, one can obtain

$$E\bigl\{\varepsilon^{N}V(N) - V(0)\bigr\} \le E\Biggl\{\omega_1(\varepsilon)\sum_{k=0}^{N-1}\varepsilon^{k}|e(k)|^{2} + \omega_2(\varepsilon)\sum_{k=0}^{N-1}\sum_{i=k-\tau_M}^{k-1}\varepsilon^{k}|e(i)|^{2}\Biggr\},$$
(72)

while for τ_M ≥ 1,

$$\begin{aligned} \sum_{k=0}^{N-1}\sum_{i=k-\tau_M}^{k-1}\varepsilon^{k}E\bigl\{|e(i)|^{2}\bigr\} &\le \Biggl(\sum_{i=-\tau_M}^{-1}\sum_{k=0}^{i+\tau_M} + \sum_{i=0}^{N-\tau_M-1}\sum_{k=i+1}^{i+\tau_M} + \sum_{i=N-\tau_M}^{N-1}\sum_{k=i+1}^{N-1}\Biggr)\varepsilon^{k}E\bigl\{|e(i)|^{2}\bigr\} \\ &\le \tau_M\sum_{i=-\tau_M}^{-1}\varepsilon^{i+\tau_M}E\bigl\{|e(i)|^{2}\bigr\} + \tau_M\sum_{i=0}^{N-\tau_M-1}\varepsilon^{i+\tau_M}E\bigl\{|e(i)|^{2}\bigr\} + \tau_M\sum_{i=N-\tau_M}^{N-1}\varepsilon^{i+\tau_M}E\bigl\{|e(i)|^{2}\bigr\} \\ &\le \tau_M^{2}\varepsilon^{\tau_M}\max_{-\tau_M \le i \le 0}E\bigl\{|e(i)|^{2}\bigr\} + \tau_M\varepsilon^{\tau_M}\sum_{i=0}^{N-1}\varepsilon^{i}E\bigl\{|e(i)|^{2}\bigr\}. \end{aligned}$$
(73)

Then, it follows from (72) and (73) that

$$\varepsilon^{N}E\{V(N)\} \le E\{V(0)\} + \omega_2(\varepsilon)\tau_M^{2}\varepsilon^{\tau_M}\max_{-\tau_M \le i \le 0}E\bigl\{|e(i)|^{2}\bigr\} + \bigl[\omega_1(\varepsilon) + \omega_2(\varepsilon)\tau_M\varepsilon^{\tau_M}\bigr]\sum_{i=0}^{N-1}\varepsilon^{i}E\bigl\{|e(i)|^{2}\bigr\}.$$
(74)

Considering that ω₁(1) = λ_max(Ψ) < 0, ω₂(1) = 0, and ω₂(ε) > 0 for ε > 1, it can be verified that there exists a scalar ε₀ > 1 such that ω₁(ε₀) + ω₂(ε₀)τ_M ε₀^{τ_M} = 0. So it is not difficult to derive

$$\varepsilon_0^{N}E\{V(N)\} \le E\{V(0)\} + \omega_2(\varepsilon_0)\tau_M^{2}\varepsilon_0^{\tau_M}\max_{-\tau_M \le i \le 0}E\bigl\{|e(i)|^{2}\bigr\}.$$
(75)

On the other hand, it also follows easily from (69) that

$$E\{V(0)\} \le \vartheta\max_{-\tau_M \le i \le 0}E\bigl\{|e(i)|^{2}\bigr\},$$
(76)

where ϑ=max{ λ ¯ P , λ ¯ Q }. Therefore, from (70), (75), and (76), one has

$$E\bigl\{|e(N)|^{2}\bigr\} \le \frac{1}{\lambda_{\min}(P)}\bigl[\vartheta + \omega_2(\varepsilon_0)\tau_M^{2}\varepsilon_0^{\tau_M}\bigr]\Bigl(\frac{1}{\varepsilon_0}\Bigr)^{N}\max_{-\tau_M \le i \le 0}E\bigl\{|e(i)|^{2}\bigr\}.$$

According to Definition 2, this completes the proof. □

Remark 4 Based on the drive-response concept, synchronization problems for discrete-time neural networks have received little attention. To the best of our knowledge, for master-slave systems, the synchronization analysis problem for stochastic neural networks with parameter uncertainties, and especially with distributed delays, is discussed here for the first time.

4 Numerical example

In this section, an example is given to illustrate the feasibility of our results.

Example 1 Consider the drive system (1) and the response system (9) with the following parameters:

$$\begin{gathered} A = \begin{bmatrix} 0.6 & 0 & 0 \\ 0 & 0.2 & 0 \\ 0 & 0 & 0.4 \end{bmatrix}, \quad \tilde{A} = \begin{bmatrix} 0.3 & 0 & 0 \\ 0 & 0.5 & 0 \\ 0 & 0 & 0.1 \end{bmatrix}, \quad B = \begin{bmatrix} 0.4 & 0.3 & 0 \\ 0.1 & 0.2 & 0.3 \\ 0.2 & 0 & 0.1 \end{bmatrix}, \\ \tilde{B} = \begin{bmatrix} 0.5 & 0 & 0.2 \\ 0.1 & 0.2 & 0.1 \\ 0.1 & 0.3 & 0.2 \end{bmatrix}, \quad C = \begin{bmatrix} 0.2 & 0.2 & 0 \\ 0.1 & 0.2 & 0.1 \\ 0 & 0.1 & 0.2 \end{bmatrix}, \quad M = \begin{bmatrix} 0.1 \\ 0.1 \\ 0.2 \end{bmatrix}, \\ \tilde{f}(x(k)) = \tilde{g}(x(k)) = \tilde{h}(x(k)) = \begin{bmatrix} \tanh(0.8x_1(k)) \\ \tanh(0.4x_2(k)) \\ \tanh(0.6x_3(k)) \end{bmatrix}, \\ W_1 = W_3 = W_4 = W_5 = \begin{bmatrix} 0.2 & 0.1 & 0.1 \end{bmatrix}, \quad W_2 = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}, \\ \tau(k) = 4 + (-1)^{k}, \quad d_k = 2k, \quad \rho_1 = \rho_2 = 0.1, \quad I(k) = 0. \end{gathered}$$

Therefore, it can be derived that d =1, τ m =3, τ M =5, and the activation functions satisfy Assumption 2 with

Θ 1 = ϒ 1 = Σ 1 =diag{0,0,0}, Θ 2 = ϒ 2 = Σ 2 =diag{0.4,0.2,0.3}.

We design a delayed-feedback controller as u(k) = Ge(k) + G_τe(k−τ(k)); that is, the distributed term of the controller is omitted. By using the Matlab LMI Toolbox, the LMIs (53) and (54) of Corollary 1 can be solved, and feasible solutions are obtained as follows:

$$\begin{gathered} P = \begin{bmatrix} 10.7271 & 0.4759 & 0.2023 \\ * & 7.3604 & 0.6581 \\ * & * & 10.0764 \end{bmatrix}, \quad Q = \begin{bmatrix} 2.2544 & 0.0658 & 0.0473 \\ * & 1.4303 & 0.1111 \\ * & * & 2.1584 \end{bmatrix}, \quad R = \begin{bmatrix} 2.3904 & 0.5966 & 0.5486 \\ * & 3.3556 & 0.4730 \\ * & * & 3.0583 \end{bmatrix}, \\ \Lambda = \operatorname{diag}\{9.4695, 18.7205, 10.4127\}, \quad \Gamma = \operatorname{diag}\{6.3622, 3.0023, 9.8068\}, \quad \Xi = \operatorname{diag}\{4.9624, 8.1531, 7.3455\}, \\ \lambda^{*} = 10.9097, \quad \mu = 6.4636, \end{gathered}$$

and

$$G = \begin{bmatrix} 4.8030 & 0.2974 & 0.5754 \\ 1.0704 & 1.2260 & 0.2636 \\ 0.1993 & 0.4936 & 3.8391 \end{bmatrix}, \qquad G_{\tau} = \begin{bmatrix} 1.0885 & 0.4761 & 0.2475 \\ 0.0031 & 3.9716 & 0.5437 \\ 0.9665 & 0.1146 & 1.5992 \end{bmatrix}.$$

Then, by Corollary 1, the response system (9) and the drive system (1) with the given parameters achieve global robust asymptotic synchronization in the mean square. A quick numerical sanity check of the reported solution is sketched below.
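The following sketch (ours, not from the paper) reconstructs P, Q, and R from the printed values, assuming the six numbers shown per matrix are the row-wise upper-triangular entries (the page rendering dropped the symmetric lower triangles and possibly some minus signs), and checks positive definiteness and the bound (53).

```python
import numpy as np

# Sanity check on the printed Example 1 solution (reconstruction assumed).
def sym3(u):
    S = np.zeros((3, 3)); S[np.triu_indices(3)] = u
    return S + np.triu(S, 1).T
P = sym3([10.7271, 0.4759, 0.2023, 7.3604, 0.6581, 10.0764])
Q = sym3([2.2544, 0.0658, 0.0473, 1.4303, 0.1111, 2.1584])
R = sym3([2.3904, 0.5966, 0.5486, 3.3556, 0.4730, 3.0583])
for name, X in (("P", P), ("Q", Q), ("R", R)):
    print(name, np.linalg.eigvalsh(X))         # all eigenvalues should be > 0
print(np.linalg.eigvalsh(P).max(), "should be below lambda* = 10.9097 for (53)")
```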

5 Conclusions

In this paper, based on the Lyapunov stability theorem and the drive-response concept, global asymptotic synchronization has been discussed for a general class of uncertain stochastic discrete-time neural networks with mixed time delays, consisting of a time-varying discrete delay and a distributed delay. The proposed controller is robust to the stochastic disturbance and to the parameter uncertainties. In comparison with the previous literature, the distributed delay is taken into account in our models, which has rarely been investigated for discrete-time networks. By using the linear matrix inequality (LMI) approach, several easy-to-verify sufficient criteria have been established to ensure that the uncertain stochastic discrete-time neural networks are globally robustly asymptotically synchronized in the mean square. The LMI-based criteria obtained depend not only on the lower bound but also on the upper bound of the time-varying delay, and they can be solved efficiently via the Matlab LMI Toolbox. Also, the proposed synchronization scheme is easy to implement in practice.

Authors’ information

Zhong Chen (1971-), male, native of Xingning, Guangdong, holds a master's degree and works on group theory and differential equations.

References

  1. Chua LO, Yang L: Cellular neural networks: theory. IEEE Trans. Circuits Syst. 1988, 35(10):1257-1272. 10.1109/31.7600


  2. Chua LO, Yang L: Cellular neural networks: applications. IEEE Trans. Circuits Syst. 1988, 35(10):1273-1290. 10.1109/31.7601


  3. Mohamad S, Gopalsamy K: Exponential stability of continuous-time and discrete-time cellular neural networks with delays. Appl. Math. Comput. 2003, 135(1):17-38. 10.1016/S0096-3003(01)00299-5


  4. Lu W, Chen T: Synchronization analysis of linearly coupled networks of discrete time systems. Physica D 2004, 198: 148-168. 10.1016/j.physd.2004.08.024


  5. Chen G, Zhou J, Liu Z: Global synchronization of coupled delayed neural networks and applications to chaotic CNN models. Int. J. Bifurc. Chaos 2004, 14(7):2229-2240. 10.1142/S0218127404010655


  6. Lu W, Chen T: Synchronization of coupled connected neural networks with delays. IEEE Trans. Circuits Syst. 2004, 51(12):2491-2503. 10.1109/TCSI.2004.838308


  7. Wu CW: Synchronization in arrays of coupled nonlinear systems with delay and nonreciprocal time-varying coupling. IEEE Trans. Circuits Syst. 2005, 52(5):282-286.


  8. Chen C, Liao T, Hwang C: Exponential synchronization of a class of chaotic neural networks. Chaos Solitons Fractals 2005, 24(2):197-206.


  9. Wang Z, Liu Y, Liu X: On global asymptotic stability of neural networks with discrete and distributed delays. Phys. Lett. A 2005, 345: 299-308. 10.1016/j.physleta.2005.07.025


  10. Liu Y, Wang Z, Liu X: On global exponential stability of generalized stochastic neural networks with mixed time-delays. Neurocomputing 2006, 70: 314-326. 10.1016/j.neucom.2006.01.031


  11. Wang Z, Liu Y, Yu L, Liu X: Exponential stability of delayed recurrent neural networks with Markovian jumping parameters. Phys. Lett. A 2006, 356: 346-352. 10.1016/j.physleta.2006.03.078


  12. Wang Z, Shu H, Fang J, Liu X: Robust stability for stochastic Hopfield neural networks with time delays. Nonlinear Anal., Real World Appl. 2006, 7: 1119-1128. 10.1016/j.nonrwa.2005.10.004


  13. Wang Z, Liu Y, Fraser K, Liu X: Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays. Phys. Lett. A 2006, 354: 288-297. 10.1016/j.physleta.2006.01.061


  14. Cao J, Li P, Wang W: Global synchronization in arrays of delayed neural networks with constant and delayed coupling. Phys. Lett. A 2006, 353: 318-325. 10.1016/j.physleta.2005.12.092


  15. Zhou J, Chen TP: Synchronization in general complex delayed dynamical networks. IEEE Trans. Circuits Syst. 2006, 53(3):733-744.


  16. Lou X, Cui B: Asymptotic synchronization of a class of neural networks with reaction-diffusion terms and time-varying delays. Comput. Math. Appl. 2006, 52: 897-904. 10.1016/j.camwa.2006.05.013


  17. Cao J, Li P, Wang W: Global synchronization in arrays of delayed neural networks with constant and delayed coupling. Phys. Lett. A 2006, 353: 318-325. 10.1016/j.physleta.2005.12.092


  18. Wang W, Cao J: Synchronization in an array of linearly coupled networks with time-varying delay. Physica A 2006, 366: 197-211.


  19. Yu W, Cao J: Synchronization control of stochastic delayed neural networks. Physica A 2007, 373: 252-260.


  20. Wang Z, Lauria S, Fang J, Liu X: Exponential stability of uncertain stochastic neural networks with mixed time-delays. Chaos Solitons Fractals 2007, 32: 62-72. 10.1016/j.chaos.2005.10.061


  21. Liu Y, Wang Z, Serrano A, Liu X: Discrete-time recurrent neural networks with time-varying delays: exponential stability analysis. Phys. Lett. A 2007, 362: 480-488. 10.1016/j.physleta.2006.10.073


  22. Yan J, Lin J, Hung M, Liao T: On the synchronization of neural networks containing time-varying delays and sector nonlinearity. Phys. Lett. A 2007, 361: 70-77. 10.1016/j.physleta.2006.08.083


  23. Liu Y, Wang Z, Liu X: Robust stability of discrete-time stochastic neural networks with time-varying delays. Neurocomputing 2008, 71(4-6):823-833. 10.1016/j.neucom.2007.03.008


  24. Chen T, Wu W, Zhou W: Global μ -synchronization of linearly coupled unbounded time-varying delayed neural networks with unbounded delayed coupling. IEEE Trans. Neural Netw. 2008, 19(10):1809-1816.


  25. Liu X, Chen T: Robust μ -stability for uncertain stochastic neural networks with unbounded time-varying delays. Physica A 2008, 387: 2952-2962. 10.1016/j.physa.2008.01.068


  26. Liu Y, Wang Z, Liu X: On synchronization of coupled neural networks with discrete and unbounded distributed delays. Int. J. Comput. Math. 2008, 85(8):1299-1313. 10.1080/00207160701636436


  27. Liu Y, Wang Z, Liang J, Liu X: Synchronization and state estimation for discrete-time complex networks with distributed delays. IEEE Trans. Syst. Man Cybern., Part B, Cybern. 2008, 38(5):1314-1325.


  28. Liu Y, Wang Z, Liu X: On synchronization of discrete-time Markovian jumping stochastic complex networks with mode-dependent mixed time-delays. Int. J. Mod. Phys. B 2009, 23(3):411-434. 10.1142/S0217979209049826


  29. Pecora LM, Carroll TL: Synchronization in chaotic systems. Phys. Rev. Lett. 1990, 64(8):821-824. 10.1103/PhysRevLett.64.821


  30. Carroll TL, Pecora LM: Synchronizing chaotic circuits. IEEE Trans. Circuits Syst. 1991, 38(4):453-456. 10.1109/31.75404


  31. Wu CW, Chua LO: A unified framework for synchronization and control of dynamical systems. Int. J. Bifurc. Chaos 1994, 4(4):979-998. 10.1142/S0218127494000691


  32. Wu CW, Chua LO: Synchronization in an array of linearly coupled dynamical systems. IEEE Trans. Circuits Syst. 1995, 42(8):430-447. 10.1109/81.404047


  33. Liao T, Tsai S: Adaptive synchronization of chaotic systems and its application to secure communications. Chaos Solitons Fractals 2000, 11(9):1387-1396. 10.1016/S0960-0779(99)00051-X


  34. Lu J, Wu X, Lv J: Synchronization of a unified chaotic system and the application in secure communications. Phys. Lett. A 2002, 305(6):365-370. 10.1016/S0375-9601(02)01497-4


  35. Feki M: An adaptive chaos synchronization scheme applied to secure communications. Chaos Solitons Fractals 2003, 18: 141-148. 10.1016/S0960-0779(02)00585-4



Acknowledgements

This work was supported by the NSF of Guangdong Province of China under Grant S2013010015944 and by the National Science Foundation of China (10576013; 10871075). The authors wish specially to thank the managing editor and referees for their very helpful comments and useful suggestions.

Author information


Corresponding author

Correspondence to Jianming Lin.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

ZC and JL carried out the design of the study and performed the analysis. BX participated in its design and coordination. All authors read and approved the final manuscript.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Chen, Z., Xiao, B. & Lin, J. Synchronization of a class of uncertain stochastic discrete-time delayed neural networks. Adv Differ Equ 2014, 212 (2014). https://doi.org/10.1186/1687-1847-2014-212


  • DOI: https://doi.org/10.1186/1687-1847-2014-212
