

Robust exponential stability analysis for delayed neural networks with time-varying delay

Abstract

This paper considers the problems of determining robust exponential stability and estimating the exponential convergence rate for delayed neural networks with parametric uncertainties and a time-varying delay. The relationship among the time-varying delay, its upper bound, and their difference is taken into account. Theoretical analysis shows that our result includes a previous result derived in the literature. As illustrations, the results are applied to several concrete models studied in the literature, and a comparison of results is given.

1 Introduction

Time delays are often encountered in various practical systems such as chemical processes, neural networks, and long transmission lines in pneumatic systems. It has been shown that the existence of time delays may lead to oscillation, divergence, or instability. This motivates the stability analysis problem for linear systems affected by time delays. During the last decade, increasing research interest has been devoted to the stability analysis and control design of time delay systems [1–17]. Neural networks have been extensively studied over the past few decades and have been successfully applied to many areas, such as signal processing, static image processing, pattern recognition, combinatorial optimization, and so on [18]. It is therefore important to study the stability of neural networks. In biological and artificial neural networks, time delays often arise in the processing of information storage and transmission.

Recently, a considerable number of sufficient conditions on the existence, uniqueness, and global asymptotic stability of the equilibrium point for neural networks with constant or time-varying delays have been reported under various assumptions; see, for example, [3, 4, 6, 8, 9, 11, 15–17] and the references therein. In the design of delayed neural networks (DNNs), however, one is interested not only in the global asymptotic stability of the network, but also in other performance measures. In particular, it is often desirable that the neural network converge fast enough to achieve a fast response [15]. It is well known that exponential stability provides a fast convergence rate to the equilibrium point. Therefore, the exponential stability analysis problem for time delay systems with constant or time-varying delays has been studied by many researchers, and a great number of results on this topic are available in the literature; see, for example, [3, 7, 9–11, 14–17] and the references therein. The exponential stability problems of switched positive continuous-time [12] and discrete-time [13, 14] linear systems with time delay have also been considered. In [16], exponential stability criteria for DNNs with time-varying delay were derived, but the constraint $\dot h(t) < 1$ on the time-varying delay was imposed. Such a restriction is very conservative and has physical limitations.

In practical implementations, uncertainties are inevitable in neural networks because of modeling errors and external disturbances. It is important to ensure that the network remains stable under these uncertainties. Both time delays and uncertainties can destroy the stability of a neural network in an electronic implementation. Therefore, it is of great theoretical and practical importance to investigate the robust stability of delayed neural networks with uncertainties [11].

Recently, a free-weighting matrix approach [3] has been employed to study the exponential stability problem for neural networks with a time-varying delay [4]. However, as mentioned in [5], some useful terms in the derivative of the Lyapunov functional were ignored in [4, 6, 17]. The derivative of $\int_{-h}^{0}\int_{t+\theta}^{t}\dot x^T(s) Z \dot x(s)\,ds\,d\theta$ with $0 \le h(t) \le \bar h$ was estimated as $\dot x^T(t) h Z \dot x(t) - \int_{t-h(t)}^{t}\dot x^T(s) Z \dot x(s)\,ds$, and the negative term $-\int_{t-h(t)}^{t}\dot x^T(s) Z \dot x(s)\,ds$ was ignored in [17], which may lead to considerable conservativeness. Although in [4] and [6] this negative term was retained, the other term, $-\int_{t-h}^{t-h(t)}\dot x^T(s) Z \dot x(s)\,ds$, was ignored, which may also lead to considerable conservativeness. On the other hand, if the free-weighting method introduces too many free-weighting matrices in the theoretical derivation, some of them sometimes have no effect on reducing the conservatism of the obtained results; on the contrary, they mathematically complicate the system analysis and consequently lead to a significant increase in the computational demand [8]. How to overcome these disadvantages via the integral inequality approach (IIA) is an important research topic in delay-dependent stability problems and also motivates the work of this paper on exponential stability analysis. Furthermore, the restriction $\dot h(t) < 1$ is removed in the proposed scheme.

In this paper, a global robust exponential stability criterion for delayed neural networks with time-varying delays is proposed. By constructing a suitable augmented Lyapunov functional, a delay-dependent criterion is derived in terms of linear matrix inequalities (LMIs) via the integral inequality approach (IIA); it can be solved efficiently as a generalized eigenvalue problem (GEVP). Furthermore, examples with simulations are given to show that the proposed stability criteria are less conservative than some recent ones in the literature.

Notation: Throughout this paper, $N^T$ stands for the transpose of the matrix $N$, $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space, $P > 0$ means that the matrix $P$ is symmetric positive definite, $I$ is an appropriately dimensioned identity matrix, and $\operatorname{diag}\{\cdots\}$ denotes a block diagonal matrix.

2 Problem formulations and preliminaries

Consider continuous neural networks with time-varying delays described by the following state equations:

$$\dot u_i(t) = -(c_i + \Delta c_i) u_i(t) + \sum_{j=1}^{n}(a_{ij} + \Delta a_{ij}) f_j(u_j(t)) + \sum_{j=1}^{n}(b_{ij} + \Delta b_{ij}) f_j(u_j(t - h(t))) + J_i,$$
(2.1)

or equivalently

$$\dot u(t) = -(C + \Delta C(t)) u(t) + (A + \Delta A(t)) f(u(t)) + (B + \Delta B(t)) f(u(t - h(t))) + J,$$
(2.2)

where $u(t) = [u_1(t), u_2(t), \dots, u_n(t)]^T \in \mathbb{R}^n$ is the neuron state vector, $f(u(t)) = [f_1(u_1(t)), f_2(u_2(t)), \dots, f_n(u_n(t))]^T \in \mathbb{R}^n$ denotes the activation functions, $f(u(t-h(t))) = [f_1(u_1(t-h(t))), \dots, f_n(u_n(t-h(t)))]^T \in \mathbb{R}^n$, $J = [J_1, J_2, \dots, J_n]^T$ is a constant input vector, $C = \operatorname{diag}(c_i)$ is a positive diagonal matrix, and $A = (a_{ij})_{n \times n}$ and $B = (b_{ij})_{n \times n}$ are the interconnection matrices representing the weight coefficients of the neurons. The matrices $\Delta C(t)$, $\Delta A(t)$, and $\Delta B(t)$ are the uncertainties of the system and have the form

$$[\Delta C(t) \;\; \Delta A(t) \;\; \Delta B(t)] = D F(t) [E_c \;\; E_a \;\; E_b],$$
(2.3)

where $D$, $E_c$, $E_a$, and $E_b$ are known constant real matrices with appropriate dimensions and $F(t)$ is an unknown matrix function with Lebesgue-measurable elements bounded by

$$F^T(t) F(t) \le I, \quad \forall t,$$
(2.4)

where I is an appropriately dimensioned identity matrix.

The time delay h(t) is a time-varying differentiable function that satisfies

$$0 \le h(t) \le h, \qquad \dot h(t) \le h_d, \quad \forall t \ge 0,$$
(2.5)

where $h$ and $h_d$ are constants.

Assumption 1 Throughout this paper, it is assumed that each of the activation functions $f_j$ ($j = 1, 2, \dots, n$) satisfies the following condition:

$$0 \le \frac{f_i(u) - f_i(v)}{u - v} \le k_i, \quad u \ne v \in \mathbb{R},\ i = 1, 2, \dots, n,$$
(2.6)

where $k_i$ ($i = 1, 2, \dots, n$) are positive constants.

Next, the equilibrium point $u^* = [u_1^*, \dots, u_n^*]^T$ of system (2.1) is shifted to the origin through the transformation $x(t) = u(t) - u^*$; system (2.1) can then be equivalently written as

$$\dot x(t) = -(C + \Delta C(t)) x(t) + (A + \Delta A(t)) g(x(t)) + (B + \Delta B(t)) g(x(t - h(t))),$$
(2.7)

where $x(\cdot) = [x_1(\cdot), \dots, x_n(\cdot)]^T$, $g(x(\cdot)) = [g_1(x_1(\cdot)), \dots, g_n(x_n(\cdot))]^T$, and $g_i(x_i(\cdot)) = f_i(x_i(\cdot) + u_i^*) - f_i(u_i^*)$, $i = 1, 2, \dots, n$. It is obvious that each function $g_j(\cdot)$ ($j = 1, 2, \dots, n$) satisfies the following condition:

$$0 \le \frac{g_i(x_i)}{x_i} \le k_i, \qquad g_i(0) = 0, \quad \forall x_i \ne 0,\ i = 1, 2, \dots, n,$$
(2.8)

which is equivalent to

$$g_i(x_i) (g_i(x_i) - k_i x_i) \le 0, \qquad g_i(0) = 0, \quad \forall x_i \ne 0,\ i = 1, 2, \dots, n.$$
(2.9)
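For instance, $g(x) = \tanh(x)$ satisfies (2.8) and (2.9) with $k_i = 1$. The following minimal numeric check (Python; an illustration added here, not part of the original analysis) confirms both forms of the sector condition on a grid:

```python
import numpy as np

# Sector check for g(x) = tanh(x), which satisfies (2.8)-(2.9) with k = 1:
# 0 <= g(x)/x <= k and g(x) * (g(x) - k*x) <= 0 for all x != 0.
k = 1.0
x = np.linspace(-10.0, 10.0, 100001)
x = x[np.abs(x) > 1e-9]              # exclude the origin

g = np.tanh(x)
ratio = g / x                        # slope of the chord through the origin
assert np.all(ratio >= 0.0) and np.all(ratio <= k)      # condition (2.8)
assert np.all(g * (g - k * x) <= 0.0)                   # condition (2.9)
print("tanh satisfies the sector condition (2.8)/(2.9) with k = 1")
```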

To obtain the main results, the following lemmas are needed. First, we introduce a technical lemma of the integral inequality approach (IIA), which will be used in the proofs of our results.

Lemma 1 [7, 8]

For any positive semidefinite matrix

$$X = \begin{bmatrix} X_{11} & X_{12} & X_{13} \\ X_{12}^T & X_{22} & X_{23} \\ X_{13}^T & X_{23}^T & X_{33} \end{bmatrix} \ge 0,$$
(2.10)

the following integral inequality holds:

$$-\int_{t-h(t)}^{t} \dot x^T(s) X_{33} \dot x(s)\,ds \le \int_{t-h(t)}^{t} \begin{bmatrix} x(t) \\ x(t-h(t)) \\ \dot x(s) \end{bmatrix}^T \begin{bmatrix} X_{11} & X_{12} & X_{13} \\ X_{12}^T & X_{22} & X_{23} \\ X_{13}^T & X_{23}^T & 0 \end{bmatrix} \begin{bmatrix} x(t) \\ x(t-h(t)) \\ \dot x(s) \end{bmatrix} ds.$$
(2.11)

Second, we introduce the following Schur complement lemma, which is essential in the proofs of our results.

Lemma 2 [1]

The following matrix inequality:

$$\begin{bmatrix} Q(x) & S(x) \\ S^T(x) & R(x) \end{bmatrix} < 0,$$
(2.12)

where $Q(x) = Q^T(x)$, $R(x) = R^T(x)$, and $S(x)$ depend affinely on $x$, is equivalent to

$$R(x) < 0,$$
(2.13)
$$Q(x) < 0$$
(2.14)

and

$$Q(x) - S(x) R^{-1}(x) S^T(x) < 0.$$
(2.15)
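As a quick numerical sanity check of Lemma 2 (our illustration, not from the cited source), one can confirm on a random negative definite instance that the block inequality and the three conditions hold together:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2

# Build a random symmetric negative definite block matrix [[Q, S], [S^T, R]].
M = rng.standard_normal((n + m, n + m))
block = -(M @ M.T) - 0.1 * np.eye(n + m)
Q, S, R = block[:n, :n], block[:n, n:], block[n:, n:]

def max_eig(A):
    return np.linalg.eigvalsh((A + A.T) / 2).max()

# Lemma 2: block < 0  <=>  R < 0, Q < 0, and Q - S R^{-1} S^T < 0.
assert max_eig(block) < 0
assert max_eig(R) < 0
assert max_eig(Q) < 0
assert max_eig(Q - S @ np.linalg.inv(R) @ S.T) < 0
print("Schur complement conditions verified on a random instance")
```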

Finally, Lemma 3 will be used to handle the parametric perturbations.

Lemma 3 [18]

Given a symmetric matrix $\Omega$ and matrices $D$, $E$ of appropriate dimensions, we have

$$\Omega + D F(t) E + E^T F^T(t) D^T < 0$$
(2.16)

for all $F(t)$ satisfying $F^T(t) F(t) \le I$, if and only if there exists some $\varepsilon > 0$ such that

$$\Omega + \varepsilon D D^T + \varepsilon^{-1} E^T E < 0.$$
(2.17)
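The sufficiency direction of Lemma 3 is easy to probe numerically (again an illustration of ours): if (2.17) holds for some $\varepsilon > 0$, then (2.16) holds for every sampled $F$ with $F^T F \le I$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
D = 0.1 * rng.standard_normal((n, n))
E = 0.1 * rng.standard_normal((n, n))
Omega = -np.eye(n)                     # a simple symmetric Omega
eps = 1.0

def max_eig(A):
    return np.linalg.eigvalsh((A + A.T) / 2).max()

# Check the LMI condition (2.17): Omega + eps*D*D^T + eps^{-1}*E^T*E < 0.
assert max_eig(Omega + eps * D @ D.T + E.T @ E / eps) < 0

# Then (2.16) must hold for every F with F^T F <= I; sample random F.
for _ in range(1000):
    F = rng.standard_normal((n, n))
    F /= max(1.0, np.linalg.svd(F, compute_uv=False).max())  # ||F|| <= 1
    assert max_eig(Omega + D @ F @ E + E.T @ F.T @ D.T) < 0
print("Lemma 3 sufficiency verified on sampled uncertainties")
```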

First, we consider the nominal form of system (2.7):

$$\dot x(t) = -C x(t) + A g(x(t)) + B g(x(t - h(t))).$$
(2.18)

For the nominal system (2.18), we will give a stability condition by using an integral inequality approach as follows.

Theorem 1 For given scalars $h$, $\alpha$, and $h_d$, system (2.18) is exponentially stable if there exist symmetric positive definite matrices $P = P^T > 0$, $Q = Q^T > 0$, $R = R^T > 0$, $U = U^T > 0$, $W = W^T > 0$, diagonal matrices $S \ge 0$, $\Lambda_1 \ge 0$, $\Lambda_2 \ge 0$, and

$$X = \begin{bmatrix} X_{11} & X_{12} & X_{13} \\ X_{12}^T & X_{22} & X_{23} \\ X_{13}^T & X_{23}^T & X_{33} \end{bmatrix} \ge 0$$

and

$$Y = \begin{bmatrix} Y_{11} & Y_{12} & Y_{13} \\ Y_{12}^T & Y_{22} & Y_{23} \\ Y_{13}^T & Y_{23}^T & Y_{33} \end{bmatrix} \ge 0$$

such that the following LMIs hold:

$$\Omega = \begin{bmatrix} \Omega_{11} & \Omega_{12} & \Omega_{13} & \Omega_{14} & 0 & \Omega_{16} \\ \Omega_{12}^T & \Omega_{22} & \Omega_{23} & 0 & 0 & \Omega_{26} \\ \Omega_{13}^T & \Omega_{23}^T & \Omega_{33} & \Omega_{34} & 0 & \Omega_{36} \\ \Omega_{14}^T & 0 & \Omega_{34}^T & \Omega_{44} & \Omega_{45} & 0 \\ 0 & 0 & 0 & \Omega_{45}^T & \Omega_{55} & 0 \\ \Omega_{16}^T & \Omega_{26}^T & \Omega_{36}^T & 0 & 0 & \Omega_{66} \end{bmatrix} < 0,$$
(2.19)
$$R - X_{33} \ge 0,$$
(2.20)
$$R - Y_{33} \ge 0,$$
(2.21)

where

$$\begin{aligned}
K &= \operatorname{diag}\{k_1, k_2, \dots, k_n\}, \\
\Omega_{11} &= -C^T P - P C + e^{2\alpha h}(Q + U) + e^{-2\alpha h}(h X_{11} + X_{13} + X_{13}^T), \\
\Omega_{12} &= P A + 2\alpha S - C^T S + K \Lambda_1, \qquad \Omega_{13} = P B, \\
\Omega_{14} &= e^{-2\alpha h}(h X_{12} - X_{13} + X_{23}^T), \qquad \Omega_{16} = -h C^T R, \\
\Omega_{22} &= A^T S + S A + e^{2\alpha h} W - 2\Lambda_1, \qquad \Omega_{23} = S B, \qquad \Omega_{26} = h A^T R, \\
\Omega_{33} &= -(1 - h_d) W - 2\Lambda_2, \qquad \Omega_{34} = K \Lambda_2, \qquad \Omega_{36} = h B^T R, \\
\Omega_{44} &= -(1 - h_d) Q + e^{-2\alpha h}(h X_{22} - X_{23} - X_{23}^T + h Y_{11} + Y_{13} + Y_{13}^T), \\
\Omega_{45} &= e^{-2\alpha h}(h Y_{12} - Y_{13} + Y_{23}^T), \qquad \Omega_{55} = -U + e^{-2\alpha h}(h Y_{22} - Y_{23} - Y_{23}^T), \\
\Omega_{66} &= -h R.
\end{aligned}$$

Proof Choose the following Lyapunov-Krasovskii functional candidate:

$$V(t) = V_1(t) + V_2(t) + V_3(t),$$
(2.22)

where

$$\begin{aligned}
V_1(t) &= e^{2\alpha t} x^T(t) P x(t) + 2 e^{2\alpha t} \sum_{i=1}^{n} s_i \int_{0}^{x_i(t)} g_i(s)\,ds, \\
V_2(t) &= e^{2\alpha h} \int_{t-h(t)}^{t} e^{2\alpha s} \bigl[x^T(s) Q x(s) + g^T(x(s)) W g(x(s))\bigr]\,ds + e^{2\alpha h} \int_{t-h}^{t} e^{2\alpha s} x^T(s) U x(s)\,ds, \\
V_3(t) &= \int_{-h}^{0} \int_{t+\theta}^{t} e^{2\alpha s} \dot x^T(s) R \dot x(s)\,ds\,d\theta.
\end{aligned}$$

Taking the time derivative of $V(t)$ with respect to $t$ along the trajectories of system (2.18) yields

V Ë™ (t)= V Ë™ 1 (t)+ V Ë™ 2 (t)+ V Ë™ 3 (t).
(2.23)

First, the derivative of $V_1(t)$ is

$$\begin{aligned}
\dot V_1(t) &= 2\alpha e^{2\alpha t} x^T(t) P x(t) + 2 e^{2\alpha t} \dot x^T(t) P x(t) + 4\alpha e^{2\alpha t} g^T(x(t)) S x(t) + 2 e^{2\alpha t} g^T(x(t)) S \dot x(t) \\
&= e^{2\alpha t} \bigl[2\alpha x^T(t) P x(t) + 2 \dot x^T(t) P x(t) + 4\alpha g^T(x(t)) S x(t) + 2 g^T(x(t)) S \dot x(t)\bigr].
\end{aligned}$$
(2.24)

Second, we bound the derivative of $V_2(t)$ as

$$\begin{aligned}
\dot V_2(t) &= e^{2\alpha t} \bigl\{ e^{2\alpha h} \bigl[x^T(t)(Q + U)x(t) + g^T(x(t)) W g(x(t))\bigr] - x^T(t-h) U x(t-h) \\
&\quad - (1 - \dot h(t)) \bigl[x^T(t-h(t)) Q x(t-h(t)) + g^T(x(t-h(t))) W g(x(t-h(t)))\bigr] \bigr\} \\
&\le e^{2\alpha t} \bigl\{ e^{2\alpha h} \bigl[x^T(t)(Q + U)x(t) + g^T(x(t)) W g(x(t))\bigr] - x^T(t-h) U x(t-h) \\
&\quad - (1 - h_d) \bigl[x^T(t-h(t)) Q x(t-h(t)) + g^T(x(t-h(t))) W g(x(t-h(t)))\bigr] \bigr\}.
\end{aligned}$$
(2.25)

Third, the derivative of $V_3(t)$ is bounded as follows:

$$\begin{aligned}
\dot V_3(t) &= h e^{2\alpha t} \dot x^T(t) R \dot x(t) - \int_{t-h}^{t} e^{2\alpha s} \dot x^T(s) R \dot x(s)\,ds \\
&\le e^{2\alpha t} \dot x^T(t) h R \dot x(t) - \int_{t-h}^{t} e^{2\alpha(t-h)} \dot x^T(s) R \dot x(s)\,ds \\
&= e^{2\alpha t} \dot x^T(t) h R \dot x(t) - e^{2\alpha(t-h)} \left[ \int_{t-h(t)}^{t} \dot x^T(s) R \dot x(s)\,ds + \int_{t-h}^{t-h(t)} \dot x^T(s) R \dot x(s)\,ds \right] \\
&= e^{2\alpha t} \left\{ \dot x^T(t) h R \dot x(t) - e^{-2\alpha h} \left[ \int_{t-h(t)}^{t} \dot x^T(s)(R - X_{33}) \dot x(s)\,ds + \int_{t-h}^{t-h(t)} \dot x^T(s)(R - Y_{33}) \dot x(s)\,ds \right. \right. \\
&\quad \left. \left. + \int_{t-h(t)}^{t} \dot x^T(s) X_{33} \dot x(s)\,ds + \int_{t-h}^{t-h(t)} \dot x^T(s) Y_{33} \dot x(s)\,ds \right] \right\}.
\end{aligned}$$
(2.26)

Using Lemma 1, the term $-\int_{t-h(t)}^{t} \dot x^T(s) X_{33} \dot x(s)\,ds$ can be bounded as

$$\begin{aligned}
-\int_{t-h(t)}^{t} \dot x^T(s) X_{33} \dot x(s)\,ds &\le \int_{t-h(t)}^{t} \begin{bmatrix} x(t) \\ x(t-h(t)) \\ \dot x(s) \end{bmatrix}^T \begin{bmatrix} X_{11} & X_{12} & X_{13} \\ X_{12}^T & X_{22} & X_{23} \\ X_{13}^T & X_{23}^T & 0 \end{bmatrix} \begin{bmatrix} x(t) \\ x(t-h(t)) \\ \dot x(s) \end{bmatrix} ds \\
&= x^T(t) [h X_{11} + X_{13} + X_{13}^T] x(t) + x^T(t) [h X_{12} - X_{13} + X_{23}^T] x(t-h(t)) \\
&\quad + x^T(t-h(t)) [h X_{12}^T - X_{13}^T + X_{23}] x(t) + x^T(t-h(t)) [h X_{22} - X_{23} - X_{23}^T] x(t-h(t)).
\end{aligned}$$
(2.27)

Similarly, we have

$$\begin{aligned}
-\int_{t-h}^{t-h(t)} \dot x^T(s) Y_{33} \dot x(s)\,ds &\le x^T(t-h(t)) [h Y_{11} + Y_{13} + Y_{13}^T] x(t-h(t)) + x^T(t-h(t)) [h Y_{12} - Y_{13} + Y_{23}^T] x(t-h) \\
&\quad + x^T(t-h) [h Y_{12}^T - Y_{13}^T + Y_{23}] x(t-h(t)) + x^T(t-h) [h Y_{22} - Y_{23} - Y_{23}^T] x(t-h).
\end{aligned}$$
(2.28)

The term $\dot x^T(t) h R \dot x(t)$ is expanded along (2.18) as follows:

$$\begin{aligned}
\dot x^T(t) h R \dot x(t) &= [-C x(t) + A g(x(t)) + B g(x(t-h(t)))]^T (h R) [-C x(t) + A g(x(t)) + B g(x(t-h(t)))] \\
&= x^T(t) h C^T R C x(t) - x^T(t) h C^T R A g(x(t)) - x^T(t) h C^T R B g(x(t-h(t))) \\
&\quad - g^T(x(t)) h A^T R C x(t) + g^T(x(t)) h A^T R A g(x(t)) + g^T(x(t)) h A^T R B g(x(t-h(t))) \\
&\quad - g^T(x(t-h(t))) h B^T R C x(t) + g^T(x(t-h(t))) h B^T R A g(x(t)) + g^T(x(t-h(t))) h B^T R B g(x(t-h(t))).
\end{aligned}$$
(2.29)

From (2.9), for appropriately dimensioned diagonal matrices $\Lambda_i$ ($i = 1, 2$), we have

$$-2 e^{2\alpha t} g^T(x(t)) \Lambda_1 [g(x(t)) - K x(t)] \ge 0$$
(2.30)

and

$$-2 e^{2\alpha t} g^T(x(t-h(t))) \Lambda_2 [g(x(t-h(t))) - K x(t-h(t))] \ge 0.$$
(2.31)

Combining (2.24)-(2.31) yields

$$\dot V(t) \le e^{2\alpha t} \left\{ \xi^T(t) \Xi \xi(t) - e^{-2\alpha h} \left[ \int_{t-h(t)}^{t} \dot x^T(s)(R - X_{33}) \dot x(s)\,ds + \int_{t-h}^{t-h(t)} \dot x^T(s)(R - Y_{33}) \dot x(s)\,ds \right] \right\},$$

where $\xi^T(t) = [x^T(t) \;\; g^T(x(t)) \;\; g^T(x(t-h(t))) \;\; x^T(t-h(t)) \;\; x^T(t-h)]$,

$$\Xi = \begin{bmatrix} \Xi_{11} & \Xi_{12} & \Xi_{13} & \Xi_{14} & 0 \\ \Xi_{12}^T & \Xi_{22} & \Xi_{23} & 0 & 0 \\ \Xi_{13}^T & \Xi_{23}^T & \Xi_{33} & \Xi_{34} & 0 \\ \Xi_{14}^T & 0 & \Xi_{34}^T & \Xi_{44} & \Xi_{45} \\ 0 & 0 & 0 & \Xi_{45}^T & \Xi_{55} \end{bmatrix}$$

and

$$\begin{aligned}
\Xi_{11} &= -C^T P - P C + e^{2\alpha h}(Q + U) + e^{-2\alpha h}(h X_{11} + X_{13} + X_{13}^T) + h C^T R C, \\
\Xi_{12} &= P A + 2\alpha S - C^T S + K \Lambda_1 - h C^T R A, \qquad \Xi_{13} = P B - h C^T R B, \\
\Xi_{14} &= e^{-2\alpha h}(h X_{12} - X_{13} + X_{23}^T), \qquad \Xi_{22} = A^T S + S A + e^{2\alpha h} W - 2\Lambda_1 + h A^T R A, \\
\Xi_{23} &= S B + h A^T R B, \qquad \Xi_{33} = -(1 - h_d) W - 2\Lambda_2 + h B^T R B, \qquad \Xi_{34} = K \Lambda_2, \\
\Xi_{44} &= -(1 - h_d) Q + e^{-2\alpha h}(h X_{22} - X_{23} - X_{23}^T + h Y_{11} + Y_{13} + Y_{13}^T), \\
\Xi_{45} &= e^{-2\alpha h}(h Y_{12} - Y_{13} + Y_{23}^T), \qquad \Xi_{55} = -U + e^{-2\alpha h}(h Y_{22} - Y_{23} - Y_{23}^T), \\
K &= \operatorname{diag}\{k_1, k_2, \dots, k_n\}.
\end{aligned}$$
(2.32)

From (2.32) and the Schur complement (Lemma 2), it is easy to see that $\dot V(t) < 0$ holds if $\Omega < 0$, $R - X_{33} \ge 0$, and $R - Y_{33} \ge 0$. □
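The feasibility test of Theorem 1 can be posed directly in any semidefinite programming tool. The sketch below (Python with cvxpy, assumed available; the authors used the MATLAB LMI toolbox, so this is an illustrative re-implementation and the helper name `feasible` is ours) assembles $\Omega$ from the blocks above and checks (2.19)-(2.21) for the data of Example 1 in Section 4. Whether a point near the boundary values in Tables 1-2 is reported feasible depends on solver accuracy.

```python
import numpy as np
import cvxpy as cp

# Data of Example 1 (Section 4); sector bounds K = I.
C = np.diag([2.0, 3.5])
A = np.array([[-1.0, 0.5], [0.5, -1.0]])
B = np.array([[-0.5, 0.5], [0.5, 0.5]])
K = np.eye(2)
n = 2

def feasible(h, alpha, hd, margin=1e-6):
    """Check the LMIs (2.19)-(2.21) of Theorem 1 for given h, alpha, h_d."""
    ep, em = np.exp(2 * alpha * h), np.exp(-2 * alpha * h)
    P, Q, R, U, W = (cp.Variable((n, n), symmetric=True) for _ in range(5))
    s, l1, l2 = (cp.Variable(n, nonneg=True) for _ in range(3))
    S, L1, L2 = cp.diag(s), cp.diag(l1), cp.diag(l2)

    def blocks():  # a 3x3-block semidefinite matrix as in Lemma 1
        d = [cp.Variable((n, n), symmetric=True) for _ in range(3)]
        o = [cp.Variable((n, n)) for _ in range(3)]
        M = cp.bmat([[d[0],   o[0],   o[1]],
                     [o[0].T, d[1],   o[2]],
                     [o[1].T, o[2].T, d[2]]])
        return d, o, M

    (X11, X22, X33), (X12, X13, X23), Xm = blocks()
    (Y11, Y22, Y33), (Y12, Y13, Y23), Ym = blocks()
    Z = np.zeros((n, n))

    # Blocks of Omega exactly as listed after (2.21).
    O11 = -C.T @ P - P @ C + ep * (Q + U) + em * (h * X11 + X13 + X13.T)
    O12 = P @ A + 2 * alpha * S - C.T @ S + K @ L1
    O13 = P @ B
    O14 = em * (h * X12 - X13 + X23.T)
    O16 = -h * C.T @ R
    O22 = A.T @ S + S @ A + ep * W - 2 * L1
    O23 = S @ B
    O26 = h * A.T @ R
    O33 = -(1 - hd) * W - 2 * L2
    O34 = K @ L2
    O36 = h * B.T @ R
    O44 = -(1 - hd) * Q + em * (h * X22 - X23 - X23.T + h * Y11 + Y13 + Y13.T)
    O45 = em * (h * Y12 - Y13 + Y23.T)
    O55 = -U + em * (h * Y22 - Y23 - Y23.T)
    O66 = -h * R

    Om = cp.bmat([[O11,   O12,   O13,   O14,   Z,     O16],
                  [O12.T, O22,   O23,   Z,     Z,     O26],
                  [O13.T, O23.T, O33,   O34,   Z,     O36],
                  [O14.T, Z,     O34.T, O44,   O45,   Z],
                  [Z,     Z,     Z,     O45.T, O55,   Z],
                  [O16.T, O26.T, O36.T, Z,     Z,     O66]])

    cons = [M >> margin * np.eye(n) for M in (P, Q, R, U, W)]
    cons += [Xm >> 0, Ym >> 0, R - X33 >> 0, R - Y33 >> 0]
    # Symmetrize explicitly (Om is symmetric by construction) and enforce
    # strict negativity through a small margin.
    cons += [(Om + Om.T) / 2 << -margin * np.eye(6 * n)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

print(feasible(h=1.0, alpha=0.8, hd=0.5))
```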

3 Exponential robust stability analysis

Based on Theorem 1, we have the following result for uncertain neural networks with time-varying delay (2.7).

Theorem 2 For given positive scalars $h$, $\alpha$, and $h_d$, the uncertain delayed neural network with time-varying delay (2.7) is robustly exponentially stable if there exist symmetric positive definite matrices $P = P^T > 0$, $Q = Q^T > 0$, $R = R^T > 0$, $U = U^T > 0$, $W = W^T > 0$, diagonal matrices $S \ge 0$, $\Lambda_1 \ge 0$, $\Lambda_2 \ge 0$, a scalar $\varepsilon > 0$, and

$$X = \begin{bmatrix} X_{11} & X_{12} & X_{13} \\ X_{12}^T & X_{22} & X_{23} \\ X_{13}^T & X_{23}^T & X_{33} \end{bmatrix} \ge 0, \qquad Y = \begin{bmatrix} Y_{11} & Y_{12} & Y_{13} \\ Y_{12}^T & Y_{22} & Y_{23} \\ Y_{13}^T & Y_{23}^T & Y_{33} \end{bmatrix} \ge 0$$

such that the following LMIs are true:

$$\bar\Omega = \begin{bmatrix}
\Omega_{11} + \varepsilon E_c^T E_c & \Omega_{12} - \varepsilon E_c^T E_a & \Omega_{13} - \varepsilon E_c^T E_b & \Omega_{14} & 0 & \Omega_{16} & P D \\
\Omega_{12}^T - \varepsilon E_a^T E_c & \Omega_{22} + \varepsilon E_a^T E_a & \Omega_{23} + \varepsilon E_a^T E_b & 0 & 0 & \Omega_{26} & 0 \\
\Omega_{13}^T - \varepsilon E_b^T E_c & \Omega_{23}^T + \varepsilon E_b^T E_a & \Omega_{33} + \varepsilon E_b^T E_b & \Omega_{34} & 0 & \Omega_{36} & 0 \\
\Omega_{14}^T & 0 & \Omega_{34}^T & \Omega_{44} & \Omega_{45} & 0 & 0 \\
0 & 0 & 0 & \Omega_{45}^T & \Omega_{55} & 0 & 0 \\
\Omega_{16}^T & \Omega_{26}^T & \Omega_{36}^T & 0 & 0 & \Omega_{66} & h R D \\
D^T P & 0 & 0 & 0 & 0 & h D^T R & -\varepsilon I
\end{bmatrix} < 0$$
(3.1)

and

$$R - X_{33} \ge 0,$$
(3.2)
$$R - Y_{33} \ge 0,$$
(3.3)

where the blocks $\Omega_{ij}$ ($1 \le i \le j \le 6$) are defined in (2.19).

It is worth noting, incidentally, that the uncertain delayed neural network with time-varying delay (2.7) is then exponentially stable; that is, the uncertain parts of the nominal system can be tolerated within the allowable time delay $h$ and exponential convergence rate $\alpha$.

Proof Replacing $A$, $B$, and $C$ in (2.19) with $A + D F(t) E_a$, $B + D F(t) E_b$, and $C + D F(t) E_c$, respectively, and collecting the uncertain terms, the condition for system (2.7) is equivalent to

$$\Omega + \Gamma_d F(t) \Gamma_e + \Gamma_e^T F^T(t) \Gamma_d^T < 0,$$
(3.4)

where $\Gamma_d = [P D \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; h R D]^T$ and $\Gamma_e = [-E_c \;\; E_a \;\; E_b \;\; 0 \;\; 0 \;\; 0]$.

According to Lemma 3, (3.4) holds if there exists a scalar $\varepsilon > 0$ such that the following inequality holds:

$$\Omega + \varepsilon^{-1} \Gamma_d \Gamma_d^T + \varepsilon \Gamma_e^T \Gamma_e < 0.$$
(3.5)

Applying the Schur complement shows that (3.5) is equivalent to (3.1). This completes the proof. □

If the upper bound $h_d$ of the derivative of the time-varying delay is unknown, Theorem 2 reduces to the result obtained by setting $Q = 0$ and $W = 0$, which yields the following Corollary 1.

Corollary 1 For given positive scalars $h$ and $\alpha$, the system (2.7) is robustly exponentially stable if there exist symmetric positive definite matrices $P = P^T > 0$, $R = R^T > 0$, $U = U^T > 0$, diagonal matrices $S \ge 0$, $\Lambda_1 \ge 0$, $\Lambda_2 \ge 0$, a scalar $\varepsilon > 0$, and

$$X = \begin{bmatrix} X_{11} & X_{12} & X_{13} \\ X_{12}^T & X_{22} & X_{23} \\ X_{13}^T & X_{23}^T & X_{33} \end{bmatrix} \ge 0, \qquad Y = \begin{bmatrix} Y_{11} & Y_{12} & Y_{13} \\ Y_{12}^T & Y_{22} & Y_{23} \\ Y_{13}^T & Y_{23}^T & Y_{33} \end{bmatrix} \ge 0$$

such that the following LMIs are true:

$$\tilde\Omega = \begin{bmatrix}
\tilde\Omega_{11} + \varepsilon E_c^T E_c & \Omega_{12} - \varepsilon E_c^T E_a & \Omega_{13} - \varepsilon E_c^T E_b & \Omega_{14} & 0 & \Omega_{16} & P D \\
\Omega_{12}^T - \varepsilon E_a^T E_c & \tilde\Omega_{22} + \varepsilon E_a^T E_a & \Omega_{23} + \varepsilon E_a^T E_b & 0 & 0 & \Omega_{26} & 0 \\
\Omega_{13}^T - \varepsilon E_b^T E_c & \Omega_{23}^T + \varepsilon E_b^T E_a & \tilde\Omega_{33} + \varepsilon E_b^T E_b & \Omega_{34} & 0 & \Omega_{36} & 0 \\
\Omega_{14}^T & 0 & \Omega_{34}^T & \tilde\Omega_{44} & \Omega_{45} & 0 & 0 \\
0 & 0 & 0 & \Omega_{45}^T & \Omega_{55} & 0 & 0 \\
\Omega_{16}^T & \Omega_{26}^T & \Omega_{36}^T & 0 & 0 & \Omega_{66} & h R D \\
D^T P & 0 & 0 & 0 & 0 & h D^T R & -\varepsilon I
\end{bmatrix} < 0$$
(3.6)

and

$$R - X_{33} \ge 0,$$
(3.7)
$$R - Y_{33} \ge 0,$$
(3.8)

where

$$\begin{aligned}
K &= \operatorname{diag}\{k_1, k_2, \dots, k_n\}, \\
\tilde\Omega_{11} &= -C^T P - P C + e^{2\alpha h} U + e^{-2\alpha h}(h X_{11} + X_{13} + X_{13}^T), \\
\tilde\Omega_{22} &= A^T S + S A - 2\Lambda_1, \qquad \tilde\Omega_{33} = -2\Lambda_2, \\
\tilde\Omega_{44} &= e^{-2\alpha h}(h X_{22} - X_{23} - X_{23}^T + h Y_{11} + Y_{13} + Y_{13}^T).
\end{aligned}$$

Proof Select $Q = W = 0$ in (2.22). The proof is then completed along the same lines as those of Theorems 1 and 2. □

Based on these results, a convex optimization problem can be formulated to find the bound on the allowable time delay $h$ and the exponential convergence rate $\alpha$ that maintain the stability of the uncertain delayed neural network (2.7).

Remark 1 It is interesting to note that $h$ and $\alpha$ appear linearly in (2.19), (3.1), and (3.6). Thus a generalized eigenvalue problem (GEVP), as defined in Boyd et al. [1], can be formulated to solve for the minimum acceptable $1/h$ (or $1/\alpha$), and therefore the maximum $h$ (or $\alpha$), that maintains robust stability as judged by these conditions.

The lower bound of the exponential convergence rate or the maximum allowable time delay can be determined by solving the following three optimization problems.

Case 1: estimate the lower bound of the exponential convergence rate $\alpha > 0$.

Op1: maximize $\alpha$ subject to condition (3.1), with $h$ and $h_d$ fixed.

Case 2: estimate the maximum allowable time delay $h$.

Op2: maximize $h$ subject to condition (3.1), with $\alpha > 0$ and $h_d$ fixed.

Case 3: estimate the maximum allowable change rate of the time delay $h_d$.

Op3: maximize $h_d$ subject to condition (3.1), with $\alpha > 0$ and $h$ fixed.

If the change rate of the time delay is equal to 0, i.e., $h(t) = h$, then system (2.7) reduces to a neural network with constant delay, and, consequently, Theorem 1 reduces to Corollary 1.

The lower bound of the exponential convergence rate or the maximum allowable time delay can then be determined by solving the following two optimization problems.

Case 4: estimate the lower bound of the exponential convergence rate $\alpha > 0$.

Op4: maximize $\alpha$ subject to condition (3.6), with $h$ fixed.

Case 5: estimate the maximum allowable time delay $h$.

Op5: maximize $h$ subject to condition (3.6), with $\alpha > 0$ fixed.

Remark 2 All the above optimization problems (Op1-Op5) can be solved with the MATLAB LMI toolbox. In particular, Op1 and Op4 estimate the lower bound of the global exponential convergence rate $\alpha$, which means that the exponential convergence rate of any neural network included in (2.7) is at least $\alpha$. This is useful in real-time optimal computation.
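In the same spirit, Op2 can be approximated by a simple bisection on $h$ against an LMI feasibility oracle. The sketch below (ours) reuses the hypothetical `feasible(h, alpha, hd)` helper from the sketch after Theorem 1, which checks the nominal LMI (2.19); for Op2 proper one would swap in condition (3.1). It assumes feasibility is monotone in $h$ (smaller delays are easier):

```python
def max_delay(alpha, hd, lo=0.0, hi=10.0, tol=1e-3):
    """Bisection for Op2: largest h with the LMIs feasible, alpha, h_d fixed.

    Relies on the feasible(h, alpha, hd) oracle sketched after Theorem 1.
    """
    if not feasible(lo + tol, alpha, hd):
        return None                      # infeasible even for a tiny delay
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid, alpha, hd):
            lo = mid                     # mid is achievable; search above
        else:
            hi = mid                     # mid fails; search below
    return lo

# e.g. an estimate of the MADB for Example 1 at alpha = 0.8, h_d = 0.5:
# print(max_delay(alpha=0.8, hd=0.5))
```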

4 Numerical examples

This section provides four numerical examples to demonstrate the effectiveness of the presented criterion.

Example 1 Consider the delayed neural network (2.18) as follows:

$$\dot x(t) = -C x(t) + A g(x(t)) + B g(x(t - h(t))),$$
(4.1)

where

$$C = \begin{bmatrix} 2 & 0 \\ 0 & 3.5 \end{bmatrix}, \qquad A = \begin{bmatrix} -1 & 0.5 \\ 0.5 & -1 \end{bmatrix}, \qquad B = \begin{bmatrix} -0.5 & 0.5 \\ 0.5 & 0.5 \end{bmatrix}.$$

The neuron activation functions are assumed to satisfy Assumption 1 with K=diag{1,1}.

Solution: It is assumed that the upper bound $\bar h$ is fixed as 1. The exponential convergence rates for various $h_d$ obtained from Theorem 1 and those in [4, 11, 17] are listed in Table 1. In Tables 1-2, '-' means that the result is not applicable to the corresponding case, and 'unknown $h_d$' means that $h_d$ can take arbitrary values, i.e., $h_d$ may be very large or $h(t)$ may not be differentiable.

Table 1 Maximum allowable exponential convergence rate (MAECR) α ¯ for various h d and h ¯ =1
Table 2 Maximum allowable delay bound (MADB) h ¯ for various h d and α=0.8

On the other hand, if the exponential convergence rate $\alpha$ is fixed as 0.8, the upper bounds of $\bar h$ for various $h_d$ from Theorem 1 and those in [4, 11, 17] are listed in Table 2.

From Table 1, it is clear that when the delay is time-invariant, i.e., $h_d = 0$, the result obtained by Theorem 1 is much better than that in [17]. Furthermore, when the delay is time varying, the theorem in [17] fails to provide an allowable exponential convergence rate, whereas Theorem 1 in this paper obtains significantly better results than those in [4, 11] and guarantees the exponential stability of the neural network. Moreover, when the exponential convergence rate $\alpha$ is fixed as 0.8, the upper bounds of $\bar h$ for various $h_d$ derived by Theorem 1 are also better than those in [4, 11, 17], as seen from Table 2. The reason is that, compared with [4, 11, 17], our results neither ignore any useful terms in the derivative of the Lyapunov-Krasovskii functional nor neglect the relationship among $\bar h$, $h(t)$, and $\bar h - h(t)$. Figure 1 shows the state response of Example 1 with time delay $\bar h = 1.59$ and initial value $[-1 \;\; 1]^T$.

Figure 1. The simulation of Example 1 for $h = 1.59$ s.
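Figure 1 can be reproduced approximately by a fixed-step Euler integration of (4.1) with a history buffer for the delayed state. The sketch below is our illustration, assuming tanh activations (which satisfy Assumption 1 with $K = \operatorname{diag}\{1, 1\}$) and a constant delay $h = 1.59$:

```python
import numpy as np

# Example 1 data; g(x) = tanh(x) satisfies Assumption 1 with K = diag{1, 1}.
C = np.diag([2.0, 3.5])
A = np.array([[-1.0, 0.5], [0.5, -1.0]])
B = np.array([[-0.5, 0.5], [0.5, 0.5]])

h, dt, T = 1.59, 1e-3, 10.0                # delay, Euler step, horizon
d = int(round(h / dt))                     # delay expressed in steps
steps = int(round(T / dt))

# State trajectory with a constant pre-history x(t) = [-1, 1]^T on [-h, 0].
xs = np.zeros((d + steps + 1, 2))
xs[:d + 1] = [-1.0, 1.0]

for k in range(steps):
    cur = d + k
    xk, xd = xs[cur], xs[cur - d]          # x(t) and x(t - h)
    xs[cur + 1] = xk + dt * (-C @ xk + A @ np.tanh(xk) + B @ np.tanh(xd))

# If the system is exponentially stable at this delay, the state decays
# toward the origin, matching the response shown in Figure 1.
print("state at t = %g:" % T, xs[-1])
```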

Example 2 Consider the delayed neural network (2.18) as follows:

$$\dot x(t) = -C x(t) + A g(x(t)) + B g(x(t - h(t))),$$
(4.2)

where

$$C = \begin{bmatrix} 4.1989 & 0 & 0 \\ 0 & 0.7160 & 0 \\ 0 & 0 & 1.9985 \end{bmatrix}, \qquad A = \begin{bmatrix} 0.4094 & 0.57197 & 0.2503 \\ 1.0645 & 0.0410 & -0.9923 \\ -0.7439 & 0.6344 & 0.1066 \end{bmatrix}, \qquad B = \begin{bmatrix} 0.3008 & 0 & 0 \\ 0 & 0.3070 & 0 \\ 0 & 0 & 0.3068 \end{bmatrix}.$$

The neuron activation functions are assumed to satisfy Assumption 1 with K=diag{0.4911,0.9218,0.6938}.

Solution: It is assumed that the exponential convergence rate $\alpha$ is fixed as zero. The upper bounds of $h$ for various $h_d$ from Theorem 1 and those in [4–6, 11] are listed in Table 3. It is clear that the upper bounds of $\bar h$ obtained in this paper are better than those in [4–6, 11], which guarantee the asymptotic stability of the neural network. The reason is that our results do not ignore any useful negative terms in the derivative of the Lyapunov-Krasovskii functional, in contrast to [4, 6], and also take into account the relationship among $\bar h$, $h(t)$, and $\bar h - h(t)$, in contrast to [5, 11].

Table 3 Maximum allowable delay bound (MADB) h ¯ for various h d and α=0

Example 3 Consider the following uncertain delayed neural network:

$$\dot x(t) = -C x(t) + (A + \Delta A(t)) g(x(t)) + (B + \Delta B(t)) g(x(t - h(t))),$$
(4.3)

where

$$C = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad A = \begin{bmatrix} 0.4 & 0.1 \\ 0.1 & -0.5 \end{bmatrix}, \qquad B = \begin{bmatrix} 0.2 & 0.1 \\ 0 & 0.2 \end{bmatrix}, \qquad D = \begin{bmatrix} 0.05 & 0 \\ 0 & 0.05 \end{bmatrix}, \qquad E_a = E_b = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$

The neuron activation functions are assumed to satisfy Assumption 1 with K=diag{1,1}.

Solution: We let $h_d = 0.1$ and $h = 0.5$, as in [9], and, by Theorem 2, we obtain the maximum allowable exponential convergence rate (MAECR) $\bar\alpha = 0.4742$. However, applying the criteria in [9], the maximum value of $\bar\alpha$ for the above system is 0.1. This example demonstrates that our robust stability condition gives a less conservative result than those obtained by existing methods. The MAECR $\bar\alpha$ for various $h$ obtained from Theorem 2 is listed in Table 4.

Table 4 Maximum allowable exponential convergence rate (MAECR) α ¯ for various h and h d =0.1

Example 4 Consider the following delayed neural network:

$$\dot x(t) = -C x(t) + A g(x(t)) + B g(x(t - h(t))),$$
(4.4)

where

$$C = \begin{bmatrix} 1.34 & 0 & 0 \\ 0 & 0.08 & 0 \\ 0 & 0 & 1.46 \end{bmatrix}, \qquad A = \begin{bmatrix} 1.08 & -0.52 & 0.05 \\ -0.08 & 0.68 & 0.19 \\ 0.14 & 0.29 & -0.58 \end{bmatrix}, \qquad B = \begin{bmatrix} 2.45 & -0.64 & 1.2 \\ 0.45 & 0.88 & 0.47 \\ 0.07 & -1.56 & 0.97 \end{bmatrix}.$$

The neuron activation functions are assumed to satisfy Assumption 1 with K=diag{0.31,0.40,0.23}.

Solution: Let $h_d \ge 1$ and $\alpha = 0.5$. Using the MATLAB LMI toolbox, the conditions of Theorem 2 are satisfied, and global robust exponential stability is obtained in this example for $h \le 0.3970$ with

$$\begin{aligned}
P &= \begin{bmatrix} 0.1866 & -0.0648 & -0.0804 \\ -0.0648 & 1.0184 & 0.4300 \\ -0.0804 & 0.4300 & 1.1488 \end{bmatrix}, &
Q &= \begin{bmatrix} 0.0027 & -0.0095 & -0.0188 \\ -0.0095 & 0.0367 & 0.0697 \\ -0.0188 & 0.0697 & 0.1418 \end{bmatrix}, \\
R &= \begin{bmatrix} 0.5059 & -0.0985 & 0.0115 \\ -0.0985 & 2.0362 & 0.1325 \\ 0.0115 & 0.1325 & 1.1883 \end{bmatrix}, &
W &= \begin{bmatrix} 0.0046 & 0.0033 & -0.0397 \\ 0.0033 & 0.0155 & -0.0398 \\ -0.0397 & -0.0398 & 0.5184 \end{bmatrix}, \\
U &= \begin{bmatrix} 0.0559 & -0.0333 & -0.0421 \\ -0.0333 & 0.2848 & 0.1771 \\ -0.0421 & 0.1771 & 0.4583 \end{bmatrix}, &
X_{11} &= \begin{bmatrix} 0.6600 & -0.0760 & -0.0218 \\ -0.0760 & 0.6376 & 0.0560 \\ -0.0218 & 0.0560 & 0.5618 \end{bmatrix}, \\
X_{12} &= \begin{bmatrix} -0.6176 & 0.0432 & -0.0831 \\ 0.0510 & -0.4650 & 0.0937 \\ -0.0121 & 0.0518 & -0.3444 \end{bmatrix}, &
X_{13} &= \begin{bmatrix} -0.4185 & 0.1784 & -0.0811 \\ 0.0231 & -0.6490 & 0.0839 \\ 0.0230 & -0.0181 & -0.0991 \end{bmatrix}, \\
X_{22} &= \begin{bmatrix} 0.5958 & -0.0468 & 0.0380 \\ -0.0468 & 0.5207 & 0.0331 \\ 0.0380 & 0.0331 & 0.5891 \end{bmatrix}, &
X_{23} &= \begin{bmatrix} 0.3502 & -0.1472 & 0.0416 \\ -0.0718 & 0.2440 & 0.0365 \\ 0.0860 & -0.1138 & 0.2952 \end{bmatrix}, \\
X_{33} &= \begin{bmatrix} 0.4916 & -0.0631 & 0.0484 \\ -0.0631 & 1.7896 & -0.0241 \\ 0.0484 & -0.0241 & 0.6812 \end{bmatrix}, &
Y_{11} &= \begin{bmatrix} 0.0327 & -0.0196 & -0.0252 \\ -0.0196 & 0.1679 & 0.1037 \\ -0.0252 & 0.1037 & 0.2660 \end{bmatrix}, \\
Y_{12} &= \begin{bmatrix} 0.0364 & -0.0038 & 0.0131 \\ -0.0023 & 0.1310 & -0.0298 \\ 0.0109 & -0.0269 & 0.0159 \end{bmatrix}, &
Y_{13} &= \begin{bmatrix} -0.1209 & 0.0266 & -0.0036 \\ 0.0245 & -0.4874 & -0.0356 \\ -0.0010 & -0.0388 & -0.3024 \end{bmatrix}, \\
Y_{22} &= \begin{bmatrix} 0.0522 & -0.0200 & -0.0155 \\ -0.0200 & 0.2333 & 0.0787 \\ -0.0155 & 0.0787 & 0.2580 \end{bmatrix}, &
Y_{23} &= \begin{bmatrix} -0.1558 & 0.0218 & -0.0216 \\ 0.0253 & -0.5988 & 0.0240 \\ -0.0268 & 0.0307 & -0.2447 \end{bmatrix}, \\
Y_{33} &= \begin{bmatrix} 0.4994 & -0.0763 & 0.0540 \\ -0.0763 & 1.9480 & -0.0206 \\ 0.0540 & -0.0206 & 0.8819 \end{bmatrix}, &
S &= \operatorname{diag}\{0.0003, 0.0022, 0.0370\}, \\
\Lambda_1 &= \operatorname{diag}\{0.4615, 1.1598, 0.9087\}, &
\Lambda_2 &= \operatorname{diag}\{1.1328, 2.2406, 5.7114\}.
\end{aligned}$$
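As a quick check (ours, not from the paper), the reported matrices can be confirmed positive definite by inspecting their smallest eigenvalues, e.g. for $P$ and $R$:

```python
import numpy as np

# Smallest eigenvalues of the reported P and R from Example 4.
P = np.array([[ 0.1866, -0.0648, -0.0804],
              [-0.0648,  1.0184,  0.4300],
              [-0.0804,  0.4300,  1.1488]])
R = np.array([[ 0.5059, -0.0985,  0.0115],
              [-0.0985,  2.0362,  0.1325],
              [ 0.0115,  0.1325,  1.1883]])
for name, M in (("P", P), ("R", R)):
    print(name, "min eigenvalue:", np.linalg.eigvalsh(M).min())  # > 0
```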

For this case, it can be verified that the stability conditions in [10, 13] are not applicable when $h_d \ge 1$. This implies that, for this example, the stability condition of Theorem 2 in this paper is less conservative than those in [10, 13].

Remark 3 In this paper, we are mainly concerned with exponential stability; for this reason, other practical backgrounds and other stability notions have not been introduced. Exponential stability provides a useful measure of the decay rate, or convergence speed. It is also worth pointing out that the main results in this paper can easily be extended by the same approach used in [8]. Note that we mainly focus on the effects of the maximum allowable delay bound (MADB) $\bar h$ and the maximum allowable exponential convergence rate (MAECR) $\bar\alpha$.

Remark 4 Comparisons have been made on the same examples that appear in many recent papers. These comparisons show our results to be less conservative than those previously reported.

5 Conclusion

In this paper, we have proposed new delay-dependent sufficient conditions for the global robust exponential stability of a class of neural networks with time-varying delays and parameter uncertainties. We have discussed the advantage of the assumptions adopted in this paper over those in previous studies in the literature. Global exponential stability criteria, which depend on the time delay, are derived via the Lyapunov-Krasovskii functional approach. Four numerical examples are given to show the significant improvement over some existing results in the literature. In addition, the method proposed in this paper can easily be extended to the stability or exponential stability problem for delayed neural networks with distributed delays.

References

1. Boyd S, El Ghaoui L, Feron E, Balakrishnan V: Linear Matrix Inequalities in System and Control Theory. SIAM Studies in Applied Mathematics. SIAM, Philadelphia; 1994.
2. Gahinet P, Nemirovski A, Laub A, Chilali M: LMI Control Toolbox User's Guide. The MathWorks, Natick; 1995.
3. He Y, Wu M, She JH, Liu GP: Parameter-dependent Lyapunov functional for stability of time-delay systems with polytopic-type uncertainties. IEEE Trans. Autom. Control 2004, 49: 828-832. doi:10.1109/TAC.2004.828317
4. He Y, Wu M, She JH: Delay-dependent exponential stability of delayed neural networks with time-varying delay. IEEE Trans. Circuits Syst. II, Express Briefs 2006, 53: 553-557.
5. He Y, Liu GP, Rees D: New delay-dependent stability criteria for neural networks with time-varying delay. IEEE Trans. Neural Netw. 2007, 18: 310-314.
6. Hua CC, Long CN, Guan XP: New results on stability analysis of neural networks with time-varying delays. Phys. Lett. A 2006, 352: 335-340. doi:10.1016/j.physleta.2005.12.005
7. Liu PL: Robust exponential stability for uncertain time-varying delay systems with delay dependence. J. Franklin Inst. 2009, 346: 958-968. doi:10.1016/j.jfranklin.2009.04.005
8. Liu PL: Improved delay-dependent robust stability criteria for recurrent neural networks with time-varying delays. ISA Trans. 2013, 52: 30-35. doi:10.1016/j.isatra.2012.07.007
9. Park JH: Further note on global exponential stability of uncertain cellular neural networks with variable delays. Appl. Math. Comput. 2007, 188: 850-854. doi:10.1016/j.amc.2006.10.036
10. Song QK: Exponential stability of recurrent neural networks with both time-varying delays and general activation functions via LMI approach. Neurocomputing 2008, 71: 2823-2830. doi:10.1016/j.neucom.2007.08.024
11. Xiang M, Xiang ZR: Stability, $L_1$-gain and control synthesis for positive switched systems with time-varying delay. Nonlinear Anal. Hybrid Syst. 2013, 9(1): 9-17.
12. Xiang ZR, Liu SL, Chen QW: An improved exponential stability criterion for discrete-time switched non-linear systems with time-varying delays. Trans. Inst. Meas. Control 2013, 35(3): 353-359. doi:10.1177/0142331212448271
13. Xiang M, Xiang ZR: Exponential stability of discrete-time switched linear positive systems with time-delay. Appl. Math. Comput. 2014, 230: 193-199.
14. Wu X, Wang YN, Huang LH, Zuo Y: Robust exponential stability criterion for uncertain neural networks with discontinuous activation functions and time-varying delays. Neurocomputing 2010, 73: 1265-1271. doi:10.1016/j.neucom.2010.01.002
15. Xu SY, Lam J: A new approach to exponential stability analysis of neural networks with time-varying delays. Neural Netw. 2006, 19: 76-83. doi:10.1016/j.neunet.2005.05.005
16. Yuan K, Cao J, Deng J: Exponential stability and periodic solutions of fuzzy cellular neural networks with time-varying delays. Neurocomputing 2006, 69: 1619-1627. doi:10.1016/j.neucom.2005.05.011
17. Zhang Q, Wei X, Xu J: Delay-dependent exponential stability of cellular neural networks with time-varying delays. Chaos Solitons Fractals 2005, 23: 1363-1369. doi:10.1016/j.chaos.2004.06.036
18. Chua L, Yang L: Cellular neural networks: theory. IEEE Trans. Circuits Syst. 1988, 35: 1257-1272. doi:10.1109/31.7600


Author information

Correspondence to Pin-Lin Liu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article


Cite this article

Xie, JC., Chen, CP., Liu, PL. et al. Robust exponential stability analysis for delayed neural networks with time-varying delay. Adv Differ Equ 2014, 131 (2014). https://doi.org/10.1186/1687-1847-2014-131
