
Further analysis of stability of uncertain neural networks with multiple time delays

Abstract

This paper studies the robust stability of uncertain neural networks with multiple time delays with respect to the class of nondecreasing activation functions. By using the Lyapunov functional and homeomorphism mapping theorems, we derive a new delay-independent sufficient condition for the existence, uniqueness, and global asymptotic stability of the equilibrium point for delayed neural networks with uncertain network parameters. The condition obtained for robust stability establishes a matrix-norm relationship among the network parameters of the neural system, and it can therefore be verified easily. We also present some constructive numerical examples to compare the proposed result with previously published results. These comparative examples show that our new condition can be considered an alternative to the previous literature results, as it defines a new set of network parameters ensuring the robust stability of delayed neural networks.

1 Introduction

Dynamical neural networks have recently received a great deal of attention due to their potential applications in image and signal processing, combinatorial optimization problems, pattern recognition, control engineering, and other related areas. In the electronic implementation of analog neural networks, during the processing and transmission of signals in the network, the finite switching speed of amplifiers introduces time delays, which may change the dynamical behavior of the network from stable to unstable. Therefore, it is important to take the effects of time delays into account in the dynamical analysis of neural networks. On the other hand, it is well known that some disturbances are unavoidable in the modeling and stability analysis of neural networks. The major disturbances occur within the network itself, mainly due to deviations in the values of the electronic components during implementation. Therefore, in recent years, many papers have focused on studying the existence, uniqueness, and global robust asymptotic stability of the equilibrium point in the presence of time delays and parameter uncertainties for various classes of nonlinear neural networks, and various robust stability results have been reported [1–43].

In the current paper, we aim to study the robust stability of a class of uncertain neural networks with multiple time delays. By using the Lyapunov functional and homeomorphism mapping theorems, a new delay-independent sufficient condition for global robust asymptotic stability of the equilibrium point for this class of neural networks is derived. Meanwhile, three numerical examples are presented to demonstrate the applicability of the condition and to show the advantages of our result over the previously published robust stability results.

We use the following notation. Throughout this paper, the superscript $T$ denotes the transpose. $I$ stands for the identity matrix of appropriate dimension. For a vector $v=(v_1,v_2,\ldots,v_n)^T$, $|v|$ denotes $|v|=(|v_1|,|v_2|,\ldots,|v_n|)^T$. For any real matrix $Q=(q_{ij})_{n\times n}$, $|Q|$ denotes $|Q|=(|q_{ij}|)_{n\times n}$, and $\lambda_m(Q)$ and $\lambda_M(Q)$ denote the minimum and maximum eigenvalues of $Q$, respectively. If $Q=(q_{ij})_{n\times n}$ is a symmetric matrix, then $Q>0$ means that $Q$ is positive definite, i.e., $Q$ has all eigenvalues real and positive. Let $P=(p_{ij})_{n\times n}$ and $Q=(q_{ij})_{n\times n}$ be two symmetric matrices. Then $P<Q$ means that $v^TPv<v^TQv$ for any real vector $v=(v_1,v_2,\ldots,v_n)^T$. A real matrix $P=(p_{ij})_{n\times n}$ is said to be nonnegative if $p_{ij}\ge 0$, $i,j=1,2,\ldots,n$. Let $P=(p_{ij})_{n\times n}$ and $Q=(q_{ij})_{n\times n}$ be two real matrices. Then $P\ge Q$ means that $p_{ij}\ge q_{ij}$, $i,j=1,2,\ldots,n$. We also recall the following vector and matrix norms:

$$\|v\|_1=\sum_{i=1}^n|v_i|,\qquad \|v\|_2=\sqrt{\sum_{i=1}^n v_i^2},\qquad \|v\|_\infty=\max_{1\le i\le n}|v_i|,$$

$$\|Q\|_1=\max_{1\le i\le n}\sum_{j=1}^n|q_{ji}|,\qquad \|Q\|_2=\sqrt{\lambda_{\max}\bigl(Q^TQ\bigr)},\qquad \|Q\|_\infty=\max_{1\le i\le n}\sum_{j=1}^n|q_{ij}|.$$
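These norm conventions can be mirrored directly in code. The following short sketch (with an arbitrary illustrative matrix of our own choosing, not one from the paper) computes the three matrix norms and checks $\|Q\|_2$ against a library routine:

```python
import numpy as np

# Illustrative matrix; any real square matrix works here.
Q = np.array([[1.0, -2.0],
              [3.0,  4.0]])

norm_1   = np.abs(Q).sum(axis=0).max()                 # max absolute column sum
norm_2   = np.sqrt(np.linalg.eigvalsh(Q.T @ Q).max())  # sqrt of lambda_max(Q^T Q)
norm_inf = np.abs(Q).sum(axis=1).max()                 # max absolute row sum

assert np.isclose(norm_2, np.linalg.norm(Q, 2))        # spectral norm agrees
print(norm_1, norm_2, norm_inf)                        # 6.0  5.116...  7.0
```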

2 Preliminaries

The delayed neural network model we consider in this paper is described by the set of nonlinear differential equations of the form

$$\frac{dx_i(t)}{dt}=-c_ix_i(t)+\sum_{j=1}^n a_{ij}f_j\bigl(x_j(t)\bigr)+\sum_{j=1}^n b_{ij}f_j\bigl(x_j(t-\tau_{ij})\bigr)+u_i,$$
(2.1)

where $i=1,2,\ldots,n$, $n$ is the number of neurons, $x_i(t)$ denotes the state of neuron $i$ at time $t$, the $f_i(\cdot)$ denote the activation functions, $a_{ij}$ and $b_{ij}$ denote the strengths of connectivity between neurons $j$ and $i$ at times $t$ and $t-\tau_{ij}$, respectively, $\tau_{ij}$ represents the time delay required in transmitting a signal from neuron $j$ to neuron $i$, $u_i$ is the constant input to neuron $i$, and $c_i$ is the charging rate for neuron $i$.
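To fix ideas, the following minimal sketch simulates model (2.1) with a forward-Euler scheme; all parameter values, the tanh activations, and the zero initial history are illustrative assumptions of ours, not data from the paper.

```python
import numpy as np

# Forward-Euler simulation of model (2.1) with assumed parameters.
n, h, T = 2, 0.01, 2000                      # neurons, step size, steps
C   = np.array([2.0, 3.0])                   # charging rates c_i > 0
A   = np.array([[0.5, -0.2], [0.1, 0.3]])    # instantaneous weights a_ij
B   = np.array([[0.2,  0.1], [-0.1, 0.2]])   # delayed weights b_ij
tau = np.array([[0.1, 0.2], [0.2, 0.1]])     # delays tau_ij (seconds)
u   = np.array([1.0, -0.5])                  # constant inputs u_i
f   = np.tanh                                # nondecreasing, slope <= 1

d = np.ceil(tau / h).astype(int)             # delays measured in steps
hist = np.zeros((T + d.max() + 1, n))        # state history, zero initial data
for t in range(d.max(), T + d.max()):
    x = hist[t]
    delayed = np.array([sum(B[i, j] * f(hist[t - d[i, j], j]) for j in range(n))
                        for i in range(n)])
    hist[t + 1] = x + h * (-C * x + A @ f(x) + delayed + u)

print("state after transient:", hist[-1])    # settles for stable parameters
```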

In order to accomplish the objectives of this paper concerning the robust stability of dynamical neural networks, we first define the class of activation functions that we employ in the neural network model (2.1) and the parametric uncertainties of the system matrices $A$, $B$, and $C$.

The activation functions $f_i$ are assumed to be nondecreasing and slope-bounded; that is, there exist positive constants $k_i$ such that the following conditions hold:

$$0\le\frac{f_i(x)-f_i(y)}{x-y}\le k_i,\quad i=1,2,\ldots,n,\ \forall x,y\in\mathbb{R},\ x\ne y.$$

This class of functions will be denoted by $f\in\mathcal{K}$.

We will set intervals for the system matrices $A=(a_{ij})$, $B=(b_{ij})$, and $C=\operatorname{diag}(c_i)$ in (2.1) as follows:

$$\begin{aligned} C_I&:=\bigl\{C=\operatorname{diag}(c_i):0<\underline{C}\le C\le\overline{C},\ \text{i.e.},\ 0<\underline{c}_i\le c_i\le\overline{c}_i,\ i=1,2,\ldots,n\bigr\},\\ A_I&:=\bigl\{A=(a_{ij}):\underline{A}\le A\le\overline{A},\ \text{i.e.},\ \underline{a}_{ij}\le a_{ij}\le\overline{a}_{ij},\ i,j=1,2,\ldots,n\bigr\},\\ B_I&:=\bigl\{B=(b_{ij}):\underline{B}\le B\le\overline{B},\ \text{i.e.},\ \underline{b}_{ij}\le b_{ij}\le\overline{b}_{ij},\ i,j=1,2,\ldots,n\bigr\}. \end{aligned}$$
(2.2)

In what follows, we will give some basic definitions and lemmas that will play an important role in the proof of our robust stability results.

Definition 2.1 (See [28])

Let $x^*=(x_1^*,x_2^*,\ldots,x_n^*)^T$ be an equilibrium point of neural system (2.1). The neural network model (2.1) with the parameter ranges defined by (2.2) is globally asymptotically robust stable if $x^*$ is a unique and globally asymptotically stable equilibrium point of system (2.1) for all $C\in C_I$, $A\in A_I$, and $B\in B_I$.

Lemma 2.1 (See [29])

If $H(x)\in C^0$ satisfies $H(x)\ne H(y)$ for all $x\ne y$ and $\|H(x)\|\to\infty$ as $\|x\|\to\infty$, then $H(x)$ is a homeomorphism of $\mathbb{R}^n$.

Lemma 2.2 (See [1])

Let $x=(x_1,x_2,\ldots,x_n)^T\in\mathbb{R}^n$. If

$$A\in A_I:=\bigl\{A=(a_{ij}):\underline{A}\le A\le\overline{A},\ \text{i.e.},\ \underline{a}_{ij}\le a_{ij}\le\overline{a}_{ij},\ i,j=1,2,\ldots,n\bigr\},$$

then, for any positive diagonal matrix $P$ and any nonnegative diagonal matrix $\Upsilon$, the following inequality holds:

$$x^T\bigl(PA+A^TP\bigr)x\le x^T\bigl(P(A^*-\Upsilon)+(A^*-\Upsilon)^TP\bigr)x+x^T\bigl(\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2I\bigr)x,$$

where $A^*=\frac12(\overline{A}+\underline{A})$ and $A_*=\frac12(\overline{A}-\underline{A})$.

3 Robust stability analysis

In this section, we will present a new sufficient condition that guarantees the global robust asymptotic stability of the equilibrium point of the neural network model (2.1), which is stated in the following theorem.

Theorem 3.1 For the neural network model (2.1), assume that $f\in\mathcal{K}$ and the network parameters satisfy (2.2). Then the neural network model (2.1) has a unique and globally asymptotically robust stable equilibrium point for each $u$, if there exist a positive diagonal matrix $P$ and a nonnegative diagonal matrix $\Upsilon$ such that the following condition holds:

$$\Pi=2\underline{C}PK^{-1}-P(A^*-\Upsilon)-(A^*-\Upsilon)^TP-\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2I-2\sqrt{n}\,p_M\bigl(\rho_1\sqrt{\|R\|_\infty}+\rho_2\sqrt{\|R\|_1}\bigr)I>0,$$

where $K=\operatorname{diag}(k_i)$ with $k_i>0$, $A^*=\frac12(\overline{A}+\underline{A})$, $A_*=\frac12(\overline{A}-\underline{A})$, $p_M=\max(p_i)$, $R=(r_{ij})_{n\times n}$ with $r_{ij}=\hat b_{ij}^2$, where $\hat b_{ij}=\max\{|\underline{b}_{ij}|,|\overline{b}_{ij}|\}$, and $\rho_1$ and $\rho_2$ are nonnegative constants such that $\rho_1+\rho_2=1$ and $\rho_1\rho_2=0$.

Proof We will first prove the existence and uniqueness of the equilibrium point of system (2.1) by making use of the homeomorphism mapping theorem stated in Lemma 2.1. To this end, we define the mapping associated with system (2.1) as

$$H(x)=-Cx+Af(x)+Bf(x)+u.$$
(3.1)

We point out here that if $x^*$ is an equilibrium point of the neural network model (2.1), then, by definition, $x^*$ satisfies the following equilibrium equation:

$$-Cx^*+Af\bigl(x^*\bigr)+Bf\bigl(x^*\bigr)+u=0.$$

Therefore, every solution of the equation $H(x)=0$ is an equilibrium point of system (2.1). Hence, if we show that $H(x)$ is a homeomorphism of $\mathbb{R}^n$, then we can conclude that $H(x)=0$ has a unique solution for each $u$. In order to prove that $H(x)$ is a homeomorphism of $\mathbb{R}^n$, we choose two real vectors $x,y\in\mathbb{R}^n$ such that $x\ne y$. In this case, from (3.1) we can write the following equation:

$$H(x)-H(y)=-C(x-y)+A\bigl(f(x)-f(y)\bigr)+B\bigl(f(x)-f(y)\bigr).$$
(3.2)

Let $x\ne y$. If $f(x)-f(y)=0$, then (3.2) takes the form

$$H(x)-H(y)=-C(x-y),$$

from which it follows that $H(x)\ne H(y)$ whenever $x-y\ne 0$, since $C$ is a positive diagonal matrix. Now assume that $f(x)-f(y)\ne 0$ when $x-y\ne 0$. In this case, multiplying both sides of (3.2) by $2(f(x)-f(y))^TP$ yields

$$\begin{aligned} 2\bigl(f(x)-f(y)\bigr)^TP\bigl(H(x)-H(y)\bigr)={}&-2\bigl(f(x)-f(y)\bigr)^TPC(x-y)+2\bigl(f(x)-f(y)\bigr)^TPA\bigl(f(x)-f(y)\bigr)\\ &+2\bigl(f(x)-f(y)\bigr)^TPB\bigl(f(x)-f(y)\bigr)\\ ={}&-2\bigl(f(x)-f(y)\bigr)^TPC(x-y)+\bigl(f(x)-f(y)\bigr)^T\bigl(PA+A^TP\bigr)\bigl(f(x)-f(y)\bigr)\\ &+2\bigl(f(x)-f(y)\bigr)^TPB\bigl(f(x)-f(y)\bigr), \end{aligned}$$
(3.3)

where $P=\operatorname{diag}(p_i)$ with $p_i>0$ is a positive diagonal matrix.

In the light of Lemma 2.2, we can write

$$\begin{aligned} \bigl(f(x)-f(y)\bigr)^T\bigl(PA+A^TP\bigr)\bigl(f(x)-f(y)\bigr)\le{}&\bigl(f(x)-f(y)\bigr)^T\bigl(P(A^*-\Upsilon)+(A^*-\Upsilon)^TP\bigr)\bigl(f(x)-f(y)\bigr)\\ &+\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2\bigl(f(x)-f(y)\bigr)^T\bigl(f(x)-f(y)\bigr), \end{aligned}$$
(3.4)

where ϒ is a nonnegative diagonal matrix.

$f\in\mathcal{K}$ implies that

$$\begin{aligned} -2\bigl(f(x)-f(y)\bigr)^TPC(x-y)&=-2\sum_{i=1}^np_ic_i\bigl(f_i(x_i)-f_i(y_i)\bigr)(x_i-y_i)\\ &\le-2\sum_{i=1}^n\frac{p_i\underline{c}_i}{k_i}\bigl(f_i(x_i)-f_i(y_i)\bigr)^2=-2\bigl(f(x)-f(y)\bigr)^T\underline{C}PK^{-1}\bigl(f(x)-f(y)\bigr). \end{aligned}$$
(3.5)

We also note the following inequality:

$$\begin{aligned} 2\bigl(f(x)-f(y)\bigr)^TPB\bigl(f(x)-f(y)\bigr)&=\sum_{i=1}^n\sum_{j=1}^n2p_ib_{ij}\bigl(f_i(x_i)-f_i(y_i)\bigr)\bigl(f_j(x_j)-f_j(y_j)\bigr)\\ &\le(\rho_1+\rho_2)p_M\sum_{i=1}^n\sum_{j=1}^n2\hat b_{ij}\bigl|f_i(x_i)-f_i(y_i)\bigr|\bigl|f_j(x_j)-f_j(y_j)\bigr|\\ &=\rho_1p_M\sum_{i=1}^n\sum_{j=1}^n2\hat b_{ij}\bigl|f_i(x_i)-f_i(y_i)\bigr|\bigl|f_j(x_j)-f_j(y_j)\bigr|\\ &\quad+\rho_2p_M\sum_{i=1}^n\sum_{j=1}^n2\hat b_{ji}\bigl|f_i(x_i)-f_i(y_i)\bigr|\bigl|f_j(x_j)-f_j(y_j)\bigr|, \end{aligned}$$
(3.6)

where $\rho_1$ and $\rho_2$ are nonnegative constants such that $\rho_1+\rho_2=1$ and $\rho_1\rho_2=0$, $p_M=\max(p_i)$, and $\hat b_{ij}=\max\{|\underline{b}_{ij}|,|\overline{b}_{ij}|\}$.

Inequality (3.6) can be written in the following form:

$$\begin{aligned} 2\bigl(f(x)-f(y)\bigr)^TPB\bigl(f(x)-f(y)\bigr)\le{}&\rho_1p_M\sum_{i=1}^n\sum_{j=1}^n\Bigl(\alpha\hat b_{ij}^2\bigl(f_i(x_i)-f_i(y_i)\bigr)^2+\frac{1}{\alpha}\bigl(f_j(x_j)-f_j(y_j)\bigr)^2\Bigr)\\ &+\rho_2p_M\sum_{i=1}^n\sum_{j=1}^n\Bigl(\beta\hat b_{ji}^2\bigl(f_i(x_i)-f_i(y_i)\bigr)^2+\frac{1}{\beta}\bigl(f_j(x_j)-f_j(y_j)\bigr)^2\Bigr)\\ ={}&\rho_1p_M\sum_{i=1}^n\sum_{j=1}^n\Bigl(\alpha r_{ij}\bigl(f_i(x_i)-f_i(y_i)\bigr)^2+\frac{1}{\alpha}\bigl(f_j(x_j)-f_j(y_j)\bigr)^2\Bigr)\\ &+\rho_2p_M\sum_{i=1}^n\sum_{j=1}^n\Bigl(\beta r_{ji}\bigl(f_i(x_i)-f_i(y_i)\bigr)^2+\frac{1}{\beta}\bigl(f_j(x_j)-f_j(y_j)\bigr)^2\Bigr)\\ \le{}&\rho_1p_M\Bigl(\alpha\|R\|_\infty\bigl\|f(x)-f(y)\bigr\|_2^2+\frac{n}{\alpha}\bigl\|f(x)-f(y)\bigr\|_2^2\Bigr)\\ &+\rho_2p_M\Bigl(\beta\|R\|_1\bigl\|f(x)-f(y)\bigr\|_2^2+\frac{n}{\beta}\bigl\|f(x)-f(y)\bigr\|_2^2\Bigr), \end{aligned}$$
(3.7)

where $\alpha$ and $\beta$ are some positive constants. Letting $\alpha=\frac{\sqrt{n}}{\sqrt{\|R\|_\infty}}$ and $\beta=\frac{\sqrt{n}}{\sqrt{\|R\|_1}}$ in (3.7) yields

$$\begin{aligned} 2\bigl(f(x)-f(y)\bigr)^TPB\bigl(f(x)-f(y)\bigr)\le{}&2\sqrt{n}\,p_M\rho_1\sqrt{\|R\|_\infty}\bigl(f(x)-f(y)\bigr)^T\bigl(f(x)-f(y)\bigr)\\ &+2\sqrt{n}\,p_M\rho_2\sqrt{\|R\|_1}\bigl(f(x)-f(y)\bigr)^T\bigl(f(x)-f(y)\bigr). \end{aligned}$$
(3.8)

Using (3.4), (3.5), and (3.8) in (3.3) results in

$$\begin{aligned} 2\bigl(f(x)-f(y)\bigr)^TP\bigl(H(x)-H(y)\bigr)\le{}&-2\bigl(f(x)-f(y)\bigr)^T\underline{C}PK^{-1}\bigl(f(x)-f(y)\bigr)\\ &+\bigl(f(x)-f(y)\bigr)^T\bigl(P(A^*-\Upsilon)+(A^*-\Upsilon)^TP\bigr)\bigl(f(x)-f(y)\bigr)\\ &+\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2\bigl(f(x)-f(y)\bigr)^T\bigl(f(x)-f(y)\bigr)\\ &+2\sqrt{n}\,p_M\rho_1\sqrt{\|R\|_\infty}\bigl(f(x)-f(y)\bigr)^T\bigl(f(x)-f(y)\bigr)\\ &+2\sqrt{n}\,p_M\rho_2\sqrt{\|R\|_1}\bigl(f(x)-f(y)\bigr)^T\bigl(f(x)-f(y)\bigr), \end{aligned}$$

which can be written in the form

$$2\bigl(f(x)-f(y)\bigr)^TP\bigl(H(x)-H(y)\bigr)\le-\bigl(f(x)-f(y)\bigr)^T\Pi\bigl(f(x)-f(y)\bigr).$$
(3.9)

For activation functions belonging to the class $\mathcal{K}$, it has been shown in [27] that, for an inequality of the form (3.9), if $\Pi>0$, then $H(x)\ne H(y)$ for all $x\ne y$, and $\|H(x)\|\to\infty$ as $\|x\|\to\infty$. Hence, we have proved that the map $H(x):\mathbb{R}^n\to\mathbb{R}^n$ is a homeomorphism of $\mathbb{R}^n$, meaning that the condition of Theorem 3.1 implies the existence and uniqueness of the equilibrium point of the neural network model (2.1).

It will now be shown that the condition obtained in Theorem 3.1 for the existence and uniqueness of the equilibrium point of neural network model (2.1) also implies the global asymptotic stability of the equilibrium point. To this end, we shift the equilibrium point $x^*$ of system (2.1) to the origin. The transformation $z_i(\cdot)=x_i(\cdot)-x_i^*$, $i=1,2,\ldots,n$, puts the network model (2.1) into the following form:

$$\dot z_i(t)=-c_iz_i(t)+\sum_{j=1}^na_{ij}g_j\bigl(z_j(t)\bigr)+\sum_{j=1}^nb_{ij}g_j\bigl(z_j(t-\tau_{ij})\bigr),\quad i=1,2,\ldots,n,$$
(3.10)

where $g_i(z_i(\cdot))=f_i(z_i(\cdot)+x_i^*)-f_i(x_i^*)$, $i=1,2,\ldots,n$, satisfies the following property:

$$0\le\frac{g_i(z)}{z}\le k_i,\quad\forall z\in\mathbb{R},\ z\ne0,\quad\text{and}\quad g_i(0)=0,\ i=1,2,\ldots,n.$$

Note that the equilibrium and stability properties of systems (2.1) and (3.10) are identical. Therefore, proving the asymptotic stability of the origin of system (3.10) directly implies the asymptotic stability of $x^*$. Now consider the following positive definite Lyapunov functional for system (3.10):

$$V\bigl(z(t)\bigr)=\sum_{i=1}^nz_i^2(t)+2\varepsilon\sum_{i=1}^np_i\int_0^{z_i(t)}g_i(s)\,ds+\sum_{i=1}^n\sum_{j=1}^n\Bigl(\gamma+\frac{n}{\underline{c}_i}\hat b_{ij}^2+\frac{\rho_1\varepsilon p_M}{\alpha}+\rho_2\varepsilon\beta p_M\hat b_{ij}^2\Bigr)\int_{t-\tau_{ij}}^tg_j^2\bigl(z_j(\xi)\bigr)\,d\xi,$$

where $p_i$, $\alpha$, $\beta$, $\gamma$, and $\varepsilon$ are positive constants to be determined later. The time derivative of the functional along the trajectories of system (3.10) is obtained as follows:

$$\begin{aligned} \dot V\bigl(z(t)\bigr)={}&-2\sum_{i=1}^nc_iz_i^2(t)+\sum_{i=1}^n\sum_{j=1}^n2a_{ij}z_i(t)g_j\bigl(z_j(t)\bigr)+\sum_{i=1}^n\sum_{j=1}^n2b_{ij}z_i(t)g_j\bigl(z_j(t-\tau_{ij})\bigr)\\ &-2\varepsilon\sum_{i=1}^np_ic_iz_i(t)g_i\bigl(z_i(t)\bigr)+\varepsilon\sum_{i=1}^n\sum_{j=1}^n2p_ia_{ij}g_i\bigl(z_i(t)\bigr)g_j\bigl(z_j(t)\bigr)\\ &+\varepsilon\sum_{i=1}^n\sum_{j=1}^n2p_ib_{ij}g_i\bigl(z_i(t)\bigr)g_j\bigl(z_j(t-\tau_{ij})\bigr)\\ &+\frac{\rho_1\varepsilon p_M}{\alpha}\sum_{i=1}^n\sum_{j=1}^ng_j^2\bigl(z_j(t)\bigr)-\frac{\rho_1\varepsilon p_M}{\alpha}\sum_{i=1}^n\sum_{j=1}^ng_j^2\bigl(z_j(t-\tau_{ij})\bigr)\\ &+\gamma\sum_{i=1}^n\sum_{j=1}^ng_j^2\bigl(z_j(t)\bigr)-\gamma\sum_{i=1}^n\sum_{j=1}^ng_j^2\bigl(z_j(t-\tau_{ij})\bigr)\\ &+\rho_2\varepsilon\beta p_M\sum_{i=1}^n\sum_{j=1}^n\hat b_{ij}^2g_j^2\bigl(z_j(t)\bigr)-\rho_2\varepsilon\beta p_M\sum_{i=1}^n\sum_{j=1}^n\hat b_{ij}^2g_j^2\bigl(z_j(t-\tau_{ij})\bigr)\\ &+\sum_{i=1}^n\sum_{j=1}^n\frac{n}{\underline{c}_i}\hat b_{ij}^2g_j^2\bigl(z_j(t)\bigr)-\sum_{i=1}^n\sum_{j=1}^n\frac{n}{\underline{c}_i}\hat b_{ij}^2g_j^2\bigl(z_j(t-\tau_{ij})\bigr). \end{aligned}$$
(3.11)

We note that the following inequalities hold:

$$\sum_{i=1}^n\sum_{j=1}^n2a_{ij}z_i(t)g_j\bigl(z_j(t)\bigr)\le\sum_{i=1}^nc_iz_i^2(t)+\sum_{i=1}^n\sum_{j=1}^n\frac{n}{\underline{c}_i}\hat a_{ij}^2g_j^2\bigl(z_j(t)\bigr),$$
(3.12)
$$\sum_{i=1}^n\sum_{j=1}^n2b_{ij}z_i(t)g_j\bigl(z_j(t-\tau_{ij})\bigr)\le\sum_{i=1}^nc_iz_i^2(t)+\sum_{i=1}^n\sum_{j=1}^n\frac{n}{\underline{c}_i}\hat b_{ij}^2g_j^2\bigl(z_j(t-\tau_{ij})\bigr),$$
(3.13)
$$\begin{aligned} \sum_{i=1}^n\sum_{j=1}^n2p_ib_{ij}g_i\bigl(z_i(t)\bigr)g_j\bigl(z_j(t-\tau_{ij})\bigr)\le{}&\rho_1p_M\sum_{i=1}^n\sum_{j=1}^n\Bigl(\alpha\hat b_{ij}^2g_i^2\bigl(z_i(t)\bigr)+\frac{1}{\alpha}g_j^2\bigl(z_j(t-\tau_{ij})\bigr)\Bigr)\\ &+\rho_2p_M\sum_{i=1}^n\sum_{j=1}^n\Bigl(\frac{1}{\beta}g_j^2\bigl(z_j(t)\bigr)+\beta\hat b_{ji}^2g_i^2\bigl(z_i(t-\tau_{ji})\bigr)\Bigr). \end{aligned}$$
(3.14)

For $g\in\mathcal{K}$, we have

$$-2\sum_{i=1}^np_ic_iz_i(t)g_i\bigl(z_i(t)\bigr)\le-2\sum_{i=1}^n\frac{p_ic_i}{k_i}g_i^2\bigl(z_i(t)\bigr)\le-2g^T\bigl(z(t)\bigr)\underline{C}PK^{-1}g\bigl(z(t)\bigr).$$
(3.15)

In the light of Lemma 2.2, we can write

$$\begin{aligned} \sum_{i=1}^n\sum_{j=1}^n2p_ia_{ij}g_i\bigl(z_i(t)\bigr)g_j\bigl(z_j(t)\bigr)&=g^T\bigl(z(t)\bigr)\bigl(PA+A^TP\bigr)g\bigl(z(t)\bigr)\\ &\le g^T\bigl(z(t)\bigr)\bigl(P(A^*-\Upsilon)+(A^*-\Upsilon)^TP\bigr)g\bigl(z(t)\bigr)\\ &\quad+\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr). \end{aligned}$$
(3.16)

Using (3.12)-(3.16) in (3.11) yields

$$\begin{aligned} \dot V\bigl(z(t)\bigr)\le{}&\sum_{i=1}^n\sum_{j=1}^n\frac{n}{c_m}\hat a_M^2g_j^2\bigl(z_j(t)\bigr)+\sum_{i=1}^n\sum_{j=1}^n\frac{n}{c_m}\hat b_M^2g_j^2\bigl(z_j(t)\bigr)-2\varepsilon g^T\bigl(z(t)\bigr)\underline{C}PK^{-1}g\bigl(z(t)\bigr)\\ &+\varepsilon g^T\bigl(z(t)\bigr)\bigl(P(A^*-\Upsilon)+(A^*-\Upsilon)^TP\bigr)g\bigl(z(t)\bigr)+\varepsilon\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)\\ &+\varepsilon\rho_1p_M\sum_{i=1}^n\sum_{j=1}^n\alpha r_{ij}g_i^2\bigl(z_i(t)\bigr)+\frac{\varepsilon\rho_1p_M}{\alpha}\sum_{i=1}^n\sum_{j=1}^ng_j^2\bigl(z_j(t)\bigr)+\frac{\varepsilon\rho_2p_M}{\beta}\sum_{i=1}^n\sum_{j=1}^ng_j^2\bigl(z_j(t)\bigr)\\ &+\varepsilon\rho_2p_M\sum_{i=1}^n\sum_{j=1}^n\beta r_{ji}g_i^2\bigl(z_i(t)\bigr)+\gamma\sum_{i=1}^n\sum_{j=1}^ng_j^2\bigl(z_j(t)\bigr)-\gamma\sum_{i=1}^n\sum_{j=1}^ng_j^2\bigl(z_j(t-\tau_{ij})\bigr), \end{aligned}$$
(3.17)

where $c_m=\min\{\underline{c}_i\}$, $\hat a_M=\max\{\hat a_{ij}\}$, and $\hat b_M=\max\{\hat b_{ij}\}$, with $\hat a_{ij}=\max\{|\underline{a}_{ij}|,|\overline{a}_{ij}|\}$. We now note the following inequalities:

$$\sum_{i=1}^n\sum_{j=1}^nr_{ji}g_i^2\bigl(z_i(t)\bigr)\le\|R\|_1\sum_{i=1}^ng_i^2\bigl(z_i(t)\bigr)=\|R\|_1g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr),$$
(3.18)
$$\sum_{i=1}^n\sum_{j=1}^nr_{ij}g_i^2\bigl(z_i(t)\bigr)\le\|R\|_\infty\sum_{i=1}^ng_i^2\bigl(z_i(t)\bigr)=\|R\|_\infty g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr),$$
(3.19)
$$\sum_{i=1}^n\sum_{j=1}^ng_j^2\bigl(z_j(t)\bigr)=n\,g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr).$$
(3.20)

Using (3.18)-(3.20) in (3.17) leads to

$$\begin{aligned} \dot V\bigl(z(t)\bigr)\le{}&\sum_{i=1}^n\sum_{j=1}^n\frac{n}{c_m}\hat a_M^2g_j^2\bigl(z_j(t)\bigr)+\sum_{i=1}^n\sum_{j=1}^n\frac{n}{c_m}\hat b_M^2g_j^2\bigl(z_j(t)\bigr)-2\varepsilon g^T\bigl(z(t)\bigr)\underline{C}PK^{-1}g\bigl(z(t)\bigr)\\ &+\gamma\sum_{i=1}^n\sum_{j=1}^ng_j^2\bigl(z_j(t)\bigr)+\varepsilon g^T\bigl(z(t)\bigr)\bigl(P(A^*-\Upsilon)+(A^*-\Upsilon)^TP\bigr)g\bigl(z(t)\bigr)\\ &+\varepsilon\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)+\varepsilon\rho_1p_M\alpha\|R\|_\infty g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)\\ &+\frac{\varepsilon\rho_1p_Mn}{\alpha}g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)+\frac{\varepsilon\rho_2p_Mn}{\beta}g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)+\varepsilon\rho_2p_M\beta\|R\|_1g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr). \end{aligned}$$
(3.21)

Let $\gamma=\frac{n}{c_m}(\hat a_M^2+\hat b_M^2)$, $\alpha=\frac{\sqrt{n}}{\sqrt{\|R\|_\infty}}$, and $\beta=\frac{\sqrt{n}}{\sqrt{\|R\|_1}}$. Then (3.21) takes the form

$$\begin{aligned} \dot V\bigl(z(t)\bigr)\le{}&\frac{2n^2}{c_m}\bigl(\hat a_M^2+\hat b_M^2\bigr)g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)-2\varepsilon g^T\bigl(z(t)\bigr)\underline{C}PK^{-1}g\bigl(z(t)\bigr)\\ &+\varepsilon g^T\bigl(z(t)\bigr)\bigl(P(A^*-\Upsilon)+(A^*-\Upsilon)^TP\bigr)g\bigl(z(t)\bigr)\\ &+\varepsilon\bigl\|P(A_*+\Upsilon)+(A_*+\Upsilon)^TP\bigr\|_2g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)\\ &+2\varepsilon\sqrt{n}\,\rho_1p_M\sqrt{\|R\|_\infty}g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)+2\varepsilon\sqrt{n}\,\rho_2p_M\sqrt{\|R\|_1}g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)\\ ={}&\frac{2n^2}{c_m}\bigl(\hat a_M^2+\hat b_M^2\bigr)g^T\bigl(z(t)\bigr)g\bigl(z(t)\bigr)-\varepsilon g^T\bigl(z(t)\bigr)\Pi g\bigl(z(t)\bigr)\\ \le{}&\frac{2n^2}{c_m}\bigl(\hat a_M^2+\hat b_M^2\bigr)\bigl\|g\bigl(z(t)\bigr)\bigr\|_2^2-\varepsilon\lambda_m(\Pi)\bigl\|g\bigl(z(t)\bigr)\bigr\|_2^2\\ ={}&\Bigl[\frac{2n^2}{c_m}\bigl(\hat a_M^2+\hat b_M^2\bigr)-\varepsilon\lambda_m(\Pi)\Bigr]\bigl\|g\bigl(z(t)\bigr)\bigr\|_2^2. \end{aligned}$$
(3.22)

It has been shown in [27] that, for the Lyapunov functional defined above, if its time derivative is of the form (3.22) and $\varepsilon>\frac{2n^2(\hat a_M^2+\hat b_M^2)}{c_m\lambda_m(\Pi)}$ with $\Pi$ positive definite, then the origin of system (3.10), or equivalently the equilibrium point of system (2.1), is globally asymptotically stable. Hence, we have shown that the condition of Theorem 3.1 implies the global robust asymptotic stability of system (2.1). □
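Since the condition of Theorem 3.1 involves only interval bounds, matrix norms, and a minimum-eigenvalue test, it can be checked mechanically. The sketch below does this for given bounds; the helper name check_Pi and its argument layout are our own choices, not the paper's.

```python
import numpy as np

# A sketch of the Pi > 0 test of Theorem 3.1; only the formula for Pi
# comes from the theorem statement, the interface is an assumption.
def check_Pi(C_low, A_low, A_up, B_low, B_up, K, P, Upsilon, rho1, rho2):
    n = A_low.shape[0]
    A_star = (A_up + A_low) / 2.0                     # A^*
    A_sub  = (A_up - A_low) / 2.0                     # A_*
    B_hat  = np.maximum(np.abs(B_low), np.abs(B_up))  # b-hat_ij
    R      = B_hat ** 2                               # r_ij = b-hat_ij^2
    R1   = np.abs(R).sum(axis=0).max()                # ||R||_1
    Rinf = np.abs(R).sum(axis=1).max()                # ||R||_inf
    pM   = P.diagonal().max()
    M1 = P @ (A_star - Upsilon) + (A_star - Upsilon).T @ P
    M2 = P @ (A_sub + Upsilon) + (A_sub + Upsilon).T @ P
    Pi = (2 * np.diag(C_low) @ P @ np.linalg.inv(K)
          - M1
          - np.linalg.norm(M2, 2) * np.eye(n)
          - 2 * np.sqrt(n) * pM * (rho1 * np.sqrt(Rinf)
                                   + rho2 * np.sqrt(R1)) * np.eye(n))
    return np.linalg.eigvalsh(Pi).min() > 0           # Pi positive definite?
```

A search over the free parameters $P$, $\Upsilon$, and $(\rho_1,\rho_2)\in\{(1,0),(0,1)\}$ can then be wrapped around this test.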

4 Comparison and examples

In this section, we present some constructive numerical examples to demonstrate the effectiveness and applicability of the proposed condition and to show the advantages of our result over the corresponding robust stability results previously derived in the literature. In order to make a precise comparison between the results, we first restate the relevant results from the previous literature.

Theorem 4.1 (See [27])

For the neural network model (2.1), assume that $f\in\mathcal{K}$ and the network parameters satisfy (2.2). Then the neural network model (2.1) is globally asymptotically robust stable if there exists a positive diagonal matrix $P=\operatorname{diag}(p_i>0)$ such that

$$\Lambda=2\underline{C}PK^{-1}-S-2\sqrt{n}\,p_M\bigl(\rho_1\sqrt{\|R\|_\infty}+\rho_2\sqrt{\|R\|_1}\bigr)I>0,$$

where $K=\operatorname{diag}(k_i>0)$ is a positive diagonal matrix, $S=(s_{ij})_{n\times n}$ with $s_{ii}=2p_i\overline{a}_{ii}$ and $s_{ij}=\max(|p_i\overline{a}_{ij}+p_j\overline{a}_{ji}|,|p_i\underline{a}_{ij}+p_j\underline{a}_{ji}|)$ for $i\ne j$, $p_M=\max(p_i)$, $R=(r_{ij})_{n\times n}$ with $r_{ij}=\hat b_{ij}^2$, and $\hat b_{ij}=\max\{|\underline{b}_{ij}|,|\overline{b}_{ij}|\}$, and $\rho_1$ and $\rho_2$ are nonnegative constants such that $\rho_1+\rho_2=1$ and $\rho_1\rho_2=0$.
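For the comparisons below, the $\Lambda$ test of Theorem 4.1 can be coded in the same style as check_Pi, building $S$ entry by entry from its definition; the helper name check_Lambda is again ours, not the paper's.

```python
import numpy as np

# A sketch of the Lambda > 0 test of Theorem 4.1 (helper name assumed).
def check_Lambda(C_low, A_low, A_up, B_low, B_up, K, P, rho1, rho2):
    n = A_low.shape[0]
    p = P.diagonal()
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                S[i, i] = 2 * p[i] * A_up[i, i]          # s_ii = 2 p_i a-bar_ii
            else:
                S[i, j] = max(abs(p[i] * A_up[i, j] + p[j] * A_up[j, i]),
                              abs(p[i] * A_low[i, j] + p[j] * A_low[j, i]))
    B_hat = np.maximum(np.abs(B_low), np.abs(B_up))
    R = B_hat ** 2
    R1, Rinf = np.abs(R).sum(axis=0).max(), np.abs(R).sum(axis=1).max()
    Lam = (2 * np.diag(C_low) @ P @ np.linalg.inv(K) - S
           - 2 * np.sqrt(n) * p.max() * (rho1 * np.sqrt(Rinf)
                                         + rho2 * np.sqrt(R1)) * np.eye(n))
    return np.linalg.eigvalsh(Lam).min() > 0
```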

Theorem 4.2 (See [27])

For the neural network model (2.1), assume that $f\in\mathcal{K}$ and the network parameters satisfy (2.2). Then the neural network model (2.1) is globally asymptotically robust stable if there exists a positive diagonal matrix $P=\operatorname{diag}(p_i>0)$ such that

$$\Gamma=2\underline{C}PK^{-1}-PA^*-A^{*T}P-\bigl\|PA_*+A_*^TP\bigr\|_2I-2\sqrt{n}\,p_M\bigl(\rho_1\sqrt{\|R\|_\infty}+\rho_2\sqrt{\|R\|_1}\bigr)I>0,$$

where $K=\operatorname{diag}(k_i>0)$ is a positive diagonal matrix, $p_M=\max(p_i)$, $R=(r_{ij})_{n\times n}$ with $r_{ij}=\hat b_{ij}^2$, and $\hat b_{ij}=\max\{|\underline{b}_{ij}|,|\overline{b}_{ij}|\}$; $\rho_1$ and $\rho_2$ are nonnegative constants such that $\rho_1+\rho_2=1$ and $\rho_1\rho_2=0$; and $A^*=\frac12(\overline{A}+\underline{A})$, $A_*=\frac12(\overline{A}-\underline{A})$.

Theorem 4.3 (See [30])

For the neural network model (2.1), assume that $f\in\mathcal{K}$ and the network parameters satisfy (2.2). Then the neural network model (2.1) is globally asymptotically robust stable if there exist positive constants $\alpha_i$, $i=1,2,\ldots,n$, such that

$$\alpha_i(\underline{c}_i-\mu_i\overline{a}_{ii})-\sum_{\substack{j=1\\ j\ne i}}^n\alpha_j\hat a_{ji}-\sum_{j=1}^n\alpha_j\hat b_{ji}>0,\quad i=1,2,\ldots,n,$$

where $\hat a_{ji}=\max\{|\underline{a}_{ji}|,|\overline{a}_{ji}|\}$ and $\hat b_{ji}=\max\{|\underline{b}_{ji}|,|\overline{b}_{ji}|\}$, and the $\mu_i$ denote the slope constants of the activation functions (for the class $\mathcal{K}$, $\mu_i=k_i$).

We will now consider the following examples.

Example 4.1 Consider the neural system (2.1) with the following network parameters:

$$\underline{A}=\begin{bmatrix}1&-1&-1&-1\\0&1&0&-1\\-2&-1&1&-1\\-2&-2&0&1\end{bmatrix},\qquad \overline{A}=\begin{bmatrix}1&1&1&1\\2&1&2&1\\0&1&1&1\\0&0&2&1\end{bmatrix},$$

$$\underline{B}=\begin{bmatrix}-1&-1&-1&-1\\-2&-2&-2&-2\\-2&-2&-2&-2\\-4&-4&-4&-4\end{bmatrix},\qquad \overline{B}=\begin{bmatrix}1&1&1&1\\2&2&2&2\\2&2&2&2\\4&4&4&4\end{bmatrix},$$

$$k_1=k_2=k_3=k_4=1\quad\text{and}\quad\underline{c}_1=\underline{c}_2=\underline{c}_3=\underline{c}_4=c_m,$$

from which we obtain the following matrices:

$$A^*=\begin{bmatrix}1&0&0&0\\1&1&1&0\\-1&0&1&0\\-1&-1&1&1\end{bmatrix},\qquad A_*=\begin{bmatrix}0&1&1&1\\1&0&1&1\\1&1&0&1\\1&1&1&0\end{bmatrix},$$

$$\hat B=\begin{bmatrix}1&1&1&1\\2&2&2&2\\2&2&2&2\\4&4&4&4\end{bmatrix},\qquad R=\begin{bmatrix}1&1&1&1\\4&4&4&4\\4&4&4&4\\16&16&16&16\end{bmatrix}.$$

We note that $\|R\|_1=25$ and $\|R\|_\infty=64$. We consider a special case of Theorem 4.1 where $P=I$, in which case the matrix $S$ in Theorem 4.1 is of the form

$$S=\begin{bmatrix}2&3&3&3\\3&2&3&3\\3&3&2&3\\3&3&3&2\end{bmatrix}.$$

Then, for $\rho_1=0$ and $\rho_2=1$, $\Lambda$ in Theorem 4.1 is obtained as follows:

$$\Lambda=2c_mI-S-2\sqrt{n}\sqrt{\|R\|_1}\,I=2\begin{bmatrix}c_m-11&-1.5&-1.5&-1.5\\-1.5&c_m-11&-1.5&-1.5\\-1.5&-1.5&c_m-11&-1.5\\-1.5&-1.5&-1.5&c_m-11\end{bmatrix}.$$

Note that $\Lambda>0$ if and only if $c_m>15.5$. Hence, the robust stability condition imposed by Theorem 4.1 is $c_m>15.5$. For the same network parameters of this example, we will obtain the robust stability condition imposed by Theorem 3.1. Let $P=I$ and

$$\Upsilon=\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}.$$

Then we obtain

$$P(A^*-\Upsilon)+(A^*-\Upsilon)^TP=\begin{bmatrix}0&1&-1&-1\\1&0&1&-1\\-1&1&0&1\\-1&-1&1&0\end{bmatrix},\qquad P(A_*+\Upsilon)+(A_*+\Upsilon)^TP=\begin{bmatrix}2&2&2&2\\2&2&2&2\\2&2&2&2\\2&2&2&2\end{bmatrix}.$$

Π in Theorem 3.1 takes the form

$$\Pi=\begin{bmatrix}2c_m-28&-1&1&1\\-1&2c_m-28&-1&1\\1&-1&2c_m-28&-1\\1&1&-1&2c_m-28\end{bmatrix}.$$

It can be calculated that $\Pi>0$ if and only if $c_m>\frac{28+\sqrt{5}}{2}\approx 15.12$. Therefore, Theorem 3.1 imposes a less restrictive stability condition on the network parameters than Theorem 4.1 does.
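These two thresholds can be confirmed numerically with the check_Pi and check_Lambda sketches given earlier (both helper names are ours); at $c_m=15.4$, for instance, the condition of Theorem 3.1 already holds while that of Theorem 4.1 still fails.

```python
import numpy as np

# Numerical confirmation of Example 4.1 (assumes check_Pi and check_Lambda
# from the sketches above are in scope).
A_low = np.array([[ 1., -1., -1., -1.],
                  [ 0.,  1.,  0., -1.],
                  [-2., -1.,  1., -1.],
                  [-2., -2.,  0.,  1.]])
A_up  = np.array([[1., 1., 1., 1.],
                  [2., 1., 2., 1.],
                  [0., 1., 1., 1.],
                  [0., 0., 2., 1.]])
B_up  = np.array([[1., 1., 1., 1.],
                  [2., 2., 2., 2.],
                  [2., 2., 2., 2.],
                  [4., 4., 4., 4.]])
I4 = np.eye(4)
for cm in (15.4, 15.6):
    args = (cm * np.ones(4), A_low, A_up, -B_up, B_up, I4, I4)
    print(cm,
          check_Pi(*args, Upsilon=I4, rho1=0.0, rho2=1.0),   # holds for c_m > 15.12
          check_Lambda(*args, rho1=0.0, rho2=1.0))           # holds for c_m > 15.5
# expected: 15.4 True False / 15.6 True True
```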

Example 4.2 Consider the neural system (2.1) with the following network parameters:

$$\underline{A}=\begin{bmatrix}-2&-1&-1&-1\\-1&0&-1&-1\\-1&-1&0&-1\\-1&-1&-1&0\end{bmatrix},\qquad \overline{A}=\begin{bmatrix}0&1&1&1\\1&0&1&1\\1&1&0&1\\1&1&1&0\end{bmatrix},$$

$$\underline{B}=\begin{bmatrix}-1&-1&-1&-1\\-1&-1&-1&-1\\-1&-1&-1&-1\\-1&-1&-1&-1\end{bmatrix},\qquad \overline{B}=\begin{bmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{bmatrix},$$

$$k_1=k_2=k_3=k_4=1,\quad\text{and}\quad\underline{c}_1=\underline{c}_2=\underline{c}_3=\underline{c}_4=c_m.$$

From the above matrices, we can obtain the following matrices:

$$A^*=\begin{bmatrix}-1&0&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{bmatrix},\qquad A_*=\begin{bmatrix}1&1&1&1\\1&0&1&1\\1&1&0&1\\1&1&1&0\end{bmatrix},\qquad \hat B=\begin{bmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{bmatrix},\qquad R=\begin{bmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{bmatrix},$$

from which we can calculate the norms $\|R\|_1=4$ and $\|R\|_\infty=4$. Let $P=I$ and

$$\Upsilon=\begin{bmatrix}0&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}.$$

Then we can write

$$P(A^*-\Upsilon)+(A^*-\Upsilon)^TP=\begin{bmatrix}-2&0&0&0\\0&-2&0&0\\0&0&-2&0\\0&0&0&-2\end{bmatrix},\qquad P(A_*+\Upsilon)+(A_*+\Upsilon)^TP=\begin{bmatrix}2&2&2&2\\2&2&2&2\\2&2&2&2\\2&2&2&2\end{bmatrix}.$$

For $\rho_1=0$ and $\rho_2=1$, the matrix $\Pi$ in Theorem 3.1 is in the form

$$\Pi=2\begin{bmatrix}c_m-7&0&0&0\\0&c_m-7&0&0\\0&0&c_m-7&0\\0&0&0&c_m-7\end{bmatrix}.$$

The condition $\Pi>0$ is satisfied if and only if $c_m>7$. For the parameters of this example, $\Gamma$ in Theorem 4.2 is in the form

$$\Gamma=2c_mI-A^*-A^{*T}-\bigl\|A_*+A_*^T\bigr\|_2I-4\sqrt{\|R\|_1}\,I=2\begin{bmatrix}c_m-6.3&0&0&0\\0&c_m-7.3&0&0\\0&0&c_m-7.3&0\\0&0&0&c_m-7.3\end{bmatrix}.$$

The choice $c_m>7.3$ implies that $\Gamma>0$, which ensures the global robust stability of neural system (2.1). Hence, for the network parameters of this example, if $7<c_m\le 7.3$, then the result of Theorem 4.2 does not hold, whereas the result of Theorem 3.1 is still applicable.
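Numerically, Theorem 4.2's $\Gamma$ coincides with $\Pi$ evaluated at $\Upsilon=0$, so the check_Pi sketch from Section 3 covers both tests; the run below reproduces the gap $7<c_m\le 7.3$.

```python
import numpy as np

# Example 4.2 check (assumes check_Pi from the earlier sketch is in scope).
A_low = np.array([[-2., -1., -1., -1.],
                  [-1.,  0., -1., -1.],
                  [-1., -1.,  0., -1.],
                  [-1., -1., -1.,  0.]])
A_up  = np.array([[0., 1., 1., 1.],
                  [1., 0., 1., 1.],
                  [1., 1., 0., 1.],
                  [1., 1., 1., 0.]])
B_up  = np.ones((4, 4))
I4  = np.eye(4)
Ups = np.diag([0., 1., 1., 1.])
for cm in (7.1, 7.4):
    args = (cm * np.ones(4), A_low, A_up, -B_up, B_up, I4, I4)
    print(cm,
          check_Pi(*args, Upsilon=Ups, rho1=0.0, rho2=1.0),               # Pi test
          check_Pi(*args, Upsilon=np.zeros((4, 4)), rho1=0.0, rho2=1.0))  # Gamma test
# expected: 7.1 True False / 7.4 True True
```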

Example 4.3 Assume that the network parameters of neural system (2.1) are given as follows:

$$\underline{A}=\begin{bmatrix}1&-1&-1&-1\\-3&1&-1&-1\\-3&-3&1&-1\\-3&-3&-3&1\end{bmatrix},\qquad \overline{A}=\begin{bmatrix}3&3&3&3\\1&3&3&3\\1&1&3&3\\1&1&1&3\end{bmatrix},$$

$$\underline{B}=\begin{bmatrix}-1&-1&-1&-1\\-1&-1&-1&-1\\-1&-1&-1&-1\\-1&-1&-1&-1\end{bmatrix},\qquad \overline{B}=\begin{bmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{bmatrix},$$

$$k_1=k_2=k_3=k_4=1,\quad\text{and}\quad\underline{c}_1=\underline{c}_2=\underline{c}_3=\underline{c}_4=c_m.$$

For the matrices given above, we can obtain the following matrices:

$$A^*=\begin{bmatrix}2&1&1&1\\-1&2&1&1\\-1&-1&2&1\\-1&-1&-1&2\end{bmatrix},\qquad A_*=\begin{bmatrix}1&2&2&2\\2&1&2&2\\2&2&1&2\\2&2&2&1\end{bmatrix},\qquad \hat A=\begin{bmatrix}3&3&3&3\\3&3&3&3\\3&3&3&3\\3&3&3&3\end{bmatrix},$$

$$\hat B=\begin{bmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{bmatrix},\qquad R=\begin{bmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{bmatrix},$$

from which we calculate $\|R\|_1=4$ and $\|R\|_\infty=4$. Let $P=I$ and

$$\Upsilon=\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}.$$

We have

$$P(A^*-\Upsilon)+(A^*-\Upsilon)^TP=\begin{bmatrix}2&0&0&0\\0&2&0&0\\0&0&2&0\\0&0&0&2\end{bmatrix},\qquad P(A_*+\Upsilon)+(A_*+\Upsilon)^TP=\begin{bmatrix}4&4&4&4\\4&4&4&4\\4&4&4&4\\4&4&4&4\end{bmatrix}.$$

For $\rho_1=0$ and $\rho_2=1$, the matrix $\Pi$ in Theorem 3.1 is of the form

$$\Pi=2\begin{bmatrix}c_m-13&0&0&0\\0&c_m-13&0&0\\0&0&c_m-13&0\\0&0&0&c_m-13\end{bmatrix}.$$

Clearly, $\Pi>0$ holds if and only if $c_m>13$. Therefore, for this example, Theorem 3.1 ensures the global robust stability of neural system (2.1) under the condition $c_m>13$.

When checking the condition of Theorem 4.3 for the same network parameters of this example, we search for positive constants $\alpha_1$, $\alpha_2$, $\alpha_3$, and $\alpha_4$ such that the following conditions hold:

$$\begin{aligned} c_m\alpha_1-4\alpha_1-4\alpha_2-4\alpha_3-4\alpha_4&>0,\\ -4\alpha_1+c_m\alpha_2-4\alpha_2-4\alpha_3-4\alpha_4&>0,\\ -4\alpha_1-4\alpha_2+c_m\alpha_3-4\alpha_3-4\alpha_4&>0,\\ -4\alpha_1-4\alpha_2-4\alpha_3+c_m\alpha_4-4\alpha_4&>0, \end{aligned}$$

which can be written in the form

$$\begin{bmatrix}c_m-4&-4&-4&-4\\-4&c_m-4&-4&-4\\-4&-4&c_m-4&-4\\-4&-4&-4&c_m-4\end{bmatrix}\begin{bmatrix}\alpha_1\\\alpha_2\\\alpha_3\\\alpha_4\end{bmatrix}>0.$$

From the properties of nonsingular M-matrices [44], positive constants $\alpha_1$, $\alpha_2$, $\alpha_3$, and $\alpha_4$ satisfying the above inequality exist if and only if the symmetric matrix in the above inequality is positive definite, which holds if and only if $c_m>16$. Obviously, for the interval $13<c_m\le 16$, our condition obtained in Theorem 3.1 is satisfied, but the result of Theorem 4.3 does not hold.
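The following sketch, again reusing check_Pi from Section 3, confirms both thresholds of this comparison numerically; the matrix in the M-matrix test is entered directly from the display above.

```python
import numpy as np

# Example 4.3 check: Theorem 3.1 needs c_m > 13, Theorem 4.3 needs c_m > 16.
A_low = np.array([[ 1., -1., -1., -1.],
                  [-3.,  1., -1., -1.],
                  [-3., -3.,  1., -1.],
                  [-3., -3., -3.,  1.]])
A_up  = np.array([[3., 3., 3., 3.],
                  [1., 3., 3., 3.],
                  [1., 1., 3., 3.],
                  [1., 1., 1., 3.]])
B_up = np.ones((4, 4))
I4 = np.eye(4)
for cm in (14.0, 17.0):
    thm31 = check_Pi(cm * np.ones(4), A_low, A_up, -B_up, B_up,
                     I4, I4, I4, rho1=0.0, rho2=1.0)
    M = (cm - 4.0) * I4 - 4.0 * (np.ones((4, 4)) - I4)  # matrix from the text
    thm43 = np.linalg.eigvalsh(M).min() > 0
    print(cm, thm31, thm43)   # expected: 14.0 True False / 17.0 True True
```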

5 Conclusions

In this paper, we have focused on the existence, uniqueness, and global robust stability of the equilibrium point for neural networks with multiple time delays with respect to the class of nondecreasing activation functions. We have employed a suitable Lyapunov functional and made use of the homeomorphism mapping theorem to derive a new delay-independent robust stability condition for dynamical neural networks with multiple time delays. The obtained condition basically establishes a relationship between the network parameters of the neural system and the number of neurons. We have also presented some numerical examples, which enabled us to show the advantages of our result over previously reported robust stability results. We should point out here that in the neural network model we have considered, the delay parameters are constant and the stability condition we obtain is delay independent. However, it is possible to derive some delay-dependent stability conditions for the same neural network model by employing different classes of Lyapunov functionals.

References

  1. Arik, S: New criteria for global robust stability of delayed neural networks with norm-bounded uncertainties. IEEE Trans. Neural Netw. Learn. Syst. (in press). doi:10.1109/TNNLS.2013.2287279

  2. Shen Y, Wang J: Robustness analysis of global exponential stability of recurrent neural networks in the presence of time delays and random disturbances. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23: 87-96.


  3. Huang TW, Li CD, Duan SK, Starzyk A: Robust exponential stability of uncertain delayed neural networks with stochastic perturbation and impulse effects. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23: 866-875.


  4. Sakthivel R, Raja R, Anthoni SM: Exponential stability for delayed stochastic bidirectional associative memory neural networks with Markovian jumping and impulses. J. Optim. Theory Appl. 2011, 150: 166-187. 10.1007/s10957-011-9808-4


  5. Shen H, Huang X, Zhou J, Wang Z: Global exponential estimates for uncertain Markovian jump neural networks with reaction-diffusion terms. Nonlinear Dyn. 2012, 69: 473-486. 10.1007/s11071-011-0278-x


  6. Wu ZG, Park JH, Su H, Chu J: Robust dissipativity analysis of neural networks with time-varying delay and randomly occurring uncertainties. Nonlinear Dyn. 2012, 69: 1323-1332. 10.1007/s11071-012-0350-1


  7. Wu ZG, Park JH, Su H, Chu J: Stochastic stability analysis of piecewise homogeneous Markovian jump neural networks with mixed time-delays. J. Franklin Inst. 2012, 349: 2136-2150. 10.1016/j.jfranklin.2012.03.005


  8. Guo Z, Huang L: LMI conditions for global robust stability of delayed neural networks with discontinuous neuron activations. Appl. Math. Comput. 2009, 215: 889-900. 10.1016/j.amc.2009.06.013


  9. Zhang Z, Zhou D: Global robust exponential stability for second-order Cohen-Grossberg neural networks with multiple delays. Neurocomputing 2009, 73: 213-218. 10.1016/j.neucom.2009.09.003


  10. Han W, Liu Y, Wang L: Robust exponential stability of Markovian jumping neural networks with mode-dependent delay. Commun. Nonlinear Sci. Numer. Simul. 2010, 15: 2529-2535. 10.1016/j.cnsns.2009.09.024


  11. Yuan Y, Li X: New results for global robust asymptotic stability of BAM neural networks with time-varying delays. Neurocomputing 2010, 74: 337-342. 10.1016/j.neucom.2010.03.007


  12. Wang Z, Liu Y, Liu X, Shi Y: Robust state estimation for discrete-time stochastic neural networks with probabilistic measurement delays. Neurocomputing 2010, 74: 256-264. 10.1016/j.neucom.2010.03.013


  13. Zhang Z, Yang Y, Huang Y: Global exponential stability of interval general BAM neural networks with reaction-diffusion terms and multiple time-varying delays. Neural Netw. 2011, 24: 457-465. 10.1016/j.neunet.2011.02.003


  14. Kao Y, Wang C: Global stability analysis for stochastic coupled reaction-diffusion systems on networks. Nonlinear Anal., Real World Appl. 2013, 14: 1457-1465. 10.1016/j.nonrwa.2012.10.008


  15. Muralisankar S, Gopalakrishnan N, Balasubramaniam P: An LMI approach for global robust dissipativity analysis of T-S fuzzy neural networks with interval time-varying delays. Expert Syst. Appl. 2012, 39: 3345-3355. 10.1016/j.eswa.2011.09.021


  16. Shao JL, Huang TZ, Wang XP: Further analysis on global robust exponential stability of neural networks with time-varying delays. Commun. Nonlinear Sci. Numer. Simul. 2012, 17: 1117-1124. 10.1016/j.cnsns.2011.08.022


  17. Lou X, Ye Q, Cui B: Parameter-dependent robust stability of uncertain neural networks with time-varying delay. J. Franklin Inst. 2012, 349: 1891-1903. 10.1016/j.jfranklin.2012.02.015


  18. Raja R, Samidurai R: New delay dependent robust asymptotic stability for uncertain stochastic recurrent neural networks with multiple time varying delays. J. Franklin Inst. 2012, 349: 2108-2123. 10.1016/j.jfranklin.2012.03.007


  19. Kao YG, Guo JF, Wang CH, Sun XQ: Delay-dependent robust exponential stability of Markovian jumping reaction-diffusion Cohen-Grossberg neural networks with mixed delays. J. Franklin Inst. 2012, 349: 1972-1988. 10.1016/j.jfranklin.2012.04.005


  20. Zhang H, Liu Z, Huang GB: Novel delay-dependent robust stability analysis for switched neutral-type neural networks with time-varying delays via SC technique. IEEE Trans. Syst. Man Cybern., Part B, Cybern. 2010, 40: 1480-1491.


  21. Huang Z, Li X, Mohamad S, Lu Z: Robust stability analysis of static neural network with S-type distributed delays. Appl. Math. Model. 2009, 33: 760-769. 10.1016/j.apm.2007.12.006


  22. Luo M, Zhong S, Wang R, Kang W: Robust stability analysis for discrete-time stochastic neural networks systems with time-varying delays. Appl. Math. Comput. 2009, 209: 305-313. 10.1016/j.amc.2008.12.084


  23. Balasubramaniam P, Ali MS: Robust stability of uncertain fuzzy cellular neural networks with time-varying delays and reaction diffusion terms. Neurocomputing 2010, 74: 439-446. 10.1016/j.neucom.2010.08.014


  24. Wang Z, Liu Y, Liu X, Shi Y: Robust state estimation for discrete-time stochastic neural networks with probabilistic measurement delays. Neurocomputing 2010, 74: 256-264. 10.1016/j.neucom.2010.03.013


  25. Mahmoud MS, Ismail A: Improved results on robust exponential stability criteria for neutral-type delayed neural networks. Appl. Math. Comput. 2010, 217: 3011-3019. 10.1016/j.amc.2010.08.034


  26. Shao JL, Huang TZ, Wang XP: Improved global robust exponential stability criteria for interval neural networks with time-varying delays. Expert Syst. Appl. 2011, 38: 15587-15593. 10.1016/j.eswa.2011.05.066


  27. Faydasicok O, Arik S: An analysis of stability of uncertain neural networks with multiple time delays. J. Franklin Inst. 2013, 350: 1808-1826. 10.1016/j.jfranklin.2013.05.006


  28. Chen A, Cao J, Huang L: Global robust stability of interval cellular neural networks with time-varying delays. Chaos Solitons Fractals 2005, 23: 787-799. 10.1016/j.chaos.2004.05.029


  29. Faydasicok O, Arik S: Equilibrium and stability analysis of delayed neural networks under parameter uncertainties. Appl. Math. Comput. 2012, 218: 6716-6726. 10.1016/j.amc.2011.12.036


  30. Sun C, Feng CB: Global robust exponential stability of interval neural networks with delays. Neural Process. Lett. 2003, 17: 107-115. 10.1023/A:1022999219879


  31. Liu B: Global exponential stability for BAM neural networks with time-varying delays in the leakage terms. Nonlinear Anal., Real World Appl. 2013, 14: 559-566. 10.1016/j.nonrwa.2012.07.016


  32. Zhang H, Shao J: Almost periodic solutions for cellular neural networks with time-varying delays in leakage terms. Appl. Math. Comput. 2013, 219: 11471-11482. 10.1016/j.amc.2013.05.046


  33. Feng Z, Lam J: Stability and dissipativity analysis of distributed delay cellular neural networks. IEEE Trans. Neural Netw. 2011, 22: 976-981.


  34. Berezansky L, Bastinec J, Diblik J, Smarda Z: On a delay population model with quadratic nonlinearity. Adv. Differ. Equ. 2012., 2012: Article ID 230 10.1186/1687-1847-2012-230


  35. Diblik J, Khusainov DY, Grytsay IV, Smarda Z: Stability of nonlinear autonomous quadratic discrete systems in the critical case. Discrete Dyn. Nat. Soc. 2010., 2010: Article ID 539087 10.1155/2010/539087


  36. Diblik J, Koksch N: Sufficient conditions for the existence of global solutions of delayed differential equations. J. Math. Anal. Appl. 2006, 318: 611-625. 10.1016/j.jmaa.2005.06.020


  37. Jia R, Yang M: Convergence for HRNNs with unbounded activation functions and time-varying delays in the leakage terms. Neural Process. Lett. 2014, 39: 69-79. 10.1007/s11063-013-9290-0


  38. Liu B, Gong S: Periodic solution for impulsive cellar neural networks with time-varying delays in the leakage terms. Abstr. Appl. Anal. 2013., 2013: Article ID 701087


  39. Liu B, Shao J: Almost periodic solutions for SICNNs with time-varying delays in the leakage terms. J. Inequal. Appl. 2013., 2013: Article ID 494


  40. Peng L, Wang W: Anti-periodic solutions for shunting inhibitory cellular neural networks with time-varying delays in leakage terms. Neurocomputing 2013, 111: 27-33.


  41. Xiong W, Meng J: Exponential convergence for cellular neural networks with continuously distributed delays in the leakage terms. Electron. J. Qual. Theory Differ. Equ. 2013, 10: 1-12.


  42. Yu Y, Jiao W: New results on exponential convergence for HRNNs with continuously distributed delays in the leakage terms. Neural Process. Lett. 2013, 37. 10.1007/s11063-013-9296-7


  43. Zhang H, Shao J: Almost periodic solutions for cellular neural networks with time-varying delays in leakage terms. Appl. Math. Comput. 2013, 219: 11471-11482. 10.1016/j.amc.2013.05.046


  44. Horn RA, Johnson CR: Topics in Matrix Analysis. Cambridge University Press, Cambridge; 1991.



Author information


Corresponding author

Correspondence to Sabri Arik.

Additional information

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Arik, S. Further analysis of stability of uncertain neural networks with multiple time delays. Adv Differ Equ 2014, 41 (2014). https://doi.org/10.1186/1687-1847-2014-41

