
A switching rule for exponential stability of switched recurrent neural networks with interval time-varying delay

Advances in Difference Equations 2013, 2013:44

https://doi.org/10.1186/1687-1847-2013-44

Received: 22 September 2012

Accepted: 17 January 2013

Published: 28 February 2013

Abstract

This paper studies the problem of exponential stability for switched recurrent neural networks with interval time-varying delay. The time delay is a continuous function belonging to a given interval, but it is not necessarily differentiable. By constructing a set of augmented Lyapunov-Krasovskii functionals combined with the Newton-Leibniz formula, a switching rule for the exponential stability of switched recurrent neural networks with interval time-varying delay is designed, and new sufficient conditions for this exponential stability are derived in terms of linear matrix inequalities (LMIs). A numerical example is given to illustrate the effectiveness of the obtained result.

Keywords

neural networks; switching design; exponential stability; interval time-varying delays; Lyapunov function; linear matrix inequalities

1 Introduction

In recent years, neural networks (especially recurrent neural networks, Hopfield neural networks, and cellular neural networks) have been successfully applied in many areas such as signal processing, image processing, pattern recognition, fault diagnosis, associative memory, and combinatorial optimization; see, for example, [1–6]. One of the most important tasks in these applications is to study the stability of the equilibrium point of neural networks. A major purpose is to find stability conditions, i.e., conditions under which the equilibrium point of the network is stable. The stability and control of recurrent neural networks with time delay have attracted considerable attention in recent years [1–10]. In many practical systems, it is desirable to design neural networks which are not only asymptotically or exponentially stable but can also guarantee an adequate level of system performance. In the areas of control, signal processing, pattern recognition, and image processing, delayed neural networks have many useful applications. Some of these applications require the equilibrium points of the designed network to be stable. In both biological and artificial neural systems, time delays due to integration and communication are ubiquitous and often become a source of instability. The time delays in electronic neural networks are usually time-varying and sometimes vary violently with time due to the finite switching speed of amplifiers and faults in the electrical circuitry. The Lyapunov-Krasovskii functional technique has been among the most popular and effective tools in the design of guaranteed cost controls for neural networks with time delay. Nevertheless, despite the diversity of results available, most existing works assume that the time delays are constant or differentiable [9–14]. To the best of our knowledge, a switching rule and exponential stability for switched recurrent neural networks with interval, non-differentiable time-varying delay have not been fully studied yet (see, e.g., [4–9, 13–25] and the references therein), although they are important in both theory and applications. This motivates our research.

In this paper, we investigate the exponential stability problem for switched recurrent neural networks. The novel features here are that the delayed neural network under consideration has various globally Lipschitz continuous activation functions, and the time-varying delay function is interval and non-differentiable. Based on the construction of a set of augmented Lyapunov-Krasovskii functionals combined with the Newton-Leibniz formula, a switching rule for exponential stability of switched recurrent neural networks with interval time-varying delay is designed, and new delay-dependent exponential stability criteria for such networks are established in terms of LMIs. These criteria allow simultaneous computation of two bounds that characterize the exponential stability rate of the solution and can be easily determined by utilizing MATLAB's LMI control toolbox.

The outline of the paper is as follows. Section 2 presents definitions and some well-known technical propositions needed for the proof of the main result. Delay-dependent LMI criteria for the exponential stability of switched recurrent neural networks with interval time-varying delay, a switching rule guaranteeing this stability, and a numerical example showing the effectiveness of the result are presented in Section 3. The paper ends with conclusions and cited references.

2 Preliminaries

The following notations will be used in this paper. $\mathbb{R}^{+}$ denotes the set of all real non-negative numbers; $\mathbb{R}^{n}$ denotes the $n$-dimensional space with the scalar product $\langle x, y\rangle$ or $x^{T}y$ of two vectors $x$, $y$ and the vector norm $\|\cdot\|$; $M^{n\times r}$ denotes the space of all matrices of $(n\times r)$-dimension. $A^{T}$ denotes the transpose of matrix $A$; $A$ is symmetric if $A = A^{T}$; $I$ denotes the identity matrix; $\lambda(A)$ denotes the set of all eigenvalues of $A$; $\lambda_{\max}(A) = \max\{\operatorname{Re}\lambda : \lambda \in \lambda(A)\}$, $\lambda_{\min}(A) = \min\{\operatorname{Re}\lambda : \lambda \in \lambda(A)\}$. $x_{t} := \{x(t+s) : s \in [-h_{1}, 0]\}$, $\|x_{t}\| = \sup_{s\in[-h_{1},0]}\|x(t+s)\|$; $C^{1}([0,t],\mathbb{R}^{n})$ denotes the set of all $\mathbb{R}^{n}$-valued continuously differentiable functions on $[0,t]$; $L_{2}([0,t],\mathbb{R}^{m})$ denotes the set of all $\mathbb{R}^{m}$-valued square integrable functions on $[0,t]$.

Matrix $A$ is called semi-positive definite ($A \ge 0$) if $\langle Ax, x\rangle \ge 0$ for all $x \in \mathbb{R}^{n}$; $A$ is positive definite ($A > 0$) if $\langle Ax, x\rangle > 0$ for all $x \ne 0$; $A > B$ means $A - B > 0$. The notation $\operatorname{diag}\{\cdots\}$ stands for a block-diagonal matrix. The symmetric term in a matrix is denoted by $*$.

Consider the following switched recurrent neural networks with interval time-varying delay:
$$\begin{aligned}
\dot{x}(t) &= -A_{\gamma(x(t))}x(t) + W_{0\gamma(x(t))}f(x(t)) + W_{1\gamma(x(t))}g(x(t-h(t))), \quad t \ge 0,\\
x(t) &= \phi(t), \quad t \in [-h_{1}, 0],
\end{aligned} \tag{2.1}$$
where $x(t) = [x_{1}(t), x_{2}(t), \ldots, x_{n}(t)]^{T} \in \mathbb{R}^{n}$ is the state of the network, $n$ is the number of neurons, and
$$f(x(t)) = \big[f_{1}(x_{1}(t)), f_{2}(x_{2}(t)), \ldots, f_{n}(x_{n}(t))\big]^{T}, \qquad g(x(t)) = \big[g_{1}(x_{1}(t)), g_{2}(x_{2}(t)), \ldots, g_{n}(x_{n}(t))\big]^{T}$$

are the activation functions; $\gamma(\cdot): \mathbb{R}^{n} \to \mathcal{N} := \{1, 2, \ldots, N\}$ is the switching rule, which is a function depending on the state at each time and will be designed. A switching rule is a function which determines a switching sequence for a given switching system. Moreover, $\gamma(x(t)) = j$ implies that the system realization is chosen as the $j$th system, $j = 1, 2, \ldots, N$. It is seen that the system (2.1) can be viewed as an autonomous switched system in which the effective subsystem changes when the state $x(t)$ hits predefined boundaries, as illustrated by the sketch below.
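To make the switching mechanism concrete, the following minimal Python sketch shows one admissible state-based rule for $N = 2$; the hyperplane boundary $c$ is an illustrative assumption, not the design produced by Theorem 3.1.

```python
import numpy as np

# A switching rule gamma: R^n -> {1,...,N} selects the active subsystem from
# the current state. Minimal sketch with N = 2 and a hyperplane boundary
# (the vector c below is an illustrative assumption, not the paper's design).
c = np.array([1.0, -1.0])

def gamma(x):
    """Return the index j of the subsystem realization to activate."""
    return 1 if c @ x >= 0 else 2

# The trajectory follows mode j = gamma(x(t)) until x(t) crosses the
# predefined boundary {x : c @ x = 0}, where the effective subsystem changes.
print(gamma(np.array([0.5, 0.2])), gamma(np.array([-0.5, 0.2])))  # -> 1 2
```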

$A_{j} = \operatorname{diag}(\bar{a}_{1j}, \bar{a}_{2j}, \ldots, \bar{a}_{nj})$, $\bar{a}_{ij} > 0$, represents the self-feedback term; $W_{0j}$ and $W_{1j}$ denote the connection weights and the delayed connection weights, respectively. The time-varying delay function $h(t)$ satisfies the condition
$$0 \le h_{0} \le h(t) \le h_{1}.$$
The initial function $\phi(t) \in C^{1}([-h_{1},0],\mathbb{R}^{n})$ has the norm
$$\|\phi\| = \sup_{t\in[-h_{1},0]}\sqrt{\|\phi(t)\|^{2} + \|\dot{\phi}(t)\|^{2}}.$$
In this paper, we consider various activation functions and assume that the activation functions $f(\cdot)$, $g(\cdot)$ are Lipschitzian with the Lipschitz constants $f_{i}, e_{i} > 0$:
$$\begin{aligned}
|f_{i}(\xi_{1}) - f_{i}(\xi_{2})| &\le f_{i}|\xi_{1} - \xi_{2}|, \quad i = 1,2,\ldots,n,\ \forall \xi_{1}, \xi_{2} \in \mathbb{R},\\
|g_{i}(\xi_{1}) - g_{i}(\xi_{2})| &\le e_{i}|\xi_{1} - \xi_{2}|, \quad i = 1,2,\ldots,n,\ \forall \xi_{1}, \xi_{2} \in \mathbb{R}.
\end{aligned} \tag{2.2}$$
Definition 2.1 The zero solution of the switched recurrent neural network with interval time-varying delay (2.1) is α-exponentially stable if there exist positive numbers $\alpha$, $N$ such that every solution $x(t,\phi)$ satisfies
$$\|x(t,\phi)\| \le N e^{-\alpha t}\|\phi\|, \quad \forall t \ge 0.$$

We introduce the following well-known technical propositions, which will be used in the proof of our results.

Proposition 2.1 (Schur complement lemma [26])

Given constant matrices $X$, $Y$, $Z$ with appropriate dimensions satisfying $X = X^{T}$, $Y = Y^{T} > 0$, we have $X + Z^{T}Y^{-1}Z < 0$ if and only if
$$\begin{pmatrix} X & Z^{T}\\ Z & -Y \end{pmatrix} < 0.$$
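As a quick numerical sanity check of Proposition 2.1 (purely illustrative, not part of the development), the sketch below draws random matrices of compatible dimensions and confirms that the two negativity conditions agree.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)); X = (M + M.T) / 2 - 3 * np.eye(n)  # X = X^T
M = rng.standard_normal((n, n)); Y = M @ M.T + np.eye(n)            # Y = Y^T > 0
Z = rng.standard_normal((n, n))

schur = X + Z.T @ np.linalg.solve(Y, Z)          # X + Z^T Y^{-1} Z
block = np.block([[X, Z.T], [Z, -Y]])            # the 2x2 block matrix

neg_def = lambda S: bool(np.all(np.linalg.eigvalsh(S) < 0))
assert neg_def(schur) == neg_def(block)          # the lemma's equivalence
print("X + Z'Y^{-1}Z < 0:", neg_def(schur), "| block < 0:", neg_def(block))
```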

Proposition 2.2 (Integral matrix inequality [27])

For any symmetric positive definite matrix $M > 0$, scalar $\sigma > 0$ and vector function $\omega: [0,\sigma] \to \mathbb{R}^{n}$ such that the integrations concerned are well defined, the following inequality holds:
$$\Big(\int_{0}^{\sigma}\omega(s)\,ds\Big)^{T} M \Big(\int_{0}^{\sigma}\omega(s)\,ds\Big) \le \sigma \int_{0}^{\sigma}\omega^{T}(s)M\omega(s)\,ds.$$
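Proposition 2.2 is Jensen's inequality in integral form; a small sketch, approximating both sides by Riemann sums for a sample $\omega$ and a random $M > 0$, illustrates it (the particular $\omega$ is an arbitrary choice).

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, N = 3, 2.0, 2000
M0 = rng.standard_normal((n, n)); M = M0 @ M0.T + np.eye(n)   # M > 0

s = np.linspace(0.0, sigma, N)
ds = s[1] - s[0]
omega = np.stack([np.sin(s), np.cos(2 * s), s], axis=1)       # sample omega(s)

v = omega.sum(axis=0) * ds                                    # ~ integral of omega
lhs = v @ M @ v
rhs = sigma * ds * np.einsum('si,ij,sj->', omega, M, omega)   # ~ sigma * integral
assert lhs <= rhs + 1e-9
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}")
```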

3 Main results

Let us set
$$\begin{aligned}
w_{11} &= -[P+\alpha I]A_{j} - A_{j}^{T}[P+\alpha I] + \sum_{i=0}^{1}G_{i} - PA_{j} - A_{j}^{T}P - \sum_{i=0}^{1}e^{-2\alpha h_{i}}H_{i} + 4PFD_{0}^{-1}FP,\\
w_{12} &= P + A_{j}P, \qquad w_{13} = e^{-2\alpha h_{0}}H_{0} + A_{j}P, \qquad w_{14} = 2e^{-2\alpha h_{1}}H_{1} + A_{j}P, \qquad w_{15} = P + A_{j}P,\\
w_{22} &= \sum_{i=0}^{1}W_{ij}D_{i}W_{ij}^{T} + \sum_{i=0}^{1}h_{i}^{2}H_{i} + (h_{1}-h_{0})U - 2P,\\
w_{23} &= P, \qquad w_{24} = P, \qquad w_{25} = P,\\
w_{33} &= -e^{-2\alpha h_{0}}G_{0} - e^{-2\alpha h_{0}}H_{0} - e^{-2\alpha h_{1}}U + \sum_{i=0}^{1}W_{ij}D_{i}W_{ij}^{T},\\
w_{34} &= 0, \qquad w_{35} = e^{-2\alpha h_{1}}U,\\
w_{44} &= \sum_{i=0}^{1}W_{ij}D_{i}W_{ij}^{T} - e^{-2\alpha h_{1}}U - e^{-2\alpha h_{1}}G_{1} - e^{-2\alpha h_{1}}H_{1},\\
w_{45} &= e^{-2\alpha h_{1}}U,\\
w_{55} &= \sum_{i=0}^{1}W_{ij}D_{i}W_{ij}^{T} - 2e^{-2\alpha h_{1}}U + 4PED_{1}^{-1}EP,\\
E &= \operatorname{diag}\{e_{i}, i=1,\ldots,n\}, \qquad F = \operatorname{diag}\{f_{i}, i=1,\ldots,n\},\\
\lambda_{1} &= \lambda_{\min}(P^{-1}),\\
\lambda_{2} &= \lambda_{\max}(P^{-1}) + h_{0}\,\lambda_{\max}\Big[P^{-1}\Big(\sum_{i=0}^{1}G_{i}\Big)P^{-1}\Big] + h_{1}^{2}\,\lambda_{\max}\Big[P^{-1}\Big(\sum_{i=0}^{1}H_{i}\Big)P^{-1}\Big] + (h_{1}-h_{0})\,\lambda_{\max}\big(P^{-1}UP^{-1}\big).
\end{aligned}$$
Theorem 3.1 The zero solution of the switched recurrent neural network with interval time-varying delay (2.1) is α-exponentially stable if there exist a positive number $\alpha$, symmetric positive definite matrices $P$, $U$, $G_{0}$, $G_{1}$, $H_{0}$, $H_{1}$, and diagonal positive definite matrices $D_{i}$, $i = 0, 1$, satisfying the following LMIs:
$$\begin{bmatrix}
w_{11} & w_{12} & w_{13} & w_{14} & w_{15}\\
* & w_{22} & w_{23} & w_{24} & w_{25}\\
* & * & w_{33} & w_{34} & w_{35}\\
* & * & * & w_{44} & w_{45}\\
* & * & * & * & w_{55}
\end{bmatrix} < 0, \quad j = 1, 2, \ldots, N. \tag{3.1}$$
Then the switching rule is chosen as $\gamma(x(t)) = j$. Moreover, the solution $x(t,\phi)$ of the system satisfies
$$\|x(t,\phi)\| \le \sqrt{\frac{\lambda_{2}}{\lambda_{1}}}\, e^{-\alpha t}\|\phi\|, \quad \forall t \ge 0.$$
Proof Let $Y = P^{-1}$, $y(t) = Yx(t)$. We consider the following Lyapunov-Krasovskii functional:
$$\begin{aligned}
V(t, x_{t}) &= \sum_{i=1}^{6} V_{i}(t, x_{t}),\\
V_{1} &= x^{T}(t)Yx(t),\\
V_{2} &= \int_{t-h_{0}}^{t} e^{2\alpha(s-t)}x^{T}(s)YG_{0}Yx(s)\,ds,\\
V_{3} &= \int_{t-h_{1}}^{t} e^{2\alpha(s-t)}x^{T}(s)YG_{1}Yx(s)\,ds,\\
V_{4} &= h_{0}\int_{-h_{0}}^{0}\int_{t+s}^{t} e^{2\alpha(\tau-t)}\dot{x}^{T}(\tau)YH_{0}Y\dot{x}(\tau)\,d\tau\,ds,\\
V_{5} &= h_{1}\int_{-h_{1}}^{0}\int_{t+s}^{t} e^{2\alpha(\tau-t)}\dot{x}^{T}(\tau)YH_{1}Y\dot{x}(\tau)\,d\tau\,ds,\\
V_{6} &= (h_{1}-h_{0})\int_{-h_{1}}^{-h_{0}}\int_{t+s}^{t} e^{2\alpha(\tau-t)}\dot{x}^{T}(\tau)YUY\dot{x}(\tau)\,d\tau\,ds.
\end{aligned}$$
It is easy to check that
$$\lambda_{1}\|x(t)\|^{2} \le V(t, x_{t}) \le \lambda_{2}\|x_{t}\|^{2}, \quad \forall t \ge 0. \tag{3.2}$$
Taking the derivatives of $V_{i}$ along the solution, we have
$$\begin{aligned}
\dot{V}_{1} &= 2x^{T}(t)Y\dot{x}(t) = y^{T}(t)\big[-PA_{j}^{T} - A_{j}P\big]y(t) + 2y^{T}(t)W_{0j}f(Py(t)) + 2y^{T}(t)W_{1j}g(Py(t-h(t)));\\
\dot{V}_{2} &= y^{T}(t)G_{0}y(t) - e^{-2\alpha h_{0}}y^{T}(t-h_{0})G_{0}y(t-h_{0}) - 2\alpha V_{2};\\
\dot{V}_{3} &= y^{T}(t)G_{1}y(t) - e^{-2\alpha h_{1}}y^{T}(t-h_{1})G_{1}y(t-h_{1}) - 2\alpha V_{3};\\
\dot{V}_{4} &\le h_{0}^{2}\dot{y}^{T}(t)H_{0}\dot{y}(t) - h_{0}e^{-2\alpha h_{0}}\int_{t-h_{0}}^{t}\dot{y}^{T}(s)H_{0}\dot{y}(s)\,ds - 2\alpha V_{4};\\
\dot{V}_{5} &\le h_{1}^{2}\dot{y}^{T}(t)H_{1}\dot{y}(t) - h_{1}e^{-2\alpha h_{1}}\int_{t-h_{1}}^{t}\dot{y}^{T}(s)H_{1}\dot{y}(s)\,ds - 2\alpha V_{5};\\
\dot{V}_{6} &\le (h_{1}-h_{0})^{2}\dot{y}^{T}(t)U\dot{y}(t) - (h_{1}-h_{0})e^{-2\alpha h_{1}}\int_{t-h_{1}}^{t-h_{0}}\dot{y}^{T}(s)U\dot{y}(s)\,ds - 2\alpha V_{6}.
\end{aligned}$$
Applying Proposition 2.2 and the Leibniz-Newton formula
$$\int_{s}^{t}\dot{y}(\tau)\,d\tau = y(t) - y(s),$$
we have, for i = 0 , 1 ,
$$\begin{aligned}
h_{i}\int_{t-h_{i}}^{t}\dot{y}^{T}(s)H_{i}\dot{y}(s)\,ds &\ge \Big[\int_{t-h_{i}}^{t}\dot{y}(s)\,ds\Big]^{T}H_{i}\Big[\int_{t-h_{i}}^{t}\dot{y}(s)\,ds\Big]\\
&= \big[y(t)-y(t-h_{i})\big]^{T}H_{i}\big[y(t)-y(t-h_{i})\big]\\
&= y^{T}(t)H_{i}y(t) - 2y^{T}(t)H_{i}y(t-h_{i}) + y^{T}(t-h_{i})H_{i}y(t-h_{i}).
\end{aligned} \tag{3.3}$$
Note that
$$\int_{t-h_{1}}^{t-h_{0}}\dot{y}^{T}(s)U\dot{y}(s)\,ds = \int_{t-h_{1}}^{t-h(t)}\dot{y}^{T}(s)U\dot{y}(s)\,ds + \int_{t-h(t)}^{t-h_{0}}\dot{y}^{T}(s)U\dot{y}(s)\,ds.$$
Applying Proposition 2.2 gives
$$\big[h_{1}-h(t)\big]\int_{t-h_{1}}^{t-h(t)}\dot{y}^{T}(s)U\dot{y}(s)\,ds \ge \Big[\int_{t-h_{1}}^{t-h(t)}\dot{y}(s)\,ds\Big]^{T}U\Big[\int_{t-h_{1}}^{t-h(t)}\dot{y}(s)\,ds\Big] = \big[y(t-h(t))-y(t-h_{1})\big]^{T}U\big[y(t-h(t))-y(t-h_{1})\big].$$
Since $h_{1} - h(t) \le h_{1} - h_{0}$, it follows that
$$\big[h_{1}-h_{0}\big]\int_{t-h_{1}}^{t-h(t)}\dot{y}^{T}(s)U\dot{y}(s)\,ds \ge \big[y(t-h(t))-y(t-h_{1})\big]^{T}U\big[y(t-h(t))-y(t-h_{1})\big].$$
Similarly, we have
$$(h_{1}-h_{0})\int_{t-h(t)}^{t-h_{0}}\dot{y}^{T}(s)U\dot{y}(s)\,ds \ge \big[y(t-h_{0})-y(t-h(t))\big]^{T}U\big[y(t-h_{0})-y(t-h(t))\big].$$
Then we have
$$\begin{aligned}
\dot{V}(\cdot) + 2\alpha V(\cdot) \le{}& y^{T}(t)\big[-PA_{j}^{T} - A_{j}P\big]y(t) + 2y^{T}(t)W_{0j}f(Py(t)) + 2y^{T}(t)W_{1j}g(Py(t-h(t)))\\
&+ y^{T}(t)\Big(\sum_{i=0}^{1}G_{i}\Big)y(t) + 2\alpha\big\langle Py(t), y(t)\big\rangle + \dot{y}^{T}(t)\Big(\sum_{i=0}^{1}h_{i}^{2}H_{i}\Big)\dot{y}(t) + (h_{1}-h_{0})\dot{y}^{T}(t)U\dot{y}(t)\\
&- \sum_{i=0}^{1}e^{-2\alpha h_{i}}y^{T}(t-h_{i})G_{i}y(t-h_{i}) - e^{-2\alpha h_{0}}\big[y(t)-y(t-h_{0})\big]^{T}H_{0}\big[y(t)-y(t-h_{0})\big]\\
&- e^{-2\alpha h_{1}}\big[y(t)-y(t-h_{1})\big]^{T}H_{1}\big[y(t)-y(t-h_{1})\big]\\
&- e^{-2\alpha h_{1}}\big[y(t-h(t))-y(t-h_{1})\big]^{T}U\big[y(t-h(t))-y(t-h_{1})\big]\\
&- e^{-2\alpha h_{1}}\big[y(t-h_{0})-y(t-h(t))\big]^{T}U\big[y(t-h_{0})-y(t-h(t))\big].
\end{aligned} \tag{3.4}$$
Using equation (2.1),
$$P\dot{y}(t) + A_{j}Py(t) - W_{0j}f(\cdot) - W_{1j}g(\cdot) = 0,$$
and multiplying both sides by $\big[2y^{T}(t), -2\dot{y}^{T}(t), 2y^{T}(t-h_{0}), 2y^{T}(t-h_{1}), 2y^{T}(t-h(t))\big]$, we have
$$\begin{aligned}
&2y^{T}(t)P\dot{y}(t) + 2y^{T}(t)A_{j}Py(t) - 2y^{T}(t)W_{0j}f(\cdot) - 2y^{T}(t)W_{1j}g(\cdot) = 0,\\
&-2\dot{y}^{T}(t)P\dot{y}(t) - 2\dot{y}^{T}(t)A_{j}Py(t) + 2\dot{y}^{T}(t)W_{0j}f(\cdot) + 2\dot{y}^{T}(t)W_{1j}g(\cdot) = 0,\\
&2y^{T}(t-h_{0})P\dot{y}(t) + 2y^{T}(t-h_{0})A_{j}Py(t) - 2y^{T}(t-h_{0})W_{0j}f(\cdot) - 2y^{T}(t-h_{0})W_{1j}g(\cdot) = 0,\\
&2y^{T}(t-h_{1})P\dot{y}(t) + 2y^{T}(t-h_{1})A_{j}Py(t) - 2y^{T}(t-h_{1})W_{0j}f(\cdot) - 2y^{T}(t-h_{1})W_{1j}g(\cdot) = 0,\\
&2y^{T}(t-h(t))P\dot{y}(t) + 2y^{T}(t-h(t))A_{j}Py(t) - 2y^{T}(t-h(t))W_{0j}f(\cdot) - 2y^{T}(t-h(t))W_{1j}g(\cdot) = 0.
\end{aligned} \tag{3.5}$$
Adding all the zero terms of (3.5) into (3.4) and applying the following estimations:
$$\begin{aligned}
2\big\langle W_{0j}f(x), y\big\rangle &\le \big\langle W_{0j}D_{0}W_{0j}^{T}y, y\big\rangle + \big\langle D_{0}^{-1}f(x), f(x)\big\rangle,\\
2\big\langle W_{1j}g(z), y\big\rangle &\le \big\langle W_{1j}D_{1}W_{1j}^{T}y, y\big\rangle + \big\langle D_{1}^{-1}g(z), g(z)\big\rangle,\\
\big\langle D_{0}^{-1}f(x), f(x)\big\rangle &\le \big\langle FD_{0}^{-1}Fx, x\big\rangle,\\
\big\langle D_{1}^{-1}g(z), g(z)\big\rangle &\le \big\langle ED_{1}^{-1}Ez, z\big\rangle,
\end{aligned}$$
we obtain
$$\dot{V}(\cdot) + 2\alpha V(\cdot) \le \zeta^{T}(t)E_{j}\zeta(t), \tag{3.6}$$
where $\zeta(t) = \big[y^{T}(t), \dot{y}^{T}(t), y^{T}(t-h_{0}), y^{T}(t-h_{1}), y^{T}(t-h(t))\big]^{T}$, and
$$E_{j} = \begin{bmatrix}
w_{11} & w_{12} & w_{13} & w_{14} & w_{15}\\
* & w_{22} & w_{23} & w_{24} & w_{25}\\
* & * & w_{33} & w_{34} & w_{35}\\
* & * & * & w_{44} & w_{45}\\
* & * & * & * & w_{55}
\end{bmatrix}, \quad j = 1, 2, \ldots, N.$$
Therefore, by condition (3.1), we obtain from (3.6) that
$$\dot{V}(t, x_{t}) \le -2\alpha V(t, x_{t}), \quad \forall t \ge 0. \tag{3.7}$$
Integrating both sides of (3.7) from 0 to t, we obtain
$$V(t, x_{t}) \le V(\phi)e^{-2\alpha t}, \quad \forall t \ge 0.$$
Furthermore, taking condition (3.2) into account, we have
$$\lambda_{1}\|x(t,\phi)\|^{2} \le V(x_{t}) \le V(\phi)e^{-2\alpha t} \le \lambda_{2}e^{-2\alpha t}\|\phi\|^{2},$$
then
$$\|x(t,\phi)\| \le \sqrt{\frac{\lambda_{2}}{\lambda_{1}}}\, e^{-\alpha t}\|\phi\|, \quad \forall t \ge 0,$$

which proves the exponential stability of (2.1). This completes the proof of the theorem. □
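In practice, LMI feasibility conditions of the form (3.1) are checked with semidefinite programming. The sketch below poses a much simpler delay-free stand-in, one Lyapunov-type LMI per mode $j$, in Python with CVXPY; it only illustrates the mechanics of verifying a family of LMIs over $j = 1, \ldots, N$. The matrices and the inequality itself are illustrative assumptions, not the full 5×5 block condition of Theorem 3.1.

```python
import cvxpy as cp
import numpy as np

# Delay-free stand-in for (3.1): find P > 0 such that, for every mode j,
#   -A_j^T P - P A_j + 2*alpha*P < 0   (an assumed simplified condition).
A_modes = [np.diag([0.1, 0.3]), np.diag([0.2, 0.4])]
alpha, n, eps = 0.05, 2, 1e-6

P = cp.Variable((n, n), symmetric=True)
cons = [P >> eps * np.eye(n)]
for Aj in A_modes:
    cons.append(-Aj.T @ P - P @ Aj + 2 * alpha * P << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), cons)   # pure feasibility problem
prob.solve()
print(prob.status)
print(np.round(P.value, 4) if P.value is not None else "infeasible")
```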

Example 3.1 Consider the switched recurrent neural network with interval time-varying delay (2.1) with $N = 2$, where
$$A_{1} = \begin{bmatrix} 0.1 & 0\\ 0 & 0.3 \end{bmatrix}, \quad A_{2} = \begin{bmatrix} 0.2 & 0\\ 0 & 0.4 \end{bmatrix}, \quad W_{01} = \begin{bmatrix} 0.1 & 0.3\\ 0.2 & 0.8 \end{bmatrix}, \quad W_{02} = \begin{bmatrix} 0.7 & 0.3\\ 0.4 & 0.9 \end{bmatrix},$$
$$W_{11} = \begin{bmatrix} 0.4 & 0.2\\ 0.3 & 0.3 \end{bmatrix}, \quad W_{12} = \begin{bmatrix} 0.2 & 0.3\\ 0.1 & 0.4 \end{bmatrix}, \quad E = \begin{bmatrix} 0.2 & 0\\ 0 & 0.4 \end{bmatrix}, \quad F = \begin{bmatrix} 0.3 & 0\\ 0 & 0.5 \end{bmatrix},$$
$$h(t) = \begin{cases} 0.1 + 1.2\sin^{2}t & \text{if } t \in I = \bigcup_{k\ge 0}\big[2k\pi, (2k+1)\pi\big],\\ 0.1 & \text{if } t \in \mathbb{R}^{+}\setminus I. \end{cases}$$
Note that $h(t)$ is non-differentiable; therefore, the stability criteria proposed in [5–9, 11–14, 17–25] are not applicable to this system. We choose $\alpha = 0.4$, $h_{0} = 0.1$, $h_{1} = 1.3$. By using the MATLAB LMI toolbox, we can solve the linear matrix inequalities (3.1) in Theorem 3.1 for $P$, $U$, $G_{0}$, $G_{1}$, $D_{0}$, $D_{1}$, $H_{0}$, and $H_{1}$. A set of solutions is as follows:
$$P = \begin{bmatrix} 1.5219 & 0.3659\\ 0.3659 & 2.2398 \end{bmatrix}, \quad U = \begin{bmatrix} 3.1239 & 0.2365\\ 0.2365 & 3.0123 \end{bmatrix}, \quad G_{0} = \begin{bmatrix} 1.3225 & 0.0258\\ 0.0258 & 1.2698 \end{bmatrix},$$
$$G_{1} = \begin{bmatrix} 2.2368 & 0.0148\\ 0.0148 & 3.1121 \end{bmatrix}, \quad H_{0} = \begin{bmatrix} 2.2189 & 0.1238\\ 0.1238 & 1.2368 \end{bmatrix}, \quad H_{1} = \begin{bmatrix} 2.3225 & 0.0369\\ 0.0369 & 2.1897 \end{bmatrix},$$
$$D_{0} = \begin{bmatrix} 1.2398 & 0.3659\\ 0.3659 & 1.8935 \end{bmatrix}, \quad D_{1} = \begin{bmatrix} 2.3641 & 0.0593\\ 0.0593 & 1.2380 \end{bmatrix}.$$
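From these matrices, $\lambda_{1}$, $\lambda_{2}$, and hence the coefficient $\sqrt{\lambda_{2}/\lambda_{1}}$ in the estimate of Theorem 3.1 can be evaluated directly from the formulas preceding the theorem; a short numpy sketch is given below (with $h_{0} = 0.1$, $h_{1} = 1.3$; the value obtained depends on the rounded entries reported above).

```python
import numpy as np

P  = np.array([[1.5219, 0.3659], [0.3659, 2.2398]])
U  = np.array([[3.1239, 0.2365], [0.2365, 3.0123]])
G0 = np.array([[1.3225, 0.0258], [0.0258, 1.2698]])
G1 = np.array([[2.2368, 0.0148], [0.0148, 3.1121]])
H0 = np.array([[2.2189, 0.1238], [0.1238, 1.2368]])
H1 = np.array([[2.3225, 0.0369], [0.0369, 2.1897]])
h0, h1 = 0.1, 1.3

Pinv = np.linalg.inv(P)
lmax = lambda S: np.linalg.eigvalsh(S).max()
lam1 = np.linalg.eigvalsh(Pinv).min()                  # lambda_min(P^{-1})
lam2 = (lmax(Pinv)
        + h0 * lmax(Pinv @ (G0 + G1) @ Pinv)
        + h1**2 * lmax(Pinv @ (H0 + H1) @ Pinv)
        + (h1 - h0) * lmax(Pinv @ U @ Pinv))
print("lam1 =", lam1, "lam2 =", lam2)
print("decay coefficient sqrt(lam2/lam1) =", np.sqrt(lam2 / lam1))
```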
By Theorem 3.1, the switched recurrent neural network with interval time-varying delay (2.1) is exponentially stable, and the switching rule is chosen as $\gamma(x(t)) = j$. Moreover, the solution $x(t,\phi)$ of the system satisfies
$$\|x(t,\phi)\| \le 12.6984\, e^{-0.4 t}\|\phi\|, \quad \forall t \ge 0.$$
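The estimate can also be examined by direct simulation. A rough Euler sketch with a history buffer is given below; the activation functions (tanh-type, matching the Lipschitz bounds $E$, $F$) and the particular state-based rule are illustrative assumptions, since the example specifies only the bounds and the delay.

```python
import numpy as np

# Modes 1 and 2 of the example, stored at list indices 0 and 1.
A  = [np.diag([0.1, 0.3]), np.diag([0.2, 0.4])]
W0 = [np.array([[0.1, 0.3], [0.2, 0.8]]), np.array([[0.7, 0.3], [0.4, 0.9]])]
W1 = [np.array([[0.4, 0.2], [0.3, 0.3]]), np.array([[0.2, 0.3], [0.1, 0.4]])]
f = lambda x: np.tanh(np.array([0.3, 0.5]) * x)   # assumed, Lipschitz consts F
g = lambda x: np.tanh(np.array([0.2, 0.4]) * x)   # assumed, Lipschitz consts E

def h(t):  # the interval delay of Example 3.1 (sin t >= 0 iff t is in I)
    return 0.1 + 1.2 * np.sin(t) ** 2 if np.sin(t) >= 0 else 0.1

gamma = lambda x: 0 if x[0] >= 0 else 1           # an assumed state-based rule

dt, h1, T = 1e-3, 1.3, 15.0
m = int(h1 / dt)
hist = [np.array([1.0, -1.0])] * (m + 1)          # constant initial function phi
phi_norm = np.linalg.norm(hist[-1])               # ||phi|| (phi' = 0 here)

ok, t = True, 0.0
while t < T:
    x, xd = hist[-1], hist[-1 - int(h(t) / dt)]   # x(t) and x(t - h(t))
    j = gamma(x)
    hist.append(x + dt * (-A[j] @ x + W0[j] @ f(x) + W1[j] @ g(xd)))
    hist.pop(0)
    t += dt
    ok = ok and np.linalg.norm(hist[-1]) <= 12.6984 * np.exp(-0.4 * t) * phi_norm
print("envelope respected on [0, T]:", ok)
```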

4 Conclusion

In this paper, the problem of exponential stability for switched recurrent neural networks with interval non-differentiable time-varying delay has been studied. By constructing a set of augmented Lyapunov-Krasovskii functionals combined with the Newton-Leibniz formula, a switching rule for exponential stability of switched recurrent neural networks with interval time-varying delay has been presented, and new sufficient conditions for the exponential stability of the system have been derived in terms of LMIs.

Declarations

Acknowledgements

This work was supported by the Office of Agricultural Research and Extension, Maejo University, Chiang Mai, Thailand; the National Research Council of Thailand; the Thailand Research Fund; and the Higher Education Commission and the Faculty of Science, Maejo University, Thailand. The authors thank the anonymous reviewers for valuable comments and suggestions, which allowed us to improve the paper.

Authors’ Affiliations

(1)
Division of Mathematics and Statistics, Faculty of Science, Maejo University
(2)
Department of Mathematics, Faculty of Science, Chiang Mai University
(3)
Center of Excellence in Mathematics, CHE
References

1. Hopfield JJ: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79: 2554-2558. doi:10.1073/pnas.79.8.2554
2. Ratchagit K: Asymptotic stability of delay-difference system of Hopfield neural networks via matrix inequalities and application. Int. J. Neural Syst. 2007, 17: 425-430. doi:10.1142/S0129065707001263
3. Gurney K: An Introduction to Neural Networks. CRC Press, Boca Raton; 1997.
4. Wu M, He Y, She JH: Stability Analysis and Robust Control of Time-Delay Systems. Springer, Berlin; 2010.
5. Arik S: An improved global stability result for delayed cellular neural networks. IEEE Trans. Circuits Syst. 2002, 49: 1211-1218.
6. He Y, Wang QG, Wu M: LMI-based stability criteria for neural networks with multiple time-varying delays. Physica D 2005, 112: 126-131.
7. Kwon OM, Park JH: Exponential stability analysis for uncertain neural networks with interval time-varying delays. Appl. Math. Comput. 2009, 212: 530-541. doi:10.1016/j.amc.2009.02.043
8. Phat VN, Trinh H: Exponential stabilization of neural networks with various activation functions and mixed time-varying delays. IEEE Trans. Neural Netw. 2010, 21: 1180-1185.
9. Botmart T, Niamsup P: Robust exponential stability and stabilizability of linear parameter dependent systems with delays. Appl. Math. Comput. 2010, 217: 2551-2566. doi:10.1016/j.amc.2010.07.068
10. Fridman E, Orlov Y: Exponential stability of linear distributed parameter systems with time-varying delays. Automatica 2009, 45: 194-201. doi:10.1016/j.automatica.2008.06.006
11. Xu S, Lam J: A survey of linear matrix inequality techniques in stability analysis of delay systems. Int. J. Syst. Sci. 2008, 39(12): 1095-1113. doi:10.1080/00207720802300370
12. Xie JS, Fan BQ, Young SL, Yang J: Guaranteed cost controller design of networked control systems with state delay. Acta Autom. Sin. 2007, 33: 170-174.
13. Yu L, Gao F: Optimal guaranteed cost control of discrete-time uncertain systems with both state and input delays. J. Franklin Inst. 2001, 338: 101-110. doi:10.1016/S0016-0032(00)00073-9
14. Park JH, Kwon OM, Lee SM, Won SC: On robust H∞ filter design for uncertain neural systems: LMI optimization approach. Appl. Math. Comput. 2004, 159: 625-639. doi:10.1016/j.amc.2003.09.025
15. Park JH: Further result on asymptotic stability criterion of cellular neural networks with time-varying discrete and distributed delays. Appl. Math. Comput. 2006, 182: 1661-1666. doi:10.1016/j.amc.2006.06.005
16. Ratchagit K, Phat VN: Robust stability and stabilization of linear polytopic delay-difference equations with interval time-varying delays. Neural Parallel Sci. Comput. 2011, 19: 361-372.
17. Phat VN, Ratchagit K: Stability and stabilization of switched linear discrete-time systems with interval time-varying delay. Nonlinear Anal. Hybrid Syst. 2011, 5: 605-612. doi:10.1016/j.nahs.2011.05.006
18. Tian L, Liang J, Cao J: Robust observer for discrete-time Markovian jumping neural networks with mixed mode-dependent delays. Nonlinear Dyn. 2012, 67: 47-61. doi:10.1007/s11071-011-9956-y
19. Rajchakit M, Rajchakit G: Mean square exponential stability of stochastic switched system with interval time-varying delays. Abstr. Appl. Anal. 2012, Article ID 623014. doi:10.1155/2012/623014
20. Phat VN, Kongtham Y, Ratchagit K: LMI approach to exponential stability of linear systems with interval time-varying delays. Linear Algebra Appl. 2012, 436: 243-251. doi:10.1016/j.laa.2011.07.016
21. Wu H, Li N, Wang K, Xu G, Guo Q: Global robust stability of switched interval neural networks with discrete and distributed time-varying delays of neural type. Math. Probl. Eng. 2012, Article ID 361871. doi:10.1155/2012/361871
22. Xu H, Wu H, Li N: Switched exponential state estimation and robust stability for interval neural networks with discrete and distributed time delays. Abstr. Appl. Anal. 2012, Article ID 103542. doi:10.1155/2012/103542
23. Rajchakit M, Niamsup P, Rojsiraphisal T, Rajchakit G: Delay-dependent guaranteed cost controller design for uncertain neural networks with interval time-varying delay. Abstr. Appl. Anal. 2012, Article ID 587426. doi:10.1155/2012/587426
24. Zhang H, Wang Z, Liu D: Robust exponential stability of recurrent neural networks with multiple time-varying delays. IEEE Trans. Circuits Syst. II, Express Briefs 2007, 54(8): 730-734.
25. Gau R-S, Lien C-H, Hsieh J-G: Novel stability conditions for interval delayed neural networks with multiple time-varying delays. Int. J. Innov. Comput. Inf. Control 2011, 7(1): 433-444.
26. Boyd S, El Ghaoui L, Feron E, Balakrishnan V: Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia; 1994.
27. Gu K, Kharitonov V, Chen J: Stability of Time-Delay Systems. Birkhäuser, Berlin; 2003.

Copyright

© Rajchakit et al.; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.