
Sampled-data state estimation for neural networks of neutral type

Abstract

In this paper, the sampled-data state estimation is investigated for a class of neural networks of neutral type. By employing a suitable Lyapunov functional, a delay-dependent criterion is established to guarantee the existence of the sampled-data estimator. The estimator gain matrix can be obtained by solving linear matrix inequalities (LMIs). A numerical example is given to show the effectiveness of the proposed method.

1 Introduction

Over the past decades, considerable attention has been devoted to the study of artificial neural networks due to their extensive application in signal and image processing, pattern recognition, and combinatorial optimization [1, 2]. Numerous models of neural networks such as cellular neural networks, Hopfield-type neural networks, Cohen-Grossberg neural networks, and bidirectional associative memory neural networks have been extensively investigated in the literature [3–7].

In recent years, the state estimation problem has become a hot topic because of its significance in estimating the neuron states through available output measurements, and many results are now available. For example, the state estimation problem for delayed neural networks was discussed in [8]. Huang and Feng investigated the state estimation of recurrent neural networks with time-varying delay, and delay-dependent conditions were derived to guarantee the existence of a desired state estimator [9, 10]. By constructing a Lyapunov-Krasovskii functional and using a convex combination technique, the exponential state estimation for Markovian jumping neural networks with time-varying discrete and distributed delays was investigated in [11]. The state estimation of discrete-time neural networks and fuzzy neural networks was studied in [12–15]. Recently, the state estimation problem of neural networks of neutral type has attracted a great deal of interest. Park et al. investigated the state estimation of neural networks of neutral type with interval time-varying delays [16–18]. A delay-dependent condition was proposed for neural networks of neutral type with mixed time-varying delays and Markovian jumping parameters [19].

At the same time, with the development of computer hardware technology, the sampled-data state estimation of neural networks has gradually attracted researchers' attention. For instance, Hu investigated the sampled-data state estimation of delayed neural networks with Markovian jumping parameters [20], and in [21] a sampled-data state estimator was designed for Markovian jumping fuzzy cellular neural networks with mode-dependent probabilistic time-varying delays. In order to deal effectively with the sampled data, the authors of [22, 23] investigated the sampled-data state estimation of neural networks by using a discontinuous Lyapunov functional approach. It is worth mentioning that estimating the neuron states through sampled output measurements needs less information from the network outputs, which can lead to a significant reduction of the information communication burden in the network. To the best of our knowledge, the sampled-data state estimation of neural networks of neutral type has not been investigated so far.

Motivated by the above discussion, in this paper we deal with the sampled-data state estimation problem for neural networks of neutral type. By utilizing a Lyapunov functional, Jensen's inequality, and the Schur complement, a delay-dependent criterion is given to guarantee the existence of the state estimator for neural networks of neutral type. The estimator gain matrix can be obtained in terms of LMIs.

The rest of this paper is organized as follows. In Section 2, the problem is formulated and some lemmas are introduced. In Section 3, sufficient conditions are given to guarantee the existence of the sampled-data estimator. An example is given in Section 4 to illustrate the usefulness of the theoretical results. Finally, conclusions are drawn in Section 5.

2 Problem formulation

In this paper, the sampled-data state estimation problem will be investigated. Consider the following neural network of neutral type:

$$\dot{x}(t) = -Ax(t) + W_0 f(x(t)) + W_1 g(x(t-h(t))) + V\dot{x}(t-d(t)) + J, \qquad y(t) = Cx(t),$$
(1)

where $x(t)=[x_1(t),\ldots,x_n(t)]^T\in\mathbb{R}^n$ is the neuron state vector associated with $n$ neurons. $f(x(t))=[f_1(x_1(t)),\ldots,f_n(x_n(t))]^T\in\mathbb{R}^n$ and $g(x(t-h(t)))=[g_1(x_1(t-h(t))),\ldots,g_n(x_n(t-h(t)))]^T\in\mathbb{R}^n$ are the neuron activation functions. $J=[J_1,\ldots,J_n]^T$ is the external input vector at time $t$. $y(t)\in\mathbb{R}^m$ is the measurement output. $A=\operatorname{diag}(a_1,a_2,\ldots,a_n)$ is a positive diagonal matrix. $W_0$, $W_1$, and $V$ are the interconnection matrices representing the weight coefficients of the neurons. $C\in\mathbb{R}^{m\times n}$ is a known constant matrix. $h(t)>0$ and $d(t)>0$ are axonal signal transmission delays that satisfy the following inequalities:

$$0 < h(t) \le \bar{h}, \qquad \dot{h}(t) \le h_D, \qquad 0 < d(t) \le \bar{d}, \qquad \dot{d}(t) \le d_D < 1.$$

Assumption 1 The neuron activation functions $f(\cdot)$, $g(\cdot)$ satisfy the Lipschitz condition

$$\|f(x_1) - f(x_2)\| \le \|F(x_1 - x_2)\|, \qquad \|g(x_1) - g(x_2)\| \le \|G(x_1 - x_2)\|,$$

where $F\in\mathbb{R}^{n\times n}$ and $G\in\mathbb{R}^{n\times n}$ are known constant matrices. For instance, the activation function $f(x)=0.5\tanh(x)$ used in Example 1 below satisfies this condition with $F=0.5I$.

The aim of this paper is to present an efficient estimation algorithm to observe the neuron states from the available sampled network output. To this end, the following full-order observer is proposed:

$$\dot{\hat{x}}(t) = -A\hat{x}(t) + W_0 f(\hat{x}(t)) + W_1 g(\hat{x}(t-h(t))) + V\dot{\hat{x}}(t-d(t)) + J + u(t), \qquad \hat{y}(t) = C\hat{x}(t),$$
(2)

where $\hat{x}(t)\in\mathbb{R}^n$ is the estimate of the neuron state $x(t)$, $\hat{y}(t)\in\mathbb{R}^m$ is the estimated output vector, and $u(t)\in\mathbb{R}^n$ is the control input, which is assumed to use sampled data as follows:

$$u(t) = K\big[y(t_k) - \hat{y}(t_k)\big] = KC\big[x(t_k) - \hat{x}(t_k)\big], \quad k = 0, 1, 2, \ldots,$$
(3)

where $t_k$ denotes the sampling instant and satisfies $\lim_{k\to\infty} t_k = \infty$.

Assumption 2 For $k \ge 0$, there exists a known positive constant $\bar{\tau}$ such that the sampling instants satisfy $t_{k+1} - t_k \le \bar{\tau}$.

Define the error vector to be $e(t) = x(t) - \hat{x}(t)$, and

$$\phi(t) = f(x(t)) - f(\hat{x}(t)), \qquad \varphi(t) = g(x(t)) - g(\hat{x}(t)).$$
(4)

Let $\tau(t) = t - t_k$ for $t_k \le t < t_{k+1}$. Then the controller (3) can be represented as follows:

$$u(t) = KCe(t_k) = KCe(t - \tau(t)), \quad t_k \le t < t_{k+1}.$$
(5)

Therefore, the error dynamics can be expressed as

$$\dot{e}(t) = -Ae(t) - KCe(t-\tau(t)) + V\dot{e}(t-d(t)) + W_0\phi(t) + W_1\varphi(t-h(t)),$$
(6)

where $\tau(t)$ satisfies $0 \le \tau(t) \le \bar{\tau}$.
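The representation (5) turns the piecewise-constant sampled input into a delayed continuous-time input with a sawtooth delay $\tau(t)$. As an illustration only (not part of the original paper), the following Python sketch evaluates the held input and $\tau(t)$ under uniform sampling at the bound $\bar{\tau}$; the gain $K$, the matrix $C$, and the error trajectory `e_at` are placeholder values.

```python
import numpy as np

def sampled_input(t, tau_bar, K, C, e_at):
    """Zero-order-hold input u(t) = K C e(t_k), with t_k = k * tau_bar.

    e_at(s) returns the error vector e(s); uniform sampling is assumed
    purely for illustration, consistent with t_{k+1} - t_k <= tau_bar.
    """
    t_k = np.floor(t / tau_bar) * tau_bar   # last sampling instant <= t
    tau_t = t - t_k                         # sawtooth delay, 0 <= tau(t) < tau_bar
    return K @ C @ e_at(t_k), tau_t

# Placeholder data, for illustration only
K = -0.5 * np.eye(3)
C = np.eye(3)
e_at = lambda s: np.array([np.exp(-s), np.sin(s), 1.0 / (1.0 + s)])

u, tau_t = sampled_input(t=0.12, tau_bar=0.05, K=K, C=C, e_at=e_at)
print(u, tau_t)   # input held since t_k = 0.10, so tau(0.12) = 0.02
```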

The following lemmas will be used in deriving the main results.

Lemma 1 ([19], Jensen’s inequality)

For any constant matrix $Q\in\mathbb{R}^{n\times n}$, $Q = Q^T > 0$, scalar $b > 0$, and vector function $x:[0,b]\to\mathbb{R}^n$, one has

$$-\int_0^b x^T(s)Qx(s)\,ds \le -\frac{1}{b}\left[\int_0^b x(s)\,ds\right]^T Q \left[\int_0^b x(s)\,ds\right].$$
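Lemma 1 can be spot-checked numerically. The following sketch (an illustration, with an arbitrarily chosen positive definite $Q$ and a sample vector function) compares the two sides of the inequality via Riemann sums; NumPy is assumed available.

```python
import numpy as np

rng = np.random.default_rng(0)
n, b, m = 4, 2.0, 4001                      # dimension, interval length, grid size
R = rng.standard_normal((n, n))
Q = R.T @ R + np.eye(n)                     # a symmetric positive definite Q
s = np.linspace(0.0, b, m)
ds = s[1] - s[0]
x = np.stack([np.sin((i + 1) * s) for i in range(n)])   # a sample x: [0,b] -> R^n

lhs = -np.einsum('im,ij,jm->', x, Q, x) * ds            # -∫ x(s)^T Q x(s) ds
ix = x.sum(axis=1) * ds                                 # ∫ x(s) ds
rhs = -(ix @ Q @ ix) / b
print(lhs <= rhs)                                       # True: the bound holds
```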

Lemma 2 ([20], Schur complement)

Given constant matrices $\Omega_1$, $\Omega_2$, and $\Omega_3$, where $\Omega_1 = \Omega_1^T$ and $\Omega_2 > 0$, then

$$\Omega_1 + \Omega_3^T\Omega_2^{-1}\Omega_3 < 0$$
(7)

if and only if

$$\begin{bmatrix} \Omega_1 & \Omega_3^T \\ \Omega_3 & -\Omega_2 \end{bmatrix} < 0.$$
(8)
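The equivalence in Lemma 2 is also easy to check numerically: for any random $\Omega_1 = \Omega_1^T$, $\Omega_2 > 0$, and $\Omega_3$, negative definiteness of (7) and of the block matrix (8) always agree. A minimal sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
S1 = rng.standard_normal((n, n))
O1 = 0.5 * (S1 + S1.T) - 3 * np.eye(n)                  # symmetric Omega_1
S2 = rng.standard_normal((n, n))
O2 = S2.T @ S2 + np.eye(n)                              # Omega_2 > 0
O3 = rng.standard_normal((n, n))

nd = lambda M: np.all(np.linalg.eigvalsh(M) < 0)        # negative definiteness test
lhs_nd = nd(O1 + O3.T @ np.linalg.solve(O2, O3))        # condition (7)
blk = np.block([[O1, O3.T], [O3, -O2]])                 # block matrix in (8)
print(lhs_nd == nd(blk))                                # True for any such data
```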

3 Main results

In this section, we derive a new delay-dependent LMI criterion for the existence of the state estimator (2) for the neural network (1).

Theorem 1 For given matrices $F$, $G$, a positive scalar $\bar{h}$, and a constant $0 < \alpha < 1$, the error system (6) is globally asymptotically stable if there exist eight positive definite matrices $P$, $Q_i$ ($i = 1, 2, \ldots, 7$), two positive scalars $\beta_1$ and $\beta_2$, and a matrix $X$ satisfying the following LMI:

$$\begin{bmatrix} \Lambda_1 & \Lambda_3^T \\ * & -\Lambda_2 \end{bmatrix} < 0,$$
(9)

where

$$\Lambda_1 = \begin{bmatrix}
Z_1 & \frac{1}{\bar{\tau}}Q_3 - XC & 0 & 0 & 0 & (\alpha\bar{h})^{-1}Q_2 & PV & PW_0 & PW_1 \\
* & -\frac{2}{\bar{\tau}}Q_3 & \frac{1}{\bar{\tau}}Q_3 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & -\frac{1}{\bar{\tau}}Q_3 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & Z_2 & \bar{h}^{-1}Q_2 & (\bar{h}-\alpha\bar{h})^{-1}Q_2 & 0 & 0 & 0 \\
* & * & * & * & Z_3 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & Z_4 & 0 & 0 & 0 \\
* & * & * & * & * & * & -(1-d_D)Q_7 & 0 & 0 \\
* & * & * & * & * & * & * & -\beta_1 I & 0 \\
* & * & * & * & * & * & * & * & Z_5
\end{bmatrix},$$

$$\Lambda_3^T = \begin{bmatrix}
-\bar{h}A^TP & -\bar{\tau}A^TP & -A^TP \\
-\bar{h}C^TX^T & -\bar{\tau}C^TX^T & -C^TX^T \\
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\bar{h}V^TP & \bar{\tau}V^TP & V^TP \\
\bar{h}W_0^TP & \bar{\tau}W_0^TP & W_0^TP \\
\bar{h}W_1^TP & \bar{\tau}W_1^TP & W_1^TP
\end{bmatrix}, \qquad
\Lambda_2 = \begin{bmatrix}
\bar{h}(2P-Q_2) & 0 & 0 \\
* & \bar{\tau}(2P-Q_3) & 0 \\
* & * & 2P-Q_7
\end{bmatrix},$$

$$\begin{aligned}
Z_1 &= -PA - A^TP + Q_1 + Q_5 + Q_6 + G^TQ_4G - \frac{1}{\bar{\tau}}Q_3 - (\alpha\bar{h})^{-1}Q_2 + \beta_1 F^TF, \\
Z_2 &= -(1-h_D)Q_5 - (\bar{h}-\alpha\bar{h})^{-1}Q_2 - \bar{h}^{-1}Q_2 - \beta_2 G^TG, \\
Z_3 &= -Q_6 - \bar{h}^{-1}Q_2, \\
Z_4 &= -(1-\alpha h_D)Q_1 - (\bar{h}-\alpha\bar{h})^{-1}Q_2 - (\alpha\bar{h})^{-1}Q_2, \\
Z_5 &= -(1-h_D)Q_4 - \beta_2 I,
\end{aligned}$$

and the estimator gain can then be designed as $K = P^{-1}X$.

Proof Consider the following Lyapunov functional:

$$V = \sum_{i=1}^{8} V_i,$$
(10)

where

$$\begin{aligned}
V_1 &= e^T(t)Pe(t), & V_2 &= \int_{t-\alpha h(t)}^{t} e^T(s)Q_1e(s)\,ds, \\
V_3 &= \int_{t-\bar{h}}^{t}\int_{s}^{t} \dot{e}^T(\theta)Q_2\dot{e}(\theta)\,d\theta\,ds, & V_4 &= \int_{-\bar{\tau}}^{0}\int_{t+\theta}^{t} \dot{e}^T(s)Q_3\dot{e}(s)\,ds\,d\theta, \\
V_5 &= \int_{t-h(t)}^{t} \varphi^T(s)Q_4\varphi(s)\,ds, & V_6 &= \int_{t-h(t)}^{t} e^T(s)Q_5e(s)\,ds, \\
V_7 &= \int_{t-\bar{h}}^{t} e^T(s)Q_6e(s)\,ds, & V_8 &= \int_{t-d(t)}^{t} \dot{e}^T(s)Q_7\dot{e}(s)\,ds.
\end{aligned}$$

Calculating the derivative of $V_i$ along the trajectories of system (6), we have

$$\dot{V}_1 = 2e^T(t)P\big[-Ae(t) - KCe(t-\tau(t)) + V\dot{e}(t-d(t)) + W_0\phi(t) + W_1\varphi(t-h(t))\big],$$
(11)
$$\dot{V}_2 = e^T(t)Q_1e(t) - \big(1-\alpha\dot{h}(t)\big)e^T(t-\alpha h(t))Q_1e(t-\alpha h(t)) \le e^T(t)Q_1e(t) - (1-\alpha h_D)e^T(t-\alpha h(t))Q_1e(t-\alpha h(t)),$$
(12)
$$\dot{V}_3 = \bar{h}\dot{e}^T(t)Q_2\dot{e}(t) - \int_{t-\bar{h}}^{t}\dot{e}^T(s)Q_2\dot{e}(s)\,ds,$$
(13)
$$\dot{V}_4 = \bar{\tau}\dot{e}^T(t)Q_3\dot{e}(t) - \int_{t-\bar{\tau}}^{t}\dot{e}^T(s)Q_3\dot{e}(s)\,ds,$$
(14)
$$\dot{V}_5 = \varphi^T(t)Q_4\varphi(t) - \big(1-\dot{h}(t)\big)\varphi^T(t-h(t))Q_4\varphi(t-h(t)) \le e^T(t)G^TQ_4Ge(t) - (1-h_D)\varphi^T(t-h(t))Q_4\varphi(t-h(t)),$$
(15)
$$\dot{V}_6 = e^T(t)Q_5e(t) - \big(1-\dot{h}(t)\big)e^T(t-h(t))Q_5e(t-h(t)) \le e^T(t)Q_5e(t) - (1-h_D)e^T(t-h(t))Q_5e(t-h(t)),$$
(16)
$$\dot{V}_7 = e^T(t)Q_6e(t) - e^T(t-\bar{h})Q_6e(t-\bar{h}),$$
(17)
$$\dot{V}_8 = \dot{e}^T(t)Q_7\dot{e}(t) - \big(1-\dot{d}(t)\big)\dot{e}^T(t-d(t))Q_7\dot{e}(t-d(t)) \le \dot{e}^T(t)Q_7\dot{e}(t) - (1-d_D)\dot{e}^T(t-d(t))Q_7\dot{e}(t-d(t)).$$
(18)

Note that the integral terms $-\int_{t-\bar{h}}^{t}\dot{e}^T(s)Q_2\dot{e}(s)\,ds$ and $-\int_{t-\bar{\tau}}^{t}\dot{e}^T(s)Q_3\dot{e}(s)\,ds$ can be decomposed as follows:

$$-\int_{t-\bar{h}}^{t}\dot{e}^T(s)Q_2\dot{e}(s)\,ds = -\int_{t-\alpha h(t)}^{t}\dot{e}^T(s)Q_2\dot{e}(s)\,ds - \int_{t-h(t)}^{t-\alpha h(t)}\dot{e}^T(s)Q_2\dot{e}(s)\,ds - \int_{t-\bar{h}}^{t-h(t)}\dot{e}^T(s)Q_2\dot{e}(s)\,ds,$$
(19)
$$-\int_{t-\bar{\tau}}^{t}\dot{e}^T(s)Q_3\dot{e}(s)\,ds = -\int_{t-\tau(t)}^{t}\dot{e}^T(s)Q_3\dot{e}(s)\,ds - \int_{t-\bar{\tau}}^{t-\tau(t)}\dot{e}^T(s)Q_3\dot{e}(s)\,ds.$$
(20)

By Lemma 1, we have

$$\begin{aligned}
-\int_{t-\alpha h(t)}^{t}\dot{e}^T(s)Q_2\dot{e}(s)\,ds
&\le -(\alpha\bar{h})^{-1}\left[\int_{t-\alpha h(t)}^{t}\dot{e}(s)\,ds\right]^T Q_2\left[\int_{t-\alpha h(t)}^{t}\dot{e}(s)\,ds\right] \\
&= -(\alpha\bar{h})^{-1}\big[e(t)-e(t-\alpha h(t))\big]^T Q_2\big[e(t)-e(t-\alpha h(t))\big] \\
&= -(\alpha\bar{h})^{-1}\big[e^T(t)Q_2e(t) - 2e^T(t)Q_2e(t-\alpha h(t)) + e^T(t-\alpha h(t))Q_2e(t-\alpha h(t))\big], \\
-\int_{t-h(t)}^{t-\alpha h(t)}\dot{e}^T(s)Q_2\dot{e}(s)\,ds
&\le -(\bar{h}-\alpha\bar{h})^{-1}\big[e(t-\alpha h(t))-e(t-h(t))\big]^T Q_2\big[e(t-\alpha h(t))-e(t-h(t))\big] \\
&= -(\bar{h}-\alpha\bar{h})^{-1}\big[e^T(t-\alpha h(t))Q_2e(t-\alpha h(t)) - 2e^T(t-\alpha h(t))Q_2e(t-h(t)) \\
&\qquad + e^T(t-h(t))Q_2e(t-h(t))\big], \\
-\int_{t-\bar{h}}^{t-h(t)}\dot{e}^T(s)Q_2\dot{e}(s)\,ds
&\le -\bar{h}^{-1}\big[e(t-h(t))-e(t-\bar{h})\big]^T Q_2\big[e(t-h(t))-e(t-\bar{h})\big] \\
&= -\bar{h}^{-1}\big[e^T(t-h(t))Q_2e(t-h(t)) - 2e^T(t-h(t))Q_2e(t-\bar{h}) + e^T(t-\bar{h})Q_2e(t-\bar{h})\big],
\end{aligned}$$
(21)
$$\begin{aligned}
-\int_{t-\tau(t)}^{t}\dot{e}^T(s)Q_3\dot{e}(s)\,ds
&\le -\frac{1}{\tau(t)}\left[\int_{t-\tau(t)}^{t}\dot{e}(s)\,ds\right]^T Q_3\left[\int_{t-\tau(t)}^{t}\dot{e}(s)\,ds\right] \\
&\le -\frac{1}{\bar{\tau}}\big[e(t)-e(t-\tau(t))\big]^T Q_3\big[e(t)-e(t-\tau(t))\big] \\
&= -\frac{1}{\bar{\tau}}\big[e^T(t)Q_3e(t) - 2e^T(t)Q_3e(t-\tau(t)) + e^T(t-\tau(t))Q_3e(t-\tau(t))\big], \\
-\int_{t-\bar{\tau}}^{t-\tau(t)}\dot{e}^T(s)Q_3\dot{e}(s)\,ds
&\le -\frac{1}{\bar{\tau}}\big[e(t-\tau(t))-e(t-\bar{\tau})\big]^T Q_3\big[e(t-\tau(t))-e(t-\bar{\tau})\big] \\
&= -\frac{1}{\bar{\tau}}\big[e^T(t-\tau(t))Q_3e(t-\tau(t)) - 2e^T(t-\tau(t))Q_3e(t-\bar{\tau}) + e^T(t-\bar{\tau})Q_3e(t-\bar{\tau})\big].
\end{aligned}$$
(22)

From Assumption 1 and (4), for positive scalars $\beta_i$ ($i = 1, 2$), we have the following inequalities:

$$\begin{aligned}
&\beta_1\big[e^T(t)F^TFe(t) - \phi^T(t)\phi(t)\big] \ge 0, \\
&\beta_2\big[e^T(t-h(t))G^TGe(t-h(t)) - \varphi^T(t-h(t))\varphi(t-h(t))\big] \ge 0.
\end{aligned}$$
(23)

By combining (10)-(23), it is not difficult to deduce that

$$\begin{aligned}
\dot{V} = \sum_{i=1}^{8}\dot{V}_i \le{}& e^T(t)\Big[-PA - A^TP + Q_1 + Q_5 + Q_6 + G^TQ_4G - \frac{1}{\bar{\tau}}Q_3 - (\alpha\bar{h})^{-1}Q_2 + \beta_1F^TF\Big]e(t) \\
&+ 2e^T(t)\Big[-PKC + \frac{1}{\bar{\tau}}Q_3\Big]e(t-\tau(t)) - e^T(t-\tau(t))\Big[\frac{2}{\bar{\tau}}Q_3\Big]e(t-\tau(t)) \\
&+ 2e^T(t)[PV]\dot{e}(t-d(t)) - \dot{e}^T(t-d(t))\big[(1-d_D)Q_7\big]\dot{e}(t-d(t)) \\
&+ 2e^T(t)[PW_0]\phi(t) - \phi^T(t)[\beta_1I]\phi(t) \\
&+ 2e^T(t)[PW_1]\varphi(t-h(t)) - \varphi^T(t-h(t))\big[(1-h_D)Q_4 + \beta_2I\big]\varphi(t-h(t)) \\
&+ 2e^T(t)\big[(\alpha\bar{h})^{-1}Q_2\big]e(t-\alpha h(t)) \\
&- e^T(t-\alpha h(t))\big[(1-\alpha h_D)Q_1 + (\bar{h}-\alpha\bar{h})^{-1}Q_2 + (\alpha\bar{h})^{-1}Q_2\big]e(t-\alpha h(t)) \\
&+ 2e^T(t-\tau(t))\Big[\frac{1}{\bar{\tau}}Q_3\Big]e(t-\bar{\tau}) - e^T(t-\bar{\tau})\Big[\frac{1}{\bar{\tau}}Q_3\Big]e(t-\bar{\tau}) \\
&+ 2e^T(t-\alpha h(t))\big[(\bar{h}-\alpha\bar{h})^{-1}Q_2\big]e(t-h(t)) \\
&- e^T(t-h(t))\big[(1-h_D)Q_5 + (\bar{h}-\alpha\bar{h})^{-1}Q_2 + \bar{h}^{-1}Q_2 + \beta_2G^TG\big]e(t-h(t)) \\
&+ 2e^T(t-h(t))\big[\bar{h}^{-1}Q_2\big]e(t-\bar{h}) - e^T(t-\bar{h})\big[\bar{h}^{-1}Q_2 + Q_6\big]e(t-\bar{h}) \\
&+ \dot{e}^T(t)\big[\bar{h}Q_2 + \bar{\tau}Q_3 + Q_7\big]\dot{e}(t).
\end{aligned}$$
(24)

Let $\Upsilon = \big[-A \;\; {-KC} \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; V \;\; W_0 \;\; W_1\big]$ and $\eta^T(t) = \big[e^T(t) \;\; e^T(t-\tau(t)) \;\; e^T(t-\bar{\tau}) \;\; e^T(t-h(t)) \;\; e^T(t-\bar{h}) \;\; e^T(t-\alpha h(t)) \;\; \dot{e}^T(t-d(t)) \;\; \phi^T(t) \;\; \varphi^T(t-h(t))\big]$. From (6), we have $\dot{e}(t) = \Upsilon\eta(t)$.

Therefore,

$$\dot{e}^T(t)\big[\bar{h}Q_2 + \bar{\tau}Q_3 + Q_7\big]\dot{e}(t) = \eta^T(t)\big[\Upsilon^T(\bar{h}Q_2 + \bar{\tau}Q_3 + Q_7)\Upsilon\big]\eta(t).$$
(25)

Thus, with the change of variable $X = PK$, inequality (24) can be rewritten as

$$\dot{V} \le \eta^T(t)\big[\Lambda_1 + \Upsilon^T(\bar{h}Q_2 + \bar{\tau}Q_3 + Q_7)\Upsilon\big]\eta(t),$$
(26)

where $\Lambda_1$ is defined in Theorem 1.

In view of the facts that $-PQ_2^{-1}P \le -(2P - Q_2)$, $-PQ_3^{-1}P \le -(2P - Q_3)$, and $-PQ_7^{-1}P \le -(2P - Q_7)$, from (9) we can obtain the following inequality:

$$\begin{bmatrix} \Lambda_1 & \Lambda_3^T \\ * & -\Lambda_6 \end{bmatrix} < 0,$$
(27)

where

$$\Lambda_6 = \begin{bmatrix} \bar{h}P^TQ_2^{-1}P & 0 & 0 \\ * & \bar{\tau}P^TQ_3^{-1}P & 0 \\ * & * & P^TQ_7^{-1}P \end{bmatrix}.$$

Pre- and post-multiplying (27) by $\operatorname{diag}\{I, I, I, I, I, I, I, I, I, Q_2P^{-1}, Q_3P^{-1}, Q_7P^{-1}\}$ and $\operatorname{diag}\{I, I, I, I, I, I, I, I, I, P^{-1}Q_2, P^{-1}Q_3, P^{-1}Q_7\}$, respectively, we have

$$\begin{bmatrix} \Lambda_1 & \Lambda_4^T \\ * & -\Lambda_5 \end{bmatrix} < 0,$$
(28)

where

$$\Lambda_4^T = \begin{bmatrix}
-\bar{h}A^TQ_2 & -\bar{\tau}A^TQ_3 & -A^TQ_7 \\
-\bar{h}(KC)^TQ_2 & -\bar{\tau}(KC)^TQ_3 & -(KC)^TQ_7 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\bar{h}V^TQ_2 & \bar{\tau}V^TQ_3 & V^TQ_7 \\
\bar{h}W_0^TQ_2 & \bar{\tau}W_0^TQ_3 & W_0^TQ_7 \\
\bar{h}W_1^TQ_2 & \bar{\tau}W_1^TQ_3 & W_1^TQ_7
\end{bmatrix}, \qquad
\Lambda_5 = \begin{bmatrix} \bar{h}Q_2 & 0 & 0 \\ * & \bar{\tau}Q_3 & 0 \\ * & * & Q_7 \end{bmatrix}.$$

By Lemma 2, inequality (28) is equivalent to $\Lambda_1 + \Upsilon^T(\bar{h}Q_2 + \bar{\tau}Q_3 + Q_7)\Upsilon < 0$, which together with (26) gives $\dot{V} < 0$ for all $\eta(t) \ne 0$. This implies that the error system (6) is asymptotically stable. This completes the proof. □

Remark 1 In this paper, the controller $u(t) = K[y(t_k) - \hat{y}(t_k)]$ is designed from sampled data, so it needs less information from the network outputs than the controllers in [16, 17]. The sampled-data estimation approach can therefore significantly reduce the information communication burden in the network and save computing cost.

At this point, the sampled-data estimation problem has been solved, and the desired estimator can be designed by Theorem 1. In the next section, the effectiveness of the developed sampled-data estimation approach for neural networks of neutral type is shown by a numerical example.

4 Simulation example

In this section, a numerical example with simulation results is employed to demonstrate the effectiveness of the proposed method.

Example 1 Consider the neural network of neutral type (1) with the following parameters:

$$A = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 5 \end{bmatrix}, \qquad
W_0 = \begin{bmatrix} 0.3 & -0.1 & 0 \\ 0.1 & 0.4 & -0.2 \\ -0.2 & 0.1 & 0.2 \end{bmatrix}, \qquad
W_1 = \begin{bmatrix} 0.1 & 1 & 0.2 \\ -0.1 & 0.3 & -0.1 \\ -0.2 & 0.1 & 0.2 \end{bmatrix},$$

$$V = \begin{bmatrix} 0.1 & -0.05 & 0.05 \\ 0.05 & 0.1 & -0.05 \\ 0.05 & 0.1 & 0.1 \end{bmatrix}, \qquad
J = \begin{bmatrix} 2\sin(\pi t) + 0.03t^2 \\ -\sin(4t) - 0.04t^2 \\ 1.5\cos(4t) + 0.01t^2 \end{bmatrix}, \qquad C = I,$$

$$f(x) = 0.5\tanh(x), \qquad g(x(t-h(t))) = 0.4\tanh(x(t-h(t))), \qquad x(0) = [-1.5 \;\; 1 \;\; -1.5]^T,$$

$$h(t) = 0.36(1-\sin(t)), \qquad d(t) = 0.25(1-\sin(t)), \qquad \bar{\tau} = 0.05, \qquad \alpha = 0.5.$$
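Although the example does not state them explicitly, the delay bounds required by Theorem 1 follow directly from $h(t)$ and $d(t)$:

$$h(t) \le \bar{h} = 0.72, \qquad \dot{h}(t) = -0.36\cos(t) \le h_D = 0.36,$$
$$d(t) \le \bar{d} = 0.5, \qquad \dot{d}(t) = -0.25\cos(t) \le d_D = 0.25 < 1.$$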

From the parameters above, it can be verified that $F = 0.5I$ and $G = 0.4I$. By solving the LMI given in Theorem 1, the feasible solutions are obtained as follows:

$$P = \begin{bmatrix} 22.5358 & -1.1052 & 0.0140 \\ -1.1052 & 32.2373 & 0.6453 \\ 0.0140 & 0.6453 & 14.9831 \end{bmatrix}, \qquad
Q_1 = \begin{bmatrix} 8.4845 & -0.0169 & -0.0747 \\ -0.0169 & 10.5790 & 0.1719 \\ -0.0747 & 0.1719 & 5.6045 \end{bmatrix},$$

$$Q_2 = \begin{bmatrix} 15.2407 & -0.6559 & -0.1345 \\ -0.6559 & 19.8115 & 0.2071 \\ -0.1345 & 0.2071 & 11.4000 \end{bmatrix}, \qquad
Q_3 = \begin{bmatrix} 18.3501 & -0.1527 & 0.1320 \\ -0.1527 & 19.0875 & 0.0926 \\ 0.1320 & 0.0926 & 16.9542 \end{bmatrix},$$

$$Q_4 = \begin{bmatrix} 11.8617 & 0.2203 & 0.0955 \\ 0.2203 & 18.7959 & 1.1519 \\ 0.0955 & 1.1519 & 9.2649 \end{bmatrix}, \qquad
Q_5 = \begin{bmatrix} 6.8482 & -0.0052 & -0.0725 \\ -0.0052 & 9.5406 & 0.1891 \\ -0.0725 & 0.1891 & 3.8875 \end{bmatrix},$$

$$Q_6 = \begin{bmatrix} 8.3328 & -0.0359 & -0.0743 \\ -0.0359 & 10.8433 & 0.1788 \\ -0.0743 & 0.1788 & 5.4375 \end{bmatrix}, \qquad
Q_7 = \begin{bmatrix} 6.1224 & -0.3327 & 0.1054 \\ -0.3327 & 10.1217 & -0.1539 \\ 0.1054 & -0.1539 & 4.1545 \end{bmatrix},$$

$$X = \begin{bmatrix} -29.6833 & 0.2433 & -0.3916 \\ 0.2433 & -15.3925 & 0.9746 \\ -0.3916 & 0.9746 & -53.1328 \end{bmatrix}, \qquad
\beta_1 = 18.9090, \qquad \beta_2 = 23.3269.$$

Thus, the estimator gain matrix K is as follows:

$$K = P^{-1}X = \begin{bmatrix} -1.3190 & -0.0128 & -0.0102 \\ -0.0372 & -0.4796 & 0.1009 \\ -0.0233 & 0.0857 & -3.5505 \end{bmatrix}.$$
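For readers who want to reproduce this step, the following is a minimal cvxpy sketch (not the authors' code) that transcribes the LMI (9) with the example data and the bounds $\bar{h} = 0.72$, $h_D = 0.36$, $d_D = 0.25$ derived above, and then recovers $K = P^{-1}X$. It assumes cvxpy with an SDP solver such as SCS is installed; since feasible points of an LMI are not unique, the solver's $P$, $X$, and hence $K$ may differ from the values printed here.

```python
import numpy as np
import cvxpy as cp

n = 3
A  = np.diag([3.0, 2.0, 5.0])
W0 = np.array([[0.3, -0.1, 0.0], [0.1, 0.4, -0.2], [-0.2, 0.1, 0.2]])
W1 = np.array([[0.1, 1.0, 0.2], [-0.1, 0.3, -0.1], [-0.2, 0.1, 0.2]])
V  = np.array([[0.1, -0.05, 0.05], [0.05, 0.1, -0.05], [0.05, 0.1, 0.1]])
C  = np.eye(n)
F, G, I = 0.5 * np.eye(n), 0.4 * np.eye(n), np.eye(n)
hbar, hD, dD, taubar, alpha = 0.72, 0.36, 0.25, 0.05, 0.5

P = cp.Variable((n, n), symmetric=True)
Q1, Q2, Q3, Q4, Q5, Q6, Q7 = (cp.Variable((n, n), symmetric=True) for _ in range(7))
X = cp.Variable((n, n))
b1, b2 = cp.Variable(), cp.Variable()

ah, hah = alpha * hbar, hbar - alpha * hbar
Z1 = (-P @ A - A.T @ P + Q1 + Q5 + Q6 + G.T @ Q4 @ G
      - Q3 / taubar - Q2 / ah + b1 * (F.T @ F))
Z2 = -(1 - hD) * Q5 - Q2 / hah - Q2 / hbar - b2 * (G.T @ G)
Z3 = -Q6 - Q2 / hbar
Z4 = -(1 - alpha * hD) * Q1 - Q2 / hah - Q2 / ah
Z5 = -(1 - hD) * Q4 - b2 * I

O = np.zeros((n, n))
L1 = [[O] * 9 for _ in range(9)]                   # Lambda_1, 9x9 blocks
diag = [Z1, -2 * Q3 / taubar, -Q3 / taubar, Z2, Z3, Z4,
        -(1 - dD) * Q7, -b1 * I, Z5]
off = {(0, 1): Q3 / taubar - X @ C, (1, 2): Q3 / taubar, (0, 5): Q2 / ah,
       (0, 6): P @ V, (0, 7): P @ W0, (0, 8): P @ W1,
       (3, 4): Q2 / hbar, (3, 5): Q2 / hah}
for i in range(9):
    L1[i][i] = diag[i]
for (i, j), blk in off.items():
    L1[i][j], L1[j][i] = blk, blk.T

row = lambda M: [hbar * M, taubar * M, M]          # one row of Lambda_3^T
L3T = ([row(-A.T @ P), row(-C.T @ X.T)] + [[O] * 3] * 4
       + [row(V.T @ P), row(W0.T @ P), row(W1.T @ P)])
L2 = [[hbar * (2 * P - Q2), O, O],
      [O, taubar * (2 * P - Q3), O],
      [O, O, 2 * P - Q7]]

top = [L1[i] + L3T[i] for i in range(9)]
bot = [[L3T[j][k].T for j in range(9)] + [-blk for blk in L2[k]] for k in range(3)]
M = cp.bmat(top + bot)
Ms = 0.5 * (M + M.T)                               # enforce exact symmetry

eps = 1e-4
cons = [P >> eps * I, b1 >= eps, b2 >= eps, Ms << -eps * np.eye(12 * n)]
cons += [Q >> eps * I for Q in (Q1, Q2, Q3, Q4, Q5, Q6, Q7)]
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
if prob.status.startswith("optimal"):
    print("K =", np.linalg.solve(P.value, X.value))  # estimator gain K = P^{-1} X
else:
    print("status:", prob.status)
```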

The simulation results are shown in Figures 1 and 2. It is easy to see that the responses of the state estimator track the true states very quickly.
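For readers without access to the figures, the following forward-Euler sketch (again an illustration, not the authors' simulation code) integrates system (1) and observer (2) under the sampled-data input (3) with the gain $K$ above. The observer's zero initial state and the zero pre-history for $t \le 0$ are assumptions; the neutral term $V\dot{x}(t-d(t))$ is handled by storing the derivative history.

```python
import numpy as np

n, dt, T = 3, 1e-3, 10.0
A  = np.diag([3.0, 2.0, 5.0])
W0 = np.array([[0.3, -0.1, 0.0], [0.1, 0.4, -0.2], [-0.2, 0.1, 0.2]])
W1 = np.array([[0.1, 1.0, 0.2], [-0.1, 0.3, -0.1], [-0.2, 0.1, 0.2]])
V  = np.array([[0.1, -0.05, 0.05], [0.05, 0.1, -0.05], [0.05, 0.1, 0.1]])
C  = np.eye(n)
K  = np.array([[-1.3190, -0.0128, -0.0102],
               [-0.0372, -0.4796,  0.1009],
               [-0.0233,  0.0857, -3.5505]])
f = lambda x: 0.5 * np.tanh(x)
g = lambda x: 0.4 * np.tanh(x)
J = lambda t: np.array([2 * np.sin(np.pi * t) + 0.03 * t**2,
                        -np.sin(4 * t) - 0.04 * t**2,
                        1.5 * np.cos(4 * t) + 0.01 * t**2])
h = lambda t: 0.36 * (1 - np.sin(t))
d = lambda t: 0.25 * (1 - np.sin(t))
sample_steps = int(round(0.05 / dt))          # tau_bar = 0.05

steps = int(T / dt)
x  = np.zeros((steps + 1, n)); x[0] = [-1.5, 1.0, -1.5]
xh = np.zeros((steps + 1, n))                 # observer starts at 0 (assumption)
dx  = np.zeros((steps + 1, n))                # derivative histories for the neutral
dxh = np.zeros((steps + 1, n))                # terms; zero pre-history is assumed
u = np.zeros(n)

for k in range(steps):
    t = k * dt
    ih  = max(k - int(round(h(t) / dt)), 0)              # index of t - h(t)
    id_ = max(k - max(int(round(d(t) / dt)), 1), 0)      # index of t - d(t);
                                                         # >= 1 step keeps it explicit
    if k % sample_steps == 0:                 # sampling instant: update and hold u
        u = K @ C @ (x[k] - xh[k])
    dx[k]  = -A @ x[k]  + W0 @ f(x[k])  + W1 @ g(x[ih])  + V @ dx[id_]  + J(t)
    dxh[k] = -A @ xh[k] + W0 @ f(xh[k]) + W1 @ g(xh[ih]) + V @ dxh[id_] + J(t) + u
    x[k + 1]  = x[k]  + dt * dx[k]
    xh[k + 1] = xh[k] + dt * dxh[k]

print("estimation error at t = 10:", x[-1] - xh[-1])
```

Since both (1) and (2) receive the same input $J(t)$, the error dynamics (6) do not depend on $J$, so the estimation error should decay even though the states themselves are driven by a growing input.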

Figure 1. The true state $x(t)$ and its estimate $\hat{x}(t)$.

Figure 2. The error between the true state $x(t)$ and its estimate $\hat{x}(t)$.

5 Conclusion

In this paper, the sampled-data state estimation problem has been investigated for a class of neural networks of neutral type. By employing a Lyapunov functional, a delay-dependent condition has been established to guarantee the existence of the desired state estimator. A feasible solution can be obtained easily via the Matlab LMI toolbox. Finally, a numerical example has been given to show the effectiveness of the proposed estimator. This work can be extended to the state estimation problem of more general systems, which will be done in the near future.

References

1. Elanayar V, Yung C: Radial basis function neural network for approximation and estimation of nonlinear stochastic dynamic systems. IEEE Trans. Neural Netw. 1994, 5: 594–603. 10.1109/72.298229

2. Joya G, Atencia M, Sandoval F: Hopfield neural network for optimization: study of different dynamic. Neurocomputing 2002, 43: 219–237. 10.1016/S0925-2312(01)00337-X

3. Cao J, Zhou D: Stability analysis of delayed cellular neural networks. Neural Netw. 1998, 11: 1601–1605. 10.1016/S0893-6080(98)00080-X

4. Liu X, Teo K, Xu B: Exponential stability of impulsive high-order Hopfield-type neural networks with time-varying delays. IEEE Trans. Neural Netw. 2005, 16: 1329–1339. 10.1109/TNN.2005.857949

5. Cao J, Liang J: Boundedness and stability for Cohen-Grossberg neural network with time-varying delays. J. Math. Anal. Appl. 2004, 296: 665–685. 10.1016/j.jmaa.2004.04.039

6. Arik S: Global asymptotic stability analysis of bidirectional associative memory neural networks with time delays. IEEE Trans. Neural Netw. 2005, 16: 580–586. 10.1109/TNN.2005.844910

7. Park J, Park C, Kwon O, Lee S: A new stability criterion for bidirectional associative memory neural networks of neutral-type. Appl. Math. Comput. 2008, 199: 716–722. 10.1016/j.amc.2007.10.032

8. He Y, Wang Q, Wu M, Lin C: Delay-dependent state estimation for delayed neural networks. IEEE Trans. Neural Netw. 2006, 17: 1077–1081. 10.1109/TNN.2006.875969

9. Huang H, Feng G, Cao J: An LMI approach to delay-dependent state estimation for delayed neural networks. Neurocomputing 2008, 71: 2857–2867. 10.1016/j.neucom.2007.08.008

10. Huang H, Feng G: State estimation of recurrent neural networks with time-varying delay: a novel delay partition approach. Neurocomputing 2011, 74: 792–796. 10.1016/j.neucom.2010.10.006

11. Zhang D, Yu L: Exponential state estimation for Markovian jumping neural networks with time-varying discrete and distributed delays. Neural Netw. 2012, 35: 103–111.

12. Balasubramaniam P, Vembarasan V, Rakkiyappan R: Delay-dependent robust exponential state estimation of Markovian jumping fuzzy Hopfield neural networks with mixed random time-varying delays. Commun. Nonlinear Sci. Numer. Simul. 2011, 16: 2109–2129. 10.1016/j.cnsns.2010.08.024

13. Balasubramaniam P, Kalpana M, Rakkiyappan R: State estimation for fuzzy cellular neural networks with time delay in the leakage term, discrete and unbounded distributed delays. Comput. Math. Appl. 2011, 62: 3959–3972.

14. Liu Y, Wang Z, Liu X: State estimation for discrete-time Markovian jumping neural networks with mixed mode-dependent delays. Phys. Lett. A 2008, 372: 7147–7155. 10.1016/j.physleta.2008.10.045

15. Mou S, Gao H, Qiang W, Fei Z: State estimation for discrete-time neural networks with time-varying delays. Neurocomputing 2008, 72: 643–647. 10.1016/j.neucom.2008.06.009

16. Park J, Kwon O, Lee S: State estimation for neural networks of neutral-type with interval time-varying delays. Appl. Math. Comput. 2008, 203: 217–223. 10.1016/j.amc.2008.04.025

17. Park J, Kwon O: Further results on state estimation for neural networks of neutral-type with time-varying delay. Appl. Math. Comput. 2009, 208: 69–75. 10.1016/j.amc.2008.11.017

18. Park J, Kwon O: Design of state estimator for neural networks of neutral-type. Appl. Math. Comput. 2008, 202: 360–369. 10.1016/j.amc.2008.02.024

19. Lakshmanan S, Park J, Jung H: State estimation for neural neutral-type networks with mixed time-varying delays and Markovian jumping parameters. Chin. Phys. B 2012, 21(10): Article ID 100205.

20. Hu J, Li N, Liu X, Zhang G: Sampled-data state estimation for delayed neural networks with Markovian jumping parameters. Nonlinear Dyn. 2013, 73: 275–284. 10.1007/s11071-013-0783-1

21. Rakkiyappan R, Sakthivel N, Park J, Kwon O: Sampled-data state estimation for Markovian jumping fuzzy cellular neural networks with mode-dependent probabilistic time-varying delays. Appl. Math. Comput. 2013, 221: 741–769.

22. Lakshmanan S, Park J, Rakkiyappan R, Jung H: State estimator for neural networks with sampled data using discontinuous Lyapunov functional approach. Nonlinear Dyn. 2013, 73: 509–520. 10.1007/s11071-013-0805-z

23. Rakkiyappan R, Zhu Q, Radhika T: Design of sampled data state estimator for Markovian jumping neural networks with leakage time-varying delays and discontinuous Lyapunov functional approach. Nonlinear Dyn. 2013, 73: 1367–1383. 10.1007/s11071-013-0870-3


Acknowledgements

This work was jointly supported by the National Natural Science Foundation of China under Grant 60875036, the Foundation of the Key Laboratory of Advanced Process Control for Light Industry (Jiangnan University), Ministry of Education, P.R. China, and the Fundamental Research Funds for the Central Universities (JUSRP51317B, JUDCF13042).

Author information

Correspondence to Yongqing Yang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

CY carried out the main results of this paper and drafted the manuscript. YY directed the study and helped to inspect the manuscript. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article


Cite this article

Yang, C., Yang, Y., Hu, M. et al. Sampled-data state estimation for neural networks of neutral type. Adv Differ Equ 2014, 138 (2014). https://doi.org/10.1186/1687-1847-2014-138

Download citation

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/1687-1847-2014-138

Keywords