

The exponential stability of BAM neural networks with leakage time-varying delays and sampled-data state feedback input

Abstract

In this paper, the exponential stability of bidirectional associative memory (BAM) neural networks with leakage time-varying delays and sampled-data state feedback input is considered. By applying the time-delay approach, conditions ensuring the exponential stability of the system are obtained. In addition, a numerical example is given to demonstrate the effectiveness of the obtained results.

1 Introduction

In the past few decades, neural networks have been widely investigated by researchers. In 1987, bidirectional associative memory (BAM) neural networks were first introduced by Kosko [1, 2]. Owing to their ability to perform two-way associative recall of stored information, BAM neural networks have attracted considerable attention in fields such as signal processing, pattern recognition, and optimization.

It is well known that time delays are unavoidable in the hardware implementation of neural networks because of the finite switching speed of neurons and amplifiers. Delays can cause instability, oscillation, or poor dynamical behavior. In practical applications, many types of time delays arise, such as discrete delays [3], time-varying delays [4], distributed delays [5, 6], random delays [7], and leakage (or forgetting) delays [8, 9]. Up to now, a large number of results on delayed BAM neural networks have been reported [10–13]. These results can be roughly divided into two categories: stability analysis of equilibrium points, and the existence and stability of periodic or almost periodic solutions.

The leakage delay, which appears in the negative feedback term of a neural network system, has recently emerged as a research topic of primary importance. Gopalsamy [8] investigated the stability of BAM neural networks with constant leakage delays. Liu [14] further discussed the global exponential stability of BAM neural networks with time-varying leakage delays, which extended and improved the main results of Gopalsamy. The works in [15–17] derived stability criteria for BAM neural networks with leakage delays, unbounded distributed delays, and probabilistic time-varying delays.

Sampled-data state feedback is a practical and useful control scheme and has been studied extensively over the past decades. There are results dealing with synchronization [18, 19], state estimation [20–22], and stability [23–29]. Recently, the work in [24] studied the stability of sampled-data piecewise affine systems via the input delay approach. Although the importance of the stability of neural networks has been widely recognized, no related results have been established for the sampled-data stability of BAM neural networks with leakage time-varying delays. Motivated by the works above, we consider the sampled-data stability of BAM neural networks with leakage time-varying delays under variable sampling with a known upper bound on the sampling intervals.

The organization of this paper is as follows. In Section 2, the problem is formulated and some basic preliminaries and assumptions are given. The main results are presented in Section 3. In Section 4, a numerical example is given to demonstrate the effectiveness of the obtained results. Conclusions are drawn in Section 5.

2 Preliminaries

In this paper, we consider the following BAM neural networks with leakage time-varying delays and sampled-data state feedback inputs:

$$
\begin{cases}
\dot{x}_i(t) = -a_i x_i(t-\rho_i(t)) + \sum_{j=1}^{n} b_{ij}^{(1)} g_j(y_j(t)) + \sum_{j=1}^{n} b_{ij}^{(2)} g_j\bigl(y_j(t-\tau_{ij}(t))\bigr) + \tilde{u}_i(t),\\[4pt]
\dot{y}_i(t) = -c_i y_i(t-r_i(t)) + \sum_{j=1}^{n} d_{ij}^{(1)} f_j(x_j(t)) + \sum_{j=1}^{n} d_{ij}^{(2)} f_j\bigl(x_j(t-\sigma_{ij}(t))\bigr) + \tilde{v}_i(t),
\end{cases}
$$
(1)

where $i\in\tilde N=\{1,2,\ldots,n\}$, $x_i(t)$ and $y_i(t)$ are the neuron state variables, the positive constants $a_i$ and $c_i$ denote the time scales of the respective layers of the network, $b_{ij}^{(1)}$, $b_{ij}^{(2)}$, $d_{ij}^{(1)}$, $d_{ij}^{(2)}$ are the connection weights, $\rho_i(t)$ and $r_i(t)$ denote the leakage delays, $\tau_{ij}(t)$ and $\sigma_{ij}(t)$ are the time-varying transmission delays, and $f_j(\cdot)$, $g_j(\cdot)$ are the neuron activation functions. The terms $\tilde u_i(t)=-k_i x_i(t_k)$ and $\tilde v_i(t)=-l_i y_i(t_k)$ are the sampled-data state feedback inputs, where $t_k$ denotes the sampling instant, $t_k\le t<t_{k+1}$, $k\in\mathbb N$, and $\mathbb N$ denotes the set of all natural numbers.

Assume that there exists a positive constant $L$ such that the sampling intervals satisfy $t_{k+1}-t_k\le L$ for all $k\in\mathbb N$. Let $d_k(t)=t-t_k$ for $t\in[t_k,t_{k+1})$; then $t_k=t-d_k(t)$ with $0\le d_k(t)\le L$.
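As a quick illustration (not from the original paper), the following Python sketch shows how the sampled state can be viewed as a delayed state with the bounded, piecewise-linear delay $d_k(t)$; the sampling instants and the bound $L$ below are purely illustrative values.

```python
import numpy as np

# Illustrative (hypothetical) variable sampling instants with t_{k+1} - t_k <= L.
t_samples = np.array([0.0, 0.08, 0.15, 0.25, 0.31, 0.40])
L = 0.1  # assumed upper bound on the sampling intervals

def input_delay(t, t_samples):
    """Return d_k(t) = t - t_k, where t_k is the latest sampling instant <= t."""
    k = np.searchsorted(t_samples, t, side="right") - 1
    return t - t_samples[k]

# The sampled state x(t_k) equals the delayed state x(t - d_k(t)), with 0 <= d_k(t) <= L.
for t in np.linspace(0.0, 0.39, 8):
    d = input_delay(t, t_samples)
    assert 0.0 <= d <= L
    print(f"t = {t:.3f}, d_k(t) = {d:.3f}")
```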

For the sake of convenience, we introduce the following notation:

$$
\begin{aligned}
&\bar\rho_i=\sup_{t\in\mathbb R}\rho_i(t),\qquad \underline\rho_i=\inf_{t\in\mathbb R}\rho_i(t),\qquad \bar r_i=\sup_{t\in\mathbb R}r_i(t),\qquad \underline r_i=\inf_{t\in\mathbb R}r_i(t),\\
&\bar\tau_{ij}=\sup_{t\in\mathbb R}\tau_{ij}(t),\qquad \underline\tau_{ij}=\inf_{t\in\mathbb R}\tau_{ij}(t),\qquad \bar\sigma_{ij}=\sup_{t\in\mathbb R}\sigma_{ij}(t),\qquad \underline\sigma_{ij}=\inf_{t\in\mathbb R}\sigma_{ij}(t),\\
&\rho_i'=\sup_{t\in\mathbb R}\dot\rho_i(t),\qquad r_i'=\sup_{t\in\mathbb R}\dot r_i(t).
\end{aligned}
$$

Before ending this section, we introduce two assumptions, which will be used in the next section.

Assumption 1 There exist constants $L_j^f>0$ and $L_j^g>0$ such that

$$
0\le\frac{f_j(x)-f_j(y)}{x-y}\le L_j^f,\qquad 0\le\frac{g_j(x)-g_j(y)}{x-y}\le L_j^g,
$$

for all $x,y\in\mathbb R$, $x\neq y$, and $j\in\tilde N$.
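For instance, the activation functions $f_j(\cdot)=g_j(\cdot)=0.4\tanh(\cdot)$ used in Section 4 satisfy Assumption 1 with $L_j^f=L_j^g=0.4$.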

Assumption 2 Let $a_i\bar\rho_i<1$ and $c_i\bar r_i<1$ for all $i\in\tilde N$. There exist positive constants $\xi_1,\xi_2,\ldots,\xi_n$ and $\eta_1,\eta_2,\ldots,\eta_n$ such that, for $t>0$ and $i\in\tilde N$, the following inequalities hold:

$$
\begin{cases}
\bigl[-a_i(1-2a_i\bar\rho_i)+a_i\rho_i'+k_i\bigr]\dfrac{1}{1-a_i\bar\rho_i}\,\xi_i+\sum_{j=1}^{n}\bigl(\bigl|b_{ij}^{(1)}\bigr|+\bigl|b_{ij}^{(2)}\bigr|\bigr)L_j^g\,\dfrac{1}{1-c_j\bar r_j}\,\eta_j<0,\\[6pt]
\bigl[-c_i(1-2c_i\bar r_i)+c_i r_i'+l_i\bigr]\dfrac{1}{1-c_i\bar r_i}\,\eta_i+\sum_{j=1}^{n}\bigl(\bigl|d_{ij}^{(1)}\bigr|+\bigl|d_{ij}^{(2)}\bigr|\bigr)L_j^f\,\dfrac{1}{1-a_j\bar\rho_j}\,\xi_j<0.
\end{cases}
$$
(2)
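For concrete data, condition (2) can be checked numerically. The following Python sketch (not part of the original paper; the function name, argument layout, and use of NumPy are our own illustrative choices) evaluates the left-hand sides of (2) as reconstructed above.

```python
import numpy as np

def assumption2_lhs(a, c, k, l, B1, B2, D1, D2, Lf, Lg,
                    rho_bar, r_bar, rho_p, r_p, xi, eta):
    """Left-hand sides of condition (2) for i = 1, ..., n.

    a, c, k, l, Lf, Lg, rho_bar, r_bar, rho_p, r_p, xi, eta are length-n arrays
    (rho_p and r_p are the suprema of the leakage-delay derivatives);
    B1, B2, D1, D2 are n x n weight matrices.  Assumption 2 holds when every
    returned entry is negative (and a_i*rho_bar_i < 1, c_i*r_bar_i < 1).
    """
    lhs_x = ((-a * (1 - 2 * a * rho_bar) + a * rho_p + k) * xi / (1 - a * rho_bar)
             + (np.abs(B1) + np.abs(B2)) @ (Lg * eta / (1 - c * r_bar)))
    lhs_y = ((-c * (1 - 2 * c * r_bar) + c * r_p + l) * eta / (1 - c * r_bar)
             + (np.abs(D1) + np.abs(D2)) @ (Lf * xi / (1 - a * rho_bar)))
    return lhs_x, lhs_y

# Assumption 2 is satisfied when (lhs_x < 0).all() and (lhs_y < 0).all().
```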

3 Main results

In this section, we investigate the exponential stability of (1). By using the input delay approach [24], (1) can be rewritten in the following form:

$$
\begin{cases}
\dot x_i(t)=-a_i x_i(t-\rho_i(t))+\sum_{j=1}^{n}b_{ij}^{(1)}g_j(y_j(t))+\sum_{j=1}^{n}b_{ij}^{(2)}g_j\bigl(y_j(t-\tau_{ij}(t))\bigr)-k_i x_i(t-d_k(t)),\\[4pt]
\dot y_i(t)=-c_i y_i(t-r_i(t))+\sum_{j=1}^{n}d_{ij}^{(1)}f_j(x_j(t))+\sum_{j=1}^{n}d_{ij}^{(2)}f_j\bigl(x_j(t-\sigma_{ij}(t))\bigr)-l_i y_i(t-d_k(t)).
\end{cases}
$$
(3)

The initial conditions of (3) are $x_i(s)=\phi_i(s)$, $y_i(s)=\varphi_i(s)$, $s\in(-\infty,0]$, $i\in\tilde N$, where $\phi_i(s)$ and $\varphi_i(s)$ are continuous functions on $(-\infty,0]$.

The main results are stated as follows.

Theorem 1 Let Assumptions 1 and 2 hold; then the BAM neural network (3) is exponentially stable, i.e., there exists a positive constant $\lambda$ such that $|x_i(t)|=O(e^{-\lambda t})$ and $|y_i(t)|=O(e^{-\lambda t})$, $i\in\tilde N$.

Proof Define the continuous functions

$$
\begin{cases}
\Phi_i(\omega)=\bigl[-(a_i-\omega)(1-2a_i\bar\rho_i)+a_i\bigl(e^{\omega\bar\rho_i}-(1-\rho_i')\bigr)+k_i e^{\omega L}\bigr]\dfrac{1}{1-a_i\bar\rho_i}\,\xi_i+\sum_{j=1}^{n}\bigl(\bigl|b_{ij}^{(1)}\bigr|+e^{\omega\bar\tau_{ij}}\bigl|b_{ij}^{(2)}\bigr|\bigr)L_j^g\,\dfrac{1}{1-c_j\bar r_j}\,\eta_j,\\[6pt]
\Phi_{n+i}(\omega)=\bigl[-(c_i-\omega)(1-2c_i\bar r_i)+c_i\bigl(e^{\omega\bar r_i}-(1-r_i')\bigr)+l_i e^{\omega L}\bigr]\dfrac{1}{1-c_i\bar r_i}\,\eta_i+\sum_{j=1}^{n}\bigl(\bigl|d_{ij}^{(1)}\bigr|+e^{\omega\bar\sigma_{ij}}\bigl|d_{ij}^{(2)}\bigr|\bigr)L_j^f\,\dfrac{1}{1-a_j\bar\rho_j}\,\xi_j,
\end{cases}
$$
(4)

where $\omega\ge0$ and $i\in\tilde N$.

By Assumption 2, we have

$$
\begin{cases}
\Phi_i(0)=\bigl[-a_i(1-2a_i\bar\rho_i)+a_i\rho_i'+k_i\bigr]\dfrac{1}{1-a_i\bar\rho_i}\,\xi_i+\sum_{j=1}^{n}\bigl(\bigl|b_{ij}^{(1)}\bigr|+\bigl|b_{ij}^{(2)}\bigr|\bigr)L_j^g\,\dfrac{1}{1-c_j\bar r_j}\,\eta_j<0,\\[6pt]
\Phi_{n+i}(0)=\bigl[-c_i(1-2c_i\bar r_i)+c_i r_i'+l_i\bigr]\dfrac{1}{1-c_i\bar r_i}\,\eta_i+\sum_{j=1}^{n}\bigl(\bigl|d_{ij}^{(1)}\bigr|+\bigl|d_{ij}^{(2)}\bigr|\bigr)L_j^f\,\dfrac{1}{1-a_j\bar\rho_j}\,\xi_j<0.
\end{cases}
$$
(5)

Because Φ i (ω) and Φ n + i (ω) are continuous functions, we can choose a small positive constant λ such that, for all i N ˜ ,

$$
\begin{cases}
\Phi_i(\lambda)=\bigl[-(a_i-\lambda)(1-2a_i\bar\rho_i)+a_i\bigl(e^{\lambda\bar\rho_i}-(1-\rho_i')\bigr)+k_i e^{\lambda L}\bigr]\dfrac{1}{1-a_i\bar\rho_i}\,\xi_i+\sum_{j=1}^{n}\bigl(\bigl|b_{ij}^{(1)}\bigr|+e^{\lambda\bar\tau_{ij}}\bigl|b_{ij}^{(2)}\bigr|\bigr)L_j^g\,\dfrac{1}{1-c_j\bar r_j}\,\eta_j<0,\\[6pt]
\Phi_{n+i}(\lambda)=\bigl[-(c_i-\lambda)(1-2c_i\bar r_i)+c_i\bigl(e^{\lambda\bar r_i}-(1-r_i')\bigr)+l_i e^{\lambda L}\bigr]\dfrac{1}{1-c_i\bar r_i}\,\eta_i+\sum_{j=1}^{n}\bigl(\bigl|d_{ij}^{(1)}\bigr|+e^{\lambda\bar\sigma_{ij}}\bigl|d_{ij}^{(2)}\bigr|\bigr)L_j^f\,\dfrac{1}{1-a_j\bar\rho_j}\,\xi_j<0.
\end{cases}
$$
(6)

Let

$$
X_i(t)=e^{\lambda t}x_i(t)-\int_{t-\rho_i(t)}^{t}a_i e^{\lambda s}x_i(s)\,ds,\qquad
Y_i(t)=e^{\lambda t}y_i(t)-\int_{t-r_i(t)}^{t}c_i e^{\lambda s}y_i(s)\,ds,\qquad i\in\tilde N.
$$

Calculating the derivative of X i and Y i along the solution of (3), we have

$$
\begin{aligned}
\dot X_i(t) ={}& \lambda e^{\lambda t}x_i(t) + e^{\lambda t}\dot x_i(t) - a_i\bigl[e^{\lambda t}x_i(t) - (1-\dot\rho_i(t))e^{\lambda(t-\rho_i(t))}x_i(t-\rho_i(t))\bigr]\\
={}& \lambda e^{\lambda t}x_i(t) + e^{\lambda t}\Bigl[-a_i x_i(t-\rho_i(t)) + \sum_{j=1}^{n}b_{ij}^{(1)}g_j(y_j(t)) + \sum_{j=1}^{n}b_{ij}^{(2)}g_j\bigl(y_j(t-\tau_{ij}(t))\bigr) - k_i x_i(t-d_k(t))\Bigr]\\
&- a_i e^{\lambda t}x_i(t) + a_i(1-\dot\rho_i(t))e^{\lambda(t-\rho_i(t))}x_i(t-\rho_i(t))\\
={}& \lambda e^{\lambda t}x_i(t) - a_i e^{\lambda t}x_i(t) + a_i(1-\dot\rho_i(t))e^{\lambda(t-\rho_i(t))}x_i(t-\rho_i(t)) - a_i e^{\lambda t}x_i(t-\rho_i(t)) - k_i e^{\lambda t}x_i(t-d_k(t))\\
&+ e^{\lambda t}\Bigl[\sum_{j=1}^{n}b_{ij}^{(1)}g_j(y_j(t)) + \sum_{j=1}^{n}b_{ij}^{(2)}g_j\bigl(y_j(t-\tau_{ij}(t))\bigr)\Bigr]\\
={}& -(a_i-\lambda)X_i(t) - (a_i-\lambda)\int_{t-\rho_i(t)}^{t}a_i e^{\lambda s}x_i(s)\,ds - \bigl[a_i - a_i(1-\dot\rho_i(t))e^{-\lambda\rho_i(t)}\bigr]e^{\lambda t}x_i(t-\rho_i(t))\\
&- k_i e^{\lambda t}x_i(t-d_k(t)) + e^{\lambda t}\Bigl[\sum_{j=1}^{n}b_{ij}^{(1)}g_j(y_j(t)) + \sum_{j=1}^{n}b_{ij}^{(2)}g_j\bigl(y_j(t-\tau_{ij}(t))\bigr)\Bigr]
\end{aligned}
$$

and

$$
\begin{aligned}
\dot Y_i(t) ={}& \lambda e^{\lambda t}y_i(t) + e^{\lambda t}\dot y_i(t) - c_i\bigl[e^{\lambda t}y_i(t) - (1-\dot r_i(t))e^{\lambda(t-r_i(t))}y_i(t-r_i(t))\bigr]\\
={}& \lambda e^{\lambda t}y_i(t) + e^{\lambda t}\Bigl[-c_i y_i(t-r_i(t)) + \sum_{j=1}^{n}d_{ij}^{(1)}f_j(x_j(t)) + \sum_{j=1}^{n}d_{ij}^{(2)}f_j\bigl(x_j(t-\sigma_{ij}(t))\bigr) - l_i y_i(t-d_k(t))\Bigr]\\
&- c_i\bigl[e^{\lambda t}y_i(t) - (1-\dot r_i(t))e^{\lambda(t-r_i(t))}y_i(t-r_i(t))\bigr]\\
={}& -(c_i-\lambda)Y_i(t) - (c_i-\lambda)\int_{t-r_i(t)}^{t}c_i e^{\lambda s}y_i(s)\,ds - \bigl[c_i - c_i(1-\dot r_i(t))e^{-\lambda r_i(t)}\bigr]e^{\lambda t}y_i(t-r_i(t))\\
&- l_i e^{\lambda t}y_i(t-d_k(t)) + e^{\lambda t}\Bigl[\sum_{j=1}^{n}d_{ij}^{(1)}f_j(x_j(t)) + \sum_{j=1}^{n}d_{ij}^{(2)}f_j\bigl(x_j(t-\sigma_{ij}(t))\bigr)\Bigr].
\end{aligned}
$$

We define a positive constant M as follows:

$$
M=\max_{1\le i\le n}\Bigl\{\sup_{t\in(-\infty,0]}|X_i(t)|,\ \sup_{t\in(-\infty,0]}|Y_i(t)|\Bigr\},\qquad M>0.
$$

Let K be a positive number such that

$$
|X_i(t)|\le M<K\xi_i,\qquad |Y_i(t)|\le M<K\eta_i,\qquad \text{for all } t\in(-\infty,0].
$$
(7)

Now, we will prove that

$$
|X_i(t)|<K\xi_i,\qquad |Y_i(t)|<K\eta_i,\qquad \text{for all } t>0.
$$
(8)

Let $t_0=0$. We first prove that

$$
|X_i(t)|<K\xi_i,\qquad |Y_i(t)|<K\eta_i,\qquad \text{for } t\in[t_0,t_1).
$$
(9)

In fact, if this were not valid, there would exist $i\in\tilde N$ and $t_0^*\in[t_0,t_1)$ such that at least one of the following cases occurs:

$$
\begin{cases}
(\mathrm a)\ X_i(t_0^*)=K\xi_i,\quad \dot X_i(t_0^*)\ge0,\quad |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j\ \text{for } t\in(-\infty,t_0^*),\ j\in\tilde N,\\
(\mathrm b)\ X_i(t_0^*)=-K\xi_i,\quad \dot X_i(t_0^*)\le0,\quad |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j\ \text{for } t\in(-\infty,t_0^*),\ j\in\tilde N,\\
(\mathrm c)\ Y_i(t_0^*)=K\eta_i,\quad \dot Y_i(t_0^*)\ge0,\quad |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j\ \text{for } t\in(-\infty,t_0^*),\ j\in\tilde N,\\
(\mathrm d)\ Y_i(t_0^*)=-K\eta_i,\quad \dot Y_i(t_0^*)\le0,\quad |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j\ \text{for } t\in(-\infty,t_0^*),\ j\in\tilde N.
\end{cases}
$$
(10)

For $t\in(-\infty,t_0^*]$ and $j\in\tilde N$,

$$
e^{\lambda t}|x_j(t)|\le\Bigl|e^{\lambda t}x_j(t)-\int_{t-\rho_j(t)}^{t}a_j e^{\lambda s}x_j(s)\,ds\Bigr|+\Bigl|\int_{t-\rho_j(t)}^{t}a_j e^{\lambda s}x_j(s)\,ds\Bigr|\le K\xi_j+a_j\bar\rho_j\sup_{s\in(-\infty,t_0^*]}e^{\lambda s}|x_j(s)|.
$$

Hence, taking the supremum over $t\in(-\infty,t_0^*]$ and using $a_j\bar\rho_j<1$, we obtain

$$
e^{\lambda t}|x_j(t)|\le\sup_{s\in(-\infty,t_0^*]}e^{\lambda s}|x_j(s)|\le\frac{K\xi_j}{1-a_j\bar\rho_j}.
$$
(11)

Similarly, we have

$$
e^{\lambda t}|y_j(t)|\le\sup_{s\in(-\infty,t_0^*]}e^{\lambda s}|y_j(s)|\le\frac{K\eta_j}{1-c_j\bar r_j}.
$$

If (a) holds, we get

$$
\begin{aligned}
\dot X_i(t_0^*) ={}& -(a_i-\lambda)X_i(t_0^*) - (a_i-\lambda)\int_{t_0^*-\rho_i(t_0^*)}^{t_0^*}a_i e^{\lambda s}x_i(s)\,ds - \bigl[a_i - a_i(1-\dot\rho_i(t_0^*))e^{-\lambda\rho_i(t_0^*)}\bigr]e^{\lambda t_0^*}x_i\bigl(t_0^*-\rho_i(t_0^*)\bigr)\\
&- k_i e^{\lambda t_0^*}x_i\bigl(t_0^*-d_k(t_0^*)\bigr) + e^{\lambda t_0^*}\Bigl[\sum_{j=1}^{n}b_{ij}^{(1)}g_j(y_j(t_0^*)) + \sum_{j=1}^{n}b_{ij}^{(2)}g_j\bigl(y_j(t_0^*-\tau_{ij}(t_0^*))\bigr)\Bigr]\\
\le{}& -(a_i-\lambda)K\xi_i + (a_i-\lambda)a_i\bar\rho_i\frac{K\xi_i}{1-a_i\bar\rho_i} + \bigl|a_i - a_i(1-\dot\rho_i(t_0^*))e^{-\lambda\rho_i(t_0^*)}\bigr|e^{\lambda\rho_i(t_0^*)}e^{\lambda(t_0^*-\rho_i(t_0^*))}\bigl|x_i\bigl(t_0^*-\rho_i(t_0^*)\bigr)\bigr|\\
&+ k_i e^{\lambda d_k(t_0^*)}e^{\lambda(t_0^*-d_k(t_0^*))}\bigl|x_i\bigl(t_0^*-d_k(t_0^*)\bigr)\bigr| + e^{\lambda t_0^*}\sum_{j=1}^{n}\bigl|b_{ij}^{(1)}\bigr|L_j^g\bigl|y_j(t_0^*)\bigr| + \sum_{j=1}^{n}e^{\lambda\tau_{ij}(t_0^*)}\bigl|b_{ij}^{(2)}\bigr|L_j^g e^{\lambda(t_0^*-\tau_{ij}(t_0^*))}\bigl|y_j\bigl(t_0^*-\tau_{ij}(t_0^*)\bigr)\bigr|\\
\le{}& -(a_i-\lambda)K\xi_i + (a_i-\lambda)a_i\bar\rho_i\frac{K\xi_i}{1-a_i\bar\rho_i} + \bigl[a_i e^{\lambda\rho_i(t_0^*)} - a_i(1-\dot\rho_i(t_0^*))\bigr]\frac{K\xi_i}{1-a_i\bar\rho_i} + k_i e^{\lambda L}\frac{K\xi_i}{1-a_i\bar\rho_i}\\
&+ \sum_{j=1}^{n}\bigl|b_{ij}^{(1)}\bigr|L_j^g\frac{K\eta_j}{1-c_j\bar r_j} + \sum_{j=1}^{n}e^{\lambda\bar\tau_{ij}}\bigl|b_{ij}^{(2)}\bigr|L_j^g\frac{K\eta_j}{1-c_j\bar r_j}\\
\le{}& \biggl\{\bigl[-(a_i-\lambda)(1-2a_i\bar\rho_i) + a_i\bigl(e^{\lambda\bar\rho_i}-(1-\rho_i')\bigr) + k_i e^{\lambda L}\bigr]\frac{\xi_i}{1-a_i\bar\rho_i} + \sum_{j=1}^{n}\bigl(\bigl|b_{ij}^{(1)}\bigr| + e^{\lambda\bar\tau_{ij}}\bigl|b_{ij}^{(2)}\bigr|\bigr)L_j^g\frac{\eta_j}{1-c_j\bar r_j}\biggr\}K\\
={}& \Phi_i(\lambda)K < 0,
\end{aligned}
$$

which contradicts (a).

If (b) holds, we get

$$
\begin{aligned}
\dot X_i(t_0^*) \ge{}& (a_i-\lambda)K\xi_i - (a_i-\lambda)a_i\bar\rho_i\frac{K\xi_i}{1-a_i\bar\rho_i} - \bigl|a_i - a_i(1-\dot\rho_i(t_0^*))e^{-\lambda\rho_i(t_0^*)}\bigr|e^{\lambda\rho_i(t_0^*)}e^{\lambda(t_0^*-\rho_i(t_0^*))}\bigl|x_i\bigl(t_0^*-\rho_i(t_0^*)\bigr)\bigr|\\
&- k_i e^{\lambda d_k(t_0^*)}e^{\lambda(t_0^*-d_k(t_0^*))}\bigl|x_i\bigl(t_0^*-d_k(t_0^*)\bigr)\bigr| - e^{\lambda t_0^*}\sum_{j=1}^{n}\bigl|b_{ij}^{(1)}\bigr|L_j^g\bigl|y_j(t_0^*)\bigr| - \sum_{j=1}^{n}e^{\lambda\tau_{ij}(t_0^*)}\bigl|b_{ij}^{(2)}\bigr|L_j^g e^{\lambda(t_0^*-\tau_{ij}(t_0^*))}\bigl|y_j\bigl(t_0^*-\tau_{ij}(t_0^*)\bigr)\bigr|\\
\ge{}& (a_i-\lambda)K\xi_i - (a_i-\lambda)a_i\bar\rho_i\frac{K\xi_i}{1-a_i\bar\rho_i} - \bigl[a_i e^{\lambda\rho_i(t_0^*)} - a_i(1-\dot\rho_i(t_0^*))\bigr]\frac{K\xi_i}{1-a_i\bar\rho_i} - k_i e^{\lambda L}\frac{K\xi_i}{1-a_i\bar\rho_i}\\
&- \sum_{j=1}^{n}\bigl|b_{ij}^{(1)}\bigr|L_j^g\frac{K\eta_j}{1-c_j\bar r_j} - \sum_{j=1}^{n}e^{\lambda\bar\tau_{ij}}\bigl|b_{ij}^{(2)}\bigr|L_j^g\frac{K\eta_j}{1-c_j\bar r_j}\\
\ge{}& \biggl\{\bigl[-(a_i-\lambda)(1-2a_i\bar\rho_i) + a_i\bigl(e^{\lambda\bar\rho_i}-(1-\rho_i')\bigr) + k_i e^{\lambda L}\bigr]\frac{\xi_i}{1-a_i\bar\rho_i} + \sum_{j=1}^{n}\bigl(\bigl|b_{ij}^{(1)}\bigr| + e^{\lambda\bar\tau_{ij}}\bigl|b_{ij}^{(2)}\bigr|\bigr)L_j^g\frac{\eta_j}{1-c_j\bar r_j}\biggr\}(-K)\\
={}& -\Phi_i(\lambda)K > 0.
\end{aligned}
$$

This contradicts (b).

Similarly, if (c) or (d) holds, a contradiction can be derived in the same way. Hence (9) holds. From (7) and (9), we have

$$
|X_i(t)|<K\xi_i,\qquad |Y_i(t)|<K\eta_i,\qquad \text{for all } t\in(-\infty,t_1).
$$
(12)

Next, we will prove

$$
|X_i(t)|<K\xi_i,\qquad |Y_i(t)|<K\eta_i,\qquad \text{for } t\in[t_1,t_2),\ i\in\tilde N.
$$
(13)

If this were not the case, there would exist $i\in\tilde N$ and $t_1^*\in[t_1,t_2)$ such that one of the following cases occurs:

$$
\begin{cases}
(\mathrm a)\ X_i(t_1^*)=K\xi_i,\quad \dot X_i(t_1^*)\ge0,\quad |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j\ \text{for } t\in(-\infty,t_1^*),\ j\in\tilde N,\\
(\mathrm b)\ X_i(t_1^*)=-K\xi_i,\quad \dot X_i(t_1^*)\le0,\quad |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j\ \text{for } t\in(-\infty,t_1^*),\ j\in\tilde N,\\
(\mathrm c)\ Y_i(t_1^*)=K\eta_i,\quad \dot Y_i(t_1^*)\ge0,\quad |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j\ \text{for } t\in(-\infty,t_1^*),\ j\in\tilde N,\\
(\mathrm d)\ Y_i(t_1^*)=-K\eta_i,\quad \dot Y_i(t_1^*)\le0,\quad |X_j(t)|<K\xi_j,\ |Y_j(t)|<K\eta_j\ \text{for } t\in(-\infty,t_1^*),\ j\in\tilde N.
\end{cases}
$$
(14)

Similar to the proof of (9), we can deduce that (13) holds. Combining (12) and (13), we have

$$
|X_i(t)|<K\xi_i,\qquad |Y_i(t)|<K\eta_i,\qquad \text{for all } t\in(-\infty,t_2).
$$
(15)

By mathematical induction, the inequalities (8) hold for all $t>0$. By an argument similar to the proof of (11), we then have $e^{\lambda t}|x_i(t)|\le\frac{K\xi_i}{1-a_i\bar\rho_i}$ and $e^{\lambda t}|y_i(t)|\le\frac{K\eta_i}{1-c_i\bar r_i}$ for $t>0$, which implies $|x_i(t)|=O(e^{-\lambda t})$ and $|y_i(t)|=O(e^{-\lambda t})$, $i\in\tilde N$. This completes the proof. □

Remark 2 If the leakage delays in (3) are constant, that is, $\rho_i(t)\equiv\rho$ and $r_i(t)\equiv r$, then Assumption 2 reduces to the following form.

Assumption 2′ Let $a_i\rho<1$ and $c_i r<1$ for all $i\in\tilde N$. There exist positive constants $\xi_1,\xi_2,\ldots,\xi_n$ and $\eta_1,\eta_2,\ldots,\eta_n$ such that, for $t>0$ and $i\in\tilde N$, the following conditions hold:

$$
\begin{cases}
\bigl[-a_i(1-2\rho a_i)+k_i\bigr]\dfrac{1}{1-a_i\rho}\,\xi_i+\sum_{j=1}^{n}\bigl(\bigl|b_{ij}^{(1)}\bigr|+\bigl|b_{ij}^{(2)}\bigr|\bigr)L_j^g\,\dfrac{1}{1-c_j r}\,\eta_j<0,\\[6pt]
\bigl[-c_i(1-2r c_i)+l_i\bigr]\dfrac{1}{1-c_i r}\,\eta_i+\sum_{j=1}^{n}\bigl(\bigl|d_{ij}^{(1)}\bigr|+\bigl|d_{ij}^{(2)}\bigr|\bigr)L_j^f\,\dfrac{1}{1-a_j\rho}\,\xi_j<0.
\end{cases}
$$
(16)

Similar to the proof of Theorem 1, we get the following result.

Corollary 1 If Assumptions 1 and 2′ hold, then the BAM neural network with constant leakage delays and sampled-data state feedback inputs is exponentially stable.

4 Simulation example

In this section, we give an illustrative example to show the effectiveness of our theoretical results.

Example 1 Consider the following BAM neural network with leakage delays and sampled-data state feedback inputs:

$$
\begin{cases}
\dot x(t)=-Ax(t-\rho_i(t))+B_1 g(y(t))+B_2 g\bigl(y(t-\tau(t))\bigr)-Kx(t_k),\\[4pt]
\dot y(t)=-Cy(t-r_i(t))+D_1 f(x(t))+D_2 f\bigl(x(t-\sigma(t))\bigr)-Ly(t_k),
\end{cases}
$$
(17)

where

$$
A=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix},\qquad
C=\begin{pmatrix}0.9&0&0\\0&0.9&0\\0&0&0.9\end{pmatrix},\qquad
B_1=\begin{pmatrix}0&0&0.2\\0.2&0&0.5\\0&0.2&1\end{pmatrix},
$$
$$
B_2=\begin{pmatrix}1&0&0.2\\0.2&0&0.4\\0.2&0.2&0\end{pmatrix},\qquad
D_1=\begin{pmatrix}0.1&0.1&0\\0.1&0.1&0.1\\0&0&0.1\end{pmatrix},\qquad
D_2=\begin{pmatrix}0.1&0&0\\0&0.1&0\\0&0&0.1\end{pmatrix},
$$

and the sampled-data feedback gains

$$
K=L=\begin{pmatrix}0.2&0&0\\0&0.2&0\\0&0&0.2\end{pmatrix}.
$$

The activation functions are taken as $f(\cdot)=g(\cdot)=0.4\tanh(\cdot)$. The time-varying delays are chosen as $\tau(t)=0.1|\sin t|$ and $\sigma(t)=0.1|\cos t|$, and the leakage delays are chosen as $\rho_i(t)=0.2+0.01\sin t$ and $r_i(t)=0.2+0.01\cos t$, respectively.

It is easy to verify that $a_i\bar\rho_i<1$ and $c_i\bar r_i<1$. Selecting $\xi_i=20$ and $\eta_i=10$, $i=1,2,3$, we obtain

$$
\begin{cases}
\bigl[-a_1(1-2a_1\bar\rho_1)+a_1\rho_1'+k_1\bigr]\dfrac{1}{1-a_1\bar\rho_1}\,\xi_1+\Bigl[\sum_{j=1}^{3}\bigl|b_{1j}^{(1)}\bigr|+\sum_{j=1}^{3}\bigl|b_{1j}^{(2)}\bigr|\Bigr]L_1^g\,\dfrac{1}{1-c_1\bar r_1}\,\eta_1=-2.9207<0,\\[6pt]
\bigl[-a_2(1-2a_2\bar\rho_2)+a_2\rho_2'+k_2\bigr]\dfrac{1}{1-a_2\bar\rho_2}\,\xi_2+\Bigl[\sum_{j=1}^{3}\bigl|b_{2j}^{(1)}\bigr|+\sum_{j=1}^{3}\bigl|b_{2j}^{(2)}\bigr|\Bigr]L_2^g\,\dfrac{1}{1-c_2\bar r_2}\,\eta_2=-3.4085<0,\\[6pt]
\bigl[-a_3(1-2a_3\bar\rho_3)+a_3\rho_3'+k_3\bigr]\dfrac{1}{1-a_3\bar\rho_3}\,\xi_3+\Bigl[\sum_{j=1}^{3}\bigl|b_{3j}^{(1)}\bigr|+\sum_{j=1}^{3}\bigl|b_{3j}^{(2)}\bigr|\Bigr]L_3^g\,\dfrac{1}{1-c_3\bar r_3}\,\eta_3=-1.9451<0,\\[6pt]
\bigl[-c_1(1-2c_1\bar r_1)+c_1 r_1'+l_1\bigr]\dfrac{1}{1-c_1\bar r_1}\,\eta_1+\Bigl[\sum_{j=1}^{3}\bigl|d_{1j}^{(1)}\bigr|+\sum_{j=1}^{3}\bigl|d_{1j}^{(2)}\bigr|\Bigr]L_1^f\,\dfrac{1}{1-a_1\bar\rho_1}\,\xi_1=-1.4756<0,\\[6pt]
\bigl[-c_2(1-2c_2\bar r_2)+c_2 r_2'+l_2\bigr]\dfrac{1}{1-c_2\bar r_2}\,\eta_2+\Bigl[\sum_{j=1}^{3}\bigl|d_{2j}^{(1)}\bigr|+\sum_{j=1}^{3}\bigl|d_{2j}^{(2)}\bigr|\Bigr]L_2^f\,\dfrac{1}{1-a_2\bar\rho_2}\,\xi_2=-0.4756<0,\\[6pt]
\bigl[-c_3(1-2c_3\bar r_3)+c_3 r_3'+l_3\bigr]\dfrac{1}{1-c_3\bar r_3}\,\eta_3+\Bigl[\sum_{j=1}^{3}\bigl|d_{3j}^{(1)}\bigr|+\sum_{j=1}^{3}\bigl|d_{3j}^{(2)}\bigr|\Bigr]L_3^f\,\dfrac{1}{1-a_3\bar\rho_3}\,\xi_3=-0.4756<0.
\end{cases}
$$
(18)

This means that all conditions in Theorem 1 are satisfied. Hence, by Theorem 1, system (17) is exponentially stable. The corresponding simulation result is shown in Figure 1.

Figure 1 State trajectories of the system (17).
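A trajectory of the kind shown in Figure 1 can be reproduced with a simple forward-Euler simulation of (17) under zero-order-hold sampled feedback. The following Python sketch is only illustrative: the step size, sampling period, simulation horizon, and initial history are our own assumptions and are not taken from the paper.

```python
import numpy as np

# System data from Example 1 (the signs of the weight entries follow the matrices above).
A  = np.eye(3)
C  = 0.9 * np.eye(3)
B1 = np.array([[0.0, 0.0, 0.2], [0.2, 0.0, 0.5], [0.0, 0.2, 1.0]])
B2 = np.array([[1.0, 0.0, 0.2], [0.2, 0.0, 0.4], [0.2, 0.2, 0.0]])
D1 = np.array([[0.1, 0.1, 0.0], [0.1, 0.1, 0.1], [0.0, 0.0, 0.1]])
D2 = 0.1 * np.eye(3)
K_gain = 0.2 * np.eye(3)   # sampled-data gain K
L_gain = 0.2 * np.eye(3)   # sampled-data gain L

f = g = lambda z: 0.4 * np.tanh(z)
rho = lambda t: 0.2 + 0.01 * np.sin(t)    # leakage delays
r   = lambda t: 0.2 + 0.01 * np.cos(t)
tau = lambda t: 0.1 * abs(np.sin(t))      # transmission delays
sig = lambda t: 0.1 * abs(np.cos(t))

h, T, Ts = 0.001, 20.0, 0.05              # Euler step, horizon, sampling period (assumed)
steps, hist = int(T / h), int(0.3 / h)    # history buffer longer than the largest delay
x = np.zeros((steps + hist, 3))
y = np.zeros((steps + hist, 3))
x[:hist] = [0.5, -0.3, 0.8]               # assumed constant initial history
y[:hist] = [-0.6, 0.4, -0.2]

def delayed(buf, n, t, d):
    """State at time t - d(t), read back from the discretized history buffer."""
    return buf[n - int(round(d(t) / h))]

xk, yk = x[hist - 1].copy(), y[hist - 1].copy()   # most recent sampled states
for n in range(hist - 1, steps + hist - 1):
    t = (n - hist + 1) * h
    if (n - hist + 1) % int(round(Ts / h)) == 0:  # sampling instant t_k
        xk, yk = x[n].copy(), y[n].copy()
    dx = -A @ delayed(x, n, t, rho) + B1 @ g(y[n]) + B2 @ g(delayed(y, n, t, tau)) - K_gain @ xk
    dy = -C @ delayed(y, n, t, r) + D1 @ f(x[n]) + D2 @ f(delayed(x, n, t, sig)) - L_gain @ yk
    x[n + 1] = x[n] + h * dx
    y[n + 1] = y[n] + h * dy

print("final states:", x[-1], y[-1])   # both should decay towards the origin
```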

5 Conclusion

In this paper, we have investigated the exponential stability of BAM neural networks with leakage delays and sampled-data state feedback input. By using the time-delay approach, conditions ensuring the exponential stability of the system have been derived. Although many papers have studied the stability of sampled-data systems, leakage delays and sampled-data state feedback have not previously been considered together for BAM neural networks. To the best of our knowledge, this is the first work to study the stability of BAM neural networks with both leakage delays and sampled-data state feedback at the same time, so the results of this paper complement the existing literature. Finally, a numerical example and its computer simulation have been presented to show the effectiveness of the theoretical results.

References

1. Kosko B: Adaptive bi-directional associative memories. Appl. Opt. 1987, 26: 4947-4960. 10.1364/AO.26.004947
2. Kosko B: Bi-directional associative memories. IEEE Trans. Syst. Man Cybern. 1988, 18: 49-60. 10.1109/21.87054
3. Gao M, Cui B: Global robust exponential stability of discrete-time interval BAM neural networks with time-varying delays. Appl. Math. Model. 2009, 33(3): 1270-1284. 10.1016/j.apm.2008.01.019
4. Wu A, Zeng Z: Dynamic behaviors of memristor-based recurrent neural networks with time-varying delays. Neural Netw. 2012, 36: 1-10.
5. Wang Z, Zhang H: Global asymptotic stability of reaction-diffusion Cohen-Grossberg neural networks with continuously distributed delays. IEEE Trans. Neural Netw. 2010, 21(1): 39-48.
6. Li Y, Yang C: Global exponential stability analysis on impulsive BAM neural networks with distributed delays. J. Math. Anal. Appl. 2006, 324(2): 1125-1139. 10.1016/j.jmaa.2006.01.016
7. Lou X, Ye Q, Cui B: Exponential stability of genetic regulatory networks with random delays. Neurocomputing 2010, 73: 759-769. 10.1016/j.neucom.2009.10.006
8. Gopalsamy K: Leakage delays in BAM. J. Math. Anal. Appl. 2007, 325: 1117-1132. 10.1016/j.jmaa.2006.02.039
9. Zhang H, Shao J: Existence and exponential stability of almost periodic solutions for CNNs with time-varying leakage delays. Neurocomputing 2013, 74: 226-233.
10. Arik S, Tavsanoglu V: Global asymptotic stability analysis of bidirectional associative memory neural networks with constant time delays. Neurocomputing 2005, 68: 161-176.
11. Cao J, Wang L: Periodic oscillatory solution of bidirectional associative memory networks with delays. Phys. Rev. E 2000, 61: 1825-1828.
12. Liu Z, Chen A, Cao J: Existence and global exponential stability of almost periodic solutions of BAM neural networks with distributed delays. Phys. Lett. A 2003, 319: 305-316. 10.1016/j.physleta.2003.10.020
13. Chen A, Huang L, Cao J: Existence and stability of almost periodic solution for BAM neural networks with delays. Appl. Math. Comput. 2003, 137: 177-193. 10.1016/S0096-3003(02)00095-4
14. Liu B: Global exponential stability for BAM neural networks with time-varying delays in the leakage terms. Nonlinear Anal., Real World Appl. 2013, 14: 559-566. 10.1016/j.nonrwa.2012.07.016
15. Peng S: Global attractive periodic solutions of BAM neural networks with continuously distributed delays in the leakage terms. Nonlinear Anal., Real World Appl. 2010, 11: 2141-2151. 10.1016/j.nonrwa.2009.06.004
16. Balasubramaniam P, Kalpana M, Rakkiyappan R: Global asymptotic stability of BAM fuzzy cellular neural networks with time delay in the leakage term, discrete and unbounded distributed delays. Math. Comput. Model. 2011, 53: 839-853. 10.1016/j.mcm.2010.10.021
17. Lakshmanan S, Park J, Lee T, Jung H, Rakkiyappan R: Stability criteria for BAM neural networks with leakage delays and probabilistic time-varying delays. Appl. Math. Comput. 2013, 219: 9408-9423. 10.1016/j.amc.2013.03.070
18. Wu Z, Shi P, Su H, Chu J: Sampled-data synchronization of chaotic Lur'e systems with time delays. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24(3): 410-421.
19. Gan Q: Synchronisation of chaotic neural networks with unknown parameters and random time-varying delays based on adaptive sampled-data control and parameter identification. IET Control Theory Appl. 2012, 6(10): 1508-1515. 10.1049/iet-cta.2011.0426
20. Lee T, Park J, Kwon O, Lee S: Stochastic sampled-data control for state estimation of time-varying delayed neural networks. Neural Netw. 2013, 46: 99-108.
21. Hu J, Li N, Liu X, Zhang G: Sampled-data state estimation for delayed neural networks with Markovian jumping parameters. Nonlinear Dyn. 2013, 73: 275-284. 10.1007/s11071-013-0783-1
22. Rakkiyappan R, Sakthivel N, Park J, Kwon O: Sampled-data state estimation for Markovian jumping fuzzy cellular neural networks with mode-dependent probabilistic time-varying delays. Appl. Math. Comput. 2013, 221: 741-769.
23. Fridman E, Blighovsky A: Robust sampled-data control of a class of semilinear parabolic systems. Automatica 2012, 48: 826-836. 10.1016/j.automatica.2012.02.006
24. Samadi B, Rodrigues L: Stability of sampled-data piecewise affine systems: a time-delay approach. Automatica 2009, 45: 1995-2001. 10.1016/j.automatica.2009.04.025
25. Rodrigues L: Stability of sampled-data piecewise-affine systems under state feedback. Automatica 2007, 43: 1249-1256. 10.1016/j.automatica.2006.12.016
26. Zhang C, Jiang L, He Y, Wu H, Wu M: Stability analysis for control systems with aperiodically sampled data using an augmented Lyapunov functional method. IET Control Theory Appl. 2013, 7(9): 1219-1226. 10.1049/iet-cta.2012.0814
27. Seuret A, Peet M: Stability analysis of sampled-data systems using sum of squares. IEEE Trans. Autom. Control 2013, 58(6): 1620-1625.
28. Oishi Y, Fujioka H: Stability and stabilization of aperiodic sampled-data control systems using robust linear matrix inequalities. Automatica 2010, 46: 1327-1333. 10.1016/j.automatica.2010.05.006
29. Feng L, Song Y: Stability condition for sampled data based control of linear continuous switched systems. Syst. Control Lett. 2011, 60: 787-797. 10.1016/j.sysconle.2011.07.006

Acknowledgements

This work was jointly supported by the National Natural Science Foundation of China under Grant 60875036, the Foundation of the Key Laboratory of Advanced Process Control for Light Industry (Jiangnan University), Ministry of Education, P.R. China, and the Fundamental Research Funds for the Central Universities (JUSRP51317B, JUDCF13042).

Author information


Correspondence to Yongqing Yang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

LL carried out the main results of this paper and drafted the manuscript. YY directed the study and helped to inspect the manuscript. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Li, L., Yang, Y., Liang, T. et al. The exponential stability of BAM neural networks with leakage time-varying delays and sampled-data state feedback input. Adv Differ Equ 2014, 39 (2014). https://doi.org/10.1186/1687-1847-2014-39
