Almost periodic solutions for neutral delay Hopfield neural networks with time-varying delays in the leakage term on time scales

Abstract

In this paper, a class of neutral delay Hopfield neural networks with time-varying delays in the leakage term on time scales is considered. By using the exponential dichotomy of linear dynamic equations on time scales, Banach's fixed point theorem and the theory of calculus on time scales, some sufficient conditions are obtained for the existence and exponential stability of almost periodic solutions for this class of neural networks. Finally, a numerical example illustrates the feasibility of our results and also shows that the continuous-time neural network and its discrete-time analogue have the same dynamical behavior. The results of this paper are completely new and complementary to the previously known results even when the time scale is $\mathbb{T}=\mathbb{R}$ or $\mathbb{T}=\mathbb{Z}$.

1 Introduction

The dynamical properties of delayed Hopfield neural networks have been extensively studied because such networks can be applied to pattern recognition, image processing, speed detection of moving objects, optimization problems and many other fields. Moreover, owing to the finite speed of information processing, time delays are unavoidable and frequently cause oscillation, divergence, or instability in neural networks. It is therefore of prime importance to consider delay effects on the stability of neural networks. Up to now, neural networks with various types of delay have been widely investigated by many authors [1–20].

However, so far very little attention has been paid to neural networks with time delay in the leakage (or 'forgetting') term [21–35]. Such leakage delays are difficult to handle and have rarely been considered in the literature, even though the leakage term has a great impact on the dynamical behavior of neural networks. Recently, another type of time delay, namely the neutral-type delay, which arises in the study of automatic control, population dynamics, vibrating masses attached to an elastic bar, etc., has also drawn much research attention; so far only a few papers have taken the neutral-type phenomenon into account in delayed neural networks [33–43].

In fact, both continuous and discrete systems are important in implementations and applications, but it is troublesome to study the existence of almost periodic solutions for continuous and for discrete systems separately. It is therefore meaningful to study this problem on time scales, which unify the continuous and discrete situations (see [44–50]).

To the best of our knowledge, no papers have yet been published on the existence and stability of almost periodic solutions to neutral-type delay neural networks with time-varying delays in the leakage term on time scales. Thus, it is important, and in effect necessary, to study the existence of almost periodic solutions for neutral-type neural networks with time-varying delays in the leakage term on time scales.

Motivated by the above, in this paper we propose the following neutral delay Hopfield neural network with time-varying delays in the leakage term on a time scale $\mathbb{T}$:

$$\begin{aligned} x_i^{\Delta}(t) ={}& -c_i(t)x_i\big(t-\eta_i(t)\big) + \sum_{j=1}^{n} a_{ij}(t) f_j\big(x_j(t-\tau_{ij}(t))\big) + \sum_{j=1}^{n} d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)h_j\big(x_j(s)\big)\,\Delta s \\ &+ \sum_{j=1}^{n} b_{ij}(t) g_j\big(x_j^{\Delta}(t-\sigma_{ij}(t))\big) + \sum_{j=1}^{n} e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)k_j\big(x_j^{\Delta}(s)\big)\,\Delta s + I_i(t), \quad i=1,2,\ldots,n, \end{aligned}$$
(1.1)

where $\mathbb{T}$ is an almost periodic time scale that will be defined in the next section, $x_i(t)$ denotes the potential (or voltage) of cell $i$ at time $t$, $c_i(t)>0$ represents the rate at which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and external inputs at time $t$, $a_{ij}(t)$, $b_{ij}(t)$, $d_{ij}(t)$ and $e_{ij}(t)$ represent the delayed strengths of connectivity and the neutral delayed strengths of connectivity between cells $i$ and $j$ at time $t$, respectively, $\theta_{ij}(\cdot)$ and $\xi_{ij}(\cdot)$ are the kernel functions determining the distributed delays, $f_j$, $g_j$, $h_j$ and $k_j$ are the activation functions in system (1.1), $I_i(t)$ is an external input on the $i$th unit at time $t$, and $\tau_{ij}(t)\geq 0$ and $\sigma_{ij}(t)\geq 0$ correspond to the transmission delays of the $i$th unit along the axon of the $j$th unit at time $t$.

If $\mathbb{T}=\mathbb{R}$, then system (1.1) reduces to the following continuous-time neutral delay Hopfield neural network:

$$\begin{aligned} x_i'(t) ={}& -c_i(t)x_i\big(t-\eta_i(t)\big) + \sum_{j=1}^{n} a_{ij}(t) f_j\big(x_j(t-\tau_{ij}(t))\big) + \sum_{j=1}^{n} d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)h_j\big(x_j(s)\big)\,ds \\ &+ \sum_{j=1}^{n} b_{ij}(t) g_j\big(x_j'(t-\sigma_{ij}(t))\big) + \sum_{j=1}^{n} e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)k_j\big(x_j'(s)\big)\,ds + I_i(t), \quad i=1,2,\ldots,n, \end{aligned}$$
(1.2)

and if $\mathbb{T}=\mathbb{Z}$, then system (1.1) reduces to the discrete-time neutral delay Hopfield neural network

$$\begin{aligned} \Delta x_i(t) ={}& -c_i(t)x_i\big(t-\eta_i(t)\big) + \sum_{j=1}^{n} a_{ij}(t) f_j\big(x_j(t-\tau_{ij}(t))\big) + \sum_{j=1}^{n} d_{ij}(t)\sum_{s=t-\delta_{ij}(t)}^{t-1}\theta_{ij}(s)h_j\big(x_j(s)\big) \\ &+ \sum_{j=1}^{n} b_{ij}(t) g_j\big(\Delta x_j(t-\sigma_{ij}(t))\big) + \sum_{j=1}^{n} e_{ij}(t)\sum_{s=t-\zeta_{ij}(t)}^{t-1}\xi_{ij}(s)k_j\big(\Delta x_j(s)\big) + I_i(t), \quad i=1,2,\ldots,n, \end{aligned}$$
(1.3)

where $t\in\mathbb{Z}$ and $\Delta x(t)=x(t+1)-x(t)$. When $\eta_i(t)\equiv 0$, $d_{ij}(t)\equiv 0$, $e_{ij}(t)\equiv 0$, $i,j=1,2,\ldots,n$, Bai [37] and Xiao [38] studied the almost periodicity of (1.2). However, even when $\eta_i(t)\equiv 0$, $i=1,2,\ldots,n$, the almost periodicity of (1.3), the discrete-time analogue of (1.2), has not been studied yet.
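
Indeed, on $\mathbb{T}=\mathbb{Z}$ the $\Delta$-derivative is the forward difference operator and the $\Delta$-integral is a finite sum, which is exactly how (1.3) arises from (1.1); concretely,

$$x^{\Delta}(t)=\Delta x(t)=x(t+1)-x(t), \qquad \int_{a}^{b}f(s)\,\Delta s=\sum_{s=a}^{b-1}f(s), \quad a,b\in\mathbb{Z},\ a<b.$$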

For convenience, for any almost periodic function $f(t)$ defined on $\mathbb{T}$, we denote $f^{-}=\inf_{t\in\mathbb{T}}|f(t)|$ and $f^{+}=\sup_{t\in\mathbb{T}}|f(t)|$.

The initial condition associated with system (1.1) is of the form

$$x_i(t)=\varphi_i(t), \qquad x_i^{\Delta}(t)=\varphi_i^{\Delta}(t), \quad t\in[-\iota,0]_{\mathbb{T}},\ i=1,2,\ldots,n,$$

where $\varphi_i(\cdot)$ denotes a real-valued bounded $\Delta$-differentiable function defined on $[-\iota,0]_{\mathbb{T}}$ and $\iota=\max_{1\leq i,j\leq n}\{\eta_i^{+},\tau_{ij}^{+},\sigma_{ij}^{+},\delta_{ij}^{+},\zeta_{ij}^{+}\}$.

Throughout this paper, we assume that:

(H1) $c_i(t)>0$ with $-c_i\in\mathcal{R}^{+}$, $\eta_i(t)\geq 0$, $\tau_{ij}(t)\geq 0$, $\sigma_{ij}(t)\geq 0$, $\delta_{ij}(t)\geq 0$, $\zeta_{ij}(t)\geq 0$, and $a_{ij}(t)$, $b_{ij}(t)$, $d_{ij}(t)$, $e_{ij}(t)$, $I_i(t)$ are all almost periodic functions on $\mathbb{T}$; moreover, $t-\eta_i(t)\in\mathbb{T}$, $t-\tau_{ij}(t)\in\mathbb{T}$, $t-\sigma_{ij}(t)\in\mathbb{T}$, $t-\delta_{ij}(t)\in\mathbb{T}$, $t-\zeta_{ij}(t)\in\mathbb{T}$ for $t\in\mathbb{T}$, $i,j=1,2,\ldots,n$.

(H2) There exist positive constants $L_i$, $l_i$, $L_i^{h}$, $l_i^{k}$ such that for $i=1,2,\ldots,n$,

$$|f_i(x)-f_i(y)|\leq L_i|x-y|, \qquad |g_i(x)-g_i(y)|\leq l_i|x-y|,$$
$$|h_i(x)-h_i(y)|\leq L_i^{h}|x-y|, \qquad |k_i(x)-k_i(y)|\leq l_i^{k}|x-y|,$$

where $x,y\in\mathbb{R}$ and $f_i(0)=g_i(0)=h_i(0)=k_i(0)=0$.

(H3) For $i,j=1,2,\ldots,n$, the delay kernels $\theta_{ij},\xi_{ij}:\mathbb{T}\rightarrow\mathbb{R}$ are continuous and integrable with

$$0<\int_{t-\delta_{ij}^{+}}^{t}|\theta_{ij}(s)|\,\Delta s\leq\bar{\theta}_{ij}, \qquad 0<\int_{t-\zeta_{ij}^{+}}^{t}|\xi_{ij}(s)|\,\Delta s\leq\bar{\xi}_{ij}.$$

The main purpose of this paper is to study the existence and global exponential stability of almost periodic solutions of (1.1). Our results are completely new and complementary to the previously known results even when the time scale is $\mathbb{T}=\mathbb{R}$ or $\mathbb{T}=\mathbb{Z}$. The rest of this paper is organized as follows. In Section 2, we introduce some definitions and make some preparations for later sections. In Sections 3 and 4, by utilizing Banach's fixed point theorem and the theory of calculus on time scales, we present some sufficient conditions which guarantee the existence of a unique globally exponentially stable almost periodic solution of system (1.1). In Section 5, we give an example to illustrate the feasibility and effectiveness of our results. We draw a conclusion in Section 6.

2 Preliminaries

In this section, we shall first recall some basic definitions and lemmas which will be useful for the proof of our main results.

Let $\mathbb{T}$ be a nonempty closed subset (time scale) of $\mathbb{R}$. The forward and backward jump operators $\sigma,\rho:\mathbb{T}\rightarrow\mathbb{T}$ and the graininess $\mu:\mathbb{T}\rightarrow\mathbb{R}^{+}$ are defined, respectively, by

$$\sigma(t)=\inf\{s\in\mathbb{T}:s>t\}, \qquad \rho(t)=\sup\{s\in\mathbb{T}:s<t\} \quad\text{and}\quad \mu(t)=\sigma(t)-t.$$

A point $t\in\mathbb{T}$ is called left-dense if $t>\inf\mathbb{T}$ and $\rho(t)=t$, left-scattered if $\rho(t)<t$, right-dense if $t<\sup\mathbb{T}$ and $\sigma(t)=t$, and right-scattered if $\sigma(t)>t$. If $\mathbb{T}$ has a left-scattered maximum $m$, then $\mathbb{T}^{k}=\mathbb{T}\setminus\{m\}$; otherwise $\mathbb{T}^{k}=\mathbb{T}$. If $\mathbb{T}$ has a right-scattered minimum $m$, then $\mathbb{T}_{k}=\mathbb{T}\setminus\{m\}$; otherwise $\mathbb{T}_{k}=\mathbb{T}$.
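
For example, a direct consequence of these definitions is that on the two standard time scales

$$\mathbb{T}=\mathbb{R}:\ \sigma(t)=\rho(t)=t,\ \mu(t)\equiv 0; \qquad \mathbb{T}=\mathbb{Z}:\ \sigma(t)=t+1,\ \rho(t)=t-1,\ \mu(t)\equiv 1,$$

so every point of $\mathbb{R}$ is dense and every point of $\mathbb{Z}$ is scattered.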

A function $f:\mathbb{T}\rightarrow\mathbb{R}$ is right-dense continuous provided it is continuous at every right-dense point in $\mathbb{T}$ and its left-sided limits exist at left-dense points in $\mathbb{T}$. If $f$ is continuous at each right-dense point and each left-dense point, then $f$ is said to be continuous on $\mathbb{T}$.

For $y:\mathbb{T}\rightarrow\mathbb{R}$ and $t\in\mathbb{T}^{k}$, we define the delta derivative of $y(t)$, $y^{\Delta}(t)$, to be the number (if it exists) with the property that for a given $\varepsilon>0$ there exists a neighborhood $U$ of $t$ such that

$$\big|[y(\sigma(t))-y(s)]-y^{\Delta}(t)[\sigma(t)-s]\big|<\varepsilon|\sigma(t)-s|$$

for all $s\in U$.

If y is continuous, then y is right-dense continuous, and if y is delta differentiable at t, then y is continuous at t.

Let $y$ be right-dense continuous. If $Y^{\Delta}(t)=y(t)$, then we define the delta integral by

$$\int_{a}^{t}y(s)\,\Delta s=Y(t)-Y(a).$$

A function $r:\mathbb{T}\rightarrow\mathbb{R}$ is called regressive if

$$1+\mu(t)r(t)\neq 0$$

for all $t\in\mathbb{T}^{k}$. The set of all regressive and rd-continuous functions $r:\mathbb{T}\rightarrow\mathbb{R}$ will be denoted by $\mathcal{R}=\mathcal{R}(\mathbb{T})=\mathcal{R}(\mathbb{T},\mathbb{R})$. We define the set $\mathcal{R}^{+}=\mathcal{R}^{+}(\mathbb{T},\mathbb{R})=\{r\in\mathcal{R}:1+\mu(t)r(t)>0,\ \forall t\in\mathbb{T}\}$.

If $r$ is a regressive function, then the generalized exponential function $e_r$ is defined by

$$e_r(t,s)=\exp\left\{\int_{s}^{t}\xi_{\mu(\tau)}\big(r(\tau)\big)\,\Delta\tau\right\} \quad\text{for } s,t\in\mathbb{T},$$

with the cylinder transformation

$$\xi_{h}(z)=\begin{cases}\dfrac{\operatorname{Log}(1+hz)}{h} & \text{if } h\neq 0,\\ z & \text{if } h=0.\end{cases}$$
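
For example, evaluating the cylinder transformation on the two standard time scales gives the familiar formulas

$$\mathbb{T}=\mathbb{R}:\ e_r(t,s)=\exp\bigg(\int_{s}^{t}r(\tau)\,d\tau\bigg); \qquad \mathbb{T}=\mathbb{Z}:\ e_r(t,s)=\prod_{\tau=s}^{t-1}\big(1+r(\tau)\big), \quad t>s;$$

in particular, for a constant $c$ with $1-c\neq 0$, $e_{-c}(t,s)=e^{-c(t-s)}$ on $\mathbb{R}$ and $e_{-c}(t,s)=(1-c)^{t-s}$ on $\mathbb{Z}$.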

Definition 2.1 [51]

Let $p,q:\mathbb{T}\rightarrow\mathbb{R}$ be two regressive functions; define

$$p\oplus q=p+q+\mu pq, \qquad \ominus p=-\frac{p}{1+\mu p}, \qquad p\ominus q=p\oplus(\ominus q).$$

Lemma 2.1 [51]

Assume that $p,q:\mathbb{T}\rightarrow\mathbb{R}$ are two regressive functions; then

(1) $e_0(t,s)\equiv 1$ and $e_p(t,t)\equiv 1$;

(2) $e_p(\sigma(t),s)=(1+\mu(t)p(t))e_p(t,s)$;

(3) $e_p(t,s)=\dfrac{1}{e_p(s,t)}=e_{\ominus p}(s,t)$;

(4) $e_p(t,s)e_p(s,r)=e_p(t,r)$;

(5) $(e_{\ominus p}(t,s))^{\Delta}=(\ominus p)(t)e_{\ominus p}(t,s)$;

(6) if $a,b,c\in\mathbb{T}$, then $\int_{a}^{b}p(t)e_p(c,\sigma(t))\,\Delta t=e_p(c,a)-e_p(c,b)$.

Definition 2.2 [51]

Assume that $f:\mathbb{T}\rightarrow\mathbb{R}$ is a function and let $t\in\mathbb{T}^{k}$. Then we define $f^{\Delta}(t)$ to be the number (provided it exists) with the property that, given any $\varepsilon>0$, there is a neighborhood $U$ of $t$ (i.e., $U=(t-\delta,t+\delta)\cap\mathbb{T}$ for some $\delta>0$) such that

$$\big|[f(\sigma(t))-f(s)]-f^{\Delta}(t)[\sigma(t)-s]\big|\leq\varepsilon|\sigma(t)-s|$$

for all $s\in U$. We call $f^{\Delta}(t)$ the delta (or Hilger) derivative of $f$ at $t$. Moreover, we say that $f$ is delta (or Hilger) differentiable (or, in short, differentiable) on $\mathbb{T}^{k}$ provided $f^{\Delta}(t)$ exists for all $t\in\mathbb{T}^{k}$. The function $f^{\Delta}:\mathbb{T}^{k}\rightarrow\mathbb{R}$ is then called the (delta) derivative of $f$ on $\mathbb{T}^{k}$.

Definition 2.3 [52]

A time scale $\mathbb{T}$ is called an almost periodic time scale if

$$\Pi:=\{\tau\in\mathbb{R}:t\pm\tau\in\mathbb{T},\ \forall t\in\mathbb{T}\}\neq\{0\}.$$
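
For example, $\mathbb{T}=\mathbb{R}$ and $\mathbb{T}=h\mathbb{Z}$ ($h>0$) are almost periodic time scales, with $\Pi=\mathbb{R}$ and $\Pi=h\mathbb{Z}$, respectively, whereas a bounded time scale such as $\mathbb{T}=[0,1]$ is not, since for it $\Pi=\{0\}$.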

Definition 2.4 [52]

Let $\mathbb{T}$ be an almost periodic time scale. A function $f\in C(\mathbb{T},\mathbb{E}^{n})$ is called an almost periodic function if the $\varepsilon$-translation set of $f$,

$$E(\varepsilon,f)=\{\tau\in\Pi:|f(t+\tau)-f(t)|<\varepsilon,\ \forall t\in\mathbb{T}\},$$

is a relatively dense set in $\Pi$ for all $\varepsilon>0$; that is, for any given $\varepsilon>0$, there exists a constant $l(\varepsilon)>0$ such that each interval of length $l(\varepsilon)$ contains a $\tau(\varepsilon)\in E(\varepsilon,f)$ with

$$|f(t+\tau)-f(t)|<\varepsilon, \quad \forall t\in\mathbb{T}.$$

Here $\tau$ is called the $\varepsilon$-translation number of $f$ and $l(\varepsilon)$ is called the inclusion length of $E(\varepsilon,f)$.

Definition 2.5 [52]

Let $A(t)$ be an $n\times n$ rd-continuous matrix-valued function on $\mathbb{T}$; the linear system

$$x^{\Delta}(t)=A(t)x(t), \quad t\in\mathbb{T}$$
(2.1)

is said to admit an exponential dichotomy on $\mathbb{T}$ if there exist positive constants $k$, $\alpha$, a projection $P$ and a fundamental solution matrix $X(t)$ of (2.1) satisfying

$$\|X(t)PX^{-1}(\sigma(s))\|_{0}\leq k e_{\ominus\alpha}(t,\sigma(s)), \quad s,t\in\mathbb{T},\ t\geq\sigma(s),$$
$$\|X(t)(I-P)X^{-1}(\sigma(s))\|_{0}\leq k e_{\ominus\alpha}(\sigma(s),t), \quad s,t\in\mathbb{T},\ t\leq\sigma(s),$$

where $\|\cdot\|_{0}$ is a matrix norm on $\mathbb{T}$ (for example, if $A=(a_{ij})_{n\times m}$, then we can take $\|A\|_{0}=(\sum_{i=1}^{n}\sum_{j=1}^{m}|a_{ij}|^{2})^{\frac{1}{2}}$).

Consider the following almost periodic system:

$$x^{\Delta}(t)=A(t)x(t)+f(t), \quad t\in\mathbb{T},$$
(2.2)

where $A(t)$ is an almost periodic matrix function and $f(t)$ is an almost periodic vector function.

Lemma 2.2 [52]

If the linear system (2.1) admits an exponential dichotomy, then system (2.2) has a unique almost periodic solution

$$x(t)=\int_{-\infty}^{t}X(t)PX^{-1}(\sigma(s))f(s)\,\Delta s-\int_{t}^{+\infty}X(t)(I-P)X^{-1}(\sigma(s))f(s)\,\Delta s,$$

where X(t) is the fundamental solution matrix of (2.1).
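
In the scalar case needed below, with $A(t)=-c_i(t)$ and $c_i(t)>0$, Lemma 2.3 below yields an exponential dichotomy with projection $P=I$ (as used implicitly in the proof of Theorem 3.1), and the formula of Lemma 2.2 specializes to

$$x_i(t)=\int_{-\infty}^{t}e_{-c_i}(t,\sigma(s))f_i(s)\,\Delta s.$$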

Lemma 2.3 [53]

Let $c_i(t)$ be an almost periodic function on $\mathbb{T}$, where $c_i(t)>0$, $-c_i(t)\in\mathcal{R}^{+}$, $i=1,2,\ldots,n$, $t\in\mathbb{T}$, and $\min_{1\leq i\leq n}\{\inf_{t\in\mathbb{T}}c_i(t)\}=\tilde{m}>0$; then the linear system

$$x^{\Delta}(t)=\operatorname{diag}\big(-c_1(t),-c_2(t),\ldots,-c_n(t)\big)x(t)$$
(2.3)

admits an exponential dichotomy on $\mathbb{T}$.

One can easily prove the following.

Lemma 2.4 Suppose that $f(t)$ is an rd-continuous function and $c(t)$ is a positive rd-continuous function satisfying $-c(t)\in\mathcal{R}^{+}$. Let

$$g(t)=\int_{t_0}^{t}e_{-c}(t,\sigma(s))f(s)\,\Delta s,$$

where $t_0\in\mathbb{T}$; then

$$g^{\Delta}(t)=f(t)+\int_{t_0}^{t}\big[-c(t)e_{-c}(t,\sigma(s))f(s)\big]\,\Delta s.$$
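
As a quick sanity check, in the case $\mathbb{T}=\mathbb{R}$ (where $\sigma(s)=s$ and $e_{-c}(t,\sigma(s))=\exp(-\int_{s}^{t}c(u)\,du)$) Lemma 2.4 is just the classical Leibniz rule:

$$g(t)=\int_{t_0}^{t}e^{-\int_{s}^{t}c(u)\,du}f(s)\,ds \quad\Longrightarrow\quad g'(t)=f(t)+\int_{t_0}^{t}\Big[-c(t)e^{-\int_{s}^{t}c(u)\,du}f(s)\Big]\,ds=f(t)-c(t)g(t).$$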

3 Existence of almost periodic solutions

Let $AP(\mathbb{T})=\{x(t)\in C(\mathbb{T},\mathbb{R}):x(t)\ \text{is a real-valued almost periodic function on}\ \mathbb{T}\}$, $Y=\{x(t)\in C^{1}(\mathbb{T},\mathbb{R}):x(t),x^{\Delta}(t)\in AP(\mathbb{T})\}$ and

$$\mathbb{X}=\big\{\varphi=\big(\varphi_1(t),\varphi_2(t),\ldots,\varphi_n(t)\big)^{T}:\varphi_i(t)\in Y,\ i=1,2,\ldots,n\big\}.$$

For $\varphi\in\mathbb{X}$, if we define the induced norm $\|\varphi\|_{\mathbb{X}}=\max\{\|\varphi\|_{0},\|\varphi^{\Delta}\|_{0}\}$, where

$$\|\varphi\|_{0}=\sup_{t\in\mathbb{T}}\|\varphi(t)\|_{0}, \qquad \|\varphi(t)\|_{0}=\max_{1\leq i\leq n}|\varphi_i(t)|,$$

and $\varphi^{\Delta}(t)=\big(\varphi_1^{\Delta}(t),\varphi_2^{\Delta}(t),\ldots,\varphi_n^{\Delta}(t)\big)^{T}$, then $\mathbb{X}$ is a Banach space.

Theorem 3.1 Assume that (H1)-(H3) and

(H4) $r=\max\limits_{1\leq i\leq n}\max\Big\{\dfrac{1}{c_i^{-}},\,1+\dfrac{c_i^{+}}{c_i^{-}}\Big\}\Big(c_i^{+}\eta_i^{+}+\sum\limits_{j=1}^{n}a_{ij}^{+}L_j+\sum\limits_{j=1}^{n}b_{ij}^{+}l_j+\sum\limits_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum\limits_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\Big)<1$

hold; then there exists exactly one almost periodic solution of system (1.1) in the region $\mathbb{X}_{0}=\{\phi\in\mathbb{X}:\|\phi-\phi_{0}\|_{\mathbb{X}}\leq\frac{rR}{1-r}\}$, where

$$R=\max_{1\leq i\leq n}\max\bigg\{\frac{I_i^{+}}{c_i^{-}},\ I_i^{+}\Big(1+\frac{c_i^{+}}{c_i^{-}}\Big)\bigg\}, \qquad \phi_{0}=\bigg(\int_{-\infty}^{t}I_{1}(s)e_{-c_{1}}(t,\sigma(s))\,\Delta s,\ldots,\int_{-\infty}^{t}I_{n}(s)e_{-c_{n}}(t,\sigma(s))\,\Delta s\bigg)^{T}.$$

Proof Rewrite (1.1) in the form

$$\begin{aligned} x_i^{\Delta}(t)={}&-c_i(t)x_i(t)+c_i(t)\int_{t-\eta_i(t)}^{t}x_i^{\Delta}(s)\,\Delta s+\sum_{j=1}^{n}a_{ij}(t)f_j\big(x_j(t-\tau_{ij}(t))\big)+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)h_j\big(x_j(s)\big)\,\Delta s\\ &+\sum_{j=1}^{n}b_{ij}(t)g_j\big(x_j^{\Delta}(t-\sigma_{ij}(t))\big)+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)k_j\big(x_j^{\Delta}(s)\big)\,\Delta s+I_i(t), \quad i=1,2,\ldots,n. \end{aligned}$$

For any $\phi\in\mathbb{X}$, we consider the following system:

$$\begin{aligned} x_i^{\Delta}(t)={}&-c_i(t)x_i(t)+c_i(t)\int_{t-\eta_i(t)}^{t}\phi_i^{\Delta}(s)\,\Delta s+\sum_{j=1}^{n}a_{ij}(t)f_j\big(\phi_j(t-\tau_{ij}(t))\big)+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)h_j\big(\phi_j(s)\big)\,\Delta s\\ &+\sum_{j=1}^{n}b_{ij}(t)g_j\big(\phi_j^{\Delta}(t-\sigma_{ij}(t))\big)+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)k_j\big(\phi_j^{\Delta}(s)\big)\,\Delta s+I_i(t), \quad i=1,2,\ldots,n. \end{aligned}$$
(3.1)

Since $\min_{1\leq i\leq n}\{\inf_{t\in\mathbb{T}}c_i(t)\}>0$, it follows from Lemma 2.2 and Lemma 2.3 that system (3.1) has a unique almost periodic solution, which can be expressed as follows:

$$x^{\phi}(t)=\big(x_1^{\phi}(t),x_2^{\phi}(t),\ldots,x_n^{\phi}(t)\big)^{T},$$

where

$$\begin{aligned} x_i^{\phi}(t)={}&\int_{-\infty}^{t}e_{-c_i}(t,\sigma(s))\bigg[c_i(s)\int_{s-\eta_i(s)}^{s}\phi_i^{\Delta}(u)\,\Delta u+\sum_{j=1}^{n}a_{ij}(s)f_j\big(\phi_j(s-\tau_{ij}(s))\big)+\sum_{j=1}^{n}d_{ij}(s)\int_{s-\delta_{ij}(s)}^{s}\theta_{ij}(u)h_j\big(\phi_j(u)\big)\,\Delta u\\ &+\sum_{j=1}^{n}b_{ij}(s)g_j\big(\phi_j^{\Delta}(s-\sigma_{ij}(s))\big)+\sum_{j=1}^{n}e_{ij}(s)\int_{s-\zeta_{ij}(s)}^{s}\xi_{ij}(u)k_j\big(\phi_j^{\Delta}(u)\big)\,\Delta u+I_i(s)\bigg]\,\Delta s, \quad i=1,2,\ldots,n. \end{aligned}$$

Now we define a mapping $T:\mathbb{X}_0\rightarrow\mathbb{X}_0$ by $(T\phi)(t)=x^{\phi}(t)$, $\forall\phi\in\mathbb{X}_0$.

By the definition of $\|\cdot\|_{\mathbb{X}}$, we have

$$\begin{aligned} \|\phi_0\|_{\mathbb{X}}&=\max\{\|\phi_0\|_{0},\|\phi_0^{\Delta}\|_{0}\}\\ &=\max\bigg\{\sup_{t\in\mathbb{T}}\max_{1\leq i\leq n}\bigg|\int_{-\infty}^{t}I_i(s)e_{-c_i}(t,\sigma(s))\,\Delta s\bigg|,\ \sup_{t\in\mathbb{T}}\max_{1\leq i\leq n}\bigg|I_i(t)-\int_{-\infty}^{t}c_i(t)I_i(s)e_{-c_i}(t,\sigma(s))\,\Delta s\bigg|\bigg\}\\ &\leq\max\bigg\{\max_{1\leq i\leq n}\frac{I_i^{+}}{c_i^{-}},\ \max_{1\leq i\leq n}I_i^{+}\Big(1+\frac{c_i^{+}}{c_i^{-}}\Big)\bigg\}=R. \end{aligned}$$
(3.2)

Hence, for any $\phi\in\mathbb{X}_0=\{\phi\in\mathbb{X}:\|\phi-\phi_0\|_{\mathbb{X}}\leq\frac{rR}{1-r}\}$, one has

$$\|\phi\|_{\mathbb{X}}\leq\|\phi_0\|_{\mathbb{X}}+\|\phi-\phi_0\|_{\mathbb{X}}\leq R+\frac{rR}{1-r}=\frac{R}{1-r}.$$

Next, we show that $T(\mathbb{X}_0)\subseteq\mathbb{X}_0$. In fact, for any $\phi\in\mathbb{X}_0$, we have

$$\begin{aligned} \|T\phi-\phi_0\|_{0}={}&\sup_{t\in\mathbb{T}}\max_{1\leq i\leq n}\bigg\{\bigg|\int_{-\infty}^{t}e_{-c_i}(t,\sigma(s))\bigg[c_i(s)\int_{s-\eta_i(s)}^{s}\phi_i^{\Delta}(u)\,\Delta u+\sum_{j=1}^{n}a_{ij}(s)f_j\big(\phi_j(s-\tau_{ij}(s))\big)+\sum_{j=1}^{n}d_{ij}(s)\int_{s-\delta_{ij}(s)}^{s}\theta_{ij}(u)h_j\big(\phi_j(u)\big)\,\Delta u\\ &+\sum_{j=1}^{n}b_{ij}(s)g_j\big(\phi_j^{\Delta}(s-\sigma_{ij}(s))\big)+\sum_{j=1}^{n}e_{ij}(s)\int_{s-\zeta_{ij}(s)}^{s}\xi_{ij}(u)k_j\big(\phi_j^{\Delta}(u)\big)\,\Delta u\bigg]\,\Delta s\bigg|\bigg\}\\ \leq{}&\sup_{t\in\mathbb{T}}\max_{1\leq i\leq n}\bigg\{\int_{-\infty}^{t}e_{-c_i}(t,\sigma(s))\bigg[c_i^{+}\eta_i^{+}\|\phi^{\Delta}\|_{0}+\sum_{j=1}^{n}a_{ij}^{+}L_j\|\phi\|_{0}+\sum_{j=1}^{n}b_{ij}^{+}l_j\|\phi^{\Delta}\|_{0}+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}\|\phi\|_{0}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\|\phi^{\Delta}\|_{0}\bigg]\,\Delta s\bigg\}\\ \leq{}&\sup_{t\in\mathbb{T}}\max_{1\leq i\leq n}\bigg\{\int_{-\infty}^{t}e_{-c_i}(t,\sigma(s))\,\Delta s\,\bigg(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\bigg)\|\phi\|_{\mathbb{X}}\bigg\}\\ \leq{}&\max_{1\leq i\leq n}\bigg\{\frac{1}{c_i^{-}}\bigg(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\bigg)\bigg\}\|\phi\|_{\mathbb{X}} \end{aligned}$$

and

$$\begin{aligned} \|(T\phi-\phi_0)^{\Delta}\|_{0}={}&\sup_{t\in\mathbb{T}}\max_{1\leq i\leq n}\bigg\{\bigg|c_i(t)\int_{t-\eta_i(t)}^{t}\phi_i^{\Delta}(u)\,\Delta u+\sum_{j=1}^{n}a_{ij}(t)f_j\big(\phi_j(t-\tau_{ij}(t))\big)+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)h_j\big(\phi_j(s)\big)\,\Delta s\\ &+\sum_{j=1}^{n}b_{ij}(t)g_j\big(\phi_j^{\Delta}(t-\sigma_{ij}(t))\big)+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)k_j\big(\phi_j^{\Delta}(s)\big)\,\Delta s\\ &-\int_{-\infty}^{t}c_i(t)e_{-c_i}(t,\sigma(s))\bigg[c_i(s)\int_{s-\eta_i(s)}^{s}\phi_i^{\Delta}(u)\,\Delta u+\sum_{j=1}^{n}a_{ij}(s)f_j\big(\phi_j(s-\tau_{ij}(s))\big)+\sum_{j=1}^{n}d_{ij}(s)\int_{s-\delta_{ij}(s)}^{s}\theta_{ij}(u)h_j\big(\phi_j(u)\big)\,\Delta u\\ &+\sum_{j=1}^{n}b_{ij}(s)g_j\big(\phi_j^{\Delta}(s-\sigma_{ij}(s))\big)+\sum_{j=1}^{n}e_{ij}(s)\int_{s-\zeta_{ij}(s)}^{s}\xi_{ij}(u)k_j\big(\phi_j^{\Delta}(u)\big)\,\Delta u\bigg]\,\Delta s\bigg|\bigg\}\\ \leq{}&\sup_{t\in\mathbb{T}}\max_{1\leq i\leq n}\bigg\{\bigg(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\bigg)\|\phi\|_{\mathbb{X}}\\ &+c_i^{+}\int_{-\infty}^{t}e_{-c_i}(t,\sigma(s))\,\Delta s\,\bigg(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\bigg)\|\phi\|_{\mathbb{X}}\bigg\}\\ \leq{}&\max_{1\leq i\leq n}\bigg\{\Big(1+\frac{c_i^{+}}{c_i^{-}}\Big)\bigg(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\bigg)\bigg\}\|\phi\|_{\mathbb{X}}. \end{aligned}$$

Thus, we obtain

$$\begin{aligned} \|T\phi-\phi_0\|_{\mathbb{X}}&=\max\big\{\|T\phi-\phi_0\|_{0},\|(T\phi-\phi_0)^{\Delta}\|_{0}\big\}\\ &\leq\max_{1\leq i\leq n}\max\Big\{\frac{1}{c_i^{-}},\,1+\frac{c_i^{+}}{c_i^{-}}\Big\}\bigg(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\bigg)\|\phi\|_{\mathbb{X}}\\ &=r\|\phi\|_{\mathbb{X}}\leq\frac{rR}{1-r}, \end{aligned}$$

which implies $T\phi\in\mathbb{X}_0$; hence $T$ is a self-mapping of $\mathbb{X}_0$.

Finally, we prove that $T$ is a contraction mapping. For $\phi,\psi\in\mathbb{X}_0$, we have

$$\|T\phi-T\psi\|_{0}\leq\max_{1\leq i\leq n}\bigg\{\frac{1}{c_i^{-}}\bigg(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\bigg)\bigg\}\|\phi-\psi\|_{\mathbb{X}}\leq r\|\phi-\psi\|_{\mathbb{X}}$$

and

$$\|(T\phi-T\psi)^{\Delta}\|_{0}\leq\max_{1\leq i\leq n}\bigg\{\Big(1+\frac{c_i^{+}}{c_i^{-}}\Big)\bigg(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\bigg)\bigg\}\|\phi-\psi\|_{\mathbb{X}}\leq r\|\phi-\psi\|_{\mathbb{X}}.$$

Since $r<1$, $T$ is a contraction mapping. Thus there exists a unique fixed point $\phi^{*}\in\mathbb{X}_0$ such that $T\phi^{*}=\phi^{*}$, that is, system (1.1) has a unique almost periodic solution in the region $\mathbb{X}_0=\{\phi\in\mathbb{X}:\|\phi-\phi_0\|_{\mathbb{X}}\leq\frac{rR}{1-r}\}$. This completes the proof. □

4 Exponential stability of the almost periodic solution

Definition 4.1 The almost periodic solution $\bar{x}(t)=(\bar{x}_1(t),\bar{x}_2(t),\ldots,\bar{x}_n(t))^{T}$ of system (1.1) with initial value $\bar{\varphi}(t)=(\bar{\varphi}_1(t),\bar{\varphi}_2(t),\ldots,\bar{\varphi}_n(t))^{T}$ is said to be globally exponentially stable if there exist a positive constant $\lambda$ with $\ominus\lambda\in\mathcal{R}^{+}$ and a constant $M>1$ such that every solution $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^{T}$ of system (1.1) with initial value $\varphi(t)=(\varphi_1(t),\varphi_2(t),\ldots,\varphi_n(t))^{T}$ satisfies

$$\|x(t)-\bar{x}(t)\|_{1}\leq M e_{\ominus\lambda}(t,t_0)\|\psi\|_{\mathbb{X}}, \quad \forall t\in[-\iota,+\infty)_{\mathbb{T}},\ t\geq t_0,$$

where

$$\|x(t)-\bar{x}(t)\|_{1}=\max\big\{\|x(t)-\bar{x}(t)\|_{0},\|(x(t)-\bar{x}(t))^{\Delta}\|_{0}\big\},$$
$$\|\psi\|_{\mathbb{X}}=\max\Big\{\sup_{t\in[-\iota,0]_{\mathbb{T}}}\max_{1\leq i\leq n}|\varphi_i(t)-\bar{\varphi}_i(t)|,\ \sup_{t\in[-\iota,0]_{\mathbb{T}}}\max_{1\leq i\leq n}|\varphi_i^{\Delta}(t)-\bar{\varphi}_i^{\Delta}(t)|\Big\},$$

and $t_0\in[-\iota,0]_{\mathbb{T}}$.
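
Note that on $\mathbb{T}=\mathbb{R}$ one has $\ominus\lambda=-\lambda$ and $e_{\ominus\lambda}(t,t_0)=e^{-\lambda(t-t_0)}$, while on $\mathbb{T}=\mathbb{Z}$ one has $e_{\ominus\lambda}(t,t_0)=(1+\lambda)^{-(t-t_0)}$, so Definition 4.1 recovers the usual notions of global exponential stability in the continuous and discrete cases.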

Theorem 4.1 Assume that (H1)-(H4) hold; then system (1.1) has a unique almost periodic solution, which is globally exponentially stable.

Proof From Theorem 3.1 we see that system (1.1) has an almost periodic solution $\bar{x}(t)=(\bar{x}_1(t),\bar{x}_2(t),\ldots,\bar{x}_n(t))^{T}$. Suppose that $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^{T}$ is an arbitrary solution of (1.1). Set $y_i(t)=x_i(t)-\bar{x}_i(t)$, $i=1,2,\ldots,n$; then it follows from system (1.1) that

$$\begin{aligned} y_i^{\Delta}(t)={}&x_i^{\Delta}(t)-\bar{x}_i^{\Delta}(t)\\ ={}&-c_i(t)\big[x_i(t-\eta_i(t))-\bar{x}_i(t-\eta_i(t))\big]+\sum_{j=1}^{n}a_{ij}(t)\big[f_j\big(x_j(t-\tau_{ij}(t))\big)-f_j\big(\bar{x}_j(t-\tau_{ij}(t))\big)\big]\\ &+\sum_{j=1}^{n}b_{ij}(t)\big[g_j\big(x_j^{\Delta}(t-\sigma_{ij}(t))\big)-g_j\big(\bar{x}_j^{\Delta}(t-\sigma_{ij}(t))\big)\big]+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)\big[h_j\big(x_j(s)\big)-h_j\big(\bar{x}_j(s)\big)\big]\,\Delta s\\ &+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)\big[k_j\big(x_j^{\Delta}(s)\big)-k_j\big(\bar{x}_j^{\Delta}(s)\big)\big]\,\Delta s\\ ={}&-c_i(t)y_i(t-\eta_i(t))+\sum_{j=1}^{n}a_{ij}(t)F_j\big(y_j(t-\tau_{ij}(t))\big)+\sum_{j=1}^{n}b_{ij}(t)G_j\big(y_j^{\Delta}(t-\sigma_{ij}(t))\big)\\ &+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)H_j\big(y_j(s)\big)\,\Delta s+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)K_j\big(y_j^{\Delta}(s)\big)\,\Delta s, \end{aligned}$$
(4.1)

where $i=1,2,\ldots,n$ and, for $i,j=1,2,\ldots,n$,

$$\begin{aligned} F_j\big(y_j(t-\tau_{ij}(t))\big)&=f_j\big(y_j(t-\tau_{ij}(t))+\bar{x}_j(t-\tau_{ij}(t))\big)-f_j\big(\bar{x}_j(t-\tau_{ij}(t))\big),\\ G_j\big(y_j^{\Delta}(t-\sigma_{ij}(t))\big)&=g_j\big(y_j^{\Delta}(t-\sigma_{ij}(t))+\bar{x}_j^{\Delta}(t-\sigma_{ij}(t))\big)-g_j\big(\bar{x}_j^{\Delta}(t-\sigma_{ij}(t))\big),\\ H_j\big(y_j(s)\big)&=h_j\big(x_j(s)\big)-h_j\big(\bar{x}_j(s)\big), \qquad K_j\big(y_j^{\Delta}(s)\big)=k_j\big(x_j^{\Delta}(s)\big)-k_j\big(\bar{x}_j^{\Delta}(s)\big). \end{aligned}$$

From (H2) we have that, for $i,j=1,2,\ldots,n$,

$$\big|F_j\big(y_j(t-\tau_{ij}(t))\big)\big|\leq L_j\big|y_j(t-\tau_{ij}(t))\big|, \qquad \big|G_j\big(y_j^{\Delta}(t-\sigma_{ij}(t))\big)\big|\leq l_j\big|y_j^{\Delta}(t-\sigma_{ij}(t))\big|$$

and

$$\big|H_j\big(y_j(s)\big)\big|\leq L_j^{h}|y_j(s)|, \qquad \big|K_j\big(y_j^{\Delta}(s)\big)\big|\leq l_j^{k}|y_j^{\Delta}(s)|.$$

The initial condition of (4.1) is

$$\psi_i(t)=\varphi_i(t)-\bar{\varphi}_i(t), \qquad \psi_i^{\Delta}(t)=\varphi_i^{\Delta}(t)-\bar{\varphi}_i^{\Delta}(t), \quad t\in[-\iota,0]_{\mathbb{T}},\ i=1,2,\ldots,n.$$

Let $\Theta_i$ and $\Lambda_i$ be defined by

$$\begin{aligned} \Theta_i(\omega)={}&c_i^{-}-\omega-\exp\Big(\omega\sup_{s\in\mathbb{T}}\mu(s)\Big)\bigg(c_i^{+}\eta_i^{+}\exp(\omega\eta_i^{+})+\sum_{j=1}^{n}a_{ij}^{+}L_j\exp(\omega\tau_{ij}^{+})+\sum_{j=1}^{n}b_{ij}^{+}l_j\exp(\omega\sigma_{ij}^{+})\\ &+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}\exp(\omega\delta_{ij}^{+})+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\exp(\omega\zeta_{ij}^{+})\bigg), \quad i=1,2,\ldots,n \end{aligned}$$

and

$$\begin{aligned} \Lambda_i(\omega)={}&c_i^{-}-\omega-\Big(c_i^{+}\exp\Big(\omega\sup_{s\in\mathbb{T}}\mu(s)\Big)+c_i^{-}-\omega\Big)\bigg(c_i^{+}\eta_i^{+}\exp(\omega\eta_i^{+})+\sum_{j=1}^{n}a_{ij}^{+}L_j\exp(\omega\tau_{ij}^{+})+\sum_{j=1}^{n}b_{ij}^{+}l_j\exp(\omega\sigma_{ij}^{+})\\ &+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}\exp(\omega\delta_{ij}^{+})+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\exp(\omega\zeta_{ij}^{+})\bigg), \quad i=1,2,\ldots,n. \end{aligned}$$

By (H4), for $i=1,2,\ldots,n$, we get

$$\Theta_i(0)=c_i^{-}-\bigg(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\bigg)>0$$

and

$$\Lambda_i(0)=c_i^{-}-\big(c_i^{+}+c_i^{-}\big)\bigg(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\bigg)>0.$$

Since $\Theta_i$ and $\Lambda_i$ are continuous on $[0,+\infty)$ and $\Theta_i(\omega),\Lambda_i(\omega)\rightarrow-\infty$ as $\omega\rightarrow+\infty$, there exist $\omega_i^{*},\omega_i^{**}>0$ such that $\Theta_i(\omega_i^{*})=\Lambda_i(\omega_i^{**})=0$, $\Theta_i(\omega)>0$ for $\omega\in(0,\omega_i^{*})$ and $\Lambda_i(\omega)>0$ for $\omega\in(0,\omega_i^{**})$, $i=1,2,\ldots,n$.

By choosing $a=\min\{\omega_1^{*},\ldots,\omega_n^{*},\omega_1^{**},\ldots,\omega_n^{**}\}$, we have $\Theta_i(a)\geq 0$, $\Lambda_i(a)\geq 0$, $i=1,2,\ldots,n$. So we can choose a positive constant $\lambda$ with $0<\lambda<\min\{a,\min_{1\leq i\leq n}\{c_i^{-}\}\}$ such that

$$\Theta_i(\lambda)>0, \qquad \Lambda_i(\lambda)>0, \quad i=1,2,\ldots,n,$$

which implies that

$$\frac{\exp\big(\lambda\sup_{s\in\mathbb{T}}\mu(s)\big)}{c_i^{-}-\lambda}\bigg(c_i^{+}\eta_i^{+}\exp(\lambda\eta_i^{+})+\sum_{j=1}^{n}a_{ij}^{+}L_j\exp(\lambda\tau_{ij}^{+})+\sum_{j=1}^{n}b_{ij}^{+}l_j\exp(\lambda\sigma_{ij}^{+})+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}\exp(\lambda\delta_{ij}^{+})+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\exp(\lambda\zeta_{ij}^{+})\bigg)<1$$
(4.2)

and

$$\bigg(1+\frac{c_i^{+}\exp\big(\lambda\sup_{s\in\mathbb{T}}\mu(s)\big)}{c_i^{-}-\lambda}\bigg)\bigg(c_i^{+}\eta_i^{+}\exp(\lambda\eta_i^{+})+\sum_{j=1}^{n}a_{ij}^{+}L_j\exp(\lambda\tau_{ij}^{+})+\sum_{j=1}^{n}b_{ij}^{+}l_j\exp(\lambda\sigma_{ij}^{+})+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}\exp(\lambda\delta_{ij}^{+})+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\exp(\lambda\zeta_{ij}^{+})\bigg)<1,$$
(4.3)

where $i=1,2,\ldots,n$. Let

$$M=\max_{1\leq i\leq n}\Bigg\{\frac{c_i^{-}}{c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}}\Bigg\};$$

by (H4) we have $M>1$. Thus

$$\frac{1}{M}<\frac{\exp\big(\lambda\sup_{s\in\mathbb{T}}\mu(s)\big)}{c_i^{-}-\lambda}\bigg(c_i^{+}\eta_i^{+}\exp(\lambda\eta_i^{+})+\sum_{j=1}^{n}a_{ij}^{+}L_j\exp(\lambda\tau_{ij}^{+})+\sum_{j=1}^{n}b_{ij}^{+}l_j\exp(\lambda\sigma_{ij}^{+})+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}\exp(\lambda\delta_{ij}^{+})+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\exp(\lambda\zeta_{ij}^{+})\bigg).$$

Rewrite (4.1) in the form

$$\begin{aligned} y_i^{\Delta}(t)+c_i(t)y_i(t)={}&c_i(t)\int_{t-\eta_i(t)}^{t}y_i^{\Delta}(s)\,\Delta s+\sum_{j=1}^{n}a_{ij}(t)F_j\big(y_j(t-\tau_{ij}(t))\big)+\sum_{j=1}^{n}b_{ij}(t)G_j\big(y_j^{\Delta}(t-\sigma_{ij}(t))\big)\\ &+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)H_j\big(y_j(s)\big)\,\Delta s+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)K_j\big(y_j^{\Delta}(s)\big)\,\Delta s, \quad i=1,2,\ldots,n. \end{aligned}$$
(4.4)

Applying the variation of constants formula on time scales [51] to (4.4) over $[t_0,t]_{\mathbb{T}}$, we get

$$\begin{aligned} y_i(t)={}&y_i(t_0)e_{-c_i}(t,t_0)+\int_{t_0}^{t}e_{-c_i}(t,\sigma(s))\bigg\{c_i(s)\int_{s-\eta_i(s)}^{s}y_i^{\Delta}(u)\,\Delta u+\sum_{j=1}^{n}a_{ij}(s)F_j\big(y_j(s-\tau_{ij}(s))\big)+\sum_{j=1}^{n}b_{ij}(s)G_j\big(y_j^{\Delta}(s-\sigma_{ij}(s))\big)\\ &+\sum_{j=1}^{n}d_{ij}(s)\int_{s-\delta_{ij}(s)}^{s}\theta_{ij}(u)H_j\big(y_j(u)\big)\,\Delta u+\sum_{j=1}^{n}e_{ij}(s)\int_{s-\zeta_{ij}(s)}^{s}\xi_{ij}(u)K_j\big(y_j^{\Delta}(u)\big)\,\Delta u\bigg\}\,\Delta s, \quad i=1,2,\ldots,n. \end{aligned}$$
(4.5)

It is easy to see that

$$\|y(t)\|_{1}=\|\psi(t)\|_{1}\leq\|\psi\|_{\mathbb{X}}\leq M e_{\ominus\lambda}(t,t_0)\|\psi\|_{\mathbb{X}}, \quad \forall t\in[-\iota,0]_{\mathbb{T}}.$$

We claim that

$$\|y(t)\|_{1}\leq M e_{\ominus\lambda}(t,t_0)\|\psi\|_{\mathbb{X}}, \quad \forall t\in(0,+\infty)_{\mathbb{T}}.$$
(4.6)

To prove (4.6), we first show that for any p>1, the following inequality holds:

$$\|y(t)\|_{1}<pM e_{\ominus\lambda}(t,t_0)\|\psi\|_{\mathbb{X}}, \quad \forall t\in(0,+\infty)_{\mathbb{T}}.$$
(4.7)

If (4.7) is not true, then there must be some $t_1\in(0,+\infty)_{\mathbb{T}}$ and some $i_1,i_2\in\{1,2,\ldots,n\}$ such that

$$\|y(t_1)\|_{1}=\max\big\{\|y(t_1)\|_{0},\|y^{\Delta}(t_1)\|_{0}\big\}=\max\big\{|y_{i_1}(t_1)|,|y_{i_2}^{\Delta}(t_1)|\big\}\geq pM e_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}$$

and

$$\|y(t)\|_{1}\leq pM e_{\ominus\lambda}(t,t_0)\|\psi\|_{\mathbb{X}}, \quad \forall t\in[-\iota,t_1]_{\mathbb{T}}.$$

Therefore, there must exist a constant $c\geq 1$ such that

$$\|y(t_1)\|_{1}=\max\big\{\|y(t_1)\|_{0},\|y^{\Delta}(t_1)\|_{0}\big\}=\max\big\{|y_{i_1}(t_1)|,|y_{i_2}^{\Delta}(t_1)|\big\}=cpM e_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}$$
(4.8)

and

$$\|y(t)\|_{1}\leq cpM e_{\ominus\lambda}(t,t_0)\|\psi\|_{\mathbb{X}}, \quad \forall t\in[-\iota,t_1]_{\mathbb{T}}.$$
(4.9)

By (4.5), (4.8), (4.9) and (H1)-(H3), we obtain

$$\begin{aligned} |y_{i_1}(t_1)|\leq{}&\|\psi\|_{\mathbb{X}}e_{-c_{i_1}}(t_1,t_0)+cpMe_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\int_{t_0}^{t_1}e_{-c_{i_1}}(t_1,\sigma(s))e_{\lambda}(t_1,\sigma(s))\\ &\times\bigg\{c_{i_1}^{+}\int_{s-\eta_{i_1}(s)}^{s}e_{\lambda}(\sigma(s),\theta)\,\Delta\theta+\sum_{j=1}^{n}a_{i_1j}^{+}L_je_{\lambda}\big(\sigma(s),s-\tau_{i_1j}(s)\big)+\sum_{j=1}^{n}b_{i_1j}^{+}l_je_{\lambda}\big(\sigma(s),s-\sigma_{i_1j}(s)\big)\\ &\quad+\sum_{j=1}^{n}d_{i_1j}^{+}L_j^{h}\bar{\theta}_{i_1j}e_{\lambda}\big(\sigma(s),s-\delta_{i_1j}(s)\big)+\sum_{j=1}^{n}e_{i_1j}^{+}l_j^{k}\bar{\xi}_{i_1j}e_{\lambda}\big(\sigma(s),s-\zeta_{i_1j}(s)\big)\bigg\}\,\Delta s\\ \leq{}&\|\psi\|_{\mathbb{X}}e_{-c_{i_1}}(t_1,t_0)+cpMe_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\exp\Big(\lambda\sup_{s\in\mathbb{T}}\mu(s)\Big)\bigg(c_{i_1}^{+}\eta_{i_1}^{+}\exp(\lambda\eta_{i_1}^{+})+\sum_{j=1}^{n}a_{i_1j}^{+}L_j\exp(\lambda\tau_{i_1j}^{+})\\ &+\sum_{j=1}^{n}b_{i_1j}^{+}l_j\exp(\lambda\sigma_{i_1j}^{+})+\sum_{j=1}^{n}d_{i_1j}^{+}L_j^{h}\bar{\theta}_{i_1j}\exp(\lambda\delta_{i_1j}^{+})+\sum_{j=1}^{n}e_{i_1j}^{+}l_j^{k}\bar{\xi}_{i_1j}\exp(\lambda\zeta_{i_1j}^{+})\bigg)\int_{t_0}^{t_1}e_{(-c_{i_1})\oplus\lambda}(t_1,\sigma(s))\,\Delta s\\ <{}&cpMe_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\bigg\{\frac{1}{M}e_{-(c_{i_1}^{-}-\lambda)}(t_1,t_0)+\frac{\exp(\lambda\sup_{s\in\mathbb{T}}\mu(s))}{c_{i_1}^{-}-\lambda}\bigg(c_{i_1}^{+}\eta_{i_1}^{+}\exp(\lambda\eta_{i_1}^{+})+\cdots+\sum_{j=1}^{n}e_{i_1j}^{+}l_j^{k}\bar{\xi}_{i_1j}\exp(\lambda\zeta_{i_1j}^{+})\bigg)\big(1-e_{-(c_{i_1}^{-}-\lambda)}(t_1,t_0)\big)\bigg\}\\ ={}&cpMe_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\bigg\{\bigg[\frac{1}{M}-\frac{\exp(\lambda\sup_{s\in\mathbb{T}}\mu(s))}{c_{i_1}^{-}-\lambda}\bigg(c_{i_1}^{+}\eta_{i_1}^{+}\exp(\lambda\eta_{i_1}^{+})+\cdots+\sum_{j=1}^{n}e_{i_1j}^{+}l_j^{k}\bar{\xi}_{i_1j}\exp(\lambda\zeta_{i_1j}^{+})\bigg)\bigg]e_{-(c_{i_1}^{-}-\lambda)}(t_1,t_0)\\ &+\frac{\exp(\lambda\sup_{s\in\mathbb{T}}\mu(s))}{c_{i_1}^{-}-\lambda}\bigg(c_{i_1}^{+}\eta_{i_1}^{+}\exp(\lambda\eta_{i_1}^{+})+\cdots+\sum_{j=1}^{n}e_{i_1j}^{+}l_j^{k}\bar{\xi}_{i_1j}\exp(\lambda\zeta_{i_1j}^{+})\bigg)\bigg\}\\ <{}&cpMe_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}, \end{aligned}$$

where the third step uses $c\geq 1$, $p>1$ and Lemma 2.1(6), and the last step uses the inequality following the definition of $M$ together with (4.2).
(4.10)

By Lemma 2.4 and (4.5), we have, for i=1,2,,n,

$$\begin{aligned} y_i^{\Delta}(t)={}&-c_i(t)y_i(t_0)e_{-c_i}(t,t_0)+\bigg(c_i(t)\int_{t-\eta_i(t)}^{t}y_i^{\Delta}(s)\,\Delta s+\sum_{j=1}^{n}a_{ij}(t)F_j\big(y_j(t-\tau_{ij}(t))\big)+\sum_{j=1}^{n}b_{ij}(t)G_j\big(y_j^{\Delta}(t-\sigma_{ij}(t))\big)\\ &+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)H_j\big(y_j(s)\big)\,\Delta s+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)K_j\big(y_j^{\Delta}(s)\big)\,\Delta s\bigg)\\ &-\int_{t_0}^{t}c_i(t)e_{-c_i}(t,\sigma(s))\bigg\{c_i(s)\int_{s-\eta_i(s)}^{s}y_i^{\Delta}(u)\,\Delta u+\sum_{j=1}^{n}a_{ij}(s)F_j\big(y_j(s-\tau_{ij}(s))\big)+\sum_{j=1}^{n}b_{ij}(s)G_j\big(y_j^{\Delta}(s-\sigma_{ij}(s))\big)\\ &+\sum_{j=1}^{n}d_{ij}(s)\int_{s-\delta_{ij}(s)}^{s}\theta_{ij}(u)H_j\big(y_j(u)\big)\,\Delta u+\sum_{j=1}^{n}e_{ij}(s)\int_{s-\zeta_{ij}(s)}^{s}\xi_{ij}(u)K_j\big(y_j^{\Delta}(u)\big)\,\Delta u\bigg\}\,\Delta s, \quad i=1,2,\ldots,n. \end{aligned}$$
(4.11)

Thus, it follows from (4.8), (4.9) and (4.11) that

$$\begin{aligned} |y_{i_2}^{\Delta}(t_1)|\leq{}&c_{i_2}^{+}e_{-c_{i_2}}(t_1,t_0)\|\psi\|_{\mathbb{X}}+cpMe_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\bigg(c_{i_2}^{+}\eta_{i_2}^{+}e_{\lambda}\big(t_1,t_1-\eta_{i_2}(t_1)\big)+\sum_{j=1}^{n}a_{i_2j}^{+}L_je_{\lambda}\big(t_1,t_1-\tau_{i_2j}(t_1)\big)\\ &+\sum_{j=1}^{n}b_{i_2j}^{+}l_je_{\lambda}\big(t_1,t_1-\sigma_{i_2j}(t_1)\big)+\sum_{j=1}^{n}d_{i_2j}^{+}L_j^{h}\bar{\theta}_{i_2j}e_{\lambda}\big(t_1,t_1-\delta_{i_2j}(t_1)\big)+\sum_{j=1}^{n}e_{i_2j}^{+}l_j^{k}\bar{\xi}_{i_2j}e_{\lambda}\big(t_1,t_1-\zeta_{i_2j}(t_1)\big)\bigg)\\ &+c_{i_2}^{+}cpMe_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\int_{t_0}^{t_1}e_{-c_{i_2}}(t_1,\sigma(s))e_{\lambda}(t_1,\sigma(s))\bigg\{c_{i_2}^{+}\eta_{i_2}^{+}e_{\lambda}\big(\sigma(s),s-\eta_{i_2}(s)\big)+\sum_{j=1}^{n}a_{i_2j}^{+}L_je_{\lambda}\big(\sigma(s),s-\tau_{i_2j}(s)\big)\\ &+\sum_{j=1}^{n}b_{i_2j}^{+}l_je_{\lambda}\big(\sigma(s),s-\sigma_{i_2j}(s)\big)+\sum_{j=1}^{n}d_{i_2j}^{+}L_j^{h}\bar{\theta}_{i_2j}e_{\lambda}\big(\sigma(s),s-\delta_{i_2j}(s)\big)+\sum_{j=1}^{n}e_{i_2j}^{+}l_j^{k}\bar{\xi}_{i_2j}e_{\lambda}\big(\sigma(s),s-\zeta_{i_2j}(s)\big)\bigg\}\,\Delta s\\ \leq{}&c_{i_2}^{+}e_{-c_{i_2}}(t_1,t_0)\|\psi\|_{\mathbb{X}}+cpMe_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\bigg(c_{i_2}^{+}\eta_{i_2}^{+}\exp(\lambda\eta_{i_2}^{+})+\sum_{j=1}^{n}a_{i_2j}^{+}L_j\exp(\lambda\tau_{i_2j}^{+})+\sum_{j=1}^{n}b_{i_2j}^{+}l_j\exp(\lambda\sigma_{i_2j}^{+})\\ &+\sum_{j=1}^{n}d_{i_2j}^{+}L_j^{h}\bar{\theta}_{i_2j}\exp(\lambda\delta_{i_2j}^{+})+\sum_{j=1}^{n}e_{i_2j}^{+}l_j^{k}\bar{\xi}_{i_2j}\exp(\lambda\zeta_{i_2j}^{+})\bigg)\bigg(1+c_{i_2}^{+}\exp\Big(\lambda\sup_{s\in\mathbb{T}}\mu(s)\Big)\int_{t_0}^{t_1}e_{(-c_{i_2})\oplus\lambda}(t_1,\sigma(s))\,\Delta s\bigg)\\ <{}&cpMe_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\bigg\{\frac{c_{i_2}^{+}}{M}e_{-(c_{i_2}^{-}-\lambda)}(t_1,t_0)+\bigg(c_{i_2}^{+}\eta_{i_2}^{+}\exp(\lambda\eta_{i_2}^{+})+\cdots+\sum_{j=1}^{n}e_{i_2j}^{+}l_j^{k}\bar{\xi}_{i_2j}\exp(\lambda\zeta_{i_2j}^{+})\bigg)\\ &\times\bigg(1+\frac{c_{i_2}^{+}\exp(\lambda\sup_{s\in\mathbb{T}}\mu(s))}{c_{i_2}^{-}-\lambda}\big(1-e_{-(c_{i_2}^{-}-\lambda)}(t_1,t_0)\big)\bigg)\bigg\}\\ ={}&cpMe_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\bigg\{\bigg[\frac{1}{M}-\frac{\exp(\lambda\sup_{s\in\mathbb{T}}\mu(s))}{c_{i_2}^{-}-\lambda}\bigg(c_{i_2}^{+}\eta_{i_2}^{+}\exp(\lambda\eta_{i_2}^{+})+\cdots+\sum_{j=1}^{n}e_{i_2j}^{+}l_j^{k}\bar{\xi}_{i_2j}\exp(\lambda\zeta_{i_2j}^{+})\bigg)\bigg]c_{i_2}^{+}e_{-(c_{i_2}^{-}-\lambda)}(t_1,t_0)\\ &+\bigg(1+\frac{c_{i_2}^{+}\exp(\lambda\sup_{s\in\mathbb{T}}\mu(s))}{c_{i_2}^{-}-\lambda}\bigg)\bigg(c_{i_2}^{+}\eta_{i_2}^{+}\exp(\lambda\eta_{i_2}^{+})+\cdots+\sum_{j=1}^{n}e_{i_2j}^{+}l_j^{k}\bar{\xi}_{i_2j}\exp(\lambda\zeta_{i_2j}^{+})\bigg)\bigg\}\\ <{}&cpMe_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}, \end{aligned}$$

where the last three steps use $c\geq 1$, $p>1$, Lemma 2.1(6), the inequality following the definition of $M$ and (4.3).
(4.12)

In view of (4.10) and (4.12), we get

$$\|y(t_1)\|_{1}<cpM e_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}},$$

which contradicts (4.8); hence (4.7) holds. Letting $p\rightarrow 1$, (4.6) follows. Hence the almost periodic solution of system (1.1) is globally exponentially stable. This completes the proof. □

Remark 4.1 When $\mathbb{T}=\mathbb{R}$ and $\eta_i(t)\equiv d_{ij}(t)\equiv e_{ij}(t)\equiv 0$, $i,j=1,2,\ldots,n$, Theorem 3.1 and Theorem 4.1 reduce to Theorem 2.3 and Theorem 3.1 in [37], respectively.

Remark 4.2 According to Theorem 3.1 and Theorem 4.1, the existence and exponential stability of almost periodic solutions of system (1.1) depend on the leakage delays $\eta_i$ but not on the delays $\tau_{ij}$ and $\sigma_{ij}$.

5 An example

In this section, we give an example to illustrate the feasibility and effectiveness of our results obtained in Sections 3 and 4.

Example 5.1 Let $n=3$. Consider the following neutral delay Hopfield neural network on a time scale $\mathbb{T}$:

$$\begin{aligned} x_i^{\Delta}(t)={}&-c_i(t)x_i\big(t-\eta_i(t)\big)+\sum_{j=1}^{3}a_{ij}(t)f_j\big(x_j(t-\tau_{ij}(t))\big)+\sum_{j=1}^{3}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)h_j\big(x_j(s)\big)\,\Delta s\\ &+\sum_{j=1}^{3}b_{ij}(t)g_j\big(x_j^{\Delta}(t-\sigma_{ij}(t))\big)+\sum_{j=1}^{3}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)k_j\big(x_j^{\Delta}(s)\big)\,\Delta s+I_i(t), \end{aligned}$$
(5.1)

where i=1,2,3 and the coefficients are as follows:

$$c_1(t)=0.5+0.1|\sin t|, \qquad c_2(t)=0.6+0.3|\cos 2t|, \qquad c_3(t)=0.8+0.1|\sin 2t|,$$
$$\eta_1(t)=\frac{1+|\sin 2t|}{200}, \qquad \eta_2(t)=\frac{1.8+0.2|\cos 2t|}{100}, \qquad \eta_3(t)=\frac{2.5+0.5|\sin 2t|}{200},$$
$$\big(a_{ij}(t)\big)_{3\times 3}=\begin{pmatrix}0.08|\sin t|&0.15|\cos 2t|&0.06|\cos t|\\0.15|\cos t|&0.12|\cos 2t|&0.04|\sin 2t|\\0.10|\sin t|&0.08|\sin 2t|&0.09|\sin t|\end{pmatrix}, \qquad \big(b_{ij}(t)\big)_{3\times 3}=\begin{pmatrix}0.10|\sin t|&0.06|\sin 2t|&0.05|\cos t|\\0.06|\sin t|&0.03|\cos 2t|&0.08|\sin 2t|\\0.12|\cos t|&0.04|\sin 2t|&0.07|\sin t|\end{pmatrix},$$
$$\big(d_{ij}(t)\big)_{3\times 3}=\begin{pmatrix}0.02|\cos t|&0.15|\sin 2t|&0.01|\cos t|\\0.05|\sin t|&0.12|\cos 2t|&0.03|\sin 2t|\\0.03|\sin t|&0.08|\cos 2t|&0.01|\sin t|\end{pmatrix}, \qquad \big(e_{ij}(t)\big)_{3\times 3}=\begin{pmatrix}0.02|\cos t|&0.01|\sin 2t|&0.05|\cos t|\\0.01|\sin t|&0.03|\cos 2t|&0.04|\cos 2t|\\0.02|\cos t|&0.04|\sin 2t|&0.01|\sin t|\end{pmatrix},$$
$$f_1(x)=0.2|x|, \quad f_2(x)=0.4|\sin x|, \quad f_3(x)=|x|, \qquad g_1(x)=0.3|\cos x|, \quad g_2(x)=0.1|x|, \quad g_3(x)=0.5|\sin x|,$$
$$h_1(x)=0.1|x|, \quad h_2(x)=0.4|\sin x|, \quad h_3(x)=0.2|x|, \qquad k_1(x)=0.3|\cos x|, \quad k_2(x)=k_3(x)=0.2|x|,$$
$$\theta_{ij}(u)=\exp(-2u), \qquad \xi_{ij}(u)=\exp(-4u), \qquad \delta_{ij}(t)=0.001|\sin t|, \qquad \zeta_{ij}(t)=0.002|\cos t|, \quad i,j=1,2,3.$$

Take $\tau_{ij}>0$, $\sigma_{ij}>0$ and $I_i(t)$ ($i,j=1,2,3$) to be arbitrary almost periodic functions. If $\mathbb{T}=\mathbb{R}$, then $\mu(t)=0$, and if $\mathbb{T}=\mathbb{Z}$, then $\mu(t)=1$. By calculating, we can easily check, in both cases, that $-c_i\in\mathcal{R}^{+}$ and $r\approx 0.7961<1$. By Theorem 3.1 and Theorem 4.1, system (5.1) has a unique almost periodic solution that is globally exponentially stable. This shows that the almost periodicity of system (5.1) does not depend on the time scale $\mathbb{T}$. In particular, the continuous-time neural network and the discrete-time analogue described by (5.1) have the same dynamical behaviors (see Figures 1-4).
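
Since checking (H4) is a finite computation over the constants listed above, it can be automated. The following minimal Python sketch (an illustration added here, not part of the original computation) evaluates $r$. The kernel bounds $\bar{\theta}_{ij}$, $\bar{\xi}_{ij}$ are not given numerically in the text, so the placeholder value $1$ is assumed for all of them purely for illustration; even with this crude choice the sketch returns $r\approx 0.68<1$, consistent with the value $r\approx 0.7961<1$ quoted above, which presumably corresponds to the paper's own kernel bounds.

```python
import numpy as np

# Bounds read off from the coefficient functions of Example 5.1.
c_inf = np.array([0.5, 0.6, 0.8])               # c_i^-
c_sup = np.array([0.6, 0.9, 0.9])               # c_i^+
eta_sup = np.array([2 / 200, 2.0 / 100, 3.0 / 200])  # eta_i^+

a = np.array([[0.08, 0.15, 0.06], [0.15, 0.12, 0.04], [0.10, 0.08, 0.09]])  # a_ij^+
b = np.array([[0.10, 0.06, 0.05], [0.06, 0.03, 0.08], [0.12, 0.04, 0.07]])  # b_ij^+
d = np.array([[0.02, 0.15, 0.01], [0.05, 0.12, 0.03], [0.03, 0.08, 0.01]])  # d_ij^+
e = np.array([[0.02, 0.01, 0.05], [0.01, 0.03, 0.04], [0.02, 0.04, 0.01]])  # e_ij^+

L = np.array([0.2, 0.4, 1.0])    # Lipschitz constants of f_j
l = np.array([0.3, 0.1, 0.5])    # ... of g_j
Lh = np.array([0.1, 0.4, 0.2])   # ... of h_j
lk = np.array([0.3, 0.2, 0.2])   # ... of k_j

theta_bar = np.ones((3, 3))  # ASSUMED placeholder kernel bounds
xi_bar = np.ones((3, 3))     # ASSUMED placeholder kernel bounds

# Inner bracket of (H4), one value per i, then the outer max factor.
inner = (c_sup * eta_sup + a @ L + b @ l
         + (d * theta_bar) @ Lh + (e * xi_bar) @ lk)
factor = np.maximum(1.0 / c_inf, 1.0 + c_sup / c_inf)
r = np.max(factor * inner)
print(f"r = {r:.4f}, (H4) satisfied: {r < 1}")
```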

Figure 1. Continuous situation: $x_1$, $x_2$, $x_3$ with time $t$.

Figure 2. Continuous situation: $x_1$, $x_2$, $x_3$.

Figure 3. Discrete situation: $x_1$, $x_2$, $x_3$ with time $t$.

Figure 4. Discrete situation: $x_1$, $x_2$, $x_3$.

6 Conclusion

In this paper, a class of neutral delay Hopfield neural networks with time-varying delays in the leakage term on time scales has been investigated. For this model, we have given some sufficient conditions ensuring the existence and global exponential stability of almost periodic solutions by using the exponential dichotomy of linear dynamic equations on time scales, Banach's fixed point theorem and the theory of calculus on time scales. The obtained results are new and complement previously known results. Furthermore, a simple example demonstrates the effectiveness of our results and also shows that the continuous-time neural network and its discrete-time analogue have the same dynamical behaviors.

References

  1. Gu HB, Jiang HJ, Teng ZD: Existence and globally exponential stability of periodic solution of BAM neural networks with impulses and recent-history distributed delays. Neurocomputing 2008, 71: 813–822. 10.1016/j.neucom.2007.03.007

  2. Huang CX, Cao JD: Almost sure exponential stability of stochastic cellular neural networks with unbounded distributed delays. Neurocomputing 2009, 72: 3352–3356. 10.1016/j.neucom.2008.12.030

  3. Kwon OM, Park JH: Delay-dependent stability for uncertain cellular neural networks with discrete and distribute time-varying delays. J. Franklin Inst. 2008, 345: 766–778. 10.1016/j.jfranklin.2008.04.011

  4. Li YK, Yang L: Anti-periodic solutions for Cohen-Grossberg neural networks with bounded and unbounded delays. Commun. Nonlinear Sci. Numer. Simul. 2009, 14: 3134–3140. 10.1016/j.cnsns.2008.12.002

  5. Li YK: Global exponential stability of BAM neural networks with delays and impulses. Chaos Solitons Fractals 2005, 24: 279–285. 10.1016/S0960-0779(04)00561-2

  6. Li CZ, Li YK, Ye Y: Exponential stability of fuzzy Cohen-Grossberg neural networks with time delays and impulsive effects. Commun. Nonlinear Sci. Numer. Simul. 2010, 15: 3599–3606. 10.1016/j.cnsns.2010.01.001

  7. Cai Z, Huang L, Guo Z, Chen X: On the periodic dynamics of a class of time-varying delayed neural networks via differential inclusions. Neural Netw. 2012, 33: 97–113.

  8. Akhmet MU, Yılmaz E: Global exponential stability of neural networks with non-smooth and impact activations. Neural Netw. 2012, 34: 18–27.

  9. Kwon OM, Park JH, Lee SM, Cha EJ: New results on exponential passivity of neural networks with time-varying delays. Nonlinear Anal., Real World Appl. 2012, 13: 1593–1599. 10.1016/j.nonrwa.2011.11.017

  10. Liu PC, Yi FQ, Guo Q, Yang J, Wu W: Analysis on global exponential robust stability of reaction-diffusion neural networks with S-type distributed delays. Physica D 2008, 237: 475–485. 10.1016/j.physd.2007.09.014

  11. Mathiyalagan K, Sakthivel R, Marshal Anthoni S: Exponential stability result for discrete-time stochastic fuzzy uncertain neural networks. Phys. Lett. A 2012, 376: 901–912. 10.1016/j.physleta.2012.01.038

  12. Mohamad S, Gopalsamy K, Akca H: Exponential stability of artificial neural networks with distributed delays and large impulses. Nonlinear Anal., Real World Appl. 2008, 9: 872–888. 10.1016/j.nonrwa.2007.01.011

  13. Sakthivel R, Mathiyalagan K, Marshal Anthoni S: Design of a passification controller for uncertain fuzzy Hopfield neural networks with time-varying delays. Phys. Scr. 2011., 84: Article ID 045024

  14. Zhou DM, Zhang LM, Cao JD: On global exponential stability of cellular neural networks with Lipschitz-continuous activation function and variable delays. Appl. Math. Comput. 2004, 151(2):379–392. 10.1016/S0096-3003(03)00347-3

  15. Li YK, Fan XL: Existence and globally exponential stability of almost periodic solution for Cohen-Grossberg BAM neural networks with variable coefficients. Appl. Math. Model. 2009, 33: 2114–2120. 10.1016/j.apm.2008.05.013

  16. Li YK, Liu CC, Zhu LF: Global exponential stability of periodic solution for shunting inhibitory CNNs with delays. Phys. Lett. A 2005, 337: 46–54. 10.1016/j.physleta.2005.01.008

  17. Li YK: Global stability and existence of periodic solutions of discrete delayed cellular neural networks. Phys. Lett. A 2004, 333: 51–61. 10.1016/j.physleta.2004.10.022

  18. Li YK, Zhang TW, Xing ZW: The existence of nonzero almost periodic solution for Cohen-Grossberg neural networks with continuously distributed delays and impulses. Neurocomputing 2010, 73: 3105–3113. 10.1016/j.neucom.2010.06.012

  19. Li YK, Zhao KH: Robust stability of delayed reaction-diffusion recurrent neural networks with Dirichlet boundary conditions on time scales. Neurocomputing 2011, 74: 1632–1637. 10.1016/j.neucom.2011.01.006

  20. Song QK, Cao JD: Stability analysis of Cohen-Grossberg neural network with both time-varying and continuously distributed delays. J. Comput. Appl. Math. 2006, 197: 188–203. 10.1016/j.cam.2005.10.029

  21. Liu BW: Global exponential stability for BAM neural networks with time-varying delays in the leakage terms. Nonlinear Anal., Real World Appl. 2013, 14: 559–566. 10.1016/j.nonrwa.2012.07.016

  22. Balasubramaniam P, Kalpana M, Rakkiyappan R: Existence and global asymptotic stability of fuzzy cellular neural networks with time delay in the leakage term and unbounded distributed delays. Circuits Syst. Signal Process. 2011, 30: 1595–1616. 10.1007/s00034-011-9288-7

  23. Li X, Cao J: Delay-dependent stability of neural networks of neutral type with time delay in the leakage term. Nonlinearity 2010, 23: 1709–1726. 10.1088/0951-7715/23/7/010

  24. Li X, Rakkiyappan R, Balasubramanian P: Existence and global stability analysis of equilibrium of fuzzy cellular neural networks with time delay in the leakage term under impulsive perturbations. J. Franklin Inst. 2011, 348: 135–155. 10.1016/j.jfranklin.2010.10.009

  25. Balasubramanian P, Nagamani G, Rakkiyappan R: Passivity analysis for neural networks of neutral type with Markovian jumping parameters and time delay in the leakage term. Commun. Nonlinear Sci. Numer. Simul. 2011, 16: 4422–4437. 10.1016/j.cnsns.2011.03.028

  26. Lakshmanan S, Park JH, Jung HY, Balasubramaniam P: Design of state estimator for neural networks with leakage, discrete and distributed delays. Appl. Math. Comput. 2012, 218: 11297–11310. 10.1016/j.amc.2012.05.022

  27. Balasubramaniam P, Vembarasan V, Rakkiyappan R: Leakage delays in T-S fuzzy cellular neural networks. Neural Process. Lett. 2011, 33: 111–136. 10.1007/s11063-010-9168-3

  28. Li X, Fu X, Balasubramaniam P, Rakkiyappan R: Existence, uniqueness and stability analysis of recurrent neural networks with time delay in the leakage term under impulsive perturbations. Nonlinear Anal., Real World Appl. 2010, 11: 4092–4108. 10.1016/j.nonrwa.2010.03.014

  29. Gopalsamy K: Leakage delays in BAM. J. Math. Anal. Appl. 2007, 325: 1117–1132. 10.1016/j.jmaa.2006.02.039

  30. Li C, Huang T: On the stability of nonlinear systems with leakage delay. J. Franklin Inst. 2009, 346: 366–377. 10.1016/j.jfranklin.2008.12.001

  31. Peng S: Global attractive periodic solutions of BAM neural networks with continuously distributed delays in the leakage terms. Nonlinear Anal., Real World Appl. 2010, 11: 2141–2151. 10.1016/j.nonrwa.2009.06.004

  32. Balasubramaniam P, Kalpana M, Rakkiyappan R: State estimation for fuzzy cellular neural networks with time delay in the leakage term, discrete and unbounded distributed delays. Comput. Math. Appl. 2011, 62: 3959–3972.

  33. Li YK, Li YQ: Existence and exponential stability of almost periodic solution for neutral delay BAM neural networks with time-varying delays in leakage terms. J. Franklin Inst. 2013, 350: 2808–2825. 10.1016/j.jfranklin.2013.07.005

  34. Chen ZB: A shunting inhibitory cellular neural network with leakage delays and continuously distributed delays of neutral type. Neural Comput. Appl. 2013, 23: 2429–2434. 10.1007/s00521-012-1200-2

  35. Zhao CH, Wang ZY: Exponential convergence of a SICNN with leakage delays and continuously distributed delays of neutral type. Neural Process. Lett. 2014. 10.1007/s11063-014-9341-1

  36. Li YK, Zhao L, Chen XR: Existence of periodic solutions for neutral type cellular neural networks with delays. Appl. Math. Model. 2012, 36: 1173–1183. 10.1016/j.apm.2011.07.090

  37. Bai C: Global stability of almost periodic solutions of Hopfield neural networks with neutral time-varying delays. Appl. Math. Comput. 2008, 203: 72–79. 10.1016/j.amc.2008.04.002

  38. Xiao B: Existence and uniqueness of almost periodic solutions for a class of Hopfield neural networks with neutral delays. Appl. Math. Lett. 2009, 22: 528–533. 10.1016/j.aml.2008.06.025

  39. Park JH, Park CH, Kwon OM, Lee SM: A new stability criterion for bidirectional associative memory neural networks of neutral-type. Appl. Math. Comput. 2008, 199: 716–722. 10.1016/j.amc.2007.10.032

  40. Rakkiyappan R, Balasubramaniam P: New global exponential stability results for neutral type neural networks with distributed time delays. Neurocomputing 2008, 71: 1039–1045. 10.1016/j.neucom.2007.11.002

  41. Rakkiyappan R, Balasubramaniam P: LMI conditions for global asymptotic stability results for neutral-type neural networks with distributed time delays. Appl. Math. Comput. 2008, 204: 317–324. 10.1016/j.amc.2008.06.049

  42. Zhang Z, Liu W, Zhou D: Global asymptotic stability to a generalized Cohen-Grossberg BAM neural networks of neutral type delays. Neural Netw. 2012, 25: 94–105.

  43. Liu PL: Improved delay-dependent stability of neutral type neural networks with distributed delays. ISA Trans. 2013, 52: 717–724. 10.1016/j.isatra.2013.06.012

  44. Li YK, Chen XR, Zhao L: Stability and existence of periodic solutions to delayed Cohen-Grossberg BAM neural networks with impulses on time scales. Neurocomputing 2009, 72: 1621–1630. 10.1016/j.neucom.2008.08.010

  45. Li YK, Shu JY: Anti-periodic solutions to impulsive shunting inhibitory cellular neural networks with distributed delays on time scales. Commun. Nonlinear Sci. Numer. Simul. 2011, 16: 3326–3336. 10.1016/j.cnsns.2010.11.004

  46. Li YK, Wang C: Almost periodic solutions of shunting inhibitory cellular neural networks on time scales. Commun. Nonlinear Sci. Numer. Simul. 2012, 17: 3258–3266. 10.1016/j.cnsns.2011.11.034

  47. Liang T, Yang YQ, Liu Y, Li L: Existence and global exponential stability of almost periodic solutions to Cohen-Grossberg neural networks with distributed delays on time scales. Neurocomputing 2014, 123: 207–215.

  48. Zhang ZQ, Liu KY: Existence and global exponential stability of a periodic solution to interval general bidirectional associative memory (BAM) neural networks with multiple delays on time scales. Neural Netw. 2011, 24: 427–439. 10.1016/j.neunet.2011.02.001

  49. Li YK, Zhang TW: Global exponential stability of fuzzy interval delayed neural networks with impulses on time scales. Int. J. Neural Syst. 2009, 19(6):449–456. 10.1142/S0129065709002142

  50. Li YK, Gao S: Global exponential stability for impulsive BAM neural networks with distributed delays on time scales. Neural Process. Lett. 2010, 31: 65–91. 10.1007/s11063-009-9127-z

  51. Bohner M, Peterson A: Advances in Dynamic Equations on Time Scales. Birkhäuser, Boston; 2003.

  52. Li YK, Wang C: Uniformly almost periodic functions and almost periodic solutions to dynamic equations on time scales. Abstr. Appl. Anal. 2011., 2011: Article ID 341520

  53. Li YK, Wang C: Almost periodic functions on time scales and applications. Discrete Dyn. Nat. Soc. 2011., 2011: Article ID 727068

Acknowledgements

This study was supported by the National Natural Science Foundation of China under Grant 11361072.

Author information

Correspondence to Yongkun Li.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the manuscript and typed, read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Li, L., Li, Y. & Yang, L. Almost periodic solutions for neutral delay Hopfield neural networks with time-varying delays in the leakage term on time scales. Adv Differ Equ 2014, 178 (2014). https://doi.org/10.1186/1687-1847-2014-178
