Almost periodic solutions for neutral delay Hopfield neural networks with time-varying delays in the leakage term on time scales

Advances in Difference Equations 2014, 2014:178

https://doi.org/10.1186/1687-1847-2014-178

  • Received: 15 March 2014
  • Accepted: 30 June 2014

Abstract

In this paper, a class of neutral delay Hopfield neural networks with time-varying delays in the leakage term on time scales is considered. By utilizing the exponential dichotomy of linear dynamic equations on time scales, Banach’s fixed point theorem and the theory of calculus on time scales, some sufficient conditions are obtained for the existence and exponential stability of almost periodic solutions for this class of neural networks. Finally, a numerical example illustrates the feasibility of our results and also shows that the continuous-time neural network and its discrete-time analogue have the same dynamical behaviors. The results of this paper are completely new and complementary to the previously known results even when the time scale $\mathbb{T}=\mathbb{R}$ or $\mathbb{T}=\mathbb{Z}$.

Keywords

  • almost periodic solutions
  • Hopfield neural networks
  • neutral delay
  • leakage term
  • time scales

1 Introduction

The dynamical properties of delayed Hopfield neural networks have been extensively studied since they can be applied to pattern recognition, image processing, speed detection of moving objects, optimization problems and many other fields. Besides, due to the finite speed of information processing, the existence of time delays frequently causes oscillation, divergence, or instability in neural networks. Therefore, it is of prime importance to consider the delay effects on the stability of neural networks. Up to now, neural networks with various types of delay have been widely investigated by many authors [1–20].

However, so far, very little attention has been paid to neural networks with time delay in the leakage (or ‘forgetting’) term [21–35]. Such time delays in the leakage terms are difficult to handle and have been rarely considered in the literature. In fact, the leakage term has a great impact on the dynamical behavior of neural networks. Also, recently, another type of time delay, namely the neutral-type time delay, which always appears in the study of automatic control, population dynamics, vibrating masses attached to an elastic bar, etc., has drawn much research attention. So far there have been only a few papers that have taken the neutral-type phenomenon into account in delayed neural networks [33–43].

In fact, both continuous and discrete systems are very important in implementation and applications. But it is troublesome to study the existence of almost periodic solutions for continuous and discrete systems separately. Therefore, it is meaningful to study them on time scales, which unify the continuous and discrete situations (see [44–50]).

To the best of our knowledge, up to now, there have been no papers published on the existence and stability of almost periodic solutions to neutral-type delay neural networks with time-varying delays in the leakage term on time scales. Thus, it is important and, in effect, necessary to study the existence of almost periodic solutions for neutral-type neural networks with time-varying delay in the leakage term on time scales.

Motivated by the above, in this paper we propose the following neutral delay Hopfield neural networks with time-varying delays in the leakage term on a time scale $\mathbb{T}$:
$$
\begin{aligned}
x_i^{\Delta}(t)={}&-c_i(t)x_i\bigl(t-\eta_i(t)\bigr)+\sum_{j=1}^{n}a_{ij}(t)f_j\bigl(x_j(t-\tau_{ij}(t))\bigr)+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)h_j\bigl(x_j(s)\bigr)\,\Delta s\\
&+\sum_{j=1}^{n}b_{ij}(t)g_j\bigl(x_j^{\Delta}(t-\sigma_{ij}(t))\bigr)+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)k_j\bigl(x_j^{\Delta}(s)\bigr)\,\Delta s+I_i(t),\quad i=1,2,\ldots,n,
\end{aligned}\tag{1.1}
$$

where $\mathbb{T}$ is an almost periodic time scale that will be defined in the next section, $x_i(t)$ denotes the potential (or voltage) of cell $i$ at time $t$, $c_i(t)>0$ represents the rate with which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and external inputs at time $t$, $a_{ij}(t)$, $b_{ij}(t)$, $d_{ij}(t)$ and $e_{ij}(t)$ represent the delayed strengths of connectivity and the neutral delayed strengths of connectivity between cells $i$ and $j$ at time $t$, respectively, $\theta_{ij}(\cdot)$ and $\xi_{ij}(\cdot)$ are the kernel functions determining the distributed delays, $f_j$, $g_j$, $h_j$ and $k_j$ are the activation functions in system (1.1), $I_i(t)$ is an external input on the $i$th unit at time $t$, and $\tau_{ij}(t)\ge 0$ and $\sigma_{ij}(t)\ge 0$ correspond to the transmission delays of the $i$th unit along the axon of the $j$th unit at time $t$.

If $\mathbb{T}=\mathbb{R}$, then system (1.1) reduces to the following continuous-time neutral delay Hopfield neural network:
$$
\begin{aligned}
x_i'(t)={}&-c_i(t)x_i\bigl(t-\eta_i(t)\bigr)+\sum_{j=1}^{n}a_{ij}(t)f_j\bigl(x_j(t-\tau_{ij}(t))\bigr)+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)h_j\bigl(x_j(s)\bigr)\,\mathrm{d}s\\
&+\sum_{j=1}^{n}b_{ij}(t)g_j\bigl(x_j'(t-\sigma_{ij}(t))\bigr)+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)k_j\bigl(x_j'(s)\bigr)\,\mathrm{d}s+I_i(t),\quad i=1,2,\ldots,n,
\end{aligned}\tag{1.2}
$$
and if $\mathbb{T}=\mathbb{Z}$, then system (1.1) reduces to the discrete-time neutral delay Hopfield neural network
$$
\begin{aligned}
\Delta x_i(t)={}&-c_i(t)x_i\bigl(t-\eta_i(t)\bigr)+\sum_{j=1}^{n}a_{ij}(t)f_j\bigl(x_j(t-\tau_{ij}(t))\bigr)+\sum_{j=1}^{n}d_{ij}(t)\sum_{s=t-\delta_{ij}(t)}^{t}\theta_{ij}(s)h_j\bigl(x_j(s)\bigr)\\
&+\sum_{j=1}^{n}b_{ij}(t)g_j\bigl(\Delta x_j(t-\sigma_{ij}(t))\bigr)+\sum_{j=1}^{n}e_{ij}(t)\sum_{s=t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)k_j\bigl(\Delta x_j(s)\bigr)+I_i(t),\quad i=1,2,\ldots,n,
\end{aligned}\tag{1.3}
$$

where $t\in\mathbb{Z}$ and $\Delta x(t)=x(t+1)-x(t)$. When $\eta_i(t)\equiv 0$, $d_{ij}(t)\equiv 0$, $e_{ij}(t)\equiv 0$, $i,j=1,2,\ldots,n$, Bai [37] and Xiao [38] studied the almost periodicity of (1.2), respectively. However, even when $\eta_i(t)\equiv 0$, $i=1,2,\ldots,n$, the almost periodicity of (1.3), the discrete-time analogue of (1.2), has not been studied yet.
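As a quick illustration of the iteration underlying (1.3), the following minimal sketch steps a simplified two-neuron discrete-time network in which the leakage delay, neutral and distributed-delay terms are dropped; all parameter values are hypothetical and chosen only so that the map is a contraction, not taken from the paper.

```python
import math

def step(x, c, a, I, f=math.tanh):
    """One step of a simplified discrete-time Hopfield network:
    Delta x_i(t) = -c_i * x_i(t) + sum_j a_ij * f(x_j(t)) + I_i.
    The leakage-delay, neutral and distributed-delay terms of (1.3)
    are intentionally omitted in this sketch."""
    n = len(x)
    return [x[i] + (-c[i] * x[i]
                    + sum(a[i][j] * f(x[j]) for j in range(n))
                    + I[i])
            for i in range(n)]

# hypothetical two-neuron parameters with weak coupling
c = [0.5, 0.6]
a = [[0.05, -0.05], [0.1, 0.05]]
I = [0.0, 0.0]

x = [1.0, -1.0]
for _ in range(100):
    x = step(x, c, a, I)
```

With zero input and weak weights the state contracts toward the origin, consistent with the stability picture developed in Section 4 for the special case of constant coefficients.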

For convenience, for any almost periodic function $f(t)$ defined on $\mathbb{T}$, we define $f^{-}=\inf_{t\in\mathbb{T}}|f(t)|$ and $f^{+}=\sup_{t\in\mathbb{T}}|f(t)|$.

The initial condition associated with system (1.1) is of the form
$$x_i(t)=\varphi_i(t),\qquad x_i^{\Delta}(t)=\varphi_i^{\Delta}(t),\quad t\in[-\iota,0]_{\mathbb{T}},\ i=1,2,\ldots,n,$$

where $\varphi_i(\cdot)$ denotes a real-valued bounded $\Delta$-differentiable function defined on $[-\iota,0]_{\mathbb{T}}$ and $\iota=\max_{1\le i,j\le n}\{\eta_i^{+},\tau_{ij}^{+},\sigma_{ij}^{+},\delta_{ij}^{+},\zeta_{ij}^{+}\}$.

Throughout this paper, we assume that:

(H1) $c_i(t)>0$ with $-c_i\in\mathcal{R}^{+}$; $\eta_i(t)\ge 0$, $\tau_{ij}(t)\ge 0$, $\sigma_{ij}(t)\ge 0$, $\delta_{ij}(t)\ge 0$, $\zeta_{ij}(t)\ge 0$; $a_{ij}(t)$, $b_{ij}(t)$, $d_{ij}(t)$, $e_{ij}(t)$ and $I_i(t)$ are all almost periodic functions on $\mathbb{T}$; and $t-\eta_i(t)\in\mathbb{T}$, $t-\tau_{ij}(t)\in\mathbb{T}$, $t-\sigma_{ij}(t)\in\mathbb{T}$, $t-\delta_{ij}(t)\in\mathbb{T}$, $t-\zeta_{ij}(t)\in\mathbb{T}$ for $t\in\mathbb{T}$, $i,j=1,2,\ldots,n$.

(H2) There exist positive constants $L_i$, $l_i$, $L_i^{h}$, $l_i^{k}$ such that for $i=1,2,\ldots,n$,
$$|f_i(x)-f_i(y)|\le L_i|x-y|,\qquad |g_i(x)-g_i(y)|\le l_i|x-y|,$$
$$|h_i(x)-h_i(y)|\le L_i^{h}|x-y|,\qquad |k_i(x)-k_i(y)|\le l_i^{k}|x-y|,$$

where $x,y\in\mathbb{R}$ and $f_i(0)=g_i(0)=h_i(0)=k_i(0)=0$.

(H3) For $i,j=1,2,\ldots,n$, the delay kernels $\theta_{ij},\xi_{ij}:\mathbb{T}\to\mathbb{R}$ are continuous and integrable with
$$0\le\int_{t-\delta_{ij}^{+}}^{t}\bigl|\theta_{ij}(s)\bigr|\,\Delta s\le\bar{\theta}_{ij},\qquad 0\le\int_{t-\zeta_{ij}^{+}}^{t}\bigl|\xi_{ij}(s)\bigr|\,\Delta s\le\bar{\xi}_{ij}.$$

The main purpose of this paper is to study the existence and global exponential stability of the almost periodic solution to (1.1). Our results are completely new and complementary to the previously known results even when the time scale $\mathbb{T}=\mathbb{R}$ or $\mathbb{Z}$. The rest of this paper is organized as follows. In Section 2, we introduce some definitions and make some preparations for later sections. In Section 3 and Section 4, by utilizing Banach’s fixed point theorem and the theory of calculus on time scales, we present some sufficient conditions which guarantee the existence of a unique globally exponentially stable almost periodic solution of system (1.1). In Section 5, we present an example to illustrate the feasibility and effectiveness of our results obtained in the previous sections. We draw a conclusion in Section 6.

2 Preliminaries

In this section, we shall first recall some basic definitions and lemmas which will be useful for the proof of our main results.

Let $\mathbb{T}$ be a nonempty closed subset (time scale) of $\mathbb{R}$. The forward and backward jump operators $\sigma,\rho:\mathbb{T}\to\mathbb{T}$ and the graininess $\mu:\mathbb{T}\to\mathbb{R}^{+}$ are defined, respectively, by
$$\sigma(t)=\inf\{s\in\mathbb{T}:s>t\},\qquad \rho(t)=\sup\{s\in\mathbb{T}:s<t\}\quad\text{and}\quad \mu(t)=\sigma(t)-t.$$

A point $t\in\mathbb{T}$ is called left-dense if $t>\inf\mathbb{T}$ and $\rho(t)=t$, left-scattered if $\rho(t)<t$, right-dense if $t<\sup\mathbb{T}$ and $\sigma(t)=t$, and right-scattered if $\sigma(t)>t$. If $\mathbb{T}$ has a left-scattered maximum $m$, then $\mathbb{T}^{\kappa}=\mathbb{T}\setminus\{m\}$; otherwise $\mathbb{T}^{\kappa}=\mathbb{T}$. If $\mathbb{T}$ has a right-scattered minimum $m$, then $\mathbb{T}_{\kappa}=\mathbb{T}\setminus\{m\}$; otherwise $\mathbb{T}_{\kappa}=\mathbb{T}$.

A function $f:\mathbb{T}\to\mathbb{R}$ is right-dense continuous provided it is continuous at right-dense points in $\mathbb{T}$ and its left-side limits exist at left-dense points in $\mathbb{T}$. If $f$ is continuous at each right-dense point and each left-dense point, then $f$ is said to be a continuous function on $\mathbb{T}$.

For $y:\mathbb{T}\to\mathbb{R}$ and $t\in\mathbb{T}^{\kappa}$, we define the delta derivative of $y(t)$, $y^{\Delta}(t)$, to be the number (if it exists) with the property that for a given $\varepsilon>0$, there exists a neighborhood $U$ of $t$ such that
$$\bigl|\bigl[y(\sigma(t))-y(s)\bigr]-y^{\Delta}(t)\bigl[\sigma(t)-s\bigr]\bigr|<\varepsilon\bigl|\sigma(t)-s\bigr|$$

for all $s\in U$.

If y is continuous, then y is right-dense continuous, and if y is delta differentiable at t, then y is continuous at t.

Let $y$ be right-dense continuous. If $Y^{\Delta}(t)=y(t)$, then we define the delta integral by
$$\int_{a}^{t}y(s)\,\Delta s=Y(t)-Y(a).$$
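For a concrete feel for these operators, the following minimal sketch represents a purely isolated time scale as a sorted list of points and computes $\sigma$, $\mu$ and the Cauchy delta integral (which on an isolated time scale is the sum of $y(t)\mu(t)$); the sample time scale and functions are hypothetical.

```python
import bisect

def sigma(T, t):
    """Forward jump: smallest point of T strictly greater than t
    (returns t itself at the maximum of T, by convention)."""
    i = bisect.bisect_right(T, t)
    return T[i] if i < len(T) else t

def mu(T, t):
    """Graininess mu(t) = sigma(t) - t."""
    return sigma(T, t) - t

def delta_integral(T, y, a, b):
    """Delta integral of y over [a, b)_T for an isolated time scale:
    sum of y(t) * mu(t) over the points t of T lying in [a, b)."""
    return sum(y(t) * mu(T, t) for t in T if a <= t < b)

# a hypothetical hybrid time scale with non-constant graininess
T = [0.0, 0.5, 1.0, 2.0, 3.0]
```

On $\mathbb{T}=\mathbb{Z}$ (every `mu` equal to 1) the delta integral collapses to an ordinary sum, matching the passage from (1.1) to (1.3).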
A function $r:\mathbb{T}\to\mathbb{R}$ is called regressive if
$$1+\mu(t)r(t)\neq 0$$

for all $t\in\mathbb{T}^{\kappa}$. The set of all regressive and rd-continuous functions $r:\mathbb{T}\to\mathbb{R}$ will be denoted by $\mathcal{R}=\mathcal{R}(\mathbb{T})=\mathcal{R}(\mathbb{T},\mathbb{R})$. We define the set $\mathcal{R}^{+}=\mathcal{R}^{+}(\mathbb{T},\mathbb{R})=\{r\in\mathcal{R}:1+\mu(t)r(t)>0,\ \forall t\in\mathbb{T}\}$.

If $r$ is a regressive function, then the generalized exponential function $e_r$ is defined by
$$e_r(t,s)=\exp\Bigl\{\int_{s}^{t}\xi_{\mu(\tau)}\bigl(r(\tau)\bigr)\,\Delta\tau\Bigr\}\quad\text{for } s,t\in\mathbb{T},$$
with the cylinder transformation
$$\xi_h(z)=\begin{cases}\dfrac{\operatorname{Log}(1+hz)}{h} & \text{if } h\neq 0,\\ z & \text{if } h=0.\end{cases}$$

Definition 2.1 [51]

Let $p,q:\mathbb{T}\to\mathbb{R}$ be two regressive functions; define
$$p\oplus q:=p+q+\mu pq,\qquad \ominus p:=-\frac{p}{1+\mu p},\qquad p\ominus q:=p\oplus(\ominus q).$$

Lemma 2.1 [51]

Assume that $p,q:\mathbb{T}\to\mathbb{R}$ are two regressive functions; then

(1) $e_0(t,s)\equiv 1$ and $e_p(t,t)\equiv 1$;

(2) $e_p(\sigma(t),s)=\bigl(1+\mu(t)p(t)\bigr)e_p(t,s)$;

(3) $e_p(t,s)=\dfrac{1}{e_p(s,t)}=e_{\ominus p}(s,t)$;

(4) $e_p(t,s)e_p(s,r)=e_p(t,r)$;

(5) $\bigl(e_{\ominus p}(t,s)\bigr)^{\Delta}=(\ominus p)(t)e_{\ominus p}(t,s)$;

(6) if $a,b,c\in\mathbb{T}$, then $\displaystyle\int_a^b p(t)e_p\bigl(c,\sigma(t)\bigr)\,\Delta t=e_p(c,a)-e_p(c,b)$.

Definition 2.2 [51]

Assume that $f:\mathbb{T}\to\mathbb{R}$ is a function and let $t\in\mathbb{T}^{\kappa}$. Then we define $f^{\Delta}(t)$ to be the number (provided it exists) with the property that given any $\varepsilon>0$, there is a neighborhood $U$ of $t$ (i.e., $U=(t-\delta,t+\delta)\cap\mathbb{T}$ for some $\delta>0$) such that
$$\bigl|\bigl[f(\sigma(t))-f(s)\bigr]-f^{\Delta}(t)\bigl[\sigma(t)-s\bigr]\bigr|\le\varepsilon\bigl|\sigma(t)-s\bigr|$$

for all $s\in U$. We call $f^{\Delta}(t)$ the delta (or Hilger) derivative of $f$ at $t$. Moreover, we say that $f$ is delta (or Hilger) differentiable (or, in short, differentiable) on $\mathbb{T}^{\kappa}$ provided $f^{\Delta}(t)$ exists for all $t\in\mathbb{T}^{\kappa}$. The function $f^{\Delta}:\mathbb{T}^{\kappa}\to\mathbb{R}$ is then called the (delta) derivative of $f$ on $\mathbb{T}^{\kappa}$.

Definition 2.3 [52]

A time scale $\mathbb{T}$ is called an almost periodic time scale if
$$\Pi:=\{\tau\in\mathbb{R}:t\pm\tau\in\mathbb{T},\ \forall t\in\mathbb{T}\}\neq\{0\}.$$

Definition 2.4 [52]

Let $\mathbb{T}$ be an almost periodic time scale. A function $f\in C(\mathbb{T},\mathbb{E}^{n})$ is called an almost periodic function if the $\varepsilon$-translation set of $f$,
$$E\{\varepsilon,f\}=\bigl\{\tau\in\Pi:\bigl|f(t+\tau)-f(t)\bigr|<\varepsilon,\ \forall t\in\mathbb{T}\bigr\},$$
is a relatively dense set in $\mathbb{T}$ for all $\varepsilon>0$; that is, for any given $\varepsilon>0$, there exists a constant $l(\varepsilon)>0$ such that each interval of length $l(\varepsilon)$ contains a $\tau(\varepsilon)\in E\{\varepsilon,f\}$ such that
$$\bigl|f(t+\tau)-f(t)\bigr|<\varepsilon,\quad \forall t\in\mathbb{T}.$$

Here $\tau$ is called the $\varepsilon$-translation number of $f$ and $l(\varepsilon)$ is called the inclusion length of $E\{\varepsilon,f\}$.
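To make the $\varepsilon$-translation idea concrete: on $\mathbb{T}=\mathbb{Z}$, any integer multiple of the period of a periodic sequence is an $\varepsilon$-translation number for every $\varepsilon>0$. A small numerical check, where the test function and sample window are hypothetical:

```python
import math

def translation_defect(f, tau, ts):
    """sup_t |f(t + tau) - f(t)| over a finite sample of the time scale;
    tau is an epsilon-translation number when this defect is < epsilon."""
    return max(abs(f(t + tau) - f(t)) for t in ts)

# a 5-periodic sequence on Z, hence almost periodic on T = Z
f = lambda t: math.sin(2 * math.pi * t / 5)
ts = range(-50, 50)
```

For this $f$ the defect at $\tau=5$ is numerically zero, while $\tau=1$ is not a translation number for small $\varepsilon$; genuinely almost periodic (non-periodic) functions admit arbitrarily good translation numbers in every sufficiently long interval rather than an exact period.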

Definition 2.5 [52]

Let $A(t)$ be an $n\times n$ rd-continuous matrix on $\mathbb{T}$. The linear system
$$x^{\Delta}(t)=A(t)x(t),\quad t\in\mathbb{T},\tag{2.1}$$
is said to admit an exponential dichotomy on $\mathbb{T}$ if there exist positive constants $k$, $\alpha$, a projection $P$ and the fundamental solution matrix $X(t)$ of (2.1) satisfying
$$\bigl\|X(t)PX^{-1}\bigl(\sigma(s)\bigr)\bigr\|_{0}\le k e_{\ominus\alpha}\bigl(t,\sigma(s)\bigr),\quad s,t\in\mathbb{T},\ t\ge\sigma(s),$$
$$\bigl\|X(t)(I-P)X^{-1}\bigl(\sigma(s)\bigr)\bigr\|_{0}\le k e_{\ominus\alpha}\bigl(\sigma(s),t\bigr),\quad s,t\in\mathbb{T},\ t\le\sigma(s),$$

where $\|\cdot\|_{0}$ is a matrix norm on $\mathbb{T}$ (for example, if $A=(a_{ij})_{n\times m}$, then we can take $\|A\|_{0}=\bigl(\sum_{i=1}^{n}\sum_{j=1}^{m}|a_{ij}|^{2}\bigr)^{\frac{1}{2}}$).

Consider the following almost periodic system:
$$x^{\Delta}(t)=A(t)x(t)+f(t),\quad t\in\mathbb{T},\tag{2.2}$$

where $A(t)$ is an almost periodic matrix function and $f(t)$ is an almost periodic vector function.

Lemma 2.2 [52]

If the linear system (2.1) admits exponential dichotomy, then system (2.2) has a unique almost periodic solution
$$x(t)=\int_{-\infty}^{t}X(t)PX^{-1}\bigl(\sigma(s)\bigr)f(s)\,\Delta s-\int_{t}^{+\infty}X(t)(I-P)X^{-1}\bigl(\sigma(s)\bigr)f(s)\,\Delta s,$$

where X ( t ) is the fundamental solution matrix of (2.1).

Lemma 2.3 [53]

Let $c_i(t)$ be an almost periodic function on $\mathbb{T}$, where $c_i(t)>0$, $-c_i(t)\in\mathcal{R}^{+}$, $i=1,2,\ldots,n$, $t\in\mathbb{T}$, and $\min_{1\le i\le n}\{\inf_{t\in\mathbb{T}}c_i(t)\}=\tilde{m}>0$; then the linear system
$$x^{\Delta}(t)=\operatorname{diag}\bigl(-c_1(t),-c_2(t),\ldots,-c_n(t)\bigr)x(t)\tag{2.3}$$
admits an exponential dichotomy on $\mathbb{T}$.

One can easily prove the following.

Lemma 2.4 Suppose that $f(t)$ is an rd-continuous function and $c(t)$ is a positive rd-continuous function satisfying $-c(t)\in\mathcal{R}^{+}$. Let
$$g(t)=\int_{t_0}^{t}e_{\ominus c}\bigl(t,\sigma(s)\bigr)f(s)\,\Delta s,$$
where $t_0\in\mathbb{T}$; then
$$g^{\Delta}(t)=f(t)+\int_{t_0}^{t}\bigl[(\ominus c)(t)e_{\ominus c}\bigl(t,\sigma(s)\bigr)f(s)\bigr]\,\Delta s.$$
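Lemma 2.4 can be verified directly on $\mathbb{T}=\mathbb{Z}$, where $(\ominus c)(t)=-c(t)/(1+c(t))$, $e_{\ominus c}(t,s)=\prod_{u=s}^{t-1}(1+c(u))^{-1}$ and $g^{\Delta}(t)=g(t+1)-g(t)$; when $\mu\equiv 0$ (i.e., $\mathbb{T}=\mathbb{R}$) the correction term reduces to $-c(t)g(t)$. A sketch with hypothetical $c$ and $f$:

```python
import math

def e_om_c(c, t, s):
    """e_{ominus c}(t, s) on T = Z: product of 1/(1 + c(u)) for u in [s, t)."""
    prod = 1.0
    for u in range(s, t):
        prod /= 1.0 + c(u)
    return prod

def g(c, f, t0, t):
    """g(t) = sum_{s=t0}^{t-1} e_{ominus c}(t, sigma(s)) f(s), sigma(s) = s+1."""
    return sum(e_om_c(c, t, s + 1) * f(s) for s in range(t0, t))

# hypothetical positive rate and forcing term
c = lambda t: 0.3 + 0.05 * (t % 3)
f = lambda t: math.sin(0.5 * t)
```

One can then check numerically that $g(t+1)-g(t)=f(t)-\frac{c(t)}{1+c(t)}g(t)$, i.e., that $g$ satisfies Lemma 2.4 with $(\ominus c)(t)=-c(t)/(1+c(t))$.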

3 Existence of almost periodic solutions

Let $\mathrm{AP}(\mathbb{T})=\{x(t)\in C(\mathbb{T},\mathbb{R}):x(t)\text{ is a real-valued, almost periodic function on }\mathbb{T}\}$, $\mathbb{Y}=\{x(t)\in C^{1}(\mathbb{T},\mathbb{R}):x(t),x^{\Delta}(t)\in \mathrm{AP}(\mathbb{T})\}$ and
$$\mathbb{X}=\bigl\{\varphi=\bigl(\varphi_1(t),\varphi_2(t),\ldots,\varphi_n(t)\bigr)^{T}:\varphi_i(t)\in\mathbb{Y},\ i=1,2,\ldots,n\bigr\}.$$
For $\varphi\in\mathbb{X}$, if we define the induced modulus $\|\varphi\|_{\mathbb{X}}=\max\{\|\varphi\|_{0},\|\varphi^{\Delta}\|_{0}\}$, where
$$\|\varphi\|_{0}=\sup_{t\in\mathbb{T}}\bigl\|\varphi(t)\bigr\|_{0},\qquad \bigl\|\varphi(t)\bigr\|_{0}=\max_{1\le i\le n}\bigl|\varphi_i(t)\bigr|,$$

and $\varphi^{\Delta}(t)=(\varphi_1^{\Delta}(t),\varphi_2^{\Delta}(t),\ldots,\varphi_n^{\Delta}(t))^{T}$, then $\mathbb{X}$ is a Banach space.

Theorem 3.1 Assume that (H1)-(H3) and

(H4) $r=\max_{1\le i\le n}\max\Bigl\{\dfrac{1}{c_i^{-}},\,1+\dfrac{c_i^{+}}{c_i^{-}}\Bigr\}\Bigl(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\Bigr)<1$

hold; then there exists exactly one almost periodic solution of system (1.1) in the region $\mathbb{X}_{0}=\bigl\{\phi\in\mathbb{X}:\|\phi-\phi_{0}\|_{\mathbb{X}}\le\frac{rR}{1-r}\bigr\}$, where
$$R=\max_{1\le i\le n}\max\Bigl\{\frac{I_i^{+}}{c_i^{-}},\,I_i^{+}\Bigl(1+\frac{c_i^{+}}{c_i^{-}}\Bigr)\Bigr\},\qquad \phi_{0}=\Bigl(\int_{-\infty}^{t}I_1(s)e_{\ominus c_1}\bigl(t,\sigma(s)\bigr)\,\Delta s,\ldots,\int_{-\infty}^{t}I_n(s)e_{\ominus c_n}\bigl(t,\sigma(s)\bigr)\,\Delta s\Bigr)^{T}.$$
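Condition (H4) is straightforward to check numerically for a given network. The sketch below computes $r$ for a hypothetical two-neuron parameter set (all values are illustrative, not taken from the paper):

```python
def contraction_constant(c_minus, c_plus, eta_plus, A, B, D, E,
                         L, l, Lh, lk, theta_bar, xi_bar):
    """Compute r from condition (H4).
    c_minus/c_plus: per-neuron inf/sup of c_i; A, B, D, E: n x n matrices
    of the sup-norms a_ij^+, b_ij^+, d_ij^+, e_ij^+; L, l, Lh, lk:
    Lipschitz constants of the activations; theta_bar, xi_bar: kernel
    bounds from (H3)."""
    n = len(c_minus)
    r = 0.0
    for i in range(n):
        s = (c_plus[i] * eta_plus[i]
             + sum(A[i][j] * L[j] for j in range(n))
             + sum(B[i][j] * l[j] for j in range(n))
             + sum(D[i][j] * Lh[j] * theta_bar[i][j] for j in range(n))
             + sum(E[i][j] * lk[j] * xi_bar[i][j] for j in range(n)))
        r = max(r, max(1.0 / c_minus[i], 1.0 + c_plus[i] / c_minus[i]) * s)
    return r

# hypothetical two-neuron data (illustrative only)
c_minus, c_plus = [1.0, 1.0], [1.2, 1.2]
eta_plus = [0.05, 0.05]
A = B = D = E = [[0.02, 0.02], [0.02, 0.02]]
L = l = Lh = lk = [1.0, 1.0]
theta_bar = xi_bar = [[1.0, 1.0], [1.0, 1.0]]

r = contraction_constant(c_minus, c_plus, eta_plus, A, B, D, E,
                         L, l, Lh, lk, theta_bar, xi_bar)
```

For this data $r=2.2\times 0.22=0.484<1$, so Theorem 3.1 would apply; increasing the leakage delay bound $\eta^{+}$ or the connection strengths eventually violates (H4).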
Proof Rewrite (1.1) in the form
$$
\begin{aligned}
x_i^{\Delta}(t)={}&-c_i(t)x_i(t)+c_i(t)\int_{t-\eta_i(t)}^{t}x_i^{\Delta}(s)\,\Delta s+\sum_{j=1}^{n}a_{ij}(t)f_j\bigl(x_j(t-\tau_{ij}(t))\bigr)+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)h_j\bigl(x_j(s)\bigr)\,\Delta s\\
&+\sum_{j=1}^{n}b_{ij}(t)g_j\bigl(x_j^{\Delta}(t-\sigma_{ij}(t))\bigr)+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)k_j\bigl(x_j^{\Delta}(s)\bigr)\,\Delta s+I_i(t),\quad i=1,2,\ldots,n.
\end{aligned}
$$
For any $\phi\in\mathbb{X}$, we consider the following system:
$$
\begin{aligned}
x_i^{\Delta}(t)={}&-c_i(t)x_i(t)+c_i(t)\int_{t-\eta_i(t)}^{t}\phi_i^{\Delta}(s)\,\Delta s+\sum_{j=1}^{n}a_{ij}(t)f_j\bigl(\phi_j(t-\tau_{ij}(t))\bigr)+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)h_j\bigl(\phi_j(s)\bigr)\,\Delta s\\
&+\sum_{j=1}^{n}b_{ij}(t)g_j\bigl(\phi_j^{\Delta}(t-\sigma_{ij}(t))\bigr)+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)k_j\bigl(\phi_j^{\Delta}(s)\bigr)\,\Delta s+I_i(t),\quad i=1,2,\ldots,n.
\end{aligned}\tag{3.1}
$$
Since $\min_{1\le i\le n}\{\inf_{t\in\mathbb{T}}c_i(t)\}>0$, it follows from Lemma 2.2 and Lemma 2.3 that system (3.1) has a unique almost periodic solution, which can be expressed as
$$x_{\phi}(t)=\bigl(x_1^{\phi}(t),x_2^{\phi}(t),\ldots,x_n^{\phi}(t)\bigr)^{T},$$
where
$$
\begin{aligned}
x_i^{\phi}(t)={}&\int_{-\infty}^{t}e_{\ominus c_i}\bigl(t,\sigma(s)\bigr)\Bigl[c_i(s)\int_{s-\eta_i(s)}^{s}\phi_i^{\Delta}(u)\,\Delta u+\sum_{j=1}^{n}a_{ij}(s)f_j\bigl(\phi_j(s-\tau_{ij}(s))\bigr)+\sum_{j=1}^{n}d_{ij}(s)\int_{s-\delta_{ij}(s)}^{s}\theta_{ij}(u)h_j\bigl(\phi_j(u)\bigr)\,\Delta u\\
&+\sum_{j=1}^{n}b_{ij}(s)g_j\bigl(\phi_j^{\Delta}(s-\sigma_{ij}(s))\bigr)+\sum_{j=1}^{n}e_{ij}(s)\int_{s-\zeta_{ij}(s)}^{s}\xi_{ij}(u)k_j\bigl(\phi_j^{\Delta}(u)\bigr)\,\Delta u+I_i(s)\Bigr]\,\Delta s,\quad i=1,2,\ldots,n.
\end{aligned}
$$

Now, we define a mapping $T:\mathbb{X}_{0}\to\mathbb{X}_{0}$ by $(T\phi)(t)=x_{\phi}(t)$, $\forall\phi\in\mathbb{X}_{0}$.

By the definition of $\mathbb{X}$, we have
$$
\begin{aligned}
\|\phi_{0}\|_{\mathbb{X}}&=\max\bigl\{\|\phi_{0}\|_{0},\|\phi_{0}^{\Delta}\|_{0}\bigr\}\\
&=\max\Bigl\{\sup_{t\in\mathbb{T}}\max_{1\le i\le n}\Bigl|\int_{-\infty}^{t}I_i(s)e_{\ominus c_i}\bigl(t,\sigma(s)\bigr)\,\Delta s\Bigr|,\ \sup_{t\in\mathbb{T}}\max_{1\le i\le n}\Bigl|I_i(t)-\int_{-\infty}^{t}c_i(t)I_i(s)e_{\ominus c_i}\bigl(t,\sigma(s)\bigr)\,\Delta s\Bigr|\Bigr\}\\
&\le\max\Bigl\{\max_{1\le i\le n}\frac{I_i^{+}}{c_i^{-}},\ \max_{1\le i\le n}I_i^{+}\Bigl(1+\frac{c_i^{+}}{c_i^{-}}\Bigr)\Bigr\}=R.
\end{aligned}\tag{3.2}
$$
Hence, for any $\phi\in\mathbb{X}_{0}=\{\phi\in\mathbb{X}:\|\phi-\phi_{0}\|_{\mathbb{X}}\le\frac{rR}{1-r}\}$, one has
$$\|\phi\|_{\mathbb{X}}\le\|\phi_{0}\|_{\mathbb{X}}+\|\phi-\phi_{0}\|_{\mathbb{X}}\le R+\frac{rR}{1-r}=\frac{R}{1-r}.$$
Next, we show that $T(\mathbb{X}_{0})\subset\mathbb{X}_{0}$. In fact, for any $\phi\in\mathbb{X}_{0}$, we have
$$
\begin{aligned}
\|T\phi-\phi_{0}\|_{0}&=\sup_{t\in\mathbb{T}}\max_{1\le i\le n}\Bigl|\int_{-\infty}^{t}e_{\ominus c_i}\bigl(t,\sigma(s)\bigr)\Bigl[c_i(s)\int_{s-\eta_i(s)}^{s}\phi_i^{\Delta}(u)\,\Delta u+\sum_{j=1}^{n}a_{ij}(s)f_j\bigl(\phi_j(s-\tau_{ij}(s))\bigr)\\
&\qquad+\sum_{j=1}^{n}d_{ij}(s)\int_{s-\delta_{ij}(s)}^{s}\theta_{ij}(u)h_j\bigl(\phi_j(u)\bigr)\,\Delta u+\sum_{j=1}^{n}b_{ij}(s)g_j\bigl(\phi_j^{\Delta}(s-\sigma_{ij}(s))\bigr)+\sum_{j=1}^{n}e_{ij}(s)\int_{s-\zeta_{ij}(s)}^{s}\xi_{ij}(u)k_j\bigl(\phi_j^{\Delta}(u)\bigr)\,\Delta u\Bigr]\,\Delta s\Bigr|\\
&\le\sup_{t\in\mathbb{T}}\max_{1\le i\le n}\int_{-\infty}^{t}e_{\ominus c_i}\bigl(t,\sigma(s)\bigr)\Bigl(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\Bigr)\|\phi\|_{\mathbb{X}}\,\Delta s\\
&\le\max_{1\le i\le n}\Bigl\{\frac{1}{c_i^{-}}\Bigl(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\Bigr)\Bigr\}\|\phi\|_{\mathbb{X}}
\end{aligned}
$$
and
$$
\begin{aligned}
\bigl\|(T\phi-\phi_{0})^{\Delta}\bigr\|_{0}&=\sup_{t\in\mathbb{T}}\max_{1\le i\le n}\Bigl|c_i(t)\int_{t-\eta_i(t)}^{t}\phi_i^{\Delta}(u)\,\Delta u+\sum_{j=1}^{n}a_{ij}(t)f_j\bigl(\phi_j(t-\tau_{ij}(t))\bigr)+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)h_j\bigl(\phi_j(s)\bigr)\,\Delta s\\
&\qquad+\sum_{j=1}^{n}b_{ij}(t)g_j\bigl(\phi_j^{\Delta}(t-\sigma_{ij}(t))\bigr)+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)k_j\bigl(\phi_j^{\Delta}(s)\bigr)\,\Delta s\\
&\qquad-\int_{-\infty}^{t}c_i(t)e_{\ominus c_i}\bigl(t,\sigma(s)\bigr)\Bigl[c_i(s)\int_{s-\eta_i(s)}^{s}\phi_i^{\Delta}(u)\,\Delta u+\sum_{j=1}^{n}a_{ij}(s)f_j\bigl(\phi_j(s-\tau_{ij}(s))\bigr)+\sum_{j=1}^{n}d_{ij}(s)\int_{s-\delta_{ij}(s)}^{s}\theta_{ij}(u)h_j\bigl(\phi_j(u)\bigr)\,\Delta u\\
&\qquad+\sum_{j=1}^{n}b_{ij}(s)g_j\bigl(\phi_j^{\Delta}(s-\sigma_{ij}(s))\bigr)+\sum_{j=1}^{n}e_{ij}(s)\int_{s-\zeta_{ij}(s)}^{s}\xi_{ij}(u)k_j\bigl(\phi_j^{\Delta}(u)\bigr)\,\Delta u\Bigr]\,\Delta s\Bigr|\\
&\le\max_{1\le i\le n}\Bigl\{\Bigl(1+\frac{c_i^{+}}{c_i^{-}}\Bigr)\Bigl(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\Bigr)\Bigr\}\|\phi\|_{\mathbb{X}}.
\end{aligned}
$$
Thus, we obtain
$$\|T\phi-\phi_{0}\|_{\mathbb{X}}=\max\bigl\{\|T\phi-\phi_{0}\|_{0},\bigl\|(T\phi-\phi_{0})^{\Delta}\bigr\|_{0}\bigr\}\le\max_{1\le i\le n}\max\Bigl\{\frac{1}{c_i^{-}},1+\frac{c_i^{+}}{c_i^{-}}\Bigr\}\Bigl(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\Bigr)\|\phi\|_{\mathbb{X}}=r\|\phi\|_{\mathbb{X}}\le\frac{rR}{1-r},$$

which implies $T\phi\in\mathbb{X}_{0}$, so the mapping $T$ is a self-mapping from $\mathbb{X}_{0}$ to $\mathbb{X}_{0}$.

Finally, we prove that T is a contraction mapping. Taking ϕ , ψ X 0 , we have that
$$\|T\phi-T\psi\|_{0}\le\max_{1\le i\le n}\Bigl\{\frac{1}{c_i^{-}}\Bigl(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\Bigr)\Bigr\}\|\phi-\psi\|_{\mathbb{X}}\le r\|\phi-\psi\|_{\mathbb{X}}$$
and
$$\bigl\|(T\phi-T\psi)^{\Delta}\bigr\|_{0}\le\max_{1\le i\le n}\Bigl\{\Bigl(1+\frac{c_i^{+}}{c_i^{-}}\Bigr)\Bigl(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\Bigr)\Bigr\}\|\phi-\psi\|_{\mathbb{X}}\le r\|\phi-\psi\|_{\mathbb{X}}.$$

Since $r<1$, $T$ is a contraction mapping. Thus, there exists a unique fixed point $\phi^{*}\in\mathbb{X}_{0}$ such that $T\phi^{*}=\phi^{*}$. Hence system (1.1) has a unique almost periodic solution in the region $\mathbb{X}_{0}=\{\phi\in\mathbb{X}:\|\phi-\phi_{0}\|_{\mathbb{X}}\le\frac{rR}{1-r}\}$. This completes the proof. □

4 Exponential stability of the almost periodic solution

Definition 4.1 The almost periodic solution $\bar{x}(t)=(\bar{x}_1(t),\bar{x}_2(t),\ldots,\bar{x}_n(t))^{T}$ of system (1.1) with initial value $\bar{\varphi}(t)=(\bar{\varphi}_1(t),\bar{\varphi}_2(t),\ldots,\bar{\varphi}_n(t))^{T}$ is said to be globally exponentially stable if there exist positive constants $\lambda$ with $\ominus\lambda\in\mathcal{R}^{+}$ and $M>1$ such that every solution $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^{T}$ of system (1.1) with initial value $\varphi(t)=(\varphi_1(t),\varphi_2(t),\ldots,\varphi_n(t))^{T}$ satisfies
$$\bigl\|x(t)-\bar{x}(t)\bigr\|_{1}\le M e_{\ominus\lambda}(t,t_{0})\|\psi\|_{\mathbb{X}},\quad t\in[-\iota,+\infty)_{\mathbb{T}},\ t\ge t_{0},$$
where
$$\bigl\|x(t)-\bar{x}(t)\bigr\|_{1}=\max\bigl\{\bigl\|x(t)-\bar{x}(t)\bigr\|_{0},\bigl\|\bigl(x(t)-\bar{x}(t)\bigr)^{\Delta}\bigr\|_{0}\bigr\},\qquad \|\psi\|_{\mathbb{X}}=\max\Bigl\{\sup_{t\in[-\iota,0]_{\mathbb{T}}}\max_{1\le i\le n}\bigl|\varphi_i(t)-\bar{\varphi}_i(t)\bigr|,\ \sup_{t\in[-\iota,0]_{\mathbb{T}}}\max_{1\le i\le n}\bigl|\varphi_i^{\Delta}(t)-\bar{\varphi}_i^{\Delta}(t)\bigr|\Bigr\},$$

and $t_{0}\in[-\iota,0]_{\mathbb{T}}$.

Theorem 4.1 Assume that (H1)-(H4) hold, then system (1.1) has a unique almost periodic solution which is globally exponentially stable.

Proof From Theorem 3.1, we see that system (1.1) has at least one almost periodic solution $\bar{x}(t)=(\bar{x}_1(t),\bar{x}_2(t),\ldots,\bar{x}_n(t))^{T}$. Suppose that $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^{T}$ is an arbitrary solution of (1.1). Set $y_i(t)=x_i(t)-\bar{x}_i(t)$, $i=1,2,\ldots,n$; then it follows from system (1.1) that
$$
\begin{aligned}
y_i^{\Delta}(t)&=x_i^{\Delta}(t)-\bar{x}_i^{\Delta}(t)\\
&=-c_i(t)y_i\bigl(t-\eta_i(t)\bigr)+\sum_{j=1}^{n}a_{ij}(t)\bigl[f_j\bigl(x_j(t-\tau_{ij}(t))\bigr)-f_j\bigl(\bar{x}_j(t-\tau_{ij}(t))\bigr)\bigr]+\sum_{j=1}^{n}b_{ij}(t)\bigl[g_j\bigl(x_j^{\Delta}(t-\sigma_{ij}(t))\bigr)-g_j\bigl(\bar{x}_j^{\Delta}(t-\sigma_{ij}(t))\bigr)\bigr]\\
&\quad+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)\bigl[h_j\bigl(x_j(s)\bigr)-h_j\bigl(\bar{x}_j(s)\bigr)\bigr]\,\Delta s+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)\bigl[k_j\bigl(x_j^{\Delta}(s)\bigr)-k_j\bigl(\bar{x}_j^{\Delta}(s)\bigr)\bigr]\,\Delta s\\
&=-c_i(t)y_i\bigl(t-\eta_i(t)\bigr)+\sum_{j=1}^{n}a_{ij}(t)F_j\bigl(y_j(t-\tau_{ij}(t))\bigr)+\sum_{j=1}^{n}b_{ij}(t)G_j\bigl(y_j^{\Delta}(t-\sigma_{ij}(t))\bigr)\\
&\quad+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)H_j\bigl(y_j(s)\bigr)\,\Delta s+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)K_j\bigl(y_j^{\Delta}(s)\bigr)\,\Delta s,
\end{aligned}\tag{4.1}
$$
where i = 1 , 2 , , n and for i , j = 1 , 2 , , n ,
$$
\begin{gathered}
F_j\bigl(y_j(t-\tau_{ij}(t))\bigr)=f_j\bigl(y_j(t-\tau_{ij}(t))+\bar{x}_j(t-\tau_{ij}(t))\bigr)-f_j\bigl(\bar{x}_j(t-\tau_{ij}(t))\bigr),\\
G_j\bigl(y_j^{\Delta}(t-\sigma_{ij}(t))\bigr)=g_j\bigl(y_j^{\Delta}(t-\sigma_{ij}(t))+\bar{x}_j^{\Delta}(t-\sigma_{ij}(t))\bigr)-g_j\bigl(\bar{x}_j^{\Delta}(t-\sigma_{ij}(t))\bigr),\\
H_j\bigl(y_j(s)\bigr)=h_j\bigl(x_j(s)\bigr)-h_j\bigl(\bar{x}_j(s)\bigr),\qquad K_j\bigl(y_j^{\Delta}(s)\bigr)=k_j\bigl(x_j^{\Delta}(s)\bigr)-k_j\bigl(\bar{x}_j^{\Delta}(s)\bigr).
\end{gathered}
$$
From (H2) we have that for i , j = 1 , 2 , , n ,
$$\bigl|F_j\bigl(y_j(t-\tau_{ij}(t))\bigr)\bigr|\le L_j\bigl|y_j\bigl(t-\tau_{ij}(t)\bigr)\bigr|,\qquad \bigl|G_j\bigl(y_j^{\Delta}(t-\sigma_{ij}(t))\bigr)\bigr|\le l_j\bigl|y_j^{\Delta}\bigl(t-\sigma_{ij}(t)\bigr)\bigr|$$
and
$$\bigl|H_j\bigl(y_j(s)\bigr)\bigr|\le L_j^{h}\bigl|y_j(s)\bigr|,\qquad \bigl|K_j\bigl(y_j^{\Delta}(s)\bigr)\bigr|\le l_j^{k}\bigl|y_j^{\Delta}(s)\bigr|.$$
The initial condition of (4.1) is
$$\psi_i(t)=\varphi_i(t)-\bar{\varphi}_i(t),\qquad \psi_i^{\Delta}(t)=\varphi_i^{\Delta}(t)-\bar{\varphi}_i^{\Delta}(t),\quad t\in[-\iota,0]_{\mathbb{T}},\ i=1,2,\ldots,n.
$$
Let Θ i and Λ i be defined by
$$\Theta_i(\omega)=c_i^{-}-\omega-\exp\Bigl(\omega\sup_{s\in\mathbb{T}}\mu(s)\Bigr)\Bigl(c_i^{+}\eta_i^{+}\exp(\omega\eta_i^{+})+\sum_{j=1}^{n}a_{ij}^{+}L_j\exp(\omega\tau_{ij}^{+})+\sum_{j=1}^{n}b_{ij}^{+}l_j\exp(\omega\sigma_{ij}^{+})+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}\exp(\omega\delta_{ij}^{+})+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\exp(\omega\zeta_{ij}^{+})\Bigr),\quad i=1,2,\ldots,n,$$
and
$$\Lambda_i(\omega)=c_i^{-}-\omega-\Bigl(c_i^{-}-\omega+c_i^{+}\exp\Bigl(\omega\sup_{s\in\mathbb{T}}\mu(s)\Bigr)\Bigr)\Bigl(c_i^{+}\eta_i^{+}\exp(\omega\eta_i^{+})+\sum_{j=1}^{n}a_{ij}^{+}L_j\exp(\omega\tau_{ij}^{+})+\sum_{j=1}^{n}b_{ij}^{+}l_j\exp(\omega\sigma_{ij}^{+})+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}\exp(\omega\delta_{ij}^{+})+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\exp(\omega\zeta_{ij}^{+})\Bigr),\quad i=1,2,\ldots,n.$$
By (H4), for $i=1,2,\ldots,n$, we get
$$\Theta_i(0)=c_i^{-}-\Bigl(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\Bigr)>0$$
and
$$\Lambda_i(0)=c_i^{-}-\bigl(c_i^{+}+c_i^{-}\bigr)\Bigl(c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\Bigr)>0.$$

Since $\Theta_i$, $\Lambda_i$ are continuous on $[0,+\infty)$ and $\Theta_i(\omega)\to-\infty$, $\Lambda_i(\omega)\to-\infty$ as $\omega\to+\infty$, there exist $\omega_i,\omega_i^{*}>0$ such that $\Theta_i(\omega_i)=\Lambda_i(\omega_i^{*})=0$ and $\Theta_i(\omega)>0$ for $\omega\in(0,\omega_i)$, $\Lambda_i(\omega)>0$ for $\omega\in(0,\omega_i^{*})$, $i=1,2,\ldots,n$.

By choosing $a=\min\{\omega_1,\omega_2,\ldots,\omega_n,\omega_1^{*},\omega_2^{*},\ldots,\omega_n^{*}\}$, we have $\Theta_i(a)\ge 0$, $\Lambda_i(a)\ge 0$, $i=1,2,\ldots,n$. So we can choose a positive constant $0<\lambda<\min\{a,\min_{1\le i\le n}\{c_i^{-}\}\}$ such that
$$\Theta_i(\lambda)>0,\qquad \Lambda_i(\lambda)>0,\quad i=1,2,\ldots,n,$$
which implies that
$$\frac{\exp(\lambda\sup_{s\in\mathbb{T}}\mu(s))}{c_i^{-}-\lambda}\Bigl(c_i^{+}\eta_i^{+}\exp(\lambda\eta_i^{+})+\sum_{j=1}^{n}a_{ij}^{+}L_j\exp(\lambda\tau_{ij}^{+})+\sum_{j=1}^{n}b_{ij}^{+}l_j\exp(\lambda\sigma_{ij}^{+})+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}\exp(\lambda\delta_{ij}^{+})+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\exp(\lambda\zeta_{ij}^{+})\Bigr)<1\tag{4.2}$$
and
$$\Bigl(1+\frac{c_i^{+}\exp(\lambda\sup_{s\in\mathbb{T}}\mu(s))}{c_i^{-}-\lambda}\Bigr)\Bigl(c_i^{+}\eta_i^{+}\exp(\lambda\eta_i^{+})+\sum_{j=1}^{n}a_{ij}^{+}L_j\exp(\lambda\tau_{ij}^{+})+\sum_{j=1}^{n}b_{ij}^{+}l_j\exp(\lambda\sigma_{ij}^{+})+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}\exp(\lambda\delta_{ij}^{+})+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\exp(\lambda\zeta_{ij}^{+})\Bigr)<1,\tag{4.3}$$
where i = 1 , 2 , , n . Let
$$M=\max_{1\le i\le n}\Bigl\{\frac{c_i^{-}}{c_i^{+}\eta_i^{+}+\sum_{j=1}^{n}a_{ij}^{+}L_j+\sum_{j=1}^{n}b_{ij}^{+}l_j+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}}\Bigr\};$$
by (H4) we have $M>1$. Thus
$$\frac{1}{M}<\frac{\exp(\lambda\sup_{s\in\mathbb{T}}\mu(s))}{c_i^{-}-\lambda}\Bigl(c_i^{+}\eta_i^{+}\exp(\lambda\eta_i^{+})+\sum_{j=1}^{n}a_{ij}^{+}L_j\exp(\lambda\tau_{ij}^{+})+\sum_{j=1}^{n}b_{ij}^{+}l_j\exp(\lambda\sigma_{ij}^{+})+\sum_{j=1}^{n}d_{ij}^{+}L_j^{h}\bar{\theta}_{ij}\exp(\lambda\delta_{ij}^{+})+\sum_{j=1}^{n}e_{ij}^{+}l_j^{k}\bar{\xi}_{ij}\exp(\lambda\zeta_{ij}^{+})\Bigr).$$
Rewrite (4.1) in the form
$$y_i^{\Delta}(t)+c_i(t)y_i(t)=c_i(t)\int_{t-\eta_i(t)}^{t}y_i^{\Delta}(s)\,\Delta s+\sum_{j=1}^{n}a_{ij}(t)F_j\bigl(y_j(t-\tau_{ij}(t))\bigr)+\sum_{j=1}^{n}b_{ij}(t)G_j\bigl(y_j^{\Delta}(t-\sigma_{ij}(t))\bigr)+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)H_j\bigl(y_j(s)\bigr)\,\Delta s+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)K_j\bigl(y_j^{\Delta}(s)\bigr)\,\Delta s,\quad i=1,2,\ldots,n.\tag{4.4}$$
Multiplying both sides of (4.4) by $e_{c_i}(t,\sigma(s))$ and integrating over $[t_0,t]_{\mathbb{T}}$, we get
$$y_i(t)=y_i(t_0)e_{\ominus c_i}(t,t_0)+\int_{t_0}^{t}e_{\ominus c_i}\bigl(t,\sigma(s)\bigr)\Bigl\{c_i(s)\int_{s-\eta_i(s)}^{s}y_i^{\Delta}(u)\,\Delta u+\sum_{j=1}^{n}a_{ij}(s)F_j\bigl(y_j(s-\tau_{ij}(s))\bigr)+\sum_{j=1}^{n}b_{ij}(s)G_j\bigl(y_j^{\Delta}(s-\sigma_{ij}(s))\bigr)+\sum_{j=1}^{n}d_{ij}(s)\int_{s-\delta_{ij}(s)}^{s}\theta_{ij}(u)H_j\bigl(y_j(u)\bigr)\,\Delta u+\sum_{j=1}^{n}e_{ij}(s)\int_{s-\zeta_{ij}(s)}^{s}\xi_{ij}(u)K_j\bigl(y_j^{\Delta}(u)\bigr)\,\Delta u\Bigr\}\,\Delta s,\quad i=1,2,\ldots,n.\tag{4.5}$$
It is easy to see that
$$\bigl\|y(t)\bigr\|_{1}=\bigl\|\psi(t)\bigr\|_{1}\le\|\psi\|_{\mathbb{X}}\le M e_{\ominus\lambda}(t,t_0)\|\psi\|_{\mathbb{X}},\quad t\in[-\iota,0]_{\mathbb{T}}.$$
We claim that
$$\bigl\|y(t)\bigr\|_{1}\le M e_{\ominus\lambda}(t,t_0)\|\psi\|_{\mathbb{X}},\quad \forall t\in(0,+\infty)_{\mathbb{T}}.\tag{4.6}$$
To prove (4.6), we first show that for any p > 1 , the following inequality holds:
$$\bigl\|y(t)\bigr\|_{1}<pM e_{\ominus\lambda}(t,t_0)\|\psi\|_{\mathbb{X}},\quad \forall t\in(0,+\infty)_{\mathbb{T}}.\tag{4.7}$$
If (4.7) is not true, then there must be some t 1 ( 0 , + ) T and some i 1 , i 2 { 1 , 2 , , n } such that
$$\bigl\|y(t_1)\bigr\|_{1}=\max\bigl\{\bigl\|y(t_1)\bigr\|_{0},\bigl\|y^{\Delta}(t_1)\bigr\|_{0}\bigr\}=\max\bigl\{\bigl|y_{i_1}(t_1)\bigr|,\bigl|y_{i_2}^{\Delta}(t_1)\bigr|\bigr\}\ge pM e_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}$$
and
$$\bigl\|y(t)\bigr\|_{1}<pM e_{\ominus\lambda}(t,t_0)\|\psi\|_{\mathbb{X}},\quad t\in[-\iota,t_1)_{\mathbb{T}}.$$
Therefore, there must exist a constant $c\ge 1$ such that
$$\bigl\|y(t_1)\bigr\|_{1}=\max\bigl\{\bigl\|y(t_1)\bigr\|_{0},\bigl\|y^{\Delta}(t_1)\bigr\|_{0}\bigr\}=\max\bigl\{\bigl|y_{i_1}(t_1)\bigr|,\bigl|y_{i_2}^{\Delta}(t_1)\bigr|\bigr\}=cpM e_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\tag{4.8}$$
and
$$\bigl\|y(t)\bigr\|_{1}\le cpM e_{\ominus\lambda}(t,t_0)\|\psi\|_{\mathbb{X}},\quad t\in[-\iota,t_1]_{\mathbb{T}}.\tag{4.9}$$
By (4.5), (4.8), (4.9) and (H1)-(H3), we obtain
$$
\begin{aligned}
\bigl|y_{i_1}(t_1)\bigr|&\le\|\psi\|_{\mathbb{X}}e_{\ominus c_{i_1}}(t_1,t_0)+cpM e_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\int_{t_0}^{t_1}e_{\ominus c_{i_1}\oplus\lambda}\bigl(t_1,\sigma(s)\bigr)\Bigl\{c_{i_1}^{+}\eta_{i_1}^{+}e_{\lambda}\bigl(\sigma(s),s-\eta_{i_1}(s)\bigr)+\sum_{j=1}^{n}a_{i_1j}^{+}L_j e_{\lambda}\bigl(\sigma(s),s-\tau_{i_1j}(s)\bigr)\\
&\quad+\sum_{j=1}^{n}b_{i_1j}^{+}l_j e_{\lambda}\bigl(\sigma(s),s-\sigma_{i_1j}(s)\bigr)+\sum_{j=1}^{n}d_{i_1j}^{+}L_j^{h}\bar{\theta}_{i_1j}e_{\lambda}\bigl(\sigma(s),s-\delta_{i_1j}(s)\bigr)+\sum_{j=1}^{n}e_{i_1j}^{+}l_j^{k}\bar{\xi}_{i_1j}e_{\lambda}\bigl(\sigma(s),s-\zeta_{i_1j}(s)\bigr)\Bigr\}\,\Delta s\\
&\le cpM e_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\Bigl\{\frac{1}{pM}e_{\ominus(c_{i_1}-\lambda)}(t_1,t_0)+\exp\Bigl(\lambda\sup_{s\in\mathbb{T}}\mu(s)\Bigr)\Bigl[c_{i_1}^{+}\eta_{i_1}^{+}\exp(\lambda\eta_{i_1}^{+})+\sum_{j=1}^{n}a_{i_1j}^{+}L_j\exp(\lambda\tau_{i_1j}^{+})+\sum_{j=1}^{n}b_{i_1j}^{+}l_j\exp(\lambda\sigma_{i_1j}^{+})\\
&\quad+\sum_{j=1}^{n}d_{i_1j}^{+}L_j^{h}\bar{\theta}_{i_1j}\exp(\lambda\delta_{i_1j}^{+})+\sum_{j=1}^{n}e_{i_1j}^{+}l_j^{k}\bar{\xi}_{i_1j}\exp(\lambda\zeta_{i_1j}^{+})\Bigr]\int_{t_0}^{t_1}e_{\ominus(c_{i_1}-\lambda)}\bigl(t_1,\sigma(s)\bigr)\,\Delta s\Bigr\}\\
&<cpM e_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\Bigl\{\Bigl[\frac{1}{M}-\frac{\exp(\lambda\sup_{s\in\mathbb{T}}\mu(s))}{c_{i_1}^{-}-\lambda}\Bigl(c_{i_1}^{+}\eta_{i_1}^{+}\exp(\lambda\eta_{i_1}^{+})+\cdots+\sum_{j=1}^{n}e_{i_1j}^{+}l_j^{k}\bar{\xi}_{i_1j}\exp(\lambda\zeta_{i_1j}^{+})\Bigr)\Bigr]e_{\ominus(c_{i_1}-\lambda)}(t_1,t_0)\\
&\quad+\frac{\exp(\lambda\sup_{s\in\mathbb{T}}\mu(s))}{c_{i_1}^{-}-\lambda}\Bigl(c_{i_1}^{+}\eta_{i_1}^{+}\exp(\lambda\eta_{i_1}^{+})+\cdots+\sum_{j=1}^{n}e_{i_1j}^{+}l_j^{k}\bar{\xi}_{i_1j}\exp(\lambda\zeta_{i_1j}^{+})\Bigr)\Bigr\}\\
&<cpM e_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}},
\end{aligned}\tag{4.10}
$$
where the last inequality uses (4.2) and the definition of $M$.
By Lemma 2.4 and (4.5), we have, for i = 1 , 2 , , n ,
$$
\begin{aligned}
y_i^{\Delta}(t)&=(\ominus c_i)(t)y_i(t_0)e_{\ominus c_i}(t,t_0)+\Bigl(c_i(t)\int_{t-\eta_i(t)}^{t}y_i^{\Delta}(s)\,\Delta s+\sum_{j=1}^{n}a_{ij}(t)F_j\bigl(y_j(t-\tau_{ij}(t))\bigr)+\sum_{j=1}^{n}b_{ij}(t)G_j\bigl(y_j^{\Delta}(t-\sigma_{ij}(t))\bigr)\\
&\quad+\sum_{j=1}^{n}d_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\theta_{ij}(s)H_j\bigl(y_j(s)\bigr)\,\Delta s+\sum_{j=1}^{n}e_{ij}(t)\int_{t-\zeta_{ij}(t)}^{t}\xi_{ij}(s)K_j\bigl(y_j^{\Delta}(s)\bigr)\,\Delta s\Bigr)\\
&\quad+\int_{t_0}^{t}(\ominus c_i)(t)e_{\ominus c_i}\bigl(t,\sigma(s)\bigr)\Bigl\{c_i(s)\int_{s-\eta_i(s)}^{s}y_i^{\Delta}(u)\,\Delta u+\sum_{j=1}^{n}a_{ij}(s)F_j\bigl(y_j(s-\tau_{ij}(s))\bigr)+\sum_{j=1}^{n}b_{ij}(s)G_j\bigl(y_j^{\Delta}(s-\sigma_{ij}(s))\bigr)\\
&\quad+\sum_{j=1}^{n}d_{ij}(s)\int_{s-\delta_{ij}(s)}^{s}\theta_{ij}(u)H_j\bigl(y_j(u)\bigr)\,\Delta u+\sum_{j=1}^{n}e_{ij}(s)\int_{s-\zeta_{ij}(s)}^{s}\xi_{ij}(u)K_j\bigl(y_j^{\Delta}(u)\bigr)\,\Delta u\Bigr\}\,\Delta s.
\end{aligned}\tag{4.11}
$$
Thus, it follows from (4.8), (4.9) and (4.11) that
$$
\begin{aligned}
\bigl|y_{i_2}^{\Delta}(t_1)\bigr|&\le c_{i_2}^{+}e_{\ominus c_{i_2}}(t_1,t_0)\|\psi\|_{\mathbb{X}}+cpM e_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\Bigl(c_{i_2}^{+}\eta_{i_2}^{+}\exp(\lambda\eta_{i_2}^{+})+\sum_{j=1}^{n}a_{i_2j}^{+}L_j\exp(\lambda\tau_{i_2j}^{+})+\sum_{j=1}^{n}b_{i_2j}^{+}l_j\exp(\lambda\sigma_{i_2j}^{+})\\
&\quad+\sum_{j=1}^{n}d_{i_2j}^{+}L_j^{h}\bar{\theta}_{i_2j}\exp(\lambda\delta_{i_2j}^{+})+\sum_{j=1}^{n}e_{i_2j}^{+}l_j^{k}\bar{\xi}_{i_2j}\exp(\lambda\zeta_{i_2j}^{+})\Bigr)\Bigl(1+c_{i_2}^{+}\exp\Bigl(\lambda\sup_{s\in\mathbb{T}}\mu(s)\Bigr)\int_{t_0}^{t_1}e_{\ominus(c_{i_2}-\lambda)}\bigl(t_1,\sigma(s)\bigr)\,\Delta s\Bigr)\\
&<cpM e_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}}\Bigl\{\Bigl[\frac{1}{M}-\frac{\exp(\lambda\sup_{s\in\mathbb{T}}\mu(s))}{c_{i_2}^{-}-\lambda}\Bigl(c_{i_2}^{+}\eta_{i_2}^{+}\exp(\lambda\eta_{i_2}^{+})+\cdots+\sum_{j=1}^{n}e_{i_2j}^{+}l_j^{k}\bar{\xi}_{i_2j}\exp(\lambda\zeta_{i_2j}^{+})\Bigr)\Bigr]c_{i_2}^{+}e_{\ominus(c_{i_2}-\lambda)}(t_1,t_0)\\
&\quad+\Bigl(1+\frac{c_{i_2}^{+}\exp(\lambda\sup_{s\in\mathbb{T}}\mu(s))}{c_{i_2}^{-}-\lambda}\Bigr)\Bigl(c_{i_2}^{+}\eta_{i_2}^{+}\exp(\lambda\eta_{i_2}^{+})+\cdots+\sum_{j=1}^{n}e_{i_2j}^{+}l_j^{k}\bar{\xi}_{i_2j}\exp(\lambda\zeta_{i_2j}^{+})\Bigr)\Bigr\}\\
&<cpM e_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}},
\end{aligned}\tag{4.12}
$$
where the last inequality uses (4.3) and the definition of $M$.
In view of (4.10) and (4.12), we get
$$\bigl\|y(t_1)\bigr\|_{1}<cpM e_{\ominus\lambda}(t_1,t_0)\|\psi\|_{\mathbb{X}},$$

which contradicts (4.8), and so (4.7) holds. Letting $p\to 1$, we obtain (4.6). Hence, the almost periodic solution of system (1.1) is globally exponentially stable. This completes the proof. □

Remark 4.1 When $\mathbb{T}=\mathbb{R}$ and $\eta_i(t)=d_{ij}(t)=e_{ij}(t)\equiv 0$, $i,j=1,2,\ldots,n$, Theorem 3.1 and Theorem 4.1 reduce to Theorem 2.3 and Theorem 3.1 in [37], respectively.

Remark 4.2 According to Theorem 3.1 and Theorem 4.1, the existence and exponential stability of almost periodic solutions for system (1.1) depend only on the time delays $\eta_i$ (the delays in the leakage term) and do not depend on the time delays $\tau_{ij}$ and $\sigma_{ij}$.

5 An example

In this section, we give an example to illustrate the feasibility and effectiveness of our results obtained in Sections 3 and 4.

Example 5.1 Let $n=3$. Consider the following neutral Hopfield neural network on a time scale $\mathbb{T}$:
$x_i^{\Delta}(t)=-c_i(t)x_i\bigl(t-\eta_i(t)\bigr)+\sum_{j=1}^{3}a_{ij}(t)f_j\bigl(x_j(t-\tau_{ij}(t))\bigr)+$