
Anti-periodic solution for impulsive high-order Hopfield neural networks with time-varying delays in the leakage terms

Abstract

This paper presents new results on anti-periodic solutions for impulsive high-order Hopfield neural networks with time-varying delays in the leakage terms. By employing a novel proof, some criteria are derived that guarantee the existence and exponential stability of the anti-periodic solution; these criteria are new and complement previously known results. Moreover, a numerical simulation is given to demonstrate the effectiveness of the results.

1 Introduction

In this paper, we discuss anti-periodic solutions for impulsive high-order Hopfield neural networks (IHHNNs) with time-varying delays in the leakage terms:

$$
\left\{
\begin{aligned}
x_i'(t) ={}& -c_i(t)x_i(t-\eta_i(t)) + \sum_{j=1}^{n} a_{ij}(t) g_j\bigl(x_j(t-\tau_{ij}(t))\bigr) \\
&+ \sum_{j=1}^{n}\sum_{l=1}^{n} b_{ijl}(t) g_j\bigl(x_j(t-\sigma_{ijl}(t))\bigr) g_l\bigl(x_l(t-v_{ijl}(t))\bigr) + I_i(t), \quad t>0,\ t\neq t_k, \\
\Delta x_i(t_k) ={}& d_{ik}\, x_i(t_k), \quad k=1,2,\ldots, \\
x_i(t) ={}& \varphi_i(t), \quad t\in[-\tau_i,0],
\end{aligned}
\right.
\tag{1.1}
$$

where $i\in N:=\{1,2,\ldots,n\}$ and $n$ is the number of units in the neural network; $x_i(t)$ corresponds to the state of the $i$th unit at time $t$; $c_i(t)>0$ represents the rate at which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and external inputs; $a_{ij}(t)$ and $b_{ijl}(t)$ are the first- and second-order connection weights of the neural network; $\eta_i(t)\ge 0$ denotes the leakage delay, with $t-\eta_i(t)>0$ for all $t>0$; $\tau_{ij}(t)\ge 0$, $\sigma_{ijl}(t)\ge 0$, $v_{ijl}(t)\ge 0$ correspond to the transmission delays; $I_i(t)$ denotes the external input at time $t$; and $g_j$ is the activation function of signal transmission. The functions $c_i$, $\eta_i$, $I_i$, $a_{ij}$, $b_{ijl}$, $g_j$, $\tau_{ij}$, $\sigma_{ijl}$, $v_{ijl}$ are continuous on $\mathbb{R}$, and $\tau_i=\max_{j,l\in N}\max_{t\in[0,\omega]}\{\eta_i(t),\tau_{ij}(t),\sigma_{ijl}(t),v_{ijl}(t)\}$ is a positive constant. Moreover, $\Delta x_i(t_k)=x_i(t_k^+)-x_i(t_k^-)$, where $x_i(t_k^{\pm})=\lim_{\Delta t\to 0^{\pm}} x_i(t_k+\Delta t)$, $i\in N$, $k=1,2,\ldots$; the impulsive moments $t_k>0$ satisfy $t_k<t_{k+1}$ and $\lim_{k\to+\infty} t_k=+\infty$; and $\varphi(t)=(\varphi_1(t),\varphi_2(t),\ldots,\varphi_n(t))^T$ is the initial condition, where each $\varphi_i(\cdot)$ is a real-valued continuous function on $[-\tau_i,0]$, $i\in N$.
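For readers who prefer code, the right-hand side of (1.1) can be transcribed directly. The sketch below is a hypothetical two-neuron instance: every parameter function (`c`, `eta`, `a`, `b`, the delays, `I`, and the activation `g`) is an illustrative choice, not data from the paper.

```python
import math

# Hypothetical 2-neuron instance of system (1.1); all parameter
# functions below are illustrative assumptions, not taken from the paper.
n = 2
c   = lambda i, t: 1.0 + 0.1 * i                         # c_i(t) > 0
eta = lambda i, t: 0.01                                  # leakage delay eta_i(t)
a   = lambda i, j, t: 0.05 * math.sin(t + i + j)         # first-order weights
b   = lambda i, j, l, t: 0.01 * math.cos(t + i + j + l)  # second-order weights
tau = lambda i, j, t: 0.5                                # transmission delays
sig = lambda i, j, l, t: 0.5
v   = lambda i, j, l, t: 0.5
I   = lambda i, t: math.sin(t)                           # external input
g   = lambda u: math.tanh(u)                             # activation g_j

def rhs(i, t, x):
    """Right-hand side of (1.1) for unit i between impulses; x is the state
    history, a function returning the state vector at any past time s."""
    leak = -c(i, t) * x(t - eta(i, t))[i]
    first = sum(a(i, j, t) * g(x(t - tau(i, j, t))[j]) for j in range(n))
    second = sum(b(i, j, l, t)
                 * g(x(t - sig(i, j, l, t))[j])
                 * g(x(t - v(i, j, l, t))[l])
                 for j in range(n) for l in range(n))
    return leak + first + second + I(i, t)

# Constant history x(s) = (0.3, -0.2) for illustration.
hist = lambda s: [0.3, -0.2]
print(rhs(0, 1.0, hist))
```

With the zero history, `g(0) = 0` kills every interaction term, so `rhs` reduces to the external input, which is a quick sanity check on the transcription.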

Impulsive differential equations have been proposed in many fields such as control theory, physics, chemistry, population dynamics, biotechnology, industrial robotics, and economics [1–3]. High-order neural networks have been the object of intensive analysis by numerous authors, since high-order neural networks have stronger approximation properties, faster convergence rates, greater storage capacity, and higher fault tolerance than lower-order neural networks [4–8]. Thus, many high-order Hopfield neural networks with impulses have been studied extensively, and a great deal of the literature focuses on the existence and stability of equilibrium points, periodic solutions, almost periodic solutions, and anti-periodic solutions [9–16]. However, to the best of our knowledge, few authors have considered the existence and stability of an anti-periodic solution of system (1.1) with a non-constant leakage delay $\eta_i(t)$. We mention that the arguments in [9–16] are not applicable to system (1.1).

The purpose of this paper is to discuss the existence and exponential stability of an anti-periodic solution for IHHNNs with time-varying delays in the leakage terms of system (1.1). The outline of the paper is as follows. In Section 2, some preliminaries and basic results are established. In Section 3, we give sufficient conditions for the existence and exponential stability of an anti-periodic solution for system (1.1). In Section 4, we give an example and numerical simulation to illustrate our results.

2 Preliminaries and basic results

Throughout this paper, we assume that the following conditions hold.

$(H_1)$ For $i,j,l\in N$ and $k\in\mathbb{Z}^+$, where $\mathbb{Z}^+$ denotes the set of all positive integers, there exists a constant $\omega>0$ such that

$$
\left\{
\begin{aligned}
&c_i(t+\omega)=c_i(t), \qquad \eta_i(t+\omega)=\eta_i(t), \\
&a_{ij}(t+\omega)g_j(u)=-a_{ij}(t)g_j(-u), \qquad \tau_{ij}(t+\omega)=\tau_{ij}(t), \\
&\sigma_{ijl}(t+\omega)=\sigma_{ijl}(t), \qquad v_{ijl}(t+\omega)=v_{ijl}(t), \\
&b_{ijl}(t+\omega)g_j(u)g_l(u)=-b_{ijl}(t)g_j(-u)g_l(-u), \\
&I_i(t+\omega)=-I_i(t), \qquad t,u\in\mathbb{R}.
\end{aligned}
\right.
\tag{2.1}
$$

$(H_2)$ For $i,j,l\in N$, there exist constants $c_i^+$, $\eta_i^+$, $I_i^+$, $a_{ij}^+$, $\tau_{ij}^+$, $b_{ijl}^+$, $\sigma_{ijl}^+$, $v_{ijl}^+$ such that

$$
\left\{
\begin{aligned}
&c_i^+=\max_{t\in[0,\omega]} c_i(t), \quad \eta_i^+=\max_{t\in[0,\omega]} \eta_i(t), \quad a_{ij}^+=\max_{t\in[0,\omega]} |a_{ij}(t)|, \quad \tau_{ij}^+=\max_{t\in[0,\omega]} \tau_{ij}(t), \\
&b_{ijl}^+=\max_{t\in[0,\omega]} |b_{ijl}(t)|, \quad \sigma_{ijl}^+=\max_{t\in[0,\omega]} \sigma_{ijl}(t), \quad v_{ijl}^+=\max_{t\in[0,\omega]} v_{ijl}(t), \quad I_i^+=\max_{t\in[0,\omega]} |I_i(t)|.
\end{aligned}
\right.
\tag{2.2}
$$

$(H_3)$ $-2\le d_{ik}\le 0$ for $i\in N$ and $k\in\mathbb{Z}^+$.

$(H_4)$ There exists $q\in\mathbb{Z}^+$ such that

$$d_{i(k+q)}=d_{ik}, \qquad t_{k+q}=t_k+\omega.$$

$(H_5)$ For each $j\in N$, the activation function $g_j:\mathbb{R}\to\mathbb{R}$ is continuous, and there exist nonnegative constants $L_j$ and $M$ such that, for all $u,v\in\mathbb{R}$,

$$g_j(0)=0, \qquad |g_j(u)-g_j(v)|\le L_j|u-v|, \qquad |g_j(u)|\le M.$$

$(H_6)$ For all $t>0$ and $i\in N$, there exist positive constants $\xi_i$ and $\eta$ such that

$$
-\eta > -\bigl[c_i(t)-c_i(t)\eta_i(t)c_i^+\bigr]\xi_i + \sum_{j=1}^{n}\bigl(|a_{ij}(t)|+c_i(t)\eta_i(t)a_{ij}^+\bigr)L_j\xi_j + \sum_{j=1}^{n}\sum_{l=1}^{n}\bigl(|b_{ijl}(t)|+c_i(t)\eta_i(t)b_{ijl}^+\bigr)(L_j\xi_j+L_l\xi_l)M.
\tag{2.3}
$$

For convenience, let $\mathbb{R}^n$ denote the space of real $n$-dimensional vectors. We use $x=(x_1,x_2,\ldots,x_n)^T\in\mathbb{R}^n$ to denote a column vector, in which the symbol $(\cdot)^T$ denotes the transpose of a vector. As usual in the theory of impulsive differential equations, at the points of discontinuity $t_k$ of the solution $t\mapsto(x_1(t),x_2(t),\ldots,x_n(t))^T$, we assume that $(x_1(t_k),x_2(t_k),\ldots,x_n(t_k))^T=(x_1(t_k-0),x_2(t_k-0),\ldots,x_n(t_k-0))^T$. It is clear that, in general, the derivative $x_i'(t_k)$ does not exist. On the other hand, according to system (1.1), the left limit $x_i'(t_k-0)$ does exist. In view of the above convention, we assume that $x_i'(t_k)\equiv x_i'(t_k-0)$.

Definition 2.1 A solution $x(t)$ of (1.1) is said to be $\omega$-anti-periodic if

$$
\left\{
\begin{aligned}
&x(t+\omega)=-x(t), \quad t\neq t_k, \\
&x\bigl((t_k+\omega)^+\bigr)=-x(t_k^+), \quad k\in\mathbb{Z}^+,
\end{aligned}
\right.
$$

where the smallest positive number $\omega$ is called the anti-period of the function $x(t)$.
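Definition 2.1 can be checked numerically on a sample trajectory. In the sketch below, the test grid `ts` and the tolerance are arbitrary choices; $x(t)=\sin(\pi t)$ is the standard example of a $1$-anti-periodic function, since $\sin(\pi(t+1))=-\sin(\pi t)$.

```python
import math

def is_anti_periodic(x, omega, ts, tol=1e-9):
    """Numerically check x(t + omega) == -x(t) on a grid of test points ts."""
    return all(abs(x(t + omega) + x(t)) < tol for t in ts)

ts = [0.1 * k for k in range(50)]
# sin(pi t) is 1-anti-periodic; the shifted function sin(pi t) + 1 is not.
assert is_anti_periodic(lambda t: math.sin(math.pi * t), 1.0, ts)
assert not is_anti_periodic(lambda t: math.sin(math.pi * t) + 1.0, 1.0, ts)
```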

In what follows, we shall prove the lemmas which will be used to prove our main results in Section 3.

Lemma 2.1 Let $(H_1)$-$(H_6)$ hold. Suppose that $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^T$ is a solution of system (1.1) with initial conditions

$$
x_i(s)=\varphi_i(s), \qquad |\varphi_i(s)|<\xi_i\frac{\gamma}{\eta}, \quad s\in[-\tau_i,0],
\tag{2.4}
$$

where $\gamma=1+\max_{i\in N}\{[c_i^+\eta_i^+ +1]I_i^+\}$. Then

$$
|x_i(t)|<\xi_i\frac{\gamma}{\eta} \quad \text{for all } t>0,\ i\in N.
\tag{2.5}
$$

Proof Assume that (2.5) does not hold. From $(H_3)$, we have

$$\bigl|x_i(t_k^+)\bigr| = |1+d_{ik}|\,\bigl|x_i(t_k)\bigr| \le \bigl|x_i(t_k)\bigr|.$$

So, if $|x_i(t_k^+)|>\xi_i\frac{\gamma}{\eta}$, then $|x_i(t_k)|>\xi_i\frac{\gamma}{\eta}$. Thus, we may assume that there exist $i\in N$ and $t^*\in(t_k,t_{k+1})$ such that

$$
\bigl|x_i(t^*)\bigr|=\xi_i\frac{\gamma}{\eta}, \qquad \bigl|x_j(t)\bigr|<\xi_j\frac{\gamma}{\eta} \quad \text{for all } t\in[-\tau_j,t^*),\ j\in N.
\tag{2.6}
$$

In view of (1.1), for $i\in N$, we obtain

$$
\begin{aligned}
x_i'(t) ={}& -c_i(t)x_i(t-\eta_i(t)) + \sum_{j=1}^n a_{ij}(t)g_j\bigl(x_j(t-\tau_{ij}(t))\bigr) \\
&+ \sum_{j=1}^n\sum_{l=1}^n b_{ijl}(t)g_j\bigl(x_j(t-\sigma_{ijl}(t))\bigr)g_l\bigl(x_l(t-v_{ijl}(t))\bigr) + I_i(t) \\
={}& -c_i(t)x_i(t) + c_i(t)\bigl[x_i(t)-x_i(t-\eta_i(t))\bigr] + \sum_{j=1}^n a_{ij}(t)g_j\bigl(x_j(t-\tau_{ij}(t))\bigr) \\
&+ \sum_{j=1}^n\sum_{l=1}^n b_{ijl}(t)g_j\bigl(x_j(t-\sigma_{ijl}(t))\bigr)g_l\bigl(x_l(t-v_{ijl}(t))\bigr) + I_i(t) \\
={}& -c_i(t)x_i(t) + c_i(t)\int_{t-\eta_i(t)}^{t} x_i'(s)\,ds + \sum_{j=1}^n a_{ij}(t)g_j\bigl(x_j(t-\tau_{ij}(t))\bigr) \\
&+ \sum_{j=1}^n\sum_{l=1}^n b_{ijl}(t)g_j\bigl(x_j(t-\sigma_{ijl}(t))\bigr)g_l\bigl(x_l(t-v_{ijl}(t))\bigr) + I_i(t), \quad t>0,\ t\neq t_k.
\end{aligned}
\tag{2.7}
$$

Calculating the upper left derivative of $|x_i(t)|$ at $t^*$, together with (2.6), $(H_5)$, $(H_6)$ and

$$\gamma > \bigl[c_i^+\eta_i^+ + 1\bigr]I_i^+,$$

we obtain

$$
\begin{aligned}
0 \le D^-\bigl|x_i(t^*)\bigr| \le{}& -c_i(t^*)\bigl|x_i(t^*)\bigr| + c_i(t^*)\int_{t^*-\eta_i(t^*)}^{t^*}\bigl|x_i'(s)\bigr|\,ds + \sum_{j=1}^n \bigl|a_{ij}(t^*)\bigr|\,\bigl|g_j\bigl(x_j(t^*-\tau_{ij}(t^*))\bigr)-g_j(0)\bigr| \\
&+ \sum_{j=1}^n\sum_{l=1}^n \bigl|b_{ijl}(t^*)\bigr|\,\bigl|g_j\bigl(x_j(t^*-\sigma_{ijl}(t^*))\bigr)-g_j(0)\bigr|\,\bigl|g_l\bigl(x_l(t^*-v_{ijl}(t^*))\bigr)\bigr| + \bigl|I_i(t^*)\bigr| \\
<{}& \Bigl\{-\bigl[c_i(t^*)-c_i(t^*)\eta_i(t^*)c_i^+\bigr]\xi_i + \sum_{j=1}^n\bigl(\bigl|a_{ij}(t^*)\bigr|+c_i(t^*)\eta_i(t^*)a_{ij}^+\bigr)L_j\xi_j \\
&\quad + \sum_{j=1}^n\sum_{l=1}^n\bigl(\bigl|b_{ijl}(t^*)\bigr|+c_i(t^*)\eta_i(t^*)b_{ijl}^+\bigr)\bigl(L_j\xi_j+L_l\xi_l\bigr)M\Bigr\}\frac{\gamma}{\eta} + \bigl[c_i^+\eta_i^+ + 1\bigr]I_i^+ \\
<{}& -\eta\,\frac{\gamma}{\eta} + \bigl[c_i^+\eta_i^+ + 1\bigr]I_i^+ < 0,
\end{aligned}
$$

where the integrand $|x_i'(s)|$ is estimated from (2.7) by means of $g_j(0)=0$, $(H_5)$ and (2.6). This is a contradiction, which shows that (2.5) holds. The proof is now completed. □

Remark 2.1 Under conditions $(H_1)$-$(H_6)$, the solution of system (1.1) always exists (see [1, 2]). In view of the boundedness of this solution, it follows from the theory of impulsive differential equations in [1] that the solution of system (1.1) can be defined on $[0,+\infty)$.

Lemma 2.2 Suppose that $(H_1)$-$(H_6)$ hold. Let $x^*(t)=(x_1^*(t),x_2^*(t),\ldots,x_n^*(t))^T$ be the solution of system (1.1) with initial value $\varphi^*(t)=(\varphi_1^*(t),\varphi_2^*(t),\ldots,\varphi_n^*(t))^T$, and let $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^T$ be the solution of system (1.1) with initial value $\varphi(t)=(\varphi_1(t),\varphi_2(t),\ldots,\varphi_n(t))^T$. Then there exists a positive constant $\lambda$ such that

$$x_i(t)-x_i^*(t)=O\bigl(e^{-\lambda t}\bigr), \quad i\in N.$$

Proof Let $y(t)=x(t)-x^*(t)$. Then, for $i\in N$, it follows that

$$
\left\{
\begin{aligned}
y_i'(t) ={}& -c_i(t)y_i(t-\eta_i(t)) + \sum_{j=1}^n a_{ij}(t)\bigl[g_j\bigl(x_j(t-\tau_{ij}(t))\bigr)-g_j\bigl(x_j^*(t-\tau_{ij}(t))\bigr)\bigr] \\
&+ \sum_{j=1}^n\sum_{l=1}^n b_{ijl}(t)\bigl[g_j\bigl(x_j(t-\sigma_{ijl}(t))\bigr)g_l\bigl(x_l(t-v_{ijl}(t))\bigr) \\
&\qquad\qquad - g_j\bigl(x_j^*(t-\sigma_{ijl}(t))\bigr)g_l\bigl(x_l^*(t-v_{ijl}(t))\bigr)\bigr], \quad t>0,\ t\neq t_k, \\
y_i(t_k^+) ={}& (1+d_{ik})\,y_i(t_k), \quad k\in\mathbb{Z}^+.
\end{aligned}
\right.
\tag{2.8}
$$

Define continuous functions $\Gamma_i(r)$ by setting

$$
\begin{aligned}
\Gamma_i(r) ={}& -\Bigl[c_i(t)e^{r\eta_i(t)} - r - c_i(t)e^{r\eta_i(t)}\eta_i(t)\bigl(r+c_i^+e^{r\eta_i^+}\bigr)\Bigr]\xi_i \\
&+ \sum_{j=1}^n\Bigl(|a_{ij}(t)|e^{r\tau_{ij}(t)} + a_{ij}^+ c_i(t)e^{r\tau_{ij}^+}\eta_i(t)\Bigr)L_j\xi_j \\
&+ \sum_{j=1}^n\sum_{l=1}^n b_{ijl}^+ c_i(t)e^{r\eta_i(t)}\eta_i(t)\Bigl(e^{rv_{ijl}^+}L_l\xi_l + e^{r\sigma_{ijl}^+}L_j\xi_j\Bigr)M \\
&+ \sum_{j=1}^n\sum_{l=1}^n |b_{ijl}(t)|\Bigl(e^{rv_{ijl}(t)}L_l\xi_l + e^{r\sigma_{ijl}(t)}L_j\xi_j\Bigr)M, \quad r\ge 0,\ t\ge 0,\ i\in N.
\end{aligned}
$$

Then

$$
\begin{aligned}
\Gamma_i(0) ={}& -\bigl[c_i(t)-c_i(t)\eta_i(t)c_i^+\bigr]\xi_i + \sum_{j=1}^n\bigl(|a_{ij}(t)|+c_i(t)\eta_i(t)a_{ij}^+\bigr)L_j\xi_j \\
&+ \sum_{j=1}^n\sum_{l=1}^n\bigl(|b_{ijl}(t)|+c_i(t)\eta_i(t)b_{ijl}^+\bigr)(L_l\xi_l+L_j\xi_j)M < 0, \quad t\ge 0,\ i\in N,
\end{aligned}
$$

which, for $i\in N$, together with the continuity of $\Gamma_i(r)$, implies that we can choose a sufficiently small $\lambda$ with $c_i(t)>\lambda>0$ and a constant $\bar\eta>0$ such that

$$
\begin{aligned}
-\bar\eta > \Gamma_i(\lambda) ={}& -\Bigl[c_i(t)e^{\lambda\eta_i(t)} - \lambda - c_i(t)e^{\lambda\eta_i(t)}\eta_i(t)\bigl(\lambda+c_i^+e^{\lambda\eta_i^+}\bigr)\Bigr]\xi_i \\
&+ \sum_{j=1}^n\Bigl(|a_{ij}(t)|e^{\lambda\tau_{ij}(t)} + a_{ij}^+ c_i(t)e^{\lambda\tau_{ij}^+}\eta_i(t)\Bigr)L_j\xi_j \\
&+ \sum_{j=1}^n\sum_{l=1}^n b_{ijl}^+ c_i(t)e^{\lambda\eta_i(t)}\eta_i(t)\Bigl(e^{\lambda v_{ijl}^+}L_l\xi_l + e^{\lambda\sigma_{ijl}^+}L_j\xi_j\Bigr)M \\
&+ \sum_{j=1}^n\sum_{l=1}^n |b_{ijl}(t)|\Bigl(e^{\lambda v_{ijl}(t)}L_l\xi_l + e^{\lambda\sigma_{ijl}(t)}L_j\xi_j\Bigr)M, \quad t\ge 0.
\end{aligned}
\tag{2.9}
$$
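The existence of such a $\lambda$ can be illustrated numerically. The sketch below uses a simplified scalar stand-in for $\Gamma_i$ (one neuron, constant coefficients `c`, `eta_`, `a`, `L`, `xi`, an assumed delay $\tau=0.5$, and no second-order terms), not the paper's full expression: since $\Gamma(0)<0$ and $\Gamma$ is continuous, a grid scan finds positive values of $r$ at which $\Gamma(r)$ is still negative.

```python
import math

# Simplified scalar stand-in for Gamma_i(r): illustrative constants only.
c, eta_, a, L, xi = 1.5, 0.01, 0.2, 1.0, 1.0

def Gamma(r):
    # leakage part: -[c e^{r eta} - r - c e^{r eta} eta (r + c e^{r eta})] xi
    leak = -(c * math.exp(r * eta_) - r
             - c * math.exp(r * eta_) * eta_ * (r + c * math.exp(r * eta_))) * xi
    # first-order part with an assumed transmission delay tau = 0.5
    first = abs(a) * math.exp(r * 0.5) * L * xi
    return leak + first

assert Gamma(0.0) < 0  # this negativity is exactly the role of (H6)

# Scan a grid for the largest r at which Gamma is still negative.
lam = max(r for r in [0.01 * k for k in range(1, 200)] if Gamma(r) < 0)
print(lam)
```

Any $\lambda$ in $(0,\text{lam}]$ then serves as an admissible exponential decay rate for this toy instance.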

Let

$$Y_i(t)=y_i(t)e^{\lambda t}, \quad i\in N.$$

Then, for $i\in N$, write for brevity

$$
F_i(t) = \sum_{j=1}^n a_{ij}(t)\bigl[g_j\bigl(x_j(t-\tau_{ij}(t))\bigr)-g_j\bigl(x_j^*(t-\tau_{ij}(t))\bigr)\bigr] + \sum_{j=1}^n\sum_{l=1}^n b_{ijl}(t)\bigl[g_j\bigl(x_j(t-\sigma_{ijl}(t))\bigr)g_l\bigl(x_l(t-v_{ijl}(t))\bigr) - g_j\bigl(x_j^*(t-\sigma_{ijl}(t))\bigr)g_l\bigl(x_l^*(t-v_{ijl}(t))\bigr)\bigr],
$$

so that (2.8) reads $y_i'(t)=-c_i(t)y_i(t-\eta_i(t))+F_i(t)$ for $t\neq t_k$. Then

$$
\begin{aligned}
Y_i'(t) &= \lambda Y_i(t) - c_i(t)e^{\lambda t}y_i(t-\eta_i(t)) + e^{\lambda t}F_i(t) \\
&= \lambda Y_i(t) - c_i(t)e^{\lambda\eta_i(t)}Y_i(t) + c_i(t)e^{\lambda\eta_i(t)}\bigl[Y_i(t)-Y_i(t-\eta_i(t))\bigr] + e^{\lambda t}F_i(t) \\
&= \lambda Y_i(t) - c_i(t)e^{\lambda\eta_i(t)}Y_i(t) + c_i(t)e^{\lambda\eta_i(t)}\int_{t-\eta_i(t)}^{t} Y_i'(s)\,ds + e^{\lambda t}F_i(t) \\
&= \lambda Y_i(t) - c_i(t)e^{\lambda\eta_i(t)}Y_i(t) \\
&\quad + c_i(t)e^{\lambda\eta_i(t)}\int_{t-\eta_i(t)}^{t}\bigl[\lambda Y_i(s) - c_i(s)e^{\lambda s}y_i(s-\eta_i(s)) + e^{\lambda s}F_i(s)\bigr]\,ds + e^{\lambda t}F_i(t), \quad t>0,\ t\neq t_k,
\end{aligned}
\tag{2.10}
$$

and

$$
\bigl|Y_i(t_k^+)\bigr| = \bigl|(1+d_{ik})Y_i(t_k)\bigr|.
\tag{2.11}
$$

We define a positive constant $\bar M$ as follows:

$$\bar M = \max_{i\in N}\Bigl\{\sup_{s\in[-\tau_i,0]}\bigl|Y_i(s)\bigr|\Bigr\}.$$

Let $K$ be a positive number such that

$$
\bigl|Y_i(t)\bigr| \le \bar M < K\xi_i \quad \text{for all } t\in[-\tau_i,0],\ i\in N.
\tag{2.12}
$$

We claim that

$$
\bigl|Y_i(t)\bigr| < K\xi_i \quad \text{for all } t>0,\ i\in N.
\tag{2.13}
$$

Obviously, (2.13) holds for $t=0$. We first prove that (2.13) is true for $0<t\le t_1$. Otherwise, there exist $i\in N$ and $\rho\in(0,t_1]$ such that one of the following two cases occurs:

(1) $Y_i(\rho)=K\xi_i$, $\;|Y_j(t)|<K\xi_j$ for all $t\in[0,\rho)$, $j\in N$; (2.14)

(2) $Y_i(\rho)=-K\xi_i$, $\;|Y_j(t)|<K\xi_j$ for all $t\in[0,\rho)$, $j\in N$. (2.15)

Now, we consider two cases.

Case (i). Suppose (2.14) holds. Then, from (2.9), (2.10) and $(H_1)$-$(H_6)$, we have

$$
\begin{aligned}
0 \le Y_i'(\rho) \le{}& \lambda Y_i(\rho) - c_i(\rho)e^{\lambda\eta_i(\rho)}Y_i(\rho) \\
&+ c_i(\rho)e^{\lambda\eta_i(\rho)}\int_{\rho-\eta_i(\rho)}^{\rho}\Bigl[\lambda\bigl|Y_i(s)\bigr| + c_i^+e^{\lambda\eta_i(s)}\bigl|Y_i(s-\eta_i(s))\bigr| + \sum_{j=1}^n a_{ij}^+L_je^{\lambda\tau_{ij}(s)}\bigl|Y_j(s-\tau_{ij}(s))\bigr| \\
&\qquad + \sum_{j=1}^n\sum_{l=1}^n b_{ijl}^+\Bigl(ML_le^{\lambda v_{ijl}(s)}\bigl|Y_l(s-v_{ijl}(s))\bigr| + ML_je^{\lambda\sigma_{ijl}(s)}\bigl|Y_j(s-\sigma_{ijl}(s))\bigr|\Bigr)\Bigr]\,ds \\
&+ \sum_{j=1}^n \bigl|a_{ij}(\rho)\bigr|L_je^{\lambda\tau_{ij}(\rho)}\bigl|Y_j(\rho-\tau_{ij}(\rho))\bigr| \\
&+ \sum_{j=1}^n\sum_{l=1}^n \bigl|b_{ijl}(\rho)\bigr|\Bigl(ML_le^{\lambda v_{ijl}(\rho)}\bigl|Y_l(\rho-v_{ijl}(\rho))\bigr| + ML_je^{\lambda\sigma_{ijl}(\rho)}\bigl|Y_j(\rho-\sigma_{ijl}(\rho))\bigr|\Bigr) \\
\le{}& \Gamma_i(\lambda)\big|_{t=\rho}\,K < -\bar\eta\,K < 0,
\end{aligned}
$$

where the second-order differences are estimated via $|g_j(u)g_l(v)-g_j(u^*)g_l(v^*)|\le |g_j(u)|\,|g_l(v)-g_l(v^*)| + |g_l(v^*)|\,|g_j(u)-g_j(u^*)|\le ML_l|v-v^*|+ML_j|u-u^*|$, and the last two inequalities use $|Y_j(\cdot)|<K\xi_j$ on $[-\tau_j,\rho]$, (2.14) and (2.9). This is a contradiction.
Case (ii). Suppose (2.15) holds. From (2.9), (2.10) and $(H_1)$-$(H_6)$, a similar argument yields the same contradiction. Therefore, (2.13) holds for $t\in[0,t_1]$. From (2.11) and (2.13), we know that

$$\bigl|Y_i(t_1)\bigr| = \bigl|y_i(t_1)\bigr|e^{\lambda t_1} < K\xi_i, \quad i\in N,$$

and

$$\bigl|Y_i(t_1^+)\bigr| = |1+d_{i1}|\,\bigl|Y_i(t_1)\bigr| \le \bigl|Y_i(t_1)\bigr| < K\xi_i, \quad i\in N.$$

Thus, for $t\in[t_1,t_2]$, we may repeat the above procedure and obtain

$$\bigl|Y_i(t)\bigr| = \bigl|y_i(t)\bigr|e^{\lambda t} < K\xi_i \quad \text{for all } t\in[t_1,t_2],\ i\in N.$$

Proceeding inductively over the impulsive intervals, we have

$$\bigl|Y_i(t)\bigr| = \bigl|y_i(t)\bigr|e^{\lambda t} < K\xi_i \quad \text{for all } t>0,\ i\in N.$$

That is,

$$\bigl|x_i(t)-x_i^*(t)\bigr| \le K\xi_i e^{-\lambda t}, \quad t>0,\ i\in N.$$

This completes the proof. □

Remark 2.2 If $x^*(t)=(x_1^*(t),x_2^*(t),\ldots,x_n^*(t))^T$ is an $\omega$-anti-periodic solution of system (1.1), it follows from Lemma 2.2 that $x^*(t)$ is globally exponentially stable.

3 Main results

In this section, we study the existence and exponential stability for an anti-periodic solution of system (1.1).

Theorem 3.1 Suppose that all conditions in Lemma 2.2 are satisfied. Then system (1.1) has exactly one $\omega$-anti-periodic solution $x^*(t)$. Moreover, $x^*(t)$ is globally exponentially stable.

Proof Let $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^T$ be a solution of system (1.1). By Remark 2.1, the solution $x(t)$ is defined for all $t\in[0,+\infty)$. By hypothesis $(H_1)$, for any natural number $h$ and $i\in N$, we have

$$
\begin{aligned}
\bigl((-1)^{h+1}&x_i(t+(h+1)\omega)\bigr)' = (-1)^{h+1}x_i'(t+(h+1)\omega) \\
={}& (-1)^{h+1}\Bigl\{-c_i(t+(h+1)\omega)\,x_i\bigl(t+(h+1)\omega-\eta_i(t+(h+1)\omega)\bigr) \\
&+ \sum_{j=1}^n a_{ij}(t+(h+1)\omega)\,g_j\bigl(x_j(t+(h+1)\omega-\tau_{ij}(t+(h+1)\omega))\bigr) \\
&+ \sum_{j=1}^n\sum_{l=1}^n b_{ijl}(t+(h+1)\omega)\,g_j\bigl(x_j(t+(h+1)\omega-\sigma_{ijl}(t+(h+1)\omega))\bigr) \\
&\qquad\times g_l\bigl(x_l(t+(h+1)\omega-v_{ijl}(t+(h+1)\omega))\bigr) + I_i(t+(h+1)\omega)\Bigr\} \\
={}& (-1)^{h+1}\Bigl\{-c_i(t)\,x_i\bigl(t+(h+1)\omega-\eta_i(t)\bigr) \\
&+ \sum_{j=1}^n a_{ij}(t)(-1)^{h+1}g_j\bigl((-1)^{h+1}x_j(t+(h+1)\omega-\tau_{ij}(t))\bigr) \\
&+ \sum_{j=1}^n\sum_{l=1}^n b_{ijl}(t)(-1)^{h+1}g_j\bigl((-1)^{h+1}x_j(t+(h+1)\omega-\sigma_{ijl}(t))\bigr) \\
&\qquad\times g_l\bigl((-1)^{h+1}x_l(t+(h+1)\omega-v_{ijl}(t))\bigr) + (-1)^{h+1}I_i(t)\Bigr\} \\
={}& -c_i(t)(-1)^{h+1}x_i\bigl(t+(h+1)\omega-\eta_i(t)\bigr) + \sum_{j=1}^n a_{ij}(t)g_j\bigl((-1)^{h+1}x_j(t+(h+1)\omega-\tau_{ij}(t))\bigr) \\
&+ \sum_{j=1}^n\sum_{l=1}^n b_{ijl}(t)g_j\bigl((-1)^{h+1}x_j(t+(h+1)\omega-\sigma_{ijl}(t))\bigr)g_l\bigl((-1)^{h+1}x_l(t+(h+1)\omega-v_{ijl}(t))\bigr) \\
&+ I_i(t), \quad t\neq t_k.
\end{aligned}
\tag{3.1}
$$

Further, by hypothesis $(H_4)$, we obtain

$$
\begin{aligned}
(-1)^{h+1}x_i\bigl((t_k+(h+1)\omega)^+\bigr) &= (-1)^{h+1}x_i\bigl(t_{k+(h+1)q}^+\bigr) = (-1)^{h+1}\bigl(1+d_{i(k+(h+1)q)}\bigr)x_i\bigl(t_{k+(h+1)q}\bigr) \\
&= (1+d_{ik})(-1)^{h+1}x_i\bigl(t_k+(h+1)\omega\bigr), \quad k=1,2,\ldots.
\end{aligned}
\tag{3.2}
$$

Thus, for any natural number $h$, $(-1)^{h+1}x(t+(h+1)\omega)$ is a solution of system (1.1) for all $t+(h+1)\omega\ge 0$. In particular, $-x(t+\omega)$ is also a solution of (1.1) with initial values

$$-x_i(s+\omega), \quad s\in[-\tau_i,0],\ i\in N.$$

Then, by the proof of Lemma 2.2, there exists a constant $K>0$ such that for any natural number $h$ and $i\in N$, we have

$$
\begin{aligned}
\bigl|(-1)^{h+1}x_i(t+(h+1)\omega) - (-1)^h x_i(t+h\omega)\bigr| &= \bigl|x_i(t+h\omega) - \bigl(-x_i(t+h\omega+\omega)\bigr)\bigr| \\
&\le K\xi_i e^{-\lambda(t+h\omega)} = K\xi_i e^{-\lambda t}\bigl(e^{-\lambda\omega}\bigr)^h, \quad t+h\omega\ge 0,\ t\neq t_k,
\end{aligned}
\tag{3.3}
$$

and

$$
\begin{aligned}
\bigl|(-1)^{h+1}x_i\bigl((t_k+(h+1)\omega)^+\bigr) - (-1)^h x_i\bigl((t_k+h\omega)^+\bigr)\bigr| &= |1+d_{ik}|\,\bigl|x_i(t_k+h\omega) - \bigl(-x_i(t_k+h\omega+\omega)\bigr)\bigr| \\
&\le K\xi_i e^{-\lambda(t_k+h\omega)} = K\xi_i e^{-\lambda t_k}\bigl(e^{-\lambda\omega}\bigr)^h, \quad k\in\mathbb{Z}^+.
\end{aligned}
\tag{3.4}
$$

Moreover, for any natural number $m$ and $i\in N$, we can obtain

$$
(-1)^{m+1}x_i\bigl(t+(m+1)\omega\bigr) = x_i(t) + \sum_{h=0}^{m}\bigl[(-1)^{h+1}x_i(t+(h+1)\omega) - (-1)^h x_i(t+h\omega)\bigr], \quad t+h\omega\ge 0,\ t\neq t_k,
\tag{3.5}
$$

and

$$
(-1)^{m+1}x_i\bigl((t_k+(m+1)\omega)^+\bigr) = x_i(t_k^+) + \sum_{h=0}^{m}\bigl[(-1)^{h+1}x_i\bigl((t_k+(h+1)\omega)^+\bigr) - (-1)^h x_i\bigl((t_k+h\omega)^+\bigr)\bigr], \quad k\in\mathbb{Z}^+.
\tag{3.6}
$$

Combining (3.3)-(3.4) with (3.5)-(3.6), we know that $(-1)^m x(t+m\omega)$ converges uniformly to a piecewise continuous function $x^*(t)=(x_1^*(t),x_2^*(t),\ldots,x_n^*(t))^T$ on any compact subset of $\mathbb{R}$.

Now we are in a position to prove that $x^*(t)$ is an $\omega$-anti-periodic solution of system (1.1). It is easily seen that $x^*(t)$ is $\omega$-anti-periodic, since

$$x_i^*(t+\omega) = \lim_{m\to+\infty}(-1)^m x_i(t+\omega+m\omega) = -\lim_{m\to+\infty}(-1)^{m+1}x_i\bigl(t+(m+1)\omega\bigr) = -x_i^*(t), \quad t\neq t_k,$$

and

$$x_i^*\bigl((t_k+\omega)^+\bigr) = -\lim_{m\to+\infty}(-1)^{m+1}x_i\bigl((t_k+(m+1)\omega)^+\bigr) = -x_i^*(t_k^+), \quad k\in\mathbb{Z}^+,$$

where $i\in N$. Noting that the right-hand side of (1.1) is piecewise continuous, together with (3.1) and (3.2), we know that $(-1)^{m+1}\{x_i'(t+(m+1)\omega)\}$ converges uniformly to a piecewise continuous function on any compact subset of $\mathbb{R}\setminus\{t_1,t_2,\ldots\}$. Therefore, letting $m\to+\infty$ on both sides of (3.1) and (3.2), we get

$$
\left\{
\begin{aligned}
x_i^{*\prime}(t) ={}& -c_i(t)x_i^*(t-\eta_i(t)) + \sum_{j=1}^n a_{ij}(t)g_j\bigl(x_j^*(t-\tau_{ij}(t))\bigr) \\
&+ \sum_{j=1}^n\sum_{l=1}^n b_{ijl}(t)g_j\bigl(x_j^*(t-\sigma_{ijl}(t))\bigr)g_l\bigl(x_l^*(t-v_{ijl}(t))\bigr) + I_i(t), \quad t>0,\ t\neq t_k, \\
x_i^*(t_k^+) ={}& (1+d_{ik})x_i^*(t_k), \quad k\in\mathbb{Z}^+,
\end{aligned}
\right.
\qquad i\in N.
$$

Thus, $x^*(t)=(x_1^*(t),x_2^*(t),\ldots,x_n^*(t))^T$ is an $\omega$-anti-periodic solution of system (1.1).

Finally, by Lemma 2.2, we can prove that x (t) is globally exponentially stable. This completes the proof. □

4 Example

In this section, we give an example to demonstrate the results obtained in previous sections.

Example 4.1 Consider the following IHHNN consisting of two neurons with time-varying delays in the leakage terms:

$$
\left\{
\begin{aligned}
x_1'(t) ={}& -1.5x_1\Bigl(t-\tfrac{|\sin\pi t|}{1{,}000}\Bigr) + \tfrac{|\sin\pi t|}{32}g_1\bigl(x_1(t-|\sin\pi t|)\bigr) + \tfrac{|\cos\pi t|}{32}g_2\bigl(x_2(t-|\cos\pi t|)\bigr) \\
&+ \tfrac{\cos\pi t}{32}\bigl[g_1(x_1(t-2|\cos\pi t|))g_1(x_1(t-2|\sin\pi t|)) + g_1(x_1(t-2|\sin\pi t|))g_2(x_2(t-2|\cos\pi t|)) \\
&\qquad + g_2(x_2(t-2|\sin\pi t|))g_1(x_1(t-2|\cos\pi t|)) + g_2(x_2(t-2|\sin\pi t|))g_2(x_2(t-2|\cos\pi t|))\bigr] \\
&+ 10\sin\pi t, \\
x_2'(t) ={}& -1.5x_2\Bigl(t-\tfrac{|\sin\pi t|}{1{,}000}\Bigr) + \tfrac{|\cos\pi t|}{32}g_1\bigl(x_1(t-|\cos\pi t|)\bigr) + \tfrac{|\sin\pi t|}{32}g_2\bigl(x_2(t-|\sin\pi t|)\bigr) \\
&+ \tfrac{\sin\pi t}{32}\bigl[g_1(x_1(t-2|\sin\pi t|))g_1(x_1(t-2|\cos\pi t|)) + g_1(x_1(t-2|\cos\pi t|))g_2(x_2(t-2|\sin\pi t|)) \\
&\qquad + g_2(x_2(t-2|\cos\pi t|))g_1(x_1(t-2|\sin\pi t|)) + g_2(x_2(t-2|\cos\pi t|))g_2(x_2(t-2|\sin\pi t|))\bigr] \\
&+ 10\cos\pi t,
\end{aligned}
\right\}
\quad t\neq t_k,
$$
$$
x_i(t_k^+) = (1+d_{ik})x_i(t_k), \qquad d_{i(2s)}=-2, \quad d_{i(2s-1)}=-1, \qquad t_k=0.5k,\ i=1,2,\ k,s=1,2,\ldots.
\tag{4.1}
$$

Here, it is assumed that the activation functions are

$$g_1(x)=g_2(x)=|x+1|-|x-1|.$$

Note that

$$
\begin{gathered}
c_1(t)=c_2(t)=1.5, \qquad L_1=L_2=2, \qquad M=2, \\
a_{11}(t)=\frac{|\sin\pi t|}{32}, \quad a_{12}(t)=\frac{|\cos\pi t|}{32}, \quad a_{21}(t)=\frac{|\cos\pi t|}{32}, \quad a_{22}(t)=\frac{|\sin\pi t|}{32}, \\
b_{111}(t)=b_{112}(t)=b_{121}(t)=b_{122}(t)=\frac{\cos\pi t}{32}, \qquad b_{211}(t)=b_{212}(t)=b_{221}(t)=b_{222}(t)=\frac{\sin\pi t}{32}, \\
\eta_1(t)=\eta_2(t)=\frac{|\sin\pi t|}{1{,}000}, \qquad I_1(t)=10\sin\pi t, \qquad I_2(t)=10\cos\pi t, \\
\tau_{11}(t)=|\sin\pi t|, \quad \tau_{12}(t)=|\cos\pi t|, \quad \tau_{21}(t)=|\cos\pi t|, \quad \tau_{22}(t)=|\sin\pi t|, \\
\sigma_{111}(t)=2|\cos\pi t|, \qquad \sigma_{112}(t)=\sigma_{121}(t)=\sigma_{122}(t)=2|\sin\pi t|, \\
\sigma_{211}(t)=2|\sin\pi t|, \qquad \sigma_{212}(t)=\sigma_{221}(t)=\sigma_{222}(t)=2|\cos\pi t|, \\
v_{111}(t)=2|\sin\pi t|, \qquad v_{112}(t)=v_{121}(t)=v_{122}(t)=2|\cos\pi t|, \\
v_{211}(t)=2|\cos\pi t|, \qquad v_{212}(t)=v_{221}(t)=v_{222}(t)=2|\sin\pi t|.
\end{gathered}
$$

Then, taking $\xi_i=1$, $i=1,2$, we obtain

$$
\begin{aligned}
&-\bigl[c_i(t)-c_i(t)\eta_i(t)c_i^+\bigr]\xi_i + \sum_{j=1}^n\bigl(|a_{ij}(t)|+c_i(t)\eta_i(t)a_{ij}^+\bigr)L_j\xi_j \\
&\qquad + \sum_{j=1}^n\sum_{l=1}^n\bigl(|b_{ijl}(t)|+c_i(t)\eta_i(t)b_{ijl}^+\bigr)(L_j\xi_j+L_l\xi_l)M \\
&\quad < -1.5 + 1.5\times\frac{1}{1{,}000}\times 1.5 + \Bigl(\frac{1}{32}+3\times\frac{1}{1{,}000}\times\frac{1}{32}\Bigr)\times 2\times 2 \\
&\qquad + \Bigl(\frac{1}{32}+3\times\frac{1}{1{,}000}\times\frac{1}{32}\Bigr)\times(2+2)\times 2\times 4 \\
&\quad = -0.369375 < -0.3,
\end{aligned}
\tag{4.2}
$$

so that $(H_6)$ holds with $\eta=0.3$ and $\xi_i=1$, $i=1,2$.
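The arithmetic in (4.2) is easy to verify directly; every number below is taken from the bound as stated:

```python
# Numerical check of the bound (4.2), with xi_i = 1 and eta = 0.3.
bound = (-1.5
         + 1.5 * (1 / 1_000) * 1.5
         + (1 / 32 + 3 * (1 / 1_000) * (1 / 32)) * 2 * 2
         + (1 / 32 + 3 * (1 / 1_000) * (1 / 32)) * (2 + 2) * 2 * 4)
print(bound)           # -0.369375
assert bound < -0.3    # hypothesis (H6) holds with eta = 0.3
```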

It follows that system (4.1) satisfies all the conditions of Theorem 3.1. Hence, system (4.1) has exactly one 1-anti-periodic solution, and this solution is globally exponentially stable. This is confirmed by the numerical simulation in Figure 1.

Figure 1 Numerical solution $x(t)=(x_1(t),x_2(t))^T$ of system (4.1) for initial value $\varphi(s)\equiv(6,7)^T$, $s\in[-2,0]$.
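A minimal reconstruction of the simulation behind Figure 1 can be sketched with the explicit Euler method. The step size, horizon, and the reading of the impulse coefficients (multipliers $1+d_{ik}$ equal to $-1$ at even $k$ and $0$ at odd $k$) are assumptions of this sketch, not the paper's actual simulation code.

```python
import math

# Explicit Euler simulation sketch of example (4.1); step size, horizon and
# impulse factors are illustrative assumptions.
g = lambda u: abs(u + 1) - abs(u - 1)            # activation, L_j = M = 2

h = 0.001                                        # Euler step (assumed)
steps = int(6.0 / h)                             # simulate on [0, 6]
hist = int(2.0 / h)                              # tau_i = 2 covers all delays
x = [[6.0, 7.0] for _ in range(hist + 1)]        # constant history on [-2, 0]

def past(k, d):
    """State at time k*h - d, read from the stored trajectory/history."""
    return x[k + hist - int(round(d / h))]

for k in range(steps):
    t = k * h
    s, c = abs(math.sin(math.pi * t)), abs(math.cos(math.pi * t))
    dx1 = (-1.5 * past(k, s / 1_000)[0]
           + s / 32 * g(past(k, s)[0]) + c / 32 * g(past(k, c)[1])
           + math.cos(math.pi * t) / 32
             * (g(past(k, 2 * c)[0]) * g(past(k, 2 * s)[0])
                + g(past(k, 2 * s)[0]) * g(past(k, 2 * c)[1])
                + g(past(k, 2 * s)[1]) * g(past(k, 2 * c)[0])
                + g(past(k, 2 * s)[1]) * g(past(k, 2 * c)[1]))
           + 10 * math.sin(math.pi * t))
    dx2 = (-1.5 * past(k, s / 1_000)[1]
           + c / 32 * g(past(k, c)[0]) + s / 32 * g(past(k, s)[1])
           + math.sin(math.pi * t) / 32
             * (g(past(k, 2 * s)[0]) * g(past(k, 2 * c)[0])
                + g(past(k, 2 * c)[0]) * g(past(k, 2 * s)[1])
                + g(past(k, 2 * c)[1]) * g(past(k, 2 * s)[0])
                + g(past(k, 2 * c)[1]) * g(past(k, 2 * s)[1]))
           + 10 * math.cos(math.pi * t))
    cur = x[-1]
    new = [cur[0] + h * dx1, cur[1] + h * dx2]
    if (k + 1) % 500 == 0:                       # impulse moment t_k = 0.5 m
        m = (k + 1) // 500
        f = -1.0 if m % 2 == 0 else 0.0          # assumed factor 1 + d_ik
        new = [f * new[0], f * new[1]]
    x.append(new)

peak = max(max(abs(u), abs(v)) for u, v in x)
print(peak)
```

The printed peak stays well inside the a priori bound of Lemma 2.1, consistent with the boundedness and stability asserted by Theorem 3.1.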

Remark 4.1 Since [9–16] only dealt with IHHNNs without leakage delays, one can observe that none of the results in these works and the references therein can be applied to prove the existence and exponential stability of the 1-anti-periodic solution of IHHNN (4.1). This implies that the results of this paper are essentially new.

Author’s contributions

The author completed this manuscript independently, and read and approved the final version.

References

  1. Lakshmikantham V, Bainov DD, Simeonov PS: Theory of Impulsive Differential Equations. World Scientific, Singapore; 1989.
  2. Samoilenko AM, Perestyuk NA: Impulsive Differential Equations. World Scientific, Singapore; 1995.
  3. Akhmet MU: On the general problem of stability for impulsive differential equations. J. Math. Anal. Appl. 2003, 288: 182–196. doi:10.1016/j.jmaa.2003.08.001
  4. Giles CL, Maxwell T: Learning, invariance and generalization in high-order neural networks. Appl. Opt. 1987, 26: 4972–4978. doi:10.1364/AO.26.004972
  5. Karayiannis NB: On the training and performance of higher-order neural networks. Math. Biosci. 1995, 129: 143–168. doi:10.1016/0025-5564(94)00057-7
  6. Schmidt WAC, Davis JP: Pattern recognition properties of various feature spaces for higher order neural networks. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15: 795–801. doi:10.1109/34.236250
  7. Reid MB, Spirkovska L, Ochoa E: Rapid training of higher-order neural networks for invariant pattern recognition. In: Proc. Int. Joint Conf. Neural Networks, Washington; 1989.
  8. Zhang J, Gui Z: Existence and stability of periodic solutions of high-order Hopfield neural networks with impulses and delays. J. Comput. Appl. Math. 2009, 224: 602–613. doi:10.1016/j.cam.2008.05.042
  9. Shi P, Dong L: Existence and exponential stability of anti-periodic solutions of Hopfield neural networks with impulses. Appl. Math. Comput. 2010, 216: 623–630. doi:10.1016/j.amc.2010.01.095
  10. Guan Z, Chen G: On delayed impulsive Hopfield neural networks. Neural Netw. 1999, 12: 273–280. doi:10.1016/S0893-6080(98)00133-6
  11. Zhang A: Existence and exponential stability of anti-periodic solutions for HCNNs with time-varying leakage delays. Adv. Differ. Equ. 2013, 2013: Article ID 162. doi:10.1186/1687-1847-2013-162
  12. Liu B, Gong S: Periodic solution for impulsive cellar neural networks with time-varying delays in the leakage terms. Abstr. Appl. Anal. 2013, 2013: Article ID 701087.
  13. Jiang Y, Yang B, Wang J, Shao C: Delay-dependent stability criterion for delayed Hopfield neural networks. Chaos Solitons Fractals 2009, 39: 2133–2137. doi:10.1016/j.chaos.2007.06.039
  14. Xiao B, Meng H: Existence and exponential stability of positive almost periodic solutions for high-order Hopfield neural networks. Appl. Math. Model. 2009, 33: 532–542. doi:10.1016/j.apm.2007.11.027
  15. Zhang F, Li Y: Almost periodic solutions for higher-order Hopfield neural networks without bounded activation functions. Electron. J. Differ. Equ. 2007, 99: 1–10.
  16. Ou CX: Anti-periodic solutions for high-order Hopfield neural networks. Comput. Math. Appl. 2008, 56: 1838–1844. doi:10.1016/j.camwa.2008.04.029


Acknowledgements

I am grateful to the referees for their suggestions that improved the writing of the paper. This work was supported by the National Natural Science Foundation of China (grant no. 11201184), the Natural Scientific Research Fund of Zhejiang Provincial of P.R. China (grant no. LY12A01018), and the Natural Scientific Research Fund of Zhejiang Provincial Education Department of P.R. China (grant no. Z201122436).

Author information

Correspondence to Wentao Wang.

Additional information

Competing interests

The author declares that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Wang, W. Anti-periodic solution for impulsive high-order Hopfield neural networks with time-varying delays in the leakage terms. Adv Differ Equ 2013, 273 (2013). https://doi.org/10.1186/1687-1847-2013-273
