

Periodicity and exponential stability of discrete-time neural networks with variable coefficients and delays

Abstract

Discrete analogues of continuous-time neural models are of great importance in numerical simulations and practical implementations. In the current paper, a discrete analogue of continuous-time neural networks with variable coefficients and multiple delays is investigated. By means of a Lyapunov functional, the continuation theorem of topological degree theory, inequality techniques and matrix analysis, sufficient conditions guaranteeing the existence and global exponential stability of periodic solutions are obtained, without assuming boundedness or differentiability of the activation functions. To show the effectiveness of our method, an illustrative example is presented along with numerical simulations.

MSC: 34D23, 34K20, 39A12, 92B20.

1 Introduction

Various neural networks have been proposed so far, and they have attracted extensive interest of researchers from various fields, since they play important roles and have found successful applications in areas such as pattern recognition, signal and image processing, nonlinear optimization, parallel computation, and other engineering fields; see, for example, [1–8]. The dynamical behaviors of neural networks, such as the existence and asymptotic stability of equilibria, periodic solutions, bifurcations and chaos, have been among the most active areas of research and have been extensively studied over the past years [9–21].

Time delays in the interactions between neurons are frequently unavoidable due to the finite transmission speed of signals among neurons, and they can cause instability, divergence and oscillations in neural networks [22], so it is necessary to introduce time delays into neural models. Numerous sufficient conditions ensuring stability have been given for neural models with discrete, time-varying and distributed delays, respectively.

Meanwhile, in numerical simulations and practical implementations, discretization of continuous-time models is necessary and of great importance. Certainly, to faithfully reflect the dynamical behaviors of continuous systems, discrete analogues should inherit the dynamical characteristics of their continuous counterparts [13, 23]. The ways to derive discrete-time analogues from continuous versions are diverse, but most of them fail to keep the original dynamics and may display more complicated behaviors. To this end, an implicit scheme has been put forward [13] to derive the discrete analogues. For discrete models under this scheme, such as discrete Hopfield, bidirectional associative memory and cellular neural networks, several authors [24–29] have studied the existence and exponential stability of equilibria and periodic solutions. However, these works are mainly concerned with models having constant coefficients. Research on discrete models with variable coefficients and delays is very rare. Since the neuron charging time, interconnection weights and external inputs often change over time, neural models with a temporal structure of neural activities are much closer to real systems, and hence studies on such systems and their discrete versions are of great practical and theoretical value.

Motivated by the above discussions, we present a discrete analogue of continuous-time neural networks with variable coefficients and delays. Here the activation functions are not assumed to be bounded, as opposed to those in [25–28]. To deal with this general model, a suitable and effective Lyapunov functional is constructed. The continuation theorem of topological degree theory [30], inequality techniques and matrix analysis [31] are employed to obtain sufficient conditions guaranteeing the existence and global exponential stability of periodic solutions. As will be seen, these sufficient conditions are less conservative and easy to verify. Further, no restrictions of differentiability or monotonicity are imposed on the activation functions. Moreover, the discrete model shares the dynamical behaviors of its continuous counterpart, which implies that the discretization preserves the dynamics very well. To show the effectiveness of our results, an illustrative example along with numerical simulations is presented.

The paper is organized as follows. In Section 2, discrete-time neural networks with variable coefficients and delays are formulated. Some assumptions and mathematical preliminaries are also given. Section 3 is devoted to the existence of periodic solutions. In Section 4, the exponential convergence of this discrete model is discussed. To show the effectiveness of the method, an illustrative example is presented in Section 5. Some conclusions are drawn in Section 6.

2 Mathematical preliminaries

Continuous-time neural networks with time-varying coefficients and delays read as follows:

$$\frac{dx_i(t)}{dt}=-a_i(t)x_i(t)+\sum_{j=1}^{m}b_{ij}(t)f_j\bigl(x_j(t-\sigma(t))\bigr)+I_i(t),$$
(1)

with initial values

$$x_i(s)=\phi_i(s),\quad -\tau\le s\le 0,$$
(2)

where $i=1,\dots,m$, $t\in\mathbb{R}^+=[0,+\infty)$, $0\le\sigma(t)\le\tau$; $x=\operatorname{col}(x_1,\dots,x_m)\in\mathbb{R}^m$, $x_i(t)$ is the state of the ith neuron at time t; the continuous function $a_i(t)$ represents the neuron charging time, $b_{ij}(t)$ is the strength of the jth unit on the ith unit at time $t-\sigma(t)$; $f_j$ denotes the activation function of the neuron, which satisfies a global Lipschitz condition; $\sigma(t)$ denotes the transmission delay along the axon of the jth unit; the continuous function $I_i(t)$ is the external input on the ith neuron at time t; the initial value function $\phi_i(s)$ is bounded and continuous on $[-\tau,0]$. Research into the dynamics, such as stability and periodic oscillations, of this model has been carried out extensively [18]. Now we will focus on the dynamics of its discrete analogue.

Let $\mathbb{Z}$ denote the set of integers and $\mathbb{Z}^+$ the set of nonnegative integers; let $N(a,b)$ represent the set of integers between a and b with $a\le b$, $a,b\in\mathbb{Z}$, namely, $N(a,b)=\{a,a+1,\dots,b\}$.

Reformulate the continuous-time model (1) by equations with piecewise constant arguments of the form

$$\frac{dx_i(t)}{dt}=-a_i\bigl([t/h]h\bigr)x_i(t)+\sum_{j=1}^{m}b_{ij}\bigl([t/h]h\bigr)f_j\Bigl(x_j\bigl([t/h]h-\sigma([t/h]h)\bigr)\Bigr)+I_i\bigl([t/h]h\bigr),$$

where $i=1,\dots,m$, $t\in\bigl([t/h]h,[t/h]h+h\bigr)$, $h>0$ is the discretization step size and $[t/h]$ denotes the integer part of $t/h$. Let

$$[t/h]=n,\qquad x_i(nh)=x_i(n),\qquad a_i(nh)=a_i(n),\qquad b_{ij}(nh)=b_{ij}(n),\qquad \sigma(nh)=k(n),$$

where $n=0,1,2,\dots$; then one has

$$\frac{dx_i(t)}{dt}=-a_i(n)x_i(t)+\sum_{j=1}^{m}b_{ij}(n)f_j\bigl(x_j(n-k(n))\bigr)+I_i(n).$$

Integrate over the interval [nh,t) to get

$$x_i(t)e^{a_i(n)t}-x_i(n)e^{a_i(n)nh}=\frac{e^{a_i(n)t}-e^{a_i(n)nh}}{a_i(n)}\Bigl[\sum_{j=1}^{m}b_{ij}(n)f_j\bigl(x_j(n-k(n))\bigr)+I_i(n)\Bigr],$$

and let $t\to(n+1)h$; then

$$x_i(n+1)=x_i(n)e^{-a_i(n)h}+\theta_i(h)\Bigl[\sum_{j=1}^{m}b_{ij}(n)f_j\bigl(x_j(n-k(n))\bigr)+I_i(n)\Bigr],$$
(3)

where

$$\theta_i(h)=\frac{1-e^{-a_i(n)h}}{a_i(n)},\quad n\in\mathbb{Z}^+.$$

The discrete model (3) is endowed with initial values

$$x_i(l)=\phi_i(l),\quad l\in N(-k,0),$$
(4)

where $\phi_i(l)$ is bounded on $N(-k,0)$.

Note that $\theta_i(h)>0$ and $\theta_i(h)=h+O(h^2)$ for small $h>0$. It can be shown that the discrete-time analogue (3) converges to the continuous-time model (1) as $h\to 0$.
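For numerical experiments, scheme (3) is straightforward to implement. The following Python sketch is our own illustration rather than part of the original analysis; the function names and the way the delayed state is passed in are hypothetical choices.

```python
import numpy as np

def theta(a_n, h):
    # theta_i(h) = (1 - exp(-a_i(n) h)) / a_i(n), evaluated componentwise
    return (1.0 - np.exp(-a_n * h)) / a_n

def step(x_n, x_delayed, a_n, b_n, I_n, f, h):
    """One iteration of the discrete analogue (3):
    x_i(n+1) = x_i(n) e^{-a_i(n) h}
               + theta_i(h) [ sum_j b_ij(n) f_j(x_j(n-k(n))) + I_i(n) ].

    x_n, x_delayed : state x(n) and delayed state x(n-k(n)) as vectors
    a_n, I_n       : vectors a(n), I(n);  b_n : matrix (b_ij(n))
    f              : activation, applied componentwise to x_delayed
    """
    return x_n * np.exp(-a_n * h) + theta(a_n, h) * (b_n @ f(x_delayed) + I_n)
```

Iterating this step function while keeping a buffer of the last k states reproduces a trajectory of system (3) and (4).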

To investigate the stability and periodic oscillations of system (3), we make further assumptions.

(H1) Suppose that $a_i(n)$, $b_{ij}(n)$, $I_i(n)$ and $k(n)$ are all ω-periodic functions; moreover, $a_i(n)>0$ and $0\le k(n)\le k$, with ω and k being positive integers, for $i=1,\dots,m$.

(H2) Assume that each function $f_j$ satisfies the Lipschitz condition, i.e., there exists a constant $F_j>0$ such that

$$|f_j(\xi)-f_j(\eta)|\le F_j|\xi-\eta|$$

for any $\xi,\eta\in\mathbb{R}$; further, $f_j(0)=0$, $j=1,\dots,m$.

(H3) Suppose that there exist constants $\lambda_i>0$ such that

$$\lambda_i a_i^- -\sum_{j=1}^{m}\lambda_j b_{ij}^+ F_j>0,$$

where $a_i^-=\min_{0\le n\le\omega-1}a_i(n)$, $i=1,\dots,m$.

For any $\phi=(\phi_1,\dots,\phi_m)$, a solution of system (3) and (4) is a vector-valued function $x:\mathbb{Z}^+\to\mathbb{R}^m$ satisfying system (3) for $n\in\mathbb{Z}^+$ and the initial conditions (4). In this paper, it is always assumed that the neural model (3) and (4) admits a solution, denoted by $x(n,\phi)$ or simply $x(n)$.

For later convenience, throughout this paper $\bar f$ represents the mean value of a function $f(n)$ over $[0,\omega-1]$. Denote by $g^+$, $g^-$ the maximum and the minimum of $|g(n)|$ over $[0,\omega-1]$, respectively, i.e.,

$$\bar f=\frac{1}{\omega}\sum_{n=0}^{\omega-1}f(n),\qquad g^+=\max_{0\le n\le\omega-1}|g(n)|,\qquad g^-=\min_{0\le n\le\omega-1}|g(n)|.$$

To analyze the existence and stability of periodic solutions for system (3), M-matrix theory is employed. Some notations and terminologies are given below. For more details, please refer to [31].

Definition 2.1 [31]

Matrix $\Lambda=(a_{ij})$ is said to be a nonsingular M-matrix if (i) $a_{ii}>0$, (ii) $a_{ij}\le 0$ for $i\ne j$, and (iii) $\Lambda^{-1}\ge 0$, $i,j=1,\dots,m$.

Lemma 2.1 [31]

If Λ is a nonsingular M-matrix and $\Lambda y\le h$, then

$$y\le\Lambda^{-1}h.$$

Lemma 2.2 [31]

Let Λ be an $m\times m$ matrix with nonpositive off-diagonal elements. Then Λ is a nonsingular M-matrix if and only if one of the following statements holds true:

(i) There exists a constant vector $\xi=(\xi_1,\dots,\xi_m)^T$, with $\xi_i>0$, $i=1,\dots,m$, such that $\Lambda\xi>0$.

(ii) The real parts of all eigenvalues of Λ are positive.

(iii) There exists a symmetric positive definite matrix W such that $\Lambda W+W\Lambda^T$ is positive definite.

(iv) All of the principal minors of Λ are positive.
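As a quick computational companion to Lemma 2.2 (our own sketch, not part of the paper), a candidate matrix with nonpositive off-diagonal entries can be tested through condition (ii), i.e., by checking that all eigenvalues have positive real parts; the sample matrix below is hypothetical.

```python
import numpy as np

def is_nonsingular_M_matrix(L, tol=1e-12):
    """Test Lemma 2.2 for a square matrix L with nonpositive off-diagonal
    entries, using condition (ii): all eigenvalues have positive real part."""
    L = np.asarray(L, dtype=float)
    off_diag = L - np.diag(np.diag(L))
    if np.any(off_diag > tol):
        raise ValueError("off-diagonal entries must be nonpositive")
    return bool(np.all(np.linalg.eigvals(L).real > tol))

# hypothetical example of the form Gamma = I - BF used in Section 3
Gamma = np.array([[1.0, -0.2],
                  [-0.3, 1.0]])
print(is_nonsingular_M_matrix(Gamma))   # True: eigenvalues 1 +/- sqrt(0.06)
```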

3 Periodic solutions

Next we investigate the existence and global exponential stability of periodic solutions of system (3). By the continuation theorem of topological degree theory, a Lyapunov functional and analytic techniques such as matrix analysis, the existence and global exponential stability of ω-periodic solutions are established. Let us first introduce the continuation theorem due to Gaines and Mawhin [30].

Let X and Y be two real Banach spaces, let L:DomL∩X→Y be a Fredholm operator of index zero, and let P:X→X, Q:Y→Y be continuous projectors such that ImP=KerL, KerQ=ImL, and X=KerL⊕KerP, Y=ImL⊕ImQ. Denote by L P the restriction of L on KerP∩DomL; by K P :ImL→KerP∩DomL the inverse of L P ; by J:ImQ→KerL the algebraic and topological isomorphism of ImQ onto KerL, due to the same dimensions of these two subspaces.

Lemma 3.1 (Continuation theorem [30])

Let Ω⊂X be an open bounded set and let N:X→Y be a continuous operator which is L-compact on $\bar\Omega$. Suppose that

(a) for each λ∈(0,1) and x∈∂Ω∩DomL, Lx≠λNx;

(b) for each x∈∂Ω∩KerL, QNx≠0;

(c) deg{JQN, Ω∩KerL, 0}≠0.

Then Lx=Nx admits at least one solution in $\bar\Omega$∩DomL.

To achieve our goal, take

$$X=Y=\bigl\{x=\{x(n)\}:x(n)\in\mathbb{R}^m,\ x(n+\omega)=x(n),\ n\in\mathbb{Z}\bigr\},$$

endowed with the norm

$$\|x\|=\sum_{i=1}^{m}\lambda_i^{-1}\max_{0\le n\le\omega}|x_i(n)|,$$

with which X, Y are Banach spaces; here $|\cdot|$ is the Euclidean norm.

Set

$$F=\operatorname{diag}\{F_1,\dots,F_m\},\qquad B=\Bigl(\frac{\theta_i(h)}{1-e^{-a_i^- h}}b_{ij}^+\Bigr)_{m\times m},\qquad \Gamma=I-BF,$$

where I is the identity matrix. From periodicity and positiveness, it holds that $a_i^+\ge a_i(n)\ge a_i^->0$, $i=1,\dots,m$.

Lemma 3.2 Under hypothesis (H3), matrix Γ is a nonsingular M-matrix.

Proof Since

$$\lambda_i-\frac{\theta_i(h)}{1-e^{-a_i^- h}}\sum_{j=1}^{m}\lambda_j b_{ij}^+ F_j=\frac{\theta_i(h)}{1-e^{-a_i^- h}}\Bigl[\lambda_i a_i^- -\sum_{j=1}^{m}\lambda_j b_{ij}^+ F_j\Bigr],\quad i=1,\dots,m,$$

it follows from (H3) and Lemma 2.2 that Γ is a nonsingular M-matrix. □

Theorem 3.1 Suppose that hypotheses (H1)-(H3) hold. If

$$\kappa\equiv\min_{1\le i\le m}\Bigl\{\bar a_i-F_i\sum_{j=1}^{m}|\bar b_{ji}|\Bigr\}>0,$$

then system (3) admits at least one ω-periodic solution.

Proof Let L:DomL∩X→X be defined by

(Lx)(n)=x(n+1)−x(n),

and let the ith component of the mapping N:X→X be defined by

$$(Nx)_i(n)=\bigl(e^{-a_i(n)h}-1\bigr)x_i(n)+\theta_i(h)\sum_{j=1}^{m}b_{ij}(n)f_j\bigl(x_j(n-k(n))\bigr)+\theta_i(h)I_i(n),\quad i=1,\dots,m.$$

With these notations, system (3) is rewritten into the form

Lx=Nx,x∈DomL∩X.

It is not difficult to see that

$$\mathrm{Ker}\,L=\mathbb{R}^m,\qquad \mathrm{Im}\,L=\Bigl\{x\in X:\bar x=\frac{1}{\omega}\sum_{n=0}^{\omega-1}x(n)=0\Bigr\},$$

hence ImL is closed in X, and dimKerL=codimImL=m, that is, L is a Fredholm operator of index zero.

Define the linear continuous projectors P,Q:X→X by

$$Px=Qx=\bar x.$$

In this way, ImP=KerL, ImL=KerQ, X=KerL⊕KerP=ImL⊕ImQ, and the isomorphism J from ImQ onto KerL is taken to be the identity map.

Clearly, the mapping L P :DomL∩KerP→ImL is one-to-one and onto, so invertible. Its inverse K P :ImL→DomL∩KerP is defined as

$$(K_Px)(n)=\sum_{s=0}^{n-1}x(s)-\frac{1}{\omega}\sum_{\nu=0}^{\omega-1}\sum_{s=0}^{\nu}x(s).$$

Consequently, by the Lebesgue convergence theorem, QN and K P (I−Q)N are continuous, and via the Arzela-Ascoli theorem, QN( Ω ¯ ) and K P (I−Q)N( Ω ¯ ) are compact for any open bounded set Ω⊂X, namely, N is L-compact on Ω ¯ .

Suppose that, for some λ∈(0,1), x∈DomL∩X is a solution of

Lx=λNx,

equivalently,

$$x_i(n+1)-x_i(n)=\lambda\Bigl[\bigl(e^{-a_i(n)h}-1\bigr)x_i(n)+\theta_i(h)\sum_{j=1}^{m}b_{ij}(n)f_j\bigl(x_j(n-k(n))\bigr)+\theta_i(h)I_i(n)\Bigr].$$
(5)

Now one can obtain the following estimates:

$$\begin{aligned}
\max_{0\le n\le\omega-1}|x_i(n)| &=\max_{0\le n\le\omega-1}|x_i(n+1)| \\
&\le\max_{0\le n\le\omega-1}\Bigl[\bigl(1+\lambda(e^{-a_i(n)h}-1)\bigr)|x_i(n)|+\lambda\theta_i(h)\sum_{j=1}^{m}|b_{ij}(n)|\,\bigl|f_j\bigl(x_j(n-k(n))\bigr)\bigr|+\lambda\theta_i(h)|I_i(n)|\Bigr] \\
&\le\bigl(1+\lambda(e^{-a_i^- h}-1)\bigr)\max_{0\le n\le\omega-1}|x_i(n)|+\lambda\theta_i(h)\sum_{j=1}^{m}b_{ij}^+F_j\max_{0\le n\le\omega-1}|x_j(n)|+\lambda\theta_i(h)I_i^+.
\end{aligned}$$

Therefore, we have

$$\max_{0\le n\le\omega-1}|x_i(n)|\le\frac{\theta_i(h)}{1-e^{-a_i^- h}}\Bigl[\sum_{j=1}^{m}b_{ij}^+F_j\max_{0\le n\le\omega-1}|x_j(n)|+I_i^+\Bigr]$$
(6)

for i=1,…,m. Set

$$y=\Bigl(\max_{0\le n\le\omega-1}|x_1(n)|,\dots,\max_{0\le n\le\omega-1}|x_m(n)|\Bigr)^T,\qquad h=\Bigl(\frac{\theta_1(h)}{1-e^{-a_1^- h}}I_1^+,\dots,\frac{\theta_m(h)}{1-e^{-a_m^- h}}I_m^+\Bigr)^T,$$

then inequality (6) is equivalent to

$$\Gamma y\le h.$$
(7)

From Lemma 3.2, Γ is a nonsingular M-matrix; then from Lemma 2.1, it holds that

$$y\le\Gamma^{-1}h.$$
(8)

That means $\max_{0\le n\le\omega-1}|x_i(n)|$ is bounded, i.e., there exist constants $\alpha_i$ such that

$$\max_{0\le n\le\omega-1}|x_i(n)|\le\alpha_i,\quad i=1,\dots,m.$$

Take β>0 such that

$$\lambda\kappa M>\sum_{i=1}^{m}|\bar I_i|,$$

where $M=\sum_{i=1}^{m}\lambda_i^{-1}\alpha_i+\beta$ and $\lambda=\min_{1\le i\le m}\{\lambda_i\}$.

Set

$$\Omega=\bigl\{x\equiv(x_1(n),\dots,x_m(n))^T\in X:\|x\|<M\bigr\};$$

according to the above discussion, Lx≠λNx for x∈∂Ω∩DomL, λ∈(0,1). So, condition (a) in Lemma 3.1 is satisfied. When x∈∂Ω∩KerL, x is a constant vector in $\mathbb{R}^m$ with $\|x\|=\sum_{i=1}^{m}\lambda_i^{-1}|x_i|=M$, and QNx is expressed as $QNx=((QNx)_1(n),\dots,(QNx)_m(n))^T$, with

$$(QNx)_i(n)=\frac{1}{\omega}\sum_{s=0}^{\omega-1}\Bigl[\bigl(e^{-a_i(s)h}-1\bigr)x_i+\theta_i(h)\Bigl(\sum_{j=1}^{m}b_{ij}(s)f_j(x_j)+I_i(s)\Bigr)\Bigr],$$

where i=1,…,m. Since

$$\frac{1}{\omega}\sum_{s=0}^{\omega-1}a_i(s)\theta_i(h)|x_i|-\theta_i(h)\sum_{j=1}^{m}\Bigl|\frac{1}{\omega}\sum_{s=0}^{\omega-1}b_{ij}(s)\Bigr|F_j|x_j|-\frac{\theta_i(h)}{\omega}\Bigl|\sum_{s=0}^{\omega-1}I_i(s)\Bigr|=\theta_i(h)\Bigl[\bar a_i|x_i|-\sum_{j=1}^{m}|\bar b_{ij}|F_j|x_j|-|\bar I_i|\Bigr],$$

and further

$$\sum_{i=1}^{m}\Bigl[\bar a_i|x_i|-\sum_{j=1}^{m}|\bar b_{ij}|F_j|x_j|-|\bar I_i|\Bigr]=\sum_{i=1}^{m}\lambda_i\Bigl[\bar a_i-F_i\sum_{j=1}^{m}|\bar b_{ji}|\Bigr]\lambda_i^{-1}|x_i|-\sum_{i=1}^{m}|\bar I_i|\ge\lambda\kappa M-\sum_{i=1}^{m}|\bar I_i|>0;$$

in this way,

$$\sum_{i=1}^{m}\frac{|(QNx)_i(n)|}{\theta_i(h)}>0.$$

Consequently,

$$QNx\ne 0\quad\text{for } x\in\partial\Omega\cap\mathrm{Ker}\,L,$$

that is, condition (b) in Lemma 3.1 holds.

Let Φ:KerL×[0,1]→X be defined by

$$\Phi(x,\mu)=(\Phi_1,\dots,\Phi_m)^T:=-\mu\phi(x)+(1-\mu)QNx,$$

where $\phi(x)=(\bar a_1\theta_1(h)x_1,\dots,\bar a_m\theta_m(h)x_m)$. When x∈∂Ω∩KerL, it follows that

$$\sum_{i=1}^{m}\frac{|\Phi_i|}{\theta_i(h)}\ge\sum_{i=1}^{m}\Bigl[\bar a_i|x_i|-(1-\mu)\Bigl(\sum_{j=1}^{m}|\bar b_{ij}|F_j|x_j|+|\bar I_i|\Bigr)\Bigr]\ge\sum_{i=1}^{m}\lambda_i\Bigl[\bar a_i-F_i\sum_{j=1}^{m}|\bar b_{ji}|\Bigr]\lambda_i^{-1}|x_i|-\sum_{i=1}^{m}|\bar I_i|\ge\lambda\kappa M-\sum_{i=1}^{m}|\bar I_i|>0.$$

That means

$$\Phi(x,\mu)\ne 0\quad\text{for } x\in\partial\Omega\cap\mathrm{Ker}\,L,\ \mu\in[0,1].$$

As a result, the homotopy invariance implies

$$\deg(QN,\Omega\cap\mathrm{Ker}\,L,0)=\deg(-\phi,\Omega\cap\mathrm{Ker}\,L,0)=(-1)^m\ne 0.$$

Hence condition (c) in Lemma 3.1 is verified. By Lemma 3.1, we conclude that system (3) admits at least one ω-periodic solution. This completes the proof. □

4 Exponential stability of periodic solutions

By Theorem 3.1, system (3) has at least one ω-periodic solution $x^*(n)=(x_1^*(n),\dots,x_m^*(n))^T$. Clearly, if $x^*(n)$ is exponentially stable, then the ω-periodic solution of system (3) is unique. Now we will investigate the exponential stability of periodic solutions. Set $u(n)=x(n)-x^*(n)$; then system (3) is equivalent to

$$u_i(n+1)=u_i(n)e^{-a_i(n)h}+\theta_i(h)\sum_{j=1}^{m}b_{ij}(n)\bigl[f_j\bigl(x_j(n-k(n))\bigr)-f_j\bigl(x_j^*(n-k(n))\bigr)\bigr].$$
(9)

Theorem 4.1 Suppose that all conditions in Theorem  3.1 hold except that (H3) is replaced by

(H4) Suppose that there exist constants $\lambda_i>0$ such that

$$\lambda_i a_i^- -(1+k^+-k^-)F_i\sum_{j=1}^{m}\lambda_j b_{ji}^+>0,\quad i=1,\dots,m;$$

then the ω-periodic solution of system (3) is globally exponentially stable in the sense that there exist constants $\eta>1$ and $C>1$ such that, for any solution $x(n,\phi)$ of system (3) and (4) with initial condition ϕ, it holds that

$$\sum_{i=1}^{m}\frac{|x_i(n,\phi)-x_i^*(n)|}{\theta_i(h)}\le C\Bigl(\frac{1}{\eta}\Bigr)^n\sum_{i=1}^{m}\Bigl\{\sup_{l\in N(-k,0)}\frac{|\phi_i(l)-x_i^*(l)|}{\theta_i(h)}\Bigr\}$$

for all n∈ Z + .

Proof Define the function $G_i:\mathbb{R}\to\mathbb{R}$ by

$$G_i(\eta_i)=\lambda_i\bigl(1-\eta_i e^{-a_i^- h}\bigr)-(1+k^+-k^-)F_i\theta_i(h)\sum_{j=1}^{m}\lambda_j b_{ji}^+\eta_i^{k+1}$$

for $i=1,\dots,m$. From (H4), we have

$$G_i(1)=\lambda_i\bigl(1-e^{-a_i^- h}\bigr)-(1+k^+-k^-)F_i\theta_i(h)\sum_{j=1}^{m}\lambda_j b_{ji}^+=\theta_i(h)\Bigl[\lambda_i a_i^- -(1+k^+-k^-)F_i\sum_{j=1}^{m}\lambda_j b_{ji}^+\Bigr]>0.$$

From the continuity of the functions $G_i$, there must be a number $\eta>1$ such that

$$G_i(\eta)>0,\quad i=1,\dots,m,$$

that is,

$$\lambda_i\bigl(1-\eta e^{-a_i^- h}\bigr)-(1+k^+-k^-)F_i\theta_i(h)\sum_{j=1}^{m}\lambda_j b_{ji}^+\eta^{k+1}>0$$
(10)

for i=1,…,m. From (9) and (H2), one has

$$|u_i(n+1)|\le|u_i(n)|e^{-a_i^- h}+\theta_i(h)\sum_{j=1}^{m}b_{ij}^+F_j\bigl|u_j(n-k(n))\bigr|.$$
(11)

Set $U_i(n)=\eta^n\frac{|u_i(n)|}{\theta_i(h)}$; then it holds from (11) that

$$U_i(n+1)\le\eta U_i(n)e^{-a_i^- h}+\sum_{j=1}^{m}b_{ij}^+F_j\theta_j(h)\eta^{1+k}U_j\bigl(n-k(n)\bigr),$$
(12)

where $n\in\mathbb{Z}^+$. Define a Lyapunov functional $V(n)=V(U_1,\dots,U_m)(n)$ as follows:

$$V(n)=V_1(n)+V_2(n)+V_3(n),$$

where

$$\begin{aligned}
V_1(n)&=\sum_{i=1}^{m}\lambda_i U_i(n),\\
V_2(n)&=\sum_{i,j=1}^{m}\sum_{s=n-k(n)}^{n-1}\lambda_i b_{ij}^+F_j\theta_j(h)\eta^{1+k}U_j(s),\\
V_3(n)&=\sum_{i,j=1}^{m}\sum_{r=-k^++2}^{-k^-+1}\sum_{s=n+r-1}^{n-1}\lambda_i b_{ij}^+F_j\theta_j(h)\eta^{1+k}U_j(s).
\end{aligned}$$

To investigate the exponential stability of the ω-periodic solution $x^*(n)$ by the Lyapunov functional V(n), it is necessary to calculate the difference $\Delta V(n)=V(n+1)-V(n)$ along the solutions of (12). From (12), we have

$$\Delta V_1(n)=V_1(n+1)-V_1(n)\le\sum_{i=1}^{m}\lambda_i\Bigl[\bigl(\eta e^{-a_i^- h}-1\bigr)U_i(n)+\sum_{j=1}^{m}b_{ij}^+F_j\theta_j(h)\eta^{1+k}U_j\bigl(n-k(n)\bigr)\Bigr].$$

Since

$$n-k(n)+1\le n+1-k^-,\qquad n+1-k^+\le n+1-k(n+1),$$

one obtains

$$\begin{aligned}
\Delta V_2(n)&=\sum_{i,j=1}^{m}\eta^{1+k}b_{ij}^+\lambda_i F_j\theta_j(h)\Bigl(\sum_{s=n+1-k(n+1)}^{n}U_j(s)-\sum_{s=n-k(n)}^{n-1}U_j(s)\Bigr)\\
&\le\sum_{i,j=1}^{m}\eta^{1+k}b_{ij}^+\lambda_i F_j\theta_j(h)\Bigl(U_j(n)-U_j\bigl(n-k(n)\bigr)+\sum_{s=n+1-k^-}^{n-1}U_j(s)+\sum_{s=n+1-k(n+1)}^{n-k^-}U_j(s)-\sum_{s=n+1-k^-}^{n-1}U_j(s)\Bigr)\\
&\le\sum_{i,j=1}^{m}\eta^{1+k}b_{ij}^+\lambda_i F_j\theta_j(h)\Bigl(U_j(n)-U_j\bigl(n-k(n)\bigr)+\sum_{s=n+1-k^+}^{n-k^-}U_j(s)\Bigr),\\
\Delta V_3(n)&=\sum_{i,j=1}^{m}\eta^{1+k}b_{ij}^+\lambda_i F_j\theta_j(h)\sum_{r=-k^++2}^{-k^-+1}\Bigl(\sum_{s=n+r}^{n}U_j(s)-\sum_{s=n+r-1}^{n-1}U_j(s)\Bigr)\\
&=\sum_{i,j=1}^{m}\eta^{1+k}b_{ij}^+\lambda_i F_j\theta_j(h)\sum_{r=-k^++2}^{-k^-+1}\bigl(U_j(n)-U_j(n+r-1)\bigr)\\
&=\sum_{i,j=1}^{m}\eta^{1+k}b_{ij}^+\lambda_i F_j\theta_j(h)\Bigl((k^+-k^-)U_j(n)-\sum_{s=n+1-k^+}^{n-k^-}U_j(s)\Bigr).
\end{aligned}$$

Therefore, we have

$$\begin{aligned}
\Delta V(n)&=\Delta V_1(n)+\Delta V_2(n)+\Delta V_3(n)\\
&\le\sum_{i=1}^{m}\lambda_i\bigl(\eta e^{-a_i^- h}-1\bigr)U_i(n)+\sum_{i,j=1}^{m}(1+k^+-k^-)\eta^{1+k}\lambda_i b_{ij}^+F_j\theta_j(h)U_j(n)\\
&=\sum_{i=1}^{m}\Bigl[\lambda_i\bigl(\eta e^{-a_i^- h}-1\bigr)+(1+k^+-k^-)\eta^{1+k}\theta_i(h)F_i\sum_{j=1}^{m}\lambda_j b_{ji}^+\Bigr]U_i(n).
\end{aligned}$$

In view of (10), it follows that $\Delta V(n)\le 0$ for all $n\in\mathbb{Z}^+$, which means that $V(n)\le V(0)$ for $n=1,2,3,\dots$. Note that

$$V(n)\ge\min_{1\le i\le m}\{\lambda_i\}\sum_{i=1}^{m}U_i(n)$$

and

$$\begin{aligned}
V(0)&=\sum_{i=1}^{m}\lambda_i U_i(0)+\sum_{i,j=1}^{m}\sum_{s=-k(0)}^{-1}\lambda_i b_{ij}^+F_j\theta_j(h)\eta^{1+k}U_j(s)+\sum_{i,j=1}^{m}\sum_{r=-k^++2}^{-k^-+1}\sum_{s=r-1}^{-1}\lambda_i b_{ij}^+F_j\theta_j(h)\eta^{1+k}U_j(s)\\
&\le\sum_{i=1}^{m}\Bigl\{\lambda_i U_i(0)+\sum_{j=1}^{m}\sum_{s=-k^+}^{-1}\lambda_i b_{ij}^+F_j\theta_j(h)\eta^{1+k}U_j(s)+\sum_{j=1}^{m}\sum_{r=-k^++2}^{-k^-+1}\sum_{s=-k^+}^{-1}\lambda_i b_{ij}^+F_j\theta_j(h)\eta^{1+k}U_j(s)\Bigr\},
\end{aligned}$$

so we have

$$\begin{aligned}
\sum_{i=1}^{m}\frac{|x_i(n,\phi)-x_i^*(n)|}{\theta_i(h)}&\le\frac{1}{\lambda}\Bigl(\frac{1}{\eta}\Bigr)^n\sum_{i=1}^{m}\Bigl[\lambda_i+k^+(1+k^+-k^-)F_i\theta_i(h)\eta^{1+k}\sum_{j=1}^{m}\lambda_j b_{ji}^+\Bigr]\sup_{l\in N(-k,0)}\frac{|\phi_i(l)-x_i^*(l)|}{\theta_i(h)}\\
&\le C\Bigl(\frac{1}{\eta}\Bigr)^n\sum_{i=1}^{m}\sup_{l\in N(-k,0)}\frac{|\phi_i(l)-x_i^*(l)|}{\theta_i(h)},
\end{aligned}$$

where $n\in\mathbb{Z}^+$, $\lambda=\min_{1\le i\le m}\{\lambda_i\}$ and

$$C=\max_{1\le i\le m}\Bigl\{\frac{\lambda_i+k^+(1+k^+-k^-)F_i\theta_i(h)\eta^{1+k}\sum_{j=1}^{m}\lambda_j b_{ji}^+}{\lambda}\Bigr\}.$$

The proof is complete. □

Theorem 4.2 Suppose that all conditions in Theorem 3.1 hold. Then the ω-periodic solution of system (3) is globally exponentially stable in the sense that there exist constants $\eta>1$ and $C^*>0$ such that, for any solution $x(n,\phi)$ of system (3) and (4) with initial condition ϕ, it holds that

$$|x_i(n,\phi)-x_i^*(n)|\le C^*\Bigl(\frac{1}{\eta}\Bigr)^n\max_{1\le i\le m}\Bigl\{\sup_{l\in N(-k,0)}\bigl|\phi_i(l)-x_i^*(l)\bigr|\Bigr\}$$

for all n∈ Z + .

Proof Define the function $G_i^*:\mathbb{R}\to\mathbb{R}$ by

$$G_i^*(\eta_i)=\lambda_i\bigl(1-\eta_i e^{-a_i^- h}\bigr)-\theta_i(h)\sum_{j=1}^{m}\lambda_j b_{ij}^+F_j\eta_i^{k+1},\quad i=1,\dots,m.$$

From (H3), we have

$$G_i^*(1)=\lambda_i\bigl(1-e^{-a_i^- h}\bigr)-\theta_i(h)\sum_{j=1}^{m}\lambda_j b_{ij}^+F_j=\theta_i(h)\Bigl(\lambda_i a_i^- -\sum_{j=1}^{m}\lambda_j b_{ij}^+F_j\Bigr)>0.$$

From the continuity of the functions $G_i^*$, there must be a number $\eta>1$ such that

$$G_i^*(\eta)>0,\quad i=1,\dots,m,$$

that is,

$$\lambda_i\bigl(1-\eta e^{-a_i^- h}\bigr)-\theta_i(h)\sum_{j=1}^{m}\lambda_j b_{ij}^+F_j\eta^{k+1}>0,\quad i=1,\dots,m.$$
(13)

From (9) and (H2), one has

$$|u_i(n+1)|\le|u_i(n)|e^{-a_i^- h}+\theta_i(h)\sum_{j=1}^{m}b_{ij}^+F_j\bigl|u_j(n-k(n))\bigr|.$$
(14)

Set $V_i(n)=\eta^n\lambda_i^{-1}|u_i(n)|$; then it holds that

$$V_i(n+1)\le\eta V_i(n)e^{-a_i^- h}+\theta_i(h)\lambda_i^{-1}\sum_{j=1}^{m}b_{ij}^+F_j\lambda_j\eta^{1+k}V_j\bigl(n-k(n)\bigr),$$
(15)

where $n\in\mathbb{Z}^+$. Let $M=\max_{1\le i\le m}\bigl\{\sup_{l\in N(-k,0)}\lambda_i^{-1}|u_i(l)|\bigr\}$. It is clear that $V_i(l)\le M$ for $l\in N(-k,0)$, $i=1,\dots,m$. We claim that

$$V_i(n)\le M\quad\text{for } i=1,\dots,m,\ n\in\mathbb{Z}^+.$$
(16)

Otherwise, there would be an index r and a positive integer $n_1$ such that

$$V_r(n_1)>M,\qquad V_r(n)\le M\quad\text{for } n\in N(-k,n_1-1)$$

and

$$V_i(n)\le M\quad\text{for } i=1,\dots,m,\ i\ne r,\ n\in N(-k,n_1-1).$$

That is, $n_1$ is the first time that inequality (16) is violated. Meanwhile, by (15) and (13), one has

$$M<V_r(n_1)\le\eta V_r(n_1-1)e^{-a_r^- h}+\theta_r(h)\lambda_r^{-1}\sum_{j=1}^{m}b_{rj}^+F_j\lambda_j\eta^{1+k}V_j\bigl(n_1-1-k(n_1-1)\bigr)\le\Bigl(\eta e^{-a_r^- h}+\lambda_r^{-1}\theta_r(h)\sum_{j=1}^{m}b_{rj}^+F_j\lambda_j\eta^{1+k}\Bigr)M<M.$$

That leads to a contradiction. Therefore, the assertion (16) is true. Consequently,

$$|u_i(n)|\le\lambda_i\Bigl(\frac{1}{\eta}\Bigr)^n M,\quad i=1,\dots,m,\ n\in\mathbb{Z}^+.$$

This implies that the ω-periodic solution $x^*(n)$ of system (3) and (4) is globally exponentially stable. The proof is complete. □

Remark Activation functions $f_j$ are frequently assumed to be bounded and monotonic; see, for instance, [18, 28]. However, no restrictions of boundedness, monotonicity or differentiability are imposed in this paper. Moreover, the conditions ensuring exponential stability are less conservative and easy to verify. Also, it can be noted [18] that the continuous model (1) shares the same dynamical behaviors as system (3).
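Since the conditions only involve the period means $\bar a_i$, $|\bar b_{ji}|$ and the bounds $b_{ij}^+$, they can be checked directly from one period of the data. The short Python sketch below is our own illustration (the array layout is a hypothetical convention, not taken from the paper); it evaluates the quantity κ of Theorem 3.1.

```python
import numpy as np

def kappa(a_seq, b_seq, F):
    """kappa = min_i { abar_i - F_i * sum_j |bbar_ji| }  (Theorem 3.1).

    a_seq : array of shape (omega, m), one period of a_i(n)
    b_seq : array of shape (omega, m, m), one period of b_ij(n)
    F     : array of the Lipschitz constants F_i
    """
    a_bar = a_seq.mean(axis=0)                 # abar_i
    b_bar_abs = np.abs(b_seq.mean(axis=0))     # |bbar_ij|
    # column sums of b_bar_abs give, for each i, sum over j of |bbar_ji|
    return float(np.min(a_bar - F * b_bar_abs.sum(axis=0)))
```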

From Theorems 3.1, 4.1, 4.2 and Lemma 2.2, some corollaries could be immediately derived.

Corollary 4.1 Assume that hypotheses (H1) and (H2) hold and κ>0; further assume that one of the following conditions holds:

(i) $a_i^- -(1+k^+-k^-)F_i\sum_{j=1}^{m}b_{ji}^+>0$;

(ii) $a_i^- -\sum_{j=1}^{m}b_{ij}^+F_j>0$, $i=1,\dots,m$.

Then system (3) admits a unique ω-periodic solution, which is globally exponentially stable, that is, all other solutions converge to it exponentially as n→∞.

Corollary 4.2 Under conditions (H1), (H2) and κ>0, if the matrix $A-DF$ is a nonsingular M-matrix, where $A=\operatorname{diag}\{a_1^-,\dots,a_m^-\}$, $D=(b_{ij}^+)_{m\times m}$, then system (3) admits a unique ω-periodic solution, which is globally exponentially stable.

When $a_i(n)\equiv a_i$, $b_{ij}(n)\equiv b_{ij}$, $I_i(n)\equiv I_i$ and $k(n)\equiv k$, the matrices become $A=\operatorname{diag}\{a_1,\dots,a_m\}$, $D=(b_{ij})_{m\times m}$, and system (3) reduces to the model with constant coefficients and delays, that is,

$$x_i(n+1)=x_i(n)e^{-a_i h}+\theta_i(h)\Bigl[\sum_{j=1}^{m}b_{ij}f_j\bigl(x_j(n-k)\bigr)+I_i\Bigr].$$
(17)

Since the equilibrium could be viewed as the periodic solution of arbitrary period, as a consequence of Theorems 3.1, 4.1 and 4.2, we obtain the following corollary.

Corollary 4.3 If $f_j$ satisfies hypothesis (H2), $a_i>0$, $k>0$,

$$a_i-F_i\sum_{j=1}^{m}|b_{ji}|>0,\quad i=1,\dots,m,$$

and, further, the matrix A−DF is a nonsingular M-matrix, then system (17) has a unique equilibrium, which is globally exponentially stable.
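Convergence to the unique equilibrium of (17) can be observed by direct iteration. The following Python sketch is ours, with hypothetical constant data chosen to satisfy the hypotheses of Corollary 4.3 (here $F_1=F_2=1$ for tanh); it is an illustration, not the authors' simulation code.

```python
import numpy as np

def iterate_constant(a, b, I, k, h, f, x_init, n_steps):
    """Iterate system (17):
    x(n+1) = e^{-a h} x(n) + theta(h) [ b @ f(x(n-k)) + I ],
    with theta_i(h) = (1 - e^{-a_i h}) / a_i."""
    theta = (1.0 - np.exp(-a * h)) / a
    hist = [np.asarray(x, dtype=float) for x in x_init]   # x(-k), ..., x(0)
    for _ in range(n_steps):
        x_n, x_delayed = hist[-1], hist[-1 - k]
        hist.append(np.exp(-a * h) * x_n + theta * (b @ f(x_delayed) + I))
    return np.array(hist)

# hypothetical data: a_i - F_i * sum_j |b_ji| equals 1.2 and 2.3, both positive
a = np.array([3.0, 4.0])
b = np.array([[1.0, 0.5],
              [0.8, 1.2]])
I = np.array([0.5, -0.2])
k, h = 2, 0.1
traj = iterate_constant(a, b, I, k, h, np.tanh,
                        x_init=[np.zeros(2)] * (k + 1), n_steps=200)
# trajectories from different initial histories approach the same equilibrium
```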

5 Numerical results

Note that when neural networks are applied to practical problems such as image processing, pattern recognition, artificial intelligence, computer simulations, and so on, discrete versions of models are needed, since the information is discrete in nature and processing procedures occur in discrete steps.

By now, several algorithms based on discrete-time neural models have been given. For example, Chen et al. [32] put forward an image processing method based on discrete neural models, which was implemented on circuits; Wang et al. [33, 34] proposed discrete Hopfield neural models for Max-cut problems and cellular channel assignment; and Yashtini and Malek [35] gave a discrete model for nonlinear convex programming. Moreover, discrete-time cellular neural networks for associative memories with learning and forgetting capabilities [36, 37] were also established. Thus, discrete neural networks have extensive uses in real-life applications.

To show the effectiveness of the obtained theoretical results, an illustrative example is given. Consider the following discrete-time neural network:

$$\begin{aligned}
x_1(n+1)&=e^{-a_1(n)h}x_1(n)+\theta_1(h)\bigl[b_{11}(n)f_1\bigl(x_1(n-k(n))\bigr)+b_{12}(n)f_2\bigl(x_2(n-k(n))\bigr)+I_1(n)\bigr],\\
x_2(n+1)&=e^{-a_2(n)h}x_2(n)+\theta_2(h)\bigl[b_{21}(n)f_1\bigl(x_1(n-k(n))\bigr)+b_{22}(n)f_2\bigl(x_2(n-k(n))\bigr)+I_2(n)\bigr].
\end{aligned}$$

The example is a discrete network of two neurons with self-connection, which is of Hopfield type. The model is often implemented in practical applications such as image processing, pattern recognition and artificial intelligence. Such networks are the prototypes to understand the dynamics of larger-scale networks.

The corresponding continuous model is

$$\begin{aligned}
\dot x_1(t)&=-a_1(t)x_1(t)+\bigl[b_{11}(t)f_1\bigl(x_1(t-k(t))\bigr)+b_{12}(t)f_2\bigl(x_2(t-k(t))\bigr)+I_1(t)\bigr],\\
\dot x_2(t)&=-a_2(t)x_2(t)+\bigl[b_{21}(t)f_1\bigl(x_1(t-k(t))\bigr)+b_{22}(t)f_2\bigl(x_2(t-k(t))\bigr)+I_2(t)\bigr].
\end{aligned}$$

It is a Hopfield network with associative memory and data storage capability [38]. To measure the information transmitted among neurons, i.e., inputs and outputs, sampling at discrete time instants is necessary. Frequently, periodic sampling is adopted. So, under the proposed discretization scheme, the discrete version of the neural model follows.

Now take

$$\begin{aligned}
&a_1(n)=9+\cos(2n\pi h),\qquad b_{11}(n)=4-\sin(2n\pi h),\qquad b_{12}(n)=3+\cos(2n\pi h),\\
&a_2(n)=12-\sin(2n\pi h),\qquad b_{21}(n)=2-\cos(2n\pi h),\qquad b_{22}(n)=3-\sin(2n\pi h),\\
&I_1(n)=-\cos(2n\pi h),\qquad I_2(n)=-\sin(2n\pi h),\qquad k(n)=1,\\
&f_1(u)=\tanh u,\qquad f_2(u)=\arctan u,\qquad h=\tfrac{1}{15}.
\end{aligned}$$

The activation functions $f_1$, $f_2$ are chosen to be the hyperbolic tangent and the inverse tangent, respectively, which are symmetric sigmoid-type functions. They are frequently used in neural networks and work very well in applications. The coefficients $a_i(n)$, $b_{ij}(n)$ ($i,j=1,2$) are chosen to be periodic functions; here we take trigonometric functions. It is not difficult to see that (H1) and (H2) are satisfied by this discrete-time neural network. The parameters are as follows:

$$a_1^-=8,\qquad b_{11}^+=5,\qquad b_{12}^+=4,\qquad a_2^-=11,\qquad b_{21}^+=3,\qquad b_{22}^+=4,\qquad F_1=F_2=1,\qquad \kappa>0$$

and

$$\theta_1(h)=\frac{1-e^{-(9+\cos(2n\pi h))h}}{9+\cos(2n\pi h)},\qquad \theta_2(h)=\frac{1-e^{-(12-\sin(2n\pi h))h}}{12-\sin(2n\pi h)}.$$

When $\lambda_1$, $\lambda_2$ are set to be 1 and $\tfrac12$, respectively, (H3) is also true, so from Theorems 3.1 and 4.2 the network admits a unique periodic solution, with all other solutions converging to it exponentially as $n\to\infty$ (see Figures 1-3).
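Indeed, these checks amount to simple arithmetic (our own verification of the stated hypotheses): since the means of the trigonometric terms over one period $\omega=15$ vanish, $\bar a_1=9$, $\bar a_2=12$, $|\bar b_{11}|=4$, $|\bar b_{12}|=3$, $|\bar b_{21}|=2$, $|\bar b_{22}|=3$, so

$$\kappa=\min\bigl\{9-1\cdot(4+2),\ 12-1\cdot(3+3)\bigr\}=3>0,$$

and, with $\lambda_1=1$, $\lambda_2=\tfrac12$,

$$\lambda_1 a_1^- -\bigl(\lambda_1 b_{11}^+F_1+\lambda_2 b_{12}^+F_2\bigr)=8-(5+2)=1>0,\qquad \lambda_2 a_2^- -\bigl(\lambda_1 b_{21}^+F_1+\lambda_2 b_{22}^+F_2\bigr)=\tfrac{11}{2}-(3+2)=\tfrac12>0,$$

so (H3) holds.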

Figure 1. The trajectory of $x_1(n)$ versus n.

Figure 2. The trajectory of $x_2(n)$ versus n.

Figure 3. Existence and stability of the unique periodic solution.

From Figures 1-3, note that all the solutions tend to the unique periodic solution. So, the unique periodic solution exists and is exponentially stable. The trajectories of the continuous model are shown in Figure 4; note that it also admits a unique periodic solution. Thus, the discrete analogue preserves the dynamics of the corresponding continuous model well.

Figure 4. Trajectories of the continuous model.
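The discrete trajectories in Figures 1-3 can be reproduced, up to plotting details not specified in the paper, by iterating the two-neuron system with the data above. The Python sketch below is our own reconstruction of such a simulation; the initial histories are arbitrary illustrative choices.

```python
import numpy as np

h, k = 1 / 15, 1          # step size and delay; the period is omega = 15
a = lambda n: np.array([9 + np.cos(2 * n * np.pi * h), 12 - np.sin(2 * n * np.pi * h)])
b = lambda n: np.array([[4 - np.sin(2 * n * np.pi * h), 3 + np.cos(2 * n * np.pi * h)],
                        [2 - np.cos(2 * n * np.pi * h), 3 - np.sin(2 * n * np.pi * h)]])
I = lambda n: np.array([-np.cos(2 * n * np.pi * h), -np.sin(2 * n * np.pi * h)])
f = lambda x: np.array([np.tanh(x[0]), np.arctan(x[1])])

def simulate(x_init, n_steps):
    """Iterate the example network; x_init holds the initial states x(-1), x(0)."""
    hist = [np.asarray(x, dtype=float) for x in x_init]
    for n in range(n_steps):
        a_n = a(n)
        theta = (1 - np.exp(-a_n * h)) / a_n
        x_n, x_delayed = hist[-1], hist[-1 - k]       # k(n) = 1
        hist.append(np.exp(-a_n * h) * x_n + theta * (b(n) @ f(x_delayed) + I(n)))
    return np.array(hist)

# several initial histories; all trajectories approach the same 15-periodic orbit
for x0 in ([1.0, -1.0], [0.5, 2.0], [-2.0, 0.3]):
    traj = simulate([np.array(x0)] * (k + 1), n_steps=300)
```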

6 Conclusions

In the current paper, a class of discrete-time neural networks has been studied. Using coincidence degree theory, a Lyapunov functional and matrix analysis, the existence and global exponential stability of a periodic solution have been established for this model without assuming boundedness, monotonicity or differentiability of the activation functions. The obtained results are less conservative and will be of practical use in applying neural models. Also note that the discrete-time analogue preserves the dynamical behaviors of the continuous version well.

References

1. Hopfield J: Neurons with graded response have collective computational properties like those of two state neurons. Proc. Natl. Acad. Sci. USA 1984, 81: 3088-3092. 10.1073/pnas.81.10.3088
2. Cohen MA, Grossberg S: Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst. Man Cybern. 1983, 13: 815-821.
3. Chua L, Yang L: Cellular neural networks: theory and applications. IEEE Trans. Circuits Syst. 1988, 35: 1257-1290. 10.1109/31.7600
4. Kosko B: Neural Networks and Fuzzy Systems - A Dynamical Systems Approach to Machine Intelligence. Prentice Hall, Englewood Cliffs; 1992.
5. Hjelmfelt A, Ross J: Pattern recognition, chaos and multiplicity in neural networks and excitable systems. Proc. Natl. Acad. Sci. USA 1994, 91: 63-67. 10.1073/pnas.91.1.63
6. Kennedy M, Chua L: Neural networks for nonlinear programming. IEEE Trans. Circuits Syst. 1988, 35: 554-562.
7. Cochocki A, Unbehauen R: Neural Networks for Optimization and Signal Processing. Wiley, Stuttgart; 1993.
8. Rolls E, Treves A: Neural Networks and Brain Function. Oxford University Press, Oxford; 1998.
9. Marcus CM, Westervelt RM: Stability of analog neural networks with delay. Phys. Rev. A 1989, 39(1): 347-359. 10.1103/PhysRevA.39.347
10. Gopalsamy K, He X: Delay-independent stability in bidirectional associative memory networks. IEEE Trans. Neural Netw. 1994, 5: 998-1002. 10.1109/72.329700
11. Driessche PV, Zou XF: Global attractivity in delayed Hopfield neural networks. SIAM J. Appl. Math. 1998, 58: 1878-1890. 10.1137/S0036139997321219
12. Rao V, Phaneendra BR: Global dynamics of bidirectional associative memory neural networks involving transmission delays and dead zones. Neural Netw. 1999, 12(3): 455-465. 10.1016/S0893-6080(98)00134-8
13. Mohamad S, Gopalsamy K: Dynamics of a class of discrete-time neural networks and their continuous-time counterparts. Math. Comput. Simul. 2000, 53: 1-39. 10.1016/S0378-4754(00)00168-3
14. Liu Y, Wang Z, Liu X: Asymptotic stability for neural networks with mixed time-delays: the discrete-time case. Neural Netw. 2009, 22(1): 67-74. 10.1016/j.neunet.2008.10.001
15. Ensari T, Arik S: New results for robust stability of dynamical neural networks with discrete time delays. Expert Syst. Appl. 2010, 37(8): 5925-5930. 10.1016/j.eswa.2010.02.013
16. Yu J, Zhang K, Fei S: Exponential stability criteria for discrete-time recurrent neural networks with time-varying delay. Nonlinear Anal., Real World Appl. 2010, 11(1): 207-216. 10.1016/j.nonrwa.2008.10.053
17. Alanis AY, Sanchez EN, Loukianov AG, Hernandez EA: Discrete-time recurrent high order neural networks for nonlinear identification. J. Franklin Inst. 2010, 347(7): 1253-1265. 10.1016/j.jfranklin.2010.05.018
18. Liu Z, Liao L: Existence and global exponential stability of periodic solution of cellular neural networks with time-varying delays. J. Math. Anal. Appl. 2004, 290: 247-262. 10.1016/j.jmaa.2003.09.052
19. Yucel E, Arik S: New exponential stability results for delayed neural networks with time varying delays. Physica D 2004, 191(3-4): 314-322. 10.1016/j.physd.2003.11.010
20. Faydasicok O, Arik S: Robust stability analysis of a class of neural networks with discrete time delays. Neural Netw. 2012, 29-30: 52-59.
21. Cheng C, Lin K, Shih C: Multistability and convergence in delayed neural networks. Physica D 2007, 225: 61-74. 10.1016/j.physd.2006.10.003
22. Niculescu SI: Delay Effects on Stability: A Robust Control Approach. Springer, Berlin; 2001.
23. Stuart A, Humphries A: Dynamical Systems and Numerical Analysis. Cambridge University Press, Cambridge; 1996.
24. Sun C, Feng C: Discrete-time analogues of integrodifferential equations modelling neural networks. Phys. Lett. A 2005, 334: 180-191. 10.1016/j.physleta.2004.10.082
25. Mohamad S: Global exponential stability in continuous-time and discrete-time delayed bidirectional neural networks. Physica D 2001, 159: 233-251. 10.1016/S0167-2789(01)00344-X
26. Liang J, Cao JD, Ho D: Discrete-time bidirectional associative memory neural networks with variable delays. Phys. Lett. A 2005, 335: 226-234. 10.1016/j.physleta.2004.12.026
27. Liu X, Tang ML, Martin R, Liu BX: Discrete-time BAM neural networks with variable delays. Phys. Lett. A 2007, 367: 322-330. 10.1016/j.physleta.2007.03.037
28. Zhao H, Sun L, Wang G: Periodic oscillation of discrete-time bidirectional associative memory neural networks. Neurocomputing 2007, 70: 2924-2930. 10.1016/j.neucom.2006.11.010
29. Patan L: Local stability conditions for discrete-time cascade locally recurrent neural networks. Int. J. Appl. Math. Comput. Sci. 2010, 20(1): 23-34.
30. Gaines RE, Mawhin JL: Coincidence Degree and Nonlinear Differential Equations. Springer, Berlin; 1977.
31. Berman A, Plemmons RJ: Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York; 1979.
32. Chen HC, Hung YC, Chen CK, Liao TL, Chen CK: Image-processing algorithms realized by discrete-time neural networks and their circuit implementations. Chaos Solitons Fractals 2006, 29(5): 1100-1108. 10.1016/j.chaos.2005.08.067
33. Wang JH: An improved discrete Hopfield neural network for Max-Cut problems. Neurocomputing 2006, 69(13-15): 1665-1669. 10.1016/j.neucom.2006.02.001
34. Wang JH, Tang Z, Xu XS, Li Y: A discrete competitive Hopfield neural network for cellular channel assignment problems. Neurocomputing 2005, 67: 436-442.
35. Yashtini M, Malek A: A discrete-time neural network for solving nonlinear convex problems with hybrid constraints. Appl. Math. Comput. 2008, 195(2): 576-584. 10.1016/j.amc.2007.05.034
36. Brucoli M, Carnimeo L, Grassi G: Discrete-time cellular neural networks for associative memories with learning and forgetting capabilities. IEEE Trans. Circuits Syst. I 1995, 42(7): 396-399. 10.1109/81.401156
37. Michel AN, Si J, Yen G: Analysis and synthesis of a class of discrete-time neural networks described on hypercubes. IEEE Trans. Neural Netw. 1991, 2(1): 32-46. 10.1109/72.80289
38. McEliece RJ, Posner EC, Rodemich ER, Venkatesh SS: The capacity of the Hopfield associative memory. IEEE Trans. Inf. Theory 1987, 33: 461-482.


Acknowledgements

This study is supported by the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20093401120001), the Natural Science Foundation of Anhui Province (No. 11040606M12), the Natural Science Foundation of Anhui Education Bureau (No. KJ2010A035) and the 211 Project of Anhui University (No. KJJQ1102).

Author information


Corresponding author

Correspondence to Ranchao Wu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

RW directed the study, helped with the inspection, established the models, obtained the results of this article and drafted the manuscript. HX performed the numerical simulation. All the authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Xu, H., Wu, R. Periodicity and exponential stability of discrete-time neural networks with variable coefficients and delays. Adv Differ Equ 2013, 226 (2013). https://doi.org/10.1186/1687-1847-2013-226
