Global exponential stability of delayed inertial competitive neural networks

In this paper, the global exponential stability of a class of delayed inertial competitive neural networks is studied. By applying the inequality technique and a non-reduced-order approach, some novel and useful criteria for the global exponential stability of the addressed network model are established. Moreover, a numerical example is presented to show the feasibility and effectiveness of the theoretical results.


Introduction
Competitive neural networks (CNNs), first proposed by Meyer-Bäse et al. in [1], involve two types of memory: long-term and short-term. Viewed mathematically, such a network model, described by ODEs, contains two classes of state variables: the short-term memory (STM), which depicts fast neuronal activity, and the long-term memory (LTM), which depicts the slow, unsupervised synaptic modifications. Since such a network model possesses a two-layer structure, extensive fundamental results have been reported on its dynamic behaviors. For example, Lu and He in [2] gave some sufficient conditions for the global exponential stability of delayed competitive neural networks with different time scales. Nie and Cao [3] and Duan and Huang [4] studied the dynamics of equilibria for two different kinds of competitive neural networks with time-varying delays and discontinuous activations, respectively. Nie et al. [5,6] and Xu et al. [7] investigated the multistability of CNNs by using the fixed point theorem and the contraction mapping theorem, respectively. Pratap [8] and Yang [9] investigated the finite-time synchronization and adaptive lag synchronization problems of delayed CNNs. For other interesting theoretical results on CNNs, one may refer to [10][11][12][13][14][15][16] and the references therein.
On the other hand, for evident engineering and biological reasons, it is often useful to introduce an inertial term into a neural system. For instance, compared with electronic neural networks of the standard resistor-capacitor variety, Babcock and Westervelt showed that the dynamics can become complex when the neuron couplings are of an inertial nature [17].
Another evident biological reason for introducing an inertial term into the standard neural system is that the membrane of a hair cell in the semicircular canals of some animals, such as the pigeon, can be implemented by equivalent circuits containing an inductance [18]. Therefore, various dynamical behaviors of neural networks with inertial terms have been studied by many authors; see, e.g., [19][20][21][22][23][24][25][26] and the references therein. However, the methods employed in the aforementioned works all rely on a reduced-order approach: the second-order inertial neural network is transformed into a first-order system. This reduced-order method clearly expands the dimension of the system, which increases the difficulty of the theoretical analysis.
In this paper, abandoning the traditional reduced-order method and inspired by the recent works [27,28], we further study the exponential stability of delayed inertial CNNs. The main contributions of this paper lie in the following aspects.
(1) By introducing the non-reduced-order approach, a novel Lyapunov-Krasovskii functional is proposed and new criteria are established for the exponential stability of the considered model. (2) Different from existing works on the stability analysis of delayed CNNs, in which the reduced-order approach is employed, the presented non-reduced-order approach significantly reduces the computational complexity and improves some recently reported results.
This paper is organized as follows. In Sect. 2, the model description and preliminaries are presented. In Sect. 3, the exponential stability of the considered model is studied. In Sect. 4, a numerical example is provided. Finally, a conclusion is drawn in Sect. 5.

Model description and preliminaries
In this paper, we consider the following delayed inertial CNN model:

$$\text{STM:}\quad \ddot{x}_i(t) = -a_i \dot{x}_i(t) - b_i x_i(t) + \sum_{j=1}^{n} D_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} D_{ij}^{\tau} g_j(x_j(t-\tau(t))) + c_i \sum_{l=1}^{q} m_{il}(t) p_l + I_i,$$
$$\text{LTM:}\quad \dot{m}_{il}(t) = -m_{il}(t) + p_l f_i(x_i(t)), \tag{2.1}$$

for $i = 1, \ldots, n$, $l = 1, \ldots, q$, where $x_i(t)$ is the neuron current activity level, $a_i > 0$ and $b_i > 0$ are constants, $f_j(x_j(t))$ and $g_j(x_j(t-\tau(t)))$ are the activation functions, $m_{il}(t)$ is the synaptic efficiency, $p_l$ is the constant external stimulus, $D_{ij}$ represents the connection weight between the $i$th neuron and the $j$th neuron, $c_i$ is the strength of the external stimulus, $D_{ij}^{\tau}$ represents the synaptic weight of delayed feedback, and $I_i$ is the constant input. In system (2.1), the second derivatives are called inertial terms, and the time-varying delay $\tau(t)$ is a differentiable function satisfying $0 \le \tau(t) \le \tau$ and $\dot{\tau}(t) \le \mu < 1$ for some constants $\tau > 0$ and $\mu \ge 0$.

Setting $s_i(t) = \sum_{l=1}^{q} m_{il}(t) p_l = p^{T} m_i(t)$, where $p = (p_1, \ldots, p_q)^{T}$ and $m_i(t) = (m_{i1}(t), \ldots, m_{iq}(t))^{T}$, and summing up the LTM over $l$, the neural network (2.1) can be rewritten in the state-space form

$$\text{STM:}\quad \ddot{x}_i(t) = -a_i \dot{x}_i(t) - b_i x_i(t) + \sum_{j=1}^{n} D_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} D_{ij}^{\tau} g_j(x_j(t-\tau(t))) + c_i s_i(t) + I_i,$$
$$\text{LTM:}\quad \dot{s}_i(t) = -s_i(t) + |p|^2 f_i(x_i(t)), \tag{2.2}$$

where $|p|^2 = p_1^2 + \cdots + p_q^2$ is a constant. Without loss of generality, the input stimulus $p$ is assumed to be normalized with unit magnitude $|p|^2 = 1$; then the above network simplifies to

$$\text{STM:}\quad \ddot{x}_i(t) = -a_i \dot{x}_i(t) - b_i x_i(t) + \sum_{j=1}^{n} D_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} D_{ij}^{\tau} g_j(x_j(t-\tau(t))) + c_i s_i(t) + I_i,$$
$$\text{LTM:}\quad \dot{s}_i(t) = -s_i(t) + f_i(x_i(t)). \tag{2.3}$$

The initial conditions associated with (2.1) or (2.3) are of the form $x_i(\sigma) = \varphi_i(\sigma)$, $\dot{x}_i(\sigma) = \psi_i(\sigma)$, $s_i(\sigma) = \phi_i(\sigma)$ for $\sigma \in [-\tau, 0]$, where $\varphi_i$, $\psi_i$, $\phi_i$ are bounded continuous functions. Throughout this paper, the activation functions are assumed to satisfy the following assumption.
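To make the model concrete, the following minimal Python sketch integrates a hypothetical two-neuron instance of the simplified system (2.3) with a constant delay by the explicit Euler method. All parameter values (the damping coefficients $a_i$, $b_i$, the weights $D_{ij}$, $D_{ij}^{\tau}$, $c_i$, the inputs $I_i$) and the tanh activations are illustrative assumptions, not taken from the paper; a constant history is used on $[-\tau, 0]$.

```python
import numpy as np

# Hypothetical 2-neuron instance of the simplified inertial CNN (2.3);
# every numerical value below is illustrative, not from the paper.
n = 2
a = np.array([3.0, 3.0])                       # damping coefficients a_i
b = np.array([2.0, 2.0])                       # self-feedback strengths b_i
D = np.array([[0.2, -0.1], [0.1, 0.2]])        # connection weights D_ij
Dtau = np.array([[0.1, 0.05], [-0.05, 0.1]])   # delayed weights D^tau_ij
c = np.array([0.5, 0.5])                       # external-stimulus strengths c_i
I = np.array([0.1, -0.1])                      # constant inputs I_i
f = np.tanh                                    # activation satisfying (H1), L = 1

tau, dt, T = 0.5, 1e-3, 20.0                   # constant delay, step, horizon
steps, d = int(T / dt), int(tau / dt)

x = np.zeros((steps + 1, n))                   # STM state x(t)
v = np.zeros((steps + 1, n))                   # its derivative dx/dt
s = np.zeros((steps + 1, n))                   # aggregated LTM state s(t)
x[0] = [0.4, -0.3]
s[0] = [0.1, 0.0]

for k in range(steps):
    xd = x[max(k - d, 0)]                      # delayed state x(t - tau), constant history
    ddx = -a * v[k] - b * x[k] + D @ f(x[k]) + Dtau @ f(xd) + c * s[k] + I
    v[k + 1] = v[k] + dt * ddx                 # Euler step on the second-order STM
    x[k + 1] = x[k] + dt * v[k]
    s[k + 1] = s[k] + dt * (-s[k] + f(x[k]))   # LTM after normalization |p|^2 = 1
```

With these heavily damped parameters the trajectory settles to an equilibrium at which $s_i^* = f_i(x_i^*)$, as the LTM equation of (2.3) requires.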
(H1) For each $i = 1, 2, \ldots, n$, the activation functions $f_i$ and $g_i$ satisfy the Lipschitz condition; that is, there exist constants $L_i^{F}$ and $L_i^{G}$ such that for all $u, v \in \mathbb{R}$,

$$|f_i(u) - f_i(v)| \le L_i^{F} |u - v|, \qquad |g_i(u) - g_i(v)| \le L_i^{G} |u - v|.$$

Definition 2.1 Suppose $(x(t), s(t))^{T}$ and $(x^{*}(t), s^{*}(t))^{T}$ are two solutions of system (2.3) satisfying the initial conditions. The system is said to be globally exponentially stable if there exist two positive constants $\varepsilon$ and $M$, with $M$ depending on the initial values, such that

$$\sum_{i=1}^{n} |x_i(t) - x_i^{*}(t)| + \sum_{i=1}^{n} |s_i(t) - s_i^{*}(t)| \le M e^{-\varepsilon t}, \quad t \ge 0.$$
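As a quick sanity check on assumption (H1), the common choice $f(u) = \tanh(u)$ is globally Lipschitz with constant $L = 1$, since $\sup_u |\tanh'(u)| = 1$. The following sketch (illustrative, not part of the paper) verifies this numerically on random sample pairs:

```python
import numpy as np

# Numerical spot-check of (H1) for f(u) = tanh(u):
# |tanh(u) - tanh(v)| <= L |u - v| should hold with L = 1.
rng = np.random.default_rng(0)
u = rng.uniform(-10.0, 10.0, 100_000)
v = rng.uniform(-10.0, 10.0, 100_000)
ratios = np.abs(np.tanh(u) - np.tanh(v)) / np.abs(u - v)
L_est = ratios.max()   # empirical Lipschitz constant; approaches 1 near the origin
```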

Main result
In this section, we will derive some sufficient conditions which ensure the global exponential stability for system (2.3).

Theorem 3.1 Suppose that assumption (H1) and conditions (3.1) and (3.2) hold. Then system (2.3) is globally exponentially stable.

Proof In view of (3.1) and (3.2), combined with a continuity argument, there exists a sufficiently small $\varepsilon > 0$ such that the corresponding strict inequalities still hold.

Consider a Lyapunov-Krasovskii functional of the form $V(t) = V_1(t) + V_2(t)$.

First, let us compute the derivative of $V_1(t)$, which yields (3.7). It follows from (H1) and the fundamental inequality $uv \le \frac{1}{2}(u^2 + v^2)$ that the estimates (3.8) and (3.9) hold. Substituting (3.8) and (3.9) into (3.7) leads to (3.10).

Secondly, calculating the derivative of $V_2(t)$, we obtain (3.11). By virtue of (2.3) and a straightforward computation, one easily deduces an estimate which, together with (3.10) and (3.11), gives the desired differential inequality for $V(t)$.

Hence $V(t)$ is bounded on $[0, \infty)$, which implies that $|y_i(t)| = O(e^{-\varepsilon t})$ and $|z_i(t)| = O(e^{-\varepsilon t})$, where $O(e^{-\varepsilon t})$ means that $|y_i(t)|$ and $|z_i(t)|$ converge exponentially to 0 with the same order as $e^{-\varepsilon t}$. This completes the proof of Theorem 3.1.

Numerical simulations
In this section, a numerical example is presented to illustrate the feasibility and effectiveness of the proposed exponential stability results. The simulations are performed in MATLAB.
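As a stand-in for the MATLAB experiment, the following Python sketch simulates a hypothetical two-neuron instance of the simplified system (2.3) from two different initial conditions and fits the decay rate of the gap between the two trajectories; an exponentially decaying gap is exactly what Definition 2.1 predicts. All parameter values are illustrative assumptions, not the paper's example.

```python
import numpy as np

# Simulate system (2.3) twice (hypothetical parameters, constant history and
# constant delay) and estimate the exponential decay rate of the trajectory gap.
def simulate(x0, v0, s0, tau=0.5, dt=1e-3, T=15.0):
    a, b = np.array([3.0, 3.0]), np.array([2.0, 2.0])
    D = np.array([[0.2, -0.1], [0.1, 0.2]])
    Dt = np.array([[0.1, 0.05], [-0.05, 0.1]])
    c, I = np.array([0.5, 0.5]), np.array([0.1, -0.1])
    steps, d = int(T / dt), int(tau / dt)
    x = np.empty((steps + 1, 2)); v = np.empty((steps + 1, 2)); s = np.empty((steps + 1, 2))
    x[0], v[0], s[0] = x0, v0, s0
    for k in range(steps):
        xd = x[max(k - d, 0)]                  # delayed state, constant history
        ddx = -a * v[k] - b * x[k] + D @ np.tanh(x[k]) + Dt @ np.tanh(xd) + c * s[k] + I
        v[k + 1] = v[k] + dt * ddx
        x[k + 1] = x[k] + dt * v[k]
        s[k + 1] = s[k] + dt * (-s[k] + np.tanh(x[k]))
    return x

xa = simulate([0.8, -0.6], [0.0, 0.0], [0.2, 0.0])
xb = simulate([-0.5, 0.7], [0.3, -0.2], [-0.1, 0.1])
gap = np.abs(xa - xb).sum(axis=1)              # sum_i |x_i(t) - x_i*(t)|
t = np.linspace(0.0, 15.0, gap.size)
mask = (t > 2.0) & (gap > 1e-12)               # fit after the initial transient
eps_hat = -np.polyfit(t[mask], np.log(gap[mask]), 1)[0]   # fitted decay rate
```

A positive `eps_hat` confirms that the gap behaves like $M e^{-\varepsilon t}$ for this instance, in line with the theoretical estimate.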