Convergence analysis on inertial proportional delayed neural networks

This article explores a class of inertial proportional delayed neural networks. Dispensing with the reduced-order strategy, a novel approach combining the differential inequality technique with the Lyapunov function method is presented to show that every solution of the considered system, together with its derivative, converges to the zero vector, which refines some previously known results. Moreover, an example and its numerical simulations are given to demonstrate the effectiveness of the proposed approach.


Introduction
In neural network dynamics, inertial neural networks can be described by second-order differential equations, in which the inertial term serves as a convenient tool for generating bifurcation and chaos [1,2]. Consequently, dynamic analyses of inertial neural networks with constant delays have been extensively explored, and plentiful important results have been obtained in [3][4][5][6][7][8][9][10][11] and the references cited therein. It should be noted that all the research approaches employed in the above papers are based on the reduced-order strategy, which entails a large amount of computation and limits practical applicability. Therefore, the authors in [12,13] used a non-reduced-order strategy to explore synchronization and stability in inertial neural networks. As is well known, neural networks involving time-varying parameters arise in a wider range of practical problems [14][15][16]. In particular, assuming globally Lipschitz activation functions, the authors in [12,13] applied some novel Lyapunov functionals instead of the classical reduced-order strategy and established a set of new conditions to characterize dynamic behaviors such as synchronization and stability in non-autonomous inertial neural networks. Yet in some applications, activation functions that fail to satisfy Lipschitz conditions are inevitably encountered. However, there is little research on the convergence of non-autonomous inertial neural networks that does not impose global Lipschitz requirements on the activation functions.
Over the last few years, the dynamics of proportional delayed neural networks have attracted widespread attention because of their biological and physical applications (see [17][18][19][20][21][22]). In particular, the global convergence of proportional delayed neural networks without inertial terms has been widely studied in [23][24][25][26][27][28][29][30]. Unfortunately, to date, no published article has used the non-reduced-order strategy to establish a global convergence analysis for inertial proportional delayed neural networks without globally Lipschitz activation functions.
On account of the above considerations, our aim in this paper is to utilize the non-reduced-order strategy to deal with the global convergence of the following inertial proportional delayed neural networks:

$$z_i''(t) = -a_i(t)z_i'(t) - b_i(t)z_i(t) + \sum_{j=1}^{n} c_{ij}(t)F_j(z_j(t)) + \sum_{j=1}^{n} d_{ij}(t)G_j(z_j(q_{ij}t)) + J_i(t), \quad t \ge t_0 > 0, \tag{1.1}$$

involving continuous and bounded initial values, where $a_i, b_i, c_{ij}, d_{ij}, J_i : [t_0, +\infty) \to \mathbb{R}$ are continuous and bounded, and $i, j \in Q = \{1, 2, \ldots, n\}$; $z_i''(t)$ is called the inertial term of (1.1), $z(t) = (z_1(t), z_2(t), \ldots, z_n(t))$ is the state vector, each proportional delay factor $q_{ij}$ satisfies $0 < q_{ij} < 1$, $J_i(t)$ is the continuous external input, and the continuous activation functions $G_j$ and $F_j$ admit two nonnegative constants $L^F_j$ and $L^G_j$ satisfying

$$|F_j(u)| \le L^F_j|u|, \qquad |G_j(u)| \le L^G_j|u| \quad \text{for all } u \in \mathbb{R}.$$

The remainder of this paper is organized as follows. In Sect. 2 we apply Barbalat's lemma to establish the main result. The validity of our method is shown by an application example in Sect. 3. Finally, Sect. 4 concludes the paper with a discussion.
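For concreteness, the right-hand side of (1.1) can be sketched in code. The activations F(u) = u·sin(u) and G(u) = u·cos(u) below are purely illustrative assumptions: they satisfy the linear growth bound |F(u)| ≤ |u| while failing to be globally Lipschitz (the derivative of u·sin(u) is unbounded), and they are not the activations of any example in this paper.

```python
import numpy as np

# Illustrative activations: linear growth bound holds with L = 1,
# but neither function is globally Lipschitz (assumed stand-ins,
# not the paper's F_j, G_j).
def F(u):
    return u * np.sin(u)

def G(u):
    return u * np.cos(u)

def accel(t, z, dz, z_delayed, a, b, c, d, J):
    """z''(t) for system (1.1):
    z_i'' = -a_i(t) z_i'(t) - b_i(t) z_i(t)
            + sum_j c_ij(t) F_j(z_j(t))
            + sum_j d_ij(t) G_j(z_j(q_ij * t)) + J_i(t),
    where z_delayed[i, j] approximates z_j(q_ij * t)."""
    own = c(t) @ F(z)                          # sum_j c_ij(t) F_j(z_j(t))
    lag = (d(t) * G(z_delayed)).sum(axis=1)    # sum_j d_ij(t) G_j(z_j(q_ij t))
    return -a(t) * dz - b(t) * z + own + lag + J(t)
```

The per-pair delays q_ij require a full matrix of delayed states; when all q_ij share a common value q, `z_delayed` collapses to a single vector z(qt).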

From (1.3), together with (T3), (2.2), and (2.3), one obtains $U'(t) \le 0$ for all $t \ge t_0$. Consequently, $U(t) \le U(t_0)$ holds on $[t_0, +\infty)$, and one can obtain the uniform boundedness of $z_i(t)$ and $z_i'(t)$ on $[t_0, +\infty)$, where $i \in Q$. This, combined with (1.1), entails the uniform boundedness of $z_i''(t)$ for all $t \in [t_0, +\infty)$ and $i \in Q$. Furthermore, (2.4) shows that $z_i(t)$ and $z_i'(t)$ are square-integrable on $[t_0, +\infty)$. This and Lemma 2.1 suggest that $\lim_{t \to +\infty} z_i(t) = \lim_{t \to +\infty} z_i'(t) = 0$ for all $i \in Q$, which finishes the proof of Theorem 2.1.
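The monotone-energy mechanism behind this argument can be checked numerically on a toy problem (the paper's actual functional U(t) is not reproduced here): for the scalar damped equation z'' = -a z' - b z with a, b > 0, the quadratic E(t) = z'(t)²/2 + b z(t)²/2 satisfies E'(t) = -a z'(t)² ≤ 0, so E is nonincreasing and z, z' remain bounded and tend to zero.

```python
# Generic sketch of the energy argument (parameter values are
# illustrative assumptions). Explicit Euler on z'' = -a z' - b z;
# E(t) = z'^2/2 + b z^2/2 should never rise above its initial value
# and should be nearly zero at the end of the run.
a_, b_, h = 2.0, 3.0, 1e-4
z, dz = 1.0, 0.5
E0 = dz**2 / 2 + b_ * z**2 / 2
Emax = E0
for _ in range(100_000):                     # integrate on [0, 10]
    z, dz = z + h * dz, dz + h * (-a_ * dz - b_ * z)
    Emax = max(Emax, dz**2 / 2 + b_ * z**2 / 2)
E_end = dz**2 / 2 + b_ * z**2 / 2
```

The strict damping (a > 0) makes E decay fast enough that the O(h) discretization error of Euler's method never produces a visible energy increase here.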
Remark 2.1 Most recently, assuming globally Lipschitz activation functions, the authors in [12,13] applied the non-reduced-order strategy to reveal the convergence of the state vectors of inertial neural networks with constant delays. However, the authors in [12,13] did not address the convergence of inertial proportional delayed neural networks without globally Lipschitz activation functions. In the present paper, without assuming globally Lipschitz activation functions, the convergence of all solutions and their derivatives in inertial proportional delayed neural networks is established. Therefore, compared with the methods in references [12] and [13], our method requires fewer conditions and admits a simpler proof.

An illustrative numerical example
In this section, an example is given to graphically illustrate the analytical results obtained in the previous sections.
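Since the concrete example (3.1) is not reproduced here, the following sketch simulates a hypothetical two-neuron instance of (1.1) with a common proportional delay factor q and illustrative constant coefficients (all parameter values below are assumptions, not the paper's example); solutions and their derivatives are driven to zero, as Theorem 2.1 predicts.

```python
import numpy as np

# Hypothetical two-neuron instance of (1.1); parameters are illustrative.
n, q, t0, T, h = 2, 0.5, 1.0, 50.0, 0.001
a = lambda t: np.array([5.0, 5.0])      # damping a_i(t)
b = lambda t: np.array([6.0, 6.0])      # restoring b_i(t)
c = lambda t: 0.3 * np.ones((n, n))     # c_ij(t)
d = lambda t: 0.2 * np.ones((n, n))     # d_ij(t)
J = lambda t: np.zeros(n)               # vanishing external input
F = lambda u: u * np.sin(u)             # |F(u)| <= |u|, not globally Lipschitz
G = lambda u: u * np.cos(u)

# Time grid starts at q*t0 so every delayed argument q*t (t >= t0)
# falls on the stored history.
ts = np.arange(q * t0, T + h, h)
Z = np.zeros((len(ts), n))
DZ = np.zeros((len(ts), n))
i0 = int(round((t0 - ts[0]) / h))       # grid index of t0
Z[: i0 + 1] = np.array([1.0, -0.8])     # constant history on [q*t0, t0]

for k in range(i0, len(ts) - 1):        # explicit Euler from t0 to T
    t = ts[k]
    kd = int(round((q * t - ts[0]) / h))   # index of the delayed state z(q*t)
    acc = (-a(t) * DZ[k] - b(t) * Z[k]
           + c(t) @ F(Z[k]) + d(t) @ G(Z[kd]) + J(t))
    Z[k + 1] = Z[k] + h * DZ[k]
    DZ[k + 1] = DZ[k] + h * acc
```

With these coefficients the restoring terms dominate the coupling gains, so both `Z` and `DZ` decay toward the zero vector, mirroring the convergence asserted by Theorem 2.1.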
Remark 3.1 As far as the authors know, no one has used the non-reduced-order strategy to study the global convergence of inertial proportional delayed neural networks without globally Lipschitz activation functions. Moreover, the results in the works cited above have not touched on the global convergence of inertial proportional delayed neural networks. It should be noted that the global Lipschitz assumption on the activation functions is not applicable to system (3.1), and one can easily verify that none of the results in [3][4][5][6][7][8][9][10][11][12][13] can be directly used to establish the global convergence of (3.1).

Conclusions
In this paper, the global convergence of a class of inertial proportional delayed neural networks without globally Lipschitz activation functions has been explored without resorting to the reduced-order strategy. Some sufficient conditions have been obtained by using a novel Lyapunov function and a differential inequality. It is worth noting that the conditions adopted in this manuscript are easy to check with simple inequality techniques, which provides a possible approach for investigating the dynamic behavior of other delayed neural networks with inertial terms and without globally Lipschitz activation functions.