Global exponential asymptotic stability of RNNs with mixed asynchronous time-varying delays

The present article addresses the exponential stability of recurrent neural networks (RNNs) with distributed and discrete asynchronous time-varying delays. Some novel algebraic conditions are obtained to ensure that the model has a unique equilibrium point and that this point is globally exponentially asymptotically stable. We also reveal how the equilibrium point differs between systems with and without distributed asynchronous delays. A numerical example with MATLAB simulations is given to illustrate the correctness of the results.


Introduction
In the last few decades, a number of successful applications of RNNs have been witnessed in many areas, including associative memory, prediction and optimal control, and pattern recognition [1][2][3][4][5][6][7][8]. In hardware implementations, time delay is inevitably introduced into the transmission process among neurons on account of the limited propagation speed and the limited switching speed of amplifiers [9][10][11][12][13]. In addition, because of the large number of parallel pathways with axons of different sizes and lengths, there may exist a distribution of conduction velocities and propagation delays along these paths. In such cases, signal propagation cannot be modeled with discrete delays alone, since it is not instantaneous; it is therefore more suitable to incorporate continuously distributed delays into the neural network model. Moreover, these delays can sometimes produce desirable behavior, such as in processing moving images transmitted between neurons, or in exhibiting chaotic phenomena applicable to secure communication. It is therefore quite necessary to study the dynamical behavior of neural networks with mixed distributed and discrete delays, and there is already a substantial literature on mixed constant delays [14][15][16][17][18][19] and time-varying delays [19][20][21][22][23][24].
Recently, Liu et al. [25] introduced asynchronous delays and investigated the exponential stability of complex-valued recurrent neural networks with discrete asynchronous delays. Afterwards, Li et al. [26] studied stability preservation in the discrete analogue of an impulsive Cohen-Grossberg neural network with discrete asynchronous delays. In practice, however, time delays are not only discrete asynchronous, but also distributed asynchronous, or even mixed asynchronous. For example, a driver experiences more than one kind of delay: the eyes, hands, and feet all respond to an operation with different delays, and since these delays differ from person to person, they must be coordinated by the brain's central nervous system. The stability analysis of neural networks with distributed and discrete asynchronous delays is therefore a challenging problem worth investigating.
Motivated by the challenge above, we investigate the exponential stability of RNNs with mixed asynchronous time-varying delays. The main contribution is a set of novel sufficient conditions under which the system has a unique equilibrium point that is globally exponentially asymptotically stable. The remainder of this article is organized as follows. Section 2 presents the RNN model together with some reasonable assumptions. The main results are stated and proved in Section 3. Corollaries and comparisons with the existing literature are given in Section 4. Section 5 provides a numerical example with clear simulations to illustrate the effectiveness of the main results. Finally, the conclusion is drawn.

Model description
In the present article, we investigate a class of RNNs of n (n ≥ 2) interconnected neurons as follows: where x_i(t) is the state of the ith neuron at time t; a_i is a positive constant; f_j(·) is the activation function of the jth neuron, assumed to be a globally Lipschitz continuous and differentiable nonlinear function such that b_ij, c_ij, and d_ij are the connection weights associated with the neurons without delays, with discrete delays, and with distributed delays, respectively; τ_ij(t) is the discrete asynchronous time-varying transmission delay along the axon from unit j to unit i at time t, such that h_ij(t) is the distributed asynchronous time-varying transmission delay along the axon from unit j to unit i at time t, and satisfies u_i is a constant representing the external input of the ith neuron.
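For concreteness, a standard mixed-delay RNN built from exactly the terms listed above takes the following form; this is a sketch consistent with the description, and not necessarily identical to the paper's own display (1):

```latex
\dot{x}_i(t) = -a_i x_i(t)
  + \sum_{j=1}^{n} b_{ij}\, f_j\bigl(x_j(t)\bigr)
  + \sum_{j=1}^{n} c_{ij}\, f_j\bigl(x_j(t-\tau_{ij}(t))\bigr)
  + \sum_{j=1}^{n} d_{ij} \int_{t-h_{ij}(t)}^{t} f_j\bigl(x_j(s)\bigr)\,\mathrm{d}s
  + u_i, \qquad i = 1, 2, \ldots, n.
```

Here the three sums correspond to the instantaneous, discretely delayed, and distributively delayed interactions, with the delays τ_ij(t) and h_ij(t) allowed to differ across the index pair (i, j), which is what makes them asynchronous.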
For model (1), the initial conditions are assumed to be where ϕ_i(s) is a real-valued continuous function, and

Remark 1 The delays τ_ij(t) and h_ij(t) above convey different information between different nodes at time t, which means that the time-varying delays in system (1) are asynchronous. Therefore, model (1) is more general than the models in Refs. [22, 26].
Assume that x* is an equilibrium point of model (1), and let x*_i be its ith component. Then Eq. (1) becomes Following Ref. [20], we define the global exponential asymptotic stability of x* as follows.

Definition 1
The equilibrium point x* of model (1) is said to be globally exponentially asymptotically stable if there exist M ≥ 1 and γ > 0 such that each solution of Eq. (1) satisfies where ϕ_i(s) is the continuous initial function and Ω is a set of real numbers.
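The inequality in this definition is standard for delayed systems; with the constants M and γ above, it would typically read as follows (a sketch in the stated notation, not the paper's exact display):

```latex
\max_{1 \le i \le n} \bigl| x_i(t) - x_i^{*} \bigr|
  \le M e^{-\gamma t} \sup_{s \in \Omega} \max_{1 \le j \le n}
  \bigl| \phi_j(s) - x_j^{*} \bigr|, \qquad t \ge 0.
```

That is, every solution approaches x* exponentially fast, with an overshoot factor M that depends on the initial function but a decay rate γ that does not.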

Main results and proofs
In this section, we show that the neural network (1) has a unique equilibrium point x*, and that x* is globally exponentially asymptotically stable.

Theorem 1 Suppose that (2), (3), (5), and (10) hold; then the equilibrium point x* exists and is unique in system (1).
Proof Since a_i > 0, (8) can be rewritten as Let Then x*_i is a fixed point of the mapping g_i(x) by Eq. (11). Hence the equilibrium points of Eq. (1) are determined by the fixed points of the functions g_1(x), g_2(x), . . . , g_n(x) within a specific range. Let x(t) be the vector (x_1(t), x_2(t), . . . , x_n(t))^T, and let Φ be a hypercube defined by By hypotheses (3) and (5), we get Let g(x) be the vector function (g_1(x), g_2(x), . . . , g_n(x))^T. From the continuity of f_i, g(x) is a continuous mapping from the set Φ into Φ. By Brouwer's fixed point theorem, there is at least one x* ∈ Φ such that It follows that Eq. (1) has at least one equilibrium point. Next, we show the uniqueness of the equilibrium point of Eq. (1). Let y* = (y*_1, y*_2, . . . , y*_n)^T also be an equilibrium point of model (1). From (2), (3), and (8), we obtain Summing over all the neurons satisfying inequality (14), we get It follows that By condition (10), we get implying that the equilibrium of model (1) is unique.

Theorem 2 Suppose that (2)-(5), and (10) hold, and that there exist β ≥ 1 and q > 0 such that If the equilibrium point x* and each solution of Eq. (1) with the initial conditions (6) satisfy then x* is globally exponentially asymptotically stable.
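Condition (10) is not reproduced above; algebraic uniqueness conditions of this kind typically require each self-decay rate a_i to dominate the Lipschitz-weighted absolute row sums of the connection matrices. The following sketch checks such a dominance condition numerically; the function name, the exact form of the inequality, and the role of h_bar (an upper bound on the distributed delays) are illustrative assumptions, not the paper's condition (10):

```python
import numpy as np

def dominance_condition(a, B, C, D, L, h_bar):
    """Check a row-dominance condition of the type used for uniqueness of
    equilibria in delayed RNNs: a_i > sum_j L_j (|b_ij| + |c_ij| + h_bar |d_ij|).
    Illustrative stand-in for the paper's condition (10), not its exact form."""
    row_sums = (np.abs(B) + np.abs(C) + h_bar * np.abs(D)) @ np.asarray(L)
    return bool(np.all(np.asarray(a) > row_sums))

# A 2-neuron example: strong self-decay satisfies the condition,
# weak self-decay violates it.
B = np.array([[1.0, 0.2], [0.3, 1.0]])   # instantaneous weights b_ij
C = 0.5 * np.eye(2)                      # discrete-delay weights c_ij
D = 0.2 * np.eye(2)                      # distributed-delay weights d_ij
L = np.array([1.0, 1.0])                 # Lipschitz constants of f_j (e.g. tanh)
print(dominance_condition(np.array([5.0, 5.0]), B, C, D, L, h_bar=1.0))  # True
print(dominance_condition(np.array([0.5, 0.5]), B, C, D, L, h_bar=1.0))  # False
```

The row sums here are 1.9 and 2.0, so decay rates of 5 satisfy the dominance inequality while decay rates of 0.5 do not.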
Proof By Theorem 1, model (1) has a unique equilibrium point under assumptions (2), (3), (5), and (10); denote it by x*. Then from Eq. (1) we have Assume that Then the derivative along (19) is Since we can substitute (21) into (20) to get Consider a Lyapunov function V(t) = V(y_1, y_2, . . . , y_n)(t) defined by Taking the derivative of V(t) along the trajectories of (19), we get Let F_i(q_i) be an auxiliary continuous function associated with index i, defined by where q_i is a positive real number and i is a positive integer not exceeding n. In view of hypothesis (10), one has From the continuity of F_i, there exists q*_i ∈ (0, +∞) such that Without loss of generality, let q = max{q*_1, q*_2, . . . , q*_n}. Then Therefore, by (24) and (27), the derivative of V(t) is negative for t ∈ [0, +∞). From the definition of V(t) and assumption (4), we obtain where Combining (17), (28), and (29), one can derive the inequality (18); thus the equilibrium point x* of Eq. (1) is globally exponentially asymptotically stable by Definition 1.
Remark 2 The constant β ≥ 1, which plays a significant role in the rate of convergence of model (1), depends on the distributed delay h_j and the discrete delay τ_j for j = 1, 2, . . . , n. If either the discrete delay τ_j or the distributed delay h_j in (17) is sufficiently large, that is, if the discrete asynchronous delays τ_ij(t) of (4) and the distributed asynchronous delays h_ij(t) of (5) are sufficiently large, then β becomes large, and the convergence time towards the equilibrium point becomes longer. Therefore, the convergence time of model (1) can be shortened only if the two delays are reduced appropriately during operation coordination.

Corollaries and comparisons
From Theorem 1 and Theorem 2, we obtain the following corollaries. We also compare the conclusions of this paper with the existing literature.
When h_ij(t) = 0 for i, j ∈ {1, 2, . . . , n}, Eq. (1) reduces to the following neural network: with initial conditions where i is a positive integer not exceeding n.

Corollary 1
Assume that (2) and (3) hold and that then the equilibrium point x* exists and is unique in system (30).

Corollary 2
Suppose that (2)-(4) and (32) hold. If there exist two constants β ≥ 1 and q > 0 such that and the equilibrium point x* and each solution of Eq. (30) with the initial conditions (31) satisfy then x* is globally exponentially asymptotically stable.
Remark 3 By Ref. [26], the equilibrium point of model (30) with discrete asynchronous time-varying delays is the same as that of the model without delays. In contrast, when h_ij(t) ≠ 0, Theorem 1 shows that the equilibrium point of model (1) is affected by h_ij(t), t > 0.

Corollary 4 Suppose that
and the equilibrium point x* and each solution of Eq. (35) with the initial conditions (36) satisfy then x* is globally exponentially asymptotically stable.
When τ_ij(t) = τ(t) and h_ij(t) = h(t) for i, j ∈ {1, 2, . . . , n}, let sup_{t≥0} τ(t) ≤ τ, and Eq. (1) becomes with initial conditions where ϕ_i(s) is a real-valued continuous function, and then x* is globally exponentially asymptotically stable.

Remark 4 System (40) is an RNN with distributed and discrete delays, as studied in the literature [22]. In this paper, we show that the equilibrium point x* of Eq. (40) is globally exponentially asymptotically stable via the Lyapunov function method, which differs from the method used in [22].
Numerical example
After comparison and simple calculation, we know that Eq. (46) is a two-dimensional RNN with discrete and distributed asynchronous time-varying delays, and that it satisfies all the assumptions of Theorem 1 and Theorem 2. Figure 1 illustrates the state trajectories of x_1 and x_2 of model (46), showing that model (46) has a unique, globally exponentially asymptotically stable equilibrium point.
Removing the distributed delay, model (46) becomes a two-dimensional RNN with discrete asynchronous time-varying delays. Figure 3 illustrates its state trajectories, marked as x1-discrete and x2-discrete, respectively. Figure 3 also shows the state trajectories of x_1 and x_2 of model (46) without delays. From Fig. 3, we see that Eq. (46) without the distributed delay and Eq. (46) without any delays converge to the same equilibrium point. Removing the discrete delay instead, model (46) becomes a two-dimensional RNN with distributed asynchronous time-varying delays. Figure 4 illustrates its state trajectories, marked as x1-distributed and x2-distributed, respectively. Figure 4 also shows the state trajectories of x_1 and x_2 of model (46) without delays. From Fig. 4, we see that the trajectories of Eq. (46) with distributed asynchronous time-varying delays and those of Eq. (46) without delays differ, which implies that the dynamical behavior of model (46) is affected by the distributed asynchronous time-varying delay.
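The coefficients of example (46) are not reproduced above; as an illustration of the kind of simulation described, the following sketch integrates a hypothetical two-dimensional RNN with discrete asynchronous time-varying delays by the forward Euler method. All parameter values (a, B, C, u, and the delay functions τ_ij(t)) are assumptions chosen to satisfy a dominance condition a_i > Σ_j L_j(|b_ij| + |c_ij|) with L_j = 1 for tanh, not the values of (46):

```python
import numpy as np

def simulate(x0, T=50.0, dt=0.01):
    """Forward-Euler integration of a 2-D RNN with discrete asynchronous
    time-varying delays tau_ij(t); hypothetical stand-in for example (46)."""
    a = np.array([3.0, 3.0])                 # self-decay rates a_i
    B = np.array([[0.4, -0.3], [0.2, 0.5]])  # instantaneous weights b_ij
    C = np.array([[-0.3, 0.2], [0.4, -0.1]]) # delayed weights c_ij
    u = np.array([0.5, -0.2])                # external inputs u_i
    # asynchronous delays tau_ij(t) in [0.2, 0.8], different for each (i, j)
    tau = lambda t: 0.5 + 0.3 * np.sin(t + np.arange(4).reshape(2, 2))
    n_hist = int(0.8 / dt) + 1               # history long enough for max delay
    hist = np.tile(np.asarray(x0, float), (n_hist, 1))  # constant initial function
    x = np.asarray(x0, dtype=float)
    for k in range(int(T / dt)):
        t = k * dt
        d = np.clip((tau(t) / dt).astype(int), 0, n_hist - 1)
        xd = hist[-1 - d, np.arange(2)]      # xd[i, j] = x_j(t - tau_ij(t))
        dx = -a * x + B @ np.tanh(x) + (C * np.tanh(xd)).sum(axis=1) + u
        x = x + dt * dx
        hist = np.vstack([hist[1:], x])      # slide the history window
    return x

# Two different initial functions converge to the same equilibrium.
print(simulate([1.0, -1.0]))
print(simulate([-2.0, 2.0]))
```

Because the decay rates dominate the weighted row sums (1.2 < 3 per row), trajectories from distinct initial functions coincide at the end of the run, mirroring the unique-equilibrium behavior reported for model (46).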
From the simulations, we also see that the convergence time of neural networks with a large delay upper bound is longer than that of neural networks with a small delay upper bound.

Conclusion
In the present paper, we have discussed RNNs with mixed asynchronous time-varying delays. By the Lyapunov function method, some algebraic conditions are given under which the investigated model has a unique, globally exponentially asymptotically stable equilibrium point. We have also shown that the equilibrium point of the neural networks with distributed asynchronous time-varying delays differs from that of the networks without distributed delays. Finally, a numerical example and its simulation are given to demonstrate the effectiveness of our results. Future work may study the discrete-time analogue of the considered neural networks, as well as their dynamical characteristics under impulsive effects.