Novel extended dissipativity criteria for generalized neural networks with interval discrete and distributed time-varying delays

The problem of asymptotic stability and extended dissipativity analysis for generalized neural networks with interval discrete and distributed time-varying delays is investigated. Based on a suitable Lyapunov-Krasovskii functional (LKF), an improved Wirtinger single integral inequality, a novel triple integral inequality, and a convex combination technique, new asymptotic stability and extended dissipativity criteria are established for the generalized neural networks with interval discrete and distributed time-varying delays. By the same methods, less conservative asymptotic stability criteria are obtained for a special case of the generalized neural networks. Using the Matlab LMI toolbox, the derived asymptotic stability and extended dissipativity criteria are expressed in terms of linear matrix inequalities (LMIs) that cover H∞, L2-L∞, passivity, and dissipativity performance by setting parameters in the general performance index. Finally, numerical examples are presented whose results are less conservative than those in the literature.
Moreover, we present numerical examples for the asymptotic stability and extended dissipativity performance of the generalized neural networks, including a special case of the generalized neural networks.


Introduction
In numerous science and engineering fields, neural networks (NNs) have been studied extensively in recent years due to their wide range of applications, such as signal processing, fault diagnosis, pattern recognition, associative memory, reproducing moving pictures, optimization problems, and industrial automation [1-5]. Theoretical stability analysis of the equilibrium is initially performed to make the above applications possible. To obtain a model for theoretical analysis, the important factors that affect the system are examined, and one important factor is the time delay. It is well known that time delay always occurs in real-world situations, and it causes oscillation, instability, and poor performance of the system. Furthermore, the time delay in neural networks is caused by the finite speed of information processing and the communication time of neurons. Therefore, many researchers are interested in investigating the stability or performance of neural networks with time delay. The problem of stability or performance analysis for neural networks with constant, discrete, and distributed time-varying delays has received much attention [6-10].
In addition, neural networks can be classified into two types, local-field neural networks (LFNNs) and static neural networks (SNNs), based on the different neuron states. Many studies have treated the stability or performance of LFNNs and SNNs separately. For example, Zeng et al. [11] investigated the stability and dissipativity problem of static neural networks with interval time-varying delay. In [6], the stability of local-field neural networks with interval time-varying delay was investigated, and stability criteria were improved by using some new techniques. Moreover, in 2011 Zhang and Han [12], for the first time, combined them into a unified system model, called generalized neural networks (GNNs), which covers both the SNN and LFNN models under certain assumptions. Later, the GNN model with time delay has been used extensively for stability and performance analyses [13-18].
On the other hand, the performance of neural networks has been analyzed by a variety of techniques, which often involve input and output relationships, and these play an important role in science and engineering applications. For example, Du et al. [19] studied the problem of robust reliable H∞ control for neural networks with mixed time delays based on the LMI technique and Lyapunov stability theory. In [20], the problem of finite-time nonfragile passivity control for neural networks with time-varying delay was investigated based on a new Lyapunov-Krasovskii functional with triple and quadruple integral terms and a Wirtinger-type inequality technique. Passivity performance analysis for neural networks is examined in [21-24]. In [25], the issue of L2-L∞ state estimation design for delayed neural networks (NNs) is considered via a quadratic-type generalized free-matrix-based integral inequality. The problem of dissipative analysis for aircraft flight control systems and uncertain discrete-time neural networks is addressed in [26, 27]. It is well known that the concept of dissipativity was first studied by Willems [28]. Many researchers have studied dissipativity theory, since it not only covers H∞ and passivity performance, but is also convenient to use in control structures in engineering applications such as chemical process control [29] and power converters [30]. Dissipativity theory is widely studied in neural networks because it provides a fundamental framework for the analysis and design of control systems, and it can keep the system internally stable. Recently, many researchers have studied dissipativity for stochastic fuzzy neural networks, static neural networks, stochastic Markovian switching CVNNs, and so on [11, 31, 32]. However, the analysis of dissipativity does not cover L2-L∞ performance. To fill this gap, Zhang et al. [33] created a new general performance index, called the extended dissipativity performance index, which links all of these performance indexes. Therefore, many problems have been studied to analyze extended dissipativity for neural networks with time delays, and the results have been reported in [34-36]. Moreover, extended dissipative analysis was studied for GNNs with interval time-varying delay signals [37]. It is interesting to study the extended dissipativity performance for GNNs with interval discrete and distributed time-varying delays, which has not been studied yet.
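For reference, the general performance index of [33] can be sketched as follows. We write ψ1, ψ2, ψ3, ψ4 for the weighting matrices, with ψ1 ≤ 0, ψ3 > 0, and ψ4 ≥ 0, as is standard in the extended dissipativity literature (the display below is the standard form; our copy of the text does not reproduce it, so it should be read as a hedged sketch): the system is said to be extended dissipative if

```latex
% Extended dissipativity performance index (sketch, standard form following [33]):
\int_{0}^{t_f}\Big[\, z^{T}(t)\psi_{1}z(t) + 2z^{T}(t)\psi_{2}u(t)
      + u^{T}(t)\psi_{3}u(t) \Big]\, dt
  \;\ge\; \sup_{0 \le t \le t_f} z^{T}(t)\psi_{4}z(t),
  \qquad \forall\, t_f \ge 0 .
```

Setting (ψ1, ψ2, ψ3, ψ4) to (-I, 0, γ²I, 0), (0, 0, γ²I, I), (0, I, γI, 0), or (Q, S, R - αI, 0) recovers the H∞, L2-L∞, passivity, and (Q, S, R)-dissipativity performances, respectively.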
The problem of asymptotic stability and extended dissipativity analysis for the generalized neural networks with interval discrete and distributed time-varying delays is investigated in this paper. The main contributions of this research are as follows:
• We construct more general systems, named generalized neural networks, that cover both SNNs and LFNNs. Moreover, the interval discrete and distributed time-varying delays are not necessarily differentiable, the lower bounds of the delays are not restricted to be 0, the activation functions are different, and the output contains terms of the state vector with interval discrete time-varying delay and the disturbance.
• We create a suitable Lyapunov-Krasovskii functional (LKF) for application in asymptotic stability and extended dissipativity analysis of the GNNs with new inequalities.
• For the first time, we use a novel triple integral inequality and an improved Wirtinger single integral inequality together with a convex combination technique, obtaining upper bounds of the interval discrete time-varying delay that are larger (less conservative) than those in other references.
• We obtain new asymptotic stability and extended dissipativity criteria that cover H∞, L2-L∞, passivity, and dissipativity performance by setting parameters in the general performance index.
This paper is structured in five sections as follows. In Sect. 2, the generalized neural networks model is formulated, and some definitions, lemmas, and assumptions are introduced. In Sect. 3, we present the asymptotic stability and extended dissipativity criteria for the generalized neural networks and a special case of the generalized neural networks. Numerical examples are given in Sect. 4 to demonstrate the effectiveness of the asymptotic stability and extended dissipativity performance for the generalized neural networks, including a special case. Finally, conclusions are drawn in Sect. 5.

Network model and preliminaries
Notations Throughout this paper, R and R+ represent the set of real numbers and the set of nonnegative real numbers, respectively; R^n and R^{n×r} denote the n-dimensional Euclidean space and the set of n×r real matrices, respectively; I is the identity matrix with appropriate dimensions; C([-, 0], R^n) represents the space of all continuous vector-valued functions mapping [-, 0] into R^n, where ∈ R+; L2[0, ∞) denotes the space of functions φ : R+ → R^n with the norm ‖φ‖_{L2} = [∫_0^∞ |φ(θ)|² dθ]^{1/2}; P^T is the transpose of the matrix P; P = P^T denotes that the matrix P is symmetric; P > (≥) 0 means that the symmetric matrix P is positive (semi-)definite; P < (≤) 0 means that the symmetric matrix P is negative (semi-)definite; Sym{P} represents P + P^T; e_i represents the unit column vector having 1 in its ith row and zeros elsewhere.
Consider the following generalized neural networks model with both interval discrete and distributed time-varying delays:

ẇ(t) = -Cw(t) + B_0 f(Ww(t)) + B_1 g(Ww(t - δ(t))) + B_2 ∫_{t-σ(t)}^{t} h(Ww(s)) ds + u(t),  (1)

where w(t) = [w_1(t), w_2(t), ..., w_n(t)]^T ∈ R^n is the neuron state vector; z(t) ∈ R^n is the output vector; u(t) ∈ R^n is the deterministic disturbance input, which belongs to L2[0, ∞); f(·), g(·), h(·) ∈ R^n are the neuron activation functions; C = diag{c_1, c_2, ..., c_n} is a positive diagonal matrix; and δ(t), σ(t) are the interval discrete and distributed time-varying delays, respectively. In particular, if we set B_0 = B_1 = B_2 = I, the NNs model (1) reduces to the following model, namely SNNs:

ẇ(t) = -Cw(t) + f(Ww(t)) + g(Ww(t - δ(t))) + ∫_{t-σ(t)}^{t} h(Ww(s)) ds + u(t).

Before moving to the main results, we introduce the definitions, lemmas, and assumptions which are necessary to state the new criteria.

Lemma 2.2 ([38])
For a given symmetric positive definite matrix P ∈ R^{n×n}, scalars t, a, and b satisfying b ≥ a ≥ 0, and a vector function w : [t - b, t] → R^n such that the integrals involved are well defined, the following inequality holds:

((b² - a²)/2) ∫_{-b}^{-a} ∫_{t+θ}^{t} w^T(s) P w(s) ds dθ ≥ (∫_{-b}^{-a} ∫_{t+θ}^{t} w(s) ds dθ)^T P (∫_{-b}^{-a} ∫_{t+θ}^{t} w(s) ds dθ).
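As a numerical sanity check (not a proof) of this double-integral Jensen-type bound, the following Python sketch evaluates both sides for an arbitrary smooth vector function. The bound is taken in its standard form ((b² - a²)/2)·I2[wᵀPw] ≥ I2[w]ᵀ P I2[w], where I2[·] = ∫_{-b}^{-a}∫_{t+θ}^{t}(·) ds dθ; the concrete w, P, a, and b below are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, b, t = 2, 0.3, 1.2, 0.0

# Random symmetric positive definite weight matrix P
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)

def w(s):
    # An arbitrary smooth vector-valued function (illustrative choice)
    return np.array([np.sin(2.0 * s), np.cos(3.0 * s) + s])

# Discretize the region {(theta, s) : -b <= theta <= -a, t+theta <= s <= t}
thetas = np.linspace(-b, -a, 400)
dtheta = thetas[1] - thetas[0]
quad_int = 0.0          # I2[w^T P w]
vec_int = np.zeros(n)   # I2[w]
for theta in thetas:
    ss = np.linspace(t + theta, t, 400)
    ds = ss[1] - ss[0]
    ws = np.array([w(s) for s in ss])
    quad_int += np.einsum('ij,jk,ik->', ws, P, ws) * ds * dtheta
    vec_int += ws.sum(axis=0) * ds * dtheta

lhs = 0.5 * (b**2 - a**2) * quad_int
rhs = vec_int @ P @ vec_int
print(lhs >= rhs)
```

Since w is non-constant, the inequality is strict here; this is essentially the Cauchy-Schwarz inequality with weight P over a region of measure (b² - a²)/2.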

Lemma 2.3 ([39])
For any constant matrices P ∈ R n×n , X ∈ R 2n×2n , and Y ∈ R 2n×n with [ X Y * P ] ≥ 0, and such that the following inequality is well defined, it holds that

Lemma 2.4 ([40])
For a given matrix P > 0, the following inequality holds for any continuously differentiable function w : [a, b] → R^n:

∫_a^b ẇ^T(s) P ẇ(s) ds ≥ (1/(b - a)) (w(b) - w(a))^T P (w(b) - w(a)) + (3/(b - a)) Ω^T P Ω,

where Ω = w(b) + w(a) - (2/(b - a)) ∫_a^b w(s) ds.
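As a numerical sanity check (not a proof) of the standard Wirtinger-based single integral inequality of [40], the sketch below compares both sides in the scalar case, ∫_a^b ẇ(s)² P ds ≥ (1/L)(w(b) - w(a))² P + (3/L) Ω² P with Ω = w(a) + w(b) - (2/L)∫_a^b w(s) ds and L = b - a. The function w(s) = sin(2s) and the interval are illustrative choices.

```python
import numpy as np

a, b, P = 0.0, 1.0, 2.5
L = b - a
s = np.linspace(a, b, 100_001)
ds = s[1] - s[0]
w = np.sin(2.0 * s)            # w(s) = sin(2s)
wdot = 2.0 * np.cos(2.0 * s)   # its exact derivative

def trap(f):
    # Composite trapezoidal rule on the uniform grid s
    return (0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * ds

lhs = P * trap(wdot**2)
omega = w[0] + w[-1] - (2.0 / L) * trap(w)
rhs = (P / L) * (w[-1] - w[0])**2 + (3.0 * P / L) * omega**2
print(lhs >= rhs)
```

Note that equality is attained for quadratic w, so a quadratic test function would make the two sides coincide up to discretization error; a trigonometric w exhibits a strict gap.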

Main results
In what follows, for simplicity, some notations are introduced:

Stability analysis for generalized neural networks
In this section, new asymptotic stability criteria for the generalized neural networks (1), and their special case, are obtained based on a suitable Lyapunov-Krasovskii functional (LKF), an improved Wirtinger single integral inequality, a novel triple integral inequality, and a convex combination technique.
Theorem 3.1 For given scalars δ_1, δ_2, σ_1, σ_2, β_1, and β_2, if there exist symmetric positive definite matrices P, U_1, ... ∈ R^{n×n} and positive scalars c_1, c_2 such that the following LMIs hold: where then the system (1) with u(t) = 0 is asymptotically stable.
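Criteria of this type are checked numerically as LMI feasibility problems (the paper uses the Matlab LMI toolbox). As a minimal Python analogue, and emphatically not the actual LMIs of Theorem 3.1, the sketch below verifies the elementary Lyapunov LMI AᵀP + PA < 0, P > 0 for a toy delay-free system ẋ(t) = Ax(t) by solving the Lyapunov equation with SciPy; the matrix A is an illustrative Hurwitz example.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0, 0.5],
              [ 0.3, -1.5]])   # illustrative Hurwitz matrix
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# so passing a = A^T and q = -Q gives A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

eig_P = np.linalg.eigvalsh((P + P.T) / 2)     # eigenvalues of P
lmi = A.T @ P + P @ A
eig_lmi = np.linalg.eigvalsh((lmi + lmi.T) / 2)
print("P > 0:", bool(np.all(eig_P > 0)),
      " A^T P + P A < 0:", bool(np.all(eig_lmi < 0)))
```

For the delay-dependent LMIs of the theorems, which involve many decision matrices and delay bounds, a semidefinite programming front end (e.g. the Matlab LMI toolbox used in the paper) plays the role of this one-line Lyapunov solve.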

Extended dissipativity analysis for generalized neural networks
In this section, new extended dissipativity criteria for the generalized neural networks (1), and their special case, are obtained based on the stability conditions that were developed in Theorems 3.1 and 3.2.

then, the system (32) is asymptotically stable and extended dissipative.
Proof To show that the GNNs system (32) is extended dissipative, we first use the LKF candidate (37) and the following performance index for the GNNs (32).
Using inequality (56) in Theorem 3.2, equation (3), and LMIs (68)-(72), we obtain inequality (73). Integrating both sides of inequality (73) from 0 to t ≥ 0, we then consider two cases. Case I: ψ_4 = 0. In this case, inequality (74) yields (75). Case II: ψ_4 ≠ 0. By the relationship of the output with (72), it is clear that inequality (76) holds for any t satisfying 0 ≤ t ≤ t_f. Taking the supremum over t in inequalities (75) and (76), system (32) is extended dissipative. This completes the proof.
Remark 3 Recently, many problems have been investigated to analyze extended dissipativity for neural networks with time delays, and the results have been reported in [11, 34, 35]. Moreover, extended dissipative analysis was studied for GNNs with interval time-varying delay signals [37]. Unfortunately, most research does not include the distributed time-varying delay in the GNN systems. The distributed time-varying delay results from the transmission of distributed nerve impulses over a diversity of axon sizes and lengths, and it is considered difficult to avoid. Therefore, the extended dissipativity analysis for the GNNs with interval discrete and distributed time-varying delays is addressed in our research.
Remark 4 In recent years, the WSII was developed in [40] to estimate the derivatives of LKFs with a single integral term. Several studies have used the improved WSII to estimate the derivatives of LKFs; for example, in [42] the authors obtained criteria of finite-time passivity for neutral-type neural networks with time-varying delays by using the improved WSII with Jensen's inequality. In 2018, a novel triple integral inequality was constructed in [39] to estimate the derivatives of LKFs with a triple integral term; moreover, the authors achieved an improved delay-dependent exponential stability criterion by using the novel triple integral inequality with the extended reciprocally convex technique. In this work, we use the improved WSII to estimate four terms and the triple integral inequality to estimate -((δ_2^3 - δ_1^3)/6) ∫_{-δ_2}^{-δ_1} ∫_{β}^{0} ∫_{t+λ}^{t} ẇ^T(s) S_1 ẇ(s) ds dλ dβ. By applying the improved WSII, the novel triple integral inequality, and the convex combination technique, we obtain less conservative results compared with other works [6, 7, 43-45].
Remark 5 In this work, the Lyapunov-Krasovskii functional contains single, double, triple, and quadruple integral terms, in which full information on the delays δ_1, δ_2, σ_1, σ_2 and the state variable is used. Moreover, more information on the activation functions, namely the lower and upper bounds H_i^- and H_i^+, is taken into account in the stability and performance analysis and addressed in the proofs. Therefore, the construction of the Lyapunov-Krasovskii functional and the technique for its computation are the main keys to improving the results of this work. In the proofs of Theorems 3.1-3.4, the improved Wirtinger single integral inequality [40], a novel triple integral inequality [39], and a convex combination technique are used to bound the derivative of the Lyapunov-Krasovskii functional, which provide tighter bounds than the inequalities in [6, 7, 43-45]. All of this leads to a reduction of the conservatism of our results compared to those in some existing works, as illustrated by the numerical examples. However, the complex computation of the Lyapunov-Krasovskii functional means that the LMIs derived in this work contain a large amount of information about the system. Hence, for further work, it would be interesting to improve the technique with a simpler Lyapunov-Krasovskii functional while still achieving better results.

Numerical examples
In this section, five numerical examples are presented to illustrate the effectiveness of our results.
Taking parameters β_1 = β_2 = 1 and solving Example 4.2 using the LMIs in Theorem 3.2, we obtain the maximum allowable values of δ_2 for δ_1 = 1 without the upper bound of the differentiable delay (μ), as shown in Table 2. This table shows that the results derived in this research are less conservative than those in [6, 7, 45].
The maximum allowable values of δ_2 for different values of δ_1 are shown in Table 3. Figure 1 shows the response solution w(t) in Example 4.3, where u(t) = 0.
When we solve Example 4.4 by using the LMIs (68), (69) in Theorem 3.4, we obtain four cases. Case I. L_2-L_∞ performance. By using the LMIs in Theorem 3.4 and letting ψ_1 = 0, ψ_2 = 0, ψ_3 = γ²I, and ψ_4 = I, the extended dissipativity performance reduces to the L_2-L_∞ performance. The L_2-L_∞ performance index γ can be achieved for δ_1 = 0.5 and different δ_2, as shown in Table 4. Figure 3 shows the plot of L(t) = z^T(t)z(t)/∫_0^t u^T(s)u(s) ds versus time with the initial condition φ(t) = [-0.1 0.1]^T. Clearly, sup_t L(t) = 0.0265, which is less than the prescribed L_2-L_∞ performance index 2.0751 in Table 4.
Case II. Passivity performance. By applying the LMIs in Theorem 3.4 and taking ψ_1 = 0, ψ_2 = I, ψ_3 = γI, and ψ_4 = 0, the extended dissipativity performance degenerates to the passivity performance, and the passivity performance index γ can be obtained for given δ_1. For the dissipativity performance, the maximum allowable values of δ_2 with various γ can be achieved for δ_1 = 0.5, as shown in Table 5; Figure 6 shows the plot of D(t) versus time. The results for Example 4.5 are reported in Table 6. Figure 7 shows the response solution w(t) in Example 4.5 where u(t) = 0 and the initial condition is φ(t) = [-0.1 0.1]^T. Figure 8 shows the response solution w(t) in Example 4.5 where u(t) is Gaussian noise with mean 0 and variance 1, and the initial condition is φ(t) = [-0.1 0.1]^T.
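The parameter settings used in the cases above can be collected in a small helper. This is a sketch of a hypothetical function (not from the paper), following the extended dissipativity index of [33]; the names, the dimension n, γ, and the (Q, S, R, α) arguments of the dissipativity case are illustrative.

```python
import numpy as np

def performance_weights(kind, n, gamma=1.0, Q=None, S=None, R=None, alpha=0.0):
    # Weighting matrices (psi_1, psi_2, psi_3, psi_4) of the extended
    # dissipativity index, chosen to recover each classical performance.
    I, Z = np.eye(n), np.zeros((n, n))
    if kind == "Hinf":           # H-infinity: (-I, 0, gamma^2 I, 0)
        return -I, Z, gamma**2 * I, Z
    if kind == "L2-Linf":        # energy-to-peak: (0, 0, gamma^2 I, I)
        return Z, Z, gamma**2 * I, I
    if kind == "passivity":      # passivity: (0, I, gamma I, 0)
        return Z, I, gamma * I, Z
    if kind == "dissipativity":  # (Q, S, R)-dissipativity: (Q, S, R - alpha I, 0)
        return Q, S, R - alpha * np.eye(n), Z
    raise ValueError(f"unknown performance: {kind}")

# Case I of Example 4.4: L2-Linf with the index gamma = 2.0751 from Table 4
psi1, psi2, psi3, psi4 = performance_weights("L2-Linf", n=2, gamma=2.0751)
```

Feeding each tuple into the LMIs of Theorem 3.4 then specializes the extended dissipativity criterion to the corresponding performance.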

Conclusions
In this paper, we focus on the problem of asymptotic stability and extended dissipativity analysis for the generalized neural networks with interval discrete and distributed time-varying delays. First, we obtain new asymptotic stability criteria for the generalized neural networks and also achieve an improved asymptotic stability criterion for a special case of the generalized neural networks by using a suitable Lyapunov-Krasovskii functional (LKF), an improved Wirtinger single integral inequality, a novel triple integral inequality, and a convex combination technique. Then, the asymptotic stability results are applied