Novel extended dissipativity criteria for generalized neural networks with interval discrete and distributed time-varying delays
Advances in Difference Equations volume 2021, Article number: 42 (2021)
Abstract
The problem of asymptotic stability and extended dissipativity analysis for generalized neural networks with interval discrete and distributed time-varying delays is investigated. Based on a suitable Lyapunov–Krasovskii functional (LKF), an improved Wirtinger single integral inequality, a novel triple integral inequality, and the convex combination technique, new asymptotic stability and extended dissipativity criteria are established for the generalized neural networks with interval discrete and distributed time-varying delays. By the same methods, less conservative asymptotic stability criteria are obtained for a special case of the generalized neural networks. The derived criteria are expressed in terms of linear matrix inequalities (LMIs), which can be checked with the Matlab LMI toolbox, and cover \(H_{\infty }\), \(L_{2}\)–\(L_{\infty }\), passivity, and dissipativity performance by setting parameters in the general performance index. Finally, numerical examples are given which demonstrate that the obtained criteria are less conservative than existing results in the literature. Moreover, numerical examples illustrating the asymptotic stability and extended dissipativity performance of the generalized neural networks, including a special case of the generalized neural networks, are presented.
1 Introduction
In numerous science and engineering fields, neural networks (NNs) have been studied extensively in recent years due to their wide range of applications, such as signal processing, fault diagnosis, pattern recognition, associative memory, reproducing moving pictures, optimization problems, and industrial automation [1–5]. Theoretical stability analysis of the equilibrium is a prerequisite for these applications. To obtain a model suitable for theoretical analysis, the important factors that affect the system must be examined, and one such factor is the time delay. It is well known that time delays always occur in real-world systems, where they cause oscillation, instability, and poor performance. In neural networks, the time delay is caused by the finite speed of information processing and the communication time between neurons. Therefore, many researchers are interested in investigating the stability or performance of neural networks with time delay. The problem of stability or performance analysis for neural networks with constant, discrete, and distributed time-varying delays has received much attention [6–10].
In addition, neural networks can be classified into two types, local-field neural networks (LFNNs) and static neural networks (SNNs), based on the different neuron states. Many studies have treated the stability or performance of LFNNs and SNNs separately. For example, Zeng et al. [11] investigated the stability and dissipativity problem of static neural networks with interval time-varying delay. In [6], the stability of local-field neural networks with interval time-varying delay was investigated, and the stability criteria were improved by using some new techniques. Moreover, in 2011, Zhang and Han [12] combined the two for the first time into a unified system model, called generalized neural networks (GNNs), which covers both the SNN and LFNN models under certain assumptions. Since then, the delayed GNN model has been used extensively in stability and performance analyses [13–18].
On the other hand, the performance of neural networks has been analyzed by a variety of techniques, which often involve input–output relationships and play an important role in science and engineering applications. For example, Du et al. [19] studied the problem of robust reliable \(H_{\infty }\) control for neural networks with mixed time delays based on the LMI technique and Lyapunov stability theory. In [20], the problem of finite-time nonfragile passivity control for neural networks with time-varying delay was investigated based on a new Lyapunov–Krasovskii functional with triple and quadruple integral terms and a Wirtinger-type inequality technique. Passivity performance analysis for neural networks is examined in [21–24]. In [25], the issue of \(L_{2}\)–\(L_{\infty }\) state estimation design for delayed neural networks (NNs) is considered via a quadratic-type generalized free-matrix-based integral inequality. The problem of dissipative analysis for aircraft flight control systems and uncertain discrete-time neural networks is addressed in [26, 27]. It is well known that the concept of dissipativity was first studied by Willems [28]. Many researchers have since studied dissipativity theory, since it not only covers \(H_{\infty }\) and passivity performance, but is also well suited to convenient control structures in engineering applications such as chemical process control [29] and power converters [30]. Dissipativity theory is widely studied in neural networks because it provides a fundamental framework for the analysis and design of control systems and can keep the system internally stable. Recently, many researchers have studied dissipativity for stochastic fuzzy neural networks, static neural networks, stochastic Markovian switching CVNNs, and so on [11, 31, 32]. However, dissipativity analysis does not relate to \(L_{2}\)–\(L_{\infty }\) performance. To fill this gap, Zhang et al. [33] created a new general performance index, called the extended dissipativity performance index, which links all of these performance indexes. Many problems have therefore been studied to analyze extended dissipativity for neural networks with time delays, and the results have been reported in [34–36]. Moreover, extended dissipative analysis was studied for GNNs with interval time-varying delay signals [37]. It is interesting to study the extended dissipativity performance for GNNs with interval discrete and distributed time-varying delays, which has not been studied yet.
The problem of asymptotic stability and extended dissipativity analysis for the generalized neural networks with interval discrete and distributed time-varying delays is investigated in this paper. The main contributions of this research are as follows:
• We construct more general systems named the generalized neural networks that cover both SNNs and LFNNs. Moreover, the interval discrete and distributed time-varying delays are not necessarily differentiable functions, the lower bound of the delays is not restricted to be 0, the activation functions are different, and the output contains terms of the state vector with interval discrete time-varying delay and the disturbance.
• We create a suitable Lyapunov–Krasovskii functional (LKF) for application in asymptotic stability and extended dissipativity analysis of the GNNs with new inequalities.
• For the first time, we use a novel triple integral inequality and an improved Wirtinger single integral inequality together with the convex combination technique for estimation, which yields a larger upper bound of the interval discrete time-varying delay than those reported in other references.
• We obtain new asymptotic stability and extended dissipativity criteria that cover \(H_{\infty }\), \(L_{2}\)–\(L_{\infty }\), passivity, and dissipativity performance by setting parameters in the general performance index.
This paper is structured in five sections as follows. In Sect. 2, the generalized neural networks model is formulated, and some definitions, lemmas, and assumptions are introduced. In Sect. 3, we show the asymptotic stability and extended dissipativity criteria for the generalized neural networks and a special case of the generalized neural networks. Numerical examples are shown in Sect. 4 to demonstrate the effectiveness of asymptotic stability and extended dissipativity performance for the generalized neural networks, including a special case of the generalized neural networks. Finally, conclusions are addressed in Sect. 5.
2 Network model and preliminaries
Notations
Throughout this paper, \(\mathbb{R}\) and \(\mathbb{R}^{+}\) represent the set of real numbers and the set of nonnegative real numbers, respectively; \(\mathbb{R}^{n} \) and \(\mathbb{R}^{n\times r}\) denote the n-dimensional Euclidean space and the set of \(n\times r\) real matrices, respectively; I is the identity matrix with appropriate dimensions; \(\mathcal{C}([-\varrho ,0],\mathbb{R}^{n})\) represents the space of all continuous vector-valued functions mapping \([-\varrho ,0]\) into \(\mathbb{R}^{n}\), where \(\varrho \in \mathbb{R}^{+}\); \(\mathcal{L}_{2}[0,\infty )\) denotes the space of functions \(\phi : \mathbb{R}^{+} \rightarrow \mathbb{R}^{n}\) with the norm \(\Vert \phi \Vert _{\mathcal{L}_{2}} = [ \int _{0}^{\infty } \vert \phi ( \theta ) \vert ^{2}\,d\theta ]^{\frac{1}{2}}\); \(P^{T}\) is the transpose of the matrix P; \(P = P^{T}\) denotes that the matrix P is symmetric; \(P>(\geq ) 0\) means that the symmetric matrix P is positive (semi-)definite; \(P<(\leq )0\) means that the symmetric matrix P is negative (semi-)definite; \(\operatorname{Sym}\{P\}\) represents \(P+P^{T}\); \(e_{i}\) represents the unit column vector having 1 on its ith row and zeros elsewhere.
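To fix these conventions concretely, the two notations used most heavily in the LMI conditions, \(\operatorname{Sym}\{P\}\) and \(e_{i}\), can be sketched in a few lines of Python (an illustration only; the paper's computations use the Matlab LMI toolbox):

```python
import numpy as np

def Sym(P):
    """Sym{P} = P + P^T, used throughout the LMI conditions."""
    return P + P.T

def e(i, m):
    """Unit column vector e_i in R^m: 1 on the ith row, zeros elsewhere (1-indexed)."""
    v = np.zeros((m, 1))
    v[i - 1, 0] = 1.0
    return v

P = np.array([[1.0, 2.0], [0.0, 3.0]])
print(Sym(P))        # -> [[2, 2], [2, 6]], a symmetric matrix
print(e(2, 3).T)     # -> [[0, 1, 0]]
```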
Consider the following generalized neural networks model with both interval discrete and distributed time-varying delays:
where \(w(t)=[w_{1}(t),w_{2}(t),\ldots,w_{n}(t)]^{T} \in \mathbb{R}^{n}\) is the neuron state vector; \(z(t) \in \mathbb{R}^{n}\) is the output vector; \(u(t) \in \mathbb{R}^{n}\) is the deterministic disturbance input, which belongs to \(\mathcal{L}_{2} [0,\infty )\); \(f(\cdot ),g(\cdot ),h(\cdot ) \in \mathbb{R}^{n}\) are the neuron activation functions; \(C=\operatorname{diag}\{c_{1},c_{2},\ldots,c_{n}\}\) is a positive diagonal matrix; \(B_{0}\), \(B_{1}\), \(B_{2}\), and W are connection weight matrices; \(B_{3}\), \(D_{1}\), \(D_{2}\), and \(D_{3}\) are given real matrices; \(\phi (t)\in \mathcal{C}([-\varrho ,0],\mathbb{R}^{n})\) is the initial function. The variables \(\delta (t)\) and \(\sigma _{i}(t)\) (\(i=1,2\)) denote the interval discrete and distributed time-varying delays, which satisfy \(0 \leq \delta _{1} \leq \delta (t) \leq \delta _{2}\) and \(0 \leq \sigma _{1} \leq \sigma _{1}(t) \leq \sigma _{2}(t) \leq \sigma _{2}\), where \(\delta _{1}\), \(\delta _{2}\), \(\sigma _{1}\), \(\sigma _{2}\) are known real constants and \(\varrho =\max \{ \delta _{2},\sigma _{2}\}\).
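The display of model (1) is not reproduced above; delayed GNN models of this kind are commonly written as \(\dot{w}(t)=-Cw(t)+B_{0}f(Ww(t))+B_{1}g(Ww(t-\delta (t)))+B_{2}\int _{t-\sigma _{2}(t)}^{t-\sigma _{1}(t)}h(Ww(s))\,ds+B_{3}u(t)\). Assuming this form, the following Python sketch integrates a two-neuron instance by forward Euler, with the distributed-delay and disturbance terms omitted and with hypothetical matrices (not the paper's examples):

```python
import numpy as np

# Hypothetical parameters for a 2-neuron network (not from the paper's examples).
C  = np.diag([1.5, 1.2])                      # positive diagonal matrix
B0 = np.array([[0.2, -0.1], [0.1, 0.3]])      # connection weight matrices
B1 = np.array([[-0.3, 0.1], [0.2, -0.2]])
W  = np.eye(2)                                # W = I gives the LFNN special case

dt, T = 1e-3, 10.0
delta = lambda t: 0.5 + 0.3 * np.sin(t)**2    # interval delay, 0.5 <= delta(t) <= 0.8
n_hist = int(0.8 / dt) + 1                    # history buffer covering the maximum delay

w = np.full(2, 0.1)                           # constant initial function phi(t)
hist = [w.copy() for _ in range(n_hist)]

for k in range(int(T / dt)):
    t = k * dt
    lag = int(delta(t) / dt)
    w_delay = hist[-1 - lag]
    # Euler step of dw/dt = -C w + B0 f(W w) + B1 g(W w(t - delta(t)))
    # with f = g = tanh; distributed-delay and disturbance terms omitted.
    dw = -C @ w + B0 @ np.tanh(W @ w) + B1 @ np.tanh(W @ w_delay)
    w = w + dt * dw
    hist.append(w.copy()); hist.pop(0)

print(np.linalg.norm(w))   # the state decays toward the origin for these parameters
```

The delayed state is read from a circular history buffer, which is the standard way to discretize a time-varying delay.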
The neuron activation functions \(f(\cdot )\), \(g(\cdot )\), and \(h(\cdot )\) satisfy the following conditions:
- (A1): The neuron activation function f is continuous and there exist constants \(F^{-}_{i}\) and \(F^{+}_{i}\) such that
$$ F^{-}_{i} \leq \frac{f_{i}(\alpha _{1})-f_{i}(\alpha _{2})}{\alpha _{1} -\alpha _{2}} \leq F^{+}_{i} $$
for all \(\alpha _{1} \neq \alpha _{2}\); we also let \(\tilde{F}_{i} =\max \{ \vert F^{-}_{i} \vert , \vert F^{+}_{i} \vert \}\), where \(f(\cdot )=[f_{1}(\cdot ),f_{2}(\cdot ),\ldots,f_{n}(\cdot )]^{T}\) and \(f_{i}(0)=0\) for any \(i \in \{1,2,\ldots,n\}\).
- (A2): The neuron activation function g is continuous and there exist constants \(G^{-}_{i}\) and \(G^{+}_{i}\) such that
$$ G^{-}_{i} \leq \frac{g_{i}(\alpha _{1})-g_{i}(\alpha _{2})}{\alpha _{1} -\alpha _{2}} \leq G^{+}_{i} $$
for all \(\alpha _{1} \neq \alpha _{2}\); we also let \(\tilde{G}_{i} =\max \{ \vert G^{-}_{i} \vert , \vert G^{+}_{i} \vert \}\), where \(g(\cdot )=[g_{1}(\cdot ),g_{2}(\cdot ),\ldots,g_{n}(\cdot )]^{T}\) and \(g_{i}(0)=0\) for any \(i \in \{1,2,\ldots,n\}\).
- (A3): The neuron activation function h is continuous and there exist constants \(H^{-}_{i}\) and \(H^{+}_{i}\) such that
$$ H^{-}_{i} \leq \frac{h_{i}(\alpha _{1})-h_{i}(\alpha _{2})}{\alpha _{1} -\alpha _{2}} \leq H^{+}_{i} $$
for all \(\alpha _{1} \neq \alpha _{2}\); we also let \(\tilde{H}_{i} =\max \{ \vert H^{-}_{i} \vert , \vert H^{+}_{i} \vert \}\), where \(h(\cdot )=[h_{1}(\cdot ),h_{2}(\cdot ),\ldots,h_{n}(\cdot )]^{T}\) and \(h_{i}(0)=0\) for any \(i \in \{1,2,\ldots,n\}\).
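Assumptions (A1)–(A3) are standard sector conditions on the difference quotients of the activation functions. As a quick numerical sanity check (illustrative only), the hyperbolic tangent satisfies (A1) with \(F^{-}_{i}=0\) and \(F^{+}_{i}=1\):

```python
import numpy as np

f = np.tanh                      # a common activation satisfying (A1) with F- = 0, F+ = 1
rng = np.random.default_rng(0)
a1 = rng.uniform(-5, 5, 1000)
a2 = rng.uniform(-5, 5, 1000)
mask = a1 != a2                  # the sector condition is stated for alpha_1 != alpha_2
quot = (f(a1[mask]) - f(a2[mask])) / (a1[mask] - a2[mask])
print(quot.min(), quot.max())    # all difference quotients lie in [0, 1]
assert f(0.0) == 0.0             # f_i(0) = 0, as required by (A1)
```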
Remark 1
The NNs model (1) provides a general form of delayed NNs model, which covers both LFNNs and SNNs; it reduces to each of them by appropriately setting \(B_{0}\), \(B_{1}\), \(B_{2}\), and W. That is, if we set \(W=I\), the NNs model (1) leads to the following model, namely LFNNs:
In the same way, if we set \(B_{0}=B_{1}=B_{2}=I\), the NNs model (1) is changed to the following model, namely SNNs:
Before moving to the main results, we introduce the definitions, lemmas, and assumptions which are necessary to state the new criteria.
Assumption (H1)
([34])
For given real symmetric matrices \(\Gamma _{1}\leq 0\), \(\Gamma _{3}, \Gamma _{4} \geq 0\), and a real matrix \(\Gamma _{2}\), the following conditions are satisfied:
- (1) \(\Vert D_{3} \Vert \cdot \Vert \Gamma _{4} \Vert =0\);
- (2) \(( \Vert \Gamma _{1} \Vert + \Vert \Gamma _{2} \Vert ) \cdot \Vert \Gamma _{4} \Vert =0\);
- (3) \(D_{3}^{T}\Gamma _{1} D_{3}+D_{3}^{T}\Gamma _{2}+\Gamma _{2}^{T}D_{3}+ \Gamma _{3}>0\).
Definition 2.1
([34])
For given matrices \(\Gamma _{1}\), \(\Gamma _{2}\), \(\Gamma _{3}\), and \(\Gamma _{4}\) satisfying Assumption (H1), system (1) is said to be extended dissipative if, under the zero initial condition, there exists a scalar λ such that the following inequality holds for any \(t_{f} \geq 0\) and all \(u(t)\in \mathcal{L}_{2} [0,\infty )\):
$$ \int _{0}^{t_{f}} J(t)\,dt \geq \sup_{0\leq t\leq t_{f}} z^{T}(t)\Gamma _{4} z(t)+\lambda , $$
where \(J(t)=z^{T}(t)\Gamma _{1} z(t)+2z^{T}(t)\Gamma _{2} u(t)+u^{T}(t)\Gamma _{3} u(t)\).
Remark 2
The inequality (2) shows that the new performance measure is general in the sense that special choices of the weighting matrices \(\Gamma _{i}\), \(i=1,2,3,4\), recover well-known performance indexes:
- If \(\Gamma _{1}=0\), \(\Gamma _{2}=0\), \(\Gamma _{3}=\gamma ^{2}I\), \(\Gamma _{4}=I\), and \(\lambda =0\), then inequality (2) describes the \(L_{2}\)–\(L_{\infty }\) performance;
- If \(\Gamma _{1}=-I\), \(\Gamma _{2}=0\), \(\Gamma _{3}=\gamma ^{2}I\), \(\Gamma _{4}=0\), and \(\lambda =0\), then inequality (2) determines the \(H_{\infty }\) performance;
- If \(\Gamma _{1}=0\), \(\Gamma _{2}=I\), \(\Gamma _{3}=\gamma I\), \(\Gamma _{4}=0\), and \(\lambda =0\), then inequality (2) reduces to the passivity performance;
- If \(\Gamma _{1}=\mathcal{Q}\), \(\Gamma _{2}=\mathcal{S}\), \(\Gamma _{3}= \mathcal{R}-\gamma I\), \(\Gamma _{4}=0\), and \(\lambda =0\), then inequality (2) degenerates to the \((\mathcal{Q},\mathcal{S},\mathcal{R})\)–γ-dissipativity performance.
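The four parameter settings above can be collected in a small helper (a sketch; the name `gamma_matrices` and its interface are ours, not from the paper):

```python
import numpy as np

def gamma_matrices(kind, n, gamma, Q=None, S=None, R=None):
    """Weighting matrices (Gamma1, Gamma2, Gamma3, Gamma4) of the extended
    dissipativity index, per the special cases of Remark 2 (lambda = 0)."""
    I = np.eye(n)
    Z = np.zeros((n, n))
    if kind == "L2-Linf":
        return Z, Z, gamma**2 * I, I
    if kind == "Hinf":
        return -I, Z, gamma**2 * I, Z
    if kind == "passivity":
        return Z, I, gamma * I, Z
    if kind == "QSR-dissipativity":
        return Q, S, R - gamma * I, Z
    raise ValueError(f"unknown performance kind: {kind}")

g1, g2, g3, g4 = gamma_matrices("Hinf", 2, 0.5)
print(g1)   # -> -I for the H-infinity case
```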
Lemma 2.2
([38])
For a given symmetric positive definite matrix \(P\in \mathbb{R}^{n\times n}\), scalars t, a, and b satisfying \(b\geq a\geq 0\), and a vector function \(w:[t-b,t]\rightarrow \mathbb{R}^{n}\) such that the integrals involved are well defined, the following inequality holds:
Lemma 2.3
([39])
For any constant matrices \(P\in \mathbb{R}^{n\times n}\), \(X\in \mathbb{R}^{2n\times 2n}\), and \(Y\in \mathbb{R}^{2n\times n}\) with \(\bigl[ {\scriptsize\begin{matrix} X & Y \cr * & P \end{matrix}} \bigr] \geq 0\), and such that the following inequality is well defined, it holds that
where \(\Theta =[I,-I]\) and \(\Omega _{1}= [w^{T}(t),\int _{-b}^{-a}\int _{t+\theta }^{t} \frac{2}{b^{2}-a^{2}}w^{T}(s)\,ds\,d \theta ]^{T}\).
Lemma 2.4
([40])
For a given matrix \(P>0\), the following inequality holds for any continuously differentiable function \(w:[a,b]\rightarrow \mathbb{R}^{n}\):
where
Lemma 2.5
([41])
Suppose that \(w(t)\in \mathbb{R}^{n} \) and \(\eta \in \mathbb{R}\). Then for any positive definite matrix P, the following inequality holds:
3 Main results
In what follows, for simplicity, the following notations are introduced:
3.1 Stability analysis for generalized neural networks
In this section, new asymptotic stability criteria for the generalized neural networks (1), and their special case, are obtained based on a suitable Lyapunov–Krasovskii functional (LKF), an improved Wirtinger single integral inequality, a novel triple integral inequality, and convex combination technique.
Theorem 3.1
For given scalars \(\delta _{1}\), \(\delta _{2}\), \(\sigma _{1}\), \(\sigma _{2}\), \(\beta _{1}\), and \(\beta _{2}\), if there exist symmetric positive definite matrices \(P, U_{1},U_{2},T_{1},T_{2},T_{3},L,S_{1},S_{2},Q \in \mathbb{R}^{n \times n}\), positive definite matrices \(N_{1},N_{2} \in \mathbb{R}^{n\times n}\), positive diagonal matrices \(Z_{1},Z_{2},Z_{3} \in \mathbb{R}^{n\times n}\), any matrices \(X_{1}, X_{2},X_{3}, X_{4}, Y_{1}, Y_{2} \in \mathbb{R}^{n\times n}\), and positive scalars \(c_{1}\), \(c_{2}\) such that the following LMIs hold:
where
then, the system (1) with \(u(t)=0\) is asymptotically stable.
Proof
We consider the following Lyapunov–Krasovskii functional candidate for the system (1):
where
Time derivatives of \(V_{i}(w(t),t)\), \(i=1,2,\ldots,10\), along the trajectories of (1) are as follows:
Utilizing Lemma 2.4, the following relations are easily obtained:
On the other hand, we have the following inequality from Lemma 2.2:
where \(\varepsilon = \frac{\delta ^{2}(t)-\delta ^{2}_{1}}{\delta ^{2}_{2}-\delta ^{2}_{1}}\).
From Lemma 2.3 and condition (7), we obtain
From Lemma 2.5, we obtain
where \(\alpha = \frac{\delta ^{3}(t)-\delta ^{3}_{1}}{\delta ^{3}_{2}-\delta ^{3}_{1}}\).
It follows from Assumptions (A1), (A2), and (A3) that
Considering system (1), the following equation is obtained:
By adding the right-hand side of (28) to \(\dot{V}(w(t),t)\), we obtain from (9)–(27) that
where \(\bar{\Xi }^{(i)}=\frac{1}{2}\bar{\Xi } +\Xi _{i}\) (\(i=1,2\)) and \(\bar{\Xi }^{(j)}=\frac{1}{2}\bar{\Xi } +\Xi _{j}\) (\(j=3,4\)), \(\bar{\Xi }=\Xi +2e_{26}\beta _{1} B^{T}_{3}N_{1}e^{T}_{1}+2e_{26} \beta _{2} B^{T}_{3}N_{2}e^{T}_{2}\), with Ξ and \(\Xi _{i}\), \(\Xi _{j}\) defined in (4) and (5).
When \(u(t)=0 \) (no disturbance), one has from (29) that
where \(\Xi ^{(i)}=\frac{1}{2} \Xi +\Xi _{i}\) (\(i=1,2\)) and \(\Xi ^{(j)}=\frac{1}{2}\Xi +\Xi _{j}\) (\(j=3,4\)).
The upper bound of \(\dot{V}(w(t),t)\) is negative if the condition (6) and the following relations hold simultaneously:
The above relations can be rewritten as follows:
Since \(0\leq \varepsilon ,\alpha \leq 1\), the term \(\varepsilon (\Xi ^{(1)}+c_{1}I)+(1-\varepsilon )(\Xi ^{(2)}+c_{1}I)\) is a convex combination of \(\Xi ^{(1)}+c_{1}I\) and \(\Xi ^{(2)}+c_{1}I\), and the expression \(\alpha (\Xi ^{(3)}-c_{2}I)+(1-\alpha )(\Xi ^{(4)}-c_{2}I)\) is a convex combination of \(\Xi ^{(3)}-c_{2}I\) and \(\Xi ^{(4)}-c_{2}I\). Each combination is negative definite for all values of the combination parameter if and only if its vertices are negative definite; therefore, (30) and (31) are equivalent to (4) and (5), respectively. Then, the system (1) with \(u(t)=0\) is asymptotically stable. □
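The convex combination argument used here can be illustrated numerically: if both vertices of the combination are negative definite, every convex combination of them is as well. A small check with random matrices (illustrative only, not the paper's \(\Xi \) matrices):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_negative_definite(n):
    """Return a random symmetric strictly negative definite matrix."""
    A = rng.standard_normal((n, n))
    return -(A @ A.T) - np.eye(n)     # -(A A^T) is <= 0, so this is < 0

X1 = random_negative_definite(4)      # stands in for a vertex such as Xi^(1) + c1 I
X2 = random_negative_definite(4)      # stands in for the other vertex Xi^(2) + c1 I

for eps in np.linspace(0.0, 1.0, 11):
    M = eps * X1 + (1 - eps) * X2     # convex combination of the two vertices
    assert np.linalg.eigvalsh(M).max() < 0   # remains negative definite
print("all convex combinations negative definite")
```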
Next, we consider the following generalized neural network model as a special case of the system (1):
Theorem 3.2
For given scalars \(\delta _{1}\), \(\delta _{2}\), \(\beta _{1}\), and \(\beta _{2}\), if there exist symmetric positive definite matrices \(P, U_{1},U_{2},T_{1},T_{2},T_{3},L,S_{1},S_{2} \in \mathbb{R}^{n \times n}\), positive definite matrices \(N_{1},N_{2} \in \mathbb{R}^{n\times n}\), positive diagonal matrices \(Z_{1},Z_{2} \in \mathbb{R}^{n\times n}\), any matrices \(X_{1},X_{2},X_{3},X_{4},Y_{1},Y_{2} \in \mathbb{R}^{n\times n}\), and positive scalars \(b_{1}\), \(b_{2}\) such that the following LMIs hold:
where
then, the system (32) with \(u(t)=0\) is asymptotically stable.
Proof
We consider the following Lyapunov–Krasovskii functional candidate for the system (32):
where
Time derivatives of \(V_{i}(w(t),t)\), \(i=1,2,\ldots,9\), along the trajectories of (32) are as follows:
Utilizing Lemma 2.4, the following relations are easily obtained:
On the other hand, we have the following inequality from Lemma 2.2:
where \(\varepsilon = \frac{\delta ^{2}(t)-\delta ^{2}_{1}}{\delta ^{2}_{2}-\delta ^{2}_{1}}\).
From Lemma 2.3 and condition (36), we obtain
From Lemma 2.5, we get
where \(\alpha = \frac{\delta ^{3}(t)-\delta ^{3}_{1}}{\delta ^{3}_{2}-\delta ^{3}_{1}}\).
It follows from Assumptions (A1) and (A2) that
Considering system (32), the following equation is obtained:
By adding the right-hand side of (55) to \(\dot{V}(t)\), we obtain from (38)–(54) that
where \(\bar{\Theta }^{(i)}=\frac{1}{2}\bar{\Theta } +\Theta _{i}\) (\(i=1,2\)) and \(\bar{\Theta }^{(j)}=\frac{1}{2}\bar{\Theta } +\Theta _{j}\) (\(j=3,4\)), \(\bar{\Theta }=\Theta +2e_{24}\beta _{1} B^{T}_{3}N_{1}e^{T}_{1}+2e_{24} \beta _{2} B^{T}_{3}N_{2}e^{T}_{2}\), with Θ and \(\Theta _{i}\), \(\Theta _{j}\) defined in (33) and (34).
When \(u(t)=0 \) (no disturbance), one has from (56) that
where \(\Theta ^{(i)}=\frac{1}{2} \Theta +\Theta _{i}\) (\(i=1,2\)) and \(\Theta ^{(j)}=\frac{1}{2}\Theta +\Theta _{j}\) (\(j=3,4\)).
The upper bound of \(\dot{V}(w(t),t)\) is negative if the condition (35) and the following relations hold simultaneously:
The above relations can be rewritten as follows:
Since \(0\leq \varepsilon ,\alpha \leq 1\), the term \(\varepsilon (\Theta ^{(1)}+b_{1}I)+(1-\varepsilon )(\Theta ^{(2)}+b_{1}I)\) is a convex combination of \(\Theta ^{(1)}+b_{1}I\) and \(\Theta ^{(2)}+b_{1}I\), and the expression \(\alpha (\Theta ^{(3)}-b_{2}I)+(1-\alpha )(\Theta ^{(4)}-b_{2}I)\) is a convex combination of \(\Theta ^{(3)}-b_{2}I\) and \(\Theta ^{(4)}-b_{2}I\). Each combination is negative definite for all values of the combination parameter if and only if its vertices are negative definite; therefore, (57) and (58) are equivalent to (33) and (34), respectively. Then, the system (32) with \(u(t)=0\) is asymptotically stable. □
3.2 Extended dissipativity analysis for generalized neural networks
In this section, new extended dissipativity criteria for the generalized neural networks (1), and their special case, are obtained based on the stability conditions that were developed in Theorems 3.1 and 3.2.
Theorem 3.3
For given scalars \(\delta _{1}\), \(\delta _{2}\), \(\sigma _{1}\), \(\sigma _{2}\), \(\beta _{1}\), \(\beta _{2}\), and a positive scalar \(d<1\), if there exist symmetric positive definite matrices \(P, U_{1},U_{2},T_{1},T_{2},T_{3},L,S_{1},S_{2}, Q \in \mathbb{R}^{n \times n}\), positive definite matrices \(N_{1},N_{2} \in \mathbb{R}^{n\times n}\), positive diagonal matrices \(Z_{1},Z_{2},Z_{3} \in \mathbb{R}^{n\times n}\), any matrices \(X_{1},X_{2},X_{3},X_{4},Y_{1},Y_{2} \in \mathbb{R}^{n\times n}\), and positive scalars \(c_{1}\), \(c_{2}\) such that the following LMIs hold:
where
then, the system (1) is asymptotically stable and extended dissipative.
Proof
To show that the GNN system (1) is extended dissipative, we first use the LKF candidate (8) and the following performance index for the GNNs (1):
Using inequality (29) in Theorem 3.1, equation (3), and LMIs (59)–(63), we obtain
Then we integrate both sides of the inequality (64) from 0 to \(t \geq 0\) and, letting \(\lambda \leq -V (w(0),0)\), get
Next, we consider two cases:
Case I. \(\Gamma _{4}=0\). For this case, from inequality (65) we obtain
This matches Definition 2.1 with \(\Gamma _{4}=0\).
Case II. \(\Gamma _{4}\neq 0\). From Assumption (H1), it is clear that \(\Gamma _{1} = 0\), \(\Gamma _{2}=0\), \(\Gamma _{3} >0\), and \(D_{3}=0\). Then, for any \(0\leq t \leq t_{f}\) and \(0\leq t-\delta (t) \leq t_{f}\), (65) leads to
and
respectively. On the other hand, for \(t-\delta (t)\leq 0\), it can be shown that
Thus, there exists a positive scalar \(d<1\) such that
From the relationship between the output \(z(t)\) and LMI (63), we have
So, it is clear that for any t satisfying \(0\leq t \leq t_{f}\),
Taking the supremum over t in inequalities (66) and (67), system (1) is extended dissipative. This completes the proof. □
Theorem 3.4
For given scalars \(\delta _{1}\), \(\delta _{2}\), \(\beta _{1}\), and \(\beta _{2}\), if there exist symmetric positive definite matrices \(P, U_{1},U_{2},T_{1},T_{2},T_{3},L,S_{1},S_{2} \in \mathbb{R}^{n \times n}\), positive definite matrices \(N_{1},N_{2} \in \mathbb{R}^{n\times n}\), positive diagonal matrices \(Z_{1},Z_{2} \in \mathbb{R}^{n\times n}\), any matrices \(X_{1},X_{2},X_{3},X_{4},Y_{1},Y_{2} \in \mathbb{R}^{n\times n}\), and positive scalars \(b_{1}\), \(b_{2}\) such that the following LMIs hold:
where
then, the system (32) is asymptotically stable and extended dissipative.
Proof
To show that the GNN system (32) is extended dissipative, we first use the LKF candidate (37) and the following performance index for the GNNs (32):
Using inequality (56) in Theorem 3.2, equation (3), and LMIs (68)–(72), we get
Then we integrate both sides of the inequality (73) from 0 to \(t \geq 0\) and, letting \(0=\lambda \leq -V (w(0),0)\), get
Next, we consider two cases:
Case I. \(\Gamma _{4}=0\). For this case, from inequality (74) we get
This matches Definition 2.1 with \(\Gamma _{4}=0\).
Case II. \(\Gamma _{4}\neq 0\). From Assumption (H1), it is clear that \(\Gamma _{1} = 0\), \(\Gamma _{2}=0\), and \(\Gamma _{3} >0\). Then, for any \(0\leq t \leq t_{f}\), (74) leads to
From the relationship between the output and LMI (72), we have
So, it is clear that for any t satisfying \(0\leq t \leq t_{f}\),
Taking the supremum over t in inequalities (75) and (76), system (32) is extended dissipative. This completes the proof. □
Remark 3
Recently, many problems have been investigated to analyze extended dissipativity for neural networks with time delays, and the results have been reported in [11, 34, 35]. Moreover, extended dissipative analysis was studied for GNNs with interval time-varying delay signals [37]. Unfortunately, most existing research does not include the distributed time-varying delay in the GNN model. The distributed time-varying delay arises from the transmission of nerve impulses distributed along axons of various sizes and lengths, and is therefore difficult to avoid. This motivates the extended dissipativity analysis for GNNs with interval discrete and distributed time-varying delays addressed in this paper.
Remark 4
In recent years, the Wirtinger single integral inequality (WSII) was developed in [40] to estimate the derivatives of LKFs with a single integral term. Several studies have used the improved WSII for this purpose; for example, in [42] the authors obtained finite-time passivity criteria for neutral-type neural networks with time-varying delays by combining the improved WSII with Jensen's inequality. In 2018, a novel triple integral inequality was constructed in [39] to estimate the derivative of LKFs with a triple integral term; moreover, the authors achieved an improved delay-dependent exponential stability criterion by combining this inequality with the extended reciprocally convex technique. In this work, we use the improved WSII to estimate four terms and the triple integral inequality to estimate \(-\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6}\int _{-\delta _{2}}^{- \delta _{1}} \int _{\beta }^{0} \int _{t+\lambda }^{t} \dot{w}^{T}(s)S_{1} \dot{w}(s)\,ds\,d\lambda \,d\beta \). By applying the improved WSII, the novel triple integral inequality, and the convex combination technique, we obtain less conservative results than those in [6, 7, 43–45].
Remark 5
In this work, the Lyapunov–Krasovskii functional contains single, double, triple, and quadruple integral terms, so that full information on the delays \(\delta _{1}\), \(\delta _{2}\), \(\sigma _{1}\), \(\sigma _{2}\) and the state variable is exploited. Moreover, more information on the activation functions is taken into account in the stability and performance analysis; that is, the bounds \(F^{-}_{i} \leq \frac{f_{i}(W_{i}w(t))}{W_{i}w(t)} \leq F^{+}_{i}\), \(G^{-}_{i} \leq \frac{g_{i}(W_{i}w(t-\delta (t)))}{W_{i}w(t-\delta (t))} \leq G^{+}_{i}\), and \(H^{-}_{i} \leq \frac{h_{i}(W_{i}w(t))}{W_{i}w(t)} \leq H^{+}_{i}\) are used in the proofs. Therefore, the construction of the Lyapunov–Krasovskii functional and the technique for bounding its derivative are the main keys to the improved results of this work. In the proofs of Theorems 3.1–3.4, the improved Wirtinger single integral inequality [40], a novel triple integral inequality [39], and the convex combination technique are used to bound the derivative of the Lyapunov–Krasovskii functional; these provide tighter bounds than the inequalities in [6, 7, 43–45]. All of this reduces the conservatism of our results compared to existing works, as demonstrated by the numerical examples. However, the complexity of the Lyapunov–Krasovskii functional means that the LMIs derived in this work involve a large amount of information about the system. Hence, it would be interesting in future work to improve the technique so that a simpler Lyapunov–Krasovskii functional achieves comparable or better results.
4 Numerical examples
In this section, five numerical examples are presented to illustrate the effectiveness of our results.
Example 4.1
Consider the generalized neural network (32) with the following matrices:
By taking parameters \(\beta _{1}=\beta _{2}=1\) and solving the LMIs of Theorem 3.2, we obtain the maximum allowable values of \(\delta _{2}\) for \(\delta _{1}=0.5\) without imposing an upper bound μ on the delay derivative, as shown in Table 1. The table shows that the criteria derived in this research are less conservative than those in [43–45].
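The maximum allowable \(\delta _{2}\) reported in the tables is typically found by bisection over an LMI feasibility test. The sketch below shows the search loop only; for runnability it replaces the LMIs of Theorem 3.2 by the exactly known stability bound \(\delta < \pi /2\) of the scalar test system \(\dot{x}(t)=-x(t-\delta )\), so the numbers are illustrative, not the paper's:

```python
import math

def feasible(delta):
    """Stand-in for the LMI feasibility test of Theorem 3.2.

    For the scalar test system x'(t) = -x(t - delta) the exact delay bound
    is known in closed form: stability holds iff delta < pi/2."""
    return delta < math.pi / 2

def max_allowable_delay(lo=0.0, hi=10.0, tol=1e-6):
    """Bisection: the same search loop one would wrap around an LMI solver."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

print(round(max_allowable_delay(), 4))   # -> 1.5708, i.e. pi/2
```

In practice `feasible` would call an LMI solver on conditions (33)–(36) with the candidate \(\delta _{2}\) substituted in.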
Example 4.2
Consider the generalized neural network (32) with the following matrices:
By taking parameters \(\beta _{1}=\beta _{2}=1\) and solving the LMIs of Theorem 3.2, we obtain the maximum allowable values of \(\delta _{2}\) for \(\delta _{1}=1\) without imposing an upper bound μ on the delay derivative, as shown in Table 2. The table shows that the criteria derived in this research are less conservative than those in [6, 7, 45].
Example 4.3
Consider the generalized neural network (1) with \(\delta _{1}=0.3\), \(\delta _{2}=1\), \(\sigma _{1}=0.01\), \(\sigma _{2}=0.4\), \(\beta _{1}=0.9\), \(\beta _{2}=1\),
When the LMIs of (4) and (5) in Theorem 3.1 are solved, we obtain
The maximum allowable values of \(\delta _{2}\) for different values of \(\delta _{1}\) are shown in Table 3. Figure 1 shows the response solution \(w(t)\) in Example 4.3 where \(u(t)=0\) and the initial condition \(\phi (t)=[-0.2 \ 0.2]^{T}\). Figure 2 shows the response solution \(w(t)\) in Example 4.3 where \(u(t)\) is Gaussian noise with mean 0 and variance 1, and the initial condition is \(\phi (t)=[-0.2 \ 0.2]^{T}\).
Example 4.4
In this example, the extended dissipativity performance of the generalized neural networks (32) is considered, which links the well-known performance notions of \(L_{2}\)–\(L_{\infty }\), \(H_{\infty }\), passivity, and dissipativity. We consider the GNNs (32) with the following parameters:
When we solve Example 4.4 by using LMIs of (68), (69) in Theorem 3.4, we obtain four cases:
Case I. \(L_{2}\)–\(L_{\infty }\) performance. By using the LMIs in Theorem 3.4 and letting \(\Gamma _{1}=0\), \(\Gamma _{2}=0\), \(\Gamma _{3}=\gamma ^{2} I\), and \(\Gamma _{4}=I\), the extended dissipativity performance is converted into the \(L_{2}\)–\(L_{ \infty }\) performance. The \(L_{2}\)–\(L_{\infty }\) performance index γ can be achieved for \(\delta _{1}= 0.5\), and different \(\delta _{2}\), which are shown in Table 4. Figure 3 shows the plot of \(L(t)=\sqrt{z^{T}(t)z(t)/\int _{0}^{t}u^{T}(s)u(s)\,ds}\) versus time with the initial condition \(\phi (t)=[-0.1 \ 0.1]^{T}\). Clearly, \(\sup_{t} L(t)=0.0265\), which is less than the prescribed \(L_{2}\)–\(L_{\infty }\) performance index 2.0751 in Table 4.
Case II. Passivity performance. By applying the LMIs in Theorem 3.4 and taking \(\Gamma _{1}=0\), \(\Gamma _{2}=I\), \(\Gamma _{3}=\gamma I\), and \(\Gamma _{4}=0\), the extended dissipativity performance degenerates to the passivity performance. The passivity performance index γ can be gained for \(\delta _{1}=0.5\), and various \(\delta _{2}\), which are presented in Table 4. Figure 4 shows the plot of \(P(t)=-2\int _{0}^{t} z^{T}(s)u(s)\,ds/\int _{0}^{t}u^{T}(s)u(s)\,ds\) versus time with the initial condition \(\phi (t)=[-0.1 \ 0.1]^{T}\). Clearly, \(P(t)\) converges to 0.6817, which is less than the prescribed passivity performance index 4.3061 in Table 4.
Case III. \(H_{\infty }\) performance. By using the LMIs in Theorem 3.4 and letting \(\Gamma _{1}=-I\), \(\Gamma _{2}=0\), \(\Gamma _{3}=\gamma ^{2} I\), and \(\Gamma _{4}=0\), the extended dissipativity performance becomes the \(H_{\infty }\) performance. The maximum allowable values of \(\delta _{2}\) with various γ can be obtained for \(\delta _{1}=0.5\), which are depicted in Table 5. Figure 5 shows the plot of \(H(t)=\sqrt{\int _{0}^{t} z^{T}(s)z(s)\,ds/\int _{0}^{t}u^{T}(s)u(s)\,ds}\) versus time with the initial condition \(\phi (t)=[-0.1 \ 0.1]^{T}\). Clearly, \(H(t)\) converges to 3.0415.
Case IV. Dissipativity performance. By applying the LMIs in Theorem 3.4 and taking \(\Gamma _{1}=-I\), \(\Gamma _{2}=I\), \(\Gamma _{3}=\mathcal{R}-\gamma I\), \(\mathcal{R}= 8I\), and \(\Gamma _{4}=0\), the extended dissipativity performance determines the dissipativity performance. The maximum allowable values of \(\delta _{2}\) with various γ can be achieved for \(\delta _{1}= 0.5\), which are shown in Table 5. Figure 6 shows the plot of \(D(t)= (\int _{0}^{t}-z^{T}(s)z(s)+2z^{T}(s)u(s)+8u^{T}(s)u(s)\,ds )/ (\int _{0}^{t}u^{T}(s)u(s)\,ds )\) versus time with the initial condition \(\phi (t)=[-0.1 \ 0.1]^{T}\). Clearly, \(D(t)\) converges to −1.9323.
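The four parameter settings above follow a fixed pattern, which can be collected in a small helper. This is a sketch only: it builds the weighting matrices \((\Gamma_{1},\Gamma_{2},\Gamma_{3},\Gamma_{4})\) for each case (assuming, for simplicity, equal output and input dimensions); the LMI feasibility itself would still be checked with an SDP solver such as the Matlab LMI toolbox used in the paper.

```python
import numpy as np

def gamma_weights(case, gamma, n, R=None):
    """Weighting matrices (Gamma_1, ..., Gamma_4) of the extended
    dissipativity index for the four special cases of Example 4.4.
    n is the common dimension of z(t) and u(t) (assumed equal here)."""
    I = np.eye(n)
    if case == "L2-Linf":
        return 0 * I, 0 * I, gamma**2 * I, I
    if case == "passivity":
        return 0 * I, I, gamma * I, 0 * I
    if case == "Hinf":
        return -I, 0 * I, gamma**2 * I, 0 * I
    if case == "dissipativity":
        R = 8 * np.eye(n) if R is None else R          # R = 8I as in Case IV
        return -I, I, R - gamma * I, 0 * I
    raise ValueError(f"unknown case: {case}")

g1, g2, g3, g4 = gamma_weights("Hinf", gamma=3.05, n=2)
```

Feeding each tuple into the LMIs (68), (69) of Theorem 3.4 then reproduces the corresponding performance test of Cases I–IV.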
Example 4.5
Consider the neural network (1) with \(\sigma _{1}=0.1\), \(\sigma _{2}=0.5\), \(\beta _{1}=2\), \(\beta _{2}=3\),
Then, the extended dissipativity performance of system (1) is determined by choosing \(\Gamma _{1}=-I\), \(\Gamma _{2}=I\), \(\Gamma _{3}=(8-\gamma )I\), and \(\Gamma _{4}=I\). By solving the LMIs (59) and (60) in Theorem 3.3, we obtain the maximum allowable values of \(\delta _{2}\) for different values of \(\delta _{1}\), which are shown in Table 6. Figure 7 shows the response solution \(w(t)\) in Example 4.5 with \(u(t)=0\) and the initial condition \(\phi (t)=[-0.1 \ 0.1]^{T}\). Figure 8 shows the response solution \(w(t)\) in Example 4.5 where \(u(t)\) is Gaussian noise with mean 0 and variance 1, and the initial condition is \(\phi (t)=[-0.1 \ 0.1]^{T}\).
5 Conclusions
In this paper, we focus on the problem of asymptotic stability and extended dissipativity analysis for the generalized neural networks with interval discrete and distributed time-varying delays. Firstly, we obtain new asymptotic stability criteria for the generalized neural networks, and also an improved asymptotic stability criterion for a special case of the generalized neural networks, by using a suitable Lyapunov–Krasovskii functional (LKF), an improved Wirtinger single integral inequality, a novel triple integral inequality, and a convex combination technique. Then, the asymptotic stability results are applied to extended dissipativity analysis, which covers the \(H_{\infty }\), \(L_{2}\)–\(L_{\infty }\), passivity, and dissipativity performances by setting parameters in the general performance index. Finally, numerical examples demonstrate that the proposed criteria are less conservative than those in the literature. Moreover, we present numerical examples for the asymptotic stability and extended dissipativity performance of the generalized neural networks, including a special case of the generalized neural networks. In future work, the results and methods derived in this paper are expected to be applied to other systems such as fuzzy generalized neural networks, generalized neural networks with Markovian switching, complex dynamical networks, and so on [10, 32, 46].
Availability of data and materials
Not applicable.
References
Cichocki, A., Unbehauen, R.: Neural Networks for Optimization and Signal Processing. Wiley, Chichester (1993)
Watta, P.B., Wang, K., Hassoun, M.H.: Recurrent neural nets as dynamical Boolean systems with applications to associative memory. IEEE Trans. Neural Netw. 8, 1268–1280 (1997)
Cohen, M.A., Grossberg, S.: Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst. Man Cybern. 13, 815–826 (1983)
Meng, X.Z., Zhao, S.N., Zhang, W.Y.: Adaptive dynamics analysis of a predator–prey model with selective disturbance. Appl. Math. Comput. 266, 946–958 (2015)
Chen, Y., Zheng, W.X.: Stochastic state estimation for neural networks with distributed delays and Markovian jump. Neural Netw. 25, 14–20 (2012)
Chen, J., Sun, J., Liu, G.P., Rees, D.: New delay-dependent stability criteria for neural networks with time-varying interval delay. Phys. Lett. A 374, 4397–4405 (2010)
Wang, J.A., Ma, X.H., Wen, X.Y.: Less conservative stability criteria for neural networks with interval time-varying delay based on delay-partitioning approach. Neurocomputing 155, 146–152 (2015)
Lv, X., Li, X.: Delay-dependent dissipativity of neural networks with mixed non-differentiable interval delays. Neurocomputing 267, 85–94 (2017)
Yu, H.J., He, Y., Wu, M.: Delay-dependent state estimation for neural networks with time-varying delay. Neurocomputing 275, 881–887 (2018)
Li, Q., Liang, J.: Improved stabilization results for Markovian switching CVNNs with partly unknown transition rates. Neural Process. Lett. 52(2), 1189–1205 (2020)
Zeng, H.B., Park, J.H., Zhang, C.F., Wang, W.: Stability and dissipativity analysis of static neural networks with interval time-varying delay. J. Franklin Inst. 352, 1284–1295 (2015)
Zhang, X.M., Han, Q.L.: Global asymptotic stability for a class of generalized neural networks with interval time-varying delays. IEEE Trans. Neural Netw. 22, 1180–1192 (2011)
Lin, W.J., He, Y., Zhang, C.K., Long, F., Wu, M.: Dissipativity analysis for neural networks with two-delay components using an extended reciprocally convex matrix inequality. Inf. Sci. 450, 169–181 (2018)
Manivannan, R., Samidurai, R., Cao, J., Alsaedi, A., Alsaadi, F.E.: Global exponential stability and dissipativity of generalized neural networks with time-varying delay signals. Neural Netw. 87, 149–159 (2017)
Liu, Y., Lee, S.M., Kwon, O.M., Park, J.H.: New approach to stability criteria for generalized neural networks with interval time-varying delays. Neurocomputing 149, 1544–1551 (2015)
Rakkiyappan, R., Sivasamy, R., Park, J.H., Lee, T.H.: An improved stability criterion for generalized neural networks with additive time-varying delays. Neurocomputing 171, 615–624 (2016)
Shu, Y., Liu, X.G., Qiu, S., Wang, F.: Dissipativity analysis for generalized neural networks with Markovian jump parameters and time-varying delay. Nonlinear Dyn. 89, 2125–2140 (2017)
Li, Z., Bai, Y., Huang, C., Yan, H., Mu, S.: Improved stability analysis for delayed neural networks. IEEE Trans. Neural Netw. Learn. Syst. 29, 4535–4541 (2018)
Du, Y., Liu, X., Zhong, S.: Robust reliable \(H_{\infty }\) control for neural networks with mixed time delays. Chaos Solitons Fractals 91, 1–8 (2016)
Rajavel, S., Samidurai, R., Cao, J., Alsaedi, A., Ahmad, B.: Finite-time non-fragile passivity control for neural networks with time-varying delay. Appl. Math. Comput. 297, 145–158 (2017)
Thuan, M.V., Trinh, H., Hien, L.V.: New inequality-based approach to passivity analysis of neural networks with interval time-varying delay. Neurocomputing 194, 301–307 (2016)
Maharajan, C., Raja, R., Cao, J., Rajchakit, G., Alsaedi, A.: Novel results on passivity and exponential passivity for multiple discrete delayed neutral-type neural networks with leakage and distributed time-delays. Chaos Solitons Fractals 115, 268–282 (2018)
Senthilraj, S., Raja, R., Zhu, Q., Samidurai, R., Yao, Z.: Exponential passivity analysis of stochastic neural networks with leakage, distributed delays and Markovian jumping parameters. Neurocomputing 175, 401–410 (2015)
Yotha, N., Botmart, T., Mukdasai, K., Weera, W.: Improved delay-dependent approach to passivity analysis for uncertain neural networks with discrete interval and distributed time-varying delays. Vietnam J. Math. 45(4), 721–736 (2017)
Cao, J., Manivannan, R., Chong, K.T., Lv, X.: Enhanced \(L_{2}\)–\(L_{\infty }\) state estimation design for delayed neural networks including leakage term via quadratic-type generalized free-matrix-based integral inequality. J. Franklin Inst. 356, 7371–7392 (2019)
Kumar, S.V., Anthoni, S.M., Raja, R.: Dissipative analysis for aircraft flight control systems with randomly occurring uncertainties via non-fragile sampled-data control. Math. Comput. Simul. 155, 217–226 (2019)
Raja, R., Karthik Raja, U., Samidurai, R., Leelamani, A.: Improved stochastic dissipativity of uncertain discrete-time neural networks with multiple delays and impulses. Int. J. Mach. Learn. Cybern. 6(2), 289–305 (2015)
Willems, J.C.: Dissipative dynamical systems part I: general theory. Arch. Ration. Mech. Anal. 45(5), 321–351 (1972)
Niu, Y., Wang, X., Lu, J.: Dissipative-based adaptive neural control for nonlinear systems. J. Control Theory Appl. 2, 126–130 (2004)
Jeltsema, D., Scherpen, J.M.A.: Tuning of passivity-preserving controllers for switched-mode power converters. IEEE Trans. Autom. Control 49, 1333–1344 (2004)
Senthilraj, S., Raja, R., Cao, J., Fardoun, H.M.: Dissipativity analysis of stochastic fuzzy neural networks with randomly occurring uncertainties using delay dividing approach. Nonlinear Anal., Model. Control 24(4), 561–581 (2019)
Li, Q., Liang, J.: Dissipativity of the stochastic Markovian switching CVNNs with randomly occurring uncertainties and general uncertain transition rates. Int. J. Syst. Sci. 51(6), 1102–1118 (2020)
Zhang, B., Zheng, W.X., Xu, S.: Filtering of Markovian jump delay systems based on a new performance index. IEEE Trans. Circuits Syst. I, Regul. Pap. 60, 1250–1263 (2013)
Lee, T.H., Park, M.J., Park, J.H., Kwon, O.M., Lee, S.M.: Extended dissipative analysis for neural networks with time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 25, 1936–1941 (2014)
Wang, X., She, K., Zhong, S., Cheng, J.: On extended dissipativity analysis for neural networks with time-varying delay and general activation functions. Adv. Differ. Equ. 2016, 1 (2016)
Lin, W.J., He, Y., Zhang, C.K., Wu, M., Shen, J.: Extended dissipativity analysis for Markovian jump neural networks with time-varying delay via delay-product-type functionals. IEEE Trans. Neural Netw. Learn. Syst. 30(8), 2527–2537 (2019)
Manivannan, R., Mahendrakumar, G., Samidurai, R., Cao, J., Alsaedi, A.: Exponential stability and extended dissipativity criteria for generalized neural networks with interval time-varying delay signals. J. Franklin Inst. 354, 4353–4376 (2017)
Sun, J., Liu, G.P., Chen, J.: Delay-dependent stability and stabilization of neutral time-delay systems. Int. J. Robust Nonlinear Control 19, 1364–1375 (2009)
Xie, W., Zhu, H., Zhong, S., Zhang, D., Shi, K., Cheng, J.: Extended dissipative estimator design for uncertain switched delayed neural networks via a novel triple integral inequality. Appl. Math. Comput. 335, 82–102 (2018)
Park, P.G., Lee, W.I., Lee, S.Y.: Auxiliary function-based integral/summation inequalities: application to continuous/discrete time-delay systems. Int. J. Control. Autom. Syst. 14, 3–11 (2016)
Farnam, A., Esfanjani, R.M.: Improved linear matrix inequality approach to stability analysis of linear systems with interval time-varying delays. J. Comput. Appl. Math. 294, 49–56 (2016)
Saravanan, S., Ali, M.S., Alsaedi, A., Ahmad, B.: Finite-time passivity for neutral-type neural networks with time-varying delays—via auxiliary function-based integral inequalities. Nonlinear Anal., Model. Control 25(2), 206–224 (2020)
Yang, Q., Ren, Q., Xie, X.: New delay dependent stability criteria for recurrent neural networks with interval time-varying delay. ISA Trans. 53, 994–999 (2014)
Lee, W.I., Lee, S.Y., Park, P.: Improved stability criteria for recurrent neural networks with interval time-varying delays via new Lyapunov functionals. Neurocomputing 155, 128–134 (2015)
Saravanakumar, R., Ali, M.S., Ahn, C.K., Karimi, H.R., Shi, P.: Stability of Markovian jump generalized neural networks with interval time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 28, 1840–1850 (2017)
Niamsup, P., Botmart, T., Weera, W.: Modified function projective synchronization of complex dynamical networks with mixed time-varying and asymmetric coupling delays via new hybrid pinning adaptive control. Adv. Differ. Equ. 2017(1), 1 (2017)
Acknowledgements
The authors thank the reviewers for their valuable comments and suggestions, which led to the improvement of the content of the paper.
Funding
The first author was supported by the Science Achievement Scholarship of Thailand (SAST). The second author was financially supported by Khon Kaen University. The third author was supported by the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (grant number: B05F630095).
Contributions
The authors claim to have contributed significantly and equally to this work. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Luemsai, S., Botmart, T. & Weera, W. Novel extended dissipativity criteria for generalized neural networks with interval discrete and distributed time-varying delays. Adv Differ Equ 2021, 42 (2021). https://doi.org/10.1186/s13662-020-03210-x