
Novel extended dissipativity criteria for generalized neural networks with interval discrete and distributed time-varying delays

Abstract

The problem of asymptotic stability and extended dissipativity analysis for generalized neural networks with interval discrete and distributed time-varying delays is investigated. Based on a suitable Lyapunov–Krasovskii functional (LKF), an improved Wirtinger single integral inequality, a novel triple integral inequality, and the convex combination technique, new asymptotic stability and extended dissipativity criteria are derived for the generalized neural networks with interval discrete and distributed time-varying delays. With the same methods, less conservative asymptotic stability criteria are obtained for a special case of the generalized neural networks. Using the Matlab LMI toolbox, the derived criteria are expressed in terms of linear matrix inequalities (LMIs) that cover \(H_{\infty }\), \(L_{2}\)–\(L_{\infty }\), passivity, and dissipativity performance by setting parameters in the general performance index. Finally, numerical examples demonstrate that the proposed criteria are less conservative than existing results in the literature; further examples illustrate the asymptotic stability and extended dissipativity performance of the generalized neural networks, including a special case.

1 Introduction

In numerous science and engineering fields, neural networks (NNs) have been studied extensively in recent years due to their wide range of applications, such as signal processing, fault diagnosis, pattern recognition, associative memory, reproducing moving pictures, optimization problems, and industrial automation [1–5]. Theoretical stability analysis of the equilibrium is a prerequisite for these applications. To obtain a model for theoretical analysis, the important factors that affect the system are examined, and one such factor is the time delay. It is well known that time delays always occur in real-world situations and cause oscillation, instability, and poor performance of the system. Furthermore, the time delay in neural networks is caused by the finite speed of information processing and the communication time between neurons. Therefore, many researchers are interested in investigating the stability and performance of neural networks with time delay. The problem of stability and performance analysis for neural networks with constant, discrete, and distributed time-varying delays has received much attention [6–10].

In addition, neural networks can be classified into two types, local-field neural networks (LFNNs) and static neural networks (SNNs), based on how the neuron states are defined. Many studies have treated the stability or performance of LFNNs and SNNs separately. For example, Zeng et al. [11] investigated the stability and dissipativity problem of static neural networks with interval time-varying delay. In [6], the stability of local-field neural networks with interval time-varying delay was investigated, and the stability criteria were improved by using some new techniques. Moreover, in 2011, Zhang and Han [12] were the first to combine them into a unified system model, called generalized neural networks (GNNs), which covers both the SNN and LFNN models under certain assumptions. Since then, the GNN model with time delay has been used extensively for stability and performance analyses [13–18].

On the other hand, the performance of neural networks has been analyzed by a variety of techniques, which often involve input–output relationships and play an important role in science and engineering applications. For example, Du et al. [19] studied the problem of robust reliable \(H_{\infty }\) control for neural networks with mixed time delays based on the LMI technique and Lyapunov stability theory. In [20], the problem of finite-time nonfragile passivity control for neural networks with time-varying delay was investigated based on a new Lyapunov–Krasovskii functional with triple and quadruple integral terms and a Wirtinger-type inequality technique. Passivity performance analysis for neural networks is examined in [21–24]. In [25], the issue of \(L_{2}\)–\(L_{\infty }\) state estimation design for delayed neural networks (NNs) is considered via a quadratic-type generalized free-matrix-based integral inequality. The problem of dissipative analysis for aircraft flight control systems and uncertain discrete-time neural networks is addressed in [26, 27]. It is well known that the concept of dissipativity was first studied by Willems [28]. Many researchers have studied dissipativity theory, since it not only covers \(H_{\infty }\) and passivity performance but also lends itself to convenient control structures in engineering applications such as chemical process control [29] and power converters [30]. Dissipativity theory is widely studied in neural networks because it provides a fundamental framework for the analysis and design of control systems and can keep the system internally stable. Recently, many researchers have studied dissipativity for stochastic fuzzy neural networks, static neural networks, stochastic Markovian switching CVNNs, and so on [11, 31, 32]. However, dissipativity analysis does not cover the \(L_{2}\)–\(L_{\infty }\) performance. To fill this gap, Zhang et al. [33] created a new general performance index, called the extended dissipativity performance index, which links all of these performance indexes. Consequently, the extended dissipativity of neural networks with time delays has been studied in many works, and the results have been reported in [34–36]. Moreover, extended dissipative analysis was studied for GNNs with interval time-varying delay signals [37]. It is therefore interesting to study the extended dissipativity performance for GNNs with interval discrete and distributed time-varying delays, which has not been studied yet.

The problem of asymptotic stability and extended dissipativity analysis for the generalized neural networks with interval discrete and distributed time-varying delays is investigated in this paper. The main contributions of this research are as follows:

• We construct more general systems named the generalized neural networks that cover both SNNs and LFNNs. Moreover, the interval discrete and distributed time-varying delays are not necessarily differentiable functions, the lower bound of the delays is not restricted to be 0, the activation functions are different, and the output contains terms of the state vector with interval discrete time-varying delay and the disturbance.

• We create a suitable Lyapunov–Krasovskii functional (LKF) for application in asymptotic stability and extended dissipativity analysis of the GNNs with new inequalities.

• For the first time, we use a novel triple integral inequality and an improved Wirtinger single integral inequality together with the convex combination technique to estimate the upper bound of the interval discrete time-varying delay; the resulting admissible upper bounds are larger than those reported in other references.

• We obtain new asymptotic stability and extended dissipativity criteria that cover \(H_{\infty }\), \(L_{2}\)–\(L_{\infty }\), passivity, and dissipativity performance by setting parameters in the general performance index.

This paper is structured in five sections as follows. In Sect. 2, the generalized neural networks model is formulated, and some definitions, lemmas, and assumptions are introduced. In Sect. 3, we show the asymptotic stability and extended dissipativity criteria for the generalized neural networks and a special case of the generalized neural networks. Numerical examples are shown in Sect. 4 to demonstrate the effectiveness of asymptotic stability and extended dissipativity performance for the generalized neural networks, including a special case of the generalized neural networks. Finally, conclusions are addressed in Sect. 5.

2 Network model and preliminaries

Notations

Throughout this paper, \(\mathbb{R}\) and \(\mathbb{R}^{+}\) represent the set of real numbers and the set of nonnegative real numbers, respectively; \(\mathbb{R}^{n} \) and \(\mathbb{R}^{n\times r}\) denote the n-dimensional Euclidean space and the set of \(n\times r\) real matrices, respectively; I is the identity matrix with appropriate dimensions; \(\mathcal{C}([-\varrho ,0],\mathbb{R}^{n})\) represents the space of all continuous vector-valued functions mapping \([-\varrho ,0]\) into \(\mathbb{R}^{n}\), where \(\varrho \in \mathbb{R}^{+}\); \(\mathcal{L}_{2}[0,\infty )\) denotes the space of functions \(\phi : \mathbb{R}^{+} \rightarrow \mathbb{R}^{n}\) with the norm \(\Vert \phi \Vert _{\mathcal{L}_{2}} = [ \int _{0}^{\infty } \vert \phi ( \theta ) \vert ^{2}\,d\theta ]^{\frac{1}{2}}\); \(P^{T}\) is the transpose of the matrix P; \(P = P^{T}\) denotes that the matrix P is a symmetric matrix; \(P>(\geq ) 0\) means that the symmetric matrix P is positive definite (positive semidefinite); \(P<(\leq )0\) denotes that the symmetric matrix P is negative definite (negative semidefinite); \(\operatorname{Sym}\{P\}\) represents \(P+P^{T}\); \(e_{i}\) represents the unit column vector having 1 on its ith row and zeros elsewhere.

Consider the following generalized neural networks model with both interval discrete and distributed time-varying delays:

$$\begin{aligned}& \begin{aligned}[b] \dot{w}(t)&=-Cw(t)+B_{0}f \bigl(Ww(t) \bigr)+B_{1}g \bigl(Ww \bigl(t-\delta (t) \bigr) \bigr) \\ &\quad{} +B_{2} \int _{t-\sigma _{2}(t)}^{t-\sigma _{1}(t)}h \bigl(Ww(s) \bigr) \,ds+B_{3}u (t), \end{aligned} \\& z(t)=D_{1}w(t)+D_{2} w \bigl(t-\delta (t) \bigr)+D_{3} u(t), \\& w(t)=\phi (t),\quad \forall t\in [-\varrho ,0], \end{aligned}$$
(1)

where \(w(t)=[w_{1}(t),w_{2}(t),\ldots,w_{n}(t)]^{T} \in \mathbb{R}^{n}\) is the neuron state vector; \(z(t) \in \mathbb{R}^{n}\) is the output vector; \(u(t) \in \mathbb{R}^{n}\) is the deterministic disturbance input which belongs to \(\mathcal{L}_{2} [0,\infty )\); \(f(\cdot ),g(\cdot ),h(\cdot ) \in \mathbb{R}^{n}\) are the neuron activation functions; \(C=\operatorname{diag}\{c_{1},c_{2},\ldots,c_{n}\}\) is a positive diagonal matrix; \(B_{0}\), \(B_{1}\), \(B_{2}\), and W are connection weight matrices; \(B_{3}\), \(D_{1}\), \(D_{2}\), and \(D_{3}\) are given real matrices; \(\phi (t)\in \mathcal{C}([-\varrho ,0],\mathbb{R}^{n})\) is the initial function. The variables \(\delta (t)\) and \(\sigma _{i}(t)\) (\(i=1,2\)) denote the interval discrete and distributed time-varying delays, which satisfy \(0 \leq \delta _{1} \leq \delta (t) \leq \delta _{2}\) and \(0 \leq \sigma _{1} \leq \sigma _{1}(t) \leq \sigma _{2}(t) \leq \sigma _{2}\), where \(\delta _{1}\), \(\delta _{2}\), \(\sigma _{1}\), \(\sigma _{2}\), and \(\varrho =\max \{ \delta _{2},\sigma _{2}\}\) are known real constants.
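For intuition, model (1) can be simulated directly once the delays are discretized against a stored state history. The following is a minimal forward-Euler sketch, assuming illustrative weight matrices, tanh activations, a constant initial function, and a smooth delay profile; none of these values come from the paper.

```python
# Minimal forward-Euler simulation of model (1) with u(t) = 0.
# All matrices, delays, and the initial function are illustrative assumptions.
import numpy as np

n, dt, T = 2, 1e-3, 10.0
C  = np.diag([1.5, 1.2])                        # positive diagonal matrix C
B0 = np.array([[0.2, -0.1], [0.1, 0.3]])        # connection weight matrices
B1 = np.array([[0.1, 0.2], [-0.2, 0.1]])
B2 = np.array([[0.05, 0.0], [0.0, 0.05]])
W  = np.eye(n)
f = g = h = np.tanh                             # activations satisfying (A1)-(A3)

d1, d2, s1, s2 = 0.1, 0.8, 0.1, 0.5             # delay bounds
delta = lambda t: d1 + 0.5*(d2 - d1)*(1 + np.sin(t))   # delta_1 <= delta(t) <= delta_2

steps = int(T/dt)
hist  = int(max(d2, s2)/dt) + 1                 # history buffer for t in [-rho, 0]
w = np.zeros((steps + hist, n))
w[:hist] = 0.5                                  # constant initial function phi

for k in range(hist, steps + hist):
    t  = (k - hist)*dt
    wd = w[k - int(delta(t)/dt)]                # w(t - delta(t))
    seg  = w[k - int(s2/dt):k - int(s1/dt)]     # states on [t - sigma_2, t - sigma_1]
    dist = h(seg @ W.T).sum(axis=0)*dt          # int h(W w(s)) ds (rectangle rule)
    dw = -C @ w[k-1] + B0 @ f(W @ w[k-1]) + B1 @ g(W @ wd) + B2 @ dist
    w[k] = w[k-1] + dt*dw

print("state at t = T:", w[-1])                 # decays toward zero when stable
```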

The neuron activation functions \(f(\cdot )\), \(g(\cdot )\), and \(h(\cdot )\) satisfy the following conditions:

(A1):

The neuron activation function f is continuous and there exist constants \(F^{-}_{i}\) and \(F^{+}_{i}\) such that

$$ F^{-}_{i} \leq \frac{f_{i}(\alpha _{1})-f_{i}(\alpha _{2})}{\alpha _{1} -\alpha _{2}} \leq F^{+}_{i} $$

for all \(\alpha _{1} \neq \alpha _{2}\); we also let \(\tilde{F}_{i} =\max \{ \vert F^{-}_{i} \vert , \vert F^{+}_{i} \vert \}\), where \(f(\cdot )=[f_{1}(\cdot ),f_{2}(\cdot ),\ldots,f_{n}(\cdot )]^{T}\) and for any \(i \in \{1,2,\ldots,n\}\), \(f_{i}(0)=0\).

(A2):

The neuron activation function g is continuous and there exist constants \(G^{-}_{i}\) and \(G^{+}_{i}\) such that

$$ G^{-}_{i} \leq \frac{g_{i}(\alpha _{1})-g_{i}(\alpha _{2})}{\alpha _{1} -\alpha _{2}} \leq G^{+}_{i} $$

for all \(\alpha _{1} \neq \alpha _{2}\); and we let \(\tilde{G}_{i} =\max \{ \vert G^{-}_{i} \vert , \vert G^{+}_{i} \vert \}\), where \(g(\cdot )=[g_{1}(\cdot ),g_{2}(\cdot ),\ldots,g_{n}(\cdot )]^{T}\) and for any \(i \in \{1,2,\ldots,n\}\), \(g_{i}(0)=0\).

(A3):

The neuron activation function h is continuous and there exist constants \(H^{-}_{i}\) and \(H^{+}_{i}\) such that

$$ H^{-}_{i} \leq \frac{h_{i}(\alpha _{1})-h_{i}(\alpha _{2})}{\alpha _{1} -\alpha _{2}} \leq H^{+}_{i} $$

for all \(\alpha _{1} \neq \alpha _{2}\); and we let \(\tilde{H}_{i} =\max \{ \vert H^{-}_{i} \vert , \vert H^{+}_{i} \vert \}\), where \(h(\cdot )=[h_{1}(\cdot ),h_{2}(\cdot ),\ldots,h_{n}(\cdot )]^{T}\) and for any \(i \in \{1,2,\ldots,n\}\), \(h_{i}(0)=0\).
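For instance, \(f_{i}=\tanh \) satisfies (A1) with \(F^{-}_{i}=0\) and \(F^{+}_{i}=1\), since \(\tanh (0)=0\) and \(0<\tanh '(x)\leq 1\). The quick numerical spot-check below (our own sketch, not from the paper) confirms that the difference quotients stay inside this sector.

```python
# Spot-check of the sector condition (A1) for f_i = tanh (F^- = 0, F^+ = 1).
import numpy as np

rng = np.random.default_rng(0)
a1 = rng.uniform(-5, 5, 10**6)
a2 = rng.uniform(-5, 5, 10**6)
keep = a1 != a2
q = (np.tanh(a1[keep]) - np.tanh(a2[keep])) / (a1[keep] - a2[keep])
print(q.min() >= 0.0, q.max() <= 1.0)   # expected: True True
```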

Remark 1

The NNs model (1) provides a general form of delayed NNs model, which covers both LFNNs and SNNs; it reduces to each of them by an appropriate choice of \(B_{0}\), \(B_{1}\), \(B_{2}\), and W. That is, if we set \(W=I\), the NNs model (1) leads to the following model, namely the LFNNs:

$$\begin{aligned}& \begin{aligned} \dot{w}(t)&=-Cw(t)+B_{0}f \bigl(w(t) \bigr)+B_{1}g \bigl(w \bigl(t-\delta (t) \bigr) \bigr)+B_{2} \int _{t- \sigma _{2}(t)}^{t-\sigma _{1}(t)}h \bigl(w(s) \bigr)\,ds \\ &\quad{} +B_{3}u (t), \end{aligned} \\& z(t)=D_{1}w(t)+D_{2} w \bigl(t-\delta (t) \bigr)+D_{3} u(t). \end{aligned}$$

In the same way, if we set \(B_{0}=B_{1}=B_{2}=I\), the NNs model (1) reduces to the following model, namely the SNNs:

$$\begin{aligned}& \begin{aligned} \dot{w}(t)&=-Cw(t)+f \bigl(Ww(t) \bigr)+g \bigl(Ww \bigl(t- \delta (t) \bigr) \bigr)+ \int _{t-\sigma _{2}(t)}^{t- \sigma _{1}(t)}h \bigl(Ww(s) \bigr)\,ds \\ &\quad{} +B_{3}u (t), \end{aligned} \\& z(t)=D_{1}w(t)+D_{2} w \bigl(t-\delta (t) \bigr)+D_{3} u(t). \end{aligned}$$

Before moving to the main results, we introduce the definitions, lemmas, and assumptions which are necessary to state the new criteria.

Assumption (H1)

([34])

For given real symmetric matrices \(\Gamma _{1}\leq 0\), \(\Gamma _{3}, \Gamma _{4} \geq 0\), and a real matrix \(\Gamma _{2}\), the following conditions are satisfied:

(1) \(\Vert D_{3} \Vert \cdot \Vert \Gamma _{4} \Vert =0\);

(2) \(( \Vert \Gamma _{1} \Vert + \Vert \Gamma _{2} \Vert ) \cdot \Vert \Gamma _{4} \Vert =0\);

(3) \(D_{3}^{T}\Gamma _{1} D_{3}+D_{3}^{T}\Gamma _{2}+\Gamma _{2}^{T}D_{3}+ \Gamma _{3}>0\).

Definition 2.1

([34])

For given matrices \(\Gamma _{1}\), \(\Gamma _{2}\), \(\Gamma _{3}\), and \(\Gamma _{4}\) satisfying Assumption (H1), system (1) is said to be extended dissipative, if, under the zero initial condition, there exists a scalar λ such that the following inequality holds for any \(t_{f} \geq 0\) and all \(u(t)\in \mathcal{L}_{2} [0,\infty )\):

$$\begin{aligned} \int _{0}^{t_{f}}J(s)\,ds \geq \sup _{0\leq t \leq t_{f}} z^{T}(t) \Gamma _{4} z(t)+ \lambda , \end{aligned}$$
(2)

where

$$\begin{aligned} J(s)=z^{T}(s)\Gamma _{1} z(s)+2z^{T}(s)\Gamma _{2} u(s)+u^{T}(s) \Gamma _{3} u(s). \end{aligned}$$
(3)

Remark 2

The inequality (2) shows that the new performance measure is quite general: by setting the weighting matrices \(\Gamma _{i}\), \(i=1,2,3,4\), appropriately, it recovers several standard performance indices (a small helper illustrating these settings is sketched after this list), i.e.,

  • If \(\Gamma _{1}=0\), \(\Gamma _{2}=0\), \(\Gamma _{3}=\gamma ^{2}I\), \(\Gamma _{4}=I\), and \(\lambda =0\) then the inequality (2) describes the \(L_{2}\)–\(L_{\infty }\) performance;

  • If \(\Gamma _{1}=-I\), \(\Gamma _{2}=0\), \(\Gamma _{3}=\gamma ^{2}I\), \(\Gamma _{4}=0\), and \(\lambda =0\) then the inequality (2) determines the \(H_{\infty }\) performance;

  • If \(\Gamma _{1}=0\), \(\Gamma _{2}=I\), \(\Gamma _{3}=\gamma I\), \(\Gamma _{4}=0\), and \(\lambda =0\) then the inequality (2) reduces to the passivity performance;

  • If \(\Gamma _{1}=\mathcal{Q}\), \(\Gamma _{2}=\mathcal{S}\), \(\Gamma _{3}= \mathcal{R}-\gamma I\), \(\Gamma _{4}=0\), and \(\lambda =0\) then the inequality (2) degenerates to the \((\mathcal{Q},\mathcal{S},\mathcal{R})\)γ-dissipativity performance.
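Since these four cases differ only in the weighting matrices, they are easy to bundle in code. The helper below is a hypothetical illustration (the function name and signature are ours) of how one might generate \((\Gamma _{1},\Gamma _{2},\Gamma _{3},\Gamma _{4})\) for each index before assembling the LMIs.

```python
# Hypothetical helper returning (Gamma_1, Gamma_2, Gamma_3, Gamma_4) for the
# special cases in Remark 2; gamma > 0, and Q, S, R are the weights of the
# (Q,S,R)-gamma-dissipativity case. Names are ours, not from the paper.
import numpy as np

def performance_weights(kind, n, gamma=1.0, Q=None, S=None, R=None):
    I, Z = np.eye(n), np.zeros((n, n))
    if kind == "L2-Linf":
        return Z, Z, gamma**2 * I, I
    if kind == "Hinf":
        return -I, Z, gamma**2 * I, Z
    if kind == "passivity":
        return Z, I, gamma * I, Z
    if kind == "QSR-dissipativity":
        return Q, S, R - gamma * I, Z
    raise ValueError(f"unknown performance index: {kind}")

G1, G2, G3, G4 = performance_weights("Hinf", n=2, gamma=0.9)
```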

Lemma 2.2

([38])

For a given symmetric positive definite matrix \(P\in \mathbb{R}^{n\times n}\), scalars t, a, and b satisfying \(b\geq a\geq 0\), and a vector function \(w:[t-b,t]\rightarrow \mathbb{R}^{n}\) such that the integrals involved are well defined, the following inequality holds:

$$\begin{aligned} \frac{1}{2} \bigl(b^{2}-a^{2} \bigr) \int _{-b}^{-a} \int _{t+\theta }^{t} w^{T}(s) P w(s)\,ds \,d\theta \geq & \int _{-b}^{-a} \int _{t+\theta }^{t} w^{T}(s)\,ds\,d \theta \\ & {} \times P \int _{-b}^{-a} \int _{t+\theta }^{t} w(s)\,ds\,d\theta . \end{aligned}$$
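As a sanity check, Lemma 2.2 can be verified numerically for concrete choices of w, P, a, and b; the sketch below (our own illustrative trajectory and matrix) evaluates both sides of the inequality by quadrature.

```python
# Numerical spot-check of Lemma 2.2 with illustrative w, P, a, b.
import numpy as np

a, b, t, m = 0.2, 1.0, 0.0, 600
theta = np.linspace(-b, -a, m)
P = np.array([[1.5, 0.2], [0.2, 0.8]])
w = lambda s: np.stack([np.sin(2*s), np.cos(s)])     # w(s) in R^2, vectorized

quad = np.zeros(m)          # int_{t+theta}^{t} w^T P w ds for each theta
vec  = np.zeros((2, m))     # int_{t+theta}^{t} w ds        for each theta
for k, th in enumerate(theta):
    s  = np.linspace(t + th, t, 600)
    ws = w(s)                                         # shape (2, 600)
    quad[k]   = np.trapz(np.einsum('it,ij,jt->t', ws, P, ws), s)
    vec[:, k] = np.trapz(ws, s, axis=1)

lhs = 0.5*(b**2 - a**2)*np.trapz(quad, theta)
v   = np.trapz(vec, theta, axis=1)                    # double integral of w
print(lhs >= v @ P @ v - 1e-6)                        # expected: True
```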

Lemma 2.3

([39])

For any constant matrices \(P\in \mathbb{R}^{n\times n}\), \(X\in \mathbb{R}^{2n\times 2n}\), and \(Y\in \mathbb{R}^{2n\times n}\) with \(\begin{bmatrix} X & Y \\ * & P \end{bmatrix} \geq 0\), and such that the following inequality is well defined, it holds that

$$\begin{aligned}& - \int _{-b}^{-a} \int _{\theta }^{0} \int _{t+\beta }^{t} \dot{w}^{T}(s) P \dot{w}(s) \,ds \,d \beta \,d\theta \\& \quad \leq \Omega _{1}^{T}(t) \biggl[ \bigl(b^{2}-a^{2} \bigr)\operatorname{Sym}\{Y\Theta \}+\frac{b^{3}-a^{3}}{6}X \biggr]\Omega _{1}(t), \end{aligned}$$

where \(\Theta =[I,-I]\) and \(\Omega _{1}= [w^{T}(t),\int _{-b}^{-a}\int _{t+\theta }^{t} \frac{2}{b^{2}-a^{2}}w^{T}(s)\,ds\,d \theta ]^{T}\).

Lemma 2.4

([40])

For a given matrix \(P>0\), the following inequality holds for any continuously differentiable function \(w:[a,b]\rightarrow \mathbb{R}^{n}\):

$$\begin{aligned} - \int _{a}^{b}& \dot{w}^{T} (s) P \dot{w}(s)\,d s \leq - \frac{1}{b-a} \bigl( \Omega ^{T}_{2}P \Omega _{2}+3\Omega ^{T}_{3}P \Omega _{3}+5\Omega ^{T}_{4}P\Omega _{4}+7 \Omega ^{T}_{5}P\Omega _{5} \bigr), \end{aligned}$$

where

$$\begin{aligned}& \Omega _{2} = w(b)-w(a), \\& \Omega _{3} = w(b)+w(a)-\frac{2}{b-a} \int _{a}^{b}w(s)\,ds, \\& \Omega _{4} = w(b)-w(a)+\frac{6}{b-a} \int _{a}^{b}w(s)\,ds- \frac{12}{(b-a)^{2}} \int _{a}^{b} \int _{u}^{b}w(s)\,ds\,du, \\& \begin{aligned} \Omega _{5} &= w(b)+w(a)-\frac{12}{b-a} \int _{a}^{b}w(s)\,ds+ \frac{60}{(b-a)^{2}} \int _{a}^{b} \int _{u}^{b}w(s)\,ds\,du \\ &\quad{} -\frac{120}{(b-a)^{3}} \int _{a}^{b} \int _{u}^{b} \int _{s}^{b} w(r)\,dr\,ds\,du. \end{aligned} \end{aligned}$$
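The bound of Lemma 2.4 can likewise be checked numerically on a sample trajectory. In the sketch below (the test function and P are our own choices), the nested integrals appearing in \(\Omega _{4}\) and \(\Omega _{5}\) are approximated by suffix sums.

```python
# Numerical spot-check of Lemma 2.4 on a sample trajectory.
import numpy as np

a, b, m = 0.0, 1.0, 20001
s = np.linspace(a, b, m); ds = s[1] - s[0]
w  = np.stack([np.sin(3*s), np.cos(2*s) - 1.0])   # w(s) in R^2 (rows = components)
wd = np.stack([3*np.cos(3*s), -2*np.sin(2*s)])    # exact derivative of w
P  = np.array([[2.0, 0.3], [0.3, 1.0]])           # P > 0

suffix = lambda F: np.cumsum(F[:, ::-1], axis=1)[:, ::-1]*ds  # ~ int_{s_k}^{b} F
t1 = suffix(w)                                    # int_u^b w(s) ds
t2 = suffix(t1)                                   # int_u^b int_s^b w(r) dr ds
iw, iiw, iiiw = w.sum(1)*ds, t1.sum(1)*ds, t2.sum(1)*ds

O2 = w[:, -1] - w[:, 0]
O3 = w[:, -1] + w[:, 0] - 2/(b - a)*iw
O4 = w[:, -1] - w[:, 0] + 6/(b - a)*iw - 12/(b - a)**2*iiw
O5 = w[:, -1] + w[:, 0] - 12/(b - a)*iw + 60/(b - a)**2*iiw - 120/(b - a)**3*iiiw

lhs = -np.einsum('it,ij,jt->', wd, P, wd)*ds      # -int wdot^T P wdot
rhs = -(O2 @ P @ O2 + 3*O3 @ P @ O3 + 5*O4 @ P @ O4 + 7*O5 @ P @ O5)/(b - a)
print(lhs <= rhs + 1e-3)                          # expected: True
```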

Lemma 2.5

([41])

Suppose that \(w(t)\in \mathbb{R}^{n} \) and \(\eta _{2}>0\) is a scalar. Then for any positive definite matrix P, the following inequality holds:

$$\begin{aligned}& - \frac{\eta _{2}^{3}}{6} \int _{-\eta _{2}}^{0} \int _{\beta }^{0} \int _{t+\lambda }^{t} w^{T}(s) P w(s)\,ds \,d\lambda \,d\beta \\& \quad \leq - \int _{-\eta _{2}}^{0} \int _{\beta }^{0} \int _{t+\lambda }^{t} w^{T}(s)\,ds\,d \lambda \,d\beta P \int _{-\eta _{2}}^{0} \int _{ \beta }^{0} \int _{t+\lambda }^{t} w(s)\,ds\,d\lambda \,d\beta . \end{aligned}$$

3 Main results

In what follows, for simplicity, the following notation is introduced:

$$\begin{aligned}& G_{1} = e_{1}-e_{3}, \qquad G_{2} = e_{1}+e_{3}-2e_{8}, \qquad G_{3} = e_{1}-e_{3}+6e_{8}-12e_{14}, \\& G_{4} = e_{1}+e_{3}-12e_{8}+60e_{14}-120e_{20}, \qquad G_{5} = e_{1}-e_{4}, \qquad G_{6} = e_{1}+e_{4}-2e_{9}, \\& G_{7} = e_{1}-e_{4}+6e_{9}-12e_{15}, \qquad G_{8} = e_{1}+e_{4}-12e_{9}+60e_{15}-120e_{21}, \\& G_{9} = e_{5}-e_{4}, \qquad G_{10} = e_{5}+e_{4}-2e_{11}, \qquad G_{11} = e_{5}-e_{4}+6e_{11}-12e_{16}, \\& G_{12} = e_{5}+e_{4}-12e_{11}+60e_{16}-120e_{22}, \qquad G_{13} = e_{3}-e_{5}, \qquad G_{14} = e_{3}+e_{5}-2e_{10}, \\& G_{15} = e_{3}-e_{5}+6e_{10}-12e_{17}, \qquad G_{16} = e_{3}+e_{5}-12e_{10}+60e_{17}-120e_{23}, \\& G_{17} = [e_{1}, 2e_{12}+2e_{13} ], \qquad G_{18} = e_{1}W^{T}F_{p}^{T}-e_{6}, \qquad G_{19} = e_{6}-e_{1}W^{T}F_{m}^{T}, \\& G_{20} = e_{5}W^{T}G_{p}^{T}-e_{7}, \qquad G_{21} = e_{7}-e_{5}W^{T}G_{m}^{T}, \\& G_{22} = e_{1}W^{T}H_{p}^{T}-e_{24}, \qquad G_{23} = e_{24}-e_{1}W^{T}H_{m}^{T}, \\& \bar{Y} = \begin{bmatrix} Y_{1}+Y_{1}^{T} & -Y_{1}+Y_{2}^{T} \\ * & -Y_{2}-Y_{2}^{T} \end{bmatrix}, \qquad \bar{X} = \begin{bmatrix} X_{1}+X_{1}^{T} & X_{2}+X_{3}^{T} \\ * & X_{4}+X_{4}^{T} \end{bmatrix}, \\& F_{p} = \operatorname{diag} \bigl\{ F_{1}^{+},F_{2}^{+},\ldots,F_{n}^{+} \bigr\} , \qquad F_{m} = \operatorname{diag} \bigl\{ F_{1}^{-},F_{2}^{-},\ldots,F_{n}^{-} \bigr\} , \\& G_{p} = \operatorname{diag} \bigl\{ G_{1}^{+},G_{2}^{+},\ldots,G_{n}^{+} \bigr\} , \qquad G_{m} = \operatorname{diag} \bigl\{ G_{1}^{-},G_{2}^{-},\ldots,G_{n}^{-} \bigr\} , \\& H_{p} = \operatorname{diag} \bigl\{ H_{1}^{+},H_{2}^{+},\ldots,H_{n}^{+} \bigr\} , \qquad H_{m} = \operatorname{diag} \bigl\{ H_{1}^{-},H_{2}^{-},\ldots,H_{n}^{-} \bigr\} , \\& Z_{1} = \operatorname{diag}\{z_{11},z_{12},\ldots,z_{1n}\} \geq 0, \qquad Z_{2} = \operatorname{diag}\{z_{21},z_{22},\ldots,z_{2n}\} \geq 0, \\& Z_{3} = \operatorname{diag}\{z_{31},z_{32},\ldots,z_{3n}\} \geq 0, \end{aligned}$$

$$\begin{aligned} \varsigma ^{T}(t) ={}& \biggl[ w^{T}(t), \dot{w}^{T}(t), w^{T}(t-\delta _{1}), w^{T}(t-\delta _{2}), w^{T} \bigl(t-\delta (t) \bigr), f^{T} \bigl(Ww(t) \bigr), g^{T} \bigl(Ww \bigl(t-\delta (t) \bigr) \bigr), \\ & \frac{1}{\delta _{1}} \int _{t-\delta _{1}}^{t} w^{T}(s)\,ds, \frac{1}{\delta _{2}} \int _{t-\delta _{2}}^{t} w^{T}(s)\,ds, \frac{1}{\delta (t)-\delta _{1}} \int _{t-\delta (t)}^{t-\delta _{1}} w^{T}(s)\,ds, \\ & \frac{1}{\delta _{2}-\delta (t)} \int _{t-\delta _{2}}^{t-\delta (t)} w^{T}(s)\,ds, \frac{1}{\delta _{2}^{2}-\delta _{1}^{2}} \int _{-\delta (t)}^{-\delta _{1}} \int _{t+\theta }^{t} w^{T}(s)\,ds\,d\theta , \\ & \frac{1}{\delta _{2}^{2}-\delta _{1}^{2}} \int _{-\delta _{2}}^{-\delta (t)} \int _{t+\theta }^{t} w^{T}(s)\,ds\,d\theta , \frac{1}{\delta _{1}^{2}} \int _{t-\delta _{1}}^{t} \int _{\theta }^{t} w^{T}(s)\,ds\,d\theta , \frac{1}{\delta _{2}^{2}} \int _{t-\delta _{2}}^{t} \int _{\theta }^{t} w^{T}(s)\,ds\,d\theta , \\ & \frac{1}{(\delta _{2}-\delta (t))^{2}} \int _{t-\delta _{2}}^{t-\delta (t)} \int _{\theta }^{t-\delta (t)} w^{T}(s)\,ds\,d\theta , \frac{1}{(\delta (t)-\delta _{1})^{2}} \int _{t-\delta (t)}^{t-\delta _{1}} \int _{\theta }^{t-\delta _{1}} w^{T}(s)\,ds\,d\theta , \\ & \int _{-\delta _{2}}^{-\delta (t)} \int _{\beta }^{0} \int _{t+\lambda }^{t} w^{T}(s)\,ds\,d\lambda \,d\beta , \int _{-\delta (t)}^{-\delta _{1}} \int _{\beta }^{0} \int _{t+\lambda }^{t} w^{T}(s)\,ds\,d\lambda \,d\beta , \\ & \frac{1}{\delta _{1}^{3}} \int _{t-\delta _{1}}^{t} \int _{u}^{t} \int _{s}^{t} w^{T}(r)\,dr\,ds\,du, \frac{1}{\delta _{2}^{3}} \int _{t-\delta _{2}}^{t} \int _{u}^{t} \int _{s}^{t} w^{T}(r)\,dr\,ds\,du, \\ & \frac{1}{(\delta _{2}-\delta (t))^{3}} \int _{t-\delta _{2}}^{t-\delta (t)} \int _{u}^{t-\delta (t)} \int _{s}^{t-\delta (t)} w^{T}(r)\,dr\,ds\,du, \\ & \frac{1}{(\delta (t)-\delta _{1})^{3}} \int _{t-\delta (t)}^{t-\delta _{1}} \int _{u}^{t-\delta _{1}} \int _{s}^{t-\delta _{1}} w^{T}(r)\,dr\,ds\,du \biggr], \\ \bar{\varsigma }^{T}(t) ={}& \bigl[\varsigma ^{T}(t), u^{T}(t) \bigr], \qquad \eta ^{T}(t) = \biggl[\varsigma ^{T}(t), h^{T} \bigl(Ww(t) \bigr), \int _{t-\sigma _{2}(t)}^{t-\sigma _{1}(t)} h^{T} \bigl(Ww(s) \bigr)\,ds \biggr], \\ \bar{\eta }^{T}(t) ={}& \bigl[\eta ^{T}(t), u^{T}(t) \bigr]. \end{aligned}$$

3.1 Stability analysis for generalized neural networks

In this section, new asymptotic stability criteria for the generalized neural networks (1), and their special case, are obtained based on a suitable Lyapunov–Krasovskii functional (LKF), an improved Wirtinger single integral inequality, a novel triple integral inequality, and convex combination technique.

Theorem 3.1

For given scalars \(\delta _{1}\), \(\delta _{2}\), \(\sigma _{1}\), \(\sigma _{2}\), \(\beta _{1}\), and \(\beta _{2}\), if there exist symmetric positive definite matrices \(P, U_{1},U_{2},T_{1},T_{2},T_{3},L,S_{1},S_{2},Q \in \mathbb{R}^{n \times n}\), positive definite matrices \(N_{1},N_{2} \in \mathbb{R}^{n\times n}\), positive diagonal matrices \(Z_{1},Z_{2},Z_{3} \in \mathbb{R}^{n\times n}\), any matrices \(X_{1}, X_{2},X_{3}, X_{4}, Y_{1}, Y_{2} \in \mathbb{R}^{n\times n}\), and positive scalars \(c_{1}\), \(c_{2}\) such that the following LMIs hold:

$$\begin{aligned}& \Xi +2 \Xi _{i}+2c_{1}I < 0, \quad i=1,2, \end{aligned}$$
(4)
$$\begin{aligned}& \Xi + 2\Xi _{j} -2c_{2}I< 0, \quad j=3,4, \end{aligned}$$
(5)
$$\begin{aligned}& c_{1}-c_{2}>0, \end{aligned}$$
(6)
$$\begin{aligned}& \begin{bmatrix} X_{1}+X^{T}_{1} & X_{2}+X^{T}_{3} &Y_{1} \\ * & X_{4}+X^{T}_{4} &Y_{2} \\ * & *&S_{1} \end{bmatrix} \geq 0, \end{aligned}$$
(7)

where

$$\begin{aligned}& \begin{aligned} \Xi ={}& 2e_{1}Pe^{T}_{2}+e_{1}U_{1}e^{T}_{1}-e_{3}U_{1}e^{T}_{3}+e_{1}U_{2}e^{T}_{1}-e_{4}U_{2}e^{T}_{4}+ \delta ^{2}_{1} e_{2}T_{1}e^{T}_{2}-G_{1} T_{1}G_{1}^{T} \\ & {} -3G_{2}T_{1}G_{2}^{T}-5G_{3}T_{1}G_{3}^{T}-7G_{4}T_{1}G_{4}^{T}+ \delta ^{2}_{2} e_{2}T_{2}e^{T}_{2}-G_{5} T_{2}G_{5}^{T}-3G_{6}T_{2}G_{6}^{T} \\ & {} -5G_{7}T_{2}G_{7}^{T}-7G_{8}T_{2}G_{8}^{T}+( \delta _{2}-\delta _{1})^{2} e_{2}T_{3}e^{T}_{2}-G_{9} T_{3}G_{9}^{T}-3G_{10}T_{3}G_{10}^{T} \\ & {} -5G_{11}T_{3}G_{11}^{T}-7G_{12}T_{3}G_{12}^{T}-G_{13} T_{3}G_{13}^{T}-3G_{14}T_{3}G_{14}^{T}-5G_{15}T_{3}G_{15}^{T} \\ & {} -7G_{16}T_{3}G_{16}^{T}+ \frac{(\delta ^{2}_{2}-\delta ^{2}_{1})^{2}}{4}e_{1}Le_{1}^{T}- \bigl( \delta ^{2}_{2}-\delta ^{2}_{1} \bigr)^{2} e_{13}Le^{T}_{13}- \bigl(\delta ^{2}_{2}- \delta ^{2}_{1} \bigr)^{2} e_{12}Le^{T}_{12} \\ & {} +\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})^{2}}{36}e_{2}S_{1}e_{2}^{T}+G_{17} \biggl( \bigl(\delta _{2}^{2}-\delta _{1}^{2} \bigr)\bar{Y}+ \frac{\delta ^{3}_{2}-\delta ^{3}_{1}}{6}\bar{X} \biggr) G^{T}_{17} \\ & {} +\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})^{2}}{36}e_{1}S_{2}e_{1}^{T}-e_{18}S_{2}e_{18}^{T}-e_{19}S_{2}e_{19}^{T}+2G_{18}Z_{1}G^{T}_{19}+2G_{20}Z_{2}G^{T}_{21} \\ & {} +(\sigma _{2}-\sigma _{1})^{2}e_{24}Qe^{T}_{24} -e_{25}Qe^{T}_{25}+2G_{22}Z_{3}G^{T}_{23} \\ & {} -2e_{1}\beta _{1} N_{1}^{T} e_{2}^{T}-2e_{1}\beta _{1} N_{1}^{T}C e_{1}^{T}+2e_{1} \beta _{1} N_{1}^{T}B_{0} e_{6}^{T}+2e_{1}\beta _{1} N_{1}^{T}B_{1} e_{7}^{T} \\ & {} +2e_{1}\beta _{1} N_{1}^{T}B_{2} e_{25}^{T}-2e_{2}\beta _{2} N_{2}^{T} e_{2}^{T}-2e_{2}\beta _{2} N_{2}^{T}C e_{1}^{T} \\ & {} +2e_{2}\beta _{2} N_{2}^{T}B_{0} e_{6}^{T}+2e_{2}\beta _{2} N_{2}^{T}B_{1} e_{7}^{T}+2e_{2} \beta _{2} N_{2}^{T}B_{2} e_{25}^{T}, \end{aligned} \\& \Xi _{1} = - \bigl(\delta ^{2}_{2}-\delta ^{2}_{1} \bigr)^{2}e_{13}Le^{T}_{13}, \\& \Xi _{2} = - \bigl(\delta ^{2}_{2}-\delta ^{2}_{1} \bigr)^{2}e_{12}Le^{T}_{12}, \\& \Xi _{3} = -e_{18}S_{2}e^{T}_{18}, \\& \Xi _{4} = -e_{19}S_{2}e^{T}_{19}, \end{aligned}$$

then, the system (1) with \(u(t)=0\) is asymptotically stable.

Proof

We consider the following Lyapunov–Krasovskii functional candidate for the system (1):

$$\begin{aligned} V \bigl(w(t),t \bigr)=\sum _{i=1}^{10} V_{i} \bigl(w(t),t \bigr), \end{aligned}$$
(8)

where

$$\begin{aligned}& V_{1} \bigl(w(t),t \bigr) = w^{T}(t) P w(t), \\& V_{2} \bigl(w(t),t \bigr) = \int _{t-\delta _{1}}^{t} w^{T}(s) U_{1} w(s)\,ds, \\& V_{3} \bigl(w(t),t \bigr) = \int _{t-\delta _{2}}^{t} w^{T}(s) U_{2} w(s)\,ds, \\& V_{4} \bigl(w(t),t \bigr) = \delta _{1} \int _{-\delta _{1}}^{0} \int _{t+s}^{t} \dot{w}^{T}(\tau ) T_{1}\dot{w}(\tau )\,d\tau \,ds, \\& V_{5} \bigl(w(t),t \bigr) = \delta _{2} \int _{-\delta _{2}}^{0} \int _{t+s}^{t} \dot{w}^{T}(\tau ) T_{2}\dot{w}(\tau )\,d\tau \,ds, \\& V_{6} \bigl(w(t),t \bigr) = (\delta _{2}-\delta _{1}) \int _{-\delta _{2}}^{- \delta _{1}} \int _{t+s}^{t} \dot{w}^{T}(\tau ) T_{3}\dot{w}(\tau )\,d \tau \,ds, \\& V_{7} \bigl(w(t),t \bigr) = \frac{(\delta ^{2}_{2}-\delta ^{2}_{1})}{2} \int _{- \delta _{2}}^{-\delta _{1}} \int _{\beta }^{0} \int _{t+\lambda }^{t} w^{T}(s) L w(s)\,ds \,d\lambda \,d\beta , \\& V_{8} \bigl(w(t),t \bigr) = \frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6} \int _{- \delta _{2}}^{-\delta _{1}} \int _{\beta }^{0} \int _{\lambda }^{0} \int _{t+\varphi }^{t} \dot{w}^{T}(s) S_{1} \dot{w}(s)\,ds\,d\varphi \,d\lambda \,d \beta , \\& V_{9} \bigl(w(t),t \bigr) = \frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6} \int _{- \delta _{2}}^{-\delta _{1}} \int _{\beta }^{0} \int _{\lambda }^{0} \int _{t+\varphi }^{t} w^{T}(s) S_{2} w(s)\,ds\,d\varphi \,d\lambda \,d\beta , \\& V_{10} \bigl(w(t),t \bigr) = (\sigma _{2}-\sigma _{1}) \int _{-\sigma _{2}}^{- \sigma _{1}} \int _{t+s}^{t} h^{T} \bigl(Ww(r) \bigr)Qh \bigl(Ww(r) \bigr)\,dr\,ds. \end{aligned}$$

Time derivatives of \(V_{i}(w(t),t)\), \(i=1,2,\ldots,10\), along the trajectories of (1) are as follows:

$$\begin{aligned}& \dot{V_{1}} \bigl(w(t),t \bigr) = w^{T}(t) P \dot{w}(t)+ \dot{w}^{T}(t) P w(t), \end{aligned}$$
(9)
$$\begin{aligned}& \dot{V_{2}} \bigl(w(t),t \bigr) = w^{T}(t) U_{1} w(t) -w^{T}(t-\delta _{1}) U_{1} w(t-\delta _{1}), \end{aligned}$$
(10)
$$\begin{aligned}& \dot{V_{3}} \bigl(w(t),t \bigr) = w^{T}(t) U _{2} w(t) -w^{T}(t-\delta _{2}) U_{2} w(t-\delta _{2}), \end{aligned}$$
(11)
$$\begin{aligned}& \begin{aligned}[b] \dot{V_{4}} \bigl(w(t),t \bigr) = {}&\delta _{1} \int _{-\delta _{1}}^{0} \bigl[ \dot{w}^{T}(t) T_{1} \dot{w}(t) -\dot{w}^{T}(t+s) T_{1} \dot{w}(t+s) \bigr]\,ds \\ = {}&\delta ^{2}_{1} \dot{w}^{T}(t) T_{1} \dot{w}(t) -\delta _{1} \int _{t- \delta _{1}}^{t} \dot{w}^{T}(\alpha ) T_{1} \dot{w}(\alpha )\,d \alpha , \end{aligned} \end{aligned}$$
(12)
$$\begin{aligned}& \begin{aligned}[b] \dot{V_{5}} \bigl(w(t),t \bigr)&= \delta _{2} \int _{-\delta _{2}}^{0} \bigl[ \dot{w}^{T}(t) T_{2} \dot{w}(t) -\dot{w}^{T}(t+s) T_{2} \dot{w}(t+s) \bigr]\,ds \\ &= \delta ^{2}_{2} \dot{w}^{T}(t) T_{2} \dot{w}(t) -\delta _{2} \int _{t- \delta _{2}}^{t} \dot{w}^{T}(\alpha ) T_{2} \dot{w}(\alpha )\,d \alpha , \end{aligned} \end{aligned}$$
(13)
$$\begin{aligned}& \begin{aligned}[b] \dot{V_{6}} \bigl(w(t),t \bigr)&= (\delta _{2}-\delta _{1}) \int _{-\delta _{2}}^{- \delta _{1}} \bigl[ \dot{w}^{T}(t) T_{3} \dot{w}(t) -\dot{w}^{T}(t+s) T_{3} \dot{w}(t+s) \bigr]\,ds \\ &= (\delta _{2}-\delta _{1})^{2} \dot{w}^{T}(t) T_{3} \dot{w}(t) -( \delta _{2}- \delta _{1}) \int _{t-\delta _{2}}^{t-\delta _{1}} \dot{w}^{T}(\alpha ) T_{3} \dot{w}(\alpha )\,d\alpha , \end{aligned} \end{aligned}$$
(14)
$$\begin{aligned}& \begin{aligned}[b] \dot{V_{7}} \bigl(w(t),t \bigr) &= \frac{(\delta ^{2}_{2}-\delta ^{2}_{1})}{2} \int _{-\delta _{2}}^{-\delta _{1}} \int _{\beta }^{0} \bigl[ w^{T}(t)L w(t)-w^{T}(t+\lambda )L w(t+\lambda ) \bigr]\,d\lambda \,d \beta \\ &=\frac{(\delta ^{2}_{2}-\delta ^{2}_{1})^{2}}{4} w^{T}(t)L w(t) \\ &\quad{} - \frac{(\delta ^{2}_{2}-\delta ^{2}_{1})}{2} \int _{-\delta _{2}}^{- \delta _{1}} \int _{t+\beta }^{t} w^{T}(s) L w(s)\,ds \,d\beta , \end{aligned} \end{aligned}$$
(15)
$$\begin{aligned}& \begin{aligned}[b] \dot{V_{8}} \bigl(w(t),t \bigr) &= \frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6} \int _{-\delta _{2}}^{-\delta _{1}} \int _{\beta }^{0} \int _{\lambda }^{0} \bigl[\dot{w}^{T}(t)S_{1} \dot{w}(t) \\ &\quad{} -\dot{w}^{T}(t+\varphi )S_{1} \dot{w}(t+\varphi ) \bigr]\,d\varphi \,d\lambda \,d\beta \\ &=\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})^{2}}{36} \dot{w}^{T}(t)S_{1} \dot{w}(t) \\ &\quad{} -\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6} \int _{-\delta _{2}}^{- \delta _{1}} \int _{\beta }^{0} \int _{t+\lambda }^{t} \dot{w}^{T}(s)S_{1} \dot{w}(s)\,ds\,d\lambda \,d\beta , \end{aligned} \end{aligned}$$
(16)
$$\begin{aligned}& \begin{aligned}[b] \dot{V_{9}} \bigl(w(t),t \bigr) &= \frac{(\delta ^{3}_{2}-\delta ^{3}_{1})^{2}}{36} w^{T}(t)S_{2} w(t) \\ &\quad{} -\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6} \int _{-\delta _{2}}^{- \delta _{1}} \int _{\beta }^{0} \int _{t+\lambda }^{t} w^{T}(s)S_{2} w(s)\,ds\,d\lambda \,d\beta , \end{aligned} \end{aligned}$$
(17)
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{10} \bigl(w(t),t \bigr) &=(\sigma _{2}-\sigma _{1})^{2} h^{T} \bigl(Ww(t) \bigr)Qh \bigl(Ww(t) \bigr) \\ &\quad{} -(\sigma _{2}-\sigma _{1}) \int _{t-\sigma _{2}}^{t-\sigma _{1}} h^{T} \bigl(Ww(r) \bigr)Qh \bigl(Ww(r) \bigr)\,dr \\ &\leq (\sigma _{2}-\sigma _{1})^{2} h^{T} \bigl(Ww(t) \bigr)Qh \bigl(Ww(t) \bigr) \\ &\quad{} - \bigl(\sigma _{2}(t)-\sigma _{1}(t) \bigr) \int _{t-\sigma _{2}(t)}^{t- \sigma _{1}(t)} h^{T} \bigl(Ww(r) \bigr)Qh \bigl(Ww(r) \bigr)\,dr \\ &\leq (\sigma _{2}-\sigma _{1})^{2} \eta ^{T}(t) e_{24}Q e^{T}_{24} \eta (t) -\eta ^{T}(t)e_{25}Qe^{T}_{25}\eta (t). \end{aligned} \end{aligned}$$
(18)

Utilizing Lemma 2.4, the following relations are easily obtained:

$$\begin{aligned}& -\delta _{1} \int _{t-\delta _{1}}^{t} \dot{w}^{T}(\alpha ) T_{1} \dot{w}(\alpha )\,d\alpha \\& \quad \leq -\eta ^{T}(t) (e_{1}-e_{3} ) T_{1} (e_{1}-e_{3} )^{T} \eta (t) \\& \quad \quad {} -3\eta ^{T}(t) (e_{1}+e_{3}-2e_{8} ) T_{1} (e_{1}+e_{3}-2e_{8} )^{T} \eta (t) \\& \quad \quad {} -5\eta ^{T}(t) (e_{1}-e_{3}+6e_{8}-12e_{14} ) T_{1} (e_{1}-e_{3}+6e_{8}-12e_{14} )^{T}\eta (t) \\& \quad \quad {} -7\eta ^{T}(t) (e_{1}+e_{3}-12e_{8}+60e_{14}-120e_{20} ) \\& \quad \quad {} \times T_{1} (e_{1}+e_{3}-12e_{8}+60e_{14}-120e_{20} )^{T} \eta (t), \end{aligned}$$
(19)
$$\begin{aligned}& -\delta _{2} \int _{t-\delta _{2}}^{t} \dot{w}^{T}(\alpha ) T_{2} \dot{w}(\alpha )\,d\alpha \\& \quad \leq -\eta ^{T}(t) (e_{1}-e_{4} ) T_{2} (e_{1}-e_{4} )^{T} \eta (t) \\& \quad \quad {} -3\eta ^{T}(t) (e_{1}+e_{4}-2e_{9} ) T_{2} (e_{1}+e_{4}-2e_{9} )^{T} \eta (t) \\& \quad \quad {} -5\eta ^{T}(t) (e_{1}-e_{4}+6e_{9}-12e_{15} ) T_{2} (e_{1}-e_{4}+6e_{9}-12e_{15} )^{T}\eta (t) \\& \quad \quad {} -7\eta ^{T}(t) (e_{1}+e_{4}-12e_{9}+60e_{15}-120e_{21} ) \\& \quad \quad {} \times T_{2} (e_{1}+e_{4}-12e_{9}+60e_{15}-120e_{21} )^{T} \eta (t), \end{aligned}$$
(20)
$$\begin{aligned}& \begin{aligned}[b] & {-}(\delta _{2}-\delta _{1}) \int _{t-\delta _{2}}^{t-\delta _{1}} \dot{w}^{T}(\alpha ) T_{3} \dot{w}(\alpha )\,d\alpha \\ &\quad =-(\delta _{2}-\delta _{1}) \int _{t-\delta _{2}}^{t-\delta (t)} \dot{w}^{T}(\alpha ) T_{3} \dot{w}(\alpha )\,d\alpha -(\delta _{2}- \delta _{1}) \int _{t-\delta (t)}^{t-\delta _{1}} \dot{w}^{T}(\alpha ) T_{3} \dot{w}(\alpha )\,d\alpha \\ &\quad \leq -\eta ^{T}(t) (e_{5}-e_{4} ) T_{3} (e_{5}-e_{4} )^{T} \eta (t) \\ &\quad\quad {} -3\eta ^{T}(t) (e_{5}+e_{4}-2e_{11} ) T_{3} (e_{5}+e_{4}-2e_{11} )^{T} \eta (t) \\ &\quad\quad {} -5\eta ^{T}(t) (e_{5}-e_{4}+6e_{11}-12e_{16} ) T_{3} (e_{5}-e_{4}+6e_{11}-12e_{16} )^{T}\eta (t) \\ &\quad\quad {} -7\eta ^{T}(t) (e_{5}+e_{4}-12e_{11}+60e_{16}-120e_{22} ) \\ &\quad\quad {} \times T_{3} (e_{5}+e_{4}-12e_{11}+60e_{16}-120e_{22} )^{T} \eta (t) \\ &\quad\quad {} -\eta ^{T}(t) (e_{3}-e_{5} ) T_{3} (e_{3}-e_{5} )^{T} \eta (t) \\ &\quad\quad {} -3\eta ^{T}(t) (e_{3}+e_{5}-2e_{10} ) T_{3} (e_{3}+e_{5}-2e_{10} )^{T} \eta (t) \\ &\quad\quad {} -5\eta ^{T}(t) (e_{3}-e_{5}+6e_{10}-12e_{17} ) T_{3} (e_{3}-e_{5}+6e_{10}-12e_{17} )^{T}\eta (t) \\ &\quad\quad {} -7\eta ^{T}(t) (e_{3}+e_{5}-12e_{10}+60e_{17}-120e_{23} ) \\ &\quad\quad {} \times T_{3} (e_{3}+e_{5}-12e_{10}+60e_{17}-120e_{23} )^{T} \eta (t). \end{aligned} \end{aligned}$$
(21)

On the other hand, we have the following inequality from Lemma 2.2:

$$\begin{aligned}& - \frac{(\delta ^{2}_{2}-\delta ^{2}_{1})}{2} \int _{-\delta _{2}}^{- \delta _{1}} \int _{t+\beta }^{t} w^{T}(s) L w(s)\,ds \,d\beta \\ & \quad \leq - \bigl(\delta _{2}^{2}-\delta _{1}^{2} \bigr)^{2}\eta ^{T}(t)e_{13}Le_{13}^{T} \eta (t) -\varepsilon \bigl(\delta _{2}^{2}-\delta _{1}^{2} \bigr)^{2}\eta ^{T}(t)e_{13}Le_{13}^{T} \eta (t) \\ & \quad\quad {} - \bigl(\delta _{2}^{2}-\delta _{1}^{2} \bigr)^{2}\eta ^{T}(t)e_{12}Le_{12}^{T} \eta (t) -(1-\varepsilon ) \bigl(\delta _{2}^{2}-\delta _{1}^{2} \bigr)^{2}\eta ^{T}(t)e_{12}Le_{12}^{T} \eta (t), \end{aligned}$$
(22)

where \(\varepsilon = \frac{\delta ^{2}(t)-\delta ^{2}_{1}}{\delta ^{2}_{2}-\delta ^{2}_{1}}\).

From Lemma 2.3 and condition (7), we obtain

$$\begin{aligned}& -\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6} \int _{-\delta _{2}}^{-\delta _{1}} \int _{\beta }^{0} \int _{t+\lambda }^{t} \dot{w}^{T}(s) S_{1} \dot{w}(s)\,ds\,d\lambda \,d\beta \\& \quad \leq \eta ^{T}(t) [e_{1}, 2e_{12}+2e_{13} ] \biggl( \bigl(\delta _{2}^{2}-\delta _{1}^{2} \bigr)\bar{Y}+\frac{\delta ^{3}_{2}-\delta ^{3}_{1}}{6}\bar{X} \biggr) [e_{1}, 2e_{12}+2e_{13} ]^{T}\eta (t). \end{aligned}$$
(23)

From Lemma 2.5, we obtain

$$\begin{aligned}& -\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6} \int _{-\delta _{2}}^{- \delta _{1}} \int _{\beta }^{0} \int _{t+\lambda }^{t} w^{T}(s)S_{2} w(s)\,ds\,d\lambda \,d\beta \\ & \quad \leq -\eta ^{T}(t)e_{18}S_{2}e_{18}^{T} \eta (t) -\alpha \eta ^{T}(t)e_{18}S_{2}e_{18}^{T} \eta (t) \\ & \quad\quad {} -\eta ^{T}(t)e_{19}S_{2}e_{19}^{T} \eta (t) -(1-\alpha ) \eta ^{T}(t)e_{19}S_{2}e_{19}^{T} \eta (t), \end{aligned}$$
(24)

where \(\alpha = \frac{\delta ^{3}(t)-\delta ^{3}_{1}}{\delta ^{3}_{2}-\delta ^{3}_{1}}\).

It follows from Assumptions (A1), (A2), and (A3) that

$$\begin{aligned}& 2 \bigl(F_{p}Ww(t)-f \bigl(Ww(t) \bigr) \bigr)^{T}Z_{1} \bigl(f \bigl(Ww(t) \bigr)-F_{m}Ww(t) \bigr)\geq 0, \end{aligned}$$
(25)
$$\begin{aligned}& 2 \bigl(G_{p}Ww \bigl(t-\delta (t) \bigr)-g \bigl(Ww \bigl(t-\delta (t) \bigr) \bigr) \bigr)^{T} \\& \quad {} \times Z_{2} \bigl(g \bigl(Ww \bigl(t-\delta (t) \bigr) \bigr)-G_{m}Ww \bigl(t-\delta (t) \bigr) \bigr) \geq 0, \end{aligned}$$
(26)
$$\begin{aligned}& 2 \bigl(H_{p}Ww(t)-h \bigl(Ww(t) \bigr) \bigr)^{T}Z_{3} \bigl(h \bigl(Ww(t) \bigr)-H_{m}Ww(t) \bigr)\geq 0. \end{aligned}$$
(27)

Considering system (1), the following equation is obtained:

$$\begin{aligned} 0={}&2 \bigl[ w^{T}(t) \beta _{1} N_{1}^{T} + \dot{w}^{T}(t)\beta _{2} N_{2}^{T} \bigr] \biggl[ - \dot{w}(t) -Cw(t)+B_{0}f \bigl(Ww(t) \bigr) \\ & {} +B_{1}g \bigl(Ww \bigl(t-\delta (t) \bigr) \bigr)+B_{2} \int _{t-\sigma _{2}(t)}^{t-\sigma _{1}(t)}h \bigl(Ww(s) \bigr) \,ds+B_{3}u(t) \biggr]. \end{aligned}$$
(28)

By adding the right-hand side of (28) to \(\dot{V}(w(t),t)\), we obtain from (9)–(27) that

$$\begin{aligned} \dot{V} \bigl(w(t),t \bigr) \leq &\bar{\eta }^{T}(t) \bigl(\varepsilon \bar{\Xi }^{(1)} +(1-\varepsilon ) \bar{\Xi }^{(2)}+\alpha \bar{\Xi }^{(3)}+(1-\alpha ) \bar{\Xi }^{(4)} \bigr) \bar{\eta }(t), \end{aligned}$$
(29)

where \(\bar{\Xi }^{(i)}=\frac{1}{2}\bar{\Xi } +\Xi _{i}\) (\(i=1,2\)) and \(\bar{\Xi }^{(j)}=\frac{1}{2}\bar{\Xi } +\Xi _{j}\) (\(j=3,4\)), \(\bar{\Xi }=\Xi +2e_{26}\beta _{1} B^{T}_{3}N_{1}e^{T}_{1}+2e_{26} \beta _{2} B^{T}_{3}N_{2}e^{T}_{2}\), with Ξ and \(\Xi _{i}\), \(\Xi _{j}\) defined in (4) and (5).

When \(u(t)=0 \) (no disturbance), one has from (29) that

$$\begin{aligned} \dot{V} \bigl(w(t),t \bigr) \leq &\eta ^{T}(t) \bigl(\varepsilon \Xi ^{(1)} +(1- \varepsilon ) \Xi ^{(2)}+ \alpha \Xi ^{(3)}+(1-\alpha )\Xi ^{(4)} \bigr) \eta (t), \end{aligned}$$

where \(\Xi ^{(i)}=\frac{1}{2} \Xi +\Xi _{i}\) (\(i=1,2\)) and \(\Xi ^{(j)}=\frac{1}{2}\Xi +\Xi _{j}\) (\(j=3,4\)).

The upper bound of \(\dot{V}(w(t),t)\) is negative if the condition (6) and the following relations hold simultaneously:

$$\begin{aligned} \varepsilon \Xi ^{(1)}+(1-\varepsilon )\Xi ^{(2)} &< -c_{1}I, \\ \alpha \Xi ^{(3)}+(1-\alpha )\Xi ^{(4)} &< c_{2}I. \end{aligned}$$

The above relations can be rewritten as follows:

$$\begin{aligned} \varepsilon \bigl(\Xi ^{(1)}+c_{1}I \bigr)+(1-\varepsilon ) \bigl(\Xi ^{(2)}+c_{1}I \bigr) &< 0, \end{aligned}$$
(30)
$$\begin{aligned} \alpha \bigl(\Xi ^{(3)}-c_{2}I \bigr)+(1-\alpha ) \bigl(\Xi ^{(4)}-c_{2}I \bigr) &< 0. \end{aligned}$$
(31)

Since \(0\leq \varepsilon ,\alpha \leq 1\), the term \(\varepsilon (\Xi ^{(1)}+c_{1}I)+(1-\varepsilon )(\Xi ^{(2)}+c_{1}I)\) is a convex combination of \(\Xi ^{(1)}+c_{1}I\) and \(\Xi ^{(2)}+c_{1}I\), and the expression \(\alpha (\Xi ^{(3)}-c_{2}I)+(1-\alpha )(\Xi ^{(4)}-c_{2}I)\) is a convex combination of \(\Xi ^{(3)}-c_{2}I\) and \(\Xi ^{(4)}-c_{2}I\). These combinations are negative definite for all admissible ε and α if and only if their vertices are negative definite; therefore, (30) and (31) are equivalent to (4) and (5), respectively. Then, the system (1) with \(u(t)=0\) is asymptotically stable. □
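The vertex argument in the last step rests on the following elementary fact about convex combinations of symmetric matrices: if \(A<0\) and \(B<0\), then for every \(\varepsilon \in [0,1]\) and every \(x\neq 0\),

$$\begin{aligned} x^{T} \bigl(\varepsilon A+(1-\varepsilon )B \bigr)x=\varepsilon x^{T}Ax+(1-\varepsilon )x^{T}Bx< 0, \end{aligned}$$

since both terms are nonpositive and at least one is negative; hence the combination is negative definite. Conversely, taking \(\varepsilon =0\) and \(\varepsilon =1\) recovers the vertices themselves.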

Next, we consider the following generalized neural network model as a special case of the system (1):

$$\begin{aligned}& \dot{w}(t)=-Cw(t)+B_{0}f \bigl(Ww(t) \bigr)+B_{1}g \bigl(Ww \bigl(t-\delta (t) \bigr) \bigr)+B_{3}u (t), \\& z(t)=D_{1}w(t). \end{aligned}$$
(32)

Theorem 3.2

For given scalars \(\delta _{1}\), \(\delta _{2}\), \(\beta _{1}\), and \(\beta _{2}\), if there exist symmetric positive definite matrices \(P, U_{1},U_{2},T_{1},T_{2},T_{3},L,S_{1},S_{2} \in \mathbb{R}^{n \times n}\), positive definite matrices \(N_{1},N_{2} \in \mathbb{R}^{n\times n}\), positive diagonal matrices \(Z_{1},Z_{2} \in \mathbb{R}^{n\times n}\), any matrices \(X_{1},X_{2},X_{3},X_{4},Y_{1},Y_{2} \in \mathbb{R}^{n\times n}\), and positive scalars \(b_{1}\), \(b_{2}\) such that the following LMIs hold:

$$\begin{aligned}& \Theta +2 \Theta _{i}+2b_{1}I < 0, \quad i=1,2, \end{aligned}$$
(33)
$$\begin{aligned}& \Theta + 2\Theta _{j} -2b_{2}I< 0, \quad j=3,4, \end{aligned}$$
(34)
$$\begin{aligned}& b_{1}-b_{2}>0, \end{aligned}$$
(35)
$$\begin{aligned}& \begin{bmatrix} X_{1}+X^{T}_{1} & X_{2}+X^{T}_{3} &Y_{1} \\ * & X_{4}+X^{T}_{4} &Y_{2} \\ * & *&S_{1} \end{bmatrix}\geq 0, \end{aligned}$$
(36)

where

$$\begin{aligned}& \begin{aligned}[b] \Theta ={}&2e_{1}Pe^{T}_{2}+e_{1}U_{1}e^{T}_{1}-e_{3}U_{1}e^{T}_{3}+e_{1}U_{2}e^{T}_{1}-e_{4}U_{2}e^{T}_{4}+ \delta ^{2}_{1} e_{2}T_{1}e^{T}_{2}-G_{1} T_{1}G_{1}^{T} \\ & {} -3G_{2}T_{1}G_{2}^{T}-5G_{3}T_{1}G_{3}^{T}-7G_{4}T_{1}G_{4}^{T}+ \delta ^{2}_{2} e_{2}T_{2}e^{T}_{2}-G_{5} T_{2}G_{5}^{T}-3G_{6}T_{2}G_{6}^{T} \\ & {} -5G_{7}T_{2}G_{7}^{T}-7G_{8}T_{2}G_{8}^{T}+( \delta _{2}-\delta _{1})^{2} e_{2}T_{3}e^{T}_{2}-G_{9} T_{3}G_{9}^{T}-3G_{10}T_{3}G_{10}^{T} \\ & {} -5G_{11}T_{3}G_{11}^{T}-7G_{12}T_{3}G_{12}^{T}-G_{13} T_{3}G_{13}^{T}-3G_{14}T_{3}G_{14}^{T}-5G_{15}T_{3}G_{15}^{T} \\ & {} -7G_{16}T_{3}G_{16}^{T}+ \frac{(\delta ^{2}_{2}-\delta ^{2}_{1})^{2}}{4}e_{1}Le_{1}^{T}- \bigl( \delta ^{2}_{2}-\delta ^{2}_{1} \bigr)^{2} e_{13}Le^{T}_{13}- \bigl(\delta ^{2}_{2}- \delta ^{2}_{1} \bigr)^{2} e_{12}Le^{T}_{12} \\ & {} +\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})^{2}}{36}e_{2}S_{1}e_{2}^{T}+G_{17} \biggl( \bigl(\delta _{2}^{2}-\delta _{1}^{2} \bigr)\bar{Y}+ \frac{\delta ^{3}_{2}-\delta ^{3}_{1}}{6}\bar{X} \biggr) G^{T}_{17} \\ & {} +\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})^{2}}{36}e_{1}S_{2}e_{1}^{T}-e_{18}S_{2}e_{18}^{T}-e_{19}S_{2}e_{19}^{T}+2G_{18}Z_{1}G^{T}_{19} \\ & {} +2G_{20}Z_{2}G^{T}_{21}-2e_{1} \beta _{1} N_{1}^{T} e_{2}^{T}-2e_{1} \beta _{1} N_{1}^{T}C e_{1}^{T} \\ & {} +2e_{1}\beta _{1} N_{1}^{T}B_{0} e_{6}^{T}+2e_{1}\beta _{1} N_{1}^{T}B_{1} e_{7}^{T}-2e_{2} \beta _{2} N_{2}^{T} e_{2}^{T}-2e_{2} \beta _{2} N_{2}^{T}C e_{1}^{T} \\ & {} +2e_{2}\beta _{2} N_{2}^{T}B_{0} e_{6}^{T}+2e_{2}\beta _{2} N_{2}^{T}B_{1} e_{7}^{T}, \end{aligned} \\& \Theta _{1}=- \bigl(\delta ^{2}_{2}-\delta ^{2}_{1} \bigr)^{2}e_{13}Le^{T}_{13}, \\& \Theta _{2}=- \bigl(\delta ^{2}_{2}-\delta ^{2}_{1} \bigr)^{2}e_{12}Le^{T}_{12}, \\& \Theta _{3}=-e_{18}S_{2}e^{T}_{18}, \\& \Theta _{4}=-e_{19}S_{2}e^{T}_{19}, \end{aligned}$$

then, the system (32) with \(u(t)=0\) is asymptotically stable.

Proof

We consider the following Lyapunov–Krasovskii functional candidate for the system (32):

$$\begin{aligned} V \bigl(w(t),t \bigr)=\sum _{i=1}^{9} V_{i} \bigl(w(t),t \bigr), \end{aligned}$$
(37)

where

$$\begin{aligned}& V_{1} \bigl(w(t),t \bigr) = w^{T}(t) P w(t), \\& V_{2} \bigl(w(t),t \bigr) = \int _{t-\delta _{1}}^{t} w^{T}(s) U_{1} w(s)\,ds, \\& V_{3} \bigl(w(t),t \bigr) = \int _{t-\delta _{2}}^{t} w^{T}(s) U_{2} w(s)\,ds, \\& V_{4} \bigl(w(t),t \bigr) = \delta _{1} \int _{-\delta _{1}}^{0} \int _{t+s}^{t} \dot{w}^{T}(\tau ) T_{1}\dot{w}(\tau )\,d\tau \,ds, \\& V_{5} \bigl(w(t),t \bigr) = \delta _{2} \int _{-\delta _{2}}^{0} \int _{t+s}^{t} \dot{w}^{T}(\tau ) T_{2}\dot{w}(\tau )\,d\tau \,ds, \\& V_{6} \bigl(w(t),t \bigr) = (\delta _{2}-\delta _{1}) \int _{-\delta _{2}}^{- \delta _{1}} \int _{t+s}^{t} \dot{w}^{T}(\tau ) T_{3}\dot{w}(\tau )\,d \tau \,ds, \\& V_{7} \bigl(w(t),t \bigr) = \frac{(\delta ^{2}_{2}-\delta ^{2}_{1})}{2} \int _{- \delta _{2}}^{-\delta _{1}} \int _{\beta }^{0} \int _{t+\lambda }^{t} w^{T}(s) L w(s)\,ds \,d\lambda \,d\beta , \\& V_{8} \bigl(w(t),t \bigr) = \frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6} \int _{- \delta _{2}}^{-\delta _{1}} \int _{\beta }^{0} \int _{\lambda }^{0} \int _{t+\varphi }^{t} \dot{w}^{T}(s) S_{1} \dot{w}(s)\,ds\,d\varphi \,d\lambda \,d \beta , \\& V_{9} \bigl(w(t),t \bigr) = \frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6} \int _{- \delta _{2}}^{-\delta _{1}} \int _{\beta }^{0} \int _{\lambda }^{0} \int _{t+\varphi }^{t} w^{T}(s) S_{2} w(s)\,ds\,d\varphi \,d\lambda \,d\beta . \end{aligned}$$

Time derivatives of \(V_{i}(w(t),t)\), \(i=1,2,\ldots,9\), along the trajectories of (32) are as follows:

$$\begin{aligned}& \dot{V_{1}} \bigl(w(t),t \bigr) = w^{T}(t) P \dot{w}(t)+ \dot{w}^{T}(t) P w(t), \end{aligned}$$
(38)
$$\begin{aligned}& \dot{V_{2}} \bigl(w(t),t \bigr) = w^{T}(t) U_{1} w(t) -w^{T}(t-\delta _{1}) U_{1} w(t-\delta _{1}), \end{aligned}$$
(39)
$$\begin{aligned}& \dot{V_{3}} \bigl(w(t),t \bigr) = w^{T}(t) U _{2} w(t) -w^{T}(t-\delta _{2}) U_{2} w(t-\delta _{2}), \end{aligned}$$
(40)
$$\begin{aligned}& \begin{aligned}[b] \dot{V_{4}} \bigl(w(t),t \bigr) = {}&\delta _{1} \int _{-\delta _{1}}^{0} \bigl[ \dot{w}^{T}(t) T_{1} \dot{w}(t) -\dot{w}^{T}(t+s) T_{1} \dot{w}(t+s) \bigr]\,ds \\ ={}& \delta ^{2}_{1} \dot{w}^{T}(t) T_{1} \dot{w}(t) -\delta _{1} \int _{t- \delta _{1}}^{t} \dot{w}^{T}(\alpha ) T_{1} \dot{w}(\alpha )\,d \alpha , \end{aligned} \end{aligned}$$
(41)
$$\begin{aligned}& \begin{aligned}[b] \dot{V_{5}} \bigl(w(t),t \bigr) = \delta _{2} \int _{-\delta _{2}}^{0} \bigl[ \dot{w}^{T}(t) T_{2} \dot{w}(t) -\dot{w}^{T}(t+s) T_{2} \dot{w}(t+s) \bigr]\,ds \\ = \delta ^{2}_{2} \dot{w}^{T}(t) T_{2} \dot{w}(t) -\delta _{2} \int _{t- \delta _{2}}^{t} \dot{w}^{T}(\alpha ) T_{2} \dot{w}(\alpha )\,d \alpha , \end{aligned} \end{aligned}$$
(42)
$$\begin{aligned}& \dot{V_{6}} \bigl(w(t),t \bigr) =(\delta _{2}-\delta _{1}) \int _{-\delta _{2}}^{- \delta _{1}} \bigl[ \dot{w}^{T}(t) T_{3} \dot{w}(t) -\dot{w}^{T}(t+s) T_{3} \dot{w}(t+s) \bigr]\,ds \\& \hphantom{ \dot{V_{6}} \bigl(w(t),t \bigr) =}=(\delta _{2}-\delta _{1})^{2} \dot{w}^{T}(t) T_{3} \dot{w}(t) \\& \hphantom{ \dot{V_{6}} \bigl(w(t),t \bigr) =} {} -(\delta _{2}-\delta _{1}) \int _{t-\delta _{2}}^{t-\delta _{1}} \dot{w}^{T}(\alpha ) T_{3} \dot{w}(\alpha )\,d\alpha , \end{aligned}$$
(43)
$$\begin{aligned}& \begin{aligned}[b] \dot{V_{7}} \bigl(w(t),t \bigr) ={}& \frac{(\delta ^{2}_{2}-\delta ^{2}_{1})}{2} \int _{-\delta _{2}}^{-\delta _{1}} \int _{\beta }^{0} \bigl[ w^{T}(t)L w(t)-w^{T}(t+\lambda )L w(t+\lambda ) \bigr]\,d\lambda \,d \beta \\ ={}&\frac{(\delta ^{2}_{2}-\delta ^{2}_{1})^{2}}{4} w^{T}(t)L w(t) \\ & {} - \frac{(\delta ^{2}_{2}-\delta ^{2}_{1})}{2} \int _{-\delta _{2}}^{- \delta _{1}} \int _{t+\beta }^{t} w^{T}(s) L w(s)\,ds \,d\beta , \end{aligned} \end{aligned}$$
(44)
$$\begin{aligned}& \begin{aligned}[b] \dot{V_{8}} \bigl(w(t),t \bigr) ={}& \frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6} \int _{-\delta _{2}}^{-\delta _{1}} \int _{\beta }^{0} \int _{\lambda }^{0} \bigl[\dot{w}^{T}(t)S_{1} \dot{w}(t) \\ & {} -\dot{w}^{T}(t+\varphi )S_{1} \dot{w}(t+\varphi ) \bigr]\,d\varphi \,d\lambda \,d\beta \\ ={}&\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})^{2}}{36} \dot{w}^{T}(t)S_{1} \dot{w}(t) \\ & {} -\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6} \int _{-\delta _{2}}^{- \delta _{1}} \int _{\beta }^{0} \int _{t+\lambda }^{t} \dot{w}^{T}(s)S_{1} \dot{w}(s)\,ds\,d\lambda \,d\beta , \end{aligned} \end{aligned}$$
(45)
$$\begin{aligned}& \begin{aligned}[b] \dot{V_{9}} \bigl(w(t),t \bigr) ={}& \frac{(\delta ^{3}_{2}-\delta ^{3}_{1})^{2}}{36} w^{T}(t)S_{2} w(t) \\ & {} -\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6} \int _{-\delta _{2}}^{- \delta _{1}} \int _{\beta }^{0} \int _{t+\lambda }^{t} w^{T}(s)S_{2} w(s)\,ds\,d\lambda \,d\beta . \end{aligned} \end{aligned}$$
(46)

Utilizing Lemma 2.4, the following relations are easily obtained:

$$\begin{aligned}& \begin{aligned}[b] &{-}\delta _{1} \int _{t-\delta _{1}}^{t} \dot{w}^{T}(\alpha ) T_{1} \dot{w}(\alpha )\,d\alpha \\ &\quad \leq -\varsigma ^{T}(t) (e_{1}-e_{3} ) T_{1} (e_{1}-e_{3} )^{T} \varsigma (t) \\ &\quad \quad {} -3\varsigma ^{T}(t) (e_{1}+e_{3}-2e_{8} ) T_{1} (e_{1}+e_{3}-2e_{8} )^{T} \varsigma (t) \\ &\quad \quad {} -5\varsigma ^{T}(t) (e_{1}-e_{3}+6e_{8}-12e_{14} ) T_{1} (e_{1}-e_{3}+6e_{8}-12e_{14} )^{T}\varsigma (t) \\ &\quad \quad {} -7\varsigma ^{T}(t) (e_{1}+e_{3}-12e_{8}+60e_{14}-120e_{20} ) \\ &\quad \quad {} \times T_{1} (e_{1}+e_{3}-12e_{8}+60e_{14}-120e_{20} )^{T} \varsigma (t), \end{aligned} \end{aligned}$$
(47)
$$\begin{aligned}& \begin{aligned}[b] &{-}\delta _{2} \int _{t-\delta _{2}}^{t} \dot{w}^{T}(\alpha ) T_{2} \dot{w}(\alpha )\,d\alpha \\ &\quad \leq -\varsigma ^{T}(t) (e_{1}-e_{4} ) T_{2} (e_{1}-e_{4} )^{T} \varsigma (t) \\ &\quad \quad {} -3\varsigma ^{T}(t) (e_{1}+e_{4}-2e_{9} ) T_{2} (e_{1}+e_{4}-2e_{9} )^{T} \varsigma (t) \\ &\quad \quad {} -5\varsigma ^{T}(t) (e_{1}-e_{4}+6e_{9}-12e_{15} ) T_{2} (e_{1}-e_{4}+6e_{9}-12e_{15} )^{T}\varsigma (t) \\ &\quad \quad {} -7\varsigma ^{T}(t) (e_{1}+e_{4}-12e_{9}+60e_{15}-120e_{21} ) \\ &\quad \quad {} \times T_{2} (e_{1}+e_{4}-12e_{9}+60e_{15}-120e_{21} )^{T} \varsigma (t), \end{aligned} \end{aligned}$$
(48)
$$\begin{aligned}& {-}(\delta _{2}-\delta _{1}) \int _{t-\delta _{2}}^{t-\delta _{1}} \dot{w}^{T}(\alpha ) T_{3} \dot{w}(\alpha )\,d\alpha \\ & \quad =-(\delta _{2}-\delta _{1}) \int _{t-\delta _{2}}^{t-\delta (t)} \dot{w}^{T}(\alpha ) T_{3} \dot{w}(\alpha )\,d\alpha -(\delta _{2}- \delta _{1}) \int _{t-\delta (t)}^{t-\delta _{1}} \dot{w}^{T}(\alpha ) T_{3} \dot{w}(\alpha )\,d\alpha \\& \quad \leq -\varsigma ^{T}(t) (e_{5}-e_{4} ) T_{3} (e_{5}-e_{4} )^{T} \varsigma (t) \\& \quad\quad {} -3\varsigma ^{T}(t) (e_{5}+e_{4}-2e_{11} ) T_{3} (e_{5}+e_{4}-2e_{11} )^{T} \varsigma (t) \\& \quad\quad {} -5\varsigma ^{T}(t) (e_{5}-e_{4}+6e_{11}-12e_{16} ) T_{3} (e_{5}-e_{4}+6e_{11}-12e_{16} )^{T}\varsigma (t) \\& \quad\quad {} -7\varsigma ^{T}(t) (e_{5}+e_{4}-12e_{11}+60e_{16}-120e_{22} ) \\& \quad \quad {} \times T_{3} (e_{5}+e_{4}-12e_{11}+60e_{16}-120e_{22} )^{T} \varsigma (t) \\& \quad\quad {} -\varsigma ^{T}(t) (e_{3}-e_{5} ) T_{3} (e_{3}-e_{5} )^{T} \varsigma (t) \\& \quad\quad {} -3\varsigma ^{T}(t) (e_{3}+e_{5}-2e_{10} ) T_{3} (e_{3}+e_{5}-2e_{10} )^{T} \varsigma (t) \\& \quad\quad {} -5\varsigma ^{T}(t) (e_{3}-e_{5}+6e_{10}-12e_{17} ) T_{3} (e_{3}-e_{5}+6e_{10}-12e_{17} )^{T}\varsigma (t) \\& \quad\quad {} -7\varsigma ^{T}(t) (e_{3}+e_{5}-12e_{10}+60e_{17}-120e_{23} ) \\& \quad\quad {} \times T_{3} (e_{3}+e_{5}-12e_{10}+60e_{17}-120e_{23} )^{T} \varsigma (t). \end{aligned}$$
(49)

On the other hand, we have the following inequality from Lemma 2.2:

$$\begin{aligned}& - \frac{(\delta ^{2}_{2}-\delta ^{2}_{1})}{2} \int _{-\delta _{2}}^{- \delta _{1}} \int _{t+\beta }^{t} w^{T}(s) L w(s)\,ds \,d\beta \\& \quad \leq - \bigl(\delta _{2}^{2}-\delta _{1}^{2} \bigr)^{2}\varsigma ^{T}(t)e_{13}Le_{13}^{T} \varsigma (t) -\varepsilon \bigl(\delta _{2}^{2}-\delta _{1}^{2} \bigr)^{2} \varsigma ^{T}(t)e_{13}Le_{13}^{T} \varsigma (t) \\& \quad\quad {} - \bigl(\delta _{2}^{2}-\delta _{1}^{2} \bigr)^{2}\varsigma ^{T}(t)e_{12}Le_{12}^{T} \varsigma (t) -(1-\varepsilon ) \bigl(\delta _{2}^{2}-\delta _{1}^{2} \bigr)^{2} \varsigma ^{T}(t)e_{12}Le_{12}^{T} \varsigma (t), \end{aligned}$$
(50)

where \(\varepsilon = \frac{\delta ^{2}(t)-\delta ^{2}_{1}}{\delta ^{2}_{2}-\delta ^{2}_{1}}\).

From Lemma 2.3 and condition (36), we obtain

$$\begin{aligned}& -\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6} \int _{-\delta _{2}}^{-\delta _{1}} \int _{\beta }^{0} \int _{t+\lambda }^{t} \dot{w}^{T}(s) S_{1} \dot{w}(s)\,ds\,d\lambda \,d\beta \\& \quad \leq \varsigma ^{T}(t) [e_{1}, 2e_{12}+2e_{13} ] \biggl( \bigl(\delta _{2}^{2}-\delta _{1}^{2} \bigr)\bar{Y}+\frac{\delta ^{3}_{2}-\delta ^{3}_{1}}{6}\bar{X} \biggr) [e_{1}, 2e_{12}+2e_{13} ]^{T}\varsigma (t). \end{aligned}$$
(51)

From Lemma 2.5, we get

$$\begin{aligned}& -\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6} \int _{-\delta _{2}}^{- \delta _{1}} \int _{\beta }^{0} \int _{t+\lambda }^{t} w^{T}(s)S_{2} w(s)\,ds\,d\lambda \,d\beta \\& \quad \leq -\varsigma ^{T}(t)e_{18}S_{2}e_{18}^{T} \varsigma (t) -\alpha \varsigma ^{T}(t)e_{18}S_{2}e_{18}^{T} \varsigma (t) \\& \quad\quad {} -\varsigma ^{T}(t)e_{19}S_{2}e_{19}^{T} \varsigma (t) -(1-\alpha ) \varsigma ^{T}(t)e_{19}S_{2}e_{19}^{T} \varsigma (t), \end{aligned}$$
(52)

where \(\alpha = \frac{\delta ^{3}(t)-\delta ^{3}_{1}}{\delta ^{3}_{2}-\delta ^{3}_{1}}\).

It follows from Assumptions (A1) and (A2) that

$$\begin{aligned}& 2 \bigl(F_{p}Ww(t)-f \bigl(Ww(t) \bigr) \bigr)^{T}Z_{1} \bigl(f \bigl(Ww(t) \bigr)-F_{m}Ww(t) \bigr)\geq 0, \end{aligned}$$
(53)
$$\begin{aligned}& 2 \bigl(G_{p}Ww \bigl(t-\delta (t) \bigr)-g \bigl(Ww \bigl(t-\delta (t) \bigr) \bigr) \bigr)^{T} \\& \quad {} \times Z_{2} \bigl(g \bigl(Ww \bigl(t-\delta (t) \bigr) \bigr)-G_{m}Ww \bigl(t-\delta (t) \bigr) \bigr) \geq 0. \end{aligned}$$
(54)

Considering system (32), the following equation is obtained:

$$\begin{aligned} 0={}&2 \bigl[ w^{T}(t) \beta _{1} N_{1}^{T} + \dot{w}^{T}(t)\beta _{2} N_{2}^{T} \bigr] \bigl[ - \dot{w}(t) -Cw(t)+B_{0}f \bigl(Ww(t) \bigr) \\ & {} +B_{1}g \bigl(Ww \bigl(t-\delta (t) \bigr) \bigr)+B_{3}u(t) \bigr]. \end{aligned}$$
(55)

By adding the right-hand side of (55) to \(\dot{V}(w(t),t)\), we obtain from (38)–(54) that

$$\begin{aligned} \dot{V} \bigl(w(t),t \bigr) \leq &\bar{\varsigma }^{T}(t) \bigl(\varepsilon \bar{\Theta }^{(1)} +(1-\varepsilon ) \bar{\Theta }^{(2)}+\alpha \bar{\Theta }^{(3)}+(1-\alpha ) \bar{\Theta }^{(4)} \bigr) \bar{\varsigma }(t), \end{aligned}$$
(56)

where \(\bar{\Theta }^{(i)}=\frac{1}{2}\bar{\Theta } +\Theta _{i}\) (\(i=1,2\)) and \(\bar{\Theta }^{(j)}=\frac{1}{2}\bar{\Theta } +\Theta _{j}\) (\(j=3,4\)), \(\bar{\Theta }=\Theta +2e_{24}\beta _{1} B^{T}_{3}N_{1}e^{T}_{1}+2e_{24} \beta _{2} B^{T}_{3}N_{2}e^{T}_{2}\), with Θ and \(\Theta _{i}\), \(\Theta _{j}\) defined in (33) and (34).

When \(u(t)=0 \) (no disturbance), one has from (56) that

$$\begin{aligned} \dot{V} \bigl(w(t),t \bigr) \leq &\varsigma ^{T}(t) \bigl(\varepsilon \Theta ^{(1)} +(1-\varepsilon ) \Theta ^{(2)}+\alpha \Theta ^{(3)}+(1-\alpha ) \Theta ^{(4)} \bigr) \varsigma (t), \end{aligned}$$

where \(\Theta ^{(i)}=\frac{1}{2} \Theta +\Theta _{i}\) (\(i=1,2\)) and \(\Theta ^{(j)}=\frac{1}{2}\Theta +\Theta _{j}\) (\(j=3,4\)).

The upper bound of \(\dot{V}(w(t),t)\) is negative if the condition (35) and the following relations hold simultaneously:

$$\begin{aligned} \varepsilon \Theta ^{(1)}+(1-\varepsilon )\Theta ^{(2)} &< -b_{1}I, \\ \alpha \Theta ^{(3)}+(1-\alpha )\Theta ^{(4)} &< b_{2}I. \end{aligned}$$

The above relations can be rewritten as follows:

$$\begin{aligned} \varepsilon \bigl(\Theta ^{(1)}+b_{1}I \bigr)+(1-\varepsilon ) \bigl(\Theta ^{(2)}+b_{1}I \bigr) &< 0, \end{aligned}$$
(57)
$$\begin{aligned} \alpha \bigl(\Theta ^{(3)}-b_{2}I \bigr)+(1-\alpha ) \bigl( \Theta ^{(4)}-b_{2}I \bigr) &< 0. \end{aligned}$$
(58)

Since \(0\leq \varepsilon ,\alpha \leq 1\), the term \(\varepsilon (\Theta ^{(1)}+b_{1}I)+(1-\varepsilon )(\Theta ^{(2)}+b_{1}I)\) is a convex combination of \(\Theta ^{(1)}+b_{1}I\) and \(\Theta ^{(2)}+b_{1}I\), and the expression \(\alpha (\Theta ^{(3)}-b_{2}I)+(1-\alpha )(\Theta ^{(4)}-b_{2}I)\) is a convex combination of \(\Theta ^{(3)}-b_{2}I\) and \(\Theta ^{(4)}-b_{2}I\). These combinations are negative definite for all admissible ε and α if and only if their vertices are negative definite; therefore, (57) and (58) are equivalent to (33) and (34), respectively. Then, the system (32) with \(u(t)=0\) is asymptotically stable. □
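In practice, the criteria of Theorems 3.1 and 3.2 are checked as LMI feasibility problems; the paper uses the Matlab LMI toolbox for this. The sketch below shows the same workflow in Python with cvxpy on a drastically simplified stand-in: a single Lyapunov-type vertex LMI plays the role of the vertex conditions (4)–(5), while the scalar condition (6) and the block condition (7) are coded as stated. Assembling the full matrices Ξ or Θ follows the same pattern; the matrix A and all dimensions here are our own illustrative choices.

```python
# Sketch of checking LMI feasibility conditions like (4)-(7) with cvxpy.
# The vertex LMI below is a simplified stand-in, not the paper's full Xi.
import cvxpy as cp
import numpy as np

n, eps = 2, 1e-6
P  = cp.Variable((n, n), symmetric=True)
S1 = cp.Variable((n, n), symmetric=True)
X1, X2, X3, X4 = (cp.Variable((n, n)) for _ in range(4))
Y1, Y2 = cp.Variable((n, n)), cp.Variable((n, n))
c1, c2 = cp.Variable(), cp.Variable()

A = np.array([[-2.0, 0.5], [0.3, -1.5]])      # illustrative stable matrix
constraints = [
    P >> eps*np.eye(n), S1 >> eps*np.eye(n),
    # stand-in for the vertex LMIs (4)-(5):
    A.T @ P + P @ A + 2*c1*np.eye(n) << -eps*np.eye(n),
    c1 - c2 >= eps, c2 >= eps,                # condition (6) with c1, c2 > 0
    # condition (7): the 3x3 block matrix is positive semidefinite
    cp.bmat([[X1 + X1.T, X2 + X3.T, Y1],
             [X2.T + X3, X4 + X4.T, Y2],
             [Y1.T,      Y2.T,      S1]]) >> 0,
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("feasible:", prob.status == cp.OPTIMAL)
```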

3.2 Extended dissipativity analysis for generalized neural networks

In this section, new extended dissipativity criteria for the generalized neural networks (1), and their special case, are obtained based on the stability conditions that were developed in Theorems 3.1 and 3.2.

Theorem 3.3

For given scalars \(\delta _{1}\), \(\delta _{2}\), \(\sigma _{1}\), \(\sigma _{2}\), \(\beta _{1}\), \(\beta _{2}\), and a positive scalar \(d<1\), if there exist symmetric positive definite matrices \(P, U_{1},U_{2},T_{1},T_{2},T_{3},L,S_{1},S_{2}, Q \in \mathbb{R}^{n \times n}\), positive definite matrices \(N_{1},N_{2} \in \mathbb{R}^{n\times n}\), positive diagonal matrices \(Z_{1},Z_{2},Z_{3} \in \mathbb{R}^{n\times n}\), any matrices \(X_{1},X_{2},X_{3},X_{4},Y_{1},Y_{2} \in \mathbb{R}^{n\times n}\), and positive scalars \(c_{1}\), \(c_{2}\) such that the following LMIs hold:

$$\begin{aligned}& \tilde{\Xi }+2 \Xi _{i}+2c_{1}I < 0, \quad i=1,2, \end{aligned}$$
(59)
$$\begin{aligned}& \tilde{\Xi }+ 2\Xi _{j} -2c_{2}I< 0, \quad j=3,4, \end{aligned}$$
(60)
$$\begin{aligned}& c_{1}-c_{2}>0, \end{aligned}$$
(61)
$$\begin{aligned}& \begin{bmatrix} X_{1}+X^{T}_{1} & X_{2}+X^{T}_{3} &Y_{1} \\ * & X_{4}+X^{T}_{4} &Y_{2} \\ * & *&S_{1} \end{bmatrix}\geq 0, \end{aligned}$$
(62)
$$\begin{aligned}& \begin{bmatrix} dP-D_{1}^{T}\Gamma _{4}D_{1} & -D^{T}_{1}\Gamma _{4} D_{2} \\ * & (1-d)P-D_{2}^{T}\Gamma _{4}D_{2} \end{bmatrix}\geq 0, \end{aligned}$$
(63)

where

$$\begin{aligned}& \begin{aligned} \tilde{\Xi }={}&\bar{\Xi }-e_{1}D_{1}^{T} \Gamma _{1} D_{1} e^{T}_{1}-e_{1}D_{1}^{T} \Gamma _{1} D_{2} e^{T}_{5}-e_{1}D_{1}^{T} \Gamma _{1} D_{3} e^{T}_{26}-e_{5}D_{2}^{T} \Gamma _{1} D_{1} e^{T}_{1} \\ & {} -e_{5}D_{2}^{T} \Gamma _{1} D_{2} e^{T}_{5}-e_{5}D_{2}^{T} \Gamma _{1} D_{3} e^{T}_{26}-e_{26}D_{3}^{T} \Gamma _{1} D_{1} e^{T}_{1}-e_{26}D_{3}^{T} \Gamma _{1} D_{2} e^{T}_{5} \\ & {} -2e_{1}D_{1}^{T} \Gamma _{2} e^{T}_{26}-2e_{5}D_{2}^{T} \Gamma _{2} e^{T}_{26}-e_{26} \bigl(D_{3}^{T} \Gamma _{1} D_{3}+2D_{3}^{T} \Gamma _{2}+\Gamma _{3} \bigr) e^{T}_{26}, \end{aligned} \\& \bar{\Xi }=\Xi +2e_{26}\beta _{1} B^{T}_{3}N_{1}e^{T}_{1}+2e_{26} \beta _{2} B^{T}_{3}N_{2}e^{T}_{2}, \end{aligned}$$

then, the system (1) is asymptotically stable and extended dissipative.

Proof

To show that the GNN system (1) is extended dissipative, we use the LKF candidate (8) together with the performance index \(J(t)\) defined in (3).

Using inequality (29) in Theorem 3.1, equation (3), and LMIs (59)–(63), we obtain

$$\begin{aligned}& \dot{V} \bigl(w(t),t \bigr)-J(t)\leq \bar{\eta }^{T}(t) \bigl( \varepsilon \tilde{\Xi }^{(1)} +(1-\varepsilon ) \tilde{\Xi }^{(2)}+\alpha \tilde{\Xi }^{(3)}+(1-\alpha )\tilde{\Xi }^{(4)} \bigr) \bar{\eta }(t) \leq 0, \\& \begin{aligned}[b] \dot{V} \bigl(w(t),t \bigr)&\leq \bar{\eta }^{T}(t) \bigl(\varepsilon \tilde{\Xi }^{(1)} +(1- \varepsilon ) \tilde{\Xi }^{(2)}+\alpha \tilde{\Xi }^{(3)}+(1- \alpha )\tilde{\Xi }^{(4)} \bigr) \bar{\eta }(t) +J(t) \\ &\leq J(t). \end{aligned} \end{aligned}$$
(64)

Then we integrate both sides of the inequality (64) from 0 to \(t \geq 0\) and, letting \(\lambda \leq -V (w(0),0)\), get

$$\begin{aligned} \int _{0}^{t} J(s)\,ds \geq V \bigl(w(t),t \bigr)-V \bigl(w(0),0 \bigr) \geq w^{T}(t)Pw(t)+ \lambda . \end{aligned}$$
(65)

Next, we consider two cases:

Case I. \(\Gamma _{4}=0\). For this case, from inequality (65) we obtain

$$\begin{aligned} \int _{0}^{t_{f}}J(s)\,ds\geq \lambda . \end{aligned}$$
(66)

This matches Definition 2.1 with \(\Gamma _{4}=0\).

Case II. \(\Gamma _{4}\neq 0\). From Assumption (H1), it is clear that \(\Gamma _{1} = 0\), \(\Gamma _{2}=0\), \(\Gamma _{3} >0\), and \(D_{3}=0\). Then, for any \(0\leq t \leq t_{f}\) and \(0\leq t-\delta (t) \leq t_{f}\), (65) leads to

$$\begin{aligned} \int _{0}^{t_{f}}J(s)\,ds\geq \int _{0}^{t}J(s)\,ds\geq w^{T}(t)Pw(t)+ \lambda \end{aligned}$$

and

$$\begin{aligned} \int _{0}^{t_{f}}J(s)\,ds\geq \int _{0}^{t-\delta (t)}J(s)\,ds\geq w^{T} \bigl(t- \delta (t) \bigr)Pw \bigl(t-\delta (t) \bigr)+\lambda , \end{aligned}$$

respectively. On the other hand, for \(t-\delta (t)\leq 0\), it can be shown that

$$\begin{aligned} w^{T} \bigl(t-\delta (t) \bigr)Pw \bigl(t-\delta (t) \bigr)+\lambda \leq & \Vert P \Vert \bigl\vert w \bigl(t-\delta (t) \bigr) \bigr\vert ^{2} +\lambda \\ \leq & \Vert P \Vert \sup_{-\delta _{2} \leq \theta \leq 0} \bigl\vert \phi ( \theta ) \bigr\vert ^{2} +\lambda \\ \leq & -V \bigl(w(0),0 \bigr) \\ \leq & \int _{0}^{t_{f}}J(s)\,ds. \end{aligned}$$

Thus, there exists a positive scalar \(d<1\) such that

$$\begin{aligned} \int _{0}^{t_{f}}J(s)\,ds\geq \lambda +d w^{T}(t)Pw(t)+(1-d)w^{T} \bigl(t- \delta (t) \bigr)Pw \bigl(t- \delta (t) \bigr). \end{aligned}$$

Using the relation between the output \(z(t)\) and LMI (63), we have

$$\begin{aligned} z(t)^{T}\Gamma _{4} z(t) =&- \begin{bmatrix} w(t) \\ w(t-\delta (t)) \end{bmatrix}^{T} \begin{bmatrix} dP-D_{1}^{T}\Gamma _{4}D_{1} &-D^{T}_{1}\Gamma _{4} D_{2} \\ * & (1-d)P-D_{2}^{T}\Gamma _{4}D_{2} \end{bmatrix} \\ & {} \times \begin{bmatrix} w(t) \\ w(t-\delta (t)) \end{bmatrix}+d w^{T}(t)Pw(t)+(1-d)w^{T} \bigl(t-\delta (t) \bigr)Pw \bigl(t-\delta (t) \bigr). \end{aligned}$$

So, it is clear that for any t satisfying \(0\leq t \leq t_{f}\),

$$\begin{aligned} \int _{0}^{t_{f}}J(s)\,ds\geq z(t)^{T} \Gamma _{4} z(t)+\lambda . \end{aligned}$$
(67)

Taking the supremum over t in inequalities (66) and (67) shows that system (1) is extended dissipative. This completes the proof. □

Theorem 3.4

For given scalars \(\delta _{1}\), \(\delta _{2}\), \(\beta _{1}\), and \(\beta _{2}\), if there exist symmetric positive definite matrices \(P, U_{1},U_{2},T_{1},T_{2},T_{3},L,S_{1},S_{2} \in \mathbb{R}^{n \times n}\), positive definite matrices \(N_{1},N_{2} \in \mathbb{R}^{n\times n}\), positive diagonal matrices \(Z_{1},Z_{2} \in \mathbb{R}^{n\times n}\), any matrices \(X_{1},X_{2},X_{3},X_{4},Y_{1},Y_{2} \in \mathbb{R}^{n\times n}\), and positive scalars \(b_{1}\), \(b_{2}\) such that the following LMIs hold:

$$\begin{aligned}& \tilde{\Theta }+2 \Theta _{i}+2b_{1}I < 0, \quad i=1,2, \end{aligned}$$
(68)
$$\begin{aligned}& \tilde{\Theta }+ 2\Theta _{j} -2b_{2}I< 0, \quad j=3,4, \end{aligned}$$
(69)
$$\begin{aligned}& b_{1}-b_{2}>0, \end{aligned}$$
(70)
$$\begin{aligned}& \begin{bmatrix} X_{1}+X^{T}_{1} & X_{2}+X^{T}_{3} &Y_{1} \\ * & X_{4}+X^{T}_{4} &Y_{2} \\ * & *&S_{1} \end{bmatrix}\geq 0, \end{aligned}$$
(71)
$$\begin{aligned}& P-D_{1}^{T}\Gamma _{4} D_{1}\geq 0, \end{aligned}$$
(72)

where

$$\begin{aligned}& \tilde{\Theta }=\bar{\Theta }-e_{1}D_{1}^{T} \Gamma _{1} D_{1} e^{T}_{1}-2e_{24} \Gamma ^{T}_{2} D_{1} e^{T}_{1}-e_{24} \Gamma _{3} e^{T}_{24}, \\& \bar{\Theta }=\Theta +2e_{24}\beta _{1} B^{T}_{3}N_{1}e^{T}_{1}+2e_{24} \beta _{2} B^{T}_{3}N_{2}e^{T}_{2}, \end{aligned}$$

then, the system (32) is asymptotically stable and extended dissipative.

Proof

To show that the GNN system (32) is extended dissipative, we first use the LKF candidate (37) together with the performance index \(J(t)\) from (3) for the GNNs (32).

Using inequality (56) in Theorem 3.2, equation (3), and LMIs (68)–(72), we get

$$\begin{aligned}& \dot{V} \bigl(w(t),t \bigr)-J(t)\leq \bar{\varsigma }^{T}(t) \bigl( \varepsilon \tilde{\Theta }^{(1)} +(1-\varepsilon ) \tilde{\Theta }^{(2)}+\alpha \tilde{\Theta }^{(3)}+(1-\alpha )\tilde{\Theta }^{(4)} \bigr) \bar{\varsigma }(t) \leq 0, \\& \begin{aligned}[b] \dot{V} \bigl(w(t),t \bigr)&\leq \bar{ \varsigma }^{T}(t) \bigl(\varepsilon \tilde{\Theta }^{(1)} +(1- \varepsilon ) \tilde{\Theta }^{(2)}+\alpha \tilde{\Theta }^{(3)}+(1-\alpha )\tilde{\Theta }^{(4)} \bigr) \bar{\varsigma }(t) +J(t) \\ &\leq J(t). \end{aligned} \end{aligned}$$
(73)

Then we integrate both sides of inequality (73) from 0 to \(t \geq 0\) and, taking \(\lambda =0\) (so that \(\lambda \leq -V (w(0),0 )\)), obtain

$$\begin{aligned} \int _{0}^{t} J(s)\,ds \geq V \bigl(w(t),t \bigr)-V \bigl(w(0),0 \bigr) \geq w^{T}(t)Pw(t). \end{aligned}$$
(74)

Next, we consider two cases:

Case I. \(\Gamma _{4}=0\). For this case, from inequality (74) we get

$$\begin{aligned} \int _{0}^{t_{f}}J(s)\,ds\geq 0. \end{aligned}$$
(75)

This matches Definition 2.1 with \(\Gamma _{4}=0\).

Case II. \(\Gamma _{4}\neq 0\). From Assumption (H1), it is clear that \(\Gamma _{1} = 0\), \(\Gamma _{2}=0\), and \(\Gamma _{3} >0\). Then, for any \(0\leq t \leq t_{f}\), (74) leads to

$$\begin{aligned} \int _{0}^{t_{f}}J(s)\,ds\geq \int _{0}^{t}J(s)\,ds\geq w^{T}(t)Pw(t). \end{aligned}$$

Using the relation between the output \(z(t)\) and LMI (72), we have

$$\begin{aligned} z(t)^{T}\Gamma _{4} z(t)=-w^{T}(t) \bigl(P-D_{1}^{T}\Gamma _{4} D_{1} \bigr)w(t)+w^{T}(t)Pw(t) \leq w^{T}(t)Pw(t). \end{aligned}$$

So, it is clear that for any t satisfying \(0\leq t \leq t_{f}\),

$$\begin{aligned} \int _{0}^{t_{f}}J(s)\,ds\geq z(t)^{T} \Gamma _{4} z(t). \end{aligned}$$
(76)

Taking the supremum over t in inequalities (75) and (76) shows that system (32) is extended dissipative. This completes the proof. □

Remark 3

Recently, the extended dissipativity of neural networks with time delays has been investigated in many works, and the results have been reported in [11, 34, 35]. Moreover, extended dissipativity analysis was carried out for the GNNs with interval time-varying delay signals [37]. Unfortunately, most of this research does not include the distributed time-varying delay in the GNN model. The distributed time-varying delay results from the transmission of nerve impulses along axons of diverse sizes and lengths and is therefore difficult to avoid. For this reason, the extended dissipativity analysis for the GNNs with interval discrete and distributed time-varying delays is addressed in this work.

Remark 4

In recent years, the WSII was developed in [40] to estimate the derivatives of LKFs with a single integral term. Several studies have used the improved WSII for this purpose; for example, in [42] the authors obtained finite-time passivity criteria for neutral-type neural networks with time-varying delays by combining the improved WSII with Jensen’s inequality. In 2018, a novel triple integral inequality was constructed in [39] to estimate the derivative of LKFs with a triple integral term; using it together with the extended reciprocally convex technique, the authors achieved an improved delay-dependent exponential stability criterion. In this work, we use the improved WSII to estimate four terms and the triple integral inequality to estimate \(-\frac{(\delta ^{3}_{2}-\delta ^{3}_{1})}{6}\int _{-\delta _{2}}^{- \delta _{1}} \int _{\beta }^{0} \int _{t+\lambda }^{t} \dot{w}^{T}(s)S_{1} \dot{w}(s)\,ds\,d\lambda \,d\beta \). By applying the improved WSII, the novel triple integral inequality, and the convex combination technique, we obtain less conservative results than those of [6, 7, 43–45].

Remark 5

In this work, the Lyapunov–Krasovskii functional contains single, double, triple, and quadruple integral terms, in which full information on the delays \(\delta _{1}\), \(\delta _{2}\), \(\sigma _{1}\), \(\sigma _{2}\) and on the state variable is used. Moreover, more information on the activation functions is taken into account in the stability and performance analysis; that is, the sector bounds \(F^{-}_{i} \leq \frac{f_{i}(W_{i}w(t))}{W_{i}w(t)} \leq F^{+}_{i}\), \(G^{-}_{i} \leq \frac{g_{i}(W_{i}w(t-\delta (t)))}{W_{i}w(t-\delta (t))} \leq G^{+}_{i}\), and \(H^{-}_{i} \leq \frac{h_{i}(W_{i}w(t))}{W_{i}w(t)} \leq H^{+}_{i}\) are exploited in the proofs (for instance, \(f_{i}(x)=\tanh (x)\) satisfies such a bound with \(F^{-}_{i}=0\) and \(F^{+}_{i}=1\)). Therefore, the construction of the Lyapunov–Krasovskii functional and the techniques used to bound its derivative are the main keys to the improved results of this work. In the proofs of Theorems 3.1–3.4, the improved Wirtinger single integral inequality [40], a novel triple integral inequality [39], and the convex combination technique are used to bound the derivative of the Lyapunov–Krasovskii functional; these provide tighter bounds than the inequalities in [6, 7, 43–45]. All of this reduces the conservatism of our results compared with some existing works, as the numerical examples illustrate. However, the complexity of the Lyapunov–Krasovskii functional means that the LMIs derived in this work carry a large amount of information about the system and are correspondingly heavy to set up. Hence, for further work, it would be interesting to develop a simpler Lyapunov–Krasovskii functional that still achieves better results.

4 Numerical examples

In this section, five numerical examples are presented to illustrate the effectiveness of our results.

Example 4.1

Consider the generalized neural network (32) with the following matrices:

$$\begin{aligned}& C=\operatorname{diag}\{7.3458,6.9987,5.5949\}, \qquad W= \begin{bmatrix} 13.6014 & -2.9616 & -0.6936 \\ 7.4736 & 21.6810 & 3.2100 \\ 0.7290 & -2.6334 & -20.1300 \end{bmatrix} \\& B_{1}=I, \qquad B_{0}=F_{m}=G_{m}=0, \quad \text{and} \quad F_{p}=G_{p}=\operatorname{diag} \{0.3680,0.1795, 0.2876\}. \end{aligned}$$

By taking parameters \(\beta _{1}=\beta _{2}=1\) and solving Example 4.1 using the LMIs in Theorem 3.2, we obtain the maximum allowable values of \(\delta _{2}\) for \(\delta _{1}=0.5\) under various bounds μ on the delay derivative, including the case where μ is unknown, as shown in Table 1. The table shows that the criteria derived in this research are less conservative than those in [43–45].

Table 1 The maximum allowable values of \(\delta _{2}\) for \(\delta _{1}=0.5\) and different values of μ in Example 4.1

Example 4.2

Consider the generalized neural network (32) with the following matrices:

$$\begin{aligned}& C=W=I, \qquad B_{0}= \begin{bmatrix} -1 & 0.5 \\ 0.5 & -1.5 \end{bmatrix}, \qquad B_{1}= \begin{bmatrix} -2 & 0.5 \\ 0.5 & -2 \end{bmatrix}, \\& F_{m} = G_{m}=0, \quad \text{and} \quad F_{p}=G_{p}=\operatorname{diag}\{0.4,0.8\}. \end{aligned}$$

By taking parameters \(\beta _{1}=\beta _{2}=1\) and solving Example 4.2 using the LMIs in Theorem 3.2, we obtain the maximum allowable values of \(\delta _{2}\) for \(\delta _{1}=1\) under various bounds μ on the delay derivative, including the case where μ is unknown, as shown in Table 2. The table shows that the criteria derived in this research are less conservative than those in [6, 7, 45].

Table 2 The maximum allowable values of \(\delta _{2}\) for \(\delta _{1}=1\) and different values of μ in Example 4.2

Example 4.3

Consider the generalized neural network (1) with \(\delta _{1}=0.3\), \(\delta _{2}=1\), \(\sigma _{1}=0.01\), \(\sigma _{2}=0.4\), \(\beta _{1}=0.9\), \(\beta _{2}=1\),

$$\begin{aligned}& C=I, \qquad B_{0}= \begin{bmatrix} 0.2 & -0.1 \\ -0.5 & 0.1 \end{bmatrix}, \qquad B_{1}= \begin{bmatrix} -0.5 & 0 \\ -0.3 & -0.2 \end{bmatrix}, \\& B_{2}= \begin{bmatrix} 0.15 & 0.1 \\ 0 & -0.3 \end{bmatrix}, \qquad W=0.5I, \qquad I= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \\& D_{1}=D_{2}=D_{3}=0, \qquad F_{p}=G_{p}=H_{p}=0.5I, \quad \text{and} \quad F_{m}=G_{m}=H_{m}=0. \end{aligned}$$

Solving the LMIs (4) and (5) in Theorem 3.1, we obtain the following feasible solution:

$$\begin{aligned}& P=10^{-3}\times \begin{bmatrix} 0.0814&-0.0082 \\ -0.0082 &0.4716 \end{bmatrix}, \qquad U_{1}=10^{-4}\times \begin{bmatrix} 0.1293&-0.0131 \\ -0.0131& 0.8506 \end{bmatrix}, \\& U_{2}=10^{-4}\times \begin{bmatrix} 0.0851&-0.0155 \\ -0.0155&0.8611 \end{bmatrix}, \qquad T_{1}=10^{-6}\times \begin{bmatrix} 0.2360 &0.0002 \\ 0.0002&0.2268 \end{bmatrix}, \\& T_{2}= 10^{-6}\times \begin{bmatrix} 0.2350 &0.0002 \\ 0.0002&0.2268 \end{bmatrix}, \qquad T_{3}= 10^{-6}\times \begin{bmatrix} 0.7593&0.0118 \\ 0.0118& 0.6331 \end{bmatrix}, \\& L= 10^{-3}\times \begin{bmatrix} 0.0882 & -0.0089 \\ -0.0089&0.9342 \end{bmatrix}, \qquad S_{1}= \begin{bmatrix} 0.0010&-0.0001 \\ -0.0001&0.0041 \end{bmatrix}, \\& S_{2}= \begin{bmatrix} 0.0004& 0 \\ 0&0.0030 \end{bmatrix}, \qquad Q= \begin{bmatrix} 0.0014&0 \\ 0&0.0032 \end{bmatrix}, \\& Z_{1}= 10^{-3}\times \begin{bmatrix} 0.4624&0 \\ 0&0.4624 \end{bmatrix}, \qquad Z_{2}= 10^{-4}\times \begin{bmatrix} 0.2162&0 \\ 0&0.2162 \end{bmatrix}, \\& Z_{3}= 10^{-3}\times \begin{bmatrix} 0.5730&0 \\ 0&0.5730 \end{bmatrix}, \qquad X_{1}= 10^{7}\times \begin{bmatrix} 0&1.7712 \\ -1.7712&0 \end{bmatrix}, \\& X_{2}=10^{8} \times \begin{bmatrix} -0.6297&-0.4719 \\ 0.8100&-5.6813 \end{bmatrix}, \qquad X_{3}= 10^{8}\times \begin{bmatrix} 0.6297&-0.8100 \\ 0.4719&5.6813 \end{bmatrix}, \\& X_{4}=10^{5}\times \begin{bmatrix} 0&1.1347 \\ -1.1347&0 \end{bmatrix}, \qquad Y_{1}= 10^{-3}\times \begin{bmatrix} -0.9288&0.0009 \\ 0.0002&-0.9876 \end{bmatrix}, \\& Y_{2}=10^{-3}\times \begin{bmatrix} 0.9194&-0.0003 \\ 0.0001&0.8781 \end{bmatrix}, \qquad c_{1}=2.0007\times 10^{-8}, \quad \text{and} \\& c_{2}=1.1845\times 10^{-8}. \end{aligned}$$

The maximum allowable values of \(\delta _{2}\) for different values of \(\delta _{1}\) are shown in Table 3. Figure 1 shows the response solution \(w(t)\) in Example 4.3 where \(u(t)=0\) and the initial condition \(\phi (t)=[-0.2 \ 0.2]^{T}\). Figure 2 shows the response solution \(w(t)\) in Example 4.3 where \(u(t)\) is Gaussian noise with mean 0 and variance 1, and the initial condition is \(\phi (t)=[-0.2 \ 0.2]^{T}\).

Figure 1 The trajectories of \(w_{1}(t)\) and \(w_{2}(t)\) with \(u(t)=0\) in Example 4.3

Figure 2 The trajectories of \(w_{1}(t)\) and \(w_{2}(t)\) in Example 4.3

Table 3 The maximum allowable values of \(\delta _{2}\) for different values of \(\delta _{1}\) in Example 4.3

Example 4.4

In this example, we consider the extended dissipativity performance of the generalized neural network (32), which unifies several well-known performance notions, namely the \(L_{2}\)–\(L_{\infty }\), \(H_{\infty }\), passivity, and dissipativity performances. We consider the GNNs (32) with the following parameters:

$$\begin{aligned}& C=5I, \qquad B_{0}= \begin{bmatrix} 2 & -0.1 \\ -5 & 2 \end{bmatrix}, \qquad W=0.3I, \qquad \beta _{1}=\beta _{2}=1, \\& B_{1}= \begin{bmatrix} -1.5 & -0.1 \\ -0.2 & -1.5 \end{bmatrix}, \qquad B_{3}=D_{1}=I, \qquad F_{m}=G_{m}=0, \quad \text{and} \quad F_{p}=G_{p}=I. \end{aligned}$$

When we solve Example 4.4 by using the LMIs (68) and (69) in Theorem 3.4, we consider the following four cases:

Case I. \(L_{2}\)–\(L_{\infty }\) performance. By using the LMIs in Theorem 3.4 and letting \(\Gamma _{1}=0\), \(\Gamma _{2}=0\), \(\Gamma _{3}=\gamma ^{2} I\), and \(\Gamma _{4}=I\), the extended dissipativity performance reduces to the \(L_{2}\)–\(L_{\infty }\) performance. The \(L_{2}\)–\(L_{\infty }\) performance index γ is obtained for \(\delta _{1}=0.5\) and different values of \(\delta _{2}\), as shown in Table 4. Figure 3 shows the plot of \(L(t)=\sqrt{z^{T}(t)z(t)/\int _{0}^{t}u^{T}(s)u(s)\,ds}\) versus time with the initial condition \(\phi (t)=[-0.1 \ 0.1]^{T}\). Clearly, \(\sup_{t} L(t)=0.0265\), which is less than the prescribed \(L_{2}\)–\(L_{\infty }\) performance index 2.0751 in Table 4.

Figure 3 The trajectory of \(L(t)\) in Example 4.4

Table 4 Minimum γ for Case I and Case II in Example 4.4 with \(\delta _{1}=0.5\) and various \(\delta _{2}\)

Case II. Passivity performance. By applying the LMIs in Theorem 3.4 and taking \(\Gamma _{1}=0\), \(\Gamma _{2}=I\), \(\Gamma _{3}=\gamma I\), and \(\Gamma _{4}=0\), the extended dissipativity performance reduces to the passivity performance. The passivity performance index γ is obtained for \(\delta _{1}=0.5\) and various values of \(\delta _{2}\), as presented in Table 4. Figure 4 shows the plot of \(P(t)=-2\int _{0}^{t} z^{T}(s)u(s)\,ds/\int _{0}^{t}u^{T}(s)u(s)\,ds\) versus time with the initial condition \(\phi (t)=[-0.1 \ 0.1]^{T}\). Clearly, \(P(t)\) converges to 0.6817, which is less than the prescribed passivity performance index 4.3061 in Table 4.

Figure 4 The trajectory of \(P(t)\) in Example 4.4

Case III. \(H_{\infty }\) performance. By using the LMIs in Theorem 3.4 and letting \(\Gamma _{1}=-I\), \(\Gamma _{2}=0\), \(\Gamma _{3}=\gamma ^{2} I\), and \(\Gamma _{4}=0\), the extended dissipativity performance reduces to the \(H_{\infty }\) performance. The maximum allowable values of \(\delta _{2}\) for various γ are obtained for \(\delta _{1}=0.5\) and listed in Table 5. Figure 5 shows the plot of \(H(t)=\sqrt{\int _{0}^{t} z^{T}(s)z(s)\,ds/\int _{0}^{t}u^{T}(s)u(s)\,ds}\) versus time with the initial condition \(\phi (t)=[-0.1 \ 0.1]^{T}\). Clearly, \(H(t)\) converges to 3.0415.

Figure 5 The trajectory of \(H(t)\) in Example 4.4

Table 5 The maximum allowable values of \(\delta _{2}\) for Case III and Case IV in Example 4.4 with \(\delta _{1}=0.5\) and various γ

Case IV. Dissipativity performance. By applying the LMIs in Theorem 3.4 and taking \(\Gamma _{1}=-I\), \(\Gamma _{2}=I\), \(\Gamma _{3}=\mathcal{R}-\gamma I\) with \(\mathcal{R}= 8I\), and \(\Gamma _{4}=0\), the extended dissipativity performance reduces to the dissipativity performance. The maximum allowable values of \(\delta _{2}\) for various γ are obtained for \(\delta _{1}= 0.5\) and shown in Table 5. Figure 6 shows the plot of \(D(t)= (\int _{0}^{t}-z^{T}(s)z(s)+2z^{T}(s)u(s)+8u^{T}(s)u(s)\,ds )/ (\int _{0}^{t}u^{T}(s)u(s)\,ds )\) versus time with the initial condition \(\phi (t)=[-0.1 \ 0.1]^{T}\). Clearly, \(D(t)\) converges to −1.9323.

Figure 6 The trajectory of \(D(t)\) in Example 4.4

Example 4.5

Consider the neural network (1) with \(\sigma _{1}=0.1\), \(\sigma _{2}=0.5\), \(\beta _{1}=2\), \(\beta _{2}=3\),

$$\begin{aligned}& C=5I, \qquad B_{0}= \begin{bmatrix} 0.2 & -0.1 \\ -0.5 & 0.1 \end{bmatrix}, \qquad B_{1}= \begin{bmatrix} -0.5 & 0 \\ -0.3 & -0.2 \end{bmatrix}, \\& B_{2}= \begin{bmatrix} 0.15 & 0.1 \\ 0 & -0.3 \end{bmatrix}, \qquad B_{3}=I, \qquad W=-0.4I, \\& D_{1}=D_{2}=D_{3}=0.1I, \qquad F_{p}=G_{p}=H_{p}=I, \quad \text{and} \quad F_{m}=G_{m}=H_{m}=0. \end{aligned}$$

Then, the extended dissipativity performance of system (1) is determined by choosing \(\Gamma _{1}=-I\), \(\Gamma _{2}=I\), \(\Gamma _{3}=(8-\gamma )I\), and \(\Gamma _{4}=I\). Solving the LMIs (59) and (60) in Theorem 3.3, we obtain the maximum allowable values of \(\delta _{2}\) for different values of \(\delta _{1}\), which are shown in Table 6. Figure 7 shows the response solution \(w(t)\) in Example 4.5 with \(u(t)=0\) and the initial condition \(\phi (t)=[-0.1 \ 0.1]^{T}\). Figure 8 shows the response solution \(w(t)\) in Example 4.5 where \(u(t)\) is Gaussian noise with mean 0 and variance 1, and the initial condition is \(\phi (t)=[-0.1 \ 0.1]^{T}\).

Figure 7 The trajectories of \(w_{1}(t)\) and \(w_{2}(t)\) with \(u(t)=0\) in Example 4.5

Figure 8 The trajectories of \(w_{1}(t)\) and \(w_{2}(t)\) in Example 4.5

Table 6 The maximum allowable values of \(\delta _{2}\) for different values of \(\delta _{1}\) in Example 4.5

5 Conclusions

In this paper, we have focused on the problem of asymptotic stability and extended dissipativity analysis for generalized neural networks with interval discrete and distributed time-varying delays. First, we obtained new asymptotic stability criteria for the generalized neural networks, together with an improved asymptotic stability criterion for a special case of the generalized neural networks, by using a suitable Lyapunov–Krasovskii functional (LKF), an improved Wirtinger single integral inequality, a novel triple integral inequality, and the convex combination technique. Then, the asymptotic stability results were applied to an extended dissipativity analysis that covers the \(H_{\infty }\), \(L_{2}\)–\(L_{\infty }\), passivity, and dissipativity performances by setting parameters in the general performance index. Finally, we presented numerical examples demonstrating that our criteria are less conservative than those in the literature, together with examples of the asymptotic stability and extended dissipativity performance of the generalized neural networks, including a special case of the generalized neural networks. In future work, the results and methods derived in this paper are expected to be applied to other systems such as fuzzy generalized neural networks, generalized neural networks with Markovian switching, complex dynamical networks, and so on [10, 32, 46].

Availability of data and materials

Not applicable.

References

  1. Cichoki, A., Unbehauen, R.: Neural Networks for Optimization and Signal Processing. Wiley, Chichester (1993)


  2. Watta, P.B., Wang, K., Hassoun, M.H.: Recurrent neural nets as dynamical Boolean systems with applications to associative memory. IEEE Trans. Neural Netw. 8, 1268–1280 (1997)


  3. Cohen, M.A., Grossberg, S.: Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst. Man Cybern. 13, 815–826 (1983)


  4. Meng, X.Z., Zhao, S.N., Zhang, W.Y.: Adaptive dynamics analysis of a predator–prey model with selective disturbance. Appl. Math. Comput. 266, 946–958 (2015)


  5. Chen, Y., Zheng, W.X.: Stochastic state estimation for neural networks with distributed delays and Markovian jump. Neural Netw. 25, 14–20 (2012)


  6. Chen, J., Sun, J., Liu, G.P., Rees, D.: New delay-dependent stability criteria for neural networks with time-varying interval delay. Phys. Lett. A 374, 4397–4405 (2010)


  7. Wang, J.A., Ma, X.H., Wen, X.Y.: Less conservative stability criteria for neural networks with interval time-varying delay based on delay-partitioning approach. Neurocomputing 155, 146–152 (2015)


  8. Lv, X., Li, X.: Delay-dependent dissipativity of neural networks with mixed non-differentiable interval delays. Neurocomputing 267, 85–94 (2017)


  9. Yu, H.J., He, Y., Wu, M.: Delay-dependent state estimation for neural networks with time-varying delay. Neurocomputing 275, 881–887 (2018)


  10. Li, Q., Liang, J.: Improved stabilization results for Markovian switching CVNNs with partly unknown transition rates. Neural Process. Lett. 52(2), 1189–1205 (2020)


  11. Zeng, H.B., Park, J.H., Zhang, C.F., Wang, W.: Stability and dissipativity analysis of static neural networks with interval time-varying delay. J. Franklin Inst. 352, 1284–1295 (2015)


  12. Zhang, X.M., Han, Q.L.: Global asymptotic stability for a class of generalized neural networks with interval time-varying delays. IEEE Trans. Neural Netw. 22, 1180–1192 (2011)


  13. Lin, W.J., He, Y., Zhang, C.K., Long, F., Wu, M.: Dissipativity analysis for neural networks with two-delay components using an extended reciprocally convex matrix inequality. Inf. Sci. 450, 169–181 (2018)


  14. Manivannan, R., Samidurai, R., Cao, J., Alsaedi, A., Alsaadi, F.E.: Global exponential stability and dissipativity of generalized neural networks with time-varying delay signals. Neural Netw. 87, 149–159 (2017)


  15. Liu, Y., Lee, S.M., Kwon, O.M., Park, J.H.: New approach to stability criteria for generalized neural networks with interval time-varying delays. Neurocomputing 149, 1544–1551 (2015)


  16. Rakkiyappan, R., Sivasamy, R., Park, J.H., Lee, T.H.: An improved stability criterion for generalized neural networks with additive time-varying delays. Neurocomputing 171, 615–624 (2016)


  17. Shu, Y., Liu, X.G., Qiu, S., Wang, F.: Dissipativity analysis for generalized neural networks with Markovian jump parameters and time-varying delay. Nonlinear Dyn. 89, 2125–2140 (2017)


  18. Li, Z., Bai, Y., Huang, C., Yan, H., Mu, S.: Improved stability analysis for delayed neural networks. IEEE Trans. Neural Netw. Learn. Syst. 29, 4535–4541 (2018)


  19. Du, Y., Liu, X., Zhong, S.: Robust reliable \(H_{\infty }\) control for neural networks with mixed time delays. Chaos Solitons Fractals 91, 1–8 (2016)


  20. Rajavel, S., Samidurai, R., Cao, J., Alsaedi, A., Ahmad, B.: Finite-time non-fragile passivity control for neural networks with time-varying delay. Appl. Math. Comput. 297, 145–158 (2017)


  21. Thuan, M.V., Trinh, H., Hien, L.V.: New inequality-based approach to passivity analysis of neural networks with interval time-varying delay. Neurocomputing 194, 301–307 (2016)


  22. Maharajan, C., Raja, R., Cao, J., Rajchakit, G., Alsaedi, A.: Novel results on passivity and exponential passivity for multiple discrete delayed neutral-type neural networks with leakage and distributed time-delays. Chaos Solitons Fractals 115, 268–282 (2018)


  23. Senthilraj, S., Raja, R., Zhu, Q., Samidurai, R., Yao, Z.: Exponential passivity analysis of stochastic neural networks with leakage, distributed delays and Markovian jumping parameters. Neurocomputing 175, 401–410 (2015)


  24. Yotha, N., Botmart, T., Mukdasai, K., Weera, W.: Improved delay-dependent approach to passivity analysis for uncertain neural networks with discrete interval and distributed time-varying delays. Vietnam J. Math. 45(4), 721–736 (2017)


  25. Cao, J., Manivannan, R., Chong, K.T., Lv, X.: Enhanced \(L_{2}\)\(L_{\infty }\) state estimation design for delayed neural networks including leakage term via quadratic-type generalized free-matrix-based integral inequality. J. Franklin Inst. 356, 7371–7392 (2019)


  26. Kumar, S.V., Anthoni, S.M., Raja, R.: Dissipative analysis for aircraft flight control systems with randomly occurring uncertainties via non-fragile sampled-data control. Math. Comput. Simul. 155, 217–226 (2019)


  27. Raja, R., Karthik Raja, U., Samidurai, R., Leelamani, A.: Improved stochastic dissipativity of uncertain discrete-time neural networks with multiple delays and impulses. Int. J. Mach. Learn. Cybern. 6(2), 289–305 (2015)


  28. Willems, J.C.: Dissipative dynamical systems part I: general theory. Arch. Ration. Mech. Anal. 45(5), 321–351 (1972)


  29. Niu, Y., Wang, X., Lu, J.: Dissipative-based adaptive neural control for nonlinear systems. J. Control Theory Appl. 2, 126–130 (2004)


  30. Jeltsema, D., Scherpen, J.M.A.: Tuning of passivity-preserving controllers for switched-mode power converters. IEEE Trans. Autom. Control 49, 1333–1344 (2004)


  31. Senthilraj, S., Raja, R., Cao, J., Fardoun, H.M.: Dissipativity analysis of stochastic fuzzy neural networks with randomly occurring uncertainties using delay dividing approach. Nonlinear Anal., Model. Control 24(4), 561–581 (2019)


  32. Li, Q., Liang, J.: Dissipativity of the stochastic Markovian switching CVNNs with randomly occurring uncertainties and general uncertain transition rates. Int. J. Syst. Sci. 51(6), 1102–1118 (2020)


  33. Zhang, B., Zheng, W.X., Xu, S.: Filtering of Markovian jump delay systems based on a new performance index. IEEE Trans. Circuits Syst. I, Regul. Pap. 60, 1250–1263 (2013)


  34. Lee, T.H., Park, M.J., Park, J.H., Kwon, O.M., Lee, S.M.: Extended dissipative analysis for neural networks with time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 25, 1936–1941 (2014)


  35. Wang, X., She, K., Zhong, S., Cheng, J.: On extended dissipativity analysis for neural networks with time-varying delay and general activation functions. Adv. Differ. Equ. 2016, 1 (2016)


  36. Lin, W.J., He, Y., Zhang, C.K., Wu, M., Shen, J.: Extended dissipativity analysis for Markovian jump neural networks with time-varying delay via delay-product-type functionals. IEEE Trans. Neural Netw. Learn. Syst. 30(8), 2527–2537 (2019)


  37. Manivannan, R., Mahendrakumar, G., Samidurai, R., Cao, J., Alsaedi, A.: Exponential stability and extended dissipativity criteria for generalized neural networks with interval time-varying delay signals. J. Franklin Inst. 354, 4353–4376 (2017)


  38. Sun, J., Liu, G.P., Chen, J.: Delay-dependent stability and stabilization of neutral time-delay systems. Int. J. Robust Nonlinear Control 19, 1364–1375 (2009)


  39. Xie, W., Zhu, H., Zhong, S., Zhang, D., Shi, K., Cheng, J.: Extended dissipative estimator design for uncertain switched delayed neural networks via a novel triple integral inequality. Appl. Math. Comput. 335, 82–102 (2018)


  40. Park, P.G., Lee, W.I., Lee, S.Y.: Auxiliary function-based integral/summation inequalities: application to continuous/discrete time-delay systems. Int. J. Control. Autom. Syst. 14, 3–11 (2016)


  41. Farnam, A., Esfanjani, R.M.: Improved linear matrix inequality approach to stability analysis of linear systems with interval time-varying delays. J. Comput. Appl. Math. 294, 49–56 (2016)


  42. Saravanan, S., Ali, M.S., Alsaedi, A., Ahmad, B.: Finite-time passivity for neutral-type neural networks with time-varying delays—via auxiliary function-based integral inequalities. Nonlinear Anal., Model. Control 25(2), 206–224 (2020)


  43. Yang, Q., Ren, Q., Xie, X.: New delay dependent stability criteria for recurrent neural networks with interval time-varying delay. ISA Trans. 53, 994–999 (2014)


  44. Lee, W.I., Lee, S.Y., Park, P.: Improved stability criteria for recurrent neural networks with interval time-varying delays via new Lyapunov functionals. Neurocomputing 155, 128–134 (2015)


  45. Saravanakumar, R., Ali, M.S., Ahn, C.K., Karimi, H.R., Shi, P.: Stability of Markovian jump generalized neural networks with interval time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 28, 1840–1850 (2017)


  46. Niamsup, P., Botmart, T., Weera, W.: Modified function projective synchronization of complex dynamical networks with mixed time-varying and asymmetric coupling delays via new hybrid pinning adaptive control. Adv. Differ. Equ. 2017(1), 1 (2017)



Acknowledgements

The authors thank the reviewers for their valuable comments and suggestions, which led to the improvement of the content of the paper.

Funding

The first author was supported by the Science Achievement Scholarship of Thailand (SAST). The second author was financially supported by Khon Kaen University. The third author was supported by the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (grant number: B05F630095).

Author information

Authors and Affiliations

Authors

Contributions

All authors contributed significantly and equally to this work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Wajaree Weera.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Luemsai, S., Botmart, T. & Weera, W. Novel extended dissipativity criteria for generalized neural networks with interval discrete and distributed time-varying delays. Adv Differ Equ 2021, 42 (2021). https://doi.org/10.1186/s13662-020-03210-x


Keywords