

On extended dissipativity analysis for neural networks with time-varying delay and general activation functions

Abstract

We investigate the problem of extended dissipativity analysis for a class of neural networks with time-varying delay. The extended dissipativity analysis generalizes several previously known results, as it contains the \(H_{\infty}\), passivity, dissipativity, and \(\ell_{2}-\ell _{\infty}\) performance in a unified framework. By introducing a suitable augmented Lyapunov-Krasovskii functional, exploiting sufficient information on the neuron activation functions, and using a new bound inequality, we give sufficient conditions in terms of linear matrix inequalities (LMIs) that guarantee the stability and extended dissipativity of delayed neural networks. Numerical examples are given to illustrate the efficiency and reduced conservatism of the proposed methods.

1 Introduction

In recent years, neural networks have received considerable attention due to their extensive applications in a variety of areas, such as signal processing, image processing, pattern recognition, associative memory, and optimization problems [1, 2]. Since theoretical analysis is usually a prerequisite for guaranteeing success in applications, numerous investigations have been conducted on the dynamical behaviors of delayed neural networks. It is well known that time delay is almost always encountered because neural networks are frequently implemented by various kinds of hardware, such as digital or integrated circuits. In addition, the existence of time delay is often one of the main sources of poor performance, chaos, and instability. As a result, numerous stability criteria for delayed neural networks have been reported in [3–20].

It is worth pointing out that the performance of a neural network, which is usually characterized by an input-output relationship, plays an important role in various scenarios, for example, in the \(H_{\infty}\) control problem [21–23], passivity and passification problems [24, 25], \(\ell_{2}-\ell_{\infty}\) performance analysis [26], and dissipativity analysis [27–29]. Up to now, dissipativity has attracted much attention because it not only unifies the \(H_{\infty }\) and passivity performances [30–35] but also provides a more flexible robust control design in practical engineering, such as chemical process control [36] and power converters [37]. Recently, \((\mathcal {Q},\mathcal{S},\mathcal{R})\)-dissipativity was developed in [38] and [39]; however, the \(\ell_{2}-\ell_{\infty}\) performance is not contained in this notion of dissipativity. In order to overcome this drawback, Zhang et al. [40] proposed a general performance called extended dissipativity, which unifies these performances. Further, in [41], the authors discussed the extended dissipativity analysis of continuous-time delay neural networks. In [42], the authors addressed the problem of extended dissipativity for discrete-time delay neural networks. In addition, in [43, 44], the authors studied dissipativity analysis of neural networks with time-varying delay and randomly occurring uncertainties. However, it should be mentioned that the stability criteria of [40, 41] are conservative. There is still room for further improvement because some useful terms are ignored in the Lyapunov-Krasovskii functionals employed in [40, 41]. It is natural to look for an alternative view to reduce the conservatism of stability criteria. This has motivated our research on this issue.

In this paper, we investigate extended dissipativity analysis for neural networks with time-varying delay and general activation functions. The contributions of this paper are as follows. First, a suitable augmented Lyapunov-Krasovskii functional is constructed, and a new bound inequality is utilized to reduce the conservatism of the results. Second, the extended dissipativity generalizes a few previously known results, encompassing the \(H_{\infty}\) performance, \(\ell_{2}-\ell_{\infty}\) performance, passivity, and dissipativity by adjusting the weighting matrices in a new performance index. Third, we pay more attention to the activation functions. Differently from some existing methods [9, 10, 12] and [45, 46], which divide the bound of the neuron activation functions into two subintervals directly, we introduce a parameter δ such that \(\lambda_{i}^{\delta}=\lambda _{i}^{-}+\delta(\lambda_{i}^{+}-\lambda_{i}^{-})\), and we employ cross terms among the states under the conditions \(\lambda_{i}^{-}\leq\frac {f_{i}(a)-f_{i}(b)}{a-b}\leq\lambda_{i}^{\delta}\) and \(\lambda _{i}^{\delta}\leq\frac{f_{i}(a)-f_{i}(b)}{a-b}\leq\lambda_{i}^{+}\). In addition, for the particular case \(b=0\), the conditions \(\lambda _{i}^{-}\leq\frac{f_{i}(a)}{a}\leq\lambda_{i}^{\delta}\) and \(\lambda _{i}^{\delta}\leq\frac{f_{i}(a)}{a}\leq\lambda_{i}^{+}\) are also taken into full consideration. The derived conditions are formulated in terms of linear matrix inequalities (LMIs) that guarantee the stability and extended dissipativity of delayed neural networks. Numerical examples are presented to show the improvement and effectiveness of the results.

In this presentation, we use the following notation. We denote by \(\mathbb{R}^{n}\) the n-dimensional Euclidean space and by \(\mathbb {R}^{m\times n}\) the set of all \(m\times n\) real matrices. The asterisk ∗ denotes the symmetric part of a symmetric matrix, and \(\operatorname{diag}\{\cdots\}\) denotes a diagonal matrix. The notation \(P>0\) (\(P\geq0\)) means that the matrix P is symmetric positive definite (positive semidefinite). By I and 0 we denote the identity and zero matrices of appropriate dimensions, respectively. The superscript T stands for matrix transposition, \(\operatorname{sym}(A)\) is defined as \(A+A^{T}\), and \(\|\cdot\|\) refers to the Euclidean vector norm and its induced matrix norm. For a real matrix N, \(N^{\perp}\) denotes its orthogonal complement with maximum row rank.

2 Preliminaries

Consider the class of neural networks with time-varying delay described by

$$ \left \{ \textstyle\begin{array}{l} \dot{x}(t)= -Cx(t)+Af(x(t))+Bf(x(t-h(t)))+\omega(t), \\ y(t)= Dx(t), \\ x(t)= \phi(t), \quad \forall t\in[-h,0], \end{array}\displaystyle \right . $$
(1)

where \(x(t)=[x_{1}(t),x_{2}(t),\ldots,x_{n}(t)]^{T}\in\mathbb{R}^{n}\), and \(x_{i}(t)\) denotes the state of the ith neuron at time t; \(f(x(t))=[f_{1}(x_{1}(t)),f_{2}(x_{2}(t)),\ldots ,f_{n}(x_{n}(t))]^{T}\in\mathbb{R}^{n}\), and \(f_{i}(x_{i}(t))\) is the activation function of the ith neuron at time t; \(y(t)\) is the output of the neural network; \(C=\operatorname{diag}(c_{1},c_{2},\ldots,c_{n})\) describes the rate with which each neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs; A, B, and D denote constant matrices of appropriate dimensions; \(\phi(t)\) is the initial condition; \(h(t)\) is the time-varying delay satisfying \(0\leq h(t)\leq h\) and \(\dot {h}(t)\leq\mu<1\); and \(\omega(t)\in\mathbb{R}^{n}\) is the disturbance input belonging to \(\ell_{2}[0,\infty)\).

Assumption 2.1

As assumed in many references, such as [45], the activation function \(f_{i}(\cdot)\) of neural network (1) is continuous and bounded, and there exist constants \(\lambda^{-}_{i}\) and \(\lambda^{+}_{i}\) such that

$$ \lambda^{-}_{i}\leq\frac{f_{i}(a)-f_{i}(b)}{a-b}\leq \lambda^{+}_{i}, \qquad f_{i}(0)=0,\quad a, b\in \mathbb{R}, a\neq b, i=1,2,\ldots,n. $$
(2)

The following lemmas, definition, and assumption play a key role in deriving the main results of this paper.

Lemma 2.1

([47])

For a given matrix \(M>0\), the following inequality holds for all continuous differentiable functions \(x:[a,b]\rightarrow\mathbb{R}^{n}\):

$$ \int_{a}^{b}\dot{x}^{T}(s)M\dot{x}(s)\,ds \geq\frac{1}{b-a}\xi _{1}^{T}(t)M\xi_{1}(t)+ \frac{3}{b-a}\xi_{2}^{T}(t)M\xi_{2}(t), $$
(3)

where \(\xi_{1}(t)=x(b)-x(a)\) and \(\xi_{2}(t)=x(b)+x(a)-\frac{2}{b-a}\int ^{b}_{a}x(s)\,ds\).
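The gap between the two sides of (3) can be inspected numerically on a concrete trajectory. The following Python sketch (added here for illustration; the test trajectory and the random positive definite matrix M are arbitrary choices, not taken from the paper) evaluates both sides of (3) and confirms that the left-hand side dominates the right-hand side.

```python
import numpy as np

# Numerical sanity check of the Wirtinger-based inequality (3) of Lemma 2.1.
rng = np.random.default_rng(0)
n, a, b = 2, 0.0, 1.0
s = np.linspace(a, b, 2001)

# An arbitrary smooth test trajectory x(s) and its exact derivative.
x = np.vstack([np.sin(2 * s) + s**2, np.cos(s) - s]).T
dx = np.vstack([2 * np.cos(2 * s) + 2 * s, -np.sin(s) - 1]).T

T = rng.standard_normal((n, n))
M = T @ T.T + n * np.eye(n)                               # M > 0

lhs = np.trapz(np.einsum('ti,ij,tj->t', dx, M, dx), s)    # integral of xdot' M xdot

xi1 = x[-1] - x[0]                                        # x(b) - x(a)
xbar = np.trapz(x, s, axis=0) / (b - a)                   # (1/(b-a)) * integral of x(s)
xi2 = x[-1] + x[0] - 2 * xbar
rhs = (xi1 @ M @ xi1 + 3 * xi2 @ M @ xi2) / (b - a)

print(lhs >= rhs, lhs, rhs)                               # expect True
```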

Lemma 2.2

([14])

For any constant matrices \(N\in\mathbb{R}^{n_{a}\times n_{b}}\), \(X\in \mathbb{R}^{n_{a}\times n_{a}}\), \(Y\in\mathbb{R}^{n_{a}\times n_{b}}\), and \(R\in\mathbb{R}^{n_{b}\times n_{b}}\), with \(\bigl [ {\scriptsize\begin{matrix}{} X & Y \cr * & R\end{matrix}} \bigr ]\geq0\), the following inequality holds for any \(a\in\mathbb{R}^{n_{a}}\) and \(b\in\mathbb{R}^{n_{b}}\):

$$ -2a^{T}Nb\leq \left [ \textstyle\begin{array}{@{}c@{}} a \\ b \end{array}\displaystyle \right ]^{T} \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} X & Y-N \\ {*} & R \end{array}\displaystyle \right ] \left [ \textstyle\begin{array}{@{}c@{}} a \\ b \end{array}\displaystyle \right ]. $$
(4)

Applying this lemma yields the following new integral inequality.

Lemma 2.3

For any constant matrices \(R\in\mathbb{R}^{n\times n}\), \(X\in\mathbb {R}^{2n\times2n}\), and \(Y\in\mathbb{R}^{2n\times n}\) with \(\bigl [ {\scriptsize\begin{matrix}{} X & Y \cr * & R\end{matrix}} \bigr ]\geq0\) and scalars \(b>a>0\) such that the following inequality is well defined, we have:

$$ - \int_{a}^{b} \int_{s}^{b}\dot{x}^{T}(u)R\dot{x}(u)\, du\, ds\leq\varpi^{T}(t) \biggl[(b-a)\operatorname{sym}\bigl\{ Y [\textstyle\begin{array}{@{}c@{\quad}c@{}}I& -I\end{array}\displaystyle ]\bigr\} +\frac{(b-a)^{2}}{2}X \biggr]\varpi(t), $$
(5)

where \(\varpi(t)=[x^{T}(b)\ \int^{b}_{a}\frac{x^{T}(s)}{b-a}\,ds]^{T}\).

Proof

It is easy to see that

$$ (b-a)x(b)- \int^{b}_{a}x(s)\,ds- \int^{b}_{a} \int_{s}^{b}\dot{x}(u)\,du\,ds=0. $$

Therefore, the following equation holds for any \(N_{1}, N_{2}\in \mathbb{R}^{n\times n}\):

$$\begin{aligned} 0&=2 \biggl[x^{T}(b)N_{1}^{T}+ \int^{b}_{a}\frac {x^{T}(s)}{b-a}\,dsN_{2}^{T} \biggr] \biggl[x(b)- \int^{b}_{a}\frac{x(s)}{b-a}\,ds- \frac{1}{b-a} \int^{b}_{a} \int _{s}^{b}\dot{x}(u)\,du\,ds \biggr] \\ &=2\varpi^{T}(t)N^{T}[\textstyle\begin{array}{@{}c@{\quad}c@{}}I& -I\end{array}\displaystyle ]\varpi(t)- \frac{2}{b-a} \int^{b}_{a} \int _{s}^{b}\varpi^{T}(t)N^{T} \dot{x}(u)\,du\,ds, \end{aligned}$$

where \(N=[N_{1}\ N_{2}]\). Applying Lemma 2.2 yields

$$\begin{aligned}& -\frac{2}{b-a} \int^{b}_{a} \int_{s}^{b}\varpi^{T}(t)N^{T} \dot {x}(u)\,du\,ds \\& \quad \leq \frac{b-a}{2}\varpi^{T}(t)X\varpi(t)+2 \varpi^{T}(t) \bigl(Y-N^{T}\bigr)[\textstyle\begin{array}{@{}c@{\quad}c@{}}I& -I\end{array}\displaystyle ]\varpi(t) \\& \qquad {}+\frac{1}{b-a} \int^{b}_{a} \int_{s}^{b}\dot{x}^{T}(u)R\dot{x}(u)\,du\,ds. \end{aligned}$$

To sum up, we have

$$ -\frac{1}{b-a} \int^{b}_{a} \int_{s}^{b}\dot{x}^{T}(u)R\dot{x}(u)\,du\,ds\leq \frac{b-a}{2}\varpi^{T}(t)X\varpi(t)+2\varpi^{T}(t)Y[\textstyle\begin{array}{@{}c@{\quad}c@{}}I& -I\end{array}\displaystyle ] \varpi(t). $$

After a simple rearrangement, we obtain (5). This completes the proof. □

Remark 2.1

Inequality (5) is a new integral inequality. In this paper, it plays a key role in the derivation of delay-dependent stability criteria. If we let \(Y=\frac{2}{b-a}[-R\ R]^{T}\) and \(X=YR^{-1}Y^{T}\), then (5) reduces to \(-\int_{a}^{b}\int_{s}^{b}\dot{x}^{T}(u)R\dot{x}(u)\,du\,ds \leq-\frac{2}{(b-a)^{2}}(\int_{a}^{b}\int_{s}^{b}\dot {x}(u)\,du\,ds)^{T} R(\int_{a}^{b}\int_{s}^{b}\dot{x}(u)\,du\,ds) \), which means that (5) provides extra freedom in deriving stability criteria and makes it possible to find a tighter bound.
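For completeness, this reduction can be verified directly (a short check added here, in the notation of Lemma 2.3): for this choice of Y and X, the Schur complement gives \(\bigl [ {\scriptsize\begin{matrix}{} X & Y \cr * & R\end{matrix}} \bigr ]\geq0\), and

$$ (b-a)\operatorname{sym}\bigl\{ Y[\textstyle\begin{array}{@{}c@{\quad}c@{}}I& -I\end{array}\displaystyle ]\bigr\} +\frac{(b-a)^{2}}{2}X =4\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} -R & R \\ R & -R \end{array}\displaystyle \right ] +2\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} R & -R \\ -R & R \end{array}\displaystyle \right ] =-2\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} R & -R \\ -R & R \end{array}\displaystyle \right ], $$

so the right-hand side of (5) equals \(-2(x(b)-\int^{b}_{a}\frac{x(s)}{b-a}\,ds)^{T}R(x(b)-\int^{b}_{a}\frac{x(s)}{b-a}\,ds)\), which coincides with the double Jensen bound above because \(\int^{b}_{a}\int_{s}^{b}\dot{x}(u)\,du\,ds=(b-a)x(b)-\int^{b}_{a}x(s)\,ds\).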

Lemma 2.4

([22])

For any vectors \(x_{1}\), \(x_{2}\), constant matrices \(Q_{i}\), \(i=1,\ldots,4\), and S, and real scalars \(\alpha\geq0\), \(\beta\geq0\) satisfying \(\alpha+\beta=1\), the following inequality holds:

$$ -\frac{1}{\alpha}x_{1}^{T}Q_{1}x_{1}- \frac{1}{\beta }x_{2}^{T}Q_{2}x_{2}- \frac{\beta}{\alpha}x_{1}^{T}Q_{3}x_{1} - \frac{\alpha}{\beta}x_{2}^{T}Q_{4}x_{2} \leq-\left [ \textstyle\begin{array}{@{}c@{}} x_{1} \\ x_{2} \end{array}\displaystyle \right ]^{T} \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} Q_{1} & S \\ {*} & Q_{2} \end{array}\displaystyle \right ] \left [ \textstyle\begin{array}{@{}c@{}} x_{1} \\ x_{2} \end{array}\displaystyle \right ] $$

subject to

$$ 0\leq \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} Q_{1}+Q_{3} & S \\ * & Q_{2}+Q_{4} \end{array}\displaystyle \right ]. $$

Lemma 2.5

([48])

Let \(\zeta\in\mathbb{R} ^{n}\), \(\Phi=\Phi^{T}\in \mathbb{R} ^{n \times n}\), and \(B\in\mathbb{R} ^{m \times n}\) with \(\operatorname{rank}(B)< n\). Then, the following two statements are equivalent:

  (a) \(\zeta^{T}\Phi\zeta<0\), \(B\zeta=0\), \(\zeta\neq0\);

  (b) \((B^{\perp})^{T}\Phi B^{\perp}<0\), where \(B^{\perp}\) is a right orthogonal complement of B.

Definition 2.1

([40])

For given matrices \(\Psi_{1}\), \(\Psi_{2}\), \(\Psi_{3}\), and \(\Psi_{4}\) satisfying Assumption 2.2, system (1) is said to be extended dissipative if for any \(t_{f}\geq0\) and all \(\omega(t)\in\ell _{2}[0,\infty)\), under the zero initial state, the following inequality holds:

$$ \int^{t_{f}}_{0}J(t)\,dt\geq \sup_{0\leq t\leq t_{f}}y^{T}(t) \Psi_{4}y(t), $$
(6)

where \(J(t)=y^{T}(t)\Psi_{1}y(t)+2y^{T}(t)\Psi_{2}\omega(t)+\omega ^{T}(t)\Psi_{3}\omega(t)\).

Assumption 2.2

For given real symmetric matrices \(\Psi_{1}\), \(\Psi_{2}\), \(\Psi_{3}\), and \(\Psi_{4}\) the following conditions are satisfied:

  (1) \(\Psi_{1}\leq0\), \(\Psi_{3}>0\), and \(\Psi_{4}\geq0\);

  (2) \((\|\Psi_{1}\|+\|\Psi_{2}\|)\cdot\|\Psi_{4}\|=0\).

Remark 2.2

The matrices \(\Psi_{1}\), \(\Psi_{2}\), \(\Psi_{3}\), and \(\Psi_{4}\) appearing in inequality (6) make the performance index quite general, which can increase the complexity of the analysis. Indeed, (6) is an extended index that yields a range of performance criteria by suitably setting the weighting matrices \(\Psi _{i}\) (\(i=1,2,3,4\)). More specifically, (6) becomes the \(\ell_{2}-\ell _{\infty}\) performance when \(\Psi_{1}=\Psi_{2}=0\), \(\Psi_{3}=\gamma ^{2}I\), and \(\Psi_{4}=I\); (6) denotes the \(H_{\infty}\) performance when \(\Psi_{1}=-I\), \(\Psi_{2}=\Psi_{4}=0\), and \(\Psi _{3}=\gamma^{2}I\); (6) represents the passivity performance when \(\Psi _{1}=\Psi_{4}=0\), \(\Psi_{2}=I\), and \(\Psi_{3}=\gamma I\); (6) reduces to the \((\mathcal{Q},\mathcal{S},\mathcal{R})\)-dissipativity performance when \(\Psi_{1}=\mathcal{Q}\), \(\Psi_{2}=\mathcal{S}\), \(\Psi _{3}=\mathcal{R}-\alpha I\), and \(\Psi_{4}=0\).
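As a practical note (an illustrative sketch added here, not part of the original development), the four special cases above amount to nothing more than a choice of the four weighting matrices. The Python helper below encodes the settings listed in this remark; the function name, the argument ny (output dimension), and the default values are assumptions made for the illustration.

```python
import numpy as np

def performance_weights(kind, ny, gamma=1.0, alpha=0.0, Q=None, S=None, R=None):
    """Weighting matrices (Psi1, Psi2, Psi3, Psi4) for the special cases of index (6)."""
    I, Z = np.eye(ny), np.zeros((ny, ny))
    if kind == "l2-linf":     # Psi1 = Psi2 = 0, Psi3 = gamma^2 I, Psi4 = I
        return Z, Z, gamma**2 * I, I
    if kind == "hinf":        # Psi1 = -I, Psi2 = Psi4 = 0, Psi3 = gamma^2 I
        return -I, Z, gamma**2 * I, Z
    if kind == "passivity":   # Psi1 = Psi4 = 0, Psi2 = I, Psi3 = gamma I
        return Z, I, gamma * I, Z
    if kind == "qsr":         # Psi1 = Q, Psi2 = S, Psi3 = R - alpha I, Psi4 = 0
        return Q, S, R - alpha * np.eye(ny), Z
    raise ValueError(f"unknown performance index: {kind}")

# Example: the H-infinity setting with a two-dimensional output and gamma = 0.5.
Psi1, Psi2, Psi3, Psi4 = performance_weights("hinf", ny=2, gamma=0.5)
```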

3 Main results

In this section, new stability criteria for system (1) are derived by using the Lyapunov method and LMI framework. For the sake of simplicity of matrix and vector representation, \(e_{i}\in\mathbb{R}^{9n\times n}\) are defined as block entry matrices, for example, \(e^{T}_{4}=[0\ 0 \ 0 \ I \ 0 \ 0 \ 0 \ 0 \ 0]\). The other notations are the following:

$$\begin{aligned}& \Gamma=[\textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}}-C& 0 & 0 & A & B & 0 & 0 & 0 & -I\end{array}\displaystyle ], \\& \varpi_{1}(t)=\left[\textstyle\begin{array}{@{}c@{\quad}c@{}}x^{T}(t-h(t))& \int^{t-h(t)}_{t-h}\frac {x^{T}(s)}{h-h(t)}\,ds\end{array}\displaystyle \right]^{T}, \qquad \varpi_{2}(t)=\left[\textstyle\begin{array}{@{}c@{\quad}c@{}} x^{T}(t) & \int^{t}_{t-h(t)}\frac{x^{T}(s)}{h(t)}\,ds\end{array}\displaystyle \right]^{T}, \\& \varpi_{3}(t)=\left[\textstyle\begin{array}{@{}c@{\quad}c@{}}x^{T}(t-h(t)) & \int^{t}_{t-h(t)}\frac {x^{T}(s)}{h(t)}\,ds\end{array}\displaystyle \right]^{T}, \qquad \varpi_{4}(t)=\left[\textstyle\begin{array}{@{}c@{\quad}c@{}}x^{T}(t-h)& \int^{t-h(t)}_{t-h}\frac{x^{T}(s)}{h-h(t)}\,ds\end{array}\displaystyle \right]^{T}, \\& \xi^{T}(t)=\bigl[x^{T}(t),x^{T}\bigl(t-h(t) \bigr),x^{T}(t-h),f^{T}\bigl(x(t)\bigr), f^{T} \bigl(x\bigl(t-h(t)\bigr)\bigr), \\& \hphantom{\xi^{T}(t)={}}f^{T}\bigl(x(t-h)\bigr), \eta_{1}^{T}(t), \eta_{2}^{T}(t), \dot{x}^{T}(t)\bigr], \\& \eta_{1}(t)= \int^{t}_{t-h(t)}\frac{x(s)}{h(t)}\,ds, \qquad \eta_{2}(t)= \int ^{t-h(t)}_{t-h}\frac{x(s)}{h-h(t)}\,ds, \\& \lambda^{\delta}=\operatorname{diag}\bigl\{ \lambda_{1}^{\delta}, \ldots,\lambda_{n}^{\delta }\bigr\} =\lambda_{m}+\delta( \lambda_{M}-\lambda_{m}), \\& \lambda_{m}= \operatorname{diag}\bigl\{ \lambda_{1}^{-},\ldots, \lambda_{n}^{-}\bigr\} , \qquad \lambda_{M}= \operatorname{diag}\bigl\{ \lambda _{1}^{+},\ldots, \lambda_{n}^{+}\bigr\} , \\& \Xi_{[h(t)]}=-h(t)e_{7}U_{2}e^{T}_{7}- \bigl(h-h(t)\bigr)e_{8}U_{2}e^{T}_{8} \\& \hphantom{\Xi_{[h(t)]}={}}{}+[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2}& e_{8}\end{array}\displaystyle ] \bigl(h-h(t)\bigr)\operatorname{sym}\bigl\{ Y_{1}[\textstyle\begin{array}{@{}c@{\quad}c@{}}I& -I\end{array}\displaystyle ]\bigr\} [\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2} & e_{8}\end{array}\displaystyle ]^{T} \\& \hphantom{\Xi_{[h(t)]}={}}{}+[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{1} & e_{7}\end{array}\displaystyle ]h(t)\operatorname{sym}\bigl\{ Y_{2}[\textstyle\begin{array}{@{}c@{\quad}c@{}}I& -I\end{array}\displaystyle ] \bigr\} [\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{1} & e_{7}\end{array}\displaystyle ]^{T} \\& \hphantom{\Xi_{[h(t)]}={}}{}+[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2} & e_{7}\end{array}\displaystyle ]h(t)\operatorname{sym}\bigl\{ Y_{3}[\textstyle\begin{array}{@{}c@{\quad}c@{}}-I& I\end{array}\displaystyle ]\bigr\} [\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2}& e_{7}\end{array}\displaystyle ]^{T} \\& \hphantom{\Xi_{[h(t)]}={}}{}+[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{3} & e_{8}\end{array}\displaystyle ]\bigl(h-h(t)\bigr)\operatorname{sym}\bigl\{ Y_{4}[\textstyle\begin{array}{@{}c@{\quad}c@{}}-I& I\end{array}\displaystyle ]\bigr\} [\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{3} & e_{8}\end{array}\displaystyle ]^{T}, \\& \Phi_{a}=\Phi_{1}+\Phi_{2}+\Phi_{3}, \\& \Phi_{b}=\Phi_{1}+\Phi_{2}+ \Phi_{3}^{*}, \\& \Phi_{1}=\operatorname{sym}\bigl(e_{1}Pe_{9}^{T} \bigr)+\operatorname{sym}\bigl((e_{4}-e_{1}\lambda _{m})K_{1}e_{9}^{T}\bigr)+ \operatorname{sym}\bigl((e_{1}\lambda_{M}-e_{4})K_{2}e_{9}^{T} \bigr) \\& \hphantom{\Phi_{1}={}}{}+[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{1} & e_{4}\end{array}\displaystyle ](Q_{1}+Q_{2})[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{1}& e_{4}\end{array}\displaystyle ]^{T}-(1- \mu)[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2}& e_{5}\end{array}\displaystyle 
]Q_{1}[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2} & e_{5}\end{array}\displaystyle ]^{T} \\& \hphantom{\Phi_{1}={}}{}-[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{3} & e_{6}\end{array}\displaystyle ]Q_{2}[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{3} & e_{6}\end{array}\displaystyle ]^{T}, \\& \Phi_{2}=e_{9}\biggl(h^{2}U_{1}+ \frac {h^{2}}{2}(R_{1}+R_{2})\biggr)e^{T}_{9}+he_{1}U_{2}e^{T}_{1} \\& \hphantom{\Phi_{2}={}}{}-[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{1}-e_{2} & e_{2}-e_{3}\end{array}\displaystyle ] \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} U_{1} & S_{1} \\ * & U_{1} \end{array}\displaystyle \right ] [\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{1}-e_{2}& e_{2}-e_{3}\end{array}\displaystyle ]^{T} \\& \hphantom{\Phi_{2}={}}{}-[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{1}+e_{2}-2e_{7} & e_{2}+e_{3}-2e_{8}\end{array}\displaystyle ]\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 3U_{1} & S_{2} \\ * & 3U_{1} \end{array}\displaystyle \right ] [\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{1}+e_{2}-2e_{7} & e_{2}+e_{3}-2e_{8}\end{array}\displaystyle ]^{T} \\& \hphantom{\Phi_{2}={}}{}+\operatorname{sym}\bigl((e_{1}F_{1}+e_{9}F_{2}) \Gamma\bigr), \\& \Phi_{3}=-2\bigl[e_{4}-e_{5}-(e_{1}-e_{2}) \lambda _{m}\bigr]H_{1}\bigl[e_{4}-e_{5}-(e_{1}-e_{2}) \lambda^{\delta }\bigr]^{T}-2\bigl[e_{5}-e_{6}-(e_{2}-e_{3}) \lambda_{m}\bigr] \\& \hphantom{\Phi_{3}={}}{}\times H_{2}\bigl[e_{5}-e_{6}-(e_{2}-e_{3}) \lambda^{\delta}\bigr]^{T}-\operatorname{sym}\bigl(e_{1} \Pi _{1}\bigl(\lambda_{m}\lambda^{\delta} \bigr)e^{T}_{1}\bigr)+\operatorname{sym}\bigl(e_{1} \Pi_{1}\bigl(\lambda _{m}+\lambda^{\delta} \bigr)e^{T}_{4}\bigr) \\& \hphantom{\Phi_{3}={}}{}-\operatorname{sym}\bigl(e_{4}\Pi_{1}e^{T}_{4} \bigr)-\operatorname{sym}\bigl(e_{2}\Pi_{2}\bigl( \lambda_{m}\lambda^{\delta }\bigr)e^{T}_{2} \bigr)+\operatorname{sym}\bigl(e_{2}\Pi_{2}\bigl( \lambda_{m}+\lambda^{\delta}\bigr)e^{T}_{5} \bigr) \\& \hphantom{\Phi_{3}={}}{}-\operatorname{sym}\bigl(e_{5}\Pi_{2}e^{T}_{5} \bigr)-\operatorname{sym}\bigl(e_{3}\Pi_{3}\bigl( \lambda_{m}\lambda^{\delta }\bigr)e^{T}_{3} \bigr) \\& \hphantom{\Phi_{3}={}}{}+\operatorname{sym}\bigl(e_{3}\Pi_{3}\bigl( \lambda_{m}+\lambda^{\delta }\bigr)e^{T}_{6} \bigr)-\operatorname{sym}\bigl(e_{6}\Pi_{3}e^{T}_{6} \bigr), \\& \Phi^{*}_{3}=-2\bigl[e_{4}-e_{5}-(e_{1}-e_{2}) \lambda^{\delta }\bigr]H_{3}\bigl[e_{4}-e_{5}-(e_{1}-e_{2}) \lambda _{M}\bigr]^{T}-2\bigl[e_{5}-e_{6}-(e_{2}-e_{3}) \lambda^{\delta}\bigr] \\& \hphantom{\Phi^{*}_{3}={}}{}\times H_{4}\bigl[e_{5}-e_{6}-(e_{2}-e_{3}) \lambda_{M}\bigr]^{T}-\operatorname{sym}\bigl(e_{1} \Pi _{4}\bigl(\lambda^{\delta}\lambda_{M} \bigr)e^{T}_{1}\bigr)+\operatorname{sym}\bigl(e_{1} \Pi_{4}\bigl(\lambda ^{\delta}+\lambda_{M} \bigr)e^{T}_{4}\bigr) \\& \hphantom{\Phi^{*}_{3}={}}{}-\operatorname{sym}\bigl(e_{4}\Pi_{4}e^{T}_{4} \bigr)-\operatorname{sym}\bigl(e_{2}\Pi_{5}\bigl( \lambda^{\delta}\lambda _{M}\bigr)e^{T}_{2} \bigr)+\operatorname{sym}\bigl(e_{2}\Pi_{5}\bigl( \lambda^{\delta}+\lambda _{M}\bigr)e^{T}_{5} \bigr) \\& \hphantom{\Phi^{*}_{3}={}}{}-\operatorname{sym}\bigl(e_{5}\Pi_{5}e^{T}_{5} \bigr)-\operatorname{sym}\bigl(e_{3}\Pi_{6}\bigl( \lambda^{\delta}\lambda _{M}\bigr)e^{T}_{3} \bigr) \\& \hphantom{\Phi^{*}_{3}={}}{}+\operatorname{sym}\bigl(e_{3}\Pi_{6}\bigl( \lambda^{\delta}+\lambda _{M}\bigr)e^{T}_{6} \bigr)-\operatorname{sym}\bigl(e_{6}\Pi_{6}e^{T}_{6} \bigr), \\& \Sigma_{a}=\frac{h^{2}}{2}[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{1}& e_{7}\end{array}\displaystyle ]X_{2}[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{1} & 
e_{7}\end{array}\displaystyle ]^{T}+ \frac{h^{2}}{2}[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2} & e_{7}\end{array}\displaystyle ]X_{3}[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2}& e_{7}\end{array}\displaystyle ]^{T}, \\& \Sigma_{b}=\frac{h^{2}}{2}[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2}& e_{8}\end{array}\displaystyle ]X_{1}[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2} & e_{8}\end{array}\displaystyle ]^{T}+ \frac{h^{2}}{2}[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{3} & e_{8}\end{array}\displaystyle ]X_{4}[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{3} & e_{8}\end{array}\displaystyle ]^{T}. \end{aligned}$$

3.1 Stability analysis

The following theorem is given for system (1) with \(\omega(t)=0\) as the first result.

Theorem 3.1

For given scalars \(0<\delta\leq1\), \(h>0\), and μ and diagonal matrices \(\lambda_{m}=\operatorname{diag}\{\lambda_{1}^{-},\ldots,\lambda_{n}^{-}\}\) and \(\lambda_{M}=\operatorname{diag}\{\lambda_{1}^{+},\ldots,\lambda_{n}^{+}\}\), system (1) with \(\omega(t)=0\) is asymptotically stable if there exist positive definite matrices P, \(Q_{i}\), \(U_{i}\), \(R_{i}\) (\(i=1,2\)), positive diagonal matrices \(K_{i}=\operatorname{diag}(k_{i1},\ldots,k_{in})\) (\(i=1,2\)), \(H_{i}=\operatorname{diag}(h_{i1},\ldots,h_{in})\) (\(i=1,\ldots,4\)), and \(\Pi _{i}=\operatorname{diag}(\pi_{i1},\ldots,\pi_{in})\) (\(i=1,\ldots,6\)), and matrices \(Y_{i}\) (\(i=1,\ldots,4\)), \(S_{i}\) (\(i=1,2\)), \(F_{i}\) (\(i=1,2\)), and \(X_{i}\) (\(i=1,\ldots,4\)) of appropriate dimensions such that the following conditions hold:

$$\begin{aligned}& \bigl(\Gamma^{\perp}\bigr)^{T}(\Xi_{[h(t)=0]}+ \Phi_{i}+\Sigma_{j}) \bigl(\Gamma^{\perp }\bigr)< 0 \quad (\forall i,j=a,b), \end{aligned}$$
(7)
$$\begin{aligned}& \bigl(\Gamma^{\perp}\bigr)^{T}(\Xi_{[h(t)=h]}+ \Phi_{i}+\Sigma_{j}) \bigl(\Gamma^{\perp }\bigr)< 0 \quad (\forall i,j=a,b), \end{aligned}$$
(8)
$$\begin{aligned}& \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} U_{1}+R_{1} & S_{1} \\ {*} & U_{1}+R_{2} \end{array}\displaystyle \right ]\geq0, \qquad \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 3(U_{1}+R_{1}) & S_{2} \\ {*} & 3(U_{1}+R_{2}) \end{array}\displaystyle \right ]\geq0. \end{aligned}$$
(9)

Proof

Let us consider the Lyapunov-Krasovskii functional candidate

$$ V(t,x_{t})=\sum_{i=1}^{5}V_{i}(t,x_{t}), $$
(10)

where

$$\begin{aligned}& V_{1}(t,x_{t})= x^{T}(t)Px(t) +2\sum ^{n}_{i=1} \int^{x_{i}(t)}_{0}\bigl[k_{1i} \bigl(f_{i}(s)-\lambda _{i}^{-}s \bigr)+k_{2i}\bigl(\lambda_{i}^{+}s-f_{i}(s) \bigr)\bigr]\,ds, \\& V_{2}(t,x_{t})= \int^{t}_{t-h(t)}\varepsilon^{T}(s)Q_{1} \varepsilon(s)\,ds+ \int^{t}_{t-h}\varepsilon^{T}(s)Q_{2} \varepsilon(s)\,ds, \\& V_{3}(t,x_{t})=h \int^{0}_{-h} \int^{t}_{t+\theta}\dot{x}^{T}(s)U_{1} \dot {x}(s)\,ds\,d\theta+ \int^{0}_{-h} \int^{t}_{t+\theta}x^{T}(s)U_{2}x(s) \,ds\,d\theta, \\& V_{4}(t,x_{t})= \int^{0}_{-h} \int^{0}_{\theta} \int^{t}_{t+\vartheta}\dot {x}^{T}(s)R_{1} \dot{x}(s)\, ds\, d\vartheta\, d\theta, \\& V_{5}(t,x_{t})= \int^{0}_{-h} \int^{\theta}_{-h} \int^{t}_{t+\vartheta}\dot {x}^{T}(s)R_{2} \dot{x}(s)\, ds\, d\vartheta\, d\theta \end{aligned}$$

and

$$ \varepsilon(t)=\left[\textstyle\begin{array}{@{}c@{\quad}c@{}}x^{T}(t) & f^{T}(x(t))\end{array}\displaystyle \right]^{T}. $$

Then, calculating the time derivative of \(V(t,x_{t})\) along the trajectory of system (1) yields

$$\begin{aligned}& \dot{V}_{1}(t,x_{t})= 2x^{T}(t)P\dot{x}(t) +2 \sum^{n}_{i=1}\bigl[k_{1i} \bigl(f_{i}\bigl(x_{i}(t)\bigr)-\lambda _{i}^{-}x_{i}(t) \bigr)+k_{2i}\bigl(\lambda_{i}^{+}x_{i}(t)-f_{i} \bigl(x_{i}(t)\bigr)\bigr)\bigr] \dot{x}_{i}(t) \\& \hphantom{\dot{V}_{1}(t,x_{t})}= 2x^{T}(t)P\dot{x}(t)+2\bigl[f\bigl(x(t)\bigr)- \lambda_{m}x(t)\bigr]^{T}K_{1}\dot {x}(t)+2\bigl[ \lambda_{M}x(t)-f\bigl(x(t)\bigr)\bigr]^{T}K_{2} \dot{x}(t) \\& \hphantom{\dot{V}_{1}(t,x_{t})}= \xi^{T}(t) \bigl(\operatorname{sym} \bigl(e_{1}Pe_{9}^{T}+(e_{4}-e_{1} \lambda _{m})K_{1}e_{9}^{T}+(e_{1} \lambda_{M}-e_{4})K_{2}e_{9}^{T} \bigr)\bigr)\xi(t), \end{aligned}$$
(11)
$$\begin{aligned}& \dot{V}_{2}(t,x_{t})\leq \varepsilon^{T}(t) (Q_{1}+Q_{2})\varepsilon (t)-(1-\mu)\varepsilon^{T} \bigl(t-h(t)\bigr)Q_{1}\varepsilon\bigl(t-h(t)\bigr) - \varepsilon^{T}(t-h)Q_{2}\varepsilon(t-h) \\& \hphantom{\dot{V}_{2}(t,x_{t})}= \xi^{T}(t) \bigl([\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{1} & e_{4}\end{array}\displaystyle ](Q_{1}+Q_{2})[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{1} & e_{4}\end{array}\displaystyle ]^{T}-(1- \mu )[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2} & e_{5}\end{array}\displaystyle ]Q_{1}[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2}& e_{5}\end{array}\displaystyle ]^{T} \\& \hphantom{\dot{V}_{2}(t,x_{t})\leq{}}{} -[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{3} & e_{6}\end{array}\displaystyle ]Q_{2}[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{3} & e_{6}\end{array}\displaystyle ]^{T} \bigr)\xi(t), \end{aligned}$$
(12)
$$\begin{aligned}& \dot{V}_{3}(t,x_{t})= h^{2} \dot{x}^{T}(t)U_{1}\dot{x}(t)-h \int ^{t}_{t-h}\dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds +hx^{T}(t)U_{2}x(t) \\& \hphantom{\dot{V}_{3}(t,x_{t})={}}{}- \int^{t}_{t-h}x^{T}(s)U_{2}x(s) \,ds. \end{aligned}$$
(13)

By using Lemma 2.1 we can obtain

$$\begin{aligned}& -h \int^{t}_{t-h}\dot{x} ^{T}(s)U_{1} \dot{x}(s)\,ds \\& \quad = -h \int^{t}_{t-h(t)}\dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds-h \int ^{t-h(t)}_{t-h}\dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds \\& \quad \leq -\frac{h}{h(t)}\bigl(x(t)-x\bigl(t-h(t)\bigr)\bigr)^{T}U_{1} \bigl(x(t)-x\bigl(t-h(t)\bigr)\bigr) \\& \qquad {}-\frac {h}{h-h(t)}\bigl(x\bigl(t-h(t) \bigr)-x(t-h)\bigr)^{T} \\& \qquad {}\times U_{1}\bigl(x \bigl(t-h(t)\bigr)-x(t-h)\bigr)-\frac{3h}{h(t)} \bigl(x(t)+x\bigl(t-h(t)\bigr)-2\eta _{1}(t)\bigr)^{T} \\& \qquad {}\times U_{1} \bigl(x(t)+x\bigl(t-h(t)\bigr)-2\eta_{1}(t)\bigr) \\& \qquad {}- \frac{3h}{h-h(t)}\bigl(x\bigl(t-h(t)\bigr)+x(t-h)-2\eta _{2}(t)\bigr)^{T} \\& \qquad {}\times U_{1}\bigl(x\bigl(t-h(t) \bigr)+x(t-h)-2\eta_{2}(t)\bigr). \end{aligned}$$

Using Jensen's inequality to estimate the \(U_{2}\)-dependent integral term in (13) yields

$$\begin{aligned}& - \int^{t}_{t-h}x ^{T}(s)U_{2}x(s) \,ds=- \int ^{t}_{t-h(t)}x^{T}(s)U_{2}x(s) \,ds- \int ^{t-h(t)}_{t-h}x^{T}(s)U_{2}x(s) \,ds \\& \hphantom{\int^{t}_{t-h}x ^{T}(s)U_{2}x(s) \,ds}\leq -\frac{1}{h(t)}\biggl( \int^{t}_{t-h(t)}x(s)\,ds\biggr)^{T}U_{2} \biggl( \int ^{t}_{t-h(t)}x(s)\,ds\biggr) \\& \hphantom{\int^{t}_{t-h}x ^{T}(s)U_{2}x(s) \,ds={}}{}-\frac{1}{h-h(t)} \biggl( \int^{t-h(t)}_{t-h}x(s)\,ds\biggr)^{T} U_{2}\biggl( \int^{t-h(t)}_{t-h}x(s)\,ds\biggr) \\& \hphantom{\int^{t}_{t-h}x ^{T}(s)U_{2}x(s) \,ds}= -h(t)\eta_{1}^{T}(t)U_{2} \eta_{1}(t)-\bigl(h-h(t)\bigr)\eta_{2}^{T}(t)U_{2} \eta_{2}(t), \\& \dot{V}_{4}(t,x_{t})= \frac{h^{2}}{2} \dot{x}^{T}(t)R_{1}\dot{x}(t)- \int _{-h}^{0} \int_{t+\theta}^{t} \dot{x}^{T}(s)R_{1} \dot{x}(s)\,ds\,d\theta \\& \hphantom{\dot{V}_{4}(t,x_{t})}= \frac{h^{2}}{2}\dot{x}^{T}(t)R_{1} \dot{x}(t) -\bigl(h-h(t)\bigr) \int_{t-h(t)}^{t}\dot{x}^{T}(s)R_{1} \dot{x}(s)\,ds\,d\theta \\& \hphantom{\dot{V}_{4}(t,x_{t})={}}{}- \int_{-h}^{-h(t)} \int_{t+\theta}^{t-h(t)} \dot{x}^{T}(s)R_{1} \dot{x}(s)\,ds\,d\theta - \int_{-h(t)}^{0} \int_{t+\theta}^{t} \dot{x}^{T}(s)R_{1} \dot{x}(s)\,ds\,d\theta \\& \hphantom{\dot{V}_{4}(t,x_{t})}\leq \frac{h^{2}}{2}\dot{x}^{T}(t)R_{1} \dot{x}(t)-\biggl(\frac{h-h(t)}{h(t)}\biggr) \bigl[\bigl(x(t)-x\bigl(t-h(t)\bigr) \bigr)^{T}R_{1}\bigl(x(t)-x\bigl(t-h(t)\bigr)\bigr) \\& \hphantom{\dot{V}_{4}(t,x_{t})={}}{}+3\bigl(x(t)+x\bigl(t-h(t)\bigr)-2\eta_{1}(t) \bigr)^{T}R_{1}\bigl(x(t)+x\bigl(t-h(t)\bigr)-2\eta _{1}(t)\bigr)\bigr] \\& \hphantom{\dot{V}_{4}(t,x_{t})={}}{}- \int_{-h}^{-h(t)} \int_{t+\theta}^{t-h(t)} \dot{x}^{T}(s)R_{1} \dot{x}(s)\,ds\,d\theta - \int_{-h(t)}^{0} \int_{t+\theta}^{t} \dot{x}^{T}(s)R_{1} \dot{x}(s)\,ds\,d\theta, \end{aligned}$$
(14)
$$\begin{aligned}& \dot{V}_{5}(t,x_{t})= \frac{h^{2}}{2} \dot{x}^{T}(t)R_{2}\dot{x}(t)- \int _{-h}^{0} \int_{t-h}^{t+\theta} \dot{x}^{T}(s)R_{2} \dot{x}(s)\,ds\,d\theta \\& \hphantom{\dot{V}_{5}(t,x_{t})}= \frac{h^{2}}{2}\dot{x}^{T}(t)R_{2} \dot{x}(t) -h(t) \int_{t-h}^{t-h(t)}\dot{x}^{T}(s)R_{2} \dot{x}(s)\,ds\,d\theta \\& \hphantom{\dot{V}_{5}(t,x_{t})={}}{}- \int_{-h(t)}^{0} \int_{t-h(t)}^{t+\theta} \dot{x}^{T}(s)R_{2} \dot{x}(s)\,ds\,d\theta - \int_{-h}^{-h(t)} \int_{t-h}^{t+\theta} \dot{x}^{T}(s)R_{2} \dot{x}(s)\,ds\,d\theta \\& \hphantom{\dot{V}_{5}(t,x_{t})}\leq \frac{h^{2}}{2}\dot{x}^{T}(t)R_{2} \dot{x}(t)-\biggl(\frac{h(t)}{h-h(t)}\biggr) \bigl[\bigl(x\bigl(t-h(t)\bigr)-x(t-h) \bigr)^{T} \\& \hphantom{\dot{V}_{5}(t,x_{t})={}}{}\times R_{2}\bigl(x\bigl(t-h(t)\bigr)-x(t-h)\bigr) +3\bigl(x\bigl(t-h(t)\bigr)+x(t-h)-2\eta_{2}(t) \bigr)^{T} \\& \hphantom{\dot{V}_{5}(t,x_{t})={}}{}\times R_{2}\bigl(x\bigl(t-h(t)\bigr)+x(t-h)-2\eta _{2}(t)\bigr)\bigr] \\& \hphantom{\dot{V}_{5}(t,x_{t})={}}{}- \int_{-h(t)}^{0} \int_{t-h(t)}^{t+\theta} \dot{x}^{T}(s)R_{2} \dot{x}(s)\,ds\,d\theta - \int_{-h}^{-h(t)} \int_{t-h}^{t+\theta} \dot{x}^{T}(s)R_{2} \dot{x}(s)\,ds\,d\theta. \end{aligned}$$
(15)

On one hand, from Lemma 2.4 it is clear that if there exist matrices \(S_{1}\) and \(S_{2}\) satisfying (9), then the estimation of the \(U_{1}\)-dependent integral term in (13), the \(R_{1}\)-dependent integral term in (14), and the \(R_{2}\)-dependent integral term in (15) can be obtained as follows:

$$\begin{aligned}& -\xi^{T}(t)\biggl\{ \frac{1}{\alpha}(e_{1}-e_{2})U_{1}(e_{1}-e_{2})^{T}+ \frac {1}{\beta}(e_{2}-e_{3})U_{1}(e_{2}-e_{3})^{T} \\& \qquad {}+\frac{\beta}{\alpha}(e_{1}-e_{2})R_{1}(e_{1}-e_{2})^{T}+\frac{\alpha}{\beta}(e_{2}-e_{3})R_{2}(e_{2}-e_{3})^{T} \biggr\} \xi(t) \\& \quad \leq-\xi^{T}(t)\left [ \textstyle\begin{array}{@{}c@{}} e^{T}_{1}-e^{T}_{2} \\ e^{T}_{2}-e^{T}_{3} \end{array}\displaystyle \right ]^{T} \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} U_{1} & S_{1} \\ * & U_{1} \end{array}\displaystyle \right ] \left [ \textstyle\begin{array}{@{}c@{}} e^{T}_{1}-e^{T}_{2} \\ e^{T}_{2}-e^{T}_{3} \end{array}\displaystyle \right ] \xi(t), \end{aligned}$$
(16)
$$\begin{aligned}& -\xi^{T}(t)\biggl\{ \frac{1}{\alpha }(e_{1}+e_{2}-2e_{7})3U_{1}(e_{1}+e_{2}-2e_{7})^{T}+ \frac{1}{\beta }(e_{2}+e_{3}-2e_{8}) 3U_{1}(e_{2}+e_{3}-2e_{8})^{T} \\& \qquad {}+\frac{\beta}{\alpha}(e_{1}+e_{2}-2e_{7})3R_{1}(e_{1}+e_{2}-2e_{7})^{T} +\frac{\alpha}{\beta }(e_{2}+e_{3}-2e_{8})3R_{2}(e_{2}+e_{3}-2e_{8})^{T} \biggr\} \xi(t) \\& \quad \leq-\xi^{T}(t)\left [ \textstyle\begin{array}{@{}c@{}} e^{T}_{1}+e^{T}_{2}-2e^{T}_{7} \\ e^{T}_{2}+e^{T}_{3}-2e^{T}_{8} \end{array}\displaystyle \right ]^{T} \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 3U_{1} & S_{2} \\ {*} & 3U_{1} \end{array}\displaystyle \right ] \left [ \textstyle\begin{array}{@{}c@{}} e^{T}_{1}+e^{T}_{2}-2e^{T}_{7} \\ e^{T}_{2}+e^{T}_{3}-2e^{T}_{8} \end{array}\displaystyle \right ] \xi(t), \end{aligned}$$
(17)

where \(\alpha=\frac{h(t)}{h}\) and \(\beta=\frac{h-h(t)}{h}\).

On the other hand, according to Lemma 2.3, we obtain

$$\begin{aligned}& -\biggl( \int_{-h}^{-h(t)} \int_{t+\theta}^{t-h(t)} \dot{x}^{T}(s)R_{1} \dot{x}(s)\,ds\,d\theta + \int_{-h(t)}^{0} \int_{t+\theta}^{t} \dot{x}^{T}(s)R_{1} \dot{x}(s)\,ds\,d\theta \\& \quad {}+ \int_{-h(t)}^{0} \int _{t-h(t)}^{t+\theta} \dot{x}^{T}(s)R_{2} \dot{x}(s)\,ds\,d\theta+ \int_{-h}^{-h(t)} \int_{t-h}^{t+\theta} \dot{x}^{T}(s)R_{2} \dot{x}(s)\,ds\,d\theta\biggr) \\& \quad \leq\xi^{T}(t)[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2} & e_{8}\end{array}\displaystyle ]\bigl(h-h(t)\bigr)\operatorname{sym} \bigl\{ Y_{1}[\textstyle\begin{array}{@{}c@{\quad}c@{}}I& -I\end{array}\displaystyle ]\bigr\} [\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2} & e_{8}\end{array}\displaystyle ]^{T}\xi(t) \\& \qquad {}+\xi^{T}(t)[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{1}& e_{7}\end{array}\displaystyle ]h(t)\operatorname{sym}\bigl\{ Y_{2}[\textstyle\begin{array}{@{}c@{\quad}c@{}}I& -I\end{array}\displaystyle ] \bigr\} [\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{1} & e_{7}\end{array}\displaystyle ]^{T}\xi(t) \\& \qquad {}+\xi^{T}(t)[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2} & e_{7}\end{array}\displaystyle ]h(t) \operatorname{sym}\bigl\{ Y_{3}[\textstyle\begin{array}{@{}c@{\quad}c@{}}-I& I\end{array}\displaystyle ]\bigr\} [\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{2} & e_{7}\end{array}\displaystyle ]^{T} \xi(t) \\& \qquad {}+\xi^{T}(t)[\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{3}& e_{8}\end{array}\displaystyle ]\bigl(h-h(t)\bigr)\operatorname{sym} \bigl\{ Y_{4}[\textstyle\begin{array}{@{}c@{\quad}c@{}}-I& I\end{array}\displaystyle ]\bigr\} [\textstyle\begin{array}{@{}c@{\quad}c@{}}e_{3} & e_{8}\end{array}\displaystyle ]^{T}\xi(t) \\& \qquad {}+ \frac{(h-h(t))^{2}}{2}\varpi_{1}^{T}(t)X_{1} \varpi_{1}(t)+\frac {h(t)^{2}}{2}\varpi_{2}^{T}(t)X_{2} \varpi_{2}(t) \\& \qquad {}+\frac{h(t)^{2}}{2}\varpi_{3}^{T}(t)X_{3} \varpi_{3}(t)+\frac {(h-h(t))^{2}}{2}\varpi_{4}^{T}(t)X_{4} \varpi_{4}(t). \end{aligned}$$
(18)

Now, letting \(M=\varpi_{1}^{T}(t)X_{1}\varpi_{1}(t)+\varpi _{4}^{T}(t)X_{4}\varpi_{4}(t)\) and \(Z=\varpi_{2}^{T}(t)X_{2}\varpi _{2}(t)+\varpi_{3}^{T}(t)X_{3}\varpi_{3}(t)\), define the scalar function

$$\begin{aligned} g\bigl(h(t)\bigr) =&\frac{(h-h(t))^{2}}{2}\varpi_{1}^{T}(t)X_{1} \varpi_{1}(t)+\frac {h(t)^{2}}{2}\varpi_{2}^{T}(t)X_{2} \varpi_{2}(t) \\ &{}+\frac{h(t)^{2}}{2}\varpi_{3}^{T}(t)X_{3} \varpi_{3}(t)+\frac{(h-h(t))^{2}}{2}\varpi_{4}^{T}(t)X_{4} \varpi_{4}(t) \\ =&\frac{(h-h(t))^{2}}{2}M+\frac{h(t)^{2}}{2}Z. \end{aligned}$$
(19)

Since \(\dot{g}(h(t))=-(h-h(t))M+h(t)Z\), we have \(\dot{g}(h(t))=0\) at \(h(t)=\frac{hM}{M+Z}\), where g attains its minimum value. It is therefore clear that g attains its maximum over \([0,h]\) at one of the endpoints \(h(t)=0\) or \(h(t)=h\).

Case I: when \(M\geq Z\),

$$ g\bigl(h(t)\bigr)=\frac{(h-h(t))^{2}}{2}M+\frac{h(t)^{2}}{2}Z\leq g(0)=\frac{h^{2}}{2}M. $$
(20)

Case II: when \(M< Z\),

$$ g\bigl(h(t)\bigr)=\frac{(h-h(t))^{2}}{2}M+\frac{h(t)^{2}}{2}Z\leq g(h)=\frac{h^{2}}{2}Z. $$
(21)

In addition, for any matrices \(F_{1}\) and \(F_{2}\) with appropriate dimension, the following zero equation holds:

$$ 2\bigl[x^{T}(t)F_{1}+\dot{x}^{T}(t)F_{2} \bigr] \bigl[-\dot {x}(t)-Cx(t)+Af\bigl(x(t)\bigr)+Bf\bigl(x\bigl(t-h(t)\bigr) \bigr)\bigr]=0. $$
(22)

Furthermore, by introducing a parameter δ for the bound of the activation function we will consider two subintervals, \(\lambda_{i}^{-}\leq(f_{i}(a)-f_{i}(b))/(a-b)\leq\lambda_{i}^{\delta}\) and \(\lambda_{i}^{\delta}\leq(f_{i}(a)-f_{i}(b))/(a-b)\leq\lambda_{i}^{+}\), where \(\lambda_{i}^{\delta}=\lambda_{i}^{-}+\delta(\lambda _{i}^{+}-\lambda_{i}^{-})\).
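For instance, with the values used later in Example 1, namely \(\lambda_{m}=\operatorname{diag}\{0,0\}\), \(\lambda_{M}=\operatorname{diag}\{0.4,0.8\}\), and \(\delta=0.8\), the splitting point is

$$ \lambda^{\delta}=\lambda_{m}+\delta(\lambda_{M}-\lambda_{m})=\operatorname{diag}\{0.32,0.64\}, $$

so each sector \([\lambda_{i}^{-},\lambda_{i}^{+}]\) is divided into the two subintervals \([0,0.32]\) and \([0.32,0.4]\) for the first neuron and \([0,0.64]\) and \([0.64,0.8]\) for the second.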

Case I: \(\lambda_{i}^{-}\leq\frac{f_{i}(a)-f_{i}(b)}{a-b}\leq\lambda _{i}^{\delta}\).

For Case I, the following conditions hold:

$$ \lambda_{i}^{-}\leq\frac {f_{i}(x_{i}(t))-f_{i}(x_{i}(t-h(t)))}{x_{i}(t)-x_{i}(t-h(t))}\leq \lambda_{i}^{\delta}, \quad i=1,2,\ldots,n $$

and

$$ \lambda_{i}^{-}\leq\frac {f_{i}(x_{i}(t-h(t)))-f_{i}(x_{i}(t-h))}{x_{i}(t-h(t))-x_{i}(t-h)}\leq \lambda_{i}^{\delta}, \quad i=1,2,\ldots,n. $$

Then, for any appropriate diagonal matrices \(H_{i}=\operatorname{diag}\{h_{i1},\ldots ,h_{in}\}>0\), \(i=1,2\), we have:

$$\begin{aligned}& 0 \leq-2 {\sum }_{i=1}^{n}h_{1i} \bigl[f_{i}\bigl(x_{i}(t)\bigr)-f_{i} \bigl(x_{i}\bigl(t-h(t)\bigr)\bigr)-\lambda _{i}^{-} \bigl(x_{i}(t)-x_{i}\bigl(t-h(t)\bigr)\bigr)\bigr] \\& \hphantom{0\leq{}}{}\times \bigl[f_{i}\bigl(x_{i}(t) \bigr)-f_{i}\bigl(x_{i}\bigl(t-h(t)\bigr)\bigr)- \lambda_{i}^{\delta }\bigl(x_{i}(t)-x_{i} \bigl(t-h(t)\bigr)\bigr)\bigr] \\& \hphantom{0}=-2\xi^{T}(t)\bigl[e_{4}-e_{5}-(e_{1}-e_{2}) \lambda _{m}\bigr]H_{1}\bigl[e_{4}-e_{5}-(e_{1}-e_{2}) \lambda^{\delta}\bigr]^{T}\xi(t), \end{aligned}$$
(23)
$$\begin{aligned}& 0 \leq-2 {\sum }_{i=1}^{n}h_{2i} \bigl[f_{i}\bigl(x_{i}\bigl(t-h(t)\bigr) \bigr)-f_{i}\bigl(x_{i}(t-h)\bigr)-\lambda _{i}^{-}\bigl(x_{i}\bigl(t-h(t) \bigr)-x_{i}(t-h)\bigr)\bigr] \\& \hphantom{0\leq{}}{}\times\bigl[f_{i}\bigl(x_{i}\bigl(t-h(t) \bigr)\bigr)-f_{i}\bigl(x_{i}(t-h)\bigr)- \lambda_{i}^{\delta }\bigl(x_{i}\bigl(t-h(t) \bigr)-x_{i}(t-h)\bigr)\bigr] \\& \hphantom{0}=-2\xi^{T}(t)\bigl[e_{5}-e_{6}-(e_{2}-e_{3}) \lambda _{m}\bigr]H_{2}\bigl[e_{5}-e_{6}-(e_{2}-e_{3}) \lambda^{\delta}\bigr]^{T}\xi(t). \end{aligned}$$
(24)

When \(b=0\), we have \(\lambda_{i}^{-}\leq\frac{f_{i}(a)}{a}\leq\lambda _{i}^{\delta}\) and, for any scalars \(\pi_{1i}>0\), \(i=1,2,\ldots,n\),

$$ 2\sum_{i=1}^{n}\pi_{1i} \bigl(f_{i}\bigl(x_{i}(t)\bigr)-\lambda _{i}^{-}x_{i}(t) \bigr) \bigl(f_{i}\bigl(x_{i}(t)\bigr)-\lambda_{i}^{\delta}x_{i}(t) \bigr)\leq0, $$

which is equivalent to

$$\begin{aligned}& 2\varepsilon^{T}(t)\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} \Pi_{1}\lambda_{m}\lambda^{\delta} & -\frac{\Pi_{1}}{2}(\lambda _{m}+\lambda^{\delta})\\ * & \Pi_{1} \end{array}\displaystyle \right ]\varepsilon(t) \\& \quad =\xi^{T}(t) \bigl(\operatorname{sym}\bigl(e_{1} \Pi_{1}\bigl(\lambda_{m}\lambda^{\delta } \bigr)e^{T}_{1}\bigr)-\operatorname{sym}\bigl(e_{1} \Pi_{1}\bigl(\lambda_{m}+\lambda^{\delta} \bigr)e^{T}_{4}\bigr) +\operatorname{sym} \bigl(e_{4}\Pi_{1}e^{T}_{4}\bigr)\bigr) \xi(t) \\& \quad \leq0, \end{aligned}$$
(25)

where \(\Pi_{1}=\operatorname{diag}\{\pi_{11},\ldots,\pi_{1n}\}\).

Similarly, for any appropriately diagonal matrices \(\Pi_{i}=\operatorname{diag}\{\pi _{i1},\ldots,\pi_{in}\}>0\), \(i=2,3\), we have:

$$\begin{aligned}& 2\varepsilon^{T}\bigl(t-h(t)\bigr)\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} \Pi_{2}\lambda_{m}\lambda^{\delta} & -\frac{\Pi_{2}}{2}(\lambda _{m}+\lambda^{\delta}) \\ {*} & \Pi_{2} \end{array}\displaystyle \right ]\varepsilon\bigl(t-h(t)\bigr) \\& \quad =\xi^{T}(t) \bigl(\operatorname{sym}\bigl(e_{2} \Pi_{2}\bigl(\lambda_{m}\lambda^{\delta } \bigr)e^{T}_{2}\bigr)-\operatorname{sym}\bigl(e_{2} \Pi_{2}\bigl(\lambda_{m}+\lambda^{\delta} \bigr)e^{T}_{5}\bigr) +\operatorname{sym} \bigl(e_{5}\Pi_{2}e^{T}_{5}\bigr)\bigr) \xi(t) \\& \quad \leq0, \end{aligned}$$
(26)
$$\begin{aligned}& 2\varepsilon^{T}(t-h)\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} \Pi_{3}\lambda_{m}\lambda^{\delta} & -\frac{\Pi_{3}}{2}(\lambda _{m}+\lambda^{\delta}) \\ {*} & \Pi_{3} \end{array}\displaystyle \right ]\varepsilon(t-h) \\& \quad =\xi^{T}(t) \bigl(\operatorname{sym}\bigl(e_{3} \Pi_{3}\bigl(\lambda_{m}\lambda^{\delta } \bigr)e^{T}_{3}\bigr)-\operatorname{sym}\bigl(e_{3} \Pi_{3}\bigl(\lambda_{m}+\lambda^{\delta} \bigr)e^{T}_{6}\bigr) +\operatorname{sym} \bigl(e_{6}\Pi_{3}e^{T}_{6}\bigr)\bigr) \xi(t) \\& \quad \leq0. \end{aligned}$$
(27)

Combining the inequalities from (11) to (27) together gives the upper bound of \(\dot{V}(t,x_{t})\):

$$ \dot{V}(t,x_{t})\leq\xi^{T}(t) ( \Xi_{[h(t)]}+\Phi_{a}+\Sigma_{j})\xi(t) \quad (\forall j=a,b). $$
(28)

Case II: \(\lambda_{i}^{\delta}\leq\frac{f_{i}(a)-f_{i}(b)}{a-b}\leq \lambda_{i}^{+}\).

Case II can be discussed similarly as the procedure in Case I. Then we obtain:

$$ 0\leq\xi^{T}(t)\Phi^{*}_{3}\xi(t), $$
(29)

where \(H_{3}\), \(H_{4}\), and \(\Pi_{i}\) (\(i=4,\ldots,6\)) are defined in Theorem 3.1.

Combining the inequalities from (11) to (22), together with (29), gives the upper bound of \(\dot{V}(t,x_{t})\):

$$ \dot{V}(t,x_{t})\leq\xi^{T}(t) ( \Xi_{[h(t)]}+\Phi_{b}+\Sigma_{j})\xi(t) \quad (\forall j=a,b). $$
(30)

Since \(\Xi_{[h(t)]}\) is affine in \(h(t)\), the negativity of \(\Xi_{[h(t)]}+\Phi_{i}+\Sigma_{j}\) for all \(h(t)\in[0,h]\) follows from its negativity at the two endpoints \(h(t)=0\) and \(h(t)=h\). Hence, applying Lemma 2.5 with \(\Gamma\xi(t)=0\), it follows that if LMIs (7) and (8) hold, then system (1) with \(\omega(t)=0\) is asymptotically stable. This ends the proof. □

3.2 Extended dissipative analysis

In this section, by assuming zero initial conditions, we establish the extended dissipativity condition for all nonzero \(\omega(t)\in\ell _{2}[0,\infty)\).

Theorem 3.2

For given scalars \(0<\delta\leq1\), \(h>0\), and μ, diagonal matrices \(\lambda_{m}=\operatorname{diag}\{\lambda_{1}^{-},\ldots, \lambda_{n}^{-}\}\) and \(\lambda_{M}=\operatorname{diag}\{\lambda_{1}^{+},\ldots,\lambda_{n}^{+}\}\), and matrices \(\Psi_{i}\) (\(i=1,\ldots,4\)) satisfying Assumption 2.2, system (1) is asymptotically stable and extended dissipative if there exist positive definite matrices P, \(Q_{i}\), \(U_{i}\), \(R_{i}\) (\(i=1,2\)), positive diagonal matrices \(K_{i}=\operatorname{diag}(k_{i1},\ldots,k_{in})\) (\(i=1,2\)), \(H_{i}=\operatorname{diag}(h_{i1},\ldots,h_{in})\) (\(i=1,\ldots,4\)), and \(\Pi _{i}=\operatorname{diag}(\pi_{i1},\ldots,\pi_{in})\) (\(i=1,\ldots,6\)), and matrices \(Y_{i}\) (\(i=1,\ldots,4\)), \(S_{i}\) (\(i=1,2\)), \(F_{i}\) (\(i=1,2\)), and \(X_{i}\) (\(i=1,\ldots,4\)) of appropriate dimensions such that LMIs (9) and the following conditions hold:

$$\begin{aligned}& \bigl(\bar{\Gamma}^{\perp}\bigr)^{T}(\bar{\Xi}_{[h(t)=0]}+ \bar{\Phi}_{i}+\bar {\Sigma}_{j}) \bigl(\bar{ \Gamma}^{\perp}\bigr)< 0 \quad (\forall i,j=a,b), \end{aligned}$$
(31)
$$\begin{aligned}& \bigl(\bar{\Gamma}^{\perp}\bigr)^{T}(\bar{\Xi}_{[h(t)=h]}+ \bar{\Phi}_{i}+\bar {\Sigma}_{j}) \bigl(\bar{ \Gamma}^{\perp}\bigr)< 0 \quad (\forall i,j=a,b), \end{aligned}$$
(32)
$$\begin{aligned}& P-D^{T}\Psi_{4}D\geq0, \end{aligned}$$
(33)

where

$$\begin{aligned}& \bar{\Gamma}=[\textstyle\begin{array}{@{}c@{\quad}c@{}}\Gamma & I\end{array}\displaystyle ], \qquad \bar{\Xi}_{[h(t)]}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} \Xi_{[h(t)]} & 0 \\ {*} & 0 \end{array}\displaystyle \right ], \\& \bar{\Phi}_{i}= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} \bar{\Phi}_{1} & \bar{\Phi}_{2} \\ * & -\Psi_{3} \end{array}\displaystyle \right ], \qquad \bar{ \Sigma}_{j}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} \Sigma_{j} & 0 \\ * & 0 \end{array}\displaystyle \right ], \\& \bar{\Phi}_{1}=\Phi_{i}-e_{1}D^{T} \Psi_{1}De_{1}^{T},\qquad \bar{ \Phi}_{2}=\bigl[\textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}}-\Psi^{T}_{2}D & 0& 0& 0 & 0 & 0 & 0 & 0 & 0\end{array}\displaystyle \bigr]^{T}. \end{aligned}$$

Proof

From (28) and (30) we have \(\dot{V}(t,x_{t})\leq\xi^{T}(t)(\Xi_{[h(t)]}+\Phi_{i}+\Sigma_{j})\xi(t)\) (\(\forall i,j=a,b\)), and it is clear that

$$ \bar{\xi}^{T}(t) (\bar{\Xi}_{[h(t)]}+\bar{\Phi}_{i}+ \bar{\Sigma}_{j})\bar {\xi}(t) =\xi^{T}(t) ( \Xi_{[h(t)]}+\Phi_{i}+\Sigma_{j})\xi(t)-J(t), $$

where \(\bar{\xi}^{T}(t)=[\xi^{T}(t)\ \omega^{T}(t)]^{T}\) and \(J(t)\) are defined in Definition 2.1. By Lemma 2.5, (31) and (32) are equivalent to \(\bar{\xi}^{T}(t)(\bar{\Xi }_{[h(t)]}+\bar{\Phi}_{i}+\bar{\Sigma}_{j})\bar{\xi}(t)<0\) (\(\forall i,j=a,b\)). Therefore, we can obtain

$$ \dot{V}(t)\leq\bar{\xi}^{T}(t) (\bar{\Xi}_{[h(t)]}+\bar{ \Phi}_{i}+\bar {\Sigma}_{j})\bar{\xi}(t) +J(t)\leq J(t). $$

By integrating both sides of this inequality from 0 to \(t\geq0\) we can obtain

$$ \int_{0}^{t}J(s)\,ds\geq V(t)-V(0)\geq x^{T}(t)Px(t). $$
(34)

We consider the two cases \(\Psi_{4}=0\) and \(\Psi_{4}>0\) separately: by Assumption 2.2, the case \(\Psi_{4}=0\) covers the strict \((\mathcal{Q},\mathcal{S},\mathcal{R})\)-dissipativity, the \(H_{\infty}\) performance, and the passivity, whereas the case \(\Psi_{4}>0\) covers the \(\ell_{2}-\ell_{\infty}\) performance criterion.

On one hand, we consider \(\Psi_{4}=0\) and from (34) we can get that

$$ \int_{0}^{t_{f}}J(s)\,ds\geq0. $$
(35)

This shows that the extended dissipativity condition (6) holds with \(\Psi_{4}=0\).

On the other hand, when \(\Psi_{4}>0\), as mentioned in Assumption 2.2, we have the matrices \(\Psi_{1}=0\), \(\Psi_{2}=0\), and \(\Psi_{3}>0\) in this case. Then, for any \(0\leq t\leq t_{f}\), (34) leads to \(\int _{0}^{t_{f}}J(s)\,ds\geq\int_{0}^{t}J(s)\,ds\geq x^{T}(t)Px(t)\). Therefore, according to (33), we have

$$ y^{T}(t)\Psi_{4}y(t)=x^{T}(t)D^{T} \Psi_{4}Dx(t)\leq x^{T}(t)Px(t)\leq \int_{0}^{t_{f}}J(s)\,ds. $$
(36)

From (35) and (36) we get that system (1) is extended dissipative. This completes the proof. □

4 Illustrative examples

In this section, we introduce two examples to illustrate the merits of the derived results.

Example 1

Consider the neural networks (1) with the following parameters:

$$\begin{aligned}& C=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 2 & 0 \\ 0 & 2 \end{array}\displaystyle \right ],\qquad A=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 1 & 1 \\ -1 & -1 \end{array}\displaystyle \right ], \qquad B=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 0.88 & 1 \\ 1 & 1 \end{array}\displaystyle \right ], \\& \lambda_{m}= \operatorname{diag}\{0,0\},\qquad \lambda_{M}=\operatorname{diag} \{0.4,0.8\}, \\& f_{1}(s)=0.2\bigl(\vert s+1\vert -|s-1|\bigr), \qquad f_{2}(s)=0.4\bigl(\vert s+1\vert -|s-1|\bigr). \end{aligned}$$

In this example, for stability analysis, D is chosen to be zero. Our purpose is to estimate the allowable upper bound on the delay h under different μ such that system (1) is globally asymptotically stable. When \(\delta=0.8\), according to Table 1, the stability criterion in this paper gives much less conservative results than those in [9–12, 41]. In addition, for the case of \(\mu =0.8\), \(h=8.2046\), and the initial state \((-0.2,0.2)^{T}\), the stability result is further verified by Figure 1.

Figure 1. The dynamical behavior of system (1) in Example 1.

Table 1 Allowable upper bounds of h for different μ in Example 1
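The trajectory reported in Figure 1 can be reproduced by direct numerical integration. The Python sketch below (added for illustration) simulates system (1) with the parameters of Example 1 and \(\omega(t)=0\) using a fixed-step Euler scheme; the particular delay \(h(t)=\frac{h}{2}(1+\sin(2\mu t/h))\) is only one assumed admissible choice satisfying \(0\leq h(t)\leq h\) and \(\dot{h}(t)\leq\mu\), since the paper does not specify the delay used for the figure.

```python
import numpy as np

# Euler simulation of system (1) with the parameters of Example 1 and omega(t) = 0.
C = np.diag([2.0, 2.0])
A = np.array([[1.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.88, 1.0], [1.0, 1.0]])
f = lambda v: np.array([0.2, 0.4]) * (np.abs(v + 1.0) - np.abs(v - 1.0))

h, mu = 8.2046, 0.8
dt, T = 1e-3, 60.0
steps, delay_steps = int(T / dt), int(round(h / dt))

x = np.zeros((steps + delay_steps + 1, 2))
x[: delay_steps + 1] = np.array([-0.2, 0.2])         # constant initial function phi(t)

for k in range(delay_steps, delay_steps + steps):
    t = (k - delay_steps) * dt
    ht = 0.5 * h * (1.0 + np.sin(2.0 * mu * t / h))   # assumed admissible delay h(t)
    kd = k - int(round(ht / dt))                      # index of the delayed state
    x[k + 1] = x[k] + dt * (-C @ x[k] + A @ f(x[k]) + B @ f(x[kd]))

print("state at t = %.0f s:" % T, x[-1])              # converges towards the origin
```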

Example 2

In this example, the generality of the extended dissipativity is demonstrated: it unifies popular and important performance criteria, such as the \(H_{\infty}\) performance, passivity, dissipativity, and \(\ell_{2}-\ell_{\infty}\) performance. Consider the neural networks (1) with the following parameters:

$$\begin{aligned}& C=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 0.7 & 0 \\ 0 & 0.8 \end{array}\displaystyle \right ],\qquad A=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 0.001 & 0 \\ 0 & 0.005 \end{array}\displaystyle \right ], \\& B=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} -0.1 & 0.01 \\ -0.2 & -0.1 \end{array}\displaystyle \right ], \qquad D=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 1 & 0 \\ 0 & 1 \end{array}\displaystyle \right ]\quad \mbox{and} \\& \lambda_{m}=\operatorname{diag}\{0,0\}, \qquad \lambda_{M}= \operatorname{diag}\{0.2,0.4\}. \end{aligned}$$

Case I: \(H_{\infty}\) performance. Let \(\Psi_{1}=-I\), \(\Psi _{2}=0\), \(\Psi_{3}=\gamma^{2}I\), and \(\Psi_{4}=0\). The extended dissipativity reduces to standard \(H_{\infty}\) performance. By Theorem 3.2, the allowable \(H_{\infty}\) performance γ can be obtained for the case \(\mu =0.5\) and different δ and h. The relationship among γ, δ, and h is demonstrated in Table 2. For \(\mu=0.5\) and fixed h, we can see from Table 2 that the minimum value of γ becomes smaller when the value of δ increases.

Table 2 Different minimums γ for various h and δ in Example 2
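In practice, the minimum γ reported in Table 2 is obtained by treating \(\gamma^{2}\) as an additional decision variable and solving a semidefinite program. The sketch below is only meant to illustrate this workflow and is not the LMI set of Theorem 3.2: it minimizes γ for the standard bounded real lemma applied to the delay-free part \(\dot{x}(t)=-Cx(t)+\omega(t)\), \(y(t)=Dx(t)\) of this example, with CVXPY assumed as the modelling interface.

```python
import cvxpy as cp
import numpy as np

# Toy gamma-minimization over an LMI: bounded real lemma for the delay-free part
# of Example 2 (NOT the full LMIs of Theorem 3.2).
C = np.diag([0.7, 0.8])
D = np.eye(2)
A, Bw = -C, np.eye(2)        # xdot = A x + Bw w,  y = D x
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
g2 = cp.Variable(nonneg=True)                    # gamma^2

lmi = cp.bmat([[A.T @ P + P @ A, P @ Bw,           D.T],
               [Bw.T @ P,        -g2 * np.eye(n),  np.zeros((n, n))],
               [D,               np.zeros((n, n)), -np.eye(n)]])

prob = cp.Problem(cp.Minimize(g2),
                  [P >> 1e-6 * np.eye(n), lmi << 0])
prob.solve()
print("gamma (delay-free part) =", np.sqrt(g2.value))
```

For the full conditions of Theorem 3.2, the same pattern applies: all decision matrices are declared as variables, the endpoint LMIs (31) and (32) are imposed for all combinations \(i,j=a,b\) together with (9) and (33), and \(\gamma^{2}\) is minimized.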

Case II: \(\ell_{2}-\ell_{\infty}\) performance. When we let \(\Psi _{1}=0\), \(\Psi_{2}=0\), \(\Psi_{3}=\gamma^{2}I\), and \(\Psi_{4}=I\), the extended dissipativity becomes the \(\ell_{2}-\ell_{\infty}\) performance. For \(\mu=0.8\), the different values of γ are listed in Table 3 by solving the LMIs in Theorem 3.2 with various values of δ and h. It is easy to see that the best value of δ is 0.7.

Table 3 Different minimums γ for various h and δ in Example 2

Case III: passivity performance. When we let \(\Psi_{1}=0\), \(\Psi _{2}=I\), \(\Psi_{3}=\gamma I\) and \(\Psi_{4}=0\), the passivity performance is obtained. For given \(\mu=0.5\) and \(\delta=0.5\), the maximum values of h with various γ are obtained in Table 4 by solving the LMIs in Theorem 3.2.

Table 4 Allowable maximums h for various γ and fixed δ , μ in Example 2

Case IV: dissipativity. When we let \(\Psi_{1}=-0.5I\), \(\Psi _{2}=I\), \(\Psi_{3}=2I\), and \(\Psi_{4}=0\), the dissipativity performance is obtained. For given \(\mu=0.5\) and \(\delta=0.1\), the maximum values of h with various γ are obtained in Table 5 by solving the LMIs in Theorem 3.2.

Table 5 Allowable maximums h for various γ and fixed δ , μ in Example 2

Finally, through Example 1, we conclude that our results improve upon the recent work [41] by 3.84% and 3.37% for \(\mu=0.8\) and 0.9, respectively.

5 Conclusions

In this paper, we investigated the problem of extended dissipativity analysis for a class of neural networks with time-varying delay. The extended dissipativity generalizes a few previously known results, as it contains the \(H_{\infty}\), passivity, dissipativity, and \(\ell_{2}-\ell _{\infty}\) performance in a unified framework. By introducing a suitable augmented Lyapunov-Krasovskii functional and exploiting sufficient information on the neuron activation functions together with a new bound inequality, some sufficient conditions were given in terms of linear matrix inequalities (LMIs) to guarantee the stability and extended dissipativity of delayed neural networks. At present, we have only given theoretical results, and we will try to extend them to real-life applications in the future.

References

  1. Cichoki, A, Unbehauen, R: Neural Networks for Optimization and Signal Processing. Wiley, Chichester (1993)


  2. Watta, PB, Wang, K, Hassoun, MH: Recurrent neural nets as dynamical Boolean systems with applications to associative memory. IEEE Trans. Neural Netw. 8, 1268-1280 (1997)


  3. Wu, Z, Shi, P, Su, H, Chu, J: Delay-dependent exponential stability analysis for discrete-time switched neural networks with time-varying delay. Neurocomputing 74, 1626-1631 (2011)


  4. Kwon, O, Park, J, Lee, S, Cha, E: Analysis on delay-dependent stability for neural networks with time-varying delays. Neurocomputing 103, 114-120 (2013)


  5. Tian, JK, Xiong, WJ, Xu, F: Improved delay-partitioning method to stability analysis for neural networks with discrete and distributed time-varying delays. Appl. Math. Comput. 233, 152-164 (2014)


  6. Rakkiyappan, R, Sakthivel, N, Park, JH, Kwon, OM: Sampled-data state estimation for Markovian jumping fuzzy cellular neural networks with mode-dependent probabilistic time-varying delays. Appl. Math. Comput. 221, 741-769 (2013)


  7. Zhang, H, Wang, Z, Liu, D: Robust stability analysis for interval Cohen-Grossberg neural networks with unknown time-varying delays. IEEE Trans. Neural Netw. 21, 1942-1954 (2009)


  8. Cheng, J, Zhu, H, Zhong, S, Li, G: Novel delay-dependent robust stability criteria for neutral systems with mixed time-varying delays and nonlinear perturbations. Appl. Math. Comput. 219(14), 7741-7763 (2013)


  9. Kown, OM, Park, JH: Improved delay-dependent stability criterion for neural networks with time-varying delays. Phys. Lett. A 373, 529-535 (2009)


  10. Tian, J, Zhong, S: Improved delay-dependent stability criterion for neural networks with time-varying delay. Appl. Math. Comput. 217, 10278-10288 (2011)


  11. Wang, Y, Yang, C, Zuo, Z: On exponential stability analysis for neural networks with time-varying delays and general activation functions. Commun. Nonlinear Sci. Numer. Simul. 17, 1447-1459 (2012)


  12. Kwon, O, Park, M, Lee, S, Park, J, Cha, E: Stability for neural networks with time-varying delays via some new approaches. IEEE Trans. Neural Netw. Learn. Syst. 24, 181-193 (2013)


  13. Park, JH, Kwon, OM: Further results on state estimation for neural networks of neutral-type with time-varying delay. Appl. Math. Comput. 208, 69-75 (2009)


  14. Moon, YS, Park, PG, Kwon, WH, Lee, YS: Delay dependent robust stabilization of uncertain state-delayed systems. Int. J. Control 74, 1447-1455 (2001)


  15. Shatyrko, A, Diblík, J, Khusainov, D, Ruzickova, M: Stabilization of Lur’e-type nonlinear control systems by Lyapunov-Krasovskii functionals. Adv. Differ. Equ. 2012, 229 (2012)


  16. Zeng, HB, Park, JH, Zhang, CF, Wang, W: Stability and dissipativity analysis of static neural networks with interval time-varying delay. J. Franklin Inst. 352, 1284-1295 (2015)


  17. Feng, JW, Tang, Z, Zhao, Y, Xu, C: Cluster synchronisation of non-linearly coupled Lur’e networks with identical and non-identical nodes and an asymmetrical coupling matrix. IET Control Theory Appl. 7, 2117-2127 (2013)


  18. Tang, Z, Feng, JW, Zhao, Y: Global synchronization of nonlinear coupled complex dynamical networks with information exchanges at discrete-time. Neurocomputing 151, 1486-1494 (2015)


  19. Tang, Z, Park, JH, Lee, TH, Feng, JW: Mean square exponential synchronization for impulsive coupled neural networks with time-varying delays and stochastic disturbances. Complexity (2015). doi:10.1002/cplx.21647


  20. Tang, Z, Park, JH, Lee, TH, Feng, JW: Random adaptive control for cluster synchronization of complex networks with distinct communities. Int. J. Adapt. Control Signal Process. (2015). doi:10.1002/acs.2599


  21. Bara, GI, Boutayeb, M: Static output feedback stabilization with \(H_{\infty}\) performance for linear discrete-time systems. IEEE Trans. Autom. Control 50, 250-254 (2005)


  22. Lee, WI, Lee, SY, Park, PG: Improved criteria on robust stability and \(H_{\infty}\) performance for linear systems with interval time-varying delays via new triple integral functional. Appl. Math. Comput. 243, 570-577 (2014)


  23. Liu, M, Zhang, S, Fan, Z, Zheng, S, Sheng, W: Exponential \(H_{\infty}\) synchronization and state estimation for chaotic systems via a unified model. IEEE Trans. Neural Netw. Learn. Syst. 24, 1114-1126 (2013)


  24. Mahmoud, MS, Ismail, A: Passivity and passification of time-delay systems. J. Math. Anal. Appl. 292, 247-258 (2004)


  25. Xu, M, Zheng, WX, Zou, Y: Passivity analysis of neural networks with time-varying delays. IEEE Trans. Circuits Syst. II 56, 325-329 (2009)


  26. Zhang, L, Shi, P, Boukas, EK, Wang, C: Robust \(\ell_{2}-\ell_{\infty}\) filtering for switched linear discrete time-delay systems with polytopic uncertainties. IET Control Theory Appl. 1, 722-730 (2007)


  27. Wu, Z, Cui, M, Xie, X, Shi, P: Theory of stochastic dissipative systems. IEEE Trans. Autom. Control 56, 1650-1655 (2011)


  28. Han, C, Wu, L, Shi, P, Zeng, Q: On dissipativity of Takagi-Sugeno fuzzy descriptor systems with time-delay. J. Franklin Inst. 349, 3170-3184 (2012)


  29. Wu, ZG, Shi, P, Su, H, Chu, J: Dissipativity analysis for discrete-time stochastic neural networks with time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 24, 345-355 (2013)


  30. Mahmoud, MS, Khan, GD: Dissipativity analysis for discrete stochastic neural networks with Markovian delays and partially known transition matrix. Appl. Math. Comput. 228, 292-310 (2014)


  31. Mahmoud, MS, Nounou, HN: Dissipative analysis and synthesis of time-delay systems. Mediterr. J. Meas. Control 1, 97-108 (2005)


  32. Meisami-Azad, M, Mohammadpour, J, Grigoriadis, KM: Dissipative analysis and control of state-space symmetric systems. Automatica 45, 1574-1579 (2009)


  33. Mahmoud, MS, Nounou, HN, Xia, Y: Robust dissipative control for Internet-based switching systems. J. Franklin Inst. 347, 154-172 (2010)


  34. Mahmoud, MS, Saif, AWA: Dissipativity analysis and design for uncertain Markovian jump systems with time-varying delays. Appl. Math. Comput. 219, 9681-9695 (2013)


  35. Mahmoud, MS, Shi, Y, Al-Sunni, FM: Dissipativity analysis and synthesis of a class of nonlinear systems with time-varying delays. J. Franklin Inst. 346, 570-592 (2009)


  36. Jeltsema, D, Scherpen, JMA: Tuning of passivity-preserving controllers for switched-mode power converters. IEEE Trans. Autom. Control 49, 1333-1344 (2004)


  37. Niu, Y, Wang, X, Lu, J: Dissipative-based adaptive neural control for nonlinear systems. J. Control Theory Appl. 2, 126-130 (2004)


  38. Feng, Z, Lam, J: Stability and dissipativity analysis of distributed delay cellular neural networks. IEEE Trans. Neural Netw. 22, 976-981 (2011)


  39. Wu, ZG, Park, JH, Shu, H, Chu, J: Admissibility and dissipativity analysis for discrete-time singular systems with mixed time-varying delays. Appl. Math. Comput. 218, 7128-7138 (2012)


  40. Zhang, B, Zheng, WX, Xu, S: Filtering of Markovian jump delay systems based on a new performance index. IEEE Trans. Circuits Syst. I 60, 1250-1263 (2013)


  41. Lee, TH, Park, MJ, Park, JH, Kwon, OM, Lee, SM: Extended dissipative analysis for neural networks with time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 25, 1936-1941 (2014)


  42. Feng, Z, Zeng, W: On extended dissipativity of discrete-time neural networks with time delay. IEEE Trans. Neural Netw. Learn. Syst. 26, 3293-3300 (2015)


  43. Zeng, HB, Park, JH, Xia, JW: Further results on dissipativity analysis of neural networks with time-varying delay and randomly occurring uncertainties. Nonlinear Dyn. 79, 83-91 (2015)


  44. Wang, J, Park, JH, Shen, H, Wang, J: Delay-dependent robust dissipativity conditions for delayed neural networks with random uncertainties. Appl. Math. Comput. 221, 710-719 (2013)


  45. Cheng, J, Zhu, H, Ding, Y, Zhong, S, Zhong, Q: Stochastic finite-time boundedness for Markovian jumping neural networks with time-varying delays. Appl. Math. Comput. 242, 281-295 (2014)


  46. Zhang, Y, Yue, D, Tian, E: New stability criteria of neural networks with interval time-varying delays: a piecewise delay method. Appl. Math. Comput. 208, 249-259 (2009)


  47. Seuret, A, Gouaisbaut, F: Wirtinger-based integral inequality: application to time-delay systems. Automatica 49, 2860-2866 (2013)


  48. Skelton, RE, Iwasaki, T, Grigoradis, KM: A Unified Algebraic Approach to Linear Control Design. Taylor & Francis, New York (1997)



Acknowledgements

The authors would like to thank the editors and the reviewers for their valuable suggestions and comments, which have led to a much improved paper. This work was financially supported by the National Natural Science Foundation of China (No. 61273015, No. 61533006) and the China Scholarship Council.

Author information


Corresponding author

Correspondence to Xin Wang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors drafted the manuscript and read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Wang, X., She, K., Zhong, S. et al. On extended dissipativity analysis for neural networks with time-varying delay and general activation functions. Adv Differ Equ 2016, 79 (2016). https://doi.org/10.1186/s13662-016-0769-7

