New results on \(H_{\infty}\) state estimation of static neural networks with time-varying delay

Abstract

This paper is concerned with the problem of \(H_{\infty}\) state estimation for a class of delayed static neural networks. The purpose is to design a delay-dependent state estimator such that the dynamics of the error system are globally exponentially stable with a prescribed \(H_{\infty}\) performance. Some improved delay-dependent conditions are established by using the delay partitioning method and the free-matrix-based integral inequality. The gain matrix and the optimal performance index are obtained by solving a convex optimization problem subject to linear matrix inequalities (LMIs). Numerical examples are provided to illustrate the effectiveness of the proposed method in comparison with some existing results.

1 Introduction

Neural networks (NNs) have attracted a great deal of attention because of their extensive applications in various fields such as associative memory, pattern recognition, combinatorial optimization, and adaptive control [1, 2]. Due to the finite switching speed of the amplifiers, time delays are inevitable in electronic implementations of neural networks (such as Hopfield neural networks and cellular neural networks). Time delay in a system is often a source of instability, and delay systems have been widely studied (see [3–9] and the references therein). The stability and passivity problems of delayed NNs have also been widely reported (see [1, 2, 10–14]).

In this paper we mainly focus on static neural networks (SNNs), which form one type of recurrent neural network. According to whether the neuron states or the local field states of neurons are chosen as basic variables, neural network models can be classified into static neural networks and local field networks. As mentioned in [15, 16], local field neural networks and SNNs are not always equivalent. Compared with local field neural networks, the stability analysis of SNNs has received relatively little attention, although some interesting results have been reported in the literature [2, 17–21].

Meanwhile, the state estimation of neural networks is an important issue: in many applications, the neuron states must be estimated before the network can be utilized. Recently, the state estimation problem for neural networks has been investigated in [22–37]. Among these works, \(H_{\infty}\) state estimation of static neural networks with time delay was studied in [31, 33–37]. In [31], a delay partition approach was proposed to deal with the state estimation problem for a class of static neural networks with time-varying delay. In [33], the state estimation problem with guaranteed \(H_{\infty}\) and \(H_{2}\) performance for static neural networks was considered. Further improved results were obtained in [34, 35, 37] by using the convex combination approach. The exponential state estimation of time-varying delayed neural networks was studied in [36]. All of these works use the Lyapunov-Krasovskii functional (LKF) method, in which conservativeness comes from two sources: the choice of the functional and the bound on its derivative. A delay partitioning LKF reduces the conservativeness of a simple LKF by incorporating more delay information. There are two main techniques for dealing with the integrals that appear in the derivative: the free-weighting matrix method and the integral inequality method. However, the inequalities used may still introduce conservatism. Moreover, the information on the neuron activation functions has not been adequately taken into account in [31, 33–37]. Therefore, the guaranteed performance state estimation problem has not yet been fully studied, and there remains room for improvement.

This paper studies the problem of \(H_{\infty}\) state estimation for a class of delayed SNNs. Our aim is to design a delay-dependent state estimator such that the filtering error system is globally exponentially stable with a prescribed \(H_{\infty}\) performance. By using a delay equal-partitioning method, augmented LKFs are properly constructed. In [21], the lower bound of the delay is partitioned into several components, so that approach cannot handle the situation where the lower bound is zero; moreover, the effectiveness of that partitioning is not evident when the lower bound is very small. Different from the delay partitioning method used in [21], we decompose the delay interval into multiple equidistant subintervals; this method is more universal in dealing with interval time-varying delays. The free-weighting matrix technique is then used to obtain a tighter upper bound on the derivatives of the LKFs. As mentioned in Remark 3.3, we also reduce conservatism by taking advantage of the information on the activation function. Moreover, the free-matrix-based integral inequality is used to derive improved delay-dependent criteria. Compared with the existing results in [34–36], the criteria in this paper lead to less conservatism.

The remainder of this paper is organized as follows. The state estimation problem is formulated in Section 2. Section 3 deals with the design of the state estimator for delayed static neural networks. In Section 4, two numerical examples are provided to show the effectiveness of the results. Finally, some conclusions are made in Section 5.

Notations: The notations used throughout the paper are fairly standard. \(\mathbb{R}^{n}\) denotes the n-dimensional Euclidean space; \(\mathbb{R}^{n\times m}\) is the set of all \(n\times m\) real matrices; the notation \(A > 0\) (\(A<0\)) means A is a symmetric positive (negative) definite matrix; \(A^{-1}\) and \(A^{T}\) denote the inverse and the transpose of matrix A, respectively; I represents the identity matrix with proper dimensions; a symmetric term in a symmetric matrix is denoted by \((\ast)\); \(\operatorname{sym}\{A\}\) represents \((A + A^{T})\); \(\operatorname{diag}\{\cdot\}\) stands for a block-diagonal matrix. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations.

2 Problem formulation

Consider the delayed static neural network subject to noise disturbances described by

$$ \left\{ \textstyle\begin{array}{l} \dot{x}(t)=-Ax(t)+f(Wx(t-h(t))+J )+B_{1}w(t), \\ y(t)=Cx(t)+Dx(t-h(t))+B_{2}w(t), \\ z(t)=Hx(t), \\ x(t)=\phi(t),\quad t\in[-\tau,0], \end{array}\displaystyle \right. $$
(1)

where \(x(t)=[x_{1}(t), x_{2}(t), \ldots, x_{n}(t)]^{T}\in \mathbb{R}^{n} \) is the state vector of the neural network, \(y(t)\in \mathbb{R}^{m}\) is the neural network output measurement, \(z(t)\in \mathbb{R}^{r}\), to be estimated, is a linear combination of the state, \(w(t)\in \mathbb{R}^{n}\) is the noise input belonging to \(L_{2}[0,\infty)\), \(f(\cdot)\) denotes the neuron activation function, \(A= {\operatorname{diag}} \{a_{1}, a_{2}, \ldots, a_{n}\}\) with \(a_{i}> 0\), \(i=1, 2, \ldots, n\), is a positive diagonal matrix, \(B_{1}\), \(B_{2}\), C, D, and H are known real matrices with appropriate dimensions, \(W\in \mathbb{R}^{n\times n} \) denotes the connection weight matrix, \(h(t)\) represents the time-varying delay, J represents the exogenous input vector, the function \(\phi(t)\) is the initial condition, and \(\tau= \sup_{t\geq 0}\{h(t)\}\).

In this paper, the time-varying delays satisfy

$$0\leqslant d_{1}\leqslant h(t) \leqslant d_{2},\qquad {\dot{h}(t)} \leqslant \mu, $$

where \(d_{1}\), \(d_{2}\), and μ are constants.
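Although the analysis below is purely LMI based, it may help to see system (1) numerically. The following is a minimal sketch of a fixed-step Euler integration of (1) with a history buffer for the delayed state; the step size, horizon, two-dimensional data, and the choice \(f=\tanh\) are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def simulate_snn(A, W, B1, J, f, h, w, phi, T=10.0, dt=1e-3, tau=1.0):
    """Euler integration of x'(t) = -A x(t) + f(W x(t-h(t)) + J) + B1 w(t).

    phi : callable giving the initial history on [-tau, 0]
    h   : callable t -> delay value in [0, tau]
    w   : callable t -> disturbance vector
    """
    n_hist = int(round(tau / dt))                     # number of stored past samples
    # buffer: x_buf[k] approximates x(t - (n_hist - k) * dt)
    x_buf = [phi(-(n_hist - k) * dt) for k in range(n_hist + 1)]
    traj = []
    for i in range(int(round(T / dt))):
        t = i * dt
        x = x_buf[-1]                                 # current state x(t)
        lag = int(round(h(t) / dt))                   # index of the delayed sample
        x_del = x_buf[-1 - lag]                       # x(t - h(t))
        dx = -A @ x + f(W @ x_del + J) + B1 @ w(t)
        x_buf.append(x + dt * dx)
        x_buf.pop(0)                                  # keep the buffer length fixed
        traj.append(x)
    return np.array(traj)

# illustrative two-dimensional example (all data assumed, not taken from the paper)
A = np.diag([1.0, 1.5])
W = np.array([[0.5, -0.3], [0.2, 0.4]])
x_traj = simulate_snn(A, W, B1=np.zeros((2, 2)), J=np.zeros(2), f=np.tanh,
                      h=lambda t: 0.5 + 0.5 * np.sin(t),      # delay in [0, 1]
                      w=lambda t: np.zeros(2),
                      phi=lambda s: np.array([0.3, -0.5]))
print(x_traj[-1])                                     # state after T seconds
```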

In order to conduct the analysis, the following assumption is needed.

Assumption 2.1

For any \(x, y \in R\) (\(x\neq{y}\)), \(i \in \{1, 2, \ldots, n\}\), the activation function satisfies

$$ {k}_{i}^{-}\leq\frac{f_{i}(x)-f_{i}(y)}{x-y}\leq {k}_{i}^{+}, $$
(2)

where \({k}_{i}^{-}\), \({k}_{i}^{+}\) are constants, and we define \(K^{-}= {\operatorname{diag}} \{k^{-}_{1}, k^{-}_{2}, \ldots, k^{-}_{n}\}\), \(K^{+}= {\operatorname{diag}} \{k^{+}_{1}, k^{+}_{2}, \ldots, k^{+}_{n}\}\).
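Assumption 2.1 is the standard sector condition on the activation function. As a small illustration (used again in Example 2, where \(f(x)=\tanh(x)\) gives \(K^{-}=0\) and \(K^{+}=I\)), the sketch below checks condition (2) numerically on a sampling grid; the grid and the tolerance are assumptions made only for this check.

```python
import numpy as np

def sector_condition_holds(f, k_minus, k_plus, pts, tol=1e-12):
    """Check that (f(x)-f(y))/(x-y) lies in [k_minus, k_plus] for all sampled x != y."""
    ok = True
    for x in pts:
        for y in pts:
            if x == y:
                continue
            slope = (f(x) - f(y)) / (x - y)
            ok &= (k_minus - tol <= slope <= k_plus + tol)
    return bool(ok)

pts = np.linspace(-5.0, 5.0, 201)
print(sector_condition_holds(np.tanh, 0.0, 1.0, pts))   # True: tanh satisfies (2) with k^- = 0, k^+ = 1
```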

We construct the following state estimator to estimate \(z(t)\):

$$ \left\{ \textstyle\begin{array}{l} \dot{\hat{x}}(t)=-A\hat{x}(t)+f(W\hat{x} (t-h(t))+J)+K(y(t)-\hat{y}(t)), \\ \hat{y}(t)=C\hat{x}(t)+D\hat{x}(t-h(t)), \\ \hat{z}(t)=H\hat{x}(t), \\ \hat{x}(t)=0,\quad t\in[-\tau,0], \end{array}\displaystyle \right. $$
(3)

where \(\hat{x}(t)\in \mathbb{R}^{n} \) is the estimated state vector of the neural network, \(\hat{z}(t)\) and \(\hat{y}(t)\) denote the estimates of \({z}(t)\) and \({y}(t)\), respectively, and K is the gain matrix to be determined. Defining the errors \(e(t)={x}(t)-\hat{x}(t)\) and \(\bar{z}(t)={z}(t)-\hat{z}(t)\), we easily obtain the error system:

$$ \left\{ \textstyle\begin{array}{l} \dot{{e}}(t)=-(A+KC){e}(t)-KDe(t-h(t) )+g(W{e}(t-h(t)))+(B_{1}-KB_{2})w(t), \\ \bar{z}(t)=H{e}(t), \end{array}\displaystyle \right. $$
(4)

where \(g(W{e}(t))=f(Wx(t)+J)-f(W\hat{x}(t)+J)\).

Definition 2.1

[35] For any finite initial condition \(\phi(t) \in \mathcal{C}^{1}([-\tau,0];\mathbb{R}^{n})\), the error system (4) with \(w(t)=0\) is said to be globally exponentially stable with a decay rate β if there exist constants \(\lambda >0\) and \(\beta >0\) such that

$$ \bigl\| e(t)\bigr\| ^{2}\leq\lambda e^{-\beta t} \sup_{-\tau< s< 0} \bigl\{ \bigl\| \phi(s)\bigr\| ^{2},\bigl\| \dot{\phi}(s)\bigr\| ^{2}\bigr\} . $$

Given a prescribed disturbance attenuation level \(\gamma > 0\), the error system is said to be globally exponentially stable with \(H_{\infty}\) performance γ when the error system is globally exponentially stable and the response \(\bar{z}(t)\) under the zero initial condition satisfies

$$ \begin{aligned} \|\bar{z}\|^{2}_{2} \leq \gamma^{2}\|w\|^{2}_{2}, \end{aligned} $$

for every nonzero \(w(t) \in L_{2}[0,\infty)\), where \(\|\phi\|_{2}=\sqrt{\int_{0}^{\infty}{\phi^{T}(t)\phi(t) \,\mathrm{d}t}}\).
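In simulations, this \(H_{\infty}\) inequality can be checked empirically by approximating the \(L_{2}\) norms with Riemann sums over sampled trajectories. The sketch below is such a check; the sampling step and the sampled signals \(\bar{z}\) and \(w\) are assumed to come from a simulation like the one sketched after system (1).

```python
import numpy as np

def l2_norm(signal, dt):
    """Approximate ||phi||_2 = sqrt( int phi^T(t) phi(t) dt ) from samples (rows = time)."""
    return np.sqrt(np.sum(signal * signal) * dt)

def empirical_hinf_ratio(z_bar, w, dt):
    """||z_bar||_2 / ||w||_2; it should not exceed gamma when the performance bound holds."""
    return l2_norm(z_bar, dt) / l2_norm(w, dt)

# usage with sampled signals of shape (steps, dim), e.g. from an Euler simulation:
# print(empirical_hinf_ratio(z_bar_samples, w_samples, dt=1e-3))
```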

Lemma 2.1

[38]

For any constant matrix \(Z \in \mathbb{R}^{n \times n}\), \(Z=Z^{T}>0\), scalars \(h_{2}>h_{1}>0\), and vector function \(x: [h_{1},h_{2}]\to \mathbb{R}^{n}\) such that the following integrations are well defined, the inequality below holds:

$$ \begin{aligned} -(h_{2}-h_{1}) \int_{t-h_{2}}^{t-h_{1}}{x^{T}(s)Z x(s) \,\mathrm{d}s } \leqslant- \int_{t-h_{2}}^{t-h_{1}}{x^{T}(s)\, \mathrm{d}s} Z \int_{t-h_{2}}^{t-h_{1}}{x(s)\, \mathrm{d}s}. \end{aligned} $$
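Lemma 2.1 (Jensen's inequality) can be spot-checked numerically in its equivalent positive form \((h_{2}-h_{1})\int x^{T}Zx\,\mathrm{d}s \geq (\int x\,\mathrm{d}s)^{T}Z\int x\,\mathrm{d}s\). In the sketch below, the random \(Z>0\), the sample function \(x(s)\), and the quadrature grid are assumptions made only for the check.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h1, h2 = 3, 0.2, 1.0
M = rng.standard_normal((n, n))
Z = M @ M.T + n * np.eye(n)                        # random symmetric positive definite Z

s = np.linspace(h1, h2, 2001)
ds = s[1] - s[0]
x = np.vstack([np.sin(3 * s), np.cos(s), s ** 2])  # sample vector function x(s), shape (n, len(s))

lhs = (h2 - h1) * np.sum(np.einsum('it,ij,jt->t', x, Z, x)) * ds   # (h2-h1) * int x^T Z x ds
v = np.sum(x, axis=1) * ds                                         # int x(s) ds
rhs = v @ Z @ v                                                    # (int x)^T Z (int x)
print(lhs >= rhs)                                  # True: Jensen's integral inequality holds
```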

Lemma 2.2

Schur complement

Given constant symmetric matrices \(S_{1}\), \(S_{2}\), and \(S_{3}\), where \(S_{1}=S_{1}^{T}\), and \(S_{2}=S_{2}^{T}>0\), then \(S_{1}+S_{3}^{T}S_{2}^{-1}S_{3}<0\) if and only if

$$\left [\textstyle\begin{array}{c@{\quad}c} S_{1} & S_{3}^{T}\\ S_{3} & -S_{2} \end{array}\displaystyle \right ]< 0 \quad\textit{or}\quad \left [\textstyle\begin{array}{c@{\quad}c} -S_{2} & S_{3}\\ S_{3}^{T} & S_{1} \end{array}\displaystyle \right ]< 0. $$
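Lemma 2.2 is what later allows the quadratic output term to be folded into LMI (6). The sketch below illustrates the equivalence numerically on small matrices; the particular \(S_{1}\), \(S_{2}\), \(S_{3}\) are assumptions chosen only for the illustration, and the two printed truth values always coincide, as the lemma asserts.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
S1 = -5.0 * np.eye(n)                              # S1 = S1^T
S2 = np.eye(m) + 0.1 * np.diag(rng.random(m))      # S2 = S2^T > 0
S3 = rng.standard_normal((m, n))

def is_neg_def(M):
    return np.max(np.linalg.eigvalsh(M)) < 0

cond_a = is_neg_def(S1 + S3.T @ np.linalg.inv(S2) @ S3)
cond_b = is_neg_def(np.block([[S1, S3.T], [S3, -S2]]))
print(cond_a, cond_b)                              # the two truth values agree, as Lemma 2.2 asserts
```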

Lemma 2.3

[39]

For a differentiable signal \(x: [\alpha,\beta]\rightarrow \mathbb{R}^{n}\), symmetric matrices \(R\in \mathbb{R}^{n \times n}\), \(Y_{1},Y_{3}\in \mathbb{R}^{3n \times 3n}\), and any matrices \(Y_{2}\in \mathbb{R}^{3n \times 3n}\) and \(M_{1},M_{2}\in \mathbb{R}^{3n \times n}\) satisfying

$$ \begin{aligned} \bar{Y}=\left [ \textstyle\begin{array}{c@{\quad}c@{\quad}c} Y_{1}&Y_{2}&M_{1}\\ \ast&Y_{3}&M_{2}\\ \ast&\ast&R \end{array}\displaystyle \right ]\geq 0, \end{aligned} $$

the following inequality holds:

$$ \begin{aligned} - \int_{\alpha}^{\beta} \dot{x}^{T} (s )R \dot{x} (s )\,\mathrm{d}s \leq \varpi^{T} \Omega\varpi, \end{aligned} $$
(5)

where

$$ \begin{aligned} &\Omega= (\beta-\alpha ) \biggl(Y_{1}+ \frac{1}{3}Y_{3} \biggr)+ \operatorname{Sym} \{M_{1}\bar{ \Pi}_{1}+M_{2}\bar{\Pi}_{2} \}, \\ &\bar{\Pi}_{1}=\bar{e}_{1}-\bar{e}_{2},\qquad \bar{ \Pi}_{2}=2\bar{e}_{3}-\bar{e}_{1}- \bar{e}_{2}, \\ &\bar{e}_{1}=[ \textstyle\begin{array}{c@{\quad}c@{\quad}c} I&0&0 \end{array}\displaystyle ],\qquad\bar{e}_{2}= [ \textstyle\begin{array}{c@{\quad}c@{\quad}c} 0&I&0 \end{array}\displaystyle ],\qquad \bar{e}_{3}=[ \textstyle\begin{array}{c@{\quad}c@{\quad}c} 0&0&I \end{array}\displaystyle ], \end{aligned} $$

and

$$\varpi^{T}=\left[\textstyle\begin{array}{c@{\quad}c@{\quad}c} x^{T}(\beta) & x^{T}(\alpha) & \frac{1}{\beta-\alpha}\int_{\alpha}^{\beta}{{x}^{T}(s)\, \mathrm{d}s}\end{array}\displaystyle \right]. $$
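Since Lemma 2.3 is the key bounding tool in the proof of Theorem 3.1, a numerical spot check of (5) may be useful when implementing it. In the sketch below a feasible \(\bar{Y}\geq0\) is generated by squaring a random matrix and its blocks are read off; the dimension, the test signal \(x(s)\), and the quadrature grid are assumptions made only for the check.

```python
import numpy as np

rng = np.random.default_rng(2)
n, alpha, beta = 2, 0.0, 1.0

# build a random Ybar >= 0 of size 7n x 7n and read off its blocks
Z = rng.standard_normal((7 * n, 7 * n))
Ybar = Z.T @ Z
Y1, Y2, M1 = Ybar[:3*n, :3*n], Ybar[:3*n, 3*n:6*n], Ybar[:3*n, 6*n:]
Y3, M2 = Ybar[3*n:6*n, 3*n:6*n], Ybar[3*n:6*n, 6*n:]
R = Ybar[6*n:, 6*n:]

# test signal x(s) = (sin 2s, s^3) and its derivative
s = np.linspace(alpha, beta, 4001)
ds = s[1] - s[0]
x = np.vstack([np.sin(2 * s), s ** 3])
xdot = np.vstack([2 * np.cos(2 * s), 3 * s ** 2])

lhs = -np.sum(np.einsum('it,ij,jt->t', xdot, R, xdot)) * ds        # -int xdot^T R xdot ds

e1 = np.hstack([np.eye(n), np.zeros((n, 2 * n))])
e2 = np.hstack([np.zeros((n, n)), np.eye(n), np.zeros((n, n))])
e3 = np.hstack([np.zeros((n, 2 * n)), np.eye(n)])
Pi1, Pi2 = e1 - e2, 2 * e3 - e1 - e2
S = M1 @ Pi1 + M2 @ Pi2
Omega = (beta - alpha) * (Y1 + Y3 / 3) + S + S.T
varpi = np.concatenate([x[:, -1], x[:, 0], np.sum(x, axis=1) * ds / (beta - alpha)])
print(lhs <= varpi @ Omega @ varpi)                # True: inequality (5) holds for this feasible Ybar
```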

3 State estimator design

In this section, a delay partition approach is proposed to design a state estimator for the static neural network (1). For convenience of presentation, we denote

$$\hat{\eta}_{1}(t)=\begin{bmatrix} e(t-d_{1})\\ e(t-d_{1}-\frac{1}{m}d) \\ e(t-d_{1}-\frac {2}{m}d)\\ \vdots\\ e(t-d_{1}-\frac {m-1}{m}d) \end{bmatrix},\qquad \hat{\eta}_{2}(t)=\begin{bmatrix} g(We(t-d_{1}))\\ g(We(t-d_{1}-\frac{1}{m}d))\\ g(We(t-d_{1}-\frac {2}{m}d))\\ \vdots\\ g(We(t-d_{1}-\frac {m-1}{m}d)) \end{bmatrix}, $$

where \(d=d_{2}-d_{1}\).

Theorem 3.1

Under Assumption 2.1, for given scalars μ, \(0\leqslant d_{1}\leqslant d_{2}\), \(\gamma>0\), \(\alpha \geq0\), and integers \(m\geq1\), \(m\geq k\geq1\), the system (4) is globally exponentially stable with \(H_{\infty}\) performance γ if there exist matrices \(P\mbox{ }(\in \mathbb{R}^{n\times n})>0\), \(R_{i}\mbox{ }(\in \mathbb{R}^{n\times n})>0\) (\(i=1,2,\ldots,m+1\)), \({Q_{1}\mbox{ }(\in \mathbb{R}^{2mn\times 2mn})>0}\), \(Q_{2}\mbox{ }(\in \mathbb{R}^{2n\times 2n})>0\), symmetric matrices \(X_{1},X_{3},Y_{1},Y_{3},X_{4},X_{6},Y_{4},Y_{6}\in \mathbb{R}^{3n\times 3n}\), positive diagonal matrices Γ, \(\Lambda_{i}\) (\(i=1,2,\ldots,m+1\)), \(\Lambda_{m+2}\), and any matrices with appropriate dimensions M, G, \(X_{2}\), \(X_{5}\), \(Y_{2}\), \(Y_{5}\), \(N_{j}\) (\(j=1,2,\ldots,8\)), such that the following LMIs hold:

$$\begin{aligned} & \begin{bmatrix} \hat{\Xi} & \bar{H}^{T} \\ \ast & -I \end{bmatrix}< 0, \end{aligned}$$
(6)
$$\begin{aligned} &\begin{bmatrix} X_{1} & X_{2} &N_{1} \\ \ast & X_{3} &N_{2} \\ \ast &\ast &R_{m+1} \end{bmatrix}\geq0, \end{aligned}$$
(7)
$$\begin{aligned} &\begin{bmatrix} Y_{1} & Y_{2} &N_{3} \\ \ast & Y_{3} &N_{4} \\ \ast &\ast & R_{m+1} \end{bmatrix}\geq0, \end{aligned}$$
(8)
$$\begin{aligned} &\begin{bmatrix} X_{4} & X_{5} &N_{5} \\ \ast & X_{6} &N_{6} \\ \ast &\ast & R_{k} \end{bmatrix}\geq0, \end{aligned}$$
(9)
$$\begin{aligned} &\begin{bmatrix} Y_{4} & Y_{5} &N_{7} \\ \ast & Y_{6} &N_{8} \\ \ast &\ast & R_{k} \end{bmatrix}\geq0, \end{aligned}$$
(10)

where

$$\begin{aligned} &\hat{\Xi}=\alpha E_{2m+3}^{T}PE_{2m+3}+\operatorname{sym} \bigl\{ E_{2m+3}^{T}PE_{2m+12}\bigr\} + \Pi_{1}^{T}Q_{1} \Pi_{1} - e^{{-\alpha d}/m} \Pi_{2}^{T}Q_{1} \Pi_{2} \\ &\hphantom{\hat{\Xi}=} +\Pi_{3}^{T}Q_{2} \Pi_{3} -(1- \mu)e^{{-\alpha d_{2}}/m} \Pi_{4}^{T}Q_{2} \Pi_{4}+(d_{2}-d_{1})E_{2m+12}^{T}R_{m+1}E_{2m+12} \\ &\hphantom{\hat{\Xi}=}+\bigl(h(t)-d_{1}\bigr)e^{{-\alpha d_{2}}} \Pi_{5}^{T} \biggl(X_{1}+\frac{1}{3}X_{3}\biggr) \Pi_{5}+e^{{-\alpha d_{2}}}\operatorname{sym}\bigl\{ \Pi_{5}^{T}N_{1} \Pi_{6}+\Pi_{5}^{T}N_{2} \Pi_{7}\bigr\} \\ &\hphantom{\hat{\Xi}=}+\bigl(d_{2}-h(t)\bigr)e^{{-\alpha d_{2}}} \Pi_{8}^{T} \biggl(Y_{1}+\frac{1}{3}Y_{3}\biggr) \Pi_{8}+e^{{-\alpha d_{2}}}\operatorname{sym}\bigl\{ \Pi_{8}^{T}N_{3} \Pi_{9}+\Pi_{8}^{T}N_{4} \Pi_{10}\bigr\} \\ &\hphantom{\hat{\Xi}=} +\biggl(\frac{d}{m}\biggr)^{2}E_{2m+12}^{T} \Biggl(\sum_{i = 1}^{m}R_{i} \Biggr)E_{2m+12} \\ &\hphantom{\hat{\Xi}=}+\frac {d}{m} \biggl(\frac {k}{m}d+d_{1}-h(t) \biggr)e^{{-\alpha (d_{1}+kd/m)}} \Pi_{11}^{T}\biggl(X_{4}+ \frac{1}{3}X_{6}\biggr) \Pi_{11} \\ &\hphantom{\hat{\Xi}=} +\frac {d}{m} e^{{-\alpha (d_{1}+kd/m)}}\operatorname{sym}\bigl\{ \Pi_{11}^{T}N_{5} \Pi_{12}+\Pi_{11}^{T}N_{6} \Pi_{13}\bigr\} \\ &\hphantom{\hat{\Xi}=}+\frac {d}{m} e^{{-\alpha (d_{1}+kd/m)}}\operatorname{sym}\bigl\{ \Pi_{14}^{T}N_{7} \Pi_{15}+\Pi_{14}^{T}N_{8} \Pi_{16}\bigr\} \\ &\hphantom{\hat{\Xi}=} +\frac {d}{m} \biggl(h(t)-\frac {k-1}{m}d-d_{1} \biggr)e^{{-\alpha (d_{1}+kd/m)}}\Pi_{14}^{T}\biggl(Y_{4}+ \frac{1}{3}Y_{6}\biggr) \Pi_{14} \\ &\hphantom{\hat{\Xi}=}-\sum_{i = 1,i\neq k}^{m} e^{-\alpha (d_{1}+id/m)} \bigl[E_{i}^{T}-E_{i+1}^{T} \bigr]R_{i}[E_{i}-E_{i+1}] -2\sum _{i = 1}^{m+1}\bigl( E_{m+i+1}^{T} \Lambda_{i}E_{m+i+1}\bigr) \\ &\hphantom{\hat{\Xi}=}-2\sum_{i = 1}^{m+1}\bigl( E_{i}^{T}W^{T}K^{-}\Lambda_{i} K^{+}WE_{i}\bigr)+\sum_{i = 1}^{m+1}\operatorname{sym} \bigl\{ E_{i}^{T}W^{T}\bigl(K^{-}+K^{+}\bigr) \Lambda_{i} E_{m+i+1}\bigr\} \\ &\hphantom{\hat{\Xi}=}-2E_{2m+10}^{T}\Lambda_{m+2}E_{2m+10}-2E_{2m+4}^{T}W^{T}K^{-} \Lambda_{m+2} K^{+}WE_{2m+4} \\ &\hphantom{\hat{\Xi}=}+\operatorname{sym}\bigl\{ E_{2m+4}^{T}W^{T}\bigl(K^{-}+K^{+}\bigr) \Lambda_{m+2} E_{2m+10}\bigr\} -2E_{2m+9}^{T} \Gamma E_{2m+9} \\ &\hphantom{\hat{\Xi}=}-2E_{2m+3}^{T}W^{T}K^{-}\Gamma K^{+}WE_{2m+3}+\operatorname{sym} \bigl\{ E_{2m+3}^{T}W^{T}\bigl(K^{-}+K^{+}\bigr)\Gamma E_{2m+9}\bigr\} \\ &\hphantom{\hat{\Xi}=}-\operatorname{sym}\bigl\{ E_{2m+3}^{T}ME_{2m+12}\bigr\} -E_{2m+3}^{T}\operatorname{sym}\{MA\}E_{2m+3} \\ &\hphantom{\hat{\Xi}=}-E_{2m+3}^{T}\operatorname{sym}\{GC\}E_{2m+3} -\operatorname{sym}\bigl\{ E_{2m+3}^{T}GDE_{2m+4}\bigr\} +\operatorname{sym}\bigl\{ E_{2m+3}^{T}ME_{2m+10}\bigr\} \\ &\hphantom{\hat{\Xi}=} +\operatorname{sym}\bigl\{ E_{2m+3}^{T}MB_{1}E_{2m+11}\bigr\} -\operatorname{sym}\bigl\{ E_{2m+3}^{T}GB_{2}E_{2m+11}\bigr\} \\ &\hphantom{\hat{\Xi}=}-E_{2m+12}^{T}\operatorname{sym}\{M\}E_{2m+12} -\operatorname{sym}\bigl\{ E_{2m+12}^{T}MAE_{2m+3}\bigr\} \\ &\hphantom{\hat{\Xi}=}-\operatorname{sym}\bigl\{ E_{2m+12}^{T}GCE_{2m+3}\bigr\} -\operatorname{sym}\bigl\{ E_{2m+12}^{T}GDE_{2m+4}\bigr\} \\ &\hphantom{\hat{\Xi}=}+\operatorname{sym}\bigl\{ E_{2m+12}^{T}ME_{2m+10}\bigr\} +\operatorname{sym}\bigl\{ E_{2m+12}^{T}MB_{1}E_{2m+11}\bigr\} \\ &\hphantom{\hat{\Xi}=}-\operatorname{sym}\bigl\{ E_{2m+12}^{T}GB_{2}E_{2m+11}\bigr\} -\gamma^{2}E_{2m+11}^{T}E_{2m+11}, \\ &\Pi_{1}=\bigl[E_{1}^{T} ,E_{2}^{T} ,E_{3}^{T},\ldots,E_{m}^{T},E_{m+2}^{T},E_{m+3}^{T}, \ldots,E_{2m+1}^{T}\bigr]^{T}, \\ &\Pi_{2}=\bigl[E_{2}^{T} ,E_{3}^{T} 
,E_{4}^{T},\ldots,E_{m+1}^{T},E_{m+3}^{T},E_{m+4}^{T}, \ldots,E_{2m+2}^{T}\bigr]^{T}, \\ &\Pi_{3}=\bigl[E_{2m+3}^{T},E_{2m+9}^{T} \bigr]^{T}, \\ &\Pi_{4}=\bigl[E_{2m+4}^{T},E_{2m+9}^{T} \bigr]^{T},\\ &\Pi_{5}=\bigl[E_{1}^{T},E_{2m+4}^{T},E_{2m+5}^{T} \bigr]^{T},\\ &\Pi_{6}=E_{1}-E_{2m+4}, \\ &\Pi_{7}=2E_{2m+5}-E_{1}-E_{2m+4}, \\ &\Pi_{8}=\bigl[E_{2m+4}^{T},E_{m+1}^{T},E_{2m+6}^{T} \bigr]^{T}, \\ &\Pi_{9}=E_{2m+4}-E_{m+1}, \\ &\Pi_{10}=2E_{2m+6}-E_{m+1}-E_{2m+4}, \\ &\Pi_{11}=\bigl[E_{2m+4}^{T},E_{k+1}^{T},E_{2m+7}^{T} \bigr]^{T}, \\ &\Pi_{12}=E_{2m+4}-E_{k+1}, \\ &\Pi_{13}=2E_{2m+7}-E_{k+1}-E_{2m+4}, \\ &\Pi_{14}=\bigl[E_{k}^{T},E_{2m+4}^{T},E_{2m+8}^{T} \bigr]^{T}, \\ &\Pi_{15}=E_{k}-E_{2m+4}, \\ &\Pi_{16}=2E_{2m+8}-E_{k}-E_{2m+4}, \\ &E_{i}=[0_{n\times(i-1)n},I_{n},0_{n\times(2m+12-i)n}], \quad i=1,2, \ldots,2m+12, \\ &\bar{H}=[0,0,0,0,H,0 ,0,0,0,0,0,0,0,0], \end{aligned}$$

the estimator gain matrix is given by \(K=M^{-1}G\).

Proof

Construct a Lyapunov-Krasovskii functional candidate as follows:

$$ V(t,e_{t}) =\sum_{i = 1}^{5} V_{i}(t,e_{t}), $$
(11)

where

$$\begin{aligned} &V_{1}(t,e_{t})=e^{T}(t)Pe(t), \\ &V_{2}(t,e_{t})= \int_{t -\frac {d}{m}}^{t} {e^{-\alpha (t-s)}\begin{bmatrix} \hat{\eta}_{1}(s) \\ \hat{\eta}_{2}(s) \end{bmatrix}^{T} Q_{1} \begin{bmatrix} \hat{\eta}_{1}(s) \\ \hat{\eta}_{2}(s) \end{bmatrix} \, \mathrm{d}s}, \\ &V_{3}(t,e_{t})= \int_{t -h(t)}^{t} {e^{-\alpha (t-s)}\begin{bmatrix} e(s) \\ g(We(s)) \end{bmatrix}^{T} Q_{2} \begin{bmatrix} e(s) \\ g(We(s)) \end{bmatrix} \, \mathrm{d}s}, \\ &V_{4}(t,e_{t})= \frac {d}{m}\sum _{i = 1}^{m} \int_{-\frac{i}{m}d-d_{1}}^{-\frac {i-1}{m}d-d_{1}} { \int_{t + \theta }^{t} {e^{-\alpha (t-s)}\dot{e}^{T} (s)R_{i} \dot{e}(s)\, \mathrm{d}s\, \mathrm{d}\theta } }, \\ &V_{5}(t,e_{t})= \int_{-d_{2}}^{-d_{1}} { \int_{t + \theta }^{t} {e^{-\alpha (t-s)}\dot{e}^{T} (s)R_{m+1} \dot{e}(s)\, \mathrm{d}s\, \mathrm{d}\theta } }, \end{aligned}$$

Calculating the derivative of \(V(t,e_{t})\) along the trajectory of the system, we obtain

$$\begin{aligned} &\dot{V}_{1}(t,e_{t})=2e^{T}(t)P \dot{e}(t)=\hat{\zeta}_{t}^{T}\operatorname{sym}\bigl\{ E_{2m+3}^{T}PE_{2m+12} \bigr\} \hat{\zeta}_{t}, \end{aligned}$$
(12)
$$\begin{aligned} &\dot{V}_{2}(t,e_{t})= -\alpha V_{2}(t,e_{t})+ \begin{bmatrix} \hat{\eta}_{1}(t) \\ \hat{\eta}_{2}(t) \end{bmatrix}^{T}Q_{1} \begin{bmatrix} \hat{\eta}_{1}(t) \\ \hat{\eta}_{2}(t) \end{bmatrix} \\ &\hphantom{\dot{V}_{2}(t,e_{t})=}- e^{{-\alpha d}/m}\begin{bmatrix} \hat{\eta}_{1}(t-d/m) \\ \hat{\eta}_{2}(t-d/m) \end{bmatrix}^{T} Q_{1} \begin{bmatrix} \hat{\eta}_{1}(t-d/m) \\ \hat{\eta}_{2}(t-d/m) \end{bmatrix} \\ &\hphantom{\dot{V}_{2}(t,e_{t})}= -\alpha V_{2}(t,e_{t})+ \hat{\zeta}_{t}^{T} \Pi_{1}^{T}Q_{1} \Pi_{1}\hat{ \zeta}_{t} - e^{{-\alpha d}/m} \hat{\zeta}_{t}^{T} \Pi_{2}^{T}Q_{1} \Pi_{2} \hat{ \zeta}_{t} , \end{aligned}$$
(13)
$$\begin{aligned} &\dot{V}_{3}(t,e_{t})=-\alpha V_{3}(t,e_{t})+\begin{bmatrix} e(t) \\ g(We(t)) \end{bmatrix}^{T} Q_{2} \begin{bmatrix} e(t) \\ g(We(t)) \end{bmatrix} \\ &\hphantom{\dot{V}_{3}(t,e_{t})=}-(1-\mu)e^{{-\alpha h(t)}}\begin{bmatrix} e(t-h(t)) \\ g(We(t-h(t))) \end{bmatrix}^{T} Q_{2} \begin{bmatrix} e(t-h(t)) \\ g(We(t-h(t))) \end{bmatrix} \\ &\hphantom{\dot{V}_{3}(t,e_{t})}\leq-\alpha V_{3}(t,e_{t})+\hat{\zeta}_{t}^{T} \Pi_{3}^{T}Q_{2} \Pi_{3}\hat{ \zeta}_{t} -(1-\mu)e^{{-\alpha d_{2}}/m} \hat{\zeta}_{t}^{T} \Pi_{4}^{T}Q_{2} \Pi_{4} \hat{ \zeta}_{t} , \end{aligned}$$
(14)
$$\begin{aligned} & \dot{V}_{4}(t,e_{t})\leq-\alpha V_{4}(t,e_{t})+\biggl(\frac {d}{m} \biggr)^{2} \dot{e}^{T}(t) \Biggl(\sum _{i = 1}^{m}R_{i}\Biggr)\dot{e}(t) \\ &\hphantom{\dot{V}_{4}(t,e_{t})\leq}-\biggl(\frac {d}{m} \biggr)\sum_{i = 1}^{m} \int_{t -\frac{i}{m}d-d_{1}}^{t -\frac {i-1}{m}d-d_{1}}e^{-\alpha (d_{1}+id/m)}{\dot{e}^{T}(s)R_{i} \dot{e}(s)\, \mathrm{d}s} , \end{aligned}$$
(15)
$$\begin{aligned} &\dot{V}_{5}(t,e_{t})\leq -\alpha V_{5}(t,e_{t}) + (d_{2}-d_{1})\dot{e}^{T}(t)R_{m+1} \dot{e}(t) \\ &\hphantom{\dot{V}_{5}(t,e_{t})\leq}- \int_{t -h(t)}^{t-d_{1}}e^{-\alpha d_{2}}{\dot{e}^{T}(s)R_{m+1} \dot{e}(s)\, \mathrm{d}s} - \int_{t-d_{2}}^{t -h(t)}e^{-\alpha d_{2}}{\dot{e}^{T}(s)R_{m+1} \dot{e}(s)\, \mathrm{d}s}, \end{aligned}$$
(16)

according to Lemma 2.3, it follows that

$$\begin{aligned} \dot{V}_{5}(t,e_{t})\leq{}& {-}\alpha V_{5}(t,e_{t})+(d_{2}-d_{1})\hat{ \zeta}_{t}^{T}E_{2m+12}^{T}R_{m+1}E_{2m+12} \hat{\zeta}_{t} \\ &+\bigl(h(t)-d_{1}\bigr)e^{{-\alpha d_{2}}} \hat{ \zeta}_{t}^{T}\Pi_{5}^{T} \biggl(X_{1}+\frac{1}{3}X_{3}\biggr) \Pi_{5}\hat{\zeta}_{t} \\ &+e^{{-\alpha d_{2}}}\hat{\zeta}_{t}^{T}\operatorname{sym}\bigl\{ \Pi_{5}^{T}N_{1}\Pi_{6}+ \Pi_{5}^{T}N_{2}\Pi_{7}\bigr\} \hat{ \zeta}_{t} \\ &+\bigl(d_{2}-h(t)\bigr)e^{{-\alpha d_{2}}} \hat{ \zeta}_{t}^{T}\Pi_{8}^{T} \biggl(Y_{1}+\frac{1}{3}Y_{3}\biggr) \Pi_{8}\hat{\zeta}_{t} \\ &+e^{{-\alpha d_{2}}}\hat{\zeta}_{t}^{T}\operatorname{sym}\bigl\{ \Pi_{8}^{T}N_{3}\Pi_{9}+ \Pi_{8}^{T}N_{4}\Pi_{10}\bigr\} \hat{ \zeta}_{t}, \end{aligned}$$
(17)

There exists a positive integer \(k\in\{1,2,\ldots,m\}\) such that \(h(t)\in[\frac {(k-1)}{m}d+d_{1},\frac {k}{m}d+d_{1}]\). Applying Lemma 2.3 on this subinterval yields

$$\begin{aligned} &{-}\biggl(\frac {d}{m} \biggr) \int_{t -\frac{k}{m}d-d_{1}}^{t -\frac {k-1}{m}d-d_{1}}e^{{-\alpha (d_{1}+kd/m)}}{\dot{e}^{T}(s)R_{k} \dot{e}(s)\, \mathrm{d}s} \\ &\quad\leq\frac {d}{m} \biggl(\frac {k}{m}d+d_{1}-h(t) \biggr)e^{{-\alpha (d_{1}+kd/m)}} \hat{\zeta}_{t}^{T} \Pi_{11}^{T}\biggl(X_{4}+\frac{1}{3}X_{6} \biggr) \Pi_{11}\hat{\zeta}_{t} \\ &\qquad+\frac {d}{m} e^{{-\alpha (d_{1}+kd/m)}}\hat{\zeta}_{t}^{T}\operatorname{sym} \bigl\{ \Pi_{11}^{T}N_{5}\Pi_{12}+ \Pi_{11}^{T}N_{6}\Pi_{13}\bigr\} \hat{ \zeta}_{t} \\ &\qquad+\frac {d}{m} \biggl(h(t)-\frac {k-1}{m}d-d_{1} \biggr)e^{{-\alpha (d_{1}+kd/m)}} \hat{\zeta}_{t}^{T} \Pi_{14}^{T}\biggl(Y_{4}+\frac{1}{3}Y_{6} \biggr) \Pi_{14}{}\hat{\zeta}_{t} \\ &\qquad+\frac {d}{m} e^{{-\alpha (d_{1}+kd/m)}}\hat{\zeta}_{t}^{T}\operatorname{sym} \bigl\{ \Pi_{14}^{T}N_{7}\Pi_{15}+ \Pi_{14}^{T}N_{8}\Pi_{16}\bigr\} \hat{ \zeta}_{t}. \end{aligned}$$
(18)

For \(i\neq k\), we also have the following inequality by Lemma 2.1:

$$\begin{aligned} & {-}\biggl(\frac {d}{m} \biggr)\sum_{i = 1}^{m} \int_{t -\frac{i}{m}d-d_{1}}^{t -\frac {i-1}{m}d-d_{1}}e^{-\alpha (d_{1}+id/m)}{\dot{e}^{T}(s)R_{i} \dot{e}(s)\, \mathrm{d}s} \\ &\quad\leq-\sum_{i = 1}^{m} e^{-\alpha (d_{1}+id/m)}\biggl[{e}^{T}\biggl(t-d_{1} - \frac {i-1}{m}d\biggr)-{e}^{T}\biggl(t-d_{1} - \frac{i}{m}d\biggr)\biggr]R_{i} \\ & \qquad\times\biggl[{e}\biggl(t-d_{1} -\frac {i-1}{m}d\biggr)-{e} \biggl(t-d_{1} -\frac{i}{m}d\biggr)\biggr] \\ &\quad=-\sum_{i = 1}^{m} e^{-\alpha (d_{1}+id/m)} \hat{\zeta}_{t}^{T}\bigl[E_{i}^{T}-E_{i+1}^{T} \bigr]R_{i}[E_{i}-E_{i+1}]\hat{ \zeta}_{t}. \end{aligned}$$
(19)

According to Assumption 2.1, we obtain

$$\begin{aligned} &2\bigl(g\bigl(We(t)\bigr)-K^{-}We(t)\bigr)^{T}\Gamma\bigl(g\bigl(We(t) \bigr)-K^{+}We(t)\bigr)\leqslant 0, \end{aligned}$$
(20)
$$\begin{aligned} &2\sum^{m+1}_{i=1} \biggl(g\biggl(We\biggl(t-d_{1}-\frac{i-1}{m}d\biggr)\biggr)-K^{-}We \biggl(t-d_{1}-\frac{i-1}{m}d\biggr)\biggr)^{T}\Lambda_{i} \\ &\quad\times\biggl(g\biggl(We\biggl(t-d_{1}-\frac{i-1}{m}d\biggr) \biggr)-K^{+}We\biggl(t-d_{1}-\frac{i-1}{m}d\biggr)\biggr)\leqslant 0, \end{aligned}$$
(21)
$$\begin{aligned} &2\bigl(g\bigl(We\bigl(t-h(t)\bigr)\bigr)-K^{-}We\bigl(t-h(t)\bigr) \bigr)^{T}\Lambda_{m+2} \\ &\quad\times\bigl(g\bigl(We\bigl(t-h(t)\bigr) \bigr)-K^{+}We\bigl(t-h(t)\bigr)\bigr)\leqslant 0, \end{aligned}$$
(22)

where Γ, \(\Lambda_{i}\) are positive diagonal matrices.

According to the system equation, the following equality holds:

$$\begin{aligned} &2\bigl(e^{T}(t)+\dot{e}^{T}(t) \bigr)M \bigl\{ -(A+KC){e}(t)-KDe\bigl(t-h(t)\bigr) \\ &\quad+g\bigl(W{e}\bigl(t-h(t)\bigr)\bigr)+(B_{1}-KB_{2})w(t)-\dot{e}(t)\bigr\} =0. \end{aligned}$$
(23)

Combining the equalities and inequalities (12)-(23), we obtain

$$ \bar{z}^{T}(t)\bar{z}(t)-\gamma^{2}w^{T}(t)w(t)+ \dot{V}(t,x_{t})+\alpha V(t,x_{t})\leq\hat{ \zeta}_{t}^{T}(\hat{\Xi})\hat{\zeta}_{t}+e^{T}(t)H^{T}He(t), $$
(24)

where \(\hat{\zeta}_{t}\) is defined as

$$\begin{aligned} \hat{\zeta}_{t}^{T} = {}&\biggl[\hat{\eta}_{1}^{T}(t),{e}^{T}(t-d_{2}),\hat{\eta}_{2}^{T}(t),g\bigl(W{e}(t-d_{2}) \bigr)^{T},{e}^{T}(t),{e}^{T}\bigl(t-h(t)\bigr), \\ &\frac{1}{h(t)-d_{1}} \int_{t -h(t)}^{t-d_{1}}{{e}^{T}(s)\, \mathrm{d}s}, \frac{1}{d_{2}-h(t)} \int_{t -d_{2}}^{t-h(t)}{{e}^{T}(s)\, \mathrm{d}s}, \frac{1}{\frac{k}{m}d+d_{1}-h(t)} \int_{t -\frac{k}{m}d-d_{1}}^{t-h(t)}{{e}^{T}(s)\, \mathrm{d}s}, \\ &\frac{1}{h(t)-\frac{k-1}{m}d-d_{1}} \int_{t -h(t)}^{t-\frac{k-1}{m}d-d_{1}}{{e}^{T}(s)\, \mathrm{d}s},g \bigl(W{e}(t)\bigr)^{T},g\bigl(W{e}\bigl(t-h(t)\bigr) \bigr)^{T},w^{T}(t),\dot{e}^{T}(t) \biggr]. \end{aligned}$$

Noting that \(e^{T}(t)H^{T}He(t)=\hat{\zeta}_{t}^{T}\bar{H}^{T}\bar{H}\hat{\zeta}_{t}\), where \(\bar{H}=[0,0,0,0,H,0 ,0,0,0,0,0,0,0,0]\), inequality (24) can be rewritten as

$$ \bar{z}^{T}(t)\bar{z}(t)-\gamma^{2}w^{T}(t)w(t)+ \dot{V}(t,x_{t})+\alpha V(t,x_{t})\leq\hat{ \zeta}_{t}^{T}\bigl(\hat{\Xi}+\bar{H}^{T}\bar{H}\bigr)\hat{\zeta}_{t}. $$
(25)

Based on Lemma 2.2 (Schur complement), the LMI (6) is equivalent to \(\hat{\Xi}+\bar{H}^{T}\bar{H}<0\).

If the LMI (6) holds, then

$$ \bar{z}^{T}(t)\bar{z}(t)-\gamma^{2}w^{T}(t)w(t)+ \dot{V}(t,x_{t})+\alpha V(t,x_{t})< 0, $$
(26)

Since \(\alpha V(t,x_{t})\geq0\), we obtain

$$\begin{aligned} \int_{0}^{\infty}{\bigl[ \bar{z}^{T}(t) \bar{z}(t)-\gamma^{2}w^{T}(t)w(t)+\dot{V}\bigl(t,e(t)\bigr) \bigr]\,\mathrm{d}t}< 0, \end{aligned}$$

since \(V(t,e(t))>0\), under the zero initial condition, we have

$$\begin{aligned} \bigl\| \bar{z}(t)\bigr\| ^{2} \leq \gamma^{2}\bigl\| w(t)\bigr\| ^{2}. \end{aligned}$$

Therefore, the error system (4) achieves the \(H_{\infty}\) performance γ according to Definition 2.1. In the sequel, we show the global exponential stability of the estimation error system with \(w(t)=0\). When \(w(t)=0\), the error system (4) becomes

$$ \left\{ \textstyle\begin{array}{@{}l} \dot{{e}}(t)=-(A+KC){e}(t)-KDe(t-h(t))+g(W{e}(t-h(t))), \\ \bar{z}(t)=H{e}(t), \end{array}\displaystyle \right. $$
(27)

equation (23) becomes

$$ 2\bigl(e^{T}(t)+\dot{e}^{T}(t)\bigr) M \bigl\{ -(A+KC){e}(t)-KDe\bigl(t-h(t)\bigr)+g\bigl(W{e}\bigl(t-h(t)\bigr)\bigr)-\dot{e}(t)\bigr\} =0, $$
(28)

considering the same Lyapunov-Krasovskii functional candidate and calculating its time derivative along the solution of (27), we can derive

$$ \dot{V}(t,e_{t})+\alpha V(t,e_{t})\leq{\zeta}_{t}^{T}{ \Xi} {\zeta}_{t}, $$
(29)

where Ξ is obtained by deleting the terms in Ξ̂ associated with \(w(t)\),

$$\begin{aligned} {\zeta}_{t}^{T} ={}& \biggl[\hat{ \eta}_{1}^{T}(t),{e}^{T}(t-d_{2}),\hat{ \eta}_{2}^{T}(t),g\bigl(W{e}(t-d_{2}) \bigr)^{T},{e}^{T}(t),{e}^{T}\bigl(t-h(t)\bigr), \\ &\frac{1}{h(t)-d_{1}} \int_{t -h(t)}^{t-d_{1}}{{e}^{T}(s)\, \mathrm{d}s}, \frac{1}{d_{2}-h(t)} \int_{t -d_{2}}^{t-h(t)}{{e}^{T}(s)\, \mathrm{d}s}, \frac{1}{\frac{k}{m}d+d_{1}-h(t)} \int_{t -\frac{k}{m}d-d_{1}}^{t-h(t)}{{e}^{T}(s)\, \mathrm{d}s}, \\ &\frac{1}{h(t)-\frac{k-1}{m}d-d_{1}} \int_{t -h(t)}^{t-\frac{k-1}{m}d-d_{1}}{{e}^{T}(s)\, \mathrm{d}s},g \bigl(W{e}(t)\bigr)^{T},g\bigl(W{e}\bigl(t-h(t)\bigr) \bigr)^{T},\dot{e}^{T}(t) \biggr]. \end{aligned}$$

Letting \(G=MK\), it is obvious that if \(\hat{\Xi}<0\) then \(\Xi<0\), so we get

$$ \dot{V}(t,e_{t})+\alpha V(t,e_{t})\leq0. $$
(30)

Inequality (30) implies \(\frac{\mathrm{d}}{\mathrm{d}t}(e^{\alpha t}V(t,e_{t}))\leq0\); integrating over \([0,t]\), we obtain

$$ V(t,e_{t})\leq e^{-\alpha t}V(0,e_{0}). $$
(31)

From (11), we have

$$\begin{aligned} &V(t,e_{t})\geq \lambda_{\mathrm{min}}(P)\bigl\| e(t) \bigr\| ^{2}, \end{aligned}$$
(32)
$$\begin{aligned} &V(0,e_{0})\leq b \sup_{-\tau< s< 0}\bigl\{ \bigl\| \phi(s)\bigr\| ^{2},\bigl\| \dot{\phi}(s)\bigr\| ^{2}\bigr\} , \end{aligned}$$
(33)

where

$$\begin{aligned} b={}&\lambda_{\mathrm{max}}(P)+\bigl(1+\rho^{2}\bigr)d \lambda_{\mathrm{max}}(Q_{1})+\bigl(1+\rho^{2} \bigr)d_{2}\lambda_{\mathrm{max}}(Q_{2}) \\ &+\frac{d}{m}\sum_{i = 1}^{m} \frac{(\frac{i}{m}d+d_{1})^{2}-(\frac{i-1}{m}d+d_{1})^{2}}{2}\lambda_{\mathrm{max}}(R_{i}) +\frac{(d_{2})^{2}-(d_{1})^{2}}{2}\lambda_{\mathrm{max}}(R_{m+1}), \\ \rho&= \max_{1\leq i\leq n}\bigl(\bigl|k_{i}^{-}\bigr|,\bigl|k_{i}^{+}\bigr| \bigr). \end{aligned}$$

Combining (31), (32), and (33) yields

$$ \bigl\| e(t)\bigr\| ^{2} \leq \frac{1}{\lambda_{\mathrm{min}}(P)}V(t,e_{t})\leq \frac{b}{\lambda_{\mathrm{min}}(P)}e^{-\alpha t} \sup_{-\tau< s< 0}\bigl\{ \bigl\| \phi(s) \bigr\| ^{2},\bigl\| \dot{\phi}(s)\bigr\| ^{2}\bigr\} , $$
(34)

hence the error system (4) is globally exponentially stable. In summary, if \(\hat{\Xi}<0\), then the state estimator for the static neural network achieves the prescribed \(H_{\infty}\) performance and guarantees the global exponential stability of the error system. This completes the proof. □

Remark 3.1

We use two ways to reduce the conservativeness: a good choice of the Lyapunov-Krasovskii functionals, and the use of a less conservative integral inequality. To make the Lyapunov-Krasovskii functionals contain more detailed information on the time delay, the delay partitioning method is employed. We also use the free-weighting matrix method to obtain a tighter upper bound on the derivative of the LKFs; however, many free-weighting matrices are introduced as the number of partitions increases, which leads to complexity and computational burden. Hence, the partitioning number should be chosen properly.

Remark 3.2

Condition (6) in Theorem 3.1 depends on the time-varying delay. Since \(\hat{\Xi}\) is affine in \(h(t)\), it is easy to show that the condition is satisfied for all \(0\leqslant d_{1}\leqslant h(t) \leqslant d_{2}\) if \(\hat{\Xi}|_{ h(t)=d_{1}}<0\) and \(\hat{\Xi}|_{ h(t)=d_{2}}<0\).

Remark 3.3

In some previous literature [33, 35, 36], \({k}_{i}^{-}\leq\frac{f_{i}(x)}{x}\leq {k}_{i}^{+}\), which is a special case of \({k}_{i}^{-}\leq\frac{f_{i}(x)-f_{i}(y)}{x-y}\leq {k}_{i}^{+}\) was used to reduce the conservatism. In our proof, not only \({k}_{i}^{-}\leq\frac{g_{i}(W_{i}e(t))}{W_{i}e(t)}\leq {k}_{i}^{+}\), but also \({k}_{i}^{-}\leq\frac{g_{i}(W_{i}e(t-h(t)))}{W_{i}e(t-h(t))}\leq {k}_{i}^{+}\), \({k}_{i}^{-}\leq\frac{g_{i}(W_{i}e(t-d_{1}-\frac{j-1}{m}d))}{W_{i}e(t-d_{1}-\frac{j-1}{m}d)}\leq {k}_{i}^{+}\), \(j \in \{1, 2, \ldots, m+1\}\) have been used, which play an important role in reducing the conservatism.
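In practice, Theorem 3.1 is applied by treating (6)-(10) as LMI constraints in the unknowns P, \(R_{i}\), \(Q_{1}\), \(Q_{2}\), the X/Y blocks, \(N_{j}\), M, and G, minimizing \(\gamma^{2}\), and recovering \(K=M^{-1}G\); in Section 4 this is done with the Matlab LMI toolbox. Assembling the full \(\hat{\Xi}\) is lengthy, so the sketch below only illustrates the workflow (declaring variables, a change of variable for the gain, minimizing \(\gamma^{2}\), recovering the gain) on a drastically simplified, delay-free \(H_{\infty}\) observer LMI that ignores the delayed and nonlinear terms of (4). The use of Python/cvxpy, this simplification, and the tolerance eps are assumptions of the sketch, not part of the paper.

```python
import cvxpy as cp
import numpy as np

# data of Example 2 (delay-free simplification: W, D and the activation are ignored here)
A = np.diag([1.56, 2.42, 1.88])
B1 = np.array([[0.2, 0.0, 0.0], [0.2, 0.0, 0.0], [0.2, 0.0, 0.0]])
B2 = np.array([[0.4, 0.0, 0.0]])
C = np.array([[1.0, 0.0, 0.0]])
H = np.array([[1.0, 0.0, 0.5], [1.0, 0.0, 1.0], [0.0, -1.0, 1.0]])
n, p, r, q = 3, 1, 3, 3                        # state, output, z and w dimensions

P = cp.Variable((n, n), symmetric=True)
G = cp.Variable((n, p))                        # G = P K, analogous to G = M K in Theorem 3.1
gam2 = cp.Variable(nonneg=True)                # gamma^2

Acl = -P @ A - G @ C                           # P(-(A + K C))
Bcl = P @ B1 - G @ B2                          # P(B1 - K B2)
lmi = cp.bmat([
    [Acl + Acl.T,      Bcl,                H.T],
    [Bcl.T,            -gam2 * np.eye(q),  np.zeros((q, r))],
    [H,                np.zeros((r, q)),   -np.eye(r)],
])
eps = 1e-6
constraints = [P >> eps * np.eye(n), lmi << -eps * np.eye(n + q + r)]
prob = cp.Problem(cp.Minimize(gam2), constraints)
prob.solve()
K = np.linalg.inv(P.value) @ G.value           # recover the gain, as in K = M^{-1} G
print("gamma ~", float(np.sqrt(gam2.value)), "\nK =\n", K)
```

The full conditions of Theorem 3.1 follow the same pattern once the blocks of \(\hat{\Xi}\) are assembled (e.g. with cp.bmat), with (6) imposed at the two vertices \(h(t)=d_{1}\) and \(h(t)=d_{2}\) as noted in Remark 3.2.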

4 Numerical examples

In this section, numerical examples are provided to illustrate the effectiveness of the developed method for the state estimation of static neural networks.

Example 1

Consider the neural networks (1) with the following parameters:

$$\begin{aligned} &A=\left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c} 0.96 & 0 & 0\\ 0 & 0.8 & 0 \\ 0 & 0 & 1.48 \end{array}\displaystyle \right ) ,\qquad W=\left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c} 0.5 & 0.3 & -0.36\\ 0.1 & 0.12 & 0.5 \\ -0.42 & 0.78 & 0.9 \end{array}\displaystyle \right ) ,\\& H=\left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c} 1 & 1 & 0\\ 1 & 0 & -1 \\ 0 & 1 & 1 \end{array}\displaystyle \right ) , \qquad B_{1}=\left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c} 0.1 & 0 & 0\\ 0.2 & 0 & 0 \\ 0.1 & 0 & 0 \end{array}\displaystyle \right ) ,\qquad J= \left (\textstyle\begin{array}{c} 0 \\ 0 \\ 0 \end{array}\displaystyle \right ) ,\\ & C=(\textstyle\begin{array}{c@{\quad}c@{\quad}c} 1 & 0 & -2 \end{array}\displaystyle ) ,\qquad D= (\textstyle\begin{array}{c@{\quad}c@{\quad}c} 0.5 & 0 & -1 \end{array}\displaystyle ) ,\qquad B_{2}=(\textstyle\begin{array}{c@{\quad}c@{\quad}c} -0.1 & 0 & 0 \end{array}\displaystyle ) . \end{aligned}$$

To compare with the existing results, we let \(\alpha=0\), \(d_{1}=0\). The optimal \(H_{\infty}\) performance index γ is computed for different values of the delay bound \(d_{2}\) and μ; the results are summarized in Table 1.

Table 1 The \(\pmb{H_{\infty}}\) performance index γ with different \(\pmb{(d_{2},\mu)}\)

From Table 1, it is clear that our results achieve better performance. In addition, the optimal \(H_{\infty}\) performance index γ becomes smaller as the partitioning number increases, which shows that the delay partitioning method can reduce the conservatism effectively.

Example 2

Consider the neural networks (1) with the following parameters:

$$\begin{aligned} &A=\left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c} 1.56 & 0 & 0\\ 0 & 2.42 & 0 \\ 0 & 0 & 1.88 \end{array}\displaystyle \right ) , \qquad W=\left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c} -0.3 & 0.8 & -1.36\\ 1.1 & 0.4 & -0.5 \\ 0.42 & 0 & -0.95 \end{array}\displaystyle \right ) ,\\& H=\left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c} 1 & 0 & 0.5\\ 1 & 0 & 1 \\ 0 & -1 & 1 \end{array}\displaystyle \right ) , \qquad B_{1}=\left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c} 0.2 & 0 & 0\\ 0.2 & 0 & 0 \\ 0.2 & 0 & 0 \end{array}\displaystyle \right ) , \qquad J= \left (\textstyle\begin{array}{c} 0 \\ 0 \\ 0 \end{array}\displaystyle \right ) ,\\& C=(\textstyle\begin{array}{c@{\quad}c@{\quad}c} 1 & 0 & 0 \end{array}\displaystyle ) ,\qquad D= (\textstyle\begin{array}{c@{\quad}c@{\quad}c} 2 & 0 & 0 \end{array}\displaystyle ) ,\qquad B_{2}=(\textstyle\begin{array}{c@{\quad}c@{\quad}c} 0.4 & 0 & 0 \end{array}\displaystyle ) . \end{aligned}$$

The activation function is \(f(x)=\tanh(x)\), so \(K^{-}=0\), \(K^{+}=I\). We set \(\gamma=1\), \(\alpha=0\), \(h(t)=0.5+0.5\sin(0.8t)\), so the bounds of the time delay are \(d_{1}=0\), \(d_{2}=1\), and \(\mu=0.4\). The noise disturbance is assumed to be

$$w(t)=\left (\textstyle\begin{array}{c} \frac{1}{0.8+1.2t} \\ 0 \\ 0 \end{array}\displaystyle \right ). $$

Solving the LMIs in Theorem 3.1 with the Matlab LMI toolbox, we obtain the gain matrix of the estimator:

$$K=\left (\textstyle\begin{array}{c} -0.0890 \\ 0.2664\\ 0.0914 \end{array}\displaystyle \right ) . $$

Figure 1 presents the state variables of the neural network (1) and their estimates, starting from the initial values \([0.3,-0.5,0.2]^{T}\). Figure 2 shows the state response of the error system (4) under the initial condition \(e(0)=[0.3,-0.5,0.2]^{T}\). It is clear that \(e(t)\) converges rapidly to zero. The simulation results reveal the effectiveness of the proposed approach to the design of the state estimator for static neural networks; a sketch of such a simulation is given below.

Figure 1. The state variables and their estimation.

Figure 2. The response of the error \(e(t)\).
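The following is a minimal sketch of the simulation behind Figures 1 and 2: it integrates system (1) and the estimator (3) with the parameters of this example and the gain K above, using a fixed-step Euler scheme with a delay buffer. The step size, the horizon, and printing of the final error are assumptions of the sketch; plotting the stored trajectories (e.g. with matplotlib) reproduces the figures qualitatively.

```python
import numpy as np

# Example 2 data and the gain K obtained above
A  = np.diag([1.56, 2.42, 1.88])
W  = np.array([[-0.3, 0.8, -1.36], [1.1, 0.4, -0.5], [0.42, 0.0, -0.95]])
B1 = np.array([[0.2, 0.0, 0.0], [0.2, 0.0, 0.0], [0.2, 0.0, 0.0]])
B2 = np.array([[0.4, 0.0, 0.0]])
C  = np.array([[1.0, 0.0, 0.0]])
D  = np.array([[2.0, 0.0, 0.0]])
K  = np.array([[-0.0890], [0.2664], [0.0914]])
f  = np.tanh
h  = lambda t: 0.5 + 0.5 * np.sin(0.8 * t)
w  = lambda t: np.array([1.0 / (0.8 + 1.2 * t), 0.0, 0.0])

dt, T, tau = 1e-3, 20.0, 1.0
nbuf = int(round(tau / dt))
x_hist  = [np.array([0.3, -0.5, 0.2])] * (nbuf + 1)   # constant initial history x(t) = phi(t)
xh_hist = [np.zeros(3)] * (nbuf + 1)                  # zero initial history for the estimator
err = []
for i in range(int(round(T / dt))):
    t = i * dt
    lag = int(round(h(t) / dt))
    x, xd   = x_hist[-1],  x_hist[-1 - lag]           # x(t), x(t - h(t))
    xh, xhd = xh_hist[-1], xh_hist[-1 - lag]
    y  = C @ x  + D @ xd  + B2 @ w(t)                 # measured output y(t)
    yh = C @ xh + D @ xhd                             # estimated output
    dx  = -A @ x  + f(W @ xd)  + B1 @ w(t)            # system (1) with J = 0
    dxh = -A @ xh + f(W @ xhd) + K @ (y - yh)         # estimator (3)
    x_hist.append(x + dt * dx);   x_hist.pop(0)
    xh_hist.append(xh + dt * dxh); xh_hist.pop(0)
    err.append(x - xh)
err = np.array(err)
print(np.abs(err[-1]))                                # e(T) should be close to zero
```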

5 Conclusions

In this paper, we investigated the \(H_{\infty}\) state estimation problem for a class of delayed static neural networks. By constructing augmented Lyapunov-Krasovskii functionals, new delay-dependent conditions were established. The estimator gain matrix was obtained by solving a set of LMIs, which guarantee the global exponential stability of the error system with a prescribed \(H_{\infty}\) performance level. Finally, numerical examples were provided to illustrate the effectiveness of the proposed method in comparison with some existing results.

References

  1. Kwon, OM, Lee, SM, Park, JH: Improved results on stability analysis of neural networks with time-varying delays: novel delay-dependent criteria. Mod. Phys. Lett. B 24(8), 775-789 (2010)

  2. Du, B, Lam, J: Stability analysis of static recurrent neural networks using delay-partitioning and projection. Neural Netw. 22(4), 343-347 (2009)

  3. Lu, J, Wang, Z, Cao, J, Ho, DW, Kurths, J: Pinning impulsive stabilization of nonlinear dynamical networks with time-varying delay. Int. J. Bifurc. Chaos 22(7), 1250176 (2012)

  4. Zhang, B, Lam, J, Xu, S: Relaxed results on reachable set estimation of time-delay systems with bounded peak inputs. Int. J. Robust Nonlinear Control 26(9), 1994-2007 (2015)

  5. Lu, R, Xu, Y, Xue, A: \(H_{\infty}\) filtering for singular systems with communication delays. Signal Process. 90(4), 1240-1248 (2010)

  6. Wu, Z-G, Shi, P, Su, H, Lu, R: Dissipativity-based sampled-data fuzzy control design and its application to truck-trailer system. IEEE Trans. Fuzzy Syst. 23(5), 1669-1679 (2015)

  7. Ali, MS, Saravanan, S, Cao, J: Finite-time boundedness, \(L_{2}\)-gain analysis and control of Markovian jump switched neural networks with additive time-varying delays. Nonlinear Anal. Hybrid Syst. 23, 27-43 (2017)

  8. Ali, MS, Yogambigai, J: Synchronization of complex dynamical networks with hybrid coupling delays on time scales by handling multitude Kronecker product terms. Appl. Math. Comput. 291, 244-258 (2016)

  9. Cheng, J, Park, JH, Zhang, L, Zhu, Y: An asynchronous operation approach to event-triggered control for fuzzy Markovian jump systems with general switching policies. IEEE Trans. Fuzzy Syst. (2016, to appear)

  10. Arunkumar, A, Sakthivel, R, Mathiyalagan, K, Park, JH: Robust stochastic stability of discrete-time fuzzy Markovian jump neural networks. ISA Trans. 53(4), 1006-1014 (2014)

  11. Shi, K, Zhu, H, Zhong, S, Zeng, Y, Zhang, Y: Less conservative stability criteria for neural networks with discrete and distributed delays using a delay-partitioning approach. Neurocomputing 140, 273-282 (2014)

  12. Shi, K, Zhu, H, Zhong, S, Zeng, Y, Zhang, Y: New stability analysis for neutral type neural networks with discrete and distributed delays using a multiple integral approach. J. Franklin Inst. 352(1), 155-176 (2015)

  13. Kwon, OM, Park, MJ, Park, JH, Lee, SM, Cha, EJ: Improved approaches to stability criteria for neural networks with time-varying delays. J. Franklin Inst. 350(9), 2710-2735 (2013)

  14. Ali, MS, Arik, S, Rani, ME: Passivity analysis of stochastic neural networks with leakage delay and Markovian jumping parameters. Neurocomputing 218, 139-145 (2016)

  15. Xu, Z-B, Qiao, H, Peng, J, Zhang, B: A comparative study of two modeling approaches in neural networks. Neural Netw. 17(1), 73-85 (2004)

  16. Qiao, H, Peng, J, Xu, Z-B, Zhang, B: A reference model approach to stability analysis of neural networks. IEEE Trans. Syst. Man Cybern., Part B, Cybern. 33(6), 925-936 (2003)

  17. Li, P, Cao, J: Stability in static delayed neural networks: a nonlinear measure approach. Neurocomputing 69(13), 1776-1781 (2006)

  18. Zheng, C-D, Zhang, H, Wang, Z: Delay-dependent globally exponential stability criteria for static neural networks: an LMI approach. IEEE Trans. Circuits Syst. II 56(7), 605-609 (2009)

  19. Shao, H: Delay-dependent stability for recurrent neural networks with time-varying delays. IEEE Trans. Neural Netw. 19(9), 1647-1651 (2008)

  20. Li, X, Gao, H, Yu, X: A unified approach to the stability of generalized static neural networks with linear fractional uncertainties and delays. IEEE Trans. Syst. Man Cybern., Part B, Cybern. 41(5), 1275-1286 (2011)

  21. Wu, Z-G, Lam, J, Su, H, Chu, J: Stability and dissipativity analysis of static neural networks with time delay. IEEE Trans. Neural Netw. Learn. Syst. 23(2), 199-210 (2012)

  22. Shi, K, Liu, X, Tang, Y, Zhu, H, Zhong, S: Some novel approaches on state estimation of delayed neural networks. Inf. Sci. 372, 313-331 (2016)

  23. Li, T, Fei, S, Zhu, Q: Design of exponential state estimator for neural networks with distributed delays. Nonlinear Anal., Real World Appl. 10(2), 1229-1242 (2009)

  24. Mahmoud, MS: New exponentially convergent state estimation method for delayed neural networks. Neurocomputing 72(16), 3935-3942 (2009)

  25. Zheng, C-D, Ma, M, Wang, Z: Less conservative results of state estimation for delayed neural networks with fewer LMI variables. Neurocomputing 74(6), 974-982 (2011)

  26. Zhang, D, Yu, L: Exponential state estimation for Markovian jumping neural networks with time-varying discrete and distributed delays. Neural Netw. 35, 103-111 (2012)

  27. Chen, Y, Zheng, WX: Stochastic state estimation for neural networks with distributed delays and Markovian jump. Neural Netw. 25, 14-20 (2012)

  28. Rakkiyappan, R, Sakthivel, N, Park, JH, Kwon, OM: Sampled-data state estimation for Markovian jumping fuzzy cellular neural networks with mode-dependent probabilistic time-varying delays. Appl. Math. Comput. 221, 741-769 (2013)

  29. Huang, H, Huang, T, Chen, X: A mode-dependent approach to state estimation of recurrent neural networks with Markovian jumping parameters and mixed delays. Neural Netw. 46, 50-61 (2013)

  30. Lakshmanan, S, Mathiyalagan, K, Park, JH, Sakthivel, R, Rihan, FA: Delay-dependent \(H_{\infty}\) state estimation of neural networks with mixed time-varying delays. Neurocomputing 129, 392-400 (2014)

  31. Huang, H, Feng, G, Cao, J: State estimation for static neural networks with time-varying delay. Neural Netw. 23(10), 1202-1207 (2010)

  32. Huang, H, Feng, G: Delay-dependent \(H_{\infty}\) and generalized \(H_{2}\) filtering for delayed neural networks. IEEE Trans. Circuits Syst. I 56(4), 846-857 (2009)

  33. Huang, H, Feng, G, Cao, J: Guaranteed performance state estimation of static neural networks with time-varying delay. Neurocomputing 74(4), 606-616 (2011)

  34. Duan, Q, Su, H, Wu, Z-G: \(H_{\infty}\) state estimation of static neural networks with time-varying delay. Neurocomputing 97, 16-21 (2012)

  35. Huang, H, Huang, T, Chen, X: Guaranteed \(H_{\infty}\) performance state estimation of delayed static neural networks. IEEE Trans. Circuits Syst. II 60(6), 371-375 (2013)

  36. Liu, Y, Lee, SM, Kwon, OM, Park, JH: A study on \(H_{\infty}\) state estimation of static neural networks with time-varying delays. Appl. Math. Comput. 226, 589-597 (2014)

  37. Ali, MS, Saravanakumar, R, Arik, S: Novel \(H_{\infty}\) state estimation of static neural networks with interval time-varying delays via augmented Lyapunov-Krasovskii functional. Neurocomputing 171, 949-954 (2016)

  38. Gu, K, Chen, J, Kharitonov, VL: Stability of Time-Delay Systems. Springer, Berlin (2003)

  39. Zeng, H-B, He, Y, Wu, M, She, J: Free-matrix-based integral inequality for stability analysis of systems with time-varying delay. IEEE Trans. Autom. Control 60(10), 2768-2772 (2015)

Acknowledgements

This work was supported by the Sichuan Science and Technology Plan (2017GZ0166).

Author information

Correspondence to Bin Wen.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors have made equal contributions to the writing of this paper. All authors have read and approved the final version of the manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Wen, B., Li, H. & Zhong, S. New results on \(H_{\infty}\) state estimation of static neural networks with time-varying delay. Adv Differ Equ 2017, 17 (2017). https://doi.org/10.1186/s13662-016-1056-3
