\(H_{\infty}\) State estimation for discrete-time neural networks with time-varying and distributed delays

Abstract

The problem of \(H_{\infty}\) state estimation for discrete-time neural networks with time-varying and distributed delays is investigated in this paper. By constructing a new Lyapunov-Krasovskii functional and utilizing a reciprocally convex method, several sufficient conditions are derived in terms of linear matrix inequalities (LMIs), which can be checked efficiently by standard numerical packages. The results obtained in this paper are less conservative than existing ones. Finally, three numerical examples are given to show the effectiveness of the proposed method.

1 Introduction

In the past few decades, neural networks have been applied successfully in different fields, such as signal processing, pattern recognition, combinatorial optimization, and so on [1-3]. It is well known that time delay is inevitable in practical neural networks because of the finite switching speed of the amplifiers, and it may induce undesirable dynamic behaviors such as oscillation or instability. Therefore, the stability analysis of delayed neural networks has received considerable attention, and a large number of important results have been obtained [4-10]. It should be noted that most of these results concern continuous-time neural networks [11-15]. However, discrete-time systems play a considerable role in today's digital world. In particular, when implementing a delayed continuous-time neural network for computer simulation, it becomes necessary to formulate its discrete-time analogue. In recent years, many significant results on discrete-time networks have been reported in the literature [16-22]. For example, Kwon et al. [17] presented stability criteria for discrete-time systems with time-varying delays. Recently, Li and Li [22] addressed the exponential stability of stochastic discrete-time recurrent neural networks with mixed delays.

In addition, state estimation is a critical issue in the dynamic analysis of complex systems, including genetic regulatory networks, recurrent neural networks, and complex networks. In large-scale neural networks, often only partial information about the neuron states is available in the network outputs, so the neuron states must be estimated; the estimated states can then be used for purposes such as control engineering and system modeling [9]. Thus, many effective approaches have been proposed in this research area [23-35]. On the other hand, another type of time delay, named distributed delay, has attracted much research attention [22, 25-28]. The authors in [28] discussed the state estimation problem for neural networks with Markovian jumping parameters and mixed time delays. In [31], the state estimation problem was studied for fuzzy cellular neural networks with time delay in the leakage term and with discrete and unbounded distributed delays. More recently, the authors in [34] investigated \(H_{\infty}\) state estimation for static delayed neural networks by using the reciprocally convex approach, a zero equality, and linear matrix inequalities. Further improved results were obtained in [32] by using an augmented Lyapunov approach and a linear matrix inequality technique. In [33], the state estimation problem was considered for a class of discrete nonlinear systems with randomly occurring uncertainties and distributed sensor delays. The \(H_{\infty}\) state estimation problem for discrete-time delayed neural networks was addressed in [35]. However, to the best of our knowledge, the problem of \(H_{\infty}\) state estimation for discrete-time neural networks with time-varying and distributed delays has not yet been studied.

Motivated by the above discussion, we study the problem of \(H_{\infty}\) state estimation for discrete-time neural networks with time-varying and distributed delays. The major contributions of this paper can be summarized as follows. Firstly, by constructing a suitable Lyapunov-Krasovskii functional, sufficient conditions are established such that the error system is asymptotically stable and a prescribed \(H_{\infty}\) performance is guaranteed. Secondly, by introducing two new zero equalities and combining the reciprocally convex method with free-weighting matrix techniques, less conservative criteria are derived in terms of LMIs. Thirdly, in order to design the estimator gain matrix, a new zero equality is devised to convert a nonlinear matrix inequality into LMIs, which gives flexibility in solving them. Finally, three numerical examples are given to confirm the effectiveness of the proposed method.

Notations

Throughout this paper, the superscripts −1 and T stand for the inverse and transpose of a matrix, respectively; \(P>0\) (\(P\geqslant0\), \(P<0\), \(P\leqslant0\)) means that the matrix P is symmetric positive definite (positive semidefinite, negative definite, negative semidefinite, respectively); \(\|\cdot\|\) refers to the Euclidean vector norm; \(N[a,b]\) denotes the discrete interval \(N[a,b]=\{ a,a+1,\ldots,b-1,b\}\); \(\mathcal{R}^{n}\) denotes n-dimensional Euclidean space; \(\mathcal{R}^{{m}\times{n}}\) is the set of \(m\times n\) real matrices; the symbol ∗ denotes the symmetric block in a symmetric matrix; \(Z^{+}\) (\(Z^{-}\)) denotes the set of positive (negative) integers; \(\lambda_{\max}(Q)\) and \(\lambda_{\min}(Q)\) denote, respectively, the maximal and minimal eigenvalues of the matrix Q.

2 Preliminaries

Consider the following discrete-time neural networks with time-varying and distributed delays:

$$\begin{aligned} &x(k+1) = Ax(k)+B_{1}f \bigl(x(k) \bigr)+B_{2}f \bigl(x \bigl(k-\tau(k) \bigr) \bigr)+B_{3}\sum _{i=1}^{+\infty}\delta(i)f \bigl(x(k-i) \bigr) \\ &\hphantom{x(k+1) =}{} +D_{1}\omega(k), \\ &y(k) = C_{1}x(k)+C_{2}x \bigl(k-\tau(k) \bigr)+D_{2}\omega(k), \\ &z(k) = Hx(k), \\ &x(j) = \psi(j), \quad j=\ldots,-2,-1,0, \end{aligned}$$
(1)

where \(x(k)=[x_{1}(k),x_{2}(k),\ldots,x_{n}(k)]^{T}\in{\mathcal{R}}^{n}\) is the neuron state vector of the system; \(y(k)\in\mathcal{R}^{m}\) is the output measurement of the network; \(z(k)\in\mathcal{R}^{q}\) stands for the neural signal to be estimated; \(\omega(k)\) is the noise input belonging to \(l_{2}([0,\infty);\mathcal{R}^{p})\); \(\psi(j)\) is the initial condition; \(f(x(k))=[f_{1}(x(k)),f_{2}(x(k)),\ldots,f_{n}(x(k))]^{T}\in{\mathcal{R}}^{n}\) represents the neuron activation functions. \(A = \operatorname{diag}\{a_{1},a_{2},\ldots ,a_{n} \}\) and \(B_{1}\), \(B_{2}\), \(B_{3}\), \(C_{1}\), \(C_{2}\), \(D_{1}\), \(D_{2}\), and H are known constant matrices with appropriate dimensions. \(\tau(k)\) denotes the time-varying delay and satisfies \(0<\tau_{m}\leq\tau(k)\leq\tau_{M}\), where the bounds \(\tau_{m}\) and \(\tau_{M}\) are known positive integers.

Assumption 2.1

The neuron activation function \(f(\cdot)\) satisfies

$$ l_{s}^{-}\leqslant\frac{f_{s}(a)-f_{s}(b)}{a-b} \leqslant l_{s}^{+},\quad f_{s}(0)=0, s=1,2, \ldots,n, $$
(2)

for all \(a,b\in R\), \(a\neq b \), with \(l_{s}^{-}\) and \(l_{s}^{+}\) known real constants.
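
For instance, the hyperbolic tangent activation \(f_{s}(a)=\tanh(a)\) satisfies (2) with \(l_{s}^{-}=0\) and \(l_{s}^{+}=1\), since by the mean value theorem

$$ \frac{\tanh(a)-\tanh(b)}{a-b}=\operatorname{sech}^{2}(c)\in(0,1] $$

for some c between a and b.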

Remark 1

In Assumption 2.1, \(l_{s}^{-}\) and \(l_{s}^{+}\) can be positive, negative, or zero, so the activation functions may be non-monotonic; the commonly used monotonicity condition corresponds to the special case \(l_{s}^{-}=0\) and \(l_{s}^{+}>0\).

Assumption 2.2

The function \(\delta(i)\) is a real-valued non-negative function defined on \(i\in Z^{+}\), and there exists a constant scalar \(\xi>0\) such that

$$ \sum_{i=1}^{+\infty}\delta(i)=\xi< + \infty,\qquad \sum_{i=1}^{+\infty}\delta(i)i< + \infty. $$
(3)
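
For example, the kernel \(\delta(i)=2^{-i-3}\) used in Example 1 below satisfies (3), since

$$ \sum_{i=1}^{+\infty}2^{-i-3}=\frac{1}{8}< +\infty,\qquad \sum_{i=1}^{+\infty}\delta(i)i=\frac{1}{8}\sum_{i=1}^{+\infty}\frac{i}{2^{i}}=\frac{1}{4}< +\infty. $$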

For neural networks, a proper state estimator is constructed as

$$\begin{aligned} &{\hat{x}}(k+1) = A{\hat{x}}(k)+B_{1}f \bigl({ \hat{x}}(k) \bigr)+B_{2}f \bigl({\hat{x}} \bigl(k-\tau (k) \bigr) \bigr)+B_{3} \sum_{i=1}^{+\infty} \delta(i)f \bigl({\hat{x}}(k-i) \bigr) \\ &\hphantom{{\hat{x}}(k+1) =}{} +K \bigl(y(k)-{\hat{y}}(k) \bigr), \\ &{\hat{y}}(k) = C_{1}{\hat{x}}(k)+C_{2}{\hat{x}} \bigl(k- \tau(k) \bigr), \\ &{\hat{z}}(k) = H{\hat{x}}(k), \\ &{\hat{x}}(j) = {\hat{\psi}}(j),\quad j=\ldots,-2,-1,0, \end{aligned}$$
(4)

where \({\hat{x}}(k)\in{\mathcal{R}}^{n}\) denotes the estimate of the state \(x(k)\), \({\hat{z}}(k)\) represents the estimate of the output \(z(k)\) and K is the gain matrix to be determined.

Defining the error \(e(k)=x(k)-{\hat{x}}(k)\) and \(\tilde{z}(k)=z(k)-{\hat {z}}(k)\), we can obtain the error system from (1) and (4) as follows:

$$\begin{aligned} &e(k+1) = (A-KC_{1})e(k)-KC_{2}e \bigl(k- \tau(k) \bigr)+B_{1}g \bigl(e(k) \bigr)+B_{2}g \bigl(e \bigl(k- \tau(k) \bigr) \bigr) \\ &\hphantom{e(k+1) =}{} +B_{3}\sum_{i=1}^{+\infty} \delta (i)g \bigl(e \bigl((k-i) \bigr) \bigr)+(D_{1}-KD_{2}) \omega(k), \\ &\tilde{z}(k) = He(k), \end{aligned}$$
(5)

where \(g(e(k))=f(x(k))-f({\hat{x}}(k))\).

Definition 2.1

The error system (5) with \(\omega(k)=0\) is said to be asymptotically stable if every solution \(e(k)\) of the system (5) satisfies

$$\lim_{k\rightarrow\infty}\bigl\| e(k)\bigr\| ^{2}=0. $$

In this paper, our aim is to design a proper state estimator (4) such that the following conditions hold:

  1. (1)

    The error system (5) with \(\omega(k)=0\) is asymptotically stable.

  2. (2)

    Under the zero-initial condition, the estimation error \(\tilde {z}(k)\) satisfies

    $$\sum_{k=0}^{\infty}\bigl\| \tilde{z}(k) \bigr\| ^{2}\leq\gamma^{2}\sum_{k=0}^{\infty} \bigl\| \omega(k)\bigr\| ^{2}, $$

    for all nonzero \(\omega(k)\), where \(\gamma>0\) is a given disturbance attenuation level.

The following lemmas are useful in deriving our main results.

Lemma 2.1

[1]

Let \(M\in{\mathcal{R}}^{n\times n}\) be a positive-definite matrix, \(X_{i},x_{i}\in{\mathcal{R}}^{n}\), \(a_{i}>0\) (\(i=1,2,\ldots\)), and let m, n be integers with \(m>n\geq0\); then

$$\begin{aligned}& -(m-n)\sum_{i=k-m}^{k-n-1}X_{i}^{T}MX_{i} \leqslant- \Biggl(\sum_{i=k-m}^{k-n-1}X_{i} \Biggr)^{T}M \Biggl(\sum_{i=k-m}^{k-n-1}X_{i} \Biggr), \end{aligned}$$
(6)
$$\begin{aligned}& - \Biggl( {\sum_{i = 1}^{\infty}{a_{i} } } \Biggr)\sum_{i = 1}^{\infty}{a_{i} } x_{i}^{T} Mx_{i} \le - \Biggl( {\sum_{i = 1}^{\infty}{a_{i} } x_{i} } \Biggr)^{T} M \Biggl( {\sum _{i = 1}^{\infty}{a_{i} } x_{i} } \Biggr). \end{aligned}$$
(7)
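
As a quick sanity check (not part of the original paper), inequality (6) can be verified numerically on random data; the dimension, the values of m and n, and the random seed below are arbitrary choices.

```python
import numpy as np

# Numerically spot-check the discrete Jensen inequality (6):
#   -(m - n) * sum_i X_i' M X_i  <=  -(sum_i X_i)' M (sum_i X_i)
rng = np.random.default_rng(0)
dim, m, n = 3, 10, 2                       # m > n, so the sum runs over m - n terms
L = rng.normal(size=(dim, dim))
M = L @ L.T + dim*np.eye(dim)              # a positive-definite matrix
X = rng.normal(size=(m - n, dim))          # the vectors X_i
lhs = -(m - n)*sum(x @ M @ x for x in X)
s = X.sum(axis=0)
print(lhs <= -(s @ M @ s))                 # True for every sample
```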

Lemma 2.2

[4]

For any vectors \(\zeta_{1}\), \(\zeta_{2}\), given constant matrices \(R_{1}\), \(R_{2}\), S, and any scalars \(\alpha\geq0\), \(\beta\geq0\), satisfying \(\alpha+\beta=1\), and \(\bigl [{\scriptsize\begin{matrix}{} R_{1} & S \cr \ast& R_{2} \end{matrix}} \bigr ]\geq0\), we have

$$ -\frac{1}{\alpha}\zeta_{1}^{T}R_{1} \zeta_{1}-\frac{1}{\beta}\zeta _{2}^{T}R_{2} \zeta_{2}\leq- \left .\begin{bmatrix} \zeta_{1} \\ \zeta_{2} \end{bmatrix} \right .^{T} \begin{bmatrix} R_{1} & S \\ \ast& R_{2} \end{bmatrix} \begin{bmatrix} \zeta_{1} \\ \zeta_{2} \end{bmatrix}. $$
(8)
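
The reciprocally convex bound (8) can be spot-checked in the same spirit (again, not part of the original); drawing the block matrix as \(LL^{T}\) is just one convenient way to satisfy the lemma's assumption.

```python
import numpy as np

# Spot-check Lemma 2.2: draw a random PSD block matrix [[R1, S], [S', R2]],
# random vectors zeta_1, zeta_2, and a random convex pair (alpha, beta).
rng = np.random.default_rng(1)
dim = 3
L = rng.normal(size=(2*dim, 2*dim))
W = L @ L.T                                # [[R1, S], [S', R2]] >= 0 by construction
R1, R2 = W[:dim, :dim], W[dim:, dim:]
z1, z2 = rng.normal(size=dim), rng.normal(size=dim)
alpha = rng.uniform(0.05, 0.95); beta = 1 - alpha
lhs = -(z1 @ R1 @ z1)/alpha - (z2 @ R2 @ z2)/beta
z = np.concatenate([z1, z2])
print(lhs <= -(z @ W @ z))                 # True: the reciprocally convex bound
```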

Lemma 2.3

For integers \(\tau_{m}\leqslant\tau(k)\leqslant\tau_{M}\) and the vector function \(e(k):N[k-\tau_{M},k-\tau_{m}-1]\mapsto\mathcal{R}^{n}\), \(T_{4}^{T}={T_{4}}>0\), \(\tau_{Mm}=\tau_{M}-\tau_{m}\), \(\eta(k)=e(k+1)-e(k)\), \(N_{1}\), \(N_{2}\) being free-weighting symmetric matrices, Y being a free-weighting matrix with appropriate dimensions, and \(\bigl [ {\scriptsize\begin{matrix}{} T_{4}+N_{1} & Y\cr \ast& T_{4}+N_{2} \end{matrix}} \bigr ]\geq0\), the following inequality holds:

$$\begin{aligned} &{-}\tau_{Mm}\sum_{i=k-\tau(k)}^{k-\tau_{m}-1} \eta ^{T}(i) (T_{4}+N_{1})\eta(i)- \tau_{Mm}\sum_{i=k-\tau_{M}}^{k-\tau (k)-1} \eta^{T}(i) (T_{4}+N_{2})\eta(i) \\ &\quad\leq- \left .\begin{bmatrix} e(k-\tau_{m})-e(k-\tau(k)) \\ e(k-\tau(k))-e(k-\tau_{M}) \end{bmatrix} \right .^{T} \begin{bmatrix} {T_{4}+N_{1}} & {Y} \\ \ast& {T_{4}+N_{2}} \end{bmatrix} \\ &\qquad{} \cdot \begin{bmatrix} e(k-\tau_{m})-e(k-\tau(k)) \\ e(k-\tau(k))-e(k-\tau_{M}) \end{bmatrix}. \end{aligned}$$
(9)

Proof

In fact, if \(\tau_{m}<\tau(k)<\tau_{M}\), by using Lemma 2.1 and Lemma 2.2, we have

$$\begin{aligned} &{-}\tau_{Mm}\sum_{i=k-\tau(k)}^{k-\tau_{m}-1}\eta ^{T}(i) (T_{4}+N_{1})\eta(i)-\tau_{Mm} \sum_{i=k-\tau_{M}}^{k-\tau (k)-1}\eta^{T}(i) (T_{4}+N_{2})\eta(i) \\ &\quad\le-\frac{{\tau_{Mm} }}{{\tau(k) - \tau_{m} }} \Biggl[ {\sum_{i = k - \tau ( k )}^{k - \tau_{m} - 1} {\eta(i)} } \Biggr]^{T} ( {T_{4} + N_{1} } ) \Biggl[ {\sum_{i = k - \tau ( k )}^{k - \tau_{m} - 1} {\eta(i)} } \Biggr] \\ &\qquad{} -\frac{\tau_{Mm}}{\tau_{M} - \tau(k)} \Biggl[ {\sum_{i = k - \tau_{M} }^{k - \tau(k) - 1} {\eta(i)} } \Biggr]^{T} ( {T_{4} + N_{2} } ) \Biggl[ {\sum_{i = k - \tau_{M} }^{k - \tau(k) - 1} {\eta(i)} } \Biggr] \\ &\quad =-\frac{{\tau_{Mm} }}{{\tau(k) - \tau_{m} }} \bigl[ {e(k - \tau_{m} ) - e\bigl(k - \tau(k)\bigr)} \bigr]^{T} ( {T_{4} + N_{1} } ) \bigl[ {e(k - \tau_{m} ) - e\bigl(k - \tau(k)\bigr)} \bigr] \\ &\qquad{} -\frac{{\tau_{Mm} }}{{\tau_{M} - \tau(k)}} \bigl[ {e \bigl(k - \tau (k) \bigr) - e(k - \tau_{M} )} \bigr]^{T} ( {T_{4} + N_{2} } ) \bigl[ {e \bigl(k - \tau(k) \bigr) - e(k - \tau_{M} )} \bigr] \\ &\quad\leq- \left .\begin{bmatrix} e(k-\tau_{m})-e(k-\tau(k)) \\ e(k-\tau(k))-e(k-\tau_{M}) \end{bmatrix} \right .^{T} \begin{bmatrix} {T_{4}+N_{1}} & {Y} \\ \ast& {T_{4}+N_{2}} \end{bmatrix} \begin{bmatrix} e(k-\tau_{m})-e(k-\tau(k)) \\ e(k-\tau(k))-e(k-\tau_{M}) \end{bmatrix}. \end{aligned}$$

It should be noted that, when \(\tau(k)=\tau_{m}\) or \(\tau(k)=\tau_{M}\), we have

$$\begin{aligned}& \sum_{i = k - \tau ( k )}^{k - \tau_{m} - 1} {\eta (i)} = e(k - \tau_{m} ) - e\bigl(k - \tau(k)\bigr) = 0, \\& \sum_{i = k - \tau_{M} }^{k - \tau(k) - 1} {\eta(i)} = e \bigl(k - \tau(k) \bigr) - e(k - \tau_{M} ) = 0. \end{aligned}$$

Therefore, (9) still holds by using Lemma 2.1. The proof is completed. □

3 Main results

We denote

$$\begin{aligned}& L_{1}=\operatorname{diag} \bigl\{ l_{1}^{-}l_{1}^{+}, \ldots,l_{n}^{-}l_{n}^{+} \bigr\} , \qquad L_{2}= \operatorname{diag} \biggl\{ \frac{l_{1}^{-}+l_{1}^{+}}{2},\ldots,\frac{l_{n}^{-}+l_{n}^{+}}{2} \biggr\} , \\& \tau_{Mm}=\tau_{M}-\tau_{m}, \qquad\eta(k)=e(k+1)-e(k). \end{aligned}$$

Theorem 3.1

For a given scalar \(\gamma>0\), the error system (5) is asymptotically stable with \(H_{\infty}\) performance γ if there exist symmetric positive definite matrices \(P_{1}>0\), \(P_{2}>0\), \(Q=\bigl [ {\scriptsize\begin{matrix}{} Q_{11} & Q_{12} \cr \ast& Q_{22} \end{matrix}} \bigr ] >0\), \(R_{l}>0\) (\(l=1,2\)), \(T_{l}>0\) (\(l=1,2,3,4\)), positive diagonal matrices \(S_{k}=\operatorname{diag}\{s_{1k},s_{2k},\ldots,s_{nk}\}\) (\(k=1,2\)), symmetric matrices \(N_{1}\), \(N_{2}\), appropriately dimensioned matrices Y, G, and an appropriately dimensioned invertible matrix M, such that the following LMIs hold:

$$\begin{aligned} & \begin{bmatrix} T_{1} & N_{1} \\ \ast& T_{2} \end{bmatrix}>0,\qquad \begin{bmatrix} T_{1} & N_{2} \\ \ast& T_{2} \end{bmatrix}>0,\qquad \begin{bmatrix} T_{4}+N_{1} & Y \\ \ast& T_{4}+N_{2} \end{bmatrix}\geq0, \end{aligned}$$
(10)
$$\begin{aligned} &\Theta= \begin{bmatrix} \Omega& \Xi_{1} & \Xi_{2} \\ \ast& -\gamma^{2} I & 0 \\ \ast& \ast& -I \end{bmatrix}< 0, \quad\Omega= \begin{bmatrix} \Omega_{11} & \cdots& \Omega_{18} \\ \ast& \ddots& \vdots \\ \ast& \ast& \Omega_{88} \end{bmatrix}, \end{aligned}$$
(11)

where

$$\begin{aligned}& \Xi_{1}^{T} = \bigl[D_{1}^{T} M^{T} - D_{2}^{T} G^{T} \quad 0 \quad 0 \quad 0 \quad {D_{1}^{T} M^{T} - D_{2}^{T} G^{T} } \quad 0 \quad 0 \quad 0 \bigr], \\& \Xi_{2}^{T} = [ H \quad 0 \quad 0 \quad 0 \quad 0 \quad 0 \quad 0 \quad 0 ], \\& \Omega_{11}=(\tau_{Mm}+1)Q_{11}+R_{1}+ \tau _{Mm}^{2}T_{1}-T_{3}-S_{1}L_{1}+MA+AM^{T}-GC_{1} \\& \hphantom{\Omega_{11}=}{} -C_{1}^{T}G^{T}-M-M^{T}, \qquad\Omega_{12}=T_{3},\qquad \Omega _{13}=-GC_{2}, \\& \Omega_{15}=P_{1}-M-M^{T}+AM^{T}-C_{1}^{T}G^{T}, \\& \Omega_{16}=(\tau_{Mm}+1)Q_{12}+MB_{1}+L_{2}S_{1}, \qquad \Omega_{17}=MB_{2}, \qquad \Omega_{18}=MB_{3}, \\& \Omega_{22}=-R_{1}+R_{2}-T_{3}-T_{4}+( \tau_{Mm}-1)N_{1}, \qquad \Omega_{23}=T_{4}+N_{1}-Y,\qquad \Omega_{24}=Y, \\& \Omega_{33}=-Q_{11}-2T_{4}-(\tau_{Mm}+1)N_{1}+( \tau _{Mm}-1)N_{2}+Y+Y^{T}-S_{2}L_{1}, \\& \Omega_{34}=T_{4}+N_{2}-Y, \qquad \Omega_{35}=-C_{2}^{T}G^{T},\qquad \Omega _{37}=-Q_{12}+L_{2}S_{2}, \\& \Omega_{44}=-R_{2}-T_{4}-(\tau_{Mm}+1)N_{2}, \\& \Omega_{55}=\tau_{Mm}^{2}(T_{2}+T_{4})+ \tau_{m}^{2}T_{3}+P_{1}-M-M^{T}, \\& \Omega_{56}=MB_{1}, \qquad \Omega_{57}=MB_{2}, \qquad\Omega_{58}=MB_{3}, \\& \Omega_{66}=(\tau_{Mm}+1)Q_{22}+\xi P_{2}-S_{1}, \qquad \Omega _{77}=-Q_{22}-S_{2}, \qquad\Omega_{88}=-\frac{1}{\xi}P_{2}, \\& \textit{otherwise},\quad \Omega_{ij}=0, \quad i,j=1,2,\ldots,8. \end{aligned}$$

Furthermore, the gain matrix K can be designed as \(K=M^{-1}G\).
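
For readers who wish to check such conditions outside Matlab (the paper's computations use Matlab, cf. Example 1), the following sketch shows one way the feasibility test could be encoded in Python with cvxpy, and how the gain is then recovered via \(K=M^{-1}G\). It is only a structural illustration under stated assumptions: just the coupling conditions (10) are written out, the large block matrix Θ of (11) would be assembled analogously with `cp.bmat`, and the margin `eps` and the SCS solver are our own choices.

```python
import cvxpy as cp
import numpy as np

n, eps = 2, 1e-6   # state dimension; eps*I enforces strict positive definiteness

# Decision variables appearing in conditions (10); the remaining variables
# P1, P2, Q, R_l, S_k, M, G of Theorem 3.1 enter Theta in (11).
T1, T2, T4 = (cp.Variable((n, n), symmetric=True) for _ in range(3))
N1, N2 = (cp.Variable((n, n), symmetric=True) for _ in range(2))
Y = cp.Variable((n, n))

constraints = [
    cp.bmat([[T1, N1], [N1.T, T2]]) >> eps*np.eye(2*n),   # first LMI in (10)
    cp.bmat([[T1, N2], [N2.T, T2]]) >> eps*np.eye(2*n),   # second LMI in (10)
    cp.bmat([[T4 + N1, Y], [Y.T, T4 + N2]]) >> 0,         # third condition in (10)
    T1 >> eps*np.eye(n), T2 >> eps*np.eye(n), T4 >> eps*np.eye(n),
]
# ... assemble Theta from (11) with cp.bmat and append Theta << -eps*I here ...

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)
# Once (11) is included and a feasible point is found, the estimator gain is
# recovered as K = np.linalg.inv(M.value) @ G.value
```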

Proof

Define a new Lyapunov-Krasovskii functional as follows:

$$ V(k)=V_{1}(k)+V_{2}(k)+V_{3}(k)+V_{4}(k)+V_{5}(k), $$
(12)

where

$$\begin{aligned}& V_{1}(k)=e^{T}(k)P_{1}e(k)+\sum _{i=1}^{+\infty}\delta(i)\sum_{l=k-i}^{k-1}g^{T} \bigl(e(l) \bigr)P_{2}g \bigl(e(l) \bigr), \\& V_{2}(k)=\sum_{i=k-\tau_{m}}^{k-1}e^{T}(i)R_{1}e(i)+ \sum_{i=k-\tau_{M}}^{k-\tau_{m}-1}e^{T}(i)R_{2}e(i), \\& \begin{aligned}[b] V_{3}(k)={}&\sum_{i=k-\tau(k)}^{k-1} \left .\begin{bmatrix} e(i) \\ g(e(i)) \end{bmatrix} \right .^{T} \begin{bmatrix} Q_{11} & Q_{12} \\ \ast& Q_{22} \end{bmatrix} \begin{bmatrix} e(i) \\ g(e(i)) \end{bmatrix} \\ &{}+ \sum_{j=-\tau_{M}+1}^{-\tau_{m}}\sum _{i=k+j}^{k-1} \left .\begin{bmatrix} e(i) \\ g(e(i)) \end{bmatrix} \right .^{T} \begin{bmatrix} Q_{11} & Q_{12} \\ \ast& Q_{22} \end{bmatrix} \begin{bmatrix} e(i) \\ g(e(i)) \end{bmatrix}, \end{aligned} \\& V_{4}(k)=\tau_{Mm}\sum_{j=-\tau_{M}}^{-\tau_{m}-1} \sum_{i=k+j}^{k-1}e^{T}(i)T_{1}e(i) +\tau_{Mm}\sum_{j=-\tau_{M}}^{-\tau_{m}-1}\sum _{i=k+j}^{k-1}\eta^{T}(i)T_{2} \eta(i), \\& V_{5}(k)=\tau_{m}\sum_{j=-\tau_{m}}^{-1} \sum_{i=k+j}^{k-1}\eta^{T}(i)T_{3} \eta(i) +\tau_{Mm}\sum_{j=-\tau_{M}}^{-\tau_{m}-1} \sum_{i=k+j}^{k-1}\eta^{T}(i)T_{4} \eta(i). \end{aligned}$$

Taking the forward difference of \(V(k)\) along the trajectories of system (5) yields

$$\begin{aligned}& \begin{aligned}[b] \Delta{V}_{1}(k)={}& \bigl(\eta(k)+e(k) \bigr)^{T}P_{1} \bigl(\eta(k)+e(k) \bigr)-e^{T}(k)P_{1}e(k) \\ &{} +\sum_{i=1}^{+\infty}\delta(i) \Biggl(\sum _{l=k-i+1}^{k}-\sum _{l=k-i}^{k-1} \Biggr)g^{T} \bigl(e(l) \bigr)P_{2}g \bigl(e(l) \bigr) \\ \leq{}&\eta^{T}(k)P_{1}\eta(k)+2e^{T}(k)P_{1} \eta(k)+\xi g^{T} \bigl(e(k) \bigr)P_{2}g \bigl(e(k) \bigr) \\ &{} -\frac{1}{\xi}\sum_{i=1}^{+\infty } \delta(i)g^{T} \bigl(e(k-i) \bigr)P_{2}\sum _{i=1}^{+\infty}\delta(i)g \bigl(e(k-i) \bigr), \end{aligned} \end{aligned}$$
(13)
$$\begin{aligned}& \Delta{V}_{2}(k)=e^{T}(k)R_{1}e(k)-e^{T}(k- \tau_{m}) (R_{1}-R_{2})e(k-\tau_{m}) -e^{T}(k-\tau_{M})R_{2}e(k- \tau_{M}), \end{aligned}$$
(14)
$$\begin{aligned}& \begin{aligned}[b] \Delta{V}_{3}(k)\leq{}&(\tau_{Mm}+1) \left .\begin{bmatrix} e(k) \\ g(e(k)) \end{bmatrix} \right .^{T} \begin{bmatrix} Q_{11} & Q_{12} \\ \ast& Q_{22} \end{bmatrix} \begin{bmatrix} e(k) \\ g(e(k)) \end{bmatrix} \\ &{} - \left .\begin{bmatrix} e(k-\tau(k)) \\ g(e(k-\tau(k))) \end{bmatrix} \right .^{T} \begin{bmatrix} Q_{11} & Q_{12} \\ \ast& Q_{22} \end{bmatrix} \begin{bmatrix} e(k-\tau(k)) \\ g(e(k-\tau(k))) \end{bmatrix}, \end{aligned} \end{aligned}$$
(15)
$$\begin{aligned}& \begin{aligned}[b] \Delta{V}_{4}(k)={}&\tau_{Mm}^{2} \bigl[e^{T}(k)T_{1}e(k)+\eta^{T}(k)T_{2} \eta(k) \bigr] \\ &{} -\tau_{Mm}\sum_{i=k-\tau(k)}^{k-\tau_{m}-1} \left .\begin{bmatrix} e(i) \\ \eta(i) \end{bmatrix} \right .^{T} \begin{bmatrix} T_{1} & 0 \\ 0 & T_{2} \end{bmatrix} \begin{bmatrix} e(i) \\ \eta(i) \end{bmatrix} \\ &{} -\tau_{Mm}\sum_{i=k-\tau_{M}}^{k-\tau(k)-1} \left .\begin{bmatrix} e(i) \\ \eta(i) \end{bmatrix} \right .^{T} \begin{bmatrix} T_{1} & 0 \\ 0 & T_{2} \end{bmatrix} \begin{bmatrix} e(i) \\ \eta(i) \end{bmatrix}. \end{aligned} \end{aligned}$$
(16)

For any matrix N, the following equality holds:

$$ e^{T}(i+1)Ne(i+1)-e^{T}(i)Ne(i)= \eta(i)^{T}N\eta(i)+2e^{T}(i)N\eta(i). $$
(17)

From the equality (17), the following two zero equalities hold with any symmetric matrices \(N_{i}\) (\(i=1,2\)):

$$\begin{aligned} 0={}&\tau_{Mm}e^{T}(k- \tau_{m})N_{1}e(k-\tau_{m})-\tau_{Mm}e^{T} \bigl(k-\tau (k) \bigr)N_{1}e \bigl(k-\tau(k) \bigr) \\ &{}-\tau_{Mm}\sum_{i=k-\tau(k)}^{k-\tau_{m}-1} \bigl[\eta^{T}(i)N_{1}\eta (i)+2e^{T}(i)N_{1} \eta(i) \bigr], \end{aligned}$$
(18)
$$\begin{aligned} 0={}&\tau_{Mm}e^{T} \bigl(k-\tau(k) \bigr)N_{2}e \bigl(k-\tau(k) \bigr)-\tau_{Mm}e^{T}(k- \tau _{M})N_{2}e(k-\tau_{M}) \\ &{}-\tau_{Mm}\sum_{i=k-\tau_{M}}^{k-\tau(k)-1} \bigl[\eta(i)^{T}N_{2}\eta (i)+2e^{T}(i)N_{2} \eta(i) \bigr]. \end{aligned}$$
(19)

From (18), (19), and the calculation result of \(\Delta{V}_{4}(k)\), we have

$$\begin{aligned} \Delta{V}_{4}(k)={}&\tau_{Mm}^{2} \bigl[e^{T}(k)T_{1}e(k)+\eta^{T}(k)T_{2} \eta (k) \bigr]+\tau_{Mm}e^{T}(k-\tau_{m})N_{1}e(k- \tau_{m}) \\ &{} -\tau_{Mm}e^{T}(k-\tau_{M})N_{2}e(k- \tau_{M}) \\ &{} +\tau_{Mm}e^{T} \bigl(k-\tau(k) \bigr) (N_{2}-N_{1})e \bigl(k-\tau(k) \bigr) \\ &{} -\tau_{Mm}\sum_{i=k-\tau(k)}^{k-\tau_{m}-1} \left .\begin{bmatrix} e(i) \\ \eta(i) \end{bmatrix} \right .^{T} \begin{bmatrix} T_{1} & N_{1} \\ \ast& T_{2} \end{bmatrix} \begin{bmatrix} e(i) \\ \eta(i) \end{bmatrix} \\ &{} -\tau_{Mm}\sum_{i=k-\tau_{M}}^{k-\tau(k)-1} \left .\begin{bmatrix} e(i) \\ \eta(i) \end{bmatrix} \right .^{T} \begin{bmatrix} T_{1} & N_{2} \\ \ast& T_{2} \end{bmatrix} \begin{bmatrix} e(i) \\ \eta(i) \end{bmatrix} \\ &{} -\tau_{Mm}\sum_{i=k-\tau(k)}^{k-\tau_{m}-1} \eta^{T}(i)N_{1}\eta (i)-\tau_{Mm}\sum _{i=k-\tau_{M}}^{k-\tau(k)-1}\eta^{T}(i)N_{2} \eta(i), \end{aligned}$$
(20)
$$\begin{aligned} \Delta{V}_{5}(k)={}&\eta^{T}(k) \bigl( \tau_{m}^{2}T_{3}+\tau_{Mm}^{2}T_{4} \bigr)\eta (k)-\tau_{m}\sum_{i=k-\tau_{m}}^{k-1} \eta^{T}(i)T_{3}\eta(i) \\ &{} -\tau_{Mm}\sum_{i=k-\tau(k)}^{k-\tau_{m}-1} \eta^{T}(i)T_{4}\eta (i)-\tau_{Mm}\sum _{i=k-\tau_{M}}^{k-\tau(k)-1}\eta^{T}(i)T_{4} \eta(i). \end{aligned}$$
(21)

From \(\bigl [{\scriptsize\begin{matrix}{} T_{1} & N_{1} \cr \ast& T_{2} \end{matrix}} \bigr ]>0 \) and \(\bigl [ {\scriptsize\begin{matrix}{} T_{1} & N_{2} \cr \ast& T_{2} \end{matrix}} \bigr ]>0\), we know that

$$\begin{aligned} &\Delta{V}_{4}(k)+\Delta{V}_{5}(k) \\ &\quad\leq\tau_{Mm}^{2} \bigl[e^{T}(k)T_{1}e(k)+ \eta^{T}(k)T_{2}\eta(k) \bigr] \\ &\qquad{} +\tau_{Mm}e^{T} \bigl(k-\tau(k) \bigr) (N_{2}-N_{1})e \bigl(k-\tau(k) \bigr) \\ &\qquad{} +\tau_{Mm}e^{T}(k-\tau_{m})N_{1}e(k- \tau_{m})-\tau_{Mm}e^{T}(k-\tau _{M})N_{2}e(k-\tau_{M}) \\ &\qquad{} +\eta^{T}(k) \bigl(\tau_{m}^{2}T_{3}+ \tau_{Mm}^{2}T_{4} \bigr)\eta(k)- \tau_{m}\sum_{i=k-\tau_{m}}^{k-1} \eta^{T}(i)T_{3}\eta(i) \\ &\qquad{} -\tau_{Mm}\sum_{i=k-\tau(k)}^{k-\tau_{m}-1} \eta ^{T}(i) (T_{4}+N_{1})\eta(i) - \tau_{Mm}\sum_{i=k-\tau_{M}}^{k-\tau(k)-1} \eta^{T}(i) (T_{4}+N_{2})\eta(i). \end{aligned}$$
(22)

By using Lemmas 2.1 and 2.3, we can obtain

$$\begin{aligned} &{-}\tau_{m}\sum_{i=k-\tau_{m}}^{k-1} \eta^{T}(i)T_{3}\eta(i)\leq- \left .\begin{bmatrix} e(k) \\ e(k-\tau_{m}) \end{bmatrix} \right .^{T} \begin{bmatrix} {T_{3}} & {-T_{3}} \\ {\ast} & {T_{3}} \end{bmatrix} \begin{bmatrix} e(k) \\ e(k-\tau_{m}) \end{bmatrix}, \end{aligned}$$
(23)
$$\begin{aligned} &{-}\tau_{Mm}\sum_{i=k-\tau(k)}^{k-\tau_{m}-1} \eta^{T}(i) (T_{4}+N_{1})\eta (i)- \tau_{Mm}\sum_{i=k-\tau_{M}}^{k-\tau(k)-1}\eta ^{T}(i) (T_{4}+N_{2})\eta(i) \\ &\quad\leq- \left .\begin{bmatrix} e(k-\tau_{m})-e(k-\tau(k)) \\ e(k-\tau(k))-e(k-\tau_{M}) \end{bmatrix} \right .^{T} \begin{bmatrix} {T_{4}+N_{1}} & {Y} \\ {\ast} & {T_{4}+N_{2}} \end{bmatrix} \\ &\qquad{} \cdot \begin{bmatrix} e(k-\tau_{m})-e(k-\tau(k)) \\ e(k-\tau(k))-e(k-\tau_{M}) \end{bmatrix}. \end{aligned}$$
(24)

Then we can get

$$\begin{aligned} &\Delta{V}_{4}(k)+\Delta{V}_{5}(k) \\ &\quad\leq\tau_{Mm}^{2} \bigl[e^{T}(k)T_{1}e(k)+ \eta^{T}(k)T_{2}\eta(k) \bigr]+\eta ^{T}(k) \bigl( \tau_{m}^{2}T_{3}+\tau_{Mm}^{2}T_{4} \bigr)\eta(k) \\ &\qquad{} +\tau_{Mm}e^{T}(k-\tau_{m})N_{1}e(k- \tau_{m})+\tau_{Mm}e^{T} \bigl(k-\tau (k) \bigr) (N_{2}-N_{1})e \bigl(k-\tau(k) \bigr) \\ &\qquad{} -\tau_{Mm}e^{T}(k-\tau_{M})N_{2}e(k- \tau_{M})- \bigl[e(k)-e(k-\tau _{m}) \bigr]^{T}T_{3} \bigl[e(k)-e(k-\tau_{m}) \bigr] \\ &\qquad{} - \left .\begin{bmatrix} e(k-\tau_{m})-e(k-\tau(k)) \\ e(k-\tau(k))-e(k-\tau_{M}) \end{bmatrix} \right .^{T} \begin{bmatrix} {T_{4}+N_{1}} & {Y} \\ {Y^{T} } & {T_{4}+N_{2}} \end{bmatrix} \\ &\qquad{} \cdot \begin{bmatrix} e(k-\tau_{m})-e(k-\tau(k)) \\ e(k-\tau(k))-e(k-\tau_{M}) \end{bmatrix}. \end{aligned}$$
(25)

On the other hand, for any appropriately dimensioned invertible matrix M, the following zero equality holds:

$$\begin{aligned} 0={}&2 \bigl[e^{T}(k)M+\eta^{T}(k)M \bigr] \Biggl[(A-KC_{1}-I)e(k)-KC_{2}e \bigl(k-\tau(k) \bigr)+B_{1}g \bigl(e(k) \bigr) \\ &{}+B_{2}g\bigl(e \bigl(k-\tau(k) \bigr)\bigr)+B_{3}\sum _{i=1}^{+\infty}\delta (i)g \bigl(e(k-i) \bigr)+(D_{1}-KD_{2}) \omega(k)-\eta(k) \Biggr]. \end{aligned}$$
(26)

From Assumption 2.1, it follows that

$$\begin{aligned}& \bigl( {g_{i} \bigl( {e_{i} ( k )} \bigr) - l_{i}^{+} e_{i} ( k )} \bigr) \bigl( {g_{i} \bigl( {e_{i} ( k )} \bigr) - l_{i}^{-} e_{i} ( k )} \bigr) \le0, \end{aligned}$$
(27)
$$\begin{aligned}& \bigl( {g_{i} \bigl( {e_{i} \bigl( {k - \tau(k)} \bigr)} \bigr) - l_{i}^{+} e_{i} \bigl( {k - \tau(k)} \bigr)} \bigr) \bigl( {g_{i} \bigl( {e_{i} \bigl( {k - \tau(k)} \bigr)} \bigr) - l_{i}^{-} e_{i} \bigl( {k - \tau (k)} \bigr)} \bigr) \le0. \end{aligned}$$
(28)

For the diagonal matrices \(S_{k}=\operatorname{diag}\{s_{1k},s_{2k},\ldots,s_{nk}\}\) (\(k=1,2\)), one can obtain the following inequalities:

$$\begin{aligned}& - \left .\begin{bmatrix} {e(k)} \\ {g(e(k))} \end{bmatrix} \right .^{T} \begin{bmatrix} {S_{1} L_{1} } & { - S_{1} L_{2} } \\ \ast & {S_{1} } \end{bmatrix} \begin{bmatrix} {e(k)} \\ {g(e(k))} \end{bmatrix} \ge0, \end{aligned}$$
(29)
$$\begin{aligned}& - \left .\begin{bmatrix} {e(k - \tau(k))} \\ {g(e(k - \tau(k)))} \end{bmatrix} \right .^{T} \begin{bmatrix} {S_{2} L_{1} } & { - S_{2} L_{2} } \\ \ast& {S_{2} } \end{bmatrix} \begin{bmatrix} {e(k - \tau(k))} \\ {g(e(k - \tau(k)))} \end{bmatrix} \ge0. \end{aligned}$$
(30)

In order to establish the \(H_{\infty}\) performance of the estimation error system, we define

$$ J(s)=\sum_{k=0}^{s} \bigl\{ \bigl\| \tilde{z}(k)\bigr\| ^{2}-\gamma^{2}\bigl\| \omega(k)\bigr\| ^{2} \bigr\} . $$
(31)

Under the zero-initial condition, combined with (12)-(31), one can get

$$\begin{aligned} J(s)&=\sum_{k=0}^{s} \bigl\{ \bigl\| \tilde{z}(k)\bigr\| ^{2}-\gamma^{2}\bigl\| \omega(k)\bigr\| ^{2}+ \Delta{V}(k) \bigr\} -V(s+1) \\ &\leq\sum_{k=0}^{s} \bigl\{ \bigl\| \tilde{z}(k) \bigr\| ^{2}-\gamma^{2}\bigl\| \omega(k)\bigr\| ^{2}+\Delta{V}(k) \bigr\} \\ &\leq\sum_{k=0}^{s} \Biggl\{ \xi^{T}(k) \begin{bmatrix} \Omega& \Xi_{1} \\ \ast& -\gamma^{2}I \end{bmatrix} \xi(k)+e^{T}(k)H^{T}He(k) \Biggr\} < 0, \end{aligned}$$
(32)

where

$$\begin{aligned}& \begin{aligned}[b] \alpha^{T}(k)={}& \Biggl[e^{T}(k),e^{T}(k- \tau_{m}),e^{T} \bigl(k-\tau(k) \bigr),e^{T}(k- \tau _{M}),\eta^{T}(k),g^{T} \bigl(e(k) \bigr), \\ &{} g^{T} \bigl(e \bigl(k-\tau(k) \bigr) \bigr),\sum _{i=1}^{+\infty }\delta(i)g^{T} \bigl(e(k-i) \bigr) \Biggr], \end{aligned} \\& \xi^{T}(k)= \bigl[\alpha^{T}(k),\omega^{T}(k) \bigr]. \end{aligned}$$

When \(s\longrightarrow\infty\), we can obtain

$$\sum_{k=0}^{\infty}\bigl\| \tilde{z}(k) \bigr\| ^{2}\leq\gamma^{2}\sum_{k=0}^{\infty} \bigl\| \omega(k)\bigr\| ^{2}. $$

Then, when \(\omega(k)=0\), from (12) to (32), we can get \(\Delta {V}(k)\leq\alpha^{T}(k)\Omega\alpha(k)\), where Ω is defined in (11) and \(\Omega<0\).

There must exist a sufficiently small \(\varepsilon_{0}>0\), such that

$$ \Delta V(k)\leq-\varepsilon_{0}\bigl\| e(k) \bigr\| ^{2}< 0. $$
(33)

From Lyapunov stability theory, we know that the error system is globally asymptotically stable when \(\omega(k)=0\). The proof is completed. □

Remark 2

In this paper, we use the zero equality (26) to avoid a nonlinear matrix inequality, which gives flexibility in solving the LMIs; the effectiveness of this approach will be demonstrated in the numerical examples. In [29], the authors used the bound \(-PR^{-1}P\leq-2P+R\) (\(R\geq0\)) to transform the nonlinear matrix inequality into an LMI.

Remark 3

In this paper, by introducing the two new zero equalities (18) and (19), the free-weighting matrices \(N_{1}\), \(N_{2}\) are added to the main diagonal terms of the matrix \(\bigl [{\scriptsize\begin{matrix}{} T_{4} & 0 \cr 0 & T_{4} \end{matrix}} \bigr ]\), and the condition of the reciprocally convex approach is relaxed to \(\bigl [{\scriptsize\begin{matrix}{} T_{4}+N_{1} & Y\cr Y^{T} & T_{4}+N_{2} \end{matrix}} \bigr ]\geq0\). This method may lead to less conservative conditions for the \(H_{\infty}\) state estimation and stability of discrete-time neural networks, as the following numerical examples show.

Remark 4

It should be noted that the proposed Lyapunov-Krasovskii functional is more general than that of [33], since \(V_{4}(k)\) was not considered there. The numerical examples below confirm that \(V_{4}(k)\) is effective in reducing conservatism. Therefore, our results may be more widely applicable than the ones in [33].

In the following, we consider the discrete-time neural network with mixed time delays

$$ e(k+1) = Ae(k)+B_{1}g \bigl(e(k) \bigr)+B_{2}g \bigl(e \bigl(k-\tau(k) \bigr) \bigr)+B_{3}\sum _{i=1}^{+\infty}\delta(i)g \bigl(e(k-i) \bigr). $$
(34)

Theorem 3.2

The system (34) is globally asymptotically stable if there exist symmetric positive definite matrices \(P_{1}>0\), \(P_{2}>0\), \(Q=\bigl [ {\scriptsize\begin{matrix}{} Q_{11} & Q_{12} \cr \ast& Q_{22} \end{matrix}} \bigr ] >0\), \(R_{l}>0\) (\(l=1,2\)), \(T_{l}>0\) (\(l=1,2,3,4\)), positive diagonal matrices \(S_{k}=\operatorname{diag}\{s_{1k},s_{2k},\ldots,s_{nk}\}\) (\(k=1,2\)), appropriately dimensioned symmetric matrices \(N_{1}\), \(N_{2}\), and appropriately dimensioned matrices Y, \(M_{1}\), \(M_{2}\), such that the following LMIs hold:

$$\begin{aligned}& \begin{bmatrix} T_{1} & N_{1} \\ \ast& T_{2} \end{bmatrix}>0,\qquad \begin{bmatrix} T_{1} & N_{2} \\ \ast& T_{2} \end{bmatrix}>0,\qquad \begin{bmatrix} T_{4}+N_{1} & Y \\ \ast& T_{4}+N_{2} \end{bmatrix}\geq0, \end{aligned}$$
(35)
$$\begin{aligned}& \hat{\Omega}= \begin{bmatrix} \hat{\Omega}_{11} & \cdots& \hat{\Omega}_{18} \\ \ast& \ddots& \vdots \\ \ast& \ast& \hat{\Omega}_{88} \end{bmatrix}< 0, \end{aligned}$$
(36)

where

$$\begin{aligned}& \hat{\Omega}_{11}=(\tau_{Mm}+1)Q_{11}+R_{1}+ \tau _{Mm}^{2}T_{1}-T_{3}-S_{1}L_{1}+M_{1}A+AM_{1}^{T}-M_{1}-M_{1}^{T}, \\& \hat{\Omega}_{12}=T_{3}, \qquad\hat{\Omega}_{15}=P_{1}-M_{1}-M_{2}^{T}+AM_{2}^{T}, \\& \hat{\Omega}_{16}=(\tau_{Mm}+1)Q_{12}+M_{1}B_{1}+L_{2}S_{1}, \qquad \hat{\Omega}_{17}=M_{1}B_{2}, \qquad\hat{\Omega}_{18}=M_{1}B_{3}, \\& \hat{\Omega}_{22}=-R_{1}+R_{2}-T_{3}-T_{4}+( \tau_{Mm}-1)N_{1}, \qquad\hat{\Omega}_{23}=T_{4}+N_{1}-Y, \qquad \hat{\Omega}_{24}=Y, \\& \hat{\Omega}_{33}=-Q_{11}-2T_{4}-( \tau_{Mm}+1)N_{1}+(\tau _{Mm}-1)N_{2}+Y+Y^{T}-S_{2}L_{1}, \\& \hat{\Omega}_{34}=T_{4}+N_{2}-Y, \qquad\hat{\Omega}_{37}=-Q_{12}+L_{2}S_{2}, \\& \hat{\Omega}_{44}=-R_{2}-T_{4}-( \tau_{Mm}+1)N_{2}, \qquad\hat{\Omega}_{55}=\tau _{Mm}^{2}(T_{2}+T_{4})+ \tau_{m}^{2}T_{3}+P_{1}-M_{2}-M_{2}^{T}, \\& \hat{\Omega}_{56}=M_{2}B_{1}, \qquad \hat{\Omega}_{57}=M_{2}B_{2},\qquad \hat{\Omega}_{58}=M_{2}B_{3}, \qquad \hat{\Omega}_{66}=( \tau_{Mm}+1)Q_{22}+\xi P_{2}-S_{1}, \\& \hat{\Omega}_{77}=-Q_{22}-S_{2}, \qquad\hat{\Omega}_{88}=-\frac{1}{\xi }P_{2}, \\& \textit{otherwise},\quad \hat{\Omega}_{ij}=0, \quad i,j=1,2,\ldots,8. \end{aligned}$$

Proof

Define the same Lyapunov-Krasovskii functional in Theorem 3.1

$$ V(k)=V_{1}(k)+V_{2}(k)+V_{3}(k)+V_{4}(k)+V_{5}(k). $$
(37)

For any appropriately dimensioned matrices \(M_{1}\) and \(M_{2}\), the following zero equality holds:

$$\begin{aligned} 0={}&2 \bigl[e^{T}(k)M_{1}+ \eta^{T}(k)M_{2} \bigr] \Biggl[(A-I)e(k)+B_{1}g \bigl(e(k) \bigr)+B_{2}g\bigl(e \bigl(k-\tau (k) \bigr)\bigr) \\ &{}+B_{3}\sum_{i=1}^{+\infty} \delta(i)g \bigl(e(k-i) \bigr)-\eta(k) \Biggr]. \end{aligned}$$
(38)

Similar to the above analysis in Theorem 3.1, we can get \(\Delta V(k)\leq\alpha ^{T}(k)\hat{\Omega}\alpha(k)<0\), and it follows from Lyapunov stability theory that the system (34) is globally asymptotically stable. The proof is completed. □

4 Examples

In this section, we present three examples to demonstrate the effectiveness of our results.

Example 1

Consider the system model (1) with the following parameters:

$$\begin{aligned}& A = \begin{bmatrix} {0.4} & 0 \\ 0 & {0.6} \end{bmatrix},\qquad B_{1} = \begin{bmatrix} { - 0.1} & {0.2} \\ {0.3} & {0.4} \end{bmatrix}, \qquad B_{2} = \begin{bmatrix} {0.1} & {0.2} \\ {0.3} & {0.2} \end{bmatrix}, \\& B_{3} = \begin{bmatrix} { - 0.4} & {0.3} \\ {0.2} & {0.1} \end{bmatrix},\qquad C_{1} = \begin{bmatrix} {0.4} & {0.2} \\ { - 0.1} & {0.6} \end{bmatrix},\qquad C_{2} = \begin{bmatrix} {0.45} & {0.3} \\ { - 0.25} & {0.4} \end{bmatrix}, \\& D_{1} = \begin{bmatrix} {0.1} & 0 \\ 0 & {0.2} \end{bmatrix},\qquad D_{2} = \begin{bmatrix} {0.4} & 0 \\ 0 & {0.6} \end{bmatrix},\qquad H = \begin{bmatrix} {0.2} & {0.5} \\ {0.6} & {0.4} \end{bmatrix}, \\& f \bigl(x(k) \bigr) = \begin{bmatrix} {\tanh(0.6x_{1} ) - 0.2\sin x_{1} }\\ {\tanh( - 0.4x_{2} )} \end{bmatrix}, \\& \delta(i) = 2^{ - i - 3},\qquad \tau ( k ) = 4 + 2\sin \biggl( \frac{{\pi k}}{2} \biggr). \end{aligned}$$

It can be verified that

$$\begin{aligned}& L_{1} = \begin{bmatrix} { - 0.16} & 0 \\ 0 & 0 \end{bmatrix},\qquad L_{2} = \begin{bmatrix} {0.3} & 0 \\ 0 & { - 0.2} \end{bmatrix}, \\& \tau_{M} = 6,\qquad \tau_{m} = 2, \qquad\xi = \frac{1}{8}. \end{aligned}$$
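
Indeed, the sector bounds follow from enclosing the derivatives of the two activation components (a detail not spelled out in the original):

$$\begin{aligned}& f_{1}'(x)=0.6\operatorname{sech}^{2}(0.6x)-0.2\cos x\in[-0.2,0.8]\quad\Longrightarrow\quad l_{1}^{-}=-0.2,\ l_{1}^{+}=0.8, \\& f_{2}'(x)=-0.4\operatorname{sech}^{2}(0.4x)\in[-0.4,0]\quad\Longrightarrow\quad l_{2}^{-}=-0.4,\ l_{2}^{+}=0, \end{aligned}$$

so that \(l_{1}^{-}l_{1}^{+}=-0.16\), \((l_{1}^{-}+l_{1}^{+})/2=0.3\), \(l_{2}^{-}l_{2}^{+}=0\), and \((l_{2}^{-}+l_{2}^{+})/2=-0.2\), which match \(L_{1}\) and \(L_{2}\) above; likewise \(\xi=\sum_{i=1}^{+\infty}2^{-i-3}=1/8\).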

In this example, the attenuation level is taken as \(\gamma=0.98\). By applying Theorem 3.1 with Matlab, we can obtain a set of feasible solutions as follows:

$$\begin{aligned}& P_{1} = \begin{bmatrix} {46.7544} & { - 13.6028} \\ { - 13.6028} & {33.8604} \end{bmatrix},\qquad P_{2} = \begin{bmatrix} {6.1443} & { - 1.3809} \\ { - 1.3809} & {4.0085} \end{bmatrix}, \\& Q_{11} = \begin{bmatrix} {2.8464} & { - 0.8814} \\ { - 0.8814} & {1.5361} \end{bmatrix},\qquad Q_{12} = \begin{bmatrix} { -1.5476} & { - 0.5241} \\ { - 1.0620} & { - 0.0556} \end{bmatrix}, \\& Q_{22} = \begin{bmatrix} {5.5451} & {-1.2002} \\ {-1.2002} & {4.0080} \end{bmatrix},\qquad R_{1} = \begin{bmatrix} {7.8533} & { - 1.8061} \\ { -1.8061} & {4.5324} \end{bmatrix}, \\& R_{2} = \begin{bmatrix} {4.1881} & { - 0.7765} \\ { - 0.7765} & {2.5721} \end{bmatrix},\qquad T_{1} = \begin{bmatrix} {0.8884} & { - 0.2448} \\ { - 0.2448} & {0.4314} \end{bmatrix}, \\& T_{2} = \begin{bmatrix} {0.4356} & { - 0.0892} \\ { - 0.0892} & {0.2865} \end{bmatrix},\qquad T_{3} = \begin{bmatrix} {0.5196} & { - 0.0758} \\ { - 0.0758} & {0.3620} \end{bmatrix}, \\& T_{4} = \begin{bmatrix} {0.2861} & { - 0.0542} \\ { - 0.0542} & {0.2297} \end{bmatrix},\qquad N_{1} = \begin{bmatrix} {0.3573} & { - 0.1300} \\ { - 0.1300} & {0.2067} \end{bmatrix}, \\& N_{2} = \begin{bmatrix} {0.1441} & { - 0.0043} \\ { - 0.0043} & {0.0227} \end{bmatrix},\qquad Y = \begin{bmatrix} { - 0.0812} & {0.0427} \\ {0.0431} & { - 0.0759} \end{bmatrix}, \\& M = \begin{bmatrix} {44.9538} & { - 6.8919} \\ { - 12.3568} & {34.5155} \end{bmatrix},\qquad G = \begin{bmatrix} {5.0036} & { - 2.3473} \\ { - 1.9508} & {9.8097} \end{bmatrix}, \\& S_{1} = \begin{bmatrix} { 52.4217} & {0} \\ {0} & { 48.0183} \end{bmatrix},\qquad S_{2} = \begin{bmatrix} {22.7074} & { 0} \\ {0} & {12.9386} \end{bmatrix}. \end{aligned}$$

Therefore, according to Theorem 3.1, the desired estimator gains can be designed as

$$K=M^{-1}G= \begin{bmatrix} 0.1086 & -0.0091 \\ 0.0176 & 0.2809 \end{bmatrix}. $$
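
As an illustration (not contained in the original paper), one can run the plant (1) and the estimator (4) side by side with this gain and watch the estimation error decay. In the sketch below the infinite distributed-delay sum is truncated, and the disturbance \(\omega(k)\) and the initial histories are arbitrary choices; all three are our own assumptions.

```python
import numpy as np

# Sketch: simulate plant (1) and estimator (4) of Example 1 with the gain K,
# then report the estimation error e(k) = x(k) - xhat(k).
n, TRUNC, STEPS = 2, 40, 80                 # TRUNC truncates the distributed delay
A = np.diag([0.4, 0.6])
B1 = np.array([[-0.1, 0.2], [0.3, 0.4]]); B2 = np.array([[0.1, 0.2], [0.3, 0.2]])
B3 = np.array([[-0.4, 0.3], [0.2, 0.1]])
C1 = np.array([[0.4, 0.2], [-0.1, 0.6]]); C2 = np.array([[0.45, 0.3], [-0.25, 0.4]])
D1 = np.diag([0.1, 0.2]); D2 = np.diag([0.4, 0.6])
K = np.array([[0.1086, -0.0091], [0.0176, 0.2809]])

f = lambda x: np.array([np.tanh(0.6*x[0]) - 0.2*np.sin(x[0]), np.tanh(-0.4*x[1])])
delta = lambda i: 2.0**(-i - 3)             # decays fast, so truncation is mild
tau = lambda k: int(4 + 2*np.sin(np.pi*k/2))
omega = lambda k: 0.3*np.exp(-0.05*k)*np.ones(n)   # an assumed l2 disturbance

x = {j: np.array([0.8, -0.5]) for j in range(-TRUNC, 1)}    # psi(j)
xh = {j: np.array([-0.4, 0.6]) for j in range(-TRUNC, 1)}   # psihat(j)
for k in range(STEPS):
    dx = sum(delta(i)*f(x[k - i]) for i in range(1, TRUNC + 1))
    dxh = sum(delta(i)*f(xh[k - i]) for i in range(1, TRUNC + 1))
    y = C1 @ x[k] + C2 @ x[k - tau(k)] + D2 @ omega(k)
    yh = C1 @ xh[k] + C2 @ xh[k - tau(k)]
    x[k + 1] = A @ x[k] + B1 @ f(x[k]) + B2 @ f(x[k - tau(k)]) + B3 @ dx + D1 @ omega(k)
    xh[k + 1] = A @ xh[k] + B1 @ f(xh[k]) + B2 @ f(xh[k - tau(k)]) + B3 @ dxh + K @ (y - yh)
print("||e(k)|| at k = 0, 20, 40, 80:",
      [round(np.linalg.norm(x[k] - xh[k]), 4) for k in (0, 20, 40, 80)])
```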

Example 2

Consider the system (34) with the following parameters:

$$\begin{aligned}& A= \begin{bmatrix} 0.4 & 0 \\ 0 & 0.3 \end{bmatrix},\qquad B_{1}= \begin{bmatrix} -0.3 &0.1\\ 0.2 & 0.2 \end{bmatrix}, \qquad B_{2}= \begin{bmatrix} 0.4 &0.2\\ 0.1 & 0.2 \end{bmatrix}, \\& B_{3}= \begin{bmatrix} -0.1 & 0.1\\ 0.2&0.1 \end{bmatrix},\qquad g(e)= \begin{bmatrix} \tanh(0.8e_{1}) \\ \tanh(-0.6e_{2}) \end{bmatrix}. \end{aligned}$$

We can obtain

$$L_{1}= \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix},\qquad L_{2}= \begin{bmatrix} 0.4 &0\\ 0 & -0.3 \end{bmatrix},\qquad \delta(i)=e^{-2i},\quad \sum_{i=1}^{+\infty}e^{-2i}< + \infty. $$

The aim of Example 2 is to obtain the allowable delay bound \(\tau_{M}\) for different values of \(\tau_{m}\). By using Theorem 3.2, the computational results are listed in Table 1; a sketch of how such bounds can be computed follows the table. From Table 1, it can be confirmed that our results are less conservative than the ones in [25, 26].

Table 1 Allowable bounds of \(\pmb{\tau_{M}}\) for different \(\pmb{\tau_{m}}\)
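
The entries of such a table can be generated by sweeping \(\tau_{M}\) upward for each fixed \(\tau_{m}\) and recording where the LMIs of Theorem 3.2 stop being feasible. The sketch below shows this pattern; `theorem32_feasible` is a hypothetical helper (its LMI assembly, e.g. via cvxpy as sketched after Theorem 3.1, is omitted), and we assume, as is standard for such tables, that feasibility is monotone in \(\tau_{M}\).

```python
def theorem32_feasible(tau_m, tau_M):
    """Hypothetical oracle: assemble the LMIs (35)-(36) for the given delay
    bounds and return True when a feasible point exists (assembly omitted)."""
    raise NotImplementedError

def max_feasible_tau_M(tau_m, tau_M_cap=50):
    """Largest tau_M <= tau_M_cap for which Theorem 3.2 remains feasible."""
    best = None
    for tau_M in range(tau_m + 1, tau_M_cap + 1):
        if not theorem32_feasible(tau_m, tau_M):
            break                # feasibility only shrinks as tau_M grows
        best = tau_M
    return best

# e.g. print(max_feasible_tau_M(tau_m=2))   # would reproduce one row of Table 1
```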

Example 3

In this example, we will show the application of the proposed method to a biological network, which has been studied in [36]. Here, we consider the following discrete-time genetic regulatory network:

$$ \begin{aligned} &m(k+1)=\tilde{A}m(k)+\tilde{B}f \bigl(p \bigl(k- \tau(k) \bigr) \bigr), \\ &p(k+1)=\tilde{C}p(k)+\tilde{D}m \bigl(k-\tau(k) \bigr), \end{aligned} $$
(39)

where \(m(k)=[m_{1}(k),m_{2}(k),\ldots,m_{n}(k)]^{T}\), \(p(k)=[p_{1}(k),p_{2}(k),\ldots,p_{n}(k)]^{T}\), and \(m_{i}(k)\), \(p_{i}(k)\) can be viewed as the concentrations of the mRNA and the protein of the ith node. \(\tilde{A}=\operatorname{diag}\{\tilde{a}_{1},\tilde{a}_{2},\ldots,\tilde{a}_{n}\}\) with \(|\tilde{a}_{i}|<1\) and \(\tilde{C}=\operatorname{diag}\{\tilde{c}_{1},\tilde{c}_{2},\ldots ,\tilde{c}_{n}\}\) with \(|\tilde{c}_{i}|<1\), where \(\tilde{a}_{i}\) and \(\tilde {c}_{i}\) are the decay rates of the mRNA and the protein, respectively. \(\tilde{D}=\operatorname{diag}\{\tilde {d}_{1},\tilde{d}_{2},\ldots,\tilde{d}_{n}\}\), \(f(p(k))=[f_{1}(p_{1}(k)),f_{2}(p_{2}(k)),\ldots, f_{n}(p_{n}(k))]^{T}\), and \(f_{i}(\cdot )\) denotes the feedback regulation of the protein on the transcription. \(\tilde{B}=(\tilde{b}_{ij})\in R^{n\times n}\) is the coupling matrix of the genetic network, defined as follows: \(\tilde{b}_{ij}=\alpha_{ij}\) if transcription factor j is an activator of gene i; \(\tilde{b}_{ij}=0\) if there is no link from node j to node i; \(\tilde{b}_{ij}=-\alpha_{ij}\) if transcription factor j is a repressor of gene i. Here \(\alpha_{ij}\), the dimensionless transcriptional rate, is a bounded constant.

In this example, consider a three-node genetic regulatory network with the following parameters:

$$ \begin{aligned} &\tilde{A}=\tilde{C}= \begin{bmatrix} 0.1 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & 0.1 \end{bmatrix},\qquad \tilde{B}= \begin{bmatrix} 0 & 0 & -0.5 \\ -0.5 & 0 & 0 \\ 0 & -0.5 & 0 \end{bmatrix}, \\ &\tilde{D}= \begin{bmatrix} 0.08 & 0 & 0 \\ 0 & 0.08 & 0 \\ 0 & 0 & 0.08 \end{bmatrix}. \end{aligned} $$
(40)

Letting

$$\begin{aligned}& e(k)= \begin{bmatrix} m(k) \\ p(k) \end{bmatrix},\qquad g \bigl(e \bigl(k-\tau(k) \bigr) \bigr)= \begin{bmatrix} m(k-\tau(k)) \\ p(k-\tau(k)) \end{bmatrix}, \\& A= \begin{bmatrix} \tilde{A} & 0 \\ 0 & \tilde{C} \end{bmatrix},\qquad B_{1}=0,\qquad B_{2}= \begin{bmatrix} 0 & \tilde{B} \\ \tilde{D} &0 \end{bmatrix},\qquad B_{3}=0. \end{aligned}$$

With these definitions, the genetic regulatory network (39) is transformed into the discrete-time neural network (34). It is assumed that \(\tau (k)=3+2\sin(\frac{k\pi}{2})\). Applying Theorem 3.2, it can be checked that the discrete-time neural network (34) is asymptotically stable, which implies that the genetic regulatory network (39) is stable.
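
As a complement (not in the original), the transformed six-dimensional system can be simulated directly. Per the stacked form displayed above, the delayed term enters through \(g(e(k-\tau(k)))=e(k-\tau(k))\), i.e. linearly; the initial history below is an arbitrary choice.

```python
import numpy as np

# Sketch: simulate e(k+1) = A e(k) + B2 e(k - tau(k)) for Example 3.
At = 0.1*np.eye(3)
Bt = -0.5*np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])   # tilde{B}
Dt = 0.08*np.eye(3)                                      # tilde{D}
A = np.block([[At, np.zeros((3, 3))], [np.zeros((3, 3)), At]])
B2 = np.block([[np.zeros((3, 3)), Bt], [Dt, np.zeros((3, 3))]])
tau = lambda k: int(3 + 2*np.sin(np.pi*k/2))             # tau(k) in {1, 3, 5}

e = {j: np.ones(6) for j in range(-5, 1)}                # constant initial history
for k in range(60):
    e[k + 1] = A @ e[k] + B2 @ e[k - tau(k)]
print("||e(k)|| at k = 10, 30, 60:",
      [round(np.linalg.norm(e[k]), 8) for k in (10, 30, 60)])
```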

5 Conclusions

In this paper, the problem of \(H_{\infty}\) state estimation for discrete-time neural networks with time-varying and distributed delays has been investigated. The presented sufficient conditions are based on a new Lyapunov-Krasovskii functional, appropriate free-weighting matrices, a reciprocally convex approach, and three new zero equalities, and new criteria are established in terms of LMIs. Three numerical examples are given to demonstrate the usefulness and effectiveness of the proposed results. Finally, it is worth noting that the proposed method can be extended to many other cases, such as Markovian jumping neural networks, fuzzy neural networks, and switched neural networks, which deserves further investigation.

References

  1. Wang, ZD, Wei, GL, Feng, G: Reliable \(H_{\infty}\) control for discrete-time piecewise linear systems with infinite distributed delays. Automatica 45(12), 2991-2994 (2009)

  2. Liu, YR, Wang, ZD, Liu, XH: State estimation for discrete-time Markovian jumping neural networks with mixed mode-dependent delays. Phys. Lett. A 372(48), 7147-7155 (2008)

  3. Li, T, Wang, T, Song, AG, Fei, SM: Exponential synchronization for arrays of coupled neural networks with time-delay couplings. Int. J. Control. Autom. Syst. 9(1), 187-196 (2011)

  4. Park, PG, Ko, JW, Jeong, CK: Reciprocally convex approach to stability of systems with time-varying delays. Automatica 47(1), 235-238 (2011)

  5. Park, JH, Park, CH, Kwon, OM, Lee, SM: A new stability criterion for bidirectional associative memory neural networks of neutral-type. Appl. Math. Comput. 199(2), 716-722 (2008)

  6. Zhang, D, Yu, L: \(H_{\infty}\) Filtering for linear neutral systems with mixed time-varying delays and nonlinear perturbations. J. Franklin Inst. 347(7), 1374-1390 (2010)

  7. Lin, C, Wang, QG, Lee, TH: A less conservative robust stability test for linear uncertain time-delay systems. IEEE Trans. Autom. Control 51(1), 87-91 (2006)

  8. Sun, J, Chen, J: Stability analysis of static recurrent neural networks with interval time-varying delay. Appl. Math. Comput. 221(15), 111-120 (2013)

  9. Wang, ZD, Ho, DWC, Liu, XH: State estimation for delayed neural networks. IEEE Trans. Neural Netw. 16(1), 279-284 (2005)

  10. Lu, RQ, Wu, HY, Bai, JJ: New delay-dependent robust stability criteria for uncertain neutral systems with mixed delays. J. Franklin Inst. 351(3), 1386-1399 (2014)

  11. Wu, ZG, Park, JH, Su, HY, Chu, J: New results on exponential passivity of neural networks with time-delays. Nonlinear Anal., Real World Appl. 13(4), 1593-1599 (2012)

  12. Ji, DH, Koo, JH, Won, SC, Lee, SM, Park, JH: Passivity-based control for Hopfield neural networks using convex representation. Appl. Math. Comput. 217(13), 6168-6175 (2011)

  13. Lee, SM, Kwon, OM, Park, JH: A novel delay-dependent criterion for delayed neural networks of neutral type. Phys. Lett. A 374(17-18), 1843-1848 (2010)

  14. Chen, WH, Zheng, WX: Global asymptotic stability of a class of neural networks with distributed delays. IEEE Trans. Circuits Syst. I 53(3), 644-652 (2006)

  15. Tian, JK, Zhong, SM, Wang, Y: Improved exponential stability criteria for neural networks with time-varying delays. Neurocomputing 97(11), 164-173 (2012)

  16. Liang, JL, Wang, ZD, Liu, XH: Global synchronization in an array of discrete-time neural networks with nonlinear coupling and time-varying delays. Int. J. Neural Syst. 19(1), 57-63 (2009)

  17. Kwon, OM, Park, MJ, Lee, SM, Cha, EJ: New criteria on delay-dependent stability for discrete-time neural networks with time-varying delays. Neurocomputing 121, 185-194 (2013)

  18. Song, QK, Wang, ZD: A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays. Phys. Lett. A 368(1-2), 134-145 (2007)

  19. Zhang, BY, Xu, SY, Zou, Y: Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays. Neurocomputing 72(1-3), 321-330 (2008)

  20. Song, CW, Gao, HJ, Zheng, WX: A new approach to stability analysis of discrete-time recurrent neural networks with time-varying delay. Neurocomputing 72(10-12), 2563-2568 (2009)

  21. Wu, ZG, Shi, P, Su, HY, Chu, J: Dissipativity analysis for discrete-time stochastic neural networks with time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 24(3), 345-355 (2013)

  22. Li, JN, Li, LS: Mean-square exponential stability for stochastic discrete-time recurrent neural networks with mixed time delays. Neurocomputing 151, 790-797 (2015)

  23. Fu, MY, Souza, CE: State estimation for linear discrete-time systems using quantized measurements. Automatica 45(12), 2937-2945 (2009)

  24. Liang, JL, Shen, B, Dong, HL, Lam, J: Robust distributed state estimation for sensor networks with multiple stochastic communication delays. Int. J. Syst. Sci. 42(9), 1459-1471 (2011)

  25. Liu, YR, Wang, ZD, Liu, XH: Asymptotic stability for neural networks with mixed time-delays: the discrete-time case. Neural Netw. 22(1), 67-74 (2009)

  26. Li, T, Song, AG, Feng, SM: Novel stability criteria on discrete-time neural networks with time-varying and distributed delays. Int. J. Neural Syst. 19(4), 269-283 (2009)

  27. Liang, JL, Lam, J, Wang, ZD: State estimation for Markov-type genetic regulatory networks with delays and uncertain mode transition rates. Phys. Lett. A 373(47), 4328-4337 (2009)

  28. Zhang, D, Li, Y: Exponential state estimation for Markovian jumping neural networks with time-varying discrete and distributed delays. Neural Netw. 35, 103-111 (2012)

  29. Huang, H, Huang, TW, Chen, XP: Guaranteed \(H_{\infty}\) performance state estimation of delayed static neural networks. IEEE Trans. Circuits Syst. II 60(6), 371-375 (2013)

  30. Cheng, J, Zhong, SM, Zhong, QS, Zhu, H, Du, YH: Finite-time boundedness of state estimation for neural networks with time-varying delays. Neurocomputing 129, 257-264 (2014)

  31. Balasubramaniam, P, Kalpana, M, Rakkiyappan, R: State estimation for fuzzy cellular neural networks with time delay in the leakage term, discrete and unbounded distributed delays. Comput. Math. Appl. 62(10), 3959-3972 (2011)

  32. Syed Ali, M, Saravanakumar, R: Augmented Lyapunov approach to \(H_{\infty}\) state estimation of static neural networks with discrete and distributed time-varying delays. Chin. Phys. B 24, 050201 (2015). http://iopscience.iop.org/1674-1056/24/5/050201

  33. Hu, J, Chen, DY, Du, JH: State estimation for a class of discrete nonlinear systems with randomly occurring uncertainties and distributed sensor delays. Int. J. Gen. Syst. 43(3-4), 387-401 (2014)

  34. Liu, YJ, Lee, SM, Park, JH: A study on \(H_{\infty}\) state estimation of static neural networks with time-varying delays. Appl. Math. Comput. 226(1), 589-597 (2014)

  35. Zhang, J, Wang, ZD, Ding, DR, Liu, XH: \(H_{\infty}\) State estimation for discrete-time delayed neural networks with randomly occurring quantizations and missing measurements. Neurocomputing 148(19), 388-396 (2015)

  36. Li, P, Lam, J, Shu, Z: On the transient and steady-state estimates of interval genetic regulatory networks. IEEE Trans. Syst. Man Cybern., Part B, Cybern. 40(2), 336-349 (2010)

Acknowledgements

This work was supported by National Natural Science Foundation of China no. 61273015, the Natural Science Research Project of Fuyang Teachers College no. 2013FSKJ09.

Author information

Correspondence to Wei Kang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors drafted the manuscript and read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Kang, W., Zhong, S. & Cheng, J. \(H_{\infty}\) State estimation for discrete-time neural networks with time-varying and distributed delays. Adv Differ Equ 2015, 263 (2015). https://doi.org/10.1186/s13662-015-0603-7
