

Variance-constrained resilient \(H_{\infty }\) state estimation for time-varying neural networks with randomly varying nonlinearities and missing measurements

Abstract

This paper addresses the resilient \(H_{\infty }\) state estimation problem under variance constraint for discrete uncertain time-varying recurrent neural networks with randomly varying nonlinearities and missing measurements. The phenomena of missing measurements and randomly varying nonlinearities are described by introducing Bernoulli distributed random variables whose occurrence probabilities are known a priori. Besides, multiplicative noise is employed to characterize the estimator gain perturbation. Our main purpose is to design a time-varying state estimator such that, for all missing measurements, randomly varying nonlinearities and estimator gain perturbations, both the estimation error variance constraint and the prescribed \(H_{\infty }\) performance requirement are met simultaneously, and we provide sufficient criteria to this end. Finally, the feasibility of the proposed variance-constrained resilient \(H_{\infty }\) state estimation method is verified by simulations.

1 Introduction

In the past two decades, the popularization of the Internet has greatly changed our way of life through rapid communication [1,2,3]. Accordingly, many scholars have witnessed successful applications of recurrent neural networks (RNNs) in wide-ranging fields including pattern recognition, image processing, associative memory and optimization [4,5,6]. Nevertheless, it should be noted that nonlinearities are inherent characteristics of the interactions between neurons, which indeed affect the understanding and analysis of neural networks (NNs). Thus, some efficient methods have been developed to analyze different NNs. For example, an effective finite-time synchronization criterion has been proposed in [7] for coupled stochastic NNs, where both Markovian switching parameters and saturation have been addressed. Moreover, useful state estimation algorithms have been given in [8] for delayed NNs to guarantee the \(H_{\infty }\) as well as passivity requirements, and in [9] for bidirectional associative NNs subject to mixed time-delays. During the analysis and implementation of methods related to RNNs, it should be noticed that the neuron states may not always be available in reality, so there is a need to estimate them by utilizing effective estimation methods [10,11,12]. Until now, many results have been published on the state estimation problem for different types of dynamical networks [13,14,15,16]. Nevertheless, it is worth mentioning that much of the published literature is only applicable to the time-invariant case, which might lead to certain application limitations. Hence, more and more researchers have paid attention to the state estimation problem for time-varying systems and proposed a variety of methods to analyze the behaviors of dynamical systems; see e.g. [17,18,19,20] and the references therein. To be specific, an event-based joint input and state estimation strategy has been presented in [19] to ensure that the covariance of the estimation error has an upper bound at any time for the addressed linear discrete time-varying systems. Accordingly, there is a demand for efficient analysis methods for time-varying RNNs.

During network communications or transmissions, perfect measurements cannot always be available; thus increasing research effort has been devoted to state estimation problems subject to missing measurements [20,21,22]. Missing measurements are inevitable, primarily due to signal interference during transmission caused by limited communication channels, noise and so forth. For example, the study of missing measurements has been conducted in the areas of principal component analysis and partial least squares models [23]. Actually, it is well recognized that missing measurements can lead to poor system performance for the addressed NNs [24, 25]. Accordingly, a large number of results have been given to deal with the state estimation problem for dynamical networks subject to missing measurements [26,27,28,29]. For example, in [27, 28], \(H_{\infty }\) state estimation design strategies have been proposed for discrete delayed neural networks, where the impacts of multiple missing measurements have been discussed. Nevertheless, few results have been reported on the state estimation problem for time-varying dynamical networks with missing measurements [20, 30]. Recently, a new optimal state estimation algorithm has been developed in [20] for time-varying complex networks, where the impacts of both stochastic coupling and missing measurements with uncertain occurrence probabilities have been addressed. However, it is worth noticing that few scholars have studied the state estimation problem of time-varying neural networks with missing measurements, which constitutes one of the motivations of this research.

Recently, the phenomenon of randomly varying nonlinearities has been modeled and discussed in various fields [31, 32]. Randomly varying nonlinearities have a significant influence in physics, engineering, information science and other fields, and hence dynamical system analysis problems with randomly varying nonlinearities have received increasing interest [33,34,35]. For example, a distributed state estimation algorithm has been established in [33] for a class of discrete systems over sensor networks with incomplete measurements and randomly varying nonlinearities, and a probability-dependent \(H_{\infty }\) synchronization condition has been presented in [35] for dynamical networks by designing a new control method. During the implementation process, the presented estimation method may be fragile/non-resilient for various reasons, such as numerical rounding errors, finite word length effects and analog-to-digital conversion accuracy [36, 37]. Accordingly, non-fragile/resilient estimation schemes have wide application fields, as in water quality processing [38] and synchronous generators [39]. In [40, 41], non-fragile \(H_{\infty }\) state estimation algorithms have been given for dynamical systems with Markov switching effects, where possible estimator parameter variations have been modeled and discussed. Resilient \(H_{\infty }\) filtering methods have been given in [42] for switched delayed neural networks with an ensured finite-time criterion and in [43] for time-varying dynamical systems over sensor networks with communication protocols. However, few resilient \(H_{\infty }\) estimation methods are available for time-varying RNNs. On the other hand, from an engineering-oriented viewpoint on state estimation problems, it is of great practical significance to ensure that the estimation error covariance has a certain upper bound [44]. Unlike minimum-variance estimation of the error covariance, the variance-constrained state estimation strategy provides a looser evaluation by introducing a specific upper bound constraint, which reflects the admissible accuracy of the proposed estimation methods. Very recently, a new distributed variance-constrained filtering algorithm has been established in [45] for time-varying sensor networks subject to deception attacks, and a state estimation scheme has been given in [46] for time-varying complex networks to attenuate the impacts induced by randomly varying topologies. As such, we make the first attempt to handle the resilient \(H_{\infty }\) estimation problem under variance constraint for time-varying RNNs with randomly varying nonlinearities and missing measurements.

Inspired by the above discussions, we aim to design a resilient \(H_{\infty }\) state estimation approach for the addressed discrete time-varying RNNs subject to missing measurements and randomly varying nonlinearities under a variance constraint. In particular, the estimator gain perturbation is modeled by Gaussian white noise. By resorting to matrix theory and stochastic analysis techniques, a new nonlinear time-varying state estimation method is proposed, which ensures the error variance boundedness and the prescribed \(H_{\infty }\) performance requirement simultaneously. From an engineering viewpoint, the proposed recursive state estimation scheme has a time-varying feature applicable to the estimation problems of neural networks, which makes it suitable for online estimation applications. Moreover, some sufficient conditions characterized by matrix inequalities are given to ensure the two mixed performance indices, which achieve a satisfactory disturbance attenuation level together with the desired estimation covariance performance, thus broadening the application domains. The major features of the paper can be summarized as follows: (1) the \(H_{\infty }\) state estimation problem under variance constraint is, for the first time, investigated for a class of discrete time-varying RNNs subject to missing measurements and randomly varying nonlinearities; (2) a new probability-dependent time-varying state estimation algorithm is proposed, which can be implemented in terms of the solutions to certain matrix inequalities; (3) the impacts of the missing measurements and randomly varying nonlinearities on the two estimation performance indices are discussed and examined simultaneously; and (4) the proposed estimation algorithm has a time-varying characteristic applicable to online applications, which offers new advantages compared with the existing estimation results for time-invariant neural networks.

Notations. The symbols used throughout the paper are fairly standard. \(\mathbb{R}^{r}\) denotes the r-dimensional Euclidean space. \(\mathbb{N}^{+}\) stands for the set of positive integers. For a matrix A and a vector x, \(A^{T}\) and \(x^{T}\) represent the transposes of A and x, respectively. The identity matrix is denoted by I and the zero matrix by 0. \(\mathbb{E}\{x\}\) denotes the mathematical expectation of x. \(X>0\) means that X is a positive-definite symmetric matrix. In symmetric block matrices, an asterisk represents a term induced by symmetry, and diag{…} stands for a block-diagonal matrix. Matrices are assumed to have compatible dimensions if not explicitly specified.

2 Problem formulation and preliminaries

In this paper, we consider the n-neuron time-varying neural network given by

$$\begin{aligned}& x_{k+1} =(A_{k}+\Delta A_{k})x_{k}+\alpha _{k}B_{1k}f(x_{k})+(1- \alpha _{k})B_{2k}g(x_{k})+C_{k}v_{1k}, \\& y_{k} =\lambda _{k}D_{k}x_{k}+v_{2k}, \\& z_{k} =M_{k}x_{k}, \end{aligned}$$
(1)

where \(x_{k}= [ x_{1,k} \ x_{2,k} \ \ldots \ x_{n,k} ] ^{T}\in {\mathbb{R}}^{n}\) represents the state vector of the neural network, \(y_{k}\in {\mathbb{R}}^{m}\) denotes the measurement output, \(z_{k}\in {\mathbb{R}^{r}}\) stands for the controlled output, \(A_{k}=\operatorname{diag}\{a_{1,k},a_{2,k},\ldots ,a_{n,k}\}\) is the state coefficient matrix, \(C_{k}\), \(D_{k}\) and \(M_{k}\) are known real matrices with appropriate dimensions, and \(v_{1k}\in {\mathbb{R}}^{n_{w}}\) and \(v_{2k}\in {\mathbb{R}}^{m}\) are zero-mean Gaussian white noises with covariances \(V_{1}>0\) and \(V_{2}>0\), respectively. \(B_{1k}= [ b_{1ij}(k) ] _{n\times n}\) and \(B_{2k}= [ b_{2ij}(k) ] _{n\times n}\) are the connection weight matrices. \(f(x_{k})= [ f_{1}(x_{1,k}) \ \ldots \ f_{n}(x_{n,k}) ] ^{T}\) and \(g(x_{k})= [ g_{1}(x_{1,k}) \ \ldots \ g_{n}(x_{n,k}) ] ^{T}\) are the neuron activation functions. \(\Delta A_{k}\) describes the parameter uncertainty satisfying

$$\begin{aligned} \Delta A_{k} =&H_{k}F_{k}N_{k}, \end{aligned}$$
(2)

where \(H_{k}\) and \(N_{k}\) are known matrices with appropriate dimensions, and the unknown matrix \(F_{k}\) satisfies

$$\begin{aligned} F_{k}^{T}F_{k} \leq &I, \quad \forall k\in {\mathbb{N^{+}}}. \end{aligned}$$
(3)

The Bernoulli distributed random variables \(\alpha _{k}\) and \(\lambda _{k}\) are utilized to describe the phenomena of randomly varying nonlinearities and missing measurements, respectively, and satisfy

$$\begin{aligned}& \operatorname{Prob}\{\alpha _{k} =1\}=\mathbb{E}\{ \alpha _{k}\}=\bar{ \alpha }, \qquad \operatorname{Prob}\{\alpha _{k}=0\}=1-\bar{\alpha }, \\& \operatorname{Prob}\{\lambda _{k} =1\}=\mathbb{E}\{\lambda _{k}\}=\bar{ \lambda }, \qquad \operatorname{Prob}\{\lambda _{k}=0\}=1-\bar{\lambda }, \end{aligned}$$

where \(\bar{\alpha }\in [0,1]\) and \(\bar{\lambda }\in [0,1]\) are known constants.

Assumption 1

For the activation functions \(f(\cdot )\) and \(g(\cdot )\) with \(f(0)=g(0)=0\), there exist scalars \(\lambda _{i}^{-}\), \(\lambda _{i}^{+}\), \(\sigma _{i}^{-}\) and \(\sigma _{i}^{+}\) (\(i=1,2,\ldots ,n\)) satisfying the following conditions:

$$\begin{aligned}& \begin{gathered} \lambda _{i}^{-}\leq \frac{f_{i}(s_{1})-f_{i}(s_{2})}{s_{1}-s_{2}} \leq \lambda _{i}^{+}, \\ \sigma _{i}^{-}\leq \frac{g_{i}(s_{1})-g_{i}(s_{2})}{s_{1}-s_{2}} \leq \sigma _{i}^{+}, \quad \forall s_{1},s_{2} \in {{\mathbb{R}}}, s_{1}\neq s_{2}. \end{gathered} \end{aligned}$$
(4)

In order to estimate the states of neurons, the following time-varying nonlinear state estimator is constructed:

$$\begin{aligned}& \begin{gathered} \hat{x}_{k+1} =A_{k} \hat{x}_{k}+\bar{\alpha }B_{1k}f(\hat{x}_{k})+(1- \bar{ \alpha })B_{2k}g(\hat{x}_{k})+ (K_{k}+ \delta _{k}\bar{K}_{k}) (y_{k}-\bar{ \lambda }D_{k}\hat{x}_{k}), \\ \hat{z}_{k} =M_{k}\hat{x}_{k}, \end{gathered} \end{aligned}$$
(5)

where \(\hat{x}_{k}\) is the estimate of the neuron state \(x_{k}\), \(\bar{K}_{k}\) is a known real matrix with appropriate dimension, and \(\delta _{k}\) is a zero-mean Gaussian white noise with unit variance. \(K_{k}\) is the estimator gain matrix to be determined later. In the sequel, we suppose that \(\alpha _{k}\), \(\lambda _{k}\), \(v_{1k}\), \(v_{2k}\) and \(\delta _{k}\) are mutually independent.
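For intuition, the following Python sketch (a minimal simulation, not part of the paper's derivation) propagates the plant (1) and the estimator (5) for a two-neuron network in the nominal case \(\Delta A_{k}=0\). All numerical values, including the constant placeholder gains K and Kbar, are hypothetical and serve only to exercise the recursions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 2, 2, 50                       # neurons, outputs, horizon
A = np.diag([0.6, -0.5])                 # A_k (held constant for brevity)
B1 = 0.3 * np.ones((n, n)); B2 = 0.2 * np.eye(n)
C = 0.1 * np.eye(n); D = np.eye(m)
V1 = 0.01 * np.eye(n); V2 = 0.01 * np.eye(m)
abar, lbar = 0.8, 0.9                    # E{alpha_k}, E{lambda_k}
K = 0.1 * np.ones((n, m))                # placeholder estimator gain K_k
Kbar = 0.05 * np.ones((n, m))            # nominal perturbation matrix bar{K}_k

f = np.tanh                              # sector-bounded activations
g = lambda s: 0.5 * np.tanh(s)           # (both satisfy Assumption 1)

x, xh = np.array([0.5, -0.3]), np.zeros(n)
for k in range(N):
    alpha = rng.random() < abar          # Bernoulli switch: nonlinearity
    lam = rng.random() < lbar            # Bernoulli switch: missing data
    v1 = rng.multivariate_normal(np.zeros(n), V1)
    v2 = rng.multivariate_normal(np.zeros(m), V2)
    delta = rng.standard_normal()        # estimator gain perturbation noise
    y = lam * (D @ x) + v2               # measurement in (1)
    x = A @ x + alpha * B1 @ f(x) + (1 - alpha) * B2 @ g(x) + C @ v1
    xh = (A @ xh + abar * B1 @ f(xh) + (1 - abar) * B2 @ g(xh)
          + (K + delta * Kbar) @ (y - lbar * D @ xh))
print("final estimation error e_N =", x - xh)
```

The Bernoulli draws realize the missing measurements and the randomly varying nonlinearities, while \(\delta _{k}\) perturbs the gain at every step.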

Remark 1

During the implementation process, the state estimation performance of the neural networks is usually affected by numerical rounding errors, finite word length effects and analog-to-digital conversion accuracy, which inevitably lead to possible deviations of the estimates provided by the state estimator. Thus, the presented estimation method might be a fragile/non-resilient one. Therefore, we take the estimator gain perturbations into account during the estimator design, with the hope of providing a resilient time-varying state estimator with admissible adjustment ability. Accordingly, the estimator gain perturbations are modeled by a zero-mean Gaussian white noise \(\delta _{k}\) and the nominal matrix \(\bar{K}_{k}\), where the realizations of the white noise \(\delta _{k}\) characterize the admissible errors of the estimator gain. As such, a resilient state estimation scheme under prescribed performance indices is to be given later for the addressed time-varying RNNs.

Let the estimation error be \(e_{k}=x_{k}-\hat{x}_{k}\) and the controlled output estimation error be \(\tilde{z}_{k}=z_{k}-\hat{z}_{k}\). Then the estimation error dynamics can be obtained from (1) and (5) as follows:

$$\begin{aligned}& \begin{gathered} e_{k+1} =\bigl[A_{k}-(K_{k}+ \delta _{k} \bar{K}_{k})\bar{\lambda }D_{k} \bigr]e _{k}+\Delta A_{k}x_{k}+\tilde{ \alpha }_{k}B_{1k}f(x_{k})+\bar{\alpha }B_{1k}f(e_{k}) \\ \hphantom{e_{k+1} =}{}-\tilde{\alpha }_{k}B_{2k}g(x_{k})+(1- \bar{\alpha })B_{2k}g(e_{k})-(K _{k}+\delta _{k}\bar{K}_{k}) (\tilde{\lambda }_{k}D_{k}x_{k}+v_{2k}) +C_{k}v_{1k}, \\ \tilde{z}_{k} =M_{k}e_{k}, \end{gathered} \end{aligned}$$
(6)

with \(f(e_{k})=f(x_{k})-f(\hat{x}_{k})\), \(g(e_{k})=g(x_{k})-g(\hat{x} _{k})\), \(\tilde{\alpha }_{k}=\alpha _{k}-\bar{\alpha }\) and \(\tilde{\lambda }_{k}=\lambda _{k}-\bar{\lambda }\).

To simplify the notation, we can define

$$\begin{aligned}& \eta _{k} = \begin{bmatrix} x_{k}^{T} & e_{k}^{T} \end{bmatrix} ^{T},\qquad v_{k}= \begin{bmatrix} v_{1k}^{T} & v_{2k}^{T} \end{bmatrix} ^{T}, \\& f(\eta _{k}) = \begin{bmatrix} f^{T}(x_{k}) & f^{T}(e_{k}) \end{bmatrix} ^{T},\qquad g(\eta _{k})= \begin{bmatrix} g^{T}(x_{k}) & g^{T}(e_{k}) \end{bmatrix} ^{T}. \end{aligned}$$

Considering (1), (6) and the above notations, we can easily derive the following augmented system:

$$\begin{aligned}& \begin{gathered} \eta _{k+1} = (\mathcal{A}_{k}+ \tilde{\lambda }_{k}\mathcal{D}_{1k}+ \tilde{\lambda }_{k}\delta _{k}\mathcal{D}_{2k}+\delta _{k}A_{2k})\eta _{k}+(\mathcal{B}_{1k}+ \tilde{\alpha }_{k}B_{k})f(\eta _{k}) \\ \hphantom{\eta _{k+1} =}{} +(\mathcal{B}_{2k}-\tilde{\alpha }_{k} \check{B}_{k})g(\eta _{k})+( \mathcal{C}_{1k}+ \delta _{k}\mathcal{C}_{2k})v_{k}, \\ \tilde{z}_{k} =\mathcal{M}_{k}\eta_{k}, \end{gathered} \end{aligned}$$
(7)

where

$$\begin{aligned}& \mathcal{A}_{k} = \begin{bmatrix} A_{k}+\Delta A_{k} & 0 \\ \Delta A_{k} & A_{k}-\bar{\lambda }K_{k}D_{k} \end{bmatrix}, \qquad \mathcal{D}_{1k}= \begin{bmatrix} 0& 0 \\ -K_{k}D_{k} & 0 \end{bmatrix},\\& \mathcal{D}_{2k}= \begin{bmatrix} 0& 0 \\ -\bar{K}_{k}D_{k} & 0 \end{bmatrix}, \qquad A_{2k} = \begin{bmatrix} 0 & 0 \\ 0 &-\bar{\lambda }\bar{K}_{k}D_{k} \end{bmatrix}, \qquad \mathcal{B}_{1k}= \begin{bmatrix} \bar{\alpha }B_{1k} & 0 \\ 0 &\bar{\alpha }B_{1k} \end{bmatrix},\\& B_{k}= \begin{bmatrix} B_{1k} & 0 \\ B_{1k}& 0 \end{bmatrix}, \qquad \mathcal{B}_{2k} = \begin{bmatrix} (1-\bar{\alpha })B_{2k} & 0 \\ 0 & (1-\bar{\alpha })B_{2k} \end{bmatrix}, \qquad \check{B}_{k}= \begin{bmatrix} B_{2k} & 0 \\ B_{2k}&0 \end{bmatrix}, \\& \mathcal{C}_{1k}= \begin{bmatrix} C_{k}& 0 \\ C_{k} & -K_{k} \end{bmatrix}, \qquad \mathcal{C}_{2k} = \begin{bmatrix} 0& 0 \\ 0 & -\bar{K}_{k} \end{bmatrix}, \qquad \mathcal{M}_{k}= \begin{bmatrix} 0 & M_{k} \end{bmatrix}. \end{aligned}$$
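The block structure above is easy to assemble programmatically. The following helper is a sketch mirroring the definitions of the augmented matrices in (7), with the uncertainty \(\Delta A_{k}\) defaulting to zero (nominal case); it is reused by the numerical sketches below.

```python
import numpy as np

def augmented_matrices(A, B1, B2, C, D, K, Kbar, abar, lbar, dA=None):
    """Assemble the block matrices of the augmented system (7);
    dA plays the role of Delta A_k (zero, i.e. nominal, by default)."""
    n, m, nw = A.shape[0], D.shape[0], C.shape[1]
    Z = np.zeros((n, n))
    dA = Z if dA is None else dA
    Acal = np.block([[A + dA, Z], [dA, A - lbar * K @ D]])
    D1 = np.block([[Z, Z], [-K @ D, Z]])
    D2 = np.block([[Z, Z], [-Kbar @ D, Z]])
    A2 = np.block([[Z, Z], [Z, -lbar * Kbar @ D]])
    B1c = np.block([[abar * B1, Z], [Z, abar * B1]])
    Bk = np.block([[B1, Z], [B1, Z]])
    B2c = np.block([[(1 - abar) * B2, Z], [Z, (1 - abar) * B2]])
    Bchk = np.block([[B2, Z], [B2, Z]])
    C1 = np.block([[C, np.zeros((n, m))], [C, -K]])
    C2 = np.block([[np.zeros((n, nw)), np.zeros((n, m))],
                   [np.zeros((n, nw)), -Kbar]])
    return Acal, D1, D2, A2, B1c, Bk, B2c, Bchk, C1, C2
```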

Next, we introduce the covariance matrix \(X_{k}\) described by

$$\begin{aligned} X_{k} =&\mathbb{E}\bigl\{ \eta _{k} \eta _{k}^{T}\bigr\} =\mathbb{E}\left \{ \begin{aligned} \begin{bmatrix} x_{k}\\ e_{k} \end{bmatrix} \begin{bmatrix} x_{k}\\ e_{k} \end{bmatrix} ^{T} \end{aligned} \right \}. \end{aligned}$$
(8)

The main purpose of this paper is to design a time-varying nonlinear state estimator (5) such that the following two requirements are achieved simultaneously.

  1. (R1)

    Let the scalar \(\gamma >0\), the positive-definite matrices \(W_{\varphi }\) and \(W_{\phi }\) be given. For the initial state \(\eta _{0}\), the estimation error \(\tilde{z}_{k}\) satisfies the following constraint:

    $$\begin{aligned} J_{1}: =&\mathbb{E} \Biggl\{ \begin{aligned} \sum_{k=0}^{N-1}\bigl( \Vert \tilde{z}_{k} \Vert ^{2}-\gamma ^{2} \Vert v_{k} \Vert ^{2}_{W _{\varphi }} \bigr) \end{aligned} \Biggr\} -\gamma ^{2}\mathbb{E} \bigl\{ \eta _{0}^{T}W_{\phi }\eta _{0} \bigr\} < 0, \end{aligned}$$
    (9)

    where \(\| v_{k}\|^{2}_{W_{\varphi }}=v_{k}^{T}W_{\varphi }v_{k}\).

  2. (R2)

    The estimation error covariance satisfies the following bounded constraint:

    $$\begin{aligned} J_{2}: =&\mathbb{E}\bigl\{ e_{k}e_{k}^{T} \bigr\} \leq \varPsi _{k}, \end{aligned}$$
    (10)

    where \(\varPsi _{k}~(0\leq k < N)\) is a sequence of pre-defined upper bound matrices, which reflects the admissible estimation precision demanded in the specific situation.

Remark 2

In this paper, our main purpose is to design a time-varying state estimator such that, for all missing measurements, randomly varying nonlinearities and estimator gain perturbations, both the estimation error variance constraint and the prescribed \(H_{\infty }\) performance requirement are met simultaneously by providing some sufficient criteria. On the one hand, the \(H_{\infty }\) performance requirement over the finite horizon reflects the attenuation capacity of the presented estimation algorithm against the effects of the exogenous disturbances and the initial value. On the other hand, the upper bound constraint on the estimation error covariance is introduced to evaluate the estimation accuracy. Compared with traditional estimation methods in the minimum variance framework, a prescribed upper bound constraint on the estimation error covariance needs to be ensured, which provides another way to characterize the estimation performance.

Remark 3

In practical circumstances, both nonlinear effects and random coupling are inevitable [7]. Moreover, NNs commonly consist of a large number of highly interconnected neurons. Thus, it is generally difficult to obtain all states of the neurons due to the inherent interaction effects. Consequently, it is necessary to propose a proper way of estimating the state information of all neurons, which is of both theoretical importance and practical significance. For example, a battery state-of-charge estimator has been designed in [47] to testify to the usefulness of merging fuzzy neural networks with a novel learning structure. As such, the desirable state estimation strategy with resilience ability for time-varying RNNs subject to missing measurements and randomly varying nonlinearities provides an important solution method, thereby enriching the state estimation techniques for NNs.

At the end of this section, the following lemmas are introduced to facilitate further derivations.

Lemma 1

([26])

Suppose that \(\varGamma =\operatorname{diag}\{\mu _{1},\mu _{2},\ldots ,\mu _{n} \}\) is a diagonal matrix. For \(x=[ x_{1} \ x_{2} \ \ldots \ x_{n}]^{T}\in \mathbb{R}^{n}\), let \(\mathcal{H}(x)=[ h_{1}(x_{1}) \ h_{2}(x_{2}) \ \ldots \ h_{n}(x_{n})]^{T}\) be a continuous nonlinear function satisfying

$$\begin{aligned} l_{i}^{-}\leq \frac{h_{i}(s)}{s}\leq l_{i}^{+}, \quad s\in {\mathbb{R}}, s\neq 0 \end{aligned}$$

with \(l_{i}^{+}\) and \(l_{i}^{-}\) being constant scalars. Then

$$\begin{aligned} \begin{bmatrix} x \\ \mathcal{H}(x) \end{bmatrix}^{T} \begin{bmatrix} \varGamma L_{1} &-\varGamma L_{2} \\ -\varGamma L_{2} & \varGamma \end{bmatrix} \begin{bmatrix} x \\ \mathcal{H}(x) \end{bmatrix}\leq 0, \end{aligned}$$

where \(L_{1}=\operatorname{diag}\{l_{1}^{+}l_{1}^{-},l_{2}^{+}l_{2}^{-}, \ldots ,l_{n}^{+}l_{n}^{-}\}\) and \(L_{2}=\operatorname{diag} \{\frac{l _{1}^{+}+l_{1}^{-}}{2},\frac{l_{2}^{+}+l_{2}^{-}}{2},\ldots ,\frac{l _{n}^{+}+l_{n}^{-}}{2} \}\).
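As a quick numerical sanity check of Lemma 1 (a sketch with hypothetical data): for \(h_{i}=\tanh \), which lies in the sector \([0,1]\), the quadratic form is nonpositive for random samples.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
l_minus, l_plus = np.zeros(n), np.ones(n)    # tanh lies in the sector [0, 1]
L1 = np.diag(l_plus * l_minus)               # diag{ l_i^+ l_i^- }
L2 = np.diag((l_plus + l_minus) / 2)         # diag{ (l_i^+ + l_i^-)/2 }
Gamma = np.diag(rng.uniform(0.1, 1.0, n))    # any positive diagonal matrix

Q = np.block([[Gamma @ L1, -Gamma @ L2], [-Gamma @ L2, Gamma]])
for _ in range(1000):
    x = rng.standard_normal(n)
    v = np.concatenate([x, np.tanh(x)])
    assert v @ Q @ v <= 1e-12                # the quadratic form in Lemma 1
```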

Lemma 2

([48])

Let R, F, H and Q be real matrices of appropriate dimensions with Q and F satisfying \(Q=Q^{T}\) and \(FF^{T}\leq I\). Then \(Q+RFH+H^{T}F^{T}R^{T}<0\) holds if and only if there exists \(\epsilon >0\) such that

$$\begin{aligned} Q+\epsilon RR^{T}+\epsilon ^{-1}H^{T}H < 0. \end{aligned}$$

Lemma 3

If the activation functions \(f(s)\) and \(g(s)\) satisfy the conditions in Assumption 1, then the following inequalities hold:

$$ \begin{gathered} f(s)f^{T}(s) \leq \biggl\{ \frac{\rho +\frac{1}{\rho }}{2(1-\rho )}( \varLambda _{2}+\varLambda _{0})^{2}+ \frac{1}{\rho (1-\rho )}(\varLambda _{2}- \varLambda _{0})^{2} \biggr\} \Vert s \Vert ^{2}, \\ g(s)g^{T}(s) \leq \biggl\{ \frac{\rho +\frac{1}{\rho }}{2(1-\rho )}( \varSigma _{2}+\varSigma _{0})^{2}+ \frac{1}{\rho (1-\rho )}(\varSigma _{2}-\varSigma _{0})^{2} \biggr\} \Vert s \Vert ^{2}, \quad \rho \in (0,1), \end{gathered} $$
(11)

where

$$ \begin{gathered} \varLambda _{2}=\operatorname{diag} \biggl\{ \frac{\lambda _{1}^{+}+\lambda _{1} ^{-}}{2},\frac{\lambda _{2}^{+}+\lambda _{2}^{-}}{2},\ldots ,\frac{ \lambda _{n}^{+}+\lambda _{n}^{-}}{2} \biggr\} =\operatorname{diag}\bigl\{ \lambda _{i}^{1}\bigr\} , \\ \varLambda _{0}=\operatorname{diag} \biggl\{ \frac{\lambda _{1}^{+}-\lambda _{1} ^{-}}{2}, \frac{\lambda _{2}^{+}-\lambda _{2}^{-}}{2},\ldots ,\frac{ \lambda _{n}^{+}-\lambda _{n}^{-}}{2} \biggr\} =\operatorname{diag} \bigl\{ \lambda _{i}^{0}\bigr\} , \\ \varSigma _{2}=\operatorname{diag} \biggl\{ \frac{\sigma _{1}^{+}+\sigma _{1}^{-}}{2}, \frac{ \sigma _{2}^{+}+\sigma _{2}^{-}}{2},\ldots ,\frac{\sigma _{n}^{+}+\sigma _{n}^{-}}{2} \biggr\} =\operatorname{diag} \bigl\{ \sigma _{i}^{1}\bigr\} , \\ \varSigma _{0}=\operatorname{diag} \biggl\{ \frac{\sigma _{1}^{+}-\sigma _{1}^{-}}{2}, \frac{ \sigma _{2}^{+}-\sigma _{2}^{-}}{2},\ldots ,\frac{\sigma _{n}^{+}-\sigma _{n}^{-}}{2} \biggr\} =\operatorname{diag} \bigl\{ \sigma _{i}^{0}\bigr\} . \end{gathered} $$
(12)

Proof

The proof of this lemma can be obtained easily, hence it is omitted for brevity. □
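For later numerical use, the weight matrices in (11) can be evaluated directly from the sector bounds via (12); the following is a small sketch with hypothetical data.

```python
import numpy as np

def lemma3_weight(l_plus, l_minus, rho):
    """Weight matrix on the right-hand side of (11), built from the
    sector bounds via (12); rho must lie in (0, 1)."""
    assert 0.0 < rho < 1.0
    M2 = np.diag((l_plus + l_minus) / 2)     # Lambda_2 (or Sigma_2)
    M0 = np.diag((l_plus - l_minus) / 2)     # Lambda_0 (or Sigma_0)
    return ((rho + 1 / rho) / (2 * (1 - rho))) * (M2 + M0) @ (M2 + M0) \
        + (1 / (rho * (1 - rho))) * (M2 - M0) @ (M2 - M0)

# e.g. lambda_i^- = 0, lambda_i^+ = 1 (tanh-type activations), rho = 1/2
Y1 = lemma3_weight(np.ones(2), np.zeros(2), rho=0.5)
print(Y1)   # f(s) f^T(s) <= Y1 ||s||^2 by Lemma 3
```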

3 Discussions of \(H_{\infty }\) performances and covariance constraint

In this section, both the \(H_{\infty }\) performance requirement and the upper bounded constraint of the estimation error covariance matrix are considered. The sufficient criteria are proposed accordingly based on the recursive matrix inequality technique.

3.1 \(H_{\infty }\) performance requirement

Firstly, a sufficient condition is obtained to ensure that the output estimation error satisfies the specified \(H_{\infty }\) performance index over the finite horizon.

Theorem 1

Consider the time-varying RNNs (1) subject to randomly varying nonlinearities and missing measurements. Suppose that the scalars \(\gamma >0\), \(\bar{\alpha }\in [0,1]\) and \(\bar{\lambda } \in [0,1]\), the matrices \(W_{\varphi }>0\) and \(W_{\phi }>0\), and the state estimator gain matrix \(K_{k}\) in (5) are given. If \(Q_{0} \leq \gamma ^{2}W_{\phi }\) and there exists a sequence of positive-definite real-valued matrices \(\{Q_{k}\}_{1\leq k\leq N+1}\) satisfying the following recursive matrix inequality:

$$\begin{aligned} \varPhi = \begin{bmatrix} \varPhi _{11} & \varLambda _{21}+\mathcal{A}_{k}^{T}Q_{k+1}\mathcal{B}_{1k} & \varSigma _{22}+\mathcal{A}_{k}^{T}Q_{k+1}\mathcal{B}_{2k} & 0 \\ \ast & \varPhi _{22} &\varPhi _{23}& 0 \\ \ast & \ast & \varPhi _{33} & 0 \\ \ast & \ast & \ast & \varPhi _{44} \end{bmatrix} < 0, \end{aligned}$$
(13)

where

$$\begin{aligned}& \varPhi _{11} =\mathcal{A}_{k}^{T}Q_{k+1} \mathcal{A}_{k}+\bar{\lambda }(1-\bar{ \lambda }) \mathcal{D}_{1k}^{T}Q_{k+1} \mathcal{D}_{1k}+\bar{\lambda }(1-\bar{ \lambda }) \mathcal{D}_{2k}^{T}Q_{k+1} \mathcal{D}_{2k} \\& \hphantom{\varPhi _{11} =}{}+A_{2k}^{T}Q_{k+1}A_{2k}+ \mathcal{M}_{k}^{T}\mathcal{M}_{k}-Q_{k}- \varLambda _{11}-\varSigma _{12}, \\& \varPhi _{22} =\mathcal{B}_{1k}^{T}Q_{k+1} \mathcal{B}_{1k}+\bar{\alpha }(1-\bar{ \alpha })B_{k}^{T}Q_{k+1}B_{k}- \bar{F}, \\& \varPhi _{23} =\mathcal{B}_{1k}^{T}Q_{k+1} \mathcal{B}_{2k}-\bar{\alpha }(1-\bar{ \alpha })B_{k}^{T}Q_{k+1} \check{B}_{k}, \\& \varPhi _{33} =\mathcal{B}_{2k}^{T}Q_{k+1} \mathcal{B}_{2k}+\bar{\alpha }(1-\bar{ \alpha })\check{B}_{k}^{T}Q_{k+1} \check{B}_{k}-\bar{H}, \\& \varPhi _{44} =\mathcal{C}_{1k}^{T}Q_{k+1} \mathcal{C}_{1k}+\mathcal{C} _{2k}^{T}Q_{k+1} \mathcal{C}_{2k}-\gamma ^{2}W_{\varphi }, \\& \varLambda _{11} = \begin{bmatrix} \varGamma _{1}\varLambda _{1} & 0 \\ 0 & \varGamma _{2}\varLambda _{1} \end{bmatrix}, \qquad \varLambda _{21}= \begin{bmatrix} \varGamma _{1}\varLambda _{2} & 0 \\ 0 &\varGamma _{2}\varLambda _{2} \end{bmatrix}, \qquad \varSigma _{12}= \begin{bmatrix} \varGamma _{3}\varSigma _{1} & 0 \\ 0 & \varGamma _{4}\varSigma _{1} \end{bmatrix}, \\& \varSigma _{22} = \begin{bmatrix} \varGamma _{3}\varSigma _{2}& 0 \\ 0 &\varGamma _{4}\varSigma _{2} \end{bmatrix}, \qquad \bar{F}= \begin{bmatrix} \varGamma _{1}& 0 \\ 0 & \varGamma _{2} \end{bmatrix}, \qquad \bar{H}= \begin{bmatrix} \varGamma _{3} & 0 \\ 0 & \varGamma _{4} \end{bmatrix}, \\& \varGamma _{1} =\operatorname{diag}\{\mu _{11},\mu _{12},\ldots ,\mu _{1n}\}, \qquad \varGamma _{2}=\operatorname{diag}\{\mu _{21},\mu _{22},\ldots ,\mu _{2n}\}, \\& \varGamma _{3} =\operatorname{diag}\{\mu _{31},\mu _{32},\ldots ,\mu _{3n}\}, \qquad \varGamma _{4}=\operatorname{diag}\{\mu _{41},\mu _{42},\ldots ,\mu _{4n}\}, \\& \varLambda _{1} =\operatorname{diag}\bigl\{ \lambda _{1}^{+}\lambda _{1}^{-}, \lambda _{2}^{+}\lambda _{2}^{-}, \ldots ,\lambda _{n}^{+}\lambda _{n}^{-} \bigr\} , \qquad \varSigma _{1}=\operatorname{diag}\bigl\{ \sigma _{1}^{+}\sigma _{1}^{-},\sigma _{2}^{+} \sigma _{2}^{-},\ldots , \sigma _{n}^{+}\sigma _{n}^{-} \bigr\} , \end{aligned}$$

then the \(H_{\infty }\) performance constraint defined in (9) can be achieved for all nonzero \(v_{k}\).

Proof

Define

$$\begin{aligned} V_{k} =&\eta _{k+1}^{T}Q_{k+1} \eta _{k+1}- \eta _{k}^{T}Q_{k} \eta _{k}. \end{aligned}$$
(14)

Next, it follows from the augmented system (7) that

$$\begin{aligned} \mathbb{E}\{V_{k}\} =&\mathbb{E}\bigl\{ \eta _{k}^{T}\mathcal{A}_{k} ^{T}Q_{k+1}\mathcal{A}_{k}\eta _{k}+\bar{\lambda }(1-\bar{\lambda }) \eta _{k}^{T} \mathcal{D}_{1k}^{T}Q_{k+1} \mathcal{D}_{1k}\eta _{k} +\bar{ \lambda }(1-\bar{\lambda })\eta _{k}^{T}\mathcal{D}_{2k}^{T} \\ &{}\times Q_{k+1}\mathcal{D}_{2k}\eta _{k}+\eta _{k}^{T}A_{2k} ^{T}Q_{k+1}A_{2k}\eta _{k}+f^{T}( \eta _{k})\mathcal{B}_{1k}^{T}Q_{k+1} \mathcal{B}_{1k}f(\eta _{k}) \\ &{}+\bar{\alpha }(1-\bar{\alpha })f^{T}(\eta _{k})B_{k}^{T}Q_{k+1}B _{k}f(\eta _{k}) +g^{T}(\eta _{k})\mathcal{B}_{2k}^{T}Q_{k+1} \mathcal{B}_{2k}g(\eta _{k}) \\ &{}+\bar{\alpha }(1-\bar{\alpha })g^{T}(\eta _{k})\check{B}_{k} ^{T}Q_{k+1} \check{B}_{k}g(\eta _{k})+2\eta _{k}^{T} \mathcal{A}_{k}^{T}Q _{k+1} \mathcal{B}_{1k}f(\eta _{k}) \\ &{}+2\eta _{k}^{T}\mathcal{A}_{k}^{T}Q_{k+1} \mathcal{B}_{2k}g( \eta _{k})+2f^{T}(\eta _{k})\mathcal{B}_{1k}^{T}Q_{k+1} \mathcal{B}_{2k}g( \eta _{k}) \\ &{}-2\bar{\alpha }(1-\bar{\alpha })f^{T} (\eta _{k})B_{k}^{T}Q _{k+1} \check{B}_{k}g(\eta _{k}) \\ &{} +v_{k}^{T}\mathcal{C}_{1k}^{T}Q_{k+1} \mathcal{C}_{1k}v_{k} +v _{k}^{T} \mathcal{C}_{2k}^{T}Q_{k+1} \mathcal{C}_{2k}v_{k}-\eta _{k}^{T}Q _{k}\eta _{k}\bigr\} . \end{aligned}$$
(15)

Adding the zero term \(\tilde{z}^{T}_{k}\tilde{z}_{k}-\gamma ^{2}v_{k} ^{T}W_{\varphi }v_{k}-\tilde{z}^{T}_{k}\tilde{z}_{k}+\gamma ^{2}v_{k} ^{T}W_{\varphi }v_{k}\) to \(\mathbb{E}\{V_{k}\}\) leads to

$$\begin{aligned} \mathbb{E}\{V_{k}\} =&\mathbb{E}\left \{ \begin{bmatrix} \bar{\eta }_{k}^{T} & v_{k}^{T} \end{bmatrix} \tilde{\varPhi } \begin{bmatrix} \bar{\eta }_{k} \\ v_{k} \end{bmatrix} -\tilde{z}^{T}_{k} \tilde{z}_{k}+\gamma ^{2}v_{k}^{T}W_{\varphi }v_{k} \right \}, \end{aligned}$$

where

$$ \begin{gathered} \bar{\eta }_{k}^{T} = \begin{bmatrix} \eta _{k}^{T} & f^{T}(\eta _{k}) & g^{T}(\eta _{k}) \end{bmatrix} , \\ \tilde{\varPhi } = \begin{bmatrix} \tilde{\varPhi }_{11} & \mathcal{A}_{k}^{T}Q_{k+1}\mathcal{B}_{1k} & \mathcal{A}_{k}^{T}Q_{k+1}\mathcal{B}_{2k} & 0 \\ \ast & \tilde{\varPhi }_{22} &\varPhi _{23} & 0 \\ \ast & \ast & \tilde{\varPhi }_{33} & 0 \\ \ast & \ast & \ast & \varPhi _{44} \end{bmatrix} , \\ \tilde{\varPhi }_{11} =\mathcal{A}_{k}^{T}Q_{k+1} \mathcal{A}_{k}+\bar{ \lambda }(1-\bar{\lambda }) \mathcal{D}_{1k}^{T}Q_{k+1} \mathcal{D}_{1k}+\bar{ \lambda }(1-\bar{\lambda }) \mathcal{D}_{2k}^{T}Q_{k+1} \mathcal{D}_{2k} \\ \hphantom{\tilde{\varPhi }_{11} =}{}+A_{2k}^{T}Q_{k+1}A_{2k}+ \mathcal{M}_{k}^{T}\mathcal{M}_{k}-Q_{k}, \\ \tilde{\varPhi }_{22} =\mathcal{B}_{1k}^{T}Q_{k+1} \mathcal{B}_{1k}+\bar{ \alpha }(1-\bar{\alpha })B_{k}^{T}Q_{k+1}B_{k}, \\ \tilde{\varPhi }_{33} =\mathcal{B}_{2k}^{T}Q_{k+1} \mathcal{B}_{2k}+\bar{ \alpha }(1-\bar{\alpha })\check{B}_{k}^{T}Q_{k+1} \check{B}_{k}, \end{gathered} $$
(16)

with \(\varPhi _{23}\) and \(\varPhi _{44}\) defined below (13).

Moreover, based on Assumption 1 and Lemma 1, we have

$$\begin{aligned}& \begin{bmatrix} x_{k} \\ f(x_{k}) \end{bmatrix}^{T} \begin{bmatrix} \varGamma _{1}\varLambda _{1} & -\varGamma _{1}\varLambda _{2} \\ -\varGamma _{1}\varLambda _{2} & \varGamma _{1} \end{bmatrix} \begin{bmatrix} x_{k} \\ f(x_{k}) \end{bmatrix} \leq 0, \\& \begin{bmatrix} e_{k} \\ f(e_{k}) \end{bmatrix}^{T} \begin{bmatrix} \varGamma _{2}\varLambda _{1} & -\varGamma _{2}\varLambda _{2} \\ -\varGamma _{2}\varLambda _{2} & \varGamma _{2} \end{bmatrix} \begin{bmatrix} e_{k} \\ f(e_{k}) \end{bmatrix} \leq 0, \\& \begin{bmatrix} x_{k} \\ g(x_{k}) \end{bmatrix}^{T} \begin{bmatrix} \varGamma _{3}\varSigma _{1} & -\varGamma _{3}\varSigma _{2} \\ -\varGamma _{3}\varSigma _{2} &\varGamma _{3} \end{bmatrix} \begin{bmatrix} x_{k} \\ g(x_{k}) \end{bmatrix} \leq 0, \\& \begin{bmatrix} e_{k} \\ g(e_{k}) \end{bmatrix}^{T} \begin{bmatrix} \varGamma _{4}\varSigma _{1} & -\varGamma _{4}\varSigma _{2} \\ -\varGamma _{4}\varSigma _{2} & \varGamma _{4} \end{bmatrix} \begin{bmatrix} e_{k} \\ g(e_{k}) \end{bmatrix} \leq 0, \quad k=1,2,\ldots ,n, \end{aligned}$$

where \(\varLambda _{2}\) and \(\varSigma _{2}\) are defined in (12), \(\varLambda _{1}\), \(\varSigma _{1}\), \(\varGamma _{1}\), \(\varGamma _{2}\), \(\varGamma _{3}\) and \(\varGamma _{4}\) are defined in (13). From the above inequalities, we can deduce that

$$ \begin{gathered} \begin{bmatrix} \eta _{k} \\ f(\eta _{k}) \end{bmatrix} ^{T} \begin{bmatrix} \varLambda _{11} & -\varLambda _{21} \\ -\varLambda _{21} & \bar{F} \end{bmatrix} \begin{bmatrix} \eta _{k} \\ f(\eta _{k}) \end{bmatrix} \leq 0, \\ \begin{bmatrix} \eta _{k} \\ g(\eta _{k}) \end{bmatrix} ^{T} \begin{bmatrix} \varSigma _{12} & -\varSigma _{22} \\ -\varSigma _{22} & \bar{H} \end{bmatrix} \begin{bmatrix} \eta _{k} \\ g(\eta _{k}) \end{bmatrix} \leq 0, \end{gathered} $$
(17)

where \(\varLambda _{11}\), \(\varLambda _{21}\), \(\varSigma _{12}\), \(\varSigma _{22}\), \(\bar{F}\) and \(\bar{H}\) are defined below (13). Then, together with (16)–(17), this yields

$$\begin{aligned} \mathbb{E}\{V_{k}\} \leq &\mathbb{E}\left \{ \begin{bmatrix} \bar{\eta }_{k}^{T} & v_{k}^{T} \end{bmatrix} \tilde{\varPhi } \begin{bmatrix} \bar{\eta }_{k} \\ v_{k} \end{bmatrix} -\tilde{z}^{T}_{k} \tilde{z}_{k}+\gamma ^{2}v_{k}^{T}W_{\varphi }v_{k}- \bigl[ \eta _{k}^{T}\varLambda _{11}\eta _{k}-2\eta _{k}^{T}\varLambda _{21}f(\eta _{k})\right . \\ &\left.{}+f^{T}(\eta _{k})\bar{F}f(\eta _{k}) \bigr]-\bigl[\eta _{k}^{T}\varSigma _{12}\eta _{k}-2\eta _{k} ^{T}\varSigma _{22}g(\eta _{k})+g^{T}(\eta _{k})\bar{H}g(\eta _{k})\bigr]\vphantom{ \begin{bmatrix} \bar{\eta }_{k}^{T} & v_{k}^{T} \end{bmatrix} } \right\} \\ =&\mathbb{E}\left\{ \begin{bmatrix} \bar{\eta }_{k}^{T} & v_{k}^{T} \end{bmatrix} \varPhi \begin{bmatrix} \bar{\eta }_{k} \\ v_{k} \end{bmatrix} -\tilde{z}^{T}_{k} \tilde{z}_{k}+\gamma ^{2}v_{k}^{T}W_{\varphi }v_{k} \right \}. \end{aligned}$$
(18)

Summing (18) from 0 to \(N-1\) with respect to k, it is not difficult to see that

$$\begin{aligned} \sum_{k=0}^{N-1}\mathbb{E} \{V_{k}\} =&\mathbb{E}\bigl\{ \eta _{N}^{T}Q_{N} \eta _{N} -\eta _{0}^{T}Q_{0} \eta _{0}\bigr\} \\ \leq &\mathbb{E}\left \{\sum_{k=0}^{N-1} \begin{bmatrix} \bar{\eta }_{k}^{T} & v_{k}^{T} \end{bmatrix} \varPhi \begin{bmatrix} \bar{\eta }_{k} \\ v_{k} \end{bmatrix} \right \}-\mathbb{E} \Biggl\{ \sum_{k=0}^{N-1} \bigl(\tilde{z}^{T}_{k} \tilde{z}_{k}-\gamma ^{2}v_{k}^{T}W_{\varphi }v_{k} \bigr) \Biggr\} , \end{aligned}$$
(19)

where Φ is defined in (13). Therefore, we can derive the following inequality:

$$\begin{aligned} J_{1} \leq &\mathbb{E}\left \{\sum _{k=0}^{N-1} \begin{bmatrix} \bar{\eta }_{k}^{T} & v_{k}^{T} \end{bmatrix} \varPhi \begin{bmatrix} \bar{\eta }_{k} \\ v_{k} \end{bmatrix} +\eta _{0}^{T} \bigl(Q_{0}-\gamma ^{2}W_{\phi }\bigr)\eta _{0}\right \}- \mathbb{E} \bigl\{ \eta _{N}^{T}Q_{N} \eta _{N}\bigr\} . \end{aligned}$$
(20)

According to \(\varPhi <0\), \(Q_{N}>0\) and the initial condition \(Q_{0}\leq \gamma ^{2}W_{\phi }\), it follows that \(J_{1}<0\). The proof of Theorem 1 is now complete. □
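Since Theorem 1 is a verification result (the gain \(K_{k}\) is given), condition (13) can be checked numerically at each step. The sketch below assembles Φ from the block matrices of (7) (for instance via the augmented_matrices helper above) and tests its largest eigenvalue; all inputs, including the sector matrices \(\varLambda _{11}\), \(\varLambda _{21}\), \(\varSigma _{12}\), \(\varSigma _{22}\), \(\bar{F}\), \(\bar{H}\) defined below (13), are assumed given.

```python
import numpy as np

def phi_max_eig(Acal, D1, D2, A2, B1c, Bk, B2c, Bchk, C1, C2, Mcal,
                Qk, Qk1, abar, lbar, L11, L21, S12, S22, Fbar, Hbar,
                gamma, Wphi):
    """Assemble Phi of (13); a negative return value certifies (13),
    and hence the H-infinity requirement (9), at this time step."""
    P11 = (Acal.T @ Qk1 @ Acal
           + lbar * (1 - lbar) * (D1.T @ Qk1 @ D1 + D2.T @ Qk1 @ D2)
           + A2.T @ Qk1 @ A2 + Mcal.T @ Mcal - Qk - L11 - S12)
    P12 = L21 + Acal.T @ Qk1 @ B1c
    P13 = S22 + Acal.T @ Qk1 @ B2c
    P22 = B1c.T @ Qk1 @ B1c + abar * (1 - abar) * Bk.T @ Qk1 @ Bk - Fbar
    P23 = B1c.T @ Qk1 @ B2c - abar * (1 - abar) * Bk.T @ Qk1 @ Bchk
    P33 = B2c.T @ Qk1 @ B2c + abar * (1 - abar) * Bchk.T @ Qk1 @ Bchk - Hbar
    P44 = C1.T @ Qk1 @ C1 + C2.T @ Qk1 @ C2 - gamma**2 * Wphi
    z = lambda P, R: np.zeros((P.shape[0], R.shape[1]))
    Phi = np.block([[P11,   P12,   P13,   z(P11, P44)],
                    [P12.T, P22,   P23,   z(P22, P44)],
                    [P13.T, P23.T, P33,   z(P33, P44)],
                    [z(P11, P44).T, z(P22, P44).T, z(P33, P44).T, P44]])
    return np.max(np.linalg.eigvalsh(Phi))
```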

So far, we have presented the criterion to guarantee the \(H_{\infty }\) performance within the finite horizon. Next, we are ready to propose the sufficient condition to ensure the upper bound constraint with respect to the covariance matrix \(X_{k}\).

3.2 Variance constraint analysis

In this subsection, the upper bound constraint of the covariance matrix \(X_{k}\) is ensured by providing a sufficient criterion.

Theorem 2

Consider the time-varying RNNs (1) subject to randomly varying nonlinearities and missing measurements. Let the scalars \(\bar{\alpha }\in [0,1]\), \(\bar{\lambda }\in [0,1]\) and the estimator gain matrix \(K_{k}\) in (5) be given. Under the initial condition \(G_{0}=X_{0}\), if there exists a set of positive-definite matrices \(\{G_{k}\}_{1\leq k\leq N+1}\) satisfying

$$\begin{aligned} G_{k+1} \geq &\varPsi (G_{k}), \end{aligned}$$
(21)

where

$$ \begin{gathered} \varPsi (G_{k}) =(1+\varepsilon _{1}+\varepsilon _{2})\mathcal{A}_{k}G _{k}\mathcal{A}_{k}^{T}+\bar{\lambda }(1- \bar{\lambda })\mathcal{D} _{1k}G_{k} \mathcal{D}_{1k}^{T} +\bar{\lambda }(1-\bar{\lambda }) \mathcal{D}_{2k}G_{k}\mathcal{D}_{2k}^{T} \\ \hphantom{\varPsi (G_{k}) =}{}+A_{2k}G_{k}A_{2k}^{T}+ \bigl(1+\varepsilon _{1}^{-1}+\varepsilon _{3}\bigr) \operatorname{tr}(G_{k})\mathcal{B}_{1k}Y_{1} \mathcal{B}_{1k}^{T}+\bar{ \alpha }(1-\bar{\alpha }) \bigl(1+\varepsilon _{4}^{-1}\bigr) \\ \hphantom{\varPsi (G_{k}) =}{}{}\times \operatorname{tr}(G_{k})\check{B}_{k}Y_{2} \check{B}_{k}^{T}+\bar{ \alpha }(1-\bar{\alpha }) (1+ \varepsilon _{4})\operatorname{tr}(G_{k})B_{k}Y _{1}B_{k}^{T}+\bigl(1+\varepsilon _{2}^{-1} \\ \hphantom{\varPsi (G_{k}) =}{}+\varepsilon _{3}^{-1}\bigr)\operatorname{tr}(G_{k}) \mathcal{B}_{2k}Y_{2} \mathcal{B}_{2k}^{T}+ \mathcal{C}_{1k}V\mathcal{C}_{1k}^{T}+ \mathcal{C}_{2k}V\mathcal{C}_{2k}^{T}, \\ V =\operatorname{diag}\{V_{1},V_{2}\}, \\ Y_{1} =\frac{\rho +\frac{1}{\rho }}{2(1-\rho )}(\varLambda _{2}+ \varLambda _{0})^{2}+\frac{1}{\rho (1-\rho )}(\varLambda _{2}-\varLambda _{0})^{2}, \\ Y_{2} =\frac{\rho +\frac{1}{\rho }}{2(1-\rho )}(\varSigma _{2}+\varSigma _{0})^{2}+\frac{1}{\rho (1-\rho )}(\varSigma _{2}-\varSigma _{0})^{2}, \end{gathered} $$
(22)

then we have \(G_{k}\geq X_{k}\) for all \(k\in \{1,2,\ldots ,N+1\}\).

Proof

From (8), the state covariance \(X_{k}\) can be calculated by

$$\begin{aligned} X_{k+1} =&\mathbb{E} \bigl\{ \eta _{k+1}\eta _{k+1}^{T} \bigr\} \\ =&\mathbb{E}\bigl\{ \mathcal{A}_{k}\eta _{k} \eta _{k}^{T}\mathcal{A} _{k}^{T}+ \bar{\lambda }(1-\bar{\lambda })\mathcal{D}_{1k}\eta _{k} \eta _{k}^{T}\mathcal{D}_{1k}^{T} +\bar{\lambda }(1-\bar{\lambda }) \mathcal{D}_{2k}\eta _{k}\eta _{k}^{T}\mathcal{D}_{2k}^{T}+A_{2k} \eta _{k} \\ &{}\times \eta _{k}^{T}A_{2k}^{T}+ \bar{\alpha }(1-\bar{\alpha })B_{k}f( \eta _{k}) f^{T}(\eta _{k})B_{k}^{T}+ \mathcal{B}_{1k}f(\eta _{k})\eta _{k}^{T} \mathcal{A}_{k}^{T}+\mathcal{A}_{k}\eta _{k}f^{T}(\eta _{k}) \\ & {}\times \mathcal{B}_{1k}^{T}+\mathcal{B}_{2k}g( \eta _{k})g^{T}(\eta _{k}) \mathcal{B}_{2k}^{T}+\bar{\alpha }(1-\bar{\alpha }) \check{B}_{k}g( \eta _{k})g^{T}(\eta _{k})\check{B}_{k}^{T}+ \mathcal{B}_{2k}g(\eta _{k}) \eta _{k}^{T} \\ &{}\times \mathcal{A}_{k}^{T}+\mathcal{A}_{k} \eta _{k}g^{T}(\eta _{k}) \mathcal{B}_{2k}^{T}+\mathcal{B}_{2k}g(\eta _{k})f^{T}(\eta _{k}) \mathcal{B}_{1k}^{T}+ \mathcal{B}_{1k}f(\eta _{k})g^{T}(\eta _{k}) \mathcal{B}_{2k}^{T} \\ &{}+\mathcal{B}_{1k}f(\eta _{k})f^{T}(\eta _{k})\mathcal{B}_{1k}^{T}-\bar{ \alpha }(1- \bar{\alpha })B_{k}f(\eta _{k})g^{T}(\eta _{k})\check{B}_{k} ^{T}-\bar{\alpha }(1- \bar{\alpha })\check{B}_{k}g(\eta _{k}) \\ &{}\times f^{T}(\eta _{k})B_{k}^{T}+ \mathcal{C}_{1k}V\mathcal{C}_{1k} ^{T}+ \mathcal{C}_{2k}V\mathcal{C}_{2k}^{T}\bigr\} . \end{aligned}$$

On the basis of the inequality \(ab^{T}+ba^{T}\leq \zeta aa^{T}+\zeta ^{-1}bb^{T}\) with \(\zeta >0\), we can obtain

$$\begin{aligned}& \mathbb{E}\bigl\{ \mathcal{A}_{k}\eta _{k}f^{T}( \eta _{k})\mathcal{B}_{1k} ^{T}+ \mathcal{B}_{1k}f(\eta _{k})\eta _{k}^{T} \mathcal{A}_{k}^{T}\bigr\} \\& \quad \leq \mathbb{E}\bigl\{ \varepsilon _{1}\mathcal{A}_{k} \eta _{k}\eta _{k} ^{T}\mathcal{A}_{k}^{T}+ \varepsilon _{1}^{-1}\mathcal{B}_{1k}f(\eta _{k})f^{T}(\eta _{k})\mathcal{B}_{1k}^{T} \bigr\} , \\& \mathbb{E}\bigl\{ \mathcal{A}_{k}\eta _{k}g^{T}( \eta _{k})\mathcal{B}_{2k} ^{T}+ \mathcal{B}_{2k}g(\eta _{k})\eta _{k}^{T} \mathcal{A}_{k}^{T}\bigr\} \\& \quad \leq \mathbb{E}\bigl\{ \varepsilon _{2}\mathcal{A}_{k} \eta _{k}\eta _{k} ^{T}\mathcal{A}_{k}^{T}+ \varepsilon _{2}^{-1}\mathcal{B}_{2k}g(\eta _{k})g^{T}(\eta _{k})\mathcal{B}_{2k}^{T} \bigr\} , \\& \mathbb{E}\bigl\{ \mathcal{B}_{1k}f(\eta _{k})g^{T}( \eta _{k})\mathcal{B} _{2k}^{T}+ \mathcal{B}_{2k}g(\eta _{k})f^{T}(\eta _{k})\mathcal{B}_{1k} ^{T}\bigr\} \\& \quad \leq \mathbb{E}\bigl\{ \varepsilon _{3}\mathcal{B}_{1k}f( \eta _{k})f^{T}( \eta _{k}) \mathcal{B}_{1k}^{T}+\varepsilon _{3}^{-1} \mathcal{B}_{2k}g( \eta _{k})g^{T}(\eta _{k})\mathcal{B}_{2k}^{T}\bigr\} , \\& \mathbb{E}\bigl\{ -\bar{\alpha }(1-\bar{\alpha })B_{k}f(\eta _{k})g^{T}( \eta _{k})\check{B}_{k}^{T} -\bar{\alpha }(1-\bar{\alpha })\check{B} _{k}g(\eta _{k})f^{T}(\eta _{k})B_{k}^{T} \bigr\} \\& \quad \leq \mathbb{E}\bigl\{ \varepsilon _{4}\bar{\alpha }(1-\bar{ \alpha })B _{k}f(\eta _{k})f^{T}(\eta _{k})B_{k}^{T}+\varepsilon _{4}^{-1}\bar{ \alpha }(1-\bar{\alpha }) \check{B}_{k}g(\eta _{k})g^{T}(\eta _{k}) \check{B}_{k}^{T}\bigr\} , \end{aligned}$$

where \(\varepsilon _{i}\) (\(i=1,2,3,4\)) are positive scalars. Then it is straightforward to see that

$$\begin{aligned} X_{k+1} \leq &\mathbb{E}\bigl\{ (1+\varepsilon _{1}+\varepsilon _{2}) \mathcal{A}_{k}\eta _{k}\eta _{k}^{T}\mathcal{A}_{k}^{T}+ \bar{\lambda }(1-\bar{ \lambda })\mathcal{D}_{1k}\eta _{k}\eta _{k}^{T}\mathcal{D}_{1k}^{T} +\bar{ \lambda }(1-\bar{\lambda })\mathcal{D}_{2k}\eta _{k} \\ &{}\times \eta _{k}^{T}\mathcal{D}_{2k}^{T}+A_{2k} \eta _{k}\eta _{k}^{T}A_{2k}^{T}+ \bigl(1+\varepsilon _{1}^{-1}+\varepsilon _{3}\bigr) \mathcal{B}_{1k}f(\eta _{k})f^{T}( \eta _{k})\mathcal{B}_{1k}^{T} \\ & {}+\bar{ \alpha }(1 -\bar{\alpha }) \bigl(1+\varepsilon _{4}^{-1} \bigr)\check{B}_{k}g(\eta _{k})g^{T}(\eta _{k})\check{B}_{k}^{T} +\bar{\alpha }(1- \bar{\alpha }) (1+ \varepsilon _{4})B_{k} f(\eta _{k})f^{T}(\eta _{k})B_{k}^{T} \\ &{}+\bigl(1+\varepsilon _{2}^{-1}+\varepsilon _{3}^{-1}\bigr)\mathcal{B} _{2k}g(\eta _{k})g^{T}(\eta _{k})\mathcal{B}_{2k}^{T}+ \mathcal{C}_{1k}V \mathcal{C}_{1k}^{T} + \mathcal{C}_{2k}V\mathcal{C}_{2k}^{T}\bigr\} . \end{aligned}$$

Furthermore, it follows from Lemma 3 that

$$\begin{aligned}& \mathbb{E}\bigl\{ f(\eta _{k})f^{T}(\eta _{k})\bigr\} \leq \mathbb{E}\bigl\{ Y_{1} \Vert \eta _{k} \Vert ^{2}\bigr\} =\mathbb{E}\bigl\{ Y_{1}\eta _{k}^{T}\eta _{k} \bigr\} , \\& \mathbb{E}\bigl\{ g(\eta _{k})g^{T}(\eta _{k})\bigr\} \leq \mathbb{E}\bigl\{ Y_{2} \Vert \eta _{k} \Vert ^{2}\bigr\} =\mathbb{E}\bigl\{ Y_{2}\eta _{k}^{T}\eta _{k} \bigr\} , \end{aligned}$$

where \(Y_{1}\) and \(Y_{2}\) are defined in (22). Thus, one has

$$\begin{aligned} X_{k+1} \leq &\mathbb{E}\bigl\{ (1+ \varepsilon _{1}+\varepsilon _{2}) \mathcal{A}_{k} \eta _{k}\eta _{k}^{T}\mathcal{A}_{k}^{T}+ \bar{\lambda }(1-\bar{ \lambda })\mathcal{D}_{1k}\eta _{k}\eta _{k}^{T}\mathcal{D}_{1k}^{T} \\ &{}+\bar{ \lambda }(1 -\bar{\lambda })\mathcal{D}_{2k}\eta _{k}\eta _{k}^{T} \mathcal{D}_{2k}^{T}+A_{2k} \eta _{k}\eta _{k}^{T}A_{2k}^{T}+ \bigl(1+ \varepsilon _{1}^{-1} +\varepsilon _{3}\bigr)\mathcal{B}_{1k}Y_{1}\eta _{k} ^{T}\eta _{k}\mathcal{B}_{1k}^{T} \\ &{}+\bar{\alpha }(1-\bar{\alpha }) \bigl(1+\varepsilon _{4}^{-1}\bigr) \check{B}_{k}Y_{2} \eta _{k}^{T}\eta _{k}\check{B}_{k}^{T}+ \bar{\alpha }(1-\bar{ \alpha }) (1+\varepsilon _{4})B_{k}Y_{1} \eta _{k}^{T}\eta _{k} \\ &{}\times B_{k}^{T}+\bigl(1+\varepsilon _{2}^{-1}+\varepsilon _{3}^{-1} \bigr) \mathcal{B}_{2k}Y_{2}\eta _{k}^{T} \eta _{k}\mathcal{B}_{2k}^{T}+ \mathcal{C}_{1k}V\mathcal{C}_{1k}^{T}+ \mathcal{C}_{2k}V\mathcal{C} _{2k}^{T}\bigr\} . \end{aligned}$$
(23)

According to the properties of the trace, we can obtain

$$\begin{aligned} \mathbb{E}\bigl\{ \eta _{k}^{T}\eta _{k}\bigr\} =\mathbb{E}\bigl\{ \operatorname{tr}\bigl(\eta _{k} \eta _{k}^{T}\bigr)\bigr\} =\operatorname{tr}(X_{k}). \end{aligned}$$
(24)

Combining (23) with (24) results in

$$\begin{aligned} X_{k+1} \leq &(1+\varepsilon _{1}+\varepsilon _{2})\mathcal{A}_{k}X_{k} \mathcal{A}_{k}^{T}+\bar{\lambda }(1-\bar{\lambda }) \mathcal{D}_{1k}X _{k}\mathcal{D}_{1k}^{T} +\bar{\lambda }(1-\bar{\lambda })\mathcal{D} _{2k}X_{k} \mathcal{D}_{2k}^{T} \\ &{} +A_{2k}X_{k}A_{2k}^{T}+ \bigl(1+\varepsilon _{1}^{-1} +\varepsilon _{3}\bigr) \operatorname{tr}(X_{k})\mathcal{B}_{1k}Y_{1} \mathcal{B}_{1k}^{T}+\bar{ \alpha }(1-\bar{\alpha }) \bigl(1+\varepsilon _{4}^{-1}\bigr) \\ &{}\times \operatorname{tr}(X_{k})\check{B}_{k}Y_{2} \check{B}_{k}^{T}+\bar{ \alpha }(1-\bar{\alpha }) (1+ \varepsilon _{4})\operatorname{tr}(X_{k})B_{k}Y _{1}B_{k}^{T}+\bigl(1+\varepsilon _{2}^{-1} \\ &{}+\varepsilon _{3}^{-1}\bigr)\operatorname{tr}(X_{k}) \mathcal{B}_{2k}Y_{2} \mathcal{B}_{2k}^{T}+ \mathcal{C}_{1k}V\mathcal{C}_{1k}^{T}+ \mathcal{C}_{2k}V\mathcal{C}_{2k}^{T} \\ =&\varPsi (X_{k}). \end{aligned}$$

Noting that \(G_{0}\geq X_{0}\) and supposing that \(G_{k}\geq X_{k}\), since each term of \(\varPsi (\cdot )\) is monotonically nondecreasing in its argument, we can derive the following inequality:

$$\begin{aligned} \varPsi (G_{k})\geq \varPsi (X_{k}) \geq X_{k+1}. \end{aligned}$$
(25)

Then, from (21) and (25), we arrive at

$$\begin{aligned} G_{k+1} \geq &\varPsi (G_{k})\geq \varPsi (X_{k})\geq X_{k+1}. \end{aligned}$$
(26)

Therefore, the proof is complete. □
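The map \(\varPsi (\cdot )\) in (22) is explicit, so the bound recursion of Theorem 2 is easy to propagate numerically. Below is a sketch, with \(Y_{1}\), \(Y_{2}\) understood as the \(2n\times 2n\) weights obtained by applying Lemma 3 to the stacked nonlinearities \(f(\eta _{k})\), \(g(\eta _{k})\) (e.g. block-diagonal copies of the Lemma 3 matrices); this interpretation is an assumption made for dimensional consistency.

```python
import numpy as np

def psi_map(Gk, Acal, D1, D2, A2, B1c, Bk, B2c, Bchk, C1, C2,
            Y1, Y2, V, abar, lbar, e1, e2, e3, e4):
    """One step of the variance recursion: returns Psi(G_k) as in (22).
    e1..e4 are the positive scalars epsilon_i; V = diag{V1, V2}."""
    t = np.trace(Gk)
    return ((1 + e1 + e2) * Acal @ Gk @ Acal.T
            + lbar * (1 - lbar) * (D1 @ Gk @ D1.T + D2 @ Gk @ D2.T)
            + A2 @ Gk @ A2.T
            + (1 + 1 / e1 + e3) * t * B1c @ Y1 @ B1c.T
            + abar * (1 - abar) * (1 + 1 / e4) * t * Bchk @ Y2 @ Bchk.T
            + abar * (1 - abar) * (1 + e4) * t * Bk @ Y1 @ Bk.T
            + (1 + 1 / e2 + 1 / e3) * t * B2c @ Y2 @ B2c.T
            + C1 @ V @ C1.T + C2 @ V @ C2.T)

# taking G_{k+1} = Psi(G_k), the tightest choice allowed by (21), and then
# comparing the lower-right n x n block of G_k against Psi_k realizes (R2)
```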

Based on the above theorems, a sufficient condition can be presented to guarantee the specified \(H_{\infty }\) performance and estimation error variance constraint by solving the recursive matrix inequalities.

Theorem 3

Consider the time-varying RNNs (1) and suppose that the estimator gain matrix \(K_{k}\) is given. For given scalars \(\gamma >0\), \(\bar{\alpha }\in [0,1]\) and \(\bar{\lambda }\in [0,1]\) and positive-definite matrices \(W_{\varphi }>0\) and \(W_{\phi }>0\), under the initial conditions \(Q_{0}\leq \gamma ^{2}W_{\phi }\) and \(G_{0}=X_{0}\), if there are two sequences of positive-definite real-valued matrices \(\{Q_{k}\}_{1\leq k\leq N+1}\) and \(\{G_{k}\}_{1\leq k\leq N+1}\) satisfying the following matrix inequalities:

$$\begin{aligned}& \begin{bmatrix} \varXi _{11} & \varXi _{12} & \varXi _{13}& \varXi _{14} & 0 \\ \ast & \varXi _{22}&0 & 0 &0 \\ \ast & \ast & \varXi _{33}&0 & \varXi _{35} \\ \ast & \ast & \ast & \varXi _{44}& 0 \\ \ast & \ast & \ast & \ast & \varXi _{55} \end{bmatrix} < 0, \end{aligned}$$
(27)
$$\begin{aligned}& \begin{bmatrix} \varUpsilon _{11} & \varUpsilon _{12}& \varUpsilon _{13}& \varUpsilon _{14} \\ \ast & -G_{k} & 0& 0 \\ \ast & \ast & \varUpsilon _{33}& 0 \\ \ast & \ast &\ast & \varUpsilon _{44} \end{bmatrix} < 0, \end{aligned}$$
(28)

where

$$\begin{aligned}& \varXi _{11} =-\varLambda _{11}-\varSigma _{12}-Q_{k}, \\& \varXi _{12} = \begin{bmatrix} \varLambda _{21}+\mathcal{A}_{k}^{T}Q_{k+1}\mathcal{B}_{1k} & \varSigma _{22}+ \mathcal{A}_{k}^{T}Q_{k+1}\mathcal{B}_{2k} \end{bmatrix} , \\& \varXi _{13} = \begin{bmatrix} 0 &\mathcal{A}_{k}^{T}Q_{k+1}&\varrho _{2}\mathcal{D}_{1k}^{T}Q_{k+1} \end{bmatrix} , \\& \varXi _{14} = \begin{bmatrix} \varrho _{2}\mathcal{D}_{2k}^{T}Q_{k+1} &A_{2k}^{T}Q_{k+1} & \mathcal{M}_{k}^{T} \end{bmatrix} , \\& \varXi _{22} = \begin{bmatrix} -\bar{F}+\mathcal{B}_{1k}^{T}Q_{k+1}\mathcal{B}_{1k}+\bar{\alpha }(1-\bar{ \alpha })B_{k}^{T}Q_{k+1}B_{k}&\varOmega _{1} \\ \ast & \varOmega _{2} \end{bmatrix} , \\& \varXi _{33} =\operatorname{diag}\bigl\{ -\gamma ^{2}W_{\varphi },-Q_{k+1},-Q_{k+1}\bigr\} , \\& \varXi _{35} = \begin{bmatrix} \mathcal{C}_{1k}^{T}Q_{k+1}& \mathcal{C}_{2k}^{T}Q_{k+1} \\ 0& 0 \\ 0& 0 \end{bmatrix} , \\& \varXi _{44} =\operatorname{diag}\{-Q_{k+1},-Q_{k+1},-I \}, \qquad \varXi _{55} =\operatorname{diag}\{-Q_{k+1},-Q_{k+1} \}, \\& \varUpsilon _{11} =-G_{k+1}+\varrho _{5}{ \operatorname{tr}}(G_{k})\mathcal{B} _{1k}Y_{1} \mathcal{B}_{1k}^{T}+\varrho _{1}{ \operatorname{tr}}(G_{k})B_{k}Y _{1}B_{k}^{T} \\& \hphantom{\varUpsilon _{11} =}{}+\varrho _{3}{\operatorname{tr}}(G_{k}) \mathcal{B}_{2k}Y_{2}\mathcal{B} _{2k}^{T}+ \varrho _{6}{\operatorname{tr}}(G_{k})\check{B}_{k}Y_{2} \check{B} _{k}^{T}, \\& \varUpsilon _{12} =\varrho _{4}\mathcal{A}_{k}G_{k}, \\& \varUpsilon _{13} = \begin{bmatrix} \varrho _{2}\mathcal{D}_{1k}G_{k}&\varrho _{2}\mathcal{D}_{2k}G_{k} \end{bmatrix} , \\& \varUpsilon _{14} = \begin{bmatrix} A_{2k}G_{k}&\mathcal{C}_{1k}V&\mathcal{C}_{2k}V \end{bmatrix} , \\& \varUpsilon _{33} =\operatorname{diag}\{-G_{k},-G_{k} \}, \\& \varUpsilon _{44} =\operatorname{diag}\{-G_{k},-V,-V \}, \\& \varOmega _{1} =\mathcal{B}_{1k}^{T}Q_{k+1} \mathcal{B}_{2k}-\bar{\alpha }(1-\bar{\alpha })B_{k}^{T}Q_{k+1} \check{B}_{k}, \\& \varOmega _{2} =-\bar{H}+\mathcal{B}_{2k}^{T}Q_{k+1} \mathcal{B}_{2k}+\bar{ \alpha }(1-\bar{\alpha })\check{B}_{k}^{T}Q_{k+1} \check{B}_{k}, \\& \varrho _{1} =\sqrt{\bar{\alpha }(1-\bar{\alpha }) (1+\varepsilon _{4})}, \qquad \varrho _{2}=\sqrt{\bar{\lambda }(1-\bar{ \lambda })}, \\& \varrho _{3} =\sqrt{1+\varepsilon _{2}^{-1}+ \varepsilon _{3}^{-1}}, \qquad \varrho _{4}= \sqrt{1+\varepsilon _{1}+\varepsilon _{2}}, \\& \varrho _{5} =\sqrt{1+\varepsilon _{1}^{-1}+ \varepsilon _{3}}, \qquad \varrho _{6}=\sqrt{\bar{\alpha }(1-\bar{\alpha }) \bigl(1+\varepsilon _{4} ^{-1} \bigr)}, \end{aligned}$$

then both the \(H_{\infty }\) performance requirement and the upper bound constraint on the estimation error covariance are satisfied simultaneously.

Proof

Under the given initial conditions, according to the analysis of the \(H_{\infty }\) performance and the estimation error covariance in Theorems 1 and 2, the inequality (27) implies (13), and (28) guarantees (21). As such, both the \(H_{\infty }\) performance requirement and the variance constraint are guaranteed, which ends the proof. □

Remark 4

So far, some sufficient conditions are given to ensure the desirable performance requirements. To be specific, a sufficient condition is firstly established in Theorem 1 to ensure that the output estimation error satisfies the specified \(H_{\infty }\) performance index over the finite horizon provided that the estimator gain is given. Secondly, the upper bound constraint of the estimation error covariance matrix is guaranteed in Theorem 2. Subsequently, based on the above two theorems, a sufficient condition is presented in Theorem 3 to ensure the specified \(H_{\infty }\) performance and estimation error variance constraint by solving some recursive matrix inequalities. Finally, the design problem of a discrete time-varying state estimator is discussed in the next section, where the estimator gains can be obtained at each sampling step by solving several recursive matrix inequalities.

4 Design of the estimator gain matrix

In this section, a sufficient criterion is proposed to deal with the design problem of discrete time-varying state estimator, which can be solved by several recursive matrix inequalities.

Theorem 4

Given the disturbance attenuation level \(\gamma >0\), the scalars \(\bar{\alpha }\in [0,1]\) and \(\bar{\lambda }\in [0,1]\), the positive-definite matrices \(W_{\varphi }>0\) and \(W_{\phi }= \begin{bmatrix} W_{\phi 1} & W_{\phi 2} \\ W_{\phi 2}^{T} & W_{\phi 4} \end{bmatrix} >0\), and a set of pre-defined variance upper bound matrices \(\{\varPsi _{k}\}_{0\leq k\leq N+1}\), under the initial conditions

$$\begin{aligned} \begin{gathered} \begin{bmatrix} L_{0}-\gamma ^{2}W_{\phi 1} & -\gamma ^{2}W_{\phi 2} \\ -\gamma ^{2}W_{\phi 2}^{T} & Z_{0}-\gamma ^{2}W_{\phi 4} \end{bmatrix} \leq 0, \qquad \mathbb{E}\bigl\{ e_{0}e_{0}^{T}\bigr\} =G_{2,0}\leq \varPsi _{0}, \end{gathered} \end{aligned}$$
(29)

if there exist sequences of positive-definite matrices \(\{L_{k}\}_{1 \leq k\leq N+1}\), \(\{Z_{k}\}_{1\leq k\leq N+1}\), \(\{G_{1k}\}_{1\leq k \leq N+1}\) and \(\{G_{2k}\}_{1\leq k\leq N+1}\), positive scalars \(\{\epsilon _{1,k}\}_{0\leq k\leq N+1}\) and \(\{\epsilon _{2,k}\}_{0 \leq k\leq N+1}\), and matrices \(\{K_{k}\}_{0\leq k\leq N+1}\) and \(\{G_{3k}\}_{1\leq k\leq N+1}\) with appropriate dimensions satisfying the following recursive matrix inequalities:

$$\begin{aligned}& \begin{bmatrix} \varTheta _{11} & \varTheta _{12} & \varTheta _{13}&\varTheta _{14}& 0&0 \\ \ast & \varTheta _{22} & 0 & 0& 0&\mathcal{W}^{T}_{k} \\ \ast & \ast & \varTheta _{33} &0 & \varTheta _{35}&\mathcal{Y}^{T}_{k} \\ \ast & \ast & \ast &\varTheta _{44}&0 &0 \\ \ast & \ast & \ast &\ast &\varTheta _{55} &0 \\ \ast & \ast & \ast &\ast &\ast &-\epsilon _{1,k}I \end{bmatrix} < 0, \end{aligned}$$
(30)
$$\begin{aligned}& \begin{bmatrix} \varPi _{11} & \varPi _{12} &\varPi _{13}&\varPi _{14} &0 \\ \ast & \varPi _{22}& 0 &0 & \mathcal{X}^{T}_{k} \\ \ast & \ast & \varPi _{33} &0& 0 \\ \ast & \ast & \ast &\varPi _{44}& 0 \\ \ast & \ast &\ast &\ast &-\epsilon _{2,k}I \end{bmatrix} < 0, \end{aligned}$$
(31)
$$\begin{aligned}& G_{2,k+1}-\varPsi _{k+1}\leq 0, \end{aligned}$$
(32)

where

$$\begin{aligned}& \varTheta _{11} = \begin{bmatrix} -\varGamma _{1}\varLambda _{1}-\varGamma _{3}\varSigma _{1}+\epsilon _{1,k}N_{k}N_{k}^{T}-L_{k} & 0 \\ 0 & -\varGamma _{2}\varLambda _{1}-\varGamma _{4}\varSigma _{1}-Z_{k} \end{bmatrix} , \\& \varTheta _{12} = \begin{bmatrix} \bar{\alpha }A_{k}^{T}L_{k+1}B_{1k}+\varGamma _{1}\varLambda _{2} & 0 &(1-\bar{\alpha })A_{k}^{T}L_{k+1}B_{2k}+\varGamma _{3}\varSigma _{2} & 0 \\ 0 & \varOmega _{5} & 0 & \varOmega _{6} \end{bmatrix} , \\& \varTheta _{13} = \begin{bmatrix} 0 & A_{k}^{T}L_{k+1} &0& 0 & -\varrho _{2}D_{k}^{T}K_{k}^{T}Z_{k+1} \\ 0 & 0 & \varOmega _{7}&0 & 0 \end{bmatrix} , \\& \varTheta _{14} = \begin{bmatrix} 0 & -\varrho _{2}D_{k}^{T}\bar{K}_{k}^{T}Z_{k+1}& 0 & 0& 0 \\ 0&0&0 & -\bar{\lambda }D_{k}^{T}\bar{K}_{k}^{T}Z_{k+1} & M_{k}^{T} \end{bmatrix}, \\& \varTheta _{22} = \begin{bmatrix} \varOmega _{8} & 0 & -\bar{\alpha }(1-\bar{\alpha })B_{1k}^{T}Z_{k+1}B_{2k} & 0 \\ \ast & -\varGamma _{2}+\bar{\alpha }^{2}B_{1k}^{T}Z_{k+1}B_{1k} & 0 & \varOmega _{3} \\ \ast & \ast & \varOmega _{9} & 0 \\ \ast & \ast & \ast & \varOmega _{4} \end{bmatrix}, \\& \varTheta _{33} =\operatorname{diag}\bigl\{ -\gamma ^{2}W_{\varphi },-L_{k+1},-Z_{k+1},-L_{k+1},-Z_{k+1}\bigr\} , \\& \varTheta _{35} = \begin{bmatrix} C_{k}^{T}L_{k+1} & C_{k}^{T}Z_{k+1}& 0 & 0 \\ 0 & -K_{k}^{T}Z_{k+1}& 0 & -\bar{K}_{k}^{T}Z_{k+1} \\ 0& 0 & 0& 0 \\ 0& 0 & 0& 0 \\ 0& 0 & 0& 0 \end{bmatrix} , \\& \varTheta _{44} =\operatorname{diag}\{-L_{k+1},-Z_{k+1},-L_{k+1},-Z_{k+1},-I \}, \\& \varTheta _{55} =\operatorname{diag}\{-L_{k+1},-Z_{k+1},-L_{k+1},-Z_{k+1} \}, \\& \varPi _{11} = \begin{bmatrix} \varOmega _{10} & \varOmega _{11} \\ \ast & \varOmega _{12} \end{bmatrix} , \\& \varPi _{12} = \begin{bmatrix} \varrho _{4}A_{k}G_{1k} & \varrho _{4}A_{k}G_{3k}^{T} \\ \varrho _{4}(A_{k}-\bar{\lambda }K_{k}D_{k})G_{3k} & \varrho _{4}(A_{k}-\bar{\lambda }K_{k}D_{k})G_{2k} \end{bmatrix} , \\& \varPi _{13} = \begin{bmatrix} 0& 0 &0& 0 \\ -\varrho _{2}K_{k}D_{k}G_{1k}&-\varrho _{2}K_{k}D_{k}G_{3k}^{T}& -\varrho _{2}\bar{K}_{k}D_{k}G_{1k}&-\varrho _{2}\bar{K}_{k}D_{k}G_{3k}^{T} \end{bmatrix} , \\& \varPi _{14} = \begin{bmatrix} 0& 0& C_{k}V_{1}&0&0&0 \\ -\bar{\lambda }\bar{K}_{k}D_{k}G_{3k} &-\bar{\lambda }\bar{K}_{k}D_{k}G_{2k}&C_{k}V_{1}&-K_{k}V_{2}&0&-\bar{K}_{k}V_{2} \end{bmatrix} , \\& \varPi _{22} = \begin{bmatrix} -G_{1k} & -G_{3k}^{T} \\ \ast & -G_{2k} \end{bmatrix} , \\ & \varPi _{33} =\operatorname{diag}\{\varPi _{22},\varPi _{22}\}, \\ & \varPi _{44} =\operatorname{diag}\{\varPi _{22},-V_{1},-V_{2},-V_{1},-V_{2} \}, \\ & \varOmega _{3} =\bar{\alpha }(1-\bar{\alpha })B_{1k}^{T}Z_{k+1}B_{2k}, \\ & \varOmega _{4} =-\varGamma _{4}+(1-\bar{\alpha })^{2}B_{2k}^{T}Z_{k+1}B_{2k}, \\ & \varOmega _{5} =\bar{\alpha }\bigl(A_{k}^{T}- \bar{\lambda }D_{k}^{T}K_{k}^{T} \bigr)Z_{k+1}B_{1k}+\varGamma _{2}\varLambda _{2}, \\ & \varOmega _{6} =(1-\bar{\alpha }) \bigl(A_{k}^{T}- \bar{\lambda }D_{k}^{T}K_{k}^{T}\bigr)Z_{k+1}B_{2k}+\varGamma _{4}\varSigma _{2}, \\ & \varOmega _{7} =A_{k}^{T}Z_{k+1}- \bar{\lambda }D_{k}^{T}K_{k}^{T}Z_{k+1}, \\ & \varOmega _{8} =-\varGamma _{1}+\bar{\alpha }B_{1k}^{T}L_{k+1}B_{1k}+\bar{\alpha }(1-\bar{\alpha })B_{1k}^{T}Z_{k+1}B_{1k}, \\ & \varOmega _{9} =-\varGamma _{3}+(1-\bar{\alpha })B_{2k}^{T}L_{k+1}B_{2k}+\bar{\alpha }(1-\bar{\alpha })B_{2k}^{T}Z_{k+1}B_{2k}, \\ & \varOmega _{10} =-G_{1,k+1}+\bar{\alpha }^{2} \varrho _{5}{\operatorname{tr}}(G_{k})B_{1k}Y_{1}B_{1k}^{T}+ \varrho _{1}{\operatorname{tr}}(G_{k})B_{1k}Y_{1}B_{1k}^{T}+(1-\bar{\alpha })^{2}\varrho _{3} \\ & \hphantom{\varOmega _{10} =}{}\times \operatorname{tr}(G_{k})B_{2k}Y_{2}B_{2k}^{T}+ \varrho _{6}{\operatorname{tr}}(G_{k})B_{2k}Y_{2}B_{2k}^{T}+ \epsilon _{2,k}H_{k}^{T}H_{k}, \\ & \varOmega _{11} =-G_{3,k+1}^{T}+\varrho _{1}{\operatorname{tr}}(G_{k})B_{1k}Y_{1}B_{1k}^{T}+\varrho _{6}{ \operatorname{tr}}(G_{k})B_{2k}Y_{2}B_{2k}^{T}+ \epsilon _{2,k}H_{k}^{T}H_{k}, \\ & \varOmega _{12} =-G_{2,k+1}+\bar{\alpha }^{2} \varrho _{5}{\operatorname{tr}}(G_{k})B_{1k}Y_{1}B_{1k}^{T}+ \varrho _{1}{\operatorname{tr}}(G_{k})B_{1k}Y_{1}B_{1k}^{T}+(1-\bar{\alpha })^{2}\varrho _{3} \\ & \hphantom{\varOmega _{12} =}{}\times \operatorname{tr}(G_{k})B_{2k}Y_{2}B_{2k}^{T}+ \varrho _{6}{\operatorname{tr}}(G_{k})B_{2k}Y_{2}B_{2k}^{T}+ \epsilon _{2,k}H_{k}^{T}H_{k}, \\ & \mathcal{W}_{k} = \begin{bmatrix} \bar{\alpha }H_{k}^{T}L_{k+1}B_{1k}& \bar{\alpha }H_{k}^{T}Z_{k+1}B_{1k}&(1-\bar{\alpha })H_{k}^{T}L_{k+1}B_{2k} & (1-\bar{\alpha })H_{k}^{T}Z_{k+1}B_{2k} \end{bmatrix}, \\ & \mathcal{Y}_{k} = \begin{bmatrix} 0 & H_{k}^{T}L_{k+1} &H_{k}^{T}Z_{k+1}&0&0 \end{bmatrix}, \\ & \mathcal{N}_{k}^{T} = \begin{bmatrix} N_{k} & 0 \end{bmatrix}, \\ & \mathcal{X}_{k} = \begin{bmatrix} \varrho _{4}N_{k}G_{1k} & \varrho _{4}N_{k}G_{3k}^{T} \end{bmatrix}, \\ & \mathcal{F}_{k}^{T} = \begin{bmatrix} H_{k}^{T} & H_{k}^{T} \end{bmatrix} , \end{aligned}$$

then we can conclude that the estimator design problem is solvable.

Proof

Firstly, the matrices \(Q_{k}\) and \(G_{k}\) are decomposed as follows:

$$\begin{aligned} Q_{k}= \begin{bmatrix} L_{k} & 0 \\ \ast & Z_{k} \end{bmatrix} , \qquad G_{k}= \begin{bmatrix} G_{1k} & G_{3k}^{T} \\ \ast & G_{2k} \end{bmatrix} . \end{aligned}$$

Secondly, in order to deal with parameter uncertainty, we rewrite (27) as follows:

$$\begin{aligned} \begin{bmatrix} \varXi _{11} & \varXi _{12}^{0} & \varXi _{13}^{0}& \varXi _{14} & 0 \\ \ast & \varXi _{22}&0 & 0 &0 \\ \ast & \ast & \varXi _{33}&0 & \varXi _{35} \\ \ast & \ast & \ast & \varXi _{44}& 0 \\ \ast & \ast & \ast & \ast & \varXi _{55} \end{bmatrix} +\bar{N}_{k}F_{k} \bar{H}_{k}+\bar{H}_{k}^{T}F_{k}^{T} \bar{N}_{k}^{T}< 0, \end{aligned}$$

where

$$\begin{aligned}& \varXi _{12}^{0} = \begin{bmatrix} \bar{\alpha }A_{k}^{T}L_{k+1}B_{1k}+\varGamma _{1}\varLambda _{2} & 0 & (1-\bar{\alpha })A_{k}^{T}L_{k+1}B_{2k}+\varGamma _{3}\varSigma _{2} & 0 \\ 0 & \varOmega _{5} & 0 & \varOmega _{6} \end{bmatrix} , \\& \varXi _{13}^{0} = \begin{bmatrix} 0 & A_{k}^{T}L_{k+1} & 0 & 0 & -\varrho _{2}D_{k}^{T}K_{k}^{T}Z_{k+1} \\ 0 & 0 & \varOmega _{7} & 0 & 0 \end{bmatrix} , \\& \bar{N}_{k}^{T} = \begin{bmatrix} \mathcal{N}_{k}^{T} & 0 & 0 & 0 & 0 \end{bmatrix} , \\& \bar{H}_{k} = \begin{bmatrix} 0 & \mathcal{W}_{k} & \mathcal{Y}_{k} & 0 & 0 \end{bmatrix} . \end{aligned}$$

Subsequently, it follows from Lemma 2 that

$$\begin{aligned} \begin{bmatrix} \varXi _{11} & \varXi _{12}^{0} & \varXi _{13}^{0}& \varXi _{14} & 0 \\ \ast & \varXi _{22}&0 & 0 &0 \\ \ast & \ast & \varXi _{33}&0 & \varXi _{35} \\ \ast & \ast & \ast & \varXi _{44}& 0 \\ \ast & \ast & \ast & \ast & \varXi _{55} \end{bmatrix} +\epsilon _{1,k} \bar{N}_{k}\bar{N}_{k}^{T}+\epsilon _{1,k}^{-1}\bar{H} _{k}^{T} \bar{H}_{k}< 0. \end{aligned}$$
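Here the uncertain term \(\bar{N}_{k}F_{k}\bar{H}_{k}+\bar{H}_{k}^{T}F_{k}^{T}\bar{N}_{k}^{T}\) has been majorized by the well-known bound that Lemma 2 supplies (restated only for readability, in the form matching its use above): for any scalar \(\epsilon _{1,k}>0\) and any \(F_{k}\) with \(F_{k}^{T}F_{k}\leq I\),

$$\begin{aligned} \bar{N}_{k}F_{k}\bar{H}_{k}+\bar{H}_{k}^{T}F_{k}^{T}\bar{N}_{k}^{T}\leq \epsilon _{1,k}\bar{N}_{k}\bar{N}_{k}^{T}+\epsilon _{1,k}^{-1}\bar{H}_{k}^{T}\bar{H}_{k}. \end{aligned}$$

The same bound, with \(\tilde{N}_{k}\), \(\tilde{H}_{k}\) and \(\epsilon _{2,k}\), is applied again below.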

Similarly, (28) can be rewritten as

$$\begin{aligned} \begin{bmatrix} -G_{k+1} & \varUpsilon _{12}^{0}& \varUpsilon _{13}& \varUpsilon _{14} \\ \ast & -G_{k} & 0& 0 \\ \ast & \ast & \varUpsilon _{33}& 0 \\ \ast & \ast &\ast & \varUpsilon _{44} \end{bmatrix} +\tilde{N}_{k}F_{k} \tilde{H}_{k}+\tilde{H}_{k}^{T}F_{k}^{T} \tilde{N} _{k}^{T}< 0, \end{aligned}$$

where

$$\begin{aligned} \varUpsilon _{12}^{0} =& \begin{bmatrix} \varrho _{4}A_{k}G_{1k} & \varrho _{4}A_{k}G_{3k}^{T} \\ \varrho _{4}(A_{k}-\bar{\lambda }K_{k}D_{k})G_{3k} & \varrho _{4}(A_{k}-\bar{ \lambda }K_{k}D_{k})G_{2k} \end{bmatrix} , \\ \tilde{N}_{k}^{T} =& \begin{bmatrix} \mathcal{F}_{k}^{T} & 0 &0&0 \end{bmatrix} , \\ \tilde{H}_{k} =& \begin{bmatrix} 0&\mathcal{X}_{k}& 0&0 \end{bmatrix} . \end{aligned}$$

Then it follows from Lemma 2 that

$$\begin{aligned} \begin{bmatrix} -G_{k+1} & \varUpsilon _{12}^{0}& \varUpsilon _{13}& \varUpsilon _{14} \\ \ast & -G_{k} & 0& 0 \\ \ast & \ast & \varUpsilon _{33}& 0 \\ \ast & \ast &\ast & \varUpsilon _{44} \end{bmatrix} +\epsilon _{2,k} \tilde{N}_{k}\tilde{N}_{k}^{T}+\epsilon _{2,k}^{-1} \tilde{H}_{k}^{T} \tilde{H}_{k}< 0. \end{aligned}$$

Now, it should be noted that (30) implies (27); similarly, (31) leads to (28). As such, both the estimation error covariance constraint and the prescribed \(H_{\infty }\) performance requirement of system (7) are ensured. The proof of Theorem 4 is complete. □

Remark 5

Up to now, we have discussed the variance-constrained resilient \(H_{\infty }\) state estimation problem for a class of time-varying RNNs with randomly varying nonlinearities and missing measurements. By applying the recursive matrix inequality technique, criteria have been established that guarantee both the prescribed \(H_{\infty }\) performance and the estimation error covariance constraint for the addressed time-varying neural networks within the finite-horizon framework. The proposed estimation approach has three advantages: (i) the disturbance effects are attenuated at the prescribed \(H_{\infty }\) level over the finite horizon; (ii) the prescribed upper bound on the estimation error covariance is guaranteed by verifying certain matrix inequalities; and (iii) the estimator is computed recursively, so the approach is suitable for online calculation and implementation when solving estimation problems of time-varying RNNs, which constitutes another appealing feature.

Remark 6

In fact, almost all existing estimation schemes apply only to time-invariant NNs. We have made one of the first attempts to treat time-varying RNNs and to address two combined performance indices that reflect practical requirements, which constitutes the essential superiority of the proposed result. For example, compared with the non-fragile/resilient state estimation method in [10], our scheme reveals the joint impact of the missing measurements and the randomly varying nonlinearities on the estimation performance, thereby offering a new way of treatment. In contrast to the results in [11, 12], our state estimation scheme explicitly accommodates the time-varying characteristics.

5 An illustrative example

In this section, we give a simulation to illustrate the feasibility of the proposed estimation approach under the variance constraint. The parameters of the time-varying RNN (1) are given as follows:

$$\begin{aligned}& A_{k} = \begin{bmatrix} -0.5 & 0 \\ 0 & -0.1\sin (2k) \end{bmatrix} , \qquad B_{1k}= \begin{bmatrix} -\sin (2k) & 0.5 \\ -0.2 & 0.5 \end{bmatrix} , \qquad \varGamma _{1}= \begin{bmatrix} 0.9 & 0 \\ 0 & 0.9 \end{bmatrix} , \\& B_{2k} = \begin{bmatrix} -0.27\sin (k) & 0.2 \\ -0.1 & -0.14 \end{bmatrix} , \qquad C_{k}= \begin{bmatrix} -0.1 & -0.3\sin (2k) \end{bmatrix} ^{T}, \qquad \varGamma _{2}= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} , \\& D_{k} = \begin{bmatrix} -0.55\sin (k) & 1.5 \end{bmatrix} , \qquad \bar{K}_{k}= \begin{bmatrix} -0.27\sin (2k) & 0.15 \end{bmatrix} ^{T}, \qquad \varGamma _{3}= \begin{bmatrix} 1.1 & 0 \\ 0 & 1.1 \end{bmatrix} , \\& \varGamma _{4} = \begin{bmatrix} 1.3 & 0 \\ 0 & 1.3 \end{bmatrix} , \qquad H_{k}= \begin{bmatrix} 0.1 & 0.15 \end{bmatrix} ^{T}, \qquad N_{k}= \begin{bmatrix} 0.2 & 0.1 \end{bmatrix} , \qquad F_{k}=\sin (0.6k), \\& M_{k} = \begin{bmatrix} -0.01 & -0.12\sin (k) \end{bmatrix} , \qquad \bar{\lambda }=0.34, \qquad \bar{\alpha }=0.1, \qquad \rho =0.5, \\& \varepsilon _{1}=0.5, \qquad \varepsilon _{2}=0.3, \qquad \varepsilon _{3}=0.2, \qquad \varepsilon _{4}=0.1. \end{aligned}$$

Moreover, the activation functions can be taken as

$$\begin{aligned} f(x_{k})=g(x_{k})= \begin{bmatrix} \tanh (x_{1,k}) \\ \tanh (0.8x_{2,k}) \end{bmatrix} \end{aligned}$$

with \(x_{k}= [ x_{1,k} \ x_{2,k} ] ^{T}\) being the neuron state vector of the neural network. It is easy to obtain \(\varLambda _{0}=\operatorname{diag}\{0.1,0.1\}\), \(\varLambda _{1}=\operatorname{diag}\{0.2,0.2\}\), \(\varLambda _{2}=\operatorname{diag}\{0.9,0.9\}\), \(\varSigma _{0}=\operatorname{diag}\{0.2,0.2\}\), \(\varSigma _{1}=\operatorname{diag}\{0.3,0.3\}\) and \(\varSigma _{2}=\operatorname{diag}\{0.5,0.5\}\). Let the disturbance attenuation level be \(\gamma =0.9\), the horizon be \(N=94\), the weighting matrices be \(W_{\varphi }(1)=W_{\varphi }(2)=1\), the upper bound matrices be \(\{\varPsi _{k}\}_{1\leq k\leq N}=\operatorname{diag}\{0.3,0.3\}\), and the noise covariances be \(V_{1}=V_{2}=1\). Choose the initial matrices satisfying (29). Then the matrix inequalities (30)–(32) in Theorem 4 can be solved recursively, and the estimator gain \(K_{k}\) is designed as in Table 1.

Table 1 The values of estimator gain \(K_{k}\)
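To indicate how such a gain sequence is produced numerically, the following schematic Python/CVXPY sketch implements a forward-in-time LMI recursion. It deliberately uses a simplified Lyapunov-type stand-in LMI with the classical substitution \(Y_{k}=P_{k+1}K_{k}\), not the full inequalities (30)–(32) of Theorem 4 (which additionally involve \(L_{k+1}\), \(Z_{k+1}\), \(G_{k+1}\), \(\epsilon _{1,k}\) and \(\epsilon _{2,k}\)); it only illustrates the step-by-step structure of the design, and the packages cvxpy and numpy are assumed available.

import numpy as np
import cvxpy as cp

# Schematic forward-in-time LMI recursion (a simplified stand-in, not the
# exact inequalities (30)-(32)). At each k we seek P_{k+1} > 0 and a gain
# K_k such that the toy error dynamics e_{k+1} = (A_k - K_k D_k) e_k obey
# (A_k - K_k D_k)^T P_{k+1} (A_k - K_k D_k) - P_k < 0, written as an LMI
# via a Schur complement and the substitution Y_k = P_{k+1} K_k.

def A_mat(k):
    return np.array([[-0.5, 0.0], [0.0, -0.1 * np.sin(2 * k)]])

def D_mat(k):
    return np.array([[-0.55 * np.sin(k), 1.5]])

n = 2
P = np.eye(n)                            # initialization, playing the role of (29)
for k in range(10):
    A, D = A_mat(k), D_mat(k)
    P_next = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((n, 1))              # Y = P_{k+1} K_k keeps the problem linear
    S = cp.Variable((2 * n, 2 * n), symmetric=True)
    blocks = cp.bmat([[-P, (P_next @ A - Y @ D).T],
                      [P_next @ A - Y @ D, -P_next]])
    cons = [S == blocks,                 # S is the (symmetric) LMI block matrix
            S << -1e-6 * np.eye(2 * n),  # strict negative definiteness
            P_next >> 1e-6 * np.eye(n)]
    cp.Problem(cp.Minimize(cp.trace(P_next)), cons).solve()
    P = P_next.value
    K = np.linalg.solve(P, Y.value)      # recover K_k = P_{k+1}^{-1} Y_k
    print(f"k = {k}: K_k^T = {K.ravel()}")

The actual design follows the same loop pattern: at each step the previously computed matrices enter as data, the step-(k+1) unknowns are solved for, and the gain \(K_{k}\) is extracted before moving to the next instant.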

Let the initial states be \(x_{0}= [ -1.5\ 0.3 ] ^{T}\) and \(\hat{x}_{0}= [ -1.2 \ 0.3 ] ^{T}\). Based on the state estimation method in Theorem 4, the simulation results are shown in Figs. 1–4. Figures 1 and 2 plot the output \(z_{k}\) and its estimation \(\hat{z}_{k}\), respectively. Figure 3 depicts the output estimation error \(\tilde{z}_{k}\). The error variance upper bound and the actual error variance are plotted in Fig. 4, which illustrates that the actual error variance stays below its upper bound. From the simulations, we conclude that the presented variance-constrained resilient \(H_{\infty }\) estimation algorithm is effective; a rough Monte Carlo sketch of this comparison is given below.
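The following sketch simulates a plant/estimator pair of the structure suggested by the abstract and by the matrices above: a Bernoulli variable \(\alpha _{k}\) with mean \(\bar{\alpha }\) selects between the two nonlinearities, and a Bernoulli variable \(\lambda _{k}\) with mean \(\bar{\lambda }\) models the missing measurements. The exact model equations (1) and (7) appear earlier in the paper, so this is an assumed reconstruction; the constant gain K is a hypothetical stand-in for the Table 1 sequence \(K_{k}\), and the estimator-gain perturbation term is omitted for brevity.

import numpy as np

rng = np.random.default_rng(0)
N = 94
alpha_bar, lam_bar = 0.1, 0.34

def f(x):  # activation functions from the example (f = g here)
    return np.tanh(np.array([x[0], 0.8 * x[1]]))

K = np.array([[0.05], [0.05]])   # hypothetical stand-in for the Table 1 gains

x = np.array([-1.5, 0.3])        # plant initial state x_0
xh = np.array([-1.2, 0.3])       # estimator initial state \hat{x}_0
err = np.zeros((N, 2))
for k in range(N):
    A = np.array([[-0.5, 0.0], [0.0, -0.1 * np.sin(2 * k)]])
    B1 = np.array([[-np.sin(2 * k), 0.5], [-0.2, 0.5]])
    B2 = np.array([[-0.27 * np.sin(k), 0.2], [-0.1, -0.14]])
    C = np.array([-0.1, -0.3 * np.sin(2 * k)])
    D = np.array([[-0.55 * np.sin(k), 1.5]])
    a = rng.random() < alpha_bar     # alpha_k: which nonlinearity is active
    lam = rng.random() < lam_bar     # lambda_k: measurement received or missing
    w, v = rng.normal(), rng.normal()            # unit-covariance noises (V1 = V2 = 1)
    y = lam * (D @ x) + v                        # possibly missing measurement
    xh = (A @ xh + alpha_bar * (B1 @ f(xh)) + (1 - alpha_bar) * (B2 @ f(xh))
          + K @ (y - lam_bar * (D @ xh)))        # assumed estimator structure
    x = A @ x + a * (B1 @ f(x)) + (1 - a) * (B2 @ f(x)) + C * w
    err[k] = x - xh

# time-averaged error variance as a crude proxy for E[e_k e_k^T];
# the paper's bound is Psi_k = diag{0.3, 0.3}
print("empirical error variance:", err.var(axis=0))

Running such a loop with the designed gains in place of the stand-in K is how curves like those in Figs. 3 and 4 would typically be generated.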

Figure 1 The controlled output \(z_{1,k}\) and its estimation

Figure 2 The controlled output \(z_{2,k}\) and its estimation

Figure 3 The output estimation errors

Figure 4 The upper bound of error variance and the actual error variance

6 Conclusions

In this paper, we have discussed the variance-constrained resilient \(H_{\infty }\) state estimation problem for a class of time-varying neural networks with randomly varying nonlinearities and missing measurements. Two Bernoulli distributed random variables have been adopted to describe the phenomena of randomly varying nonlinearities and missing measurements. A new variance-constrained \(H_{\infty }\) state estimation method has been designed based on the available information. By applying the recursive matrix inequality technique, criteria have been established that guarantee the prescribed \(H_{\infty }\) performance and the estimation error covariance constraint for the addressed time-varying neural networks. In addition, the gain matrix of the state estimator has been obtained by testing the feasibility of the concerned recursive matrix inequalities. Finally, the validity of the proposed estimation method has been verified by a simulation example. Our future research topics include the state estimation problems for time-varying RNNs under the finite-time criterion as in [36] and with uncertain occurrence probabilities as mentioned in [49].

References

  1. Zhang, X., Han, Q., Yu, X.: Survey on recent advances in networked control systems. IEEE Trans. Ind. Inform. 12(5), 1740–1752 (2016)

  2. Zhang, H., Hu, J., Liu, H., Yu, X., Liu, F.: Recursive state estimation for time-varying complex networks subject to missing measurements and stochastic inner coupling under random access protocol. Neurocomputing 346, 48–57 (2019)

  3. Selvaraj, P., Sakthivel, R., Ahn, C.K.: Observer-based synchronization of complex dynamical networks under actuator saturation and probabilistic faults. IEEE Trans. Syst. Man Cybern. Syst. 49(7), 1516–1526 (2019)

  4. Zheng, M., Tang, W., Zhao, X.: Hyperparameter optimization of neural network-driven spatial models accelerated using cyber-enabled high-performance computing. Int. J. Geogr. Inf. Sci. 33(2), 314–345 (2019)

  5. Maharajan, C., Raja, R., Cao, J., Ravi, G., Rajchakit, G.: Global exponential stability of Markovian jumping stochastic impulsive uncertain BAM neural networks with leakage, mixed time delays, and alpha-inverse Holder activation functions. Adv. Differ. Equ. 2018, Article ID 113 (2018). https://doi.org/10.1186/s13662-018-1553-7

  6. Manivannan, R., Samidurai, R., Cao, J., Alsaedi, A., Alsaadi, F.E.: Delay-dependent stability criteria for neutral-type neural networks with interval time-varying delay signals under the effects of leakage delay. Adv. Differ. Equ. 2018, Article ID 53 (2018). https://doi.org/10.1186/s13662-018-1509-y

  7. Selvaraj, P., Sakthivel, R., Kwon, O.M.: Finite-time synchronization of stochastic coupled neural networks subject to Markovian switching and input saturation. Neural Netw. 105, 154–165 (2018)

  8. Sakthivel, R., Anbuvithya, R., Mathiyalagan, K., Prakash, P.: Combined \(H_{\infty }\) and passivity state estimation of memristive neural networks with random gain fluctuations. Neurocomputing 168, 1111–1120 (2015)

  9. Sakthivel, R., Vadivel, P., Mathiyalagan, K., Arunkumar, A., Sivachitra, M.: Design of state estimator for bidirectional associative memory neural networks with leakage delays. Inf. Sci. 296, 263–274 (2015)

  10. Li, R., Gao, X., Cao, J.: Non-fragile state estimation for delayed fractional-order memristive neural networks. Appl. Math. Comput. 340, 221–233 (2019)

  11. Guo, R., Zhang, Z., Gao, M.: State estimation for complex-valued memristive neural networks with time-varying delays. Adv. Differ. Equ. 2018, Article ID 118 (2018). https://doi.org/10.1186/s13662-018-1575-1

  12. Liu, Y., Wang, Z., Liu, X.: State estimation for discrete-time neural networks with Markov-mode-dependent lower and upper bounds on the distributed delays. Neural Process. Lett. 36(1), 1–19 (2012)

  13. Hu, J., Wang, Z., Alsaadi, F.E., Hayat, T.: Event-based filtering for time-varying nonlinear systems subject to multiple missing measurements with uncertain missing probabilities. Inf. Fusion 38, 74–83 (2017)

  14. Bao, H., Cao, J., Kurths, J.: State estimation of fractional-order delayed memristive neural networks. Nonlinear Dyn. 94(2), 1215–1225 (2018)

  15. Hu, J., Wang, Z., Gao, H.: Joint state and fault estimation for uncertain time-varying nonlinear systems with randomly occurring faults and sensor saturations. Automatica 97, 150–160 (2018)

  16. Kang, W., Zhong, S., Cheng, J.: \(H_{\infty }\) state estimation for discrete-time neural networks with time-varying and distributed delays. Adv. Differ. Equ. 2015, Article ID 263 (2015). https://doi.org/10.1186/s13662-015-0603-7

  17. Bernat, J.: Multi observer structure for rapid state estimation in linear time varying systems. Int. J. Control. Autom. Syst. 16(4), 1746–1755 (2018)

  18. Dong, H., Bu, X., Hou, N., Liu, Y., Alsaadi, F.E., Hayat, T.: Event-triggered distributed state estimation for a class of time-varying systems over sensor networks with redundant channels. Inf. Fusion 36, 243–250 (2017)

  19. Hu, L., Wang, Z., Han, Q., Liu, X.: Event-based input and state estimation for linear discrete time-varying systems. Int. J. Control 91(1), 101–113 (2018)

  20. Zhang, H., Hu, J., Zou, L., Yu, X., Wu, Z.: Event-based state estimation for time-varying stochastic coupling networks with missing measurements under uncertain occurrence probabilities. Int. J. Gen. Syst. 47(5), 422–437 (2018)

  21. Jia, C., Hu, J.: Variance-constrained filtering for nonlinear systems with randomly occurring quantized measurements: recursive scheme and boundedness analysis. Adv. Differ. Equ. 2019, Article ID 53 (2019). https://doi.org/10.1186/s13662-019-2000-0

  22. Duan, H., Peng, T.: Finite-time reliable filtering for T-S fuzzy stochastic jumping neural networks under unreliable communication links. Adv. Differ. Equ. 2017, Article ID 54 (2017). https://doi.org/10.1186/s13662-017-1108-3

  23. Nelson, P.R.C., MacGregor, J.F., Taylor, P.A.: The impact of missing measurements on PCA and PLS prediction and monitoring applications. Chemom. Intell. Lab. Syst. 80(1), 1–12 (2006)

  24. Che, Y., Shu, H., Liu, Y.: Exponential mean-square \(H_{\infty }\) filtering for arbitrarily switched neural networks with missing measurements. Neurocomputing 193, 227–234 (2016)

  25. Tsai, L.T., Yang, C.-C.: Improving measurement invariance assessments in survey research with missing data by novel artificial neural networks. Expert Syst. Appl. 39(12), 10456–10464 (2012)

  26. Song, Y., Hu, J., Chen, D., Liu, Y., Alsaadi, F.E., Sun, G.: A resilience approach to state estimation for discrete neural networks subject to multiple missing measurements and mixed time-delays. Neurocomputing 272, 74–83 (2018)

  27. Liu, M., Chen, H.: \(H_{\infty }\) state estimation for discrete-time delayed systems of the neural network type with multiple missing measurements. IEEE Trans. Neural Netw. Learn. Syst. 26(12), 2987–2998 (2015)

  28. Rakkiyappan, R., Sasirekha, R., Zhu, Y., Zhang, L.: \(H_{\infty }\) state estimator design for discrete-time switched neural networks with multiple missing measurements and sojourn probabilities. J. Franklin Inst. 353(6), 1358–1385 (2016)

  29. Liang, J., Wang, Z., Liu, X.: State estimation for coupled uncertain stochastic networks with missing measurements and time-varying delays: the discrete-time case. IEEE Trans. Neural Netw. 20(5), 781–793 (2009)

  30. Liu, H., Wang, Z., Shen, B., Liu, X.: Event-triggered \(H_{\infty }\) state estimation for delayed stochastic memristive neural networks with missing measurements: the discrete time case. IEEE Trans. Neural Netw. Learn. Syst. 29(8), 3726–3737 (2018)

  31. Ding, D., Wang, Z., Shen, B., Dong, H.: \(H_{\infty }\) state estimation with fading measurements, randomly varying nonlinearities and probabilistic distributed delays. Int. J. Robust Nonlinear Control 25(13), 2180–2195 (2015)

  32. Zhang, P., Hu, J., Liu, H., Zhang, C.: Sliding mode control for networked systems with randomly varying nonlinearities and stochastic communication delays under uncertain occurrence probabilities. Neurocomputing 320, 1–11 (2018)

  33. Liang, J., Wang, Z., Liu, X.: Distributed state estimation for discrete-time sensor networks with randomly varying nonlinearities and missing measurements. IEEE Trans. Neural Netw. 22(3), 486–496 (2011)

  34. Dong, H., Wang, Z., Gao, H.: Fault detection for Markovian jump systems with sensor saturations and randomly varying nonlinearities. IEEE Trans. Circuits Syst. I, Regul. Pap. 59(10), 2354–2362 (2012)

  35. Wang, L., Wei, G., Li, W.: Probability-dependent \(H_{\infty }\) synchronization control for dynamical networks with randomly varying nonlinearities. Neurocomputing 133, 369–376 (2014)

  36. Sakthivel, R., Sakthivel, R., Kaviarasan, B., Wang, C., Ma, Y.K.: Finite-time nonfragile synchronization of stochastic complex dynamical networks with semi-Markov switching outer coupling. Complexity 2018, Article ID 8546304 (2018). https://doi.org/10.1155/2018/8546304

  37. Sakthivel, R., Nithya, V., Ma, Y.K., Wang, C.: Finite-time nonfragile dissipative filter design for wireless networked systems with sensor failures. Complexity 2018, Article ID 7482015 (2018). https://doi.org/10.1155/2018/7482015

  38. Wang, D., Shi, P., Wang, W., Karimi, H.R.: Non-fragile \(H_{\infty }\) control for switched stochastic delay systems with application to water quality process. Int. J. Robust Nonlinear Control 24(11), 1677–1693 (2014)

  39. Pourgholi, M., Majd, V.J.: A new non-fragile \(H_{\infty }\) proportional-integral filtered-error adaptive observer for a class of non-linear systems and its application to synchronous generators. Proc. Inst. Mech. Eng. 225(1), 99–112 (2011)

  40. Wu, Z., Xu, Z., Shi, P., Chen, M.Z., Su, H.: Nonfragile state estimation of quantized complex networks with switching topologies. IEEE Trans. Neural Netw. Learn. Syst. 29(10), 5111–5121 (2018)

  41. Shen, H., Wang, T., Chen, M., Lu, J.: Nonfragile mixed state estimation for repeated scalar nonlinear systems with Markov jumping parameters and redundant channels. Nonlinear Dyn. 91(1), 641–654 (2018)

  42. Xie, W., Zhu, H., Cheng, J., Zhong, S., Shi, K.: Finite-time asynchronous \(H_{\infty }\) resilient filtering for switched delayed neural networks with memory unideal measurements. Inf. Sci. 487, 156–175 (2019)

  43. Sheng, L., Niu, Y., Gao, M.: Distributed resilient filtering for time-varying systems over sensor networks subject to round-robin/stochastic protocol. ISA Trans. 87, 55–67 (2019)

  44. Dong, H., Wang, Z., Ho, D.W., Gao, H.: Variance-constrained \(H_{\infty }\) filtering for a class of nonlinear time-varying systems with multiple missing measurements: the finite-horizon case. IEEE Trans. Signal Process. 58(5), 2534–2543 (2010)

  45. Ma, L., Wang, Z., Han, Q.L., Lam, H.K.: Variance-constrained distributed filtering for time-varying systems with multiplicative noises and deception attacks over sensor networks. IEEE Sens. J. 17(7), 2279–2288 (2017)

  46. Dong, H., Hou, N., Wang, Z., Ren, W.: Variance-constrained state estimation for complex networks with randomly varying topologies. IEEE Trans. Neural Netw. Learn. Syst. 29(7), 2757–2768 (2018)

  47. Li, I.H., Wang, W.Y., Su, S.F., Lee, Y.S.: A merged fuzzy neural network and its applications in battery state-of-charge estimation. IEEE Trans. Energy Convers. 22(3), 697–708 (2007)

  48. Hu, J., Zhang, H., Yu, X., Liu, H., Chen, D.: Design of sliding-mode-based control for nonlinear systems with mixed-delays and packet losses under uncertain missing probability. IEEE Trans. Syst. Man Cybern. Syst. (2019). https://doi.org/10.1109/TSMC.2019.2919513

  49. Hu, J., Zhang, P., Kao, Y., Liu, H., Chen, D.: Sliding mode control for Markovian jump repeated scalar nonlinear systems with packet dropouts: the uncertain occurrence probabilities case. Appl. Math. Comput. (2019). https://doi.org/10.1016/j.amc.2019.124574


Availability of data and materials

Not applicable.

Funding

This work was supported in part by the Outstanding Youth Science Foundation of Heilongjiang Province of China under grant JC2018001, the National Natural Science Foundation of China under Grant 61673141, the Fok Ying Tung Education Foundation of China under Grant 151004, the Natural Science Foundation of Heilongjiang Province of China under grant A2018007, the Fundamental Research Funds in Heilongjiang Provincial Universities of China under Grant 135209250, the Educational Research Project of Qiqihar University of China under Grant 2017028, and the Alexander von Humboldt Foundation of Germany.

Author information

Contributions

The authors contributed equally to this paper. The authors read and approved the final version of the submission.

Corresponding author

Correspondence to Jun Hu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Both authors read and approved the final version of the paper.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Gao, Y., Hu, J., Chen, D. et al. Variance-constrained resilient \(H_{\infty }\) state estimation for time-varying neural networks with randomly varying nonlinearities and missing measurements. Adv Differ Equ 2019, 380 (2019). https://doi.org/10.1186/s13662-019-2298-7