Open Access

Finite-time reliable filtering for T-S fuzzy stochastic jumping neural networks under unreliable communication links

Advances in Difference Equations 2017, 2017:54

DOI: 10.1186/s13662-017-1108-3

Received: 27 September 2016

Accepted: 6 February 2017

Published: 17 February 2017

Abstract

This study is concerned with the finite-time state estimation problem for T-S fuzzy stochastic jumping neural networks whose communication links with the estimator are imperfect. By introducing the fuzzy technique, both the nonlinearities and the stochastic disturbances are represented by a T-S model. Stochastic variables subject to Bernoulli white sequences are employed to determine in which sector bounds the nonlinearities occur. Sufficient conditions for the existence of the state estimator are given in terms of linear matrix inequalities, whose effectiveness is illustrated by simulation results.

Keywords

finite-time boundedness; discrete-time T-S fuzzy model; Lyapunov-Krasovskii functional; stochastic jumping neural networks

1 Introduction

Over the past decades, a great number of works have been devoted to various neural networks because of their wide applications in signal processing, pattern recognition, the solution of nonlinear algebraic equations, and so on; in particular, numerous works have considered the stability analysis and performance behavior of neural networks [1–5]. The stochastic jumping neural network (SJNN) has been widely investigated because of random changes in the interconnections of dynamic network nodes, and many works have been devoted to its study [6–8]. It is well recognized that time delays are frequently encountered in many practical systems, such as communication systems, neural networks, and engineering systems, and that they are a main source of poor performance. Moreover, because of the finite speed of information processing, discrete delays are always involved. Therefore, the stability problem for discrete-time stochastic jumping neural networks (DTSJNNs) has attracted considerable attention; see [9–12] and the references therein.

However, complexity, uncertainty, and vagueness exist in dynamic systems, and they can be described by fuzzy theory. T-S fuzzy systems give a local linear representation of the considered nonlinear dynamic system by means of a set of IF-THEN rules, so it is reasonable to model nonlinear systems as a set of linear sub-models with the aid of a T-S fuzzy model [13–19]. Originally, the linear models were introduced to represent the local dynamics of state-space regions. Recently, various results have been reported on the stability and other dynamical behaviors of T-S fuzzy neural networks [20–24] and stochastic differential equations with fuzzy rules [25–31]. However, the state estimator design methods in the existing literature were based on the assumption that the communication between the neural network and the estimator is perfect. In many practical systems the communication may be only partly available, so it is necessary to study general SJNNs with unreliable communication links (UCLs) [32–34], which were neglected in the aforementioned literature. On the other hand, most of the results obtained concern an infinite-time interval. Compared with the infinite-time case, the finite-time one enjoys fast convergence and better transient performance. Therefore, many scholars have devoted their studies to the finite-time stability problem for nonlinear systems without delay [35] and with delay [36–42]. From [25–27, 35–43] and the references therein it is apparent that the estimation problem for fuzzy DTSJNNs with UCLs has not been established so far. A natural question is how to cope with the finite-time state estimation problem for T-S fuzzy DTSJNNs with UCLs; to the best of our knowledge, this question has not been fully studied.

Motivated by the above discussion, we present a new and more relaxed technique to study finite-time state estimation for T-S fuzzy stochastic jumping neural networks subject to UCLs. The approach exploits more information on the upper and lower bounds of the activation functions and covers some existing activation functions as special cases. A new random process is introduced to model the signal-transmission phenomenon, and delay-dependent sufficient conditions are given by means of a Lyapunov functional. Finally, a numerical example is offered to show the effectiveness of the proposed approach.

Notation: \(\mathbb{R}^{n}\) denotes the n-dimensional Euclidean space; the superscripts −1 and ⊺ denote the inverse and transpose, respectively. \(\mathcal{E}\{\cdot\}\) denotes the expectation operator with respect to some probability measure. The symbol \(\operatorname{He}(\mathcal{Q})\) is used to represent \(\mathcal{Q}+\mathcal{Q}^{\intercal}\). The symbol ∗ is employed to represent a term that is induced by symmetry, and ⊗ denotes the Kronecker product. \(e_{s}=[0_{2n\times2(s-1)n} \ I_{2n} \ 0_{2n\times2(9-s)n}]\) (\(s=1,2,\ldots,9\)).

2 Preliminaries

Consider a probability space \((\Omega, F, \rho)\), where Ω is the sample space, F is the σ-algebra of events, and ρ is the probability measure defined on F. The T-S fuzzy DTSJNNs over the space \((\Omega,F,\rho)\) are given by the following model:
$$ \begin{gathered}[b] \textbf{Plant Rule}\mbox{ $i$:} \\ \textbf{IF} \mbox{$\xi_{1}$ is $M_{i1}$ and $\ldots$ and $ \xi_{p}$ is $M_{ip}$} \\ \textbf{THEN} \\ \textstyle\begin{cases} x(k+1)=A_{i}(r_{k})x(k)+B_{i}(r_{k})f(x(k))+C_{i}(r_{k})g(x(k-\tau(k)))+D_{i}(r_{k})\omega(k) \\ y(k)=C_{1i}(r_{k})x(k)+C_{2i}(r_{k})x(k-\tau(k)), \end{cases}\displaystyle \end{gathered} $$
(1)
where \(x(k) \in \mathbb{R}^{n}\) and \(y(k)\in\mathbb{R}^{p}\) represent the state vector and the measured output vector, respectively. The external disturbance \(\omega(k)\in\mathbb{R}^{q}\) belongs to \(l_{2}[0,\infty)\) and satisfies
$$ \mathcal{E} \Biggl\{ \sum_{k=0}^{N} \omega^{\intercal}(k)\omega(k) \Biggr\} \leq d,\quad d\geq0. $$
(2)
The stochastic jump process \(\{r_{k},k\geq0\}\) is a discrete-time, discrete-state Markov chain taking values in a finite set \(\mathcal{L}=\{1,2,\ldots,s\}\) with transition probabilities \(\pi_{lm}=\operatorname{Pr}\{r_{k+1}=m\mid r_{k}=l\}\) satisfying \(\pi_{lm}>0\) and \(\sum_{m=1}^{s}\pi_{lm}=1\) for every \(l\in\mathcal{L}\). \(\xi_{j}\) and \(M_{ij}\) (\(i=1,2,\ldots,q\), \(j=1,2,\ldots,p\)) are, respectively, the premise variables and the fuzzy sets, and q is the number of IF-THEN rules. The fuzzy basis functions are given by
$$ h_{i}\bigl(\xi(k)\bigr)=\frac{\prod_{j=1}^{p}\mu_{ij}(\xi_{j}(k))}{\sum_{i=1}^{q}\prod_{j=1}^{p}\mu_{ij}(\xi_{j}(k))}, $$
in which \(\mu_{ij}(\xi_{j}(k))\) represents the grade of membership of \(\xi_{j}(k)\) in the fuzzy set \(M_{ij}\). It is obvious that \(\sum_{i=1}^{q}h_{i}(\xi(k))=1\) with \(h_{i}(\xi(k))>0\). The transmission delay \(\tau(k)\) is time varying and satisfies \(0<\tau_{1}\leq\tau(k)\leq\tau_{2}\), where \(\tau_{1}\) and \(\tau_{2}\) are known constants. \(f(x(k))\) and \(g(x(k-\tau(k)))\) are the neuron activation functions.
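As a quick numerical illustration (not part of the original derivation) of the normalization \(\sum_{i=1}^{q}h_{i}(\xi(k))=1\), the sketch below computes the fuzzy basis functions for a hypothetical two-rule system; the Gaussian-type membership shapes and their parameters are assumptions, not data from this paper.

```python
# Sketch: normalized fuzzy basis functions h_i(xi) built from membership
# grades mu_ij(xi_j).  The Gaussian-type memberships are hypothetical.
import math

def mu(center, width, z):
    # grade of membership of z in a fuzzy set centered at `center`
    return math.exp(-((z - center) / width) ** 2)

def fuzzy_basis(xi, rules):
    # rules: one list per rule, each entry a (center, width) pair per premise variable
    weights = []
    for rule in rules:
        w = 1.0
        for (c, s), z in zip(rule, xi):
            w *= mu(c, s, z)              # product over j of mu_ij(xi_j)
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]   # h_i = weight_i / sum of all weights

xi = [0.3]                                # one premise variable
rules = [[(-1.0, 1.0)], [(1.0, 1.0)]]     # q = 2 hypothetical rules
h = fuzzy_basis(xi, rules)
print(h, sum(h))                          # the h_i sum to 1
```

The normalization makes the defuzzified model (3) a convex combination of the local linear sub-models.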
Utilizing the centroid method for defuzzification, the fuzzy system (1) is inferred as follows:
$$ \textstyle\begin{cases} x(k+1)\\ \quad =\sum_{i=1}^{q}h_{i}(\xi(k))[A_{i}(r_{k})x(k)+B_{i}(r_{k})f(x(k))+C_{i}(r_{k})g(x(k-\tau(k))) +D_{i}(r_{k})\omega(k)],\hspace{-10pt} \\ y(k)=\sum_{i=1}^{q}h_{i}(\xi(k))[C_{1i}(r_{k})x(k)+C_{2i}(r_{k})x(k-\tau(k))]. \end{cases} $$
(3)
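The jump process \(r_{k}\) entering (3) can be simulated directly from its transition matrix; the two-mode chain below is a hypothetical example, used only to illustrate the row-stochastic constraint \(\sum_{m}\pi_{lm}=1\) (modes are indexed from 0 in the code).

```python
# Sketch: simulating the discrete-time Markov chain r_k from a row-stochastic
# transition matrix Pi = [pi_lm].  The two-mode chain is hypothetical.
import random

def simulate_chain(Pi, r0, N, rng):
    modes = [r0]
    for _ in range(N):
        row = Pi[modes[-1]]
        u = rng.random()
        acc, nxt = 0.0, 0
        for m, p in enumerate(row):   # inverse-CDF sampling of the next mode
            acc += p
            if u < acc:
                nxt = m
                break
        modes.append(nxt)
    return modes

Pi = [[0.7, 0.3],
      [0.4, 0.6]]                     # each row sums to 1
rng = random.Random(1)
path = simulate_chain(Pi, 0, 1000, rng)
print(path[:10])
```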
Throughout the paper, it is understood that the actual input available to the estimator is \(y_{as}(k)\). In early research on state estimation for neural networks, signal transmission was assumed to take place over an ideal communication link, that is, \(y_{as}(k)=y(k)\). In the real world, however, data may be lost during transmission from the sensor to the estimator. This missing-data phenomenon is modeled by a stochastic Bernoulli approach, which is employed to describe the UCL, and the relationship between \(y_{as}(k)\) and \(y(k)\) can be described by
$$ y_{as}(k)=\varpi(k)y(k), $$
(4)
where the stochastic variable \(\varpi(k)\) is Bernoulli-distributed white noise sequence specified by the following law:
$$ \operatorname{Pr}\bigl\{ \varpi(k)=1\bigr\} =\varpi,\qquad \operatorname{Pr}\bigl\{ \varpi(k)=0\bigr\} =1-\varpi, $$
(5)
where \(\varpi\in[0,1]\) is a known constant. Obviously, \(\varpi=0\) means that the information over the communication link (CL) is unavailable, while \(\varpi=1\) means that it is fully available. For the stochastic variable \(\varpi(k)\), it is easy to see that
$$ \mathcal{E}\bigl\{ \varpi(k)-\varpi\bigr\} =0,\qquad \mathcal{E}\bigl\{ \bigl\vert \varpi(k)-\varpi\bigr\vert ^{2}\bigr\} =\varpi(1-\varpi). $$
(6)
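A direct Monte Carlo check of (6): simulating a Bernoulli link with a hypothetical success probability \(\varpi=0.8\), the sample mean of \(\varpi(k)-\varpi\) is close to 0 and the sample second moment is close to \(\varpi(1-\varpi)=0.16\).

```python
# Sketch: empirical statistics of the Bernoulli link model (4)-(6).
# The value varpi = 0.8 is hypothetical.
import random

def simulate_link(varpi, N, rng):
    # one sample path of the Bernoulli white sequence varpi(k)
    return [1 if rng.random() < varpi else 0 for _ in range(N)]

varpi = 0.8
rng = random.Random(0)
seq = simulate_link(varpi, 100000, rng)
mean_dev = sum(w - varpi for w in seq) / len(seq)           # should be near 0
second_moment = sum((w - varpi) ** 2 for w in seq) / len(seq)  # near varpi*(1-varpi)
print(mean_dev, second_moment)
```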
For the T-S fuzzy SJNN (3), the state estimator is presented as follows:
$$ \begin{aligned}[b] \hat{x}(k+1)&=\sum_{i=1}^{q}h_{i} \bigl(\xi(k)\bigr)\bigl[A_{i}(r_{k})\hat{x}(k)+B_{i}(r_{k})f \bigl(\hat{x}(k)\bigr)+C_{i}(r_{k})g\bigl(\hat{x}\bigl(k- \tau(k)\bigr)\bigr) \\ &\quad {}+K_{i}(r_{k}) \bigl(y_{as}(k)-C_{1i}(r_{k}) \hat{x}(k)-C_{2i}(r_{k})\hat{x}\bigl(k-\tau(k)\bigr)\bigr) \bigr] \\ &=\sum_{i=1}^{q}h_{i}\bigl( \xi(k)\bigr)\bigl[A_{i}(r_{k})\hat{x}(k)+B_{i}(r_{k})f \bigl(\hat{x}(k)\bigr)+C_{i}(r_{k})g\bigl(\hat{x}\bigl(k- \tau(k)\bigr)\bigr) \\ &\quad {}+K_{i}(r_{k}) \bigl(\varpi(k)y(k)-C_{1i}(r_{k}) \hat{x}(k)-C_{2i}(r_{k})\hat{x}\bigl(k-\tau(k)\bigr)\bigr) \bigr], \end{aligned} $$
(7)
where \(\hat{x}(k)\) is the estimate of the state \(x(k)\) and for each \(r_{k}\in\mathbb{L}\), \(K_{i}(r_{k})\) is the estimator parameter to be determined.
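To make the structure of (7) concrete, the following sketch runs a scalar, single-mode, delay-free analogue of the network and its estimator over a Bernoulli link; all numerical values (the coefficients a, b, c, the gain K, and \(\varpi\)) are hypothetical, chosen only so that both the state and the estimate contract.

```python
# Sketch: scalar, single-mode, delay-free analogue of estimator (7) under the
# Bernoulli link (4).  All numbers (a, b, c, K, varpi) are hypothetical.
import math, random

a, b, c = 0.5, 0.2, 1.0      # toy network: x+ = a*x + b*tanh(x), y = c*x
K, varpi = 0.4, 0.8          # hypothetical estimator gain, link probability

def run(steps, seed):
    rng = random.Random(seed)
    x, xh = 1.0, 0.0          # true state and its estimate
    for _ in range(steps):
        y = c * x
        w = 1 if rng.random() < varpi else 0   # varpi(k) in (4)
        y_as = w * y                           # measurement actually received
        # scalar analogue of the innovation-correction structure of (7)
        xh = a * xh + b * math.tanh(xh) + K * (y_as - c * xh)
        x = a * x + b * math.tanh(x)
    return abs(x - xh)

print(run(50, 42))            # estimation error after 50 steps
```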
Let \(e(k)=(e_{1}(k), e_{2}(k), \ldots, e_{n}(k))^{\intercal}=x(k)-\hat{x}(k)\) be the estimation error, and let \(f(e(k))=f(x(k))-f(\hat{x}(k))\) and \(g(e(k))=g(x(k))-g(\hat{x}(k))\). For convenience, we write \(A_{i}(r_{k})=A_{i,l}\) for \(r_{k}=l\), and the other symbols are represented similarly. The estimation error is governed by
$$ \begin{aligned}[b] e(k+1)&=\sum_{i=1}^{q}h_{i} \bigl(\xi(k)\bigr)\sum_{j=1}^{q}h_{j} \bigl(\xi(k)\bigr) \bigl[(A_{i,l}-K_{i,l}C_{1j,l})e(k)-K_{i,l}C_{2j,l}e \bigl(k-\tau(k)\bigr)\\ &\quad {} +B_{i,l}f\bigl(e(k)\bigr) +C_{i,l}g\bigl(e\bigl(k-\tau(k)\bigr)\bigr)+(1- \varpi)K_{i,l}C_{1j,l}x(k)\\ &\quad {}-\bigl(\varpi(k)-\varpi \bigr)K_{i,l}C_{1j,l}x(k) +(1-\varpi)K_{i,l}C_{2j,l}x\bigl(k-\tau(k)\bigr)\\ &\quad {}- \bigl(\varpi(k)-\varpi\bigr)K_{i,l}C_{2j,l}x\bigl(k-\tau(k) \bigr)+D_{i,l}\omega(k)\bigr]. \end{aligned} $$
(8)
In the following, we introduce the vectors \(\eta(k)=[x^{\intercal}(k) \ e^{\intercal}(k)]^{\intercal}\), \(f(\eta(k))= [f^{\intercal}(x(k)) \ f^{\intercal}(e(k))]^{\intercal}\), and \(g(\eta(k-\tau(k)))=[g^{\intercal}(x(k-\tau(k))) \ g^{\intercal}(e(k-\tau(k)))]^{\intercal}\); then the augmented estimation-error system can be written as
$$ \begin{aligned}[b] \eta(k+1)&=\mathcal{A}_{i,l}\eta(k)+ \mathcal{A}_{1i,l}\eta\bigl(k-\tau(k)\bigr) +\mathcal{B}_{i,l}f \bigl(\eta(k)\bigr)+\mathcal{C}_{i,l}g\bigl(\eta\bigl(k-\tau(k)\bigr) \bigr) \\ &\quad {}+\mathcal{D}_{i,l}\omega(k)+\bigl(\varpi(k)-\varpi\bigr) \mathcal{M}K_{i,l}C_{1j,l}\mathcal{N}\eta(k)\\ &\quad {} +\bigl(\varpi(k)- \varpi\bigr)\mathcal{M}K_{i,l}C_{2j,l}\mathcal{N}\eta\bigl(k- \tau(k)\bigr), \end{aligned} $$
(9)
where
$$\begin{aligned} \mathcal{A}_{i,l}&=\sum_{i=1}^{q}h_{i}\bigl(\xi(k)\bigr)\sum_{j=1}^{q}h_{j}\bigl(\xi(k)\bigr) \begin{bmatrix} A_{i,l} & 0 \\ (1-\varpi)K_{i,l}C_{1j,l} & A_{i,l}-K_{i,l}C_{1j,l} \end{bmatrix}, \\ \mathcal{A}_{1i,l}&=\sum_{i=1}^{q}h_{i}\bigl(\xi(k)\bigr)\sum_{j=1}^{q}h_{j}\bigl(\xi(k)\bigr) \begin{bmatrix} 0 & 0 \\ (1-\varpi)K_{i,l}C_{2j,l} & -K_{i,l}C_{2j,l} \end{bmatrix}, \\ \mathcal{B}_{i,l}&=\sum_{i=1}^{q}h_{i}\bigl(\xi(k)\bigr) \begin{bmatrix} B_{i,l} & 0 \\ 0 & B_{i,l} \end{bmatrix},\qquad \mathcal{C}_{i,l}=\sum_{i=1}^{q}h_{i}\bigl(\xi(k)\bigr) \begin{bmatrix} C_{i,l} & 0 \\ 0 & C_{i,l} \end{bmatrix}, \\ \mathcal{D}_{i,l}&=\sum_{i=1}^{q}h_{i}\bigl(\xi(k)\bigr) \begin{bmatrix} D_{i,l} \\ D_{i,l} \end{bmatrix},\qquad \mathcal{M}=\begin{bmatrix} 0 \\ -I \end{bmatrix},\qquad \mathcal{N}=\begin{bmatrix} I & 0 \end{bmatrix}. \end{aligned}$$

Definition 2.1

[11, 15]

The augmented T-S fuzzy SJNN (9) with \(\omega(k)=0\) is said to be stochastically finite-time stable (SFTS) with respect to \((c_{1},c_{2},R,N)\), where \(R>0\) is a matrix and \(0<c_{1}<c_{2}\) are scalars, if
$$\begin{aligned}& \mathcal{E}\bigl\{ \eta^{\intercal}(k_{1})R\eta(k_{1})\bigr\} \leq c_{1} \quad \Rightarrow\quad \mathcal{E}\bigl\{ \eta^{\intercal}(k_{2})R\eta(k_{2}) \bigr\} < c_{2},\\& \quad \forall k_{1}\in\{-\tau_{2}, \ldots,-1,0\}, k_{2}\in\{1,2,\ldots,N\}. \end{aligned}$$

Definition 2.2

[11, 15]

The augmented T-S fuzzy SJNN (9) is said to be stochastically finite-time bounded (SFTB) with respect to \((c_{1},c_{2},R,N,d)\), where \(R>0\) is a matrix and \(0<c_{1}<c_{2}\) are scalars, if, for every disturbance \(\omega(k)\) satisfying (2),
$$\begin{aligned}& \mathcal{E}\bigl\{ \eta^{\intercal}(k_{1})R\eta(k_{1})\bigr\} \leq c_{1}\quad \Rightarrow\quad \mathcal{E}\bigl\{ \eta^{\intercal}(k_{2})R\eta(k_{2}) \bigr\} < c_{2},\\& \quad \forall k_{1}\in\{-\tau_{2}, \ldots,-1,0\}, k_{2}\in\{1,2,\ldots,N\}. \end{aligned}$$

Lemma 2.1

[43]

Let \(X=X^{\intercal}\), Y, and Z be real matrices of appropriate dimensions, and let L satisfy \(L^{\intercal}L\leq I\). Then the inequality
$$ X+YLZ+Z^{\intercal}L^{\intercal}Y^{\intercal}< 0 $$
if and only if there exists a positive scalar \(\varepsilon>0\) such that
$$ X+\varepsilon YY^{\intercal}+\varepsilon^{-1}Z^{\intercal}Z< 0. $$
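In the scalar case (\(|l|\le1\)) the lemma reduces to the elementary bound \(2ylz\le\varepsilon y^{2}+\varepsilon^{-1}z^{2}\). The sketch below checks the sufficiency direction numerically; all numbers are hypothetical.

```python
# Sketch: scalar instance of Lemma 2.1.  With |l| <= 1 (so l^T l <= 1),
# x + eps*y^2 + z^2/eps < 0 implies x + 2*y*l*z < 0, because
# 2*y*l*z <= eps*y^2 + z^2/eps.  The numbers below are hypothetical.
x, y, z, eps = -3.0, 1.0, 1.0, 1.0

assert x + eps * y * y + z * z / eps < 0          # hypothesis of the lemma

# check the conclusion over a grid of admissible l in [-1, 1]
ok = all(x + 2 * y * l * z < 0
         for l in [i / 100.0 - 1.0 for i in range(201)])
print(ok)   # prints True
```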

Remark 1

In [11], it is found that the neuron-state-based nonlinear functions \(f(\cdot)\) and \(g(\cdot)\) are related to \(\eta(k)\) and \(\eta(k-\tau(k))\), respectively, and cannot be handled directly by the MATLAB LMI toolbox. Noting that \(f(0)=0\) and \(g(0)=0\), one has
$$ \begin{aligned} &\bigl[f(\mu)-f(\nu)-U_{1}(\mu-\nu) \bigr]^{\intercal}\bigl[f(\mu)-f(\nu)-U_{2}(\mu-\nu)\bigr]\leq0, \\ &\bigl[g(\mu)-g(\nu)-V_{1}(\mu-\nu)\bigr]^{\intercal}\bigl[g( \mu)-g(\nu)-V_{2}(\mu-\nu)\bigr]\leq0, \end{aligned} $$
where \(U_{1}\), \(U_{2}\), \(V_{1}\), and \(V_{2}\) are real matrices with compatible dimensions. In this paper, \(f(\cdot)\) and \(g(\cdot)\) are assumed to satisfy the mode-dependent sector conditions
$$ \begin{aligned} &\bigl[f\bigl(\eta(k)\bigr)-U_{1l}\eta(k) \bigr]^{\intercal}\bigl[f\bigl(\eta(k)\bigr)-U_{2l}\eta(k) \bigr]\leq0, \\ &\bigl[g\bigl(\eta\bigl(k-\tau(k)\bigr)\bigr)-V_{1l}\eta\bigl(k- \tau(k)\bigr)\bigr]^{\intercal}\bigl[g\bigl(\eta\bigl(k-\tau(k)\bigr) \bigr)-V_{2l}\eta\bigl(k-\tau(k)\bigr)\bigr]\leq0, \end{aligned} $$
(10)
where \(U_{1l}\), \(U_{2l}\), \(V_{1l}\), and \(V_{2l}\) are real matrices with appropriate dimensions. It will be used in the proof of our results.
It is noted that \(\operatorname{tr}(U_{1l})\leq\operatorname{tr}(U_{2l})\) and \(\operatorname{tr}(V_{1l})\leq\operatorname{tr}(V_{2l})\). In such a case, one finds that \(f(\eta(k))\in[U_{1l},U_{2l}]\) and \(g(\eta(k-\tau(k)))\in[V_{1l},V_{2l}]\). With intermediate matrices \(U_{l}\in[U_{1l},U_{2l}]\) and \(V_{l}\in[V_{1l},V_{2l}]\), one has
$$\begin{gathered} \chi_{1}(k)= \textstyle\begin{cases} 1& \mbox{if } f(\eta(k))\in[U_{1l},U_{l}], \\ 0& \mbox{if } f(\eta(k))\in[U_{l},U_{2l}], \end{cases}\displaystyle \qquad \chi_{1}(k)+ \chi_{2}(k)=1, \\ \kappa_{1}(k)=\textstyle\begin{cases} 1& \mbox{if } g(\eta(k-\tau(k)))\in[V_{1l},V_{l}], \\ 0& \mbox{if } g(\eta(k-\tau(k)))\in[V_{l},V_{2l}], \end{cases}\displaystyle \qquad \kappa_{1}(k)+ \kappa_{2}(k)=1, \end{gathered} $$
where \(\chi_{1}(k)\) and \(\kappa_{1}(k)\) are two independent Bernoulli-distributed sequences satisfying
$$ \begin{aligned} &\operatorname{Pr}\bigl\{ \chi_{1}(k)=1\bigr\} = \chi_{1},\qquad \operatorname{Pr}\bigl\{ \chi_{1}(k)=0\bigr\} =1-\chi_{1}, \\ &\operatorname{Pr}\bigl\{ \kappa_{1}(k)=1\bigr\} =\kappa_{1}, \qquad \operatorname{Pr}\bigl\{ \kappa_{1}(k)=0\bigr\} =1- \kappa_{1}, \end{aligned} $$
which yields
$$ \begin{aligned} &\bigl[f_{1}\bigl(\eta(k)\bigr)-U_{1l} \eta(k)\bigr]^{\intercal}\bigl[f_{1}\bigl(\eta(k) \bigr)-U_{l}\bigl(\eta(k)\bigr)\bigr]\leq0, \\ &\bigl[f_{2}\bigl(\eta(k)\bigr)-U_{l}\eta(k) \bigr]^{\intercal}\bigl[f_{2}\bigl(\eta(k)\bigr)-U_{2l} \bigl(\eta(k)\bigr)\bigr]\leq0, \\ &\bigl[g_{1}\bigl(\eta\bigl(k-\tau(k)\bigr)\bigr)-V_{1l} \eta\bigl(k-\tau(k)\bigr)\bigr]^{\intercal}\bigl[g_{1}\bigl(\eta \bigl(k-\tau(k)\bigr)\bigr)-V_{l}\bigl(\eta\bigl(k-\tau(k)\bigr)\bigr) \bigr]\leq0, \\ &\bigl[g_{2}\bigl(\eta\bigl(k-\tau(k)\bigr)\bigr)-V_{l}\eta \bigl(k-\tau(k)\bigr)\bigr]^{\intercal}\bigl[g_{2}\bigl(\eta\bigl(k- \tau(k)\bigr)\bigr)-V_{2l}\bigl(\eta\bigl(k-\tau(k)\bigr)\bigr)\bigr] \leq0, \end{aligned} $$
(11)
where
$$ \begin{aligned} &f_{1}\bigl(\eta(k)\bigr)= \textstyle\begin{cases} f(\eta(k)),& \chi_{1}(k)=1, \\ U_{1l}\eta(k), &\chi_{1}(k)=0, \end{cases}\displaystyle \qquad f_{2}\bigl(\eta(k)\bigr)= \textstyle\begin{cases} f(\eta(k)),& \chi_{2}(k)=1, \\ U_{2l}\eta(k), &\chi_{2}(k)=0, \end{cases}\displaystyle \\ &g_{1}\bigl(\eta\bigl(k-\tau(k)\bigr)\bigr)= \textstyle\begin{cases} g(\eta(k-\tau(k))),& \kappa_{1}(k)=1, \\ V_{1l}\eta(k-\tau(k)), &\kappa_{1}(k)=0, \end{cases}\displaystyle \\ &g_{2}\bigl(\eta\bigl(k-\tau(k)\bigr)\bigr)= \textstyle\begin{cases} g(\eta(k-\tau(k))),& \kappa_{2}(k)=1, \\ V_{2l}\eta(k-\tau(k)), &\kappa_{2}(k)=0. \end{cases}\displaystyle \end{aligned} $$
Therefore, \(f(\eta(k))\) and \(g(\eta(k-\tau(k)))\) can be replaced by
$$ \begin{aligned} &f\bigl(\eta(k)\bigr)=\chi_{1}(k)f_{1} \bigl(\eta(k)\bigr)+\chi_{2}(k)f_{2}\bigl(\eta(k)\bigr), \\ &g\bigl(\eta\bigl(k-\tau(k)\bigr)\bigr)=\kappa_{1}(k)g_{1} \bigl(\eta\bigl(k-\tau(k)\bigr)\bigr)+\kappa_{2}(k)g_{2}\bigl( \eta\bigl(k-\tau(k)\bigr)\bigr). \end{aligned} $$
(12)
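A scalar sanity check of the decomposition (12): with hypothetical sector data \(u_{1}<u<u_{2}\) and \(f=\tanh\), choosing the indicator according to the sub-sector that \(f\) falls into reproduces \(f\) exactly.

```python
# Sketch: scalar version of the decomposition (12).  f = tanh lies in the
# sector [u1, u2]; the split point u is a hypothetical intermediate slope.
# chi1 indicates the sub-sector [u1, u], and chi2 = 1 - chi1 the sub-sector [u, u2].
import math

u1, u, u2 = 0.0, 0.5, 1.0        # hypothetical sector bounds, tanh(x)/x in (0, 1]
f = math.tanh

def decompose(x):
    ratio = f(x) / x if x != 0 else 1.0
    chi1 = 1 if ratio <= u else 0       # chi1(k) = 1 when f lies in [u1, u]
    chi2 = 1 - chi1
    f1 = f(x) if chi1 == 1 else u1 * x  # f1 stays in the lower sub-sector
    f2 = f(x) if chi2 == 1 else u2 * x  # f2 stays in the upper sub-sector
    return chi1 * f1 + chi2 * f2        # cf. (12): recovers f(x) exactly

for x in [-2.0, -0.3, 0.4, 3.0]:
    assert abs(decompose(x) - f(x)) < 1e-12
print("decomposition reproduces f")
```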

3 Main results

The following is the main result of this paper.

Theorem 3.1

For given scalars \(N>0\), \(\alpha>1\), \(c_{1}>0\), \(c_{2}>0\), and \(d>0\), the system (9) is SFTB with respect to \((c_{1},c_{2},R,N,d)\) if there exist symmetric matrices \(P_{l}>0\), \(Q_{s}>0\) (\(s=1,2,3\)), \(R_{n}>0\) (\(n=1,2\)), \(S_{l}>0\), appropriately dimensioned matrices \(H_{s}\) (\(s=1,2,3\)), and matrices \(X_{nl}>0\), \(Y_{nl}>0\) (\(n=1,2\)) such that, for any \(l\in\mathcal{L}\), the following LMIs hold:
$$ \begin{bmatrix} \Gamma_{1l} & * & * \\ \Gamma_{2l} & \Gamma_{3l} & * \\ \Gamma_{4l}^{\intercal} & 0 & \Gamma_{5l} \end{bmatrix}< 0, $$
(13)
$$\begin{aligned}& \psi_{1}c_{1}+\psi_{2}\rho+\lambda_{8}d < \lambda_{1}c_{2}\alpha^{-N}, \end{aligned}$$
(14)
where
$$\begin{aligned} \Gamma_{1l}&=e_{1}^{\intercal}(Q_{1}+Q_{2}+Q_{3})e_{1}-e_{3}^{\intercal}Q_{1}e_{3}-e_{2}^{\intercal}Q_{2}e_{2}-e_{4}^{\intercal}Q_{3}e_{4}+e_{1}^{\intercal}(\tau_{12}Q_{2}+\tau_{12}R_{1}+\tau_{2}R_{2})e_{1} \\ &\quad{}+\operatorname{He}\bigl(H_{1}(e_{1}-e_{2})\bigr)+\operatorname{He}\bigl(H_{2}(e_{3}-e_{2})\bigr)+\operatorname{He}\bigl(H_{3}(e_{2}-e_{4})\bigr)-(\alpha-1)e_{1}^{\intercal}P_{l}e_{1}-e_{9}^{\intercal}S_{l}e_{9} \\ &\quad{}-(e_{5}-U_{1l}e_{1})^{\intercal}X_{1l}(e_{5}-U_{l}e_{1})-(e_{6}-U_{l}e_{1})^{\intercal}X_{2l}(e_{6}-U_{2l}e_{1}) \\ &\quad{}-(e_{7}-V_{1l}e_{2})^{\intercal}Y_{1l}(e_{7}-V_{l}e_{2})-(e_{8}-V_{l}e_{2})^{\intercal}Y_{2l}(e_{8}-V_{2l}e_{2}), \\ \Gamma_{2l}&=\begin{bmatrix} \Gamma_{2l}^{(1)} & \Gamma_{2l}^{(2)} \\ 0 & \Gamma_{2l}^{(3)} \end{bmatrix},\qquad \Gamma_{3l}=\operatorname{diag}\bigl\{\underbrace{-\mathcal{P}_{l}^{-1},-\mathcal{P}_{l}^{-1},\ldots,-\mathcal{P}_{l}^{-1}}_{6}\bigr\}, \\ \Gamma_{4l}&=\bigl[\sqrt{\tau_{2}}H_{1}\ \ \sqrt{\tau_{12}}H_{2}\ \ \sqrt{\tau_{12}}H_{3}\bigr],\qquad \Gamma_{5l}=\operatorname{diag}\bigl\{-R_{2},-(R_{1}+R_{2}),-R_{1}\bigr\}, \\ \Gamma_{2l}^{(1)}&=\begin{bmatrix} \sqrt{3}\mathcal{A}_{i,l} & \sqrt{3}\mathcal{A}_{1i,l} & 0 & 0 \\ \sqrt{\varpi(1-\varpi)}\mathcal{M}K_{i,l}C_{1j,l}\mathcal{N} & 0 & 0 & 0 \\ 0 & \sqrt{\varpi(1-\varpi)}\mathcal{M}K_{i,l}C_{2j,l}\mathcal{N} & 0 & 0 \end{bmatrix}, \\ \Gamma_{2l}^{(2)}&=\begin{bmatrix} 0 & 0 & 0 & 0 & \sqrt{3}\mathcal{D}_{i,l} \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix},\qquad \Gamma_{2l}^{(3)}=\begin{bmatrix} \sqrt{2\chi_{1}}\mathcal{B}_{i,l} & 0 & 0 & 0 & 0 \\ 0 & \sqrt{2\chi_{2}}\mathcal{B}_{i,l} & 0 & 0 & 0 \\ 0 & 0 & \sqrt{2\kappa_{1}}\mathcal{C}_{i,l} & 0 & 0 \\ 0 & 0 & 0 & \sqrt{2\kappa_{2}}\mathcal{C}_{i,l} & 0 \end{bmatrix}, \\ \mathcal{P}_{l}&=\sum_{m=1}^{s}\pi_{lm}P_{m},\qquad \tau_{12}=\tau_{2}-\tau_{1}, \\ \psi_{1}&=\lambda_{2}+\tau_{1}\lambda_{3}+\tau_{2}\lambda_{4}+\tau_{2}\lambda_{5}+\tfrac{1}{2}\tau_{12}(\tau_{1}+\tau_{2}-1)\lambda_{4},\qquad \psi_{2}=\tfrac{1}{2}\bigl[\tau_{12}(\tau_{1}+\tau_{2}-1)\lambda_{6}+\tau_{2}(\tau_{2}-1)\lambda_{7}\bigr], \\ \lambda_{1}&=\min_{l\in\mathcal{L}}\lambda_{\min}(\overline{P}_{l}),\qquad \lambda_{2}=\max_{l\in\mathcal{L}}\lambda_{\max}(\overline{P}_{l}),\qquad \lambda_{3}=\lambda_{\max}(\overline{Q}_{1}),\qquad \lambda_{4}=\lambda_{\max}(\overline{Q}_{2}), \\ \lambda_{5}&=\lambda_{\max}(\overline{Q}_{3}),\qquad \lambda_{6}=\lambda_{\max}(\overline{R}_{1}),\qquad \lambda_{7}=\lambda_{\max}(\overline{R}_{2}),\qquad \lambda_{8}=\max_{l\in\mathcal{L}}\lambda_{\max}(S_{l}), \\ \overline{P}_{l}&=R^{-\frac{1}{2}}P_{l}R^{-\frac{1}{2}},\qquad \overline{Q}_{s}=R^{-\frac{1}{2}}Q_{s}R^{-\frac{1}{2}}\ (s=1,2,3),\qquad \overline{R}_{n}=R^{-\frac{1}{2}}R_{n}R^{-\frac{1}{2}}\ (n=1,2). \end{aligned}$$

Proof

Let us construct a Lyapunov functional of the form
$$ V\bigl(\eta(k),r_{k}\bigr)=\sum_{n=1}^{3}V_{n}\bigl(\eta(k),r_{k}\bigr), $$
(15)
where
$$\begin{aligned}& V_{1}\bigl(\eta(k),r_{k}\bigr)=\eta^{\intercal}(k)P_{l}\eta(k), \end{aligned}$$
(16)
$$\begin{aligned}& V_{2}\bigl(\eta(k),r_{k}\bigr)=\sum _{s=k-\tau_{1}}^{k-1}\eta^{\intercal}(s)Q_{1} \eta(s) +\sum_{s=k-\tau(k)}^{k-1}\eta^{\intercal}(s)Q_{2} \eta(s) +\sum_{s=k-\tau_{2}}^{k-1}\eta^{\intercal}(s)Q_{3} \eta(s), \end{aligned}$$
(17)
$$\begin{aligned}& V_{3}\bigl(\eta(k),r_{k}\bigr)=\sum _{t=-\tau_{2}+1}^{-\tau_{1}}\sum_{s=k+t}^{k-1} \eta^{\intercal}(s)Q_{2}\eta(s) +\sum_{t=-\tau_{2}}^{-\tau_{1}-1} \sum_{s=k+t}^{k-1}\varsigma^{\intercal}(s)R_{1} \varsigma(s) \\& \hphantom{V_{3}\bigl(\eta(k),r_{k}\bigr)=}{} +\sum_{t=-\tau_{2}}^{-1}\sum _{s=k+t}^{k-1}\varsigma^{\intercal}(s)R_{2} \varsigma(s) , \end{aligned}$$
(18)
with \(\varsigma(k)=\eta(k+1)-\eta(k)\).
Let \(\mathcal{E}\{\Delta V(\eta(k),r_{k})\}=\mathcal{E}\{ V(\eta(k+1),r_{k+1}=j)\mid \eta(k),r_{k}=i\}-V(\eta(k),r_{k}=i)\). Then we have
$$ \begin{aligned}[b] \mathcal{E}\bigl\{ \Delta V_{1}\bigl( \eta(k),r_{k}\bigr)\bigr\} &= \mathcal{E}\bigl\{ \eta^{\intercal}(k+1) \mathcal{P}_{l}\eta(k+1) -\eta^{\intercal}(k)P_{l}\eta(k) \bigr\} \\ &=\varphi^{\intercal}(k)\Biggl\{ [\mathcal{A}_{i,l}e_{1}+ \mathcal{A}_{1i,l}e_{2} +\mathcal{D}_{i,l}e_{9}]^{\intercal} \mathcal{P}_{l}[\mathcal{A}_{i,l}e_{1}+ \mathcal{A}_{1i,l}e_{2} +\mathcal{D}_{i,l}e_{9}] \\ &\quad {}+2[\mathcal{A}_{i,l}e_{1}+\mathcal{A}_{1i,l}e_{2}+ \mathcal{D}_{i,l}e_{9}]^{\intercal}\mathcal{P}_{l} \\ &\quad {}\times \Biggl(\mathcal{B}_{i,l}\sum _{n=1}^{2}\chi_{n}e_{4+n} + \mathcal{C}_{i,l}\sum_{s=1}^{2} \kappa_{s}e_{6+s} \Biggr) \\ &\quad {}+\sum_{n=1}^{2} \chi_{n}e_{4+n}^{\intercal}\mathcal{B}_{i,l}^{\intercal} \mathcal{P}_{l}\mathcal{B}_{i,l}e_{4+n} +\sum _{s=1}^{2}\kappa_{s}e_{6+s}^{\intercal} \mathcal{C}_{i,l}^{\intercal}\mathcal{P}_{l} \mathcal{C}_{i,l}e_{6+s} \\ &\quad {}+\varpi(1-\varpi)e_{1}^{\intercal}(\mathcal{M}K_{i,l}C_{1j,l} \mathcal{N})^{\intercal}\mathcal{P}_{l}\mathcal{M}K_{i,l}C_{1j,l} \mathcal{N}e_{1} \\ &\quad {}+\varpi(1-\varpi)e_{2}^{\intercal}(\mathcal{M}K_{i,l}C_{2j,l} \mathcal{N})^{\intercal}\mathcal{P}_{l}\mathcal{M}K_{i,l}C_{2j,l} \mathcal{N}e_{2}\Biggr\} \varphi(k), \end{aligned} $$
(19)
where \(\varphi^{\intercal}(k)=[\eta^{\intercal}(k) \ \eta^{\intercal}(k-\tau(k)) \ \eta^{\intercal}(k-\tau_{1}) \ \eta^{\intercal}(k-\tau_{2}) \ f_{1}^{\intercal}(\eta(k)) \ f_{2}^{\intercal}(\eta(k)) \ g_{1}^{\intercal}(\eta(k-\tau(k))) \ g_{2}^{\intercal}(\eta(k-\tau(k))) \ \omega^{\intercal}(k)]\).
Using Lemma 2.1
$$\begin{aligned} &2\varphi^{\intercal}(k)[\mathcal{A}_{i,l}e_{1}+ \mathcal{A}_{1i,l}e_{2} +\mathcal{D}_{i,l}e_{9}]^{\intercal} \mathcal{P}_{l}\mathcal{B}_{i,l}\sum _{n=1}^{2}\chi_{n}e_{4+n}\varphi(k) \\ &\quad \leq\varphi^{\intercal}(k)[\mathcal{A}_{i,l}e_{1}+ \mathcal{A}_{1i,l}e_{2}+\mathcal{D}_{i,l}e_{9}]^{\intercal} \mathcal{P}_{l}[\mathcal{A}_{i,l}e_{1}+ \mathcal{A}_{1i,l}e_{2}+\mathcal{D}_{i,l}e_{9}] \varphi(k) \\ &\qquad {}+\varphi^{\intercal}(k)\sum_{n=1}^{2} \chi_{n}e_{4+n}^{\intercal}\mathcal{B}_{i,l}^{\intercal} \mathcal{P}_{l}\mathcal{B}_{i,l}e_{4+n}\varphi(k), \end{aligned}$$
(20)
$$\begin{aligned} &2\varphi^{\intercal}(k)[\mathcal{A}_{i,l}e_{1}+ \mathcal{A}_{1i,l}e_{2} +\mathcal{D}_{i,l}e_{9}]^{\intercal} \mathcal{P}_{l}\mathcal{C}_{i,l}\sum _{s=1}^{2}\kappa_{s}e_{6+s} \varphi(k) \\ &\quad \leq\varphi^{\intercal}(k)[\mathcal{A}_{i,l}e_{1}+ \mathcal{A}_{1i,l}e_{2}+\mathcal{D}_{i,l}e_{9}]^{\intercal} \mathcal{P}_{l}[\mathcal{A}_{i,l}e_{1}+ \mathcal{A}_{1i,l}e_{2}+\mathcal{D}_{i,l}e_{9}] \varphi(k) \\ &\qquad {}+\varphi^{\intercal}(k)\sum_{s=1}^{2} \kappa_{s}e_{6+s}^{\intercal}\mathcal{C}_{i,l}^{\intercal} \mathcal{P}_{l}\mathcal{C}_{i,l}e_{6+s}\varphi(k), \end{aligned}$$
(21)
$$\begin{aligned} &\mathcal{E}\bigl\{ \Delta V_{2}\bigl(\eta(k),r_{k}\bigr) \bigr\} =\mathcal{E} \Biggl\{ \sum_{s=k-\tau_{1}+1}^{k} \eta^{\intercal}(s)Q_{1}\eta(s) -\sum_{s=k-\tau_{1}}^{k-1} \eta^{\intercal}(s)Q_{1}\eta(s) \\ &\hphantom{\mathcal{E}\{\Delta V_{2}(\eta(k),r_{k})\} =}{}+\sum_{s=k-\tau(k)+1}^{k} \eta^{\intercal}(s)Q_{2}\eta(s) -\sum_{s=k-\tau(k)}^{k-1} \eta^{\intercal}(s)Q_{2}\eta(s) \\ &\hphantom{\mathcal{E}\{\Delta V_{2}(\eta(k),r_{k})\} =}{}+\sum_{s=k-\tau_{2}+1}^{k} \eta^{\intercal}(s)Q_{3}\eta(s) -\sum_{s=k-\tau_{2}}^{k-1} \eta^{\intercal}(s)Q_{3}\eta(s) \Biggr\} \\ &\hphantom{\mathcal{E}\{\Delta V_{2}(\eta(k),r_{k})\} }\leq\varphi^{\intercal}(k)\bigl\{ e_{1}^{\intercal}(Q_{1}+Q_{2}+Q_{3})e_{1} -e_{3}^{\intercal}Q_{1}e_{3} -e_{2}^{\intercal}Q_{2}e_{2} -e_{4}^{\intercal}Q_{3}e_{4} \bigr\} \varphi(k) \\ &\hphantom{\mathcal{E}\{\Delta V_{2}(\eta(k),r_{k})\} =}{}+\sum_{s=k-\tau_{2}+1}^{k-\tau_{1}} \eta^{\intercal}(s)Q_{2}\eta(s), \end{aligned}$$
(22)
$$\begin{aligned} &\mathcal{E}\bigl\{ \Delta V_{3}\bigl(\eta(k),r_{k}\bigr) \bigr\} =\mathcal{E} \Biggl\{ \sum_{t=-\tau_{2}+1}^{-\tau_{1}} \Biggl[\sum_{s=k+t+1}^{k}\eta^{\intercal}(s)Q_{2} \eta(s) -\sum_{s=k+t}^{k-1}\eta^{\intercal}(s)Q_{2} \eta(s) \Biggr] \\ &\hphantom{\mathcal{E}\{\Delta V_{3}(\eta(k),r_{k})\} =}{}+\sum_{t=-\tau_{2}}^{-\tau_{1}-1} \Biggl[ \sum_{s=k+t+1}^{k}\varsigma^{\intercal}(s)R_{1} \varsigma(s) -\sum_{s=k+t}^{k-1} \varsigma^{\intercal}(s)R_{1}\varsigma(s) \Biggr] \\ &\hphantom{\mathcal{E}\{\Delta V_{3}(\eta(k),r_{k})\} =}{}+\sum_{t=-\tau_{2}}^{-1} \Biggl[ \sum_{s=k+t+1}^{k}\varsigma^{\intercal}(s)R_{2} \varsigma(s) -\sum_{s=k+t}^{k-1} \varsigma^{\intercal}(s)R_{2}\varsigma(s) \Biggr] \Biggr\} \\ &\hphantom{\mathcal{E}\{\Delta V_{3}(\eta(k),r_{k})\} }=\varphi^{\intercal}(k)e_{1}^{\intercal}( \tau_{12}Q_{2}+\tau_{12}R_{1}+ \tau_{2}R_{2})e_{1}\varphi(k) -\sum _{s=k-\tau_{2}+1}^{k-\tau_{1}}\eta^{\intercal}(s)Q_{2}\eta(s) \\ &\hphantom{\mathcal{E}\{\Delta V_{3}(\eta(k),r_{k})\} =}{}-\sum_{s=k-\tau_{2}}^{k-\tau_{1}-1} \varsigma^{\intercal}(s)R_{1}\varsigma(s) -\sum _{s=k-\tau_{2}}^{k-1}\varsigma^{\intercal}(s)R_{2} \varsigma(s). \end{aligned}$$
(23)
On the other hand, for any appropriately dimensioned matrices \(H_{s}\) (\(s=1,2,3\)), the following inequalities hold:
$$\begin{aligned}& 0=2\varphi^{\intercal}(k)H_{1} \Biggl[(e_{1}-e_{2}) \varphi(k)-\sum_{s=k-\tau(k)}^{k-1}\varsigma(s) \Biggr], \end{aligned}$$
(24)
$$\begin{aligned}& 0=2\varphi^{\intercal}(k)H_{2} \Biggl[(e_{3}-e_{2}) \varphi(k)-\sum_{s=k-\tau(k)}^{k-\tau_{1}-1}\varsigma(s) \Biggr], \end{aligned}$$
(25)
$$\begin{aligned}& 0=2\varphi^{\intercal}(k)H_{3} \Biggl[(e_{2}-e_{4}) \varphi(k)-\sum_{s=k-\tau_{2}}^{k-\tau(k)-1}\varsigma(s) \Biggr] . \end{aligned}$$
(26)
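The zero equalities (24)-(26) rest on the telescoping identity \(\eta(k)-\eta(k-\tau)=\sum_{s=k-\tau}^{k-1}\varsigma(s)\) with \(\varsigma(s)=\eta(s+1)-\eta(s)\). A quick numerical check on an arbitrary (hypothetical) scalar sequence:

```python
# Sketch: the telescoping identity behind (24)-(26):
# eta(k) - eta(k - tau) equals the sum of varsigma(s) = eta(s+1) - eta(s)
# over s = k - tau, ..., k - 1.  The sequence below is hypothetical.
eta = [0.3, -1.2, 0.7, 2.5, -0.4, 1.1, 0.0, 0.9]
varsigma = [eta[s + 1] - eta[s] for s in range(len(eta) - 1)]

k, tau = 6, 4
lhs = eta[k] - eta[k - tau]
rhs = sum(varsigma[s] for s in range(k - tau, k))
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-12
```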
From (10), for any matrix variables \(X_{nl}>0\) and \(Y_{nl}>0\) (\(n=1,2\)), one has
$$\begin{aligned}& 0\leq-\bigl(f_{1}\bigl(\eta(k)\bigr)-U_{1l}\eta(k) \bigr)^{\intercal}X_{1l}\bigl(f_{1}\bigl(\eta(k) \bigr)-U_{l}\eta(k)\bigr), \\& 0\leq-\bigl(f_{2}\bigl(\eta(k)\bigr)-U_{l}\eta(k) \bigr)^{\intercal}X_{2l}\bigl(f_{2}\bigl(\eta(k) \bigr)-U_{2l}\eta(k)\bigr), \\& 0\leq-\bigl(g_{1}\bigl(\eta\bigl(k-\tau(k)\bigr)\bigr)-V_{1l} \eta\bigl(k-\tau(k)\bigr)\bigr)^{\intercal}Y_{1l}\bigl(g_{1} \bigl(\eta\bigl(k-\tau(k)\bigr)\bigr)-V_{l}\eta\bigl(k-\tau(k)\bigr) \bigr), \\& 0\leq-\bigl(g_{2}\bigl(\eta\bigl(k-\tau(k)\bigr)\bigr)-V_{l} \eta\bigl(k-\tau(k)\bigr)\bigr)^{\intercal}Y_{2l}\bigl(g_{2} \bigl(\eta\bigl(k-\tau(k)\bigr)\bigr)-V_{2l}\eta\bigl(k-\tau(k)\bigr) \bigr), \end{aligned}$$
which can be rewritten as
$$\begin{aligned}& 0\leq-\varphi^{\intercal}(k) (e_{5}-U_{1l}e_{1})^{\intercal}X_{1l}(e_{5}-U_{l}e_{1}) \varphi(k), \end{aligned}$$
(27)
$$\begin{aligned}& 0\leq-\varphi^{\intercal}(k) (e_{6}-U_{l}e_{1})^{\intercal}X_{2l}(e_{6}-U_{2l}e_{1}) \varphi(k), \end{aligned}$$
(28)
$$\begin{aligned}& 0\leq-\varphi^{\intercal}(k) (e_{7}-V_{1l}e_{2})^{\intercal}Y_{1l}(e_{7}-V_{l}e_{2}) \varphi(k), \end{aligned}$$
(29)
$$\begin{aligned}& 0\leq-\varphi^{\intercal}(k) (e_{8}-V_{l}e_{2})^{\intercal}Y_{2l}(e_{8}-V_{2l}e_{2}) \varphi(k) . \end{aligned}$$
(30)
Combining (15)–(30), one has
$$ \begin{aligned}[b] &\mathcal{E}\bigl\{ \Delta V\bigl(\eta(k),r_{k} \bigr)\bigr\} \\ &\quad =\mathcal{E} \bigl\{ \varphi^{\intercal}(k) (\Gamma_{1l}+ \Gamma_{0i,l})\varphi(k) \bigr\} \\ &\qquad {} -\sum_{s=k-\tau(k)}^{k-1}\bigl\{ H_{1}^{\intercal}\varphi(k)+R_{2}\varsigma(s)\bigr\} ^{\intercal} R_{2}^{-1}\bigl\{ H_{1}^{\intercal} \varphi(k)+R_{2}\varsigma(s)\bigr\} \\ &\qquad{}-\sum_{s=k-\tau(k)}^{k-\tau_{1}-1}\bigl\{ H_{3}^{\intercal}\varphi(k)+R_{1}\varsigma(s)\bigr\} ^{\intercal} R_{1}^{-1}\bigl\{ H_{3}^{\intercal} \varphi(k)+R_{1}\varsigma(s)\bigr\} \\ &\qquad{}-\sum_{s=k-\tau_{2}}^{k-\tau(k)-1}\bigl\{ H_{2}^{\intercal}\varphi(k)+(R_{1}+R_{2}) \varsigma(s)\bigr\} ^{\intercal} (R_{1}+R_{2})^{-1} \bigl\{ H_{2}^{\intercal}\varphi(k)+(R_{1}+R_{2}) \varsigma(s)\bigr\} \\ &\quad \leq\mathcal{E} \bigl\{ \varphi^{\intercal}(k) (\Gamma_{1l}+ \Gamma_{0i,l})\varphi(k) \bigr\} , \end{aligned} $$
(31)
where
$$ \begin{aligned} \Gamma_{0i,l}&=3[\mathcal{A}_{i,l}e_{1}+ \mathcal{A}_{1i,l}e_{2} +\mathcal{D}_{i,l}e_{9}]^{\intercal} \mathcal{P}_{l}[\mathcal{A}_{i,l}e_{1}+ \mathcal{A}_{1i,l}e_{2} +\mathcal{D}_{i,l}e_{9}] \\ &\quad {}+\varpi(1-\varpi)e_{1}^{\intercal}(\mathcal{M}K_{i,l}C_{1j,l} \mathcal{N})^{\intercal}\mathcal{P}_{l}\mathcal{M}K_{i,l}C_{1j,l} \mathcal{N}e_{1} \\ &\quad {}+\varpi(1-\varpi)e_{2}^{\intercal}(\mathcal{M}K_{i,l}C_{2j,l} \mathcal{N})^{\intercal}\mathcal{P}_{l}\mathcal{M}K_{i,l}C_{2j,l} \mathcal{N}e_{2} \\ &\quad {}+\tau_{2}H_{1}R_{2}^{-1}H_{1}^{\intercal} +\tau_{12}H_{2}(R_{1}+R_{2})^{-1}H_{2}^{\intercal} +\tau_{12}H_{3}R_{1}^{-1}H_{3}^{\intercal}. \end{aligned} $$
By the Schur complement, the LMI (13) guarantees that \(\Gamma_{1l}+\Gamma_{0i,l}<0\). Hence, for any \(k\geq0\), we obtain
$$ \begin{aligned}[b] \mathcal{E}\bigl\{ \Delta V\bigl(\eta(k),r_{k} \bigr)\bigr\} &=\mathcal{E}\bigl\{ V\bigl(\eta(k+1),r_{k+1}=j\mid \eta(k),r_{k}=i\bigr)\bigr\} -\mathcal{E}\bigl\{ V\bigl( \eta(k),r_{k}=i\bigr)\bigr\} \\ &\leq (\alpha-1)\eta^{\intercal}(k)P_{l}\eta(k) + \omega^{\intercal}(k)S_{l}\omega(k) \\ &\leq (\alpha-1)V\bigl(\eta(k),r_{k}\bigr)+\omega^{\intercal}(k)S_{l} \omega(k). \end{aligned} $$
(32)
Taking the mathematical expectation on both sides of inequality (32) and noting that \(\alpha>1\), it can be shown from (2) and (32) that
$$ \begin{aligned}[b] \mathcal{E}\bigl\{ V\bigl(\eta(k+1),r_{k+1} \bigr)\bigr\} &< \alpha V\bigl(\eta(k),r_{k}\bigr)+\lambda_{\max}(S_{l}) \mathcal{E}\bigl\{ \omega^{\intercal}(k)\omega(k)\bigr\} \\ &< \cdots< \alpha^{k}\mathcal{E}\bigl\{ V\bigl(\eta(0),r_{0} \bigr)\bigr\} +\lambda_{\max}(S_{l})\mathcal{E} \Biggl\{ \sum _{s=0}^{k-1}\alpha^{k-s-1} \omega^{\intercal}(s)\omega(s) \Biggr\} \\ &\leq\alpha^{k}\mathcal{E}\bigl\{ V\bigl(\eta(0),r_{0} \bigr)\bigr\} +\lambda_{\max}(S_{l})\alpha^{k}d. \end{aligned} $$
(33)
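The iteration in (33) is the standard comparison argument: if \(v(k+1)\le\alpha v(k)+c\), then unrolling gives \(v(k)\le\alpha^{k}v(0)+c\sum_{s=0}^{k-1}\alpha^{k-s-1}\). A numerical check with hypothetical \(\alpha\), \(c\), and \(v(0)\):

```python
# Sketch: the comparison argument behind (33).  If v(k+1) <= alpha*v(k) + c
# with alpha > 1, unrolling yields v(k) <= alpha**k * v0 + c * sum_{s<k} alpha**(k-s-1).
# alpha, c, v0 are hypothetical numbers; the recursion is run with equality
# (the worst case), so the closed form should match exactly.
alpha, c, v0, N = 1.1, 0.5, 2.0, 20

v = v0
for k in range(N):
    v = alpha * v + c

closed_form = alpha ** N * v0 + c * sum(alpha ** (N - s - 1) for s in range(N))
print(v, closed_form)
assert abs(v - closed_form) < 1e-9
```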
In view of (15), letting \(\overline{P}_{l}=R^{-\frac{1}{2}}P_{l}R^{-\frac{1}{2}}\), \(\overline{Q}_{s}=R^{-\frac{1}{2}}Q_{s}R^{-\frac{1}{2}}\) (\(s=1,2,3\)), and \(\overline{R}_{n}=R^{-\frac{1}{2}}R_{n}R^{-\frac{1}{2}}\) (\(n=1,2\)), we obtain
$$ \begin{aligned}[b] \mathcal{E}\bigl\{ V\bigl(\eta(0),r_{0}\bigr)\bigr\} &=\eta^{\intercal}(0)P_{l}\eta(0) +\sum _{s=-\tau_{1}}^{-1}\eta^{\intercal}(s)Q_{1}\eta(s) +\sum_{s=-\tau(0)}^{-1}\eta^{\intercal}(s)Q_{2} \eta(s) \\ &\quad {}+\sum_{s=-\tau_{2}}^{-1} \eta^{\intercal}(s)Q_{3}\eta(s) +\sum_{t=-\tau_{2}+1}^{-\tau_{1}} \sum_{s=t}^{-1}\eta^{\intercal}(s)Q_{2} \eta(s) \\ &\quad {}+\sum_{t=-\tau_{2}}^{-\tau_{1}-1}\sum _{s=t}^{-1}\varsigma^{\intercal}(s)R_{1} \varsigma(s) +\sum_{t=-\tau_{2}}^{-1}\sum _{s=t}^{-1}\varsigma^{\intercal}(s)R_{2} \varsigma(s) \\ &\leq \bigl[\lambda_{\max}(\overline{P}_{l})+\tau_{1} \lambda_{\max}(\overline{Q}_{1}) +\tau_{2} \lambda_{\max}(\overline{Q}_{2}) +\tau_{2} \lambda_{\max}(\overline{Q}_{3}) \\ &\quad {}+\tfrac{1}{2}\tau_{12}(\tau_{1}+ \tau_{2}-1)\lambda_{\max}(\overline{Q}_{2}) \bigr]c_{1} \\ &\quad {}+\tfrac{1}{2}\bigl[\tau_{12}(\tau_{1}+ \tau_{2}-1)\lambda_{\max}(\overline{R}_{1}) + \tau_{2}(\tau_{2}-1)\lambda_{\max}( \overline{R}_{2})\bigr]\rho. \end{aligned} $$
(34)
On the other hand, for all \(l\in\mathcal{L}\), it can be seen from (15) that
$$ \begin{aligned} \mathcal{E}\bigl\{ V\bigl(\eta(k),r_{k}\bigr)\bigr\} \geq\mathcal{E}\bigl\{ \eta^{\intercal}(k)P_{l}\eta(k)\bigr\} \geq \lambda_{\min}(\overline{P}_{l})\eta^{\intercal}(k)R\eta(k). \end{aligned} $$
(35)
From (34) and (35), we get
$$ \begin{aligned} \eta^{\intercal}(k)R\eta(k)< \frac {(\psi_{1}c_{1}+\psi_{2}\rho+\lambda_{\max}(S_{l})d)\alpha^{N}}{ \lambda_{\min}(\overline{P}_{l})} \end{aligned} . $$
(36)

Noting condition (14), it can be derived from (14) and (36) that \(\eta^{\intercal}(k)R\eta(k)< c_{2}\) for all \(k\in\{1,2,\ldots,N\}\). □

Remark 2

Compared with [6–9], more information on the bounds of the neuron activation functions \(f(\eta(k))\) and \(g(\eta(k-\tau(k)))\) is exploited when estimating the difference of the Lyapunov functional, which yields less conservative results.

Remark 3

In this brief contribution, the unreliable communication links (UCLs) are introduced to account for limited communication resources, whereas the links between the network and its estimator were assumed to be perfect in much of the existing literature. Hence, studying SJNNs subject to UCLs is both reasonable and of relatively wide applicability.
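As is standard for UCL models, the link can be described by a Bernoulli white sequence \(\varpi(k)\in\{0,1\}\) with \(\mathcal{E}\{\varpi(k)\}=\varpi\): the estimator receives \(\varpi(k)y(k)\) rather than \(y(k)\). A minimal simulation sketch (the arrival probability and signal below are illustrative values, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
varpi_bar = 0.8   # assumed packet-arrival probability (illustrative)
N = 10000         # number of time steps

# Bernoulli white sequence governing the link: 1 = packet received, 0 = dropped
varpi = rng.binomial(1, varpi_bar, size=N)

y = rng.standard_normal(N)   # illustrative measurement sequence
y_received = varpi * y       # what actually reaches the estimator

# Empirical arrival rate should be close to varpi_bar
print(abs(varpi.mean() - varpi_bar))
```

Each received sample is either the true measurement or zero, which is exactly the dropout behavior the stochastic variables in the filter error dynamics capture.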

Remark 4

Note that the sensor failures are mode-dependent, reflecting that the signal may vary between the actuator and the controller; this description is extended here to the filtering of T-S fuzzy stochastic jumping neural networks subject to UCLs.

Theorem 3.2

For given scalars \(N>0\), \(\alpha>1\), \(c_{1}>0\), \(c_{2}>0\), and \(d>0\), the system (4) is SFTB if there exist symmetric matrices \(P_{l}=\operatorname{diag}\{P_{11l},P_{22l}\}>0\), \(Q_{s}>0\) (\(s=1,2,3\)), \(R_{n}>0\) (\(n=1,2\)), \(S_{l}>0\), and appropriately dimensioned matrices \(H_{s}\) (\(s=1,2,3\)), \(X_{nl}>0\), \(Y_{nl}>0\) (\(n=1,2\)) such that, for any \(l\in\mathcal{L}\), the following LMIs hold:
$$\begin{aligned}& \Gamma_{ij,l}+\Gamma_{ji,l}< 0 \quad (i< j), \end{aligned}$$
(37)
$$\begin{aligned}& \Gamma_{ii,l}< 0, \end{aligned}$$
(38)
$$\begin{aligned}& \overline{\psi}_{1}c_{1}+\psi_{2}\rho+ \lambda_{8}d < c_{2}\alpha^{-N}, \end{aligned}$$
(39)
where
$$\begin{aligned}& \Gamma_{ij,l}= \begin{bmatrix} \Gamma_{1l} & \Gamma_{2ij,l} & \Gamma_{3l} \\ * & \Gamma_{4l} & 0 \\ * & * & \Gamma_{5l} \end{bmatrix},\qquad \Gamma_{2ij,l}=\Gamma_{2ij,l}^{(0)}\times \begin{bmatrix} \Gamma_{2ij,l}^{(1)} & \Gamma_{2ij,l}^{(2)} \\ 0 & \Gamma_{2ij,l}^{(3)} \end{bmatrix}, \\& \Gamma_{2ij,l}^{(0)}= \begin{bmatrix} \sqrt{\pi_{l1}}I_{2n} & \sqrt{\pi_{l2}}I_{2n} & \cdots & \sqrt{\pi_{lN}}I_{2n} \\ \sqrt{\pi_{l1}}I_{2n} & \sqrt{\pi_{l2}}I_{2n} & \cdots & \sqrt{\pi_{lN}}I_{2n} \\ \sqrt{\pi_{l1}}I_{2n} & \sqrt{\pi_{l2}}I_{2n} & \cdots & \sqrt{\pi_{lN}}I_{2n} \end{bmatrix}, \\& \Gamma_{2ij,l}^{(1)}= \begin{bmatrix} \sqrt{3}\,\overline{A}_{ij,l} & \sqrt{3}\,\overline{A}_{1ij,l} & 0 & 0 \\ \sqrt{\varpi(1-\varpi)}\,\overline{M}_{ij,l} & 0 & 0 & 0 \\ 0 & \sqrt{\varpi(1-\varpi)}\,\overline{N}_{ij,l} & 0 & 0 \end{bmatrix}, \\& \Gamma_{2ij,l}^{(2)}= \begin{bmatrix} 0 & 0 & 0 & 0 & \sqrt{3}\,\overline{D}_{i,l} \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix},\qquad \Gamma_{2ij,l}^{(3)}= \begin{bmatrix} \sqrt{2\chi_{1}}\,\overline{B}_{i,l} & 0 & 0 & 0 & 0 \\ 0 & \sqrt{2\chi_{2}}\,\overline{B}_{i,l} & 0 & 0 & 0 \\ 0 & 0 & \sqrt{2\kappa_{1}}\,\overline{C}_{i,l} & 0 & 0 \\ 0 & 0 & 0 & \sqrt{2\kappa_{2}}\,\overline{C}_{i,l} & 0 \end{bmatrix}, \\& \overline{M}_{ij,l}= \begin{bmatrix} 0 & 0 \\ Z_{i,l}C_{1j,l} & 0 \end{bmatrix},\qquad \overline{N}_{ij,l}= \begin{bmatrix} 0 & 0 \\ Z_{i,l}C_{2j,l} & 0 \end{bmatrix}, \\& \Gamma_{3l}=\operatorname{diag}\bigl\{\underbrace{P_{1}-2P_{l},P_{2}-2P_{l},\ldots,P_{N}-2P_{l}}_{N},\ \ldots,\ \underbrace{P_{1}-2P_{l},P_{2}-2P_{l},\ldots,P_{N}-2P_{l}}_{N}\bigr\}, \\& \overline{A}_{ij,l}= \begin{bmatrix} P_{11l}A_{i,l} & 0 \\ (1-\varpi)Z_{i,l}C_{1j,l} & P_{22l}A_{i,l}-Z_{i,l}C_{1j,l} \end{bmatrix},\qquad \overline{A}_{1ij,l}= \begin{bmatrix} 0 & 0 \\ (1-\varpi)Z_{i,l}C_{2j,l} & -Z_{i,l}C_{2j,l} \end{bmatrix}, \\& \overline{B}_{i,l}= \begin{bmatrix} P_{11l}B_{i,l} & 0 \\ 0 & P_{22l}B_{i,l} \end{bmatrix},\qquad \overline{C}_{i,l}= \begin{bmatrix} P_{11l}C_{i,l} & 0 \\ 0 & P_{22l}C_{i,l} \end{bmatrix},\qquad \overline{D}_{i,l}= \begin{bmatrix} P_{11l}D_{i,l} \\ P_{22l}D_{i,l} \end{bmatrix}, \\& \overline{\psi}_{1}=\lambda_{0}+\tau_{1}\lambda_{3}+\tau_{M}\lambda_{4}+\tau_{M}\lambda_{5} +\tfrac{1}{2}\tau_{12}(\tau_{1}+\tau_{2}-1)\lambda_{4},\qquad \psi_{2}=\tfrac{1}{2}\bigl[\tau_{12}(\tau_{1}+\tau_{2}-1)\lambda_{6}+\tau_{2}(\tau_{2}-1)\lambda_{7}\bigr], \\& \lambda_{0}^{-1}=\max_{l\in\mathcal{L}}\bigl\{\lambda_{\min}(\overline{P}_{11l}),\lambda_{\min}(\overline{P}_{22l})\bigr\},\qquad \lambda_{3}=\lambda_{\max}(\overline{Q}_{1}),\qquad \lambda_{4}=\lambda_{\max}(\overline{Q}_{2}),\qquad \lambda_{5}=\lambda_{\max}(\overline{Q}_{3}), \\& \lambda_{6}=\lambda_{\max}(\overline{R}_{1}),\qquad \lambda_{7}=\lambda_{\max}(\overline{R}_{2}),\qquad \lambda_{8}=\max_{l\in\mathcal{L}}\lambda_{\max}(S_{l}), \\& \overline{P}_{mml}=R^{-\frac{1}{2}}P_{mml}R^{-\frac{1}{2}}\ (m=1,2),\qquad \overline{Q}_{s}=R^{-\frac{1}{2}}Q_{s}R^{-\frac{1}{2}}\ (s=1,2,3),\qquad \overline{R}_{n}=R^{-\frac{1}{2}}R_{n}R^{-\frac{1}{2}}\ (n=1,2). \end{aligned}$$
Moreover, the finite-time state estimator can be constructed by
$$ K_{i,l}=P_{22l}^{-1}Z_{i,l} . $$
(40)
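Once the LMIs (37)-(39) are feasible, the estimator gain in (40) is recovered from the numeric decision variables returned by the solver. A sketch, assuming \(P_{22l}\) and \(Z_{i,l}\) are solver outputs (the values below are placeholders, not the solutions of the example in Section 4):

```python
import numpy as np

# Placeholder LMI solutions (illustrative only)
P22 = np.array([[2.0, 0.3],
                [0.3, 1.5]])   # symmetric positive definite block of P_l
Z = np.array([[0.4],
              [0.1]])          # slack decision variable Z_{i,l}

# K_{i,l} = P22^{-1} Z: solve the linear system rather than forming the inverse
K = np.linalg.solve(P22, Z)

# Sanity check: P22 @ K reproduces Z
print(np.allclose(P22 @ K, Z))
```

Solving the system directly is numerically preferable to `np.linalg.inv(P22) @ Z`, especially when \(P_{22l}\) is ill-conditioned near the feasibility boundary.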

Proof

Let \(P_{l}=\operatorname{diag}\{P_{11l},P_{22l}\}\). Pre- and post-multiplying (13) by the block-diagonal matrix \(\operatorname{diag}\{I,I,I,I,I,I,I,I,I,P_{l}^{-1}, P_{l}^{-1}, P_{l}^{-1}, P_{l}^{-1}, P_{l}^{-1}, P_{l}^{-1}, I, I, I\}\) and using the Schur complement lemma, one has
$$ \sum_{i=1}^{q}\sum _{j=1}^{q}h_{i}\bigl(\xi(k) \bigr)h_{j}\bigl(\xi(k)\bigr)\widetilde{\Gamma}_{ij,l}< 0, $$
(41)
where
$$\begin{aligned}& \widetilde{\Gamma}_{ij,l}= \begin{bmatrix} \Gamma_{1l} & \widetilde{\Gamma}_{2ij,l} & \widetilde{\Gamma}_{3l} \\ * & \Gamma_{4l} & 0 \\ * & * & \Gamma_{5l} \end{bmatrix},\qquad \widetilde{\Gamma}_{2ij,l}=\Gamma_{2ij,l}^{(0)}\times \begin{bmatrix} \widetilde{\Gamma}_{2ij,l}^{(1)} & \Gamma_{2ij,l}^{(2)} \\ 0 & \Gamma_{2ij,l}^{(3)} \end{bmatrix}, \\& \widetilde{\Gamma}_{2ij,l}^{(1)}= \begin{bmatrix} \sqrt{3}\,\widetilde{A}_{ij,l} & \sqrt{3}\,\widetilde{A}_{1ij,l} & 0 & 0 \\ \sqrt{\varpi(1-\varpi)}\,\widetilde{M}_{ij,l} & 0 & 0 & 0 \\ 0 & \sqrt{\varpi(1-\varpi)}\,\widetilde{N}_{ij,l} & 0 & 0 \end{bmatrix}, \\& \widetilde{M}_{ij,l}= \begin{bmatrix} 0 & 0 \\ P_{22l}K_{i,l}C_{1j,l} & 0 \end{bmatrix},\qquad \widetilde{N}_{ij,l}= \begin{bmatrix} 0 & 0 \\ P_{22l}K_{i,l}C_{2j,l} & 0 \end{bmatrix}, \\& \widetilde{\Gamma}_{3l}=\operatorname{diag}\bigl\{\underbrace{-P_{l}P_{1}^{-1}P_{l},-P_{l}P_{2}^{-1}P_{l},\ldots,-P_{l}P_{N}^{-1}P_{l}}_{N},\ \ldots,\ \underbrace{-P_{l}P_{1}^{-1}P_{l},-P_{l}P_{2}^{-1}P_{l},\ldots,-P_{l}P_{N}^{-1}P_{l}}_{N}\bigr\}, \\& \widetilde{A}_{ij,l}= \begin{bmatrix} P_{11l}A_{i,l} & 0 \\ (1-\varpi)P_{22l}K_{i,l}C_{1j,l} & P_{22l}A_{i,l}-P_{22l}K_{i,l}C_{1j,l} \end{bmatrix},\qquad \widetilde{A}_{1ij,l}= \begin{bmatrix} 0 & 0 \\ (1-\varpi)P_{22l}K_{i,l}C_{2j,l} & -P_{22l}K_{i,l}C_{2j,l} \end{bmatrix}. \end{aligned}$$
Since \(P_{m}>0\) (\(m=1,2,\ldots,N\)), it holds that
$$ [P_{m}-P_{l}]^{\intercal}P_{m}^{-1}[P_{m}-P_{l}] \geq0 \quad (m=1,2, \ldots,N), $$
which leads to
$$ -P_{l}P_{m}^{-1}P_{l}\leq P_{m}-2P_{l} \quad (m=1,2, \ldots,N). $$
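This bound follows by expanding \([P_{m}-P_{l}]^{\intercal}P_{m}^{-1}[P_{m}-P_{l}]=P_{m}-2P_{l}+P_{l}P_{m}^{-1}P_{l}\geq0\). A quick numeric check with random symmetric positive definite matrices (a sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_spd(n):
    """Random symmetric positive definite matrix."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

Pm, Pl = random_spd(3), random_spd(3)

# gap = (Pm - 2Pl) - (-Pl Pm^{-1} Pl) = (Pm - Pl)^T Pm^{-1} (Pm - Pl) >= 0
gap = (Pm - 2 * Pl) + Pl @ np.linalg.solve(Pm, Pl)

# All eigenvalues of the symmetric gap matrix must be nonnegative
print(np.linalg.eigvalsh(gap).min())
```

The inequality is used to replace the non-convex term \(-P_{l}P_{m}^{-1}P_{l}\) in \(\widetilde{\Gamma}_{3l}\) by the linear term \(P_{m}-2P_{l}\), which makes (37)-(38) genuine LMIs.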
It follows from (41) that
$$ \sum_{i=1}^{q}\sum _{j=1}^{q}h_{i}\bigl(\xi(k) \bigr)h_{j}\bigl(\xi(k)\bigr)\Gamma_{ij,l}< 0. $$
(42)
Furthermore, condition (42) can be written as
$$ \sum_{i=1}^{q}\sum _{j>i}^{q}h_{i}\bigl(\xi(k) \bigr)h_{j}\bigl(\xi(k)\bigr)[\Gamma_{ij,l}+ \Gamma_{ji,l}] +\sum_{i=1}^{q}h_{i}^{2} \bigl(\xi(k)\bigr)\Gamma_{ii,l}< 0, $$
which is guaranteed by (37) and (38). This completes the proof.
 □
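The regrouping step above relies on the scalar identity \(\sum_{i}\sum_{j}h_{i}h_{j}\Gamma_{ij}=\sum_{i}h_{i}^{2}\Gamma_{ii}+\sum_{i<j}h_{i}h_{j}(\Gamma_{ij}+\Gamma_{ji})\), which holds because the membership functions are scalars. A numeric check of the identity with random data (sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
q, n = 3, 2

h = rng.random(q)
h /= h.sum()                           # normalized membership functions
G = rng.standard_normal((q, q, n, n))  # random Gamma_{ij} blocks

# Full double sum over all fuzzy rule pairs
full = sum(h[i] * h[j] * G[i, j] for i in range(q) for j in range(q))

# Regrouped form used to obtain the LMI conditions (37)-(38)
regrouped = sum(h[i] ** 2 * G[i, i] for i in range(q))
regrouped += sum(h[i] * h[j] * (G[i, j] + G[j, i])
                 for i in range(q) for j in range(i + 1, q))

print(np.allclose(full, regrouped))
```

Because the memberships are nonnegative, negativity of each diagonal block \(\Gamma_{ii,l}\) and each paired sum \(\Gamma_{ij,l}+\Gamma_{ji,l}\) suffices for the whole weighted sum to be negative, which is exactly what (37)-(38) impose.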

4 Illustrative example

Example 1

Consider the T-S fuzzy Markovian jump neural network (9) involving two modes with the following parameters:
$$\begin{aligned}& A_{11}= \begin{bmatrix} 0.9 & 0 \\ 0 & 0.4 \end{bmatrix},\qquad B_{11}= \begin{bmatrix} 0.3 & 0.2 \\ 0.2 & 0.1 \end{bmatrix},\qquad C_{11}= \begin{bmatrix} 0.5 & 0.3 \\ 0.1 & 0.3 \end{bmatrix},\qquad D_{11}= \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}, \\& A_{12}= \begin{bmatrix} 0.6 & 0 \\ 0 & 0.8 \end{bmatrix},\qquad B_{12}= \begin{bmatrix} 0.4 & 0.2 \\ 0.4 & 0.1 \end{bmatrix},\qquad C_{12}= \begin{bmatrix} 0.5 & 0.3 \\ 0.1 & 0.3 \end{bmatrix},\qquad D_{12}= \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}, \\& A_{21}= \begin{bmatrix} 0.6 & 0 \\ 0 & 0.7 \end{bmatrix},\qquad B_{21}= \begin{bmatrix} 0.5 & 0.2 \\ 0.4 & 0.2 \end{bmatrix},\qquad C_{21}= \begin{bmatrix} 0.2 & 0 \\ 0.2 & 0.1 \end{bmatrix},\qquad D_{21}= \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}, \\& A_{22}= \begin{bmatrix} 0.6 & 0 \\ 0 & 0.9 \end{bmatrix},\qquad B_{22}= \begin{bmatrix} 0.4 & 0.2 \\ 0.1 & 0.2 \end{bmatrix},\qquad C_{22}= \begin{bmatrix} 0.6 & 0 \\ 0.5 & 0.4 \end{bmatrix},\qquad D_{22}= \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}, \\& C_{111}=C_{121}=C_{112}=C_{122}= \begin{bmatrix} 1 & 1 \end{bmatrix},\qquad C_{211}=C_{221}=C_{212}=C_{222}= \begin{bmatrix} 0.1 & 0.2 \end{bmatrix}, \\& U_{1l}=V_{1l}= \begin{bmatrix} 0.1 & 0.1 \\ 0 & 0.1 \end{bmatrix},\qquad U_{2l}=V_{2l}= \begin{bmatrix} 0.2 & 0.1 \\ 0 & 0.2 \end{bmatrix},\qquad U_{l}=V_{l}= \begin{bmatrix} 0.15 & 0.1 \\ 0 & 0.15 \end{bmatrix},\quad l=1,2. \end{aligned}$$
Moreover, assume that the transition rate matrix is given by
$$\Pi= \begin{bmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{bmatrix}. $$
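With this transition probability matrix, the jumping mode \(r_{k}\) can be simulated directly over the finite horizon. A minimal sketch (mode labels 0/1 stand for modes 1/2; the initial mode is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)

# Transition probability matrix from the example (each row sums to 1)
Pi = np.array([[0.7, 0.3],
               [0.4, 0.6]])

def simulate_modes(Pi, r0, N):
    """Sample a mode trajectory r_0, ..., r_N of the Markov chain."""
    modes = [r0]
    for _ in range(N):
        modes.append(rng.choice(len(Pi), p=Pi[modes[-1]]))
    return modes

modes = simulate_modes(Pi, 0, 8)   # N = 8, matching the finite-time horizon
print(modes)
```

Such mode trajectories drive the switching among the four local models \((A_{il},B_{il},C_{il},D_{il})\) when the closed-loop estimation error is simulated.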
The nonlinear activation functions \(f(\eta(k))\) and \(g(\eta(k))\) are chosen as
$$ f\bigl(\eta(k)\bigr)=g\bigl(\eta(k)\bigr) = \begin{bmatrix} 0.1\eta(1)+\tanh\bigl(0.1\eta(1)\bigr)+0.1\eta(2) \\ -0.1\eta(2)-\tanh\bigl(0.1\eta(2)\bigr) \end{bmatrix} $$
and the membership functions \(h_{1}(\eta(k))\) and \(h_{2}(\eta(k))\) are defined as
$$h_{1}\bigl(\eta(k)\bigr)= \textstyle\begin{cases} 0.5(1-\eta(k)), & \vert \eta(k)\vert < 1, \\ 1,& \vert \eta(k)\vert \geq1. \end{cases} $$

Let \(R=I\), \(c_{1}=1\), \(d=4\), \(N=8\), \(\alpha=2\), \(\tau_{1}=1\), and \(\tau_{2}=3\). By using the Matlab LMI Toolbox, the minimum value \(c_{2}=1.0252\) is obtained. Therefore, the augmented fuzzy Markovian jump neural network (9) is SFTB with respect to \((1,2,I,8,1.0252)\).

Remark 5

In view of the parameters given above, the sector bounds of the activation functions \(f(\eta(k))\) and \(g(\eta(k-\tau(k)))\) are \([\{U_{1l},V_{1l}\},\{U_{2l},V_{2l}\}]\). If only the lower and upper bounds of the activation functions are used instead of the probability distribution information, that is, \(\chi_{1}=\kappa_{1}=0\), \(\chi_{2}=\kappa_{2}=1\) with \(U_{l}=V_{l}= [ {\scriptsize\begin{matrix}{} 0.1 & 0.1 \cr 0 & -0.1\end{matrix}} ]\), the minimum obtained is \(c_{2}=1.2896\). However, when the probability information of the small and large activation functions is employed, the minimum \(c_{2}=1.0252\) is obtained, which shows that exploiting this information reduces conservatism.

5 Conclusions

This paper has been concerned with the finite-time state estimation problem for T-S fuzzy stochastic jumping neural networks under unreliable communication links. Stochastic variables subject to Bernoulli white sequences are employed to govern the nonlinearities occurring in different sector bounds. By employing a suitable Lyapunov-Krasovskii functional and the Newton-Leibniz formula, sufficient conditions for the existence of the state estimator are given in terms of linear matrix inequalities. Finally, a numerical example has been offered to show the effectiveness of the proposed approach. The main results in this paper may be further extended to other well-known dynamical models, such as fuzzy semi-Markovian jump systems, which will be addressed by the authors in future work.

Declarations

Acknowledgements

This work was supported by the Scientific and Technological Research Program of Chongqing Municipal Education Commission under Grant no. KJ1601009.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
College of Mathematics and Statistics, Chongqing Three Gorges University

References

  1. Zhang, B, Xu, S, Zou, Y: Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays. Neurocomputing 72(1-3), 321-330 (2008)
  2. Liu, Y, Wang, Z, Liu, X: Robust stability of discrete-time stochastic neural networks with time-varying delays. Neurocomputing 71(4-6), 823-833 (2008)
  3. Zhang, D, Shi, P, Zhang, W, Yu, L: Energy-efficient distributed filtering in sensor networks: a unified switched system approach. IEEE Trans. Cybern. (2016). doi:10.1109/TCYB.2016.2553043
  4. Zhang, D, Shi, P, Wang, QG, Yu, L: Analysis and synthesis of networked control systems: a survey of recent advances and challenges. ISA Trans. 66, 376-392 (2017)
  5. Pan, L, Cao, J: Exponential stability of stochastic functional differential equations with Markovian switching and delayed impulses via Razumikhin method. Adv. Differ. Equ. 2012, 61 (2012)
  6. Wang, J, Yao, F, Shen, H: Dissipativity-based state estimation for Markov jump discrete-time neural networks with unreliable communication links. Neurocomputing 139, 107-113 (2014)
  7. Shen, H, Huang, X, Zhou, J, Wang, Z: Global exponential estimates for uncertain Markovian jump neural networks with reaction-diffusion terms. Nonlinear Dyn. 69, 473-486 (2012)
  8. Zhang, D, Shi, P, Wang, QG: Energy-efficient distributed control of large-scale systems: a switched system approach. Int. J. Robust Nonlinear Control 26, 3101-3117 (2016)
  9. Chen, Y, Zheng, W: Stochastic state estimation for neural networks with distributed delays and Markovian jump. Neural Netw. 25, 14-20 (2012)
  10. Arunkumar, A, Sakthivel, R, Mathiyalagan, K, Park, JH: Robust stochastic stability of discrete-time fuzzy Markovian jump neural networks. ISA Trans. 53, 1006-1014 (2014)
  11. Zhang, Y, Mu, J, Shi, Y, Zhang, J: Finite-time filtering for T-S fuzzy jump neural networks with sector-bounded activation functions. Neurocomputing 186, 97-106 (2016)
  12. Chang, X, Yang, G: New results on output feedback \(H_{\infty}\) control for linear discrete-time systems. IEEE Trans. Autom. Control 59(5), 1355-1359 (2014)
  13. Chang, X: Robust nonfragile \(H_{\infty}\) filtering of fuzzy systems with linear fractional parametric uncertainties. IEEE Trans. Fuzzy Syst. 20(6), 1001-1011 (2012)
  14. Shen, M, Ye, D: Improved fuzzy control design for nonlinear Markovian-jump. Fuzzy Sets Syst. 217, 80-95 (2013)
  15. Wang, X, Fang, J, Dai, A, Zhou, W: Global synchronization for a class of Markovian switching complex networks with mixed time-varying delays in the delay-partition approach. Adv. Differ. Equ. 2014, 248 (2014)
  16. Zhang, D, Cai, W, Xie, L, Wang, Q: Nonfragile distributed filtering for T-S fuzzy systems in sensor networks. IEEE Trans. Fuzzy Syst. 23(5), 1883-1890 (2015)
  17. Su, X, Shi, P, Wu, L, Nguang, SK: Induced \(l_{2}\) filtering of fuzzy stochastic systems with time-varying delays. IEEE Trans. Cybern. 43(4), 1251-1264 (2013)
  18. Su, X, Wu, L, Shi, P, Chen, CLP: Model approximation for fuzzy switched systems with stochastic perturbation. IEEE Trans. Fuzzy Syst. 23(5), 1458-1473 (2015)
  19. Su, X, Wu, L, Shi, P, Song, Y: A novel approach to output feedback control of fuzzy stochastic systems. Automatica 50(12), 3268-3275 (2014)
  20. Chang, X, Park, J, Tang, Z: New approach to \(H_{\infty}\) filtering for discrete-time systems with polytopic uncertainties. Signal Process. 113, 147-158 (2015)
  21. Malinowski, MT: Strong solutions to stochastic fuzzy differential equations of Itô type. Math. Comput. Model. 55(3), 918-928 (2012)
  22. Liu, F, Wu, M, He, Y, Yokoyama, R: New delay-dependent stability criteria for T-S fuzzy systems with time-varying delay. Fuzzy Sets Syst. 161, 2033-2042 (2010)
  23. Pan, Y, Zhou, Q, Lu, Q, Wu, C: New dissipativity condition of stochastic fuzzy neural networks with discrete and distributed time-varying delays. Neurocomputing 162, 250-260 (2015)
  24. He, S, Liu, F: \(L_{2}-L_{\infty}\) fuzzy control for Markov jump systems with neutral time-delays. Math. Comput. Simul. 92, 1-13 (2013)
  25. Malinowski, MT: Some properties of strong solutions to stochastic fuzzy differential equations. Inf. Sci. 252, 62-80 (2013)
  26. Malinowski, MT, Agarwal, RP: On solutions to set-valued and fuzzy stochastic differential equations. J. Franklin Inst. 352, 3014-3043 (2015)
  27. Malinowski, MT: Set-valued and fuzzy stochastic differential equations in M-type 2 Banach spaces. Tohoku Math. J. 67(3), 349-381 (2015)
  28. Malinowski, MT: Stochastic fuzzy differential equations of a nonincreasing type. Commun. Nonlinear Sci. Numer. Simul. 33, 99-117 (2016)
  29. Malinowski, MT: Fuzzy and set-valued stochastic differential equations with local Lipschitz condition. IEEE Trans. Fuzzy Syst. 23(5), 1891-1898 (2015)
  30. Malinowski, MT: Fuzzy stochastic differential equations of decreasing fuzziness: approximate solutions. J. Intell. Fuzzy Syst. 29(3), 1087-1107 (2015)
  31. Malinowski, MT: Fuzzy stochastic differential equations of decreasing fuzziness: non-Lipschitz coefficients. J. Intell. Fuzzy Syst. 31(1), 13-25 (2016)
  32. Shen, H, Zhu, Y, Zhang, L, Park, J: Extended dissipative state estimation for Markov jump neural networks with unreliable links. IEEE Trans. Neural Netw. Learn. Syst. 28(2), 346-358 (2017)
  33. Chen, M, Shen, H, Li, F: On dissipative filtering over unreliable communication links for stochastic jumping neural networks based on a unified design method. J. Franklin Inst. 353(17), 4583-4601 (2016)
  34. Li, H, Chen, Z, Wu, L, Lam, H: Event-triggered control for nonlinear systems under unreliable communication links. IEEE Trans. Fuzzy Syst. (2016). doi:10.1109/TFUZZ.2016.2578346
  35. Cheng, J, Zhu, H, Zhong, S, Zheng, F, Zeng, Y: Finite-time filtering for switched linear systems with a mode-dependent average dwell time. Nonlinear Anal. Hybrid Syst. 15, 145-156 (2015)
  36. Cheng, J, Zhu, H, Zhong, S, Zeng, Y, Dong, X: Finite-time \(H_{\infty}\) control for a class of Markovian jump systems with mode-dependent time-varying delays via new Lyapunov functionals. ISA Trans. 52(6), 768-774 (2013)
  37. Cheng, J, Park, JH, Zhang, L, Zhu, Y: An asynchronous operation approach to event-triggered control for fuzzy Markovian jump systems with general switching policies. IEEE Trans. Fuzzy Syst. (2016). doi:10.1109/TFUZZ.2016.2633325
  38. Shen, H, Park, JH, Wu, ZG, Zhang, Z: Finite-time \(H_{\infty}\) synchronization for complex networks with semi-Markov jump topology. Commun. Nonlinear Sci. Numer. Simul. 24(1-3), 40-51 (2015)
  39. Cheng, J, Park, JH, Liu, Y, Liu, Z, Tang, L: Finite-time \(H_{\infty}\) fuzzy control of nonlinear Markovian jump delayed systems with partly uncertain transition descriptions. Fuzzy Sets Syst. (2016). doi:10.1016/j.fss.2016.06.007
  40. Tian, E, Yue, D, Wei, G: Robust control for Markovian jump systems with partially known transition probabilities and nonlinearities. J. Franklin Inst. 350, 2069-2083 (2013)
  41. He, S, Liu, F: Finite-time \(H_{\infty}\) fuzzy control of nonlinear jump systems with time delays via dynamic observer-based state feedback. IEEE Trans. Fuzzy Syst. 20(4), 605-614 (2012)
  42. Cheng, J, Li, G, Zhu, H, Zhong, S, Zeng, Y: Finite-time \(H_{\infty}\) control for a class of Markovian jump systems with mode-dependent time-varying delay. Adv. Differ. Equ. 2013, 214 (2013)
  43. Xu, Y, Lu, R, Zhou, K, Li, Z: Nonfragile asynchronous control for fuzzy Markov jump systems with packet dropouts. Neurocomputing 175, 443-449 (2016)

Copyright

© The Author(s) 2017