
Mode-dependent delays for dissipative filtering of stochastic semi-Markovian jump for neural networks

Abstract

This work is concerned with the problem of dissipative filtering for stochastic semi-Markovian jump neural networks in which the time-varying delay is governed by another semi-Markov process. Dissipative performance analysis is employed to solve a mode-dependent filtering problem in a unified way. To this end, we adopt the recently proposed notion of extended dissipativity, which gives a single inequality covering the well-known \(H_{\infty }\), \(L_{2}\)-\(L_{\infty }\), and dissipative performances. Different from the existing literature (Arslan et al. in Neural Netw 91:11–21, 2017; Chen et al. in ISA Trans. 101:170–176, 2020), where mostly delay-free filters have been investigated, our filter contains a communication delay. Based upon delay-dependent conditions for the stochastic stability and extended dissipativity of neural networks with time-varying delays, our results are obtained by using a mode-dependent Lyapunov–Krasovskii functional together with a novel integral inequality. The resulting stochastic filtering conditions are characterized by linear matrix inequalities. A numerical simulation is elaborated to illustrate the feasibility of the proposed design methodology.

Introduction

Delayed neural networks (DNNs) are widely used in almost all fields, including but not limited to associative memory, target tracking, pattern identification, signal processing, combinatorial optimization, and nonlinear control [38], where the system state, connection weights, and activation functions may take random values. In some application domains, stochastic neural networks outperform their deterministic counterparts, allowing them to solve particular issues such as the XOR problem and symmetry detection [9, 10]. As a result, studying their dynamics is of great importance, and a large number of results on DNNs have been obtained in the recent literature [11, 12]. For instance, the asymptotic stability problem for stochastic DNNs in which the time-varying delay occurs in a probabilistic manner has been addressed in [13], and the finite-time antisynchronization problem for coupled DNNs in a master–slave architecture with bounded asynchronous delays has been studied in [14].

It should be mentioned that the mode-dependent filters in the literature described above do not consider time-varying delays. In general, the signal filter itself has delays in its model, and when the communication channel is taken into account, the filter's input is a delayed version of the signal from the plant [15]. As a result, it is important to consider filters with both state and input delays. Unfortunately, this fascinating subject has not yet been extensively studied for delayed neural network systems, and it remains an open and challenging problem.

It is noteworthy that the true state of the network may not be easily discovered due to stochastic uncertainty and environmental variations [16–21], which is usually critical for the successful use of neural networks. It is well acknowledged that time delay is frequently encountered because neural networks are commonly implemented in hardware such as digital or integrated circuits. Since time delay is a source of oscillatory response and instability in networks, the stability analysis of neural networks with time delay has attracted enormous attention from researchers [22, 23]. In particular, \(H_{\infty }\) filtering has become a popular research topic in the fields of signal processing and network communications. When the system is precisely known and the statistical properties of the exogenous disturbances are available, the Kalman filter is the most effective filter, minimizing the variance of the estimation error [24]. The filtering problem consists in using output measurements to estimate the state of a system [25, 26]; some improved methods for stochastic systems are given in [27, 28]. Traditional Kalman filtering, however, may perform poorly in the presence of modeling errors and noises with uncertain spectral densities. \(H_{\infty }\) filtering is presented as a solution to the problem under such uncertainty; it is used to solve estimation problems in which the energy-to-energy gain from the external disturbances to the estimation error is limited to less than a prescribed level. Therefore, over the last few years, numerous results on \(H_{\infty }\) filtering have been established [29–33]. Nevertheless, little research attention has been paid to the dissipative filtering problem for DNNs with mode-dependent time-varying delays, which motivates our current research.

Based on the foregoing, we aim to solve the dissipative filtering problem for stochastic doubly-semi-Markovian switching DNNs in this paper. This article’s main novelties can be summarized in the following points:

  1. This is one of the first studies of the dissipative filtering problem for stochastic DNNs in which all of the system characteristic matrices switch according to a semi-Markovian process.

  2. The time-varying delay under discussion is governed by another semi-Markov process, which considerably expands the conventional mode-dependent delay situation; moreover, both state and input delays appear in the filter.

  3. By combining the relevant LMIs with mode-dependent criteria, the investigated error system is guaranteed to be stochastically stable with a prescribed extended dissipative disturbance attenuation level.

The rest of this paper is laid out as follows. A stochastic semi-Markovian switching DNN model with mode-dependent filter delays is proposed in Sect. 2, along with other relevant preliminaries. In Sect. 3, sufficient conditions in the form of matrix inequalities are constructed by using an appropriate mode-dependent Lyapunov functional in combination with a reciprocally convex inequality, ensuring that the examined system is stochastically stable with a specified performance index level. An example is given in Sect. 4 to demonstrate the efficacy of the theoretical results. Finally, conclusions are drawn in Sect. 5.

Notations. This paper uses fairly standard notation throughout. The superscript T denotes matrix transposition. The notation ♠ denotes an entry induced by symmetry. Furthermore, the notation \(\mathbf{X} \geq \mathbf{Y}\) (respectively, \(\mathbf{X} > \mathbf{Y}\)) for real symmetric matrices X and Y denotes that the matrix \(\mathbf{X}-\mathbf{Y}\) is positive semidefinite (respectively, positive definite). Also \(\Re ^{n_{x}}\), \(\Re ^{n_{r}}\), and \(\Re ^{n_{\omega }}\) denote the sets of \(n_{x}\)-, \(n_{r}\)-, and \(n_{\omega }\)-dimensional real vectors, respectively; I stands for an identity matrix of appropriate dimension. Let \((\Omega ,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq 0},\mathbb{P})\) be a complete probability space with a filtration \(\{\mathcal{F}_{t}\}_{t\geq 0}\) satisfying the usual conditions (i.e., \(\mathcal{F}_{0}\) contains all \(\mathbb{P}\)-null sets, and the filtration is monotonically increasing and right continuous). A block-diagonal matrix is denoted by \(\operatorname{diag}(\dots )\); \(|\cdot |\) denotes the Euclidean norm for vectors and \(\|\cdot \|\) denotes the spectral norm for matrices; \(l_{2}[0,\infty )\) represents the space of square-integrable vector functions over \([0,\infty )\).
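As a quick aside on the semidefinite ordering used throughout, \(\mathbf{X}\geq \mathbf{Y}\) can be checked numerically by an eigenvalue test. The following helper is purely illustrative (the function name and sample matrices are not part of the paper):

```python
import numpy as np

def matrix_geq(X, Y, tol=1e-9):
    """X >= Y in the semidefinite sense: X - Y is positive semidefinite."""
    # eigvalsh is the symmetric-matrix eigenvalue routine; the smallest
    # eigenvalue of X - Y decides the ordering.
    return float(np.linalg.eigvalsh(X - Y).min()) >= -tol

X = np.array([[3.0, 1.0], [1.0, 2.0]])
Y = np.eye(2)
```

Here `matrix_geq(X, Y)` holds while `matrix_geq(Y, X)` fails, since the ordering is only partial.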

System description and problem formulation

As shown in Fig. 1, we consider the following semi-Markovian jump stochastic system with mode-dependent time-varying delays, modeled by a neural network.

$$\begin{aligned} \textstyle\begin{cases} \dot{m}(t) = -A_{r(t)}m(t)+T_{0}p(m(t))+T_{1}p(m(t-\hbar _{1\sigma (t)}(t)))\\ \phantom{\dot{m}(t) =}{}+A_{ \hbar _{1}r(t)}m(t-\hbar _{1\sigma (t)}(t))+B_{r(t)}\omega (t), \\ z(t) = E_{r(t)}m(t)+E_{\hbar _{1}r(t)}m(t-\hbar _{1\sigma (t)}(t)), \\ y(t) = C_{r(t)}m(t), \\ m(t) = \Psi (t), \quad t \in [-\bar{\hbar }_{1},0], \end{cases}\displaystyle \end{aligned}$$
(1)

where \(m(t)= [ m_{1}\ m_{2}\ \cdots \ m_{n} ]^{T}\in \Re ^{n_{x}}\) is the state vector of the neural network;

$$\begin{aligned} A_{r(t)}=\operatorname{diag}\{\alpha _{1r(t)},\alpha _{2r(t)}, \dots , \alpha _{nr(t)}\}, \end{aligned}$$

belongs to the set of diagonal matrices with positive entries \(\alpha _{lr(t)}>0\), \(l=1,2,\dots ,n\); the neuron activation function is \(p(m(t))= [ p_{1}(m_{1}(t))\ p_{2}(m_{2}(t))\ \cdots \ p_{n}(m_{n}(t)) ]^{T}\); \(T_{0}\) and \(T_{1}\) are the connection weight matrix and the delayed connection weight matrix, respectively; \(y(t)= [ y_{1}\ y_{2}\ \cdots \ y_{r} ]^{T} \in \Re ^{n_{r}}\) is the system measurement; \(\omega (t) \in \Re ^{n_{\omega }}\) is the exogenous disturbance that belongs to \(l_{2} [0, \infty )\); \(z(t)= [ z_{1}\ z_{2}\ \cdots \ z_{p} ]^{T} \in \Re ^{n_{p}}\) is the signal to be estimated; \(\Psi (t)\) is a continuous vector-valued initial function on \([-\bar{\hbar }_{1},0]\); \(\hbar _{1\sigma (t)}(t)>0\) is the mode-dependent state delay of the system; \(A_{r(t)}\), \(A_{\hbar _{1}r(t)}\), \(B_{r(t)}\), \(E_{r(t)}\), \(E_{\hbar _{1}r(t)}\), and \(C_{r(t)}\) are system parameter matrices of appropriate dimensions.
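To make the structure of (1) concrete, its dynamics can be sketched for a single frozen mode. The block below is a minimal forward-Euler simulation under stated assumptions: no jumps, no disturbance, tanh activations, a constant delay, and hypothetical parameter values (these are not the example matrices of Sect. 4):

```python
import numpy as np

# Hypothetical two-neuron instance of (1), frozen in one mode, with a
# constant delay h1 = 0.2 s approximated by a history buffer.
A = np.diag([2.3, 0.9])                    # self-feedback matrix A_i
T0 = np.array([[0.3, 0.2], [0.3, -0.2]])   # connection weights
T1 = np.array([[0.3, 0.5], [-0.2, 0.1]])   # delayed connection weights
Ah1 = 0.1 * np.eye(2)                      # delayed-state matrix A_{h1,i}
dt, h1, T = 0.001, 0.2, 5.0
d, n = int(h1 / dt), int(T / dt)           # delay and horizon in steps

m = np.zeros((n + 1, 2))
m[:d + 1] = np.array([0.5, -0.8])          # constant initial function Psi(t)
for k in range(d, n):
    md = m[k - d]                          # delayed state m(t - h1)
    dm = -A @ m[k] + T0 @ np.tanh(m[k]) + T1 @ np.tanh(md) + Ah1 @ md
    m[k + 1] = m[k] + dt * dm              # forward-Euler step
```

The trajectory remains bounded for these values because the self-feedback term dominates the Lipschitz activation terms.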

Figure 1

General framework of a neural network filtering with mode-dependent delay

For system (1), consider a full-order mode-dependent delayed filter of the following form:

$$\begin{aligned} \textstyle\begin{cases} {\dot{m}_{f}}(t) = A_{fr(t)}m_{f}(t)+A_{\hbar fr(t)}m_{f}(t-\hbar _{2 \sigma (t)}(t))+B_{fr(t)}y(t), \\ z_{f}(t) = C_{fr(t)}m_{f}(t) + C_{\hbar fr(t)}m_{f}(t-\hbar _{2 \sigma (t)}(t)) + D_{fr(t)}y(t), \\ m_{f}(t) = \Psi _{f}(t), \quad t \in [-\bar{\hbar }_{2},0], \end{cases}\displaystyle \end{aligned}$$
(2)

where \(m_{f}\in {{}\Re ^{n_{x}}}\) and \(z_{f}\in {{}\Re ^{n_{z}}}\) denote the state and output of the filter, respectively; \(\Psi _{f}(t)\) is the initial condition; \(y(t)\) is the transmitted measurement; the matrices \(A_{fr(t)}\), \(A_{\hbar fr(t)}\), \(B_{fr(t)}\), \(C_{fr(t)}\), \(C_{\hbar fr(t)}\), and \(D_{fr(t)}\) are the filter parameters to be determined.

In the rest of this paper, for each possible \(r(t)=i \in \mathcal{S}_{r}\) and \(\sigma (t)=p \in \mathcal{S}_{\sigma }\), we write, for example, \(A_{r(t)}=A_{i}\), \(A_{\hbar _{1}r(t)}=A_{\hbar _{1i}}\), and so on. Define the augmented vector \(\hat{m}(t)= [ m(t)^{T}\ m_{f}(t)^{T} ]^{T}\) and the filtering error \(\hat{z}(t) = z(t)-z_{f}(t)\). Then, combining the system (1) and the filter (2) leads to the filtering error system:

$$\begin{aligned} \textstyle\begin{cases} \dot{\hat{m}}(t)=\bar{A} \hat{m}(t)+\bar{T}_{0}\hat{p}(\mathcal{H} \hat{m}(t))+\bar{T}_{1}\hat{p}(\mathcal{H}\hat{m}(t-\hbar _{1\sigma (t)}(t)))\\ \phantom{\dot{\hat{m}}(t)=}{}+ \sum_{k=1}^{2} \bar{A}_{\hbar _{k}}\hat{m}(t-\hbar _{k\sigma (t)}(t))+ \bar{B}\omega (t), \\ \hat{z}(t) = \bar{E} \hat{m}(t)+\sum_{k=1}^{2}\bar{E}_{\hbar _{k}} \hat{m}(t-\hbar _{k\sigma (t)}(t)), \end{cases}\displaystyle \end{aligned}$$
(3)

where

$$\begin{aligned} &\bar{A} = \begin{bmatrix} -A_{i}& 0 \\ B_{fj}C_{i} & A_{fj} \end{bmatrix}, \qquad \bar{A}_{\hbar _{1}}= \begin{bmatrix} A_{\hbar _{1i}}& 0 \\ 0 & 0 \end{bmatrix}, \\ &\bar{A}_{\hbar _{2}}= \begin{bmatrix} 0 & 0 \\ 0 & A_{\hbar fj} \end{bmatrix},\qquad \bar{B}= \begin{bmatrix} B_{i} \\ 0 \end{bmatrix},\\ & \bar{T}_{0}= \begin{bmatrix} T_{0} \\ 0 \end{bmatrix},\qquad \bar{T}_{1}= \begin{bmatrix} T_{1} \\ 0 \end{bmatrix},\qquad \bar{E} = \begin{bmatrix} E_{i}-D_{fj}C_{i} & -C_{fj}\end{bmatrix}, \\ & \bar{E}_{\hbar _{1}} = \begin{bmatrix} E_{\hbar _{1i}} & 0 \end{bmatrix}, \qquad \bar{E}_{\hbar _{2}} = \begin{bmatrix} 0 & -C_{\hbar fj} \end{bmatrix}, \qquad\mathcal{H} = \begin{bmatrix} I & 0\end{bmatrix}. \end{aligned}$$
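The augmented matrices above can be assembled numerically from the plant and filter blocks. The sketch below uses hypothetical matrices purely to illustrate the block structure (none of the values are from the paper):

```python
import numpy as np

n = 2                                      # plant and filter state dimension
A_i = np.diag([2.3, 0.9])                  # hypothetical mode-i plant matrices
Ah1_i = 0.1 * np.eye(n)
B_i = np.array([[0.16], [0.21]])
C_i = np.array([[1.0, 0.0]])
Af_j = -np.eye(n)                          # hypothetical filter matrices
Ahf_j = 0.1 * np.eye(n)
Bf_j = np.array([[0.5], [0.3]])

Z = np.zeros((n, n))
A_bar = np.block([[-A_i, Z], [Bf_j @ C_i, Af_j]])   # drift matrix of (3)
Ah1_bar = np.block([[Ah1_i, Z], [Z, Z]])            # plant-delay block
Ah2_bar = np.block([[Z, Z], [Z, Ahf_j]])            # filter-delay block
B_bar = np.vstack([B_i, np.zeros((n, 1))])          # disturbance input
H = np.hstack([np.eye(n), Z])                       # selects the plant state
```

Note how \(\mathcal{H}\) extracts the plant part of the augmented state, which is why the activation arguments in (3) read \(\mathcal{H}\hat{m}\).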

In the following, we introduce some lemmas and definitions that will support us in developing the main results.

Assumption 1

([34])

The neuron activation function fulfills the following condition: \(\mathcal{U}_{1}\) and \(\mathcal{U}_{2}\) are real constant matrices satisfying \(\mathcal{U}_{1}-\mathcal{U}_{2}\geq 0\) and

$$\begin{aligned} \begin{bmatrix} p(m)-\mathcal{U}_{1}m \end{bmatrix}^{T} \begin{bmatrix} p(m)-\mathcal{U}_{2}m \end{bmatrix} \leq 0,\quad {\forall m \in \Re ^{n_{x}} } . \end{aligned}$$
(4)
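For instance, componentwise tanh activations satisfy (4) with \(\mathcal{U}_{1}=I\) and \(\mathcal{U}_{2}=0\) (so \(\mathcal{U}_{1}-\mathcal{U}_{2}\geq 0\) holds); this is an illustrative choice, since the paper only assumes (4). A grid check:

```python
import numpy as np

# With U1 = I and U2 = 0, inequality (4) reduces to (p(m) - m)^T p(m) <= 0,
# which holds for tanh because 0 <= tanh(x)/x <= 1 componentwise.
U1, U2 = np.eye(2), np.zeros((2, 2))

def sector_lhs(m):
    p = np.tanh(m)                          # componentwise activation
    return float((p - U1 @ m) @ (p - U2 @ m))

grid = np.linspace(-5.0, 5.0, 101)
vals = [sector_lhs(np.array([a, b])) for a in grid for b in grid]
```

Every sampled value is nonpositive, consistent with the sector condition.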

Assumption 2

([35])

The \(\hbar _{k\sigma (t)}(t)\) are mode-dependent time delays such that

$$\begin{aligned} 0 \leq \hbar _{k\sigma (t)}(t)\leq \bar{\hbar }_{k}, \dot{\hbar }_{k \sigma (t)}(t)\leq \mu _{k},\quad k=1,2, \end{aligned}$$
(5)

where \(\bar{\hbar }_{k}>0\) and \(\mu _{k}\) are prescribed scalars.

The main purpose of this article is to design the dissipative filter (2) with time-varying delays.

Lemma 1

([35])

For given positive integers m and n, a constant \(\hat{\alpha } \in (0,1)\), a vector \(\varsigma \in \mathfrak{R}^{m}\), and any matrices \(Q \in \mathfrak{R}^{n \times n}\) with \(Q>0\), \(S_{1} \in \mathfrak{R}^{n \times m}\), and \(S_{2} \in \mathfrak{R}^{n \times m}\), if there exists a matrix \(G \in \mathfrak{R}^{n \times n}\) such that \(\begin{bmatrix} Q & G^{T} \\ \spadesuit & Q \end{bmatrix} >0\), then the following inequality holds:

$$\min_{\hat{\alpha } \in (0,1)} \biggl[ \frac{1}{\hat{\alpha }} \varsigma ^{T}S_{1}^{T}QS_{1}\varsigma + \frac{1}{1-\hat{\alpha }} \varsigma ^{T}S_{2}^{T}QS_{2} \varsigma \biggr] \geq \begin{bmatrix} \varsigma ^{T}S_{1}^{T} & \varsigma ^{T}S_{2}^{T} \end{bmatrix} \begin{bmatrix} Q & G^{T} \\ \spadesuit & Q \end{bmatrix} \begin{bmatrix} S_{1}\varsigma \\ S_{2}\varsigma \end{bmatrix}. $$
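The inequality of Lemma 1 can be verified numerically on randomly generated data. The sketch below takes a symmetric coupling matrix \(G\) (an assumption made only for this check) and samples \(\hat{\alpha }\) over a grid:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
Qh = rng.standard_normal((n, n))
Q = Qh @ Qh.T + n * np.eye(n)              # Q > 0
G = 0.2 * Q                                # symmetric coupling (sketch choice)
aug = np.block([[Q, G], [G.T, Q]])         # Lemma 1 requires this to be > 0
assert np.linalg.eigvalsh(aug).min() > 0

x1 = rng.standard_normal(n)                # stands for S1 * varsigma
x2 = rng.standard_normal(n)                # stands for S2 * varsigma
rhs = x1 @ Q @ x1 + 2 * (x1 @ G @ x2) + x2 @ Q @ x2
alphas = np.linspace(0.01, 0.99, 99)
gaps = [x1 @ Q @ x1 / a + x2 @ Q @ x2 / (1 - a) - rhs for a in alphas]
```

The gap stays nonnegative for every sampled \(\hat{\alpha }\), as the lemma asserts for the minimum over \(\hat{\alpha }\).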

Main results

Theorem 1

Under Assumption 1 and for given scalars \(\nu _{\hat{\ell }}\) fulfilling \(0<\nu _{\hat{\ell }}<1\), \(\hat{\ell }=0,1,2\), and \(\nu _{0}+\nu _{1}+\nu _{2}=1\), the filtering error system (3) is stochastically stable for any time-varying delays \(\hbar _{kp}(t)\) satisfying (5), if there exist matrices \({\mathbb{G}}>0\), \({\mathbf{S}}_{k}>0\), \({\mathbf{W}}_{k}>0\), \(\mathbf{P}_{ip}>0\), \({\mathbf{Q}}_{ki}>0\), \({\mathbf{R}}_{ki}>0\), \({\mathbf{Z}}_{ki}>0\), and \({\mathbf{M}}_{ki}\) such that \(\begin{bmatrix} \mathbf{Z}_{ki} & \mathbf{M}_{ki} \\ \spadesuit & \mathbf{Z}_{ki} \end{bmatrix}>0\), \(k=1,2\), and the following inequalities hold:

$$\begin{aligned} &{\mathbb{G}}-\mathbf{P}_{ip}< 0, \end{aligned}$$
(6)
$$\begin{aligned} &\nabla _{a}:=\sum^{N_{r}}_{j=1}\pi _{ij} [{\mathbf{Q}}_{kj}+{ \mathbf{R}}_{kj} ]-{ \mathbf{S}}_{k}< 0, \end{aligned}$$
(7)
$$\begin{aligned} &\nabla _{b}:=\sum^{N_{r}}_{j=1}\pi _{ij}{\mathbf{R}}_{kj}-{ \mathbf{S}}_{k}< 0, \end{aligned}$$
(8)
$$\begin{aligned} &\nabla _{c}:=\sum^{N_{r}}_{j=1}\pi _{ij}{\mathbf{Z}}_{kj}-\bar{\hbar }_{k}^{-1}{ \mathbf{W}}_{k}< 0, \end{aligned}$$
(9)
$$\begin{aligned} & \begin{bmatrix} -\nu _{0}\mathbb{G}&0&0& \bar{E} ^{T}\tilde{\exists }_{0}^{T} \\ \spadesuit & -\nu _{1}\mathbb{G}&0& \bar{E}_{\hbar _{1}}^{T} \tilde{\exists }_{0}^{T} \\ \spadesuit &\spadesuit &-\nu _{2}\mathbb{G}\mathbbm{ } &\bar{E}_{\hbar _{2}}^{T} \tilde{\exists }_{0}^{T} \\ \spadesuit &\spadesuit &\spadesuit & -I \end{bmatrix}< 0, \end{aligned}$$
(10)
$$\begin{aligned} & \begin{bmatrix} {\chi }_{ij} & \chi _{ij}^{a^{T}}& \mathbb{A} ^{T}\mathbf{P}_{ip} & \mathbb{E} ^{T}\tilde{\exists }_{1}^{T} \\ \spadesuit & -\exists _{3}-\gamma ^{2}I & \bar{B}^{T} \mathbf{P}_{ip} &0 \\ \spadesuit &\spadesuit & \varrho -2\mathbf{P}_{ip} & 0 \\ \spadesuit &\spadesuit &\spadesuit & -I \end{bmatrix}< 0, \end{aligned}$$
(11)

where \(\varrho =\sum_{k=1}^{2} (\bar{\hbar }_{k}^{2}\mathbf{Z}_{ki}+ \frac{1}{2}\bar{\hbar }_{k}^{2}\mathbf{W}_{k} )\) and

$$\begin{aligned} &{\chi }_{ij}= \begin{bmatrix} {\chi }_{ij}^{1} & {\chi }_{21} & {\mathbf{M}}_{1i} & {\chi }_{22} & { \mathbf{M}}_{2i} & \mathbf{P}_{ip}\bar{T}_{0}-a\hat{\mathcal{U}}_{2} & \mathbf{P}_{ip}\bar{T}_{1}-b\hat{\mathcal{U}}_{2} \\ \spadesuit & \chi _{31} & \mathbf{Z}_{1i} - \mathbf{M}_{1i} & 0 & 0 & 0 & 0 \\ \spadesuit & \spadesuit & -\mathbf{Z}_{1i} -\mathbf{R}_{1i} & 0 & 0 & 0 & 0 \\ \spadesuit & \spadesuit & \spadesuit & \chi _{32}& \mathbf{Z}_{2i} - \mathbf{M}_{2i} & 0 & 0 \\ \spadesuit & \spadesuit & \spadesuit & \spadesuit & -\mathbf{Z}_{2i} - \mathbf{R}_{2i} & 0 & 0 \\ \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & -a I & 0 \\ \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & -b I \end{bmatrix}, \\ &\chi _{ij}^{a} = \begin{bmatrix} -\exists _{2}^{T} \bar{E}+\bar{B}^{T} \mathbf{P}_{ip}& -\exists _{2}^{T} \bar{E}_{\hbar _{1}}& 0 & -\exists _{2}^{T} \bar{E}_{\hbar _{2}}& 0 & 0 & 0 \end{bmatrix}, \\ &\mathbb{A} = \begin{bmatrix} \bar{A}& \bar{A}_{\hbar _{1}}& 0 & \bar{A}_{\hbar _{2}}& 0 & 0& 0 \end{bmatrix},\qquad \mathbb{E}= \begin{bmatrix} \bar{E}& \bar{E}_{\hbar _{1}}& 0 & \bar{E}_{\hbar _{2}}& 0 & 0 & 0 \end{bmatrix}, \\ &\chi _{ij}^{1}= \Biggl(\sum^{N_{r}}_{j=1} \pi _{ij}\mathbf{P}_{jp} + \sum^{N_{\sigma }}_{q=1} \lambda _{pq}\mathbf{P}_{iq}+\sum _{k=1}^{2} \{\mathbf{Q}_{ki}+ \mathbf{R}_{ki}-\mathbf{Z}_{ki}+ \bar{\hbar }_{k} \mathbf{S}_{k}\} \Biggr)\\ &\phantom{\chi _{ij}^{1}=}{}+ \mathbf{P}_{ip}\bar{A} + \bar{A} ^{T} \mathbf{P}_{ip}-(a+b)\hat{\mathcal{U}}_{1}, \\ &\chi _{2k}= \mathbf{P}_{ip}\bar{A}_{\hbar _{k}} + \mathbf{Z}_{ki} - \mathbf{M}_{ki},\quad k=1,2, \\ &\chi _{3k}= -(1-\mu _{k})\mathbf{Q}_{ki}-2 \mathbf{Z}_{ki} + \mathbf{M}_{ki}+ \mathbf{M}^{T}_{ki},\quad k=1,2. \end{aligned}$$
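In practice, conditions (6)–(11) are handed to an LMI solver; once candidate matrices are fixed, each condition is a definiteness test. A minimal eigenvalue-based stand-in, with hypothetical matrices playing the role of condition (6), is:

```python
import numpy as np

def is_negative_definite(M, tol=0.0):
    """A symmetric matrix M satisfies M < 0 iff its largest eigenvalue is negative."""
    M = 0.5 * (M + M.T)                    # symmetrize against round-off
    return float(np.linalg.eigvalsh(M).max()) < tol

# Toy stand-in for condition (6), G - P_ip < 0 (values are illustrative only).
G_mat = np.eye(2)
P_ip = np.array([[3.0, 0.5], [0.5, 2.0]])
```

Here `is_negative_definite(G_mat - P_ip)` holds, so this candidate pair would pass condition (6).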

Proof

We construct the following Lyapunov–Krasovskii functional candidate for system (3):

$$\begin{aligned} \mathcal{G}\bigl(t,\hat{m}(t),\sigma (t),r(t)\bigr) = \hat{m}(t)^{T}\mathbf{P}_{(r(t), \sigma (t))}\hat{m}(t)+\sum ^{5}_{l = 1}\mathcal{G}_{l}(t), \end{aligned}$$
(12)

where

$$\begin{aligned} &\mathcal{G}_{1}(t) = \sum_{k=1}^{2} \int ^{t}_{t-\hbar _{k\sigma (t)}(t)} \hat{m}(\alpha )^{T} \mathbf{Q}_{kr_{t}}\hat{m}(\alpha )\,d\alpha , \\ &\mathcal{G}_{2}(t) = \sum_{k=1}^{2} \int ^{t}_{t-\bar{\hbar }_{k}} \hat{m}(\alpha )^{T} \mathbf{R}_{kr_{t}}\hat{m}(\alpha )\,d\alpha , \\ &\mathcal{G}_{3}(t) = \sum_{k=1}^{2} \bar{\hbar }_{k} \int ^{0}_{-\bar{\hbar }_{k}} \int ^{t}_{t+\beta } \dot{\hat{m}}(\alpha )^{T} \mathbf{Z}_{kr_{t}} \dot{\hat{m}}(\alpha )\,d\alpha\, d\beta , \\ &\mathcal{G}_{4}(t) = \sum_{k=1}^{2} \int ^{0}_{-\bar{\hbar }_{k}} \int ^{t}_{t+\beta }\hat{m}(\alpha )^{T} \mathbf{S}_{k} \hat{m}(\alpha )\,d\alpha\, d\beta , \\ &\mathcal{G}_{5}(t) = \sum_{k=1}^{2} \int ^{0}_{-\bar{\hbar }_{k}} \int ^{0}_{\theta } \int ^{t}_{t+\beta }\dot{\hat{m}}(\alpha )^{T} \mathbf{W}_{k}\dot{\hat{m}}(\alpha )\,d\alpha\, d\beta \,d\theta , \end{aligned}$$

in which \(\mathbf{P}_{(r(t),\sigma (t))}\), \(\mathbf{Q}_{kr_{t}}\), \(\mathbf{R}_{kr_{t}}\), \(\mathbf{Z}_{kr_{t}}\), \(\mathbf{S}_{k}\), and \(\mathbf{W}_{k}\) are matrices to be determined; we write \(r(t)=i\) and \(\sigma (t)=p\). Let \(\mathcal{A}\) be the weak infinitesimal generator of the random process \(\{\hat{m}(t),\sigma (t),r(t)\}\) (see, e.g., [23]). Then, by using techniques similar to those in [23, 38], we have

$$\begin{aligned} \begin{aligned} &\mathcal{A}\bigl\{ \mathcal{G}\bigl(t, \hat{m}(t),\sigma (t),r(t)\bigr)\bigr\} \\ &\quad= \hat{m}(t)^{T} \Biggl(\sum ^{N_{r}}_{j=1}\pi _{ij}\mathbf{P}_{jp} + \sum^{N_{\sigma }}_{q=1}\lambda _{pq} \mathbf{P}_{iq}+\sum_{k=1}^{2}\bigl\{ \mathbf{Q}_{ki}+ \mathbf{R}_{ki}+ \bar{\hbar }_{k}\mathbf{S}_{k}\bigr\} \Biggr)\hat{m}(t) \\ &\qquad{} +2\hat{m}(t)^{T}\mathbf{P}_{ip}\dot{\hat{m}}(t)- \sum_{k=1}^{2}\bigl(1- \dot{\hbar }_{kp}(t)\bigr)\hat{m}\bigl(t-\hbar _{kp}(t) \bigr)^{T}\mathbf{Q}_{ki} \hat{m}\bigl(t-\hbar _{kp}(t)\bigr) \\ &\qquad{} -\sum_{k=1}^{2}\hat{m}(t-\bar{\hbar }_{k})^{T}\mathbf{R}_{ki} \hat{m}(t-\bar{\hbar }_{k}) +\sum_{k=1}^{2}\dot{ \hat{m}}(t)^{T} \biggl( \bar{\hbar }_{k}^{2} \mathbf{Z}_{ki}+\frac{1}{2}\bar{\hbar }_{k}^{2} \mathbf{W}_{k} \biggr)\dot{\hat{m}}(t) \\ &\qquad{} -\sum_{k=1}^{2}\bar{\hbar }_{k} \int ^{t}_{t-\bar{\hbar }_{k}} \dot{\hat{m}}(\alpha )^{T} \mathbf{Z}_{ki}\dot{\hat{m}}(\alpha )\,d \alpha +\sum _{k=1}^{2} \int ^{t}_{t-\hbar _{kp}(t)}\hat{m}(\alpha )^{T} \nabla _{a}\hat{m}(\alpha )\,d\alpha \\ &\qquad{} +\sum_{k=1}^{2} \int ^{t-\hbar _{kp}(t)}_{t-\bar{\hbar }_{k}} \hat{m}(\alpha )^{T}\nabla _{b}\hat{m}(\alpha )\,d\alpha +\sum_{k=1}^{2} \bar{\hbar }_{k} \int ^{0}_{-\bar{\hbar }_{k}} \int ^{t}_{t+\beta } \dot{\hat{m}}(\alpha )^{T} \nabla _{c}\dot{\hat{m}}(\alpha )\,d\alpha \,d \beta . \end{aligned} \end{aligned}$$
(13)

Recalling (13) and applying Lemma 1 to the integral term for each \(k=1,2\), we have

$$\begin{aligned} -\bar{\hbar }_{k} \int ^{t}_{t-\bar{\hbar }_{k}}\dot{\hat{m}}(\alpha )^{T} \mathbf{Z}_{ki}\dot{\hat{m}}(\alpha )\,d\alpha \leq \aleph (t)^{T} \mathcal{M}_{k}\aleph (t), \end{aligned}$$
(14)

where

$$\begin{aligned} &\aleph (t) = \begin{bmatrix} \hat{m}(t)^{T} & \hat{m}(t-{\hbar }_{kp}(t))^{T} & \hat{m}(t-\bar{\hbar }_{k})^{T} & \omega (t)^{T} \end{bmatrix}^{T}, \\ &\mathcal{M}_{k} = \begin{bmatrix} -\mathbf{Z}_{ki} & \mathbf{Z}_{ki} - \mathbf{M}_{ki} & \mathbf{M}_{ki} & 0 \\ \spadesuit & -2\mathbf{Z}_{ki} + \mathbf{M}_{ki}+ \mathbf{M}^{T}_{ki} & \mathbf{Z}_{ki}-\mathbf{M}_{ki} & 0 \\ \spadesuit & \spadesuit & -\mathbf{Z}_{ki} & 0 \\ \spadesuit & \spadesuit & \spadesuit & 0 \end{bmatrix}. \end{aligned}$$

From Assumption 1, we get

$$\begin{aligned} \begin{bmatrix} \hat{m}(t) \\ \hat{p}(\mathcal{H}\hat{m}(t)) \end{bmatrix}^{T} \begin{bmatrix} \hat{\mathcal{U}}_{1} & \hat{\mathcal{U}}_{2} \\ \hat{\mathcal{U}}_{2}^{T} & I \end{bmatrix} \begin{bmatrix} \hat{m}(t) \\ \hat{p}(\mathcal{H}\hat{m}(t)) \end{bmatrix} \leq 0, \end{aligned}$$
(15)

where \(\hat{\mathcal{U}}_{1}=\mathcal{H}^{T}\tilde{\mathcal{U}}_{1}\mathcal{H}\) and \(\hat{\mathcal{U}}_{2}=-\mathcal{H}^{T}\tilde{\mathcal{U}}_{2}\), with \(\tilde{\mathcal{U}}_{1}= \frac{\mathcal{U}_{1}^{T}\mathcal{U}_{2}+\mathcal{U}_{2}^{T}\mathcal{U}_{1}}{2}\) and \(\tilde{\mathcal{U}}_{2}= \frac{\mathcal{U}_{1}^{T}+\mathcal{U}_{2}^{T}}{2}\). Hence, for scalars \(a>0\) and \(b>0\), this yields

$$\begin{aligned} & {-}a \begin{bmatrix} \hat{m}(t) \\ \hat{p}(\mathcal{H}\hat{m}(t)) \end{bmatrix}^{T} \begin{bmatrix} \hat{\mathcal{U}}_{1} & \hat{\mathcal{U}}_{2} \\ \hat{\mathcal{U}}_{2}^{T} & I \end{bmatrix} \begin{bmatrix} \hat{m}(t) \\ \hat{p}(\mathcal{H}\hat{m}(t)) \end{bmatrix} \geq 0, \end{aligned}$$
(16)
$$\begin{aligned} &{-}b \begin{bmatrix} \hat{m}(t-\hbar _{1p}(t)) \\ \hat{p}(\mathcal{H}\hat{m}(t-\hbar _{1p}(t))) \end{bmatrix}^{T} \begin{bmatrix} \hat{\mathcal{U}}_{1} & \hat{\mathcal{U}}_{2} \\ \hat{\mathcal{U}}_{2}^{T} & I \end{bmatrix} \begin{bmatrix} \hat{m}(t-\hbar _{1p}(t)) \\ \hat{p}(\mathcal{H}\hat{m}(t-\hbar _{1p}(t))) \end{bmatrix} \geq 0. \end{aligned}$$
(17)

To simplify the notation, define

$$\begin{aligned} \zeta (t)^{T} = \begin{bmatrix} \hat{m}(t)^{T} & \hat{m}(t-\hbar _{1p}(t))^{T} & \hat{m}(t-\bar{\hbar }_{1})^{T} & \hat{m}(t-\hbar _{2p}(t))^{T} & \hat{m}(t-\bar{\hbar }_{2})^{T} & \hat{p}(\mathcal{H}\hat{m}(t))^{T} & \hat{p}(\mathcal{H}\hat{m}(t-\hbar _{1p}(t)))^{T} & \omega (t)^{T} \end{bmatrix}. \end{aligned}$$

It follows from (12)–(17) that

$$\begin{aligned} \mathcal{A}\bigl\{ \mathcal{G}\bigl(t,\hat{m}(t),\sigma (t),r(t)\bigr)\bigr\} - \mathbb{J}(t) \leq \zeta (t)^{T} \tilde{\digamma }_{ij} \zeta (t), \end{aligned}$$
(18)

where

$$\begin{aligned} \tilde{\digamma }_{ij}= \begin{bmatrix} \chi _{ij} & \chi _{ij}^{a^{T}} \\ \spadesuit & -\exists _{3}-\gamma ^{2}I \end{bmatrix}+ \begin{bmatrix} \mathbb{A}^{T} \\ \bar{B}^{T} \end{bmatrix} \varrho \begin{bmatrix} \mathbb{A}^{T} \\ \bar{B}^{T} \end{bmatrix}^{T}+ \begin{bmatrix} \mathbb{E}^{T} \\ 0 \end{bmatrix}\tilde{\exists }_{1}^{T} \tilde{\exists }_{1} \begin{bmatrix} \mathbb{E}^{T} \\ 0 \end{bmatrix}^{T}. \end{aligned}$$

Note that

$$\begin{aligned} \varrho =\mathbf{P}_{ip} \bigl[ \mathbf{P}_{ip}\varrho ^{-1} \mathbf{P}_{ip} \bigr]^{-1} \mathbf{P}_{ip}\leq \mathbf{P}_{ip}[2 \mathbf{P}_{ip}- \varrho ]^{-1}\mathbf{P}_{ip}. \end{aligned}$$
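This bound, a standard consequence of \((\mathbf{P}_{ip}-\varrho )\varrho ^{-1}(\mathbf{P}_{ip}-\varrho )\geq 0\), can be verified numerically for random positive definite matrices (the construction below is illustrative and guarantees \(2\mathbf{P}_{ip}-\varrho >0\)):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)                 # P > 0 with min eigenvalue >= n
S = rng.standard_normal((n, n))
S = S @ S.T
S = S / np.linalg.eigvalsh(S).max()         # PSD, spectral norm 1
R = 0.5 * P + 0.5 * S                       # plays the role of rho
assert np.linalg.eigvalsh(2 * P - R).min() > 0

bound = P @ np.linalg.inv(2 * P - R) @ P    # P (2P - rho)^{-1} P
gap = np.linalg.eigvalsh(bound - R).min()   # >= 0 iff rho <= bound
```

The smallest eigenvalue of `bound - R` is nonnegative, confirming the matrix inequality.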

Applying the Schur complement equivalence to (11) yields \(\tilde{\digamma }_{ij}<0\).

Thus, by following the same procedure as in [35], we can show that system (3) is stochastically stable in the sense of Definition 2 of [35]. □

Theorem 2

Under Assumption 1, the filtering error system (3) is stochastically stable with the extended dissipative property for any time-varying delays \(\hbar _{kp}(t)\) satisfying (5), if there exist matrices \(\mathbf{P}_{ip}= \operatorname{diag}\{ \mathbf{P}_{ip_{1}} , \mathbf{P}_{ip_{2}} \} >0\), \({\mathbb{G}}>0\), \({\mathbf{S}}_{k}>0\), \({\mathbf{W}}_{k}>0\), \({\mathbf{Q}}_{ki}>0\), \({\mathbf{R}}_{ki}>0\), \({\mathbf{Z}}_{ki}>0\), \({\mathbf{M}}_{ki}\), \(\mathbb{A}_{fj}\), \(\mathbb{B}_{fj}\), \(\mathbb{C}_{fj}\), \(\mathbb{D}_{fj}\), \(\mathbb{A}_{\hbar fj}\), and \(\mathbb{C}_{\hbar fj}\) such that \(\begin{bmatrix} \mathbf{Z}_{ki} & \mathbf{M}_{ki} \\ \spadesuit & \mathbf{Z}_{ki} \end{bmatrix}>0\), \(k=1,2\), and the following inequalities hold:

$$\begin{aligned} & \begin{bmatrix} -\nu _{0}{\mathbb{G}}&0&0& \Xi _{aij} \tilde{\exists }_{0}^{T} \\ \spadesuit & -\nu _{1}{\mathbb{G}}&0& \Xi _{bij} \tilde{\exists }_{0}^{T} \\ \spadesuit &\spadesuit &-\nu _{2}{\mathbb{G}} &\Xi _{cij} \tilde{\exists }_{0}^{T} \\ \spadesuit &\spadesuit &\spadesuit & -I \end{bmatrix}< 0, \end{aligned}$$
(19)
$$\begin{aligned} & \begin{bmatrix} \check{\chi }_{ij}^{1} & \check{\chi }_{21} & {\mathbf{M}}_{1i} & \check{\chi }_{22} & {\mathbf{M}}_{2i} & \check{\varphi }_{1} & \check{\varphi }_{2} & -\Xi _{aij}\exists _{2}+ \Xi _{4ij} & \Xi _{1ij} ^{T} & \Xi _{aij}\tilde{\exists }_{1}^{T} \\ \spadesuit & \check{\chi }_{31} & \check{\phi }_{1} & 0 & 0 & 0 & 0 & - \Xi _{bij}\exists _{2} & \Xi _{2ij} ^{T} & \Xi _{bij}\tilde{\exists }_{1}^{T} \\ \spadesuit & \spadesuit & \check{\phi }_{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \spadesuit & \spadesuit & \spadesuit & \check{\chi }_{32}& \check{\phi }_{3} & 0 & 0 & -\Xi _{cij}\exists _{2} & \Xi _{3ij} ^{T} & \Xi _{cij}\tilde{\exists }_{1}^{T} \\ \spadesuit & \spadesuit & \spadesuit & \spadesuit & \check{\phi }_{4} & 0 & 0 & 0 & 0 & 0 \\ \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & -a I & 0 & 0 & 0 & 0 \\ \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & -b I& 0 & 0 & 0 \\ \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & -\exists _{3}-\gamma ^{2}I & \Xi _{4ij} ^{T} &0 \\ \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & \varrho -2\mathbf{P}_{ip} & 0 \\ \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & \spadesuit & -I \end{bmatrix} < 0, \\ &\check{\chi }_{ij}^{1}= \Biggl(\sum ^{N_{r}}_{j=1}\pi _{ij} \mathbf{P}_{jp} +\sum^{N_{\sigma }}_{q=1}\lambda _{pq} \mathbf{P}_{iq}+ \sum_{k=1}^{2}\{ \mathbf{Q}_{ki}+\mathbf{R}_{ki}-\mathbf{Z}_{ki}+ \bar{\hbar }_{k}\mathbf{S}_{k}\} \Biggr) + \Xi _{1ij} + \Xi _{1ij} ^{T}-(a+b) \hat{ \mathcal{U}}_{1}, \\ &\check{\chi }_{2k}= \Xi _{(k+1)ij} + {\mathbf{Z}}_{ki} - {\mathbf{M}}_{ki},\quad k=1,2,\qquad \check{\varphi }_{1}= \Xi _{5ij}-a\hat{\mathcal{U}}_{2}, \qquad\check{\varphi }_{2}= \Xi _{6ij}-b\hat{\mathcal{U}}_{2}, \\ &\check{\phi }_{1} = {\mathbf{Z}}_{1i} - {\mathbf{M}}_{1i} , \qquad\check{\phi }_{2}= -{\mathbf{Z}}_{1i} -{\mathbf{R}}_{1i},\qquad \check{\phi }_{3}= { \mathbf{Z}}_{2i} - {\mathbf{M}}_{2i} ,\qquad \check{\phi }_{4}= -{\mathbf{Z}}_{2i} -{\mathbf{R}}_{2i}, \\ &\check{\chi }_{3k}= -(1-\mu _{k}){\mathbf{Q}}_{ki}-2{ \mathbf{Z}}_{ki} + {\mathbf{M}}_{ki}+ {\mathbf{M}}^{T}_{ki},\quad k=1,2, \\ &\Xi _{1ij} = \begin{bmatrix} -\mathbf{P}_{ip_{1}}A_{i} & 0 \\ \mathbb{B}_{fj}C_{i} & \mathbb{A}_{fj} \end{bmatrix},\qquad \Xi _{2ij} = \begin{bmatrix} \mathbf{P}_{ip_{1}}A_{\hbar _{1i}} & 0 \\ 0 & 0 \end{bmatrix},\qquad \Xi _{3ij} = \begin{bmatrix} 0 & 0 \\ 0 & \mathbb{A}_{\hbar fj} \end{bmatrix} , \\ &\Xi _{4ij} = \begin{bmatrix} \mathbf{P}_{ip_{1}}B_{i} \\ 0 \end{bmatrix}, \qquad\Xi _{5ij} = \begin{bmatrix} \mathbf{P}_{ip_{1}}T_{0} \\ 0 \end{bmatrix}, \qquad\Xi _{6ij} = \begin{bmatrix} \mathbf{P}_{ip_{1}}T_{1} \\ 0 \end{bmatrix} , \\ &\Xi _{aij} = \begin{bmatrix} E_{i}^{T}-C_{i}^{T}\mathbb{D}_{fj} \\ -\mathbb{C}_{fj}^{T} \end{bmatrix},\qquad \Xi _{bij} = \begin{bmatrix} E_{\hbar _{1i}}^{T} \\ 0 \end{bmatrix}, \qquad\Xi _{cij} = \begin{bmatrix} 0 \\ -\mathbb{C}_{\hbar fj}^{T} \end{bmatrix}. \end{aligned}$$
(20)

The remaining parameters are as defined in Theorem 1.

Proof

Partition the Lyapunov matrix of Theorem 1 as

$$\begin{aligned} \mathbf{P}_{ip}= \operatorname{diag}\{ \mathbf{P}_{ip_{1}} , \mathbf{P}_{ip_{2}} \}. \end{aligned}$$

Based on this partition, we define the linearized filter variables used in Theorem 2 in terms of the filter parameters of (2):

$$\begin{aligned} \textstyle\begin{cases} \mathbb{A}_{fj} =\mathbf{P}_{ip_{2}}A_{fj}, \\ \mathbb{A}_{\hbar fj} =\mathbf{P}_{ip_{2}}A_{\hbar fj} , \\ \mathbb{B}_{fj} =\mathbf{P}_{ip_{2}}B_{fj}, \end{cases}\displaystyle \qquad\textstyle\begin{cases} \mathbb{C}_{fj} =C_{fj}, \\ \mathbb{C}_{\hbar fj} =C_{\hbar fj} , \\ \mathbb{D}_{fj} =D_{fj}, \end{cases}\displaystyle \end{aligned}$$
(21)

Substituting (21) into the conditions of Theorem 2 shows that all the conditions of Theorem 1 are satisfied. Therefore, the filtering error system (3) is extended dissipative for any mode-dependent time-varying delays \(\hbar _{kp}(t)\) satisfying (5). The proof is completed. □
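Once the LMIs of Theorem 2 are solved, the filter gains of (2) are recovered from (21) by inverting \(\mathbf{P}_{ip_{2}}\). A sketch with the mode-1 linearized variables reported in the example of the next section and a hypothetical \(\mathbf{P}_{ip_{2}}\) (the solver's actual \(\mathbf{P}_{ip_{2}}\) is not reported) is:

```python
import numpy as np

# Hypothetical P_{ip_2} > 0; the remaining values are the mode-1 linearized
# variables A_f1, Ahbar_f1, B_f1 from the numerical example.
P2 = np.array([[2.0, 0.3], [0.3, 1.5]])
Af1_var = np.array([[12.1390, -9.4334], [-7.2454, 10.2466]])
Ahf1_var = np.array([[-0.0376, 0.0388], [0.0740, -0.0765]])
Bf1_var = np.array([[-1.6474], [1.0451]])

P2_inv = np.linalg.inv(P2)
A_f1 = P2_inv @ Af1_var      # A_f1 = P_{ip_2}^{-1} (P_{ip_2} A_f1)
Ah_f1 = P2_inv @ Ahf1_var    # delayed-state gain of the filter
B_f1 = P2_inv @ Bf1_var      # measurement gain of the filter
```

Note that \(\mathbb{C}_{fj}\), \(\mathbb{C}_{\hbar fj}\), and \(\mathbb{D}_{fj}\) need no inversion, since (21) sets them equal to the corresponding filter gains.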

Numerical example

This section presents a numerical example to show the effectiveness of the derived theoretical results. Consider the semi-Markovian switching neural network (1) with two operation modes, i.e., \(\mathcal{S}_{r} = \{1, 2\}\). The corresponding system matrices of the subsystems are:

$$\begin{aligned} & \begin{bmatrix} A_{1} & A_{\hbar _{11}} & T_{0} & B_{1} \\ A_{2} & A_{\hbar _{12}} & T_{1} & B_{2} \end{bmatrix}= \left [ \textstyle\begin{array}{cc|cc|cc|cc} 2.3 & 0 & 3.9 & 0.5 & 0.3 & 0.2 & 0.16 & 0.09 \\ 0 & 0.9 & 1.8 & -2.6 & 0.3 & -0.2 & 0.21 & -0.31 \\ \hline 1.9 & 0 & 4.9 & 3.7 & 0.3 & 0.5 & 0.21 & 0.11 \\ 9 & 2 & 0.45 & 1.8 &-0.2 & 0.1 & 0.27 & -0.29 \end{array}\displaystyle \right ], \\ & \begin{bmatrix} E_{1} & E_{\hbar _{11}} & C_{1} \\ E_{2} & E_{\hbar _{12}} & C_{2} \end{bmatrix}= \left [ \textstyle\begin{array}{cc|cc|cc} 0.1 & 0 & -0.5 & 1.5 & 1 & 0 \\ \hline 0.1 & 0.1 & 0.9 & 0 & 1 & 0 \end{array}\displaystyle \right ]. \end{aligned}$$

Furthermore, the activation functions are taken as

$$\begin{aligned} p(m(t))= \begin{bmatrix} 0.5m_{1}(t)-\tanh (0.3m_{1}(t))+0.2m_{2}(t) \\ 0.95m_{2}(t)-\tanh (0.75m_{1}(t)) \end{bmatrix}. \end{aligned}$$

Mode-dependent time-varying delays are chosen as:

$$\begin{aligned} &\hbar _{11}(t) = 1.0 +0.2\sin (t), \\ &\hbar _{12}(t) = 0.5 +0.3\cos (t)\quad \text{with } \mu _{k}=0.35, \end{aligned}$$

and the delay upper bounds are taken as \(\bar{\hbar }_{1}=1.25\), \(\bar{\hbar }_{2}=1.75\). Let \(\gamma =5.5\) and tune the parameters for \(H_{\infty }\) filtering as \(\exists _{0}=0\), \(\exists _{1}=-1\), \(\exists _{2}=0\), and \(\exists _{3}=\gamma ^{2}\). By solving the matrix inequalities (6)–(9) and (19)–(20), a feasible solution can be found. For space considerations, only part of the solution is presented:

$$\begin{aligned} & \begin{bmatrix} \mathbb{A}_{f1} & \mathbb{A}_{f2} \\ \mathbb{A}_{\hbar f1}& \mathbb{A}_{\hbar f2} \end{bmatrix}= \left [ \textstyle\begin{array}{cc|cc} 12.1390 & -9.4334 & 0.2188 & 12.6660 \\ -7.2454 & 10.2466 & -0.2379 & -8.4686 \\ \hline -0.0376 & 0.0388 & 0 & -0.0240 \\ 0.0740 & -0.0765 & 0.0003 & 0.0453 \end{array}\displaystyle \right ], \\ & \begin{bmatrix} \mathbb{B}_{f1} & \mathbb{B}_{f2} \\ \mathbb{C}_{f1} & \mathbb{C}_{f2} \end{bmatrix}= \left [ \textstyle\begin{array}{c|c} -1.6474 & -1.8340 \\ 1.0451 & 1.2894 \\ \hline -0.0009 & -0.0018 \\ 0.0009 & 0.0033 \end{array}\displaystyle \right ], \\ & \begin{bmatrix} \mathbb{C}_{\hbar f1} & \mathbb{C}_{\hbar f2} \\ \mathbb{D}_{f1} & \mathbb{D}_{f2} \end{bmatrix}= \left [ \textstyle\begin{array}{cc|cc} -0.2483 & 0.5730 & -0.0024 & -0.1711 \\ \hline -0.0040 & 0.0041 & 0 & -0.0022 \end{array}\displaystyle \right ]. \end{aligned}$$
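The delay signals of this example can be checked against the bounds in (5) numerically. The sketch below reads 0.35 as the derivative bound \(\mu _{k}\) and 1.25 as the prescribed upper bound \(\bar{\hbar }_{1}\) (an interpretation of the text):

```python
import numpy as np

# Sample the delay signals on a fine grid and test Assumption 2:
# 0 <= h(t) <= hbar and h'(t) <= mu.
t = np.linspace(0.0, 20.0, 200001)
h11 = 1.0 + 0.2 * np.sin(t)      # delay in mode p = 1
h12 = 0.5 + 0.3 * np.cos(t)      # delay in mode p = 2
dh11 = 0.2 * np.cos(t)           # exact derivative of h11
dh12 = -0.3 * np.sin(t)          # exact derivative of h12
mu, hbar1 = 0.35, 1.25           # bounds assumed from the example
```

Both signals stay positive, below their upper bound, and with derivatives under \(\mu _{k}\), so Assumption 2 is met.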

It follows from Theorem 2 that the augmented system (3) achieves \(H_{\infty }\) performance with disturbance attenuation level γ. With the initial conditions \(\Psi (t)=[5,-8]^{T}\) and \(\Psi _{f}(t)=[-5,8]^{T}\), Figs. 2 and 3 show the trajectories of the two semi-Markov processes \(r(t)\) and \(\sigma (t)\), respectively, while Figs. 4 and 5 present the time responses of the original and filter states, which further confirms that the filtering error system (3) is stochastically stable.

Figure 2: Trajectory of the semi-Markov process \(r(t)\)

Figure 3: Trajectory of the semi-Markov process \(\sigma (t)\)

Figure 4: Trajectory of \(m_{1}(t)\) with filter state \(m_{f1}(t)\)

Figure 5: Trajectory of \(m_{2}(t)\) with filter state \(m_{f2}(t)\)

On the other hand, it is noted that a delay-free neural network filtering problem was studied in [36]. To show the superiority of our proposed algorithm, we make a comparison: for several choices of the time delay, we compute the maximum allowable upper bound of the delay. The comparison is shown in Table 1, where \(\bar{\hbar } = \bar{\hbar }_{k}\), \(k=1,2\). Furthermore, computational complexity, one of the main issues for this class of problems, is also reported in Table 1, which gives further evidence of the superiority of the proposed neural model.

Table 1 Maximum allowable upper bound delay
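A "maximum allowable upper bound delay" entry such as those in Table 1 is typically obtained by bisecting over the delay bound with a feasibility oracle. In the sketch below the oracle is the classical analytic criterion for \(\dot{x}(t)=-x(t-h)\) (stable iff \(h<\pi /2\)), used as a hypothetical stand-in for the LMI feasibility check of Theorem 2.

```python
# Hedged sketch: bisection search for the largest delay bound accepted by a
# feasibility oracle. The oracle here (h < pi/2 for x' = -x(t-h)) is a
# stand-in for solving the paper's LMIs at each candidate h.
import math

def max_allowable_delay(feasible, lo=0.0, hi=10.0, tol=1e-6):
    """Largest h in [lo, hi] with feasible(h) True, assuming monotonicity."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

h_max = max_allowable_delay(lambda h: h < math.pi / 2)
print(round(h_max, 4))  # approaches pi/2
```

With the LMI oracle substituted in, each call evaluates one SDP, so the number of solver calls grows only logarithmically in the required precision.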

Secondly, we select \(\gamma =3.5\) and tune the parameters for dissipative filtering as \(\exists _{0}=0\), \(\exists _{1}=-1\), \(\exists _{2}=1\), and \(\exists _{3}=\gamma \). Then, by solving the matrix inequalities (6)–(9) and (19)–(20), a feasible solution can be found. For space considerations, only part of the solution is presented:

$$\begin{aligned} & \begin{bmatrix} \mathbb{A}_{f1} & \mathbb{A}_{f2} \\ \mathbb{A}_{\hbar f1}& \mathbb{A}_{\hbar f2} \end{bmatrix}= \left [ \textstyle\begin{array}{cc|cc} 12.1390 & -9.4334 & 0.2192 & 12.6660 \\ -7.2454 & 10.2466 & -0.2379 & -8.4686 \\ \hline -0.0376 & 0.0388 & 0 & -0.0240 \\ 0.0740 & -0.0765 & 0.0003 & 0.0453 \end{array}\displaystyle \right ], \\ & \begin{bmatrix} \mathbb{B}_{f1} & \mathbb{B}_{f2} \\ \mathbb{C}_{f1} & \mathbb{C}_{f2} \end{bmatrix}= \left [ \textstyle\begin{array}{c|c} -1.6474 & -1.8340 \\ 1.0451 & 1.2894 \\ \hline -0.0009 & -0.0018 \\ 0.0009 & 0.0033 \end{array}\displaystyle \right ], \\ & \begin{bmatrix} \mathbb{C}_{\hbar f1} & \mathbb{C}_{\hbar f2} \\ \mathbb{D}_{f1} & \mathbb{D}_{f2} \end{bmatrix}= \left [ \textstyle\begin{array}{cc|cc} -0.2483 & 0.5730 & -0.0024 & -0.1711 \\ \hline -0.0040 & 0.0041 & 0 & -0.0022 \end{array}\displaystyle \right ]. \end{aligned}$$

Figure 6 depicts the estimation error, while Fig. 7 presents the output response of the neural network, showing that the filtering error eventually approaches zero.
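The two experiments differ only in the extended-dissipativity weights, so the choices quoted above can be collected in one lookup. The sketch writes the weights as plain tuples `(e0, e1, e2, e3)` in place of the paper's symbols; the values are taken directly from the text.

```python
# Hedged sketch: extended-dissipativity weight selection for the two
# experiments reported above; gamma is the attenuation level.
def extended_dissipativity_weights(performance, gamma):
    table = {
        "Hinf":        (0, -1, 0, gamma ** 2),  # H-infinity filtering
        "dissipative": (0, -1, 1, gamma),       # strict dissipativity
    }
    return table[performance]

print(extended_dissipativity_weights("Hinf", 5.5))         # (0, -1, 0, 30.25)
print(extended_dissipativity_weights("dissipative", 3.5))  # (0, -1, 1, 3.5)
```

Other special cases of extended dissipativity (e.g. \(L_{2}\)–\(L_{\infty }\), passivity) follow the same pattern but are omitted here since the text does not state their weights.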

Figure 6: Trajectory of the estimation error

Figure 7: Trajectory of the output of the neural plant

Conclusions

In this article, we have investigated the extended dissipative filtering problem for stochastic semi-Markovian jump neural networks with a delayed filter, where the time-varying delay is mode-dependent. New mode-dependent sufficient conditions have been derived under which the filtering error system is stochastically stable with a prescribed disturbance attenuation level. The conditions are expressed as linear matrix inequalities (LMIs), which can be solved with standard software packages. A numerical simulation was elaborated to elucidate the feasibility of the proposed design methodology. An extension of this work to practical applications with asynchronous filtering inputs and restricted prediction control intervals deserves further investigation. On the other hand, the computational complexity of the proposed algorithm remains relatively high owing to the number of slack matrices involved. Furthermore, we also expect to study T–S fuzzy systems with actuator saturation and fault isolation delay by combining the methods proposed in this paper with those in [37].

Availability of data and materials

Data used to support the findings of this work are available from the corresponding author upon request.

Abbreviations

DNN:

Delayed Neural Network

PDF:

Probability Density Function

LMIs:

Linear Matrix Inequalities

SMPs:

Semi-Markovian Processes

CDFs:

Cumulative Distribution Functions

References

  1. Arslan, E., Vadivel, R., Ali, M.S., Arik, S.: Event-triggered \(H_{\infty }\) filtering for delayed neural networks via sampled-data. Neural Netw. 91, 11–21 (2017)


  2. Chen, G., Chen, Y., Zeng, H.B.: Event-triggered \(H_{\infty }\) filter design for sampled-data systems with quantization. ISA Trans. 101, 170–176 (2020)


  3. Kobayashi, M.: Singularities of three-layered complex-valued neural networks with split activation function. IEEE Trans. Neural Netw. Learn. Syst. 29(5), 1900–1907 (2018)


  4. Jankowski, S., Lozowski, A., Zurada, J.M.: Complex-valued multistate neural associative memory. IEEE Trans. Neural Netw. 7(6), 1491–1496 (1996)


  5. Wang, Y., Qing, D.: Model predictive control of nonlinear system based on GA-RBP neural network and improved gradient descent method. Complexity 2021, Article ID 6622149 (2021). https://doi.org/10.1155/2021/6622149


  6. Ding, K., Zhu, Q., Yang, X.: Intermittent estimator-based mixed passive and \(H_{\infty }\) control for high-speed train with actuator stochastic fault. IEEE Trans. Cybern. (2021). https://doi.org/10.1109/TCYB.2021.3079437


  7. Ding, K., Zhu, Q.: Fuzzy intermittent extended dissipative control for delayed distributed parameter systems with stochastic disturbance: a spatial point sampling approach. IEEE Trans. Fuzzy Syst. (2021). https://doi.org/10.1109/TFUZZ.2021.3065524


  8. Ding, K., Zhu, Q.: Extended dissipative anti-disturbance control for delayed switched singular semi-Markovian jump systems with multi-disturbance via disturbance observer. Automatica 128, 109556 (2021)


  9. Hirose, A.: Recent progress in applications of complex-valued neural networks. In: Proceedings of the 10th International Conference on Artificial Intelligence and Soft Computing, pp. 42–46 (2010)


  10. Nitta, T.: Solving the XOR problem and the detection of symmetry using a single complex-valued neuron. Neural Netw. 16, 1101–1105 (2003)


  11. Sunaga, Y., Natsuaki, R., Hirose, A.: Land form classification and similar land-shape discovery by using complex-valued convolutional neural networks. IEEE Trans. Neural Netw. Learn. Syst. 57(10), 7907–7917 (2019)


  12. Gong, W., Liang, J., Cao, J.: Matrix measure method for global exponential stability of complex-valued recurrent neural networks with time-varying delays. Neural Netw. 70, 81–89 (2015)


  13. Sriraman, R., Cao, Y., Samidurai, R.: Global asymptotic stability of stochastic complex-valued neural networks with probabilistic time-varying delays. Math. Comput. Simul. 171, 103–118 (2020)


  14. Liu, X., Li, Z.: Finite time anti-synchronization of complex-valued neural networks with bounded asynchronous time-varying delays. Neurocomputing 387, 129–138 (2020)


  15. Zhang, D., Shi, P., Wang, Q.G., Yu, L.: Analysis and synthesis of networked control systems: a survey of recent advances and challenges. ISA Trans. 66, 376–392 (2017)


  16. Arik, S.: An improved robust stability result for uncertain neural networks with multiple time delays. Neural Netw. 54, 1–10 (2014)


  17. Venzke, A., Chatzivasileiadis, S.: Verification of neural network behaviour: formal guarantees for power system applications. IEEE Trans. Smart Grid 12(1), 383–397 (2021)


  18. Kwon, O.M., Park, M.J., Ju, H., Lee, S.M., Cha, E.J.: New and improved results on stability of static neural networks with interval time-varying delays. Appl. Math. Comput. 239, 1280–1285 (2014)


  19. Zhu, Q., Cao, J.: Stability analysis for stochastic neural networks of neutral type with both Markovian jump parameters and mixed time delays. Neurocomputing 73(13–15), 2671–2680 (2010)


  20. Kong, F., Zhu, Q., Huang, T.: Fixed-time stability for discontinuous uncertain inertial neural networks with time-varying delays. IEEE Trans. Syst. Man Cybern. Syst. (2021). https://doi.org/10.1109/TSMC.2021.3096261


  21. Kong, F., Zhu, Q., Huang, T.: New fixed-time stability lemmas and applications to the discontinuous fuzzy inertial neural networks. IEEE Trans. Fuzzy Syst. (2020). https://doi.org/10.1109/TFUZZ.2020.3026030


  22. Park, M.J., Kwon, O.M., Cha, E.J.: On stability analysis for generalized neural networks with time-varying delays. Math. Probl. Eng. 7, Article ID 387805 (2015)


  23. Park, P., Ko, J.W., Jeong, C.: Reciprocally convex approach to stability of systems with time-varying delays. Automatica 47(1), 235–238 (2011)


  24. Mahmoud, M.S., Shi, P.: Robust Kalman filtering for continuous time-lag systems with Markovian jump parameters. IEEE Trans. Circuits Syst. I, Fundam. Theory Appl. 50(1), 98–105 (2003)


  25. Tan, M.H., Fei, J., Ni, J.: Robust stability and \(H_{\infty }\) filter design for neutral stochastic neural networks with parameter uncertainties and time-varying delay. Int. J. Mach. Learn. Cybern. 8, 511–524 (2017)


  26. Syed, M.A., Saravanakumar, R., Zhu, Q.: Less conservative delay-dependent \(H_{\infty }\) control of uncertain neural networks with discrete interval and distributed time-varying delays. Neurocomputing 166, 84–95 (2015)


  27. Hu, W., Zhu, Q., Karimi, H.R.: Some improved Razumikhin stability criteria for impulsive stochastic delay differential systems. IEEE Trans. Autom. Control 64(12), 5207–5213 (2019). https://doi.org/10.1109/TAC.2019.2911182


  28. Wang, H., Zhu, Q.: Global stabilization of a class of stochastic nonlinear time-delay systems with SISS inverse dynamics. IEEE Trans. Autom. Control 65(10), 4448–4455 (2020). https://doi.org/10.1109/TAC.2020.3005149


  29. Ahn, C.K.: Neural network \(H_{\infty }\) chaos synchronization. Nonlinear Dyn. 60(3), 295–302 (2009)


  30. Kao, Y., Xie, J., Wang, C., Karimi, H.R.: A sliding mode approach to \(H_{\infty }\) non-fragile observer-based control design for uncertain Markovian neutral-type stochastic systems. Automatica 52, 218–226 (2015)


  31. Yi, X., Li, G., Liu, Y., Fang, F.: Event-triggered \(H_{\infty }\) filtering for nonlinear networked control systems via T–S fuzzy model approach. Neurocomputing 448, 344–352 (2021)


  32. Ren, W., Hou, N., Wang, Q., Lu, Y., Liu, X.: Non-fragile \(H_{\infty }\) filtering for nonlinear systems with randomly occurring gain variations and channel fadings. Neurocomputing 156, 176–185 (2015)


  33. Hou, Z., Luo, J., Shi, P.: Stochastic stability of linear systems with semi-Markovian jump parameters. ANZIAM J. 46(3), 331–340 (2005)


  34. Li, N., Hu, J., Hu, J., Li, L.: Exponential state estimation for delayed recurrent neural networks with sampled-data. Nonlinear Dyn. 69, 555–564 (2012)


  35. Aslam, M.S., Zhang, B., Zhang, Y., Zhang, Z.: Extended dissipative filter design for T–S fuzzy systems with multiple time delays. ISA Trans. 80, 22–34 (2018)


  36. Nagamani, G., Radhika, T., Zhu, Q.: An improved result on dissipativity and passivity analysis of Markovian jump stochastic neural networks with two delay components. IEEE Trans. Neural Netw. Learn. Syst. 28(12), 3018–3031 (2016)


  37. Liu, C., Jiang, B., Zhang, K., Patton, R.J.: Distributed fault-tolerant consensus tracking control of multi-agent systems under fixed and switching topologies. IEEE Trans. Circuits Syst. I, Regul. Pap. 68(4), 1646–1658 (2021)



Acknowledgements

The authors thank their universities.

Authors’ information

Muhammad Shamrooz Aslam is working at the School of Electrical and Information Engineering, Guangxi University of Science and Technology, Liuzhou 545006, China. Prof. Qianmu Li and Miss Jun Hou are working at the School of Cyber Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, and the School of Social Science, Nanjing Vocational University of Industry Technology, Nanjing, 210046, China, respectively. Mr. Hua Qiulong is working at the Intelligent Manufacturing Department, Wuyi University, Jiangmen, 529020, China.

Funding

This work was supported in part by “Research on the Key Technology of Endogenous Security Switches” (2020YFB1804604) of the National Key R&D Program “New Network Equipment Based on Independent Programmable Chips” (2020YFB1804600), the 2020 Industrial Internet Innovation and Development Project from Ministry of Industry and Information Technology of China, the Fundamental Research Fund for the Central Universities (30918012204, 30920041112), the 2019 Industrial Internet Innovation and Development Project from Ministry of Industry and Information Technology of China. The work is also supported by starting PhD fund No. 20z14.

Author information

Affiliations

Authors

Contributions

MSA has done the writing, reviewing, and editing. Professor QL helped in the revision of the manuscript and Ms. JH took part in the drafting of this article. Mr. HQ validated the simulation results and also helped with the theoretical results of this article. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Qianmu Li.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflicts of interest.

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Aslam, M.S., Li, Q., Hou, J. et al. Mode-dependent delays for dissipative filtering of stochastic semi-Markovian jump for neural networks. Adv Cont Discr Mod 2022, 21 (2022). https://doi.org/10.1186/s13662-022-03694-9


Keywords

  • Semi-Markov jump systems
  • Mode-dependent delays
  • Neural networks